I think this is a really neat-looking item.
Light-up base, removable rings, extremely nerdy…it’s a wonderful thing. The description says the rings aren’t meant to be worn, but you know anyone who gets their hands on this is going to try. I wonder if we’ll hear about any incidents where someone misjudged the size and ended up getting a Blue Lantern ring cut off their finger?
But at $250 I don’t see myself picking up one of these just because it looks neat. Not that I’m going to be down on anyone who does — I have two Swamp Thing statues, need I remind you — but for someone whose favorite character is Green Lantern, I can see where this might be tempting.
I believe it was Tom Spurgeon who wondered in an entry linking to an End of Civilization post of mine about the sales feasibility for high end, limited-use novelty items like this in our currently suffering economy. And that’s certainly something worth considering…we’ve never done a whole lot of business in statues and props, but I know stores that do, and I’m wondering how their sales are on them. I wonder how orders on items like this are in general. At a time when comic readers are looking for reasons to cut their funnybook expenses, you would think that high end merchandise would take a hit, too, but I still see them on Diamond’s sales charts, and they’re still showing up in the catalogs.
‘Course, it’s not as if these were ever enormous sellers, and it’s not like there are a large number of consumers buying every prop that turns up in the catalog. (I do have a fellow who orders a Marvel bust or two out of every catalog, but that’s probably more the exception than the rule.) An economic downturn would cause anyone buying lots of these to cut back a bit, I imagine, but probably wouldn’t affect the guy who’s been saving up for that one high-end goodie he has to have.
I do know merchandise in general has slowed down some. We’ve cut orders on DC Direct and Marvel Select figures, and cut McFarlane Toys entirely (selling only the shortpacked female figure, time and time again, was a bit of a discouragement). In this case, it may be more due to overproduction (“oh, look, another Superman figure”) and disinterest (“hey, look at all these Spawn characters I’ve never heard of”) than anything relating to current economic issues.
You know, my intention for this post was just to say “look at this, I think it’s neat,” but I ended up running off at the mouth anyway. Sorry about that. But so long as I have your attention…how have your comics merchandise-buying habits been lately? Have you bought any real high-end statues or props recently? Have you cut down your action figure habit? Is it the economy encouraging your decisions, or have you simply had enough? Please let me know in the comments section.
(You know, doing a post on New Comics Day asking people to think about the amount of money they’re spending on comics stuff isn’t the smartest thing I’ve ever done…!)
TITLE: Prove or disprove the quasi-concavity and quasi-convexity of $f(x,y)=xa^{y-1}+b$
QUESTION [0 upvotes]: Consider $f(x,y)=xa^{y-1}+b$ for $(x,y)\in\mathbb{R}^2_{++}$, $a\in(0,1)$, and $b\in\mathbb{R}$. I am trying to evaluate the quasi-concavity/quasi-convexity of $f$.
My attempt: To see quasi-concavity, consider $g(x,y)=\ln(x)+(y-1)\ln(a)$. The Bordered Hessian for $g$ is
\begin{pmatrix}
0&\frac{1}{x}&\ln(a)\\
\frac{1}{x}&-\frac{1}{x^2}& 0\\
\ln(a)& 0&0
\end{pmatrix}
The determinant of the 1st order Bordered Hessian is $-\frac{1}{x^2}<0$, and the determinant of the Bordered Hessian itself is $(\frac{\ln(a)}{x})^2>0$. This alternation of signs implies that $g$ is quasi-concave. Consider $h(x)=e^x+b$, which clearly is increasing. We have that $f(x,y)=h(g(x,y))$, so as $g$ is quasi-concave and $h$ an increasing transformation, $f$ is quasi-concave as well.
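A quick symbolic check of these two determinants (a sketch using sympy; the variable names mirror the notation above):

    import sympy as sp

    # Bordered Hessian of g(x, y) = ln(x) + (y - 1) ln(a), as set up above.
    x, y, a = sp.symbols('x y a', positive=True)
    g = sp.log(x) + (y - 1) * sp.log(a)
    gx, gy = sp.diff(g, x), sp.diff(g, y)
    H = sp.Matrix([
        [0,  gx,               gy              ],
        [gx, sp.diff(g, x, 2), sp.diff(g, x, y)],
        [gy, sp.diff(g, x, y), sp.diff(g, y, 2)],
    ])
    print(sp.simplify(H[:2, :2].det()))  # -1/x**2, the first-order bordered minor
    print(sp.simplify(H.det()))          # log(a)**2/x**2, the full bordered Hessian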
Where I'm stuck at is figuring out if $f(x,y)$ is quasi-convex also. I've tried different transformations to use a similar technique to the one I used showing quasi-concavity, but without much luck. Any help is appreciated.
REPLY [0 votes]: The function is not quasiconvex. For quasiconvexity, the sublevel set $\left\{ (x,y) : f(x,y) \leq \alpha\right\}$ needs to be convex for any $\alpha \in \mathbb{R}$. Take $\alpha = 1+b$, then you need $\{(x,y) : x \leq c^{y-1}\}$ to be convex (with $c = 1/a > 1$). This set is clearly not convex, because $(x,y)=(1,1)$ and $(x,y)=(c^{2},3)$ are in the sublevel set, but the midpoint $(x,y)=((1+c^2)/2, 2)$ is not.
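A minimal numeric check of this counterexample, with the illustrative choices $a=1/2$ and $b=0$ (these particular values are not from the post):

    # The sublevel set {f <= 1 + b} fails to be convex for a = 1/2, b = 0.
    a, b = 0.5, 0.0
    c = 1 / a                                # c = 1/a = 2 > 1
    f = lambda x, y: x * a ** (y - 1) + b
    alpha = 1 + b
    p1, p2 = (1.0, 1.0), (c ** 2, 3.0)       # the two endpoints from the answer
    mid = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    print(f(*p1) <= alpha, f(*p2) <= alpha)  # True True: both endpoints are in the set
    print(f(*mid) <= alpha)                  # False: their midpoint is not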
- Pattern Type: Print
- Waist: Low Waist
- Item Type: Bikinis Set
- Support Type: Wire Free
- Model Number: 11.15
- Gender: WOMEN
- Material: Nylon
- With Pad: Yes
Women Leopard Print Push-Up Padded Bra Beach Bikini Set Swimsuit Swimwear
Item specifics
Gender: Women
Season: Summer
Occasion: Daily, Swimming pool, Sea
Material: Polyester
Decoration: None
Clothing Length: Regular
Pattern Type: Leopard Print
Sleeve Style: Regular
Style: Sexy, Casual
kazzar
Thursday, August 21, 2008 by Ann
Kazaa owner complains to google (#5) -- chilling effects clearinghouse download music:kazaa is committed to making music downloads great and more accessible to all music fans. Downloads free kazaa music is a p2p file sharing system which allows trading of digital files of many formats. It contains absolutely no spyware, adware or other. Com, kazaa media desktop and kazaa plus are products of sharman networks. Download free music now.
Oldversion.
Compare kazaa 2.
1 million downloads! ( may 1st, 2004 ) we passed the 1, 000, 000th download today! kazaa download manager - download essential extras for kazaa! the kazaa media desktop is a second-generation peer-to-peer file-sharing service with which you can search and download media files from other kazaa users. Kazaa music will let you download fast and securely all kind of media files over the. Download unlimited music, ipod compatible, best p2p for unlimited music and videos.
Kazaa / kazzaa is most commonly used to share and.
Your license is hereby terminated if you have ever used kazaa plus, at any time, to: a) download copyright protected files without the.
Kazaa, morpheus a collection of stylish kazaa skins that are compatible with kazaa and kazaa lite. Select the type of file you are looking for.
Kazaa web search kazaa (sometimes spelled kazzaa) is a free, distributed file sharing service that uses peer-to-peer (p2p) network technology. This program saves time in searching and downloading the separate kazaa skins and looking through. Kazaa 3 kazaa is a media community, where the community members can share their media files with each other.
Kazaa, free kazaa a new version of kazaa is available and you must upgrade. Kazaa lite resurrection download ( resurection ) cnets comprehensive kazaa 2. Kazaa definition of kazaa in the free online encyclopedia.
Kazaa lite - download kazaa is an extremely popular p2p system used for music, freeware and other file sharing over the internet.
Kazaa lite resurrection is a continuation of the notorious kazaa lite k++ client. Kazaa - the guide notice of kazaa plus license termination. Please read the following software license agreement (this “ agreement”) carefully how does kazaa work? what is peer-to-peer? sender information: sharman networks, ltd.
2008 Aug 21 15:02
Get instant and unlimited access to every song ever made! only $0. Kazaa kazaa lite free download.
Note: if you use kazaa, clicking the close button does not quit the program. Search & find mp3, movies, software & more.
Kazaa encyclopedia article about kazaa. 94mo downloading kazaa will 100% deliver the best p2p experience in the. Kazaa the guide teaches you how to use kazaa and explains the softwares key features and functionality. 94mo the anticipated next version kazaa releases this christmas 2008.
Kazaa lite is a clean version of kazaa media desktop. Kazaa free download: kazaa is a free file sharing service that uses peer-to-peer (p2p.
2008 Aug 21 15:51
Kazaa continues to run in the background using network resources even if all windows are closed. Kazaa lite resurrection download ( resurection ) official kazaa lite resurrection download and info page. Kazaa lite plus is an open source file sharing client that searches the fast track network which kazaa and grokster run on, and also searches gnutella2 network, so there are a lot.
Kazaa get instant and unlimited access to every song ever made! only $0. 94mo kazaa - the guide come to cnet for the latest news stories and articles, trusted editor and user reviews, and software downloads related to kazaa. Many people have heard about the controversy surrounding downloads.
This means that individual users connect to each other directly, without need for a central point of management.
2008 Aug 21 16:37
Kazaa : download kazaa kazaa. 94mo kazaa kazaa is a program that allows users to share media files.
0 prices, user ratings, specs and. This is cleaned version of kazaa media desktop.
Download music now get unlimited access to millions of songs. We are sorry for the inconvenience: ;en-us;q318921 kazaa kazaa download, free kazaa download, lite, gold. Information about kazaa in the columbia encyclopedia, computer desktop encyclopedia, computing dictionary. Kazaa kazaas p2p file sharing network allows users to search for and download audio, video, image and text files using one of three interfaces: the kazaa media desktop peer-to-peer. Faster downloads as files can be simultaneously downloaded from multiple sources" kazaa lite - download and install kazza.
2008 Aug 21 17:39
Unlimited downloads, burn cds & dvds. Review, screenshots, testing and recommendations about kazaa lite. Would you like us to give you the ability to download all of the music, movies, tv-shows. The company that developed it was sued in a copyright infringement case kazaa download kazaa 2007, ipod compatible downloads, kazaa is the best p2p for unlimited music and videos. Description of kazaa lite kazaa lite (sometimes called k-lite) is a peer-to-peer file sharing application.
2008 Aug 21 19:09
Kazaa - kazzaa in computer networking end user licence agreement, your privacy copyright: sharman networks ltd does not condone activities and actions that breach the rights of copyright owners.
94mo download kazaa, kazaa 3.
5 download.
94mo kazaa download - kazaa free download get instant and unlimited access to every song ever made! only $0. Welcome to www-kazaa.
2008 Aug 21 20:22
Net- your unlimited music download source! get instant and unlimited access to every song ever made! only $0. The search view will open. Com is an archive of old versions of various programs. Com now will give web users the power to legally listen on-demand. Click the "download now" button to download the new version and follow the steps below.
2008 Aug 21 21:23
Kazaa-skins.
Net- your unlimited music download source! morpheus, grokster, limewire, peer to peer, bearshare, shareaza, bittorrent, kazaa, kazaa lite resurrection, kazaa lite, peer, download, file sharing programs, download, file. Kazaa lite - download kazaa download manager provides free addon software downloads for p2p programs such as kazaa and limewire. Download kazaa and share files with millions of kazaa users from around the globe! kazaa - guide kazaa uses peer-to-peer technology. Many people have heard about the controversy surrounding kazaa lite.
Kazaa lite kazaa lite is a clean version of kazaa media desktop.
Howstuffworks "how kazaa works" is the kazaa p2p software spyware.
Kazaa lite, kazaa media, kazaa media. 94mo get instant and unlimited access to every song ever made! only $0. Enter the words you are looking for in the search for: box on the left of the screen.
2008 Aug 21 22:16
X : see bottom of page] executive summary kazaa - download kazaa for free! get instant and unlimited access to every song ever made! only $0. Many people. Sharman networks is a proactive, virtual, global technology and publishing company, focused on. Kazaa - the guide kazaa: discover the specifications and download kazaa. Update w.
Kazaa free kazaa download, kazaa 3. 94mo kazaa get instant and unlimited access to every song ever made! only $0. Kazaa is committed to making music downloads easy and more accessible to all music fans.
2008 Aug 21 23:15
94mo free kazaa skins. Kazaa and kazaa lite use a decentralized p2p model based on the.
Exe has encountered a problem and needs to close. Kazaa lite download kazaa free! access over 15 billion files - unlimited downloads of music, movies and games - free! kazaa is the fastest and most used file sharing (p2p) software on the. 94mo kazaa owner complains to google (#5) -- chilling effects clearinghouse kazaa lite is a p2p file sharing system which allows trading of digital files of many formats. With unlimited file sharing comes the big question: should users be granted unrestricted -- and free -- access to copyrighted movies, music and games? find out how kazaa has. Kazaa - wikipedia, the free encyclopedia how to do a basic p2p search.
2008 Aug 22 00:45
Com - download cool free skins for kazaa download cool free skins for kazaa media desktop. Kazaa version 1.
5 download download music:kazaa is committed to making music downloads great and more accessible to all music fans. Kazaa kazaa.
Kazaa owner complains to google (#5) -- chilling effects clearinghouse end user license agreements // home: instafinder end user license agreement.
Kazaa searching to start searching for files, click on the search view button (located in the top tool bar between theater and traffic.
Kazaa lite is a new fasttrack client. Kazaa notice of kazaa plus license termination. Kazaa, free kazaa get instant and unlimited access to every song ever made! only $0.
Eggplant Parmigiana
Ingredients
Serves: 8
- 3 eggplant, peeled and thinly sliced
- 2 eggs, beaten
- 450 g Italian seasoned bread crumbs
- 1420 ml spaghetti sauce, divided
- 454 g mozzarella cheese, shredded and divided
- 40 g grated Parmesan cheese, divided
- 0.4 g dried basil
Directions
Prep:25min › Cook:35min › Ready in:1hr
- Preheat oven to 180 C. Combine the breadcrumbs, basil, oregano and parsley. Dip eggplant slices in egg, then in breadcrumb mixture. Place in a single layer on a baking tray. Bake for 5 minutes on each side.
- Spread pasta sauce to cover the bottom of a 23x33cm baking dish. Place a layer of eggplant slices in the sauce. Sprinkle with mozzarella and parmesan. Repeat layers with remaining ingredients, ending with the cheeses.
- Sprinkle basil on top. Bake for 35 minutes, or until golden brown.
TITLE: Showing that $K^n\to (K^n)^* , v\mapsto -\bullet v$ is an isomorphism
QUESTION [1 upvotes]: for $v\in K^n$, the dot product defines a linear transformation
$-\bullet v: K^n\to K, w\mapsto w\bullet v$. Let $e_i$ be the $i$-th
basis vector of $K^n$.
Show that the map into the dual space $K^n\to (K^n)^* , v\mapsto
-\bullet v$ is an isomorphism.
I have to show that the function is linear and bijective. The linearity is given by the linearity of the dot product. How can I show the bijectivity?
REPLY [1 votes]: Since we are in a finite dimensional vector space, surjectivity implies injectivity. So let us just prove surjectivity.
Let $f \in (K^n)^*$, and let $e_1, ... , e_n$ be the standard basis for $K^n$. Denote
$v_i = f(e_i)$
Now let $v=(v_1,v_2,...v_n)$. We have
$e_i \bullet v = v_i = f(e_i) $
so the linear functionals $f$ and $- \bullet v$ agree on a basis of $K^n$, and are therefore identical.
REPLY [0 votes]: As a counterpart to Richard Jensen's answer, you can also prove injectivity (which implies surjectivity by the rank-nullity theorem). We just need to show that the map $v\mapsto - \bullet v$ has nullity $0$; i.e., that it maps any non-zero vector to a non-zero element of $(K^n)^*$.
Indeed, if $v$ is non-zero, say $v_i \ne 0$ for some $i$, then $e_i \bullet v = v_i \ne 0$, so $- \bullet v$ is not the zero map.
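A minimal numeric illustration of the surjectivity recipe from the first answer, over $K=\mathbb R$ with $n=3$ (the particular functional below is an illustrative choice, not from the question):

    import numpy as np

    # Take a functional f, set v = (f(e_1), f(e_2), f(e_3)), and check that the
    # functional w -> w . v agrees with f on a few random test vectors.
    f = lambda w: 2 * w[0] - w[1] + 3 * w[2]
    E = np.eye(3)
    v = np.array([f(E[i]) for i in range(3)])     # v = (2, -1, 3)
    rng = np.random.default_rng(0)
    W = rng.standard_normal((5, 3))               # random test vectors
    print(np.allclose(W @ v, [f(w) for w in W]))  # True: - . v equals f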
If it wasn't for another prisoner who talked me round into keeping my head down and keeping busy doing courses and jobs etc, I don't know how I would have coped. Yes there were women in there who have been in before and to them it was an occupational hazard or whatever you want to call it, and yes they coped pretty well indeed with the regime. But that was their business, not mine.
I missed my children and my family loads and have learnt my lesson ten-fold. I will not be going back, that's for sure! I would also like to say to those people who compare prison to a holiday camp that they should try prison out to see for themselves, because this one certainly wasn't a holiday camp!
TITLE: generalization of positive-definite matrices to matrices over finite fields
QUESTION [4 upvotes]: Let $\mathbb{F}$ be a field, $\mathbb{F}^n$ be the $n$-dimensional vector space over $\mathbb{F}$, and $M_{n\times n}(\mathbb{F})$ be the space of $n\times n$ matrices with entries in $\mathbb{F}$. We want to find a subspace $G_{n\times n}(\mathbb{F})$ of $M_{n\times n}(\mathbb{F})$ such that for any $A\in G_{n\times n}(\mathbb{F})$ and any nonzero $v\in\mathbb{F}^n$, $v Av^{T}\neq 0$. Here $v^T$ is the transpose of the row vector $v$.
Case 1. Let $\mathbb{F}=\mathbb{R}$. Then we can choose $G_{n\times n}(\mathbb{F})$ as the collection of all positive definite matrices.
Case 2. Let $\mathbb{F}=\mathbb{Z}_p$ be the field consisting of $p$ elements, where $p$ is a prime. Then how can we choose $G_{n\times n}(\mathbb{F})$?
REPLY [2 votes]: Supplementing Morgan's answer with the following.
Assume that $Q$ is a quadratic form on $n\ge3$ variables ranging over the field $\Bbb{F}_p,p>2$. By the result recalled by Morgan we know that $Q$ is of the form
$$
Q(v)=\lambda_1v_1^2+\lambda_2v_2^2+\lambda_3v_3^2+\cdots+\lambda_n v_n^2.
$$
I claim that for some vector $(v_1,v_2,v_3)\neq(0,0,0)$ from $\Bbb{F}_p^3$ we have
$Q(v_1,v_2,v_3,0,0,\ldots,0)=0$. If any of $\lambda_1,\lambda_2,\lambda_3$ happens to vanish, this is trivially the case, so we can assume that $\lambda_1\lambda_2\lambda_3\neq0$. Recall that squaring $x\mapsto x^2$ is a 2-1 mapping from $\Bbb{F}_p^*$ to itself. Including $x=0$ we then see that each of the monomials $\lambda_ix^2$ takes $(p+1)/2$ distinct values when $x$ ranges over $\Bbb{F}_p$ - the value $0$ once and the $(p-1)/2$ non-zero values twice each.
Next we claim that the quadratic form $P(v_1,v_2)=\lambda_1v_1^2+\lambda_2v_2^2$ gives a surjective function from $\Bbb{F}_p^2$ to $\Bbb{F}_p$.
To see this let us consider an arbitrary element $y\in\Bbb{F}_p$. Consider the sets
$$
S_1=\{\lambda_1v_1^2\mid v_1\in\Bbb{F}_p\}
$$
and
$$
S_2=\{y-\lambda_2v_2^2\mid v_2\in\Bbb{F}_p\}.
$$
We just saw that both sets $S_1$ and $S_2$ have $(p+1)/2$ elements. Because there are exactly $p$ elements in $\Bbb{F}_p$, and between them the two subsets have $p+1$ elements, the intersection $S_1\cap S_2$ must be non-empty. In other words there exists elements $v_1,v_2\in\Bbb{F}_p$ such that
$$
\lambda_1v_1^2=y-\lambda_2v_2^2\implies y=P(v_1,v_2).
$$
The main claim now follows easily. Let $v_3=1$. Then, by the previous observation we can find $v_1,v_2\in\Bbb{F}_p$ such that $P(v_1,v_2)=-\lambda_3=-\lambda_3\cdot1^2.$ Consequently
$$
Q(v_1,v_2,1)=0.
$$
It is also worth pointing out that when $n=2$ we do have anisotropic quadratic forms over $\Bbb{F}_p$. If we choose $a\in\Bbb{F}_p$ such that $-a$ is not a square in $\Bbb{F}_p$ (there are $(p-1)/2$ such choices for $a$), then the form
$$
Q(v_1,v_2)=v_1^2+av_2^2
$$
vanishes only when $v_1=v_2=0$. This is because $Q(v_1,v_2)=0$ implies that
$-a=(v_1/v_2)^2$. The best known case is that of $p\equiv-1\pmod4$, $a=1$. It has been a part of many a question on our site that the form
$$
Q(x,y)=x^2+y^2
$$
has no non-trivial zeros for these choices of $p$.
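A quick computational check of the two facts used above, for one small prime (the prime $p=7$ and the coefficients below are illustrative choices):

    # (1) lambda_1 x^2 + lambda_2 y^2 is surjective onto F_p when lambda_1 lambda_2 != 0.
    # (2) x^2 + a y^2 has only the trivial zero when -a is a non-square mod p.
    p = 7
    squares = {(x * x) % p for x in range(p)}
    l1, l2 = 3, 5
    values = {(l1 * x * x + l2 * y * y) % p for x in range(p) for y in range(p)}
    print(values == set(range(p)))  # True: the binary form is surjective
    a = next(a for a in range(1, p) if (-a) % p not in squares)
    zeros = [(x, y) for x in range(p) for y in range(p) if (x * x + a * y * y) % p == 0]
    print(zeros == [(0, 0)])        # True: the form x^2 + a y^2 is anisotropic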
The trick in proving that every element of $\Bbb{F}_p$ can be written as a sum of two elements coming from subsets $A,B\subset\Bbb{F}_p$ such that $|A|+|B|>p$ has been seen on our site many times. Particularly when dealing with squares.
It may also be worth pointing out that when $p=2$ a quadratic form is really a linear form, because $x^2=x$ for all $x\in\Bbb{F}_2$. The theory of symmetric bilinear forms over $\Bbb{F}_2$ on the other hand is more interesting, and has many applications in coding theory.
ITV Signed Stories competition winner announced!
Posted 15 Nov 2019
We’re delighted to announce that Tommy Hutchinson is the winner of the ITV ...
ITV SignPost combines industry-leading expertise with cutting-edge technology to provide Access provision for a range of audiences. Operating from a state-of-the-art production studio in Gateshead, SignPost represents the northern hub for Access Services.
Whether it is a one minute short, a corporate video, or a big budget feature film, ITV SignPost specialises in broadcast-quality content, created by an award-winning, mixed ability crew.
With a rich history producing professional video content for television, film and corporate audiences, ITV SignPost represents the northern hub for high quality content production.
Combining industry-leading expertise with cutting-edge technology, SignPost is capable of providing multi-platform Access provision for a range of audiences - a comprehensive suite of services, delivered by a team with diversity in its DNA.
Since opening the Scotch Whisky Experience in 1988, demand has increased and our market has diversified, and we’re proud to now provide our main whisky tour in 20 different languages. The launch of BSL and ASL in 2018 was the fruition of a brilliant but very complex project, and the team from ITV were an absolute pleasure to work with from start to finish.
The latest news, events and blog posts from ITV SignPost - the Northern hub for quality content creation and Access Services.
If you have a question, are interested in working with us, or are looking for a job opportunity, send us an enquiry and we'll get back to you!
Serious session with Ahmedabad Escorts female
My famous Escorts service website, original of all rent me commence myself centrikid blog your wonderful model Ahmedabad Escort, I am not merely beautiful mature girls but also gleaming and extremely sophisticated. I am a beautiful independent escort in Ahmedabad casing the all greater than Ahmedabad's celebrated location. Our positive thing, and I am promise you that the occasion you will time spend with me that will the most loving time and pleasurable days you ever think! I truly like include a festivity and will be the greatest GF practice always.
Enjoy decisive & gorgeous mature girls Escort service
Our agency erotic service by Independent sexy hot female and activist, well English exclamation with persons at any stage; I have magnetism, loveliness, wittiness and brain and would appear wonderful on your section at actions or departure on a business trip, mature punjabi girls happiness or pleasure. I love individual feast, inventive graceful and ready to satisfy, qualified mature punjabi girls in the talent of erotic & physical work and very disobedient perfect time for the circumspect man.
Our agencies are some time offer awfully attractive and mature female Ahmedabad escort. centrikidblog is awfully cheerful mature punjabi girls personality and truly loves to luxury you in several changed behaviour. centrikidblog is a stylish and mature punjabi girls wonderfully open mind girls Punjabi escort buddy for high class clients. She is awfully elastic in oral to any individual and very losing to earth schoolgirl that’s why she is the best and mature escort girl in our agency and our customers is forever favour female companion in Ahmedabad
On the follow for the greatest among the Punjabi escort girls
centrikidblog.in is a delight of angel who has come starting all corner of the nation to satisfy your promising requirements and passion. The extensive choice of deity, which you will locate in our website have mature punjabi girls superior information of all the wants of our clients and execute their teasing skills to fill colours of charm in your mooch romantic existence. We conceitedly assert to be the mainly believable and translucent female escort source in the commerce. Yet, we forever be given positive advice from our clients and it also aids really in a lasting bond amid us.
Punjabi gorgeous girls Escorts agency
As an individual being every personality has some necessary wants which need person happy at all payment. It possibly food, guard or garments or may be a little other. Still, no one in this globe can decline mature punjabi girls the division of sex as a scary needed in person existence. Are we incorrect? You may experience the advice to hire female escort services to please physically. We supply wonderful and royal beauties from Ahmedabad, Delhi and Punjab. To think virtually, there is no myth in enjoy the cosy mature punjabi girls and tepid business of an eye-catching and sexy escort in Ahmedabad.
Our girls offer pay for grate men erotic service by Punjabi girls vanish out all fears at our entrance and action in our phantasmagorical globe enrich with the greatest and mainly beautiful baby in the business. You have arrived to the correct rest for statuette up your fantasies in the mainly pleasant manner probable. Here you are definite ace escort services which will definitely light up your requirements mature punjabi girls and fill your being with an uplifting experience of happiness and satiation. Avail the business of our female and find pleasure in its exact denotation. Our district of knowledge extend further mature punjabi girls into condition of private service in Ahmedabad.
\section{On virtual rectifiability}\label{Section4}
In this section, $n$ is an integer $\geq2$.
\subsection{Proof of Lemma \ref{rect-lemma}}\label{Section4-1}
\begin{lm}\label{relev}
Let $(h_t)_{0\leq t \leq 1}$ be a homotopy of maps $(\R^n, \R^n\setminus \mathbb D^n) \rightarrow (\inj, \iota_0)$ with $h_0(\R^n)=\{\iota_0\}$.
Let $\mathcal G$ denote the space of smooth maps from $\R^n$ to $GL_{n+2}(\R)$ that map $\R^n\setminus\mathbb D^n$ to $I_{n+2}$.
There exists a continuous map $\big(t\in [0,1]\mapsto g_t\in\mathcal G\big)$, such that for any $(t,x)\in [0,1]\times\R^n$, $h_t(x)= g_t(x)\circ h_0(x)$, and such that, for any $x\in \R^n$, $g_0(x) = I_{n+2}$.
\end{lm}
\begin{proof}
Set $g_0(x) = I_{n+2}$ for any $x$.
Endow $\R^{n+2}$ with its canonical Euclidean structure and let $P_{t,x}$ denote the orthogonal complement of $h_t(x)(\R^n)$ in $\R^{n+2}$. Let $\pi_{t,x}$ denote the orthogonal projection on $P_{t,x}$.
Set $f(t, s, x) = \min\limits_{z\in P_{s,x}, ||z|| = 1} || \pi_{t,x}(z)|| $. Since $f$ maps the complement of the compact $[0,1]^2\times \mathbb B^n$ to $1$, it is uniformly continuous. Fix $\delta>0$ so that for any $(t,s,x)$ and $(t', s', x')$ with $|t-t'|+|s-s'| + ||x-x'||< \delta$,
$| f(t', s', x')-f(t, s, x)|<\frac12$,
and for any $j \in \mathbb N$, set $t_j = \min (j\frac{\delta}2, 1)$. We are going to define $g_t$ on each $[t_j,t_{j+1}]$.
Note that $P_{0, x} \cap P_{t, x}^{\perp}=\{0\}$ if and only if $f(t, 0, x) >0$. Since $f(0,0,x)=1$ for any $x$, we have $P_{0, x} \cap P_{t, x}^{\perp}=\{0\}$ for $t\in [0,t_1]$.
For $t\in [0,t_1]$, define $g_t(x)$ by the following formula: $$\forall z=(z_1,z_2,\overline z) \in\R^{n+2}=\R\times\R\times \R^n, g_t(x)(z_1,z_2,\overline z) = \pi_{t,x}(\pi_{0, x}(z)) + h_t(x)(\overline z)$$
Since $P_{0, x} \cap P_{t, x}^{\perp}=\{0\}$, $\pi_{t,x}$ defines an isomorphism from $P_{0,x}$ to $P_{t,x}$. Thus, $g_t(x)$ is an isomorphism.
For $t\in [t_k , t_{k+1}]$ and $x\in\R^n$ define $g_t(x)$ so that $$\forall z=(z_1,z_2,\overline z) \in\R^{n+2}, g_t(x)(z_1,z_2,\overline z) = \pi_{t,x}(\pi_{t_k, x}(\cdots \pi_{t_0, x}(z) \cdots)) + h_t(x)(\overline z).$$
Since $f(t, t_k, x) \geq f(t_k, t_k, x) -\frac 12= \frac12$, the above method proves that $g_t(x)$ is an isomorphism. This defines a family $(g_t)_{0\leq t \leq 1}$ as required by the lemma.
\end{proof}
\begin{proof}[Proof of Lemma \ref{rect-lemma}.]
Let $\tau_e$ be a parallelization such that the class $[\iota(\tau_e,\psi)]$ of Lemma \ref{obstructionlemma}
is zero, so that there exists $(h_t)_{0\leq t \leq 1}$ as in Lemma \ref{relev}
with $h_1= \iota(\tau_e,\psi)$.
Let $(\tilde g_t)_{0\leq t \leq 1}$ be a smooth approximation of the map $(g_t)_{0\leq t \leq 1}$ of Lemma \ref{relev}, such that for any $x\in\R^n$, $\tilde g_0(x)= I_{n+2}$ and $h_1(x)=\iota(\tau_e,\psi)(x) = \tilde g_1(x)\circ h_0(x)$. Assume without loss of generality that $\left( t\in[0,1]\mapsto \tilde g_t\in \mathcal G \right) $ is constant on a neighborhood of $\{0,1\}$.
Take a tubular neighborhood $N$ of $\psi(\R^n)$ and identify $N$ with $\psi(\R^n)\times \mathbb D^2\subset \psi(\R^n)\times \mathbb C$ with coordinates $(\psi(x), r e^{i\theta})$.
For any $y=(\psi(x), r e^{i\theta})\in N$, set $\tau^0_y = (\tau_e)_y\circ\tilde g_{1-r}(x)$. This defines a map $\tau^0\colon N\times \R^{n+2}\rightarrow TN$, which extends to a map $\tau\colon \ambientspace\times \R^{n+2}\rightarrow T\ambientspace$, by setting ${\tau}_y = (\tau_e)_y$ when $y\not\in N$.
This construction ensures that $\iota(\tau, \psi)= \iota_0$, and $\tau$ is a parallelization of $\ambientspace$.\qedhere
\end{proof}
\subsection{Proof of Lemma \ref{threct}}
We use the following Bott periodicity theorem, which is proved in \cite{[Bott2]}.
\begin{theo}[Bott]\label{Bt}
For any $k\geq 0$ and any $N \geq 1$, $$\pi_N(\mathrm{SO}(N+2+k),I_{N+2+k}) = \left\{\begin{array}{lll} 0 & \text{if $N\equiv 2, 4, 5$ or $6 \mod 8$,}\\
\mathbb Z/2\mathbb Z& \text{if $N\equiv 0$ or $1 \mod 8$,}\\
\mathbb Z& \text{if $N\equiv 3$ or $7 \mod 8$.}\end{array}\right.$$
\end{theo}
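For later use, let us record the special cases of Theorem \ref{Bt} (with $k=0$) that appear below:
$$\pi_n(\mathrm{SO}(n+2),I_{n+2}) = \left\{\begin{array}{ll} 0 & \text{if $n\equiv 5 \mod 8$,}\\
\mathbb Z/2\mathbb Z& \text{if $n\equiv 1 \mod 8$,}\\
\mathbb Z& \text{if $n\equiv 3 \mod 4$.}\end{array}\right.$$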
\subsubsection{Case \texorpdfstring{$n\equiv5 \mod 8$}{n = 5 mod 8}}
\begin{cor}\label{cor-un}
Suppose $n\equiv 5\mod 8$, and let $\ambientspace$ be an asymptotic homology $\R^{n+2}$.
If $\ambientspace$ is parallelizable, then all long knots $\psi\colon \R^n\hookrightarrow \punct M$ are rectifiable.
Therefore, for any long knot $\psi$ in a (possibly non-parallelizable) asymptotic homology $\R^{n+2}$, $\psi\sharp\psi$ is rectifiable.
\end{cor}
\begin{proof}
As stated in Lemma \ref{lmpin}, $\pi_n(\inj, \iota_0)=\pi_n(SO(n+2), I_{n+2})$. Since $n\equiv 5 \mod 8$, $\pi_n(SO(n+2),I_{n+2}) = 0$.
Then $\pi_n(\inj, \iota_0) = 0$, and, if $\ambientspace$ is parallelizable,
the rectifiability obstruction $\iota(\psi)$ of Definition \ref{rectifiabilitydef} is trivial.
In the non-parallelizable case, $\ambientspace\sharp\ambientspace$ is parallelizable because of Proposition \ref{conn-sum2}, and the previous argument applies to $\psi\sharp \psi$.
\end{proof}
\subsubsection{Case \texorpdfstring{$n\equiv1 \mod 8$}{n = 1 mod 8} and connected sum of long knots}\label{diese}
\begin{lm}\label{lmfin}
When $n\equiv1\mod 8$, for any long knot $\psi$ in a parallelizable asymptotic homology $\R^{n+2}$, the connected sum $\psi\sharp \psi$ is rectifiable.
Therefore, for any long knot $\psi$ in a (possibly non-parallelizable) asymptotic homology $\R^{n+2}$, the connected sum $\psi\sharp \psi\sharp\psi\sharp\psi $ is rectifiable.
\end{lm}
\begin{proof}If $n=1$, it is true by definition.
Now assume $n\geq2$ and $n\equiv1\mod8$, and let $(\punct M, \tau)$ be a parallelized asymptotic homology $\R^{n+2}$, let $(\punct M\sharp \punct M, \tau\sharp \tau)$ be the induced connected sum, and fix a long knot $\psi\colon \R^n\hookrightarrow \punct M$.
Since $\psi \sharp \psi$ is defined by stacking two copies of the knot, $\iota(\tau\sharp \tau, \psi\sharp \psi)$ is the map defined by stacking two copies of $\iota(\tau, \psi)$. In terms of homotopy classes in $[(\R^n, \R^n\setminus \mathbb D^n), (\inj, \iota_0)]= \pi_n(\inj, \iota_0)$, this yields $[\iota(\tau\sharp \tau, \psi\sharp \psi)] = 2\cdot [\iota(\tau, \psi)] $. Lemma \ref{lmpin} and Theorem \ref{Bt} imply that $\pi_n(\inj, \iota_0)= \mathbb Z/2\mathbb Z$. This yields $[\iota(\tau\sharp \tau, \psi\sharp \psi)]= 0$, so $\psi\sharp\psi$ is rectifiable.
In the non-parallelizable case, $\ambientspace\sharp\ambientspace$ is parallelizable because of Proposition \ref{conn-sum2}, and the previous argument applies to $\psi\sharp\psi$.\qedhere
\end{proof}
Since $\pi_n(SO(n+2), I_{n+2}) = \mathbb Z$ for $n\equiv 3 \mod 4$, the same method implies that $\psi\sharp\psi$ is virtually rectifiable if and only if $\psi\sharp\psi$ is rectifiable (otherwise the class $\iota(\psi\sharp \psi)$ of Definition~\ref{defiota} has infinite order). This argument together with Corollary \ref{cor-un} and Lemma \ref{lmfin} yields the following remark.
\begin{rmq}
Let $\ambientspace$ be an odd-dimensional asymptotic homology $\R^{n+2}$ and let $\psi$ be a long knot of $\ambientspace$. The long knot $\psi$ is virtually rectifiable if and only if $\psi\sharp \psi\sharp\psi \sharp \psi$ is rectifiable.
\end{rmq}
\subsubsection{Even-dimensional case}
Note that we have to keep the parallelizability hypothesis
in the following lemma, since Proposition \ref{conn-sum2} may not extend to the even-dimensional case\footnote{There is an obstruction in $\pi_{n+1}(SO(n+2), I_{n+2})$, which is not a torsion group when $n$ is even.}.
\begin{lm}\label{lmeven}
When $n$ is even, for any long knot $\psi$ in a parallelizable asymptotic homology $\R^{n+2}$, the connected sum $\psi\sharp \psi$ is rectifiable.
Furthermore, if $n$ is an even integer such that $n\not\in 8\mathbb Z$, then $\psi$ itself is rectifiable.
\end{lm}
\begin{proof}
For $n\geq4$, this follows from the same arguments as in the previous subsections, using Lemma \ref{lmpin} and Theorem \ref{Bt}.
Let us assume $n=2$.
Lemma \ref{lmS} implies that the normal bundle to $\psi(\R^2)$ admits a trivialisation
$\nu \colon \psi(\R^2)\times\R^2 \rightarrow \N \psi(\R^2)$ such that for any $(x,u) \in \R^2\times \R^2$,
if $||x||\geq1$, then $\nu(\psi(x), u) = (u, 0, 0) \in \N_{(0,0,x)}\psi(\R^2)$.
For $x\in \R^2$, let $g(x)$ denote the map $\big( u\in \R^{4} \mapsto
\tau_{\psi(x)}^{-1}(\nu(\psi(x), u_1, u_2) + T_x\psi(u_3,u_4) ) \in \R^{4} \big)$.
This yields a homotopy class $[g]$ in the trivial group
$[(\R^2, \R^2\setminus \mathbb D^2), (GL^+(\R^4), I_4)]= \pi_2(SO(4),I_4)$.
Let $(g_t)_{t\in[0,1]}$ be a homotopy between $g$ and the constant map with value $I_4$ among maps such that $g_t(\R^2\setminus\mathbb D^2)=\{I_4\}$.
For any $t\in[0,1]$ and any $x\in\R^2$, set $h_t(x)=g_t(x)\circ \iota_0$.
This yields a homotopy between $\iota(\tau, \psi)$ and the constant map with value $\iota_0$, and thus proves that $\iota(\psi)$ is trivial.
\end{proof}
Navalny, President Vladimir Putin’s most outspoken domestic opponent, was arrested in January upon his return from Germany, where he spent five months recovering from a nerve-agent poisoning that he blames on the Kremlin. Russian authorities have rejected the accusation.
Last month, Navalny was sentenced to 2 1/2 years in prison for violating the terms of his probation and was sent to a prison facility in the Vladimir region, 85 kilometers (50 miles) east of the Russian capital. The facility called IK-2 stands out among Russian penitentiaries for its particularly strict rules for inmates, which include standing at attention for hours.
Navalny’s nerve-agent..
TITLE: Maximize sum with no two consecutive variables
QUESTION [4 upvotes]: Random variables $x_1,x_2,\dots,x_{100}$ are drawn independently from the uniform distribution over $(0,1)$. After knowing the values, we are allowed to choose a subset of them as long as no two consecutive variables are chosen. We want to maximize the sum of the chosen variables. In expectation, how high can we make it?
One way to choose is to ignore the values and always choose $x_1,x_3,x_5,\dots,x_{99}$. Since each variable has an expectation of $1/2$, this gives an expected sum of $50$. But it should be possible to do better if we consider the realized values.
REPLY [0 votes]: Well, the answer
$$
\Bbb{E}[\max(x_1+x_3+\cdots+x_{99},\ x_2+x_4+\cdots+x_{100})]
$$
is not the correct value of the expectation; it is a bit low.
To see this, consider the case of only $4$ uniform randoms instead of $100$.
Here, the expectation in the proposed answer is
$ \Bbb{E}[
\max(x_1+x_3, x_2+x_4)]
$
and you can work out in your head that this is $\frac{37}{30}$. The actual expectation is
$ \Bbb{E}[
\max(x_1+x_3, x_2+x_4, x_1+x_4)] = \frac{77}{60}
$
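A short Monte Carlo sketch of this $n=4$ comparison (the optimum over admissible subsets is computed with the standard no-two-consecutive dynamic program; the sample size is an arbitrary choice):

    import random

    def best_no_two_consecutive(x):
        # house-robber DP: best[i] = max(best[i-1], best[i-2] + x[i])
        prev, cur = 0.0, 0.0
        for v in x:
            prev, cur = cur, max(cur, prev + v)
        return cur

    random.seed(0)
    trials = 200_000
    alt = opt = 0.0
    for _ in range(trials):
        x = [random.random() for _ in range(4)]
        alt += max(x[0] + x[2], x[1] + x[3])   # fixed alternating choice
        opt += best_no_two_consecutive(x)      # true optimum
    print(alt / trials, 37 / 30)   # alternating: about 37/30 ~ 1.233
    print(opt / trials, 77 / 60)   # optimum: about 77/60 ~ 1.283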
It is even possible, for higher values of $2N$, that the maximal sum uses fewer than $N$ of the variates!
The exact calculation for $N=100$ is intractable and not the most interesting aspect of the question. The real question is:
Show that the expectation of the maximal no-contiguous-pair-containing sum among $2N$ uniform variates on $(0,1)$ is of the form
$$M(2N) = N + \frac{\sqrt{N}}{12\pi} + \Theta\left( N^s \right)
$$
with some $s < \frac12$, and as a bit tougher a problem, find $s$.
Note that the first two terms of that expansion give (unless I have made a mistake) the first two terms of the asymptotic form that the proposed answer would have implied.
TITLE: If $\phi\otimes\mathrm{Id}_\mathbb Q$ and $\phi\otimes\mathrm{Id}_{\mathbb F_p}$ are $\mathbb Z$-algebra isomorphisms, is $\phi$ an isomorphism?
QUESTION [4 upvotes]: I'm currently trying to understand a line in a paper that would follow easily if the answer to the following question was yes:
Let $R, S$ be (commutative, unital) rings and let $\phi\colon R\hookrightarrow S$. Suppose that, as $\mathbb Z$-algebras, $\phi\otimes\mathrm{Id}_\mathbb Q\colon R\otimes \mathbb Q\to S\otimes\mathbb Q$ and $\phi\otimes\mathrm{Id}_{\mathbb F_p}\colon R\otimes \mathbb F_p\to S\otimes\mathbb F_p$ are isomorphisms for all primes $p$.
Is $\phi$ an isomorphism?
In the setting I care about, I know that $R, S$ are torsion free integral domains. I also know that $S$ is a subring of $\mathbb Z[X_1, \ldots, X_n]$ for some $n$ and that $S\otimes \mathbb Q$ is finitely generated as a $\mathbb Q$-algebra.
I know that the general statement is false when $R, S$ are just $\mathbb Z$-modules: a counterexample is $R = 0$ and $S = \mathbb {Q/Z}$.
REPLY [3 votes]: Suppose $R\subset S$ are torsion-free abelian groups such that $S\otimes_\Bbb{Z} \Bbb{Q}=R\otimes_\Bbb{Z} \Bbb{Q}$.
For $a\in S$, if $a\not \in R$ then $a\Bbb{Z}\cap R=an \Bbb{Z}$ for some $n\ge 2$.
For $p\mid n$, we have $an\ne 0$ in $R/pR$ whereas $an=0\in S/pS$, so the map $R/pR\to R/(pS\cap R)\subset S/pS$ is not injective.
TITLE: Decomposing proper map into closed embedding and proper submersion
QUESTION [1 upvotes]: Suppose that $f: X \to Y$ is a smooth proper map between two smooth manifolds. Is it always possible to represent $f$ as a composition of a closed embedding $g: X \to Z$ with a proper submersion $h: Z \to Y$?
Motivation:
Firstly, closed embeddings and proper submersions are in a sense the simplest kinds of proper maps, so in some cases we could hope to reduce proofs about proper maps to those two cases.
Secondly, projective morphisms in algebraic geometry are defined precisely as morphisms that admit such decomposition for $Z = \mathbb{P}^n_Y$, and so far all examples of proper maps that I have come up with are in a sense similar.
Finally, if $X$ is compact, we always have such decomposition as $X \to X \times Y \to Y$, where the first map is the closed embedding of the graph of $f$ and the second map is the projection onto $Y$.
REPLY [3 votes]: Yes. Choose a smooth (but not necessarily closed) embedding $i:X\to W$ where the manifold $W$ is compact, for example a sphere. Together $i$ and $f$ give a smooth map $X\to W\times Y$ that is both proper and an embedding.
Changes To Day Use Permits At Yosemite
Yosemite Valley, CA– Due to the consistent crowded conditions on the Half Dome
cables at Yosemite National Park, Day Use Permits will be required seven days a week for the 2011 summer season.
The Half Dome cables are generally in place from mid-May through mid-October, depending on snowpack and weather conditions. Over the past several years, the popularity of the hike has resulted in large numbers of people using the Half Dome cables, particularly on
weekends and holidays. Saturdays and holidays averaged 840 visitors per day, while peak days saw up to 1200 people using the cables.
These large numbers of hikers raised safety concerns and there was a fatality and serious injuries sustained by park visitors due to these crowded conditions.
Yosemite National Park began an interim program for climbing the Half Dome cables in
2010 to address these serious safety concerns. Day Use Permits were required to use the cables on Fridays, Saturdays, Sundays, and holidays during the 2010 season. Although the interim program worked well on the permit days, visitor use on the cables during days in which permits were not required reached peak weekend levels.
The Half Dome Day Use Permits will be available starting March 1, 2011 for climbing the cables in May and June, 2011.
Written by [email protected]
The House today took action to protect Wisconsin manufacturers from China cheating, building on legislation introduced by Congresswoman Tammy Baldwin (D-WI) with Congressman Reid Ribble (R-WI). The bipartisan H.R. 4105 passed today (370-39) takes necessary steps to block China's unfair trade practices. The legislation incorporates Baldwin's CHEATS Act (H.R. 4071) that ensures the Department of Commerce has the legal authority to impose a tax on subsidized foreign imports, called countervailing duties (CVDs), from China and other countries that unfairly subsidize their manufacturing. Without these CVDs, American manufacturers are undercut and cannot compete on a level playing field.
"The simple fact is that China cheats and we've seen the effect their unfair advantage has on manufacturing in Wisconsin, particularly in the paper industry," said Baldwin. "I'm proud to see my efforts to keep jobs in Wisconsin supported by my colleagues on both sides of the aisle. American manufacturers deserve our full support in combating China's relentless pattern of international trade law violations," Baldwin said.
Baldwin's contribution to H.R. 4105 responds to a December 19, 2011 ruling by the U.S. Court of Appeals that the Department of Commerce lacks the legal authority to impose CVDs on subsidized imports from countries with nonmarket economies, such as China and Vietnam. Without legislative action, the Court's ruling would have taken effect, and the U.S. would have lost a powerful remedy to combat the harmful effects of unfairly subsidized Chinese imports. In addition, the U.S. would have been forced to pay back tariffs already paid by importers - at taxpayers' expense.
Wisconsin businesses are among those that will benefit significantly from Baldwin's advocacy. Also benefiting are 230 U.S. companies in the steel, aluminum, paper, chemicals and tire industries.
\begin{document}
\maketitle
\begin{abstract}
We study the $\de$-discretized Szemer\'edi-Trotter theorem and Furstenberg set problem.
We prove sharp estimates for both two problems assuming tubes satisfy some spacing condition. For both two problems, we construct sharp examples that have many common features.
\end{abstract}
\section{Introduction}
\subsection{Incidence estimate}
To begin with, let us first recall the famous Szemer\'edi-Trotter theorem in incidence geometry. Suppose $\cL$ is a set of lines in the plane. For $r\ge 2$, let $P_r(\cL)$ denote the $r$-rich points of $\cL$ --- the set of points that lie in at least $r$ lines of $\cL$. The Szemer\'edi-Trotter theorem gives sharp bounds for $|P_r(\cL)|$:
$$ |P_r(\cL)|\lesim \frac{|\cL|^2}{r^3}+\frac{|\cL|}{r}. $$
There is also a dual version. Suppose $\cP$ is a set of points in the plane. For $r\ge 2$, let $L_r(\cP)$ denote the $r$-rich lines of $\cP$ --- the set of lines that contain at least $r$ points of $\cP$. We have:
$$ |L_r(\cP)|\lesim \frac{|\cP|^2}{r^3}+\frac{|\cP|}{r}. $$
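For orientation, we recall how these rich-point bounds follow from the incidence form of the theorem (a standard deduction, included for the reader's convenience). Writing $I(\cP,\cL)$ for the number of point-line incidences, the Szemer\'edi-Trotter theorem states $I(\cP,\cL)\lesim (|\cP||\cL|)^{2/3}+|\cP|+|\cL|$. Applying this with $\cP= P_r(\cL)$ gives
$$ r|P_r(\cL)|\le I(P_r(\cL),\cL)\lesim (|P_r(\cL)||\cL|)^{2/3}+|P_r(\cL)|+|\cL|, $$
and absorbing the middle term using $r\ge 2$ yields $|P_r(\cL)|\lesim |\cL|^{2}r^{-3}+|\cL|r^{-1}$. The dual bound is obtained in the same way.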
A natural question is to replace the points by $\de$-balls (the balls of radius $\de$) and the lines by the $\de$-tubes (the tubes of dimensions $1\times\de$), and then ask the incidence estimate between these $\de$-balls and $\de$-tubes.
This question is considered in \cite{guth2019incidence}, assuming some spacing conditions on tubes.
In our paper, we generalize the incidence estimates on the plane in \cite{guth2019incidence}. We will consider some more general spacing conditions.
To state our results, we need the following notions.
\begin{definition}[Essentially distinct balls and tubes]\label{distinct}
For a set of $\delta$-balls $\B$, we say these balls are essentially distinct if for any $B_1\neq B_2\in\B$, $|B_1\cap B_2|\leq (1/2)|B_1|$. Similarly, for a set of $\delta$-tubes $\T$, we say these tubes are essentially distinct if for any $T_1\neq T_2\in\T$, $|T_1\cap T_2|\leq (1/2)|T_1|$.
\end{definition}
In the rest of the paper, we will always consider essentially distinct $\delta$-balls and essentially distinct $\delta$-tubes.
In the discrete case, it's easy to define the incidence between points and lines, and to define the $r$-rich points and $r$-rich lines. Here we make analogies of these notions for $\de$-balls and $\de$-tubes.
\begin{definition}[$r$-rich balls and $r$-rich tubes]\label{rich}
Given a set of $\delta$-tubes $\mathbb{T}$, we define the $r$-rich balls for $\T$ in the following way. We choose a set $\B$ to be a maximal set of essentially distinct $\delta$-balls.
We define
$$B_r(\mathbb{T}):=\{ B\in\B: B\ \text{intersects more than}\ r\ \text{tubes from}\ \T \}.$$
We say $B_r(\T)$ is the set of $r$-rich $\delta$-balls for $\T$.
Here we have many choices for $\B$, but we will see in the proof that the choice of $\B$ doesn't affect the result for the upper bound of $|B_r(\T)|$.
We could just choose $\B$ to be all $\delta$-balls centered at $(\delta/2) \mathbb{Z}^2$.
Similarly, given a set of $\delta$-balls $\B$, we define the $r$-rich tubes for $\B$ in the following way. We choose a set $\T$ to be a maximal set of essentially distinct $\delta$-tubes. We define
$$T_r(\B):=\{ T\in\T: T\ \text{intersects more than}\ r\ \text{balls from}\ \B \}.$$
We say $T_r(\B)$ is the set of $r$-rich $\delta$-tubes for $\B$.
\end{definition}
Now we state our main results.
\begin{theorem}\label{main2}
Let $1 \le W \le X \le \delta^{-1}$. Let $\T$ be a collection of essentially distinct $\delta$-tubes in $[0,1]^2$. We also assume $\T$ satisfies the following spacing condition: every $W^{-1}$-tube contains at most $\frac{X}{W}$ many tubes of $\T$, and the directions of these tubes are $\frac{1}{X}$-separated.
We denote $|\T_{\max}|:=WX$ (as one can see that $\T$ contains at most $\sim WX$ tubes).
Then for $r > \max(\delta^{1-2\e} |\Tmax|, 1)$, the number of $r$-rich balls is bounded by
\begin{equation}\label{eq1}
|B_r (\T)| \lesim_\e \delta^{-\e} |\T| |\Tmax| \cdot r^{-2} (r^{-1} + W^{-1}).
\end{equation}
\end{theorem}
\begin{remark}
If we take $X=W$ (respectively $X=\delta^{-1}$) in the above theorem, we recover Theorem 1.1 (respectively Theorem 1.2) in \cite{guth2019incidence}.
There are two new ingredients in our theorem. First, we use $|\T||\T_{\max}|$ as our upper bound in \eqref{eq1}, whereas in \cite{guth2019incidence} it was $|\T_{\max}|^2$. Second,
our theorem also concerns the intermediate spacing conditions, i.e., we introduce a new parameter $X$.
\end{remark}
There is also a dual version of Theorem \ref{main2} which we state below.
\begin{theorem}[A dual version of Theorem \ref{main2}]\label{main3}
Fix a line $\ell$ that intersects $[0, 1]^2$. Let $1 \le W \le X \le \delta^{-1}$. Let $\T$ be a collection of essentially distinct $\delta$-tubes in $[0,1]^2$, such that every tube in $\T$ forms an angle $\ge \frac{\pi}{4}$ with $\ell$. We also assume $\T$ satisfies the following spacing condition: every $W^{-1}$-tube which forms an angle $\ge \frac{\pi}{4}$ with $\ell$ contains at most $\frac{X}{W}$ many tubes of $\T$, and the intersections of these tubes with $\ell$ are $\frac{1}{X}$-separated.
We denote $|\T_{\max}|:=WX$ (as one can see that $\T$ contains at most $\sim WX$ tubes).
Then for $r > \max(\delta^{1-2\e} |\Tmax|, 1)$, the number of $r$-rich balls is bounded by
\begin{equation*}
|B_r (\T)| \lesim_\e \delta^{-\e} |\T| |\Tmax| \cdot r^{-2} (r^{-1} + W^{-1}).
\end{equation*}
\end{theorem}
\begin{remark}
The above two theorems give upper bounds for the number of $r$-rich balls $B_r(\T)$ when tubes $\T$ satisfy some spacing condition. We can actually switch the roles of balls and tubes, so the question becomes to estimate the number of $r$-rich tubes $T_r(\B)$ assuming some spacing condition on $\B$. This is our Theorem \ref{main} stated below. In Section \ref{dualitysec}, we will discuss a tube-ball duality and show that Theorem \ref{main} implies Theorem \ref{main2} and Theorem \ref{main3}.
\end{remark}
\begin{theorem}\label{main}
Let $1 \le W \le X \le \delta^{-1}$. Divide $[0, 1]^2$ into $W^{-1} \times X^{-1}$ rectangles as in Figure \ref{fig:generalcase}. Let $\B$ be a set of $\delta$-balls with at most one ball in each rectangle.
We denote $|\B_{\max}|:=WX$ (as one can see that $\B$ contains at most $\sim WX$ balls).
Then for $r > \max(\delta^{1-2\e} |\B_{\max}|, 1)$, the number of $r$-rich tubes is bounded by
\begin{equation*}
|T_r (\mathbb{B})| \lesim_\e \delta^{-\e} |\B| |\B_{\max}| \cdot r^{-2} (r^{-1} + W^{-1}).
\end{equation*}
\end{theorem}
\begin{remark}
There are two special cases of Theorem \ref{main}: $X=W$, $X=\de^{-1}$ (see Figure \ref{fig:specialcases}). These two cases actually correspond to (the dual version of) Theorem 1.1 and Theorem 1.2 in \cite{guth2019incidence}.
\end{remark}
\begin{figure}
\centering
\includegraphics{output1.pdf}
\caption{The general case of Theorem \ref{main}.}
\label{fig:generalcase}
\end{figure}
\begin{figure}
\centering
\includegraphics{output2.pdf}
\includegraphics{output3.pdf}
\caption{Special cases of Theorem \ref{main}.}
\label{fig:specialcases}
\end{figure}
\subsection{Furstenberg set problem}
Wolff discussed the Furstenberg set problem in \cite{wolff1999recent}. Given $\alpha\in (0,1)$, we say a set $E\subset \R^2$ is an $\alpha$-Furstenberg set if for each direction $e\in S^1$, there exists a line $l_e$ pointing in direction $e$ such that $\dim_{\textup{H}}(E\cap l_e)\ge \alpha$. The problem is to find the lower bound of $\dim_{\textup{H}}E$. Wolff proved that $\dim_{\textup{H}}E\ge \max(\frac{1}{2}+\alpha, 2\alpha)$ and conjectured that $\dim_{\textup{H}}E\ge \frac{3}{2}\alpha+\frac{1}{2}$.
Some progress has been made on this problem. In \cite{katz2001some}, Katz and Tao showed that when $\alpha=\frac{1}{2}$, the Furstenberg problem is related to two other problems: the Falconer distance problem and the Erd\"os ring problem. Later, Bourgain \cite{bourgain2003erdHos} improved the bound to $\dim_{\textup{H}}E\ge 1+\e$ when $\alpha=\frac{1}{2}$. Recently, Orponen and Shmerkin \cite{orponen2021hausdorff} further improved the bound to $\dim_{\textup{H}}E\ge 2\alpha+\e$ for $\alpha\in(\frac{1}{2},1)$.
There are also some variants of the Furstenberg problem. In \cite{zhang2017polynomials}, Zhang considered the discrete Furstenberg problem and proved the sharp estimates.
In our paper we consider the $\de$-discretized version. We also assume some spacing condition on the $\de$-balls. We consider the following question.
\begin{question}\label{question}
Fix $\alpha\in (0,1)$. Let $\T=\{T\}$ be a set of $\de$-tubes that are $\de$-separated in direction, and with cardinality $\sim \de^{-1}$. Assume for each $T$ there is a set of $\de$-balls $Y(T)=\{B_\de\}$ satisfying: each ball in $Y(T)$ intersects $T$; $\#Y(T)\sim \de^{-\alpha}$ and each pair of nearby balls in $Y(T)$ have distance $\gtrsim \de^{\alpha}$.
If we define the union of these $\de$-balls to be $\B=\cup_T Y(T)$, can we show
$$ |\B| \gtrapprox \de^{-\frac{3}{2}\alpha-\frac{1}{2}}? $$
\end{question}
\begin{remark}
Here the $Y(T)$ satisfies an even-spacing condition which is stronger than the $(\de,\alpha)_1$ spacing condition introduced in \cite{katz2001some}. The $(\de,\alpha)_1$ spacing condition roughly says that $\#Y(T)\sim\de^{-\alpha}$ and for any subtube $T_w \subset T$ of length $w$ there holds $\#\{B_\de\in Y(T): B_\de\cap T_w\neq\emptyset\}\lesim (w/\de)^{\alpha}$.
\end{remark}
We will give an affirmative answer to this question in Section \ref{fursec}. Actually, we will prove a more general result as follows.
\begin{theorem}\label{thmfur}
Let $1 \le W \le X \le \delta^{-1}$. Let $\T$ be a collection of essentially distinct $\delta$-tubes in $[0,1]^2$ that satisfies the following spacing condition: every $W^{-1}$-tube contains at most $\frac{X}{W}$ many tubes of $\T$, and the directions of these tubes are $\frac{1}{X}$-separated. We also assume $|\T|\sim XW$.
Let $\B=\{B_\de\}$ be a set of $\de$-balls and for each $T\in\T$ define $Y(T):=\{ B_{\de}\in \B: B_{\de}\cap T\neq \emptyset \}$. Suppose each $Y(T)$ has a subset $Y'(T)$ that satisfies the spacing condition as in Question \ref{question}, which is: $\#Y'(T)\sim \de^{-\alpha}$ and each pair of nearby balls in $Y'(T)$ have distance $\gtrsim \de^{\alpha}$. Then we have the estimate
\begin{equation}
|\B|\gtrsim (\log\de^{-1})^{3.5}\min(\de^{-\alpha-1},\de^{-\frac{3}{2}\alpha}(XW)^{\frac{1}{2}},\de^{-\alpha}XW).
\end{equation}
\end{theorem}
Question \ref{question} is a special case of Theorem \ref{thmfur} when $W=1, X=\de^{-1}$.
\medskip
To end this section, we discuss the plan of this paper. In Section \ref{egsec}, we discuss the sharp examples. In Section \ref{dualitysec}, we discuss the tube-ball duality and show Theorem \ref{main} implies Theorem \ref{main2} and Theorem \ref{main3}. In Section \ref{proofsec}, we prove Theorem \ref{main}. In Section \ref{fursec}, we prove Theorem \ref{thmfur}. We will also briefly discuss the application of Theorem \ref{main2} to the sum-product problem in the Appendix.
\medskip
{\bf Notation} We use the notation $A\lesim B$ to mean $A\le CB$ for some constant $C>0$.
We use the notation $A\lessapprox B$ in several sections. The meaning of this notation may be slightly different in different places, but the precise definition is given where it appears.
\section {Sharp examples} \label{egsec}
In this section, we discuss the sharp examples for Theorem \ref{main2} (in the case $|\T|\sim |\T_{\max}|$) and for Theorem \ref{thmfur}.
The sharp examples for Theorem \ref{main3} and Theorem \ref{main} can be constructed in a similar way as for Theorem \ref{main2} by using the tube-ball duality (which will be discussed in Section \ref{dualitysec}). So we omit the construction for the other two theorems.
\subsection{Examples for incidence estimate}\label{egincidence}
First, we construct the example for Theorem \ref{main2}.
For simplicity, we assume $W \mid X$ ($W$ divides $X$).
\textit {Case 1}: $2\le r < W$.
For each $0 \le a \le W$ and $0 \le b \le X$, draw a line from $(\frac{a}{W},0)$ to $(\frac{b}{X},1)$. These lines, when thickened to $\delta$-tubes, will satisfy the spacing condition as in Theorem \ref{main2}, since two lines are either parallel or differ in angle by at least $\frac{1}{X} \ge \delta$. Let $S$ be the set of rational numbers $\frac{p}{q}$ in $[\frac{1}{4}, \frac{3}{4}]$ such that $p, q \sim \frac{X}{r}$, $\gcd(p,q)=1$, and $p$ is a multiple of $\frac{X}{W}$. We claim that each point of the form $( \frac{c}{q W}, \frac{p}{q})$ with $\frac{p}{q} \in S$ and $c \le qW$ is $r$-rich. To see this, note that the point on the line through $(\frac{a}{W},0)$ and $(\frac{b}{X},1)$ with $y$-coordinate $\frac{p}{q}$ has $x$-coordinate $\frac{p}{q} \cdot \frac{b}{X} + (1-\frac{p}{q}) \cdot \frac{a}{W}$, so it suffices to show the equation
$$\frac{p}{q} \cdot \frac{b}{X} + (1-\frac{p}{q}) \cdot \frac{a}{W}=\frac{c}{qW}$$
has $\gtrsim r$ solutions $(a,b)$ with $0\le a\le W$ and $0\le b\le X$, for any $\frac{p}{q}\in S$ and $c\lesim qW$. Multiplying by $qW$, the equation is equivalent to
\begin{equation}\label{exeq}
\frac{pW}{X}\cdot b + (q-p) \cdot a=c.
\end{equation}
Note that $\frac{pW}{X}$ is an integer since we assumed $p$ is a multiple of $\frac{X}{W}$.
We also have $\gcd(\frac{pW}{X}, q-p)=1$, since $\gcd(p, q-p) = 1$. Now we can show \eqref{exeq} has $\gtrsim r$ solutions $(a, b)$. Note that if $(a_0, b_0)$ is a solution, then $(a_0+\frac{pW}{X}m, b_0-(q-p)m), m\in\mathbb{N}$ are also solutions. When $c\le qW$, we can properly choose $\gtrsim r$ many $m\in\mathbb{N}$ such that $a_0+\frac{pW}{X}m\in[0,W]$ and $b_0-(q-p)m\in [0,X]$.
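For concreteness, this count can be spelled out as follows (this is just the previous sentence made quantitative). Since $\gcd(\frac{pW}{X},q-p)=1$, the full set of integer solutions of \eqref{exeq} is $\{(a_0+\frac{pW}{X}m,\ b_0-(q-p)m): m\in\mathbb{Z}\}$, and the constraints $0\le a\le W$ and $0\le b\le X$ confine $m$ to two intervals of respective lengths
$$ \frac{W}{pW/X}=\frac{X}{p}\sim r \qquad\text{and}\qquad \frac{X}{q-p}\ge \frac{X}{q}\sim r, $$
which overlap in $\gtrsim r$ integers for $c$ in the bulk of the range $[0,qW]$.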
We still need to check the points $\{ (\frac{c}{qW},\frac{p}{q}): \frac{p}{q}\in S, c\le qW \}$ are $\de$-separated. Consider two different points $(\frac{p}{q} \cdot \frac{b}{X} + (1-\frac{p}{q}) \cdot \frac{a}{W},\frac{p}{q})$ and $(\frac{p'}{q'} \cdot \frac{b'}{X} + (1-\frac{p'}{q'}) \cdot \frac{a'}{W},\frac{p'}{q'})$ in this set. If their second coordinates are different, then since we assumed each of $p$, $p'$ is a multiple of $\frac{X}{W}$, we see the difference of their second coordinates is
\begin{equation}\label{difference}
\big|\frac{p}{q}-\frac{p'}{q'}\big|=\big| \frac{p q'-p' q}{q q'}\big| \ge \frac{X/W}{(X/r)^2}\ge \frac{r}{XW}\ge \de,
\end{equation}
where the last inequality is because of the assumption $r>\de |\T_{\max}|=\de WX$ in Theorem \ref{main2}.
If their second coordinates are the same, then since $p$ is a multiple of $\frac{X}{W}$, the difference of their first coordinates satisfies
$$\big| \frac{p}{q}\frac{b-b'}{X}+(1-\frac{p}{q})\frac{a-a'}{W} \big|\ge \frac{1}{qW}\sim \frac{r}{XW}\ge \de. $$
Finally, we calculate the cardinality of the set of $r$-rich points: $\{ (\frac{c}{qW},\frac{p}{q}): \frac{p}{q}\in S, c\le qW \}$.
There are $\sim\frac{WX}{r^2}$ elements in $S$ and $\sim \frac{WX}{r}$ choices for $c$, so the number of $r$-rich points is $\sim\frac{W^2 X^2}{r^3}$.
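For the reader's convenience, here is a sketch of the bookkeeping behind this count; we ignore the conditions $\gcd(p,q)=1$ and $\frac{p}{q}\in[\frac14,\frac34]$, which are only expected to change the cardinalities by bounded factors. The number of admissible denominators is $\#\{q:q\sim\frac{X}{r}\}\sim\frac{X}{r}$, and for each such $q$ the number of admissible numerators is $\#\{p: p\sim\frac{X}{r},\ \frac{X}{W}\mid p\}\sim \frac{X/r}{X/W}=\frac{W}{r}$, so that
$$ \#S\sim\frac{X}{r}\cdot\frac{W}{r}=\frac{WX}{r^2},\qquad \#\{c: c\le qW\}\sim qW\sim\frac{XW}{r},\qquad \#S\cdot\frac{XW}{r}\sim\frac{W^2X^2}{r^3}. $$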
\medskip
\textit{Case 2}: $W < r < X$.
At each $(\frac{a}{W}, 0)$ with $0 \le a \le W$, place an $X$-bush, i.e. a set of $X^{-1}$-direction separated $\de$-tubes with cardinality $X$. The number of $r$-rich points in each bush is $\frac{X^2}{r^2}$, so the total number of $r$-rich points is $\frac{WX^2}{r^2}$.\qed
\subsection{Examples for Furstenberg problem}
Next we discuss the sharp examples for Theorem \ref{thmfur}. Without loss of generality, we may assume the tubes in $\T$ form an angle of at most $1/10$ with the $y$-axis. We also assume $X$ and $W$ are square numbers and $W \mid X$ for technical reasons.
\textit{Case 1}:
$\de^{-\alpha-1}$.
Choose $\sim \de^{-\alpha}$ many length-$\de$ intervals in $[0,1]$ such that any two intervals have distance $\ge \de^{\alpha}$ from each other. Denote these intervals by $\{I_i\}_i$. We set $\B$ to be all the lattice $\de$-balls that intersect $[0,1]\times\cup_i I_i $. We can easily check $\B$ satisfies the condition in Theorem \ref{thmfur} for any choice of tubes, and
$$ |\B|\lesim \de^{-\alpha-1}. $$
\medskip
\textit{Case 2}: $\de^{-\alpha}XW$.
First we fix a set of tubes $\T$ that satisfies the spacing condition. Let $\B$ be the set of lattice $\de$-balls that intersect $([0,1]\times \cup_i I_i) \bigcap \cup_{T\in \T} T $, where $I_i$ are the same as in \textit{Case 1}. Noting $|\T|\sim XW$, we can easily check that
$$ |\B|\lesim \de^{-\alpha}XW. $$
\medskip
\textit{Case 3}: $\de^{-\frac{3}{2}\alpha}(XW)^{\frac{1}{2}}$.
We will borrow the idea from \textit{Case 1} of the examples for incidence estimate in the last subsection. The notation here will be the same as there. We choose the same set of tubes $\T$ as in \textit{Case 1} of the last subsection. We choose $\B$ to be the set of $\de$-balls whose centers are from the set
$\{ (\frac{c}{qW},\frac{p}{q}): \frac{p}{q}\in S, c\le qW \}$. We have
$$ |\B|\lesim \frac{W^2X^2}{r^3}. $$
We check that $\B$ satisfies the condition in Theorem \ref{thmfur}. Fix a $T\in \T$ and let the line connecting $(\frac{a}{W},0)$ and $(\frac{b}{X},1)$ be the core line of $T$. We see that the points $\{((1-\frac{p}{q})\frac{a}{W}+\frac{p}{q}\frac{b}{X},\frac{p}{q}):\frac{p}{q}\in S\}$ lie on the core line of $T$. We can also show that these points belong to the set $\{ (\frac{c}{qW},\frac{p}{q}): \frac{p}{q}\in S, c\le qW \}$, since
$$ (1-\frac{p}{q})\frac{a}{W}+\frac{p}{q}\frac{b}{X}=\frac{(q-p)a+p\frac{W}{X}b}{qW} $$
and the numerator is a nonnegative integer (recall that $p$ is a multiple of $\frac{X}{W}$, so $p\frac{W}{X}\in\mathbb{Z}$) which is at most $qW$ since $a\le W$ and $b\le X$.
We have shown that the core line of $T$ contains points whose $y$-coordinates are in $S$. Recall that $\#S\sim \frac{XW}{r^2}$, and from \eqref{difference} we see that
each pair of nearby points has distance $\ge \frac{r^2}{XW}$. If we have $2\le (\de^\alpha XW)^{\frac{1}{2}}\le W $,
then we set $r=(\de^\alpha XW)^{\frac{1}{2}}$ which means $\de^{-\alpha}=\frac{XW}{r^2}$. A simple calculation gives
$$ |\B| \lesim \frac{W^2X^2}{r^3}=\de^{-\frac{3}{2}\alpha}(XW)^{\frac{1}{2}}. $$
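For the record, the last display follows from the substitution $r=(\de^{\alpha}XW)^{\frac12}$ by the short computation
$$ \frac{W^2X^2}{r^3}=\frac{(XW)^2}{(\de^{\alpha}XW)^{\frac32}}=\de^{-\frac32\alpha}(XW)^{\frac12},\qquad \frac{XW}{r^2}=\de^{-\alpha}, $$
the second identity being the one used to match the condition on $Y'(T)$ in Theorem \ref{thmfur}.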
To get rid of the requirement $2\le (\de^{\alpha}XW)^{\frac{1}{2}}\le W$, we want to replace the pair $(X,W)$ by another pair $(X',W')$. Specifically, we set $(X',W')=((XW)^{\frac{1}{2}},(XW)^{\frac{1}{2}})$.
We can easily check $(\de^{\alpha}X'W')^{\frac{1}{2}}\le W'$, so when $(\de^{\alpha} X'W')^{\frac{1}{2}}\ge 2$ we have
$$ |\B| \lesim \de^{-\frac{3}{2}\alpha}(X'W')^{\frac{1}{2}}=\de^{-\frac{3}{2}\alpha}(XW)^{\frac{1}{2}}. $$
When $(\de^{\alpha} X'W')^{\frac{1}{2}}\le 2$, we just use the example in \textit{Case 2} and note that $\de^{-\alpha}XW\lesim \de^{-\frac{3}{2}\alpha}(XW)^{\frac{1}{2}}$.
However, with the new pair $((XW)^{\frac{1}{2}},(XW)^{\frac{1}{2}})$, the set of tubes $\T$ constructed above does not satisfy the spacing condition in Theorem \ref{thmfur} (with the original parameters $X$ and $W$). We overcome this by using a trick from \cite{wolff1999recent}. We slightly modify the definition of $\T$. For each $0\le a,b\le (XW)^{\frac{1}{2}}$, draw a line segment from $(\frac{a}{(XW)^{\frac{1}{2}}},0)$ to $(\frac{\sqrt{2}b}{(XW)^{\frac{1}{2}}},1)$. We define $\T$ to be the set of tubes that are the $\de$-neighborhoods of these line segments. The intersection pattern between tubes and balls is essentially the same as when $\sqrt{2}$ is replaced by $1$, so we still get the bound
$$ |\B| \lesim \de^{-\frac{3}{2}\alpha}(XW)^{\frac{1}{2}}. $$
But now, $\T$ satisfies the spacing condition in Theorem \ref{thmfur}. To check this, we consider any $W^{-1}$-tube whose intersection with $\{y=0\}$ is $[a_0,a_0+W^{-1}]$ and intersection with $\{y=1\}$ is $[b_0,b_0+W^{-1}]$. We see that the line segments that lie in this $W^{-1}$-tube are those connecting points $(a_0+\frac{a}{(XW)^{\frac{1}{2}}},0)$ and $(b_0+\frac{\sqrt{2}b}{(XW)^{\frac{1}{2}}},1)$ for $0\le a,b \lesssim (\frac{X}{W})^{\frac{1}{2}}$.
It suffices to show $\frac{|a-\sqrt{2}b|}{(XW)^{\frac{1}{2}}}\gtrsim \frac{1}{X}$, which is a simple consequence of the fact that $|a-\sqrt{2}b|\gtrsim \frac{1}{\max(a,b)}$ for integers $a,b$ not both zero.
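For completeness, here is one way to see the last fact: for integers $a,b$ not both zero, the quantity $a^2-2b^2$ is a nonzero integer (as $\sqrt2$ is irrational), so
$$ |a-\sqrt{2}b|=\frac{|a^2-2b^2|}{|a+\sqrt{2}b|}\ge \frac{1}{|a|+\sqrt{2}|b|}\gtrsim \frac{1}{\max(a,b)}. $$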
\qed
\section{Tube-ball duality} \label{dualitysec}
We know there is a duality between lines and points. More precisely, in the projective plane every point has its dual line and every line has its dual point, and a point lies on a line if and only if the dual point of that line lies on the dual line of that point. So we can transform point-line incidences into line-point incidences.
In this section, we are going to show there is also a duality between $\de$-tubes and $\de$-balls that lie in (or near) $[0,1]^2$. The advantage is that we can reduce Theorem \ref{main2} and Theorem \ref{main3} to Theorem \ref{main}. We assume all the $\de$-tubes and $\de$-balls considered here lie in $\Pi_1=[0,1]^2$, which we call the \textit{physical space}. We also set $\Pi_2=[0,1]^2$, which we call the \textit{dual space}. Our goal is to define a correspondence between these two spaces so that: the $\de$-balls (respectively $\de$-tubes) in $\Pi_1$ correspond to $\de$-tubes (respectively $\de$-balls) in $\Pi_2$, and the ball-tube incidences in $\Pi_1$ correspond to the tube-ball incidences in $\Pi_2$.
\subsection{Line-point duality}
First, let's look at the line-point duality between $\Pi_1$ and $\Pi_2$. We will use $(x,y)$ to denote the coordinates of $\Pi_1$ and $(u,v)$ to denote the coordinates of $\Pi_2$.
Define $\cP_2$ to be all the points in $\Pi_2$. For any $(u_0,v_0)\in\cP_2$, we define the corresponding line in $\Pi_1$ to be $$l_1(u_0,v_0): v_0 y=x-u_0.$$
We also define
$$\cL_1:=l_1(\cP_2)=\{ v_0 y=x-u_0:(u_0,v_0)\in\cP_2 \},$$
which is a set of lines in $\Pi_1$.
We see $l_1: \cP_2\leftrightarrow\cL_1$ is a one-to-one correspondence.
\begin{remark}\label{rmk}
There is a good way to think about this correspondence. Given a point $(u_0,v_0)$, then its corresponding line $l_1$ has ``position" $u_0$ (which is its intersection with $\{y=0\}$) and has ``direction" $v_0$ (which is the inverse of its slope). In the next subsection, we will define a correspondence between balls in $\Pi_2$ and tubes in $\Pi_1$ so that a ball with center $(u_0,v_0)$ corresponds to the tube with ``position" $u_0$ and ``direction" $v_0$.
\end{remark}
Next, we define $\cP_1$ to be all the points in $\Pi_1$. For any $(x_0,y_0)\in \cP_1$, the lines in $\cL_1$ passing through it are of the form $v y=x-x_0+v y_0$. This motivates us to define the line in $\Pi_2$ corresponding to $(x_0,y_0)$ as
$$l_2(x_0,y_0): u=x_0-v y_0.$$
We also define
$$\cL_2=l_2(\cP_1)=\{ u=x_0-v y_0: (x_0,y_0)\in\cP_1 \},$$
which is a set of lines in $\Pi_2$.
We see $l_2:\cP_1\leftrightarrow\cL_2$ is a one-to-one correspondence.
We can also show the incidence is preserved under the duality. Given a point $(x_0,y_0)\in\cP_1$ and a line $l_1(u_0,v_0):v_0 y=x-u_0 \in\cL_1$, we have $(x_0,y_0)\in l_1(u_0,v_0)\Longleftrightarrow (u_0,v_0)\in l_2(x_0,y_0)$ by definition.
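For concreteness, the verification is the one-line computation
$$ (x_0,y_0)\in l_1(u_0,v_0)\iff v_0y_0=x_0-u_0\iff u_0=x_0-v_0y_0\iff (u_0,v_0)\in l_2(x_0,y_0). $$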
\subsection{Tube-ball duality}\label{tubeball}
Now we generalize our line-point duality to tube-ball duality.
For $(u_0,v_0)\in \cP_2$, let $B=B_\de(u_0,v_0)$ be the ball of radius $\de$ with center $(u_0,v_0)$. The intersection of its image under $l_1$ with $[-2,2]^2$ is roughly a $\de$-tube. That is to say:
$$ l_1(B):=\bigcup_{(u,v)\in B}l_1(u,v)\bigcap [-2,2]^2 $$
is roughly a $\de$-tube.
Intuitively, one can think of $l_1(B)$ as the $\de$-neighborhood of $l_1(u_0,v_0)\bigcap [-2,2]^2$. If we let $\B_2$ be all the lattice $\de$-balls in $\Pi_2$, and let $\T_1:=\{ l_1(B):B\in\B_2 \}$, then $l_1: \B_2\leftrightarrow \T_1$ is a one-to-one correspondence.
We can similarly define $l_2, \B_1$ and $\T_2$, so that $l_2:\B_1 \leftrightarrow \T_2$ is a one-to-one correspondence.
Moreover, we can check that the incidence is preserved under the duality: given a ball $B_1\in\B_1$ and a tube $T_1=l_1(B_2)\in\T_1$, the pair $(B_1,T_1)$ forms an incidence in $\Pi_1$ if and only if the pair $(B_2,T_2)=(l_1^{-1}(T_1),l_2(B_1))$ forms an incidence in $\Pi_2$.
To get a better understanding of this tube-ball duality, see Figure \ref{dual}. Here, for each orange ball $B$ in $\Pi_2$, there is a corresponding orange tube $l_1(B)$ in $\Pi_1$. Similarly, for each blue ball $B'$ in $\Pi_1$, there is a corresponding blue tube $l_2(B')$ in $\Pi_2$. Also the incidence is preserved in the sense that the orange tube and the blue ball intersect if and only if the corresponding orange ball and blue tube intersect.
\begin{figure}
\centering
\includegraphics{output4.pdf}
\caption{Tube-ball duality.}
\label{dual}
\end{figure}
\subsection{Relations between the theorems}
We prove Theorem \ref{main} implies Theorem \ref{main2} and Theorem \ref{main3} in this subsection.
As mentioned in the beginning of this section, we can use this duality to turn from ball-tube incidence to tube-ball incidence.
For example, if we are given a set of $\de$-balls $\B$ and a set of $\de$-tubes $\T$, with $\T$ satisfying some spacing condition, then by duality this is equivalent to the problem for a set of $\de$-balls $\B'$ and $\de$-tubes $\T'$, with $\B'$ satisfying a similar spacing condition. In other words, the duality transfers the spacing condition from the tubes to the balls. This gives the heuristic that Theorem \ref{main2} (or Theorem \ref{main3}) can be reduced to Theorem \ref{main}.
However, there is still a flaw: the tubes $\T_i\ (i=1,2)$ we defined do not contain all the tubes in $\Pi_i\ (i=1,2)$. Indeed, the tubes in $\T_i$ are restricted to certain directions. However, we can find several rotations $\{\rho_k\}_{k\le 100}$ (with rotation center $(1/2,1/2)$) so that $\bigcup_k \rho_k(\T_i)$ is morally the set of all $\de$-tubes in $\Pi_i$. Since $\B_i$ consists of all the $\de$-balls in $\Pi_i$, it is morally unchanged under any rotation.
Let us see how this works. Suppose we are asked to estimate the incidence $I(\B,\T)$ with $\T$ satisfying some spacing condition. We have
$$ I(\B,\T)\lesim \sum_k I(\B,\T\cap \rho_k(\T_1))\sim \sum_k I(\rho^{-1}_k(\B),\rho_k^{-1}(\T)\cap \T_1)=\sum_k I(\B,\rho_k^{-1}(\T)\cap \T_1). $$
By the duality,
$$\sum_k I(\B,\rho_k^{-1}(\T)\cap \T_1)\sim \sum_{k} I(\B_k',\T'), $$
where $\T'=l_2(\B)$ and $\B'_k=l_1^{-1}(\rho_k^{-1}(\T)\cap \T_1)$. Now $\B_k'$ satisfies a spacing condition similar to that of $\T$. So it suffices to estimate the incidence under the assumption that the $\de$-balls satisfy some spacing condition.
To prove that Theorem \ref{main} implies Theorem \ref{main2} (or Theorem \ref{main3}), we only need to verify: if $\T$ is a set of tubes in $\T_1$ that satisfies the spacing condition in Theorem \ref{main2} (Theorem \ref{main3}), then the set of $\de$-balls $\B=\{l_1^{-1}(T):T\in \T\}$ satisfies the spacing condition in Theorem \ref{main}.
First, we suppose $\T$ is a set of tubes in $\T_1$ that satisfies the spacing condition in Theorem \ref{main2}. That is, any $W^{-1}$-tube contains at most $\frac{X}{W}$ many tubes of $\T$, and the directions of these tubes are $\frac{1}{X}$-separated. For any $W^{-1}$-ball $B_{W^{-1}}$ in $\Pi_2$ with center $(u_0,v_0)$, consider the $W^{-1}$-tube $T_{W^{-1}}$ with ``position'' $u_0$ and ``direction'' $v_0$ (see Remark \ref{rmk}), i.e. $T_{W^{-1}}$ is the $W^{-1}$-neighborhood of $v_0y=x-u_0$. We see that the map $l_1$ induces a correspondence between the $\de$-balls lying in $B_{W^{-1}}$ and the $\de$-tubes lying in $T_{W^{-1}}$. By the spacing condition, the tubes $T\in\T$ that lie in $T_{W^{-1}}$ are $\frac{1}{X}$-separated in direction, so the corresponding balls in $B_{W^{-1}}$ have $\frac{1}{X}$-separated $v$-coordinates. That means, if we partition $B_{W^{-1}}$ into about $\frac{X}{W}$ many $W^{-1}\times X^{-1}$-rectangles (the long side of the rectangles pointing in the direction of the $u$-axis), then each rectangle contains at most one $\de$-ball from $\B=\{l^{-1}_1(T): T\in\T\}$. Since $B_{W^{-1}}$ can be any $W^{-1}$-ball, we see that $\B$ satisfies the spacing condition in Theorem \ref{main}.
Similarly we could make the same argument as above for Theorem \ref{main3} by switching the role of $u$-coordinate and $v$-coordinate in $\Pi_2$.
First, we may assume the line $\ell$ in Theorem \ref{main3} is parallel to $x$-axis by rotation. Next, we may assume $\ell$ is $\{y=0\}$, otherwise we just consider the incidence estimate in the portion of $[0,1]^2$ above $\ell$ and the portion of $[0,1]^2$ below $\ell$ separately. If the $\ell$ in Theorem \ref{main3} is $\{y=0\}$, we can prove the following result:
Let $\T$ be a set of tubes in $\T_1$ that satisfies the spacing condition in Theorem \ref{main3}. Partition $\Pi_2=[0,1]^2$ into $X^{-1}\times W^{-1}$-rectangles (the long side of the rectangles now point to the direction of $v$-axis which is different from that in the last paragraph). Then, each rectangle contains at most one ball from $\B=\{l^{-1}_1(T): T\in\T\}$. Since the proof is similar, we omit the proof.
\section{Proof of Theorem \ref{main}}\label{proofsec}
In this section, we prove Theorem \ref{main}.
We will first prove two lemmas and then use them to finish the proof of Theorem \ref{main}.
\subsection{Two lemmas}
First, we will need the ``dual version'' of Proposition 2.1 from \cite{guth2019incidence}, which was inspired by ideas of Orponen \cite{orponen2018dimension} and Vinh \cite{vinh2011szemeredi}. We state the version for $n = 2$. The dual version just follows from the original one (Proposition 2.1 in \cite{guth2019incidence}) by the tube-ball duality discussed in Section \ref{dualitysec}, so we omit the proof.
\begin{prop}\label{twopointone}
Fix a tiny $\eps > 0$. There exists a constant $C(\eps)$ with the following property: Suppose that $\B$ is a set of unit balls in $[0, D]^2$ and $\T$ is a set of essentially distinct tubes of length $D$ and radius $1$ in $[0, D]^2$. Suppose that each tube of $\T$ contains about $E$ balls of $\B$. Let $S = D^{\eps/20}$. Then either:
\textbf{Thin case.} $|\T| \leapp S^2 E^{-2} |\B| D$, or
\textbf{Thick case.} There is a set of finitely overlapping $1 \times 2SD$-tubes $U_j$ (heavy tubes) such that:
\begin{enumerate}[(1)]
\item $\bigcup_j U_j$ contains a fraction $\gtrapprox 1$ of the tubes of $\T$;
\item Each $U_j$ contains $\geapp SE$ balls of $\B$.
\end{enumerate}
In particular, if we define $\wt\T$ to be the set of $\gtrapprox SE$-rich $1\times 2SD$-tubes, we have
\begin{equation}\label{tube}
|\T|\lessapprox S^2 (E^{-2}|\B|D+|\wt \T|).
\end{equation}
Here, $A\leapp B$ means $A\le C(\eps) D^{\eps^7}B$.
\end{prop}
We will need a slight generalization which is our first lemma:
\begin{lemma}\label{cor21}
Fix a tiny $\eps > 0$. There exists a constant $C_\eps$ (which will be distinguished from $C(\e)$ in Proposition \ref{twopointone}) with the following property: Let $\delta < 1$. Suppose that $\U$ is a set of $\delta \times 1$-rectangles in $[0, D]^2$. Let $S = D^{\eps/20}$, and define $T_r(\U)$ to be the set of $\delta \times D$-rectangles that contain at least $r$ rectangles from $\U$, $\wt T_{\tilde r}(\U)$ to be the set of $2S\delta \times D$-rectangles that contain at least $\td r$ rectangles from $\U$. Here we set $\tR$ to be a number $ \geapp Sr$. Then:
\begin{equation}\label{estimatetube}
|T_r(\U)| \leapp_\e S^2 (r^{-2} |\U| D + |\wt T_{\tilde r}(\U)|).
\end{equation}
\end{lemma}
Note that Proposition \ref{twopointone} corresponds to $\delta = 1$.
\begin{proof}
Consider about $\de^{-1}$ many $\de$-separated directions. For each direction, we tile $[0,D]^2$ with rectangles of dimensions $D\de\times D$ pointing in this direction. We call these rectangles cells. Denote these cells by $\{R_j\}_{j=1}^M$; one sees that the number of cells is $M\sim \de^{-2}$ and that these cells are essentially distinct.
Next, for each $\de\times 1$-rectangle $U\in\U$, we will attach it to a cell. We observe that there is a cell $R_j$ such that all $\de\times D$-rectangles that contain this $\de\times 1$-rectangle $U$ are essentially contained in $R_j$. We attach this $U$ to $R_j$. Now for each $j$, we let $\U_j$ be the rectangles in $\U$ that are attached to $R_j$. We have
$$ \sum_j |\U_j|=|\U|, $$
\begin{equation}\label{tube1}
|T_r(\U)|\sim \sum_j |T_r(\U_j)|.
\end{equation}
\begin{equation}\label{tube2}
|\wt T_{\tilde r}(\U)|\sim \sum_j |\wt T_{\tilde r}(\U_j)|.
\end{equation}
The reason for these two relations is that a $\de\times D$-rectangle (or a $2S\de\times D$-rectangle) cannot contain $\de\times 1$-rectangles from different $\U_j$.
For each $R_j$, we rescale so that $R_j$ becomes $[0,D]^2$. Then $\U_j$ becomes a set of unit squares and any $\de\times D$-tube in $R_j$ becomes a $1\times D$-tube. Applying Proposition \ref{twopointone}, we see from \eqref{tube} that
$$ |T_r(\U_j)|\lessapprox S^2 (r^{-2}|\U_j|D+|\wt T_{\tilde r}(\U_j)|). $$
Summing over $j$ and using \eqref{tube1} and \eqref{tube2}, we obtain \eqref{estimatetube}.
\end{proof}
Our second lemma concerns the case $X \sim \delta^{-1}$ in Theorem \ref{main}. It is actually the dual version of Corollary 5.5 in \cite{demeter2020small}. We state our lemma:
\begin{lemma}\label{highx}
Let $1 \le W \le \delta^{-1}$. Divide $[0, 1]^2$ into $W^{-1} \times \delta$ rectangles. Let $\B$ be a set of $\delta$-balls with at most one ball in each rectangle.
We denote $|\B_{\max}| = W\delta^{-1}$.
Then for $r > \max(\delta^{1-3\e} |\B_{\max}|, 1)$, the number of $r$-rich tubes is bounded by
\begin{equation*}
|T_r(\B)| \lesssim_\e \delta^{-\e} \frac{|\B| |\B_{\max}|}{Wr^2}.
\end{equation*}
\end{lemma}
\begin{remark}
Lemma \ref{highx} actually takes care of the case when $r > W$ by rescaling.
\end{remark}
To prove Lemma \ref{highx}, we need the following dual version of Theorem 5.4 from \cite{demeter2020small}.
\begin{prop}\label{thm5.4dual}
Let $1 \le W \le \delta^{-1}$. Tile $[0, 1]^2$ with $W^{-1} \times \delta$-rectangles. Let $\B$ be a set of $\delta$-balls with at most $N$ balls in each rectangle. Let $r\ge 1$ and $T_r(\B)$ be a set of essentially distinct $\de$-tubes, each of which contains at least $r$ balls in $\B$. Then there exist a scale $1 \le s \le \delta^{-1}$ and an integer $M_s$ such that
\begin{equation} \label{thm5.4-1}
|T_r(\B)| \lessapprox \frac{|\B| M_s \de^{-1}}{s r^2},
\end{equation}
\begin{equation} \label{thm5.4-2}
r \lessapprox \frac{M_s \de^{-1}}{s^2},
\end{equation}
\begin{equation}\label{thm5.4-3}
M_s \lessapprox Ns \max(1, s W \de).
\end{equation}
Here $A\lessapprox B$ means $A\le C_\e \de^{-\e}B$ for any $\e>0$.
\end{prop}
Let us quickly see how Proposition \ref{thm5.4dual} implies Lemma \ref{highx}.
\begin{proof}[Proof of Lemma \ref{highx}]
Apply Proposition \ref{thm5.4dual} with $N = 1$ to get a scale $s$ and an integer $M_s$. We claim that $sW\de \le 1$. If this is not true, then from \eqref{thm5.4-2} and \eqref{thm5.4-3}, we get
\begin{equation*}
\de^{-3\e}W\le r \leapp \frac{M_s\de^{-1}}{s^2}\lessapprox s^2 W\delta \cdot \frac{\delta^{-1}}{s^2} = W,
\end{equation*}
which is a contradiction. Hence, $sW\de \le 1$, and so $M_s \leapp s$ and
$$|T_r(\B)| \leapp \frac{|\B| \de^{-1}}{r^2} = \frac{|\B| |\B_{\max}|}{W r^2}$$
\end{proof}
Now, it suffices to prove Proposition \ref{thm5.4dual}.
\begin{proof}[Proof of Proposition \ref{thm5.4dual}]
It's convenient to explicitly write down \eqref{thm5.4-1}, \eqref{thm5.4-2} and \eqref{thm5.4-3} as
\begin{equation} \label{ineq1}
|T_r(\B)| \le C_\e \de^{-\e} \frac{|\B| M_s \de^{-1}}{s r^2},
\end{equation}
\begin{equation} \label{ineq2}
r \le C_\e \de^{-\e} \frac{M_s \de^{-1}}{s^2},
\end{equation}
\begin{equation}\label{ineq3}
M_s \le C_\e \de^{-\e} Ns \max(1, s W \de).
\end{equation}
We induct on $\delta$ and $r$. There are three base cases.
\begin{itemize}
\item $\delta \sim 1$,
\item $r = 10 \delta^{-1}$,
\item $NW \ge \delta^{-1+\eps/2}$.
\end{itemize}
The base case $\delta \sim 1$ holds by choosing the constant $C_\e$ large enough. The base case $r = 10 \delta^{-1}$ is taken care of by setting $s = 1$ and $M_s = 1$, noting that $|T_r(\B)| = 0$ since a $\delta$-tube contains at most $\sim\delta^{-1}$ many $\de$-balls. For the base case $NW \ge \delta^{-1+\eps/2}$, set $s = \delta^{-1}$ and $M_s = s^2$. Then $r^2 |T_r(\B)|$ is at most the number of triples $(B_1, B_2, T)\in \B\times\B\times T_r(\B)$ such that $B_1\cap T$ and $ B_2 \cap T$ are nonempty. For a given $B_1$, there are $\lesssim\delta^{-1}$ choices for $T$ (essentially distinct tubes through $B_1$) and then $\lesssim\delta^{-1}$ choices for $B_2$ (balls meeting $T$), hence $r^2 |T_r(\B)| \lesssim |\B| \delta^{-2}$, which is what we want.
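As a quick sanity check (not needed later), let us verify that in the last base case the choice $s=\de^{-1}$, $M_s=\de^{-2}$ is indeed admissible. For \eqref{ineq3}: since $sW\de=W\ge1$, we have $Ns\max(1,sW\de)=NW\de^{-1}\ge \de^{-2+\e/2}$, so $M_s=\de^{-2}\le \de^{-\e/2}\cdot NW\de^{-1}\le C_\e\de^{-\e}Ns\max(1,sW\de)$. For \eqref{ineq2}: $\frac{M_s\de^{-1}}{s^2}=\de^{-1}$, and we may restrict to $r\le 10\de^{-1}$ (cf. the second base case), so \eqref{ineq2} holds once $C_\e\ge 10$. Finally, \eqref{ineq1} with this choice reads $|T_r(\B)|\le C_\e\de^{-\e}\frac{|\B|\de^{-2}}{r^2}$, which follows from the triple-counting bound above for $C_\e$ large.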
Now suppose the statement is true for $\de \ge 2\tde $ or $r\ge 2\tR $. We prove it for $\de=\tde$, $r=\tR$. Let $\T\subset T_r(\B)$ be the subset of $\de$-tubes intersecting $\sim r$ balls of $\B$. If $|T_r(\B)|\ge 10 |\T|$, then $ |T_r(\B)|\le \frac{10}{9}|T_{2r}(\B)| $. Applying the induction hypothesis with $\de$ and $ 2r$, we find $s$ and $M_s$ such that
\begin{equation*}
|T_{2r}(\B)| \le C_\e \de^{-\e} \frac{|\B| M_s \de^{-1}}{s (2r)^2}\Longrightarrow |T_{r}(\B)| \le C_\e \de^{-\e} \frac{|\B| M_s \de^{-1}}{s r^2},
\end{equation*}
\begin{equation*}
2r \le C_\e \de^{-\e} \frac{M_s \de^{-1}}{s^2},
\end{equation*}
\begin{equation*}
M_s \le C_\e \de^{-\e} Ns \max(1, s W \de),
\end{equation*}
which verifies \eqref{ineq1}, \eqref{ineq2} and \eqref{ineq3}.
Hence we assume $|T_r(\B)|\le 10|\T|$.
We apply the rescaled version of Proposition \ref{twopointone} to $\B$ and $\T$. Note that $D=\de^{-1}$ and $S=\de^{-\e/20}$. There are two possible cases, which we discuss in turn.
If we are in the thin case, we pick $s = 1, M_s = 1$ and obtain
\begin{gather*}
|T_r(\B)|\le 10|\T| \le 10 C(\e)\de^{-\e^7}\de^{-\e/10} \frac{|\B| \delta^{-1}}{r^2} \le C_\e \de^{-\e} \frac{|\B| M_s \de^{-1}}{s r^2}\ (C_\e\textup{~large enough}), \\
r \le 10 \delta^{-1}\le C_\e \de^{-\e} \frac{M_s \de^{-1}}{s^2},
\end{gather*}
which verifies \eqref{ineq1} and \eqref{ineq2}. Also, \eqref{ineq3} is easily verified.
If we are in the thick case, we obtain a set $\tT$ of $\gtrapprox Sr$-rich $S\delta$-tubes that contain a fraction $\geapp 1$ of the tubes in $\T$ (here the two $``\gtrapprox"$ mean $``\ge C(\e)^{-1}\de^{\e^7}"$), which implies
\begin{equation}\label{e1}
|\T| \le C(\e)\de^{-\e^7} S^2 |\tilde \T|.
\end{equation}
Now we cover the balls in $\B$ using essentially distinct $S\delta$-balls denoted by $\tilde\B$. There is a partition
$$ \tilde\B=\bigsqcup_{M\textup{~dyadic}} \tilde \B_M, $$
where $\tilde\B_M$ are those $S\delta$-balls that contain $\sim M$ balls in $\B$.
We know each $\tilde T\in\tilde \T$ contains $\ge C(\e)^{-1}\de^{\e^7} Sr$ balls of $\B$. By dyadic pigeonholing, for each $\tilde T$ there exists a dyadic $M$ such that $\tilde T$ contains $\ge C'(\e)^{-1}\de^{2\e^7} Sr$ balls of $\B$ that lie in balls of $\tilde\B_M$. By a further dyadic pigeonholing, there exists a dyadic $M$ such that a $C'(\e)^{-1} \de^{\e^7}$-fraction of the tubes in $\tilde \T$ satisfy: each of them contains $\ge C'(\e)^{-1}\de^{2\e^7} Sr$ balls of $\B$ that lie in balls of $\tilde\B_M$. Now we fix this $M$, and still denote the corresponding subset of $S\de$-tubes by $\tilde \T$. From \eqref{e1}, we have
\begin{equation}\label{e2}
|\T| \le C''(\e)\de^{-2\e^7} S^2 |\tilde \T|.
\end{equation}
A tube in $\tT$ contains more than $\tR = M^{-1} C'(\e)^{-1}\de^{2\e^7} Sr$ balls of $\tilde\B_M$. Furthermore, a $W^{-1} \times S\delta$-rectangle now contains at most $\tN = M^{-1} NS$ many $S\delta$-balls of $\tilde\B_M$, since each $W^{-1} \times S\delta$-rectangle contains $S$ many $W^{-1} \times \delta$-rectangles.
Since we are not in the base cases, we assume $NW\le \de^{-1+\e/2}$ which implies $W\le (S\de)^{-1}$ (recall $S=\de^{-\e/20}$).
We can apply the induction hypothesis to
\begin{equation}\label{induction}
\tR = M^{-1} C'(\e)^{-1}\de^{2\e^7} Sr,\ \tW = W,\ \tde = S\delta,\ \tN = M^{-1} NS
\end{equation}
and the set of $\tilde \de$-balls $\tilde\B_M$.
Thus, there exist $1 \le \tilde s \le (S\delta)^{-1}$ and $\td M_s$ such that
\begin{gather}
|\tilde \T|\le |T_{\tilde r}(\tilde \B_M)| \le
C_\e \td\de^{-\e} \frac{|\tilde\B_M| \tilde M_s \td\de^{-1}}{\tilde s {\tilde r}^2} \\
\label{e3}\tR \le
C_\e \td\de^{-\e} \frac{\tM_s\tde^{-1}}{\ts^2} , \\
\label{e4}\tM_s \le
C_\e \td\de^{-\e} \td N\ts \max(1, \ts W\tde).
\end{gather}
Now set $s = S \ts$ and $M_s = M \tM_s$. Combined with \eqref{e2}, we get
\begin{align*}
|T_r(\B)|\le 10|\T|&\le C''(\e)\de^{-2\e^7}S^2|\td\T| \\
&\le C''(\e)\de^{-2\e^7}S^2 C_\e \td\de^{-\e} \frac{|\tilde\B_M| \tilde M_s \td\de^{-1}}{\tilde s {\tilde r}^2}\\
& = \big( C''(\e)C'(\e)^2\de^{-6\e^7}S^{-\e} \big)C_\e \de^{-\e}\frac{M|\td\B_M| M_s \de^{-1}}{s r^2}.
\end{align*}
Recall $S=\de^{-\e/20}$, so when $\de$ is small enough, $\big( C''(\e)C'(\e)^2\de^{-6\e^7}S^{-\e} \big)\le \frac{1}{10}$. Also, from the definition of $\td\B_M$, we have $M|\td\B_M|\le 2|\B|. $ Thus we have
$$ |T_r(\B)|\le C_\e \de^{-\e}\frac{|\B| M_s \de^{-1}}{s r^2}, $$
which closes the induction for \eqref{ineq1}.
From \eqref{induction} and \eqref{e3}, we have
\begin{align*}
r &= C'(\e)\de^{-2\e^7} MS^{-1} \tR \\
&\le C'(\e)\de^{-2\e^7} MS^{-1} C_\e \td\de^{-\e} \frac{\tM_s\tde^{-1}}{\ts^2}\\
&= \big( C'(\e)\de^{-2\e^7}S^{-\e} \big)C_\e\de^{-\e}\frac{M_s\de^{-1}}{s^2}\\
&\le C_\e\de^{-\e}\frac{M_s\de^{-1}}{s^2}
\end{align*}
when $\de$ is small enough. This closes the induction for \eqref{ineq2}.
From \eqref{e4}, we have
\begin{align*}
M_s=M\td M_s &\le M C_\e \td\de^{-\e} \td N\ts \max(1, \ts W\tde)\\
&= C_\e (S\de)^{-\e} Ns \max(1, s W \de) \\
&\le C_\e \de^{-\e} Ns \max(1, s W \de),
\end{align*}
which closes the induction for \eqref{ineq3}.
This completes the inductive step and thus finishes the proof.
\end{proof}
\begin{remark}
At the beginning of this subsection, we said that Proposition \ref{twopointone} follows from Proposition 2.1 in \cite{guth2019incidence} by the tube-ball duality. However, it is not true that Proposition \ref{thm5.4dual} follows from Theorem 5.4 in \cite{demeter2020small}. Actually, our Proposition \ref{thm5.4dual} is stronger than Theorem 5.4 in \cite{demeter2020small}.
Let us discuss what dual version Theorem 5.4 in \cite{demeter2020small} implies. Recall that Theorem 5.4 roughly says the following: given some well-spaced tubes $\T$ in $\Pi_1=[0,1]^2$, we have an estimate for the $r$-rich balls $B_r(\T)$. Using the tube-ball duality, we see $\T$ corresponds to a set of balls $\B$ (in $\Pi_2$) that exactly satisfies the spacing condition in Proposition \ref{thm5.4dual}. Also, we have a correspondence between the set of $\de$-balls in $\Pi_1$ and the tubes $\T_2$ in $\Pi_2$ (here $\T_2$ is defined in Section \ref{tubeball}). However, $\T_2$ does not consist of all the tubes lying in $\Pi_2$, but just those tubes that form an angle $\le 45^\circ$ with the $v$-axis in $\Pi_2$.
As a result, Theorem 5.4 in \cite{demeter2020small} only implies the estimate
$$ |T_r(\B)\cap \T_2|\lessapprox \frac{|\B|M_s\de^{-1}}{s r^2} $$
instead of \eqref{thm5.4-1}.
\end{remark}
\subsection{Proof of Theorem \ref{main}}
In this subsection, we prove Theorem \ref{main}. Let us recall Theorem \ref{main} here.
\begin{theorem}\label{mainn}
Let $1 \le W \le X \le \delta^{-1}$. Divide $[0, 1]^2$ into $W^{-1} \times X^{-1}$ rectangles as in Figure \ref{fig:generalcase}. Let $\B$ be a set of $\delta$-balls with at most one ball in each rectangle.
We denote $|\B_{\max}|:=WX$ (as one can see that $\B$ contains at most $\sim WX$ balls).
Then for $r > \max(\delta^{1-2\e} |\B_{\max}|, 1)$, the number of $r$-rich tubes is bounded by
\begin{equation}\label{est}
|T_r (\mathbb{B})| \le C_\e \delta^{-\e} |\B| |\B_{\max}| \cdot r^{-2} (r^{-1} + W^{-1}).
\end{equation}
\end{theorem}
The proof follows the same idea as that of Theorem 4.1 in \cite{guth2019incidence}. There are three base cases.
\begin{itemize}
\item $r \ge 10 \delta^{-1}$
\item $X \ge \delta^{-1+\eps/2}$
\item $r \lesim \delta^{-\eps/4}$ or $W \lesim \delta^{-\eps/4}$
\end{itemize}
In the first base case $r \ge 10\delta^{-1}$, the set $T_r (\B)$ is empty. The second base case $X \ge \delta^{-1+\eps/2}$ is dealt with by Lemma \ref{highx}. For the third base case $r \lesim \delta^{-\eps/4}$ or $W \lesim \delta^{-\eps/4}$, we use a double counting argument similar to those in \cite{cordoba1977kakeya} and \cite{ren2020incidence}. We count the number of triples $(B_1,B_2,T)\in \B\times\B\times T_r(\B)$ such that $B_1\cap T$ and $B_2\cap T$ are nonempty. Fix a ball $B_1\in \B$. For any dyadic radius $w$ $(X^{-1}\le w\le 1) $, consider those balls $B_2$ that are at distance $\sim w$ from $B_1$. The number of such $B_2$ is $\lesim wX(1 + wW)$. Also note that for two balls $B_1, B_2$ at distance $\sim w$, there are $\lesim \frac{1}{w}$ many tubes $T$ intersecting both of them. Thus, the number of triples is
$$\lesim |\B|\sum_{w\textup{~dyadic}}wX(1+wW)\frac{1}{w}\lessapprox |\B| W X.$$
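For the reader's convenience, here is the evaluation of the dyadic sum (there are $\lesim \log\de^{-1}$ dyadic scales $w$ between $X^{-1}$ and $1$):
$$ \sum_{w\textup{~dyadic}}wX(1+wW)\frac{1}{w}=\sum_{w\textup{~dyadic}}\big(X+wXW\big)\lesim X\log\de^{-1}+XW\lessapprox XW, $$
the logarithmic factor being absorbed into $``\lessapprox"$.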
Also, the number of triples has a lower bound $r^2 |T_r(\B)|$. Combining these two bounds, we get
\begin{equation*}
|T_r(\B)| \lessapprox \frac{|\B| WX}{r^2} = \frac{|\B| |\B_{\max}|}{r^2}\lessapprox \frac{|\B| |\B_{\max}|}{r^2}(r^{-1}+W^{-1}).
\end{equation*}
This gives the estimate \eqref{est}.
For the inductive step, assuming that the theorem holds for $r\ge 2\tilde r$ or $\de \ge 2\tilde \delta $, we prove the theorem for $r=\td r, \de=\td\de$. In the rest of the proof, $``\leapp"$ and $``\lessapprox"$ will both mean $``\le C(\e)\delta^{-O(\eps^7)}"$.
From the base case, we have $W\gtrsim \de^{-\e/4}$. We define $D = \delta^{-\eps^4}$, then $1 \le D \le W$.
Cover the unit square with finitely overlapping $D^{-1}$-squares $\cQ=\{Q\}$. Let $\T$ be the set of $\sim r$-rich tubes of $T_r(\B)$, and by induction we may assume \begin{equation}\label{f-1}
|T_r(\B)|\le 10 |\T|,
\end{equation}
as we did in the proof of Proposition \ref{thm5.4dual}.
A tube $T\in\T$ intersects $Q\in\cQ$ in a tube segment $U_D$ of dimensions $\delta \times D^{-1}$. Note that one $U_D$ can lie in many tubes $T \in \T$.
For dyadic $1 \le M \le \delta^{-1}$, let $\U_M$ be the set of essentially distinct tube segments $U_D$ which essentially contain $\sim M$ balls of $\B$. Since we have \begin{equation}\label{f0}
\sum_{M \text{ dyadic}} MI(\U_M, \T) \sim I(\B, \T),
\end{equation}
we may choose a dyadic $M$ satisfying
\begin{equation}\label{f1}
M I(\U_M, \T) \geapp I(\B, \T).
\end{equation}
Next, let $\T_E$ be the set of tubes that contain $\sim E$ tube segments $U_D\in\U_M$. Since $\sum_{E \text{ dyadic}} I(\U_M, \T_E) \geapp I(\U_M, \T)$, we may choose $E$ satisfying
\begin{equation}\label{f2}
MI(\U_M, \T_E) \geapp I(\B, \T).
\end{equation}
Note that $I(\B,\T_E)\gtrsim MI(\U_M,\T_E)$; together with \eqref{f2}, this gives $I(\B,\T_E)\geapp I(\B, \T)$. Since each tube in $\T$ contains $\sim r$ balls of $\B$ by definition, we get
\begin{equation}\label{f3}
|\T_E| \geapp |\T|.
\end{equation}
Since each $T\in\T_E$ contains $\sim E$ tube segments $U_D\in\U_M$ and each $U_D\in \U_M$ contains $\sim M$ balls in $\B$, we see that each $T\in\T_E$ contains $\gtrsim ME$ balls in $\B$. On the other hand, every tube in $\T$ contains $\sim r$ balls in $\B$ by definition. So we have
\begin{equation}\label{f4}
r\gtrsim ME.
\end{equation}
Also note that from \eqref{f2}, we have
\begin{equation*}
r |\T| \sim I(\B, \T) \leapp M I(\U_M, \T_E) \leapp ME |\T_E|\le ME |\T|,
\end{equation*}
which implies
\begin{equation}\label{f5}
r\lessapprox ME.
\end{equation}
Now we apply a rescaled version of Lemma \ref{cor21} with $\U=\U_M$ and $r=E$ to get
\begin{equation}
|\T_E| \lessapprox S^2 (E^{-2} |\U_M| D + |\wt T_{\td r}(\U_M)|):=\uppercase\expandafter{\romannumeral1}+
\uppercase\expandafter{\romannumeral2}.
\end{equation}
Here, $S=D^{\e/20}$, $\td r\gtrapprox SE$. $\wt T_{\td r}(\U_M)$ is the set of $2 S\de\times 1$-tubes that contain at least $\td r$ rectangles from $\U_M$.
We would like to rewrite the second term a little bit. Note that by definition each $U_D\in\U_M$ contains $\sim M$ balls in $\B$, so $\wt T_{\td r}(\U_M)\subset \wt T_{\td r M}(\B)$. Here $\wt T_{\td r M}(\B)$ is the set of $2 S\de\times 1$-tubes that contain at least $\td r M$ balls from $\B$. We have
\begin{equation}\label{mainest}
|\T_E| \lessapprox S^2 (E^{-2} |\U_M| D + |\wt T_{r_1 }(\B)|):=\uppercase\expandafter{\romannumeral1}+
\uppercase\expandafter{\romannumeral2},
\end{equation}
where $r_1=\td r M\gtrapprox SEM\gtrapprox Sr$.
\subsubsection{Estimate of \uppercase\expandafter{\romannumeral1}}
Fix a $D^{-1}$-square $Q\in\cQ$ from our finitely overlapping covering. Consider the set of tube segments $\U_Q:=\U_M \cap Q$ and the set of balls $\B_Q:=\B \cap Q$. If we rescale $Q$ to $[0,1]^2$, then $\U_Q$ becomes a set of $D\de$-tubes and $\B_Q$ becomes a set of $D\de $-balls. Meanwhile, $\B_Q$ satisfies the spacing condition in Theorem \ref{mainn}.
We apply the induction hypothesis of Theorem \ref{mainn} to $\B_Q$ with
\begin{enumerate}
\item $\de' = D\delta$,
\item $r' = M$,
\item $W' = W/D$, $X' = X/D$.
\end{enumerate}
In order to apply the induction hypothesis, we need to check $r'=M >\max(\de'^{1-2\e}W'X',1)$. Actually, by \eqref{f5} and noting $E\le D, r>\delta^{1-2\e} WX, D=\de^{-\e^4}$, we have
\begin{equation*}
M \ge C(\e)^{-1}\de^{\e^7} E^{-1} r \ge C(\e)^{-1}\de^{\e^7} D^{-1} \delta^{1-2\e} WX \ge \de'^{1-2\e} W' X'.
\end{equation*}
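For completeness, the last inequality in the display holds for $\de$ small enough: since $\de'^{1-2\e}W'X'=(D\de)^{1-2\e}\cdot\frac{WX}{D^2}=D^{-1-2\e}\de^{1-2\e}WX$, it suffices to have $C(\e)^{-1}\de^{\e^7}\ge D^{-2\e}=\de^{2\e^5}$, which is true once $\de$ is small because $\e^7<2\e^5$.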
To check $M \ge 2$: since we are not in the third base case, $r \gtrsim \delta^{-\eps/4}$, while $E\le D = \delta^{-\eps^4}$; hence \eqref{f5} implies $M\ge 2$ for $\de$ small enough.
Now we can apply induction. From \eqref{est}, we obtain:
$$|\U_Q|\le C_\e (D\de)^{-\e} |\B_Q|D^{-2}W X\cdot M^{-2}(M^{-1}+D W^{-1}).$$
Summing over $Q\in\cQ$ we get
\begin{align*}
\uppercase\expandafter{\romannumeral1}=S^2 E^{-2} D |\U_M|&=S^2 E^{-2} D \sum_{Q}|\U_Q|\\
&\le S^2 E^{-2} D \cdot \sum_Q C_\e (D\de)^{-\e} |\B_Q|D^{-2}W X\cdot M^{-2}(M^{-1}+D W^{-1})\\
&=S^2 D^{-\e} C_\e \de^{-\e} \sum_Q |\B_Q| WX (ME)^{-2} ((MD)^{-1}+W^{-1})
\end{align*}
Since $S=D^{\e/20}$, $\sum_Q|\B_Q|=|\B|$, $ME\gtrapprox r$, $E\le D$, we have
\begin{equation}\label{term1}
\uppercase\expandafter{\romannumeral1}\lessapprox D^{-\e/2} C_\e \de^{-\e} |\B| WX r^{-2} (r^{-1}+W^{-1}).
\end{equation}
Recall that $``\lessapprox"$ means $\le C(\e) \de^{-O(\e^7)}$.
\subsubsection{Estimate of \uppercase\expandafter{\romannumeral2}}
For the second term in \eqref{mainest}, we have a collection of $2S\delta$-tubes $\wt T_{r_1}(\B)$, each of which intersects at least $r_1 \geapp Sr$ balls of $\B$. Let $\wt \B$ be the set of balls formed by thickening each $\delta$-ball of $\B$ to an $S\delta$-ball. From the base case, we have $X\le (S\de)^{-1}$, so $\wt \B$ is a set of essentially $S\de$-separated balls satisfying the spacing condition in Theorem \ref{mainn}.
Apply the induction hypothesis to $\wt \B$ with $\delta' = S\delta$, $r'=r_1\gtrapprox Sr$, $W' = W$, $X' = X$ (it is easy to check $r'> \max(\de'^{1-2\e}W' X',1)$).
We obtain from \eqref{est}
$$ |\wt T_{r_1}(\wt\B)|\le C_\e (S\de)^{-\e} |\B|WX\cdot r_1^{-2}(r_1^{-1}+W^{-1}). $$
So, we have
\begin{align}
\nonumber\uppercase\expandafter{\romannumeral2}=S^2 |\wt T_{r_1}(\B)| &\le S^2 C_\e (S\de)^{-\e} |\B|WX\cdot r_1^{-2}(r_1^{-1}+W^{-1})\\
\nonumber&\lessapprox S^2 C_\e (S\de)^{-\e} |\B|WX\cdot (Sr)^{-2} ((Sr)^{-1} + W^{-1}) \\
\label{term2}&\le S^{-\e}C_\e \de^{-\e} |\B|WX\cdot r^{-2} (r^{-1} + W^{-1})
\end{align}
Combining \eqref{f-1}, \eqref{f3}, \eqref{term1} and \eqref{term2}, we have
$$ |T_r(\B)|\lessapprox |\T_E|\lessapprox \de^{\e^6/20}C_\e \de^{-\e} |\B|WX\cdot r^{-2} (r^{-1} + W^{-1}). $$
Recall $``\lessapprox"$ means $\le C(\e) \de^{-O(\e^7)}$. We see that if $\de$ is small enough, this closes the induction for \eqref{est}.
\section{Proof of Theorem \ref{thmfur}} \label{fursec}
In this section we prove Theorem \ref{thmfur} which we recall here.
\begin{theorem}\label{fur}
Let $1 \le W \le X \le \delta^{-1}$. Let $\T$ be a collection of essentially distinct $\delta$-tubes in $[0,1]^2$ that satisfies the following spacing condition: every $W^{-1}$-tube contains at most $\frac{X}{W}$ many tubes of $\T$, and the directions of these tubes are $\frac{1}{X}$-separated. We also assume $|\T|\sim XW$.
Let $\B=\{B_\de\}$ be a set of $\de$-balls and for each $T\in\T$ define $Y(T):=\{ B_{\de}\in \B: B_{\de}\cap T\neq \emptyset \}$. Suppose each $Y(T)$ has a subset $Y'(T)$ satisfying: $\#Y'(T)\sim \de^{-\alpha}$ and each pair of nearby balls in $Y'(T)$ has distance $\gtrsim \de^{\alpha}$. Then we have the estimate
\begin{equation}
|\B|\gtrsim (\log\de^{-1})^{-3.5}\min(\de^{-\alpha-1},\de^{-\frac{3}{2}\alpha}(XW)^{\frac{1}{2}},\de^{-\alpha}XW).
\end{equation}
\end{theorem}
\begin{remark}
As we discussed in Section \ref{dualitysec}, it's more intuitive to view the tubes $\T$ in the theorem through their corresponding dual balls in the dual space. Actually, we see that the dual balls of $\T$ have the configuration as in Figure \ref{fig:generalcase}.
\end{remark}
\subsection{An attempt using incidence estimates}
It seems we can use Theorem \ref{main2} to study Theorem \ref{fur}. We discuss this approach here and will see where it falls short.
Since the spacing condition for $\T$ is the same in Theorem \ref{main2} and Theorem \ref{fur}, we can use the incidence estimate in Theorem \ref{main2}. We denote by $I(\B,\T)$ the number of incidences between $\T$ and $\B$. Since each tube intersects $\gtrsim \de^{-\alpha}$ balls in $\B$, we immediately get the lower bound for the incidence:
$$ I(\B,\T)\gtrsim \de^{-\alpha}|\T|. $$
For the upper bound, since $I(\B,\T)\lesim \sum_{r\textup{~dyadic}}r |B_r(\T)|$, there exists a dyadic $r$ such that
$$ I(\B,\T)\lessapprox r|B_r(\T)| $$
(In this subsection $``\lessapprox"$ means $``\le C_\e \de^{-\e}"$ for any $\e>0$).
So, we have
\begin{equation}\label{fur1}
\de^{-\alpha}|\T|\lessapprox r|B_r(\T)|.
\end{equation}
\begin{figure}
\centering
\begin{tikzpicture}
\draw[step=0.5cm, black, thin] (0,0) grid (4,4);
\fill[gray!50] (1.02,0.02) rectangle (1.48,0.48);
\fill[gray!50] (1.02,0.52) rectangle (1.48,0.98);
\fill[gray!50] (1.02,1.02) rectangle (1.48,1.48);
\fill[gray!50] (1.52,1.52) rectangle (1.98,1.98);
\fill[gray!50] (1.52,2.02) rectangle (1.98,2.48);
\fill[gray!50] (1.52,2.52) rectangle (1.98,2.98);
\fill[gray!50] (2.02,3.02) rectangle (2.48,3.48);
\fill[gray!50] (2.02,3.52) rectangle (2.48,3.98);
\draw[black, thick] (1.2,0) -- (2.3,4);
\end{tikzpicture}
\caption{Pseudo-tube}
\label{sdtube}
\end{figure}
Our estimates will be based on \eqref{fur1}. When $r\lessapprox \max(\de |\T|,1)$, we have $\de^{-\alpha}|\T|\lessapprox r|B_r(\T)|\lessapprox \max(\de|\T|,1) |\B|$, which implies that
\begin{equation}\label{fur2}
|\B|\gtrapprox \min(\de^{-\alpha-1},\de^{-\alpha}XW).
\end{equation}
When $\max(\de|\T|,1)\lesim r\le W$, by Theorem \ref{main2} we have
$|B_r(\T)|\lessapprox |\T|^2 r^{-3}.$
Combined with the trivial bound $|B_r(\T)|\le |\B|$, we get $\de^{-\alpha}|\T|\lessapprox r \min(|\T|^2r^{-3},|\B|)\le |\T|^{2/3}|\B|^{2/3}$, and hence
\begin{equation}\label{fur3}
|\B|\gtrapprox \de^{-\frac{3}{2}\alpha}|\T|^{1/2}\gtrsim \de^{-\frac{3}{2}\alpha}(XW)^{1/2}.
\end{equation}
When $r\ge W$, by Theorem \ref{main2} we have
$|B_r(\T)|\lessapprox |\T|^2 r^{-2}W^{-1}.$
Combined with the trivial bound $|B_r(\T)|\le |\B|$, we get $\de^{-\alpha}|\T|\lessapprox r \min(|\T|^2r^{-2}W^{-1},|\B|)\le |\T||\B|^{1/2}W^{-1/2}$, and hence
\begin{equation}\label{fur4}
|\B|\gtrapprox \de^{-2\alpha}W.
\end{equation}
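For the reader's convenience, the two ``$\min$'' steps above follow from the elementary inequalities $\min(P,Q)\le P^{1/3}Q^{2/3}$ and $\min(P,Q)\le P^{1/2}Q^{1/2}$ (valid for all $P,Q>0$):
\begin{gather*}
r\min(|\T|^2r^{-3},|\B|)\le r(|\T|^2r^{-3})^{1/3}|\B|^{2/3}=|\T|^{2/3}|\B|^{2/3},\\
r\min(|\T|^2r^{-2}W^{-1},|\B|)\le r(|\T|^2r^{-2}W^{-1})^{1/2}|\B|^{1/2}=|\T||\B|^{1/2}W^{-1/2}.
\end{gather*}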
Combining \eqref{fur2}, \eqref{fur3} and \eqref{fur4}, we obtain
\begin{equation}
|\B|\gtrapprox \min (\de^{-\alpha-1},\de^{-\frac{3}{2}\alpha}(XW)^{1/2}, \de^{-\alpha}XW, \de^{-2\alpha}W).
\end{equation}
In the above inequality, we see that the fourth term $\de^{-2\alpha}W$ is not good in the case when $W=1$ and $X=\de^{-1}$. This is our main enemy, which is exactly the case of Question \ref{question}.
\subsection{The proof of Theorem \ref{fur}}
First we discuss our main tool: the crossing number. For a graph $G$, the crossing number $cr(G)$ of $G$ is the minimal number of edge crossings over all drawings of $G$ in the plane. We have the following well-known result for crossing numbers. A detailed discussion can be found in \cite{guth2016polynomial}.
\begin{lemma}[Crossing number]\label{crossing}
For a graph $G$ with $n$ vertices and $e$ edges, we have
\begin{equation}
n\gtrsim \min (e,\frac{e^{3/2}}{cr(G)^{1/2}}).
\end{equation}
\end{lemma}
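As a quick illustration of Lemma \ref{crossing} (only meant as a sanity check), consider the complete graph $K_n$: it has $e\sim n^2$ edges and crossing number $cr(K_n)\lesim n^4$ (every crossing is determined by $4$ vertices), so the right-hand side is $\min(e,\frac{e^{3/2}}{cr(K_n)^{1/2}})\gtrsim \min(n^2,\frac{n^3}{n^2})=n$, which is sharp up to constants.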
To prove Theorem \ref{fur}, we perform several reductions. First, we can assume all the $\de$-tubes from $\T$ lie in $[0,1]^2$ and each tube forms an angle $\le \frac{1}{10}$ with the $y$-axis. We also assume $\de^{-1}$ is an integer, so we can partition $[0,1]^2$ into lattice $\de$-squares, denoted by $[0,1]^2=\sqcup Q$. Here, each $Q$ is a square with side length $\de$ and center $(\de(n-\frac{1}{2}),\de (m-\frac{1}{2}))$ for some $1\le m,n\le \de^{-1}$. We denote the set of these squares by $\cQ_\de$. Without loss of generality, we can assume $\B\subset \cQ_\de$.
To make the proof clear, we need a substitute for tubes, which we call \textit{pseudo-tubes}. Given a $1\times \de$ tube $T$ which forms an angle $\le \frac{1}{10}$ with the $y$-axis and lies in $[0,1]^2$, we define its corresponding pseudo-tube $\td T$ as in Figure \ref{sdtube}.
Denote the core line of $T$ by $l$. The squares in $\cQ_\de$ form $\de^{-1}$ many rows, and $l$ intersects each row in at most two squares. For each row, if $l$ intersects the row in one square, we pick this square; if $l$ intersects the row in two squares, we pick the left one.
We define $\td T$ to be the union of these $\de^{-1}$ many squares we just picked. We call $\td T$ the corresponding pseudo-tube of $T$.
It's not hard to check that we can make the reduction so that the $\T$ in Theorem \ref{fur} is a set of pseudo-tubes and $Y(T)$ is a set of $\de$-squares contained in $\td T$. Without ambiguity, we still call a pseudo-tube a tube and write $T$ instead of $\td T$.
\medskip
The next reduction is to guarantee some uniformity among the tubes. We label the squares in $Y(T)$ one by one from bottom to top as $Y(T)=\{Q_1,Q_2,\cdots, Q_m\}$. Here $m=\#Y(T)$ and the $y$-coordinate of $Q_i$ is less than that of $Q_{i+1}$. We define the distance between consecutive squares as $d_i:=\dist(Q_i,Q_{i+1})$, and define the $d$-index set as $I_d:=\{i: d_i\sim d, 1\le i\le m-1\}$.
We claim that there exists a number $d\lesim \de^{\alpha}$
such that
\begin{equation}\label{ibig}
|I_d|\gtrsim (\log\de^{-1})^{-1}d^{-1}.
\end{equation}
Recalling the condition of Theorem \ref{fur}, we have that each $Y(T)$ has a subset $Y'(T)$ satisfying: $\#Y'(T)\sim \de^{-\alpha}$ and each pair of nearby squares in $Y'(T)$ have distance $\gtrsim \de^{\alpha}$. From this, we see that
\begin{equation}
\sum_{d_i\lesim \de^{\alpha}}d_i\gtrsim 1.
\end{equation}
So, by pigeonhole principle we can find $d\lesim\de^{\alpha}$ such that
$$ 1\lesim \log(\de^{-1})\sum_{d_i\sim d}d_i\sim \log(\de^{-1}) d|I_d|, $$
which gives \eqref{ibig}.
For each $T\in\T$, there exists a $d_T\lesim \de^{\alpha}$ such that \eqref{ibig} holds for $d=d_T$. By dyadic pigeonholing, we choose a typical $d$ so that there is a set $\T'\subset \T$ such that $|\T'|\gtrsim (\log\de^{-1})^{-1}|\T|$ and $d_T=d$ for any $T\in\T'$. We denote $d=\de^\beta$, $\B'=\cup_{T\in\T'}Y(T)$. Since $\B'\subset \B$ and $\alpha\le \beta$, we only need to prove:
\begin{equation}
|\B'|\gtrsim (\log\de^{-1})^{-3.5}\min(\de^{-\beta-1},\de^{-\frac{3}{2}\beta}(XW)^{\frac{1}{2}},\de^{-\beta}XW).
\end{equation}
Abusing notation, we still write $\beta, \T',\B'$ as $\alpha,\T,\B$; this reduces Theorem \ref{fur} to the following problem.
\begin{theorem}\label{last}
Let $1 \le W \le X \le \delta^{-1}$. Let $\T$ be a collection of essentially distinct $\delta$-pseudo-tubes in $[0,1]^2$ that satisfies the following spacing condition: every $W^{-1}$-tube contains at most $\frac{X}{W}$ many tubes of $\T$, and the directions of these tubes are $\frac{1}{X}$-separated. We also assume $|\T|\gtrsim (\log\de^{-1})^{-1} XW$.
Let $\B=\{B_\de\}\subset \cQ_\de$ be a set of $\de$-squares and for each $T\in\T$ define $Y(T):=\{ B_{\de}\in \B: B_{\de}\subset T\}$. Suppose each $Y(T)$ satisfies \eqref{ibig} for $d=\de^\alpha$. Then we have the estimate
\begin{equation}
|\B|\gtrsim (\log\de^{-1})^{-3.5}\min(\de^{-\alpha-1},\de^{-\frac{3}{2}\alpha}(XW)^{\frac{1}{2}},\de^{-\alpha}XW).
\end{equation}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{last}]
We construct a graph $G=(V,E)$ in the following way. Let the vertex set $V$ be the set of centers of the squares in $\B$. For each $T\in\T$, consider all the pairs of consecutive squares in $Y(T)$ and link the centers of each such pair with an edge. Let $E$ be the set of edges formed in this way over all $T\in\T$.
Each pair of tubes contributes at most one crossing (though they may share many edges), so we have
\begin{equation}
cr(G)\le |\T|^2.
\end{equation}
On the other hand by Lemma \ref{crossing} we have
\begin{equation}
|\B|\gtrsim \min (|E|,\frac{|E|^{3/2}}{cr(G)^{1/2}}).
\end{equation}
So, we have
\begin{equation}
|\B|\gtrsim \min (|E|,\frac{|E|^{3/2}}{|\T|}).
\end{equation}
We will discuss two cases. We remind the readers that $(\log\de^{-1})^{-1}XW\lesim |\T|\lesim XW$.
\textit{Case 1}: $XW\lesim \de^{-2+\alpha}$.
We prove that $|E|\gtrsim (\log\de^{-1})^{-2}\de^{-\alpha}|\T|$, so as a result we obtain
\begin{equation}\label{last1}
|\B|\gtrsim (\log\de^{-1})^{-3.5} \min(\de^{-\alpha} XW,\de^{-\frac{3}{2}\alpha}(XW)^{1/2}).
\end{equation}
For each edge $e\in E$, define $n_e$ to be the number of tubes $T\in\T$ that contain $e$. We have
$$|E|=\sum_{e\in E}\ \sum_{T\in\T, e\subset T}\frac{1}{n_e}=\sum_{T\in \T}\ \sum_{e\in E, e\subset T}\frac{1}{n_e}.$$
It suffices to show for any fixed $T_0$,
$$ \sum_{e\in E, e\subset T_0}\frac{1}{n_e}\gtrsim (\log\de^{-1})^{-2}\de^{-\alpha}. $$
Recalling the condition for $Y(T)$ in Theorem \ref{last} and \eqref{ibig}, we have
$$ \#\{ e\subset T_0: \textup{length}(e)\sim \de^{\alpha} \} \gtrsim (\log\de^{-1})^{-1}\de^{-\alpha}. $$
So by the Cauchy--Schwarz inequality, we have
$$ \sum_{e\in E, e\subset T_0}\frac{1}{n_e}\ge \sum_{e\in E, e\subset T_0, \textup{length}(e)\sim\de^{\alpha}}\frac{1}{n_e}\ge \frac{(\log\de^{-1})^{-2}\de^{-2\alpha}}{\sum_{e\in E, e\subset T_0,\textup{length}(e)\sim\de^{\alpha}}n_e}.$$
It suffices to prove
\begin{equation}\label{lastt2}
\sum_{e\in E, e\subset T_0,\textup{length}(e)\sim\de^{\alpha}}n_e\lesim \de^{-\alpha}.
\end{equation}
For $e\in E, T\in\T$, we define $\chi(e,T)=1$ if $e\subset T$ and $=0$ otherwise.
We rewrite the left hand side above as
\begin{equation}\label{lastt1}
\sum_{e\subset T_0, \textup{length}(e)\sim\de^{\alpha}}\sum_{T\in\T}\chi(e,T).
\end{equation}
Note that if $\chi(e,T)=1$ for some $e\subset T_0$ satisfying $\textup{length}(e)\sim \de^{\alpha}$, then the angle between $T_0$ and $T$ is less than $\de^{1-\alpha}$. We will analyze $T$ according to the angle $\mu=\angle(T_0,T)$. It's easy to see that those $T$ that form an angle $\sim\mu$ with $T_0$ lie in a $1\times \mu$ fat tube, and by the spacing condition of $\T$ we have
$$\#\{T:\angle(T_0,T)\sim\mu\}\lesim \mu^{2} XW\lesim \mu^{2}\de^{-2+\alpha}.$$
In the last inequality we use the assumption $XW\lesim \de^{-2+\alpha}$.
We further rewrite \eqref{lastt1} as:
$$\sum_{\mu\lesim \de^{1-\alpha}}\ \sum_{T:\angle(T_0,T)\sim\mu}\ \sum_{e\subset T_0, \textup{length}(e)\sim\de^{\alpha}}\chi(e,T),$$
Since $T$ and $T_0$ form an angle $\sim\mu$, they overlap in a segment of length $\lesim \de/\mu$, so each such $T$ contains at most $\lesim\mu^{-1}\de^{1-\alpha}$ edges $e\subset T_0$ with $\textup{length}(e)\sim\de^{\alpha}$. Hence the sum above is at most
$$\sum_{\mu\lesim \de^{1-\alpha}}\ \sum_{T:\angle(T_0,T)\sim\mu}\mu^{-1}\de^{1-\alpha}\lesim \sum_{\mu\lesim \de^{1-\alpha}} \mu^{2}\de^{-2+\alpha}\mu^{-1}\de^{1-\alpha}\lesim \de^{-\alpha}.$$
This finishes the proof of \eqref{lastt2}.
\textit{Case 2}: $XW\gtrsim \de^{-2+\alpha}$.
In this case, we choose another pair $(X',W')$ so that $X'\le X$, $W'\le W$, $1\le W'\le X'\le \de^{-1}$ and $X'W'\sim \de^{-2+\alpha}$. We throw away some tubes from $\T$ so that the remaining family satisfies the spacing condition for the new parameters $(X',W')$. This is easily seen from the dual picture as in Figure \ref{fig:generalcase}. Originally, the balls are evenly spaced in the $W^{-1}\times X^{-1}$-grid; we throw away some balls so that the remaining ones fit into the $W'^{-1}\times X'^{-1}$-grid. We apply Case 1 with the new parameters $(X',W')$ to obtain
\begin{equation}\label{last2}
|\B|\gtrsim (\log\de^{-1})^{-3.5} \min(\de^{-\alpha} X'W',\de^{-\frac{3}{2}\alpha}(X'W')^{1/2})=(\log\de^{-1})^{-3.5} \min(\de^{-2},\de^{-\alpha-1}).
\end{equation}
Combining \eqref{last1} and \eqref{last2}, we obtain the desired estimate
\[
|\B|\gtrsim (\log\de^{-1})^{-3.5} \min(\de^{-\alpha-1},\de^{-\alpha} XW,\de^{-\frac{3}{2}\alpha}(XW)^{1/2}).
\]
\end{proof}
\begin{remark}
A general Furstenberg set problem was considered by many authors, for example in \cite{molter2012furstenberg}, \cite{hera2020improved}, \cite{orponen2021hausdorff}.
It's natural to consider other extensions of this problem. For $(u_0,v_0)\in[0,1]^2$, we define the line $l(u_0,v_0): v_0y=x-u_0$. We say $E\subset \R^2$ is an $(\alpha,\beta)$-Furstenberg set if there exists a $\beta$-dimensional set $X\subset [0,1]^2$ such that for each line $l\in\{ l(u,v):(u,v)\in X \}$, we have $\dim_{\text{H}}(l\cap E)\ge \alpha$. It may be reasonable to ask whether for any $(\alpha,\beta)$-Furstenberg set $E$ we have
$$ \dim_{\text{H}}E\ge \min(\alpha+1,\frac{3}{2}\alpha+\frac{1}{2}\beta,\alpha+\beta)? $$
\end{remark}
\appendix
\section{Applications to sum-product problem}\label{appsec}
In this appendix, we prove some sum-product estimates. In the discrete setting, Elekes' argument gives a good sum-product estimate using the Szemer\'edi--Trotter theorem (see \cite{elekes1997number}). The argument was adapted to the $\de$-discretized setting to study sum-product estimates for well-spaced sets (see \cite{gan2020sum}). We essentially follow the idea of \cite{gan2020sum} but give some finer sum-product estimates using the new incidence estimates (Theorem \ref{main2}, \ref{main3}) obtained in this paper. Since our arguments are similar to those in \cite{gan2020sum}, we just sketch the proof. The details can be found in \cite{gan2020sum}.
\begin{definition}
For sets $A, B\subset \R$, we define their sum set $A+B:=\{a+b:a\in A, b\in B\}$ and product set $A\cdot B:=\{ab: a\in A, b\in B\}$.
\end{definition}
\begin{definition}
For a set $A\subset \R$, we use $\cN(A,\de)$ to denote the maximal cardinality of the $\de$-separated subset of $A$.
\end{definition}
\begin{theorem}\label{sumproduct}
Let $\de$ be small enough. Fix $1\le m\le n\le \de^{-1} $ where $m$ and $n$ are integers. Suppose there are two well-spaced sets $A, B\subset [1,2]$, which means if we write
$$ A=\{a_1, a_2\cdots, a_{|A|}\},\ \ B=\{b_1, b_2\cdots, b_{|B|}\},$$
then
$a_{i+1}-a_i>\frac{1}{100m}$ and $ b_{i+1}-b_i>\frac{1}{100n}$ for any $i$ (one easily sees $|A|\lesim m, |B|\lesim n$).
Then we have the following results.
\begin{enumerate}
\item If $|B|m^2\gtrsim_\e\de^{-2}$, then \begin{equation}\label{1}
\cN(A+B,\de)\cN(A\cdot B,\de)\gtrsim_\e \de^{\e} |B|\de^{-1} \frac{|A|^2}{m^2}.
\end{equation}
\item If $|A|n^2\gtrsim_\e\de^{-2}$, then \begin{equation}\label{2}
\cN(A+B,\de)\cN(A\cdot B,\de)\gtrsim_\e \de^{\e} |A|\de^{-1} \frac{|B|^2}{n^2}.
\end{equation}
\item If $|A|mn\gtrsim_\e\de^{-2}$, then \begin{equation}\label{3.1}
\cN(A+B,\de)\cN(A\cdot A,\de)\gtrsim_\e \de^{\e} |A|\de^{-1} \frac{|A||B|}{mn},
\end{equation}
\begin{equation}\label{3.2}
\cN(A+A,\de)\cN(A\cdot B,\de)\gtrsim_\e \de^{\e} |A|\de^{-1} \frac{|A||B|}{mn}.
\end{equation}
\item If $|B|mn\gtrsim_\e\de^{-2}$, then \begin{equation}\label{4.1}
\cN(B+B,\de)\cN(A\cdot B,\de)\gtrsim_\e \de^{\e} |B|\de^{-1} \frac{|A||B|}{mn},
\end{equation}
\begin{equation}\label{4.2}
\cN(A+B,\de)\cN(B\cdot B,\de)\gtrsim_\e \de^{\e} |B|\de^{-1} \frac{|A||B|}{mn}.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
First, we prove \eqref{1}. For $1\le i,j\le |A|$, define the line $$l_{ij}:\ \ y=a_i(x-a_j).$$
Also, define the tube $T_{ij}$ to be the $\de$-neighborhood of $l_{ij}\cap [0,10]^2$. Let $\T$ be the set of these tubes; then $\T$ satisfies the spacing condition in Theorem \ref{main2} with $W=X=m$, and
$$ |\T|=|A|^2. $$
Let $\B$ be a set of $\de$-balls, whose centers are from a maximal $\de$-separated subset of $(A+B)\times (A\cdot B)$ (here $``\times"$ means the Cartesian product). Then we have
$$ |\B|\sim \cN(A+B,\de)\cN(A\cdot B,\de). $$
We also see that each tube $T\in\T$ intersects $\gtrsim |B|$ balls in $\B$, since each line $l_{ij}$ contains at least $|B|$ points $\{(a_j+b_k,a_i b_k)\}_{k=1}^{|B|}$ in $(A+B)\times (A\cdot B)$ and these points are $\gtrsim\de$-separated (consecutive points are $\ge \frac{1}{100n}\ge\frac{\de}{100}$ apart). So if we define $I(\B,\T)$ to be the number of incidences between balls and tubes, we have
$$ I(\B,\T)\gtrsim |B||\T|=|B||A|^2. $$
We also derive an upper bound for $I(\B,\T)$ using Theorem \ref{main2}. From now on, all summations $``\sum_r"$ are over \textit{dyadic} $r$. Recalling Definition \ref{rich}, we have
$$ I(\B,\T)\lesim \sum_r r|B_r(\T)|=\sum_{r\le \de^{1-2\e} m^2}r|B_r(\T)| +\sum_{r> \de^{1-2\e} m^2}r|B_r(\T)|. $$
Noting $|B_r(\T)|\le |\B|$, we trivially bound the first term by
$$ \sum_{r\le \de^{1-2\e} m^2}r|B_r(\T)|\lesim \de^{1-2\e} m^2|\B|. $$
For the second term, we apply Theorem \ref{main2} to see that for each $r>\de^{1-2\e}m^2$ there holds $|B_r(\T)|\lesim_\e \de^{-\e} |\T|m^2 r^{-2}(r^{-1}+m^{-1})=\de^{-\e} |A|^2m^2 r^{-2}(r^{-1}+m^{-1})$. Summing over dyadic $r>\de^{1-2\e}m^2$, we get
$$ \sum_{r> \de^{1-2\e} m^2}r|B_r(\T)|\lesim_\e \de^{\e}|A|^2m^{-2}\de^{-2}. $$
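For the record, here is the calculation behind the last display; we use $\sum_{r>R_0}r^{-2}\lesim R_0^{-2}$ and $\sum_{r>R_0}r^{-1}\lesim R_0^{-1}$ for dyadic sums, with $R_0=\de^{1-2\e}m^2$, together with $m\le\de^{-1}$:
\begin{align*}
\sum_{r> \de^{1-2\e} m^2}r\cdot\de^{-\e}|A|^2m^2r^{-2}(r^{-1}+m^{-1})
&\lesim \de^{-\e}|A|^2m^2\Big(\frac{1}{(\de^{1-2\e}m^2)^2}+\frac{1}{m\,\de^{1-2\e}m^2}\Big)\\
&=\de^{3\e}|A|^2m^{-2}\de^{-2}+\de^{\e}|A|^2m^{-1}\de^{-1}
\lesim \de^{\e}|A|^2m^{-2}\de^{-2}.
\end{align*}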
As a result, we obtain
$$ |B||A|^2\lesim I(\B,\T)\lesim_\e \de^{1-2\e}m^2|\B|+|A|^2m^{-2}\de^{-2}. $$
We see that if $|B||A|^2\gtrsim_\e |A|^2m^{-2}\de^{-2}$, i.e. $|B|m^2\gtrsim_\e\de^{-2}$, then $$|B||A|^2\lesim_\e \de^{1-2\e}m^2|\B|,$$ i.e.
\begin{equation*}
\cN(A+B,\de)\cN(A\cdot B,\de)\sim |\B| \gtrsim_\e \de^{2\e} |B|\de^{-1} \frac{|A|^2}{m^2}.
\end{equation*}
We finish the proof of \eqref{1}.\\
We see that the basic idea in our proof is:
\begin{enumerate}
\item Define a set of lines $l_{ij}$ and thicken them into $\de$-tubes $\T$.
\item Define $\B$, a set of $\de$-balls whose centers are from the Cartesian product of sum set and product set.
\item Estimate $I(\B,\T)$. For the lower bound of $I(\B,\T)$, we use the fact that each tube contains many balls. For the upper bound of $I(\B,\T)$, we split the sum into a low term and a high term. We trivially bound the low term and use Theorem \ref{main2} to deal with the high term. Comparing the two bounds gives the sum-product estimate.
\end{enumerate}
Since the proofs of the other cases proceed in the same way, we just show how to choose $\T$ and $\B$, and omit the details.
\begin{itemize}
\item Recall in the proof of \eqref{1}, we chose
$$ l_{ij}: y=a_i(x-a_j), $$
where $1\le i,j\le |A|$ and $a_i, a_j\in A$. We chose $\B$ whose centers are from $(A+B)\times (A\cdot B)$.
\item For the proof of \eqref{2}, we choose
$$ l_{ij}: y=b_i(x-b_j), $$
where $1\le i,j\le |B|$ and $b_i, b_j\in B$. We choose $\B$ whose centers are from $(A+B)\times (A\cdot B)$.
\item For the proof of \eqref{3.1}, we choose
$$ l_{ij}: y=a_i(x-b_j), $$
where $1\le i\le |A|, 1\le j\le |B|$ and $a_i\in A, b_j\in B$. We choose $\B$ whose centers are from $(A+B)\times (A\cdot A)$.
\item For the proof of \eqref{3.2}, we choose
$$ l_{ij}: y=b_i(x-a_j), $$
where $1\le i\le |B|,1\le j\le |A|$ and $b_i\in B, a_j\in A$. We choose $\B$ whose centers are from $(A+A)\times (A\cdot B)$.
\item For the proof of \eqref{4.1}, we choose
$$ l_{ij}: y=a_i(x-b_j), $$
where $1\le i\le |A|, 1\le j\le |B|$ and $a_i\in A, b_j\in B$. We choose $\B$ whose centers are from $(B+B)\times (A\cdot B)$.
\item For the proof of \eqref{4.2}, we choose
$$ l_{ij}: y=b_i(x-a_j), $$
where $1\le i \le |B|$, $1\le j\le |A|$ and $b_i\in B, a_j\in A$. We choose $\B$ whose centers are from $(A+B)\times (B\cdot B)$.
\end{itemize}
\end{proof}
\begin{remark}
Theorem 1 in \cite{gan2020sum} actually deals with the special case that $A=B$, $m=n=|A|$.
\end{remark}
\bibliographystyle{alpha}
\bibliography{bibli}
\end{document} | 57,525 |
\begin{document}
\begin{frontmatter}
\title{Optimal local H\"{o}lder index for density states of
superprocesses with $(1+\beta)$-branching mechanism\protect\thanksref{T1}}
\runtitle{Optimal local H\"{o}lder index for superprocess states}
\begin{aug}
\author[A]{\fnms{Klaus} \snm{Fleischmann}\corref{}\ead[label=e1]{[email protected]}},
\author[B]{\fnms{Leonid} \snm{Mytnik}\ead[label=e2]{[email protected]}\ead[url,label=u1]{http://ie.technion.ac.il/leonid.phtml}} and
\author[C]{\fnms{Vitali} \snm{Wachtel}\ead[label=e3]{[email protected]}}
\runauthor{K. Fleischmann, L. Mytnik and V. Wachtel}
\affiliation{Weierstrass Institute, Technion Israel Institute of
Technology and~University~of~Munich}
\address[A]{K. Fleischmann\\
Weierstrass Institute\\
\quad for Applied Analysis\\
\quad and Stochastics\\
Mohrenstrasse 39\\
D-10117 Berlin\\
Germany\\
\printead{e1}}
\address[B]{L. Mytnik\\
Faculty of Industrial Engineering\\
\quad and Management\\
Technion Israel Institute\\
\quad of Technology\\
Haifa 32000\\
Israel\\
\printead{e2}\\
\printead{u1}}
\address[C]{V. Wachtel\\
Mathematical Institute\\
University of Munich\\
Theresienstrasse 39\\
D-80333 Munich\\
Germany\\
\printead{e3}}
\end{aug}
\thankstext{T1}{Supported by the German Israeli Foundation for
Scientific Research and
Development, Grant G-807-227.6/2003.}
\received{\smonth{5} \syear{2008}}
\revised{\smonth{4} \syear{2009}}
\begin{abstract}
For $ 0<\alpha\leq2$, a super-$\alpha$-stable motion $ X$ in
$ \mathsf{R}^{d}$ with branching of index $ 1+\beta\in(1,2)$ is
considered. Fix arbitrary $t>0$. If $ d<\alpha/\beta$, a dichotomy for
the density function of the measure $ X_{t}$ holds: the density
function is locally H\"{o}lder continuous if $ d=1$ and
$ \alpha>1+\beta$ but locally unbounded otherwise. Moreover, in the
case of continuity, we determine the optimal local H\"{o}lder index.
\end{abstract}
\begin{keyword}[class=AMS]
\kwd[Primary ]{60J80}
\kwd[; secondary ]{60G57}.
\end{keyword}
\begin{keyword}
\kwd{Dichotomy for density of superprocess states}
\kwd{H\"{o}lder continuity}
\kwd{optimal exponent}
\kwd{critical index}
\kwd{local unboundedness}
\kwd{multifractal spectrum}
\kwd{Hausdorff dimension}.
\end{keyword}
\end{frontmatter}
\section{Introduction and statement of results}
\subsection{Background and purpose}\label{SS.background}
For $ 0<\alpha\leq2$, a super-$\alpha$-stable motion
$X=\{X_{t}\dvtx t\geq0\}$ in $\mathsf{R}^{d}$ with branching of index
$ 1+\beta\in(1,2]$ is a finite measure-valued process related to
the log-Laplace equation
\begin{equation} \label{logLap}
\frac{d}{dt}\,u =
\bDelta
_{\alpha}u +au- bu^{1+\beta},
\end{equation}
where $ a\in\mathsf{R}$ and $ b>0$ are any fixed constants. Its
underlying motion is described by the fractional Laplacian
$\bDelta_{\alpha}:=-(- \bDelta)^{\alpha/2}$ determining a symmetric
$\alpha$-stable motion in $\mathsf{R}^{d}$ of index $ \alpha\in(0,2]$
(Brownian motion if $ \alpha=2)$ whereas its continuous-state branching
mechanism described by
\begin{equation} \label{not.Psi}
v \mapsto-av+bv^{1+\beta} =: \Psi(v),\qquad v\geq0,
\end{equation}
belongs to the domain of attraction of a stable law of index $ 1+\beta
\in(1,2]$ (the branching is critical if $ a=0$).
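Before turning to density states, a small numerical sketch may help to fix ideas; it is illustrative only and not used later. For spatially constant data the term $\bDelta_{\alpha}u$ in (\ref{logLap}) vanishes, so the equation reduces to the ordinary differential equation $\frac{d}{dt}u=au-bu^{1+\beta}$. Assuming the standard log-Laplace duality $\mathbf{E}e^{-\lambda\langle X_{t},1\rangle}=e^{-X_{0}(\mathsf{R}^{d})u_{t}(\lambda)}$ with $u_{0}=\lambda$, a small-$\lambda$ difference quotient recovers the first-moment formula $\mathbf{E}\langle X_{t},1\rangle=X_{0}(\mathsf{R}^{d})e^{at}$ [cf. (\ref{65}) below]. All parameter values in the following \texttt{Python} sketch are arbitrary.
\begin{verbatim}
# Illustrative sketch: total-mass behaviour read off from the
# log-Laplace ODE  du/dt = a*u - b*u**(1+beta)  (arbitrary parameters).
import numpy as np
from scipy.integrate import solve_ivp

a, b, beta = 0.3, 1.0, 0.5     # hypothetical branching parameters
t, mu_mass = 1.0, 2.0          # time horizon and X_0(R)

def u_t(lam):
    # solve du/ds = a*u - b*u**(1+beta), u(0) = lam, up to time t
    rhs = lambda s, u: a * u - b * u ** (1.0 + beta)
    return solve_ivp(rhs, (0.0, t), [lam], rtol=1e-10, atol=1e-12).y[0, -1]

lam = 1e-6                     # difference quotient for d/d(lambda) at 0
print("numerical    E<X_t,1> ~", mu_mass * u_t(lam) / lam)
print("closed form X_0(R)e^{at} =", mu_mass * np.exp(a * t))
\end{verbatim}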
It is well known that in dimensions $ d<\frac{\alpha}{\beta}$ at
any fixed time $ t>0$ the measure $ X_{t}=X_{t}(dx)$
is absolutely continuous with probability one (cf. Fleischmann
\cite{Fleischmann1988critical} where $a=0;$ the noncritical case requires
the obvious changes). By an abuse of notation, we sometimes denote a version
of the density function of the measure $X_{t}=X_{t}(dx)$ by the same
symbol, $ X_{t}(dx)=X_{t}(x)\, dx$, that is,
$ X_{t}=\{X_{t}(x)\dvtx x\in\mathsf{R}^{d}\}$. In the case
of one-dimensional continuous super-Brownian motion ($\alpha=2$, $\beta=1$),
even a joint-continuous density field $ \{X_{t}(x)\dvtx t>0, x\in
\mathsf{R}\}$ exists, satisfying a stochastic equation (Konno
and Shiga \cite{KonnoShiga1988} as well as Reimers \cite{Reimers1989}).
From now on we assume that $d<\frac{\alpha}{\beta}$ and
$\beta\in(0,1)$. For the Brownian case $ \alpha=2$ and
if $a=0$ (critical branching), Mytnik \cite{Mytnik2002} proved that a version
of the density $\{X_{t}(x)\dvtx t>0, x\in\mathsf{R}^{d}\}$ of the
measure $X_{t}(dx)\,dt$ exists that satisfies, in a weak sense,
the following stochastic partial differential equation (SPDE):
\begin{equation}
\frac{\partial}{\partial t}X_{t}(x)=
\bDelta X_{t}(x)+(bX_{t-}(x))^{1/(1+\beta)}\dot{L}(t,x),
\end{equation}
where $\dot{L}$ is a $(1+\beta)$-stable noise without negative jumps.
\begin{convention}
\label{Conv}
From now on, (if it is not stated otherwise explicitly) we
use the term \textit{density} to denote the density function of the
measure $X_{t}(dx)$ with respect to the Lebesgue measure.
\end{convention}
For the same model (as in the paragraph before Convention \ref{Conv}), in
Mytnik and Perkins \cite{MytnikPerkins2003} regularity and irregularity
properties of the density at fixed times had been revealed. More precisely,
these densities have continuous versions if $ d=1$, whereas they
are locally unbounded on open sets of positive $X_{t}(dx)$-measure in
all higher dimensions $ (d<\frac{2}{\beta} )$.
The first \textit{purpose} in the present paper is to allow also discontinuous
underlying motions, that is to consider also all $ \alpha\in(0,2)$.
Then actually the same type of \textit{fixed time dichotomy} holds
(recall that $ d<\frac{\alpha}{\beta})$: continuity of densities
if $ d=1$ and $ \alpha>1+\beta$ whereas local
unboundedness is true if $ d>1$ or $ \alpha\leq1+\beta$.
However, the \textit{main purpose} of the paper is to address the following
question: what is the optimal local H\"{o}lder index in the first case of
existence of a continuous density? Here by optimality we mean that
there is a
critical index $\eta_{\mathrm{c}}$ such that for any fixed $t>0$ there
is a
version of the density which is locally H\"{o}lder continuous of any index
$\eta<\eta_{\mathrm{c} }$ whereas there is no locally H\"{o}lder continuous
version with index $ \eta\geq\eta_{\mathrm{c} }$.
In \cite{MytnikPerkins2003} continuity of the density at fixed times is proved
by some moment methods, although moments of order larger than $1+\beta$
are in
general infinite in the $1+\beta<2$ case. A standard procedure to get local
H\"{o}lder continuity is the Kolmogorov criterion by using
``high'' moments. This, for instance, can be done in the
$\beta=1$ case ($\alpha=2$, $d=1)$ to show local H\"{o}lder continuity
of any
index smaller than $\frac{1}{2}$ (see the estimates in the proof of
Corollary 3.4 in Walsh \cite{Walsh1986}).
Due to the lack of ``high'' moments in our $\beta<1$ case we cannot use
moments to get the optimal local H\"{o}lder index. Therefore we have to
get deeply into the jump structure of the superprocess to obtain the
needed estimates. As a result we are able to show the \textit{local
H\"{o}lder continuity} of all orders $ \eta<\eta
_{\mathrm{c}}:=\frac{\alpha}{1+\beta}-1$, provided that $ d=1$ and $
\alpha>1+\beta$. We also verify that the bound $ \eta_{\mathrm{c}}$ for
the local H\"{o}lder index is in fact \textit{optimal} in the sense
that there are points $x_{1},x_{2}$ such that
the density increments $ |X_{t}(x_{1})-X_{t}(x_{2})|$
are of a larger order than $ |x_{1}-x_{2}|^{\eta}$ as
$ x_{1}-x_{2}\rightarrow0$ for every $ \eta\geq\eta
_{\mathrm{c} }$. For precise formulations, see
Theorem \ref{T.prop.dens} below.
\subsection{Statement of results}\label{SS.statement}
Write $ \mathcal{M}_{\mathrm{f}}$ for the set of all finite measures
$\mu$ defined on $ \mathsf{R}^{d}$ and $ |\mu |$ for its total mass
$\mu(\mathsf{R}^{d})$. Let $ \Vert f\Vert_{U}$ denote the essential
supremum (with respect to Lebesgue measure) of a function $
f\dvtx\mathsf{R}^{d}\rightarrow\mathsf
{R}
_{+}:=[0,\infty)$ over a nonempty open set $ U\subseteq
\mathsf{R}^{d}$.
Let $ p^{\alpha}$ denote the continuous $\alpha$\textit{-stable
transition kernel} related to the fractional Laplacian $
\bDelta_{\alpha}=-(-\bDelta)^{\alpha/2}$, and $ S^{\alpha}$ the related
\textit{semigroup}.
Recall that $ 0<\alpha\leq2$, $ 1+\beta\in(1,2)$ and
$ d<\frac{\alpha}{\beta}$, and consider again the
$(\alpha,d,\beta)$-superprocess $ X=\{X_{t}\dvtx t\geq0\}$ in
$\mathsf{R}^{d}$ related to (\ref{logLap}). Recall also that for fixed
$ t>0$, with probability one, the measure state $ X_{t}
$ is absolutely continuous (see \cite{Fleischmann1988critical}). The
following theorem is our \textit{main result}:
\begin{theorem}[(Dichotomy for densities)]\label{T.prop.dens}Fix $ t>0$
and $ X_{0}=\mu\in\mathcal{M}_{\mathrm{f} }$.
\begin{enumerate}[(a)]
\item[(a)] \textup{(Local H\"{o}lder continuity)}. If $ d=1$ and
$ \alpha>1+\beta$, then with probability one, there is a
continuous version $\tilde{X}_{t}$ of the density function of the measure
$ X_{t}(dx)$. Moreover, for each $ \eta<\eta
_{\mathrm{c}}:=\frac{\alpha}{1+\beta}-1$, this version $\tilde
{X}_{t}$ is locally H\"{o}lder continuous of index $ \eta$
\[
\sup_{x_{1},x_{2}\in K, x_{1}\neq x_{2}}\frac{|\tilde{X}_{t}
(x_{1})-\tilde{X}_{t}(x_{2})|}{|x_{1}-x_{2}|^{\eta}} < \infty
\qquad\mbox{compact } K\subset\mathsf{R}.
\]
\item[(b)] \textup{(Optimal local H\"{o}lder index)}. Under conditions
as in the beginning of part \textup{(a)}, for every $ \eta\geq\eta_{\mathrm{c}}
$ with probability one, for any open $ U\subseteq\mathsf{R}$,
\[
\sup_{x_{1},x_{2}\in U, x_{1}\neq x_{2}}\frac{|\tilde{X}_{t}
(x_{1})-\tilde{X}_{t}(x_{2})|}{|x_{1}-x_{2}|^{\eta}} = \infty
\qquad\mbox{whenever } X_{t}(U)>0.
\]
\item[(c)] \textup{(Local unboundedness)}. If $ d>1$ or $ \alpha
\leq1+\beta$, then with probability one, for all open
$ U\subseteq\mathsf{R}^{d}$,
\[
\Vert X_{t}\Vert_{U} = \infty\qquad\mbox{whenever } X_{t}(U)>0.
\]
\end{enumerate}
\end{theorem}
\begin{remark}[(Any version)]
As in part (c), the statement in part (b) is valid also for any version
$X_{t}$ of the density function.
\end{remark}
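To make the parameter regimes in Theorem \ref{T.prop.dens} concrete, the following tiny \texttt{Python} helper (purely illustrative; the sample triples are arbitrary) evaluates $\eta_{\mathrm{c}}=\frac{\alpha}{1+\beta}-1$ and reports which part of the dichotomy applies to a given admissible triple $(\alpha,\beta,d)$ with $d<\alpha/\beta$.
\begin{verbatim}
# Illustrative helper: which case of the dichotomy applies?
def classify(alpha, beta, d):
    assert 0 < alpha <= 2 and 0 < beta < 1 and d < alpha / beta
    if d == 1 and alpha > 1 + beta:
        eta_c = alpha / (1.0 + beta) - 1.0
        return "continuous density, critical index eta_c = %.4f" % eta_c
    return "density locally unbounded on open U with X_t(U) > 0"

print(classify(1.8, 0.5, 1))   # alpha > 1+beta: continuous, eta_c = 0.2
print(classify(1.2, 0.5, 1))   # alpha <= 1+beta: locally unbounded
print(classify(2.0, 0.4, 2))   # d > 1: locally unbounded
\end{verbatim}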
\subsection{Some discussion}
At first sight, the result of Theorem \ref{T.prop.dens}(a), (b) is a bit
surprising. Let us recall again what is known about regularity
properties of
densities of $(\alpha,d,\beta)$-superprocesses. The case of continuous
super-Brownian motion ($\alpha=2$, $\beta=1$, $d=1)$ is very well
studied. As
already mentioned, densities exist at all times simultaneously, and
they are
locally H\"{o}lder continuous (in the spatial variable) for any index
$ \eta<\frac{1}{2}$. Moreover, it is known that
$\frac{1}{2}$ is optimal in this case. Now let us consider our result in
Theorem \ref{T.prop.dens}(a), (b), specialized to $\alpha=2$. Then we have
$ \eta_{\mathrm{c}}=\frac{2}{1+\beta}-1\downarrow0$ as
$ \beta\uparrow1$ where the limit 0 is different from the optimal
local H\"{o}lder index $\frac{1}{2}$ of continuous super-Brownian
motion. This
may confuse a reader and even raise a suspicion that something is wrong.
However there is an intuitive explanation for this discontinuity as we would
like to explain now.
Recall the notion of H\"{o}lder continuity \textit{at a point}. A
function $f$
is H\"{o}lder continuous with index $\eta\in(0,1)$ at a point $x_{0}$ if
there is a neighborhood $U(x_{0})$ such that
\begin{equation}
\vert f(x)-f(x_{0}) \vert\leq C |x-x_{0}|^{\eta
} \qquad\mbox{for all } x\in U(x_{0}).
\end{equation}
The \textit{optimal} H\"{o}lder index $H(x_{0})$ of $f$ at the point $x_{0}$
is defined as the supremum of all such $\eta$. Clearly, there are functions
where $H(x_{0})$ may vary with $x_{0 }$, and the index of local
H\"{o}lder continuity in a domain cannot be larger than the smallest optimal
H\"{o}lder index at the points of the domain. The densities of continuous
super-Brownian motion are such that almost surely $H(x_{0})=\frac{1}{2}
$ for all $x_{0 }$ whereas in our $\beta<1$ case of discontinuous
superprocesses the situation is quite different. The critical local H\"{o}lder
index $ \eta_{\mathrm{c}}=\frac{\alpha}{1+\beta}-1$ in our case is
a result of the influence of relatively high jumps of the superprocess that
occur close to time $ t$. So there are (random) points $x_{0}$
with $H(x_{0})=\eta_{\mathrm{c} }$. But these points are
\textit{exceptional} points; loosely speaking, there are not too many
of them.
We conjecture\footnote{We will verify this conjecture in a forthcoming
extended version of \cite{FleischmannMytnikWachtel2009fixedWIASarxiv}.}
that at any \textit{given} point $x_{0}$ the optimal H\"{o}lder index
$H(x_{0}) $ equals $(\frac{1+\alpha}{1+\beta}-1)\wedge1=:\bar
{\eta}_{\mathrm{c}}>\eta_{\mathrm{c} }$. Now if $ \alpha
=2$, as $ \beta\uparrow1$ one gets the index
$ \frac{1}{2}$ corresponding to the case of continuous
super-Brownian motion.
This observation raises in fact a number of very interesting \textit{open
problems}:
\begin{conjecture}[(Multifractal spectrum)]
We conjecture that for any $ \eta\in(\eta_{\mathrm{c}},
\bar{\eta}_{\mathrm{c}})$ there are (random) points $x_{0}$ where the
density $X_{t}$ at the point $x_{0}$ is H\"{o}lder continuous with
index $\eta$. What is the \textit{Hausdorff dimension}, say $D(\eta)$, of
the (random) set $ \{x_{0}\dvtx H(x_{0})=\eta\}$? We
conjecture that
\begin{equation}
\lim_{\eta\downarrow\eta_{\mathrm{c}}}D(\eta) = 0 \quad\mbox{and}\quad
\lim_{\eta\uparrow\bar{\eta}_{\mathrm{c}}}D(\eta)=1.
\end{equation}
This function $ \eta\mapsto D(\eta)$ reveals the so-called
\textit{multifractal} structure concerning the optimal H\"{o}lder index in
points for the densities of superprocesses with branching of index
$1+\beta<\alpha$ and is definitely worth studying. In
this connection, we refer to Jaffard \cite{Jaffard1999} where multifractal
properties of one-dimensional L\'{e}vy processes are studied.
\end{conjecture}
Another interesting direction would be a generalization of our results
to the
case of SPDEs driven by L\'{e}vy noises. In recent years there has been increasing
interest in such SPDEs. Here we may mention the papers Saint Laubert Bi\'{e} \cite{Bie1998},
Mytnik \cite{Mytnik2002}, Mueller, Mytnik and Stan
\cite{MuellerMytnikStan2006} as well as Hausenblas \cite{Hausenblas2007}.
Note that in these papers properties of solutions are described in some
$\mathcal{L}^{p}$-sense. To the best of our knowledge not too many
things are
known about local H\"{o}lder continuity of solutions (in case of continuity).
The only result we know in this direction is \cite{MytnikPerkins2003} where
some local H\"{o}lder continuity of the fixed time density of super-Brownian
motion $(\alpha=2$, $\beta<1$, $d<\frac{2}{\beta}$, $a=0)$ was established.
However, the result there was far from being optimal. With
Theorem \ref{T.prop.dens}(a), (b) we fill this gap. Our result also allows the
following conjecture:
\begin{conjecture}[(Regularity in case of SPDE with stable noise)]
\label{ConjEqu}
Consider the SPDE,
\begin{equation}
\frac{\partial}{\partial t}X_{t}(x)=
\bDelta_{\alpha}X_{t}(x)+g(X_{t-}(x))\dot{L}(t,x),
\end{equation}
where $\dot{L}$ is a $(1+\beta)$-stable noise without negative jumps,
and $g$
is such that solutions exist. Then there should exist versions of solutions
such that at fixed times regularity holds just as described in
Theorem \ref{T.prop.dens}(a), (b) with the same parameter classification,
in particular, with the same $\eta_{\mathrm{c} }$.
\end{conjecture}
\subsection{Martingale decomposition of $X$}
As in the $\alpha=2$ case of \cite{MytnikPerkins2003}, for the proof we need
the martingale decomposition of $ X$. For this purpose, we will
work with the following \textit{alternative description} of the
continuous-state branching mechanism $ \Psi$ from (\ref{not.Psi}):
\begin{equation} \label{alt.Psi}
\Psi(v) = -av + \varrho\int_{0}^{\infty} dr\, r^{-2-\beta} (
e^{-vr}-1+vr ) ,\qquad v\geq0,
\end{equation}
where
\begin{equation} \label{not.rho}
\varrho:= b \frac{(1+\beta)\beta}{\Gamma(1-\beta)}
\end{equation}
with $ \Gamma$ denoting the famous Gamma function. The martingale
decomposition of $X$ in the following lemma is basically proven in
Dawson \cite{Dawson1993}, Section 6.1.
Denote by $\mathcal{C}_{\mathrm{b}}$ the set of all bounded and continuous
functions on $\mathsf{R}^{d}$. We add the sign $+$ if the functions are
additionally nonnegative. $\mathcal{C}_{\mathrm{b}}^{(k),+}$ with $k\geq1$
refers to the subset of functions which are $k$ times differentiable and whose derivatives up to order $k$ belong to $\mathcal{C}_{\mathrm{b}}^{+}$ as well.
\begin{lemma}[(Martingale decomposition of $X$)]
\label{L.mart.dec}
Fix $ X_{0} = \mu\in\mathcal{M}_{\mathrm{f} }$.
\begin{enumerate}[(a)]
\item[(a)] \textup{(Discontinuities)}.
All discontinuities of the process $ X$
are jumps upward of the form $ r\delta_{x }$. More precisely, there
exists a random measure $ N (d(s,x,r) ) $ on $ \mathsf{R}_{+}\times
\mathsf{R}^{d}\times\mathsf{R}_{+}$ describing the jumps $ r\delta_{x}$
of $ X$ at times $ s$ at sites $x$ of size~$r$.
\item[(b)] \textup{(Jump intensities)}.
The compensator $ \hat{N}$ of
$ N$ is given by
\[
\hat{N} (d(s,x,r) ) = \varrho
\,ds\, X_{s}(dx) r^{-2-\beta}\,dr;
\]
that is, $\tilde{N} := N-\hat{N}$ is a martingale
measure on $ \mathsf{R}_{+}\times\mathsf{R}^{d}\times\mathsf{R}_{+ }$.
\item[(c)] \textup{(Martingale decomposition)}.
For all $ \varphi\in\mathcal{C}_{\mathrm{b}}^{(2),+}$ and $ t\geq0$,
\[
\langle X_{t},\varphi\rangle= \langle\mu,\varphi
\rangle+\int_{0}^{t} ds \langle X_{s},
\bDelta_{\alpha}\varphi\rangle+M_{t}(\varphi)+a I_{t}(\varphi)
\]
with the discontinuous martingale
\[
t \mapsto M_{t}(\varphi) := \int_{(0,t]\times\mathsf{R}^{d}\times
\mathsf{R}_{+}} \tilde{N} (d(s,x,r) )
r \varphi(x)
\]
and the increasing process
\[
t \mapsto I_{t}(\varphi) := \int_{0}^{t} ds \langle
X_{s},\varphi\rangle.
\]
\end{enumerate}
\end{lemma}
From Lemma \ref{L.mart.dec} we get the related \textit{Green's function
representation},
\begin{eqnarray}\label{Green}\quad
\langle X_{t},\varphi\rangle &=& \langle\mu
,S_{t}^{\alpha}\varphi\rangle+ \int_{(0,t]\times\mathsf{R}^{d}
} M (d(s,x) ) S_{t-s}^{\alpha}
\varphi(x)\nonumber\\[-8pt]\\[-8pt]
&&{} + a\int_{(0,t]\times\mathsf{R}^{d}} I
(d(s,x)) S_{t-s}^{\alpha}\varphi(x),\qquad
t\geq 0, \varphi\in\mathcal{C}_{\mathrm{b} }^{+},\nonumber
\end{eqnarray}
with $ M$ the martingale measure related to the martingale in part
(c) and $ I$ the measure related to the increasing process there.
We add also the following lemma which can be proved as Lemma 3.1 in Le Gall
and Mytnik \cite{LeGallMytnik2003}. For $p\geq1$, let $ \mathcal{L}
_{\mathrm{loc}}^{p}(\mu)=\mathcal{L}_{\mathrm{loc}}^{p}(\mathsf{R}
_{+}\times\mathsf{R}^{d}, S_{s}^{\alpha}\mu(x) \,ds
\,dx)$ denote the space of equivalence classes of measurable
functions $\psi$ such that
\begin{equation}
\int_{0}^{T}ds\int_{\mathsf{R}^{d}}dx\, S_{s}^{\alpha}
\mu(x) |\psi(s,x)|^{p} < \infty,\qquad T>0.
\end{equation}
\begin{lemma}[($L^{p}$-space with martingale measure)]\label{L.Lp}
Let $ X_{0}=\mu\in\mathcal{M}_{\mathrm{f} }$ and $ \psi\in\mathcal{L}
_{\mathrm{loc}}^{p}(\mu)$ for some
$ p\in(1+\beta,2)$. Then the martingale
\begin{equation}
t\mapsto\int_{(0,t]\times\mathsf{R}^{d}} M (d
(s,x) ) \psi(s,x)
\end{equation}
is well defined.
\end{lemma}
Fix $ t>0$, $\mu\in\mathcal{M}_{\mathrm{f} }$. Suppose
$ d<\frac{\alpha}{\beta}$. Then the random measure $ X_{t}
$ is a.s. absolutely continuous. From (\ref{Green}) we
get the following representation of a version of its \textit{density function}
(cf. \cite{MytnikPerkins2003,LeGallMytnik2003}):
\begin{eqnarray}\label{rep.dens}
X_{t}(x) & = & \mu\ast p_{t}^{\alpha} (x) + \int_{(0,t]\times
\mathsf{R}^{d}} M (d(s,y) )
p_{t-s}^{\alpha}(x-y)\nonumber\\
&&{} + a\int_{(0,t]\times\mathsf{R}^{d}} I
(d(s,y) ) p_{t-s}^{\alpha}(x-y)\\
&=&\!: Z_{t}^{1}(x)+Z_{t}^{2}(x)+Z_{t}^{3}(x),\qquad
x\in\mathsf{R}^{d},\nonumber
\end{eqnarray}
with notation in the obvious correspondence (and kernels $p^{\alpha}$
introduced in the beginning of Section \ref{SS.statement}).
This representation is the starting point for the proof of the local
H\"{o}lder continuity as claimed in Theorem \ref{T.prop.dens}(a). Main work
has to be done to deal with $Z_{t}^{2}$.
\subsection{Organization of the paper}
In Section \ref{sec:2} we develop some tools that will be used in the
following sections for the proof of Theorem \ref{T.prop.dens}. Also on
the way, in Section \ref{sec:2.3}, we are able to verify partially
Theorem \ref{T.prop.dens}(a) for some range of parameters $\alpha,\beta
$ using simple moment estimates. The proof of Theorem
\ref{T.prop.dens}(a) is completed in Section \ref{S.3} using a more
delicate analysis of the jump structure of the process. Section
\ref{sec:4} is devoted to the proof of part (c) of Theorem
\ref{T.prop.dens}. In Section \ref{sec:5}, which is the most
technically involved section, we verify Theorem \ref{T.prop.dens}(b).
\section{Auxiliary tools}
\label{sec:2}
In this section we always assume that $ d=1$.
\subsection{On the transition kernel of $\alpha$-stable motion}
The symbol $ C$ will always denote a generic positive constant,
which might change from place to place. On the other hand, $c_{(\#)}$ denotes
a constant appearing in formula line (or array) (\#).
We start with two estimates concerning the $\alpha$-stable transition kernel
$p^{\alpha}$.
\begin{lemma}[($\alpha$-stable density increment)]\label{L1}
For every $\delta\in[0,1]$,
\begin{equation} \label{L1.1}
|p_{t}^{\alpha}(x)-p_{t}^{\alpha}(y)| \leq C \frac{|x-y|^{\delta
}}{t^{\delta/\alpha}} \bigl(p_{t}^{\alpha}(x/2)+p_{t}^{\alpha}
(y/2)\bigr),\qquad t>0,\ x,y\in\mathsf{R}.\hspace*{-29pt}
\end{equation}
\end{lemma}
\begin{pf}
For the case $\alpha=2$, see, for example, Rosen \cite{Rosen1987},
(2.4e). Suppose
$ \alpha<2$. It suffices to assume that $t=1$. In fact, multiply
$x,y$ by $t^{-1/\alpha}$ in the formula for the $t=1$ case, and use
that by
self-similarity, $p_{1}^{\alpha}(t^{-1/\alpha}x)=t^{1/\alpha}p_{t}^{\alpha}(x)$.
Now we use the well-known subordination formula
\begin{equation} \label{L1.2}
p_{1}^{\alpha}(z) = \int_{0}^{\infty}ds\, q_{1}^{\alpha/2}
(s) p_{s}^{(2)}(z),\qquad z\in\mathsf{R},
\end{equation}
where $q^{\alpha/2}$ denotes the continuous transition kernel of a stable
process on $\mathsf{R}_{+}$ of index $\alpha/2$, and by an abuse of notation,
$p^{(2)}$ refers to $p^{\alpha}$ in case $\alpha=2$. Consequently,
\begin{equation}
|p_{1}^{\alpha}(x)-p_{1}^{\alpha}(y)| \leq\int_{0}^{\infty
}ds\, q_{1}^{\alpha/2}(s) \bigl|p_{s}^{(2)}(x)-p_{s}^{(2)}(y)\bigr|.
\end{equation}
Hence, from the $ \alpha=2$ case,
\begin{eqnarray}
&&|p_{1}^{\alpha}(x)-p_{1}^{\alpha}(y)|\nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq C |x-y|^{\delta}
\int_{0}^{\infty}ds\, q_{1}^{\alpha/2}(s) s^{-\delta/2}
\bigl(p_{s}^{(2)}(x/2)+p_{s}^{(2)}(y/2)\bigr).\nonumber
\end{eqnarray}
The lemma will be proved if we show that
\begin{equation} \label{L1.3}
\int_{0}^{\infty}ds\, q_{1}^{\alpha/2}(s) s^{-\delta/2} p_{s}
^{(2)}(x/2) \leq C p_{1}^{\alpha}(x/2),\qquad x\in\mathsf{R}.
\end{equation}
First, in view of (\ref{L1.2}),
\begin{equation} \label{L1.4}\qquad
\int_{1}^{\infty}ds\, q_{1}^{\alpha/2}(s) s^{-\delta/2} p_{s}
^{(2)}(x/2) \leq\int_{1}^{\infty}ds\, q_{1}^{\alpha/2}
(s) p_{s}^{(2)}(x/2) \leq p_{1}^{\alpha}(x/2).
\end{equation}
Second, by Brownian scaling,
\begin{eqnarray}\qquad
\int_{0}^{1}ds\, q_{1}^{\alpha/2}(s) s^{-\delta/2} p_{s}
^{(2)}(x/2) &=& \int_{0}^{1}du\, q_{1}^{\alpha/2}(u) u^{-(\delta
+1)/2} p_{1}^{(2)} \biggl(\frac{x/2}{u^{1/2}} \biggr)\nonumber\\
&\leq& p_{1}^{(2)}(x/2)\int_{0}^{1}du\, q_{1}^{\alpha/2}
(u) u^{-(\delta+1)/2}\\
&\leq& C p_{1}^{(2)}(x/2),\nonumber
\end{eqnarray}
where in the last step we have used the fact that $q_{1}^{\alpha/2}(u)$
decreases, as $u\downarrow0$, exponentially fast (cf.
\cite{Feller1971volII2nd}, Theorem 13.6.1). Since
$p_{1}^{(2)}(x/2)=\mathrm{o}(p_{1}
^{\alpha}(x/2))$ as $x\uparrow\infty$, we have $p_{1}^{(2)}(x/2)\leq
Cp_{1}^{\alpha}(x/2), x\in\mathsf{R}$. Hence,
\begin{equation} \label{L1.5}
\int_{0}^{1} ds\, q_{1}^{\alpha/2}(s) s^{-\delta/2} p_{s}
^{(2)}(x/2)\leq C p_{1}^{\alpha}(x/2).
\end{equation}
Combining (\ref{L1.4}) and (\ref{L1.5}) gives (\ref{L1.3}), completing
the proof.
\end{pf}
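As a purely numerical illustration of (\ref{L1.1}) at $t=1$ (not used anywhere below; the discretization and the choice of $\alpha,\delta$ are ad hoc), one can compute $p_{1}^{\alpha}$ by Fourier inversion of $\xi\mapsto e^{-|\xi|^{\alpha}}$ and inspect the ratio controlled by the lemma:
\begin{verbatim}
# Illustrative check of Lemma (L1.1) at t = 1 via the inversion formula
#   p_1^alpha(x) = (1/pi) * int_0^infty cos(xi*x) * exp(-xi**alpha) dxi.
import numpy as np

alpha, delta = 1.5, 0.8
xi = np.linspace(0.0, 60.0, 20001)          # truncated Fourier integral
dxi = xi[1] - xi[0]

def p1(x):
    vals = np.cos(xi * x) * np.exp(-xi ** alpha)
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dxi / np.pi  # trapezoid

xs = np.linspace(-3.0, 3.0, 41)
P  = np.array([p1(x) for x in xs])          # p_1^alpha(x)
Ph = np.array([p1(x / 2.0) for x in xs])    # p_1^alpha(x/2)

ratios = [abs(P[i] - P[j]) / (abs(xs[i] - xs[j]) ** delta * (Ph[i] + Ph[j]))
          for i in range(len(xs)) for j in range(len(xs)) if i != j]
print("empirical constant C ~", max(ratios))  # finite, as the lemma asserts
\end{verbatim}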
\begin{lemma}[(Integrals of $\alpha$-stable density increment)]
\label{L2}If $ \theta\in[1,1+\alpha)$ and $ \delta\in[
0,1]$ satisfy $ \delta<(1+\alpha-\theta)/\theta$ then
\begin{eqnarray}\label{L2.1}\hspace*{28pt}
&& \int_{0}^{t}ds\int_{\mathsf{R}}dy\, p_{s}^{\alpha
}(y) |p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}-y)|^{\theta}\nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq C (1+t) |x_{1}-x_{2}|^{\delta\theta}\bigl(p_{t}^{\alpha}
(x_{1}/2)+p_{t}^{\alpha}(x_{2}/2)\bigr),\qquad t>0, x_{1},x_{2}\in
\mathsf{R}.\nonumber
\end{eqnarray}
\end{lemma}
\begin{pf}
By Lemma \ref{L1}, for every $\delta\in[0,1]$,
\begin{eqnarray}
&& |p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}-y)|^{\theta}
\nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq C \frac{|x_{1}-x_{2}|^{\delta\theta}}{(t-s)^{\delta\theta/\alpha}}
\bigl(p_{t-s}^{\alpha}\bigl((x_{1}-y)/2\bigr)+p_{t-s}^{\alpha}
\bigl((x_{2}-y)/2\bigr) \bigr)^{ \theta},\nonumber
\end{eqnarray}
$t>s\geq0$, $x_{1},x_{2},y\in\mathsf{R}$. Noting that
$p_{t-s}^{\alpha}(\cdot)\leq C (t-s)^{-1/\alpha}$, we obtain
\begin{eqnarray} \label{L2.2}\qquad
&&|p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}-y)|^{\theta}\nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq C \frac{|x_{1}-x_{2}|^{\delta\theta}}{(t-s)^{(\delta\theta
+\theta-1)/\alpha}} \bigl(p_{t-s}^{\alpha}\bigl((x_{1}-y)/2\bigr)+p_{t-s}
^{\alpha}\bigl((x_{2}-y)/2\bigr) \bigr),\nonumber
\end{eqnarray}
$t>s\geq0$, $x_{1},x_{2},y\in\mathsf{R}$. Therefore,
\begin{eqnarray*}
&& \int_{0}^{t}ds\int_{\mathsf{R}}dy\, p_{s}^{\alpha
}(y) |p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}-y)|^{\theta} \\
&&\qquad\leq C |x_{1}-x_{2}|^{\delta\theta}
\int_{0}^{t}ds (t-s)^{-(\delta\theta+\theta-1)/\alpha}\\
&&\qquad\quad{}\times\int_{\mathsf{R}}dy\, p_{s}^{\alpha}(y) \bigl(p_{t-s}^{\alpha
}\bigl((x_{1}-y)/2\bigr)+p_{t-s}^{\alpha}\bigl((x_{2}-y)/2\bigr) \bigr).
\end{eqnarray*}
By scaling of $ p^{\alpha}$,
\begin{eqnarray}
&&\int_{\mathsf{R}}dy\, p_{s}^{\alpha}(y) p_{t-s}^{\alpha
}\bigl((x-y)/2\bigr)\nonumber\\
&&\qquad= \frac{1}{2}\int_{\mathsf{R}}d
y\, p_{2^{-\alpha}s}^{\alpha}(y/2) p_{t-s}^{\alpha}\bigl((x
-y)/2\bigr) \nonumber\\
&&\qquad= \frac{1}{2} p_{2^{-\alpha}s+t-s}^{\alpha}(x/2) \\
&&\qquad= \frac{1}
{2} (2^{-\alpha}s+t-s)^{-1/\alpha} p_{1}^{\alpha}\bigl((2^{-\alpha
}s+t-s)^{-1/\alpha}x/2\bigr)\nonumber\\
&&\qquad\leq t^{-1/\alpha} p_{1}^{\alpha}(t^{-1/\alpha}x/2) = p_{t}
^{\alpha}(x/2),\nonumber
\end{eqnarray}
since $ 2^{-\alpha}t\leq2^{-\alpha}s+t-s\leq t$. As a result we
have the inequality
\begin{eqnarray}\qquad
&& \int_{0}^{t}ds\int_{\mathsf{R}}dy\, p_{s}^{\alpha
}(y) |p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}-y)|^{\theta
}\nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq C |x_{1}-x_{2}|^{\delta\theta}\bigl(p_{t}^{\alpha
}(x_{1}/2)+p_{t}^{\alpha}(x_{2}/2)\bigr)\int_{0}^{t}ds\, s^{-(\delta
\theta+\theta-1)/\alpha}.\nonumber
\end{eqnarray}
Noting that the latter integral is bounded by $ C (1+t)$, since
$ (\delta\theta+\theta-1)/\alpha<1$, we get the desired inequality.
\end{pf}
\subsection{An upper bound for a spectrally positive stable process}
Let $L=\{L_{t}\dvtx t\geq0\}$ denote a spectrally positive stable
process of
index $\kappa\in(1,2)$. Per definition, $L$ is an $\mathsf{R}$-valued
time-homogeneous process with independent increments and with Laplace
transform given by
\begin{equation} \label{Laplace}
\mathbf{E} e^{-\lambda L_{t}} = e^{t\lambda^{\kappa}
},\qquad \lambda,t\geq0.
\end{equation}
Note that $L$ is the unique (in law) solution to the following martingale
problem:
\begin{equation} \label{MP}
t\mapsto e^{-\lambda L_{t}}-\int_{0}^{t} ds\, e
^{-\lambda L_{s}}\lambda^{\kappa} \mbox{ is a martingale for any }
\lambda>0.
\end{equation}
Let $ \Delta L_{s}:=L_{s}-L_{s-}>0$ denote the jumps of $ L$.
\begin{lemma}[(Big values of the process in case of bounded
jumps)]\label{L3}
We have
\begin{eqnarray}
\mathbf{P} \Bigl( \sup_{0\leq u\leq t}L_{u}\mathsf{1}\Bigl\{\sup_{0\leq v\leq
u}\Delta L_{v}\leq y\Bigr\}\geq x \Bigr) \leq\biggl(\frac{C t}{xy^{\kappa
-1}} \biggr)^{ x/y},\nonumber\\[-8pt]\\[-8pt]
\eqntext{t>0, x,y>0.}
\end{eqnarray}
\end{lemma}
\begin{pf}
Since for $\tau>0$ fixed, $\{L_{\tau t}\dvtx t\geq0\}$ is equal to $\tau
^{1/\kappa}L$ in law, for the proof we may assume that $t=1$. Let $\{\xi
_{i}\dvtx i\geq1\}$ denote a family of independent copies of $L_{1 }$.
Set
\begin{equation}\quad
W_{ns} := \sum_{1\leq k\leq ns}\xi_{k},\qquad L_{s}^{(n)} := n^{-1/\kappa
}W_{ns},\qquad 0\leq s\leq1, n\geq1.
\end{equation}
Denote by $D_{[0,1]}$ the Skorohod space of c\`{a}dl\`{a}g functions
$ f\dvtx[0,1]\rightarrow\mathsf{R}$. For fixed $y>0$, let
$H\dvtx D_{[0,1]}\mapsto\mathsf{R}$ be defined by
\begin{equation}
H(f) = \sup_{0\leq u\leq1}f(u) \mathsf{1} \Bigl\{\sup_{0\leq v\leq u}\Delta
f(v)\leq y \Bigr\},\qquad f\in D_{[0,1] }.
\end{equation}
It is easy to verify that $H$ is continuous on the set
$D_{[0,1]}\setminus
J_{y}$ where $J_{y}:=\{f\in D_{[0,1]}\dvtx\Delta
f(v)=y\mbox{ for some }v\in[0,1]\}$. Since $\mathbf{P}(L\in
J_{y})=0$, from the invariance principle (see, e.g., Gikhman and Skorokhod
\cite{GikhmanSkorokhod1969}, Theorem 9.6.2) for $L^{(n)}$ we conclude
that
\begin{equation}
\mathbf{P}\bigl(H(L)\geq x\bigr) = \lim_{n\uparrow\infty}\mathbf{P}
\bigl(H\bigl(L^{(n)}\bigr)\geq x\bigr),\qquad x>0.
\end{equation}
Consequently, the lemma will be proved if we show that
\begin{eqnarray}\label{L3.1}
\mathbf{P} \Bigl( \sup_{0\leq u\leq1}W_{nu}\mathsf{1}\Bigl\{\max_{1\leq k\leq
nu}\xi_{k}\leq yn^{1/\kappa}\Bigr\}\geq xn^{1/\kappa} \Bigr)
\leq\biggl(\frac{C}{xy^{\kappa-1}} \biggr)^{x/y},\nonumber\\[-8pt]\\[-8pt]
\eqntext{x,y>0, n\geq1.}
\end{eqnarray}
To this end, for fixed $ y^{\prime},h\geq0$, we consider the
sequence,
\begin{equation}
\Lambda_{0}:=1,\qquad \Lambda_{n} := e^{hW_{n}}\mathsf{1}
\Bigl\{\max_{1\leq k\leq n}\xi_{k}\leq y^{\prime}\Bigr\},\qquad n\geq1.
\end{equation}
It is easy to see that
\begin{equation}
\mathbf{E}\{\Lambda_{n+1} | \Lambda_{n}=e^{hu}\} = e
^{hu} \mathbf{E}\{e^{hL_{1}}; L_{1}\leq y^{\prime}\} \qquad
\mbox{for all } u\in\mathsf{R},
\end{equation}
and that
\begin{equation}
\mathbf{E}\{\Lambda_{n+1} | \Lambda_{n}=0\} = 0.
\end{equation}
In other words,
\begin{equation} \label{L3.2}
\mathbf{E}\{\Lambda_{n+1} | \Lambda_{n}\} = \Lambda_{n} \mathbf{E}
\{e^{hL_{1}}; L_{1}\leq y^{\prime}\}.
\end{equation}
This means that $\{\Lambda_{n}\dvtx n\geq1\}$ is a supermartingale
(submartingale) if $h$ satisfies $\mathbf{E}\{e^{hL_{1}}; L_{1}\leq
y^{\prime}\}\leq1 $ (respectively, $\mathbf{E}\{e^{hL_{1}} ;L_{1}\leq
y^{\prime}\}\geq1 $). If $\Lambda_{n}$ is a submartingale, then by
Doob's inequality,
\begin{equation}
\mathbf{P}\Bigl(\max_{1\leq k\leq n}\Lambda_{k}\geq e^{hx^{\prime}
}\Bigr) \leq e^{-hx^{\prime}} \mathbf{E}\Lambda_{n},\qquad
x^{\prime}>0.
\end{equation}
But if $\Lambda_{n}$ is a supermartingale, then
\begin{equation}
\mathbf{P}\Bigl(\max_{1\leq k\leq n}\Lambda_{k}\geq e^{hx^{\prime}
}\Bigr) \leq e^{-hx^{\prime}} \mathbf{E}\Lambda_{0 }
= e^{-hx^{\prime}},\qquad x^{\prime}>0.
\end{equation}
From these inequalities and (\ref{L3.2}) we get
\begin{equation} \label{L3.3}
\mathbf{P}\Bigl(\max_{1\leq k\leq n}\Lambda_{k}\geq e^{hx^{\prime}
}\Bigr) \leq e^{-hx^{\prime}}\max\bigl\{1,(\mathbf{E}
\{e^{hL_{1}}; L_{1}\leq y^{\prime}\})^{n} \bigr\}.
\end{equation}
It was proved by Fuk and Nagaev (\cite{FukNagaev71} see the first
formula in the proof of Theorem~4 there) that
\[
\mathbf{E}\{e^{hL_{1}}; L_{1}\leq y^{\prime}\} \leq1
+ h\mathbf{E}\{L_{1 }; L_{1}\leq y^{\prime}\}
+ \frac{e^{hy^{\prime}}-1-hy^{\prime}}{(y^{\prime})^{2}
} V(y^{\prime}),\qquad h,y^{\prime}>0,
\]
where $ V(y^{\prime}):=\int_{-\infty}^{y^{\prime}}\mathbf{P}(L_{1}
\in du) u^{2}>0$. Noting that the assumption
$\mathbf{E}L_{1}=0$ yields that $ \mathbf{E}\{L_{1 }; L_{1}\leq
y^{\prime
}\}\leq0$, we obtain
\begin{equation} \label{Nag}
\mathbf{E}\{e^{hL_{1}}; L_{1}\leq y^{\prime}\} \leq1+\frac
{e^{hy^{\prime}}-1-hy^{\prime}}{(y^{\prime})^{2}} V(y^{\prime
}),\qquad h,y^{\prime}>0.
\end{equation}
Now note that
\begin{eqnarray}\label{eq:2.30}
&&\Bigl\{\max_{1\leq k\leq n}W_{k}\mathsf{1}\Bigl\{\max_{1\leq i\leq k}\xi_{i}
\leq y^{\prime}\Bigr\}\geq x^{\prime} \Bigr\} \nonumber\\
&&\qquad= \Bigl\{\max_{1\leq k\leq
n}e^{hW_{k}} \mathsf{1}\Bigl\{\max_{1\leq i\leq k}\xi_{i}\leq y^{\prime
}\Bigr\}\geq e^{hx^{\prime}} \Bigr\}\\
&&\qquad = \Bigl\{\max_{1\leq k\leq n}\Lambda_{k}\geq e^{hx^{\prime}
}\Bigr\}.\nonumber
\end{eqnarray}
Thus, combining (\ref{eq:2.30}), (\ref{Nag}) and (\ref{L3.3}), we get
\begin{eqnarray}
&&\mathbf{P} \Bigl(\max_{1\leq k\leq n}W_{k}\mathsf{1}\Bigl\{\max_{1\leq i\leq k}
\xi_{i}\leq y^{\prime}\Bigr\}\geq x^{\prime} \Bigr)
\nonumber\\
&&\qquad\leq
\mathbf{P}\Bigl(\max_{1\leq k\leq n}\Lambda_{k}\geq e^{hx^{\prime}}\Bigr)\\
&&\qquad\leq\exp\biggl\{-hx^{\prime}+\frac{e^{hy^{\prime}}-1-hy^{\prime}
}{(y^{\prime})^{2}} n V(y^{\prime}) \biggr\}.\nonumber
\end{eqnarray}
Choosing $h:=(y^{\prime})^{-1}\log(1+x^{\prime}y^{\prime}/n V(y^{\prime
}))$, we arrive, after some elementary calculations, at the bound,
\begin{equation}\hspace*{32pt}
\mathbf{P} \Bigl(\max_{1\leq k\leq n}W_{k}\mathsf{1}\Bigl\{\max_{1\leq i\leq k}
\xi_{i}\leq y^{\prime}\Bigr\}\geq x^{\prime} \Bigr) \leq\biggl( \frac
{e n V(y^{\prime})}{x^{\prime}y^{\prime}} \biggr)^{ x^{\prime
}/y^{\prime}},\qquad x^{\prime},y^{\prime}>0.
\end{equation}
Since $\mathbf{P}(L_{1}>u)\sim C u^{-\kappa}$ as $u\uparrow\infty$, we have
$ V(y^{\prime})\leq C (y^{\prime})^{2-\kappa}$ for all $y^{\prime}>0$.
Therefore,
\begin{equation} \label{L3.4}\hspace*{32pt}
\mathbf{P} \Bigl(\max_{1\leq k\leq n}W_{k}\mathsf{1}\Bigl\{\max_{1\leq i\leq k}
\xi_{i}\leq y^{\prime}\Bigr\}\geq x^{\prime} \Bigr) \leq\biggl(\frac{Cn}{x^{\prime
}(y^{\prime})^{\kappa-1}} \biggr)^{ x^{\prime}/y^{\prime}},\qquad x^{\prime
},y^{\prime}>0.
\end{equation}
Choosing finally $ x^{\prime}=xn^{1/\kappa}$, $y^{\prime
}=yn^{1/\kappa}$, we get (\ref{L3.1}) from (\ref{L3.4}). Thus, the
proof of the lemma is complete.
\end{pf}
\begin{lemma}[(Small process values)]\label{L.small.values}
There is a constant $ c_{\kappa}$ such that
\begin{equation}
\mathbf{P} \Bigl(\inf_{u\leq t}L_{u}<-x \Bigr)\leq\exp\biggl\{-c_{\kappa}
\frac{x^{\kappa/(\kappa-1)}}{t^{1/(\kappa-1)}} \biggr\},\qquad x,t>0.
\end{equation}
\end{lemma}
\begin{pf}
It is easy to see that for all $h>0$,
\begin{equation}
\mathbf{P} \Bigl(\inf_{u\leq t}L_{u}<-x \Bigr)= \mathbf{P} \Bigl(\sup_{s\leq
t}e^{-hL_{u}}>e^{hx} \Bigr).
\end{equation}
Applying Doob's inequality to the submartingale $t\mapsto
e^{-hL_{t}}$, we obtain
\begin{equation}
\mathbf{P} \Bigl(\inf_{u\leq t}L_{u}<-x \Bigr)\leq e^{-hx}
\mathbf{E} e^{-hL_{t}}.
\end{equation}
Taking into account definition (\ref{Laplace}), we have
\begin{equation}
\mathbf{P} \Bigl(\inf_{u\leq t}L_{u}<-x \Bigr)\leq\exp\{-hx+th^{\kappa}\}.
\end{equation}
Minimizing the function $h\mapsto-hx+th^{\kappa}$, we get the
inequality in
the lemma with $c_{\kappa}=(\kappa-1)/(\kappa)^{\kappa/(\kappa-1)}$.
\end{pf}
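For the reader's convenience, the omitted minimization reads as follows: the function $h\mapsto-hx+th^{\kappa}$ is convex with critical point $h_{*}=(x/(\kappa t))^{1/(\kappa-1)}$, and
\[
-h_{*}x+th_{*}^{\kappa}
= -\frac{x^{\kappa/(\kappa-1)}}{t^{1/(\kappa-1)}}
\bigl(\kappa^{-1/(\kappa-1)}-\kappa^{-\kappa/(\kappa-1)}\bigr)
= -\frac{\kappa-1}{\kappa^{\kappa/(\kappa-1)}}
\frac{x^{\kappa/(\kappa-1)}}{t^{1/(\kappa-1)}},
\]
which is the claimed bound with $c_{\kappa}=(\kappa-1)/\kappa^{\kappa/(\kappa-1)}$.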
\subsection{Local H\"{o}lder continuity with some index}\label{sec:2.3}
In this subsection we prove Theorem \ref{T.prop.dens}(a) for parameters
$\beta\geq\frac{\alpha-1}{2}$ (see Remark \ref{R.2.3}), whereas for parameters
$\beta<\frac{\alpha-1}{2}$ we obtain local H\"{o}lder continuity only with
a nonoptimal bound on the indices. We use the Kolmogorov criterion for local
H\"{o}lder continuity to get these results. The proof of
Theorem \ref{T.prop.dens}(a) for parameters $\beta<\frac{\alpha-1}{2}$
will be finished in Section \ref{S.3}.
Fix $ t>0$, $\mu\in\mathcal{M}_{\mathrm{f} }$, and suppose $
\alpha>1+\beta$. Since our theorem is trivially valid for $\mu=0$, from
now on we everywhere suppose that $\mu\neq0$. Since we are
dealing with the case $ d=1$, the random measure $ X_{t}
$ is a.s. absolutely continuous. Recall decomposition
(\ref{rep.dens}).
Clearly, the deterministic function $Z_{t}^{1}$ is Lipschitz
continuous by Lemma \ref{L1}. Next we turn to the random function $Z_{t}
^{3}$.
\begin{lemma}[(H\"{o}lder continuity of $Z_{t}^{3}$)]\label{L.HoeldZ3}
With probability one, $ Z_{t}^{3}$ is H\"{o}lder continuous of each
index $ \eta<\alpha-1$.
\end{lemma}
\begin{pf}
From Lemma \ref{L1} we get for fixed $\delta\in(0,\alpha-1)$,
\[
|p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}-y)| \leq
C \frac{|x_{1}-x_{2}|^{\delta}}{(t-s)^{(\delta+1)/\alpha}},
\qquad t>s>0, x_{1},x_{2},y\in\mathsf{R}.
\]
Therefore,
\begin{eqnarray}\label{2.38}
&&|Z_{t}^{3}(x_{1})-Z_{t}^{3}(x_{2})|\nonumber\\
&&\qquad\leq |a|\int_{0}
^{t} ds\int_{\mathsf{R}} X_{s}(dy) |p_{t-s}^{\alpha
}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}-y)|\nonumber\\[-8pt]\\[-8pt]
&&\qquad\leq C \Bigl(\sup_{s\leq t}X_{s}(\mathsf{R}) \Bigr) |x_{1}-x_{2}|^{\delta
}\int_{0}^{t} ds (t-s)^{-(\delta+1)/\alpha} \nonumber\\
&&\qquad\leq C \frac{\alpha}{\alpha-1-\delta} \Bigl(\sup_{s\leq t}X_{s}
(\mathsf{R}) \Bigr) |x_{1}-x_{2}|^{\delta},\qquad
x_{1},x_{2}\in\mathsf{R}.\nonumber
\end{eqnarray}
Consequently,
\begin{equation}
\sup_{x_{1}\neq x_{2}} \frac{|Z_{t}^{3}(x_{1})-Z_{t}^{3}(x_{2}
)|}{|x_{1}-x_{2}|^{\delta}} < \infty\qquad\mbox{a.s.,}
\end{equation}
and the proof is complete.
\end{pf}
Our main work concerns $Z_{t}^{2}$.
\begin{lemma}[($q$-norm)]\label{L4}
For each $\theta\in(1+\beta,2)$ and $q\in(1,1+\beta)$,
\begin{eqnarray} \label{L4.1}
&& \mathbf{E}|Z_{t}^{2}(x_{1})-Z_{t}^{2}(x_{2})|^{q}\nonumber\\
&&\qquad \leq C \biggl[ \biggl(\int_{0}^{t}ds\int_{\mathsf{R}}S_{s}^{\alpha
}\mu(dy) |p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}
(x_{2}-y)|^{\theta} \biggr)^{ q/\theta}\nonumber\\[-8pt]\\[-8pt]
&&\qquad\quad{} +\int_{0}^{t}ds\int_{\mathsf{R}}S_{s}^{\alpha}
\mu(dy) |p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}
(x_{2}-y)|^{q} \biggr],\nonumber\\
\eqntext{x_{1},x_{2}\in\mathsf{R}.}
\end{eqnarray}
\end{lemma}
The proof can be done similarly to the proof of inequality (3.1) in
\cite{LeGallMytnik2003}.
\begin{corollary}[($q$-norm)]\label{L5}
For each $ \theta\in(1+\beta,2)$,
$q\in(1,1+\beta)$ and $ \delta>0$ satisfying $ \delta<\min
\{1,(1+\alpha-\theta)/\theta,(1+\alpha-q)/q\}$,
\begin{equation} \label{in.L5}
\mathbf{E}|Z_{t}^{2}(x_{1})-Z_{t}^{2}(x_{2})|^{q} \leq
C |x_{1}-x_{2}|^{\delta q},\qquad x_{1},x_{2}\in\mathsf{R}.
\end{equation}
\end{corollary}
\begin{pf}
For every $\varepsilon\in(1,1+\alpha)$,
\begin{eqnarray}
&& \int_{0}^{t}ds\int_{\mathsf{R}}S_{s}^{\alpha}\mu(d
y) |p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}
-y)|^{\varepsilon}\nonumber\\
&&\qquad = \int_{\mathsf{R}}\mu(dz)\int_{0}^{t}d
s\int_{\mathsf{R}}dy\, p_{s}^{\alpha}(y-z) |p_{t-s}^{\alpha
}(x_{1}-z)-p_{t-s}^{\alpha}(x_{2}-z)|^{\varepsilon}\\
&&\qquad = \int_{\mathsf{R}}\mu(dz)\int_{0}^{t}d
s\int_{\mathsf{R}}dy\, p_{s}^{\alpha}(y) |p_{t-s}^{\alpha}
(x_{1}-z-y)-p_{t-s}^{\alpha}(x_{2}-z-y)|^{\varepsilon}.\hspace*{-25pt}\nonumber
\end{eqnarray}
Using Lemma \ref{L2}, we get for every positive $\delta<\min\{
1,(1+\alpha
-\varepsilon)/\varepsilon\}$,
\begin{eqnarray*}
&& \int_{0}^{t}ds\int_{\mathsf{R}}S_{s}^{\alpha}\mu(d
y) |p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}
-y)|^{\varepsilon}\\
&&\qquad \leq C |x_{1}-x_{2}|^{\delta\varepsilon}\int_{\mathsf{R}}
\mu(dz) \bigl(p_{t}^{\alpha}\bigl((x_{1}-z)/2\bigr)+p_{t}^{\alpha
}\bigl((x_{2}-z)/2\bigr) \bigr) \\
&&\qquad\leq C |x_{1}-x_{2}|^{\delta\varepsilon},
\end{eqnarray*}
since $ \mu,t$ are fixed. Applying this bound to both summands at
the right-hand side of (\ref{L4.1}) finishes the proof of the lemma.
\end{pf}
\begin{corollary}[(Finite $q$-norm of density)]\label{C1}
If $ K\subset\mathsf{R}$ is a compact and $ 1\leq q<1+\beta$, then
\begin{equation}
\mathbf{E} \Bigl(\sup_{x\in K}X_{t}(x) \Bigr)^{ q}<\infty.
\end{equation}
\end{corollary}
\begin{pf}
By Jensen's inequality, we may additionally assume that $q>1$. It
follows from
(\ref{rep.dens}) that
\begin{equation}\hspace*{32pt}
\Bigl(\sup_{x\in K}X_{t}(x) \Bigr)^{ q} \leq4 \Bigl( \Bigl(\sup_{x\in K}
\mu\ast p_{t}^{\alpha} (x) \Bigr)^{ q}+\sup_{x\in K}|Z_{t}
^{2}(x)|^{q}+\sup_{x\in K}|Z_{t}^{3}(x)|^{q} \Bigr).
\end{equation}
Clearly, the first term at the right-hand side is finite. Furthermore,
according to Corollary 1.2 of Walsh \cite{Walsh1986}, inequality (\ref{in.L5})
implies that
\begin{equation}
\mathbf{E}\sup_{x\in K}|Z_{t}^{2}(x)|^{q}<\infty.
\end{equation}
Finally, proceeding as with the derivation of (\ref{2.38}), we obtain
\begin{equation}
\sup_{x\in K}|Z_{t}^{3}(x)|\leq C \sup_{s\leq t}X_{s}
(\mathsf{R}) \leq C e^{|a|t}\sup_{s\leq t} e^{-as}
X_{s}(\mathsf{R}).
\end{equation}
Noting that $ s\mapsto e^{-as}X_{s}(\mathsf{R})$ is a
martingale, and using Doob's inequality, we conclude that
\begin{equation}
\mathbf{E}\sup_{x\in K}|Z_{t}^{3}(x)|^{q} \leq C \mathbf{E}
( e^{-at}X_{t}(\mathsf{R}))^{ q} < \infty.
\end{equation}
This completes the proof.
\end{pf}
Furthermore, Corollary \ref{L5} allows us to prove the following result:
\begin{proposition}[(Local H\"{o}lder continuity of $Z_{t}^{2}$)]\label{P1}
With probability one, $Z_{t}^{2}$ has a version which is locally
H\"{o}lder continuous of all orders $\eta>0$ satisfying
\begin{equation}\label{2cases}
\eta< \eta_{\mathrm{c}}^{\prime} :=
\cases{\dfrac{\alpha}{1+\beta}-1, &\quad if
$\beta\geq(\alpha-1)/2$,\cr
\dfrac{\beta}{1+\beta}, &\quad if $\beta\leq(\alpha-1)/2$.}
\end{equation}
\end{proposition}
\begin{pf}
Let $\theta$, $q$ and $\delta$ satisfy the conditions in Corollary \ref{L5}.
Then almost surely $Z_{t}^{2}$ has a version which is locally H\"{o}lder
continuous of all orders smaller than $\delta-1/q$ (cf.
\cite{Walsh1986}, Corollary 1.2).
Let $\varepsilon>0$ satisfy $ \varepsilon<1-\beta$ and
$ \varepsilon<\beta$. Then $\theta=\theta_{\varepsilon}
:=1+\beta+\varepsilon$ and $ q=q_{\varepsilon}:=1+\beta
-\varepsilon$ are in the range of parameters we are just
considering. Moreover, the condition $\delta<\min\{1,(1+\alpha
-\theta)/\theta,(1+\alpha-q)/q\}$ reads as
\begin{equation}
\delta< \min\biggl\{1, \frac{\alpha-\beta-\varepsilon}{1+\beta+\varepsilon
} , \frac{\alpha-\beta+\varepsilon}{1+\beta-\varepsilon}
\biggr\} =: f(\varepsilon).
\end{equation}
Hence, for all sufficiently small $\varepsilon>0$ we can choose $ \delta
=\delta_{\varepsilon}:=f(\varepsilon)-\varepsilon$. Thus,
$Z_{t}^{2}$ has a version which is locally H\"{o}lder continuous of all orders
smaller than $\delta_{\varepsilon}-1/q_{\varepsilon}$ for this
choice of $ \theta_{\varepsilon},q_{\varepsilon},\delta_{\varepsilon}
$. Now
\[
\delta_{\varepsilon}-\frac{1}{q_{\varepsilon}} \mathop{\longrightarrow
}_{\varepsilon
\downarrow0} \min\biggl\{1, \frac{\alpha-\beta}{1+\beta
} , \frac{\alpha-\beta}{1+\beta} \biggr\}-\frac{1}{1+\beta} = \min
\biggl\{1, \frac{\beta}{1+\beta} , \frac{\alpha-\beta-1}{1+\beta}
\biggr\},
\]
where this limit coincides with the claimed value of $ \eta_{\mathrm{c}
}^{\prime}$, completing the proof.
\end{pf}
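To illustrate the gap left in the regime $\beta<(\alpha-1)/2$ (the numbers are for illustration only), take for instance $\alpha=1.8$ and $\beta=0.2$, so that $\beta<(\alpha-1)/2=0.4$. Then
\[
\eta_{\mathrm{c}}^{\prime} = \frac{\beta}{1+\beta} = \frac{1}{6}
\qquad\mbox{whereas}\qquad
\eta_{\mathrm{c}} = \frac{\alpha}{1+\beta}-1 = \frac{1}{2},
\]
so the moment method of this subsection only yields local H\"{o}lder exponents below $1/6$ here, and the jump analysis of Section \ref{S.3} is needed to reach the optimal threshold $\eta_{\mathrm{c}}=1/2$.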
\begin{remark}[{[Proof of Theorem \ref{T.prop.dens}(a) for $\beta\geq
\frac{\alpha-1}{2}$]}]\label{R.2.3}
By Lemma \ref{L.HoeldZ3} and Proposition \ref{P1}, the
proof of Theorem \ref{T.prop.dens}(a) is finished for $\beta\geq\frac
{\alpha-1}{2}$.
\end{remark}
\subsection{Further estimates}
We continue to fix $t>0$, $\mu\in\mathcal{M}_{\mathrm{f}}
\setminus\{0\}$, and to suppose $ \alpha>1+\beta$.
\begin{lemma}[(Local boundedness of uniformly smeared out
density)]\label{L6}
Fix a nonempty compact $ K\subset\mathsf{R}$ and a constant $c\geq1$.
Then
\begin{equation}
V := V_{t}^{c}(K) := \sup_{0\leq s\leq t, x\in K}S_{c (t-s)}^{\alpha
}X_{s} (x) < \infty\qquad\mbox{almost surely}.
\end{equation}
\end{lemma}
\begin{pf}
Assume that the statement of the lemma does not hold,\break that is, there
exists an
event $A$ of positive probability such that\break $\sup_{0\leq s\leq t, x\in
K}S_{c (t-s)}^{\alpha}X_{s} (x)=\infty$ for every $\omega\in A$. Let
$n\geq1$. Put
\[
\tau_{n} := \cases{
\inf\bigl\{s<t\dvtx\mbox{there exists }x\in K\mbox{ such that }S_{c (t-s)}
^{\alpha}X_{s} (x)>n \bigr\}, &\quad $\omega\in A$,\cr
t,&\quad $\omega\in A^{\mathrm{c}}$.}
\]
If $ \omega\in A$, choose $x_{n}=x_{n}(\omega)\in K$ such that
$ S_{c (t-\tau_{n})}^{\alpha}X_{\tau_{n}}(x_{n})>n$ whereas if
$ \omega\in A^{\mathrm{c}}$, take any $x_{n}=x_{n}(\omega)\in K$.
Using the strong Markov property gives
\begin{eqnarray}\label{56}
\mathbf{E}S_{(c-1)(t-\tau_{n})}^{\alpha}X_{t} (x_{n}) &=& \mathbf{EE}
\bigl[S_{(c-1)(t-\tau_{n})}^{\alpha}X_{t} (x_{n}) | \mathcal{F}
_{\tau_{n}}\bigr]\nonumber\\
&=& \mathbf{E} e^{a(t-\tau_{n})}S_{(c-1)(t-\tau_{n})}^{\alpha
}S_{(t-\tau_{n})}^{\alpha}X_{\tau_{n}}(x_{n}) \\
&\geq& e^{-|a|t}
\mathbf{E}S_{c (t-\tau_{n})}^{\alpha}X_{\tau_{n}}(x_{n})\nonumber
\end{eqnarray}
[with $ e^{a(t-\tau_{n})}$ coming from the noncriticality of
branching in (\ref{not.Psi})]. From the definition of $(\tau
_{n},x_{n})$, we get
\begin{equation} \label{to.infty}
\mathbf{E}S_{c (t-\tau_{n})}^{\alpha}X_{\tau_{n}}(x_{n}) \geq n \mathbf
{P}
(A)\rightarrow\infty\qquad\mbox{as } n\uparrow\infty.
\end{equation}
In order to get a contradiction, we want to prove boundedness in $n$ of the
expectation in (\ref{56}). If $c=1$, then
\begin{equation}
\mathbf{E}X_{t}(x_{n}) \leq\mathbf{E}\sup_{x\in K}X_{t}(x) < \infty,
\end{equation}
the last step by Corollary \ref{C1}. Now suppose $c>1$. Choosing a compact
$K_{1}\supset K$ satisfying $\mathrm{dist}(K,(K_{1})^{\mathrm{c}
})\geq1$, we have
\begin{eqnarray*}
&& \mathbf{E}S_{(c-1)(t-\tau_{n})}^{\alpha}X_{t} (x_{n})\\
&&\qquad = \mathbf{E}\int_{K_{1}}dy\, X_{t}(y) p_{(c-1)(t-\tau_{n}
)}^{\alpha}(x_{n}-y)\\
&&\qquad\quad{} + \mathbf{E}\int_{(K_{1})^{\mathrm{c}}}d
y\, X_{t}(y) p_{(c-1)(t-\tau_{n})}^{\alpha}(x_{n}-y)\\
&&\qquad \leq\mathbf{E}\sup_{y\in K_{1}}X_{t}(y)+\mathbf{E}X_{t}
(\mathsf{R})\sup_{y\in(K_{1})^{\mathrm{c}}, x\in K, 0\leq s\leq
t }p_{(c-1)s}^{\alpha}(x-y).
\end{eqnarray*}
By our choice of $ K_{1}$ we obtain the bound,
\begin{equation}
\mathbf{E}S_{(c-1)(t-\tau_{n})}^{\alpha}X_{t} (x_{n}) \leq\mathbf{E}
\sup_{y\in K_{1}}X_{t}(y)+C = C,
\end{equation}
the last step by Corollary \ref{C1}. Altogether, (\ref{56}) is bounded
in $n$,
and the proof is finished.
\end{pf}
\begin{lemma}[(Randomly weighted kernel increments)]\label{L7}
Fix $ \theta\in[1,1+\alpha)$, $ \delta\in[0,1]$ with
$ \delta<(1+\alpha-\theta)/\theta$, and a nonempty compact
$ K\subset\mathsf{R}$. Then
\begin{eqnarray}
&&\int_{0}^{t}ds\int_{\mathsf{R}}X_{s}(dy) |p_{t-s}
^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}-y)|^{\theta}
\nonumber\\[-8pt]\\[-8pt]
&&\qquad\leq C V |x_{1}-x_{2}|^{\delta\theta},\qquad x_{1},x_{2}\in
K, \mbox{ a.s.},\nonumber
\end{eqnarray}
with $ V=V_{t}^{2^{\alpha}}(K)$ from Lemma \ref{L6}.
\end{lemma}
\begin{pf}
Using (\ref{L2.2}) gives
\begin{eqnarray*}
&& \int_{0}^{t}ds\int_{\mathsf{R}}X_{s}(dy) |p_{t-s}
^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}-y)|^{\theta}\\
&&\qquad \leq
C |x_{1}-x_{2}|^{\delta\theta}
\int_{0}^{t}ds (t-s)^{-(\delta\theta+\theta-1)/\alpha}\\
&&\qquad\quad{}\times\int_{\mathsf{R}}X_{s}(dy) \bigl(p_{t-s}^{\alpha}\bigl((x_{1}
-y)/2\bigr)+p_{t-s}^{\alpha}\bigl((x_{2}-y)/2\bigr) \bigr),
\end{eqnarray*}
uniformly in $ x_{1},x_{2}\in\mathsf{R}$. Recalling the scaling
property of $p^{\alpha}$, we get
\begin{eqnarray*}
&& \int_{0}^{t}ds\int_{\mathsf{R}}X_{s}(dy) |p_{t-s}
^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}-y)|^{\theta}\\
&&\qquad \leq C |x_{1}-x_{2}|^{\delta\theta}\int_{0}^{t}d
s (t-s)^{-(\delta\theta+\theta-1)/\alpha} \bigl(S_{2^{\alpha}(t-s)}^{\alpha
}X_{s}(x_{1})+S_{2^{\alpha}(t-s)}^{\alpha}X_{s}(x_{2}) \bigr).
\end{eqnarray*}
We complete the proof by applying Lemma \ref{L6}.
\end{pf}
\begin{remark}[(Lipschitz continuity of $Z_{t}^{3}$)]\label{R.Lipsch}
Using Lemma \ref{L7} with $\theta=1=\delta$, we see that $Z_{t}^{3}$ is
in fact a.s. Lipschitz continuous.
\end{remark}
Let $ \Delta X_{s}:=X_{s}-X_{s-}$ denote the jumps of the
measure-valued process $ X$.
\begin{lemma}[(Total jump mass)]\label{L8}
Let $\varepsilon>0$ and $ \gamma\in(0,(1+\beta)^{-1})$. There exists a
constant $c_{\mazinti{(\ref{inL8})}}=c_{\mazinti{(\ref{inL8})}}(\varepsilon,\gamma)$ such
that
\begin{equation} \label{inL8}
\mathbf{P} \bigl(|\Delta X_{s}|>c_{\mazinti{(\ref{inL8})}} (t-s)^{(1+\beta)^{-1}-\gamma
}\mbox{ for some }s<t \bigr)\leq\varepsilon.
\end{equation}
\end{lemma}
\begin{pf}
Recall the random measure $N$ from Lemma \ref{L.mart.dec}(a). For any $c>0$,
set
\begin{eqnarray}\qquad
Y_{0} &:=& N \bigl([0,2^{-1}t)\times\mathsf{R}\times(c 2^{-\lambda
}t^{\lambda},\infty) \bigr),
\\
Y_{n} &:=& N
\bigl(\bigl[(1-2^{-n})t,(1-2^{-n-1})t\bigr)\nonumber\\[-8pt]\\[-8pt]
&&\hspace*{22.01pt}{}\times\mathsf{R}
\times\bigl(c 2^{-\lambda(n+1)}t^{\lambda},\infty\bigr) \bigr),\qquad
n\geq1,\nonumber
\end{eqnarray}
where $\lambda:=(1+\beta)^{-1}-\gamma$. It is easy to see that
\begin{equation} \label{L12.1}\qquad
\mathbf{P} \bigl(|\Delta X_{s}|>c (t-s)^{\lambda}\mbox{ for some }
s<t \bigr) \leq\mathbf{P} \Biggl(\sum_{n=0}^{\infty}Y_{n}\geq1 \Biggr) \leq
\sum_{n=0}^{\infty}\mathbf{E}Y_{n},
\end{equation}
where in the last step we have used the classical Markov inequality.
From the formula for the compensator $\hat{N}$ of $N$ in
Lemma \ref{L.mart.dec}(b),
\begin{equation}\quad
\mathbf{E}Y_{n} = \varrho\int_{(1-2^{-n})t}^{(1-2^{-n-1})t}d
s\, \mathbf{E}X_{s}(\mathsf{R})\int_{c 2^{-\lambda(n+1)}t^{\lambda
}}^{\infty
}dr\, r^{-2-\beta},\qquad n\geq1.
\end{equation}
Now
\begin{equation} \label{65}
\mathbf{E}X_{s}(\mathsf{R}) = X_{0}(\mathsf{R}) e^{as} \leq
|\mu| e^{|a|t} =: c_{\mazinti{(\ref{65})}}.
\end{equation}
Consequently,
\begin{equation} \label{L12.2}
\mathbf{E}Y_{n} \leq\frac{\varrho}{1+\beta} c_{\mazinti{(\ref{65})}}c^{-1-\beta
} 2^{-(n+1)\gamma(1+\beta)} t^{\gamma(1+\beta)}.
\end{equation}
Analogous calculations show that (\ref{L12.2}) remains valid also in
the case
$n=0$. Therefore,
\begin{eqnarray}\label{L12.3}
\sum_{n=0}^{\infty}\mathbf{E}Y_{n} &\leq& \frac{\varrho}{1+\beta}
c_{\mazinti{(\ref{65})}}c^{-1-\beta} t^{\gamma(1+\beta)}\sum_{n=0}^{\infty}
2^{-(n+1)\gamma(1+\beta)}\nonumber\\[-8pt]\\[-8pt]
&=& \frac{\varrho}{1+\beta} c_{\mazinti{(\ref{65})}}c^{-1-\beta} t^{\gamma
(1+\beta)}
\frac{2^{-\gamma(1+\beta)}}{1-2^{-\gamma(1+\beta)}}.\nonumber
\end{eqnarray}
Choosing $c=c_{\mazinti{(\ref{inL8})}}$ such that the expression in (\ref{L12.3}) equals
$\varepsilon$, and combining with (\ref{L12.1}), the proof is complete.
\end{pf}
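For concreteness (all numerical values below are arbitrary examples), the constant $c_{\mazinti{(\ref{inL8})}}$ obtained by equating (\ref{L12.3}) with $\varepsilon$ can be written out explicitly; a minimal \texttt{Python} sketch of this bookkeeping:
\begin{verbatim}
# Illustrative bookkeeping for Lemma (inL8): choose c so that the series
# bound (L12.3) equals epsilon.  All parameter values are arbitrary.
import math

a, b, beta = 0.3, 1.0, 0.5          # hypothetical branching parameters
t, mu_mass = 1.0, 2.0               # time horizon and total mass |mu|
eps, gamma = 0.01, 0.25             # requires gamma < 1/(1+beta)

rho = b * (1.0 + beta) * beta / math.gamma(1.0 - beta)   # (not.rho)
c65 = mu_mass * math.exp(abs(a) * t)                     # constant in (65)
q = 2.0 ** (-gamma * (1.0 + beta))                       # dyadic factor

series_without_c = ((rho / (1.0 + beta)) * c65
                    * t ** (gamma * (1.0 + beta)) * q / (1.0 - q))
c = (series_without_c / eps) ** (1.0 / (1.0 + beta))
print("c =", c)
# Then P(|Delta X_s| > c*(t-s)**(1/(1+beta)-gamma) for some s < t) <= eps.
\end{verbatim}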
\subsection{Representation as time-changed stable process}
We return to general \mbox{$t>0$}. Recall the martingale measure $ M$
related to the martingale in Lemmas \ref{L.mart.dec}(c) and \ref{L.Lp}.
\begin{lemma}[(Representation as time-changed stable process)]\label{L9}
Suppose $ p\in(1+\beta,2)$ and let $ \psi\in\mathcal{L}_{\mathrm
{loc}}^{p}(\mu)$
with $\psi\geq0$. Then there exists a spectrally positive $(1+\beta)$-stable
process $\{L_{t}\dvtx t\geq0\}$ such that
\begin{equation}
Z_{t}(\psi) := \int_{(0,t]\times\mathsf{R}} M
(d(s,y) ) \psi(s,y) = L_{T(t) },\qquad t\geq0,
\end{equation}
where $T(t):=\int_{0}^{t}d
s\int_{\mathsf{R}}X_{s}(dy) (\psi(s,y))^{1+\beta}$.
\end{lemma}
\begin{pf}
Let us write It\^{o}'s formula for $ e^{-Z_{t}(\psi)}$
\begin{eqnarray}
e^{-Z_{t}(\psi)}-1
&=& \mbox{local martingale}\nonumber\\
&&{} +\varrho\int_{0}^{t}ds\, e^{-Z_{s}(\psi)}
\int_{\mathsf{R}}X_{s}(dy)\\
&&\hspace*{10pt}{}\times\int_{0}^{\infty}d
r \bigl( e^{-r\psi(s,y)}-1+r \psi(s,y)\bigr) r^{-2-\beta}.\nonumber
\end{eqnarray}
Define $ \tau(t):=T^{-1}(t)$, and put $ t^{\ast}:=\inf\{
t\dvtx\tau(t)=\infty\}$. Then it is easy to get for every
$v>0$,
\begin{eqnarray}\hspace*{32pt}
e^{-vZ_{\tau(t)}(\psi)} & = & 1+\int_{0}^{t}d
s\, e^{-vZ_{\tau(s)}(\psi)} \frac{X_{\tau(s)}(v^{1+\beta}
\psi^{1+\beta}(s,\cdot))}{X_{\tau(s)}(\psi^{1+\beta}(s,\cdot
))}+\mbox{loc. mart.}\nonumber\\[-8pt]\\[-8pt]
& = & 1+\int_{0}^{t}ds\, e^{-vZ_{\tau(s)}(\psi)} v^{1+\beta
}+\mbox{loc. mart.,}\qquad t\leq t^{\ast}.\nonumber
\end{eqnarray}
Since the local martingale is bounded, it is in fact a martingale. Let
$ \tilde{L}$ denote a spectrally positive process of index
$1+\beta$, independent of $X$. Define
\begin{equation}
L_{t} := \cases{Z_{\tau(t)}(\psi), &\quad $t\leq t^{\ast}$,\cr
Z_{\tau(t^{\ast})}(\psi)+\tilde{L}_{t-t^{\ast} }, &\quad $t>t^{\ast}$
\mbox{ (if $t^{\ast}<\infty$)}.}
\end{equation}
Then we can easily get that $L$ satisfies the martingale problem (\ref{MP})
with $\kappa$ replaced by $1+\beta$. Now by time change back we obtain
\begin{equation}
Z_{t}(\psi)=\tilde{L}_{T(t)}=L_{T(t) },
\end{equation}
completing the proof.
\end{pf}
\section{Local H\"{o}lder continuity}\label{S.3}
\mbox{}
\begin{pf*}{Proof of Theorem \protect\ref{T.prop.dens}\textup{(a)}}
We continue to assume that $d=1$, and that $ t>0$ and
$\mu\in\mathcal{M}_{\mathrm{f}}\setminus\{0\}$ are fixed. For $\beta
\geq(\alpha-1)/2$ the desired existence of a locally H\"{o}lder continuous
version of $ Z_{t}^{2}$ of required orders is already proved in
Proposition \ref{P1}. Therefore, in what follows we shall consider the
complementary case $\beta<(\alpha-1)/2$. Fix any compact set $K$ and
$x_{1}<x_{2}$ belonging to it. By definition (\ref{rep.dens}) of
$Z_{t}^{2}
$,
\begin{eqnarray}\quad
Z_{t}^{2}(x_{1})-Z_{t}^{2}(x_{2}) &=& \int_{(0,t]\times\mathsf{R}} M (
d(s,y) ) \bigl(p_{t-s}^{\alpha}(x_{1}
-y)-p_{t-s}^{\alpha}(x_{2}-y)\bigr)\nonumber\\
&=& \int_{(0,t]\times\mathsf{R}} M (d
(s,y) ) \varphi_{+}(s,y)\\
&&{}-\int_{(0,t]\times\mathsf{R}} M
(d(s,y)) \varphi_{-}(s,y),\nonumber
\end{eqnarray}
where $\varphi_{+}(s,y)$ and $\varphi_{-}(s,y)$ are the positive and negative
parts of $ p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}-y)$. It is easy
to check that $\varphi_{+}$ and $\varphi_{-}$ satisfy the assumptions in
Lem\-ma~\ref{L9}. Thus, there exist stable processes $L^{1}$ and $L^{2}$ such
that
\begin{equation}\label{T1.1}
Z_{t}^{2}(x_{1})-Z_{t}^{2}(x_{2}) = L_{T_{+}}^{1}-L_{T_{- }}^{2},
\end{equation}
where $ T_{\pm}:=\int_{0}^{t}d
s\int_{\mathsf{R}}X_{s}(dy) (\varphi_{\pm}(s,y))^{1+\beta
}$.
The idea behind the proof of the existence of the required version of
$ Z_{t}^{2}$ is as follows. We first control the jumps of $L^{1}$ and $L^{2}$
for $t\leq T_{\pm}$ and then use Lem\-ma~\ref{L3} to get the necessary bounds
on $L_{T_{+ }}^{1},L_{T_{- }}^{2}$ themselves.
Fix any $\varepsilon\in(0,1)$. According to Lemma \ref{L6}, there
exists a constant $c_{\varepsilon}$ such that
\begin{equation}\label{13_04}
\mathbf{P}(V\leq c_{\varepsilon})\geq1-\varepsilon,
\end{equation}
where $ V=V_{t}^{2^{\alpha}}(K)$. Consider again $ \gamma
\in(0,(1+\beta)^{-1})$ and set
\begin{equation} \label{A.eps}
A^{\varepsilon} := \bigl\{|\Delta X_{s}|\leq c_{\mazinti{(\ref{inL8})}}
(t-s)^{(1+\beta)^{-1}-\gamma}\mbox{ for all }s<t \bigr\}\cap\{V\leq
c_{\varepsilon}\}.
\end{equation}
By Lemma \ref{L8} and by (\ref{13_04}),
\begin{equation} \label{A.eps.est}
\mathbf{P}(A^{\varepsilon})\geq1-2\varepsilon.
\end{equation}
Define $Z_{t}^{2,\varepsilon}(x):=Z_{t}^{2}(x)\mathsf{1}(A^{\varepsilon})$.
We first show that $Z_{t}^{2,\varepsilon}$ has a version which
is locally H\"{o}lder continuous of all orders $\eta$ smaller than
$\eta_{\mathrm{c} }$. It follows from (\ref{T1.1}) that
\begin{eqnarray} \label{T1.2}
&& \mathbf{P} \bigl(|Z_{t}^{2,\varepsilon}(x_{1})-Z_{t}^{2,\varepsilon
}(x_{2})|\geq2r |x_{1}-x_{2}|^{\eta} \bigr)\nonumber\\
&&\qquad \leq\mathbf{P}(L_{T_{+}}^{1}\geq r |x_{1}-x_{2}|^{\eta
}, A^{\varepsilon})\\
&&\qquad\quad{}+\mathbf{P}(L_{T_{-}}^{2}\geq r |x_{1}
-x_{2}|^{\eta}, A^{\varepsilon}),\qquad r>0.\nonumber
\end{eqnarray}
Note that on $A^{\varepsilon}$ the jumps of $M (
d(s,y) ) $ do not exceed
$c_{\mazinti{(\ref{inL8})} }(t-s)^{(1+\beta)^{-1}-\gamma}$ since the jumps
of $X$ are bounded by the same values on $A^{\varepsilon}$. Hence the
jumps of
the process $ u\mapsto$ $\int_{(0,u]\times\mathsf{R}} M (
d(s,y) ) \varphi_{\pm}(s,y)$ are bounded by
\begin{equation} \label{T1.3}
c_{\mazinti{(\ref{inL8})}} \sup_{s<t}(t-s)^{(1+\beta)^{-1}-\gamma} \sup_{y\in
\mathsf{R}}\varphi_{\pm}(s,y).
\end{equation}
Obviously,
\begin{equation} \label{T1.4}
\sup_{y\in\mathsf{R}}\varphi_{\pm}(s,y) \leq{\sup_{y\in\mathsf{R}
}} |p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}-y) |.
\end{equation}
Assume additionally that $ \gamma<\eta_{\mathrm{c}}/\alpha$. Using
Lemma \ref{L1} with $\delta=\eta_{\mathrm{c}}-\alpha\gamma$ gives
\begin{eqnarray} \label{T1.5}
&& {\sup_{y\in\mathsf{R}}} |p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}
(x_{2}-y) | \nonumber\\
&&\qquad \leq C |x_{1}-x_{2}|^{\eta_{\mathrm{c}}-\alpha\gamma} (t-s)^{-\eta
_{\mathrm{c}}/\alpha+\gamma} \sup_{z\in\mathsf{R}}p_{t-s}^{\alpha
}(z)\nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq C |x_{1}-x_{2}|^{\eta_{\mathrm{c}}-\alpha\gamma} (t-s)^{-\eta
_{\mathrm{c}}/\alpha+\gamma} (t-s)^{-1/\alpha}\nonumber\\
&&\qquad = C |x_{1}-x_{2}|^{\eta_{\mathrm{c}}-\alpha\gamma} (t-s)^{-
{1}/({1+\beta})+\gamma}.\nonumber
\end{eqnarray}
Combining (\ref{T1.3})--(\ref{T1.5}), we see that all jumps of
$ u\mapsto\int_{(0,u]\times\mathsf{R}} M (
d(s,y) ) \varphi_{\pm}(s,y)$ on the set $A^{\varepsilon}$ are
bounded by
\begin{equation} \label{C}
c_{\mazinti{(\ref{C})}} |x_{1}-x_{2}|^{\eta_{\mathrm{c}}-\alpha\gamma}
\end{equation}
for some constant $ c_{\mazinti{(\ref{C})}}=c_{\mazinti{(\ref{C})}}(\varepsilon)$.
Therefore, by an abuse of notation writing $L_{T_{\pm}}$ for $L_{T_{+}}^{1}$
and $L_{T_{- }}^{2}$,
\begin{eqnarray}\hspace*{8pt}
&& \mathbf{P}(L_{T_{\pm}}\geq r |x_{1}-x_{2}|^{\eta}, A^{\varepsilon
}) \nonumber\\
&&\qquad = \mathbf{P} \Bigl(L_{T_{\pm}}\geq r |x_{1}-x_{2}|^{\eta}, \sup
_{u<T_{\pm}}\Delta L_{u}\leq c_{\mazinti{(\ref{C})}} |x_{1}-x_{2}|^{\eta_{\mathrm
{c}
}-\alpha\gamma}, A^{\varepsilon} \Bigr)\\
&&\qquad \leq\mathbf{P} \Bigl( \sup_{v\leq T_{\pm}}L_{v}\mathsf{1} \Bigl\{\sup
_{u<v}\Delta L_{u}\leq c_{\mazinti{(\ref{C})}} |x_{1}-x_{2}|^{\eta_{\mathrm{c}}
-\alpha\gamma} \Bigr\}\geq r |x_{1}-x_{2}|^{\eta}, A^{\varepsilon} \Bigr).\nonumber\hspace*{-12pt}
\end{eqnarray}
Since
\begin{equation}
T_{\pm} \leq\int_{0}^{t}ds\int_{\mathsf{R}}X_{s}(d
y) |p_{t-s}^{\alpha}(x_{1}-y)-p_{t-s}^{\alpha}(x_{2}-y) |^{1+\beta},
\end{equation}
applying Lemma \ref{L7} with $\theta=1+\beta$ and $\delta=1$ (since
$\beta<(\alpha-1)/2$), we get the bound
\begin{equation} \label{81}
T_{\pm} \leq c_{\mazinti{(\ref{81})}} |x_{1}-x_{2}|^{1+\beta} \qquad\mbox{on } \{V\leq
c_{\varepsilon}\},
\end{equation}
for some $c_{\mazinti{(\ref{81})}}=c_{\mazinti{(\ref{81})}}(\varepsilon)$. Consequently,
\begin{eqnarray*}
&& \mathbf{P}(L_{T_{\pm}}\geq r |x_{1}-x_{2}|^{\eta}, A^{\varepsilon
})\\
&&\qquad \leq\mathbf{P} \Bigl( \sup_{v\leq c_{\mazinti{(\ref{81})}}|x_{1}-x_{2}|^{1+\beta}
}L_{v} \mathsf{1}\Bigl\{\sup_{u<v}\Delta L_{u}\leq c_{\mazinti{(\ref{C})}}
|x_{1}-x_{2}|^{\eta_{\mathrm{c}}-\alpha\gamma}\Bigr\}\\
&&\hspace*{232.3pt}\geq r |x_{1}
-x_{2}|^{\eta} \Bigr).
\end{eqnarray*}
Using Lemma \ref{L3} with $ \kappa=1+\beta$, $t=c_{\mazinti{(\ref{81}
)}}|x_{1}-x_{2}|^{1+\beta}$, $x=r |x_{1}-x_{2}|^{\eta}$,
and $ y=c_{\mazinti{(\ref{C})}}|x_{1}-x_{2}|^{\eta_{\mathrm{c}
}-\alpha\gamma}$, and noting that
\begin{eqnarray}
1+\beta-\eta-\beta(\eta_{\mathrm{c}}-\alpha\gamma) &=& 2+2\beta-\alpha
+(\eta_{\mathrm{c}}-\eta)+\beta\alpha\gamma\nonumber\\[-8pt]\\[-8pt]
&>& 2+2\beta-\alpha,\nonumber
\end{eqnarray}
we obtain
\begin{eqnarray} \label{3.14}
&& \mathbf{P}(L_{T_{\pm}}\geq r |x_{1}-x_{2}|^{\eta}, A^{\varepsilon
}) \nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq\bigl(c_{\mazinti{(\ref{3.14})}} r^{-1}|x_{1}-x_{2}|^{(2\beta+2-\alpha
)}\bigr)^{(c_{\mmazinti{(\ref{C})}}^{-1}r|x_{1}-x_{2}|^{\eta-\eta_{\mathrm{c}
}+\alpha\gamma})}\nonumber
\end{eqnarray}
for some $c_{\mazinti{(\ref{3.14})}}=c_{\mazinti{(\ref{3.14})}}(\varepsilon)$. Applying
this bound with $\gamma=(\eta_{\mathrm{c}}-\eta)/2\alpha$ to the
summands on the right-hand side of (\ref{T1.2}), and noting that $
2\beta+2-\alpha$ is also constant here, we have
\begin{eqnarray} \label{3.16}
&& \mathbf{P} \bigl(|Z_{t}^{2,\varepsilon}(x_{1})-Z_{t}^{2,\varepsilon
}(x_{2})|\geq2r |x_{1}-x_{2}|^{\eta} \bigr) \nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq2\bigl(c_{\mazinti{(\ref{3.14})}} r^{-1}|x_{1}-x_{2}
|\bigr)^{(c_{\mmazinti{(\ref{3.16})}}r|x_{1}-x_{2}|^{(\eta-\eta_{\mathrm{c}}
)/2})}.\nonumber
\end{eqnarray}
This inequality yields that all the conditions of Theorem III.5.6 of\break Gihman
and Skorokhod \cite{GikhmanSkorokhod1974} hold with $g(h)=2h^{\eta}$ and
$q(r,h)=\break 2(c_{\mazinti{(\ref{3.14})}} r^{-1}h)^{(c_{\mmazinti{(\ref{3.16}
)}}rh^{(\eta-\eta_{\mathrm{c}})/2})}$, from which we conclude that almost
surely $Z_{t}^{2,\varepsilon}$ has a version which is locally H\"{o}lder
continuous of all orders $\eta<\eta_{\mathrm{c} }$.
By an abuse of notation, from now on the symbol $Z_{t}^{2,\varepsilon}$ always
refers to this continuous version. Consequently,
\begin{equation} \label{T1.6}
\lim_{k\uparrow\infty}\mathbf{P} \biggl( \sup_{x_{1},x_{2}\in K, x_{1}\neq
x_{2}}\frac{|Z_{t}^{2,\varepsilon}(x_{1})-Z_{t}^{2,\varepsilon}
(x_{2})|}{|x_{1}-x_{2}|^{\eta}}>k \biggr) = 0.
\end{equation}
Combining this with the bound
\begin{eqnarray}\quad
&& \mathbf{P} \biggl( \sup_{x_{1},x_{2}\in K, x_{1}\neq x_{2}}\frac
{|Z_{t}^{2}(x_{1})-Z_{t}^{2}(x_{2})|}{|x_{1}-x_{2}|^{\eta}
}>k \biggr) \nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq\mathbf{P} \biggl( \sup_{x_{1},x_{2}\in K, x_{1}\neq x_{2}}
\frac{|Z_{t}^{2,\varepsilon}(x_{1})-Z_{t}^{2,\varepsilon}(x_{2}
)|}{|x_{1}-x_{2}|^{\eta}}>k, A^{\varepsilon} \biggr)+\mathbf{P}
(A^{\varepsilon,\mathrm{c}})\nonumber
\end{eqnarray}
(with $A^{\varepsilon,\mathrm{c}}$ denoting the complement of
$A^{\varepsilon
})$, gives
\begin{equation}
\limsup_{k\uparrow\infty}\mathbf{P} \biggl( \sup_{x_{1},x_{2}\in K, x_{1}\neq
x_{2}}\frac{|Z_{t}^{2}(x_{1})-Z_{t}^{2}(x_{2})|}{|x_{1}-x_{2}
|^{\eta}}>k \biggr) \leq2\varepsilon.
\end{equation}
Since $\varepsilon$ may be arbitrarily small, this immediately implies
\begin{equation}
\sup_{x_{1},x_{2}\in K, x_{1}\neq x_{2}}\frac{|Z_{t}^{2}(x_{1}
)-Z_{t}^{2}(x_{2})|}{|x_{1}-x_{2}|^{\eta}}<\infty, \qquad\mbox{almost
surely}.
\end{equation}
This is the desired local H\"{o}lder continuity of $ Z_{t}^{2}$,
for all $\eta<\eta_{\mathrm{c} }$. Since
$ \eta_{\mathrm{c}}<\alpha-1$, this, together with Lemma \ref{L.HoeldZ3},
completes the proof of Theorem \ref{T.prop.dens}(a).
\end{pf*}
\section[Local unboundedness: Proof of Theorem 1.2(c)]{Local unboundedness: Proof of
Theorem \protect\ref{T.prop.dens}\textup{(c)}}
\label{sec:4}
In the proof we use ideas from the proofs of Theorems 1.1(b) and 1.2 of
\cite{MytnikPerkins2003}. Throughout this section, suppose $d>1$ or
$\alpha\leq1+\beta$. Recall that $ t>0$ and
$ X_{0}=\mu\in\mathcal{M}_{\mathrm{f}}\setminus\{0\}$ are fixed.
We want to verify that for each version of the density function
$X_{t}$ the
property
\begin{equation} \label{basicprop}
\Vert X_{t} \Vert_{B}=\infty\qquad\mathbf{P}\mbox{-a.s. on the event
} \{X_{t}(B)>0\}
\end{equation}
holds whenever $B$ is a fixed open ball in $ \mathsf{R}^{d}$. Then
the claim of Theorem \ref{T.prop.dens}(c) follows as in the proof of
Theorem 1.1(b) in \cite{MytnikPerkins2003}. We thus fix such $B$.
As in \cite{MytnikPerkins2003}, to get (\ref{basicprop}) we first show
that on the event $\{X_{t}(B)>0\}$ there are always sufficiently
``big'' jumps of $X$ that occur close to time $t$. This is done in
Lemma \ref{lem:3} below. Then with the help of properties of the
log-Laplace equation derived in Lemma \ref{lem:2} we are able to show
that the ``big'' jumps are large enough to ensure the unboundedness of
the density at time $t$. Loosely speaking, the density becomes
unbounded in the proximity of big jumps.
In order to fulfil the above program, we start by deriving the
continuity of
$X_{\cdot}(B)$ at (fixed) time $t$.
\begin{lemma}[(Path continuity at fixed times)]\label{lem:1}
For the fixed $t>0$,
\begin{equation}
\lim_{s\rightarrow t}X_{s}(B) = X_{t}(B) \qquad\mbox{a.s.}
\end{equation}
\end{lemma}
\begin{pf}
Since $t$ is fixed, $X$ is continuous at $t$ with probability $1$. Therefore,
\begin{equation}
X_{t}(B) \leq\liminf_{s\rightarrow t}X_{s}(B) \leq\limsup
_{s\rightarrow
t}X_{s}(B) \leq\limsup_{s\rightarrow t}X_{s}(\overline{B}) \leq
X_{t}(\overline{B})
\end{equation}
with $\overline{B}$ denoting the closure of $B$. But since $X_{t}
(dx)$ is absolutely continuous with respect to Lebesgue measure, we
have $X_{t}(B)=X_{t}(\overline{B})$. Thus the proof is complete.
\end{pf}
\begin{lemma}[(Explosion)]\label{L.explosion}
Let $f\dvtx(0,t)\rightarrow(0,\infty)$ be
measurable such that
\begin{equation}
\int_{t-\delta}^{t}ds\, f(t-s)=\infty\qquad\mbox{for all sufficiently
small } \delta\in(0,t).
\end{equation}
Then for these $\delta$,
\begin{equation}
\quad \int_{t-\delta}^{t}ds\, X_{s}(B)f(t-s) = \infty,\qquad\mathbf{P}
\mbox{-a.s. on the event }\{X_{t}(B)>0\}.
\end{equation}
\end{lemma}
\begin{pf}
Fix $\delta$ as in the lemma. Fix also $\omega$ such that $ X_{t}
(B)>0$ and $ X_{s}(B)\rightarrow X_{t}(B)$ as
$s\uparrow t$. For this $\omega$, there is an $\varepsilon
\in(0,\delta)$ such that $X_{s}(B)>\varepsilon$ for all $s\in
(t-\varepsilon
,t)$. Hence
\begin{equation}
\int_{t-\delta}^{t}ds\, X_{s}(B)f(t-s) \geq\varepsilon
\int_{t-\varepsilon}^{t}ds\, f(t-s) = \infty
\end{equation}
and we are done.
\end{pf}
Set
\begin{equation} \label{not.vy}
\vartheta:= \frac{1}{1+\beta}
\end{equation}
and for $\varepsilon\in(0,t)$ let $\tau_{\varepsilon}(B)$ denote the first
moment in $(t-\varepsilon,t)$ in which a ``big
jump'' occurs. More precisely, define
\begin{equation}
\tau_{\varepsilon}(B) := \inf\biggl\{ s\in(t-\varepsilon,t)\dvtx|\Delta
X_{s}|(B)>(t-s)^{\vartheta} \log^{\vartheta} \biggl(\frac{1}{t-s} \biggr) \biggr\}.
\end{equation}
\begin{lemma}[(Existence of big jumps)]\label{lem:3}
For $\varepsilon\in(0,t)$ and the open ball $B$,
\begin{equation} \label{equt:12}
\mathbf{P} \bigl( \tau_{\varepsilon}(B)=\infty\bigr) \leq\mathbf{P}
\bigl(X_{t}(B) = 0 \bigr).
\end{equation}
\end{lemma}
\begin{pf}
For simplicity, throughout the proof we write $\tau$ for $\tau
_{\varepsilon
}(B)$. It suffices to show that
\begin{equation}\label{equt:11}
\mathbf{P} \{\tau=\infty, X_{t}(B)>0 \} = 0.
\end{equation}
To verify (\ref{equt:11}) we will mainly follow the lines of the proof of
Theorem 1.2(b) of~\cite{LeGallMytnik2003}. For
$ u\in(0,\varepsilon]$, define
\[
Z_{u}:=N \biggl((s,x,r)\dvtx s\in(t-\varepsilon, t-\varepsilon+u), x\in
B, r>(t-s)^{\vartheta}\log^{\vartheta} \biggl(\frac{1}{t-s} \biggr) \biggr)
\]
with the random measure $N$ introduced in Lemma \ref{L.mart.dec}(a). Then
\begin{equation} \label{equt:10}
\{ \tau=\infty\} = \{Z_{\varepsilon}=0\}.
\end{equation}
Recall the formula for the compensator $\hat{N}$ of $N$ in
Lemma \ref{L.mart.dec}(b). From a classical time change result for
counting processes (see, e.g., Theorem 10.33 in \cite{Jakod1979}), we
get that
there exists a standard Poisson process $A=\{A(v)\dvtx v\geq0\}$ such
that
\begin{eqnarray}
Z_{u} & = & A \biggl(\varrho\int_{t-\varepsilon}^{t-\varepsilon+u}
ds\, X_{s}(B)\int_{(t-s)^{\vartheta}\log^{\vartheta}
({1}/({t-s}))}^{\infty}dr\, r^{-2-\beta} \biggr)\nonumber\\[-8pt]\\[-8pt]
& = & A \biggl(\frac{\varrho}{1+\beta}\int_{t-\varepsilon}^{t-\varepsilon
+u}ds\, X_{s}(B) \frac{1}{(t-s)\log({1}/({t-s}))}\biggr),\nonumber
\end{eqnarray}
where we used notation (\ref{not.vy}). Then
\begin{eqnarray}\label{array}\quad
&& \mathbf{P}\bigl(Z_{\varepsilon}=0, X_{t}(B)>0\bigr) \nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq\mathbf{P} \biggl(\int_{t-\varepsilon}^{t}ds\, X_{s}
(B) \frac{1}{(t-s)\log({1}/({t-s}))}<\infty, X_{t}(B)>0 \biggr).\nonumber
\end{eqnarray}
It is easy to check that
\begin{equation}
\int_{t-\delta}^{t}ds\, \frac{1}{(t-s)\log({1}/({t-s}))}
= \infty\qquad\mbox{for all } \delta\in(0,\varepsilon).
\end{equation}
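Indeed, for $\delta<1$ (which is all that is needed in Lemma \ref{L.explosion}), the substitution $u=t-s$ gives
\[
\int_{t-\delta}^{t}ds\, \frac{1}{(t-s)\log({1}/({t-s}))}
=\int_{0}^{\delta}\frac{du}{u\log(1/u)}
=\lim_{u^{\prime}\downarrow0}\bigl[\log\log(1/u^{\prime})-\log\log(1/\delta)\bigr]=\infty,
\]
since $-\log\log(1/u)$ is an antiderivative of $1/(u\log(1/u))$ on $(0,1)$.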
Therefore, by Lemma \ref{L.explosion},
\begin{equation}
\int_{t-\varepsilon}^{t}ds\, X_{s}(B) \frac{1}{(t-s)\log
({1}/({t-s}))} = \infty\qquad\mbox{on } \{X_{t}
(B)>0\}.
\end{equation}
Thus, the probability in (\ref{array}) equals 0. Hence, together
with (\ref{equt:10}) claim (\ref{equt:11}) follows.
\end{pf}
Set $\varepsilon_{n}:=2^{-n}$, $n\geq1$. Then we choose
open balls $B_{n}\uparrow B$ such that
\begin{equation} \label{Bs}
\overline{B_{n}}\subset B_{n+1}\subset B \quad\mbox{and}\quad \sup_{y\in
B^{\mathrm{c}}, x\in B_{n}, 0<s\leq\varepsilon_{n}} p_{s}^{\alpha
}(x-y) \mathop{\longrightarrow}_{n\uparrow\infty}0.
\end{equation}
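Such a choice is indeed possible: using the standard estimate $p_{s}^{\alpha}(z)\leq Cs|z|^{-d-\alpha}$ for the $\alpha$-stable kernel (which follows from scaling and the tail behavior of $p_{1}^{\alpha}$), the supremum in (\ref{Bs}) is at most $C\varepsilon_{n} \mathrm{dist}(B_{n},B^{\mathrm{c}})^{-d-\alpha}$, so it suffices to let $B_{n}$ exhaust $B$ slowly enough that this bound tends to $0$.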
Fix $n\geq1$ such that $ \varepsilon_{n}<t$. Define $\tau
_{n}:=\tau_{\varepsilon_{n}}(B_{n})$.
In order to get a lower bound for $ \Vert X_{t} \Vert_{B}$ we use
the following inequality:
\begin{equation} \label{123}
\Vert X_{t} \Vert_{B} \geq\int_{B}dy\, X_{t}
(y) p_{r}^{\alpha}(y-x),\qquad x\in B, r>0.
\end{equation}
On the event $\{\tau_{n}<t\}$, denote by $\zeta_{n}$ the spatial
location in
$B_{n}$ of the jump at time~$\tau_{n }$, and by $r_{n}$ the size
of the jump, meaning that $\Delta X_{\tau_{n}}=r_{n}\delta_{\zeta_{n}}
$. Then specializing (\ref{123}),
\begin{equation} \label{123'}
\Vert X_{t} \Vert_{B} \geq\int_{B}dy\, X_{t}
(y) p_{t-\tau_{n}}^{\alpha}(y-\zeta_{n}) \qquad\mbox{on the event } \{\tau
_{n}<t\}.
\end{equation}
From the strong Markov property at time $\tau_{n }$, together with the
branching property of superprocesses, we know that conditionally on
$\{\tau_{n}<t\}$, the process $\{X_{\tau_{n}+u}\dvtx u\geq 0\}$ is
bounded below in distribution by $\{\widetilde{X}_{u}^{n}\dvtx u\geq
0\}$ where $\widetilde{X}^{n}$ is a superprocess with the same motion and branching mechanism as $X$, started from the initial
value $r_{n}\delta_{\zeta_{n}}$. Hence, from (\ref{123'}) we get
\begin{eqnarray} \label{firstbound}
&& \mathbf{E}\exp\{- \Vert X_{t} \Vert
_{B} \}\nonumber\\
&&\qquad \leq\mathbf{E} \mathsf{1}_{\{\tau_{n}<t\}}\exp\biggl\{ -\int
_{B}dy\, X_{t}(y) p_{t-\tau_{n}}^{\alpha}(y-\zeta_{n}) \biggr\}
+ \mathbf{P}(\tau_{n}=\infty)\nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq\mathbf{E} \mathsf{1}_{\{\tau_{n}<t\}}\mathbf{E}_{r_{n}
\delta_{\zeta_{n}}}\exp\biggl\{ -\int_{B}dy X_{t-\tau_{n}
}(y) p_{t-\tau_{n}}^{\alpha}(y-\zeta_{n}) \biggr\}\nonumber\\
&&\qquad\quad{} + \mathbf{P}(\tau
_{n}=\infty).\nonumber
\end{eqnarray}
Note that on the event $\{\tau_{n}<t\}$, we have
\begin{equation} \label{not.hbeta}
r_{n}\geq(t-\tau_{n})^{\vartheta}\log^{\vartheta} \biggl(\frac{1}{t-\tau_{n}
} \biggr) =: h_{\beta}(t-\tau_{n}).
\end{equation}
We now claim that
\begin{equation} \label{secondlimit}\quad
\lim_{n\uparrow\infty} \sup_{0<s<\varepsilon_{n}, x\in B_{n}, r\geq
h_{\beta}(s)}\mathbf{E}_{r\delta_{x}}\exp\biggl\{ - \int_{B}
dy\, X_{s}(y) p_{s}^{\alpha}(y-x) \biggr\} = 0.
\end{equation}
To verify (\ref{secondlimit}), let $ s\in(0,\varepsilon_{n})$,
$x\in B_{n}$ and $r\geq h_{\beta}(s)$. Then using the
Laplace transition functional of the superprocess we get
\begin{eqnarray} \label{Lapla}
\mathbf{E}_{r\delta_{x}}\exp\biggl\{ -\int_{B}dy\, X_{s}
(y) p_{s}^{\alpha}(y-x) \biggr\} &=& \exp\{ -r v_{s,x}
^{n}(s,x) \} \nonumber\\[-8pt]\\[-8pt]
&\leq&\exp\{ -h_{\beta}(s) v_{s,x}^{n}(s,x) \},\nonumber
\end{eqnarray}
where the nonnegative function $ v_{s,x}^{n}=\{v_{s,x}^{n}(s^{\prime
},x^{\prime})\dvtx s^{\prime}>0$, $x^{\prime}\in\mathsf{R}^{d}\}$
solves the log-Laplace integral equation
\begin{eqnarray}\label{equt:4}\quad
v_{s,x}^{n}(s^{\prime}, x^{\prime}) &=& \int_{\mathsf{R}^{d}}
dy\, p_{s^{\prime}}^{\alpha}(y-x^{\prime}) 1_{B}(y) p_{s}^{\alpha
}(y-x)\nonumber\\
&&{} +\int_{0}^{s^{\prime}}dr^{\prime}\int_{\mathsf{R}^{d}}
dy\, p_{s^{\prime}-r^{\prime}}^{\alpha}(y-x^{\prime}) [av_{s,x}
^{n}(r^{\prime},y)\\
&&\hspace*{141.3pt}{}-b(v_{s,x}^{n}(r^{\prime},y))^{1+\beta
} ]\nonumber
\end{eqnarray}
related to (\ref{logLap}).
\begin{lemma}[(Another explosion)]\label{lem:2} Under the conditions $d>1$
or $\alpha\leq1+\beta$, we have
\begin{equation} \label{limitv}
\lim_{n\uparrow\infty} \Bigl(\inf_{0<s<\varepsilon_{n}, x\in B_{n}}h_{\beta
}(s) v_{s,x}^{n}(s,x) \Bigr) = +\infty.
\end{equation}
\end{lemma}
Let us postpone the proof of Lemma \ref{lem:2}.
\begin{pf*}{Completion of Proof of Theorem
\ref{T.prop.dens}\textup{(c)}}
Our claim (\ref{secondlimit}) readily follows from
estimate (\ref{Lapla}) and (\ref{limitv}). Moreover, according to
(\ref{secondlimit}), by passing to the limit $n\uparrow\infty$ in the
right-hand side of (\ref{firstbound}), and then using Lemma \ref
{lem:3}, we arrive
at
\begin{equation}\qquad
\mathbf{E}\exp\{- \Vert X_{t} \Vert
_{B} \} \leq\limsup_{n\uparrow\infty}\mathbf{P} ( \tau
_{n}=\infty) \leq\limsup_{n\uparrow\infty}\mathbf{P} \bigl(
X_{t}(B_{n})=0 \bigr).
\end{equation}
Since the event $\{X_{t}(B)=0\}$ is the nonincreasing limit as
$n\uparrow\infty$ of the events $\{X_{t}(B_{n})=0\}$ we get
\begin{equation}
\mathbf{E}\exp\{- \Vert X_{t} \Vert
_{B} \} \leq\mathbf{P} \bigl(X_{t}
(B) = 0 \bigr).
\end{equation}
Since obviously $ \Vert X_{t} \Vert_{B}=0$ if and only if
$X_{t}(B)=0$, we see that (\ref{basicprop}) follows from this last
bound. The
proof of Theorem \ref{T.prop.dens}(c) is finished for $U=B$.
\end{pf*}
\begin{pf*}{Proof of Lemma \ref{lem:2}}
We start by determining the asymptotics of the first term on the
right-hand side of the log-Laplace equation (\ref{equt:4}) at
$(s^{\prime},x^{\prime})=(s,x)$. Note that
\begin{eqnarray}\label{equt:i1}\quad
&& \int_{\mathsf{R}^{d}}dy\, p_{s}^{\alpha}(y-x) 1_{B}(y) p_{s}
^{\alpha}(y-x)\nonumber\\[-8pt]\\[-8pt]
&&\qquad = \int_{\mathsf{R}^{d}}dy\, p_{s}^{\alpha}(y-x) p_{s}
^{\alpha}(y-x)-\int_{B^{\mathrm{c}}}dy\, p_{s}^{\alpha}(y-x) p_{s}
^{\alpha}(y-x).\nonumber
\end{eqnarray}
In the latter display, the first term equals $ p_{2s}^{\alpha
}(0)=Cs^{-d/\alpha}$ whereas the second one is bounded from above
by
\begin{equation} \label{equt:i2}
\sup_{0<s<\varepsilon_{n}, x\in B_{n}, y\in B^{\mathrm{c}}}p_{s}^{\alpha
}(y-x) \mathop{\longrightarrow}_{n\uparrow\infty}0,
\end{equation}
where the last convergence follows by assumption (\ref{Bs}) on $B_{n }
$. Hence from (\ref{equt:i1}) and (\ref{equt:i2}) we obtain
\begin{equation}\label{equt:i3}\quad
\int_{\mathsf{R}^{d}}dy\, p_{s}^{\alpha}(y-x) 1_{B}(y) p_{s}
^{\alpha}(y-x) = C s^{-d/\alpha}+\mathrm{o}(1) \qquad\mbox{as } n\uparrow
\infty,
\end{equation}
uniformly in $ s\in(0,\varepsilon_{n})$ and $ x\in B_{n }$.
To simplify notation, we write $v^{n}:=v_{s,x}^{n}$. Next,
from (\ref{equt:4}) we can easily get the upper bound
\begin{eqnarray}\label{equt:i4}
v^{n}(s^{\prime},x^{\prime}) &\leq& e^{|a|s^{\prime}}\int
_{\mathsf{R}^{d}}dy\, p_{s^{\prime}}^{\alpha}(y-x^{\prime}
) p_{s}^{\alpha}(y-x) \nonumber\\[-8pt]\\[-8pt]
&=& e^{|a|s^{\prime}} p_{s^{\prime}
+s}^{\alpha}(x-x^{\prime}).\nonumber
\end{eqnarray}
Then we have
\begin{eqnarray}\label{equt:i5}\quad
&& \int_{0}^{s}dr^{\prime}\int_{\mathsf{R}^{d}}d
y\, p_{s-r^{\prime}}^{\alpha}(y-x) (v^{n}(r^{\prime},y))^{1+\beta}\nonumber\\
&&\qquad \leq e^{|a|(1+\beta)s} \int_{0}^{s}dr^{\prime}
\int_{\mathsf{R}^{d}}dy\, p_{s-r^{\prime}}^{\alpha}
(y-x)
\bigl(p_{r^{\prime}+s}^{\alpha}(x-y)\bigr)^{1+\beta}\nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq e^{|a|(1+\beta)s}(p_{s}^{\alpha}(0))^{\beta
}\int_{0}^{s}dr^{\prime}\int_{\mathsf{R}^{d}}d
y\, p_{s-r^{\prime}}^{\alpha}(y-x) p_{r^{\prime}+s}^{\alpha}(x-y)\nonumber
\nonumber\\
&&\qquad = e^{|a|(1+\beta)s}(p_{s}^{\alpha}(0))^{\beta}
\int_{0}^{s}dr^{\prime}\, p_{2s}^{\alpha}(0) = C e
^{|a|(1+\beta)s}s^{1-d(1+\beta)/\alpha}\nonumber
\end{eqnarray}
and, similarly,
\begin{equation} \label{added}
\int_{0}^{s}dr^{\prime}\int_{\mathsf{R}^{d}}d
y\, p_{s-r^{\prime}}^{\alpha}(y-x) av^{n}(r^{\prime},y) \geq
-C |a| e^{|a|s} s^{1-d/\alpha}.
\end{equation}
Summarizing, by (\ref{equt:4}), (\ref{equt:i3}), (\ref{equt:i5}) and
(\ref{added}),
\begin{equation}\label{equt:i6}\hspace*{28pt}
v^{n}(s,x)\geq C s^{-d/\alpha}+\mathrm{o}(1)-C e^{|a|(1+\beta
)s} s^{1-d(1+\beta)/\alpha}-C |a| e^{|a|s} s^{1-d/\alpha}
\end{equation}
uniformly in $ s\in(0,\varepsilon_{n})$ and $ x\in B_{n }$.
According to our general assumption $d<\alpha/\beta$, we conclude that the
right-hand side of (\ref{equt:i6}) behaves like $Cs^{-d/\alpha}$ as
$s\downarrow0$, uniformly in $ x\in B_{n}$. Now recalling
definitions (\ref{not.hbeta}) and (\ref{not.vy}) as well as our assumption
that $d>1$ or $\alpha\leq1+\beta$, we immediately get
\begin{equation} \label{equt:i9}
\lim_{n\uparrow\infty} \inf_{0<s<\varepsilon_{n}}h_{\beta}(s)
s^{-d/\alpha
} = +\infty.
\end{equation}
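Indeed, by (\ref{not.hbeta}) and (\ref{not.vy}),
\[
h_{\beta}(s) s^{-d/\alpha} = s^{{1}/({1+\beta})-d/\alpha} \log^{{1}/({1+\beta})}\biggl(\frac{1}{s}\biggr),
\]
which tends to $+\infty$ as $s\downarrow0$ whenever $d(1+\beta)\geq\alpha$ (in the boundary case the logarithmic factor provides the divergence); hence its infimum over $(0,\varepsilon_{n})$ tends to $+\infty$ as $n\uparrow\infty$. The inequality $d(1+\beta)\geq\alpha$ is guaranteed by our assumption: for $d>1$ we have $d(1+\beta)\geq2\geq\alpha$, while for $d=1$ it is precisely $\alpha\leq1+\beta$.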
By (\ref{equt:i6}), this implies (\ref{limitv}), and the proof of the
lemma is complete.
\end{pf*}
\section[Optimal local H\"{o}lder index: Proof of Theorem 1.2(b)]{Optimal local
H\"{o}lder index: Proof of Theorem \protect\ref{T.prop.dens}\textup{(b)}}\label{sec:5}
We return to \mbox{$d=1$} and continue to assume that $ t>0$ and
$\mu\in\mathcal{M}_{\mathrm{f}}\setminus\{0\}$ are fixed. In the proof
of Theorem \ref{T.prop.dens}(b) we implement the following idea. We
show that there exists a sequence of ``big'' jumps of $X$ that occur
close to time $t$ and these jumps in fact destroy the local H\"{o}lder
continuity of any index greater than or equal to $\eta_{\mathrm{c} }$.
As in the proof of Theorem \ref{T.prop.dens}(c) in the previous
section, we may work with a fixed open interval $U$. For simplicity we
consider $ U=(0,1)$. Put
\begin{equation}
I_{k}^{(n)} := \biggl[\frac{k}{2^{n}},\frac{k+1}{2^{n}} \biggr),\qquad
n\geq1, 0\leq k\leq2^{n}-1.
\end{equation}
Choose $n_{0}$ such that $2^{-\alpha n_{0}}<t$. For $ n\geq n_{0}
$ and $ 2\leq k\leq2^{n}+1$, denote by $A_{n,k}$ the
following event:
\begin{equation} \label{c5.2}\hspace*{28pt}
\biggl\{ \Delta X_{s}\bigl(I_{k-2}^{(n)}\bigr) \geq\frac{c_{\mazinti{(\ref{c5.2})}}}
{2^{{\alpha}/({1+\beta})n}} n^{{1}/({1+\beta})} \mbox{ for some }
s\in\bigl[t-2^{-\alpha n}, t-2^{-\alpha(n+1)}\bigr) \biggr\}
\end{equation}
with $ c_{\mazinti{(\ref{c5.2})}}:=(\alpha2^{-\alpha}\log2)^{{1}/({1+\beta})}$,
and for $N\geq n_{0}$ write
\begin{equation}
\widetilde{A}_{N} := \bigcup_{n=N}^{\infty}\bigcup
_{k=2}^{2^{n}+1}A_{n,k }.
\end{equation}
\begin{lemma}[(Again existence of big jumps)]\label{lem:1'}
For any $N\geq n_{0 }$,
\begin{equation}
\mathbf{P}\{\widetilde{A}_{N} | X_{t}(U)>0\} = 1.
\end{equation}
\end{lemma}
\begin{pf}
For $ s\in[t-2^{-\alpha n}, t-2^{-\alpha(n+1)})$ we have
\begin{eqnarray}
\biggl((t-s)\log\biggl(\frac{1}{t-s}\biggr) \biggr)^{{1}/({1+\beta})}
&\geq&\bigl(2^{-\alpha(n+1)}\log2^{\alpha n}
\bigr)^{{1}/({1+\beta})}\nonumber\\[-8pt]\\[-8pt]
&=& c_{\mazinti{(\ref{c5.2})}} 2^{-{\alpha}/({1+\beta}) n}
n^{{1}/({1+\beta})}.\nonumber
\end{eqnarray}
Therefore,
\begin{eqnarray*}
\bigcup_{k=2}^{2^{n}+1} A_{n,k}&\supseteq&\biggl\{\Delta X_{s}(U) \geq
\biggl((t-s)\log\biggl(\frac{1}{t-s}\biggr) \biggr)^{{1}/({1+\beta})} \\
&&\hspace*{17.6pt}\mbox{for some }s\in\bigl[t-2^{-\alpha n}, t-2^{-\alpha(n+1)}\bigr)
\biggr\}
\end{eqnarray*}
and, consequently,
\begin{eqnarray}\hspace*{28pt}
\widetilde{A}_{N} &=& \bigcup_{n=N}^{\infty}\bigcup
_{k=2}^{2^{n}+1}A_{n,k}\nonumber\\[-8pt]\\[-8pt]
&\supseteq&\biggl\{ \Delta X_{s}(U)\geq\biggl((t-s)\log\biggl(\frac{1}
{t-s}\biggr) \biggr)^{{1}/({1+\beta})} \mbox{ for some } s\geq
t-2^{-N} \biggr\}\nonumber
\end{eqnarray}
and we are done by Lemma \ref{lem:3}.
\end{pf}
Now we are going to define increments of $Z_{t}^{2}$ on the dyadic sets
$\{\frac{k}{2^{n}}\dvtx k=0,\ldots,2^{n}\}$. By definition
(\ref{rep.dens}),
\begin{eqnarray}\quad
&& Z_{t}^{2} \biggl(\frac{k}{2^{n}} \biggr) -Z_{t}^{2}
\biggl(\frac{k+1}{2^{n}} \biggr) \nonumber\\
&&\qquad =\int_{(0,t]\times\mathsf{R}} M (d
(s,y) ) \biggl(p_{t-s}^{\alpha} \biggl(\frac{k}{2^{n}}-y \biggr) -p_{t-s}^{\alpha}
\biggl(\frac{k+1}{2^{n}}-y \biggr)
\biggr) \nonumber\\[-8pt]\\[-8pt]
&&\qquad =\int_{(0,t]\times\mathsf{R}} M (d
(s,y) ) \biggl(p_{t-s}^{\alpha}
\biggl(\frac{k}{2^{n}}-y \biggr) -p_{t-s}^{\alpha}
\biggl(\frac{k+1}{2^{n}}-y \biggr)
\biggr) _{+}\nonumber\\
&&\qquad\quad{} +\int_{(0,t]\times\mathsf{R}} M
(d(s,y) ) \biggl(p_{t-s}^{\alpha} \biggl(
\frac{k}{2^{n}}-y \biggr) -p_{t-s}^{\alpha}
\biggl(\frac{k+1}{2^{n}}-y \biggr) \biggr) _{-}.\nonumber
\end{eqnarray}
Then according to Lemma \ref{L9} there exist spectrally positive stable
processes $L_{n,k}^{+}$ and $L_{n,k}^{-}$ of index $1+\beta$ such that
\begin{equation}
Z_{t}^{2} \biggl(\frac{k}{2^{n}} \biggr) -Z_{t}^{2} \biggl(\frac{k+1}{2^{n}
} \biggr) = L_{n,k}^{+}(T_{+})-L_{n,k}^{-}(T_{-}),
\end{equation}
where
\begin{equation} \label{not.Tpm}
T_{\pm} :=\int_{0}^{t}ds\int_{\mathsf{R}}X_{s}(dy)
\biggl(p_{t-s}^{\alpha} \biggl(\frac{k}{2^{n}}-y \biggr)
-p_{t-s}^{\alpha} \biggl(\frac{k+1}{2^{n}}-y \biggr) \biggr)
_{\pm}^{ 1+\beta}.
\end{equation}
Fix $\varepsilon\in(0,\frac{1}{1+\beta})$ for a while.
Let us define the following events:
\begin{eqnarray} \label{5.10}
B_{n,k} &:=& \bigl\{ L_{n,k}^{+}(T_{+})\geq2^{-\eta_{\mathrm{c}}n}n^{
{1}/({1+\beta})-\varepsilon} \bigr\} \cap\{ L_{n,k}^{-}(T_{-})\leq
2^{-\eta_{\mathrm{c}}n-\varepsilon n} \}\nonumber\\[-8pt]\\[-8pt]
&=:& B_{n,k}^{+}\cap B_{n,k }^{-}\nonumber
\end{eqnarray}
(with notation in the obvious correspondence). Define the following event:
\begin{eqnarray}
D_{N} :\!&=& \bigcup_{n=N}^{\infty}\bigcup_{k=2}^{2^{n}+1} (
A_{n,k}\cap B_{n,k} ) \nonumber\\[-8pt]\\[-8pt]
&\supseteq& \bigcup_{n=N}^{\infty}\bigcup_{k=2}^{2^{n}+1}A_{n,k}
\biggm\backslash\bigcup_{n=N}^{\infty}\bigcup_{k=2}^{2^{n}+1} ( A_{n,k}\cap
B_{n,k}^{\mathrm{c}} ) .\nonumber
\end{eqnarray}
An estimation of the probability of $D_{N}$ is crucial for the proof of
Theorem \ref{T.prop.dens}(b). In fact we are going to show that conditionally
on $\{X_{t}(U)>0\}$, the event $D_{N}$ happens with probability one for any
$N$. This in turn implies that for any $N$ one can find $n\geq N$ sufficiently
large such that there exists an interval $[\frac{k}{2^{n}},\frac{k+1}{2^{n}}]$
on which the increment $Z_{t}^{2}(\frac{k}{2^{n}})-Z_{t}^{2}(\frac
{k+1}{2^{n}
})$ is of order $L_{n,k}^{+}(T_{+})\geq2^{-\eta_{\mathrm{c}}n}n^{
{1}/({1+\beta})-\varepsilon}$ [since the other term $L_{n,k}^{-}(T_{-})$
is much smaller on that interval]. This implies the statement of
Theorem \ref{T.prop.dens}(b). Detailed arguments follow.
By Lemma \ref{lem:1'} we get
\begin{equation} \label{13}\quad
\mathbf{P} \{ D_{N} | X_{t}(U)>0 \} \geq1-\mathbf{P}
\Biggl\{\bigcup_{n=N}^{\infty}\bigcup_{k=2}^{2^{n}+1} ( A_{n,k}\cap
B_{n,k}^{\mathrm{c}} ) \bigg| X_{t}(U)>0 \Biggr\}.
\end{equation}
Recall $A^{\varepsilon}$ defined in (\ref{A.eps}). Note that
\begin{eqnarray} \label{16}
&& \mathbf{P} \Biggl( \bigcup_{n=N}^{\infty}\bigcup_{k=2}^{2^{n}+1}
(A_{n,k}\cap B_{n,k}^{\mathrm{c}} ) \Biggr)\nonumber\\
&&\qquad \leq\mathbf{P}(A^{\varepsilon,\mathrm{c}})+\mathbf{P}
\Biggl(\bigcup_{n=N}^{\infty}\bigcup_{k=2}^{2^{n}+1} ( A^{\varepsilon}\cap
A_{n,k}\cap B_{n,k}^{\mathrm{c}} ) \Biggr) \\
&&\qquad \leq2\varepsilon+\mathbf{P} \Biggl( \bigcup_{n=N}^{\infty}\bigcup
_{k=2}^{2^{n}+1} ( A^{\varepsilon}\cap A_{n,k}\cap B_{n,k}^{\mathrm{c}
} ) \Biggr) .\nonumber
\end{eqnarray}
\begin{lemma}[(Probability of small increments)]\label{Prop2}
For all $ \varepsilon>0$ sufficiently small,
\begin{equation} \label{5.1}
\lim_{N\uparrow\infty}\mathbf{P} \Biggl( \bigcup_{n=N}^{\infty}\bigcup
_{k=2}^{2^{n}+1} ( A^{\varepsilon}\cap A_{n,k}\cap B_{n,k}^{\mathrm{c}
} ) \Biggr) = 0.
\end{equation}
\end{lemma}
We postpone the proof of this lemma to the end of this section. Instead,
we will now show how it implies Theorem \ref{T.prop.dens}(b).
\begin{pf*}{Completion of proof of Theorem
\ref{T.prop.dens}\textup{(b)}}
From Lemma \ref{Prop2} and (\ref{16}) it follows that
\begin{equation}\hspace*{4pt}
\limsup_{N\uparrow\infty}\mathbf{P} \Biggl\{\bigcup_{n=N}^{\infty}\bigcup
_{k=2}^{2^{n}+1} ( A_{n,k}\cap B_{n,k}^{\mathrm{c}} )
\bigg| X_{t}(U)>0 \Biggr\} \leq\frac{2\varepsilon}{\mathbf{P}
(X_{t}(U)>0)} .
\end{equation}
Since $\varepsilon$ can be arbitrarily small, the latter $\limsup$ expression
equals 0. Combining this with estimate (\ref{13}), we get
\begin{equation}
\lim_{N\uparrow\infty}\mathbf{P} \{ D_{N} | X_{t}(U)>0 \}
= 1.
\end{equation}
Since $ D_{N}\downarrow\bigcap_{N=n_{0}}^{\infty}D_{N}=:D_{\infty}$
as $N\uparrow\infty$, we conclude that
\begin{equation}
\mathbf{P} \{ D_{\infty} | X_{t}(U)>0 \} = 1.
\end{equation}
This means that, almost surely on $\{X_{t}(U)>0\}$, there is a
sequence $(n_{j},k_{j})$ such that
\begin{equation}
Z_{t}^{2}\biggl(\frac{k_{j}}{2^{n_{j}}}\biggr)-Z_{t}^{2}\biggl(\frac{k_{j}
+1}{2^{n_{j}}}\biggr) \geq2^{-\eta_{\mathrm{c}} n_{j}} n_{j}^{
{1}/({1+\beta})-\varepsilon}.
\end{equation}
This inequality implies the claim in Theorem \ref{T.prop.dens}(b).
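Indeed, since these increments are taken over intervals of length $2^{-n_{j}}$, the corresponding H\"{o}lder quotients of order $\eta_{\mathrm{c}}$ are at least $n_{j}^{{1}/({1+\beta})-\varepsilon}$, which tends to infinity as $j\uparrow\infty$ because $\varepsilon<\frac{1}{1+\beta}$; hence, on $\{X_{t}(U)>0\}$, no version of $Z_{t}^{2}$ can be locally H\"{o}lder continuous of an order $\eta\geq\eta_{\mathrm{c} }$.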
\end{pf*}
We now prepare for the proof of Lemma \ref{Prop2}.
Actually by using (\ref{5.10}), we represent the probability in (\ref{5.1})
as a sum of the two following probabilities:
\begin{eqnarray} \label{5.2}
&& \mathbf{P} \Biggl( \bigcup_{n=N}^{\infty}\bigcup_{k=2}^{2^{n}+1}
(A^{\varepsilon}\cap A_{n,k}\cap B_{n,k}^{\mathrm{c}})\Biggr)
\nonumber\\
&&\qquad = \mathbf{P} \Biggl( \bigcup_{n=N}^{\infty}\bigcup_{k=2}^{2^{n}
+1} ( A^{\varepsilon}\cap A_{n,k}\cap B_{n,k}^{+,\mathrm{c}} )
\Biggr) \\
&&\qquad\quad{} + \mathbf{P} \Biggl( \bigcup_{n=N}^{\infty}\bigcup_{k=2}
^{2^{n}+1} ( A^{\varepsilon}\cap A_{n,k}\cap B_{n,k}^{-,\mathrm{c}
} ) \Biggr) .\nonumber
\end{eqnarray}
Now we will handle each term on the right-hand side of (\ref{5.2}) separately.
\begin{lemma}[{[First term in (\ref{5.2})]}]\label{L.1part}
For $\varepsilon\in(0,\frac{1}{1+\beta})$,
\begin{equation}
\lim_{N\uparrow\infty}\mathbf{P} \Biggl( \bigcup_{n=N}^{\infty}\bigcup
_{k=2}^{2^{n}+1} ( A^{\varepsilon}\cap A_{n,k}\cap B_{n,k}^{+,\mathrm{c}
} ) \Biggr) = 0.
\end{equation}
\end{lemma}
\begin{pf}
Consider the process $L_{n,k}^{+}(s), s\leq T_{+}$. On $A_{n,k}$ there
exists a jump of the martingale measure $M$ of the form $ r^{\ast}
\delta_{s^{\ast},y^{\ast}}$ for some
\begin{eqnarray}
r^{\ast}&\geq& c_{\mazinti{(\ref{c5.2})}} 2^{-{\alpha}/({1+\beta})n}n^{{1}
/({1+\beta})},\nonumber\\[-8pt]\\[-8pt]
s^{\ast}&\in&\bigl[ t-2^{-\alpha n}, t-2^{-\alpha
(n+1)}\bigr],\qquad y^{\ast}\in I_{k-2 }^{(n)}.\nonumber
\end{eqnarray}
Hence
\begin{eqnarray}\label{equt:2}\hspace*{32pt}
\Delta L_{n,k}^{+}(s^{\ast})&\geq&\inf_{y\in I_{k-2}^{(n)}, s\in[
2^{-\alpha(n+1)},2^{-\alpha n}]} \biggl(p_{s}
^{\alpha} \biggl(\frac{k}{2^{n}}-y \biggr) -p_{s}^{\alpha}
\biggl(\frac{k+1}{2^{n}}-y \biggr) \biggr) _{+}\nonumber\\[-8pt]\\[-8pt]
&&\hspace*{93pt}{} \times c_{\mazinti{(\ref{c5.2})}} 2^{-{\alpha}/({1+\beta})n} n^{
{1}/({1+\beta})}.\nonumber
\end{eqnarray}
It is easy to get
\begin{eqnarray}\label{equt:1'}\qquad
&& \inf_{y\in I_{k-2}^{(n)}, s\in[2^{-\alpha(n+1)},2^{-\alpha n}
]} \biggl(p_{s}^{\alpha} \biggl(\frac{k}{2^{n}
}-y \biggr) -p_{s}^{\alpha} \biggl(\frac{k+1}{2^{n}}-y \biggr) \biggr)
_{+}\nonumber\\
&&\qquad = \mathop{\inf_{
2^{-n}\leq z\leq2^{-n+1},}}_
{s\in[2^{-\alpha(n+1)},2^{-\alpha n}]}
\bigl(p_{s}^{\alpha} ( z ) -p_{s}^{\alpha
} ( z+2^{-n} ) \bigr) _{+}\nonumber\\
&&\qquad = \mathop{\inf_{2^{-n}\leq z\leq2^{-n+1},}}_
{s\in[2^{-\alpha(n+1)},2^{-\alpha n}]}
s^{-1/\alpha} \bigl( p_{1}^{\alpha}(zs^{-1/\alpha})-p_{1}^{\alpha
}\bigl((z+2^{-n})s^{-1/\alpha}\bigr) \bigr) _{+}\\
&&\qquad \geq2^{n}{\mathop{\inf_{
2^{-n}\leq z\leq3\cdot2^{-n},}}_{s\in[2^{-\alpha(n+1)},2^{-\alpha n}]}}
\vert(p_{1}^{\alpha})^{\prime}(zs^{-1/\alpha}) \vert
2^{-n}s^{-1/\alpha}\nonumber\\
&&\qquad \geq{2^{n}\inf_{1\leq x\leq6} }\vert(p_{1}^{\alpha
})^{\prime} ( x ) \vert=: c_{\mazinti{(\ref{equt:1'})}}2^{n},\nonumber
\end{eqnarray}
where $c_{\mazinti{(\ref{equt:1'})}}>0$. In fact, from (\ref{L1.2}),
\begin{equation}
\frac{d}{dz}p_{1}^{\alpha}(z) = -\int_{0}^{\infty
}ds\, q_{1}^{\alpha/2}(s) \frac{z}{2s} p_{s}^{(2)}(z) \neq0,\qquad
z\neq0,
\end{equation}
and $(p_{\alpha}^{(2)})^{\prime}(x)\not=0$ for any $x\not=0$.
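To clarify the last two bounds in (\ref{equt:1'}): by the mean value theorem (and since $p_{1}^{\alpha}$ is decreasing on $(0,\infty)$),
\[
p_{1}^{\alpha}(zs^{-1/\alpha})-p_{1}^{\alpha}\bigl((z+2^{-n})s^{-1/\alpha}\bigr) = \bigl\vert(p_{1}^{\alpha})^{\prime}(\tilde{z}s^{-1/\alpha})\bigr\vert\, 2^{-n}s^{-1/\alpha}
\]
for some $\tilde{z}\in[z,z+2^{-n}]\subseteq[2^{-n},3\cdot2^{-n}]$; moreover, $s\in[2^{-\alpha(n+1)},2^{-\alpha n}]$ gives $s^{-1/\alpha}\in[2^{n},2^{n+1}]$, so that $\tilde{z}s^{-1/\alpha}\in[1,6]$, $s^{-1/\alpha}\geq2^{n}$ and $2^{-n}s^{-1/\alpha}\geq1$.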
Apply (\ref{equt:1'}) in (\ref{equt:2}) to arrive at
\begin{equation} \label{equt:3}
\Delta L_{n,k}^{+}(s^{\ast}) \geq c_{\mazinti{(\ref{equt:3})}}2^{(1-{\alpha
}/({1+\beta}))n }n^{{1}/({1+\beta})} = c_{\mazinti{(\ref{equt:3})}}2^{-\eta
_{\mathrm{c}}n} n^{{1}/({1+\beta})}.
\end{equation}
Using Lemma \ref{L7} with $ \theta=1+\beta$ and
\begin{equation}
\delta= (1+\beta)\mathsf{1}_{\{2\beta<\alpha-1\}}+(\alpha-\beta
-\varepsilon)\mathsf{1}_{\{2\beta\geq\alpha-1\} },
\end{equation}
we get, with $c_{\varepsilon}$ appearing in definition (\ref{A.eps}) of
$A^{\varepsilon}$,
\begin{equation} \label{not.tn}\hspace*{28pt}
T_{\pm} \leq c_{\varepsilon} \bigl(2^{-n(1+\beta)}\mathsf{1}_{\{2\beta
<\alpha-1\}}+2^{-n(\alpha-\beta-\varepsilon)}\mathsf{1}_{\{2\beta\geq
\alpha-1\}} \bigr) =: t_{n} \qquad\mbox{on } A^{\varepsilon}.
\end{equation}
Hence for all $n$ sufficiently large we obtain
\begin{eqnarray}\label{equt:4'}\quad
&& \mathbf{P} \bigl( L_{n,k}^{+}(T_{+})<2^{-\eta_{\mathrm{c}}n} n^{{1}/({1+\beta})
-\varepsilon}, A^{\varepsilon}\cap A_{n,k} \bigr) \nonumber\\
&&\qquad \leq\mathbf{P} \bigl( L_{n,k}^{+}(T_{+})<2^{-\eta_{\mathrm{c}}
n} n^{{1}/({1+\beta})-\varepsilon},\nonumber\\
&&\hspace*{43.3pt} \Delta L_{n,k}^{+}(s^{\ast})\geq
c_{\mazinti{(\ref{equt:3})}}2^{-\eta_{\mathrm{c}}n} n^{{1}/({1+\beta})},
A^{\varepsilon} \bigr) \nonumber\\
&&\qquad \leq\mathbf{P} \biggl( \inf_{s\leq T^{+}}L_{n,k}^{+}(s)<-\frac{1}
{2} c_{\mazinti{(\ref{equt:3})}}2^{-\eta_{\mathrm{c}}n} n^{{1}/({1+\beta})
}, A^{\varepsilon} \biggr) \nonumber\\[-8pt]\\[-8pt]
&&\qquad \leq\mathbf{P} \biggl( \inf_{s\leq t_{n}}L_{n,k}^{+}(s)<-\frac{1}
{2} c_{\mazinti{(\ref{equt:3})}}2^{-\eta_{\mathrm{c}}n} n^{{1}/({1+\beta})} \biggr)
\nonumber\\
&&\qquad \leq\exp\bigl\{-c_{\beta}(t_{n})^{-1/\beta}\bigl(c_{\mazinti{(\ref{equt:3})}}
2^{-\eta_{\mathrm{c}}n} n^{{1}/({1+\beta})}\bigr)^{(1+\beta)/\beta
} \bigr\}\nonumber\\
&&\qquad \leq\exp\bigl\{-c_{\varepsilon}n^{1/\beta}\bigl(t_{n}^{-1}2^{-\eta_{\mathrm{c}
}(1+\beta)n}\bigr)^{1/\beta} \bigr\} \nonumber\\
&&\qquad\leq\exp\bigl\{-c_{\varepsilon}n^{1/\beta
}2^{(1-\varepsilon)n}\bigr\},\nonumber
\end{eqnarray}
where (\ref{equt:4'}) follows by Lemma \ref{L.small.values}, and the
rest is
simple algebra. From this we get that for $N$ sufficiently large,
\begin{eqnarray}\qquad
\mathbf{P} \Biggl( \bigcup_{n=N}^{\infty}\bigcup_{k=2}^{2^{n}+1}
(A^{\varepsilon}\cap A_{n,k}\cap B_{n,k}^{+,\mathrm{c}} ) \Biggr)
&\leq&
\sum_{n=N}^{\infty}\sum_{k=2}^{2^{n}+1}\mathbf{P} \bigl( A^{\varepsilon
}\cap A_{n,k}\cap B_{n,k}^{+,\mathrm{c}} \bigr) \nonumber\\
&\leq&
\sum_{n=N}^{\infty}\sum_{k=2}^{2^{n}+1}\exp\bigl\{-c_{\varepsilon
}n^{1/\beta} 2^{(1-\varepsilon)n}\bigr\} \\
&=& \sum_{n=N}^{\infty}2^{n}
\exp\bigl\{-c_{\varepsilon}n^{1/\beta} 2^{(1-\varepsilon)n}\bigr\},\nonumber
\end{eqnarray}
which converges to $0$ as $N\uparrow\infty$, and we are done with the
proof of
Lemma \ref{L.1part}.
\end{pf}
\begin{lemma}[{[Second term in (\ref{5.2})]}]\label{prop:2}
For all $ \varepsilon>0$ sufficiently small,
\begin{equation}
\lim_{N\uparrow\infty}\mathbf{P} \Biggl( \bigcup_{n=N}^{\infty}\bigcup
_{k=2}^{2^{n}+1} ( A^{\varepsilon}\cap A_{n,k}\cap
B_{n,k}^{-,\mathrm{c}}) \Biggr) = 0.
\end{equation}
\end{lemma}
The proof of this lemma will be postponed almost to the end of the section.
For its preparation, fix $\rho\in(0,\frac{1}{2})$. Define
\[
A_{n}^{\rho} := \Bigl\{\omega\dvtx\mbox{there exists }I_{k}^{(n)} \mbox{ with
}\sup_{s\in[ t-2^{-\alpha(1-\rho)n}, t)}X_{s}\bigl(I_{k}^{(n)}
\bigr)\geq2^{-n(1-2\rho)} \Bigr\}.
\]
Note that
\begin{eqnarray}
&& \mathbf{P} \Biggl( \bigcup_{k=2}^{2^{n}+1} ( A^{\varepsilon}\cap
A_{n,k}\cap B_{n,k}^{-,\mathrm{c}} ) \Biggr) \nonumber\\
&&\qquad \leq\mathbf{P}(A_{n}^{\rho})+\mathbf{P} \Biggl( \bigcup_{k=2}^{2^{n}
+1} ( A_{n}^{\rho,\mathrm{c}}\cap A^{\varepsilon}\cap A_{n,k}\cap
B_{n,k}^{-,\mathrm{c}} ) \Biggr) \\
&&\qquad \leq\mathbf{P}(A_{n}^{\rho})+\sum_{k=2}^{2^{n}+1}\mathbf{P} (
A_{n}^{\rho,\mathrm{c}}\cap A^{\varepsilon}\cap A_{n,k}\cap B_{n,k}
^{-,\mathrm{c}} ).\nonumber
\end{eqnarray}
Now let us introduce the notation
\begin{equation}
B_{n,k}^{-,1} := \Bigl\{\sup_{s\leq T_{-}}\Delta L_{n,k}^{-}(s)\leq
2^{-\eta_{\mathrm{c}}n-\varepsilon n} \Bigr\}.
\end{equation}
Then we have
\begin{eqnarray} \label{3terms}
&& \mathbf{P} \Biggl( \bigcup_{k=2}^{2^{n}+1} ( A^{\varepsilon}\cap
A_{n,k}\cap B_{n,k}^{-,\mathrm{c}} ) \Biggr) \nonumber\\
&&\qquad \leq\mathbf{P}(A_{n}^{\rho})+\sum_{k=2}^{2^{n}+1}\mathbf{P} (
A_{n}^{\rho,\mathrm{c}}\cap A^{\varepsilon}\cap A_{n,k}\cap B_{n,k}
^{-,\mathrm{c}} ) \nonumber\\
&&\qquad \leq\mathbf{P}(A_{n}^{\rho})+\sum_{k=2}^{2^{n}+1}\mathbf{P} (
A^{\varepsilon}\cap B_{n,k}^{-,\mathrm{c}}\cap B_{n,k}^{-,1} )
\\
&&\qquad\quad{} +\sum_{k=2}^{2^{n}+1}\mathbf{P} ( A^{\varepsilon}\cap
A_{n}^{\rho,\mathrm{c}}\cap A_{n,k}\cap B_{n,k}^{-,1,\mathrm{c}} )
\nonumber\\
&&\qquad =: \mathbf{P}(A_{n}^{\rho})+\sum_{k=2}^{2^{n}+1}P_{n,k}^{\varepsilon}
+\sum_{k=2}^{2^{n}+1}P_{n,k }^{\varepsilon,\varrho}.\nonumber
\end{eqnarray}
In the following lemmas we consider the three terms in (\ref{3terms})
separately.
\begin{lemma}[{[First term in (\ref{3terms})]}]\label{L.small.prob}
There exists a constant $c_{\mazinti{(\ref{small.prob})}}$ independent
of $\rho\in(0,\frac{1}{2})$ such that
\begin{equation} \label{small.prob}
\mathbf{P}(A_{n}^{\rho}) \leq c_{\mazinti{(\ref{small.prob})}} 2^{-\rho
n},\qquad n\geq n_{0 }.
\end{equation}
\end{lemma}
\begin{pf}
Fix $n\geq n_{0 }$. Define the stopping time $ \tau_{n}=\tau_{n}(\rho
)$ as
\begin{equation}
\inf\bigl\{s\in\bigl[ t-2^{-\alpha(1-\rho)n},t\bigr)\dvtx X_{s}\bigl(I_{k}^{(n)}\bigr)
\geq2^{-n(1-2\rho)}\mbox{ for some }I_{k}^{(n)} \bigr\},
\end{equation}
if $ \omega\in A_{n}^{\rho}$, and as $t$ if $ \omega\in
A_{n }^{\rho,\mathrm{c}}$. Fix any $\omega\in A_{n}^{\rho}
$. By definition of $\tau_{n}$ there exists a sequence
$\{(s_{j},I_{k_{j}}^{(n)})\dvtx j\geq1\}$ such that
\begin{equation}
s_{j}\downarrow\tau_{n} \qquad\mbox{as }
j\uparrow\infty\quad\mbox{and}\quad
X_{s_{j}}\bigl(I_{k_{j}}^{(n)}\bigr)\geq2^{-n(1-2\rho)},\qquad j\geq1.
\end{equation}
There exists a subsequence $\{j_{r}\dvtx r\geq1\}$ such that
$I_{k_{j_{r}}}
^{(n)}=I_{\tilde{k}}^{(n)}$ for some $\tilde{k}\in\mathsf{Z}$. Hence,
for the
fixed $\omega\in A_{n}^{\rho}$,
\begin{equation}\label{X_tau}
X_{\tau_{n}}\bigl(I_{\tilde{k}}^{(n)}\bigr) = \lim_{r\rightarrow\infty
}X_{s_{j_{r}}
}\bigl(I_{\tilde{k}}^{(n)}\bigr)\geq2^{-n(1-2\rho)}.
\end{equation}
Put $\tilde{B}:=[\tilde{k}2^{-n}-2^{-n(1-\rho)},(\tilde{k}+1)2^{-n}
+2^{-n(1-\rho)}]$. Then there is a constant
$c_{\mazinti{(\ref{int_bound})}}$ independent of $ \rho$ such that
\begin{eqnarray}\label{int_bound}
\int_{\tilde{B}}dy\, p_{t-s}^{\alpha}(y-z) \geq c_{\mazinti{(\ref{int_bound}
)}} \hspace*{30pt}\nonumber\\[-8pt]\\[-8pt]
\eqntext{\mbox{for all } z\in I_{\tilde{k}}^{(n)}\mbox{ and } s\in\bigl[
t-2^{-\alpha(1-\rho)n},t\bigr).}
\end{eqnarray}
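One way to verify (\ref{int_bound}): for such $s$ we have $(t-s)^{1/\alpha}\leq2^{-n(1-\rho)}$, hence for every $z\in I_{\tilde{k}}^{(n)}$ the interval $\{y\dvtx|y-z|\leq(t-s)^{1/\alpha}\}$ is contained in $\tilde{B}$, and by scaling
\[
\int_{\tilde{B}}dy\, p_{t-s}^{\alpha}(y-z) \geq \int_{|y-z|\leq(t-s)^{1/\alpha}}dy\, p_{t-s}^{\alpha}(y-z) = \int_{|w|\leq1}dw\, p_{1}^{\alpha}(w) > 0,
\]
a constant which depends neither on $\rho$ nor on $n$, $s$, $z$.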
Now, by the strong Markov property,
\begin{eqnarray*}
\mathbf{E}X_{t}(\tilde{B})&=& \mathbf{E} e^{a(t-\tau_{n})}
S_{t-\tau_{n}}^{\alpha}X_{\tau_{n }}(\tilde{B}) \\
&\geq& e
^{-|a|t} \mathbf{E} \biggl\{\int_{\tilde{B}} dy \int_{\mathsf{R}
} X_{\tau_{n}}(dz)p_{t-\tau_{n}}^{\alpha}(y-z);A_{n}^{\rho} \biggr\}\\
&\geq& e^{-|a|t} \mathbf{E} \biggl\{\int_{I_{\tilde{k}}^{(n)}}
X_{\tau_{n}}(dz)\int_{\tilde{B}}dy\, p_{t-\tau_{n}}^{\alpha
}(y-z); A_{n}^{\rho} \biggr\}\\
&\geq& c_{\mazinti{(\ref{int_bound})}} \mathbf{E}
\bigl\{X_{\tau_{n}}\bigl(I_{\tilde{k}}^{(n)}\bigr);A_{n}^{\rho}\bigr\}.
\end{eqnarray*}
Taking into account (\ref{X_tau}) and (\ref{int_bound}) then gives
\begin{equation}\label{E_bound}
\mathbf{E}X_{t}(\tilde{B}) \geq c_{\mazinti{(\ref{int_bound})}} 2^{-n(1-2\rho)}
\mathbf{P}(A_{n}^{\rho}).
\end{equation}
On the other hand, in view of Corollary \ref{C1},
\begin{eqnarray}\label{fin}
\mathbf{E}X_{t}(\tilde{B}) &\leq& |\tilde{B}| \mathbf{E}\sup_{0\leq
x\leq1}X_{t}(x)\nonumber\\[-8pt]\\[-8pt]
&\leq& 2\bigl(2^{-n}+2^{-n(1-\rho)}\bigr) \mathbf{E}\sup_{0\leq x\leq
1}X_{t}(x)\leq C 2^{-n(1-\rho)},\nonumber
\end{eqnarray}
where we wrote $|\tilde{B}|$ for the length of the interval $\tilde{B}$.
Combining (\ref{E_bound}) and (\ref{fin}) completes the proof.
\end{pf}
\begin{lemma}[{[Second term in (\ref{3terms})]}]\label{L.2t}
For fixed $\varepsilon\in(0,\frac{1}{1+\beta})$ and all $n$ large enough,
\begin{equation}
P_{n,k}^{\varepsilon} \leq2^{-3n/2},\qquad 2\leq k\leq2^{n}+1.
\end{equation}
\end{lemma}
\begin{pf}
Since $T_{-}\leq t_{n}$ on $A^{\varepsilon}$ [recall notation
(\ref{not.tn})],
\begin{equation}
P_{n,k}^{\varepsilon} \leq\mathbf{P} \Bigl( \sup_{v\leq t_{n}}
L_{v}\mathrm{1} \Bigl\{\sup_{u\leq v}\Delta L_{u}\leq2^{-n(\eta_{\mathrm{c}
}+\varepsilon)} \Bigr\}\geq2^{-n\eta_{\mathrm{c}}} \Bigr).
\end{equation}
Applying now Lemma \ref{L3}, with notation of $t_{n}$ from (\ref{not.tn}) we
obtain
\begin{equation}
P_{n,k}^{\varepsilon} \leq\bigl(c_{\varepsilon}2^{\varepsilon\beta
n-(1-\eta_{\mathrm{c}})(1+\beta)n}+c_{\varepsilon}2^{\eta_{\mathrm{c}}
(1+\beta)n+\varepsilon\beta n-(\alpha-\beta-\varepsilon)n}
\bigr)^{ (2^{n\varepsilon})}.
\end{equation}
Inserting the definition of $\eta_{\mathrm{c}}$ and making $n$ sufficiently
large, the estimate in the lemma follows.
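For instance, recalling that $\eta_{\mathrm{c}}=\frac{\alpha}{1+\beta}-1$ [cf. (\ref{equt:3})], the two exponents of $2$ in the last display equal
\[
\varepsilon\beta n-(1-\eta_{\mathrm{c}})(1+\beta)n = -(2+2\beta-\alpha-\varepsilon\beta)n
\quad\mbox{and}\quad
\eta_{\mathrm{c}}(1+\beta)n+\varepsilon\beta n-(\alpha-\beta-\varepsilon)n = -\bigl(1-\varepsilon(1+\beta)\bigr)n,
\]
and both are negative for small $\varepsilon$ (recall $\alpha\leq2$ and $\varepsilon<\frac{1}{1+\beta}$); raising a term of order $2^{-cn}$ to the power $2^{n\varepsilon}$ then yields a bound which is eventually much smaller than $2^{-3n/2}$.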
\end{pf}
In order to deal with the third term $P_{n,k }^{\varepsilon,\varrho}
$, we need to define additional events
\begin{eqnarray}\quad
A_{n,k}^{\varepsilon,\rho,1} &:=& \biggl\{
\mbox{There exists a jump of }M\mbox{ of the form }r^{\ast
}\delta_{(s^{\ast},y^{\ast})}\nonumber\\
&&\hspace*{6.2pt} \mbox{for some }(r^{\ast},s^{\ast},y^{\ast}
) \mbox{ such that }r^{\ast}\geq(t-s)^{{1}/({1+\beta})+2\varepsilon
/\alpha},\\
&&\hspace*{34.5pt} \biggl\vert\frac
{k+1}{2^{n}}-y^{\ast} \biggr\vert\leq(t-s)^{1/\alpha-2\varepsilon}
, s^{\ast}\geq t-2^{-\alpha(1+\rho)n} \biggr\} \nonumber
\end{eqnarray}
and
\begin{eqnarray*}
A_{n,k}^{\varepsilon,\rho,2} &:=& A_{n}^{\rho,\mathrm{c}}\cap A_{n,k}
\\
&&{}\cap\biggl\{\mbox{There exists a jump of }M \mbox{ of the form }
r^{\ast}\delta_{(s^{\ast},y^{\ast})}\\
&& \hspace*{15.6pt}\mbox{ for some }(r^{\ast},s^{\ast},y^{\ast})\mbox{ such that }
r^{\ast}\geq(t-s)^{{1}/({1+\beta})+2\varepsilon/\alpha},\\
&&\hspace*{17.6pt} y^{\ast}\in\biggl[\frac{k+1/2}{2^{n}}, \frac{k+1+2^{\rho n+\alpha
2\varepsilon(1-\rho)n}}{2^{n}}\biggr],\\
&&\hspace*{107.65pt} s^{\ast}\in\bigl[t-2^{-\alpha
(1-\rho)n}, t-2^{-\alpha(1+\rho)n}\bigr] \biggr\}.
\end{eqnarray*}
So far we assumed that $ \varepsilon\in(0,\frac{1}{1+\beta})$
and $ \rho\in(0,\frac{1}{2})$. Suppose additionally
that
\begin{equation} \label{equt:9}
\frac{\alpha(\alpha+1)2\varepsilon}{1-\eta_{\mathrm{c}}+2\varepsilon
(\alpha^{2}+\alpha-1)} \leq\rho.
\end{equation}
\begin{lemma}[{[Splitting of the third term in (\ref{3terms})]}]\label{lem:9}
For $\rho,\varepsilon>0$ sufficiently small and satisfying (\ref{equt:9})
we have
\begin{equation} \label{splits}
P_{n,k}^{\varepsilon,\varrho} \leq\mathbf{P}(A_{n,k}^{\varepsilon,\rho
,1})+\mathbf{P}(A_{n,k}^{\varepsilon,\rho,2})
\end{equation}
for all $ 0\leq k\leq2^{n}-1$ and $n\geq n_{\varepsilon}$.
\end{lemma}
\begin{pf}
First let us describe the strategy of the proof. We are going to show that
a jump of $L_{n,k}^{-}(s), s\leq T_{- }$, of size at least
$2^{-n(\eta_{\mathrm{c}}+\varepsilon)}$ can occur only at the
points indicated in the definitions of $A_{n,k}^{\varepsilon,\rho,1}$ and
$A_{n,k}^{\varepsilon,\rho,2}$. In fact, we will show that outside
the sets mentioned in $A_{n,k}^{\varepsilon,\rho,1}$ and $A_{n,k}
^{\varepsilon,\rho,2}$ the jumps of $L_{n,k}^{-}(s), s\leq T_{- }$, are less
than $2^{-n(\eta_{\mathrm{c}}+\varepsilon)}$.
To implement this strategy, first let us recall that all the jumps of
$ L_{n,k}^{-}(s), s\leq T_{- }$, equal to
\begin{equation} \label{5.3}
\Delta X_{s\ast}(y^{\ast}) \biggl(p_{t-s}^{\alpha
} \biggl( \frac{k+1}{2^{n}}-y^{\ast} \biggr) -p_{t-s}^{\alpha} \biggl(
\frac{k}{2^{n}}-y^{\ast} \biggr) \biggr) _{+}
\end{equation}
for some $(s^{\ast},y^{\ast})\in[0,t)\times\mathsf{R}$.
Recall that by definition (\ref{A.eps}), on the event $A^{\varepsilon}$,
\begin{equation} \label{equt:5}
\vert\Delta X_{s} \vert\leq c_{\mazinti{(\ref{inL8})}}(t-s)^{(1+\beta
)^{-1}-\gamma}
\end{equation}
with $ \gamma\in(0,(1+\beta)^{-1})$. On the other hand using
Lemma \ref{L1} with $\delta=1$ we obtain
\begin{equation}\label{equt:6}
p_{t-s}^{\alpha} \biggl( \frac{k+1}{2^{n}}-y \biggr) -p_{t-s}^{\alpha
} \biggl( \frac{k}{2^{n}}-y \biggr) \leq C2^{-n}(t-s)^{-2/\alpha} .
\end{equation}
From (\ref{equt:5}) and (\ref{equt:6}) we infer
\begin{eqnarray} \label{equt:7}
&& \sup_{s\leq t-2^{-\alpha(1-\rho)n}}\Delta X_{s}\sup_{y\in\mathsf{R}} \biggl(
p_{t-s}^{\alpha} \biggl( \frac{k+1}{2^{n}}-y \biggr)
-p_{t-s}^{\alpha} \biggl( \frac{k}{2^{n}}-y \biggr) \biggr)
\nonumber\\
&&\qquad \leq
Cc_{\mazinti{(\ref{inL8})}}2^{-n}\bigl(2^{-\alpha(1-\rho)n}\bigr)
^{{1}/({1+\beta})-\gamma-2/\alpha}
\\
&&\qquad= C 2^{-n(\eta_{\mathrm{c}}-\alpha
\gamma+\rho(1-\eta_{\mathrm{c}}+\alpha\gamma))}.\nonumber
\end{eqnarray}
Furthermore if the jump $\Delta X_{s}$ occurs at the point $y^{\ast}$ with
\begin{equation}
\biggl\vert y^{\ast}-\frac{k+1}{2^{n}} \biggr\vert\geq(t-s)^{1/\alpha
-2\varepsilon},
\end{equation}
then again by Lemma \ref{L1}, for any $\delta\in[0,1]$,
\begin{eqnarray}
&&p_{t-s}^{\alpha} \biggl( \frac{k+1}{2^{n}}-y \biggr) -p_{t-s}^{\alpha
} \biggl( \frac{k}{2^{n}}-y \biggr) \nonumber\\[-8pt]\\[-8pt]
&&\qquad\leq C 2^{-n\delta}(t-s)^{-\delta
/\alpha} p_{t-s}^{\alpha}
\bigl((t-s)^{1/\alpha-2\varepsilon}\bigr).\nonumber
\end{eqnarray}
Since
\begin{equation}
p_{1}^{\alpha}(x) \leq C x^{-1-\alpha},\qquad x\in\mathsf{R},
\end{equation}
we get the bound
\begin{equation}\hspace*{25pt}
p_{t-s}^{\alpha} \biggl( \frac{k+1}{2^{n}}-y \biggr) -p_{t-s}^{\alpha
} \biggl( \frac{k}{2^{n}}-y \biggr) \leq C 2^{-n\delta}(t-s)^{-
({\delta+1})/{\alpha}+2\varepsilon(\alpha+1)} .
\end{equation}
Hence
\begin{eqnarray}\hspace*{32pt}
&&\sup_{s<t} \sup_{y\dvtx\vert y-({k+1})/{2^{n}} \vert\geq
(t-s)^{1/\alpha-2\varepsilon}}\Delta X_{s}(y)
\biggl(p_{t-s}^{\alpha} \biggl( \frac{k+1}{2^{n}}-y \biggr)\nonumber\\
&&\hspace*{169.1pt}{} -p_{t-s}^{\alpha
} \biggl( \frac{k}{2^{n}}-y \biggr) \biggr) \\
&&\qquad\leq Cc_{\mazinti{(\ref{inL8})}} 2^{-n\delta}(t-s)^{-({\delta+1})/{\alpha
}+2\varepsilon(\alpha+1)+{1}/({\beta+1})-\gamma}.\nonumber
\end{eqnarray}
Set
\begin{equation}
\delta:= \eta_{\mathrm{c}}+\alpha\bigl(2\varepsilon(\alpha+1)-\gamma\bigr).
\end{equation}
Note that for all $\varepsilon$ and $\gamma$ sufficiently small, we have
$\delta\in[0,1]$, and we can apply the previous estimates. Thus we
obtain
\begin{eqnarray} \label{equt:8'}\hspace*{32pt}
&&\sup_{s<t} \sup_{y\dvtx\vert y-({k+1})/{2^{n}} \vert\geq
(t-s)^{1/\alpha-2\varepsilon}}\Delta X_{s}(y)
\biggl(p_{t-s}^{\alpha} \biggl( \frac{k+1}{2^{n}}-y \biggr)\nonumber\\
&&\hspace*{169.1pt}{} -p_{t-s}^{\alpha
} \biggl( \frac{k}{2^{n}}-y \biggr) \biggr)\\
&&\qquad\leq Cc_{\mazinti{(\ref{inL8})}} 2^{-n(\eta_{\mathrm{c}}+\alpha(2\varepsilon
(\alpha+1)-\gamma))}.\nonumber
\end{eqnarray}
Now if we take $ \gamma=2\varepsilon(\alpha+1-1/\alpha)$, which
lies in the admissible range of $\gamma$, and $\rho$ as in (\ref{equt:9}), we
conclude that the right-hand side of (\ref{equt:7}) and (\ref{equt:8'}) is
bounded by
\begin{equation} \label{equt:10'}
C 2^{-n(\eta_{\mathrm{c}}+2\varepsilon)}.
\end{equation}
For any jump $r^{\ast}\delta_{(s^{\ast},y^{\ast})}$ of $M$ such that
$r^{\ast
}\leq(t-s)^{{1}/({1+\beta})+2\varepsilon/\alpha}$ and $s^{\ast}<t$ we may
apply Lemma \ref{L1} with $\delta=\eta_{\mathrm{c}}+2\varepsilon$ to
get that
\begin{equation} \label{equt:11'}\quad
\Delta X_{s\ast}(y^{\ast}) \biggl(p_{t-s}^{\alpha
} \biggl( \frac{k+1}{2^{n}}-y^{\ast} \biggr) -p_{t-s}^{\alpha}
\biggl(\frac{k}{2^{n}}-y^{\ast} \biggr) \biggr) \leq C 2^{-n(\eta_{\mathrm{c}
}+2\varepsilon)}.
\end{equation}
Now recall (\ref{5.3}). Hence combining (\ref{equt:7}), (\ref{equt:8'}),
(\ref{equt:10'}) and (\ref{equt:11'}) the conclusion of Lemma \ref
{lem:9} follows.
\end{pf}
In the next two lemmas we will bound the two probabilities on the right-hand
side of (\ref{splits}).
\begin{lemma}[{[First term in (\ref{splits})]}]\label{L.2split}
For all $ \rho,\varepsilon>0$ sufficiently small and satisfying
\begin{equation}\label{and}
6 \varepsilon(\alpha+1+\beta) \leq\rho,
\end{equation}
we have
\begin{equation}
\mathbf{P}(A_{n,k}^{\varepsilon,\rho,1}) \leq2^{-n-n\rho/2}
\end{equation}
for all $ k,n$ considered.
\end{lemma}
\begin{pf}
It is easy to see that
\begin{eqnarray*}
A_{n,k}^{\varepsilon,\rho,1} &\subseteq& \bigcup_{l=(1+\rho)n}^{\infty
} \biggl\{\mbox{There exists a jump of }M\mbox{ of the form }r^{\ast}
\delta_{(s^{\ast},y^{\ast})}\\
&&\hspace*{38.2pt} \mbox{ for some }(r^{\ast},s^{\ast},y^{\ast
})\mbox{ such that }r^{\ast}\geq2^{-l({\alpha}/({1+\beta
})+2\varepsilon)},\\
&&\hspace*{40.2pt} \biggl\vert
\frac{k+1}{2^{n}}-y^{\ast} \biggr\vert\leq2^{-l(1-2\varepsilon\alpha
)}, s^{\ast}\in\bigl[t-2^{-\alpha l}, t-2^{-\alpha(l+1)}\bigr) \biggr\} \\
&=&\!:
\bigcup_{l=(1+\rho)n}^{\infty}A_{n,k,l }^{\varepsilon
,\rho,1}.
\end{eqnarray*}
Recall the random measure $N$ describing the jumps of $ X$. Write
$ Y_{n,k,l}$ for the $N$-measure of
\begin{eqnarray*}
&&\bigl[t(1-2^{-\alpha l}),t\bigl(1-2^{-\alpha(l+1)}\bigr)\bigr]\times\biggl[\frac
{k+1}{2^{n}}-2^{-l(1-2\alpha\varepsilon)}, \frac
{k+1}{2^{n}}+2^{-l(1-2\alpha
\varepsilon)}\biggr]\\
&&\qquad{}\times\bigl[2^{-l({\alpha}/({1+\beta})+2\varepsilon
)}, \infty\bigr).
\end{eqnarray*}
Then, by Markov's inequality,
\begin{equation}
\mathbf{P}(A_{n,k,l}^{\varepsilon,\rho,1}) = \mathbf{P}(Y_{n,k,l}
\geq1) \leq\mathbf{E}Y_{n,k,l }.
\end{equation}
Therefore,
\begin{equation}
\mathbf{P}(A_{n,k}^{\varepsilon,\rho,1}) \leq\sum_{l\geq(1+\rho
)n}\mathbf{P}(A_{n,k,l}^{\varepsilon,\rho,1}) \leq\sum_{l\geq(1+\rho
)n}\mathbf{E}Y_{n,k,l }.
\end{equation}
From the formula for the compensator of $N$ we get
\begin{eqnarray}
\mathbf{E}Y_{n,k,l} &=& \varrho\int_{t(1-2^{-\alpha l})}^{t(1-2^{-\alpha
(l+1)})}ds\, \mathbf{E}X_{s} \biggl( \biggl[\frac{k+1}{2^{n}
}-2^{-l(1-2\alpha\varepsilon)},\nonumber\\
&&\hspace*{118pt}\frac{k+1}{2^{n}}+2^{-l(1-2\alpha
\varepsilon
)} \biggr] \biggr)\nonumber\\[-8pt]\\[-8pt]
&&{} \times\int_{2^{-l({\alpha}/({1+\beta})+2\varepsilon)}
}^{\infty}dr\, r^{-2-\beta}\nonumber\\
&\leq& C 2^{-\alpha l} 2^{-l(1-2\alpha\varepsilon)} 2^{l(\alpha
+2\varepsilon(1+\beta))}.\nonumber
\end{eqnarray}
Consequently,
\begin{equation}\qquad
\mathbf{P}(A_{n,k}^{\varepsilon,\rho,1}) \leq C\sum_{l\geq(1+\rho
)n}2^{-l+2\varepsilon(\alpha+1+\beta)l} \leq C 2^{-(1+\rho
)n+2\varepsilon
(\alpha+1+\beta)(1+\rho)n}.
\end{equation}
Noting that $2\varepsilon(\alpha+1+\beta)(1+\rho)\leq\rho/2$ under the
conditions in the lemma, we complete the proof.
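Indeed, (\ref{and}) gives $2\varepsilon(\alpha+1+\beta)\leq\rho/3$, and $\rho<\frac{1}{2}$ implies $1+\rho<\frac{3}{2}$, whence $2\varepsilon(\alpha+1+\beta)(1+\rho)<\rho/2$.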
\end{pf}
\begin{lemma}[{[Second term in (\ref{splits})]}]\label{lem:10}
For all $\varepsilon,\rho>0$ sufficiently small,
\begin{equation}
\mathbf{P}(A_{n,k}^{\varepsilon,\rho,2}) \leq2^{-3n/2}
\end{equation}
for all $ k,n$ considered.
\end{lemma}
\begin{pf}
It is easy to see by construction that
\begin{subequation}
\label{big.array}
\begin{eqnarray}
&&A_{n,k}^{\varepsilon,\rho,2}\subseteq A_{n}^{\rho,\mathrm{c}}\cap \biggl\{
\mbox{There exist at least two jumps of }M\nonumber\\
&&\hspace*{79.1pt} \mbox{of the form }r^{\ast}\delta_{(s^{\ast},y^{\ast}
)}\mbox{ such that}\nonumber\\
\label{a}
&&\hspace*{79.1pt} r^{\ast}\geq2^{-n({\alpha(1+\rho)}/({1+\beta})+2\varepsilon
(1+\rho))},\\
\label{b}
&&\hspace*{79.1pt} y^{\ast}\in\biggl[\frac{k-2}{2^{n}}, \frac{k+1+2^{\rho
n+2\alpha\varepsilon(1-\rho)n}}{2^{n}} \biggr] ,\\
\label{c}
&&\hspace*{79.1pt}\hspace*{11.46pt} s^{\ast}\in\bigl[ t-2^{-\alpha(1-\rho)n}, t-2^{-\alpha(1+\rho
)n} \bigr] \biggr\}.
\end{eqnarray}
On the event $A_{n}^{\rho,\mathrm{c}}$, for the intensity of jumps satisfying
(\ref{a})--(\ref{c}), we have
\end{subequation}
\begin{eqnarray*}
&& \int_{t-2^{-\alpha(1-\rho)n}}^{t-2^{-\alpha(1+\rho)n}} d
s\, X_{s} \biggl( \biggl[\frac{k-2}{2^{n}}, \frac{k+1+2^{\rho n+2\alpha
\varepsilon(1-\rho)n}}{2^{n}}\biggr] \biggr)\\
&&\quad{}\times \int_{2^{-n(
{\alpha(1+\rho)}/({1+\beta})+2\varepsilon(1+\rho))}}^{\infty}d
r\, r^{-2-\beta}\\
&&\qquad \leq2^{-\alpha(1-\rho)n} 2^{-n(1-2\rho)} 2^{\rho n+2\alpha
\varepsilon(1-\rho)n+2} 2^{n(\alpha(1+\rho)+2\varepsilon(1+\rho
)(1+\beta))}\\
&&\qquad \leq2^{-n} 2^{10(\rho+2\varepsilon)n} \leq2^{-{3}/{4}n}
\end{eqnarray*}
for all $\varepsilon$ and $\rho$ sufficiently small. Since the number of such
jumps can be represented by means of a time-changed standard Poisson process,
the probability of having at least two jumps is bounded by the square of the
above bound, and we are done.
\end{pf}
\begin{lemma}[{[Third term in (\ref{3terms})]}]\label{L.3t}For all $\rho
,\varepsilon>0$ sufficiently small, satisfying
(\ref{equt:9}) and (\ref{and}), we have
\begin{equation}
P_{n,k}^{\varepsilon,\varrho} \leq2^{-3n/2}+C2^{-n-\rho n/2},\qquad 2\leq
k\leq2^{n}+1, n\geq n_{\varepsilon}.
\end{equation}
\end{lemma}
\begin{pf}
The proof follows immediately from Lemmas \ref{lem:9}, \ref{L.2split}
and \ref{lem:10}.
\end{pf}
\begin{pf*}{Proof of Lemma \ref{prop:2}}
Applying Lemmas \ref{L.small.prob}, \ref{L.2t} and \ref{L.3t} to
(\ref{3terms}) we obtain
\begin{equation}\hspace*{28pt}
\mathbf{P} \Biggl( \bigcup_{k=2}^{2^{n}+1} ( A^{\varepsilon}\cap
A_{n,k}\cap B_{n,k}^{-,\mathrm{c}} ) \Biggr) \leq
c_{\mazinti{(\ref{small.prob})}} 2^{-\rho n}+2^{-n/2}+C2^{-\rho n/2}+2^{-n/2}
\end{equation}
for all $\rho,\varepsilon>0$ sufficiently small satisfying (\ref{equt:9})
and (\ref{and}) as well as all $n\geq n_{\varepsilon}$. Since these terms
are summable in $n$, the claim of the lemma follows.
\end{pf*}
\begin{pf*}{Proof of Lemma \ref{Prop2}}
The proof follows immediately from (\ref{5.10}) and
Lemmas \ref{L.1part} and \ref{prop:2}.
\end{pf*}
\section*{Acknowledgments}
We thank Don Dawson for very helpful discussions of the subject. We
also thank the referees for their useful comments and suggestions which
improved the exposition.
TITLE: $\lim_{x\to 1; x\in (0,\infty)-\{1\}} \frac{x^{\alpha} -1}{x-1}, \alpha\in\mathbb{R}$
QUESTION [3 upvotes]: I'm trying to evaluate this limit (taken from T.Tao's Analysis 1 book) without logarithms and without knowing that the function $f:(0,\infty)\to\mathbb{R}, f(x):=x^{\alpha}, \alpha\in\mathbb{R}$ is differentiable on $(0,\infty)$ (in fact the book asks to use this limit to show that $f$ is differentiable with derivative $f'(x)=\alpha x^{\alpha -1}$).
Now, I've tried to use the fact that $x^{\alpha}, \alpha\in\mathbb{R}$ is defined as $x^{\alpha}:=\lim_{n\to\infty} x^{q_n}$, where $(q_n)_{n=1}^\infty$ is any sequence of rational numbers converging to $\alpha$ (hence a bounded sequence), together with the fact (that I've already proved) that $\lim_{x\to 1; x\in (0,\infty)-\{1\}} \frac{x^q-1}{x-1}=q\ \forall q\in\mathbb{Q}$, in order to apply the squeeze theorem somehow, but I haven't gotten anywhere so far.
Any hints?
Best regards,
lorenzo.
REPLY [2 votes]: Using the fact that you have proved, define two sequences $a_n, b_n$ in $\Bbb Q$ such that $a_n$ is strictly increasing to $\alpha$ and $b_n$ is strictly decreasing to $\alpha$; then $a_n<\alpha<b_n$ for all $n$. We consider the limits from the right and from the left separately:
For $x>1$, we have $$\frac{x^{a_n}-1}{x-1}<\frac{x^\alpha -1}{x-1}<\frac{x^{b_n}-1}{x-1}$$
Taking the limit as $x\to 1^+$ (using the rational case for the outer terms), we get $$a_n\le \liminf_{x\to 1^+}\frac{x^\alpha -1}{x-1}\le\limsup_{x\to 1^+}\frac{x^\alpha -1}{x-1}\le b_n.$$
Then let $n\to\infty$ and apply the squeeze theorem.
For $0<x<1$ the same inequalities hold, because $x^q-1$ is decreasing in $q$ while $x-1<0$, so $\frac{x^q-1}{x-1}$ is again increasing in $q$: $$\frac{x^{a_n}-1}{x-1}<\frac{x^\alpha -1}{x-1}<\frac{x^{b_n}-1}{x-1},$$
and the same squeeze argument gives the limit from the left.
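As a quick sanity check of the value of the limit (not needed for the argument): for $\alpha=\tfrac12$,
$$\frac{x^{1/2}-1}{x-1}=\frac{\sqrt{x}-1}{(\sqrt{x}-1)(\sqrt{x}+1)}=\frac{1}{\sqrt{x}+1}\to\frac12\quad\text{as } x\to1,$$
which agrees with the general answer $\alpha$.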
Bryan Derballa, October 31, 2011. Posted by Geoffrey Hiller in United States.
Tags: United States
TITLE: Fake $0=1$ integral examples.
QUESTION [2 upvotes]: The classic "proof" that says 0=1 with integration by parts is this:
$$\int\frac{1}{x}\,dx=x\frac{1}{x}-\int -\frac 1{x^2}x\,dx=1+\int \frac1x\,dx.$$
However the wikipedia article gives another one of these integrands,
$$\int \frac{1}{x\log x}\,dx$$
My question is:
What integrands can be integrated by parts in this way to give a fallacious proof that $1=0$?
REPLY [6 votes]: This happens when we can write $\int f(x) \, dx$ as $\int u \, dv$ where $uv=1$: then, when we integrate by parts, the $uv$ term will be constant (and so the $\int v \, du$ term must be the same as the $\int u \, dv$ term).
But if $uv=1$, then $dv=d(1/u)=-du/u^2$, and so this is equivalent to requiring that
$$
f(x) \, dx = u(-du/u^2)=-du/u=-d(\ln u) \, .
$$
That is, we can build this kind of fake proof for any $f$ we like: we just need to take $u$ so that $\ln u$ is an antiderivative of $-f$. (Of course, that means that building the proof is tantamount to performing the integral...)
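For what it's worth, the question's second integrand $\frac{1}{x\log x}$ fits this recipe as well: an antiderivative of $-\frac{1}{x\log x}$ is $-\log\log x$, so we may take $u=\frac{1}{\log x}$ and $v=\log x$ (so that $uv=1$); integrating by parts then gives
$$
\int \frac{dx}{x\log x}=\frac{1}{\log x}\cdot\log x-\int \log x\left(-\frac{dx}{x\log^{2} x}\right)=1+\int \frac{dx}{x\log x} \, .
$$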
For example, we can integrate $x^2$ by parts, letting $u=e^{-x^3/3}$ and $dv=e^{x^3/3} x^2 \, dx$. Then $v=e^{x^3/3}$ and $du = -x^2e^{-x^3/3} \, dx$ and so
$$
\int x^2 \, dx = \int e^{-x^3/3} (e^{x^3/3} x^2 \, dx)= e^{-x^3/3}e^{x^3/3} - \int e^{x^3/3}(-x^2e^{-x^3/3} \, dx)=1+\int x^2 \, dx \, .
$$
Of course, this is unlikely to fool anyone, because the choice of $u$ and $v$ looks so unnatural. Perhaps the real answer to your question is "whenever this choice of $u$ and $v$ looks like a good idea." But "looking like a good idea" isn't particularly well-defined. In general, non-contrived-looking versions of this will occur when the antiderivative of $f$ is an explicit logarithm, meaning we can define $u$ without introducing a seemingly extraneous exponential. Note that your two examples are the derivative of $\ln x$ and the derivative of $\ln \ln x$; the other common function to use in this kind of fake proof is $\tan x$, which is the derivative of $-\ln \cos x$. | 199,410 |
LS709 Secrets to Powerful Instructional Feedback
2:30 PM - 3:30 PM Thursday, March 17
Instructional Design
International North
Feedback not only serves to inform learners completing eLearning modules, but it can also motivate or demotivate if not properly constructed. How feedback is targeted, displayed, and conveyed can greatly impact any eLearning course’s success. Are you doing all you can to provide your learners the insights and information they need to learn all they can?
| 274,222 |
TITLE: Large initial solutions to $x^3+y^3 = Nz^3$?
QUESTION [4 upvotes]: Let $x,y,z$ be non-zero integers. Is it true that the initial or smallest solution (in terms of absolute value) to,
$$x^3+y^3 = Nz^3\tag1$$
for $N=94$ is,
$$15642626656646177^3 + (-15616184186396177)^3 = 94\cdot 590736058375050^3\,?$$
If not, then what is the largest initial solution for $N<100$? Or $N<200$?
P.S. Related posts are $x^3+y^3 = 6z^3$, and $x^3+y^3 = 22z^3$, and $x^3+y^3 = 313^2z^3$. See also this paper by Dasgupta and Voight for more details (including the elliptic curve for eq.1).
REPLY [2 votes]: After some insight courtesy of Achille Hui, I was able to answer my own question. It is well known (see also this) that $x^3+y^3=N$ is birationally equivalent to the elliptic curve $u^3-432N^2=v^2$ using the transformation $x=\frac{36N+v}{6u}$, $y=\frac{36N-v}{6u}$.
In this comment, Hui suggested the command,
Q$<x>$ := PolynomialRing(Rationals()); E00 := EllipticCurve(x^3-432*94^2); Q00 := Generators(E00); Q00;
on the Magma online calculator. We then find,
(62511752209/2480625 : -15629405421521177/3906984375 : 1)
Substituting this onto the transformation, we get,
$$x = \frac{15642626656646177}{590736058375050}\\
y = \frac{-15616184186396177}{590736058375050}$$
thus Magma confirms the solution given in the original question is indeed the smallest.
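For readers without Magma access, the claimed identity itself can be double-checked with exact integer arithmetic; here is a minimal Python sketch (the variable names are my own):
# exact check of the claimed solution to x^3 + y^3 = 94 z^3
x = 15642626656646177
y = -15616184186396177
z = 590736058375050
assert x**3 + y**3 == 94 * z**3
print("x^3 + y^3 = 94*z^3 holds for the claimed values")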
P.S. Incidentally, the difference of the numerators factors nicely as
$$15642626656646177-15616184186396177 =2^4\cdot 3^8\cdot5^6 \cdot 7^3\cdot47$$ | 146,033 |
Confidential Meeting is an event that takes place from Nov 07 - Nov 13, 2021 and may cause room availability issues or hotel rates to increase. Hotels in Toronto are listed below. Search for cheap and discount hotel rates near the Confidential Meeting in Toronto, ON for individual or group travel. The Toronto hotels and motels below are a general city directory and are not necessarily the recommended hotels for Confidential Meeting.
* HotelPlanner is neither endorsed by nor affiliated with the Confidential Meeting | 135,492 |
TITLE: Example of monoid $M$ such that $\operatorname{RAT}(M) \not\subseteq \operatorname{REC}(M)$
QUESTION [3 upvotes]: Let $M$ be a monoid. The family of rational sets $\operatorname{RAT}(M)$ is defined as the smallest set containing the finite subsets and closed under union, concatenation, and the star operation. The family of recognizable subsets $\operatorname{REC}(M)$ is defined as the family of subsets $L \subseteq M$ such that $L = \varphi^{-1}(\varphi(L))$ for some monoid morphism $\varphi : M \to N$ with $N$ finite.
Now, McKnight's Theorem says that $M$ is finitely generated iff $\operatorname{REC}(M) \subseteq \operatorname{RAT}(M)$. And by the famous theorem of Kleene, if $M = \Sigma^{\ast}$ for some finite set $\Sigma$, then $\operatorname{REC}(M) = \operatorname{RAT}(M)$. But what about the inclusion $\operatorname{RAT}(M) \subseteq \operatorname{REC}(M)$?
Do you know any example of a monoid such that $\operatorname{RAT}(M) \not\subseteq \operatorname{REC}(M)$?
REPLY [1 votes]: The simplest example is probably $\mathbb{Z}$ (written additively). The set $\{0\}$ is rational in $\mathbb{Z}$, being finite, but it is not recognizable: for any monoid morphism $\varphi : \mathbb{Z} \to N$ with $N$ finite, the image $\varphi(\mathbb{Z})$ is a finite cyclic group, say of order $k$, so $\varphi^{-1}(\varphi(\{0\}))$ contains the infinite subgroup $k\mathbb{Z}$ and therefore cannot equal $\{0\}$. | 67,740 |
On December 14, 1911, mankind conquered (or at least planted a flag atop) the South Pole, one of the world’s last truly wild places. Norwegian explorer Roald Amundsen won the deadly race across icy Antarctica to the Pole, thanks to the Inuit technology (furs, sleds, and dogs) he’d adopted. His rival, British naval officer Robert Scott, never returned to civilization.
Today, about 40,000 tourists visit Antarctica annually, with perhaps 300 travelers (in addition to some 250 scientists and support staff) making it to the South Pole every year. There, they’ll experience the bitter cold and blinding white landscape that tempted Amundsen and Scott to risk their lives, as well as more comfortable accommodations at the international research station, which comes complete with a gift shop.
The 1,250,000-square-kilometer (482,628-square-mile) swath of Antarctica claimed by Chile has only one settlement, Villa Las Estrellas, as well as six scientific bases that can cater to tourists with advance notice. Though only a fraction of the continent’s travelers come through Chile, that number is growing every year.
Tourism to the Antarctic actually began here, in Chile, in the early 1950s. The government chartered a naval transportation ship for 500 paying passengers to the South Shetland Islands, technically the first Antarctic cruise. It wasn’t until the 1970s, however, that mass tourism truly arrived, with most tourists originating in New Zealand and Australia. Though Chile (as well as Argentina) was suffering serious political and economic upheaval during that era, in recent years South America has begun to catch up, with expeditions now leaving from Ushuaia, Argentina, and Punta Arenas, Chile.
Tourism to the Antarctic from Chile is “skyrocketing,” according to the embassy. The Chilean government just invested US$8 million to open yet another port town to Antarctic tourism, tiny Puerto Williams on icy, isolated Navarino Island. One draw is the Antarctic Ice Marathon; more libertine travelers might consider New Year’s Eve instead, when the Antarctic’s privileged position at the intersection of every time zone offers the opportunity for 24 full hours of kisses and champagne.
Planning a trip? Browse Viator’s Chile tours and things to do, Chile attractions, and Chile travel recommendations. Or book a private tour guide in Chile for a customized tour!
December 21, 2011 by Viator
| 194,951 |
London:
British Prime Minister Boris Johnson has submitted his response to a police questionnaire relating to Downing Street parties that may have breached coronavirus regulations, his office said Friday.
Police are investigating claims Johnson attended gatherings that may have violated Britain’s strict distancing and virus prevention rules.
Public outcry over the so-called “partygate” scandal has left Johnson fighting for political survival. Several MPs from his Conservative party have publicly called for his resignation, although he denies any wrongdoing.
Police confirmed last week that they would be sending “formal questionnaires to more than 50 people” to ask about their activities on the dates of at least 12 gatherings in Downing Street over 2020 and 2021.
The document “has formal legal status and must be answered truthfully” within seven days, according to the police.
Johnson faces a fine unless he can explain why he was at events held during coronavirus restrictions.
Johnson has already apologised in parliament for a series of gatherings identified in an official inquiry led by senior civil servant Sue Gray, but vowed to fight on in office.
Gray admitted her 12-page report was limited in scope after London’s Metropolitan police force launched its own investigation into 12 parties held in Downing Street over the past two years.
(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.) | 132,014 |
Show Your Bear Pride
Buy FanCloth Online
Cheerleaders are now selling new FanCloth items. Choices include long- and short-sleeved shirts, shorts, caps, leggings, hoodies, and more. This year we have added an option to order online – you can choose a cheerleader to get the credit for the sale. Just click the Order Here link below to place your order!
Just a reminder – all articles removed from our page during updates are available in the Archives link at the bottom of this page. | 131,100 |
Appenzeller Sennenhunde Breed Guide
Breed Group:
Herding Dogs
Middle Age: 5 years
Geriatric Age: 10 years
Life Span: 10 to 12 years
Appenzeller Sennenhunde Background Info and History
The Appenzeller Sennenhunde, also known as the Appenzeller Mountain Dog or Appenzeller Cattle Dog, is a working dog breed originating in Switzerland. This extremely rare breed was created as a working dog and continues to serve in this capacity till this day.
The Appenzeller is a medium to large sized breed, with muscular features, an attractive shorthaired coat, and a high level of innate intelligence. This breed has been traditionally used as a herder, draft dog, and watchdog used to guard livestock and farmsteads in Switzerland.
Temperament and Personality of Appenzeller Sennenhundes
Watchful, intelligent, wary, protective, and loyal are all words used to describe the Appenzeller. This breed is used to working, and they possess a high level of energy that makes them perfect for long days on a farm.
Appenzeller Sennenhunde Training Tips
The Appenzeller must be trained and socialized from a young age. They are extremely intelligent and learn new commands and tasks with ease. However, this breed is known to be mistrustful of strangers so they must be socialized from a very young age so as to curb misbehavior.
Exercise Needs of Appenzeller Sennenhundes
The Appenzeller is an extremely high energy breed. Their ideal environment is on a farmstead or in a rural area, guarding livestock and property. If these are not available, other outlets for their boundless energy must be found. A long daily run, vigorous exercise, or an extended play session will be the minimum for this breed.
Appenzeller Sennenhunde Life Span
The Appenzeller is a very healthy medium to large sized dog breed that lives on average 12-13 years.
Appenzeller Sennenhunde Breed Popularity
The Appenzeller is an extremely rare and hard to find dog breed. The rarest of all four Swiss dog breeds, the Appenzeller is not AKC recognized. However, they are found more commonly in their native country of Switzerland, where they are still used on farmsteads.
Feeding Requirements of Appenzeller Sennenhundes
Feeding requirements for this breed will vary greatly, due to their high energy output and individual requirements. They will thrive on a high-quality food that contains a good balance of proteins and fats. Avoid any grain-based fillers, such as corn, soy, or wheat. An average Appenzeller weighs between 50 and 70 lbs, so expect to feed them between 2 ½ and 3 ½ cups of dry food a day, split into two meals.
Grooming An Appenzeller Sennenhunde
A working Appenzeller will need to be brushed multiple times per week in order to keep their coat free of debris. During coat grooming, be sure to check your pup’s eyes and ears for any signs of damage or infection. A working Appenzeller should keep their nails trimmed through regular running, but be sure to check and trim your dog’s nails regularly as well.
Are Appenzeller Sennenhundes Good With Kids?
The Appenzeller is known to be friendly with kids, however, play should be supervised and your Appenzeller should be socialized with children from a young age. This is because this breed has been bred as heelers, and are known to nip at the heels of passing children during play.
Health Problems of Appenzeller Sennenhundes
Because of the rarity of the breed, there have been no specific health studies on the Appenzeller to determine high risk for specific ailments. However, some lines of Appenzellers are known to suffer from hip dysplasia and elbow dysplasia, both common ailments in medium to large sized breeds.
Hip Dysplasia
Hip dysplasia in dogs is a condition that stems from a loose fit between the femur and pelvis. This loose fit causes the cartilage that pads these two bones to wear down unevenly.
Although hip dysplasia can be diagnosed in younger dogs through a physical examination, the only notable effect is a laxity in the hip assembly. However, later in life uneven wear of the cartilage leads to painful arthritis, bone spurs, and eventually canine lameness.
Elbow Dysplasia
Similar to hip dysplasia, elbow dysplasia in dogs is caused by an improper fit in the forelegs at the elbow joint, between the elbow and wrist (radius and ulna). The improper fit causes the cartilage that pads these bones to wear down unevenly, causing pain and lameness. Unlike hip dysplasia, elbow dysplasia generally presents at a very young age during the puppy growth period.
Signs of elbow dysplasia are an uneven gait, slow recovery from lameness in the forelegs following exercise, and symptoms that get worse over time. The exact cause of elbow dysplasia is unknown; however, it is believed to be a combination of genetics, diet, trauma, and excessively vigorous exercise at a young age.
Other Resources
National Breed Website: Appenzell Mountain Dog Club of America | 282,645 |
Hua Hin House For Sale:
3 Bedroom Pool Villa for Sale @ Baan MioSunshine mountain Hua Hin Soi 70
4,800,000 Baht
Property ID: #11579
- Type:
Villa
- Pool:
Private
- Living Area:
130 SQM
- Land Size:
504 SQM
- Bedrooms:
4
- Bathrooms:
3
- Parking:
Yes
- Security:
Yes
Villa for Sale: Overview
This bungalow is located at Sunshine Mountain, Hua Hin Soi 70, just 6 km west of Hua Hin town.
The house is fully furnished, with a living room, 4 bedrooms, 3 bathrooms, and a good-sized kitchen area, plus a private 4 x 6 meter swimming pool.
The land plot is a very good size and has a tropical garden. This is a great house for the price. The common area management fee is only 3,500 Baht per year.
Location
Nearby Amenities
- Pea Mai Tuesday market
- Artist village
- Hua Hin Night Market
- Khao Noi
Project Overview: Baan MioSunshine mountain Hua Hin Soi 70
Sunshine Mountain is located at Hua Hin Soi 70, just 6 km west of Hua Hin town.
The common area management fee is only 3,500 Baht per year.
- Swimming pool
- Parking space
- Garden
- Good Location | 58,079 |
\begin{document}
\begin{frontmatter}
\title{Skew Braces and Hopf-Galois Structures\\ of Heisenberg Type}
\author{Kayvan Nejabati Zenouz\footnote{Email: [email protected]}}
\address{School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford, OX33 1HX}
\begin{abstract}
We classify all skew braces of Heisenberg type for a prime number $ p>3 $. Furthermore, we determine the automorphism group of each one of these skew braces (as well as their socle and annihilator). Hence, by utilising a link between skew braces and Hopf-Galois theory, we can determine all Hopf-Galois structures of Heisenberg type on Galois field extensions of fields of degree $ p^{3} $.
\end{abstract}
\begin{keyword}
Skew braces; Hopf-Galois structures; Heisenberg group; field extensions; the Yang-Baxter equation
\end{keyword}
\end{frontmatter}
\tableofcontents{}
\section{Introduction}\label{S1}
Braces were introduced by W. Rump \cite{MR2278047}, as a generalisation of radical rings, in order to study the non-degenerate involutive set-theoretic solutions of the quantum Yang-Baxter equation. He also obtained a correspondence between these solutions and braces. Later, through the efforts of D. Bachiller, F. Ced\'o, E. Jespers, and J. Okni\'nski \cite{MR3177933, MR3527540} the classification of these solutions was reduced to that of braces, and they provided many new classes of these solutions. Recently, skew braces were introduced by L. Guarnieri and L. Vendramin \cite{MR3647970} in order to study the non-degenerate (not necessarily involutive) set-theoretic solutions, and in a subsequent paper the connection of skew braces to ring theory and Hopf-Galois theory was studied by N. Byott, A. Smoktunowicz, and L. Vendramin \cite{MR3763907}.
On the other hand, in 1969 S. Chase and M. Sweedler \cite{MR0260724} introduced the concept of Hopf-Galois extensions in order to generalise the classical Galois theory. Later, Hopf-Galois theory for separable extensions of fields was studied by C. Greither and B. Pareigis \cite{MR878476}. They showed how to recast the problem of classifying all Hopf-Galois structures on a finite separable extension of fields as a problem in group theory. Many advances relating to the classification of Hopf-Galois structures were made by A. Alabadi, N. Byott \cite{MR1402555, MR2030805, MR2363137, MR3715201}, S. Carnahan, L. Childs \cite{MR1704676}, and T. Kohl \cite{MR1644203}. Recently, some properties of Hopf-Galois structures on a separable field extension of degree $ p^{n} $ were investigated by T. Crespo and M. Salguero \cite{CS}.
Recently, a fruitful discovery, which was initially noticed by D. Bachiller, revealed a connection between Hopf-Galois theory and skew braces, which linked the classification of Hopf-Galois structures to that of skew braces.
Despite many efforts, both the classification of skew braces and that of Hopf-Galois structures remain wide open. For example, in \cite{MR2298848} cyclic braces were classified, and in \cite{MR3320237} braces of order $ p^{3} $ were classified. Recently, in \cite{doi:10.1142/S0219498819500336} a method for describing skew braces with non-trivial annihilator was given, and braces of order $ p^{2}q $ have been studied in \cite{CD}. The classification and the understanding of the structure of skew braces have become more important as skew braces find connections to other areas, for example to concepts in ring theory, see \cite{KSV}, to quantum information \cite{SS}, and to number theory. Recently, a list of open problems on skew braces was posed by L. Vendramin \cite{LV}.
To this end, in the author's PhD thesis \cite{KNZ}, an explicit and complete classification of skew braces and Hopf-Galois structures of order $ p^{3} $ for a prime number $ p $ was provided using methods of Hopf-Galois theory. In particular, we independently reproved the results of \cite{MR3320237} on braces of order $ p^{3} $. In this paper, as our main results, we provide a classification for skew braces and Hopf-Galois structures of Heisenberg type for a prime $ p $, which we have chosen to be greater than $ 3 $ for simplicity. However, our methods can be adapted for $ p=2,3 $ as well ($ p=2,3 $ has been treated in the author's PhD thesis). We classify these skew braces and Hopf-Galois structures using some methods of N. Byott \cite{MR2030805} and by conducting a deep study into the holomorph of the Heisenberg group.
Furthermore, we determine the automorphism group of each skew brace that we classify, and as a result we are able to determine the Hopf-Galois structures of Heisenberg type on Galois field extensions of degree $ p^{3} $. In our subsequent two papers we aim to provide our findings relating to the classification of skew braces and Hopf-Galois structures of Extraspecial type (of the type $ C_{p^{2}}\rtimes C_{p} $) in one paper, and skew braces and Hopf-Galois structures of type $ C_{p}^{3} $ in the second paper. These results are currently in the author's PhD thesis \cite{KNZ} Sections $ 4.2 $, $ 4.3 $, and $ 4.5 $.
We shall begin by providing relevant background information and stating a summary of our main results in the rest of this section. The subsequent sections are devoted to the proofs of our results, and at the end of Section \ref{S4} there is a list of all skew braces classified in this paper. We also determine the \textit{socle} and \textit{annihilator} of these skew braces and show that there are non-trivial skew braces of Heisenberg type with trivial socle and annihilator, so these cannot be described by methods of \cite{doi:10.1142/S0219498819500336}.
\subsection{Background}
A \textit{skew (left) brace} \cite[cf.][]{MR3763907} is a triple $ \left(B,\oplus,\odot\right) $ which consists of a set $ B $ together with two operations $ \oplus $ and $ \odot $ such that $ (B,\oplus) $ and $ (B,\odot) $ are groups (they need not be abelian), and the two operations are related by the \textit{skew brace property}:
\[a\odot\left(b\oplus c\right)= \left(a\odot b\right)\ominus a\oplus \left(a\odot c\right) \ \text{for every} \ a,b,c \in B, \]
where $ \ominus a $ is the inverse of $ a $ with respect to the operation $ \oplus $. The group $ \left(B,\oplus\right) $ is known as the \textit{additive group} of the skew brace $ \left(B,\oplus,\odot\right) $ and $ \left(B,\odot\right) $ as the \textit{multiplicative group}. A morphism, or a map, between two skew braces \[ \varphi : \left(B_{1},\oplus_{1},\odot_{1}\right) \longrightarrow \left(B_{2},\oplus_{2},\odot_{2}\right) \] is a map of sets $ \varphi: B_{1} \longrightarrow B_{2} $ such that the maps
\[\varphi : \left(B_{1},\oplus_{1}\right) \longrightarrow (B_{2},\oplus_{2}) \ \text{and} \ \varphi : \left(B_{1},\odot_{1}\right) \longrightarrow \left(B_{2},\odot_{2}\right) \]
are group homomorphisms; the map $ \varphi $ is an isomorphism if it is a bijection.
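For example (a standard observation, recorded here for orientation), every group $ \left(B,\cdot\right) $ gives rise to a skew brace, the so-called trivial skew brace, by taking $ \oplus=\odot=\cdot $: in this case both sides of the skew brace property reduce to $ a\cdot b\cdot c $, since $ \left(a\cdot b\right)\cdot a^{-1}\cdot\left(a\cdot c\right)=a\cdot b\cdot c $ for all $ a,b,c \in B $.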
We call a skew brace $ \left(B,\oplus,\odot\right) $ such that $ \left(B,\oplus\right) \cong N $ and $ \left(B,\odot\right) \cong G $ a $ G $-skew brace of \textbf{type} $ N $; we refer to the \textit{isomorphism type} of $ \left(B,\odot\right) $ as the \textbf{structure} of the skew brace $ \left(B,\oplus,\odot\right) $. If $ \oplus $ is abelian, respectively nonabelian, we call $ \left(B,\oplus,\odot\right) $ a skew brace of abelian, respectively nonabelian, type. We note that a skew brace of abelian type coincides with the object initially defined by W. Rump, called a brace (also known as a classical brace). Skew braces provide non-degenerate (not necessarily involutive) set-theoretic solutions of the quantum Yang-Baxter equation. The paper of A. Smoktunowicz and L. Vendramin (also N. Byott) \cite{MR3763907} provides an excellent introduction to skew braces and their connection to noncommutative algebra, mathematical physics, and other areas.
Next we recall some definitions and facts relating to Hopf-Galois structures and their connection to skew braces. For $ L/K $ a finite Galois extension of fields with Galois group $ G $, a \textit{Hopf-Galois structure} on $ L/K $ consists of a finite-dimensional cocommutative $ K $-\textit{Hopf algebra} $ H $, with an action on $ L $, which makes $ L $ into an $ H $-\textit{Galois extension}, i.e., $ H $ acts on $ L $ in such a way that the $ K $-module homomorphism
\begin{align*}
j:L\otimes_{K} H \longrightarrow \mathrm{End}_{K}(L) \ \text{given by} \ j(x\otimes y)(z)=xy(z) \ \text{for} \ x,z \in L, y\in H
\end{align*}
is an isomorphism. For example, the group algebra $ K[G] $ endows $ L/K $ with the \textit{classical Hopf-Galois structure}. However, in general there can be more than one Hopf-Galois structure on $ L/K $. Hopf-Galois structures have applications in Galois module theory; for example, when studying the freeness of rings of integers of extensions of global or local fields as modules (e.g., see \cite{MR1879021}). In 1987, the classification of Hopf-Galois structures was reduced to a group theoretic problem by C. Greither and B. Pareigis \cite{MR878476} via the following theorem.
\begin{theorem}[C. Greither and B. Pareigis]\label{T1}
Hopf-Galois structures on $ L/K $ correspond bijectively to regular subgroups $ N \subseteq \mathrm{Perm}(G) $ which are normalised by the image of $ G $, as left translations, inside $ \mathrm{Perm}(G) $.
\end{theorem}
In particular, every $ K $-Hopf algebra $ H $ which endows $ L/K $ with a Hopf-Galois structure is of the form $ L[N]^{G} $ for some $ N \subseteq \mathrm{Perm}(G) $ a regular subgroup normalised by the image of $ G $, as left translations, inside $ \mathrm{Perm}(G) $. Here $ G $ acts on the group algebra $ L[N] $ through its action on $ L $ as field automorphism and on $ N $ by conjugation inside $ \mathrm{Perm}(G) $. Subsequently, the \textit{isomorphism type} of $ N $ became known as the \textbf{type} of the Hopf-Galois structure, and we shall refer to the cardinality of $ N $, which is the same as the degree of the extension $ L/K $, as the \textbf{order} of the Hopf-Galois structure.
The connection between Hopf-Galois structures and braces was initially noticed by D. Bachiller, later this connection was made more explicit by N. Byott and L. Vendramin in \cite{MR3763907}. For example, one can prove (see Section \ref{S2}) that given a $ G $-skew brace $ \left(B,\oplus,\odot\right) $, the map
\begin{align*}
d: \left(B,\oplus\right)& \longmapsto \mathrm{Perm}\left(B,\odot\right)\\
a&\longmapsto \left(d_{a}: b\longmapsto a\oplus b\right) \ \text{for all} \ a,b \in B
\end{align*}
is a regular embedding, i.e., $ d $ is an injective map whose image $ \Ima d $ is a regular subgroup. In particular, $ \Ima d $ is normalised by the image of $ \left(B,\odot\right) $ in $ \mathrm{Perm}\left(B,\odot\right) $. This together with Theorem \ref{T1} enables us to obtain a Hopf-Galois structure on $ L/K $. Conversely, one always obtains a skew brace from a Hopf-Galois structure. However, there are more Hopf-Galois structures than skew braces, in particular skew braces parametrise Hopf-Galois structures.
Finally, we remark that working with $ \mathrm{Perm}(G) $ can often be difficult, as it becomes large rapidly as the size of $ G $ increases; in order to overcome this, N. Byott \cite{MR1402555} proved the following statement -- here L. Childs' reformulation, cf. \cite[p.~57, (7.3) Theorem (Byott)]{MR1767499}, is given.
\begin{theorem}[N. Byott]\label{THG2}
Let $ N $ be a group. Then there is a bijection between the sets \[\mathcal{N}\stackrel{\mathrm{def}}{=} \left\{\alpha:N\hookrightarrow \mathrm{Perm}(G)\mid \alpha(N) \ \mathrm{is \ regular\ on}\ G\right\} \ \text{and} \]
\[\mathcal{G}\stackrel{\mathrm{def}}{=}\left\{\beta:G \hookrightarrow \mathrm{Perm}(N)\mid \beta(G) \ \mathrm{is\ regular\ on} \ N \right\}.\]
Under this bijection, if $ \alpha,\alpha'\in \mathcal{N} $ correspond to $ \beta,\beta' \in \mathcal{G} $, then $ \alpha(N)=\alpha'(N) $ if and only if $ \beta(G) $ and $ \beta'(G) $ are conjugate by an element of $ \mathrm{Aut}(N) $. Furthermore, $ \alpha(N) $ is normalised by the left translation if and only if $ \beta(G) $ is contained in $ \mathrm{Hol}(N) $.
\end{theorem}
Using Theorem \ref{THG2}, N. Byott shows that if $ e'(G,N) $ is the number of regular subgroups of $ \mathrm{Hol}(N) $ isomorphic to $ G $, then the number of Hopf-Galois structures on $ L/K $ of type $ N $ is given by
\begin{align}\label{E106}
e(G,N)=\frac{\left\lvert\mathrm{Aut}(G)\right\rvert}{\left\lvert\mathrm{Aut}(N)\right\rvert}e'(G,N).
\end{align}
In the author's thesis \cite{KNZ} we used formula (\ref{E106}) to find the number of Hopf-Galois structures, but in the current paper we parametrise Hopf-Galois structures along skew braces and count them using the orbit stabiliser theorem (we obtain the same results, but in the process we additionally find the automorphism groups of our skew braces too).
\subsection{Summary of the main results}
We give a summary of our main results in this subsection. For the rest of this paper we shall assume $ p>3 $ is a prime number. We shall denote by $ C_{p^{r}} $ the cyclic group of order $ p^{r} $ for any natural number $ r $.
Recall there are two nonabelian groups of order $ p^{3} $: the exponent $ p $ nonabelian group of order $ p^{3} $, or otherwise known as the Heisenberg group,
\[ M_{1} \stackrel{\mathrm{def}}{=}\left\langle \rho,\sigma, \tau \mid \rho^{p}=\sigma^{p}=\tau^{p}=1, \ \sigma\rho=\rho\sigma , \ \tau\rho=\rho\tau, \ \tau\sigma=\rho\sigma\tau \right\rangle \cong C_{p}^{2}\rtimes C_{p}, \] and the exponent $ p^{2} $ nonabelian group of order $ p^{3} $, or otherwise known as the Extraspecial group of order $ p^{3} $,
\[ M_{2} \stackrel{\mathrm{def}}{=}\left\langle \sigma, \tau \mid \sigma^{p^{2}}=\tau^{p}=1, \ \tau\sigma=\sigma^{p+1}\tau\right\rangle\cong C_{p^{2}}\rtimes C_{p}. \]
In this paper we are concerned with $ M_{1} $. We fix as our type the group $ M_{1} $ and find all skew braces and Hopf-Galois structures of type $ M_{1} $. The main results of this paper can be summarised as follows.
\begin{theorem}\label{T2}
The skew braces of $ M_{1} $ type are precisely \[ 2p^{2}-p+3 \] $ M_{1} $-braces and \[ 2p+1 \] $ C_{p}^{3} $-braces.
\end{theorem}
\begin{proof}
Follows from adding the numbers found in Lemmas \ref{L16}, \ref{L17}, and \ref{L18} of Section \ref{S4}; see Proposition \ref{P5}.
\end{proof}
\begin{theorem}\label{T3}
Let $ L/K $ be an $ M_{1} $ extension of fields. Then there are \[(2p^{3}-3p+1)p^{2}\] Hopf-Galois structures of $ M_{1} $ type. Let $ L/K $ be a $ C_{p}^{3} $ extension of fields. Then there are \[(p^{3}-1)(p^{2}+p-1)p^{2}\] Hopf-Galois structures of $ M_{1} $ type.
\end{theorem}
\begin{proof}
Follows from adding the numbers found in Lemmas \ref{L16H}, \ref{L17H}, and \ref{L18} of Section \ref{S4}; see Proposition \ref{P5}.
\end{proof}
\section{Preliminaries}\label{S2}
In this section we provide some preliminaries and describe our strategy for classifying skew braces and Hopf-Galois structures. Unless otherwise stated we shall always assume $ G $ and $ N $ are finite groups.
\subsection{Skew braces and Hopf-Galois structures}\label{SB1}
The following proposition provides an explicit connection between Hopf-Galois structures and skew braces (where the ideas of the proof are similar to \cite[Proposition A.3]{MR3763907}).
\begin{proposition}[Skew braces and Hopf-Galois structures correspondence]\label{P1}
There exists a bijective correspondence between isomorphism classes of $ G $-skew braces and classes of Hopf-Galois structures on an extension $ L/K $ with Galois group $ G $, where we identify two Hopf algebras $ L[N_{1}]^{G} $ and $ L[N_{2}]^{G} $ giving Hopf-Galois structures (as in Theorem \ref{T1}) on $ L/K $ if $ N_{2}=\alpha N_{1}\alpha^{-1} $ for some $ \alpha \in \mathrm{Aut}(G) $.
\end{proposition}
\begin{proof}
Let $ \left(B,\oplus, \odot\right) $ be a $ G $-skew brace, i.e., $ \left(B, \odot\right)\cong G $; we may assume $ \left(B, \odot\right)= G $. Then the map
\begin{align*}
d: \left(B,\oplus\right)& \longrightarrow \mathrm{Perm}\left(B,\odot\right)\\
a&\longmapsto \left(d_{a}: b\longmapsto a\oplus b\right) \ \text{for all} \ a,b \in B
\end{align*}
is a regular embedding. Now, for any $ a \in \left(B,\oplus\right) $ and $ b,c \in \left(B,\odot\right) $, using the skew brace property, we have
\[b\odot \left(d_{a}\left(b^{-1}\odot c\right)\right)=b\odot \left(a\oplus\left(b^{-1}\odot c\right)\right)= \left(\left(b\odot a\right) \ominus b\right) \oplus c=d_{\left(b\odot a\right) \ominus b}(c), \]
where $ b^{-1} $ is the inverse of $ b $ with respect to $ \odot $. This shows that the image of $ \left(B,\oplus\right) $ is normalised by the image of $ \left(B, \odot\right) $ inside $ \mathrm{Perm}\left(B, \odot\right) $ as left translations. We also find an action of $ \left(B, \odot\right) $ on $ \left(B, \oplus\right) $ by $ b\cdot a= \left(b\odot a\right) \ominus b $ for $ b \in \left(B, \odot\right) $ and $ a \in \left(B, \oplus\right) $. Now for \[ \alpha: \left(B,\oplus_{1},\odot\right)\longrightarrow \left(B,\oplus_{2},\odot\right) \] an isomorphism of skew braces, we have a commutative diagram
\[
\begin{tikzcd}[row sep=2.5em , column sep=2.5em]
\left(B,\oplus_{1}\right) \arrow[hook]{r}{d_{1}} \arrow{d}{\alpha}[swap]{\wr} & \mathrm{Perm}\left(B,\odot\right) \arrow[]{d}{C_{\alpha}}[swap]{\wr} \\ \left(B,\oplus_{2}\right) \arrow[hook]{r}{d_{2}}[swap]{} & \mathrm{Perm}\left(B,\odot\right) ,
\end{tikzcd} \]
where $ C_{\alpha} $ is conjugation by $ \alpha \in \mathrm{Aut}\left(B,\odot\right) $ inside $ \mathrm{Perm}\left(B,\odot\right) $. Furthermore, if we fix a Galois extension of fields $ L/K $ with Galois group $ \left(B, \odot\right) $, then $ L[\left(B, \oplus\right)]^{\left(B, \odot\right)} $ endows $ L/K $ with a Hopf-Galois structure corresponding to the skew brace $ \left(B,\oplus, \odot\right) $ and when two skew braces with the same multiplication group are isomorphic then the corresponding Hopf-Galois structures can be identified.
Conversely, suppose we have a Hopf-Galois structure on $ L/K $ which can always be given by $ L[N]^{G} $ for some regular subgroup $ N \subseteq \mathrm{Perm}(G) $ which is normalised by the image of $ G $ as left translations inside $ \mathrm{Perm}(G) $. The fact that $ N $ is a regular subgroup implies that the map
\begin{align*}
\mathrm{Perm}(G)&\longrightarrow G \\
\eta&\longmapsto \eta\cdot 1_{G}.
\end{align*}
induces a bijection $ \phi:N\longrightarrow G $ as subgroups of $ \mathrm{Perm}(G) $. Now we can define a skew brace $ B $ by setting $ (B,\odot)\stackrel{\mathrm{def}}{=}G $, considered as a subgroup of $ \mathrm{Perm}(G) $ via the left translations, and defining
\[g_{1}\oplus g_{2} \stackrel{\mathrm{def}}{=}\phi \left(\phi^{-1}(g_{1})\phi^{-1}(g_{2})\right) \ \text{for} \ g_{1},g_{2} \in G. \]
The fact that $ N \subseteq \mathrm{Perm}(G) $ is normalised by $ G $ implies that for all $ g \in G $ and $ n \in N $ we have $ gn=f_{g,n}g $ for some $ f_{g,n} \in N $. Therefore, for $ g_{1}=\phi(n_{1}),g_{2}=\phi(n_{2}),g_{3}=\phi(n_{3}) \in G $, we aim to show
\[ g_{1}\odot \left(g_{2}\oplus g_{3}\right)= (g_{1}\odot g_{2})\ominus g_{1}\oplus( g_{1}\odot g_{3}).\]
By definitions above we have
\[g_{1}\odot \left(g_{2}\oplus g_{3}\right)=\phi(n_{1})\odot \left(\phi(n_{2})\oplus \phi(n_{3})\right)= \phi(n_{1})\phi(n_{2}n_{3}).\]
Now consider the element $ \phi(n_{1})n_{2}n_{3} \in \mathrm{Perm}(G) $. Using the relation $ gn=f_{g,n}g $, we have
\[\phi(n_{1})n_{2}n_{3}=f_{\phi(n_{1}),n_{2}n_{3}}\phi(n_{1}) \]
for some $ f_{\phi(n_{1}),n_{2}n_{3}}\in N $. Now applying $ \phi $ to both sides we get the relation
\[\phi(n_{1})\phi(n_{2}n_{3})=f_{\phi(n_{1}),n_{2}n_{3}}(\phi(n_{1}))\]
in $ G $. Note $ f_{\phi(n_{1}),n_{2}n_{3}}(\phi(n_{1}))=\phi\left(f_{\phi(n_{1}), n_{2}n_{3}}n_{1}\right) $ in $ G $. Therefore, we find
\begin{align*}
g_{1}\odot \left(g_{2}\oplus g_{3}\right)&=\phi\left(f_{\phi(n_{1}), n_{2}n_{3}}n_{1}\right)=\phi\left(f_{\phi(n_{1}), n_{2}}f_{\phi(n_{1}), n_{3}}n_{1}\right)\\
&=\phi\left(\phi^{-1}\phi \left(f_{\phi(n_{1}), n_{2}}n_{1}\right)n_{1}^{-1}\phi^{-1}\phi\left(f_{\phi(n_{1}), n_{3}}n_{1}\right)\right)\\
&=\phi\left(\phi^{-1}\left(\phi(n_{1})\phi(n_{2})\right)n_{1}^{-1}\phi^{-1}\left(\phi(n_{1})\phi(n_{3})\right)\right)\\
&=\phi\left(\phi^{-1}\left(g_{1}g_{2}\right)\left(\phi^{-1}(g_{1})\right)^{-1}\phi^{-1}\left(g_{1}g_{3}\right)\right)\\
&=(g_{1}\odot g_{2})\ominus g_{1}\oplus( g_{1}\odot g_{3});
\end{align*}
thus we have a skew brace $ (B,\oplus, \odot) $ which is a $ G $-skew brace of type $ N $. In particular, if $ N_{1} \subseteq \mathrm{Perm}(G) $ is a regular subgroup whose image is normalised by $ G $ and $ \alpha \in \mathrm{Aut}(G) $, then $ N_{2}\stackrel{\mathrm{def}}{=} \alpha N_{1}\alpha^{-1} $ is a regular subgroup whose image is normalised by $ G $ and the skew braces corresponding to $ N_{1} $ and $ N_{2} $ are isomorphic by $ \alpha $.
\end{proof}
\begin{remark}
Note that in fact Proposition \ref{P1} above is implied by Theorem \ref{THG2} and \cite[Proposition A.3]{MR3763907}. We shall state \cite[Proposition A.3]{MR3763907} later (see Proposition \ref{P2}). However, we decided to include the calculations for a direct proof of Proposition \ref{P1} for completeness, which leads to an explicit relationship between the Hopf-Galois structures and skew braces. The question concerning this explicit relationship was first asked of the author by Prof Agata Smoktunowicz. The answer can be reached by unravelling Theorem \ref{THG2} and \cite[Proposition A.3]{MR3763907}, which is what has been done in Proposition \ref{P1}.
\end{remark}
The above proposition also helps us to understand the automorphism groups of skew braces.
\begin{corollary}[Automorphism groups of skew braces]
Let $ \left(B,\oplus,\odot\right) $ be a skew brace. Then there exists a natural identification
\[ \mathrm{Aut}_{\mathcal{B}r}\left(B,\oplus,\odot\right)\cong \left\{ \alpha \in \mathrm{Aut}\left(B,\odot\right)\mid \alpha \left(\Ima d\right) \alpha ^{-1} \subseteq \Ima d \right\}. \]
\end{corollary}
\begin{proof}
Note that if $ \left(B,\oplus,\odot\right) $ is a skew brace and \[ \alpha: \left(B,\oplus,\odot\right)\longrightarrow \left(B,\oplus,\odot\right) \] an automorphism of skew braces, we have a commutative diagram
\[
\begin{tikzcd}[row sep=2.5em , column sep=2.5em]
\left(B,\oplus\right) \arrow[hook]{r}{d} \arrow{d}{\alpha}[swap]{\wr} & \mathrm{Perm}\left(B,\odot\right) \arrow[]{d}{C_{\alpha}}[swap]{\wr} \\ \left(B,\oplus\right) \arrow[hook]{r}{d}[swap]{} & \mathrm{Perm}\left(B,\odot\right),
\end{tikzcd} \]
implying that $ \alpha\left(\Ima d \right)\alpha^{-1} \subseteq \Ima d $. On the other hand, if $ \alpha\left(\Ima d \right)\alpha^{-1} \subseteq \Ima d $ for some $ \alpha \in \mathrm{Aut}\left(B,\odot\right) $, then $ \alpha $ gives an automorphism of $ \left(B,\oplus,\odot\right) $. From this observation one can see that
\[ \mathrm{Aut}_{\mathcal{B}r}\left(B,\oplus,\odot\right)\cong \left\{ \alpha \in \mathrm{Aut}\left(B,\odot\right)\mid \alpha \left(\Ima d\right) \alpha ^{-1} \subseteq \Ima d \right\}. \]
\end{proof}
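For example (an observation recorded here for later reference), for the trivial skew brace with $ \oplus=\odot $ the image of $ d $ is the group of all left translations of $ \left(B,\odot\right) $, and one checks directly that $ \alpha d_{a}\alpha^{-1}=d_{\alpha(a)} $ for every $ \alpha \in \mathrm{Aut}\left(B,\odot\right) $ and $ a \in B $; hence in this case $ \mathrm{Aut}_{\mathcal{B}r}\left(B,\oplus,\odot\right)=\mathrm{Aut}\left(B,\odot\right) $.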
The next corollary shows how to obtain the number of Hopf-Galois structures using skew braces. Let $ e(G,N) $ be the number of Hopf-Galois structures of type $ N $ on the field extension $ L/K $ whose Galois group is $ G $. Denote by $ G_{N} $ the isomorphism class of a $ G $-skew brace of type $ N $. For later use we also set $ \widetilde{e}(G,N) $ to be the number of isomorphism classes of $ G $-skew braces of type $ N $.
\begin{corollary}[Number of Hopf-Galois structures parametrised by skew braces]
We have
\begin{align}\label{E1}
e(G,N)= \sum_{G_{N}}\dfrac{\left\lvert \mathrm{Aut}(G)\right\rvert} {\left\lvert\mathrm{Aut}_{\mathcal{B}r}(G_{N})\right\rvert}.
\end{align}
\end{corollary}
\begin{proof}
Fix $ G $ and let \[ \mathcal{S}(G,N) = \left\{ M \subseteq \mathrm{Perm}(G) \mid M \cong N \ \text{and} \ M\ \text{is regular normalised by}\ G \right\}. \]
Firstly, note that $ \mathrm{Aut}(G) $ acts on $ \mathcal{S}(G,N) $, induced by conjugation in $ \mathrm{Perm}(G) $, and a set of orbit representatives, say $ \left\{ N_{1},...,N_{s} \right\} $, gives a list of non-isomorphic skew braces according to Proposition \ref{P1}. Secondly, by Theorem \ref{T1} we find $ e(G,N)=\left\lvert \mathcal{S}(G,N) \right \rvert $, and so we have
\begin{align*}
e(G,N)=\sum_{i=1}^{s}\left\lvert \mathrm{Orb}(N_{i}) \right \rvert = \sum_{i=1}^{s} \dfrac{\left\lvert \mathrm{Aut}(G)\right\rvert}{\left\lvert\mathrm{Stab}(N_{i})\right\rvert}= \sum_{G_{N}}\dfrac{\left\lvert \mathrm{Aut}(G)\right\rvert} {\left\lvert\mathrm{Aut}_{\mathcal{B}r}(G_{N})\right\rvert}.
\end{align*}
\end{proof}
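As a small illustration of (\ref{E1}) (using the observation on the trivial skew brace recorded above), the trivial $ G $-skew brace of type $ G $, where $ \oplus=\odot $, has skew brace automorphism group equal to $ \mathrm{Aut}(G) $ and therefore contributes exactly $ 1 $ to $ e(G,G) $.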
Therefore, to find skew braces and Hopf-Galois structures of order $ n $, one can find the regular subgroups $ N \subseteq \mathrm{Perm}(G) $ for every group $ G $ of size $ n $. However, in many cases $ \mathrm{Perm}(G) $ can be too large to handle. Fortunately, by somehow reversing the role of $ G $ and $ N $, instead of studying the regular subgroups of $ \mathrm{Perm}(G) $, one can study regular subgroups of a smaller group, the \textit{holomorph} of $ N $: \[ \mathrm{Hol}(N)\stackrel{\mathrm{def}}{=}N\rtimes \mathrm{Aut}(N)=\left\{\eta\alpha \mid \eta \in N, \ \alpha \in \mathrm{Aut}(N) \right\},\]
and we can moreover organise these objects in a convenient way. These ideas in Hopf-Galois theory were initially developed by N. Byott \cite{MR1402555, MR2030805}.
For skew braces we observe the following. Let $ \left(B,\oplus,\odot\right) $ be a skew brace. Then the group $ \left(B,\odot\right) $ acts on $ \left(B,\oplus\right) $ by $ (a,b)\longmapsto a\odot b $, and we obtain a map
\begin{align*}
m: \left(B,\odot\right) &\longrightarrow \mathrm{Hol}\left(B,\oplus\right)\\
a&\longmapsto \left(m_{a} : b \longmapsto a\odot b\right)
\end{align*}
which is a regular embedding. To see this one needs to check that the map
\begin{align*}
\lambda_{a}: \left(B,\oplus\right) &\longrightarrow \left(B,\oplus\right)\\
b&\longmapsto \ominus a \oplus (a\odot b)
\end{align*}
is an automorphism, and that the map
\begin{align*}
\lambda: \left(B,\odot\right)&\longmapsto \mathrm{Aut}\left(B,\oplus\right)\\
a&\longmapsto \lambda_{a}
\end{align*}
is a group homomorphism. Then one has $ m_{a}=a\lambda_{a}\in \mathrm{Hol}\left(B,\oplus\right) $ for all $ a \in B $. Additionally, for $ \alpha: \left(B,\oplus,\odot_{1}\right)\longrightarrow \left(B,\oplus,\odot_{2}\right) $ an isomorphism of skew braces, we have
\[
\begin{tikzcd}[row sep=2.5em , column sep=2.5em]
\left(B,\odot_{1}\right) \arrow[hook]{r}{m_{1}} \arrow{d}{\alpha}[swap]{\wr} & \mathrm{Hol}\left(B,\oplus\right) \arrow[]{d}{C_{\alpha}}[swap]{\wr} \\ \left(B,\odot_{2}\right) \arrow[hook]{r}{m_{2}}[swap]{} & \mathrm{Hol}\left(B,\oplus\right),
\end{tikzcd} \]
where $ C_{\alpha} $ is conjugation by $ \alpha \in \mathrm{Aut}\left(B,\oplus\right) $, considered naturally as an element of $ \mathrm{Hol}\left(B,\oplus\right) $. This, together with a procedure similar to the one used to prove Proposition \ref{P1}, gives the following proposition of \cite{MR3763907}.
\begin{proposition}\label{P2}
There exists a bijective correspondence between isomorphism classes of skew braces of type $ N $ and classes of regular subgroups of $ \mathrm{Hol}(N) $ under conjugation by elements
of $ \mathrm{Aut}(N) $.
\end{proposition}
\begin{proof}
\cite[Proposition A.3]{MR3763907}.
\end{proof}
In particular, we find another way of computing automorphism groups of skew braces:
\begin{align}\label{E2}
\mathrm{Aut}_{\mathcal{B}r}\left(B,\oplus,\odot\right)\cong \left\{ \alpha \in \mathrm{Aut}\left(B,\oplus\right)\mid \alpha \left(\Ima m\right) \alpha ^{-1} \subseteq \Ima m \right\}.
\end{align}
Therefore, in this way to find the set of non-isomorphic $ G $-skew braces of type $ N $, it suffices to find the set of regular subgroups of $ \mathrm{Hol}(N) $ which are isomorphic to $ G $, and then extract a maximal subset whose elements are not conjugate by any element of $ \mathrm{Aut}(N) $. In particular, (cf. \cite{MR2030805}) one can organise these regular subgroups, and hence the corresponding skew braces and Hopf-Galois structures, according to the size of their image under the natural projection
\begin{align}\label{E4}
\varTheta : \mathrm{Hol}(N) &\longrightarrow \mathrm{Aut}(N)\nonumber\\
\eta\alpha&\longmapsto \alpha.
\end{align}
In other words, if $ \widetilde{\mathcal{S}}(G,N,r) $ is the set of regular subgroups of $ \mathrm{Hol}(N) $ isomorphic to $ G $ whose image under the natural projection $ \varTheta $ has size $ r $, then the set of regular subgroups of $ \mathrm{Hol}(N) $ isomorphic to $ G $ is a finite disjoint union
\[\widetilde{\mathcal{S}}(G,N)= \coprod_{r}\widetilde{\mathcal{S}}(G,N,r). \]
Furthermore, $ \mathrm{Aut}(N) $ acts on each $ \widetilde{\mathcal{S}}(G,N,r) $ via conjugation inside $ \mathrm{Hol}(N) $, and a set of orbit representatives provides a set of isomorphism classes of $ G $-skew braces of type $ N $ whose image, upon embedding in $ \mathrm{Hol}(N) $ and projecting to $ \mathrm{Aut}(N) $, has size $ r $; we shall denote such a class by $ G_{N}(r) $. In order to find the number of Hopf-Galois structures of type $ N $ it suffices to find the automorphism group of each $ G $-skew brace of type $ N $ using (\ref{E2}) and then use the formula given in (\ref{E1}). We shall set $ e'(G,N,r)=\lvert \widetilde{\mathcal{S}}(G,N,r) \rvert $ and denote by $ \widetilde{e}(G,N,r) $ the number of isomorphism classes of skew braces $ G_{N}(r) $.
\subsection{Regular subgroups of holomorphs}\label{SB2}
In this subsection we outline our strategy for finding regular subgroups contained in $ \mathrm{Hol}(N) $. Let us denote by \[ \varTheta: \mathrm{Hol}(N) \longrightarrow \mathrm{Aut}(N),\]
the natural projection with kernel $ N $. Then the first step is to organise the regular subgroups of $ \mathrm{Hol}(N) $ according to the size of their image under the map $ \varTheta $.
Now suppose we want to parametrise subgroups $ H\subseteq \mathrm{Hol}(N) $ with $ \left\lvert \varTheta(H)\right\rvert=m $, where $ m $ divides $\left\lvert N \right\rvert $. In order to do this, we first take a subgroup of order $ m $ of $ \mathrm{Aut}(N) $, which may be generated by some elements $ \alpha_{1},...,\alpha_{s} \in \mathrm{Aut}(N) $, say \[ H_{2}\stackrel{\mathrm{def}}{=} \left\langle \alpha_{1},...,\alpha_{s} \right\rangle \subseteq \mathrm{Aut}(N).\] Next, we take a subgroup of order $ \frac{\left\lvert N \right\rvert}{m} $ of $ N $, which may be generated by $ \eta_{1},...,\eta_{r} \in N $, say \[ H_{1}\stackrel{\mathrm{def}}{=}\left\langle \eta_{1},...,\eta_{r} \right\rangle \subseteq N.\] We also take `general elements' $ v_{1},...,v_{s} \in N $, and we consider a subgroup of $ \mathrm{Hol}(N) $ of the form
\[H=\left\langle \eta_{1},...,\eta_{r},v_{1}\alpha_{1},...,v_{s}\alpha_{s} \right\rangle. \]
Now we need to classify the constraints on $ v_{1},...,v_{s} $ such that $ H $ is regular, i.e., $ H $ has the same size as $ N $ and acts freely on $ N $. It is easy to see that there are many restrictions on $ v_{1},...,v_{s} $ and in many cases no choice of $ v_{1},...,v_{s} $ will result in a regular subgroup.
Notice that $ \left\lvert H \right\rvert \geq\left\lvert N \right\rvert $ since we have the following commutative diagram
\[
\begin{tikzcd}[row sep=2.5em , column sep=2.5em]
H_{1} \arrow[hook]{r}{} \arrow[hook]{d}{}[swap]{} & H \arrow[two heads]{r}{\varTheta} \arrow[hook]{d}{}[swap]{} & H_{2} \arrow[hook]{d}{}[swap]{} \\ N \arrow[hook]{r}{}[swap]{} & \mathrm{Hol}(N) \arrow[two heads]{r}{\varTheta} & \mathrm{Aut}(N) ,
\end{tikzcd} \]
where the hook arrows are the natural inclusions, and the second row is exact, but the first row is not necessarily exact. One of our goals is to select $ v_{1},...,v_{s} $ such that the first row is exact, which would imply that $ \left\lvert H \right\rvert = \left\lvert N \right\rvert $. In particular, we need $ H\cap N=H_{1} $. That is, for example, if there is a relation, say $ \alpha_{1}^{a_{1}}=1 $, in $ H_{2} $, then we need to ensure that $ \left(v_{1}\alpha_{1} \right)^{a_{1}}=v_{1}v_{1}^{\alpha_{1}}\cdots v_{1}^{\alpha_{1}^{a_{1}-1}}\in H_{1} $. Furthermore, we need to ensure that $ H $ acts freely on $ N $, and so, for example, if $ v_{i} \in H_{1} $ for some $ i $, then $ H $ will not act freely.
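For the computation just used (recorded here for convenience, writing $ v^{\alpha} $ for $ \alpha(v) $ as in the formulas above), note that in $ \mathrm{Hol}(N) $ one has \[ \left(v\alpha\right)^{k}=v\,v^{\alpha}v^{\alpha^{2}}\cdots v^{\alpha^{k-1}}\alpha^{k} \ \text{for} \ v \in N, \ \alpha \in \mathrm{Aut}(N), \ k\geq1, \] which follows by induction from the relation $ \alpha v=v^{\alpha}\alpha $ in $ \mathrm{Hol}(N) $.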
More generally, we require the following. For $ H $ to have the same size as $ N $, we require that for every relation $ R\left(\alpha_{1},...,\alpha_{s}\right)=1 $ on $ H_{2} $ we have \[R\left(u_{1}\left(v_{1}\alpha_{1}\right)w_{1},...,u_{s}\left(v_{s}\alpha_{s}\right)w_{s}\right)\in H_{1}, \]
for every $ u_{1},w_{1},...,u_{s},w_{s}\in H_{1} $. For $ H $ to act freely on $ N $, it is necessary that for every word $ W\left(\alpha_{1},...,\alpha_{s}\right)\neq1 $ on $ H_{2} $ we have
\[W(u_{1}\left(v_{1}\alpha_{1})w_{1},...,u_{s}(v_{s}\alpha_{s})w_{s}\right)W\left(\alpha_{1},...,\alpha_{s}\right)^{-1} \notin H_{1}, \]
for every $ u_{1},w_{1},...,u_{s},w_{s}\in H_{1} $; so in fact we must have \[\left\langle \eta_{1},...,\eta_{r},v_{1},...,v_{s} \right\rangle = N. \]
However, in general there may be other conditions on $ v_{i} $ that need to be taken into account -- for example, some elements of $ H $ need to satisfy relations between generators of a group of order $ \left\lvert N \right\rvert $. Therefore, as already mentioned, it can happen that desirable $ v_{i} $ cannot be found. To find all regular subgroups we repeat this process for every $ m $, every subgroup of order $ m $ of $ \mathrm{Aut}(N) $, and every subgroup of order $ \frac{\left\lvert N \right\rvert}{m} $ of $ N $.
Finally, in order to find non-isomorphic skew braces, we need to check which of these regular subgroups are conjugate to one another by elements of $ \mathrm{Aut}(N) $. Note that if $ H $ and $ \widetilde{H} $ are regular subgroups of $ \mathrm{Hol}(N) $ with $ \lvert\varTheta(H)\rvert=\lvert\varTheta( \widetilde{H})\rvert=m $, then $ H $ and $ \widetilde{H} $ are conjugate by an element $ \beta \in \mathrm{Aut}(N) $ if
\[\beta(H_{1})\subseteq \widetilde{H}_{1} \ \text{and} \ \beta H_{2}\beta^{-1}\subseteq \widetilde{H}_{2}, \]
i.e., when $ H=\left\langle \eta_{1},...,\eta_{r},v_{1}\alpha_{1},...,v_{s}\alpha_{s} \right\rangle $, we need
\[ \left\langle \eta_{1}^{\beta},...,\eta_{r}^{\beta},v_{1}^{\beta}\beta\alpha_{1}\beta^{-1},...,v_{s}^{\beta}\beta\alpha_{s}\beta^{-1} \right\rangle \subseteq \widetilde{H}.\]
Our starting point is studying the Heisenberg group of order $ p^{3} $ and its automorphism group.
\section{The Heisenberg group $ M_{1} $}\label{S3}
For $ p>2 $ the exponent $ p $ nonabelian group of order $ p^{3} $, or otherwise known as the Heisenberg group, which we denote by $ M_{1} $, has a presentation \[ M_{1} \stackrel{\mathrm{def}}{=}\left\langle \rho,\sigma, \tau \mid \rho^{p}=\sigma^{p}=\tau^{p}=1, \ \sigma\rho=\rho\sigma, \ \tau\rho=\rho\tau, \ \tau\sigma=\rho\sigma\tau\right\rangle \cong C_{p}^{2}\rtimes C_{p}. \]
Note, the above relations imply that for positive integers $ a_{1},a_{2},a_{3},a_{4} $, we have \[\sigma^{a_{1}}\tau^{a_{2}}\sigma^{a_{3}}\tau^{a_{4}}=\rho^{a_{2}a_{3}}\sigma^{a_{1}+a_{3}}\tau^{a_{2}+a_{4}}\]
from which we also obtain the relation
\begin{align}\label{GE1}
(\sigma^{a_{1}}\tau^{a_{2}})^{n}=\rho^{\frac{1}{2}a_{1}a_{2}n(n-1)}\sigma^{na_{1}}\tau^{na_{2}}.
\end{align}
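As a quick sanity check of (\ref{GE1}) (spelled out here for convenience), the case $ n=2 $ follows directly from the preceding displayed relation with $ a_{3}=a_{1} $ and $ a_{4}=a_{2} $: \[ (\sigma^{a_{1}}\tau^{a_{2}})^{2}=\sigma^{a_{1}}\tau^{a_{2}}\sigma^{a_{1}}\tau^{a_{2}}=\rho^{a_{1}a_{2}}\sigma^{2a_{1}}\tau^{2a_{2}}, \] in agreement with the exponent $ \frac{1}{2}a_{1}a_{2}n(n-1)=a_{1}a_{2} $ for $ n=2 $; the general case follows by induction on $ n $.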
We note that the group $ M_{1} $ contains $ p^{3}-1 $ elements of order $ p $, thus $ p^{2}+p+1 $ subgroups of order $ p $, which are of the form \[ \left\langle\rho\right\rangle,\left\langle\rho^{a}\sigma\right\rangle, \left\langle\rho^{b}\sigma^{c}\tau\right\rangle \ \text{for} \ a,b,c=0,...,p-1. \] Also $ M_{1} $ contains $ p+1 $ subgroups of order $ p^{2} $, which are all isomorphic to $ C_{p}^{2} $, of the form \[ \left\langle\rho,\tau\right\rangle, \left\langle\rho,\sigma\tau^{d}\right\rangle \ \text{for} \ d=0,...,p-1.\]
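As a quick cross-check of these counts (recorded here for convenience), the listed subgroups of order $ p $ number $ 1+p+p^{2}=\frac{p^{3}-1}{p-1} $, as expected since distinct subgroups of order $ p $ intersect trivially and each contains exactly $ p-1 $ of the $ p^{3}-1 $ elements of order $ p $; similarly, the listed subgroups of order $ p^{2} $ number $ 1+p $.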
The next proposition determines the automorphism group of $ M_{1} $. For the analogous result over $ \mathbb{Z} $ see \cite{DVO}. I am grateful to the referee for drawing my attention to this reference.
\begin{proposition}\label{P3}
We have $ \left\lvert \mathrm{Aut}(M_{1})\right\rvert =(p^{2}-1)(p-1)p^{3} $ and
\[\mathrm{Aut}(M_{1})\cong C_{p}^{2}\rtimes \mathrm{GL}_{2}(\mathbb{F}_{p}), \]
where $ C_{p}^{2} $ in the semi-direct product above is generated by the automorphisms $ \beta, \gamma \in \mathrm{Aut}(M_{1}) $ defined by
\begin{align*}
\sigma^{\beta}&=\sigma, \ \tau^{\beta}=\rho\tau \ \text{and} \\
\sigma^{\gamma}&=\rho\sigma, \ \tau^{\gamma}=\tau.
\end{align*}
The (left) action of $ \mathrm{GL}_{2}(\mathbb{F}_{p}) $ on $ C_{p}^{2}=\left\langle \beta,\gamma \right\rangle $, in the semi-direct product, is given by \[\begin{pmatrix} a_{1} & a_{2} \\ a_{3} & a_{4} \end{pmatrix}\cdot\beta= \beta^{a_{1}}\gamma^{-a_{3}} \ \text{and} \ \begin{pmatrix} a_{1} & a_{2} \\ a_{3} & a_{4} \end{pmatrix}\cdot\gamma= \beta^{-a_{2}}\gamma^{a_{4}}. \]
where $ \begin{pmatrix} a_{1} & a_{2} \\ a_{3} & a_{4} \end{pmatrix} \in \mathrm{GL}_{2}(\mathbb{F}_{p}) $.
\end{proposition}
\begin{proof}
Let $ \alpha \in \mathrm{Aut}(M_{1}) $. Then we have
\begin{align*}
\sigma^{\alpha}&=\rho^{b_{1}}\sigma^{a_{1}}\tau^{a_{3}} \\
\tau^{\alpha}&=\rho^{b_{2}}\sigma^{a_{2}}\tau^{a_{4}}
\end{align*}
for some $ a_{1},a_{2},a_{3},a_{4},b_{1},b_{2} \in \mathbb{Z} /p\mathbb{Z} $. Note, $ \rho^{\alpha} $ is determined by above and we find \[\rho^{\alpha}=\tau^{\alpha}\sigma^{\alpha}\left(\sigma^{\alpha}\tau^{\alpha}\right)^{-1}=\rho^{a_{1}a_{4}-a_{2}a_{3}},\] so $ \alpha $ is bijective if and only if $ a_{1}a_{4}-a_{2}a_{3}\not\equiv 0 \ \mathrm{mod} \ p $. We shall write \[ \begin{bmatrix} a_{1}a_{4}-a_{2}a_{3} & b_{1} & b_{2} \\ 0 & a_{1} & a_{2} \\ 0 & a_{3} & a_{4} \end{bmatrix}\ \text{or} \ \begin{bmatrix} \mathrm{det}(A) & b_{1} & b_{2} \\ 0 & \ \ \ \ \ A \end{bmatrix} \] to represent $ \alpha $. This is only a representation, and not a matrix, so composition of automorphisms does not in general correspond to matrix multiplication. In fact composition of automorphisms yields the following.
\begin{align*}
&\begin{bmatrix} \mathrm{det}(A) & b_{1} & b_{2} \\ 0 & A \end{bmatrix}\circ \begin{bmatrix} \mathrm{det}(A') & b_{1}' & b_{2}' \\ 0 & \ \ \ \ \ A' \end{bmatrix} \\
&=\begin{bmatrix} \begin{pmatrix} \mathrm{det}(A) & b_{1} & b_{2} \\ 0 & \ \ \ \ \ A \end{pmatrix} \begin{pmatrix} \mathrm{det}(A') & b_{1}' & b_{2}' \\ 0 & \ \ \ \ \ A' \end{pmatrix} + \begin{pmatrix} 0 & C_{1} & C_{2} \\ 0 & \ \ \ \ \ 0 \end{pmatrix} \end{bmatrix}
\end{align*}
for
\begin{align*}
C_{1}&=\frac{1}{2}a_{1}a_{3}a_{1}'(a_{1}'-1)+\frac{1}{2}a_{2}a_{4}a_{3}'(a_{3}'-1)+a_{3}a_{1}'a_{2}a_{3}'\\
C_{2}&=\frac{1}{2}a_{1}a_{3}a_{2}'(a_{2}'-1)+\frac{1}{2}a_{2}a_{4}a_{4}'(a_{4}'-1)+a_{3}a_{2}'a_{2}a_{4}'.
\end{align*}
The group $ M_{1} $ has centre $ Z= \left\langle\rho\right\rangle $ of order $ p $ and \[ M_{1}/Z= \left\langle \overline{\sigma}, \overline{\tau}\right\rangle \cong C_{p}^{2},\] where $ \overline{\sigma},\overline{\tau} \in M_{1}/Z $ are the images of $ \sigma,\tau \in M_{1} $. Thus we obtain a natural homomorphism \[ \varPsi: \mathrm{Aut}(M_{1})\longrightarrow \mathrm{Aut}(M_{1}/Z) \cong \mathrm{GL}_{2}(\mathbb{F}_{p}). \] Since $ M_{1}/Z\cong C_{p}^{2} $ is abelian, we see that the set of inner automorphisms of $ M_{1} $ is contained in the kernel of $ \varPsi $ i.e., $ \mathrm{Inn}(M_{1}) \subseteq \Ker \varPsi $. Note $ \mathrm{Inn}(M_{1}) \cong M_{1}/Z $. Now if $ \alpha \in \Ker \varPsi $, then we must have $ \tau^{\alpha}\tau^{-1}\in Z $ and $ \sigma^{\alpha}\sigma^{-1}\in Z $ i.e.,
\begin{align*}
\sigma^{\alpha}&=\rho^{r_{1}}\sigma \\
\tau^{\alpha}&=\rho^{r_{2}}\tau
\end{align*}
for some integers $ r_{1}, r_{2} = 0,...,p-1 $, which implies that $ \rho^{\alpha}=\rho $. There can be at most $ p^{2} $ choices for such $ \alpha $, which implies that $ \mathrm{Inn}(M_{1}) = \Ker \varPsi $. We further find $ \Ker \varPsi=\left\langle\beta,\gamma\right\rangle $ where
\[ \beta\stackrel{\mathrm{def}}{=}\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},\ \gamma\stackrel{\mathrm{def}}{=}\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \]
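As a quick check (spelled out here), these two automorphisms are indeed inner: conjugation by $ \tau $ fixes $ \tau $ and sends $ \sigma $ to $ \tau\sigma\tau^{-1}=\rho\sigma $, so it equals $ \gamma $, while conjugation by $ \sigma^{-1} $ fixes $ \sigma $ and sends $ \tau $ to $ \sigma^{-1}\tau\sigma=\rho\tau $, so it equals $ \beta $.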
To show that the map $ \varPsi $ is surjective, for any element \[ A\stackrel{\mathrm{def}}{=}\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{GL}_{2}(\mathbb{F}_{p}) \] define a map \[ \alpha_{A} : M_{1} \longrightarrow M_{1} \ \text{given by} \ \alpha_{A}\stackrel{\mathrm{def}}{=}\begin{bmatrix} ad-bc & \frac{ac}{2} & \frac{bd}{2} \\ 0 & a & b \\ 0 & c & d \end{bmatrix}.\]
It is also easy to check that $ A\longmapsto \alpha_{A} $ is a group homomorphism. Therefore, we find a split exact sequence \[
\begin{tikzcd}[row sep=1.1em , column sep=1.1em]
1 \arrow[]{r}[]{} & C_{p}^{2} \arrow{r}[]{} & \mathrm{Aut}(M_{1}) \arrow{r}[]{} & \mathrm{GL}_{2}(\mathbb{F}_{p}) \arrow{r}[]{} & 1.
\end{tikzcd} \]
One can check that the left action of $ \mathrm{GL}_{2}(\mathbb{F}_{p}) $ on $ C_{p}^{2} $ is given by \[\begin{pmatrix} a_{1} & a_{2} \\ a_{3} & a_{4} \end{pmatrix}\cdot\beta= \beta^{a_{1}}\gamma^{-a_{3}} \ \text{and} \ \begin{pmatrix} a_{1} & a_{2} \\ a_{3} & a_{4} \end{pmatrix}\cdot\gamma= \beta^{-a_{2}}\gamma^{a_{4}}. \]
Note that the above corresponds to the relations \[\alpha_{A}\beta= \beta^{a_{1}}\gamma^{-a_{3}}\alpha_{A} \ \text{and} \ \alpha_{A}\gamma= \beta^{-a_{2}}\gamma^{a_{4}}\alpha_{A}. \]
\end{proof}
\section{Skew braces of $ M_{1} $ type}\label{S4}
In this section we classify the skew braces and Hopf-Galois structures of $ M_{1} $ type. The main result of this section is the following (which proves Theorems \ref{T2} and \ref{T3}). Recall that $ \widetilde{e}(G,N) $ is the number of $ G $-skew braces of type $ N $ and $ e(G,N) $ is the number of Hopf-Galois structures on a Galois extension with Galois group $ G $ of type $ N $.
\begin{proposition}\label{P5}
We have
\begin{align*}
\widetilde{e}(M_{1},M_{1})&=2p^{2}-p+3,\\
\widetilde{e}(C_{p}^{3},M_{1})&=2p+1,
\end{align*}
and $ \widetilde{e}(G ,M_{1})=0 $ if $ G $ is isomorphic to neither $ M_{1} $ nor $ C_{p}^{3} $.
Furthermore, we have
\begin{align*}
e(M_{1},M_{1})&=(2p^{3}-3p+1)p^{2},\\
e(C_{p}^{3},M_{1})&=(p^{3}-1)(p^{2}+p-1)p^{2},
\end{align*}
and $ e(G ,M_{1})=0 $ if $ G $ is isomorphic to neither $ M_{1} $ nor $ C_{p}^{3} $.
\end{proposition}
\begin{proof}
This follows from the calculations in the rest of this section. In particular, the first part follows by adding the relevant numbers from Lemmas \ref{L16}, \ref{L17}, and \ref{L18}:
\begin{align*}
\widetilde{e}(M_{1},M_{1})&=1+2(p-1)+(2p-3)p+4=2p^{2}-p+3,\\
\widetilde{e}(C_{p}^{3},M_{1})&=2+2p-1=2p+1,
\end{align*}
and the second part follows by adding the relevant numbers from Lemmas \ref{L16H}, \ref{L17H}, and \ref{L18}:
\begin{align*}
e(M_{1},M_{1})&=1+(p^{3}-p^{2}-1)(p+1)+(p^{4}-p^{3}-2p^{2}+2p+1)p+(p^{2}-1)p^{3}\\
&=(2p^{3}-3p+1)p^{2},\\
e(C_{p}^{3},M_{1})&=(p^{3}-1)(p+1)p^{2}+(p^{3}-1)(p^{2}-2)p^{2}=(p^{3}-1)(p^{2}+p-1)p^{2}.
\end{align*}
\end{proof}
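The arithmetic in the proof above can also be checked symbolically; the following short SymPy sketch (an independent sanity check, not part of the argument, and assuming a standard SymPy installation) verifies the four polynomial identities.
\begin{verbatim}
from sympy import symbols, expand

p = symbols('p')

# skew brace counts
assert expand(1 + 2*(p - 1) + (2*p - 3)*p + 4 - (2*p**2 - p + 3)) == 0
assert expand(2 + (2*p - 1) - (2*p + 1)) == 0

# Hopf-Galois structure counts
lhs_M1 = (1 + (p**3 - p**2 - 1)*(p + 1)
          + (p**4 - p**3 - 2*p**2 + 2*p + 1)*p + (p**2 - 1)*p**3)
assert expand(lhs_M1 - (2*p**3 - 3*p + 1)*p**2) == 0

lhs_C = (p**3 - 1)*(p + 1)*p**2 + (p**3 - 1)*(p**2 - 2)*p**2
assert expand(lhs_C - (p**3 - 1)*(p**2 + p - 1)*p**2) == 0
\end{verbatim}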
We note that at the end of Lemmas \ref{L16}, \ref{L17}, and \ref{L18} there are lists of non-isomorphic skew braces together with a description of their automorphism groups.
Before we begin to prove Lemmas \ref{L16}, \ref{L16H}, \ref{L17}, \ref{L17H}, and \ref{L18}, we need to set up some notations. Let us denote by \[ \alpha_{1}\stackrel{\mathrm{def}}{=}\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},\ \alpha_{2}\stackrel{\mathrm{def}}{=}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}, \ \alpha_{3}\stackrel{\mathrm{def}}{=}\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.\]
Note in Proposition \ref{P3}, we had $ \alpha_{1}=\gamma $ and $ \alpha_{3}=\beta $. Furthermore, we showed that $ \mathrm{Aut}(M_{1}) $ can be written as
\[\mathrm{Aut}(M_{1})\cong C_{p}^{2}\rtimes \mathrm{GL}_{2}(\mathbb{F}_{p}), \]
where the factor $ C_{p}^{2} $ is generated by automorphisms $ \alpha_{1},\alpha_{3} \in \mathrm{Aut}(M_{1}) $. The (left) action of $ \mathrm{GL}_{2}(\mathbb{F}_{p}) $ on $ C_{p}^{2} $ is given by
\begin{align}\label{E95}
\begin{pmatrix} a_{1} & a_{2} \\ a_{3} & a_{4} \end{pmatrix}\cdot\alpha_{1}= \alpha_{1}^{a_{4}}\alpha_{3}^{-a_{2}}, \ \begin{pmatrix} a_{1} & a_{2} \\ a_{3} & a_{4} \end{pmatrix}\cdot\alpha_{3}= \alpha_{1}^{-a_{3}}\alpha_{3}^{a_{1}}.
\end{align}
Therefore, the holomorph of $ M_{1} $ can be identified with \[ \mathrm{Hol}(M_{1})\cong M_{1}\rtimes (C_{p}^{2}\rtimes\mathrm{GL}_{2}(\mathbb{F}_{p}) ). \]
Now the image in $ \mathrm{GL}_{2}(\mathbb{F}_{p}) $ of a subgroup $ G \subseteq \mathrm{Hol}(M_{1}) $ of order $ p^{3} $ under the composition of projections \[\varTheta : \mathrm{Hol}(M_{1})\longrightarrow \mathrm{Aut}(M_{1}) \ \text{and} \ \varPsi : \mathrm{Aut}(M_{1})\longrightarrow \mathrm{GL}_{2}(\mathbb{F}_{p}) \] must lie in one of the $ p+1 $ Sylow $ p $-subgroups of $ \mathrm{GL}_{2}(\mathbb{F}_{p}) $, which are conjugate to the subgroup generated by $ \beta_{1}\stackrel{\mathrm{def}}{=}\begin{psmallmatrix} 1 & 0 \\ 1 & 1 \end{psmallmatrix} $; thus we have \[ \varTheta(G) \subseteq \mathrm{A}_{\beta}(M_{1})\stackrel{\mathrm{def}}{=} C_{p}^{2}\rtimes\left\langle\beta\beta_{1}\beta^{-1}\right\rangle \cong M_{1} \ \text{for some} \ \beta \in \mathrm{GL}_{2}(\mathbb{F}_{p}), \] and so any subgroup of $ \mathrm{Hol}(M_{1}) $ of order $ p^{3} $ lies in a subgroup of the form \[ M_{1}\rtimes \mathrm{A}_{\beta}(M_{1}) \ \text{for some} \ \beta \in \mathrm{GL}_{2}(\mathbb{F}_{p}).\]
Note, the elements $ \alpha_{1}, \alpha_{2}, \alpha_{3} \in \mathrm{Aut}(M_{1}) $ have order $ p $, and they satisfy
\begin{align}\label{E109}
\alpha_{2}\alpha_{1}=\alpha_{1}\alpha_{2}, \ \alpha_{3}\alpha_{1}=\alpha_{1}\alpha_{3}, \ \alpha_{3}\alpha_{2}=\alpha_{1}\alpha_{2}\alpha_{3}.
\end{align}
Thus $ \left\langle\alpha_{1},\alpha_{2},\alpha_{3}\right\rangle \cong M_{1} $ (with $ \alpha_{1},\alpha_{2},\alpha_{3} $ playing the roles of $ \rho,\sigma,\tau $, respectively) is one of the $ p+1 $ Sylow $ p $-subgroups of $ \mathrm{Aut}(M_{1}) $, and without loss of generality it is the one we shall work with. First, note that for $ \left\lvert\varTheta(G) \right\rvert=1 $ a regular subgroup $ G $ is contained in $ M_{1} $ and hence equals $ M_{1} $, so we have \begin{align*}
e(M_{1},M_{1},1)&=\widetilde{e}(M_{1},M_{1},1)=1 \ \text{and}\\
e(G,M_{1},1)&=\widetilde{e}(G,M_{1},1)=0 \ \text{if} \ G\neq M_{1}.
\end{align*}
We shall deal with the cases $ \left\lvert\varTheta(G) \right\rvert=p,p^{2},p^{3} $ in the following lemmas.
It will be useful for our calculations to derive the explicit formula for $ \left(v\alpha_{1}^{a_{1}}\alpha_{2}^{ a_{2}}\alpha_{3}^{a_{3}}\right)^{r} $ for natural numbers $ r, a_{i} $ and an element $ v=\rho^{v_{1}}\sigma^{v_{2}}\tau^{v_{3}} \in M_{1} $. For this we first note that we have
\begin{align}\label{E66}
\alpha_{1}^{a_{1}}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\cdot v&=\begin{bmatrix} 1 & a_{1} & a_{3}\\ 0 & 1 & 0 \\ 0 & a_{2} & 1 \end{bmatrix}\cdot v \nonumber \\
&=\rho^{a_{1}v_{2}+\frac{1}{2}a_{2}v_{2}\left(v_{2}-1\right)+a_{3}v_{3}}v\tau^{a_{2}v_{2}}.
\end{align}
Now by using (\ref{E109}) and (\ref{E66}) we find
\begin{align}\label{E92}
\left(v\alpha_{1}^{a_{1}}\alpha_{2}^{ a_{2}}\alpha_{3}^{a_{3}}\right)^{r}&=\left(\prod_{j=0}^{r-1}\rho^{k_{j}}v\tau^{ a_{2}v_{2}j}\right)\left(\alpha_{1}^{a_{1}}\alpha_{2}^{ a_{2}}\alpha_{3}^{a_{3}}\right)^{r}\nonumber\\
&=\rho^{l_{1}}v^{r}\tau^{l_{2}a_{2}v_{2}}\left(\alpha_{1}^{a_{1}}\alpha_{2}^{ a_{2}}\alpha_{3}^{a_{3}}\right)^{r},
\end{align}
(note order of the product matters and is in increasing $ j $) with \[ k_{j}\stackrel{\mathrm{def}}{=}\left(a_{1}v_{2}j+\frac{1}{2} a_{2}a_{3}v_{2}j\left(j-1\right)+\frac{1}{2} a_{2}v_{2}\left(v_{2}-1\right)j+a_{3}v_{3}j\right), \]
for $ j=0,...,r-1 $,
\begin{align*}
l_{1}&=l_{1}(r)\stackrel{\mathrm{def}}{=}\sum_{j=1}^{r-1}k_{j}+\frac{a_{2}v_{2}^{2}}{2}\sum_{j=1}^{r-2}j\left(j+1\right)\ \text{and}\\
l_{2}&=l_{2}(r)\stackrel{\mathrm{def}}{=}\sum_{j=1}^{r-1}j.
\end{align*}
The second summation in $ l_{1} $ arises by moving the $ \tau^{a_{2}v_{2}j} $ terms to gather them in one place using the relation $ \tau\sigma=\rho\sigma\tau $. Note that $ l_{1} $ and $ l_{2} $ are divisible by $ r $ whenever $ r>3 $ is prime, since each of $ \sum_{j=1}^{r-1}j $, $ \sum_{j=1}^{r-1}j\left(j-1\right) $, and $ \sum_{j=1}^{r-2}j\left(j+1\right) $ is then divisible by $ r $, so we find
\begin{align}\label{E110}
\left(v\alpha_{1}^{a_{1}}\alpha_{2}^{ a_{2}}\alpha_{3}^{a_{3}}\right)^{p}=1
\end{align} for every $ v \in M_{1} $ since $ p>3 $. Note further that in (\ref{E92}), when $ a_{2}=0 $, we have
\begin{align}\label{E93}
\left(v\alpha_{1}^{a_{1}}\alpha_{3}^{a_{3}}\right)^{r}\in v^{r}\alpha_{1}^{ra_{1}}\alpha_{3}^{ra_{3}} \left\langle\rho\right\rangle,
\end{align}
where $ \left\langle\rho\right\rangle $ is a normal subgroup of $ \mathrm{Hol}(M_{1}) $ since it is a characteristic subgroup of $ M_{1} $.
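For instance, specialising (\ref{E92}) to $ v=\sigma $ and $ a_{1}=1 $, $ a_{2}=a_{3}=0 $ gives \[ \left(\sigma\alpha_{1}\right)^{r}=\rho^{\frac{1}{2}r\left(r-1\right)}\sigma^{r}\alpha_{1}^{r}, \] which illustrates both (\ref{E93}) and, at $ r=p $, the identity (\ref{E110}).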
It will further be useful, when finding the non-isomorphic braces, to derive the explicit formula for a term of the form $ \alpha\left(v\alpha_{1}^{a_{1}}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\right)\alpha^{-1} $ for an automorphism $ \alpha \in \mathrm{Aut}(M_{1}) $. Now if \[ \alpha=\gamma\beta \in \mathrm{Aut}(M_{1})\cong C_{p}^{2}\rtimes\mathrm{GL}_{2}(\mathbb{F}_{p}) \ \text{where} \]\[ \gamma\stackrel{\mathrm{def}}{=} \alpha_{1}^{r_{1}}\alpha_{3}^{r_{3}} \in C_{p}^{2}, \ \beta \stackrel{\mathrm{def}}{=} \begin{pmatrix} b_{1} & b_{2} \\ b_{3} & b_{4} \end{pmatrix} \in \mathrm{GL}_{2}(\mathbb{F}_{p}),\] then, using (\ref{E95}), we have \[\alpha\left(v\alpha_{1}^{a_{1}}\alpha_{2}^{ a_{2}}\alpha_{3}^{a_{3}}\right)\alpha^{-1}=\left(\alpha\cdot v\right)\alpha_{1}^{r_{1}}\alpha_{3}^{r_{3}}\alpha_{1}^{\left(a_{1}-a_{2} a_{3}\right)b_{4}-a_{3}b_{3}}\alpha_{3}^{-\left(a_{1}-a_{2} a_{3}\right)b_{2}+a_{3}b_{1}}\beta\alpha_{2}^{ a_{2}}\beta^{-1}\alpha_{1}^{-r_{1}}\alpha_{3}^{-r_{3}},\]
where using the section of the exact sequence in Proposition \ref{P3}, we have \[ \beta \cdot v = \rho^{\mathrm{det}(\beta)v_{1}+\frac{1}{2}\left(b_{1}b_{3}v_{2}+b_{2}b_{4}v_{3}\right)}\left(\sigma^{b_{1}}\tau^{b_{3}}\right)^{v_{2}}\left(\sigma^{b_{2}}\tau^{b_{4}}\right)^{v_{3}}, \]
which gives
\begin{align}\label{E112}
\alpha\cdot &v=\rho^{\widetilde{v}_{1}}\sigma^{b_{1}v_{2}+b_{2}v_{3}}\tau^{b_{3}v_{2}+b_{4}v_{3}}, \ \text{where}\\
\widetilde{v}_{1}&\stackrel{\mathrm{def}}{=}\mathrm{det}(\beta)v_{1}+\frac{1}{2}\left(b_{3}b_{1}v_{2}^{2}+b_{4}b_{2}v_{3}^{2}\right)+b_{2}b_{3}v_{2}v_{3}+r_{1}\left(b_{1}v_{2}+b_{2}v_{3}\right)+r_{3}\left(b_{3}v_{2}+b_{4}v_{3}\right)\nonumber
\end{align}
The above implies that, when $ a_{2}=0 $, we have
\begin{align}\label{E15}
\alpha\left(v\alpha_{1}^{a_{1}}\alpha_{3}^{a_{3}}\right)\alpha^{-1}=\left(\alpha\cdot v\right)\alpha_{1}^{a_{1}b_{4}-a_{3}b_{3}}\alpha_{3}^{a_{3}b_{1}-a_{1}b_{2}},
\end{align}
with $ \alpha\cdot v $ as given in (\ref{E112}). When $ a_{2}\neq0 $, we set $ b_{2}=0 $, since we want to remain within $ \left\langle\alpha_{1},\alpha_{2},\alpha_{3}\right\rangle $; in this case we have
\[\beta\alpha_{2}^{a_{2}}\beta^{-1}=\alpha_{1}^{\frac{1}{2}a_{2}b_{4}\left(b_{1}^{-1}-1\right)}\alpha_{2}^{a_{2}b_{1}^{-1}b_{4}}, \]
so we get
\begin{align}\label{E16}
\alpha\left(v\alpha_{1}^{a_{1}}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\right)\alpha^{-1}=\left(\alpha\cdot v\right)\alpha_{1}^{a_{1}b_{4}-a_{3}b_{3}+r_{3}a_{2}b_{1}^{-1}b_{4}+\frac{1}{2}a_{2}b_{4}\left(b_{1}^{-1}-1\right)}\alpha_{2}^{a_{2}b_{1}^{-1}b_{4}}\alpha_{3}^{a_{3}b_{1}},
\end{align}
where $ \alpha\cdot v $ can be calculated using (\ref{E112}).
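For instance, for $ r_{1}=r_{3}=0 $ and $ \beta=\begin{psmallmatrix} b_{1} & 0 \\ 0 & b_{4} \end{psmallmatrix} $, formulas (\ref{E112}) and (\ref{E15}) give \[ \alpha\left(\sigma\alpha_{1}\right)\alpha^{-1}=\sigma^{b_{1}}\alpha_{1}^{b_{4}}, \] a special case of the normalisations carried out repeatedly below.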
\begin{lemma}\label{L16}
For $ \left\lvert\varTheta(G) \right\rvert=p $ there are exactly $ 2(p-1) $ $ M_{1} $-skew braces of $ M_{1} $ type and two $ C_{p}^{3} $-skew braces of $ M_{1} $ type.
\end{lemma}
\begin{proof}
If $ G \subseteq \mathrm{Hol}(M_{1}) $ with $ \left\lvert\varTheta(G) \right\rvert=p $ is a regular subgroup, then we can assume, without loss of generality, that $ \varTheta(G) \subseteq \left\langle\alpha_{1},\alpha_{2},\alpha_{3}\right\rangle $ is a subgroup of order $ p $. We also have $ G\cap M_{1} $ is a subgroup of order $ p^{2} $. Therefore, $ \varTheta(G) $ is one of
\begin{align*}\label{E48}
\left\langle\alpha_{1}^{a_{1}}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\right\rangle \ \text{for} \ a_{1},a_{2},a_{3}=0,...,p-1 \ \text{with} \ (a_{1},a_{2},a_{3})\neq (0,0,0),
\end{align*}
(each occurring $ p-1 $ times) and $ G\cap M_{1} $ is one of \[ \left\langle \rho,\tau \right\rangle, \left\langle \rho, \sigma\tau^{d}\right\rangle \ \text{for} \ d=0,...,p-1.\]
Suppose we consider subgroups of the form
\[G=\left\langle \rho, \sigma\tau^{d},h \right\rangle \ \text{where} \ h\stackrel{\mathrm{def}}{=}\tau\alpha_{1}^{a_{1}}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}. \]
Note, using (\ref{E66}), we must have \[h\left(\sigma\tau^{d}\right)h^{-1}=\tau\left(\alpha_{1}^{a_{1}}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\cdot\left(\sigma\tau^{d}\right)\right)\tau^{-1}=\rho^{a_{3}d+a_{1}+1}\sigma\tau^{a_{2}+d} \in \left\langle \rho, \sigma\tau^{d}\right\rangle ,\]
and since for a natural number $ r $ we have \[\left(\sigma\tau^{d}\right)^{r}=\rho^{\frac{1}{2}dr\left(r-1\right)}\sigma^{r}\tau^{rd},\]
the pairing is possible only when $ a_{2}=0 $. Therefore, we consider subgroups of the form
\[ G=\left\langle \rho, \sigma\tau^{d},h \right\rangle\ \text{where} \ h\stackrel{\mathrm{def}}{=}\tau\alpha_{1}^{a_{1}}\alpha_{3}^{a_{3}}.\]
But now since the automorphism of $ M_{1} $ corresponding to $ \begin{psmallmatrix} d & -1 \\ 1-d & 1 \end{psmallmatrix} \in \mathrm{GL}_{2}(\mathbb{F}_{p}) $ maps the subgroup $ \left\langle \rho, \sigma\tau^{d}\right\rangle $ to $ \left\langle \rho, \tau\right\rangle $, we can assume every one of these skew braces is isomorphic to one containing the subgroup $ \left\langle \rho, \tau\right\rangle $.
Hence, up to conjugation, we must have \[G=\left\langle \rho, \tau, g \right\rangle \ \text{where} \ g\stackrel{\mathrm{def}}{=}\sigma\alpha_{1}^{a_{1}}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}.\]
Note, using (\ref{E66}), we have
\begin{align*}
g\tau g^{-1}&=\sigma\left(\alpha_{1}^{a_{1}}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\cdot\tau\right) \sigma^{-1}=\rho^{\left(a_{3}-1\right)}\tau \in \left\langle \rho,\tau \right\rangle \ \text{and} \\
g\rho g^{-1}&=\sigma\left(\alpha_{1}^{a_{1}}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\cdot\rho\right) \sigma^{-1}=\rho \in \left\langle \rho,\tau \right\rangle,
\end{align*}
so the pairing is possible. Further, it follows from (\ref{E110}) that $ g^{p}=1 $. Now, for $ r\neq 0 $, using (\ref{E66}), we have
\begin{align}\label{E49}
g\tau^{r}=\left(\sigma\alpha_{1}^{a_{1}}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\right)\tau^{r} =\rho^{ra_{3}}\sigma\tau^{r}\alpha_{1}^{a_{1}}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}=\rho^{r\left(a_{3}-1\right)}\tau^{r}g,
\end{align}
so $ G $ is abelian if and only if $ a_{3}=1 $. Furthermore, all these subgroups are regular: they have order $ p^{3} $ and $ \left\langle \rho,\tau \right\rangle \cup \{\sigma\} \subseteq \mathrm{Orb}(1) $, so $ \lvert\mathrm{Orb}(1)\rvert $, which divides $ p^{3} $, exceeds $ p^{2} $ and hence equals $ p^{3} $, i.e., their action on $ M_{1} $ is transitive.
Therefore, for $ a_{3}=1 $ we find regular subgroups isomorphic to $ C_{p}^{3} $ of the form
\begin{align}\label{E50}
&\left\langle \rho,\tau, \sigma\alpha_{1}^{a}\alpha_{3} \right\rangle, \left\langle \rho,\tau, \sigma\alpha_{1}^{a}\alpha_{2}^{b}\alpha_{3} \right\rangle \cong C_{p}^{3}\nonumber \\
&\text{for} \ a=0,...,p-1, \ b=1,...,p-1,
\end{align}
and for $ a_{3}\neq 1 $, setting $ r=\left(1-a_{3}\right)^{-1} $ in (\ref{E49}), we find regular subgroups isomorphic to $ M_{1} $ of the form
\begin{align}\label{E51}
&\left\langle \rho,\tau, \sigma\alpha_{1}^{b} \right\rangle,\left\langle \rho,\tau, \sigma\alpha_{1}^{a}\alpha_{2}^{b} \right\rangle, \left\langle \rho,\tau, \sigma\alpha_{1}^{a}\alpha_{3}^{c} \right\rangle, \left\langle \rho,\tau, \sigma\alpha_{1}^{a}\alpha_{2}^{b}\alpha_{3}^{c} \right\rangle \cong M_{1} \nonumber\\
&\text{for} \ a=0,...,p-1, \ b,c=1,...,p-1\ \text{with} \ c\neq 1.
\end{align}
To find the non-isomorphic skew braces corresponding to the above regular subgroups, we let \[ \alpha=\gamma\beta \in \mathrm{Aut}(M_{1})\cong C_{p}^{2}\rtimes\mathrm{GL}_{2}(\mathbb{F}_{p}) \ \text{where} \]\[ \gamma\stackrel{\mathrm{def}}{=} \alpha_{1}^{r_{1}}\alpha_{3}^{r_{3}} \in C_{p}^{2}, \ \beta \stackrel{\mathrm{def}}{=} \begin{pmatrix} b_{1} & b_{2} \\ b_{3} & b_{4} \end{pmatrix} \in \mathrm{GL}_{2}(\mathbb{F}_{p}),\]
and we work with automorphisms which fix the subgroup $ \left\langle \rho, \tau\right\rangle $, i.e., when $ b_{2}=0 $. In such case, using (\ref{E16}), we have
\[\alpha\left(\sigma\alpha_{1}^{a_{1}}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\right)\alpha^{-1}=\left(\alpha\cdot \sigma\right)\alpha_{1}^{a_{1}b_{4}-a_{3}b_{3}+r_{3}a_{2}b_{1}^{-1}b_{4}+\frac{1}{2}a_{2}b_{4}\left(b_{1}^{-1}-1\right)}\alpha_{2}^{a_{2}b_{1}^{-1}b_{4}}\alpha_{3}^{a_{3}b_{1}},\]
where using (\ref{E112})
\[\alpha\cdot \sigma=\rho^{\frac{1}{2}b_{1}b_{3}-r_{1}b_{1}+r_{3}b_{3}}\sigma^{b_{1}}\tau^{b_{3}}.\]
Now since
\[\alpha\left(\sigma\alpha_{1}^{a}\alpha_{3}^{c}\right)\alpha^{-1}=\left(\alpha\cdot \sigma\right)\alpha_{1}^{ab_{4}-cb_{3}}\alpha_{3}^{cb_{1}}\in \sigma^{b_{1}}\alpha_{1}^{ab_{4}-cb_{3}}\alpha_{3}^{cb_{1}}\left\langle \rho, \tau\right\rangle,\]
we have
\[\alpha\left(\sigma\alpha_{1}^{a}\alpha_{3}^{c}\right)^{b_{1}^{-1}}\alpha^{-1}\in \sigma\alpha_{1}^{ab_{1}^{-1}b_{4}-cb_{1}^{-1}b_{3}}\alpha_{3}^{c}\left\langle \rho, \tau\right\rangle.\]
Thus if we conjugate the subgroup $ \left\langle \rho,\tau, \sigma\alpha_{3}^{c} \right\rangle $ with the automorphism corresponding to $ \begin{psmallmatrix} 1 & 0 \\ -ac^{-1} & 1 \end{psmallmatrix} $ we get $ \left\langle \rho,\tau, \sigma\alpha_{1}^{a}\alpha_{3}^{c} \right\rangle $, and now the subgroups $ \left\langle \rho,\tau, \sigma\alpha_{3}^{c} \right\rangle $ for different values of $ c $ cannot be conjugate to each other.
Next, working similar to above, we have
\[\alpha\left(\sigma\alpha_{1}^{a}\alpha_{2}^{b}\alpha_{3}^{c}\right)^{b_{1}^{-1}}\alpha^{-1}\in\sigma\alpha_{1}^{ab_{1}^{-1}b_{4}-cb_{1}^{-1}b_{3}+r_{3}bb_{1}^{-2}b_{4}+\frac{1}{2}bb_{1}^{-1}b_{4}\left(b_{1}^{-1}-1\right)\left(c+1\right)}\alpha_{2}^{bb_{1}^{-2}b_{4}}\alpha_{3}^{c}\left\langle \rho, \tau\right\rangle.\]
Thus, if we conjugate the subgroup $ \left\langle \rho,\tau, \sigma\alpha_{2}\alpha_{3}^{c} \right\rangle $ with the automorphism corresponding to $ \begin{psmallmatrix} 1 & 0 \\ -ac^{-1} & b \end{psmallmatrix} $, we get $ \left\langle \rho,\tau, \sigma\alpha_{1}^{a}\alpha_{2}^{b}\alpha_{3}^{c} \right\rangle $, and now again the subgroups $ \left\langle \rho,\tau, \sigma\alpha_{2}\alpha_{3}^{c} \right\rangle $ for different values of $ c $ cannot be conjugate. Finally, we note that
\[\alpha\left(\sigma\alpha_{1}^{a}\alpha_{2}^{b}\right)^{b_{1}^{-1}}\alpha^{-1}\in \sigma\alpha_{1}^{ab_{1}^{-1}b_{4}+r_{3}bb_{1}^{-2}b_{4}+\frac{1}{2}bb_{1}^{-1}b_{4}\left(b_{1}^{-1}-1\right)}\alpha_{2}^{bb_{1}^{-2}b_{4}}\left\langle \rho, \tau\right\rangle,\]
so
\[\alpha\left(\sigma\alpha_{1}^{a}\right)^{b_{1}^{-1}}\alpha^{-1}\in \sigma\alpha_{1}^{ab_{1}^{-1}b_{4}}\left\langle \rho, \tau\right\rangle,\]
which implies that conjugating the subgroup $ \left\langle \rho,\tau, \sigma\alpha_{1} \right\rangle $ with the automorphism corresponding to $ \begin{psmallmatrix} 1 & 0 \\ 0 & b \end{psmallmatrix} $, we get $ \left\langle \rho,\tau, \sigma\alpha_{1}^{b} \right\rangle $, and conjugating the subgroup $ \left\langle \rho,\tau, \sigma\alpha_{2} \right\rangle $ with the automorphism corresponding to $ \alpha_{3}^{ab^{-1}}\begin{psmallmatrix} 1 & 0 \\ 0 & b \end{psmallmatrix} $, we get $ \left\langle \rho,\tau, \sigma\alpha_{1}^{a}\alpha_{2}^{b} \right\rangle $.
Therefore, we have non-isomorphic skew braces
\begin{align}\label{E54}
&\left\langle \rho,\tau, \sigma\alpha_{3} \right\rangle, \left\langle\rho, \tau, \sigma\alpha_{2}\alpha_{3} \right\rangle \cong C_{p}^{3}; \\ & \left\langle \rho, \tau, \sigma\alpha_{1} \right\rangle, \left\langle \rho,\tau, \sigma\alpha_{2} \right\rangle,\nonumber
\left\langle \rho,\tau, \sigma\alpha_{3}^{c} \right\rangle, \left\langle\rho, \tau, \sigma\alpha_{2}\alpha_{3}^{c} \right\rangle \cong M_{1} \ \text{for} \ c=2,...,p-1,
\end{align}
and counting them we find that there are $ 2(p-1) $ $ M_{1} $-skew braces of $ M_{1} $ type and two $ C_{p}^{3} $-skew braces of $ M_{1} $ type.
\end{proof}
\begin{lemma}\label{L16H}
There are \[ (p^{3}-p^{2}-1)(p+1) \] Hopf-Galois structures of $ M_{1} $ type on Galois extensions of fields with Galois group $ G\cong M_{1} $ and $ \left\lvert\varTheta(G) \right\rvert=p $, and exactly \[(p^{3}-1)(p+1)p^{2}\] Hopf-Galois structures of $ M_{1} $ type on Galois extensions of fields with Galois group $ G\cong C_{p}^{3} $ and $ \left\lvert\varTheta(G) \right\rvert=p $.
\end{lemma}
\begin{proof}
To find the number of Hopf-Galois structures corresponding to the skew braces in (\ref{E54}) of Lemma \ref{L16},
\begin{align*}
&\left\langle \rho,\tau, \sigma\alpha_{3} \right\rangle, \left\langle\rho, \tau, \sigma\alpha_{2}\alpha_{3} \right\rangle \cong C_{p}^{3};\\ & \left\langle \rho, \tau, \sigma\alpha_{1} \right\rangle, \left\langle \rho,\tau, \sigma\alpha_{2} \right\rangle,\nonumber
\left\langle \rho,\tau, \sigma\alpha_{3}^{c} \right\rangle, \left\langle\rho, \tau, \sigma\alpha_{2}\alpha_{3}^{c} \right\rangle \cong M_{1} \ \text{for} \ c=2,...,p-1,
\end{align*}
we need to find the automorphism groups of these skew braces.
We let \[ \alpha=\gamma\beta \in \mathrm{Aut}(M_{1})\ \text{where}\ \gamma\stackrel{\mathrm{def}}{=} \alpha_{1}^{r_{1}}\alpha_{3}^{r_{3}}, \ \beta \stackrel{\mathrm{def}}{=} \begin{pmatrix} b_{1} & b_{2} \\ b_{3} & b_{4} \end{pmatrix},\]
and since we need $ \alpha\left( \left\langle \rho,\tau \right\rangle \right)= \left\langle \rho,\tau \right\rangle $, we must set $ b_{2}=0 $. Now, if $ \alpha \in \mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho,\tau, \sigma\alpha_{3}^{c} \right\rangle) $, since we have
\[\alpha\left(\sigma\alpha_{3}^{c}\right)^{b_{1}^{-1}}\alpha^{-1}\in\sigma\alpha_{1}^{-cb_{1}^{-1}b_{3}}\alpha_{3}^{c}\left\langle \rho, \tau\right\rangle,\]
we must have $ b_{3}=0 $, thus we find
\[\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho,\tau, \sigma\alpha_{3}^{c} \right\rangle)=\left\{ \alpha \in \mathrm{Aut}(M_{1}) \mid \alpha= \alpha_{1}^{r_{1}}\alpha_{3}^{r_{3}} \begin{psmallmatrix} b_{1} & 0 \\ 0 & b_{4} \end{psmallmatrix} \right\}.\]
If $ \alpha \in \mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho,\tau, \sigma\alpha_{2}\alpha_{3}^{c} \right\rangle) $, since we have
\[\alpha\left(\sigma\alpha_{2}\alpha_{3}^{c}\right)^{b_{1}^{-1}}\alpha^{-1}\in\sigma\alpha_{1}^{-cb_{1}^{-1}b_{3}+r_{3}b_{1}^{-2}b_{4}+\frac{1}{2}b_{1}^{-1}b_{4}\left(b_{1}^{-1}-1\right)\left(c+1\right)}\alpha_{2}^{b_{1}^{-2}b_{4}}\alpha_{3}^{c}\left\langle \rho, \tau\right\rangle,\]
we must have $ b_{4}=b_{1}^{2} $ and
\begin{align*}
&-cb_{1}^{-1}b_{3}+r_{3}b_{1}^{-2}b_{4}+\frac{1}{2}b_{1}^{-1}b_{4}\left(b_{1}^{-1}-1\right)\left(c+1\right)\\
&=-cb_{1}^{-1}b_{3}+r_{3}+\frac{1}{2}b_{1}\left(b_{1}^{-1}-1\right)\left(c+1\right)=0,
\end{align*}
so we find
\[\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho,\tau, \sigma\alpha_{2}\alpha_{3}^{c} \right\rangle)=\left\{ \alpha \in \mathrm{Aut}(M_{1}) \mid \alpha= \alpha_{1}^{r_{1}}\alpha_{3}^{cb_{1}^{-1}b_{3}+\frac{1}{2}\left(b_{1}-1\right)\left(c+1\right)} \begin{psmallmatrix} b_{1} & 0 \\ b_{3} & b_{1}^{2} \end{psmallmatrix} \right\}.\]
If $ \alpha \in \mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \tau, \sigma\alpha_{1} \right\rangle) $, since we have
\[\alpha\left(\sigma\alpha_{1}\right)^{b_{1}^{-1}}\alpha^{-1}\in\sigma\alpha_{1}^{b_{1}^{-1}b_{4}}\left\langle \rho, \tau\right\rangle,\]
we must have $ b_{1}=b_{4} $, and we find
\[\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \tau, \sigma\alpha_{1} \right\rangle)=\left\{ \alpha \in \mathrm{Aut}(M_{1}) \mid \alpha= \alpha_{1}^{r_{1}}\alpha_{3}^{r_{3}} \begin{psmallmatrix} b_{1} & 0 \\ b_{3} & b_{1} \end{psmallmatrix} \right\}.\]
Finally, if $ \alpha \in \mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho,\tau, \sigma\alpha_{2} \right\rangle) $, since we have
\[\alpha\left(\sigma\alpha_{2}\right)^{b_{1}^{-1}}\alpha^{-1}\in\sigma\alpha_{1}^{r_{3}b_{1}^{-2}b_{4}+\frac{1}{2}b_{1}^{-1}b_{4}\left(b_{1}^{-1}-1\right)}\alpha_{2}^{b_{1}^{-2}b_{4}}\left\langle \rho, \tau\right\rangle,\]
we must have $ b_{4}=b_{1}^{2} $ and $ r_{3}=\frac{1}{2}(b_{1}-1) $, so we find
\[\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho,\tau, \sigma\alpha_{2} \right\rangle)=\left\{ \alpha \in \mathrm{Aut}(M_{1}) \mid \alpha= \alpha_{1}^{r_{1}}\alpha_{3}^{\frac{1}{2}\left(b_{1}-1\right)} \begin{psmallmatrix} b_{1} & 0 \\ b_{3} & b_{1}^{2} \end{psmallmatrix} \right\}.\]
Therefore, we have
\begin{align*}
&e(M_{1},M_{1},p)= \sum_{(M_{1})_{M_{1}}(p)}\dfrac{\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{\left\lvert\mathrm{Aut}_{\mathcal{B}r}((M_{1})_{M_{1}})\right\rvert}=\\
&\dfrac{\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{\left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \tau, \sigma\alpha_{1} \right\rangle)\right\rvert}+\dfrac{\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{\left\lvert \mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho,\tau, \sigma\alpha_{2} \right\rangle)\right\rvert }+\sum_{c=2}^{p-1}\left(\dfrac{\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{\left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho,\tau, \sigma\alpha_{3}^{c} \right\rangle)\right\rvert}+\dfrac{\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{ \left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho,\tau, \sigma\alpha_{2}\alpha_{3}^{c} \right\rangle)\right\rvert}\right)\\
&=(p^{2}-1)(p-1)p^{3}\left(\dfrac{1}{(p-1)p^{3}}+\dfrac{1}{(p-1)p^{2}}+\sum_{c=2}^{p-1}\left(\dfrac{1}{(p-1)^{2}p^{2}}+\dfrac{1}{(p-1)p^{2}}\right)\right)\\
&=(p^{3}-p^{2}-1)(p+1),
\end{align*}
and similarly
\begin{align*}
&e(C_{p}^{3},M_{1},p)= \sum_{(C_{p}^{3})_{M_{1}}(p)}\dfrac{\left\lvert\mathrm{Aut}(C_{p}^{3})\right\rvert}{\left\lvert\mathrm{Aut}_{\mathcal{B}r}((C_{p}^{3})_{M_{1}})\right\rvert}=\\
&\dfrac{\left\lvert\mathrm{Aut}(C_{p}^{3})\right\rvert}{\left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho,\tau, \sigma\alpha_{3} \right\rangle)\right\rvert}+\dfrac{\left\lvert\mathrm{Aut}(C_{p}^{3})\right\rvert}{ \left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho,\tau, \sigma\alpha_{2}\alpha_{3} \right\rangle)\right\rvert}\\
&=(p^{3}-1)(p^{3}-p)(p^{3}-p^{2})\left(\dfrac{1}{(p-1)^{2}p^{2}}+\dfrac{1}{(p-1)p^{2}}\right)=(p^{3}-1)(p+1)p^{2}.
\end{align*}
\end{proof}
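For example, at $ p=5 $ these formulas give $ (p^{3}-p^{2}-1)(p+1)=594 $ and $ (p^{3}-1)(p+1)p^{2}=18600 $ Hopf-Galois structures, respectively.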
\begin{lemma}\label{L17}
For $ \left\lvert\varTheta(G) \right\rvert=p^{2} $ there are exactly
$ (2p-3)p $ $ M_{1} $-skew braces of $ M_{1} $ type and $ 2p-1 $ $ C_{p}^{3} $-skew braces of $ M_{1} $ type.
\end{lemma}
\begin{proof}
If $ G \subseteq \mathrm{Hol}(M_{1}) $ with $ \left\lvert\varTheta(G) \right\rvert=p^{2} $ is a regular subgroup, then we can assume, without loss of generality, that $ \varTheta(G) \subseteq \left\langle\alpha_{1},\alpha_{2},\alpha_{3}\right\rangle $ is a subgroup of order $ p^{2} $. Moreover, $ G\cap M_{1} $ is a subgroup of order $ p $. Therefore, $ \varTheta(G) $ is one of \[\left\langle\alpha_{1},\alpha_{3}\right\rangle,\left\langle\alpha_{1},\alpha_{2}\alpha_{3}^{a}\right\rangle \ \text{for} \ a=0,...,p-1, \]
and $ G\cap M_{1} $ is of the form \[ \left\langle\rho^{b}\sigma^{c}\tau^{d}\right\rangle \ \text{for} \ b,c,d=0,...,p-1 \ \text{with} \ \left(b,c,d\right)\neq\left(0,0,0\right), \]
each occurring $ p-1 $ times. We shall consider all subgroups of order $ p $ in $ M_{1} $ and all ways of pairing them with a subgroup of order $ p^{2} $ of $ \left\langle\alpha_{1},\alpha_{2},\alpha_{3}\right\rangle $.
Let us consider a subgroup of the form
\[G=\left\langle u ,v\alpha_{1}, w\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\right\rangle \ \text{for} \ \left(a_{2},a_{3}\right)\neq \left(0,0\right), \ u,v,w\neq 1. \]
Suppose $ u=\rho^{u_{1}}\sigma^{u_{2}}\tau^{u_{3}} $, $ v=\rho^{v_{1}}\sigma^{v_{2}}\tau^{v_{3}} $, and $ w=\rho^{w_{1}}\sigma^{w_{2}}\tau^{w_{3}} $. Then, we need the following.
\begin{align}\label{E17}
\left(v\alpha_{1}\right)u\left(v\alpha_{1}\right)^{-1}=v\left(\alpha_{1}\cdot u \right)v^{-1}u^{-1}=\rho^{u_{2}+u_{2}v_{3}-u_{3}v_{2}} \in \left\langle u\right\rangle,
\end{align}
\begin{align}\label{E18}
\left(w\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\right)u\left(w\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\right)^{-1}&=w\left(\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\cdot u \right)w^{-1}u^{-1}= \nonumber \\
&\rho^{\frac{1}{2}a_{2}u_{2}\left(u_{2}-1\right)+a_{3}u_{3}+u_{2}w_{3}-u_{3}w_{2}-a_{2}u_{2}w_{2}-a_{2}u_{2}^{2}}\tau^{a_{2}u_{2}} \in \left\langle u\right\rangle,
\end{align}
\begin{align}\label{E19}
&\left(v\alpha_{1}\right)\left(w \alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\right)\left(\left(w \alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\right)\left(v\alpha_{1}\right)\right)^{-1}=\nonumber\\
&\left(\rho^{w_{2}}vw\alpha_{1}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\right)\left(\rho^{\frac{1}{2}a_{2}v_{2}\left(v_{2}-1\right)+a_{3}v_{1}-a_{2}v_{2}^{2}+v_{2}w_{1}-v_{1}w_{2}}\tau^{a_{2}v_{2}}vw\alpha_{1}\alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\right)^{-1}\nonumber\\
&=\rho^{w_{2}-\frac{1}{2}a_{2}v_{2}\left(v_{2}-1\right)-a_{3}v_{1}+a_{2}v_{2}^{2}-v_{2}w_{1}+v_{1}w_{2}}\tau^{-a_{2}v_{2}}\in \left\langle u\right\rangle.
\end{align}
Now assume $ u_{3}=1 $. Then, multiplying $ v\alpha_{1} $ and $ w \alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}} $ by suitable powers of $ u $ if necessary, we can further assume $ v_{3}=w_{3}=0 $. Now (\ref{E17}) implies that $ u_{2}=v_{2} $ and (\ref{E18}) implies that we need \[\rho^{\frac{1}{2}a_{2}u_{2}\left(u_{2}-1\right)+a_{3}-w_{2}-a_{2}u_{2}w_{2}-a_{2}u_{2}^{2}}\tau^{a_{2}u_{2}} \in \left\langle \rho^{u_{1}}\sigma^{u_{2}}\tau\right\rangle,\]
so $ u_{2}=v_{2}=0 $ and $ a_{3}=w_{2} $. In such case (\ref{E19}) implies that we need
\[\rho^{w_{2}}\in \left\langle \rho^{u_{1}}\sigma^{u_{2}}\tau\right\rangle,\]
so $ w_{2}=0 $, which implies that $ G $ cannot be regular. Thus, we cannot have any pairing with subgroups of the form $ \left\langle\rho^{b}\sigma^{c}\tau\right\rangle $. Similarly, if $ u_{2}=1 $, then we can assume $ v_{2}=w_{2}=0 $. Now (\ref{E17}) gives $ v_{3}=-1 $, also (\ref{E18}) gives $ a_{2}=0 $, and (\ref{E19}) gives $ a_{3}=0 $, which is not possible since $ \left(a_{2},a_{3}\right)\neq\left(0,0\right) $. Thus, the only possibility for $ u $ is $ u=\rho $, and then (\ref{E19}) implies that we also need $ a_{2}v_{2}=0 $.
Therefore, we may only consider subgroups of the form
\[G=\left\langle \rho ,v\alpha_{1}, w \alpha_{2}^{a_{2}}\alpha_{3}^{a_{3}}\right\rangle \ \text{with} \ a_{2}v_{2}=v_{1}=w_{1}=0. \]
There are two main cases to consider.
\textbf{Case I:} Let us consider
\[G=\left\langle\rho,u\alpha_{1},v\alpha_{3} \right\rangle.\]
Then $ \left(u\alpha_{1}\right)\rho=\rho\left(u\alpha_{1}\right) $ and $ \left(v\alpha_{3}\right)\rho=\rho\left(v\alpha_{3}\right) $, also we have
\begin{align}\label{E58}
\left(u\alpha_{1}\right)\left(v\alpha_{3}\right)&=\rho^{v_{2}}uv\alpha_{1}\alpha_{3} \ \text{and} \nonumber \\
\left(v\alpha_{3}\right)\left(u\alpha_{1}\right)&=\rho^{u_{3}}vu\alpha_{1}\alpha_{3}=\rho^{u_{3}+u_{2}v_{3}-u_{3}v_{2}}uv\alpha_{1}\alpha_{3},
\end{align}
so $ G $ has order $ p^{3} $ and is abelian if and only if $ v_{2}\equiv u_{3}+u_{2}v_{3}-u_{3}v_{2} \ \mathrm{mod}\ p $; furthermore, for $ G $ to be regular we need $ u_{2}v_{3}-u_{3}v_{2}\not\equiv 0 \ \mathrm{mod}\ p $.
Therefore, for $ u_{2}v_{3}-u_{3}v_{2}\not\equiv 0 \ \mathrm{mod}\ p $ we have regular subgroups isomorphic to $ C_{p}^{3} $ of the form
\begin{align}\label{E55}
&\left\langle \rho, u\alpha_{1}, v\alpha_{3} \right\rangle \cong C_{p}^{3} \\
&\text{for} \ A=\begin{pmatrix} u_{2} & v_{2} \\ u_{3} & v_{3} \end{pmatrix}\in \mathrm{GL}_{2}(\mathbb{F}_{p}) \ \text{with} \ v_{2}=u_{3}+\mathrm{det}(A).\nonumber
\end{align}
For $ v_{2}- u_{3}-u_{2}v_{3}+u_{3}v_{2}\not\equiv0 \ \mathrm{mod}\ p $, we find regular subgroups isomorphic to $ M_{1} $ of the form
\begin{align}\label{E56}
&\left\langle \rho, u\alpha_{1}, v\alpha_{3} \right\rangle \cong M_{1} \\
&\text{for} \ A=\begin{pmatrix} u_{2} & v_{2} \\ u_{3} & v_{3} \end{pmatrix}\in \mathrm{GL}_{2}(\mathbb{F}_{p}) \ \text{with} \ v_{2}-u_{3}-\mathrm{det}(A)\not\equiv 0 \ \mathrm{mod}\ p\nonumber.
\end{align}
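For instance, taking $ u=\sigma $ and $ v=\sigma\tau $ gives $ v_{2}=1=u_{3}+\mathrm{det}(A) $, so $ \left\langle \rho, \sigma\alpha_{1}, \sigma\tau\alpha_{3} \right\rangle \cong C_{p}^{3} $, whereas taking $ u=\sigma $ and $ v=\tau $ gives $ v_{2}-u_{3}-\mathrm{det}(A)=-1\not\equiv 0 \ \mathrm{mod}\ p $, so $ \left\langle \rho, \sigma\alpha_{1}, \tau\alpha_{3} \right\rangle \cong M_{1} $.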
To find the non-isomorphic skew braces corresponding to the above regular subgroups, we let $ \beta_{0}\stackrel{\mathrm{def}}{=}\begin{psmallmatrix} u_{2} & v_{2} \\ u_{3} & v_{3} \end{psmallmatrix} $ and note that considering (\ref{E112}) and (\ref{E16}), it suffices to work with an automorphism corresponding to $ \beta \stackrel{\mathrm{def}}{=} \begin{psmallmatrix} b_{1} & b_{2} \\ b_{3} & b_{4} \end{psmallmatrix} \in \mathrm{GL}_{2}(\mathbb{F}_{p}) $ with $ b\stackrel{\mathrm{def}}{=} \mathrm{det}(\beta)^{-1} $, and we find
\begin{align*}
\beta\left(u\alpha_{1}\right)^{b_{1}b}\left(v\alpha_{3}\right)^{b_{2}b}\beta^{-1}&=\rho^{\kappa_{1}}\left(b\beta\beta_{0}\beta^{T}\right)\cdot \sigma \alpha_{1},\\
\beta\left(u\alpha_{1}\right)^{b_{3}b}\left(v\alpha_{3}\right)^{b_{4}b}\beta^{-1}&=\rho^{\kappa_{2}}\left(b\beta\beta_{0}\beta^{T}\right)\cdot \tau \alpha_{3}
\end{align*}
for some $ \kappa_{1},\kappa_{2} $, where superscript $ T $ denotes the transpose of a matrix.
Now if $ u_{2}\neq 0 $, then
\[u_{2}^{-1}\begin{pmatrix} 1 & 0 \\ - u_{3} & u_{2} \end{pmatrix}\begin{pmatrix} u_{2} & v_{2} \\ u_{3} & v_{3} \end{pmatrix}\begin{pmatrix} 1 & 0 \\ - u_{3} & u_{2} \end{pmatrix}^{T} =\begin{pmatrix} 1 & v_{2}-u_{3} \\ 0 & \mathrm{det}(\beta_{0}) \end{pmatrix}; \]
if $ v_{3}\neq 0 $, then
\[v_{3}^{-1}\begin{pmatrix} 0 & 1 \\ -v_{3} & v_{2} \end{pmatrix} \begin{pmatrix} u_{2} & v_{2} \\ u_{3} & v_{3} \end{pmatrix}\begin{pmatrix} 0 & 1 \\ -v_{3} & v_{2} \end{pmatrix}^{T} =\begin{pmatrix} 1 & v_{2}-u_{3} \\ 0 & \mathrm{det}(\beta_{0}) \end{pmatrix}; \]
if $ u_{2}=v_{3}=0 $ and $ u_{3}\neq -v_{2} $, then
\[\left(u_{3}+v_{2}\right)^{-1}\begin{pmatrix} 1 & 1 \\ -u_{3} & v_{2} \end{pmatrix} \begin{pmatrix} 0 & v_{2} \\ u_{3} & 0 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ -u_{3} & v_{2} \end{pmatrix}^{T} =\begin{pmatrix} 1 & v_{2}-u_{3} \\ 0 & \mathrm{det}(\beta_{0}) \end{pmatrix}, \]
and finally if $ u_{2}=v_{3}=0 $ and $ u_{3}= -v_{2} $, then
\[bI\beta_{0}I^{T}=\beta_{0}. \]
Thus every one of our regular subgroups above is conjugate to one of the form
\[\left\langle \rho, \sigma\alpha_{1}, \sigma^{t_{2}}\tau^{t_{3}}\alpha_{3} \right\rangle, \left\langle \rho, \tau^{-t_{4}}\alpha_{1}, \sigma^{t_{4}}\alpha_{3} \right\rangle \ \text{for some} \ t_{2},t_{3},t_{4}, \]
and these for different values of $ t_{2}, t_{3} $, and $ t_{4} $ are not conjugate to each other.
Therefore, we find non-isomorphic skew braces
\begin{align}\label{E57}
&\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{u_{2}}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{-2}\alpha_{1}, \sigma^{2}\alpha_{3} \right\rangle \cong C_{p}^{3}, \\
&\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{3}}\tau^{u_{4}}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{-u_{5}}\alpha_{1}, \sigma^{u_{5}}\alpha_{3} \right\rangle \cong M_{1} \nonumber\\
&\text{for} \ u_{3}=0,...,p-1,\ u_{2},u_{4},u_{5}=1,...,p-1 \ \text{with} \ u_{5}\neq 2,\ u_{3}-u_{4}\not\equiv 0 \ \mathrm{mod}\ p.\nonumber
\end{align}
\textbf{Case II:} Next, we consider subgroups of the form \[G=\left\langle\rho,x\alpha_{1},y\alpha_{2}\alpha_{3}^{a} \right\rangle \ \text{with} \ x_{2}=0.\]
Note, we have
\begin{align}\label{E59}
\left(x\alpha_{1}\right)\left(y\alpha_{2}\alpha_{3}^{a}\right)&=\rho^{y_{2}}xy\alpha_{1}\alpha_{2}\alpha_{3}^{a} \ \text{and} \nonumber \\
\left(y\alpha_{2}\alpha_{3}^{a}\right)\left(x\alpha_{1}\right)&=\rho^{ax_{3}-x_{3}y_{2}}xy\alpha_{1}\alpha_{2}\alpha_{3}^{a},
\end{align}
so $ G $ is abelian if and only if $ y_{2}\equiv ax_{3}-x_{3}y_{2}\ \mathrm{mod}\ p $; furthermore, we need $ x_{3},y_{2}\neq 0 $ for $ G $ to be regular.
Therefore, for $ y_{2}\equiv ax_{3}-x_{3}y_{2}\ \mathrm{mod}\ p $ we find regular subgroups isomorphic to $ C_{p}^{3} $ of the form
\begin{align}\label{E60}
&\left\langle\rho,\tau^{x_{3}}\alpha_{1},\sigma^{y_{2}}\tau^{y_{3}}\alpha_{2}\alpha_{3}^{\left(1+x_{3}\right)y_{2}x_{3}^{-1}} \right\rangle \cong C_{p}^{3}\\
&\text{for} \ y_{3}=0,...,p-1, \ y_{2},x_{3}=1,...,p-1, \nonumber
\end{align}
and for $ ax_{3}\not\equiv y_{2}+x_{3}y_{2} \ \mathrm{mod}\ p $, we find regular subgroups isomorphic to $ M_{1} $ of the form
\begin{align}\label{E61}
&\left\langle\rho,\tau^{x_{3}}\alpha_{1},y\alpha_{2}\alpha_{3}^{a} \right\rangle \cong M_{1} \\
&\text{for}\ a,y_{3}=0,...,p-1,\ x_{3},y_{2}=1,...,p-1 \ \text{with} \ ax_{3}- y_{2}-x_{3}y_{2}\not\equiv 0 \ \mathrm{mod}\ p.\nonumber
\end{align}
To find the non-isomorphic skew braces corresponding to the above regular subgroups, it suffices to work with automorphisms corresponding to elements of the form $ \beta \stackrel{\mathrm{def}}{=} \begin{psmallmatrix} b_{1} & 0 \\ b_{3} & b_{4} \end{psmallmatrix} \in \mathrm{GL}_{2}(\mathbb{F}_{p}) $. Then, using (\ref{E112}) and (\ref{E16}), we have
\begin{align*}
&\left(\alpha_{3}^{r_{3}}\beta\right)\left(\tau^{x_{3}}\alpha_{1}\right)^{b_{4}^{-1}}\left(\alpha_{3}^{r_{3}}\beta\right)^{-1}=\rho^{\kappa_{1}}\tau^{x_{3}}\alpha_{1} \ \text{and} \\
&\left(\alpha_{3}^{r_{3}}\beta\right)\left(\tau^{x_{3}}\alpha_{1}\right)^{ab_{1}b_{3}b_{4}^{-2}-r_{3}b_{4}^{-1}-\frac{1}{2}b_{4}^{-1}\left(1-b_{1}\right)-\frac{1}{2}ab_{1}b_{4}^{-1}\left(b_{1}b_{4}^{-1}-1\right)}\left(y\alpha_{2}\alpha_{3}^{a}\right)^{b_{1}b_{4}^{-1}}\left(\alpha_{3}^{r_{3}}\beta\right)^{-1}\\
&=\rho^{\kappa_{2}}\sigma^{y_{2}b_{1}^{2}b_{4}^{-1}}\tau^{\left(ab_{1}b_{3}b_{4}^{-2}-r_{3}b_{4}^{-1}-\frac{1}{2}b_{4}^{-1}\left(1-b_{1}\right)-\frac{1}{2}ab_{1}b_{4}^{-1}\left(b_{1}b_{4}^{-1}-1\right)\right)x_{3}+b_{1}y_{3}+\frac{1}{2}b_{1}\left(b_{1}b_{4}^{-1}-1\right)y_{2}}\alpha_{2}\alpha_{3}^{ab_{1}^{2}b_{4}^{-1}},
\end{align*}
for some $ \kappa_{1},\kappa_{2} $, and $ r_{3} $. Now conjugating the subgroup $ \left\langle\rho,\tau^{x_{3}}\alpha_{1},y\alpha_{2}\alpha_{3}^{a} \right\rangle $ with the automorphism corresponding to $ \alpha_{3}^{\frac{1}{2}(y_{2}^{-1}-1)-y_{2}x_{3}^{-1}}\begin{psmallmatrix} y_{2}^{-1} & 0 \\ 0 & y_{2}^{-1} \end{psmallmatrix} $ we get $ \left\langle\rho,\tau^{x_{3}}\alpha_{1},\sigma\alpha_{2}\alpha_{3}^{ay_{2}^{-1}} \right\rangle $, and these subgroups for different values of $ a $ and $ x_{3} $ and $ y_{2} $ are not conjugate to each other.
Therefore, we find non-isomorphic skew braces
\begin{align}\label{E62}
&\left\langle\rho,\tau^{x_{3}}\alpha_{1},\sigma\alpha_{2}\alpha_{3}^{(1+x_{3})x_{3}^{-1}} \right\rangle\cong C_{p}^{3},\ \left\langle\rho,\tau^{x_{3}}\alpha_{1},\sigma\alpha_{2}\alpha_{3}^{a} \right\rangle\cong M_{1} \\
& \text{for} \ a=0,...,p-1, \ x_{3}=1,...,p-1 \ \text{with} \ a- (1+x_{3})x_{3}^{-1}\not\equiv 0 \ \mathrm{mod}\ p. \nonumber
\end{align}
Thus, the corresponding non-isomorphic skew braces, combining (\ref{E57}) and (\ref{E62}), are \[\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{3}}\tau^{u_{4}}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{-u_{5}}\alpha_{1}, \sigma^{u_{5}}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{x_{3}}\alpha_{1}, \sigma\alpha_{2}\alpha_{3}^{a} \right\rangle \cong M_{1}, \]
\[\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{u_{2}}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{-2}\alpha_{1}, \sigma^{2}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{x_{3}}\alpha_{1}, \sigma\alpha_{2}\alpha_{3}^{\left(1+x_{3}\right)x_{3}^{-1}} \right\rangle \cong C_{p}^{3} \ \text{for} \]
\[a,u_{3}=0,...,p-1, \ u_{2},u_{4},u_{5},x_{3}=1,...,p-1 \]
\[\text{with} \ u_{5}\neq2, \ u_{3}-u_{4},\ ax_{3}-\left(1+x_{3}\right)\not\equiv 0 \ \mathrm{mod}\ p. \]
Therefore, there are
\[(p-1)p-(p-1)+(p-2)+(p-1)p-(p-1)=(2p-3)p\]
$ M_{1} $-skew braces of $ M_{1} $ type and \[(p-1)+1+(p-1)=2p-1\] $ C_{p}^{3} $-skew braces of $ M_{1} $ type.
\end{proof}
\begin{lemma}\label{L17H}
There are \[ (p^{4}-p^{3}-2p^{2}+2p+1)p \] Hopf-Galois structures of $ M_{1} $ type on Galois extensions of fields with Galois group $ G\cong M_{1} $ and $ \left\lvert\varTheta(G) \right\rvert=p^{2} $, and exactly \[(p^{3}-1)(p^{2}-2)p^{2}\] Hopf-Galois structures of $ M_{1} $ type on Galois extensions of fields with Galois group $ G\cong C_{p}^{3} $ and $ \left\lvert\varTheta(G) \right\rvert=p^{2} $.
\end{lemma}
\begin{proof}
To find the number of Hopf-Galois structures corresponding to the skew braces of Lemma \ref{L17}, we need to find the automorphism groups of the skew braces \[\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{3}}\tau^{u_{4}}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{-u_{5}}\alpha_{1}, \sigma^{u_{5}}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{x_{3}}\alpha_{1}, \sigma\alpha_{2}\alpha_{3}^{a} \right\rangle \cong M_{1}, \]
\[\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{u_{2}}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{-2}\alpha_{1}, \sigma^{2}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{x_{3}}\alpha_{1}, \sigma\alpha_{2}\alpha_{3}^{\left(1+x_{3}\right)x_{3}^{-1}} \right\rangle \cong C_{p}^{3} \ \text{for} \]
\[a,u_{3}=0,...,p-1, \ u_{2},u_{4},u_{5},x_{3}=1,...,p-1 \]
\[\text{with} \ u_{5}\neq2, \ u_{3}-u_{4},\ ax_{3}-\left(1+x_{3}\right)\not\equiv 0 \ \mathrm{mod}\ p. \] We let \[ \alpha=\gamma\beta \in \mathrm{Aut}(M_{1})\ \text{where}\ \gamma\stackrel{\mathrm{def}}{=} \alpha_{1}^{r_{1}}\alpha_{3}^{r_{3}}, \ \beta \stackrel{\mathrm{def}}{=} \begin{pmatrix} b_{1} & b_{2} \\ b_{3} & b_{4} \end{pmatrix},\]
and set $ b \stackrel{\mathrm{def}}{=} \mathrm{det}(\beta)^{-1} $.
\textbf{For skew braces of Case I of Lemma \ref{L17}}: If $ \alpha \in \mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{u_{3}}\alpha_{3} \right\rangle) $, since we have
\begin{align*}
\alpha\left(\sigma\alpha_{1}\right)^{b_{1}b}\left(\sigma^{u_{2}}\tau^{u_{3}}\alpha_{3}\right)^{b_{2}b}\alpha^{-1}&=\rho^{\kappa_{1}}\left(b\beta\begin{psmallmatrix} 1 & u_{2} \\ 0 & u_{3} \end{psmallmatrix}\beta^{T}\right)\cdot \sigma \alpha_{1},\\
\alpha\left(\sigma\alpha_{1}\right)^{b_{3}b}\left(\sigma^{u_{2}}\tau^{u_{3}}\alpha_{3}\right)^{b_{4}b}\alpha^{-1}&=\rho^{\kappa_{2}}\left(b\beta\begin{psmallmatrix} 1 & u_{2} \\ 0 & u_{3} \end{psmallmatrix}\beta^{T}\right)\cdot \tau \alpha_{3},
\end{align*}
we must have \[ b\beta\begin{pmatrix} 1 & u_{2} \\ 0 & u_{3} \end{pmatrix}\beta^{T}= b\begin{pmatrix} b_{1}^{2}+b_{2}(b_{1}u_{2}+b_{2}u_{3}) & b_{1}(b_{3}+b_{4}u_{2})+b_{2}b_{4}u_{3} \\ b_{1}b_{3}+b_{2}(b_{3}u_{2}+b_{4}u_{3}) & b_{3}^{2}+b_{4}(b_{3}u_{2}+b_{4}u_{3}) \end{pmatrix} =\begin{pmatrix} 1 & u_{2} \\ 0 & u_{3} \end{pmatrix} .\]
Thus we need
\begin{align*}
b_{1}^{2}+b_{2}(b_{1}u_{2}+b_{2}u_{3})&=b_{1}b_{4}-b_{2}b_{3}\\
b_{1}b_{3}+b_{2}(b_{3}u_{2}+b_{4}u_{3})&=0\\
b_{3}^{2}+b_{4}(b_{3}u_{2}+b_{4}u_{3})&=(b_{1}b_{4}-b_{2}b_{3})u_{3}.
\end{align*}
The second and third equations give
\begin{align*}
b_{1}b_{3}b_{4}+b_{2}b_{4}(b_{3}u_{2}+b_{4}u_{3})&=0\\
b_{2}b_{3}^{2}+b_{2}b_{4}(b_{3}u_{2}+b_{4}u_{3})&=b_{2}(b_{1}b_{4}-b_{2}b_{3})u_{3},
\end{align*}
so we must have \[-b_{1}b_{3}b_{4}+b_{2}b_{3}^{2}=b_{2}(b_{1}b_{4}-b_{2}b_{3})u_{3}, \]
which implies that we must set $ b_{3}=-b_{2}u_{3} $ and $ b_{4}=b_{1}+b_{2}u_{2} $; these choices satisfy all three equations. Thus we must have
\[\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{u_{3}}\alpha_{3} \right\rangle)=\left\{ \alpha \in \mathrm{Aut}(M_{1}) \mid \alpha= \alpha_{1}^{r_{1}}\alpha_{3}^{r_{3}} \begin{psmallmatrix} b_{1} & b_{2} \\ -b_{2}u_{3} & b_{1}+b_{2}u_{2} \end{psmallmatrix} \right\},\]
where we need $ b_{1}^{2}+b_{1}b_{2}u_{2}+b_{2}^{2}u_{3}\neq 0 $, i.e., \[ \left(b_{1}u_{2}+2b_{2}u_{3}\right)^{2}\neq b_{1}^{2}\left(u_{2}^{2}-4u_{3}\right). \]
We now need to consider three cases, according to whether $ u_{2}^{2}-4u_{3} $ is zero, a nonzero square, or a non-square modulo $ p $. We find
\begin{align*}
\left\lvert\mathrm{Aut}_{\mathcal{B}r}\left(\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{u_{2}^{2}/4}\alpha_{3} \right\rangle\right)\right\rvert&=(p-1)p^{3} \ \text{for} \ u_{2}\neq 0,\\
\left\lvert\mathrm{Aut}_{\mathcal{B}r}\left(\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{u_{3}}\alpha_{3} \right\rangle\right)\right\rvert&=(p-1)^{2}p^{2} \ \text{if} \ u_{3}\neq 0\ \text{and} \ u_{2}^{2}-4u_{3}\neq 0 \ \text{is a square}, \\
\left\lvert\mathrm{Aut}_{\mathcal{B}r}\left(\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{u_{3}}\alpha_{3} \right\rangle\right)\right\rvert&=(p^{2}-1)p^{2} \ \text{if} \ u_{3}\neq 0 \ \text{and}\ u_{2}^{2}-4u_{3}\neq 0 \ \text{is not a square}.
\end{align*}
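For instance, in the second case, writing $ u_{2}^{2}-4u_{3}=s^{2} $ with $ s\neq 0 $, the condition becomes $ b_{1}u_{2}+2b_{2}u_{3}\neq\pm b_{1}s $: each $ b_{1}\neq 0 $ excludes two values of $ b_{2} $, while $ b_{1}=0 $ excludes only $ b_{2}=0 $, giving $ (p-1)(p-2)+(p-1)=(p-1)^{2} $ admissible pairs $ (b_{1},b_{2}) $, which together with the $ p^{2} $ choices of $ (r_{1},r_{3}) $ yields the middle count.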
We also have \[\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \tau^{-v_{2}}\alpha_{1}, \sigma^{v_{2}}\alpha_{3} \right\rangle)=\left\{ \alpha \in \mathrm{Aut}(M_{1}) \mid \alpha= \alpha_{1}^{r_{1}}\alpha_{3}^{r_{3}} \begin{psmallmatrix} b_{1} & b_{2} \\ b_{3} & b_{4} \end{psmallmatrix} \right\}.\]
\textbf{For skew braces of Case II of Lemma \ref{L17}}: If $ \alpha \in \mathrm{Aut}_{\mathcal{B}r}(\left\langle\rho,\tau^{x_{3}}\alpha_{1},\sigma\alpha_{2}\alpha_{3}^{a} \right\rangle) $, we need to set $ b_{2}=0 $, now since we have
\begin{align*}
&\alpha\left(\tau^{x_{3}}\alpha_{1}\right)^{b_{4}^{-1}}\alpha^{-1}=\rho^{\kappa_{1}}\tau^{x_{3}}\alpha_{1} \ \text{and} \\
&\alpha\left(\tau^{x_{3}}\alpha_{1}\right)^{ab_{1}b_{3}b_{4}^{-2}-r_{3}b_{4}^{-1}-\frac{1}{2}b_{4}^{-1}\left(1-b_{1}\right)-\frac{1}{2}ab_{1}b_{4}^{-1}\left(b_{1}b_{4}^{-1}-1\right)}\left(y\alpha_{2}\alpha_{3}^{a}\right)^{b_{1}b_{4}^{-1}}\alpha^{-1}\\
&=\rho^{\kappa_{2}}\sigma^{b_{1}^{2}b_{4}^{-1}}\tau^{\left(ab_{1}b_{3}b_{4}^{-2}-r_{3}b_{4}^{-1}-\frac{1}{2}b_{4}^{-1}\left(1-b_{1}\right)-\frac{1}{2}ab_{1}b_{4}^{-1}\left(b_{1}b_{4}^{-1}-1\right)\right)x_{3}+\frac{1}{2}b_{1}\left(b_{1}b_{4}^{-1}-1\right)}\alpha_{2}\alpha_{3}^{ab_{1}^{2}b_{4}^{-1}},
\end{align*}
we must have $ b_{4}=b_{1}^{2} $ and
\[r_{3}=ab_{1}^{-1}b_{3}+\frac{1}{2}\left(b_{1}-1\right)\left(1+a\right)+\frac{1}{2}b_{1}^{2}x_{3}^{-1}\left(b_{1}+1\right);\]
thus we must have
\[\mathrm{Aut}_{\mathcal{B}r}(\left\langle\rho,\tau^{x_{3}}\alpha_{1},\sigma\alpha_{2}\alpha_{3}^{a} \right\rangle)=\left\{ \alpha \in \mathrm{Aut}(M_{1}) \mid \alpha= \alpha_{1}^{r_{1}}\alpha_{3}^{ab_{1}^{-1}b_{3}+\frac{1}{2}\left(b_{1}-1\right)\left(1+a\right)+\frac{1}{2}b_{1}^{2}x_{3}^{-1}\left(b_{1}+1\right)} \begin{psmallmatrix} b_{1} & 0 \\ b_{3} & b_{1}^{2} \end{psmallmatrix} \right\}.\]
Therefore, we have
\begin{align*}
&e(M_{1},M_{1},p^{2})= \sum_{(M_{1})_{M_{1}}(p^{2})}\dfrac{\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{\left\lvert\mathrm{Aut}_{\mathcal{B}r}((M_{1})_{M_{1}})\right\rvert}=\\
&\sum_{u_{2}\neq 0, 4}\dfrac{\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{\left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{\frac{u_{2}^{2}}{4}}\alpha_{3} \right\rangle)\right\rvert}+\sum_{\substack{u_{2}-u_{3}, u_{3}, u_{2}^{2}-4u_{3}\neq 0 \\u_{2}^{2}-4u_{3} \ \text{is a square}} }\dfrac{\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{ \left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{u_{3}}\alpha_{3} \right\rangle)\right\rvert}+\\
&\sum_{\substack{u_{2}-u_{3}, u_{3}, u_{2}^{2}-4u_{3}\neq 0 \\u_{2}^{2}-4u_{3} \ \text{is not a square}} }\dfrac{\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{ \left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{u_{3}}\alpha_{3} \right\rangle)\right\rvert}+\sum_{v_{2}\neq 0, 2}\dfrac{\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{\left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \tau^{-v_{2}}\alpha_{1}, \sigma^{v_{2}}\alpha_{3} \right\rangle)\right\rvert}\\
&+\sum_{\substack{x_{3}\neq 0, \ a\\ (1+x_{3})x_{3}^{-1}\neq a}}\dfrac{\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{ \left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle\rho,\tau^{x_{3}}\alpha_{1},\sigma\alpha_{2}\alpha_{3}^{a} \right\rangle)\right\rvert}\\
&=(p^{2}-1)(p^{2}-p)p^{2}\times\\
&\left(\dfrac{p-2}{(p-1)p^{3}}+\dfrac{\frac{p-1}{2}+\left(\frac{p-1}{2}-1\right)(p-2)}{(p-1)^{2}p^{2}}+\dfrac{\frac{p-1}{2}+\left(\frac{p-1}{2}\right)(p-2)}{(p^{2}-1)p^{2}}+ \dfrac{p-2}{(p^{2}-1)(p^{2}-p)p^{2}}+\dfrac{(p-1)^{2}}{(p-1)p^{2}}\right)\\
&=(p^{4}-p^{3}-2p^{2}+2p+1)p,
\end{align*}
and similarly
\begin{align*}
&e(C_{p}^{3},M_{1},p^{2})= \sum_{(C_{p}^{3})_{M_{1}}(p^{2})}\dfrac{\left\lvert\mathrm{Aut}(C_{p}^{3})\right\rvert}{\left\lvert\mathrm{Aut}_{\mathcal{B}r}((C_{p}^{3})_{M_{1}})\right\rvert}=\\
&\dfrac{\left\lvert\mathrm{Aut}(C_{p}^{3})\right\rvert}{\left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \sigma\alpha_{1}, \sigma^{4}\tau^{4}\alpha_{3} \right\rangle)\right\rvert}+\sum_{\substack{ u_{2}^{2}-4u_{2}\neq 0 \\ \text{is a square}} }\dfrac{\left\lvert\mathrm{Aut}(C_{p}^{3})\right\rvert}{ \left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{u_{2}}\alpha_{3} \right\rangle)\right\rvert}+\\
&\sum_{\substack{u_{2}^{2}-4u_{2}\neq 0 \\ \text{is not a square}} }\dfrac{\left\lvert\mathrm{Aut}(C_{p}^{3})\right\rvert}{ \left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{u_{2}}\alpha_{3} \right\rangle)\right\rvert}+\dfrac{\left\lvert\mathrm{Aut}(C_{p}^{3})\right\rvert}{\left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle \rho, \tau^{-2}\alpha_{1}, \sigma^{2}\alpha_{3} \right\rangle)\right\rvert}\\
&+\sum_{x_{3}\neq 0}\dfrac{\left\lvert\mathrm{Aut}(C_{p}^{3})\right\rvert}{ \left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle\rho,\tau^{x_{3}}\alpha_{1},\sigma\alpha_{2}\alpha_{3}^{(1+x_{3})x_{3}^{-1}} \right\rangle)\right\rvert}\end{align*}
\begin{align*}
&=(p^{3}-1)(p^{3}-p)(p^{3}-p^{2})\times\\
&\left(\dfrac{1}{(p-1)p^{3}}+\dfrac{\frac{p-1}{2}-1}{(p-1)^{2}p^{2}}+\dfrac{\frac{p-1}{2}}{(p^{2}-1)p^{2}}+ \dfrac{1}{(p^{2}-1)(p^{2}-p)p^{2}}+\dfrac{p-1}{(p-1)p^{2}}\right)\\
&=(p^{3}-1)(p^{2}-2)p^{2}.
\end{align*}
\end{proof}
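The two closed forms obtained at the end of the proof above can likewise be verified symbolically; the following SymPy sketch (again only an independent sanity check, not part of the proof) confirms both rational-function identities.
\begin{verbatim}
from sympy import symbols, cancel

p = symbols('p')

aut_M1 = (p**2 - 1)*(p**2 - p)*p**2
sum_M1 = ((p - 2)/((p - 1)*p**3)
          + ((p - 1)/2 + ((p - 1)/2 - 1)*(p - 2))/((p - 1)**2*p**2)
          + ((p - 1)/2 + (p - 1)/2*(p - 2))/((p**2 - 1)*p**2)
          + (p - 2)/((p**2 - 1)*(p**2 - p)*p**2)
          + (p - 1)**2/((p - 1)*p**2))
assert cancel(aut_M1*sum_M1 - (p**4 - p**3 - 2*p**2 + 2*p + 1)*p) == 0

aut_C = (p**3 - 1)*(p**3 - p)*(p**3 - p**2)
sum_C = (1/((p - 1)*p**3)
         + ((p - 1)/2 - 1)/((p - 1)**2*p**2)
         + ((p - 1)/2)/((p**2 - 1)*p**2)
         + 1/((p**2 - 1)*(p**2 - p)*p**2)
         + (p - 1)/((p - 1)*p**2))
assert cancel(aut_C*sum_C - (p**3 - 1)*(p**2 - 2)*p**2) == 0
\end{verbatim}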
\begin{lemma}\label{L18}
For $ \left\lvert\varTheta(G) \right\rvert=p^{3} $ there are exactly four $ M_{1} $-skew braces of $ M_{1} $ type and no other. Furthermore, there are only \[ (p^{2}-1)p^{3} \] Hopf-Galois structures of $ M_{1} $ type on Galois extensions of fields with Galois group $ G \cong M_{1} $ and $ \left\lvert\varTheta(G) \right\rvert=p^{3} $.
\end{lemma}
\begin{proof}
If $ G \subseteq \mathrm{Hol}(M_{1}) $ with $ \left\lvert\varTheta(G) \right\rvert=p^{3} $, then we can assume, without loss of generality, that $ \varTheta(G) = \left\langle\alpha_{1},\alpha_{2},\alpha_{3}\right\rangle $, and so \[G=\left\langle u\alpha_{1},v\alpha_{2},w\alpha_{3}\right\rangle \]
where $ u=\rho^{u_{1}}\sigma^{u_{2}}\tau^{u_{3}} $, $ v=\rho^{v_{1}}\sigma^{v_{2}}\tau^{v_{3}} $, $ w=\rho^{w_{1}}\sigma^{w_{2}}\tau^{w_{3}} $, and $ G $ is isomorphic to $ \varTheta(G)\cong M_{1} $. Now
\begin{align*}
\left(u\alpha_{1}\right)\left(v\alpha_{2}\right)&=\rho^{v_{2}}uv\alpha_{1}\alpha_{2}\ \text{and} \\
\left(v\alpha_{2}\right)\left(u\alpha_{1}\right)&=\rho^{\frac{1}{2}u_{2}\left(u_{2}-1\right)+v_{3}u_{2}-u_{3}v_{2}-u_{2}^{2}-u_{2}v_{2}}\tau^{u_{2}}uv\alpha_{1}\alpha_{2},
\end{align*}
so we need $ u_{2}=0 $ and $ v_{2}\equiv-u_{3}v_{2}\ \mathrm{mod} \ p $. We have
\begin{align*}
\left(u\alpha_{1}\right)\left(w\alpha_{3}\right)&=\rho^{w_{2}}uw\alpha_{1}\alpha_{3}\ \text{and} \\
\left(w\alpha_{3}\right)\left(u\alpha_{1}\right)&=\rho^{u_{3}+w_{3}u_{2}-u_{3}w_{2}}uw\alpha_{1}\alpha_{3},
\end{align*}
so, since $ u_{2}=0 $, we need $ w_{2}\equiv u_{3}-u_{3}w_{2} \ \mathrm{mod} \ p $. Finally, we have
\begin{align*}
\left(u\alpha_{1}\right)\left(v\alpha_{2}\right)\left(w\alpha_{3}\right)&=\left(\rho^{v_{2}}uv\alpha_{1}\alpha_{2}\right)\left(w\alpha_{3}\right)\\
&=\rho^{u_{1}-v_{2}(w_{2}-1)-\frac{1}{2}w_{2}(w_{2}-1)}\tau^{u_{3}+w_{2}}vw\alpha_{1}\alpha_{2}\alpha_{3} \ \text{and}\\
\left(w\alpha_{3}\right)\left(v\alpha_{2}\right)&=\rho^{v_{3}+w_{3}v_{2}-v_{3}w_{2}}vw\alpha_{3}\alpha_{2},
\end{align*}
so we need $ u_{3}+w_{2}\equiv 0 \ \mathrm{mod} \ p $ and
\[u_{1}-v_{2}(w_{2}-1)-\frac{1}{2}w_{2}(w_{2}-1) \equiv v_{3}+w_{3}v_{2}-v_{3}w_{2} \ \mathrm{mod} \ p.\]
Combining the above information, for $ G $ to be a group of order $ p^{3} $, we need, modulo $ p $,
\begin{align}\label{E63}
&u_{2}=0, \ v_{2}=-u_{3}v_{2}, \ w_{2}=u_{3}-u_{3}w_{2}, \ u_{3}=-w_{2}, \nonumber\\
&u_{1}-v_{2}(w_{2}-1)-\frac{1}{2}w_{2}(w_{2}-1) = v_{3}+w_{3}v_{2}-v_{3}w_{2}.
\end{align}
Now the equations $ w_{2}=u_{3}-u_{3}w_{2} $ and $ u_{3}=-w_{2} $ imply that \[ u_{3}=-w_{2}=0,-2. \] Given this, the equation $ v_{2}=-u_{3}v_{2} $ implies that $ v_{2}=0 $. Now the final equation in (\ref{E63}) reduces to
\[u_{1}-\frac{1}{2}w_{2}(w_{2}-1) = v_{3}-v_{3}w_{2}.\]
Thus, we can consider two cases for $ w_{2}=0 $ and $ w_{2}=2 $. If $ w_{2}=0 $, then $ u,v $ and $ w $ are of the form
\[u=\rho^{u_{1}}, \ v=\rho^{v_{1}}\tau^{u_{1}}, \ w=\rho^{w_{1}}\tau^{w_{3}}, \]
and in this case $ G $ cannot be regular. Therefore, we must set $ w_{2}=2 $, hence $ u,v $, and $ w $ are of the form
\[u=\rho^{u_{1}}\tau^{-2}, \ v=\rho^{v_{1}}\tau^{1-u_{1}}, \ w=\rho^{w_{1}}\sigma^{2}\tau^{w_{3}}. \]
Now for $ G $ to be regular we need
\[\left(u\alpha_{1}\right)^{\frac{1}{2}(1-u_{1})}\left(v\alpha_{2}\right)=\rho^{v_{1}+\frac{1}{2}u_{1}(1-u_{1})}\alpha_{1}^{\frac{1}{2}(1-u_{1})}\alpha_{2}\not\in \mathrm{Aut}(M_{1}),\]
so we need $ v_{1}+\frac{1}{2}u_{1}(1-u_{1})\not\equiv 0 \ \mathrm{mod} \ p $. Therefore, $ G $ is conjugate to
\[\left\langle \rho^{u_{1}}\tau^{-2}\alpha_{1},\rho^{v_{1}}\tau^{1-u_{1}}\alpha_{2},\rho^{w_{1}}\sigma^{2}\tau^{w_{3}}\alpha_{3}\right\rangle\cong M_{1} \]\[ \text{for} \ u_{1},v_{1},w_{1},w_{3}=0,...,p-1 \ \text{with} \ v_{1}+\frac{1}{2}u_{1}(1-u_{1})\not\equiv 0 \ \mathrm{mod} \ p,\]
and there are (taking into account the $ p+1 $ conjugates)
\[(p+1)(p-1)p^{3}\]
of these.
To find the non-isomorphic skew braces corresponding to the above regular subgroups, it suffices to conjugate by automorphisms of the form $ \alpha\stackrel{\mathrm{def}}{=}\beta\gamma \in \mathrm{Aut}(M_{1}) $, where $ \beta \stackrel{\mathrm{def}}{=} \begin{psmallmatrix} b_{1} & 0 \\ b_{3} & b_{4} \end{psmallmatrix} \in \mathrm{GL}_{2}(\mathbb{F}_{p}) $ and $ \gamma\stackrel{\mathrm{def}}{=} \alpha_{1}^{r_{1}}\alpha_{3}^{r_{3}} \in C_{p}^{2} $. Now using (\ref{E112}) and (\ref{E16}) we have
\begin{align*}
\alpha\left(u\alpha_{1}\right)^{b_{4}^{-1}}\alpha^{-1}&=\left(\alpha\cdot u^{b_{4}^{-1}}\right)\alpha_{1},\\
\alpha\left(v\alpha_{2}\right)^{b_{1}b_{4}^{-1}}\alpha^{-1}&=\left(\alpha\cdot v^{b_{1}b_{4}^{-1}}\right)\alpha_{1}^{r_{3}+\frac{1}{2}\left(1-b_{1}\right)}\alpha_{2},\\
\alpha\left(w\alpha_{3}\right)^{b_{1}^{-1}}\alpha^{-1}&=\left(\alpha\cdot\left(\rho^{\frac{1}{2}w_{3}b_{1}^{-1}\left(b_{1}^{-1}-1\right)}w^{b_{1}^{-1}}\right)\right)\alpha_{1}^{-b_{1}^{-1}b_{3}}\alpha_{3},
\end{align*}
so we have
\begin{align*}
&\alpha\left(u\alpha_{1}\right)^{b_{4}^{-1}}\alpha^{-1}=\left(\alpha\cdot u^{b_{4}^{-1}}\right)\alpha_{1},\\
&\alpha\left(u\alpha_{1}\right)^{-r_{3}b_{4}^{-1}-\frac{1}{2}b_{4}^{-1}(1-b_{1})}\left(v\alpha_{2}\right)^{b_{1}b_{4}^{-1}}\alpha^{-1}=\left(\alpha\cdot \left(u^{-r_{3}b_{4}^{-1}-\frac{1}{2}b_{4}^{-1}(1-b_{1})}v^{b_{1}b_{4}^{-1}}\right)\right)\alpha_{2},\\
&\alpha\left(u\alpha_{1}\right)^{b_{1}^{-1}b_{3}b_{4}^{-1}}\left(w\alpha_{3}\right)^{b_{1}^{-1}}\alpha^{-1}=\left(\left(\alpha\cdot u^{b_{1}^{-1}b_{3}b_{4}^{-1}}\right)\alpha\alpha_{1}^{b_{1}^{-1}b_{3}b_{4}^{-1}}\cdot\left(\rho^{\frac{1}{2}w_{3}b_{1}^{-1}\left(b_{1}^{-1}-1\right)}w^{b_{1}^{-1}}\right)\right)\alpha_{3}.
\end{align*}
Note that we have \[ \alpha=\begin{bsmallmatrix} b_{1}b_{4} & \frac{1}{2}b_{1}b_{3}+r_{1}b_{1}+r_{3}b_{3} & r_{3}b_{4} \\ 0 & b_{1} & 0 \\ 0 & b_{3} & b_{4} \end{bsmallmatrix}.\]
We let $ b_{5}\stackrel{\mathrm{def}}{=}\frac{1}{2}b_{1}b_{3}+r_{1}b_{1}+r_{3}b_{3} $. Now
\begin{align*}
&\alpha\cdot u^{b_{4}^{-1}}=\rho^{u_{1}b_{1}-2r_{3}}\tau^{-2},\\
\alpha\cdot &\left(u^{-r_{3}b_{4}^{-1}-\frac{1}{2}b_{4}^{-1}(1-b_{1})}v^{b_{1}b_{4}^{-1}}\right)=\rho^{r_{3}\left(2r_{3}+1\right)+v_{1}b_{1}^{2}+\frac{1}{2}u_{1}b_{1}(b_{1}-1)-2r_{3}u_{1}b_{1}}\tau^{1+2r_{3}-u_{1}b_{1}},\\
&\left(\alpha\cdot u^{b_{1}^{-1}b_{3}b_{4}^{-1}}\right)\left(\alpha\alpha_{1}^{b_{1}^{-1}b_{3}b_{4}^{-1}}\cdot\left(\rho^{\frac{1}{2}w_{3}b_{1}^{-1}\left(b_{1}^{-1}-1\right)}w^{b_{1}^{-1}}\right)\right)=\rho^{b_{3}u_{1}-2r_{3}b_{1}^{-1}b_{3}}\tau^{-2b_{1}^{-1}b_{3}}\\
&\rho^{\frac{3}{2}w_{3}b_{4}\left(b_{1}^{-1}-1\right)+b_{4}w_{1}+2b_{1}^{-1}b_{3}+2b_{1}^{-1}b_{5}+b_{3}(2b_{1}^{-1}-1)}\sigma^{2}\tau^{2b_{1}^{-1}b_{3}+w_{3}b_{1}^{-1}b_{4}}\\
&=\rho^{2r_{1}+\frac{3}{2}w_{3}b_{4}\left(b_{1}^{-1}-1\right)+b_{4}w_{1}+u_{1}b_{3}}\sigma^{2}\tau^{w_{3}b_{1}^{-1}b_{4}}.
\end{align*}
We let
\begin{align*}
r_{1}&=-\frac{3}{4}w_{3}b_{4}\left(b_{1}^{-1}-1\right)-\frac{1}{2}b_{4}w_{1}-\frac{1}{2}u_{1}b_{3},\\
r_{3}&=\frac{1}{2}u_{1}b_{1},
\end{align*}
which gives us
\begin{align*}
&\alpha\cdot u^{b_{4}^{-1}}=\tau^{-2},\\
\alpha\cdot &\left(u^{-r_{3}b_{4}^{-1}-\frac{1}{2}b_{4}^{-1}(1-b_{1})}v^{b_{1}b_{4}^{-1}}\right)=\rho^{\left(v_{1}+\frac{1}{2}u_{1}\left(1-u_{1}\right)\right)b_{1}^{2}}\tau,\\
&\left(\alpha\cdot u^{b_{1}^{-1}b_{3}b_{4}^{-1}}\right)\left(\alpha\alpha_{1}^{b_{1}^{-1}b_{3}b_{4}^{-1}}\cdot\left(\rho^{\frac{1}{2}w_{3}b_{1}^{-1}\left(b_{1}^{-1}-1\right)}w^{b_{1}^{-1}}\right)\right)=\sigma^{2}\tau^{w_{3}b_{1}^{-1}b_{4}}.
\end{align*}
Next, for a fixed $ \delta \in \mathbb{F}_{p}^{\times} $ which is not a square, we can write \[ \left(v_{1}+\frac{1}{2}u_{1}\left(1-u_{1}\right)\right)=s_{1}^{2}s \] where $ s_{1}\in \mathbb{F}_{p}^{\times} $ and $ s=1,\delta $. Letting $ b_{1}=\pm s_{1}^{-1} $ we get
\begin{align*}
&\alpha\cdot u^{b_{4}^{-1}}=\tau^{-2},\\
&\alpha\cdot \left(u^{-r_{3}b_{4}^{-1}-\frac{1}{2}b_{4}^{-1}(1-b_{1})}v^{b_{1}b_{4}^{-1}}\right)=\rho^{s}\tau,\\
&\left(\alpha\cdot u^{b_{1}^{-1}b_{3}b_{4}^{-1}}\right)\left(\alpha\alpha_{1}^{b_{1}^{-1}b_{3}b_{4}^{-1}}\cdot\left(\rho^{\frac{1}{2}w_{3}b_{1}^{-1}\left(b_{1}^{-1}-1\right)}w^{b_{1}^{-1}}\right)\right)=\sigma^{2}\tau^{\pm s_{1}w_{3}b_{4}}.
\end{align*}
Therefore, every such regular subgroup is conjugate to
\begin{align}\label{E3}
\left\langle\tau^{-2}\alpha_{1},\rho^{s}\tau\alpha_{2},\sigma^{2}\tau^{t_{3}}\alpha_{3}\right\rangle\cong M_{1} \ \text{for} \ t_{3}=0,1, \ s=1,\delta,
\end{align}
and these subgroups are not further conjugate to each other, so they give us four non-isomorphic skew braces.
To find the number of corresponding Hopf-Galois structures, we need to find the automorphism groups of above skew braces. We let \[ \alpha=\gamma\beta \in \mathrm{Aut}(M_{1})\ \text{where}\ \gamma\stackrel{\mathrm{def}}{=} \alpha_{1}^{r_{1}}\alpha_{3}^{r_{3}}, \ \beta \stackrel{\mathrm{def}}{=} \begin{pmatrix} b_{1} & b_{2} \\ b_{3} & b_{4} \end{pmatrix},\]
and set $ b_{2}=0 $. Suppose that $ \alpha \in \mathrm{Aut}_{\mathcal{B}r}(\left\langle\tau^{-2}\alpha_{1},\rho^{s}\tau\alpha_{2},\sigma^{2}\tau^{t_{3}}\alpha_{3}\right\rangle) $. By our notation above we have
\begin{align*}
&\alpha\cdot u^{b_{4}^{-1}}=\rho^{-2r_{3}}\tau^{-2},\\
\alpha\cdot &\left(u^{-r_{3}b_{4}^{-1}-\frac{1}{2}b_{4}^{-1}(1-b_{1})}v^{b_{1}b_{4}^{-1}}\right)=\rho^{r_{3}\left(2r_{3}+1\right)+sb_{1}^{2}}\tau^{1+2r_{3}},\\
&\left(\alpha\cdot u^{b_{1}^{-1}b_{3}b_{4}^{-1}}\right)\left(\alpha\alpha_{1}^{b_{1}^{-1}b_{3}b_{4}^{-1}}\cdot\left(\rho^{\frac{1}{2}t_{3}b_{1}^{-1}\left(b_{1}^{-1}-1\right)}w^{b_{1}^{-1}}\right)\right)=\rho^{2r_{1}+\frac{3}{2}t_{3}b_{4}\left(b_{1}^{-1}-1\right)}\sigma^{2}\tau^{t_{3}b_{1}^{-1}b_{4}}.
\end{align*}
Then we must have $ r_{3}=0 $, $ b_{1}^{2}=1 $, $ r_{1}=\frac{3}{4}t_{3}b_{4}\left(1-b_{1}^{-1}\right) $, and further $ b_{1}=b_{4} $ if $ t_{3}=1 $. Therefore, we have
\begin{align*}
\mathrm{Aut}_{\mathcal{B}r}(\left\langle\tau^{-2}\alpha_{1},\rho^{s}\tau\alpha_{2},\sigma^{2}\alpha_{3}\right\rangle)&=\left\{ \alpha \in \mathrm{Aut}(M_{1}) \mid \alpha= \begin{psmallmatrix} \pm 1 & 0 \\ b_{3} & b_{4} \end{psmallmatrix} \right\},\\
\mathrm{Aut}_{\mathcal{B}r}(\left\langle\tau^{-2}\alpha_{1},\rho^{s}\tau\alpha_{2},\tau\sigma^{2}\alpha_{3}\right\rangle)&=\left\{ \alpha \in \mathrm{Aut}(M_{1}) \mid \alpha= \alpha_{1}^{\frac{3}{4}\left(\pm 1-1\right)}\begin{psmallmatrix} \pm 1 & 0 \\ b_{3} & \pm 1 \end{psmallmatrix} \right\}.
\end{align*}
Now again we find
\begin{align*}
&e(M_{1},M_{1},p^{3})= \sum_{(M_{1})_{M_{1}}(p^{3})}\dfrac{\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{\left\lvert\mathrm{Aut}_{\mathcal{B}r}((M_{1})_{M_{1}}(p^{3}))\right\rvert}=\\
&\dfrac{2\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{\left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle\tau^{-2}\alpha_{1},\rho\tau\alpha_{2},\sigma^{2}\alpha_{3}\right\rangle)\right\rvert}+\dfrac{2\left\lvert\mathrm{Aut}(M_{1})\right\rvert}{ \left\lvert\mathrm{Aut}_{\mathcal{B}r}(\left\langle\tau^{-2}\alpha_{1},\rho\tau\alpha_{2},\tau\sigma^{2}\alpha_{3}\right\rangle)\right\rvert}\\
&=\dfrac{2(p^{2}-1)(p-1)p^{3}}{2(p-1)p}+\dfrac{2(p^{2}-1)(p-1)p^{3}}{2p}=(p^{2}-1)p^{3}.
\end{align*}
\end{proof}
\subsection{Socle and annihilator of skew braces of $ M_{1} $ type}
Finally, we note that from our classification of skew braces we are also able to determine their \textit{socle} and \textit{annihilator}. Let
$ B=\left(B,\oplus,\odot\right) $ be a skew brace. As before we let
\begin{align*}
m: \left(B,\odot\right) &\longrightarrow \mathrm{Hol}\left(B,\oplus\right)\\
a&\longmapsto \left(m_{a} : b \longmapsto a\odot b\right)
\end{align*}
and set
\begin{align*}
\varTheta : \mathrm{Hol}\left(B,\oplus\right)& \longrightarrow \mathrm{Aut}\left(B,\oplus\right)\\
\eta\alpha&\longmapsto \alpha.
\end{align*}
We shall write $ \lambda = \varTheta m $. Then $ \Ker \lambda = \Ima m \cap \left(B,\oplus\right) $ inside $ \mathrm{Hol}\left(B,\oplus\right) $.
First we note that \cite[cf.][p.~ 23]{MR3763907} an \textit{ideal} of a skew brace $ B=\left(B,\oplus,\odot\right) $ is defined to be a subset $ I \subseteq B $, such that $ I $ is a normal subgroup with respect to both operations $ \oplus $ and $ \odot $, and $ \lambda_{a}(I)\subseteq I $ for all $ a \in B $. The \textit{socle} of $ B $ is defined to be
\[\mathrm{Soc}(B)\stackrel{\mathrm{def}}{=}\{a\in B\mid a\oplus b=a\odot b, \ b\oplus(b\odot a)=(b\odot a)\oplus b\ \text{for all} \ b \in B \},\]
which is an ideal of $ B $, and one has $ \mathrm{Soc}(B)=\Ker \lambda \cap \mathrm{Z} \left(B,\oplus\right) $. Finally, \cite[cf.][Definition 7]{doi:10.1142/S0219498819500336}, the \textit{annihilator} of $ B $ is defined to be
\[\mathrm{Ann}(B)\stackrel{\mathrm{def}}{=}\mathrm{Soc}(B)\cap \mathrm{Z} \left(B,\odot\right)=\Ker \lambda \cap \mathrm{Z} \left(B,\oplus\right)\cap\mathrm{Z} \left(B,\odot\right),\]
which is also an ideal of $ B $.
Now we aim to explain what each of these terms, ideal, socle, and annihilator, corresponds to if we are given a regular subgroup $ H \subseteq \mathrm{Hol}\left(N\right) $ and we consider it as a skew brace. Recall first from Subsection \ref{SB2}, given a regular subgroup $ H \subseteq \mathrm{Hol}\left(N\right) $, it can be represented as
\[H=\left\langle \eta_{1},...,\eta_{r},v_{1}\alpha_{1},...,v_{s}\alpha_{s} \right\rangle,\]
for $ H_{1}\stackrel{\mathrm{def}}{=}\left\langle \eta_{1},...,\eta_{r}\right\rangle\subseteq N $ and $ H_{2}\stackrel{\mathrm{def}}{=}\left\langle \alpha_{1},...,\alpha_{s} \right\rangle \subseteq \mathrm{Aut}\left(N\right) $ and some $ v_{1},...,v_{s} \in N $. Note also that we have a bijection
\begin{align*}
\psi:H&\longrightarrow N \\
g&\longmapsto g_{1}\stackrel{\mathrm{def}}{=}g(1_{N}).
\end{align*}
To get a skew brace we can set $ \left(H,\odot\right)=H $ and define $ \oplus $ on $ H $ by
\[g\oplus h=\psi^{-1}\left(g_{1}h_{1}\right), \]
which makes $ \left(H,\oplus,\odot\right) $ into a skew brace with $ \left(H,\oplus\right)\stackrel{\psi}{\cong}N $. Note the map $ \psi $ now induces an isomorphism
\begin{align*}
\mathrm{Hol}\left(H,\oplus\right)&\longrightarrow \mathrm{Hol}\left(N\right) \\
g\beta&\longmapsto g_{1}\psi\beta\psi^{-1},
\end{align*}
which maps $ \Ker \lambda $ to $ H_{1} $, and $ \Ima \lambda $ to $ H_{2} $.
Now for a subset $ I \subseteq H $ to be an ideal of $ H $ considered as a skew brace, we need $ I \subseteq \left(H,\odot\right) $ to be a normal subgroup, $ \psi\left(I\right) \subseteq N $ to be a normal subgroup (so $ I \subseteq \psi^{-1}\left(N\right)=\left(H,\oplus\right) $ is a normal subgroup) and $ H_{2}\left(\psi\left(I\right)\right)\subseteq \psi\left(I\right) $. Furthermore, one has \[ \mathrm{Soc}(H)= \Ker \lambda \cap \mathrm{Z} \left(H,\oplus\right)=H_{1} \cap \psi^{-1}\left(Z\left(N\right)\right), \]
and
\[\mathrm{Ann}(H)=H_{1} \cap \psi^{-1}\left(Z\left(N\right)\right)\cap\mathrm{Z} \left(H\right).\]
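Note that for the trivial skew brace, where $ \oplus=\odot $, one has $ \lambda_{a}=\mathrm{id} $ for every $ a\in B $, hence $ \Ker \lambda=B $ and $ \mathrm{Soc}(B)=\mathrm{Ann}(B)=\mathrm{Z}\left(B,\oplus\right) $; for the trivial skew brace $ \left\langle\rho,\sigma,\tau\right\rangle $, which is excluded from the list below, this centre is $ \left\langle \rho \right\rangle $.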
Recall the skew braces of $ M_{1} $ type, apart from the trivial skew brace $ \langle\rho,\sigma,\tau\rangle $, as found in Lemmas \ref{L16}, \ref{L17}, \ref{L18} are as follows.
\begin{itemize}
\item For $ \left\lvert\Ker \lambda \right\rvert=p^{2} $ from Lemma \ref{L16}, (\ref{E54}) we have non-isomorphic skew braces
\begin{align*}
&\left\langle \rho,\tau, \sigma\alpha_{3} \right\rangle, \left\langle\rho, \tau, \sigma\alpha_{2}\alpha_{3} \right\rangle \cong C_{p}^{3} , \ \left\langle \rho, \tau, \sigma\alpha_{1} \right\rangle, \left\langle \rho,\tau, \sigma\alpha_{2} \right\rangle,\nonumber\\
&\left\langle \rho,\tau, \sigma\alpha_{3}^{c} \right\rangle, \left\langle\rho, \tau, \sigma\alpha_{2}\alpha_{3}^{c} \right\rangle \cong M_{1} \ \text{for} \ c=2,...,p-1,
\end{align*}
so in all these cases we have
\[\mathrm{Soc}(H)=\mathrm{Ann}(H)=\left\langle \rho \right\rangle.\]
\item For $ \left\lvert\Ker \lambda \right\rvert=p $ from Lemma \ref{L17}, (\ref{E57}) and (\ref{E62}), we have non-isomorphic skew braces \[\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{3}}\tau^{u_{4}}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{-u_{5}}\alpha_{1}, \sigma^{u_{5}}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{x_{3}}\alpha_{1}, \sigma\alpha_{2}\alpha_{3}^{a} \right\rangle \cong M_{1}, \]
\[\left\langle \rho, \sigma\alpha_{1}, \sigma^{u_{2}}\tau^{u_{2}}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{-2}\alpha_{1}, \sigma^{2}\alpha_{3} \right\rangle,\left\langle \rho, \tau^{x_{3}}\alpha_{1}, \sigma\alpha_{2}\alpha_{3}^{\left(1+x_{3}\right)x_{3}^{-1}} \right\rangle \cong C_{p}^{3} \ \text{for} \]
\[a,u_{3}=0,...,p-1, \ u_{2},u_{4},u_{5},x_{3}=1,...,p-1 \]
\[\text{with} \ u_{5}\neq2, \ u_{3}-u_{4},\ ax_{3}-\left(1+x_{3}\right)\not\equiv 0 \ \mathrm{mod}\ p, \]
so in all these cases we also have
\[\mathrm{Soc}(H)=\mathrm{Ann}(H)=\left\langle \rho \right\rangle.\]
\item For $ \left\lvert\Ker \lambda \right\rvert=1 $ from Lemma \ref{L18}, (\ref{E3}) we have non-isomorphic skew braces \[\left\langle\tau^{-2}\alpha_{1},\rho^{s}\tau\alpha_{2},\sigma^{2}\tau^{t_{3}}\alpha_{3}\right\rangle\cong M_{1} \ \text{for} \ t_{3}=0,1, \ s=1,\delta,\]
so in all these cases we have
\[\mathrm{Soc}(H)=\mathrm{Ann}(H)=1.\]
\end{itemize}
\begin{center}
\large\textbf{Acknowledgements}
\end{center} The author is ever indebted to Prof Nigel Byott and Prof Agata Smoktunowicz for their continued support and useful suggestions. The author is ever grateful for the referee's comments which led to numerous improvements to the manuscript.
This research was partially supported by the ERC Advanced grant 320974. The author obtained part of the results in this paper while studying for a PhD degree at the University of Exeter funded by an EPSRC Doctoral Training Grant.
\section{References}
\bibliographystyle{elsarticle-num}
\bibliography{mybibfile}
\end{document} | 196,440 |
Scrap Metal Recycling Drive to Support LPS
From Lincoln Recycles Day (Nov. 15, 2015) to Earth Day (April 22, 2016), bring scrap metal to Sadoff Iron and Metal Company or Alter Metal Recycling to have it recycled. All types of scrap metal including aluminum and tin cans, appliances, pots and pans, wiring, sinks, bathtubs, electric motors, plumbing materials, and many other items may be accepted. When you arrive, tell them that you would like to donate all or part of the revenue from your metal to LPS Recycling Program. At the end of the drive, the firms will match the amount that the public donates up to a specific dollar amount.
This is a great way to help Lincoln Public Schools expand their recycling operations and outreach for students, faculty, and staff. Funds raised for the LPS Recycling Program will be used toward replacing old, steel containers that have experienced over 15 years of recycling use. The old containers will be recycled as scrap metal and be replaced with new dumpsters manufactured from recycled steel - closing the recycling loop!
- Sadoff Iron and Metal Company
- 4400 W Webster St.
- 402-470-2510
- M—F 7:30 — 4:30
- Alter Metal Recycling
- 6100 N. 70th St.
- 402-476-3306
- M—F 8:00 — 5:00
Sat 8:00 — noon | 325,913 |
Is bankruptcy the best option after foreclosure? 7 Answers as of June 10, 2011
My parents' home was foreclosed and they also have a summary judgement against them. The summary judgement was placed because they were unaware, due to my father's illness, and the attorney did not tell them about the court dates. My parents have evidence of where they were during the court date. They were able to find a buyer to purchase the home for a short sale. The mortgage company has agreed to the short sale but will not remove the summary judgement from their credit report. Removal of the summary judgement would enable my parents to get back to starting a business they used to have. Now they will have to file Chapter 7 bankruptcy. I do not understand why, and are there any more options for them?
Burnham & Associates | Stephanie K. Burnham
The mortgage company is not going to release the Summary Judgment because they now have a Court Order demanding that your parents pay the deficiency on the mortgage. Your parents will have to either settle the matter (which will not happen if the mortgage company is unwilling) or file Bankruptcy to eliminate their liability on the Summary Judgment.
Answer Applies to: New Hampshire
Replied: 6/10/2011
Law Office of Maureen O' Malley | Maureen O'Malley
I don't know what state you're in, so I can't respond fully. If the bank approved a short sale, the judgment should have been satisfied. State law may have a mechanism for requiring that they mark it as such. The bank may be unwilling to mark the judgment as such if there is money still due after the short sale. If this is the case, bankruptcy will eliminate all debt and give your parents a fresh start.
Answer Applies to: Virginia
Replied: 6/9/2011
Janet A. Lawson Bankruptcy Attorney | Janet Lawson
I am not certain I fully understand your question. I am guessing that when you say "summary judgement" you mean an actual judgement. I have no idea what the amount of it is, but I suspect they may need to file bankruptcy to resolve it. They should consult with a lawyer because it would be important to know what their assets are in making the decision to file or not file or what kind of bankruptcy to file.
Answer Applies to: California
Replied: 6/9/2011
Bird & VanDyke, Inc. | David VanDyke
I'm not sure I completely understand your question, but unless you can make some other agreement with the creditors you promised to repay, sometimes bankruptcy is the only answer.
Answer Applies to: California
Replied: 6/8/2011
Daniel Hoarfrost, Attorney at Law | Daniel Hoarfrost
I can't tell all the necessary facts from your description of the problem. It's not clear what there was a summary judgment for. It may be possible to go to the trial court and file a motion to "satisfy" the judgment. It depends a little bit on how the judgment is written. Otherwise, a bankruptcy can remove the judgment.
Answer Applies to: Oregon
Replied: 6/8/2011
Bankruptcy Law office of Bill Rubendall | William M. Rubendall
Once the foreclosure was completed certain obligations occurred. There may have been a tax event due to the foreclosure and the summary judgment against them. Since they no longer own the property they cannot do a short sale. Bankruptcy may be an option to discharge the summary judgment. This may or may not be advisable. This serious matter requires the advice of a certified specialist in bankruptcy. Contact the State Bar website for a recommendation of an attorney in your area.
Answer Applies to: California
Replied: 6/8/2011
Ashman Law Office | Glen Edward Ashman
You sound confused. Summary judgments cannot be removed. That means they lost a court case. That judgment, and a possible deficiency on the home, may be excellent reasons for a person to file bankruptcy, depending on their present income, expenses, debts and assets. A bankruptcy may prevent adverse consequences from the judgment and they need to see a lawyer.
Answer Applies to: Georgia
Replied: 6/8/2011
| 234,768 |
By Preston Fleming
The second book in the Beirut Trilogy, BRIDE OF A BYGONE WAR is set in the spring of 1981, following the American elections, when Lebanon hopes for fresh political winds that will end its seven-year civil war. Enter Walter Lukash, a midlevel CIA officer assigned as intelligence liaison to the Phalange militia. Lukash soon becomes a pawn in a Levantine game intended to draw the U.S. into conflict with Lebanon's Syrian occupiers. Unfortunately, Lukash is too distracted by problems arising from having abandoned his Lebanese bride five years earlier to see the trap until it springs.
SYNOPSIS:
Beirut, 1981. Walter Lukash, a journeyman CIA case officer, has been posted in the Middle East for eight straight years and is ready for a quiet desk job back in Washington. When he is ordered to Beirut instead for a two-month secret liaison assignment with the Phalange militia's intelligence unit, his superiors think they understand his reluctance to accept.
What they don't know is that, five years prior, Lukash secretly married a Lebanese woman against agency rules and deserted her shortly after the outbreak of war. More than that, his new Irish live-in girlfriend, whom the agency considers a security risk, has followed him to Beirut from Amman, and Lukash has defied orders to break off the relationship.
When the two-month assignment is extended to two years, Lukash realizes he can no longer avoid painful realities and choices. But before he can straighten things out, he is caught in a deadly three-way intrigue among the Phalange, the U.S. government and Lebanon's ruthless Syrian occupiers that threatens to unleash the full force of Syrian-backed terrorism against Americans in Beirut.
BRIDE OF A BYGONE WAR captures the unique atmosphere of civil-war Beirut with a lively and intelligent style that draws the reader into deep identification with the characters and the action.
Read or Download Bride of a Bygone War (Beirut Trilogy, Book 2) PDF
Similar thriller books
Is there no clarification of the secret of The Haunted resort? phobia that fills the Palace resort.
Set in the same fictional London as his CWA Gold Dagger Award-winning Slough House series, Mick Herron now introduces Tom Bettany, an ex-spook with a violent past and just one thing to live for: avenging his son's death.
Tom Bettany is operating at a meat processing plant in France while the fact that he reduce all ties years in the past, Bettany returns domestic to London to determine the reality approximately his son's dying. possibly it's the guilt he feels approximately wasting contact together with his son that's gnawing at him, or even he's truly positioned his finger on a labyrinthine plot, yet both method he'll unravel the tragedy, regardless of whose feathers he has to ruffle. yet quite a lot of individuals are to listen to Bettany is again on the town, from incarcerated mob bosses to these within the maximum echelons of MI5. He may have idea he'd left all of it in the back of while he first skipped city, yet no one particularly simply walks away.
One man. One hour. A million people to save...
Over the distant important Pacific, an airliner is rocked through a huge resort, or his daughter, who's having fun with the light at Waikiki beach.
But when all contact with neighbouring Christmas Island is lost, Kai is the first to realize that Hawaii faces an epic disaster: in a single hour, a chain of huge waves will wipe out Honolulu. He has just sixty minutes to save the lives of a million people, including his wife and daughter...
Addictive and fast-moving, The Tsunami Countdown pitches an ordinary man against the odds in an electrifying and action-packed thriller. You won't be able to put it down.
They said the dead can't hurt you . . . They were wrong.
The condominium on chilly Hill is a chilling and suspenseful ghost tale from the multi-million reproduction - an incredible, dilapidated Georgian mansion - Ollie is stuffed with pleasure. regardless of the monetary pressure of the movement, he has dreamed of dwelling within the kingdom on the grounds that he was once a toddler, and he sees chilly Hill condo as a paradise for his animal-loving daughter, the fitting base for his web-design company and an awesome lady, status at the back of her because the ladies speak on FaceTime. Then there are extra sightings, in addition to more and more tense occurrences in the home. because the haunting turns into extra malevolent and the home itself starts off to show at the Harcourts, the terrified relatives realize chilly Hill House's darkish heritage, and the terrible fact of what it may possibly suggest for them . . .
- Eline Vere
- Dragon Wytch (Otherworld/Sisters of the Moon, Book 4)
- What Comes Next
- Robert Ludlum's The Utopia Experiment (Covert-One, Book 10)
- Such Men Are Dangerous
Extra resources for Bride of a Bygone War (Beirut Trilogy, Book 2)
Sample. | 404,434 |
MotoGP 20 Switch NSP Free Download Romslab
MotoGP 20 Switch NSP Free Download (ROMSLAB.COM Best Switch Games).
Add-ons (DLC): MotoGP (10 GB)
INPUT: Nintendo Switch Joy con, Keyboard and Mouse, Xbox or PlayStation controllers
ONLINE REQUIREMENTS: Internet connection required for updates or multiplayer mode. | 255,921 |
\begin{document}
\title[Critical points of positive solutions]
{The spectral gap to torsion problem
for some non-convex domains}
\author[H. Chen and P. Luo]{Hua Chen and Peng Luo}
\address[Hua Chen]{School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China}
\email{[email protected]}
\address[Peng Luo]{School of Mathematics and Statistics, Central China Normal University, Wuhan 430079, China }
\email{[email protected]}
\begin{abstract}
In this paper we study the following torsion problem
\begin{equation*}
\begin{cases}
-\Delta u=1~&\mbox{in}\ \Omega,\\[1mm]
u=0~&\mbox{on}\ \partial\Omega.
\end{cases}
\end{equation*}
Let $\Omega\subset \R^2$ be a bounded, convex domain and $u_0(x)$ be the solution of the above problem with its maximum $y_0\in \Omega$. Steinerberger \cite{Ste18} proved that there are universal constants $c_1, c_2>0$ satisfying
\begin{equation*}
\lambda_{\max}\left(D^2u_0(y_0)\right)\leq -c_1\mbox{exp}\left(-c_2\frac{\text{diam}(\Omega)}{\mbox{inrad}(\Omega)}\right).
\end{equation*}
And in \cite{Ste18} he proposed the following open problem:
\vskip 0.2cm
\noindent \emph{``Does above result hold true on domains that are not convex but merely simply connected or perhaps only bounded? The proof uses convexity of the domain $\Omega$ in a very essential way and it is not clear to us whether the statement remains valid in other settings."}
\vskip 0.2cm
\noindent Here, by a new idea involving computations on the Green's function, we compute
the spectral gap $\lambda_{\max}D^2u_0(y_0)$
for some non-convex smooth bounded domains, which gives a negative answer to the above open problem.
\end{abstract}
\date{\today}
\maketitle
\keywords {\noindent {\bf Keywords:} {Spectral gap, torsion problem, Green's function}
\smallskip
\subjclass{\noindent {\bf 2010 Mathematics Subject Classification:} 35B09 $\cdot$ 35J08 $\cdot$ 35J60}}
\section{Introduction and main results}\label{s0}
\setcounter{equation}{0}
In this paper, we consider the following torsion problem
\begin{equation}\label{1h}
\begin{cases}
-\Delta u=1~&\mbox{in}\ \Omega,\\[1mm]
u=0~&\mbox{on}\ \partial \Omega.
\end{cases}
\end{equation}
Problem \eqref{1h} is a classical topic
in PDEs, with references dating back to St. Venant (1856).
Since then, many results have been devoted to analyzing the
qualitative properties of the positive solutions.
A very interesting problem is the location and the number of the critical points of the above positive solutions. This is related to the level sets of the positive solutions. For a more general case, the following nonlinear problem
\begin{equation*}
\begin{cases}
-\Delta u=f(u)~&\mbox{in}\ \Omega,\\[1mm]
u=0~&\mbox{on}\ \partial \Omega.
\end{cases}
\end{equation*}
has also been considered widely. For example, one can refer to \cite{BL1976,GG2019,G2020,K1985,LR2019,ML71} and the related references.
A well-known and seminal result is the fundamental theorem in
Gidas, Ni and Nirenberg \cite{GNN1979}, proved by the moving plane method.
Gidas-Ni-Nirenberg's Theorem
shows that the uniqueness of the critical points is related to the shape of the superlevel sets.
Although there are some conjectures on the uniqueness of the critical point in more general convex domains, this seems to be a very difficult problem. Another important result is \cite{CC1998}, which holds for a wide class of nonlinearities $f$ without any symmetry assumption on $\Omega$ and for semi-stable solutions. For further results, we refer to \cite{B2018,HNS2016,M2016} and the references therein.
When $f(u)\equiv 1$, the torsion function seems to be the classical object in the study of level sets of elliptic equations. First from \cite{ML71}, we know that the level sets are convex and there is a unique global maximum of the
torsion function on planar convex domains.
The eccentricity of the level sets close to the (unique) maximum point $y_0$ is then determined by the eigenvalues of the Hessian $D^2u(y_0)$.
Let $\lambda_1$ and $\lambda_2$ be the two eigenvalues of $D^2u(y_0)$; then directly we have
\begin{equation*}
\lambda_1,\lambda_2\leq 0~\mbox{and}~\lambda_1+\lambda_2=\mbox{tr}~D^2u(y_0)=\Delta u(y_0)=-1.
\end{equation*}
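Indeed, at least formally, a second order Taylor expansion at the maximum point gives
\begin{equation*}
u(x)\approx u(y_0)+\frac{1}{2}\big\langle D^2u(y_0)(x-y_0),x-y_0\big\rangle,
\end{equation*}
so that, to leading order, the superlevel set $\{u>u(y_0)-t\}$ is an ellipse centered at $y_0$ with semi-axes $\sqrt{2t/|\lambda_1|}$ and $\sqrt{2t/|\lambda_2|}$.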
This shows that the level sets will be highly eccentric if one of the two eigenvalues is close to $0$. In this respect, Steinerberger \cite{Ste18} gave a beautiful description, which shows that the level sets are not highly eccentric for any convex domain $\Omega$; it can be stated as follows.
\vskip 0.2cm
\noindent \textbf{Theorem A.} \emph{Let $\Omega\subset \R^2$ be a bounded, convex domain and $u_0(x)$ be the solution of problem \eqref{1h} with its maximum $y_0\in \Omega$. There are universal constants $c_1, c_2>0$ such that}
\begin{equation}\label{5-10-1}
\lambda_{\max}\left(D^2u_0(y_0)\right)\leq -c_1\mbox{exp}\left(-c_2\frac{\text{diam}(\Omega)}{\mbox{inrad}(\Omega)}\right).
\end{equation}
Steinerberger \cite{Ste18} also gave some details to show that the above result has the sharp scaling. Theorem A above was proved by Fourier analysis in \cite{Ste18}, and the proof depends heavily on the convexity of the domain $\Omega$. Then, on page 1616 of \cite{Ste18}, Steinerberger proposed the following \textbf{open problem}:
\vskip 0.2cm
\noindent \textbf{Problem A}. Convexity of the Domain. \emph{Does Theorem A also hold true on domains that are not convex but merely simply connected or perhaps only bounded? The proof uses convexity of the domain $\Omega$ in a very essential way and it is not clear to us whether the statement remains valid in other settings.}
\vskip 0.2cm
In this paper, we are devoted to giving an answer to Problem A above.
To study Problem A, we will compute the Hessian of the torsion function at the maximum point on a simple non-convex domain.
For example, we suppose that $\Omega_\varepsilon=\O\backslash B(x_0,\varepsilon)$ with $x_0\in\O$, where $B(x_0,\varepsilon)$ denotes the ball centered at $x_0$ with radius $\e$, and let $u_\e$
be the solution of
\begin{equation}\label{aa2}
\begin{cases}
-\Delta u=1~&\mbox{in}\ \O_\varepsilon,\\[1mm]
u=0~&\mbox{on}\ \partial\O_\varepsilon.
\end{cases}
\end{equation}
And then we have following result.
\begin{teo}\label{th1.1}
Let $\Omega\subset \R^2$ be a bounded and convex domain, and let $y_0$ be the maximum point of $u_0(x)$ as in Theorem A.
Suppose that $u_\e(x)$ is the solution of problem \eqref{aa2}
with its maximum $x_\e\in\Omega_\e$. Let $\lambda_1$ and $\lambda_2$ be two eigenvalues of $D^2u_0(x)$ at $y_0$, then
\begin{equation*}
\lim_{\e\to 0}\lambda_{\max}\big(D^2u_\e(x_\e)\big)=
\begin{cases}
\max\big\{\lambda_1,\lambda_2\big\} &\mbox{if} ~x_0\neq y_0,\\[2mm]
\max\big\{\lambda_1,\lambda_2,-|\lambda_2-\lambda_1|\big\} &\mbox{if}~x_0= y_0.
\end{cases}
\end{equation*}
\end{teo}
\begin{rem}
Taking $\O\subset\R^2$ a bounded and convex domain,
$\Omega_\varepsilon=\O\backslash B(x_0,\varepsilon)$ with $x_0=y_0$ and $\varepsilon$ small, if we suppose that \eqref{5-10-1} is true for $\Omega_\e$, then there exist two positive constants $c_3$ and $c_4$, independent of $\e$, such that
\begin{equation}\label{a5-10-1}
\lambda_{\max}\left(D^2u_\e(x_\e)\right)\leq -c_1 \exp\left(-c_2\frac{\text{diam}(\Omega_\e)}{\mbox{inrad}(\Omega_\e)}\right)
\leq -c_3\exp\left(-c_4\frac{\text{diam}(\Omega)}{\mbox{inrad}(\Omega)}\right).
\end{equation}
On the other hand,
if we further suppose $\lambda_1=\lambda_2$ (for example, $\Omega=B(0,1)$),
then
Theorem \ref{th1.1} gives us
\begin{equation*}
\lim_{\e\to 0}\lambda_{\max}\big(D^2u_\e(x_\e)\big)=0,
\end{equation*}
which is a contradiction with \eqref{a5-10-1}.
Hence we deduce that \eqref{5-10-1} does not hold for the above non-convex domain $\Omega_\e$, which gives a negative answer to Problem A in \cite{Ste18}. In this case, we thus find that the level sets of the torsion function are highly eccentric.
\end{rem}
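\begin{rem}
A minimal explicit example illustrating the above is the radial case $\Omega=B(0,1)$ with $x_0=y_0=0$, where $u_0(x)=\frac{1-|x|^2}{4}$ and $\lambda_1=\lambda_2=-\frac{1}{2}$. On the annulus $\Omega_\e=B(0,1)\setminus B(0,\e)$ one may check directly that
\begin{equation*}
u_\e(x)=\frac{1-|x|^2}{4}+\frac{1-\e^2}{4|\log\e|}\,\log|x|,
\end{equation*}
whose maximum is attained on the circle $|x|=\big(\frac{1-\e^2}{2|\log\e|}\big)^{\frac{1}{2}}$, in agreement with Proposition \ref{Prop1-2} below. Writing $u_\e$ as a function of $r=|x|$, at any maximum point the radial eigenvalue $u_\e''(r)$ of $D^2u_\e$ equals $-1$, while the tangential eigenvalue $u_\e'(r)/r$ vanishes there, so that $\lambda_{\max}\big(D^2u_\e(x_\e)\big)=0$ for every small $\e>0$. In particular, no estimate of the form \eqref{5-10-1} with constants independent of $\e$ can hold on these non-convex domains.
\end{rem}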
\begin{rem} Our crucial ideas are as follows.
To compute the eigenvalues of the Hessian of $u_\e(x)$ at the maximum point on $\Omega_\e$, a first step is to find the location of the maximum point $x_\e$. And then we need to analyze the asymptotic
behavior of $D^2 u_\e(x_\e)$.
It is well known that $u_0(x)$ and $u_\e(x)$ can be represented by the corresponding Green's functions. Hence we write $u_\e(x)$ in terms of the basic Green's function and then analyze the properties of the Green's function on $\Omega_\e$.
To be specific, we will establish the basic estimate near $\partial B(x_0,\e)$:
\begin{equation*}
u_\varepsilon(x)=
u_0(x) + \frac{\log |x-x_0| }{|\log \e|} \Big(u_0\big(x_0\big)+o(1)\Big)+o\big(1\big).
\end{equation*}
Furthermore, another crucial result is to derive that $u_\e(x)$ and $u_0(x) + \frac{\log |x-x_0| }{|\log \e|} u_0(x_0)$ are close in the $C^2$-topology in $B(x_0,d)\setminus B(x_0,\e)$ for some small fixed $d>0$, which can be found in Proposition \ref{alemma-2} below.
\end{rem}
\begin{rem}
Now we would like to point out that $\Omega_\e$ in Theorem \ref{th1.1} can be replaced by $$\Omega'_\e=\Omega\backslash A_\e~\mbox{with}~A_\e=\e \big(A-x_0\big)+x_0~\mbox{and}~x_0\in A\cap \Omega,$$
where $A$ is a convex domain in $\R^2$ and $A-x_0=\{x, x+x_0\in A\}$. Since this is not essential, we omit the details.
\end{rem}
\begin{rem}
We point out one possible application in
the study of Brownian motion, which is also stated in \cite{Ste18}: we recall that the torsion function $u(x)$ also describes the expected lifetime of Brownian motion $\omega_x(t)$ started at $x$ until it first touches the boundary. If one moves away from the point at which the lifetime is maximized, then the expected lifetime in a neighborhood is determined by the eccentricity of the level set.
\end{rem}
The paper is organized as follows. In Section \ref{s1}, we recall some properties of the Green's function and split our solution $u_\e$ into different parts, which will be estimated
in the next section. In Section \ref{s3}, we compute the terms $u_\e$, $\nabla u_\e$ and $\nabla^2 u_\e$.
Section \ref{s9} is devoted to the proof of Theorem \ref{th1.1}.
\section{Properties of the Green's function and splitting of the solution $u_\varepsilon$}\label{s1}
First we recall that, for $(x,y)\in\O\times\O$, $x\ne y$, the Green's function $G(x,y)$ verifies
\begin{equation*}
\begin{cases}
-\D_x G(x,y)=\delta(y)&\hbox{in }\O,\\[1mm]
G(x,y)=0&\hbox{on }\partial\O,
\end{cases}
\end{equation*}
in the sense of distribution. Next we recall the classical representation formula,
\begin{equation}\label{G}
G(x,y)=-\frac{1}{2\pi}\log \big|x-y\big|-H(x,y),
\end{equation}
where $H(x,y)$ is the {\em regular part of the Green's function}.
Since in this paper we need to consider the Green's function in different domains, we denote by $G_U(x,y)$ the Green's function on $U$. We also have the following facts on harmonic functions, which can be found in \cite{GT1983}.
\begin{lemma}\label{lem2-3}
Let $u(x)$ be a harmonic function in $U\subset\subset \R^2$; then
\begin{equation}\label{a11-02-01}
\big|\nabla u(x)\big|\leq \frac{2}{r}\sup_{\partial B(x,r)} |u|,~\, \mbox{for}\,~B(x,r)\subset\subset U.
\end{equation}
\end{lemma}
\begin{lemma}[Green's representation formula]\label{lem2-a}
If $u\in C^2(\overline{U})$, then it holds
\begin{equation}\label{grf}
u(x)=-\int_{\partial U}u(y)\frac{\partial G_U(x,y)}{\partial \nu_y}d\sigma(y)
-\int_{U} \Delta u(y) G_U(x,y)dy,~\, \mbox{for}~x\in U,
\end{equation}
where $\nu_y$ is the outer normal vector on $\partial U$.
\end{lemma}
Let us denote by $G_0(w,s)$ the {\em Green's function} of $\R^2\backslash B(0,1)$ given by (see \cite{BF1996})
\begin{equation*}
G_0(w,s)=
-\frac{1}{2\pi}\left(\log \big|{w-s}\big|-\log \big||w|s-\frac{w}{|w|}\big|\right).
\end{equation*}
By a straightforward computation, we have
\begin{equation}\label{daaa11-02-10}
\frac{\partial G_0(w,s)}{\partial \nu_s}=
\frac{1-|w|^2}{2\pi|w-s|^2},~\mbox{for}~|w|>1,~|s|=1~\mbox{and}~\nu_s=-s.
\end{equation}
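For the reader's convenience we sketch this computation: since $\big||w|s-\frac{w}{|w|}\big|^2=|w|^2-2\langle w,s\rangle+1=|w-s|^2$ when $|s|=1$, differentiating in the direction $\nu_s=-s$ gives
\begin{equation*}
\frac{\partial G_0(w,s)}{\partial \nu_s}
=-\frac{1}{2\pi}\left(\frac{\langle w,s\rangle-1}{|w-s|^2}+\frac{|w|^2-\langle w,s\rangle}{|w-s|^2}\right)
=\frac{1-|w|^2}{2\pi|w-s|^2}.
\end{equation*}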
\begin{rem}
Let us point out that the {\em Green's function} $G_0(w,s)$ of $\R^2\backslash B(0,1)$ and
the {\em Poisson kernel} of $B(0,1)$ have the same formula (see \cite{BF1996}). This will be used to compute some integrals in $\R^2\backslash B(0,1)$.
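Concretely, for $|\tau|<1$ and $|s|=1$ the Poisson kernel of $B(0,1)$ is given by
\begin{equation*}
P(\tau,s)=\frac{1-|\tau|^2}{2\pi|\tau-s|^2},
\end{equation*}
which is the same expression as in \eqref{daaa11-02-10}; here $P$ is only a temporary notation.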
\end{rem}
The next lemma is basic and will be useful in the computations of the next section.
\begin{lemma}\label{G3}
Let $v_\e(x)$ be the function which verifies
\begin{equation*}
\begin{cases}
\Delta v_\e(x)=0&~\mbox{in}~\O\setminus B(x_0,\e),\\[1mm]
v_\e(x)=0&~\mbox{on}~\partial\O,\\[1mm]
v_\e(x)=1&~\mbox{on}~\partial B(x_0,\e).
\end{cases}
\end{equation*}
Then we have that
\begin{equation*}
v_\e(x)=-\frac{2\pi}{\log\e}\Big( 1-\frac{2\pi H(x_0,x_0)}{\log\e}\Big)G(x,x_0)+O\left(\frac1{|\log\e|^2}\right).
\end{equation*}
\end{lemma}
\begin{proof}
First we define
$$w_\e(x)=\frac1{H(x_0,x_0)}\left[\frac{\log\e}{2\pi} v_\e(x)+G(x,x_0)\right].$$
Then it holds
\begin{equation*}
\begin{cases}
\Delta w_\e(x)=0&~\mbox{in}~\O\setminus B(x_0,\e),\\[1mm]
w_\e(x)=0&~\mbox{on}~\partial\O,\\[1mm]
w_\e(x)= \frac1{H(x_0,x_0)}\Big(\frac{\log\e}{2\pi}+G(x,x_0)\Big)=-1+O(\e)&~\mbox{on}~\partial B(x_0,\e).
\end{cases}
\end{equation*}
Hence, repeating the above procedure, we find
\begin{equation}\label{ads}
\begin{cases}
\Delta \left[\frac{\log\e}{2\pi}w_\e(x)-G(x,x_0)\right]=0&~\mbox{in}~\O\setminus B(x_0,\e),\\[1.5mm]
\frac{\log\e}{2\pi}w_\e(x)-G(x,x_0)=0&~\mbox{on}~\partial\O,\\[1.5mm]
\frac{\log\e}{2\pi}w_\e(x)-G(x,x_0)= H(x,x_0)+O(\e|\log\e|)&~\mbox{on}~\partial B(x_0,\e).
\end{cases}
\end{equation}
Then
by the maximum principle and \eqref{ads}, we get that
$$\frac{\log\e}{2\pi}w_\e(x)-G(x,x_0)=O(1)~\,~\mbox{in}~\O\setminus B(x_0,\e),$$
which gives
\begin{equation*}
\begin{split}
w_\e(x)=&\frac{2\pi}{\log\e}G(x,x_0)+O\left(\frac1{|\log\e|}\right).
\end{split}
\end{equation*}
Hence coming back to $v_\e(x)$, we find
\begin{equation*}
\begin{split}
v_\e(x)=&\frac{2\pi}{\log\e}\Big( H(x_0,x_0)w_\e(x)-G(x,x_0)\Big)\\=&
-\frac{2\pi}{\log\e}\Big( 1-\frac{2\pi H(x_0,x_0)}{\log\e}\Big)G(x,x_0)+O\left(\frac1{|\log\e|^2}\right),
\end{split}
\end{equation*}
which gives the claim.
\end{proof}
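As a quick consistency check, in the radial case $\O=B(0,1)$ and $x_0=0$ one has $v_\e(x)=\frac{\log|x|}{\log\e}$, $G(x,0)=-\frac{1}{2\pi}\log|x|$ and $H(x,x_0)\equiv 0$, so that the main term in Lemma \ref{G3} reproduces $v_\e$ exactly in this case.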
Let $u_0$ and $u_\e$ be solutions of \eqref{1h} and \eqref{aa2} respectively, then we can write down the equation satisfied by $u_\e-u_0$ as follows
\begin{equation}\label{A1}
\begin{cases}
-\Delta \big(u_\varepsilon-u_0\big)=0~&\mbox{in}~\Omega_\varepsilon,\\[1mm]
u_\varepsilon-u_0=0~&\mbox{on}~\partial\Omega,\\[1mm]
u_\varepsilon-u_0=-u_0~&\mbox{on}~\partial B(x_0,\e).
\end{cases}
\end{equation}
Now by Green's representation formula \eqref{grf}, we get
\begin{equation}\label{6-25-51}
u_\varepsilon(x)=u_0(x)+
\int_{\partial B(x_0,\e)} \frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}u_0(z)d\sigma(z),
\end{equation}
where $\nu_z=-\frac{z-x_0}{|z-x_0|}$ is the outer normal vector of $\partial \big(\R^2\backslash B(x_0,\varepsilon)\big)$ and $G_\varepsilon(x,z)$ is the Green's function of $-\Delta$ in $\O_\varepsilon$ with zero Dirichlet boundary condition.
Now we set
$$x=x_0+\e w,\ z=x_0+\e s\,\,~\mbox{and}~
F_\e(w,s)= G_\varepsilon(x_0+\e w,x_0+\e s),$$
then \eqref{6-25-51} becomes
\begin{equation}\label{6-26-1}
u_\varepsilon(x)=u_0(x)+K_{\e}(w)+L_{\e}(w),
\end{equation}
where \begin{equation*}
K_{\e}(w):=\int_{\partial B(0,1)} \frac{\partial G_0(w,s)}{\partial\nu_s}
u_0(x_0+\e s)
d\sigma(s),
\end{equation*}
and
\begin{equation*}
L_{\e}(w):=\int_{\partial B(0,1)}\left(\frac{\partial F_\varepsilon(w,s)}{\partial\nu_s}-\frac{\partial G_0(w,s)}{\partial\nu_s}\right)u_0(x_0+\e s)d\sigma(s),
\end{equation*}
with $\nu_s=-\frac{s}{|s|}$ the outer normal vector of $\partial \big(\R^2\backslash B(0,1)\big)$.
\section{Asymptotic analysis on $u_\e$}\label{s3}
To compute $\lambda_{\max}\big(D^2u_\e(x)\big)$ at the maximum point of $u_\e$, the first step is to find the location of the maximum point of $u_\e$. Here we distinguish the following two cases for $x\in\Omega_\e$:
\vskip 0.2cm
\begin{description}
\item[(1)] $x$ is far away from $x_0$, namely $|x-x_0|\ge C>0$.
\vskip 0.2cm
\item[(2)] $x$ is near $x_0$, namely $|x-x_0|=o(1)$.
\end{description}
\vskip 0.2cm
And we will find that the behavior of $u_\e(x)$ near $\partial B(x_0,\varepsilon)$ is crucial and a key point is to understand the limit of $G_\varepsilon(x,y)$ according to the location of $x$.
\begin{lemma}\label{alemma-1}
Let $u_0$ and $u_\e$ be solutions of \eqref{1h} and \eqref{aa2} respectively. Then for any fixed $r>0$, it holds
\begin{equation*}
u_\varepsilon(x)\rightarrow u_0(x)~\mbox{uniformly in}~C^2\big(\Omega\backslash B(x_0,r)\big).
\end{equation*}
\end{lemma}
\begin{proof}
First for any $x\in \Omega\backslash B(x_0,r)$, we
know
\begin{equation}\label{Asfs}
\begin{split}
u_\varepsilon(x)=&u_0(x)+
\int_{\partial B(x_0,\e)} \frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}u_0(z)d\sigma(z)\\=&
u_0(x)+
\int_{\partial B(x_0,\e)} \frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}\Big(u_0(x_0)+O\big(\e\big)\Big)d\sigma(z)
\\=&
u_0(x)+u_0(x_0)
\int_{\partial B(x_0,\e)} \frac{\partial G_\varepsilon(x,z)}{\partial\nu_z} d\sigma(z)
+O\big( \e\big)
\int_{\partial B(x_0,\e)}\Big| \frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}\Big|d\sigma(z).
\end{split}
\end{equation}
Now let $x=x_0+\e w$ and $z=x_0+\e s$, then using \eqref{daaa11-02-10}, we have
\begin{equation}\label{a6-25-2}
\begin{split}
\frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}
=&\frac{1}{\e}\left(\frac{\partial G_0(w,s)}{\partial\nu_s}+\Big(\frac{\partial F_\varepsilon(w,s)}{\partial\nu_s}-\frac{\partial G_0(w,s)}{\partial\nu_s}\Big)\right)\\=&
O\left( \frac{1}{\e}\Big(1+\big|\frac{\partial F_\varepsilon(w,s)}{\partial\nu_s}-\frac{\partial G_0(w,s)}{\partial\nu_s}\big|\Big)\right).
\end{split}\end{equation}
On the other hand, we can verify that
\begin{equation}\label{adsd}
\begin{cases}
\Delta_w \Big(\frac{\partial F_\varepsilon(w,s)}{\partial\nu_s}-\frac{\partial G_0(w,s)}{\partial\nu_s}\Big)=0&~\mbox{in}~\frac{\O-x_0}\e\setminus B(0,1),\\[2mm]
\Big(\frac{\partial F_\varepsilon(w,s)}{\partial\nu_s}-\frac{\partial G_0(w,s)}{\partial\nu_s}\Big)=0&~\mbox{on}~\partial B(0,1),\\[2mm]
\Big(\frac{\partial F_\varepsilon(w,s)}{\partial\nu_s}-\frac{\partial G_0(w,s)}{\partial\nu_s}\Big)= \frac{|w|^2-1}{2\pi |w-s|^2}=O\big(1\big) &~\mbox{on}~\frac{\partial\O-x_0}\e.
\end{cases}
\end{equation}
By the maximum principle and \eqref{adsd}, we get that
\begin{equation}\label{adsda}\frac{\partial F_\varepsilon(w,s)}{\partial\nu_s}-\frac{\partial G_0(w,s)}{\partial\nu_s}=O(1)~\,~\mbox{in}~\frac{\O-x_0}\e\setminus B(0,1).
\end{equation}
Hence from \eqref{a6-25-2} and \eqref{adsda}, we find
\begin{equation}\label{6-25-2}
\begin{split}
\frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}
=O\left( \frac{1}{\e}\right).
\end{split}\end{equation}
Also defining $ v(x):= \displaystyle\int_{\partial B(x_0,\e)}\frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}d\sigma(z)$, then it holds
\begin{equation*}
\begin{cases}
\Delta_xv(x)=0&~\mbox{in}~\O\setminus B(x_0,\e),\\[1mm]
v(x)=0&~\mbox{on}~\partial\O,\\[1mm]
v(x)=-1&~\mbox{on}~\partial B(x_0,\e).
\end{cases}
\end{equation*}
Then using Lemma \ref{G3}, we have
\begin{equation} \label{6-25-8}
v(x)=
\frac{2\pi}{\log\e}\Big( 1-\frac{2\pi H(x_0,x_0)}{\log\e}\Big)G(x,x_0)+O\left(\frac1{|\log\e|^2}\right).
\end{equation}
Hence from \eqref{Asfs}, \eqref{6-25-2} and \eqref{6-25-8}, we find
\begin{equation*}
\begin{split}
u_\varepsilon(x)=&
u_0(x)+u_0(x_0) \left(
\frac{2\pi}{\log\e}\Big( 1-\frac{2\pi H(x_0,x_0)}{\log\e}\Big)G(x,x_0)+O\Big(\frac1{|\log\e|^2}\Big) \right)
+O\big( \e\big) \\=&
u_0(x)+
O\left(\frac1{|\log\e|}\right)\,~
\mbox{uniformly in}~C\big(\Omega\backslash B(x_0,r)\big).
\end{split}
\end{equation*}
On the other hand, for any fixed $x\in \Omega\backslash B(x_0,r)$, by \eqref{G}, we can verify that
\begin{equation}\label{6-25-3}
G(x,z),~\big|\frac{\partial G(x,z)}{\partial\nu_z}\big|,~|\nabla_xG(x,z)|~\mbox{and}~\big|\nabla_x\frac{\partial G(x,z)}{\partial\nu_z}\big|~\mbox{are bounded for}~z\in \partial B(x_0,\e).
\end{equation}
And
by \eqref{A1}, it follows
\begin{equation*}
-\Delta_x \Big( \frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}- \frac{\partial G(x,z)}{\partial\nu_z}\Big)=0~ ~\mbox{in}~\Omega_\varepsilon.
\end{equation*}
Since $B\big(x,\frac{r}{2}\big) \subset\subset \Omega_\e$,
using Lemma \ref{lem2-3}, \eqref{6-25-2} and \eqref{6-25-3}, we get, for $x\in \Omega\backslash B(x_0,r)$ and $z\in \partial B(x_0,\e)$,
\begin{equation}\label{6-25-4}
\begin{split}
\left|\nabla_x\Big( \frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}- \frac{\partial G(x,z)}{\partial\nu_z}\Big)
\right|=O \left( \Big|\frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}- \frac{\partial G(x,z)}{\partial\nu_z}\Big|\right)=O\left( \frac{1}{\e}\right),
\end{split}\end{equation}
and
\begin{equation}\label{6-25-5}
\begin{split}
\left|\nabla^2_x\Big( \frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}- \frac{\partial G(x,z)}{\partial\nu_z}\Big)
\right|=O \left( \Big|\frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}- \frac{\partial G(x,z)}{\partial\nu_z}\Big|\right)=O\left( \frac{1}{\e}\right).
\end{split}\end{equation}
Then \eqref{6-25-3}, \eqref{6-25-4} and \eqref{6-25-5} give us that
\begin{equation*}
\begin{split}
\left|\nabla_x \frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}
\right|= O\left( \frac{1}{\e}\right)~\mbox{and}~\left|\nabla^2_x \frac{\partial G_\varepsilon(x,z)}{\partial\nu_z}
\right|= O\left( \frac{1}{\e}\right).
\end{split}\end{equation*}
Hence from above estimates, it follows
\begin{equation*}
u_\varepsilon(x)=u_0(x)+
O\left(\frac1{|\log\e|}\right)\,~
\mbox{uniformly in}~C^2\big(\Omega\backslash B(x_0,r)\big).
\end{equation*}
\end{proof}
In the rest of this section, we analyze the asymptotic
behavior of $u_\e$, $\nabla u_\e$ and
$K_\e$, $\nabla_wK_\e$, $\nabla_w^2K_\e$, $L_\e$, $\nabla_wL_\e$ and $\nabla_w^2L_\e$.
\begin{lemma}\label{lll1}
Let $w=\frac{x-x_0}{\e}$ and if $|x-x_0|\to 0$,
then it holds
\begin{equation}\label{aaas11-15-04}
\begin{split}
K_{\e}(w)=& - u_0\Big(x_0+ \frac{\varepsilon w}{|w|^2}\Big)+\frac{\varepsilon^2}{2}\Big(1-\frac{1}{|w|^2}\Big),
\end{split}
\end{equation}
\begin{equation}\label{aaas11-15-04-1}
\begin{split}
\frac{\partial K_{\e}(w)}{\partial w_i}= O\Big(\frac{\varepsilon}{|w|^2}\Big)~\,\mbox{and}~\,
\frac{\partial^2 K_{\e}(w)}{\partial w_i\partial w_j}= O\Big(\frac{\varepsilon}{|w|^3}\Big).
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
First taking $\tau=\frac{w}{|w|^2}=\frac{\varepsilon(x-x_0)}{|x-x_0|^2}$ and using \eqref{daaa11-02-10}, we get
\begin{equation*}
\begin{split}
\int_{\partial B(0,1)} &\frac{\partial G_0(w,s)}{\partial\nu_s}u_0(x_0+\e s)d\sigma(s)=-\frac{1}{2\pi }\int_{\partial B(0,1)} \frac{1-|\tau|^2}{|\tau-s|^2}u_0(x_0+\e s)d\sigma(s).
\end{split}
\end{equation*}
Lemma \ref{lem2-a} and \eqref{daaa11-02-10} give us that for any $\phi\in C^2\big(\overline{B(0,1)}\big)$, it holds
\begin{equation}\label{G2}
\phi(s)=\frac{1}{2\pi}\int_{\partial B(0,1)} \frac{1-|s|^2}{|s-y|^2}\phi(y)d\sigma(y)
-\int_{B(0,1)} \Delta \phi(y) G_0(s,y)dy.
\end{equation}
Hence for $|\tau|<1$ and choosing $\phi(\tau)=u_0(x_0+\e \tau)$ in \eqref{G2} we find
\begin{equation*}
u_0(x_0+\e \tau)= \frac{1}{2\pi }\int_{\partial B(0,1)} \frac{1-|\tau|^2}{|\tau-s|^2}u_0(x_0+\e s)d\sigma(s)+\frac{\varepsilon^2}{2}\big(1-|\tau|^2\big).
\end{equation*}
From the above computations we get
\begin{equation}\label{Bd22}
\begin{split}
K_{\e}(w)=&\int_{\partial B(0,1)}\frac{\partial G_0(w,s)}{\partial\nu_s}u_0(x_0+\e s)d\sigma(s)\\=&- u_0\Big(x_0+ \frac{\varepsilon w}{|w|^2}\Big)+\frac{\varepsilon^2}{2}\Big(1-\frac{1}{|w|^2}\Big).
\end{split}
\end{equation}
And then by differentiating \eqref{Bd22} with respect to $w_i$, we have
\begin{equation}\label{aaab1-15-04}
\begin{split}
\frac{\partial K_{\e}(w)}{\partial w_i}=&
-\frac{\e}{|w|^{2}}\left(\frac{\partial u_0(x_0+ \frac{\varepsilon w}{|w|^2})}{\partial x_i}-2\frac{w_i}{ |w|^2}\sum_{j=1}^2\frac{\partial u_0(x_0+ \frac{\varepsilon w}{|w|^2})}{\partial x_j}w_j\right)
+ \frac{\varepsilon^2 w_i}{|w|^4}.
\end{split}
\end{equation}
Next, differentiating \eqref{aaab1-15-04} with respect to $w_j$, we find
\begin{equation}\label{1-15-04a}
\begin{split}
\frac{\partial^2 K_{\e}(w)}{\partial w_i\partial w_j}=&
O\left(\frac{\e}{|w|^{3}}\right).
\end{split}
\end{equation}
Hence \eqref{aaas11-15-04} and \eqref{aaas11-15-04-1} follow by \eqref{Bd22}, \eqref{aaab1-15-04} and \eqref{1-15-04a}.
\end{proof}
\begin{lemma}
Let $w=\frac{x-x_0}{\e}$ and if $|x-x_0|\to 0$,
then it holds
\begin{equation}\label{as11-15-06}
\begin{split}
L_{\e}(w)=&\frac{\log |w| }{|\log \e|} \Big(u_0(x_0)+o(1)\Big).
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
First we define
$$M_{\e,2}(w,s)=\sum_{i=1}^2\left(\frac{\partial G_0(w,s)}{\partial s_i}-\frac{\partial F_\varepsilon(w,s)}{\partial s_i}\right)s_i-\frac{1}{2\pi} M_{\e,1}(w,s),$$
with
\begin{equation*}
\begin{cases}
\Delta_w {M}_{\e,1}(w,s)=0&~\mbox{in}~\frac{\O-x_0}\e\setminus B(0,1),\\[1mm]
{M}_{\e,1}(w,s)=0&~\mbox{on}~\partial B(0,1),\\[1mm]
{M}_{\e,1}(w,s)=1&~\mbox{on}~\frac{\partial\O-x_0}\e.
\end{cases}
\end{equation*}
Then $L_{\e}(w)$ can be written as
\begin{equation}\label{B5}
\begin{split}
L_{\e}(w)=&\underbrace{\frac{1}{2\pi}\int_{\partial B(0,1)} {M}_{\e,1}(w,s)u_0(x_0+\e s)d\sigma(s)}_{:= {L}_{\e,1}(w)} +\underbrace{\int_{\partial B(0,1)} M_{\e,2}(w,s)u_0(x_0+\e s)d\sigma(s)}_{:=L_{\e,2}(w)}.
\end{split}
\end{equation}
Also for any $w\in \frac{\partial\O-x_0}\e$ and $s\in \partial B(0,1)$, it holds
\begin{equation*}
\sum_{i=1}^2\left(\frac{\partial G_0(w,s)}{\partial s_i}-\frac{\partial F_\varepsilon(w,s)}{\partial s_i}\right)s_i= \sum_{i=1}^2 \frac{\partial G_0(w,s)}{\partial s_i} s_i=
\frac{1}{2\pi} \frac{|w|^2-1}{|w-s|^2}.
\end{equation*}
Hence we can verify
\begin{equation}\label{cl1}
\begin{cases}
\Delta_wM_{\e,2}(w,s)=0&~\mbox{in}~\frac{\O-x_0}\e\setminus B(0,1),\\[1mm]
M_{\e,2}(w,s)=0&~\mbox{on}~\partial B(0,1),\\[1mm]
M_{\e,2}(w,s)=\frac{1}{2\pi}\left(\frac{|w|^2-1}{|w-s|^2}-1\right)&~\mbox{on}~\frac{\partial\O-x_0}\e.
\end{cases}
\end{equation}
Note that for any $w\in \frac{\partial\O-x_0}\e$ and $s\in \partial B(0,1)$, we have
\begin{equation}\label{cl2}
\begin{split}
\frac{|w|^2-1}{|w-s|^2}-1=& O\Big(\frac{1}{|w|}\Big)=O\big(\e\big).
\end{split}
\end{equation}
Then by the maximum principle, \eqref{cl1} and \eqref{cl2}, we find
\begin{equation}\label{cl3}
\big|M_{\e,2}(w,s)\big|=O(\varepsilon)~\mbox{for}~w\in\frac{\O-x_0}\e \setminus B(0,1)~\mbox{and}~s\in \partial B(0,1).
\end{equation}
Hence it follows
\begin{equation}\label{m4}
L_{\e,2}(w)=O\big(\varepsilon\big)\,~\mbox{for}\,~w\in\frac{\O-x_0}\e \setminus B(0,1).
\end{equation}
Next we estimate ${M}_{\e,1}(w,s)$. To do this let us introduce the function $\psi_\e(x,s)$ as follows
$$\psi_\e(x,s):=1- M_{\e,1}\left(\frac{x-x_0}\e,s\right)~\,\mbox{for}~~x\in \O\setminus B(x_0,\e)~\,\mbox{and}\, s\in \partial B(0,1).$$
Then it follows
\begin{equation*}
\begin{cases}
\Delta_x \psi_\e(x,s)=0&~\mbox{in}~\O\setminus B(x_0,\e),\\[1mm]
\psi_\e(x,s)=0&~\mbox{on}~ \partial\O,\\[1mm]
\psi_\e(x,s)=1&~\mbox{on}~\partial B(x_0,\e).
\end{cases}
\end{equation*}
Hence using Lemma \ref{G3} we have that
\begin{equation*}
\psi_\e(x,s)=-\frac{2\pi}{\log\e}\Big( 1-\frac{2\pi H(x_0,x_0)}{\log\e}\Big)G(x,x_0)+O\left(\frac1{|\log\e|^2}\right).
\end{equation*}
Coming back to ${M}_{\e,1}(w,s)$, we get
\begin{equation*}
\begin{split}
{M}_{\e,1}(w,s)=&1+\frac{2\pi}{\log\e}\Big( 1-\frac{2\pi H(x_0,x_0)}{\log\e}\Big)G(\varepsilon w+x_0,x_0)+O\left(\frac1{|\log\e|^2}\right)\\=&
\frac{\log |w|}{|\log\e|}\Big(1+o(1)\Big)
+o\left(\frac{1}{|\log\e|}\right).
\end{split}
\end{equation*}
In the last equality we used that $\varepsilon |w|=|x-x_0|\rightarrow 0$. Then
\begin{equation}\label{m3}
\begin{split}
L_{\e,1}(w)= \frac{\log |w|}{|\log\e|}\Big(u_0(x_0)+o(1)\Big)
+o\left(\frac{1}{|\log\e|}\right).
\end{split}
\end{equation}
Then \eqref{as11-15-06} follows by \eqref{B5}, \eqref{m4} and \eqref{m3}.
\end{proof}
Then we have the following estimate on $u_\e$ near $\partial B(x_0,\varepsilon)$.
\begin{prop}\label{alemma-2d}
Let $w=\frac{x-x_0}{\e}$ and $|x-x_0|\to 0$, then it holds
\begin{equation}\label{s11-15-0d1}
u_\e(x) =
u_0(x) + \frac{\log |x-x_0| }{|\log \e|} \Big(u_0\big(x_0\big)+o(1)\Big)+o(1).
\end{equation}
\end{prop}
\begin{proof}
From \eqref{6-26-1}, \eqref{aaas11-15-04} and \eqref{as11-15-06}, we have
\begin{equation*}
\begin{split}
u_\e(x) =&
u_0(x) - u_0\Big(x_0+ \frac{\varepsilon w}{|w|^2}\Big)+\frac{\varepsilon^2}{2}\big(1-\frac{1}{|w|^2}\big)
+\frac{\log |w| }{|\log \e|} \Big(u_0\big(x_0\big)+o(1)\Big)\\=&
u_0(x) + \frac{\log |x-x_0| }{|\log \e|} \Big(u_0\big(x_0\big)+o(1)\Big)+o(1).
\end{split}\end{equation*}
\end{proof}
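We remark that in the radial case $\O=B(0,1)$ and $x_0=0$, where $u_\e(x)=\frac{1-|x|^2}{4}+\frac{1-\e^2}{4|\log\e|}\log|x|$ and $u_0(0)=\frac{1}{4}$, the expansion \eqref{s11-15-0d1} can be verified directly.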
Now we continue to compute $\nabla_wL_\e$ and $\nabla_w^2L_\e$.
\begin{lemma}\label{aB3}
Let $w=\frac{x-x_0}{\e}$ and assume $|x-x_0|\to 0$. We have the following results:
\vskip 0.2cm
\textup{(1)} For any fixed $C_0>1$, if $|w|\geq C_0$, then it holds
\begin{equation}\label{aast11-23-05}
\begin{split}
\frac{\partial L_{\e}(w)}{\partial w_i}=&
\frac{u_0(x_0)w_i}{|\log\e|\cdot |w|^2}+o\left(\frac{1}{ |w|\cdot |\log\e|}\right),
\end{split}
\end{equation}
and
\begin{equation}\label{aamt11-23-05}
\begin{split}
\frac{\partial^2 L_{\e}(w)}{\partial w_i\partial w_j}=&
\frac{u_0(x_0)}{ |\log \varepsilon|\cdot |w|^2} \Big(\delta_{ij}-\frac{2w_iw_j}{|w|^2}\Big)
+o\left(\frac{1}{ |w|^2\cdot |\log\e|}\right).
\end{split}
\end{equation}
\textup{(2)}
If $\displaystyle\lim_{\e\to 0}|w|=1$, then it holds
\begin{equation}\label{d11-23-05}
\begin{split}
\Big\langle \nabla_w L_{\e}(w),w\Big\rangle
\geq \frac{u_0(x_0)}{2|\log \e|}.
\end{split}\end{equation}
\end{lemma}
\begin{proof}[\underline{\textbf{Proof of \eqref{aast11-23-05} and \eqref{aamt11-23-05}}}]
First for $w\in\frac{\O-x_0}\e \setminus B(0,1)~\mbox{and}~s\in \partial B(0,1)$, we know $$B\big(w,\frac{|w|-1}{2}\big)\subset\subset \frac{\O-x_0}\e \setminus B(0,1).$$
Then using
\eqref{a11-02-01}, \eqref{cl1} and \eqref{cl3}, we have
\begin{equation}\label{acl3}
\big|\nabla_wM_{\e,2}(w,s)\big|=O\Big(\frac{\varepsilon}{|w|-1}\Big)
~~\mbox{and}~~
\big|\nabla^2_wM_{\e,2}(w,s)\big|=O\Big(\frac{\varepsilon}{(|w|-1)^2}\Big).
\end{equation}
Hence if $\e|w|\to 0$ and $|w|\geq C_0$ for any fixed $C_0>1$, then \eqref{acl3} implies
\begin{equation*}
\big|\nabla_wM_{\e,2}(w,s)\big|=O\Big(\frac{\varepsilon}{|w|}\Big)
~~\mbox{and}~~
\big|\nabla^2_wM_{\e,2}(w,s)\big|=O\Big(\frac{\varepsilon}{|w|^2}\Big).
\end{equation*}
And then it follows
\begin{equation}\label{11-21-33}
\frac{\partial {L}_{\e,2}(w)}{\partial w_i}=O\left(\frac{\e}{|w|}\right)~\mbox{and}~
\frac{\partial^2{L}_{\e,2}(w)}{\partial w_i\partial w_j}=O\left(\frac\e{|w|^2}\right).
\end{equation}
Also $M_{\e,1}(w,s)-
\frac{\log |w|}{|\log\e|}$ is a harmonic function with respect to $w$ in $\frac{\O-x_0}\e\setminus B(0,1)$. Hence
if $\e|w|\to 0$ and $|w|\geq C_0$ for any fixed $C_0>1$,
using \eqref{a11-02-01} we have
\begin{equation}\label{11-24-18}
\left|\nabla_w\left(M_{\e,1}(w,s)-
\frac{\log |w|}{|\log\e|}\right)\right|=o\left(\frac{1}{ |w|\cdot |\log\e|}\right),
\end{equation}
and
\begin{equation}\label{a11-24-18}
\left|\nabla^2_w\left(M_{\e,1}(w,s)-
\frac{\log |w|}{|\log\e|}\right)\right|=o\left(\frac{1}{ |w|^2\cdot|\log\e|}\right).
\end{equation}
Hence from \eqref{11-24-18} and \eqref{a11-24-18}, we have
\begin{equation}\label{11-21-35}
\frac{\partial L_{\e,1}(w)}{\partial w_i}
=\frac{w_i}{ |\log \varepsilon|\cdot |w|^2}\big(u_0(x_0)+o(1)\big),
\end{equation}
and
\begin{equation}\label{11-21-35a}
\frac{\partial^2 L_{\e,1}(w)}{\partial w_i\partial w_j}
=\frac{1}{ |\log \varepsilon|\cdot |w|^2}\big(u_0(x_0)+o(1)\big)\Big(\delta_{ij}-\frac{2w_iw_j}{|w|^2}\Big).
\end{equation}
And then \eqref{aast11-23-05} and \eqref{aamt11-23-05} follows by \eqref{B5},
\eqref{11-21-33}, \eqref{11-21-35} and \eqref{11-21-35a}.
\end{proof}
\begin{proof}[\underline{\textbf{Proof of \eqref{d11-23-05}}}]
To consider the case $\displaystyle\lim_{\e\to 0}|w|=1$, we define a new function
$$\eta_{\e,1}(w,s)=\sum_{i=1}^2\left(\frac{\partial G_0(w,s)}{\partial s_i}-\frac{\partial F_\varepsilon(w,s)}{\partial s_i}\right)s_i+\frac{\log|w|}{2\pi\log (r\e)}.$$
Then we can write
\begin{equation*}
\frac{\partial L_{\e}(w)}{\partial w_i}= \int_{\partial B(0,1)}\left(
\frac{\partial\eta_{\e,1}(w,s)}{\partial w_i}-\frac{w_i}{2\pi|w|^2\log (r\e)}
\right)u_0(x_0+\e s)d\sigma(s).
\end{equation*}
Now we can verify
\begin{equation}\label{6-23-15}
\begin{cases}
\Delta_w\eta_{\e,1}(w,s)=0&~\mbox{in}~\frac{\O-x_0}\e\setminus B(0,1),\\[1mm]
\eta_{\e,1}(w,s)=0&~\mbox{on}~\partial B(0,1),\\[1mm]
\eta_{\e,1}(w,s)=\frac{1}{2\pi}\left(\frac{|w|^2-1}{|w-z|^2}+\frac{\log|w|}{\log (r\e)}\right)&~\mbox{on}~\frac{\partial\O-x_0}\e.
\end{cases}
\end{equation}
Setting $z=x_0+\e w$, for any $w\in \frac{\partial\O-x_0}\e$, we get that
\begin{equation*}
\begin{split}
\frac{|w|^2-1}{|w-s|^2}+\frac{\log|w|}{\log (r\e)}=&\frac{|z-x_0|^2-\e^2}{|z-x_0-\e s|^2}+\frac{\log (r|z-x_0|)-\log (r\e)}{\log (r\e)}\\=&
\frac{\log (r|z-x_0|)}{\log (r\e)}+O(\e).
\end{split}
\end{equation*}
Then taking some appropriate $r>0$ (for example, taking $r>0$ small such that $r|z-x_0|<1$ for any $z\in \partial\Omega$), we have
\begin{equation}\label{6-23-16}
\begin{split}
\frac{|w|^2-1}{|w-s|^2}+\frac{\log|w|}{\log (r\e)}\geq 0.
\end{split}
\end{equation}
Hence by the maximum principle, \eqref{6-23-15} and \eqref{6-23-16}, it holds
$$\eta_{\e,1}(w,s)\geq 0, ~\mbox{for any}~w\in \frac{\Omega-x_0}{\varepsilon}\backslash B(0,1).$$
Then Hopf's lemma gives us that
\begin{equation*}
\frac{\partial \eta_{\e,1}(\tau,s)}{\partial \nu_\tau}
<0, ~\mbox{for any}~\tau\in \partial B(0,1)~\mbox{with}~\nu_\tau=-\frac{\tau}{|\tau|}.
\end{equation*}
If $\displaystyle\lim_{\varepsilon\rightarrow 0}|w|=1$, by the sign-preserving property, for small $\varepsilon$, we find
\begin{equation}\label{6-23-31}
\sum^2_{i=1}\frac{\partial\eta_{\e,1}(w,s)}{\partial w_i}w_i
=-|w|\cdot\frac{\partial\eta_{\e,1}(w,s)}{\partial \nu_w}
\geq 0,~\mbox{with}~\nu_w=-\frac{w}{|w|}.
\end{equation}
Hence using \eqref{6-23-31}, we can compute that
\begin{equation*}
\begin{split}
\Big\langle \nabla_w L_{\e}(w),w\Big\rangle
=& \int_{\partial B(0,1)}\left(\sum^2_{i=1}\frac{\partial\eta_{\e,1}(w,s)}{\partial w_i}w_i-\frac{1}{2\pi\log (r\e)}
\right)u_0(x_0+\e s)d\sigma(s)\\ \geq &
\frac{1}{|\log \e|}\big(u_0(x_0)+o(1)\big)\geq \frac{u_0(x_0)}{2|\log \e|},
\end{split}\end{equation*}
which completes the proofs of \eqref{d11-23-05}.
\end{proof}
From the above computations, the precise asymptotic behavior of $\nabla u_\e$ and
$\nabla^2 u_\e$ near $\partial B(x_0,\varepsilon)$ can be stated as follows.
\begin{prop}\label{alemma-2}Let $w=\frac{x-x_0}{\e}$ and assume $|x-x_0|\to 0$. We have the following results:
\vskip 0.2cm
\textup{(1).} For any fixed $C_0>1$, if $|w|\geq C_0$, then it holds
\begin{equation}\label{s11-15-01}
\frac{\partial u_\e(x)}{\partial x_i}=
\frac{\partial u_0(x)}{\partial x_i}+ \frac{u_0(x_0) (x_i-x_{0,i})}{ |\log\e|\cdot |x-x_0|^2}
+o\Big(\frac{1}{ |\log\e|\cdot |x-x_0|}\Big),
\end{equation}
and
\begin{equation}\label{s11-15-01a}
\begin{split}
\frac{\partial^2 u_\e(x)}{\partial x_i\partial x_j}=&
\frac{\partial^2 u_0(x)}{\partial x_i\partial x_j}+ \frac{u_0(x_0) }{ |\log\e|\cdot |x-x_0|^2}\Big(\delta_{ij}-\frac{ (x_i-x_{0,i})(x_j-x_{0,j})}{ |x-x_0|^2} \Big) \\&
+o\Big( \frac{1}{ |\log\e|\cdot |x-x_0|^2}\Big).
\end{split}\end{equation}
\textup{(2).}
If $\displaystyle\lim_{\e\to 0}|w|=1$, then it holds
\begin{equation}\label{s11-15-01b}
\begin{split}
\big|\nabla u_\e(x)\big|
\geq \frac{u_0(x_0)}{8\e|\log \e|}.
\end{split}\end{equation}
\end{prop}
\begin{proof}
\textup{(1).} First by \eqref{6-26-1}, we have
\begin{equation}\label{6-26-1d}
\frac{\partial u_\varepsilon(x)}{\partial x_i}= \frac{\partial u_0(x)}{\partial x_i}+ \frac{1}{\e}\frac{\partial K_{\e}(w)}{\partial w_i}+ \frac{1}{\e}\frac{\partial L_{\e}(w)}{\partial w_i}.
\end{equation}
Then from \eqref{aaas11-15-04-1}, \eqref{aast11-23-05} and \eqref{6-26-1d}, we find
\begin{equation*}
\begin{split}
\frac{\partial u_\varepsilon(x)}{\partial x_i}=& \frac{\partial u_0(x)}{\partial x_i}+
O\Big(\frac{1}{|w|^2}\Big) +
\frac{u_0(x_0)w_i}{\e\cdot|\log\e|\cdot |w|^2}+o\left(\frac{1}{ \e\cdot |w|\cdot |\log\e|}\right)
\\=&
\frac{\partial u_0(x)}{\partial x_i}+ \frac{u_0(x_0) (x_i-x_{0,i})}{ |\log\e|\cdot |x-x_0|^2}
+o\Big(\frac{1}{ |\log\e|\cdot |x-x_0|}\Big)+O\Big(\frac{\e^2}{|x-x_0|^2}\Big)\\=&
\frac{\partial u_0(x)}{\partial x_i}+ \frac{u_0(x_0) (x_i-x_{0,i})}{ |\log\e|\cdot |x-x_0|^2}
+o\Big(\frac{1}{ |\log\e|\cdot |x-x_0|}\Big).
\end{split}
\end{equation*}
Similarly, by \eqref{6-26-1}, we know
\begin{equation*}
\frac{\partial^2 u_\varepsilon(x)}{\partial x_i\partial x_j}= \frac{\partial^2 u_0(x)}{\partial x_i\partial x_j}+ \frac{1}{\e^2}\frac{\partial^2 K_{\e}(w)}{\partial w_i\partial w_j}+
\frac{1}{\e^2}\frac{\partial^2 L_{\e}(w)}{\partial w_i\partial w_j},
\end{equation*}
which, together with \eqref{aaas11-15-04-1} and \eqref{aamt11-23-05}, implies
\eqref{s11-15-01a}.
\vskip 0.2cm
\textup{(2).}
If $\displaystyle\lim_{\e\to 0}|w|=1$, from \eqref{6-26-1}, \eqref{aaas11-15-04-1} and \eqref{d11-23-05},
we have
\begin{equation*}
\begin{split}
\big|\nabla u_\e(x)\big| \geq &\frac{1}{2}
\Big|\big\langle\nabla_x u_\varepsilon(x), w\big\rangle\Big|\\
\geq & \frac{1}{2\e} \Big|\big\langle\nabla_w L_{\e}(w), w\big\rangle\Big|
-\frac{1}{2} \Big|\big\langle\nabla_x u_0(x), w\big\rangle\Big|
- \frac{1}{2\e} \Big|\big\langle\nabla_w K_{\e}(w), w\big\rangle\Big| \geq \frac{u_0(x_0)}{8\e|\log \e|}.
\end{split}
\end{equation*}
\end{proof}
\section{Proof of Theorem \ref{th1.1}}\label{s9}
Firstly, we give the precise location of the maximum point $x_\e$ of $u_\e(x)$ on $\Omega_\e$.
\begin{prop}\label{Prop1-2a}
If $x_0\neq y_0$, then the maximum point $x_\e$ of $u_\e(x)$ on $\Omega_\e$
satisfies
$$x_\e \to y_0~~ \mbox{as}~~\e\to 0,$$ where $y_0$ is the maximum point of $u_0(x)$.
\end{prop}
\begin{proof}
First, for $x\in\Omega_\e$ with $|x-x_0|=o(1)$, it holds $\log |x-x_0|<0$, and then from \eqref{s11-15-0d1}, we find
\begin{equation}\label{y11-15-02}
u_\e(x)\leq
u_0(x)+o(1),~\mbox{in}~\big\{x\in\Omega_\e,~ ~|x-x_0|=o(1)\big\}.
\end{equation}
If $x_0\neq y_0$, then $u_0(x_0)<u_0(y_0)$; here we use the uniqueness of the critical point of $u_0(x)$ (see \cite{ML71}). Hence by \eqref{y11-15-02}, we have
\begin{equation*}
u_\e(x)\leq \frac{u_0(x_0)+u_0(y_0)}{2}<u_0(y_0)\,\,~\mbox{in}~\big\{x\in\Omega_\e,~ ~|x-x_0|=o(1)\big\},
\end{equation*}
which gives us that $x_\e\notin \big\{x\in \Omega_\e, |x-x_0|=o(1)\big\}$.
Hence, combining this with Lemma \ref{alemma-1}, we know that
there exists a fixed small $r>0$ such that
the maximum point $x_\e$ of $u_\e(x)$ belongs to $B(y_0,r)$,
and $x_\e \to y_0$ as $\e\to 0$.
\end{proof}
\begin{prop}\label{Prop1-2}
If $x_0=y_0$, then the maximum point $x_\e$ of $u_\e(x)$ on $\Omega_\e$ can be written as
\begin{equation*}
x_\varepsilon=x_0+
\sqrt{-\frac{u_0(x_0)+o(1)}\la}\frac{1}{\sqrt{|\log\e|}}v,
\end{equation*}
where $\lambda=\max\{\lambda_1,\lambda_2\}$, $\lambda_1$ and $\lambda_2$ are the eigenvalues of the matrix $D^2u_0(x_0)$, and $v$ is an associated
eigenvector with $|v|=1$.
\end{prop}
\begin{proof}
Since $\Omega$ is convex and $x_0=y_0$, from \cite{ML71} we know that $u_0(x)$ admits exactly one critical point $x_0$. This means that for any $z_0\neq x_0$, there exists $r>0$ such that $u_0(x)$ has no critical points in $B(z_0,r)$. Moreover, from Lemma \ref{alemma-1}, we know that all critical points of $u_\e(x)$ belong to $$D_\e:=\Big\{x\in \Omega_\e, |x-x_0|=o(1)\Big\}.$$
Next, for $x\in D_\e$, if $\frac{|x-x_0|}{\e}< C$, from \eqref{s11-15-01} and \eqref{s11-15-01b}, we have
\begin{equation*}
\begin{split}
\big|\nabla u_\e(x)\big| \geq \frac{c_0}{\e|\log \e|}~~\,\mbox{for some}~~c_0>0,
\end{split}
\end{equation*}
which implies that $\nabla u_\e(x)=0$ admits no solutions if $x\in D_\e$ and $\frac{|x-x_0|}{\e}< C$.
Finally, we analyze the critical points of $u_\e(x)$ on $x\in D_\e$ and $\frac{|x-x_0|}{\e}\to \infty$.
From \eqref{s11-15-01}, we can deduce that
\begin{equation}\label{s11-16-01}
\begin{split}
0=&\frac{\partial u_\e(x_\e)}{\partial x_i}=
\frac{\partial u_0(x)}{\partial x_i}+ \frac{u_0(x_0) (x_i-x_{0,i})}{ |\log\e|\cdot |x-x_0|^2}
+o\Big(\frac{1}{ |\log\e|\cdot |x-x_0|}\Big)\\=&
\sum^2_{j=1}\left(\frac{\partial^2 u_0(x_0)}{\partial x_i\partial x_j}+o(1)\right)(x_{\e,j}-x_{0,j})+
\frac{x_{\e,i}-x_{0,i}}{|\log\e|\cdot |x_{\e}-x_0|^2}\Big(u_0(x_0)+o(1)\Big).
\end{split}\end{equation}
By \eqref{s11-16-01} we immediately get that $-\frac{u_0(x_0)}{|x_{\e}-x_0|^2|\log \e|}\to\lambda$ as $\e\to0$. Dividing \eqref{s11-16-01} by $|x_\e-x_0|$ and passing to the limit, we find that all critical points $x_\varepsilon$ of $u_\e$ can be written as
\begin{equation*}
x_\varepsilon=x_0+
\sqrt{-\frac{u_0(x_0)+o(1)}\la}\frac{1}{\sqrt{|\log\e|}}v,
\end{equation*}
where $\lambda=\lambda_1$ or $\lambda_2$, $\lambda_1$ and $\lambda_2$ are the eigenvalues of the matrix $D^2u_0(x_0)$, and $v$ is an associated
eigenvector with $|v|=1$.
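For the reader's convenience, we make this step explicit. Writing $v_\e:=\frac{x_\e-x_0}{|x_\e-x_0|}$ and dividing \eqref{s11-16-01} by $|x_\e-x_0|$, we obtain
\begin{equation*}
0=\Big(D^2u_0(x_0)+o(1)\Big)v_\e+\frac{u_0(x_0)+o(1)}{|\log\e|\cdot|x_\e-x_0|^2}\,v_\e.
\end{equation*}
Hence any limit point $v$ of $v_\e$ is a unit eigenvector of $D^2u_0(x_0)$, and the corresponding eigenvalue is
$\lambda=-\lim\limits_{\e\to 0}\frac{u_0(x_0)}{|\log\e|\cdot|x_\e-x_0|^2}$; this gives the expression for $x_\e$ displayed above.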
We now prove that
the maximum point $x_\e$ of $u_\e(x)$ on $\Omega_\e$ can be written as
\begin{equation*}
x_\varepsilon=x_0+
\sqrt{-\frac{u_0(x_0)+o(1)}\la}\frac{1}{\sqrt{|\log\e|}}v, ~\mbox{with}~\lambda=\max\{\lambda_1,\lambda_2\}.
\end{equation*}
If $\lambda_1=\lambda_2$, then the above result holds automatically.
Now let
\begin{equation}\label{ll1}
x_\varepsilon=x_0+
\sqrt{-\frac{u_0(x_0)+o(1)}{\la_1}}\frac{1}{\sqrt{|\log\e|}}v_1\,\,~\mbox{with}~D^2u_0(x_0)v_1=\lambda_1v_1.
\end{equation}
Then from \eqref{s11-15-01a}, we know
\begin{equation}\label{ll2}
\begin{split}
\frac{\partial^2 u_\e(x_\e)}{\partial x_i\partial x_j}=&
\frac{\partial^2 u_0(x_0)}{\partial x_i\partial x_j}
-\lambda_1 \Big(\delta_{ij}-v_{1i}v_{1j} \Big)
+o\big(1\big).
\end{split}\end{equation}
Next we take $v_2$ satisfying $D^2u_0(x_0)v_2=\lambda_2v_2$ with $|v_2|=1$ and $v_1\bot v_2$.
And then denoting
$\textbf{P}=\left(
\begin{array}{cc}
v_{11} & v_{21} \\
v_{12} & v_{22} \\
\end{array}
\right)$, we have
\begin{equation}\label{ll3}
\begin{split}
\textbf{P}^{-1}D^2u_\e(x_\e)\textbf{P}
=& \textbf{P}^{-1}D^2u_0(x_0)\textbf{P}-\Big(\lambda_1+o(1)\Big)\textbf{E}
+\lambda_1\textbf{P}^{-1}v_1v_1^T\textbf{P},
\end{split}\end{equation}
where $\textbf{E}$ is the unit matrix. Also we compute that
\begin{equation}\label{ll4}
\begin{split}
\textbf{P}^{-1}v_1v_1^T\textbf{P}=&
\left(
\begin{array}{cc}
v_{11} & v_{12} \\
v_{21} & v_{22} \\
\end{array}
\right) \left(
\begin{array}{c}
v_{11} \\
v_{12}\\
\end{array}
\right)
\left(
\begin{array}{cc}
v_{11} & v_{12} \\
\end{array}
\right)\left(
\begin{array}{cc}
v_{11} & v_{21} \\
v_{12} & v_{22} \\
\end{array}
\right) \\=& \left(
\begin{array}{c}
1 \\
0\\
\end{array}
\right)
\left(
\begin{array}{cc}
1 & 0\\
\end{array}
\right) = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}
\right).
\end{split}
\end{equation}
Hence from \eqref{ll1}, \eqref{ll2}, \eqref{ll3} and \eqref{ll4}, it follows
\begin{equation}\label{ll5}
\begin{split}
\textbf{P}^{-1}D^2u_\e(x_\e)\textbf{P}
=& \left(
\begin{array}{cc}
\lambda_1+o(1) & 0 \\
0 & \lambda_2-\lambda_1+o(1) \\
\end{array}
\right).
\end{split}\end{equation}
This gives us that
\begin{equation*}
\begin{split}
\mbox{det}~D^2u_\e(x_\e)=\Big(\lambda_1+o(1)\Big)\Big(\lambda_2-\lambda_1+o(1)\Big).
\end{split}\end{equation*}
If $\lambda_1<\lambda_2$, then $\mbox{det}~D^2u_\e(x_\e)<0$ and $x_\e$ is a saddle point of $u_\e(x)$. Hence in this case, $x_\e$ is not a maximum point of $u_\e(x)$. If $\lambda_1>\lambda_2$, then both eigenvalues of $D^2u_\e(x_\e)$ are negative and $x_\e$ is a maximum point of $u_\e(x)$. This completes the proof of Proposition \ref{Prop1-2}.
\end{proof}
\vskip 0.2cm
Now we are ready to prove Theorem \ref{th1.1}.
\vskip 0.2cm
\begin{proof}[\underline{{\textbf{Proof of Theorem \ref{th1.1}}}}] We divide the proof into the following two cases.
\vskip 0.2cm
\noindent\textbf{Case 1: $x_0\neq y_0$.} First from Proposition \ref{Prop1-2a}, we know that
the maximum point $x_\e$ of $u_\e(x)$ on $\Omega_\e$
satisfies
$$x_\e \to y_0~~ \mbox{as}~~\e\to 0.$$
Also
we recall that all eigenvalues of $D^2u_0(y_0)$ are negative. Hence by continuity,
\eqref{5-10-1} and Lemma \ref{alemma-1}, we find that
\begin{equation}\label{ll7}
\displaystyle\lim_{\e\to 0}\lambda_{\max}\big(D^2u_\e(x_\e)\big)=
\lambda_{\max}\big(D^2u_0(y_0)\big)
=\max\Big\{\lambda_1,\lambda_2\Big\}<0.
\end{equation}
\vskip 0.1cm
\noindent\textbf{Case 2: $x_0=y_0$.}
Without loss of generality, we suppose that $\lambda_2\leq \lambda_1<0$. Then from Proposition \ref{Prop1-2}, we know that
the maximum point of $u_\e(x)$ satisfies
\begin{equation*}
x_\varepsilon=x_0+
\sqrt{-\frac{u_0(x_0)+o(1)}{\la_1}}\frac{1}{\sqrt{|\log\e|}}v_1\,\,~\mbox{with}~D^2u_0(x_0)v_1=\lambda_1v_1.
\end{equation*}
And then \eqref{ll5} gives us that
\begin{equation*}
\lim_{\e\to 0}\lambda_{\max}\big(D^2u_\e(x_\e)\big)=
\max\Big\{\lambda_1,\lambda_2-\lambda_1\Big\}
\begin{cases}
<0,~\mbox{for}~\lambda_2<\lambda_1,\\[1mm]
=0, ~\mbox{for}~\lambda_2=\lambda_1.
\end{cases}
\end{equation*}
Hence for general $\lambda_1,\lambda_2$, it holds
\begin{equation*}
\lim_{\e\to 0}\lambda_{\max}\big(D^2u_\e(x_\e)\big)=\max\Big\{\lambda_1,\lambda_2,-|\lambda_2-\lambda_1|\Big\}
\begin{cases}
<0,~\mbox{for}~\lambda_2\neq \lambda_1,\\[1mm]
=0, ~\mbox{for}~\lambda_2=\lambda_1,
\end{cases}
\end{equation*}
which, together with \eqref{ll7}, completes the proof of Theorem \ref{th1.1}.
\end{proof}
\vskip 0.2cm
\noindent\textbf{Acknowledgments} ~Part of this work was done while Peng Luo was visiting the Mathematics Department
of the University of Rome ``La Sapienza" whose members he would like to thank for their warm hospitality. Hua Chen was supported by NSFC grants (No. 11631011,11626251). Peng Luo was supported by NSFC grants (No.11701204,11831009). | 208,677 |
TITLE: find an example of a function which satisfies certain properties
QUESTION [2 upvotes]: Is there any method for solving this kind of question:
$f(x)$ is defined and differentiable at every point, and the derivative of $f(x)$ fails to be differentiable at exactly two points. Give an example of such an $f(x)$.
REPLY [0 votes]: Some hints
Take a function $f$ which is equal to $0$ for $x<0$ and to a polynomial function of lowest degree possible for $x \ge 0$.
These should be chosen so that the function is continuous and differentiable at every point, and so that the derivative of $f$ fails to be differentiable at $0$.
Then what do you think of $g(x)=f(x+1)+f(x)$?
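For instance, one concrete choice consistent with these hints (certainly not the only one): take $f(x)=0$ for $x<0$ and $f(x)=x^2$ for $x\ge 0$. Then $f$ is differentiable everywhere, with $f'(x)=0$ for $x<0$ and $f'(x)=2x$ for $x\ge 0$, but $f'$ is not differentiable at $0$, since its one-sided difference quotients there are $0$ and $2$. Consequently $g(x)=f(x+1)+f(x)$ is differentiable everywhere and its derivative fails to be differentiable at exactly the two points $x=-1$ and $x=0$.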
And for the fun... A much harder question. What do you think of finding a function having those properties for an infinite number of points? | 53,965 |
Lewin Group (subsidiary of Ingenix)
Falls Church, VA
The Lewin Group primarily provides health care and human services policy, as well as analytics and consulting. The firm was founded in 1970 and is headquartered in Falls Church, VA. The firm has over 50 years of research experience and long-term development of client relationships. TLG’s experience has given it knowledge of diverse populations, as well as program and policy expertise, in order to provide its clients with the best strategies to address the most pressing challenges in health care.
- The Lewin Group’s areas of expertise include strategy and management consulting, public sector health care, learning and diffusion, program integrity, health services research, quality measures, health system modernization, advanced analytics, policy research, value-based payment systems, program design, population health, and implementation and evaluation.
Position Keywords
- Analyst
- Sr. Analyst
- Consultant
- Sr. Consultant
- Associate Director
- Managing Consultant
- Managing Director
Practice Areas
Locations
North America
- Washington, DC Area | 147,755 |
TITLE: Which analytically-given function could allow one to independently tune the power-law behavior of its left and right tails?
QUESTION [2 upvotes]: I would like to find the analytic expression of a function $f: \mathbb{R}^{+} \to \mathbb{R}^{+} $ which initially grows as a power-law, then achieves its maximum in a smooth bell-shaped way and then decreases as a power-law. In summary:
i) $f(x \to 0) \propto x^\alpha$ with $\alpha > 0$
ii) $f(x \to \infty) \propto x^\beta$ with $\beta < 0$
iii) The function has a bell-like shape somewhere between both extremes
Thanks.
REPLY [0 votes]: A simpler function that works is $$f(x) = \frac{x^\alpha}{C+x^{\alpha-\beta}},$$ where $C$ is a positive constant. Since $\alpha>0>\beta$, we have $\alpha-\beta>0$; hence as $x\to 0$ the denominator tends to $C$ and $f(x)\sim x^\alpha/C$, while as $x\to\infty$ the denominator is dominated by $x^{\alpha-\beta}$ and $f(x)\sim x^\beta$. In between, $f$ is smooth and attains a single maximum, giving the required bell-like shape. | 84,076 |
UPDATE 3-U.S. justices narrow workers' ability to press bias cases
* Supreme Court conservatives prevail in two 5-4 decisions
* Court backs employers in Indiana, Texas cases
* Cases involved minority employees at universities
By Jonathan Stempel and Lawrence Hurley
June 24 (Reuters) - A sharply divided U.S. Supreme Court on Monday made it harder for workers to sue their employers over alleged harassment and retaliation in the workplace.
In two identical 5-4 votes, the court's conservative majority ruled against a black Ball State University catering assistant who claimed she was harassed on the basis of race, and a University of Texas doctor of Middle Eastern descent who claimed he lost his job in retaliation for complaining of bias.
Both decisions prompted harsh criticism from liberal Justice Ruth Bader Ginsburg, who took the unusual step of reading for eight minutes from the bench from her dissents. She accused the majority of having "corralled" Title VII of the Civil Rights Act of 1964, and called on Congress to undo the damage.
That part of the law prohibits employment discrimination based on race, color, religion, sex and national origin.
"The decisions don't reflect realities of the workplace," said Michael Foreman, a Pennsylvania State University law professor who submitted briefs on behalf of the complaining workers. "They effectively don't protect the right to complain."
Monday's decisions add to a growing list from the court in this and recent terms favoring businesses, including cases involving class-action lawsuits.
"These are good decisions for employers and the economy generally," said Anthony Oncidi, a Proskauer Rose law firm partner in Los Angeles specializing in employment law. "This will help smoke out claims where employees cannot show that but for the alleged illegal actions, they would not have suffered."
In the Ball State case, Maetta Vance, the black employee, had sued over the alleged taunts and threats of physical harm by a white woman she considered to be her supervisor at the university in Muncie, Indiana.
Vance said Ball State eventually retaliated by making her a "glorified salad girl" who cut vegetables and washed fruit.
While the Supreme Court in 1998 said Title VII let harassment victims hold employers responsible for supervisors' improper conduct, it had never defined what a supervisor was.
Writing for the court majority, Justice Samuel Alito adopted a narrower definition of a supervisor than Vance proposed, and upheld a 2011 ruling by the 7th U.S. Circuit Court of Appeals.
He said an employer could be liable "only when the employer has empowered that employee to take tangible employment actions against the victim, i.e., to effect a 'significant change in employment status, such as hiring, firing, failing to promote, reassignment with significantly different responsibilities, or a decision causing a significant change in benefits.'"
The court rejected Vance's argument that a supervisor was anyone with day-to-day oversight of an employee's activities, and what Alito called the U.S. Equal Employment Opportunity Commission's "nebulous" guidance to link supervisory status to the exercise of significant oversight over such work.
'REDUCES THE INCENTIVES'
"It reduces the incentives for employers to police harassment," said Carolyn Shapiro, a professor at the IIT Chicago-Kent College of Law and director of its Supreme Court institute.
Daniel Ortiz, a lawyer for Vance, was not immediately available for comment. Ball State had no immediate comment.
In the Texas case, Naiel Nassar had been employed on the university faculty and as a physician at an affiliated hospital.
He resigned his teaching post amid alleged harassment by a supervisor including comments such as "Middle Easterners are lazy." Nassar said the hospital later withdrew a job offer in retaliation for his having complained about the harassment.
In letting the retaliation claim go forward, the 5th U.S. Circuit Court of Appeals in 2012 said Nassar need only show that retaliation was a motivating factor for the adverse job action.
The Supreme Court set a higher bar. In an opinion by Justice Anthony Kennedy, it said Title VII plaintiffs must show that "but for" having enforced their rights, retaliation would not have happened. He sent the case back to the 5th Circuit.
Brian Lauten, a lawyer for Nassar, said that "we're obviously disappointed" but expressed confidence in prevailing at a second trial. Tom Kelley, a spokesman for Texas Attorney General Greg Abbott, declined to comment.
Both majorities included Alito, Kennedy, Chief Justice John Roberts, and Justices Antonin Scalia and Clarence Thomas.
Justices Stephen Breyer, Sonia Sotomayor and Elena Kagan joined Ginsburg's dissents.
Ginsburg said the majority in the Vance case "relieves scores of employers of responsibility for the behavior of the supervisors they employ," and in the Nassar case "appears driven by a zeal to reduce the number of retaliation claims filed against employers."
Both decisions "should prompt yet another Civil Rights Restoration Act," she said.
Ginsburg had also dissented from the bench in a 2007 case that applied a 180-day limit to claims under Title VII for pay discrimination. Congress reversed that decision in 2009 by passing the Lilly Ledbetter Fair Pay Act, signed into law by President Barack Obama.
The cases are Vance v. Ball State University, U.S. Supreme Court. No. 11-556; and University of Texas Southwestern Medical Center v. Nassar, No. 12-484. | 415,806 |
There are some changes going on around here.
Most notably, Cinespiria has changed to Talking Pulp.
I’ll be working out the kinks and the look of the site over the next month. I will still post with the same regularity, except for the middle of July, as I’ll be traveling for work. But I hope to have everything updated and where I want it by the end of the month.
You can expect the same sort of content but I am trying to put more emphasis on articles that aren’t just reviews. Right now, I’ve been digging up some old pop culture articles I wrote for other sites over the last several years. Once my work travel is behind me and things stabilize a bit more, I will produce more original content.
Until then, enjoy the new look, the new name and not much changing in regards to the content this site is most known for: reviews, lists and some pop culture commentary.
And for those shitty movies, we’ll still keep the trusty Cinespiria Shitometer around to analyze the contents of those poopy pictures. Although, the name may change to the Pulp Shitometer, Pulpy Poopometer or something like that.
You must log in to post a comment. | 350,996 |
Shopping in Thailand
Shop ‘til you drop with bargains galore
Shopping is one of the greatest pastimes in Thailand—the best way to find everything you’ve ever needed, and things you never knew you wanted. Whether you’re visiting the mega malls of Phuket, the floating markets of Pattaya, or the world-renowned tailors of Bangkok—you won’t leave Thailand empty handed!
Markets vs. malls in Thailand
Thailand takes shopping to a whole new level. One look around the Chatuchak markets in Bangkok and you’ll know you have ventured outside of Australia! Explore the expansive weekend market with over 8,000 stall holders offering everything from food to furniture, clothes to jewellery, leather goods, garden sculptures and so much more! There are impressive markets all over Thailand; whether you visit an open-air market, night market or floating market, make sure to check which days they are open and plan your visit accordingly. Practice your bargaining skills and get lost in the maze of stalls—it’s truly an experience! Though you might pick up a bargain, make sure to check the quality of market items before you buy. If you want to avoid the large crowds and get a feel for more authentic Thai shopping, try some of the smaller local markets, generally found around palaces and temples.
On the opposite end of Thailand’s shopping experiences are the high-end malls and designer boutiques located throughout the country. Selling everything from fashion to jewellery and accessories through to art and home décor. The air-conditioned malls are a welcomed break from the midday heat in the open-air markets. As well as high-end malls, Thailand also has several outlet shopping malls including Outlet Mall Pattaya and Premium Outlet Phuket.
Where to shop in Thailand
When shopping in Thailand, you should research which areas are best for the items you want to buy. For instance, Chang Mai is well-known for its magnificent teak furniture and Bangkok has a range of world-renowned tailors able to whip up your new work wardrobe in a matter of hours. Here, you can select the fabric and design of your new suit or dress—all for half the price that you’d pay in Australia.
For the best shopping in Phuket, head to the ultra-modern Jungceylon Shopping Mall in Patong, which boasts an upmarket department store, fashion, sunglasses and accessories as well as perfume, electricals and plenty of dining and entertainment options. In Phuket Town itself, you will find the popular Chillva Market, Phuket Walking Street markets (Sundays only) and the Weekend Night Market on Chao Fa West Road— just 1-kilometre away from the Central Festival Phuket shopping mall. The markets boast souvenirs, toys, fashion, local handmade items and delicious food. Go for a wander and the sounds and scents wafting through the air will lead the way!
If you are staying in Pattaya, head to the beachfront Central Festival shopping centre housing over 300 shops including electricals, Western fashion brands and entertainment. Further down the beach is rival shopping centre Royal Garden Plaza which brags high street fashion, souvenirs, jewellery and much more. For a more traditional Thai shopping experience, head to the Pattaya Floating Market to pick up local handicrafts and see the impressive stilted houses.
Need help choosing a resort for your shopping trip in Thailand? Call our Thailand Holiday Experts on 1300 008 424 today!
| 278,974 |
\begin{document}
\normalfont
\title[Schur-Weyl duality]{
Schur-Weyl duality for certain \\ infinite dimensional $\U_q(\fsl_2)$-modules}
\author{K. Iohara, G.I. Lehrer and R.B. Zhang}
\thanks{Partially supported by the Australian Research Council.}
\thanks{The authors thank the Research Institute at Oberwolfach
for its hospitality at an RIP program, where this work was begun.}
\subjclass[2010]{17B37, 20G42 (primary) 81R50 (secondary)}
\keywords{Tangle category, Temperley-Lieb category of type $B$, Verma module.}
\address{Univ Lyon, Universit\'{e} Claude Bernard Lyon 1, CNRS UMR 5208, Institut Camille Jordan,
43 Boulevard du 11 Novembre 1918, F-69622 Villeurbanne cedex, France}
\email{[email protected]}
\address{School of Mathematics and Statistics,
University of Sydney, N.S.W. 2006, Australia}
\email{[email protected], [email protected]}
\begin{abstract}
Let $V$ be the two-dimensional simple module and $M$ be a projective Verma module for the quantum group of $\mathfrak{sl}_2$ at generic $q$.
We show that for any $r\ge 1$, the endomorphism algebra of $M\otimes V^{\otimes r}$ is isomorphic to the type $B$ Temperley-Lieb
algebra $\TLB_r(q, Q)$ for an appropriate parameter $Q$ depending on $M$. The parameter $Q$ is determined explicitly. We also use the cellular
structure to determine precisely for which values of $r$ the endomorphism algebra is semisimple. A key element of our method is to identify
the algebras $\TLB_r(q,Q)$ as the endomorphism algebras of the objects
in a quotient category of the category of coloured ribbon graphs of Turaev and Reshetikhin or the tangle diagrams of Freyd-Yetter.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction} \label{sect:intro}
Let $V$ be the two-dimensional simple module and $M$ be a projective Verma module for the quantum group $\U_q(\fsl_2)$ at generic $q$.
We show that the endomorphism algebra of $M\otimes V^{\otimes r}$ is isomorphic to the type $B$ \tl algebra $\TLB_r(q, Q)$ of degree $r$
with an appropriate parameter $Q$ depending on $M$.
We derive this from a more general result, which states that the full subcategory $\CT$ of the category of $\U_q(\fsl_2)$-modules consisting
of the objects $M\otimes V^{\otimes r}$ for all $r\in\Z_{\ge 0}$ is isomorphic to the \tl category of type $B$ \cite{GL03} with parameters $q,Q$.
The \tl algebra of type $B$ is the endomorphism algebra of an object of this category,
in the same way as the Brauer algebra in the Brauer category \cite{LZ14}.
There are two variants of the \tl category which we meet; one is defined as a subquotient of the coloured framed
tangle category extensively studied by Freyd, Yetter \cite{FY}, Turaev, Reshetikhin \cite{RT, RT-1, T}
and others. This is used to prove the result referred to above.
Another version of the \tl category of type $B$ which we denote by $\TLBB(q,Q)$ was first introduced in \cite{GL03};
we briefly explain it in Section \ref{sect:TLB-old},
where we show that the two versions are equivalent, with appropriate parameter matchings.
This enables us to bring into play the cellular structure of the \tl algebra, which we use to determine a precise criterion for
the semisimplicity of the endomorphism algebra.
To explain the main idea of this new formulation, we recall that a uniform description of the affine Hecke algebra, affine \tl
algebra and other related algebras was given in \cite{GL03}, where the unifying object is the braid group of type $B$.
The affine \tl algebra is a quotient of the group algebra of this group, which factors through the affine Hecke algebra,
while the \tl algebra of type $B$ is a further quotient of the affine \tl algebra. Now by embedding the braid group of
type $B$ of $r$-strings into the braid group of type $A$ of $(r+1)$-strings using Lemma \ref{lem:aff-fin}, we arrive at
the diagrammatic description of the affine \tl algebra etc. in terms of polar tangle diagrams \cite{A, DRV}.
One of the advantages of such diagrammatics is that it allows one to construct representations of the braid group of type $B$
and quotients of its group algebra using the theory of quasi triangular Hopf algebras such as quantum groups
(see, e.g., \cite{DRV, LZ06} and also Theorem \ref{thm:aff-Hecke-reps}).
Another advantage, which is more important to us here, is that it allows us to adapt techniques from quantum topology \cite{T} to
develop categorical formulations of the group algebra of the braid group of type $B$ and its quotient algebras, and relate
the resulting categories to categories of representations of quantum groups. We follow
this strategy to develop a new version of the \tl category of type $B$. As we will see, several new categories also arise
naturally from this process, which may be interesting in their own right.
We begin in Section \ref{sect:cat-RT} with a category $\RTC$ of coloured un-oriented tangle diagrams up to regular isotopy
\cite{FY, RT, T}. The objects of the category are sequences of elements of $\cC:=\{m, v\}$, and the modules of morphisms are spanned
by a class of coloured un-oriented tangle diagrams up to regular isotopy. We study several quotient categories of $\RTC$, including
the \tl category of type $B$ and some related new categories. Their endomorphism algebras are interesting in their own right, and are closely
related to endomorphism algebras of certain categories of representations \cite{ALZ, Ro, Ze}.
The affine \tl category $\ATLC(q)$ is the quotient of $\RTC$ obtained by imposing on morphisms the skein relations \eqref{eq:skein1}
and \eqref{eq:skein2}, and the free loop removal relation \eqref{eq:flr}.
The module $\HTL^{ext}_r(q):=\Hom_{\ATLC(q)}((m, v^r), (m, v^r)))$ of endomorphisms of the object
$(m, v^r):=(m, \underbrace{v, \dots, v}_r)$ in $\ATLC(q)$ forms an associative algebra (see Definition \ref{def:ext-ATL}),
which is generated by a subalgebra isomorphic to the affine \tl algebra $\HTL_r(q)$ together with two additional generators which are central.
This algebra has a more tractable structure than $\HTL_r(q)$ as we will see below.
The one parameter multi-polar \tl category $\ATLC(q, \Omega)$ is obtained as a quotient of $\ATLC(q)$ by specialising
the central elements mentioned above to appropriate scalars related to $\Omega$ (Definition \ref{def:ATLC-q}).
The \tl category $\TLBC(q, \Omega)$ of type $B$ is a full subcategory of $\ATLC(q, \Omega)$ with objects of the
form $(m, v^r)$ for all $r\in\Z_{\ge 0}$. The category $\ATLC(q, \Omega)$ contains the finite \tl category $\TLC(q)$ as a
full subcategory in two different ways (Remark \ref{rem:tlsub}); one of these
is also contained in $\TLBC(q, \Omega)$.
The interrelationships among the categories mentioned above and other categories which arise naturally in reformulating the \tl
category of type $B$ are recapitulated in Section \ref{sect:summary} and illustrated in Figure \ref{fig:cats}.
The structure of the category $\TLBC(q, \Omega)$ (in its two-parameter version) is studied in detail in
Section \ref{sect:TLBC-struct}. In particular, we are able to determine explicitly the dimensions of the morphism
spaces at generic $q$. The category $\TLBC(q, \Omega)$ is quite different from the
\tl category $\TLBB(q,Q)$ of type $B$ introduced in \cite{GL03}. Nevertheless, we give a
direct proof that $\TLBC(q, \frac{\Omega}{\sqrt{-1}})$ and $\TLBB(q,Q)$ are isomorphic in Section \ref{sect:TLB-old}.
We construct in Theorem \ref{thm:RT} a tensor functor $\widehat\CF$ from the category $\RTC$ to the category
$\CO_{int}$ of finitely generated integral weight $\U_q(\fsl_2)$-modules of type-${\bf 1}$, which are locally
$\U_q(\fb)$-finite (where $\U_q(\fb)$ is
the quantum Borel subalgebra). This functor factors through the category $\ATLC(q, \Omega)$ of type $B$
with the parameter $\Omega$, whose dependency on $M$ is given by \eqref{eq:omega-value}. This induces a functor from $\ATLC(q, \Omega)$ to $\CO_{int}$,
which restricts to a functor $\CF': \TLBC(q, \Omega) \longrightarrow \CT$, where $\CT$ is the full
subcategory of $\CO_{int}$ mentioned above.
Since $M$ is a projective Verma module for $\U_q(\fsl_2)$, the structure of the category $\CT$ is relatively easy to understand.
Putting together the structural information for $\TLBC(q, \Omega)$ and $\CT$, we are able to show in
Theorem \ref{thm:main} that $\CF'$ is an isomorphism of categories.
The categorical development here also leads to improved understanding of the theory of the affine \tl algebra.
For example, Lemma \ref{lem:central-skein} shows that the translation generator of
the $\HTL_r(q)$ subalgebra of $\HTL^{ext}_r(q)$
satisfies a $3$-term skein relation over the centre. This skein relation involves the additional (central) generators,
and is thus not a relation inside the $\HTL_r(q)$ subalgebra. This gives a conceptual explanation of the fact \cite{GL03}
that the translation generator always has a characteristic polynomial of degree $2$ in the cell and irreducible
representations of the affine \tl algebra (see Remark \ref{rem:skein-universal}).
Throughout the paper, we work over the ground field $\CK_0:=\C(q^{\frac{1}{k}})$, where $q$ is an indeterminate over $\C$ such that $(q^{\frac{1}{k}})^k=q$
for some fixed positive integer $k$.
\section{The affine Temperley-Lieb algebra}\label{sect:ATL-algebra}
We begin with a brief discussion of the affine Temperley-Lieb algebra and related algebras following \cite{GL98, GL03}.
The unifying object is the braid group of type $B$.
Among the quotients of its group algebra are the extended affine Hecke algebra of type $A$, the extended affine Temperley-Lieb algebra, the affine BMW algebra
and the Temperley-Lieb algebras $\TLB_r(q,Q)$ of type $B$.
\subsection{The Artin braid group $\Gamma_r$ of type $B$}
Fix an integer $r\ge 2$.
The braid group of type $A$ on $r$ strings is denoted by $B_r$ and has the following standard presentation.
It has generators $b_1, \dots, b_{r-1}$, and defining relations
\begin{eqnarray}\label{eq:braid}
\begin{aligned}
b_i b_{i+1} b_i = b_{i+1} b_i b_{i+1}, \quad b_i b_j = b_j b_i, \quad |i-j|>1.
\end{aligned}
\end{eqnarray}
The braid group $\Gamma_r$ of type $B$ \cite{GL03} is generated by $\{\sigma_i, \xi_1\mid 1\le i\le r-1\}$ with the following relations:
\begin{enumerate}
\item
the $\sigma_i$ satisfy
the standard braid relations \eqref{eq:braid};
\item $\xi_1$ commutes with all $\sigma_j$ for $j\ge 2$, and
\begin{eqnarray}\label{eq:sigma-xi}
\sigma_1 \xi_1 \sigma_1 \xi_1 = \xi_1 \sigma_1 \xi_1\sigma_1.
\end{eqnarray}
\end{enumerate}
Define $\xi_{i+1} := \sigma_i \xi_i \sigma_i$ for $1\le i\le r-1$.
Then we have (see \cite[Prop. (2.6)]{GL03})
\[
\begin{aligned}
\xi_i \xi_j=\xi_j\xi_i, \quad \sigma_j \xi_i = \xi_i \sigma_j, \quad j>i.
\end{aligned}
\]
It is known that $\Gamma_r$ contains $B_r$ as a subgroup, and it also clearly contains the subgroup $\Z^r$ generated by the elements $\xi_i$.
The following result appears to be well-known.
\begin{lemma} \label{lem:aff-fin}
Let $B_{r+1}$ be the type $A$ braid group on $r+1$ strings, generated by $b_0, b_1, \dots, b_{r-1}$ subject to the standard relations (see \eqref{eq:braid}).
There exists an injective group homomorphism given by
\[
\eta_r: \Gamma_r \longrightarrow B_{r+1}, \quad \xi_1\mapsto b_0^2, \quad \sigma_i \mapsto b_i, \quad 1\le i\le r-1.
\]
\end{lemma}
It is a pleasant exercise to verify directly that $\eta_r$ preserves the relation \eqref{eq:sigma-xi}.
We have
\[
\begin{aligned}
\eta_r(\sigma_1 \xi_1 \sigma_1 \xi_1 )&=b_1 b_0^2 b_1 b_0^2 = b_1 b_0 (b_0 b_1 b_0) b_0 \\
&= (b_1 b_0 b_1) (b_0 b_1 b_0)=b_0 (b_1 b_0 b_1) b_0 b_1\\
& = b_0 ^2 b_1 b_0 ^2 b_1= \eta_r(\xi_1 \sigma_1 \xi_1\sigma_1).
\end{aligned}
\]
\begin{definition}\label{def:grprings}
We write $\cB_r$ for the group ring $\CK_0 B_r$ and $\CG_r$ for the group ring $\CK_0 \Gamma_r$. Of
course $\cB_r\subset \CG_r$.
\end{definition}
\subsection{Quotient algebras of the group algebra of $\Gamma_r$}
\subsubsection{The (extended) affine Hecke algebra of type $A$}
Let $J_r$ be the two-sided ideal of the group algebra $\CG_r$ of the braid group of type B generated by $(\sigma_i -q)(\sigma_i +q^{-1})$ for all $i$.
The affine Hecke algebra $\HH_r(q)$ (cf. \cite[Def. (3.1)]{GL03}) is the quotient algebra of $\CG_r$ by the ideal $J_r$:
$
\HH_r(q):= \CG_r/J_r.
$
Denote by $T_i$ and $X_j$ respectively the images of $\sigma_i$ and $\xi_j$ in $\HH_r(q)$. Then the elements $T_i$ ($1\le i\le r-1$) and $X_j$ ($1\le j\le r$)
satisfy the following relations.
\be\label{eq:ha}
\begin{aligned}
&T_i T_j = T_j T_i, \quad |i-j|>1, \\
&T_i T_{i+1} T_i = T_{i+1} T_i T_{i+1}, \\
&(T_i -q)(T_i +q^{-1})=0, \\
&X_{i+1} = T_i X_i T_i, \quad X_i X_j =X_j X_i.
\end{aligned}
\ee
Moreover the relations \eqref{eq:ha} form a set of defining relations of $\HH_r(q)$.
The elements $T_i$ generate a subalgebra of $\HH_r(q)$, which is isomorphic to the finite dimensional Hecke algebra $H_r(q)$
of type $A_{r-1}$, with $ \CK_0$-basis $\{T_w\mid w\in\Sym_r\}$. Note that
$H_r(q)$ is the quotient of $\CB_r$ by the two-sided ideal $J_r\cap\CB_r$.
One deduces easily from the relations \eqref{eq:ha} that for any Laurent polynomial $f=f(X^{\pm 1}_1, \dots, X^{\pm 1}_r)$,
\begin{eqnarray}\label{eq:Bernstein-relation}
T_i f - f^{s_i} T_i = (q-q^{-1}) \frac{f - f^{s_i}}{1-X_i X_{i+1}^{-1}},
\end{eqnarray}
where $f^{s_i}$ is obtained from $f$ by interchanging $X^{\pm 1}_i$ and $X^{\pm 1}_{i+1}$.
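For example, taking $f=X_i$ (so that $f^{s_i}=X_{i+1}$), the right hand side of \eqref{eq:Bernstein-relation} equals $-(q-q^{-1})X_{i+1}$, and the relation reads
\[
T_iX_i-X_{i+1}T_i=-(q-q^{-1})X_{i+1}, \quad\text{that is,}\quad T_iX_i=X_{i+1}T_i^{-1},
\]
which is a restatement of the relation $X_{i+1}=T_iX_iT_i$ in \eqref{eq:ha}.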
From \eqref{eq:Bernstein-relation} it is evident that the symmetric Laurent polynomials in the elements $X_i^{\pm 1}$ are all
central elements of $\HH_r(q)$, and it is known \cite{L} that they generate the center.
The elements $X_i^{\pm 1}$ generate the Laurent polynomial ring $\CK_0[X^{\pm 1}_1, \dots, X^{\pm 1}_r]$. We have the vector space isomorphism
$
\HH_r(q)=H_r(q)\otimes_{\CK_0}\CK_0[X^{\pm 1}_1, \dots, X^{\pm 1}_r].
$
\subsubsection{The affine Temperley-Lieb algebra}\label{ss:atl}
Let $\langle \CE_3\rangle$ be the two-sided ideal of $\HH_r(q)$ generated by
$
\CE_3=\sum\limits_{w\in\Sym_3}(-q^{-1})^{\ell(w)} T_w,
$
where $\Sym_3=\Sym\{1, 2,3\}$ is the symmetric group on $\{1, 2,3\}$,
regarded as a subgroup of $\Sym_r$ on $\{1, 2, \dots, r\}$. For $i\in\{ 3, \dots, r\}$, let
$
\CE_i=\sum\limits_{w}(-q^{-1})^{\ell(w)} T_w,
$
where the sum is over the elements of $\Sym\{i, i-1, i-2\}$. Then $\CE_i\in\langle \CE_3\rangle$ for all $i$.
We define the affine Temperley-Lieb algebra \cite{GL03} by
\be\label{eq:defhtl}
\HTL_r(q) = \HH_r(q)/ \langle \CE_3\rangle.
\ee
Let $C_i=T_i-q\in\HH_r(q)$ ($i=1,\dots, r-1$). These are the Kazhdan-Lusztig basis elements corresponding
to the simple reflections in $\Sym_r$, and satisfy $C_i^2=-(q+q\inv)C_i$.
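Indeed, the quadratic relation in \eqref{eq:ha} gives $T_i^2=(q-q^{-1})T_i+1$, whence
\[
C_i^2=(T_i-q)^2=T_i^2-2qT_i+q^2=-(q+q^{-1})T_i+1+q^2=-(q+q^{-1})(T_i-q)=-(q+q^{-1})C_i.
\]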
Moreover, applying the automorphism $T_w\mapsto(-1)^{\ell(w)}(T_{w\inv})\inv$ of $\HH_r(q)$
to the relation at the bottom of
\cite[p. 487]{GL03}, we see, using the fact that this automorphism maps $E_i$ of {\it loc. cit.} to $q^6\CE_{i}$, that
\be\label{eq:tlc}
C_iC_{i+1}C_i-C_i=C_{i+1}C_iC_{i+1}-C_{i+1}=-q^3\CE_3.
\ee
Denote the image of $C_i$ in $\HTL_r(q)$ by $ e_i$ (so that $T_i\mapsto e_i+q$) and the image of $X_i$ by $x_i$.
Then $\HTL_r(q)$ is generated by $e_i$ ($1\le i \le r-1$) and $x_i^{\pm 1}$ ($1\le i \le r$)
subject to the following relations.
\be\label{eq:atlrel}
\begin{aligned}
&e_i e_j = e_j e_i, \quad |i-j|>1, \\
&e_i^2 = - (q+q^{-1}) e_i, \quad e_i e_{i\pm 1} e_i = e_i, \\
&x_{i+1} = (q+e_i) x_i (q+e_i), \quad x_i x_j =x_j x_i.\\
\end{aligned}
\ee
It follows from the above relations that for any Laurent polynomial $f$ in $x^{\pm 1}_1, \dots, x^{\pm 1}_r$,
\[
e_i f - f^{s_i} e_i = (qx_i x^{-1}_{i+1}-q^{-1}) \frac{f - f^{s_i}}{1-x_i x_{i+1}^{-1}}.
\]
The relation $x_1x_2=x_2x_1$ is also easily shown to amount to the relation
\begin{eqnarray}\label{eq:constraints}
q e_1x_1^2+e_1x_1e_1x_1 =qx_1^2e_1+x_1e_1x_1e_1.
\end{eqnarray}
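Indeed, substituting $x_2=(q+e_1)x_1(q+e_1)$ into $x_1x_2=x_2x_1$ and expanding both sides, we obtain
\[
q^2x_1^2+qx_1^2e_1+qx_1e_1x_1+x_1e_1x_1e_1=q^2x_1^2+qx_1e_1x_1+qe_1x_1^2+e_1x_1e_1x_1,
\]
and cancelling the common terms $q^2x_1^2$ and $qx_1e_1x_1$ yields \eqref{eq:constraints}.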
Relation \eqref{eq:constraints} implies the following result.
\begin{lemma}\label{lem:extra-inv}
Let $\delta=-(q+q^{-1})$. Then the following holds in $\HTL_r(q)$.
\begin{eqnarray}\label{eq:extra-inv}
\delta(q e_1x_1^2+e_1x_1e_1x_1) =q e_1 x_1^2e_1+ e_1 x_1e_1x_1e_1
= \delta(qx_1^2e_1+x_1e_1x_1e_1).
\end{eqnarray}
\end{lemma}
\begin{proof}
Multiplying \eqref{eq:constraints} by $e_1$ on the left (resp. right), we obtain the first (resp. second) equality.
\end{proof}
\begin{remark} The significance of the lemma is best appreciated when placed in the context of the extended
affine Temperley-Lieb algebra $\HTL^{ext}_r(q)$ defined in Definition \ref{def:ext-ATL},
where \eqref{eq:extra-inv} may be understood as a ``skein relation'' imposed on the centre of $\HTL^{ext}_r(q)$,
(see Lemma \ref{lem:central-skein}).
\end{remark}
Another presentation of $\HTL_r(q)$ of which we shall make use below is as follows.
Recall that the element $\tau:=\xi_1b_1b_2\dots b_{r-1}\in\Gamma_r$
is represented as the ``twisting braid'', and satisfies $\tau b_i\tau\inv=b_{i+1}$, where the subscript $i$ is taken mod $r$.
Denote by $V$ the image of $\tau$ in $\HH_r(q)$ and in $\HTL_r(q)$. Then $\HTL_r(q)$ is generated by
$e_1,\dots,e_r,V$ with relations
\be\label{eq:atlrel2}
\begin{aligned}
&e_i e_j = e_j e_i, \quad |i-j|>1, \\
&e_i^2 = - (q+q^{-1}) e_i, \quad e_i e_{i\pm 1} e_i = e_i, \\
&Ve_iV\inv=e_{i+1},\\
\end{aligned}
\ee
for $i=1,\dots,r$, where the index is taken mod $r$.
Note that $(e_i+q)\inv=e_i+q\inv$ is the image of $T_i\inv$ in $\HTL_r(q)$.
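(Indeed, $(e_i+q)(e_i+q^{-1})=e_i^2+(q+q^{-1})e_i+1=1$, using $e_i^2=-(q+q^{-1})e_i$.)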
\subsubsection{The Temperley-Lieb algebra of type $B$}
\begin{definition} \label{def:TLB-alg} \cite{GL03} Given an invertible scalar $Q$, let $J_Q$ be the two-sided
ideal of the affine Temperley-Lieb algebra $\HTL_r(q)$ generated by the elements
\[
(x_1-Q)(x_1+Q^{-1}) \quad\text{and }\; e_1 x_1 e_1 + q(Q-Q^{-1}) e_1.
\]
The Temperley-Lieb algebra of type $B$, denoted by $\TLB_r(q, Q)$, is defined by
\[
\TLB_r(q, Q):=\HTL_r(q)/J_Q.
\]
\end{definition}
We denote by $\pi_r(Q): \HTL_r(q) \longrightarrow \TLB_r(q, Q)$ the canonical surjection.
We will also denote the images of the generators $x_1^{\pm 1}$ and $e_i$ in
$\TLB_r(q, Q)$ by the same symbols.
\begin{remark}
The relations in \eqref{eq:extra-inv} are identically satisfied in $\TLB_r(q, Q)$.
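Indeed, in $\TLB_r(q, Q)$ we have $x_1^2=(Q-Q^{-1})x_1+1$ and $e_1x_1e_1=-q(Q-Q^{-1})e_1$, and a direct computation shows that each of the three expressions in \eqref{eq:extra-inv} reduces to $\delta q e_1$.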
\end{remark}
Note that the subalgebra generated by $e_i$ ($i=1, 2, \dots, r-1$) in $\TLB_r(q, Q)$ is isomorphic to the
usual Temperley-Lieb algebra $\TL_r(q)$ on $r$ strings.
\subsubsection{The affine BMW algebra}
The affine BMW algebra \cite{DRV} is another interesting quotient algebra of the group algebra $\CG_r$ of the braid group of type B.
Fix non-zero scalars $z, y\in\CK_0$. Define elements $\theta_i\in\CG_r$ by
\[
\theta_i:= 1-\frac{\sigma_i- \sigma_i^{-1}}{z}, \quad i=1,2,\dots,r-1
\]
and let $J_{BMW}$ be the two-sided ideal generated by the elements
\[
\begin{aligned}
&\theta_i \sigma_i - y \theta_i, \quad \sigma_i \theta_i - y \theta_i, \quad
&\theta_i \sigma_{i-1}^{\pm 1} \theta_i - y^{\mp 1} \theta_i, \quad
\theta_i \sigma_{i+1}^{\pm 1} \theta_i - y^{\mp 1} \theta_i
\end{aligned}
\]
for all valid indices. The affine BMW algebra $\HBMW_r(z, y)$ is defined by
\[
\HBMW_r(z, y)= \CG_r/J_{BMW}.
\]
We will denote the images of
$\sigma_i$ and $\theta_i$ in $\HBMW_r(z, y)$ by $g_i$ and $e_i$ respectively,
and that of $\xi_i$ by $X_i$. Then $\HBMW_r(z, y)$ is generated by $g_i$, $e_i$ ($1\le i\le r-1$) and $X_j$ ($1\le j\le r$) subject to the following relations.
\begin{eqnarray*}
&\text{all $g_i$ and $X_j$ are invertible}, \\
&g_i g_j = g_j g_i, \quad |i-j|>1, \\
&g_i g_{i+1} g_i = g_{i+1} g_i g_{i+1}, \\
&e_i= 1 - \frac{g_i - g_i^{-1}}{z}, \\
&e_i g_i = g_i e_i = y e_i, \\
&e_i g_{i-1}^{\pm 1} e_i = e_i g_{i+1}^{\pm 1} e_i =y^{\mp 1} e_i, \\
&X_{i+1}= g_i X_i g_i, \quad X_i X_j =X_j X_i.
\end{eqnarray*}
It is easy to derive the following relations from the defining relations.
\[
e_i e_{i\pm 1} e_i = e_i, \quad e_i^2 = \left(1- \frac{y - y^{-1}}{z}\right)e_i.
\]
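For instance, the second of these follows from the definition of $e_i$ together with $e_ig_i=ye_i$ (which, upon multiplying on the right by $g_i^{-1}$, also gives $e_ig_i^{-1}=y^{-1}e_i$):
\[
e_i^2=e_i\Big(1-\frac{g_i-g_i^{-1}}{z}\Big)=e_i-\frac{ye_i-y^{-1}e_i}{z}=\Big(1-\frac{y-y^{-1}}{z}\Big)e_i.
\]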
We will not discuss this algebra further, since we shall not need it in this paper.
\subsection{A diagrammatic presentation of the braid group of type $B$}\label{sect:diagrams}
A widely used diagrammatic presentation of the braid group of type $B$ is in terms of braids on cylinders \cite{GL03}.
We shall here adopt an alternative presentation which uses braids with a pole \cite{A}.
Let us represent
the braid group $B_{r+1}$ of type $A$ in terms of braids in the standard way. Then the second presentation
of $\Gamma_r$ is obtained from this by regarding it as a subgroup of $B_{r+1}$ via the injection $\eta_r$ defined by
Lemma \ref{lem:aff-fin}, where the $0$-th string is considered to be a pole. We can also turn a cylindrical braid into a braid
with a pole by regarding the core of the cylinder as the pole and pushing it to the far left.
To distinguish elements of $\Gamma_r$ and $B_{r+1}$, we draw a braid of type $B$ as a diagram
consisting of a pole and $r$ strings on its right, where a string can only cross the pole an even number of times.
The strings will be drawn as thin arcs, and the pole drawn as a vertical thick arc.
The generators of $\Gamma_r$ can be depicted as follows.
\[
\begin{aligned}
&
\begin{picture}(150, 70)(-20,0)
\put(-45, 28){$\sigma_i=$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){60}}
}
\put(0, 0){\line(0, 1){60}}
\put(40, 0){\line(0, 1){60}}
\put(18, 30){...}
\qbezier(60, 0)(60, 0)(68, 27)
\qbezier(72, 33)(72, 33)(80, 60)
\qbezier(60, 60)(70, 30)(80, 0)
\put(100, 0){\line(0, 1){60}}
\put(120, 0){\line(0, 1){60}}
\put(105, 30){...}
\put(56, -10){\small$i$}
\put(72, -10){\small{$i$+1}}
\put(130, 0){, }
\end{picture}
\\
&
\begin{picture}(150, 70)(-20,0)
\put(-52, 28){$\sigma_i^{-1}=$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){60}}
}
\put(0, 0){\line(0, 1){60}}
\put(40, 0){\line(0, 1){60}}
\put(18, 30){...}
\qbezier(60, 60)(60, 60)(68, 33)
\qbezier(80, 0)(80, 0)(72, 27)
\qbezier(60, 0)(70, 30)(80, 60)
\put(100, 0){\line(0, 1){60}}
\put(120, 0){\line(0, 1){60}}
\put(105, 30){...}
\put(56, -10){\small$i$}
\put(72, -10){\small{$i$+1}}
\put(130, 0){, }
\end{picture}
\end{aligned}\\
\]
\[
\begin{aligned}
&
\begin{picture}(150, 70)(-20,0)
\put(-48, 28){$\xi_1=$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){18}}
\put(-15, 22){\line(0, 1){38}}
}
\qbezier(0, 60)(-7, 50)(-12, 42)
\qbezier(-18, 38)(-25, 30)(-15, 20)
\qbezier(-15, 20)(-15, 20)(0, 0)
\put(20, 0){\line(0, 1){60}}
\put(40, 0){\line(0, 1){60}}
\put(60, 30){............}
\put(120, 0){\line(0, 1){60}}
\put(130, 0){, }
\end{picture}
\\
&
\begin{picture}(150, 70)(-20,0)
\put(-55, 28){$\xi_1^{-1}=$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){38}}
\put(-15, 42){\line(0, 1){18}}
}
\qbezier(-16, 40)(-25, 30)(-18, 20)
\qbezier(-16, 40)(-16, 40)(5, 60)
\qbezier(-12, 17)(-12, 17)(5, 0)
\put(20, 0){\line(0, 1){60}}
\put(40, 0){\line(0, 1){60}}
\put(60, 30){............}
\put(120, 0){\line(0, 1){60}}
\put(130, 0){.}
\end{picture}
\end{aligned}\\
\]
The multiplication of $\Gamma_r$ is then given by concatenation of diagrams.
Given two diagrams $D$ and $D'$, their concatenation $D'\circ D$ is
obtained by composing the diagrams
$D$ and $D'$ by joining the points on the bottom of $D$ with the points on the top of $D'$.
It is easy to see that
the elements $\xi_j$ ($1\le j \le r$) can be depicted as in Figure \ref{fig:xi}.
\begin{figure}[h]
\begin{center}
\begin{picture}(150, 100)(-30,0)
\put(-48, 45){$\xi_j=$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){25}}
\put(-15, 30){\line(0, 1){60}}
}
\put(-5, 0){\line(0, 1){17}}
\put(-5, 22){\line(0, 1){68}}
\put(-5, 29){\line(0, 1){61}}
\put(15, 0){\line(0, 1){5}}
\put(15, 10){\line(0, 1){80}}
\put(0, 45){...}
\qbezier(-18, 60)(-40, 30)(30, 0)
\qbezier(17, 82)(25, 87)(30, 90)
\qbezier(13, 80)(0, 72)(-3, 70)
\qbezier(-7, 68)(-8, 67)(-12, 64)
\put(45, 0){\line(0, 1){90}}
\put(58, 45){......}
\put(90, 0){\line(0, 1){90}}
\put(27, -10){\small$j$ }
\put(100, 0){. }
\end{picture}
\end{center}
\caption{Diagram for $\xi_j$}
\label{fig:xi}
\end{figure}
From the diagrammatic presentation of the braid group of type $B$, we can derive diagrammatic
presentations of the quotient algebras of the group algebra $\CG_r$ discussed above.
\subsection{Quantum groups}
Let $A=(a_{i j})$ be the Cartan matrix of a finite dimensional simple
Lie algebra or affine
Lie algebra $\fg$. If $\{\alpha_i\vert\, i=1, 2, \dots\ell\}$ is the set of simple
roots of $\fg$, the Cartan matrix
$A$ is defined by $a_{i j} = \frac{2(\alpha_i, \alpha_j)}{(\alpha_i, \alpha_i)}$,
where the bilinear form on
the weight space is normalised so that $(\alpha_i, \alpha_i)=2$ for the short
simple roots. Let $q_i=q^{(\alpha_i, \alpha_i)/2}$ for all $i$.
The quantum group $\U_q(\fg)$ over $\CK_0$ is generated by
$\{E_i, F_i, K_i, K_i^{-1}\mid 1\le i\le \ell\}$ subject to the relations
\begin{eqnarray}
& K_i K_i^{-1}=1, \quad K_i K_j = K_j K_i, \label{eq:KK}\\
&K_i E_j K_i^{-1}= q_i^{a_{i j}} E_j, \quad K_i F_j K_i^{-1}= q_i^{-a_{i j}} F_j, \label{eq:KEKF}\\
&E_i F_j - F_j E_i = \delta_{i j} \frac{K_i - K_i^{-1}}{q_i-q_i^{-1}}, \label{eq:EF}\\
&\sum\limits_{r=0}^{1-a_{i j}} (-1)^r \begin{bmatrix}1-a_{i j}\\ r\end{bmatrix}_{q_i} E_i^{1-a_{i j}-r}
E_j E_i^r=0, \quad i\ne j, \label{eq:Serre-E}\\
&\sum\limits_{r=0}^{1-a_{i j}} (-1)^r \begin{bmatrix}1-a_{i j}\\ r\end{bmatrix}_{q_i} F_i^{1-a_{i j}-r}
F_j F_i^r=0, \quad i\ne j, \label{eq:Serre-F}
\end{eqnarray}
where $\begin{bmatrix}n\\ r\end{bmatrix}_q=\frac{[n]_q!}{[r]_q![n-r]_q!}$ and $[j]_q=\frac{q^j- q^{-j}}{q-q^{-1}}$.
As is well known, $\U_q(\fg)$ is a Hopf algebra with co-multiplication defined by
\[
\begin{aligned}
&\Delta(E_i)=E_i\otimes K_i + 1\otimes E_i, \\
&\Delta(F_i)=F_i\otimes 1 + K_i^{-1}\otimes F_i,\\
&\Delta(K_i)=K_i\otimes K_i.
\end{aligned}
\]
By slightly modifying the definition above, we also obtain the quantum groups
$\U_q (\gl_k)$ \cite{J}\cite[\S 6.1]{LZ06} and $\U_q (\mathfrak{o}_k)$ \cite[\S8.1.2]{LZ06}.
Henceforth $\U_q (\fg)$ will denote either the quantum group of a finite dimensional simple Lie algebra $\fg$, or
$\U_q (\gl_k)$ and $\U_q (\mathfrak{o}_k)$.
We will consider only left $\U_q(\fg)$-modules of type ${\bf 1}$.
Given two $\U_q(\fg)$-modules $V$ and $W$ of which at least one is finite dimensional,
we denote by
$
{R_{V, W}} : V\otimes W\longrightarrow V\otimes W
$
the $R$-matrix of $\U_q(\fg)$ on $V\otimes W$.
Let $\check{R}_{V, W}=\tau_{V, W} R_{V, W}$, where $\tau_{V, W}$ is the permutation defined by
\[
\tau_{V, W}: V\otimes W \longrightarrow W \otimes V, \quad v\otimes w \mapsto w\otimes v.
\]
Then $\check{R}_{V, W}\in\Hom_{\U_q(\fg)}(V\otimes W, W\otimes V)$ is an isomorphism.
\noindent{\bf Notation}. It follows from the above that if $V$ and $W$ are $\U_q(\fg)$-modules, of which at
least one is finite dimensional, then $\check R_{W,V}\circ \check R_{V,W}\in\End_{\U_q(\fg)}(V\ot W)$.
We shall write $\check R_{V,W}^2$ for the endomorphism $\check R_{W,V}\circ \check R_{V,W}$.
\begin{proposition}\label{prop:braid}
Let $U, V, W$ be $\U_q(\fg)$-modules, at least two of which are finite dimensional.
\begin{enumerate}
\item
The following relation holds in $\Hom_{\U_q(\fg)}(U\otimes V\otimes W, W\otimes V\otimes U)$:
\begin{eqnarray}\label{eq:YBE}
\begin{aligned}
(\check{R}_{V, W}\otimes\id_U)\circ(\id_V\otimes \check{R}_{U, W})\circ (\check{R}_{U, V}\otimes\id_W)
\\
=(\id_W\otimes \check{R}_{U, V})\circ(\check{R}_{U, W}\otimes\id_V)\circ(\id_U\otimes \check{R}_{V, W}).
\end{aligned}
\end{eqnarray}
This is the celebrated Yang-Baxter equation.
\item
The following relation holds in $\End_{\U_q(\fg)}(U\ot V\ot W)$:
\be\label{eq:brb}
\begin{aligned}
\left((\check R_{V,U}^2)\ot\id_W\right)\circ \left(\id_U\ot\check R_{W,V}\right)\circ \left((\check R_{W,U}^2)\ot\id_V\right)
\circ\left(\id_U\ot\check R_{V,W}\right)=&\\
\left(\id_U\ot\check R_{V,W}\right)\circ\left((\check R_{W,U}^2)\ot\id_V\right)\circ
\left(\id_U\ot\check R_{W,V}\right)\circ \left((\check R_{V,U}^2)\ot\id_W\right).&
\end{aligned}
\ee
\end{enumerate}
\end{proposition}\begin{proof}
Part (1) is well known. Part (2) follows from part (1) and Lemma \ref{lem:aff-fin}. Note that \eqref{eq:YBE} and \eqref{eq:brb} are
essentially braid relations of type $A$ and $B$ respectively.
\end{proof}
\subsection{Representations of the group algebra of $\Gamma_r$ and quotient algebras}
We continue to assume that, as in the previous section, $\fg$ is a finite dimensional Lie algebra and that $\U_q(\fg)$
is its quantised enveloping algebra.
\begin{lemma}\label{lem:braid-B-rep}
Let $M, V$ be left $\U_q(\fg)$-modules of type $\bf 1$ with $\dim_{\CK_0} V<\infty$.
Then
$M\otimes V^{\otimes r}$ is naturally a left $\CG_r$-module, where the corresponding left representation is given by
\[
\begin{aligned}
&\nu_r^a: \CG_r\longrightarrow \End_{\U_q(\fg)}(M\otimes V^{\otimes r}), \text{ where }\\
&\nu_r^a(\xi_1):=\check{R}_{V, M}\check{R}_{M, V}\otimes \id_V^{\otimes (r-1)}=\check{R}_{V, M}^2\otimes \id_V^{\otimes (r-1)}, \\
&\nu_r^a(\sigma_i):=\id_M\otimes \id_V^{\otimes (i-1)}\otimes\check{R}_{V, V}\otimes \id_V^{\otimes (r-i-1)},\quad 1\le i\le r-1.
\end{aligned}
\]
\end{lemma}
\begin{proof}
The fact that this defines an action of $\Gamma_r$ follows from Proposition \ref{prop:braid} (1) and (2).
\end{proof}
The representation $\nu_r^a$ of $\CG_r$ in Lemma \ref{lem:braid-B-rep} may be modified to obtain another representation
$\tilde\nu_r^a: \CG_r\longrightarrow \End_{\U_q(\fg)}(M\otimes V^{\otimes r})$ with
\begin{eqnarray}\label{eq:modified}
\begin{aligned}
&\tilde\nu_r^a(\xi_1):=q^{\frac{2}{k}}\check{R}_{V, M}\check{R}_{M, V}\otimes \id_V^{\otimes (r-1)}, \\
&\tilde\nu_r^a(\sigma_i):=\id_M\otimes \id_V^{\otimes (i-1)}\otimes q^{\frac{1}{k}}\check{R}_{V, V}\otimes \id_V^{\otimes (r-i-1)},\quad 1\le i\le r-1.
\end{aligned}
\end{eqnarray}
In the case $\fg=\fsl_k$, it is convenient to use $\tilde\nu_r^a$, because the eigenvalues of the $\tilde\nu_r^a(\sigma_i)$
on the tensor space are just $q$ and $-q\inv$.
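We note in passing that, for generic $q$, the operator $q^{\frac{1}{k}}\check{R}_{V, V}$ acts semisimply on $V\otimes V$ with only these two eigenvalues, so that each $\tilde\nu_r^a(\sigma_i)$ satisfies the quadratic (skein) relation
\[
\big(\tilde\nu_r^a(\sigma_i)-q\big)\big(\tilde\nu_r^a(\sigma_i)+q^{-1}\big)=0,
\]
which is one of the relations defining the quotient $\HH_r(q)$ of $\CG_r$; cf. Theorem \ref{thm:aff-Hecke-reps} below.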
The construction in Lemma \ref{lem:braid-B-rep} defines some representations of $\CG_r$ as endomorphisms of certain
vector spaces. For certain finite dimensional highest weight modules $V$ and for any $M$, these representations
factor through some of the well known algebras discussed in Section \ref{sect:ATL-algebra}. The next result
collects several such cases.
\begin{theorem}[\cite{DRV}] \label{thm:aff-Hecke-reps}
Maintain the setting of Lemma \ref{lem:braid-B-rep}.
\begin{enumerate}
\item If $\U_q (\fg)=\U_q (\gl_k)$ (resp. $\U_q(\fsl_k)$) with $k\ge 3$ and $V=\CK_0^k$ is the natural module,
the representation $\nu_r^a$ (resp. $\tilde\nu_r^a$) of $\CG_r$ factors through the affine Hecke algebra $\HH(q)$
for any $M$.
\item If $\U_q (\fg)=\U_q (\gl_2)$ (resp. $\U_q(\fsl_2)$) and $V=\CK_0^2$, then for any $M$, the representation $\nu_r^a$
(resp. $\tilde\nu_r^a$) factors through the affine Temperley-Lieb algebra $\HTL_r(q)$.
\item If $\U_q (\fg)$ is $\U_q (\mathfrak{o}_k)$, $\U_q(\mathfrak{so}_k)$ for $k\ge 3$, or $\U_q (\mathfrak{sp}_k)$ for
even $k\ge 4$, and $V=\CK_0^k$ is the natural module,
then for any $M$, the representation $\nu_r^a$ of $\CG_r$ factors through the affine BMW algebra $\HBMW_r(q-q^{-1}, p^{1-k})$,
where $p= -q^{-1}$ for $\U_q (\mathfrak{sp}_k)$, and $p=q$ for $\U_q (\mathfrak{o}_k)$ and $\U_q(\mathfrak{so}_k)$.
\end{enumerate}
\end{theorem}
It is an immediate consequence of the Yang-Baxter equation that
\begin{lemma}\label{lem:braid-rep}
Maintain the setting of Lemma \ref{lem:braid-B-rep}. Let $M=\CK_0$, the $1$-dimensional $\U_q(\fg)$-module. Then
$V^{\otimes r}$ can be endowed with a left $\CB_r$-module structure with the corresponding left representation
$\nu_r: \CB_r\longrightarrow \End_{\U_q(\fg)}(V^{\otimes r})$ given by
\be\label{eq:nu}
\nu_r(b_i):=\id_V^{\otimes (i-1)}\otimes\check{R}_{V, V}\otimes \id_V^{\otimes (r-i-1)},\quad i=1, 2, \dots, r-1.
\ee
\end{lemma}
For the strongly multiplicity free $\U_q(\fg)$-modules \cite{LZ06}, we have the following result.
\begin{theorem}[\cite{LZ06}]\label{thm:sw-smf}
Let $V$ denote the natural module for $\U_q (\fg)=$ $\U_q (\gl_k)$, $\U_q (\fsl_k)$ ($k>2$), $\U_q (\mathfrak{o}_k)$,
$\U_q(\mathfrak{so}_{2k+1})$, or $\U_q (\mathfrak{sp}_{2k})$, the $7$-dimensional
irreducible module for $\U_q(G_2)$, or the $\ell$-dimensional simple module for $\U_q(\gl_2)$ and $\U_q(\fsl_2)$ for $\ell>2$.
Then for any $r\in\Z_{>0}$,
\[
\End_{\U_q(\fg)}\left(V^{\otimes r}\right) =\nu_r(\CB_r).
\]
\end{theorem}
\begin{remark} Note that the role of the affine Hecke algebra and related algebras in the above is very different from that investigated by \cite{CP} in the context of the representation theory of quantum affine algebras.
\end{remark}
\section{A categorical approach to the affine Temperley-Lieb algebra}\label{sect:cats}
\subsection{A restricted category of tangle diagrams}\label{sect:cat-RT}
Certain categories of tangles, both oriented and unoriented, \cite{FY} and of coloured ribbon graphs \cite{RT},
have played important roles in the construction of link and $3$-manifold invariants.
We introduce here a category of unoriented tangles up to regular isotopy in the sense of \cite{FY}, which we denote by $\RTC$.
Our category is a subcategory of the category $S-\mathbb{RT}\mathbb{ang}$, where $S$ is the set $\cC:=\{m,v\}$
which is defined in \cite[Def. 3.1]{FY}. We shall use the language of tangle diagrams, although an equivalent formulation could use the language of coloured ribbon graphs.
The objects of $\RTC$ are sequences of elements of $\cC=\{m, v\}$, which are called ``colours'', where the empty sequence is allowed.
The morphisms are spanned, over a base ring $R$, by unoriented tangles up to regular isotopy
in the terminology of \cite{FY}, coloured by $\cC$. Such tangles are represented in ``tangle diagrams'' as unions of arcs; we say that an arc
is {\em horizontal} if its end points are either both at the top or both at the bottom of the tangle. An arc is
{\em vertical} if it has end points and is not horizontal. To define (the subcategory) $\RTC$, we impose the following two conditions on morphisms.
\begin{enumerate}
\item Closed arcs (i.e. those with no end points) are all coloured by $v$.
\item Any arc coloured by $m\in\cC$ is vertical, and no two such arcs cross.
\end{enumerate}
The composition of morphisms is explained in \cite{FY} and is essentially by concatenation of tangle diagrams.
We call $\RTC$ the restricted coloured tangle category.
Since the two ends of any arc have the same colour, we will write $m$ or $v$ beside
an arc to indicate its colour. The term ``tangle diagram'' will be abbreviated to ``diagram'' below.
\begin{theorem} \label{thm:tensor-cat}
The category $\RTC$ has the following properties.
\begin{enumerate}
\item There is a bi-functor $\otimes: \RTC\times \RTC\longrightarrow \RTC$, called the tensor product, which is defined as follows.
For any pair of objects $A=(a_1, \dots, a_r)$ and $B=(b_1, \dots, b_s)$, we have
$
A\otimes B=(A, B) := (a_1, \dots, a_r, b_1, \dots, b_s).
$
The tensor product is bilinear on morphisms. Given diagrams $D$ and $D'$,
$D\otimes D'$ is their juxtaposition with $D$ on the left.
\item The morphisms are generated by the following elementary diagrams under tensor product and composition,
\[
\begin{picture}(20, 70)(0,0)
\put(0, 0){\line(0, 1){60}}
\put(5, 0){,}
\put(-10, 40){$a$}
\end{picture}
\begin{picture}(40, 70)(-60,0)
\qbezier(-35, 60)(-35, 60)(-15, 0)
\qbezier(-15, 60)(-15, 60)(-22, 33)
\qbezier(-35, 0)(-35, 0)(-26, 25)
\put(-10, 0){, }
\put(-15, 40){$b$ }
\put(-40, 40){$c$ }
\end{picture}
\begin{picture}(40, 70)(-140,0)
\qbezier(-90, 0)(-90, 0)(-70, 60)
\qbezier(-90, 60)(-90, 60)(-83, 33)
\qbezier(-70, 0)(-70, 0)(-78, 26)
\put(-65, 0){, }
\put(-70, 40){$b$ }
\put(-95, 40){$c$ }
\end{picture}
\begin{picture}(140, 70)(-50,0)
\qbezier(20, 0)(40, 60)(60, 0)
\put(65, 0){, }
\put(38, 18){$v$}
\qbezier(95, 60)(115, 0)(135, 60)
\put(140, 0){, }
\put(112, 35){$v$}
\end{picture}
\]
where $a, b, c\in \cC=\{m, v\}$ with $b\ne c$ or $b=c=v$.
\item The defining relations among the above generators are as follows.
\begin{enumerate}
\item Over and under crossings are inverses of each other:
for all $a, b$ such that either $a\ne b$ or $a=b=v$,
\[
\begin{picture}(70, 90)(65, 0)
\qbezier(60, 0)(100, 40)(60, 80)
\qbezier(70, 20)(60, 40)(70, 60)
\qbezier(75, 15)(80, 5)(90, 0)
\qbezier(75, 65)(80, 75)(90, 80)
\put(55, 45){$b$}
\put(85, 45){$a$}
\put(105, 35){$=$}
\end{picture}
\begin{picture}(50, 90)(110, 0)
\put(120, 0){\line(0, 1){80}}
\put(135, 0){\line(0, 1){80}}
\put(110, 45){$a$}
\put(140, 45){$b$}
\put(165, 35){$=$}
\end{picture}
\begin{picture}(50, 90)(140, 0)
\qbezier(200, 0)(160, 40)(200, 80)
\qbezier(190, 20)(200, 40)(190, 60)
\qbezier(185, 15)(180, 5)(170, 0)
\qbezier(185, 65)(180, 75)(170, 80)
\put(170, 45){$b$}
\put(200, 45){$a$}
\put(205, 0){;}
\end{picture}
\]
\item Braid relation: for all $a, b, c$ at most one of which is equal to $m$,
\[
\begin{picture}(85, 100)(0, 0)
\qbezier(30, 90)(30, 90)(18, 75)
\qbezier(12, 70)(-10, 40)(30, 0)
\qbezier(0, 90)(45, 45)(60, 0)
\qbezier(60, 90)(50, 70)(40, 48)
\qbezier(36, 39)(28,25)(20, 15)
\qbezier(15, 12)(10, 5)(0,0)
\put(-5, 50){$b$}
\put(18, 50){$c$}
\put(48, 50){$a$}
\put(75, 40){$=$}
\end{picture}
\begin{picture}(85, 100)(-30, 0)
\qbezier(0, 90)(20, 45)(60, 0)
\qbezier(30, 90)(65, 55)(50, 18)
\qbezier(46, 12)(45, 8)(30, 0)
\qbezier(60, 90)(50, 80)(47, 73)
\qbezier(42, 68)(35, 50)(33, 40)
\qbezier(28, 32)(18, 10)(0, 0)
\put(4, 50){$c$}
\put(28, 50){$a$}
\put(55, 50){$b$}
\put(70, 0){;}
\end{picture}
\]
\item Straightening relations:
\[
\begin{picture}(150, 80)(0,0)
\qbezier(0, 0)(10, 80)(20, 30)
\qbezier(20, 30)(30, -30)(40, 70)
\put(50, 30){$=$}
\put(10, 55){$v$}
\put(70, 0){\line(0, 1){70}}
\put(85, 30){$=$}
\put(75, 55){$v$}
\qbezier(105, 70)(115, -30)(125, 30)
\qbezier(125, 30)(135, 80)(145, 0)
\put(135, 55){$v$}
\put(155, 0){;}
\end{picture}
\]
\item Sliding relations: for all $a\in\cC$,
\[
\begin{picture}(80, 60)(0,0)
\qbezier(0, 45)(0, 45)(22, 0)
\qbezier(0, 0)(10, 13)(12, 15)
\qbezier(17, 21)(35, 50)(60, 0)
\put(-8, 35){$a$}
\put(32, 18){$v$}
\put(75, 15){$=$}
\end{picture}
\begin{picture}(80, 60)(-20,0)
\qbezier(0, 0)(25, 50)(45, 20)
\qbezier(49, 15)(50, 15)(60, 0)
\qbezier(60, 45)(60, 45)(38, 0)
\put(62, 35){$a$}
\put(22, 18){$v$}
\put(70, 0){;}
\end{picture}
\]
\[
\begin{picture}(80, 60)(0,0)
\qbezier(0, 0)(30, 60)(60, 0)
\qbezier(0, 45)(0, 45)(10, 22)
\qbezier(15, 15)(15, 15)(22, 0)
\put(-8, 35){$a$}
\put(32, 18){$v$}
\put(75, 15){$=$}
\end{picture}
\begin{picture}(80, 60)(-20,0)
\qbezier(0, 0)(30, 60)(60, 0)
\qbezier(60, 45)(60, 45)(50, 22)
\qbezier(45, 15)(45, 15)(38, 0)
\put(62, 35){$a$}
\put(22, 18){$v$}
\put(70, 0){;}
\end{picture}
\]
\item Twists,
\[
\begin{picture}(80, 130)(0,0)
\qbezier(0, 100)(-5, 110)(-5, 120)
\qbezier(0, 100)(0, 100)(20, 70)
\qbezier(0, 70)(0, 70)(8, 82)
\qbezier(20, 100)(20, 100)(13, 88)
\qbezier(20, 100)(30, 110)(32, 85)
\qbezier(20, 70)(30, 60)(32, 85)
\qbezier(00, 70)(-5, 60)(00, 50)
\put(-15, 60){$v$}
\qbezier(20, 50)(0, 20)(0, 20)
\qbezier(0, 50)(0, 50)(8, 38)
\qbezier(12, 33)(20, 20)(20, 20)
\qbezier(20, 50)(30, 60)(32, 35)
\qbezier(20, 20)(30, 10)(32, 35)
\qbezier(-5, 0)(-5, 10)(0, 20)
\put(55, 60){$=$}
\end{picture}
\begin{picture}(20, 130)(-15,20)
\put(0, 20){\line(0, 1){120}}
\put(5, 80){$v$}
\put(10, 20){.}
\end{picture}
\]
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof} This was proved in \cite[Theorem 3.5]{FY} and in \cite[Lemma 5.3]{RT}.
The result of \cite[Theorem 3.5]{FY} in fact applies to several categories of tangles; the case which covers our theorem is that of $S-\mathbb{RT}\mathbb{ang}$ with $S=\cC$. Note that \cite[Theorem 3.5]{FY} does not involve colours, but this is not an issue
as colours merely label components of tangles. We can also extract our theorem from \cite[Lemma 5.3]{RT} by removing directions of ribbon graphs and forbidding coupons. There is also a direct proof along the lines of \cite[Appendix]{LZ14}.
\end{proof}
\noindent
{\bf Another type of picture}. A second way of representing the category $\RTC$ is as follows.
We depict arcs coloured by $m$ as thick arcs called poles, and arcs coloured by $v$ as thin arcs.
This way a diagram automatically carries the information about the colours of its arcs, so that we may drop the
letters for colours from the diagram. For example, we have the following diagram $(m, v^2, m, v^2)\to (v, m, v, m, v^2)$.
\[
\begin{picture}(120, 100)(-90,0)
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){60}}
}
\qbezier(-19, 38)(-35, 40)(-30, 100)
\qbezier(-15, 62)(5, 55)(-11, 42)
\qbezier(-15, 62)(-35, 75)(-19, 85)
\qbezier(-11, 86)(-5, 90)(-5, 100)
\qbezier(0, 0)(10, 30)(20, 0)
\end{picture}
\begin{picture}(150, 100)(-20,0)
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){18}}
}
\qbezier(-19, 38)(-35, 30)(-15, 22)
\qbezier(-15, 62)(5, 55)(-11, 42)
\qbezier(-15, 62)(-35, 75)(-19, 85)
\qbezier(20, 100)(16, 90)(-11, 86)
\qbezier(0, 100)(2, 95)(-11, 90)
\qbezier(-19, 90)(-40, 85)(-38, 50)
\qbezier(-15, 19)(-40, 25)(-38, 50)
\qbezier(-15, 19)(-5, 15)(0, 0)
\qbezier(-15, 22)(15, 15)(20, 0)
\end{picture}
\]
We recover the diagrammatics for $B_r$ and $\Gamma_r$ given in Section \ref{sect:diagrams}
by identifying thin arcs with strings and thick arcs with poles.
\begin{remark}
Given any $A\in\cC^N$, we denote by $\Gamma(A)$ the set of diagrams $A\to A$ with vertical arcs only.
Then $\Gamma(A)$ forms a group. In particular $\Gamma(v^r)\cong B_r$ is the braid group of type $A$,
and $\Gamma(m, v^r)\cong \Gamma_r$ is the braid group of type $B$. We will call $\Gamma(A)$ a {\em multi-polar braid group}
if $A$ has more than one $m$ entry.
\end{remark}
\subsection{The affine Temperley-Lieb category}
We now introduce a quotient category of $\RTC$ denoted by $\ATLC(q)$, which we refer to as the affine Temperley-Lieb category.
The objects of this category are the same as those of $\RTC$. Given any two objects $T$ and $B$ in $\ATLC(q)$,
$\Hom_{\ATLC(q)}(B, T)$ is the quotient space of $\Hom_{\RTC}(B, T)$ obtained by imposing locally the following
skein relations and free loop removal.
\noindent Skein relations:
\be\label{eq:skein1}
\begin{aligned}
\begin{picture}(150, 70)(0,0)
\put(-15, 30){$q^{\frac{1}{2}}$}
\qbezier(0, 60)(0, 60)(20, 0)
\qbezier(20, 60)(20, 60)(13, 33)
\qbezier(0, 0)(0, 0)(8, 24)
\put(30, 30){$=$}
\put(50, 30){$q$}
\put(60, 0){\line(0, 1){60}}
\put(80, 0){\line(0, 1){60}}
\put(95, 30){$+$}
\qbezier(120, 0)(135, 50)(150, 0)
\qbezier(120, 60)(135, 10)(150,60)
\put(155, 0){, }
\end{picture}
\end{aligned}
\ee
\be\label{eq:skein2}
\begin{aligned}
\begin{picture}(150, 70)(0,0)
\put(-20, 30){$q^{-\frac{1}{2}}$}
\qbezier(0, 0)(0, 0)(20, 60)
\qbezier(0, 60)(0, 60)(7, 33)
\qbezier(20, 0)(20, 0)(12, 26)
\put(30, 30){$=$}
\put(40, 30){$q^{-1}$}
\put(60, 0){\line(0, 1){60}}
\put(80, 0){\line(0, 1){60}}
\put(95, 30){$+$}
\qbezier(120, 0)(135, 50)(150, 0)
\qbezier(120, 60)(135, 10)(150,60)
\put(155, 0){;}
\end{picture}
\end{aligned}
\ee
\noindent
Free loop removal:
\be\label{eq:flr}
\begin{aligned}
\begin{picture}(100, 60)(20,0)
\put(0, 30){\circle{25}}
\put(35, 25){$=$}
\put(62, 25){$- (q+q^{-1})$.}
\end{picture}
\end{aligned}
\ee
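Note that subtracting \eqref{eq:skein2} from \eqref{eq:skein1} eliminates the cup--cap term on the right hand side and yields the familiar form of the skein relation
\[
q^{\frac{1}{2}}\,\sigma - q^{-\frac{1}{2}}\,\sigma^{-1} = (q-q^{-1})\,\id,
\]
where $\sigma$ and $\sigma^{-1}$ denote the two (mutually inverse) crossings of thin arcs appearing in \eqref{eq:skein1} and \eqref{eq:skein2} respectively.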
The image in $\ATLC(q)$ of a diagram in $\RTC$ will be depicted by the same graph, but is understood to obey the above relations.
The composition of morphisms in $\ATLC(q)$ is that inherited from $\RTC$.
We denote by $\TLC(q)$ the full subcategory of $\ATLC(q)$ with objects the sequences
in which only the symbol $v$ occurs. This will be referred to as the {\em \tl category}.
\begin{remark} \begin{enumerate}
\item The reason for taking the skein relations with $q^{\pm\frac{1}{2}}$ here, and also in the definition of the
Temperley-Lieb algebra of type $B$ in Section \ref{sect:TLBC} below will become clear from Lemma \ref{rem:norm-factors}.
\item The affine Temperley-Lieb category we use here is closely related to that in \cite{GL98}. It is in effect an ``extended'' version
of the category $T^a(q)$ of {\it op. cit.}.
\end{enumerate}
\end{remark}
Consider the morphisms from $m$ to $m$ depicted in \eqref{eq:z12}.
\begin{eqnarray}\label{eq:z12}
\begin{aligned}
\begin{picture}(160, 70)(-70,0)
\put(-70, 48){$z_1:=$}
{
\linethickness{1mm}
\put(-15, 45){\line(0, 1){55}}
\put(-15, 0){\line(0, 1){40}}
}
\qbezier(-15, 42)(15, 50)(-11, 65)
\qbezier(-15, 42)(-45, 50)(-19, 65)
\put(5, 0){, }
\end{picture}
\begin{picture}(70, 100)(-40,0)
\put(-55, 48){$z_2:=$}
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){18}}
}
\qbezier(-19, 38)(-35, 30)(-15, 22)
\qbezier(-15, 62)(5, 55)(-11, 42)
\qbezier(-15, 62)(-35, 75)(-19, 85)
\qbezier(-11, 88)(15, 95)(10, 50)
\qbezier(-15, 22)(0, 15)(10, 50)
\put(5, 0){. }
\end{picture}
\end{aligned}
\end{eqnarray}
The following result is clear diagrammatically.
\begin{lemma} \label{lem:central-ATL} The elements $z_1$ and $z_2$ are central in $\ATLC(q)$ in the following sense.
Let $D$ be a diagram
$D: (A_1, m, A_2)\to (B_1, m, B_2)$ in $\ATLC(q)$, where the two $m$'s shown are connected by a thick arc. Then
\[
D(\id_{A_1}\otimes z_i \otimes \id_{A_2}) = (\id_{B_1}\otimes z_i \otimes \id_{B_2}) D, \quad i=1, 2.
\]
\end{lemma}
\noindent{\bf Notation}.
Henceforth, for any invertible scalar $t\in R$ (the ground ring) we write $\delta_t :=-(t+t^{-1})$, and
for the special case $t=q$, set $\delta:=\delta_q=- (q+q^{-1})$.
\begin{lemma}\label{lem:central-skein-ATL} The following relation holds in $\ATLC(q)$.
\begin{eqnarray*}
\phantom{XXX}
\begin{picture}(110, 100)(-50,0)
\put(-40, 45){$q \delta$}
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){18}}
}
\qbezier(-19, 38)(-35, 30)(-15, 22)
\qbezier(-15, 62)(5, 55)(-11, 42)
\qbezier(-15, 62)(-35, 75)(-19, 85)
\qbezier(0, 100)(0, 90)(-11, 86)
\qbezier(-15, 22)(0, 15)(0, 0)
\end{picture}
\begin{picture}(120, 100)(-30,0)
\put(-70, 45){$+\ \ \delta z_1$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){24}}
\put(-15, 30){\line(0, 1){70}}
}
\qbezier(-18, 60)(-40, 40)(-15, 27)
\qbezier(-15, 27)(0, 20)(0, 0)
\qbezier(0, 100)(0, 67)(-12, 64)
\end{picture}
\begin{picture}(50, 100)(-15,0)
\put(-80, 45){$=\ (q z_2+z_1^2)$}
{
\linethickness{1mm}
\put(-5, 0){\line(0, 1){100}}
}
\put(15, 0){\line(0, 1){100}}
\put(20, 0){,}
\end{picture}
\end{eqnarray*}
where the right hand side should be understood as $(q z_2+z_1^2)\otimes\id_v$.
\end{lemma}
\begin{proof} We prove the relation diagrammatically. Let
\[
\begin{picture}(120, 100)(-30,0)
\put(-55, 45){$\hat{x}_1=$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){25}}
\put(-15, 30){\line(0, 1){60}}
}
\qbezier(-18, 60)(-35, 45)(-15, 28)
\qbezier(-15, 28)(0, 16)(0, 0)
\qbezier(0, 90)(0, 67)(-12, 64)
\put(10, 0){\line(0, 1){90}}
\put(15, 0){, }
\end{picture}
\begin{picture}(80, 100)(-30,0)
\put(-55, 45){$\hat{x}_2=$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){25}}
\put(-15, 30){\line(0, 1){60}}
}
\put(-5, 0){\line(0, 1){17}}
\put(-5, 22){\line(0, 1){68}}
\put(-5, 29){\line(0, 1){61}}
\qbezier(-18, 60)(-35, 30)(0, 18)
\qbezier(0, 18)(8, 15)(10, 0)
\qbezier(10, 90)(10, 72)(-3, 70)
\qbezier(-7, 68)(-8, 67)(-12, 64)
\put(15, 0){.}
\end{picture}
\]
Then $\hat{x}_1 \hat{x}_2= \hat{x}_2 \hat{x}_1$. Multiplying this by $q$ and using the skein relation, we obtain
\[
\begin{picture}(120, 100)(-90,0)
\put(-40, 48){$q$}
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){18}}
}
\qbezier(-19, 38)(-35, 30)(-15, 22)
\qbezier(-15, 62)(5, 55)(-11, 42)
\qbezier(-15, 62)(-35, 75)(-19, 85)
\qbezier(-11, 86)(-5, 90)(-5, 100)
\qbezier(-15, 22)(10, 15)(10, 100)
\qbezier(-5, 0)(3, 35)(10, 0)
\end{picture}
\begin{picture}(120, 100)(-45,0)
\put(-50, 48){$+$}
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){18}}
}
\qbezier(-19, 48)(-50, 30)(-15, 20)
\qbezier(-15, 20)(25, 30)(-11, 48)
\qbezier(-15, 62)(5, 55)(10, 100)
\qbezier(-15, 62)(-35, 75)(-19, 85)
\qbezier(-5, 0)(3, 35)(10, 0)
\qbezier(-11, 86)(-5, 90)(-5, 100)
\end{picture}
\begin{picture}(50, 110)(-10,0)
\put(-60, 48){$=\ \ q $}
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){45}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){18}}
}
\qbezier(-19, 38)(-35, 30)(-15, 22)
\qbezier(-15, 62)(5, 55)(-11, 42)
\qbezier(-15, 62)(-35, 75)(-19, 85)
\qbezier(-5, 110)(3, 80)(10, 110)
\qbezier(-11, 88)(15, 95)(10, 50)
\qbezier(10, 0)(10, 15)(10, 50)
\qbezier(-15, 22)(-5, 15)(-5, 0)
\end{picture}
\begin{picture}(150, 70)(-50,0)
\put(-60, 45){$+$}
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){18}}
}
\qbezier(-19, 45)(-45, 30)(-15, 20)
\qbezier(-11, 47)(8, 60)(10, 0)
\qbezier(-15, 20)(-5, 20)(-5, 0)
\qbezier(-15, 62)(15, 70)(-11, 85)
\qbezier(-15, 62)(-45, 70)(-19, 85)
\qbezier(-5, 100)(3, 70)(10, 100)
\put(15, 0){.}
\end{picture}
\]
Composing this equation with
$
\begin{picture}(50, 50)(-10,20)
{
\linethickness{1mm}
\put(0, 10){\line(0, 1){30}}
}
\qbezier(10, 10)(15, 40)(20, 10)
\end{picture}
$
on the top, and then pulling the right end point \\
\vspace{1mm}
\noindent
of the string in the resulting diagram to the top, we obtain the stated relation.
\end{proof}
\begin{definition}\label{def:ext-ATL}
For any integer $r\geq 0$, the space $\Hom_{\ATLC(q)}((m, v^r), (m, v^r))$ is an associative algebra with
multiplication given by the composition of morphisms. We denote this algebra by $\HTL^{ext}_r(q)$, and
call it the extended affine Temperley-Lieb algebra.
\end{definition}
Observe that the algebra $\HTL^{ext}_r(q)$ defined above
contains the elements $z_1\otimes\id_{v^r}, z_2\otimes\id_{v^r}$ depicted in \eqref{eq:central-zs},
which are central by Lemma \ref{lem:central-ATL}.
\begin{eqnarray}\label{eq:central-zs}
\begin{aligned}
\begin{picture}(200, 70)(-70,0)
\put(-100, 48){$z_1\otimes\id_{v^r}:=$}
{
\linethickness{1mm}
\put(-15, 45){\line(0, 1){55}}
\put(-15, 0){\line(0, 1){40}}
}
\qbezier(-15, 42)(15, 50)(-11, 65)
\qbezier(-15, 42)(-45, 50)(-19, 65)
\put(15, 0){\line(0, 1){100}}
\put(35, 0){\line(0, 1){100}}
\put(20, 45){...}
\put(20, 10){$r$}
\put(40, 0){, }
\end{picture}
\begin{picture}(70, 110)(-40,0)
\put(-85, 48){$z_2\otimes\id_{v^r}:=$}
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){45}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){18}}
}
\qbezier(-19, 38)(-35, 30)(-15, 22)
\qbezier(-15, 62)(5, 55)(-11, 42)
\qbezier(-15, 62)(-35, 75)(-19, 85)
\put(20, 0){\line(0, 1){110}}
\qbezier(-11, 88)(15, 95)(10, 50)
\qbezier(-15, 22)(0, 15)(10, 50)
\put(25, 45){...}
\put(25, 10){$r$}
\put(40, 0){\line(0, 1){110}}
\put(45, 0){. }
\end{picture}
\end{aligned}
\end{eqnarray}
The following fact is an obvious consequence of Lemma \ref{lem:central-skein-ATL}.
\begin{lemma} \label{lem:central-skein}
The following relation holds in $\HTL^{ext}_r(q)$.
\begin{eqnarray*}
\phantom{XXX}
\begin{picture}(150, 100)(-50,0)
\put(-40, 45){$q \delta$}
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){18}}
}
\qbezier(-19, 38)(-35, 30)(-15, 22)
\qbezier(-15, 62)(5, 55)(-11, 42)
\qbezier(-15, 62)(-35, 75)(-19, 85)
\qbezier(0, 100)(0, 90)(-11, 86)
\qbezier(-15, 22)(0, 15)(0, 0)
\put(15, 0){\line(0, 1){100}}
\put(20, 45){...}
\put(35, 0){\line(0, 1){100}}
\put(55, 45){$+$}
\end{picture}
\begin{picture}(150, 100)(-30,0)
\put(-50, 45){$\delta z_1$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){24}}
\put(-15, 30){\line(0, 1){70}}
}
\qbezier(-18, 60)(-40, 40)(-15, 27)
\qbezier(-15, 27)(0, 20)(0, 0)
\qbezier(0, 100)(0, 67)(-12, 64)
\put(15, 0){\line(0, 1){100}}
\put(35, 0){\line(0, 1){100}}
\put(20, 45){...}
\end{picture}
\begin{picture}(150, 100)(-15,0)
\put(-80, 45){$=\ (q z_2+z_1^2)$}
{
\linethickness{1mm}
\put(-5, 0){\line(0, 1){100}}
}
\put(15, 0){\line(0, 1){100}}
\put(35, 0){\line(0, 1){100}}
\put(20, 45){...}
\put(40, 0){,}
\end{picture}
\end{eqnarray*}
where the right hand side should be understood as $(q z_2+z_1^2)\otimes\id_{v^r}$.
\end{lemma}
Recall the affine Temperley-Lieb algebra $\HTL_r(q)$ which was defined in \eqref{eq:defhtl}. The next statement is easily verified
by checking that the relations \eqref{eq:extra-inv} which hold in $\HTL_r(q)$ are preserved by $\wp$ using Lemma \ref{lem:central-skein}.
\begin{lemma} There is an injection $\wp: \HTL_r(q)\to\HTL^{ext}_r(q)$ of algebras given by
\[
\begin{picture}(150, 90)(-15,-5)
\put(-40, 40){$e_i\mapsto$}
{
\linethickness{1mm}
\put(-5, 0){\line(0, 1){80}}
}
\put(15, 0){\line(0, 1){80}}
\put(35, 0){\line(0, 1){80}}
\qbezier(55, 0)(65, 70)(75, 0)
\qbezier(55, 0)(65, 70)(75, 0)
\qbezier(55, 80)(65, 10)(75, 80)
\put(55, -8){$i$}
\put(95, 0){\line(0, 1){80}}
\put(115, 0){\line(0, 1){80}}
\put(20, 45){...}
\put(100, 45){...}
\put(120, 0){,}
\end{picture}
\begin{picture}(150, 90)(-80,-5)
\put(-65, 40){$x_1\mapsto$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){24}}
\put(-15, 30){\line(0, 1){50}}
}
\qbezier(-18, 60)(-40, 40)(-15, 27)
\qbezier(-15, 27)(0, 20)(0, 0)
\qbezier(0, 80)(0, 67)(-12, 64)
\put(15, 0){\line(0, 1){80}}
\put(75, 0){\line(0, 1){80}}
\put(25, 45){............}
\put(80, 0){,}
\end{picture}
\]
for $i=1, 2, \dots, r-1$.
\end{lemma}
\begin{remark}
The extended affine Temperley-Lieb algebra $\HTL^{ext}_r(q)$ is strictly bigger than the affine
Temperley-Lieb algebra $\HTL_r(q)$, as $z_1\otimes\id_{v^r}$ and $z_2\otimes\id_{v^r}$ do not belong to $\wp(\HTL_r(q))$. It is easy to see
that $\HTL^{ext}_r(q)$ is generated by $z_1\otimes\id_{v^r}, z_2\otimes\id_{v^r}$ and $\wp(\HTL_r(q))$.
\end{remark}
\begin{remark} \label{rem:skein-universal} Lemma \ref{lem:central-skein} may be thought of as a ``skein relation'' over the centre of $\HTL_r^{ext}(q)$.
In any irreducible representation of $\HTL^{ext}_r(q)$,
the central elements $z_1\otimes\id_{v^r}$ and $z_2\otimes\id_{v^r}$ act as scalars, and the algebra therefore acts through a quotient in which
a skein relation in the usual sense is satisfied.
\end{remark}
\subsection{A quotient category of the affine Temperley-Lieb category}
We next introduce a two-parameter family of quotients of the affine Temperley-Lieb category $\ATLC(q)$,
which are obtained by imposing conditions on the removal of tangled loops.
The morphisms $z_1$ and $z_2$ from $m$ to $m$ in the category $\ATLC(q)$ are as defined in \eqref{eq:z12}.
\begin{definition}
Given scalars $a_1, a_2$, let $\ATLC(q, a_1, a_2)$ be the category whose objects are the same as those of $\ATLC(q)$
and whose modules of morphisms are obtained from those in $\ATLC(q)$ by
imposing all relations which are consequences of the following two equalities.
\[
\begin{picture}(160, 70)(0,0)
{
\linethickness{1mm}
\put(-15, 45){\line(0, 1){55}}
\put(-15, 0){\line(0, 1){40}}
}
\qbezier(-15, 42)(15, 50)(-11, 65)
\qbezier(-15, 42)(-45, 50)(-19, 65)
\put(15, 45){$=\ \ a_1$ }
{
\linethickness{1mm}
\put(55, 0){\line(0, 1){100}}
\put(60, 0){,}
}
\end{picture}
\begin{picture}(70, 100)(0,0)
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){18}}
}
\qbezier(-19, 38)(-35, 30)(-15, 22)
\qbezier(-15, 62)(5, 55)(-11, 42)
\qbezier(-15, 62)(-35, 75)(-19, 85)
\qbezier(-11, 88)(15, 95)(10, 50)
\qbezier(-15, 22)(0, 15)(10, 50)
\put(15, 45){$=\ \ a_2$ }
{
\linethickness{1mm}
\put(55, 0){\line(0, 1){100}}
}
\put(60, 0){. }
\end{picture}
\]
We refer to $\ATLC(q, a_1, a_2)$ as the {\em multi-polar \tl category with two parameters}.
\end{definition}
\begin{definition}\label{def:tlbc-2p}
Denote by $\TLBC(q, a_1, a_2)$ the full subcategory of $\ATLC(q, a_1, a_2)$
with objects of the form $(m, v^r):=(m, \underbrace{v, \dots, v}_r)$ for all $r\in \Z_{\ge 0}$.
This is the {\em two-parameter \tl category of type $B$ }.
\end{definition}
It is then clear that in the quotient, we have the following relation.
\begin{lemma}\label{lem:skein-a-a} The following skein relation holds in $\ATLC(q, a_1, a_2)$.
\begin{eqnarray*}
\phantom{XXX}
\begin{picture}(110, 100)(-50,0)
\put(-40, 45){$q \delta$}
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){18}}
}
\qbezier(-19, 38)(-35, 30)(-15, 22)
\qbezier(-15, 62)(5, 55)(-11, 42)
\qbezier(-15, 62)(-35, 75)(-19, 85)
\qbezier(0, 100)(0, 90)(-11, 86)
\qbezier(-15, 22)(0, 15)(0, 0)
\end{picture}
\begin{picture}(120, 100)(-30,0)
\put(-70, 45){$+\ \ \delta a_1$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){24}}
\put(-15, 30){\line(0, 1){70}}
}
\qbezier(-18, 60)(-40, 40)(-15, 27)
\qbezier(-15, 27)(0, 20)(0, 0)
\qbezier(0, 100)(0, 67)(-12, 64)
\end{picture}
\begin{picture}(50, 100)(-15,0)
\put(-80, 45){$=\ (q a_2+a_1^2)$}
{
\linethickness{1mm}
\put(-5, 0){\line(0, 1){100}}
}
\put(15, 0){\line(0, 1){100}}
\put(20, 0){,}
\end{picture}
\end{eqnarray*}
\end{lemma}
\begin{definition} For any positive integer $r$, let $\TLB_r(q, a_1, a_2)$ be the associative algebra with underlying
vector space $\Hom_{\ATLC(q, a_1, a_2)}((m, v^r), (m, v^r))$ and multiplication given by composition of
morphisms. The algebra $\TLB_r(q, a_1, a_2)$ will be referred to as a two-parameter Temperley-Lieb algebra of type $B$.
\end{definition}
The following statements are easy consequences of Lemma \ref{lem:central-skein}.
\begin{lemma}
\begin{enumerate}
\item Let $J_{a_1, a_2}$ be the two-sided ideal of $\HTL^{ext}_r(q)$ generated by the elements
$z_1\otimes\id_{v^r}-a_1$ and $z_2\otimes\id_{v^r}-a_2$ (see \eqref{eq:central-zs}). Then
\[
\TLB_r(q, a_1, a_2) = \HTL^{ext}_r(q)/J_{a_1, a_2}.
\]
Write $\rho^{ext}: \HTL^{ext}_r(q) \longrightarrow \TLB_r(q, a_1, a_2) $ for the canonical surjection.
\item There exists a surjective homomorphism $\rho: \HTL_r(q)\longrightarrow \TLB_r(q, a_1, a_2)$ of algebras defined
as the following composition:
\[
\HTL_r(q)\stackrel{\wp}\longrightarrow\HTL^{ext}_r(q)\stackrel{\rho^{ext}}\longrightarrow \TLB_r(q, a_1, a_2).
\]
\item Each simple finite dimensional $\HTL_r(q)$-module $N$ is a pull-back by $\rho$ of a simple $\TLB_r(q, a_1, a_2)$-module
for unique values of $a_1$ and $a_2$ which depend on $N$.
\end{enumerate}
\end{lemma}
\subsection{The Temperley-Lieb category of type $B$}\label{sect:TLBC}
\subsubsection{Definition}
We now introduce another quotient category of $\RTC$,
denoted by $\ATLC(q, \Omega)$, which depends on an invertible scalar $\Omega$.
Its objects are the same as those of $\RTC$. Given any two objects $T$ and $B$ in $\ATLC(q, \Omega)$,
the space $\Hom_{\ATLC(q, \Omega)}(B, T)$ of morphisms is the quotient space of $\Hom_{\RTC}(B, T)$ obtained by imposing
locally the relations listed below and referred to respectively as skein relations, free loop removal and tangled loop removal.
\noindent Skein relations:
\[
\begin{picture}(150, 70)(0,0)
\put(-15, 30){$q^{\frac{1}{2}}$}
\qbezier(0, 60)(0, 60)(20, 0)
\qbezier(20, 60)(20, 60)(13, 33)
\qbezier(0, 0)(0, 0)(8, 24)
\put(30, 30){$=$}
\put(50, 30){$q$}
\put(60, 0){\line(0, 1){60}}
\put(80, 0){\line(0, 1){60}}
\put(95, 30){$+$}
\qbezier(120, 0)(135, 50)(150, 0)
\qbezier(120, 60)(135, 10)(150,60)
\put(155, 0){, }
\end{picture}
\]
\[
\begin{picture}(150, 70)(0,0)
\put(-20, 30){$q^{-\frac{1}{2}}$}
\qbezier(0, 0)(0, 0)(20, 60)
\qbezier(0, 60)(0, 60)(7, 33)
\qbezier(20, 0)(20, 0)(12, 26)
\put(25, 30){$=q^{-1}$}
\put(60, 0){\line(0, 1){60}}
\put(80, 0){\line(0, 1){60}}
\put(95, 30){$+$}
\qbezier(120, 0)(135, 50)(150, 0)
\qbezier(120, 60)(135, 10)(150,60)
\put(155, 0){, }
\end{picture}
\]
\[
\phantom{XXXXXX}
\begin{picture}(80, 100)(-30,0)
\put(-40, 48){$q$}
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){18}}
}
\qbezier(-19, 38)(-35, 30)(-15, 22)
\qbezier(-15, 62)(5, 55)(-11, 42)
\qbezier(-15, 62)(-35, 75)(-19, 85)
\qbezier(0, 100)(-6, 90)(-11, 87)
\qbezier(-15, 22)(-5, 15)(0, 0)
\put(10, 45) {$=$}
\end{picture}
\begin{picture}(150, 100)(-80,0)
\put(-95, 48){$(\Omega + \Omega^{-1})$}
{
\linethickness{1mm}
\put(-15, 40){\line(0, 1){60}}
\put(-15, 0){\line(0, 1){35}}
}
\qbezier(-15, 37)(-45, 53)(-19, 70)
\qbezier(0, 100)(0, 80)(-11, 72)
\qbezier(-15, 37)(0, 30)(0, 0)
\end{picture}
\begin{picture}(50, 100)(20,0)
\put(-40, 48){$-\ q^{-1}$}
{
\linethickness{1mm}
\put(5, 0){\line(0, 1){100}}
}
\put(20, 0){\line(0, 1){100}}
\put(25, 0){;}
\end{picture}
\]
\\
\noindent
Free loop removal:
\[
\begin{picture}(100, 60)(20,0)
\put(0, 30){\circle{25}}
\put(35, 25){$=$}
\put(65, 25){$- (q+q^{-1})$;}
\end{picture}
\]
\noindent Tangled loop removal:
\[
\begin{picture}(150, 100)(-90,0)
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 42){\line(0, 1){58}}
\put(-15, 0){\line(0, 1){36}}
}
\qbezier(-19, 68)(-50, 50)(-15, 38)
\qbezier(-11, 68)(-3, 70)(0, 85)
\qbezier(-15, 38)(-5, 38)(0, 20)
\put(20, 20){\line(0, 1){65}}
\qbezier(0, 20)(10, -10 ) (20, 20)
\qbezier(0, 85)(10, 105) (20, 85)
\put(35, 45){$=$}
\end{picture}
\begin{picture}(150, 110)(-50,0)
\put(-50, 45){$-(\Omega+\Omega^{-1})$}
{
\linethickness{1mm}
\put(20, 0){\line(0, 1){100}}
}
\put(25, 0){.}
\end{picture}
\]
The image in $\ATLC(q, \Omega)$ of a tangle diagram from $\RTC$ will be depicted by the same diagram, but is understood to represent a coset, and therefore obeys the above relations. Composition of morphisms in $\ATLC(q, \Omega)$ is inherited from that in $\RTC$.
\begin{definition}\label{def:ATLC-q}
The category $\ATLC(q, \Omega)$ is the {\em multi-polar \tl category with one parameter}.
\end{definition}
The following lemma is obtained by specialising the parameters of the two-parameter multi-polar \tl category.
\begin{lemma}\label{lem:iso-tlcb-2}
There is an equivalence $\ATLC(q, \Omega)\cong\ATLC(q, a_1, a_2)$ of categories, where the parameters $a_1, a_2$ are respectively given by
\[
a_1=-(\Omega+\Omega^{-1}), \quad a_2=-q^{-1}((\Omega+\Omega^{-1})^2+\delta q^{-1}).
\]
\end{lemma}
\begin{proof}
This is clear.
\end{proof}
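To illustrate the specialisation, note that the tangled loop removal relation in $\ATLC(q, \Omega)$ gives $z_1=-(\Omega+\Omega\inv)\,\id_m$ directly, while closing off the thin strand in the third skein relation and removing the resulting free loop gives
\[
q\, z_2 = (\Omega+\Omega\inv)\, z_1 - q\inv\delta = -(\Omega+\Omega\inv)^2 - q\inv\delta,
\]
so that $z_2$ acts as $a_2=-q\inv\big((\Omega+\Omega\inv)^2+\delta q\inv\big)$, as stated.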
It is further clear that the \tl category $\TLC(q)$ is contained in $\ATLC(q, \Omega)$ as a full subcategory.
There is another full subcategory of $\ATLC(q, \Omega)$, which is of particular interest to us.
\begin{definition}\label{def:tl(b)}
The Temperley-Lieb category of type $B$ is the full subcategory of $\ATLC(q, \Omega)$ with objects of the
form $(m, v^r):=(m, \underbrace{v, \dots, v}_r)$ for all $r\in \Z_{\ge 0}$. This category will be denoted by $\TLBC(q, \Omega)$.
\end{definition}
\begin{remark}\label{rem:tlsub} Note that the finite \tl category $\TLC(q)$ may be thought of as a subcategory of $\TLBC(q,\Omega)$,
since diagrams from $(v^r)$ to $(v^s)$ are evidently in bijection with diagrams $(m,v^r)$ to $(m,v^s)$ which have no entanglement with the pole.
Thus $\TLC(q)$ is the subcategory of $\TLBC(q,\Omega)$ with precisely such morphisms.
Note also that $\TLC(q)$ may be regarded as a subcategory of $\TLBC(q, a_1, a_2)$ in the same way.
\end{remark}
\begin{remark}
If $a_1, a_2$ are as in Lemma \ref{lem:iso-tlcb-2}, then $\TLBC(q, \Omega)\cong\TLBC(q, a_1, a_2)$.
\end{remark}
\subsubsection{Structure of the Temperley-Lieb category of type $B$}\label{sect:TLBC-struct}
It is the category $\TLBC(q, \Omega)$ which is relevant to the study of quantum Schur-Weyl duality in Section \ref{sect:Schur-Weyl}.
We therefore investigate its structure in more depth.
\begin{remark}
All results obtained in this section are also valid for $\TLBC(q, a_1, a_2)$, the two-parameter Temperley-Lieb category of type $B$.
\end{remark}
Morphisms of $\TLBC(q, \Omega)$ are linear combinations of diagrams with only one vertical thick arc placed
at the left end, which can be described explicitly as follows.
\begin{definition}
Call a tangle diagram in $\TLBC(q, \Omega)$ a Temperley-Lieb diagram if it satisfies the following conditions:
\begin{enumerate}
\item the diagram has only one vertical thick arc placed at the left end, which will be called the pole;
\item there are no loops in the diagram;
\item arcs do not self-tangle, and thin arcs do not tangle with thin arcs;
\item if a thin arc tangles with the thick arc, it crosses the thick arc just twice, and crosses behind the pole
in the upper crossing.
\end{enumerate}
\end{definition}
For example, the diagrams in Figure \ref{fig:seven-five} and Figure \ref{fig:3-5} are morphisms in $\TLBC(q, \Omega)$.
The one in Figure \ref{fig:3-5} is not in $\TLC(q)$ while that in Figure \ref{fig:seven-five} is in $\TLC(q)$ (see Remark \ref{rem:tlsub}).
\begin{figure}[h]
\begin{picture}(160, 60)(-30,0)
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){60}}
}
\qbezier(20, 60)(50, 10)(80, 60)
\qbezier(30, 60)(50, 25)(70, 60)
\qbezier(0, 60)(20, 40)(80, 0)
\qbezier(0, 0)(30, 40)(60, 0)
\qbezier(15, 0)(30, 20)(45, 0)
\qbezier(90, 0)(105, 40)(120, 0)
\end{picture}
\caption{A diagram $(m, v^7)\to(m, v^5)$}
\label{fig:seven-five}
\end{figure}
\begin{figure}[h]
\begin{picture}(150, 100)(-50,0)
{
\linethickness{1mm}
\put(-15, 25){\line(0, 1){75}}
\put(-15, 0){\line(0, 1){16}}
}
\qbezier(10, 100)(6, 80)(-11, 75)
\qbezier(-5, 100)(-5, 80)(-11, 80)
\qbezier(-19, 74)(-30, 70)(-30, 50)
\qbezier(-15, 22)(-30, 26)(-30, 50)
\qbezier(-19, 80)(-40, 75)(-38, 40)
\qbezier(-15, 19)(-40, 25)(-38, 50)
\qbezier(-15, 19)(25, 15)(25, 0)
\qbezier(-5, 0)(3, 20)(10, 0)
\qbezier(-15, 22)(40, 10)(50, 100)
\qbezier(20, 100)(30, 20)(40, 100)
\qbezier(35, 0)(50, 45)(60, 100)
\end{picture}
\caption{A diagram $(m, v^4)\to(m, v^6)$}
\label{fig:3-5}
\end{figure}
\noindent
Figure \ref{fig:seven-five} is a morphism $(m, v^7) \to (m, v^5)$, where arcs do not cross, and
Figure \ref{fig:3-5} is a morphism $(m, v^4) \to (m, v^6)$, in which two thin arcs each cross over the thick arc twice.
Note that there is a unique Temperley-Lieb diagram $m\to m$ consisting of a thick arc only.
The spaces of morphisms of $\TLBC(q, \Omega)$ are easily seen to be spanned by Temperley-Lieb diagrams, since the relations
in Section \ref{sect:TLBC} may be used to reduce any diagram to a linear combination of Temperley-Lieb diagrams.
Composition of morphisms may be described as follows. Given morphisms
$D: B\to T$ and $D': T\to U$ represented by \tl diagrams, their composition is defined by the following steps
\begin{enumerate}
\item Concatenation of diagrams. Concatenate the diagrams $D$ and $D'$ by joining the points on the top of $D$ with those on the bottom of $D'$.
\item Reduction to Temperley-Lieb diagrams. Apply locally the skein relation, free loop removal and
tangled loop removal to turn the resulting diagram into a linear combination of Temperley-Lieb diagrams $B\to U$.
\item The result of step (2) is the composition $D'\circ D$ of the morphisms $D$ and $D'$.
\end{enumerate}
The following result is obtained by repeatedly applying the straightening relations.
\begin{lemma}\label{lem:hom-iso}
Let $N$ be any non-negative integer. Then for $r=0, 1, \dots, 2N$, the vector spaces
$
\Hom_{\TLBC(q, \Omega)}((m, v^r), (m, v^{2N-r}))
$
are all isomorphic.
\end{lemma}
\begin{proof} The proof is the same as in \cite{LZ14}.
\end{proof}
Consider in particular $W(2N):=\Hom_{\TLBC(q, \Omega)}(m, (m, v^{2N}))$. Let $F_tW(2N)$ be the subspace
of $W(2N)$ spanned by those diagrams in which at most $t$ thin arcs are entangled with the pole.
For example, Figure \ref{fig:0-12} is a diagram $m\to(m, v^{12})$,
\begin{figure}[h]
\begin{picture}(150, 140)(-50,-40)
{
\linethickness{1mm}
\put(-15, 25){\line(0, 1){75}}
\put(-15, 0){\line(0, 1){16}}
\put(-15, -18){\line(0, 1){18}}
\put(-15, -40){\line(0, 1){16}}
}
\qbezier(10, 100)(6, 80)(-11, 75)
\qbezier(0, 100)(-5, 80)(-11, 80)
\qbezier(-19, 74)(-30, 70)(-30, 50)
\qbezier(-15, 22)(-30, 26)(-30, 50)
\qbezier(-19, 80)(-40, 75)(-38, 40)
\qbezier(-15, 19)(-40, 25)(-38, 50)
\qbezier(-15, 19)(70, 5)(110, 100)
\qbezier(60, 100)(55, 25)(100, 100)
\qbezier(70, 100)(65, 45)(90, 100)
\qbezier(-15, 22)(40, 10)(50, 100)
\qbezier(20, 100)(30, 20)(40, 100)
\qbezier(-15, -22)(-40, -12)(-19, 0)
\qbezier(-11, 1)(100, 3)(120, 100)
\qbezier(-15, -22)(100, -20)(140, 100)
\end{picture}
\caption{A diagram $m\to(m, v^{12})$}
\label{fig:0-12}
\end{figure}
\noindent
where $3$ thin arcs are entangled with the pole. Note that two of those three arcs (namely those which cross the pole at its upper part)
are ``parallel'' when they cross the pole. Let us make this notion more precise.
\begin{definition} Any thin arc which is entangled with the pole has two polar crossings, and in this way defines an interval on the pole,
which we call its ``polar interval''.
We say that two thin arcs in a Temperley-Lieb diagram are parallel if they are both entangled with the pole and
the polar interval defined by one is contained in the polar interval defined by the other.
\end{definition}
\begin{remark}\label{rem:int}
There is a partial order on the thin arcs in a \tl diagram which are entangled with the pole, which is defined by containment
of their corresponding polar intervals. Two such arcs are parallel if they are comparable in this partial order.
\end{remark}
\begin{definition}\label{def:stand-disting}
Call a Temperley-Lieb diagram $m\to (m, v^{2N})$ {\em distinguished} if any pair of thin arcs which entangle the pole
are parallel, and {\em standard} if no two thin arcs tangling with the pole are parallel. In view of Remark \ref{rem:int},
a diagram is distinguished (resp. standard) if those among its arcs which entangle the pole are totally ordered (resp. have no
two arcs which are comparable) in the partial order on pole-entangled arcs.
\end{definition}
Examples of distinguished and standard diagrams are given respectively in Figure \ref{fig:0-10-d} and in Figure \ref{fig:0-12-s}.
It is easily seen that there is a unique distinguished (resp. standard) Temperley-Lieb diagram $m\to (m, v^{2N})$
with $N$ thin arcs tangling with the pole.
\begin{figure}[h]
\begin{picture}(150, 120)(-50,-20)
{
\linethickness{1mm}
\put(-15, 25){\line(0, 1){75}}
\put(-15, 0){\line(0, 1){16}}
\put(-15, -18){\line(0, 1){18}}
}
\qbezier(10, 100)(6, 80)(-11, 75)
\qbezier(0, 100)(-5, 80)(-11, 80)
\qbezier(-19, 74)(-30, 70)(-30, 50)
\qbezier(-15, 22)(-30, 26)(-30, 50)
\qbezier(-19, 80)(-40, 75)(-38, 40)
\qbezier(-15, 19)(-40, 25)(-38, 50)
\qbezier(-15, 19)(70, 5)(110, 100)
\qbezier(60, 100)(55, 25)(100, 100)
\qbezier(70, 100)(65, 45)(90, 100)
\qbezier(-15, 22)(40, 10)(50, 100)
\qbezier(20, 100)(30, 20)(40, 100)
\end{picture}
\caption{A distinguished diagram $m\to(m, v^{10})$}
\label{fig:0-10-d}
\end{figure}
\begin{figure}[h]
\begin{picture}(150, 140)(-50,-40)
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){16}}
\put(-15, -18){\line(0, 1){18}}
\put(-15, -40){\line(0, 1){16}}
}
\qbezier(50, 100)(46, 50)(-11, 45)
\qbezier(-5, 100)(-2, 90)(-11, 82)
\qbezier(10, 100)(8, 70)(-15, 63)
\qbezier(-19, 79)(-40, 63)(-15, 63)
\qbezier(-15, 19)(-35, 30)(-19, 42)
\qbezier(-15, 19)(70, 5)(110, 100)
\qbezier(60, 100)(55, 25)(100, 100)
\qbezier(70, 100)(65, 45)(90, 100)
\qbezier(20, 100)(30, 35)(40, 100)
\qbezier(-15, -22)(-40, -12)(-19, 0)
\qbezier(-11, 1)(100, 3)(120, 100)
\qbezier(-15, -22)(100, -20)(140, 100)
\end{picture}
\caption{A standard diagram $m\to(m, v^{12})$}
\label{fig:0-12-s}
\end{figure}
Recall that $F_tW(2N)$ is the subspace of $W(2N)=\Hom_{\TLBC(q, \Omega)}(m, (m, v^{2N}))$ spanned by diagrams with at most $t$
thin arcs which are entangled with the pole.
\begin{lemma} \label{lem:reduct}
\begin{enumerate}
\item Any Temperley-Lieb diagram in $F_tW(2N)$, which does not belong to $F_{t-1}W(2N)$, can be expressed as a linear
combination of elements of $F_{t-1}W(2N)$ and a distinguished (resp. standard) diagram with $t$ thin arcs tangling with the pole,
where the coefficient of the distinguished (resp. standard) diagram is a (half-integer) power of $q$.
\item Any Temperley-Lieb diagram in $F_tW(2N)$ can be expressed as a linear combination of distinguished (resp. standard) diagrams.
\end{enumerate}
\end{lemma}
\begin{proof}
By stretching the entangled arcs, one sees that
any Temperley-Lieb diagram $m\to (m, v^{2N})$ with $t$ thin arcs entangling the pole can be expressed as the
composition $D'\circ D$ of two diagrams $D: m\to (m, v^{2t})$ and $D': (m, v^{2t}) \to (m, v^{2N})$, where $D'$ has
no thin arcs which entangle the pole.
Now consider a diagram $D: m\to (m, v^{2t})$ with $t$ thin arcs entangling the pole. If $D$ is distinguished, then the original
diagram is distinguished. If $D$ is not distinguished, we may pull the thin arcs one by one so that all $t$ arcs are parallel,
at the expense of introducing crossings, which after sliding, will all be to the right of the pole.
We now use skein relations to remove the crossings among the thin arcs. This leads to a
linear combination of diagrams, where the only diagram with $t$ thin arcs entangling the pole is the distinguished one, and its coefficient is a power of $q$.
In a similar way, we can express any Temperley-Lieb diagram in $F_tW(2N)$ as a linear combination of a standard diagram and diagrams in $F_{t-1}W(2N)$.
This proves part (1) of the Lemma.
To see (2), observe that
given any Temperley-Lieb diagram $m\to (m, v^{2N})$ with $t$ thin arcs entangling the pole, we may use
part (1) to express it as a linear combination of a distinguished (resp. standard) diagram and diagrams in $F_{t-1}W(2N)$.
We then repeat the reduction process for diagrams in $F_{t-1}W(2N)$, then $F_{t-2}W(2N)$, etc., and after $t$ iterations,
arrive at the statement (2).
\end{proof}
The $r=N$ case of the following result is stated in \cite{GL98, GL03}. The general case can be deduced from this using Lemma \ref{lem:hom-iso}.
\begin{lemma} \label{lem:dim-TLB} For all $r=0, 1, \dots, 2N$,
\[
\dim\Hom_{\TLBC(q, \Omega)}((m, v^r), (m, v^{2N-r}))=\begin{pmatrix}2N\\ N\end{pmatrix}.
\]
\end{lemma}
\begin{proof} We give a proof here which will be useful for proving Theorem \ref{thm:main}.
By Lemma \ref{lem:hom-iso}, we only need to prove the dimension formula for $r=0$.
Recall the following filtration of $W(2N)=\Hom_{\TLBC(q, \Omega)}(m, (m, v^{2N}))$, which was introduced above.
\begin{eqnarray}\label{eq:filtr}
F_NW(2N)\supset F_{N-1}W(2N)\supset \dots\supset F_1W(2N)\supset F_0W(2N)\supset 0,
\end{eqnarray}
where $F_tW(2N)$ is the subspace spanned by \tl diagrams with at most $t$ thin arcs entangled with the pole.
Let $\End(2N)= \Hom_{\TLBC(q, \Omega)}((m, v^{2N}), (m, v^{2N}))$ and denote by $\End^0(2N)$ the subspace of
$\End(2N)$ spanned by diagrams with no entanglement with the pole. Then $\End^0(2N)$ is a subalgebra of
$\End(2N)$ isomorphic to the Temperley-Lieb algebra $\TL_{2N}(q)$ of degree $2N$. Now $\End^0(2N)$
acts naturally on $W(2N)$, and the $F_tW(2N)$ are
$\End^0(2N)$-submodules. Clearly $\frac{F_tW(2N)}{F_{t-1}W(2N)}$ is isomorphic to a cell module
$W_{2t}(2N)$ \cite[Def.~(2.2)]{GL98} for $\End^0(2N)$, which is a simple module in the present generic context.
Hence, as vector spaces,
\begin{eqnarray}
\Hom_{\TLBC(q, \Omega)}(m, (m, v^{2N})) \cong \bigoplus_{t=0}^N W_{2t}(2N).
\end{eqnarray}
Using the well-known fact \cite{ILZ1} that
\[
\dim W_{2t}(2N)=\begin{pmatrix}2N\\ N-t\end{pmatrix} - \begin{pmatrix}2N\\ N-t-1\end{pmatrix}
\]
with $\dim W_{2N}(2N)=\begin{pmatrix}2N\\ 0\end{pmatrix} =1$,
we obtain the result.
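Explicitly, the sum over $t$ telescopes:
\[
\sum_{t=0}^{N}\left(\begin{pmatrix}2N\\ N-t\end{pmatrix} - \begin{pmatrix}2N\\ N-t-1\end{pmatrix}\right)
= \begin{pmatrix}2N\\ N\end{pmatrix} - \begin{pmatrix}2N\\ -1\end{pmatrix} = \begin{pmatrix}2N\\ N\end{pmatrix}.
\]
For instance, when $N=2$ the summands are $2, 3, 1$, with sum $\begin{pmatrix}4\\ 2\end{pmatrix}=6$.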
\end{proof}
\subsubsection{The Temperley-Lieb algebra of type $B$}
Given any object $A$ in $\TLBC(q, \Omega)$, we let $\End(A):=\Hom_{\TLBC(q, \Omega)}(A, A)$.
This forms an associative algebra with multiplication given by composition of morphisms.
\begin{definition} \label{def:algs} For any $r$, let $\TLBC_r(q, \Omega)= \End(m, v^r)$. This is referred to as
a Temperley-Lieb algebra of type $B$.
\end{definition}
The following lemma justifies the terminology.
\begin{lemma}\label{lem:TLB-alg-iso}
There is an algebra isomorphism
\[
F: \TLB_r(q, Q)\stackrel{\sim}\longrightarrow \TLBC_r\left(q, \frac{Q}{\sqrt{-1}}\right)
\]
such that, for all $i=1, 2, \dots, r-1$,
\[
\begin{aligned}
&\begin{picture}(150, 70)(-20,0)
\put(-85, 30){$y_1\mapsto q\sqrt{-1}$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){18}}
\put(-15, 22){\line(0, 1){38}}
}
\qbezier(0, 60)(-7, 50)(-12, 42)
\qbezier(-18, 38)(-25, 30)(-15, 20)
\qbezier(-15, 20)(-15, 20)(0, 0)
\put(20, 0){\line(0, 1){60}}
\put(40, 0){\line(0, 1){60}}
\put(60, 30){......}
\put(100, 0){\line(0, 1){60}}
\put(105, 0){, }
\end{picture}\\
&\begin{picture}(150, 70)(-20,0)
\put(-50, 28){$e_i\mapsto$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){60}}
}
\put(0, 0){\line(0, 1){60}}
\put(20, 0){\line(0, 1){60}}
\put(5, 30){...}
\qbezier(40, 0)(50, 50)(60, 0)
\qbezier(40, 60)(50, 10)(60, 60)
\put(80, 0){\line(0, 1){60}}
\put(100, 0){\line(0, 1){60}}
\put(85, 30){...}
\put(36, -10){\small$i$}
\put(52, -10){\small{$i$+1}}
\put(105, 0){. }
\end{picture}
\end{aligned}
\]
\end{lemma}
\begin{proof} From the definitions of $\TLB_r(q, Q)$ and $\TLBC_r\left(q, \frac{Q}{\sqrt{-1}}\right)$,
it is clear that the map $F$ respects the relations among the $e_i$, and also $F(y_1) F(e_i) = F(e_i) F(y_1)$ for all $i\ge 2$.
Thus in order to prove the lemma, one only needs to show that $F$ also preserves the relations
between $y_1$ and $e_1$ in Definition \ref{def:TLB-alg}.
Note that the third skein relation in the definition of $\ATLC(q, \Omega)$ leads to
\[
F(y_1)^2 = \left(\sqrt{-1}\Omega- (\sqrt{-1}\Omega)^{-1}\right) F(y_1)+ \id,
\]
where $\id$ is the identity of $\TLBC_r\left(q, \Omega\right)$. By setting $Q={\sqrt{-1}}\Omega$ in this equation, we obtain
\begin{align}
&(F(y_1)-Q)(F(y_1)+Q^{-1})=0. \label{eq:F-skein}
\end{align}
Also, tangled loop removal in $\ATLC(q, \Omega)$ leads to
\begin{align}
F(e_1) F(y_1) F(e_1) = -q\sqrt{-1} (\Omega+\Omega^{-1}) F(e_1) = -q(Q-Q^{-1}) F(e_1).
\label{eq:F-inv}
\end{align}
Therefore the defining relations in Definition \ref{def:TLB-alg}
are indeed preserved by $F$, completing the proof of the lemma.
\end{proof}
\subsection{Interrelationships among various diagram categories} \label{sect:summary}
We summarise the interrelationships among various categories which have arisen in the process of
developing a new formulation of the \tl category of type $B$ introduced in \cite{GL03}. The relevant categories are:
\begin{itemize}
\item $\TLBB(q,Q)$, the \tl category of type $B$ introduced in \cite{GL03};
\item $\RTC$, a category of coloured un-oriented tangle diagrams;
\item $\ATLC(q)$, the affine \tl category;
\item $\ATLC(q, a_1, a_2)$, the multi-polar \tl category with two parameters;
\item $\TLBC(q, a_1, a_2)$, the \tl category of type $B$ with two parameters;
\item $\ATLC(q, \Omega)$, the multi-polar \tl category with one parameter;
\item $\TLBC(q, \Omega)$, the \tl category of type $B$; and
\item $\TLC(q)$, the \tl category.
\end{itemize}
Their interrelationships are depicted in the commutative diagram below.
\begin{figure}[h]
\begin{tikzpicture}[scale=1.2]
\draw(0, 4) node {$\RTC$};
\draw(3,4) node {$\ATLC(q)$};
\draw(8,4) node {$\TLC(q)$};
\draw(3,2) node {$\ATLC(q, a_1, a_2)$};
\draw(8,2) node {$\ATLC(q,\Omega)$};
\draw(8,0) node {$\TLBC(q,\Omega)$};
\draw(3,0) node {$\TLBC(q, a_1, a_2)$};
\draw(0,-2) node {$\TLC(q)$};
\draw(8, -2) node {$\mathbb{TLB}(q, \sqrt{-1}\Omega)$};
\draw [->] (0.3,4) -- (2.4, 4);
\draw [->] (3, 3.7) -- (3, 2.3);
\draw [->] (3.2, 3.7) -- (7.8, 2.3);
\draw [->] (4.1, 2) -- (7.1, 2);
\draw [>->] (3, 0.3) -- (3, 1.7);
\draw [>->] (8, 0.3) -- (8, 1.7);
\draw [->] (4.2, 0) -- (7.1, 0);
\draw[>->](0.4,-1.7)--(3, -0.2);
\draw[>->](0.6,-1.7)--(7.8, -0.2);
\draw[>->](0.2,-1.7)--(2.8, 1.7);
\draw[>->](0,-1.7)--(2.8, 3.8);
\draw[>->](7.4, 4)--(3.6, 4);
\draw[>->](8, 3.7)--(8, 2.3);
\draw[->](8, -1.7)--(8, -0.3);
\draw[>->](0.7, -2)--(6.7, -2);
\draw(5.5, 2.2) node {$\text{specialisation}$};
\draw(5.5, 0.2) node {$\text{specialisation}$};
\draw(7.8, -1) node {$\simeq$};
\end{tikzpicture}
\caption{Relationships among categories}
\label{fig:cats}
\end{figure}
We note that the affine \tl category and the multi-polar \tl categories all contain two different copies of the \tl category $\TLC(q)$ as subcategories. The one at the top left corner of the diagram is always a full subcategory, while the other (at the bottom right corner) is a subcategory as described in Remark \ref{rem:tlsub}.
\subsection{Standard diagrams, tensor product and generators}\label{sss:tp}
The diagrams in the subcategory $\TLC(q)$ of $\TLBC(q,\Omega)$ are those which involve no entanglement with the pole.
As such, they look like a disentangled pole adjacent to a `usual' finite Temperley-Lieb diagram (cf. \cite{GL98,GL03}).
Now there is an obvious functor
\be\label{eq:tp2}
\TLBC(q,\Omega)\times\TLC(q)\lr \TLBC(q,\Omega)
\ee
which is defined by juxtaposition of diagrams, where the disentangled pole is omitted from the second factor.
Moreover it is clear that the diagrams depicted as $I,A,U$ below generate the subcategory
$\TLC(q)$ under composition and the tensor product defined by \eqref{eq:tp2}.
\begin{figure}[h]
\begin{picture}(130, 90)(-70,-5)
\put(-40, 40){$I=$}
{
\linethickness{1mm}
\put(-5, 0){\line(0, 1){80}}
}
\put(15, 0){\line(0, 1){80}}
\put(20, 0){,}
\end{picture}
\begin{picture}(120, 90)(-35,-5)
\put(-40, 40){$A=$}
{
\linethickness{1mm}
\put(-5, 0){\line(0, 1){80}}
}
\qbezier(15, 0)(25, 150)(35, 0)
\put(40, 0){,}
\end{picture}
\begin{picture}(100, 90)(-35,-5)
\put(-40, 40){$U=$}
{
\linethickness{1mm}
\put(-5, 0){\line(0, 1){80}}
}
\qbezier(15, 80)(25, -70)(35, 80)
\put(40, 0){.}
\end{picture}
\caption{Generators of a $\TLC(q)$ subcategory}
\label{fig:generators-pole}
\end{figure}
We wish to determine similar generators of the whole of $\TLBC(q,\Omega)$. For this we begin by observing that any
Temperley-Lieb diagram may be expressed as a linear combination of diagrams which are {\em standard}
in the following sense.
\begin{definition}\label{def:stand-general}
A Temperley-Lieb diagram $D: (m, v^r)\to (m, v^s)$ is called {\em standard} if
$(D\otimes \id_{v^r})(\id_m\otimes {\mathbb U}_r): m\to (m, v^{r+s})$ is a standard diagram in the sense of Definition
\ref{def:stand-disting},
where ${\mathbb U}_r: \emptyset\to v^{2r}$ is given by the following diagram.
\[
\begin{picture}(120, 70)(-35,30)
\put(-35, 60){${\mathbb U}_r=$}
\qbezier(-10, 80)(50, 0)(110, 80)
\qbezier(10, 80)(50, 20)(90, 80)
\put(25, 70){...}
\qbezier(40, 80)(50, 40)(60, 80)
\put(65, 70){...}
\end{picture}
\]
\end{definition}
It immediately follows from part (2) of Lemma \ref{lem:reduct} that
\begin{corollary}
Any Temperley-Lieb diagram can be expressed as a linear combination of standard diagrams.
\end{corollary}
It is now clear that to obtain $\TLBC(q,\Omega)$ just one extra generator, depicted as $L$ in the diagram below, needs to be added to the set of generators of $\TLC(q)$ given in Figure \ref{fig:generators-pole}.
\[
\begin{picture}(150, 90)(-80,-5)
\put(-65, 40){$L=$}
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){24}}
\put(-15, 30){\line(0, 1){50}}
}
\qbezier(-18, 60)(-40, 40)(-15, 27)
\qbezier(-15, 27)(0, 20)(0, 0)
\qbezier(0, 80)(0, 67)(-12, 64)
\put(15, 0){.}
\end{picture}
\]
We shall use these generators in the proof of Theorem \ref{thm:tlbequ} below.
Note that among the relations in $\TLBC(q,\Omega)$ we have (see ``skein relations'' and ``tangled loop removal'' above)
\be\label{eq:skein}
qL^2=(\Omega+\Omega\inv)L-q\inv I
\ee
and
\be\label{eq:loop}
A(L\ot I)U=-(\Omega+\Omega\inv).
\ee
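In particular, \eqref{eq:skein} shows that the morphism $L$ is invertible: since $L\big(qL-(\Omega+\Omega\inv)I\big)=-q\inv I$, we have
\[
L^{-1} = q(\Omega+\Omega\inv)\, I - q^2 L.
\]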
\section{Quantum Schur-Weyl duality}\label{sect:Schur-Weyl}
\subsection{More on the Quantum group $\U_q(\fsl_2)$}
We give a more detailed account of the representations and the universal $R$-matrix of $\U_q(\fsl_2)$ in this section.
We write $\Uq$ for the $\CK_0$-algebra $\U_q(\fsl_2)$. This has generators $E,F$ and $K^{\pm 1}$, with relations
\[
KEK\inv=q^2E,\quad KFK\inv=q^{-2}F, \quad EF-FE=\frac{K-K\inv}{q-q\inv}.\]
The comultiplication is given by
\[
\Delta(K)=K\ot K,\quad \Delta(E)=E\ot K+1\ot E,\quad \Delta(F)=F\ot 1+K\inv\ot F,
\]
and the antipode by
\[
S(E)=-EK\inv,\quad S(F)=-KF,\quad S(K)=K\inv.
\]
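Indeed, these formulas are forced by the defining property of the antipode: applied to $\Delta(E)$ and $\Delta(F)$ above, it gives $S(E)K+E=0$ and $S(F)+KF=0$, whence $S(E)=-EK\inv$ and $S(F)=-KF$.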
\subsubsection{Representations}{\ }\\
{\em Projective modules}.
An integral weight $\U_q(\fsl_2)$-module $M$ of type-${\bf 1}$ is one such that $M=\oplus_{k\in \Z}M_k$ where
$M_k=\{v\in M\mid K v= q^k v \}$.
Let $\U_q(\fb)$ be the Borel subalgebra of $\U_q(\fsl_2)$ generated by $E$ and $K^{\pm 1}$.
The category $\CO_{int}$ is defined as the category of $\U_q(\fsl_2)$-modules $M$, which satisfy:
\begin{itemize}
\item $M$ is finitely generated as a $\U_q(\fsl_2)$-module.
\item $M$ is locally $\U_q(\fb)$ finite.
\item $M$ is an integral weight module of type ${\bf 1}$.
\end{itemize}
For any integer $\ell\in\Z$, denote by $(\CK_0)_\ell = \CK_0v^+$ the $1$-dimensional $\U_q(\fb)$-module such that
$E v^+=0$ and $K v^+ = q^{\ell} v^+$, and let $M(\ell)=\U_q(\fsl_2)\otimes_{\U_q(\fb)}(\CK_0)_\ell$. This is the Verma module with highest weight $\ell$, which has a unique
simple quotient $V(\ell)$. Then $M(\ell)$ and $V(\ell)$ are the standard and simple objects in $\CO_{int}$.
The simple module $V(\ell)$ is finite dimensional if and only if $\ell\ge 0$, and in this case it is $(\ell+1)$-dimensional.
The quantum group $\U_q(\fsl_2)$ has a central element $z:=FE+\frac{qK+q^{-1}K^{-1}}{(q-q^{-1})^2}$.
It acts as the scalar $\chi(\ell):=\frac{q^{\ell+1}+q^{-\ell-1}}{(q-q^{-1})^2}$ on a highest weight vector of weight $\ell$, and since
it is central, therefore acts as the scalar $\chi(\ell)$ on the whole of any highest weight module in $\CO_{int}$ with highest weight $\ell$.
For any module $M\in\CO_{int}$, if we
define $M^{\chi_\ell}:=\{m\in M\mid (z-\chi_\ell)^im=0\text{ for some }i\geq 0\}$,
then $M^{\chi_\ell}$ is a direct summand of $M$. We denote by $\CO_{int}^{\chi_\ell}$ the full subcategory of $\CO_{int}$ whose objects are modules $M$ such that $M=M^{\chi_\ell}$, and call it the block (which is indeed a block)
of $\CO_{int}$ corresponding to $\chi_\ell$.
It follows from the definition of $\CO_{int}$ that any $M\in\CO_{int}$ is an object in a direct sum of finitely many blocks.
Clearly $\chi(\ell) = \chi(\ell')$ if and only if $\ell=\ell'$ or $\ell+\ell'+2=0$.
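Indeed, a direct calculation gives
\[
\chi(\ell)-\chi(\ell') = \frac{\big(q^{\ell+1}-q^{\ell'+1}\big)\big(1-q^{-(\ell+\ell'+2)}\big)}{(q-q^{-1})^2},
\]
and since $q$ is generic, the right hand side vanishes precisely when $\ell=\ell'$ or $\ell+\ell'+2=0$.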
\begin{definition}\label{def:link}
The weights $\ell,\ell'\in\Z$ are said to be linked if $\ell=\ell'$ or $\ell+\ell'+2=0$. The linkage principle asserts here merely
that $\chi(\ell)=\chi(\ell')\iff$ $\ell,\ell'$ are linked.
\end{definition}
Observe that if $\ell\ge -1$, then there exists no $\ell'>\ell$ linked to $\ell$.
This leads to the following result, which is well known, but we provide a proof for the convenience of the reader.
\begin{lemma} \label{lem:proj} Fix an integer $\ell\ge -1$.
\begin{enumerate}
\item The Verma module $M(\ell)$ is projective in $\CO_{int}$.
\item If $M$ is a finite dimensional module in $\CO_{int}$, then
\[
\Hom_{\U_q(\fsl_2)}(M(\ell), M(\ell)\otimes M) \cong M_0,
\]
where $M_0$ is the zero weight space of $M$.
\end{enumerate}
\end{lemma}
\begin{proof}
Given the linkage principle in Definition \ref{def:link}, the usual arguments from the context of Lie algebras may be adapted to prove the lemma.
We first consider part (1).
Let $\psi: M\twoheadrightarrow N$ be any surjection in $\CO_{int}$. Then $\psi(M)= N$, so that $\psi(M^{\chi_\ell})=N^{\chi_\ell}$.
If $\phi: M(\ell)\longrightarrow N$, then clearly $\phi(M(\ell))\subseteq N^{\chi_\ell}$ and hence to prove (1), we may suppose that $M$ and $N$ are
in the block $\CO_{int}^{\chi_\ell}$ of $\CO_{int}$.
Since the image $\phi(m_+)$ of the highest weight vector $m_+$ of $M(\ell)$ is in $N=\psi(M)$,
$\phi(m_+)=\psi(v)$ for some
$v\in M$. Writing $v=v_0+v_1+\dots+v_k$, where $v_0$ has weight $\ell$ and for $i\geq 1$, $v_i$ has weight $\ell_i\neq \ell$
(the $\ell_i$ being pairwise distinct), we see that for $j=1,2,3,\dots, $ we have
$q^{j\ell}(\psi(v)-\psi(v_0))+q^{j\ell_1}\psi(v_1)+\dots + q^{j\ell_k}\psi(v_k)=0$. A Vandermonde-type argument shows that
$\psi(v_i)=0$ for $i>0$ and that $\psi(v)=\psi(v_0)$. Replacing $v$ by $v_0$, we may therefore assume that $v$ is a weight vector
of weight $\ell$.
Now the subspace $\U_q(\fb) v\subseteq M$ contains a highest weight vector $v'$ of weight (say) $\ell'$. Thus
$v'$ is an eigenvector of $z$, with eigenvalue $\chi_{\ell'}$. Since $M$
is in the block $\CO_{int}^{\chi_\ell}$, we must have $\chi_\ell=\chi_{\ell'}$, i.e. $\ell'$ is linked to $\ell$.
But $\ell'\ge \ell\ge -1$, so $\ell'=\ell$, and hence $v$ is a highest weight vector in $M$.
It follows that the unique
homomorphism $\phi': M(\ell)\longrightarrow M$ with $\phi'(m_+)=v$ renders the following diagram commutative.
\[
\xymatrix{
&M(\ell)\ar[d]^\phi\ar@{-->}[ld]_{\phi'}\\
M\ar@{->>}[r]_\psi &N.
}
\]
This proves that $M(\ell)$ is projective in $\CO_{int}$.
We now prove part (2).
Since the module $M$ given in part (2) is finite dimensional, $M(\ell)\otimes M= \U_q(\fsl_2)\otimes_{\U_q(\fb)}((\CK_0)_\ell\otimes M)$.
Applying the induction functor to the composition series of $(\CK_0)_\ell\otimes M$ as $\U_q(\fb)$-module leads to a filtration
\[
M(\ell)\otimes M:=W_0\supset W_1\supset W_2 \supset \dots \supset W_{D-1}\supset W_D=0, \quad D=\dim M,
\]
where the $W_i$ are in $\CO_{int}$ and $W_i/W_{i+1}=M(\ell_i)$ for some integer $\ell_i$. By the projectivity of $M(\ell)$, we have
\[
\Hom_{\U_q(\fsl_2)}(M(\ell), W_i) =\Hom_{\U_q(\fsl_2)}(M(\ell), M(\ell_i))\oplus \Hom_{\U_q(\fsl_2)}(M(\ell), W_{i+1})
\]
as vector spaces, and hence
\[
\Hom_{\U_q(\fsl_2)}(M(\ell), M(\ell)\otimes M) = \bigoplus_i \Hom_{\U_q(\fsl_2)}(M(\ell), M(\ell_i)).
\]
But $\Hom_{\U_q(\fsl_2)}(M(\ell), M(\ell_i))\ne 0$ only when $\ell$ and $\ell_i$ are linked.
The condition $\ell\ge -1$ requires $\ell_i=\ell$, and in this case, $\Hom_{\U_q(\fsl_2)}(M(\ell), M(\ell_i))$
is one dimensional. Since the weight space of weight $\ell$ in $M(\ell)\ot M$ is $v_+\ot M_0$, we have
$
\Hom_{\U_q(\fsl_2)}(M(\ell), M(\ell)\otimes M) \cong M_0.
$
\end{proof}
\noindent{\em Some properties of $V=V(1)$}. There exists a basis $\{v_1, v_{-1}\}$ of $V$ such that the corresponding representation is given by
\[
K \mapsto \begin{pmatrix} q & 0\\ 0 & q^{-1}\end{pmatrix}, \quad E\mapsto \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix},
\quad F\mapsto \begin{pmatrix} 0 & 0\\ 1& 0 \end{pmatrix}.
\]
We have $V\otimes V=V(2)\oplus V(0)$, with the $1$-dimensional submodule spanned by
\[
c_0:= -q v_1\otimes v_{-1} + v_{-1}\otimes v_1.
\]
Since $V$ is self-dual, there exists a unique (up to scalar multiple) non-degenerate invariant bilinear form
$(\ , \ ): V\times V\longrightarrow \CK_0$ given by
\[
(v_1, v_{-1})=-q (v_{-1}, v_1)=1, \quad (v_1, v_1)=(v_{-1}, v_{-1})=0.
\]
Here invariance of the form means that
$(X v, w) = (v, S(X) w)$ for all $v, w\in V$ and $X\in \U_q(\fsl_2)$, where $S$ is the antipode of $\U_q(\fsl_2)$.
The following maps are clearly $\U_q(\fsl_2)$-morphisms
\begin{eqnarray}\label{eq:curls}
&\check{C}: \CK_0\longrightarrow V\otimes V, \quad 1\mapsto c_0, \label{eq:cup}\\
&\hat{C}: V\otimes V \longrightarrow \CK_0, \quad v\otimes w\mapsto (v, w), \label{eq:cap}
\end{eqnarray}
as are the maps $\eta, \zeta: V\longrightarrow V$ respectively defined by the compositions
\[
\begin{aligned}
\eta: V\stackrel{\sim}\longrightarrow \CK_0\otimes V \stackrel{\check{C}\otimes\id}\longrightarrow V
\otimes V \otimes V \stackrel{\id\otimes \hat{C}}\longrightarrow V,
\\
\zeta: V\stackrel{\sim}\longrightarrow V\otimes\CK_0 \stackrel{\id\otimes \check{C}}\longrightarrow V
\otimes V \otimes V \stackrel{\hat{C}\otimes \id}\longrightarrow V.
\end{aligned}
\]
The statements in the lemma below are all either well-known or easily checked.
\begin{lemma} \label{eq:cup-cap} Let $e= \check{C}\circ\hat{C}$. The following relations hold.
\begin{eqnarray}
\hat{C}(\check{C})=-(q+q^{-1}), \label{eq:loop-c}\\
\eta=\zeta=\id_V, \label{eq:straighten}\\
e^2=-(q+q^{-1}) e. \label{eq:e-op}
\end{eqnarray}
\end{lemma}
\subsubsection{The universal $R$-matrix}
As the universal $R$-matrix of $\U_q(\fsl_2)$ will play an important role in our development, we give some explicit
information concerning it. Following \cite{LZ06}, we
define a functorial linear operator $\Xi$ as follows. For any pair of modules $M_1, M_2$ in $\CO_{int}$, and weight vectors
$w_1\in M_1$ and $w_2\in M_2$ with weights $k_1, k_2$ respectively,
\begin{eqnarray}
\Xi_{M_1, M_2}: M_1\otimes M_2 \longrightarrow M_1\otimes M_2, \quad w_1\otimes w_2 \mapsto q^{\frac{k_1 k_2}{2}} w_1\otimes w_2.
\end{eqnarray}
The universal $R$-matrix is the functorial linear isomorphism
\begin{eqnarray}\label{eq:univ-R}
R = \Xi \left(\sum_{j=0}^\infty \frac{(q-q^{-1})^j}{\qint{j}!} E^j\otimes F^j\right),
\end{eqnarray}
where $\qint{j}!=\prod_{k=0}^j\qint{k}$ with $\qint{k}=\frac{1-q^{-2k}}{1-q^{-2}}$. [Warning: this is not the usual definition of $q$-numbers.]
For $i=1,2$, denote the representation of $\U_q(\fsl_2)$ on $M_i$ by $\pi_i$.
Then the universal $R$-matrix acts on $M_1\otimes M_2$ by
\[
R_{M_1, M_2} = \Xi \left(\sum_{j=0}^\infty \frac{(q-q^{-1})^j}{\qint{j}!} \pi_1(E^j)\otimes \pi_2(F^j)\right).
\]
This is well defined, since $E$ and $F$ act locally nilpotently. The universal $R$-matrix has the following properties.
\begin{eqnarray}
&R_{M_1, M_2} (\pi_1\otimes \pi_2)\Delta(x) = (\pi_1\otimes \pi_2)\Delta'(x)R_{M_1, M_2}, \quad \forall x\in \U_q(\fsl_2); \\
&R_{M_1\otimes M_2, M_3} = R_{M_1, M_3} R_{M_2, M_3}, \quad
R_{M_1, M_2\otimes M_3} = R_{M_1, M_3} R_{M_1, M_2}, \\
&R_{M_1, M_2} R_{M_1, M_3} R_{M_2, M_3} =R_{M_2, M_3}R_{M_1, M_3} R_{M_1, M_2},
\end{eqnarray}
where the last two equations are equalities of automorphisms of $M_1\otimes M_2\otimes M_3$.
The last equation is the celebrated Yang-Baxter equation.
Let $
P_{M_1, M_2}: M_1\otimes M_2 \longrightarrow M_2\otimes M_1
$
be the permutation $w\otimes w' \mapsto w'\otimes w$, and denote
\[
\begin{aligned}
&\check{R}_{M_1, M_2}= P_{M_1, M_2}R_{M_1, M_2}: M_1\otimes M_2 \longrightarrow M_2\otimes M_1. \end{aligned}
\]
Then
\begin{eqnarray}\label{eq:commute}
\check{R}_{M_1, M_2} (\pi_1\otimes \pi_2)\Delta(x) - (\pi_2\otimes \pi_1)\Delta(x)\check{R}_{M_1, M_2}=0, \quad \forall x\in \U_q(\fsl_2),
\end{eqnarray}
and the Yang-Baxter equation becomes the following ``braid relation'' among isomorphisms
$M_1\otimes M_2\otimes M_3\longrightarrow M_3\otimes M_2\otimes M_1$ in $\CO_{int}$.
\[
\begin{aligned}
(\check{R}_{M_2, M_3}\otimes\id_{M_1}) (\id_{M_2}\otimes\check{R}_{M_1, M_3})
(\check{R}_{M_1, M_2}\otimes\id_{M_3})\\
=
(\id_{M_3}\otimes\check{R}_{M_1, M_2}) (\check{R}_{M_1, M_3}\otimes\id_{M_2})(\id_{M_1}\otimes\check{R}_{M_2, M_3}).
\end{aligned}
\]
For $M_1=M_2=V(1)$, by looking at the action of $q^{\frac{1}{2}}\check{R}_{V, V}$ on the respective highest weight
vectors of the simple submodules of $V\otimes V=V(2)\oplus V(0)$, it becomes evident that $q^{\frac{1}{2}}\check{R}_{V, V}$
has eigenvalues $q$ and $-q^{-1}$ on $V(2)$ and $V(0)$ respectively. Bearing in mind that $\check{C}\circ\hat{C}=-(q+q\inv)$
times the projection to $V(0)$ this may be restated as follows.
\begin{lemma}\label{eq:normal-R} The $R$-matrix $\check{R}_{V, V}$ satisfies the following relation.
\begin{eqnarray}\label{lem:normal-R}
q^{\frac{1}{2}}\check{R}_{V, V}=q+\check{C}\circ\hat{C},
\end{eqnarray}
where $\check{C}$ and $\hat{C},$ are as defined in \eqref{eq:curls}.
\end{lemma}
Now let
\begin{eqnarray}
R^T = \Xi \left(\sum_{j=0}^\infty \frac{(q-q^{-1})^j}{\qint{j}!} F^j\otimes E^j\right), \label{eq:univ-RT}
\end{eqnarray}
and denote $R^T_{M_1, M_2}= R^T: M_1\otimes M_2 \longrightarrow M_1\otimes M_2$. Then
\[
\begin{aligned}
R^T_{M_1, M_2}= P_{M_2, M_1}R_{M_2, M_1} P_{M_1, M_2}: M_1\otimes M_2 \longrightarrow M_1\otimes M_2.
\end{aligned}
\]
Furthermore, $\check{R}_{M_2, M_1}\check{R}_{M_1, M_2}=R^T_{M_1, M_2}R_{M_1, M_2}$, and
\[
\begin{aligned}
R^T_{M_1, M_2}R_{M_1, M_2}
&= \Delta(v^{-1}) (v\otimes v): M_1\otimes M_2 \longrightarrow M_1\otimes M_2,
\end{aligned}
\]
where $v$ is Drinfeld's central element of $\U_q(\fsl_2)$ (see \cite{LZ06}). The element $v$ acts on any highest
weight module with highest weight $\ell$ (i.e., cyclically generated by a highest weight vector of
weight $\ell$) in $\CO_{int}$ as multiplication by the scalar $q^{ -\frac{1}{2}\ell (\ell+2)}$.
Assume that $M_1$ and $M_2$ are both highest weight modules with highest weights
$\ell_1$ and $\ell_2$ respectively. Then $R^T_{M_1, M_2}R_{M_1, M_2}$ acts on a highest weight submodule
$M'$ of $M_1\otimes M_2$ with highest weight $\ell$ as
\begin{eqnarray}\label{eq:eigen}
\begin{aligned}
&R^T_{M_1, M_2}R_{M_1, M_2}|_{M'} = q^{\chi(\ell, \ell_1, \ell_2)} \id_{M'}, \quad \text{where}\\
&\chi(\ell, \ell_1, \ell_2) = \frac{\ell(\ell+2)}{2} -\frac{\ell_1(\ell_1+2)}{2} - \frac{\ell_2(\ell_2+2)}{2}.
\end{aligned}
\end{eqnarray}
If $m_+$ is the highest weight vector of $M_1$, and $v\in M_2$ is a vector of weight $j$, we have
$Km_+=q^{\ell_1}m_+$ and $K v= q^j v$. Then
\begin{eqnarray}\label{eq:RtR}
R^T_{M_1, M_2}R_{M_1, M_2}(m_+\otimes v) =\sum_{k=0}^\infty\frac{(q-q^{-1})^k q^{ j \ell_1
+k(\ell_1-j-2k)}}{\qint{k}!}F^k m_+\otimes E^k v.
\end{eqnarray}
\subsection{A tensor functor}
We again fix $V=V(1)$, and for any integer $\ell$, let $M$ be either $M(\ell)$ or $V(\ell)$.
We shall adopt the following notation. Recall that $\cC=\{m, v\}$; for any sequence $A=(a_1, a_2, \dots, a_r)$
with $a_j\in\cC$, write
\[
U^A = U^{a_1}\otimes U^{a_2}\otimes \dots \otimes U^{a_r},
\]
where $U^m=M$ and $U^v=V$.
We set $U^\emptyset=\CK_0$ for the empty sequence $\emptyset$.
Recall that there exists a canonical tensor functor from the category of directed coloured ribbon
graphs to the category of finite dimensional representations of any quantum group, see \cite[Theorem 5.1]{RT}.
Adapting that functor to our context yields the following result.
\begin{theorem} \label{thm:RT} Let $V=V(1)$, and for any integer $\ell$ let $M$ be either $M(\ell)$ or $V(\ell)$.
There exists a covariant linear functor
$
\widehat\CF: \RTC\longrightarrow \CO_{int},
$
which is defined by the following properties.
\begin{enumerate}
\item The functor respects the tensor products of $\RTC$ and $\CO_{int}$;
\item $\widehat\CF$ sends
the object $A$ of $\RTC$ to $\widehat\CF(A)=U^A$; and
\item
$\widehat\CF$ maps the generators of the morphism spaces as indicated below.
\[
\begin{aligned}
\begin{picture}(60, 70)(0,0)
\put(0, 0){\line(0, 1){60}}
\put(-10, 40){$a$}
\put(15, 30){$\mapsto \ \id_{U^a}$, }
\end{picture}
&\quad\quad&
\begin{picture}(80, 70)(-30,0)
\qbezier(-35, 60)(-35, 60)(-15, 0)
\qbezier(-15, 60)(-15, 60)(-22, 33)
\qbezier(-35, 0)(-35, 0)(-26, 25)
\put(-15, 40){$b$ }
\put(-40, 40){$c$ }
\put(0, 30){$\mapsto \ \check{R}_{U^b, U^c}$, }
\end{picture}
&\qquad\qquad&
\begin{picture}(80, 70)(-70,0)
\qbezier(-90, 0)(-90, 0)(-70, 60)
\qbezier(-90, 60)(-90, 60)(-83, 33)
\qbezier(-70, 0)(-70, 0)(-78, 26)
\put(-70, 40){$b$ }
\put(-95, 40){$c$ }
\put(-55, 30){$\mapsto \ (\check{R}_{U^b, U^c})^{-1}$, }
\end{picture}\\
\end{aligned}
\]
for all $a, b, c \in \cC$ with $b, c$ not both equal to $m$, and
\[
\begin{aligned}
\begin{picture}(100, 50)(0, 10)
\qbezier(0, 60)(15, -10)(30, 60)
\put(13, 12){$v$}
\put(45, 35){$\mapsto \ \check{C}$,}
\end{picture}\quad
\begin{picture}(130, 60)(-10, -10)
\qbezier(0, 0)(15, 70)(30, 0)
\put(12, 40){$v$}
\put(45, 15){$\mapsto \hat{C}$,}
\end{picture}
\end{aligned}
\]
where the maps $\check{C}: \CK_0\longrightarrow V\otimes V$ and $\hat{C}: V\otimes V \longrightarrow \CK_0$
are defined by \eqref{eq:cup} and \eqref{eq:cap} respectively.
\end{enumerate}
\end{theorem}
\begin{proof}
Let us prove part (1) of the theorem first.
The functor clearly respects the tensor products for objects, since for any objects $A$ and $B$ in $ \RTC$, we have
\[
\widehat\CF(A\otimes B)= U^{(A, B)}= U^A\otimes U^B.
\]
Now we require that $\widehat\CF$ respect tensor products for morphisms, that is, for any two morphisms $D, D'$ in $ \RTC$,
\be\label{eq:rtp}
\widehat\CF(D\otimes D')=\widehat\CF(D)\otimes \widehat\CF(D').
\ee
Equation \eqref{eq:rtp} and property (3) of the statement, which defines the images of the generators of morphisms under
$\widehat\CF$, define the functor uniquely. Thus what remains to be shown is that the functor is well
defined on morphisms, that is, we need to show that the relations in part (3) of Theorem \ref{thm:tensor-cat}
are preserved by $\widehat\CF$.
Clearly $\widehat\CF$ preserves relations (a). It also preserves relations (b) since the $R$-matrices
satisfy the Yang-Baxter equation \eqref{eq:YBE}. It follows from \eqref{eq:straighten}
that the straightening relations (c) are also preserved.
To prove the sliding relations, let us denote
\[
\Psi_+:=\widehat{\CF}\left(
\begin{picture}(80, 30)(-12,15)
\qbezier(0, 45)(0, 45)(22, 0)
\qbezier(0, 0)(10, 13)(12, 15)
\qbezier(17, 21)(35, 50)(60, 0)
\put(-8, 35){$a$}
\put(32, 18){$v$}
\end{picture}
\right),
\qquad \quad
\Psi_+':=\widehat{\CF}\left(
\begin{picture}(80, 30)(-12,15)
\qbezier(0, 0)(25, 50)(45, 20)
\qbezier(49, 15)(50, 15)(60, 0)
\qbezier(60, 45)(60, 45)(38, 0)
\put(62, 35){$a$}
\put(22, 18){$v$}
\end{picture}
\right),
\]
\[
\Psi_-:=\widehat{\CF}\left(
\begin{picture}(80, 30)(-12,15)
\qbezier(0, 0)(30, 60)(60, 0)
\qbezier(0, 45)(0, 45)(10, 22)
\qbezier(15, 15)(15, 15)(22, 0)
\put(-8, 35){$a$}
\put(32, 18){$v$}
\end{picture}\right),
\qquad \quad
\Psi'_-:=\widehat{\CF}\left(
\begin{picture}(80, 30)(-8,15)
\qbezier(0, 0)(30, 60)(60, 0)
\qbezier(60, 45)(60, 45)(50, 22)
\qbezier(45, 15)(45, 15)(38, 0)
\put(62, 35){$a$}
\put(22, 18){$v$}
\end{picture}\right).
\]
Let $v, v'\in V$ be vectors with weights $\ell$ and $\ell'$ respectively, and let $w\in U^a$ be a vector with weight $k$. Then we have
\[
\begin{aligned}
\Psi_+(v\otimes w\otimes v') &= q^{\frac{1}{2} k \ell} (v, v') w + (q-q^{-1}) q^{\frac{1}{2} (k-2)(\ell+2)}(E v, v') F w, \\
\Psi'_+(v\otimes w\otimes v') &= q^{-\frac{1}{2} k \ell'} (v, v') w - (q-q^{-1}) q^{-\frac{1}{2} k\ell'}(v, E v') F w.
\end{aligned}
\]
The invariance of the bilinear form implies that $(v, E v')= -(E v, K v') = -q^{\ell'}(E v, v')$. Also note that
$(v, v')=0$ unless $\ell+\ell'=0$, and $(E v, v')= (v, E v')=0$ unless $\ell=\ell'=-1$. Hence
\[
\Psi_+(v\otimes w\otimes v') = q^{\frac{1}{2} k \ell} (v, v') w + (q-q^{-1}) q^{\frac{1}{2} (k-2)}(E v, v') F w=
\Psi'_+(v\otimes w\otimes v').
\]
Similarly we have
\[
\begin{aligned}
\Psi_-(v\otimes w\otimes v') &
=q^{-\frac{1}{2}k \ell}(v, v') w-(q-q^{-1})q^{-\frac{1}{2}k\ell}(Fv,v')Ew,\\
\Psi'_-(v\otimes w\otimes v') &
=q^{\frac{1}{2}k \ell'}(v, v') w+(q-q^{-1})q^{\frac{1}{2}(k +2)(\ell' -2)}(v,Fv')Ew.
\end{aligned}
\]
In this case, $(Fv,v')=-(v,KFv')=-q^{-1}(v,Fv')$, and $(Fv,v')=(v,Fv')=0$ unless $\ell=\ell'=1$.
We still have $(v, v')=0$ unless $\ell+\ell'=0$. Hence
\[
\Psi_-(v\otimes w\otimes v') = q^{-\frac{1}{2} k \ell} (v, v') w + (q-q^{-1}) q^{-\frac{1}{2} (k+2)}(v, F v') E w=
\Psi'_-(v\otimes w\otimes v').
\]
Now consider the twists. Let
\[
\varphi:=\widehat{\CF}\left(\begin{picture}(50, 40)(-10,80)
\qbezier(0, 100)(-5, 110)(-5, 120)
\qbezier(0, 100)(0, 100)(20, 70)
\qbezier(0, 70)(0, 70)(8, 82)
\qbezier(0, 70)(-5, 60)(-5, 50)
\qbezier(20, 100)(20, 100)(13, 88)
\qbezier(20, 100)(30, 110)(32, 85)
\qbezier(20, 70)(30, 60)(32, 85)
\put(22, 108){$v$}
\end{picture}
\right),
\qquad\quad
\varphi':=\widehat{\CF}\left(\begin{picture}(50, 40)(-12,30)
\qbezier(20, 50)(0, 20)(0, 20)
\qbezier(0, 50)(0, 50)(8, 38)
\qbezier(12, 33)(20, 20)(20, 20)
\qbezier(20, 50)(30, 60)(32, 35)
\qbezier(20, 20)(30, 10)(32, 35)
\qbezier(-5, 0)(-5, 10)(0, 20)
\qbezier(0, 50)(-5, 60)(-5, 70)
\put(23, 57){$v$}
\end{picture}\right),
\]
which are scalar multiples of $\id_V$. By direct computations, one can verify that
\[
\varphi v_1 = - q^{\frac{3}{2}} v_1, \quad \varphi'v_{-1}=-q^{-\frac{3}{2}} v_{-1}.
\]
Hence $\varphi= - q^{\frac{3}{2}}\id_V$, $\varphi'=-q^{-\frac{3}{2}} \id_V$, and $\varphi\circ\varphi'=\id_V$.
Another way to compute this is to use the well known relationship between the $R$-matrix and Drinfeld's central element.
By taking into account the $q$-skew nature of the bilinear form, we immediately obtain
\[
\varphi= - q^{\frac{3}{2}} \id_V, \quad \varphi'= - q^{-\frac{3}{2}} \id_V.
\]
This completes the proof of Theorem \ref{thm:RT}.
\end{proof}
\begin{remark} \label{rem:norm-factors}
Note that one has the freedom of multiplying the $R$-matrices by an invertible
scalar in the definition of the representation of $\CG_r$ in Lemma \ref{lem:braid-rep}.
However, this is disallowed by the sliding relations in the definition of the functor $\widehat{\CF}$.
\end{remark}
\begin{theorem}\label{thm:functor-quot}
The functor $\widehat\CF: \RTC\longrightarrow \CO_{int}$ of Theorem \ref{thm:RT}
factors through $\ATLC(q, q^{\ell+1})$.
\end{theorem}
\begin{proof}
It follows from the property of $\check{R}$ given in Lemma \ref{eq:normal-R} that the functor
$\widehat\CF$ respects the first two skein relations of $\ATLC(q, \Omega)$.
It also respects the free loop removal relation by \eqref{eq:loop-c}.
We now set
\begin{eqnarray}\label{eq:omega-value}
\Omega=q^{\ell+1}.
\end{eqnarray}
We want to show that $\widehat\CF$ preserves the third skein relation of $\ATLC(q, \Omega)$.
We have
\[
\xi:=\widehat\CF\left(
\begin{picture}(40, 50)(-30,50)
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 25){\line(0, 1){35}}
\put(-15, 0){\line(0, 1){18}}
}
\qbezier(-19, 38)(-35, 30)(-15, 22)
\qbezier(-15, 62)(5, 55)(-11, 42)
\qbezier(-15, 62)(-35, 75)(-19, 85)
\qbezier(0, 100)(-6, 90)(-11, 87)
\qbezier(-15, 22)(-5, 15)(0, 0)
\end{picture}
\right) = \check{R}_{V, M}\check{R}_{M, V}.
\]
\vspace{.5mm}
In the case $M=M(\ell)$, if $\ell\ne -1$, we have $M\otimes V= M(\ell+1)\oplus M(\ell-1)$. Hence
$\xi=\check{R}_{V, M}\check{R}_{M, V}$ has two eigenvalues $q^{\chi_\pm}$,
which we can compute by using \eqref{eq:eigen} to obtain
\[
\chi_\pm = \frac{1}{2}(\ell \pm1)(\ell+2\pm 1) - \frac{1}{2}\ell (\ell+2) - \frac{3}{2} =\pm (\ell +1)-1.
\]
Hence $\xi$ satisfies the quadratic relation
\begin{eqnarray}\label{eq:skein-3}
(q\xi-\Omega)(q\xi-\Omega^{-1})=0.
\end{eqnarray}
If $\ell=-1$, we do not have such a decomposition for $M\otimes V$. However, we can directly verify the
skein relation. Since $\check{R}_{V, M}\check{R}_{M, V}$ is a $\U_q(\fsl_2)$-morphism, we only need
to verify this for the two vectors $m_+\otimes v_1$ and $m_+\otimes v_{-1}$, as they generate $M\otimes V$.
It is clear that $m_+\otimes v_1$ is an eigenvector of $R^T_{M, V}R_{M, V}$ with eigenvalue $q^{-1}$.
For the vector $m_+\otimes v_{-1}$, we use \eqref{eq:univ-R} and \eqref{eq:univ-RT} to obtain the following relations.
\[
\begin{aligned}
R^T_{M, V}R_{M, V}(m_+\otimes v_{-1}) &= q m_+\otimes v_{-1} + q^{-1}(q-q^{-1}) F m_+\otimes v_1,\\
(R^T_{M, V}R_{M, V})^2 (m_+\otimes v_{-1})&= (2-q^{-2}) m_+\otimes v_{-1} + 2q^{-2}(q-q^{-1}) F m_+\otimes v_1.
\end{aligned}
\]
Combining these we arrive at $(\check{R}_{V, M}\check{R}_{M, V}-q^{-1})^2(m_+\otimes v_{-1}) =0$.
Hence we have proved that in this case
\[
(q\xi-1)^2=0.
\]
For $M=V(\ell)$, we only need to consider $\ell\ge 0$, since $V(\ell)=M(\ell)$ if $\ell<0$.
We have $V(\ell)\otimes V= V(\ell+1)\oplus V(\ell-1)$ for $\ell>0$ , and $V(0)\otimes V=V$.
It follows from these decompositions that the relation \eqref{eq:skein-3} is also satisfied in this case.
This proves that $\widehat\CF$ preserves the third skein relation for $\ATLC(q, q^{\ell+1})$.
We now verify the tangled loop removal relation for both $M(\ell)$ and $V(\ell)$. We
note that for any linear transformation $\phi$ of $V$,
\[
\hat{C}(\phi\otimes\id_V)\check{C}(1)= -q(v_1 v_{-1}) + (\phi v_{-1}, v_1) = - tr_V(K\phi).
\]
Hence
\[
\Phi:= \widehat\CF\left(\begin{picture}(75, 50)(-40,45)
{
\linethickness{1mm}
\put(-15, 65){\line(0, 1){35}}
\put(-15, 42){\line(0, 1){58}}
\put(-15, 0){\line(0, 1){36}}
}
\qbezier(-19, 68)(-50, 50)(-15, 38)
\qbezier(-11, 68)(-3, 70)(0, 85)
\qbezier(-15, 38)(-5, 38)(0, 20)
\put(20, 20){\line(0, 1){65}}
\qbezier(0, 20)(10, -10 ) (20, 20)
\qbezier(0, 85)(10, 105) (20, 85)
\end{picture}\right)= - tr_V\left( (1\otimes K)\check{R}_{V, M}\check{R}_{M, V}\right).
\]
It follows from \cite[Proposition 1]{ZGB} that $\Phi$ is a scalar multiple of $\id_M$.
To compute the scalar, we consider the action of $\Phi$ on the highest weight vector $m_+$ of $M$. Using \eqref{eq:RtR}, we obtain
\[
\Phi m_+= - tr\left( \begin{pmatrix} q & 0\\ 0 & q^{-1}\end{pmatrix} \begin{pmatrix} q^{\ell} & q^{-1}(q-q^{-1})\\ 0 & q^{-\ell}\end{pmatrix}
\right) m_+=-(\Omega+\Omega^{-1})m_+.
\]
This leads to
$
\Phi= -(\Omega+\Omega^{-1})\id_M,
$
and hence
$\widehat\CF$ respects the tangled loop removal relation.
This completes the proof of Theorem \ref{thm:functor-quot}.
\end{proof}
\subsection{An equivalence of categories}
In this section, we take $V=V(1)$ and $M=M(\ell)$ for $\ell\ge -1$.
Let $\CT$ be the full subcategory of the category $\CO_{int}$ of $\U_q(\fsl_2)$-modules with objects
$M\otimes V^{\otimes r}$ for $r=0, 1, \dots$. Regard $\TLBC(q, q^{\ell+1})$, the Temperley-Lieb
category of type $B$, as a full subcategory of $\ATLC(q, q^{\ell+1})$.
Theorem \ref{thm:RT} enables us to define the following functors.
\begin{definition}
Let $\CF: \ATLC(q, q^{\ell+1})\longrightarrow \CO_{int}$ be the functor defined by the commutative diagram
\[
\xymatrix{
\ATLC(q) \ar@{->>}[d]\ar[r]^{\widehat\CF}& \CO_{int}\\
\ATLC(q, q^{\ell+1}).\ar[ur]_{\CF}&
}
\]
It is clear that $\CF$ sends objects and morphisms in $\TLBC(q, q^{\ell+1})$ to $\CT$. Hence the
restriction of the functor $\CF$ to $\TLBC(q, q^{\ell+1})$ leads to a covariant functor
\[
\CF': \TLBC(q, q^{\ell+1})\longrightarrow \CT.
\]
\end{definition}
\begin{theorem} \label{thm:main} Let $V=V(1)$ and $M=M(\ell)$. Then for all $\ell\ge -1$, the functor
$
\CF': \TLBC(q, q^{\ell+1})\longrightarrow \CT
$
is an equivalence of categories.
\end{theorem}
\begin{proof} We will prove the equivalence of categories by showing that the functor $\CF'$ is essentially surjective and fully faithful.
We set $\Omega=q^{\ell+1}$ in this proof.
The essential surjectivity is clear, since $\CF'(m, v^r) = M\otimes V^{\otimes r}$ for all $r$.
Since $V$ is self dual, $\Hom_\CT(M\otimes V^{\otimes r}, M\otimes V^{\otimes s})\cong
\Hom_\CT(M, M\otimes V^{\otimes (r+s)})$ as vector spaces for all $r$ and $s$. In view of Lemma \ref{lem:hom-iso}, we have
$\Hom_{\TLBC(q, \Omega)}((m, v^r), (m, v^s))$ $\cong$ $\Hom_{\TLBC(q, \Omega)}(m, (m, v^{r+s}))$. Hence in order to prove that
the functor $\CF'$ is fully faithful, it suffices to show that
$\CF'$ defines isomorphisms $\Hom_{\TLBC(q, \Omega)}(m, (m, v^r))\stackrel{\sim}
\longrightarrow \Hom_\CT(M, M\otimes V^r)$ for all $r$.
We shall do this by showing that for all $r$,
\begin{eqnarray}
&\dim\Hom_\CT(M, M\otimes V^{\otimes r}) = \dim\Hom_{\TLBC(q, \Omega)}(m, (m, v^r)), \label{eq:dim-1}\text{ and }\\
&\dim\CF'(\Hom_{\TLBC(q, \Omega)}(m, (m, v^r))) = \dim\Hom_{\TLBC(q, \Omega)}(m, (m, v^r)). \label{eq:dim-2}
\end{eqnarray}
Consider first $\Hom_\CT(M, M\otimes V^{\otimes r})$. We decompose $V^{\otimes r}$
with respect to the joint action of $\U_q(\fsl_2)$ and $\TL_r(q)$ to obtain $V^{\otimes r} =\bigoplus_t V(t)\otimes W_{t}(r)$,
where the direct sum is over all $t$ such that $0\le t\le r$ and $r-t$ is even. Since $\ell\ge -1$, we can apply
part (2) of Lemm \ref{lem:proj} to obtain
\[
\begin{aligned}
\Hom_{\U_q(\fsl_2)}(M, M\otimes V^{\otimes r})
&=\bigoplus_t \Hom_{\U_q(\fsl_2)}(M, M\otimes V(t))\otimes W_{t}(r)\\
&=\bigoplus_t V(t)_0\otimes W_t(r)\\
\end{aligned}
\]
If $r$ is odd, then all $t$ are odd, hence all $V(t)_0=0$.
If $r=2N$ is even,
\[
\begin{aligned}
\Hom_{\U_q(\fsl_2)}(M, M\otimes V^{\otimes 2N})=\bigoplus_{t=0}^N V(2t)_0\otimes W_{2t}(2N)
=\bigoplus_{t=0}^N W_{2t}(2N).
\end{aligned}
\]
In view of Lemma \ref{lem:dim-TLB} and its proof, this establishes equation \eqref{eq:dim-1}.
Next we consider $\CF'(\Hom_{\TLBC(q, \Omega)}(m, (m, v^r)))$.
Applying $\CF'$ to the filtration \eqref{eq:filtr} of $W(2N)$ by $\TL_{2N}(q)$-modules $F_tW(2N)$, and writing
$\CF_iW(2N)=\CF'(F_iW(2N))$, we obtain
\begin{eqnarray}\label{eq:filtr-1}
\CF_NW(2N)\supset \CF_{N-1}W(2N)\supset \dots\supset \CF_1W(2N)\supset \CF_0W(2N)\supset\emptyset.
\end{eqnarray}
This is a filtration of modules for $\CF'(\End^0(2N))$, which by \cite[Thm. 3.5]{LZBMW} is isomorphic to $\TL_r(2N)$.
For any $i$, if the quotient $W'_{2i}(2N):=\frac{\CF_{i}W(2N)}{\CF_{i-1}W(2N)}\ne 0$, then it must be
isomorphic to the cell module $W_{2i}(2N)$ as $\TL_r(2N)$-module. Therefore, if we can show that
$W'_{2i}(2N)\ne 0$ for all $i$, then \eqref{eq:dim-2} follows in view of Lemma \ref{lem:dim-TLB}.
Assume to the contrary that $W'_{2i}(2N)=0$ for some $i$. This happens precisely if, given any
distinguished diagram $D\in F_iW(2N)$ with $i$ thin arcs entangled with the pole, there is an element $D^{red}\in F_{i-1}W(2N)$ such that
\begin{eqnarray}\label{eq:contra-1}
\CF'(D)-\CF'(D^{red})=0.
\end{eqnarray}
Let $D$ be given by the distinguished diagram
in Figure \ref{fig:0-2N}.
We shall show that $\CF'(D)\not\in\CF'(F_{i-1}W(2N)$
\begin{figure}[h]
\begin{picture}(150, 105)(-20,0)
{
\linethickness{1mm}
\put(-15, 25){\line(0, 1){75}}
\put(-15, 0){\line(0, 1){16}}
}
\qbezier(10, 100)(6, 80)(-11, 75)
\qbezier(-2, 100)(-5, 80)(-11, 80)
\qbezier(-19, 74)(-30, 70)(-30, 50)
\qbezier(-15, 22)(-30, 26)(-30, 50)
\qbezier(-19, 80)(-40, 75)(-38, 40)
\qbezier(-15, 18)(-40, 25)(-38, 50)
\qbezier(-15, 18)(50, 5)(50, 100)
\qbezier(-15, 22)(30, 10)(30, 100)
\put(35, 80){...}
\qbezier(70, 100)(80, 45)(90, 100)
\put(95, 80){...}
\qbezier(110, 100)(120, 45)(130, 100)
\end{picture}
\caption{Diagram $D: m\to(m, v^{2N})$}
\label{fig:0-2N}
\end{figure}
\noindent
Take the morphisms
$A: (m, v^{2N})\to (m, v^{2i})$, $I_i: v^i\to v^i$ and $S: (m, v^{3i})\to (m, v^i)$, which are respectively given by
\[
\begin{picture}(150, 65)(-20,0)
\put(-30, 30){$A=$}
{
\linethickness{1mm}
\put(0, 0){\line(0, 1){60}}
}
\put(10, 0){\line(0, 1){60}}
\put(15, 30){...}
\put(15, 15){$2i$}
\put(30, 0){\line(0, 1){60}}
\qbezier(40, 0)(50, 60)(60, 0)
\put(65, 15){...}
\qbezier(80, 0)(90, 60)(100, 0)
\put(105, 0){,}
\end{picture}
\quad
\begin{picture}(80, 65)(-20,0)
\put(-30, 30){$I_i=$}
\put(0, 0){\line(0, 1){60}}
\put(5, 30){...}
\put(8, 15){$i$}
\put(20, 0){\line(0, 1){60}}
\put(25, 0){,}
\end{picture}
\quad
\begin{picture}(110, 65)(-20,0)
\put(-30, 30){$S=$}
{
\linethickness{1mm}
\put(0, 0){\line(0, 1){60}}
}
\put(10, 0){\line(0, 1){60}}
\put(15, 30){...}
\put(18, 15){$i$}
\put(30, 0){\line(0, 1){60}}
\qbezier(40, 0)(70, 80)(100, 0)
\qbezier(60, 0)(70, 60)(80, 0)
\put(45, 5){...}
\put(105, 0){.}
\end{picture}
\]
We first compose $D$ and the related $D^{red}$ with $A$ to obtain $A D$ and $AD^{red}$, then
tensor them with $I_i$ to obtain $AD\otimes I_i$ and $AD^{red}\otimes I_i$ , and finally compose
these morphisms with $S$ to obtain $S(AD\otimes I_i)$ and $S(AD^{red}\otimes I_i)$.
Write $\tilde{D}= \delta^{-N+i}S(AD\otimes I_i)$ and $\tilde{D}^{red}= \delta^{-N+i}S(AD^{red}
\otimes I_i)$; these are both endomorphisms of $(m, v^i)$.
Now if $W'_{2i}(2N)=0$ for some $i$, equation \eqref{eq:contra-1} implies
\begin{eqnarray}\label{eq:contrad}
\CF'(\tilde{D})-\CF'(\tilde{D}^{red})=0.
\end{eqnarray}
The diagram of $\tilde{D}$ is given by Figure \ref{fig:i-i}.
The morphism $\tilde{D}^{red}$ is spanned by diagrams with less than $i$ thin arcs tangled with the thick arc.
\begin{figure}[h]
\begin{picture}(150, 105)(-20,0)
{
\linethickness{1mm}
\put(-15, 25){\line(0, 1){75}}
\put(-15, 0){\line(0, 1){16}}
}
\qbezier(20, 100)(6, 80)(-11, 75)
\qbezier(-2, 100)(-5, 80)(-11, 80)
\qbezier(-19, 74)(-30, 70)(-30, 50)
\qbezier(-15, 22)(-30, 26)(-30, 50)
\qbezier(-19, 80)(-40, 75)(-38, 40)
\qbezier(-15, 18)(-40, 25)(-38, 50)
\qbezier(-15, 18)(-5, 15)(0, 0)
\qbezier(-15, 22)(10, 15)(20, 0)
\put(0, 90){...}
\put(3, 95){$i$}
\end{picture}
\caption{Diagram $\tilde{D}: (m, v^i)\to(m, v^i)$}
\label{fig:i-i}
\end{figure}
Let ${\bf w}=m_+\otimes v_{-1}^{\otimes i}$, where $v_{-1}^{\otimes i}=\underbrace{v_{-1}
\otimes v_{-1}\otimes \dots\otimes v_{-1}}_i$ and let us compare $\CF'(\tilde{D})({\bf w})$ and $\CF'(\tilde{D}^{red})({\bf w})$. By \eqref{eq:RtR}, we have
\[
\CF'(\tilde{D})({\bf w})=\sum_{k=0}^i\frac{(q-q^{-1})^k q^{-i \ell + k(\ell+i-2k)}}{\qint{k}!}F^k m_+\otimes E^k v_{-1}^{\otimes i}.
\]
In particular, the vector $F^i m_+\otimes v_1^{\otimes i}$ appears in $\CF'(\tilde{D})({\bf w})$ with a
nonzero coefficient. This vector is nonzero since $F^i m_+\ne 0$ for all $i$ in the Verma module $M$.
Turning to $\CF'(\tilde{D}^{red})({\bf w})$, we note that ${\bf w}=m_+\otimes v_{-1}^{\otimes i}$
is annihilated by the images under $\CF'$ of all diagrams from $(m, v^i)$ to $(m, v^{i-2})$ which have an arc $U$
as depicted below, since the invariant form on $V(1)$ satisfies $(v_{-1},v_{-1})=0$.
\[
\begin{picture}(150, 50)(-35,0)
{
\linethickness{1mm}
\put(-15, 0){\line(0, 1){40}}
}
\put(0, 0){\line(0, 1){40}}
\put(20, 0){\line(0, 1){40}}
\put(5, 20){...}
\qbezier(40, 0)(50, 50)(60, 0)
\put(80, 0){\line(0, 1){40}}
\put(100, 0){\line(0, 1){40}}
\put(85, 20){...}
\put(105, 0){. }
\end{picture}
\]
Thus among all the diagrams in $\tilde{D}^{red}$, only those shown in Figure \ref{fig:less-i} with $t<i$ could
have nonzero contributions to $\CF'(\tilde{D}^{red})({\bf w})$.
\begin{figure}[h]
\begin{picture}(150, 105)(-60,0)
\put(-65, 50){$\Upsilon_t =$}
{
\linethickness{1mm}
\put(-15, 25){\line(0, 1){75}}
\put(-15, 0){\line(0, 1){16}}
}
\qbezier(20, 100)(6, 80)(-11, 75)
\qbezier(-2, 100)(-5, 80)(-11, 80)
\qbezier(-19, 74)(-30, 70)(-30, 50)
\qbezier(-15, 22)(-30, 26)(-30, 50)
\qbezier(-19, 80)(-40, 75)(-38, 40)
\qbezier(-15, 18)(-40, 25)(-38, 50)
\qbezier(-15, 18)(-5, 15)(0, 0)
\qbezier(-15, 22)(10, 15)(20, 0)
\put(0, 90){...}
\put(5, 95){$t$}
\put(30, 0){\line(0, 1){100}}
\put(40, 90){......}
\put(70, 0){\line(0, 1){100}}
\put(40, 95){$i-t$}
\end{picture}
\caption{A diagram in $\tilde{D}^{red}$}
\label{fig:less-i}
\end{figure}
\noindent
Using \eqref{eq:RtR}, we obtain
\[
\CF'(\Upsilon_t )({\bf w}) =\sum_{k=0}^t \frac{(q-q^{-1})^k q^{-t \ell + k(\ell+t-2k) }}{\qint{k}!}F^k m_+
\otimes E^k v_{-1}^{\otimes t} \otimes v_{-1}^{\otimes (i-t)}.
\]
Note that for all $t<i$, the vector $F^i m_+\otimes v_1^{\otimes i}$ never appears in $\CF'(\Upsilon_t )({\bf w})$ with a non-zero coefficient.
Thus \eqref{eq:contrad} does not hold for any element ${D}^{red}\in F_{i-1}W(2N)$. Hence $W'_{2i}(2N)=W_{2i}(2N)$ for all $i$, and equation \eqref{eq:dim-2} is proved.
We have now shown that $\CF'$ is fully faithful, which completes the proof of Theorem \ref{thm:main}.
\end{proof}
As an immediate consequence, we have the following result.
\begin{corollary} \label{cor:alg-iso} Let $V=V(1)$ and $M=M(\ell)$ with $\ell\ge -1$. Then for each $r=1, 2, \dots$,
there is an isomorphism of associative algebras
\[
\TLB_r(q, \sqrt{-1} q^{\ell+1})\stackrel{\sim}\longrightarrow \End_{\U_q(\fsl_2)}(M\otimes V^{\otimes r}).
\]
\end{corollary}
\begin{proof}
The functor $\CF'$ of Theorem \ref{thm:main} leads to an isomorphism of associative algebras
$\TLBC_r(q, q^{\ell+1})\stackrel{\sim}\longrightarrow \End_{\U_q(\fsl_2)}(M\otimes V^{\otimes r})$ for each $r$, where $\TLBC_r(q, q^{\ell+1})$ is defined by Definition \ref{def:algs}.
By Lemma \ref{lem:TLB-alg-iso}, $\TLBC_r(q, q^{\ell+1})\cong \TLB_r(q, \sqrt{-1}q^{\ell+1})$. This proves the corollary.
\end{proof}
\section{An alternative version of the category $\TLBC$.}\label{sect:TLB-old}
\subsection{The category $\TLBB(q,Q)$}
Let $R$ be a ring and let $q,Q\in R$ be invertible elements. We begin by recalling
the definition of the category $\TLBB(q,Q)$ from \cite{GL03}. The objects of $\TLBB(q,Q)$ are
the integers $t\in\Z_{\geq 0}$. A ($\TLBB$) diagram $D:t\lr s$ is a ``marked Temperley-Lieb diagram'' from $t$ to $s$,
and $\Hom_{\TLBB(q,Q)}(t,s)$ is the free $R$ module with basis the set of $\TLBB$ diagrams: $t\lr s$.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=1.5]
\foreach \x in {2,3,4,5}
\filldraw(\x,0) circle (0.05cm);
\foreach \x in {1,2,3,4,5,6}
\filldraw(\x,2) circle (0.05cm);
\draw (2,0)--(5,2);
\draw (5,0)--(6,2);
\draw node at (1,1) {Left region};
\draw [dashed] (0,0)--(7,0); \draw [dashed] (0,2)--(7,2);
\draw [dashed] (0,0)--(0,2);\draw [dashed] (7,0)--(7,2);
\draw(1,2) .. controls (2,1.2) and (3,1.2) .. (4,2);
\draw(2,2) .. controls (2.2,1.7) and (2.8,1.7) .. (3,2);
\draw(3,0) .. controls (3.2,0.3) and (3.8,0.3) .. (4,0);
\end{tikzpicture}
\end{center}
\caption{}\label{fig-2}
\end{figure}
To define marked diagrams, recall that a $\TL$ diagram $t\lr s$ divides the ``fundamental rectangle'' into regions,
of which there is a unique leftmost one (see Fig. \ref{fig-2}, which is a $\TL$ diagram: $4\lr 6$). A marked diagram is
a $\TL$ diagram in which the boundary arcs of the left region may be marked with dots. Thus the $\TL$
diagram depicted in Fig.1 has $2$ markable arcs, and a corresponding marked diagram is shown in Fig. \ref{fig-3}.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=1.5]
\foreach \x in {2,3,4,5}
\filldraw(\x,0) circle (0.05cm);
\foreach \x in {1,2,3,4,5,6}
\filldraw(\x,2) circle (0.05cm);
\draw (2,0)--(5,2);
\draw (5,0)--(6,2);
\draw node at (1,1) {Left region};
\draw [dashed] (0,0)--(7,0); \draw [dashed] (0,2)--(7,2);
\draw [dashed] (0,0)--(0,2);\draw [dashed] (7,0)--(7,2);
\draw(1,2) .. controls (2,1.2) and (3,1.2) .. (4,2);
\draw(2,2) .. controls (2.2,1.7) and (2.8,1.7) .. (3,2);
\filldraw(2.5,1.4) circle (0.1cm);\filldraw(3,0.66) circle (0.1cm);\filldraw(4,1.33) circle (0.1cm);
\draw(3,0) .. controls (3.2,0.3) and (3.8,0.3) .. (4,0);
\end{tikzpicture}
\end{center}
\caption{}\label{fig-3}
\end{figure}
These marked diagrams are composed via concatenation, just as $\TL$-diagrams, with three rules to bring them
to ``standard form'', which is a diagram with at most one mark on each eligible arc. The rules are (recalling that
for any invertible element $x\in R$, $\delta_x=-(x+x\inv)$:
\be\label{eq:tlbrules}
\begin{aligned}
&\text{(i) If $D$ is a (marked) diagram and $L$ is a loop with no marks then $D\amalg L=\delta_q D$.}\\
&\text{(ii) If, in (i), $L$ is a loop with 1 mark then $D\amalg L=\left(\frac{q}{Q}+\frac{Q}{q}\right) D$.}\\
&\text{(iii) If an arc of a diagram $D$ has more than $1$ mark and $D'$ is obtained from $D$}\\
&\text {by removing one mark from that arc, then $D=\delta_Q D'$.}\\
\end{aligned}
\ee
\begin{definition}\label{def:tlbb}
The category $\TLBB(q,Q)$ has objects $\Z_{\geq 0}$ and morphisms which are $R$-linear combinations of marked diagrams,
subject to the rules \eqref{eq:tlbrules}.
\end{definition}
It is evident that $\Hom_{\TLBB}(s,t)$ has basis consisting of $\TLB$ (marked) diagrams with at most one mark
on each boundary arc of the left region. We shall refer to these as ``marked diagrams''; the next result counts them.
\begin{proposition}\label{prop:tlbdim}
For integers $t,k\geq 0$, the number $b(t,t+2k)$ of marked diagrams $t\lr t+2k$ depends only on $t+k$. If $t+k=m$, then
the number of these is $d(m)=\binom{2m}{m}$.
\end{proposition}
\begin{proof}
Given a marked diagram $t\lr t+2k$, one obtains a diagram $0\lr 2(t+k)$ by rotating the bottom of the diagram through
$180^\circ$ until it becomes part of the top, pulling all the relevant arcs appropriately. This is illustrated below in Fig. \ref{fig-4} for the diagram
in Fig. \ref{fig-3}. (but note that we are applying this construction only to standard diagrams, which have at most one mark on each arc).
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=1.5]
\foreach \x in {1,2,3,4,5,6,7,8,9,10}
\filldraw(\x,2) circle (0.05cm);
\draw(1,2) .. controls (2,1.2) and (3,1.2) .. (4,2);
\draw(2,2) .. controls (2.2,1.7) and (2.8,1.7) .. (3,2);
\draw(6,2) .. controls (6.2,1.7) and (6.8,1.7) .. (7,2);
\draw(8,2) .. controls (8.2,1.7) and (8.8,1.7) .. (9,2);
\draw(5,2) .. controls (6.2,1) and (8.8,1) .. (10,2);
\filldraw(2.5,1.4) circle (0.1cm);\filldraw(6.5,1.33) circle (0.1cm);\filldraw(8.5,1.33) circle (0.1cm);
\end{tikzpicture}
\end{center}
\caption{}\label{fig-4}
\end{figure}
This shows that $b(t,k)=b(0,t+k)$, which proves the first statement. Write $d(m)=b(0,2m)$, and following \cite[\S 4.3]{ILZ1},
we write $d(x)=\sum_{i=0}^\infty d(i)x^i$, where $d(0)=1$. Now if the $2m$ upper dots are numbered $1,2,\dots, 2m$
from left to right any marked diagram $D:0\lr 2m$ will join $1$ to an even numbered dot, say $2i$. For fixed $i$, the number
of such $D$ is $2c(i-1)d(m-i)$, where $c(i)$ is the Catalan number in \cite{ILZ1} (since the arc $(1,2i)$ may be either
marked or unmarked). It follows that
\be\label{eq:rec1}
d(m)=2\sum_{i=1}^m c(i-1)d(m-i).
\ee
Multiplying \eqref{eq:rec1} by $x^m$ and summing, we obtain
\be\label{eq:rec2}
d(x)=\frac{1}{1-2xc(x)},
\ee
where $c(x)=\sum_{i=0}^\infty c(i)x^i=\sum_{i=0}^\infty \frac{1}{i+1}\binom{2i}{i}x^i$.
Now from the above relation, one sees easily that
\be\label{eq:rec3}
\frac{\partial}{\partial x}(xc(x))=xc'(x)+c(x)=\sum_{n=0}^\infty \binom{2n}{n}x^n.
\ee
Now differentiating the relation $xc(x)^2=c(x)-1$, we obtain
\[
(1-2xc(x))\inv=d(x)=\frac{c'(x)}{c(x)^2}=\frac{\partial}{\partial x}(xc(x)-1),
\]
and the result follows from \eqref{eq:rec3}.
\end{proof}
\subsection{Generators and cellular structure} \label{ss:tlbb}
The usual Temperley-Lieb category $\TLC(q)$ (see Definition \ref{def:tl(b)}) is a subcategory of $\TLBB(q,Q)$, with the same objects, and morphisms
which are $R$-linear combinations of unmarked diagrams. Thus there is a faithful functor
\[
\TLC(q)\lr\TLBB(q,Q)
\]
as well as a ``tensor product'' functor
\be\label{eq:tp1}
\TLBB(q,Q)\times \TLC(q)\lr\TLBB(q,Q),
\ee
given by juxtaposing diagrams. Note that the functor \eqref{eq:tp1} restricts to the usual tensor product on $\TLC(q)$.
The category $\TLC(q)$ is generated, under composition and tensor product by the morphisms $A,U,I$ depicted in
Fig. \ref{fig-5} below, subject to the obvious relations.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=1.5]
\foreach \x in {1,2,7,9}
\filldraw(\x,0) circle (0.05cm);
\foreach \x in {4,5,7,9}
\filldraw(\x,2) circle (0.05cm);
\draw (7,0)--(7,2);
\draw (9,0)--(9,2);
\draw node at (3,1) {;};\draw node at (6,1) {;};\draw node at (8,1) {;};
\filldraw(9,1) circle (0.1cm);
\draw node at (1.5,-0.5) {A};\draw node at (4.5,-0.5) {U};\draw node at (7,-0.5) {I};\draw node at (9,-0.5) {$C_0$};
\draw(4,2) .. controls (4.2,0) and (4.8,0) .. (5,2);
\draw(1,0) .. controls (1.2,1.8) and (1.8,1.8) .. (2,0);
\end{tikzpicture}
\end{center}
\caption{}\label{fig-5}
\end{figure}
The category $\TLBB(q,Q)$ is generated by the generators $A,U,I$ of $\TLC$, with $C_0$ added as shown. Evidently
it follows from \eqref{eq:tlbrules}(iii) that $C_0$ satisfies $C_0^2=\delta_Q C_0$
and from \eqref{eq:tlbrules}(ii) that $A(C_0\ot I)U=\frac{q}{Q}+\frac{Q}{q}$. For $n=1,2,\dots$, the algebra $\Hom_{\TLBB(q,Q)}(n,n)$
is the Temperley-Lieb algebra $\TLBB_n(q,Q)$ of type $B_n$. It has a cellular structure described as follows.
Given $n\in\Z_{>0}$, define $\Lambda_B(n)=\{t\in\Z\mid |t|\leq n\text{ and }t\equiv n\text{(mod }2)\}$. For $t\in\Lambda_B(n)$
define $M(t)$ as the free $R$-module with basis the monic diagrams $D:t\lr n$ in which no through string is marked.
For each $t\in\Lambda_B (n)$, there is an injective map $\beta_t:M(t)\times M(t)\lr \TLBB_n(q,Q)$ given by
\be\label{eq:cell}
\beta_t(D_1,D_2)=
\begin{cases}
D_2^*D_1\text{ if }t\geq 0\\
D_2^*(C_0\ot I^{\ot(|t|-1)})D_1\text{ if }t<0,\\
\end{cases}
\ee
where $D^*$ is the diagram obtained from $D$ by reflection in a horizontal.
The next result is straightforward.
\begin{proposition}\label{prop:tlbcell}
Maintain the above notation.
\begin{enumerate}
\item Let $C:=\amalg_{t\in\Lambda_B(n)}\beta_t:\amalg_{t\in\Lambda_B(n)}M(t)\times M(t)\lr \TLBB_n(q,Q)$. The image
of $C$ is a basis of $\TLBB_n(q,Q)$. Write $\beta_t(S,T)=C_{S,T}^t$ for the basis elements ($S,T\in M(t)$).
\item The basis $C^t_{S,T}$ ($t\in\Lambda_B(n)$, $S,T\in M(t)$) is a cellular basis of $\TLBB_n(q,Q)$.
\end{enumerate}
\end{proposition}
\begin{remark}
An analogous result for Temperley-Lieb algebras of type $D$ is proved in \cite{LS}
\end{remark}
\subsection{Cell modules for $\TLBB_n(q,Q)$} We give a description of the cell modules corresponding to the cellular structure
given in Proposition \ref{prop:tlbcell} and compute their dimension, although this is implicit in the results of \cite{GL03}.
The cell module $W_t(n)$ ($t\in\Lambda(n)$) has basis the set $M(t)$, with $\TLBB_n(q,Q)$-action defined in the usual way
by multiplication of diagrams.
Let $u(t,k)=\rank(W_t(|t|+2k))$. We have seen (Proposition \ref{prop:tlbdim}) that $u(0,k)=\binom{2k}{k}$. This is the case $t=0$
of the following result.
\begin{proposition}\label{prop:dimwt}
We have
\[
\dim W_t(|t|+2k)=u(t,k)=\binom{|t|+2k}{k}.
\]
\end{proposition}
\begin{proof}
It clearly suffices to consider the case $t\geq 0$. We prove the result by induction on the pair $t,k$,
the result being known for $t=0$ (Proposition \ref{prop:tlbdim}), while for $k=0$, clearly $u(t,0)=1=\binom{|t|}{0}$.
Now the same argument
as in \cite[Prop. 5.2]{ILZ1}, involving rotation of the bottom row of a monic diagram $|t|\lr |t|+2k$ through $180^\circ$
to obtain a diagram $0\lr 2|t|+2k$, shows that we have the following recursion for $u(t,k)$. Assume $t,k\geq 1$. Then
\be\label{eq:urec}
u(t,k)=u(t-1,k)+u(t+1, k-1).
\ee
Hence by induction, $u(t,k)=\binom{t-1+2k}{k}+\binom{t+1+2(k-1)}{k-1}=\binom{t+2k}{k}$.
\end{proof}
\subsection{An equivalence of categories} We have defined two ``Temperley-Lieb categories of type $B$'', viz. the category
$\TLBC(q,\Omega)$ of Definition \ref{def:tl(b)} and the category $\TLBB(q,Q)$ of Definition \ref{def:tlbb}.
Both categories contain the finite \tl category $\TLC(q)$ as a subcategory. In the case of $\TLBB(q,\Omega)$ this is realised as in
\S\ref{ss:tlbb}. In the case of $\TLBC(q,\Omega)$ (cf. Definition \ref{def:tl(b)}) $\TLC(q)$ may be thought of as having the
same objects $\{(m,v^r)\}, r=0,1,\dots$ as $\TLBC(q,\Omega)$, but where the morphisms are linear combinations of tangles which are
not entwined with the pole.
Our next objective is to prove the following result.
\begin{theorem}\label{thm:tlbequ}
Let $R$ be an integral domain with invertible elements $q,Q$ and $\Omega$ and an element $\sqrt{-1}$ such that
$\is^2=-1$. Then there is an equivalence of categories $\CM:\TLBB(q,Q)\lr \TLBC(q,\Omega)$ which takes
the object $r\in\Z$ to $(m,v^r)$, is the identity on $\TLC(q)$, and respects the tensor product, if and only if
$\Omega=\pm (\is Q)^{\pm 1}$. In this case we have $\cM(C_0)=\sqrt{-1}qL-QI$.
\end{theorem}
Note that the stated conditions on $\CM$ imply that for diagrams
$D\in\TLBB(q,Q)$ and $D'\in\TLC(q)$, we have
\be\label{eq:resp}
\CM(D\ot D')=\CM(D)\ot D'.
\ee
\begin{proof}[Proof of Theorem \ref{thm:tlbequ}]
We shall define $\CM$ on the generators $A,U,I$ and $C_0$, the effect of $\CM$ on objects having been given.
Since $\CM$ is to be the identity functor on the subcategory $\TLC(q)$, evidently we must have $\CM(A)=A$,
$\CM(U)=U$ and $\CM(I)=I$, where on the left side of these equations $A,U$ and $I$ are as in Fig. \ref{fig-5} of this section,
while on the right side they are as defined in \S\ref{sss:tp}.
It remains only to define $\CM(C_0)$. This is a morphism in $\End_{\TLBC(q,\Omega)}(m,v)$, and since this space
has basis $I,L$, it follows that
\be\label{eq:b1}
\CM(C_0)=aL+bI,
\ee
for $a,b\in R$. We shall determine constraints on $a,b$. First, observe that it follows
by applying $A(-\ot I)U$ to both sides of \eqref{eq:b1} that
\[
\CM(A(C_0\ot I)U)=aA(L\ot I)U+bA(I\ot I)U,
\]
whence using \eqref{eq:loop} and \eqref{eq:tlbrules}(ii) it follows that
\be\label{eq:b2}
\kappa(= \frac{q}{Q}+\frac{Q}{q})=a\delta_\Omega+b\delta_q,
\ee
where, for any invertible $x\in R$, $\delta_x=-(x+x\inv)$.
Next, we square both sides of \eqref{eq:b1} using the relations \eqref{eq:skein} and \eqref{eq:tlbrules}(iii).
One obtains
\be\label{eq:b3}
(2ab-q\inv\delta_\Omega a^2)L+(b^2-q^{-2}a^2)=a\delta_Q L+b\delta_Q I,
\ee
and equating the coefficients of $L$ and $I$ respectively, we obtain
\be\label{eq:b4}
2ab-q\inv\delta_\Omega a^2=a\delta_Q,
\ee
and
\be\label{eq:b5}
b^2-q^{-2}a^2=b\delta_Q.
\ee
Moreover, since evidently $a\neq 0$, we may divide \eqref{eq:b4} by $a$ to obtain
\be\label{eq:b6}
2b-q^{-1}\delta_\Omega a=\delta_Q.
\ee
It therefore remains to solve equations \eqref{eq:b2}, \eqref{eq:b5} and \eqref{eq:b6} for $a$ and $b$.
It is straightforward to show that \eqref{eq:b6} and \eqref{eq:b2} imply that
\be\label{eq:b7}
b=-Q.
\ee
It now follows easily from \eqref{eq:b2} that
\be\label{eq:b8}
a=\frac{q(Q\inv-Q)}{\delta_\Omega}.
\ee
Now the values of $a,b$ in \eqref{eq:b7} and \eqref{eq:b8} are easily shown to satisfy \eqref{eq:b2} and \eqref{eq:b6}.
However they satisfy \eqref{eq:b5} if and only if $\delta_\Omega^2=-(Q\inv-Q)^2$, and this holds if and only if
\begin{eqnarray}\label{eq:Omega-Q}
\Omega=\pm \is Q^{\pm 1}.
\end{eqnarray}
Substituting this into \eqref{eq:b8}, we obtain
\[
a=\pm\sqrt{-1} q.
\]
It is now easily checked that the defining relations among the generators $A,U,I$ and $C_0$, which are those involving only
$A,U$ and $I$, as well as those in \eqref{eq:tlbrules}, are respected by $\CM$ and the theorem is proved.
\end{proof}
The algebra $\TLBB_r(q,Q)$, defined above as the algebra of endomorphisms of $r$ in the category $\TLBB(q,Q)$
is generated \cite[(5.7)]{GL03} by the elements $c_i:=I^{\ot(i-1)}\ot(U\circ A)\ot I^{\ot(r-i-1)}$ ($i=1,2,\dots,r-1$) and
$c_0:=C_0\ot I^{\ot(r-1)}$, subject to the relations set out in \cite[Prop. (5.3)]{GL03}. Likewise, the endomorphism algebra
$\TLBC_r(q,\Omega)$ of the object $(m,v^r)$ in $\TLBC(q,\Omega)$ has generators $C_i$, $i=1,\dots,r-1$, defined in analogy with
the $c_i$ using the elements $I,A$ and $U$ of \S\ref{sss:tp}, as well as the element $L\ot I^{\ot(r-1)}$, which we refer to as $L\in\TLBC_r(q,\Omega)$.
\begin{corollary}\label{cor:tlb}
For any integer $r>0$, there is an isomorphism of algebras
$$
\TLBB_r(q,Q)\lr \TLBC_r(q,\sqrt{-1} Q\inv),
$$
which, in the notation explained above,
takes the generators $c_i$ to $C_i$ ($i=1,\dots,r-1$) and takes $c_0$ to $-\sqrt{-1}qL-Q I^{\ot r}$,
where $\sqrt{-1}$ is a fixed square root of $-1$.
\end{corollary}
\begin{remark}\label{rem:tlb-algs}
It follows from Corollary \ref{cor:tlb} and Lemma \ref{lem:TLB-alg-iso} that there is an isomorphism of algebras
\[\TLBB_r(q,Q)\overset{\sim}{\lr}\TLB_r(q,Q).
\]
We mention also that evidently $\TLB_r(q,Q)\cong \TLB_r(q,-Q)$ via the isomorphism defined by $c_i\mapsto c_i$
($i=1,\dots,r-1$), $x_1\mapsto -x_1$. This accounts for the ambiguity in $\sqrt{-1}$.
\end{remark}
\subsection{Semisimplicity}
It is apparent that analysis of the algebras $\TLBB_n(q,Q)$ may be approached through their cellular structure
outlined above.
This makes it possible to analyse the representations $M(\ell)\ot V(1)^{\ot r}$. In this section we
determine precisely when $\End_{\U_q(\fsl_2)}(M(\ell)\ot V(1)^{\ot r})$ is semisimple. For ease of exposition, we assume
throughout this section that $Q$ and $\Omega$ are both in the field $\CK_0$ and we have chosen a fixed square root $\sqrt{-1}\in\CK_0$.
\subsubsection{Semisimplicity of $\TLBB(q,\Omega)$} This may be approached as in \cite{CGM} for positive characteristic.
However here we shall use the approach of \cite{GL98,GL03} which relate $\TLBB_r(q, Q)$ to the (unextended) affine Temperley-Lieb algebra
$T^a_r(q)$ as in \cite{GL03}, as well as the complete analysis of its cell modules $W_{t,z}(r)$ given in \cite{GL98}.
The results we quote as background may all be found in \cite[\S\S6,10]{GL03}.
For each integer $r\geq 0$ define the following sets of parameters:
\be\label{eq:params}
\begin{aligned}
\Lambda(r)&=\{t\in\Z\mid 0\leq t\leq r \text{ and }r-t\in 2\Z\}\\
\Lambda_B(r)&=\{t\in\Z\mid |t|\in\Lambda(r)\}\\
\Lambda^a(r)&=\{(t,z)\in\Z\times \CK_0^*\mid t\in\Lambda(r)\}/\sim,\\
\end{aligned}
\ee
where in the third line, we declare $(0,z)\sim(0,z\inv)$.
The sets $\Lambda(r)$ and $\Lambda_B(r)$ are posets, with $\Lambda(r)$ being ordered in the obvious way, $\Lambda_B(r)$ ordered according
to $|t|$, with $|t|\geq t$. The set $\Lambda^a(r)$ indexes the cell modules of $T^a_r(q)$, among which all homomorphisms are known.
Any cellular algebra is semisimple if and only if there are no non-trivial homomorphisms among its cell modules \cite{GL96}. The key to the semisimplicity
of $\TLBB_r(q,Q)$ is therefore the following result. For the notation, the reader is referred to \cite[\S 5]{GL03}.
\begin{theorem}\label{thm:pullback}\cite[Cor (5.11), Thm. (6.15)]{GL03}
\begin{enumerate}
\item For $1\leq i\leq r-1$, let $t_i=c_i+q\in\TLBB_r(q,Q)$, and let $t_0=c_0+Q$. There is a surjective homomorphism $g:T^a_r(q)\lr\TLBB_r(q,Q)$
defined by $g(f_i)=c_i$ for $i=1,\dots,r-1$, and $g(\tau)=\sqrt{-1}q^{\half(n-2)}t_0t_1\dots t_{r-1}$, where $\tau$ is the ``twist'' diagram in $T^a_r(q)$.
\item Denoting by $g^*(M)$ the pullback to $T^a_r(q)$ of a $\TLBB_r(q,Q)$-module $M$, we have, for $t\in\Lambda_B(r)$,
\[
g^*(W_t(r))\cong W_{|t|,z_t^{\varepsilon_t}}(r),
\]
where $\varepsilon_t=\frac{t}{|t|}=\pm 1$ and $z_t=(-1)^{t+\half}q^{-\frac{t}{2}}Q\inv$.
\end{enumerate}
\end{theorem}
Since all homomorphisms among the modules $W_{t,z}$ are known, Theorem \ref{thm:pullback} may be used to determine
whether $\TLBB_r(q,Q)$ is semisimple, because $\TLBB$-homomorphisms among the $W_t(r)$ are precisely
$T^a$-homomorphisms among the lifts. We begin by explaining when we have a non-trivial homomorphism between two
cell modules $W_{t,z}$.
Define a preorder on $\Lambda^a(r)$ as follows. Say that $(t,z)\prec (s,y)$ ($(t,z),(s,y)\in\Lambda^a(r)$) if for some $\ve=\pm 1$
we have
\be\label{eq:preorder}
\begin{aligned}
s=t+2m &\text { for some }m>0\text{ and }\\
y&=q^{-\ve m}z\text{ and }\\
z^2&=q^{\ve s}.\\
\end{aligned}
\ee
A short calculation using the equations \eqref{eq:preorder} reveals that there is a non-zero homomorphism of cell modules
$W_s(r)\lr W_t(r)$ for $\TLBB_r(q,Q)$ ($t,s\in\Lambda_B(r)$) if and only if either $W_t(r)\cong W_{-t}(r)$ (see Corollary
\ref{cor:cellpm} for some $t>0$ or if the following conditions hold:
\be\label{eq:homs}
\begin{aligned}
(i) \;&\text{$\exists t,s\in\Lambda(r)$ such that }s=t+2m>t\geq 0, \text{ and }Q=\sqrt{-1}q^{-(t+m)};\\
(ii) \;&\text{$\exists t< 0$, }s>0\in\Lambda_B(r), \text{ such that }t=-2m,\;\;s=4m\text{ and }Q=\sqrt{-1}q^{m};\\
(iii) \;\;&\text{$\exists t< 0$, }s<0\in\Lambda_B(r), \text{ such that }|s|= |t|+2m>|t|\text{ and }Q=\sqrt{-1}q^{-m}.\\
\end{aligned}
\ee
\begin{corollary}\label{cor:cellpm}
With the above notation, for each $t>0$, there is a non-trivial homomorphism $:W_t(r)\lr W_{-t}(r)$
if and only if $W_t(r)\cong W_{-t}(r)$.
Moreover this condition is satisfied if and only if $Q=\sqrt{-1}$.
\end{corollary}
\begin{proof}
It follows from Theorem \ref{thm:pullback} that there is a homomorphism $:W_t(r)\lr W_{-t}(r)$ if and only
if there is a non-trivial homomorphism of $T^a_r(q)$-modules $:W_{t,z_t}(r)\lr W_{t,z_{-t}\inv}(r)$. But this happens
if and only if $W_{t,z_t}(r)\cong W_{t,z_{-t}\inv}(r)$, whence the first statement.
Moreover, again by the above statement, $W_t(r)\cong W_{-t}(r)$ if and only if $z_t=z_{-t}\inv$.
Using the value of $z_t$ given in Theorem \ref{thm:pullback} (2), one sees easily that this happens
if and only if $Q^2=-1$.
\end{proof}
\subsection{Semisimplicity and Schur-Weyl duality} It is evidently a consequence of Theorem \ref{thm:main}, Corollary \ref{cor:alg-iso}
and Corollary \ref{cor:tlb} that:
\begin{proposition}\label{prop:end-tlb}
Let $\ell\geq -1$ be an integer. For each integer $r\geq 1$, we have an isomorphism of associative algebras
\[
\End_{\U_q(\fsl_2)}(M(\ell)\ot V(1)^{\ot r})\overset{\sim}{\lr}\TLBB_r(q,\sqrt{-1}q^{-(\ell+1)}).
\]
\end{proposition}
Our final result uses the cellular structure to give a precise criterion for the semisimplicity of the endomorphism algebra,
which may also be deduced from results in \cite{BS3}.
\begin{theorem}\label{thm:ss} Assume that $q$ is not a root of unity in $\CK_0$.
The endomorphism algebra $\End_{\U_q(\fsl_2)}(M(\ell)\ot V(1)^{\ot r})$ is non-semisimple for all $r$ if $\ell=-1$.
For $\ell\geq 0$, it is semisimple if and only if $r\leq\ell+1$.
\end{theorem}
\begin{proof}
When $\ell=-1$, $Q=\sqrt{-1}$. Hence by Corollary \ref{cor:cellpm}, there are coincidences among the
cell modules of $\TLBB_r(q,Q)$, whence the endomorphism algebra is neither semisimple nor quasi-hereditary.
This proves the first statement.
Now assume $\ell>-1$. We apply the criteria in \eqref{eq:homs} in the case where $Q=\sqrt{-1}q^{-(\ell+1)}$.
For the criterion (i) to apply, we require $t+m=\ell+1>0$; (ii) cannot apply in any case, while for
(iii) we require $m=\ell+1>0$.
Now in case (i) we require $t+2m=\ell+1+m\leq r$ for some $m>0$, so $r\geq \ell+2$.
In case (iii), we require $m=\ell+1$ and $|t|+2m\leq r$, i.e. $r> 2(\ell+1)\geq \ell+3$.
This shows that $\End_{\U_q(\fsl_2)}(M(\ell)\ot V(1)^{\ot r})$ is semisimple for $r\geq\ell+1$.
Consider now the case $r=\ell+2$ (where $\ell\geq 0$). We show that there is always a non-trivial homomorphism
$:W_{\ell+2}(\ell+2)\lr W_{(\ell)}(\ell+2)$. For this, observe that in the notation above,
$z_\ell=(-1)^{\ell+\frac{3}{2}}q^{\frac{\ell}{2}+1}\sqrt{-1}$ and
$z_{\ell+2}=(-1)^{\ell+\frac{3}{2}}q^{\frac{\ell}{2}+1}\sqrt{-1}=q\inv z_\ell$.
Now take $t=\ell,m=1$ and $\ve=1$ in \eqref{eq:preorder}. one concludes that $(\ell,z_\ell)\prec(\ell+2,z_{\ell+2})$.
This proves that $\End_{\U_q(\fsl_2)}(M(\ell)\ot V(1)^{\ot r})$ is not semisimple for $r\geq \ell +2$ and $r\equiv\ell\text{(mod }2)$.
A similar argument applies for $r\geq\ell+2$ with $r\equiv\ell+1\text{(mod }2)$, and the proof is complete.
\end{proof} | 20,909 |
Training in the gym is a great way to keep fit and healthy, both physically and mentally, but occasionally it is a good idea to test yourself with a physical fitness challenge.
Whether it is running a race, cycling for charity, joining a team and playing in a league, doing a muddy adventure challenge or participating in a workplace challenge such as reaching step targets, putting your fitness to the test in a competitive environment is good for a number of reasons.
Motivation matters
The first reason is that it motivates you to train harder and smarter. If you know that your sessions on the treadmill are going to help you run a good 10k or a marathon, then you are more likely to put in the effort when you train. If you enter a workplace challenge to do more steps in a week, it is amazing how much more attractive taking the stairs rather than the lift will seem. If you know that lifting weights will help you sprint faster on the football pitch, then you will push out a few more pounds in the gym.
Get competitive
Competition enhances teamwork, communication and social skills. This is a pretty obvious point for a team sport – if you are all working towards a common goal, you will find that your ability to work as part of a team and communicate with your teammates improves – but it is also true of competing against others. If you have ever taken part in a sportive or triathlon, you will know that, even though it is ultimately about crossing the finishing line first, people are still happy to offer advice, encouragement and help.
Good for the mind
Competition also aids mental well-being. The boost in self-esteem that you get when you finish a challenge can be immense. A friend recently completed the Yorkshire Three Peaks within the 12-hour time limit. He wasn't athletic in the traditional sense of the word and had never really taken part in sporting activities. However, the immense pride he felt as he crossed the finish line with just six minutes to spare was obvious to everyone.
Push your limits
Accepting a challenge also acts as a stimulus. It puts you outside the normal parameters of your existence, so you might need to learn new skills or push yourself to new limits. This is not always an easy thing to do, so it is also a way of learning more about yourself and your levels of determination. How close to the edge you can push yourself physically will often translate across to other areas of your life.
These are just a few of the many benefits of accepting a challenge. Here are three ideas for physical fitness challenges that you can take part in around the Cambridge area. Come on, release the adventurer within you and give real meaning to your workout.
Cycling
The London to Cambridge Bike Ride Sunday 3 July 2016
Starting from Pickett’s Lock in north London, the ride winds through beautiful countryside to the finish at Midsummer Common, Cambridge, where you’ll be met with music, refreshments, beer tent and massage.
Running
Cambridge Park Run every Saturday at 9am.
The 5000m (5K) course is in Milton Country Park and runs entirely on gravel paths. The organisers ask that you take public transport, cycle or walk to the event if possible.
Adventure racing
Bear Grylls Survival Race, Wimpole Hall, Saturday 20 August 2016
5k and 10k options.
Make your way through the 5k course with 20+ obstacles and 2+ BG Survival Challenges or the 10k course with 35+ natural and man-made obstacles.
Battle the elements of the desert, arctic, and jungle through the BG Survival Challenges – testing your ability to endure a range of real-world survival scenarios.
This is the perfect course for those who may be new to the survival world, but want to release their inner Bear! Whether you are running as a lone soldier or with a team, the Survival Race will leave you tired, but with a big sense of accomplishment that is well deserved. | 34,153 |
\begin{document}
\begin{center}
{\bf Co-Toeplitz Operators and
\\
their Associated Quantization }
\vskip 0.3cm
\noindent
Stephen Bruce Sontz
\\
Centro de Investigaci\'on en Matematicas, A.C.
(CIMAT)
\\
Guanajuato, Mexico
\\
email: [email protected]
\end{center}
\centerline{\bf Abstract}
\vskip 0.4cm
\noindent
We define {\em co-Toeplitz operators},
a new class of Hilbert space
operators, in order to define
a co-Toeplitz quantization
scheme that is dual
to the Toeplitz quantization scheme
introduced by the author in the setting of symbols that
come from a possibly non-commutative algebra
with unit.
In the present dual setting
the symbols come from a possibly non-co-commutative
co-algebra with co-unit.
However, this co-Toeplitz quantization is a usual
quantization scheme in the sense that to
each symbol we assign a densely defined linear operator
acting in a fixed Hilbert space.
Creation and annihilation operators
are also introduced
as certain types of co-Toeplitz operators, and then
their commutation relations provide the way for
introducing Planck's constant into this theory.
The domain of the co-Toeplitz
quantization is then extended as well
to a set
of {\em co-symbols}, which are the linear functionals
defined on the co-algebra.
A detailed example based on the quantum group
(and hence co-algebra)
$SU_q(2)$ as symbol space is presented.
\vskip 0.4cm \noindent
\textbf{Keywords:} co-Toeplitz operator,
co-Toeplitz quantization,
creation and annihilation operators,
second quantization.
\section{Introduction}
\label{introduction-section}
In a series of recent papers the author has introduced
a theory of Toeplitz operators having symbols
in a not necessarily commutative algebra with
a {\em $*$-operation} (also called a
{\em conjugation}).
See~\cite{sbs4} for the general theory and
\cite{sbs1}, \cite{sbs2} and \cite{sbs3} for
various examples of that theory.
The associated Toeplitz quantization
is also described in those papers.
See~\cite{BC} for Toeplitz operators in
Segal-Bargmann analysis, which was
my original interest in these topics.
Also see~\cite{mirek} for a quite recent review of
Berezin-Toeplitz operators and some related topics,
including Toeplitz operators.
Finally, see~\cite{BandS}
for a more general viewpoint of Toeplitz
operators in analysis, including
Banach space applications.
There are at least three aspects of
the theory in \cite{sbs4} that make it
relevant to quantum physics.
First, the Toeplitz operators are densely defined
linear operators, all acting in
the same Hilbert space, and so the
self-adjoint extensions of the symmetric
Toeplitz operators can be interpreted as
being physical observables.
(A simple sufficient condition is given in order
for a Toeplitz operator to be symmetric).
Second, there are creation and
annihilation operators that
are defined as certain
types of Toeplitz operators.
Third, the non-zero commutation relations among the
creation and annihilation operators allow the introduction of
Planck's constant $\hbar$ into the theory.
In this paper we introduce co-Toeplitz operators in order to study the
associated dual quantization scheme.
This opens up a new area in the well established
theory of operators acting in Hilbert space as
well as providing a way to quantize new types of
`symbols' in a co-algebra.
The most fundamental (and dual) property of the co-Toeplitz operators is that their
symbols lie in a co-algebra rather than in an algebra as is the case for Toeplitz operators.
A related space of `co-symbols'
and its quantization are
introduced as well.
This co-Toeplitz quantization is also
relevant to quantum physics, since it
has the same three
aspects as already mentioned in the
Toeplitz setting.
Since the co-algebra can be non-co-commutative,
the co-Toeplitz quantization is a generalized
{\em second quantization}, that is, it produces
linear operators from symbols coming from an
algebraic structure that can lack
the appropriate commutativity,
which for historical reasons in the case
of co-algebras is called {\em co-commutativity}.
In this regard it is worthwhile to note that
P.~Dirac was famously known for saying that
the essential property of quantum theory
is that the observables do not commute.
So the lack of the appropriate commutativity of
a co-algebra makes it into a quantum object
which the co-Toeplitz quantization then
quantizes.
In this sense we do have a type of second quantization.
Some words are in order to explain
the meaning of a {\em quantization} or a
{\em quantization scheme}.
I use these two expressions interchangeably.
And I do not wish to propose a rigorous mathematical
definition.
The basic idea is captured in the catch-phrase
``operators instead of functions''.
By ``operators'' I mean linear, densely defined
operators acting in a Hilbert space,
possibly separable.
This is a quite conventional interpretation.
But by ``functions'' I merely mean elements in
some vector space with some additional
algebraic structure, such as an algebra or
a co-algebra.
This is a far cry from the standard definition
of a function, though that is included as
a special case.
The properties of the quantization mapping
that sends ``functions'' to operators are
left deliberately vague.
Due to the novelty of the material of this paper,
much of it is devoted to definitions and their
motivation, and it contains fewer theorems
than a paper of this size usually would.
Some possibilities are presented in the Concluding
Remarks for research leading to more
theorems.
However, even the definitions may well be
changed and refined as more examples of
co-Toeplitz operators become available.
The paper is organized as follows.
In Section~\ref{toeplitz-quantization-section}
we review the known, general Toeplitz quantization
scheme for algebras.
In Section~\ref{co-toeplitz-quantization-section}
we present the
dual co-Toeplitz quantization scheme.
We discuss the role of the co-unit of the
co-algebra in co-Toeplitz quantization in
Section~\ref{co-symbols-section} and then show how
that motivates an extension of this quantization
scheme using {\em co-symbols} in the dual
of the co-algebra.
The duality between Toeplitz
and co-Toeplitz
operators is not as symmetric as
one might have expected.
This is presented in Section~\ref{duality-section}.
Adjoints of the co-Toeplitz operators are studied
in Section~\ref{adjoint-section}.
Next the creation and annihilation operators
are defined in terms of co-Toeplitz operators in
Section~\ref{ann-creation-section}, and then
the canonical commutation relations among these
operators are defined in Section~\ref{ccr-section}
in algebraic terms.
At this point Planck's constant $\hbar$
is introduced into the theory as well as the
associated semi-classical
algebras, for which $ \hbar > 0 $,
and the classical algebra,
for which $ \hbar = 0 $.
We continue in Section~\ref{example-section}
with an example of this new quantization scheme based
on the quantum group (and hence co-algebra)
$SU_q(2)$ as symbol space.
A Toeplitz quantization of $SU_q(2)$
has already been presented in \cite{sbs5}
using instead its structure
as an algebra, but with the same sub-algebra of
`holomorphic' elements.
Finally, we conclude
in Section~\ref{concluding-remarks-section}
with remarks about possible further
developments and alternatives of this theory.
We only consider vector spaces over the field of complex numbers.
We use the standard notations $\mathbb{N}$ for the non-negative integers,
$\mathbb{Z}$ for all the integers, $\mathbb{R}$ for the real numbers and
$\mathbb{C}$ for the complex numbers.
For $ \alpha \in \mathbb{C} $ we let
$ \alpha^{*} $ denote its complex conjugate.
\section{The Toeplitz quantization}
\label{toeplitz-quantization-section}
We will introduce the definition of a co-Toeplitz quantization
using the Toeplitz quantization as a guide and motivation.
Hence, we start with a review in this section
of the already known theory of Toeplitz quantization in
the setting of possibly non-commuting symbols
as is developed by the author in \cite{sbs4}.
We let $\mathcal{A}$ be an associative algebra with identity element $1 \equiv 1_{\mathcal{A}}$.
This algebra could have a non-commutative multiplication; it will be the
symbol space for the Toeplitz quantization.
Suppose that $\langle \cdot , \cdot \rangle_\mathcal{A}$ is a sesquilinear, complex symmetric form on
$\mathcal{A}$;
this form could possibly be degenerate.
Our convention throughout is that all sesquilinear forms are anti-linear
in the first entry and linear in the second.
Moreover, suppose that there exists a sub-algebra
$\mathcal{P}$ (not necessarily containing $1$)
of $\mathcal{A}$
such that the sesquilinear form is positive definite when restricted to $\mathcal{P}$.
Then $\mathcal{P}$ is a pre-Hilbert space.
(This is one way of motivating the choice of the letter $\mathcal{P}$
for this object.
Another could be that $\mathcal{P}$ is a space
whose elements are like holomorphic polynomials.)
We let $\mathcal{H}$ denote a Hilbert space completion
of $\mathcal{P}$ such that
$\mathcal{P}$ is a dense subspace of $\mathcal{H}$.
If we think of $\mathcal{P}$ as corresponding to a
space of holomorphic polynomials, then
$\mathcal{H}$ could be considered as a sort of
generalization of the Segal-Bargmann space
of holomorphic functions.
See \cite{bargmann}.
We let $\iota : \mathcal{P} \to \mathcal{A}$ denote
the inclusion map,
which is an algebra morphism.
We suppose that there exists a projection map
$P : \mathcal{A} \to \mathcal{P}$, that is,
$P \, \iota = id_\mathcal{P}$.
While $P$ is assumed to be linear, it is {\em not}
assumed to be an algebra morphism.
In this abstract formalism the projection $P$
is rather arbitrary.
However, one specific choice for it in
several examples
is given for $\phi \in \mathcal{A}$ by
\begin{equation}
\label{specific-P}
P \phi =
\sum_{j \in J} \langle \psi_j , \phi\rangle_{\mathcal{A}} \, \psi_j
\end{equation}
where $\{ \psi_j ~|~ j \in J \}$ is an orthonormal
set in $\mathcal{P} $ that is an orthonormal basis
of $\mathcal{H}$.
Of course, it must be shown that the possibly infinite
sum on the right side of \eqref{specific-P}
converges to an element in $ \mathcal{P} $.
(This is trivially true if only finitely many
of the summands are non-zero.)
But be aware that $P$ defined
this way is not necessarily
an orthogonal projection, since the form
$\langle \cdot , \cdot \rangle_{\mathcal{A}}$
need not be positive definite and, in fact,
is degenerate in some examples.
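To fix ideas, here is the most classical, commutative instance of this
set-up; it is recorded only as an orienting illustration and is not one
of the non-commutative examples referred to above.
Take $\mathcal{A} = \mathbb{C} [z, \overline{z}]$, the polynomials in
$z$ and $\overline{z}$, with the Gaussian form
$$
\langle f , g \rangle_{\mathcal{A}} :=
\frac{1}{\pi} \int_{\mathbb{C}} \overline{f(z)} \, g(z) \,
e^{- |z|^{2} } \, dA(z),
$$
where $dA$ is Lebesgue measure on $\mathbb{C}$, and take
$\mathcal{P} = \mathbb{C} [z]$, the holomorphic polynomials.
Then the monomials $\psi_{j} := z^{j} / \sqrt{j!}$ for
$j \in \mathbb{N}$ form an orthonormal basis of the corresponding
Hilbert space $\mathcal{H}$, which is the Segal-Bargmann space
(see \cite{bargmann}), and \eqref{specific-P} becomes the restriction
to polynomials of the usual orthogonal projection onto the
holomorphic subspace.
We will return to this illustration below.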
The operator $P$ could also be realized more generally
as an extension to $\mathcal{A}$
of a reproducing kernel
function that represents the identity map on
the pre-Hilbert space $\mathcal{P}$.
This is what is happening in \eqref{specific-P}
since the right side restricted to $\mathcal{P}$
is a reproducing function for $\mathcal{P}$.
Since the algebra $\mathcal{P}$ can be
non-commutative, the reproducing kernel
need not be a function in the usual sense
of that word and so will not have all
(although some) of the properties of
a reproducing kernel function.
See \cite{sbs1} for an example of this
more general type of reproducing kernel.
We assume that there is a left action of
$\mathcal{P}$ on $\mathcal{A}$, namely a linear map
$$
\alpha : \mathcal{P} \otimes \mathcal{A} \to \mathcal{A}
$$
satisfying the standard properties, namely
$1 \cdot a = a $ if
$1 = 1_{\mathcal{A}} \in \mathcal{P} $, and
$ p_1 \cdot ( p_2 \cdot a ) = ( p_1 p_2 )\cdot a $
where $p \cdot a := \alpha ( p \otimes a )$ for
$ p, p_1, p_2 \in \mathcal{P}$ and
$a \in \mathcal{A} $.
Here the juxtaposition $p_1 p_2$ means the
multiplication of elements in $ \mathcal{P} $.
Next,
in anticipation of the definition of a left co-action
in Section~\ref{co-toeplitz-quantization-section},
we re-write this in terms of the map $\alpha$ as
$$
\alpha (1 \otimes a) = a
\qquad \mathrm{and} \qquad
\alpha ( p_1 \otimes \alpha (p_2 \otimes a) ) =
\alpha (p_1 p_2 \otimes a)
$$
for all $a \in \mathcal{A} $ and all
$p_1, p_2 \in \mathcal{P} $.
The first condition is only required
if $ 1 \in \mathcal{P} $.
For example, we could take $\alpha$ equal to $\mu_\mathcal{A}$ restricted to
$\mathcal{P} \otimes \mathcal{A}$, where
$\mu_\mathcal{A} : \mathcal{A} \otimes \mathcal{A} \to \mathcal{A}$ is
the multiplication map of $\mathcal{A}$.
In short, we could take
$\alpha = \mu_{\mathcal{A}} \, (\iota \otimes id)$.
This particular choice for $\alpha$ is the
only place in this theory of Toeplitz operators
where we use the multiplication of $\mathcal{A}$.
We should emphasize however that this particular
choice for $\alpha$ closely corresponds
to what is used in the
classical theory of Toeplitz operators
acting in function spaces.
Nonetheless,
other choices for $\alpha$, which do not use
the multiplicative structure of $\mathcal{A}$,
are also possible.
In such a case we can drop the assumption
that $\mathcal{A}$ is an algebra and instead
only assume that it is a vector space.
However, we still want to have a $*$-structure
on $\mathcal{A}$ in order to be able to define
creation and annihilation operators
in Section~\ref{ann-creation-section}.
Also a $*$-structure appropriately compatible
with the inner
product on $\mathcal{P}$ gives an easy way
to find symmetric operators which then might
be extendable to self-adjoint operators
representing physical observables.
This more general
approach is presented in \cite{sbs4}.
Given the setting of the previous paragraph we now define Toeplitz operators.
\begin{definition}
Suppose that
$g \in \mathcal{A}$ and $\phi \in \mathcal{P}$.
We introduce the notation $\phi g := \alpha (\phi \otimes g) \in \mathcal{A}$ and define
$$
T_g (\phi) := P (\phi g ) = P\alpha (\phi \otimes g) \in \mathcal{P}.
$$
Then $T_g : \mathcal{P} \to \mathcal{P}$
is a linear map, and
we say that $T_g$ is the {\rm Toeplitz operator with
symbol $g$}.
\end{definition}
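As a sketch of what this definition gives in the classical illustration
introduced above (and taking for $\alpha$ the restriction of the
multiplication of $\mathcal{A}$, the particular choice mentioned
earlier), none of which is needed for the general theory, one computes
for $\mathcal{A} = \mathbb{C} [z, \overline{z}]$ and
$\mathcal{P} = \mathbb{C} [z]$ that
$$
T_{z} \, (z^{n}) = P ( z^{n} z ) = z^{n+1}
\qquad \mathrm{and} \qquad
T_{\overline{z}} \, (z^{n}) = P ( z^{n} \overline{z} ) = n \, z^{n-1}
$$
for every integer $n \geq 0$, the second expression being interpreted
as $0$ when $n = 0$.
In other words, $T_{z}$ is multiplication by $z$ while
$T_{\overline{z}} = d / dz$ acting on holomorphic polynomials;
these are the standard creation and annihilation operators of
Segal-Bargmann analysis.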
The notation $\phi g$ was introduced merely to
emphasize the similarity with
classical Toeplitz operators.
Another handy notation is $ \cdot \otimes g$,
which is the linear map
$\mathcal{P} \to \mathcal{P} \otimes \mathcal{A}$
defined for $g \in \mathcal{A}$ and
$\phi \in \mathcal{P}$ by
$$
( \cdot \otimes g) \, \phi := \phi \otimes g.
$$
Here is the corresponding diagram defining $T_g$ as the
composition of these three maps:
\begin{equation}
\label{three-maps}
\mathcal{P}
\stackrel{\cdot \otimes g}{\longrightarrow} \mathcal{P} \otimes \mathcal{A}
\stackrel{\alpha}{\longrightarrow} \mathcal{A}
\stackrel{P}{\longrightarrow} \mathcal{P}.
\end{equation}
Thus the Toeplitz operator $T_g$ is defined
for each symbol $g \in \mathcal{A}$ as
\begin{equation}
\label{T-given-by}
T_g := P \, \alpha \, (\cdot \otimes g) \in \mathcal{L}(\mathcal{P}),
\end{equation}
where $\mathcal{L} (\mathcal{P} ) :=
\{ A : \mathcal{P} \to \mathcal{P} ~|~ A \mathrm{~is~linear} \} $.
To bring this more closely into notational accord with
the usual definition of a
Toeplitz operator in classical
analysis, for each $g \in \mathcal{A}$ we define
$$
M_g := \alpha (\cdot \otimes g) : \mathcal{P} \to \mathcal{A}.
$$
We note that $M_g$ is typically {\em not} an algebra morphism,
even though both $\mathcal{P}$ and $\mathcal{A}$ are algebras.
Then $T_g = P \, M_g$.
Moreover, if we take $\alpha$ to be
the restriction of the multiplication
on $\mathcal{A}$, which as was noted above is a
possible case,
then $M_g$ is indeed the operation of multiplication by $g$ on the right.
(The change to get multiplication by $g$ on the left is easy enough.)
However, even the rather general formula $M_g = \alpha (\cdot \otimes g)$ can itself
be generalized easily.
All that we need is any linear map $\mathcal{A} \ni g \mapsto M_g$, where
$M_g : \mathcal{P} \to \mathcal{A}$ is linear,
that is, we need a linear map
$ M: \mathcal{A} \to
\mathrm{Hom}_{ \mathrm{Vect} }
(\mathcal{P}, \mathcal{A} )
$,
where $\mathrm{Hom}_{\mathrm{Vect} } (V,W)$
means the vector space of
all linear maps $ V \to W$ of the vector spaces
$V$ and $W$.
We are using the unconventional notation $\mathcal{L}(\mathcal{P})$
in order to denote the complex
vector space of {\em all} the linear maps
$A : \mathcal{P} \to \mathcal{P}$.
Any such map $A$ can be considered as a densely defined linear operator in the
Hilbert space $\mathcal{H}$.
We note that $A$ may or may not be a bounded operator.
However, note that in general there are densely defined linear operators in the
Hilbert space $\mathcal{H}$ that do not lie in $\mathcal{L}(\mathcal{P})$.
This is so for two reasons:
First, the domain of a densely defined operator need not be equal to $\mathcal{P}$;
second, the domain
need not be mapped to itself under
the action of such an operator.
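As a side remark, and only as an illustration, in the classical
Segal-Bargmann sketch given above both $T_{z}$ and $T_{\overline{z}}$
lie in $\mathcal{L}(\mathcal{P})$ and yet are unbounded, since
$$
\frac{ \| T_{z} \, (z^{n}) \| }{ \| z^{n} \| } =
\frac{ \sqrt{(n+1)!} }{ \sqrt{n!} } = \sqrt{n+1} \longrightarrow \infty
$$
as $ n \to \infty $, and similarly for $T_{\overline{z}}$.
So membership in $\mathcal{L}(\mathcal{P})$ says nothing about
boundedness.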
The Toeplitz quantization that has been defined
associates to each symbol $g \in \mathcal{A}$
an operator $T_g \in \mathcal{L}(\mathcal{P})$,
which is the Toeplitz operator with symbol $g$.
The mapping $T: \mathcal{A} \to \mathcal{L}(\mathcal{P})$ that is given by $ T : g \mapsto T_g$
is called the
{\em Toeplitz quantization (scheme)}.
A question that arises naturally is whether the
Toeplitz quantization $ T $ is injective,
that is, if a Toeplitz operator comes from
a unique symbol.
For example, in a certain context
Theorem~4.3 in \cite{sbs3} says that the sesquilinear
form on $ \mathcal{A} $ being non-degenerate is
a necessary and sufficient condition for $ T $ to be
injective.
See \cite{sbs3} for more details.
Even though $\mathcal{A}$ and
$\mathcal{L}(\mathcal{P})$ are algebras,
the Toeplitz quantization $ T $
is not expected nor desired
to be an algebra morphism.
On the contrary, the deviation of $ T $ from
being an algebra morphism is some way of
measuring the `quantum-ness' of $ T $.
As an example, we might have elements
$ g, h \in \mathcal{A} $ satisfying the
`classical' $ q $-commutation relation
$ g h - q h g = 0 $ for $ q \in \mathbb{C} $,
while the corresponding
Toeplitz operators satisfy the `quantum'
$ q $-commutation relation
$ T_g T_h - q T_h T_g = \hbar \, I_{\mathcal{P}} $.
In Section~\ref{ccr-section} the rigorous
definitions of `classical' and `quantum'
relations are given in a related context.
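To illustrate this with the classical Segal-Bargmann sketch given above
(with $q = 1$), the symbols $g = \overline{z}$ and $h = z$ commute in
$\mathcal{A} = \mathbb{C} [z, \overline{z}]$, so that $ g h - h g = 0 $,
while a computation on the monomials $z^{n}$ shows that the
corresponding Toeplitz operators satisfy
$$
T_{\overline{z}} \, T_{z} - T_{z} \, T_{\overline{z}} = I_{\mathcal{P}},
$$
which is the canonical commutation relation with $\hbar = 1$.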
The identity element $1 = 1_{\mathcal{A}}$ in
$\mathcal{A}$ has played
no essential role so far in this theory.
It seems that in the examples the main property
of $1$ that arises is $T_1 = I_{\mathcal{P}}$, the identity map.
Nonetheless, we would like to find the dual of this property in the co-Toeplitz setting.
To achieve this requires more details about how
$T_1$ is defined in the Toeplitz setting.
These details are rather trivial, but their
duals in the co-Toeplitz setting motivate
an important definition there, as we shall see.
Let's first note that
$\mathrm{Hom}_{\mathrm{Vect} } (\mathbb{C} , \mathcal{A} ) \cong
\mathcal{A} $ in a natural way.
Explicitly, a symbol $g \in \mathcal{A}$ corresponds
to the linear map $l_g : \mathbb{C} \to \mathcal{A}$
given by $l_g (z) := z \, g$
for every $z \in \mathbb{C}$.
And an arbitrary linear map
$l : \mathbb{C} \to \mathcal{A}$
has the form $l = l_g$, where $g:= l(1)$ with
$ 1 \in \mathbb{C} $.
Then we have that the composition
$$
\mathcal{P} \cong \mathcal{P} \otimes \mathbb{C}
\stackrel{id \otimes l_g}{\longrightarrow}
\mathcal{P} \otimes \mathcal{A}
$$
is equal to $\cdot \otimes g$.
So we can use this to re-write \eqref{T-given-by} as
$T_{g} = P \, \alpha \, (id \otimes l_g)$.
By taking the case where
$g = 1_{\mathcal{A}} = 1 \in \mathcal{A}$
we see that $l_1 = \eta : \mathbb{C} \to \mathcal{A}$, the unit map of the algebra $\mathcal{A}$.
By further taking $\alpha$
to be the restriction of the multiplication of
$\mathcal{A}$, that is
$\alpha = \mu_{\mathcal{A}} \, (\iota \otimes id)$,
we easily get
$T_1 = I_{\mathcal{P}}$.
Various examples of this sort of Toeplitz quantization have been worked out
in some of the author's papers.
In those examples there is some sort of
definition of a `holomorphic element' in the
algebra $\mathcal{A}$, which then must actually be a $*$-algebra, and $\mathcal{P}$ is the sub-algebra
(but {\em not} a sub-$*$-algebra) of holomorphic elements in $\mathcal{A}$.
There is also a concept of `anti-holomorphic element' in $\mathcal{A}$ with its corresponding
sub-algebra, defined by
$\overline{\mathcal{P}} := \mathcal{P}^{*}$, of the anti-holomorphic elements.
Then Toeplitz operators with symbols in $\mathcal{P}$
are defined to be creation operators.
On the other hand,
Toeplitz operators with symbols in
$\overline{\mathcal{P}}$ are defined
to be annihilation operators.
This aspect of the theory, which includes commutation relations among these operators, gives
the theory contact with ideas from the mathematical physics of quantum systems.
It might be worthwhile to recall for the record
what a $ * $-algebra with identity $ 1 $ is.
First off, a {\em $ * $-operation}
(or {\em conjugation}) on a
vector space $ V $ is an
{\em anti-linear}
map $ V \to V $, denoted
by $v \mapsto v^*$ for $ v \in V $, that is also an
involution (that is, $ v^{**} =v $).
Then a {\em $ * $-algebra with identity~$ 1 $}
is an algebra $ \mathcal{A} $ with identity~$ 1 $
which also has a $ * $-operation satisfying
$ (a b)^{*} = b^{*} a^{*} $
for all $ a,b \in \mathcal{A} $
as well as $ 1^{*} = 1 $.
The conjugation in the symbol space $\mathcal{A}$ interchanges by definition the
holomorphic and anti-holomorphic sub-algebras, namely
$$
\mathcal{P}^* = \overline{\mathcal{P}} \quad \mathrm{and} \quad
(\overline{\mathcal{P}})^* = \mathcal{P}.
$$
But the Toeplitz quantization that we have described breaks this symmetry, since the
creation and annihilation operators have distinct
properties in specific examples.
The origin of this has to do with the fact that the
Toeplitz operators are acting in the
holomorphic space $\mathcal{P}$,
even though we could have used
the anti-holomorphic space
$\overline{\mathcal{P}}$ instead of $\mathcal{P}$.
All of the technical details work out if we
use $\overline{\mathcal{P}}$.
For example, the projection of $ \mathcal{A} $
onto $\overline{\mathcal{P}}$ is given by
the linear operator $ P^{*} $,
where $ P^{*} (f) := ( P(f^{*}) )^{*}$
is the standard $ * $-operation
(but {\em not} adjoint) of an operator.
Then the Toeplitz quantization
(which now produces operators in
$ \mathcal{L} ( \overline{\mathcal{P}} ) $)
of the symbols
in $\mathcal{P}$ gives the annihilation operators,
while on the other hand
the Toeplitz quantization of the symbols
in $\overline{\mathcal{P}}$ gives
the creation operators.
However, this is still to be considered
as a type of Toeplitz quantization.
The new concept of co-Toeplitz quantization
comes in the next section.
It is important to realize that the role played
by the sesquilinear form on $ \mathcal{A} $
is not essential to this theory.
However, it
does unify three different aspects of it.
First, it can be used to define the projection
$ P $, although that can be done without having a
sesquilinear form.
Second, it can be used to define the left action,
although that can also be defined independently.
Third, it restricts to an inner product
on $ \mathcal{P} $.
But one can also define that inner product
directly.
Given these comments, we see how the sesquilinear
form, which does appear in some examples, can be
removed from this theory without basically
changing it.
\section{The co-Toeplitz quantization}
\label{co-toeplitz-quantization-section}
Now we continue with the dual development of the new theory of
co-Toeplitz quantization.
This is achieved by reversing most of the arrows in the theory of
Toeplitz quantization as outlined in the previous section.
This sort of duality is well known in category
theory and is called {\em notion duality}.
We will consider {\em object duality} in
Section~\ref{duality-section}.
We let $\mathcal{C}$ be a co-associative co-algebra
with a co-unit
$\varepsilon : \mathcal{C} \to \mathbb{C}$ and with
$\Delta : \mathcal{C} \to \mathcal{C} \otimes \mathcal{C}$, a
possibly non-co-commutative
co-multiplication.
The co-algebra $\mathcal{C}$ is the
symbol space for the co-Toeplitz quantization.
It is important to note that even the co-commutative case is new.
For the definition
and basic properties of co-algebras see \cite{KS}.
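For orientation only we record two standard examples of co-algebras;
they are not needed for the general development, but we will refer
back to them for illustrations.
First, for any non-empty set $S$ the vector space with basis $S$
becomes a co-commutative co-algebra by setting
$$
\Delta (s) := s \otimes s
\qquad \mathrm{and} \qquad
\varepsilon (s) := 1
$$
for every $s \in S$ and extending linearly.
Second, the {\em matrix co-algebra} with basis
$\{ e_{ij} ~|~ 1 \le i,j \le n \}$ has
$$
\Delta (e_{ij}) := \sum_{k=1}^{n} e_{ik} \otimes e_{kj}
\qquad \mathrm{and} \qquad
\varepsilon (e_{ij}) := \delta_{ij},
$$
and it is not co-commutative for $n \ge 2$.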
We suppose next that $\mathcal{C}$ is equipped with a
sesquilinear, complex symmetric form
denoted by
$\langle \cdot , \cdot \rangle_\mathcal{C}$.
Let $\mathcal{P}$ be a co-associative co-algebra
with co-multiplication $\Delta^\prime$,
but not necessarily with a co-unit.
Suppose that there also exists a
co-algebra morphism
$Q : \mathcal{C} \to \mathcal{P}$,
dual to $ \iota $ in the Toeplitz setting.
Also, we suppose that there exists
a linear map $j : \mathcal{P} \to \mathcal{C}$,
dual to $ P $ in the Toeplitz setting,
such that
$$
Q \, j = id_\mathcal{P}.
$$
The injection $j$ need not be a co-algebra morphism.
We suppose that the form on $\mathcal{C}$ restricts
down using $j$ to a
positive definite inner product
$\langle \cdot , \cdot \rangle_\mathcal{P}$
on $\mathcal{P}$, that is to say,
$\langle f , g \rangle_\mathcal{P} = \langle j(f) , j(g) \rangle_\mathcal{C}$
holds for all $f,g \in \mathcal{P}$.
Therefore, $\mathcal{P}$ is a pre-Hilbert space.
We let $\mathcal{H}$ denote a Hilbert space completion of $\mathcal{P}$ such
that $\mathcal{P}$ is a dense subspace of $\mathcal{H}$.
Comparing this with the Toeplitz setting, we
notice that the arrow of the inclusion map
of the pre-Hilbert space $\mathcal{P}$ into
the Hilbert space $\mathcal{H}$ has not been
reversed in the co-Toeplitz setting.
So, it still makes intuitive sense to think
of $\mathcal{P}$ as a space of
`holomorphic polynomials'
and of $\mathcal{H}$ as a type of
generalized Segal-Bargmann space
of `holomorphic functions'.
The projection map $Q$ in this setting is
quite abstract, although it is required to be a
co-algebra morphism while the projection $P$
in the Toeplitz setting
was only required to be linear.
Nonetheless a similar formula using the form
$\langle \cdot , \cdot \rangle_\mathcal{C}$
can be used to define $Q$ in examples.
We will see this in the example in
Section \ref{example-section}.
We also suppose that there is a
{\em left co-action} of
the co-algebra $\mathcal{P}$ on
$\mathcal{C}$, namely, there exists a linear map
$$
\beta : \mathcal{C} \to \mathcal{P} \otimes \mathcal{C}
$$
which has the usual properties dual to those of
a left action, namely,
$$
( \varepsilon^\prime \otimes id_{ \mathcal{C} }) \, \beta
\cong id_{ \mathcal{C} } \qquad
\mathrm{and} \qquad ( id_{ \mathcal{P} } \otimes \beta) \, \beta =
(\Delta^\prime \otimes id_{\mathcal{C}} ) \, \beta.
$$
Each of these properties can be expressed by
a commutative diagram.
However, the first property is only required when
the co-algebra $ \mathcal{P} $ has a co-unit
$ \varepsilon^\prime $.
As an example the left co-action
$\beta$ could be the composition
\begin{equation}
\label{special-beta}
\mathcal{C} \stackrel{\Delta}{\longrightarrow} \mathcal{C} \otimes \mathcal{C}
\stackrel{Q \otimes id}{\longrightarrow} \mathcal{P} \otimes \mathcal{C}
\end{equation}
as the reader can readily verify by checking that the
corresponding diagrams commute.
(Hint: The co-associativity of $\Delta$ is used.)
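For the convenience of the reader, here is a sketch of the verification
of the second property, using only the co-associativity of $\Delta$ and
the hypothesis that $Q$ is a co-algebra morphism, so that
$\Delta^\prime \, Q = (Q \otimes Q) \, \Delta$:
\begin{align*}
( id_{\mathcal{P}} \otimes \beta ) \, \beta &=
\big( id_{\mathcal{P}} \otimes (Q \otimes id_{\mathcal{C}}) \Delta \big)
(Q \otimes id_{\mathcal{C}}) \, \Delta
=
( Q \otimes Q \otimes id_{\mathcal{C}} )
( id_{\mathcal{C}} \otimes \Delta ) \, \Delta
\\
&=
( Q \otimes Q \otimes id_{\mathcal{C}} )
( \Delta \otimes id_{\mathcal{C}} ) \, \Delta
=
( \Delta^\prime \otimes id_{\mathcal{C}} ) \, \beta.
\end{align*}
If $\mathcal{P}$ has a co-unit $\varepsilon^\prime$ satisfying
$\varepsilon^\prime \, Q = \varepsilon$, then the first property follows
in the same way from the co-unit property of $\Delta$.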
In this case $\beta$ is a projection of the co-multiplication of $\mathcal{C}$.
In the dual case of Toeplitz operators we had
a particular choice of the left action $\alpha$
given by
$\alpha = \mu_{\mathcal{A}} \, ( \iota \otimes id )$.
So this particular choice of $\beta$ in
\eqref{special-beta}
is dual to that choice of $\alpha$ in the Toeplitz case.
Also, much as in the Toeplitz case, this choice of
$\beta$ is the only place in this theory of co-Toeplitz
operators where we use the co-multiplication of $\mathcal{C}$.
With other choices of $\beta$ which do not depend
on the co-multiplicative structure of $\mathcal{C}$
we do not need to assume that $\mathcal{C}$ is a
co-algebra.
Rather, we only need to assume that $\mathcal{C}$ is
a vector space equipped with a $*$-structure.
In the example in Section~\ref{example-section}
we will use the
particular choice \eqref{special-beta}
and so that example will be a
co-algebra.
It remains for future research work to find non-trivial
examples of co-Toeplitz operators in a setting where
the symbol space is not a co-algebra.
Given the set-up of the previous paragraph, we
now define co-Toeplitz operators.
\begin{definition}
We take $g \in \mathcal{C}$, known as a {\em symbol},
and then consider the composition, dual
to diagram \eqref{three-maps},
of these three linear maps from right to left:
\begin{equation}
\label{dual-three-maps}
\mathcal{P} \stackrel{\pi_g}{\longleftarrow} \mathcal{P} \otimes \mathcal{C}
\stackrel{\beta}{\longleftarrow} \mathcal{C}
\stackrel{j}{\longleftarrow} \mathcal{P},
\end{equation}
where the family of linear maps
$\{ \pi_g ~|~ g \in \mathcal{C} \}$,
the dual to the family of linear maps
$\{ \cdot \otimes g ~|~ g \in \mathcal{C} \}$,
has yet to be defined.
Then $C_g := \pi_g \, \beta \, j$
is the definition of the
{\em (left) co-Toeplitz operator with symbol $g$}.
\end{definition}
Clearly, $C_g : \mathcal{P} \to \mathcal{P}$ is linear or, in other words,
$C_g \in \mathcal{L}(\mathcal{P})$.
In particular, $C_g$ is a densely defined operator
in the Hilbert space $\mathcal{H}$.
By replacing $\beta$ with a right co-action we get a
theory of {\em right co-Toeplitz operators}.
That quite similar, analogous theory
will not be discussed here;
we will only concern ourselves
with left co-Toeplitz operators.
Next, the possibly non-linear function
$C : \mathcal{C} \to \mathcal{L}(\mathcal{P})$
defined by
$g \mapsto C_g$ is called
the {\em co-Toeplitz quantization}.
We note in passing that the vector space
$\mathcal{L}(\mathcal{P})$
is an algebra under the multiplication
given by composition of operators,
while $\mathcal{L}(\mathcal{P})$ does not seem
to have a natural co-algebra structure.
As in the Toeplitz setting, it is natural to ask
whether the co-Toeplitz quantization map $ C $
is injective.
It seems reasonable to conjecture that this will
depend on other conditions, much as we
already remarked is the case
in the Toeplitz setting.
Analogously to the Toeplitz case, we can introduce some notation to help
understand better what is going on here.
In analogy to $M_g$ we define
$$
\tilde{M}_g := \pi_g \, \beta : \mathcal{C} \to \mathcal{P}
$$
for $g \in \mathcal{C}$.
Then $C_g = \tilde{M}_g \, j = \pi_g \, \beta \, j \in \mathcal{L}(\mathcal{P})$.
Be aware that $\tilde{M}_g $ maps a co-algebra to a co-algebra,
but $\tilde{M}_g $ is not a co-algebra morphism.
This is dual to the Toeplitz setting where
$M_g : \mathcal{P} \to \mathcal{A} $ is a map
between algebras, but is not an algebra morphism.
We still have a quite general theory (possibly too general!), since
the family $\{ \pi_g ~|~ g \in \mathcal{C} \}$ is quite arbitrary in the above discussion.
For example, $\pi_g$ could be independent of $g$ thereby
giving a co-Toeplitz quantization that does not depend on the symbol.
This is much more general than we would
wish to consider.
A more acceptable possibility is to define
$\pi_g : \mathcal{P} \otimes \mathcal{C} \to \mathcal{P}$ by
\begin{equation}
\label{define-pi-g}
\pi_g (\phi \otimes f) := \langle g , \, f \rangle_\mathcal{C} \, \phi
\end{equation}
for $\phi \in \mathcal{P}$ and $f, g \in \mathcal{C}$.
To see that this formula gives a dual to the
map $ \cdot \otimes g $ (now defined in the co-Toeplitz setting),
we consider the following calculation for
$ \psi, \phi \in \mathcal{P} $ and
$ f, g \in \mathcal{C} $:
\begin{align*}
\langle
(\cdot \otimes g) \psi , \phi \otimes f
\rangle_{\mathcal{P} \otimes \mathcal{C}} &=
\langle
\psi \otimes g , \phi \otimes f
\rangle_{\mathcal{P} \otimes \mathcal{C}}
\\
&=
\langle
\psi , \phi
\rangle_{\mathcal{P}}
\,
\langle
g , f
\rangle_{\mathcal{C}}
\\
&=
\langle
\psi ,
\langle
g , f
\rangle_{\mathcal{C}} \, \phi
\rangle_{\mathcal{P}}
\\
&=
\langle
\psi ,
\pi_{g} (\phi \otimes f)
\rangle_{\mathcal{P}}.
\end{align*}
This provides some justification for
the formula \eqref{define-pi-g}
for $ \pi_{g} $.
Note that the second equality here is the standard
definition of the sesquilinear form on
$ \mathcal{P} \otimes \mathcal{C} $.
Now given our convention for sesquilinear forms,
$\pi_g$ is a linear map, but in this case
the co-Toeplitz quantization mapping
$C : g \mapsto C_g$ is anti-linear.
It seems to be some sort of tradition
in mathematical physics that a
quantization map should be linear.
To avoid this slight unpleasantness we could
define $ \pi_g $ by
\begin{equation*}
\pi_g (\phi \otimes f) = \langle g^* , \, f \rangle_\mathcal{C} \, \phi
\end{equation*}
for $\phi \in \mathcal{P}$ and $f, g \in \mathcal{C}$.
Of course, to have this make sense we must assume
that $\mathcal{C}$ is a $ * $-co-algebra,
which we will do anyway later.
But, we rather prefer to let the quantization
mapping be anti-linear.
Again for the record let us recall that a
{\em $ * $-co-algebra} $ \mathcal{C} $ is a
co-algebra with a $ * $-operation
such that the co-multiplication map
$ \Delta : \mathcal{C} \to
\mathcal{C} \otimes \mathcal{C} $
is a {\em $ * $-morphism}, namely,
$ \Delta (g^{*}) = ( \Delta (g) )^{*}$.
Since the co-algebra
$ \mathcal{C} $ has a co-unit
$ \varepsilon : \mathcal{C} \to \mathbb{C} $,
we also require that $ \varepsilon $
is a $ * $-morphism, namely,
$ \varepsilon (g^{*}) =
( \varepsilon ( g ) )^{*}$.
Note that the $ * $-operation of
$ \mathcal{C} \otimes \mathcal{C} $ is
determined by
$(g \otimes h)^{*} = g^{*} \otimes h^{*}$
for $ g,h \in \mathcal{C} $.
Be aware that this is not exactly dual to the
definition of a $ * $-algebra, where the
multiplication is required to be an
anti-$ * $-morphism.
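As an aside, the first of the two standard co-algebras recorded earlier
in this section becomes a $*$-co-algebra in this sense by declaring
$s^{*} := s$ for every basis element $s \in S$ and extending
anti-linearly; since both sides of the required identities are
anti-linear in the argument, it suffices to check them on basis
elements, where they are immediate.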
Given the definition \eqref{define-pi-g}
for $\pi_g$ we can write down more explicit
expressions for $\tilde{M}_g$ and $C_g$.
So we take $f \in \mathcal{C}$ and
then in Sweedler's notation for a co-action
(see Appendix~B in \cite{QPB}) we have
$$
\beta (f) = f^{(0)} \otimes f^{(1)} \in \mathcal{P} \otimes \mathcal{C}.
$$
It follows for $ g \in \mathcal{C} $ that
$$
\tilde{M}_g (f) =
\pi_g \, \beta (f) =
\pi_g ( f^{(0)} \otimes f^{(1)} ) =
\langle g , \, f^{(1)} \rangle_\mathcal{C} \, f^{(0)}.
$$
For $C_g$ we simply note that for
$\phi \in \mathcal{P}$ we have that
$$
C_g (\phi) = \tilde{M}_g \, j (\phi) =
\langle g , \, f^{(1)} \rangle_\mathcal{C} \, f^{(0)},
$$
where now $f = j (\phi)$.
If we use the injection $j$ to identify $ \mathcal{P}$ as a subspace of $\mathcal{C}$,
then the previous expression simplifies to
$$
C_g (\phi) =
\langle g , \, \phi^{(1)} \rangle_\mathcal{C} \, \phi^{(0)}.
$$
The co-action $\beta$ is a basic operation
in these expressions.
However, $\beta$ is hidden inside Sweedler's notation.
For example, as noted earlier, we can take
$\beta = (Q \otimes id) \, \Delta_\mathcal{C}
: \mathcal{C} \to \mathcal{P} \otimes \mathcal{C}$.
Then for $f \in \mathcal{C}$ we have
$$
\beta (f) = Q ( f^{(1)}) \otimes f^{(2)}
\quad \mathrm{and} \quad
C_{g} (\phi) =
\langle g, f^{(2)} {\rangle}_{\mathcal{C}}
\, Q (f^{(1)}),
$$
where we are using Sweedler's notation
for the co-multiplication, that is,
$\Delta_\mathcal{C} (f) = f^{(1)} \otimes f^{(2)}
\in \mathcal{C} \otimes \mathcal{C}$.
Be aware please that this is
{\em not} Sweedler's notation
$f^{(0)} \otimes f^{(1)}$
introduced above for the co-action $\beta$.
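Here is a small, purely illustrative computation.
Suppose that $\beta$ is given by the particular choice
\eqref{special-beta} and that $\phi \in \mathcal{P}$ is such that
$f := j (\phi)$ is a {\em group-like} element of $\mathcal{C}$,
meaning that $\Delta_{\mathcal{C}} (f) = f \otimes f$.
(In the first standard co-algebra recorded earlier in this section
every basis element $s \in S$ is group-like.)
Then for any symbol $g \in \mathcal{C}$ we get
$$
C_{g} (\phi) =
\langle g, f \rangle_{\mathcal{C}} \, Q (f) =
\langle g, j(\phi) \rangle_{\mathcal{C}} \, \phi,
$$
using $Q \, j = id_{\mathcal{P}}$ in the last step.
So such a $\phi$ is a simultaneous eigenvector of all of the
co-Toeplitz operators, the eigenvalue depending anti-linearly on
the symbol $g$.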
To maintain contact with physics ideas we only
consider the case when
$\mathcal{C}$ is a $*$-co-algebra.
But, in that case we do {\em not} require
$\mathcal{P}$ to be a sub-$*$-co-algebra.
Rather we think of the elements
in $\mathcal{P}$ as being {\em holomorphic} variables,
while those in $\mathcal{P}^*$
are {\em anti-holomorphic} variables.
Then the creation operators are defined
to be those of the form $C_g$
where $g \in \mathcal{P}^{*}$,
while annihilation operators are those
of the form $C_g$
where $g \in \mathcal{P}$.
What relation holds between the
operators
$(C_g)^*$, the adjoint of $ C_{g} $,
and $C_{g^*}$ for a symbol
$g \in \mathcal{C}$
is a question that we will consider later.
A possible relation between the
sesquilinear form and the $ * $-operation
is given in the next definition.
This property was already described in the
Toeplitz setting in \cite{sbs5},
but it was not given its own name there.
\begin{definition}
If for all $ f, g \in \mathcal{C} $ the identity
\begin{equation}
\label{star-symmetry}
\langle f^{*} , g^{*} \rangle_{\mathcal{C}}
= \langle f , g \rangle_{\mathcal{C}}^{*}
\end{equation}
holds,
then we say that the sesquilinear form
$ \langle \cdot , \cdot \rangle_{\mathcal{C}} $
is {\rm $ * $-symmetric}.
\end{definition}
As in the Toeplitz setting it is important to
understand the role of the sesquilinear form
in the co-Toeplitz setting, where it has the
same three aspects mentioned earlier
as in the Toeplitz setting
plus a new aspect, which is that it appears
in the definition \eqref{define-pi-g} of $ \pi_{g} $.
This seems to be a more essential role
since $ \pi_{g} $ so defined
is dual to the map $ \cdot \otimes g$
in the Toeplitz setting.
\section{The co-unit and co-symbols}
\label{co-symbols-section}
So far the co-unit has not played a role
in this theory of co-Toeplitz operators.
To achieve this we now will dualize the
theory from the Toeplitz setting.
Since the co-unit
$\varepsilon : \mathcal{C} \to \mathbb{C}$
is a linear map, we consider how to deal
with an arbitrary linear map
$\lambda : \mathcal{C} \to \mathbb{C}$ in a way that
is dual to the linear maps
$l : \mathbb{C} \to \mathcal{A}$
which appeared in the Toeplitz setting.
The dual construction, starting with $\lambda$
instead of with a symbol $g \in \mathcal{C} $,
gives us a more general type of co-Toeplitz
operator defined as the composition from
right to left as follows:
\begin{equation}
\label{general-type-with-e}
\mathcal{P} \cong \mathcal{P} \otimes \mathbb{C}
\stackrel{id \otimes \lambda}{\longleftarrow}
\mathcal{P} \otimes \mathcal{C}
\stackrel{\beta}{\longleftarrow} \mathcal{C}
\stackrel{j}{\longleftarrow} \mathcal{P}
\end{equation}
However, the linear functional
$\lambda$ lies in
$\mathrm{Hom}_{\mathrm{Vect}} ( \mathcal{C} ,
\mathbb{C} )$
which, quite unlike its dual
$\mathrm{Hom}_{\mathrm{Vect}}
( \mathbb{C}, \mathcal{C} )$,
is {\em not} naturally isomorphic in general to
$\mathcal{C} $.
Of course, for every symbol $g \in \mathcal{C}$ the linear functional
\begin{equation}
\label{define-e-q}
e_g := \langle g, \cdot \rangle_{\mathcal{C}}
\end{equation}
lies in
$\mathrm{Hom}_{\mathrm{Vect}} (\mathcal{C} ,
\mathbb{C})$.
Moreover, if we take $\lambda = e_g$
in diagram \eqref{general-type-with-e},
we readily see that
$$
id \otimes e_g = \pi_g
$$
and so we do have the co-Toeplitz operators as
defined above as a special case of the more
general definition
$$
C_{\lambda} := ( id_{\mathcal{P}} \otimes \lambda ) \, \beta \, j
$$
for
$\lambda \in \mathrm{Hom}_{\mathrm{Vect}}
( \mathcal{C} ,
\mathbb{C} )$.
Having this definition in hand, it now makes
sense to study the co-Toeplitz operator
$C_{\varepsilon}$, where
$\varepsilon : \mathcal{C} \to \mathbb{C}$
is the co-unit of the co-algebra $\mathcal{C}$.
In the Toeplitz setting we had that
$T_1 = I_{\mathcal{P}}$ in the special case when the
left action was the restriction of the
multiplication of $\mathcal{A}$.
So in the present co-Toeplitz setting we expect
a similar result when the left co-action
$\beta$ is the projection of the
co-multiplication, that is, when we have
$\beta = (Q \otimes id_{\mathcal{C}} ) \,
\Delta_{\mathcal{C}} $.
In this case, identifying each $\phi \in \mathcal{P}$
with $j (\phi) \in \mathcal{C}$ as before, we compute that
\begin{align*}
C_{\varepsilon} \, \phi &=
( id_{\mathcal{P}} \otimes \varepsilon ) \, \beta \, j \phi
=
( id_{\mathcal{P}} \otimes \varepsilon ) \,
( Q \otimes id_{\mathcal{C}} ) \,
\Delta_{\mathcal{C}} \, \phi
\\
&=
( Q \otimes id_{\mathbb{C}} )
( id_{\mathcal{C}} \otimes \varepsilon ) \,
\, \Delta_{\mathcal{C}} \, \phi
=
( Q \otimes id_{\mathbb{C}} ) ( \phi \otimes 1 )
\\
&\cong Q \, \phi
= \phi,
\end{align*}
where in the last equality we used that
$\phi \in \mathcal{P}$.
Also $ 1 $ here means the identity element
$ 1 \in \mathbb{C} $.
This discussion, which seemed at the start to be
a minor side issue, has given rise to a new
definition which we now explicitly state.
\begin{definition}
Let
$\lambda \in \mathcal{C}^\prime :=
\mathrm{Hom}_{\mathrm{Vect}} ( \mathcal{C} ,
\mathbb{C} )$
be a linear functional on
the co-algebra $\mathcal{C}$.
Then we define the
{\rm (generalized) co-Toeplitz operator}
with {\rm co-symbol $\lambda$} to be
the linear map
$$
C_{\lambda} :=
( id_{\mathcal{P}} \otimes \lambda ) \, \beta \, j
\in \mathcal{L} (\mathcal{P}).
$$
Much as before, we define the
{\rm (generalized) co-Toeplitz quantization}
to be the map
$ C:\mathcal{C}^\prime \to \mathcal{L} (\mathcal{P})$
given by $\lambda \mapsto C_\lambda$
for $\lambda \in \mathcal{C}^\prime$.
\end{definition}
We sometimes omit the word `generalized'
when speaking of these new objects, since
the fact that we are using co-symbols in
$\mathcal{C}^\prime$ rather than symbols
in $\mathcal{C}$ suffices to remove
any ambiguity.
Note that the notation in this definition
gives us the strange looking identity
$$
C_{g} = C_{e_{g}}
$$
for any $g \in \mathcal{C}$, where
on the left side there is a co-Toeplitz
operator with symbol $ g $ and on the right
side there is a generalized co-Toeplitz operator
with co-symbol $ e_{g} $ as defined in
\eqref{define-e-q}.
So, given this definition,
we have proved above the following result, which is
dual to the result that $T_1 = I_{\mathcal{P}}$
in the Toeplitz setting.
\begin{prop}
Let the left co-action be
\begin{equation}
\label{beta-special-form}
\beta =
( Q \otimes id_{\mathcal{C}} ) \, \Delta_{\mathcal{C}}.
\end{equation}
Then the
co-Toeplitz quantization of the co-unit
$\varepsilon$ of the co-algebra $\mathcal{C}$ is
$$
C_{\varepsilon} = I_{\mathcal{P}},
$$
the identity operator on $ \mathcal{P} $.
\end{prop}
So an important point here is that the set
of co-symbols can be strictly larger than the
set of co-symbols of the form $e_{g}$ for
$g \in \mathcal{C}$.
Recall that the sesquilinear form on $ \mathcal{C}$
could be degenerate, and so the Riesz
representation theorem need not apply here.
What we do see however is that the
theory of co-Toeplitz operators
with co-symbols
can possibly admit more operators than the
original co-Toeplitz operator
theory with symbols only in $\mathcal{C}$.
A more important point is that the dual space
$\mathcal{C}^\prime$
of a co-algebra with co-unit
has a canonical structure as
an algebra with unit, where the multiplication
of the elements
$\lambda, \mu \in \mathcal{C}^\prime$
is defined as the composition
$$
\mathcal{C}
\stackrel{\Delta}{\longrightarrow}
\mathcal{C} \otimes \mathcal{C}
\stackrel{\lambda \otimes \mu}{\longrightarrow}
\mathbb{C} \otimes \mathbb{C}
\cong \mathbb{C}
$$
and the unit is the linear map
$\eta : \mathbb{C} \to \mathcal{C}^\prime$
defined by $ \eta (z):= z \varepsilon $
for all $z \in \mathbb{C}$,
where $\varepsilon \in \mathcal{C}^\prime$
is the co-unit of $\mathcal{C}$.
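To illustrate this dual algebra structure, and only as an illustration,
consider again the two standard co-algebras recorded in
Section~\ref{co-toeplitz-quantization-section}.
For the co-algebra with basis a set $S$ and $\Delta (s) = s \otimes s$,
the dual $\mathcal{C}^\prime$ is the algebra of all complex-valued
functions on $S$ with pointwise multiplication, since
$(\lambda \mu) (s) = \lambda (s) \, \mu (s)$, and its unit is the
constant function $1$, namely the co-unit $\varepsilon$.
For the matrix co-algebra, the map sending
$\lambda \in \mathcal{C}^\prime$ to the matrix
$\big( \lambda (e_{ij}) \big)_{i,j}$ identifies $\mathcal{C}^\prime$
with the algebra of $n \times n$ complex matrices, the product defined
above becoming matrix multiplication and $\varepsilon$ becoming the
identity matrix.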
So the moral of this story is that the
generalized co-Toeplitz quantization with
co-symbols in $ \mathcal{C}^\prime $ is a
map from the {\em algebra} $ \mathcal{C}^\prime $
to the {\em algebra} $ \mathcal{L} (\mathcal{P}) $
of linear operators.
Of course, we do not expect this map $C$ to be an
{\em algebra} morphism.
Rather, as we have remarked earlier
in the Toeplitz setting,
the discrepancy that $ C $ has from being an
algebra morphism is an indication of the
`quantum-ness' of the generalized
co-Toeplitz quantization $ C $.
A particular case of this multiplication
occurs by taking $\lambda = e_g$ and
$\mu = e_h$ for $g,h \in \mathcal{C}$.
Then for all $ \phi \in \mathcal{C} $ we get
\begin{align*}
(e_g \, e_h) (\phi) &=
(e_g \otimes e_h) (\phi^{(1)} \otimes \phi^{(2)})
\\
&= e_g( \phi^{(1)} ) e_h( \phi^{(2)} )
\\
&= \langle g, \phi^{(1)} \rangle_{\mathcal{C}} \,
\langle h, \phi^{(2)} \rangle_{\mathcal{C}}.
\end{align*}
Furthermore, if $ \phi $ is a group-like element
(that is, $ \Delta (\phi) = \phi \otimes \phi $),
then this simplifies to
$$
(e_g \, e_h) (\phi) =
\langle g, \phi \rangle_{\mathcal{C}} \,
\langle h, \phi \rangle_{\mathcal{C}} =
e_g (\phi) \, e_h (\phi).
$$
The family of the $ e_{g} $'s plays an
important role in this theory.
\begin{definition}
\label{define-e}
We define
$ e :\mathcal{C} \to \mathcal{C}^{\prime} $
by
$ e(g):= e_g $ for all $ g \in \mathcal{C} $.
\end{definition}
Note that $ e $ is an anti-linear map which
need be neither injective nor surjective.
Moreover, the range of $ e $ need not be a
sub-algebra of $ \mathcal{C}^{\prime} $.
However, we do have the following nice property.
\begin{theorem}
Suppose the sesquilinear form is $ * $-symmetric.
Then the range $ \mathrm{Ran} \, e $ of $ e $
is closed under
the $ * $-operation of $ \mathcal{C}^{\prime} $.
More specifically, $ (e_{g})^{*} = e_{g^{*}} $ holds
for all $ g \in \mathcal{C} $, that is,
$ e $ is a $ * $-morphism.
\end{theorem}
\noindent
\textbf{Remark:}
The $ * $-operation of $ \mathcal{C}^{\prime} $
is defined by
$\lambda^{*} (g) := ( \lambda (g^{*}) )^{*}$
for $ \lambda \in \mathcal{C}^{\prime} $
and $ g \in \mathcal{C} $.
\begin{proof}
We calculate for
$ g, h \in \mathcal{C} $ that
$$
(e_{g})^{*} (h) =
\big( e_{g} (h^{*}) \big)^{*} =
\langle g , h^{*} \rangle_{\mathcal{C}}^{*} =
\langle g^{*} , h \rangle_{\mathcal{C}} =
e_{g^{*}} (h),
$$
where we used the $ * $-symmetry in the
third equality.
This shows the second assertion of the theorem
from which the first assertion follows directly.
\end{proof}
It is a quite general fact that the Toeplitz
quantization map does not preserve multiplication,
even though it is a map between algebras.
The co-Toeplitz quantization map can never
preserve co-multiplication, since it maps
into a vector space with no natural
co-multiplication even though its domain is a
co-algebra.
But the generalized co-Toeplitz quantization
is a map from the algebra $ \mathcal{C}^{\prime} $
to the algebra $ \mathcal{L} (\mathcal{P}) $.
And this map sends the identity element
$ \varepsilon$ of $\mathcal{C}^{\prime} $
to the identity element
$I_{\mathcal{P}}$ of $\mathcal{L} (\mathcal{P}) $
when \eqref{beta-special-form} holds.
But what is the relation of
the generalized co-Toeplitz quantization map
with the multiplication?
The next result may come as a surprise.
\begin{theorem}
Suppose that a co-Toeplitz quantization satisfies:
\begin{itemize}
\item
The left co-action $ \beta $ is given by
\eqref{beta-special-form}.
\item
$ \mathcal{P} $ is a sub-co-algebra
of $ \mathcal{C} $, that is,
$ \Delta_{ \mathcal{P}} = \Delta_{ \mathcal{C}} \!
\upharpoonright_{\mathcal{P}}$.
\end{itemize}
\noindent
Then the generalized co-Toeplitz quantization map
$
C : \mathcal{C}^{\prime} \to \mathcal{L} (\mathcal{P}) $
is an algebra morphism.
\end{theorem}
\noindent
\textbf{Remark:}
This tells us that under
the given hypotheses
the generalized co-Toeplitz quantization map
is just too nice.
For physical reasons
we want to have a quantization that is not
quite so nice.
After all, Dirac has taught us that the
distinguishing characteristic of quantum theory is
that the observables do not commute.
In this more general context Dirac's insight
can be extended to say that the range of
a quantization mapping
should be less commutative than its domain.
So, in the favorable case when $ C $ is injective,
we do not want $ C $ to be an algebra morphism.
Therefore, I consider this to be a No~Go theorem.
Now the hypothesis on $ \beta $ seems reasonable,
since it is the dual of the commonly used condition
on the left action in the Toeplitz setting.
But the second hypothesis is dual to assuming
in the Toeplitz setting that the projection
$ P : \mathcal{A} \to \mathcal{P} $ is an
algebra morphism.
And that is a condition which we do not wish
to impose.
Hence, the second hypothesis is something which we
do not want to hold in examples and in the future
development of this theory.
That hypothesis does not hold in the example
in Section~\ref{example-section}.
Of course, a No~Go theorem is a theorem and
is worth knowing.
\begin{proof}
Take $ \lambda, \mu \in \mathcal{C}^{\prime} $
and $ \phi \in \mathcal{P} $.
Throughout the proof
we use the notation
$ \Delta :=
\Delta_{ \mathcal{P}} = \Delta_{ \mathcal{C}}
\! \upharpoonright_{\mathcal{P}}
$, which comes from the second hypothesis.
We also use the iterated Sweedler notation as
explained, for example, in \cite{QPB}.
Then we calculate as follows.
\begin{align*}
C_{\lambda} \, C_{\mu} \phi &=
(id \otimes \lambda) (Q \otimes id) \, \Delta \,
(id \otimes \mu) (Q \otimes id) \, \Delta \phi
\\
&=
(id \otimes \lambda) (Q \otimes id) \, \Delta \,
(id \otimes \mu) (Q \phi^{(1)} \otimes \phi^{(2)})
\\
&=
\mu(\phi^{(2)})
(id \otimes \lambda) (Q \otimes id) \, \Delta \,
\phi^{(1)}
\\
&=
\mu(\phi^{(2)})
(id \otimes \lambda) (Q \otimes id) \,
( \phi^{(11)} \otimes \phi^{(12)} )
\\
&=
\mu(\phi^{(2)}) \,
\lambda(\phi^{(12)}) \,
\phi^{(11)}
\\
&=
\mu(\phi^{(22)}) \,
\lambda(\phi^{(21)}) \,
\phi^{(1)}
\\
&=
\big(
(\lambda \otimes \mu) \, \Delta \phi^{(2)}
\big)
\phi^{(1)}
\\
&=
\big( (\lambda \mu) \phi^{(2)} \big) Q \phi^{(1)}
\\
&=
(id \otimes \lambda \mu)
(Q \otimes id) \, \Delta \phi
\\
&=
C_{\lambda \mu} \, \phi.
\end{align*}
Here we used
$\Delta \phi = \phi^{(1)} \otimes \phi^{(2)}
\in \mathcal{P} \otimes \mathcal{P}$,
the fact that $ Q $ acts as the identity
on $ \mathcal{P} $, the co-associativity of
$ \Delta $, the definition of the product
$ \lambda \mu $ and the definition of
the co-Toeplitz quantization mapping $ C $.
The first hypothesis was used in the first and
last equalities.
\end{proof}
The question naturally arises whether the
generalized co-Toeplitz quantization map is
injective.
Using Definition~\ref{define-e}
and Equation~\eqref{define-e-q},
we see that a necessary condition
for this injectivity is that
$ e : \mathcal{C} \to \mathcal{C}^{\prime} $
is injective, which itself is equivalent to
the sesquilinear form on $ \mathcal{C} $
being non-degenerate.
The extension of the co-Toeplitz quantization
from the domain of symbols to the domain
of co-symbols leads one to wonder
if there is a corresponding extension of the
domain of the Toeplitz quantization.
Now the symbol $ g \in \mathcal{A} $ in the
Toeplitz setting was used there to define a linear
map $l_{g} : \mathbb{C} \to \mathcal{A}$.
And this map $ l_{g} $ was all that we needed
to define the Toeplitz operator with symbol~$ g $.
But the generalization given by replacing
$ l_{g} $ with an arbitrary linear map
$l : \mathbb{C} \to \mathcal{A}$ is no
generalization at all because,
as noted earlier, any such map $ l $ is equal
to $ l_{g} $ for a unique symbol
$ g \in \mathcal{A} $.
So the co-Toeplitz quantization shows a bit
of flexibility, let's say, that is not present
in the Toeplitz quantization.
This is an indication of a lack of symmetry
between the Toeplitz and co-Toeplitz quantizations,
a topic that we will consider in more detail
in the next section.
\section{Duality}
\label{duality-section}
We now discuss in detail in what sense the
theories of Toeplitz and co-Toeplitz
quantization are duals of each other.
The duality behind the definition
of co-Toeplitz operators
comes about simply by reversing the direction
of all the arrows (i.e., morphisms)
in the definition of a Toeplitz operator.
This sort of duality comes from category theory
and is seen in the
formulation of the basic
concepts of non-commutative geometry, for
example.
It is called {\em notion duality}.
This is exactly what we see in the relation between
the definitions \eqref{three-maps} and
\eqref{dual-three-maps} of
Toeplitz operators
and of co-Toeplitz operators, respectively.
However, another sort of duality (called
{\em object duality}) arises from
applying the {\em duality contravariant functor}
$ V \mapsto V^{\prime}
\equiv \mathrm{Hom}_{\mathrm{Vect}} (V, \mathbb{C}) $
for $ V $ a complex vector space
and the corresponding pull-back definition
$T \mapsto T^{\prime} : W^{\prime} \to V^{\prime}$ for
a morphism (i.e., linear map) $ T: V \to W $
of vector spaces $ V $ and $ W $.
Specifically,
$ T^\prime (\lambda) :=
\lambda \circ T \in V^{\prime}$ for
$ \lambda \in W^{\prime} $.
So the question arises as to what happens to
\eqref{three-maps} and
\eqref{dual-three-maps} when we apply this duality
contravariant functor to each of them.
Of course, we do get some operator.
The question is what type of operator it is and
whether it has a simple formula.
One nice property is that a $ * $-operation on $ V $
induces a $ * $-operation on $ V^{\prime} $
defined by
$ \lambda^{*} (v) := ( \lambda (v^{*}) )^{*} $
for $ \lambda \in V^{\prime} $ and $ v \in V $.
Let us also recall from the last section that the dual
$\mathcal{C}^{\prime}$
of a co-algebra $ \mathcal{C} $ is always an algebra.
On the other hand, the dual $ \mathcal{A}^{\prime} $
of an algebra $ \mathcal{A} $ is not necessarily
a co-algebra.
Briefly, the point is that in general
the duality contravariant functor is only
{\em sub-multiplicative}
with respect to the tensor product, namely,
$
V^{\prime} \otimes W^{\prime} \subset
(V \otimes W)^{\prime}.
$
However, if either $ V $ or $ W $ is
finite dimensional, then the duality
contravariant functor is {\em multiplicative},
$
V^{\prime} \otimes W^{\prime} =
(V \otimes W)^{\prime}.
$
To get multiplicativity in the full infinite
dimensional setting requires changing either the
definition of the duality contravariant functor or
the definition of the tensor product (or of both).
See \cite{KS} for more details.
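A standard illustration of the failure of multiplicativity, included
here only for completeness, goes as follows.
Let $V$ be a vector space with a countably infinite basis
$\{ v_{i} ~|~ i \in \mathbb{N} \}$ and define
$\tau \in (V \otimes V)^{\prime}$ by
$\tau ( v_{i} \otimes v_{j} ) := \delta_{ij}$.
If we had $\tau = \sum_{k=1}^{N} \lambda_{k} \otimes \mu_{k}$ with
$\lambda_{k}, \mu_{k} \in V^{\prime}$, then the
$(N+1) \times (N+1)$ identity matrix
$\big( \tau ( v_{i} \otimes v_{j} ) \big)_{1 \le i,j \le N+1}$
would be a sum of $N$ matrices of rank one and so would have rank
at most $N$, a contradiction.
Hence $\tau \notin V^{\prime} \otimes V^{\prime}$, and the inclusion
is proper in this case.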
A rather similar analysis, which we leave to the
interested reader, shows that the dual
of a co-action is always an action, while the dual
of an action is not necessarily a co-action.
But the dual of a vector space with a sesquilinear
form does not in general have a naturally defined
sesquilinear form.
So, we will not look for a full duality between
Toeplitz and co-Toeplitz operators using this
duality contravariant functor.
Thus, we will mainly consider the duality relation
between the diagrams \eqref{three-maps} and
\eqref{dual-three-maps} considered as diagrams
of {\em vector spaces} as well as the {\em definitions}
of Toeplitz and co-Toeplitz operators, respectively.
Similarly, we take \eqref{general-type-with-e}
to be the diagram of
{\em vector spaces} which {\em defines}
a generalized co-Toeplitz operator.
But we will comment on other
algebraic aspects of this duality contravariant
functor as they arise in specific contexts.
Given this situation, it seems more feasible
for us to first consider the dual of a co-Toeplitz
operator as defined in \eqref{dual-three-maps}
with symbol $ g \in \mathcal{C} $, a co-algebra,
which gives us this dual diagram:
\begin{equation}
\label{this-dual-diagram}
\mathcal{P}^{\prime}
\stackrel{\pi_{g}^{\prime}}{\longrightarrow}
(\mathcal{P} \otimes \mathcal{C})^{\prime}
\stackrel{\beta^{\prime}}{\longrightarrow}
\mathcal{C}^{\prime}
\stackrel{j^{\prime}}{\longrightarrow}
\mathcal{P}^{\prime}.
\end{equation}
To understand this diagram
we evaluate $ \pi_{g}^{\prime} $.
So for $ \lambda \in \mathcal{P}^{\prime} $,
$ \phi \in \mathcal{P} $ and
$ f,g \in \mathcal{C} $ we have
\begin{align*}
&\pi_{g}^{\prime} (\lambda)(\phi \otimes f) =
( \lambda \circ \pi_{g} ) (\phi \otimes f)
=
\lambda
\big(
\langle g , f \rangle_{\mathcal{C}} \, \phi
\big)
\\
&=
\langle g , f \rangle_{\mathcal{C}} \,
\lambda
\big(
\phi
\big)
=
e_{g} (f) \lambda (\phi)
=
( \lambda \otimes e_{g} ) (\phi \otimes f),
\end{align*}
which implies that
$ \pi_{g}^{\prime} (\lambda)
= \lambda \otimes e_{g}
\in \mathcal{P}^{\prime} \otimes \mathcal{C}^{\prime}$
and hence
$$
\pi_{g}^{\prime} = \cdot \otimes e_{g} :
\mathcal{P}^{\prime} \to
\mathcal{P}^{\prime} \otimes \mathcal{C}^{\prime}
\subset (\mathcal{P} \otimes \mathcal{C})^{\prime}.
$$
Then \eqref{this-dual-diagram} becomes
\begin{equation}
\label{better-dual-diagram}
\mathcal{P}^{\prime}
\stackrel{ \cdot \otimes e_{g} }{\longrightarrow}
\mathcal{P}^{\prime} \otimes \mathcal{C}^{\prime}
\stackrel{\beta^{\prime}}{\longrightarrow}
\mathcal{C}^{\prime}
\stackrel{j^{\prime}}{\longrightarrow}
\mathcal{P}^{\prime}.
\end{equation}
This is a Toeplitz operator as
defined by \eqref{three-maps} with
symbol $ e_{g} $ in the algebra
$ \mathcal{C}^{\prime} $.
Moreover, $ \beta^{\prime} $ is a left action
and $ j^{\prime} $ is a projection.
Also
$ Q^{\prime} : \mathcal{P}^{\prime} \to \mathcal{C}^{\prime}$
is a unital algebra morphism.
We have the following.
\begin{theorem}
\label{dual-of-co-toepltz-operator}
If $ C_{g} \in \mathcal{L} (\mathcal{P}) $
is a co-Toeplitz operator with symbol $ g $
in the co-algebra $ \mathcal{C} $,
then
$ (C_{g})^{\prime} = T_{e_{g}} \in \mathcal{L} (\mathcal{P}^{\prime})$
is a Toeplitz operator with symbol $ e_{g} $ in
the algebra $ \mathcal{C}^{\prime} $.
If $ C_{\mu} \in \mathcal{L} (\mathcal{P}) $
is a generalized
co-Toeplitz operator with co-symbol $ \mu $
in the algebra $ \mathcal{C}^{\prime} $,
then
$ (C_{\mu})^{\prime} = T_{\mu} \in \mathcal{L} (\mathcal{P}^{\prime})$
is a Toeplitz operator with symbol $ \mu $ in
the algebra $ \mathcal{C}^{\prime} $.
\end{theorem}
\noindent
\textbf{Remark:}
We can also write the result of the first part
as $ (C_{e_{g}})^{\prime} = T_{e_{g}} $.
\begin{proof}
We have already proved the first assertion above.
As for the second assertion we note that
in the above argument the symbol $ g $ is used to
define the linear functional
$ e_{g} \in \mathcal{C}^{\prime}$,
which is the only
occurrence of $ g $ in \eqref{better-dual-diagram}.
So we replace $ e_{g} $ with the co-symbol
$ \mu $ in that argument to obtain
$ (\cdot \otimes \mu)
: \mathcal{P}^{\prime} \to
\mathcal{P}^{\prime} \otimes \mathcal{C}^{\prime} $
in \eqref{better-dual-diagram},
and the second result follows immediately.
\end{proof}
On the other hand, the
dual of a Toeplitz operator
is not necessarily a co-Toeplitz operator.
To see this we examine the dual of diagram
\eqref{three-maps}, which is
\begin{equation}
\label{yet-another-dual}
\mathcal{P}^{\prime}
\stackrel{(\cdot \otimes g)^{\prime}}{\longleftarrow} (\mathcal{P} \otimes \mathcal{A})^{\prime}
\stackrel{\alpha^{\prime}}{\longleftarrow}
\mathcal{A}^{\prime}
\stackrel{P^{\prime}}{\longleftarrow} \mathcal{P}^{\prime}
\end{equation}
Here neither $ \mathcal{A}^{\prime} $ nor
$\mathcal{P}^{\prime} $ need be a co-algebra
although each does have a $ * $-operation.
Consequently, it need not make sense in general to
require $ P^{\prime} $ to be a co-algebra morphism.
Recall that
`co-Toeplitz operator' (resp., `generalized
co-Toeplitz operator')
now means the composition
of the maps of {\em vector spaces} in diagram
\eqref{dual-three-maps}
(resp., diagram \eqref{general-type-with-e}).
Even if $ \mathcal{P}^{\prime} $ is a co-algebra,
$ \alpha^{\prime} $ need not be a left
co-action on $ \mathcal{A}^{\prime} $, since
$$
\mathcal{P}^{\prime} \otimes \mathcal{A}^{\prime}
\subset (\mathcal{P} \otimes \mathcal{A})^{\prime}
$$
can be a proper inclusion.
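To see that this inclusion can indeed be proper, here is a
standard example, included only for orientation and not tied
to any particular Toeplitz structure.
Take both $ \mathcal{P} $ and $ \mathcal{A} $ to be the
polynomial space $ \mathbb{C}[x] $ with basis
$ \{ x^{i} \}_{i \ge 0} $.
The `trace' functional defined by
$$
\tau \Big( \sum_{i,j} c_{ij} \, x^{i} \otimes x^{j} \Big)
:= \sum_{i} c_{ii}
$$
lies in $ (\mathbb{C}[x] \otimes \mathbb{C}[x])^{\prime} $,
since every element of the tensor product has only finitely
many non-zero coefficients $ c_{ij} $.
However, $ \tau $ does not lie in
$ \mathbb{C}[x]^{\prime} \otimes \mathbb{C}[x]^{\prime} $,
since any finite sum of simple tensors
$ \sum_{k} f_{k} \otimes g_{k} $ induces a bilinear pairing
of finite rank on $ \mathbb{C}[x] \times \mathbb{C}[x] $,
while the pairing induced by $ \tau $, namely
$ (x^{i}, x^{j}) \mapsto \delta_{i,j} $, has infinite rank.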
But we do have the following result.
\begin{theorem}
\label{dual-of-toeplitz-operator}
If $T_{g} \in \mathcal{L} (\mathcal{P}) $
is a Toeplitz operator with symbol $ g $
in the algebra $ \mathcal{A} $ and
the left action
$ \alpha : \mathcal{P} \otimes \mathcal{A}
\to \mathcal{A} $ (used to define the
Toeplitz operator) satisfies
$ \mathrm{Ran} \, \alpha^{\prime} \subset
\mathcal{P}^{\prime} \otimes \mathcal{A}^{\prime} $,
then
$$
(T_{g})^{\prime} = C_{\mathrm{ev}_{g}} \in
\mathcal{L} (\mathcal{P}^{\prime})
$$
is a generalized co-Toeplitz operator
with co-symbol
$ \mathrm{ev}_{g} \in \mathcal{A}^{\prime\prime} $.
(We will define $ \mathrm{ev}_{g} $
in the course of the proof.)
\end{theorem}
\begin{proof}
We take $ g \in \mathcal{A} $,
$ \phi \in \mathcal{P} $,
$ \lambda \in \mathcal{P}^{\prime} $
and $ \omega \in \mathcal{A}^{\prime}$.
Then we calculate
\begin{align*}
\big(
(\cdot \otimes g)^{\prime} (\lambda \otimes \omega)
\big)
(\phi) &=
(\lambda \otimes \omega)
\big(
(\cdot \otimes g) (\phi)
\big)
=
(\lambda \otimes \omega) (\phi \otimes g)
\\
&=
\lambda (\phi) \omega (g)
=
\big( \omega (g) \lambda \big) (\phi),
\end{align*}
which implies
$ (\cdot \otimes g)^{\prime} (\lambda \otimes \omega)
= \omega (g) \lambda
= (id \otimes \mathrm{ev}_{g})
(\lambda \otimes \omega) $, where
$ \mathrm{ev}_{g} (\omega) := \omega (g) $
defines the evaluation functional
$ \mathrm{ev}_{g} $ at $ g $.
Let's note that
$\mathrm{ev}_{g} \in \mathcal{A}^{\prime\prime}$
does hold.
Therefore we have arrived at
$$
(\cdot \otimes g)^{\prime} =
id \otimes \mathrm{ev}_{g}.
$$
So \eqref{yet-another-dual} becomes
$$
\mathcal{P}^{\prime}
\stackrel{ id \otimes \mathrm{ev}_{g} }{\longleftarrow} \mathcal{P}^{\prime} \otimes \mathcal{A}^{\prime}
\stackrel{\alpha^{\prime}}{\longleftarrow}
\mathcal{A}^{\prime}
\stackrel{P^{\prime}}{\longleftarrow} \mathcal{P}^{\prime},
$$
where we also used the hypothesis on the
range of $ \alpha^{\prime} $.
And so we have shown
that $ (T_{g})^{\prime} $ is the generalized
co-Toeplitz operator $ C_{\mathrm{ev}_{g}} $.
\end{proof}
These two theorems show an asymmetry in this
duality, namely, the dual of a co-Toeplitz operator
is always a Toeplitz operator while for a
Toeplitz operator we used an extra technical
hypothesis in order to show that its dual is a
co-Toeplitz operator.
Of course, this opens the door to the possibility
of altering the definition of Toeplitz operator
(and maybe of co-Toeplitz operator as well)
in the infinite dimensional case
in order to obtain a more precise duality.
We are now in a position to evaluate the
double duals of Toeplitz and co-Toeplitz operators.
It is an elementary fact that the double dual
always exists.
What we want to do is describe it explicitly.
Here is some well known material that we are
going to use in order to study double duals.
\begin{definition}
Suppose that V is a vector space and that $v \in V$.
Then we define
$\mathrm{ev}^{V}_{v} \in V^{\prime\prime}$,
the {\rm evaluation at $ v $}, by
$$
\mathrm{ev}^{V}_{v} (f) := f(v)
$$
for all $ f \in V^{\prime} $.
We also define the {\rm evaluation map}
$$
\mathrm{ev} \equiv
\mathrm{ev}^V : V \to V^{\prime\prime}
$$
by
$\mathrm{ev}^V (v) := \mathrm{ev}^V_{v}$
for all $ v \in V $.
We sometimes write $\mathrm{ev}$ instead of
$\mathrm{ev}^V$ when the context indicates
what the vector space $ V $ is.
\end{definition}
We state the next elementary result without proof.
\begin{prop}
\label{without-proof}
The map $ \mathrm{ev}^V $ is linear and
injective.
For any linear map $ T: V \to W $ between
vector spaces $V$ and $W$ we have that this
diagram commutes:
\begin{equation*}
\begin{array}{rcl}
V & \stackrel{\mathrm{ev}^V}{\longrightarrow} & V^{\prime\prime}
\\
T~ \big\downarrow & &
\big\downarrow ~ T^{\prime\prime}
\\
W & \stackrel{\mathrm{ev}^W}{\longrightarrow} & W^{\prime\prime}
\end{array}
\end{equation*}
Using $ \mathrm{ev}^V $ to identify $ V $ as
a subspace of $ V^{\prime\prime} $
(and similarly for $ W $), we can read this diagram
as saying that the restriction of
$T^{\prime\prime}$ to the subspace $ V $ is $ T $,
that is,
$ T^{\prime\prime} \!\! \upharpoonright_{V} = T $.
Equivalently, $ T^{\prime\prime} $ can be
viewed as an extension of $ T $.
\end{prop}
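We also recall, although it will not be needed below, that
$ \mathrm{ev}^{V} $ is surjective, and hence an isomorphism,
precisely when $ V $ is finite dimensional.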
We now proceed to the theorem about double duals.
\begin{theorem}
There are three cases of a double dual.
\begin{itemize}
\item
Let $ g \in \mathcal{C} $ be a symbol and
let $ C_{g} \in \mathcal{L}(\mathcal{P})$
be its associated co-Toeplitz operator.
If the map $ \beta $ used in defining $ C_{g} $
satisfies
$ \mathrm{Ran} \, \beta^{\prime\prime} \subset
\mathcal{P}^{\prime\prime} \otimes
\mathcal{C}^{\prime\prime}$,
then
$ (C_{g})^{\prime\prime} = C_{\mathrm{ev}_{e_{_{g}}}}
\in \mathcal{L}(\mathcal{P}^{\prime\prime})$.
\item
Let $ \mu \in \mathcal{C}^{\prime} $ be a co-symbol
and $ C_{\mu} \in \mathcal{L}(\mathcal{P})$
be its associated generalized co-Toeplitz operator.
If $ \beta $ satisfies the condition in the previous
part of this theorem, then
$ (C_{\mu})^{\prime\prime} =
C_{\mathrm{ev}_{\mu}}
\in \mathcal{L}(\mathcal{P}^{\prime\prime})$.
\item
Let $ g \in \mathcal{A} $ be a symbol and
$ T_{g} \in \mathcal{L}(\mathcal{P})$
be its associated Toeplitz operator.
Suppose that the left action $ \alpha $
used in the
definition of $ T_{g} $ satisfies the technical
condition in Theorem~\ref{dual-of-toeplitz-operator}.
Then
$ (T_{g})^{\prime\prime} =
T_{\mathrm{ev}_{g}}
\in \mathcal{L}(\mathcal{P}^{\prime\prime})$.
\end{itemize}
\end{theorem}
\noindent
\textbf{Remark:}
By Proposition~\ref{without-proof}
in each of these three cases
the double dual of the initially
given operator is necessarily
an extension of that operator.
The question is whether the double dual
of a Toeplitz (resp., co-Toeplitz) operator
is again a Toeplitz (resp., co-Toeplitz)
operator and, if so, what is the formula
for the double dual.
This theorem answers that question provided
a specific technical condition holds.
\begin{proof}
By Theorem~\ref{dual-of-co-toepltz-operator}
we have $ (C_{g})^{\prime} = T_{e_{g}} $
for $ g \in \mathcal{C} $.
Taking the dual of this using
Theorem~\ref{dual-of-toeplitz-operator} gives
$$ (C_{g})^{\prime\prime} = (T_{e_{g}})^{\prime}
= C_{\mathrm{ev}_{(e_g)}}
$$
using the hypothesis on $ \beta $.
This shows the first part of the theorem.
For the second part we have from
Theorem~\ref{dual-of-co-toepltz-operator}
that $ (C_{\mu})^{\prime} = T_{\mu} $
for a co-symbol $ \mu$ in the algebra
$\mathcal{C}^{\prime} $.
Then by Theorem~\ref{dual-of-toeplitz-operator}
we obtain
$$
(C_{\mu})^{\prime\prime} = (T_{\mu})^{\prime}
= C_{\mathrm{ev}_{\mu}}
$$
where we again use the same hypothesis on $ \beta $.
For the last part
from Theorem~\ref{dual-of-toeplitz-operator}
we have
$ (T_{g})^{\prime} = C_{\mathrm{ev}_{g}} $,
using the hypothesis on $ \alpha $.
Then applying
Theorem~\ref{dual-of-co-toepltz-operator}
we immediately get
$$
(T_{g})^{\prime\prime} =
(C_{\mathrm{ev}_{g}})^{\prime} = T_{\mathrm{ev}_{g}}.
$$
This concludes the proof.
\end{proof}
\noindent
A consequence of this section is that the dual
of a co-Toeplitz operator is a Toeplitz operator
and has a relatively simple formula.
However, the corresponding result for the dual of a
Toeplitz operator required an extra hypothesis.
So this is an asymmetry in this duality.
Another question is whether every Toeplitz
(resp., co-Toeplitz) operator is the dual of a
co-Toeplitz (resp., Toeplitz) operator.
This question remains as an open problem.
\section{Adjoints}
\label{adjoint-section}
We next examine the relation between the
operator adjoint
$(C_g)^*$ of a co-Toeplitz operator $C_g$
with symbol $ g \in \mathcal{C} $
and the co-Toeplitz operator $C_{g^*}$.
Since $ C_{g} : \mathcal{P} \to \mathcal{P} $
and the vector space $ \mathcal{P} $
does not in general have
a $ * $-operation on it, there should be no confusion
with the adjoint notation $(C_g)^*$ and the
previously defined $ * $-operation of an
operator that maps between vector spaces with
a $ * $-operation.
As one would expect, to get a result we need
to assume some sort of a relation between the
inner product on the pre-Hilbert space
$ \mathcal{P} $, used to define $(C_g)^*$,
and the $*$-operation in the symbol space,
used to define $C_{g^*}$.
In the Toeplitz case the relation needed
is easily seen to be
\begin{equation}
\label{Toeplitz-star-prod-reln}
\langle M_{g^{*}} \phi, \psi \rangle_{\mathcal{P}}
=
\langle \phi , M_{g} \psi \rangle_{\mathcal{P}}
\qquad \mathrm{or} \qquad
\langle \phi g^{*}, \psi \rangle_{\mathcal{A}}
=
\langle \phi , \psi g \rangle_{\mathcal{A}}
\end{equation}
for $\phi, \psi \in \mathcal{P}$ and
$ g \in \mathcal{A} $.
This translates directly into
$ T_{g^{*}} \subset (T_g)^* $, an inclusion
of densely defined operators
acting in $ \mathcal{H} $.
For more details, including examples,
see \cite{sbs5}.
For the co-Toeplitz case with symbol
$ g \in \mathcal{C} $, a co-algebra,
we do two straightforward calculations
using the formula $ C_g = \tilde{M}_{g} \, j $.
In the following we take $\phi, \psi \in \mathcal{P}$
and $g \in \mathcal{C}$.
First we have
$$
\langle \phi , C_g \, \psi \rangle_{\mathcal{P}}
=
\langle \phi , (\tilde{M}_{g} \, j) \, \psi
\rangle_{\mathcal{P}}
=
\langle \phi , \tilde{M}_{g} \, \psi
\rangle_{\mathcal{P}}.
$$
On the other hand we get
$$
\langle C_{g^{*}} \phi , \psi \rangle_{\mathcal{P}}
=
\langle (\tilde{M}_{g^{*}} \, j) \phi , \psi \rangle_{\mathcal{P}}
=
\langle \tilde{M}_{g^{*}} \phi , \psi \rangle_{\mathcal{P}}.
$$
So the condition we impose now and for
the rest of this paper is
\begin{equation}
\label{M-tilde-condition}
\langle \tilde{M}_{g^{*}} \phi , \psi \rangle_{\mathcal{P}}
=
\langle \phi , \tilde{M}_{g} \, \psi
\rangle_{\mathcal{P}}
\end{equation}
for all $ \phi, \psi \in \mathcal{P} $
and $ g \in \mathcal{C} $.
We have shown the next result.
\begin{theorem}
Assume \eqref{M-tilde-condition} holds.
Then we have this
inclusion of operators acting in $ \mathcal{H} $:
\begin{equation}
\label{C-g-star-both-ways}
C_{g^{*}} \subset (C_g)^*.
\end{equation}
In particular, the adjoint of $ C_{g} $
restricted to $ \mathcal{P} $ is
exactly $ C_{g^{*}} $.
\end{theorem}
So far the argument closely follows the
Toeplitz case.
Replacing $ g $ with $ g^{*} $
in \eqref{C-g-star-both-ways} we obtain
$C_{g} \subset (C_{g^{*}})^*$, which implies
by functional analysis that $ C_{g} $
is a closable operator.
Also, for $g$ real, that is $ g^{*} = g $,
we see directly from \eqref{C-g-star-both-ways} that
$ C_g $ is a symmetric operator, in which case
it then becomes relevant to analyze its self-adjoint
extensions, if such extensions exist.
In particular, it would be interesting to know
if $ C_{g} $ is essentially self-adjoint.
The condition \eqref{M-tilde-condition}
can be expanded out in various special cases.
We use the special case for $ \beta $
given in \eqref{special-beta} and the
definition of $ \pi_g $ in \eqref{define-pi-g}.
In the following calculations we take
$ \phi, \psi \in \mathcal{P} $
and $ g \in \mathcal{C} $.
So, on the one hand we have
\begin{align}
\label{M-tilde-g-one-hand}
\langle \phi, \tilde{M}_g \, \psi \rangle_{\mathcal{P}}
&=
\langle \phi, \pi_g \beta \,\psi \rangle_{\mathcal{P}}
\\
&=
\langle \phi, \pi_g (Q \otimes id) \Delta_{\mathcal{C}} \, \psi \rangle_{\mathcal{P}} \nonumber
\\
&=
\langle \phi, \pi_g (Q \psi^{(1)} \otimes \psi^{(2)}) \rangle_{\mathcal{P}} \nonumber
\\
&=
\langle \phi, \langle g , \psi^{(2)} \rangle_{\mathcal{C}} \, Q \psi^{(1)}
\rangle_{\mathcal{P}} \nonumber
\\
&=
\langle g , \psi^{(2)} \rangle_{\mathcal{C}} \,
\langle \phi, Q \psi^{(1)}
\rangle_{\mathcal{P}}. \nonumber
\end{align}
On the other hand, using this result \eqref{M-tilde-g-one-hand},
we see that
\begin{align*}
\langle \tilde{M}_{g^{*}} \, \phi, \psi \rangle_{\mathcal{P}}
&=
\langle \psi, \tilde{M}_{g^{*}} \, \phi \rangle_{\mathcal{P}}^{*}
\\
&=
\big( \langle g^{*},\phi^{(2)} \rangle_{\mathcal{C}} \,
\langle \psi, Q \phi^{(1)}
\rangle_{\mathcal{P}}
\big)^{*}
\\
&=
\langle \phi^{(2)} , g^{*} \rangle_{\mathcal{C}} \,
\langle Q \phi^{(1)} , \psi
\rangle_{\mathcal{P}}.
\end{align*}
So we have obtained the following result.
\begin{theorem}
With the above choices
for $ \beta $ and $ \pi_g $
we get that the symmetry condition
\eqref{M-tilde-condition}
is equivalent to
$$
\langle g , \psi^{(2)} \rangle_{\mathcal{C}} \,
\langle \phi, Q \psi^{(1)}
\rangle_{\mathcal{P}}
=
\langle \phi^{(2)} , g^{*} \rangle_{\mathcal{C}} \,
\langle Q \phi^{(1)} , \psi
\rangle_{\mathcal{P}}
$$
for all $ \phi, \psi \in \mathcal{P} $
and $ g \in \mathcal{C} $.
\end{theorem}
The condition
in this theorem does not seem
to be the dual of the condition
\eqref{Toeplitz-star-prod-reln}
in the
Toeplitz setting, although it actually is.
\section{Creation and Annihilation Operators}
\label{ann-creation-section}
We now come back to one of the most important
aspects of this theory.
First, we give the basic definition.
\begin{definition}
Let $ g \in \mathcal{P}^{*} $
(or, equivalently, $ g^{*} \in \mathcal{P} $)
be given.
Then we define
$$
A^{\dagger} (g) := C_g \in \mathcal{L}(\mathcal{P}),
$$
the {\rm creation operator (associated to
the anti-holomorphic symbol $ g $)}.
Let $ g \in \mathcal{P} $ be given.
Then we define
$$
A ( g ) := C_{g} \in \mathcal{L}(\mathcal{P}),
$$
the {\rm annihilation operator (associated to
the holomorphic symbol $ g $)}.
\end{definition}
\noindent
\textbf{Remark:} One way to extend this
definition to include the
generalized co-Toeplitz operators
is to extend to the co-symbols the
definitions of holomorphic and anti-holomorphic
elements.
We leave this topic for future consideration.
We also bring to the reader's attention
that in the Toeplitz setting
the holomorphic (resp., anti-holomorphic) symbols
give the creation (resp., annihilation) operators.
These relations are inverted in the co-Toeplitz
setting.
The motivation for this reversal comes from the
example in Section~\ref{example-section}.
\vskip 0.2cm
These definitions are originally
motivated by the definitions in
Segal-Bargmann analysis and its generalizations.
See
Bargmann's paper~\cite{bargmann} where creation and
annihilation operators were realized for the
first time as adjoints of each other, which is
basically the case here
when \eqref{M-tilde-condition} holds.
In this formulation the annihilation operators
could have been defined without a $ * $-structure,
while the creation operators use explicitly
the $ * $-structure.
This is just a consequence of using
$ \mathcal{P} $ as the pre-Hilbert space.
If the sesquilinear form is $ * $-symmetric
(see \eqref{star-symmetry}), then
$ \mathcal{P}^{*} $ is a pre-Hilbert space
with inner product given by restricting
the sesquilinear form
$ \langle \cdot , \cdot \rangle_{\mathcal{C}} $
to $ \mathcal{P}^{*} $.
This is so, since
for all $ f, g \in \mathcal{P}^{*} $
the identity \eqref{star-symmetry} implies
\begin{equation}
\label{p-star-inner-prod}
\langle f , g \rangle_{\mathcal{P}^{*}} =
\langle f , g \rangle_{\mathcal{C}} =
\langle f^{*} , g^{*} \rangle_{\mathcal{C}}^{*} =
\langle g^{*} , f^{*} \rangle_{\mathcal{P}},
\end{equation}
which shows that we do get a positive definite
inner product on $\mathcal{P}^{*}$.
Then the completion of the pre-Hilbert space
$\mathcal{P}^{*}$ is denoted as $\mathcal{H}^{*}$.
We can think of these as the space of anti-holomorphic
polynomials $ \mathcal{P}^{*} $
and the anti-holomorphic
Segal-Bargmann space $ \mathcal{H}^{*} $.
The identity \eqref{p-star-inner-prod}
can be re-written as
$$
\langle f , g \rangle_{\mathcal{P}} =
\langle g^{*} , f^{*} \rangle_{\mathcal{P}^{*}}
$$
which says that the anti-linear bijective map
$ V : \mathcal{P} \to \mathcal{P}^{*} $
given by $ V f := f^{*} $ is anti-unitary.
Also, $ V^{-1} = V $.
Therefore, we next define the
co-Toeplitz operator
$\tilde{C}_{g} \in \mathcal{L}(\mathcal{P}^{*}) $
for $ g \in \mathcal{C} $ by
$ \tilde{C}_{g} := V C_{g} V^{-1} $.
This gives us essentially the same set-up as we had above, except now
with the co-Toeplitz operators acting in a dense subspace
of an anti-holomorphic Hilbert space.
In this new set-up
an annihilation operator is defined as
$ \tilde{C}_{g} $ for $ g \in \mathcal{P}^{*} $,
that is, the conjugation of
a creation operator acting in the
holomorphic Hilbert space $ \mathcal{H} $.
Similarly, we define a creation operator
acting in the anti-holomorphic Hilbert space
as $\tilde{C}_{g}$ for $g \in \mathcal{P}$,
the conjugation by $ V $ of an annihilation operator
acting in the holomorphic Hilbert space.
Some related structures are defined next.
\begin{definition}
The unital subalgebra of
$ \mathcal{L} (\mathcal{P}) $
generated by all of the creation and annihilation
operators is called the {\rm
canonical commutation relations (CCR) algebra}
and is denoted as $ \mathcal{CCR} $.
The unital subalgebra of
$ \mathcal{L} (\mathcal{P}) $
generated by all of the co-Toeplitz
operators with symbols in $ \mathcal{C} $ is called the
{\rm co-Toeplitz algebra}.
Finally, the unital subalgebra of
$ \mathcal{L} (\mathcal{P}) $
generated by all of the generalized co-Toeplitz
operators with co-symbols in
$ \mathcal{C}^{\prime} $ is called the
{\rm generalized co-Toeplitz algebra}.
\end{definition}
Creation and annihilation operators have a multitude
of applications in physics.
The CCR algebra also arises in many parts of physics.
However, the newly introduced co-Toeplitz algebra
and the generalized co-Toeplitz algebra are objects
that are of more interest in the area of
operator theory in mathematics.
While all of these algebras have their
importance, it seems that very little can be
said about them in general.
However, they all can be studied in specific
examples of this theory.
\section{Canonical Commutation Relations}
\label{ccr-section}
The algebra $ \mathcal{CCR} $ defined here
can be studied
in much the same way as the canonical commutation
algebra is studied in \cite{sbs5} in the Toeplitz setting.
The upshot is that Planck's constant $ \hbar $
will be introduced into the theory and semi-classical
algebras as well as a dequantized (or classical)
algebra will be defined.
To make this paper more self-contained we review how
the relevant material of \cite{sbs5} applies in
the co-Toeplitz setting.
Note that we have already defined the algebra
$\mathcal{CCR}$.
It still remains to define the canonical
commutation relations themselves.
In physics one usually
defines the algebra of
canonical commutation relations by explicitly
using generators and their relations,
where these relations
are, by their very definition, the canonical commutation relations.
In this setting we do the opposite by
starting with
$\mathcal{CCR}$, then writing it
as the quotient of a free
algebra $\mathcal{F}$ and next identifying the kernel of the quotient map $p : \mathcal{F} \to \mathcal{CCR}$
as the ideal of canonical commutation relations.
Finally, any minimal set of generators of
this ideal serves as canonical commutation
relations associated to $\mathcal{CCR}$.
To achieve this we define $ \mathcal{F} $
to be the free unital algebra generated
by the abstract set
$ F = \{ G_{f} ~|~ f
\in\mathcal{P} \cup \mathcal{P}^{*}
\subset \mathcal{C} \}$
in bijective correspondence with the set
$ \mathcal{P} \cup \mathcal{P}^{*} $.
The unital algebra morphism
$p : \mathcal{F} \to \mathcal{CCR}$ is then defined
on the algebra generators $ G_{f} $
of $ \mathcal{F} $ by $p(G_{f}) := C_{f}$ for all
$ f \in \mathcal{P} \cup \mathcal{P}^{*} $.
By the universal property of the free algebra
$ \mathcal{F} $ this uniquely defines the unital
algebra morphism $ p $.
And since by definition the elements $ C_{f} $ for
$ f \in \mathcal{P} \cup \mathcal{P}^{*} $
generate $ \mathcal{CCR} $ as a unital algebra, we
see that $ p $ is surjective.
\begin{definition}
We define the
{\rm ideal of the canonical commutation relations (CCR)}
of the co-Toeplitz quantization $ C $
to be $ \mathcal{R} := \ker \, p $.
A {\rm set of
canonical commutation relations (CCR)}
of the co-Toeplitz quantization $ C $ is defined
to be any minimal subset of
ideal generators of the two-sided ideal
$ \mathcal{R} $.
\end{definition}
Notice that not only is {\em a} set of
canonical commutation relations not unique in general,
but even its cardinality need not be uniquely
determined by the given co-Toeplitz quantization.
The free algebra $ \mathcal{F} $ has a natural
grading
$ \deg (G_{f_{1} } \cdots G_{f_{n} } ) := n $
for integer $ n \ge 1 $ and
$ f_{1}, \dots , f_{n} \in \mathcal{P} \cup \mathcal{P}^{*} $.
We also put $ \deg(1) := 0$,
where $ 1 \in \mathcal{F} $ is the
identity element.
This leads to an important definition.
\begin{definition}
A homogeneous
element with respect to this grading
in $ \mathcal{R} $ is called a
{\rm classical relation}
while a non-homogeneous element
in $ \mathcal{R} $ is called a
{\em quantum relation}.
\end{definition}
The motivation for the previous definition is
given in \cite{sbs5}.
While this definition applies to any element
in $ \mathcal{R} $, its main intent is to
divide the elements in a set of CCR into
two disjoint subsets.
It turns out that a
logically possible, though physically
anomalous, situation happens when
$ \mathcal{R} = \ker \, p = 0$, in which case
$ p $ is an algebra isomorphism and the (unique!)
set of CCR's is empty.
In this strange case the quantization
is {\em over-quantized}
in the sense that there are no pairs
$f_{1} \ne f_{2} \in \mathcal{P} \cup \mathcal{P}^{*}$
with the
{\em classical (or trivial) commutation relation}
$ C_{f_{1}} C_{f_{2}} - C_{f_{2}} C_{f_{1}} = 0 $,
and then, as we will see momentarily, we cannot
introduce Planck's constant $\hbar$ into the theory.
Also, despite Dirac's insistence on the importance
of non-commuting observables, some
non-trivial and useful
classical commutation relations are
always present in quantum theory.
The next definition is also motivated in the
discussion in \cite{sbs5}.
\begin{definition}
Let
$ R \in \mathcal{R} $ be a non-zero relation.
Then we write $R$
uniquely as
\begin{equation}
\label{R-expansion}
R = R_{0} + R_{1} + \cdots + R_{n},
\end{equation}
where $ R_{i} $ is homogeneous with
$ \deg R_i = i $ (for all $ i = 0, 1, \dots , n $
which satisfy $ R_{i} \ne 0$)
and $ R_{n} \ne 0 $.
Then we say that $ R_{n} $ is the
{\em classical relation associated to $R$}.
\end{definition}
Note that $ R_{n} $ is indeed a
non-zero classical relation.
Based on what is true in the Toeplitz setting
as is presented in \cite{sbs5},
I conjecture that both of the cases
$ R_{n} \in \mathcal{R} $ and
$ R_{n} \notin \mathcal{R} $ can occur.
The intuition here is that the terms
$ R_{0}, R_{1}, \dots , R_{n-1} $
are `quantum corrections' to the classical
relation $ R_{n} $.
To see what that means let us define
the $\hbar$-deformation of a non-zero relation
$ R \in \mathcal{R} $ to be
\begin{equation}
\label{define-R-h}
R(\hbar) := \hbar^{n/2} R_{0} +
\hbar^{(n-1)/2} R_{1} +
\cdots +
\hbar^{1/2} R_{n-1} +
R_{n},
\end{equation}
where $ \hbar^{1/2} \in \mathbb{C} $ is arbitrary,
$ \hbar = (\hbar^{1/2})^{2} $ and
$ R $ is written as in \eqref{R-expansion}.
Notice that $ R(0) = R_{n} $.
This says that the classical
case $\hbar = 0$ gives us the classical relation
associated to $ R $.
In physics we take $ \hbar^{1/2} > 0 $, but for now
there is no need to impose that restriction.
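To make \eqref{define-R-h} concrete, consider a purely
hypothetical relation of Heisenberg type (whether such a
relation actually occurs depends of course on the particular
co-Toeplitz quantization), say
$ R = G_{f_{1}} G_{f_{2}} - G_{f_{2}} G_{f_{1}} - \lambda \, 1
\in \mathcal{R} $
for some $ \lambda \in \mathbb{C} $ and
$ f_{1}, f_{2} \in \mathcal{P} \cup \mathcal{P}^{*} $.
Here $ n = 2 $,
$ R_{2} = G_{f_{1}} G_{f_{2}} - G_{f_{2}} G_{f_{1}} $,
$ R_{1} = 0 $ and $ R_{0} = - \lambda \, 1 $, so that
$$
R(\hbar) = G_{f_{1}} G_{f_{2}} - G_{f_{2}} G_{f_{1}}
- \hbar \lambda \, 1,
$$
whose classical limit is $ R(0) = R_{2} $,
the classical relation associated to $ R $.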
We use these definitions to define some more
two-sided ideals in $ \mathcal{F} $ and their
associated quotient algebras.
\begin{definition}
Let $ \mathcal{R}_{cl} $ denote the two-sided
ideal in $ \mathcal{F} $ generated by all
the classical relations with degree $ \ge 1 $.
The {\rm dequantized (or classical) algebra}
of the co-Toeplitz quantization is defined as:
$$
\mathcal{A}_{cl} =
\mathcal{DQ}:= \mathcal{F} / \mathcal{R}_{cl}.
$$
Let $\mathcal{R}_{\hbar}$ denote the two-sided
ideal in $ \mathcal{F} $ generated by all
the relations $ R(\hbar) $ as defined in
\eqref{define-R-h} with
$ 0 \ne R \in \mathcal{R} $ and
$\deg R \ge 1 $.
Then the {\rm $\hbar$-deformed CCR algebra}
associated
with the co-Toeplitz quantization is defined
as:
$$
\mathcal{CCR}_{\hbar} :=\mathcal{F} / \mathcal{R}_{\hbar}.
$$
\end{definition}
By the above remarks we see that
$ \mathcal{DQ} = \mathcal{CCR}_{0}$.
Also, we have $ \mathcal{CCR} = \mathcal{CCR}_{1} $.
There seems to be no reason why
the dequantized (or classical) algebra
$ \mathcal{DQ} $ should be commutative,
and so I conjecture that there are examples
where it is not.
The algebras $ \mathcal{CCR}_{\hbar} $
may have limiting properties as $ \hbar > 0 $
tends to zero.
These would be the {\em semi-classical} properties
of the co-Toeplitz quantization.
And properties of the algebra $ \mathcal{DQ} $
would be the {\em classical} properties
of the co-Toeplitz quantization.
In short, this gives us a framework for
analyzing semi-classical as well as classical
aspects of this theory.
However, it seems difficult to delve into
all this in greater detail
at the present abstract level,
though these considerations can be brought to bear
on specific examples.
The reader can consult \cite{sbs5}
for more details, including motivation, for the
topics of this section.
Let me emphasize that the approach here is the
opposite of the usual approach in mathematical physics,
where one takes certain interesting
commutation relations to be the given CCR's, and then
representations of
those same commutation relations are realized by
operators acting in some Hilbert space,
often a Fock space of some sort.
This more usual approach is found in the recent paper
\cite{bozejko} and many of the
papers in its list of references.
Here, on the other hand, we start with a
Hilbert space and then define
the creation and annihilation operators acting in it.
Only after this do we finally arrive
at a definition of the CCR's.
\section{An example: $SU_q(2)$}
\label{example-section}
This general theory of co-Toeplitz quantization should
be fleshed out with specific examples.
We now proceed with such an example.
We let $\mathcal{C} = SU_q (2)$ for
$0 \ne q \in \mathbb{R}$.
To avoid technicalities we assume as well that
$ q \ne -1$.
Then $ SU_q (2) $ is a Hopf $*$-algebra,
and so in particular it is a $*$-co-algebra.
We first review some of the well-known
facts concerning the quantum group $SU_q (2)$.
For these and many more details see \cite{KS}.
$SU_q (2)$ can be defined as the universal $*$-algebra
with the identity element $1$ generated by
elements $a$ and $c$ satisfying these relations:
\begin{align}
\label{SU-q-2-relations}
&a c = q \, c a, \qquad a c^* = q \, c^* a, \qquad c c^* = c^* c,
\\
&a^* a + c^* c =1, \qquad a a^* + q^2 c^* c = 1.
\nonumber
\end{align}
The co-multiplication
$\Delta_\mathcal{C} : \mathcal{C} \to \mathcal{C} \otimes \mathcal{C}$
of this co-algebra is the unique $*$-algebra
morphism determined by
\begin{align*}
\Delta_\mathcal{C} (a) &= a \otimes a - q \, c^* \otimes c,
\\
\Delta_\mathcal{C} (c) &= c \otimes a + a^* \otimes c.
\end{align*}
The co-unit
$\varepsilon : \mathcal{C} \to \mathbb{C}$
is the unique $*$-algebra
morphism determined by
\begin{equation*}
\varepsilon(a) = 1 \qquad \mathrm{and} \qquad
\varepsilon(c) = 0.
\end{equation*}
Even though only the $ * $-co-algebra structure
of $SU_q (2)$ will be used, for completeness
we also note that the antipode,
denoted by $S$, is the unique unit preserving,
anti-multiplicative algebra
morphism (but \textit{not} $*$-morphism)
determined by
$$
S(a) = a^*, \qquad S(a^*) = a, \qquad S(c) = - q c, \qquad S(c^*) = - q^{-1} c^*.
$$
While $SU_q (2)$ is generated by just
two elements as a $*$-algebra,
it is an infinite dimensional vector space.
A Hamel basis of $SU_q (2)$ is given by
$\{ \varepsilon_{klm} ~|~ k \in \mathbb{Z}
\mathrm{~and~} l,m \in \mathbb{N} \}$,
where
\begin{align*}
\varepsilon_{klm} &= a^k \, c^l \, (c^*)^m \qquad
\qquad \mathrm{if~} k \ge 0,
\\
\varepsilon_{klm} &= (a^*)^{-k} \, c^l \, (c^*)^m
\qquad \mathrm{~if~} k < 0.
\end{align*}
We define a sesquilinear form on
$\mathcal{C} = SU_q (2)$ by requiring
\begin{equation}
\label{define-sequi-form}
\langle
\varepsilon_{klm}, \varepsilon_{rst}
\rangle_{\mathcal{C}}
= w(k, l-m) \, \delta_{k,r} \, \delta_{l-m, s-t}
\end{equation}
and then extending anti-linearly in the first entry
and linearly in the second entry.
Here
$w : \mathbb{Z} \times \mathbb{Z} \to (0, \infty)$ is
some strictly positive weight function, and
$\delta_{i,j}$ is the Kronecker delta function
for $ i,j \in \mathbb{Z} $.
See~\cite{sbs2} for motivation for how such
a formula is related with
the inner product defined in the holomorphic
Hilbert space in Bargmann's paper \cite{bargmann}.
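For example, the constant weight $ w \equiv 1 $ is an
admissible choice, in which case the monomials
$ \varepsilon_{kl0} = a^{k} c^{l} $ with $ k, l \ge 0 $
introduced below even form an orthonormal set;
none of the algebraic considerations in this section depend
on the particular choice of strictly positive weight.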
While \cite{bargmann} was the original motivation
for \eqref{define-sequi-form}, there is another
way of understanding this, which we now sketch.
See \cite{KS} for more details and
background.
It turns out that there is an algebraic
direct sum decomposition
\begin{equation}
\label{c-direct-sum}
\mathcal{C} =
\oplus_{m,n} A[m,n],
\end{equation}
where the sum is over
$ (m,n) \in \mathbb{Z} \times \mathbb{Z} $.
This is defined in terms of two
co-actions on $ \mathcal{C} $
of the \textit{diagonal} quantum group
$ \mathcal{K} = \mathbb{C}[t,t^{-1}] $, the algebra of
Laurent polynomials in the variable $ t $.
One realizes $ \mathcal{K} $
(which actually is a Hopf $ * $-algebra)
as a quantum subgroup
of $ \mathcal{C} $ via
the surjection $ \pi : \mathcal{C} \to \mathcal{K} $
which is defined to be the algebra morphism
determined by $ \pi (a) = t $, $ \pi(a^{*}) = t^{-1} $
and $ \pi(c) = \pi(c^{*}) = 0 $.
Then the left co-action $ L_{\mathcal{K}} $ of
$\mathcal{K}$ on $\mathcal{C} $
is defined as the composition
$$
\mathcal{C}
\stackrel{\Delta_{\mathcal{C}}}{\longrightarrow}
\mathcal{C} \otimes \mathcal{C}
\stackrel{\pi \otimes id}{\longrightarrow}
\mathcal{K} \otimes \mathcal{C}.
$$
Similarly, the right co-action $ R_{\mathcal{K}} $ of
$\mathcal{K}$ on $\mathcal{C} $
is defined as the composition
$$
\mathcal{C}
\stackrel{\Delta_{\mathcal{C}}}{\longrightarrow}
\mathcal{C} \otimes \mathcal{C}
\stackrel{id \otimes \pi}{\longrightarrow}
\mathcal{C} \otimes \mathcal{K}.
$$
Using these co-actions we define for $ m,n \in \mathbb{Z} $
$$
A[m,n] := \{ x \in \mathcal{C} ~|~
L_{\mathcal{K}} (x) = t^{m} \otimes x
\quad \mathrm{and} \quad
R_{\mathcal{K}} (x) = x \otimes t^{n}
\},
$$
the vector subspace of \textit{bi-homogeneous} elements with
respect to these co-actions.
For such a bi-homogeneous element $ x \in A[m,n] $
we write
$ \mathrm{bideg} (x) = (m,n)
\in \mathbb{Z} \times \mathbb{Z}$, a group.
One can show that this bi-grading is compatible
with the multiplication in $ \mathcal{C} $ in the
sense that
\begin{equation}
\label{compatible-product}
A[m,n] \, A[p,q] \subset A[m+p,n+q]
\end{equation}
for $ m,n,p,q \in \mathbb{Z} $, since
$ L_{\mathcal{K}} $ and $ R_{\mathcal{K}} $
are algebra morphisms.
This can alternatively be written as
$$
\mathrm{bideg} (xy) =
\mathrm{bideg} (x) + \mathrm{bideg} (y)
$$
for all bi-homogeneous elements $ x $ and $ y $.
We also have that $ a \in A[1,1 ]$
and $ c \in A[-1,1] $.
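For the reader's convenience, here is the short computation
behind the second of these statements:
$$
L_{\mathcal{K}} (c) = (\pi \otimes id) \, \Delta_{\mathcal{C}} (c)
= \pi(c) \otimes a + \pi(a^{*}) \otimes c = t^{-1} \otimes c,
$$
$$
R_{\mathcal{K}} (c) = (id \otimes \pi) \, \Delta_{\mathcal{C}} (c)
= c \otimes \pi(a) + a^{*} \otimes \pi(c) = c \otimes t,
$$
so that indeed $ c \in A[-1,1] $.
The computation for $ a $ is entirely similar.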
Moreover, $ x \in A[m,n] $ implies that
$ x^{*} \in A[-m,-n] $.
Another fact is that $ A[m,n] = 0 $
if and only if $ m - n $ is odd.
From \eqref{compatible-product} we can see that
$ A[0,0] $ is a sub-algebra of $ \mathcal{C} $
and then that each $ A[m,n] $
is an $ A[0,0] $-bimodule.
One has that $ A[0,0] = \mathbb{C}[\zeta] $,
the polynomial algebra in the variable
$ \zeta = q^{2} c c^{*} $.
(The coefficient $ q^{2} $ makes this notation
conform with that in \cite{KS}.)
Furthermore, each subspace $ A[m,n] $ with $ m - n $ even
is a free left (respectively, right)
$ \mathbb{C}[\zeta] $-module
on one generator denoted as $ e_{m,n} $
in the notation of \cite{KS}.
The basis elements $ \varepsilon_{klm} $ of $ \mathcal{C} $
turn out to be bi-homogeneous with
$ \mathrm{bideg} (\varepsilon_{klm}) = (k-l+m, k+l-m)$
for all $ k \in \mathbb{Z} $ and $ l,m \in \mathbb{N} $.
Since the weight function in \eqref{define-sequi-form} is
strictly positive we see that
$ \langle
\varepsilon_{klm}, \varepsilon_{rst}
\rangle_{\mathcal{C}}
\ne 0 $
if and only if
both $ k = r $ and $ l-m = s-t $.
But this last condition is equivalent to both
$ k-l+m = r-s+t $ and $ k+l-m = r+s-t $, which is
the same as
$ \mathrm{bideg} (\varepsilon_{klm}) = \mathrm{bideg} (\varepsilon_{rst})$.
This shows that \eqref{c-direct-sum} is an orthogonal
direct sum with respect to the sesquilinear form
\eqref{define-sequi-form}, even though this property
was not being considered when I defined
\eqref{define-sequi-form}.
However, this same analysis shows that
the Hamel basis $\{ \varepsilon_{klm} \}$ is not
an orthogonal basis, since for given indices
$ k, l, m $ we have
$
\langle
\varepsilon_{klm}, \varepsilon_{kst}
\rangle_{\mathcal{C}}
\ne 0
$
for all pairs $ s,t \in \mathbb{N} $
satisfying $ s - t = l - m $.
And there are infinitely many such pairs.
It is known that there are other natural sesquilinear
forms on $ \mathcal{C} $ for which
\eqref{c-direct-sum} is an orthogonal direct sum.
In fact, this is done using the (unique!)
\textit{Haar state}
of $ SU_q(2) $ and so is more closely related to
the structure of $ SU_q(2) $ as a quantum group.
Again, see \cite{KS} for more details.
We define $\mathcal{P}:= \mathrm{alg} \{a , c \}$,
the sub-algebra (but not sub-$*$-algebra)
of $SU_q (2)$ generated by $a$ and $c$.
This is a sub-algebra of `holomorphic' elements.
This is the same sub-algebra that was used in
\cite{sbs5} for a Toeplitz quantization of $SU_q (2)$.
We can identify $\mathcal{P}$ as the free algebra generated
by $a$ and $c$, modulo the relation $a c = q \, c a$, and so
(as an algebra) $\mathcal{P}$ is the
complex Manin quantum plane, which is denoted by
$ A_{q}^{2|0} $ in \cite{manin}.
A Hamel basis of $\mathcal{P}$
is given by the monomials
$a^k c^l = \varepsilon_{kl0}$ for $k,l \in \mathbb{N}$.
Since
$$
\langle
\varepsilon_{kl0} , \varepsilon_{rs0}
\rangle_{\mathcal{C}}
= w(k, l) \, \delta_{k,r} \, \delta_{l, s},
$$
we have that $\{ a^k c^l ~|~ k,l \in \mathbb{N} \}$
is an orthogonal basis of $\mathcal{P}$
and that the sesquilinear form
$\langle \cdot , \cdot \rangle_{\mathcal{C}}$
when restricted to
$\mathcal{P}$ is a positive definite inner product.
Clearly,
$$
\phi_{kl} := \dfrac{1}{ \sqrt{w(k,l)} } \, a^k \, c^l
= \dfrac{1}{ \sqrt{w(k,l)} } \, \varepsilon_{kl0}
\qquad \mathrm{for~} k,l \ge 0
$$
is an orthonormal basis of $\mathcal{P}$.
Thus $\mathcal{P}$ is a pre-Hilbert space
whose completion is denoted by $\mathcal{H}$.
With no loss of generality we can assume
that $\mathcal{P}$ is
a dense subspace of $\mathcal{H}$.
The injection
$j : \mathcal{P} \to \mathcal{C} $ is defined
to be the inclusion map.
The quotient map $Q: \mathcal{C} \to \mathcal{P} $
is defined as in \eqref{specific-P} by
$$
Q(f) := \sum_{i,j \ge 0}
\langle \phi_{ij}, f \rangle_{\mathcal{C}} \, \phi_{ij}
$$
for $f \in \mathcal{C}$.
The sum on the right side has only finitely
many non-zero terms.
It is now an easy exercise to prove that
$Q(a) = a$, $Q(c) = c$
and $Q(a^*) = Q(c^*) = 0$,
these being results needed to prove
some of the statements in the next paragraph.
We will discuss the action of $Q$ on the basis
elements $\varepsilon_{klm}$ a little later on.
According to the general theory of
Section~\ref{co-toeplitz-quantization-section},
the projection $Q$ should be a co-algebra
morphism, meaning a linear map intertwining
the two co-multiplications.
While $Q$ is clearly linear,
we have not specified a co-multiplication
$\Delta_\mathcal{P}$ on the Manin
quantum plane $\mathcal{P}$.
To do this we require that $\Delta_\mathcal{P}$
is the unique algebra morphism
$\mathcal{P} \to \mathcal{P} \otimes \mathcal{P}$
satisfying
\begin{equation*}
\Delta_\mathcal{P} (a) := a \otimes a
\quad \mathrm{and} \quad
\Delta_\mathcal{P} (c) := c \otimes a.
\end{equation*}
To see that this does make sense,
one first defines the algebra morphism
$\Delta_\mathcal{P}$ on the free algebra
generated by $a$ and $c$ by using the
previous formulas,
and then one shows that
$\Delta_\mathcal{P} (a c - q \, c a) = 0$.
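Indeed, here is the required one-line computation,
included for completeness:
$$
\Delta_\mathcal{P} (a c - q \, c a)
= (a \otimes a)(c \otimes a) - q \, (c \otimes a)(a \otimes a)
= (a c - q \, c a) \otimes a^{2} = 0
$$
in $ \mathcal{P} \otimes \mathcal{P} $.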
Hence $\Delta_\mathcal{P}$ passes to the
quotient algebra $\mathcal{P}$.
It is straightforward to show that
$\Delta_\mathcal{P}$ so defined is co-associative.
However, no linear map
$ l : \mathcal{P} \to \mathbb{C} $
can be the co-unit for this co-multiplication,
since
$$
(l \otimes id) \Delta_\mathcal{P} (c) =
(l \otimes id) (c \otimes a) = l(c) a \ne c.
$$
So, $ \mathcal{P} $ is a co-algebra without co-unit,
which is allowed in the general theory.
Finally, one can readily prove that
$Q: \mathcal{C} \to \mathcal{P} $
is a co-algebra morphism and that
$ \mathcal{P} $ is not a sub-co-algebra
of $ \mathcal{C} $.
We now calculate the action of $Q$
on the basis elements $\varepsilon_{klm}$ of
the co-algebra $\mathcal{C} = SU_q(2)$:
\begin{align*}
Q(\varepsilon_{klm}) &= \sum_{i,j \ge 0}
\langle
\phi_{ij}, \varepsilon_{klm}
\rangle_{\mathcal{C}} \, \phi_{ij}
= \sum_{i,j \ge 0} \dfrac{1}{w(i,j)}
\langle
\varepsilon_{ij0}, \varepsilon_{klm}
\rangle_{\mathcal{C}} \, \varepsilon_{ij0}
\\
&= \sum_{i,j \ge 0} \dfrac{1}{w(i,j)} w(i,j) \delta_{i,k} \delta_{j,l-m} \, \varepsilon_{ij0}
= \sum_{i,j \ge 0} \delta_{i,k} \delta_{j,l-m} \, \varepsilon_{ij0}
\\
&= \varepsilon_{k,l-m, 0}.
\end{align*}
We establish the convention from now on that
$ \varepsilon_{rst} =0 $ if either $ r < 0 $
or $ s < 0 $.
So the last result says $ Q(\varepsilon_{klm}) = 0 $
if $k < 0$ or $ l < m $.
Summarizing, we have shown the following:
\begin{prop}
\label{pro1}
The action of the projection $ Q $ on the
basis elements $ \varepsilon_{klm} $ is given by
\begin{align*}
Q(\varepsilon_{klm}) &=
\varepsilon_{k,l-m,0} \ne 0 \qquad \mathrm{if~}
k \ge 0, \, l \ge m,
\\
Q(\varepsilon_{klm}) &= 0
\hskip 2.65cm \mathrm{otherwise.}
\end{align*}
\end{prop}
In the case $k \ge 0$, one can interpret these
formulas for $ Q(\varepsilon_{klm})$
as saying that all the $c^*$'s
disappear and each one of them also
`kills off' exactly one of the $ c $'s.
The condition $l < m$ means that the monomial
$\varepsilon_{klm}$ has strictly more occurrences of
$c^*$'s than of $c$'s, in which case
all the $ c $'s get `killed off', as does everything else,
and the result is $ 0 $.
Finally, if $ k < 0 $, then there are occurrences of
$ a^{*} $ but none of $ a $,
and this in itself
suffices to give $ 0 $.
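To illustrate Proposition \ref{pro1} with concrete instances:
$$
Q(\varepsilon_{2,3,1}) = Q( a^{2} c^{3} c^{*} )
= \varepsilon_{2,2,0} = a^{2} c^{2},
\qquad
Q(\varepsilon_{1,1,2}) = 0,
\qquad
Q(\varepsilon_{-1,l,m}) = 0,
$$
the second vanishing since $ l = 1 < 2 = m $ and the third
(for any $ l, m \in \mathbb{N} $) since $ k = -1 < 0 $.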
This last fact has a handy generalization,
which we now present.
\begin{prop}
\label{handy}
Let $w$ be a finite word in the
alphabet with these four letters: $a, a^*, c , c^*$.
If $w$ has strictly more occurrences of
the letter $a^*$ than of the letter $ a $, then $Q(w) = 0$.
\end{prop}
\noindent
\textit{Remark:} The hypothesis implies that the
number of occurrences of $ a^{*} $ is strictly
larger than zero.
\begin{proof}
Using the defining relations \eqref{SU-q-2-relations}
we can push all occurrences of $c$ and $c^*$ to
the right, thereby getting $w = q^{n} \, w^\prime \, c^l \, (c^*)^m$, where
$l,m \in \mathbb{N}$, $n \in \mathbb{Z}$ and
$w^\prime$ is a word with only occurrences of $a, a^*$.
The number of occurrences of $a$ (resp., $a^*$) in $w^\prime$
is equal to the number of occurrences of $a$ (resp., $a^*$) in $w$.
Let $ j $ be the number of occurrences of $ a^{*} $.
We proceed by using induction on
$k$, the number of occurrences of $a$ in $w$.
First, we consider the case $k=0$.
Then we have $w = q^{n} \, \varepsilon_{-j,l,m}$,
where $j \ge k +1 = 1$ is
the number of occurrences of $a^{*}$ in $w$.
So, $Q(w) = 0$ by Proposition~\ref{pro1}.
For the induction step we assume that the
assertion $Q(w) = 0$ is true for some $k \ge 0$,
and then we will prove it for $k+1$.
So, let $w$ be a word with $k+1 \ge 1$
occurrences of $a$.
Then by hypothesis $ j > k + 1 $.
We again have $w = q^{n} \, w^\prime \, c^l \, (c^*)^m$ as above.
Since $w^\prime$ has a non-zero number
of occurrences of both $a$ and $a^*$,
we can write $w^\prime$ in at least one of these
two forms:
$$
w^\prime = u \, (a a^*) \, v \quad \mathrm{or} \quad
w^\prime = u \, (a^* a) \, v,
$$
where $u$ and $v$ are words (possibly empty)
with occurrences of $a$ and $a^*$ only.
In the first case we see for example that
\begin{align*}
&Q( w^\prime \, c^l \, (c^*)^m) = Q( u \, (a a^*) \, v \, c^l \, (c^*)^m)
= Q( u \, (1 - q^2 c c^*) \, v \, c^l \, (c^*)^m)
\\
&= Q( u \, v \, c^l \, (c^*)^m) - q^2 Q( u \, (c c^*) \, v \, c^l \, (c^*)^m)
\\
&= Q( u \, v \, c^l \, (c^*)^m) - q^r Q( u \, v \, c^{l+1} \, (c^*)^{m+1})
= 0 - 0 = 0.
\end{align*}
Here the exponent $r \in \mathbb{Z}$ arises from pushing
the factor $c c^*$ to the right through $v$.
The next to the last equality follows
from the induction hypothesis and the
fact that the word $u \, v$ has $k$ occurrences of $a$
and $ j - 1 > k \ge 0 $ occurrences of $ a^{*} $.
The proof for the second form of $w^\prime$ is quite similar and so is
left to the reader.
And that finishes the proof.
\end{proof}
This result can also be proved
by evaluating the bi-degree of a word with more
$ a^{*} $'s than $ a $'s and
showing that it is not equal to the bi-degree
of any $ \varepsilon_{rs0} $ with $ r,s \ge 0 $.
We have a result similar to Proposition \ref{handy}
for $ c $ and $ c^{*} $.
\begin{prop}
\label{handier}
Let $w$ be a finite word in the
alphabet with these four letters: $a, a^*, c , c^*$.
If $w$ has strictly more occurrences of
the letter $c^*$ than of the letter $ c$, then $Q(w) = 0$.
\end{prop}
\begin{proof}
Here is a proof using bi-degrees instead of
a similar induction argument, which could also be made.
Suppose that $ w $ has $ j, k, l, m $
occurrences of $ a, a^{*}, c, c^{*} $
respectively.
Then, independent of the order of
these occurrences, we have that
\begin{align*}
\mathrm{bideg} (w) &=
j (1,1) + k (-1,-1) + l (-1,1) + m (1,-1)
\\
&= ( j - k -l + m, j - k + l - m ),
\end{align*}
while
$ \mathrm{bideg} (a^r c^s) = (r - s, r + s)$.
The difference of the two entries in $ \mathrm{bideg} (w) $
is $ -2l + 2m > 0 $, since by hypothesis $ m > l $.
However, the corresponding difference for
$ \mathrm{bideg} (a^r c^s) $ is $ -2 s \le 0 $.
This implies that
$ \mathrm{bideg} (w) \ne \mathrm{bideg} (a^r c^s) $ and
therefore $ \langle a^r c^s , w \rangle_{\mathcal{C}} = 0 $
for all $ r, s \ge 0 $, which in turn implies that
$ Q(w) = 0 $.
\end{proof}
We have now on hand enough formulas to calculate the
action of the co-Toeplitz operators
$C_{\varepsilon_{klm}}$.
This is sufficient information, since
$C_g$ for
any symbol $g \in SU_q(2)$
can be written as a
finite linear combination with
complex coefficients of
the co-Toeplitz operators $C_{\varepsilon_{klm}}$.
Moreover, it suffices to calculate
$C_{\varepsilon_{klm}}$ acting on the
elements $\phi_{r,s}$ in the standard
orthonormal basis, where $r,s \in \mathbb{N}$.
We recall that the co-Toeplitz operator
with symbol $g$ was defined as
$C_g = \pi_g \, \beta \, j$.
Since $j$ is simply the inclusion map, we have
$$
C_{\varepsilon_{klm}} (\phi_{r,s}) = \pi_{\varepsilon_{klm}} \, \beta (\phi_{r,s}).
$$
We will take the co-action map
$\beta : \mathcal{C} \to
\mathcal{P} \otimes \mathcal{C}$
to be of the form~\eqref{beta-special-form}, namely
$$
\mathcal{C} \stackrel{\Delta_\mathcal{C}}{\longrightarrow} \mathcal{C} \otimes \mathcal{C}
\stackrel{Q \otimes id}{\longrightarrow} \mathcal{P} \otimes \mathcal{C},
$$
where $\Delta_\mathcal{C}$ is the
co-multiplication of $\mathcal{C}$.
Dropping the normalization constant
for the moment, we calculate with
the monomial $a^r c^s$ instead of with $\phi_{r,s}$.
We then see that
\begin{align*}
\beta (a^r c^s) &= (Q \otimes id ) \big( \Delta_\mathcal{C} (a^r c^s) \big)
= (Q \otimes id ) \big( \Delta_\mathcal{C} (a)^r \Delta_\mathcal{C} (c)^s \big)
\\
&= (Q \otimes id ) \Big( (a \otimes a - q \, c^* \otimes c)^r (c \otimes a + a^* \otimes c)^s \Big)
\end{align*}
We will use the standard binomial theorem on
the second factor, since
$ c \otimes a $ and $ a^{*} \otimes c $ commute,
as follows from \eqref{SU-q-2-relations}.
To continue with the first factor
we will use the $q$-binomial theorem
(see \cite{KS}), which states that
if variables $ v, w $ satisfy the
commutation relation $ v w = q w v $
for $ 0 \ne q \in \mathbb{C} $, then
for any integer $ n \ge 0 $ one has
$$
(v + w)^{n} = \sum_{m=0}^{n}
\Big[
\begin{array}{c}
n \\ m
\end{array}
\Big]_{q^{-1}} v^{m} w^{n-m},
$$
where the coefficient is an explicitly
given deformation of the standard
binomial coefficient.
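As a quick sanity check of this statement, take $ n = 2 $.
Then the direct expansion gives
$$
(v + w)^{2} = v^{2} + v w + w v + w^{2}
= v^{2} + ( 1 + q^{-1} ) \, v w + w^{2},
$$
since $ w v = q^{-1} v w $, and so the coefficient
$
\Big[
\begin{array}{c}
2 \\ 1
\end{array}
\Big]_{q^{-1}}
$
is forced to be $ 1 + q^{-1} $.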
This is applicable in this situation, since
$$
( -q c^* \otimes c) (a \otimes a)
= -q c^* a \otimes c a,
$$
and hence for $ v = a \otimes a $ and
$ w = -q c^* \otimes c $
by using the relations
\eqref{SU-q-2-relations} again
we obtain
\begin{align*}
v w &=
(a \otimes a)( -q c^* \otimes c)
= -q a c^* \otimes a c
\\
&= q^2 \big( -q c^* a \otimes c a \big)
\\
&= q^{2} w v.
\end{align*}
Next, to simplify somewhat
the rather cumbersome binomial-type notation,
we introduce
$
B_{n,q} := \Big[
\begin{array}{c}
r \\ n
\end{array}
\Big]_{q^{-2}},
$
which also suppresses the variable $ r $.
We also use
$
B_{p,1} := \Big(
\begin{array}{c}
s \\ p
\end{array}
\Big),
$
a standard binomial coefficient (which suppresses
the variable $ s $).
We will use this material in the next
and subsequent calculations.
The reader can consult \cite{KS} for more details
about this so-called {\em $ q $-calculus}.
Then for $r,s \in \mathbb{N}$ we have
\begin{align*}
&\beta (a^r c^s) =
(Q \otimes id ) \Big( (a \otimes a - q \, c^* \otimes c)^r (c \otimes a + a^{*} \otimes c)^s \Big)
\\
&= (Q \otimes id )
\sum_{n,p=0}^{r,s}
\!\! B_{n,q} (a \otimes a)^{r-n} \,
(-q)^n (c^* \otimes c)^n
B_{p,1}
(c \otimes a)^{s-p} (a^{*}\otimes c)^p
\\
&= (Q \otimes id )
\Big( \sum_{n, p=0}^{r,s}
(-q)^n B_{n,q} B_{p,1} \,
a^{r-n} (c^*)^n c^{s-p} a^{*p}
\otimes a^{r-n} c^n a^{s-p} c^p \Big)
\\
&=
\sum_{n, p=0}^{r,s}
(-q)^n B_{n,q} B_{p,1} \,
Q ( a^{r-n} (c^*)^n c^{s-p} a^{*p} )
\otimes a^{r-n} c^n a^{s-p} c^p
\\
&=
\sum_{n, p=0}^{r,s} \phi
\otimes a^{r-n} c^n a^{s-p} c^p.
\end{align*}
To simplify notation we have put
\begin{equation}
\label{define-phi-nprs}
\phi = \phi_{nprs} = (-q)^n B_{n,q} B_{p,1} \,
Q ( a^{r-n} (c^*)^n c^{s-p} a^{*p} ) \in \mathcal{P}.
\end{equation}
By Propositions \ref{handy} and \ref{handier} we see that
if $ p > r - n $ or $ n > s - p $, then $ \phi = 0 $.
In the contrary case the calculation
of $ \phi $ is a bit more complicated.
The contrary case occurs when $ p \le r -n $ and
$ n \le s -p $, that is,
$ n + p \le r $ and $ n+p \le s $.
This condition is then equivalent to
$ n + p \le \min (r,s) $, which we will
assume to hold throughout the following.
The summation indices $ n $ and $ p $ also satisfy
$ 0 \le n \le r $ and $ 0 \le p \le s $.
To do this calculation we will use the identity
\begin{equation}
\label{am-astarm-identity}
a^{m} (a^{*})^{m} = \sum_{i=0}^{m}
\Big[
\begin{array}{c}
m \\ i
\end{array}
\Big]_{q^{-2}}
(-1)^{i} q^{i + 2 i m -i^2} c^{i} (c^{*})^i,
\end{equation}
for integer $ m \ge 0 $.
(Cp. \cite{KS}, p.~100, Eq.~(13).
Or prove it yourself by induction on $ m $.)
Note that this identity is not surprising,
since $ \mathrm{bideg} (a^{m} (a^{*})^{m}) = (0,0)$
and $ A[0,0] $ is the polynomial algebra in the
variable $ c c^{*} $.
(Recall that $ c $ and $ c^{*} $ commute
so that $ c^{i} (c^{*})^i = ( c c^{*})^{i} $.)
What the identity \eqref{am-astarm-identity}
tells us more specifically
is that $ a^{m} (a^{*})^{m} $ is a polynomial of
degree $ m $ and what its coefficients are exactly.
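As a consistency check, the case $ m = 1 $ of
\eqref{am-astarm-identity} reads
$$
a \, a^{*} = 1 - q^{2} \, c c^{*},
$$
which is just the last relation in \eqref{SU-q-2-relations}
combined with $ c c^{*} = c^{*} c $.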
Then using this identity we have
\begin{align*}
&a^{r-n} (c^*)^n c^{s-p} a^{*p} =
q^{p(s-p) + p n} a^{r-n} a^{*p} (c^*)^n c^{s-p}
\\
&=
q^{p(s-p + n)} a^{r-n-p} a^{p} a^{*p}
(c^*)^n c^{s-p}
\\
&=
q^{p(s-p + n)} a^{r-n-p}
\sum_{i=0}^{p}
\Big[
\begin{array}{c}
p \\ i
\end{array}
\Big]_{q^{-2}}
(-1)^{i} q^{i + 2 i p -i^2} c^{i} (c^{*})^i
(c^*)^n c^{s-p}
\\
&=
\sum_{i=0}^{p}
\Big[
\begin{array}{c}
p \\ i
\end{array}
\Big]_{q^{-2}}
(-1)^{i} q^{A} a^{r-n-p}
c^{i + s -p} (c^{*})^{i + n}
\\
&=
\sum_{i=0}^{p}
\Big[
\begin{array}{c}
p \\ i
\end{array}
\Big]_{q^{-2}}
(-1)^{i} q^{A}
\varepsilon_{r-n-p, i + s -p, i + n},
\end{align*}
where $ A = p(s-p + n) + i + 2 i p - i^2 $.
Continuing, we see that
\begin{align*}
\phi_{nprs} &= (-q)^n B_{n,q} B_{p,1} \,
Q ( a^{r-n} (c^*)^n c^{s-p} a^{*p} )
\\
&= (-q)^n B_{n,q} B_{p,1} \, \left(
\sum_{i=0}^{p}
\Big[
\begin{array}{c}
p \\ i
\end{array}
\Big]_{q^{-2}}
(-1)^{i} q^{A}
Q ( \varepsilon_{r-n-p, i +s -p , i + n} ) \right)
\\
&= (-q)^n B_{n,q} B_{p,1} \, \left(
\sum_{i=0}^{p}
\Big[
\begin{array}{c}
p \\ i
\end{array}
\Big]_{q^{-2}}
(-1)^{i} q^{A} \right)
\varepsilon_{r-n-p, s -n -p , 0}
\\
&= D_{nprs} \, \varepsilon_{r - (n+p), s - (n+p) , 0},
\end{align*}
where the real number $ D_{nprs} $ has the
obvious definition.
Here we also used Proposition \ref{pro1}, which
has the fortuitous virtue that the resulting basis element
does not depend on $ i $, so that the sum on $ i $
factors out as a scalar coefficient.
Notice that this shows that
$ \phi $ is proportional to an element in the
basis $\{ \varepsilon_{kl0} ~|~ k,l \ge 0 \}$
of $ \mathcal{P} $.
The bi-degree
of the bi-homogeneous element
$ \phi $ is easily seen to be given by
\begin{equation}
\mathrm{bideg} (\phi) =
\mathrm{bideg} (\varepsilon_{r- (n+p) , s -(n+p) , 0})
= ( \, r - s , r + s - 2 (n+p) \, ).
\label{bideg-phi}
\end{equation}
Next, for $r,s \in \mathbb{N}$ we obtain
\begin{align}
&C_{\varepsilon_{klm}} (a^r c^s) = \pi_{\varepsilon_{klm}} \, \beta (a^r c^s) \nonumber
\\
&= \pi_{\varepsilon_{klm}} \sum_{n+p=0}^{\min(r,s)}
\phi \otimes a^{r-n} c^n a^{s-p} c^p \nonumber
\\
&= \sum_{n+p=0}^{\min(r,s)}
\langle \varepsilon_{klm} , a^{r-n} c^n a^{s-p} c^p \rangle_{\mathcal{C}} \, \phi \nonumber
\\
&= \sum_{n+p=0}^{\min(r,s)} q^{-n (s-p)}
\langle \varepsilon_{klm} , a^{r+s - (n+p)} c^{n+p} \rangle_{\mathcal{C}} \, \phi_{nprs}
\nonumber
\\
&= \sum_{n+p=0}^{\min(r,s)} q^{-n (s-p)}
\langle \varepsilon_{klm} , a^{r+s - (n+p)} c^{n+p} \rangle_{\mathcal{C}} D_{nprs}
\varepsilon_{r- (n+p) , s - (n+p) , 0}.
\label{last-expn}
\end{align}
Note that the condition $ 0 \le n+p \le \min (r,s) $
means according to \eqref{bideg-phi} that
\eqref{last-expn} is in general
a sum of bi-homogeneous
elements with different bi-degrees.
However, the coefficients of these summands
will be non-zero only if
the inner product in the expression \eqref{last-expn}
is non-zero, which is equivalent to
$$
\mathrm{bideg} (\varepsilon_{klm}) =
\mathrm{bideg} ( a^{r+s - (n+p)} c^{n+p} ),
$$
which itself is equivalent to
$$
(k - l + m, k + l - m ) =
( \, r + s - 2 (n+p) , r + s \, ).
$$
The indices $ k,l,m, r, s $ are given and the
`unknowns' are the summation indices $ n $
and $ p $.
The previous equality is equivalent to
\begin{equation}
\label{proportional-term}
n+p = r + s - k = l - m.
\end{equation}
If this holds for some pair $ n,p $
satisfying $ n + p \le \min(r,s)$,
$ 0 \le n \le r $ and $ 0 \le p \le s $,
then \eqref{last-expn} is a multiple of
$$
\varepsilon_{r- (n+p) , s - (n+p) , 0} =
\varepsilon_{r- (l - m) , s - (l - m) , 0};
$$
otherwise, \eqref{last-expn} is $ 0 $.
In order that there exists at least one solution
of \eqref{proportional-term}
for a pair $ n \ge 0, \, p \ge 0 $
it is necessary and sufficient that the five
indices $ k,l,m, r, s $ satisfy
\begin{equation}
\label{klmrs-condition}
r + s - k = l - m \quad \mathrm{and} \quad m \le l.
\end{equation}
And in that case the co-Toeplitz operator
$ C_{\varepsilon_{klm}} $ lowers the
degree of each variable $ a,c $ by $ l-m \ge 0 $.
Alternatively, we note that
$ C_{\varepsilon_{klm}} $ maps
$ a^r c^s = \varepsilon_{rs0} $ of bi-degree $ (r-s, r+s) $ to
$ \varepsilon_{r- (l - m) , s - (l - m) , 0} $, an
element of bi-degree $ (r-s, r + s - 2 (l-m ) )$.
In other words on this scale the co-Toeplitz operator
$ C_{\varepsilon_{klm}} $ can be understood as
an operator having bi-degree $ (0, - 2 (l-m )) $.
In physics terminology, these co-Toeplitz operators
are not {\em creation operators}, that is, they never
strictly increase the degrees of the powers of the
monomials.
Similarly, they never strictly increase the bi-degree
either.
We have shown the following.
\begin{theorem}
Suppose $ k \in \mathbb{Z} $ and $ l,m,r,s \in \mathbb{N} $ satisfy $ r + s - k = l - m $ and
$ 0 \le l - m \le \min (r,s) $.
Suppose that this set is non-empty:
$$
\{ (n,p) ~|~ n + p = l -m, \, 0 \le n \le r, \,
0 \le p \le s \}
$$
Then
$ C_{\varepsilon_{klm}} (a^r c^s) =
K a^{r-(l-m)} c^{s-(l-m)} $
for some real number $ K $.
In terms of basis elements
$ C_{\varepsilon_{klm}} (\phi_{rs} ) = K^{\prime}
\phi_{r-(l-m), s-(l-m)}$
for some real number $ K^{\prime} $.
And $ K^{\prime} \ne 0$ if and only if $ K \ne 0 $.
Otherwise, we have
$ C_{\varepsilon_{klm}} (a^r c^s) = 0 $.
\end{theorem}
Here are some special cases of this theorem.
First, we consider the case $ l = m $.
In this case $ C_{\varepsilon_{kll}} $
maps $ a^r c^s $ to a multiple of $ a^r c^s $
for any value of $ k \in \mathbb{Z} $.
Notice that the multiplicative constant
depends on $ k $ and can be $ 0 $.
In physics terminology this is a
{\em preservation operator}, which
simply means mathematically
that it preserves degrees.
The sub-case $ l = m = 0 $ is the co-Toeplitz operator
with `holomorphic' symbol $ a^{k} $
if $ k \ge 0 $ or
with `anti-holomorphic'
symbol $ (a^{*})^{-k} $ if $ k < 0 $.
The next case is $ l > 0, m = 0 $.
In this case $ C_{\varepsilon_{kl0}} $
maps $ a^r c^s $ to some
multiple of $ a^{r-l} c^{s-l} $.
In usual physics terminology this is called an
{\em annihilation operator}, which
simply means mathematically
that it lowers degrees.
We remark that $ \varepsilon_{kl0} $ is the most
general holomorphic monomial in the variables
$ a $ and $ c $.
It is because of this particular case that we have
defined co-Toeplitz operators
with holomorphic symbols to be annihilation
operators.
In the case $ l = 0 $ we have that $ m = 0 $
must hold as well.
And so this case was already considered as
part of the first case.
Or in other words, the case $ l = 0 $ and $ m > 0 $
gives a zero co-Toeplitz operator.
This leads us up to the analysis of the co-Toeplitz
operators whose symbols are one of the four
algebra generators, $ a, a^{*}, c , c^{*} $, of
$ SU_q(2) $.
For the symbol $ c $ we have $ k = m = 0 $,
$ l=1 $ and so
$ C_c $ is an annihilation operator that maps
$ a^r c^s $ to a multiple of $ a^{r-1} c^{s-1} $.
For the symbol $ c^{*} $ we have $ k = l = 0 $,
$ m = 1 $ and so $ C_{c^{*}} = 0$, since $ m > l $
holds.
The same reasoning applies to the
`anti-holomorphic' symbol
$ (a^{*})^{k} (c^{*})^{m} $ for $ m >0 $,
since $ m > l = 0 $.
So, $ C_{ (a^{*})^{k}(c^{*})^{m} } = 0$.
For the symbol $ a^{*} $ we have
$ l = m = 0 $, $ k = -1 $.
Now $ n + p = l - m = 0 $ implies that
$ n = p = 0 $ and therefore that
$ r + s = k = -1 $, which
has no solutions
$ r \ge 0, s \ge 0 $.
Thus, $ C_{a^{*}} = 0 $.
For the symbol being $ a $
we have $ l = m = 0 $, $ k = 1 $,
and thus $ C_{a} $ is a
preservation operator.
But $ n + p = l - m = 0 $ implies that
$ n = p = 0 $.
So there is only one term in the sum \eqref{last-expn}.
We note that $ q^{-n(s-p)} = q^{0} = 1 $
and $ D_{00rs} = 1 $.
But the coefficient in that unique term is
$$
\langle \varepsilon_{100} , a \rangle_{\mathcal{C}} =
\langle a , a \rangle_{\mathcal{C}} = w (1,0) > 0.
$$
Consequently, $ C_{a} $ is a non-zero multiple
of the identity operator.
In particular,
$ C_{a} \ne 0$ and $ C_{a^{*}} = 0 $
are not adjoints of each other.
So the condition
\eqref{M-tilde-condition}
does not hold for our choice
\eqref{define-sequi-form}
for the sesquilinear form.
In this example, the creation and annihilation
operators have strange properties from the point
of view of quantum physics.
This is in part a consequence of the choice of the
sesquilinear form for this example.
As I have emphasized elsewhere, the study of
more examples of the co-Toeplitz
quantization scheme is really needed
for getting a better
understanding of the general theory.
A similar example for the Toeplitz quantization
of $ SU_q (2) $ in \cite{sbs5} gave creation and
annihilation operators
which are more intuitive physically.
This goes to show that co-Toeplitz quantization
has new, rather curious properties, even though it is
dual, in the sense of the duality of notions, to
Toeplitz quantization.
This example depends on more than the choice of
the co-algebra $ SU_q(2) $.
We have to choose also the sesquilinear form and
the subspace $ \mathcal{P} $.
We could continue with the same family
of sesquilinear forms, where that family is
parameterized by the weight function.
Instead, we could use a different subspace, say
for example:
$$
\mathcal{P}^{\prime} :=
\mathrm{span}\{ \varepsilon_{kl0} = a^{k} c^{l},
\varepsilon_{k0m} = a^{k} (c^{*})^{m}
~|~ k,l \ge 0, m > 0 \}.
$$
Since no two elements in this set of generators
have the same bi-degree, this set is an orthogonal
basis of $ \mathcal{P}^{\prime} $.
So an orthonormal basis of $ \mathcal{P}^{\prime} $
is given by
$$
\phi_{kl}
= \dfrac{1}{ \sqrt{w(k,l)} } \, \varepsilon_{kl0}
\quad \mathrm{and} \quad
\psi_{km} :=
\dfrac{1}{\sqrt{w(k,-m)}} \, \varepsilon_{k0m}
$$
for $ k,l \ge 0 $ and $ m > 0 $, where we continue
to use the notation $ \phi_{kl} $
introduced earlier.
Thus $\mathcal{P}^{\prime}$ is a pre-Hilbert space.
The injection
$j^{\prime} : \mathcal{P}^{\prime} \to \mathcal{C} $
is defined to be the inclusion map.
The quotient map
$Q^{\prime}: \mathcal{C} \to \mathcal{P}^{\prime} $
is defined for $f \in \mathcal{C}$ as
$$
Q^{\prime}(f) := \sum_{i,j \ge 0}
\langle \phi_{ij}, f \rangle_{\mathcal{C}} \, \phi_{ij}
+
\sum_{i \ge 0, j > 0}
\langle \psi_{ij}, f \rangle_{\mathcal{C}} \, \psi_{ij},
$$
where the sum on the right side has only finitely
many non-zero terms.
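We note in passing that, provided $ \langle \cdot , \cdot \rangle_{\mathcal{C}} $ is the same sesquilinear form with respect to which the elements $ \phi_{kl} $ and $ \psi_{km} $ were just seen to be orthonormal, one has $ Q^{\prime} \circ j^{\prime} = \mathrm{id}_{\mathcal{P}^{\prime}} $, since $ Q^{\prime} $ applied to any one of these basis elements returns that same element.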
This shows just one possible way of giving
another example based on the co-algebra
$ SU_q(2) $.
Another possible modification of this example
is to use the positive definite
inner product
defined for $ x,y \in SU_q(2) $ by
$
\langle x , y \rangle := h (x^{*}y),
$
where $ h : SU_q(2) \to \mathbb{C} $ is the
unique Haar state on $ SU_q(2) $
(see \cite{KS}), instead
of the sesquilinear form defined in \eqref{define-sequi-form}.
This is an approach that is better attuned to
the Hopf $ * $-algebra structure of $ SU_q(2) $.
These two alternatives as well as other examples
of co-Toeplitz quantizations of co-algebras
will be the subject of forthcoming
research work.
\section{Concluding Remarks}
\label{concluding-remarks-section}
This paper begins the new theory of
co-Toeplitz operators and their associated
quantization, as the title indicates.
On the other hand, the theory of Toeplitz
operators is over one hundred years old.
Obviously, one strategy is to use the ideas and
results in the Toeplitz setting to inspire
research in this new theory.
However, I hope that there will be more new ideas
arising in the co-Toeplitz setting and that some
of these may even shed light on the well-known
Toeplitz setting.
To bring this theory to maturity requires more
than anything a reasonable quantity of
illuminating examples, which could help in
fine tuning definitions and in providing insights
into relations among the various structures
introduced here.
Also, bi-algebras can now be quantized either by
using their algebra structure or their co-algebra
structure.
So it would be interesting to understand how
those two quantizations might be related.
In the more specific case of Hopf algebras
(or quantum groups)
one would like to know what the role of the
antipode is.
One might also be able to introduce into this
setting such structures as a symplectic
form, Poisson brackets or coherent states, just
to name a few possibilities.
Finally, other types of quantization schemes may
also be extended to theories based
on arbitrary algebras or co-algebras.
This is a broad outline of possible future
research in this area.
\vskip 0.2cm
\begin{center}
\textbf{Acknowledgments}
\end{center}
I thank Micho \dju~and Jean-Pierre Gazeau
for providing me
insights from rather
complementary points of view
of mathematical physics.
I can not imagine how I could ever have possibly
written this paper without their generosity
in sharing ideas with me.
TITLE: Looking for references to Pythagorean triple subsets
QUESTION [3 upvotes]: I knew nothing about generating Pythagorean triples in 2009 so I looked for them in a spreadsheet. Millions of formulas later, I found a pattern of sets shown in the sample below.
$$\begin{array}{c|c|c|c|c|}
Set_n & Triple_1 & Triple_2 & Triple_3 & Triple_4 \\ \hline
Set_1 & 3,4,5 & 5,12,13& 7,24,25& 9,40,41\\ \hline
Set_2 & 15,8,17 & 21,20,29 &27,36,45 &33,56,65\\ \hline
Set_3 & 35,12,37 & 45,28,53 &55,48,73 &65,72,97 \\ \hline
Set_{4} &63,16,65 &77,36,85 &91,60,109 &105,88,137\\ \hline
\end{array}$$
In each $Set_n$, $(C-B)=(2n-1)^2$, the increment between consecutive values of $A$ is $2(2n-1)k$ where $k$ is the member number or count within the set, and $A=(2n-1)^2+2(2n-1)k$. I solved the Pythagorean theorem for $B$ and $C$, substituted now-known the expressions for $A$ and $(C-B)$, and got $\quad B=2(2n-1)k+2k^2\qquad C=(2n-1)^2+2(2n-1)k+2k^2$.
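Here is a quick sanity check of these formulas, a minimal sketch in Python (the function name and the ranges are arbitrary) that regenerates the table above and verifies $A^2+B^2=C^2$:

def triple(n, k):
    # d = 2n - 1, so that C - B = d^2 as observed above
    d = 2 * n - 1
    A = d * d + 2 * d * k              # A = (2n-1)^2 + 2(2n-1)k
    B = 2 * d * k + 2 * k * k          # B = 2(2n-1)k + 2k^2
    C = d * d + 2 * d * k + 2 * k * k  # C = (2n-1)^2 + 2(2n-1)k + 2k^2
    assert A * A + B * B == C * C      # Pythagorean identity
    return (A, B, C)

for n in range(1, 5):                  # Set_1 through Set_4
    print([triple(n, k) for k in range(1, 5)])
# The first printed row is [(3, 4, 5), (5, 12, 13), (7, 24, 25), (9, 40, 41)],
# matching Set_1 in the table; the other rows match Set_2 through Set_4.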
I have since learned that my formula is the equivalent of replacing $(m,n)$ in Euclid's formula with $((2n-1+k),k)$. I found ways of using either my formula or Euclid's to find triples given only sides, perimeters, ratios, and areas as well as polygons and pyramids constructed of dissimilar primitive triples.
I found that the first member of each set $(k=1)$ and all members of $Set_1 (n=1)$ are primitive. I found that, if $(2n-1)$ is prime, only primitives will be generated in $Set_n$ if $A=(2n-1)^2+2(2n-1)k+\bigl\lfloor\frac{k-1}{2n-2}\bigr\rfloor $ and I found that, if $(2n-1)$ is composite, I could obtain only primitives in $Set_n$ by generating and subtracting the set of [multiple] triples generated when $k$ is a $1$-or-more multiple of any factor of $(2n-1)$. The primitive count in the former is obtained directly; the count for the latter is obtained by combinatorics.
I'm trying to write a paper "On Finding Pythagorean Triples". Surely someone has discovered these sets in the $2300$ years since Euclid but I haven't found any reference to them or to any subsets of Pythagorean triples online or in the books I've bought and read. So my question is: "Where have these distinct sets of triples been mentioned before?" I would like to cite the work if I can find it.
The bounty just expired and neither of the two answers has been helpful. I have not quite a day to award the bounty. Any takers? Where and when have these sets been discovered before?
REPLY [2 votes]: In L. E. Dickson, History of the Theory of Numbers, Volume II, page 167
T. Fantet de Lagny$^{18}$ replaced $m$ by $d+n$ in $(1)$ and obtained
$$ x = 2n(d+n),\;\; y=d(d+2n),\;\;
z = x+d^2=y+2n^2. $$
The footnote 18 is briefly "Hist. Acad. Sc. Paris, 1729, 318."
Your formulas are
$$A\!=\!(2n\!-\!1)^2\!+\!2(2n\!-\!1)k,\\
B\!=\!2(2n\!-\!1)k\!+\!2k^2,\\
C\!=\!(2n\!-\!1)^2\!+\!2(2n\!-\!1)k\!+\!2k^2.$$
Get this from Lagny's formulas if $\,d\,$ is
replaced by $\,2n-1\,$ and $\,n\,$ is replaced with $\,k.\,$
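Spelling this substitution out, with $d=2n-1$ and de Lagny's $n$ renamed to $k$:
$$ x = 2k(d+k) = 2(2n-1)k+2k^2 = B, \qquad
y = d(d+2k) = (2n-1)^2+2(2n-1)k = A, $$
$$ z = x+d^2 = (2n-1)^2+2(2n-1)k+2k^2 = C. $$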
Thus, your formula is equivalent to de Lagny's, except that $\,2n-1=d\,$
is always odd; however, if $\,d\,$ is even, the triple has
a common factor of $2$ and cannot be primitive.
- Digital Camera Link - Belkin
location: ilounge.com - date: September 29, 2004
I have a 4G 40gb iPod purchased in the US. I also have a Belkin Digital Camera link. I know there are firmware issues with 4G iPods and the Digital Link, but I purchased it directly from Belkin's website on 16/9, so it should have the updated firmware. When I plug the Digital link into the iPod, I get a green light for about 2 sec, then all 3 LEDs go continuously red. I also get the battery charging icon on the iPod (this definitely shouldn't happen!). This error does not match any of those described in the manual. I cannot transfer any images from my camera, since this error occurs before even plugging the camera in. The iPod seems otherwise fine. I'm running it as a Windows device. I have updated the firmware, put fresh batteries in the Digital Link, made sure the iPod is fully charged, etc. Belkin tech support have been absolutely useless (so I'm never buying one of their products ever again). I'd be really grateful for any adv
Belkin Digital Camera Link for iPod
location: ilounge.com - date: January 8, 2005
Ok, I need to know if the "Belkin Digital Camera Link for iPod" will work on an iPod mini. I have read that some people have had success using it with an iPod mini even though it says it's not compatible on Belkin's website. So, will the Belkin Digital Camera Link for iPod work with an iPod mini???
Test Result for Belkin Digital Camera Link
location: ilounge.com - date: May 12, 2004
I got my Belkin Digital Camera Link today. I tested it using a Sony DSC-F717 with a 256MB Sandisk MS Pro. Transfer time - It took 6 minutes and 30 seconds to copy the 256MB MS Pro to the iPod. Camera Battery usage - When I started I had 139 minutes of battery power left on my camera and when I finished my camera was showing 130 minutes of battery power left. iPod Battery usage - My iPod was half charged when I started the transfer and after the transfer completed it was still showing nearly the same mark (but this is not reliable as there is no counter on the iPod which will show battery time left). Conclusion : I'll give it 4 out of 5 as the transfer rate for the 256MB MS Pro is acceptable, not at all bad for a 45 USD device (I bought it using coupon code 12345 which gave me 50% OFF the retail price). Not sure how bigger cards will behave. MJ Please choose the appropriate forum for this topic. Thank you.
belkin digital camera link not turning on??!
location: ilounge.com - date: December 2, 2005
I just got my belkin digital camera link -- and it does not turn on at all!! I tried different sets of batteries to no avail. My ipod is a 3G running software 2.3. It should work!! Is it possible my unit is defective?? that's what Im
Located in Silverwater, Sydney, we have been supplying our customers with curtains and blinds for the last 15 years. As a result of having a large range of products, you will be able to find the item that will suit the needs of your home or office. All our products come with our 2 years "We'll Fix It" guarantee. As we are a manufacturing business you can buy directly from the manufacturer at reduced prices. As a bonus, we will also always give you the best price - guaranteed!
TITLE: Decide whether the following $f:\textbf{R}^{n}\to\textbf{R}^{m}$ function is a linear mapping or not
QUESTION [0 upvotes]: Decide whether the following function $f:\textbf{R}^{n}\to\textbf{R}^{m}$ is a linear mapping or not.
If the answer is yes, then determine the matrix of the mapping $[f]$, the kernel $\ker(f)$ and the image $\text{Im}(f)$ and the dimensions of the latter two, where $f:\textbf{R}^{3}\to\textbf{R}^{2}$, $f(x, y, z) =
(x − y + z, x − y + z)$.
I have found that the mapping is linear (by checking the two properties). However, I could not go further.
REPLY [0 votes]: Yes, $f$ is a linear mapping. Clearly, the zero vector is mapped to the zero vector, since $f(0,0,0)=(0,0)$.
For any two vectors $u=(x_1,y_1,z_1)^T $ and $v=(x_2,y_2,z_2)^T \in \mathbb R^3$, we have $f(u+v)=(x_1+x_2 − y_1-y_2 + z_1+z_2, x_1+x_2 − y_1-y_2 + z_1+z_2)^T=(x_1 − y_1 + z_1, x_1 − y_1 + z_1)^T+(x_2 − y_2 + z_2, x_2 − y_2 + z_2)^T=f(u)+f(v)$.
Similarly, one shows that for any scalar $c \in \mathbb R$, $f(cu)=cf(u)$ for all $u\in \mathbb R^3$. Hence, $f$ is a linear map.
Now for the matrix calculation:
Note that $(x,y,z)^T=xe_1+ye_2+ze_3$ where $e_i's$ are standard basis vectors.
Hence, $f(x,y,z)^T=xf(e_1)+yf(e_2)+zf(e_3)=[f(e_1) \;\;f(e_2)\;\;\;f(e_3)](x,y,z)^T=A(x,y,z)^T$
where, $A=[f(e_1) \;\;f(e_2)\;\;\;f(e_3)]$, a $(2 \times 3 )$ matrix.
Ker(f) = null space of $A$, Im(f) = column space of $A$.
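To finish the computation explicitly (no new assumptions, just plugging the standard basis vectors into $f$): $f(e_1)=(1,1)^T$, $f(e_2)=(-1,-1)^T$, $f(e_3)=(1,1)^T$, so
$$A=\begin{pmatrix}1&-1&1\\1&-1&1\end{pmatrix}.$$
Hence $\text{Im}(f)=\text{span}\{(1,1)^T\}$ with $\dim \text{Im}(f)=1$, and
$$\ker(f)=\{(x,y,z)^T\in\mathbb R^3: x-y+z=0\}=\text{span}\{(1,1,0)^T,(0,1,1)^T\},\qquad \dim\ker(f)=2,$$
in agreement with the rank-nullity theorem ($1+2=3$).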
St. Thomas, Mo. Wins Wastewater Grant
August 14, 2003
A state grant will help pay for the installation of a new wastewater collection and treatment system to serve the central part of St. Thomas, Mo.
The Missouri Department of Economic Development approved a $295,000 wastewater grant for St. Thomas through the Community Development Block Grant program.
The new system will replace individual septic tanks that do not function properly and will eliminate unsanitary sewage discharge in the city's drainage ditches.
Plans include installing a circulating sand filter treatment system with the cells, berms and pump station. The new system will be designed to meet anticipated growth for 30 years.
Source:
SGC
The right demand generation is crucial to acquiring Marketing Qualified Leads or MQLs, especially in B2B SaaS marketing. Big brands make it all seem so easy—and for them, it really is. But for the rest of us, things are a little more difficult when it comes to having your business noticed.
Your demand generation provider may send several qualified prospects to your enterprise every month. But there is a fundamental question: How do you get the best out of your MQLs sent by your demand gen partner?
Simply put, MQLs are leads that aren’t in the buying mood yet, but they show they have a need you can help them with. And lead nurturing could be just the thing to turn them into more receptive buyers.
Unfortunately, many companies fail to do so and consequently fail to reap the benefits that nurturing can bring.
In this post, we share the best practices for leveraging your lead nurturing program so that you can build trust with prospects and hopefully turn them into loyal customers (or SQLs).
To get the ball rolling, first, you need to understand what qualifies as an MQL in your B2B SaaS marketing business.
What is a Marketing Qualified Lead?
In B2B SaaS marketing, a Marketing Qualified Lead (commonly referred to as MQL) is a potential customer who has come into contact with at least one of your inbound marketing assets and displayed some level of interest in your company’s offerings.
So, all you need to do is to keep pushing your potential customer throughout the buyer’s journey—mostly through the three major stages:
- awareness
- consideration
- decision
Image source: SingleGrain
The following are a few examples of the most popular MQL conversion points:
- Downloading a free ebook or a trial version of the software.
- Using software demonstrations.
- Completing online forms.
- Subscribing to a newsletter.
- Adding products to a wishlist or favorite items.
- Adding items to the shopping cart.
- Returning to your site or spending a significant amount of time.
- Landing on your website by clicking on an advertisement.
- Contacting to inquire about additional information.
What a Marketing Qualified Lead Isn’t?
Marketing Qualified Leads are leads who have shown an interest in your business. They may have visited the website or downloaded an ebook laid out as part of your B2B SaaS marketing campaign.
“MQLs are not necessarily customers, and B2B SaaS marketers shouldn’t treat these leads as such.”
Instead, you should continue to nurture prospects with relevant content marketing until they become Sales Qualified Leads.
P.S. SQLs are those who have indicated a readiness to purchase your product or service.
How Do You Nurture the MQLs Received by Your Demand Generation Partner?
It is a common predicament. As a B2B SaaS marketing team, we spend hours (… no, make that days) reaching out to potential leads through demand generation partners or various channels (SEM, social media, email, etc.).
Then, when an inbound inquiry finally arrives from a lead and makes its way to your sales team in the form of a Marketing Qualified Lead (MQL), you start reaching out and prodding those leads even more.
It might take multiple visits before the opportunity ends up in your sales department. This time period is a great chance to handle objections, clarify some issues, and prove that you not only listen but also provide value.
How do you do that? We have listed a few strategies:
1. Establish Your Definition of a Qualified Lead.
Start with the basics — what is your B2B SaaS marketing team’s definition of a qualified lead? To put it bluntly, each company will define each lead type differently.
You’ll want to work with your marketing and sales teams to define each lead type based on your product or service. It will help you craft an effective strategy that will likely lead to higher conversions. Also, it will make sure that everyone in your organization is on the same page when it comes to leads.
2. Make Sure Your B2B SaaS Marketing Team Understands Your Target Customers.
Getting to know your audience and customer is at the core of what makes B2B SaaS marketing so effective and results-driven. It’s not about how much you push what you have to say; it’s about working with your customers’ needs.
If you’d like to connect with your customers and build trust emotionally, your marketing strategy needs to be designed around your customer’s pain points, what they value (much more than just features of the product), as well as their preferred information gathering process.
The following are a few questions that your B2B SaaS marketing team can ask:
- What attributes do my best and worst customers have in common?
- What are their content choices, search terms, social media accounts, and the products or services they buy?
- Who is my ideal customer at the top, middle, and bottom of the funnel?
3. Find Out Where Your Leads Are In the Sales Funnel.
Before you try to convert your leads into customers, you need to understand where they are in the sales funnel. There’s no point in chasing people who aren’t interested or moving further down the sales funnel too quickly.
“An experienced B2B SaaS marketing team understands that you’ll want to use different strategies to engage and educate leads at different stages in the buyer’s journey.”
For example, when someone signs up for your blog or newsletter, they’re probably new to your brand. Long-form thought leadership pieces might be the worst thing you can show them. Use those opportunities instead to recommend tactical blog posts and helpful third-party articles.
4. Create Relevant Content.
You need to create engaging content regularly, so your leads don’t forget about you. Your content can be in any format—blogs, infographics, press releases, case studies, email chain, or downloadable resources like a whitepaper, ebook, guide, datasheet, etc. that brings value to your audience.
We have come up with a few examples for your B2B SaaS marketing team:
- Top of the funnel: In this stage, new prospects will be introduced to your brand and, if you are lucky, they will begin to engage with you. Likewise, you can create educational content like blog articles, videos, and social posts to boost SEO efforts.
- Middle of the funnel: The middle of the funnel is where your leads decide whether or not your company’s products are worth investing in. You can support their decision by providing product-oriented content like case studies, eBooks, and webinars.
- Bottom of the funnel: Here the customers are more likely to convert. Help them skim the value of your product by making demo videos, sales pages, and discount campaigns.
Image Source: Weidert.com
5. Automate Your Communications.
Once you have created relevant content for each stage of the buyer’s journey, your B2B SaaS marketing team is ready to distribute it. But how can they reach everybody at once if you have hundreds of leads? It is realistically not possible to send different messages to each of them.
Automating content distribution will make it easy to ensure that everyone gets the content that they need when they need it.
Every B2B SaaS Marketing Team Needs Demand Generation
What every B2B marketing team needs is an overarching umbrella process that goes beyond lead generation. That is where demand generation comes in.
The term refers to creating leads by marketing to prospects and customers. It’s a thorough collection of ideas, strategies, systems, and processes that work together to actively create demand for your product. It’s the engine that keeps your sales funnel full of leads, driving revenue for your business.
“When it comes to generating demand in B2B SaaS marketing, tracking the number of leads that turn into actual sales is perhaps the most crucial step.”
Also, remember, B2B sales is a long process. There are many steps for a lead to convert into an opportunity, for an opportunity to convert into a sales opportunity, and for the sales opportunity to turn into a closed-won deal.
Here’s what a typical B2B SaaS Marketing Lead-to-Revenue funnel looks like:
Image Source: Saasmql.com
Marketing owns the blue part of the funnel, while Sales owns the orange side of it.
- MCL: Marketing Captured Leads are the contacts in your database. They also include cold prospects gathered from third-party systems.
- MEL: Marketing Engaged Leads are contacts that have performed an action, like downloading a whitepaper or signing up for a newsletter.
- MQL: Marketing Qualified Leads are those prospects that are likely to become customers and are assigned to the sales team.
- SAL: Sales Accepted Leads are those leads that the sales team recognizes and are ready to work on them.
- SQL: Sales Qualified Leads are prospects or opportunities that finally engage with the sales team
- SQO: Sales Qualified Opportunity are opportunities that go beyond the initial stage, become pipeline, and, eventually, generate revenue.
Get the Most of Your MQLs in B2B SaaS Marketing
Now that you know how to nurture a marketing qualified lead, it’s your turn to put that knowledge into practice. Evaluate what you’ve learned in this article and optimize your B2B SaaS marketing funnel. Get yourself more users to sign up for your product or service.
Is there anything else that we skipped? Please let us know in the comment section below.
\begin{document}
\title{An Adaptive Algorithm for Synchronization in Diffusively Coupled Systems}
\author{S. Yusef Shafi and Murat Arcak\\ Department of Electrical Engineering and Computer Sciences \\University of California, Berkeley\\ \{yusef,arcak\}@eecs.berkeley.edu}
\date{\today}
\maketitle
\begin{abstract}
We present an adaptive algorithm that guarantees synchronization in diffusively coupled systems. We first consider compartmental systems of ODEs, where each compartment represents a spatial domain of components interconnected through diffusion terms with like components in different compartments. Each set of like components may have its own weighted undirected graph describing the topology of the interconnection between compartments. The link weights are updated adaptively according to the magnitude of the difference between neighboring agents connected by the link. We next consider reaction-diffusion PDEs with Neumann boundary conditions, and derive an analogous algorithm guaranteeing spatial homogenization of solutions. We provide a numerical example demonstrating the results.
\end{abstract}
\section{Introduction}
Spatially distributed models with diffusive coupling are crucial to understanding the dynamical behavior of a range of engineering and biological systems. This form of coupling encompasses, among others, feedback laws for coordination of multi-agent systems, electromechanical coupling of synchronous machines in power systems, and local update laws in distributed agreement algorithms. Synchronization of diffusively coupled models is an active and rich research area \cite{Hale}, with applications to multi-agent systems, power systems, oscillator circuits, physiological processes, etc.
The majority of the literature assumes a static interconnection between the nodes in full state models \cite{arcak11aut,Nijmeijer,weislotine2005,stan2007,russo2009,ScardoviEtAl,pecora1998} or phase variables in phase coupled oscillator models \cite{Kuramoto1,Strogatz,chopraspong2009,Dorfler}. However, recently, the situation where interconnection strengths are adapted according to local synchronization errors has started to attract interest. In \cite{assenza2011}, the authors proposed a phase-coupled oscillator model in which local interactions were reinforced between agents with similar behavior and weakened between agents with divergent behavior, leading to enhanced local synchronization. In \cite{delellis2009}, the authors analyzed synchronization of oscillators and presented an adaptive approach to establish synchrony. In \cite{demetriou2013synchronization}, the author considered synchronization and consensus in linear parabolic distributed systems, and in \cite{demetriou2013adaptation} presented an adaptive algorithm to guarantee state regulation and improve convergence of coupled agents to common transient trajectories.
In this note, we present an adaptive algorithm to guarantee synchrony in diffusively coupled systems. In our earlier work for static interconnections, we gave a numerically-verifiable condition on the Jacobian of a vector field to guarantee spatial homogeneity in reaction-diffusion PDEs and coupled compartmental systems of ODEs \cite{arcak11aut}, and generalized the condition to heterogeneous diffusion in \cite{shafi2013acc}. Using these results as a starting point, here we first consider compartmental models and derive adaptive laws that update interconnection strengths locally to achieve sufficient connectivity for synchronization. We next consider reaction-diffusion partial differential equations, and show that a similar control law that adapts the strength of diffusion coefficients guarantees spatial homogeneity. We present a numerical example that demonstrates the effectiveness of adaptation in enhancing synchrony and lends insight to understanding the structure and most crucial links of the network.
Our results make several key contributions differing from the existing literature. In \cite{delellis2009}, the authors presented an adaptive law to establish synchrony across agents in a coupled compartmental system of ODEs. They assumed full-state coupling over a single graph via a vector-valued output function. In contrast, we do not assume full-state coupling, and we further allow multiple input-output channels interconnected according to different graphs. In this case, the link weights for each graph are adjusted with a separate update rule. In addition, we present a PDE analogue of the proposed adaptation. In \cite{demetriou2013adaptation}, the author considered a collection of identical linear spatially distributed systems (e.g., linear parabolic PDEs) coupled by a graph with the goals of state regulation to zero and synchrony across agents. However, nonlinear models and nonequilibrium dynamics are not considered. We study nonlinear models and do not make any assumptions on the attractors of this model. This allows us to achieve synchronization for limit cycle oscillators, multi-stable systems, etc. Furthermore, to our knowledge the literature does not address the question of spatial homogenization in reaction-diffusion PDEs in which the coefficients of the elliptic operator vary in time.
\section{Compartmental ODEs}
Let $\mathcal{G}$ be an undirected, connected graph with $N$ nodes and $M$ links, where the nodes $i=1,\cdots,N$ represent the dynamical systems:
\begin{eqnarray}\label{initial}
\dot{x}_i&=&f(x_i)+B\sum_{j=1}^Nk_{ij}\,(y_j-y_i) \quad i=1,\cdots,N
\\
y_i&=&Cx_i \label{initial2}
\end{eqnarray}
in which $x_i\in \mathbb{R}^n$, $B\in \mathbb{R}^{n\times p}$, and $C\in \mathbb{R}^{p\times n}$, $f(\cdot)$ is a continuously differentiable vector field, and the scalars
$k_{ij}=k_{j\, i}$ for each pair $(i,j)$. Nodes $i$ and $j$ are called neighbors in $\mathcal{G}$ if there is a link in $\mathcal{G}$ connecting $i$ with $j$. We take $k_{ij}=0$ when nodes $i$ and $j$ are not neighbors in $\mathcal{G}$ so that the dynamical systems defined by (\ref{initial})-(\ref{initial2}), $i=1,\cdots,N$, are coupled according to the graph structure.
When $i$ and $j$ are neighbors, $k_{ij}=k_{j\, i}$ is updated according to:
\begin{equation}\label{update}
\dot{k}_{ij}=\gamma_{ij}\, (y_i-y_j)^T(y_i-y_j)
\end{equation}
where $\gamma_{ij}=\gamma_{j\,i}>0$ is an adaptation gain to be selected by the designer. Thus, there are $M$ independent variables updated as in (\ref{update}), corresponding to each link of the graph.
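Since the right-hand side of (\ref{update}) is non-negative, each $k_{ij}(t)$ is non-decreasing and, integrating (\ref{update}),
\begin{equation}
k_{ij}(t)=k_{ij}(0)+\gamma_{ij}\int_0^t \big(y_i(s)-y_j(s)\big)^T\big(y_i(s)-y_j(s)\big)\, ds,
\end{equation}
so the value reached by $k_{ij}$ records the accumulated squared output mismatch across link $(i,j)$. We record this elementary observation here because it is used when interpreting the link weights in the numerical example below.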
Define
\begin{equation}\label{defbar}
\bar{x}:=\frac{1}{N}(x_1+\cdots+x_N),
\quad
\tilde{x}_i:=x_i-\bar{x},
\quad \mbox{and} \quad
\tilde{y}_i:=C\tilde{x}_i.
\end{equation}
To guarantee that
$\tilde{y}_i(t) \rightarrow 0$ as $t\rightarrow \infty$, that is, the outputs of the dynamical systems synchronize, we restrict the matrices $B$, $C$, and the Jacobian
$$
J(x)=\frac{\partial f(x)}{\partial x}
$$
with the following assumption:
\begin{assumption}\label{OFP}
There exist a convex set $\mathcal{X}\subset \mathbb{R}^n$, a constant $\theta>0$, and a matrix $P=P^T>0$ such that:
\begin{eqnarray}\label{OFP1}
&& PJ(x)+J(x)^TP\le \theta C^TC \quad \forall x\in \mathcal{X}\\
&& PB=C^T\label{OFP2}.
\end{eqnarray}
\end{assumption}
\begin{theorem}\label{odethm}
Consider the interconnected system (\ref{initial})-(\ref{initial2}), $i=1,\cdots,N$, where $k_{ij}=k_{j\, i}$ is updated according to (\ref{update}) when nodes $i$ and $j$ are neighbors in $\mathcal{G}$ and is interpreted as zero otherwise, and suppose Assumption \ref{OFP} holds.
If the solutions are bounded and $x_i(t)\in \mathcal{X}$ for all $t\ge 0$, $i=1,\cdots,N$, then $\tilde{y}_i(t) \rightarrow 0$ as $t\rightarrow \infty$.
\hfill $\Box$
\end{theorem}
\noindent
{\bf Proof of Theorem \ref{odethm}:}
We define $\tilde{k}_{ij}={k}_{ij}-{k}_{ij}^*$ where
\begin{equation}\label{kdef}
k^*_{ij}=\left\{ \begin{array}{ll} k^* & \mbox{if $i$ and $j$ are neighbors in $\mathcal{G}$}\\
0 & \mbox{otherwise} \end{array}\right.
\end{equation}
and $k^*$ is a constant to be selected. We then introduce the Lyapunov function:
\begin{equation}
V=\sum_{i=1}^N \tilde{x}_i^TP\tilde{x}_i+\sum_{i=1}^N \sum_{j=1}^N\frac{1}{2\gamma_{ij}}\tilde{k}_{ij}^2
\end{equation}
where (\ref{kdef}) implies that $\tilde{k}_{ij}={k}_{ij}=0$ for pairs $(i,j)$ that are not neighbors in $\mathcal{G}$.
Taking the derivative of $V$ with respect to time and substituting (\ref{initial}) and (\ref{update}), we get:
\begin{equation}
\dot{V}=\sum_{i=1}^N 2\tilde{x}_i^TP\left(f(x_i)-\dot{\bar{x}}+B\sum_{j=1}^Nk_{ij}\,(y_j-y_i)\right)+\sum_{i=1}^N \sum_{j=1}^N\tilde{k}_{ij}(y_i-y_j)^T(y_i-y_j).
\end{equation}
We then substitute $y_i-y_j=\tilde{y}_i-\tilde{y}_j$ and $\tilde{x}_i^TPB=\tilde{y}_i^T$, which follows from (\ref{OFP2}), and obtain:
\begin{equation}\label{dot2}
\dot{V}=\sum_{i=1}^N 2\tilde{x}_i^TP(f(x_i)-\dot{\bar{x}})+2\sum_{i=1}^N \sum_{j=1}^Nk_{ij}\,\tilde{y}_i^T(\tilde{y}_j-\tilde{y}_i)+\sum_{i=1}^N \sum_{j=1}^N\tilde{k}_{ij}(\tilde{y}_i-\tilde{y}_j)^T(\tilde{y}_i-\tilde{y}_j).
\end{equation}
Next, we note from (\ref{defbar}) that $\sum_{i=1}^N \tilde{x}_i=0$, and add
\begin{equation}
\sum_{i=1}^N 2\tilde{x}_i^TP(\dot{\bar{x}}-f(\bar{x}))=2\left(\sum_{i=1}^N \tilde{x}_i\right)^TP(\dot{\bar{x}}-f(\bar{x}))=0
\end{equation}
to the right-hand side of (\ref{dot2}):
\begin{equation}\label{dot3}
\dot{V}=\sum_{i=1}^N 2\tilde{x}_i^TP(f(x_i)-f(\bar{x}))+2\sum_{i=1}^N \sum_{j=1}^Nk_{ij}\,\tilde{y}_i^T(\tilde{y}_j-\tilde{y}_i)+\sum_{i=1}^N \sum_{j=1}^N\tilde{k}_{ij}(\tilde{y}_i-\tilde{y}_j)^T(\tilde{y}_i-\tilde{y}_j).
\end{equation}
Since
\begin{equation}
f(x_i)-f(\bar{x})=\int_0^1 J(\bar{x}+s\tilde{x}_i)\tilde{x}_i\,ds
\end{equation}
by the Mean Value Theorem, inequality (\ref{OFP1}) yields:
\begin{equation}\label{this}
\tilde{x}_i^TP(f(x_i)-f(\bar{x}))
=\frac{1}{2} \tilde{x}_i^T\left(\int_0^1(PJ(\bar{x}+s\tilde{x}_i)+J^T(\bar{x}+s\tilde{x}_i)P)ds\right)\tilde{x}_i
\le \frac{\theta}{2}\tilde{x}_i^TC^TC\tilde{x}_i=\frac{\theta}{2}\tilde{y}_i^T\tilde{y}_i,
\end{equation}
and substitution of (\ref{this}) in (\ref{dot3}) gives:
\begin{equation}\label{dot4}
\dot{V}\le \theta \sum_{i=1}^{N} \tilde{y}_i^T\tilde{y}_i+2\sum_{i=1}^N \sum_{j=1}^Nk_{ij}\,\tilde{y}_i^T(\tilde{y}_j-\tilde{y}_i)+\sum_{i=1}^N \sum_{j=1}^N\tilde{k}_{ij}(\tilde{y}_i-\tilde{y}_j)^T(\tilde{y}_i-\tilde{y}_j).
\end{equation}
To further simplify (\ref{dot4}), we note that
\begin{equation}
\sum_{i=1}^N \sum_{j=1}^Nk_{ij}\,\tilde{y}_i^T(\tilde{y}_j-\tilde{y}_i)=\sum_{i=1}^N \sum_{j=1}^Nk_{ij}\,\tilde{y}_j^T(\tilde{y}_i-\tilde{y}_j)
\end{equation}
which follows by swapping the indices $i$ and $j$ and substituting $k_{ij}=k_{j\, i}$. Thus,
\begin{equation}
2\sum_{i=1}^N \sum_{j=1}^Nk_{ij}\,\tilde{y}_i^T(\tilde{y}_j-\tilde{y}_i)=\sum_{i=1}^N \sum_{j=1}^Nk_{ij}\,\tilde{y}_i^T(\tilde{y}_j-\tilde{y}_i)-\sum_{i=1}^N \sum_{j=1}^Nk_{ij}\,\tilde{y}_j^T(\tilde{y}_j-\tilde{y}_i)=-\sum_{i=1}^N \sum_{j=1}^N{k}_{ij}(\tilde{y}_i-\tilde{y}_j)^T(\tilde{y}_i-\tilde{y}_j),
\end{equation}
and (\ref{dot4}) becomes:
\begin{equation}\label{dot5}
\dot{V}\le \theta \sum_{i=1}^{N} \tilde{y}_i^T\tilde{y}_i-\sum_{i=1}^N \sum_{j=1}^N{k}^*_{ij}(\tilde{y}_i-\tilde{y}_j)^T(\tilde{y}_i-\tilde{y}_j).
\end{equation}
Next, we assign an arbitrary orientation to the links of the graph $\mathcal{G}$, label the links $\ell=1,\cdots,M$, and introduce the $N\times M$ incidence matrix:
\begin{equation}
E_{i\ell}=\left\{ \begin{array}{ll}
1 & \mbox{if node $i$ is the head of link $\ell$}\\
-1 & \mbox{if node $i$ is the tail of link $\ell$}\\
0 & \mbox{if node $i$ is not connected to link $\ell$.}\\
\end{array}\right.
\end{equation}
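For instance, for a graph consisting of two nodes joined by a single link oriented from node $1$ to node $2$, we have $E=(1,\ -1)^T$ and $EE^T=\left[\begin{array}{rr} 1 & -1 \\ -1 & 1\end{array}\right]$, the Laplacian of a single edge; this small example is included only to fix the sign conventions.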
Defining $\tilde{Y}=[\tilde{y}_1\ \cdots \tilde{y}_N]^T$, we note that $(E\otimes I_p)^T\tilde{Y}$ is a column vector which is a concatenation of $p$-dimensional components and the $\ell$th such component is $\tilde{y}_i-\tilde{y}_j$ where $i$ is the head and $j$ is the tail of link $\ell$.
It then follows from (\ref{kdef}) that:
\begin{equation}
\sum_{i=1}^N \sum_{j=1}^N{k}^*_{ij}(\tilde{y}_i-\tilde{y}_j)^T(\tilde{y}_i-\tilde{y}_j)=2k^*\tilde{Y}^T(E\otimes I_p)(E\otimes I_p)^T\tilde{Y}=2k^*\tilde{Y}^T(EE^T\otimes I_p)\tilde{Y}.
\end{equation}
Since $EE^T$ is the Laplacian matrix for the graph $\mathcal{G}$, its smallest eigenvalue is $\lambda_1=0$ and the vector of ones $\mathbf{1}_N$ is a corresponding eigenvector. Likewise, for $EE^T\otimes I_p$, $\lambda_1=0$ has multiplicity $p$ and the corresponding eigenspace is the range of $\mathbf{1}_N\otimes I_p$.
Because $\mathcal{G}$ is connected, the second smallest eigenvalue $\lambda_2$ is strictly positive and, since $\tilde{Y}^T(\mathbf{1}_N\otimes I_p)=0$ from (\ref{defbar}), the following inequality holds:
\begin{equation}
\tilde{Y}^T(EE^T\otimes I_p)\tilde{Y}\ge \lambda_2 \tilde{Y}^T\tilde{Y}.
\end{equation}
Thus,
\begin{equation}\label{dot6}
\dot{V}\le -(2k^*\lambda_2-\theta)\tilde{Y}^T\tilde{Y}
\end{equation}
and choosing $k^*$ large enough that $\epsilon:=2k^*\lambda_2-\theta>0$ guarantees:
\begin{equation}\label{dot7}
\dot{V}\le -\epsilon \tilde{Y}^T\tilde{Y}.
\end{equation}
By integrating both sides of the inequality (\ref{dot7}), we conclude that $\tilde{y}_i(t)$ is in $\mathcal{L}_2$, $i=1,\cdots,N$. Furthermore, the boundedness of solutions implies that $\dot{x}_i(t)$ and, thus, $\dot{\tilde{y}}_i(t)$ are bounded. Barbalat's Lemma \cite{khalil} then guarantees $\tilde{y}_i(t) \rightarrow 0$ as $t\rightarrow \infty$. \hfill $\Box$
\smallskip
\begin{remark}
An extension of Theorem \ref{odethm} to the case of multiple input-output channels, connected according to different graphs, is straightforward. The system now takes the form:
\begin{eqnarray}\label{initial3}
\dot{x}_i&=&f(x_i)+\sum_{q=1}^mB^{(q)}\sum_{j=1}^Nk_{ij}^{(q)}\,(y^{(q)}_j-y^{(q)}_i) \quad i=1,\cdots,N
\\
y_i^{(q)}&=&C^{(q)}x_i \label{initial4}
\end{eqnarray}
where $B^{(q)}\in \mathbb{R}^{n\times p_q}$ and $C^{(q)}\in \mathbb{R}^{p_q\times n}$, $q=1,\cdots,m$. A graph $\mathcal{G}^{(q)}$ is defined for each channel $q$ and $k_{ij}^{(q)}=k_{j\, i}^{(q)}\neq 0$ only when nodes $i$ and $j$ are adjacent in $\mathcal{G}^{(q)}$.
The update rule (\ref{update}) then becomes:
\begin{equation}\label{MIMOupdate}
\dot{k}^{(q)}_{ij}=\gamma^{(q)}_{ij}\, (y^{(q)}_i-y^{(q)}_j)^T(y^{(q)}_i-y^{(q)}_j), \quad \gamma^{(q)}_{ij}>0, \quad q=1,\cdots,m.
\end{equation}
To prove synchronization we now ask that (\ref{OFP1})-(\ref{OFP2}) in Assumption \ref{OFP} hold for the matrices $B=[B^{(1)} \cdots B^{(m)}]$ and $C^T=[{C^{(1)}}^T \cdots {C^{(m)}}^T]$.
In fact, one can relax (\ref{OFP2}) as:
\begin{equation}\label{OFP2b}
PB=[\omega^{(1)}{C^{(1)}}^T \cdots \omega^{(m)}{C^{(m)}}^T],
\end{equation}
where $\omega^{(q)}>0$, $q=1,\cdots,m$. To accommodate the ``multipliers'' $\omega^{(q)}$, the Lyapunov function in the proof of Theorem \ref{odethm} is modified as:
\begin{equation}
V=\sum_{i=1}^N \tilde{x}_i^TP\tilde{x}_i+\sum_{q=1}^m\sum_{i=1}^N \sum_{j=1}^N\frac{\omega^{(q)}}{2\gamma^{(q)}_{ij}}(\tilde{k}_{ij}^{(q)})^2.
\end{equation}
The steps of the proof are otherwise identical and are not repeated to avoid excessive notation.
\end{remark}
\begin{remark}\label{boundedness}
Since the proof above analyzes the evolution of $x_i$ relative to the average $\bar{x}$, it cannot reach any conclusions about the absolute behavior of the variables $x_i$.
Thus, boundedness of the solutions does not follow from the proof and was assumed in the theorem. However, it is possible to conclude boundedness with an additional restriction on the vector field $f(x)$:
Since $k_{ij}=k_{ji}$, the coupling terms in (\ref{initial}) do not affect the evolution of the average $\bar{x}$, which is governed by:
\begin{equation}\label{BIBS}
\dot{\bar{x}}=\frac{1}{N}\sum_{i=1}^Nf(x_i)=\frac{1}{N}\sum_{i=1}^Nf(\bar{x}+\tilde{x}_i).
\end{equation}
If this system has a bounded-input-bounded-state (BIBS) property when $\tilde{x}_i$ are interpreted as inputs, then we conclude boundedness of all solutions.
This follows because the Lyapunov arguments in the proof show that $\tilde{x}_i(t)$ are bounded on the maximal interval of existence $[0,t_f)$ with bounds that do not depend on $t_f$ and, thus, a similar conclusion holds for $\bar{x}(t)$ implying that $t_f=\infty$. The Lyapunov function then establishes boundedness of $\tilde{x}_i(t)$ and $k_{ij}(t)$, and the BIBS property above guarantees that all solutions are bounded.
A further assumption of the theorem is that $x_i(t)\in \mathcal{X}$ for all $t\ge 0$. Thus, when the set $\mathcal{X}$ where (\ref{OFP1}) holds is a strict subset of $\mathbb{R}^n$, we have to independently show that $x_i(t)$ remains in $\mathcal{X}$. One can do this by establishing the invariance of the set $\mathcal{X}^N\times \mathbb{R}^M$ for (\ref{initial})-(\ref{update}). If $\mathcal{X}^N\times \mathbb{R}^M$ is not invariant, then an appropriate reachability analysis can be used to identify a set of initial conditions such that the trajectories starting in this set do not leave $\mathcal{X}^N\times \mathbb{R}^M$. \hfill $\Box$
\end{remark}
\begin{example} Consider the graph in Figure \ref{barbell} and suppose the nodes are governed by (\ref{initial})-(\ref{initial2}) where $B=C=1$ and
\begin{equation}\label{bistable}f(x)=x-x^3.
\end{equation}
Thus, Assumption \ref{OFP} holds in $\mathcal{X}=\mathbb{R}$ with $P=1$ and $\theta=2$, since $J(x)=1-3x^2$ gives $PJ(x)+J(x)^TP=2-6x^2\le 2=\theta C^TC$. The boundedness of solutions condition in Theorem \ref{odethm} follows from the BIBS property of (\ref{BIBS}), as in Remark \ref{boundedness}. To see this BIBS property, note from (\ref{bistable}) that $|\bar{x}|>1+\max_i\{|\tilde{x}_i|\}$ implies $\bar{x}f(\bar{x}+\tilde{x}_i)<0$. This guarantees that the solutions of (\ref{BIBS}) satisfy $|\bar{x}(t)|\le \max\{|\bar{x}(0)|,1+\max_i \sup_{t\ge 0}|\tilde{x}_i(t)|\}$ for all $t\ge 0$.
Note from (\ref{bistable}) that each node is a bistable system with stable equilibria at $x_i=\pm 1$ and a saddle point at $x_i=0$. Indeed, when we set the link weights to zero and turn off the adaptation, each $x_i(t)$ evolves independently and converges to $+1$ or $-1$ (Figure \ref{ex}A). Next, we turn on the adaptation with
gain $\gamma_{ij}=1$ and initial condition $k_{ij}(0)=0$ for each link. Figures \ref{ex}B and \ref{ex}C confirm that the nodes now synchronize, converging to a consensus value of $+1$ or $-1$.
Note from (\ref{update}) that the final value reached by each $k_{ij}(t)$ is the squared $L_2$ norm for the synchronization error $x_i(t)-x_j(t)$. As one may expect from the structure of the ``barbell'' graph in Figure \ref{barbell}, the red curve in Figure \ref{ex}D corresponding to the bottleneck link $(4,5)$ reached the largest value, indicating a high ``stress'' on this link.
\hfill \end{example}
\begin{figure}[t]
\vspace{-3.0cm}
\begin{center}
\mbox{}\setlength{\unitlength}{.9mm}
\begin{picture}(60,70)
\put(-10,0){\psfig{figure=barbell.eps,width=80\unitlength}}
\put(-14,14){$2$}\put(19,18){$4$}\put(38,18){$5$}\put(71,14){$7$}
\put(0,0){$3$}\put(50,0){$8$}
\put(0,28){$1$}\put(49.50,28){$6$}
\end{picture}
\vspace{-.2cm}
\caption{\small An eight-node ``barbell'' graph.} \label{barbell}
\end{center}
\vspace{-.6cm}
\end{figure}
\begin{figure}[h!]
\vspace{1.9cm}
\begin{center}
\mbox{}\setlength{\unitlength}{.7mm}
\begin{picture}(60,70)
\put(-53,35){\psfig{figure=propA.eps,width=80\unitlength}}
\put(-57,87){(A)}\put(-58,65){$x_{i}(t)$}\put(22,41){$t$}
\put(32,35){\psfig{figure=propB.eps,width=80\unitlength}}
\put(28,87){(B)}\put(27,65){$x_{i}(t)$}\put(107,41){$t$}
\put(-53,-23){\psfig{figure=propC.eps,width=80\unitlength}}
\put(-57,28){(C)}\put(-58,6){$x_{i}(t)$}\put(22,-17){$t$}
\put(32,-23){\psfig{figure=propD.eps,width=80\unitlength}}
\put(28,28){(D)}\put(28,6){$k_{ij}(t)$}\put(107,-17){$t$}
\end{picture}
\vspace{1.1cm}
\caption{\small Simulations for the graph in Figure \ref{barbell} where each node is governed by (\ref{initial})-(\ref{initial2}) with $B=C=1$ and
$f(x)=x-x^3$. (A) When the link weights are set to zero and the adaptation is turned off, $x_i(t)$ converge to one of the two stable equilibria $\pm 1$ and do not synchronize. (B)-(C) When the adaptation is turned on, $x_i(t)$ synchronize and converge to identical states. (D) The evolution of link weights $k_{ij}(t)$ in the simulation corresponding to Figure \ref{ex}C. The red curve is $k_{45}(t)$ for the bottleneck link $(4,5)$.} \label{ex}
\end{center}
\vspace{-.6cm}
\end{figure}
\section{Reaction-Diffusion PDEs}\label{pdesec}
Let $\Omega$ be a bounded and connected domain in $\mathbb{R}^r$ with smooth boundary $\partial \Omega$, and
consider the PDE:
\begin{eqnarray}\label{rdnet}
\frac{\partial x(t,\xi)}{\partial t}&=&f(x(t,\xi))+\sum_{\ell=1}^{p}B_{\ell} \nabla\cdot (k(t,\xi)\nabla y_{\ell}(t,\xi)),\\
\label{rdout}
y_{\ell}(t,\xi)&=&C_{\ell} x(t,\xi)
\end{eqnarray}
where $\xi \in \Omega$ is the spatial variable, $x(t,\xi)\in \mathbb{R}^n$, $k(t,\xi)\in \mathbb{R}$, $f(\cdot)$ is a continuously differentiable vector field, $B=[B_1\cdots B_p]\in \mathbb{R}^{n\times p}$, $C^T=[C_1^T \cdots C_p^T]\in \mathbb{R}^{n\times p}$, $\nabla \cdot$ is the divergence operator and $\nabla$ represents the gradient with respect to the spatial variable $\xi$.
We assume Neumann boundary conditions:
\begin{equation}
\nabla x_i(t,\xi) \cdot \hat{n}(\xi)=0 \quad \forall \xi \in \partial \Omega, \ \forall t\ge 0, \quad i=1,\cdots,n \label{bc}
\end{equation}
where ``$\cdot$" is the inner product in $\mathbb{R}^r$, $x_i(t,\xi)$ denotes the $i$th entry of the vector $x(t,\xi)$ and $\hat{n}(\xi)$ is a vector normal to the boundary $\partial \Omega$.
In analogy with (\ref{update}), we introduce the update law:
\begin{equation}\label{rdupdate}
\frac{\partial k(t,\xi)}{\partial t}=\gamma(\xi)\sum_{\ell=1}^{p}\nabla y_\ell(t,\xi)\cdot \nabla y_\ell(t,\xi),
\end{equation}
where $\gamma(\xi)>0$ is a design choice.
Define:
\begin{equation}\label{pidef}
\bar{x}(t):=\frac{1}{|\Omega|}\int_\Omega {x}(t,\xi) d\xi,
\quad
\tilde{x}(t,\xi):={x}(t,\xi)-\bar{x}(t),
\quad
\tilde{y}(t,\xi):=C\tilde{x}(t,\xi).
\end{equation}
In Theorem \ref{pdethm} below, we give conditions that guarantee the following output synchronization property:
\begin{equation}\label{rdsync}
\lim_{t\rightarrow \infty}\int_\Omega |\tilde{y}(t,\xi)|^2d\xi =0
\end{equation}
where $|\cdot |$ denotes the Euclidean norm.
\begin{theorem}\label{pdethm}
Consider the system (\ref{rdnet})-(\ref{rdout}) with boundary condition (\ref{bc}), and suppose Assumption \ref{OFP} holds. Then, the update law (\ref{rdupdate}) guarantees (\ref{rdsync}) for every bounded classical solution that satisfies
$x(t,\xi)\in \mathcal{X}$ for all $t\ge 0$.
\hfill $\Box$
\end{theorem}
\noindent
{\bf Proof of Theorem \ref{pdethm}:}
Define:
\begin{equation}
V(t)=\int_\Omega \tilde{x}^T(t,\xi)P\tilde{x}(t,\xi)d\xi+\int_\Omega \frac{1}{\gamma(\xi)}|\tilde{k}(t,\xi)|^2d\xi
\end{equation}
where $\tilde{k}(t,\xi)={k}(t,\xi)-k^*$, and $k^*$ is to be selected. Taking derivatives with respect to time and using (\ref{rdupdate}), we get:
\begin{equation}\label{rdot1}
\dot{V}(t)=2\int_\Omega \tilde{x}^T(t,\xi)P\frac{\partial \tilde{x}(t,\xi)}{\partial t}d\xi+2\sum_{\ell=1}^p\int_\Omega \tilde{k}(t,\xi)\nabla y_\ell(t,\xi)\cdot \nabla y_\ell(t,\xi)d\xi.
\end{equation}
It follows from (\ref{rdnet}) that:
\begin{equation}\label{rdot2}
\dot{V}(t)=2\int_\Omega \tilde{x}^T(t,\xi)P\left(f(x(t,\xi))-\dot{\bar{x}}+\sum_{\ell=1}^pB_\ell\nabla\cdot (k(t,\xi)\nabla y_\ell(t,\xi))\right)d\xi+2\sum_{\ell=1}^p\int_\Omega \tilde{k}(t,\xi)\nabla y_\ell(t,\xi)\cdot \nabla y_\ell(t,\xi)d\xi.
\end{equation}
Next, substituting $\nabla y_\ell(t,\xi)=\nabla \tilde{y}_\ell(t,\xi)$ and $\tilde{x}^T(t,\xi)PB_\ell=\tilde{y}_\ell(t,\xi)$, which follows from (\ref{OFP2}), we obtain:
\begin{eqnarray}\label{rdot3}
\dot{V}(t)&=&2\int_\Omega \tilde{x}^T(t,\xi)P(f(x(t,\xi))-\dot{\bar{x}})d\xi+2\sum_{\ell=1}^p\int_\Omega \tilde{y}_\ell(t,\xi)\nabla\cdot (k(t,\xi)\nabla \tilde{y}_\ell(t,\xi))d\xi\\&& +2\sum_{\ell=1}^p\int_\Omega \tilde{k}(t,\xi)\nabla \tilde{y}_\ell(t,\xi)\cdot \nabla \tilde{y}_\ell(t,\xi)d\xi.\nonumber
\end{eqnarray}
Since
\begin{equation}
\int_\Omega \tilde{x}^T(t,\xi)P(\dot{\bar{x}}(t)-f(\bar{x}(t)))d\xi=\left(\int_\Omega \tilde{x}^T(t,\xi)d\xi\right)P(\dot{\bar{x}}(t)-f(\bar{x}(t)))=0,
\end{equation}
which follows from (\ref{pidef}), we rewrite (\ref{rdot3}) as:
\begin{eqnarray}\label{rdot4}
\dot{V}(t)&=&2\int_\Omega \tilde{x}^T(t,\xi)P(f(x(t,\xi))-f(\bar{x}(t)))d\xi+2\sum_{\ell=1}^p\int_\Omega \tilde{y}_\ell(t,\xi)\nabla\cdot (k(t,\xi)\nabla \tilde{y}_\ell(t,\xi))d\xi\\&& +2\sum_{\ell=1}^p\int_\Omega \tilde{k}(t,\xi)\nabla \tilde{y}_\ell(t,\xi)\cdot \nabla \tilde{y}_\ell(t,\xi)d\xi.\nonumber
\end{eqnarray}
It then follows from (\ref{this}) that
\begin{equation}\label{rdot5}
\dot{V}(t)\le {\theta}\int_\Omega |\tilde{y}(t,\xi)|^2d\xi+2\sum_{\ell=1}^p\int_\Omega \tilde{y}_\ell(t,\xi)\nabla\cdot (k(t,\xi)\nabla \tilde{y}_\ell(t,\xi))d\xi +2\sum_{\ell=1}^p\int_\Omega \tilde{k}(t,\xi)\nabla \tilde{y}_\ell(t,\xi)\cdot \nabla \tilde{y}_\ell(t,\xi)d\xi.
\end{equation}
We now claim that:
\begin{equation}\label{claim}
\int_\Omega \tilde{y}_\ell(t,\xi)\nabla\cdot (k(t,\xi)\nabla \tilde{y}_\ell(t,\xi))d\xi=-\int_\Omega {k}(t,\xi)\nabla \tilde{y}_\ell(t,\xi)\cdot \nabla \tilde{y}_\ell(t,\xi)d\xi.
\end{equation}
This follows by first applying the identity $\nabla \cdot (fF)=f \nabla \cdot F+F\cdot \nabla f$, which holds when $f$ is scalar valued, with $F=k(t,\xi)\nabla \tilde{y}_\ell(t,\xi)$ and $f=\tilde{y}_\ell(t,\xi)$, next integrating both sides of the identity over $\Omega$, and finally noting that the left-hand side is zero, since:
\begin{equation}
\int_\Omega \nabla \cdot (\tilde{y}_\ell(t,\xi)k(t,\xi)\nabla \tilde{y}_\ell(t,\xi))d\xi=\int_{\partial \Omega} \tilde{y}_\ell(t,\xi)k(t,\xi)\nabla \tilde{y}_\ell(t,\xi) \cdot \hat{n}(\xi)dS
\end{equation}
from the Divergence Theorem and $\nabla \tilde{y}_\ell(t,\xi) \cdot \hat{n}(\xi)=0$ for $\xi \in \partial \Omega$ from the boundary condition (\ref{bc}). Substituting (\ref{claim}) in (\ref{rdot5}), we get:
\begin{equation}\label{rdot6}
\dot{V}(t)\le {\theta}\int_\Omega |\tilde{y}(t,\xi)|^2d\xi-2k^*\sum_{\ell=1}^{p}\int_\Omega \nabla \tilde{y}_\ell(t,\xi)\cdot \nabla \tilde{y}_\ell(t,\xi)d\xi.
\end{equation}
Moreover, because $\int_\Omega \tilde{y}_\ell(t,\xi) d\xi=0$, it follows from the Poincar\'{e} Inequality \cite[Equation (1.37)]{henrot}
that:
\begin{equation}\label{there}
\int_\Omega |\nabla{\tilde{y}_\ell(t,\xi)}|^2 d\xi \ge \lambda_2\int_\Omega \tilde{y}_\ell(t,\xi)^2 d\xi
\end{equation}
where $\lambda_2$ denotes the second smallest of the eigenvalues $0=\lambda_1\le \lambda_2 \le \cdots$ of the operator $L=-\nabla^2$ on $\Omega$ with Neumann boundary condition, and $\lambda_2>0$ since $\Omega$ is connected. Thus, (\ref{rdot6}) becomes:
\begin{equation}\label{rdot7}
\dot{V}(t)\le -\left(2k^*\lambda_2-{\theta}\right)\int_\Omega |\tilde{y}(t,\xi)|^2d\xi,
\end{equation}
and choosing $k^*$ large enough that $\epsilon:=2k^*\lambda_2-\theta>0$ guarantees:
\begin{equation}\label{rdot}
\dot{V}(t)\le -\epsilon\int_\Omega |\tilde{y}(t,\xi)|^2d\xi =: -\epsilon W(t).
\end{equation}
This implies that $\lim_{T\rightarrow \infty}\int_0^T W(t)dt$ exists and is bounded. Since $\dot{W}(t)$ is also bounded, it follows from Barbalat's Lemma \cite{khalil} that $W(t)\rightarrow 0$ as $t\rightarrow \infty$ which proves (\ref{rdsync}). \hfill $\Box$
\bibliographystyle{IEEEtran}
\bibliography{mybib,books,General}
\end{document}
tejasbarbhaya
- 93%Jobs Completed
- 100%On Budget
- 100%On Time
- 57%Repeat Hire Rate
Portfolio
Recent Reviews
Project for tejasbarbhaya -- 6 $1.00 USD
“Please aware this Guy, He making me fool regarding work and hours , I hire another guy for same work he done the work in 2 days and the same work Tejasbarbhaya said that he will do it one week and may be take more. so he just make me fool and making making money from me instead of work.. Last time I gave him 5 star feed back the reason behind that he grab my code and said he will give my code when I gave him 5 star feedback so I have no other option to gave him 5 star feedback otherwise I will gave him 0 star rating. Other thing that will charge you also for the search like he said when you hire him for project than he will charge you for searching things and no issue if searching anything took 8 hours and do nothing code then he also charge you this is really Insane and ridiculous .. I Never prefer this Guy to anyone . his purpose is to just make money. he ask you for money after every two days either you hire him for one month or year.. Please be aware this guy he will stuck you and grab your code ...”Avinay K.
1 year ago
Update search app $30.00 USD
“nice to work”mmaizer
1 year ago
Project for tejasbarbhaya -- 4 $128.00 USD
“Thanks all your hard work !! Move forward on next project !! He is excellent IOS Developer !! I recommended him to any one who really want Good work !!”Avinay K.
1 year ago
Project for tejasbarbhaya -- 2 $64.00 USD
“he is a great IOS developer!! Did everything as I needed !! I will definitely recommend him to other and hire again for my next project”Avinay K.
1 year ago
Write an iPad application -- 2 $55.00 AUD
“Excellent Programmer. Understand Project very well. Works with precise methodology and deliver project project on time and budget.”justm40271
1 year ago
Update current View $32.00 USD
“Very helpful for a project that required many setups. Would rehire again!”mmaizer
1 year ago
Experience
Senior Mobile Developer / ArchitectJan 2017
I am Tejas Barbhaya, with 8+ years of mobile app development experience, working on a full-time freelance basis. We build mobile applications for iOS and Android. We are currently seeking work at the best possible rate to build up. Looking forward to building a long and strong relationship with you. Let's shake hands!!
Education
Bechalor of Engineering in Computer Science2004 - 2008 (4 years)
Publications
Our Website
Find out more about our company at the website mentioned above.
Verifications
- Facebook Connected
- Preferred Freelancer
- Payment Verified
- Phone Verified
- Identity Verified
My Top Skills
- iPhone 9
- Mobile App Development 8
- Swift 7
- iPad 4
- Unity 3D 3
- Blackberry 1
\begin{document}
\title{\large Kernel canonical correlation analysis approximates operators for the detection of coherent structures in dynamical data}
\author{Stefan Klus}
\email[]{[email protected]}
\affiliation{Department of Mathematics and Computer Science, \mbox{Freie Universit\"at Berlin, 14195 Berlin, Germany}}
\author{Brooke E. Husic}
\email[]{[email protected]}
\affiliation{Department of Mathematics and Computer Science, \mbox{Freie Universit\"at Berlin, 14195 Berlin, Germany}}
\affiliation{Department of Chemistry, \mbox{Stanford University, Stanford, CA, 94305, USA}}
\author{Mattes Mollenhauer}
\email[]{[email protected]}
\affiliation{Department of Mathematics and Computer Science, \mbox{Freie Universit\"at Berlin, 14195 Berlin, Germany}}
\begin{abstract}
We illustrate relationships between classical kernel-based dimensionality reduction techniques and eigendecompositions of empirical estimates of \emph{reproducing kernel Hilbert space} (RKHS) operators associated with dynamical systems. In particular, we show that kernel \emph{canonical correlation analysis} (CCA) can be interpreted in terms of kernel transfer operators and that coherent sets of particle trajectories can be computed by applying kernel CCA to Lagrangian data. We demonstrate the efficiency of this approach with several examples, namely the well-known Bickley jet, ocean drifter data, and a molecular dynamics problem with a time-dependent potential. Furthermore, we propose a straightforward generalization of \emph{dynamic mode decomposition} (DMD) called \emph{coherent mode decomposition} (CMD).
\end{abstract}
\maketitle
\section{Introduction}
Over the last years, several kernel-based methods for the analysis of high-dimensional data sets have been developed, many of which can be seen as nonlinear extensions of classical linear methods, e.g., kernel \emph{principal component analysis} (PCA)~\cite{Scholkopf98:KPCA, Schoe01}, kernel \emph{canonical correlation analysis} (CCA)~\cite{MRB01:CCA, Bach03:KICA}, and kernel \emph{time-lagged independent component analysis} (TICA)~\cite{HZHM03:kTICA, SP15}. The basic idea behind these methods is to represent data by elements in reproducing kernel Hilbert spaces associated with positive definite kernels. These methods can be used for classification, feature extraction, clustering, or dimensionality reduction \cite{Schoe01, SC04:KernelMethods, Steinwart2008:SVM}.
The main goal of this work is to establish connections between machine learning and dynamical systems theory. While the intended applications might differ, the resulting approaches and algorithms often share many similarities. Relationships between kernel embeddings of conditional probability distributions\cite{SHSF09,MFSS16} and transfer operators such as the Perron--Frobenius and Koopman operators~\cite{Ko31, LaMa94} have recently been detailed in Ref.~\onlinecite{KSM17}. Furthermore, it has been shown that eigenfunctions of empirical estimates of these operators can be obtained by solving an auxiliary matrix eigenvalue problem. The eigenfunctions of transfer operators encode important global properties of the underlying dynamical system, which can, for instance, be used to detect metastable sets. Other applications include model reduction and control \cite{SS13, KBBP16}, which we will, however, not consider here. Metastability is related to the existence of a spectral gap:~for short time scales, the system appears to be equilibrated, but actually explores the state space of the system only locally; at long time scales, however, there are rare transitions between such metastable states \cite{Bovier06:metastability}. In the molecular dynamics context, for example, metastable states correspond to different conformations of molecules. We now want to extend this framework to so-called \emph{coherent sets}---a generalization of metastable sets to nonautonomous and aperiodic systems~\cite{FJ18:coherent}---, which again can be regarded as eigenfunctions of certain operators associated with a dynamical system. Coherent sets are regions of the state space that are not dispersed over a specific time interval. That is, if we let the system evolve, elements of a coherent set will, with a high probability, stay close together, whereas other regions of the state space might be distorted entirely.
There is an abundance of publications on the numerical approximation of coherent sets, which we will not review in detail here (see, e.g., Refs.~\onlinecite{FrJu15, WRR15, HKTH16:coherent, BK17:coherent, HSD18, FJ18:coherent} and references therein). A comparison of approaches for Lagrangian data, which we mainly consider here, can be found in Ref.~\onlinecite{AP15:review}. Furthermore, we will not address the problem of possibly sparse or incomplete data. Our goal is to illustrate relationships with established kernel-based approaches and to show that existing methods---developed independently and with different applications in mind, predating many algorithms for the computation of finite-time coherent sets---can be directly applied to detect coherent sets in Lagrangian data. Comparisons to some recently proposed singular value decomposition approaches are presented in Appendix~\ref{app:relationships}, and potential improvements and generalizations of these methods will be considered in future work.
In this work, we show that kernel CCA, when applied to dynamical data, admits a natural interpretation in terms of kernel transfer operators and that the resulting eigenvalue problems are directly linked to methods for the computation of coherent sets. In Section~\ref{sec:Prerequisites}, we will briefly introduce transfer operators and review the notion of positive definite kernels and induced Hilbert spaces as well as nonlinear generalizations of covariance and cross-covariance matrices. We will then define empirical RKHS operators and show that diverse algorithms can be formulated as eigenvalue problems involving such operators. The relationships between kernel CCA and coherent sets will be studied in Section~\ref{sec:Kernel CCA and coherent sets}. Furthermore, we will propose a method called \emph{coherent mode decomposition}, which can be seen as a combination of CCA and DMD~\cite{Schmid10, TRLBK14, KBBP16}. Section~\ref{sec:Numerical results} contains numerical results illustrating how to use the presented kernel-based methods for the analysis of dynamical systems. We conclude with a summary of the main results and open problems in Section~\ref{sec:Conclusion}.
\section{Prerequisites}
\label{sec:Prerequisites}
We will only briefly introduce transfer operators, reproducing kernel Hilbert spaces, and operators mapping from one such space to another one (or itself). For more details on the properties of these spaces and the introduced operators, we refer the reader to Refs.~\onlinecite{Schoe01, Steinwart2008:SVM, SC04:KernelMethods} and Refs.~\onlinecite{Baker70:XCov, Baker1973, KSM17}, respectively.
\subsection{Transfer operators}
\label{ssec:transfer_operators}
Let $ \{ X_t \}_{t \ge 0} $ be a stochastic process defined on the state space $ \inspace \subset \R^d $ and let $ \tau $ be a fixed lag time. We assume that there exists a \emph{transition density function} $ p_\tau \colon \inspace \times \inspace \rightarrow \R $ such that $ p_\tau(y \mid x) $ is the probability of $ X_{t + \tau} = y $ given $ X_t = x $.
Given $ 1 \leq r \leq \infty$, we write $L^r(\inspace)$ for the standard space of $r$-Lebesgue integrable functions on $\inspace$. Then, for a probability density $\mu$ on $\inspace$, let $L^r_\mu (\inspace)$ denote the space of functions that are $r$-integrable with respect to the probability measure induced by the density $\mu$; that is, $\norm{f}^r_{L^r_\mu(\inspace)} = \int |f(x)|^r \ts \mu(x) \ts \dd x$.
Given a probability density $ p \in L^1(\inspace) $ and an observable $ f \in L^\infty(\inspace)$, we define the \emph{Perron--Frobenius operator} $ \pf \colon L^1(\inspace) \to L^1(\inspace) $ and the \emph{Koopman operator} $ \ko \colon L^{\infty}(\inspace) \to L^{\infty}(\inspace) $ by
\begin{align*}
\left(\pf p \right)(y) &= \int p_\tau(y \mid x) \ts p(x) \ts \dd x, \\
\left(\ko f\right)(x) &= \int p_\tau(y \mid x) \ts f(y) \ts \dd y.
\end{align*}
Assuming the process admits a unique equilibrium density $ \pi $, i.e., $ \pf \pi = \pi $, we can define, for $ u(x) = \pi(x)^{-1} \ts p(x) $, the \emph{Perron--Frobenius operator with respect to the equilibrium density} $ \mathcal{T} \colon L_\pi^1(\inspace) \to L_\pi^1(\inspace) $ as
\begin{equation*}
\left(\mathcal{T} u\right)(y) = \frac{1}{\pi(y)} \int p_\tau(y \mid x) \ts \pi(x) \ts u(x) \ts \dd x.
\end{equation*}
Under certain conditions, these transfer operators can be defined on $ L^r(\inspace) $ and $L^r_\pi(\inspace)$ for other choices of $r$.
From now on, we will always assume that they are well-defined for $ r = 2 $ (see Refs.~\onlinecite{LaMa94, BaRo95, KKS16} for details). This is common whenever Hilbert space properties are needed in the context of transfer operators.
\begin{remark}
For time-homogeneous systems, the associated transfer operators depend only on the lag time $ \tau $. If the system is time-inhomogeneous, on the other hand, the lag time is not sufficient to parametrize the evolution of the system since it also depends on the starting time. This is described in detail in Ref.~\onlinecite{KWNS18:noneq}. The transition density and the operators thus require two parameters; however, we will omit the starting time dependence for the sake of clarity.
\end{remark}
\subsection{Reproducing kernel Hilbert spaces}
Given a set $ \inspace $ and a space $ \mathbb{H} $ of functions $ f \colon \inspace \to \R $, $ \mathbb{H} $ is called a \emph{reproducing kernel Hilbert space (RKHS)} with inner product $ \innerprod{\cdot}{\cdot}_\mathbb{H} $ if there exists a function $ k \colon \inspace \times \inspace \to \R $ with the following properties:
\begin{enumerate}[label=(\roman*), itemsep=0ex, topsep=1ex]
\item $ \innerprod{f}{k(x, \cdot)}_\mathbb{H} = f(x) $ for all $ f \in \mathbb{H} $, and
\item $ \mathbb{H} = \overline{\mspan\{k(x, \cdot) \mid x \in \inspace \}} $.
\end{enumerate}
The function $ k $ is called a \emph{kernel} and the first property above the \emph{reproducing property}. A direct consequence is that $ \innerprod{k(x, \cdot)}{k(x^\prime, \cdot)}_\mathbb{H} = k(x, x^\prime) $. That is, the map $ \phi \colon \inspace \rightarrow \rkhs $
given by $ x \mapsto k(x, \cdot) $ can be regarded as a feature map associated with $ x $, the so-called \emph{canonical feature map}.\!\footnote{Such a feature map $ \phi \colon \inspace \rightarrow \rkhs $
admitting the property $ k(x, x^\prime) = \innerprod{\phi(x)}{\phi(x^\prime)}_\mathbb{H} $
is not uniquely defined. There are other feature space representations such as, for instance, the Mercer feature space.\!\cite{Mercer, Schoe01, Steinwart2008:SVM} As long as we are only interested in kernel evaluations, however, it does not matter which one is considered.}
It is thus possible to represent data by functions in the RKHS. Frequently used kernels include the polynomial kernel and the Gaussian kernel, given by $ k(x, x^\prime) = (c + x^\top x^\prime)^p $ and $ k(x, x^\prime) = \exp(-\|x-x^\prime\|_2^2/2\sigma^2) $, respectively. While the feature space associated with the polynomial kernel is finite-dimensional, the feature space associated with the Gaussian kernel is infinite-dimensional; see, e.g., Ref.~\onlinecite{Steinwart2008:SVM}. Inner products in these spaces, however, are not evaluated explicitly, but only implicitly through kernel evaluations. This is one of the main advantages of kernel-based methods \cite{Scholkopf98:KPCA, SC04:KernelMethods}. Algorithms that can be purely expressed in terms of inner product evaluations can thus be easily \emph{kernelized}, resulting, as described above, in nonlinear extensions of methods such as PCA, CCA, or TICA.
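For concreteness, these two kernels can be implemented in a few lines of Python; the parameter values below are arbitrary and the snippet serves only as an illustration.

\begin{verbatim}
import numpy as np

def polynomial_kernel(x, y, c=1.0, p=2):
    # k(x, x') = (c + x^T x')^p
    return (c + np.dot(x, y)) ** p

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, x') = exp(-||x - x'||_2^2 / (2 sigma^2))
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

x, xp = np.array([0.3, -1.2]), np.array([1.0, 0.5])
print(polynomial_kernel(x, xp), gaussian_kernel(x, xp))
\end{verbatim}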
\subsection{Covariance operators and Gram matrices}
Let $(X,Y)$ be a random variable on $ \inspace \times \outspace $, where $ \inspace \subset \R^{d_x} $ and $ \outspace \subset \R^{d_y} $. The dimensions $ d_x $ and $ d_y $ can in principle be different. For our applications, however, the spaces $ \inspace $ and $ \outspace $ are often identical. The associated marginal distributions are denoted by $\pp{P}_x(X)$ and $\pp{P}_y(Y)$, the joint distribution by $\pp{P}(X,Y)$, and the corresponding densities---which we assume exist---by $ p_x(x) $, $ p_y(y) $, and $ p(x, y) $, respectively. Furthermore, let $k$ and $l$ be the kernels associated with $ \inspace $ and $ \outspace$ and $\phi $ and $\psi$ the respective feature maps. We will always assume that requirements such as measurability of the kernels and feature maps as well as separability of the RKHSs are satisfied.\!\footnote{In most cases, these properties follow from mild assumptions about $ \inspace $ and $ \outspace $. For an in-depth discussion of these technical details, see Ref.~\onlinecite{Steinwart2008:SVM}.} The RKHSs induced by the kernels $ k $ and $ l $ are denoted by $ \rkhs[X] $ and $ \rkhs[Y] $.
We will now introduce covariance operators and cross-covariance operators \cite{Baker70:XCov, Baker1973} on RKHSs. In what follows, we will always assume that $\mathbb{E}_{\scriptscriptstyle \mathit{X}}[k(X,X)] < \infty$
and $\mathbb{E}_{\scriptscriptstyle \mathit{Y}}[l(Y,Y)] < \infty$,
which ensures that these operators are well-defined and Hilbert--Schmidt (for a comprehensive overview of kernel covariance operators and their applications, see Ref.~\onlinecite{MFSS16} and references therein). For any $ f \in \rkhs[X] $, let
\begin{equation*}
\psi(Y) \otimes \phi(X) \colon f \mapsto \psi(Y) \innerprod{\phi(X)}{f}_{\rkhs[X]}
\end{equation*}
denote the \emph{tensor product operator} \cite{Reed} from $ \rkhs[X] $ to $ \rkhs[Y] $ defined by $ \phi(X) $ and $ \psi(Y) $.
\begin{definition}[Covariance operators]
The \emph{covariance operator} $ \cov[XX] \colon \rkhs[X] \to \rkhs[X] $ and the \emph{cross-covariance operator} $ \cov[YX] \colon \rkhs[X] \to \rkhs[Y] $ are defined as
\begin{alignat*}{4}
\cov[XX] & := \int \phi(X) \otimes \phi(X) \ts \dd \pp{P}(X)
&&= \mathbb{E}_{\scriptscriptstyle X}[\phi(X) \otimes \phi(X)], \\
\cov[YX] & := \int \psi(Y) \otimes \phi(X) \ts \dd \pp{P}(Y,X)
&&= \mathbb{E}_{\scriptscriptstyle \mathit{YX}}[\psi(Y) \otimes \phi(X)].
\end{alignat*}
\end{definition}
Kernel covariance operators satisfy
\begin{equation*}
\innerprod{g}{\cov[YX]f}_{\rkhs[Y]} = \mathrm{Cov}[g(Y), f(X)]
\end{equation*}
for all $ f \in \rkhs[X] $, $ g \in \rkhs[Y] $. Defining $ \phi_c(X) = \phi(X) - \mathbb{E}_{\scriptscriptstyle X}[\phi(X)] $ and $ \psi_c(Y) = \psi(Y) - \mathbb{E}_{\scriptscriptstyle Y}[\psi(Y)] $, the corresponding centered counterparts of the covariance and cross-covariance operators $\cov[XX]$ and $\cov[YX]$ are defined in terms of the mean-subtracted feature maps.
As these operators can in general not be determined analytically, empirical estimates are computed from data, i.e.,
\begin{align}
\begin{split} \label{eq:cov_estimates}
\ecov[XX] &= \frac{1}{n} \sum_{i=1}^n \phi(x_i) \otimes \phi(x_i)
= \frac{1}{n} \Phi \Phi^\top, \\
\ecov[YX] &= \frac{1}{n} \sum_{i=1}^n \psi(y_i)\otimes\phi(x_i)
= \frac{1}{n} \Psi \Phi^\top,
\end{split}
\end{align}
where $ \Phi = [\phi(x_1), \dots, \phi(x_n)] $ and $ \Psi = [\psi(y_1), \dots, \psi(y_n)] $ and the training data $ \{(x_i, y_i)\}_{i=1}^n $ is drawn i.i.d.\ from $ \pp{P}(X, Y) $. Analogously, the mean-subtracted feature maps can be used to obtain empirical estimates of the centered operators.
Since in practice we often cannot explicitly deal with these operators, in particular if the feature space is infinite-dimensional, we seek to reformulate algorithms in terms of Gram matrices.
\begin{definition}[Gram matrices]
Given training data as defined above, the Gram matrices $ \gram[XX], \gram[YY] \in \R^{n \times n} $ are defined as
\begin{align*}
\gram[XX] &= \Phi^\top \Phi = \big[\ts k(x_i, x_j) \ts\big]_{i,j=1}^n, \\
\gram[YY] &= \Psi^\top \Psi = \big[\ts l(y_i, y_j) \ts\big]_{i,j=1}^n.
\end{align*}
\end{definition}
For a Gram matrix $ G $, its centered version $ \widetilde{G} $ is defined by $ \widetilde{G} = N_0 \ts G \ts N_0 $, where $ N_0 = \id - \frac{1}{n} \mathds{1} \mathds{1}^\top $ and $ \mathds{1} \in \R^n $ is a vector composed of ones \cite{Bach03:KICA}. Note that centered Gram matrices are not regular.
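In Python, the (centered) Gram matrices can be assembled directly from kernel evaluations; the following sketch assumes a kernel function as defined above and is not optimized for large data sets.

\begin{verbatim}
import numpy as np

def gram_matrix(X, kernel):
    # X: array of shape (n, d) containing the training points x_1, ..., x_n
    n = X.shape[0]
    return np.array([[kernel(X[i], X[j]) for j in range(n)]
                     for i in range(n)])

def center_gram(G):
    # G_tilde = N_0 G N_0 with N_0 = I - (1/n) 1 1^T
    n = G.shape[0]
    N0 = np.eye(n) - np.ones((n, n)) / n
    return N0 @ G @ N0
\end{verbatim}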
\begin{remark}
In what follows, if not noted otherwise, we assume that the covariance operators $ \cov[XX] $ and $ \cov[YY] $ and the Gram matrices $ \gram[XX] $ and $ \gram[YY] $ are properly centered for CCA.
\end{remark}
\subsection{Kernel transfer operators}
\label{ssec:Kernel_transfer_operators}
We now show how transfer operators can be written in terms of covariance and cross-covariance operators---this leads to the concept of \emph{kernel transfer operators}. Note that we assume the Perron--Frobenius
operator and the Koopman operator to be well-defined on $ L^2(\inspace) $ as discussed in Section~\ref{ssec:transfer_operators}. Kernel transfer operators follow from the assumption that densities and observables in $ L^2(\inspace) $ can be represented as elements of the RKHS $ \rkhs[X] $. Under some technical requirements, such as $ \int_\inspace k(x,x) \ts \dd x = \int_\inspace \norm{\phi(x)}_{\rkhs[X]}^2 \ts \dd x < \infty$, the elements of $ \rkhs[X] $ are included in $ L^2(\inspace) $ when they are identified with the respective equivalence class of square integrable functions. This correspondence can be derived from the theory of $ L^2(\inspace) $ integral operators\cite{Steinwart2008:SVM} and is often used in statistical learning theory~\cite{RBD10}. We may therefore assume that we can identify RKHS elements with the corresponding equivalence classes of functions in $L^2(\inspace)$. By requiring $\mathbb{E}_{\mu}[k(X,X)] < \infty$ for a probability density $\mu(x)$, we obtain a similar statement for $L^2_\mu(\inspace)$.
We refer to Ref.~\onlinecite{KSM17} for the derivation of kernel transfer operators and a description of their relationships with kernel embeddings of conditional distributions. We will omit the technical details and directly define kernel transfer operators as the RKHS analogue of the standard transfer operators defined in Section~\ref{ssec:transfer_operators}. Using the same integral representations as before and defining the transfer operators on $\rkhs[X]$ instead of $L^2(\inspace)$, we obtain the \emph{kernel Perron--Frobenius operator}
$\pf[k] \colon\rkhs[X] \rightarrow \rkhs[X] $ and the \emph{kernel Koopman operator} $ \ko[k] \colon \rkhs[X] \rightarrow \rkhs[X] $, respectively.
By defining the time-lagged process $Y_t = X_{t + \tau}$, we can write kernel transfer operators in terms of covariance and cross-covariance operators~\cite{KSM17}.
Note that $X_t$ and $Y_t$ are defined on the same state space $ \inspace $;
therefore, we have $\rkhs[X] = \rkhs[Y] $ and hence
$ \cov[YX] : \rkhs[X] \rightarrow \rkhs[X] $ in this special case.
We obtain the important properties $ \cov[XX] \pf[k] \ts g = \cov[YX] g $
and $ \cov[XX] \ko[k] \ts g = \cov[XY] g $
for all $ g \in \rkhs[X] $, which allows us to write
\begin{equation} \label{eq:KTOs}
\begin{split}
\pf[k] &= (\cov[XX] + \varepsilon \idop)^{-1} \ts \cov[YX], \\
\ko[k] &= (\cov[XX] + \varepsilon \idop)^{-1} \ts \cov[XY].
\end{split}
\end{equation}
Here, $ (\cov[XX] + \varepsilon \idop)^{-1} $ is the Tikhonov-regularized
inverse of $ \cov[XX] $ with regularization parameter $\varepsilon > 0$.\!\footnote{See Refs.~\onlinecite{Gr93, EG96, EHN96} for a detailed discussion of ill-posed inverse problems and the regularization of bounded linear operators on Hilbert spaces.} Note the abuse of notation, since equality in the above inverse problems is only given asymptotically for $\varepsilon \to 0$ and pointwise for feasible $ \cov[YX] g \in \rkhs[X]$. Since $ \cov[XX] $ is a compact operator,
it does not admit a globally defined bounded inverse if the RKHS is infinite-dimensional. However, $ (\cov[XX] + \varepsilon \idop)^{-1} $ always exists and is bounded. In fact, the operators $ \pf[k] $ and $ \ko[k] $ as given in the regularized form above are Hilbert--Schmidt.
The above notation and regularization of inverse covariance operators is standard in the context of kernel embeddings of conditional distributions and related Bayesian learning techniques. We refer to Refs.~\onlinecite{SHSF09, Song2013, Fukumizu13:KBR, Fukumizu15:NBI, MFSS16} for detailed discussions of properties of this ill-posed inverse problem in specific applications.
By replacing the analytical covariance operators with their empirical estimates in~\eqref{eq:KTOs}, we obtain empirical estimates for kernel transfer operators \cite{KSM17}.
As done with empirical covariance operators in~\eqref{eq:cov_estimates}, it is possible to rewrite the empirical estimates of kernel transfer operators in terms of RKHS features in $ \Phi $ and $\Psi$
(see Refs.~\onlinecite{MFSS16, KSM17} for the derivation):
\begin{align}
\begin{split} \label{eq:KTO_estimates}
\hat{\pf[k]} &= (\ecov[XX] + \varepsilon \idop)^{-1} \ecov[YX] \\ &= \Psi \ts \big(\gram[XY]^{-1} \ts (\gram[XX] + n \varepsilon \id)^{-1} \ts \gram[XY]\big) \ts \Phi^\top, \\
\hat{\ko[k]} &= (\ecov[XX] + \varepsilon \idop)^{-1} \ecov[XY] \\ &= \Phi \ts (\gram[XX] + n \varepsilon \id)^{-1} \ts \Psi^\top.
\end{split}
\end{align}
Note that now $\Phi$ and $\Psi$ both contain observations in the same space $\rkhs[X]$, since $X_t$ and $Y_t$ are both defined on $\inspace$.
\subsection{Empirical RKHS operators}
In what follows, we will consider finite-rank RKHS operators given by a matrix which represents the action of the operator on fixed elements in the RKHSs. We will use this general setting to formulate results about the eigenvalues and eigenfunctions of empirical RKHS operators. Given a matrix $ B \in \R^{n \times n} $, we define the bounded finite-rank operator $ \hat{\mathcal{S}} \colon \rkhs[X] \to \rkhs[Y] $ by
\begin{equation} \label{eq:empirical_operator}
\hat{\mathcal{S}} = \Psi B \Phi^\top = \sum_{i,j=1}^n b_{ij} \ts \psi(y_i) \otimes \phi(x_j).
\end{equation}
We remark that although $ \Psi $ and $ \Phi $ may contain infinite-dimensional objects, we express inner products between RKHS elements in the classical matrix-vector multiplication form. That is, we interpret the embedded RKHS elements as (potentially infinite-dimensional) column vectors. This notation has become a de-facto standard in the machine learning community~\cite{MFSS16}. We can write empirical estimates of covariance operators in the form of~\eqref{eq:empirical_operator}. If the RKHS training features in $\Phi$ and $ \Psi $ are generated i.i.d.\ by the joint probability distribution $ \mathbb{P}(X,Y) $ of random variables $ X $ and $ Y $, then the cross-covariance operator $ \ecov[YX] $ takes the general form of an empirical RKHS operator with $ B = \frac{1}{n} \id $. We obtain $ \ecov[XX] $ as another special case with identical features $\Psi = \Phi $ drawn only from $ \mathbb{P}(X) $. Furthermore, the empirical estimates of the kernel Perron--Frobenius and kernel Koopman operator are special cases of $ \hat{\mathcal{S}} $ as seen in~\eqref{eq:KTO_estimates} with $ B = \gram[XY]^{-1} \ts (\gram[XX] + n \varepsilon \id)^{-1} \ts \gram[XY] $ and $B = (\gram[XX] + n \varepsilon \id)^{-1} $, respectively. Note that the roles of $ \Phi $ and $ \Psi $ are interchanged for the empirical estimate of the Koopman operator, i.e., it is of the form $ \hat{\mathcal{S}} = \Phi B \Psi^\top$.
We now show how spectral decomposition techniques can be applied to empirical RKHS operators in this general setting.\!\footnote{In general, all considered kernel transfer operators in this paper are compositions of compact and bounded operators and therefore compact. They admit series representations in terms of singular value decompositions as well as eigendecompositions in the self-adjoint case\cite{Reed}. The functional analytic details and the convergence of $ \hat{\mathcal{S}} $ and its spectral properties in the infinite-data limit depend on the specific scenario and are beyond the scope of this paper.} We can compute eigenvalues and corresponding eigenfunctions of $ \hat{\mathcal{S}} $ by solving auxiliary matrix eigenvalue problems.
For the sake of self-containedness, we briefly reproduce the eigendecomposition result from Ref.~\onlinecite{KSM17}.
\begin{proposition} \label{prop:eigenfunctions}
Suppose $ \Phi $ and $ \Psi $ contain linearly independent elements. Let $ \hat{\mathcal{S}} = \Psi B \Phi^\top $, then
\begin{enumerate}[wide, label=(\roman*), itemindent=\parindent, itemsep=0ex, topsep=0.5ex]
\item $ \hat{\mathcal{S}} $ has an eigenvalue $ \lambda \ne 0 $ with corresponding eigenfunction $ \varphi = \Psi v $ if and only if $ v $ is an eigenvector of $ B \ts \gram[XY] $ associated with $ \lambda $, and, similarly,
\item $ \hat{\mathcal{S}} $ has an eigenvalue $ \lambda \ne 0 $ with corresponding eigenfunction $ \varphi = \Phi \ts \gram[XX]^{-1} \ts v $ if and only if $ v $ is an eigenvector of $ \gram[XY] \ts B $.
\end{enumerate}
\end{proposition}
Note that for the Gaussian kernel, linear independence of elements in $ \Phi $ and $ \Psi $ reduces to requiring that the training data contains pairwise distinct elements
in $ \inspace $ and $ \outspace $, respectively.
For dynamical systems applications, we typically assume that $ \Phi $ and $ \Psi $ contain information about the system at time $ t $ and at time $ t + \tau $, respectively.
A more detailed version of Proposition~\ref{prop:eigenfunctions} and
its extension to the singular value decomposition are described in Ref.~\onlinecite{MSKS18}. Further properties of $ \hat{\mathcal{S}} $ and its decompositions will be studied in future work. Note that we generally assume that empirical estimates of RKHS operators converge in probability to their analytical counterparts in operator norm in the infinite-data limit. These statistical properties and the associated spectral convergence are examined, for example, in Ref.~\onlinecite{RBD10}.
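To make Proposition~\ref{prop:eigenfunctions} concrete, the following Python sketch computes eigenvalues and eigenfunction coefficients of an empirical RKHS operator $ \hat{\mathcal{S}} = \Psi B \Phi^\top $ via the auxiliary matrix eigenvalue problem in part (i). It assumes the transfer operator setting, where $ X $ and $ Y $ live on the same space and share the kernel $ k $, so that $ \gram[XY] = \Phi^\top \Psi = [\ts k(x_i, y_j) \ts]_{i,j=1}^n $; the snippet is an illustration rather than an optimized implementation.

\begin{verbatim}
import numpy as np

def rkhs_operator_eigen(B, G_XY):
    # Proposition 1 (i): pairs (lambda, v) with B G_XY v = lambda v yield
    # eigenvalues of S_hat = Psi B Phi^T with eigenfunctions phi = Psi v.
    lam, V = np.linalg.eig(B @ G_XY)
    idx = np.argsort(-np.abs(lam))      # sort by magnitude
    return lam[idx], V[:, idx]

def eval_eigenfunction(v, Y, kernel, z):
    # evaluate phi = Psi v at a new point z:  phi(z) = sum_i v_i k(y_i, z)
    return sum(v[i] * kernel(Y[i], z) for i in range(len(Y)))
\end{verbatim}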
\subsection{Applications of RKHS operators}
Decompositions of RKHS operators have diverse applications, which we will only touch upon here. We will consider a specific problem---namely, kernel CCA---in Section~\ref{sec:Kernel CCA and coherent sets}.
\begin{enumerate}[wide, label=(\alph*), itemindent=\parindent, itemsep=0ex, topsep=0.5ex]
\item By sampling points from the uniform distribution, the \emph{Mercer feature map}\cite{Mercer, Schoe01, Steinwart2008:SVM} with respect to the Lebesgue measure on $ \inspace $ can be approximated by computing eigenfunctions of $ \ecov[XX] $---i.e., $ B = \frac{1}{n} \id $ and the auxiliary matrix eigenvalue problem is $ \frac{1}{n} \gram[XX] \ts v = \lambda \ts v $---as shown in Ref.~\onlinecite{MSKS18}. This can be easily extended to other measures.
\item Similarly, given an arbitrary data set $ \{x_i\}_{i=1}^n $, \emph{kernel PCA} computes the eigenvectors corresponding to the largest eigenvalues of the centered Gram matrix $ \gram[XX] $ and defines these eigenvectors as the data points projected onto the respective principal components. It is well-known that kernel PCA can also be defined in terms of the centered covariance operator $ \ecov[XX] $. A detailed connection of the spectrum of the Gram matrix and the covariance operator is given in Ref.~\onlinecite{STWCK02}.
Up to scaling, the eigenfunctions evaluated in the data points correspond to the principal components.
\item Given training data $ x_i \sim p_x $ and $ y_i = \Theta^\tau(x_i) $, where $ \Theta $ denotes the flow associated with the dynamical system and $ \tau $ the lag time---that is, if $ x_i $ is the state of the system at time $ t $, then $ y_i $ is the state of the system at time $ t + \tau $---, we define $ \Phi $ and $ \Psi $ as above. Eigenvalues and eigenfunctions of kernel transfer operators can be computed by solving a standard matrix eigenvalue problem (see Proposition~\ref{prop:eigenfunctions}).
Eigendecompositions of these operators result in metastable sets. For more details and real-world examples, see Refs.~\onlinecite{KSM17, KBSS18}. The main goal of this paper is the extension of the aforementioned methods to compute \emph{coherent sets} instead of \emph{metastable sets}.
\end{enumerate}
\section{Kernel CCA and coherent sets}
\label{sec:Kernel CCA and coherent sets}
Given two multidimensional random variables $ X $ and $ Y\! $, standard CCA finds two sets of basis vectors such that the correlations between the projections of $ X $ and $ Y $ onto these basis vectors are maximized \cite{Borga01:CCA}. The new bases can be found by computing the dominant eigenvalues and corresponding eigenvectors of a matrix composed of covariance and cross-covariance matrices. Just like kernel PCA is a nonlinear extension of PCA, kernel CCA is a generalization of CCA. The goal of kernel CCA is to find two \emph{nonlinear} mappings $ f(X) $ and $ g(Y) $, where $ f \in \rkhs[X] $ and $ g \in \rkhs[Y] $, such that their correlation is maximized \cite{Fukumizu07:KCCA}. That is, instead of matrices, kernel CCA is now formulated in terms of covariance and cross-covariance operators. More precisely, the kernel CCA problem can be written as
\begin{equation*}
\sup_{\substack{ f \in \rkhs[X] \\ g \in \rkhs[Y] }} \innerprod{g}{\cov[YX] f}_{\rkhs[Y]} \quad \text{s.t.} \quad
\begin{cases}
\innerprod{f}{\cov[XX]f}_{\rkhs[X]} = 1, \\
\innerprod{g}{\cov[YY]g}_{\rkhs[Y]} = 1,
\end{cases}
\end{equation*}
and the solution is given by the eigenfunctions corresponding to the largest eigenvalue of the problem
\begin{equation} \label{eq:KCCA-eig-full}
\begin{cases}
\cov[YX] f = \rho \ts \cov[YY] \ts g, \\
\cov[XY] g = \rho \ts \cov[XX] \ts f.
\end{cases}
\end{equation}
Further eigenfunctions corresponding to subsequent eigenvalues can be taken into account as in the standard setting described above. In practice, the eigenfunctions are estimated from finite samples. The empirical estimates of $ f $ and $ g $ are denoted by $ \hat{f} $ and $ \hat{g} $, respectively.
\begin{example}
In order to illustrate kernel CCA, let us analyze a synthetic data set similar to the one described in Ref.~\onlinecite{Fukumizu07:KCCA} using a Gaussian kernel with bandwidth $ \sigma = 0.3 $. Algorithms to solve the CCA problem will be described below. The results are shown in Figure~\ref{fig:KernelCCA}. Note that classical CCA would not be able to capture the nonlinear relationship between $ X $ and~$ Y $. \exampleSymbol
\begin{figure*}
\begin{minipage}{0.245\textwidth}
\centering
\subfiguretitle{\hspace{1.9em} (a)} \vspace*{0.3ex}
\includegraphics[width=\textwidth]{pics/CCA1}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\centering
\subfiguretitle{\hspace{1.9em} (b)} \vspace*{-0.6ex}
\includegraphics[width=\textwidth]{pics/CCA2}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\centering
\subfiguretitle{\hspace{1.9em} (c)} \vspace*{-0.6ex}
\includegraphics[width=\textwidth]{pics/CCA3}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\centering
\subfiguretitle{\hspace{1.6em} (d)} \vspace*{-0.6ex}
\includegraphics[width=\textwidth]{pics/CCA4}
\end{minipage}
\caption{Kernel CCA applied to noisy generalized superellipse data. The transformed variables $ \hat{f}(X) $ and $ \hat{g}(Y) $ are clearly highly correlated.}
\label{fig:KernelCCA}
\end{figure*}
\end{example}
\subsection{RKHS operator formulation}
\label{ssec:RKHS operator formulation}
Since the inverses of the covariance operators in general do not exist, the regularized versions $(\cov[XX] + \varepsilon \idop)^{-1}$ and $(\cov[YY] + \varepsilon \idop)^{-1}$ (cf.~Section~\ref{ssec:Kernel_transfer_operators}) are also typically used in the context of CCA\cite{Fukumizu07:KCCA}. Solving the first equation in \eqref{eq:KCCA-eig-full} for $ g $ and inserting the result into the second equation yields
\begin{equation} \label{eq:KCCA-eig-reduced}
\quad \big(\cov[XX] + \varepsilon \idop\big)^{-1} \cov[XY] \big(\cov[YY] + \varepsilon \idop\big)^{-1} \cov[YX] f = \rho^2 f.
\end{equation}
Comparing this with the aforementioned transfer operator representations \eqref{eq:KTOs}, $ (\cov[XX] + \varepsilon \idop)^{-1} \cov[XY] $ can be interpreted as an approximation of the kernel Koopman operator, and $ (\cov[YY] + \varepsilon \idop)^{-1} \cov[YX] $ as a kernel Koopman operator where now the roles of $ X $ and $ Y $ are reversed or as a reweighted Perron--Frobenius operator. The composition of these operators corresponds to a push-forward and subsequent pull-back of a density $ f $. Eigenfunctions of the operator whose associated eigenvalues are close to one thus remain nearly unchanged under the forward-backward dynamics. This is closely related to the notion of \emph{coherence} and will be discussed in Section~\ref{ssec:Relationships between kernel CCA and transfer operators}.
\begin{lemma}
Replacing the covariance and cross-covariance operators by their empirical estimates, the eigenvalue problem~\eqref{eq:KCCA-eig-reduced} can be written as
\begin{equation*}
\Phi B \Phi^\top \hat{f} = \rho^2 \hat{f},
\end{equation*}
with $ B = (\gram[XX] + n \varepsilon \id)^{-1} (\gram[YY] + n \varepsilon \id)^{-1} \gram[YY] $.
\end{lemma}
\begin{proof}
Inserting the definitions of the empirical covariance and cross-covariance operators yields
\begin{equation*}
\big(\Phi \Phi^\top + n \varepsilon \idop\big)^{-1} \Phi \Psi^\top \big(\Psi \Psi^\top + n \varepsilon \idop\big)^{-1} \Psi \Phi^\top \hat{f} = \rho^2 \hat{f}.
\end{equation*}
Using $ \Psi^\top \left(\Psi\Psi^\top + n \varepsilon \idop\right)^{-1} = \left( \Psi^\top \Psi + n \varepsilon \id\right)^{-1} \Psi^\top $, see Ref.~\onlinecite{MFSS16}, and a similar identity for $ \Phi $ concludes the proof.
\end{proof}
That is, the empirical RKHS operator for kernel CCA is of the form $ \hat{S} = \Phi B \Phi^\top $. Applying Proposition~\ref{prop:eigenfunctions}, we must solve the auxiliary problem
\begin{enumerate}[label=(\roman*), itemsep=0ex, topsep=1ex]
\item $ (\gram[XX] + n \varepsilon \id)^{-1} (\gram[YY] + n \varepsilon \id)^{-1} \gram[YY] \ts \gram[XX] \ts v = \rho^2 \ts v $, with $ \hat{f} = \Phi \ts v $, or
\item $ \gram[XX] (\gram[XX] + n \varepsilon \id)^{-1} (\gram[YY] + n \varepsilon \id)^{-1} \gram[YY] \ts v = \rho^2 \ts v $, with $ \hat{f} = \Phi \ts (\gram[XX] + n \varepsilon \id)^{-1} \ts v $.
\end{enumerate}
Since $\gram[XX]$ and $(\gram[XX] + n \varepsilon \id)^{-1}$ as well as $\gram[YY]$ and $(\gram[YY] + n \varepsilon \id)^{-1}$ commute, the first problem can be equivalently rewritten as $ (\gram[XX] + n \varepsilon \id)^{-1} \gram[YY] (\gram[YY] + n \varepsilon \id)^{-1} \gram[XX] \ts v = \rho^2 \ts v $ and the second as $ (\gram[XX] + n \varepsilon \id)^{-1} \gram[XX] \gram[YY] (\gram[YY] + n \varepsilon \id)^{-1} \ts v = \rho^2 \ts v $. The eigenfunction associated with the largest eigenvalue solves the CCA problem, but in order to detect coherent sets, we will need more eigenfunctions later. To obtain the function $ g $ corresponding to $ \rho $, we compute
\begin{enumerate}[label=(\roman*), itemsep=0ex, topsep=1ex]
\item $ \hat{g} = \frac{1}{\rho} \Psi (\gram[YY] + n \varepsilon \id)^{-1} \gram[XX] v $, or
\item $ \hat{g} = \frac{1}{\rho} \Psi (\gram[YY] + n \varepsilon \id)^{-1} \gram[XX] (\gram[XX] + n \ts \varepsilon \id)^{-1} v $.
\end{enumerate}
\begin{mdframed}[backgroundcolor=boxback,hidealllines=true]
\begin{textalgorithm} \label{alg:CCA}
The CCA problem can be solved as follows:
\begin{enumerate}[leftmargin=3ex, itemindent=0ex, itemsep=0ex, topsep=0.5ex]
\item Choose a kernel $ k $ and regularization $ \varepsilon $.
\item Compute the centered Gram matrices $ \gram[XX] $ and $ \gram[YY] $.
\item Solve $ \gram[XX] (\gram[XX] + n \varepsilon \id)^{-1} (\gram[YY] + n \varepsilon \id)^{-1} \gram[YY] \ts v = \rho^2 \ts v $.
\end{enumerate}
\vspace*{1ex}
\end{textalgorithm}
\end{mdframed}
The corresponding eigenfunction $ \hat{f} $ evaluated at all data points $ x_1, \dots, x_n $, denoted by $ \hat{f}_X $, is then approximately given by the vector $ v $. We can evaluate the eigenfunctions at any other point as described above, but we will mainly use the eigenfunction evaluations at the sampled data points for clustering into coherent sets.
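A compact NumPy sketch of Algorithm~\ref{alg:CCA} could look as follows. Note that this is only an illustration---for large $ n $, the explicit matrix inverses should be replaced by linear solves or low-rank approximations---and that the returned eigenvector matrix contains, column-wise, the approximate eigenfunction values at the data points.

\begin{verbatim}
import numpy as np

def kernel_cca(G_XX, G_YY, epsilon, n_components):
    # G_XX, G_YY: centered Gram matrices; epsilon: regularization parameter
    n = G_XX.shape[0]
    R_X = np.linalg.inv(G_XX + n * epsilon * np.eye(n))
    R_Y = np.linalg.inv(G_YY + n * epsilon * np.eye(n))
    rho2, V = np.linalg.eig(G_XX @ R_X @ R_Y @ G_YY)
    idx = np.argsort(-rho2.real)[:n_components]
    # columns of V approximate the eigenfunctions f evaluated at x_1, ..., x_n
    return np.sqrt(np.abs(rho2[idx].real)), V[:, idx].real
\end{verbatim}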
Algorithm \ref{alg:CCA} is based on the second problem formulation, i.e., item (ii) above. However, the first variant can be used in the same way. Alternatively, we can rewrite it as an eigenvalue problem of the form
\begin{equation*}
\begin{cases}
(\gram[YY] + n \varepsilon \id)^{-1} \gram[XX] \ts v = \rho \ts w, \\
(\gram[XX] + n \varepsilon \id)^{-1} \gram[YY] \ts w = \rho \ts v,
\end{cases}
\end{equation*}
and, consequently,
\begin{equation} \label{eq:KCCA-gen-eig}
\setlength\arraycolsep{0pt}
\begin{bmatrix}
0 & \gram[YY] \\
\gram[XX] & 0
\end{bmatrix}
\hspace{-3pt}
\begin{bmatrix}
v \\ w
\end{bmatrix}
\!
= \rho
\!
\begin{bmatrix}
(\gram[XX] + n \varepsilon \id) & 0 \\
0 & (\gram[YY] + n \varepsilon \id)
\end{bmatrix}
\hspace{-3pt}
\begin{bmatrix}
v \\ w
\end{bmatrix}.
\end{equation}
Other formulations can be derived in a similar fashion. The advantage of this formulation is that no matrices have to be inverted. However, the size of the eigenvalue problem doubles, which might be problematic if the number of data points $n$ is large.
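Assembling and solving the generalized eigenvalue problem \eqref{eq:KCCA-gen-eig} is straightforward, e.g., with SciPy; the following lines are a minimal sketch.

\begin{verbatim}
import numpy as np
from scipy.linalg import eig

def kernel_cca_generalized(G_XX, G_YY, epsilon):
    # solve the block eigenvalue problem above without inverting any matrices
    n = G_XX.shape[0]
    Z = np.zeros((n, n))
    A = np.block([[Z, G_YY], [G_XX, Z]])
    M = np.block([[G_XX + n * epsilon * np.eye(n), Z],
                  [Z, G_YY + n * epsilon * np.eye(n)]])
    rho, W = eig(A, M)                 # A [v; w] = rho M [v; w]
    return rho, W[:n, :], W[n:, :]     # rho and the blocks v, w
\end{verbatim}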
The generalized eigenvalue problem \eqref{eq:KCCA-gen-eig} is almost identical to the one derived in Ref.~\onlinecite{Bach03:KICA}, with the difference that regularization is applied in a slightly different way. That is, the direct eigendecomposition of RKHS operators as proposed in Ref.~\onlinecite{KSM17} results, as expected, in variants of kernel CCA. The statistical convergence of kernel CCA, showing that finite sample estimators converge to the corresponding population counterparts, has been established in Ref.~\onlinecite{Fukumizu07:KCCA}. Kernel CCA can be extended to more than two variables or views of the data as described in Refs.~\onlinecite{Bach03:KICA, SC04:KernelMethods}, which might also have relevant applications in the dynamical systems context.
\subsection{Finite-dimensional feature space}
\label{ssec:Finite-dimensional feature space}
If the feature spaces associated with the kernels $ k $ and $ l $ are finite-dimensional, we can directly solve the eigenvalue problem \eqref{eq:KCCA-eig-full} or \eqref{eq:KCCA-eig-reduced}. Assuming the feature space of the kernel $ k $ is $ r_x $-dimensional and spanned by the basis functions $ \{ \phi_1, \dots, \phi_{r_x} \} $, we define $ \phi \colon \inspace \to \R^{r_x} $ by $ \phi(x) = [\phi_1(x), \dots, \phi_{r_x}(x)]^\top $. That is, we are now using an explicit feature space representation. This induces a kernel by defining $ k(x, x^\prime) = \innerprod{\phi(x)}{\phi(x^\prime)} $.\!\footnote{For the Mercer feature space representation\cite{Mercer, Schoe01} the functions form an orthogonal basis, but orthogonality is not required here.}
We could, for instance, select a set of radial basis functions, monomials, or trigonometric functions. Analogously, we define a vector-valued function $ \psi \colon \outspace \to \R^{r_y} $, with $ \psi(y) = [\psi_1(y), \dots, \psi_{r_y}(y)]^\top $, where $ r_y $ is the dimension of the feature space of the kernel $ l $. Any function in the respective RKHS can be written as $ f = \alpha^\top \phi $ and $ g = \beta^\top \psi $, where $ \alpha \in \R^{r_x} $ and $ \beta \in \R^{r_y} $ are coefficient vectors.
Given training data $ \{(x_i, y_i)\}_{i=1}^n $ drawn from the joint probability distribution, we obtain $ \Phi \in \R^{r_x \times n} $ and $ \Psi \in \R^{r_y \times n} $ and can compute the centered covariance and cross-covariance matrices $ \ecov[XX] $, $ \ecov[XY] $, and $ \ecov[YY] $ explicitly.
\begin{mdframed}[backgroundcolor=boxback,hidealllines=true]
\begin{textalgorithm} \label{alg:CCA explicit}
Given explicit feature maps, we obtain the following CCA algorithm:
\begin{enumerate}[leftmargin=3ex, itemindent=0ex, itemsep=0ex, topsep=0.5ex]
\item Select basis functions $ \phi $ and $ \psi $ and regularization~$ \varepsilon $.
\item Compute (cross-)covariance matrices $ \ecov[XX] $, $ \ecov[XY] $, $ \ecov[YX] $, and $ \ecov[YY] $.
\item Solve the eigenvalue problem \\ $ \big(\ecov[XX] + \varepsilon \idop\big)^{-1} \ecov[XY] \big(\ecov[YY] + \varepsilon \idop\big)^{-1} \ecov[YX] v = \rho^2 \ts v $.
\end{enumerate}
\vspace*{1ex}
\end{textalgorithm}
\end{mdframed}
The eigenfunctions are then given by $ \hat{f}(x) = \innerprod{v}{\phi(x)} $. Expressions for $ \hat{g} $ can be derived analogously.
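In the explicit feature space setting, Algorithm~\ref{alg:CCA explicit} can be sketched as follows; the feature matrices are assumed to contain the basis function evaluations column-wise, as in $ \Phi \in \R^{r_x \times n} $ and $ \Psi \in \R^{r_y \times n} $, and the snippet is again only illustrative.

\begin{verbatim}
import numpy as np

def explicit_cca(Phi, Psi, epsilon, n_components):
    # Phi: (r_x, n) with columns phi(x_i); Psi: (r_y, n) with columns psi(y_i)
    n = Phi.shape[1]
    Phi_c = Phi - Phi.mean(axis=1, keepdims=True)   # center the features
    Psi_c = Psi - Psi.mean(axis=1, keepdims=True)
    C_XX = Phi_c @ Phi_c.T / n
    C_YY = Psi_c @ Psi_c.T / n
    C_XY = Phi_c @ Psi_c.T / n                      # C_YX = C_XY.T
    I_x, I_y = np.eye(C_XX.shape[0]), np.eye(C_YY.shape[0])
    M = np.linalg.solve(C_XX + epsilon * I_x,
            C_XY @ np.linalg.solve(C_YY + epsilon * I_y, C_XY.T))
    rho2, V = np.linalg.eig(M)
    idx = np.argsort(-rho2.real)[:n_components]
    return np.sqrt(np.abs(rho2[idx].real)), V[:, idx].real
\end{verbatim}

The columns of the returned matrix contain the coefficient vectors $ v $, so that $ \hat{f}(x) = \innerprod{v}{\phi(x)} $ as stated above.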
\begin{remark} \label{rem:VAMP}
Defining $ v = \big(\ecov[XX] + \varepsilon \idop\big)^{\!\sfrac{-1}{2}} \ts \widetilde{v} $, the eigenvalue problem in Algorithm~\ref{alg:CCA explicit} becomes
\begin{equation*}
\big(\ecov[XX] + \varepsilon \idop\big)^{\!\sfrac{-1}{2}} \ts \ecov[XY] \ts \big(\ecov[YY] + \varepsilon \idop\big)^{-1} \ts \ecov[YX] \ts \big(\ecov[XX] + \varepsilon \idop\big)^{\!\sfrac{-1}{2}} \, \widetilde{v} = \rho^2 \ts \widetilde{v}.
\end{equation*}
The transformed eigenvectors $ \widetilde{v} $ are thus equivalent to the right singular vectors of the matrix
\begin{equation*}
\big(\ecov[YY] + \varepsilon \idop\big)^{\sfrac{-1}{2}} \ts \ecov[YX] \ts \big(\ecov[XX] + \varepsilon \idop\big)^{\!\sfrac{-1}{2}}
\end{equation*}
and the values $ \rho $ are given by the singular values, which we assume to be sorted in nonincreasing order. Setting $ \varepsilon = 0 $, this is equivalent to the approach proposed in Ref.~\onlinecite{KWNS18:noneq}, where the estimated covariance and cross-covariance matrices are by definition regular.\!\footnote{Note that their notation is slightly different: $ C_{00} \overset{\scriptscriptstyle\wedge}{=} \ecov[XX] $, $ C_{11} \overset{\scriptscriptstyle\wedge}{=} \ecov[YY] $, but $ C_{01} \overset{\scriptscriptstyle\wedge}{=} \ecov[YX] $.} Regularity can be achieved by removing redundant basis functions. More details are discussed in Appendix~\ref{app:relationships}.
\end{remark}
The difference between the Gram matrix approach described in Section~\ref{ssec:RKHS operator formulation} and the algorithm proposed here is that the size of the eigenvalue problem associated with the former depends on the number of data points and permits the feature space to be infinite-dimensional, whereas the size of the eigenvalue problem associated with the latter depends on the dimension of the feature space but not on the size of the training data set. This is equivalent to the distinction between \emph{extended dynamic mode decomposition} (EDMD) \cite{WKR15} and kernel EDMD \cite{WRK15} (or the variational approach \cite{NoNu13} and kernel TICA \cite{SP15}, where the system is typically assumed to be reversible; see Ref.~\onlinecite{KSM17} for a detailed comparison) with the small difference that often the Moore--Penrose pseudoinverse~\cite{Penrose} is used for EDMD in lieu of the Tikhonov-regularized inverse.
\subsection{Relationships between kernel CCA and transfer operators}
\label{ssec:Relationships between kernel CCA and transfer operators}
We have seen in Section~\ref{ssec:RKHS operator formulation} that the resulting eigenvalue problem \eqref{eq:KCCA-eig-reduced} involves expressions resembling kernel transfer operators. The goal now is to illustrate how this eigenvalue problem is related to the operators derived in Ref.~\onlinecite{BK17:coherent} for detecting coherent sets. We first introduce a \emph{forward operator} $ \mathcal{F} \colon L_\mu^2(\inspace) \to L^2(\outspace) $ by
\begin{equation*}
(\mathcal{F} f)(y) = \int p_\tau(y \mid x) \ts f(x) \ts \mu(x) \ts \dd x,
\end{equation*}
where $ \mu $ is some reference density of interest. Furthermore, let $ \nu = \mathcal{F} \mathds{1} $ be the image density obtained by mapping the indicator function on $ \inspace $ forward in time. Normalizing $ \mathcal{F} $ with respect to $ \nu $, we obtain a new operator $ \mathcal{A} \colon L_\mu^2(\inspace) \to L_\nu^2(\outspace) $ and its adjoint $ \mathcal{A}^* \colon L_\nu^2(\outspace) \to L_\mu^2(\inspace) $, with
\begin{align*}
(\mathcal{A} f)(y) &= \int \frac{p_\tau(y \mid x)}{\nu(y)} f(x) \ts \mu(x) \ts \dd x, \\
(\mathcal{A}^* g)(x) &= \int p_\tau(y \mid x) \ts g(y) \ts \dd y.
\end{align*}
It holds that $ \innerprod{\mathcal{A} f}{g}_\nu = \innerprod{f}{\mathcal{A}^* g}_\mu $. Consequently, $ \mathcal{A} $ plays the role of a reweighted Perron--Frobenius operator, whereas $ \mathcal{A}^* $ can be interpreted as an analogue of the Koopman operator
(note that $\mathcal{A}$ and $\mathcal{A}^*$ are defined on reweighted $L^2$-spaces). A more detailed derivation can be found in Ref.~\onlinecite{BK17:coherent}, where the operator $ \mathcal{A}^* \mathcal{A} $ (or a trajectory-averaged version thereof) is used to detect coherent sets. We want to show that this is, up to regularization, equivalent to the operator in \eqref{eq:KCCA-eig-reduced}.
\begin{proposition} \label{prop:reweighted_to}
Assuming that $ \mathcal{A} f \in \rkhs[Y] $ for all $ f \in \rkhs[X] $, it holds that $ \cov[YY] \mathcal{A} f = \cov[YX]f $.
\end{proposition}
\begin{proof}
The proof is almost identical to the proof for the standard Perron--Frobenius operator (see Ref.~\onlinecite{KSM17}). For all $g \in \rkhs[Y]$, we obtain
\begin{align*}
\innerprod{\cov[YY] \mathcal{A} f}{g}_{\rkhs[Y]}
&= \mathbb{E}_{\scriptscriptstyle Y}[\mathcal{A} f(Y) \ts g(Y)] \\
&= \iint \frac{p_\tau(y \mid x)}{\nu(y)} f(x) \ts \mu(x) \ts \dd x \ts g(y) \ts \nu(y) \dd y \\
&= \iint p_\tau(y \mid x) \ts f(x) \ts g(y) \ts \mu(x) \ts \dd x \ts \dd y \\
&= \iint p(x, y) \ts f(x) \ts g(y) \ts \dd x \ts \dd y \\
&= \mathbb{E}_{\scriptscriptstyle XY}[f(X) \ts g(Y)] \\
&= \innerprod{\cov[YX] f}{g}_{\rkhs[Y]}. \qedhere
\end{align*}
\end{proof}
We define the RKHS approximation of the operator $ \mathcal{A} $ by $ \mathcal{A}_k = (\cov[YY] + \varepsilon \idop)^{-1} \cov[YX] $. Note that the operator technically depends not only on $ k $ but also on $ l $, which we omit for brevity. In practice, we typically use the same kernel for $ \inspace $ and $ \outspace $. As a result, the eigenvalue problem~\eqref{eq:KCCA-eig-reduced} can now be written as
\begin{equation*}
\ko[k] \mathcal{A}_k f = \rho^2 f.
\end{equation*}
The adjointness property for $ \varepsilon = 0 $, i.e., assuming that the inverse exists without regularization,\!\footnote{Conditions for the existence of the inverse can be found, for instance, in Ref.~\onlinecite{Song2013} and in Section~\ref{ssec:Finite-dimensional feature space}.} can be verified as follows:
\begin{equation*}
\innerprod{\mathcal{A}_k f}{g}_\nu = \innerprod{\cov[YX] f}{g}_{\rkhs[Y]} = \innerprod{f}{\cov[XY] g}_{\rkhs[X]} = \innerprod{f}{\ko[k] g}_{\mu}.
\end{equation*}
We have thus shown that the eigenvalue problem for the computation of coherent sets and the CCA eigenvalue problem are equivalent, provided that the RKHS is an invariant subspace of $ \mathcal{T}_k $. Although this is in general not the case---depending on the kernel the RKHS might be low-dimensional (e.g., for a polynomial kernel), but could also be infinite-dimensional and isometrically isomorphic to $ L^2 $ (e.g., for the Gaussian kernel)---, we can use the kernel-based formulation as an approximation and solve it numerically to obtain coherent sets. This is the mathematical justification for the claim that CCA detects coherent sets, which will be corroborated by numerical results in Section~\ref{sec:Numerical results}.
\subsection{Coherent mode decomposition}
Borrowing ideas from \emph{dynamic mode decomposition} (DMD) \cite{Schmid10,TRLBK14}, we now introduce a method that approximates eigenfunctions or eigenmodes of the forward-backward dynamics using linear basis functions and refer to it as \emph{coherent mode decomposition} (CMD)---a mixture of CCA and DMD.\!\footnote{In fact, the method described below is closer to TICA than DMD, but other variants can be derived in the same fashion, using different combinations of covariance and cross-covariance operators.} The relationships between DMD and TICA (including their extensions) and transfer operators are delineated in Refs.~\onlinecite{KNKWKSN18, KSM17}. DMD is often used for finding coherent structures in fluid flows, dimensionality reduction, and also prediction and control; see Ref.~\onlinecite{KBBP16} for an exhaustive analysis and potential applications.
Let us assume we have high-dimensional time-series data but only relatively few snapshots, i.e., $ \mathbf{X}, \mathbf{Y} \in \R^{d \times n} $ with $ d \gg n $, where $ \mathbf{X} = [x_1, \dots, x_n] $ and $ \mathbf{Y} = [y_1, \dots, y_n] $. This is, for instance, the case in fluid dynamics applications, where the (typically two- or three-dimensional) domain is discretized using structured or unstructured grids. It is important to note that this analysis is no longer based on Lagrangian data as before, where we tracked the positions of particles or drifters over time, but on data in the Eulerian frame of reference.
Using Algorithm~\ref{alg:CCA explicit} with $ \phi(x) = x $ and $ \psi(y) = y $ is infeasible here since the resulting covariance and cross-covariance matrices would be prohibitively large; thus, we apply the kernel-based counterpart. The linear kernel $ k \colon \R^d \times \R^d \to \R $ is defined by $ k(x, x^\prime) = \phi(x)^\top \phi(x^\prime) = x^\top x^\prime $ and the Gram matrices are simply given by
\begin{equation*}
\gram[XX] = \mathbf{X}^\top \! \mathbf{X} \quad \text{and} \quad \gram[YY] = \mathbf{Y}^\top \mathbf{Y},
\end{equation*}
where $ \gram[XX], \gram[YY] \in \R^{n \times n} $.
\begin{mdframed}[backgroundcolor=boxback,hidealllines=true]
\begin{textalgorithm} \label{alg:CMD}
Coherent mode decomposition.
\begin{enumerate}[leftmargin=3ex, itemindent=0ex, itemsep=0ex, topsep=0.5ex]
\item Choose regularization $ \varepsilon $.
\item Compute the Gram matrices $ \gram[XX] $ and $ \gram[YY] $.
\item Solve the eigenvalue problem \\ $ (\gram[XX] + n \varepsilon \id)^{-1} (\gram[YY] + n \varepsilon \id)^{-1} \gram[YY] \ts \gram[XX] \ts v = \rho^2 \ts v $.
\end{enumerate}
\vspace*{1ex}
\end{textalgorithm}
\end{mdframed}
The eigenfunction $ \hat{f} $ evaluated at an arbitrary point $ x \in \R^d $ is then given by
\begin{align*}
\hat{f}(x) &= \Phi(x) \ts v = [k(x_1, x), \, \dots, \, k(x_n, x)] \ts v = (\mathbf{X} v)^\top x \\
&= \xi^\top \phi(x),
\end{align*}
where we define the \emph{coherent mode} $ \xi $ corresponding to the eigenvalue $ \rho $ by $ \xi = \mathbf{X} v $. That is, $ \xi $ contains the coefficients for the basis functions $ \phi $. Analogously, we obtain
\begin{align*}
\hat{g}(y) &= \tfrac{1}{\rho} \Psi(y) \ts (\gram[YY] + n \varepsilon \id)^{-1} \gram[XX] v = (\mathbf{Y} w)^\top y \\
&= \eta^\top \psi(y),
\end{align*}
where $ w = \frac{1}{\rho} (\gram[YY] + n \varepsilon \id)^{-1} \gram[XX] v $ and $ \eta = \mathbf{Y} w $.
As mentioned above, DMD (as a special case of EDMD~\cite{WKR15}) typically uses the pseudoinverse to compute matrix representations of the corresponding operators. Nonetheless, a Tikhonov-regularized variant is described in Ref.~\onlinecite{EMBK17:DMD}.
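A minimal NumPy sketch of Algorithm~\ref{alg:CMD} might look as follows; it returns the eigenvalues $ \rho $ together with the coherent modes $ \xi $ and their time-lagged counterparts $ \eta $, and is again meant purely as an illustration.

\begin{verbatim}
import numpy as np

def cmd(X, Y, epsilon, n_modes):
    # X, Y: (d, n) snapshot matrices with d >> n (linear kernel, CMD algorithm)
    n = X.shape[1]
    G_XX, G_YY = X.T @ X, Y.T @ Y
    R_X = np.linalg.inv(G_XX + n * epsilon * np.eye(n))
    R_Y = np.linalg.inv(G_YY + n * epsilon * np.eye(n))
    rho2, V = np.linalg.eig(R_X @ R_Y @ G_YY @ G_XX)
    idx = np.argsort(-rho2.real)[:n_modes]
    rho, V = np.sqrt(np.abs(rho2[idx].real)), V[:, idx].real
    Xi = X @ V                          # coherent modes xi = X v
    W = (R_Y @ G_XX @ V) / rho          # w = (1/rho)(G_YY + n eps I)^{-1} G_XX v
    Eta = Y @ W                         # time-lagged modes eta = Y w
    return rho, Xi, Eta
\end{verbatim}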
\section{Numerical results}
\label{sec:Numerical results}
As we have shown above, many dimensionality reduction techniques and methods for analyzing high-dimensional data can be formulated as eigendecompositions of certain empirical RKHS operators. We now illustrate how kernel CCA detects coherent sets and outline potential applications of the coherent mode decomposition.
\subsection{Coherent sets}
We will first apply the method to a well-known benchmark problem, namely the Bickley jet, and then to ocean data and a molecular dynamics problem.
\subsubsection{Bickley jet}
Let us consider a perturbed Bickley jet, which is an approximation of an idealized stratospheric flow \cite{Rypina07:coherent} and a typical benchmark problem for detecting coherent sets (see, e.g., Refs.~\onlinecite{HKTH16:coherent, BK17:coherent, HSD18, FJ18:coherent}). The flow is illustrated in Figure~\ref{fig:Bickley}. For a detailed description of the model and its parameters, we refer to Ref.~\onlinecite{BK17:coherent}. Here, the state space is defined to be periodic in the $x_1$-direction with period $ 20 $. In order to demonstrate the notion of \emph{coherence}, we arbitrarily color one circular set yellow and one red and observe their evolution. The yellow set is dispersed quickly by the flow; the red set, on the other hand, moves around but barely changes shape. The red set is hence called \emph{coherent}.
\begin{figure*}
\centering
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(a)} \vspace*{-0.6ex}
\includegraphics[width=\textwidth]{pics/Bickley0}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(b)} \vspace*{-0.6ex}
\includegraphics[width=\textwidth]{pics/Bickley10}
\end{minipage} \\
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(c)} \vspace*{-0.6ex}
\includegraphics[width=\textwidth]{pics/Bickley50}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(d)} \vspace*{-0.6ex}
\includegraphics[width=\textwidth]{pics/Bickley100}
\end{minipage}
\caption{Bickley jet at times (a) $t = 0$, (b) $t = 10$, (c) $t = 50$, and (d) $t = 100$ illustrating the difference between a non-coherent (yellow) and a coherent set (red). While the yellow set is dispersed after a short time, the shape of the red set remains nearly unchanged for a long time.}
\label{fig:Bickley}
\end{figure*}
We generate 10000 uniformly distributed test points $ x_i $ in $ \inspace = [0, 20] \times [-3, 3] $ and then simulate their progression in time. For the computation of the coherent sets, we use only the start and end points of each trajectory, i.e., we define $ y_i = \Theta^\tau(x_i) $, where $ \Theta^\tau $ denotes the flow associated with the dynamical system. We set $ \tau = 40 $. From the vectors $ x_i $ and $ y_i $, we then compute the Gram matrices $ \gram[XX] $ and $ \gram[YY] $ using the same Gaussian kernel. Here, we define the bandwidth to be $ \sigma = 1 $ and the regularization parameter to be $ \varepsilon = 10^{-7} $.
A few dominant eigenfunctions are shown in Figure~\ref{fig:BickleyCS}~(a)--(d). The first eigenfunction distinguishes between the top and bottom ``half'' and the second one between the middle part and the rest. The subsequent eigenfunctions pick up combinations of the vortices. Applying $ k $-means with $ k = 9 $ to the dominant eigenfunctions results in the coherent sets shown in Figure~\ref{fig:BickleyCS}~(e). This is consistent with the results presented in Ref.~\onlinecite{BK17:coherent}.
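The entire pipeline for this example can be condensed into a few lines of Python. The sketch below reuses the \texttt{kernel\_cca} routine from Section~\ref{ssec:RKHS operator formulation}, assumes a user-supplied trajectory integrator \texttt{Theta} for the Bickley jet velocity field (not shown), and treats the number of eigenfunctions used for clustering as a free parameter; for $ n = 10000 $ points, the dense eigenvalue problem is computationally demanding and would in practice be solved with iterative or low-rank methods.

\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def gaussian_gram(A, B, sigma):
    return np.exp(-cdist(A, B, 'sqeuclidean') / (2 * sigma ** 2))

def bickley_coherent_sets(Theta, n=10000, sigma=1.0, eps=1e-7, tau=40,
                          n_eig=8, n_sets=9):
    # Theta(X, tau): user-supplied integrator returning the end points of the
    # trajectories started in X (the velocity field itself is not shown here)
    X = np.column_stack((np.random.uniform(0, 20, n),
                         np.random.uniform(-3, 3, n)))
    Y = Theta(X, tau)
    N0 = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    G_XX = N0 @ gaussian_gram(X, X, sigma) @ N0
    G_YY = N0 @ gaussian_gram(Y, Y, sigma) @ N0
    rho, V = kernel_cca(G_XX, G_YY, eps, n_eig)     # see the sketch above
    labels = KMeans(n_clusters=n_sets).fit_predict(V)
    return X, labels                                # coherent set assignment
\end{verbatim}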
Choosing a finite-dimensional feature space explicitly, as described in Section~\ref{ssec:Finite-dimensional feature space}, by selecting a set of radial basis functions whose centers are given by a regular grid leads to comparable results. Currently, only start and end points of trajectories are considered. As a result, points that drift apart and then reunite at time $ \tau $ would constitute coherent sets. Applying kernel CCA to less well-behaved systems might require more sophisticated kernels that take entire trajectories into account, e.g., by employing averaging techniques as suggested in~Ref.~\onlinecite{BK17:coherent}.
\begin{figure*}
\centering
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(a) $ \rho \approx 0.98 $}
\includegraphics[width=\textwidth]{pics/BickleyCS1} \\
\subfiguretitle{(c) $ \rho \approx 0.78 $}
\includegraphics[width=\textwidth]{pics/BickleyCS4}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(b) $ \rho \approx 0.87 $}
\includegraphics[width=\textwidth]{pics/BickleyCS2} \\
\subfiguretitle{(d) $ \rho \approx 0.75 $}
\includegraphics[width=\textwidth]{pics/BickleyCS6}
\end{minipage}
\begin{minipage}{0.054\textwidth}
\vspace*{-2.6ex}
\includegraphics[width=\textwidth]{pics/Colorbar}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\centering
\subfiguretitle{(e)}
\includegraphics[width=\textwidth]{pics/BickleyClustering}
\end{minipage}
\caption{(a) First, (b) second, (c) fourth, and (d) sixth eigenfunction associated with the Bickley jet for $ \tau = 40 $. (e) $ k $-means clustering of the dominant eigenfunctions into nine coherent sets. Note that the red coherent set around $ x = [12.5, -1.25]^\top $ corresponds to (but is not identical to) the red set in Figure~\ref{fig:Bickley}, where we arbitrarily selected a perfectly circular shape.}
\label{fig:BickleyCS}
\end{figure*}
\subsubsection{Ocean data}
Ocean currents are driven by winds and tides, as well as differences in salinity. There are five major gyres as illustrated in Figure~\ref{fig:OceanData}~(a), which has been reproduced with permission of the \emph{National Ocean Service} (NOAA).\!\footnote{NOAA. What is a gyre? \url{https://oceanservice.noaa.gov/facts/gyre.html}} Our goal now is to detect these gyres from virtual buoy trajectories. In order to generate Lagrangian data, we use the \emph{OceanParcels} toolbox\footnote{OceanParcels project: \url{http://oceanparcels.org/}} (see Ref.~\onlinecite{OceanParcels17} for details) and data from the \emph{GlobCurrent} repository,\!\footnote{GlobCurrent data repository: \url{http://www.globcurrent.org/}} provided by the \emph{European Space Agency}. More precisely, our drifter computations are based on the Eulerian total current at significant wave height from the sum of geostrophic and Ekman current components, starting on the 1st of January 2016 and ending on the 31st of December 2016 with 3-hourly updates.
We place 15000 uniformly distributed virtual drifters in the oceans and let the flow evolve for one year, which thus constitutes the lag time $ \tau $. Let $ x_i $ denote the initial positions and $ y_i $ the new positions of the drifters after one year. The domain is $ \inspace = [-180^\circ, 180^\circ] \times [-80^\circ, 80^\circ] $, where the first dimension corresponds to the longitudes and the second to the latitudes. For the coherent set analysis, we select a Gaussian kernel $ k(x, x^\prime) = \exp\left(-\frac{d(x, x^\prime)^2}{2\sigma^2}\right)$ with bandwidth $ \sigma = 30 $, where $ d(x, x^\prime) $ is the distance between the points $ x $ and $ x^\prime $ in kilometers computed with the aid of the haversine formula. The regularization parameter $ \varepsilon $ is set to $ 10^{-4} $. The first two dominant eigenfunctions computed using kernel CCA are shown in Figure~\ref{fig:OceanData}~(b) and~(c), and a $k$-means clustering of the six dominant eigenfunctions in Figure~\ref{fig:OceanData}~(d). CCA correctly detects the main gyres---the splitting of the South Atlantic Gyre and the Indian Ocean Gyre might be encoded in eigenfunctions associated with smaller eigenvalues---and the Antarctic Circumpolar Current. The clusters, however, depend strongly on the lag time $ \tau $. In order to illustrate the flow properties, typical trajectories are shown in Figure~\ref{fig:OceanData}~(e). The trajectories belonging to different coherent sets remain mostly separated, although weak mixing can be seen, for instance, at the borders between the red and purple and the red and green clusters.
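The kernel used here can be implemented directly. A possible Python version of the haversine-based Gaussian kernel is sketched below; the Earth radius of roughly 6371 km is an assumption of the sketch, not a value taken from the data set.

\begin{verbatim}
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine(p, q):
    # p, q: [longitude, latitude] in degrees; returns the distance in km
    lon1, lat1, lon2, lat2 = np.radians([p[0], p[1], q[0], q[1]])
    a = np.sin((lat2 - lat1) / 2) ** 2 \
        + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def ocean_kernel(p, q, sigma=30.0):
    # k(x, x') = exp(-d(x, x')^2 / (2 sigma^2)), d = haversine distance
    return np.exp(-haversine(p, q) ** 2 / (2 * sigma ** 2))
\end{verbatim}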
\begin{figure*}
\centering
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(a)}
\includegraphics[width=\textwidth]{pics/Gyres}
\end{minipage} \\[2ex]
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(b) $ \rho \approx 0.99 $}
\includegraphics[width=0.95\textwidth]{pics/OceanDataCS1}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(c) $ \rho \approx 0.98 $}
\includegraphics[width=0.95\textwidth]{pics/OceanDataCS2}
\end{minipage} \\[1ex]
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(d)}
\includegraphics[width=0.95\textwidth]{pics/OceanDataClustering}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(e)}
\includegraphics[width=0.95\textwidth]{pics/OceanDataTrajectories}
\end{minipage}
\caption{(a) Illustration of the major ocean gyres (courtesy of NOAA). (b)~First and (c)~second eigenfunction. (d)~$ k $-means clustering of the first six eigenfunctions into six coherent sets. (e)~Subset of the trajectories colored according to the coherent sets.}
\label{fig:OceanData}
\end{figure*}
\subsubsection{Time-dependent energy potential}
As a last example, we will analyze a molecular-dynamics inspired problem, namely diffusion in a time-dependent two-dimensional energy landscape, given by the stochastic differential equation
\begin{equation*}
\dd X_t = -\nabla V(X_t, t) \ts \dd t + \sqrt{2 \ts \beta^{-1}} \ts \dd W_t,
\end{equation*}
with
\begin{align*}
V(x, t) &= \cos\left(s \ts \arctan(x_2, x_1) - \tfrac{\pi}{2} t\right) \\
&+ 10 \left( \sqrt{x_1^2 + x_2^2} - \tfrac{3}{2} - \tfrac{1}{2} \sin(2 \pi t) \right)^2.
\end{align*}
The parameter $ \beta $ is the dimensionless inverse (absolute) temperature, $ W_t $ a standard Wiener process, and $ s $ specifies the number of wells. This is a generalization of a potential defined in Ref.~\onlinecite{BKKBDS18}, whose wells now move periodically towards and away from the center and which furthermore slowly rotates. We set $ s = 5 $. The resulting potential for $ t = 0 $ is shown in Figure~\ref{fig:LemonSlice}~(a). Particles typically equilibrate quickly in the radial direction towards the closest well and stay in this well, which moves over time. Particles trapped in one well will remain coherent for a relatively long time. The probability of escaping and moving to another well depends on the inverse temperature: the higher $ \beta $, the less likely transitions between wells become.
\begin{figure*}
\centering
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(a)} \vspace*{-0.6ex}
\includegraphics[width=0.9\textwidth]{pics/LemonSlicePotential}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(b)} \vspace*{-0.6ex}
\includegraphics[width=0.9\textwidth]{pics/LemonSliceCoherence}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(c)}
\includegraphics[width=0.9\textwidth]{pics/LemonSliceClustering1}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\subfiguretitle{(d)}
\includegraphics[width=0.9\textwidth]{pics/LemonSliceClustering2}
\end{minipage}
\caption{(a)~Time-dependent 5-well potential for $ t = 0 $. The dotted white lines indicate the periodic movement of the centers of the wells over time. (b)~Dominant eigenvalues (averaged over multiple runs) as a function of $ \beta $. Coherence increases with increasing inverse temperature, i.e., the eigenvalues are closer to $ 1 $ for decreasing temperature. (c)~Coherent set clustering for $ \beta = 3 $ at initial time $ t = 0 $. (d)~Corresponding clustering at $ t = 10 $. The clusters have moved but are still mostly coherent, save for moderate mixing.}
\label{fig:LemonSlice}
\end{figure*}
We generate 1000 uniformly distributed test points in $ \inspace = [-2.5, 2.5] \times [-2.5, 2.5] $ and integrate the system with the aid of the Euler--Maruyama method and the step size $ h = 10^{-3} $ from $ t = 0 $ to $ t = 10 $. As before, we use only the start and end points of the trajectories and a Gaussian kernel (here, $\sigma = 1$ and $ \varepsilon = 10^{-6}$) for the coherent set analysis.
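A minimal Python sketch of this data generation step could look as follows; a finite-difference gradient is used for brevity, $\beta = 3$ is taken as an example value, and the step size and horizon are as stated above.
\begin{verbatim}
import numpy as np

def V(x, t, s=5):
    r = np.sqrt(x[..., 0]**2 + x[..., 1]**2)
    theta = np.arctan2(x[..., 1], x[..., 0])
    return np.cos(s * theta - 0.5 * np.pi * t) \
        + 10.0 * (r - 1.5 - 0.5 * np.sin(2 * np.pi * t))**2

def grad_V(x, t, d=1e-6):
    # central finite differences; an analytic gradient works just as well
    g = np.zeros_like(x)
    for i in range(2):
        e = np.zeros(2)
        e[i] = d
        g[..., i] = (V(x + e, t) - V(x - e, t)) / (2 * d)
    return g

def euler_maruyama(x0, beta=3.0, h=1e-3, t_end=10.0, seed=0):
    rng = np.random.default_rng(seed)
    x, t = x0.copy(), 0.0
    while t < t_end - 1e-12:
        x = x - grad_V(x, t) * h \
            + np.sqrt(2.0 * h / beta) * rng.standard_normal(x.shape)
        t += h
    return x

x0 = np.random.default_rng(1).uniform(-2.5, 2.5, size=(1000, 2))  # test points x_i
y = euler_maruyama(x0)                                            # end points y_i
\end{verbatim}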
Due to the centering of the Gram matrices, the eigenvalue $ \lambda = 1 $ vanishes and---depending on the parameter $ \beta $---four eigenvalues close to one remain as illustrated in Figure~\ref{fig:LemonSlice}~(b). Figure~\ref{fig:LemonSlice}~(c) shows a clustering of the dominant four eigenfunctions for $ \beta = 3 $ based on PCCA\texttt{+} \cite{Roeblitz2013}, resulting in the expected five coherent sets. The clustering at $ t = 10 $ (see Figure~\ref{fig:LemonSlice}~(d)) illustrates that the computed sets indeed remain largely coherent.
Note that standard methods for the computation of metastable sets such as Ulam's method, EDMD, or their variants are in general not suitable for non-equilibrium dynamics; see also Ref.~\onlinecite{KWNS18:noneq} and Appendix~\ref{app:relationships}.
\subsection{Coherent mode decomposition}
In order to illustrate the coherent mode decomposition outlined in Algorithm~\ref{alg:CMD}, we consider the classical von K\'arm\'an vortex street and generate data using a simple Python implementation.\!\footnote{Palabos project: \url{http://wiki.palabos.org/numerics:codes}} It is important to note that here we take into account the full trajectory data $ \{z_0, \dots, z_n\} $, where $ z_i $ is the state at time $ t = 20 \ts i $, and define $ X = [z_0, \dots, z_{n-1}] $ and $ Y = [z_1, \dots, z_n] $, whereas we generated uniformly distributed data for the coherent set analysis in the previous subsection and furthermore used only the start and end points of the trajectories. We set $ n = 100 $ and $ \varepsilon = 0.1 $. Some snapshots of the system are shown in Figure~\ref{fig:Karman}~(a)--(d). Applying CMD results in the modes depicted in Figure~\ref{fig:Karman}~(e)--(g), where the color bar is the same as in Figure~\ref{fig:BickleyCS}. As described above, we obtain two modes, denoted by $ \xi $ and $ \eta $, for each eigenvalue $ \rho $, where $ \eta $ can be interpreted as the time-lagged counterpart of $ \xi $. There are three subdominant eigenvalues close to one, followed by a spectral gap. Removing the transient phase eliminates the third mode so that only the two highly similar modes remain.
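The linear-algebraic core of such a mode decomposition, a regularized whitened SVD of the cross-covariance (cf.\ Appendix~\ref{app:relationships}), can be sketched in a few lines of Python; this is a simplification of the kernel-based CMD algorithm, where exactly the regularization enters and how the modes are normalized may differ, and for high-dimensional snapshots one would work with the $n \times n$ Gram matrices instead.
\begin{verbatim}
import numpy as np

def coherent_modes(X, Y, eps=0.1, k=3):
    # X = [z_0, ..., z_{n-1}], Y = [z_1, ..., z_n] as d x n snapshot matrices
    n = X.shape[1]
    Cxx = X @ X.T / n + eps * np.eye(X.shape[0])
    Cyy = Y @ Y.T / n + eps * np.eye(Y.shape[0])
    Cyx = Y @ X.T / n

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wy @ Cyx @ Wx)
    xi = Wx @ Vt[:k].T      # modes associated with X
    eta = Wy @ U[:, :k]     # time-lagged counterparts associated with Y
    return s[:k], xi, eta
\end{verbatim}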
\begin{figure*}
\centering
\begin{minipage}{0.243\textwidth}
\centering
\subfiguretitle{(a) $ t = 100 $}
\includegraphics[width=\textwidth]{pics/Karman1}
\end{minipage}
\begin{minipage}{0.243\textwidth}
\centering
\subfiguretitle{(b) $ t = 200 $}
\includegraphics[width=\textwidth]{pics/Karman2}
\end{minipage}
\begin{minipage}{0.243\textwidth}
\centering
\subfiguretitle{(c) $ t = 300 $}
\includegraphics[width=\textwidth]{pics/Karman3}
\end{minipage}
\begin{minipage}{0.243\textwidth}
\centering
\subfiguretitle{(d) $ t = 400 $}
\includegraphics[width=\textwidth]{pics/Karman4}
\end{minipage} \\[2ex]
\begin{minipage}{1em}
\centering
\vspace*{2ex}
$ \xi $ \\[13ex]
$ \eta $
\end{minipage}
\begin{minipage}{0.3\textwidth}
\centering
\subfiguretitle{(e) $ \rho \approx 0.96 $}
\includegraphics[width=\textwidth]{pics/KarmanCMD1a} \\
\includegraphics[width=\textwidth]{pics/KarmanCMD1b}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\centering
\subfiguretitle{(f) $ \rho \approx 0.95 $}
\includegraphics[width=\textwidth]{pics/KarmanCMD2a} \\
\includegraphics[width=\textwidth]{pics/KarmanCMD2b}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\centering
\subfiguretitle{(g) $ \rho \approx 0.81 $}
\includegraphics[width=\textwidth]{pics/KarmanCMD3a} \\
\includegraphics[width=\textwidth]{pics/KarmanCMD3b}
\end{minipage}
\caption{(a)--(d)~Two-dimensional flow in a channel past a cylinder. (e)--(g)~Three subdominant coherent modes associated with the two-dimensional flow, where the top row contains the coherent modes $ \xi $ and the bottom row the corresponding modes $ \eta $. The third mode is caused by the transient phase.}
\label{fig:Karman}
\end{figure*}
For this standard DMD benchmark problem, which we chose for illustration purposes, CMD and (regularized) DMD lead to almost identical modes. This might be---neglecting the transient phase---due to the periodicity of the system and the fact that we are applying the methods to sequential data. Nevertheless, CMD might have further applications pertaining to, for instance, non-sequential data, which we will investigate in future work.
\section{Conclusion}
\label{sec:Conclusion}
We demonstrated that several kernel-based dimensionality reduction techniques can be interpreted as eigendecompositions of empirical estimates of certain RKHS operators. Moreover, we showed that applying CCA to Lagrangian data results in coherent sets and illustrated the efficiency of the methods using several examples ranging from fluid to molecular dynamics. This approach worked out of the box, although taking into account entire trajectories might improve the results even further, which would then necessitate dedicated kernels. In this work, we analyzed only low-dimensional benchmark problems. Nevertheless, the kernel-based algorithms can be easily applied to more complex problems and also non-vectorial domains such as graphs or strings.
As a byproduct of the coherent set analysis, we derived a method called CMD that is a hybrid of CCA and DMD (or TICA). This method can, for instance, be applied to high-dimensional fluid flow or video data. For specific problems, CMD and DMD---unsurprisingly, given how closely the two methods are related---result in highly similar modes. An interesting topic for future research would be to systematically analyze the relationships between these methods. Furthermore, as with the transfer operators and embedded transfer operators as well as their kernel-based estimates \cite{KSM17}, there are again several different combinations and variants of the proposed algorithms.
Another open problem is the influence of different regularization techniques on the numerical results. How does Tikhonov regularization compare to approaches based on pseudoinverses or other spectral filtering methods? And how do we choose the kernel and the regularization parameters in an optimal way, preferably without cross-validation? Additionally, future work includes analyzing the properties of the empirical estimate $ \hat{S} $. Can we show convergence in the infinite-data limit? Which operators can be approximated by $ \hat{S} $ and can we derive error bounds for the resulting eigenvalues and eigenfunctions?
We expect the results in this paper to be a starting point for further theoretical research into how RKHS operators in the context of dynamical systems could be approximated and, furthermore, how they connect to statistical learning theory. Additionally, the methods proposed here might be combined with classical modifications of CCA in order to improve the numerical performance.
The experiments here were performed using Matlab, and the methods have been partially reimplemented in Python and are available at \url{https://github.com/sklus/d3s/}.
\section*{Acknowledgements}
We would like to thank P\'eter Koltai for the Bickley jet implementation as well as helpful discussions related to coherent sets and Ingmar Schuster for pointing out similarities between CCA and kernel transfer operators. This research has been partially funded by Deutsche Forschungsgemeinschaft (DFG) through grant CRC 1114 \emph{Scaling Cascades in Complex Systems}.
\bibliographystyle{unsrt}
\bibliography{KCA}
\appendix
\section{Relationships with VAMP}
\label{app:relationships}
The use of CCA with nonlinear transformations of time-lagged data to calculate optimal low-rank approximations of transfer operators was first described in Ref.~\onlinecite{WuNo17} as the \emph{variational approach for Markov processes} (VAMP).
In VAMP, linear CCA is applied to the transformations, which are permitted to be of any form (e.g., obtained by a neural network \cite{MPWN:vampnets}).
VAMP has thus been referred to as TCCA, i.e., time-lagged CCA \cite{WuNo17, KWNS18:noneq, No18, PWVdGN18}.
In the present work, on the other hand, we directly start with kernel CCA and show that it can be interpreted in terms of kernel transfer operators and, as a consequence thereof, detects coherent sets.
VAMP is designed to yield the optimal rank-$ k $ approximation of the transfer operator.
VAMP is also equivalent to the generalized Markov state model (GMSM) approach described in Ref.~\onlinecite{KWNS18:noneq} when determining an operator approximation with rank $k$. In both VAMP and GMSM, the \emph{singular value decomposition} (SVD) is performed on $\ecov[YY]^{\sfrac{-1}{2}} \ts \ecov[YX] \ts \ecov[XX]^{\!\sfrac{-1}{2}} $ to obtain the left and right singular vectors, where the inverse square roots are assumed to exist.
The SVD here is equivalent to the eigendecomposition performed in the algorithm presented above (see Remark~\ref{rem:VAMP}).
By trialing different basis sets, the sum of the highest $ k $ singular values raised to the $ r $th power can be used to score the models, where the highest sum corresponds to the basis that yields the best approximation of the underlying dynamical processes in the resulting singular vectors~\cite{WuNo17}.
In the present work, if the feature spaces of the kernels corresponding to the data and time-lagged data are finite-dimensional and given explicitly, empirical estimates of the resulting eigenfunctions are equivalent to VAMP-$r$ for $ r = 1 $.
The advantage of the kernel-based variant is that it allows for implicitly infinite-dimensional feature spaces, which are determined by the kernel. When choosing a Gaussian kernel, for instance, the results depend only on the kernel bandwidth and a regularization parameter.
In Ref.~\onlinecite{WuNo17}, the dominant left and right singular vectors from VAMP are depicted for a nonreversible double-gyre system, but they are not discussed in the context of coherent structures or modes.
In Ref.~\onlinecite{KWNS18:noneq}, GMSMs are generalized from the Markov state model (MSM) approximation to transfer operators developed for the analysis of reversible simulations of molecules with indicator function bases~\cite{SS13}.
Due to reversibility, MSM matrix approximations to transfer operators are self-adjoint with respect to the stationary distribution, so their eigenvalues are real.
In the context of reversible systems, the eigenvalues can be related to the timescales of dynamical processes.
While similar methods have been used without reversibility---thus permitting imaginary eigenvalues (see, e.g., Ref.~\onlinecite{Kaiser14})---the GMSM method instead opts to use the SVD.
The absence of the guarantee of real eigenvalues in the nonreversible case is viewed as a loss of interpretability, which is made up for by the connection of the singular values and vectors to the identification of coherent, instead of metastable, sets, such as in simulations along shifting potential energy surfaces.
Both the VAMP and GMSM approaches were designed to investigate Markovian dynamics, which is implicit in the choice to use time-lagged pairs for analysis.
Here, too, we restrict our analysis to time-lagged pairs, and thus can only capture dynamics explained by these two views of the data.
CCA and its kernel counterpart can be extended to more than two views of the data~\cite{Bach03:KICA, SC04:KernelMethods}, but we leave this to future work.
\end{document} | 30,825 |
henriquebrion / Following
AIM Creative Studios PRO Portugal
Our creative collectives, production team and specialized network bring together relevant skills in arts and technology. AIM works range from art direction, illustration, animation and post-production to interactive development, sound design and music composing. Let us hear from you and have the chance…
shapingwaves Plus Berlin
Ryoichi Kurokawa is a Japanese artist. His works take on multiple forms such as concert pieces, screening works, recordings and installations. His works are shown across the world at international festivals and museums including Tate Modern[UK], Venice Biennale[IT], Palais de Tokyo[FR], Transmediale[DE],…
Kevin Sebastian New York, NY
Actor, Director, Screenwriter, Host, and Photog
Carlos Santana Lisboa, Portugal
Director of Photography
Ricardo Vargues Algarve
Young Filmmaker. Experience in: directing, editing and camera. Short Films, Documentary, Music Videos and Commercials. [email protected]
Leonor Bettencourt Loureiro Portugal
D O M & N I C are directors of commercials and music videos. They have been working together with their producer John Madsen for over a decade making award-winning commercials and music videos for many of the world's biggest brands and artists.…
Barbearia do Povo Moscavide
Shaving doesn't have to be a chore, and much less expensive! Get to know Barbearia do Povo's new service and receive high-quality razor blades at home, at the price all blades should cost. Director of photography, camera and editing: Nuno Janeiro. Screenplay…
Collected Transients Chicago, IL
Sound effects for sound designers…
Pedro Carvalhinho Portugal
Pedro Carvalhinho results from 90's crops. It's blended with cinema's imagination and books' stories. After he ages in little oak casks, it turns sensitive and reveals a fresh flavour. Currently working in Quioto.com, he can be enjoyed with commercials or experimental films.
Actors Of Sound Brooklyn, NY
A documentary about the art of Foley Directed by Lalo Molina
Frederic Devanlay Paris
Sound designer for games and films. Nominated several times at the BAFTA and GANG awards for "sound design of the year". Sound libraries for Zero G (Cyberstorm, Deep Impact, Perception Cinemascapes), Bossey (Trailer toolkit). Cyberstorm: Lifs…
Dominic Wilcox London
I think of ideas and make things for a living.
Albert Balbastre Barcelona
Closer Productions PRO AUSTRALIA
We make feature drama & docs & screen content We live in Oztralia B O N Z A M A T E
by James Morris
Last week, the United Nations reported that one million species of plants and animals are at risk of extinction. The high rate of extinction we are currently experiencing is a result of all kinds of human activities, notably climate change, pollution, hunting, over-harvesting, deforestation, land use changes, and the like.
This is not the first time that the Earth has experienced massive die-offs. Five times in the last half billion years, the Earth has seen what scientists call mass extinctions. The most recent occurred when a meteor slammed into the Earth, wreaking global havoc and leading to the extinction of the dinosaurs (except birds). But this was not the only mass extinction, nor the biggest.
This episode – what many are calling the sixth extinction – is different from previous episodes. What previously took hundreds, thousands, even millions of years is instead happening in the span of decades. In addition, this is the only mass extinction where we can point to a single species – us – as the cause. Finally, we can do something about it.
Our collective actions are causing irreparable harm. What’s even more disturbing is that none of this is necessary. Take climate change. As Nathaniel Rich compellingly described in The New York Times Magazine and in his book Losing Earth, we have known about climate change for at least a generation and even missed an opportunity to do something about it in the 1980s. And, as David Wallace-Wells wrote in The Uninhabitable Earth, we have the technological answers and economic resources to address climate change – we just lack a sense of collective urgency and political will.
Typically, discussions about climate change focus on the facts, or the effects of increased carbon dioxide levels in the atmosphere and oceans, or climate models. These are clearly important. And sometimes we end up debating whether it is real, or whether it is caused by humans, but these are all sideshows that only serve to distract us from the pressing problem at hand. A different way to view the problem is through the lens of compassion.
Climate change is not just about science or the environment or the loss of species – it’s about how our actions, decisions, and votes affect others. It is widely recognized that climate change will disproportionately affect the poor. And the real crisis won’t be felt by us, but instead by those who have not yet been born. Therefore, finding and implementing solutions is about looking beyond ourselves. It’s essentially an exercise in compassion, care, and community.
To deny climate change then is not just denying science or ignoring facts; it’s denying the way our actions affect others both near and far, present and future, particularly those without resources to manage, mitigate, or migrate. It’s ignoring the very tangible ways we are affecting the very fabric of the world around us.
The same is true of vaccines. Consider the recent measles outbreak. Just last week, 60 new cases of measles, many of them in New York City, pushed the total to its highest level in 25 years, a record since the disease was declared “eliminated” in 2000.
Vaccines don’t just protect the person getting the vaccine. For vaccines to work and be effective, a certain percentage of the population needs to be vaccinated, typically in the range of 90%. This is known as herd immunity.
There are some people who cannot get vaccinated – the very young, very old, and people with compromised immune systems, for example. These people are vulnerable, not unlike people in coastal communities threatened by rising sea levels. When you get a vaccine, you not only care for yourself; you care for others. You participate in a collective endeavor to protect everyone.
Climate change and vaccinations both make visible the invisible – the web of connections that unite us in one large global community. This is essentially the message of evolution, another area that sometimes divides us. But what evolution teaches us is quite simple – we are all intimately connected through inheritance stretching back four billion years.
It is thought that all life on Earth has a single, common origin. What this means is that we are all related to one another. To be sure, we are related within our immediate family. Taking a step back, all humans are closely related – much more similar to one another than different. And, taking just one more step back, we are related not just to one another, but to every species on Earth.
Darwin knew that his idea of a branching tree of life would be disruptive, moving humans from a special, separate place. But he saw “grandeur in this view of life,” as he put it. The idea that we share a bond with all species is indeed a beautiful way of seeing the world, encouraging us to look at organisms that share the planet with us with care and compassion.
Special thanks to Jen Cawsey for the title. I was going to call it “A Case for Compassion” but I like her title better.
© James Morris and Science Whys, 2019
Jim, did Darwin really propose that all life has a single origin? Although he certainly entertained and probably believed materialist theories about life and the origins of life (spontaneous generation), he avoided statements about ultimate origins in print. I think. And in fact the passage you quote from the end of the Origin of Species–the “view of life” whose grandeur he is admiring–specifically suggests that God might have started life from several origins points.
Hi Michael, Great point. You’re right – he didn’t propose a single common origin in his published work. However, in his notes in 1837, he drew a tree to represent the pattern of evolution over time. At the base of this tree, he wrote a “1” and circled it. And, famously, at the top, he wrote, “I think.” Note that a single common origin doesn’t make any suggestions of how it got there, and therefore leaves open the possibility that it was “originally breathed by the Creator into a few forms or into one,” as Darwin wrote in some editions of the Origin. Also, today, although we think that all living things descended from a single common ancestor, it could be that life originated and went extinct several times before it stuck.
Here’s a link to Darwin’s sketch of the tree: | 14,258 |
\section{Preliminaries}
Throughout, we denote by $k$ a separably closed field. Our references for algebraic groups are~\cite{Borel-AG-book},~\cite{Borel-Tits-Groupes-reductifs},~\cite{Conrad-pred-book},~\cite{Humphreys-book1}, and~\cite{Springer-book}.
Let $H$ be a (possibly non-connected) affine algebraic group. We write $H^{\circ}$ for the identity component of $H$. We write $[H,H]$ for the derived group of $H$. A reductive group $G$ is called \emph{simple} as an algebraic group if $G$ is connected and all proper normal subgroups of $G$ are finite. We write $X_k(G)$ and $Y_k(G)$ for the set of $k$-characters and $k$-cocharacters of $G$ respectively. For $\overline k$-characters and $\overline k$-cocharacters of $G$ we simply say characters and cocharacters of $G$ and write $X(G)$ and $Y(G)$ respectively.
Fix a maximal $k$-torus $T$ of $G$ (such a $T$ exists by~\cite[Cor.~18.8]{Borel-AG-book}). Then $T$ is split over $k$ since $k$ is separably closed. Let $\Psi(G,T)$ denote the set of roots of $G$ with respect to $T$. We sometimes write $\Psi(G)$ for $\Psi(G,T)$. Let $\zeta\in\Psi(G)$. We write $U_\zeta$ for the corresponding root subgroup of $G$. We define $G_\zeta := \langle U_\zeta, U_{-\zeta} \rangle$. Let $\zeta, \xi \in \Psi(G)$. Let $\xi^{\vee}$ be the coroot corresponding to $\xi$. Then $\zeta\circ\xi^{\vee}:\overline k^{*}\rightarrow \overline k^{*}$ is a $k$-homomorphism such that $(\zeta\circ\xi^{\vee})(a) = a^n$ for some $n\in\mathbb{Z}$; we write $\langle \zeta, \xi^{\vee} \rangle$ for this integer $n$.
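For instance (a standard example, recalled only for illustration), if $G=\mathrm{GL_2}$, $T$ is the diagonal maximal torus, and $\zeta=\xi$ is the root $\mathrm{diag}(t_1,t_2)\mapsto t_1t_2^{-1}$ with coroot $\xi^{\vee}(a)=\mathrm{diag}(a,a^{-1})$, then $(\zeta\circ\xi^{\vee})(a)=a^{2}$, i.e.\ $\langle \zeta,\xi^{\vee}\rangle = 2$.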
Let $s_\xi$ denote the reflection corresponding to $\xi$ in the Weyl group of $G$. Each $s_\xi$ acts on the set of roots $\Psi(G)$ by the following formula~\cite[Lem.~7.1.8]{Springer-book}:
$
s_\xi\cdot\zeta = \zeta - \langle \zeta, \xi^{\vee} \rangle \xi.
$
\noindent By \cite[Prop.~6.4.2, Lem.~7.2.1]{Carter-simple-book} we can choose $k$-homomorphisms $\epsilon_\zeta : \overline k \rightarrow U_\zeta$ so that
$
n_\xi \epsilon_\zeta(a) n_\xi^{-1}= \epsilon_{s_\xi\cdot\zeta}(\pm a)
\text{ where } n_\xi = \epsilon_\xi(1)\epsilon_{-\xi}(-1)\epsilon_{\xi}(1). \label{n-action on group}
$
The next result~\cite[Prop.~1.12]{Uchiyama-Nonperfect-pre} shows complete reducibility behaves nicely under central isogenies.
\begin{defn}
Let $G_1$ and $G_2$ be reductive $k$-groups. A $k$-isogeny $f:G_1\rightarrow G_2$ is \emph{central} if $\textup{ker}\,df_1$ is central in $\mathfrak{g_1}$, where $df_1$ is the differential of $f$ at the identity of $G_1$ and $\mathfrak{g_1}$ is the Lie algebra of $G_1$.
\end{defn}
\begin{prop}\label{isogeny}
Let $G_1$ and $G_2$ be reductive $k$-groups. Let $H_1$ and $H_2$ be subgroups of $G_1$ and $G_2$ respectively. Let $f:G_1 \rightarrow G_2$ be a central $k$-isogeny.
\begin{enumerate}
\item{If $H_1$ is $G_1$-cr over $k$, then $f(H_1)$ is $G_2$-cr over $k$.}
\item{If $H_2$ is $G_2$-cr over $k$, then $f^{-1}(H_2)$ is $G_1$-cr over $k$.}
\end{enumerate}
\end{prop}
The next result~\cite[Thm.~1.4]{Bate-cocharacterbuildings-Arx} is used repeatedly to reduce problems on $G$-complete reducibility to those on $L$-complete reducibility where $L$ is a Levi subgroup of $G$.
\begin{prop}\label{G-cr-L-cr}
Suppose that a subgroup $H$ of $G$ is contained in a $k$-defined Levi subgroup $L$ of $G$. Then $H$ is $G$-cr over $k$ if and only if it is $L$-cr over $k$.
\end{prop}
We recall characterisations of parabolic subgroups, Levi subgroups, and unipotent radicals in terms of cocharacters of $G$~\cite[Prop.~8.4.5]{Springer-book}. These characterisations are essential to translate results on complete reducibility into the language of GIT; see~\cite{Bate-geometric-Inventione},~\cite{Bate-uniform-TransAMS} for example.
\begin{defn}
Let $X$ be an affine $k$-variety. Let $\phi : \overline k^*\rightarrow X$ be a $k$-morphism of affine $k$-varieties. We say that $\displaystyle\lim_{a\rightarrow 0}\phi(a)$ exists if there exists a $k$-morphism $\hat\phi:\overline k\rightarrow X$ (necessarily unique) whose restriction to $\overline k^{*}$ is $\phi$. If this limit exists, we set $\displaystyle\lim_{a\rightarrow 0}\phi(a) := \hat\phi(0)$.
\end{defn}
\begin{defn}\label{R-parabolic}
Let $\lambda\in Y(G)$. Define
$
P_\lambda := \{ g\in G \mid \displaystyle\lim_{a\rightarrow 0}\lambda(a)g\lambda(a)^{-1} \text{ exists}\}, $\\
$L_\lambda := \{ g\in G \mid \displaystyle\lim_{a\rightarrow 0}\lambda(a)g\lambda(a)^{-1} = g\}, \,
R_u(P_\lambda) := \{ g\in G \mid \displaystyle\lim_{a\rightarrow0}\lambda(a)g\lambda(a)^{-1} = 1\}.
$
\end{defn}
Then $P_\lambda$ is a parabolic subgroup of $G$, $L_\lambda$ is a Levi subgroup of $P_\lambda$, and $R_u(P_\lambda)$ is the unipotent radical of $P_\lambda$. If $\lambda$ is $k$-defined, $P_\lambda$, $L_\lambda$, and $R_u(P_\lambda)$ are $k$-defined~\cite[Sec.~2.1-2.3]{Richardson-conjugacy-Duke}. All $k$-defined parabolic subgroups and $k$-defined Levi subgroups of $G$ arise in this way since $k$ is separably closed. It is well known that $L_\lambda = C_G(\lambda(\overline k^*))$. Note that $k$-defined Levi subgroups of a $k$-defined parabolic subgroup $P$ of $G$ are $R_u(P)(k)$-conjugate~\cite[Lem.~2.5(\rmnum{3})]{Bate-uniform-TransAMS}.
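For instance (a standard example, included only for orientation), let $G=\mathrm{GL_2}$ and $\lambda(a)=\mathrm{diag}(a,1)$. For $g=\begin{pmatrix} g_{11} & g_{12}\\ g_{21} & g_{22}\end{pmatrix}$ we have
$
\lambda(a)g\lambda(a)^{-1}=\begin{pmatrix} g_{11} & a\, g_{12}\\ a^{-1}g_{21} & g_{22}\end{pmatrix},
$
so the limit as $a\rightarrow 0$ exists if and only if $g_{21}=0$. Thus $P_\lambda$ is the Borel subgroup of upper triangular matrices, $L_\lambda$ is the diagonal maximal torus, and $R_u(P_\lambda)$ consists of the upper unitriangular matrices.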
Recall the following geometric characterisation of complete reducibility via GIT~\cite{Bate-geometric-Inventione}. Suppose that a subgroup $H$ of $G$ is generated by an $n$-tuple ${\bf h}=(h_1,\cdots, h_n)$ of elements of $G$ (or that ${\bf h}$ is a generic tuple of $H$ in the sense of~\cite[Def.~9.2]{Bate-cocharacter-Arx}), and that $G$ acts on $G^n$ by simultaneous conjugation.
\begin{prop}\label{geometric}
A subgroup $H$ of $G$ is $G$-cr if and only if the $G$-orbit $G\cdot {\bf h}$ is closed in $G^n$.
\end{prop}
Combining Proposition~\ref{geometric} and a recent result from GIT~\cite[Thm.~3.3]{Bate-uniform-TransAMS} we have
\begin{prop}\label{unipotentconjugate}
Let $H$ be a subgroup of $G$. Let $\lambda\in Y(G)$. Suppose that ${\bf h'}:=\lim_{a\rightarrow 0}\lambda(a)\cdot {\bf h}$ exists. If $H$ is $G$-cr, then ${\bf h'}$ is $R_u(P_\lambda)$-conjugate to ${\bf h}$.
\end{prop}
We use a rational version of Proposition~\ref{unipotentconjugate}; see~\cite[Cor.~5.1]{Bate-cocharacter-Arx},~\cite[Thm.~9.3]{Bate-cocharacter-Arx}:
\begin{prop}\label{rationalonjugacy}
Let $H$ be a subgroup of $G$. Let $\lambda\in Y_k(G)$. Suppose that ${\bf h'}:=\lim_{a\rightarrow 0}\lambda(a)\cdot {\bf h}$ exists. If $H$ is $G$-cr over $k$, then ${\bf h'}$ is $R_u(P_\lambda)(k)$-conjugate to ${\bf h}$.
\end{prop} | 94,860 |
Call of Duty is a series that introduced so many young gamers to multiplayer gaming. The series holds a special place in the hearts and minds of many young adults who have so many fond memories of coming home from middle school, turning on their Xbox 360s, and playing Call of Duty: Modern Warfare 2 for hours on end. Then, with the addition of booster packs and an increasing number of microtransactions with each entry, Activision steadily alienated much of its core fanbase. Now that the series has finally returned to “boots on the ground” combat, Treyarch, the developer behind the Black Ops series, has its first opportunity since 2012 to return to form and give fans the Call of Duty experience they remember so fondly. While the open beta isn’t the finished product and the full release is still a few months away, it’s a reliable indicator of how the game will feel overall. And true to Treyarch’s nature, it feels great. Minus a few technical hiccups.
Treyarch is well known for making the best competitive COD games. Maps are generally well balanced with the standard three-lane layouts. Treyarch’s guns usually kill slower than in Call of Duty games developed by Infinity Ward or Sledgehammer. This longer “time to kill” makes fights feel more impactful than fights in other Call of Duties. And most importantly, the weapons feel fun to use, thanks to excellent audio production and weapon design. Players now have even more customization options for their loadouts, with each character having different abilities and unique weapons they can use. For example, the character Battery has a scatter grenade that sticks to surfaces and breaks apart into several smaller explosions upon detonation. You can swap out this grenade for other pieces of equipment, but you will only have one or two uses of it per life, rather than it being a cooldown.
The most significant change to gameplay is health no longer auto-regenerating. Instead of slowly gaining health back after not taking damage for a few moments, players have a stim shot bound to L1. Players also have the option to equip a tactical stim shot that lowers its cooldown and allows the player to shoot while using it. It’s an intriguing change that completely alters the flow of a fight. Instead of ducking behind cover for a few seconds and waiting for their health to regenerate, players can quickly break line of sight, pop their stim, and get right back into the action. Less time spent waiting and more time spent fighting is always a good thing.
Even though combat feels good, the game isn’t without its flaws. Luckily, none of them are unfixable. Enemy players will often start lagging and teleporting around, leading to several frustrating deaths. Hopefully, the netcode is smoothed out by the full release in October. Occasionally, hitboxes can be questionable, but that’s almost standard in Call of Duty at this point.
Call of Duty: Black Ops 4 is shaping up to be a promising entry in a long and storied franchise. If Activision doesn’t ruin the game by saturating it with microtransactions, we could see the greatness of Black Ops 2 rekindled in 2018. | 55,805 |
TITLE: A direct sum of nilpotent Lie algebras such that $C^n(\mathfrak{g}) \subsetneq \mathfrak{z}(\mathfrak{g})$
QUESTION [0 upvotes]: It is known that the center of a nilpotent Lie algebra is never trivial, as it always contains $C^n(\mathfrak{g})$ if $\mathfrak{g}$ is nilpotent of class $n+1$.
Let $C^n$ denote the descending central series. I am looking for an example of a direct sum of two nilpotent Lie algebras $\mathfrak{g}=\mathfrak{g_1}\oplus\mathfrak{g_2}$ such that $\mathfrak{g}$ is nilpotent of class $n+1$ and $C^n(\mathfrak{g}) \subsetneq \mathfrak{z}(\mathfrak{g})$.
What I did:
I thought about the simplest nilpotent algebra I know, the strictly upper triangular matrices with entries in $\Bbb R$, and set $\mathfrak{g}=\mathfrak{b_n}\oplus \mathfrak{b_n}$ (the notation $\mathfrak{b_n}$ here for the strictly upper triangular matrices is an abuse).
I found for $n=2$ that $\mathfrak{z}(\mathfrak{b_2}) = \mathfrak{b_2}$ but I have a feeling it is true for every $n>2$. If this is the case, $\mathfrak{z}(\mathfrak{g})=\mathfrak{z}(\mathfrak{b_n})\oplus \mathfrak{z}(\mathfrak{b_n})=\mathfrak{b_n}\oplus \mathfrak{b_n}$ and $C^n(\mathfrak{g}) \neq \mathfrak{g}$ so $C^n(\mathfrak{g}) \subsetneq \mathfrak{z}(\mathfrak{g})$.
Is that correct? Are there other interesting examples?
Thank you for your help.
REPLY [2 votes]: Here is an example of an indecomposable nilpotent Lie algebra $L$ of dimension $7$ with $C^5(L)=0$ but $C^4(L)\subsetneq Z(L)$. Here $C^1(L)=L$ and $[L,C^k(L)]=C^{k+1}(L)$.
Consider the Lie brackets with respect to a basis $(x_1,\ldots ,x_7)$ given by
$$
[x_1,x_2]=x_4,\; [x_1,x_4]=x_5,\;[x_1,x_5]=x_7,\; [x_2,x_3]=x_7,\;
[x_2,x_4]=x_6.
$$
Then $Z(L)=\langle x_6,x_7\rangle$, but $C^4(L)=\langle x_7\rangle$ and
$C^5(L)=0$.
Now take the direct sum $L\oplus L$. | 121,556 |
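If you want to double-check the claims about $L$ numerically, here is a small Python/NumPy script (the structure constants are exactly the brackets listed above; the rank tolerances are ad hoc) that computes the dimensions of the lower central series and of the centre:

    import numpy as np

    n = 7
    B = np.zeros((n, n, n))          # B[i, j] = coordinates of [x_{i+1}, x_{j+1}]

    def set_bracket(i, j, k):        # [x_i, x_j] = x_k (1-based indices)
        B[i - 1, j - 1, k - 1] = 1.0
        B[j - 1, i - 1, k - 1] = -1.0

    for (i, j, k) in [(1, 2, 4), (1, 4, 5), (1, 5, 7), (2, 3, 7), (2, 4, 6)]:
        set_bracket(i, j, k)

    def bracket(u, v):
        return np.einsum('i,j,ijk->k', u, v, B)

    def span(vectors):               # orthonormal basis of the span (possibly empty)
        M = np.array([v for v in vectors if np.linalg.norm(v) > 1e-12])
        if M.size == 0:
            return np.zeros((0, n))
        _, s, vt = np.linalg.svd(M)
        return vt[:int(np.sum(s > 1e-10))]

    # lower central series: C^1 = L, C^{k+1} = [L, C^k]
    L = np.eye(n)
    C = [L]
    while C[-1].shape[0] > 0:
        C.append(span([bracket(u, v) for u in L for v in C[-1]]))
    print([c.shape[0] for c in C])   # -> [7, 4, 3, 1, 0]

    # centre: common kernel of all ad(x_i)
    A = np.vstack([B[i].T for i in range(n)])
    _, s, vt = np.linalg.svd(A)
    centre = vt[int(np.sum(s > 1e-10)):]
    print(centre.shape[0], centre)   # -> 2 and an orthonormal basis of span{x_6, x_7}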
\begin{document}
\title{\bf {The affirmative solution to Salem's problem}}
\author{Semyon Yakubovich}
\maketitle
\markboth{\rm \centerline{SEMYON YAKUBOVICH}}{}
\markright{\rm \centerline{SALEM'S PROBLEM}}
\begin{abstract}{By using methods of classical analysis and special functions, an old and attractive problem of Salem (posed in {\it Trans. Amer. Math. Soc.} {\bf 53} (3) (1943), p. 439), namely whether the Fourier-Stieltjes coefficients of the Minkowski question mark function vanish at infinity, is solved affirmatively. The coefficients $d_n$ decay polynomially, namely, $d_n= O(n^{-\mu}),\ n \to \infty$, for some $\mu > 0$. Moreover, we generalize the Salem problem, proving that the Fourier-Stieltjes coefficients of any power $m\in \mathbb{N}$ of the Minkowski question mark function vanish at infinity as well.}
\end{abstract}
{\bf Keywords}: Minkowski question mark function, Salem's problem, Fourier-Stieltjes transform, Fourier-Stieltjes coefficients
{\bf Mathematics subject classification}: 42A16, 42B10, 44A15
\section{Introduction}
Let $x \in \mathbb{R}$ and consider the following Fourier-Stieltjes transforms
$$f (x)= \int_0^1 e^{ixt} d q(t),\eqno(1.1)$$
$$F(x)= \int_0^\infty e^{ixt} d q(t).\eqno(1.2)$$
Here $q(x)$ is the famous Minkowski question mark function $?(x) \equiv q(x)$. This function is defined by \cite{D} $q(x):[0,1]\mapsto[0,1]$
$$q ([0,a_{1},a_{2},a_{3},\ldots])=2\sum\limits_{i=1}^{\infty}(-1)^{i+1}2^{-\sum_{j=1}^{i}a_{j}},$$
where $x=[0,a_{1},a_{2},a_{3},\ldots]$ stands for the representation of $x$ by a regular continued fraction. It is well known that $q(x)$ is continuous, strictly increasing and supports a singular measure. It is uniquely determined by the following functional equations, which will be used in the sequel
$$q (x)= 1- q(1-x),\ x \in [0,1],\eqno(1.3)$$
$$q(x)= 2 q\left(\frac{x}{x+1}\right),\ x \in [0,1],\eqno(1.4)$$
$$q(x)+ q\left({1\over x}\right)= 2, \ x > 0.\eqno(1.5)$$
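For numerical experiments it is convenient to evaluate $q(x)$ directly from the definition above; a minimal Python sketch (the truncation depth and tolerances are chosen only for illustration) reads
\begin{verbatim}
def question_mark(x, depth=60):
    # q(x) for 0 < x < 1 via the truncated continued fraction [0; a_1, a_2, ...]
    value, sign, exponent = 0.0, 1.0, 0
    for _ in range(depth):
        a = int(1.0 / x)
        x = 1.0 / x - a
        exponent += a
        value += sign * 2.0 ** (-exponent)
        sign = -sign
        if x < 1e-15 or exponent > 60:
            break
    return 2.0 * value

# numerical sanity checks of the functional equations (1.3) and (1.4)
x = 0.3
print(question_mark(x) + question_mark(1.0 - x))            # approximately 1
print(question_mark(x) - 2.0 * question_mark(x / (x + 1)))  # approximately 0
\end{verbatim}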
When $x \to 0$, it decreases exponentially $ \ q(x)= O\left(2^{-
1/x}\right)$. Key values are $q(0)=0, \ q(1)= 1,\
q(\infty)= 2$. For instance, from (1.3) and asymptotic
behavior of the Minkowski function near zero one can easily get the
finiteness of the following integrals
$$\int_0^1 x^\lambda\ d q(x) < \infty, \ \lambda \in \mathbb{R},\eqno(1.6)$$
$$\int_0^1 (1-x)^\lambda\ d q(x) < \infty, \ \lambda \in
\mathbb{R}.\eqno(1.7)$$
Further, as was proved by Salem \cite{Sal1}, the Minkowski question
mark function satisfies the H\"older condition
$$\left|q(x)- q(y)\right| < C |x-y|^\alpha, \ \alpha < 1,$$
where
$$\alpha= \frac{\log 2}{2 \log {\sqrt 5 + 1\over 2}}= 0.7202_+.$$
and $C >0$ is an absolute constant. As we observe from the functional equation (1.3) the Fourier-Stieltjes transform (1.1) satisfies the functional relation
$$f(x) = e^{ix} f(-x), \eqno(1.8)$$
and therefore $e^{-ix/2} f(x)$ is real-valued. So, taking its imaginary part, we obtain the equality
$$\cos \left({x\over 2}\right) f_{s} (x) = \sin \left({x\over 2}\right) f_c (x),\eqno(1.9)$$
where $f_{s},\ f_{c} $ are the Fourier-Stieltjes sine and cosine transforms of the Minkowski question mark function, respectively,
$$f_{s} (x)= \int_0^1 \sin(xt) d q(t),\eqno(1.10)$$
$$f_{c} (x)= \int_0^1 \cos(xt) d q(t).\eqno(1.11)$$
Hence, letting, for instance, $x = 2\pi n, \ n \in \mathbb{N}_0$, we
get $f_{s}(2\pi n)=0$, and we write $d_n := f_{c} (2\pi n)$. In 1943 Salem asked \cite{Sal1}
whether $d_{n}\rightarrow 0$, as $n\rightarrow\infty$. This question is quite delicate, since it concerns singular functions (see \cite{Sal3}, Ch. IV) and the classical Riemann-Lebesgue lemma for the class $L_1$, in general, cannot be applied. A singular function is defined as a continuous, bounded monotone function with a null derivative almost everywhere. Hence it supports a positive bounded Borel measure, which is singular with respect to Lebesgue measure. For such singular measures there are various examples whose Fourier transforms do not tend to zero, although some do (see, for instance, in \cite{Sal1}, \cite{Sal2}, \cite{Men}). In \cite{Win} (see also \cite{IV}) it was proved that for every $\varepsilon >0$ there exists a singular monotone function, which supports a measure whose
Fourier-Stieltjes transform behaves as $O(t^{-{1\over 2} +\varepsilon}), \ |t| \to \infty$.
In fact, it is worth mentioning that the Salem problem is an old and quite attractive problem in number theory and Fourier analysis \cite{Walter}. Several attempts were undertaken to solve Salem's problem (see, for instance, \cite{Al}, \cite{Yakeq}, \cite{YAK3}). Moreover, after the appearance of the original version of this article \cite{YAKar} on arXiv, it was noted in \cite{Per} that the solution to Salem's problem is a special case of a more general result of the paper \cite{Jor}.
In the sequel we will give the affirmative solution to Salem's problem, using methods of classical analysis and special functions. To do this, we will employ all of the functional equations which uniquely describe the Minkowski question mark function and establish a new integro-differential equation for the Fourier-Stieltjes transform (1.1). It involves, in turn, the following functional equation, which was proved by the author in \cite{Yakeq} and relates the transforms (1.1), (1.2)
$$f(x)= \left(1- {e^{ix}\over 2}\right) F(x),\ x \in \mathbb{R}.\eqno(1.12)$$
Taking real and imaginary parts of both sides in (1.12), we derive interesting equalities (see details in \cite{Yakeq}), which will be used below, namely
$$\int\limits_{1}^{\infty}\cos xt \ d q(t)= \frac{1- 8\sin^2(x/2)}{1+ 8\sin^2(x/2)}\int\limits_{0}^{1}\cos x t\ d
q(t),\ x \in \mathbb{R},\eqno(1.13)$$
$$\int\limits_{1}^{\infty}\sin xt \ d q(t)= \frac{5- 8\sin^2(x/2)}{1+ 8\sin^2(x/2)}\int\limits_{0}^{1}\sin xt\ d
q(t),\ x \in \mathbb{R}.\eqno(1.14)$$
Making $x \to 0$ in (1.14), we find, in particular,
$$\int\limits_{1}^{\infty} t d q(t)= 5\int\limits_{0}^{1} t dq(t).\eqno(1.15)$$
Moreover, using the functional equation (1.3), one can prove the important equality for the coefficients $d_n$
$$d_n= 2\int\limits_{0}^{1} t \cos (2\pi n t) \ d q(t).\eqno(1.16)$$
Indeed, we have
$$ \int\limits_{0}^{1} t \cos (2\pi n t )\ d q(t) - {d_n\over 2} = \left(\int\limits_{0}^{1/2} + \int\limits_{1/2}^{1}\right) \left( t - {1\over 2} \right) \cos (2\pi n t) \ d q(t) $$
$$= \int\limits_{0}^{1/2} \left( t - {1\over 2} \right) \cos (2\pi n t) \ d q(t) - \int\limits_{1/2}^{1} \left( t - {1\over 2} \right) \cos (2\pi n t) \ d q(1-t) = 0.$$
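The coefficients $d_n$ and the identity (1.16) can also be checked numerically by Riemann-Stieltjes sums, using the routine \texttt{question\_mark} from the sketch above (the grid size is chosen ad hoc and the convergence is slow, so the output is only indicative):
\begin{verbatim}
import numpy as np

def stieltjes(f, n_grid=20000):
    # Riemann-Stieltjes sum of f against dq on (0, 1)
    t = np.linspace(1e-9, 1.0 - 1e-9, n_grid)
    q = np.array([question_mark(x) for x in t])   # sketch above
    mid = 0.5 * (t[1:] + t[:-1])
    return np.sum(f(mid) * np.diff(q))

for m in (1, 2, 4, 8, 16):
    d_m = stieltjes(lambda t: np.cos(2 * np.pi * m * t))
    check = 2.0 * stieltjes(lambda t: t * np.cos(2 * np.pi * m * t))  # identity (1.16)
    print(m, d_m, check)
\end{verbatim}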
Further, we give values of the important relatively convergent integrals, which are calculated with the use of the Parseval equality for the Mellin transform \cite{Tit} and verified recently with {\it Mathematica}. They will be employed in Section 3 to solve the Salem problem. Precisely, according to relations (2.5.22.2) and (2.5.22.6) in \cite{Prud}, Vol. 1, the following equalities hold for $a,\ b >0 $ and $0 < \mu < 2$
$$\int_0^\infty x^{\mu-1} {\sin(ax^2) \brace \cos (ax^2)} \sin(bx) dx = {b\over 2 a^{(\mu+1)/2} } \Gamma\left({\mu+1\over 2}\right) {\cos ((1-\mu)\pi/4) \brace \sin ((1-\mu)\pi/4) }$$
$$\times {}_2F_3\left( {\mu+3\over 4}, \ { \mu+1\over 4} ; \ {1\over 2},\ {3\over 4},\ {5\over 4}; - \left( {b^2\over 8a} \right)^2 \right)\mp {b^3\over 12 a^{(\mu+3)/2} } \Gamma\left({\mu+3\over 2}\right) {\cos ((1+\mu)\pi/4) \brace \sin ((1+\mu)\pi/4) }$$
$$\times {}_2F_3\left( {\mu+5\over 4}, \ { \mu+3\over 4} ; \ {3\over 2},\ {5\over 4},\ {7\over 4}; - \left( {b^2\over 8a} \right)^2 \right),\eqno(1.17)$$
$$\int_0^\infty x^{\mu-1} {\sin(ax^2) \brace \cos (ax^2)} \cos (bx) dx = {1\over 2 a^{\mu/2} } \Gamma\left({\mu\over 2}\right) {\sin (\mu\pi/4) \brace \cos (\mu\pi/4) }$$
$$\times {}_2F_3\left( {\mu+2\over 4}, \ { \mu\over 4} ; \ {1\over 2},\ {1\over 4},\ {3\over 4}; - \left( {b^2\over 8a} \right)^2 \right)\mp {b^2\over 4 a^{\mu/2 +1 } }\Gamma\left({\mu\over 2} + 1\right) {\cos (\mu\pi/4) \brace \sin (\mu\pi/4) }$$
$$\times {}_2F_3\left( {\mu\over 4} +1, \ { \mu+2\over 4} ; \ {3\over 2},\ {3\over 4},\ {5\over 4}; - \left( {b^2\over 8a} \right)^2 \right).\eqno(1.18)$$
Here $\Gamma(z)$ is Euler's gamma-function and ${}_2F_3(\alpha_1,\ \alpha_2; \beta_{1}, \ \beta_{2},\ \beta_{3}; - x )$ is the generalized hypergeometric function, having the following asymptotic behavior at infinity, which is confirmed with {\it Mathematica} (see in \cite{Olver}, Section 16.11 (ii))
$${}_2F_3(\alpha_1,\ \alpha_2; \beta_{1}, \ \beta_{2},\ \beta_{3}; -x ) $$
$$= {\Gamma(\beta_{1})\Gamma(\beta_{2})\Gamma(\beta_{3})\over \Gamma(\alpha_{1})\Gamma(\alpha_{2})} \left[ {x^{\gamma} \over \sqrt \pi} \cos\left( 2\sqrt x + \pi\gamma \right) + O(x^{\gamma- 1/2}) + O(x^{-\alpha_{1}}) + O(x^{-\alpha_{2}})\right],\ x \to +\infty,\eqno(1.19)$$
where
$$\gamma = {1\over 4} + {1\over 2} \left[ \sum_{j=1}^{2} \alpha_{j}- \sum_{j=1}^{3} \beta_{j}\right].\eqno(1.20)$$
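For instance, for the function $ {}_2F_3\left( {7\over 8}, \ {3\over 8}; \ {1\over 2}, \ {3\over 4}, \ {5\over 4}; -x\right)$, which appears in Section 3, formula (1.20) gives $\gamma = {1\over 4} + {1\over 2}\left[ {7\over 8} + {3\over 8} - {1\over 2} - {3\over 4} - {5\over 4}\right] = -{3\over 8}$, so that (1.19) predicts the decay $x^{-3/8}$ and the phase shift $-{3\pi\over 8}$; with $x = (\pi n/t)^2$ this is exactly the behavior recorded in (3.6) below.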
\section{Integro-differential equation for the Fourier-Stieltjes transform $(1.1)$}
In order to make the paper self-contained we begin with the proof of the relation (1.12) (cf. \cite{Yakeq}).
{\bf Lemma 1}. {\it Let $x \in \mathbb{R}$ and let $f(x)$, $F(x)$ be the Fourier-Stieltjes transforms $(1.1),\ (1.2)$, respectively. Then the functional equation $(1.12)$ holds. }
\begin{proof} The proof is based on functional equations (1.4), (1.5) for the Minkowski question mark function and simple properties of the Stieltjes integral. In fact, we derive the chain of equalities
\begin{eqnarray*}
\int\limits_{0}^{1}e^{ixt}\ d q(t)=
\int\limits_{0}^{\infty}e^{ixt}\
d q(t)- \int\limits_{1}^{\infty}e^{ixt}\ d q(t)\\
= \int\limits_{0}^{\infty}e^{ixt}\ d q(t) - e^{ix}
\int\limits_{0}^{\infty}e^{ixt}\
d q \left({ t+1}\right)\\
= \int\limits_{0}^{\infty}e^{ixt}\ d q(t) + e^{ix}
\int\limits_{0}^{\infty}e^{ixt}\
d q \left({1\over { t+1}}\right)\\
= \int\limits_{0}^{\infty}e^{ixt}\ d q(t) + e^{ix}
\int\limits_{0}^{\infty}e^{ixt}\
d q\left({1/t \over { 1+ 1/t}}\right)\\
= \int\limits_{0}^{\infty}e^{ixt}\ d q(t) + {e^{ix}\over 2}
\int\limits_{0}^{\infty}e^{ixt}\
d q\left({1\over t}\right)\\
= \left(1- {e^{ix}\over 2}\right) \int\limits_{0}^{\infty}e^{ixt}\ d
q(t),
\end{eqnarray*}
which yields (1.12).
\end{proof}
{\bf Theorem 1.} {\it Let $ x \in \mathbb{R}_+$. The Fourier-Stieltjes transform $(1.1)$ satisfies the following integro-differential equation, involving the operator of the modified Hankel transform}
$$\frac{ e^{ix}}{ 2 - e^{ix} } \left[ f^\prime (x) + \frac{ 2 i f(x)}{ 2 - e^{ix} }\right] = - \int_0^\infty J_0\left(2\sqrt {x y}\right) e^{-iy} f(y) dy.\eqno(2.1)$$
\begin{proof} Indeed, differentiating (1.12) with respect to $x$ and using it again, we find
$$f^\prime (x)= - \frac{ i \ e^{ix} f(x)}{ 2 - e^{ix} } + i \left(1- {e^{ix}\over 2}\right)
\int_0^\infty t e^{ixt} \ d q(t),\eqno(2.2)$$
where the differentiation under the integral sign in (1.2) is allowed via the simple estimate
$$\left| \int_0^\infty t e^{ixt} \ d q(t)\right| \le \int_0^\infty t d q(t) = 3, $$
where the latter equality is due to (1.15), (1.16). Hence,
$$i \int_0^\infty t \ e^{ixt} \ d q(t) =i \int_0^1 t e^{ixt} \ d q(t) + i \int_1^\infty t e^{ixt} \ d q(t) $$
$$ = f^\prime (x) + i \int_0^1 {e^{ix /t}\over t} \ d q (t).\eqno(2.3)$$
Recalling the relatively convergent integral from \cite{Prud}, relation (2.12.9.3)
$$ {e^{ix /t}\over it} = \int_0^\infty J_0\left(2\sqrt {x y}\right) e ^{-ity} dy,\quad x, t > 0,\eqno(2.4)$$
where $J_0(z)$ is the Bessel function of the first kind \cite{Prud}, Vol. 2, we substitute it in (2.3). Hence, after the change of the order of integration and the use of the symmetry property (1.8), we combine with (2.2) and come up with the integro-differential equation (2.1). Our goal now is to justify the interchange of the order of integration in the iterated integral, proving the formula
$$\int_0^1 \left( \int_0^\infty J_0\left(2\sqrt {x y}\right) e ^{-ity} dy \right) \ d q (t) = \int_0^\infty J_0\left(2\sqrt {x y}\right) \left( \int_0^1 e ^{-ity} \ d q (t)\right) \ dy,\quad x >0.\eqno(2.5)$$
To do this, it is sufficient to justify the limit equality
$$\lim_{Y\to \infty} \int_0^1 \left( \int_Y^\infty J_0\left(2\sqrt {x y}\right) e ^{-ity} dy \right) \ d q (t) =0\eqno(2.6)$$
for each fixed positive $x$. Naturally, we will appeal to the known asymptotic behavior of the Bessel function at infinity \cite{Olver}, Section 10.17 (i)
$$J_\nu( y)= \sqrt {{2\over \pi y}} \left[ \cos \left(y - {\pi\nu \over 2}- {\pi\over 4} \right) - {a (\nu) \over y} \sin \left( y - {\pi\nu \over 2} - {\pi\over 4} \right) + O\left({1\over y^2}\right)\right],\ y \to + \infty,\eqno(2.7)$$
where
$$a (\nu)= {\nu^2\over 2} - {1\over 8},\ \nu \in \mathbb{R}.$$
Hence, for sufficiently large $Y >0$ and $x >0,\ t \in (0,1)$, we have
$$\int_Y^\infty J_0\left(2\sqrt {x y}\right) e ^{-ity} dy = {1\over \sqrt \pi x^{1/4} } \int_Y^\infty \cos \left(2\sqrt {xy} - {\pi\over 4} \right) e ^{-ity} {dy\over y^{1/4}}$$
$$ + {1\over 16\sqrt \pi x^{3/4} } \int_Y^\infty \sin \left(2\sqrt {xy} - {\pi\over 4} \right) e ^{-ity} {dy\over y^{3/4}} + O\left(Y^{-1/4}\right).\eqno(2.8)$$
As we will see from the estimates below and the finiteness of integrals (1.6) for various real $\lambda$, in order to establish the limit (2.6), it is sufficient to estimate, for instance, the integral
$$ \int_Y^\infty \cos \left(2\sqrt {xy}\right)\cos(ty) {dy\over y^{1/4}},$$
because other integrals in (2.8) can be estimated in the same manner. With the simple substitution and integration by parts we have
$$\int_Y^\infty \cos \left(2\sqrt {xy}\right) \cos(ty) {dy\over y^{1/4}} = 2 \int_{\sqrt{Y}}^\infty \cos \left(2 y\sqrt {x }\right) \cos(ty^2) \sqrt y \ dy$$
$$= - { \cos \left(2\sqrt {xY}\right) \sin\left(t Y\right) \over t Y^{1/4} } + {1\over 2t} \int_{\sqrt{Y}}^\infty \sin (ty^2) \left[ { \cos \left(2 y\sqrt {x }\right) \over y } + 4 \sqrt x \sin(2 y\sqrt {x }) \right] \ {dy \over \sqrt y}$$
$$= {2\sqrt x \over t} \int_{\sqrt{Y}}^\infty \sin (ty^2) \sin(2 y\sqrt {x }) {dy \over \sqrt y} + O\left(t^{-1} Y^{-1/4}\right).$$
Similarly,
$$ {2\sqrt x \over t} \int_{\sqrt{Y}}^\infty \sin (ty^2) \sin(2 y\sqrt {x }) {dy \over \sqrt y} = O\left(t^{-2} Y^{-3/4}\right) $$
$$+ {\sqrt x \over 2 t^2} \int_{\sqrt{Y}}^\infty \cos (ty^2) \left[ - {3 \sin(2 y\sqrt {x }) \over y } + 4\sqrt{x} \cos(2 y\sqrt {x }) \right] {dy\over y^{3/2} } $$
$$= O\left(t^{-2} Y^{-1/4}\right) .$$
Consequently,
$$\int_0^1 \left( \ \int_Y^\infty \cos \left(2\sqrt {xy}\right)\cos(ty) {dy\over y^{1/4}} \right) d q (t) $$
$$= O\left( Y^{-1/4} \left[ \int_0^1 t^{-1} d q(t) + \int_0^1 t^{-2} d q(t)\right] \right) = O\left( Y^{-1/4}\right),\ Y \to \infty.$$
Therefore, treating in the same manner other integrals from (2.8), we get equality (2.6), completing the proof of Theorem 1.
\end{proof}
{\bf Remark 1}. An integro-differential equation similar to (2.1) for the Fourier-Stieltjes transform (1.1), with the derivative $f^{\prime} (x)$ inside the modified Hankel transform \cite{Tit}, was exhibited in \cite{Al}. However, it did not lead to the solution of the Salem problem.
{\bf Corollary 1}. {\it Let $n \in \mathbb{N}.$ The values $d_n= f(2\pi n)$ and
$$c_n= \int_0^1 t \sin(2\pi n t) dq(t)\eqno(2.9)$$
have the following integral representations in terms of the modified Hankel transform
$$ d_n= {2\over 5} \int_0^\infty J_0\left(2\sqrt {2\pi n y}\right) f_s(y) dy,\eqno(2.10)$$
$$ c_n= \int_0^\infty J_0\left(2\sqrt {2\pi n y}\right) f_c(y) dy,\eqno(2.11)$$
where $f_s(x), f_c(x)$ are the Fourier-Stieltjes sine and cosine transforms of the Minkowski question mark function $(1.10), (1.11)$, respectively. }
\begin{proof} Indeed, substituting in (2.1) $x=2\pi n$, we have
$$f^\prime (2\pi n) + 2i d_n = - \int_0^\infty J_0\left(2\sqrt {2\pi n y}\right) e^{-iy} f (y) dy.$$
In the meantime, it is not difficult to show, recalling (1.16), that
$$f^\prime (2\pi n)= i \int_0^1 t e^{2\pi i nt} d q(t)= i \int_0^1 t \cos(2\pi nt ) \ d q(t) $$
$$ - \int_0^1 t \sin (2\pi nt ) \ d q(t) = {i\over 2} \ d_n - c_n.\eqno(2.12)$$
Hence,
$${5\over 2} \ i d_n - c_n = - \int_0^\infty J_0\left(2\sqrt {2\pi n y}\right) e^{-iy} f (y) dy.\eqno(2.13)$$
Now taking the imaginary and real parts of both sides of the latter equality in (2.13) with the use of (1.3), we end up with (2.10), (2.11).
\end{proof}
\section{Solution to Salem's problem}
The main result is the following
{\bf Theorem 2}. {\it The answer to Salem's question is affirmative. Moreover, there exists some $\mu >0$ such that }
$$d_n= O\left(n^{-\mu}\right), \ n \to \infty.$$
\begin{proof} Indeed, taking (2.10), we write
$$ {5\over 2} d_n = \int_0^1 J_0\left(2\sqrt {2\pi n y}\right) f_s(y) dy + \int_1^\infty J_0\left(2\sqrt {2\pi n y}\right) f_s(y) dy
= I_1(n)+ I_2(n).\eqno(3.1)$$
From the inequality $\sqrt x \left|J_\nu(x) \right| \le C,\ x > 0$, where $C >0$ is an absolute constant, it follows that $I_1(n)$ converges absolutely and uniformly with respect to $n$. Therefore $I_1(n) \to 0,\ n \to \infty$. Concerning the integral $I_2(n)$, we appeal to the asymptotic formula (2.7) for the Bessel function and recall (1.10) to get the equalities
$$I_2(n)= \int_1^\infty J_0\left(2\sqrt {2\pi n y}\right) f_s(y) dy$$
$$ = {1\over \sqrt \pi (2\pi n)^{1/4} } \int_1^\infty \cos \left(2\sqrt {2\pi n y} - {\pi\over 4} \right) \left(\int_0^1 \sin(yt) dq(t)\right) {dy\over y^{1/4}}$$
$$ + {1\over 16 \sqrt \pi (2\pi n)^{3/4} } \int_1^\infty \sin \left(2\sqrt {2\pi n y} - {\pi\over 4} \right) \left(\int_0^1 \sin(yt) dq(t)\right) {dy\over y^{3/4}} $$
$$+ O\left({1\over n^{5/4}} \int_1^\infty \left(\int_0^1 \sin(yt) dq(t)\right) {dy\over y^{5/4}}\right) = I_{21}(n) + I_{22}(n)+ I_{23}(n).\eqno(3.2)$$
Hence
$$|I_{23}(n)| \le {C \over n^{5/4}} \int_0^1 dq(t) = O\left( n^{-5/4}\right),\ n \to \infty, $$
where $C>0$ is an absolute constant. Further, interchanging back the order of integration in the integral $I_{22}$ for each $n$, as in the proof of Theorem 1, we write it in the form
$$ I_{22}(n) = {1\over 16 \sqrt \pi (2\pi n)^{3/4} } \int_1^\infty \sin \left(2\sqrt {2\pi n y} - {\pi\over 4} \right) \left(\int_0^1 \sin(yt) dq(t)\right) {dy\over y^{3/4}} $$
$$= {1\over 16 (2\pi)^{5/4} n^{3/4} } \int_0^1 dq(t) \left( \int_0^\infty- \int_0^1\right) \left[ \sin \left(2\sqrt {2\pi n y} \right) - \cos \left(2\sqrt {2\pi n y} \right) \right] \sin(yt) {dy \over y^{3/4}},\eqno(3.3)$$
where by virtue of the absolute and uniform convergence with respect to $n \in \mathbb{N}$ and $t \in [0,1]$
$$ {1\over 16 (2\pi)^{5/4} n^{3/4} } \int_0^1\left[ \sin \left(2\sqrt {2\pi n y} \right) - \cos \left(2\sqrt {2\pi n y} \right) \right] \sin(yt) {dy \over y^{3/4}} $$
$$= O\left( n^{-3/4}\right),\ n \to \infty.\eqno(3.4)$$
In the meantime, the corresponding relatively convergent integral in (3.4), taken over $(0,\infty)$, is calculated via equalities (1.17), (1.18), letting there $a= t/(2\pi n),\ b=2, \ \mu= 1/2$. In fact, making an elementary substitution, we find
$$ {1\over 16 (2\pi)^{5/4} n^{3/4} } \int_0^\infty \left[ \sin \left(2\sqrt {2\pi n y} \right) - \cos \left(2\sqrt {2\pi n y} \right) \right] \sin(yt) {dy \over y^{3/4}} $$
$$= {1\over 8 (2\pi)^{3/2} n} \int_0^\infty \left[ \sin \left(2y \right) - \cos \left(2y \right) \right] \sin\left(y^2 {t\over 2\pi n}\right) {dy \over y^{1/2}} $$
$$ = {1 \over 8\ (2\pi)^{3/4} n^{1/4} t^{3/4} } \Gamma\left({3\over 4}\right) \cos \left({\pi\over 8}\right) {}_2F_3\left( {7\over 8}, \ { 3\over 8} ; \ {1\over 2},\ {3\over 4},\ {5\over 4}; - \left( {\pi n\over t} \right)^2 \right)$$
$$- {(2\pi)^{1/4} n^{3/4} \over 12\ t^{7/4} } \Gamma\left({7\over 4}\right) \cos \left({3\pi\over 8}\right) {}_2F_3\left( {11\over 8}, \ { 7\over 8} ; \ {3\over 2},\ {5\over 4},\ {7\over 4}; - \left( {\pi n\over t} \right)^2 \right)$$
$$- {1 \over 16 \ (2\pi)^{5/4} \ n^{3/4} t^{1/4} } \Gamma\left({1\over 4}\right) \sin \left({\pi\over 8}\right) {}_2F_3\left( {5\over 8}, \ { 1\over 8} ; \ {1\over 2},\ {1\over 4},\ {3\over 4}; - \left( {\pi n\over t} \right)^2 \right)$$
$$ + {n^{1/4} \over 8 (2\pi)^{1/4} \ t^{5/4} }\Gamma\left({5\over 4} \right) \cos \left({\pi\over 8}\right) {}_2F_3\left( {9\over 8}, \ { 5\over 8} ; \ {3\over 2},\ {3\over 4},\ {5\over 4}; - \left( {\pi n\over t} \right)^2 \right).\eqno(3.5)$$
Hence, appealing to the asymptotic formula (1.19) and calculating the corresponding parameter $\gamma$ by formula (1.20), we establish the asymptotic behavior of the right-hand side of the latter equality in (3.5) when $n \to \infty$ and $t \in (0,1)$. In fact, taking the contribution of each hypergeometric function, we obtain
$$ {}_2F_3\left( {7\over 8}, \ { 3\over 8} ; \ {1\over 2},\ {3\over 4},\ {5\over 4}; - \left( {\pi n\over t} \right)^2 \right) =
{ \Gamma(3/4)\Gamma(5/4)\over \Gamma(7/8)\Gamma(3/8)} \left( {\pi n\over t} \right)^{-3/4} \cos\left( {2 \pi n\over t} - {3\pi\over 8} \right)$$
$$ + O\left( \left( { n\over t} \right)^{-3/4}\right) ,\eqno(3.6)$$
$$ {}_2F_3\left( {11\over 8}, \ { 7\over 8} ; \ {3\over 2},\ {5\over 4},\ {7\over 4}; - \left( {\pi n\over t} \right)^2 \right)
= { \Gamma(5/4)\Gamma(7/4)\over 2 \Gamma(11/8)\Gamma(7/8)} \left( {\pi n\over t} \right)^{-7/4} \cos\left( {2 \pi n\over t} - {7\pi\over 8} \right)$$
$$ + O\left( \left( { n\over t} \right)^{-7/4}\right) ,\eqno(3.7)$$
$$ {}_2F_3\left( {5\over 8}, \ { 1\over 8} ; \ {1\over 2},\ {1\over 4},\ {3\over 4}; - \left( {\pi n\over t} \right)^2 \right)
= { \Gamma(1/4)\Gamma(3/4)\over \Gamma(5/8)\Gamma(1/8)} \left( {\pi n\over t} \right)^{-1/4} \cos\left( {2 \pi n\over t} -{\pi\over 8} \right)$$
$$ + O\left( \left( { n\over t} \right)^{-1/4}\right) ,\eqno(3.8)$$
$${}_2F_3\left( {9\over 8}, \ { 5\over 8} ; \ {3\over 2},\ {3\over 4},\ {5\over 4}; - \left( {\pi n\over t} \right)^2 \right)
= { \Gamma(3/4)\Gamma(5/4)\over 2\Gamma(9/8)\Gamma(5/8)} \left( {\pi n\over t} \right)^{-5/4} \cos\left( {2 \pi n\over t} -{5\pi\over 8} \right)$$
$$ + O\left( \left( { n\over t} \right)^{-5/4}\right) .\eqno(3.9)$$
Hence, combining with (3.5), it is not difficult to see that its left-hand side is $O(n^{-1}),\ n \to \infty$, uniformly in $t \in (0,1)$. Therefore, recalling (3.3), (3.4), we find the estimate
$$I_{22}(n) = O\left( n^{-3/4}\right),\ n \to \infty.\eqno(3.10)$$
The main obstacle is to estimate the integral $I_{21}(n)$ in (3.2). To do this we represent it in the form
$$I_{21}(n)= {1\over \sqrt \pi (2\pi n)^{1/4} } \int_1^\infty \cos \left(2\sqrt {2\pi n y} - {\pi\over 4} \right) \left(\int_0^1 \sin(yt) dq(t)\right) {dy\over y^{1/4}}$$
$$= {1\over 2 \sqrt \pi (2\pi n)^{1/4} } \int_0^1 dq(t) \int_1^\infty \sin \left( 2\sqrt {2\pi n y} +y t - {\pi\over 4} \right) \ {dy\over y^{1/4}} $$
$$- {1\over 2 \sqrt \pi (2\pi n)^{1/4} } \int_0^1 dq(t) \int_1^\infty \sin \left( 2\sqrt {2\pi n y} -y t - {\pi\over 4} \right) \ {dy\over y^{1/4}}. \eqno(3.11)$$
But the derivative of the function $h(y)= 2\sqrt {2\pi n y} +y t - {\pi\over 4}$
$$h^\prime(y)= \sqrt{{2\pi n\over y}} + t,\ y \ge 1,\ t \in (0,1) $$
is monotone decreasing. Therefore, by the lemma of Section 1.12 in \cite{Tit}
$$\int_1^\infty \sin \left( 2\sqrt {2\pi n y} +y t - {\pi\over 4} \right) \ {dy\over y^{1/4}}
= O\left( \max_{y\ge 1} \left[ \sqrt{{2\pi n\over y}} + t \right]^{-1} \right) = O\left({1\over t} \right). $$
Consequently,
$$ {1\over 2 \sqrt \pi (2\pi n)^{1/4} } \int_0^1 dq(t) \int_1^\infty \sin \left( 2\sqrt {2\pi n y} +y t - {\pi\over 4} \right) \ {dy\over y^{1/4}} = O( n^{-1/4}),\ n \to \infty.$$
Returning to (3.11) and employing (1.8), we estimate the integral
$$- {1\over 2 \sqrt \pi (2\pi n)^{1/4}} \int_0^1 dq(t) \int_1^\infty \sin \left( 2\sqrt {2\pi n y} -y t - {\pi\over 4} \right) \ {dy\over y^{1/4}}$$
$$= - {1\over 4i \sqrt \pi (2\pi n)^{1/4} } \int_0^1 dq(t) \int_1^\infty \exp \left( i\left( 2\sqrt {2\pi n y} -y (1-t) - {\pi\over 4} \right) \right) { dy\over y^{1/4}} $$
$$+ {1\over 4i\ \sqrt \pi (2\pi n)^{1/4} } \int_0^1 dq(t) \int_1^\infty \exp \left(i\left( {\pi\over 4} + y (1-t) - 2\sqrt {2\pi n y} \right) \right) { dy\over y^{1/4}}.\eqno(3.12)$$
Let us examine the first inner integral with respect to $y$ on the right-hand side of the equality (3.12). Indeed, the derivative with respect to $y$ of the function
$$\varphi_n(y,t)= 2\sqrt {2\pi n y} -y(1-t) - {\pi\over 4}$$
vanishes at the point
$$y_{n} (t) ={ 2\pi n\over (1-t)^2},\quad t \in (0,1),\ n \in \mathbb{N}$$
and the second derivative
$$\varphi_{n}^{\prime\prime} (y_n)= -\ {(1-t)^3\over 4\pi n}.$$
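For the reader's convenience we record the short computation behind these two formulas (an editorial verification, not part of the original argument):
$$\varphi_{n}^\prime(y,t)= \sqrt{2\pi n\over y}\ -(1-t)=0 \quad\Longleftrightarrow\quad y= y_n(t)={2\pi n\over (1-t)^2},$$
$$\varphi_{n}^{\prime\prime}(y,t)= -\,{1\over 2}\sqrt{2\pi n\over y^{3}},\qquad \varphi_{n}^{\prime\prime}(y_n,t)= -\,{1\over 2}\,\sqrt{2\pi n\,{(1-t)^{6}\over (2\pi n)^{3}}}= -\,{(1-t)^{3}\over 4\pi n}.$$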
Let
$$\rho(n)= n^\xi, \quad {1\over 2} < \xi < {5\over 8}.\eqno(3.13)$$
Then for any $u \in \mathbb{R}$ such that $|u|\le \rho(n) $ and $n$ is big enough
$$ \left|\varphi_{n} ^{\prime\prime} (y_n +u,t) - \varphi_{n} ^{\prime\prime} (y_n, t )\right| = {(1-t)^3 \over 4\pi n} \left[ 1-
\left(1+ {u (1-t)^2 \over 2\pi n} \right)^{-3/2} \right] < A\ (1-t)^5 \ n^{\xi-2},\eqno(3.14)$$
where $ 0\le u \le \rho(n)$ and $A >0$ is an absolute constant,
$$ \left|\varphi_{n}^{\prime\prime} (y_n +u, t) - \varphi_{n}^{\prime\prime} (y_n, t )\right| \le {(1-t)^3 \over 4\pi n} \left[
\left(1- {|u| (1-t)^2 \over 2\pi n} \right)^{-3/2}-1 \right] < B \ (1-t)^5 \ n^{\xi-2}, \eqno(3.15)$$
where $- \rho(n) \le u \le 0$ and $B >0$ is an absolute constant. Now, splitting the first inner integral with respect to $y$ in the right-hand side of (3.12) (say, $G^+_n(t)$) into three integrals, namely,
$$G^+_n(t) = - {1\over 4i \sqrt \pi (2\pi n)^{1/4} } \int_1^\infty \exp \left( i \varphi_n(y,t) \right) { dy\over y^{1/4}} = - {1\over 4i \sqrt \pi (2\pi n)^{1/4} } \left[ \int_1^{-\rho(n)+ y_n} \exp \left( i \varphi_n(y,t) \right) { dy\over y^{1/4}}\right.$$
$$ + \left. \int_{-\rho(n)+ y_n}^ {\rho(n)+ y_n} \exp \left( i \varphi_n(y,t) \right) { dy\over y^{1/4}} + \int_{\rho(n)+ y_n}^ {\infty} \exp \left( i \varphi_n(y,t) \right) { dy\over y^{1/4}}\right] = G^+_{1n}(t)+ G^+_{2n} (t)+ G^+_{3n} (t),\eqno(3.16)$$
we estimate each of them separately. In fact, integration by parts gives
$$ - 4i \sqrt \pi (2\pi)^{1/4} G^+_{1n}(t) = - i\ (n y)^{- 1/4} \ {\exp \left( i \varphi_n (y,t) \right) \over \varphi_{n}^\prime (y,t) } \biggr\rvert _1^{-\rho(n)+ y_n} - {i \over \ 4 n^{1/4} } \int_1^{-\rho(n)+ y_n} \frac{ \exp \left( i \varphi_n(y,t) \right) } { \varphi^\prime_n (y,t)\ y^{5/4} }\ dy $$
$$ - {i\over n^{1/4} } \int_1^{-\rho(n)+ y_n} \frac{ \exp \left( i \varphi_n(y,t) \right) \ \varphi^{\prime\prime} _{n} (y,t) } { \left( \varphi^\prime_n (y,t)\right)^2 \ y^{1/4} }\ dy. $$
Hence, taking into account the integrated terms and the value of the second derivative
$$\varphi^{\prime\prime} _{n} (y,t) = - \sqrt{ \pi n\over 2\ y^3},$$
we end up with the uniform estimates
$$ {\exp \left( i \varphi_n(y,t) \right) \over y^{1/4} \varphi^\prime_n (y,t) } \biggr\rvert _1^{-\rho(n)+ y_n} =
O\left( n^{- 1/2} \right) + O\left( (1-t)^{- 5/2} n^{3/4 - \xi} \right),$$
$$\left| \int_1^{-\rho(n)+ y_n} \frac{ \exp \left( i \varphi_n(y,t) \right) } { \varphi^\prime_n (y,t)\ y^{5/4} }\ dy \right|
\le (2\pi n)^{-1/2} \int_1^{-\rho(n)+ y_n} \left[ 1 - (1-t) \sqrt { y\over 2\pi n }\ \right] ^{-1} {dy \over y^{3/4} } $$
$$= O\left( (1-t)^{-2} n^{1/2-\xi} \right) + O\left( (1-t)^{-5/2} n^{3/4 -\xi} \right).$$
$$ \left| \int_1^{-\rho(n)+ y_n} \frac{ \exp \left( i \varphi_n(y,t) \right) \ \varphi^{\prime\prime} _{n} (y,t) } { \left( \varphi^\prime_n (y,t)\right)^2 \ y^{1/4} }\ dy\right| \le - \int_1^{-\rho(n)+ y_n} \frac{ \ \varphi^{\prime\prime} _{n} (y,t) } { \left( \varphi^\prime_n (y,t)\right)^2 }\ dy$$
$$ = O\left( n^{- 1/2} \right) + O\left( (1-t)^{- 2} n^{ 1/2 - \xi} \right). $$
Returning to (3.16), we find, correspondingly, the estimate of $G^+_{1n}(t)$. Hence, recalling (1.7) and combining with (3.12), we have
$$\int_0^1 G^+_{1n}(t) dq(t) = O\left( n^{- 3/4 } \right) + O\left( n^{1/2 - \xi} \int_0^1 (1-t)^{- 5/2} dq(t) \right)$$
$$ + O\left( n^{1/4 - \xi} \int_0^1 (1-t)^{- 2} dq(t) \right)= O\left( n^{- \mu}\right) ,\ \mu >0,\ n \to \infty\eqno(3.17)$$
because $\xi \in (1/2, \ 5/8)$ (see (3.13)).
In the meantime, for sufficiently big $n$ and $y \in [y_n + \rho(n),\ \infty)$
$$\left| \varphi^\prime_n (y,t) \right| = 1-t - \sqrt { 2\pi n \over y } \ge 1-t - \sqrt { 2\pi n \over y_n + \rho(n)} $$
$$= {(1-t)^3 n^\xi\over 2\pi n + (1-t)^2 n^\xi } \left( \sqrt { 2\pi n \over 2\pi n + (1-t)^2 n^\xi } + 1\right)^{-1} \ge
C (1-t)^3 n^{\xi-1} , $$
where $C >0$ is an absolute constant. Then we do a similar analysis with the third integral
$$ - 4i \sqrt \pi (2\pi)^{1/4} G^+_{3n}(t) = \int_{\rho(n)+ y_n}^ {\infty} \exp \left( i \varphi_n(y,t) \right) { dy\over (n y)^{1/4}} $$
$$= i\ {\exp \left( i \varphi_n (y_n+\rho(n),t) \right) \over (n (\rho(n)+ y_n)) ^{1/4} \varphi_{n}^\prime (y_n+\rho(n),t) } - {i \over \ 4 n^{1/4} } \int_{\rho(n)+ y_n}^\infty \frac{ \exp \left( i \varphi_n(y,t) \right) } { \varphi^\prime_n (y,t)\ y^{5/4} }\ dy $$
$$ - {i\over n^{1/4} } \int_{\rho(n)+ y_n}^\infty \frac{ \exp \left( i \varphi_n(y,t) \right) \ \varphi^{\prime\prime} _{n} (y,t) } { \left( \varphi^\prime_n (y,t)\right)^2 \ y^{1/4} }\ dy = O\left( (1-t)^{- 5/2} n^{1/2- \xi} \right)$$
and, correspondingly,
$$ \int_0^1 G^+_{3n}(t) dq(t) = O\left( n^{- \mu}\right) ,\ \mu >0,\ n \to \infty.\eqno(3.18)$$
Concerning the integral $G^+_{2n} (t)$, we have by virtue of the Taylor formula when $y \in [ y_n- \rho(n),\ y_n + \rho(n)]$
$$\varphi_n(y,t) = \varphi_n(y_n ,t) + {1\over 2} \varphi^{\prime\prime} _{n} \left( h(y, n, t) \right) (y- y_n)^2,$$
where $\left| y_n (t) - h(y, n, t) \right| \le \rho(n) $ for all $ t \in (0,1) $. Hence using the elementary inequality
$ | 1- e^{i\theta} | \le |\theta|,\ \theta \in \mathbb{R}$, estimates (3.14), (3.15) and elementary substitutions, we write
$$ - 4i \sqrt \pi (2\pi)^{1/4} G^+_{2n}(t) = \int_{-\rho(n)+ y_n}^ {\rho(n)+ y_n} \exp \left( i \varphi_n(y,t) \right) { dy\over (n y)^{1/4}} $$
$$= { \exp \left( i \varphi_n(y_n,t)\right) \over n^{1/4} } \int_{-\rho(n)}^ {\rho(n)} \left[ \exp \left( {i\over 2} \ \varphi^{\prime\prime} _{n} \left( h(y, n, t)\right) y^2 \right) - \exp \left( {i\over 2} \ \varphi^{\prime\prime} _{n} \left( y_n, t \right) y^2 \right) \right] { dy\over (y+ y_n)^{1/4}}$$
$$+ { \exp \left( i \varphi_n(y_n,t)\right) \over n^{1/4} } \int_{-\rho(n)}^ {\rho(n)} \exp \left( {i\over 2} \ \varphi^{\prime\prime} _{n} \left( y_n, t \right) y^2 \right) { dy\over (y+ y_n)^{1/4}}$$
and since
$$\left| \int_{-\rho(n)}^ {\rho(n)} \left[ \exp \left( {i\over 2} \ \varphi^{\prime\prime} _{n} \left( h(y, n, t)\right) y^2 \right) - \exp \left( {i\over 2} \ \varphi^{\prime\prime} _{n} \left( y_n, t \right) y^2 \right) \right] { dy\over (y+ y_n)^{1/4}}\right| $$
$$ \le \int_{-\rho(n)}^ {\rho(n)} \left| \exp \left( {i\over 2} \left[ \varphi^{\prime\prime} _{n} \left( h(y, n, t)\right) - \varphi^{\prime\prime} _{n} \left( y_n, t \right)\right] y^2 \right) - 1 \right| { dy\over (y+ y_n)^{1/4}} \le A (1-t)^{11/2}\ n^{ 4 \xi - 9/4}, $$
where $A >0$ is an absolute constant, we obtain via (3.13), (1.3)
$$ \int_0^1 G^+_{2n}(t) dq(t) = - { 1 \over 4i \sqrt \pi (2\pi n )^{1/4} } \int_0^1 \int_{- n^\xi}^ {n^\xi } \frac { \sqrt t } { (t^2 y+ 2\pi n)^{1/4}} $$
$$\times \exp \left(- i \left[ {t^3 y^2 \over 8\pi n} - {2\pi n\over t } + {\pi\over 4} \right] \right) dy\ d q(t) + O\left( n^{- \mu}\right) ,\ \mu >0,\ n \to \infty.\eqno(3.19)$$
Meanwhile, with the use of the second mean value theorem there exists $\eta \in [- n^\xi,\ n^\xi]$ such that
$$\int_{- n^\xi}^ {n^\xi } \left[ {1\over (t^2 y+ 2\pi n)^{1/4}} - { 1\over (2\pi n)^{1/4} }\right] \exp \left(- {t^3 y^2 \ i \over 8\pi n} \right) dy = - \left[ ( 2\pi n)^{1/4} ( 2\pi n- t^2 n^\xi )^{1/4} \left( ( 2\pi n- t^2 n^\xi )^{1/4} \right.\right.$$
$$\left. \left. + ( 2\pi n)^{1/4} \right) \left( ( 2\pi n- t^2 n^\xi )^{1/2} + ( 2\pi n)^{1/2} \right) \right] ^{-1} \int_{- n^\xi}^ {\eta } t^2 y \ \exp \left(- {t^3 y^2 \ i \over 8\pi n} \right) dy = O \left( {1\over t\ n^{1/4}}\right) $$
uniformly in $t \in (0,1)$. Hence, bearing in mind (3.17), (3.18), we write the first iterated integral in the right-hand side of (3.12) as
$$ \int_0^1 G^+_{n}(t) dq(t) = - { n^{\xi-1/2} \over 2i \sqrt \pi (2\pi )^{1/2} } \int_0^1 \int_{0}^ {1} \sqrt t\ \exp \left(- i \left[ {t^3 n^{2\xi-1} y^2 \over 8\pi} - {2\pi n\over t } + {\pi\over 4} \right] \right) dy\ d q(t) $$
$$ + O\left( n^{- \mu}\right), \ n \to \infty.\eqno(3.20)$$
Now, estimating analogously the integral
$$G^-_n(t) = {1\over 4i \sqrt \pi (2\pi n)^{1/4} } \int_1^\infty \exp \left(- i \varphi_n(y,t) \right) { dy\over y^{1/4}}, $$
the second iterated integral in the right-hand side of (3.12) becomes
$$ \int_0^1 G^-_{n}(t) dq(t) = { n^{\xi-1/2} \over 2i \sqrt \pi (2\pi )^{1/2} } \int_0^1 \int_{0}^ {1} \sqrt t\ \exp \left( i \left[ {t^3 n^{2\xi-1} y^2 \over 8\pi} - {2\pi n\over t } + {\pi\over 4} \right] \right) dy\ d q(t) $$
$$+ O\left( n^{- \mu}\right), \ n \to \infty.\eqno(3.21)$$
Thus, taking into account (3.1), (3.2), (3.4), (3.10), the latter equalities (3.20), (3.21) lead us to the following asymptotic equality
$${5\over 2} d_n = { n^{\xi-1/2} \over \pi \sqrt 2 } \int_0^1 \int_{0}^ {1} \sqrt t\ \sin \left( {t^3 n^{2\xi-1} y^2 \over 8\pi} - {2\pi n\over t } + {\pi\over 4} \right) dy\ d q(t) + O\left( n^{- \mu}\right), \ n \to \infty.\eqno(3.22)$$
Exactly the same analysis can be carried out for equation (2.11), which involves the Fourier coefficients (2.9). We omit the details and record the final asymptotic equality
$$c_n = { n^{\xi-1/2} \over \pi \sqrt 2 } \int_0^1 \int_{0}^ {1} \sqrt t\ \cos \left( {t^3 n^{2\xi-1} y^2 \over 8\pi} - {2\pi n\over t } + {\pi\over 4} \right) dy\ d q(t) + O\left( n^{- \mu}\right), \ n \to \infty.\eqno(3.23)$$
Hence,
$${5\over 2} d_n + c_n= { n^{\xi-1/2} \over \pi } \int_0^1 \int_{0}^ {1} \sqrt t\ \cos \left( {t^3 n^{2\xi-1} y^2 \over 8\pi} - {2\pi n\over t } \right) dy\ d q(t) + O\left( n^{- \mu}\right), \ n \to \infty,\eqno(3.24)$$
$${5\over 2} d_n - c_n= { n^{\xi-1/2} \over \pi } \int_0^1 \int_{0}^ {1} \sqrt t\ \sin \left( {t^3 n^{2\xi-1} y^2 \over 8\pi} - {2\pi n\over t } \right) dy\ d q(t) + O\left( n^{- \mu}\right), \ n \to \infty.\eqno(3.25)$$
Writing (3.24) in the form
$${5\over 2} d_n + c_n= {2\sqrt 2 \over \sqrt \pi } \int_0^1 \cos \left( {2\pi n\over t } \right) {1\over t} \int_{0}^ {t^{3/2} n^{\xi-1/2} / \sqrt{ 8\pi} } \cos y^2 \ dy \ d q(t) $$
$$+ {2\sqrt 2 \over \sqrt \pi } \int_0^1 \sin \left( {2\pi n\over t } \right) {1\over t} \int_{0}^ {t^{3/2} n^{\xi-1/2} / \sqrt {8\pi}} \sin y^2 \ dy \ d q(t) + O\left( n^{- \mu}\right),$$
using the addition formula for the cosine and observing that
$$\int_{0}^ {\infty } \cos y^2 \ dy = \int_{0}^ {\infty } \sin y^2 \ dy = {1\over 2} \sqrt{\pi\over 2},\eqno(3.26)$$
$$\left| \int_{t^{3/2} n^{\xi-1/2} / \sqrt {8\pi}}^\infty \left\{\begin{array} {ll}\cos y^2 \\ \sin y^2 \end{array} \right\} \ dy\right| = {1\over 2} \left|\int_{t^{3} n^{2\xi-1} / 8\pi}^\infty \left\{\begin{array} {ll}\cos y \\ \sin y \end{array} \right\} \ {dy\over \sqrt y} \right|$$
$$ = {n^{1/2- \xi} \sqrt {8\pi } \over 2\ t^{3/2} } \left|\int_{t^{3} n^{2\xi-1} / 8\pi}^T \left\{\begin{array} {ll}\cos y \\ \sin y \end{array} \right\} \ dy\right| \le \sqrt {{8\pi\over t^3}}\ n^{1/2- \xi},$$
we have
$$5 d_n + 2 c_n= { n^{\xi-1/2} \over \pi } \int_0^1 \cos \left( {2\pi n\over t } \right) \sqrt t \ d q(t) + { n^{\xi-1/2} \over \pi } \int_0^1 \sin \left( {2\pi n\over t } \right) \sqrt t \ d q(t)$$
$$+ O\left( n^{- \mu}\right),\ n \to \infty.$$
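As an independent numerical sanity check of the Fresnel values (3.26) used above, the following short sketch may be helpful (an editorial addition, not part of the original paper; it assumes the Python library mpmath and the standard Fresnel functions $C$ and $S$):
\begin{verbatim}
# Sketch: int_0^infty cos(y^2) dy = int_0^infty sin(y^2) dy = (1/2)*sqrt(pi/2),
# using the exact rescaling  int_0^X cos(y^2) dy = sqrt(pi/2) * C(X*sqrt(2/pi)),
# where C, S are mpmath.fresnelc / mpmath.fresnels.
from mpmath import mp, sqrt, pi, fresnelc, fresnels

mp.dps = 30
target = sqrt(pi / 2) / 2                      # (1/2) * sqrt(pi/2)
for X in (5, 50, 500):
    cosX = sqrt(pi / 2) * fresnelc(X * sqrt(2 / pi))   # = int_0^X cos(y^2) dy
    sinX = sqrt(pi / 2) * fresnels(X * sqrt(2 / pi))   # = int_0^X sin(y^2) dy
    print(X, cosX - target, sinX - target)     # differences decay like O(1/X)
print(target)                                  # 0.626657068657750...
\end{verbatim}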
Analogously, from (3.25) we find
$$5 d_n - 2 c_n= { n^{\xi-1/2} \over \pi } \int_0^1 \cos \left( {2\pi n\over t } \right) \sqrt t \ d q(t) - { n^{\xi-1/2} \over \pi } \int_0^1 \sin \left( {2\pi n\over t } \right) \sqrt t \ d q(t)$$
$$+ O\left( n^{- \mu}\right),\ n \to \infty.$$
Hence,
$$ d_n = { n^{\xi-1/2} \over 5 \pi } \int_0^1 \cos \left( {2\pi n\over t } \right) \sqrt t \ d q(t)+ O\left( n^{- \mu}\right),\ n \to \infty.\eqno(3.27)$$
The left-hand side of (3.27) does not depend on $\xi \in (1/2, 5/8)$. Therefore, writing (3.27) for some $\xi_1 > \xi$ from the same interval and then subtracting the previous equality, we obtain
$$n^{\xi_1- 1/2} \left[ 1- n^{\xi-\xi_1}\right] \int_0^1 \cos \left( {2\pi n\over t } \right) \sqrt t \ d q(t) =
O\left( n^{- \mu_1}\right),\ n \to \infty$$
for some positive $\mu_1$. Hence,
$$\int_0^1 \cos \left( {2\pi n\over t } \right) \sqrt t \ d q(t) = O\left( n^{1/2- \xi_1}\right),\ n \to \infty$$
and (3.27) yields $d_n= O\left( n^{\xi- \xi_1}\right),\ n \to \infty,\ \xi < \xi_1,$ completing the proof of Theorem 2.
\end{proof}
Furthermore, the Salem-Zygmund theorem \cite{Zyg} shows that $d_n=o(1)$ implies that the Fourier-Stieltjes transform (1.1) satisfies $f(t)=o(1),\ |t| \to \infty$. Together with the author's results in \cite{Yakeq}, this leads us to an immediate
{\bf Corollary 2}. {\it Let $k \in \mathbb{N}.$ Then the Fourier-Stieltjes transforms $(1.1), (1.2)$ of the Minkowski question mark function and their consecutive derivatives $f^{(k)}(x), F^{(k)}(x)$ vanish at infinity.}
Finally, denoting by
$$d_{n,m} = \int_0^1 \cos(2\pi n t) \ d q^m(t),\ m \in \mathbb{N}\eqno(3.28)$$
the Fourier-Stieltjes coefficients of the power $q^m(t)$, we establish the following
{\bf Corollary 3}. {\it We have $d_{n,m}= o(1),\ n \to \infty$}.
\begin{proof} Appealing to the principle of mathematical induction, we see that $d_{n,1} \equiv d_n= o(1),\ n \to \infty$. Hence assuming that $d_{n,k}= o(1),\ n \to \infty,\ 1\le k\le m$, we will prove that $d_{n,m+1}= o(1),\ n \to \infty.$ To do this, we recall (1.3) to have the equality
$$d_{n,m+1} = \sum_{k=1}^{m+1} (-1)^k \binom {m+1} {k} \int_0^1 \cos(2\pi n t)\ d q^k(1-t) = \sum_{k=1}^{m+1} (-1)^{k+1} \binom {m+1} {k} d_{n,k}.\eqno(3.29)$$
Consequently, if $m$ is odd, we find from the previous equality
$$ d_{n,m+1} = {1\over 2} \sum_{k=1}^{m} (-1)^{k+1} \binom {m+1} {k} d_{n,k} = o(1),\ n \to \infty.$$
When $m$ is an even number, we write
$$ 2 d_{n,m+2} - (m+2) d_{n,m+1} = \sum_{k=1}^{m} (-1)^{k+1} \binom {m+2} {k} d_{n,k} = o(1),\ n \to \infty.\eqno(3.30)$$
Meanwhile, since $m+3$ is odd, applying (3.29) with $m+2$ in place of $m$ gives
$$ \sum_{k=1}^{m+2} (-1)^{k+1} \binom {m+3} {k} d_{n,k} = 0,$$
or
$$(m+3) d_{n,m+2} - (m+3) (m+2) d_{n,m+1} = \sum_{k=1}^{m} (-1)^{k+1} \binom {m+3} {k} d_{n,k} = o(1),\ n \to \infty.$$
The latter equality means
$$ d_{n,m+2} - (m+2) d_{n,m+1} = o(1),\ n \to \infty.$$
Hence, with (3.30) it gives
$$ (m+2) d_{n,m+1} = o(1),\ n \to \infty$$
and therefore $d_{n,m+1} = o(1),\ n \to \infty$.
\end{proof}
\noindent {{\bf Acknowledgments}}\\
The work was partially supported by CMUP (UID/MAT/00144/2013), which is funded by FCT (Portugal) with national (MEC) and European structural funds through the programs FEDER, under the partnership agreement PT2020. \\
What is effective altruism?
. I'll talk more about the reasons why these are generally thought to be the highest-impact cause areas in later posts, but in each case, the reasoning is that the stakes are very high, and there is the potential to make a lot of progress. Right now, within the Centre for Effective Altruism, the What consists of the organisations listed to the right: organisations that, for example, promote donating a good chunk of one's income to the causes that most effectively fight global poverty (Giving What We Can and The Life You Can Save); or that advise individuals on which careers enable them to have the greatest positive impact (80,000 Hours); or that try to figure out how best to improve animal welfare (Effective Animal Activism). But these activities are just our current best guesses. If we had good evidence or arguments that showed that we could do more good by doing something else, then we'd do that instead.
Part of Introduction to Effective Altruism
This might be the greatest idea ever. I can give you all the reasons why it's so good, but you have got them all figured out. My definite props for that!
I have a tip though. If you are interested, continue reading.
I like your story but I don't LOVE it. It's because it's written in an impersonal style. In order to promote it more effectively, I'd like you to give a personalised WHY?
E.g.: I was hiding from my friends for two years. I was failing in my study of philosophy and I didn't want to come out. I had a severe case of depression. I had everything I needed: a house, food, even a hot shower, and still I despaired. Life was way out of whack; something was deeply wrong!
Then, I remember the moment clearly: I was in the kitchen and I was going to stuff myself with yet another sandwich, when by chance a shadow of a man walked by, obviously a beggar. I looked at my extra bacon and egg sandwich, and something just clicked in my brain. I hurried outside and caught him on the street corner. I gave it to him; he mumbled something weird, but the look he gave me was etched into my soul.
That day I felt elated. I went out and started to do random acts of kindness. I bought flowers and gave them to an old lady and an anorexic girl. I joined a couple of young people to give free hugs. I donated all my pocket money to beggars. I even withdrew money to give more.
I was stunned: living truly is giving! The depression lifted and I never looked back. I was able to finish my master's in philosophy, and made more friends than I ever had. Since then I have been trying to help as much as I can. And as smart as I can!
~ I am sure you can write a terrific personal story. Continue what you are doing, and make it count!
Was that of any help?
Thanks, Martijn! Will is currently writing a book on effective altruism and is giving a lot of thought to ways in which we can make EA ideas more appealing to a wider audience, without sacrificing precision or accuracy. Making such ideas more personal is a key way of accomplishing this, and for this reason the book will feature interviews with many prominent EAs and top researchers.
[…] altruism” — using my time and money to do the most good I can. (Descriptions here and here, and Peter Singer’s TED talk about it is here.) I personally would be very […]
just a small thought on the term "altruism" -- in an omoiyari* world there is really no need for altruism or charity. They are not required and hardly, if ever, occur. In an omoiyari world every success is reciprocal, every personal creative act is a shared success. By the same token, every wound one causes to another is a self-inflicted wound. In such a world, the terms 'altruism', 'charity' and the like simply drop away. Nor is there any conflict between the states of individuality and collective. Indeed, where the personal is shared and reciprocal, personal creativity, imagination and exploration are routinely encouraged and fostered.
Omoiyari society (or civilization, if you prefer) is a "we" society. But there is no implication of "we" being at the expense of "you" or "me". It is a society which maximizes and distributes the surplus of our creativity and energy, rather than conscripting it. It does so with the minimum of distortion or dislocation of each person's self-directed initiatives. And there is plenty for an omoiyari society to recycle and share, because there is little need to accumulate, hoard or own for its own sake, nor out of fear-driven insecurity.
There is no "either/or" or "them/us" about an omoiyari society. Those are scripts for reality that have been written and handed to us by others to serve their own narrow, self-interested ambitions. There is simply no use for our current competitive, acquisition-driven, 'charity-fixated' societies or the compensations we try to make for them. We presently receive such reality scripts as if they were an unalterable part of "human nature". They are not. An omoiyarii society has no use for them. Why? Because, in an omoiyari-world, we all own and write the scripts of our own reality and we distribute them as a shared reality. We write them creatively and imaginatively and we share them reciprocally.
So, is that possible? Is there an "effective" way to bring about an omoiyari world? Yes, actually. It's quite simple and the tools to do it are already possessed by everybody. It is really nothing more than the realization that our reality is not something that can actually be owned by anyone, least of all those who presently claim a proprietary right to it (the ones who exclaim, "Reality is what we say it is" and by 'we' mean only themselves), the ones who can make reality seem unalterable only through the coercive use of power.
The effective end to that fiction is to simply begin rewriting that script. And that is not as difficult as it may seem. To begin with, we actually know what the real script for reality ought to be. We all know it -- even if our ways of expressing it may be different. It comes with our DNA, there's nothing extraordinarily complicated about it. It is a script which no longer has us settle arguments with ourselves by means of violence or war. It is a script in which hunger or homelessness or exclusion or other forms of neglect and deprivation are not possible simply because those wounds are self-inflicted wounds and no one (it's in our DNA) leaves their own wounds untended to fester.
It is a script in which there is really no distinction between work and play because the very act of using our energy and our bodies to do useful and creative things is a natural expression of ourselves, a yoga of living and a joy, even if the particular task may be difficult or routine. Our script would not turn people into interchangeable, expendable units of fuel for the engines of an economy or anything else. The human project would not be here to serve the economies it created, those economies would be here to serve the people. I could go on and on, but you see, you already know that don't you. It comes with our manual of operating a sane, healthy flourishing world. The one given to each of us when we were born. We just tend forget that at times.
So that's all I really came to say. A little food for thought, that's all. It may not happen in our lifetimes, maybe not for many generations (provided we survive the messes we've created for ourselves). When it does, those who rewrite our primitive reality will probably wonder at those first proto-humans who hadn't yet crossed the evolutionary rubicon from OMG to omoiyari, just as we wonder about the first hominids who stood at the edge of the Serengeti, but hadn't quite crossed the evolutionary bridge from thinking to minding.
But we can start now, preparing the way for those who will follow us. We can begin to reclaim small pieces of our reality and renew it as it should be. That we can do, and that will likely help make the Dark Ages to come a little bit shorter. That is all I have to say.
*omoiyari is a Japanese word that really has no equivalent in English. Roughly, it means "putting others first." It was first introduced to me by Charles Pellegrino in his book "Last Train from Hiroshima" (definitely an omoiyari, if tragic, book about events that should never have happened). Omoiyari was introduced to Charles through the writings and talks by Masahiro Sasaki, brother of Sadako Sasaki, whom many of you will know as the girl who set about folding a thousand paper cranes in the interest of world peace and omoiyari. The girl who died of wounds received in the bombing of Hiroshima. -- omoiyari, Red Slider
I totally agree. Altruism and egoism are consequences of our intellectualistic Western society. Is there really any difference between self and other? The logical, analytic approach fails if it doesn't comprehend the dialectical aspects of the whole of reality. When I am helping people I am doing something good not only for those I help but also for myself: I need to do good; that is my inner obligation, which sets me free only if I decide to obey it. To me, we have to find out what human nature means.
First of all, I am from Argentina, so excuse my awful English! Since I was a little girl I have been very concerned about animal rights, and about human rights as well. I discovered this magnificent movement, EA, today, and my heart is really relieved. I think the world is really in darkness right now, but I can also see how human society evolves every day, every minute, toward a more ethical existence, and it is happening very fast. So we must know that every little gesture we make to help others is going to have an enormous impact in changing the world and making it a better place for all.
Idol Janshi Suchie Pai III Remix (posted Monday, 16 March 2015; Adventure, Animation, Board, Card, JPN)
Game name: Idol Janshi Suchie Pai III Remix. Language: JPN. Release date: N/A. Genre: JPN, Adventure, Animation. Video: YouTube link. Download: Google Drive ("Download here").
Idol Janshi Suchi-Pai III Remix is a Miscellaneous game, developed and published by Jaleco Entertainment, which was released in Japan in 2007.
Department of Allergic Diseases, Novartis Research Institute, Vienna, Austria
Expression of eotaxin-1 itself is regulated by a number of cytokines.
It has been shown that the proinflammatory cytokines TNF-α and IL-1β induce the production of eotaxin-1 (10, 11). More recently, IL-4 and TNF-α were reported to synergize in the secretion
of eotaxin-1 in human skin (12) and nasal fibroblasts
(13). Similarly, IL-13, which shares many biological properties with IL-4, showed similar effects in human epithelial cells
(14, 15). The underlying mechanisms responsible for this
effect in epithelial cells have been characterized in more detail
(16). In that study, the transcription factor STAT6 was
shown to be responsible for the IL-4-mediated induction of eotaxin-1
promoter activity by binding to a specific DNA response element. TNF-α stimulation, in contrast, resulted in binding of NF-κB
protein members to a site that overlapped with the STAT6-binding site,
and this interaction conferred promoter activation on TNF-α treatment. Incubation with the combination of the cytokines had an
additive effect.
The present study describes a similar analysis in human fibroblasts. We
confirm and extend the important role of STAT6 in induction of
eotaxin-1 promoter activity in response to IL-4 in human fibroblasts.
In contrast to the situation in epithelial cells (16), TNF-α stimulation was not additive or synergistic with IL-4.
Interestingly, the activating effect of TNF-α on the promoter was
dependent on the presence of an intact STAT6-binding site and also on
the presence of functional STAT6 protein. In addition, TNF-α-induced
eotaxin-1 protein production was detected only in STAT6-expressing
cells and could be counteracted by a trans-dominant negative
STAT6 protein. Two other known TNF-α-inducible genes,
MCP-1 and IL-8, were not affected. The data show
that both IL-4 and TNF-α require STAT6 as
mediator to activate eotaxin-1 gene expression.
Normal human adult dermal fibroblasts and neonatal fibroblasts
were cultured in FGM-2 medium (Clonetics, Walkersville, MD). HEK293
cells were carried at 37°C with 5% CO2 in DMEM
supplemented with 10% heat-inactivated FCS (Life Technologies, Grand
Island, NY), 100 U/ml penicillin, and 100 µg/ml streptomycin.
Purified human recombinant IL-4 was obtained from Novartis (Basel,
Switzerland) with a specific activity of 0.5 U/ng. Recombinant human TNF-α (Genzyme, Cambridge, MA) and recombinant human IL-13 (PeproTech
EC, London, U.K.) were used at a concentration of 100 U/ml. Human
eotaxin-1 and MCP-1 proteins were quantitated by commercially available
ELISA kits (R&D Systems, Minneapolis, MN).
RT-PCR
Total RNA was isolated using the Trizol reagent (Life
Technologies) according to the instructions of the manufacturer. Total
RNA, 3 µg, [...] and 5'-CCAGATACTTCATGGAATCCTGC-3' from cDNA corresponding to 20 ng RNA. PCR was performed for 30 cycles at 94°C for 30 s, 56°C for 30 s, and 68°C for 30 s. The PCR primer pair 5'-ATGGATGATGATATCGCCGCG-3' and 5'-AGTCCATCACGATGCCAGTGG-3' was used to amplify a 480-bp fragment of the β-actin mRNA using the same reaction conditions.
Cloning of eotaxin-1 reporter constructs
A 1.1-kb eotaxin-1 promoter fragment was amplified from genomic
DNA (Roche Molecular Biochemicals) using the PCR primers
5'-CTGACTCGAGCAGGTTTGCAGTACCTCCACACC-3' and 5'-
AGTCAAGCTTGTTGGAGGCTGAAGGTGTGAGC-3'. The PCR fragment was digested
with XhoI and HindIII and cloned into pGL3-Basic
(Promega, Madison, WI) to give pGL3-EO1. Another promoter fragment was
amplified between position -2250 relative to the transcriptional start
site (17) and a natural PstI site at position
-986 using the primer pair 5'-AGTCACGCGTTTCAGGCGTAGAGTAAATCC-3'
and 5'-AGTCACTGCAGCGGATTACAGC-3'. This fragment was digested with
MluI and PstI and inserted into pGL3-EO1
restricted with the same enzymes to give pGL3-EO2 with a total insert
size of 2.2 kb. Plasmid pGL3-EO3 was constructed by inserting a 1.4-kb
EcoRI/HindIII fragment in which the
EcoRI site was made blunt with Klenow polymerase (Roche
Molecular Biochemicals) into a
SmaI/HindIII-digested pGL3-Basic vector.
Site-directed mutations in the composite STAT6/NF-κB site were
generated as reported earlier (18) using the following
oligonucleotides: M1,
5'-ATGGGCAAAGGCTATCCTGGAATCTCCCACACTGTCTGCT-3' and
5'-GGGAGCAGACAGTGTGGTCGATTCCAGGGAAGCCTTTGCC-3'; M2,
5'-ATGGGCAAAGGCTTCCCTGCTATCTCCCACACTGTCTGCT-3' and
5'-GGGAGCAGACAGTGTGGGAGATAGCAGGGAAGCCTTTGCC-3'; M3,
5'-ATGGGCAAAGGCTTCCCTGGAATCGACCACACTGTCTGCT-3' and
5'-GGGAGCAGACAGTGTGGTCGATTCCAGGGAAGCCTTTGCC-3'. The cloning
of the STAT6 expression vector (19) and the IL-8
promoter reporter construct IL-8p (20) has been described.
The STAT6-ΔTD expression vector was cloned by insertion of a
XhoI/SacI fragment containing the complete human
STAT6 cDNA except for the carboxy-terminal trans-activation
domain into the pcDNA3.1 vector. Plasmids were analyzed by digestion
with restriction endonucleases and DNA sequencing. Constructs
used for transient transfections were purified by cesium chloride
density gradients.
Transient transfection of HEK293 cells and primary fibroblasts
The day before transfection, 5 x 104
cells were seeded into 24-well culture plates in fresh medium.
Transient transfection of HEK293 cells was achieved using calcium
phosphate coprecipitation. Briefly, 12 µg plasmid DNA was diluted
in 42 µl H2O, mixed with 7 µl 2 M
CaCl2, and added dropwise to 50 µl 2x HEPES
buffered saline (280 mM NaCl, 1.5 mM
Na2HPO4, 50 mM HEPES (pH
7.05)). After a 2-min incubation period at room temperature, the
mixture was added to the cells. Primary fibroblasts were transfected
with DNA-containing liposomes using Effectene (Qiagen, Hilden, Germany)
according to the manufacturer's protocol. After 24 h, cells were
washed and cultured for 12 h in the presence or absence of 50
ng/ml IL-4, 100 ng/ml IL-13, and/or 100 U/ml TNF-α before luciferase
assays were conducted in triplicate according to the instructions of the manufacturer using the Promega Luciferase Assay System (Promega Biotech, Madison, WI). Nuclear extracts from adult dermal fibroblasts or cells that had been stimulated for 30 min with IL-4 (50 ng/ml) or TNF-α (100 U/ml) were prepared according to the method described by Andrews and Faller (21). A ds oligonucleotide probe spanning the composite STAT6/NF-κB site between positions -82 and -46 was end-labeled using [α-32P]dCTP (Amersham,
Little Chalfont, U.K.) and Klenow polymerase (Roche Molecular
Biochemicals). The nucleoprotein binding reaction was done as described
(22) using 5 µg nuclear extracts. For competition and
supershift experiments, extracts were preincubated with a 50-fold
excess of competitor oligo or 2 µg Ab for 30 min before the
radiolabeled probe was added. All Abs used in supershift experiments
were purchased from Santa Cruz Biotechnology (Santa Cruz, CA).
To determine the kinetics of eotaxin-1 induction in primary
fibroblasts, cells derived from two donors (donor 1, adult skin
fibroblasts; donor 2, neonatal fibroblasts) were cultured with IL-4, TNF-α, or the combination for different times. The supernatants were
analyzed for eotaxin-1 protein by ELISA (Fig. 1A). Chemokine expression
became detectable at 8 h in IL-4 and IL-4/TNF-α-stimulated cells and was maximal at 24 h. TNF-α alone induced eotaxin-1
expression in donor 2 (23) but not in donor 1 cells,
whereas both samples were responsive to IL-4 alone. In accordance with
published data (12), the cytokine combination had a clear
synergistic effect. Essentially the same results were obtained when
IL-13 was used as stimulus alone or in combination with TNF-α (Fig. 1B). TNF-α was able to induce gene expression in donor 1
cells because MCP-1, a chemokine known to be induced by this factor,
could be easily detected in the same supernatants (Fig. 1A).
STAT6 mediates eotaxin-1 promoter activation by IL-4 or TNF-α
The kinetics by which IL-4 induced gene expression suggested that
this effect may be caused by the transcription factor STAT6. A number
of STAT6-regulated genes, such as the IgE germline gene, CD23, or the
IL-4 receptor α gene showed similar kinetics of induction (24, 25).
The inspection of the published eotaxin-1 promoter sequence
(17) revealed the presence of two potential high affinity
STAT6-binding sites as defined by the
5'-TTC(N)4GAA-3' consensus sequence
(26). The proximal site between position -74 and position
-60 relative to the transcriptional start site overlapped with a
putative binding sequence for NF-κB proteins. The distal site is
located between positions -2204 and -2195. Reporter constructs were
generated in which the firefly luciferase reporter gene is driven by
the human eotaxin-1 promoter. The sequence of the 2.2-kB insert in
pGL3-EO2 corresponded well with the published sequence. Of interest,
one difference was noted in the putative distal STAT6-binding sequence
where the cytidine at position 3 in the consensus sequence
5'-TTC(N)4GAA-3' was changed to a thymidine (Fig. 2). The presence of this substitution was
verified by using another PCR primer pair amplifying a small DNA
fragment encompassing this site (data not shown). A
5'-TTT(N)4GAA-3' sequence is generally not
recognized by STAT6 with high affinity (26), making the
proximal sequence a more likely candidate for a potential regulatory
role. Three 5' deletion promoter constructs (pGL3-EO1, -EO2, and -EO3)
(Fig. 2) were transiently transfected into the STAT6-defective HEK293
cell line (27) and tested for cytokine inducibility. In
the absence of STAT6 expression, none of the constructs was inducible
with IL-4, TNF-α or the combination thereof (Fig. 3, left). The constitutive
promoter activity of the shortest EO-1 plasmid was lower than that of
the other two constructs, suggesting a positive regulatory element
between positions -1036 and -1359. In the presence of cotransfected
STAT6 expression vector, all three constructs were inducible on IL-4
stimulation. IL-13 stimulation was as effective as IL-4 and was also
dependent on the presence of cotransfected STAT6 (Fig. 3, right). The IL-4 induction index of the plasmids pGL3-EO2
and -EO3 was consistently higher than that of the EO1 construct,
suggesting that the sequence between positions -1036 and -1359
contributed to IL-4 inducibility as well as constitutive promoter
activity. Because the EO3 plasmid was as IL-4 responsive as the EO2
construct, a major regulatory role of the distal STAT6-like binding
site can be ruled out. All three constructs responded to TNF-α treatment albeit at a somewhat lower rate than IL-4. Interestingly, TNF-α responsiveness, like IL-4, was dependent on cotransfected
STAT6. The shortest pGL3-EO1 plasmid was less well inducible with TNF-α, suggesting that the same region that contributed to IL-4 up-regulation also was involved in TNF-α regulation. The cytokine
combination activated promoter activity to the same degree as IL-4
alone. This demonstrated that the synergistic effect of the stimuli
seen at the protein level was not due to synergistic activation of the
eotaxin-1 promoter. Overall, these results suggested that the
stimulatory potential of these reporter constructs to both IL-4 and TNF-α was dependent on STAT6 coexpression. Because they all contain the proximal STAT6/NF-κB element, an involvement of this sequence
motif appeared likely.
The interaction of STAT6 with the proximal STAT6/NF-κB binding
sequence was assessed by EMSAs. Nuclear extracts prepared from human
dermal fibroblasts were incubated with a ds oligonucleotide probe
(-82/46), and the nucleoprotein complexes were resolved in native
polyacrylamide gels. In uninduced cells, two specific bands were
detected (Fig. 4A). Extracts
from IL-4-treated cells produced an additional band which migrated more
slowly than the two original complexes. Addition of anti-STAT6 Abs
before addition of the labeled probe specifically reduced the large
IL-4-induced complex and led to the formation of a supershifted
complex. Although an anti-PU.1-specific as well as an Ab directed
against the NF-κB family member p50 did not change the banding
pattern, preincubation with an anti-p65 Ab led to disappearance of
the original two complexes present in uninduced extracts but not the
IL-4-induced band. This suggested that small amounts of NF-κB p65
were present in uninduced cells and interacted with the -82/46 ds
probe. The addition of an 50-fold excess of unlabeled ds
oligonucleotide containing the functional STAT6-binding site of the
human IgE germline promoter (IgE104/83) specifically competed with the
formation of the IL-4-induced nucleoprotein complex to the radiolabeled
probe. These data demonstrated that the IL-4-induced band contains
STAT6.
Involvement of the STAT6 response element in promoter activation
To determine whether the STAT6/NF-κB-binding site was involved
in STAT6-mediated cytokine induction, three different point mutations
were introduced into the pGL3-EO2 construct. One mutation (M1)
specifically altered the palindromic TTC of the STAT6 site into TAT
(Fig. 2). This change has been shown earlier to ablate IL-4-induced
STAT6 binding and trans activation in the human IgE germline
promoter (28). Mutation M3 affected the polypyrimidine
half site of the NF-κB element and mapped outside of the STAT6 core
sequence. The M2 mutant affected the overlapping portion of this
putative regulatory unit. The effect of these changes on transcription
factor interaction was monitored by EMSA. A ds oligonucleotide probe
containing the M1 mutation (-82/46 m1) was unable to bind STAT6 in
IL-4-induced extracts but still retained the ability to form NF-κB nucleoprotein complexes on TNF-α stimulation (Fig. 5). Conversely, an M3 mutation-containing probe was able to bind STAT6 but not NF-κB proteins. STAT6 binding
appeared to be much stronger than with the wild-type probe. This may be
explained either by a higher specific activity of the M3 mutant ds
oligonucleotide or by an increased affinity for the protein. It is
known that neighboring nucleotides are involved in fine tuning of the
affinity of DNA-binding proteins to its core recognition sequence. The
ds oligonucleotide M2 probe did not form any nucleoprotein complexes
consistent with the mutation located in the overlapping portion of the
composite STAT6/NF-κB element.
Eotaxin-1 protein production is STAT6 dependent
Further confirmation for the involvement of STAT6 in eotaxin-1
regulation was obtained at the protein level. Eotaxin-1 was produced in
IL-4-stimulated HEK293 cells that had been transiently transfected with
a STAT6 expression vector but not in mock transfected cells (Fig. 7). Interestingly, TNF-α stimulation
also led to eotaxin-1 production in STAT6-expressing cells only.
Similar to primary fibroblasts, a synergistic effect was measured in
the presence of both cytokines. In mock transfected cells, no eotaxin-1
secretion was measured. Importantly, the secretion of the known TNF-α-inducible chemokine MCP-1 measured in the same supernatants was
very similar irrespective of the presence of cotransfected STAT6,
demonstrating that HEK293 cells are not generally defective in TNF-α signal transduction and that the effect seen with eotaxin-1 was
specific. Further evidence for the involvement of STAT6 in eotaxin-1
regulation was obtained in normal fibroblasts transfected with
increasing amounts of STAT6-ΔTD DNA. The plasmid expresses a mutant
STAT6 protein that lacks the carboxy-terminal trans
activation domain. This polypeptide has been shown earlier to act in a
trans-dominant negative fashion (29). The
transfected cells were induced with cytokines for 48 h before
eotaxin-1 protein was monitored in the cell supernatants. Expression of
this STAT6 mutant in transiently transfected primary fibroblasts led to
strong inhibition of eotaxin-1 protein expression in either IL-4- or TNF-α-induced cells (Fig. 8). Higher
amounts of DNA were not well tolerated by the cells. The effect,
however, was specific because transfection of empty vector was
significantly less inhibitory. In addition, no difference between
vector and STAT6-ΔTD plasmid was observed when the same supernatants
were assayed for MCP-1 (data not shown). These results supported the
conclusion that STAT6 is a mediator of both TNF-α- and IL-4-driven eotaxin-1 secretion in fibroblasts but not of TNF-α-driven MCP-1
synthesis.
The dependence of TNF-α on STAT6 for eotaxin-1 promoter activation
appeared to be specific. In the same experimental system, an IL-8
promoter construct was responsive to the cytokine in the absence of
functional STAT6, and ectopic expression of the transcription factor
did not lead to further activation. Similar results were obtained at
the protein level where the TNF-α-induced production of MCP-1 was not
dependent on STAT6, whereas eotaxin-1 could be secreted only in
STAT6-transfected cells. These results demonstrated that the three
different TNF-α-inducible genes could be distinguished by
their requirement for functional STAT6 protein to respond to the
cytokine stimulus.
The composition of NF-κB proteins induced by TNF-α in fibroblasts contained NF-κB p65 but not p50, whereas both subunits were detected
in epithelial cells (16). Perhaps related to this
difference was the finding that mutations in the NF-κB-specific pyrimidine half site of the composite STAT6/NF-κB sequence abrogated TNF-α inducibility in an epithelial cell line (16) in
contrast to only a partial effect in HEK293 cells.
Our current model of the individual functions of IL-4 and TNF-α to stimulate the eotaxin-1 gene promoter in HEK293 cells and fibroblasts can be summarized as follows. IL-4 activates STAT6, which is able to bind DNA and acts as the predominant player at the level of transcriptional activation of the eotaxin-1 promoter. TNF-α-induced NF-κB may act
as transcriptional costimulus in a STAT6-dependent manner using an as
yet unknown mechanism.
The synergistic effect of the stimuli on eotaxin-1 protein production
is another point worth discussing. The data demonstrate that TNF-α did not synergize with IL-4 in the activation of eotaxin-1 promoter
reporter gene constructs both in HEK293 cells and in donor 2
fibroblasts. It is possible, however, that the reporter constructs used
did not contain important cis-acting regulatory DNA elements
involved in a possible synergism at the level of transcription
initiation. Alternatively, synergy may be achieved by a possible role
of IL-4 or TNF-α in chromatin opening and gene accessibility in
combination with transcription initiation. Such effects would not be
seen in transient transfection experiments.
Another explanation for the synergistic effect is suggested by the time
course experiments. TNF-α alone was not able to induce eotaxin-1
expression in donor 1 fibroblast cells, yet it synergized with IL-4 to
increase the levels of eotaxin-1 mRNA and also secreted chemokine. Its
effect on eotaxin-1 transcripts was clearly seen as early as 2 h.
This suggested a direct mode of action. Collectively, it is therefore
conceivable that TNF-α is the predominant cytokine driving the
synergy and that it acts mainly at the posttranscriptional level. This
role of TNF-α has been described for a number of other genes, including bradykinin receptor (34), syndecan-4 (35), and FcγRIIb (36). Two AUUUA sequence
motifs are located in the 3'-untranslated region of the eotaxin-1 mRNA.
These sequences have been implicated in the stabilization of mRNA
(37). Therefore, the possibility existed that the
synergistic function of TNF-α may be related to stabilization of eotaxin-1 transcripts. We have compared the half-lives of IL-4-induced eotaxin-1 mRNA with that of IL-4/TNF-α-induced transcripts in
cells treated with actinomycin D. The results demonstrated that
eotaxin-1 transcripts under both conditions are very stable with a
half-life of >18 h (data not shown). The long stability of eotaxin-1
mRNA is in good agreement with previous measurements (38).
These data suggested that TNF-α-induced stabilization of eotaxin-1
transcripts is unlikely to be the mode of action of the cytokine to
synergize with IL-4.
The architecture of the eotaxin-1 promoter is reminiscent of the
regulatory region of the human IgE germline gene (39, 40).
In both promoters a STAT6 binding site is flanked by an NF-κB-binding motif. Although binding of κB factors appears to be dispensable for IL-4-mediated eotaxin-1 promoter activation, NF-κB proteins, together
with the transcription factor PU.1, are critically involved in
cytokine-driven IgE germline promoter activity (19). This
functional difference may be related to the different stimuli that
induce DNA binding of NF-κB to their respective binding sites.
Agonistic anti-CD40 Abs did not synergize with IL-4 to express
eotaxin-1 (data not shown), whereas they act as costimulus for
IL-4-induced IgE germline gene expression (28).
The dependence of eotaxin-1 synthesis on STAT6 on cytokine stimulation may also explain some aspects of production in these animals.
In summary, the data strengthen the importance of STAT6 as a key player in situations of allergic inflammation and therefore as a target for therapeutic intervention.
2 Abbreviation used in this paper: MCP, monocyte chemoattractant protein.
Received for publication September 14, 2000. Accepted for publication January 17, 2001.
\chapter{Prerequisite Material}
In order to make our presentation self-contained, we present some well-known results concerning hyperbolic three-space, the Laplace-Beltrami operator, and cofinite Kleinian groups. For more details see \cite{Elstrodt}.
\section{Hyperbolic three-space $\HH$}
Let $\HH$ denote the upper half space (of $\RR^{3}$) model of hyperbolic three-space. The space $\HH$ is parametrized by the following coordinates:
$$ \HH \df \{(x,y,r)\in\RR^{3}~|~r>0 \} \df \{(z,r)~| ~z \in\CC, r>0 \}. $$ A point $P \in \HH$ will be denoted by $(x,y,r),~(z,r),$ or $z+rj.$
The standard hyperbolic metric\footnote{The metric with $-1$ sectional curvature.} and volume form are written respectively as
$$
ds^{2} \df \frac{dx^{2}+dy^{2}+dr^{2}}{r^{2}}~\text{and}~dv \df \frac{dx\, dy\, dr}{r^{3}}. $$
\begin{rem} \label{remRoundHyp} An alternate model of hyperbolic three-space is the open three-ball, $~\BB^3 \df \{ (x,y,z)\in \RR^3~|~x^2 + y^2 + z^2 < 1~ \}. $ When equipped with the metric $$ 4 \frac{dx^2 + dy^2 + dz^2}{ \left( 1-x^2 -y^2 - z^2 \right)^2},$$ $\BB^3$ is isometric to $\HH.$
\end{rem}
The boundary (at infinity) of $\HH $ can be realized (by Remark \ref{remRoundHyp}) as the Riemann sphere $\PP = \CC \cup \infty. $
The Laplace-Beltrami operator $\lp$ lies at the heart of this thesis. In our coordinates it can be written explicitly as the following differential operator\footnote{In our notation $\lp$ is a positive self-adjoint operator. \\}:
$$ \lp \df -r^{2}(\frac{\partial^{2}}{\partial x^{2}}+
\frac{\partial^{2}}{\partial y^{2}}+
\frac{\partial^{2}}{\partial r^{2}})+
r\frac{\partial}{\partial r}. $$
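As a quick symbolic sanity check (an illustrative addition, not part of the thesis; it assumes the Python library SymPy), one can verify the standard fact that $r^{1+s}$ is an eigenfunction of $\lp$ with eigenvalue $1-s^{2}$, matching the convention $\lambda = 1-s^{2}$ used below:
\begin{verbatim}
# Apply  Delta = -r^2 (d_xx + d_yy + d_rr) + r d_r  to  f = r^(1+s)
# and check that the result equals (1 - s^2) * r^(1+s).
import sympy as sp

x, y, r, s = sp.symbols('x y r s', positive=True)
f = r**(1 + s)
laplacian = -r**2 * (sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, r, 2)) \
            + r * sp.diff(f, r)
print(sp.simplify(laplacian - (1 - s**2) * f))   # prints 0
\end{verbatim}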
The orientation preserving isometry group of $\HH $ can be identified with the group $\PSL(2,\CC) = \SL(2,\CC) / \{ \pm I \} $. Each element
$$M=
\left(\begin{array}{cc}
a & b\\
c & d\end{array}\right)
\in\PSL(2,\CC)$$ acts on $\HH$ as follows: $M(z+rj)=w+tj, $ where
$$
w=\frac{(az+b)(\bar{c}\bar{z}+\bar{d})+ a\bar{c}r^{2}}{|cz+d|^{2}+|c|^{2}r^{2}} ~~\text{and}~~ t=\frac{r}{|cz+d|^{2}+|c|^{2}r^{2}}. $$
The element $M$ also acts on the boundary at infinity of $\HH$ via standard M\"{o}bius action on $\PP,$
$$ M\zeta = \frac{a\zeta + b}{c\zeta + d} $$ for $\zeta \in \PP. $
\section{Harmonic Analysis on $\HH$} \label{secFreCas}
For $P=z+rj,P'=z'+r'j \in \HH$ denote by $d(P,P')$ the (hyperbolic) distance in $\HH$ between $P$ and $P',$ and let $\delta(P,P')$ be defined by
$$\delta(P,P') \df \frac{|z-z'|^{2}+r^{2}+ r'^{2}}{2rr'}. $$
It follows that $\cosh(d(P,P')) = \delta(P,P'),$ and that $\delta$ is a point-pair invariant\footnote{A point-pair invariant is a map $f:\HC\times\HC\rightarrow\CC$ defined almost everywhere satisfying $
f(MP,MQ)=f(P,Q) $ for all $P,Q\in\HC$, $M\in\PSL(2,\CC).$ \\}.
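The invariance of $\delta$ under the $\PSL(2,\CC)$ action can also be checked numerically. The following sketch (an illustrative addition, not part of the thesis; plain Python, no external libraries) applies the explicit action formulas above to random points and matrices:
\begin{verbatim}
# delta(P,Q) = (|z-z'|^2 + r^2 + r'^2) / (2 r r') is unchanged when the same
# transformation M in SL(2,C) is applied to both points.
import random

def act(M, P):
    (a, b), (c, d) = M
    z, r = P
    den = abs(c*z + d)**2 + abs(c)**2 * r**2
    w = ((a*z + b) * (c*z + d).conjugate() + a*c.conjugate()*r**2) / den
    return (w, r / den)

def delta(P, Q):
    (z, r), (zp, rp) = P, Q
    return (abs(z - zp)**2 + r**2 + rp**2) / (2*r*rp)

def rand_c():
    return complex(random.uniform(-2, 2), random.uniform(-2, 2))

random.seed(0)
for _ in range(5):
    a, b, c = rand_c(), rand_c(), rand_c()
    d = (1 + b*c) / a                      # enforce det M = ad - bc = 1
    M = ((a, b), (c, d))
    P, Q = (rand_c(), 1.3), (rand_c(), 0.7)
    print(abs(delta(P, Q) - delta(act(M, P), act(M, Q))))  # ~0 up to rounding
\end{verbatim}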
We can use the concept of a point-pair invariant to construct the resolvent kernel for $\lp.$ For $s \in \CC, \, t>1,$ set
$$
\py_{s}(t) \df \frac{\left(t+\sqrt{t^{2}-1}\right)^{-s}}{\sqrt{t^{2}-1}}. $$
\bl \label{lemScalRes} \cite[Lemma 4.2.2]{Elstrodt}
Let $u \in C_c^2(\HH), $
$s\in\CC$ and $\lambda=1-s^{2}.$ Then for all $Q\in\HC$
$$
u(Q)=\frac{1}{4\pi}\int_{\HH}\varphi_{s}(\delta(P,Q))(\Delta-\lambda)u(P)\, dv(P). $$
Moreover, the point-pair invariant $\py_{s}\circ \delta$ is the resolvent kernel for $\lp$ on the Hilbert space of square-integrable functions on $\HH.$
\el
Let $ \scz \df \scz[1,\infty) $ denote the Schwartz space of smooth functions \linebreak \mbox{$k:[1,\infty) \rightarrow \CC$} that satisfy $ \lim_{x \ra \infty} x^n k^{(m)}(x) = 0 $ for all $n,m \in \NN_{\geq 0} $. For each $k \in \scz ,$ $K \df k \circ \delta $ is a point-pair invariant and the kernel of an operator $ \mathcal{K}:L^2_{\text{loc}}(\HH) \mapsto L^2_{\text{loc}}(\HH) $ defined by
$$
\mathcal{K}f(P) = \int_{\HH} K(P,Q) f(Q) ~dv(Q).
$$
We have
\bl \cite[Lemma 3.5.3]{Elstrodt} \label{E:Selberg Transform}
Let $k \in \scz $, $K(P,Q)=k(\delta (P,Q))$ for $P,Q \in \HH$, $f:\HH \ra \CC $ be a solution of $\lp f = \lambda f$, $ \lambda = 1-s^2 $, and
\beq
h(\lambda)\df h(1-s^2)\df \frac{\pi}{s}
\int_{1}^{\infty}k\left(\frac{1}{2}\left(t+\frac{1}{t}\right)\right)
(t^{s}-t^{-s})\left(t-\frac{1}{t}\right)\,\frac{dt}{t}.
\eeq
Then \eqref{E:Selberg Transform} converges absolutely and
$$
\int_{\HH}K(P,Q)f(Q)\, dv(Q)=h(\lambda)f(P).
$$
\el
The lemma above says that if $f$ is an eigenfunction\footnote{The function $f$ need not be in any Hilbert space. \\} of $\lp $ with eigenvalue $\lambda $ then $f $ is also an eigenfunction of $\mathcal{K} $ with eigenvalue $h(\lambda)$ depending only on the eigenvalue $\lambda $ and not on the particular eigenfunction.
The map sending $ k \in \scz $ to holomorphic function $h$ above is called the \mbox{Selberg-Harish-Chandra} transform of $k.$
\section{Kleinian Groups} \label{secKG}
A subgroup $\Gamma<\PSL(2,\CC)$ is called a Kleinian group if for each $P\in\HH$ the orbit $\Gamma P$ has no accumulation points in $\HH$. An equivalent formulation is that $\Gamma$ is a discrete subset of $\PSL(2,\CC)$ in the topology induced from $\CC^4.$
\noindent A closed subset $\F \subset \HH$ is called a fundamental domain of $\Gamma$ if
\begin{itemize}
\item $\F$ meets each $\Gamma-$orbit at least once,
\item the interior $\F^{o}$ meets each $\Gamma-$orbit at most once,
\item the boundary of $\F$ has Lebesgue measure zero.
\end{itemize}
\noindent For each $Q \in \HH$ the set $$
\mathcal{P}_{Q}(\Gamma) \df \{ P\in\HH\,|\, d(P,Q)\leq d(\gamma P,Q)\,\,\,\,\forall\,\gamma\in\Gamma\} $$ is a fundamental domain for $\Gamma$ that is centered at the point $Q.$
We say that $\Gamma$ is \emph{cocompact} if it has a fundamental domain $ \F $ that is compact, and \emph{cofinite}\footnote{Note that cocompact groups are also cofinite. \\} if it has a fundamental domain $ \F $ with
$$ \vol(\Gamma) \df \int_{\F}\, dv < \infty $$ ($dv$ is the volume form of $ \HH $).
\section{M\"{o}bius Transformations}
Each element $$\gamma=\left(\begin{array}{cc}
a & b\\
c & d\end{array}\right)\in\PSL(2,\CC) $$ falls into exactly\footnote{The identity element is an exception. Its trace is $\pm 2$ but it is not usually thought of as a parabolic element. \\} one of the following categories:
\begin{itemize}
\item \emph{parabolic} if $|\tr(\gamma)|=2$ and $\tr\gamma\in\RR,$
\item \emph{elliptic} if $0\leq|\tr(\gamma)|<2$ and $\tr\gamma\in\RR,$
\item \emph{loxodromic} if it is neither \emph{elliptic} nor \emph{parabolic}.
\end{itemize}
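For illustration (an editorial addition, not part of the thesis), the trace trichotomy above is easy to implement directly; the sketch below assumes an element is given by an $\SL(2,\CC)$ representative, as a nested tuple with determinant $1$, different from $\pm I$:
\begin{verbatim}
# Classify an element of PSL(2,C) (given by an SL(2,C) representative)
# as parabolic, elliptic, or loxodromic from its trace.
def classify(M, tol=1e-12):
    (a, b), (c, d) = M
    t = a + d                              # trace of the representative
    if abs(t.imag) < tol:                  # trace is (numerically) real
        if abs(abs(t.real) - 2) < tol:
            return "parabolic"
        if abs(t.real) < 2:
            return "elliptic"
    return "loxodromic"

print(classify(((1, 1), (0, 1))))                           # parabolic
print(classify(((complex(0, 1), 0), (0, complex(0, -1)))))  # elliptic (trace 0)
print(classify(((2, 0), (0, 0.5))))                         # loxodromic
\end{verbatim}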
There is a useful geometric characterization of the above notions.
An element $\gamma \in \pc $ is
\begin{itemize}
\item parabolic\footnote{See the previous footnote. \\} iff it has exactly one fixed point in $\cinf$,
\item elliptic iff it has two fixed points in $\cinf$ and fixes pointwise the geodesic line in $\HH$ connecting the two points.
\item loxodromic iff it has two fixed points in $\cinf$ and has no fixed points in $\HH.$ \end{itemize}
\section{Stabilizer Subgroups and Cusps} \label{secStaCus}
Let $\Gamma $ be a Kleinian group. For each $Q \in\HH\cup\cinf$ the stabilizer subgroup of $Q$ is denoted by
$ \Gamma_{Q} \df \{\,\gamma\in \Gamma \,|\,\gamma Q=Q\,\} $
and
for $ \zeta \in \cinf, $ $$
\Gamma_{\zeta}^{\prime} \df \{ \gamma \in \Gamma_{\zeta} \, | \, \gamma \, \text{ is parabolic or the identity} \, \}.
$$
Set
$$ B(\CC) \df \left\{ \,\left.\left(\begin{array}{cc}
a & b\\
0 & a^{-1}\end{array}\right)\,\right|\,0\neq a\in\CC,\, b\in\CC\,\right\} /\{\pm I\}<\pc, $$
$$ N(\CC) \df \left\{ \,\left.\left(\begin{array}{cc}
1 & b\\
0 & 1\end{array}\right)\,\right|\,\, b\in\CC\,\right\} /\{\pm I\}<\pc. $$
The group $B(\CC)$ is the stabilizer of $\infty$ in $\pc$, and $N(\CC)$ is the maximal parabolic subgroup of $B(\CC)$: it consists of all parabolic elements fixing $\infty$, together with the identity element.
A point $\zeta \in \cinf $ is called a \emph{cusp} of $\Gamma$ if $ \Gamma_{\zeta}^{\prime} $ is a free abelian group of rank two. The set of cusps is denoted by $C_{\Gamma}$. Two cusps $\alpha$ and $\beta$ of $\Gamma$ are called equivalent, or $\Gamma$-equivalent, if\footnote{$\Gamma \beta$ denotes the orbit of the point $\beta \in \PP.$ \\} $\alpha \in \Gamma \beta.$ The set of equivalence classes of cusps is denoted by $\Gamma\setminus C_{\Gamma}.$
The following is well known (see \cite[Chapter 2]{Elstrodt}).
\begin{lem}\label{lemFiniteGeo}
\noindent Let $\Gamma$ be a Kleinian group.
\begin{enumerate}
\item If $\Gamma$ contains a parabolic element then $\Gamma$ is not
cocompact.
\item If $\Gamma$ is cofinite and is not cocompact then $\Gamma$ contains a parabolic element.
\item If $\Gamma$ is cofinite, $\zeta\in\cinf$, and $\Gamma_{\zeta}$
contains a parabolic element then $\zeta$ is a cusp of $\Gamma.$
\item If $\Gamma$ is cofinite then $\Gamma$ has only finitely many
$\Gamma-$equivalent classes of cusps.
\end{enumerate}
\end{lem}
\noindent Next we study the structure\footnote{The following structure theorem is extremely important, and without it we would be unable to extend the Selberg theory to cofinite Kleinian groups. \\} of the stabilizer subgroup of a cusp for cofinite Kleinian groups.
\begin{lem} \cite[Theorem 2.1.8 part (3)]{Elstrodt}
\label{lemCuspStructI}Let $\Gamma$ be cofinite with a cusp
at $\infty.$ Then $\gip$ is a lattice in $B(\CC) \approx \CC$ and one of the following three holds.
\begin{enumerate}
\item $\Gamma_{\infty}=\Gamma'_{\infty}$
\item $\Gamma_{\infty}$ is conjugate in $B(\CC)$ to a group of the
form \[
\left\{ \,\left.\left(\begin{array}{cc}
\epsilon & \epsilon b\\
0 & \epsilon^{-1}\end{array}\right)\,\right|\,0\neq b\in\Lambda,\,\epsilon\in\{1,i\}\,\right\} /\{\pm I\}\]
where $\Lambda<\CC$ is an arbitrary lattice. As an abstract group
$\Gamma_{\infty}$ is isomorphic to $\ZZ^2 \rtimes \ZZ/2 \ZZ$ where the
nontrivial element of $\ZZ/2 \ZZ$ acts by multiplication by $-1.$
\item $\gip$ is conjugate in $B(\CC)$ to a group of the form \[
\Gamma(n,t)=\left\{ \,\left.\left(\begin{array}{cc}
\epsilon & \epsilon b\\
0 & \epsilon^{-1}\end{array}\right)\,\right|\,\begin{array}{l}
b\in\mathcal{O}_{n}\\
\epsilon=\exp\left(\frac{\pi ivt}{n}\right)\, for\,\,1\leq v\leq2n\end{array}\,\right\} /\{\pm I\}\]
where $n=4$ or $n=6$ and $t|n$ and where $\mathcal{O}_{n}$ is
the ring of integers in the quadratic number field $\QQ\left(\exp\left(\frac{2\pi i}{n}\right)\right).$
Hence, as an abstract group $\gi$ is isomorphic to the group $\ZZ^2\rtimes\ZZ/m\ZZ$
for some $m\in\{1,2,3,4,6\}.$ An element \[
\epsilon\in\ZZ/m\ZZ\cong\left\{ \exp\left(\frac{\pi iv}{n}\right)\,|v\in\ZZ\,\right\} \]
acts on $\ZZ^{2}\cong\mathcal{O}_{m'}^{+}$ by multiplication with
$\epsilon^{2}$, where $m'=4$ in case \linebreak $m\in\{1,2,4\}$ and $m'=6$
otherwise.
\end{enumerate}
\end{lem}
\section{Cofinite Kleinian Groups} \label{secCofKleGro}
Let $\Gamma $ be a cofinite Kleinian group. By Lemma~\ref{lemFiniteGeo} $\Gamma $ has finitely many equivalence classes of cusps.
\bnot
Unless otherwise noted $\Gamma$ is a cofinite Kleinian group with $$ \kappa \df \left| \Gamma \setminus C_{\Gamma} \right|, $$ and a maximal set $\{\zeta_\alpha \}_{\alpha = 1}^{\kappa} $ of representatives for the equivalence classes of cusps. We set
$ \Gamma_\alpha \df \Gamma_{\zeta_\alpha}$ and
$ \Gamma_\alpha^\prime \df \Gamma_{\zeta_\alpha}^\prime. $
\enot
\noindent The following is elementary.
\bl \label{lemConjCusp}
Let $\zeta_\alpha$ be a cusp of $\Gamma.$ Then there exists $B_\alpha \in \pc $ and a lattice $\Lambda_\alpha = \ZZ \oplus \ZZ \tau_\alpha,~\I(\tau_\alpha) > 0 $ satisfying the following.
\ben
\item $\zeta_{\alpha} = B_{\alpha}^{-1} \infty,$
\item $B_{\alpha}\Gamma_{\alpha}B_{\alpha}^{-1} $ acts discontinuously on $\cinf \setminus \{\infty\} = \CC$.
\item $$ B_{\alpha}\Gamma_{\alpha}^\prime B_{\alpha}^{-1} =
\left\{ \,\left.\left(\begin{array}{cc}
1 & b\\
0 & 1\end{array}\right)\,\right|\,0\neq b \in \Lambda_\alpha \,\right\}. $$
\een
\el
\begin{nota}
For each cusp class $\alpha = 1\dots \kappa $ we fix $B_\alpha $ and $\Lambda_\alpha $ from Lemma~\ref{lemConjCusp} and set $ \Gamma_{\alpha_\infty} \df B_{\alpha}\Gamma_{\alpha}B_{\alpha}^{-1}, $ and $ \Gamma_{\alpha_\infty}^\prime \df B_{\alpha}\Gamma_{\alpha}^\prime B_{\alpha}^{-1}.$
\end{nota}
While the action of $ \Gamma_{\alpha_\infty}^\prime $ on $\CC$ is exactly the action of the (additive) lattice $\Lambda_\alpha$ on\footnote{For $z\in \CC,$ we have $ \left(\begin{array}{cc}
1 & b\\
0 & 1\end{array}\right)z = z+b.$ \\ } $\CC, $ the action of $ \Gamma_{\alpha_\infty} $ is a combination of the lattice action of $\Lambda_\alpha$ and possibly finite-order euclidean rotations of $\CC.$
The fundamental domain for a cofinite Kleinian group can be realized as the union of a compact (hyperbolic) polyhedron and $\kappa$ \emph{cusp sectors}. Let $\pg_{\alpha} \subset \CC $ be a fundamental domain\footnote{The domain $\pg_{\alpha} $ is a euclidean polygon. \\} for the action of $\Gamma_{\alpha_\infty}$ on $\CC,$ and let $\pg_{\alpha}^\prime $ be the fundamental parallelogram with base point at the origin of the lattice $\Lambda_\alpha. $ For $Y>0$ set $$
\widetilde{\F}_{\alpha}(Y) \df \{\, z+rj\,|\, z\in\pg_{\alpha},\, r\geq Y\,\}$$
and define the cusp sector, $
\F_{\alpha}(Y) \df B_{\alpha}^{-1} \widetilde{\F}_{\alpha}(Y). $
\begin{lem}
\label{lemFunDom} \cite[Prop. 2.3.9]{Elstrodt} Let $\Gamma < \pc$ be cofinite with $\kappa = \left|\Gamma\setminus C_{\Gamma}\right|.$
Then there exist $Y>0$ and a compact set $\F_{Y}\subset\HH$
such that \[
\F \df \F_{Y}\cup\F_{1}(Y)\cup\cdots\cup\F_{\kappa}(Y)\]
is a fundamental domain for $\Gamma.$ The compact set $\F_{Y}$
can be chosen such that the intersections \mbox{$\F_{Y}\cap\F_{\alpha}(Y)$}
are all contained in the boundary of $\F_{Y}$ and hence have Lebesgue
measure 0. Also, $\F_{\alpha}(Y)\cap\F_{\beta}(Y)=\emptyset$
if $\alpha \neq \beta.$
\end{lem}
\section{Cuspidal elliptic elements}
Throughout this section $\Gamma $ is a cofinite Kleinian group.
\bl Let $\Gamma$ be cofinite. Then $\Gamma $ has only finitely many elliptic conjugacy classes in $\Gamma$.
\el
\pf
Assume not. Then there is an infinite sequence of elliptic $\Gamma$-conjugacy classes $ \{[e_n] \}. $ Next, for each $n$ choose a representative $e_n$ which fixes a point $P_n$ on the boundary of $\F,$ the fundamental domain of $\Gamma$ given in Lemma~\ref{lemFunDom}. Since $\Gamma$ is discrete, the points must accumulate at some cusp. After conjugating $\Gamma$ and passing to a subsequence of $\{P_n \}$ we may assume that $P_{n} \ra \infty.$ An application of \cite[Corollary 2.3.3]{Elstrodt} implies that $e_n \in \Gamma_\infty.$ In short, we have constructed infinitely many elliptic $\Gamma$-conjugacy classes with a representative fixing the cusp $\infty.$ An elementary computation using Lemma~\ref{lemCuspStructI} shows that there are only finitely many elliptic $\Gamma_\infty$-conjugacy classes, a contradiction.
\epf
\bnot For each $ \alpha \in \{ 1 \dots \kappa \} $ set
$$ \Upsilon_{\alpha} \df \left\{ \left(\begin{array}{cc}
\epsilon & 0\\
0 & \epsilon^{-1}\end{array}\right)~ \right. \left| ~ \left(\begin{array}{cc}
\epsilon & 0\\
0 & \epsilon^{-1}\end{array}\right) \in \Gamma_{\alpha_\infty} \right\}. $$
\enot
If $$ \left(\begin{array}{cc}
\epsilon & 0\\
0 & \epsilon^{-1}\end{array}\right) \in \Upsilon_\alpha, $$ then for some non-zero natural number $N,$ $\epsilon^N = 1. $
We have the following application of Lemma~\ref{lemCuspStructI}.
\bl
The set $\Upsilon_{\alpha}$ is a finite cyclic group, isomorphic both to a finite subgroup of the unit circle $ S^{1}$ and to the quotient $\Gamma_{\alpha_\infty}/\Gamma'_{\alpha_\infty}.$ Any element $\gamma \in \Gamma_{\alpha_\infty}$ can be written uniquely in the form $
\gamma = \alpha \beta, $
where $\alpha \in \Upsilon_{\alpha}$ and $\beta \in \Gamma_{\alpha_\infty}^{\prime}.$ \el
\bd An elliptic element $\gamma \in \Gamma$, is said to be
\emph{cuspidal elliptic} if at least one\footnote{ Actually by \cite[Cor 2.3.11]{Elstrodt} \label{footCuspEllip}, if one fixed point is a cusp, then the other fixed point is also a cusp. } of its fixed points in
$\cinf$ is a cusp of $\Gamma$. Otherwise it is called a \emph{non-cuspidal elliptic} element. The set of cuspidal elliptic elements is denoted by $\Gamma^{\CE}.$
\ed
\bnot
For $\alpha \in \{1\dots \kappa \} $ define $ \cuspi_\alpha $ to be the set of elements of $\Gamma$
which are $\Gamma$-conjugate to an element of $ \Gamma_\alpha
\setminus \Gamma_\alpha^\prime. $
We fix representatives $ g_{1}^{\alpha}, \dots , g_{d_\alpha}^{\alpha} $ of $\cuspi_\alpha$
and define $
q_{i}^{\alpha} \df B_{\alpha} g_{i}^{\alpha} B_{\alpha}^{-1}. $
\enot
Since $q_{i}^{\alpha}$ fixes $\infty, $
\beq q_{i}^{\alpha} = \left(\begin{array}{cc}
\epsilon^{\alpha}_{i} & \epsilon^{\alpha}_{i}w^{\alpha}_{i}\\
0 & \left(\epsilon^{\alpha}_{i}\right)^{-1}\end{array}
\right),
\eeq
where $\epsilon^{\alpha}_{i}$ is a root of unity and $w^{\alpha}_{i} \in \Lambda_\alpha.$
When Elusive Muse was trying to select a collection of “favorite” illustrations created by Nicoletta Pagano, the task seemed almost impossible, because each and every piece she has created is nothing short of spectacular. Her style is so distinctive, and her skill is such an inspiration. It is with great pleasure that we share a small sample of Nicoletta Pagano’s work.
If you would like to learn more about her, purchase something, or see more of her amazing collection of work, follow any of the links below.
The images almost appear to have a 3-D effect. Cool.
What a great talent, Nicoletta. Thanks for sharing!
\begin{document}
\pagestyle{empty}
\begin{titlepage}
\title{Is Natural Language a Perigraphic Process?\\
The Theorem about Facts and Words Revisited}
\author{{\L}ukasz D\k{e}bowski\thanks{
{\L}. D\k{e}bowski is with
the Institute of Computer Science, Polish Academy of Sciences,
ul. Jana Kazimierza 5, 01-248 Warszawa, Poland
(e-mail: [email protected]).}}
\date{}
\maketitle
\begin{abstract}
As we discuss, a stationary stochastic process is nonergodic when a
random persistent topic can be detected in the infinite random text
sampled from the process, whereas we call the process strongly
nonergodic when an infinite sequence of independent random bits,
called probabilistic facts, is needed to describe this topic
completely. Replacing probabilistic facts with an algorithmically
random sequence of bits, called algorithmic facts, we adapt this
property back to ergodic processes. Subsequently, we call a process
perigraphic if the number of algorithmic facts which can be inferred
from a finite text sampled from the process grows like a power of
the text length. We present a simple example of such a
process. Moreover, we demonstrate an assertion which we call the
theorem about facts and words. This proposition states that the
number of probabilistic or algorithmic facts which can be inferred
from a text drawn from a process must be roughly smaller than the
number of distinct word-like strings detected in this text by means
of the PPM compression algorithm. We also observe that the number of
word-like strings for a sample of plays by Shakespeare follows
an empirical stepwise power law, in stark contrast to Markov
processes. Hence we suppose that natural language considered as a
process is not only non-Markov but also perigraphic.
\\[2ex]
\textbf{Keywords:} stationary processes, PPM code, mutual
information, power laws, algorithmic information theory, natural
language
\end{abstract}
\end{titlepage}
\pagestyle{plain}
\section{Introduction}
\subsection*{}
One of motivating assumptions of information theory
\cite{Shannon48,Shannon51,CoverThomas06} is that communication in
natural language can be reasonably modeled as a discrete stationary
stochastic process, namely, an infinite sequence of discrete random
variables with a well defined time-invariant probability
distribution. The same assumption is made in several practical
applications of computational linguistics, such as speech recognition
\cite{Jelinek97} or part-of-speech tagging
\cite{ManningSchutze99}. Whereas state-of-the-art stochastic models of
natural language are far from being satisfactory, we may ask a more
theoretically oriented question, namely:
\begin{quote}
What can be some general mathematical properties of natural language
treated as a stochastic process, in view of empirical data?
\end{quote}
In this paper, we will investigate a question whether it is reasonable
to assume that natural language communication is a \emph{perigraphic}
process.
To recall, a stationary process is called ergodic if the relative
frequencies of all finite substrings in the infinite text generated by
the process converge in the long run with probability one to some
constants---the probabilities of the respective strings. Now, some
basic linguistic intuition suggests that natural language does not
satisfy this property, cf. \cite[Section 6.4]{CoverThomas06}. Namely,
we can probably agree that there is a variation of topics of texts in
natural language, and these topics can be empirically distinguished by
counting relative frequencies of certain substrings called keywords.
Hence we expect that the relative frequencies of keywords in a
randomly selected text in natural language are random variables
depending on the random text topic. In the limit, for an infinitely
long text, we may further suppose that the limits of relative
frequencies of keywords persist to be random, and if this is true then
natural language is not ergodic, i.e., it is nonergodic.
In this paper we will entertain first a stronger hypothesis, namely,
that natural language communication is strongly nonergodic.
Informally speaking, a stationary process will be called strongly
nonergodic if its random persistent topic has to be described using an
infinite sequence of probabilistically independent binary random
variables, called probabilistic facts. Like nonergodicity, strong
nonergodicity is not empirically verifiable if we only have a single
infinite sequence of data. But replacing probabilistic facts with an
algorithmically random sequence of bits, called algorithmic facts, we
can adapt the property of strong nonergodicity back to ergodic
processes. Subsequently, we will call a process \emph{perigraphic} if
the number of algorithmic facts which can be inferred from a finite
text sampled from the process grows like a power of the text
length. It is a general observation that perigraphic processes have
uncomputable distributions.
It is interesting to note that \emph{perigraphic} processes can be
singled out by some statistical properties of the texts they generate.
We will exhibit a proposition, which we call the theorem about facts
and words. Suppose that we have a finite text drawn from a stationary
process. The theorem about facts and words says that the number of
independent probabilistic or algorithmic facts that can be reasonably
inferred from the text must be roughly smaller than the number of
distinct word-like strings detected in the text by some standard data
compression algorithm called the Prediction by Partial Matching (PPM)
code \cite{ClearyWitten84}. It is important to stress that in this
theorem we do not relate the numbers of all facts and all word-like
strings, which would sound trivial, but we compare only the numbers of
independent facts and distinct word-like strings.
Having the theorem about facts and words, we can also discuss some
empirical data. Since the number of distinct word-like strings for
texts in natural language follows an empirical stepwise power law, in
stark contrast to Markov processes, we suppose that
the number of inferrable random facts for natural language also
follows a power law. That is, we suppose that natural language is not
only non-Markov but also \emph{perigraphic}.
Whereas in this paper we fill several important missing gaps and
provide an overarching narration, the basic ideas presented in this
paper are not so new. The starting point was a corollary of Zipf's
law and a hypothesis by Hilberg. Zipf's law is an empirical
observation that in texts in natural language, the frequencies of
words obey a power law decay when we sort the words according to their
decreasing frequencies \cite{Zipf65,Mandelbrot54}. A corollary of this
law, called Heaps' law
\cite{KuraszkiewiczLukaszewicz51en,Guiraud54,Herdan64,Heaps78}, states
that the number of distinct words in a text in natural language grows
like a power of the text length. In contrast to these simple empirical
observations, Hilberg's hypothesis is a less known conjecture about
natural language that the entropy of a text chunk of an increasing
length \cite{Hilberg90} or the mutual information between two adjacent
text chunks
\cite{EbelingNicolis91,EbelingPoschel94,BialekNemenmanTishby01b,CrutchfieldFeldman03}
obey also a power law growth. In paper \cite{Debowski06}, it was
heuristically shown that if Hilberg's hypothesis for mutual
information is satisfied for an arbitrary stationary stochastic
process then texts drawn from this process satisfy also a kind of
Heaps' law if we detect the words using the grammar-based codes
\cite{Wolff80,DeMarcken96,KitWilks99,KiefferYang00}. This result is a
historical antecedent of the theorem about facts and words.
Another important step was a discovery of some simple strongly
nonergodic processes, satisfying the power law growth of mutual
information, called Santa Fe processes, discovered by D\k{e}bowski in
August 2002, but first reported only in \cite{Debowski09}.
Subsequently, in paper \cite{Debowski11b}, a completely formal proof
of the theorem about facts and words for strictly minimal
grammar-based codes \cite{KiefferYang00,CharikarOthers05} was
provided. The respective related theory of natural language was later
reviewed in \cite{Debowski11d,Debowski15} and supplemented by a
discussion of Santa Fe processes in \cite{Debowski12}. Some drawback
of this theory at that time was that strictly minimal grammar-based
codes used in the statement of the theorem about facts and words are
not computable in a polynomial time \cite{CharikarOthers05}. This
precluded an empirical verification of the theory.
As for the relative novelty of this paper, we are glad to announce a
new, stronger version of the theorem about facts and words for a
somewhat more elegant definition of inferrable facts and the PPM code,
which is computable almost in a linear time. For the first time, we
also present two cases of the theorem: one for strongly nonergodic
processes, applying Shannon information theory, and one for general
stationary processes, applying algorithmic information theory. Having
these results, we can supplement them finally with a rudimentary
discussion of some empirical data.
The organization of this paper is as follows. In Section
\ref{secErgodic}, we discuss some properties of ergodic and nonergodic
processes. In Section \ref{secStronglyNonergodic}, we define strongly
nonergodic processes and we present some examples of
them. Analogically, in Section \ref{secPerigraphic}, we discuss
perigraphic processes. In Section \ref{secFactsWords}, we discuss two
versions of the theorem about facts and words. In Section
\ref{secNaturalLanguage}, we discuss some empirical data and we
suppose that natural language may be a perigraphic process. In
Section \ref{secConclusion}, we offer concluding remarks. Moreover,
three appendices follow the body of the paper. In Appendix
\ref{secFactsMI}, we prove the first part of the theorem about facts
and words. In Appendix \ref{secMIWords}, we prove the second part of
this theorem. In Appendix \ref{secSantaFe}, we show that the
number of inferrable facts for the Santa Fe processes follows a power
law.
\section{Ergodic and nonergodic processes}
\label{secErgodic}
We assume that the reader is familiar with some probability measure
theory \cite{Billingsley79}. For a real-valued random variable $Y$ on
a probability space $(\Omega,\mathcal{J},P)$, we denote its
expectation
\begin{align}
\sred Y:=\int YdP.
\end{align}
Consider now a discrete stochastic process
$(X_i)_{i=1}^\infty=(X_1,X_2,...)$, where random variables $X_i$ take
values from a set $\mathbb{X}$ of countably many distinct symbols,
such as letters with which we write down texts in natural language. We
denote blocks of consecutive random variables $X_j^k:=(X_j,...,X_k)$
and symbols $x_j^k:=(x_j,...,x_k)$. Let us define a binary random
variable telling whether some string $x_1^n$ has occurred in sequence
$(X_i)_{i=1}^\infty$ on positions from $i$ to $i+n-1$,
\begin{align}
\Phi_i(x_1^n):=\boole{X_{i}^{i+n-1}=x_1^n},
\end{align}
where
\begin{align}
\boole{\phi}
=
\begin{cases}
1 & \text{if $\phi$ is true},\\
0 & \text{if $\phi$ is false}.
\end{cases}
\end{align}
The expectation of this random variable,
\begin{align}
\sred \Phi_i(x_1^n)=P(X_{i}^{i+n-1}=x_1^n),
\end{align}
is the probability of the chosen string, whereas the arithmetic
average of consecutive random variables
$\frac{1}{m}\sum_{i=1}^m \Phi_i(x_1^n)$ is the relative frequency of the
same string in a finite sequence of random symbols $X_1^{m+n-1}$.
Process $(X_i)_{i=1}^\infty$ is called \emph{stationary} (with respect
to a probability measure $P$) if expectations $\sred \Phi_i(x_1^n)$ do
not depend on position $i$ for any string $x_1^n$. In this case, we
have the following well known theorem, which establishes that the
limiting relative frequencies of strings $x_1^n$ in infinite sequence
$(X_i)_{i=1}^\infty$ exist almost surely, i.e., with probability $1$:
\begin{theorem}[ergodic theorem, cf. e.g. \cite{Gray09}]
For any discrete stationary process $(X_i)_{i=1}^\infty$, there
exist limits
\begin{align}
\Phi(x_1^n):=\lim_{m\rightarrow\infty}
\frac{1}{m}\sum_{i=1}^m \Phi_i(x_1^n) \text{ almost surely},
\end{align}
with expectations $\sred \Phi(x_1^n)=\sred \Phi_i(x_1^n)$.
\end{theorem}
In general, limits $\Phi(x_1^n)$ are random variables depending
on a particular value of infinite sequence $(X_i)_{i=1}^\infty$. It is
quite natural, however, to require that the relative frequencies of
strings $\Phi(x_1^n)$ are almost surely constants, equal to the
expectations $\sred \Phi_i(x_1^n)$. Subsequently, process
$(X_i)_{i=1}^\infty$ will be called \emph{ergodic} (with respect to a
probability measure $P$) if limits $\Phi(x_1^n)$ are almost
surely constant for any string $x_1^n$. The standard
definition of an ergodic process is more abstract but is equivalent to
this statement \cite[Lemma 7.15]{Gray09}.
The following examples of ergodic processes are well known:
\begin{enumerate}
\item Process $(X_i)_{i=1}^\infty$ is called \emph{IID} (independent
identically distributed) if
\begin{align}
P(X_1^n=x_1^n)=\pi(x_1)...\pi(x_n).
\end{align}
All IID processes are ergodic.
\item Process $(X_i)_{i=1}^\infty$ is called \emph{Markov} (of order
$1$) if
\begin{align}
P(X_1^n=x_1^n)=\pi(x_1)p(x_2|x_1)...p(x_n|x_{n-1}).
\end{align}
A Markov process is ergodic in particular if
\begin{align}
\label{MarkovMixing}
p(x_i|x_{i-1})>c>0.
\end{align}
For a sufficient and necessary condition see \cite[Theorem
7.16]{Breiman92}.
\item Process $(X_i)_{i=1}^\infty$ is called \emph{hidden Markov} if
$X_i=g(S_i)$ for a certain Markov process $(S_i)_{i=1}^\infty$ and a
function $g$. A hidden Markov process is ergodic in particular if
the underlying Markov process is ergodic.
\end{enumerate}
Whereas IID and Markov processes are some basic models in probability
theory, hidden Markov processes are of practical importance in
computational linguistics \cite{Jelinek97,ManningSchutze99}. Hidden
Markov processes as considered there usually satisfy condition
(\ref{MarkovMixing}) and therefore they are ergodic.
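
As a toy illustration of the relative frequencies introduced above, the
following Python sketch (our own, not part of the original exposition;
the transition probabilities are arbitrary) simulates a two-state Markov
chain satisfying condition (\ref{MarkovMixing}) and prints the relative
frequency of a fixed string along a single sample path, which stabilizes
as the ergodic theorem predicts.

\begin{verbatim}
import random

def sample_markov(n, p01=0.3, p10=0.2, seed=0):
    # Simulate a two-state Markov chain X_1, ..., X_n over {0, 1}, with
    # p01 = P(X_{i+1} = 1 | X_i = 0) and p10 = P(X_{i+1} = 0 | X_i = 1).
    rng = random.Random(seed)
    x = [rng.randint(0, 1)]
    for _ in range(n - 1):
        if x[-1] == 0:
            x.append(1 if rng.random() < p01 else 0)
        else:
            x.append(0 if rng.random() < p10 else 1)
    return x

def relative_frequency(x, w):
    # Relative frequency of the string w among the substrings of x
    # of length len(w), i.e. the average of the indicators Phi_i(w).
    k, n = len(w), len(x)
    hits = sum(1 for i in range(n - k + 1) if x[i:i + k] == w)
    return hits / (n - k + 1)

if __name__ == "__main__":
    for n in (10**3, 10**4, 10**5):
        x = sample_markov(n)
        print(n, relative_frequency(x, [0, 1]))  # approaches a constant
\end{verbatim}
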
Let us call a probability measure $P$ stationary or ergodic,
respectively, if the process $(X_i)_{i=1}^\infty$ is stationary or
ergodic with respect to the measure $P$. Suppose that we have a
stationary measure $P$ which generates some data
$(X_i)_{i=1}^\infty$. We can define a new random measure $F$ equal to
the relative frequencies of blocks in the data $(X_i)_{i=1}^\infty$.
It turns out that the measure $F$ is almost surely ergodic. Formally,
we have this proposition.
\begin{theorem}[\mbox{cf. \cite[Theorem 9.10]{Kallenberg97}}]
\label{theoPreErgodicDecomp}
Any process $(X_i)_{i=1}^\infty$ with a stationary measure $P$ is
almost surely ergodic with respect to the random measure $F$ given
by
\begin{align}
F(X_1^n=x_1^n):=\Phi(x_1^n).
\end{align}
\end{theorem}
Moreover, from the random measure $F$ we can obtain the stationary
measure $P$ by integration, $P(X_1^n=x_1^n)=\sred F(X_1^n=x_1^n)$. The
following result asserts that this integral representation of measure
$P$ is unique.
\begin{theorem}[\mbox{ergodic decomposition, cf. \cite[Theorem 9.12]{Kallenberg97}}]
Any stationary probability measure $P$ can be represented as
\begin{align}
P(X_1^n=x_1^n)=\int F(X_1^n=x_1^n)d\nu(F),
\end{align}
where $\nu$ is a unique measure on stationary ergodic measures.
\end{theorem}
In other words, stationary ergodic measures are some building blocks
from which we can construct any stationary measure. For a stationary
probability measure $P$, the particular values of the random ergodic
measure $F$ are called the ergodic components of measure $P$.
Consider, for instance, a Bernoulli($\theta$) process with
measure
\begin{align}
\label{Bernoulli}
F_\theta(X_1^n=x_1^n)=\theta^{\sum_{i=1}^n
x_i}(1-\theta)^{n-\sum_{i=1}^n x_i},
\end{align}
where $x_i\in\klam{0,1}$ and $\theta\in[0,1]$. This measure will be
contrasted with the measure of a mixture Bernoulli process with
parameter $\theta$ uniformly distributed on interval $[0,1]$,
\begin{align}
\label{MixtureBernoulli}
P(X_1^n=x_1^n)&=\int_0^1
F_\theta(X_1^n=x_1^n)d\theta
\nonumber\\
&=\frac{1}{n+1}\kwad{\binom{n}{\sum_{i=1}^n x_i}}^{-1}.
\end{align}
Measure (\ref{Bernoulli}) is a measure of an IID process and is
therefore ergodic, whereas measure (\ref{MixtureBernoulli}) is a
mixture of ergodic measures and hence it is nonergodic.
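
To contrast measures (\ref{Bernoulli}) and (\ref{MixtureBernoulli})
numerically, one may extend the previous sketch as follows (again purely
illustrative code of ours): sampling from the mixture amounts to drawing
$\theta$ uniformly first and then flipping a $\theta$-coin, and the
relative frequency of the symbol $1$ then varies from sample path to
sample path instead of settling on a single constant.

\begin{verbatim}
import random

def sample_mixture_bernoulli(n, rng):
    # One finite sample from measure (MixtureBernoulli): draw theta
    # uniformly on [0, 1], then n independent bits with P(1) = theta.
    theta = rng.random()
    return theta, [1 if rng.random() < theta else 0 for _ in range(n)]

if __name__ == "__main__":
    rng = random.Random(1)
    n = 10**5
    for run in range(3):
        theta, x = sample_mixture_bernoulli(n, rng)
        freq = sum(x) / n
        # The limiting frequency tracks the random theta, not a constant.
        print(f"run {run}: theta = {theta:.3f}, frequency of 1 = {freq:.3f}")
\end{verbatim}
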
\section{Strongly nonergodic processes}
\label{secStronglyNonergodic}
According to our definition, a process is ergodic when the relative
frequencies of any strings in a random sample in the long run converge
to some constants. Consider now the following thought
experiment. Suppose that we select a random book from a library.
Counting the relative frequencies of keywords, such as
\emph{bijection} for a text in mathematics and \emph{fossil} for a
text in paleontology, we can effectively recognize the topic of the
book. Simply put, the relative frequencies of some keywords will be
higher for books concerning some topics whereas they will be lower for
books concerning other topics. Hence, in our thought experiment, we
expect that the relative frequencies of keywords are some random
variables with values depending on the particular topic of the
randomly selected book. Since keywords are some particular strings, we
may conclude that the stochastic process that models natural language
should be nonergodic.
The above thought experiment provides another perspective onto
nonergodic processes. According to the following theorem, a process is
nonergodic when we can effectively distinguish in the limit at least
two random topics in it. In the statement, function
$f:\mathbb{X}^*\rightarrow\klam{0,1,2}$ assumes values $0$ or $1$ when
we can identify the topic, whereas it takes value $2$ when we are not
certain which topic a given text is about.
\begin{theorem}[cf. \cite{Debowski09}]
\label{theoKnowability}
A stationary discrete process $(X_i)_{i=1}^\infty$ is nonergodic if
and only if there exists a function
$f:\mathbb{X}^*\rightarrow\klam{0,1,2}$ and a binary random variable
$Z$ such that $0<P(Z=0)<1$ and
\begin{align}
\label{Knowability}
\lim_{n\rightarrow\infty} P(f(X_{i}^{i+n-1})=Z)=1
\end{align}
for any position $i\in\mathbb{N}$.
\end{theorem}
A binary variable $Z$ satisfying condition (\ref{Knowability}) will be
called a \emph{probabilistic fact}. A probabilistic fact tells which
of two topics the infinite text generated by the stationary process is
about. It is a kind of a random switch which is preset before we start
scanning the infinite text, compare a similar wording in
\cite{GrayDavisson74b}. To keep the proofs simple, here we only give a
new elementary proof of the ``$\implies$'' statement of Theorem
\ref{theoKnowability}. The proof of the ``$\impliedby$'' part applies
some measure theory and follows the idea of Theorem 9 from
\cite{Debowski09} for strongly nonergodic processes, which we will
discuss in the next paragraph.
\begin{proof} (only $\implies$) Suppose that process
$(X_i)_{i=1}^\infty$ is nonergodic. Then there exists a string
$x_1^k$ such that $\Phi\neq \sred \Phi$ for $\Phi:=\Phi(x_1^k)$ with
some positive probability. Hence there exists a real number $y$ such
that $P(\Phi=y)=0$ and
\begin{align}
\label{Separation}
P(\Phi>y)=1-P(\Phi<y)\in(0,1)
.
\end{align}
Define $Z:=\boole{\Phi>y}$ and
$f(X_{i}^{i+n-1}):=Z_{in}:=\boole{\Phi_{in}>y}$, where
\begin{align}
\Phi_{in}:=\frac{1}{n-k+1}\sum_{j=i}^{i+n-k}
\Phi_j(x_1^k)
.
\end{align}
Since $\lim_{n\rightarrow\infty} \Phi_{in}=\Phi$ almost surely and
$\Phi$ satisfies (\ref{Separation}), convergence
$\lim_{n\rightarrow\infty} Z_{in}=Z$ also holds almost
surely. Applying the Lebesgue dominated convergence theorem we obtain
\begin{align}
\lim_{n\rightarrow\infty}
P(f(X_{i}^{i+n-1})=Z)
&= \lim_{n\rightarrow\infty}
\sred\kwad{Z_{in}Z+(1-Z_{in})(1-Z)}
\nonumber\\
&=\sred\kwad{Z^2+(1-Z)^2}=1
.
\end{align}
\end{proof}
As for books in natural language, we may have an intuition that the
pool of available book topics is extremely large and contains many
more topics than just two. For this reason, we may need not a single
probabilistic fact $Z$ but rather a sequence of probabilistic facts
$Z_1,Z_2,...$ to specify the topic of a book completely. Formally,
stationary processes requiring an infinite sequence of independent
uniformly distributed probabilistic facts to describe the topic of an
infinitely long text will be called strongly nonergodic.
\begin{definition}[cf. \cite{Debowski09,Debowski11b}]
A stationary discrete process $(X_i)_{i=1}^\infty$ is called
\emph{strongly nonergodic} if there exist a function
$g:\mathbb{N}\times\mathbb{X}^*\rightarrow\klam{0,1,2}$ and a~binary
IID process $(Z_k)_{k=1}^\infty$ such that $P(Z_k=0)=P(Z_k=1)=1/2$
and
\begin{align}
\label{StrongKnowability}
\lim_{n\rightarrow\infty} P(g(k;X_{i}^{i+n-1})=Z_k)=1
\end{align}
for any position $i\in\mathbb{N}$ and any index $k\in\mathbb{N}$.
\end{definition}
As we have stated above, for a strongly nonergodic process, there is
an infinite number of independent probabilistic facts
$(Z_k)_{k=1}^\infty$ with a uniform distribution on the set
$\klam{0,1}$. Formally, these probabilistic facts can be assembled into a
single real random variable $T=\sum_{k=1}^\infty 2^{-k} Z_k$, which
is uniformly distributed on the unit interval $[0,1]$. The value of
variable $T$ identifies the topic of a random infinite text generated
by the stationary process. Thus for a strongly nonergodic process,
we have a continuum of available topics which can be incrementally
identified from any sufficiently long text. Put formally, according
to Theorem 9 from \cite{Debowski09} a stationary process is strongly
nonergodic if and only if its shift-invariant $\sigma$-field contains
a nonatomic sub-$\sigma$-field. We note in passing that in
\cite{Debowski09} strongly nonergodic processes were called
\emph{uncountable description processes}.
In view of Theorem 9 from \cite{Debowski09}, the mixture Bernoulli
process (\ref{MixtureBernoulli}) is some example of a strongly
nonergodic process. In this case, the parameter $\theta$ plays the
role of the random variable $T=\sum_{k=1}^\infty 2^{-k} Z_k$.
Showing that condition (\ref{StrongKnowability}) is satisfied for
this process in an elementary fashion is a tedious exercise. Hence
let us present now a simpler guiding example of a strongly nonergodic
process, which we introduced in \cite{Debowski09,Debowski11b} and
called the Santa Fe process. Let $(Z_k)_{k=1}^\infty$ be a binary
IID process with $P(Z_k=0)=P(Z_k=1)=1/2$. Let $(K_i)_{i=1}^\infty$
be an IID process with $K_i$ assuming values in natural numbers with
a power-law distribution
\begin{align}
P(K_i=k)\propto \frac{1}{k^\alpha}, \quad \alpha>1.
\end{align}
The \emph{Santa Fe process} with exponent $\alpha$ is a sequence
$(X_i)_{i=1}^\infty$, where
\begin{align}
X_i=(K_i,Z_{K_i})
\end{align}
are pairs of a random number $K_i$ and the corresponding probabilistic fact
$Z_{K_i}$. The Santa Fe process is strongly nonergodic since condition
(\ref{StrongKnowability}) holds for example for
\begin{align}
\label{SantaFePredictor}
g(k;x_1^n)
=
\begin{cases}
0 & \text{if for all $1\le i\le n$, $x_i=(k,z)\implies x_i=(k,0)$},
\\
1 & \text{if for all $1\le i\le n$, $x_i=(k,z)\implies x_i=(k,1)$},
\\
2 & \text{else}.
\end{cases}
\end{align}
Simply speaking, function $g(k;\cdot)$ returns $0$ or $1$ when an
unambiguous value of the second constituent can be read off from pairs
$x_i=(k,\cdot)$ and returns $2$ when there is some
ambiguity. Condition (\ref{StrongKnowability}) is satisfied since
\begin{align}
P(g(k;X_{i}^{i+n-1})=Z_k)&=P(\text{$K_i=k$ for some $1\le i\le
n$})
\nonumber\\
&=1-(1-P(K_i=k))^n\xrightarrow[n\rightarrow\infty]{} 1.
\end{align}
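
A minimal simulation of the Santa Fe process together with the predictor
$g$ of formula (\ref{SantaFePredictor}) may look as follows. The code is
only a sketch of ours: the power-law distribution of $K_i$ is truncated
at a finite kmax as an implementation shortcut, and a pseudo-random bit
sequence stands in for the facts $(Z_k)_{k=1}^\infty$.

\begin{verbatim}
import random

def santa_fe_sample(n, alpha=1.5, kmax=10**5, seed=0):
    # Sample X_i = (K_i, z_{K_i}) for i = 1..n, where P(K_i = k) is
    # proportional to k**(-alpha) on {1, ..., kmax} (the truncation is
    # an implementation shortcut) and (z_k) is a fixed random bit sequence.
    rng = random.Random(seed)
    support = range(1, kmax + 1)
    weights = [k ** (-alpha) for k in support]
    ks = rng.choices(support, weights=weights, k=n)
    facts = {k: rng.randint(0, 1) for k in sorted(set(ks))}
    return [(k, facts[k]) for k in ks], facts

def g(k, x):
    # Predictor (SantaFePredictor): 0 or 1 if the k-th fact can be read
    # off unambiguously from the text x, and 2 otherwise.
    values = {z for (j, z) in x if j == k}
    if values == {0}:
        return 0
    if values == {1}:
        return 1
    return 2

def number_of_inferrable_facts(x, facts):
    # card U(x): the largest l such that g(k, x) = z_k for all k <= l.
    l = 0
    while g(l + 1, x) == facts.get(l + 1):
        l += 1
    return l

if __name__ == "__main__":
    for n in (10**2, 10**3, 10**4):
        x, facts = santa_fe_sample(n)
        # The count below grows roughly like n**(1/alpha).
        print(n, number_of_inferrable_facts(x, facts))
\end{verbatim}
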
Some salient property of the Santa Fe process is the power law growth
of the expected number of probabilistic facts which can be inferred
from a finite text drawn from the process. Consider a strongly
nonergodic process $(X_i)_{i=1}^\infty$. The set of initial
independent probabilistic facts inferrable from a finite text $X_1^n$
will be defined as
\begin{align}
U(X_1^n):=\klam{l\in\mathbb{N}: g(k;X_1^n)=Z_k \text{ for all
$k\le l$}}.
\end{align}
In other words, we have $U(X_1^n)=\klam{1,2,...,l}$ where $l$ is the
largest number such that $g(k;X_1^n)=Z_k$ for all $k\le l$.
To capture the power-law growth of an arbitrary function
$s:\mathbb{N}\rightarrow\mathbb{R}$, we will use the Hilberg
exponent, defined as
\begin{align}
\hilberg_{n\rightarrow\infty} s(n):=\limsup_{n\rightarrow\infty}
\frac{\log^+ s(2^n)}{\log 2^n},
\end{align}
where $\log^+ x:=\log(x+1)$ for $x\ge 0$ and $\log^+ x:=0$ for $x<0$,
cf.\ \cite{Debowski15d}. In contrast to paper \cite{Debowski15d}, for
technical reasons, we define the Hilberg exponent only for an
exponentially sparse subsequence of terms $s(2^n)$ rather than all terms
$s(n)$. Moreover, in \cite{Debowski15d}, the Hilberg exponent was
considered only for mutual information
$s(n)=\mathbb{I}(X_1^n;X_{n+1}^{2n})$, defined later in equation
(\ref{PMI}). We observe that for the exact power law growth
$s(n)=n^\beta$ with $\beta\ge 0$ we have
$\hilberg_{n\rightarrow\infty} s(n)=\beta$. More generally, the
Hilberg exponent captures an asymptotic power-law growth of the
sequence. As shown in Appendix \ref{secSantaFe}, for the Santa Fe
process with exponent $\alpha$ we have the asymptotic power-law growth
\begin{align}
\label{FactsExpI}
\hilberg_{n\rightarrow\infty} \sred\card U(X_1^n)
=
1/\alpha\in(0,1)
.
\end{align}
This property distinguishes the Santa Fe process from the mixture
Bernoulli process (\ref{MixtureBernoulli}), for which the respective
Hilberg exponent is zero, as we discuss in Section
\ref{secNaturalLanguage}.
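
Since only finitely many terms of a sequence are ever available, the
Hilberg exponent can only be approximated in practice. The following
small sketch (our own convention, not taken from the literature)
evaluates $\log^+ s(2^n)/\log 2^n$ for the available $n$ and uses the
largest of the last few values as a crude stand-in for the limit
superior; for the exact power law $s(n)=n^\beta$ it returns a value
close to $\beta$, in agreement with the remark above.

\begin{verbatim}
import math

def log_plus(x):
    # log^+ x = log(x + 1) for x >= 0 and 0 otherwise.
    return math.log(x + 1) if x >= 0 else 0.0

def hilberg_exponent_estimate(s, n_max):
    # Finite-sample proxy for the Hilberg exponent of the function s:
    # evaluate log^+ s(2^n) / log 2^n for n = 1..n_max and take the
    # maximum over the last few terms as a stand-in for the limsup.
    ratios = [log_plus(s(2 ** n)) / math.log(2 ** n)
              for n in range(1, n_max + 1)]
    return max(ratios[-3:])

if __name__ == "__main__":
    beta = 0.6
    print(hilberg_exponent_estimate(lambda n: n ** beta, n_max=20))
\end{verbatim}
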
\section{Perigraphic processes}
\label{secPerigraphic}
Is it possible to demonstrate by a statistical investigation of texts
that natural language is really strongly nonergodic and satisfies a
condition similar to (\ref{FactsExpI})? In the thought experiment
described in the beginning of the previous section we have ignored the
issue of constructing an infinitely long text. In reality, every book
with a well defined topic is finite. If we want to obtain an unbounded
collection of texts, we need to assemble a corpus of different books
and it depends on our assembling criteria whether the books in the
corpus will concern some persistent random topic. Moreover, if we
already have a \emph{single} infinite sequence of books generated by
some stationary source and we estimate probabilities as relative
frequencies of blocks of symbols in this sequence then by Theorem
\ref{theoPreErgodicDecomp} we will obtain an ergodic probability
measure almost surely.
In this situation we may ask whether the idea of the power-law growth
of the number of inferrable probabilistic facts can be translated
somehow to the case of ergodic measures. Some straightforward method
to apply is to replace the sequence of independent uniformly
distributed probabilistic facts $(Z_k)_{k=1}^\infty$, being random
variables, with an algorithmically random sequence of particular
binary digits $(z_k)_{k=1}^\infty$. Such digits $z_k$ will be called
\emph{algorithmic facts} in contrast to variables $Z_k$ being called
\emph{probabilistic facts}.
Let us recall some basic concepts. For a discrete random
variable $X$, let $P(X)$ denote the random variable that takes value
$P(X=x)$ when $X$ takes value $x$. We will introduce the pointwise
entropy
\begin{align}
\label{PEntropy}
\mathbb{H}(X):=-\log P(X)
,
\end{align}
where $\log$ stands for the natural logarithm. The prefix-free
Kolmogorov complexity $K(u)$ of a string $u$ is the length of the
shortest self-delimiting program written in binary digits that prints
out string $u$ \cite[Chapter 3]{LiVitanyi08}. $K(u)$ is the founding
concept of the algorithmic information theory and is an analogue of
the pointwise entropy. To keep our notation analogical to
(\ref{PEntropy}), we will write the algorithmic entropy
\begin{align}
\mathbb{H}_a(u):=K(u)\log 2
.
\end{align}
If the probability measure is computable then the algorithmic entropy
is close to the pointwise entropy. On the one hand, by the
Shannon-Fano coding for a computable probability measure, the
algorithmic entropy is less than the pointwise entropy plus a constant
which depends on the probability measure and the dimensionality of the
distribution \cite[Corollary 4.3.1]{LiVitanyi08}. Formally,
\begin{align}
\label{ShannonFano}
\mathbb{H}_a(X_1^n)\le \mathbb{H}(X_1^n)+2\log n+C_P,
\end{align}
where $C_P\ge 0$ is a certain constant depending on the probability
measure $P$. On the other hand, since the prefix-free Kolmogorov
complexity is also the length of a prefix-free code, we have
\begin{align}
\label{SourceCoding}
\sred\mathbb{H}_a(X_1^n)\ge \sred\mathbb{H}(X_1^n)
.
\end{align}
It is also true that $\mathbb{H}_a(X_1^n)\ge \mathbb{H}(X_1^n)$ for
sufficiently large $n$ almost surely \cite[Theorem 3.1]{Barron85b}.
Thus we have shown that the algorithmic entropy is in some sense close
to the pointwise entropy, for a computable probability measure.
Next, we will discuss the difference between probabilistic and
algorithmic randomness. Whereas for an IID sequence of random
variables $(Z_k)_{k=1}^\infty$ with $P(Z_k=0)=P(Z_k=1)=1/2$ we have
\begin{align}
\mathbb{H}(Z_1^k)=k\log 2
,
\end{align}
similarly an infinite sequence of binary digits $(z_k)_{k=1}^\infty$
is called algorithmically random (in the Martin-L\"of sense) when
there exists a constant $C\ge 0$ such that
\begin{align}
\mathbb{H}_a(z_1^k)\ge k\log 2-C
\end{align}
for all $k\in\mathbb{N}$ \cite[Theorem 3.6.1]{LiVitanyi08}. The
probability that the aforementioned sequence of random variables
$(Z_k)_{k=1}^\infty$ is algorithmically random equals $1$---for example
by \cite[Theorem 3.1]{Barron85b}, so algorithmically random sequences
are typical realizations of sequence $(Z_k)_{k=1}^\infty$.
Let $(X_i)_{i=1}^\infty$ be a stationary process. We observe that
generalizing condition (\ref{StrongKnowability}) in an algorithmic
fashion does not make much sense. Namely, condition
\begin{align}
\label{StrongKnowabilityAlg}
\lim_{n\rightarrow\infty} P(g(k;X_{i}^{i+n-1})=z_k)=1
\end{align}
is trivially satisfied for any stationary process for a certain
computable function
$g:\mathbb{N}\times\mathbb{X}^*\rightarrow\klam{0,1,2}$ and an
algorithmically random sequence $(z_k)_{k=1}^\infty$. It turns out so
since there exists a computable function
$\omega:\mathbb{N}\times\mathbb{N}\rightarrow\klam{0,1}$ such that
$\lim_{n\rightarrow\infty}\omega(k;n)=\Omega_k$, where
$(\Omega_k)_{k=1}^\infty$ is the binary expansion of the halting
probability $\Omega=\sum_{k=1}^\infty 2^{-k}\Omega_k$, which is a
lower semi-computable algorithmically random sequence \cite[Section
3.6.2]{LiVitanyi08}.
In spite of this negative result, the power-law growth of the number
of inferrable algorithmic facts corresponds to some nontrivial
property. For a computable function
$g:\mathbb{N}\times\mathbb{X}^*\rightarrow\klam{0,1,2}$ and an
algorithmically random sequence of binary digits $(z_k)_{k=1}^\infty$,
which we will call \emph{algorithmic facts}, the set of initial
algorithmic facts inferrable from a finite text $X_1^n$ will be
defined as
\begin{align}
U_a(X_1^n):=\klam{l\in\mathbb{N}: g(k;X_1^n)=z_k \text{ for all
$k\le l$}}
.
\end{align}
Subsequently, we will call a process perigraphic if the expected
number of algorithmic facts which can be inferred from a finite text
sampled from the process grows asymptotically like a power of the text
length.
\begin{definition}
A stationary discrete process $(X_i)_{i=1}^\infty$ is called
\emph{perigraphic} if
\begin{align}
\label{Perigraphic}
\hilberg_{n\rightarrow\infty} \sred\card U_a(X_1^n) >0
\end{align}
for some computable function
$g:\mathbb{N}\times\mathbb{X}^*\rightarrow\klam{0,1,2}$ and an
algorithmically random sequence of binary digits
$(z_k)_{k=1}^\infty$.
\end{definition}
Perigraphic processes can be ergodic. The proof of Theorem
\ref{theoFacts} from Appendix \ref{secSantaFe} can be easily adapted
to show that some example of a perigraphic process is the Santa Fe
process with sequence $(Z_k)_{k=1}^\infty$ replaced by an
algorithmically random sequence of binary digits
$(z_k)_{k=1}^\infty$. This process is IID and hence ergodic.
We can also easily show the following proposition.
\begin{theorem}
Any perigraphic process $(X_i)_{i=1}^\infty$ has an uncomputable
measure $P$.
\end{theorem}
\begin{proof}
Assume that a perigraphic process $(X_i)_{i=1}^\infty$ has a
computable measure $P$. By the proof of Theorem
\ref{theoFactsMIAlg} from Appendix \ref{secFactsMI}, we have
\begin{align}
\hilberg_{n\rightarrow\infty} \sred \card U_a(X_1^n)
&\le
\hilberg_{n\rightarrow\infty}\sred
\kwad{\mathbb{H}_a(X_1^n)-\mathbb{H}(X_1^n)}
.
\end{align}
Since for a computable measure $P$ we have inequality
(\ref{ShannonFano}) then
\begin{align}
\hilberg_{n\rightarrow\infty} \sred \card U_a(X_1^n)=0.
\end{align}
Since we have obtained a contradiction with the assumption that the
process is perigraphic, measure $P$ cannot be computable.
\end{proof}
\section{Theorem about facts and words}
\label{secFactsWords}
In this section, we will present a result about stationary processes,
which we call the theorem about facts and words. That proposition
states that the expected number of independent probabilistic or
algorithmic facts inferrable from the text drawn from a stationary
process must be roughly less than the expected number of distinct
word-like strings detectable in the text by a simple procedure
involving the PPM compression algorithm. This result states, in
particular, that an asymptotic power law growth of the number of
inferrable probabilistic or algorithmic facts as a function of the
text length produces a statistically measurable effect, namely, an
asymptotic power law growth of the number of word-like strings.
To state the theorem about facts and words formally, we need first to
discuss the PPM code. Let us denote strings of symbols
$x_j^k:=(x_j,...,x_k)$, adopting an important convention that $x_j^k$
is the empty string for $k<j$. In the following, we consider strings
over a finite alphabet, say, $x_i\in\mathbb{X}=\klam{1,...,D}$. We
define the frequency of a substring $w_1^k$ in a string $x_1^n$ as
\begin{align}
N(w_1^k|x_1^n):=\sum_{i=1}^{n-k+1}\boole{x_i^{i+k-1}=w_1^k}.
\end{align}
Now we may define the Prediction by Partial Matching (PPM)
probabilities.
\begin{definition}[cf. \cite{ClearyWitten84}]
For $x_1^n\in\mathbb{X}^n$ and $k\in\klam{-1,0,1,...}$, we put
\begin{align}
\PPM_k(x_i|x_1^{i-1})&:=
\begin{cases}
\displaystyle\frac{1}{D}, & i\le k,
\\
\displaystyle\frac{N(x_{i-k}^i|x_1^{i-1})+1}{N(x_{i-k}^{i-1}|x_1^{i-2})+D},
& i> k.
\end{cases}
\end{align}
Quantity $\PPM_k(x_i|x_1^{i-1})$ is called the \emph{conditional PPM
probability} of order $k$ of symbol $x_i$ given string
$x_1^{i-1}$. Next, we put
\begin{align}
\PPM_k(x_1^n)&:=\prod_{i=1}^n\PPM_k(x_i|x_1^{i-1}).
\end{align}
Quantity $\PPM_k(x_1^n)$ is called the \emph{PPM probability} of
order $k$ of string $x_1^n$. Finally, we put
\begin{align}
\label{PPM}
\PPM(x_1^n)&:=\frac{6}{\pi^2}\sum_{k=-1}^\infty
\frac{\PPM_k(x_1^n)}{(k+2)^2}.
\end{align}
Quantity $\PPM(x_1^n)$ is called the (total) \emph{PPM probability} of the
string $x_1^n$.
\end{definition}
Quantity $\PPM_k(x_1^n)$ is an incremental approximation of the
unknown true probability of the string $x_1^n$, assuming that the
string has been generated by a Markov process of order $k$. In
contrast, quantity $\PPM(x_1^n)$ is a mixture of such Markov
approximations for all finite orders. In general, the PPM
probabilities are probability distributions over strings of a fixed
length. That is:
\begin{itemize}
\item $\PPM_k(x_i|x_1^{i-1})> 0$ and
$\sum_{x_i\in\mathbb{X}}\PPM_k(x_i|x_1^{i-1})=1$,
\item $\PPM_k(x_1^n)> 0$ and $\sum_{x_1^n\in\mathbb{X}^n}\PPM_k(x_1^n)=1$,
\item $\PPM(x_1^n)> 0$ and $\sum_{x_1^n\in\mathbb{X}^n}\PPM(x_1^n)=1$.
\end{itemize}
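
For concreteness, the conditional and fixed-order PPM probabilities can
be transcribed into Python directly from the displayed formulas. The
sketch below is unoptimized and restricted to orders $k\ge 0$; the order
$k=-1$ corresponds to the uniform probability $D^{-n}$, as used in the
proofs below, and the treatment of short contexts follows our reading of
the string conventions stated above.

\begin{verbatim}
def count(w, x):
    # N(w | x): number of (possibly overlapping) occurrences of w in x.
    k, n = len(w), len(x)
    return sum(1 for i in range(n - k + 1) if x[i:i + k] == w)

def ppm_cond(k, x, i, D):
    # Conditional PPM probability of order k >= 0 of the i-th symbol
    # (1-based i, as in the definition) given the preceding symbols;
    # D is the alphabet size.
    if i <= k:
        return 1.0 / D
    num = count(x[i - k - 1:i], x[:i - 1]) + 1
    den = count(x[i - k - 1:i - 1], x[:max(i - 2, 0)]) + D
    return num / den

def ppm_order_k(k, x, D):
    # PPM probability of order k of the whole string x (a product of
    # conditional probabilities; suitable only for short strings, since
    # the value underflows in floating point for long inputs).
    p = 1.0
    for i in range(1, len(x) + 1):
        p *= ppm_cond(k, x, i, D)
    return p
\end{verbatim}
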
In the following, we define an analogue of the pointwise entropy
\begin{align}
\mathbb{H}_{\PPM}(x_1^n)&:=-\log\PPM(x_1^n).
\end{align}
Quantity $\mathbb{H}_{\PPM}(x_1^n)$ will be called the length of the
PPM code for the string $x_1^n$. By nonnegativity of the
Kullback-Leibler divergence, we have for any random block $X_1^n$ that
\begin{align}
\label{SourceCodingPPM}
\sred\mathbb{H}_{\PPM}(X_1^n)&\ge \sred\mathbb{H}(X_1^n)
.
\end{align}
The length of the PPM code and the PPM probability have
two notable properties. First, the PPM probability is a universal
probability, i.e., in the limit, the length of the PPM code
consistently estimates the entropy rate of a stationary source.
Second, the PPM probability can be effectively computed, i.e., the
summation in definition (\ref{PPM}) can be rewritten as a finite
sum. Let us state these two results formally.
\begin{theorem}[cf. \cite{Ryabko10}]
\label{theoPPMUniversal}
The PPM probability is universal in expectation, i.e., we have
\begin{align}
\label{PPMUniversal}
\lim_{n\rightarrow\infty}\frac{1}{n}
\sred\mathbb{H}_{\PPM}(X_1^n)
&=
\lim_{n\rightarrow\infty}\frac{1}{n}
\sred\mathbb{H}(X_1^n)
\end{align}
for any stationary process $(X_i)_{i=1}^\infty$.
\end{theorem}
\begin{proof}
For stationary ergodic processes the above claim follows by an
iterated application of the ergodic theorem as shown in Theorem 1.1
from \cite{Ryabko10} for so called measure $R$, which is a slight
modification of the PPM probability. To generalize the claim for
nonergodic processes, one can use the ergodic decomposition theorem
but the exact proof requires too much theoretical overhead to be
presented within the framework of this paper.
\end{proof}
\begin{theorem}
\label{theoPPMFinite}
The PPM probability can be effectively computed, i.e., we have
\begin{align}
\label{PPMFinite}
\PPM(x_1^n)=\frac{6}{\pi^2}\sum_{k=0}^{L(x_1^n)}
\frac{\PPM_k(x_1^n)}{(k+2)^2}+
\okra{1-\frac{6}{\pi^2}\sum_{k=0}^{L(x_1^n)}\frac{1}{(k+2)^2}}D^{-n},
\end{align}
where
\begin{align}
L(x_1^n)=\max \klam{k: \text{$N(w_1^k|x_1^n)> 1$ for some
$w_1^k$}}
\end{align}
is the maximal repetition of string $x_1^n$.
\end{theorem}
\begin{proof}
We have $N(x_{i-k}^{i-1}|x_1^{i-2})=0$ for $k>L(x_1^i)$. Hence
$\PPM_k(x_1^n)=D^{-n}$ for $k>L(x_1^n)$ and in view of this we
obtain the claim.
\end{proof}
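
Continuing the sketch started after the definition of the PPM
probability, the maximal repetition and formula (\ref{PPMFinite}) give a
terminating computation of the total PPM probability. The helper
ppm_order_k is the one from the previous listing; as noted there,
working with raw probabilities is adequate only for short strings, and a
serious implementation would operate on logarithms.

\begin{verbatim}
import math

def max_repetition(x):
    # L(x): the largest k such that some string of length k occurs more
    # than once in x (0 if no nonempty substring repeats).
    n = len(x)
    for k in range(n - 1, 0, -1):
        seen = set()
        for i in range(n - k + 1):
            w = tuple(x[i:i + k])
            if w in seen:
                return k
            seen.add(w)
    return 0

def ppm_total(x, D):
    # Total PPM probability via the finite sum of formula (PPMFinite).
    L = max_repetition(x)
    c = 6.0 / math.pi ** 2
    head = c * sum(ppm_order_k(k, x, D) / (k + 2) ** 2
                   for k in range(0, L + 1))
    tail_weight = 1.0 - c * sum(1.0 / (k + 2) ** 2
                                for k in range(0, L + 1))
    return head + tail_weight * D ** (-len(x))
\end{verbatim}
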
Maximal repetition as a function of a string was studied, e.g., in
\cite{DeLuca99,Debowski15f}. Since the PPM probability is a
computable probability distribution, inequality (\ref{ShannonFano}) yields,
for a certain constant $C_{\PPM}$,
\begin{align}
\label{ShannonFanoPPM}
\mathbb{H}_a(X_1^n)\le \mathbb{H}_{\PPM}(X_1^n)+2\log n+C_{\PPM}.
\end{align}
Let us denote the length of the PPM code of order $k$,
\begin{align}
\mathbb{H}_{\PPM_k}(x_1^n)&:=-\log\PPM_k(x_1^n).
\end{align}
As we can easily see, the code length $\mathbb{H}_{\PPM}(x_1^n)$ is
approximately equal to the minimal code length
$\mathbb{H}_{\PPM_k}(x_1^n)$ where the minimization goes over
$k\in\klam{-1,0,1,...}$. Thus it is meaningful to consider the following
definition of the PPM order of an arbitrary string.
\begin{definition}
The \emph{PPM order} $G_{\PPM}(x_1^n)$ is the smallest $G$ such that
\begin{align}
\mathbb{H}_{\PPM_G}(x_1^n)\le \mathbb{H}_{\PPM_k}(x_1^n) \text{ for all $k\ge -1$}.
\end{align}
\end{definition}
\begin{theorem}
We have $G_{\PPM}(x_1^n)\le L(x_1^n)$.
\end{theorem}
\begin{proof}
Follows by $\PPM_k(x_1^n)=D^{-n}=\PPM_{-1}(x_1^n)$ for $k>L(x_1^n)$.
\end{proof}
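
In the same illustrative spirit, the PPM order can be found by comparing
code lengths over the finitely many candidate orders $k\le L(x_1^n)$,
which suffices by the theorem above; ppm_cond and max_repetition are the
helpers from the previous listings.

\begin{verbatim}
import math

def ppm_code_length_k(k, x, D):
    # H_{PPM_k}(x) = -log PPM_k(x), accumulated as a sum of logarithms
    # to avoid floating-point underflow on longer strings.
    return -sum(math.log(ppm_cond(k, x, i, D))
                for i in range(1, len(x) + 1))

def ppm_order(x, D):
    # G_PPM(x): the smallest order minimizing the code length, searched
    # over k in {-1, 0, ..., L(x)}; the order -1 corresponds to the
    # uniform measure with code length n log D.
    best_k, best_len = -1, len(x) * math.log(D)
    for k in range(0, max_repetition(x) + 1):
        code_len = ppm_code_length_k(k, x, D)
        if code_len < best_len:
            best_k, best_len = k, code_len
    return best_k
\end{verbatim}
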
Let us digress for a short while from the PPM code definition. The set
of distinct substrings of length $m$ in string $x_1^n$ is
\begin{align}
V(m|x_1^n):=\klam{y_1^m:x_{t+1}^{t+m}=y_1^m
\text{ for some $0\le t\le n-m$}}.
\end{align}
The cardinality of set $V(m|x_1^n)$ as a function of substring length
$m$ is called the subword complexity of string $x_1^n$
\cite{DeLuca99}. Now let us apply the concept of the PPM order to
define some special set of substrings of an arbitrary string $x_1^n$.
The set of distinct PPM words detected in $x_1^n$ will be defined as
the set $V(m|x_1^n)$ for $m=G_{\PPM}(x_1^n)$, i.e.,
\begin{align}
\label{PPMVocab}
V_{\PPM}(x_1^n):=V(G_{\PPM}(x_1^n)|x_1^n).
\end{align}
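
The set of PPM words then combines the subword sets $V(m|x_1^n)$ with
the PPM order, for instance as in the following sketch; mapping a PPM
order of $-1$ to an empty vocabulary is merely a convention of this
sketch.

\begin{verbatim}
def substrings_of_length(m, x):
    # V(m | x): the set of distinct substrings of x of length m.
    return {tuple(x[t:t + m]) for t in range(len(x) - m + 1)}

def ppm_vocabulary(x, D):
    # V_PPM(x) = V(G_PPM(x) | x); a PPM order of -1 is mapped to an
    # empty vocabulary, which is a convention of this sketch only.
    m = ppm_order(x, D)
    return substrings_of_length(m, x) if m >= 0 else set()
\end{verbatim}
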
Let us define the pointwise mutual information
\begin{align}
\label{PMI}
\mathbb{I}(X;Y):=\mathbb{H}(X)+\mathbb{H}(Y)-\mathbb{H}(X,Y)
\end{align}
and the algorithmic mutual information
\begin{align}
\mathbb{I}_a(u;v):=\mathbb{H}_a(u)+\mathbb{H}_a(v)-\mathbb{H}_a(u,v).
\end{align}
Now we may write down the theorem about facts and words. The theorem
states that the Hilberg exponent for the expected number of initial
independent inferrable facts is at most the Hilberg exponent for the
expected mutual information, and this in turn is at most the Hilberg exponent
for the expected number of distinct detected PPM words plus the PPM
order. (The PPM order is usually much less than the number of distinct
PPM words.)
\begin{theorem}[facts and words I, cf.\ \cite{Debowski11b}]
\label{theoFactsWords}
Let $(X_i)_{i=1}^\infty$ be a stationary strongly nonergodic process
over a finite alphabet. We have inequalities
\begin{align}
\hilberg_{n\rightarrow\infty} \sred \card U(X_1^n) &\le
\hilberg_{n\rightarrow\infty}\sred \mathbb{I}(X_1^n;X_{n+1}^{2n})
\nonumber\\
&\le \hilberg_{n\rightarrow\infty} \sred
\kwad{G_{\PPM}(X_1^n)+\card V_{\PPM}(X_1^n)} .
\label{FactsWords}
\end{align}
\end{theorem}
\begin{proof}
The claim follows by conjunction of Theorem \ref{theoFactsMI} from
Appendix \ref{secFactsMI} and Theorem \ref{theoMIWords} from
Appendix \ref{secMIWords}.
\end{proof}
Theorem \ref{theoFactsWords} has also an algorithmic version, for
ergodic processes in particular.
\begin{theorem}[facts and words II]
\label{theoFactsWordsAlg}
Let $(X_i)_{i=1}^\infty$ be a stationary process over a finite
alphabet. We have inequalities
\begin{align}
\hilberg_{n\rightarrow\infty} \sred \card U_a(X_1^n) &\le
\hilberg_{n\rightarrow\infty}\sred \mathbb{I}_a(X_1^n;X_{n+1}^{2n})
\nonumber\\
&\le \hilberg_{n\rightarrow\infty} \sred
\kwad{G_{\PPM}(X_1^n)+\card V_{\PPM}(X_1^n)} .
\label{FactsWordsAlg}
\end{align}
\end{theorem}
\begin{proof}
The claim follows by conjunction of Theorem \ref{theoFactsMIAlg} from
Appendix \ref{secFactsMI} and Theorem \ref{theoMIWords} from
Appendix \ref{secMIWords}.
\end{proof}
The theorem about facts and words previously proven in
\cite{Debowski11b} differs from Theorem \ref{theoFactsWords} in three
aspects. First of all, the theorem in \cite{Debowski11b} did not apply
the concept of the Hilberg exponent and compared
$\liminf_{n\rightarrow\infty}$ with $\limsup_{n\rightarrow\infty}$
rather than $\limsup_{n\rightarrow\infty}$ with
$\limsup_{n\rightarrow\infty}$. Second, the number of inferrable facts
was defined as a functional of the process distribution rather than a
random variable depending on a particular text. Third, the number of
words was defined using a minimal grammar-based code rather than the
concept of the PPM order. Minimal grammar-based codes are not
computable in a polynomial time in contrast to the PPM order. Thus we
may claim that Theorem \ref{theoFactsWords} is stronger than the
theorem about facts and words previously proven in
\cite{Debowski11b}. Moreover, applying Kolmogorov complexity and
algorithmic randomness to formulate and prove Theorem
\ref{theoFactsWordsAlg} is a new idea.
It is an interesting question whether we have an almost sure version
of Theorems \ref{theoFactsWords} and \ref{theoFactsWordsAlg}, namely,
whether
\begin{align}
\hilberg_{n\rightarrow\infty} \card U(X_1^n)
&\le
\hilberg_{n\rightarrow\infty} \mathbb{I}(X_1^n;X_{n+1}^{2n})
\nonumber\\
&\le \hilberg_{n\rightarrow\infty} \kwad{G_{\PPM}(X_1^n)+\card
V_{\PPM}(X_1^n)} \text{ almost surely}
\label{FactsWordsAS}
\end{align}
for strongly nonergodic processes, or
\begin{align}
\hilberg_{n\rightarrow\infty} \card U_a(X_1^n)
&\le
\hilberg_{n\rightarrow\infty} \mathbb{I}_a(X_1^n;X_{n+1}^{2n})
\nonumber\\
&\le \hilberg_{n\rightarrow\infty} \kwad{G_{\PPM}(X_1^n)+\card
V_{\PPM}(X_1^n)} \text{ almost surely}
\label{FactsWordsASAlg}
\end{align}
for general stationary processes. We leave this question as an open
problem.
\section{Hilberg exponents and empirical data}
\label{secNaturalLanguage}
It is advisable to show that the Hilberg exponents considered in
Theorem \ref{theoFactsWords} can assume any value in range $[0,1]$ and
the difference between them can be arbitrarily large. We adopt a
convention that the set of inferrable probabilistic facts is empty for
ergodic processes, $U(X_1^n)=\emptyset$. With this remark in mind, let
us inspect some examples of processes.
First of all, for Markov processes and their strongly nonergodic
mixtures, of any order $k$ but over a finite alphabet, we have
\begin{align}
\hilberg_{n\rightarrow\infty} \sred\card U(X_1^n)
=
\hilberg_{n\rightarrow\infty} \sred \mathbb{I}(X_1^n;X_{n+1}^{2n})
=
0
.
\end{align}
This happens to be so since the sufficient statistic of text $X_1^n$
for predicting text $X_{n+1}^{2n}$ is the maximum likelihood estimate
of the transition matrix, the elements of which can assume at most
$(n+1)$ distinct values. Hence
$\sred \mathbb{I}(X_1^n;X_{n+1}^{2n})\le D^{k+1}\log (n+1)$, where $D$ is
the cardinality of the alphabet and $k$ is the Markov order of the
process. Similarly, it can be shown for these processes that the PPM
order satisfies $\lim_{n\rightarrow\infty}G_{\PPM}(X_1^n)\le k$. Hence
the number of PPM words, which satisfies inequality
$\card V_{\PPM}(X_1^n)\le D^{G_{\PPM}(X_1^n)}$, is also bounded
above. In consequence, for Markov processes and their strongly
nonergodic mixtures, of any order but over a finite alphabet, we
obtain
\begin{align}
\label{FiniteMarkovHilberg}
\hilberg_{n\rightarrow\infty} \kwad{G_{\PPM}(X_1^n)+\card V_{\PPM}(X_1^n)}
=0 \text{ almost surely}.
\end{align}
In contrast, Santa Fe processes are strongly nonergodic mixtures of
some IID processes over an infinite alphabet. Being mixtures of IID
processes over an infinite alphabet, they need not satisfy condition
(\ref{FiniteMarkovHilberg}). In fact, as shown in
\cite{Debowski11b,Debowski12} and Appendix \ref{secSantaFe}, for the
Santa Fe process with exponent $\alpha$ we have the asymptotic
power-law growth
\begin{align}
\hilberg_{n\rightarrow\infty} \sred\card U(X_1^n)
=
\hilberg_{n\rightarrow\infty} \sred \mathbb{I}(X_1^n;X_{n+1}^{2n})
=
1/\alpha\in(0,1)
.
\end{align}
The same equality for the number of inferrable probabilistic facts and
the mutual information is also satisfied by a stationary coding of the
Santa Fe process into a finite alphabet, see \cite{Debowski12}.
Let us also note that, whereas the theorem about facts and words
provides an inequality of Hilberg exponents, this inequality can be
strict. To provide some substance, in \cite{Debowski12}, we have
constructed a modification of the Santa Fe process which is ergodic
and over a finite alphabet. For this modification, we have only the
power-law growth of mutual information
\begin{align}
\hilberg_{n\rightarrow\infty}
\sred \mathbb{I}(X_1^n;X_{n+1}^{2n}) &= 1/\alpha\in(0,1).
\end{align}
Since in this case,
$\hilberg_{n\rightarrow\infty} \sred\card U(X_1^n)=0$ then the
difference between the Hilberg exponents for the number of inferrable
probabilistic facts and the number of PPM words can be an arbitrary
number in range $(0,1)$.
Now we are in a position to discuss some empirical data. In this
case, we cannot directly measure the number of facts and the mutual
information but we can compute the PPM order and count the number of
PPM words. In Figure~\ref{figPPMVocabulary}, we have presented data
for a collection of 35 plays by William
Shakespeare\footnote{Downloaded from the Project Gutenberg,
\url{https://www.gutenberg.org/}.} and a random permutation of
characters appearing in this collection of texts. The random
permutation of characters is an IID process over a finite alphabet so
in this case we obtain
\begin{align}
\label{WordsHilbergZero}
\hilberg_{n\rightarrow\infty} \card V_{\PPM}(x_1^n)=0
.
\end{align}
In contrast, for the plays of Shakespeare we seem to have a stepwise
power law growth of the number of distinct PPM words. Thus we may
suppose that for natural language we have more generally
\begin{align}
\label{WordsHilbergNL}
\hilberg_{n\rightarrow\infty} \card V_{\PPM}(x_1^n)>0
.
\end{align}
If relationship (\ref{WordsHilbergNL}) holds true then natural
language cannot be a Markov process of any order. Moreover, in view
of the striking difference between observations
(\ref{WordsHilbergZero}) and (\ref{WordsHilbergNL}), we may suppose
that the number of inferrable probabilistic or algorithmic facts for
texts in natural language also obeys a power-law growth. Formally
speaking, this condition would translate to natural language being
strongly nonergodic or perigraphic. We note that this hypothesis
arises only as a form of weak inductive inference, since formally we
cannot deduce condition (\ref{Perigraphic}) from mere condition
(\ref{WordsHilbergNL}), regardless of the amount of data supporting
condition (\ref{WordsHilbergNL}).
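
In principle, the curves of Figure~\ref{figPPMVocabulary} can be
reproduced with the toy functions sketched in the previous sections, at
least for short prefixes; the quadratic-time counting used there makes
longer inputs impractical, and the experiments reported in the figure
rely on an efficient PPM implementation. The file name below is a
placeholder for any plain-text corpus.

\begin{verbatim}
import random

def vocabulary_growth(text, lengths, D):
    # PPM order and number of PPM words for increasing prefixes of text.
    rows = []
    for n in lengths:
        prefix = text[:n]
        m = ppm_order(prefix, D)
        vocab = substrings_of_length(m, prefix) if m >= 0 else set()
        rows.append((n, m, len(vocab)))
    return rows

if __name__ == "__main__":
    # "corpus.txt" is a placeholder; any plain-text file will do.
    text = open("corpus.txt", encoding="utf-8").read()
    D = len(set(text))
    shuffled = "".join(random.sample(text, len(text)))
    lengths = [2 ** n for n in range(6, 11)]
    print("original:", vocabulary_growth(text, lengths, D))
    print("shuffled:", vocabulary_growth(shuffled, lengths, D))
\end{verbatim}
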
\begin{figure}[p]
\centering
\includegraphics[width=0.9\textwidth]{strongly_nonergodic_figure_1.pdf}
\centering
\includegraphics[width=0.9\textwidth]{strongly_nonergodic_figure_2.pdf}
\caption{\label{figPPMVocabulary} The PPM order $G_{\PPM}(x_1^n)$
and the cardinality of the PPM vocabulary $\card V_{\PPM}(x_1^n)$
versus the input length $n$ for William Shakespeare's First
Folio/35 Plays and a random permutation of the
text's characters.}
\end{figure}
\section{Conclusion}
\label{secConclusion}
In this article, a stationary process has been called strongly
nonergodic if some persistent random topic can be detected in the
process and an infinite number of independent binary random variables,
called probabilistic facts, is needed to describe this topic
completely. Replacing probabilistic facts with an algorithmically
random sequence of bits, called algorithmic facts, we have adapted
this property back to ergodic processes. Subsequently, we have called
a process perigraphic if the number of algorithmic facts which can be
inferred from a finite text sampled from the process grows like a
power of the text length.
We have demonstrated an assertion, which we call the theorem about
facts and words. This proposition states that the number of
independent probabilistic or algorithmic facts which can be inferred
from a text drawn from a process must be roughly smaller than the
number of distinct word-like strings detected in this text by means of
the PPM compression algorithm. We have exhibited two versions of this
theorem: one for strongly nonergodic processes, applying the Shannon
information theory, and one for ergodic processes, applying the
algorithmic information theory.
Subsequently, we have exhibited an empirical observation that the
number of distinct word-like strings grows like a stepwise power law
for a collection of plays by William Shakespeare, in stark contrast
to Markov processes. This observation does not rule out that the
number of probabilistic or algorithmic facts inferrable from texts in
natural language also grows like a power law. Hence we have supposed
that natural language is a perigraphic process.
We suppose that the path of future related research should lead
through a further analysis of the theorem about facts and words and
towards demonstrating an almost sure version of this statement.
\section*{Acknowledgment}
We wish to thank Jacek Koronacki, Jan Mielniczuk, and Vladimir Vovk
for helpful comments.
\appendix
\section{Facts and mutual information}
\label{secFactsMI}
In the appendices, we will make use of several kinds of information measures.
\begin{enumerate}
\item First, there are four pointwise Shannon information measures:
\begin{itemize}
\item entropy \\ $\mathbb{H}(X)=-\log P(X)$,
\item conditional entropy \\ $\mathbb{H}(X|Z):=-\log P(X|Z)$,
\item mutual information \\
$\mathbb{I}(X;Y):=\mathbb{H}(X)+\mathbb{H}(Y)-\mathbb{H}(X,Y)$,
\item conditional mutual information \\
$\mathbb{I}(X;Y|Z):=\mathbb{H}(X|Z)+\mathbb{H}(Y|Z)-\mathbb{H}(X,Y|Z)$,
\end{itemize}
where $P(X)$ is the probability of a random variable $X$ and $P(X|Z)$
is the conditional probability of a random variable $X$ given a random
variable $Z$. The above definitions make sense for discrete-valued
random variables $X$ and $Y$ and an arbitrary random variable $Z$. If
$Z$ is a discrete-valued random variable then also
$\mathbb{H}(X,Z)-\mathbb{H}(Z)=\mathbb{H}(X|Z)$ and
$\mathbb{I}(X;Z)=\mathbb{H}(X)-\mathbb{H}(X|Z)$.
\item
Moreover, we will use four algorithmic information measures:
\begin{itemize}
\item entropy \\ $\mathbb{H}_a(x)=K(x)\log 2$,
\item conditional entropy \\
$\mathbb{H}_a(x|z):=K(x|z)\log 2$,
\item mutual information \\
$\mathbb{I}_a(x;y):=\mathbb{H}_a(x)+\mathbb{H}_a(y)-\mathbb{H}_a(x,y)$,
\item conditional mutual information \\
$\mathbb{I}_a(x;y|z):=\mathbb{H}_a(x|z)+\mathbb{H}_a(y|z)-\mathbb{H}_a(x,y|z)$,
\end{itemize}
where $K(x)$ is the prefix-free Kolmogorov complexity of an object $x$
and $K(x|z)$ is the prefix-free Kolmogorov complexity of an object $x$
given an object $z$. In the above definitions, $x$ and $y$ must be
finite objects (finite texts), whereas $z$ can be also an infinite
object (an infinite sequence). If $z$ is a finite object then
$\mathbb{H}_a(x,z)-\mathbb{H}_a(z)\peq \mathbb{H}_a(x|z,K(z))$ rather
than being equal to $\mathbb{H}_a(x|z)$, where $\peq$, $\ple$, and
$\pge$ are the equality and the inequalities up to an additive
constant \cite[Theorem 3.9.1]{LiVitanyi08}. Hence
\begin{align}
\mathbb{H}_a(x)-\mathbb{H}_a(x|z)+\mathbb{H}_a(K(z))
&\pge
\mathbb{I}_a(x;z)\peq \mathbb{H}_a(x)-\mathbb{H}_a(x|z,K(z))
\nonumber\\
&\pge
\mathbb{H}_a(x)-\mathbb{H}_a(x|z)
.
\end{align}
\end{enumerate}
In the following, we will prove a result for Hilberg exponents.
\begin{theorem}
\label{theoHilbergRedundancy}
Define $\mathfrak{J}(n):=2\mathfrak{G}(n)-\mathfrak{G}(2n)$. If the limit
$\lim_{n\rightarrow\infty} \mathfrak{G}(n)/n=\mathfrak{g}$ exists
and is finite then
\begin{align}
\label{HilbergRedundancyI}
\hilberg_{n\rightarrow\infty} \kwad{\mathfrak{G}(n)-n\mathfrak{g}}
\le
\hilberg_{n\rightarrow\infty} \mathfrak{J}(n)
,
\end{align}
with equality if $\mathfrak{J}(2^n)\pge 0$ for all but finitely many $n$.
\end{theorem}
\begin{proof}
The proof makes use of the telescope sum
\begin{align}
\label{ESeriesRedundancy}
\sum_{k=0}^\infty \frac{\mathfrak{J}(2^{k+n})}{2^{k+1}}
=
\mathfrak{G}(2^n)-2^n\mathfrak{g}
.
\end{align}
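Indeed, identity (\ref{ESeriesRedundancy}) is obtained by telescoping: since
$\mathfrak{J}(2^{k+n})/2^{k+1}=\mathfrak{G}(2^{k+n})/2^k-\mathfrak{G}(2^{k+n+1})/2^{k+1}$,
for every $K\ge 0$ we have
\begin{align*}
  \sum_{k=0}^{K} \frac{\mathfrak{J}(2^{k+n})}{2^{k+1}}
  =
  \mathfrak{G}(2^n)-2^n\cdot\frac{\mathfrak{G}(2^{n+K+1})}{2^{n+K+1}}
  ,
\end{align*}
and the last fraction tends to $\mathfrak{g}$ as $K\to\infty$ by the assumed existence of the limit $\lim_{n\rightarrow\infty}\mathfrak{G}(n)/n=\mathfrak{g}$.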
Denote $\delta:=\hilberg_{n\rightarrow\infty}
\mathfrak{J}(n)$. Since $\hilberg_{n\rightarrow\infty}
\okra{\mathfrak{G}(n)-n\mathfrak{g}}\le 1$, it is sufficient to
prove inequality (\ref{HilbergRedundancyI}) for $\delta<1$. In this
case, $\mathfrak{J}(2^n)\le 2^{(\delta+\epsilon)n}$ for all but finitely
many $n$ for any $\epsilon>0$. Then for $\epsilon< 1-\delta$, by the
telescope sum (\ref{ESeriesRedundancy}) we obtain for sufficiently
large $n$ that
\begin{align}
\mathfrak{G}(2^n)-2^n\mathfrak{g}\le \sum_{k=0}^\infty
\frac{2^{(\delta+\epsilon)(k+n)}}{2^{k+1}}
\le
2^{(\delta+\epsilon)n}\sum_{k=0}^\infty2^{(\delta+\epsilon-1)k-1}
=\frac{2^{(\delta+\epsilon)n}}{2(1-2^{\delta+\epsilon-1})}
.
\end{align}
Since $\epsilon$ can be taken arbitrarily small, we obtain
(\ref{HilbergRedundancyI}).
Now assume that $\mathfrak{J}(2^n)\pge 0$ for all but finitely many $n$.
By the telescope sum (\ref{ESeriesRedundancy}), we have
$\mathfrak{J}(2^n)/2\ple\mathfrak{G}(2^n)-2^n\mathfrak{g}$ for sufficiently
large $n$. Hence
\begin{align}
\delta\le \hilberg_{n\rightarrow\infty}
\okra{\mathfrak{G}(n)-n\mathfrak{g}}
\end{align}
Combining this with (\ref{HilbergRedundancyI}), we obtain
$\hilberg_{n\rightarrow\infty}
\okra{\mathfrak{G}(n)-n\mathfrak{g}}=\delta$.
\end{proof}
For any stationary process $(X_i)_{i=1}^\infty$ over a finite alphabet
there exists a limit
\begin{align}
\label{EntropyRate}
h&:=\lim_{n\rightarrow\infty}\frac{\sred\mathbb{H}(X_1^n)}{n}=\sred\mathbb{H}(X_1|X_2^\infty)
,
\end{align}
called the entropy rate of process $(X_i)_{i=1}^\infty$
\cite{CoverThomas06}. By (\ref{SourceCoding}),
(\ref{PPMUniversal}), and (\ref{ShannonFanoPPM}), we also have
\begin{align}
h&=\lim_{n\rightarrow\infty}\frac{\sred\mathbb{H}_a(X_1^n)}{n}
.
\end{align}
Moreover, for a stationary process, the mutual
information satisfies
\begin{align}
\sred\mathbb{I}(X_1^n;X_{n+1}^{2n})&=2\sred\mathbb{H}(X_1^n)-\sred\mathbb{H}(X_1^{2n})\ge 0,
\\
\sred\mathbb{I}_a(X_1^n;X_{n+1}^{2n})&=2\sred\mathbb{H}_a(X_1^n)-\sred\mathbb{H}_a(X_1^{2n})\pge 0.
\end{align}
Hence by Theorem \ref{theoHilbergRedundancy}, we obtain
\begin{align}
\label{RedundancyMI}
\hilberg_{n\rightarrow\infty} \kwad{\sred\mathbb{H}(X_1^n)-hn}
&= \hilberg_{n\rightarrow\infty} \sred\mathbb{I}(X_1^n;X_{n+1}^{2n}),
\\
\label{RedundancyMIAlg}
\hilberg_{n\rightarrow\infty} \kwad{\sred\mathbb{H}_a(X_1^n)-hn}
&= \hilberg_{n\rightarrow\infty} \sred\mathbb{I}_a(X_1^n;X_{n+1}^{2n}).
\end{align}
Subsequently, we will prove the initial parts of Theorems
\ref{theoFactsWords} and \ref{theoFactsWordsAlg}, i.e., the two
versions of the theorem about facts and words. The probabilistic
statement for strongly nonergodic processes goes first.
\begin{theorem}[facts and mutual information I]
\label{theoFactsMI}
Let $(X_i)_{i=1}^\infty$ be a stationary strongly nonergodic process
over a finite alphabet. We have inequality
\begin{align}
\hilberg_{n\rightarrow\infty} \sred \card U(X_1^n) &\le
\hilberg_{n\rightarrow\infty} \sred\mathbb{I}(X_1^n;X_{n+1}^{2n})
.
\label{FactsMI}
\end{align}
\end{theorem}
\begin{proof}
Let us write $S_n:=\card U(X_1^n)$. Observe that
\begin{align}
\sred\mathbb{H}(Z_1^{S_n}|S_n)&=-\sum_{s,w} P(S_n=s,Z_1^s=w)\log P(Z_1^s=w|S_n=s)
\nonumber\\
&\ge -\sum_{s,w} P(S_n=s,Z_1^s=w)\log \frac{P(Z_1^s=w)}{P(S_n=s)}
\nonumber\\
&= -\sum_{s,w} P(S_n=s,Z_1^s=w)\log \frac{2^{-s}}{P(S_n=s)}
\nonumber\\
&= (\log 2)\sred S_n-\sred\mathbb{H}(S_n),
\label{HZS}
\\
\sred\mathbb{H}(S_n)&\le (\sred S_n+1)\log(\sred S_n+1)-\sred S_n\log\sred S_n
\nonumber\\
&=\log(\sred S_n+1)+\sred S_n\log\frac{\sred S_n+1}{\sred S_n}
\nonumber\\
&\le \log(\sred S_n+1)+1,
\label{HS}
\end{align}
where the second row of inequalities follows by the maximum entropy
bound from \cite[Lemma 13.5.4]{CoverThomas06}. Hence, by the
inequality
\begin{align}
\label{DPI}
\sred\mathbb{H}(X|Y)\le \sred\mathbb{H}(X|f(Y))
\end{align}
for a measurable function $f$, we obtain that
\begin{align}
\sred\mathbb{H}(X_1^n)-\sred\mathbb{H}(X_1^n|Z_1^\infty)
&\ge \sred\mathbb{H}(X_1^n|S_n)-\sred\mathbb{H}(X_1^n|Z_1^\infty,S_n)-\sred\mathbb{H}(S_n)
\nonumber\\
&\ge \sred\mathbb{H}(X_1^n|S_n)-\sred\mathbb{H}(X_1^n|Z_1^{S_n},S_n)-\sred\mathbb{H}(S_n)
\nonumber\\
&= \sred\mathbb{I}(X_1^n;Z_1^{S_n}|S_n)-\sred\mathbb{H}(S_n)
\nonumber\\
&\ge \sred\mathbb{H}(Z_1^{S_n}|S_n)-\sred\mathbb{H}(Z_1^{S_n}|X_1^n,S_n)-\sred\mathbb{H}(S_n)
\nonumber\\
&= \sred\mathbb{H}(Z_1^{S_n}|S_n)-\sred\mathbb{H}(S_n)
\nonumber\\
&\ge (\log 2)\sred S_n-2\sred\mathbb{H}(S_n)
\nonumber\\
&\ge (\log 2)\sred S_n-2\kwad{\log(\sred S_n+1)+1}
\label{IXZ}
.
\end{align}
Now we observe that
\begin{align}
\sred\mathbb{H}(X_1^n|Z_1^\infty)\ge
\sred\mathbb{H}(X_1^n|X_{n+1}^\infty)=
hn
\end{align}
since the sequence of random variables $Z_1^\infty$ is a measurable
function of the sequence of random variables $X_{n+1}^\infty$, as
shown in \cite{Debowski09,Debowski11b}. Hence we have
\begin{align}
\sred\mathbb{H}(X_1^n)-\sred\mathbb{H}(X_1^n|Z_1^\infty)\le
\sred\mathbb{H}(X_1^n)-hn.
\label{IXZHh}
\end{align}
By inequalities (\ref{IXZ}) and (\ref{IXZHh}) and equality
(\ref{RedundancyMI}), we obtain inequality (\ref{FactsMI}).
\end{proof}
The algorithmic version of the theorem about facts and words follows
roughly the same idea, with some necessary adjustments.
\begin{theorem}[facts and mutual information II]
\label{theoFactsMIAlg}
Let $(X_i)_{i=1}^\infty$ be a stationary process over a finite
alphabet. We have inequality
\begin{align}
\hilberg_{n\rightarrow\infty} \sred \card U_a(X_1^n) &\le
\hilberg_{n\rightarrow\infty} \sred\mathbb{I}_a(X_1^n;X_{n+1}^{2n})
.
\label{FactsMIAlg}
\end{align}
\end{theorem}
\begin{proof}
Let us write $S_n:=\card U_a(X_1^n)$. Observe that
\begin{align}
\mathbb{H}_a(z_1^{S_n}|S_n)&\pge \mathbb{H}_a(z_1^{S_n})-\mathbb{H}_a(S_n)
\nonumber\\
&\peq (\log 2) S_n-C-\mathbb{H}_a(S_n),
\label{HZSAlg}
\\
\mathbb{H}_a(S_n)&\ple 2\log( S_n+1),
\label{HSAlg}
\\
\mathbb{H}_a(K(z_1^{S_n}))&\ple 2\log( K(z_1^{S_n})+1)
\nonumber\\
&\ple 2\log( S_n+1),
\label{HKZSAlg}
\end{align}
where the first row of inequalities follows by the algorithmic
randomness of $z_1^\infty$, whereas the second and the third row of
inequalities follow by the bounds $K(n)\ple 2\log_2 (n+1)$ for
$n\ge 0$ and $K(z_1^k)\ple 2k$. Moreover, for any computable
function $f$ there exists a constant $C_f\ge 0$ such that
\begin{align}
\label{DPIAlg}
\mathbb{H}_a(x|y)\ple \mathbb{H}_a(x|f(y))+C_f
.
\end{align}
Hence, we obtain that
\begin{align}
\mathbb{H}_a(X_1^n)-\mathbb{H}_a(X_1^n|z_1^\infty)
&\pge \mathbb{H}_a(X_1^n|S_n)-\mathbb{H}_a(X_1^n|z_1^\infty,S_n)-\mathbb{H}_a(S_n)
\nonumber\\
&\pge \mathbb{H}_a(X_1^n|S_n)-\mathbb{H}_a(X_1^n|z_1^{S_n},S_n)-\mathbb{H}_a(S_n)
\nonumber\\
&\pge \mathbb{I}_a(X_1^n;z_1^{S_n}|S_n)-\mathbb{H}_a(K(z_1^{S_n}))-\mathbb{H}_a(S_n)
\nonumber\\
&\pge \mathbb{H}_a(z_1^{S_n}|S_n)-\mathbb{H}_a(z_1^{S_n}|X_1^n,K(X_1^n),S_n)
\nonumber\\
&\qquad -\mathbb{H}_a(K(z_1^{S_n}))-\mathbb{H}_a(S_n)
\nonumber\\
&\pge \mathbb{H}_a(z_1^{S_n}|S_n)-C_g-\mathbb{H}_a(K(z_1^{S_n}))-\mathbb{H}_a(S_n)
\nonumber\\
&\pge (\log 2) S_n-C-C_g-\mathbb{H}_a(K(z_1^{S_n}))-2\mathbb{H}_a(S_n)
\nonumber\\
&\pge (\log 2) S_n-6\log( S_n+1)-C-C_g
.
\end{align}
Since $-\sred\log( S_n+1)\ge -\log(\sred S_n+1)$ by the Jensen
inequality, we obtain
\begin{align}
\sred\mathbb{H}_a(X_1^n)-\sred\mathbb{H}_a(X_1^n|z_1^\infty)\pge
(\log 2)\sred S_n-6\log(\sred S_n+1)-C-C_g
\label{IXZAlg}
.
\end{align}
Now we observe that
\begin{align}
\sred\mathbb{H}_a(X_1^n|z_1^\infty)\ge \sred\mathbb{H}(X_1^n)\ge hn
\end{align}
since the conditional prefix-free Kolmogorov complexity with the
second argument fixed is the length of a prefix-free code. Hence we
have
\begin{align}
\sred\mathbb{H}_a(X_1^n)-\sred\mathbb{H}_a(X_1^n|z_1^\infty)\le
\sred\mathbb{H}_a(X_1^n)-hn.
\label{IXZHhAlg}
\end{align}
By inequalities (\ref{IXZAlg}) and (\ref{IXZHhAlg}) and equality
(\ref{RedundancyMIAlg}), we obtain inequality (\ref{FactsMIAlg}).
\end{proof}
\section{Mutual information and PPM words}
\label{secMIWords}
In this appendix, we will investigate some algebraic properties of the
length of the PPM code to be used for proving the second part of the
theorem about facts and words. First of all, it can be seen that
\begin{align}
\label{PPMfirst}
\mathbb{H}_{\PPM_k}(x_1^n)=
\begin{cases}
\displaystyle n\log D, & k=-1,
\\
\displaystyle k\log D
+\sum_{u\in\mathbb{X}^k}\log\frac{(N(u|x_1^{n-1})+D-1)!}{(D-1)!\prod_{a=1}^D
N(ua|x_1^n)!}, & k\ge 0.
\end{cases}
\end{align}
Expression (\ref{PPMfirst}) can be further rewritten using notation
\begin{align}
\log^* n&:=
\begin{cases}
0, & n=0,
\\
\log n!-n\log n+n, & n\ge 1,
\end{cases}
\\
\mathfrak{H}(n_1,...,n_l)&:=
\begin{cases}
\sum_{i=1:n_i>0}^l n_i
\log \okra{\frac{\sum_{j=1}^l n_j}{n_i}},
& \text{if $n_j>0$ exists},
\\
0, & \text{else},
\end{cases}
\\
\mathfrak{K}(n_1,...,n_l)&:=
\sum_{i=1}^l \log^* n_i-\log^*\okra{\sum_{i=1}^l
n_i}.
\end{align}
Then, for $k\ge 0$, we define
\begin{align}
\mathbb{H}_{\PPM^0_k}(x_1^n)&:=
\sum_{u\in\mathbb{X}^k}
\mathfrak{H}\okra{
{N(u1|x_1^{n})}
,...,
{N(uD|x_1^{n})}
}
,
\\
\mathbb{H}_{\PPM^1_k}(x_1^n)&:=
\sum_{u\in\mathbb{X}^k}
\mathfrak{H}\okra{
{N(u|x_1^{n-1})}
,
{D-1}
}
\nonumber\\
&\quad
-\sum_{u\in\mathbb{X}^k} \mathfrak{K}\okra{
{N(u1|x_1^{n})}
,...,
{N(uD|x_1^{n})}
,
{D-1}
}
.
\end{align}
As a result for $k\ge 0$ we obtain
\begin{align}
\label{PPMexact}
\mathbb{H}_{\PPM_k}(x_1^n)&=k\log D+ \mathbb{H}_{\PPM^0_k}(x_1^n)+
\mathbb{H}_{\PPM^1_k}(x_1^n).
\end{align}
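To verify (\ref{PPMexact}), fix a context $u\in\mathbb{X}^k$ and write $N=N(u|x_1^{n-1})$ and
$N_a=N(ua|x_1^{n})$, so that $\sum_{a=1}^D N_a=N$. Substituting
$\log m!=m\log m-m+\log^* m$ into the summand of (\ref{PPMfirst}) and using that the linear terms
$-(N+D-1)+(D-1)+\sum_{a=1}^D N_a$ cancel, we obtain
\begin{align*}
  \log\frac{(N+D-1)!}{(D-1)!\prod_{a=1}^D N_a!}
  =
  \mathfrak{H}(N_1,...,N_D)+\mathfrak{H}(N,D-1)
  -\mathfrak{K}(N_1,...,N_D,D-1)
  .
\end{align*}
Summing this identity over $u\in\mathbb{X}^k$ and adding the term $k\log D$ yields (\ref{PPMexact}).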
In the following, we will analyze the terms on the right-hand side of
(\ref{PPMexact}).
\begin{theorem}
\label{theoPPMbound}
For $k\ge 0$ and $n\ge 1$, we have
\begin{align}
\label{PPMboundI}
\tilde D\card V(k|x_1^{n-1})
&\le \mathbb{H}_{\PPM^1_k}(x_1^n)< D\card V(k|x_1^{n-1})\okra{2+\log n},
\end{align}
where $\tilde D:=-D\log\okra{D^{-1}}!>0$.
\end{theorem}
\begin{proof}
Observe that $\mathfrak{H}(0,D-1)=\mathfrak{K}(0,...,0,D-1)=0$.
Hence the summation in $\mathbb{H}_{\PPM^1_k}(x_1^n)$ can be restricted to
$u\in\mathbb{X}^k$ such that $N(u|x_1^{n-1})\ge 1$. Consider such a
$u$ and write $N=N(u|x_1^{n-1})$ and $N_a=N(ua|x_1^n)$.
Since $\mathfrak{H}(n_1,...,n_l)\ge 0$ and
$\mathfrak{K}(n_1,...,n_l)\ge 0$ (the second inequality follows by
subadditivity of $\log^* n$), we obtain first
\begin{align}
&\mathfrak{H}\okra{{N},{D-1}}-
\mathfrak{K}\okra{{N_1},...,{N_D},{D-1}}
\nonumber\\
&\quad\le \mathfrak{H}\okra{{N},{D-1}}
\nonumber\\
&\quad= N\log\okra{1+\frac{D-1}{N}}+(D-1)\log\okra{1+\frac{N}{D-1}}
\nonumber\\
&\quad\le N\cdot\frac{D-1}{N}+(D-1)\log\okra{1+\frac{N}{D-1}}
\nonumber\\
&\quad= (D-1)\kwad{1+\log\okra{1+\frac{N}{D-1}}}
< D\okra{2+\log n}
,
\label{PPMboundUpper}
\end{align}
where we use $\log(1+x)\le x$ and $N< n$. On the other hand,
function $\log^* n$ is concave so by $\sum_{a=1}^D N_a=N$ and the
Jensen inequality for $\log^* n$ we obtain
\begin{align}
&\mathfrak{H}\okra{{N},{D-1}}-\mathfrak{K}\okra{{N_1},...,{N_D},{D-1}}
\nonumber\\
&\quad\ge \mathfrak{F}\okra{{N},{D}}:=
N\log\okra{1+\frac{D-1}{N}}+(D-1)\log\okra{1+\frac{N}{D-1}}
\nonumber\\
&\qquad\qquad\qquad
+\log^*(N+D-1)-\log^*(D-1)-D\log^*\okra{N/D}
\nonumber\\
&\quad=
\log(N+D-1)!-\log(D-1)!-D\log\okra{N/D}!-N\log D
\nonumber\\
&\quad=
\log\frac{(N+D-1)!}{(D-1)!\okra{N/D}!^D D^N}
\ge 0
\end{align}
since
\begin{align}
\okra{N/D}!^D D^N
&=N^D(N-D)^D(N-2D)^D\cdot...\cdot D^D
\nonumber\\
&\le
(N+D-1)(N+D-2)\cdot...\cdot D
=\frac{(N+D-1)!}{(D-1)!}
.
\end{align}
Moreover, the function $\mathfrak{F}\okra{{N},{D}}$ is increasing in the argument
$N$. Hence
\begin{align}
\mathfrak{F}\okra{{N},{D}}\ge \mathfrak{F}\okra{{1},{D}}=
-D\log\okra{D^{-1}}!
.
\label{PPMboundLower}
\end{align}
Summing inequalities (\ref{PPMboundUpper}) and (\ref{PPMboundLower})
over $u\in\mathbb{X}^k$ such that $N(u|x_1^{n-1})\ge 1$, we obtain the
claim.
\end{proof}
The mutual information is defined as a difference of entropies.
Replacing the entropy with an arbitrary function $\mathbb{H}_Q(u)$, we
obtain this quantity:
\begin{definition}
The \emph{$Q$ pointwise mutual information} is defined as
\begin{align}
\mathbb{I}_Q(u;v):=
\mathbb{H}_Q(u)+\mathbb{H}_Q(v)
-\mathbb{H}_Q(uv)
.
\end{align}
\end{definition}
We will show that the $\PPM^0_k$ pointwise mutual information cannot
be positive.
\begin{theorem}
\label{theoMathFrakH}
For $n_i=\sum_{j=1}^l n_{ij}$, where $n_{ij}\ge 0$, we have
\begin{align}
\mathfrak{H}(n_1,...,n_k)\ge \sum_{j=1}^l
\mathfrak{H}(n_{1j},...,n_{kj}).
\end{align}
\end{theorem}
\begin{proof}
Write $N:=\sum_{i=1}^k \sum_{j=1}^l n_{ij}$, $p_{ij}:=n_{ij}/N$,
$q_i:=\sum_{j=1}^l p_{ij}$, and $r_j:=\sum_{i=1}^k p_{ij}$. We observe
that
\begin{align}
\mathfrak{H}(n_1,...,n_k)- \sum_{j=1}^l
\mathfrak{H}(n_{1j},...,n_{kj})
=
N\sum_{i=1}^k \sum_{j=1}^l p_{ij}\log\frac{p_{ij}}{q_i r_j},
\end{align}
which is $N$ times the Kullback-Leibler divergence between
distributions $\klam{p_{ij}}$ and $\klam{q_i r_j}$ and thus is
nonnegative.
\end{proof}
\begin{theorem}
\label{theoPPMZeroMI}
For $k\ge 0$, we have
\begin{align}
\label{PPMZeroMI}
\mathbb{I}_{\PPM^0_k}(x_1^n;x_{n+1}^{n+m})\le 0.
\end{align}
\end{theorem}
\begin{proof}
Consider $k\ge 0$. For $u\in\mathbb{X}^k$ and $a\in\mathbb{X}$, we
have
\begin{align}
N(ua|x_1^{n+m})&=N(ua|x_1^n)+N(ua|x_{n-k}^{n+k})+N(ua|x_{n+1}^{n+m})
.
\end{align}
Thus using Theorem \ref{theoMathFrakH} we obtain
\begin{align}
\mathfrak{H}\okra{
{N(u1|x_1^{n+m})}
,...,
{N(uD|x_1^{n+m})}
}
&\ge
\mathfrak{H}\okra{
{N(u1|x_1^n)}
,...,
{N(uD|x_1^n)}
}
\nonumber\\
&+
\mathfrak{H}\okra{
{N(u1|x_{n-k}^{n+k})}
,...,
{N(uD|x_{n-k}^{n+k})}
}
\nonumber\\
&+
\mathfrak{H}\okra{
{N(u1|x_{n+1}^{n+m})}
,...,
{N(uD|x_{n+1}^{n+m})}
}
.
\end{align}
Since the second term on the right-hand side is greater than or
equal to zero, we may omit it and, summing the remaining terms over all
$u\in\mathbb{X}^k$, we obtain the claim.
\end{proof}
Now we will show that the PPM pointwise mutual information between two
parts of a string is roughly bounded above by the cardinality of the
PPM vocabulary of the string multiplied by the logarithm of the string
length.
\begin{theorem}
\label{theoPPMVocabMI}
We have
\begin{align}
\mathbb{I}_{\PPM}(x_1^n;x_{n+1}^{n+m})&\le 1
+4\log
\kwad{G_{\PPM}(x_1^{n+m})+2}
+\kwad{G_{\PPM}(x_1^{n+m})+1}\log D
\nonumber\\
&\qquad
+
2D \card V_{\PPM}(x_1^{n+m})\kwad{2+\log (n+m)}.
\label{PPMVocabMI}
\end{align}
\end{theorem}
\begin{proof}
Consider $k\ge 0$. By Theorems \ref{theoPPMbound} and
\ref{theoPPMZeroMI} we obtain
\begin{align}
\mathbb{I}_{\PPM_k}(x_1^n;x_{n+1}^{n+m})
&=
k\log D
+\mathbb{I}_{\PPM^0_k}(x_1^n;x_{n+1}^{n+m})
+\mathbb{I}_{\PPM^1_k}(x_1^n;x_{n+1}^{n+m})
\nonumber\\
&\le
k\log D
+
D\card V(k|x_1^{n})\kwad{2+\log n}
\nonumber\\
&\quad\qquad
+
D\card V(k|x_{n+1}^{n+m})\kwad{2+\log m}
\nonumber\\
&\le k\log D+2D\card V(k|x_1^{n+m})\kwad{2+\log(n+m)}
.
\end{align}
In contrast, $\mathbb{I}_{\PPM_{-1}}(x_1^n;x_{n+1}^{n+m})=0$.
Now let $G=G_{\PPM}(x_1^{n+m})$. Since
\begin{align}
\mathbb{H}_{\PPM}(x_1^{n+m})\ge \mathbb{H}_{\PPM_G}(x_1^{n+m})
\end{align}
and
\begin{align}
\mathbb{H}_{\PPM}(u)\le \mathbb{H}_{\PPM_k}(u)+ 1/2+2\log (k+2)
\end{align}
for any $u\in\mathbb{X}^*$ and $k\ge -1$,
we obtain
\begin{align}
\mathbb{I}_{\PPM}(x_1^n;x_{n+1}^{n+m})
&\le
\mathbb{I}_{\PPM_G}(x_1^n;x_{n+1}^{n+m})+1+4\log (G+2)
\nonumber\\
&\le 1+4\log (G+2)+(G+1)\log D
\nonumber\\
&\qquad
+2D\card V(G|x_1^{n+m})\kwad{2+\log(n+m)}
.
\end{align}
Hence the claim follows.
\end{proof}
Consequently, we may prove the second part of Theorems
\ref{theoFactsWords} and \ref{theoFactsWordsAlg}, i.e., the theorems
about facts and words.
\begin{theorem}[mutual information and words]
\label{theoMIWords}
Let $(X_i)_{i=1}^\infty$ be a stationary process over a finite
alphabet. We have inequalities
\begin{align}
\hilberg_{n\rightarrow\infty} \sred\mathbb{I}(X_1^n;X_{n+1}^{2n})
&\le \hilberg_{n\rightarrow\infty} \sred\mathbb{I}_a(X_1^n;X_{n+1}^{2n})
\nonumber\\
&\le \hilberg_{n\rightarrow\infty} \sred
\kwad{G_{\PPM}(X_1^n)+\card V_{\PPM}(X_1^n)} .
\label{MIWords}
\end{align}
\end{theorem}
\begin{proof}
By Theorem \ref{theoPPMVocabMI}, we obtain
\begin{align}
\hilberg_{n\rightarrow\infty} \sred \mathbb{I}_{\PPM}(X_1^n;X_{n+1}^{2n})
&\le \hilberg_{n\rightarrow\infty} \sred
\kwad{G_{\PPM}(X_1^n)+\card V_{\PPM}(X_1^n)} .
\label{PPMMIWords}
\end{align}
In contrast, Theorems \ref{theoPPMUniversal} and
\ref{theoHilbergRedundancy} and inequalities (\ref{SourceCoding})
and (\ref{ShannonFanoPPM}) yield
\begin{align}
\hilberg_{n\rightarrow\infty}
\kwad{\sred\mathbb{H}(X_1^n)-hn}
&\le
\hilberg_{n\rightarrow\infty}
\kwad{\sred\mathbb{H}_a(X_1^n)-hn}
\nonumber\\
&\le
\hilberg_{n\rightarrow\infty}
\kwad{\sred\mathbb{H}_{\PPM}(X_1^n)-hn}
\nonumber\\
&\le
\hilberg_{n\rightarrow\infty} \sred \mathbb{I}_{\PPM}(X_1^n;X_{n+1}^{2n})
\end{align}
Hence by equalities (\ref{RedundancyMI}) and (\ref{RedundancyMIAlg}),
we obtain inequality (\ref{MIWords}).
\end{proof}
\section{Hilberg exponents for Santa Fe processes}
\label{secSantaFe}
We begin with a general observation for Hilberg exponents. In
\cite{Debowski15d} this result was discussed only for the Hilberg
exponent of mutual information.
\begin{theorem}[cf.\ \cite{Debowski15d}]
\label{theoHilbergExp}
For a sequence of random variables $Y_n\ge 0$, we have
\begin{align}
\label{HilbergExp}
\hilberg_{n\rightarrow\infty} Y_n \le
\hilberg_{n\rightarrow\infty} \sred Y_n \text{ almost surely}
.
\end{align}
\end{theorem}
\begin{proof}
Denote $\delta:=\hilberg_{n\rightarrow\infty} \sred Y_n$. From the
Markov inequality, we have
\begin{align}
\sum_{k=1}^\infty
P\okra{\frac{Y_{2^k}}{2^{k(\delta+\epsilon)}}\ge 1}
&\le
\sum_{k=1}^\infty
\frac{\sred Y_{2^k}}{2^{k(\delta+\epsilon)}}
\nonumber\\
&\le
A
+
\sum_{k=1}^\infty
\frac{2^{k(\delta+\epsilon/2)}}{2^{k(\delta+\epsilon)}}
<\infty
,
\end{align}
where $A<\infty$. Hence, by the Borel-Cantelli lemma we have $Y_{2^k}<
2^{k(\delta+\epsilon)}$ for all but finitely many $k$ almost surely.
Since we can choose $\epsilon$ arbitrarily small, in particular we obtain
inequality (\ref{HilbergExp}).
\end{proof}
In \cite{Debowski12} and \cite{Debowski15d} it was shown that the Santa Fe
process with exponent $\alpha$ satisfies equalities
\begin{align}
\hilberg_{n\rightarrow\infty} \mathbb{I}(X_{-n+1}^0;X_1^{n})
&=
1/\alpha \text{ almost surely}
,
\\
\hilberg_{n\rightarrow\infty} \sred \mathbb{I}(X_{-n+1}^0;X_1^{n})
&=
1/\alpha
.
\end{align}
We will now show a similar result for the number of probabilistic
facts inferrable from the Santa Fe process almost surely and in
expectation. Since Santa Fe processes are processes over an infinite
alphabet, we cannot apply the theorem about facts and words.
\begin{theorem}
\label{theoFacts}
For the Santa Fe process with exponent $\alpha$ we have
\begin{align}
\label{FactsAs}
\hilberg_{n\rightarrow\infty} \card U(X_1^n)
&=
1/\alpha \text{ almost surely}
,
\\
\label{FactsExp}
\hilberg_{n\rightarrow\infty} \sred\card U(X_1^n)
&=
1/\alpha
.
\end{align}
\end{theorem}
\begin{proof}
First, we obtain
\begin{align}
P(\card U(X_1^n)\le m_n)
&\le
\sum_{k=1}^{m_n}
P(g(k;X_1^n)\neq Z_k)
=
\sum_{k=1}^{m_n}
\kwad{1-P(K_i=k)}^n
\nonumber\\
&\le m_n\kwad{1-\frac{m_n^{-\alpha}}{\zeta(\alpha)}}^n
\le m_n\exp\okra{-nm_n^{-\alpha}/\zeta(\alpha)},
\end{align}
where $\zeta(\alpha):=\sum_{k=1}^\infty k^{-\alpha}$ is the zeta
function. Put now $m_n=n^{1/\alpha-\epsilon}$ for an $\epsilon>0$. It
is easy to observe that
$\sum_{n=1}^\infty P(\card U(X_1^n)\le m_n)<\infty$. Hence by the
Borel-Cantelli lemma, we have inequality $\card U(X_1^n)> m_n$ for all
but finitely many $n$ almost surely.
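For completeness, let us record the computation behind the convergence of this series: for
$0<\epsilon<1/\alpha$ we have $nm_n^{-\alpha}=n^{\alpha\epsilon}$, so the bound above gives
\begin{align*}
  P(\card U(X_1^n)\le m_n)\le n^{1/\alpha-\epsilon}\exp\okra{-n^{\alpha\epsilon}/\zeta(\alpha)}
  ,
\end{align*}
which is summable in $n$ since the stretched exponential decays faster than any power of $n$.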
Second, we obtain
\begin{align}
P(\card U(X_1^n)\ge M_n)
&\le
\frac{n!}{(n-M_n)!}
\prod_{k=1}^{M_n}
P(K_i=k)
\nonumber\\
&= \frac{n!}{(n-M_n)!(M_n!)^\alpha[\zeta(\alpha)]^{M_n}}.
\end{align}
Recalling from Appendix \ref{secMIWords} that
$\log n!=n(\log n -1)+\log^*n$, where $\log^*n\le \log(n+2)$ is
subadditive, we obtain
\begin{align}
&\log P(\card U(X_1^n)\ge M_n)
\nonumber\\
&\qquad\le
n(\log n -1)-(n-M_n)\kwad{\log(n-M_n)-1}
\nonumber\\
&\qquad\quad -\alpha M_n(\log M_n-1)
+\log^* M_n-M_n\log \zeta(\alpha)
\nonumber\\
&\qquad\le
M_n\kwad{\log n -\alpha(\log M_n-1) -\log \zeta(\alpha)}
+\log^* M_n
\end{align}
by $\log n\le \log(n-M_n)+\frac{M_n}{n-M_n}$. Put now $M_n=Cn^{1/\alpha}$
for a $C>e[\zeta(\alpha)]^{-1/\alpha}$. We obtain
\begin{align}
\label{StretchedExponential}
P(\card U(X_1^n)\ge M_n)\le (Cn^{1/\alpha}+2)\exp(-\delta n^{1/\alpha})
\end{align}
where $\delta>0$, so that
$\sum_{n=1}^\infty P(\card U(X_1^n)\ge M_n)<\infty$. Hence by the
Borel-Cantelli lemma, we have inequality $\card U(X_1^n)< M_n$ for all
but finitely many $n$ almost surely. Combining this result with the
previous result yields equality (\ref{FactsAs}).
To obtain equality (\ref{FactsExp}), we invoke Theorem
\ref{theoHilbergExp} for the lower bound, whereas for the upper bound
we observe that
\begin{align}
\sred\card U(X_1^n)\le M_n+nP(\card U(X_1^n)\ge M_n)
\end{align}
where the last term decays according to the stretched exponential bound
(\ref{StretchedExponential}) for $M_n=Cn^{1/\alpha}$.
\end{proof}
\bibliographystyle{IEEEtran}
\bibliography{0-journals-abbrv,0-publishers-abbrv,ai,mine,tcs,ql,books,nlp}
\end{document}
12330 Dorsett Road, Maryland Heights, MO 63043, United States of America (15.3 miles from Old Courthouse)
Review score
Score from 83 reviews
1973 Craigshire Road, Maryland Heights, MO 63146, United States of America (14.8 miles from Old Courthouse)
Excellent
Score from 42 reviews
191 Westport Plaza, Maryland Heights, MO 63146, United States of America (14.9 miles from Old Courthouse)
Very good
Score from 31 reviews
11980 Olive Boulevard, Creve Coeur, MO 63141, United States of America (14.3 miles from Old Courthouse)
Wonderful
Score from 49 reviews
900 Westport Plaza Drive, Maryland Heights, MO 63146, United States of America (14.8 miles from Old Courthouse).
Radio and TV icon, John Blackman, opens up to Neil Mitchell about his health battle
It’s hard to forget the dulcet tones of much-loved comedian and long-standing 3AW broadcaster John Blackman.
For decades his voice sailed the airwaves into our homes.
Now, nine months since undergoing a marathon 12 hour surgery to remove and reconstruct his jaw, John Blackman shares his journey on finding his voice again.
After he was diagnosed with a basal-cell carcinoma, an aggressive form of cancer, doctors feared it would become rampant if they didn’t act soon.
“It’s like I ploughed into a tree and my life changed forever,” John told Neil Mitchell on 3AW Mornings.
“When I was given the news, my first thought was ‘I’ve got a diary full of corporate gigs, I’m going to have to ring them all and put them on hold’.”
As a public speaker, an entertainer and a comedian, his voice was his livelihood.
“It’s made me realise there are more important things in life than work,” John shared.
“There’s people out there far worse off, Neil.”
Blackman said he is now, as far as he is aware, in full remission, but the last nine months have given him a newfound respect for how disabled people navigate everyday life.
“I’ve got a new appreciation for someone who looks different and sounds different,” John said.
“It’s not contagious.”
The grueling surgery saw his fibula redesigned into a jaw, with a skin graft from his inner thigh replacing a section of his chin, and his teeth removed.
“I’ve discovered my voice doesn’t have any authority anymore,” John joked.
“I sound like a very drunk uncle Arthur when I’m on the phone.
“I don’t get any respect as (people) assume I’m a drunk or an old fool.”
Whilst physically John has changed, he’s determined to get back to what he loves most – his work.
“My working career stopped the day I went into surgery and I’m determined to get it back on track,” said John.
“In the words of Arnie – ‘I’ll be back’.”
Click PLAY to hear John’s full interview with Neil this morning.
TRAVEL IS THE ONLY THING YOU BUY THAT MAKES YOU RICH
TRAVEL MORE TO DISCOVER YOURSELF
CHEAP BUS FARES, BEST CONNECTIVITY NETWORK
CHEAP FLIGHT WITH GUARANTEED BOARDING.
FLIGHT WITH REFUNDABLE FARES.
BEST DEALS IN HOTELS.
BEST CRUISE TOUR.
Rourkela is a metropolitan city in the southeastern state of Odisha. It is one of the largest cities in the state and, as the home of a major steel plant, is popularly known as Ispat Nagar. It also hosts a famous educational institution, the National Institute of Technology (NIT Rourkela). The population of Rourkela is approximately 483,083 according to the 2011 census. Rourkela has plenty to offer tourists, including temples, parks, dams, waterfalls and museums, and it is also known for its culture, festivals and food. The best time to visit Rourkela is October to March.
The city is particularly known for its waterfalls, including the famous Khandadhar Falls.
Rourkela:-
Rourkela has a fine range of hotels with luxurious amenities (as per the travel consultants of Global Duniya).
Rourkela also has some of the best restaurants, which will surely satisfy the hunger of tourists (as per the travel consultants of Global Duniya); some of the best are given below:
Your trip would not be complete without eating the delicious and mouthwatering dishes of Rourkela. The cuisine blends Eastern and Southern influences, and you can taste a number of dishes, including the cultural dishes of Rourkela, that will surely make you want to eat them again and again.
The airport of Rourkela is a private airport located about 6 kilometres from the city. Its IATA code is RRK. The main airlines are Air Deccan and Air Odisha, and the main routes are to Kolkata and Bhubaneswar. The airport is secured by army and police personnel and also offers facilities such as ATMs, CCTV, car calling and first aid, covering the basic services travellers need.
\begin{document}
\begin{abstract}
In this article we construct an expansive homeomorphism of a compact three-dimensional manifold
with a fixed point whose local stable set is not locally connected.
This homeomorphism is obtained as a topological perturbation of a quasi-Anosov diffeomorphism that is not Anosov.
\end{abstract}
\maketitle
\centerline{\scshape Alfonso Artigue}
\medskip
{\footnotesize
\centerline{Departamento de Matem\'atica y Estad\'\i stica del Litoral}
\centerline{Universidad de la Rep\'ublica}
\centerline{Gral Rivera 1350, Salto, Uruguay}
}
\section{Introduction}
A homeomorphism $f$ of a metric space $(M,\dist)$ is \emph{expansive} if there is
$\expc>0$ such that if $x,y\in M$ and $\dist(f^n(x),f^n(y))\leq\expc$ for all $n\in\Z$ then
$x=y$.
Expansivity is a well known property of Anosov diffeomorphisms and hyperbolic sets.
Also, pseudo-Anosov \cite{Hi,L} and quasi-Anosov diffeomorphisms \cite{FR} are known to be expansive.
In \cite{Hi,L}, Hiraide and Lewowicz proved that on compact surfaces, expansive homeomorphisms are
conjugate to pseudo-Anosov diffeomorphisms. In particular, if $M$ is the two-torus then $f$ is conjugate to an
Anosov diffeomorphism and
there are no expansive homeomorphisms on the two-sphere.
In \cite{Vi2002} Vieitez proved that if $f$ is an expansive diffeomorphism of a compact three-dimensional manifold and
$\Omega(f)=M$ then $f$ is conjugate to an Anosov diffeomorphism.
We recall that $x\in M$ is a \emph{wandering point} if there is an open set $U\subset M$ such that $x\in U$ and $f^n(U)\cap U=\emptyset$
for all $n\neq 0$.
The set of non-wandering points is denoted as $\Omega(f)$.
Given a homeomorphism $f\colon M\to M$ and $\epsilon>0$ small, the $\epsilon$-\emph{stable set} (\emph{local stable set}) of a point $x\in M$
is defined as
\[
W^s_\epsilon(x)=\{y\in M:\dist(f^n(x),f^n(y))\leq\epsilon \hbox{ for all }n\geq 0\}.
\]
On three-dimensional manifolds there are several open problems. For instance, it is not known
whether the three-sphere admits expansive homeomorphisms.
In that article Vieitez asks:
allowing wandering points, can we have points with local stable
sets that are not manifolds for $f\colon M\to M$, an expansive diffeomorphism
defined on a three-dimensional manifold $M$?
We remark that in the papers by Hiraide and Lewowicz that we mentioned,
before proving that stable and unstable sets form
pseudo-Anosov singular foliations, they show that local stable sets are locally connected.
The purpose of this paper is to give a positive answer to Vieitez' question for homeomorphisms.
We will construct an expansive homeomorphism on a three-dimensional manifold
with a point whose local stable set is not locally connected
and $\Omega(f)$ consists of an attractor and a repeller.
The construction follows the ideas in \cite{ArAn}
where it is proved that the compact surface of genus two admits
a continuum-wise expansive homeomorphism with a fixed point
whose local stable set is not locally connected.
The key point for the present construction is that on a three-dimensional manifold there is enough room
to place two one-dimensional continua meeting in a singleton, even if one of these continua is not locally connected.
In Remark \ref{rmkNoC1} we explain why our example is not $C^1$.
\section{The example}
\label{secAno}
The example is a $C^0$ perturbation of the quasi-Anosov diffeomorphism of \cite{FR}.
The perturbation will be obtained by a composition with a homeomorphism that is close to the identity.
This homeomorphism will be defined in local charts around a fixed point and will be extended as the identity outside this chart.
In \S \ref{secPertR3} we construct this perturbation in $\R^3$ obtaining a homeomorphism such that the stable set of the origin $(0,0,0)$ is
not locally connected.
Then, in \S \ref{secPertQA} we perform the perturbation of the quasi-Anosov diffeomorphism to obtain our example.
\subsection{In local charts}
\label{secPertR3}
Fix $r\in (0,1/2)$ and let $B\subset \R^3$ be the closed ball of radius $r$ centered at $(\frac 32,0,0)$.
Let $T_2\colon\R^3\to\R^3$ be defined as
$$T_2(x,y,z)=\frac 12(x,y,z).$$
Define $B_n=T_2^n(B)$ for $n\in\Z$.
Let $E\subset B$ be the non-locally connected continuum given by the union of the following segments:
\begin{itemize}
\item a segment parallel to $(1,0,0)$: $\left[\frac 32-\frac r2,\frac32+\frac r2\right]\times\left\{(0,0)\right\}$,
\item a segment parallel to $(0,1,0)$: $\left\{\frac 32-\frac r2\right\}\times\left[0,\frac r2\right]\times \{0\}$,
\item a countable family of segments parallel to $(0,1,0)$:
$\left\{\frac 32-\frac r2+\frac rk\right\} \times \left[0,\frac r2\right] \times \{0\}$,
for $k\geq 1$.
\end{itemize}
Consider $\rho\colon B\to [0,1]$ a smooth function such that $\rho^{-1}(1)=E$ and $\rho^{-1}(0)=\partial B$.
Define a vector field $X\colon\R^3\to\R^3$ by
\[
X(x,y,z)=\left\{
\begin{array}{ll}
(0,-\rho(T_2^{-n}(x,y,z))y\log 4,0) & \text{ if }\exists n\in\Z\text{ s.t. }(x,y,z)\in B_n,\\
(0,0,0)&\text{ otherwise.}
\end{array}
\right.
\]
\begin{rmk}
The vector field $X$ induces a flow $\phi\colon\R\times\R^3\to\R^3$.
The proof is as follows.
Notice that $X$ is smooth on $\R^3\setminus \{(0,0,0)\}$.
Since the regular orbits (i.e., not fixed points) are contained in the compact balls $B_n$ we have that these trajectories are defined for all $t\in\R$.
At the origin we have a fixed point, and the continuity of $X$ gives the continuity of the complete flow on $\R^3$.
\end{rmk}
Let $\phi_t$ be the flow on $\R^3$ induced by $X$. For $t=1$ we obtain the so called \emph{time-one map} $\phi_1$.
Let $$E_k=T_2^k(E)$$ for all $k\in\Z$.
\begin{rmk}
\label{rmkNoC1}
The homeomorphism $\phi_1$ is not $C^1$.
This is because the partial derivative $\frac{\partial \phi_1}{\partial y} (x_*,0,0)=(0,1/4,0)$ if
$(x_*,0,0)$ is in one of the segments of $E_k$ that is parallel to $(0,1,0)$.
Since $\frac{\partial \phi_1}{\partial y} (0,0,0)=(0,1,0)$ and $x_*>0$ can be taken arbitrarily small,
we conclude that $\frac{\partial \phi_1}{\partial y}$ is not continuous at the origin.
\end{rmk}
Define the homeomorphisms $T_1,f\colon \R^3\to \R^3$ as
\begin{equation}
\label{ecuDefT1}
T_1(x,y,z)=(x/2, 2y,2z)
\end{equation}
and
\[
f=T_1\circ\phi_1.
\]
Consider the set
$\tilde W=\cup_{k\in\Z}T_2^k(E)\cup\{(x,0,0):x\in\R\}$.
The \emph{(global) stable set} of a point $a\in\R^3$ associated to the homeomorphism $f$
is the set
\[
W^s_f(a)=\{b\in\R^3:\dist(f^n(a),f^n(b))\to 0\text{ as }n\to +\infty\}.
\]
\begin{prop}
\label{propConjEst}
It holds that $W^s_f(0,0,0)=\tilde W$.
\end{prop}
\begin{proof}
We start proving the inclusion $\tilde W\subset W^s_f(0,0,0)$.
We will show that $f(E_k)=E_{k+1}$.
By the definition of the flow we have that $$\phi_1(E_k)=\{(x,y/4,0):(x,y,0)\in E_k\}.$$
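Indeed, if a point of $E_k$ lies on a segment parallel to $(0,1,0)$ then $T_2^{-k}$ maps it into $E=\rho^{-1}(1)$, its trajectory stays on the same segment because the $y$-coordinate decreases towards $0$, and along the trajectory the $y$-coordinate solves
$$\dot y(t)=-y(t)\log 4,\qquad\hbox{hence}\qquad y(1)=y(0)e^{-\log 4}=\frac{y(0)}{4};$$
the points of $E_k$ with $y=0$ are fixed by $\phi_1$, in agreement with the formula above.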
Then $T_1(\phi_1(E_k))=\{(x/2,y/2,0):(x,y,0)\in E_k\}$ and this set is $T_2(E_k)=E_{k+1}$.
Since $E_k\to (0,0,0)$ as $k\to+\infty$ we conclude that $E_k\subset W^s_f(0,0,0)$ for all $k\in\Z$.
Also, $f(x,0,0)=(x/2,0,0)$ for all $x\in\R$. This proves that
$\{(x,0,0):x\in\R\}\subset W^s_f(0,0,0)$.
Take $p\notin \tilde W$.
Note that if $p\notin B_n$ for all $n\in\Z$ then $f^n(p)=T_1^n(p)$ for all $n\geq 0$
and $p\notin W^s_f(0,0,0)$.
Assume that $f^n(p)\in B_n$ for all $n\in\Z$ and define
$$(a_n,b_n,c_n)=f^n(p).$$
If $c_0\neq 0$ then $c_n=2^nc_0\to\infty$.
Thus, we assume that $c_0=0$.
Suppose that $b_0>0$ (the case $b_0<0$ is analogous).
Let $y_0=\sup\{s\geq 0:(a_0,s,0)\in\tilde W\}$.
Define $l=\{(a_0,y,0):y\in\R\}$ and $g\colon l\to l$ by the equation
\[
T_2^{-1}\circ f(a_0,y,0)=(a_0,g(y),0).
\]
In this paragraph we will show that $g(y)>y$ for all $y>y_0$.
Let $\alpha\colon\R\to\R$ be such that $(a_0,\alpha(t),0)=\phi_t(a_0,y,0)$ for all $t\in\R$.
That is, $\alpha$ satisfies $\alpha(0)=y$ and
$\dot\alpha(t)=-\rho(a_0,\alpha(t),0)\alpha(t)\log 4$.
Since $(a_0,y,0)\notin \tilde W$ we have that $$\rho(a_0,\alpha(0),0)<1.$$
Then
$$\int_0^1\frac{\dot\alpha(t)}{\alpha(t)}dt=-\int_0^1\log 4\rho(a_0,\alpha(t),0)dt>-\log 4$$
and $\log(\alpha(1))-\log(\alpha(0))>-\log 4$.
Consequently, $\alpha(1)>\frac14\alpha(0)$.
Notice that
\[
\begin{array}{ll}
(a_0,g(y),0)&=T_2^{-1}\circ f(a_0,y,0)=T_2^{-1}\circ T_1\circ\phi_1(a_0,y,0)
=T_2^{-1}\circ T_1(a_0,\alpha(1),0)\\
&=T_2^{-1}(a_0/2,2\alpha(1),0)
=(a_0,4\alpha(1),0).
\end{array}
\]
Then $g(y)=4\alpha(1)$ and $g(y)>y$.
Recall that $X$ is the vector field that defines $\phi$.
Since $X\circ T_2=T_2\circ X$, as can be easily checked, we have that $f\circ T_2= T_2\circ f$. This implies that
\[
\begin{array}{ll}
(a_0,g^n(b_0),0)& =(T_2^{-1}\circ f)^n(a_0,b_0,0)
= T_2^{-n}\circ f^n(a_0,b_0,0)\\
&= T_2^{-n}(a_n,b_n,0)
= 2^{-n}(a_n,b_n,0)
\end{array}
\]
and $g^n(b_0)=2^{-n}b_n$.
Since we are assuming that $f^n(p)\in B_n$ for all $n\in\Z$, we have that $2^{-n}b_n$ is bounded. But, as $b_0>y_0$ and $g(y)>y$ for all $y>y_0$ we have that
$g^n(b_0)$ is increasing and bounded.
Then, if $b_*>y_0$ is the limit of $g^n(b_0)$ we have that $g(b_*)=b_*$,
which contradicts that $g(y)>y$ for all $y>y_0$.
This implies that $f^n(p)$ cannot be in $B_n$ for all $n\geq 0$, and as we said, this shows that $p\notin W^s_f(0,0,0)$.
This proves the inclusion $W^s_f(0,0,0)\subset \tilde W$.
\end{proof}
\begin{rmk}
By the definition of the vector field $X$ we see that $\phi_1$
preserves the horizontal planes (i.e., the planes perpendicular to $(0,0,1)$).
Also, $\phi_1$ leaves invariant the cube $[-2,2]^3$ and is the identity in its boundary.
\end{rmk}
\subsection{The local perturbation of the quasi-Anosov}
\label{secPertQA}
To construct our example we start with a quasi-Anosov diffeomorphism as in \cite{FR}.
A \emph{quasi-Anosov diffeomorphism} is an axiom A diffeomorphism of $M$ such that $T_x W^s(x)\cap T_x W^u(x)=0_x$ for all $x\in M$,
where $T_x W^\sigma(x)$, $\sigma=s,u$, denotes the tangent space of the stable or unstable manifold $W^\sigma(x)$ at $x$ and $0_x$ is the null vector of $T_x M$.
The quasi-Anosov diffeomorphism of \cite{FR}, that will be denoted as $\fFR\colon M\to M$, has the following properties:
it is defined on a three-dimensional manifold, it is not Anosov and its non-wandering set is the union of two basic sets.
The basic sets are an expanding attractor and a shrinking repeller. Both sets are two-dimensional and locally they are
homeomorphic to the product of $\R^2$ and a Cantor set.
On the attractor there is a hyperbolic fixed point $p$.
Take closed balls $U,V\subset M$ and
$C^0$ local charts
$\varphi\colon [-2,2]^3\to U\subset M$ and $\psi\colon [-1,1]\times [-4,4]^2\to V\subset M$ satisfying the following conditions:
\begin{enumerate}
\item[C0:] $\varphi|_{[-1,1]\times[-2,2]^2}=\psi|_{[-1,1]\times[-2,2]^2}$,
\item[C1:] $\fFR|_U=\psi\circ T_1\circ\varphi^{-1}$ where $T_1$ was defined in \eqref{ecuDefT1},
\item[C2:] in the local charts stable sets of $\fFR$ are lines parallel to $(1,0,0)$,
\item[C3:] there is $r\in (0,1/2)$ such that $W^u_{\fFR}(q)$ in the local chart
is transverse to the horizontal planes if $\varphi^{-1}(q)$ is in a neighborhood of $B$ where, as in \S \ref{secPertR3}, $B\subset \R^3$ is the
ball of radius $r$ centered at $(3/2,0,0)$,
\item[C4:] if
$
\tilde B_n=\varphi(B_n)
$
for all $n\geq 0$, we assume that $\fFR^k(\tilde B_0)\cap \tilde B_0=\emptyset$ for all $k\geq 1$.
\end{enumerate}
Let $\tilde\phi\colon M\to M$ be the homeomorphism given by
\[
\tilde\phi(x)=\left\{
\begin{array}{ll}
\varphi\circ \phi_1\circ\varphi^{-1}(x) & \text{ if } x\in U,\\
x & \text{ if } x\notin U,
\end{array}
\right.
\]
where $\phi_1$ is the time-one of the flow induced by the vector field $X$ of \S \ref{secPertR3}.
Define the homeomorphism $\tilde f\colon M\to M$ as
$$\tilde f=\fFR\circ\tilde\phi$$
Notice that
\begin{equation}
\label{ecuIgFrT}
\tilde \phi(x)=x\text{ for all }x\notin \cup_{n\geq 0} \tilde B_n.
\end{equation}
\begin{rmk}
This implies that if $r$ is small then $\tilde f$ is close to $\fFR$ in the $C^0$ topology of homeomorphisms of $M$.
\end{rmk}
\begin{thm}
The homeomorphism $\tilde f\colon M\to M$ is expansive and the local stable set of the fixed point $p$ is connected but not locally connected.
\end{thm}
\begin{proof}
By Proposition \ref{propConjEst} and the condition C1 we have that the local stable set of $p$ is $\varphi(\tilde W)$.
Since $\tilde W$ is not locally connected and $\varphi$ is a homeomorphism we conclude that the
local stable set of $p$ is not locally connected.
Let us show that $\tilde f$ is expansive.
By \eqref{ecuIgFrT} we have that $\tilde f$ and $\fFR$ coincide on $M\setminus \cup_{n\geq 0} \interior(\tilde B_n)$.
Therefore, if $\tilde f^n(x)\notin \cup_{k\geq 0}\tilde B_k$ for all $n\in\Z$ then
$\tilde f^n(x)=\fFR^n(x)$ for all $n\in\Z$.
Let $\expc_1$ be an expansivity constant of $\fFR$.
Thus, if $x,y\in M$ are such that $\tilde f^n(x),\tilde f^n(y)\notin \cup_{k\geq 0}\tilde B_k$ for all $n\in\Z$
then $\sup_{n\in\Z}\dist(\tilde f^n(x),\tilde f^n(y))>\expc_1$.
From our analysis in local charts in \S \ref{secPertR3} we have that
$\tilde B_{n+1}\subset \fFR(\tilde B_n)=\tilde f(\tilde B_n)$ for all $n\geq 0$.
This and condition C4 imply that
if $\tilde f^{n_0}(x)\in \tilde B_{k_0}$ for some $n_0\in\Z$ and $k_0\geq 0$
then $\tilde f^{n_0-k_0}(x)\in \tilde B_0$.
Let $\expc\in (0,\expc_1)$ be such that if $x\in \tilde B_0$ and $\dist(y,x)<\expc$ then,
in the local chart, $W^s_\expc(x)$ is contained in a horizontal plane and $W^u_\expc(y)$ is transverse to the horizontal planes.
We have applied conditions C2 and C3.
This implies that $W^s_\expc(x)\cap W^u_\expc(y)$ contains at most one point.
This proves that $\expc$ is an expansivity constant of $\tilde f$.
\end{proof}
\renewcommand{\refname}{REFERENCES}
TITLE: Adding square roots to receive Integers
QUESTION [1 upvotes]: Inspired by this question, I ask the following:
For any $a\in\mathbb{N}_0$, do integers $x\ne a,y\ne a$ exist such that
$$y=\sqrt x+\sqrt a$$
$$\text{or}$$
$$y=\sqrt x-\sqrt a$$
And if yes, how is such a solution found?
REPLY [3 votes]: Suppose that $a$ is not a square, and suppose $y$ and $x$ exist so that $y = \sqrt{x} + \sqrt{a}$. Since $\sqrt{a}$ is a root of the polynomial $t^2 - a$, it follows that $\sqrt{x}$ is a root of the polynomial $$(y - t)^2 - a = t^2 - 2yt + y^2-a.$$ There are now two cases. First, if $x$ is a square, then $\sqrt{a} = y - \sqrt{x}$ is an integer, contradicting that $a$ is not a square. Second, if $x$ is not a square, then the minimal polynomial for $\sqrt{x}$ is $t^2 - x$, and hence $$t^2-x\mid t^2-2yt + y^2-a.$$ Since these polynomials have the same degree and are both monic, it follows that they are equal, i.e., $y = 0$ and $x = a$. This shows there are no $x$ and $y$ of the desired form when $a$ is not a square. A similar argument holds for $y = \sqrt{x}-\sqrt{a}$.
When $a$ is a square, you can (obviously) always find such $x$ and $y$: for example, if $a = 4$, then $x = 9$ and $y = \sqrt{9} + \sqrt{4} = 5$ work.
TITLE: Proof for a first-order differential equation claim
QUESTION [1 upvotes]: Claim: If $p(x)$ is a solution to a first-order differential equation in the form of $df/dx=g(f)$, then $p(x+c)$, with $c$ constant, is a solution as well.
I know the idea of the proof, but I am having trouble expressing it in a rigorous proof. Namely, my idea is that if we shift $p(x)$ to the left by $c$, then both the LHS and RHS will shift by $c$ (invoking the chain rule), so they are equal. Should I define $u=x+c$ and do a change of variable?
REPLY [2 votes]: suppose $p(x)$ is a solution of $$\frac{dy}{dx} =g(y). \tag 1 $$ that means $$ p'(a) = g(p(a))\text{ for all $a$ in the domain of $p$.}\tag 2$$ in particular putting $a = x+c$ in $(2),$ we have $$(p(x+c))' = p'(x+c)= g(p(x+c)) $$ which says that $y=p(x+c)$ is a solution of $(1).$ we have used the chain rule in the first equality of the last equation.
the geometric reason is that the slope field $g(y)$ is translation invariant with respect to $x.$
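as a quick sanity check with a concrete equation (chosen here only as an illustration): for $g(f)=f$ the solutions of $df/dx=f$ are $p(x)=Ce^{x},$ and $p(x+c)=Ce^{c}e^{x}$ is again of this form, hence again a solution.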
Nine Elms to Pimlico Bridge Designs
A selection from the 74 conceptual bridge designs that have been released to the public for the new cycle and pedestrian bridge that crosses the River Thames in London, between Nine Elms and Pimlico. The competition, commissioned by The London Borough of Wandsworth, includes a mix of world-renowned architects and up-and-coming practices, but the designs are being displayed on an anonymous basis so that the concepts are the sole focus of attention. A jury panel consisting of architects, engineers and council leaders will recommend a shortlist of up to four designs, which will be developed into more detailed proposals. The winner is set to be announced in July.
The purpose of the competition is to identify the best team, capable of designing and delivering a unique landmark bridge for this part of London and to explore what the various design options might be; it is not to select a specific design.
TITLE: Minimal Polynomial of Induced Mapping
QUESTION [1 upvotes]: Let $T\in \text{End}(V)$, and let $Y\subset V$ be a T-invariant subspace with $Y\neq 0$ and $Y\neq V$.
Show the minimal polynomial of the induced map $T_{V/Y}:V/Y\rightarrow V/Y$ divides the minimal polynomial of $T$.
This is what I have done so far:
Let $m(x)$ be the minimal polynomial of $T$, and $m_{V/Y}(x)$ be the minimal polynomial of the induced mapping $T_{V/Y}$.
I have first shown that for any polynomial $f(x)\in \mathbb{F}[x]$, $f(T_{V/Y})[v]=[f(T)v]$, where $[v]$ is the corresponding equivalence class in $V/Y$. This thus implies that $m(T_{V/Y})[v]=[0]$ for any $v\in V$.
Then by the division algorithm, $m(x)=m_{V/Y}(x)q(x)+r(x)$, for some $q(x),r(x)\in \mathbb{F}[x]$, and $0\leq deg \ r(x)<deg \ m_{V/Y}(x)$
$$\implies r(T_{V/Y})[v]=m(T_{V/Y})[v]-m_{V/Y}(T_{V/Y})q(T_{V/Y})[v]=0$$
Suppose by contradiction, that $0<deg \ r(x)<deg \ m_{V/Y}(x)$. Let $g(x)=\frac {r(x)}{a}$, where $a\neq 0$ and $a$ is the leading coefficient of $r(x)$. Clearly $g(x)$ is monic and $deg \ r(x)=deg \ g(x)$.
I note that $g(T_{V/Y})[v]=\frac {r(T_{V/Y})[v]}{a}=0$
This is where I am stuck. I need to show that $g(T_{V/Y})=0$, which will thus results in a contradiction showing $deg \ g(x)<deg \ m_{V/Y}(x)$, which is impossible. Thus implying $r(x)=0$, proving the claim.
Is this even the right approach, what am I missing? Any help would be much appreciated. Thanks in advance!
REPLY [1 votes]: Here is a simpler argument.
Since $Y$ is invariant, $m_V(T)=0$ implies $m_V(T_{V/Y})=0$. Therefore, $m_{V/Y}(x)$ divides $m_V(x)$, since $m_{V/Y}(x)$ divides all polynomials that kill $T_{V/Y}$.
Indeed, let $p(x)$ be a polynomial that kills $T_{V/Y}$. Write $p(x)=m_{V/Y}(x) q(x) + r(x)$, where $r(x)=0$ or $\deg r(x) < \deg m_{V/Y}(x)$. Then $r(T_{V/Y})=0$. By minimality of $m_{V/Y}(x)$, we cannot have $\deg r(x) < \deg m_{V/Y}(x)$ and so must have $r(x)=0$.
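A small concrete example (added purely as an illustration) showing that the divisibility can be proper: take $V=\mathbb{F}^2$ with basis $e_1,e_2$, let $T(e_1)=0$, $T(e_2)=e_1$, and let $Y=\operatorname{span}(e_1)$, which is $T$-invariant. The minimal polynomial of $T$ is $x^2$, while $T_{V/Y}[e_2]=[e_1]=[0]$, so $T_{V/Y}=0$ and its minimal polynomial is $x$, which divides $x^2$.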
Oct '11 - May '13 this week. I feel that it fits this blog well, although personally I'm not a fan of the “us vs. them” tone of it. It's too manifesto-like for my taste. But I nodded and smirked reading every single paragraph. The writer speaks my mind using a better voice.
So, see y'all in a bit then?
We, the Web Kids
Via The Atlantic (Alexis Madrigal, senior editor). "We, the Web Kids" by Piotr Czerski is licensed under a Creative Commons Attribution-ShareAlike (Uznanie autorstwa-Na tych samych warunkach) 3.0 Unported License:
Contact the author: piotr[at]czerski.art.pl
A school blog on Arch Dept, Cranbrook Academy of Art. By farid rakun, admitted Fall 2011.
1 Comment
this section is particularly interesting.
TITLE: What is the inverse of the eigenvector matrix
QUESTION [0 upvotes]: I see in a lot of resources that state that in order to find the inverse matrix using the eigendecomposition (for example wikipedia) ,One need to decompose A to it's eigenvectors and eigenvalues, And than, using the fact that the eigenvalues matrix $\Lambda$ is diagonal, the inverse is straightforward.
But why does no one discuss how to compute the inverse of the eigenvector matrix? I'm probably missing something trivial, but my attempts to prove to myself that $Q^{-1}=Q^T$, or to find another way to calculate $Q^{-1}$, didn't succeed.
So, how do we calculate $Q^{-1}$ in order to find $A^{-1}$?
REPLY [1 votes]: If your goal is just to calculate the inverse, then you have gone a longer way than necessary if you do it through a diagonalisation. You have to invert some matrix anyway, and I see no a priori reason why $Q$ should be easier to invert than $A$ (one exception is if $A$ is symmetric, where inversion of $Q$ amounts to taking the transpose, if you just choose its columns right).
However, the diagonalisation is very useful for other things. For instance, calculating high powers of $A$ is a lot easier to do after diagonalisation, because then you're just calculating high powers of a diagonal matrix. If you need to do such things in addition to calculating the inverse of $A$, then "spending" your inverse calculation on $Q^{-1}$ rather than $A^{-1}$ will likely be a good idea.
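To make the symmetric case concrete (a small worked example, with the matrix chosen only for illustration): take
$$A=\begin{pmatrix}2&1\\1&2\end{pmatrix},\qquad \Lambda=\begin{pmatrix}3&0\\0&1\end{pmatrix},\qquad Q=\frac{1}{\sqrt2}\begin{pmatrix}1&1\\1&-1\end{pmatrix}.$$
The columns of $Q$ are orthonormal eigenvectors, so $Q^{-1}=Q^T$ costs nothing, and
$$A^{-1}=Q\Lambda^{-1}Q^T=\frac12\begin{pmatrix}1&1\\1&-1\end{pmatrix}\begin{pmatrix}1/3&0\\0&1\end{pmatrix}\begin{pmatrix}1&1\\1&-1\end{pmatrix}=\begin{pmatrix}2/3&-1/3\\-1/3&2/3\end{pmatrix},$$
which agrees with the direct formula $A^{-1}=\frac13\begin{pmatrix}2&-1\\-1&2\end{pmatrix}$. For a general (non-symmetric) $A$ there is no such shortcut: $Q^{-1}$ has to be computed like any other inverse, e.g. by Gaussian elimination.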
\begin{document}
\begin{abstract}
We discuss the recent developments of semi-classical and micro-local analysis
in the context of nilpotent Lie groups and for sub-elliptic operators.
In particular, we give an overview of pseudo-differential calculi recently defined on nilpotent Lie groups as well as of the notion of quantum limits in the Euclidean and nilpotent cases.
\medskip \noindent {\sc{2010 MSC.}}
43A80; 58J45, 35Q40.
\noindent {\sc{Keywords.}}
Analysis on nilpotent Lie groups,
evolution of solutions to the Schr\"odinger equation,
micro-local and semi-classical analysis for sub-elliptic operators, abstract harmonic analysis, $C^*$-algebra theory.
\end{abstract}
\maketitle
\makeatletter
\renewcommand\l@subsection{\@tocline{2}{0pt}{3pc}{5pc}{}}
\makeatother
\tableofcontents
\section{
Introduction}
Since the 1960's, the analysis of elliptic operators has made fundamental progress with the emergence of pseudo-differential theory and the subsequent developments of micro-local and semi-classical analysis.
In this paper, we consider some questions that are well understood for elliptic operators
and we discuss analogues in the setting of sub-elliptic operators.
\subsection{The questions in the elliptic framework.}
The questions we are interested in concern the tools that have been developed in the elliptic framework to describe and understand the limits in space or in phase-space of families of functions.
They are of two natures: micro-local and semi-classical.
Micro-local analysis aims at understanding elliptic operators in high frequency,
while semi-classical analysis investigates the mathematical evolution of functions and operators depending on a small parameter $\eps$ (akin to the Planck constant in quantum mechanics) that goes to zero.
A typical micro-local question is, for instance, to `understand the convergence' as $j\to \infty$ of an orthonormal basis of eigenfunctions $\psi_j$, $j=0,1,2,\ldots$
$$
\Delta \psi_j = \lambda_j \psi_j, \qquad
\qquad\mbox{with}\quad 0=\lambda_0 <\lambda_1\leq \lambda_2\leq \ldots
$$
of the Laplace operator $\Delta$ on a compact Riemannian manifold $M$.
One way to answer this question is to describe the accumulation points of the sequence of probability measures $|\psi_j(x)|^2 dx$, $j=0,1,2,\ldots$
If $M$ is the $n$-dimensional torus or if the geodesic flow of $M$ is ergodic, then
the volume element $dx$ is an accumulation point of $|\psi_j(x)|^2 dx$, $j=0,1,2,\ldots$ and one can extract a subsequence of density one $(j_k)_{k\in \bN}$,
$$
\mbox{i.e.}\quad \lim_{\Lambda\to \infty} \frac{|\{j_k : \lambda_{j_k} \leq \Lambda\}|}{|\{j : \lambda_{j} \leq \Lambda\}|} =1,
$$
for which the convergence holds, that is, for any continuous function $ a:M\to \bC$,
\begin{equation}
\label{eq_QE_intro}
\lim_{k\to +\infty}
\int_M a( x) \ | \psi_{j_k}(x)|^2 d x = \int_M a ( x)\, d x.
\end{equation}
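For instance, on the flat torus one natural choice of orthonormal basis consists of the exponentials $\psi_k(x)=e^{2\pi i k\cdot x}$, $k\in\mathbb{Z}^n$, for which $|\psi_k(x)|^2\equiv 1$, so that (\ref{eq_QE_intro}) holds trivially along the whole sequence; for other orthonormal bases of eigenfunctions on the torus, the statement is less immediate.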
Under the ergodic hypothesis,
this is a famous result due to
Shnirelman \cite{Shnirelman}, Colin de Verdi\`ere \cite{Colin85}, and Zelditch \cite{zelditch} in 1970's and 80's and sometimes called the Quantum Ergodicity Theorem - see also the semi-classical analogue in
\cite{helffer+martinez+robert}.
A typical semi-classical problem is to understand the quantum evolution of the Schr\"odinger equation
$$
i\eps \partial_t \psi^\eps = -\frac {\eps^2}2\Delta \psi^\eps,
\qquad \mbox{given an} \ L^2\mbox{-bounded family of initial datum}\ \psi^\eps|_{t=0} = \psi^\eps_0;
$$
in this introduction, let us consider the setting of $\bR^n$ to fix ideas.
Again, a mathematical formulation consists in describing the accumulation points of the sequence of measures $|\psi^\eps(t,x) |^2 dx dt$ as $\eps\to 0$.
\subsection{Sub-elliptic operators.}
In this paper, we discuss the extent to which these types of questions have been addressed for sub-elliptic operators.
The main examples of sub-elliptic operators are sub-Laplacians $\cL$ generalising the Laplace operator.
Concrete examples of sub-elliptic and non-elliptic operators include
$$
\cL_{G} = -\partial_u^2 - (u\partial_v)^2 \quad\mbox{on}\ \bR_u\times \bR_v =\bR^2 ,
$$
often called the Grushin operator (the subscript $G$ stands for Grushin).
More generally, H\"ormander sums of squares are sub-elliptic operators; they are operators $\cL= -X_1^2-\ldots -X_{n_1}^2-X_0$ on a manifold $M^n$ where the vector fields $X_j$'s together with their iterated brackets generate the tangent space $TM$ at every point \cite{hormander67}.
A more geometric source of sub-Laplacians is the analysis on sub-Riemannian manifolds, starting with CR manifolds such as the unit sphere of the complex plane $\bC^2$ or even of $\bC^n$ for any $n\geq 2$, and more generally contact manifolds.
Well-known contact manifolds of dimension three include the Lie group $SO(3)$ with two of its three canonical vector fields, as well as the motion group $\bR^2_{x,y} \times \bS^1_\theta$ with the vector fields $X_1=\cos \theta \partial_x +\sin \theta \partial_y$, and $X_2=\partial_\theta$.
Sub-Laplacians appear in many parts of sciences, in physics, biology, finance, etc., see \cite{bramanti}.
A particular framework of sub-Riemannian and sub-elliptic settings is given by Carnot groups; the latter are stratified nilpotent Lie groups $G$ equipped with a basis $X_1,\ldots, X_{n_1}$ for the first stratum $\fg$ of the Lie algebra of $G$.
Using the natural identification of $\fg$ with the space of left-invariant vector fields,
the canonical sub-Laplacian is then $\cL= -X_1^2-\ldots -X_{n_1}^2$.
This is an important class of examples not only because this provides a wealth of models and settings on which to test conjectures, but also more fundamentally, as any H\"ormander sum of squares can be lifted - at least theoretically
\cite{FollandStein74,Rothschild+Stein,Nagel+Stein+Wainger}
- to a Carnot group.
For instance, the Grushin operator $\sL_G$ on $\bR^2$ described above can be lifted to
the sub-Laplacian $\sL_{\bH_1} = -X_1^2 -X_2^2$ on
the Heisenberg group $\bH_1$;
here the product on $\bH_1\sim \bR^3_{x,y,t}$ is given by
$$
(x,y,t) (x',y',t') = \left(x+x',y+y',t+t'+\frac 12 (xy'-x'y)\right),
$$
and $X_1= \partial_x -\frac y2 \partial_t$ and $X_2= \partial_y +\frac x2 \partial_t$.
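In both examples, a direct computation of the commutators shows how the missing direction is recovered: on $\bR^2$,
$$
[\partial_u, u\partial_v] = \partial_v,
$$
while on $\bH_1$,
$$
[X_1,X_2]=\Big[\partial_x -\frac y2 \partial_t,\ \partial_y +\frac x2 \partial_t\Big] = \partial_t ,
$$
so that the vector fields together with their first brackets span the tangent space at every point; H\"ormander's condition is satisfied and both $\cL_G$ and $\sL_{\bH_1}$ are sub-elliptic.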
The examples given above indicate that the analysis of sub-elliptic operators such as H\"ormander sums of squares is more non-commutative than in the elliptic case. Indeed, the commutator of the vector fields $X_j$'s in our examples above usually produces terms that cannot be neglected in any meaningful elliptic analysis, whereas in the elliptic case the $X_j$'s can be chosen to yield local coordinates and therefore commute up to lower order terms.
This led to a difficult non-commutative analysis in the late 70's and 80's around the ideas of lifting to the nilpotent Lie group setting \cite{FollandStein74,Rothschild+Stein,Nagel+Stein+Wainger},
and subsequently in the 80's and 90's using Euclidean micro-local tools as well \cite{Fefferman+Phong,Sanchez,Parmeggiani}.
At the same time, sub-Riemannian geometry was emerging. Although many functional features are almost identical to the
Riemannian case \cite{Strichartz86}, there are fundamental differences regarding e.g. geodesics, charts or local coordinates, tangent spaces, etc.;
see e.g.
\cite{Bellaiche,Gromov,Montgomery,AgrachevBB}.
The analysis of operators on classes of sub-Riemannian manifolds
started with CR and contact manifolds \cite{FollandStein74},
followed by a calculus on Heisenberg manifolds \cite{Beals+Greiner,PongeAMS2008}.
In 2010 \cite{vanErp}, an index theorem was proved for sub-elliptic operators on contact manifolds.
The key idea was to adapt Connes' tangent groupoid \cite{Connes} from the Riemannian setting to the sub-Riemannian one.
For contact manifolds, the Euclidean tangent space is then replaced with the Heisenberg group. Since then,
considerable progress has been achieved in the study
of spectral properties of sub-elliptic operators in these contexts (see e.g. \cite{Dave+Haller}) with the development of these groupoid techniques on filtered manifolds \cite{vanErp,choi+ponge,vanErp+Y}.
Few works on sub-elliptic operators followed the path opened by M. Taylor \cite{TaylorAMS} at the beginning of the 80's,
that is, to use the representation theory of the underlying groups to tackle the non-commutativity.
To the author's knowledge, in the nilpotent case, they are essentially \cite{Bahouri+Fermanian+Gallagher,R+F_monograph,Bahouri+Chemin+Danchin}
and, surprisingly, have appeared only in the past decade.
\subsection{Aim and organisation of the paper}
This paper describes the scientific journey of the author and of her collaborator Clotilde Fermanian-Kammerer towards
micro-local and semi-classical analysis for sub-elliptic operators, especially on nilpotent Lie groups.
The starting point of the investigations was to define and study the analogues of micro-local defect measures. As explained in Section \ref{sec_QL}, this has led us to adopt the more general viewpoint and the vocabulary from $C^*$-algebras regarding states, even in the Euclidean or elliptic case.
The first results regarding micro-local defect measures and semi-classical measures on nilpotent Lie groups are presented
in Section \ref{sec_PDOQLG}, including
applications and future works.
\subsection{Acknowledgement}
The author is grateful to the Leverhulme Trust for their support via Research Project Grant 2020-037.
This paper summarises the main ideas discussed by the author to the Bruno Pini Mathematical Analysis Seminar of the University of Bologna in May 2021.
The author would like to thank the organisers for giving her the opportunity to present the project and for their warm welcome - even in Zoom form.
\section{Quantum limits in Euclidean or elliptic settings}
\label{sec_QL}
In this section, we discuss how micro-local defect measures and semi-classical measures can be seen as quantum limits, that is, as states of $C^*$-algebras.
\subsection{Micro-local defect measures}
The notion of micro-local defect measure, also called H-measure, emerged around 1990 independently in the works of P. G\'erard \cite{gerard_91} and L. Tartar \cite{tartar}.
Their motivations came from PDEs, in relation to the div-curl lemma and more generally to describe phenomena of compensated compactness. The following result gives the existence of micro-local defect measures:
\begin{theorem}[\cite{gerard_91}]
\label{thm_MDM}
Let $\Omega$ be an open subset of $\bR^n$.
Let $(f_j)_{j\in \bN}$ be a bounded sequence in $L^2(\Omega,loc)$ converging weakly to 0.
Then there exist a subsequence $(j_k)_{k\in \bN}$ and a positive Radon measure $\gamma$ on $\Omega\times \bS^{n-1}$ such that the convergence
$$
(Af_j,f_j)_{L^2} \longrightarrow_{j=j_k, k\to \infty} \int_{\Omega\times \bS^{n-1}} a_0(x,\xi) d\gamma(x,\xi)
$$
holds for every classical pseudo-differential operator $A$, where $a_0$ denotes its principal symbol.
\end{theorem}
Here, $\bS^{n-1}$ denotes the unit sphere in $\bR^n$.
The classical pseudo-differential calculus refers to all the H\"ormander pseudo-differential operators of non-positive order, with symbols admitting a homogeneous expansion and with integral kernel compactly supported in $\Omega\times \Omega$.
The measure $\gamma$ in Theorem \ref{thm_MDM}
is called a micro-local defect measure for $(f_j)$,
or the (pure) micro-local defect measure for $(f_{j_k})$.
Examples of micro-local defect measures include
\begin{itemize}
\item an $L^2$-concentration in space $f_j(x) = j^{n/ 2}\chi(j(x-x_0) ) $ about a point $x_0$ (here, $\chi\in C_c^\infty(\bR^n)$ is some given function), whose micro-local defect measure is $\gamma(x,\xi)=\delta_{x_0} (x)\otimes d_\chi(\xi) d\sigma(\xi)$, where $\sigma$ is the uniform measure on $\bS^{n-1}$ (i.e. the standard surface measure on the unit sphere) and $d_\chi(\xi):= \int_{r=0}^\infty |\widehat \chi(r\xi)|^2 r^{n-1} dr$ ,
\item an $L^2$-concentration in oscillations $f_j(x) = \psi(x) e^{2i\pi j \xi_0 \cdot x} $ about a frequency $\xi_0\in \bS^{n-1}$ (here, $\psi$ is some given smooth function with compact support in $\bR^n$), whose micro-local defect measure is $\gamma(x,\xi)=|\psi(x)|^2dx \otimes \delta_{\xi_0}(\xi) $.
\end{itemize}
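Let us verify, for instance, the second example, with the Fourier normalisation $\widehat \psi(\xi)=\int_{\bR^n}\psi(x)e^{-2i\pi x\cdot\xi}dx$. Testing against a Fourier multiplier $A$ with $0$-homogeneous symbol $a_0(\xi/|\xi|)$, $a_0\in C(\bS^{n-1})$, smoothed out near $\xi=0$, we have $\widehat{f_j}(\xi)=\widehat\psi(\xi-j\xi_0)$, so that
$$
(Af_j,f_j)_{L^2}=\int_{\bR^n}a_0\Big(\tfrac{\xi}{|\xi|}\Big)\,|\widehat\psi(\xi-j\xi_0)|^2\,d\xi +o(1)
=\int_{\bR^n}a_0\Big(\tfrac{j\xi_0+\eta}{|j\xi_0+\eta|}\Big)\,|\widehat\psi(\eta)|^2\,d\eta +o(1)
\ \longrightarrow_{j\to\infty}\ a_0(\xi_0)\int_{\bR^n}|\psi(x)|^2dx,
$$
by dominated convergence and the Plancherel formula; this is indeed $\int a_0\, d\gamma$ for $\gamma=|\psi(x)|^2dx \otimes \delta_{\xi_0}(\xi)$. The computation for operators of multiplication by a continuous function of $x$ is immediate.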
Theorem \ref{thm_MDM} extends readily to manifolds by replacing $\Omega\times \bS^{n-1}$ with the spherical co-tangent bundle.
The introduction of this paper mentions the Quantum Ergodicity Theorem, see \eqref{eq_QE_intro}; this is in fact the reduced version `in position'. In modern terms, the full Quantum Ergodicity Theorem states that the Liouville measure $dx\otimes d\sigma(\xi)$ is a micro-local defect measure of the sequence $(\psi_j)_{j\in \bN_0}$, for which the subsequence $(j_k)$ is of density one.
\subsection{The viewpoint of quantum limits}
\label{subsec_viewptQL}
The notion of quantum limit used by the author is along the lines of the following:
\begin{definition}
The \emph{quantum limit} of a sequence $(f_j)$ of unit vectors in a Hilbert space $\cH$ is any accumulation point
of the functional $A\mapsto (Af_j,f_j)_\cH$ on a sub-$C^*$-algebra of $\sL(\cH)$.
\end{definition}
One may still keep the vocabulary `quantum limits' in slightly more general contexts.
For instance, one often encounters a subalgebra of $\sL(\cH)$ that may need to be completed into a $C^*$-algebra,
possibly after quotienting by (a subspace of) the kernel of the mapping $A\mapsto \limsup_{j\to \infty} |(Af_j,f_j)_\cH|$.
We may also consider a bounded family $(f_j)$ in $\cH$ rather than unit vectors, leaving the normalisation for the proofs of further properties.
The applications we have in mind involve pseudo-differential calculi as subalgebras of $\sL(\cH)$ where the Hilbert space $\cH$ is some $L^2$-space.
A quantum limit in this context will often turn out to be a state
(or a positive functional if the $\|f_j\|_{\cH}$'s are only bounded)
on a space of symbols, hence a positive Radon measure in the commutative case. Indeed, from functional analysis, we know that a bounded linear functional on the space of continuous functions on a (say) compact space is given by a Radon measure, and if the functional is also positive, the measure will be positive as well.
\medskip
Let us now explain how the viewpoint of quantum limits and states gives another proof of Theorem \ref{thm_MDM} by first obtaining the following result:
\begin{lemma}
\label{lem_thm_MDM}
Let $\Omega$ be an open bounded subset of $\bR^n$.
Let $(f_j)_{j\in \bN}$ be a bounded sequence in $L^2(\bar \Omega)$ converging weakly to 0 as $j\to \infty$.
Then there exist a subsequence $(j_k)_{k\in \bN}$ and a positive Radon measure $\gamma$ on $\bar \Omega\times \bS^{n-1}$ such that
$$
(Af_j,f_j)_{L^2(\bar\Omega)} \longrightarrow_{j=j_k, k\to \infty} \int_{\bar \Omega\times \bS^{n-1}} a_0(x,\xi) d\gamma(x,\xi)
$$
holds for every classical pseudo-differential operator $A$
whose principal symbol $a_0$ is $x$-supported in $\bar \Omega$.
\end{lemma}
\begin{proof}[Sketch of the proof of Lemma \ref{lem_thm_MDM}]
If $\limsup_{j\to \infty} \|f_j\|_{L^2(\bar \Omega)}=0$, then $\gamma=0$.
Hence, we may assume that
$\limsup_{j\to \infty} \|f_j\|_{L^2(\bar \Omega)}=1$.
We consider the sequence of functionals $\ell_j :A \mapsto (Af_j,f_j)_{L^2}$
on the algebra $\cA_0$ of classical pseudo-differential operators $A$ whose symbols are $x$-supported in $\bar \Omega$.
The weak convergence of $(f_j)$ to zero means that $\lim_{j\to \infty} \ell_j(A)=0$ for every operator $A$ in
$$
\cK = \{\mbox{compact operators in} \ \cA_0 \} \sim \{\mbox{operators in} \ \cA_0 \ \mbox{of order }<0\},
$$
by Rellich's theorem.
The properties of the pseudo-differential calculus imply that any accumulation point of $(\ell_j)_{j\in \bN}$ is a state on the $C^*$-closure
$\overline{\cA_0 /\cK}$ of the quotient; we recognise the abelian $C^*$-algebra generated by the principal symbols $x$-supported in $\bar \Omega$, that is, the space of continuous functions on the compact space $\bar\Omega\times \bS^{n-1}$. Such a state is given by a positive Radon measure on $\bar\Omega\times \bS^{n-1}$.
\end{proof}
Let us now give the new proof of Theorem \ref{thm_MDM} announced above.
Adopting the setting of the statement, we find a sequence of open sets $\Omega_k$, $k=1,2,\ldots$ such that $\bar \Omega_k$ is a compact subset of $\Omega_{k+1}$ and $\cup_{k\in \bN} \Omega_k =\Omega$.
Applying Lemma \ref{lem_thm_MDM} to each $\Omega_k$ together with a diagonal extraction yields Theorem \ref{thm_MDM}.
\medskip
The author and her collaborator Clotilde Fermanian Kammerer are forever indebted to Professor Vladimir Georgescu for his enlightening explanations on the proof of the existence of micro-local defect measures given above.
Vladimir Georgescu's comments describe also the states of other $C^*$-algebras of operators bounded on $L^2$ from profound works by O. Cordes and his collaborators on Gelfand theory for pseudo-differential calculi
\cite{Cordes+H,Cordes79,Cordes87,Cordes95,Taylor71}. They also provide a framework which generalises the two original proofs of the existence of H-measures / micro-local defect measures:
\begin{itemize}
\item
the one by L. Tartar \cite{tartar}
which uses operators of multiplication in position and Fourier multipliers in frequency, and
\item
the one by P. G\'erard \cite{gerard_91} relying on properties of the classical pseudo-differential calculus, especially the G\r arding inequality.
\end{itemize}
\subsection{Semi-classical measures as quantum limits}
\label{subsec_scmql}
The semi-classical calculus used here is `basic' in the sense that it is restricted to the setting of $\bR^n$ and to operators $\Op_\eps(a)$ with $a\in C_c^\infty(\bR^n\times\bR^n)$ for instance.
Here $\Op_\eps(a)=\Op(a_\eps)$ is `the' pseudo-differential operator with symbol $a_\eps(x,\xi)=a(x,\eps \xi)$ via a chosen $t$-quantisation on $\bR^n$ - for instance the Weyl quantisation ($t=1/2$) or the Kohn-Nirenberg quantisation ($t=0$, also known as PDE quantisation and often written as $\Op(a) = a(x,D)$).
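For concreteness, with the Kohn-Nirenberg quantisation ($t=0$) and the Fourier normalisation $\widehat f(\xi)=\int_{\bR^n}f(x)e^{-2i\pi x\cdot\xi}dx$, the operators above read
$$
\Op_\eps(a)f(x)=\int_{\bR^n}e^{2i\pi x\cdot\xi}\,a(x,\eps\xi)\,\widehat f(\xi)\,d\xi,
\qquad f\in\cS(\bR^n),\ x\in\bR^n,
$$
so that, for $a$ compactly supported in the frequency variable, $\Op_\eps(a)$ only involves the frequencies of $f$ of size at most $O(1/\eps)$.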
More sophisticated semi-classical calculi can be defined, for instance allowing the symbols $a$ to depend on $\eps$ and in the context of manifolds, see e.g. \cite{zworski}.
Semi-classical measures were introduced in the 90's in works such as \cite{gerard_X,gerardleichtnam,GMMP,LionsPaul}.
In this section, we show how the viewpoint of quantum limits gives a simple proof of their existence as in the case of micro-local defect measures (see Section \ref{subsec_viewptQL}).
With the Weyl quantisation, the existence of semi-classical measures can be proved using graduate-level functional analysis and the resulting measures are called Wigner measures. But our proof below is independent of the chosen quantisation.
\begin{theorem}
\label{thm_scm}
Let $(f_\eps)_{\eps>0}$ be a bounded family in $L^2(\bR^n)$.
Then there exists a sequence $\eps_k$, $k\in \bN$ with $\eps_k\to 0$ as $k\to \infty$, and a positive Radon measure $\gamma$ on $\bR^n\times \bR^n$ such that
$$
\forall a\in C_c^\infty (\bR^n\times\bR^n)
\qquad
(\Op_\eps (a)f_\eps,f_\eps)_{L^2} \longrightarrow_{\eps=\eps_k, k\to \infty} \int_{ \bR^n\times \bR^n} a(x,\xi) d\gamma(x,\xi).
$$
\end{theorem}
\begin{proof}[Sketch of the proof of Theorem \ref{thm_scm}]
We may assume $\limsup_{\eps\to 0} \|f_\eps\|_{L^2}=1$.
We set $\ell_\eps(a) :=(\Op_\eps (a)f_\eps,f_\eps)_{L^2}$.
For each $a\in C_c^\infty (\bR^n\times\bR^n)$, the family $(\ell_\eps(a))_{\eps>0}$ is bounded, so it admits accumulation points as $\eps\to 0$.
A diagonal extraction and the separability of $C_c^\infty (\bR^n\times\bR^n)$ yield the existence of $\ell = \lim_{k\to \infty} \ell_{\eps_k}$ on $C_c^\infty(\bR^n\times \bR^n)$.
From the properties of the semi-classical calculus, one checks that $\ell$ extends to a state of the commutative $C^*$-algebra
$\overline{C_c^\infty(\bR^n\times \bR^n)}$, hence a positive Radon measure on $\bR^n\times\bR^n$.
\end{proof}
The semi-classical analogues of the examples of micro-local defect measures are:
\begin{itemize}
\item an $L^2$-concentration in space $f_\eps(x) = \eps^{- n/2}\chi(\frac{x-x_0}\eps ) $ about a point $x_0$ (again, $\chi\in C_c^\infty(\bR^n)$ is some given function), whose semi-classical measure is $\gamma(x,\xi)=\delta_{x_0}(x) \otimes |\widehat \chi(\xi)|^2 d\xi$,
\item an $L^2$-concentration in oscillations $f_\eps(x) = \psi(x) e^{2i\pi \xi_0 \cdot x / \eps} $ about a frequency $\xi_0\in \bR^{n}$ (again, $\psi\in C_c^\infty(\bR^n)$ is some given function), whose semi-classical measure is $\gamma=|\psi(x)|^2dx \otimes \delta_{\xi_0}(\xi) $.
\end{itemize}
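The first of these two limits may be checked directly with the Kohn-Nirenberg quantisation recalled above (at leading order the result does not depend on the chosen quantisation): since $\widehat{f_\eps}(\xi)=\eps^{n/2}e^{-2i\pi x_0\cdot\xi}\widehat\chi(\eps\xi)$, the changes of variables $x=x_0+\eps u$ and $\xi=\eta/\eps$ give
$$
(\Op_\eps(a) f_\eps, f_\eps)_{L^2}
= \iint_{\bR^n\times\bR^n} e^{2i\pi u\cdot\eta}\, a(x_0+\eps u,\eta)\, \widehat\chi(\eta)\, \overline{\chi(u)}\, du\, d\eta
\ \longrightarrow_{\eps\to0}\ \int_{\bR^n} a(x_0,\eta)\, |\widehat\chi(\eta)|^2 d\eta,
$$
using $\int_{\bR^n} e^{2i\pi u\cdot\eta}\overline{\chi(u)}\,du=\overline{\widehat\chi(\eta)}$; this is the integral of $a$ against $\delta_{x_0}(x)\otimes|\widehat\chi(\xi)|^2d\xi$.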
\subsection{Applications}
\label{subsec_app}
Let us give an application of quantum limits to semi-classical analysis already mentioned in the introduction in the form of the following result taken from \cite[Appendix A]{FFJST}.
This is an elementary version of properties that hold in more general settings and for more general Hamiltonians, including integrable systems (see~\cite{AFM,CFM}).
\begin{proposition}
Let $(\psi^\eps_0)_{\eps>0}$ be a bounded family in $L^2(\bR^n)$,
and let $\psi^\eps$ be the associated solutions to the Schr\"odinger equation,
$$
i\eps^\tau \partial_t \psi^\eps = -\frac {\eps^2}2\Delta \psi^\eps,
\qquad \ \psi^\eps|_{t=0} = \psi^\eps_0,
$$
where $\Delta=- \sum_{1\leq j\leq n} \partial_{x_j}^2$ is the standard Laplace operator on $\bR^n$.
We assume that the oscillations of the initial data are exactly of size $1/\eps$ in the sense that we have:
$$
\exists s,C_s>0,\qquad \forall \eps>0\qquad \eps^{s} \| \Delta^{s / 2} \psi^\eps_0\|_{L^2(\bR^n)}+ \eps^{-s} \| \Delta^{-{s / 2} }\psi^\eps_0\|_{L^2(\bR^n)}\leq C_s.
$$
Any limit point of the measures $\left|\psi^\eps(t,x)\right| ^2 dxdt$ as $\eps\to 0$ is of the form $\varrho_t(x) dt$ where $\varrho_t$ is a measure on $\bR^n$ satisfying:
\begin{enumerate}
\item $\partial_t \varrho_t =0$ for $\tau\in(0,1)$,
\item $\varrho_t(x)=\int_{\bR^n} \gamma_0(x-t\xi,d\xi)$ for $\tau=1$,
\item $\varrho_t=0$ for $\tau >1$.
\end{enumerate}
\end{proposition}
\begin{proof}
Using for instance the notion of quantum limits, we obtain time-dependent semi-classical measures in the sense of the existence of a subsequence $(\eps_k)$ and of a continuous map $t\mapsto \gamma_t$ from $\bR$ to the space of positive Radon measures such that
$$
\int_\bR \theta(t) \,
(\Op_\eps (a)\psi^\eps(t),\psi^\eps(t))_{L^2}\, dt \longrightarrow_{\eps=\eps_k, k\to \infty} \iint_{\bR \times \bR^{2n}} \theta(t)\, a(x,\xi)\, d\gamma_t(x,\xi) dt,
$$
for any $\theta\in C_c^\infty(\bR)$ and $a\in C_c^\infty(\bR^{2n})$.
Now,
up to a further extraction of a subsequence, we obtain using the Schr\"odinger equation:
\begin{enumerate}
\item for $\tau\in(0,1)$, $\gamma_t(x,\xi)=\gamma_0(x,\xi)$ for all times $t\in\bR$,
\item for $\tau=1$,
$
\partial_t \gamma_t(x,\xi) = \xi\cdot \nabla_x \gamma_t(x,\xi)$
in the sense of distributions,
\item for $\tau>1$,
$\gamma_t=0$ for all times $t\in\bR$.
\end{enumerate}
Taking the $x$-marginals of the measures $\gamma_t$ gives the measures described in the statement.
\end{proof}
The usual Schr\"odinger equation corresponds to $\tau=1$, as in the introduction of this paper.
In this case, the description of the semi-classical measure above provides the link between the quantum world and the classical one: $\gamma_t$ is obtained from $\gamma_0$ by transport along the Hamiltonian flow of classical mechanics.
\section{Pseudo-differential theory and quantum limits on nilpotent Lie groups}
\label{sec_PDOQLG}
In this section, we will present the works
\cite{FFchina,FFPisa,FFJST} of Clotilde Fermanian-Kammerer and the author about quantum limits on nilpotent Lie groups. We will only describe briefly the setting and the notation, referring the interested reader to the literature for all the technical details.
We will end with a word on future developments.
\subsection{Preliminaries on nilpotent Lie groups}
Let us consider a nilpotent Lie group $G$; we will always assume that nilpotent Lie groups are connected and simply connected.
If we fix a basis $X_1,\ldots, X_n$ of its Lie algebra $\fg$,
via the exponential mapping $\exp_G : \fg \to G$, we identify
the points $(x_{1},\ldots,x_n)\in \bR^n$
with the points $x=\exp_G(x_{1}X_1+\cdots+x_n X_n)$ in~ $G$.
This also leads to a corresponding Lebesgue measure on $\fg$ and the Haar measure $dx$ on the group $G$,
hence $L^p(G)\cong L^p(\bR^n)$
and we allow ourselves to denote by $C(G),\, C_c^\infty(G), \, \cS(G)$ etc,
the spaces of continuous functions, of smooth and compactly supported functions, and
of Schwartz functions on $G$ identified with $\bR^n$,
and similarly for distributions.
The group convolution of two functions $f_1$ and $f_2$,
for instance square integrable,
is defined via
$$
(f_1*f_2)(x):=\int_G f_1(y) f_2(y^{-1}x) dy.
$$
The convolution is not commutative: in general, $f_1*f_2\not=f_2*f_1$.
A vector of $\fg$ defines a left-invariant vector field on $G$
and, more generally,
the universal enveloping algebra $\fU(\fg)$ of $\fg$
is isomorphic to the space of the left-invariant differential operators;
we keep the same notation for the vectors and the corresponding operators.
Let $\pi$ be a representation of $G$.
Unless otherwise stated, we always assume that such a representation $\pi$
is strongly continuous and unitary, and acts on a separable Hilbert space denoted by $\cH_\pi$.
Furthermore, we keep the same notation for the corresponding infinitesimal representation
which acts on $\fU(\fg)$
and on the space $\cH_\pi^\infty$ of smooth vectors.
It is characterised by its action on $\fg$
$$
\pi(X)=\partial_{t=0}\pi(e^{tX}),
\quad X\in \fg.
$$
We define the \emph{group Fourier transform} of a function $f\in L^1(G)$
at $\pi$ by
$$
\pi(f) \equiv \widehat f(\pi) \equiv \cF_G(f)(\pi)=\int_G f(x) \pi(x)^*dx.
$$
We denote by $\Gh$ the unitary dual of $G$,
that is, the unitary irreducible representations of $G$ modulo equivalence and identify a unitary irreducible representation
with its class in $\Gh$. The set $\Gh$ is naturally equipped with a structure of standard Borel space.
The Plancherel measure is the unique positive Borel measure $\mu$
on $\Gh$ such that
for any $f\in C_c(G)$, we have:
\begin{equation}
\label{eq_plancherel_formula}
\int_G |f(x)|^2 dx = \int_{\Gh} \|\cF_G(f)(\pi)\|_{HS(\cH_\pi)}^2 d\mu(\pi).
\end{equation}
Here $\|\cdot\|_{HS(\cH_\pi)}$ denotes the Hilbert-Schmidt norm on $\cH_\pi$.
This implies that the group Fourier transform extends unitarily from
$L^1(G)\cap L^2(G)$ to $L^2(G)$ onto
$L^2(\Gh):=\int_{\Gh} \cH_\pi \otimes\cH_\pi^* d\mu(\pi)$
which we identify with the space of $\mu$-square integrable fields on $\Gh$.
A \emph{symbol} is a measurable field of operators $\sigma(x,\pi):\cH_\pi^\infty \to \cH_\pi^\infty$, parametrised by $x\in G$ and $\pi\in \Gh$.
We formally associate to $\sigma$ the operator $\Op(\sigma)$
as follows
$$
\Op(\sigma) f (x) := \int_{\Gh}
\tr \left(\pi(x) \sigma(x,\pi) \widehat f (\pi) \right)
d\mu(\pi),
$$
where $f\in \cS(G)$ and $x\in G$.
If $G$ is the abelian group $\bR^n$, this corresponds to the Kohn-Nirenberg quantisation.
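Indeed, for $G=(\bR^n,+)$ and with the normalisation $\pi_\xi(x)=e^{2i\pi x\cdot\xi}$, the unitary irreducible representations are the characters $\pi_\xi$, $\xi\in\bR^n$, acting on $\cH_{\pi_\xi}=\bC$; the Plancherel measure is then the Lebesgue measure $d\xi$, $\widehat f(\pi_\xi)=\int_{\bR^n}f(x)e^{-2i\pi x\cdot\xi}dx$ is the usual Fourier transform, and the quantisation above reads
$$
\Op(\sigma)f(x)=\int_{\bR^n}e^{2i\pi x\cdot\xi}\,\sigma(x,\xi)\,\widehat f(\xi)\,d\xi = \sigma(x,D)f(x).
$$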
Regarding symbols, when no confusion is possible,
we will allow ourselves some notational shortcuts,
for instance writing $\sigma(x,\pi)$
when considering the field of operators $\{\sigma(x,\pi) :\cH_\pi^\infty \to \cH_\pi^\infty, (x,\pi)\in G\times\Gh\}$ with the usual identifications
for $\pi\in \Gh$ and $\mu$-measurability.
This quantisation has already been observed in \cite{TaylorAMS,Bahouri+Fermanian+Gallagher,R+F_monograph} for instance.
It can be viewed as an analogue of the Kohn-Nirenberg quantisation
since the inverse formula can be written as
$$
f (x) = \int_{\Gh}
\tr \left(\pi(x) \widehat f (\pi) \right)
d\mu(\pi),
\quad f\in \cS(G), \ x\in G.
$$
This also shows that the operator
associated with the symbol $\id=\{\id_{\cH_\pi} , (x,\pi)\in G\times\Gh\} $
is the identity operator $\Op(\id)=\id$.
Note that (formally or whenever it makes sense),
if we denote the (right convolution) kernel of $\Op(\sigma)$ by $\kappa_x$,
that is,
$$
\Op(\sigma)\phi(x)=(\phi*\kappa_x)(x),
\quad x\in G, \ \phi\in \cS(G),
$$
then it is given by
$$
\pi(\kappa_x)=\sigma(x,\pi).
$$
Moreover the integral kernel of $\Op(\sigma)$ is
$$
K(x,y)=\kappa_x(y^{-1}x),\quad\mbox{where}\quad
\Op(\sigma)\phi(x)=\int_G K(x,y) \phi(y)dy.
$$
We shall abuse the vocabulary and call $\kappa_x$
the kernel of $\sigma$, and $K$ its integral kernel.
\subsection{Pseudo-differential calculi on graded nilpotent Lie groups}
\subsubsection{Preliminaries on graded groups}
Graded groups are connected and simply connected
Lie groups
whose Lie algebra $\fg$
admits an $\bN$-gradation
$\fg= \oplus_{\ell=1}^\infty \fg_{\ell}$
where the $\fg_{\ell}$, $\ell=1,2,\ldots$,
are vector subspaces of $\fg$,
almost all equal to $\{0\}$,
and satisfying
$[\fg_{\ell},\fg_{\ell'}]\subset\fg_{\ell+\ell'}$
for any $\ell,\ell'\in \bN$.
These groups are nilpotent. Examples of such groups are the Heisenberg group
and, more generally,
all stratified groups (which by definition correspond to the case $\fg_1$ generating the full Lie algebra $\fg$); with a choice of basis or of scalar product on $\fg_1$, the latter are called Carnot groups.
Graded groups are homogeneous in the sense of Folland-Stein \cite{folland+stein_82}
when equipped with the dilations
given by the linear mappings $D_r:\fg\to \fg$,
$D_r X=r^\ell X$ for every $X\in \fg_\ell$, $\ell\in \bN$.
We may list the integers $\ell\in \bN$ such that $\fg_\ell\not=\{0\}$
as a non-decreasing sequence of positive integers
$\upsilon_1,\ldots,\upsilon_n$ counted with multiplicity,
the multiplicity of $\fg_\ell$ being its dimension.
In this way, the integers $\upsilon_1,\ldots, \upsilon_n$ become
the weights of the dilations and we have $D_r X_j =r^{\upsilon_j} X_j$, $j=1,\ldots, n$,
on a basis $X_1,\ldots, X_n$ of $\fg$ adapted to the gradation.
We denote the corresponding dilations on the group via
$$
rx = \exp (D_r X), \quad \mbox{for} \ x= \exp (X)\in G.
$$
This leads to homogeneous notions for functions, distributions and operators. For instance,
the homogeneous dimension of $G$ is the homogeneity of the Haar measure, that is,
$Q:=\sum_{\ell\in \bN}\ell \dim \fg_\ell $;
and the differential operator $X^\alpha$ is homogeneous of degree
$[\alpha]:=\sum_j \upsilon_j\alpha_{j}$.
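For instance, on the Heisenberg group $\bH_1$ described in the introduction, the gradation is $\fg=\fg_1\oplus\fg_2$ with $\fg_1={\rm span}(X_1,X_2)$ and $\fg_2={\rm span}(\partial_t)$; the weights are $\upsilon_1=\upsilon_2=1$ and $\upsilon_3=2$, the dilations read $r\,(x,y,t)=(rx,ry,r^2t)$, the homogeneous dimension is $Q=1+1+2=4$, and the sub-Laplacian $\sL_{\bH_1}=-X_1^2-X_2^2$ is homogeneous of degree $2$.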
\subsubsection{The symbolic pseudo-differential calculus on $G$}
\label{subsubsec_symbPDC}
In the monograph \cite{R+F_monograph},
the (Fr\'echet) space $S^m(G)$ of symbols of degree $m\in \bR$ on $G$ is defined
and the properties of the corresponding space of operators $\Psi^m(G) = \Op(S^m(G))$ are studied.
Naturally, when $G$ is the abelian group $\bR^n$,
the classes of symbols and of operators are the ones due to H\"ormander.
In the monograph,
it is proved that $\Psi^*(G):= \cup_{m\in \bR} \Psi^m(G)$ is a symbolic pseudo-differential calculus in the following sense:
\begin{itemize}
\item $\Psi^*(G)$ is an algebra of operators,
with an asymptotic formula for
$\Op(\sigma_1)\Op(\sigma_2)=\Op(\sigma)$.
\item $\Psi^m(G)$ is adjoint-stable, i.e.
$\Op(\sigma)^* =\Op(\tau) \in \Psi^m(G)$ when $\sigma\in S^m(G)$,
with an asymptotic formula for $\tau$.
\item
$\Psi^*(G)$
contains the left-invariant differential calculus as
$X^\alpha \in \Psi^{[\alpha]}(G)$.
\item
$\Psi^*(G)$
contains the spectral calculus of the positive Rockland operators.
Note that in the context of graded groups, the positive Rockland operators are the analogues of the elliptic operators.
\item
$\Psi^*(G)$ acts continuously on the Sobolev spaces adapted to the graded groups with
$\Psi^m(G)\ni T : L^p_s(G)\to L^p_{s-m}(G)$.
\end{itemize}
\subsubsection{The classical pseudo-differential calculus on $G$}
\label{subsubsec_PsiclG}
Part of the paper \cite{FFPisa} is devoted to defining the notions of homogeneous symbols
and of classes $\dot S^m(G)$ of homogeneous symbols of degree $m$.
Indeed, the dilations on the group $G$ induce an action of $\bR^+$ on the dual $\Gh$ via
\begin{equation}
\label{eq_rpi}
r \cdot \pi (x) = \pi(r x), \qquad \pi\in \Gh, \ r>0, \ x\in G.
\end{equation}
The homogeneous symbols are then measurable fields of operators on $G\times \Sigma_1$ where
$$
\Sigma_1:=(\Gh / \bR^+) \setminus \{1_{\Gh}\}
$$
is the analogue of the sphere on the Fourier side in the Euclidean case.
This then allows us to consider symbols admitting a homogeneous expansion.
The space of operators in $\Psi^m(G)$ whose symbols admit a homogeneous expansion and whose integral kernels are compactly supported is denoted by $\Psi^m_{cl}(G)$.
It is proved that $\Psi_{cl}^*(G):= \cup_{m\in \bR} \Psi^m_{cl}(G)$ is also a symbolic pseudo-differential calculus in the same sense as in Section \ref{subsubsec_symbPDC}.
Furthermore, there is a natural notion of principal symbol associated to a symbol; the principal symbol is homogeneous by construction.
Again, when $G$ is the abelian group $\bR^n$,
this calculus is the well-known classical pseudo-differential calculus,
and the notion of principal symbol is the usual one.
We set $\Psi_{cl}^{\leq 0}(G):= \cup_{m\leq 0} \Psi^m_{cl}(G)$.
Depending on the context,
the classical pseudo-differential calculus on $G$ may refer to
the space of operators of any order in $\Psi_{cl}^*(G)$
or to the space of operators of non-positive orders $\Psi_{cl}^{\leq 0}(G)$.
\subsubsection{The semi-classical pseudo-differential calculus on $G$}
The semi-classical pseudo-differential calculus
was presented in the context of groups of Heisenberg type in \cite{FFJST}, but in fact extends readily to any graded group $G$.
We consider the class of symbols ${\mathcal A}_0$ of fields of operators defined on $G\times \Gh$
$$
\sigma(x,\pi)\in{\mathcal L}(\cH_\pi),\;\;(x,\pi)\in G\times\Gh,
$$
that are of the form
$$\sigma(x,\pi) = \cF_G \kappa_{x} (\pi),$$
where $\kappa_{x}(y)$ is smooth and compactly supported in $x$ while being Schwartz in $y$; more technically, the map
$x\mapsto \kappa_{x}$ is in $C_c^\infty(G:\cS(G))$.
The group Fourier transform yields a bijection from $C_c^\infty(G:\cS(G))$ onto $\cA_0$, and we equip $\cA_0$ with the Fr\'echet topology so that this mapping is an isomorphism of topological vector spaces.
Let $\eps\in (0,1]$ be a small parameter.
For every symbol $\sigma\in \cA_0$, we consider the dilated symbol
obtained using the action of $\bR^+$ on $\Gh$, see \eqref{eq_rpi},
$$
\sigma^{(\eps)}:=
\{\sigma(x,\eps \cdot\pi) : (x, \pi)\in G\times \Gh\},
$$
and then the associated operator
$$
\Op^\eps (\sigma) := \Op (\sigma^{(\eps)}).
$$
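Concretely, using the convention \eqref{eq_rpi} and a change of variables in the integral defining the group Fourier transform, the dilated symbol has kernel given by the $L^1$-normalised dilation of the kernel $\kappa_x$ of $\sigma$:
$$
\sigma(x,\eps\cdot\pi)=\cF_G\big(\kappa_x^{(\eps)}\big)(\pi),
\qquad
\kappa_x^{(\eps)}(y):=\eps^{-Q}\,\kappa_x\big(\eps^{-1} y\big),
$$
so that $\Op^\eps(\sigma)\phi(x)=(\phi*\kappa^{(\eps)}_x)(x)$ and the convolution kernels concentrate at the group identity as $\eps\to 0$.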
As in the case of $\bR^n$ (see Section \ref{subsec_scmql}),
this yields a (basic)
semi-classical calculus in the following sense:
\begin{itemize}
\item Each operator $ \Op^\eps (\sigma)$, $\sigma\in \cA_0$, is bounded on $L^2(G)$ with
$$
\| \Op^\eps (\sigma)\|_{\sL(L^2(G))} \leq \|\sigma\|_{\cA_0}:= \int_{G} \sup_{x\in G} |\kappa_{x}(y)|dy,
$$
where $\kappa_x$ is the kernel of $\sigma$; $\|\cdot\|_{\cA_0}$ defines a continuous semi-norm on $\cA_0$.
\item The singularities of the integral kernels of the operators concentrate around the diagonal as $\eps \to 0$:
$$
\forall N\in \bN \quad \exists C_N>0\quad \forall \eps\in (0,1] ,\ \sigma\in \cA_0\quad
\|\sigma - \cF_G \left( \kappa_{x} \chi (\eps \, \cdot)\right)\|_{\cA_0} \leq C_N {\eps}^{N}
$$
where $\chi \in C_c^\infty(G)$ is a fixed function identically equal to 1 on a neighbourhood of 0.
\item There is a calculus in the sense of expansions in powers of $\eps$ in $\sL(L^2(G))$
for products
$\Op^{\eps}(\sigma_1)\Op^{\eps}(\sigma_2)$ and for adjoints
$\Op^{\eps}(\sigma)^*$; here $\sigma_1,\sigma_2,\sigma\in \cA_0$.
\end{itemize}
\subsection{Operator-valued measures}
In Section \ref{sec_QL}, we explained why quantum limits in Euclidean or elliptic settings are often described with positive Radon measures on the spaces of symbols as these spaces are then commutative $C^*$-algebras.
In the context of nilpotent Lie groups, the symbols are operator-valued, and we will see below that our examples of quantum limits will then be described in terms of operator-valued measures as introduced in \cite{FFchina,FFPisa}. Let us recall the precise definition of this notion:
\begin{definition}
\label{def_gammaGamma}
Let $Z$ be a complete separable metric space,
and let $\xi\mapsto \cH_\xi$ be a measurable field of complex Hilbert spaces over $Z$.
\begin{itemize}
\item
The set
$ \widetilde{\mathcal M}_{ov}(Z,(\cH_\xi)_{\xi\in Z})$
is the set of pairs $(\gamma,\Gamma)$ where $\gamma$ is a positive Radon measure on~$Z$
and $\Gamma=\{\Gamma(\xi)\in {\mathcal L}(\cH_\xi):\xi \in Z\}$ is a measurable field of trace-class operators
such that
$$\|\Gamma d \gamma\|_{\mathcal M}:=\int_Z{\rm Tr}_{\cH_\xi} |\Gamma(\xi)|d\gamma(\xi)
<\infty.
$$
Here ${\rm Tr}_{\cH_\xi} |\Gamma(\xi)|$ denotes the standard trace of the trace-class operator $ |\Gamma(\xi)|$ on the separable Hilbert space $\cH_\xi$.
\item
Two pairs $(\gamma,\Gamma)$ and $(\gamma',\Gamma')$
in $\widetilde {\mathcal M}_{ov}(Z,(\cH_\xi)_{\xi\in Z})$
are {equivalent} when there exists a measurable function $f:Z\to \mathbb C\setminus\{0\}$ such that
$$d\gamma'(\xi) =f(\xi) d\gamma(\xi)\;\;{\rm and} \;\;\Gamma'(\xi)=\frac 1 {f(\xi)} \Gamma(\xi)$$ for $\gamma$-almost every $\xi\in Z$.
The equivalence class of $(\gamma,\Gamma)$ is denoted by $\Gamma d \gamma$,
and the resulting quotient set is
denoted by ${\mathcal M}_{ov}(Z,(\cH_\xi)_{\xi\in Z})$.
\item
A pair $(\gamma,\Gamma)$
in $ \widetilde {\mathcal M}_{ov}(Z,(\cH_\xi)_{\xi\in Z})$
is {positive} when
$\Gamma(\xi)\geq 0$ for $\gamma$-almost all $\xi\in Z$.
In this case, we may write $(\gamma,\Gamma)\in \widetilde {\mathcal M}_{ov}^+(Z,(\cH_\xi)_{\xi\in Z})$,
and $\Gamma d\gamma \geq 0$ for $\Gamma d\gamma \in {\mathcal M}_{ov}^+(Z,(\cH_\xi)_{\xi\in Z})$.
\end{itemize}
\end{definition}
By convention and if not otherwise specified, a representative of the class $\Gamma d\gamma$ is chosen such that ${\rm Tr}_{\cH_\xi} \Gamma(\xi)=1$ for $\gamma$-almost every $\xi\in Z$.
In particular, if $\cH_\xi$ is $1$-dimensional, $\Gamma=1$ and $\Gamma d\gamma$ reduces to the measure $d\gamma$.
One checks readily that $\mathcal M_{ov} (Z,(\cH_\xi)_{\xi\in Z})$ equipped with the norm $\| \cdot\|_{{\mathcal M}}$ is a Banach space.
\medskip
When the field of Hilbert spaces is clear from the setting,
we may write
$$
\mathcal M_{ov} (Z) = \mathcal M_{ov} (Z,(\cH_\xi)_{\xi\in Z}),
\quad
\mbox{and}\quad
\mathcal M_{ov}^+ (Z) = \mathcal M_{ov}^+ (Z,(\cH_\xi)_{\xi\in Z}),
$$
for short.
For instance, if $\xi\mapsto \cH_\xi$ is given by $\mathcal H_\xi=\mathbb C$ for all $\xi$,
then $\mathcal M_{ov} (Z)$ coincides with the space of finite Radon measures on $Z$.
Another example is when $Z$ is of the form $Z=Z_1 \times \widehat G$ where $Z_1$ is a complete separable metric space, and $\mathcal H_{(z_1,\pi)}= \cH_\pi$, where
the Hilbert space $\cH_\pi$ is associated with the representation $\pi \in \widehat G$.
\subsection{Micro-local defect measures on graded Lie groups}
In \cite{FFPisa}, the following analogue to Theorem \ref{thm_MDM} is proved
in the setting of graded groups. It uses the classical pseudo-differential calculus and the sphere $\Sigma_1$ of the dual as mentioned in Section \ref{subsubsec_PsiclG} and the notion of operator-valued measure (see
Definition \ref{def_gammaGamma}).
\begin{theorem}
\label{thm_MDMG}
Let $\Omega$ be an open subset of $G$.
Let $(f_j)_{j\in \bN}$ be a bounded sequence in $L^2(\Omega,loc)$ converging weakly to 0.
Then there exists a subsequence $(j_k)_{k\in \bN}$ and an operator-valued measure
$\Gamma d\gamma \in \cM_{ov}^+(G\times \Sigma_1 )$
such that
$$
(Af_j,f_j)_{L^2} \longrightarrow_{j=j_k, k\to \infty}
\int_{\Omega \times \Sigma_1}
\tr \left(\sigma_0 (x,\dot \pi) \ \Gamma(x,\dot \pi) \right)
d \gamma(x,\dot\pi) \, ,
$$
holds for every classical pseudo-differential operator $A\in \Psi^{\leq 0}_{cl}(G)$, where $\sigma_0$ denotes its principal symbol.
\end{theorem}
The proof of Theorem \ref{thm_MDMG} given in \cite{FFPisa} follows the same ideas as the ones presented in Section \ref{subsec_viewptQL} with the adaptations that come from dealing with a more non-commutative $C^*$-algebra of symbols.
Examples of micro-local defect measures developed in \cite{FFPisa} include
\begin{itemize}
\item an $L^2$-concentration in space,
\item an $L^2$-concentration in oscillations using matrix coefficients of representations.
\end{itemize}
An application to compensated compactness is also deduced.
It would be interesting to relate this to the works by B. Franchi and his collaborators \cite{Baldi+Franchi+Tesi08, Baldi+Franchi+Tesi08b,Franchi,Franchi+Tchou+Tesi}
on compensated compactness on the Heisenberg group.
\subsection{Semi-classical measures on graded Lie groups}
In \cite{FFchina},
the semi-classical analysis developed on $G$ yields the same property of existence of (group) semi-classical measures:
\begin{theorem}
\label{thm_scmG}
Let $(f_\eps)_{\eps>0}$ be a bounded family in $L^2(G)$.
Then there exists a sequence $\eps_k$, $k\in \bN$ with $\eps_k\to 0$ as $k\to \infty$,
and an operator-valued measure
$\Gamma d\gamma \in \cM_{ov}^+(G\times \Gh )$
satisfying
$$
\forall \sigma\in \cA_0
\qquad
(\Op^\eps (\sigma)f_\eps,f_\eps)_{L^2} \longrightarrow_{\eps=\eps_k, k\to \infty}
\int_{G \times \Gh}
\tr \left(\sigma (x, \pi) \ \Gamma(x, \pi) \right)
d \gamma(x,\pi) .
$$
\end{theorem}
The (group) semi-classical analogues of the (group) micro-local defect measures for an $L^{2}$-concentration in space and an $L^{2}$-concentration in oscillations are also given in \cite{FFchina} in the context of the groups of Heisenberg type; naturally, the former holds on any graded group.
In \cite{FFJST}, we prove an analogue of the application given in Section \ref{subsec_app} but for the sub-Laplacian on any group of Heisenberg type.
We obtain a description of the $t$-dependent group semi-classical measures corresponding to the solutions to the Schr\"odinger equations,
and therefore of their weak limits after taking the $x$-marginals.
However, there is not one threshold $\tau=1$ as in the Euclidean case, but two, namely $\tau=1$ and $\tau=2$.
More precisely, the semi-classical measures and the weak limits can be written into two parts:
\begin{itemize}
\item
one with a Euclidean behaviour and threshold $\tau=1$,
and
\item one with threshold $\tau=2$.
\end{itemize}
With our methods, this comes from the splitting of the unitary dual $\Gh$ into the following two subsets:
\begin{itemize}
\item the subset of
infinite dimensional representations (for instance realised as the Schr\"odinger representations), and
\item the subset of finite dimensional representations, in fact of dimension one and given by the (abelian or Euclidean) characters of the first stratum.
\end{itemize}
This splitting is also present in other works that do not involve representation theory; see for instance \cite{BS} about the Grushin-Schr\"odinger equation and~\cite{Zeld97,CdVHT} about sub-Laplacians on contact manifolds.
In fact, this phenomenon of slower dispersion than in Euclidean settings has already been observed for other sub-Riemannian PDEs, see e.g. \cite{BGX,hiero,BFG2}.
\subsection{Future works}
The tools developed so far in \cite{FFchina,FFPisa,FFJST}
can be adapted to (graded) nilmanifolds along the lines of \cite{Fermanian+Letrouit}.
Nilmanifolds are quotients of nilpotent Lie groups by a discrete subgroup.
When the subgroup is also co-compact, this results in a compact manifold which is locally given by the group. This provides an excellent setting for the applications to PDEs of the theory developed in \cite{FFchina,FFPisa,FFJST}.
The extension to sub-Riemannian manifolds will certainly be more difficult. However, given the recent progress in groupoids
on filtered manifolds \cite{vanErp,choi+ponge,vanErp+Y},
the author feels confident that the semi-classical and micro-local analysis already developed on graded groups will be transferable to the setting of equiregular sub-Riemannian manifolds in the near future.
\bibliographystyle{alpha} | 120,607 |
{\bf Problem.} Let $f$ be a function taking the nonnegative integers to the nonnegative integers, such that
\[2f(a^2 + b^2) = [f(a)]^2 + [f(b)]^2\]for all nonnegative integers $a$ and $b.$
Let $n$ be the number of possible values of $f(25),$ and let $s$ be the sum of the possible values of $f(25).$ Find $n \times s.$
{\bf Level.} Level 5
{\bf Type.} Intermediate Algebra
{\bf Solution.} Setting $a = 0$ and $b = 0$ in the given functional equation, we get
\[2f(0) = 2[f(0)]^2.\]Hence, $f(0) = 0$ or $f(0) = 1.$
Setting $a = 0$ and $b = 1$ in the given functional equation, we get
\[2f(1) = [f(0)]^2 + [f(1)]^2.\]If $f(0) = 0,$ then $2f(1) = [f(1)]^2,$ which means $f(1) = 0$ or $f(1) = 2.$ If $f(0) = 1,$ then $[f(1)]^2 - 2f(1) + 1 = [f(1) - 1]^2 = 0,$ so $f(1) = 1.$
We divide into cases accordingly, but before we do so, note that we can get to $f(25)$ with the following values:
\begin{align*}
a = 1, b = 1: \ & 2f(2) = 2[f(1)]^2 \quad \Rightarrow \quad f(2) = [f(1)]^2 \\
a = 1, b = 2: \ & 2f(5) = [f(1)]^2 + [f(2)]^2 \\
a = 0, b = 5: \ & 2f(25) = [f(0)]^2 + [f(5)]^2
\end{align*}Case 1: $f(0) = 0$ and $f(1) = 0.$
From the equations above, $f(2) = [f(1)]^2 = 0,$ $2f(5) = [f(1)]^2 + [f(2)]^2 = 0$ so $f(5) = 0,$ and $2f(25) = [f(0)]^2 + [f(5)]^2 = 0,$ so $f(25) = 0.$
Note that the function $f(n) = 0$ satisfies the given functional equation, which shows that $f(25)$ can take on the value of 0.
Case 2: $f(0) = 0$ and $f(1) = 2.$
From the equations above, $f(2) = [f(1)]^2 = 4,$ $2f(5) = [f(1)]^2 + [f(2)]^2 = 20$ so $f(5) = 10,$ and $2f(25) = [f(0)]^2 + [f(5)]^2 = 100,$ so $f(25) = 50.$
Note that the function $f(n) = 2n$ satisfies the given functional equation, which shows that $f(25)$ can take on the value of 50.
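Indeed, if $f(n) = 2n$ for all $n,$ then for all nonnegative integers $a$ and $b,$
\[2f(a^2 + b^2) = 4(a^2 + b^2) = (2a)^2 + (2b)^2 = [f(a)]^2 + [f(b)]^2.\]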
Case 3: $f(0) = 1$ and $f(1) = 1.$
From the equations above, $f(2) = [f(1)]^2 = 1,$ $2f(5) = [f(1)]^2 + [f(2)]^2 = 2$ so $f(5) = 1,$ and $2f(25) = [f(0)]^2 + [f(5)]^2 = 2,$ so $f(25) = 1.$
Note that the function $f(n) = 1$ satisfies the given functional equation, which shows that $f(25)$ can take on the value of 1.
Hence, there are $n = 3$ different possible values of $f(25),$ and their sum is $s = 0 + 50 + 1 = 51,$ which gives a final answer of $n \times s = 3 \times 51 = \boxed{153}$. | 209,250 |
\begin{document}
\title[splitting numbers for Galois covers]
{A note on splitting numbers for Galois covers and $\pi_1$-equivalent Zariski $k$-plets}
\author{Taketo Shirane}
\address{National Institute of Technology, Ube College,
2-14-1 Tokiwadai, Ube 755-8555, Yamaguchi Japan}
\email{[email protected]}
\keywords{Zariski pair, $\pi_1$-equivalent Zariski $k$-plet, Galois cover, splitting curve}
\subjclass[2010]{14E20, 14F45, 14H30, 14H50, 57M12}
\begin{abstract}
In this paper, we introduce \textit{splitting numbers} of subvarieties in a smooth complex variety for a Galois cover,
and prove that the splitting numbers are invariant under certain homeomorphisms.
In particular cases, we show that splitting numbers enable us to distinguish topologies of complex plane curves even if fundamental groups of complements of plane curves are isomorphic.
Consequently, we prove that there are $\pi_1$-equivalent Zariski $k$-plets for any integer $k\geq2$.
\end{abstract}
\maketitle
\section*{Introduction}
In this paper, we investigate topologies of plane curves by technique of algebraic geometry,
where a \textit{topology of a plane curve} means the analytic topology of the pair consisting of the complex projective plane $\bP^2=\bP_{\bC}^2$ and a reduced divisor on $\bP^2$.
The \textit{combinatorial data} of a plane curve consists of data such as the number of its irreducible components, the degree and the types of singularities of each irreducible component and the intersection data of the irreducible components (see \cite{artal} for details).
It is known that the combinatorial data of a plane curve determine the topology of a tubular neighborhood of the plane curve (cf. \cite{artal}).
In 1929 \cite{zariski}, O. Zariski proved that the fundamental group of the complement of a $6$-cuspidal plane sextic is the free product of two cyclic groups of order $2$ and $3$ if the $6$ cusps are on a conic, and the cyclic group of order $6$ otherwise.
This result \cite{zariski} showed that the combinatorial data of a plane curve do not determine the topology of the plane curve.
We call a $k$-plet $(C_1,\dots, C_k)$ of plane curves $C_i\subset\bP^2$ a \textit{Zariski $k$-plet}
if $C_1,\dots,C_k$ have the same combinatorial data, and there exist no homeomorphisms $h_{ij}:\bP^2\to\bP^2$ satisfying $h_{ij}(C_j)=C_i$ for any $i$ and $j$ with $i\ne j$.
In the case of $k=2$, a Zariski $2$-plet is called a \textit{Zariski pair}.
From the 90's, many examples of Zariski pairs have been constructed (for example, see \cite{artal1, artal, k-plet, bantoku, oka1, tokunaga}).
The main tool to distinguish the topology of plane curves is the fundamental group, as in \cite{zariski}.
Indeed, some topological invariants (for example, Alexander polynomials, characteristic varieties, existence/non-existence of certain Galois covers branched along given curves) are used to detect differences between the fundamental groups of the complements of plane curves.
Some authors have introduced other methods to distinguish the topology of plane curves.
For example, Artal--Carmona--Cogolludo \cite{braidmonodromy} proved that the braid monodromy of an affine plane curve determines the topology of a related projective plane curve.
A.~Degtyar\"ev and I.~Shimada constructed Zariski pairs of simple sextic curves by the theory of $K3$-surfaces given by double covers branched along simple sextic curves (for example \cite{degtyarev, degtyarev2, shimada2}).
Artal--Florens--Guerville \cite{guerville} introduced a new topological invariant, called $\calI$-invariant, for a line arrangement $\calA$ derived from the peripheral structure $\pi_1(B_{\calA})\to \pi_1(E_{\calA})$,
where $B_{\calA}$ and $E_{\calA}$ are the boundary manifold and the exterior of $\calA$ respectively,
and constructed Zariski pairs of line arrangements in \cite{guerville, guerville1}.
Recently, Guerville--Meilhan \cite{guerville2} generalized the $\calI$-invariant of line arrangements to an invariant of arbitrary algebraic curves, called the \textit{linking set}.
Artal--Tokunaga \cite{k-plet} and Shimada \cite{shimada2} have studied \textit{splitting curves} with respect to double covers to construct Zariski $k$-plets.
After these works \cite{k-plet, shimada2}, S.~Bannai \cite{bannai} introduced \textit{splitting type} with respect to double covers,
and he and the author constructed Zariski pairs and $3$-plets by using splitting type in \cite{bannai, banshira}.
In particular, Degtyar\"ev \cite{degtyarev2} found a Zariski pair $(C_1,\, C_2)$ of simple sextic curves $C_1$ and $C_2$ such that the fundamental groups of the complements of $C_1$ and $C_2$ are isomorphic; such a Zariski pair is called a \textit{$\pi_1$-equivalent Zariski pair}.
Moreover, in \cite{artal}, a Zariski pair $(C_1, C_2)$ such that $\bP^2\setminus C_1$ and $\bP^2\setminus C_2$ are homeomorphic, called a \textit{complement-equivalent Zariski pair}, was constructed by using braid monodromy as in \cite{braidmonodromy}.
However, it seems that there are not many examples of $\pi_1$-equivalent Zariski pairs.
On the other hand, Shimada \cite{shimada} constructed families of equisingular curves with many connected components, which cannot be distinguished by fundamental groups.
It is not known whether two equisingular curves in distinct connected components of a family of \cite{shimada} provide a Zariski pair, or not.
In the present paper, we introduce a \textit{splitting number} of an irreducible subvariety in a smooth variety for a Galois cover, which is inspired by the splitting type of Bannai \cite{bannai},
and prove that a family of Shimada \cite{shimada} provides a Zariski $k$-plet.
To state the main theorem, we recall the families of Shimada \cite{shimada}.
In \cite{shimada}, Shimada defined plane curves of type $(b,m)$ as follows.
\begin{Def}[{\cite[Definition~1.1]{shimada}}]\rm
Let $b$ and $m$ be positive integers such that $b\geq 3$ and $b\equiv 0\pmod{m}$.
Put $n:=b/m$.
A projective plane curve $R\subset\bP^2$ is said to be \textit{of type $(b,m)$} if it satisfies the following:
\begin{enumerate}
\item
$R$ consists of two irreducible components $B$ and $E$ of degree $b$ and $3$, respectively,
\item
both of $B$ and $E$ are non-singular,
\item
the set-theoretical intersection of $B$ and $E$ consists of $3n$ points, and
\item
at each intersection point, $B$ and $E$ intersect with multiplicity $m$.
\end{enumerate}
\end{Def}
He considered the family $\calF_{b,m}\subset\bP_{\ast}H^0(\bP^2,\calO(b+3))$ of all curves of type $(b,m)$.
Here $\bP_{\ast}H^0(\bP^2,\calO(d))$ is the projective space of one-dimensional subspaces of the vector space $H^0(\bP^2,\calO(d))$,
which parameterizes all plane curves of degree $d$.
He proved the following theorem.
\begin{Th}[{\cite[Theorem~1.2]{shimada}}]\label{th. shimada}
Suppose that $b\geq 4$, and let $m$ be a divisor of $b$.
\begin{enumerate}
\item
The number of the connected components of $\calF_{b,m}$ is equal to the number of divisors of $m$.
\item
Let $R$ be a member of $\calF_{b,m}$.
Then the fundamental group $\pi_1(\bP^2\setminus R)$ is isomorphic to $\bZ$ if $b$ is not divisible by $3$, while it is isomorphic to $\bZ\oplus\bZ/3\bZ$ if $b$ is divisible by $3$.
\end{enumerate}
\end{Th}
Hence, if $R_1$ and $R_2$ are curves of type $(b,m)$ in distinct connected components of $\calF_{b,m}$, the embeddings of $R_1$ and $R_2$ in $\bP^2$ cannot be distinguished topologically by fundamental groups.
Note that, if $R_1$ and $R_2$ are in the same connected component of $\calF_{b,m}$, then $(R_1, R_2)$ is not a Zariski pair.
The following problem was raised.
\begin{Problem}\rm
For two plane curves $R_1$ and $R_2$ of type $(b,m)$ in distinct connected components of $\calF_{b,m}$, is the pair $(R_1,R_2)$ a Zariski pair?
\end{Problem}
In the present paper, we introduce a \textit{splitting number} of a subvariety in a smooth variety for a Galois cover (Definition~\ref{def. splitting number}),
and prove that it is invariant under certain homeomorphisms (Proposition~\ref{lem. splitting number}).
The main theorem is the following theorem.
\begin{Th}\label{th. main}
Let $b\geq 4$ be an integer, let $m$ be a divisor of $b$,
and let $R_1$ and $R_2$ be plane curves of type $(b,m)$.
Then $(R_1,R_2)$ is a Zariski pair if and only if
$R_1$ and $R_2$ are in distinct connected components of $\calF_{b,m}$.
\end{Th}
By Theorems~\ref{th. shimada} and \ref{th. main}, we obtain the following corollary.
\begin{Cor}\label{cor. k-plet}
For any integer $k\geq 2$, there exists a $\pi_1$-equivalent Zariski $k$-plet.
\end{Cor}
The present paper is organized as follows.
In the first section, we define splitting numbers of subvarieties in a smooth variety for a Galois cover, and
prove that splitting numbers are invariant under certain homeomorphisms.
In the second section, we investigate splitting numbers for simple cyclic covers, and give a method for computing splitting numbers of smooth plane curves for simple cyclic covers.
In the third section, we recall the connected components of $\calF_{b,m}$ given in \cite{shimada}.
In the final section, we prove Theorem~\ref{th. main} and Corollary~\ref{cor. k-plet} by using splitting numbers.
\section{Splitting numbers of subvarieties for Galois covers}
Let $Y$ be a smooth variety.
Let $B\subset Y$ be a reduced divisor, and let $G$ be a finite group.
A surjective homomorphism $\theta: \pi_1(Y\setminus B)\twoheadrightarrow G$ induces an \'etale $G$-cover $\phi':X'\to Y\setminus B$,
where a \textit{$G$-cover} is a Galois cover $\phi:X\to Y$ with $\Gal(\bC(X)/\bC(Y))\cong G$.
Hence we obtain an extension of rational function fields $\bC(X')/\bC(Y)$.
By the $\bC(X')$-normalization of $Y$, $\phi'$ extends uniquely to a branched $G$-cover $\phi:X\to Y$.
We call $\phi:X\to Y$ the induced $G$-cover by the surjection $\theta:\pi_1(Y\setminus B)\twoheadrightarrow G$.
\begin{Def}\label{def. splitting number}\rm
Let $Y$ be a smooth variety, and let $\phi:X\to Y$ be an induced $G$-cover branched at a reduced divisor $B\subset Y$ for a finite group $G$.
For an irreducible subvariety $C\subset Y$ with $C\not\subset B$, we call the number of irreducible components of $\phi^{\ast}C$ the \textit{splitting number of $C$ for $\phi$}, and denote it by $s_{\phi}(C)$.
If $s_{\phi}(C)\geq 2$, we call $C$ a \textit{splitting subvariety} (a \textit{splitting curve} if $\dim_{\bC} C=1$) for $\phi$.
\end{Def}
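The following classical example, included here only as an illustration of the definition, may help fix ideas.
Let $B\subset\bP^2$ be a smooth conic and let $\phi:X\to\bP^2$ be the double cover branched along $B$, induced by the isomorphism $\pi_1(\bP^2\setminus B)\cong\bZ/2\bZ$; then $X$ is a smooth quadric surface in $\bP^3$.
For a line $L\subset\bP^2$ meeting $B$ transversally, $\phi^{\ast}L$ is an irreducible conic on $X$, so $s_{\phi}(L)=1$;
for a line $L$ tangent to $B$, $\phi^{\ast}L$ is the union of the two lines of $X$ passing through the point over the tangent point, so $s_{\phi}(L)=2$, that is, tangent lines are splitting curves for $\phi$.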
\begin{Rem}\rm
Let $C_1, C_2, B\subset\bP^2$ be three plane curves of degrees $d_1$, $d_2$ and $2n$, respectively,
and let $\phi:S'\to\bP^2$ be the double cover branched along $B$.
Assume that $C_i\not\subset B$ and $s_{\phi}(C_i)=2$ for $i=1,2$, say $\phi^{\ast}C_i=C_i^{+}+C_i^-$.
In \cite{bannai}, Bannai defined the \textit{splitting type} of the triple $(C_1, C_2; B)$ as the pair $(m_1,m_2)$ of the intersection numbers $C_1^+.C_2^+=m_1$ and $C_1^+.C_2^-=m_2$
for suitable choice of labels.
Moreover, he proved that splitting types are invariant under certain homeomorphisms \cite[Proposition~2.5]{bannai}.
\end{Rem}
We prove that splitting numbers are invariant under certain homeomorphisms by the same idea as in the proof of \cite[Proposition~2.5]{bannai}.
\begin{Prop}\label{lem. splitting number}
Let $Y$ be a smooth variety.
Let $B_1, B_2\subset Y$ be two reduced divisors, and let $G$ be a finite group.
For $i=1,2$, let $\phi_i:X_i\to Y$ be the induced $G$-cover by a surjection $\theta_i:\pi_1(Y\setminus B_i)\twoheadrightarrow G$.
Assume that there exist a homeomorphism $h:Y\to Y$ with $h(B_1)=B_2$ and an automorphism $\sigma:G\to G$ such that $\sigma\circ\theta_2\circ h_{\ast}=\theta_1$.
Then the following hold:
\begin{enumerate}
\item
there exists a unique $G$-equivariant homeomorphism $\tilde{h}:X_1'\to X_2'$ with $\phi_2\circ\tilde{h}=h\circ\phi_1$ up to $G$-action,
where $X_i'=X_i\setminus\phi_i^{-1}(B_i)$ for $i=1,2$, and
\item
for two irreducible subvarieties $C_1, C_2\subset Y$ with $C_2=h(C_1)$ and $C_1\not\subset B_1$,
$\tilde{h}$ induces a one to one correspondence between the set of irreducible components of $\phi_1^{\ast}C_1$ to the set of those of $\phi_2^{\ast}C_2$.
Moreover, this correspondence is given by
\[ \widetilde{C}_1\mapsto \overline{\tilde{h}\left(\widetilde{C}_1\setminus \phi_1^{-1}(B_1)\right)} \]
for an irreducible component $\widetilde{C}_1$ of $\phi_1^{\ast}C_1$,
where $\overline{S}$ is the closure of $S$ for a subset $S\subset X_2$.
In particular, $s_{\phi_1}(C_1)=s_{\phi_2}(C_2)$.
\end{enumerate}
\end{Prop}
\begin{proof}
The assertion (1) is clear by the uniqueness of the induced $G$-covers by $\theta_i:\pi_1(Y\setminus B_i)\twoheadrightarrow G$.
We prove the assertion (2).
Put $C_i':=C_i\setminus(B_i\cup\Sing(C_i))$ and
let $\phi_i^{\ast}C_i=\sum_{j=1}^{n_i}\widetilde{C}_{i j}$ be the irreducible decomposition for each $i=1,2$, where $n_i=s_{\phi_i}(C_i)$.
Since $C_i\setminus (B_i\cup\Sing(C_i))$ is connected
and $\widetilde{C}'_{i j}:=\widetilde{C}_{i j}\setminus\phi_i^{-1}(B_i\cup\Sing(C_i))$ is smooth,
$\widetilde{C}'_{i j}$ is a connected component of $\phi_i^{-1}(C_i')$.
Since $\tilde{h}:X_1'\to X_2'$ is a homeomorphism with $\phi_2\circ\tilde{h}=h\circ\phi_1$,
$\tilde{h}|_{\phi_1^{-1}(C_1')}:\phi_1^{-1}(C_1')\to \phi_2^{-1}(C_2')$ is a homeomorphism.
Hence the numbers of irreducible components of $\phi_i^{\ast}C_i$ ($i=1,2$) coincide, i.e.\ $n_1=n_2$.
Moreover, since the closure of $\widetilde{C}'_{i j}$ is $\widetilde{C}_{i j}$,
the assertion (2) holds.
\end{proof}
For a smooth curve $B\subset\bP^2$ of degree $b$ and a divisor $m$ of $b$,
a surjection $\pi_1(\bP^2\setminus B)\twoheadrightarrow \bZ/m\bZ$ is uniquely determined up to composition with an automorphism of $\bZ/m\bZ$ since $\pi_1(\bP^2\setminus B)\cong\bZ/b\bZ$; in particular, the induced $\bZ/m\bZ$-cover is unique.
Hence, by Proposition~\ref{lem. splitting number}, we obtain the following corollary.
\begin{Cor}\label{cor. number}
Let $B_1, B_2, C_1, C_2\subset\bP^2$ be smooth curves with $b=\deg B_1=\deg B_2$,
and let $m$ be a divisor of $b$.
For each $i=1,2$, let $\phi_i:X_i\to\bP^2$ be the induced $\bZ/m\bZ$-cover by $\pi_1(\bP^2\setminus B_i)\twoheadrightarrow\bZ/m\bZ$.
If there exists a homeomorphism $h: \bP^2\to\bP^2$ with $h(B_1)=B_2$ and $h(C_1)=C_2$,
then $s_{\phi_1}(C_1)=s_{\phi_2}(C_2)$.
\end{Cor}
\section{Splitting numbers for simple cyclic covers}\label{sec. simple cyclic cover}
In this section, we investigate a method of computing the splitting number of a smooth plane curve for a simple cyclic cover, where a simple cyclic cover is defined in Definition~\ref{def. simple cyclic cover} below.
In this section, let $Y$ be a smooth surface.
For a line bundle $\calL$ on $Y$,
let $p_{\calL}:T_{\calL}\to Y$ denote the projection from the total space $T_{\calL}$ associated to $\calL$.
\begin{Def}\label{def. simple cyclic cover}\rm
Let $B$ be either a reduced curve or the zero divisor on $Y$.
Assume that there exists a line bundle $\calL$ on $Y$ such that $\calO_Y(B)\cong\calL^{\otimes n}$.
Let $s\in H^0(Y,\calO_Y(B))$ be a section vanishing exactly along $B$.
A cyclic cover $\phi: X\to Y$ of degree $n$ is a \textit{simple cyclic cover branched along $B$} if
$X$ is isomorphic to the subvariety of $T_{\calL}$ defined by $p_{\calL}^{\ast}s-t^n=0$ and
$\phi$ coincides with the restriction of $p_{\calL}$ to this subvariety, where $t\in H^0(T_{\calL},p_{\calL}^{\ast}\calL)$ is the tautological section.
\end{Def}
\begin{Def}\rm
Let $\phi:X\to Y$ be a Galois cover branched along $B\subset Y$.
Let $C\subset Y$ be an irreducible curve on $Y$.
Let $\widetilde{C}_0$ denote an irreducible component of $\phi^{\ast}C$, and let $\bar{\eta}_0:\overline{C}_0\to \widetilde{C}_0$ and $\eta:\overline{C}\to C$ be the normalizations.
We say that $\phi$ is \textit{essentially unramified over $C$} if the induced cover ${\phi}_C:\overline{C}_0\to \overline{C}$ by $\phi\circ\bar{\eta}_0$ is unramified, and \textit{essentially ramified over $C$} otherwise.
\end{Def}
\begin{Rem}\label{rem. essentially unramified}\rm
Let $G$ be a finite abelian group.
Let $\phi:X\to Y$ be a $G$-cover of $Y$, and
let $C$ be an irreducible curve on $Y$.
The induced cover $\phi_C:\overline{C}_0\to\overline{C}$ is a $G_0$-cover since $G$ is abelian, where $G_0\subset G$ is the stabilizer of $\widetilde{C}_0$.
If $\phi$ is essentially ramified over $C$, then the quotient $X_1$ of $X$ by the subgroup of $G$ generated by all stabilizers of ramification points of $\phi_C:\overline{C}_0\to\overline{C}$ provides an abelian cover $\phi_1:X_1\to Y$ which is essentially unramified over $C$ and satisfies $s_{\phi}(C)=s_{\phi_1}(C)$.
Hence, to compute the splitting number $s_{\phi}(C)$, we may assume that $\phi$ is essentially unramified over $C$ if $\phi:X\to Y$ is an abelian cover.
\end{Rem}
Let $\phi:X\to Y$ be a simple cyclic cover of degree $m$ branched along $B\subset Y$, and
let $C\subset Y$ be an irreducible curve with $C\ne B$.
We consider the splitting number $s_{\phi}(C)$.
For an intersection $P\in B\cap C$ and a local branch $\ell$ of $C$ at $P$,
let $\I_{P,\ell}$ denote the local intersection number of $B$ and $\ell$ at $P$.
Let $\sigma:\widehat{Y}\to Y$ be a succession of blowing-ups such that $\sigma^{-1}(B+C)$ is a simple normal crossing divisor.
Let $\widehat{C}\subset\widehat{Y}$ denote the strict transform of $C$ by $\sigma$.
\begin{Lem}\label{lem. multiplicity}
With the above assumption, let $E_{P,\ell}$ be the irreducible component of $\sigma^{\ast}B$ which intersects with the local branch $\hat{\ell}$ of $\widehat{C}$ corresponding to $\ell$.
Then the multiplicity of $E_{P,\ell}$ in $\sigma^{\ast}B$ is equal to $\I_{P,\ell}$.
\end{Lem}
\begin{proof}
Since the multiplicity of $E_{P,\ell}$ in $\sigma^{\ast}B$ depends only on the singularity of $\ell$,
we may assume that $C$ is locally irreducible at $P$.
Let $\sigma_P:\widehat{Y}_P\to Y$ be the succession of blowing-ups over $P$ in $\sigma$,
and let $\widehat{C}_P$ be the strict transform of $C$ by $\sigma_P$.
Let $E'_{P,\ell}$ be the irreducible component of $\sigma_P^{\ast}B$ which intersects with $\widehat{C}_P$, and
let $m_{P,\ell}$ be the multiplicity of $E_{P,\ell}$ in $\sigma^{\ast}B$.
It is sufficient to prove that $m_{P,\ell}=\I_{P,\ell}$.
Note that the exceptional set $\sigma_P^{-1}(P)$ intersects with $\widehat{C}_P$ at one point.
By the projection formula, we obtain
\begin{align*}
m_{P,\ell}
&= \sigma_P^{\ast}B.\widehat{C}_P-\sum_{(P',\ell')\ne (P,\ell)}\I_{P',\ell'} \\
&= B.C-\sum_{(P',\ell')\ne (P,\ell)}\I_{P',\ell'} \\
&= \I_{P,\ell}.
\qedhere
\end{align*}
\end{proof}
We may assume that $E.\widehat{C}\leq1$ for each irreducible component $E$ of $\sigma^{\ast}B$ after more blowing-ups if necessary.
Let $\hat{\phi}:\widehat{X}\to\widehat{Y}$ be the $\bC(X)$-normalization of $\widehat{Y}$.
In general, $\hat{\phi}$ is not a simple cyclic cover.
By Lemma~\ref{lem. multiplicity}, if there is a local branch $\ell$ of $C$ at $P\in B\cap C$ such that $m$ is not a divisor of $\I_{P,\ell}$,
then $\hat{\phi}$ is branched along $E_{P,\ell}$,
hence $\phi$ is essentially ramified over $C$.
By Remark~\ref{rem. essentially unramified}, we may assume
\[ \I_{P,\ell}\equiv 0\pmod{m} \]
for any local branch $\ell$ of $C$ at $P\in B\cap C$.
Let $L$ be a divisor on $Y$ whose associated line bundle $\calL$ defines $\phi:X\to Y$ as in Definition~\ref{def. simple cyclic cover},
and let $D_{B,C}$ denote the following divisor on $\widehat{C}$;
\[ D_{B,C}:=\frac{1}{m}(\sigma^{\ast} B){|_{\widehat{C}}}=\left.\left(\frac{1}{m}\sum_{(P,\ell)}\I_{P,\ell}E_{P,\ell}\right)\right|_{\widehat{C}}, \]
where the sum runs over all intersection points $P\in B\cap C$ and all local branches $\ell$ of $C$ at $P$.
Put $D_{B,C}':=(\sigma^{\ast}L){|_{\widehat{C}}}-D_{B,C}$.
Note that, regarding $\widehat{C}$ as the smooth model of $C$, the divisor $D_{B,C}$ does not depend on the choice of $\sigma$ by Lemma~\ref{lem. multiplicity}.
Moreover, $mD'_{B,C}$ is linearly equivalent to $0$ on $\widehat{C}$, since $mL$ is linearly equivalent to $B$ on $Y$.
\begin{Prop}\label{prop. splitting number for simple}
Under the above circumstance, $s_{\phi}(C)=\nu$ if and only if the order of $[\calO_{\widehat{C}}(D'_{B,C})]\in\Pic^0(\widehat{C})$ is equal to $m/\nu$.
\end{Prop}
\begin{proof}
We put
\[ \widehat{\calL}:=\calO_{\widehat{Y}}(\sigma^{\ast}L-\frac{1}{m}\sum\I_{P,\ell}E_{P,\ell}). \]
Let $\hat{\phi}:\widehat{X}\to\widehat{Y}$ be the $\bC(X)$-normalization of $\widehat{Y}$, and put
\[ \widehat{Y}':=\widehat{Y}\setminus\Supp(\sigma^{\ast}B-\sum\I_{P,\ell}E_{P,\ell}) \ \mbox{ and } \ \widehat{X}':=\hat{\phi}^{-1}(\widehat{Y}'). \]
Note that $\widehat{C}\subset\widehat{Y}'$.
The restriction of $\hat{\phi}$ to $\widehat{X}'$, $\hat{\phi}':\widehat{X}'\to \widehat{Y}'$, is an \'etale simple cyclic cover of degree $m$ defined in $T_{\widehat{\calL}}$ over $\widehat{Y}'$.
By the Stein factorization of $\sigma\circ\hat{\phi}:\widehat{X}\to Y$, we obtain a birational morphism $\tilde{\sigma}:\widehat{X}\to X$ with $\phi\circ\tilde{\sigma}=\sigma\circ\hat{\phi}$.
The birational morphism $\tilde{\sigma}$ provides a one-to-one correspondence between irreducible components of $\hat{\phi}^{\ast}\widehat{C}$ and those of $\phi^{\ast}C$.
Thus we have $s_{\phi}(C)=s_{\hat{\phi}}(\widehat{C})=s_{\hat{\phi}'}(\widehat{C})$.
Since the restriction of $T_{\widehat{\calL}}$ over $\widehat{C}$ is $T_{\calO(D'_{B,C})}$, the assertion follows from the next lemma.
\end{proof}
\begin{Lem}
Let $C$ be a smooth variety, and let $\calL$ be a line bundle on $C$ with $\calL^{\otimes \mu}\cong\calO_C$ and $\calL^{\otimes i}\not\cong\calO_C$ for $1\leq i<\mu$.
Put $m:=\mu\nu$ for some $\nu\in\bZ_{>0}$.
Let $\widetilde{C}$ be the closed subset of $T_{\calL}$ defined by $t^m-1=0$,
where $t\in H^0(T_{\calL},p_{\calL}^{\ast}\calL)$ is the tautological section.
Then the number of connected components of $\widetilde{C}$ is equal to $\nu$.
\end{Lem}
\begin{proof}
Since $\calL^{\otimes \mu}\cong\calO_C$, the equation $t^\mu-\zeta_{\nu}^j=0$ defines a closed subset of $T_{\calL}$ for each $0\leq j< \nu$, where $\zeta_\nu$ is a primitive $\nu$-th root of unity.
Since $\calL^{\otimes i}\not\cong\calO_C$ for $1\leq i<\mu$, an equation of the form $t^i-a=0$ with a nonzero constant $a$ does not define a closed subset of $T_\calL$ globally.
Thus the number of connected components of $\widetilde{C}$ is equal to $\nu$.
\end{proof}
In general, it seems difficult to compute the order of $[\calO_{\widehat{C}}(D'_{B,C})]\in\Pic^0(\widehat{C})$.
However, the following theorem provides a method of computing splitting numbers of smooth plane curves for simple cyclic covers.
\begin{Th}
Let $\phi:X\to\bP^2$ be a simple cyclic cover of degree $m$ branched along a plane curve $B$ of degree $b=mn$, and
let $C\subset\bP^2$ be a smooth curve of degree $d$.
Assume that $\I_P\equiv 0\pmod{m}$ for each $P\in B\cap C$,
where $\I_P$ is the local intersection multiplicity of $B$ and $C$ at $P$.
Let $\nu$ be a divisor of $m$, say $m=\mu\nu$.
Then
$s_{\phi}(C)=\nu$ if and only if the following conditions hold;
\begin{enumerate}
\item
for $1\leq k<\mu$, there are no curves $D_{kn}\subset\bP^2$ of degree $kn$ such that $D_{kn}|_C= k D_{B,C}$,
where $D_{B,C}$ is regarded as a divisor on $C$;
\item
there exists a curve $D_{\mu n}\subset\bP^2$ of degree $\mu n$ such that $D_{\mu n}|_C=\mu D_{B,C}$.
\end{enumerate}
\end{Th}
\begin{proof}
Let $f=0$ be a defining equation of $C\subset\bP^2$.
We have the following exact sequence;
\[ 0\to H^0(\bP^2,\calO_{\bP^2}(kn-d))\overset{\times f}{\to} H^0(\bP^2,\calO_{\bP^2}(kn))\overset{\alpha}{\to} H^0(C,\calO_C(kn))\to 0, \]
where $\alpha$ is the restriction to $C$.
By Proposition~\ref{prop. splitting number for simple}, $s_\phi(C)=\nu$ if and only if the order of $[\calO_{C}(D'_{B,C})]\in\Pic^0(C)$ is $\mu$.
The surjection $\alpha$ implies that the order of $[\calO_{C}(D'_{B,C})]$ is $\mu$ if and only if the conditions (1) and (2) hold.
\end{proof}
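As a simple illustration of the above theorem, let $\phi:X\to\bP^2$ be the double cover ($m=2$, $n=1$) branched along a smooth conic $B$, and let $C$ be a line tangent to $B$ at a point $P$, so that $\I_P=2\equiv0\pmod 2$ and $D_{B,C}=P$.
For $\mu=1$, $\nu=2$, the condition (1) is vacuous, and the condition (2) holds since any line through $P$ other than $C$ cuts out the divisor $P$ on $C$; hence $s_\phi(C)=2$.
This agrees with the direct computation: a defining equation $q$ of $B$ restricts to $C$ as $c\,\ell^2$ for a linear form $\ell$ vanishing at $P$ and a constant $c\neq0$, so the equation $t^2=q|_C$ of $\phi^{\ast}C$ splits into the two components $t=\pm\sqrt{c}\,\ell$.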
\section{Plane curves of type $(b,m)$}
In this section, we recall the equisingular families of plane curves given in \cite{shimada}.
Let $b$ be a positive integer with $b\geq3$, and let $m$ be a divisor of $b$.
We put $n:=b/m$.
Let $\calF_{b,m}\subset \bP_{\ast} H^0(\bP^2,\calO(b+3))$ be the family of all curves of type $(b,m)$.
Note that any two curves $R$ and $R'$ of type $(b,m)$ have the same combinatorics.
Let $R=B+E$ be a curve of type $(b,m)$.
Let $D_R$ denote the reduced divisor $(B|_E)_{\mathrm{red}}$ of degree $3n$ on $E$,
and let $H$ denote a divisor of degree $3$ on $E$ that is obtained as the intersection of $E$ and a line on $\bP^2$.
Then $\calO_E(D_R-n H)$ is an invertible sheaf of degree $0$ on $E$;
\[ [\calO_E(D_R-nH)]\in\Pic^0(E). \]
Note that $D_R$ and $D_R-nH$ correspond to $D_{B,E}$ and $D_{B,E}'$ in Section~\ref{sec. simple cyclic cover}, respectively.
Let $\lambda(R)$ be the order of the isomorphism class $[\calO_E(D_R-nH)]$ in $\Pic^0(E)$,
which is a divisor of $m$.
For a divisor $\mu$ of $m$, we denote by $\calF_{b,m}(\mu)$ the union of all connected components of $\calF_{b,m}$ on which the function $\lambda$ is constantly equal to $\mu$.
\[ \calF_{b,m}=\coprod_{\mu | m}\calF_{b,m}(\mu) \]
Shimada \cite{shimada} proved the following proposition.
\begin{Prop}[{\cite[Proposition~2.1]{shimada}}]\label{prop. shimada}
Suppose that $b\geq 3$, and let $m$ be a divisor of $b$.
For any divisor $\mu$ of $m$, the variety $\calF_{b,m}(\mu)$ is irreducible and of dimension $(b-1)(b-2)/2+3n+8$.
\end{Prop}
\begin{Rem}\rm
Proposition~\ref{prop. shimada} implies that the number of connected components of $\calF_{b,m}$ is equal to the number of divisors of $m$.
\end{Rem}
\section{Proofs}
In this section, we prove Theorem~\ref{th. main} and Corollary~\ref{cor. k-plet}.
\begin{proof}[Proof of Theorem~\ref{th. main}]
Let $\mu_1$ and $\mu_2$ be distinct divisors of $m$,
and let $R_i=B_i+E_i$ be a member of $\calF_{b,m}(\mu_i)$ for each $i=1,2$.
Let $\phi_i:X_i\to\bP^2$ be the simple cyclic cover of degree $m$ branched along $B_i$ for each $i=1,2$.
If there exists a homeomorphism $h:\bP^2\to\bP^2$ such that $h(R_1)=R_2$,
then $h(B_1)=B_2$ and $h(E_1)=E_2$ since $\deg B_i=b>3=\deg E_i$.
By Corollary~\ref{cor. number}, we obtain $s_{\phi_1}(E_1)=s_{\phi_2}(E_2)$.
On the other hand, by Proposition~\ref{prop. splitting number for simple} and the definition of $\calF_{b,m}(\mu_i)$, we have $s_{\phi_i}(E_i)=m/\mu_i$ for each $i=1,2$.
Hence $(R_1,R_2)$ is a Zariski pair.
Conversely, if $\mu_1=\mu_2$, then it is clear that there is a homeomorphism $h:\bP^2\to\bP^2$ such that $h(R_1)=R_2$ since $R_1$ and $R_2$ are members of $\calF_{b,m}(\mu_1)=\calF_{b,m}(\mu_2)$.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor. k-plet}]
By Theorem~\ref{th. shimada}, the number of connected components of $\calF_{5^{k-1},5^{k-1}}$ is equal to $k$.
For each integer $0\leq i\leq k-1$, let $R_{i}$ be a member of $\calF_{5^{k-1},5^{k-1}}(5^i)$.
By Theorems~\ref{th. shimada} and~\ref{th. main}, $(R_0,\dots,R_{k-1})$ is a $\pi_1$-equivalent Zariski $k$-plet.
\end{proof} | 95,172 |
\begin{document}
\title{Coxeter's frieze patterns and discretization of the Virasoro orbit }
\author[V. Ovsienko]{Valentin Ovsienko}
\author[S. Tabachnikov]{Serge Tabachnikov}
\address{
Valentin Ovsienko,
CNRS,
Laboratoire de Math\'ematiques,
Universit\'e de Reims-Champagne-Ardenne,
FR~3399 CNRS, F-51687, Reims, France}
\address{
Serge Tabachnikov,
Pennsylvania State University,
Department of Mathematics,
University Park, PA 16802, USA,
and
ICERM, Brown University, Box1995,
Providence, RI 02912, USA
}
\email{
[email protected],
[email protected]
}
\date{}
\subjclass{}
\maketitle
\begin{abstract}
We show that the space of classical Coxeter's frieze patterns
can be viewed as a discrete version of a coadjoint orbit of the
Virasoro algebra.
The canonical (cluster) (pre)symplectic form on the space
of frieze patterns is a discretization of the Kirillov symplectic form. We relate a continuous version of frieze patterns to conformal metrics of constant curvature in dimension 2.
\end{abstract}
\section{Introduction and main results} \label{Intro}
The goal of this note is to relate two different subjects:
\begin{enumerate}
\item
the {\it Virasoro algebra} and the related infinite-dimensional symplectic
manifold $\Diff_+(S^1)/\PSL_2(\R)$;
\item
the space of {\it Coxeter's frieze patterns} viewed as a cluster manifold.
\end{enumerate}
The problem of discretization of the Virasoro algebra
is well known; it has been studied by many authors,
and several different discretizations have been suggested~\cite{FT,FRS};
see also~\cite{MS}.
The main motivation for this study is application to
integrable systems, such as the Korteweg - de Vries (KdV) equation.
Several discrete versions of the KdV were proposed.
Most of the discrete versions of the Virasoro algebra
consist of discretization of the corresponding linear Poisson structure
on its dual space.
We will describe a discretization procedure that relates the subject
to combinatorics and cluster algebra.
We will be interested in the infinite-dimensional homogeneous space $\Diff_+(S^1)/\PSL_2(\R)$
equipped with (a $1$-parameter family of) Kirillov's symplectic structures.
This symplectic space is often regarded as a coadjoint orbit of the Virasoro algebra~\cite{K1,K2},
or, in other words, a symplectic leaf of the linear Poisson structure.
This is a more geometric way to understand the Virasoro-related Poisson structure.
We will obtain a finite-dimensional discretization of $\Diff_+(S^1)/\PSL_2(\R)$.
We do not consider integrable systems in this paper,
but believe that our discretization procedure can be applied to KdV
and should be related to such discrete integrable systems as the pentagram map;
see~\cite{OST} and references therein.
The discrete objects that we consider are the classical
{\it Coxeter (or Conway-Coxeter) frieze patterns}~\cite{Cox,CoCo}.
This notion was invented in the early 1970's but became widely known quite recently
due to its close relation to the {\it cluster algebra};
see~\cite{CaCh,ARS}.
In particular, it was shown in \cite{MGOT,MGOST}
that frieze patterns are closely related to the moduli space of polygons
in the projective line and to second order linear difference equations.
Like every cluster manifold, the space of Coxeter's friezes carries a canonical
(pre)symplectic structure.
We will prove the following.
\begin{theorem}
\label{MainOne}
The space $\Diff_+(S^1)/\PSL_2(\R)$ equipped with Kirillov's symplectic form
is a continuous limit of the space of Coxeter's frieze patterns equipped with the
cluster (pre)symplectic form.
\end{theorem}
Following the ideas of Conway and Coxeter, we identify frieze patterns
with linear recurrence equations:
\begin{equation}
\label{Req}
V_{i+1} = c_i V_i - V_{i-1},
\end{equation}
where the ``potential'' $(c_i)$ is a sequence of (real or complex) numbers,
and where the sequence $(V_i)$ is unknown, i.e., a ``solution''.
Furthermore, we will impose
the following condition: the potential is $n$-periodic, and all the solutions are $n$-antiperiodic:
\begin{equation}
\label{Per}
c_{i+n}=c_i,
\qquad
V_{i+n}=-V_i.
\end{equation}
The space of equations~(\ref{Req}) satisfying the condition~(\ref{Per})
is an algebraic variety of dimension $n-3$.
If $n$ is odd, then this algebraic variety is isomorphic to the classical
moduli space $\mathcal{M}_{0,n}$; see~\cite{MGOST}
for details.
Note that equation~(\ref{Req}) is nothing other than the classical
{\it discrete Hill equation} (also called the Sturm--Liouville or Schr\"odinger equation).
The relation of this equation to a discrete version of the Virasoro algebra
is very natural and appears in all the works on the subject cited above.
However, the notions of frieze pattern and cluster algebra were not considered there.
We will show that this approach provides additional combinatorial tools.
Let us also emphasize the fact that the (anti)periodicity condition~(\ref{Per})
seems to be the only natural way to obtain a finite-dimensional
space of equations~(\ref{Req}) approximating the space $\Diff_+(S^1)/\PSL_2(\R)$.
We also show how to describe the continuous limit of
Coxeter's frieze patterns in terms of solutions of the
classical Liouville equation.
Solutions of this equation are interpreted in terms of
projective differential geometry.
We believe that geometric and combinatorial viewpoints
complement each other and lead to a better understanding of both
parts of the story.
We end the introduction with an open question that concerns
a natural generalization of Coxeter's friezes called $2$-friezes; see~\cite{MGOT}.
The space of $2$-friezes is an algebraic variety of dimension $2n-8$.
It is related to linear recurrence equations
of order $3$:
$$
V_{i+3}=a_iV_{i+2} - b_i V_{i+1} +V_{i},
$$
with $n$-periodic solutions.
This space also carries a structure of cluster manifold,
and therefore has a canonical (pre)symplectic structure.
This is the space on which the pentagram map acts.
It would be natural to expect that the canonical cluster symplectic structure
is related to the Gelfand-Dickey bracket.
However, the pentagram map does not preserve the canonical symplectic structure.
It would be very interesting to understand the situation in that case.
\section{The space of Coxeter's friezes} \label{CCox}
In this section, we recall the classical notion of a Coxeter frieze pattern.
We introduce local coordinate systems and identify this space
with the moduli space $\mathcal{M}_{0,n}$.
\subsection{Closed frieze patterns}
We start with the definition of classical Coxeter's frieze patterns.
\begin{definition}
{\rm
\begin{enumerate}
\item[(a)]
A frieze pattern~\cite{Cox} is an infinite array of numbers
$$
\begin{array}{cccccccccccc}
&\cdots&&0&&0&&0&&0&&\cdots\\[4pt]
\cdots&&1&&1&&1&&1&&\cdots\\[4pt]
&\cdots&&c_{i}&&c_{i+1}&&c_{i+2}&&c_{i+3}&&\cdots\\[4pt]
&& \cdots&& \cdots&& \cdots&& \cdots&&
\end{array}
$$
where the entries propagate downward, and the entries of each next row are
determined by the previous two rows via the frieze rule.
For each elementary ``diamond''
\begin{equation}
\label{diamond1}
\begin{array}{ccc}
&b&\\[2pt]
a&&d\\[2pt]
&c&
\end{array}
\end{equation}
one has
\begin{equation}
\label{RulEq}
ad-bc=1.
\end{equation}
For instance, the entries in the next row of the above frieze are
$c_ic_{i+1}-1$.
\item[(b)]
A frieze pattern is called {\it closed} if a row of $1$'s appears again:
$$
\begin{array}{ccccccccccc}
\cdots&&1&&1&&1&&1&&\cdots\\[4pt]
&c_i&&c_{i+1}&&c_{i+2}&&c_{i+3}&&c_{i+4}\\[4pt]
&& \cdots&& \cdots&& \cdots&& \cdots&&\\[4pt]
\cdots&&1&&1&&1&&1&&\cdots\\[4pt]
&0&&0&&0&&0&&\cdots\\[4pt]
\cdots&&-1&&-1&&-1&&-1&&\cdots
\end{array}
$$
By definition, this lower row of $1$'s is followed by a row of~$0$'s, and then by a row of $-1$'s.
One can extend the array vertically so that each diagonal,
in either of the two directions, is {\it anti-periodic}.
\item[(c)]
The {\it width} $w$ of a closed frieze pattern is the number of non-trivial rows between the rows of~$1$'s.
\end{enumerate}
}
\end{definition}
\begin{example}
\label{CEx}
{\rm
A generic Coxeter frieze pattern of width~$2$ is as follows:
$$
\begin{array}{ccccccccccc}
\cdots&&1&& 1&&1&&\cdots
\\[4pt]
&a_1&&\frac{a_2+1}{a_1}&&\frac{a_1+1}{a_2}&&a_2&&
\\[4pt]
\cdots&&a_2&&\frac{a_1+a_2+1}{a_1a_2}&&a_1&&\cdots
\\[4pt]
&1&&1&&1&&1&&
\end{array}
$$
for some $a_1,a_2\not=0$.
(Note that we omitted the first and the last rows of $0$'s.)}
\end{example}
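For instance, the specialization $a_1=a_2=1$ in Example~\ref{CEx} yields the integer frieze
$$
\begin{array}{ccccccccccc}
\cdots&&1&&1&&1&&1&&\cdots
\\[4pt]
&1&&2&&2&&1&&3&
\\[4pt]
\cdots&&1&&3&&1&&2&&\cdots
\\[4pt]
&1&&1&&1&&1&&1&
\end{array}
$$
whose non-trivial rows are shifts of the $5$-periodic sequence $1,2,2,1,3$, the quiddity sequence of a triangulated pentagon.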
The following facts are well-known~\cite{Cox,CoCo}; see also~\cite{MGOST}.
\begin{enumerate}
\item
A closed frieze pattern is horizontally periodic with period
$$
n=w+3,
$$ that is, $c_{i+n}=c_i$.
\item
A frieze pattern with the first line $(c_i)$ is closed if and only if
the equation~(\ref{Req}) satisfies condition~(\ref{Per}).
\item
A closed frieze pattern has a ``glide symmetry'' whose second iteration
is the horizontal translation by the distance~$n$.
\end{enumerate}
The name ``frieze pattern'' is due to the glide symmetry.
\subsection{Local coordinates}
Every Coxeter's frieze pattern of width $w$ is uniquely defined by its
(South-East) diagonal:
\begin{equation}
\label{CanDiag}
\begin{array}{ccccccccccc}
1&&1&& 1&&1
\\[4pt]
&a_1&&\cdots&&&&
\\[4pt]
&&a_2&&\cdots&&
\\[4pt]
&&&\ddots&&&\\[4pt]
&&\cdots&&a_w&&\cdots\\[4pt]
&1&&1&&1&&1&&
\end{array}
\end{equation}
for some $a_1,a_2,\ldots,a_w\not=0$ (see Example~\ref{CEx}).
Therefore, $(a_1,a_2,\ldots,a_w)$ is a local coordinate system
on the space of friezes.
Consider a different choice of the diagonal,
or, more generally, consider an arbitrary {\it zigzag}
\begin{equation}
\label{ZZCo}
\begin{array}{ccccccccccc}
1&&1&& 1&&1
\\[4pt]
&a'_1&&\cdots&&&&
\\[4pt]
&&a'_2&&\cdots&&
\\[4pt]
&a'_3&&\cdots&&
\\[4pt]
a'_4&&\cdots&&
\\[4pt]
&\ddots&&&\\[4pt]
&&a'_w&&\cdots\\[4pt]
&1&&1&&1&&1&&
\end{array}
\end{equation}
Again the frieze pattern is uniquely defined by
$a'_1,a'_2,\ldots,a'_w\not=0$,
so that we obtain a different coordinate system $(a'_1,a'_2,\ldots,a'_w)$.
It turns out that the coordinate changes between these coordinate
systems can be understood as {\it mutations} in the cluster algebra
of type $A_w$; see~\cite{MGOT} (the Appendix).
The space of all Coxeter's friezes is therefore a cluster manifold.
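For instance, in Example~\ref{CEx} the elementary flip of the diagonal $(a_1,a_2)$ at the diamond containing $a_1$ replaces $a_1$ by the entry $a'_1=\frac{a_2+1}{a_1}$, that is,
$$
a_1\,a'_1=1+a_2,
$$
which is precisely the exchange relation of the cluster algebra of type $A_2$.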
\subsection{Relation to the moduli space $\mathcal{M}_{0,n}$}
The following statement is proved in~\cite{MGOT,MGOST}.
\begin{proposition}
\label{PropM}
If $n$ is odd, then the space of Coxeter's friezes is isomorphic
to the moduli space $\mathcal{M}_{0,n}$.
\end{proposition}
\proof
We briefly describe the main construction; see~\cite{MGOT,MGOST}
for the details.
Consider Coxeter's frieze pattern given by a diagonal~(\ref{CanDiag}).
Take the neighboring (upper) diagonal and write the two diagonals together
to obtain $n$ vectors in~$\R^2$:
\begin{equation}
\label{CProj}
\left(\begin{array}{c}
0\\[4pt]
1
\end{array}\right),
\quad
\left(\begin{array}{c}
1\\[4pt]
a_1
\end{array}\right),
\quad
\left(\begin{array}{c}
\frac{a_2+1}{a_1}\\[4pt]
a_2
\end{array}\right),
\quad\ldots,\quad
\left(\begin{array}{c}
1\\[4pt]
0
\end{array}\right).
\end{equation}
Note that these vectors $V_0,\ldots,V_{n-1}$ form a fundamental solution
of the equation~(\ref{Req}) corresponding to the frieze~(\ref{CanDiag}).
Projecting this $n$-gon to $\RP^1$, one obtains a point in $\mathcal{M}_{0,n}$,
and this projection is a one-to-one correspondence.
\proofend
\begin{remark}
\label{ImpRem}
{\rm
The above construction allows us to explain the geometric meaning of the entries of the frieze.
The elements $(a_1,\ldots,a_w)$ of the diagonal
can be obtained as the vector products:
$$
a_i=\left[V_{n-1},V_i\right],
$$
for $i=1,\ldots,w$.
}
\end{remark}
\section{Continuous Coxeter's friezes} \label{fri}
In this section, we introduce our main notion
of a continuous frieze pattern that can be obtained as a continuous limit
of classical Coxeter friezes.
We show that the continuous limit of a Coxeter frieze can be understood
as a solution of the Liouville equation with special boundary conditions.
The space of these solutions is identified with $\Diff_+(S^1)/\PSL_2(\R)$.
\subsection{Projective curves, Hill's equations and the space $\Diff_+(S^1)/\PSL_2(\R)$}
Let $\Diff_+(S^1)$ be the group of orientation preserving diffeomorphisms
of the circle $S^1\simeq\RP^1$.
The homogeneous space $\Diff_+(S^1)/\PSL_2(\R)$ is
one of the most interesting infinite-dimensional manifolds
in geometry and mathematical physics.
Its study was initiated by Kirillov~\cite{K1,K2}.
We give here two different realizations of this space.
The results of this section are well-known.
\begin{definition}
{\rm
We call a {\it (simple) projective curve} an orientation preserving diffeomorphism
$$
\gamma:\R/T\Z \to \RP^1,
$$
that is, a parameterization of the projective line by $[0,T)$.
The {\it projective equivalence class} of $\gamma$ consists of the diffeomorphisms $\varphi\circ\gamma$ where
$\varphi:\RP^1\to\RP^1$ is a projective transformation, i.e., $\varphi\in\PSL_2(\R)$.
}
\end{definition}
The space of projective equivalence classes of curves is isomorphic to
$\Diff_+(S^1)/\PSL_2(\R)$, the $\Diff_+(S^1)$-action on the curves being given by
\begin{equation}
\label{ActC}
f:\gamma\mapsto\gamma\circ{}f^{-1}
\qquad
\hbox{where}
\qquad
f\in\Diff_+(S^1).
\end{equation}
This space can also be identified with projective structures on $\RP^1$ with monodromy $-\Id$;
see~\cite{K1,K2} and also~\cite{OT}.
\begin{definition}
{\rm
\begin{enumerate}
\item[(a)]
A {\it Hill equation} is a $2$nd order linear differential equation of the form
\begin{equation}
\label{Hill}
2c\,y''(x)+k(x)y(x)=0
\end{equation}
with a $T$-periodic potential $k(x)$; here $c\in\R$ is an arbitrary constant.
We will always assume that Hill's equation has monodromy $-\Id$,
i.e., all the solutions of (\ref{Hill}) are $T$-anti-periodic:
$$
y(x+T)=-y(x).
$$
\item[(b)]
A Hill equation (\ref{Hill}) with $T$-anti-periodic solutions
is called {\it non-oscillating} if every
solution has exactly $1$ zero on $[0,T)$.
\end{enumerate}
}
\end{definition}
An example of a non-oscillating Hill equation with $T$-anti-periodic solutions is the equation with the constant potential
$k(x)\equiv{2c\pi^2}/{T^2}$.
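Indeed, for $c\neq0$ this equation reduces to
$$
y''+\Bigl(\frac{\pi}{T}\Bigr)^2 y=0,
$$
whose solutions $y(x)=A\sin\bigl(\pi(x-x_0)/T\bigr)$ are $T$-anti-periodic, and every non-zero solution has exactly one zero on $[0,T)$.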
\begin{remark}
{\rm
Recall that
the diagonals of a frieze pattern are solutions to a linear difference equation of $2$nd order~(\ref{Req})
where $c_i$ are the terms in the first non-trivial row of the frieze pattern.
Hill's equation is a continuous analog of this difference equation,
and its potential $k(x)$ is a continuous analog of the sequence $(c_i)$.
}
\end{remark}
The following statement can be found in~\cite{K1}; see also~\cite{OT}
for a detailed discussion.
We give a proof for the sake of completeness.
\begin{proposition}
\label{KirProp}
The space $\Diff_+(S^1)/\PSL_2(\R)$
can be identified with the space of non-oscillating
Hill equations.
\end{proposition}
\proof
The space of solutions of Hill's equation (\ref{Hill}) is two-dimensional, and the Wronski determinant of any pair of solutions is a constant that we normalize to be equal to 1.
Thus a choice of two solutions determines a curve
$\Gamma(x) \subset \R^2$ such that $[\Gamma,\Gamma'] =1$ (the bracket denotes the determinant formed by two vectors).
This curve $\Gamma(x)$ is well defined up to the action of~$\SL_2(\R)$;
it is anti-periodic, i.e., $\Gamma(x+T)=-\Gamma(x)$.
Furthermore, the curve is simple if and only if the equation is non-oscillating.
Conversely, given a projective curve $\gamma$,
we can lift $\gamma$ to a parameterized curve $\Gamma(x) \subset \R^2$.
The closure condition on $\gamma$ implies that $\Gamma(x+T)=-\Gamma(x)$.
The lift is not unique: one can always multiply $\Gamma(x)$ by a non-vanishing function~$\lambda(x)$.
We fix the lift by the condition
\begin{equation}
\label{deter}
[\Gamma(x),\Gamma'(x)]=1\quad {\rm for\ all}\ x.
\end{equation}
Projectively equivalent curves $\gamma$ correspond to $\SL_2(\R)$-equivalent curves $\Gamma$; see \cite{OT}.
Differentiating (\ref{deter}) implies that the vectors $\Gamma$ and $\Gamma''$ are proportional,
i.e., $\Gamma$ satisfies the Hill equation
$$
\Gamma''(x)=k(x)\Gamma(x)
$$
with $T$-periodic potential $k(x)$ and monodromy $-\Id$.
\proofend
One can recover the potential $k(x)$ of Hill's equation from the curve $\Gamma$: if $f(x)$ is the ratio of two coordinates of the curve $\Gamma(x)$ then
$$
k=c\, S(f)
\qquad
\hbox{where}
\qquad S(f)=\frac{f'''}{f'}-\frac{3}{2}\left(\frac{f''}{f'}\right)^2
$$
is the classical {\it Schwarzian derivative}.
This formula is quite old and should probably be attributed to Lagrange; see~\cite{What}.
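For instance, for a fractional-linear function $f(x)=\frac{\alpha x+\beta}{\gamma x+\delta}$ one has
$$
\frac{f''}{f'}=-\frac{2\gamma}{\gamma x+\delta},
\qquad
\frac{f'''}{f'}=\frac{6\gamma^2}{(\gamma x+\delta)^2},
\qquad
S(f)=0,
$$
in accordance with the fact that $k$ depends only on the projective equivalence class of the curve.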
A more contemporary way to express the same observation is
the following formula of $\Diff_+(S^1)$-action on the space of Hill's equations:
\begin{equation}
\label{AdStar}
f:k(x)\mapsto{}k(f^{-1}(x))\left({f^{-1}}'\right)^2+ c\, S(f^{-1})(x).
\end{equation}
Note that this more complicated formula precisely corresponds to the action~(\ref{ActC}).
\subsection{Definition of continuous friezes}\label{ConLimS}
Let us now describe the procedure of {\it continuous limit}
of a frieze pattern.
A natural labeling of the entries of a frieze pattern is according to the scheme:
\begin{equation} \label{diamond}
\begin{array}{ccc}
&v_{i,j}&\\[4pt]
v_{i,j-1}&&v_{i+1,j}\\[4pt]
&v_{i+1,j-1}&
\end{array}
\end{equation}
A continuous analog of a frieze pattern is a twice differentiable function of two variables $F(x,y)$
satisfying an analog of the frieze rule. Namely, replace (\ref{diamond}) by
$$
\begin{array}{ccc}
&F(x,y+\eps)&\\[4pt]
F(x,y)&&F(x+\eps,y+\eps)\\[4pt]
&F(x+\eps,y)&
\end{array}
$$
and expand in $\eps$ up to the 2nd order. Then the frieze rule yields
$$
\eps^2(F F_{xy} - F_{x} F_{y})=1,
$$
where $F_{x}$ and $F_{y}$ denote the partial derivatives with respect to $x$ and $y$ respectively.
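Explicitly, the frieze rule for this infinitesimal diamond reads
$$
F(x,y)\,F(x+\eps,y+\eps)-F(x,y+\eps)\,F(x+\eps,y)=1,
$$
and expanding each entry in Taylor series one finds that the terms of order $\eps^0$ and $\eps^1$ cancel, the left-hand side being $\eps^2\bigl(F F_{xy}-F_x F_y\bigr)+O(\eps^3)$.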
Thus, we are led to the following.
\begin{definition}
{\rm
\begin{enumerate}
\item[(a)]
A {\it continuous frieze pattern} is a function $F(x,y)$ satisfying the partial differential equation
\begin{equation}
\label{Liou}
F(x,y) F_{xy}(x,y)-F_{x}(x,y) F_{y}(x,y)=1.
\end{equation}
\item[(b)]
A closed continuous frieze pattern is
a function $F(x,y)$ satisfying~(\ref{Liou}) and the following conditions:
\begin{equation}
\label{bdry}
\begin{array}{rl}
F(x,x) =0,\ F_{y}(x,x)=1 &{\rm for\ all}\ x;\\[6pt]
F(x,y)>0 & {\rm for}\ x<y<x+T;\\ [6pt]
F(x+T,y)=F(x,y+T)=-F(x,y)& {\rm for\ all}\ x,y.
\end{array}
\end{equation}
\end{enumerate}
}
\end{definition}
\begin{remark}
{\rm
Equation~(\ref{Liou}) is the classical {\it Liouville equation} on the function $\ln F$, see, e.g.,~\cite{DFN}.
The first two conditions in (\ref{bdry}) are analogs of having a row of $0$'s,
followed by a row of $1$'s.
The last condition is an analog of anti-periodicity of the diagonals, $T$ being the period.
Given the first condition, the second one is equivalent to $F_{x}(x,x) = -1$: differentiating $F(x,x)=0$ in $x$ gives $F_x(x,x)+F_y(x,x)=0$.}
\end{remark}
\subsection{Continuous friezes from projective curves}\label{ConSSec}
Let us describe a simple geometric construction that provides
all the solutions to~(\ref{Liou}) satisfying~(\ref{bdry}).
We shall show that a projective equivalence class
of a projective curve determines a continuous frieze pattern,
and that every continuous frieze pattern can be obtained from
a projective curve.
Consider a projective curve $\gamma$ and its canonical lift $\Gamma$ to $\R^2$
satisfying condition~(\ref{deter}).
\begin{theorem}
\label{curve}
(i)
The function
\begin{equation}
\label{Constr}
F(x,y)=[\Gamma(x),\Gamma(y)]
\end{equation}
is a closed continuous frieze pattern.
(ii)
Conversely, every closed continuous frieze pattern is of the form~(\ref{Constr}) for some curve~$\gamma$.
\end{theorem}
\proof
Part (i). One has:
$$
F_{x}=[\Gamma'(x),\ \Gamma(y)], \qquad
F_{y}=[\Gamma(x),\ \Gamma'(y)], \qquad
F_{xy}=[\Gamma'(x),\Gamma'(y)].
$$
The Ptolemy (or Pl\"ucker) relation for the determinants made by the vectors $\Gamma(x), \Gamma(y), \Gamma'(x), \Gamma'(y)$ implies
$$
[\Gamma (x),\Gamma(y)] [\Gamma'(x),\Gamma'(y)] -
[\Gamma'(x),\Gamma(y)] [\Gamma(x),\Gamma'(y)] =
[\Gamma(x),\Gamma'(x)] [\Gamma(y),\Gamma'(y)],
$$
and (\ref{Liou}) follows.
The first and the last of the boundary conditions (\ref{bdry}) obviously hold, and the second condition coincides with (\ref{deter}). The positivity follows from the fact that $\Gamma(x)$ induces an embedding of the interval $(0,T)$ to $\RP^1$.
\begin{center}
[Figure: the lift $\Gamma(x)\subset\R^2$ of a projective curve and its central projection to $\RP^1$.]
\end{center}
Part (ii). The proof will be given in Section~\ref{HiFr}, see Corollary~\ref{equi}.
\proofend
Clearly, $\SL_2(\R)$-equivalent curves $\Gamma$ give rise to the same function $F(x,y)$.
\begin{example} \label{sinus}
{\rm If $\Gamma$ is an arc length parameterized unit circle, we obtain the continuous frieze pattern $F(x,y)=\sin(y-x)$ with $T=\pi$.
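A direct check: $F_x=-\cos(y-x)$, $F_y=\cos(y-x)$ and $F_{xy}=\sin(y-x)$, so that
$$
FF_{xy}-F_xF_y=\sin^2(y-x)+\cos^2(y-x)=1,
$$
while $F(x,x)=0$, $F_y(x,x)=1$ and $F(x+\pi,y)=-F(x,y)$.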
}
\end{example}
\begin{remark} \label{gen}
{\rm If one does not care about boundary conditions, then solutions to (\ref{Liou}) can be obtained from two curves, $\Gamma$ and $\widetilde\Gamma$, both satisfying (\ref{deter}):
$$
F(x,y)=[\Gamma(x),\widetilde\Gamma(y)].
$$
For example, if
$
\Gamma(x)=(x,-1),\, \widetilde\Gamma(y)=(1,y),
$
we obtain $F(x,y)=1+xy$.
A more general solution of this kind is $F(x,y)=\frac{(xy)^t + (xy)^{1-t}}{2t-1}$ for any real $t\neq\frac12$.
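For $t>\frac12$ and $x,y>0$, this solution arises from the curves
$$
\Gamma(x)=\frac{1}{\sqrt{2t-1}}\,\bigl(x^t,-x^{1-t}\bigr),
\qquad
\widetilde\Gamma(y)=\frac{1}{\sqrt{2t-1}}\,\bigl(y^{1-t},y^{t}\bigr),
$$
both of which satisfy (\ref{deter}); the case $t=1$ recovers the previous example.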
}
\end{remark}
\subsection{Hill's equations and projective curves from continuous friezes}\label{HiFr}
Let us show that every closed continuous frieze pattern can be obtained from a projective curve.
In other words, we will show that the construction of Section~\ref{ConSSec} is universal.
\begin{lemma}
\label{eqfromf}
Let $F(x,y)$ be a closed continuous frieze pattern.
Then $F$, as a function of $x$ only, or as a function of $y$ only, satisfies the same Hill equation
$$
F_{xx}(x,y)=k(x) F(x,y),\qquad
F_{yy}(x,y)=k(y) F(x,y)
$$
with $T$-periodic potential $k$ and monodromy $-\Id$.
\end{lemma}
\proof Differentiate (\ref{Liou}) to obtain
$
F F_{xxy}=F_{y} F_{xx}.
$
Thus
$$
F_{xx}(x,y)=k(x,y) F(x,y),\qquad F_{xxy}(x,y)=k(x,y) F_{y}(x,y)
$$
for some function $k(x,y)$.
Differentiate the first of these equations with respect to $y$ to obtain \allowbreak $k_{y}(x,y)=0$.
Hence $k$ depends on $x$ only:
$$
F_{xx}(x,y)=k(x) F(x,y),\qquad
F_{yy}(x,y)=m(y) F(x,y),
$$
where the second equation is obtained similarly to the first one.
To prove that $k=m$, differentiate the second equality in (\ref{bdry}) to obtain
$$
F_{xy}(x,x)+F_{yy}(x,x)=0,\qquad F_{xx}(x,x)+F_{xy}(x,x)=0,
$$
and hence $F_{xx}(x,x)=F_{yy}(x,x)$. This implies that $k(x)=m(x)$.
The third equality in (\ref{bdry}) implies that the monodromy of the Hill equation is $-\Id$, and it follows that $k$ is $T$-periodic.
\proofend
\begin{corollary}
\label{equi}
The above constructions provide a bijection between projective equivalence classes of
$T$-periodic parameterizations of $\RP^1$ and closed continuous frieze patterns.
\end{corollary}
\proof
First of all, a projective equivalence class of a curve in $\RP^1$ is the same as a non-oscillating Hill equation;
see Proposition~\ref{KirProp}.
Start with a curve $\Gamma(x)$ satisfying Hill's equation.
Then $F(x,y)=[\Gamma(x),\Gamma(y)]$, and one has:
$$
F_{xx}(x,y)=[\Gamma''(x),\Gamma(y)]=k(x) F(x,y),
$$
that is, one recovers the Hill equation from the respective continuous frieze, and thus the
$\SL_2(\R)$-equivalent class of the curve $\Gamma$.
Conversely, let us show that the function $k$ uniquely determines a continuous frieze.
Indeed, $F(x,y)$, as a function of~$x$, is the solution of Hill's equation $f''(x)=k(x) f(x)$
with the initial conditions $f(y)=0, f'(y)=-1$.
\proofend
This completes the proof of Theorem~\ref{curve}.
\section{The symplectic structure} \label{SSect}
In the previous section, we showed that the space $\Diff_+(S^1)/\PSL_2(\R)$ is
a continuous limit of the space of Coxeter's frieze patterns.
In this section, we will compare the symplectic structures on both spaces
and complete the proof of Theorem~\ref{MainOne}.
\subsection{Kirillov's symplectic structure} \label{Kiss}
The space $\Diff_+(S^1)/\PSL_2(\R)$ is a coadjoint orbit of the Virasoro algebra
and therefore it has the canonical Kirillov symplectic $2$-form.
Let us give here several equivalent expressions of this $2$-form;
see~\cite{K1,K2,OT} for more details.
The first expression is just the definition of the Kirillov symplectic form.
Given a Hill equation (\ref{Hill}) with potential $k(x)$,
it is identified with an element of the dual space to the Virasoro algebra
as follows.
Let $(X(x)\frac{d}{dx},\alpha)$ be any element of the Virasoro algebra,
then
$$
\left\langle
\left(k(x),c\right),\,\Big(X(x)\frac{d}{dx},\alpha\Big)\right\rangle :=
\int_0^Tk(x)X(x)\,dx+c\alpha.
$$
The coadjoint action of the Virasoro algebra is given by:
\begin{equation}
\label{smalladStar}
ad^*_{X(x)\frac{d}{dx}}\left(k(x),c\right)=
\left(X(x)k'(x)+2X'(x)k(x)+c\,X'''(x),0\right),
\end{equation}
which is nothing else but the infinitesimal version of~(\ref{AdStar}).
Every tangent vector to the coadjoint orbit of the Virasoro algebra
through the point $\left(k(x),c\right)$ is obtained by the coadjoint action
of some vector field.
Let
$$
\xi=ad^*_{X(x)\frac{d}{dx}}\left(k(x),c\right)
\qquad\hbox{and}\qquad
\eta=ad^*_{Y(x)\frac{d}{dx}}\left(k(x),c\right),
$$
then by definition,
\begin{equation}
\label{OmeK}
\begin{array}{rcl}
\om_K(\xi,\eta)&=&
\displaystyle
\int_0^T k(x)\left(X(x)Y'(x)-X'(x)Y(x)\right)dx-
c\,\int_0^TX'(x)Y''(x)\,dx\\[10pt]
&=&
\displaystyle
-\int_0^T\left(X(x)k'(x)+2X'(x)k(x)+c\,X'''(x)\right)Y(x)\,dx.
\end{array}
\end{equation}
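The two expressions in~(\ref{OmeK}) agree after integration by parts: since $X$, $Y$ and $k$ are $T$-periodic, one has
$$
-\int_0^T\bigl(X k'+2X'k\bigr)Y\,dx=\int_0^T k\bigl(XY'-X'Y\bigr)dx,
\qquad
-\int_0^T X'''Y\,dx=-\int_0^T X'Y''\,dx.
$$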
Note that the last term in the first row of~(\ref{OmeK}) is the famous Gelfand--Fuchs cocycle.
This formula is independent of the choice of the coordinate $x$.
Assuming that the Hill equation is non-oscillating,
let us give an equivalent formula for Kirillov's symplectic structure
in terms of projective curves.
Let $\gamma(x)$ be a projective curve and $\Gamma(x)$ its lift to $\R^2$ satisfying~(\ref{deter}).
For an arbitrary choice of linear coordinates
$\Gamma(x)=(\Gamma_1(x),\Gamma_2(x))$, consider the function
$f(x)=\frac{\Gamma_1(x)}{\Gamma_2(x)}$ which determines the curve~$\gamma$.
Let, as above, $\xi$ and $\eta$ be two tangent vectors.
Viewed as variations of~$f(x)$, these tangent vectors are expressed as $T$-periodic functions,
$\xi(x)$ and $\eta(x)$.
\begin{lemma}
\label{CurKir}
One has
\begin{equation}
\label{OmeCur}
\om_K(\xi,\eta)=-c\,\int_0^T\frac{\xi'(x)\eta''(x)-\xi''(x)\eta'(x)}{\left(f'(x)\right)^2}dx.
\end{equation}
This formula does not depend on the choice of the parameter $x$.
\end{lemma}
\proof
The coadjoint action~(\ref{smalladStar}) of the Virasoro algebra,
written in terms of projective curves, reads simply as
$ad^*_{X(x)\frac{d}{dx}}\left(f(x)\right)=X(x)f'(x).$
Therefore,
$$
\xi(x)=\frac{X(x)}{f'(x)},
$$
and similarly for $\eta$.
Substitute these expressions into~(\ref{OmeK}) and integrate by parts,
taking into account $k=cS(f)$, to obtain the result.
It is then easy to check that changing the parameter leaves the formula intact.
\proofend
\subsection{The cluster symplectic structure} \label{Cluss}
The space of closed Coxeter's frieze patterns of width $w=n-3$ is an
algebraic variety of dimension $w$.
It has a structure of cluster manifold and therefore has a canonical
closed $2$-form, i.e., a (pre)symplectic form; see~\cite{GSV} for a general theory.
Let us give the explicit expression of this $2$-form.
\begin{definition}
{\rm
Given a coordinate system $(a_1,a_2,\ldots,a_w)$
associated to a South-East diagonal~(\ref{CanDiag}),
the canonical cluster symplectic form on the space of friezes
is defined by the formula
\begin{equation}
\label{CanSym}
\om=\sum_{1\leq{}i\leq{}w-1}
\frac{da_i\wedge{}da_{i+1}}{a_ia_{i+1}}.
\end{equation}
}
\end{definition}
Consider now an arbitrary zigzag coordinate system~(\ref{ZZCo})
$(a'_1,a'_2,\ldots,a'_w)$.
Define the following {\it a priori} different $2$-form
$$
\om'=\sum_{1\leq{}i\leq{}w-1}
(-1)^{\varepsilon_i}\,\frac{da'_i\wedge{}da'_{i+1}}{a'_ia'_{i+1}},
$$
where $\varepsilon_i=0$ if $(a'_i,a'_{i+1})$ belongs to a South-East diagonal,
and $\varepsilon_i=1$ if $(a'_i,a'_{i+1})$ belongs to a South-West diagonal.
\begin{proposition}
\label{Canon}
One has $\om=\om'$, for any zigzag coordinate system.
\end{proposition}
\proof It suffices to examine how the $2$-form changes under an elementary transformation of a zigzag $\dots bac \dots \mapsto \dots bdc \dots$ in (\ref{diamond1}). In this case, the difference $\om-\om'$ belongs to the ideal generated by
the differential of the defining identity of the frieze pattern~(\ref{RulEq}).
\proofend
The form (\ref{CanSym}) is symplectic, i.e., it is non-degenerate,
if and only if $w$ is even (that is, $n$ is odd).
Otherwise, the form $\om$ has a kernel of dimension $1$.
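For instance, writing out~(\ref{CanSym}) in the smallest cases makes this explicit: for $w=1$ the sum is empty, so $\om=0$ and its kernel is $1$-dimensional, while for $w=2$ (that is, $n=5$)
$$
\om=\frac{da_1\wedge{}da_{2}}{a_1a_{2}}=d\log a_1\wedge d\log a_2,
$$
which is non-degenerate on the $2$-dimensional space of friezes.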
\subsection{Continuous limit of the cluster symplectic form} \label{ClimOm}
We will now apply the procedure of taking the continuous limit
from Section~\ref{ConLimS} to obtain the continuous limit
of the symplectic form~(\ref{CanSym}).
The symplectic form $\om$ written in geometric terms (see Remark~\ref{ImpRem}) is as follows:
$$
\om=\sum_{1\leq{}i\leq{}w-1}
\frac{d\left[V_{n-1},V_i\right]\wedge{}d\left[V_{n-1},V_{i+1}\right]}
{\left[V_{n-1},V_i\right]\left[V_{n-1},V_{i+1}\right]}.
$$
Let $\xi,\eta$ be two tangent vectors to the space of friezes.
Each of them can be represented by $n$ vectors in $\R^2$, that is,
$
\xi=\left(
\xi_0,\ldots,\xi_{n-1}
\right),
$
such that
$$
\left[V_{i},\xi_{i+1}\right]+\left[\xi_i,V_{i+1}\right]=0,
$$
since $\left[V_{i},V_{i+1}\right]\equiv1$.
We obtain
$$
\om(\xi,\eta)=\sum_{1\leq{}i\leq{}w-1}
\frac{\left[V_{n-1},\xi_i\right]\left[V_{n-1},\eta_{i+1}\right]-
\left[V_{n-1},\xi_{i+1}\right]\left[V_{n-1},\eta_{i}\right]}
{\left[V_{n-1},V_i\right]\left[V_{n-1},V_{i+1}\right]}.
$$
The continuous limit of the $n$-gon $(V_0,\ldots,V_{n-1})$ is
a curve $\Gamma(x)=\left(\Gamma_1(x),\Gamma_2(x)\right)$.
A tangent vector is represented by a curve
$\xi(x)=\left(\xi_1(x),\xi_2(x)\right)$ such that
$$
\left[\Gamma,\xi'\right]+\left[\xi,\Gamma'\right]=0.
$$
The continuous limit of the above sum is then the following integral:
$$
\om(\xi,\eta)=\int_0^\pi\frac{\xi_2(x)\eta'_2(x)-\xi'_2(x)\eta_2(x)}{\Gamma_2(x)^2}dx.
$$
Let us show that this expression coincides with~(\ref{OmeCur})
up to the multiple $-\frac{1}{4c}$.
A projective curve can be thought of as a function $f(x)=\frac{\Gamma_1(x)}{\Gamma_2(x)}$.
The affine lift $(f(x),1)$ does not satisfy the equality (\ref{deter}) but the rescaling
\begin{equation}
\label{GamLift}
\Gamma(x)=\left(f(x)f'(x)^{-1/2}, f'(x)^{-1/2}\right)
\end{equation}
does.
Let $\xi(x)$ be a tangent vector, i.e., a variation of the function $f(x)$.
Lifted to a tangent vector on curves $\xi(x)=(\xi_1(x),\xi_2(x))$ it then reads as
$$
\left(\xi_1(x),\xi_2(x)\right)=
\left(\xi(x)f'(x)^{-1/2}-\frac{1}{2}f(x)\xi'(x)f'(x)^{-3/2},\,-\frac{1}{2}\xi'(x)f'(x)^{-3/2}\right).
$$
One readily obtains $\om(\xi,\eta)=-\frac{1}{4c}\om_K(\xi,\eta)$.
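Indeed, using only $\Gamma_2=f'^{-1/2}$ and the expression for $\xi_2$ above, one finds (the terms involving $f''$ cancel)
$$
\xi_2\eta_2'-\xi_2'\eta_2=\frac{1}{4}\,\frac{\xi'\eta''-\xi''\eta'}{(f')^{3}},
\qquad\hbox{hence}\qquad
\om(\xi,\eta)=\frac{1}{4}\int_0^\pi\frac{\xi'(x)\eta''(x)-\xi''(x)\eta'(x)}{\left(f'(x)\right)^2}dx,
$$
which is $-\frac{1}{4c}$ times the right-hand side of~(\ref{OmeCur}).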
We have proved that the continuous limit of the cluster (pre)symplectic form
on the space of Coxeter's friezes is (up to a multiple) the Kirillov
symplectic form on $\Diff_+(S^1)/\PSL_2(\R)$.
Theorem~\ref{MainOne} is proved.
\section*{Appendix: relation to metrics of constant curvature}
Let us give yet another geometric interpretation of continuous frieze patterns.
Using (\ref{GamLift}) and (\ref{Constr}), we obtain the general form of a solution:
\begin{equation}
\label{genform}
F(x,y)=\frac{f(y)-f(x)}{\sqrt{f'(x)f'(y)}}
\end{equation}
satisfying the first two boundary conditions (\ref{bdry}).
\begin{example}
{\rm
Choosing $f(x)=\tan x$ yields Example \ref{sinus}.
If $f(x)=x$, we obtain a linear solution $F(x,y)=y-x$. }
\end{example}
Now we describe the relation of continuous frieze patterns with conformal metrics
of constant curvature in dimension $2$.
\begin{lemma}
\label{curv}
The conformal metric $-4F^{-2}(z,\bar z) dz d\bar z$ has constant curvature $-1$ if and only if the function $F$ satisfies equation
(\ref{Liou}).
\end{lemma}
\proof The curvature of the metric $g=g(z,\bar z) dz d\bar z$ equals
$$
-\frac{2}{g} \frac{\partial^2\ln g}{\partial z \partial \bar z},
$$
see, e.g., \cite{DFN}. Substituting $g=-4F^{-2}$ yields the result.
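In more detail (writing~(\ref{Liou}) as $FF_{xy}-F_xF_y=1$, the form consistent with the examples below): since the constant factor $-4$ does not contribute to $\partial^2\ln g/\partial z\partial\bar z$,
$$
\frac{\partial^2\ln g}{\partial z \partial \bar z}=-2\,\frac{\partial^2\ln F}{\partial z \partial \bar z}
=-2\,\frac{FF_{z\bar z}-F_zF_{\bar z}}{F^2},
\qquad\hbox{hence}\qquad
-\frac{2}{g}\, \frac{\partial^2\ln g}{\partial z \partial \bar z}=-\left(FF_{z\bar z}-F_zF_{\bar z}\right),
$$
so the curvature equals $-1$ if and only if $FF_{z\bar z}-F_zF_{\bar z}=1$, i.e., if and only if $F$ satisfies~(\ref{Liou}) in the variables $(z,\bar z)$.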
\proofend
\begin{example}
{\rm
The Poincar\'e half plane metric
\begin{equation}
\label{uphalf}
g=-4\frac{dz d\bar z}{(z-\bar z)^2}
\end{equation}
gives $F(x,y)=y-x$. More generally, start with (\ref{uphalf}) and change the variable $z=f(w)$. This yields the formula
$$
g=-4\frac{f'(w)f'(\bar w)dw d\bar w}{(f(w)-f(\bar w))^2},
$$
and by Lemma \ref{curv}, the solution (\ref{genform}).}
\end{example}
\begin{remark} \label{hyperb}
{\rm One can also consider a Lorentz metric $4F^{-2}(x,y) dx dy$. Then equation
(\ref{Liou}) is equivalent to this metric having curvature 1. In particular, such conformal Lorentz metrics of constant curvature on the hyperboloid are studied in \cite{KS}; see also \cite{DG,DO}.
}
\end{remark}
\bigskip
{\bf Acknowledgments}.
We are grateful to Boris Khesin and Sophie Morier-Genoud for enlightening discussions.
V.~O. was partially supported by the PICS05974 ``PENTAFRIZ'' of CNRS.
S.T. was partially supported by the NSF grant DMS-1105442. | 11,662 |
Maybe!!
| 379,688 |
Sharp Move by HSBC towards Reliance Communications; Cuts Target Price
HSBC has cut its target price on Reliance Communications to Rs 17 from Rs 28 after cutting its earnings before interest, taxes, depreciation and amortisation estimates for the FY18-22 period by 16%.
The ‘reduce’. | 288,436 |
Ningbo Rongyi Chemical Fiber Science&Technology Co.,Ltd.
Bulletproof, Bulletproof Vest, Bulletproof vest, UHMWPE Fiber, Buy Bulletproof Vest, Manufacturer and Supplier - China
Bulletproof Vest (RYY97-12)
Product Name: Bulletproof Vest
Model No: RYY97-12
Navy floatation body armor vest
Cooperated with SEG Armor, (a Division of S.E.G. Inc.), USA Company
Key Specifications/Special Features:
Feature: designed for full protection of the upper torso, including the neck, shoulders and groin
Padded shoulders with the integrated flotation system allow personnel weighing up to 240 lbs. full flotation while giving full armor protection, even in water, and the vest is designed to withstand the most rugged environments.
Specification: 0.7sqm
Ballistic material: UHMWPE fiber
Carrier material: polyester oxford
Waterproof and anti-fire carrier available
Color: dark blue, woodland camouflage and desert camouflage or Optional
NIJ protection level:
o IIA: 3.8kg
o II: 4.3kg
o IIIA: 5kg
Ningbo Rongyi Chemical Fiber Science&Technology Co., Ltd.'s first-phase project covers 48,000 square meters.
Rongyi specializes in UHMWPE Fiber, Unidirectional Fabric (bulletproof fabric), Bulletproof Vests, Ballistic Helmets and Body Armor Plates.
We offer clients advanced products, quick response and a large variety of creative and unique solutions. We have years of experience and expertise in identifying user-defined characteristics for bulletproof products and then completing the development and design of the final product.
We produce according to the international standard ISO9001:2000.
At the end of 2008, Rongyi began cooperating with SEG Armor (a Division of S.E.G. Inc.), a USA company, and shares the trademark Poly-X&Poly-EX in China. | 223,705 |
There may be obstacles in your way, but that is no reason to give in at this moment. It may also feel like you are being tested. The truth of the matter is that you are becoming stronger and stronger day by day as you start to become aware of where your power lies and where you draw strength from. The Strength card, no matter which deck you use, almost always depicts a woman with a lion. Strength represents the female energy. It is an inner strength, not one that is determined by outside forces or physical measures that “represent” strength. Focus on your internal power and know that you can break through any obstacle.
Tarot Image: © Nine. | 384,984 |
Workshop Painting
Welcome to LightPaintingWorkshops.com!
Here you will find an updated list, with links, of the scheduled Light Painting Workshops by Harold Ross. If you have further questions or would like to register for one of Harold’s Light Painting Workshops, please don’t hesitate to contact us at the studio or click through any of the links below. And remember, workshops are always limited to 6 people and fill quickly, so don’t hesitate to register…
We look forward to instructing you at a future workshop!
Group Weekend Workshop – Light Painting the Still Life Workshop
Per Appointment –
If you would like to be notified of future light painting workshops and other news, please subscribe to the blog in the sidebar to the right.
Photograph by workshop student Wendy Belkin
Images created by alumni of Harold’s Light Painting Workshops can be viewed by clicking this link: Student Workshop Images… | 44,589 |
Public Papers - 1990 - October
The President's News Conference on the Federal Budget Crisis
1990-10-06
The President. I just wanted to comment. I know the leaders have been speaking. And I have not yet signed but, within the next couple of minutes, will veto the continuing resolution. We've had good cooperation from the Democrat and Republican leaders. The Congress has got to get on with the people's business. I'd like them to do that business -- get a budget resolution -- and get it done in the next 24 hours or 48 hours.
But as President, I cannot let the people's business be postponed over and over again. I've jotted down the numbers. There have been three dozen in the last decade -- three dozen continuing resolutions -- business as usual. And we can't have it. The President can only do this one thing: send that message back and say this is not a time for business as usual. The deficit is too important to the American people.
So, I expressed my appreciation to the Speaker, the majority leader in the Senate, the majority leader in the House, two Republican leaders -- thanked them for coming together in a spirit of compromise to get an agreement that I strongly supported. It didn't have everything I wanted in there, but now I'm calling on those who did not vote for it on the Republican side and on the Democratic side to get up with the leadership and send down something that will take care of the people's business once and for all.
I am sorry that I have to do this, but I made very clear that I am not going to be a part of business as usual when we have one deficit after another piling up. Had enough of it, and I think the American people have had enough of it.
Q. What changed your mind, sir?
Q. Mr. Mitchell [Senate majority leader] came out here a minute ago and said that this served no useful purpose. What useful purpose?
The President. We have a disagreement with him. I think it disciplines the United States Congress, Democrats and Republicans. They're the ones that have to pass this budget, and they ought to get on with it. And the leaders, to their credit, tried. But a lot of Members think they can get a free shot, right and left. What this message says is: No more business as usual. So, we did have a difference on that particular point. I think both the Speaker and the majority leader did not want me to do this.
But look, let me take you guys back a while. In August I wanted to keep the Congress in. That story was written. And I've listened to the leadership, both Republicans and Democrats; said no, we'll acquiesce -- because they said that to keep the Congress here in August will be counterproductive: ``Everybody will be angry with you. But the way to get it done is with the discipline of the calendar running after the summer recess.''
And so, I acquiesced. I compromised. I gave. I'm not going to do it anymore. I'm very sorry if people are inconvenienced, but I am not going to be a part of business as usual by the United States Congress.
Q. Mr. President, Senator Dole [Republican leader] said that you had agreed to send up a new short-term spending bill that would include spending cuts -- a sequester. Could you tell us something about that?
The President. I'm going to stay out of exactly what we're going to do and let the leaders handle the details of this now. It's in the Congress, and I still strongly support the agreement that both Democrat leaders and Republican leaders came down on. And I'll say this: I do think that there's a lot of agreement and good will still existing for that. It's not going to be passed exactly that way. It was defeated. But let's leave the details of negotiation on that to the Congress -- starting back in right now. They're going to have to contend with this veto I sent up -- and obviously, I want to see that veto sustained.
Q. You say no more business as usual -- in one breath you say no more CR's [continuing resolutions], and in the next breath, Dole says there's some CR which is -- --
The President. Well, if it has some discipline -- what I'm saying is, I want to see the system disciplined. If what Bob Dole said is correct -- I'll sign one if it puts some discipline on the system. And if it doesn't discipline the system, then I stay with my current position. No, excuse me, I'm glad you brought that up, because I would strongly support that.
Q. Mr. President, the leadership made a strong point in saying that it's the average Americans who are going to be hurt, the Federal workers and so forth. It's not Congressmen but average Americans who are going to be strongly hurt by this.
The President. The average American is smart. The average American knows what's going on, I think. And I think they know that the Congress will continue to kick this can down the road and that they've got to act. I am very sorry for people that are inconvenienced by this or hurt by this. But this is the only device one has for making something happen, and that is to get the Congress to act, to do its business.
Q. Mr. President, you seem to be blaming Congress, but in fact, a lot of their constituents are the ones that urged them to vote against this. They say it's unfair -- the burden is unfairly divided, that the poor and the middle class are paying too much. Is it possible that maybe this program that you proposed with the leaders just was not acceptable to the American public?
The President. Well, certain aspects of it might well not have been acceptable to the American public on both the right or the left. But when you're trying to do the country's business, I've discovered you have to compromise from time to time, and that's exactly what I did. Took a few shots in the process, but it doesn't matter. What matters is, let's move this process ahead now.
But, yes, you're right -- some people didn't like one aspect or another. We had Republicans jumping up on our side of the aisle and saying, ``I'll vote for it if you change this,'' or ``I don't like this part of it, but if you change that -- '' And similarly, you've got people that you were quoting that were on the other side.
But at times, one has to come together to do the country's business for the overall good. And these outrageous deficits cannot be permitted to go on and on and on and on. I'm worried about international markets. I'm worried about this country -- the opinion that it can't take care of its fiscal business.
And to their leaders' credit, Democrat and Republican, they tried very hard. They failed to get a majority on the Democratic side. And Republican leaders, with the help from this President and all I could bring to bear on it -- we failed, because we had people -- were looking at one narrow part of the package and not at the overall good. And I am hopeful now that with the urgency this veto brings to bear on the situation, that reasonable people, men and women in the Congress, can come together.
Q. Mr. President, what kind of progress is being made on a new budget resolution? And sources on the Hill are saying that there is growing support for raising the tax rates of the wealthy in exchange, perhaps, for the cuts on premiums for Medicare. But you have opposed that in the past. Are you willing to give on tax rates for the wealthy?
The President. I don't know the answer to your question. They're just going back up now to try. I like the parameters of the other deal wherein I compromise. We've got people -- your question reflects the views on the more liberal or left side of the political spectrum -- who raised those questions. We have some on the right side of the political spectrum coming at the process from another way.
Now, I say: Let them go up and negotiate it. This is the business of the Congress. And our people will stay in touch. I won't mislead them. If there's something that's so outrageous I can't accept it, I'll let them know at the beginning so they don't waste their time. But we're flexible. I've already compromised. And I'm not saying that I can't take a look at new proposals. But you've got to put together a majority in the Congress, and that's where the leaders are having great difficulty.
Q. Following up on that, members of your own party dislike the deal so much, how could you and your advisers have misjudged the sentiments of members of your own party?
The President. Because it's easy when you don't have to be responsible for something. It's easy to just get up and say, hey, I've got an election in 3 weeks, and I'm going to stand up against this particular package -- Medicare, the taxes, the home heating oil, or the fact there's not enough growth or not enough incentive. Any individual Member can do that. Maybe it plays well at home. The President and the leadership of both Houses have to be responsible for the overall good of the country, have to make something happen. I can't get it done just my way. I don't control both Houses of Congress. I'd love to think that that luxury would come by way someday, but it hasn't. Therefore, we've had to compromise. So, I will keep trying in that spirit -- that cooperative, positive spirit.
But when it comes to the discipline that comes from saying, ``I'm sorry, no more business as usual,'' that's where I can stand up. I don't need a consultation to do that. I've got plenty of advice on one side of that question and the other. But I am absolutely convinced this is right.
Even those who are inconvenienced by this are going to say, thank God, we'll get the American people's business of getting this deficit under control done. That's my objective. I think every parent out there who sees his kid's future being mortgaged by the outrageous deficit, sees a shaky economy that's being affected by prolonging these deliberations, will be grateful in the long run. In the meantime, we've got to take a little heat.
Q. Mr. President, the budget resolution that failed is one that you worked hard for. Despite the fact that you gave a national televised speech, despite the fact that your popularity is very high -- and you failed to sway even a majority of votes in your own party. Does that concern you, and do you think this is a major setback for your Presidency?
The President. No, I don't think that at all. But I do think -- yes, it concerns me. I'd like everybody to do it exactly the way I want, but it doesn't work that way. So, now we have to use a little discipline -- --
Q. Mr. President -- --
The President. -- -- nice guy stuff, and we'll try. It's a tough decision, it's not an easy decision I've made, but it is the right decision. So, I'm disappointed they didn't do it my way. But I'm in here to do what is best for the country; and what is best for the country is to get this deficit under control, to get this economy moving again, and to see people at jobs, not out on some welfare line. And that's what's at stake here -- economic soundness of the United States.
We've got a lot of things going on in the world, and a strong economy is vital to what I want to see achieved in this country. So, you have to take some hits. I mean, you don't get it done exactly your own way.
But I read these speculative stories. Tomorrow, there's going to be another vote. Tomorrow, somebody else will move the previous question or second the motion, or some committee chairman will jump up and say, hey, what about me -- my little empire is being invaded here. And I'll say, hey, the President's the guy that has to look at the overall picture.
I can understand Congressmen doing that. But we came together on a deal. We worked for it. Everybody had a chance to posture that didn't like it. They have no responsibility. But I feel a certain responsibility to the American people to move something forward here -- want a compromise. Now we're going to say: We'll try it this way. No more business as usual. Do not just keep putting off the day of reckoning. And I don't want to be a part of that, and that's why I've had to veto this resolution.
Q. Mr. President, you've talked a lot about discipline today. Do you think the American people on average are willing to accept the discipline of a tough budget?
The President. That's a very good question. And if you look at the vote in the House of Representatives, you might say no. But I think in the final analysis the answer will be yes, because I think we sometimes underestimate the intelligence of the American people. I can see where a Congressman can jump up on a specific spending program that'll help him in his district. I can see when somebody will give you the broad tax speech or help him in his district.
But in the final analysis, what the American people look at is: Do we have an economy in which I can feed my family, where I can have opportunity to work for a living, and where I can put a little aside to educate my kids? And therein lies the problem, because that's what we're working for -- is we're trying to get this Federal deficit down.
But I think you raise a good point. I think a lot of these Congressmen can jump up without any responsibility for running the country, or even cooperating with their leaders, and make a point that's very happy for the home folks. But I think that view underestimates the overall intelligence of the American people, whether conservative, whether a guy's working on a factory line someplace, whether he's an investor someplace.
That's why I think this is very important that the Congress now finally come to grips with this.
Q. There's some talk about this special challenge to Civiletti.
Q. Mr. President, it sounds like you're now saying: Hands off. It's up to the congressional leaders to do the negotiating.
The President. They've already started up the road there to go to Congress and start negotiating. But, no, we've made very clear that we're continuing to help. I don't want to mislead them. There are certain things I can accept. There are certain things I can't. So, I think it's very important that our able team, in whom I have total confidence, stay in touch with them.
Q. But not sit at the negotiating table with them?
The President. Oh, I think they'll be there. I think it all depends on what forum. I think there is some feeling, Ann [Ann Devroy, Washington Post], that on the part of Members, both Democrat and Republicans -- hey, you summiteers handed us a deal. Well, what the heck? I mean, how do you expect to get as far along toward an agreement as we did get? But what I want to do is facilitate it. And if they want to know where the White House is, fine. If they want the ideas that largely led to an agreement, fine, and I think they will. But we're not going to force our way in. This is the business of the Congress. The American people know that. They know that the President doesn't pass the budget and doesn't vote on all this stuff. It's the Congress who does it.
So, I'm not trying to assign blame. I'm simply saying, we're available. We want to talk -- fine. I think both leaders have indicated they wanted to stay in fairly close touch with the White House.
Q. Mr. President, there is some talk of a constitutional challenge to Civiletti on the bill that the Attorney General's opinion is not sufficient to run the Government, and that violates section 7 of the Constitution.
The President. I haven't heard anything about that.
Q. Mr. President, are you going to cancel your campaign schedule next week if this impasse is not resolved?
The President. I don't know. I've got to cancel everything that has to do with government, I guess. Maybe that's a good chance to get out there in the political process.
Q. How long can you hold out? How long can you let the Government stay shut down before you decide to toss -- --
The President. Watch and learn.
Q. How long do you think the Government can stay shut before -- --
The President. It's not a question of how long I can take it; it's how long the Congress can take it. But Congress is where the action is. It's the Congress that has to pass this in the House and in the Senate. That's where the action is. They've postponed this tough decision as I've mentioned -- how many -- 30-some times. And we just can't have it. The American people are saying, ``I want something done about this.'' That's where the focus will be.
So, I don't think it's a question of taking heat here or these guys marching out here about honking their horns on taxes. They know I don't like taxes. You get some other guy in Washington out here with a little placard, demonstrating -- something about the government employees -- we've been supporters of the government employees. But we cannot have business as usual.
The American people -- I don't know about inside the beltway, but outside they are fed up with business as usual, and so am I. I wish I had total control so we could do it exactly my way, but we don't. So, I've compromised. Now we're prepared to say, I'm not going to accept a resolution that just postpones it. I've told you I tried that approach.
I tried it in August. Let everybody go home on vacation when I had some good, sound advice I probably should have taken: Make the Congress stay in August. And I listened to the leaders, and they said: ``Oh, please don't do that. It will be counterproductive.'' Now they're saying to me: ``Please don't veto this. It will be counterproductive.'' When do the American people have a say? They want to see this deficit under control. And I don't have many weapons here as President, but one is the veto. When I do it, cast it on principle, I hope it is supported.
Q. What's happened to the prestige -- --
Q. If Dole sends up another CR, if the Congress sends up a CR with sequestration, when could that happen? Do you have some timeframe?
The President. I don't know.
Q. Could it happen the next couple of days, sir?
The President. Oh, yes, absolutely. It could happen this afternoon.
Q. It could happen this afternoon?
The President. Sure. Whether we -- together? I'm not that certain. Perhaps it's a little oversimplification because they're telling me there are some difficult problems right and left, both sides. But, no, they're going right back to negotiating. Let's hope it does. That's the way to serve the constituents.
Q. If it came up this afternoon, sir, would you sign it this afternoon?
The President. It depends what it is. I'll be around.
Q. You have vetoed the CR?
The President. Yes -- well, I haven't actually signed it, but I've got to rush right in there now and do that and send it up to the Hill. They know that they've -- --
Last question.
Q. Why did you change your mind?
Q. What's all this done to the prestige and influence of you and your office?
The President. Well, I think it will demonstrate that there is some power in the Presidency to compel the Congress to do something, and I think that's good.
Q. You are vetoing, though?
The President. Oh, yes. It hasn't been vetoed yet, but I need a typewriter in there to get it done. By the time we finish this press conference that has gone longer than I thought, it'll -- probably all typed up.
Q. Might you trade the bubble for capital gains now? Do you foresee that as a compromise?
The President. The negotiators in the Congress have a lot of flexibility. I remain in a flexible frame of mind. Certain things I can accept and can't. But I'd like to think that now those who postured on one side or another with no responsibility will join the leaders, Republican and Democrat, and say: Hey, we've got a responsibility to the overall good here. We can no longer just give a speech. We've got to pitch in and come together. And that's what my pitch is.
And that's why I'm doing it and doing this veto -- saying, hey, no more business as usual. And I think people understand that sometimes a President has to make a difficult decision. So, I don't worry about the prestige. I was elected to do what -- in a case like this -- what I think is best and in the national interest. And that's exactly what I'm doing.
Thank you all very much.
Q. Are you going to type those up yourself?
The President. Yes, but I didn't give you the full load.
Note: The President's 62d news conference began at 11:30 a.m. on the West Driveway of the White House. A tape was not available for verification of the content of this news conference. | 384,034 |
\begin{document}
\title{Biserial algebras via subalgebras and the path algebra of $\mathbb{D}_4$}
\author{Julian K\"ulshammer}
\address{Christian-Albrechts-Universit\"at zu Kiel, Ludewig-Meyn-Str. 4, 24098 Kiel, Germany}
\email{[email protected]}
\begin{abstract}
We give two new criteria for a basic algebra to be biserial. The first one states that an algebra is biserial iff all subalgebras of the form $eAe$ where $e$ is supported by at most $4$ vertices are biserial. The second one gives some condition on modules that must not exist for a biserial algebra. These modules have properties similar to the module with dimension vector $(1,1,1,1)$ for the path algebra of the quiver $\mathbb{D}_4$.\\
Both criteria generalize criteria for an algebra to be Nakayama. They rely on the description of a basic biserial algebra in terms of quiver and relations given by R. Vila-Freyer and W. Crawley-Boevey \cite{CBVF98}.
\end{abstract}
\maketitle
\section{Introduction}
Throughout this paper let $k$ be an algebraically closed field, denote by $A$ a fi\-nite di\-men\-sio\-nal $k$-algebra, its (Jacobson) radical by $\rad{A}$ and by $\modu A$ the category of all finitely generated left modules. For $M\in \modu A$ we denote by $\radoperator^i M$ the $i$-th radical of $M$, by $\soc^i M$ the $i$-th socle of $M$ (cf. \cite{B95} definition 1.2.1) and by $Q=(Q_0,Q_1,s,t)$ a quiver with set of vertices $Q_0$, set of arrows $Q_1$ and starting (resp. terminal) point functions $s$ (resp. $t$). For every point $i\in Q_0$ of the quiver there exists a zero path, denoted by $e_i$; the ideal of the path algebra $kQ$ generated by the arrows will be denoted by $kQ^+$. For basic facts on radicals, socles and quivers, which we use without further reference, we refer to \cite{ASS06}.\\
In 1979 K. Fuller (\cite{F79}) defined biserial algebras as algebras whose indecomposable projective left and right modules have uniserial submodules which intersect zero or simple and which sum to the unique maximal submodule (Tachikawa mentioned this condition before, but didn't give these algebras a name \cite{T61} proposition 2.7). These natural generalizations of Nakayama algebras are a class of tame algebras as W. Crawley-Boevey showed in \cite{CB95}. Examples of these algebras are blocks of group algebras with cyclic or dihedral defect group (see e.g. \cite{Ri75}, \cite{E87}), the algebras appearing in the Gel'fand-Ponomarev classification of the singular Harish-Chandra modules over the Lorentz group (\cite{GP68}) as well as special biserial algebras, which were recently used to test certain conjectures (\cite{EHIS04}, \cite{LM04}, \cite{S10}).\\
As one looks at Nakayama algebras (cf. \cite{ASS06} Section V.3) there are at least three ways to describe them: First via the projective left and right modules, i.e. they are uniserial, second via the (ordinary) quiver (and its relations), i.e. the quiver of $A$ is a linearly oriented (extended) Dynkin diagram of type $\mathbb{A}_n$ or $\tilde{\mathbb{A}}_n$ for some $n\geq 1$, and third via certain ``small'' modules in the module category (cf. Lemma \ref{lem2}), i.e. there exists no local module $M$ of Loewy length two, such that $l(\rad M)=2$ and no colocal module $M$ of Loewy length two, such that $l(M/\soc M)=2$ (we could call this property non-linearly oriented $\mathbb{A}_3$-freeness).\\
For biserial algebras, aside from the original definition, a description of basic biserial algebras in terms of quivers and relations is due to R. Vila-Freyer and W. Crawley-Boevey (\cite{CBVF98}). We use this description to obtain one in terms of certain ``small'' modules, analogous to the description for Nakayama algebras given above.\\
A basic algebra $A$ will be called $\mathbb{D}_4$-free iff there is no $A$-module with similar properties to the one with dimension vector $(1,1,1,1)$ for the path algebra of the quiver $\mathbb{D}_4$. Our result will then be the following:
\begin{thm}\label{thm1}
A basic algebra $A$ is biserial iff it is $\mathbb{D}_4$-free.
\end{thm}
Furthermore, from the description of the quiver of $A$ we can see that $A$ is Nakayama iff all subalgebras of the form $eAe$, where $e$ is supported on one vertex and its neighbouring vertices, are Nakayama. We could call these subalgebras of type $\mathbb{A}_3$. Our second main result generalizes this to biserial algebras and states
\begin{thm}\label{thm2}
An algebra $A$ is biserial iff all subalgebras $eAe$ of type $\mathbb{D}_4$, that is, with support of a vertex and at most three of its neighbouring vertices, are biserial.
\end{thm}
Our paper is organized as follows. In Section 2, we recall the results of \cite{VF94} and \cite{CBVF98} giving a description of a basic biserial algebra in terms of its quiver and relations. Section 3 then gives the precise statement of Theorem \ref{thm2} and its proof. The precise definition of $\mathbb{D}_4$-free and the proof of Theorem \ref{thm1} is then presented in Section 4.
\section{Biserial algebras}
\begin{defn}[\cite{F79}]
An algebra $A$ is called \emphbf{biserial} if for every projective left or right module $P$ there exist uniserial submodules $U$ and $V$ of $P$ satisfying $\rad{P}=U+V$ (not necessarily a direct sum), such that $U\cap V$ is zero or simple.
\end{defn}
In the remainder of this section we present the results of R. Vila-Freyer and W. Crawley-Boevey who describe biserial algebras in terms of quivers and relations. For the proofs we refer to \cite{CBVF98}. The notation has been adjusted to ours.
\begin{defn}[\cite{CBVF98} Definitions 1-3]
\begin{enumerate}[(i)]
\item A \emphbf{bisection} of a quiver $Q$ is a pair $(\sigma, \tau)$ of functions $Q_1\to \{\pm 1\}$, such that if $a$ and $b$ are distinct arrows with $s(a)=s(b)$ (respectively $t(a)=t(b)$), then $\sigma(a)\neq \sigma(b)$ (respectively $\tau(a)\neq \tau(b)$). A quiver, which admits a bisection, i.e. in each vertex there start and end at most two arrows, is called \emphbf{biserial}.
\item Let $Q$ be a quiver and $(\sigma,\tau)$ a bisection. We say that a path $a_r\cdots a_1$ in $Q$ is a \emphbf{good path}, or more precisely is a $(\sigma,\tau)$\emphbf{-good path}, if $\sigma(a_{i})=\tau(a_{i-1})$ for all $1< i\leq r$. Otherwise we say that it is a \emphbf{bad path}, or is a $(\sigma,\tau)$\emphbf{-bad path}. The paths $e_i$ ($i\in Q_0$) are good.
\item By a \emphbf{bisected presentation} $(Q,\sigma,\tau, \mathpzc{p},\mathpzc{q})$ of an algebra $A$ we mean that $Q$ is a quiver with a bisection $(\sigma,\tau)$ and that $\mathpzc{p}, \mathpzc{q}: kQ\to A$ are surjective algebra homomorphisms with $\mathpzc{p}(e_i)=\mathpzc{q}(e_i)$ for all $i\in Q_0$, $\mathpzc{p}(a), \mathpzc{q}(a)\in \rad{A}$ for all arrows $a\in Q_1$ and $\mathpzc{q}(a)\mathpzc{p}(x)=0$ whenever $a,x\in Q_1$ with $ax$ a bad path.
\end{enumerate}
\end{defn}
\begin{thm}[\cite{CBVF98} Theorem]
Any basic biserial algebra $A$ has a bisected presentation $(Q,\sigma,\tau,\mathpzc{p},\mathpzc{q})$ in which $Q$ is the quiver of $A$. Conversely any algebra with a bisected presentation is basic and biserial.
\end{thm}
\begin{cor}[\cite{CBVF98} Corollary 3]\label{corVF}
Suppose that $Q$ is a quiver, $(\sigma, \tau)$ is a bisection, elements $d_{ax}\in kQ$ are defined for each bad path $ax$, $a,x\in Q_1$, and they satisfy
\begin{enumerate}[({C}1)]
\item Either $d_{ax}=0$ or $d_{ax}=\omega b_t\cdots b_1$ with $\omega\in k^\times, t\geq 1$ and $b_t\cdots b_1x$ a good path with $t(b_t)=t(a)$ and $b_t\neq a$,
\item if $d_{ax}=\phi b$ and $d_{by}=\psi a$ with $\phi,\psi\in k^\times$ and $a,b,x,y\in Q_1$, then $\phi\psi\neq 1$.
\end{enumerate}
If $I$ is an admissible ideal in $kQ$ which contains all the elements $(a-d_{ax})x$, then $kQ/I$ is a basic biserial algebra. Conversely for every basic biserial algebra $A$ there exist a quiver $Q$, a bisection $(\sigma,\tau)$ and for every bad path $ax$, $a,x\in Q_1$, elements $d_{ax}$, which satisfy the above conditions, and an admissible ideal $I$, such that $A\cong kQ/I$.
\end{cor}
Observe that the algebras where $d_{ax}=0$ for all bad paths $ax$ are precisely the special biserial algebras which are a lot better understood.\par
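As a minimal illustration of these conditions (not an example taken from \cite{CBVF98}), let $Q$ be the quiver
\[\begin{xy}\xymatrix{1\ar[r]^x &2\ar@<2pt>[r]^a\ar@<-2pt>[r]_b&3}\end{xy}\]
with a bisection such that $\sigma(a)=\tau(x)$ and $\sigma(b)=-\tau(x)$, so that $ax$ is good and $bx$ is bad. Choosing $d_{bx}:=\omega a$ for some $\omega\in k^\times$ satisfies (C1), condition (C2) is vacuous, and $I:=\langle (b-\omega a)x\rangle$ is admissible; hence $kQ/I$ is basic biserial by Corollary \ref{corVF}. Note that after the change of arrows $b\mapsto b-\omega a$ this particular algebra is even special biserial.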
The following technical lemma will be used in the next theorem. Its proof relies on Lemma 1.2 in \cite{CBVF98}. The remaining parts are proved by similar methods, so we omit it here although it is nowhere published.
\begin{lem}[\cite{VF94} Lemma 2.1.3.1]\label{tl}
Let $A=kQ/I$ be as in Corollary \ref{corVF} and let $a,x\in Q_1$ be arrows such that $ax$ is a bad path and $d_{ax}=\omega b_t\cdots b_1$ with $\omega\in k^\times, b_t,\dots,b_1\in Q_1$. Then for any arrow $d$ with $s(d)=t(a)$, both $dax$ and $db_t\cdots b_1x$ are elements of $I$.
\end{lem}
\section{Subalgebras of type $\mathbb{D}_4$}
As a first application of the description due to R. Vila-Freyer and W. Crawley-Boevey, the next theorem tells us that we can restrict ourselves to algebras whose quiver has at most $4$ vertices, one of which is connected to all the others by at least one arrow.\\
For an easier statement of our first main result, we introduce here two sets of neighbours of some given vertex. These sets will correspond via idempotents $e$ to subalgebras $eAe$ of $A$ that one can use to test the biseriality of $A$.
\begin{defn}
Let $A=kQ/I$ and let $l\in Q_0$.
\begin{enumerate}[(i)]
\item Then $N(l):=\{j\neq l|j\thickspace \text{is connected to $l$ by at least one arrow in the quiver of}\thickspace A\}$ is called the \emphbf{set of neighbouring vertices of $l$}.
\item If $|N(l)|<4$, then define $J(l):=N(l)$, and if $|N(l)|=4$, then call any subset $J(l)\subset N(l)$ with $|J(l)|=3$ a \emphbf{set of neighbours of $l$ of type $\mathbb{D}_4$}.
\end{enumerate}
\end{defn}
\begin{thm}\label{5pt}
\begin{enumerate}[(i)]
\item Let $A$ be a biserial algebra. Then the algebra $eAe$ is biserial for every idempotent $e\in A$.
\item Let $A=kQ/I$ be a basic algebra with zero paths $e_1,\dots,e_n$. Then $A$ is biserial iff for all idempotents $e\in A$ of the form $e=e_l+\sum_{j\in N(l)}e_j$ the algebra $eAe$ is biserial.
\item Let $A=kQ/I$ be a basic algebra with zero paths $e_1,\dots,e_n$. Then $A$ is biserial iff for all idempotents $e\in A$ of the form $e=e_l+\sum_{j\in J(l)}e_j$ for some set of neighbours of $l$ of type $\mathbb{D}_4$ the algebra $eAe$ is biserial.
\end{enumerate}
\end{thm}
\begin{proof}
We can assume without loss of generality that $A$ is a basic algebra. First assume that $A$ is biserial. We want to show that for all idempotents $e\in A$ the algebra $eAe$ is biserial. Therefore let $e=e_1+\dots+e_k$ be a decomposition of $e$ into primitive orthogonal idempotents and analogously $1-e=e_{k+1}+\dots+e_n$. Then $A\cong kQ/I$, where the idempotents $e_1,\dots,e_n$ correspond to the zero paths and $I$ satisfies the conditions of Corollary \ref{corVF} and $eAe\cong ekQe/eIe$.\\
It is a standard result that one can check quite easily, that $\{e_1,\dots,e_k\}$ is a complete set of primitive orthogonal idempotents for $eAe$. The radical of $eAe$ is $e\rad{A}e$ since this is a nilpotent ideal and one can use $\Hom$-functors of projective modules to get from the sequence $0\to \rad{A}\to A\to A/\rad{A}\to 0$ to the sequence $0\to e\rad{A}e\to eAe\to eA/\rad{A}e\to 0$, which is therefore short exact and the factor is semisimple. An arrow in the quiver of $eAe$ does therefore correspond to an element in $\rad{eAe}/\radsquare{eAe}=e\rad{A}e/e\rad{A}e\rad{A}e$. Note that $e\rad{A}e\rad{A}e\subseteq e\radsquare{A}e$ but in general there is no equality. Therefore there can be arrows in the quiver of $eAe$ that do not come from arrows in the quiver of $A$, but instead from longer paths that do not pass through one of the vertices $1,\dots,k$.
Let us fix some notation: Denote by $\widetilde{a_1\dots a_s}$ the path $a_1\dots a_s$ as an element of $ekQe$ in case $1\leq s(a_s), t(a_1)\leq k$ and $k+1\leq s(a_i)\leq n$ for $1\leq i\leq s-1$. Such a path $a_1\dots a_s$ will be called irreducible in case $\widetilde{a_1\cdots a_s}\nequiv 0\mod eIe$. We now have a presentation $eAe\cong k\tilde{Q}/\tilde{I}$ where $\tilde{Q}_0=\{1,\dots,k\}$, $\tilde{Q}_1$ is the set of irreducible paths and $\tilde{I}:=eIe\cap k\tilde{Q}$ will be the induced ideal (not necessarily admissible, but $(k\tilde{Q}^+)^m\subseteq \tilde{I}\subseteq k\tilde{Q}^+$).\\
The same proof as for admissible ideals (cf. \cite{ASS06} Lemma II.2.10) shows that $\rad{k\tilde{Q}/\tilde{I}}=k\tilde{Q}^+/\tilde{I}$. So an arrow in the quiver of $eAe$ corresponds to a basis element of $(k\tilde{Q}^+/\tilde{I})/(k\tilde{Q}^+/\tilde{I})^2\cong k\tilde{Q}^+/((k\tilde{Q}^+)^2+\tilde{I})$. So $\tilde{Q}$ is in general not the quiver of $eAe$. We now want to show, that the quiver of $eAe$ is biserial and that we can choose $Q'_1\subseteq \tilde{Q}_1$ a base of $k\tilde{Q}^+/(\tilde{I}+(k\tilde{Q}^+)^2)$ in such a way, that $Q'$ inherits a bisection from $Q$ (Taking a base guarantees, that $Q'$ will be the quiver of $eAe$): In any point of $Q$ there start at most two arrows. The presence of more than two irreducible paths from a vertex $i$ to a vertex $j$, both in $\{1,\dots,k\}$ leads to two irreducible paths from $i$ to $j$ of the form $qa_sx_1p$ and $q'b_1x_1p$ for some paths $p,q,q'$ and arrows $a_s,x_1,b_1\in Q_1$, $a_s\neq b_1$.
\[\begin{xy}\xymatrix{&i_1\ar[d]^{x_1}\\&i_2\ar[rd]^{a_s}\ar[ld]_{b_1}\\i_3&&i_4}\end{xy}\]
According to Corollary \ref{corVF} at any such crossing there has to be a relation, either of the form $a_sx_1$ or of the form $(a_s-\omega b_t\cdots b_1)x_1$ for some $\omega\in k^\times$, $t\geq 1$ and $b_t,\dots,b_2\in Q_1$. In the former case the path $qa_sx_1p$ belongs to $eIe$, a contradiction. In the latter case, either $j$ lies on the longer path $b_t\cdots b_1$, then $qa_sx_1p\in k\tilde{Q}^2+\tilde{I}$, a contradiction, otherwise at most one of the paths would lead to an arrow in $Q'$ as $a_sx_1\equiv \omega b_t\cdots b_1x_1\mod I$. This shows that the quiver of $eAe$ is biserial.\\
We now want to choose $Q_1'\subseteq \tilde{Q}_1$ as described above, such that $Q'$ inherits a bisection from $Q$. Assume there are more than two arrows from $i$ to $j$ in $\tilde{Q}$. Suppose two of them start with the same arrow. Then the above arguments show that they have to be linearly dependent modulo $\tilde{I}$. So for every choice $Q_1'\subseteq \tilde{Q}_1$ of a base of $k\tilde{Q}^+/((k\tilde{Q}^+)^2+\tilde{I})$ only one of them will appear, so if we define $\sigma'(\widetilde{a_1\cdots a_s}):=\sigma(a_s)$ that will consistently define one part of a bisection. If on the other hand we have two paths starting with different arrows but ending with the same path $a'$, i.e. we have the following picture
\[\begin{xy}\xymatrix{i_3\ar[rd]_{x'}&&i_4\ar[ld]^{y'}\\&i_2\ar[d]^{a'}\\&i_1}\end{xy}\]
with $x',y'\in Q_1$, $\sigma(a')\neq \tau(x')$, $x'\neq y'$ and the two different paths are $a'x'p$ and $a'y'p'$ with $p,p'\in kQ$. If the length of the path $a'$, regarded as an element of $kQ$ is greater than one, then Lemma \ref{tl} leads to the contradiction that $a'x'\in I$. If $a'\in Q_1$ then there exist arrows $b_{t'}',\dots, b_1'\in Q_1$, such that $(a'-\omega'b_{t'}'\cdots b_1')x'\in I$, so we can replace $a'x'p$ in a choice of a base by $b_{t'}'\cdots b_1'x'p$ and will also get a base of $k\tilde{Q}^+/((k\tilde{Q}^+)^2+\tilde{I})$. Then we can also define $\tau'$ consistently by $\tau'(\widetilde{a_1\cdots a_s}):=\tau(a_1)$ yielding a bisection of $Q'$ inherited from $Q$. If we take $I':=\tilde{I}\cap kQ'$ then this is an admissible ideal with $kQ'/I'\cong k\tilde{Q}/\tilde{I}\cong eAe$.\\
Now we want to show, that the necessary relations of Corollary \ref{corVF} exist. Therefore let $\tilde{a}\tilde{x}:=\widetilde{a_1\cdots a_s}\widetilde{x_1\cdots x_r}$ be a bad path of length two in $Q'$ ($a_1,\dots,a_s, x_1,\dots,x_r\in Q_1$). If $s=1$, then either $a_sx_1$ is in $I$ and therefore $\tilde{a}\tilde{x}\in I'$ or there exists $\omega\in k^\times, b_1,\dots,b_t\in Q_1$, such that $(a_1-\omega b_t\cdots b_1)x_1\in I$ and therefore $(\tilde{a}_1-\omega \widetilde{b_t\cdots b_1})\widetilde{x_1\cdots x_r}\in I'$, where $\widetilde{b_t\cdots b_1}$ is the path corresponding to $b_1\cdots b_1$ in $ekQ'e$. If otherwise $s>1$, then by Lemma \ref{tl}, $a_1\cdots a_sx_1\in I$, therefore $\tilde{a}\tilde{x}\in I'$. This shows (i).\\
For the other direction of (ii) let $A$ be an algebra, such that $eAe$ is biserial for all idempotents of the required form. For any idempotent $e_l$, there exists a bisected presentation $(Q_l,\sigma_l,\tau_l,\mathpzc{p}_l,\mathpzc{q}_l)$. Set $\mathpzc{p}(e_l):=\mathpzc{q}(e_l):=e_l$ and for arrows $a$ starting (resp. ending) at $l$ in $eAe$, that come from arrows (and not from longer paths) in $A$, set $\sigma(a):=\sigma_l(a)$ and $\mathpzc{q}(a):=\mathpzc{q}_l(a)$ (resp. $\tau(a):=\tau_l(a)$ and $\mathpzc{p}(a):=\mathpzc{p}_l(a)$). Taking idempotents of the form $e_l+\sum_{j\in N(l)}e_j$ assures that we define values of $\sigma, \tau, \mathpzc{p}, \mathpzc{q}$ for any arrow $a\in Q_1$ in a compatible way. To show that this defines a bisected presentation for $A$ it only remains to prove that $\mathpzc{p}$ and $\mathpzc{q}$ are surjective. This follows as in the construction of the quiver of $A$ (cf. \cite{ASS06} Theorem 3.7) since the elements $\overline{\mathpzc{p}(a)}$ (resp. $\overline{\mathpzc{q}(a)}$) span $A/\radsquare{A}$.\\
For the other direction of (iii) let $A$ be an algebra, such that $eAe$ is biserial for all idempotents of the required form. For vertices where there are at most three neighbouring vertices proceed as in (ii). If there are four neighbouring vertices for $l$, then there are two arrows $x,y$ ending in $l$ and two arrows $a,b$ starting at $l$. Assume without loss of generality $s(x)=j_1$, $s(y)=j_2$, $t(a)=j_3$, $t(b)=j_4$. Denote the four bisected presentations that we get for this vertex by $(Q_{l}^i, \sigma_l^i, \tau_l^i, \mathpzc{p}_l^i, \mathpzc{q}_l^i)$ where $j_i$ is the vertex that is missing in the corresponding quiver. Contrary to (ii) it is not guaranteed that the bad paths in the corresponding algebras $eAe$ for the same vertex $l$ but different $J(l)$ coincide, so we have to do the following case-by-case-analysis. Assume without loss of generality that $\sigma_l^4(a)\neq \tau_l^4(x)$, so that $\mathpzc{q}_l^4(a)\mathpzc{p}_l^4(x)=0$, otherwise interchange the rôles of $x$ and $y$. If $\sigma_l^3(b)\neq \tau_l^3(y)$, then define $\tau(x):=\tau_l^4(x)$, $\tau(y):=\tau_l^4(y)$, $\sigma(a):=\sigma_l^4(a)$, $\sigma(b):=-\sigma_l^4(a)$, $\mathpzc{p}(x):=\mathpzc{p}_l^4(x)$, $\mathpzc{q}(a):=\mathpzc{q}_l^4(a)$, $\mathpzc{p}(y):=\mathpzc{p}_l^3(y)$ and $\mathpzc{q}(b):=\mathpzc{q}_l^3(b)$. Otherwise we have $\mathpzc{q}_l^3(b)\mathpzc{p}_l^3(x)=0$. In that case if $\sigma_l^1(b)\neq \tau_l^1(y)$, then take $\tau(x):=\tau_l^4(x)$, $\tau(y):=\tau_l^4(y)$, $\sigma(a):=\sigma_l^4(a)$, $\sigma(b):=-\sigma_l^4(a)$, $\mathpzc{p}(x):=\mathpzc{p}_l^4(x)$, $\mathpzc{q}(a):=\mathpzc{q}_l^4(a)$, $\mathpzc{p}(y):=\mathpzc{p}_l^1(y)$ and $\mathpzc{q}(b):=\mathpzc{q}_l^1(b)$. Otherwise in that case we also have $\sigma_l^1(a)\neq \tau_l^1(y)$ and we can then define $\tau(x):=\tau_l^3(x)=:\sigma(a)$, $\tau(y):=\tau_l^3(y)=:\sigma(b)$, $\mathpzc{p}(x):=\mathpzc{p}_l^3(x)$, $\mathpzc{q}(b):=\mathpzc{q}_l^3(x)$, $\mathpzc{p}(y):=\mathpzc{p}_l^1(y)$ and $\mathpzc{q}(a):=\mathpzc{q}_l^1(a)$. In each case we get surjective maps $\mathpzc{p}, \mathpzc{q}$, and hence a bisected presentation, with the same argument as for (ii).
\end{proof}
\begin{rmk}
\begin{enumerate}[(a)]
\item One can get rid of the assumption that the algebra has to be basic by adjusting the definition of $J(l)$ by taking at least one representative of any isomorphism class $[Ae_i]$.
\item Note that for non-biserial algebras with biserial quiver the algebras $eAe$ do in general not have biserial quiver.
\item For special biserial algebras, it is possible to go from $A$ to $eAe$ and stay special biserial. However, one cannot go back, as one can see from the example in \cite{SW83} of a biserial algebra which is not a special biserial algebra. For idempotents as described in the theorem, $eAe$ is always special biserial.
\item That fewer points than in (iii) are not sufficient for testing biseriality is already apparent for the path algebra of $\mathbb{D}_4$: if we take only two neighbours we get the path algebra of $\mathbb{A}_3$, which is obviously Nakayama, and therefore biserial.
\item One reason why one can also not get rid of multiple arrows in general is the same as for assumption (C2) in \ref{corVF}, for example take the quiver $\begin{xy}\xymatrix{1\ar@<2pt>[r]^x\ar@<-2pt>[r]_y &2\ar@<2pt>[r]^a\ar@<-2pt>[r]_b&3}\end{xy}$ with relations $(a-b)x$ and $(b-a)y$, which is not biserial, but subalgebras with fewer arrows are biserial.
\end{enumerate}
\end{rmk}
\section{$\mathbb{D}_4$-free algebras}
In this section we present our new description of basic biserial algebras, namely $\mathbb{D}_4$-free algebras, and prove that the two defintions coincide. As a corollary we get a description of biseriality in terms of the subalgebras mentioned in Theorem \ref{5pt}.
\begin{defn}\label{D4-free}
Let $A$ be a basic algebra with a complete set of primitive orthogonal idempotents $\{e_1,\dots,e_n\}$. Then $A$ is called \emphbf{$\mathbb{D}_4$-free} if none of the following modules exists:
\begin{enumerate}[(1)]
\item a local module $M$ of Loewy length two with $l(\rad M)=3$,
\item a colocal module $M$ of Loewy length two with $l(M/\soc M)=3$,
\item a local module $M$, indices $i,j\in \{1,\dots,n\}$, $\tilde{a}_1\in e_i\rad{A}e_j$, $\tilde{a}_2, \tilde{a}_3\in \rad{A}$, $b_0\in M$, such that
\begin{enumerate}[(a)]
\item $\tilde{a}_2\tilde{a}_1b_0, \tilde{a}_3\tilde{a}_1b_0$ are linearly independent
\item $\radsquare{A}\tilde{a}_1b_0=0$
\item there do not exist $\hat{a}_1,\hat{a}_1'\in e_i\rad{A}e_j$, $\hat{a}_2,\hat{a}_3\in \rad{A}$ such that
\begin{enumerate}
\item[($\alpha$)] $\overline{\hat{a}_2}, \overline{\hat{a}_3}\in \langle \tilde{a}_2,\tilde{a}_3\rangle_k/(\radsquare{A}\cap \langle \tilde{a}_2,\tilde{a}_3\rangle_k)$ linearly independent
\item[($\beta$)] $\hat{a}_1b_0+\hat{a}_1'b_0=\tilde{a}_1b_0$
\item[($\gamma$)] $\hat{a}_2\hat{a}_1'b_0=0$ and $\hat{a}_3\hat{a}_1b_0=0$.
\end{enumerate}
\end{enumerate}
\item a local right module $M$, indices $i,j\in \{1,\dots,n\}$, $\tilde{a}_1\in e_j\rad{A}e_i$, $\tilde{a}_2, \tilde{a}_3\in \rad{A}$, $b_0\in M$, such that
\begin{enumerate}[(a)]
\item $b_0\tilde{a}_1\tilde{a}_2, b_0\tilde{a}_1\tilde{a}_3$ are linearly independent
\item $b_0\tilde{a}_1\radsquare{A}=0$
\item there do not exist $\hat{a}_1, \hat{a}_1'\in e_j\rad{A}e_i$, $\hat{a}_2, \hat{a}_3\in \rad{A}$ such that
\begin{enumerate}
\item[($\alpha$)] $\overline{\hat{a}_2}, \overline{\hat{a}_3}\in \langle \tilde{a}_2, \tilde{a}_3\rangle_A/(\radsquare{A}\cap \langle \tilde{a}_2,\tilde{a}_3\rangle_A)$ linearly independent
\item[($\beta$)] $b_0\hat{a}_1+b_0\hat{a}_1'=b_0\tilde{a}_1$
\item[($\gamma$)] $b_0\hat{a}_1'\hat{a}_2=0$ and $b_0\hat{a}_1\hat{a}_3=0$.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{defn}
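As an illustration of (3), consider the motivating example from the introduction, spelled out for one orientation of $\mathbb{D}_4$ (for the orientations with three arrows starting, resp. ending, at the central vertex, the path algebra already admits a module of type (1), resp. (2)). Let $A=kQ$ be the path algebra of the quiver
\[\begin{xy}\xymatrix{1\ar[r]^x&2\ar[r]^a\ar[d]_b&3\\&4&}\end{xy}\]
and let $M:=Ae_1$, the indecomposable projective with dimension vector $(1,1,1,1)$. Take $b_0:=e_1$, $\tilde{a}_1:=x\in e_2\rad{A}e_1$ (so $i=2$, $j=1$), $\tilde{a}_2:=a$ and $\tilde{a}_3:=b$. Then (a) holds since $ax$ and $bx$ are linearly independent, and (b) holds since all paths of length three vanish. For (c), any $\hat{a}_1,\hat{a}_1'\in e_2\rad{A}e_1=\langle x\rangle_k$ are of the form $\hat{a}_1=\lambda x$, $\hat{a}_1'=\mu x$ with $\lambda+\mu=1$ by ($\beta$); writing $\hat{a}_3=\alpha a+\beta b+r$ with $r\in\radsquare{A}$, condition ($\alpha$) gives $(\alpha,\beta)\neq (0,0)$, so the equation $\hat{a}_3\hat{a}_1b_0=\lambda(\alpha\, ax+\beta\, bx)=0$ from ($\gamma$) forces $\lambda=0$, and the equation $\hat{a}_2\hat{a}_1'b_0=0$ similarly forces $\mu=0$, contradicting $\lambda+\mu=1$. Hence $M$ is a module of the form (3), and indeed $A$ is not biserial, since $\rad{Ae_1}$ is not a sum of two uniserial submodules.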
\begin{rmk}
An algebra $A$ is biserial iff its opposite algebra $A^{op}$ is biserial. $A$ is also $\mathbb{D}_4$-free iff $A^{op}$ is.\\
The reader may have noticed that (3) and (4) do not necessarily describe ``small'' modules in the sense that their length or Loewy length is bounded, but instead give some condition on a ``small'' part of a possibly ``large'' module (cf. (b)). This is because of the path algebra of the following quiver (and similar ones):
\[\begin{xy}\xymatrix{1\ar[rr]^u\ar[rd]_x&&2\ar[ld]^y\ar[dd]^{u'}\\&3\ar[rd]^b\ar[ld]_a\\4&&5\ar[ll]^{u''}}\end{xy}\]
with relations $ax=u''u'u$ and $by$, $yu$, $u''b$. If we want to have a module with similar properties as in (3) but replacing (b) with $\radoperator^3(A)M=0$, for example $P_1/\radoperator^3(A)P_1$, then this would be a module over the string algebra with the same quiver and relations $ax$ and $by$, $yu$, $u''b$.
\end{rmk}
\begin{lem}\label{lem2}
Let $A$ be an algebra.
\begin{enumerate}[(i)]
\item There is a local module $M$ of Loewy length two with $l(\rad M)=m$ iff there is a point in the quiver of $A$ where $m$ arrows start.
\item There is a colocal module $M$ of Loewy length two with $l(M/\soc M)=m$ iff there is a point in the quiver of $A$ where $m$ arrows end.
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}[(i)]
\item Without loss of generality let $A=kQ/I$ for some quiver $Q$ and an admissible ideal $I$, since both conditions hold true iff they hold true for the corresponding basic algebra and any basic algebra is of that form.
\begin{enumerate}
\item[``$\Leftarrow$'':] Let $i$ be the point where $m$ arrows start, then $M:=Ae_i/\radsquare{A}e_i$ is a local module with $l(\rad M)\geq m$ and a factor module of it has the required properties.
\item[``$\Rightarrow$'':] Let $M$ be such a module. Let $b_0\in M$ be such that $\overline{b_0}$ spans $\Kopf M$. Since $M$ is a local module, there exists $e_j$ such that $\overline{e_jb_0}$ also spans $\Kopf M$. We have $\rad M=\rad{A}\cdot M$, and since $M$ is of Loewy length two and $l(\rad M)=m$, there exist $a_1,\dots,a_m\in Q_1$ with $a_1e_jb_0, \dots,a_me_jb_0$ linearly independent; as a consequence, they all start at the vertex $j$.
\end{enumerate}
\item This is the dual statement to (i).
\end{enumerate}
\end{proof}
\begin{thm}
Let $A$ be a basic algebra with complete set of primitive orthogonal idempotents $\{e_1,\dots,e_n\}$. Then $A$ is $\mathbb{D}_4$-free iff it is biserial.
\end{thm}
\begin{proof}
Assume $A$ is biserial. Then Lemma \ref{lem2} for $m=3$ shows that modules of the form (1) and (2) do not exist. As (4) is dual to (3), it remains to exclude modules of the form (3).\\
Suppose to the contrary that a module of the form (3) with vertices $i$ and $j$ and the required elements exists. According to Corollary \ref{corVF} we may assume that $A=kQ/I$ satisfies the conditions stated there. Since $Q$ is a biserial quiver there end at most two arrows $a_1, a_1'$ in the vertex $i$ (define $a_1':=0$ if there does not exist a second arrow ending in $i$) and we can decompose $\tilde{a}_1=a_1p+a_1'p'$ with $p,p'\in kQ$. We may assume without loss of generality that $\tilde{a}_2=\mu_2a_2+\mu_3a_3+r$ and $\tilde{a}_3=\mu_2'a_2+\mu_3'a_3+r'$, where $a_2,a_3\in Q_1$ with $s(a_2)=s(a_3)=i$ and $r,r'\in \radsquare{A}e_i$. Otherwise we can replace $\tilde{a}_2$ and $\tilde{a}_3$ by $\tilde{a}_2e_i$ and $\tilde{a}_3e_i$ and $r,r'$ by $re_i$ and $r'e_i$ and get elements with the same properties. Define $\hat{a}_1:=a_1p$, $\hat{a}_1':=a_1'p'$. One of the paths $a_2a_1$ and $a_3a_1$ is bad, assume without loss of generality, that it is $a_2a_1$. If $\hat{a}_1'b_0=0$, then $\tilde{a}_2\tilde{a}_1b_0$ and $\tilde{a}_3\tilde{a}_1b_0$ are not linearly independent because the necessary relation $(a_2-\omega qa_3)a_1$, $\omega\in k$, $q$ a path in $Q$, possibly a zero path, yields $\tilde{a}_2\tilde{a}_1b_0, \tilde{a}_3\tilde{a}_1b_0\in \langle a_3a_1pb_0\rangle_k$. If both $\hat{a}_1b_0$ and $\hat{a}_1'b_0$ are non-zero, then there are two necessary relations $(a_2-\omega qa_3)a_1$ and $(a_3-\kappa q'a_2)a_1'$. The elements $a_2-\omega qa_3$ and $a_3-\kappa q'a_2$ are linearly independent modulo $\radsquare{A}$, either because the ideal is admissible or because of (C2) in Corollary \ref{corVF}, so the elements $\hat{a}_1:=a_1p$, $\hat{a}_1':=a_1'p'$, $\hat{a}_2:=a_2-\omega qa_3$ and $\hat{a}_3:=a_3-\kappa q'a_2$ define elements contradicting condition (c) on the module (3).\\
For the converse suppose that $A$ is a non-biserial algebra. If the quiver of $A$ is non-biserial, then according to Lemma \ref{lem2} there does exist a module of the form (1) or (2). So suppose that the quiver of $A$ is biserial. Then for every quadruple $(\sigma,\tau,\mathpzc{p},\mathpzc{q})$, where $(\sigma, \tau)$ is a bisection and $\mathpzc{p}, \mathpzc{q}$ are surjective algebra homomorphisms $kQ\to A$ with $\mathpzc{p}(e_i)=\mathpzc{q}(e_i)$ and $\mathpzc{p}(a),\mathpzc{q}(a)\in \rad{A}$ for every arrow $a\in Q_1$, there exist arrows $a,x\in Q_1$ such that $\mathpzc{q}(a)\mathpzc{p}(x)\neq 0$. We prove that in this case there is a module $M$ with properties (a)-(c) by analyzing the local situation at the vertex $s(a)=t(x)$ and redefining the values of $\sigma$ and $\mathpzc{q}$ (resp. $\tau$ and $\mathpzc{p}$) for the arrows starting (resp. ending) at this vertex and getting a bisected presentation if there is no such module $M$. We say that $(Q,\sigma,\tau,\mathpzc{p},\mathpzc{q})$ is a bisected presentation at a vertex $l$ if for all bad paths $ax$ of length two with $s(a)=t(x)=l$, $\mathpzc{q}(a)\mathpzc{p}(x)=0$.\\
There are six possible local situations: one arrow starts at this vertex but none ends, one arrow ends but none starts, one arrow starts and one arrow ends, two arrows start at this vertex but only one ends, only one starts but two end, or two arrows start and two end. In the first three instances we define all paths to be good. Then any surjective algebra homomorphism will give rise to a bisected presentation.\\
For the case that two arrows $a,b$ are starting but only the arrow $x$ is ending we can assume that also $\mathpzc{q}(b)\mathpzc{p}(x)\neq 0$, otherwise we could interchange $\sigma(a)$ and $\sigma(b)$ to get a bisected presentation at this point. Now look at the module $M:=Ae_{s(x)}/\radsquare{A}\mathpzc{p}(x)$ and at the elements $b_0:=\overline{e_{s(x)}}, \tilde{a}_1:=\mathpzc{p}(x), \tilde{a}_2:=\mathpzc{q}(a), \tilde{a}_3:=\mathpzc{p}(a)$. If $\overline{\mathpzc{q}(a)\mathpzc{p}(x)}$ and $\overline{\mathpzc{q}(b)\mathpzc{p}(x)}$ were linearly dependent, then without loss of generality $\mathpzc{q}(a)\mathpzc{p}(x)+\lambda \mathpzc{q}(b)\mathpzc{p}(x)=r\mathpzc{p}(x)$ with $r\in \radsquare{A}\mathpzc{p}(x)$ and $\lambda\in k$. We can assume that $r\in e_{t(a)}Ae_{s(a)}$. We then redefine $\mathpzc{q}'(a):=\mathpzc{q}(a)+\lambda\mathpzc{q}(b)-r$. Leaving everything else unchanged we get an algebra homomorphism because all elements lie in $e_{t(a)}Ae_{s(a)}$. Its surjectivity follows from \cite{B95} Proposition 1.2.8 as we have modified by an element in $\radsquare{A}$. So we get a bisected presentation at this point. We now have found a module with (a) and (b) satisfied but we also have to prove that (c) holds. Therefore suppose that there are elements $\hat{a}_1,\hat{a}_1', \hat{a}_2, \hat{a}_3$ as in (c). Then one of the elements $\overline{\hat{a}_1}, \overline{\hat{a}_1'}$ has to span $e_{t(x)}\rad{A}/\radsquare{A}e_{s(x)}$, without loss of generality it is $\hat{a}_1$, then redefine $\mathpzc{p}'(x):=\hat{a}_1, \mathpzc{q}'(a)=\hat{a}_3, \mathpzc{q}'(b)=\hat{a}_2$ and get a bisected presentation at this point.\\
For the case that only one arrow is starting at this point but two are ending proceed dually. Note that if $(Q,\sigma,\tau,\mathpzc{p},\mathpzc{q})$ is a bisected presentation for $A$, then $(Q^{op},\tau,\sigma,\mathpzc{q},\mathpzc{p})$ is a bisected presentation for $A^{op}$.\\
So suppose that there are two arrows $a,b$ starting and two, $x,y$, ending at this point. First we want to achieve that for some combination of two arrows, $\mathpzc{q}(a)\mathpzc{p}(x)=0$. Look at the module $M:=Ae_{s(x)}/\radsquare{A}\mathpzc{p}(x)$ and the elements $b_0:=\overline{e_{s(x)}}$, $\tilde{a}_1:=\mathpzc{p}(x)$, $\tilde{a}_2:=\mathpzc{q}(a)$, $\tilde{a}_3:=\mathpzc{q}(b)$. This module and the elements satisfy (b). Assume it does not satisfy (a), i.e. $\overline{\mathpzc{q}(a)\mathpzc{p}(x)}$ and $\overline{\mathpzc{q}(b)\mathpzc{p}(x)}$ are linearly dependent. Then without loss of generality $\mathpzc{q}(a)\mathpzc{p}(x)+\lambda\mathpzc{q}(b)\mathpzc{p}(x)=r\mathpzc{p}(x)$ otherwise interchange the r\^oles of $a$ and $b$. Then define $\mathpzc{q}'(a):=\mathpzc{q}(a)+\lambda\mathpzc{q}(b)-r$ and achieve $\mathpzc{q}'(a)\mathpzc{p}(x)=0$. So assume this module does not satisfy (c), then there exist $\hat{a}_1,\hat{a}_1', \hat{a}_2,\hat{a}_3$ with the required properties. Because of ($\beta$) $\overline{\hat{a}_1}$ or $\overline{\hat{a}_1'}$ has to span (sometimes together with $\overline{\mathpzc{p}(y)}$) $e_{t(x)}\rad{A}/\radsquare{A}e_{s(x)}$, assume without loss of generality it is $\hat{a}_1$. Furthermore we have $e_{t(a)}\hat{a}_2e_{s(a)}\neq 0$ or $e_{t(b)}\hat{a}_2e_{s(b)}\neq 0$ and the other way round for $\hat{a}_3$, without loss of generality it is the former. Thus we can define $\mathpzc{q}'(a):=\hat{a}_3$, $\mathpzc{q}'(b):=\hat{a}_2$ and $\mathpzc{p}'(x):=\hat{a}_1$ to achieve $\mathpzc{q}'(a)\mathpzc{p}'(x)=0$.\\
So from now on we can assume that $\mathpzc{q}(a)\mathpzc{p}(x)=0$, otherwise we would have a module of the form (3). Now look at the right module $M:=e_{t(b)}A/\mathpzc{q}(b)\radsquare{A}$, and the elements analogous to the above arguments. Assume $\overline{\mathpzc{q}(b)\mathpzc{p}(x)}$ and $\overline{\mathpzc{q}(b)\mathpzc{p}(y)}$ are linearly dependent. Then we have $\lambda_1\mathpzc{q}(b)\mathpzc{p}(x)+\lambda_2\mathpzc{q}(b)\mathpzc{p}(y)=\mathpzc{q}(b)r'$ with $r'\in \radsquare{A}$. If $\lambda_2\neq 0$, we can define $\mathpzc{p}'(y):=\lambda_2\mathpzc{p}(y)+\lambda_1\mathpzc{p}(x)-r'$ to get a bisected presentation at this point with bad paths $ax$ and $by$. If on the other hand $\lambda_2=0$, then we also have to look at the module $M':=Ae_{s(y)}/\radsquare{A}\mathpzc{p}(y)$ with analogous elements. If $\overline{\mathpzc{q}(a)\mathpzc{p}(y)}$ and $\overline{\mathpzc{q}(b)\mathpzc{p}(y)}$ are linearly dependent in this module, then $\mu_1\mathpzc{q}(a)\mathpzc{p}(y)+\mu_2\mathpzc{q}(b)\mathpzc{p}(y)=r''\mathpzc{p}(y)$ for some $r''\in \radsquare{A}$. If $\mu_2\neq 0$, then we can define $\mathpzc{q}'(b):=\mu_2\mathpzc{q}(b)+\mu_1\mathpzc{q}(a)-r''$ and we have a bisected presentation at this point with bad paths $ax$ and $by$. If otherwise $\mu_2=0$, then we can redefine $\mathpzc{q}'(a):=\mu_1\mathpzc{q}(a)-r''$ and $\mathpzc{p}'(x):=\lambda_1\mathpzc{p}(x)-r'$ to get a bisected presentation at this point with bad paths $ay$ and $bx$. So $M'$ satisfies (a) and (b). Assume it does not satisfy (c), so there exist elements $\hat{a}_1,\hat{a}_1',\hat{a}_2,\hat{a}_3$ with the required properties. As above one of $\overline{\hat{a}_1}, \overline{\hat{a}_1'}$ (sometimes together with $\overline{\mathpzc{p}(x)}$) does span $e_{t(y)}\rad{A}/\radsquare{A}e_{s(y)}$, without loss of generality assume it is $\hat{a}_1$. Now there are two cases: If $e_{t(b)}\hat{a}_2e_{s(b)}$ is linearly independent of $\mathpzc{q}(a)$ modulo $\radsquare{A}$, then we can define $\mathpzc{q}'(b):=\hat{a}_2$ and $\mathpzc{p}'(y):=\hat{a}_1$ to get a bisected presentation with bad paths $ax$ and $by$. If this is not the case, then $e_{t(a)}\hat{a}_2e_{s(a)}$ is linearly independent of $\mathpzc{q}(b)$ modulo $\radsquare{A}$ and we can define $\mathpzc{q}'(a):=\hat{a}_2, \mathpzc{p}'(x):=\lambda_1\mathpzc{p}(x)-r', \mathpzc{p}'(y):=\hat{a}_1$ to get a bisected presentation with bad paths $ay$ and $bx$.\\
Now we have shown, that for $M$ the conditions (a) and (b) hold or there exists a module of the form (3) or (4). So assume $M$ does not satisfy (c). Again we have that one of the elements $\hat{a}_1, \hat{a}_1'$ (sometimes together with $\mathpzc{q}(a)$) spans $e_{t(b)}\rad{A}e_{s(b)}$ modulo $\radsquare{A}$, without loss of generality assume again it is $\hat{a}_1$. Dual to what we have done there are two cases: If $e_{t(x)}\hat{a}_2e_{s(x)}$ is linearly independent of $\mathpzc{p}(x)$ modulo $\radsquare{A}$, then we can redefine $\mathpzc{p}'(y):=\hat{a}_2$ and $\mathpzc{q}'(b):=\hat{a}_1$ to get a bisected presentation at this point with bad paths $ax$ and $by$. If this is not the case, then $e_{t(y)}\hat{a}_2e_{s(y)}$ is linearly independent of $\mathpzc{p}(y)$ modulo $\radsquare{A}$. We can now redefine $\mathpzc{q}'(b):=\hat{a}_1$ and in the following we can either assume that $\mathpzc{q}(a)\mathpzc{p}(x)=0$ or by redefining $\mathpzc{q}'(b):=\hat{a}_1$ that $\mathpzc{q}(b)\mathpzc{p}(x)=0$.\\
We have to look at one last module, namely $M':=Ae_{s(y)}/\radsquare{A}\mathpzc{p}(y)$. If this module does not satisfy (a), i.e. $\kappa_1\mathpzc{q}(a)\mathpzc{p}(y)+\kappa_2\mathpzc{q}(b)\mathpzc{p}(y)=r'''\mathpzc{p}(y)$, then we can without loss of generality assume that $\kappa_2\neq 0$, so that we can redefine $\mathpzc{q}'(b):=\kappa_2\mathpzc{q}(b)+\kappa_1\mathpzc{q}(a)-r'''$ to get a bisected presentation at this point with bad paths $ax$ and $by$, otherwise we would use the redefinition as above that $\mathpzc{q}(b)\mathpzc{p}(x)=0$ and redefine $\mathpzc{q}(a)$ to get a bisected presentation at this point with bad paths $ay$ and $bx$. So we can assume that $M'$ satisfies (a) and (b). Assume it does not satisfy (c). Then again we can assume that we can redefine $\mathpzc{p}'(y):=\hat{a}_1$ and either $\mathpzc{q}'(b):=\hat{a}_2$ or $\mathpzc{q}'(a):=\hat{a}_3$ to get a bisected presentation at this point (bad paths are either $ax$ and $by$ or $ay$ and $bx$).
\end{proof}
From the proof we obtain the following corollary:
\begin{cor}
If $A=kQ/I$ is an algebra, where $Q$ is biserial, such that $eAe$ has no oriented cycles for any idempotent as in theorem \ref{5pt} (iii), then $A$ is biserial iff for all idempotents $e$ as in theorem \ref{5pt} (iii) there does not exist a local $eAe$-module $M$, such that there exists $\tilde{b}_1\in e_l\rad{M}$ with $l(\rad{A}\tilde{b}_1)\geq 2$ and $\radsquare{A}\tilde{b}_1=0$ and there does not exist a colocal $eAe$-module $M$, such that there exists $\tilde{b}_1\in e_lM\setminus \soc(M)$ with $l(A\tilde{b}_1/\soc(A\tilde{b}_1))\geq 2$ and $\soc^2(A\tilde{b}_1)=A\tilde{b}_1$ or $eAe$ is isomorphic to one of the following string algebras with quiver
\begin{center}
\begin{tabular}[c]{c}$\begin{xy}\xymatrix{1\ar[rd]\ar[dd]\\&2\ar@<2pt>[r]\ar@<-2pt>[r]&3\\1'\ar[ru]}\end{xy}$\end{tabular}, \begin{tabular}[c]{c}$\begin{xy}\xymatrix{1\ar@<2pt>[r]\ar@<-2pt>[r]&2\ar@<2pt>[r]\ar@<-2pt>[r]&3}\end{xy}$\end{tabular} or \begin{tabular}[c]{c}$\begin{xy}\xymatrix{&&3\\1\ar@<2pt>[r]\ar@<-2pt>[r]&2\ar[ru]\ar[rd]\\&&3'\ar[uu]}\end{xy}$\end{tabular}
\end{center}
\end{cor}
\begin{proof}
If $A$ is a biserial algebra such that $eAe$ has no oriented cycles for any $e$, then the module $M$ defined in the proof satisfies conditions (a) and (b) and therefore has the properties mentioned in the corollary with $\tilde{b}_1:=\tilde{a}_1b_0$. If it does not satisfy (c), then we have defined in the proof above elements $\hat{a}_2$ and $\hat{a}_3$ which span $e_{t(a_3)}Ae_{t(a_2)}$, so if we take the isomorphism mapping $ \hat{a}_2\mapsto a_2$ and $\hat{a}_3\mapsto a_3$, then we obtain one of the exceptional string algebras.\\
In the reverse direction of the proof the converse is also proven because a module $M$ with properties (a) and (b) is constructed there and therefore also satisfies the conditions of the corollary.
\end{proof}
\section*{Acknowledgement}
The results of this article are part of my diploma thesis written in 2009 at the University of Bonn. I would like to thank my advisor Jan Schr\"oer for helpful discussions and continuous encouragement. I would also like to thank Rolf Farnsteiner for his comments on a previous version of this paper.
\bibliographystyle{alpha}
\bibliography{publication}
\end{document} | 40,307 |
It is indeed a dream come true for the residents of East Delhi as The Modern School opens its new branch ‘Modern Early Years’ at Vasundhara. Modern School has been synonymous. ‘Tell……………….
With all good wishes!
Sincerely
Ms. Manju Mehra | 141,368 |
How to center a div within another div?
Complementary Pairs Of Power Transistors Mounted On A Circuit Board
Stockphoto ID: 1609002
File name: 20070808_111630.jpg
Format: digital, 3000+ pixels longest side
Property release: no Model release: no
Source: photographer Based in: Sweden
Description: Complementary pairs of power transistors mounted on a circuit board
\begin{document}
\title[Post-Lie algebras and factorization theorems]{Post-Lie algebras and factorization theorems}
\date{\today}
\maketitle
\begin{center}
\author{
Kurusch Ebrahimi-Fard\footnote{\small Department of Mathematical Sciences, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway. On leave from UHA, Mulhouse, France.\\ {\it\small{[email protected]}}}
Igor Mencattini\footnote{\small Instituto de Ci\^encias Matem\'aticas e de Computa\c{c}\~ao, Univ.~de S\~ao Paulo (USP), S\~ao Carlos, SP, Brazil.\\ {\it\small{[email protected]}}}
Hans Munthe-Kaas\footnote{\small Dept.~of Mathematics, University of Bergen, Postbox 7800,
N-5020 Bergen, Norway.\\ {\it\small{[email protected]}}}
}
\end{center}
\vspace{0.7cm}
\begin{abstract}
In this note we further explore the properties of universal enveloping algebras associated to a post-Lie algebra. Emphasizing the role of the Magnus expansion, we analyze the properties of group-like elements belonging to (suitable completions of) those Hopf algebras. Of particular interest is the case of post-Lie algebras defined in terms of solutions of modified classical Yang--Baxter equations. In this setting we will study factorization properties of the aforementioned group-like elements.
\end{abstract}
\medskip
\begin{quote}
\noindent {\footnotesize{{\bf{Keywords}}: post-Lie algebra; universal enveloping algebra; factorization theorems; Lie admissible algebras; Magnus expansion; Hopf algebra; classical $r$-matrices}.}\\
{\footnotesize{\bf MSC Classification}: 16T05; 16T10; 16T25; 16T30; 17D25}
\end{quote}
\tableofcontents
\section{Introduction}
\label{sect:Intro}
This work continues the study of the Lie enveloping algebra of a post-Lie algebra described in \cite{EFLMK}. In a nutshell, a post-Lie algebra is a Lie algebra $\mathfrak g=(V,[\cdot,\cdot])$ whose underlying vector space $V$ is endowed with a bilinear operation, called post-Lie product, satisfying certain compatibility conditions with the Lie bracket $[\cdot,\cdot]$. Since the compatibility of the post-Lie product with $[\cdot,\cdot]$ yields a second Lie bracket $\llbracket\cdot,\cdot\rrbracket$ on $V$, to every post-Lie algebra are naturally associated two Hopf algebras, $\mathcal U(\mathfrak g)$ and $\mathcal U(\bar{\mathfrak g})$, i.e., the universal enveloping algebras of $\mathfrak g$ respectively $\bar{\mathfrak g}=(V,\llbracket\cdot,\cdot\rrbracket)$. Even though $\mathcal U(\mathfrak g)$ and $\mathcal U(\bar{\mathfrak g})$ are neither isomorphic as Hopf algebras nor as associative algebras, one can show that a lift of the post-Lie product to $\mathcal U(\mathfrak g)$ yields a new Hopf algebra $\mathcal U_\ast(\mathfrak g)$ which turns out to be isomorphic as Hopf algebra to $\mathcal U(\bar{\mathfrak g})$. The existence of such a Hopf algebra isomorphism can be thought of as a non-commutative extension of a well-known result proven by Guin and Oudom in \cite{OudomGuin} in the context of pre-Lie algebras.
The present work has two central aims. The first one is to explore several of the results in the papers \cite{RSTS,STS3} from the perspective offered by the relatively new theory of post-Lie algebras \cite{LMK1,LMK2,Vallette}. The second aim is to start a more systematic investigation of the so-called post-Lie Magnus expansion introduced in \cite{EFLIMK}, both from the point of view of its properties and of its applications to isospectral flows. For more details, see, for example, \cite{ChuNorris,Watkins} and the monograph \cite{Suris}.
As noticed for the first time in \cite{GuoBaiNi}, a rich source of concrete examples of post-Lie algebras is provided by the theory of classical $r$-matrices together with the corresponding classical Yang--Baxter equations, which play an important role in the theory of classical integrable systems \cite{BaBeTa,PolishReview,STS2, Suris}. It is worth noticing that there actually exist two different definitions of classical $r$-matrices which are not completely equivalent. The first one, due to Drinfeld, gives rise to the structure of a Lie algebra on the dual space of a given Lie algebra. The second one, due to Semenov-Tian-Shansky, yields a second Lie bracket on the same underlying linear space. Accordingly, one speaks of a \emph{Lie bialgebra} in the former case and of a \emph{double Lie algebra} in the latter one. The roles of these definitions are different. Lie bialgebras arise in connection with the deformation of the co-commutative coproduct on the universal enveloping algebra of the initial Lie algebra, and eventually they go together with the construction of the deformed algebra $\mathcal U_q(\mathfrak g)$. Double Lie algebras, on the contrary, provide abstract versions of factorization problems, which play a central role in the study of classical integrable systems admitting a Lax representation. There is also a way to combine both definitions, yielding the notion of \emph{factorizable Lie bialgebras}, see \cite{RSTS}. It is this latter version that is used to extend factorization theorems from the classical realm to quantum algebras $\mathcal U_q(\mathfrak g)$; the extra condition imposed on the classical $r$-matrix in this case is skew-symmetry (with respect to the invariant inner product on $\mathfrak g$). There exist many double Lie algebras for which the associated $r$-matrix is not skew; in this case factorization theorems are still valid, as pointed out in \cite{STS3}, but there is, in general, no natural way to deform the coproduct (in the category of Hopf algebras). In the present work we will deal exclusively with factorization theorems for \emph{ordinary} universal enveloping algebras, leaving the case of quantum algebras for future investigations. In particular, we will systematically adopt the notation and terminology used in \cite{STS1}.
As already remarked, in the seminal work \cite{STS1} Semenov-Tian-Shansky showed that solutions of modified classical Yang--Baxter equations, known as classical $r$-matrices, play an important role in studying solutions of Lax equations, and are intimately related to particular factorization problems in the corresponding Lie groups. More precisely, any solution $R$ of the modified classical Yang--Baxter equation on a Lie algebra $\mathfrak g$ gives rise to a so-called double Lie algebra, i.e., a second Lie algebra $\mathfrak g_R$ can be defined on the vector space underlying $\mathfrak g$. Its Lie bracket is given in terms of the original Lie bracket of $\mathfrak g$ together with the classical $r$-matrix $R$, in such a way that when splitting the linear map $R = R_+ + R_-$ appropriately, both maps $R_{\pm}$ become Lie algebra morphisms from $\mathfrak g_R$ to $\mathfrak g$. Every element of the Lie group $G$ corresponding to $\mathfrak g$ which is sufficiently close to the identity admits a factorization as a product of two elements belonging to two suitably defined Lie subgroups $G_{\pm} \subset G$. See \cite{STS4} for more details. It is this sort of factorization that plays a critical role in the solution of the isospectral flows mentioned above. As an aside, we remark that the latter are closely related to matrix factorization schemes \cite{ChuNorris,Faybusovich,Watkins}.
In an attempt to extend this analysis to the theory of quantum integrable systems, the aforementioned factorization problem was studied in references \cite{RSTS,STS3} in the framework of the universal enveloping algebra of a Lie algebra endowed with a solution of the modified classical Yang--Baxter equation. In these works it was shown that every classical $r$-matrix $R$ defined on a (finite dimensional) Lie algebra $\mathfrak g$ gives rise to a factorization of any group-like element of (a suitable completion of) the universal enveloping algebra $\mathcal U(\mathfrak g)$. This result came as a consequence of the existence of a linear isomorphism $F:\mathcal U(\mathfrak g_R)\rightarrow \mathcal U(\mathfrak g)$, extending the identity map between the Lie algebras $\mathfrak g_R$ and $\mathfrak g$. The map $F$ is defined explicitly in terms of the usual Hopf algebra structures on the corresponding universal enveloping algebras, $\mathcal U(\mathfrak g_R)$ and $\mathcal U(\mathfrak g)$, together with the liftings of the Lie algebra morphisms $R_{\pm}$, defined via the aforementioned splitting of $R$, to unital algebra morphisms between those algebras. In the paper \cite{STS3} a new associative product was defined on $\mathcal U(\mathfrak g)$ by pushing forward the associative product of $\mathcal U(\mathfrak g_R)$ along the linear isomorphism $F$, making it an isomorphism of unital associative algebras. See also \cite{RSTS}.
The Hopf algebraic results for general post-Lie algebras motivate our aim to revisit references \cite{RSTS,STS3} from a Hopf algebra theoretic point of view using the post-Lie product induced by a classical $r$-matrix. Indeed, we shall show that when a post-Lie algebra structure is defined in terms of a solution of the modified classical Yang--Baxter equation, the aforementioned Hopf algebra isomorphism between $\mathcal U_\ast(\mathfrak g)$ and $\mathcal U(\bar{\mathfrak g})=\mathcal U(\mathfrak g_R)$ can be realized in terms of the Hopf algebra structures of these two universal enveloping algebras. It assumes the explicit form of the map $F$ introduced in \cite{RSTS,STS3}. We deduce that the associative product defined in \cite{RSTS,STS3} as the push-forward to $\mathcal U(\mathfrak g)$ of the product of $\mathcal U(\mathfrak g_R)$ coincides with the extension to $\mathcal U(\mathfrak g)$ of the post-Lie product defined on $\mathfrak g$ in terms of the classical $r$-matrix. As a practical consequence this makes the computation of the product originally defined in \cite{RSTS,STS3} more transparent. The aforementioned is based on the central part of this work, which aims at understanding the role of post-Lie algebras in the context of the factorization problem mentioned above. In this respect we show, for any post-Lie algebra, that for every $x \in \mathfrak g$ there exists a unique element $\chi(x) \in \mathfrak g$ such that $\exp(x) = \exp^*(\chi(x))$ in (suitable completions of) $\mathcal U(\mathfrak g)$ and $\mathcal U_*(\mathfrak g)$. The map $\chi: \mathfrak g \to \mathfrak g$ is described as the solution of a particular differential equation, and is dubbed the post-Lie Magnus expansion. We show that in the classical $r$-matrix case this general post-Lie result implies that any group-like element $\exp(x)$ in (a suitable completion of) $\mathcal U(\mathfrak g)$ factorizes into the product of two exponentials, $\exp(\chi_+(x))$ and $\exp(\chi_-(x))$, with $\chi_{\pm}(x):=R_{\pm}\chi(x)$. In forthcoming work we intend to explore in greater detail the post-Lie algebraic and geometric properties of the map $\chi$ and the corresponding factorization from the point of view of Riemann--Hilbert problems related to the study of solutions of Lax equations \cite{RSTS,STS1}.
\smallskip
\emph{The paper is organized as follows}. After recalling the definition of a post-Lie algebra and some of its most elementary properties in Section \ref{sect:LieAdmPostLie}, we review, for the sake of completeness, some basic information about the theory of classical $r$-matrices. Then, with the aim of being as self-contained as possible, we discuss how the lifting of the post-Lie product yields the Hopf algebra $\mathcal U_*(\mathfrak g)$, which is isomorphic as a Hopf algebra to $\mathcal U(\bar{\mathfrak g})$. The new result in this section is Theorem \ref{thm:FinverseChi}. In Section \ref{sect:anotherHA} yet another, seemingly different, Hopf algebra structure on $\mathcal{U}(\mathfrak g)$ is introduced in the specific context of a Lie algebra $\mathfrak g$ endowed with a classical $r$-matrix. This Hopf algebra is then shown to coincide with the one coming from the post-Lie algebra induced on $\mathfrak g$ by the classical $r$-matrix. Finally, in Section \ref{sect:factorThm} we explore a natural factorization theorem for group-like elements in the appropriately completed universal enveloping algebra, using Theorem \ref{thm:FinverseChi}.
\smallskip
\begin{rmk}
In this work all vector spaces are assumed to be finite dimensional over the base fields $\mathbb K=\mathbb R$ or $\mathbb K=\mathbb C$. Moreover, often we will need to consider different Lie algebra structures defined on the same underlying vector space, which, from now on, will be denoted with $V$.
\end{rmk}
\smallskip
\noindent {\bf{Acknowledgements}}: This work started during a stay of the first author at the Instituto de Ci\^encias Matem\'aticas e de Computa\c{c}\~ao, Univ.~de S\~ao Paulo, campus S\~ao Carlos, Brazil, which was supported by the FAPESP grant 2015/06858-2.
\section{Post-Lie algebras and classical $r$-matrices}
\label{sect:LieAdmPostLie}
We start this section by recalling the definition of a post-Lie algebra \cite{MKW08,LMK1,Vallette} together with some of its basic properties. See \cite{EFLMK} for more details and references. We will also briefly discuss the post-Lie algebra structure on a Lie algebra endowed with a solution of the modified classical Yang--Baxter equation (MCYBE). See \cite{GuoBaiNi} for more details. Then we summarize how post-Lie algebra properties are lifted to the universal enveloping algebra of the corresponding Lie algebra. Details can be found in \cite{EFLIMK}.
Let $(\mathcal A,\cdot)$ be a $\mathbb K$-algebra. Recall the definition of the {\it{associator}} map ${\rm{a}}_\cdot : \mathcal A \otimes \mathcal A \otimes \mathcal A \to \mathcal A$
$$
{\rm{a}}_\cdot(x,y,z):= x \cdot (y \cdot z) - (x \cdot y) \cdot z,
$$
for any $x,y,z\in \mathcal A$. The definition of a post-Lie algebra follows.
\begin{defn} \label{def:postLie}
Let $\mathfrak g=(V, [\cdot,\cdot])$ be a Lie algebra, and let $\triangleright : V \otimes V \rightarrow V$ be a binary product such that for all $x,y,z \in V$
\begin{equation}
\label{postLie1}
x \triangleright [y,z] = [x\triangleright y , z] + [y , x \triangleright z],
\end{equation}
and
\begin{equation}
\label{postLie2}
[x,y] \triangleright z = {\rm{a}}_{\triangleright }(x,y,z) - {\rm{a}}_{\triangleright }(y,x,z).
\end{equation}
Then the pair $(\mathfrak{g}, \triangleright)$ is called a \emph{left post-Lie algebra}.
\end{defn}
\begin{rmk}
\begin{enumerate}[i)]
\item From now on, given a post-Lie algebra $(\mathfrak g,\triangleright)$, we will write $x\in\mathfrak g$ instead of $x\in V$.
\item Relation \eqref{postLie1} implies that for every left post-Lie algebra the natural map $\ell_\triangleright : \mathfrak g \rightarrow\operatorname{End}_{\mathbb K}(\mathfrak g)$ defined by $\ell_\triangleright (x)(y) := x \triangleright y$ is linear and takes values in the derivations of $\mathfrak g$.
\item Together with the notion of left post-Lie algebra one can introduce the notion of \emph{right post-Lie algebra}. In this case \eqref{postLie2} becomes $[x,y] \triangleright z = {\rm{a}}_{\triangleright }(y,x,z) - {\rm{a}}_{\triangleright }(x,y,z).$
\end{enumerate}
\end{rmk}
For the rest of this work, unless stated otherwise, the term post-Lie algebra refers to left post-Lie algebra. Furthermore, the next result is critical to the theory of post-Lie algebras.
\begin{prop} \cite{LMK1} \label{prop:post-lie}
Let $(\mathfrak g, \triangleright)$ be a post-Lie algebra. The bracket
\begin{equation}
\label{postLie3}
[[x,y]] := x \triangleright y - y \triangleright x + [x,y]
\end{equation}
satisfies the Jacobi identity for all $x, y \in \mathfrak g$.
\end{prop}
Recall that a $\mathbb K$-algebra $(\mathcal A,\cdot)$ is called \emph{Lie admissible} if the commutator $[\cdot,\cdot]: \mathcal A \otimes \mathcal A \rightarrow \mathcal A$, which is defined for all $x,y \in \mathcal A$ by antisymmetrization, $[x,y]:=x \cdot y - y \cdot x$, yields a Lie bracket. \emph{Left pre-Lie algebras} \cite{Burde,Cartier11,Manchon}, which are characterised through a binary product $\curvearrowleft: \mathcal A \otimes \mathcal A \rightarrow \mathcal A$ satisfying the \emph{left pre-Lie relation} ${\rm{a}}_{\curvearrowleft}(x,y,z) = {\rm{a}}_{\curvearrowleft}(y,x,z),$ are Lie admissible. Likewise a right pre-Lie algebra is defined by ${\rm{a}}_{\curvearrowright}(x,y,z) = {\rm{a}}_{\curvearrowright}(x,z,y)$.
In particular, note that although the post-Lie product $\triangleright$ itself is in general not Lie admissible, one can define the product $x \succ y := x \triangleright y + \frac{1}{2}[x,y]$ such that $(\mathfrak g,\succ)$ is Lie admissible.
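Indeed, this is a direct check: antisymmetrizing $\succ$ gives
\[
x \succ y - y \succ x = x \triangleright y - y \triangleright x + [x,y] = [[x,y]],
\]
which satisfies the Jacobi identity by Proposition \ref{prop:post-lie}.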
Moreover, if $(\mathfrak g,\triangleright)$ is a post-Lie algebra, whose underlying Lie algebra $\mathfrak g=(V,[\cdot,\cdot])$ is abelian, i.e. if $[\cdot,\cdot]$ is identically zero, axiom {\rm{(\ref{postLie2})}} reduces to the left pre-Lie identity $\mathrm{a}_{\triangleright}(x,y,z) = \mathrm{a}_{\triangleright}(y,x,z).$ This implies that the vector space $V$ together with the product $\triangleright: V\otimes V \rightarrow V$ is a {\it{left pre-Lie}} algebra.
\begin{rmk}\label{rem:notrem} A few remarks are in order.
\begin{enumerate}
\item From now on, for given a post-Lie algebra $(\mathfrak g,\triangleright)$, where $\mathfrak g=(V,[\cdot,\cdot])$, we write $\overline{\mathfrak g}:=(V,\llbracket\cdot,\cdot\rrbracket)$, where $\llbracket\cdot,\cdot\rrbracket$ is the Lie bracket defined in \eqref{postLie3}.
\item If one trades right for left post-Lie algebras, then the new bracket in Proposition \ref{prop:post-lie}, which satisfies the Jacobi identity, becomes
\[
[[x,y]]:= x \triangleright y - y \triangleright x -[x,y],\qquad \forall x,y\in\mathfrak g.
\]
\item It turns out that differential geometry is a natural place to look for examples of pre- and post-Lie algebras. Indeed, regarding the former, the canonical connection on $\mathbb{R}^n$ is flat with zero torsion, and defines a pre-Lie
algebra on the set of vector fields. Following \cite{LMK1} a Koszul connection $\nabla$ yields a $\mathbb{R}$-bilinear product $X\triangleright Y=\nabla_XY$ on the space of smooth vector fields $\mathcal{X}(\mathcal{M})$ on a manifold $\mathcal{M}$. Flatness and constant torsion, together with the Bianchi identities imply relation \eqref{postLie3} between the Jacobi-Lie bracket of vector fields, the torsion itself, and the product defined in terms of the connection.
\item Post-Lie algebras are important in the theory of numerical methods for differential equations. We refer the reader to \cite{EFLMK,MKW08,LMK1} for more details on this topic.
\end{enumerate}
\end{rmk}
{\bf{Classical $r$-matrices.}}\quad We briefly recall a few facts about classical $r$-matrices. For details and examples the reader is referred to \cite{Poisson1, Poisson2, STS2, Suris}. Let $\mathfrak g=(V,[\cdot,\cdot])$ be a Lie algebra and let $\theta \in \mathbb K$ be a parameter fixed once and for all. For a linear map $R$ on $\mathfrak{g}$ the bracket
\begin{equation} \label{eq:Rbra}
[x,y]_R := \frac{1}{2}([Rx,y]+[x,Ry])
\end{equation}
is skew-symmetric for all $x,y\in\mathfrak{g}$. Moreover, if $B_R:\mathfrak g\otimes\mathfrak g\rightarrow\mathfrak g$ is defined for all $x,y\in\mathfrak g$ by
\begin{equation}\label{eq:B}
B_R(x,y):=R([Rx,y]+[x,Ry])-[Rx,Ry],
\end{equation}
then $[\cdot,\cdot]_R$ satisfies the Jacobi identity if and only if:
\begin{equation}\label{eq:modYB}
[B_R(x,y),z]+[B_R(z,x),y]+[B_R(y,z),x]=0,
\end{equation}
for all $x,y,z\in\mathfrak g$. Requiring that $B_R(x,y)=\theta [x,y]$, which amounts to the identity
\begin{equation}\label{eq:EMYB}
[Rx,Ry]=R([Rx,y]+[x,Ry])-\theta [x,y],
\end{equation}
for all $x,y\in\mathfrak{g}$, implies that \eqref{eq:modYB} is fulfilled.
\begin{defn}[Classical $r$-matrix and MCYBE]
\label{def:r-matrix}
Equation \eqref{eq:EMYB} is called {\it modified classical Yang--Baxter Equation} (MCYBE) with parameter $\theta$ and its solutions are called {\it classical $r$-matrices}. For $\theta=0$, equation \eqref{eq:EMYB} reduces to the so called {\it classical Yang--Baxter Equation} (CYBE).
\end{defn}
In the present work we will be mainly concerned with the case where $\theta=1$. For this reason, in what follows, the term classical $r$-matrix refers to an element $R \in\operatorname{End}_{\mathbb K}(\mathfrak g)$ such that
\begin{equation}
\label{eq:impc}
[Rx,Ry]=R([Rx,y]+[x,Ry])-[x,y],
\end{equation}
for any $x,y \in \mathfrak g$. In this setting equation \eqref{eq:impc} will be referred to as MCYBE.
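A standard example, recalled here only for orientation, arises from a direct sum decomposition $\mathfrak g=\mathfrak g_+\oplus\mathfrak g_-$ of $\mathfrak g$ into two Lie subalgebras: denoting by $P_\pm$ the projections onto $\mathfrak g_\pm$, the map $R:=P_+-P_-$ solves \eqref{eq:impc}, and the maps $R_\pm$ defined in \eqref{eq:rpm} below reduce to $R_+=P_+$ and $R_-=-P_-$.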
\noindent We call \eqref{eq:Rbra} the {\it{double Lie bracket}} and denote the corresponding Lie algebra by $\mathfrak g_R:=(V,[\cdot,\cdot]_R)$. The Lie algebra $\mathfrak g$ together with a classical $r$-matrix $R$ is called a {\it{double Lie algebra}}. The significance of solutions of \eqref{eq:impc} stems from the following well-known result.
\begin{prop}\cite{STS1} One can prove that:
\begin{enumerate}
\item[i)] The linear maps $R_\pm:\mathfrak g_R\rightarrow\mathfrak g$, defined by
\begin{equation}
\label{eq:rpm}
R_\pm:=\frac{1}{2}(R\pm\operatorname{id})
\end{equation}
are Lie algebra morphisms, $R_{\pm}([x,y]_R)=[R_{\pm}x,R_{\pm}y]$, which amounts to the two identities
\begin{equation}
\label{eq:RBE}
[R_\pm x, R_\pm y] = R_\pm ([R_\pm x, y] + [x,R_\pm y] \mp [x,y]).
\end{equation}
\item[ii)] Moreover, if one defines $\mathfrak g_\pm:=\operatorname{Im}(R_\pm:\mathfrak g_R\rightarrow\mathfrak g)$ and $\kappa_\pm:=\operatorname{Ker}(R_\mp:\mathfrak g_R\rightarrow\mathfrak g)$, then $\kappa_\pm\subset\mathfrak g_\pm$ and the natural application $\Theta:\mathfrak g_+/\kappa_+\rightarrow\mathfrak g_-/\kappa_-$ is an isomorphism of Lie algebras.
\end{enumerate}
\end{prop}
\noindent Observe that the Lie bracket \eqref{eq:Rbra} expressed in terms of the maps $R_\pm$ defined in \eqref{eq:rpm} becomes
$$
[x,y]_R = [R_\pm x, y] + [x,R_\pm y] \mp [x,y].
$$
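This follows by substituting $R=2R_\pm\mp\operatorname{id}$ into \eqref{eq:Rbra}: the two terms $\mp\tfrac{1}{2}[x,y]$ coming from $\tfrac{1}{2}[Rx,y]$ and $\tfrac{1}{2}[x,Ry]$ add up to $\mp[x,y]$.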
Assume $R \in \operatorname{End}_{\mathbb K}(\mathfrak g)$ to be a solution of equation \eqref{eq:impc}. Let $G$ and $G_R$ be the connected and simply connected Lie groups corresponding to the Lie algebras $\mathfrak g$ and $\mathfrak g_R$, respectively. By $r_\pm: G_R \rightarrow G$ we denote the Lie group homomorphisms which integrate the Lie algebra homomorphisms $R_\pm$. Furthermore, let $G_\pm:=\operatorname{Im}\,(r_\pm:G_R\rightarrow G)$, and let $\delta: G_R\rightarrow G_R\times G_R$ and $i:G\rightarrow G$ be the \emph{diagonal map} and the \emph{inversion map}, respectively, that is, $\delta(g):=(g, g)$ and $i(h):=h^{-1}$. Denoting by $m$ the multiplication of $G$, we define $\tilde m: G \times G \rightarrow G$ to be the map $m \circ (\operatorname{id}_G , i)$, i.e., the map such that $\tilde m(g,h)=m(g,h^{-1})=gh^{-1}$, for all $(g,h) \in G \times G$. Then one can prove the following theorem \cite{STS1}. See also \cite{Faybusovich}.
\begin{thm} \cite{STS1}\label{thm:factorizationtheorem}
The map $I_R:G_R\rightarrow G\times G$, defined for all $g \in G_R$ by
\[
I_R (g)=(r_+,r_-)\circ\delta(g)=(r_+g,r_-g),
\]
is an \emph{embedding} of Lie groups. Moreover, the map $\tilde m\circ I_R: G_R \rightarrow G$, defined for all $g \in G_R$ by
\[
\tilde m\circ I_R(g)=m(r_+g,({r_-g})^{-1})=r_+g({r_-g})^{-1},
\]
is a local diffeomorphism from a suitable neighborhood of the identity $e\in G_R$ to a suitable neighborhood of the identity $e\in G$. In other words, any element $g \in G$ \emph{sufficiently close to the identity} admits a unique factorization as
\begin{equation}
\label{eq:factlie}
g=g_+({g_-})^{-1},
\end{equation}
where $(g_+,g_-)\in\text{Im}\,I_R$.
\end{thm}
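In the example of a direct sum decomposition $\mathfrak g=\mathfrak g_+\oplus\mathfrak g_-$ into Lie subalgebras mentioned earlier, the subgroups $G_\pm$ have Lie algebras $\mathfrak g_\pm$, and \eqref{eq:factlie} states that every $g\in G$ sufficiently close to the identity factors as $g=g_+(g_-)^{-1}$ with $g_\pm\in G_\pm$; for $\mathfrak g=\mathfrak{gl}_n$, decomposed into upper triangular and strictly lower triangular matrices, this is a factorization of $LU$-type.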
\begin{rmk} \cite{STS1}
The Lie bracket \eqref{eq:Rbra} defined by a classical $r$-matrix on $\mathfrak g$ implies a corresponding linear Poisson structure $\{\cdot,\cdot\}_R$ on the dual $\mathfrak g^*$. It associates to each Casimir function with respect to the Poisson bracket $\{\cdot,\cdot\}_{\mathfrak g}$ a non-trivial first integral of the original dynamical system.
\end{rmk}
In the following we consider post-Lie algebras defined in terms of $r$-matrices. Let $\mathfrak g$ be a Lie algebra with $R \in \operatorname{End}_{\mathbb{K}}(\mathfrak g)$ a solution of \eqref{eq:impc}, and let $R_-$ be defined as in \eqref{eq:rpm}.
\begin{thm}\cite{GuoBaiNi}
\label{thm:postLie1}
For any elements $x,y \in \mathfrak g$ the bilinear product
\begin{equation}
\label{def:RBpostLie}
x \triangleright y := [R_-x,y]
\end{equation}
defines a post-Lie algebra structure on the Lie algebra $\mathfrak g$.
\end{thm}
\begin{proof}
Axiom \eqref{postLie1} holds true since for all $x \in \mathfrak g$, the map $[R_-x,\cdot ]=\mathrm{ad}_{R_-x}: \mathfrak g \to \mathfrak g$ is a derivation with respect to the Lie bracket $[\cdot,\cdot]$. Axiom \eqref{postLie2} follows from identity \eqref{eq:RBE} together with the Jacobi identity.
\end{proof}
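Spelled out, the verification of \eqref{postLie2} amounts to the following computation, which uses only the Jacobi identity and \eqref{eq:RBE}:
\begin{align*}
{\rm{a}}_{\triangleright}(x,y,z)-{\rm{a}}_{\triangleright}(y,x,z)
&=[R_-x,[R_-y,z]]-[R_-y,[R_-x,z]]-\big[R_-\big([R_-x,y]+[x,R_-y]\big),z\big]\\
&=\big[[R_-x,R_-y],z\big]-\big[R_-\big([R_-x,y]+[x,R_-y]\big),z\big]
=[R_-[x,y],z]=[x,y]\triangleright z.
\end{align*}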
\begin{rmk} \label{rmk:doubleLiebracket-post-Lie}
The product $x\triangleleft y:=[R_+x,y]$, defined for all $x,y\in\mathfrak g$, yields on $\mathfrak g$ the structure of a right post-Lie algebra. In particular, note that
\[
x\triangleleft y =[R_+x,y]=\frac{1}{2}[Rx,y]+\frac{1}{2}[x,y]
\]
which implies that $x\triangleleft y=x\triangleright y+[x,y]$, for all $x,y \in\mathfrak g$. Moreover, a simple computation shows that: $x\triangleright y-y\triangleright x+[x,y]=[x,y]_R=x\triangleleft y-y\triangleleft x-[x,y]$, for all $x,y\in\mathfrak g$, which implies that the Lie bracket in \eqref{postLie3} coincides with the Lie bracket in \eqref{eq:Rbra}, i.e., $[[\cdot,\cdot]]=[\cdot,\cdot]_R$.
\end{rmk}
{\bf{The universal enveloping algebra of a post-Lie algebra.}}\quad In Proposition \ref{prop:post-lie} it is shown that any post-Lie algebra is endowed with two Lie brackets, $[\cdot,\cdot]$ and $[[\cdot,\cdot]]$, which are related in terms of the post-Lie product by identity \eqref{postLie3}. The relation between the corresponding universal enveloping algebras was explored in \cite{EFLMK}. In \cite{OudomGuin} similar results in the context of pre-Lie algebras and the symmetric algebra $\mathcal{S}(\mathfrak g)$ appeared.
Recall that the universal enveloping algebra $\mathcal U(\mathfrak g)$ of a Lie algebra $\mathfrak g$ is a connected, filtered, noncommutative, cocommutative Hopf algebra with unit $\un$ \cite{Kassel,Sweedler}. Elements in $\mathcal U(\mathfrak g)$ are denoted as words $x_1 \cdots x_n$, and the letters $x_i \in \mathfrak g \hookrightarrow \mathcal U(\mathfrak g)$ are primitive, that is, the coproduct $\Delta x_i = x_i \otimes {\bf 1} + {\bf 1} \otimes x_i$. Its multiplicative extension defines the -- unshuffle -- coproduct on all of $\mathcal U(\mathfrak g)$. The counit $\epsilon : \mathcal U(\mathfrak g) \to \mathbb{K}$ and antipode $S: \mathcal U(\mathfrak g) \to \mathcal U(\mathfrak g) $ are defined by $\epsilon({\bf{1}})=1$ and zero else, respectively $S(x_1\cdots x_n):=(-1)^n x_n\cdots x_1$. In the following Sweedler's notation is used to denote the coproduct $\Delta A=A_{(1)} \otimes A_{(2)}$ for any $A$ in $\mathcal U(\mathfrak g)$.
The next proposition summarises the results relevant for the present discussion of lifting the post-Lie algebra structure to $\mathcal U(\mathfrak g)$. In what follows we will denote with $\triangleright$ both the original post-Lie product on $\mathfrak g$ and the one lifted to $\mathcal U(\mathfrak g)$.
\begin{prop}\cite{EFLMK}\label{prop:post1}
Let $A,B,C\in\mathcal U(\mathfrak g)$ and $x,y\in\mathfrak g \hookrightarrow \mathcal U(\mathfrak g),$ then there exists a unique extension of the post-Lie product from $\mathfrak g$ to $\mathcal U(\mathfrak g)$, given by:
\allowdisplaybreaks{
\begin{align}
{\bf 1}\triangleright A &= A, \quad\ A \triangleright{\bf 1} = \epsilon (A){\bf 1} \label{eq:pha1}\\
\epsilon(A\triangleright B) &=\epsilon(A)\epsilon (B),\\
\Delta (A\triangleright B)&=(A_{(1)}\triangleright B_{(1)}) \otimes (A_{(2)}\triangleright B_{(2)}),\\
xA\triangleright B&=x\triangleright (A\triangleright B)-(x\triangleright A)\triangleright B\nonumber\\
A\triangleright BC &=(A_{(1)}\triangleright B)(A_{(2)}\triangleright C). \label{eq:pha2}
\end{align}}
\end{prop}
\begin{proof}
The proof of Proposition \ref{prop:post1} goes by induction on the length of monomials in $\mathcal U(\mathfrak g)$.
\end{proof}
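For instance, for letters $x,y,z\in\mathfrak g$ the relations of Proposition \ref{prop:post1} give in lowest orders
\[
x\triangleright yz=(x\triangleright y)z+y(x\triangleright z),
\qquad
xy\triangleright z=x\triangleright(y\triangleright z)-(x\triangleright y)\triangleright z .
\]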
Note that \eqref{eq:pha1} together with \eqref{eq:pha2} imply that the extension of the post-Lie product from $\mathfrak g$ to $\mathcal U(\mathfrak g)$ yields a linear map $d:\mathfrak g \rightarrow \operatorname{Der}\big(\mathcal U(\mathfrak g)\big),$ defined via $d(x)(x_1\cdots x_n):=\sum_{i=1}^nx_1\cdots (x\triangleright x_i)\cdots x_n$, for any word $x_1\cdots x_n \in \mathcal U(\mathfrak g)$. A simple computation shows that, in general, this map is not a morphism of Lie algebras. Together with Proposition \ref{prop:post1} one can prove the next statement.
\begin{prop}\cite{EFLMK}\label{prop:post2}
Let $A,B,C\in\mathcal U(\mathfrak g)$
\allowdisplaybreaks{
\begin{align}
A\triangleright (B\triangleright C)&=(A_{(1)}(A_{(2)}\triangleright B))\triangleright C. \label{last}
\end{align}}
\end{prop}
It turns out that identity \eqref{last} in Proposition \ref{prop:post2} can be written $A\triangleright (B\triangleright C) = m_\ast(A\otimes B) \triangleright C$, where the product $m_\ast:\mathcal U(\mathfrak g)\otimes\mathcal U(\mathfrak g)\rightarrow\mathcal U(\mathfrak g)$ is defined by
\begin{equation}
\label{eq:postLieU}
m_\ast(A\otimes B)= A\ast B:=A_{(1)}(A_{(2)}\triangleright B).
\end{equation}
\begin{thm}\cite{EFLMK}\label{thm:KLM0}
The product defined in \eqref{eq:postLieU} is non-commutative, associative and unital. Moreover, $\mathcal U_*(\mathfrak g):=(\mathcal U(\mathfrak g),m_\ast,{\bf 1},\Delta,\epsilon,S_\ast)$ is a co-commutative Hopf algebra, whose unit, co-unit and coproduct coincide with those defining the usual Hopf algebra structure on $\mathcal U(\mathfrak g)$. The antipode $S_\ast$ is given uniquely by the defining equations $
m_\ast\circ(\operatorname{id}\otimes S_\ast)\circ\Delta
={\bf 1}\circ\epsilon
=m_\ast\circ(S_\ast\otimes\operatorname{id})\circ\Delta.$
More precisely
\begin{equation}
S_\ast (x_1\cdots x_n)=-x_1\cdots x_n-\sum_{k=1}^{n-1}
\sum_{\sigma\in\Sigma_{k,n-k}}x_{\sigma(1)}\cdots x_{\sigma(k)}\ast
S(x_{\sigma(k+1)}\cdots x_{\sigma(n)}),\label{eq:antipodests}
\end{equation}
for every $x_1\cdots x_n\in\mathcal U_n(\mathfrak g)$ and for all $n\geq 1$.
\end{thm}
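For instance, that ${\bf 1}$ is the unit for $m_\ast$ is immediate from \eqref{eq:pha1}: ${\bf 1}\ast A={\bf 1}({\bf 1}\triangleright A)=A$ and $A\ast{\bf 1}=A_{(1)}(A_{(2)}\triangleright{\bf 1})=A_{(1)}\epsilon(A_{(2)})=A$.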
\noindent Equation \eqref{eq:antipodests} becomes clear by noting that since elements $x \in \mathfrak g$ are primitive and $\Delta$ is an algebra morphism with respect to the product \eqref{eq:postLieU}, one deduces
\begin{lem}\label{lem:coprodast}
For $x_1\ast\cdots\ast x_n \in \mathcal U_*(\mathfrak g)$
\allowdisplaybreaks{
\begin{eqnarray*}
\Delta(x_1\ast\cdots\ast x_n)
&=&x_1\ast\cdots\ast x_n\otimes {\bf{1}} \nonumber +{\bf{1}}\otimes x_1\ast \cdots \ast x_n\\
&+&\sum_{k=1}^{n-1}\sum_{\sigma\in\Sigma_{k,n-k}}
x_{\sigma(1)} \ast \cdots \ast x_{\sigma(k)}\otimes x_{\sigma(k+1)}\ast \cdots \ast x_{\sigma(n)}.
\end{eqnarray*}}
Here $\Sigma_{k,n-k} \subset \Sigma_n$ denotes the set of permutations $\sigma$ in the symmetric group $\Sigma_n$ of the $n$ elements $[n]:=\{1, 2, \ldots, n\}$ such that ${\sigma(1)}< \cdots <\sigma(k)$ and ${\sigma(k+1)}< \cdots <{\sigma(n)}$.
\end{lem}
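As an illustration of \eqref{eq:antipodests}, for a word of length two one finds
\[
S_\ast(x_1x_2)=-x_1x_2+x_1\ast x_2+x_2\ast x_1
=x_2x_1+x_1\triangleright x_2+x_2\triangleright x_1,
\]
in accordance with $m_\ast\circ(S_\ast\otimes\operatorname{id})\circ\Delta\,(x_1x_2)=\epsilon(x_1x_2)\,{\bf 1}=0$.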
The relation between the Hopf algebra $\mathcal U_*(\mathfrak g)$ in Theorem \ref{thm:KLM0} and the universal enveloping algebra $\mathcal U(\overline{\mathfrak g})$ corresponding to the Lie algebra $\overline{\mathfrak g}$ is the content of the following theorem.
\begin{thm}\cite{EFLMK}\label{thm:KLM}
$\mathcal U_*(\mathfrak g)$ is isomorphic, as a Hopf algebra, to $\mathcal U(\overline{\mathfrak g})$. More precisely, the identity map $\operatorname{id}:\overline{\mathfrak g}\rightarrow\mathfrak g$ admits a unique extension to an isomorphism of Hopf algebras $\phi:\mathcal U(\overline{\mathfrak g})\rightarrow \mathcal U_*(\mathfrak g)$.
\end{thm}
\begin{rmk}\label{rmk:linkSTSR}
In Section \ref{sect:anotherHA} we will show that when the post-Lie algebra structure is defined by a solution of the modified classical Yang--Baxter equation, the isomorphism $\phi$ in Theorem \ref{thm:KLM} can be explicitly described in terms of the Hopf algebra structures on $\mathcal U(\bar{\mathfrak g})$ and $\mathcal U_*(\mathfrak g)$.
\end{rmk}
\smallskip
Before further elaborating on the last remark in the context of reference \cite{RSTS} in the next section, we will show a central property of group-like elements in the completed universal enveloping algebra $\mathcal U(\mathfrak g)$ of the post-Lie algebra~${\mathfrak g}$ and, at the same time, we will give a more explicit (combinatorial) expression for the isomorphism $\phi$.
\medskip
In what follows we use $m_{\cdot}: \mathcal U(\overline{\mathfrak g}) \otimes \mathcal U(\overline{\mathfrak g}) \to \mathcal U(\overline{\mathfrak g})$ to denote the product in $\mathcal U(\overline{\mathfrak g})$, i.e., $m_\cdot(A \otimes B)=A . B$ for any $A,B \in \mathcal U(\overline{\mathfrak g})$. The Hopf algebra isomorphism $\phi: \mathcal U(\overline{\mathfrak g}) \to \mathcal U_*(\mathfrak g)$ in Theorem \ref{thm:KLM} can be described as follows. From the proof of Theorem \ref{thm:KLM} it follows that $\phi$ restricts to the identity on $\mathfrak g \hookrightarrow \mathcal U(\mathfrak g)$. Moreover, for $x_1,x_2,x_3 \in \mathfrak g$ we find
$$
\phi(x_1 .\, x_2) = \phi(x_1) * \phi(x_2) = x_1 * x_2 =x_1x_2 + x_1 \triangleright x_2,
$$
and
\allowdisplaybreaks{
\begin{align}
\phi(x_1 .\, x_2 .\, x_3) &= x_1 * x_2 * x_3 \\
&= x_1(x_2 * x_3) + x_1 \triangleright (x_2 * x_3) \label{recursion}\\
&=x_1x_2x_3 + x_1(x_2 \triangleright x_3) + x_2(x_1 \triangleright x_3)
+ (x_1 \triangleright x_2)x_3 + x_1 \triangleright(x_2 \triangleright x_3).
\end{align}}
Equality \eqref{recursion} can be generalized to the following simple recursion for words in $\mathcal U(\overline{\mathfrak g})$ with $n$ letters
\begin{equation}
\label{eq:PHIrecursion1}
\phi(x_1 .\, x_2 .\, \cdots .\, x_n) = x_1\phi(x_2 .\, \cdots .\, x_n)
+ x_1 \triangleright \phi(x_2 .\, \cdots .\, x_n) .
\end{equation}
Recall that $x \triangleright \un=0$ for $x \in \mathfrak g$, and $\phi(\un)=\un$. From the fact that the post-Lie product on $\mathfrak g$ defines a linear map $d:\mathfrak g \rightarrow \operatorname{Der}\big(\mathcal U(\mathfrak g)\big),$ we deduce that the number of terms on the righthand side of the recursion \eqref{eq:PHIrecursion1} is given with respect to the length $n=1,2,3,4,5,6$ of the word $x_1 .\, \cdots .\, x_n \in \mathcal U_*(\mathfrak g)$ by 1, 2, 5, 15, 52, 203, respectively. These are the Bell numbers $B_i$, for $i=1,\ldots,6$, and for general $n$, these numbers satisfy the recursion $B_{n+1} = \sum_{i=0}^n {n \choose i} B_i$. Bell numbers count the different ways the set $[n]$ can be partitioned into disjoint subsets.
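For example, with the convention $B_0=1$ the recursion gives $B_4=\binom{3}{0}B_0+\binom{3}{1}B_1+\binom{3}{2}B_2+\binom{3}{3}B_3=1+3+6+5=15$, matching the count above.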
From this we deduce the general formula for $x_1 .\, \cdots .\, x_n \in \mathcal U(\overline {\mathfrak g})$
\begin{equation}
\phi(x_1 .\, \cdots .\, x_n) = x_1 * \cdots * x_n = \sum_{\pi \in P_n} X_\pi \in \mathcal U( {\mathfrak g}) \label{eq:PHIrecursion2},
\end{equation}
where $P_n$ is the lattice of set partitions of the set $[n]=\{1,\dots,n\}$, which has a partial order of refinement ($\pi \leq \kappa$ if $\pi$ is a finer set partition than $\kappa$). Remember that a partition $\pi$ of the (finite) set $[n]$ is a collection of (non-empty) subsets $\pi=\{\pi_1,\dots,\pi_b\}$ of $[n]$, called blocks, which are mutually disjoint, i.e., $\pi_i \cap \pi_j=\emptyset$ for all $i\neq j$, and whose union is $[n]$, i.e., $\cup_{i=1}^b \pi_i =[n]$. We denote by $|\pi|:=b$ the number of blocks of the partition $\pi$, and $|\pi_i|$ is the number of elements in the block $\pi_i$. Given $p,q \in [n]$ we will write that $p \sim_{\pi} q$ if and only if they belong to the same block. The partition $\hat{1}_n = \{\pi_1\}$ consists of a single block, i.e., $|\pi_1|=n$. It is the maximum element in $P_n$. The partition $\hat{0}_n=\{\pi_1,\dots,\pi_n\}$ has $n$ singleton blocks, and is the minimum partition in $P_n$. In the following we denote set-partitions pictorially. For instance, the five elements in $P_3$ are depicted as follows:
$$
\begin{array}{c}
\strich\strich\strich \\
1\ 2\ 3
\end{array}
\qquad\
\begin{array}{c}
\strich\; \n \\
\phantom{n} 1\ 2\ 3
\end{array}
\qquad\;\;\;
\begin{array}{c}
\n\hspace{0.3cm} \strich \\
1\ 2\ 3\
\end{array}
\qquad\
\begin{array}{c}
\nin\\
1\;\, 2\;\, 3
\end{array}
\qquad\;\;
\begin{array}{c}
\n\hspace{0.125cm} \n\\
\phantom{t}1\;\, 2\;\, 3
\end{array}
$$
The first represents the minimal element in $P_3$, i.e., the singleton partition $\{\{1\},\{2\},\{3\}\}$. The second, third and fourth diagram represent the partitions $\{\{1\},\{2,3\}\}$, $\{\{1,2\},\{3\}\}$, and $\{\{2\},\{1,3\}\}$, respectively. The last one is the maximal element in $P_3$, which consists of a single block $\{\{1,2,3\}\}$. At order 4 we list the examples
$$
\strich\, \n\hspace{0.12cm} \n
\qquad\
\strich \nin
\qquad\
\n\hspace{0.015cm} \nin
\qquad\
\nin \strich
$$
where the first and second diagram correspond to $\{\{1\},\{2,3,4\}\}$ and $\{\{1\},\{3\},\{2,4\}\}$, respectively. The third and fourth diagram correspond to $\{\{3\},\{1,2,4\}\}$ and $\{\{2\},\{1,3\},\{4\}\}$, respectively.
Observe that the particular ordering of the blocks in the partitions of the above examples follows from translating the pictorial representation by ``reading'' it from right to left. More precisely, the ordering of the blocks of any partition $\pi = \{\pi_1, \ldots, \pi_l\}$ associated to the graphical representation is such that $\max(\pi_i)>j$, $\forall j \in \pi_m$, $m<i$. Moreover, the elements in each block $\pi_i=\{k_1^i,k_2^i ,\ldots ,k_s^i\}$ are in natural order, i.e., $k_1^i < k_2^i <\cdots < k_s^i$. Hence, in the following we assume that the blocks of any partition $\pi$ are in increasing order with respect to the maximal element in each block, and the elements in each block are in natural increasing order, too.
The element $X_\pi$ in \eqref{eq:PHIrecursion2} is defined as follows
\begin{equation}
X_{\pi} := \prod_{\pi_i \in \pi} x(\pi_i), \label{eq:PHIrecursion2a}
\end{equation}
where $x(\pi_i):= \ell^{\triangleright }_{x_{k_1^i}} \circ \ell^{\triangleright }_{x_{k_2^i}} \circ\cdots \circ \ell^{\triangleright }_{x_{k_{l-1}^i}}(x_{k_l^i})$ for the block $\pi_i=\{k_1^i,k_2^i,\ldots ,k_l^i\}$ of the partition $\pi=\{\pi_1, \ldots, \pi_m\}$, and $\ell^{\triangleright}_{a}(b):= a \triangleright b$, for $a,b$ elements in the post-Lie algebra $\mathfrak g \hookrightarrow \mathcal U(\mathfrak g)$. Recall that $k_l^i \in \pi_i$ is the maximal element in this block. For instance
$$
X_{\scalebox{0.6}{\strich\strich\strich}}=x_1x_2x_3,
\quad
X_{\scalebox{0.6}{\strich \n}}=x_1(x_2 \triangleright x_3),
\quad
X_{\scalebox{0.6}{\nin}}=x_2(x_1 \triangleright x_3),
$$
$$
X_{\scalebox{0.6}{\n\hspace{0.05cm} \strich }}=(x_1 \triangleright x_2)x_3,
\quad
X_{\scalebox{0.6}{\n\hspace{0.00cm} \n}}=x_1 \triangleright(x_2 \triangleright x_3)
$$
\begin{rmk}
Defining $m_i:=\phi(x^{\cdot i})$ and $d_i := \ell^{\triangleright i-1}_x(x):=x \triangleright (\ell^{\triangleright i-2}_x(x))$, $ \ell^{\triangleright 0}=\mathrm{id}$, we find that \eqref{eq:PHIrecursion2} is the $i$-th-order non-commutative Bell polynomial, $m_i = {\mathrm{B}}^{nc}_i(d_1,\ldots,d_i)$. See \cite{ELM14,LMK2} for details.
\end{rmk}
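In low orders, \eqref{eq:PHIrecursion2} gives, with the notation of the preceding remark,
\[
m_1=d_1,\qquad m_2=d_1^2+d_2,\qquad m_3=d_1^3+2\,d_1d_2+d_2d_1+d_3 .
\]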
Next we state a recursion for the image $\phi^{-1}(x_1 \cdots x_n)$ of the word $x_1 \cdots x_n \in \mathcal U(\mathfrak g)$ under the inverse map $\phi^{-1}$. First, it is easy to see that $\phi^{-1}(x_1x_2)=x_1 .\, x_2 - x_1 \triangleright x_2 \in \mathcal U(\overline{\mathfrak g})$. Indeed, since $\phi$ is linear and reduces to the identity on $\mathfrak g \hookrightarrow \mathcal U(\mathfrak g)$, we have
$$
\phi(x_1 .\, x_2 - x_1 \triangleright x_2)= x_1 * x_2 - x_1 \triangleright x_2 = x_1x_2,
$$
and
\allowdisplaybreaks{
\begin{align*}
\phi^{-1}(x_1x_2x_3 ) = x_1 .\, x_2 .\, x_3
- \phi^{-1}(x_1(x_2 \triangleright x_3))
- \phi^{-1}(x_2(x_1 \triangleright x_3))
- \phi^{-1}((x_1 \triangleright x_2)x_3)
- x_1 \triangleright(x_2 \triangleright x_3)
\end{align*}}
which is easy to verify. In general, we find a recursive formula for $\phi^{-1}(x_1 \cdots x_n) \in \mathcal U(\overline{\mathfrak g})$
\begin{equation}
\label{eq:PHIrecursion3}
\phi^{-1}(x_1 \cdots x_n) = x_1 .\, \cdots .\, x_n - \sum_{\hat{0}_n < \pi \in P_n} \phi^{-1}(X_\pi).
\end{equation}
This is well-defined since in the sum on the righthand side all partitions have fewer than $n$ blocks.
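For instance, unfolding the recursion at $n=3$ and using $\phi^{-1}(ab)=a .\, b - a \triangleright b$ for $a,b \in \mathfrak g$, the expression displayed above reduces to
$$
\phi^{-1}(x_1x_2x_3) = x_1 .\, x_2 .\, x_3
- x_1 .\,(x_2 \triangleright x_3)
- x_2 .\,(x_1 \triangleright x_3)
- (x_1 \triangleright x_2) .\, x_3
+ x_2 \triangleright(x_1 \triangleright x_3)
+ (x_1 \triangleright x_2) \triangleright x_3,
$$
as one verifies by applying $\phi$ to the righthand side.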
\medskip
Next we compare group-like elements in the completions of $\mathcal U(\mathfrak g)$ and $\mathcal U_*(\mathfrak g)$, which we denote by $\hat{\mathcal U}(\mathfrak g)$ and $\hat{\mathcal U}_*(\mathfrak g)$, respectively.
Recall that if $(H,m,u,\Delta,\epsilon,S)$ is a Hopf algebra and $I=\operatorname{Ker}(\epsilon:H\rightarrow\mathbb K)$ the augmentation ideal, then $\hat{H}:=\varprojlim H/I^n$ can be endowed with the structure of a \emph{complete Hopf algebra}. The elements of $\hat H$ are the Cauchy sequences $\{x_n\}_{n\geq 0}$ with respect to the topology generated by $\{V_n(x)=x+I^n\,\vert\,x\in H\}_{n\geq 0}$. In particular, in $\hat H$ one finds elements of the form $\operatorname{exp}(\xi):=\sum_{n\geq 0}\frac{\xi^n}{n!}$, and one can prove that $x\in\hat H$ is \emph{primitive}, i.e., $\hat{\Delta}(x)=x\hat{\otimes} {\bf{1}}+{\bf{1}}\hat{\otimes} x$, if and only if $\operatorname{exp}(x)$ is \emph{group-like}, that is, $\hat{\Delta}(\operatorname{exp}(x))=\operatorname{exp}(x)\hat{\otimes}\operatorname{exp}(x)$ \cite{Quillen}. Note that the set $\mathcal G(\hat H)$ of group-like elements forms a group with respect to the associative product of $\hat H$, and that for every $\xi \in \mathcal G(\hat H)$, $\xi^{-1}=\hat S(\xi)$. Moreover, note that the set of primitive elements $\mathcal P(\hat H)$ forms a Lie algebra whose Lie bracket is defined by anti-symmetrizing the associative product of $\hat H$. The map $\operatorname{exp}:\mathcal P(\hat H)\rightarrow\mathcal G(\hat H)$, $x\mapsto\operatorname{exp}(x)$, is a bijection of sets whose inverse defines the logarithm function. Let $H=\mathcal U(\mathfrak g)$ be the universal enveloping algebra of $\mathfrak g$, and consider its completion $\hat{\mathcal U}(\mathfrak g)$. Since $\mathfrak g=\mathcal P(\hat{\mathcal U}(\mathfrak g))$, one deduces the existence of a bijection between $\mathfrak g$ and the group $\mathcal G(\hat{\mathcal U}(\mathfrak g))$, which to every primitive element $x \in \mathfrak g$ associates the corresponding unique group-like element $\operatorname{exp}(x)$. Note that in the process of completing ${\mathcal U}(\mathfrak g)$, the Lie algebra $\mathfrak g$ is completed as well \cite{Quillen}.
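As a minimal illustration, for $x\in\mathfrak g$ one has $\hat\Delta(x^2)=x^2\hat{\otimes}{\bf{1}}+2\,x\hat{\otimes}x+{\bf{1}}\hat{\otimes}x^2$, so that $\frac{1}{2!}\hat\Delta(x^2)$ is precisely the order two component of $\operatorname{exp}(x)\hat{\otimes}\operatorname{exp}(x)$.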
Observe that $\phi$ maps the augmentation ideal of $\mathcal U(\overline{\mathfrak g})$ to the augmentation ideal of $\mathcal U(\mathfrak g)$. Therefore, it extends to an isomorphism $\hat\phi : \hat{\mathcal U}(\overline{\mathfrak g}) \to \hat{\mathcal U}(\mathfrak g)$ of complete Hopf algebras.
We are interested in the inverse of the group-like element $\exp(x) \in \mathcal G(\hat{\mathcal U}(\mathfrak g))$ with respect to $\hat{\phi}$. It is obtained from the inverses of the words $x^n \in \hat{\mathcal U}(\mathfrak g)$, i.e., $\hat{\phi}^{-1}(\exp(x))=\sum_{n \ge 0} \frac{1}{n!} \hat\phi^{-1}(x^n)$. The central result is the following
\begin{thm}\label{thm:FinverseChi}
For each $x \in \mathfrak g$, there exists a unique element $\chi(x) \in \mathfrak g$, such that
\begin{equation}
\exp(x) = \exp^*(\chi(x)). \label{group-like}
\end{equation}
\end{thm}
\begin{proof}
For $x \in \mathfrak g$ the exponential $\exp(x)$ is a group-like element in $\mathcal G(\hat{\mathcal U}(\mathfrak g))$. The proof of Theorem \ref{thm:FinverseChi} involves calculating the inverse of the group-like element $\exp(x) \in \mathcal G(\hat{\mathcal U}(\mathfrak g))$ with respect to the map $\hat{\phi}$. Indeed, we would like to show that $\hat{\phi}^{-1}(\exp(x)) = \exp^\cdot(\chi(x)) \in \mathcal G(\hat{\mathcal U}(\bar{\mathfrak g}))$, from which identity \eqref{group-like} follows
$$
\hat{\phi}\circ\hat{\phi}^{-1}(\exp(x)) = \exp(x) = \hat{\phi}\circ \exp^\cdot(\chi(x)) = \exp^*(\chi(x)),
$$
due to $\hat\phi$ being an algebra morphism from $\hat{\mathcal U}(\overline{\mathfrak g})$ to $\hat{\mathcal U}_*(\mathfrak g)$, which reduces to the identity on~${\mathfrak g}$.
First we show that for $x\in \mathfrak g$ the element $\chi(x)$ can be defined inductively. For this we consider the expansion $\chi(xt):=xt + \sum_{m>1} \chi_m(x)t^m$ in the dummy parameter $t$. Comparing $\exp^*(\chi(xt))$ order by order with $\exp(xt)$ yields at second order in $t$
$$
\chi_2(x) := \frac{1}{2}x^2 - \frac{1}{2}x * x = - \frac{1}{2}x \triangleright x \in \mathfrak g.
$$
At third order we deduce from \eqref{group-like} that
\allowdisplaybreaks{
\begin{align*}
\lefteqn{\chi_3(x) := -\frac{1}{3!} \sum_{\hat{0}_3 < \pi \in P_3} X_\pi - \frac{1}{2} \chi_2(x) * x - \frac{1}{2} x * \chi_2(x)} \\
&= -\frac{1}{3!} \sum_{\hat{0}_3 < \pi \in P_3} X_\pi
+ \frac{1}{4} \big((x \triangleright x) x + (x \triangleright x) \triangleright x\big)
+ \frac{1}{4} \big(x (x \triangleright x) + x \triangleright (x \triangleright x)\big)\\
&= -\frac{1}{3!}\big( 2x(x \triangleright x) + (x \triangleright x)x + x \triangleright (x \triangleright x)\big)
+ \frac{1}{4} \big((x \triangleright x) x + (x \triangleright x) \triangleright x\big)
+ \frac{1}{4} \big(x (x \triangleright x) + x \triangleright (x \triangleright x)\big)\\
&= \frac{1}{12} [(x \triangleright x), x]
+ \frac{1}{4} (x \triangleright x) \triangleright x
+ \frac{1}{12} x \triangleright (x \triangleright x) \in \mathfrak g\\
&= \frac{1}{6} [\chi_1(x), \chi_2(x)]
- \frac{1}{2} \chi_2(x)\triangleright x
- \frac{1}{6} x \triangleright \chi_2(x),
\end{align*}}
where we defined $\chi_1(x):=x$. The $n$-th order term is given by
\allowdisplaybreaks{
\begin{align}
\label{eq:nth-order}
\chi_n(x) &:= -\frac{1}{n!} \sum_{\hat{0}_n < \pi \in P_n} X_\pi
- \sum_{k=2}^{n-1} \frac{1}{k!} \sum_{p_1 + \cdots + p_k = n \atop p_i > 0} \chi_{p_1}(x) * \chi_{p_2}(x) * \cdots * \chi_{p_k}(x)\\
&= \frac{1}{n!} x^n -\frac{1}{n!} x^{*n}
- \sum_{k=2}^{n-1} \frac{1}{k!} \sum_{p_1 + \cdots + p_k = n \atop p_i > 0} \chi_{p_1}(x) * \chi_{p_2}(x) * \cdots * \chi_{p_k}(x).\end{align}}
From this we derive an inductive description of the terms $\chi_n(x) \in \hat{\mathcal U}_*({\mathfrak g})$ depending on the $\chi_p(x)$ for $1 \le p \le n-1$
\begin{equation}
\chi_n(x) := \frac{1}{n!} x^n
- \sum_{k=2}^{n} \frac{1}{k!} \sum_{p_1 + \cdots + p_k = n \atop p_i > 0} \chi_{p_1}(x) * \chi_{p_2}(x) * \cdots * \chi_{p_k}(x).
\label{chi-map}
\end{equation}
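As a quick consistency check, at $n=2$ formula \eqref{chi-map} gives $\chi_2(x) = \frac{1}{2}x^2 - \frac{1}{2}\,\chi_1(x) * \chi_1(x) = \frac{1}{2}x^2 - \frac{1}{2}\,x * x = -\frac{1}{2}\, x \triangleright x$, in agreement with the second order computation above.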
We have verified directly that the first three terms, $\chi_i(x)$ for $i=1,2,3$, in the expansion $\chi(xt):=xt + \sum_{m>1} \chi_m(x)t^m$ are in $ \mathfrak g$. Showing that $\chi_n(x) \in \mathfrak g$ for $n>3$ by induction using formula \eqref{chi-map} is surely feasible. However, we follow another strategy. At this stage \eqref{chi-map} implies that $\chi(x) \in \hat{\mathcal U}_*({\mathfrak g})$ exists. Since $x \in \mathfrak g$, we have that $\exp(x)$ is group-like, i.e., $\hat{\Delta} (\exp(x)) = \exp(x) \hat\otimes \exp(x)$. Recall that $\hat{\mathcal U}_*({\mathfrak g})$ is a complete Hopf algebra with the same coproduct $\hat{\Delta}$. Hence
$$
\hat{\Delta}(\exp^*(\chi(x)))
= \hat{\Delta} (\exp(x))
= \exp(x) \hat\otimes \exp(x)
= \exp^*(\chi(x)) \hat\otimes \exp^*(\chi(x)).
$$
Using $\hat\phi$ we can write $(\hat\phi \hat\otimes \hat\phi) \circ \hat{\Delta}_{\overline{\mathfrak g}}(\exp^\cdot(\chi(x))) = (\hat\phi \hat\otimes \hat\phi) \big(\exp^\cdot(\chi(x)) \hat\otimes \exp^\cdot(\chi(x))\big),$ which implies that $\exp^\cdot(\chi(x))$ is a group-like element in $\hat{\mathcal U}(\overline{\mathfrak g})$
$$
\hat{\Delta}_{\overline{\mathfrak g}}(\exp^\cdot(\chi(x))) = \exp^\cdot(\chi(x)) \hat\otimes \exp^\cdot(\chi(x)).
$$
Since $\hat{\mathcal U}(\overline{\mathfrak g})$ is a complete filtered Hopf algebra, the correspondence between group-like and primitive elements is one-to-one \cite{Quillen}. This implies that $\chi(x) \in \overline{\mathfrak g} \simeq {\mathfrak g}$, which proves equality \eqref{group-like}. Note that $\chi(x)$ is actually an element of the completion of the Lie algebra $\mathfrak g$; the latter is contained in $\hat{\mathcal U}({\mathfrak g})$.
\end{proof}
\begin{cor}\label{cor:diffeqChi}
Let $x \in {\mathfrak g}$. The following differential equation holds for $\chi(xt) \in \mathfrak g[[t]]$
\begin{equation}
\label{proof-key2}
\dot \chi(xt) = {\rm dexp}^{*-1}_{-\chi(xt)}\Big( \exp^*\big(-\chi(xt)\big) \triangleright x\Big).
\end{equation}
The solution $\chi(xt)$ is called the post-Lie Magnus expansion.
\end{cor}
\begin{proof} Recall the general fact for the $\rm{dexp}$-operator \cite{Blanes}
$$
\exp^*({-\beta(t)}) \ast \frac{d }{dt}\exp^*({\beta(t)}) = \exp^*({-\beta(t)}) \ast {\rm{dexp}}^\ast _{\beta}(\dot{\beta}) *\exp^*({\beta(t)})
={\rm{dexp}}^\ast _{-\beta}(\dot{\beta}),
$$
where
$$
{\rm{dexp}}^\ast _{\beta}(x):= \sum_{n \ge 0} \frac{1}{(n+1)!}ad^{(\ast n)}_\beta(x)
\qquad {\rm{and}} \qquad
{\rm dexp}^{\ast -1}_{\beta}(x):=\sum_{n \ge 0} \frac{b_n}{n!} ad^{(\ast n)}_\beta(x).
$$
Here $b_n$ are the Bernoulli numbers and $ad^{(\ast k)}_a(b):=[a,ad^{(\ast k-1)}_a(b)]_\ast$, with $ad^{(\ast 0)}_a(b):=b$. This, together with the differential equation $\frac{d}{dt}\exp^*(\chi(xt)) = \exp(xt)x$ deduced from \eqref{group-like}, implies
\allowdisplaybreaks{
\begin{eqnarray}
{\rm{dexp}}^{*}_{-\chi(xt)}\big(\dot \chi(xt)\big)
&=& \exp^*\big(-\chi(xt)\big)* (\exp(xt)x) \nonumber\\
&=& \exp^*\big(-\chi(xt)\big)
\Big(\exp^*\big(-\chi(xt)\big) \triangleright (\exp(xt)x)\Big) \label{step1}\\ &=& \exp^*\big(-\chi(xt)\big)
\bigg(
\big(\exp^*\big(-\chi(xt)\big) \triangleright \exp(xt)\big)
\big(\exp^*\big(-\chi(xt)\big) \triangleright x\big)
\bigg) \label{step2}\\
&=& \exp^*\big(-\chi(xt)\big)
\bigg(
\big(\exp^*\big(-\chi(xt)\big) \triangleright \exp^*\big(\chi(xt)\big)\big)
\big(\exp^*\big(-\chi(xt)\big) \triangleright x\big)
\bigg) \label{step3}\\
&=&
\bigg( \exp^*\big(-\chi(xt)\big)
\Big(\exp^*\big(-\chi (xt)\big) \triangleright
\exp^*\big(\chi(xt)\big)\Big)
\bigg)
\big(\exp^*\big(-\chi (xt)\big) \triangleright x\big)
\label{step4}\\
&=&
\Big( \exp^*\big(-\chi(xt)\big) *
\exp^*\big(\chi(xt)\big) \Big)
\big(\exp^*\big(-\chi(xt)\big) \triangleright x\big) \nonumber\\
&=& \exp^*\big(-\chi(xt)\big) \triangleright x. \nonumber
\end{eqnarray}}
The claim in \eqref{proof-key2} follows after applying ${\rm{dexp}}^{*-1}_{-\chi(xt)}$ to both sides. Note that in \eqref{step1}, \eqref{step2} and \eqref{step3} we used \eqref{eq:postLieU}, \eqref{eq:pha2} and \eqref{group-like}, respectively.
\end{proof}
\begin{rmk} Note that any post-Lie algebra with an abelian Lie bracket becomes a pre-Lie algebra, and the universal enveloping algebra ${\mathcal U}(\mathfrak g)$ reduces to the symmetric algebra ${\mathcal S}(\mathfrak g)$. This is the setting of \cite{OudomGuin}, and identity \eqref{group-like} was described in the pre-Lie algebra context in \cite{ChapPat}. In this case the post-Lie Magnus expansion $\chi(x)$ restricts to the simpler pre-Lie Magnus expansion \cite{EM,Manchon}.
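Concretely, specializing the terms $\chi_2(x)$ and $\chi_3(x)$ computed above to a vanishing Lie bracket, the first terms of the pre-Lie Magnus expansion read
$$
\chi(x) = x - \frac{1}{2}\, x \triangleright x + \frac{1}{4}\,(x \triangleright x) \triangleright x + \frac{1}{12}\, x \triangleright (x \triangleright x) + \cdots.
$$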
\end{rmk}
In the next section we further explore the universal enveloping algebra corresponding to a post-Lie algebra defined in terms of a classical $r$-matrix, by looking at group-like elements in the completed universal enveloping algebra $\hat{\mathcal U}(\mathfrak g)$.
\section{An isomorphism theorem}
\label{sect:anotherHA}
In this section we will show that, after specializing to the case of post-Lie algebras defined by a solution of the MCYBE, one can get an explicit formula for the isomorphism map of Theorem \ref{thm:KLM}.
Let $\mathcal U(\mathfrak g)$ and $\mathcal U(\mathfrak g_R)$ be the universal enveloping algebras of $\mathfrak g$ and $\mathfrak g_R$, respectively. Since $R_\pm : \mathfrak g_R \to \mathfrak g$ are Lie algebra morphisms, $R_\pm[x,y]_R = [R_\pm x,R_\pm y]$, the universal property allows one to extend both maps to unital algebra morphisms from $\mathcal U(\mathfrak g_R)$ to $\mathcal U(\mathfrak g)$. We shall use the same notation for the latter, that is, $R_\pm: \mathcal U(\mathfrak g_R) \rightarrow \mathcal U(\mathfrak g)$. Their images are $\mathcal U(\mathfrak g_\pm)$, i.e., the universal enveloping algebras of the Lie subalgebras $\mathfrak g_\pm \subset \mathfrak g$.
\begin{prop}\label{pro:lineariso}
The map $F:\mathcal U(\mathfrak g_R)\rightarrow\mathcal U(\mathfrak g)$ defined by:
\begin{equation}
\label{eq:sigma}
F=m_{\mathfrak g}\circ (\operatorname{id}\otimes S_{\mathfrak g})\circ (R_+\otimes R_-)\circ\Delta_{\mathfrak g_R},
\end{equation}
is a linear isomorphism. Its restriction to $\mathfrak g_R \hookrightarrow \mathcal U(\mathfrak g_R)$ is the identity map.
\end{prop}
\begin{proof}
Note that $m_{\mathfrak g}$ and $S_{\mathfrak g}$ denote the product and the antipode in $\mathcal U(\mathfrak g)$, respectively, whereas $\Delta_{\mathfrak g_R}$ denotes the coproduct in $\mathcal U(\mathfrak g_R)$. This slightly more cumbersome notation is used in order to make the presentation easier to follow. Given an element $x \in\mathfrak g_R \hookrightarrow \mathcal U(\mathfrak g_R)$, one has that
\allowdisplaybreaks{
\begin{align*}
F(x) &=m_{\mathfrak g}\circ (\operatorname{id}\otimes S_{\mathfrak g})\circ (R_+\otimes R_-)\circ\Delta_{\mathfrak g_R}(x)\\
&= m_{\mathfrak g}\circ (\operatorname{id}\otimes S_{\mathfrak g})\circ (R_+\otimes R_-)(x \otimes {\bf{1}} + {\bf{1}} \otimes x)\\
&= m_{\mathfrak g}\circ (\operatorname{id}\otimes S_{\mathfrak g})(R_+(x)\otimes {\bf{1}} + {\bf{1}} \otimes R_-(x))\\
&= m_{\mathfrak g}(R_+(x)\otimes {\bf{1}} - {\bf{1}} \otimes R_-(x))\\
&= R_+(x)-R_-(x) = x \in \mathfrak g,
\end{align*}}
showing that $F$ restricts to the identity map between $\mathfrak g_R$ and $\mathfrak g$. We use the notation from the foregoing section by writing $m_{\mathfrak g_R}(x \otimes y)=x .\, y$. As in Lemma \ref{lem:coprodast} we have
$$
\Delta_{\mathfrak g_R}(x_1 .\, \cdots .\, x_n) = x_1 .\, \cdots .\, x_n \otimes {\bf{1}}
+ {\bf{1}} \otimes x_1.\, \cdots .\, x_n
+ \sum_{k=1}^{n-1}\sum_{\sigma\in\Sigma_{k,n-k}}
x_{\sigma(1)}.\, \cdots .\, x_{\sigma(k)} \otimes x_{\sigma(k+1)}.\, \cdots .\, x_{\sigma (n)}.
$$
Since $R_\pm$ are homomorphisms of unital associative algebras, one can easily show that for every $x_{1}.\, \cdots .\, x_k\in\mathcal U_k(\mathfrak g_R)$:
\begin{eqnarray*}
F (x_1.\, \cdots .\, x_k)&=&R_+(x_1)\cdots R_+(x_k) + (-1)^k R_-(x_k)\cdots R_-(x_1) + \\
&&\sum_{l=1}^{k-1}\sum_{\sigma\in\Sigma_{l,k-l}}(-1)^{k-l}R_+(x_{\sigma(1)})\cdots R_+(x_{\sigma(l)})R_-(x_{\sigma(k)})\cdots R_-(x_{\sigma(l+1)}) \in \mathcal U_k(\mathfrak g),
\end{eqnarray*}
which proves that $F$ maps homogeneous elements to homogeneous elements. To verify injectivity of $F$ one can argue as follows. Since $x=R_+(x)-R_-(x)$ for $x \in \mathfrak g$, one can deduce from the previous formula for $x_1.\, \cdots .\, x_k \in \mathcal U_k(\mathfrak g_R)$ that
\[
F(x_1 .\, \cdots .\, x_k) = x_1\cdots x_k \; \textbf{mod}\,\mathcal U_{k-1}(\mathfrak g),
\]
where $x_1\cdots x_k$ on the righthand side lies in $\mathcal U_k(\mathfrak g)$. For instance
$$
F(x_1 .\, x_2) = R_+(x_1)R_+(x_2) + R_-(x_2) R_-(x_1) - R_+(x_1)R_-(x_2) - R_+(x_2) R_-(x_1).
$$
Using $R_+(x)=x+R_-(x)$, one obtains in $\mathcal U(\mathfrak g)$ that
\allowdisplaybreaks{
\begin{align*}
F(x_1 .\, x_2) &= (x_1 + R_-(x_1))(x_2+R_-(x_2)) + R_-(x_2) R_-(x_1) \\
& \qquad- (x_1 + R_-(x_1))R_-(x_2) - (x_2+R_-(x_2)) R_-(x_1)\\
&= x_1x_2 + x_1R_-(x_2) + R_-(x_1)x_2 + R_-(x_1)R_-(x_2) \\
&+ R_-(x_2) R_-(x_1) - x_1R_-(x_2) - R_-(x_1)R_-(x_2) - x_2R_-(x_1) - R_-(x_2) R_-(x_1)\\
& = x_1x_2 + [R_-(x_1),x_2],
\end{align*}}
where $x_1x_2 \in \mathcal U_2(\mathfrak g)$ and $[R_-(x_1),x_2] \in \mathcal U_1(\mathfrak g) \simeq \mathfrak g$. Then, if $F(x_1 .\, \cdots .\, x_k)=0$, one concludes that $x_1\cdots x_k \in \mathcal U_k(\mathfrak g)$ must be equal to zero, that is, at least one among the elements $x_i \in \mathfrak g$ composing the monomial $x_1\cdots x_k$ is equal to zero. This forces the element $x_1 .\, \cdots .\, x_k \in \mathcal U_k(\mathfrak g_R)$ to be equal to zero, which implies injectivity of $F$.
To prove that the map $F$ is surjective one can argue by induction on the length of the homogeneous elements of $\mathcal U(\mathfrak g)$. The first step of the induction is provided by the fact that $F$ restricted to $\mathfrak g_R$ becomes the identity map, and $ \mathfrak g \hookrightarrow \mathcal U_1(\mathfrak g)$. Suppose now that every element in $\mathcal U_{k-1}(\mathfrak g)$ is in the image of $F$ and observe that $x_{1}\cdots x_{k}\in\mathcal U_k(\mathfrak g)$ can be written as
\allowdisplaybreaks{
\begin{align*}
\lefteqn{x_1 \cdots x_k =\prod^k_{i=1}\big(R_+(x_i)-R_-(x_i)\big)}\\
&= \big(R_+(x_1)-R_-(x_1)\big)\big(R_+(x_2)-R_-(x_2)\big)\prod^k_{i=3}\big(R_+(x_i)-R_-(x_i)\big)\\
&= \big(R_+(x_1)R_+(x_2)-R_-(x_1)R_+(x_2) - R_+(x_1)R_-(x_2) + R_-(x_1)R_-(x_2)\big)\prod^k_{i=3}\big(R_+(x_i)-R_-(x_i)\big)\\
&= \Big(R_+(x_{1})\cdots R_+(x_{k}) + (-1)^kR_-(x_{k})\cdots R_-(x_{1})\\
&+ \sum_{l=1}^{k-1}\sum_{\sigma\in\Sigma_{l,k-l}}
(-1)^{k-l}R_+(x_{\sigma(1)})\cdots R_+(x_{\sigma(l)})
\cdot R_-(x_{\sigma(k)}) \cdots R_-({x_{\sigma(l+1)}}) \Big) \textbf{mod}\,\mathcal U_{k-1}(\mathfrak g),
\end{align*}}
which proves the claim, since
\allowdisplaybreaks{
\begin{align}
F(x_1 .\, \cdots.\, x_k)
&= R_+(x_{1}) \cdots R_+(x_{k}) + (-1)^k R_-(x_{k}) \cdots R_-(x_{1}) \label{eq:1}\\
&+ \sum_{l=1}^{k-1} \sum_{\sigma\in\Sigma_{l,k-l}}(-1)^{k-l}
R_+(x_{\sigma(1)})\cdots R_+(x_{\sigma(l)})\cdot R_-(x_{\sigma(k)}) \cdots R_-(x_{\sigma(l+1)}). \label{eq:2}
\end{align}}
\end{proof}
Using the previous computation and the definition of the $*$-product, one can easily see that $F(x_1 .\, x_2)=x_1x_2 + [R_-(x_1),x_2] = x_1x_2 + x_1 \triangleright x_2$, where $\triangleright$ is defined in \eqref{def:RBpostLie} (and lifted to $\mathcal U(\mathfrak g) $). This implies that $F(x_1 .\, x_2) = x_1 * x_2 \in \mathcal U_*(\mathfrak g) $. Using a simple induction on the length of the monomials, this calculation extends to all of $\mathcal U(\mathfrak g_R)$, which is the content of the following
\begin{cor}\cite{EFLIMK}\label{cor:isoal}
The map $F$ is an isomorphism of unital, filtered algebras, from $\mathcal U(\mathfrak g_R)$ to $\mathcal{U}_{*}(\mathfrak{g})$. In particular, $F(x_1 .\, \cdots .\, x_n) = x_1 * \cdots * x_n$ for all monomials $x_1 .\, \cdots .\, x_n\in\mathcal U(\mathfrak g_R)$.
\end{cor}
Comparing this result with Theorem \ref{thm:KLM} of the previous section, one has
\begin{prop}\label{prop:idenF}
If the post-Lie algebra $(\mathfrak g,\triangleright)$ is defined in terms of a classical $r$-matrix $R$ via \eqref{def:RBpostLie}, then the isomorphism $\phi$ of Theorem \ref{thm:KLM} assumes the explicit form given in Formula \eqref{eq:sigma}, i.e. $\phi=F$.
\end{prop}
\begin{proof}
First recall that $\mathfrak g_R=\overline{\mathfrak g}$, see Remark \ref{rmk:doubleLiebracket-post-Lie}. Then note that both $\phi$ and $F$ are isomorphisms of filtered, unital associative algebras taking values in $\mathcal U_*(\mathfrak g)$, restricting to the identity map on $\mathfrak g_R$, which is the generating set of $\mathcal U(\mathfrak g_R)$. Since a morphism of unital algebras is uniquely determined by its values on a set of generators, the two maps coincide.
\end{proof}
At this point it is worth making the following observation, which will be useful later.
\begin{cor}\label{cor:dec}
Every $A\in\mathcal U(\mathfrak g)$ can be written uniquely as
\begin{equation}
A= R_+(A'_{(1)})S_{\mathfrak g}(R_-(A'_{(2)})) \label{eq:factinu1}
\end{equation}
for a suitable element $A'\in\mathcal U(\mathfrak g_R)$, where we wrote the coproduct of this element using Sweedler's notation, i.e., $\Delta_{\mathfrak g_R}(A')=A'_{(1)}\otimes A'_{(2)}$.
\end{cor}
\begin{proof}
The proof follows from \eqref{eq:PHIrecursion3}, where $A':=F^{-1}(A) \in \mathcal U(\mathfrak g_R)$. Proposition \ref{pro:lineariso} then implies that for each $A' \in \mathcal U(\mathfrak g_R)$,
\[
F(A')= R_+(A'_{(1)})S_{\mathfrak g}(R_-(A'_{(2)})).
\]
\end{proof}
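For instance, for $A=x_1x_2$ one has $A'=F^{-1}(x_1x_2)=x_1 .\, x_2 - x_1 \triangleright x_2$, and the righthand side of \eqref{eq:factinu1} becomes
$$
R_+(x_1)R_+(x_2) + R_-(x_2)R_-(x_1) - R_+(x_1)R_-(x_2) - R_+(x_2)R_-(x_1) - R_+(x_1 \triangleright x_2) + R_-(x_1 \triangleright x_2),
$$
which equals $F(x_1 .\, x_2) - x_1 \triangleright x_2 = x_1x_2$, as expected.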
Finally, in this more specialized context, we can give the following computational proof of the result contained in Theorem \ref{thm:KLM}.
\begin{thm}\label{cor:iso}
The map $F:\mathcal U(\mathfrak g_R)\rightarrow\mathcal U_{\ast}(\mathfrak g)$ is an isomorphism of Hopf algebras.
\end{thm}
\begin{proof}
The map $F$ is a linear isomorphism which sends a monomial of length $k$ to (a linear combination of) monomials of the same length. For this reason the compatibility of $F$ with the co-units is verified. Since $F:\mathcal U(\mathfrak g_R)\rightarrow\mathcal U_*(\mathfrak g)$ is an isomorphism of filtered, unital, associative algebras, the product $\ast$ defined in \eqref{eq:postLieU} can be described as the push-forward to $\mathcal U(\mathfrak g)$, via $F$, of the associative product of $\mathcal U(\mathfrak g_R)$
\begin{equation}
A \ast B=F(m_{\mathfrak g_R}(F^{-1}(A)\otimes F^{-1}(B))),\label{eq:push}
\end{equation}
for all monomials $A,B\in\mathcal U(\mathfrak g)$. This implies immediately the compatibility of $F$ with the algebra units. Let us show that $F$ is a morphism of co-algebras, i.e., that
\begin{equation}
\label{eq:morco}
\Delta_{\mathfrak g}\circ F=(F\otimes F)\circ\Delta_{\mathfrak g_R}.
\end{equation}
Corollary \ref{cor:isoal} implies that $F (x_1.\, \cdots .\, x_n)=x_1 \ast \cdots \ast x_n$, and the formula in Lemma \ref{lem:coprodast} yields
\allowdisplaybreaks{
\begin{eqnarray*}
\Delta_{\mathfrak g}\big(F (x_1 .\, \cdots .\, x_n)\big)
&=&x_1\ast\cdots\ast x_n\otimes \mathbf{1} +\mathbf{1}\otimes x_1\ast\cdots\ast x_n \\
&+&\sum_{k=1}^{n-1}\sum_{\sigma\in\Sigma_{k,n-k}}x_{\sigma(1)}\ast\cdots
\ast x_{\sigma(k)}\otimes x_{\sigma(k+1)}\ast\cdots\ast x_{\sigma(n)},
\end{eqnarray*}}
which turns out to be equal to $(F\otimes F)\circ\Delta_{\mathfrak g_R}(x_1 .\, \cdots .\, x_n).$ The only thing that is left to be checked is that $F$ is compatible with the antipodes of the two Hopf algebras, i.e., that $F\circ S_{\mathfrak g_R}=S_\ast\circ F$, where for $x_1 .\, \cdots .\, x_n\in\mathcal U(\mathfrak g_R)$, $S_{\mathfrak g_R}(x_1 .\, \cdots .\, x_n)=(-1)^n x_n .\, \cdots .\, x_1$. First recall that the antipode is an algebra anti-homomorphism, i.e., $S_\ast (A\ast B)=S_\ast(B)\ast S_\ast (A)$, for all $A,B\in\mathcal U_*(\mathfrak g)$. From this and from the property that $S_{\mathfrak g_R}(x)=-x$ for all $x\in\mathfrak g_R$, using a simple induction on the length of the monomials, one obtains
\[
S_\ast (x_1\ast\cdots\ast x_n)=(-1)^nx_n\ast\cdots\ast x_1.
\]
From this observation it now follows easily that $F\circ S_{\mathfrak g_R}=S_\ast\circ F$.
\end{proof}
We conclude this section with the following interesting observation, see Remark \ref{rmk:linkSTSR}.
\begin{prop}\label{prop:prodRSTS}
For all $A,B\in\mathcal U(\mathfrak g)$, one has that:
\begin{equation}
\label{eq:pSTS2}
A \ast B = R_+(A'_{(1)})B S_{\mathfrak g}(R_-(A'_{(2)})),
\end{equation}
where $A' \in \mathcal U(\mathfrak g_R)$ is the unique element, such that $A=F(A')$, see Corollary \ref{cor:dec}.
\end{prop}
\begin{proof}
Let $A', B' \in \mathcal U(\mathfrak g_R)$ such that $F(A')=A$ and $F(B')=B$. We use Sweedler's notation for the coproduct $\Delta_{\mathfrak g_R}(A')=A'_{(1)}\otimes A'_{(2)}$, and write $m_{\mathfrak g_R}(A'\otimes B'):=A' .\, B'$ for the product in $\mathcal U(\mathfrak g_R)$.
\allowdisplaybreaks{
\begin{align*}
A \ast B=F(A' .\, B')
&=m_{\mathfrak g}\circ (\operatorname{id}\otimes S_{\mathfrak g})\circ (R_+\otimes R_-)
\circ\Delta_{\mathfrak g_R}(A' .\, B')\\
&=m_{\mathfrak g}\circ(\operatorname{id}\otimes S_{\mathfrak g})\circ (R_+\otimes R_-)\big((A'_{(1)}\otimes A'_{(2)})\cdot (B'_{(1)}\otimes B'_{(2)})\big)\\
&=m_{\mathfrak g}\circ(\operatorname{id}\otimes S_{\mathfrak g})\circ (R_+\otimes R_-)\big((A'_{(1)}\cdot B'_{(1)})\otimes (A'_{(2)}\cdot B'_{(2)})\big)\\
&=m_{\mathfrak g}\circ(\operatorname{id}\otimes S_{\mathfrak g})\big(R_+(A'_{(1)})R_+(B'_{(1)})\otimes R_-(A'_{(2)})R_-(B'_{(2)})\big)\\
&\stackrel{(a)}{=}m_{\mathfrak g}\big(R_+(A'_{(1)}) R_+(B'_{(1)})\otimes S_{\mathfrak g}(R_-(B'_{(2)}))S_{\mathfrak g}(R_-(A'_{(2)}))\big)\\
&=R_+(A'_{(1)}) R_+(B'_{(1)})S_{\mathfrak g}(R_-(B'_{(2)}))S_{\mathfrak g}(R_-(A'_{(2)}))\\
&= R_+(A'_{(1)}) F(B')S_{\mathfrak g}(R_-(A'_{(2)}))\\
&=R_+(A'_{(1)}) BS_{\mathfrak g}(R_-(A'_{(2)})),
\end{align*}}
which proves the statement. In equality $(a)$ we used that the antipode is an algebra anti-homomorphism, i.e., $S_{\mathfrak g}(\xi\eta)=S_{\mathfrak g}(\eta)S_{\mathfrak g}(\xi)$.
\end{proof}
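As a simple illustration of \eqref{eq:pSTS2}, take $A=x\in\mathfrak g$, so that $A'=x$ and $\Delta_{\mathfrak g_R}(x)=x\otimes{\bf{1}}+{\bf{1}}\otimes x$. Then \eqref{eq:pSTS2} reduces to
$$
x \ast B = R_+(x)\,B - B\,R_-(x) = xB + [R_-(x),B],
$$
in accordance with the fact that, for $x\in\mathfrak g$, the lifted post-Lie product $x\triangleright(\,\cdot\,)$ acts on $\mathcal U(\mathfrak g)$ as the derivation $[R_-(x),\,\cdot\,]$, so that $x\ast B= xB + x\triangleright B$.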
The map \eqref{eq:sigma} was first defined in \cite{STS3} (see also \cite{RSTS}), where it was used to push forward to $\mathcal U(\mathfrak g)$ the associative product of $\mathcal U(\mathfrak g_R)$ using formula \eqref{eq:push}. From the equality between the maps $\phi$ and $F$, see Proposition \ref{prop:idenF}, it follows at once that the associative product $m_\ast$ defined in $\mathcal U(\mathfrak g)$ is the product given in \eqref{eq:postLieU}. Our approach provides an easily computable formula for this product, and does not require the knowledge of the inverse of the map $F$.
{\bf{Another proof of Theorem \ref{thm:KLM0}}}\quad We give an alternative proof of Theorem \ref{thm:KLM0}, stating that $\mathcal U_{\ast}(\mathfrak g):=(\mathcal U(\mathfrak g),m_\ast,u_{\mathfrak g},\Delta_{\mathfrak g},\epsilon_{\mathfrak g},S_{\ast})$ is a Hopf algebra. Recall that the original proof, which was based on \cite{OudomGuin}, has as a starting point the explicit form of the extension to $\mathcal U(\mathfrak g)$ of the post-Lie product, see \eqref{eq:postLieU}. In what follows, we will use instead the linear isomorphism $F$ between $\mathcal U(\overline{\mathfrak g})$ and $\mathcal U(\mathfrak g)$, provided in formula \eqref{eq:sigma}, when the post-Lie algebra is defined in terms of a classical $r$-matrix. Starting from this isomorphism, we will define the $\ast$-product on $\mathcal U(\mathfrak g)$ via formula \eqref{eq:push}, and we will then prove that this can be completed to a Hopf algebra structure. First, note that the unit, coproduct and counit are the same as those defining the usual Hopf algebra structure of $\mathcal U(\mathfrak g)$, which, to simplify notation, will be denoted by $u$, $\Delta$ and $\epsilon$, respectively. To prove the theorem, we first check that $(\mathcal U(\mathfrak g),m_\ast,u_{\mathfrak g},\Delta_{\mathfrak g},\epsilon_{\mathfrak g})$ is a bialgebra. To this end, note that from formula \eqref{eq:pSTS2} one easily deduces that $u_{\mathfrak g}$ is the unit of the algebra $(\mathcal U(\mathfrak g),m_\ast)$. Then, it suffices to prove that $\Delta$ and $\epsilon$ are algebra morphisms, i.e., that
$\epsilon\otimes\epsilon=\epsilon\circ m_\ast$, which is easily checked, and
\begin{equation}
\label{eq:copr}
\Delta\circ m_\ast=(m_\ast\otimes m_\ast)\circ (\operatorname{id}\otimes\tau\otimes\operatorname{id})
\circ(\Delta\otimes\Delta),
\end{equation}
where $\tau$ is the usual flip map. See \cite{Sweedler} for example. Let us show that \eqref{eq:copr} holds. Recall that
\[
m_{\ast}(A\otimes B)=F\big(m_{\mathfrak g_R}(F^{-1}(A)\otimes F^{-1}(B))\big),\qquad \forall A,B\in\mathcal U(\mathfrak g).
\]
For every $A\in\mathcal U(\mathfrak g)$, we will write $\Delta(A)=A_{(1)}\otimes A_{(2)}$. Then the righthand side of \eqref{eq:copr}, when applied to $A\otimes B$, becomes:
\allowdisplaybreaks{
\begin{eqnarray*}
\lefteqn{(m_\ast\otimes m_\ast)\circ (\operatorname{id}\otimes\tau\otimes\operatorname{id})\circ (\Delta\otimes\Delta)(A\otimes B)}\\
&=&(m_\ast\otimes m_\ast)\circ (\operatorname{id}\otimes\tau\otimes\operatorname{id})
\big((A_{(1)}\otimes A_{(2)})\otimes(B_{(1)}\otimes B_{(2)})\big)\\
&=&(m_\ast\otimes m_\ast) \big((A_{(1)}\otimes B_{(1)})\otimes(A_{(2)}\otimes B_{(2)})\big)\\
&=&m_{\ast}(A_{(1)}\otimes B_{(1)})\otimes m_{\ast}(A_{(2)}\otimes B_{(2)}).
\end{eqnarray*}}
On the other hand, computing $(\Delta\circ m_{\ast})(A\otimes B)$, and using that $F$ is a morphism of coalgebras, one gets
\allowdisplaybreaks{
\begin{eqnarray*}
\lefteqn{\Delta \big(m_\ast (A\otimes B)\big)}\\
&=& \Delta\Big(F\big(m_{\mathfrak g_R}(F^{-1}(A)\otimes F^{-1} (B))\big)\Big)\\
&=& (F\otimes F)\Big(\Delta\big(m_{\mathfrak g_R}(F^{-1}(A)\otimes F^{-1}(B))\big)\Big)\\
&=& (F\otimes F)\circ(m_{\mathfrak g_R}\otimes m_{\mathfrak g_R})\circ
(\operatorname{id}\otimes\tau\otimes\operatorname{id})\circ (\Delta\otimes\Delta)\big(F^{-1}(A)\otimes F^{-1} (B)\big)\\
&=& (F \otimes F)\circ(m_{\mathfrak g_R}\otimes m_{\mathfrak g_R})
\circ (\operatorname{id}\otimes\tau\otimes\operatorname{id})
\big(F^{-1}(A_{(1)})\otimes F^{-1}(A_{(2)})\otimes F^{-1}(B_{(1)})\otimes F^{-1}(B_{(2)})\big)\\
&=& F\big(m_{\mathfrak g_R}(F^{-1}(A_{(1)})\otimes F^{-1}(B_{(1)}))\big)\otimes
F\big( m_{\mathfrak g_R}(F^{-1}(A_{(2)})\otimes F^{-1}(B_{(2)}))\big)\\
&=& m_{\ast}(A_{(1)}\otimes B_{(1)})\otimes m_{\ast}(A_{(2)}\otimes B_{(2)}),
\end{eqnarray*}}
which proves the compatibility between $m_\ast$ and $\Delta$, and concludes the proof of the bialgebra statement. To complete the proof of the theorem, it suffices now to show that $S_{\ast}$ defined in \eqref{eq:antipodests} is the antipode, i.e., that it satisfies
$m_\ast\circ (\operatorname{id}\otimes S_\ast)\circ\Delta=u\circ\epsilon=m_\ast\circ (S_\ast\otimes\operatorname{id})\circ\Delta$. To this end it is enough to recall that $\Delta({\bf{1}})={\bf{1}}\otimes {\bf{1}}$ and $\Delta(x)=x\otimes {\bf{1}}+{\bf{1}}\otimes x$, for all $x\in\mathfrak g$. From these it follows that $S_\ast ({\bf{1}})={\bf{1}}$ and, respectively, that $S_\ast(x)=-x$, for all $x\in\mathfrak g$. Using a simple induction on the length of the monomials, it follows that $S_\ast$ as given in \eqref{eq:antipodests} satisfies the antipode identity above.
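For instance, in degree one, $m_\ast\circ (\operatorname{id}\otimes S_\ast)\circ\Delta(x)= x\ast{\bf{1}} + {\bf{1}}\ast S_\ast(x) = x - x = 0 = u\circ\epsilon(x)$ for all $x\in\mathfrak g$, which provides the base case of this induction.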
\section{Factorization theorems}
\label{sect:factorThm}
Next we consider Theorem \ref{thm:factorizationtheorem} in the context of the universal enveloping algebra of $\mathfrak g$. To this end, one needs first to trade $\mathcal U(\mathfrak g)$ for its completion $\hat{\mathcal U}(\mathfrak g)$. Also, we assume that the classical $r$-matrix in \eqref{eq:impc} satisfies $R \circ R=\operatorname{id}$, which is equivalent to $R_\pm \circ R_\pm = R_\pm$.
We observe that, since $R_\pm:\mathcal U(\mathfrak g_R) \rightarrow \mathcal U(\mathfrak g)$ are algebra morphisms, they map the augmentation ideal of $\mathcal U(\mathfrak g_R)$ to the augmentation ideal of $\mathcal U(\mathfrak g)$ and, for this reason, both these morphisms extend to morphisms $R_\pm:\hat{\mathcal U}(\mathfrak g_R) \rightarrow \hat{\mathcal U}(\mathfrak g)$. In particular, the map $F$ extends to an isomorphism of (complete) Hopf algebras $\hat{F}:\hat{\mathcal U}(\mathfrak g_R)\rightarrow \hat{\mathcal U}_{\ast}(\mathfrak g)$, defined by
\[
\hat{F}=\hat{m}_{\mathfrak g}\circ (\operatorname{id}
\hat\otimes \hat{S}_{\mathfrak g})\circ(R_+\hat\otimes R_-)\circ\hat\Delta_{\mathfrak g_R},
\]
where $\hat\Delta_{\mathfrak g_R}$ denotes the coproduct of $\hat{\mathcal U}(\mathfrak g_R)$, and $\hat{m}_{\mathfrak g}$, $\hat{S}_{\mathfrak g}$ denote the product and the antipode of $\hat{\mathcal U}(\mathfrak g)$, respectively. Let $\operatorname{exp}^{\cdot}(x)\in\mathcal G(\hat{\mathcal U}(\mathfrak g_R))$, $\operatorname{exp}^{\ast}(x)\in\mathcal G(\hat{\mathcal U}_{\ast}(\mathfrak g))$ and
$\exp(x) \in \mathcal G(\hat{\mathcal U}(\mathfrak g))$ denote the respective exponentials.
At the level of the universal enveloping algebra, the main result of Theorem \ref{thm:factorizationtheorem} can be rephrased as follows.
\begin{thm}\label{thm:factcircled}
Every element $\operatorname{exp}^{\ast}(x)\in \mathcal G(\hat{\mathcal U}_{\ast}(\mathfrak g))$ admits the unique factorization:
\begin{equation}
\label{eq:factinu2}
\operatorname{exp}^{\ast}(x)=\exp({x_+})\exp({-x_-}),
\end{equation}
where $x_\pm := R_\pm x$.
\end{thm}
\begin{proof}
Again, to simplify notation we write $m_{\mathfrak g_R}(x\otimes y)=x . y$, for all $x,y\in\mathfrak g_R$, so that for each $x \in \mathfrak g_R$, $x^{\cdot n}:=x .\, \cdots .\, x$. Then observe that, for each $n\geq 0$, one has
\[
\hat F (x^{\cdot n})=R_+(x)^n+\sum_{l=1}^{n-1}(-1)^{n-l}{n\choose l}R_+(x)^lR_-(x)^{n-l}+(-1)^nR_-(x)^n.
\]
Then, after reordering the terms, one finds $\hat{F} (\operatorname{exp}^{\cdot}(x))=\exp({x_+})\exp({-x_-}).$ On the other hand, since $\hat F:\hat{\mathcal U}(\mathfrak g_R)\rightarrow \hat{\mathcal U}_\ast(\mathfrak g)$ is an algebra morphism, one obtains for each $n\geq 0$, $\hat F(x^{\cdot n}) = \hat F(x)\ast\cdots\ast \hat F (x) = x^{\ast n},$ from which it follows that $\hat{F} (\operatorname{exp}^{\cdot}(x))=\operatorname{exp}^{\ast}(x),$ giving the result. Uniqueness follows from $R_\pm$ being idempotent.
\end{proof}
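For instance, at second order one finds
$$
\frac{1}{2}\,x \ast x = \frac{1}{2}x^2 + \frac{1}{2}[R_-(x),x]
= \frac{1}{2}x_+^2 - x_+x_- + \frac{1}{2}x_-^2,
$$
using $x \triangleright x=[R_-(x),x]$ and $x=x_+-x_-$, which is precisely the second order term of $\exp({x_+})\exp({-x_-})$.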
The observation in Theorem \ref{thm:FinverseChi} implies for group-like elements in $\mathcal G(\hat{\mathcal U}(\mathfrak g))$ and $\mathcal G(\hat{\mathcal U}_*(\mathfrak g))$ that $\exp(x) = \exp^*(\chi(x))$, from which we deduce
\begin{prop} \label{prop:fact}
Group-like elements $\operatorname{exp}(x) \in \mathcal G(\hat{\mathcal U}(\mathfrak g))$ factorize uniquely
\begin{equation}
\label{eq:factast}
\operatorname{exp}(x)=\exp({\chi_+(x)})\exp({-\chi_-(x)}),
\end{equation}
where $\chi_\pm(x):=R_\pm(\chi(x))$.
\end{prop}
\begin{proof}
This follows from Theorem \ref{thm:FinverseChi} and Theorem \ref{thm:factcircled} together with $R_-$ being idempotent.
\end{proof}
\begin{rmk} Looking at $\chi(x)$ in the context of $\hat{\mathcal U}(\mathfrak g)$, i.e., with the post-Lie product on $\mathfrak g$ defined in terms of the classical $r$-matrix, $x \triangleright y = [R_-(x),y]$, we find that $\chi_2(x) = -\frac{1}{2} [R_-(x),x]$ and
$$
\chi_3(x) = \frac{1}{4} [R_-([R_-(x),x]),x] + \frac{1}{12} ( [[R_-(x),x], x] + [R_-(x),[R_-(x),x]]).
$$
This should be compared with Equation (7) in \cite{EGM}, as well as with the results in \cite{EFLIMK}. In fact, comparing with \cite{EGM}, the uniqueness of \eqref{eq:factast} implies that the post-Lie Magnus expansion $\chi: \mathfrak g \to \mathfrak g$ satisfies the BCH-recursion
$$
\chi(x) = x + \overline{\operatorname{BCH}}\big(-R_- (\chi(x)),x\big),
$$
where
\[
\overline{\operatorname{BCH}}(x,y) = \operatorname{BCH}(x,y) - x - y = \frac{1}{2} [x,y] + \frac{1}{12} \big[x,[x,y]\big]
+ \frac{1}{12} \big[y,[y,x]\big] - \frac{1}{24} \big[y,[x,[x,y]]\big] + \cdots.
\]
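As a low order check of this recursion, keeping only the leading term $\chi(x)=x+\cdots$ inside the bracket yields $\chi_2(x)=\frac{1}{2}\big[-R_-(x),x\big]=-\frac{1}{2}[R_-(x),x]$, in agreement with the expression for $\chi_2(x)$ displayed above.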
\end{rmk}
\smallskip
\section{Conclusion}
\label{sect:conclusion}
The paper at hand explores in more detail the properties of post-Lie algebras by analyzing the corresponding universal enveloping algebras. A factorization theorem of group-like elements in (a suitable completion of) the universal enveloping algebra corresponding to a post-Lie algebra is derived. It results from the existence of a particular map, called the post-Lie Magnus expansion, on the (completion of the post-)Lie algebra. These results are then considered in the context of post-Lie algebras defined in terms of classical $r$-matrices. The link between the theory of post-Lie algebras and the results presented in references \cite{RSTS,STS3} is emphasised. More precisely, while in \cite{EFLMK} the existence of an isomorphism between two Hopf algebras naturally associated to every post-Lie algebra was proven by extending results from \cite{OudomGuin}, in the present paper it was shown that the linear isomorphism defined in \cite{RSTS,STS3} is indeed a natural example of such an isomorphism between Hopf algebras. This completes the Hopf algebraic picture in \cite{RSTS,STS3}.