To find out more about our current and recently completed research projects, please click here: Ongoing Research Recently Completed Research The department accomplishes its intended research activities using two well-equipped laboratories, namely the “Farm Machinery Laboratory” and the “Irrigation and Drainage Laboratory”. There is also an “Engineering Workshop” under the department where relevant research can be carried out. Farm Machinery Laboratory Irrigation and Drainage Laboratory Engineering Workshop
TITLE: let $f'(x)=\frac{x^2-f(x)^2}{x^2(f(x)^2+1)}$ prove that $\lim\limits_{x \to \infty}f(x) = \infty$. QUESTION [10 upvotes]: Let $f:(0,\infty)\to\mathbb R$ be a differentiable function such that $$f'(x)=\frac{x^2-f(x)^2}{x^2(f(x)^2+1)}$$ for all $x\gt1$. Prove that $\lim\limits_{x \to \infty}f(x) = \infty$. Here is what I thought: $$f'(x)=\frac{x^2-f(x)^2}{x^2(f(x)^2+1)}=\frac{1-\frac{f(x)^2}{x^2}}{f(x)^2+1}$$ If it can somehow be proved that $\frac{f(x)^2}{x^2}$ lies in the range $(-1,1)$, then it can be concluded that $f'(x)\gt0$, which means the function is strictly increasing, and for an increasing function $\lim\limits_{x \to \infty}f(x) = \infty$ is always true. But I am not sure how to do that, and if someone can come up with another creative solution that would be great! Also, please explain the thought process behind your solution. EDIT: My assertion that an increasing function tends to the limit $\infty$ is wrong in some cases, for example in the case of $f(x)=\arctan(x)$. But it can be explained since the derivative of $\arctan(x)$ is $\frac{1}{1+x^2}$ and $\lim\limits_{x \to \infty}\frac{1}{1+x^2}=0$, which means that if we can prove the other condition, namely $\lim\limits_{x \to \infty}f'(x)\ne0$, then it would be a complete proof. REPLY [7 votes]: Introduction I planned to give this question some really good high-level treatment. However, following my efforts to do so, I realized that I would probably be rising above the level of the OP a smidgen too much, thereby becoming inaccessible at that level. Hence, I have instead adapted the argument to this specific case. Note that I will still provide an answer for a more general case. I do not think that this solution is related to any of the solutions presented in the manual in the comments above, which are excellent in their own right. I will instead show why the approach here applies to a far larger class of equations. The big result Consider the differential equation $$ y' = \frac{x^2-y(x)^2}{x^2(y(x)^2+1)} \tag{ODE} $$ The denominator of the RHS of $(ODE)$ is positive for $x>1$. As for the numerator, it factors as $(x-y(x))(x+y(x))$. We claim that any solution $y$ of $(ODE)$ on $x>1$ must be eventually monotonic, i.e. there exists an $M>0$ such that $y'$ does not change sign for $x > M$. Note first that, by substitution, $y(x)$ cannot be a constant solution, so this case is ruled out. Then, the RHS of $(ODE)$ is continuous on the domain $\{x>1\} \times \mathbb R$, so that $y'$ is a continuous function. Hence, the question now reduces to showing the following: there does not exist a sequence of points $x_i$ such that $y'(x_i) = 0$ and $x_i \to \infty$. Observe that at points where $y'=0$, we also have $x^2 = y(x)^2$, i.e. either $x = y(x)$ or $x = -y(x)$. So, either $x_i = y(x_i)$ or $x_i = -y(x_i)$ happens infinitely often. We will show that this cannot happen given that $x_i \to \infty$. Consider, for example, the set of all points $x_{1i}$ such that $y'(x_{1i})=0$ because $x_{1i} = y(x_{1i})$. Suppose that this is an infinite sequence which goes to $+\infty$. Let $x_{1k}$ be such a point. We claim that $x_{1k}$ cannot be an extremum of $y$. To see this, consider small enough $\delta>0$ and $x_{1k}+\delta>x>x_{1k}$. Suppose, for contradiction, that $x_{1k}$ is a maximum. At such $x$, we must have $y'(x) \leq 0$ because of the maximum condition if $\delta$ is small enough. On the other hand, at such points, $y(x)^2 \leq x_{1k}^2 < x^2$ so that $y'(x)>0$, a contradiction. Thus, $x_{1k}$ cannot be a maximum. It similarly cannot be a minimum.
Thus, each $x_{1k}$ is an inflection point. Consider two consecutive points $x_{1k}, x_{1(k+1)}$ in this sequence. Since $x_{1k}$ is an inflection point, we know that $y'(x_{1k})=0$. However, for the function $f(x)=x$, the derivative at the same point equals $1$. Thus, for $\delta>0$ small enough and $x_{1k}+\delta>x>x_{1k}$, the point $(x,x)$ lies above the point $(x,y(x))$, because the function $f(x)=x$ increases faster than $y(x)$ at $x_{1k}$. However, at $x_{1(k+1)}$, the opposite reasoning takes over. Indeed, the left derivative of $y(x)$ at this point is $0$, while the left derivative of $f(x)=x$ is $1$. The left derivative of $y(x)$ is smaller, hence for some $\delta>0$ and $x_{1(k+1)}>x>x_{1(k+1)}-\delta$, the point $(x,x)$ lies below the point $(x,y(x))$. However, this is impossible: if $x_{1k},x_{1(k+1)}$ were chosen to be consecutive intersection points of $(x,y(x))$ and the line $x=y$, then the graph of one function must stay either below or above the other on the entire interval $[x_{1k},x_{1(k+1)}]$. This contradicts what we just showed. Finally, we conclude the following: the graph of $y(x)$ intersects the line $y=x$ at only finitely many places. An analogous argument yields that $y(x)$ intersects the line $y=-x$ at only finitely many places (the argument doesn't change by much, really). Consequently, the number of intersection points of the graph of $y(x)$ with the curve $x^2-y^2=0$ is finite. Therefore, by continuity, $y'(x)$ is either positive or negative for large enough $x$, as desired. Finishing the proof Now that we have eventual monotonicity, we know that $\lim_{x \to \infty} f(x)$ exists (possibly infinite), because the limit of a monotonic function exists as $x \to \infty$. What can this limit be? We will now take over from a solution from the manual. Call the limit $L$. Note that $L=-\infty$ cannot happen since in that case $y'<0$ and $x^2-y(x)^2>0$ for large enough $x$. However, if $L$ is finite then taking limits as $x\to \infty$ in $(ODE)$ yields $\lim_{x\to \infty} y'(x) = \frac{1}{1+L^2}>0$. This is a contradiction because it forces at least linear growth. Indeed, pick $X$ large enough such that $y'(x)>\frac{1}{2(1+L^2)}$ for $x>X$. Then, by the mean value theorem, $y(x)-y(X) > \frac{x-X}{2(1+L^2)}$ for $x>X$, which clearly implies $y(x) \to \infty$ as $x \to \infty$, a contradiction. Therefore, $L = +\infty$ and we are done. The generalization It is possible to prove the following rather amazing result in general. Theorem [Hardy, Bellman]: Let $P(r,s),Q(r,s)$ be polynomials such that for large enough $r,s$, $P(r,s)Q(r,s) \neq 0$. Then, consider the ODE $$ \frac{dy}{dx} = \frac{P(x,y)}{Q(x,y)} $$ For some $x_0$ and $x>x_0$, the solution $y(x)$ of this ODE has the following property: any rational function of the form $T(x) = \frac{H(x,y(x))}{L(x,y(x))}$ (where $H,L$ are some other polynomials that may be arbitrary) is eventually strictly monotone in $x$, unless $L(x,y(x))=0$ identically or $H(x,y(x))$ is constant in $x$. For example, in the present case this would entail something like $\frac{y(x)+2x^4}{y(x)^3+x^2y(x)}$ being eventually monotonic, where $y(x)$ is a solution to our initial ODE. Something that might appear very difficult to believe, but is true. We have even more that is true: an asymptotic analysis of such solutions. Theorem [Hardy]: Any solution of $$ y'(x) = \frac{P(x,y)}{Q(x,y)} $$ where $P,Q$ are as in the previous theorem, has the following asymptotics.
It is monotonic together with all its derivatives, and either (1) $y(x) \sim ax^be^{P(x)}$ for some polynomial $P(x)$ and reals $a,b$, or (2) $y(x) \sim ax^b(\log x)^{\frac 1c}$ for an integer $c$ and reals $a,b$, as $x \to \infty$. In the case above, one can use the asymptotic analysis of Hardy to prove $$ \lim_{x \to \infty} \frac{y(x)}{(3x)^{\frac 13}} = 1, $$ thereby showing that the asymptotics are of the form $(1)$ with $P(x)=0$. The proofs of these two statements are fairly involved, but not very difficult. Indeed, the idea is the following: one doesn't have a general representation of the zeros of $P(x,y)$, but one can obtain finitely many "branches" of zeros that go to infinity. It is then shown that the graph of $y$ intersects each of these branches at only finitely many points (and the arguments are frankly very, very similar), so that $y'$ is eventually of one sign and $y$ is eventually monotonic. Then, one can notice that $\frac{H(x,y(x))}{L(x,y(x))}$ also solves a similar type of differential equation, thereby allowing the usage of the same argument. One may use the theory of power series to "estimate" the growth rate of the branches outlined before, leading to the estimates stated in the theorem. The actual mathematics is a little more involved and cannot be presented rigorously within this outline. It may yet be satisfying to know that this extremely general class of equations admits so much regularity. Links Hardy, G. H. (1912), Some Results Concerning the Behaviour at Infinity of a Real and Continuous Solution of an Algebraic Differential Equation of the First Order. Proceedings of the London Mathematical Society, s2-10(1), 451–468 (doi:10.1112/plms/s2-10.1.451). Bellman, R., Stability Theory of Differential Equations, Chapter 3. Many more results can be found there than just the one cited here. This reference contains the proof of the specific result which is problem B-5 of the competition. It uses techniques that are very specific to the problem, unlike Hardy's solution, which is much more malleable.
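For readers who want to see the claimed asymptotics in action, here is a minimal numerical sketch (not part of the original answer; the initial condition $y(1)=0$ is a hypothetical choice, since the statement applies to every solution on $x>1$). It integrates $(ODE)$ with SciPy and prints the ratio $y(x)/(3x)^{1/3}$, which should drift toward $1$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Right-hand side of (ODE): y' = (x^2 - y^2) / (x^2 (y^2 + 1)).
def rhs(x, y):
    return (x**2 - y**2) / (x**2 * (y**2 + 1.0))

xs = np.logspace(0, 6, 7)                         # x = 1, 10, ..., 10^6
sol = solve_ivp(rhs, (1.0, 1e6), [0.0], t_eval=xs, rtol=1e-9, atol=1e-12)

# The ratio y / (3x)^(1/3) should approach 1 as x grows.
for x, y in zip(sol.t, sol.y[0]):
    print(f"x = {x:>9.0f}   y/(3x)^(1/3) = {y / (3 * x) ** (1 / 3):.4f}")
```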
{"}} Rooftop Cockluck 18/Julie's Birthday! This was one of the best cocklucks yet to date. Besides having a great time as usual, it was also Julie's 60th birthday! Everyone had a wonderful cocktail but in the end, Jessica took home the trophy as the winner of the night with the best cocktail. I started by making my cocktail, Dreams of the Philippines Dreams of the Philippines .75 oz infanta lambanog .75 oz pineapple .75 oz lime juice .5 Ancho Reyes liquor .5 Jalapeño agave .5 pineapple infused Mezcal 3 Drops of acid phosphate 3 drops Molé bitters Next up was Adrian Adrian: The Dawn Before Time 2.0 oz Brugal Anejo Rum 0.5 oz Cynar 0.5 oz Lime Juice 0.5 oz Ginger Simple Syrup 3 drops Cinnamon Tincture Shake and serve over ice garnished into a tiki mug with 1/3 clementine orange and a mint sprig. After that, the sun started to go down so we decided to continue the party in Julie's loft. Jessica's cocktail was next and immediately everyone fell in love with the gin and homemade ginger syrup Adrian and Jessica had made. Jessica: Mumbai via London 2.0 oz (1/4 cup) gin (I used Bombay Dry 1) 1.0 oz (2 tablespoons) spiced ginger syrup (above) 1/2 ounce (1 tablespoon) strained fresh lime juice ice sparkling water or club soda mint sprig In a shaker or jar, stir together the vodka, ginger syrup, and lime juice for 30 seconds. Strain into a highball glass filled with ice, top off with sparkling water, and top with a mint sprig. Taste, adding more lemon or syrup if you feel the drink needs it. Enjoy immediately. . After that was Scott with his Soul Shaker Scott: Soul Shaker 1.0 oz Tequila 1.0 oz Aperol 0.5 oz Pineapple Juice 0.25 oz Orgeat 0.25 oz Green Chartreuse Egg White Reverse shake, garnish with a hibiscus flower in syrup. Next up with Michelle with Sage Advice. Michelle had a little bit too much fun smacking the Sage. Michelle: Sage Advice 1.5 oz Gin 1.0 oz China China Byrhh 0.5 oz Lemon Juice 0.25 oz Sage Simple Syrup Bittertruth Lemon Bitters Combine ingredients, shake, strain, and pour into a coupe. Garnish with a sage leaf. Next up was Holly. Holly: Fig and Thistle 1.5 oz Excellia Blanco Tequila 1.0 oz Cardamaro 0.5 oz Lemon Juice 1 tsp fig preserves, such as Bonne Maman Add tequila, Cardamaro, lemon juice, and fig preserves to a cocktail shaker and fill with ice. Shake until well chilled, about 20 seconds. Strain into an ice-filled rocks glass. After Holly's wonderful fig cocktail, it was time for the newest member of cockluck, Cory. Cory: Walcoffee 1.5 oz Tequila Reposado 0.5 oz Mezcal 0.25 oz Amaro 1 tsp Agave 2 dashes Angostura Bitters Cold-brew Ice Cubes Shake, strain into a rocks glass, then add cold-brew ice cubes. Last up was Laura who made a wonderful bourbon cocktail. Laura: The Wild French Billionaire 2.0 oz Bourbon 1.0 oz Lemon Juice 0.5 oz Simple Syrup 0.5 oz Grenadine 0.25 oz Absinthe Lemon Shake with ice and strain into a coupe, garnish with a lemon wheel Oh and here's a pic of the lovely birthday girl, Julie and her adorable dog, Roxy Happy Birthday Julie!!!
Posts by author Yvette Porter Mar 28, 2018 6 Most Common Dental Issues Among Seniors and How Carers Can Help Aging presents many dental health complications. It’s not only because of the fact that as people get older, they start observing the effects of the lifestyle they led when they were younger; the body really does slow down over time. In fact, the body’s natural ability to produce certain hormones as well as regenerate cells diminishes at the age of 30. This is why it is crucial to uphold a healthier lifestyle and to invest in preventative dentistry services when... Yvette Porter Dentist At Apple Dental.
TITLE: The asymptotics of a recursion QUESTION [2 upvotes]: I am trying to assess the asymptotic behavior for large $n$ of the sequence $(a_n)_{n=1}^\infty$ defined by the recursion \begin{align} a_{n} &= (n+1) a_{n-1} - n a_{n-2}-1, \ \forall n\ge3 \\ a_2 &= 2a_1-3. \end{align} This recursion originates from this question on stats.stackexchange.com. Define the generating function $f(x):=\sum_{n=0}^\infty a_nx^n$. Then $g(x):=x^2f(x)$ satisfies the ODE $$g'-\Big(\frac1{x^2}+\frac1x+\frac1{1-x}\Big)g=\Big(\frac x{1-x}\Big)^2-\frac{a_0}{1-x}+(2a_0-a_1)\frac x{1-x}$$ Multiply both sides by $e^h=\frac{1-x}xe^{\frac1x}$, where $h(x):=\frac1x+\ln\frac{1-x}x$, and get $$\frac{d}{dx}\Big(\frac{1-x}xe^{\frac1x}g\Big)=e^{\frac1x}\Big(\frac x{1-x}-\frac{a_0}x+(2a_0-a_1)\Big).$$ Now $e^{\frac1x}$ seems to prevent me from solving the ODE in a "closed" form. I read somewhere that complex analysis may help to derive the large $n$ asymptotics of $a_n$. How does one proceed? REPLY [0 votes]: Define $b_n:=a_n-a_{n-1}$. The original recursion turns into $$b_n=nb_{n-1}-1$$ which is solvable with a generating function.
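To make the hint concrete numerically (this is an added sketch, not part of the original reply; the question leaves $a_1$ free, so the choice $a_1=1$ below is hypothetical): unrolling $b_n=nb_{n-1}-1$ gives $b_n/n! = b_2/2! - \sum_{k=3}^{n} 1/k!$, so generically $a_n$ grows like a constant multiple of $n!$. The script below iterates the original recursion and checks that $b_n/n!$ converges to that constant.

```python
from math import e, factorial

a1 = 1                       # hypothetical initial value; only a_2 = 2*a_1 - 3 is imposed
a = {1: a1, 2: 2 * a1 - 3}
for n in range(3, 31):
    a[n] = (n + 1) * a[n - 1] - n * a[n - 2] - 1

# Closed form of the limit: b_2/2! - sum_{k>=3} 1/k! = (a_1 - 3)/2 - (e - 5/2).
limit = (a1 - 3) / 2 - (e - 5 / 2)
for n in (5, 10, 20, 30):
    b_n = a[n] - a[n - 1]
    print(n, b_n / factorial(n), limit)
```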
In October’s slanted sun, when temperatures drop almost as fast as the leaves, life goes on oblivious of the season. The steel mills are blooming. Listen to the litany of lost jobs, Watch the parking lot grow in its emptiness. The sound of silence roars but that does not stop the blossoming. Thimbleweed and aster, Blackberry and blazing star, the Creator reclaims His garden. Yellowroot and one last buttercup thrive where steel could not. Blast furnaces sprout nests. Bobwhite and mourning dove, kestrel and kildeer, an unlikely congregation in the apse of empty ovens. Coke plant and rolling mill, deprived of industrial dreams have found a higher calling. Machinery rusts with creaks and moans, Time clocks have stopped. Nature’s way has overcome, undaunted by the shenanigans of man. 7 thoughts on “Undaunted” Hopefully, nature will continue to overcome the horros of man. I’m with you on that!! This is a striking message… I hope humans will come to their senses at some point. But nature will continue on! Charlie, I’m just hoping Nature doesn’t lose patience completely. What a mess that would be!! So true!! It is hard to imagine a place like a steel mill being left open where animals could enter, even when otherwise abandoned, though I suppose it could happen. It kind of strikes me as a sad situation; the young being raised in such an alien environment. Michael, The mill is right on the Ohio River, and it is a virtual menagerie of wildlife…whitetail deer, raccoons, posssum (not to mention lesser rodents,) birds of all kinds, etc. Even when it was operating, there were so many raccoon that they frequently went into the substation to get warm in winter and stuck their nose in the wrong place, resulting in their death and a massive power outage. Parts of the mill are running again, and many of the older, outdated sections are being demolished, but it is a long drawn out process that often reverses itself.
TITLE: Show that a holomorphic function $f$ with $f' \neq 0$ is conformal QUESTION [1 upvotes]: Show that a holomorphic function $f$ with $f' \neq 0$ is conformal. I've come across this problem but I don't know how to solve it. I know that a holomorphic function is complex differentiable, meaning that $\lim_{h\to0} \frac{f(z_0 + h) - f(z_0)}{h}$ exists at each $z_0$ and is by definition equal to $f'(z_0)$. But how would knowing that $f'(z_0) \neq 0$ help me in showing that the function is conformal (i.e. preserves angles)? REPLY [1 votes]: Let $U$ be the open set on which $f$ is defined. You want to show that for every $p \in U$ there exists $c \in \mathbb R_{>0}$ such that for all $v, w \in T_pU \cong \mathbb R^2$ we have $\langle (df)_p v, (df)_p w \rangle = c \cdot \langle v, w \rangle$. That is, that $(df)_p \in \mathbb C$, seen as a real $2 \times 2$ matrix, lies in the identity component of the general orthogonal group: $$ (df)_p \in \operatorname{GO}_2(\mathbb R)^\circ = \left\{ A \in M_2(\mathbb R) : AA^T \in \mathbb R_{>0} \cdot I_2 \right\} \,.$$ But by identifying $\mathbb C$ with a subset of $M_2(\mathbb R)$, this group is precisely $\mathbb C^\times$. More directly: Let's keep viewing $T_pU$ as $\mathbb C$ instead of $\mathbb R^2$. The inner product on $T_pU$ is then given by $$\langle v, w \rangle = \operatorname{Re}(\overline v \cdot w) \,.$$ It is now a triviality that $f$ is conformal: $$\begin{align*} \langle (df)_p v, (df)_p w \rangle &= \langle f'(p) v, f'(p) w \rangle \\ &= \operatorname{Re}(\overline{v f'(p)} \cdot f'(p) w) \\ &= \operatorname{Re}(|f'(p)|^2 \cdot \overline v w) \\ &= |f'(p)|^2 \cdot \langle v, w \rangle \,. \end{align*}$$
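As a quick numerical illustration of the final identity (an added sketch; the function $f(z)=z^3+2z$, the point $p$, and the tangent vectors below are hypothetical choices, not part of the original question):

```python
# Check <f'(p) v, f'(p) w> = |f'(p)|^2 <v, w> for a sample holomorphic f with f'(p) != 0.
f_prime = lambda z: 3 * z**2 + 2                 # derivative of f(z) = z^3 + 2z

p, v, w = 1 + 1j, 2 - 1j, 0.5 + 3j
inner = lambda a, b: (a.conjugate() * b).real    # <a, b> = Re(conj(a) * b)

lhs = inner(f_prime(p) * v, f_prime(p) * w)
rhs = abs(f_prime(p)) ** 2 * inner(v, w)
print(lhs, rhs)   # equal up to rounding, so angles (and their orientation) are preserved
```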
I want to give a shout out to the Somerville police department! Yesterday I was in town near Union Sq and my dog was spooked by a man who was only trying to say ‘hi’. He was outside in the yard and usually is fine but he was abused by former owners and those scars are still there and his ‘flight’ instinct kicked in. He ran and I went in that direction and people who saw him said he went past and continued out onto the main road. The police had an APB out on him and cops in cruisers were looking for him. He is very skittish and runs when approached. I feared he would be hit by a car. He was gone for several hours and it was going to get dark soon. The police called me every time that there was a siting and it was extremely helpful; he was covering a lot of ground. The last call got me in the area and we located the poor guy who was exhausted, thirsty and limping from blistered paws. I really feel that had we not found him before dark that we may have never seen him again alive. Thank you again Somerville Police as he is truly one of the family and we greatly appreciate your concern. – Michael Cioffi Thank you for sharing! it is always great to hear stories like yours! Nice to see this sharing. Our police and firefighters do a lot of good things for the people in our city and they should get the credit for it. they have been caring and compassionate when I needed their service. Kudos to the Cops! Let’s keep this positive feedback coming! With all the bad news in the world, it was nice to hear a story with a happy ending. I congratulate the Somerville Police Department for their efforts,compassion, and service. We need to create a real police force for our communities, one that will be able to help us. One that does not carry any weapons and does not kidnap and murder our citizens. Crime will go down, and policemen will once again be our friends. I haven’t heard much about the Somerville police kidnapping and murdering citizens. Can you elaborate please? Is Christopher Booth related in any way to John Wilkes Booth? Christopher, stop drinking the kool aid!
Lego Mindstorms NXT robots are capable of displaying images, text and/or feedback on their screens. To get them to do this you have to learn how to program display blocks in the NXT programming environment. Program display blocks in the Lego Mindstorms system Click through to watch this video on ortop.org
TITLE: Application of the Artin-Schreier Theorem QUESTION [11 upvotes]: This is exercise $6.29$ out of Lang's book: Let $K$ be a cyclic extension of a field $F$, with Galois group $G$ generated by $\sigma$. Assume that the characteristic is $p$, and that $[K:F]=p^{m-1}$ for some $m\geq2$. Let $\beta$ be an element of $K$ such that Tr$^K_F(\beta)=1$. (a) Show that there exists an element $\alpha\in K$ such that $\sigma(\alpha)-\alpha=\beta^p-\beta$. (b) Prove that the polynomial $x^p-x-\alpha$ is irreducible in $K[x]$. (c) If $\theta$ is a root of this polynomial, prove that $F(\theta)$ is a Galois, cyclic extension of degree $p^m$ of $F$, and that its Galois group is generated by an extension $\sigma^*$ of $\sigma$ such that $\sigma^*(\theta)=\theta+\beta$. I have been able to do part (a) using Hilbert's Theorem $90$ (Additive Form), since $$\text{Tr}(\beta)=1=1^p=(\text{Tr}(\beta))^p=\text{Tr}(\beta^p).$$ I'm at a loss for the second one, even though it seems to scream the Artin-Schreier theorem. For the third part, I certainly see that it is an extension of degree $p^m$, although I'm not sure I can get much farther than that. How can I do the last two parts? REPLY [5 votes]: For (b), suppose that $x^p-x-\alpha$ is reducible in $K[x]$. Then, by the Artin-Schreier theorem, it has all of its roots in $K$. Let $\theta$ be such a root. Then $\theta^p-\theta-\alpha=0$ implies $\sigma(\theta)^p-\sigma(\theta)-\sigma(\alpha)=0$. This gives us $$[\sigma(\theta)-\theta]^p-[\sigma(\theta)-\theta]-[\sigma(\alpha)-\alpha]=[\sigma(\theta)^p-\sigma(\theta)-\sigma(\alpha)]-[\theta^p-\theta-\alpha]=0,$$thus $\sigma(\theta)-\theta$ is a root of $x^p-x-(\sigma(\alpha)-\alpha)$. From part (a), we know $\beta$ is a root of $x^p-x-(\sigma(\alpha)-\alpha)$. It's easy to show that $\beta+i$ is also a root for any $1\leq i\leq p-1$, so we must have $\sigma(\theta)-\theta=\beta+i$ for some $i$. Now, since $\theta\in K$, Tr$(\sigma(\theta)-\theta)=0$, and $i\in F$ implies $Tr(i)=0$. Now we have $$0=Tr(\sigma(\theta)-\theta)=Tr(\beta+i)=Tr(\beta)+Tr(i)=Tr(\beta),$$ a contradiction since Tr$(\beta)=1$. This means $x^p-x-\alpha$ is irreducible. For (c), first note that $\sigma^*(\theta)=\theta+\beta$ implies $$(\sigma^*)^n(\theta)=\theta+\beta+\sigma(\beta)+\cdots+\sigma^{n-1}(\beta).$$ That means $$(\sigma^*)^{p^{m-1}}(\theta)=\theta+Tr(\beta)=\theta+1.$$ This implies $(\sigma^*)^{p^m}(\theta)=\theta$, hence the order of $\sigma^*$ is at most $p^m$. Then $H=\langle\sigma^*\rangle$ is a finite group of automorphisms of $K(\theta)$, so by Artin's Theorem, $K(\theta)$ is Galois over the fixed field of $H$, $K(\theta)^H$, and the Galois group is $H$. Now, since $F$ is fixed by $\sigma$, it is fixed by $\sigma^*$, which implies $F\subseteq K(\theta)^H$. Since degrees multiply in towers, we have $$p^m=p\cdot p^{m-1}=[K(\theta):K][K:F]=[K(\theta):F]=[K(\theta):K(\theta)^H][K(\theta)^H:F].$$ This implies $[K(\theta):K(\theta)^H]=p^d\leq p^m$ for some $d$, so the order of $\sigma^*$ is $p^d$. Suppose it is strictly less than $p^m$. Then the order of $\sigma^*$ divides $p^{m-1}$, so $(\sigma^*)^{p^{m-1}}\equiv id$, but we already know $(\sigma^*)^{p^{m-1}}(\theta)=\theta+1$, a contradiction, hence the order must be $p^m$, which gives us $[K(\theta):K(\theta)^H]=p^m$, implying $K(\theta)^H=F$. Thus $K(\theta)$ is Galois and cyclic over $F$, and this implies that any intermediate extension is Galois and cyclic, so $F(\theta)/F$ is Galois and cyclic, as desired.
Also, $\langle\sigma^*\big|_{F(\theta)}\rangle\subseteq Gal(F(\theta)/F)$ and $|Gal(F(\theta)/F)|=p^d\leq p^m$. Using the same argument as above in fact gives equality, so we have shown that $F(\theta)$ is a Galois, cyclic extension of degree $p^m$.
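For intuition, here is the simplest instance of the Artin-Schreier mechanism used in (b) and (c), added purely for illustration (the exercise itself assumes $m\geq 2$). Over $F=\mathbb F_p$ every element satisfies $b^p-b=0$, so for any $a\neq 0$ the polynomial $x^p-x-a$ has no root in $F$ and is therefore irreducible (by the Artin-Schreier theorem it either splits completely or is irreducible), its roots being $\theta,\theta+1,\dots,\theta+(p-1)$. Moreover $$\theta^p-\theta-a=0\implies \mathrm{Frob}(\theta)=\theta^p=\theta+a,$$ so $F(\theta)/F$ is cyclic of degree $p$ with Galois group generated by $\theta\mapsto\theta+a$, mirroring the relation $\sigma^*(\theta)=\theta+\beta$ in part (c).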
September 23, 2018 from 4pm to 6pm – Sticks and Stones Come learn to work with the spirits within your Ancestral lineage. This two hour class will provide an overview of starting the conversation/connection with your own ancestors, so that you c… Organized by Jhada Addams | Type: workshop
From the Intern Desk: Daniela This blog post is the first in a series of posts written by the interns of WFP. Check back often to see more posts from the intern desk. At my university, we start lining up summer internships in January. By the end of February, you better have your internship banked, because most grant applications are due by March 1. And as a would-be human rights activist trying to find gainful employment in non-profit organizations (most of which could not afford to pay me), I would definitely need a grant. Imagine my surprise, when, at the end of April, as classes were finishing up, I got an e-mail from the fine people of the Communications and Public Policy Strategy Division of the United Nations World Food Programme. They had read my resume. Was I interested in an internship? The decision was a difficult one. I had already committed to working in South Africa all of June and July. I had left 30 glorious days in August for pure vacation. They were going to be a real break from the hectic scramble that is my college life. Did I really need to fill that precious time with more work? Plus, living in Rome (where WFP is headquartered) didn’t exactly seem cheap. Fast forward to today, my eleventh day on the job, and I can’t believe I almost turned this internship down. I don’t fetch coffee. I don’t make copies. I work. As part of the Youth Outreach Team, I draw up classroom activities so that kids around the globe can learn the importance of fighting hunger. I draw up website mock-ups so that our work can be more accessible to all ages. I suggest starting an intern blog and a few days later, here I am, sharing my experiences with you in the first of many blog posts by WFP interns. Every day, I can feel myself building on this global campaign to fight hunger. Just last week, I mentioned creating a podcast and now I’m told there may be something in the works. That’s satisfying. Being heard is satisfying. That’s why I’m working on giving a voice to the 1 billion hungry people in the world. I’ll let you know how it goes. Daniela Nogueira is an intern in the Communications and Public Policy Strategy Division and likes lemon gelato and fighting hunger.
Other National Efforts Testimony from leaders during the President’s Commission on Combating Drug Addiction and the Opioid Crisis meeting in June communicated important recommendations relevant to the opioid problem, given that death rates have quadrupled since 1999 and other serious consequences have followed (e.g., family disruption; long-term health issues; loss of productivity). The Commission emphasized: - Delaying the onset of any type of substance use is critical to preventing the long-term consequence of addiction. Studies find that the younger a person is when they first initiate substance use, the more likely they are to develop a substance use disorder over time. To achieve this, collaborate with communities, schools, and justice systems to implement evidence-based prevention (including screening, brief intervention, and referral to treatment), develop or improve policies and practices for a continuum of services and supports, and eliminate stigma associated with substance use and help-seeking. - Improving the implementation of evidence-based prevention and treatment through workforce development, financing, and accountability. Repeatedly, the Commission said the science is solid, yet implementation is low. In other words, there are studies and experts that have demonstrated and communicated what works, but providers and communities don’t actually implement these strategies, services and supports, or treatments. Translating science into practice requires leadership, resources, training, and technical support. - Using strategies such as systematic screening, assessment, and access to evidence-based care to reduce the number of people with substance use concerns in the justice system. The Commission is recommending rapid responses with scalable solutions to address this epidemic in communities across the country. What was missing from the discussion was the importance of being mindful of culture in any type of response. We know that strategies to engage and respond to girls differ from those for boys, and that they also differ across racial and ethnic backgrounds, gender identities and sexual orientations, and places of residence (e.g., urban vs. rural). Considering cultural context is imperative to the successful implementation and adoption of any type of services and supports. Medication Assisted, Counseling, and Behavioral Treatments Medication-Assisted Treatment (MAT) combines medications that help people reduce the physiological cravings for alcohol, tobacco, and other drugs with behavioral therapies that address the thoughts, behaviors, and other factors associated with substance use disorders (e.g., mental health symptoms and trauma; disparities). Medications found effective for treating opioid addiction are methadone, naltrexone, and buprenorphine. Methadone is approved for use for people 18 years or older. Few studies have examined the effectiveness of naltrexone for adolescents, but experts suggest it is promising. Buprenorphine has been approved by the Food and Drug Administration for use with young people 16 years and older and has research to support its efficacy when combined with behavioral treatments for adolescents. We must not forget the seminal report by the former United States Surgeon General on alcohol, drug use, and health. It provides background and recommendations for prevention, treatment, recovery support, and health care. A primary theme of this report highlights treating substance use concerns/disorders as chronic health conditions rather than moral failings.
Research shows medications, counseling, and behavioral therapies effectively treat substance use disorders whereas punitive approaches such as incarceration do not. Applications for Reclaiming Futures The Reclaiming Futures approach has been shown effective for getting young people the services and supports they need. As such, Reclaiming Futures sites are leaders in the identifying and connecting young people to resources in their communities. So, continue what you are doing and constantly find ways to improve quality and use the latest research to guide decision-making. Here are some other ways sites can respond specifically for girls of diverse racial and ethnic backgrounds: - Identify gender-specific and culturally responsive approaches used to screen, assess, and coordinate services in your communities. If there are gaps, consider ways to improve the community directed response for adolescent girls. Key considerations include mental health services including trauma, family supports, sexual health, positive human development opportunities and activities (e.g., family engagement; education; civic involvement and leadership; recreation; recovery support). - Check out your local resources for MAT. What are the requirements? How do families pay for services? Are there age limitations? Where are they located and how accessible are they? Are services gender-specific and culturally responsive for the community for which they serve? - Meet with local leaders to determine what, if any, policies or procedures are being implemented to address the opioid crisis and what is missing? What is explicitly inclusive for adolescent girls? All across the country, families and communities are losing loved ones and becoming activists to prevent other families from experiencing this pain and suffering. Prevention is collaborating to activate coordinated community responses. Law enforcement, medical first responders, and juvenile justice are working to implement interventions. Behavioral health is working in creative ways to offer MAT, counseling, and other treatment. Responding to substance use and misuse for young people requires increased collaboration, coordination, and messaging about its tragedies, victories, and everything in between. ------ Authors Note: Discussion of gender in this blog post assumes biological sex at birth since the documents reviewed did not specify. For consistency, this was maintained throughout the blog post. Reclaiming Futures is cognizant of and respectful that gender is not a dichotomous construct, but a continuum of identities. Topics: Adolescent Substance Abuse Treatment, annual report, Assessment, Culture, Equity, Evidence-Based Practices, Family Involvement, Gender-Specific, Girls, juvenile courts, Juvenile Treatment Drug Court, opiate, Reclaiming Futures, recovery, research, SBIRT Updated: March 21 2018
January 20, 2015 By Frank Parlato In this, the first part of our series on Medical Marijuana in New York State and how it will affect both the state and our local Western New York community, we plan to give our readers a simple primer on the topic. The Compassionate Care Act Is Passed At a news conference in New York City, on Monday, September 8th, 2014, Gov. Andrew Cuomo signed the Compassionate Care Act, which allows doctors to prescribe marijuana in a non-smokable form to patients with serious ailments that are recognized by the state on a predefined but flexible list of conditions. Cuomo said it was difficult to develop and pass the bill because it needed to embrace increased medical acceptance of marijuana while rejecting situations and conditions that state legislators said could have "good intent and bad results." "There is no doubt that medical marijuana can help people," Cuomo said. "We are here to help people. And if there is a medical advancement, then we want to make sure that we're bringing it to New Yorkers." Senate Co-Leader Jeffrey D. Klein said the "patient-centric program" will provide relief to thousands of people and will be "one of the safest, most tightly regulated medical marijuana programs in the country." From all accounts, New York will be the most tightly regulated medical marijuana program without a close second. Presently there are 23 states where medical marijuana is legal. It is not legal in bordering Ohio or Pennsylvania. It is legal in Ontario, Canada. New York’s law, unlike medical marijuana laws in other states, requires companies to hire union workers for their dispensaries and production facilities. Patients Eligible for a Marijuana Prescription Under the Compassionate Care Act, a New Yorker is eligible to use medical marijuana if they have been diagnosed with a "specific severe, debilitating and life-threatening condition that is accompanied by an associated or complicating condition." New Yorkers with cancer, HIV infection or AIDS, amyotrophic lateral sclerosis (ALS), Parkinson's disease, multiple sclerosis, spinal cord injury with spasticity, epilepsy, inflammatory bowel disease, neuropathy, and Huntington's disease can, if their doctor approves, get a prescription for medical marijuana once the law goes into effect. Additional medical conditions for which sufferers may obtain a marijuana prescription are defined as "associated or complicating conditions". In other words, marijuana may not help in the treatment of the disease directly but rather will help treat both complications arising out of the disease, and side-effects that are a direct result of treatments. Among these "associated or complicating conditions" categorized under the Compassionate Care Act are cachexia or wasting syndrome, severe or chronic pain, severe nausea, seizures, and severe or persistent muscle spasms. Some examples: nausea is a widely known complication associated with cancer treatments such as chemotherapy. Seizures, a complication also known as a refractory symptom, can manifest with a diagnosis of epilepsy. A refractory symptom, as defined by the Stanford School of Medicine, "is one that cannot be controlled despite aggressive efforts to identify a tolerable therapy that does not affect a person's consciousness." The NYS Department of Health states on its website, "In many cases, it is expected, medicinal marijuana would be used in addition to the patient's current medication regimen to treat refractory symptoms."
At present, unless a New Yorker has one of these above listed conditions, it is not anticipated that he or she will be able to obtain a prescription for marijuana. However, Gov Cuomo has indicated that, as new studies emerge that indicate medical marijuana may be useful in treatment of other diseases, he will instruct the NYS Department of Health (DOH) to consider initiating changes to the list of qualified diseases. "The present list of illnesses that qualify for prescriptions for medical marijuana were included because there is evidence, including in existing peer-reviewed medical literature, that marijuana may be effective in alleviating the disease or the symptoms accompanying these conditions." Observers believe that qualified New Yorkers will not have access to marijuana prescriptions before 2016. First the governor will need to approve a short list of five state-wide growers. Then those growers will need to grow, harvest, and process their medical marijuana products into prescription form. Can't Smoke it Under the law, smoking marijuana will not be permitted. Marijuana will be delivered in tinctures, pills, vapor cartridges and possibly edibles - although the latter has not yet been approved. There is concern that should marijuana be delivered in edibles - say a candy bar - it may become too attractive to children. New York has argued that the negative health consequences of smoking marijuana are well established, hence the ban on medical marijuana being smoked by patients. This differentiates New York from most other states which permit patients to purchase the plant itself and smoke it, or fulfill their prescriptions with smokable marijuana. As the National Institute of Drug Abuse notes, "The smoke of marijuana, like that of tobacco, consists of a toxic mixture of gases and particulates, many of which are known to be harmful to the lungs." Although they have not done so already, the. No Fancy Names for the Medicine The crafters of the law have already stated that the products will not carry fanciful names likes those marketed in other states. Names like Grape Stomper, Train Wreck, Golden Goat, Sage and Sour Critical Mass, Purple Kush or Presidential Kush etc. are out. Instead the marijuana will be sold in plain packages and the names will be letters and numbers such as, for example, CW-25. Only Five Companies will grow marijuana statewide As mentioned above, as of this date, medical marijuana is not available in New York State. The Cuomo administration has established a procedure to select five growers statewide. Each of the five growers will be permitted to have up to four dispensaries, meaning there will be a maximum total of 20 dispensaries where patients can purchase marijuana with a prescription in the state. It is not known yet whether rural and outlying patients without the means to travel to one of the dispensaries will be able to receive their prescriptions through delivery services or not. Because there is a federal law outlawing marijuana, it is anticipated the US Postal Service will not be able to deliver marijuana prescriptions should it be determined by the state that it can be delivered to patients. According to published reports, dozens of companies are already well into the planning stages to compete for one of the coveted, five registered organizations (RO's) that will be licensed by New York to grow and dispense marijuana. 
Some estimates suggest that by the time the applications are submitted this spring more than 100 firms may apply to become one of the five manufacturers of the drug. Obviously, manufacturers must demonstrate they’ve thought out every detail of their production plan, from security, to support from local lawmakers, to plans for how they’ll transport the plants from manufacturers to dispensaries. Companies such as Ideal 420 Technologies LLC, Nanoponics LLC., Privateer Holdings Inc., (a Seattle company that invests in medical cannabis and has plans to create a Bob Marley-branded strain of the drug) MJ Freeway, (which touts itself as “your solution to run a successful cannabis business”) Great Lakes Medicinals, based in Webster, Sea Cliff-based PalliaTech Inc., Fioria Franco LLC, based in Clarence, and Lewiston Greenhouse LLC. (owned in part by the principals of Modern Disposal and other investors) are among the companies preparing to make a bid for one of the five spots. Lots of Regs Here are the ground rules for “RO” selection. * Can be for-profit or not-for-profit organizations. * Must contract with an independent laboratory approved by the Commissioner of Health for product testing. * Cannot be managed by or employ anyone who comes in direct contact with the marijuana who has been convicted of felony drug charge within 10 years (unless they received a certificate of relief or good conduct). * Growing must be done indoors (which may include a greenhouse) in a secure facility. * The Commissioner will not license more than 5 RO’s, which can each operate 4 dispensaries (for a total of a maximum of 20 dispensaries statewide). * Commissioner can add more RO’s if s/he determines a need. * DOH will issue regulations for RO’s, including regulation governing security and tracking. * To apply to be an RO, the organizations must: ** Have sufficient facilities and land or a bond of $2 million. ** Can maintain good security. ** Has entered into a labor peace agreement. ** Able to comply with all state laws. In determining who can be an RO, the Commissioner should consider the public interest – including regional access. The fee for RO’s is to be determined by the Commissioner. Licenses are valid for two years then must be renewed. Licenses can be suspended or terminated if the RO is not controlling diversion or otherwise violating the statute or regulations. Lewiston, Niagara County in the Running for Big Benefits One of the leading contenders - and certainly the leading local contender for an RO is Lewiston Greenhouse, LLC, owned in part by the owners of Modern Disposal Services. Modern Disposal Services is the 20th largest waste removal company in the United States, has an annual payroll of over $21 million and annual expenditures of over $61 million, much within Niagara County. Since 1996, Modern has paid $37 million to the town of Lewiston based on an agreement it created with the town called a Host Community Agreement (HCA). As a result of the agreement, the company is largely credited for the fact that the town of Lewiston does not have a town tax. In October 2014, one month after the governor signed the medical marijuana law, several investors including the owners of Modern formed Lewiston Greenhouse, LLC in order to begin the application process to become a NYS Medical Marijuana Producer or RO. This same group of investors, owners of a 12 acre indoor greenhouse presently growing tomatoes, have made it clear that if selected as an RO it would shift its growing operation to medical marijuana. 
Lewiston Company Has Famous 'Healing' Strain Apropos of this, Gary Smith, a member of Lewiston Greenhouse, LLC, said that the company has obtained the license for New York State to grow a strain of marijuana popularly known as Charlotte's Web. Charlotte's Web received national recognition as a seemingly miraculous treatment for a rare and catastrophic form of epilepsy suffered by children. It is called Dravet Syndrome and there is currently no cure. Children with Dravet Syndrome suffer from uncontrollable, often very violent seizures beginning in infancy. The seizures are a refractory symptom; they typically do not respond to seizure medications. Treatment with Charlotte’s Web has proven to aid these children immensely. This strain of marijuana is low in THC, the psychoactive compound found in marijuana, yet it is high in cannabidiol (CBD). CBD is a compound considered to have a wide scope of medical applications. The bottom line? Charlotte’s Web will not make users “high” because of the low THC content. The strain is named after six-year-old Dravet Syndrome sufferer Charlotte Figi, whose parents say the drug radically eased the severity of her seizures. Charlotte was experiencing up to 60 tonic-clonic seizures a day — often up to 300 per week. Charlotte is one of approximately 400,000 American children who suffer from medication-resistant epilepsy. The child was hospitalized a multitude of times and was prescribed medications and special diets, but nothing worked. In fact, the medications were having a negative effect on her. After Colorado implemented its medical marijuana law, it was suggested that Charlotte try a high-CBD cannabis oil to treat the seizures. Since her cannabis treatments began, Charlotte has two to three minor seizures a month, mostly in her sleep. Charlotte’s story is not an isolated one. Alex is an 11-year-old suffering from tuberous sclerosis, which caused his seizures and autism. His events were so violent that he would repeatedly hurt himself. The family exhausted all mainstream medical options before turning to medical cannabis. As in Charlotte’s case, it worked. Presently the developers of Charlotte's Web are treating nearly 200 epileptics with their special strain of marijuana. Nearly all have seen dramatic reductions in the frequency and intensity of their seizures. According to Paige Figi's blog, "[Charlotte]." Time Magazine noted in October 2014 that the creators of Charlotte's Web had a waiting list of more than 12,000 families, some of whom moved to Colorado to be able to legally obtain it. Will Niagara, Lewiston Reap a Windfall? In addition to the fact that Lewiston Greenhouse, LLC has the know-how to grow, and the money to do it, it also has the only New York license to grow and sell Charlotte's Web. Lewiston Greenhouse intends to create a Host Community Agreement (similar to the one Modern created with their disposal contracts) with the Town of Lewiston. This means significant additional revenue for the town, which Lewiston taxpayers, eager to avoid a town tax, will be certain to embrace. On top of that there are other economic benefits that come to the county. Counties hosting growers and dispensaries will receive up to 45% of the seven percent excise tax charged on the marijuana products. With only one of five growers to service a state of almost 20 million people, the tax benefits could amount to untold millions for Niagara County, if Lewiston Greenhouse is selected.
Build Pakistan is an event which aims to facilitate the construction, real estate, building materials and infrastructure development industries. The event is designed to connect with individuals and organizations that are eager to share their products while seeking fresh business opportunities. The event also provides an ideal platform for companies looking to expand or diversify into different business areas. The event highlights the enormous potential in the building and construction industry in Pakistan. Furthermore, as an added bonus, Build Pakistan focuses on involving related industry sectors such as furniture, property, stone and marble. The exhibition has a high rate of repeat attendance with over 30,000 visitors and trade buyers eager to learn of the latest developments taking place in Pakistan’s building and construction industry.
\begin{document} \title[relative twist formula]{On the relative twist formula of $\ell$-adic sheaves} \author{Enlin Yang} \address{Fakult\"at f\"ur Mathematik, Universit\"at Regensburg, 93040 Regensburg, Germany} \email{[email protected], [email protected]} \author{Yigeng Zhao} \address{Fakult\"at f\"ur Mathematik, Universit\"at Regensburg, 93040 Regensburg, Germany} \email{[email protected]} \date{\today} \begin{abstract} We propose a conjecture on the relative twist formula of $\ell$-adic sheaves, which can be viewed as a generalization of Kato-Saito's conjecture. We verify this conjecture under some transversal assumptions. We also define a relative cohomological characteristic class and prove that its formation is compatible with proper push-forward. A conjectural relation is also given between the relative twist formula and the relative cohomological characteristic class. \end{abstract} \subjclass[2010]{Primary 14F20; Secondary 11G25, 11S40.} \maketitle \tableofcontents \section{Introduction} As an analogy of the theory of $D$-modules, Beilinson \cite{Bei16} and T.~Saito \cite{Sai16} define the singular support and the characteristic cycle of an $\ell$-adic sheaf on a smooth variety respectively. As an application of their theory, we prove a twist formula of epsilon factors in \cite{UYZ}, which is a modification of a conjecture due to Kato and T.~Saito\cite[Conjecture 4.3.11]{KS08}. \subsection{Kato-Saito's conjecture} \subsubsection{} Let $X$ be a smooth projective scheme purely of dimension $d$ over a finite field $k$ of characteristic $p$. Let $\Lambda$ be a finite field of characteristic $\ell\neq p$ or $\Lambda=\overline{\mathbb Q}_\ell$. Let $\mathcal F\in D_c^b(X,\Lambda)$ and $\chi(X_{\bar{k}},\mathcal F)$ be the Euler-Poincar\'e characteristic of $\mathcal F$. The Grothendieck $L$-function $L(X,\mathcal F, t)$ satisfies the following functional equation \begin{equation}\label{eqYZ:fe} L(X,\mathcal F, t)=\varepsilon(X,\mathcal F)\cdot t^{-\chi(X_{\bar{k}},\mathcal F)}\cdot L(X, D(\mathcal F),t^{-1}), \end{equation} where $D(\mathcal F)$ is the Verdier dual $R\mathcal Hom(\mathcal F,Rf^!\Lambda)$ of $\mathcal F$, $f\colon X\rightarrow {\rm Spec}k$ is the structure morphism and \begin{equation} \varepsilon(X,\mathcal F)=\det(-\mathrm{Frob}_k;R\Gamma(X_{\bar{k}},\mathcal F))^{-1} \end{equation} is the epsilon factor (the constant term of the functional equation (\ref{eqYZ:fe})) and $\mathrm{Frob}_k$ is the geometric Frobenius (the inverse of the Frobenius substitution). \subsubsection{} In (\ref{eqYZ:fe}), both $\chi(X_{\bar{k}},\mathcal F)$ and $\varepsilon(X,\mathcal F)$ are related to ramification theory. Let $cc_{X/k}(\mathcal F)=0_X^!(CC(\mathcal F, X/k))\in CH_0(X)$ be the characteristic class of $\mathcal F$ (cf. \cite[Definition 5.7]{Sai16}), where $0_X\colon X\to T^\ast X$ is the zero section and $CC(\mathcal F, X/k)$ is the characteristic cycle of $\mathcal F$. Then $\chi(X_{\bar{k}},\mathcal F)={\rm deg} (cc_{X/k}(\mathcal F))$ by the index formula \cite[Theorem 7.13]{Sai16}. The following theorem proved in \cite{UYZ} gives a relation between $\varepsilon(X,\mathcal F)$ and $cc_{X/k}(\mathcal F)$, which is a modified version of the formula conjectured by Kato and T. Saito in \cite[Conjecture 4.3.11]{KS08}. 
\begin{theorem}[Twist formula, {\cite[Theorem 1.5]{UYZ}}]\label{thm:uyz} We have \begin{equation}\label{eqYZ:ep} \varepsilon(X,\mathcal F\otimes\mathcal G)=\varepsilon(X,\mathcal F)^{{\rm rank}\mathcal G}\cdot \det\mathcal G(\rho_X(-cc_{X/k}(\mathcal F)))\qquad{\rm in}~\Lambda^\times, \end{equation} where $\rho_X\colon CH_0(X)\to\pi_1^{\rm ab}(X)$ is the reciprocity map defined by sending the class $[s]$ of a closed point $s\in X$ to the geometric Frobenius $\mathrm{Frob}_s$ and $\det\mathcal G\colon \pi_1^{\rm ab}(X)\to \Lambda^\times$ is the representation associated to the smooth sheaf $\det\mathcal G$ of rank 1. \end{theorem} When $\mathcal F$ is the constant sheaf $\Lambda$, this is proved by S.~Saito \cite{SS84}. If $\mathcal F$ is a smooth sheaf on an open dense subscheme $U$ of $X$ such that $\mathcal F$ is tamely ramified along $D=X\setminus U$, then Theorem \ref{thm:uyz} is a consequence of \cite[Theorem 1]{Sai93}. In \cite{Vi09a,Vi09b}, Vidal proves a similar result on a proper smooth surface over a finite field of characteristic $p>2$ under certain technical assumptions. Our proof of Theorem \ref{thm:uyz} is based on the following theories: one is the theory of singular support \cite{Bei16} and characteristic cycle \cite{Sai16}, and the other is Laumon's product formula \cite{Lau87}. \subsection{$\varepsilon$-factorization} \subsubsection{}Now we assume that $X$ is a smooth projective geometrically connected curve of genus $g$ over a finite field $k$ of characteristic $p$. Let $\omega$ be a non-zero rational $1$-form on $X$ and $\mathcal F$ an $\ell$-adic sheaf on $X$. The following formula was conjectured by Deligne and proved by Laumon \cite[3.2.1.1]{Lau87}: \begin{equation}\label{eqYZ:product} \varepsilon(X,\mathcal F)=p^{[k:\mathbb F_p](1-g)\rank (\mathcal F)}\prod_{v\in|X|}\varepsilon_v(\mathcal F|_{X_{(v)}},\omega). \end{equation} For a higher dimensional smooth scheme $X$ over $k$, it is still an open question whether there is an $\varepsilon$-factorization formula (respectively a geometric $\varepsilon$-factorization formula) for $\varepsilon(X,\mathcal F)$ (respectively $\det R\Gamma(X,\mathcal F)$). \subsubsection{} In \cite{Bei07}, Beilinson develops the theory of topological epsilon factors using $K$-theory spectra, and he asks whether his construction admits a motivic ($\ell$-adic or de Rham) counterpart. For de Rham cohomology, such a construction is given by Patel in \cite{Pat12}. Based on \cite{Pat12}, Abe and Patel prove a similar twist formula in \cite{AP17} for global de Rham epsilon factors in the classical setting of $\mathcal D_X$-modules on smooth projective varieties over a field of characteristic zero. In the $\ell$-adic situation, such a geometric $\varepsilon$-factorization formula is still open even if $X$ is a curve. Since the classical local $\varepsilon$-factors depend on an additive character of the base field, a satisfactory geometric $\varepsilon$-factorization theory will lie in an appropriate gerbe rather than be a super graded line (cf. \cite{Bei07, Pat12}). \subsubsection{}More generally, we could also ask similar questions in a relative situation. Now let $f\colon X\rightarrow S$ be a proper morphism between smooth schemes over $k$. Let $\mathcal F$ be an $\ell$-adic sheaf on $X$ such that $f$ is universally locally acyclic relatively to $\mathcal F$. Under these assumptions, we know that $Rf_\ast\mathcal F$ is locally constant on $S$.
Now we can ask if there is an analogous geometric $\varepsilon$-factorization for the determinant $\det Rf_\ast\mathcal F$. This problem is far beyond the authors' reach at this moment. But, similar to \eqref{eqYZ:ep}, we may consider twist formulas for $\det Rf_\ast\mathcal F$. One of the purposes of this paper is to formulate such a twist formula and prove it under certain assumptions. \subsubsection{Relative twist formula} Let $S$ be a regular Noetherian scheme over $\mathbb Z[1/\ell]$ and $f\colon X\to S$ a proper smooth morphism purely of relative dimension $n$. Let $\mathcal F\in D_c^b(X,\Lambda)$ be such that $f$ is universally locally acyclic relatively to $\mathcal F$. Then we conjecture that (see Conjecture \ref{conj:rtf}) there exists a unique cycle class $cc_{X/S}(\mathcal F)\in{\rm CH}^n(X)$ such that for any locally constant and constructible sheaf $\mathcal G$ of $\Lambda$-modules on $X$, we have an isomorphism of smooth sheaves of rank 1 on $S$ \begin{align}\label{intro:rtf} \det Rf_\ast(\mathcal F\otimes\mathcal G)\cong(\det Rf_\ast\mathcal F)^{\otimes\rank\mathcal G}\otimes \det\mathcal G(cc_{X/S}(\mathcal F)) \end{align} where $\det\mathcal G(cc_{X/S}(\mathcal F))$ is a smooth sheaf of rank 1 on $S$ (see \ref{subsub:defDetG} for the definition). We call \eqref{intro:rtf} the relative twist formula. As evidence, we prove a special case of the above conjecture in Theorem \ref{thm:rtf}. It is also interesting to consider a similar relative twist formula for de Rham epsilon factors in the sense of \cite{AP17}. We will pursue this question elsewhere. \subsubsection{}If $S$ is moreover a smooth connected scheme of dimension $r$ over a perfect field $k$, we construct a candidate for $cc_{X/S}(\mathcal F)$ in Definition \ref{def:rcclass}. We also relate the relative characteristic class $cc_{X/S}(\mathcal F)$ to the total characteristic class of $\mathcal F$. Let $K_0(X,\Lambda)$ be the Grothendieck group of $D^b_c(X,\Lambda)$. In \cite[Definition 6.7.2]{Sai16}, T.~Saito defines the following morphism \begin{align} cc_{X,\bullet}\colon K_0(X,\Lambda)\to {\rm CH}_\bullet(X)=\bigoplus_{i= 0}^{r+n} {\rm CH}_i(X), \end{align} which sends $\mathcal F\in D_c^b(X,\Lambda)$ to the total characteristic class $cc_{X,\bullet}(\mathcal F)$ of $\mathcal F$. Under the assumption that $f\colon X\to S$ is $SS(\mathcal F, X/k)$-transversal, we show that $(-1)^r\cdot cc_{X/S}(\mathcal F)=cc_{X, r}(\mathcal F)$ in Proposition \ref{prop:identificationoftwocc}. \subsubsection{} Following Grothendieck \cite{Gro77}, it is natural to ask whether the following diagram \begin{align}\label{eq:fccfintro} \begin{gathered} \xymatrix{ K_0(X,\Lambda)\ar[d]_{f_\ast}\ar[r]^{cc_{X,\bullet}}&CH_\bullet(X)\ar[d]^{f_*}\\ K_0(Y,\Lambda)\ar[r]^{cc_{Y,\bullet}}&CH_\bullet(Y) } \end{gathered} \end{align} is commutative for any proper map $f\colon X\rightarrow Y$ between smooth schemes over $k$. If $k=\mathbb C$, the diagram (\ref{eq:fccfintro}) is commutative by \cite[Theorem A.6]{Gin86}. By the philosophy of Grothendieck, the answer is no in general if ${\rm char}(k)>0$ (cf. \cite[Example 6.10]{Sai16}). If $k$ is a finite field and if $f\colon X\to Y$ is moreover projective, as a corollary of Theorem \ref{thm:uyz}, we prove in \cite[Corollary 5.26]{UYZ} that the degree zero part of \eqref{eq:fccfintro} commutes. In general, motivated by the conjectural formula \eqref{intro:rtf}, we propose the following question. Let $f\colon X\rightarrow S$ and $g\colon Y\to S$ be smooth morphisms.
Let $D_{c}^b(X/S,\Lambda)$ be the thick subcategory of $D_c^b(X,\Lambda)$ consisting of $\mathcal F\in D_c^b(X,\Lambda)$ such that $f$ is $SS(\mathcal F, X/k)$-transversal. Let $K_0(X/S,\Lambda)$ be the Grothendieck group of $D_{c}^b(X/S,\Lambda)$. Then for any proper morphism $h\colon X\to Y$ over $S$, we conjecture that the following diagram commutes (see Conjecture \ref{con:pushccr}) \begin{align}\label{eq:introPushccr} \begin{gathered} \xymatrix@C=3pc{ K_0(X/S,\Lambda)\ar[d]_{h_\ast}\ar[r]^-{~cc_{X,r}~}&CH_r(X)\ar[d]^{h_*}\\ K_0(Y/S,\Lambda)\ar[r]^-{~cc_{Y,r}~}&CH_r(Y). } \end{gathered} \end{align} \subsubsection{}As evidence for \eqref{eq:introPushccr}, we construct a relative cohomological characteristic class \begin{align}ccc_{X/S}(\mathcal F)\in H^{2n}(X,\Lambda(n)) \end{align} in Definition \ref{def:ccc} if $X\rightarrow S$ is smooth and $SS(\mathcal F, X/k)$-transversal. We prove that the formation of $ccc_{X/S}(\mathcal F)$ is compatible with proper push-forward (see Corollary \ref{cor:pushccc} for a precise statement). Similar to \cite[Conjecture 6.8.1]{Sai16}, we conjecture that we have the following equality (see Conjecture \ref{conj:ccequality}) \begin{equation} {\rm cl}(cc_{X/S}(\mathcal F))=ccc_{X/S}(\mathcal F) \quad{\rm in}\quad H^{2n}(X, \Lambda(n)) \end{equation} where ${\rm cl}\colon {\rm CH}^n(X)\rightarrow H^{2n}(X, \Lambda(n))$ is the cycle class map. \subsection*{Acknowledgements} Both authors are partially supported by the DFG through CRC 1085 \emph{Higher Invariants} (Universit\"at Regensburg). The authors thank Naoya Umezaki for sharing ideas and for helpful discussions during the writing of the paper \cite{UYZ}. The authors also thank Professor Weizhe Zheng for discussions on his paper \cite{Zh15}, and Haoyu Hu for his valuable comments. The first author would like to thank his advisor Professor Linsheng Yin (1963--2015) for his constant encouragement during 2010--2015. \subsection*{Notation and Conventions} \begin{enumerate} \item Let $p$ be a prime number and $\Lambda$ be a finite field of characteristic $\ell\neq p$ or $\Lambda=\overline{\mathbb Q}_\ell$. \item We say that a complex $\mathcal F$ of \'etale sheaves of $\Lambda$-modules on a scheme $X$ over $\mathbb Z[1/\ell]$ is {\it constructible} (respectively {\it smooth}) if the cohomology sheaf $\mathcal H^q(\mathcal F)$ is constructible for every $q$ and if $\mathcal H^q(\mathcal F)=0$ except for finitely many $q$ (respectively moreover $\mathcal H^q(\mathcal F)$ is locally constant for all $q$). \item For a scheme $S$ over $\mathbb Z[1/{\ell}]$, let $D^b_c(S,\Lambda)$ be the triangulated category of bounded complexes of $\Lambda$-modules with constructible cohomology groups on $S$ and let $K_0(S,\Lambda)$ be the Grothendieck group of $D^b_c(S,\Lambda)$. \item For a scheme $X$, we denote by $|{X}|$ the set of closed points of $X$. \item For any smooth morphism $X\rightarrow S$, we denote by $T^\ast_X(X/S)\subseteq T^\ast (X/S)$ the zero section of the relative cotangent bundle $T^\ast (X/S)$ of $X$ over $S$. If $S$ is the spectrum of a field, we simply denote $T^\ast (X/S)$ by $T^\ast X$. \end{enumerate} \section{Relative twist formula} \subsection{Reciprocity map} \subsubsection{} For a smooth proper variety $X$ purely of dimension $n$ over a finite field $k$ of characteristic $p$, the reciprocity map $ \rho_X\colon {\rm CH}^n(X) \to \pi^{\rm ab}_1(X)$ is given by sending the class $[s]$ of a closed point $s\in X$ to the geometric Frobenius ${\rm Frob}_s$ at $s$.
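For instance (the simplest case, recorded only for orientation): when $X={\rm Spec}k$ is a point, so that $n=0$, the reciprocity map is the homomorphism \[ \rho_{{\rm Spec}k}\colon {\rm CH}_0({\rm Spec}k)=\mathbb Z\to \pi_1^{\rm ab}({\rm Spec}k)={\rm Gal}(\bar k/k)\cong\widehat{\mathbb Z},\qquad m\mapsto {\rm Frob}_k^m, \] where ${\rm Frob}_k$ denotes the geometric Frobenius of $k$.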
The map $ \rho_X$ is injective with dense image \cite{KS83}. \subsubsection{} Let $S$ be a regular Noetherian scheme over $\mathbb Z[1/\ell]$ and $X$ a smooth proper scheme purely of relative dimension $n$ over $S$. By \cite[Proposition 1]{Sai94}, there exists a unique way to attach a pairing \begin{align}\label{eq:1:100} {\rm CH}^n(X)\times \pi_1^{\rm ab}(S)\to \pi_1^{\rm ab}(X) \end{align} satisfying the following two conditions: \begin{enumerate} \item When $S={\rm Spec}k$ is a point, for a closed point $x\in X$, the pairing with the class $[x]$ coincides with the inseparable degree times the Galois transfer ${\rm tran}_{k(x)/k}$ (cf.\ \cite[1]{Tat79}) followed by $i_{x\ast}$ for $i_x\colon x\to X$ \[ {\rm Gal}(k^{\rm ab}/k)\xrightarrow{{\rm tran}_{k(x)/k}\times[k(x)\colon k]_i} {\rm Gal}(k(x)^{\rm ab}/k(x))\xrightarrow{i_{x\ast}}\pi_1^{\rm ab}(X). \] \item For any point $s\in S$, the following diagram commutes \[ \xymatrix{ {\rm CH}^n(X)\ar[d]\ar@{}|\times[r] &\pi_1^{\rm ab}(S)\ar[r]&\pi_1^{\rm ab}(X)\\ {\rm CH}^n(X_s)\ar@{}|\times[r]&\pi_1^{\rm ab}(s)\ar[u]\ar[r]&\pi_1^{\rm ab}(X_s).\ar[u] } \] \end{enumerate} \subsubsection{}\label{subsub:defDetG} For any locally constant and constructible sheaf $\mathcal G$ of $\Lambda$-modules on $X$ and any $z\in {\rm CH}^n(X)$, we have a map \begin{align}\label{eq:2:100} \pi^{\rm ab}_1(S)\xrightarrow{(z,\bullet)}\pi_1^{\rm ab}(X)\xrightarrow{\det\mathcal G} \Lambda^\times \end{align} where $(z,\bullet)$ is the map determined by the pairing \eqref{eq:1:100} and $\det\mathcal G$ is the representation associated to the locally constant sheaf $\det\mathcal G$ of rank 1. The composition $\det\mathcal G\circ (z,\bullet)\colon \pi^{\rm ab}_1(S)\to \Lambda^\times$ determines a locally constant and constructible sheaf of rank 1 on $S$, which we simply denote by $\det\mathcal G(z)$. Now we propose the following conjecture. \begin{conjecture}[Relative twist formula]\label{conj:rtf} Let $S$ be a regular Noetherian scheme over $\mathbb Z[1/\ell]$ and $f\colon X\to S$ a smooth proper morphism purely of relative dimension $n$. Let $\mathcal F\in D_c^b(X,\Lambda)$ be such that $f$ is universally locally acyclic relatively to $\mathcal F$. Then there exists a unique cycle class $cc_{X/S}(\mathcal F)\in{\rm CH}^n(X)$ such that for any locally constant and constructible sheaf $\mathcal G$ of $\Lambda$-modules on $X$, we have an isomorphism \begin{align}\label{eq:conjrtf100} \det Rf_\ast(\mathcal F\otimes\mathcal G)\cong(\det Rf_\ast\mathcal F)^{\otimes\rank\mathcal G}\otimes \det\mathcal G(cc_{X/S}(\mathcal F)) \; \ \; {\rm in\ } K_0(S,\Lambda), \end{align} where $K_0(S,\Lambda)$ is the Grothendieck group of $D_c^b(S,\Lambda)$. \end{conjecture} We call this cycle class $cc_{X/S}(\mathcal F)\in{\rm CH}^n(X)$ {\it the relative characteristic class of $\mathcal F$} if it exists. If $S$ is a smooth scheme over a perfect field $k$, we construct a candidate for $cc_{X/S}(\mathcal F)$ in Definition \ref{def:rcclass}. As evidence, we prove a special case of the above conjecture in Theorem \ref{thm:rtf}. In order to construct a cycle class $cc_{X/S}(\mathcal F)$ satisfying \eqref{eq:conjrtf100}, we use the theory of singular support and characteristic cycle. \subsection{Transversal condition and singular support}\label{relcotbundle} \subsubsection{} Let $f\colon X\rightarrow S$ be a smooth morphism of Noetherian schemes over $\mathbb Z[1/\ell]$.
We denote by $T^\ast(X/S)$ the vector bundle ${\rm Spec}(\mathrm{Sym}_{\mathcal{O}_X}(\Omega^{1}_{X/S})^{\vee})$ on $X$ and call it {\it the relative cotangent bundle on} $X$ {\it with respect to} $S$. We denote by $T^*_X(X/S)=X$ the zero-section of $T^\ast(X/S)$. A constructible subset $C$ of $T^\ast(X/S)$ is called {\it conical} if $C$ is invariant under the canonical $\mathbb G_m$-action on $T^\ast(X/S)$. \begin{definition}[{\cite[\S 1.2]{Bei16} and \cite[\S 2]{HY17}}] Let $f\colon X\rightarrow S$ be a smooth morphism of Noetherian schemes over $\mathbb Z[1/\ell]$ and $C$ a closed conical subset of $T^\ast(X/S)$. Let $Y$ be a Noetherian scheme smooth over $S$ and $h\colon Y\rightarrow X$ an $S$-morphism. (1) We say that $h\colon Y\rightarrow X$ is {\it $C$-transversal relatively to} $S$ {\it at a geometric point} $\bar y\rightarrow Y$ if for every non-zero vector $\mu\in C_{h(\bar y)}=C\times_X\bar y$, the image $dh_{\bar y}(\mu)\in T_{\bar y}^\ast(Y/S)\coloneqq T^\ast(Y/S)\times_Y {\bar y}$ is not zero, where $dh_{\bar y}\colon T_{h(\bar y)}^\ast(X/S)\rightarrow T_{\bar y}^\ast(Y/S)$ is the canonical map. We say that $h\colon Y\rightarrow X$ is $C$-{\it transversal relatively to} $S$ if it is $C$-transversal relatively to $S$ at every geometric point of $Y$. If $h:Y\rightarrow X$ is $C$-transversal relatively to $S$, we put $h^{\circ}C=dh(C\times_XY)$ where $dh:T^\ast(X/S)\times_X Y\rightarrow T^\ast(Y/S)$ is the canonical map induced by $h$. By the same argument of \cite[Lemma 1.1]{Bei16}, $h^\circ C$ is a conical closed subset of $T^\ast(Y/S)$. (2) Let $Z$ be a Noetherian scheme smooth over $S$ and $g\colon X\rightarrow Z$ an $S$-morphism. We say that $g\colon X\rightarrow Z$ is {\it $C$-transversal relatively to} $S$ {\it at a geometric point} $\bar x\rightarrow X$ if for every non-zero vector $\nu\in T_{g(\bar x)}^\ast(Z/S)$, we have $dg_{\bar x}(\nu)\notin C_{\bar x}$, where $dg_{\bar x}\colon T^*_{g(\bar x)}(Z/S)\rightarrow T^*_{\bar x}(X/S)$ is the canonical map. We say that $g\colon X\rightarrow Z$ is $C$-{\it transversal relatively to} $S$ if it is $C$-transversal relatively to $S$ at all geometric points of $X$. If the base $B(C)\coloneq C\cap {T}_X^*(X/S) $ of $C$ is proper over $Z$, we put $g_\circ C:={\rm pr}_1(dg^{-1}(C))$, where ${\rm pr}_1\colon T^\ast (Z/S)\times_Z X\rightarrow T^\ast (Z/S)$ denotes the first projection and $dg: T^*(Z/S)\times_ZX\rightarrow T^*(X/S)$ is the canonical map. It is a closed conical subset of $T^*(Z/S)$. (3) A {\it test pair of} $X$ {\it relative to} $S$ is a pair of $S$-morphisms $(g,h): Y\leftarrow U\rightarrow X$ such that $U$ and $Y$ are Noetherian schemes smooth over $S$. We say that $(g,h)$ is $C$-{\it transversal relatively to} $S$ if $h:U\rightarrow X$ is $C$-transversal relatively to $S$ and $g:U\rightarrow Y$ is $h^{\circ}C$-transversal relatively to $S$. \end{definition} \begin{definition}[{\cite[\S 1.3]{Bei16} and \cite[\S 4]{HY17}}] Let $f\colon X\rightarrow S$ be a smooth morphism of Noetherian schemes over $\mathbb Z[1/\ell]$. Let $\mathcal F$ be an object in $D^b_c(X,\Lambda)$. (1) We say that a test pair $(g,h):Y\leftarrow U\rightarrow X$ relative to $S$ is $\mathcal F$-{\it acyclic} if $g:U\rightarrow Y$ is universally locally acyclic relatively to $h^*\mathcal F$. (2) For a closed conical subset $C$ of ${T}^*(X/S)$, we say that $\mathcal F$ is {\it micro-supported on} $C$ {\it relatively to} $S$ if every $C$-transversal test pair of $X$ relative to $S$ is $\mathcal F$-acyclic. 
(3) Let $\mathcal C(\mathcal F, X/S)$ be the set of all closed conical subsets $C'\subseteq T^*(X/S)$ such that $\mathcal F$ is micro-supported on $C'$ relatively to $S$. Note that $\mathcal C(\mathcal F,X/S)$ is non-empty if $f:X\rightarrow S$ is universally locally acyclic relatively to $\mathcal F$. If $\mathcal C(\mathcal F, X/S)$ has a smallest element, we denote it by $SS(\mathcal F, X/S)$ and call it the {\it singular support} of $\mathcal F$ {\it relative to} $S$. \end{definition} \begin{theorem}[Beilinson]\label{existrss} Let $f:X\rightarrow S$ be a smooth morphism between Noetherian schemes over $\mathbb{Z}[1/\ell]$ and $\mathcal F$ an object of $D^b_c(X,\Lambda)$. \begin{itemize} \item[(1)]$($\cite[Theorem 5.2]{HY17}$)$ If we further assume that $f:X\rightarrow S$ is projective and universally locally acyclic relatively to $\mathcal F$, the singular support $SS(\mathcal F,X/S)$ exists. \item[(2)]$($\cite[Theorem 5.2 and Theorem 5.3]{HY17}$)$ In general, after replacing $S$ by a Zariski open dense subscheme, the singular support $SS(\mathcal F,X/S)$ exists, and for any $s\in S$, we have \begin{equation}\label{ssequalwant} SS(\mathcal F|_{X_s}, X_s/s)=SS(\mathcal F,X/S)\times_Ss. \end{equation} \item[(3)] $($\cite[Theorem 1.3]{Bei16}$)$ If $S={\rm Spec}k$ for a field $k$ and if $X$ is purely of dimension $d$, then $SS(\mathcal F, X/S)$ is purely of dimension $d$. \end{itemize} \end{theorem} \subsection{Characteristic cycle and index formula} \subsubsection{}Let $k$ be a perfect field of characteristic $p$. Let $X$ be a smooth scheme purely of dimension $n$ over $k$, let $C$ be a closed conical subset of $T^*X$ and $f:X\rightarrow \mathbb A^1_k$ a $k$-morphism. A closed point $v\in X$ is called {\it at most an isolated $C$-characteristic point of $f:X\rightarrow \mathbb A^1_k$} if there is an open neighborhood $V\subseteq X$ of $v$ such that $f: V-\{v\}\rightarrow \mathbb A^1_k$ is $C$-transversal. A closed point $v\in X$ is called an {\it isolated $C$-characteristic point} if $v$ is at most an isolated $C$-characteristic point of $f:X\rightarrow \mathbb A^1_k$ but $f:X\rightarrow \mathbb A^1_k$ is not $C$-transversal at $v$. \begin{theorem}[{T.~Saito, \cite[Theorem 5.9]{Sai16}}] Let $X$ be a smooth scheme purely of dimension $n$ over a perfect field $k$ of characteristic $p$. Let $\mathcal F$ be an object of $D^b_c(X,\Lambda)$ and $\{C_\alpha\}_{\alpha\in I}$ the set of irreducible components of $SS(\mathcal F, X/k)$. There exists a unique $n$-cycle $CC(\mathcal F, X/k)=\sum_{\alpha\in I}m_\alpha [C_\alpha]$ $(m_\alpha\in \mathbb Z)$ of $T^*X$ supported on $SS(\mathcal F, X/k)$, satisfying the following Milnor formula (\ref{eq:milnor}): For any \'etale morphism $g:V\rightarrow X$, any morphism $f:V\rightarrow\mathbb A^1_k$, any isolated $g^\circ SS(\mathcal F, X/k)$-characteristic point $v\in V$ of $f:V\rightarrow\mathbb A^1_k$ and any geometric point $\bar v$ of $V$ above $v$, we have \begin{equation}\label{eq:milnor} -\dimtot ~{\rm R}\Phi_{\bar v}(g^\ast \mathcal F, f)=(g^*CC(\mathcal F,X/k),df)_{T^*V,v}, \end{equation} where ${\rm R}\Phi_{\bar v}(g^*\mathcal F,f)$ denotes the stalk at $\bar v$ of the vanishing cycle complex of $g^*\mathcal F$ relative to $f$, $\dimtot~{\rm R}\Phi_{\bar v}(g^*\mathcal F,f)$ is the total dimension of ${\rm R}\Phi_{\bar v}(g^*\mathcal F,f)$ and $g^*CC(\mathcal F,X/k)$ is the pull-back of $CC(\mathcal F,X/k)$ to $T^*V$. \end{theorem} We call $CC(\mathcal F,X/k)$ the {\it characteristic cycle of} $\mathcal F$. 
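As a basic example, which we record for orientation and will not use later (cf. \cite{Bei16, Sai16}): if $\mathcal F$ is a non-zero locally constant and constructible sheaf of $\Lambda$-modules on $X$, then $SS(\mathcal F, X/k)={T}^\ast_XX$ and \begin{equation*} CC(\mathcal F, X/k)=(-1)^n\cdot{\rm rank}\mathcal F\cdot[{T}^\ast_XX]. \end{equation*} Combined with the index formula \eqref{indexformulaeq} below, this recovers, for projective smooth $X$, the Gauss--Bonnet type formula $\chi(X_{\bar k},\mathcal F|_{X_{\bar k}})={\rm rank}\mathcal F\cdot\deg(c_n(\Omega^{\vee}_{X/k})\cap[X])$.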
It satisfies the following index formula. \begin{theorem}[{T.~Saito, \cite[Theorem 7.13]{Sai16}}]\label{indexformula} Let $\bar k$ be an algebraic closure of a perfect field $k$ of characteristic $p$, $X$ a smooth projective scheme over $k$ and $\mathcal F\in D^b_c(X,\Lambda)$. Then, we have \begin{equation}\label{indexformulaeq} \chi(X_{\bar k},\mathcal F|_{X_{\bar k}})=\deg(CC(\mathcal F,X/k),T^*_XX)_{T^*X}, \end{equation} where $\chi(X_{\bar k},\mathcal F|_{X_{\bar k}})$ denotes the Euler-Poincar\'e characteristic of $\mathcal F|_{X_{\bar k}}$. \end{theorem} We give a generalization in Theorem \ref{thm:git}. For a smooth scheme $\pi\colon X\to {\rm Spec}k$, and two objects $\mathcal F_1$ and $\mathcal F_2$ in $D_c^b(X,\Lambda)$, we denote $\mathcal F_1\boxtimes^L_k\mathcal F_2\coloneq {\rm pr}_1^*\mathcal F_1 \otimes^L {\rm pr}_2^*\mathcal F_2 \in D^b_c(X\times X, \Lambda)$, where ${\rm pr}_i: X\times X \to X$ is the $i$th projection, for $i=1,2$. We also denote $D_X(\mathcal F_1)={R\mathcal Hom}(\mathcal F_1, \mathcal K_X)$, where $\mathcal K_X={\rm R}\pi^!\Lambda$. \begin{lemma}\label{Lem:Kun} Let $X$ be a smooth variety purely of dimension $n$ over a perfect field $k$ of characteristic $p$. Let $\mathcal F_1$ and $\mathcal F_2$ be two objects in $D_c^b(X,\Lambda)$. Then the diagonal map $\delta\colon \Delta=X\hookrightarrow X\times X$ is $SS(\mathcal F_2\boxtimes^LD_X\mathcal F_1,X\times X/k)$-transversal if and only if $SS(\mathcal F_2\boxtimes^L_k D_X\mathcal F_1,X\times X/k)\subseteq {T}^\ast_{\Delta}(X\times X).$ If we are in this case, then the canonical map \[ R \mathcal Hom(\mathcal F_1, \Lambda)\otimes^L \mathcal F_2 \xrightarrow{\cong} R\mathcal Hom(\mathcal F_1, \mathcal F_2) \] is an isomorphism. \end{lemma} \begin{proof} The first assertion follows from the short exact sequence of vector bundles on $X$ associated to $\delta\colon \Delta=X\hookrightarrow X\times X$: \[ 0 \to {T}^\ast_{\Delta}(X\times X) \to {T}^\ast(X\times X)\times_{X\times X}\Delta \xrightarrow{d\delta} T^*X \to 0 . \] For the second claim, we have the following canonical isomorphisms \begin{align} \nonumber R\mathcal Hom(\mathcal F_1, \Lambda)\otimes^L \mathcal F_2 &\cong R\mathcal Hom(\mathcal F_1, \Lambda(n)[2n])\otimes^L \Lambda(-n)[-2n]\otimes^L \mathcal F_2\overset{(1)}{\cong} D_X\mathcal F_1\otimes^L R\delta^!\Lambda\otimes^L \mathcal F_2\\ \label{isom} &\cong \delta^*(\mathcal F_2\boxtimes^L_kD_X\mathcal F_1)\otimes^L R\delta^!\Lambda \overset{(2)}{\cong} R\delta^!(\mathcal F_2\boxtimes^L_kD_X\mathcal F_1) \\ \nonumber &\overset{(3)}{\cong} R\delta^!(R\mathcal Hom({\rm pr}_2^*\mathcal F_1, R{\rm pr}_1^!\mathcal F_2)) \cong R\mathcal Hom(\delta^*{\rm pr}_2^*\mathcal F_1, R\delta^!R{\rm pr}_1^!\mathcal F_2)\\ \nonumber &\cong R\mathcal Hom( \mathcal F_1, \mathcal F_2), \end{align} where (1) follows from the purity for the closed immersion $\delta$ \cite[XVI, Th\'eor\`eme 3.1.1]{ILO14}; (2) follows from the assumption that $\delta$ is $SS(\mathcal F_2\boxtimes^L_kD_X\mathcal F_1)$-transversal by \cite[Proposition 8.13 and Definition 8.5]{Sai16}; (3) follows from the K\"unneth formula \cite[Expos\'e III, (3.1.1)]{Gro77}. \end{proof} \begin{theorem}\label{thm:git}~ Let $X$ be a smooth projective variety purely of dimension $n$ over an algebraically closed field $k$ of characteristic $p$. 
Let $\mathcal F_1$ and $\mathcal F_2$ be two objects in $D_c^b(X,\Lambda)$ such that the diagonal map $\delta\colon \Delta=X\hookrightarrow X\times X$ is properly $SS(\mathcal F_2\boxtimes^L_kD_X\mathcal F_1,X\times X/k)$-transversal. Then we have \begin{equation}\label{eq:git} (-1)^n\cdot\dim_{\Lambda} {\rm Ext}(\mathcal F_1, \mathcal F_2)={\rm deg}\left(CC(\mathcal F_1,X/k), CC(\mathcal F_2,X/k) \right)_{{T}^\ast X} \end{equation} where $\dim_{\Lambda} {\rm Ext}(\mathcal F_1, \mathcal F_2)=\sum\limits_{i}(-1)^i\dim_{\Lambda} {\rm Ext}^i_{D_c^b(X,\Lambda)}(\mathcal F_1, \mathcal F_2)$. \end{theorem} \begin{proof}By the isomorphisms \eqref{isom}, the left hand side of (\ref{eq:git}) equals \begin{align} \nonumber (-1)^n\cdot\chi(X,R\delta^!(\mathcal F_2\boxtimes^L_kD_X\mathcal F_1))&=(-1)^n\cdot\chi(X,\delta^*(\mathcal F_2\boxtimes^L_kD_X\mathcal F_1))\\ \label{eq:git01} &=(-1)^n\cdot\deg(CC(\delta^\ast(\mathcal F_2\boxtimes^L_k D_X\mathcal F_1),X/k),T^*_XX)_{{T}^*X}. \end{align} Since $\delta\colon X\rightarrow X\times X$ is properly $SS(\mathcal F_2\boxtimes^L_kD_X\mathcal F_1,X\times X/k)$-transversal, we have \begin{align} \label{first.isom} CC(\delta^\ast(\mathcal F_2\boxtimes^L_kD_X\mathcal F_1),X/k)&=(-1)^n \delta^\ast CC(D(\mathcal F_2\boxtimes^L_kD_X\mathcal F_1,X\times X/k) )\\ \label{second.isom}&=(-1)^n \delta^\ast (CC(\mathcal F_2,X/k)\times CC(\mathcal F_1,X/k)), \end{align} where the equality \eqref{first.isom} follows from \cite[Theorem 7.6]{Sai16}, and \eqref{second.isom} follows from \cite[Theorem 2.2.2]{Sai17}. Consider the following commutative diagram \[ \xymatrix{ {T}^*X\times {T}^*X \ar@{=}[r] & {T}^*(X\times X) & {{T}}^\ast(X\times X)\times_{X\times X}\Delta \ar[l]_-{\rm pr} \ar[r]^-{d\delta}& {T}^*X\\ &{{T}}^\ast X \ar[r]^-{\cong}\ar[u] \ar[ul]^{\rm diag} & {{T}}^\ast_{\Delta}(X\times X)\ar[r] \ar[u]& X. \ar[u]\ar@{}|\Box[ul] }\] We have $\delta^\ast (CC(\mathcal F_2,X/k)\times CC(\mathcal F_1,X/k))=d\delta_\ast {\rm pr}^!(CC(\mathcal F_2,X/k)\times CC(\mathcal F_1,X/k))$ and \begin{equation*} {\rm deg}( \delta^\ast (CC(\mathcal F_2,X/k)\times CC(\mathcal F_1,X/k)), {{T}}^\ast_XX)_{{T}^\ast X}={\rm deg}\left(CC(\mathcal F_1,X/k), CC(\mathcal F_2,X/k) \right)_{{{T}}^\ast X}. \end{equation*} Then \eqref{eq:git} follows from the above formula and \eqref{eq:git01}. \end{proof} \begin{remark} If $\mathcal F_1$ is the constant sheaf $\Lambda$, then Theorem \ref{thm:git} is the index formula \eqref{indexformulaeq}. Theorem \ref{thm:git} can be viewed as the $\ell$-adic version of the global index formula in the setting of $\mathcal D_X$-modules (cf. \cite[Theorem 11.4.1]{Gin86}). \end{remark} \subsection{Relative twist formula} \subsubsection{}Let $S$ be a Noetherian scheme over $\mathbb{Z}[1/\ell]$, $f:X\rightarrow S$ a smooth morphism of finite type and $\mathcal F$ an object of $D^b_c(X,\Lambda)$. Assume that the relative singular support $SS(\mathcal F, X/S)$ exists. A cycle $B=\sum_{i\in I} m_i[B_i]$ in ${T}^*(X/S)$ is called the {\it characteristic cycle of} $\mathcal F$ {\it relative to} $S$ if each $B_i$ is a subset of $SS(\mathcal F, X/S)$, each $B_i\rightarrow S$ is open and equidimensional and if, for any algebraic geometric point $\bar s$ of $S$, we have \begin{equation} B_{\bar s}=\sum_{i\in I} m_i[(B_i)_{\bar s}]=CC(\mathcal F|_{X_{\bar s}}, X_{\bar s}/\bar s). \end{equation} We denote by $CC(\mathcal F, X/S)$ the characteristic cycle of $\mathcal F$ on $X$ relative to $S$. Notice that relative characteristic cycles may not exist in general.
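For instance, in the simplest case (recorded only for orientation): if $\mathcal F$ is a non-zero locally constant and constructible sheaf of $\Lambda$-modules on $X$ and $f\colon X\rightarrow S$ is smooth purely of relative dimension $n$, one checks that $SS(\mathcal F, X/S)={T}^\ast_X(X/S)$ and, since $CC(\mathcal F|_{X_{\bar s}}, X_{\bar s}/\bar s)=(-1)^n\cdot{\rm rank}\mathcal F\cdot[{T}^\ast_{X_{\bar s}}X_{\bar s}]$ for every algebraic geometric point $\bar s$ of $S$, that \begin{equation*} CC(\mathcal F, X/S)=(-1)^n\cdot{\rm rank}\mathcal F\cdot[{T}^\ast_X(X/S)]. \end{equation*} More generally, we have the following existence criterion.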
\begin{proposition}[{T.~Saito, \cite[Proposition 6.5]{HY17}}]\label{flattransversal} Let $k$ be a perfect field of characteristic $p$. Let $S$ be a smooth connected scheme of dimension $r$ over $k$, $f:X\rightarrow S$ a smooth morphism of finite type and $\mathcal F$ an object of $D^b_c(X,\Lambda)$. Assume that $f: X\rightarrow S$ is $SS(\mathcal F,X/k)$-transversal and that each irreducible component of $SS(\mathcal F, X/k)$ is open and equidimensional over $S$. Then the relative singular support $SS(\mathcal F, X/S)$ and the relative characteristic cycle $CC(\mathcal F, X/S)$ exist, and we have \begin{align} \label{eqs:rss}SS(\mathcal F,X/S)&=\theta(SS(\mathcal F, X/k)),\\ \label{eqs:rcc}CC(\mathcal F, X/S)&=(-1)^r\theta_*(CC(\mathcal F, X/k)), \end{align} where $\theta:{T}^*X\rightarrow {T}^*(X/S)$ denotes the projection induced by the canonical map $\Omega^1_{X/k}\rightarrow\Omega^1_{X/S}$. \end{proposition} \begin{definition}\label{def:rcclass} Let $k$ be a perfect field of characteristic $p$ and $S$ a smooth connected scheme of dimension $r$ over $k$. Let $f:X\rightarrow S$ be a smooth morphism purely of relative dimension $n$ and $\mathcal F$ an object of $D^b_c(X,\Lambda)$. Assume that $f: X\rightarrow S$ is $SS(\mathcal F,X/k)$-transversal. Consider the following cartesian diagram \begin{align}\label{eq:rcclass} \xymatrix{ {T}^\ast S\times_SX\ar[r]\ar[d]&{T}^\ast X\ar[d]\\ X\ar[r]^-{0_{X/S}}&{T}^\ast(X/S) } \end{align} where $0_{X/S}\colon X\rightarrow {T}^\ast(X/S)$ is the zero section. Since $f: X\rightarrow S$ is $SS(\mathcal F,X/k)$-transversal, the refined Gysin pull-back $0_{X/S}^!(CC(\mathcal F,X/k))$ of $CC(\mathcal F,X/k)$ is an $r$-cycle class supported on $X$. We define the \emph{relative characteristic class} of $\mathcal F$ to be \begin{align}\label{eq:rcclassdef0} cc_{X/S}(\mathcal F)=(-1)^r\cdot 0_{X/S}^!(CC(\mathcal F,X/k)) \quad{\rm in}\quad {\rm CH}^n(X). \end{align} \end{definition} Now we prove a special case of Conjecture \ref{conj:rtf}. \begin{theorem}[Relative twist formula]\label{thm:rtf} Let $S$ be a smooth connected scheme of dimension $r$ over a finite field $k$ of characteristic $p$. Let $f\colon X\rightarrow S$ be a smooth projective morphism of relative dimension $n$. Let $\mathcal F\in D_c^b(X,\Lambda)$ and $\mathcal G$ a locally constant and constructible sheaf of $\Lambda$-modules on $X$. Assume that $f$ is properly $SS(\mathcal F, X/k)$-transversal. Then there is an isomorphism \begin{align} \det Rf_\ast(\mathcal F\otimes\mathcal G)\cong(\det Rf_\ast\mathcal F)^{\otimes\rank\mathcal G}\otimes \det\mathcal G(cc_{X/S}(\mathcal F)) \; \ \; {\rm in\ } K_0(S,\Lambda). \end{align} Note that we also have $cc_{X/S}(\mathcal F)=(CC(\mathcal F, X/S), {T}^*_X(X/S))_{{T}^\ast(X/S)}\in {\rm CH}^n(X)$. \end{theorem} \begin{proof} We may assume $\mathcal G\neq 0$. Since $\mathcal G$ is a smooth sheaf, we have $SS(\mathcal F, X/k)=SS(\mathcal F\otimes\mathcal G, X/k)$. Since $f$ is proper and $SS(\mathcal F, X/k)$-transversal, by \cite[Lemma 4.3.4]{Sai16}, $Rf_\ast \mathcal F$ and $Rf_\ast (\mathcal F\otimes\mathcal G)$ are smooth sheaves on $S$. For any closed point $s\in S$, we have the following commutative diagram \[ \xymatrix{ {T}^*X\times_XX_s \ar[r]^-{\theta_{s}}\ar[d]^{\rm pr} &{T}^*X_s \cong {T}^*(X/S)\times_XX_s\ar[d]^{\rm pr} &X_s \ar[l]_-{0_{X_s}} \ar[d]^{i}\\ {T}^*X \ar[r]^{\theta} &{T}^*(X/S)\ar@{}|\Box[ul] & X\ar[l]_-{0_{X/S}}\ar@{}|\Box[ul] }\] where $0_{X/S}$ and $0_{X_s}$ are the zero sections.
Hence we have \begin{align} \nonumber cc_{X_s}(\mathcal F|_{X_s})&=(CC(\mathcal F|_{X_s},X_s/s), X_s)_{{T}^*X_s}=0_{X_s}^!CC(\mathcal F|_{X_s},X_s/s)\overset{(a)}{=}0_{X_s}^!i^!CC(\mathcal F, X/k)\\ \nonumber &=(-1)^r0_{X_s}^!i^*CC(\mathcal F, X/k)=(-1)^r0_{X_s}^!\theta_{s*}{\rm pr}^!CC(\mathcal F, X/k)\\ \label{eqs:rtf105}&=(-1)^r0_{X_s}^!{\rm pr}^!\theta_{*}CC(\mathcal F, X/k) =(-1)^r0_{X_s}^!{\rm pr}^!((-1)^rCC(\mathcal F, X/S))\\ \nonumber&=0_{X_s}^!{\rm pr}^!CC(\mathcal F, X/S)=i^!0_{X/S}^!CC(\mathcal F, X/S)=i^!cc_{X/S}(\mathcal F), \end{align} where the equality (a) follows from \cite[Theorem 7.6]{Sai16} since $f$ is properly $SS(\mathcal F, X/k)$-transversal. By Chebotarev density (cf. \cite[Th\'eor\`eme 1.1.2]{Lau87}), we may assume that $S$ is the spectrum of a finite field. Then it is sufficient to compare the Frobenius actions; this follows from \eqref{eqs:rtf105} and Theorem \ref{thm:uyz}. \end{proof} \begin{example} Let $S$ be a smooth projective connected scheme over a finite field $k$ of characteristic $p>2$. Let $f\colon X\rightarrow S$ be a smooth projective morphism of relative dimension $n$, $\chi=\rank Rf_\ast\overline{\mathbb Q}_{\ell}$ the Euler-Poincar\'e characteristic of the fibers and let $\mathcal F$ be a constructible \'etale sheaf of $\Lambda$-modules on $S$. Then by the projection formula, we have $Rf_*f^*\mathcal F \cong \mathcal F\otimes Rf_*\overline{\mathbb Q}_{\ell}.$ Since $f$ is projective and smooth, $Rf_*\overline{\mathbb Q}_{\ell}$ is a smooth sheaf on $S$. Using Theorem \ref{thm:uyz}, we get \begin{align} \varepsilon(S, Rf_\ast f^\ast\mathcal F)= \varepsilon(S, \mathcal F)^{\chi}\cdot {\rm det}Rf_*\overline{\mathbb Q}_{\ell}(-cc_{S/k}(\mathcal F)). \end{align} By \cite[Theorem 2]{Sai94}, ${\rm det}Rf_*\overline{\mathbb Q}_{\ell}=\kappa_{X/S}(-\frac{1}{2}n\chi)$, where $\kappa_{X/S}$ is a character of order at most 2 and is determined in the following way: (1) If $n$ is odd, then $\kappa_{X/S}$ is trivial. (2) If $n=2m$ is even, then $\kappa_{X/S}$ is the quadratic character defined by the square root of $(-1)^{\frac{\chi(\chi-1)}{2}}\cdot \delta_{{\rm dR},X/S}$, where $\delta_{{\rm dR},X/S}\colon (\det H_{\rm dR}(X/S))^{\otimes 2}\xrightarrow{\simeq}\mathcal O_S$ is the de Rham discriminant defined by the non-degenerate symmetric bilinear form $H_{\rm dR}(X/S)\otimes^{L}H_{\rm dR}(X/S)\to \mathcal O_S[-2n]$, and $H_{\rm dR}(X/S)=Rf_*\Omega_{X/S}^{\bullet}$ is the perfect complex of $\mathcal O_S$-modules whose cohomology computes the relative de Rham cohomology of $X/S$. Similarly, if $\mathcal F$ is a locally constant and constructible \'etale sheaf of $\Lambda$-modules on $S$, then \begin{align} \nonumber{\rm det}Rf_\ast f^\ast\mathcal F\cong {\rm det}(\mathcal F\otimes Rf_*\overline{\mathbb Q}_{\ell})&\cong ({\rm det}\mathcal F)^{\otimes\chi}\otimes ({\rm det}Rf_*\overline{\mathbb Q}_{\ell} )^{\otimes{\rm rank}\mathcal F}\\ &\cong ({\rm det}\mathcal F)^{\otimes\chi}\otimes (\kappa_{X/S}(-\frac{1}{2}n\chi))^{\otimes{\rm rank}\mathcal F}. \end{align} \end{example} \subsection{Total characteristic class} \subsubsection{}In the rest of this section, we relate the relative characteristic class $cc_{X/S}(\mathcal F)$ to the total characteristic class of $\mathcal F$. Let $X$ be a smooth scheme purely of dimension $d$ over a perfect field $k$ of characteristic $p$.
In \cite[Definition 6.7.2]{Sai16}, T.~Saito defines the following morphism \begin{align}\label{eq:totcc} cc_{X,\bullet}\colon K_0(X,\Lambda)\to {\rm CH}_\bullet(X)=\bigoplus_{i= 0}^d {\rm CH}_i(X), \end{align} which sends $\mathcal F\in D_c^b(X,\Lambda)$ to the total characteristic class $cc_{X,\bullet}(\mathcal F)$ of $\mathcal F$. For convenience, for any integer $n$ we put \begin{align} cc_{X}^n(\mathcal F)\coloneqq cc_{X,d-n}(\mathcal F)\quad{\rm in}\quad {\rm CH}^n(X). \end{align} By \cite[Lemma 6.9]{Sai16}, for any $\mathcal F\in D_c^b(X,\Lambda)$, we have \begin{align} cc_{X}^d(\mathcal F)=cc_{X,0}(\mathcal F)&=(CC(\mathcal F, X/k), {T}^\ast_XX)_{{T}^\ast X} \quad{\rm in}\quad {\rm CH}_0(X),\\ cc_{X}^0(\mathcal F)= cc_{X,d}(\mathcal F)&=(-1)^d\cdot {\rm rank}\mathcal F\cdot [X]\,\, \quad\quad\quad{\rm in}\quad {\rm CH}_d(X)=\mathbb Z. \end{align} The following proposition gives a computation of $cc_{X}^n(\mathcal F)$ when $X$ admits a smooth fibration of relative dimension $n$ over a smooth base. \begin{proposition}\label{prop:identificationoftwocc} Let $S$ be a smooth connected scheme of dimension $r$ over a perfect field $k$ of characteristic $p$. Let $f\colon X\rightarrow S$ be a smooth morphism purely of relative dimension $n$ and $\mathcal F$ an object of $D_c^b(X,\Lambda)$. Assume that $f$ is $SS(\mathcal F, X/k)$-transversal. Then we have \begin{align}\label{eq:identificationoftwocc} cc_{X}^n(\mathcal F)= (-1)^r\cdot cc_{X/S}(\mathcal F) \quad{\rm in}\quad {\rm CH}^n(X) \end{align} where $cc_{X/S}(\mathcal F)$ is defined in Definition \ref{def:rcclass}. \end{proposition} \begin{proof} We use the notation of \cite[Lemma 6.2]{Sai16}. We put $F=({T}^\ast S\times_SX)\oplus \mathbb A_X^1$ and $E={T}^\ast X\oplus \mathbb A_X^1$. We have a canonical injection $i\colon F\to E$ of vector bundles on $X$ induced by $df\colon T^\ast S\times_SX\rightarrow T^\ast X$. Let $\bar i\colon \mathbb P(F)\rightarrow \mathbb P(E)$ be the canonical map induced by $i\colon F\to E$. By \cite[Lemma 6.1.2 and Lemma 6.2.1]{Sai16}, we have a commutative diagram: \begin{align}\label{eq:identificationoftwocc101} \begin{gathered} \xymatrix{ {\rm CH}_r(\mathbb P(F))&&{\rm CH}_{n+r}(\mathbb P(E))\ar[ll]_-{\bar i^\ast}\\ \bigoplus\limits_{q=0}^r{\rm CH}_q(X)\ar[u]^-{\simeq}\ar[d]_-{\rm can}&&\bigoplus\limits_{q=0}^{n+r}{\rm CH}_q(X)\ar[ll]_-{\rm can}\ar[u]^-{\simeq}\ar[d]_-{\rm can}\\ {\rm CH}_r(X)\ar@{=}[r]&{\rm CH}^n(X)\ar@{=}[r]&{\rm CH}_{r}(X). } \end{gathered} \end{align} Since $f$ is smooth and $SS(\mathcal F, X/k)$-transversal, the intersection $SS(\mathcal F, X/k)\cap ({T}^\ast S\times_SX)$ is contained in the zero section of ${T}^\ast S\times_SX$. Thus the Gysin pull-back $i^\ast ({CC(\mathcal F, X/k)})$ is supported on the zero section of ${T}^\ast S\times_SX$. Let $\overline{CC(\mathcal F, X/k)}$ be any extension of $CC(\mathcal F, X/k)$ to $\mathbb P(E)$ (cf. \cite[Definition 6.7.2]{Sai16}). Then $\bar i^\ast (\overline{CC(\mathcal F, X/k)})$ is an extension of $i^\ast ({CC(\mathcal F, X/k)})$ to $\mathbb P(F)$. By \cite[Definition 6.7.2]{Sai16}, the image of $\overline{CC(\mathcal F, X/k)}$ in ${\rm CH}^n(X)$ by the right vertical map of \eqref{eq:identificationoftwocc101} equals $cc_{X}^n(\mathcal F)=cc_{X,r}(\mathcal F)$. The image of $\bar i^\ast (\overline{CC(\mathcal F, X/k)})$ in ${\rm CH}^n(X)$ by the left vertical map of \eqref{eq:identificationoftwocc101} equals $(-1)^r\cdot cc_{X/S}(\mathcal F)$ (cf. \eqref{eq:rcclassdef0}). Now the equality \eqref{eq:identificationoftwocc} follows from the commutativity of \eqref{eq:identificationoftwocc101}.
\end{proof} \subsubsection{}Following Grothendieck \cite{Gro77}, it is natural to ask the following question: is the diagram \begin{align}\label{eq:fccf} \begin{gathered} \xymatrix{ K_0(X,\Lambda)\ar[d]_{f_\ast}\ar[r]^{cc_{X,\bullet}}&CH_\bullet(X)\ar[d]^{f_*}\\ K_0(Y,\Lambda)\ar[r]^{cc_{Y,\bullet}}&CH_\bullet(Y) } \end{gathered} \end{align} commutative for any proper map $f\colon X\rightarrow Y$ between smooth schemes over a perfect field $k$? If $k=\mathbb C$, the diagram (\ref{eq:fccf}) is commutative by \cite[Theorem A.6]{Gin86}. By the philosophy of Grothendieck, the answer is no in general if ${\rm char}(k)>0$ (cf. \cite[Example 6.10]{Sai16}). However, in \cite[Corollary 1.9]{UYZ}, we prove that the degree zero part of the diagram (\ref{eq:fccf}) is commutative, i.e., if $f\colon X\rightarrow Y$ is a proper map between smooth projective schemes over a finite field $k$ of characteristic $p$, then we have the following commutative diagram \begin{align}\label{eq:fccf2} \begin{gathered} \xymatrix{ K_0(X,\Lambda)\ar[d]_{f_\ast}\ar[r]^{cc_{X,0}}&CH_0(X)\ar[d]^{f_*}\\ K_0(Y,\Lambda)\ar[r]^{cc_{Y,0}}&CH_0(Y). } \end{gathered} \end{align} Now we propose the following: \begin{conjecture}\label{con:pushccr} Let $S$ be a smooth connected scheme over a perfect field $k$ of characteristic $p$. Let $f\colon X\rightarrow S$ be a smooth morphism purely of relative dimension $n$ and $g\colon Y\to S$ a smooth morphism purely of relative dimension $m$. Let $D_{c}^b(X/S,\Lambda)$ be the thick subcategory of $D_c^b(X,\Lambda)$ consisting of $\mathcal F\in D_c^b(X,\Lambda)$ such that $f\colon X\rightarrow S$ is $SS(\mathcal F, X/k)$-transversal. Let $K_0(X/S,\Lambda)$ be the Grothendieck group of $D_{c}^b(X/S,\Lambda)$. Then for any proper morphism $h\colon X\to Y$ over $S$, \begin{align}\label{eq:con:pushccr1} \begin{gathered} \xymatrix{ X\ar[rr]^-h\ar[rd]_-f&&Y\ar[ld]^-g\\ &S } \end{gathered} \end{align} the following diagram commutes \begin{align}\label{eq:con:pushccr2} \begin{gathered} \xymatrix@C=3pc{ K_0(X/S,\Lambda)\ar[d]_{h_\ast}\ar[r]^-{~cc_{X}^n~}&CH^n(X)\ar[d]^{h_*}\\ K_0(Y/S,\Lambda)\ar[r]^-{~cc_{Y}^m~}&CH^m(Y). } \end{gathered} \end{align} That is to say, for any $\mathcal F\in D_c^b(X,\Lambda)$, if $f$ is $SS(\mathcal F, X/k)$-transversal, then we have \begin{align} h_\ast(cc_X^n(\mathcal F))=cc_Y^m(Rh_\ast\mathcal F)\quad{\rm in}\quad CH^m(Y). \end{align} \end{conjecture} \begin{remark} If $f$ is $SS(\mathcal F, X/k)$-transversal, then by \cite[Lemma 3.8 and Lemma 4.2.6]{Sai16} the morphism $g\colon Y\to S$ is $SS(Rh_\ast\mathcal F, Y/k)$-transversal. Thus we have a well-defined map $h_\ast\colon K_0(X/S,\Lambda)\to K_0(Y/S,\Lambda)$. \end{remark} In the next section, we formulate and prove a cohomological version of Conjecture \ref{con:pushccr} (cf. Corollary \ref{cor:pushccc}). \section{Relative cohomological characteristic class}\label{sec:rccc} In this section, we assume that $S$ is a smooth connected scheme over a perfect field $k$ of characteristic $p$ and $\Lambda$ is a finite field of characteristic $\ell$. To simplify the notation, we omit the symbols $R$ and $L$ for derived functors, except for $R\mathcal Hom$ or unless explicitly stated otherwise. We briefly recall the content of this section. Let $X\to S$ be a smooth morphism purely of relative dimension $n$ and $\mathcal F\in D_c^b(X,\Lambda)$. If $X\to S$ is $SS(\mathcal F, X/k)$-transversal, we construct a relative cohomological characteristic class $ccc_{X/S}(\mathcal F)\in H^{2n}(X,\Lambda(n))$ following the method of \cite{AS07,Gro77}.
We conjecture that the image of the cycle class $cc_{X/S}(\mathcal F)$ by the cycle class map ${\rm cl}: {\rm CH}^n(X)\rightarrow H^{2n}(X,\Lambda(n))$ is $ccc_{X/S}(\mathcal F)$ (cf. Conjecture \ref{conj:ccequality}). In Corollary \ref{cor:pushccc}, we prove that the formation of $ccc_{X/S}(\mathcal F)$ is compatible with proper push-forward. \subsection{Relative cohomological correspondence} \subsubsection{}\label{Sec4:notation} Let $\pi_1\colon X_1\to S$ and $ \pi_2\colon X_2\to S$ be smooth morphisms purely of relative dimension $n_1$ and $n_2$, respectively. We put $X\coloneqq X_1\times_SX_2 $ and consider the following cartesian diagram \begin{align} \begin{gathered} \xymatrix{ X\ar[r]^-{\rm pr_2} \ar[d]_-{\rm pr_1}& X_2 \ar[d]^-{\pi_2}\\ X_1 \ar[r]_-{\pi_1} &S. \ar@{}|\Box[ul] } \end{gathered} \end{align} Let $\mathcal E_i$ and $\mathcal F_i$ be objects of $D^b_c(X_i,\Lambda)$ for $i=1,2$. We put \begin{align} &\mathcal F\coloneqq\mathcal F_1\boxtimes^L_S\mathcal F_2\coloneqq {\rm pr}_1^\ast \mathcal F_1 \otimes^L {\rm pr}_2^\ast \mathcal F_2,\\ &\mathcal E\coloneqq\mathcal E_1\boxtimes^L_S\mathcal E_2\coloneqq {\rm pr}_1^\ast \mathcal E_1 \otimes^L {\rm pr}_2^\ast \mathcal E_2, \end{align} which are objects of $D_c^b(X,\Lambda)$. Similarly, we can define $\mathcal F_1\boxtimes^L_k\mathcal F_2$, which is an object of $D_c^b(X_1\times_k X_2,\Lambda)$. We first compare $SS(\mathcal F_1\boxtimes_S^L\mathcal F_2, X_1\times_S X_2/k)$ and $SS(\mathcal F_1\boxtimes_k^L\mathcal F_2, X_1\times_k X_2/k)$. \begin{lemma}\label{lem:boxtimesTrans} Assume that $\pi_1\colon X_1\to S$ is $SS(\mathcal F_1, X_1/k)$-transversal. Then we have \begin{align}\label{eq:lem:boxtimesTrans00} SS({\rm pr}_1^\ast \mathcal F_1, X/k)\cap SS({\rm pr}_2^\ast \mathcal F_2, X/k)\subseteq T^\ast_XX. \end{align} Moreover, the closed immersion $i\colon X_1\times_SX_2 \hookrightarrow X_1\times_kX_2$ is $SS(\mathcal F_1\boxtimes^L_k\mathcal F_2, X_1\times_kX_2/k)$-transversal and \begin{equation}\label{eq:lem:boxtimesTrans} SS(\mathcal F_1\boxtimes_S^L\mathcal F_2, X_1\times_SX_2/k)\subseteq i^{\circ}(SS(\mathcal F_1\boxtimes^L_k\mathcal F_2, X_1\times_k X_2/k)). \end{equation} \end{lemma} \begin{proof} We first prove \eqref{eq:lem:boxtimesTrans00}. Since $X_i\to S$ is smooth, we obtain an exact sequence of vector bundles on $X_i$ for $i=1,2$ \begin{align} \label{eq:lem:boxtimesTrans101} 0 \to {T}^*S\times_SX_i \xrightarrow{d\pi_i} {T}^*X_i\to {T}^*(X_i/S)\to 0. \end{align} Since $\pi_1\colon X_1\to S$ is $SS(\mathcal F_1, X_1/k)$-transversal, we have \begin{align}\label{eq:lem:boxtimesTrans102} SS(\mathcal F_1,X_1/k)\cap (T^*S\times_SX_1)\subseteq T^*_SS\times_SX_1. \end{align} Consider the following diagram with exact rows and exact columns: \begin{align}\label{eq:lem:boxtimesTrans103} \begin{gathered} \xymatrix@R=1pc@C=1.5pc{ &0&0&&\\ & {T}^*(X_2/S)\times_{X_2}X \ar[r]^-{\cong} \ar[u]&{T}^*(X/X_1) \ar[u] &&\\ 0\ar[r] & {T}^*X_2\times_{X_2}X \ar[u]\ar[r] & {T}^*X\ar[u] \ar[r] & {T}^*(X/X_2) \ar[r] &0\\ 0\ar[r] & {T}^*S\times_{S}X \ar[u]\ar[r] & {T}^*X_{1}\times_{X_1}X \ar[r]\ar[u] & {T}^*(X_1/S)\times_{X_1}X \ar[r] \ar[u]_-{\cong} &0\\ &0\ar[u]&0 \ar[u]&& } \end{gathered} \end{align} Since ${\rm pr}_i$ is smooth, by \cite[Corollary 8.15]{Sai16}, we have \[ SS({\rm pr}_i^\ast \mathcal F_i, X/k)={\rm pr}_i^{\circ}SS(\mathcal F_i,X_i/k)=SS(\mathcal F_i,X_i/k)\times_{X_i}X.
\] It follows from \eqref{eq:lem:boxtimesTrans102} and \eqref{eq:lem:boxtimesTrans103} that ${\rm pr}_1^{\circ}SS(\mathcal F_1,X_1/k)\cap {\rm pr}_2^{\circ}SS(\mathcal F_2,X_2/k) \subseteq {T}_X^*X$. Thus $SS({\rm pr}_1^\ast \mathcal F_1, X/k)\cap SS({\rm pr}_2^\ast \mathcal F_2, X/k)\subseteq T^\ast_XX $. This proves \eqref{eq:lem:boxtimesTrans00}. Now we consider the cartesian diagram \begin{align}\label{eq:lem:boxtimesTrans104} \begin{gathered} \xymatrix{ X=X_1\times_SX_2 \ar[r]^-i \ar[d] & X_1\times_kX_2\ar[d]\\ S\ar[r]^-{\delta} &S\times_kS\ar@{}|\Box[ul] } \end{gathered} \end{align} where $\delta\colon S\rightarrow S\times_k S$ is the diagonal. We get the following commutative diagram of vector bundles on $X$ with exact rows: \[ \xymatrix@C=1pc@R=1pc{ &&T^\ast X_1\times_S T^\ast X_2\ar@{=}[d]\\ 0\ar[r] &\mathcal N_{{X}/(X_1\times_kX_2)} \ar[r] & T^*(X_1\times_kX_2)\times_{X_1\times_kX_2}X \ar[r]^-{di} &T^*X\ar[r]&0\\ 0\ar[r]&\mathcal N_{S/(S\times_kS)}\times_SX\ar[u]_-{\cong} \ar[r]&T^{*}(S\times_kS)\times_{S\times_kS}X\ar[u]\ar[r]^-{d\delta} &T^*S\times_S X\ar[u] \ar[r]&0\\ &T^\ast S\times_SX\ar@{=}[u]\ar[r]&(T^\ast S\times_S X_1)\times_S (T^\ast S\times_SX_2)\ar@{=}[u], }\] where $\mathcal N_{S/(S\times_kS)}$ is the conormal bundle associated to $\delta\colon S\to S\times_k S$. By \cite[Theorem 2.2.3]{Sai17}, we have $SS(\mathcal F_1\boxtimes_k^L\mathcal F_2, X_1\times_kX_2/k)=SS(\mathcal F_1, X_1/k)\times SS(\mathcal F_2, X_2/k)$. Therefore by \eqref{eq:lem:boxtimesTrans102}, $\mathcal N_{{X}/(X_1\times_kX_2)} \cap SS(\mathcal F_1\boxtimes_k^L\mathcal F_2, X_1\times_kX_2/k)$ is contained in the zero section of $\mathcal N_{{X}/(X_1\times_kX_2)}$. Hence $i\colon X \hookrightarrow X_1\times_kX_2$ is $SS(\mathcal F_1\boxtimes^L_k\mathcal F_2, X_1\times_kX_2/k)$-transversal. Now the assertion \eqref{eq:lem:boxtimesTrans} follows from \cite[Lemma 4.2.4]{Sai16}. \end{proof} \begin{proposition}\label{pro:doubleRHOMiso} Under the notation in \ref{Sec4:notation}, we assume that \begin{enumerate} \item $SS(\mathcal E_i,X_i/k)\cap SS(\mathcal F_i,X_i/k)\subseteq T^*_{X_i}X_i $ for all $i=1,2$; \item $\pi_1\colon X_1\to S$ is $SS(\mathcal E_1,X_1/k)$-transversal or $\pi_2\colon X_2\to S$ is $SS(\mathcal F_2,X_2/k)$-transversal; \item $\pi_1\colon X_1\to S$ is $SS(\mathcal F_1,X_1/k)$-transversal or $\pi_2\colon X_2\to S$ is $SS(\mathcal E_2,X_2/k)$-transversal. \end{enumerate} Then the following canonical map $($cf. \cite[(7.2.2)]{Zh15} and \cite[Expos\'e III, (2.2.4)]{Gro77}$)$ \begin{align} \label{tensor.pairing} R\mathcal Hom(\mathcal E_1,\mathcal F_1)\boxtimes^L_S R\mathcal Hom(\mathcal E_2,\mathcal F_2) \to R\mathcal Hom(\mathcal E,\mathcal F). \end{align} is an isomorphism. \end{proposition} If $S$ is the spectrum of a field, then the above result is proved in \cite[Expos\'e III, Proposition 2.3]{Gro77}. Our proof below is different from that of \emph{loc.cit.} and is based on \cite{Sai16}. \begin{proof} In the following, we put $\mathcal E_i^{\vee}\coloneq R\mathcal Hom(\mathcal E_i, \Lambda) $. 
Since $SS(\mathcal E_i,X_i/k)\cap SS(\mathcal F_i,X_i/k)\subseteq T^*_{X_i}X_i $, Lemma \ref{Lem:Kun} implies that \begin{equation} \mathcal F_i\otimes^L \mathcal E_i^{\vee}= \mathcal F_i\otimes^LR\mathcal Hom(\mathcal E_i, \Lambda) \xrightarrow{\cong} R\mathcal Hom(\mathcal E_i, \mathcal F_i),\; {\rm for\; all }\;i=1,2, \end{equation} Hence we have \begin{align} \label{isom.lhs} R\mathcal Hom(\mathcal E_1,\mathcal F_1)\boxtimes^L_S R\mathcal Hom(\mathcal E_2,\mathcal F_2)\cong (\mathcal F_1\otimes^L\mathcal E_1^{\vee})\boxtimes^L_S(\mathcal F_2 \otimes^L\mathcal E_2^{\vee})\\ \nonumber \cong (\mathcal F_1\boxtimes^L_S\mathcal F_2)\otimes^L(\mathcal E_1^{\vee} \boxtimes^L_S\mathcal E_2^{\vee}). \end{align} Note that we also have \begin{align} \nonumber\mathcal E_1^{\vee} \boxtimes^L_S\mathcal E_2^{\vee}&={\rm pr}_1^*R\mathcal Hom(\mathcal E_1,\Lambda) \otimes^L{\rm pr}_2^*R\mathcal Hom(\mathcal E_2,\Lambda)\\ \label{isom.dual} &\cong R\mathcal Hom({\rm pr}_1^*\mathcal E_1,\Lambda) \otimes^LR\mathcal Hom({\rm pr}_2^*\mathcal E_2,\Lambda)\\ \nonumber &\overset{(a)}{\cong} R\mathcal Hom({\rm pr}_1^*\mathcal E_1, R\mathcal Hom({\rm pr}_2^*\mathcal E_2,\Lambda))\\ \nonumber &\cong R\mathcal Hom({\rm pr}_1^*\mathcal E_1\otimes^L{\rm pr}_2^*\mathcal E_2,\Lambda)=\mathcal E^{\vee}, \end{align} where the isomorphism (a) follows from Lemma \ref{Lem:Kun} by the fact that (cf. Lemma \ref{lem:boxtimesTrans}) \[SS({\rm pr}_1^*\mathcal E_1,X/k)\cap SS({\rm pr}_2^*\mathcal E_2,X/k)\subseteq T^*_XX. \] By Lemma \ref{lem:boxtimesTrans}, we have \begin{align} \nonumber&SS(\mathcal E,X/k)\cap SS(\mathcal F,X/k)\\ \nonumber&\subseteq i^\circ(SS(\mathcal E_1\boxtimes^L_k\mathcal E_2, X_1\times_k X_2/k))\cap i^\circ(SS(\mathcal F_1\boxtimes^L_k\mathcal F_2, X_1\times_k X_2/k))\\ \nonumber&\overset{(b)}{=}i^\circ(SS(\mathcal E_1, X_1)\times SS(\mathcal E_2, X_2))\cap i^\circ(SS(\mathcal F_1, X_1)\times SS(\mathcal F_2, X_2))\\ \nonumber&\overset{(c)}{\subseteq} T^\ast _XX, \end{align} where the equality (b) follows from \cite[Theorem 2.2.3]{Sai17}, and $(c)$ follows from the assumptions (2) and (3) (cf. \cite[Lemma 2.7.2]{Sai17}). Thus by Lemma \ref{Lem:Kun}, we have \begin{align}\label{eq:isomKunDual} \mathcal F\otimes^L \mathcal E^{\vee}\cong R\mathcal Hom(\mathcal E, \mathcal F). \end{align} Combining \eqref{isom.lhs}, \eqref{isom.dual} and \eqref{eq:isomKunDual}, we get \begin{align} R\mathcal Hom(\mathcal E_1,\mathcal F_1)\boxtimes^L_S R\mathcal Hom(\mathcal E_2,\mathcal F_2)\cong \mathcal F\otimes^L \mathcal E^{\vee}\cong R\mathcal Hom(\mathcal E, \mathcal F). \end{align} This finishes the proof. \end{proof} \subsubsection{K\"unneth formula} We have the following canonical morphism \begin{equation}\label{eqs:Kunn} \mathcal F_1\boxtimes^L_S R\mathcal Hom(\mathcal F_2, \pi_2^!\Lambda_S) \to R\mathcal Hom({\rm pr}_2^*\mathcal F_2, {\rm pr}_1^!\mathcal F_1), \end{equation} by taking the adjunction of the following composition map \begin{align} \nonumber {\rm pr}_1^*\mathcal F_1\otimes {\rm pr}_2^*R\mathcal Hom(\mathcal F_2, \pi_2^!\Lambda_S) &\otimes {\rm pr}_2^*\mathcal F_2 \to {\rm pr}_1^*\mathcal F_1\otimes {\rm pr}_2^*(\mathcal F_2 \otimes R\mathcal Hom(\mathcal F_2, \pi_2^!\Lambda_S))\\ \nonumber &\xrightarrow{\rm evaluation} {\rm pr}_1^*\mathcal F_1\otimes {\rm pr}_2^*\pi_2^!\Lambda_S \to {\rm pr}_1^*\mathcal F_1\otimes{\rm pr}_1^!\Lambda_{X_1} \to {\rm pr}_1^!\mathcal F_1. 
\end{align} \begin{corollary}\label{cor:SSKunn} Assume that $\pi_1\colon X_1\to S$ is $SS(\mathcal F_1, X_1/k)$-transversal or $\pi_2\colon X_2\to S$ is $SS(\mathcal F_2, X_2/k)$-transversal. Then the canonical map \eqref{eqs:Kunn} is an isomorphism. \end{corollary} If $S$ is the spectrum of a field, then the above result is proved in \cite[Expos\'e III, (3.1.1)]{Gro77}. Our proof below is different from that of \emph{loc.cit}. \begin{proof} By Proposition \ref{pro:doubleRHOMiso}, we have the following isomorphisms \begin{align} \nonumber \mathcal F_1\boxtimes^L_S R\mathcal Hom(\mathcal F_2, \pi_2^!\Lambda_S) &\overset{Prop.{\ref{pro:doubleRHOMiso}}}{\cong} R\mathcal Hom({\rm pr}_2^*\mathcal F_2, {\rm pr}_1^\ast\mathcal F_1\otimes {\rm pr}_1^!\Lambda_S)\\ \nonumber&\overset{(a)}{\cong} R\mathcal Hom({\rm pr}_2^*\mathcal F_2, {\rm pr}_1^!\mathcal F_1), \end{align} where $(a)$ follows from the fact that ${\rm pr}_1$ is smooth (cf. \cite[XVI, Th\'eor\`eme 3.1.1]{ILO14} and \cite[XVIII, Theor\'eme 3.2.5]{SGA4T3}). \end{proof} \begin{definition} Let $X_i, \mathcal F_i$ be as in \ref{Sec4:notation} for $i=1,2$. A \emph{relative correspondence} between $X_1$ and $X_2$ is a scheme $C$ over $S$ with morphisms $c_1\colon C\to X_1$ and $c_2\colon C\to X_2$ over $S$. We put $c=(c_1,c_2)\colon C\to X_1\times_S X_2$ the corresponding morphism. A morphism $u\colon c_2^*\mathcal F_2\to c_1^!\mathcal{F}_1$ is called a \emph{relative cohomological correspondence} from $\mathcal F_2$ to $\mathcal{F}_1$ on $C$. \end{definition} \subsubsection{}Given a correspondence $C$ as above, we recall that there is a canonical isomorphism \cite[XVIII, 3.1.12.2]{SGA4T3} \begin{equation}\label{eq:pullbackupp} R\mathcal Hom(c_2^*\mathcal F_2, c_1^!\mathcal F_1) \xrightarrow{\cong} c^!R\mathcal Hom({\rm pr}_2^*\mathcal F_2, {\rm pr}_1^!\mathcal F_1). \end{equation} \subsubsection{} For $i=1,2$, consider the following diagram of $S$-morphisms \[ \xymatrix{ X_i \ar[rr]^{f_i}\ar[dr]_{\pi_i} && Y_i \ar[dl]^{q_i}\\ &S,& }\] where $\pi_i$ and $q_i$ are smooth morphisms. We put $X\coloneq X_1\times_SX_2$, $Y\coloneq Y_1\times_SY_2$ and $f\coloneq f_1\times_Sf_2\colon X \to Y$. Let $\mathcal M_i\in D^b_c(Y_i,\Lambda)$ for $i=1,2$. We have a canonical map (cf. \cite[Construction 7.4]{Zh15} and \cite[Expos\'e III, (1.7.3)]{Gro77}) \begin{equation}\label{morp.Kunneth} f_1^!\mathcal M_1\boxtimes^L_S f_2^!\mathcal M_2 \to f^!(\mathcal M_1\boxtimes^L_S\mathcal M_2) \end{equation} which is adjoint to the composite \begin{align} f_!(f_1^!\mathcal M_1\boxtimes^L_S f_2^!\mathcal M_2)\xrightarrow[(a)]{\simeq} f_{1!}f_1^!\mathcal M_1\boxtimes^L_S f_{2!}f_2^!\mathcal M_2\xrightarrow{{\rm adj}\boxtimes{\rm adj}}\mathcal M_1\boxtimes^L_S\mathcal M_2 \end{align} where (a) is the K\"unneth isomorphism \cite[XVII, Th\'eor\`eme 5.4.3]{SGA4T3}. \begin{proposition}\label{prop:fUpperIsomKun} If $q_2\colon Y_2\to S$ is $SS(\mathcal M_2, Y_2/k)$-transversal, then the map \eqref{morp.Kunneth} is an isomorphism. \end{proposition} If $S$ is the spectrum of a field, the above result is proved in \cite[Expos\'e III, Proposition 1.7.4]{Gro77}. 
\begin{proof} Consider the following cartesian diagrams \[ \xymatrix{ X_1\times_SX_2 \ar[r]^{f_1\times {\rm id}}\ar[dr]^f\ar[d]_{{\rm id}\times f_2} &Y_1\times_SX_2\ar[r] \ar[d]^{{\rm id}\times f_2} &X_2\ar[d]^{f_2}\\ X_1\times_SY_2 \ar[d]_{\rm pr_1}\ar[r]^{f_1\times {\rm id}}&Y_1\times_SY_2\ar[d]^{\rm pr_1} \ar[r]^-{\rm pr_2} & Y_2\ar[d]^{q_2}\\ X_1\ar[r]^{f_1}\ar[dr]_{\pi_1} &Y_1 \ar[r]^{q_1} \ar[d]^{q_1} & S\\ &S.& }\] We may assume that $X_2=Y_2$ and $f_2={\rm id}$, i.e., it suffices to show that the canonical map \begin{equation}\label{eq:prop:fUpperIsomKun100} f_1^!\mathcal M_1\boxtimes^L_S\mathcal M_2 \xrightarrow{\cong} (f_1\times {\rm id})^!(\mathcal M_1\boxtimes^L_S \mathcal M_2). \end{equation} is an isomorphism. Since we have \begin{align} \nonumber\mathcal M_2\cong D_{Y_2}D_{Y_2}\mathcal M_2&\cong R\mathcal Hom(D_{Y_2}\mathcal M_2, \mathcal K_{Y_2})\\ \nonumber&\cong R\mathcal Hom(D_{Y_2}(\mathcal M_2)(-{\rm dim}S)[-2{\rm dim}S], q_2^!\Lambda_S), \end{align} we may assume $\mathcal M_2= R\mathcal Hom(\mathcal L_2, q_2^!\Lambda_S)$ for some $\mathcal L_2\in D^b_c(Y_2, \Lambda)$. By \cite[Corollary 4.9]{Sai16}, we have $SS(\mathcal M_2, Y_2/k)=SS(\mathcal L_2, Y_2/k)$. Thus by assumption, the morphism $q_2\colon Y_2\to S$ is $SS(\mathcal L_2, Y_2/k)$-transversal. By Corollary \ref{cor:SSKunn}, we have an isomorphism \begin{align}\label{eq:prop:fUpperIsomKun101} &\mathcal M_1\boxtimes^L_S R\mathcal Hom(\mathcal L_2, q_2^!\Lambda_S)\cong R\mathcal Hom({\rm pr}_2^\ast\mathcal L_2, {\rm pr}_1^!\mathcal M_1) \quad{\rm in}\, D_c^b(Y_1\times_S Y_2,\Lambda),\\ \label{eq:prop:fUpperIsomKun102}& f_1^!\mathcal M_1\boxtimes^L_S R\mathcal Hom (\mathcal L_2, q_2^!\Lambda_S)\cong R\mathcal Hom((f_1\times {\rm id})^*{\rm pr}_2^*\mathcal L_2, {\rm pr}_1^!f_1^!\mathcal M_1) \,\,{\rm in}\, D_c^b(X_1\times_S Y_2,\Lambda). \end{align} We have \begin{align} \nonumber (f_1\times {\rm id})^!(\mathcal M_1\boxtimes^L_S \mathcal M_2)&\overset{\quad\qquad}{=}(f_1\times{\rm id})^!(\mathcal M_1\boxtimes^L_S R\mathcal Hom(\mathcal L_2, q_2^!\Lambda_S))\\ \nonumber&\overset{\eqref{eq:prop:fUpperIsomKun101}}{\cong} (f_1\times{\rm id})^!(R\mathcal Hom({\rm pr}_2^\ast\mathcal L_2, {\rm pr}_1^!\mathcal M_1))\\ &\overset{\eqref{eq:pullbackupp}}{\cong} R\mathcal Hom((f_1\times {\rm id})^*{\rm pr}_2^*\mathcal L_2, (f_1\times {\rm id})^!{\rm pr}_1^!\mathcal M_1)\\ \nonumber&\overset{\quad\qquad}{\cong} R\mathcal Hom((f_1\times {\rm id})^*{\rm pr}_2^*\mathcal L_2, {\rm pr}_1^!f_1^!\mathcal M_1)\\ \nonumber&\overset{\eqref{eq:prop:fUpperIsomKun102}}{\cong} f_1^!\mathcal M_1\boxtimes^L_S R\mathcal Hom (\mathcal L_2, q_2^!\Lambda_S) \cong f_1^!\mathcal M_1\boxtimes^L_S \mathcal M_2. \end{align} This finishes the proof. \end{proof} \subsection{Relative cohomological characteristic class} \subsubsection{} We introduce some notation for convenience. For any commutative diagram \[ \xymatrix{ W\ar[rd]_-f\ar[rr]^-h&& V\ar[ld]^-g\\ &{\rm Spec}k& } \] of schemes, we put \begin{align} &\mathcal K_W\coloneq Rf^!\Lambda,\quad \\ &\mathcal K_{W/V}\coloneq Rh^!\Lambda_{V}. \end{align} Under the notation in \ref{Sec4:notation}, by Proposition \ref{prop:fUpperIsomKun}, we have an isomorphism \begin{align}\label{eq:KWViso} \mathcal K_{X_1/S}\boxtimes^L_S \mathcal K_{X_2/S}\simeq \mathcal K_{X/S}. \end{align} \subsubsection{}Consider a cartesian diagram \begin{align} \begin{gathered} \xymatrix{ E\ar[rd]^-e\ar[d]\ar[r]&D\ar[d]^-d\\ C\ar[r]^-c&X } \end{gathered} \end{align} of schemes over $k$. 
Let $\mathcal F$, $\mathcal G$ and $\mathcal H$ be objects of $D_c^b(X,\Lambda)$ and $\mathcal F\otimes\mathcal G\rightarrow \mathcal H$ any morphism. By the K\"unneth isomorphism \cite[XVII, Th\'eor\`eme 5.4.3]{SGA4T3} and adjunction, we have \[ e_!(c^!\mathcal F\boxtimes^L_X d^! \mathcal G)\xrightarrow{\simeq} c_!c^!\mathcal F\otimes^L d_!d^!\mathcal G\rightarrow \mathcal F\otimes \mathcal G\rightarrow \mathcal H. \] By adjunction, we get a morphism \begin{align}\label{eq:pair1-1} c^!\mathcal F\boxtimes^L_X d^! \mathcal G\rightarrow e^!\mathcal H. \end{align} Thus we get a pairing \begin{align}\label{eq:pair1} \langle,\rangle: H^0(C,c^!\mathcal F)\otimes H^0(D,d^!\mathcal G)\rightarrow H^0(E, e^!\mathcal H). \end{align} \subsubsection{} Now we define the relative Verdier pairing by applying the map \eqref{eq:pair1} to relative cohomological correspondences. Let $\pi_1\colon X_1\rightarrow S$ and $\pi_2\colon X_2\rightarrow S$ be smooth morphisms. Consider a cartesian diagram \begin{align} \begin{gathered} \xymatrix@C=5pc{ E\ar[rd]^-e\ar[d]\ar[r]&D\ar[d]^-{d=(d_1,d_2)}\\ C\ar[r]_-{c=(c_1,c_2)}&X=X_1\times_S X_2 } \end{gathered} \end{align} of schemes over $S$. Let $\mathcal F_1\in D_c^b(X_1,\Lambda)$ and $\mathcal F_2\in D_c^b(X_2,\Lambda)$. Assume that one of the following conditions holds: \begin{enumerate} \item $\pi_1\colon X_1\to S$ is $SS(\mathcal F_1, X_1/k)$-transversal; \item $\pi_2\colon X_2\to S$ is $SS(\mathcal F_2, X_2/k)$-transversal. \end{enumerate} By Corollary \ref{cor:SSKunn}, we have \begin{align} \nonumber&R\mathcal Hom({\rm pr}_2^\ast\mathcal F_2,{\rm pr}_1^!\mathcal F_1)\otimes^L R\mathcal Hom({\rm pr}_1^\ast\mathcal F_1,{\rm pr}_2^!\mathcal F_2)\\ \label{eq:pair3}&\xrightarrow{\simeq}(\mathcal F_1\boxtimes^L_S R\mathcal Hom(\mathcal F_2, \pi_2^!\Lambda_S))\otimes^L( R\mathcal Hom(\mathcal F_1, \pi_1^!\Lambda_S)\boxtimes^L_S\mathcal F_2)\\ \nonumber&\xrightarrow{\rm evaluation} \pi_1^!\Lambda_S\boxtimes^L_S \pi_2^!\Lambda_S\overset{\eqref{eq:KWViso}}{\cong}\mathcal K_{X/S}. \end{align} By \eqref{eq:pullbackupp}, \eqref{eq:pair1-1}, \eqref{eq:pair1} and \eqref{eq:pair3}, we get the following pairings \begin{align} \label{eq:pair4-1}&c_!R\mathcal Hom({c}_2^\ast\mathcal F_2,{c}_1^!\mathcal F_1)\otimes^L d_!R\mathcal Hom({d}_1^\ast\mathcal F_1,{d}_2^!\mathcal F_2)\rightarrow e_!\mathcal K_{E/S},\\ \label{eq:pair4} &\langle,\rangle: Hom({c}_2^\ast\mathcal F_2,{c}_1^!\mathcal F_1)\otimes Hom({d}_1^\ast\mathcal F_1,{d}_2^!\mathcal F_2)\rightarrow H^0(E, e^!(\mathcal K_{X/S}))=H^0(E, \mathcal K_{E/S}). \end{align} The pairing \eqref{eq:pair4} is called the \emph{relative Verdier pairing} (cf. \cite[Expos\'e III (4.2.5)]{Gro77}). \begin{definition}\label{def:ccc} Let $f\colon X\rightarrow S$ be a smooth morphism purely of relative dimension $n$ and $\mathcal F\in D_c^b(X,\Lambda)$. We assume that $f$ is $SS(\mathcal F, X/k)$-transversal. Let $c=(c_1,c_2)\colon C\rightarrow X\times_SX$ be a closed immersion and $u\colon c_2^\ast\mathcal F\rightarrow c_1^!\mathcal F$ be a relative cohomological correspondence on $C$. We define the \emph{relative cohomological characteristic class} $ccc_{X/S}(u)$ of $u$ to be the cohomology class $\langle u,1\rangle\in H^0_{C\cap X}(X, \mathcal K_{X/S})$ defined by the pairing \eqref{eq:pair4}. 
In particular, if $C=X$ and $c\colon C\to X\times_SX$ is the diagonal and if $u\colon \mathcal F\rightarrow \mathcal F$ is the identity, we write \[ccc_{X/S}(\mathcal F)=\langle1,1\rangle \quad{\rm in}\quad H^{2n}(X, \Lambda(n))\] and call it the \emph{relative cohomological characteristic class} of $\mathcal F$. \end{definition} If $S$ is the spectrum of a perfect field, then the above definition is \cite[Definition 2.1.1]{AS07}. \begin{example} If $\mathcal F$ is a locally constant and constructible sheaf of $\Lambda$-modules on $X$, then we have $ccc_{X/S}(\mathcal F)={\rm rank}\mathcal F\cdot {\rm cl}(c_n(\Omega^\vee_{X/S})\cap [X])\in H^{2n}(X,\Lambda(n))$, where ${\rm cl}$ denotes the cycle class map. \end{example} \begin{conjecture}\label{conj:ccequality} Let $S$ be a smooth connected scheme over a perfect field $k$ of characteristic $p$. Let $f\colon X\to S$ be a smooth morphism purely of relative dimension $n$ and $\mathcal F\in D_c^b(X,\Lambda)$. Assume that $f$ is $SS(\mathcal F, X/k)$-transversal. Let ${\rm cl}\colon {\rm CH}^n(X)\rightarrow H^{2n}(X, \Lambda(n))$ be the cycle class map. Then we have \begin{equation} {\rm cl}(cc_{X/S}(\mathcal F))=ccc_{X/S}(\mathcal F) \quad{\rm in}\quad H^{2n}(X, \Lambda(n)), \end{equation} where $cc_{X/S}(\mathcal F)$ is the relative characteristic class defined in Definition \ref{def:rcclass}. \end{conjecture} If $S$ is the spectrum of a perfect field, then the above conjecture is \cite[Conjecture 6.8.1]{Sai16}. \subsection{Proper push-forward of relative cohomological characteristic class} \subsubsection{} For $i=1,2$, let $f_i\colon X_i\to Y_i$ be a proper morphism between smooth schemes over $S$. Let $X\coloneqq X_1\times_SX_2$, $Y\coloneqq Y_1\times_S Y_2$ and $f\coloneqq f_1\times_Sf_2$. Let $p_i\colon X\to X_i$ and $q_i\colon Y\to Y_i$ be the canonical projections for $i=1,2$. Consider a commutative diagram \begin{align} \begin{gathered} \xymatrix{ X\ar[d]_-f& C\ar[l]_-c\ar[d]^-g\\ Y&D\ar[l]_-d } \end{gathered} \end{align} of schemes over $S$. Assume that $c$ is proper. Put $c_i=p_i c$ and $d_i=q_id$. By \cite[Construction 7.17]{Zh15}, we have the following push-forward maps for cohomological correspondences (see also \cite[Expos\'e III, (3.7.6)]{Gro77} if $S$ is the spectrum of a field): \begin{align} \label{eq:pfccor} &f_\ast \colon Hom(c_2^\ast \mathcal L_2, c_1^!\mathcal L_1)\to Hom(d_2^\ast(f_{2!}\mathcal L_2), d_1^!(f_{1\ast}\mathcal L_1) ),\\ \label{eq:pfccor2} &f_\ast \colon g_\ast R\mathcal Hom(c_2^\ast \mathcal L_2, c_1^!\mathcal L_1)\to R\mathcal Hom(d_2^\ast(f_{2!}\mathcal L_2), d_1^!(f_{1\ast}\mathcal L_1) ). \end{align} \begin{theorem}[{\cite[Th\'eor\`eme 4.4]{Gro77}}]\label{thm:mainPSILL} For $i=1,2$, let $f_i\colon X_i\to Y_i$ be a proper morphism between smooth schemes over $S$. Let $X\coloneqq X_1\times_SX_2$, $Y\coloneqq Y_1\times_S Y_2$ and $f\coloneqq f_1\times_Sf_2$. Let $p_i\colon X\to X_i$ and $q_i\colon Y\to Y_i$ be the canonical projections for $i=1,2$. Consider the following commutative diagram with cartesian horizontal faces \[ \begin{tikzcd}[row sep=2.5em] C' \arrow[dr,swap,"c'"] \arrow[dd,swap,"f'"] && C \arrow[dd,swap,"g" near start] \arrow[dr] \arrow[ll]\arrow[dl, swap,"c"]\\ & X && C'' \arrow[dd,"f''"]\arrow[ll,crossing over, swap, "c''" near start] \\ D' \arrow[dr,swap,"d'"] && D \arrow[dr] \arrow[ll] \ar[dl,"d"] \\ & Y \arrow[uu,<-,crossing over,"f" near end]&& D''\arrow[ll,"d''"] \end{tikzcd} \] where $c', c'', d'$ and $ d''$ are proper morphisms between smooth schemes over $S$. Let $c_i'=p_i c', c_i''=p_i c'', d_i'=q_i d', d_i''=q_i d''$ for $i=1,2$.
Let $\mathcal L_i \in D^b_c(X_i,\Lambda)$ and we put $\mathcal M_i=f_{i\ast}\mathcal L_i$ for $i=1,2$. Assume that one of the following conditions holds: \begin{enumerate} \item $ X_1\to S$ is $SS(\mathcal L_1, X_1/k)$-transversal; \item $ X_2\to S$ is $SS(\mathcal L_2, X_2/k)$-transversal. \end{enumerate} Then we have the following commutative diagram \begin{align}\label{eq:thm:mainPSILL} \begin{gathered} \xymatrix{ f_*{c}^\prime_*R\mathcal Hom({c}^{\prime*}_2\mathcal L_2, {c}^{\prime!}_1\mathcal L_1)\otimes^Lf_*{c}^{\prime\prime}_*R\mathcal Hom({c}^{\prime\prime*}_1\mathcal L_1, {c}^{\prime\prime!}_2\mathcal L_2) \ar[r]^-{(1)}\ar[d]^{(2)} & f_*c_*\mathcal K_{C/S}\ar[d]^{(4)} \\ {d}^\prime_*R\mathcal Hom({d}_2^{\prime*}\mathcal M_2, {d}_1^{\prime!}\mathcal M_1)\otimes^L{d}^{\prime\prime}_*R\mathcal Hom({d}_1^{\prime\prime*}\mathcal M_1,{d}_2^{\prime\prime!}\mathcal M_2) \ar[r]^-{(3)} &d_*\mathcal K_{D/S} } \end{gathered} \end{align} where $(3)$ is given by \eqref{eq:pair4-1}, $(1)$ is the composition of $f_\ast(\eqref{eq:pair4-1})$ with the canonical map $f_\ast c^{\prime}_\ast\otimes^L f_\ast c_\ast^{\prime\prime}\to f_\ast(c_\ast^\prime\otimes c_\ast^{\prime\prime})$, $(2)$ is induced from \eqref{eq:pfccor2}, and $(4)$ is defined by \begin{align} f_*c_*\mathcal K_{C/S}\simeq d_\ast g_\ast \mathcal K_{C/S}=d_\ast g_!g^!\mathcal K_{D/S}\xrightarrow{\rm adj}d_\ast\mathcal K_{D/S}. \end{align} \end{theorem} If $S$ is the spectrum of a field, this is proved in \cite[Th\'eore\`eme 4.4]{Gro77}. We use the same notation as {\emph{loc.cit.}} \begin{proof} By \cite[Lemma 3.8 and Lemma 4.2.6]{Sai16} and the assumption, one of the following conditions holds: \begin{enumerate} \item[(a1)] $ Y_1\to S$ is $SS(\mathcal M_1, Y_1/k)$-transversal; \item[(a2)] $ Y_2\to S$ is $SS(\mathcal M_2, Y_2/k)$-transversal. \end{enumerate} Now we can use the same proof of \cite[Th\'eor\`eme 4.4]{Gro77}. We only sketch the main step. Put \begin{align} &\mathcal P=\mathcal L_1\boxtimes^L_S R\mathcal Hom(\mathcal L_2,\mathcal K_{X_2/S}),\quad \mathcal Q=R\mathcal Hom(\mathcal L_1,\mathcal K_{X_1/S})\boxtimes^L_S \mathcal L_2\\ &\mathcal E=\mathcal M_1 \boxtimes^L_SR\mathcal Hom(\mathcal M_2,\mathcal K_{Y_2/S}),\quad \mathcal F=R\mathcal Hom(\mathcal M_1, \mathcal K_{Y_1/S})\boxtimes_S^L \mathcal M_2. \end{align} Then the theorem follows from the following commutative diagram \[ \xymatrix{ f_\ast c_\ast^\prime c^{\prime!}\mathcal P\otimes^L f_\ast c_\ast^{\prime\prime} c^{\prime\prime!}\mathcal Q\ar[rr]\ar[d]&&f_\ast c_\ast c^!(\mathcal P\otimes^L\mathcal Q)\ar[d]\ar[r]&f_\ast c_\ast c^!\mathcal K_{X/S}\ar[d]\\ d_\ast^\prime d^{\prime!}f_\ast\mathcal P\otimes^L d_\ast^{\prime\prime}d^{\prime\prime!}f_\ast\mathcal Q\ar[d]\ar[r]&d_\ast d^!(f_\ast\mathcal P\otimes^L f_\ast\mathcal Q)\ar[r]\ar[d]& d_\ast d^! f_\ast(\mathcal P\otimes^L\mathcal Q)\ar[r]&d_\ast d^!f_\ast\mathcal K_{X/S}\ar[d]\\ d_\ast^\prime d^{\prime!}\mathcal E\otimes^L d_\ast^{\prime\prime}d^{\prime\prime!}\mathcal F \ar[r]&d_\ast d^!(\mathcal E\otimes^L\mathcal F)\ar[rr]&& d_\ast d^!\mathcal K_{Y/S} } \] where commutativity can be verified following the same argument of \cite[Th\'eor\`eme 4.4]{Gro77}. 
\end{proof} \begin{corollary}[{\cite[Corollaire 4.5]{Gro77}}]\label{cor:mainPSILL} Under the assumptions of Theorem \ref{thm:mainPSILL}, we have a commutative diagram \begin{align} \xymatrix{ Hom(c_2^{\prime\ast} \mathcal L_2, c_1^{\prime!}\mathcal L_1)\otimes Hom(c_1^{\prime\prime\ast} \mathcal L_1, c_2^{\prime\prime!}\mathcal L_2)\ar[d]_-{\eqref{eq:pfccor}\otimes\eqref{eq:pfccor}}\ar[r]&H^0(C,\mathcal K_{C/S})\ar[d]^-{g_\ast}\\ Hom(d_2^{\prime\ast} f_{2\ast}\mathcal L_2, d_1^{\prime!}f_{1\ast}\mathcal L_1)\otimes Hom(d_1^{\prime\prime\ast} f_{1\ast}\mathcal L_1, d_2^{\prime\prime!}f_{2\ast}\mathcal L_2)\ar[r]&H^0(D,\mathcal K_{D/S}). } \end{align} \end{corollary} \begin{corollary}\label{cor:pushccc} Let $S$ be a smooth connected scheme over a perfect field $k$ of characteristic $p$. Let $f\colon X\rightarrow S$ be a smooth morphism purely of relative dimension $n$ and $g\colon Y\to S$ a smooth morphism purely of relative dimension $m$. Let $\mathcal F\in D_c^b(X,\Lambda)$ and assume that $f$ is $SS(\mathcal F, X/k)$-transversal. Then for any proper morphism $h\colon X\to Y$ over $S$, \begin{align}\label{eq:cor:pushccc1} \begin{gathered} \xymatrix{ X\ar[rr]^-h\ar[rd]_-f&&Y\ar[ld]^-g\\ &S } \end{gathered} \end{align} we have \begin{align}\label{eq:cor:pushccc2} h_\ast ccc_{X/S}(\mathcal F)= ccc_{Y/S}(Rh_\ast\mathcal F)\quad {\rm in}\quad H^{2m}(Y,\Lambda(m)). \end{align} \end{corollary} \begin{proof} This follows from Corollary \ref{cor:mainPSILL} and Definition \ref{def:ccc}. \end{proof}
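\begin{example} As an illustration only, combining the Example above with Corollary \ref{cor:pushccc} under the assumptions of that corollary: suppose moreover that $\mathcal F$ is a locally constant and constructible sheaf of $\Lambda$-modules on $X$, and read the class $ccc_{X/S}(\mathcal F)={\rm rank}\,\mathcal F\cdot c_n(\Omega^\vee_{X/S})\cap [X]$ in $H^{2n}(X,\Lambda(n))$ via the cycle class map ${\rm cl}$. Then \eqref{eq:cor:pushccc2} takes the concrete form \[ h_\ast\bigl({\rm rank}\,\mathcal F\cdot {\rm cl}\bigl(c_n(\Omega^\vee_{X/S})\cap [X]\bigr)\bigr)=ccc_{Y/S}(Rh_\ast\mathcal F)\quad{\rm in}\quad H^{2m}(Y,\Lambda(m)), \] so that in this case $ccc_{Y/S}(Rh_\ast\mathcal F)$ is computed by a push-forward of Chern classes of the relative cotangent bundle. \end{example}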
48,190
Interest rates are effective beginning 03/01/2014. A wide variety of Certificates of Deposit are available in both Business CD and Personal CD types; however, Individual Retirement Accounts are only available to individuals. To determine interest rates on Certificates of Deposit from sources outside California, contact Ken Donahue or Dennis Woods. Invest for two years at prime less 2.95%. When the prime rate rises your CD rate rises, but it won't fall below the initial rate. Minimum opening deposit of $10,000. The Floating Rate CD product is tied to the prime lending rate. Money Market Checking accounts (for Business or Personal) are used by individuals and businesses alike. With an Individual Retirement Savings account your money is always available for your use, or you can park it while you search for another investment. You earn interest that compounds daily. A personal savings account is a great way to begin building your wealth. The funds are available for emergencies and earn interest that compounds daily. Many businesses use business savings accounts in their cash management activities. The funds can be parked temporarily but continue to earn interest that compounds daily. Earn interest with daily compounding with the convenience of regular checking. Some restrictions apply; visit our branch or see our Personal Checking or Business Checking pages for details. * = Annual Percentage Yield
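As a purely hypothetical illustration of how the Floating Rate CD works (the prime rate below is an assumed figure, not a quoted rate): if the prime rate were 5.00%, a CD earning prime less 2.95% would start at 5.00% - 2.95% = 2.05%. If prime later rose to 6.00%, the CD rate would rise to 3.05%; if prime then fell back, the CD rate would not drop below the initial 2.05%. Visit our branch for current rates and terms.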
75,444
Camera Pictures. Streaming cam and Refresh cam pics. Friday 8 April 2011. Please note the cams you are talking about: NK for Nkorho, EP for Elephant Plains, TE for Tembe. Cam pics from yesterday (Pingu-Hanneke), more here. Happy camming! NK Hippo. TE ellies. The German Africam Community. Look: Giraffes. NK Zebras and impalas. EP ellie. NK impalas. Ellie EP. EP earlier - Cape Buffaloes, and Tembe - Elephant and Waterbuck. Tembe Nyalas and waterbucks. EP Impalas. Lions at TE. Mavis. My Blog: I always start my Africam day at the Recent Posts Page to see what has happened since I left. Finally, Africam Gear from the Africam Store - find it here. Rhinos and Nyala at TE.
14,512
July 7, 2018 – July 14, 2018 Lobster Roll $22 Toasted New England style bun & Bibb lettuce, Hand Cut Fries Fried Sea Scallops $19 Remoulade, Coleslaw & Hand Cut Fries Pan Seared Kaku $27 Tostones, Brussel Sprouts & Roasted Red Peppers, Cilantro-Lime Beurre Blanc *served after 4pm Brownie a la Mode $8 Trey’s Warm Chocolate Brownie & Vanilla Bean Ice Cream Captain America $8 Mango & Raspberry Rum, Blue Curacao, Cranberry, Orange, Pineapple Juice & Grenadine Cucumber Gin Spritzer $8 Gordon’s Gin, Fresh Muddled Cucumbers, Lime, Simple Syrup & Splash of Sparkling Water La Chapelle du Bastion, Picpoulde $7, $25 Citrus & Fruit Forward White Wine that pairs well with our Kaku or Oyster Features Wrightsville Beach Brewery Kolsch Krush $5 Kolsch Beer infused with Orange Peel and Vanilla
255,742
\begin{document} \def\wt{\widetilde} \def\wh{\widehat} \def\ffrac#1#2{\raise.5pt\hbox{\small$\,\displaystyle\frac{\,#1\,}{\,#2\,}\,$}} \def\ovln#1{\,{\overline{\!#1}}} \def\ve{\varepsilon} \def\kar{\beta_r} \def\RE{\operatorname{Re}} \def\Var{\operatorname{Var}} \def\jd{{j=1,\dots,d}} \def\card{\operatorname{card}} \title[Explicit rates of approximation in the CLT] {Explicit rates of approximation\\ in the CLT for quadratic forms} \author[F. ~G\"otze]{Friedrich G\"otze$^1$} \author [A.Yu. Zaitsev]{Andrei Yu. Zaitsev$^{1,2}$} \thanks {$^1$Research supported by the SFB 701 in Bielefeld and by grant RFBR-DFG 09-01-91331. \newline\indent $^2$Research supported by grants RFBR 09-01-12180 and 10-01-00242, by the Alexander von Humboldt Foundation and by a program of fundamental researches of Russian Academy of Sciences ``Modern problems of fundamental mathematics''.} \email{[email protected]} \address{Fakult\"at f\"ur Mathematik,\smallskip Universit\"at Bielefeld, Postfach 100131, D-33501 Bielefeld, Germany} \email{zaitsev\@pdmi.ras.ru} \address{St.~Petersburg Department of Steklov Mathematical Institute, \\ Fontanka 27, St.~Petersburg 191023, Russia} \keywords {Central Limit Theorem, concentration functions, convergence rates, multidimensional spaces, quadratic forms, ellipsoids, hyperboloids, lattice point problem, theta-series} \subjclass {Primary 60F05; secondary 62E20} \parindent.6cm \date{March 29, 2011} \begin{abstract} Let $X, X_1, X_2, \dots $ be {i.i.d.} ${\mathbb R}^d$-valued real random vectors. Assume that ${\bf E}\, X=0\,$, $\cov X =\mathbb C$, ${\bf E}\, \norm X^2=\s^2\,$ and that $X\, $ is not concentrated in a proper subspace of $\mathbb R^d$. Let $G\,$ be a mean zero Gaussian random vector with the same covariance operator as that of ~$X$. We study the distributions of non-degenerate quadratic forms $ \mathbb Q [S_N ] $ of the normalized sums ~${S_N=N^{-1/2}\, (X_1+\dots +X_N)}\, $ and show that, without any additional conditions, $$\Delta_N\= \sup_x\, \Bigl|{\bf P}\bigl\{ \mathbb Q \4[S_N]\leq x\bigr\}- {\bf P}\bigl\{ \mathbb Q \4[G ]\leq x\bgr\}\Bgr| = {\mathcal O}\bigl(N^{-1}\bigr),$$ provided that $d\geq 5\, $ and the fourth moment of $X\, $ exists. Furthermore, we provide explicit bounds of order~${\mathcal O}\bgl(N^{-1}\bgr)$ for $\Delta_N$ for the rate of approximation by short asymptotic expansions and for the concentration functions of the random variables $\mathbb Q [S_N+a ]$, $a\in{\mathbb R}^d$. The order of the bound is optimal. It extends previous results of Bentkus and G\"otze (1997a) (for ${d\ge9}$) to the case $d\ge5$, which is the smallest possible dimension for such a bound. Moreover, we show that, in {the} finite dimensional case and for isometric~$ \mathbb Q $, the {implied} constant in ${\mathcal O}\bgl(N^{-1}\bgr)$ has the form $c_d\, \s^d\,(\det \mathbb C)^{-1/2}\,\E\|\mathbb C^{-1/2}\,X\|^4$ with some $c_d$ depending on $d$ only. This answers a long standing question about optimal rates in the central limit theorem for quadratic forms starting with a seminal paper by C.-G. Esseen (1945). \end{abstract} \maketitle \def\bullet{{\hbox{\tiny$\square$}}} \section{\label {s1}Introduction} Let $\mathbb R^d\, $ be the $d$-dimensional space of real vectors $x=(x_1,\dots ,x_d)\, $ with scalar product $\langle x,y\rangle =x_1\4y_1+\dots +x_d\4y_d$ and norm $\|x\|= \langle x,x\rangle^{1/2}$. We also denote by $\mathbb R^\infty$ a real separable Hilbert space consisting of all real sequences ~${x=(x_1,x_2,\dots )}$ such that $\|x\|^2= x_1^2+x_2^2+\dots <\infty$. 
Let $X,X_1,X_2,\dots $ be a sequence of i.i.d{.} $\Rd$-valued random vectors. Assume that ${\E X= 0}$ and $\s^2\=\E \|X\|^2<\infty$. Let $G$ be a mean zero Gaussian random vector such that its covariance operator $\mathbb C= \cov G:\Rd \to \Rd $ is equal to $\cov X$. It is well-known that the distributions $\mathcal L(S_N)$ of sums \beq \label{SN}S_N\=N^{-1/2}\,( X_1+\dots + X_N) \eeq converge weakly to $\mathcal L(G)$. Let $\mathbb Q:\mathbb R^d\to\mathbb R^d$ be a linear symmetric bounded operator and let $\mathbb Q \4[x]= \langle \mathbb Q\4 x,x\rangle $ be the corresponding quadratic form. We shall say that $\mathbb Q\, $ is non-degenerate if ~$\ker \mathbb Q=\bgl\{ 0\bgr\}$. Denote, for $q> 0$, $$\b_q\=\E \|X\|^q,\q\q\b\=\b_4 . $$ Introduce the distribution functions \beq F (x) \= \P \bgl\{ \mathbb Q \4[S_N]\leq x\bgr\}, \q\q H(x) \= \P \bgl\{ \mathbb Q \4[G]\leq x\bgr\} .\label{eq1.1j} \eeq Write \beq \label{eqdel}\D_N\= \sup_{x\in\mathbb R}\;\bgl| F(x) - H(x) \bgr|. \eeq \begin{theorem}{\label{T1.1}} Assume that\/ $\mathbb Q $ and\/ $\mathbb C $ are non-degenerate and that $d\geq 5$ or $d = \infty $. Then $$ \D_N \leq c ( \mathbb Q, \mathbb C ) \, \b /N . $$ The constant\/ $c(\mathbb Q, \mathbb C )$ in this bound depends on\/ $ \mathbb Q $ and\/ $ \mathbb C $ only. \end{theorem} \begin{theorem}{\label{T1.1a}} Let the conditions of Theorem~$\ref{T1.1}$ be satisfied and let\/ $5\le d<\infty $. Assume that the operator $\mathbb Q$ is isometric. Then $$ \D_N \leq c_d\, \s^d\,(\det \mathbb C)^{-1/2}\,\E\|\mathbb C^{-1/2}\,X\|^4 /N . $$ The constant $c_d$ in this bound depends on $d$ only. \end{theorem} Theorems \ref{T1.1} and ~\ref{T1.1a} are simple consequences of the main result of this paper, Theorem~\ref{T1.5} (see also Theorem~\ref{T1.3}). Theorem~\ref{T1.1} was proved in G\"otze and Zaitsev~(2008). It confirms a conjecture of Bentkus and G\"otze (1997a) (below BG (1997a)). It generalizes to the case $d\ge5$ the corresponding result of BG (1997a). In their Theorem~1.1, it was assumed that $d\ge9$, while our Theorem \ref{T1.1} is proved for $d\ge 5$. Theorem ~\ref{T1.1a} yields an explicit bound in terms of the distribution $\mathcal L(X)$. The distribution function of ~$ \|S_N\|^2 \, $ (for bounded ~$X$ with values in $\mathbb R^d$) may have jumps of order ~$\O \bgl(N^{-1} \bgr)$, for all ~$1\leq d\leq \infty$. See, e.g., BG~(1996, p.~468). Therefore, the bounds of Theorems ~\ref{T1.1} and ~\ref{T1.1a} are optimal with respect to the order in~$N$. Theorems \ref{T1.1},~\ref{T1.1a} and the method of their proof are closely related to the lattice point problem in number theory. Suppose that ~$d<\infty $ and that ~$\langle \mathbb Q \4x,x\rangle >0$, for $x\ne 0$. Let $\volu\, E_r \, $ be the volume of the ellipsoid $$ \q\q\q\q \q\q\q\q\q E_r =\bgl\{ x\in \mathbb R^d: \,\, \mathbb Q \4[x] \leq r^2\bgr\}, \q\q r\geq 0. $$ Write $\volu_{\mathbb Z}\, E_r\, $ for the number of points in $E_r\cap \mathbb Z^d$, where $ \mathbb Z^d \subset \mathbb \R^d$ is the standard lattice of points with integer coordinates. The following result due to G\" otze (2004) is related to Theorems \ref{T1.1} and ~\ref{T1.1a} (see also BG~(1995a, 1997b)). 
\smallskip \begin{theorem}\label{T1.2} {\it For all dimensions} $\; d\geq 5$, $$ \q\q\q\q\q\q\q\q\q \sup_{a\in\mathbb R^d}\, \left| \ffrac { \volu _{\mathbb Z}\, (E_r+a) -\volu\, E_r } {\volu\, E_r }\right| = \O(r^{-2}) ,\q\q \text{\it for}\q r\geq 1, $$ {\it where the constant in $\O(r^{-2})$ depends on the dimension $d$ and on the lengths of axes of the ellipsoid $E_1$ only.} \end{theorem} Theorem \ref{T1.2} solves the lattice point problem for $d\geq 5$. It improves the classical estimate $ \O( r^{-2d/(d+1) } ) $ due to Landau (1915), just as Theorem \ref{T1.1} improves the bound~${\O(N^{-d/(d+1)})}$ by Esseen (1945) in the CLT for ellipsoids with axes parallel to coordinate axes. A related result for indefinite forms may be found in G\"otze and Margulis~(2010). \smallskip Work on the estimation of the rate of approximation under the conditions of Theorem~\ref{T1.1} for Hilbert spaces started in the second half of the last century. See Zalesski\u\i, Sazonov and Ulyanov (1988) and Nagaev (1989) for optimal bounds of order $\O(N^{-1/2})$ (with respect to eigenvalues of $\mathbb C$) assuming finiteness of the third moment. For a more detailed discussion see Yurinskii (1982), Bentkus, G\"otze, Paulauskas and Ra\v ckauskas (1990), BG~(1995b, 1996, 1997a) and Senatov (1997, 1998). Under some more restrictive moment and dimension conditions the estimate of order $\O (N^{-1+\ve })$, with $\ve\downarrow 0$ as $d\uparrow \infty$, was obtained by G\"otze (1979). The proof in G\"otze (1979) was based on a new symmetrization inequality for characteristic functions of quadratic forms. It is related to Weyl's (1915/16) inequality for trigonometric sums. This inequality and its extensions (see Lemma \ref{L5.1}) play a crucial role in the proofs of bounds in the CLT for ellipsoids and hyperboloids in finite and infinite dimensional cases. Under some additional smoothness assumptions, error bounds $\O(N^{-1})$ (and, moreover, Edgeworth type expansions) were obtained in G\"otze (1979), Bentkus (1984), and Bentkus, G\"otze and Zitikis (1993). BG~(1995b, 1996, 1997a) established the bound of order $\O(N^{-1})$ without smoothness-type conditions. Similar bounds for the rate of infinitely divisible approximations were obtained by Bentkus, G\"otze and Zaitsev (1997). Among recent publications, we should mention the papers of Nagaev and Chebotarev (1999), (2005) (${d\ge13}$, providing a more precise dependence of constants on the eigenvalues of $\mathbb C$) and Bogatyrev, G\"otze and Ulyanov~(2006) (non-uniform bounds for $d\ge12$); see also G\"otze and Ulyanov~(2000). The proofs of bounds of order $\O(N^{-1})$ are based on discretization (i.e., a reduction to lattice-valued random vectors) and the symmetrization techniques mentioned above. Assuming the matrices $\mathbb Q$ and $\mathbb C$ to be diagonal, and the independence of the first five coordinates of~$X$, Bentkus and G\"otze (1996) had already reduced the dimension requirement for the bound $\O(N^{-1})$ to $d\ge 5$. The independence assumption in BG~(1996) allowed them to apply an adaptation of the Hardy--Littlewood circle method. For the general case described in Theorem~\ref{T1.1}, one needs to develop new techniques. Some yet unpublished results of G\"otze (1994) provide the rate $\mathcal O(N^{-1})$ for sums of two independent {\it arbitrary} quadratic forms (each of rank $d \ge 3$).
G\"otze and Ulyanov~(2003) obtained bounds of order $\mathcal O(N^{-1})$ for some ellipsoids in $\Rd$ with $d\ge5$ in the case of lattice distributions of~$X$. The optimal possible dimension condition for this rate is just $d\geq 5$, due to the lower bounds of order $\mathcal O(N^{-1}\log{ N})$ for dimension $d=4$ in the corresponding lattice point problem. The question about precise convergence rates in dimensions $2\leq d \leq 4$ still remains completely open (even in the simplest case where $\mathbb Q$ is the identity operator~$\mathbb I_d$, and for random vectors with independent Rademacher coordinates). It should be mentioned that, in the case $d=2$, a precise convergence rate would imply a solution of the famous circle problem. Known lower bounds in the circle problem correspond to the bound of order $\mathcal O (N^{-3/4}\, \log^\d N )$, $\d>0$, for~$\Delta_N$. Hardy (1916) conjectured that up to logarithmic factors this is the optimal order. Now we describe the most important elements of the proof. We have to mention that a big part of the proof repeats the arguments of BG (1997a), see BG~(1997a) for the description and application of symmetrization inequality and discretization procedure. In our proof we do not use the multiplicative inequalities of BG~(1997a). Here we replace those techniques by arguments from the geometry of numbers, developed in G\"otze (2004), combined with effective equidistribution results by G\"otze and Margulis (2010) for suitable actions of unipotent subgroups of $\hbox{SL}(2,\R)$, see Lemma~\ref{GM}. These new techniques (compared to previous) results are mainly concentrated in Sections~\ref{s4}--\ref{s7}. Using the Fourier inversion formula (see \eqref{eq3.1} and \eqref{eq3.1o}), we have to estimate some integrals of the absolute values of differences of characteristic functions of quadratic forms. In Section~\ref{s5}, we reduce the estimation of characteristic functions to the estimation of a theta-series (see Lemma \ref{L7.5} and inequality~\eqref{koren}). To this end, we write the expectation with respect to Rademacher random variables as a sum with binomial weights~$p(m)$ and~$p(\ov m)$. Then we estimate $p(m)$ and $p(\ov m)$ from above by discrete Gaussian exponential weights $c_s\,q(m)$ and $c_s\,q(\ov m)$, see \eqref{qm}, \eqref{pm}, \eqref{eq7.9} and \eqref{eq7.10}. Together with the non-negativity of some characteristic functions (see \eqref{eq7.8} and \eqref{eq7.13}), this allows us to apply then the Poisson summation formula from Lemma~\ref{Le3.2}. This formula reduces the problem to an estimation of integrals of theta-series. Section~\ref{s6} is devoted to some facts from Number Theory. We consider the lattices, their $\alpha$-characteristics (which are defined in \eqref{alp} and \eqref{alp3}) and Minkowski's successive minima. In Section~\ref{s7}, we reduce the estimation of integrals of theta-series to some integrals of $\alpha$-characteristics. An application of the crucial Lemma~\ref{GM}, {decribed above,} ends the proof. \smallskip \section{\label {s11}Results} To formulate the results we need more notation repeating most part of the notation used in BG (1997a). Let $\s_1^2\geq \s_2^2\geq \dots$ be the eigenvalues of $\mathbb C$, counting their multiplicities. We have $\s^2=\s_1^2+\s_2^2+\cdots$. We shall identify the linear operators and corresponding matrices. By $\mathbb I_d:\mathbb R^{d}\to\mathbb R^{d}$ we denote the identity operator and, simultaneously, the diagonal matrix with entries 1 on the diagonal. 
By $\mathbb O_{d}$ we denote the $(d\times d)$ matrix with zero entries. Throughout $\, \mathcal S=\{ \fs e1s\}\subset\Rd$ denotes a finite set of cardinality $s$. We shall write $\mathcal S_o$ instead of $\mathcal S$ if the system $\{ \fs e1s\}$ is orthonormal. Let $p> 0$ and $\d \geq 0$. Following BG (1997a), we introduce a somewhat modified non-degeneracy condition for the distribution of a $d$-dimensional vector~$Y$: \beq \mathcal N(p,\d ,\mathcal S, Y):\q\q\q \P \bgl\{ \|Y -e\|\leq \d \bgr\} \geq p, \q\q \text{for all}\ \, e\in \mathcal S . \label{eq1.3} \eeq We shall refer to condition \eqref{eq1.3} as condition $\mathcal N(p,\d ,\mathcal S, Y)$. We shall write $$\mathcal N_{\mathbb Q}(p,\d ,\mathcal S, Y) =\mathcal N(p,\d ,\mathcal S, Y) \cup\mathcal N(p,\d ,\mathbb Q\,\mathcal S, Y).$$ Just condition $\mathcal N_{\mathbb Q}(p,\d ,\mathcal S, Y)$ was used in BG (1997a). Note that \beq\label{eq13}\mathcal N(p,\d ,\mathcal S, Y) =\mathcal N_{\mathbb I_d}(p,\d ,\mathcal S, Y) .\eeq \smallskip Introduce truncated random vectors \beq\label{eq1.4a} X^\diamond = X \,\Ir \bgl\{ \|X\|\leq \s \4 \sqrt{N} \bgr\}, \q\q X_\diamond =X\, \Ir \bgl\{ \|X\|> \s \4 \sqrt{N} \bgr\},\eeq \beq\label{eq1.4t} X^\bullet = X \,\Ir \bgl\{\|\mathbb C^{-1/2}\,X\|\leq \sqrt{d\4N} \bgr\}, \q\q X_\bullet =X\, \Ir \bgl\{ \|\mathbb C^{-1/2}\,X\|> \sqrt{d\4N} \bgr\},\eeq and their moments (for $q>0$)\beq \L_4^\diamond = \ffrac 1{ \s^{4}\,N} \E \|X^\diamond \|^4 ,\q\q\q \q \q\q\q \Pi_q^\diamond = \ffrac N { (\s\, \sqrt{N})^{q}} \E \|X_\diamond \|^q,\label{eq1.5} \eeq\beq \L_4^\bullet = \ffrac 1{ d^{2}\,N} \E \|\mathbb C^{-1/2}\,X^\bullet \|^4 ,\q\q\q \q \q\q\q \Pi_q^\bullet = \ffrac N { ( \sqrt{d\4N})^{q}} \E \|\mathbb C^{-1/2}\,X_\bullet \|^q.\label{eq1.5t} \eeq Here and below $\Ir \bgl\{ A\bgr\} $ denotes the indicator of an event $A$. Of course, definitions \eqref{eq1.4t} and \eqref{eq1.5t} have sense if $d<\infty$ and the covariance operator $\mathbb C $ is non-degenerate. Clearly, we have \beq\label{eq1.4u} X^\diamond +X_\diamond =X^\bullet +X_\bullet =X,\quad \norm{X^\diamond}\,\norm{X_\diamond}=\norm{X^\bullet}\,\norm{X_\bullet}=0.\eeq Generally speaking, $X^\bullet$ and~$X^\diamond$ are different truncated vectors. In BG (1997a) the i.i.d. copies of the vectors $X^\diamond$ and $X_\diamond$ only were involved. Truncation \eqref{eq1.4t} was there applied to the vector $X^\diamond$. The use of $X^\bullet$ is more natural for the estimation of constants in the case $d<\infty$. It is easy to see that \beq\label{eq14rr} \big(\mathbb C^{-1/2}\,X\big)^{\diamond}=\big(\mathbb C^{- 1/2}\,X\big)^{\bullet}= \mathbb C^{-1/2}\,X^{\bullet}, \eeq and \beq\label{eq14ru} \big(\mathbb C^{-1/2}\,X\big)_{\diamond}=\big(\mathbb C^{- 1/2}\,X\big)_{\bullet}= \mathbb C^{-1/2}\,X_{\bullet}\,. \eeq Equalities \eqref{eq14rr} and~\eqref{eq14ru} provides a possibility to apply auxiliary results obtained in BG~(1997a) for truncated vectors $X^\diamond$ and $X_\diamond$ to truncated vectors $\mathbb C^{-1/2}\,X^{\bullet}$ and $\mathbb C^{- 1/2}\,X_{\bullet}$. However, one should take into account that $\s^2$, $\L_4^\diamond$, $\Pi_q^\diamond$, $G$, $\ldots$ have to be replaced by corresponding objects related to the vector $\mathbb C^{-1/2}\,X$ (that is, by $d$, $\L_4^\bullet$, $\Pi_q^\bullet$, $\mathbb C^{- 1/2}\,G$, \ldots). 
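For completeness, here is a sketch of the elementary verification of \eqref{eq14rr} and \eqref{eq14ru} (we record it only for the reader's convenience). The vector $Y=\mathbb C^{-1/2}\,X$ satisfies $\E\|Y\|^2=\E\langle \mathbb C^{-1}X,X\rangle=\operatorname{tr}\mathbb I_d=d$ and $\cov Y=\mathbb I_d$. Hence both truncations \eqref{eq1.4a} and \eqref{eq1.4t}, applied to $Y$ in place of $X$, use one and the same indicator $\Ir \bgl\{ \|\mathbb C^{-1/2}\,X\|\leq \sqrt{d\4N} \bgr\}$, and this is precisely the indicator appearing in the definition of $X^\bullet$. Therefore all three vectors in \eqref{eq14rr} coincide, and similarly for \eqref{eq14ru}.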
In Sections \ref{s3} and~\ref{s4}, we shall denote \beq X'= X^\bullet -\E X^\bullet +W,\label{eq1.20} \eeq where $W$ is a centered Gaussian random vector which is independent of all other random vectors and variables and is chosen so that $\cov X'= \cov G$. Such a vector $W$ exists by Lemma \ref{L2.4}. By $c, c_1,c_2,\dots $ we shall denote absolute positive constants. If a constant depends on, say,~$s$,\, then we shall point out the dependence writing $c_s$ or $c(s)$. We denote by \,$c$ \,universal constants which might be different in different places of the text. Furthermore, in the conditions of theorems and lemmas (see, e.g., Theorem \ref{T1.3}, \ref{T1.5f} and the proofs of Theorems \ref{T1.5}, \ref{T1.6} and~\ref{T2.1}) we write $c_0$ for an {\it arbitrary} positive absolute constant, for example one may choose $c_0=1$. We shall write $A\ll B$, if there exists an absolute constant $c$ such that $A\leq c\4 B$. Similarly, $A\ll_s B$, if $A\leq c(s)\4 B$. We shall also write $A\asymp_s B$ if $A\ll_s B\ll_s A$. By $ \lceil \a \rceil$ we shall denote the integer part of a number $\a$. Throughout we assume that all random vectors and variables are independent in aggregate, if the contrary is not clear from the context. By $X_1,X_2,\dots$ we shall denote independent copies of a random vector $X$. Similarly, $G_1,G_2,\dots$ are independent copies of $G$ and so on. By $\mathcal L(X)$ we shall denote the distribution of $X$. Define the symmetrization $\wt X\, $ of a random vector $X$ as a random vector with distribution $ {\mathcal L (\wt X)= \mathcal L (X_1-X_2)}$. Instead of normalized sums $S_N$, it is sometimes more convenient to consider the sums $Z_N=\fsu X1N$. Then $S_N=N^{-1/2} \4 Z_N$. Similarly, by $Z_N^\diamond$ (resp. $Z_N^\bullet$ and $Z_N'$) we shall denote sums of $N$ independent copies of $X^\diamond$ (resp. $X^\bullet$ and $X'$). For example, $Z_N'={X_1'}+\cdots+X_N'$. The expectation ${\mathbf E}_Y$ with respect to a random vector $Y$ we define as the conditional expectation $$\Er_Y \, f(X,Y,Z,\dots ) = \E \bgl( f(X,Y,Z\dots ) \,\bgl|\, X,Z,\dots\bgr)$$ given all random vectors but $Y$. Throughout we write $\operatorname {e}\{ x \}\= \exp \{i\4 x \}$. By \beq \label{Fur}\wh F(t)=\int_{-\infty}^\infty \operatorname{e}\{tx\}\,dF(x) \eeq we denote the Fourier--Stieltjes transform of a function $F$ of bounded variation or, in other words, the Fourier transform of the measure which has the distribution function~$F$. \smallskip Introduce the distribution functions \beq F_a (x) \= \P \bgl\{ \mathbb Q \4[S_N-a]\leq x\bgr\}, \q H_a(x) \= \P \bgl\{ \mathbb Q \4[G-a]\leq x\bgr\},\q\q a\in \mathbb R^d, \q x\in \mathbb R.\label{eq1.1} \eeq Furthermore, define, for $d=\infty$ and $a\in\Rd$, the Edgeworth correction $$E_a(x)=E_a(x; \mathbb Q, \mathcal L(X), \mathcal L(G)) $$ as a function of bounded variation such that $E_a(-\infty) =0 $ and its Fourier--Stieltjes transform is given by \beq\wh E_a(t)= \ffrac {2\, (it)^2}{3\4 \sqrt{N}} \E \operatorname {e} \bgl\{t\4 \mathbb Q \4[Y] \bgr\} \bgl( 3\,\langle \mathbb Q \4X, Y \rangle \,\langle \mathbb Q \4X,X\rangle +2\4 i \4 t \,\langle \mathbb Q \4X , Y \rangle ^3 \bgr), \q Y= G-a.\label {eq1.2}\eeq In finite dimensional spaces (for $1\le d<\infty$) we define the Edgeworth correction as follows (see Bhattacharya and Rao (1986)). Let $\phi $ denote the standard normal density in~$\Rd$. 
Then $p(y) =\phi (\mathbb C^{-1/2} y)/\sqrt {\operatorname {det} \mathbb C } $, $y\in \mathbb R^d$, is the density of $G$, and, for $a\in\Rd$, $b=\sqrt N\,a$, we have \beq E_a(x) \= \Theta_b (N\4x) \= \ffrac 1 {6\4 \sqrt{N}} \chi (A_x) , \q \q A_x =\bgl\{ u\in\Rd:\,\, \mathbb Q \4[u-a]\leq x\bgr\},\label {eq1.21} \eeq with the signed measure \beq\chi (A) \=\int_A \E p'''(y)\4 X^3 \, dy, \q\q\text{for the Borel sets}\ \, A\subset \Rd ,\label {eq1.22} \eeq and where \beq p'''(y) \4u^3 = p(y) \bgl( 3\, \langle\mathbb C^{-1} u,u\rangle \langle\mathbb C^{-1} y,u\rangle- \langle\mathbb C^{-1} y,u\rangle^3\bgr) \label {eq1.23} \eeq denotes the third Frechet derivative of $p$ in direction $u$. Notice that $E_a =0$ if $a=0$ or if ~${\E \langle X,y\rangle ^3=0}$, for all $y\in \mathbb R^d$. In particular, $E_a =0$ if $X\, $ is symmetric (that is, $\mathcal L(X)=\mathcal L(-X)$). We can write similar representations for $E_a^{\bullet}(x) = \Theta_b^{\bullet} (N\4x)$, $E_a^{\diamond}(x) = \Theta_b^{\diamond} (N\4x)$ and $E_a^{\prime}(x) = \Theta_b^{\prime} (N\4x)$ just replacing $X$ by $X^\bullet $, $X^\diamond $ and~$X^\prime $ in \eqref{eq1.2} or \eqref{eq1.22}. \smallskip For $b\in\R^d$, introduce the distribution functions \beq \Psi_b(x) \= \P \bgl\{ \mathbb Q \4[Z_N-b] \leq x\bgr\}=F_a(x/N),\label{eq1.18} \eeq and\beq \Phi_b(x) \= \P \bgl\{ \mathbb Q \4[\sqrt{N}\, G -b]\leq x\bgr\}=H_a(x/N).\label{eq1.19} \eeq Define, for $a\in\Rd$, $b=\sqrt N\,a$, \beq\label{edg} \D_N^{(a)}\=\sup_{x\in \mathbb R}\; \bgl| F_a (x) -H_a(x)-E_a(x)\bgr| =\sup_{x\in \mathbb R}\; \bgl|\Psi_b (x) - \Phi_b(x)-\Theta_b(x)\bgr| \eeq (see \eqref{eq1.1}, \eqref{eq1.21}, \eqref{eq1.18} and \eqref{eq1.19} to justify the last equality in \eqref{edg}). We write $\D_{N,\bullet}^{(a)}$ and $\D_{N,\diamond}^{(a)}$ replacing $E_a$ by $E_a^{\bullet}$ and~$E_a^{\diamond}$ in \eqref{edg}. The aim of this paper is to derive for $\D_N^{(a)}$ explicit bounds of order $\mathcal O(N\me)$ without any additional smoothness type assumptions. Theorem \ref{T1.3} (which was proved in BG~(1997a)) solved this problem in the case $13\leq d\leq \infty $. In Theorems \ref{T1.3}--\ref{T2.1} we assume that the symmetric operator $\mathbb Q$ is isometric, that is, that $\mathbb Q^2$ is the identity operator $\mathbb I_d$. This does not restrict generality (see Remark~1.7 in BG (1997a)). Indeed, any symmetric operator ~$\mathbb Q\, $ may be decomposed as $\mathbb Q=\mathbb Q_1\mathbb Q_0\mathbb Q_1\2$, where ~$\mathbb Q_0\, $ is symmetric and isometric and $\mathbb Q_1\, $ is symmetric bounded and non-negative, that is, $\langle \mathbb Q_1 \4 x,x\rangle \geq 0$, for all $x\in \Rd$. Thus, for any symmetric $\mathbb Q$, we can apply all our bounds replacing the random vector $X\, $ by $\mathbb Q_1X$,$\, $ the Gaussian random vector $G\, $ by $\mathbb Q_1G$, the shift $a \, $ by $\mathbb Q_1a$, etc. In the case of concentration functions (see Theorems \ref{T1.6} and~ \ref{T2.1}), we have ~${Q(X ;\, \l ;\, \mathbb Q )= Q(\mathbb Q_1X ;\, \l ;\, \mathbb Q_0 )}$, and we may apply the results provided ~$\mathbb Q_1X $ (instead of $X$) satisfies the conditions. \smallskip \begin{theorem}{\label{T1.3}}{{\rm (BG (1997a, Theorem 1.3))}} Let \/ $\d = 1/300$, $\mathbb Q^2=\mathbb I_d$, $s=13$ and \/$13\leq d\leq \infty $. Let\/ $c_0$ be an arbitrary positive absolute constant. Assume that condition $\mathcal N_{\mathbb Q}( p,\d, \mathcal S_o,c_0\4 G/\s) $ holds. 
Then we have: \beq \D_N^{(a)}\leq C \4 \bgl( \Pi_3^{\diamond} + \L_4^{\diamond}\bgr)\bgl( 1 + \norm{a/\s}^6\bgr) \label{eq1.6}\eeq and \beq \D_{N,\diamond}^{(a)}\leq C \4 \bgl( \Pi_2^{\diamond} + \L_4^{\diamond}\bgr)\bgl( 1 + \norm{a/\s}^6\bgr) \label{eq1.6w}\eeq with\/ $C=c\4 p^{-6} + c\4 (\s /\t_{8})^{8}$, where $\t_1^4 \geq \t_2^4 \geq \dots$ are the eigenvalues of~$(\mathbb C\4 \mathbb Q)^2$. \end{theorem} \smallskip Unfortunately, we cannot apply Theorem \ref{T1.3} for $d=5,6,\ldots,12$. Moreover, the quantity~$C$ depends on ~$p$ which is exponentially small with respect to eigenvalues of ~$\mathbb C$. In G\"otze and Zaitsev (2010), the following analogue of Theorem \ref{T1.3} is proved with bounds for constants which are not optimal. \begin{theorem}{\label{T1.5f}} Let \/ $\d = 1/300$, $\mathbb Q^2=\mathbb I_d$, $s=5$ and \/$5\le d<\infty$. Let\/ $c_0$ be an arbitrary positive absolute constant. Assume that condition $\mathcal N_{\mathbb Q}( p,\d, \mathcal S_o, c_0\, G/\s) $ holds. Then \beq \D_{N}^{(a)} \leq C \4 \bgl( \s_d^{-3}\4N^{-1/2}\, \E \|X_\bullet \|^3 + \s_d^{-4}\4N^{-1}\,\E \|X^\bullet \|^4\bgr)\bgl( 1 + \norm{a/\s}^3\bgr) , \label{eq1.8ff0}\eeq and \beq \D_{N,\bullet}^{(a)} \leq C \4 \bgl( \s_d^{-2}\4\E \|X_\bullet \|^2 + \s_d^{-4}\4N^{-1}\,\E \|X^\bullet \|^4\bgr)\bgl( 1 + \norm{a/\s}^3\bgr) , \label{eq1.8wf}\eeq with\/ $C= c_d\4 p^{-3} $. \end{theorem} Theorem \ref{T1.5f} extends to the case $d\ge5$ Theorem 1.5 of BG~(1997a) which contains the corresponding bounds for $d\ge9$. Unfortunately, in both papers, the quantity $C$ depends on~$p$ which is exponentially small with respect to $\s_9/\s^2$ (in BG~(1997a)) and to $\s_5/\s^2$ (in G\"otze and Zaitsev (2010)). Under some additional conditions, $C$ may be estimated from above by $c_d\,\exp(c\4\s^2\4\s_9^{-2})$ and by $c_d\,\exp(c\4\s^2\4\s_5^{-2})$ respectively. In G\"otze and Zaitsev (2008) we proved Theorem~\ref{T1.5f} in the case $a=0$ and hence, Theorem \ref{T1.1}. The main result of the paper is Theorem~\ref{T1.5}. It is valid for $5\le d<\infty$ in finite-dimensional spaces $\Rd $ only. However, the bounds of Theorem \ref{T1.5} depend on the smallest $\s_j$'s. This makes them unstable if one or more of coordinates of $X$ degenerates. In our finite dimensional results, Theorems \ref{T1.5}, \ref{T1.6} and \ref{T2.1}, we always assume that the covariance operator $\mathbb C $ is non-degenerate. \smallskip \begin{theorem}{\label{T1.5}} Let \/ $\mathbb Q^2=\mathbb I_d$, $5\le d<\infty$. Then we have$:$ \beq \D_{N}^{(a)} \leq C \4 \bgl( \Pi_3^\bullet + \L_4^\bullet\bgr)\bgl( 1 + \norm{a/\s}^3\bgr) , \label{eq1.8}\eeq and \beq \D_{N,\bullet}^{(a)} \leq C \4 \bgl( \Pi_2^\bullet + \L_4^\bullet\bgr)\bgl( 1 + \norm{a/\s}^3\bgr) , \label{eq1.8w}\eeq with\/ $C= c_d\, \s^d\,(\det \mathbb C)^{-1/2}\,$. \end{theorem} \smallskip It is easy to see that, according to \eqref{eq1.4t} and \eqref{eq1.5t}, \beq\label{eq1.8fc}\Pi_3^\bullet + \L_4^\bullet\le \E\|\mathbb C^{-1/2}\,X\|^{3+\delta}/(d^{(3+\delta)/2}\4N^{(1+\delta)/2}),\q \text{for } \ 0\le\delta\le 1, \eeq and\beq\label{eq1.8y}\Pi_2^\bullet + \L_4^\bullet\le\E\|\mathbb C^{- 1/2}\,X\|^{2+\delta}/(d^{(2+\delta)/2}\4N^{\delta/2}),\q \text{for } \ 0\le\delta\le 2 . \eeq Therefore, Theorem \ref{T1.5} implies the following Corollary \ref{T1.5c}. \begin{corollary}{\label{T1.5c}} Let \/ $\mathbb Q^2=\mathbb I_d$, $5\le d<\infty$. 
Then we have$:$ \beq \D_{N}^{(a)} \ll_d C \4 \bgl( 1 + \norm{a/\s}^3\bgr)\,\E\|\mathbb C^{-1/2}\,X\|^{3+\delta}/N^{(1+\delta)/2} ,\q \text{for } \ 0\le\delta\le 1, \label{eq1.8c}\eeq and \beq \D_{N,\bullet}^{(a)} \ll_d C \4 \bgl( 1 + \norm{a/\s}^3\bgr) \,\E\|\mathbb C^{- 1/2}\,X\|^{2+\delta}/N^{\delta/2},\q \text{for } \ 0\le\delta\le 2 , \label{eq1.8wc}\eeq with\/ $C= \s^d\,(\det \mathbb C)^{-1/2}\,$. In particular, \beq\label{eq1.8f}\max\bgl\{\D_{N}^{(a)},\D_{N,\bullet}^{(a)}\bgr\} \ll_d C \4 \bgl( 1 + \norm{a/\s}^3\bgr)\,\E\|\mathbb C^{-1/2}\,X\|^4/N. \eeq \end{corollary} \smallskip Theorem \ref{T1.3} and Corollary \ref{T1.5c} yield Theorems \ref{T1.1} and \ref{T1.1a}, using that $E_0(x)\equiv0$, $\E\|\mathbb C^{-1/2}\,X\|^4\le\b/\s_d^4$, and $\Pi_2^{\diamond}+\L_4^{\diamond}\leq \Pi_3^{\diamond}+\L_4^{\diamond}\leq \b/(\s^4 \4 N)$. \smallskip Comparing Theorem \ref{T1.5} and Corollary \ref{T1.5c} with Theorem \ref{T1.5f}, we see that the constants in Theorem \ref{T1.5} and Corollary \ref{T1.5c} are written explicitly in terms of moment characteristics of $\mathcal L(X)$. In the case of quadratic forms~$\mathbb Q$ which are not positive definite, such estimates were previously unknown. If, in the conditions of Theorem \ref{T1.5}, the distribution of $X$ is symmetric or $a=0$, then the Edgeworth corrections $E_a(x)$ and $E_a^\bullet(x)$ vanish and \beq\D_{N}^{(a)} =\D_{N,\bullet}^{(a)}\leq C \4 \bgl( \Pi_2^\bullet + \L_4^\bullet\bgr)\bgl( 1 + \norm{a/\s}^3\bgr)\label{eq1.8ff}, \q\q\q C= c_d\4 \s^d\,(\det \mathbb C)^{-1/2}.\eeq The corresponding inequality from Theorem~1.4 of BG (1997a) yields, in the case $s=9$ and $9\leq d\leq \infty $ under the condition $\mathcal N_{\mathbb Q}( p,\d, \mathcal S_o,c_0\4 G/\s) $ with $\d = 1/300$, the bound \beq\D_N^{(a)}\leq C \4 \bgl( \Pi_2^{\diamond} + \L_4^{\diamond}\bgr)\, \bgl(1+\norm{a/\s}^{4 }\bgr) ,\q\q\q C=c\, p^{-4}. \label{eq1.8fg}\eeq It is clear that sometimes the bound \eqref{eq1.8fg} may be sharper than \eqref{eq1.8ff}, but unfortunately, it depends on~$p$ which is usually exponentially small with respect to $\s_9/\s^2$. One can find more precise estimates of constants in the case of $d$-dimensional balls with $d\ge12$ in the papers of Nagaev and Chebotarev (1999), (2005), G\"otze and Ulyanov~(2000), and Bogatyrev, G\"otze and Ulyanov~(2006). In this case $\mathbb Q=\mathbb I_d$. See also G\"otze and Ulyanov~(2000) for lower bounds for $\Delta_N^{(a)}$ under different conditions on $a$ and~$\mathcal L(X)$. In the papers mentioned above, the authors have used the approach of BG (1997a) and obtained bounds with constants depending on the $s<d$ largest eigenvalues $\s_1^2\ge\s_2^2\ge\cdots\ge\s_s^2$ of the covariance operator~$\mathbb C$ (see Nagaev and Chebotarev (1999), (2005), with $d\ge s=13$, and G\"otze and Ulyanov~(2000), and Bogatyrev, G\"otze and Ulyanov~(2006), with $d\ge s=12$). It should be mentioned that, in the particular case where $\mathbb Q=\mathbb I_d$ and $d\ge 12$, these results may be sharper than \eqref{eq1.8} for some covariance operators~$\mathbb C$. Thus, we see that the statement of Theorem \ref{T1.5} is especially interesting for $d=5,6,\ldots,11$. It is new even in the case of $d$-dimensional balls. It is plausible that the bounds for constants in Theorem \ref{T1.5} could also be improved for balls with $d\ge5$, especially in the case where $d$ is large. It seems, however, that this is impossible in the case of general $\mathbb Q$ even if $\mathbb Q^2=\mathbb I_d$.
For example, we can consider the operator $\mathbb Q$ such that $\mathbb Q \4e_j=e_{d-j+1}$, where $\mathbb C\4 e_j=\s_j^2e_j$, $j=1,2,\ldots,d$, are eigenvectors of $\mathbb C$. Following the proof of Theorem~\ref{T1.5}, we see that the bounds for the modulus of the characteristic function $\bgl| \wh \Psi_b (t)\bgr|=\bgl|\E \operatorname {e} \bgl \{ t\, \mathbb Q \4[Z_N -b]\bgr\} \bgr|$ behave as the bounds for the modulus of the characteristic function $\bgl|\E \operatorname {e} \bgl \{ t\, \mathbb I_d \4[Z_N -b]\bgr\} \bgr|$ but with eigenvalues of the covariance operator $\s_1\4\s_d$, $\s_2\4\s_{d-1}$, $\s_3\4\s_{d-2}$, \ldots \ which can be substantially smaller than $\s_1^2\ge\s_2^2\ge\s_3^2\ge\cdots$. Therefore, it is natural that the bounds for constants in Theorem~\ref{T1.5} depend on the smallest eigenvalues of the covariance operator~$\mathbb C$. Note that, in the proof of Theorem~\ref{T1.3} in BG (1997a), inequalities \eqref{eq1.6} and \eqref{eq1.6w} were derived for the Edgeworth correction $E_a(x)$ defined by \eqref{eq1.2}. However, from Theorems~\ref{T1.3} and~\ref{T1.5f} or~\ref{T1.5} it follows that, at least for $13\le d<\infty$, definitions \eqref{eq1.2} and~\eqref{eq1.21} determine the same function~$E_a(x)$. Indeed, both functions may be represented as $N^{-1/2}\,K(x)$, where $K(x)$ are some functions of bounded variation which are independent of~$N$. Furthermore, inequalities \eqref{eq1.6} and \eqref{eq1.8} both provide bounds of order $\mathcal O(N\me)$. This is possible only if the Edgeworth corrections $E_a(x)$ are the same in these inequalities. On the other hand, it is proved (for $d\geq 9$) that definition \eqref{eq1.2} determines a function of bounded variation (see BG (1997a, Lemma 5.7)), while definition \eqref{eq1.21} makes no sense for $d=\infty$. Introduce the concentration function \beq\label{con} Q(X ;\, \l )=Q(X ;\, \l ;\, \mathbb Q ) = \sup_{a\in\Rd,\, x\in\mathbb R} \, \P \bgl\{ x \leq \mathbb Q \4[X-a]\leq x+\l \bgr\} , \q \text{for } \ \l \geq 0. \eeq It should be mentioned that the supremum in \eqref{con} is taken not only over all $x$, but over all $x$ and $a\in\Rd$. Usually, one defines the concentration function of the random variable $\mathbb Q \4[X-a]$ taking the supremum over all $x\in\mathbb R$ only. Note that, evidently, $Q(X+Y ;\, \l )\le Q(X ;\, \l )$, for any $Y$ which is independent of~$X$. The following Theorems \ref{T1.6f} and \ref{T2.1f} are Theorems~1.5 and~2.1 from G\"otze and Zaitsev (2010). \begin{theorem}{\label{T1.6f}} Let\/ $\mathbb Q^2=\mathbb I_d$, $5\leq s\leq d\leq \infty$, $s< \infty$ and\/ $0\leq \d \leq 1/(5\4 s)$. Then we have$:$ $(i)$ If condition $\mathcal N_{\mathbb Q}(p,\d ,\mathcal S_o, \wt X )$ is fulfilled with some $p>0$, then \beq Q(Z_N ;\, \l )\ll_s (p\4 N)^{-1} \, \max \bgl\{ 1 ;\, \l\bgr\}, \q \text{for } \ \l\geq 0 . \label{eq1.9f}\eeq $(ii)$ If, for some $m$, condition $\mathcal N_{\mathbb Q}(p,\d ,\mathcal S_o, m^{-1/2}\4 \wt Z_m )$ is fulfilled, then \beq Q(Z_N ;\, \l )\ll_s(p\4 N)^{-1} \, \max \bgl\{ m ;\, \l\bgr\} ,\q \text{for } \ \l\geq 0. \label{eq1.10f}\eeq \end{theorem} \begin{theorem}{\label{T2.1f}} Let\/ $\mathbb Q^2=\mathbb I_d$ and\/ $5\leq d\leq \infty$. Let\/ $c_0$ be an arbitrary positive absolute constant. Assume condition $\mathcal N_{\mathbb Q}(p,\d ,\mathcal S_o, c_0\4 G/\s )$ to be satisfied with $s =5$ and\/ $\d = 1/200$. Then \beq Q(Z_N ;\, \l )\ll p^{-2} \max \bgl\{ \Pi_2^\diamond +\L_4^\diamond ;\, \l\4 \s^{-2}\4 N^{-1} \bgr\} ,\q \text{for } \ \l\geq 0.
\label{eq2.1f}\eeq In particular, $Q(Z_N ;\, \l )\ll p^{-2}\4 N^{-1}\4 \max \bgl\{ \beta \4\sigma^{-4} ;\, \lambda\4\sigma^{-2}\bgr\}$. \end{theorem} Theorems \ref{T1.6f} and \ref{T2.1f} extend to the case $5\le d\le\infty$ Theorems 1.6 and~2.1 of BG~(1997a) which were proved for $9\le d\le\infty$. We say that a random vector $Y$ is concentrated in $\mathbb L\subset \Rd $ if $\P \{ Y\in \mathbb L\} =1$. In BG~(1997a, item $(iii)$ of Theorem 1.6) it was shown that if $\wt X$ is not concentrated in a proper closed linear subspace of $\Rd$, $1\leq d\leq \infty$, then, for any $\d>0$ and $\mathcal S$, there exists a natural number $m$ such that the condition $\mathcal N_{\mathbb Q}(p,\d ,\mathcal S, m^{-1/2}\4 \wt Z_m )$ holds with some $ p>0$. In this paper, we shall prove the following Theorems \ref{T1.6} and \ref{T2.1}. \begin{theorem}{\label{T1.6}} Let\/ $\mathbb Q^2=\mathbb I_d$, $5\leq s= d< \infty$ and\/ $0\leq \d \leq 1/(5\4 s)$. Then we have$:$ $(i)$ If condition $\mathcal N(p,\d ,\mathcal S_o, \mathbb C^{- 1/2}\,\wt X )$ is fulfilled with some $p>0$, then \beq Q(Z_N ;\, \l )\ll_d (p\4 N)^{-1} \, \max \bgl\{ 1 ;\, \l\,\s^{-2}\bgr\}\,\s^d\,(\det\mathbb C)^{-1/2}, \q \text{for } \ \l\geq 0 . \label{eq1.9}\eeq $(ii)$ If, for some $m$, condition $\mathcal N(p,\d ,\mathcal S_o, m^{-1/2}\4\mathbb C^{- 1/2}\, \wt Z_m )$ is fulfilled, then \beq Q(Z_N ;\, \l )\ll_d (p\4 N)^{-1} \, \max \bgl\{ m ;\, \l\,\s^{-2}\bgr\}\,\s^d\,(\det\mathbb C)^{-1/2} ,\q \text{for } \ \l\geq 0. \label{eq1.10}\eeq \end{theorem} \begin{theorem}{\label{T2.1}} Assume that\/ $5\leq d< \infty$ and that $\mathbb Q^2=\mathbb I_d$. Then \beq Q(Z_N ;\, \l )\ll_d \max \bgl\{ \Pi_2^\bullet +\L_4^\bullet ;\, \l\4 \s^{-2}\4 N^{-1} \bgr\}\,\s^d\,(\det\mathbb C)^{-1/2} ,\q \text{for } \ \l\geq 0. \label{eq2.1}\eeq In particular, $Q(Z_N ;\, \l )\ll_d N^{-1}\4 \max \bgl\{ \E\|\mathbb C^{- 1/2}\,X\|^4 ;\, \l\4\s^{-2}\bgr\}\,\s^d\,(\det\mathbb C)^{-1/2}$. \end{theorem} Theorems \ref{T1.6} and \ref{T2.1} yield more explicit versions of Theorems \ref{T1.6f} and \ref{T2.1f}, just as Theorem \ref{T1.5} is, in a sense, a more explicit version of Theorem \ref{T1.5f}. We should mention that Theorems \ref{T1.5f}, \ref{T1.6f} and \ref{T2.1f} do not follow from Theorems \ref{T1.5}, \ref{T1.6} and~\ref{T2.1}. For the proofs of these theorems we refer the reader to the paper of G\"otze and Zaitsev (2010) and to the preprint of G\"otze and Zaitsev (2009), which are available online. For example, the bounds in Theorems \ref{T1.5f}, \ref{T1.6f} and \ref{T2.1f} may be sharper than those from Theorems \ref{T1.5}, \ref{T1.6} and \ref{T2.1} in the particular case where $\mathbb Q=\mathbb I_d$ and $\s_5\asymp_d\s$. Under some additional conditions, $\s^d\,(\det\mathbb C)^{-1/2}$ is replaced by $\exp(c\4\s^{-2}\4\s_5^2)\asymp_d1$. On the other hand, $\s^d\,(\det\mathbb C)^{-1/2}$ provides a power-type dependence on eigenvalues of~$\mathbb C$ and the results are valid for $\mathbb Q$ which might not be positive definite. In Theorems \ref{T1.5} and \ref{T2.1}, we do not assume the fulfilment of conditions $\mathcal N(\cdt)$ or $\mathcal N_{\mathbb Q}(\cdt)$. In the proofs, we shall use, however, that, for an arbitrary absolute positive constant~$c_0$ and any positive quantity~$c_d$ depending on $d$ only, condition $\mathcal N(p,\d ,\mathcal S_o , c_0 \,\mathbb C^{- 1/2}\, G)$ is fulfilled with $s=d$, $\d=c_d$ and $p\asymp_d1$, for any orthonormal system $\mathcal S_o$.
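To get a rough feeling for the size of the factor $\s^d\,(\det\mathbb C)^{-1/2}$ appearing in Theorems \ref{T1.5}, \ref{T1.6} and \ref{T2.1}, consider, as an illustrative computation only, the isotropic case $\mathbb C=(\s^2/d)\,\mathbb I_d$. Then $\det\mathbb C=(\s^2/d)^d$ and $$\s^d\,(\det\mathbb C)^{-1/2}=\s^d\,(d/\s^2)^{d/2}=d^{d/2},$$ so that in this case the factor is bounded in terms of $d$ only. In general, $\s^d\,(\det\mathbb C)^{-1/2}=\s^d/(\s_1\cdots\s_d)$, which grows only polynomially in $\s_d^{-1}$ when the eigenvalues of $\mathbb C$ are unbalanced.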
\smallskip \smallskip Similarly to BG (1997a), in Section~\ref{s2}, we prove bounds for concentration functions. The proof is technically simpler than that of Theorem \ref{T1.5}, but it shows how to apply the principal ideas. This proof repeats almost literally the corresponding proof of BG (1997a). The only difference consists in the use of the new Lemma \ref{GZ2} which allows us to estimate characteristic functions of quadratic forms for relatively large values of the argument~$t$. In Sections \ref{s3} and \ref{s4}, Theorem \ref{T1.5} is proved. We shall replace Lemma~9.4 of BG~(1997a) by its improvement, Lemma~\ref{L9.4}. Another difference is a different choice of $k$ in \eqref{eq156a} and \eqref{eq156c} in comparison with that in BG~(1997a). In Sections \ref{s5}--\ref{s7} we prove the estimates for characteristic functions which were discussed in Section~\ref{s1}. \medskip {\bf Acknowledgment} We would like to thank V.V. Ulyanov for helpful discussions. \smallskip $\phantom 0$ $\phantom 0$ \section{\label {s2}Proofs of bounds for concentration functions} \smallskip {\it Proof of Theorems $\ref{T1.6}$ and $\ref{T2.1}$}. Below we shall prove the assertions \eqref{eq1.9}, $\eqref{eq1.9}\Longrightarrow \eqref{eq1.10}$ and $\eqref{eq1.10} \Longrightarrow \eqref{eq2.1}$. The proof repeats almost literally the corresponding proof of BG (1997a). It is given here for the sake of completeness. The only essential difference is in the use of Lemma~\ref{GZ2} in the proof of Lemma~\ref{L2.3}. We also have to replace 9 by 5 and $\diamond$ by $\bullet$ everywhere. $\square$\medskip For $ 0\leq t_0\leq T$ and $b\in\R^d$, define the integrals $$I_0=\int_{-T}^{T} \bgl| \wh \Psi_b (t)\bgr| \, dt,\q\q\q\q I_1= \int_{ t_0 \leq |t|\leq T} \bgl| \wh \Psi_b (t)\bgr| \, \ffrac {dt} {|\2 t\2 |},$$ where \beq\label{eq823} \wh \Psi_b(t)=\E \operatorname {e} \bgl \{ t\, \mathbb Q \4[Z_N -b]\bgr\} \eeq denotes the Fourier--Stieltjes transform of the distribution function $\Psi_b $ of $ \mathbb Q \4[Z_N-b]$ (see \eqref{Fur} and \eqref{eq1.18}). Note that $\bgl| \wh \Psi_b (-t)\bgr|=\bgl| \wh \Psi_b (t)\bgr|$. \begin{lemma}\label{L2.3} Assume condition $\mathcal N(p,\d, \mathcal S_o, \mathbb C^{-1/2}\,\wt X)$ with some $ 0\leq \d \leq 1/(5\4 s)$ and $5\le s=d<\infty $. Let $\s^2=1$ and \beq \label{eq23} t_0=c_1(s)\4 \s_1^{-2}(p\4 N)^{-1+ 2/s} , \q\q c_2(s)\,\s_1^{-2}\leq T\leq c_3(s)\,\s_1^{-2} \eeq with some positive constants $c_j(s)$, $1\leq j\leq 3$. Then \beq I_0\ll_s (\det \mathbb C)^{-1/2}\,(p\4 N)^{-1} ,\q\q\q\q\q I_1 \ll_s (\det \mathbb C)^{-1/2}\,( p\4 N)^{-1}. \label{eq2.3}\eeq \end{lemma} {\it Proof}. Note that the condition $\s^2=1$ implies that \beq\label{MTq}T\asymp_s\s_1^2\asymp_s\s^2=1 \q\text{and}\q \det \mathbb C\ll_s1. \eeq Denote $k =p\4 N$. Without loss of generality we assume that $k\geq c_s$, for a sufficiently large quantity $c_s$ depending on $s$ only. Indeed, if $k\leq c_s$, then one can prove \eqref{eq2.3} using \eqref{MTq} and $|\wh \Psi_b |\leq 1$. Choosing $c_s$ to be large enough, we ensure that $k\geq c_s$ implies $1/k\leq t_0 \leq T$. Lemma \ref{GZ2} and \eqref{MTq} now imply that \beq \label{MTN38} \int_{c_4(s)k^{-1+2/s}}^{T } \bgl| \wh \Psi_b (t)\bgr| \ffrac {dt} t \ll_s \ffrac {(\det \mathbb C)^{-1/2}}k, \eeq for any $c_4(s)$ depending on $s$ only. Inequalities \eqref{MTq} and \eqref{MTN38} imply \eqref{eq2.3} for $I_1$. Let us prove inequality \eqref{eq2.3} for $I_0$.
By \eqref{MTq} and by Lemma \ref {GZ}, for any\/ $\g>0$ and any fixed\/ $t\in\mathbb R$ satisfying\/ $k^{1/2}\left|t\right|\le c_5(s)$, where $c_5(s)$ is an arbitrary quantity depending on $s$ only, we have (taking into account that $|\wh \Psi_b |\leq 1$) \beq\label{MTN}\bgl| \wh \Psi_b (t)\bgr| \ll_{\g, s} \min\bgl\{ 1;\, k^{-\g}+ k^{-s/2}\,\left|t\right|^{-s/2}\,(\det \mathbb C)^{-1/2}\bgr\} , \q\q\q k= p\4 N. \eeq Furthermore, choosing an appropriate $\g$ and using \eqref{MTq}--\eqref{MTN}, we obtain \beq\label{MTN23}(\det \mathbb C)^{1/2}\,I_0 \ll_s \int_{0}^{ 1/k} \, {dt} + \ffrac {1} {k}+\int_{1/k}^\infty \ffrac {dt} {( t\4 k )^{s/2}} \ll_s \ffrac {1} {k} , \eeq proving \eqref{eq2.3} for $I_0$. $\square $ \medskip \smallskip {\it Proof of\/ $\eqref{eq1.9}$}. Let $\s^2=1$. Using a well-known inequality for concentration functions (see, for example, Petrov (1975, Lemma ~3 of Ch{.} 3)), we have \beq Q(Z_N ;\, \l )\leq 4\,\sup_{b\in \Rd} \, \max \bgl\{ {\l } ;\, 1 \bgr\} \int_{0}^1 \bgl| \wh \Psi_b (t)\bgr| \, dt .\label{eq2.5}\eeq To estimate the integral in \eqref{eq2.5} we shall apply Lemma \ref{L2.3} which implies that \beq\label{MTN89}Q(Z_N ;\, \l )\ll_d \max \bgl\{ {\l } ;\, 1 \bgr\} (p\4 N)^{-1} \, (\det\mathbb C)^{-1/2},\eeq proving \eqref{eq1.9} in the case $\s^2=1$. If $\s^2\ne1$, we obtain \eqref{eq1.9} applying \eqref{MTN89} to $Z_N/\s$. $\square$ \medskip \smallskip {\it Proof of\/ $\eqref{eq1.9}\Longrightarrow \eqref{eq1.10}$}. Without loss of generality we can assume that $N/m\geq 2$. Let $Y_1,Y_2,\dots $ be independent copies of $m^{-1/2} \4 Z_m$. Denote $W_k= Y_1+\dots +Y_k$. Then $\mathcal L (Z_N)= \mathcal L (\sqrt{m} \, W_k +y)$ with $k=\lceil N/m\rceil$ and with some $y$ independent of $W_k$. Therefore, $Q(Z_N; \l ) \leq Q(W_k ; \l /m )$. In order to estimate $Q(W_k ; \l /m )$ we apply \eqref{eq1.9} replacing $Z_N$ by $W_k$. We have \beqa Q(W_k ; \l /m )&\ll_s &(p\4 k )^{-1} \max\bgl\{ 1;\, \l\,\s^{-2} /m\bgr\}\,\s^d\,(\det\mathbb C)^{-1/2}\nn\\ &\ll_s &(p\4 N )^{-1} \max\bgl\{ m;\, \l\,\s^{-2}\bgr\}\,\s^d\,(\det\mathbb C)^{-1/2}. \q\q \square \eeqa \medskip \smallskip Recall that truncated random vectors and their moments are defined by \eqref{eq1.4a}--\eqref{eq1.5t} and that $\mathbb C=\cov X=\cov G$. \begin{lemma}\label{L2.4} The random vectors $X^\bullet$, $X_\bullet$ satisfy $$\langle \mathbb C \4 x,x\rangle = \langle \cov X^\bullet \, x,x\rangle +\E \langle X_\bullet ,x\rangle^2+ \langle \E X^\bullet ,x\rangle^2.$$ There exist independent centered Gaussian vectors\/ $G_\ast$ and\/ $W$ such that $$\mathcal L (G) =\mathcal L (G_\ast +W )$$ and $$2\4 \cov G_\ast =2\4 \cov X^\bullet =\cov \wt {X^\bullet},\q\ \langle \cov W\4 x,x\rangle =\E \langle X_\bullet ,x\rangle^2+ \langle \E X^\bullet ,x\rangle^2.$$ Furthermore, $$\E \| \mathbb C^{-1/2}\,G \|^2= d=\E \| \mathbb C^{-1/2}\,G_\ast \|^2 + \E \|\mathbb C^{-1/2}\, W \|^2$$ and\/ \,$\E \|\mathbb C^{-1/2}\, W \|^2\leq 2\, d \, \Pi_2^\bullet$. \end{lemma} We omit the simple proof of this lemma (see BG (1997a, Lemma 2.4) for the same statement with $\diamond$ instead of $\bullet$). Lemma~\ref{L2.4} allows us to define the vector $X'$ by~\eqref{eq1.20}. \smallskip Recall that $Z_N^\bullet$ and $Z_N^\diamond$ denote sums of $N$ independent copies of $X^\bullet$ and $X^\diamond$ respectively. \begin{lemma}\label{L2.5} Let $\ve >0$. 
There exist absolute positive constants $c$ and $c_1$ such that the condition $\Pi_2^\bullet \leq c_1\4 p\, \d^2 / (d\4\ve^2)$ implies that $$\mathcal N( p,\d ,\mathcal S ,\ve \,\mathbb C^{-1/2}\,G ) \Longrightarrow \mathcal N ( p/4 , 4\4 \d ,\mathcal S ,\ve \,(2\4 m)^{-1/2}\,\mathbb C^{-1/2}\, \wt {Z_m^\bullet} ),$$ for $m\geq c\4\ve^4\4 d^2\4 N\4 \4 \L_4^\bullet/ (p\, \d^4 )$. \end{lemma} Lemmas \ref{L2.4} and \ref{L2.5} are in fact the statements of Lemmas 2.4 and~2.5 from BG (1997a) applied to the vectors $\mathbb C^{-1/2}\,X$ instead of the vectors $X$. We use in this connection equalities \eqref{eq13}, \eqref{eq14rr} and~\eqref{eq14ru} replacing in the formulation $\s^2$, $\L_4^\diamond$, $\Pi_q^\diamond$, $G$, $Z_m^\diamond$, $\ldots$ by $d$, $\L_4^\bullet$, $\Pi_q^\bullet$, $\mathbb C^{-1/2}\,G$, $Z_m^\bullet$, \ldots \ respectively. \smallskip {\it Proof of\/ $\eqref{eq1.10}\Longrightarrow\eqref{eq2.1}$}. By a standard truncation argument, we have \beq \bgl| \P\bgl \{ Z_N \in A \bgr\} - \P\bgl \{ Z_N^\bullet \in A \bgr\} \bgr| \leq N\,\P\bgl \{ \|\mathbb C^{-1/2}\,X\|> \sqrt{d\4N} \bgr\}\leq \Pi_2^\bullet , \label{eq2.8}\eeq for any Borel set $A$, and \beq Q(Z_N,\l) \leq \Pi_2^\bullet + Q(Z_N^\bullet ,\l ). \label{eq2.9}\eeq Recall that we are proving \eqref{eq2.1} assuming that $5\le d<\infty$. It is easy to see that, for an arbitrary absolute positive constant~$c_0$, condition $N(p,\d ,\mathcal S_o , c_0 \,\mathbb C^{- 1/2}\, G)$ with \beq s=d,\q\d=1/(20\,s),\q p\asymp_d1\label{eq3.3sss}\eeq is in fact fulfilled automatically for any orthonormal system $\mathcal S_o$, since the vector $\mathbb C^{-1/2}\, G$ has standard Gaussian distribution in $\Rd$ and $\P\bgl \{ \norm{c_0 \,\mathbb C^{-1/2}\, G-e}\le\delta\bgr\}=c(d)$ for any vector $e\in\Rd$ with $\norm e=1$. Clearly, $4\4\delta=1/(5\4s)$. Write $K= \ve/ \sqrt{2} $ with $\ve = c_0 $. Then, by \eqref{eq3.3sss} and Lemma \ref{L2.5}, we have \beq \mathcal N ( p,\d ,\mathcal S_o ,\ve \4 \mathbb C^{-1/2}\,G ) \Longrightarrow \mathcal N ( p/4 , 4\4 \d ,\mathcal S_o ,m^{-1/2}\,K \4 \mathbb C^{-1/2}\,\wt {Z_m^\bullet} ), \label{eq2.10}\eeq provided that \beq \Pi_2^\bullet \leq c_1(d) , \q\q\q m\geq c_2(d) \4 N\4 \4 \L_4^\bullet.\label{eq2.11}\eeq Without loss of generality we may assume that $\Pi_2^\bullet \leq c_1(d)$, since otherwise the result follows easily from the trivial inequality $Q(Z_N;\l)\leq 1$. The non-degeneracy condition \eqref{eq2.10} for $K \4 \wt {Z_m^\bullet} $ allows to apply \eqref{eq1.10} of Theorem \ref{T1.6}, and, using \eqref{eq3.3sss}, we obtain \beq Q(Z_N^\bullet ,\l) = Q( K \4 Z_N^\bullet ,K^2 \4 \l ) \ll_d N^{-1} \max\{ m;\, K^2 \4 \l/K^2\4\s^2 \}\,\s^d\,(\det\mathbb C)^{-1/2}, \label{eq2.12}\eeq for any $m$ such that \eqref{eq2.11} is fulfilled. Choosing the minimal $m$ in \eqref{eq2.11}, we obtain \beq Q(Z_N^\bullet ,\l) \ll_d \max\{ \L_4^\bullet ;\, \l/(\s^2\4 N)\}\,\s^d\,(\det\mathbb C)^{-1/2}. \label{eq2.13}\eeq Combining the estimates \eqref{eq2.9} and \eqref{eq2.13}, we conclude the proof. $\square$ \medskip \section{\label {s3}Auxiliary lemmas} In Sections \ref{s3} and \ref{s4} we shall prove Theorem \ref{T1.5}. Therefore, we shall assume that its conditions are satisfied. 
We consider the case $d<\infty$ assuming that the following conditions are satisfied: \beq \mathbb Q^2=\mathbb I_d,\q\s^2 =1,\q d\ge5,\q b=\sqrt N\,a.\label{eq3.3}\eeq Moreover, it is easy to see that, for any absolute positive constant~$c_0$ and for any orthonormal system $\, \mathcal S_o=\{ \fs e1s\}\subset\Rd$ , condition \beq N(p,\d ,\mathcal S_o , c_0 \,\mathbb C^{- 1/2}\, G)\q\text{with}\q p\asymp_d1,\q 5\le s=d<\infty,\q \d =1/(20\,s)\label{eq3.3ss}\eeq is in fact fulfilled automatically since the vector $\mathbb C^{-1/2}\, G$ has standard Gaussian distribution in $\Rd$ and, therefore, $\P\bgl \{ \norm{c_0 \,\mathbb C^{-1/2}\, G-e}\le\delta\bgr\}=\P\bgl \{ \norm{ \,\mathbb C^{-1/2}\, G-c_0\me\,e}\le c_0\me\,\delta\bgr\}=c(d)$ for any vector $e\in\Rd$ with $\norm e=1$. Notice that the assumption $\s^2=1$ does not restrict generality since from Theorem~\ref{T1.5} with $\s^2=1$ we can derive the general result replacing $X$, $G$ by $X/\s$, $G/\s$, etc. Other assumptions in \eqref{eq3.3} are included as conditions in Theorem \ref{T1.5}. Section \ref{s3} is devoted to some auxiliary lemmas which are similar to corresponding lemmas of BG (1997a). In several places, the proof of Theorem \ref{T1.5} repeats almost literally the proof of Theorem~1.5 in BG (1997a). Note, however, that we shall use truncated vectors $X^\bullet_j$, while in BG (1997a) the vectors $X^\diamond_j$ were involved. We start with an application of the Fourier transform to the functions $\Psi_b$ and $\Phi_b$, where $b=\sqrt N\,a$. We shall estimate integrals over the Fourier transforms using results of Sections \ref{s2}, \ref{s5}--\ref{s7} and some technical lemmas of BG~(1997a). We shall also apply some methods of estimation of the rate of approximation in the CLT in multidimensional spaces (cf{.}, e.g., Bhattacharya and Rao (1986)). Below we shall use the following formula for the Fourier inversion (see, for example, BG (1997a)). A~smoothing inequality of Prawitz (1972) implies (see BG (1996, Section~4)) that \beq F(x) = \ffrac 12 + \ffrac {i}{ 2\pi } \val \int_{|t|\le K} \operatorname {e} \bgl\{ -xt\bgr\} \wh F (t)\, \ffrac {dt} t +R, \label{eq3.1}\eeq for any $K>0$ and any distribution function $F$ with characteristic function $\wh F $ (see \eqref{Fur}), where \beq \label{eq3.1o}|R|\leq \ffrac 1K \int_{|t|\le K} | \wh F (t) |\, {dt}.\eeq Here $ \val \int f(t)\, dt =\lim_{\ve \to 0} \int_{|t| > \ve }f(t)\, dt$ denotes the Principal Value of the integral. Recall that the random vectors $X^\bullet$, $X'$ are defined in \eqref{eq1.4t} and \eqref{eq1.20} and $Z_N^\bullet$, $Z_N'$ are sums of $N$ their independent copies. Note that the Gaussian vector $W$ involved in~\eqref{eq1.20} is independent of all other vectors and have properties described in Lemma~\ref{L2.4}. Write $\Psi^\bullet_b $ and $\Psi'_b $ for the distribution function of $\mathbb Q \4[Z_N^\bullet-b ]$ and $\mathbb Q \4[Z_N'-b ]$ respectively. For $0\leq k\leq N$ introduce the distribution function \beq \Psi^{(k)}_b (x) =\P \bgl\{ \mathbb Q \4\bgl[\fsu G1k+ X_{k+1}' +\dots + X_{N}' -b\bgr]\leq x \bgr\} . \label{eq3.16} \eeq Notice that $\Psi^{(0)}_b=\Psi'_b$, $\Psi^{(N)}_b=\Phi_b $. The proof of the following lemma repeats the proof of Lemma 3.1 of BG (1997a). The difference is that here we use the truncated vectors $X_j^\bullet$ instead of $X_j^\diamond$. \begin{lemma}\label{L3.1}Let $c_d$ be a quantity depending on $d$ only. There exist positive quantities $c_1(d)$ and $c_2(d)$ depending on $d$ only such that the following statement is valid. 
Let\/ $\Pi_2^\bullet \leq c_1(d)\4 p $ and let an integer $1\leq m\leq N$ satisfy ${m\geq c_2(d) \,N\4 \L_4^\bullet /p}$, Write $$K= c_0^2 /(2\4 m),\q\q \q\q t_1 = c_d \4 (p N/m )^{-1+2/d}.$$ Let\/ $F$ denote any of the functions $\Psi^\bullet_b$, $\Psi'_b$, $\Psi^{(k)}_b$ or\/ $\Phi_b$. Then we have \beq F (x) = \ffrac 12 + \ffrac {i}{ 2\pi } \val \int_{|t|\leq t_1 } \operatorname {e} \{ -x\4 t \4 K \}\, \wh F (t \4 K )\, \ffrac {dt} t +R_1 , \label{eq3.17}\eeq with $ |R_1|\ll_d (p\4 N )^{-1} \4 m\,(\det\mathbb C)^{-1/2}$. \end{lemma} {\it Proof.} We shall assume that $ (p\4 N )^{-1} \4 m\le c_3(d)$ with sufficiently small $c_3(d)$ since otherwise the statement of Lemma~\ref{L3.1} is trivial (see \eqref{MTq}, \eqref{eq3.1} and \eqref{eq3.1o}). Let us prove~\eqref{eq3.17}. We shall combine \eqref{eq3.1} and Lemma \ref{L2.3}. Changing the variable $t= \tau \4 K $ in formula \eqref{eq3.1}, we obtain \beq F (x) = \ffrac 12 + \ffrac {i}{ 2\pi } \val \int_{|t|\leq 1} \operatorname {e} \{ -x\4 t \4 K \}\, \wh F (t \4 K )\, \ffrac {dt} t +R, \label{eq3.19} \eeq where \beq \left|R\4\right|\leq \int_{|t|\leq 1 } \bgl| \wh F (t \4 K ) \bgr|\, {dt} . \label{eq3.19a}\eeq Notice that \,$\Psi_b^\bullet$, $\Psi'_b$, $\Psi_b^{(k)}$ and $\Phi_b$ \,are distribution functions of random variables which may be written in the form: \beq\mathbb Q \4[V+T ],\q\q\q V\= G_1 +\dots + G_k +X_{k+1}^\bullet+\dots +X_N^\bullet ,\nn\eeq with some $k$, $0\leq k\leq N$, and some random vector $T$ which is independent of $X_j^\bullet $ and $G_j$, for all~$j$. Let us consider separately two possible cases: $k\geq N/2$ and $k<N/2$. {\it The case $k< N/2$}. Let $Y$ denote a sum of $m$ independent copies of $K^{1/2} \4 X^\bullet $. Let $\is Y$ be independent copies of $Y$. Then we have \beq \mathcal L(K^{1/2} \4 V) =\mathcal L(\fsu Y1l +T_1) \label{eq3.20}\eeq with $l= \lceil N/(2\4 m)\rceil$ and some random $T_1$ independent of $\fs Y1l$. By \eqref{eq3.3ss} and by Lemma \ref{L2.5}, we have \beq\mathcal N( p,\d, \mathcal S , c_0 \4\mathbb C^{-1/2}\, G ) \Longrightarrow \mathcal N( p/4,4 \4\d, \mathcal S , \mathbb C^{-1/2}\,\wt Y )\label{eq3.21}\eeq provided that \beq \Pi_2^\bullet \ll p/d^{3} \q\q \text{ and}\q\q m\gg d^6\4N\4 \L_4^\bullet / p.\label{eq3.22}\eeq The inequalities in \eqref{eq3.22} follow from conditions of Lemma \ref{L3.1} if we choose some sufficiently small $($resp. large$)$ $c_1(d)$ $($resp.~ $c_2(d))$. Due to \eqref{eq3.3}, \eqref{eq3.3ss}, \eqref{eq3.20} and \eqref{eq3.21}, we can apply Lemma \ref{L2.3} in order to estimate the integrals in \eqref{eq3.19} and \eqref{eq3.19a}. Replacing in Lemma~\ref{L2.3} $X$ by $Y$ and $N$ by $l$, we obtain \eqref{eq3.17} in the case $k<N/2$. {\it The case $k\geq N/2$}. We can argue as in the previous case defining now $Y$ as a sum of $m$ independent copies of $K^{1/2} \4 G$. Condition $\mathcal N( p/4,4 \d, \mathcal S_o , \mathbb C^{-1/2}\,\wt Y )$ is satisfied by \eqref{eq3.3ss}, since now $\mathcal L( \wt Y)=\mathcal L( c_0\4 G )$. $\square $ \smallskip Following BG (1997a), introduce the upper bound $\varkappa \bgl( t; N, \mathcal L (X), \mathcal L (G)\bigr)$ for the characteristic function of quadratic forms ({cf.\!} Bentkus (1984) and Bentkus, G\" otze and Zitikis (1993)). 
We define $\varkappa \bgl( t; N, \mathcal L (X), \mathcal L (G)\bigr) =\varkappa \bgl( t; N, \mathcal L (X)\bigr) + \varkappa \bgl( t; N,\mathcal L (G)\bigr)$, where \beq \varkappa ( t; N, \mathcal L (X)) = \sup_{x\in \Rd } \;\bgl| \E \operatorname {e}\bgl\{ t \4 \mathbb Q \4 [Z_j ]+ \langle x, Z_j \rangle \bgr\} \bgr| ,\q\q Z_j=\fsu X1j ,\label{eq3.23}\eeq with $ j= \bgl\lceil(N-2)/14\bgr\rceil$. In the sequel, we shall use that \beq\varkappa (t ; N, \mathcal L(X') ,\mathcal L(G) )\leq \varkappa (t ; N, \mathcal L(X^\bullet ) ,\mathcal L(G) ).\label{eq1app}\eeq For the proof, it suffices to note that $X'=X^\bullet -\E X^\bullet+W$ and $W$ is independent of $ X^\bullet$. \smallskip \begin{lemma}\label{L3.2} Let the conditions of Lemma $\ref{L3.1}$ be satisfied. Then \beqa\int_{|t|\leq t_1 }&&\!\!\!\! \hskip-1cm\bgl( |t|\4 K\bgr)^{\alpha }\4 \varkappa \bgl(t\4 K ; N, \mathcal L( X^\bullet ),\mathcal L (G)\bigr) \, \ffrac {dt} {|t|}\nn\\& \ll_{\a,d}& (\det \mathbb C)^{-1/2}\, \begin{cases} (N\4p ) ^{-\a}, & \text{for}\ \, 0\leq \a < d/2, \\ (N\4 p) ^{-\a}\,\bgl(1+ \left|\,\log(N\4 p/m)\right|\bgr),& \text{for}\ \, \a = d/2,\\ (N\4 p) ^{-\a}\,\bgl(1+(N\4 p/m)^{(2 \a- d)/d}\bgr), & \text{for}\ \, \a > d/2.\label{eee} \end{cases} \eeqa \end{lemma} Lemma \ref{L3.2} is a generalization of Lemma~3.2 from BG (1997a) which contains the same bound for $0\leq \a < d/2$. In this paper, we have to estimate the left hand side of \eqref{eee} in the case $d/2\leq \a$ too. {\it Proof}. We shall assume again that $ (p\4 N )^{-1} \4 m\le c_3(d)$ with sufficiently small $c_3(d)$ since otherwise \eqref{eee} is an easy consequence of $\left|\varkappa\,\right|\le1$. Note that $\bgl|\E \operatorname {e}\bgl\{ t \4 \mathbb Q \4 [Z_j ]+ \langle x, Z_j \rangle \bgr\}\bgr|=\bgl|\E \operatorname {e} \bgl \{ t\, \mathbb Q \4[Z_j -y]\bgr\}\bgr|$ with $y=-\mathbb Q \4x/2$. By \eqref{eq3.3ss} and \eqref{eq3.21}, the condition $\mathcal N( p/4, 4 \d, \mathcal S_o , K^{1/2} \4 \mathbb C^{-1/2}\,\wt {Z_m^\bullet} )$ is fulfilled. Therefore, collecting independent copies of $K^{1/2}\4 X^\bullet$ in groups as in \eqref{eq3.20}, we can apply Lemma \ref{GZ}. By \eqref{MTq}, \eqref{eq3.3ss} and this lemma, for any $\g>0$ and $|t|\leq t_1$, $$ \varkappa (t\4 K ; N, \mathcal L( X^\bullet )) \ll_{\g, d} (p\4 N/m)^{-\g}+ \min\bgl\{ 1;\,\,(N\4p/m)^{-d/2}\,\left|t\right|^{-d/2}\,(\det \mathbb C)^{- 1/2}\bgr\}.$$ We have used that $\sigma^2=1$ implies $\sigma_1^2\asymp_d1$. A similar upper bound is valid for the quantity $\varkappa (t\4 K ; N, \mathcal L( G))$ (cf{.} the proof of \eqref{eq3.17} for $k>N/2$). Thus, we get, for any $\g>0$ and $|t|\leq t_1$, $$\varkappa (t\4 K ; N, \mathcal L( X^\bullet ),\mathcal L (G))\ll_{\g, d} (p\4 N/m)^{-\g}+ \min\bgl\{ 1;\,\,\,(\det \mathbb C)^{-1/2}\, \bgl(m/(|t|\4 p\4 N) \bgr)^{d/2}\bgr\}.$$ Integrating this bound (cf. the estimation of $I_1$ in Lemma \ref{L2.3}), we obtain \eqref{eee}. $\square$ $\phantom 0$ \smallskip \section{\label {s4}Proof of Theorem $\ref{T1.5}$} To simplify notation, in Section \ref{s4} we write $\Pi=\Pi_2^{\bullet}$ and $\L=\L_4^{\bullet}$. The assumption $\s^2=1$ and equalities $\E\| \mathbb C^{-1/2}\4 X \|^2=d$, \eqref{eq1.4t} and \eqref{eq1.5t} imply \beq \Pi+\L\,N\gg1,\q \Pi+\L \leq 1 ,\q \s_j^2 \leq 1,\q \det\mathbb C\le1 . \label{eq3.4}\eeq Recall that $\Delta_N^{(a)}$ and functions $\Psi_b$, $\Phi_b$ and $\Theta_b$ are defined by \eqref{eq1.21} and \eqref{eq1.18}--\eqref{edg}. 
Note now that $\Theta_b^{\bullet}(x)=E_a^{\bullet}(x/N)$ and, according to \eqref{edg},\beq \D_N^{(a)}\le \D_{N,\bullet}^{(a)}+\sup_{x\in \mathbb R}\; \bgl|\Theta_b (x)-\Theta_b^{\bullet} (x)\bgr|, \label{eq3.10w}\eeq where $b=\sqrt N\,a$ and\beq \D_{N,\bullet}^{(a)}= \sup_{x\in\mathbb R }\; \bgl|\Psi_b (x) - \Phi_b(x)-\Theta_b^{\bullet}(x)\bgr|. \label{eq3.10ww}\eeq Let us verify that \beq\sup_{x\in\mathbb R }\;\bgl|\Theta_b (x)- \Theta_b^\bullet (x)\bgr|\ll_d \Pi_3^\bullet. \label {eq3.12} \eeq To this end we shall apply representation \eqref{eq1.21}--\eqref{eq1.22} of the Edgeworth correction as a signed measure and estimate the variation of that measure. Indeed, using \eqref{eq1.21}--\eqref{eq1.22}, we have \beq\sup_{x\in\mathbb R }\;\bgl|\Theta_b (x)- \Theta_b^\bullet (x)\bgr|\ll N^{-1/2} \4 I,\q\q\q I\= \int_{\Rd} \bgl|\E p'''(x) X^3- \E p'''(x) {X^\bullet}^3\bgr|\, dx .\label{eq1}\eeq By the explicit formula \eqref{eq1.23}, the function $u \na p'''(x) u^3$ is a $3$-linear form in the variable~$u$. Therefore, using $X= X^\bullet+X_{\bullet} $ and $ \|X^\bullet\| \, \|X_{\bullet}\|=0 $, we have $ p'''(x) X^3- p'''(x) {X^\bullet}^3 = p'''(x) {X_\bullet^3} $, and \beq N^{-1/2} \4 I\leq 3\4d^{3/2}\4 \Pi_3^\bullet \int_{\Rd} \bgl( \| \mathbb C^{-1/2}\4 x \| + \| \mathbb C^{-1/2}\4 x \|^3 \bgr) \, p(x)\, dx = c_d \, \Pi_3^\bullet.\label{eq222} \eeq Inequalities \eqref{eq1} and \eqref{eq222} imply now \eqref{eq3.12}. To prove the statement of Theorem $\ref{T1.5}$, we have to derive that \beq \D_{N,\bullet}^{(a)}\ll_d ( \Pi + \L)(1+\norm{a})^3\,(\det\mathbb C)^{-1/2}. \label{eq3.10}\eeq While proving \eqref{eq3.10} we assume that \beq \Pi\leq c_d ,\q\q\text{and} \q\q \L \leq c_d , \label{eq3.14} \eeq with a sufficiently small positive constant $c_d$ depending on $d$ only. These assumptions do not restrict generality. Indeed, we have $\bgl|\Psi_b (x) - \Phi_b(x)\bgr| \leq 1$. If conditions~\eqref{eq3.14} do not hold, then the estimate \beq\sup_{x\in\mathbb R }\;\bgl|\Theta_b^\bullet(x) \bgr|\ll_d N^{-1/2}\4 \E \|\mathbb C^{-1/2}\4 X^\bullet \|^3\ll_d \L^{1/2} \label{eq3.15}\eeq immediately implies \eqref{eq3.10}. In order to prove \eqref{eq3.15} we can use \eqref{eq1.5t} and representation \eqref{eq1.21}--\eqref{eq1.22} of the Edgeworth correction. Estimating the variation of that measure and using \beq \E \|\mathbb C^{-1/2}\, X^\bullet \|^2\leq \E \|\mathbb C^{-1/2} X \|^2=d,\label{eq3.10u}\eeq \beq (\E \|\mathbb C^{-1/2}\, X^\bullet \|^3)^2 \leq \E \|\mathbb C^{-1/2}\, X^\bullet \|^2\, \E \|\mathbb C^{-1/2}\, X^\bullet \|^4,\label{eq3.10uu}\eeq we obtain \eqref{eq3.15}. It is clear that\beq \D_{N,\bullet}^{(a)}\le \sup_{x\in \mathbb R}\; \Bgl(\bgl|\Psi_b (x)-\Psi_b' (x)\bgr|+ \bgl|\Theta_b^\bullet (x)-\Theta_b' (x)\bgr| + \bgl|\Psi_b' (x) - \Phi_b(x)-\Theta'_b(x)\bgr|\Bgr). \label{eq3.10x}\eeq Similarly to \eqref{eq1}, we have \beq\sup_{x\in\mathbb R }\;\bgl|\Theta_b^\bullet (x)- \Theta_b' (x)\bgr|\ll N^{- 1/2} \4 J,\q\q\q J\= \int_{\Rd} \bgl|\E p'''(x) {X^\bullet}^3- \E p'''(x) {X'}^3\bgr|\, dx .\label{eq2}\eeq Recall that vector $X'$ is defined in \eqref{eq1.20}. By Lemma ~\ref{L2.4}, we have $\E \|\mathbb C^{-1/2}\, W \|^2\leq 2\4d\, \Pi$ (hence, $\E \|\mathbb C^{-1/2}\, W \|^q\ll_d \Pi^{q/2}$, for $0\le q\le2$). Moreover, representing $W$ as a sum of a large number of i.i.d. 
Gaussian summands and using the Rosenthal inequality (see BG (1997a, inequality~(1.24)), we conclude that \beq \E \|\mathbb C^{-1/2}\, W \|^q\ll_q \bgl(\E \|\mathbb C^{-1/2}\, W \|^2\bgr)^{q/2}\ll_{q, d} \Pi^{q/2},\q q\ge0.\label{eq442}\eeq Furthermore, according to \eqref{eq1.4t}, \eqref{eq1.5t} and \eqref{eq3.14}, \beq\label{eq234}\E\|\mathbb C^{-1/2}\,X_\bullet\| \ll_d\Pi\4N^{- 1/2}\ll_d\Pi^{1/2}\4N^{-1/2} .\eeq Hence, by \eqref{eq1.5t}, \eqref{eq1.20}, \eqref{eq3.4}, \eqref{eq442} and \eqref{eq234}, \beq\label{eq2345} \E \|X' \|^4\ll\ovln\beta\=\E \|\mathbb C^{- 1/2}\,X' \|^4\ll_d N\4\L+\Pi^2.\eeq Using \eqref{eq1.23}, \eqref{eq3.4}, \eqref{eq3.14}, \eqref{eq3.10u} and \eqref{eq2}--\eqref{eq234}, we get \beqa N^{-1/2} \4 J&\ll_d &\Pi^{1/2}(N^{-1/2}\4\Pi+\Lambda^{1/2})\4 \int_{\Rd} \bgl( \| \mathbb C^{-1/2}\4 x \| + \| \mathbb C^{-1/2}\4 x \|^3 \bgr) \, p(x)\, \nn dx\\ &\ll_d& \Pi+\Lambda .\label{eq223} \eeqa Thus, according to \eqref{eq2} and \eqref{eq223}, \beq\sup_{x\in\mathbb R }\;\bgl|\Theta_b^\bullet (x)- \Theta_b' (x)\bgr|\ll_d \Pi+\Lambda .\label{eqpo}\eeq The same approach is applicable for the estimation of $\bgl|\Theta_b'\bgr|$. Using \eqref{eq1.20}, \eqref{eq1.21}--\eqref{eq1.23}, \eqref{eq3.4}, \eqref{eq3.10u}, \eqref{eq3.10uu}, \eqref{eq442} and \eqref{eq234}, we get \beqa\sup_{x\in\mathbb R }\;\bgl|\Theta_b' (x)\bgr|&\ll& N^{-1/2} \int_{\Rd} \bgl| \E p'''(x) {X'}^3\bgr|\, dx \nn\\ &\ll_d&\Lambda^{1/2} +N^{-1/2}\4\Pi^{3/2}.\label{eq999}\eeqa Let us prove that \beq \sup_{x\in\mathbb R }\;\bgl|\Psi_b (x) - \Psi'_b (x) \bgr|\ll (\det \mathbb C)^{-1/2}\,p^{-2}\4 (\Pi+\L) (1+\norm{a}^2). \label{eq3.25} \eeq Using truncation (see \eqref{eq2.8}), we have $ | \Psi_b - \Psi_b^\bullet |\leq \Pi$, and \beq \sup_{x\in\mathbb R }\;\bgl |\Psi_b (x) - \Psi'_b (x) \bgr| \leq \Pi+ \sup_{x\in\mathbb R } \;\bgl| \Psi_b^\bullet ( x ) - \Psi'_b (x) \bgr| . \label{eq3.26} \eeq In order to estimate $| \Psi_b^\bullet - \Psi'_b |$, we shall apply Lemmas \ref{L3.1} and \ref{L3.2}. The number $m$ in these Lemmas exists and $N\4 \L /p\gg_d 1$, as it follows from \eqref{eq3.4} and \eqref{eq3.14}. Let us choose the minimal $m$, that is, $m\asymp_d N\4 \L /p $. Then $(p\4 N )^{-1} \4 m \ll_d \L/p^2 $ and $m/N\ll_d \L/p$. Therefore, using Lemma \ref{L3.1}, we have \beq \sup_x \;\bgl| \Psi_b^\bullet ( x ) - \Psi_b ' (x) \bgr| \ll_d p^{-2}\4 \L\,(\det \mathbb C)^{-1/2}+\int_{|t|\leq t_1} \bgl| \wh \Psi_b^\bullet (\tau )- \wh \Psi'_b (\tau ) \bgr|\, \ffrac {dt} {|t|}, \q\q\q \tau =t\4 K. \label{eq3.27}\eeq We shall prove that \beq \bgl|\wh \Psi_b^\bullet (\tau )-\wh \Psi_b ' (\tau ) \bgr| \ll_d \varkappa \, \Pi\, |\tau |\4 N\4 (1+|\tau | \4 N)(1+\norm{a}^2) ,\label{eq3.28} \eeq with $\varkappa = \varkappa (\tau ; N, \mathcal L(X^\bullet ))$. Combining \eqref{eq3.26}--\eqref{eq3.28}, using $\tau = t\4 K$ and integrating inequality \eqref{eq3.28} with the help of Lemma \ref{L3.2}, we derive \eqref{eq3.25}. Let us prove \eqref{eq3.28}. Recall that $X'=X^\bullet -\E X^\bullet +W $, where $W$ denotes a centered Gaussian random vector which is independent of all other random vectors and such that $\cov X' =\cov G $ (see Lemma \ref{L2.4}). 
Writing $ D= Z_N^\bullet -\E Z_N^\bullet-b $, we have $$Z_N^\bullet-b = D+\E Z_N^\bullet ,\q\q\q \mathcal L(Z_N'-b) = \mathcal L(D+ \sqrt{N} \4 W),$$ and \beq\label{eq3.27m}\bgl|\wh \Psi_b^\bullet (\tau )-\wh \Psi'_b(\tau ) \bgr| \leq \bgl| f_1(\tau )\bgr|+\bgl| f_2(\tau )\bgr| \eeq with \beq \aligned f_1(\tau )&= \E \e\bgl\{ \tau \4 \mathbb Q \4[ D +\sqrt{N} \4 W ]\bgr\}- \E \e\bgl\{ \tau \4 \mathbb Q \4[ D ]\bgr\},\\ f_2(\tau )&= \E \e\bgl\{ \tau \4 \mathbb Q \4[ D + \E Z_N^\bullet ]\bgr\}- \E \e\bgl\{ \tau \4 \mathbb Q \4[ D]\bgr\}. \endaligned \label{eq3.29}\eeq Now we have to prove that both $\bgl| f_1(\tau )\bgr|$ and $\bgl| f_2(\tau )\bgr|$ may be estimated by the right hand side of \eqref{eq3.28}. Let us consider $f_1$. We can write $\mathbb Q \4[ D +\sqrt{N} \4 W ]= \mathbb Q \4[ D ]+A+B$ with $A=2\4 \sqrt{N} \4 \langle\mathbb Q \4 D , W\rangle$ and $B=N \4 \mathbb Q \4[ W ]$. Taylor expansions of the exponent in \eqref{eq3.29} in powers of $i\4\tau \4 B$ and $i\4 \tau \4 A$ with remainders $\O ( \tau \4 B)$ and $\O( \tau ^2\4 A^2)$ respectively imply (recall that $\E W=0$ and $\mathbb Q^2=\mathbb I_d$) \beq \bgl| f_1(\tau )\bgr|\ll \varkappa \4 |\tau | \4 N\4 \E \|W\|^2 + \varkappa \4 \tau ^2 N\4 \E \|W\|^2 \, \E \|D\|^2, \label{eq3.30}\eeq where $\varkappa = \varkappa (\tau ; N, \mathcal L( X^\bullet ))$. The estimation of the remainders of these expansions is based on the splitting and conditioning techniques described in Section 9 of BG~(1997a), see also Bentkus, G\" otze and Zaitsev (1997). Using the relations $\s^2=1$, $\E \|W\|^2 \ll\E \|\mathbb C^{-1/2}\4W\|^2 \ll_d \Pi$ and $\E \|D\|^2 \ll N(1+\norm{a}^2)$, we derive from \eqref{eq3.30} that \beq \bgl| f_1(\tau )\bgr| \ll_d \varkappa \4 \Pi \4 |\tau |\4 N\4 \bgl( 1 + |\tau |\4 N \bgr)(1+\norm{a}^2) .\label{eq3.31}\eeq Note that $\E Z_N^\bullet = N\,\E X^\bullet= -N\,\E X_\bullet$. Expanding the exponent $\e\bgl\{ \tau \4 \mathbb Q \4[ D + \E Z_N^\bullet ]\bgr\}$, using \eqref{eq234} and proceeding similarly to the proof of~\eqref{eq3.31}, we obtain \beq \bgl| f_2(\tau )\bgr|\ll_d \varkappa \4 \Pi\4|\tau | \4 N(1+\norm{a}).\label{eq3.31f} \eeq Inequalities \eqref{eq3.27m}, \eqref{eq3.31} and \eqref{eq3.31f} imply now \eqref{eq3.28}. \smallskip It remains to estimate $\bgl|\Psi'_b - \Phi_b - \Theta_b' \bgr|$. Recall that the distribution functions $\Psi_b^{(l)}(x)$, for~$0\leq l\leq N$, are defined in \eqref{eq3.16}. Fix an integer $k$, $1\leq k\leq N$. Clearly, we have \beq\label{eq156}\sup_{x\in\mathbb R }\;\bgl|\Psi'_b (x) - \Phi_b(x) - \Theta_b' (x)\bgr|\le I_1+I_2+I_3,\eeq where \beq\label{eq156b}I_1=\sup_{x\in\mathbb R }\;\bgl|\Psi_b^{(k)} (x) - \Phi_b(x)- (N-k)\,\Theta_b' (x)/N\bgr|,\eeq \beq\label{eq156a}I_2=\sup_{x\in\mathbb R }\; \bgl|\Psi'_b (x) - \Psi_b^{(k)}(x) \bgr|,\eeq and\beq\label{eq156c}I_3=\sup_{x\in\mathbb R }\; k\4N\me\,\bgl|\Theta_b' (x) \bgr|.\eeq \smallskip Let us estimate $I_1$. Define the distributions \beq\label{eq1.23aq}\mu(A)= \P \bgl\{ U_k +\sum_{j=k+1}^N X_j' \in \sqrt{N} \4 A \bgr\},\q \q\q \mu_0(A)= \P \bgl\{ U_N \in \sqrt{N} \4 A \bgr\}= \P \bgl\{ G \in A \bgr\},\eeq where $U_l =G_1+\dots+G_l $. Introduce the measure $\chi '$ obtained by replacing $X$ by $X'$ in \eqref{eq1.22}. For Borel sets $A\subset \Rd$ define the Edgeworth correction (to the distribution $\mu$) as \beq\mu_1^{(k)} (A)= (N-k)\4 N^{-3/2}\chi' (A)/6.
\eeq Introduce the signed measure \beq\nu =\mu -\mu_0-\mu_1^{(k)}.\label{eq1.22aq}\eeq It is easy to see that a re-normalization of random vectors implies (see relations \eqref{eq1.21}, {\eqref{eq1.18}-- \eqref{edg}}, \eqref{eq3.16} and \eqref{eq1.23aq}--\eqref{eq1.22aq}) \beqa \bgl|\Psi_b^{(k)} (x) - \Phi_b(x)- (N-k)\,\Theta_b' (x)/N\bgr| &=&\nu\bgl(\bgl\{u\in\Rd:\mathbb Q[u-a]\le x/N\bgr\}\bgr)\nn\\ &\le&\d_N\= \sup_{A \subset\Rd }\bgl| \nu (A)\bgr|.\label{eq156d} \eeqa \begin{lemma}\label{L9.4} Assume that $ d<\infty$ and $1\leq k\leq N$. Then there exists a quantity $c(d)$ depending on $d$ only such that\/ $\d_N$ defined in $\eqref{eq156d}$ satisfies the inequality\beq\d_N \ll_d \ffrac {\ovln\b}{N} +\ffrac {N^{d/2}}{k^{d/2}}\exp\bgl\{ - c(d)\, k/\ovln\beta\bgr\} \label{eq9.14} \eeq with $\ovln\b=\E \|\mathbb C^{-1/2}\4X'\|^4$. \end{lemma} {\it An outline of the proof}. We repeat and slightly improve the proof of Lemma~9.4 in BG~(1997a) (cf. the proof of Lemma~2.5 in BG (1996)). We shall prove \eqref{eq9.14} assuming that $\cov X= \cov X'=\cov G=\mathbb I_d$. Applying it to $\mathbb C^{-1/2}\4 X'$ and $\mathbb C^{-1/2}\4 G$, we obtain \eqref{eq9.14} in the general case. While proving \eqref{eq9.14} we assume that $\ovln\b/N \leq c_d$ and $N \geq 1/ c_d$ with a sufficiently small positive constant $c_d$. Otherwise \eqref{eq9.14} follows from the obvious bounds $\ovln\b\ge\s^4=d^2$ and $$\d_N\ll_d 1 + (\ovln\b/N)^{1/2} \, \int_{\Rd } \|x\|^3\4 p(x)\, dx \ll_d 1 + (\ovln\b/N)^{1/2} . $$ Set $n=N-k$. Denoting by $Z_j^\prime$ and $U_j^\prime$ sums of $j$ independent copies of $X^\prime$ and $G$ respectively, introduce the multidimensional characteristic functions \beq g(t)=\E \e \bgl\{ \langle N^{-1/2} \4t, G\rangle \bgr \},\q h(t)=\E \e \bgl\{ \langle N^{-1/2} \4t, X'\rangle \bgr \},\label{eqe1}\eeq\beq \label{eq9.17a}f(t)= \E \e \bgl\{ \langle N^{-1/2} \4 t, Z_{n}^\prime \rangle \bgr\}=h^{n}(t),\q\q f_0(t)= \E \e \bgl\{ \langle N^{-1/2} \4 t, U_{n}^\prime \rangle \bgr\}=g^{n}(t),\eeq \beq f_1(t)= n\,m(t) \,f_0(t), \q\hbox{where}\q m(t)= \ffrac {1}{6\,{N^{3/2}}} \4 \E \langle i\4 t, X^\prime \rangle ^3 , \eeq\beq\wh \nu (t)=(f(t)-f_0(t)-f_1(t))\, g(\rho t),\q \rho^2 =k.\label{eq9.16s}\eeq It is easy to see that \beq \label{Fur1}\wh \nu (t)=\int_{\Rd} \operatorname{e}\{\langle t, x\rangle\}\,\nu(dx). \eeq Using the truncation, we obtain \beq\E \bgl\| Z_l^\prime/\sqrt N\bgr\|^\gamma\ll_{\gamma,d} 1,\q\q \gamma>0,\q 1\le l\le N . \label{eq9.15} \eeq By an extension of the proof of Lemma 11.6 in Bhattacharya and Rao (1986), see also the proof of Lemma~2.5 in BG~(1996), we obtain \beq\d_N\ll_d \max_{|\a | \leq 2d}\, \, \int_{\Rd} \bgl|\partial^\a \wh \nu (t)\bgr|\, dt. \label{eq9.16} \eeq Here $|\a |=|\a_1 |+\cdots+|\a_d |$, $\a =(\a_1,\ldots,\a_d)$, $\a_j\in\mathbb Z$, $\a_j\ge0$.
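For the reader's convenience, we indicate the standard argument behind \eqref{eq9.16}; it adds nothing beyond the Fourier-inversion technique just cited. Since $k\ge1$, each of the measures $\mu$, $\mu_0$ and $\mu_1^{(k)}$ has a bounded integrable density, and hence so does $\nu$; denote this density by $q$. For any multi-index $\a$ with $|\a |\leq 2d$, the function $x^\a\4 q(x)$ is, up to a constant factor, the inverse Fourier transform of $\partial^\a \wh \nu$, so that $$\sup_{x\in\Rd}\;\bgl|x^\a\4 q(x)\bgr|\ll_d \int_{\Rd} \bgl|\partial^\a \wh \nu (t)\bgr|\, dt .$$ Since $\d_N\leq \int_{\Rd}|q(x)|\, dx$ and $\int_{\Rd}\bgl(1+\norm x^{2d}\bgr)^{-1}\, dx<\infty$, these bounds taken over all $|\a |\leq 2d$ yield \eqref{eq9.16}.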
In order to derive \eqref{eq9.14} from \eqref{eq9.16}, it suffices to prove that, for $|\a |\leq 2d$, \beqa \bgl|\partial^\a \wh \nu (t)\bgr|&\ll_d& g( c_1\4\rho \4 t), \label{eq9.17}\\ \bgl|\partial^\a \wh \nu (t)\bgr| &\ll_d& \ovln\b\4 N^{-1} \, (1+\|t\|^{6} )\, \exp\{ - c_2\, \|t\|^2\} , \q\hbox{for}\ \, \|t\|^2\leq c_3(d) \4 N/\ovln\b .\label{eq9.18} \eeqa Indeed, using \eqref{eq9.17} and denoting $T= \sqrt{c_3(d) \4 N/\ovln\b}$, we obtain \beq \int\limits_{\|t\|\geq T} \bgl|\partial^\a \wh \nu (t)\bgr|\, dt \ll_d \int\limits_{\|t\|\geq T} g( c_1\4\rho \4 t) \, dt \ll_d \ffrac{N^{d/2}}{\rho^{ d}} \, \exp\Bgl\{ - \ffrac{c_1^2\4 \rho^{2}\4 T^2}{8\4N}\Bgr\} \int\limits_{\Rd} \exp\{ - c_1^2\, \|t\|^2/8\} \, dt,\label{eq9.19} \eeq and it is easy to see that the right hand side of \eqref{eq9.19} is bounded from above by the second summand in the right hand side of \eqref{eq9.14}. Similarly, using \eqref{eq9.18}, we can integrate $\bgl|\partial^\a \wh \nu (t)\bgr|$ over $\|t\|\leq T$, and the integral is bounded from above by $c_d \4 \ovln\b/N$. In the proof of \eqref{eq9.17}--\eqref{eq9.19} we applied standard methods of estimation which are provided in Bhattacharya and Rao (1986). In particular, we used a Bergstr\" om type identity \beq f-f_0-f_1=\sum_{j=0}^{n-1}(h-g-m)\,h^j\,g^{n-j-1} +\sum_{j=0}^{n-1} m\sum_{l=0}^{j-1}(h-g)\,h^l\,g^{n-l-1}, \label{eqe2} \eeq relations \eqref{eqe1}--\eqref{eq9.15}, $1\leq k\leq N$, $\bgl|\partial^\a \exp \{ -c_4\, \|t\|^2\}\bgr|\ll_\a \exp \{ -c_5\, \|t\|^2\}$, ${\sqrt{N}/\ovln\b^{1/2}\gg_d1}$ and $y^{c_d} \exp \{ -y\} \ll_d 1 $, for $y>0$. $\square$ \medskip Applying \eqref{eq156b}, \eqref{eq156d} and Lemma \ref{L9.4}, we get \beq I_1\ll_d \ffrac {\ovln\b}{N} +\ffrac {N^{d/2}}{k^{d/2}}\exp\bgl\{ - c(d)\, k/\ovln\b\bgr\}. \label{eq3.40} \eeq For the estimation of $I_2$ we shall use Lemma \ref{L9.3} which is an easy consequence of BG (1997a, Lemma 9.3), \eqref{eq1app} and \eqref{eq2345}. \begin{lemma}\label{L9.3} We have $$ \bgl|\wh \Psi_b' (t)-\wh \Psi_b^{(l)}(t) \bgr| \ll \varkappa\4 t^2 \4 l \, \bgl( \ovln\b +|t|\4 N \4\ovln\b +|t|\4 N \4 \sqrt{N \4\ovln\b} \bgr)(1+\norm{a}^3),\q\hbox{for \ }0\leq l\leq N , $$ where $\varkappa = \varkappa ( t; N, \mathcal L (X^{\bullet}) , \mathcal L (G))$ $($cf{.} $\eqref{eq3.23})$. \end{lemma} \smallskip As in the proof of \eqref{eq3.27}, applying Lemma~\ref{L3.1} (choosing $m\asymp_d N\4 (\L+\Pi) /p $) and using~\eqref{eq3.3ss}, we obtain $$ I_2\ll_d (\L+\Pi)\,(\det \mathbb C)^{-1/2} + \int_{|t|\leq t_1 }\bgl|\wh \Psi'_b (\tau ) - \wh \Psi_b^{(k)} (\tau )\bgr| \, dt /|t|,\q\q \tau =t\4 K . $$ The existence of such an~$m$ is ensured by \eqref{eq3.3ss}, \eqref{eq3.4} and~\eqref{eq3.14}, Applying Lemma \ref{L9.3} and replacing in that Lemma $t$ by $\tau $, we have \beq\label{eq4.8} \bgl|\wh \Psi_b '(\tau ) - \wh \Psi_b^{(k)}(\tau ) \bgr| \ll \varkappa\4 \tau ^2 \4 k \, \bgl( \ovln\b +| \tau |\4 N \4\ovln\b +| \tau |\4 N \4 \sqrt{N \4\ovln\b} \bgr)(1+\norm{a}^3) .\eeq Integrating with the help of Lemma \ref{L3.2} and using~\eqref{eq3.3ss}, we obtain \beq I_2\ll_d (\det \mathbb C)^{-1/2}\, \bgl(\Pi+\L + k \4 N^{-2} \4 \bgl( \ovln\b + \sqrt{N \4\ovln\b} \bgr)\bgl( 1 + (\Pi+\L)^{-1/d} \bgr)(1+\norm{a}^3)\bgr) . \label{eq3.39}\eeq Let us choose $k\asymp_d N^{1/4} \4{\ovln\b}^{3/4} $. Such $k\leq N$ exists by $\ovln\b\gg_d\s^4=1$, by \eqref{eq2345} and by assumption \eqref{eq3.14}. 
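This choice of $k$ balances the bounds \eqref{eq3.40} and \eqref{eq3.39}: it makes the exponentially small term in \eqref{eq3.40} negligible with respect to ${\ovln\b}/N$, while the terms containing $k$ in \eqref{eq3.39} stay of the order $({\ovln\b}/N)^{5/4}+({\ovln\b}/N)^{7/4}$. Indeed, for $k\asymp_d N^{1/4} \4{\ovln\b}^{3/4}$ we have $$ \ffrac {N^{d/2}}{k^{d/2}}\asymp_d \Big(\!\ffrac {N}{{\ovln\b}}\!\Big)^{3d/8},\q\q \ffrac {k}{\ovln\b}\asymp_d \Big(\!\ffrac {N}{{\ovln\b}}\!\Big)^{1/4},\q\q \ffrac {k\4{\ovln\b}}{N^{2}}\asymp_d\Big(\!\ffrac {\ovln\b}{N}\!\Big)^{7/4},\q\q \ffrac {k\4\sqrt{N \4\ovln\b}}{N^{2}}\asymp_d\Big(\!\ffrac {\ovln\b}{N}\!\Big)^{5/4}. $$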
Then \eqref{eq3.40} and \eqref{eq3.39} turn into \beq I_1\ll_d \ffrac {\ovln\b}{N} +\Big(\!\ffrac {N}{{\ovln\b}}\!\Big)^{3d/8} \exp\Bgl\{ -c_d\,\Big(\!\ffrac {N}{{\ovln\b}}\!\Big)^{1/4}\!\Bgr\}\ll_d \ffrac {\ovln\b}{N},\label{eq4.899} \eeq and \beq I_2\ll_d (\det \mathbb C)^{-1/2}\, \bgl(\Pi+\L + \Bigl( \Big(\!\ffrac {\ovln\b}{N}\!\Big)^{5/4}+ \Big(\!\ffrac {\ovln\b}{N}\!\Big)^{7/4}\Bigr)\bgl( 1 + (\Pi+\L)^{-1/d} \bgr)(1+\norm{a}^3)\bgr). \label{eq3.399} \eeq Using \eqref{eq3.3ss}, \eqref{eq3.14}, \eqref{eq2345} and \eqref{eq3.399}, we get \beq I_2\ll_d (\det \mathbb C)^{-1/2}\, \bgl(\Pi+\L +\ffrac {\ovln\b}{N} (1+\norm{a}^3)\bgr). \label{eq3.3999} \eeq Finally, by \eqref{eq3.14}, \eqref{eq2345}, \eqref{eq999} and \eqref{eq156c}, \beq I_3\ll_d \ffrac k N\bgl(\Lambda^{1/2} +N^{-1/2}\4\Pi^{3/2}\bgr) \ll \Lambda+\Pi .\label{eq777} \eeq Inequalities \eqref{eq3.14}, \eqref{eq3.10x}, \eqref{eq2345}, \eqref{eqpo}, \eqref{eq3.25}, \eqref{eq156}, \eqref{eq4.899}, \eqref{eq3.3999} and \eqref{eq777} imply now \eqref{eq3.10} (and, hence, \eqref{eq1.8w}) by an application of $\Pi+\L\leq 1$. Note that, by \eqref{eq1.5t}, we have $\Pi\leq \Pi_3^\bullet$. Together with \eqref{eq3.10w} and \eqref{eq3.12}, inequality \eqref{eq3.10} yields~\eqref{eq1.8}. The statement of Theorem $\ref{T1.5}$ is proved. $\square$ \medskip $\phantom 0$ \section{\label {s5}From Probability to Number Theory} In Section \ref{s5} we shall reduce the estimation of the integrals of the modulus of characteristic functions $\wh\Psi_b(t)$ to the estimation of the integrals of some theta-series. We shall use the following lemmas. \begin{lemma}\label{L5.1} {\rm(BG (1997a, Lemma 5.1))} Let\/ $L,C\in \mathbb R^d$ and let $\mathbb Q:\Rd\to\Rd$ be a symmetric linear operator. Let\/ $Z,U,V$ and\/ $ W$ denote independent random vectors taking values in $\mathbb R^d$. Denote by $$ P(x) = \langle \mathbb Q \4x,x\rangle + \langle L,x\rangle +C , \q\q x \in \mathbb R^d,$$ a real-valued polynomial of second order. Then $$ 2\, \Bgl|\E \operatorname {e} \bgl\{ t\, P(Z+U+V+W)\bgr\}\Bgr|^2 \leq \E \operatorname {e} \bgl\{ 2 \, t\, \langle \mathbb Q \4\wt Z,\wt U\rangle \bgr\} + \E \operatorname {e} \bgl\{ 2 \,t\, \langle \mathbb Q \4\wt Z,\wt V\rangle \bgr\} . $$ \end{lemma} Let ~${\is \ve}$ denote i.i.d{.} symmetric Rademacher random variables. Let $\d >0$, $\mathcal S = \{ \fs e1s \} \subset \mathbb R^d$ and let $\mathbb D:\Rd\to\Rd$ be a linear operator. Usually, we shall take $\mathbb D=\mathbb C^{-1/2}$. We shall write~${\mathcal L (Y)\in {\mathbf {\Gamma}}\4 (\d;\mathbb D, \mathcal S)}\, $ if a discrete random vector~$Y$ is distributed as $\ve_1 z_1 +\dots +\ve_s z_s\2$, with some (non-random) $ z_j\in \mathbb R^d$ such that $ \|\mathbb D\4z_j -e_j\|\leq \d$, for all $1\leq j\leq s$. Recall that $\mathcal S_o=\{\fs e1s\}\subset \Rd $ denotes an orthonormal system. \begin{lemma}\label{L6.3} Assume that\/ $\mathbb Q^2= \mathbb I_d$ and that the condition $\mathcal N(p,\d, \mathcal S ,\mathbb D\4 \wt X )$ holds with some $0< p\leq 1$ and\/ $\d>0$. Write $m =\bgl\lceil {p\4 N }/ (5\4 s)\bgr\rceil$.
Then, for any\/ $0<A\leq B$, $b\in\mathbb R^d$ and\/ $\gamma>0$, we have \beq\label{eq7.1qq}\int\limits_A^B \bgl| \wh \Psi_b (t)\bgr| \, \ffrac {dt} {|\2 t\2 |} \leq I+ c_\g (s)\, (p\4 N)^{-\g}\, \log\ffrac BA ,\eeq with \beq\label{pppp} I= \sup_\Gamma \,\sup_{b\in \Rd} \,\int\limits_A^B \sqrt{\vf ( t/4)} \, \ffrac {dt}{|\2 t\2 |} , \q\q \vf (t) \= \Bgl| \E \operatorname {e} \bgl \{ t \, \mathbb Q \4[Y + b]\bgr\} \Bgr|^2 ,\eeq where $Y= \fsu U1m$ denotes a sum of independent $($non-i.i.d.$)$ vectors, and\/ $\sup\nolimits_\Gamma $ is taken over all $ \bgl\{\mathcal L(U_j):\, \, 1\leq j\leq m\bgr\} \subset {\mathbf {\Gamma}}\4 (\d;\mathbb D, \mathcal S )$. \end{lemma} Lemma \ref{L6.3} is an analogue of Corollary 6.3 from BG (1997a). Its proof is even simpler than that in BG (1997a). Therefore it is omitted. \smallskip \begin{lemma}\label{L7.3} Assume that\/ $\mathbb Q^2= \mathbb I_d$ and that the condition $\mathcal N(p,\d, \mathcal S ,\mathbb D\4 \wt X )$ holds with some $0< p\leq 1$ and\/ $\d>0$. Let \beq \label{dfn}n \= \bgl\lceil {p\4 N}/({16\4 s})\bgr\rceil\ge1.\eeq Then, for any\/ $0<A\leq B$, $b\in\mathbb R^d$ and\/ $\gamma>0$,\beq\int\limits_A^B\bgl| \wh \Psi_b (t)\bgr|\ffrac {dt}{|\2 t\2 |} \leq c_\g (s)\, (p\4 N)^{-\g}\, \log\ffrac BA+ \sup_\Gamma \,\int\limits_A^B \sqrt{\E \operatorname {e} \bgl \{ t\, \langle \mathbb Q\,\wt W ,\wt W' \rangle/2 \bgr\}} \, \ffrac {dt}{|\2 t\2 |},\label{eq7.1}\eeq and for any fixed\/ $t\in\mathbb R$, \beq\bgl| \wh \Psi_b (t)\bgr| \leq c_\g (s)\, (p\4 N)^{-\g}+ \sup_\Gamma \sqrt{\E \operatorname {e} \bgl \{ t\, \langle \mathbb Q\,\wt W ,\wt W' \rangle/2 \bgr\}} ,\label{equ7.1}\eeq where \/ $W= \fsu V1n$ and\/~$W'=V_1'+\dots + V_n' $ are independent sums of independent copies of random vectors $V$ and\/ $V'$ respectively, and the supremum $\sup\nolimits_\Gamma $ is taken over all $ \mathcal L(V), \mathcal L(V') \in {\mathbf {\Gamma}}\4 (\d;\mathbb D, \mathcal S )$. \end{lemma} Note that this lemma will be proved for general $\mathcal S$, but in this paper we need $\mathcal S=\mathcal S_o$ only. Moreover, a more careful estimation of binomial probabilities could allow us to replace $c_\g (s)\, (p\4 N)^{-\g}$ in \eqref{eq7.1qq}, \eqref{eq7.1} and \eqref{equ7.1} by $c (s)\,\exp\bgl\{ -c\4p\4 N\bgr\}$ (see e.g. Nagaev and Chebotarev (2005)). However, we do not need to use this improvement. {\it Proof of Lemma\/ $\ref{L7.3}$.} Inequality \eqref{equ7.1} is an analogue of the statement of Lemma 7.3 from BG (1997a). Its proof is even simpler than that in BG (1997a). Therefore it is omitted. Let us show that \beq\int\limits_A^B\bgl| \wh \Psi_b (t)\bgr|\ffrac {dt}{|\2 t\2 |} \leq c_\g (s)\, (p\4 N)^{-\g}\, \log\ffrac BA+ \sup_\Gamma \,\int\limits_A^B \sqrt{\E \operatorname {e} \bgl \{ t\, \langle \mathbb Q\,\wt W ,\wt W' \rangle/2 \bgr\}} \, \ffrac {dt}{|\2 t\2 |},\label{eq7.1q}\eeq where \/ $W= \fsu V1n$ and ~$W'=V_1'+\dots + V_n' $ are independent sums of independent $(${\it non-i.i.d.}$)$ vectors, and $\sup\nolimits_\Gamma $ is taken over all $ \bgl\{\mathcal L(V_j), \mathcal L(V_j'):\, \, 1\leq j\leq n\bgr\} \subset {\mathbf {\Gamma}}\4 (\d;\mathbb D, \mathcal S )$. Comparing \eqref{eq7.1} and \eqref{eq7.1q}, we see that inequality \eqref{eq7.1q} is related to sums of {\it non-i.i.d.} vectors $\{V_j\}$ and~$\{V_j'\}$ while inequality \eqref{eq7.1} deals with {i.i.d.} vectors. Nevertheless, we shall derive \eqref{eq7.1} from \eqref{eq7.1q}.
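The derivation of \eqref{eq7.1} from \eqref{eq7.1q} given below relies only on two elementary facts, which we record here for the reader's convenience: for non-negative random variables $\eta_1,\ldots ,\eta_n$ (not necessarily independent), H\"older's inequality yields $$\E \,\prod_{j=1}^n \eta_j\leq \prod_{j=1}^n \bgl(\E\, \eta_j^{\,n}\bgr)^{1/n},$$ and, for non-negative numbers $c_{jk}$, the arithmetic--geometric mean inequality yields $\bgl(\prod_{j,k=1}^{n}c_{jk}\bgr)^{1/n^2}\leq n^{-2}\,\sum_{j,k=1}^{n}c_{jk}$.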
While proving \eqref{eq7.1q} we can assume that $p\4 N\geq c_s$ with a sufficiently large constant $c_s$, since otherwise \eqref{eq7.1q} is obviously valid. Let $\vf(t)$ be defined in \eqref{pppp}, where $Y= \fsu U1m$ denotes a sum of independent $($non-i.i.d.$)$ vectors with $ \bgl\{\mathcal L(U_j):\, \, 1\leq j\leq m\bgr\} \subset {\mathbf {\Gamma}}\4 (\d;\mathbb D, \mathcal S )$, $m =\bgl\lceil {p\4 N }/ (5\4 s)\bgr\rceil$. We shall apply the symmetrization Lemma \ref{L5.1}. Split $ Y=T+T_1+T_2 $ into independent sums of independent summands so that each of the sums $ T$, $T_{1}$ and $ T_{2}$ contains $n=\lceil p\4 N /(16 \4 s) \rceil $ independent summands $U_j$. Such an $n$ exists since $p\4 N\geq c_s$ with a sufficiently large $c_s$. Lemma \ref{L5.1} implies that \beq\label{434}2 \, \vf (t) \leq \E \operatorname {e} \bgl \{ 2\, t \, \langle \mathbb Q\,\wt T,\wt T_{1} \rangle \bgr\} + \E \operatorname {e} \bgl \{ 2\, t \, \langle \mathbb Q\,\wt T, \wt T_{2} \rangle \bgr\} . \eeq Inequality \eqref{eq7.1q} follows now from \eqref{434} and Lemma~\ref{L6.3}. Let now $W= \fsu V1n$ and ~$W'=V_1'+\dots + V_n' $ be independent sums of independent $($non-i.i.d.$)$ vectors with $ \bgl\{\mathcal L(V_j), \mathcal L(V_j'):\, \, 1\leq j\leq n\bgr\} \subset {\mathbf {\Gamma}}\4 (\d;\mathbb D, \mathcal S )$. Using that all random vectors $\wt V_j$ are symmetrized and have non-negative characteristic functions and applying H\"older's inequality, we obtain, for each $t$, \beqa\label{l1} \E \operatorname {e} \bgl \{ t \, \langle\mathbb Q\,\wt W ,\wt W' \rangle \} &=& {\mathbf E}_{\wt W'} \Big(\prod_{j=1}^n {\mathbf E}_{\wt V_j}\operatorname {e} \bgl \{ t \, \langle\mathbb Q\,\wt V_j ,\wt W' \rangle\bgr\} \Big)\\ & \le & \Big(\prod_{j=1}^n {\mathbf E}_{\wt W'} \big({\mathbf E}_{\wt V_j} \operatorname {e} \bgl \{ t \, \langle\mathbb Q\,\wt V_j ,\wt W' \rangle\} \big)^n\Big)^{1/n}\\ &=& \Big(\prod_{j=1}^n {\mathbf E}_{\wt W'} \big({\mathbf E}_{\wt T_j}\operatorname {e} \bgl \{ t \, \langle\mathbb Q\,\wt T_j ,\wt W' \rangle\} \big)\Big)^{1/n}\\ & = &\Big(\prod_{j=1}^n \E \operatorname {e} \bgl \{ t \, \langle\mathbb Q\,\wt T_j ,\wt W' \rangle\}\Big)^{1/n},\label{l2} \eeqa where $\wt T_j \=\sum_{l=1}^n \wt V_{jl}$ denotes a sum of i.i.d. copies $\wt V_{jl}$ of $\wt V_j$ which are independent of all other random vectors and variables. Repeating the steps \eqref{l1}--\eqref{l2} separately for each factor $\E \operatorname {e} \bgl \{ t \, \langle\mathbb Q\,\wt T_j ,\wt W' \rangle\}$ on the right hand side, in place of the expectation $ \E \operatorname {e} \bgl \{ t \, \langle\mathbb Q\,\wt W ,\wt W' \rangle \} $, we get (with $\wt T'_{k} \=\sum_{l=1}^{n} \wt {V}'_{k l}$, where $\wt {V}'_{k l}$ are i.i.d.
copies of $\wt {V}'_{k}$ independent of all other random vectors) \beq \E \operatorname {e} \bgl \{ t \, \langle\mathbb Q\,\wt W ,\wt W' \rangle \} \le \Big(\prod_{j=1}^n \prod_{k=1}^{n} \E \operatorname {e} \bgl \{ t \, \langle\mathbb Q\,\wt T_j ,\wt T'_k \rangle\}\Big)^{1/n^2}.\label{qwe1} \eeq Thus, using \eqref{qwe1} and the arithmetic-geometric mean inequality, we have \beqa\int\limits_A^B \sqrt{\E \operatorname {e} \bgl \{ t\, \langle\mathbb Q\, \wt W ,\wt W' \rangle/2 \bgr\}} \, \ffrac {dt}{|\2 t\2 |}&\le&\int\limits_A^B \Big(\prod_{j=1}^n \prod_{k=1}^{n} \E \operatorname {e} \bgl \{ t \, \langle\mathbb Q\,\wt T_j ,\wt T'_k \rangle/2\}\Big)^{1/2n^2} \, \ffrac {dt}{|\2 t\2 |}\nn \\ &\le& \ffrac 1 {n^2} \sum_{j=1}^n\sum_{k=1}^{n} \int\limits_A^B \Big(\E \operatorname {e} \bgl \{ t \, \langle\mathbb Q\,\wt T_j ,\wt T'_k \rangle/2\}\Big)^{1/2}\, \ffrac {dt}{|\2 t\2 |}\nn\\ &\le& \sup_\Gamma \,\int\limits_A^B \sqrt{\E \operatorname {e} \bgl \{ t\, \langle\mathbb Q\, \wt T ,\wt T' \rangle/2 \bgr\}} \, \ffrac {dt}{|\2 t\2 |},\label{qwe}\eeqa where \/ $T= U_1+\dots + U_n$ and ~$T'=U_1'+\dots + U_n' $ are independent sums of independent copies of random vectors $U$ and $U'$ respectively, and the supremum $\sup\nolimits_\Gamma $ is taken over all $ \mathcal L(U), \mathcal L(U') \in {\mathbf {\Gamma}}\4 (\d;\mathbb D, \mathcal S )$. Inequalities \eqref{eq7.1q} and \eqref{qwe} imply now the statement of the lemma. $\square$ \medskip The following Lemma \ref{Le3.2} provides a Poisson summation formula. \begin{lemma}\label{Le3.2} Let\/ $\operatorname{Re} z > 0, \, a,b \in {\mathbb R}^s$ and\/ $\mathbb S: {\mathbb R}^s \rightarrow \mathbb R^s$ be a positive definite symmetric non-degenerate linear operator. Then \beqa &&\sum_{m \in \mathbb{Z}^s} \4 \exp \bigl\{-z\, \mathbb S[\4m+a\4] + 2\4 \pi\4 i\, \langle\4 m,b\4\rangle \bigr\}\nn \\ & = &\bgl(\det (\mathbb S / \pi)\bgr)^{-1/2}\4 z^{-s/2} \exp \bigl\{ - 2\4 \pi \4 i\, \langle\4 a,b\4\rangle \bigr\} \sum_{l \in \mathbb{Z}^s} \exp \Bigl\{-\ffrac{\pi^2}{z}\mathbb S^{-1}[\4l + b\4] -2\4\pi\4 i\,\langle\4 a, l\4\rangle \Bigr\},\nn \eeqa where\/ $\mathbb S^{-1}: {\mathbb R}^s \rightarrow \mathbb R^s$ denotes the inverse positive definite operator for\/ $\mathbb S$. \end{lemma} \medskip {\it Proof.} See, for example, Fricker (1982), {p.}~116, or Mumford (1983), {p.}~189, formula~(5.1); and {p.}~197, formula (5.9). $\square$ \bigskip Let the conditions of Lemma \ref{L7.3} be satisfied. Introduce one-dimensional lattice probability distributions $H_n=\mathcal L(\xi_n)$ with integer valued $\xi_n$ setting $$ \P\bgl\{\xi_n=k\bgr\}= A_n\,n^{-1/2}\,\exp\left\{-k^2/2n\right\}, \q \text {for}\ k\in\mathbb Z. $$ It is easy to see that ~${A_n\asymp1}$. Moreover, by Lemma \ref{Le3.2}, \beq \wh H_n(t)\ge0,\q\q \text {for all}\ t\in\mathbb R.\label{eq76} \eeq Introduce the $s$-dimensional random vector $\zeta_n$ having as coordinates independent copies of~$\xi_n$. Then, for $m=(m_1,\dots,m_s)\in\mathbb Z^s$, we have \beq\label{qm} q(m)\=\P\bgl\{\zeta_n=m\bgr\}=A_n^s\,n^{-s/2}\, \exp\left\{-\norm{m}^2/2n\right\}. \eeq \begin{lemma}\label{L7.5} Let $W= \fsu V1n$ and ~$W'=V_1'+\dots + V_n' $ denote independent sums of independent copies of random vectors $V$ and $V'$ such that $$V=\ve_1\4 z_1+ \dots +\ve_s\4 z_s,\q\q\q V'=\ve_{s+1}\4 z_1'+ \dots +\ve_{2s}\4 z_s',$$ with some $z_j,z_j'\in \mathbb R^d $. Introduce the matrix $\mathbb B_t= \{ b_{ij}(t): 1\leq i,j\leq s\}$ with $b_{ij}(t)= t\,\langle \mathbb Q\4z_i,z_j' \rangle$. 
Then $$ \E \operatorname {e} \bgl \{ t\, \langle \mathbb Q\4\wt W ,\wt W' \rangle/4 \bgr\} \ll_s\E\operatorname {e} \bgl \{ \langle\mathbb B_t \4 \zeta_n,\zeta'_n \rangle \bgr\}+ \exp\left\{-c\4 n\right\}, \q\q\q \text{ for all}\ \, t\in \mathbb R, $$ where $\zeta'_n$ are independent copies of $\zeta_n$ and $c$ is a positive absolute constant. \end{lemma} {\it Proof}. Without loss of generality, we shall assume that $n\ge c_1$, with a sufficiently large absolute constant~$c_1$. Consider the random vector $Y=(\wt \ve_1,\dots ,\wt \ve_s)\in \mathbb R^s$ with coordinates which are symmetrizations of i.i.d. Rademacher random variables. Let $R=(R_1, \dots,R_s)$ and $T$ denote independent sums of $n$ independent copies of $Y/2$. Then we can write \beq\label{rav} \E \operatorname {e} \bgl \{ t\, \langle \mathbb Q\4\wt W ,\wt W' \rangle/4 \bgr\} = \E \operatorname {e} \bgl \{ \langle\mathbb B_t \4 R,T \rangle \bgr\}, \q\q \text{ for all}\ \, t\in \mathbb R. \eeq Note that the scalar product $\langle \cdt, \cdt \rangle$ in $\E \operatorname {e} \bgl \{ \langle\mathbb B_t \4 R,T \rangle \bgr\}$ means the scalar product of vectors in $\mathbb R^s$. In order to estimate this expectation, we write it in the form \beqa \E \operatorname {e} \bgl \{ \langle\mathbb B_t \4 R,T \rangle \bgr\} &=& \E {\mathbf E}_R\, \operatorname {e} \bgl \{ \langle\mathbb B_t \4 R,T \rangle \bgr\} \nn\\ &=&\sum_{{\ov m}\in\mathbb Z^s}p({\ov m})\sum_{m\in\mathbb Z^s}p(m) \operatorname {e} \bgl \{ \langle\mathbb B_t \4 m,{\ov m} \rangle\bgr\} , \label{eq7.7}\eeqa with summing over $m=(m_1,\dots,m_s)\in\mathbb Z^s$, $\ov{m}=(\ov{m}_1,\dots,\ov{m}_s)\in\mathbb Z^s$ and \beq\label{pm} p(m)=\P\bgl\{R=m\bgr\}=\prod_{j=1}^s\P\bgl\{R_j=m_j\bgr\} =\prod_{j=1}^s 2^{-2n}\hbox{\begin{footnotesize}$\displaystyle\binom{2\4n}{ m_j+n}$\end{footnotesize}}, \eeq if $\max\limits_{1\le j\le s}|m_j|\le n$ and $p(m)=0$ otherwise. Clearly, for fixed \,$T=\ov{m}$, \beq {\mathbf E}_R\, \operatorname {e} \bgl \{ \langle\mathbb B_t \4 R,T \rangle \bgr\} =\sum_{m\in\mathbb Z^s}p(m) \operatorname {e} \bgl \{ \langle\mathbb B_t \4 m,{\ov m} \rangle\bgr\}\ge0 \label{eq7.8} \eeq is a value of the characteristic function of symmetrized random vector $\mathbb B_t \4 R$. Using Stirling's formula, it is easy to show that there exist positive absolute constants $c_2$ and $c_3$ such that \beq \P\bgl\{R_j=m_j\bgr\}\ll n^{-1/2}\,\exp\left\{-m_j^2/2n\right\},\q \text{ for}\ |m_j|\le c_2\4n,\label{eq7.9} \eeq and \beq \P\bgl\{|R_j|\ge c_2\4n\bgr\}\ll \exp\left\{-c_3\4n\right\}.\label{eq7.10} \eeq Using \eqref{eq7.7}--\eqref{eq7.10}, we obtain \beqa \E \operatorname {e} \bgl \{ \langle \mathbb B_t \4 R,T \rangle \bgr\} &\ll_s&\sum_{{\ov m}\in\mathbb Z^s}q(\ov m) \sum_{m\in\mathbb Z^s}p(m) \operatorname {e} \bgl \{ \langle \mathbb B_t \4 m,{\ov m} \rangle\bgr\}+ \exp\left\{-c_3\4n\right\}\nn\\ & =&\sum_{m\in\mathbb Z^s}p(m) \sum_{{\ov m}\in\mathbb Z^s}q(\ov m) \operatorname {e} \bgl \{ \langle \mathbb B_t \4 m,{\ov m} \rangle\bgr\}+ \exp\left\{-c_3\4n\right\}\nn\\ & =&\E{\mathbf E}_{\zeta_n} \operatorname {e} \bgl \{ \langle \mathbb B_t \4 R,\zeta_n \rangle \bgr\}+ \exp\left\{-c_3\4n\right\}\nn\\ & =&\E \operatorname {e} \bgl \{ \langle \mathbb B_t \4 R,\zeta_n \rangle \bgr\}+ \exp\left\{-c_3\4n\right\}. 
\label{eq7.12} \eeqa Now we repeat our previous arguments, noting that \beq {\mathbf E}_{\zeta_n} \operatorname {e} \bgl \{ \langle \mathbb B_t \4 R,\zeta_n \rangle \bgr\} =\sum_{\ov m\in\mathbb Z^s}q(\ov m)\, \operatorname {e} \bgl \{ \langle \mathbb B_t \4 R,{\ov m} \rangle\bgr\}\ge0 \label{eq7.13} \eeq is a value of the non-negative characteristic function of the random vector $\zeta_n$ (see \eqref{eq76}). Using again \eqref{eq7.9} and \eqref{eq7.10}, we obtain \beq \E\operatorname {e} \bgl \{ \langle \mathbb B_t \4 R,\zeta_n \rangle \bgr\} \ll_s\E\operatorname {e} \bgl \{ \langle \mathbb B_t \4 \zeta_n,\zeta'_n \rangle \bgr\}+ \exp\left\{-c_3\4n\right\}. \label{eq7.14} \eeq Relations \eqref{rav}, \eqref{eq7.12} and \eqref{eq7.14} imply the statement of the lemma. $\square$ \medskip \smallskip Let us estimate the expectation $ \E\operatorname {e} \bgl \{ \langle \mathbb B_t \4 \zeta_n,\zeta'_n \rangle \bgr\}$ under the conditions of Lemmas~\ref{L7.3} and~\ref{L7.5}, assuming that $s=d$, $\mathbb D=\mathbb C^{-1/2}$, $\d\leq 1/(5\4 s)$, $n\ge c_4$, where $c_4$ is a sufficiently large absolute constant, and \beq\|\mathbb C^{-1/2}z_j-e_j\|\leq \d,\q\q\q \|\mathbb C^{-1/2}z_j'-e_j\|\leq \d,\q\q\q \text{for}\ \, 1\leq j\leq s,\label{eq:7.6} \eeq with an orthonormal system $\mathcal S=\mathcal S_o=\bgl\{e_1,e_2,\ldots,e_s\bgr\} $ involved in the conditions of Lemma~\ref{L7.3}. We can rewrite $\E\operatorname {e} \bgl \{ \langle\mathbb B_t \4 \zeta_n,\zeta'_n \rangle\bgr\}$ as $$ \E\operatorname {e} \bgl \{ \langle\mathbb B_t \4 \zeta_n,\zeta'_n \rangle\bgr\} =\sum_{{\ov m}\in\mathbb Z^s}q({\ov m})\sum_{m\in\mathbb Z^s}q(m) \operatorname {e} \bgl \{ \langle\mathbb B_t \4\ov m,m \rangle\bgr\}. $$ Thus, by \eqref{qm}, $$ \E\operatorname {e} \bgl \{ \langle\mathbb B_t \4 \zeta_n,\zeta'_n \rangle\bgr\} =A_n^{2s}\,n^{-s}\,\sum_{{\ov m}\in\mathbb Z^s}\sum_{m\in\mathbb Z^s} \exp \bgl \{ i\,\langle\mathbb B_t \4\ov m,{m}\rangle-\norm{m}^2/2n -\norm{\ov m}^2/2n \bgr\}. $$ Denote \beq\label{defr}r=\sqrt{2 \4\pi^2\4n}. \eeq Applying Lemma \ref{Le3.2} with \,$\mathbb S=\mathbb I_s$, $z=1/2n$, \,$a=0$, \,$b=(2\pi)\me\,\mathbb B_t\,\ov m $ \,and using that ~${A_n\asymp1}$, we obtain \beqa\label{koren} \E\operatorname {e}\bgl \{ \langle\mathbb B_t \4 \zeta_n,\zeta'_n \rangle\bgr\} &\ll_s& n^{-s/2}\,\sum_{l,m \in \mathbb{Z}^s} \exp \bgl \{-2\4 \pi^2\4n\,\norm{\4 l+(2\pi)^{-1}\mathbb B_t \4 m}^2-\norm{m}^2/2n \bgr\} \nn\\ &\ll_s&r^{-s}\sum_{m,\ov m \in \mathbb{Z}^{s}} \exp \bgl \{-r^2\,\norm{m-t\,\mathbb V \4 \ov m}^2-\norm{\ov m}^2/r^2 \bgr\},\label{koren1} \eeqa where $\mathbb V:\mathbb R^{s}\to\mathbb R^{s}$ is the operator with matrix \beq\label{bbbl} \mathbb V=(2\pi)^{-1}\mathbb B_1. \eeq Note that the right-hand side of \eqref{koren1} may be considered as a theta-series. Denote $y_k=\mathbb C^{-1/2}z_k$, $1\leq k\leq s$. Let $\mathbb Y$ be the $(s\times s)$-matrix with entries $\langle\4 e_j, y_k\rangle$, where index $j$ is the number of the row, while $k$ is the number of the column. Then the matrix $\mathbb F\=\mathbb Y^*\4\mathbb Y$ has entries $\langle y_j, y_k\rangle$. Here $\mathbb Y^*$ is the transposed matrix for $\mathbb Y$. According to \eqref{eq:7.6}, we have \beq\|y_j-e_j\|\leq \d,\q\q \text{for}\ \, 1\leq j\leq s,\label{eq:7.6o} \eeq Let us show that (cf. BG (1997a, proof of Lemma~7.4)) \beq\label{bbl} \norm{\mathbb Y}\le 3/2 \q\text{and}\q\norm{\mathbb Y^{-1}}\le 2. 
\eeq Since $\mathcal S_o=\bgl\{e_1,e_2,\ldots,e_s\bgr\} $ is an orthonormal system, inequalities \eqref{eq:7.6o} imply that $\mathbb Y=\mathbb I_s+\mathbb A$ with some matrix $\mathbb A=\{a_{ij}\}$ such that $|a_{ij}|\leq \d$. Thus, we have $\|\mathbb A\| \leq \|\mathbb A\|_2\leq s\4 \d $, where $\|\mathbb A\|_2$ denotes the Hilbert--Schmidt norm of the matrix~$\mathbb A$. Therefore, the condition $\d\leq 1/(5\4 s)$ implies $\|\mathbb A\| \leq 1/2$ and inequalities \eqref{bbl}. The matrix $\mathbb F$ is symmetric and positive definite. Its determinant is the product of eigenvalues which (by \eqref{bbl}) are bounded from above and from below by some absolute positive constants. Moreover, \beq\label{dett3} \left(\det \mathbb Y\right)^2=\left(\det \mathbb Y^*\right)^2=\det \mathbb F \asymp_s1\asymp\norm{\mathbb F}\asymp\norm{\mathbb Y}. \eeq Define the matrices $\ov{\mathbb Y}$ and $\ov{\mathbb F}$, replacing $z_j$ by $z_j'$ in the definition of ${\mathbb Y}$ and ${\mathbb F}$. Similarly to~\eqref{dett3}, one can show that\beq\label{dett4} \left(\det \ov{\mathbb Y}\right)^2=\bgl(\det \ov{\mathbb Y}\2^*\bgr)^2=\det \ov{\mathbb F} \asymp_s1\asymp\norm{\ov{\mathbb F}}\asymp\norm{\ov{\mathbb Y}}. \eeq Let $\mathbb G$ and $\ov{\mathbb G}$ be the $(s\times s)$-matrices with entries $\langle\4 e_j,\mathbb Q\4z _k\rangle$ and $\langle\4 e_j, z'_k\rangle$ respectively. Then, clearly, $\mathbb G=\mathbb Q\4\mathbb C^{1/2}\mathbb Y$ and $\ov{\mathbb G}=\mathbb C^{1/2}\4\ov{\mathbb Y}$. Therefore, \beq\label{dett5} \mathbb B_1=\mathbb G^*\4\ov{\mathbb G}=\mathbb Y^*\4\mathbb C^{1/2}\4\mathbb Q\4\mathbb C^{1/2}\4\ov{\mathbb Y}. \eeq Moreover, $\mathbb Q^2= \mathbb I_d$ implies that $\bgl|\det \mathbb Q\bgr|= 1$ and $\norm {\mathbb Q}=1$. Using relations \eqref{bbbl} and \eqref{dett3}--\eqref{dett5}, we obtain\beq\label{ett7} \bgl|\det \mathbb V\bgr|\asymp_s\bgl|\det \mathbb B_1\bgr|\asymp_s\det \mathbb C, \eeq and \beq\label{ett8} \norm {\mathbb V}\ll\norm {\mathbb B_1}\ll\norm {\mathbb C}\ll\sigma_1^2. \eeq $\phantom 0$ \section{\label {s6}Some facts from Number Theory} In Section \ref{s6}, we consider some facts of the geometry of numbers (see Davenport (1958) or Cassels (1959)). They will help us to estimate the integrals of the right-hand side of inequality \eqref{koren1}. Let $e_1, e_2,\ldots, e_d$ be linearly independent vectors in $\R^d$. The set \beq \Lambda=\Big\{{ \sum_{j=1}^d} n_j e_j:n_j\in\mathbb Z,\ j=1,2,\ldots,d\Big\} \eeq is called the lattice with basis $e_1, e_2,\ldots, e_d$. The determinant $\det(\Lambda)$ of a lattice $\Lambda$ is the modulus of the determinant of the matrix formed from the vectors $e_1, e_2,\ldots, e_d$: \beq \det(\Lambda)\=\bgl|\det(e_1, e_2,\ldots, e_d)\bgr|. \eeq The determinant of a lattice does not depend on the choice of basis. Any lattice $\Lambda\subset\R^d$ can be represented as $\Lambda=\mathbb A\,\mathbb Z^d$, where $\mathbb A$ is a non-degenerate linear operator. Clearly, $\det (\Lambda)=\left|\det\mathbb A\right |$. Let $m_1,\ldots,m_l\in\Lambda$ be linearly independent vectors belonging to a lattice $\Lambda$. Then the set \beq \Lambda'=\Big\{{ \sum_{j=1}^l} n_j m_j:n_j\in\mathbb Z,\ j=1,2,\ldots,l\Big\} \eeq is an $l$-dimensional sublattice of the lattice $\Lambda$. Its determinant $\det(\Lambda')$ is the modulus of the determinant of the matrix formed from the coordinates of the vectors $m_1, m_2,\ldots, m_l$ with respect to an orthonormal basis of the linear span of the vectors $m_1, m_2,\ldots, m_l$. 
The determinant $\det(\Lambda')$ could be also defined as $\det\bgl(\langle m_i, m_j\rangle, i, j=1, \ldots, l\bgr)^{1/2}$. Let $F: {\R}^d \rightarrow [\40, \infty\4]$ denote a norm on ${\R}^d$, that is $F( \alpha\4 x) = \lvert \alpha \rvert\, F(x),$ for $\alpha \in {\R}$, and $F(x + y) \leq F(x) +F(y)$. The successive minima $M_1 \leq \dots \leq M_d$ of $F$ with respect to a lattice $\Lambda$ are defined as follows: Let \,$M_1 = \inf\bgl \{F(m): m \neq 0,\ m \in \Lambda \bgr\}$ \,and define $M_j$ as the infimum of $\lambda > 0$ such that the set \,$\bgl\{m \in \Lambda \,:\, F(m) < \lambda\bgr \}$ \,contains $j$ linearly independent vectors. It is easy to see that these infima are attained, that is there exist linearly independent vectors $b_1, \dots, b_d \in \Lambda$ such that $F(b_j) = M_j$, $j=1,\ldots,d$. The following Lemma \ref{Dav2} is proved by Davenport (1958, Lemma 1), see also G\"otze and Margulis~(2010). \begin{lemma} \label{Dav2} Let $M_1 \leq \dots \leq M_d$ be the successive minima of a norm $F$ with respect to the lattice $\mathbb Z^d$. Denote $M_{d+1}=\infty$. Suppose that $1\le j\le d$ and $M_j \leq b \leq M_{j+1}$, for some~${b>0}$. Then \beq \#\bgl\{m=(m_1, \dots, m_d) \in \mathbb{Z}^d:\;F(m) < b\bgr\} \asymp_d b^j (M_1\cdt M_2 \cdots M_j)^{-1}. \eeq \end{lemma} \medbreak Representing $\Lambda=\mathbb A\,\mathbb Z^d$, we see that the lattice $\mathbb Z^d$ may be replaced in Lemma \ref {Dav2} by any lattice $\Lambda\subset\R^d$. It suffices to apply this lemma to the norm $G(m)=F(\mathbb A\4m)$, $m\in\mathbb Z^d$. \begin{lemma} \label{Dav9} Let \,$ F_j (m)$, \,$j=1,2$, be some norms in ${\R}^d$ and $M_1 \leq \dots \leq M_d$ and $N_1 \leq \dots \leq N_d$ be the successive minima of\/ $F_1$ with respect to a lattice $\Lambda_1$ and of\/ $F_2$ with respect to a lattice $\Lambda_2$ respectively. Let $C>0$. Assume that $M_k\gg_d C\, F_2(n_k)$, $k=1,2,\ldots,d$, for some linearly independent vectors $n_1,n_2,\ldots,n_d\in \Lambda_2$. Then \beq M_k\gg_d C\,N_k,\q k=1,\ldots,d. \eeq \end{lemma} The proof of this lemma is elementary and therefore omitted. Let $\norm x_\infty=\max_{1\le j\le d}|x_j|$, for $x=({x}_1, \dots, {x}_d)\in \Rd$. \begin{lemma} \label{Dav1} Let $\Lambda$ be a lattice in ${\R}^d$ and let\/ $0<\ve\le1$. Then \beq\label{norma} e^{-\ve}\,\# H\le \sum_{v \in \Lambda} \exp \bgl \{- \ve\,\norm v^2 \bgr\} \ll_d \ve^{-d/2}\# H, \eeq where \,$H\defi \bgl\{v \in \Lambda \, : \, \norm v_\infty< 1 \bgr\}$. \end{lemma} \begin{proof}The lower bound in \eqref{norma} is almost evident by restricting summation to the set~$H$. Introduce for \,$\mu = (\mu_1, \dots, \mu_{d})\in \Z^{d}$ \,the sets \beq B_\mu \defi \left[\,\mu_1- \ffrac{1}{2},\, \mu_1 + \ffrac{1}{2}\right)\times\cdots\times \left[\,\mu_{d}- \ffrac{1}{2},\,\mu_{d} + \ffrac{1}{2}\right)\nn\eeq such that ${\R}^{d} = \bigcup_{\mu} B_\mu$. For any fixed \,$w^* \in H_\mu\defi \bigl\{ w\in \Lambda \cap B_\mu \bigr\}$ \,we have \beqa w - w^*\in H,\qquad\text{for any }w\in H_\mu. 
\nn \eeqa Hence we conclude that, for any $\mu\in \Z^{d}$, \beq \# H_\mu \, \leq \, \# H .\label{eq:boxx1} \eeq Since \,$x \in B_\mu $ \,implies \, $ \norm{x}_{\infty} \geq \norm{ \mu }_{\infty}/2$, \, we obtain by \eqref{eq:boxx1} \beqa\sum_{v \in \Lambda} \exp \bgl \{- \ve\,\norm v^2 \bgr\} &\le&\sum_{v \in \Lambda } \exp \bgl \{- \ve\,\norm{ \4v}_\infty^2 \bgr\}\nn\\ & \ll_d &\# H_0 + \sum_{\mu \in \Z^d \setminus 0} \q \sum_{v \in \Lambda } {\bf I}\bgl\{ \4v \in B_\mu\bgr\}\4 \exp \bigl\{- \ve\,\norm{ \mu }^2_{\infty}/4\bigr\} \nn \\ & \ll_d & \# H\cdt \sum_{\mu \in \Z^d} \4 \exp \bigl\{- \ve\,\norm {\mu}_{\infty}^2/4\bigr\} \nn \\ & \ll_d & \ve^{-d/2}\4\# H . \eeqa This concludes the proof of Lemma \ref{Dav1}. \end{proof} It is easy to see that Lemma \ref{Dav1} implies the following statement. \begin{corollary} \label{Dav5} Let $\Lambda$ be a lattice in ${\R}^d$ and let $c_j(d)$, \,$j=1,2, 3, 4$, be positive quantities depending on $d$ only. Let \,$ F (\cdt)$ be a norm in ${\R}^d$ such that $F(\cdt)\asymp_d \norm{\cdt}$. Then \beqa \sum_{v \in \Lambda} \exp \bgl \{- c_1(d)\,\norm v^2 \bgr\} &\asymp_d& \sum_{v \in \Lambda} \exp \bgl \{- c_2(d)\,(F(v))^2 \bgr\}\nn\\ &\asymp_d&\# \bgl\{v \in \Lambda \, : \, \norm v< c_3(d) \bgr\}\nn\\ &\asymp_d&\# \bgl\{v \in \Lambda \, : \, F(v)< c_4(d) \bgr\}. \eeqa \end{corollary} The proof of Corollary \ref{Dav5} is elementary and therefore omitted. Note only that \beq\#\bgl\{v \in \Lambda \, : \, F(v)< \lambda \bgr\}=\#\bgl\{v \in \mu\me\Lambda \, : \, F(v)< \lambda/\mu \bgr\},\q\text{for }\lambda,\mu >0.\eeq For a lattice $\Lambda\subset \mathbb R^{d}$ and $1\le l\le d$, we define its $\alpha_l$-characteristics by \beq\label{alp} \alpha_l( \Lambda)\= \sup \bgl\{\,\bgl|\det (\Lambda')\bigr|^{-1}: \Lambda'\subset \Lambda,\ \ l\text {-dimensional sublattice of $\Lambda$}\bgr \}. \eeq Denote \beq\label{alp3} \alpha( \Lambda)\= \max_{1\le l\le d} \alpha_l( \Lambda). \eeq \begin{lemma} \label{Dav4} Let \,$ F (\cdt)$ be a norm in ${\R}^d$ such that $F(\cdt)\asymp_d \norm{\cdt}$. Let $c(d)$ be a positive quantity depending on $d$ only. Let $M_1 \leq \dots \leq M_d$ be the successive minima of $F$ with respect to a lattice $\Lambda\subset\R^d$. Then \beq \label{LLL1}\alpha_l( \Lambda)\asymp_d (M_1\cdt M_2 \cdots M_l)^{-1},\q l=1,\ldots,d. \eeq Moreover, \beq \label{LLL2}\alpha( \Lambda)\asymp_d \#\bgl\{v \in \Lambda \, : \, \norm v< c(d) \bgr\}, \eeq provided that $M_1\ll_d1$. \end{lemma} For the proof of Lemma \ref{Dav4} we shall use the following lemma formulated in Proposition (p.~517) and Remark (p.~518) in A.K. Lenstra, H.W. Lenstra and Lov\'asz (1982). \begin{lemma} \label{LLL} Let $M_1 \leq \dots \leq M_d$ be the successive minima of the standard Euclidean norm with respect to a lattice $\Lambda\subset\R^d$. Then there exists a basis $e_1, e_2,\ldots, e_d$ of $\Lambda$ such that \beq M_l\asymp_d\norm{e_l},\q l=1,\ldots,d. \eeq Moreover, \beq \det(\Lambda)\asymp_d\prod_{l=1}^d \norm{e_l}. \eeq \end{lemma} {\it Proof of Lemma\/ {\rm\ref{Dav4}.}} According to Lemma \ref{Dav9}, we can replace the Euclidean norm~$\norm{\cdt}$ by the norm $ F (\cdt)$ in the formulation of Lemma \ref{LLL}. Let $\Lambda'\subset \Lambda$ be an arbitrary $l${$\hbox{-}$}dimensional sublattice of~$\Lambda$ and $N_1 \leq \dots \leq N_l$ be the successive minima of the norm~$ F (\cdt)$ with respect to $\Lambda'$. It is clear that $M_j \leq N_j$, $j=1,2, \ldots,l$. On the other hand, $M_j= F(m_j)$ for some linearly independent vectors $m_1,m_2,\ldots,m_l\in \Lambda$.
In the case, where \beq \Lambda'=\Big\{{ \sum_{j=1}^l} n_j m_j:n_j\in\mathbb Z, j=1,2,\ldots,l\Big\}, \eeq we have $N_j = M_j$, $j=1,2, \ldots,l$. In order to justify relation \eqref{LLL1} it remains to take into account definition~\eqref{alp} and to apply Lemma~\ref{LLL}. Relation \eqref{LLL2} is an easy consequence of \eqref{LLL1}, Lemma~\ref{Dav2} and Corollary \ref{Dav5}. $\square$\medskip \medbreak $\phantom 0$ \section{\label {s7}From Number Theory to Probability} In Section \ref{s7}, we shall use number-theoretical results of Section \ref{s6} to estimate integrals of the right-hand side of \eqref{koren1}. Recall that we have assumed the conditions of Lemmas~\ref{L7.3} and~\ref{L7.5}, $s=d$, $\mathbb D=\mathbb C^{-1/2}$, $ \d\leq 1/(5\4 s)$, $n\ge c_4$, and \eqref{eq:7.6}, for an orthonormal system $\mathcal S=\mathcal S_o$. The notation $\hbox{SL}(d, \R)$ is used below for the set of all $(d\times d)$-matrices with real entries and determinant~1. Introduce the matrices \beq\label{svo4} \mathbb D_r \= \left(\begin{array}{*{2}c} r\, \mathbb I_{s} & \mathbb O_{s} \\ \mathbb O_{s} & r^{-1}\, \mathbb I_{s} \end{array}\right) \in \4 \hbox{SL}(2s,\R),\q r>0, \eeq \beq\label{svon} \mathbb K_t \= \left(\begin{array}{*{2}c} \mathbb I_{s} & -t \,\mathbb I_{s}\\ t \,\mathbb I_{s} & \mathbb I_{s}\end{array}\right),\q t\in\R, \eeq \beq\label{svo5} \mathbb U_t \= \left(\begin{array}{*{2}c} \mathbb I_{s} & -t \,\mathbb I_{s}\\ \mathbb O_{s} & \mathbb I_{s}\end{array}\right)\in \4 \hbox{SL}(2s,\R),\q t\in\R, \eeq and the lattices \beq \label{latt} \Lambda\=\left(\begin{array}{*{2}c} \mathbb I_{s}&\mathbb O_{s} \\ \mathbb O_{s} &\mathbb V_0\end{array}\right)\mathbb Z^{2s}, \eeq \beq \label{jj}\Lambda_{j}= \mathbb D_{j}\4 \mathbb U_{j^{-1}}\4 \Lambda=\left(\begin{array}{*{2}c}j\, \mathbb I_{s}&\mathbb -\mathbb V_0 \\ \mathbb O_{s} &j^{-1}\,\mathbb V_0\end{array}\right)\mathbb Z^{2s}, \q j=1,2,\ldots,\eeq where \beq\label{svo9} \mathbb V_0=\sigma_1^{-2}\,\mathbb V \eeq and the matrix $\mathbb V$ is defined in \eqref{bbbl}. Below we shall use the following simplest properties of these matrices: \beq\label{svo} \mathbb D_a\4\mathbb D_b=\mathbb D_{ab},\q \mathbb U_a\4\mathbb U_b=\mathbb U_{a+b}\q\text {and} \q \mathbb D_a\4 \mathbb U_b = \mathbb U_{a^2b}\,\mathbb D_a, \q\text{for $a,b>0$.} \eeq Let $M_{j,t}$, $j=1,2, \ldots,2\4s$, be the successive minima of the norm $\norm{\cdt}_\infty$ with respect to the lattice \beq \label{latt8} \Xi_t\=\left(\begin{array}{*{2}c} r\, \mathbb I_{s}&-r\4t\,\mathbb V \\ \mathbb O_{s} &r\me\, \mathbb I_{s}\end{array}\right)\mathbb Z^{2s}. \eeq Moreover, simultaneously, $M_{j,t}$ are the successive minima of the norm $F^*(\cdt)$ defined for $(m,\ov m)\in\mathbb R^{2s}$, $m,\ov m\in\mathbb R^{s}$, by\beq \label{latt5} F^*((m,\ov m))\=\max \bigl\{ \norm {m}_\infty, \,\sigma_1^{2}\,\norm {\mathbb V\me\ov m}_\infty \bigr \} \eeq with respect to the lattice \beq \label{latt6} \Omega_t\=\left(\begin{array}{*{2}c} r\, \mathbb I_{s}&-r\4t\,\mathbb V \\ \mathbb O_{s} &\sigma_1^{-2}\,r\me\, \mathbb V\end{array}\right)\mathbb Z^{2s} =\mathbb D_r\4\mathbb U_u\,\Lambda, \q\text{where }u\=\sigma_1^{2}\,t . \eeq Using Lemmas \ref{Dav9} and \ref{LLL} and the equality $\det(\Xi_t)=1$, it is easy to show that \beq \label{tyy1} M_{1,t}\ll_s 1 . \eeq Let $M_{j,t}^*$ be the successive minima of the Euclidean norm with respect to the lattice~$\Omega_t$. Note that, according to \eqref{ett8} and \eqref{latt5}, \beq \label{att}\norm{\cdt} \ll_{s} F^*(\cdt). 
\eeq Using \eqref{att} and Lemma \ref{Dav9}, we obtain \beq\label{att88} M_{j, t}^*\ll_s M_{j, t},\q j=1,\ldots,2\4s. \eeq According to Lemma~\ref{Dav4}, \beq\label{att89}\alpha(\Xi_t) \ll_s\alpha(\Omega_t). \eeq Let us estimate $\alpha(\Omega_t)$ assuming that $r\ge1$ and (for a moment) $t=\sigma_1^{-2}\,r\me$. In this case \beq \label{latt36} \Omega_t=\left(\begin{array}{*{2}c} r\, \mathbb I_{s}&-\mathbb V_0 \\ \mathbb O_{s} &r\me\, \mathbb V_0\end{array}\right)\mathbb Z^{2s}. \eeq By relation \eqref{LLL2} of Lemma \ref{Dav4}, we have \beq\alpha(\Omega_t) \asymp_s\#\bgl\{v \in \Omega_t \, : \, \norm v< 1/2 \bgr\}=\#K, \label{att889}\eeq where \beq K=\bgl\{v=(m, \ov m)\in \mathbb Z^{2s}: m, \ov m\in \mathbb Z^{s}, \, \, \norm {r\4m-\mathbb V_0\4\ov m}^2+\norm {r\me\4\mathbb V_0\4\ov m}^2< 1/4 \bgr\}. \label{a889}\eeq Let us estimate from above the right-hand side of \eqref{att889}. If $v=(m, \ov m)\in K$, then \beq r\,\norm m\le \norm {r\4m-\mathbb V_0\4\ov m}+\norm {\mathbb V_0\4\ov m}< \ffrac12+\ffrac r{2}\le r. \label{ad889}\eeq Hence $m=0$ and $\norm {\mathbb V_0\4\ov m}\le1/2$. It remains to estimate the quantity \beq R\=\# \bgl\{ \ov m \in\mathbb Z^{s}: \, \, \norm {\mathbb V_0\4\ov m}< 1 \bgr\}\ge\# K. \label{ada889}\eeq Let $N_1\le\cdots\le N_{s}$ be the successive minima of the Euclidean norm with respect to the lattice~$\mathbb V_0\4\mathbb Z^{s}$. Let $e_1,e_2,\ldots, e_{s}$ be the standard orthonormal basis of $\mathbb Z^{s}$. By \eqref{ett8} and~\eqref{svo9}, we have $\norm{\mathbb V_0\4e_j}\le 1$, $j=1,2, \ldots, s$. Therefore, using Lemma \ref{Dav9}, we see that $N_1\le\cdots\le N_{s}\le 1$. By \eqref{ett7}, \eqref{svo9}, \eqref{ada889} and by Lemmas \ref{Dav2}, \ref{Dav9} and \ref{LLL}, \beq \label{f99} R\asymp_s (N_1\cdt N_2 \cdots N_s)^{-1}\asymp_s(\det \mathbb V_0)\me\asymp_s\sigma_1^{2s}\,(\det \mathbb C)^{-1}. \eeq Hence, using \eqref{att889}, \eqref{ada889} and \eqref{f99} we conclude that\beq \label{adas89} \alpha(\Omega_t) \ll_s\sigma_1^{2s}\,(\det \mathbb C)^{-1}, \q\text{for }\;r\ge1\q\text {and}\q t=\sigma_1^{-2}\,r\me. \eeq Let now $t\in\mathbb R$ be arbitrary. By \eqref{latt8}, \eqref{tyy1}, \eqref{att89}, Lemmas \ref{Dav2}, \ref{Dav4} and Corollary~\ref{Dav5}, \beqa \label{ken4}\sum_{m,\ov m \in \mathbb{Z}^{s}} \exp \bgl \{-r^2\,\norm{m-t\,\mathbb V \4 \ov m}^2\!\!&-&\!\!\norm{\ov m}^2/r^2 \bgr\}=\sum_{v \in \Xi_t} \exp \bgl \{- \norm v^2 \bgr\}\nn\\ &\ll_s&R_t\=\#\bgl\{v \in \Xi_t \, : \, \norm v< 1 \bgr\}\nn\\ & \ll_s& \alpha(\Xi_t) \ll_s\alpha(\Omega_t). \label{kore}\eeqa Now, by \eqref{koren}, \eqref{latt6} and \eqref{ken4}, we have \beqa\label{koren4} \E\operatorname {e}\bgl \{ \langle\mathbb B_t \4 \zeta_n,\zeta'_n \rangle\bgr\} \ll_s r^{-s}\,\alpha (\Omega_t) =r^{-s}\,\alpha (\mathbb D_r\4\mathbb U_u\,\Lambda),\q\text{where }u=\sigma_1^{2}\,t. \eeqa Let us estimate the quantity $R_t$, $t\in \mathbb R$, defined in \eqref{kore} assuming that $r\ge 1$ and $\left|r\4t\right|\le c_s^*\,\sigma_1^{-2}$, where $c_s^*\ge1$ is an arbitrary quantity depending on $s$ only. By Corollary~\ref{Dav5}, we have \beq \label{ad888}R_t\asymp_s\# K_0, \eeq where \beq K_0\=\bgl\{v=(m, \ov m)\in \mathbb Z^{2s}: m, \ov m\in \mathbb Z^{s}, \, \, \norm {r\4m-r\4t\,\mathbb V\4\ov m}^2+ \norm {r\me\4\ov m}^2< (2\4c_s^*)^{-2} \bgr\}. 
\label{ba889}\eeq If $v=(m, \ov m)\in K_0$, $r\ge 1$ and $\left|r\4t\right|\le c_s^*\,\sigma_1^{-2}$, then, by \eqref{ett8} and \eqref{ba889},\beq r\,\norm m\le \norm {r\4m-r\4t\,\mathbb V\4\ov m}+\left|r\4t\right|\,\norm {\mathbb V\4\ov m} < \ffrac12+\ffrac {r}{2}\le r. \label{polk}\eeq Hence $m=0$ and $\left|r\4t\right|\norm {\mathbb V\4\ov m}\le(2\4c_s^*)\me<1$. It remains to estimate the quantity \beq S\=\# \bgl\{ \ov m \in\mathbb Z^{s}: \, \, \left|r\4t\right|\norm {\mathbb V\ov m}< 1 \bgr\}\ge\# K_0. \label{fdfd}\eeq Let $P_1\le\cdots\le P_{s}$ be the successive minima of the Euclidean norm with respect to the lattice~$\left|r\4t\right|\4\mathbb V\4\mathbb Z^{s}$. Let $e_1,e_2,\ldots, e_{s}$ be the standard orthonormal basis of $\mathbb Z^{s}$. By~\eqref{ett8}, we have $\norm{\left|r\4t\right|\4\mathbb V\4e_j}\ll_s 1$, $j=1,2, \ldots, s$. Therefore, using Lemma \ref{Dav9}, we see that $P_1\le\cdots\le P_{s}\ll_s 1$. By \eqref{ett7}, \eqref{fdfd} and Lemmas \ref{Dav2} and \ref{LLL}, \beq \label{yuyu} S\asymp_s (P_1\cdt P_2 \cdots P_{s})^{-1}\asymp_s(\det (\left|r\4t\right|\4\mathbb V))\me\asymp_s\left|r\4t\right|^{-s}\,(\det \mathbb C)^{-1}. \eeq Hence, using \eqref{ad888}, \eqref{fdfd} and \eqref{yuyu}, we conclude that\beq \label{ghgh} R_t \ll_s\left|r\4t\right|^{-s}\,(\det \mathbb C)^{-1}, \q\text{for }\; r\ge1\q\text {and }\left|r\4t\right|\le c_s^*\,\sigma_1^{-2}. \eeq Now, by \eqref{koren}, \eqref{ken4} and \eqref{ghgh}, we have \beqa\label{koren44} \E\operatorname {e}\bgl \{ \langle\mathbb B_t \4 \zeta_n,\zeta'_n \rangle\bgr\} &\ll_s& r^{-s}\,R_t \nn\\ &\ll_s& r^{-2s}\,\left|t\right|^{-s}\,(\det \mathbb C)^{-1},\q\text{for } r\ge1 \; \text {and }\left|r\4t\right|\le c_s^*\,\sigma_1^{-2}. \eeqa It is easy to verify that \beq\int_{c_s\4\sigma_1^{-2}r^{-2+4/s} }^{\sigma_1^{-2} r^{-1}}\sqrt{r^{-2s}\,\left|t\right|^{-s}\,(\det \mathbb C)^{-1}}\ffrac{dt}t \ll_s r^{-2}\,\sigma_1^{s}\,(\det \mathbb C)^{-1/2}, \label{eq71av}\eeq for any $c_s$ depending on $s$ only. Note that $\,\sigma_1^{s}\,(\det \mathbb C)^{- 1/2}\ge1$. Using \eqref{koren4}, \eqref{eq71av} and Lemmas \ref{L7.3} and~\ref{L7.5}, we derive the following lemma. \begin{lemma}\label{GZ} Let the conditions of Lemma\/ {\rm\ref{L7.3}} be satisfied with $s=d$, $\mathbb D=\mathbb C^{-1/2}$, $\d\leq 1/(5\4 s)$ and with an orthonormal system $\mathcal S=\mathcal S_o=\{\fs e1s\}\subset \Rd$. Let $c_s$ be an arbitrary quantity depending on $s$ only. Then, for any $b\in\Rd$ and\/ $r\ge1$, \beq\int_{c_s\4\sigma_1^{-2}r^{-2+4/s} }^{\sigma_1^{-2}}\bgl| \wh \Psi_b (t/2)\bgr| \ffrac {dt} t \ll_s \, (p\4 N)^{-1}\,\sigma_1^{s}\,(\det \mathbb C)^{-1/2}+r^{- s/2}\,\sup_\Gamma \; \int_{r^{-1} }^1(\alpha (\mathbb D_r\4\mathbb U_u\,\Lambda))^{1/2}\ffrac {du} u, \label{eq71a}\eeq where\/ $r$, $\alpha (\cdt)$, $\mathbb D_r$ $\mathbb U_t$ and the lattice $\Lambda$ are defined in relations \eqref{dfn}, {\rm\eqref{defr}}, {\rm \eqref{bbbl}}, {\rm \eqref{alp}}, {\rm \eqref{alp3}}, {\rm \eqref{svo4}}, {\rm \eqref{svo5}} and~{\rm \eqref{latt}} and in Lemma\/ {\rm \ref{L7.5}}. 
The $\sup_\Gamma$ means here the supremum over all possible values of $z_j,z_j'\in \Rd $ $($involved in the definition of matrices $\mathbb B_t$ and\/~$\mathbb V)$ such that \beq\|\mathbb C^{-1/2}z_j-e_j\|\leq \d,\q\q\q \|\mathbb C^{-1/2}z_j'-e_j\|\leq \d,\q\q\q \text{for}\ \, 1\leq j\leq s.\label{eq:7.6f} \eeq Moreover, for any $b\in\Rd$, $r\ge1$ and\/ $\g>0$ and any fixed\/ $t\in\mathbb R$ satisfying\/ $\left|r\4t\right|\le c_s^*\,\sigma_1^{-2}$, where $c_s^*\ge1$ is an arbitrary quantity depending on $s$ only, we have \beq\bgl| \wh \Psi_b (t)\bgr| \ll_{\g, s} (p\4 N)^{-\g}+ r^{-s}\,\left|t\right|^{-s/2}\,(\det \mathbb C)^{-1/2} .\label{equ7.1p}\eeq \end{lemma} Let $v=(m,\ov m)\in\mathbb R^{2s}$, $m,\ov m\in\mathbb R^{s}$ and $t\in\mathbb R$. Then \beq \label{rho}\ov m+t\4 m=(1 +t^2)\,\ov m+t\, (m-t\4 \ov m). \eeq Equality \eqref{rho} implies that \beq \label{rho1} \norm{\ov m+t\4 m}\ll_s\norm{\ov m} +\norm{m-t\4 \ov m}, \q \hbox{for} \ |t|\ll_s 1. \eeq Hence, \beq\label{rho3} r\,\norm{m-t\4 \ov m}+ r\me\4 \norm{\ov m+t\4 m}\ll_s r\,\norm{m-t\4 \ov m}+ r\me\4\norm{\ov m} , \q \hbox{for} \ r\gg 1,\ |t|\ll_s 1. \eeq According to \eqref{svo4}--\eqref{svo5}, we have\beq \label{rhom} \mathbb D_r\mathbb U_t\4v=(r(m-t\4 \ov m),\,r\me\4\ov m)\q\text{and}\q \mathbb D_r\mathbb K_t\4v=(r(m-t\4 \ov m),\,r\me\4(\ov m+t\4m )). \eeq It is clear that the operators $\mathbb D_r\mathbb U_t$ and $\mathbb D_r\mathbb K_t$ are invertible. Therefore, using \eqref{rho3} and~\eqref{rhom} and applying Lemmas \ref{Dav9} and~\ref{Dav4}, we derive the inequality \beq\label{kt6} \alpha (\mathbb D_{r}\mathbb U_t\4 \Omega) \ll_s\alpha (\mathbb D_{r}\mathbb K_t\4 \Omega) , \q \hbox{for} \ r\gg 1,\ |t|\ll_s 1,\eeq which is valid for any lattice $\Omega\subset\mathbb R^{2s}$. Let $\mathbb T$ be the permutation $(2\4s\times2\4s)$-matrix which permutes the rows of a ${(2\4s\times2\4s)}$-matrix $\mathbb A$ so that the new order (corresponding to the matrix $\mathbb T\mathbb A$) is: $$1,s+1,2,s+2,\ldots, s, 2\4s.$$ Note that the operator $\mathbb T$ is isometric and $\mathbb A\na\mathbb A\,\mathbb T\me$ rearranges the columns of $\mathbb A$ in the order mentioned above. It is easy to see that \beq\label{kt7} \alpha_j(\mathbb T\4 \Omega) =\alpha_j(\Omega),\q j=1,\ldots, 2\4s,\q\text {and} \q\alpha(\mathbb T\4 \Omega) =\alpha(\Omega),\eeq for any lattice $\Omega\subset\mathbb R^{2s}$. Note now that \beq\label{kt8}\mathbb T\mathbb D_{r}\mathbb K_t\4\Lambda_j =\mathbb T\mathbb D_{r}\mathbb K_t\mathbb T\me\mathbb T\4\Lambda_j =\mathbb W_t\Delta_j,\eeq where $\Delta_j$ is a lattice defined by\beq\label{kt9}\Delta_j=\mathbb T\4\Lambda_j\eeq and where $\mathbb W_t$ is the $(2s\times 2s)$-matrix \beq \label{latt34} \mathbb W_t= \left(\begin{array}{*{4}c}\mathbb G_{r,t}&\mathbb O_2&: &\mathbb O_2\\ \mathbb O_2&\mathbb G_{r,t}&: &\mathbb O_2\\ \cdt\cdt &\cdt\cdt&\cdt\cdt&\cdt\cdt\\ \mathbb O_2& \mathbb O_2 &:& \mathbb G_{r,t} \end{array}\right) \eeq constructed of $(2\times2)$-matrices $\mathbb O_2$ (with zero entries) and \beq \mathbb G_{r,t} \= \left(\begin{array}{*{2}c} r & -r\4t\\ r\me t & r\me\end{array}\right). \eeq Let $|t|\le 2$ and \beq\label{kth} \theta=\arcsin\bgl(t\,(1 +t^2)^{-1/2}\bgr)\q\text{or, equivalently,} \q t=\tan\theta. \eeq Then we have \beq\label{latw} |\theta|\le c^*\=\arcsin(2/\sqrt5),\q\cos \theta=(1 +t^2)^{-1/2},\q \sin\theta=t\,(1 +t^2)^{-1/2}.
\eeq It is easy to see that \beqa \label{latt1}\mathbb G_{r,t} = (1 +t^2)^{1/2}\,\ov{\mathbb D}_{r}\,\ov{\mathbb K}_\theta \eeqa and \beq\label{oot}\mathbb W_t=(1 +t^2)^{1/2}\, \wt{\mathbb D}_r\,\wt{\mathbb K}_\theta,\eeq where \beq \label{latt345} \wt{\mathbb D}_r= \left(\begin{array}{*{4}c}\ov{\mathbb D}_{r}&\mathbb O_2&: &\mathbb O_2\\ \mathbb O_2&\ov{\mathbb D}_{r}&: &\mathbb O_2\\ \cdt\cdt &\cdt\cdt&\cdt\cdt&\cdt\cdt\\ \mathbb O_2& \mathbb O_2 &:&\ov{\mathbb D}_{r} \end{array}\right)\q\text {and}\q \wt{\mathbb K}_\theta= \left(\begin{array}{*{4}c}\ov{\mathbb K}_\theta&\mathbb O_2&: &\mathbb O_2\\ \mathbb O_2&\ov{\mathbb K}_\theta&: &\mathbb O_2\\ \cdt\cdt &\cdt\cdt&\cdt\cdt&\cdt\cdt\\ \mathbb O_2& \mathbb O_2 &:&\ov{\mathbb K}_\theta \end{array}\right) \eeq are $(2\4s\times2\4s)$-matrices with \beq \label{laz}\ov{\mathbb D}_{r} \= \left(\begin{array}{*{2}c} r & 0\\ 0 & r\me\end{array}\right)\q\text {and}\q\ov{\mathbb K}_\theta \=\left(\begin{array}{*{2}c} \cos \theta & -\sin \theta \\ \sin \theta &\phantom {-} \cos \theta\end{array}\right)\in \hbox{\rm SL(2,$\mathbb R$)} . \eeq Substituting \eqref{oot} into equality \eqref{kt8}, we obtain \beq\label{kot8}\mathbb T\mathbb D_{r}\mathbb K_t\4\Lambda_j =(1 +t^2)^{1/2}\,\wt{\mathbb D}_r\,\wt{\mathbb K}_\theta\,\Delta_j.\eeq Below we shall also use the following crucial lemma of G\"otze and Margulis (2010). \begin{lemma}\label{GM} Let $\wt{\mathbb K}_\theta$ and \beq \label{laty} \wt{\mathbb H}= \left(\begin{array}{*{4}c}\ov{\mathbb H}&\mathbb O_2&: &\mathbb O_2\\ \mathbb O_2&\ov{\mathbb H}&: &\mathbb O_2\\ \cdt\cdt &\cdt\cdt&\cdt\cdt&\cdt\cdt\\ \mathbb O_2& \mathbb O_2 &:&\ov{\mathbb H} \end{array}\right) \eeq be $(2\4d\times2\4d)$-matrices such that\/ $\ov{\mathbb H}\in \mathcal G=\hbox{\rm SL(2,$\mathbb R$)}$ and\/ $\wt{\mathbb K}_\theta$ is defined in {\rm\eqref{latt345}} and {\rm\eqref{laz}}. Let $\beta$ be a positive number such that $\beta\4d>2$. Then, for any\/ $\ov{\mathbb H}\in \mathcal G$ and any lattice $\Delta\subset\mathbb R^{2d}$, \beq \int_{0}^{2\pi}\bgl(\alpha(\wt{\mathbb H} \,\wt{\mathbb K}_\theta\,\Delta)\bgr)^\beta d\theta\ll_{\beta,d} \bgl(\alpha(\Delta)\bgr)^\beta\,\norm {\ov{\mathbb H}}^{\beta d-2}. \eeq Here $\norm {\ov{\mathbb H}}$ is the standard norm of the linear operator $\mathbb H:\R^2\to\R^2$.\end{lemma} Consider, under the conditions of Lemma \ref{GZ}, \beq\label{IJ56} I_0 \= \int_{c_s\4\sigma_1^{-2}r^{-2+4/s}/2 }^{\sigma_1^{-2}/2}\bgl| \wh \Psi_b (t)\bgr| \ffrac {dt} t = \int_{c_s\4\sigma_1^{-2}r^{-2+4/s} }^{\sigma_1^{-2}}\bgl| \wh \Psi_b (t/2)\bgr| \ffrac {dt} t.\eeq By Lemma \ref{GZ}, we have \beq\label{IJ5} I_0 \ll_s (p\4 N)^{-1}\,\sigma_1^{s}\,(\det \mathbb C)^{-1/2}+ r^{- s/2}\,\sup_\Gamma \; J,\eeq where \beq\label{IJ6} J= \int_{r^{-1}}^1\big(\alpha (\mathbb D_r\4\mathbb U_t\,\Lambda)\big)^{1/2} \ffrac {dt} t\le\sum_{j=2}^{\rho}I_{j}, \eeq with \beq\label{IJ7} I_j\= \int_{j\me}^{(j-1)\me} \big(\alpha (\mathbb D_r\4\mathbb U_t\,\Lambda)\big)^{1/2}\ffrac {dt} t, \q j=2,3,\ldots,\rho\=\lceil r\rceil+1. 
\eeq Changing variables $t=v\4j^{-2}$ and $v=w+j$ in $I_j$ and using the properties of the matrices $\mathbb D_r$ and $\mathbb U_t$, we have \beqa I_j&=&\int_{j}^{j^{2}(j-1)\me} \big(\alpha (\mathbb D_r\4\mathbb U_{vj^{-2}}\,\Lambda)\big)^{1/2}\ffrac {dv}v \nn\\ & \le& \int_{j}^{j+2} \big(\alpha (\mathbb D_r\4\mathbb U_{vj^{-2}}\,\Lambda)\big)^{1/2} \ffrac {dv} v \nn \\ & =& \int_{0}^{2} \big(\alpha (\mathbb D_r\4\mathbb U_{wj^{-2}}\4\mathbb U_{j^{-1}}\,\Lambda)\big)^{1/2} \ffrac{dw} {w+j}.\label{IJ} \eeqa By \eqref{svo}, \beq \label{IJ2} \mathbb D_r\4\mathbb U_{wj^{-2}} =\mathbb D_{rj^{-1}}\4\mathbb D_{j}\4\mathbb U_{wj^{-2}} = \mathbb D_{rj^{-1}} \4\mathbb U_{w} \4\mathbb D_{j}. \eeq According to \eqref{IJ} and \eqref{IJ2}, \beq\label{IJ22} I_j\ll \ffrac {1} {j} \int_{0}^{2} \big(\alpha (\mathbb D_{rj^{-1}}\mathbb U_t\4 \Lambda_{j})\big)^{1/2} \,{dt} , \eeq where the lattices $\Lambda_j$ are defined in \eqref{jj} (see also \eqref{svo4}, \eqref{svo5} and \eqref{latt}). Using \eqref{jj}, \eqref{latt36} and \eqref{adas89}, we see that \beq\label{kth14} \alpha(\Lambda_{j}) \ll_s \sigma_1^{2s}\,(\det \mathbb C)^{-1}. \eeq By \eqref{kt6}, \eqref{kt7} and \eqref{kot8}, we have \beqa\label{qkt} \alpha (\mathbb D_{rj^{-1}}\mathbb U_t\4 \Lambda_{j}) &\ll_s&\alpha (\mathbb D_{rj^{-1}}\4\mathbb K_t\4 \Lambda_{j}) =\alpha (\mathbb T\4\mathbb D_{rj^{-1}}\4\mathbb K_t\4 \Lambda_{j})\nn\\ &\ll_s&\alpha (\wt{\mathbb D}_{rj^{-1}}\,\wt{\mathbb K}_\theta\,\Delta_j) , \eeqa for $|t|\ll_s1$, $r\ge1$, $j=2,3,\ldots,\rho$, where $\Delta_j$ and $\theta$ are defined in \eqref{kt9} and \eqref{kth} respectively. Using \eqref{kth}, \eqref{latw}, \eqref{latt345}, \eqref{qkt} and Lemma \ref{GM} (with $d=s$), we obtain \beqa\label{kth23}\int_{0}^{2} \big(\alpha (\mathbb D_{rj^{-1}}\mathbb U_t\4 \Lambda_{j})\big)^{1/2} \,{dt}&\ll_s&\int_{0}^{c^*} \big(\alpha (\wt{\mathbb D}_{rj^{-1}}\, \wt{\mathbb K}_\theta\,\Delta_j)\big)^{1/2} \,\ffrac{d\theta}{\cos^2\theta} \nn\\&\ll&\int_{0}^{2\pi} \big(\alpha (\wt{\mathbb D}_{rj^{-1}}\, \wt{\mathbb K}_\theta\,\Delta_j)\big)^{1/2} \,{d\theta} \nn\\&\ll_s& \norm{\4\ov{\mathbb D}_{rj^{-1}}}^{s/2-2} \,\big(\alpha(\Delta_j)\big)^{1/2}, \eeqa if $s\ge5$. It is clear that $\norm{\4\ov{\mathbb D}_{rj^{-1}}}= r\4j^{-1}$. Therefore, according to \eqref{kt7}, \eqref{kt9}, \eqref{IJ22} and \eqref{kth23}, \beq \label{kth13}I_j \ll_s \ffrac 1 {j} (rj^{-1})^{s/2-2} \, \big(\alpha(\Lambda_{j})\big)^{1/2}. \eeq By \eqref{IJ6}, \eqref{kth14} and \eqref{kth13}, we obtain, for $s\ge5$, \beq\label{kth15} J \ll_s\sigma_1^{s}\,(\det \mathbb C)^{-1/2}\sum_{j=2}^{\rho}\ffrac 1 {j} (rj^{-1})^{s/2-2} \ll_s r^{s/2-2}\,\sigma_1^{s}\,(\det \mathbb C)^{-1/2}.\eeq By \eqref{dfn}, \eqref{defr}, \eqref{IJ5} and \eqref{kth15}, we have $r\asymp_s (N\4p)^{1/2}$ and \beqa\label{kth16} I_0 \ll_s r^{-2}\,\sigma_1^{s}\,(\det \mathbb C)^{-1/2} \ll_s (N\4p)^{-1}\,\sigma_1^{s}\,(\det \mathbb C)^{-1/2}. \eeqa It is clear that in a similar way we can establish that \beqa\label{k16} \int_{\sigma_1^{-2} }^{c(s)\sigma_1^{-2}}\bgl| \wh \Psi_b (t/2)\bgr| \ffrac {dt} t\ll_s r^{-2}\,\sigma_1^{s}\,(\det \mathbb C)^{-1/2}\ll_s (N\4p)^{- 1}\,\sigma_1^{s}\,(\det \mathbb C)^{-1/2}, \eeqa for any quantity $c(s)$ depending on $s$ only. The proof is in fact easier here, since $t$ stays bounded away from zero in this integral. Thus, we have proved the following lemma.
\begin{lemma}\label{GZ2} Let the conditions of Lemma\/ {\rm\ref{L7.3}} be satisfied with $s=d\ge5$, $\mathbb D=\mathbb C^{-1/2}$, $\d\leq 1/(5\4 s)$ and with an orthonormal system $\mathcal S=\mathcal S_o=\{\fs e1s\}\subset \Rd$. Let $c_1(s)$ and $c_2(s)$ be some quantities depending on $s$ only. Then there exists a $c_s$ such that \beq\int_{c_1(s)\4\sigma_1^{- 2}r^{-2+4/s} }^{c_2(s)\4\sigma_1^{-2}} \bgl| \wh \Psi_b (t)\bgr| \ffrac {dt} t \ll_s \, (N\4p)^{-1}\,\sigma_1^{s}\,(\det \mathbb C)^{-1/2}, \label{eq71u}\eeq if \,$N\4p\gg_s c_s$, where\/ $r$ is defined in \eqref{dfn} and\/ \eqref{defr}. \end{lemma}
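For the reader's convenience, we also record the elementary computation behind \eqref{eq71av} (it is nothing beyond a direct evaluation of the integral): the integrand there equals $r^{-s}\,(\det \mathbb C)^{-1/2}\,t^{-s/2-1}$, and therefore \beq \int_{c_s\,\sigma_1^{-2}r^{-2+4/s}}^{\sigma_1^{-2} r^{-1}} r^{-s}\,(\det \mathbb C)^{-1/2}\,t^{-s/2-1}\,dt \le\frac{2}{s}\; r^{-s}\,(\det \mathbb C)^{-1/2}\,\bigl(c_s\,\sigma_1^{-2}r^{-2+4/s}\bigr)^{-s/2} =\frac{2}{s\,c_s^{s/2}}\;r^{-2}\,\sigma_1^{s}\,(\det \mathbb C)^{-1/2}, \eeq where the factor $2\,s^{-1}c_s^{-s/2}$ depends on $s$ only.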
132,265
TITLE: Linear Maps as Tensors QUESTION [8 upvotes]: Let $V$ and $W$ be finite dimensional vector spaces and let $V^{\ast}$ denote the dual $V$. I read that the space $V^{\ast}\otimes W$ may be thought of in four different ways: as the space of linear maps $V\to W$, as the space of linear maps $W^{\ast}\to V$, as the dual space to $V\otimes W^{\ast}$, and as the space of linear maps $V\times W^{\ast}\to \mathbb{C}$. I am having trouble seeing why this statement is true because I am not comfortable with dual spaces yet. Can someone help me out? Is this all a simple consequence of the universal property of tensor products? Thanks. REPLY [0 votes]: Here's a purely symbol-pushing approach. It's a worthwhile exercise to work out exactly what all the following isomorphisms do to individual elements of the relevant spaces. As a rule of thumb, if you have a space of functions from $X$ to $Y$, something like $A(X,Y)$, with a definition like "the space of functions from $X$ to $Y$ satisfying such-and-such a condition", and you tensor it with a third space $Z$, you often get $$ A(X,Y)\otimes Z \cong A(X,Y\otimes Z) $$ where the isomorphism is given by sending elementary tensors $f\otimes z$ (where $f\colon X\to Y$) to $x\mapsto f(x)\otimes z$, and extending linearly. Warning! Rule of thumb only! In your situation, though, with everything being a finite-dimensional vector space, this works exactly as written. (It's helpful to identify $V^\ast = L(V,\mathbb C)$ with $\mathbb C^B$, the space of all functions from a finite set $B$ (a basis for $V$) to $\mathbb C$.) Thus: $$ V^\ast\otimes W = L(V,\mathbb C)\otimes W \cong L(V,\mathbb C\otimes W) \cong L(V,W) $$ which is your first identification. Next, the universal property of the tensor product says that $$ L(V\otimes W,X) \cong L(V,L(W,X)) \cong B(V,W;X) $$ (where $B(V,W;X)$ denotes the space of bilinear maps from $V\times W$ to $X$). This gives the last of your identifications, assuming "linear" was an error for "bilinear". Furthermore, \begin{align*} V^\ast \otimes W^\ast &= L(V,\mathbb C)\otimes L(W,\mathbb C) \cong L(V,\mathbb C\otimes L(W,\mathbb C)) \\ &\cong L(V,L(W,\mathbb C)) \cong L(V\otimes W,\mathbb C) = (V\otimes W)^\ast \end{align*} and so $(V\otimes W^\ast)^\ast \cong V^\ast\otimes W^{\ast\ast} = V^\ast\otimes W$, your third identification. Your second identification $W^\ast\to V$ comes from the identification of a (finite-dimensional) vector space with its dual. I'll leave that one to you.
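If it helps to make the first identification concrete, here is a small NumPy illustration in coordinates (the dimensions and variable names below are arbitrary choices of mine, not anything from the question): an elementary tensor $\varphi\otimes w\in V^{\ast}\otimes W$ acts on $v\in V$ as $v\mapsto \varphi(v)\,w$, i.e. as the rank-one matrix $w\varphi^{T}$, and sums of such rank-one matrices exhaust $L(V,W)$.

```python
import numpy as np

rng = np.random.default_rng(0)
dimV, dimW = 3, 2

phi = rng.standard_normal(dimV)   # a covector phi in V* (coordinates in the dual basis)
w = rng.standard_normal(dimW)     # a vector w in W
v = rng.standard_normal(dimV)     # a test vector v in V

A = np.outer(w, phi)              # the linear map V -> W attached to phi (x) w

# The map attached to phi (x) w sends v to phi(v) * w:
assert np.allclose(A @ v, (phi @ v) * w)

# A general element of V* (x) W is a sum of elementary tensors, and sums of
# rank-one matrices produce arbitrary 2x3 matrices, i.e. all of L(V, W):
B = sum(np.outer(rng.standard_normal(dimW), rng.standard_normal(dimV)) for _ in range(dimV))
print(B)   # a generic element of L(V, W)
```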
102,209
Accessibility statement for shu.ac.uk This website is run by Sheffield Hallam University. We know some parts of this website are not fully accessible. For instance: - you cannot modify the line height or spacing of text - most older PDF documents are not fully accessible to screen reader software - some of our online forms are difficult to navigate using a keyboard or a screen reader - some content is displayed in frames with no title attribute - some of our videos do not have text descriptions For more detail, refer to the Technical Information section below. How to report accessibility issues with this platform If you find content that you are unable to access on our website, we recommend you contact the site owner in the first instance, using any contact details on the pages. They will have direct access to the original content and be able to provide any alternative formats needed. How to contact the University about accessibility issues Please contact us at [email protected] Sheffield Hallam are aware that the website does not meet WCAG standards in some areas, as set out below. Non-compliance with the accessibility regulations We are working to fix the following issues by September 2020. 1.1.1: Non-text content - Some images and media do not have alternative text specified. 1.2.1: Audio-only and video-only (prerecorded) - Prerecorded video does not always have a text alternative. 1.3.1: Info and relationships - Not all headings include text. - On some pages, heading text is marked up as paragraph text. - On some forms, fieldset elements are absent or do not have a legend. This can make forms harder to understand for visitors who use screen readers. - In some forms, labels are not linked to controls with unique IDs. - In some navigation elements, menu items are not marked up as lists. 1.3.5: Identify input purpose - Our forms do not feature autocomplete fields. 1.4.1: Use of colour - Some of our links rely on colour to distinguish them from normal text. 1.4.3: Ensure text has sufficient contrast - In some parts of the site, text and background do not meet the guidelines' recommended contrast ratio. 1.4.5: Images of text - On some pages, images are used to present text content. 1.4.10: Reflow - Some of our pages scroll horizontally as well as vertically on small screens. 2.4.1: Bypass blocks - Some pages include iframes with no title attribute. - Some pages contain anchor links that are empty or point to locations that do not exist. 2.4.4: Link purpose (in context) - Some links on the site use text that does not adequately explain the purpose of the link (e.g. 'Click here'). - On some pages, the same link text is used for links to different locations. 2.4.7: Focus visible - Some form and search box elements do not change appearance when selected. 3.2.4: Consistent identification - Common elements such as action buttons, icons and form fields are not always identified in a consistent way. 3.3.3: Error suggestion - Error messages do not always give suggestions on how to fix the problem. 4.1.1: Parsing - Elements on some pages have duplicate attribute IDs. This can make it difficult for assistive technologies to analyse the relationships between different parts of page content. 4.1.2: Ensure links can be used by screen readers - Some of our links are not accessible to screen readers, as they do not have role or title attributes. Disproportionate burden Not applicable. Content that's not within the scope of the accessibility regulations We plan to either fix the PDFs and Word documents that are essential to providing our services, or replace those documents with accessible HTML pages. 
Accessibility regulations do not require us to fix PDFs or other documents published before 23 September 2018 if they are not essential to providing our services. How we tested this website The University is using Silktide quality management software to run regular accessibility audits of this website. Audits run every five days. In addition, we are currently scoping additional ways to identify accessibility issues, including user testing. What we’re doing to improve accessibility As well as running regular accessibility audits, we are working with staff members to raise awareness around good accessibility practice. This statement was prepared on 2 December 2019. It was last updated on 2 December 2019.
144,098
APR 22, 2013 - MarketWatch NEW YORK (MarketWatch) -- U.S. stocks climbed Monday, with the S&P 500 index rebounding from its largest weekly hit in five months, as investors awaited quarterly earnings from the technology sector and Caterpillar Inc. reported Chinese production to be increasing.
47,458
TITLE: How can I get the partial sum of $\sum_{k=1}^{n}\frac{1}{(2k-1)}$? QUESTION [0 upvotes]: It is clear that the sum $\sum_{k=1}^{n}\frac{1}{(2k-1)}$ diverges as $n\to\infty$, but I have not succeeded in finding a closed form for the partial sum using standard methods. Note: Wolfram Alpha expresses the sum in terms of the digamma function. REPLY [0 votes]: Here is one way to find a formula using the digamma function and the harmonic numbers. Since $\sum\limits_{k=1}^{n}\frac{1}{2k-1}=\sum\limits_{k=0}^{n-1}\frac{1}{2k+1}$, it suffices to handle $\sum\limits^{n}_{k=0}\frac{1}{2k+1}$. We can write it using the harmonic numbers $H_n=\sum\limits^{n}_{k=1}\frac{1}{k}$: $$\sum^{2n+1}_{k=1}\frac{1}{k} =\sum^{n}_{k=0}\frac{1}{2k+1} +\sum^{n}_{k=1}\frac{1}{2k} $$ so $$H(2n+1)-\frac{H(n)}{2}= \sum^{n}_{k=0}\frac{1}{2k+1} .$$ [Notation]: We will write $\Delta f(x)=f(x+1)-f(x)$ for the difference operator. [Gamma Function] We define the gamma function by $$\Gamma (x)=\int^{\infty}_{0} t^{x-1}e^{-t}\,dt $$ and we have $$\Gamma (x+1)=x\Gamma (x). $$ [Digamma Function] We define $$\Psi (x)=\frac{\Gamma'(x)}{\Gamma (x)} $$ and call it the digamma function. [Theorem] We have that $$\Delta \Psi (x)=\frac{1}{x}. $$ [Proof] From $$\Gamma (x+1)=x\Gamma (x) $$ take $\ln$ on both sides, $$ \ln(\Gamma (x+1))= \ln (x) + \ln( \Gamma (x)), $$ then differentiate: $$ D\ln(\Gamma (x+1))= \frac{\Gamma' (x+1)}{\Gamma (x+1)}= D\ln (x) + D\ln( \Gamma (x))=\frac{1}{x}+\frac{\Gamma' (x)}{\Gamma (x)}, $$ so $$\frac{\Gamma' (x+1)}{\Gamma (x+1)}=\frac{1}{x}+\frac{\Gamma' (x)}{\Gamma (x)} .$$ Going back to the definition of the digamma function, we have $$ \Psi (x+1)=\frac{1}{x}+\Psi (x), \qquad \Delta \Psi (x)=\frac{1}{x}.$$ Applying the sum $\sum\limits^{n}_{x=1}$ on both sides, the left-hand side telescopes, so $$\sum ^{n}_{x=1}\Delta \Psi (x)= \Psi (n+1)-\Psi (1)=\sum^{n}_{x=1}\frac{1}{x}=H(n). $$ [Corollary] From $H(n)=\Psi(n+1)-\Psi(1)$ and $\sum\limits_{k=0}^n \frac{1}{2k+1}=H(2n+1)-\frac{H(n)}{2}$ we have $$\sum\limits_{k=0}^n \frac{1}{2k+1}=\Psi(2n+2)-\Psi(1) -\frac{1}{2}\left(\Psi(n+1)-\Psi(1) \right), $$ and replacing $n$ by $n-1$ gives the partial sum $\sum\limits_{k=1}^{n}\frac{1}{2k-1}$ asked about in the question.
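For anyone who wants to sanity-check the closed form numerically, here is a short Python snippet using SciPy's `digamma` (the helper names are my own):

```python
from scipy.special import digamma

def partial_sum(n):
    """Direct computation of sum_{k=0}^{n} 1/(2k+1)."""
    return sum(1.0 / (2 * k + 1) for k in range(n + 1))

def via_digamma(n):
    """Closed form psi(2n+2) - psi(1) - (psi(n+1) - psi(1))/2."""
    return digamma(2 * n + 2) - digamma(1) - 0.5 * (digamma(n + 1) - digamma(1))

for n in (1, 5, 50, 1000):
    print(n, partial_sum(n), via_digamma(n))
```

The two columns agree to machine precision; for the sum in the question, $\sum_{k=1}^{n}\frac{1}{2k-1}$ is the same expression evaluated at $n-1$.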
24,840
\begin{document} \title{\bf Correlation structure of time-changed Pearson diffusions} \author{Jebessa B. Mijena} \address{Jebessa B. Mijena, 231 W. Hancock St, Campus Box 17, Department of Mathematics, Georgia College \& State University, Milledgeville, GA 31061} \email{[email protected]} \author{Erkan Nane} \address{Erkan Nane, 221 Parker Hall, Department of Mathematics and Statistics, Auburn University, Auburn, Al 36849} \email{[email protected]} \urladdr{http://www.auburn.edu/$\sim$ezn0001} \begin{abstract} The stochastic solution to diffusion equations with polynomial coefficients is called a Pearson diffusion. If the time derivative is replaced by a distributed fractional derivative, the stochastic solution is called a fractional Pearson diffusion. This paper develops a formula for the covariance function of a fractional Pearson diffusion in steady state, in terms of generalized Mittag-Leffler functions. That formula shows that fractional Pearson diffusions are long-range dependent, with a correlation that falls off like a power law, whose exponent equals the smallest order of the distributed fractional derivative. \end{abstract} \keywords{Pearson diffusion, Fractional derivative, Correlation function, Generalized Mittag-Leffler function} \maketitle \section{Introduction} In this paper we will study time-changed Pearson diffusions. Some versions of this process have been studied recently by Leonenko et al. \cite{leonenko-0, leonenko}. They considered the inverse stable subordinator as the time change process. They have studied the governing equations and correlation structure of the time-changed Pearson diffusions. We will extend their results to the time-changed Pearson diffusions where the time change processes are inverses of mixtures of stable subordinators. To introduce Pearson diffusions, let \begin{equation} \mu(x) = a_0 + a_1x\ \ \mathrm{and}\ \ D(x) = \frac{\sigma^2(x)}{2} = d_0 + d_1x + d_2x^2 . \end{equation} The solution $X_1(t)$ of the stochastic differential equation \begin{equation}dX_1(t) =\mu(X_1(t))dt + \sigma (X_1(t))dW(t), \end{equation} where $W(t)$ is a standard Brownian motion, is called a Pearson diffusion. Special cases of this equation have been studied: $X_1(t)$ is called the Ornstein--Uhlenbeck process \cite{uhlenbeck-30} when $\sigma(x)$ is a positive constant; $X_1(t)$ is called the Cox--Ingersoll--Ross (CIR) process, when $d_2 = 0$, which is used in finance \cite{CIR-85}. The study of Pearson diffusions began with Kolmogorov \cite{kolmogorov-31}. Let $p_1(x, t;y)$ denote the conditional probability density of $x = X_1(t)$ given $y = X_1(0)$, i.e., the transition density of this time-homogeneous Markov process. $p_1(x, t;y)$ is the fundamental solution to the Kolmogorov backward equation (Fokker--Planck equation) \begin{equation}\label{pearson-diffuion-density-pde} \frac{\partial}{\partial t}p_1(x, t;y)=\mathcal{G}p_1(x, t;y) = \left[\mu(y)\frac{\partial}{\partial y} + \frac{\sigma^2(y)}{2}\frac{\partial^2}{\partial y^2}\right]p_1(x, t;y), \end{equation} with the initial condition $p_1(x,0;y) = \delta(x-y)$. In this case $X_1(t)$ is called the stochastic solution to the backward equation \eqref{pearson-diffuion-density-pde}. The Caputo fractional derivative \cite{Caputo} is defined for $0<\beta<1$ as \begin{equation}\label{CaputoDef} \frac{\partial^\beta u(t,x)}{\partial t^\beta}=\frac{1}{\Gamma(1-\beta)}\int_0^t \frac{\partial u(r,x)}{\partial r}\frac{dr}{(t-r)^\beta} . 
\end{equation} Its Laplace transform \begin{equation}\label{CaputolT} \int_0^\infty e^{-st} \frac{\partial^\beta u(t,x)}{\partial t^\beta}\,dt=s^\beta \tilde u(s,x)-s^{\beta-1} u(0,x), \end{equation} where $\tilde u(s,x) = \int_0^\infty e^{-st}u(t,x)\, dt$ and incorporates the initial value in the same way as the first derivative. The distributed order fractional derivative is \begin{equation}\label{DOFDdef} \D^{(\mu)}u(t,x):=\int_0^1\frac{\partial^\beta u(t,x)}{\partial t^\beta} \mu(d\beta), \end{equation} where $\mu$ is a finite Borel measure with $\mu(0,1)>0$. The solution to the distributed order fractional diffusion equation \begin{equation}\label{DOFCPdef-eq} \D^{(\mu)} u(t,y)=\int_0^1\frac{\partial^\beta u(t,y)}{\partial t^\beta} \mu(d\beta)= \mathcal{G}u(t,y)=\left[\mu(y)\frac{\partial}{\partial y} + \frac{\sigma^2(y)}{2}\frac{\partial^2}{\partial y^2}\right]u(t,y), \end{equation} is called distributed order(time-changed) Pearson diffusion. A stochastic process $X(t), \ t>0$ with $\E(X(t))=0$ and $\E(X^2(t))<\infty$ is said to have short range dependence if for fixed $t>0$, $\sum_{h=1}^\infty \E(X(t)X(t+h))<\infty$, otherwise it is said to have long-range dependence. Let $0<\beta_{1}<\beta_{2}<\cdot\cdot\cdot < \beta_{n}<1$. In this paper we study the correlation structure of the stochastic solution of the distributed order time fractional equation \eqref{DOFCPdef-eq} with $$\D^{(\mu)}u(t,y)=\sum_{i=1}^{n}c_{i}\frac{\partial^{\beta_{i}}u(t,y)}{\partial t^{\beta_{i}}},$$ and derive a formula \eqref{correlationfun} and \eqref{correlationfun2} for the correlation in terms of generalized Mittag-Leffler functions. In addition, we obtain an asymptotic expansion \eqref{asymptotic}, to show that the correlation falls off like $t^{-\beta_1}$ for large $t,$ thus demonstrating that distributed order fractional Pearson diffusions exhibit long-range dependence. \section{Distributed order fractional Pearson Diffusion} Let $m(x)$ be the steady-state distribution of $X_1(t)$. The generator associated with the backward equation \eqref{DOFCPdef-eq} \begin{equation}\label{heat-generator} \mathcal{G}p_1(x, t;y) = \left[\mu(y)\frac{\partial}{\partial y} + \frac{\sigma^2(y)}{2}\frac{\partial^2}{\partial y^2}\right]p_1(x, t;y), \end{equation} has a set of eigenfunctions that solve the equation $\mathcal{G}Q_n(y) = -\lambda_nQ_n(y)$ with eigenvalues $0 = \lambda_0 < \lambda_1 < \lambda_2 < \cdots$ that form an orthonormal basis for $L^2(m(y)dy)$. In this paper we will consider three cases as in the papers \cite{leonenko-0, leonenko}: \begin{enumerate} \item $d_1 = d_2 = 0$ and $d_0 > 0$, then $m(y)$ is a normal density, and $Q_n$ are Hermite polynomials; \item $d_2 = 0$, $m(y)$ is a gamma density, and $Q_n$ are Laguerre polynomials; \item $D ''(y) < 0 $ with two positive real roots, $m(y)$ is a beta density, and $Q_n$ are Jacobi polynomials. \end{enumerate} In the remaining cases, the spectrum of $\mathcal{G}$ has a continuous part, and some moments of $X_1(t)$ do not exist. In every case, $m(y)$ is one of the Pearson distributions \cite{pearson-14}. For the remainder of this paper, we will assume one of the three cases (Hermite, Laguerre, Jacobi), so that all moments exist. By separation of variables we can show that the transition density of $X_1(t)$ is given by \begin{equation}p_1(x, t;y) = m(x) \sum_{n=0}^\infty e^{-\lambda_n t}Q_n(x)Q_n(y). \end{equation} See \cite{leonenko} and \cite[section 7.4]{meerschaert-skorskii-book} for more details of this derivation. 
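For orientation, it may help to keep in mind the simplest Hermite case (1), written here with an illustrative normalization that is not fixed anywhere in this paper: take $\mu(x)=-x$ and $\sigma^2(x)=2$, that is $a_0=d_1=d_2=0$, $a_1=-1$, $d_0=1$. Then $m(x)=(2\pi)^{-1/2}e^{-x^2/2}$, the orthonormal eigenfunctions are $Q_n(x)=He_n(x)/\sqrt{n!}$ with the probabilists' Hermite polynomials $He_n$, the eigenvalues are $\lambda_n=n$, and the expansion reads \begin{equation} p_1(x, t;y)=\frac{e^{-x^2/2}}{\sqrt{2\pi}}\sum_{n=0}^\infty e^{-nt}\,\frac{He_n(x)He_n(y)}{n!}, \end{equation} which, by Mehler's formula, is the Gaussian density in $x$ with mean $ye^{-t}$ and variance $1-e^{-2t}$, i.e. the classical Ornstein--Uhlenbeck transition density.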
Let $ D(t)$ be a subordinator with ${\mathbb E}[e^{-s D(t)}]=e^{-t\psi(s)}$, where \begin{equation}\label{phiWdef} \psi(s)=\int_0^\infty(1-e^{-s x})\phi(dx) . \end{equation} Assume that the associated L\'evy measure $\phi$ satisfies \begin{equation}\label{psiWdef} \phi(t,\infty)=\int_0^1 t^{-\beta}\nu(d\beta). \end{equation} An easy computation gives \begin{equation}\begin{split}\label{psiW} \psi(s) &= \int_0^1 s^\beta \Gamma(1-\beta) \nu(d\beta)=\int_0^1 s^\beta \mu(d\beta) . \end{split}\end{equation} Here we define $\mu(d\beta)=\Gamma(1-\beta) \nu(d\beta)$. Let \begin{equation}\label{Epsi-def} E(t)=\inf\{\tau\geq 0:\ D(\tau)>t\}, \end{equation} be the inverse subordinator. Since $\phi(0,\infty)=\infty$ in \eqref{phiWdef}, Theorem 3.1 in \cite{mark} implies that $E(t)$ has a Lebesgue density \begin{equation}\label{E-lebesgue-density} f_{E(t)}(x)=\int_0^t \phi(t-y,\infty) P_{D(x)}(dy) . \end{equation} Note that $E(t)$ is almost surely continuous and nondecreasing. Since the distributed order time-fractional analogue \eqref{DOFCPdef-eq} to the equation \eqref{pearson-diffuion-density-pde} is a fractional Cauchy problem of the form \begin{equation}\label{DOFCPdef-1} \D^{(\mu)} p_\mu(t,x)= \mathcal{G}p_\mu(t,x), \end{equation} a general semigroup result \cite[Theorem 4.2]{jebessa-nane-pams} implies that \begin{equation}\label{distributed-transition-density} p_\mu(x, t;y) = \int_0^\infty p_1(x,u;y)f_{E(t)}(u)du, \end{equation} where $f_{E(t)}(u)$ is the probability density of the inverse subordinator $E(t)$. \begin{lemma}[\cite{m-n-v-jmaa}]\label{eigenvalue-problem} For any $\lambda>0$, $h(t, \lambda)=\int_0^\infty e^{-\lambda x}f_{E(t)}(x)\,dx=\E [e^{-\lambda E(t)}]$ is a mild solution of the distributed-order fractional differential equation \begin{equation}\label{dist-order-density-pde} \D^{(\mu)}h(t,\lambda)=-\lambda h(t, \lambda); \ \ h(0, \lambda)=1. \end{equation} \end{lemma} Then it follows from Lemma \ref{eigenvalue-problem} and equation \eqref{distributed-transition-density} that the transition density of $X_1(E(t))$ is given by \begin{equation}\begin{split} p_\mu(x, t;y)& = m(x) \sum_{n=0}^\infty Q_n(x)Q_n(y) \int_0^\infty e^{-\lambda_n u} f_{E(t)}(u)du\\ &= m(x) \sum_{n=0}^\infty Q_n(x)Q_n(y) h(t,\lambda_n). \end{split} \end{equation} We state the next theorem as a result of the observations above. This theorem extends the results in Leonenko et al. \cite{leonenko-0} and Meerschaert et al. \cite{m-n-v-jmaa}. The proof follows a similar line of ideas as in \cite{leonenko-0} and \cite{m-n-v-jmaa}. \begin{theorem} Let $(l, L)$ be an interval such that $D(x) > 0$ for all $x \in (l, L)$. Suppose that the function $g \in L^2(m(x)dx) $ is such that $\sum_{n}g_nQ_n$ with $g_n= \int_l^L g(x)Q_n(x)m(x)dx$ converges to $g$ uniformly on finite intervals $[y_1,y_2] \subset (l, L)$. Then the fractional Cauchy problem \begin{equation}\label{DOFCPdef-thm} \D^{(\mu)} u(t,y)= \mathcal{G}u(t,y), \end{equation} with initial condition $u(0,y) = g(y)$ has a strong solution $u = u(t,y)$ given by \begin{equation}\label{series-solution-frac-pde} \begin{split} u(t,y) &= \E(g(X_1(E(t)))|X_1(0)=y) =\int_l^L p_\mu(x, t;y)g(x)dx \\& = \sum_{n=0}^\infty g_n h(t,\lambda_n) Q_n(y) \end{split} \end{equation} The series in \eqref{series-solution-frac-pde} converges absolutely for each fixed $t > 0,y \in (l, L)$, and \eqref{DOFCPdef-thm} holds pointwise. \end{theorem} Let $0<\beta_{1}<\beta_{2}<\cdot\cdot\cdot < \beta_{n}<1$. 
In this paper we study the correlation structure of the stochastic solution of the distributed order time fractional equation \eqref{DOFCPdef-eq} with $$\D^{(\mu)}=\sum_{i=1}^{n}c_{i}\frac{\partial^{\beta_{i}}g(x,t)}{\partial t^{\beta_{i}}},$$ this corresponds to the case where \begin{equation}\label{n-term-laplace-exponent} \psi(s)=c_{1}s^{\beta_{1}}+c_{2}s^{\beta_{2}}+\cdots +c_{n}s^{\beta_{n}}. \end{equation} In this case the L\'evy subordinator can be written as $$D_\psi(t)=(c_1)^{1/\beta_1}D^1(t)+(c_2)^{1/\beta_2}D^2(t)+\cdots+ (c_n)^{1/\beta_n}D^n(t),$$ where $D^1(t), D^2(t),\cdots , D^n(t)$ are independent stable subordinators of index $0<\beta_{1}<\beta_{2}<\cdot\cdot\cdot < \beta_{n}<1$. In this paper we give the correlation function of the time-changed Pearson diffusion $X_1(E)$ where $E$ is the inverse of subordinator with Laplace exponent given by \eqref{n-term-laplace-exponent}. \section{Correlation structure} In this section we present the correlation structure of the time-changed Pearson diffusion $X_1(E(t))$ when $X_1(E(t))$ is the stochastic solution of the equation $$c_1\frac{\partial^{\beta_1}p(x,t;y)}{\partial t^{\beta_1}}+c_2\frac{\partial^{\beta_2}p(x,t;y)}{\partial t^{\beta_2}}=\mathcal{G}p(x,t;y).$$ In this case $E(t)$ is the inverse of $D(t)$ that has the following Laplace exponent \begin{equation}\label{psiD2} \psi(s)= c_1s^{\beta_1} + c_2s^{\beta_2}, \end{equation} for $c_1,c_2\geq 0$, $c_1 + c_2 = 1,$ and $\beta_1<\beta_2.$ In what follows we use notation for the density of the inverse subordinator $E(t)$ as $f_{E(t)}(u)=f_t(u).$ Let $\Phi_\theta(t) = \int_{0}^\infty e^{-\theta u}f_t(u)\ du$. Using $ \int^{\infty}_{0}e^{-st}f_{t}(u)dt=\frac{1}{s}\psi(s)e^{-u\psi(s)}$ (\cite{mark}, 3.13) and Fubini's theorem, the Laplace transform of $\Phi_\theta(t)$ is given by \begin{eqnarray}\label{laplaceofPhi} \mathcal{L}({\Phi}_\theta(t);s)&=& \int_{0}^{\infty}e^{-\theta u}\int_{0}^{\infty}e^{-st}f_t(u)dt\ du\nonumber\\ &=&\frac{\psi(s)}{s}\int_{0}^{\infty}e^{-u(\theta + \psi(s))}\ du\nonumber\\ &=&\frac{\psi(s)}{s(\theta + \psi(s))}= \frac{c_1s^{\beta_1-1} + c_2s^{\beta_2-1}}{\theta + c_1s^{\beta_1} + c_2s^{\beta_2}}. \end{eqnarray} In order to invert analytically the Laplace transform \eqref{laplaceofPhi}, we can apply the well-known expression of the Laplace transform of the generalized Mittag-Leffler function (see \cite{saxena}, eq. 9), i.e. \begin{equation}\label{laplaceofGMF} \mathcal{L}(t^{\gamma-1}E^{\delta}_{\beta,\gamma}(\omega t^{\beta});s)=s^{-\gamma}\left(1-\omega s^{-\beta}\right)^{-\delta}, \end{equation} where $\mbox{Re}(\beta)>0, \mbox{Re}(\gamma)>0, \mbox{Re}(\delta)>0$ and $s>|\omega|^{\frac{1}{Re(\beta)}}.$ The Generalized Mittag-Leffler (GML) function is defined as \begin{equation}\label{GMLfunction} E^{\gamma}_{\alpha,\beta}(z) = \displaystyle\sum_{j=0}^{\infty}\frac{(\gamma)_jz^j}{j!\Gamma(\alpha j + \beta)},\ \ \ \alpha, \beta\in \mathbb{C}, Re(\alpha), Re(\beta), Re(\gamma)>0, \end{equation} where $(\gamma)_j=\gamma(\gamma + 1)\cdots (\gamma+j-1)$ (for $j=0,1,\ldots,\ \mbox{and}\ \gamma\neq 0$) is the Pochammer symbol and $(\gamma)_0=1.$ When $\gamma = 1$ \eqref{GMLfunction} reduces to the Mittag-Leffler function \begin{equation} E_{\alpha,\beta}(z) = \displaystyle\sum_{j=0}^{\infty}\frac{z^j}{\Gamma(\alpha j + \beta)}. 
\end{equation}Now using formulae $(26)$ and $(27)$ of \cite{saxena}, we get \begin{eqnarray}\label{thefunctPhi} \Phi_\theta(t) &=&\displaystyle\sum_{r=0}^{\infty}\left(-\frac{c_1t^{\beta_2-\beta_1}}{c_2}\right)^r E^{r+1}_{\beta_2, (\beta_2-\beta_1)r + 1}\left(-\frac{\theta t^{\beta_2}}{c_2}\right)\\&-&\displaystyle\sum_{r=0}^{\infty}\left(-\frac{c_1t^{\beta_2-\beta_1}}{c_2}\right)^{r+1} E^{r+1}_{\beta_2, (\beta_2-\beta_1)(r+1) + 1}\left(-\frac{\theta t^{\beta_2}}{c_2}\right).\nonumber \end{eqnarray} Clearly, $\Phi_\theta(0) = 1.$ \begin{remark} Consider the special case $c_1=0,c_2=1:$ the formula for $\Phi_\theta(t)$ reduces, in this case, to \begin{equation} \Phi_\theta(t)= E_{\beta_2, 1}(-\theta t^{\beta_2})=\displaystyle\sum_{j=0}^{\infty}\frac{\left(-\theta t^{\beta_2}\right)^j}{\Gamma(1+j\beta_2)}, \end{equation} as shown in Bingham \cite{bingham} and Bondesson, Kristiansen, and Steute \cite{bondesson} when $E(t)$ is standard inverse $\beta_2-$stable subordinator. \end{remark} Now we compute the expected value of the inverse subordinator $E(t)$. First, we find the Laplace transform of $\mathbb{E}(E(t)) = \int_{0}^{\infty}xf_t(x)\ dx$. Using Fubini's theorem, we have \begin{eqnarray}\label{expectedvalueofE_t} \mathcal{L}(\mathbb{E}(E(t));\lambda)&=&\int_{0}^{\infty}e^{-\lambda t}\mathbb{E}(E(t))\ dt \\ &=&\int_{0}^{\infty}x\int_{0}^{\infty}e^{-\lambda t}f_{t}(x)\ dt\ dx\nonumber\\ &=&\frac{\psi(\lambda)}{\lambda}\int_{0}^{\infty}x e^{-x\psi(\lambda)}\ dx=\frac{1}{\lambda\psi(\lambda)}\nonumber\\ &=&\frac{1}{c_1\lambda^{\beta_1 +1} + c_2\lambda^{\beta_2 + 1}} = \frac{1}{c_2}\frac{\lambda^{-(\beta_2 + 1)}}{1 + \frac{c_1}{c_2}\lambda^{-(\beta_2-\beta_1)}}.\nonumber \end{eqnarray} Therefore, using \eqref{laplaceofGMF} when $\delta = 1$ we have \begin{equation}\label{expectedvalueexpre} \mathbb{E}(E(t)) = \frac{1}{c_2}t^{\beta_2}E_{\beta_2-\beta_1, \beta_2 + 1}\left(-\frac{c_1}{c_2}t^{\beta_2-\beta_1}\right). \end{equation} \begin{remark} Consider the special case $c_1=0,c_2=1:$ the expected value of $E(t)$ reduces, in this case, to \begin{equation} \mathbb{E}(E(t)) =\frac{t^{\beta_2}}{\Gamma(1+\beta_2)}, \end{equation} thus giving the well-known formula $\mathbb{E}(E(t)) = t^{\beta_2}/\Gamma(1+\beta_2)$ for the mean of the standard inverse $\beta_2-$stable subordinator \cite[Eq.(9)]{baeumer} \end{remark} \subsection{Correlation function} If the time-homogeneous Markov process $X_1(t)$ is in steady state, then its probability density ${ m}(x)$ stays the same over all time. We will say that the time-changed Pearson diffusion $X_1(E(t))$ is in steady state if it starts with the distribution $m(x)$. The time-changed Pearson diffusion in steady state has mean $ \mathbb{E}[X_1(E(t))] =\mathbb{E}[X_1(t)]= m_1$ and variance Var$[X(E(t))] = $Var$[X_1(t)]=m_2^2$ which do not vary over time. The stationary Pearson diffusion has correlation function \begin{equation}\label{pearsoncorrelation} \mbox{corr}[X_1(t), X_1(s)] = \mbox{exp}(-\theta|t-s|), \end{equation} where the correlation parameter $\theta = \lambda_1$ is the smallest positive eigenvalue of the generator in equation \eqref{heat-generator}\cite{leonenko-0}. Thus the Pearson diffusion exhibits \textit{short-term dependence}, with a correlation function that falls off exponentially. The next result gives formula for the correlation function of time-changed Pearson diffusion in steady state. This is our main result. 
\begin{theorem}\label{main-theorem} Suppose that $X_1(t)$ is a Pearson diffusion in steady state, so that its correlation function is given by \eqref{pearsoncorrelation}. Then the correlation function of the corresponding time-changed Pearson diffusion $X(t) = X_1(E(t)),$ where $E(t)$ is an independent inverse subordinator \eqref{Epsi-def} of $D(t)$ with Laplace exponent \eqref{psiD2}, is given by \begin{equation}\label{correlationfun} \mbox{corr}[X(t), X(s)] =\theta\int_{y=0}^{s}h(y)\Phi_\theta(t-y)\ dy + \Phi_\theta(t), \end{equation} where $h(y) = \frac{1}{c_2}y^{\beta_2 - 1}E_{\beta_2 - \beta_1, \beta_2}\left(-\frac{c_1}{c_2}y^{\beta_2-\beta_1}\right)$ and $\Phi_\theta(t)$ is given by \eqref{thefunctPhi}. \end{theorem} \begin{proof}[\bf Proof of Theorem \ref{main-theorem}] We use the method employed by Leonenko et al \cite{leonenko} with crucial changes. Write \begin{eqnarray}\mbox{corr}[X(t), X(s)] &=& \mbox{corr}[X_1(E(t)), X_1(E(s))]\nonumber\\ &=&\int_{0}^{\infty}\int_{0}^{\infty}e^{-\theta|u-v|}H(du, dv),\label{corrfun}\end{eqnarray} a Lebesgue-Stieltjes integral with respect to the bivariate distribution function $H(u, v) :=\mathbb{P}[E(t)\leq u, E(s)\leq v]$ of the process $E(t).$ To compute the integral in \eqref{corrfun}, we use the bivariate integration by parts formula \cite[Lemma 2.2]{gill} \begin{eqnarray} \int_0^a\int_0^b G(u,v)H(du, dv) &=& \int_0^a\int_0^b H([u, a]\times [v, b])G(du, dv)\nonumber\\ &+& \int_0^a H([u,a]\times(0,b])G(du,0)\nonumber\\ &+& \int_0^b H((0, a]\times [v,b])G(0, dv)\nonumber\\ &+& G(0,0)H((0, a]\times (0,b]), \end{eqnarray} with $G(u,v) = e^{-\theta|u-v|},$ and the limits of integration $a$ and $b$ are infinite: \begin{eqnarray} \int_0^\infty\int_0^\infty G(u,v)H(du, dv) &=& \int_0^\infty\int_0^\infty H([u, \infty]\times [v, \infty])G(du, dv)\nonumber\\ &+& \int_0^\infty H([u,\infty]\times(0,\infty])G(du,0)\nonumber\\ &+& \int_0^\infty H((0, \infty]\times [v,\infty])G(0, dv)\nonumber\\ &+& G(0,0)H((0, \infty]\times (0,\infty])\nonumber\end{eqnarray} \begin{eqnarray}\label{corrfunction2} &=&\int_0^\infty\int_0^\infty \mathbb{P}[E_t\geq u, E_s\geq v]G(du, dv)+\int_0^{\infty}\mathbb{P}[E_t\geq u]G(du,0)\nonumber\\ &+&\int_0^{\infty}\mathbb{P}[E_s\geq v]G(0,dv) + 1,\label{intbyparts} \end{eqnarray} since $E(t)>0$ with probability $1$ for all $t>0.$ Notice that $G(du,v) = g_v(u)\,du$ for all $v\geq 0,$ where \begin{equation}\label{gfunction} g_v(u) = -\theta e^{-\theta(u-v)}I\{u > v\} + \theta e^{-\theta(v-u)}I\{u\leq v\}. \end{equation} Integrate by parts to get \begin{eqnarray} \int_0^\infty\mathbb{P}[E(t)\geq u]G(du, 0)&=&\int_0^\infty (1-\mathbb{P}[E(t)<u])(-\theta e^{-\theta u})\,du\nonumber\\ &=&\left[e^{-\theta u}\mathbb{P}[E(t)\geq u]\right]_0^\infty + \int_0^\infty e^{-\theta u}f_t(u)\,du\nonumber\\ &=& \Phi_\theta(t) - 1. \end{eqnarray} Similarly, $$\int_0^{\infty}\mathbb{P}[E(s)\geq v]G(0, dv) = \int_0^\infty e^{-\theta v}f_s(v)\,dv -1=\Phi_\theta(s)-1,$$ and hence \eqref{corrfunction2} reduces to \begin{equation}\label{genform} \int_0^\infty\int_0^\infty G(u, v)H(du, dv) = I + \Phi_\theta(t) + \Phi_\theta(s)- 1, \end{equation} where $$I = \int_0^\infty\int_0^\infty \mathbb{P}[E_t\geq u, E_s\geq v]G(du,dv).$$ Assume (without loss of generality) that $t\geq s$. 
Then $E_t\geq E_s$, so $\mathbb{P}[E(t)\geq u, E(s)\geq v] = \mathbb{P}[E(s)\geq v]$ for $u\leq v.$ Write $I = I_1 + I_2 + I_3,$ where \begin{eqnarray} I_1:= \int_{u < v}\mathbb{P}[E(t)\geq u, E(s)\geq v]G(du,dv) = \int_{u<v}\mathbb{P}[E(s)\geq v]G(du,dv)\nonumber\\ I_2:= \int_{u = v}\mathbb{P}[E(t)\geq u, E(s)\geq v]G(du,dv) = \int_{u=v}\mathbb{P}[E(s)\geq v]G(du,dv)\nonumber\\ I_3:= \int_{u \geq v}\mathbb{P}[E(t)\geq u, E(s)\geq v]G(du,dv).\nonumber \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber \end{eqnarray} Since $G(du,dv) = -\theta^2 e^{-\theta(v-u)}\,du\,dv$ for $u < v$, we may write \begin{eqnarray} I_1 &=& -\theta^2\int_{v=0}^\infty\int_{u=0}^v \mathbb{P}[E(s)\geq v]e^{\theta(u-v)}\,du\,dv\nonumber\\ &=&-\theta\int_{v=0}^\infty \mathbb{P}[E(s)\geq v]\left(1 - e^{-\theta v}\right)\,dv\nonumber\\ &=& -\theta\int_{v=0}^\infty \mathbb{P}[E(s)\geq v]\,dv + \theta\int_0^\infty e^{-\theta v}\mathbb{P}[E_s \geq v]\,dv\nonumber\\ &=& -\theta\mathbb{E}[E(s)] + \theta\int_0^\infty e^{-\theta v}\mathbb{P}[E(s) \geq v]\,dv,\nonumber \end{eqnarray} using the well-known formula $\mathbb{E}[X] = \int_0^\infty\mathbb{P}[X\geq x]\,dx$ for any positive random variable. Using integration by parts $$\int_0^\infty e^{-\theta v}\mathbb{P}[E(s) \geq v]\,dv = \frac{1}{\theta} - \frac{1}{\theta}\int_0^\infty e^{-\theta v}f_s(v)\,dv=\frac{1}{\theta} - \frac{\Phi_\theta(s)}{\theta}.$$ So, \begin{equation}\label{part_one} I_1 = -\theta\ \mathbb{E}[E(s)] - \Phi_\theta(s) + 1. \end{equation} Since $G(du,v) = g_v(u)du,$ where the function \eqref{gfunction} has jump of size $2\theta$ at the point $u = v$, we also have \begin{equation} I_2 = 2\theta\int_0^\infty\mathbb{P}[E(s)\geq v]\,dv = 2\theta\ \mathbb{E}[E(s)]. \end{equation} Since $G(du,dv) = -\theta^2e^{-\theta(u-v)}\,du\ dv$ for $u>v$ as well, we have \begin{eqnarray}\label{part3} I_3 = -\theta^2\int_{v=0}^\infty\mathbb{P}[E(t)\geq u, E(s)\geq v]\int_{u=v}^\infty e^{-\theta(u-v)}\, du\, dv. \end{eqnarray} Next, we obtain an expression for $\mathbb{P}[E(t)\geq u, E(s)\geq v].$ Since the process $E(t)$ is inverse to the stable subordinator $D(u),$ we have $\{E(t)> u\}=\{D(u)<t\}$ \cite[Eq. (3.2)]{mark2}, and since $E(t)$ has a density, it follows that $\mathbb{P}[E(t)\geq u, E(s)\geq v] = \mathbb{P}[D(u)<t, D(v)<s].$ Since $D(u)$ has stationary independent increments, it follows that \begin{eqnarray} \P[E(t)\geq u, E(s) \geq v]& =& \mathbb{P}[D(u)<t, D(v)<s]\nonumber\\ &=& \mathbb{P}[(D(u) - D(v)) + D(v) < t, D(v) <s]\nonumber\\ &=&\int_{y=0}^sg(y,v)\int_{x=0}^{t-y}g(x, u -v)\, dx\,dy, \end{eqnarray} substituting the above expression into \eqref{part3} and using the Fubini Theorem, it follow that \begin{eqnarray}\label{part3estimate} I_3&=& -\theta^2\int_{y=0}^s\int_{x=0}^{t-y}\int_{v=0}^\infty g(y,v)\ dv\int_{u=v}^\infty g(x,u-v)e^{-\theta(u-v)}du\ dx\ dy\nonumber\\ &=&-\theta^2\int_{y=0}^s\int_{x=0}^{t-y}\int_{v=0}^\infty g(y,v)\ dv\int_{z=0}^\infty g(x,z)e^{-\theta z}dz\,dx\,dy. 
\end{eqnarray} Let $h(y) = \int_{v=0}^\infty g(y,v)\ dv$ and $k(\theta, x) = \int_{z=0}^\infty g(x,z)e^{-\theta z}\ dz.$ So, the Laplace transform of $h(y)$ is given by \begin{eqnarray} \mathcal{L}(h(y);s) = \int_{y=0}^\infty e^{-sy}h(y)\ dy &= & \int_{y=0}^\infty e^{-sy}\int_{v=0}^\infty g(y,v)\,dv\,dy\nonumber\\ &=&\int_{y=0}^\infty \int_{v=0}^\infty e^{-sy}g(y,v)\,dy\,dv\nonumber\\ &=&\int_{v=0}^\infty e^{-v\psi(s)}\ dv\nonumber\\ &=&\frac{1}{\psi_(s)} = \frac{1}{c_1s^{\beta_1}+c_2s^{\beta_2}}.\nonumber \end{eqnarray} Now to get $h(y)$ take inverse Laplace of $\mathcal{L}(h(y);s)$ by \eqref{laplaceofGMF} and $\delta=1$. This implies, \begin{equation}\label{gestimate}h(y) = \frac{1}{c_2}y^{\beta_2 - 1}E_{\beta_2 - \beta_1, \beta_2}\left(-\frac{c_1}{c_2}y^{\beta_2-\beta_1}\right).\end{equation} Similarly, take the Laplace transform of $k(\theta, x):$ \begin{eqnarray}\label{estimateofkfun} \mathcal{L}(k(\theta, x);s) &=& \int_{x=0}^\infty e^{-sx}\int_{z=0}^\infty e^{-\theta z}g(x,z)\,dz\,dx\nonumber\\ &=&\int_{z=0}^\infty e^{-\theta z}\int_{x=0}^\infty e^{-s x}g(x,z)\,dx\,dz\nonumber\\ &=&\int_{z=0}^\infty e^{-\theta z} e^{-z\psi(s)}dz\nonumber\\ &=& \frac{1}{\theta + \psi(s)}\nonumber\\ & =& \frac{1}{\theta + c_1s^{\beta_1}+c_2s^{\beta_2}} = \frac{1}{\theta}\left(1-\frac{c_1s^{\beta_1} + c_2s^{\beta_2}}{\theta + c_1s^{\beta_1} + c_2s^{\beta_2}}\right)=\mathcal{L}\left(-\frac{1}{\theta}\frac{d}{dx}\Phi_\theta(x)\right). \end{eqnarray} The uniqueness theorem of the Laplace transform applied to the $x-$variable implies that for any $\theta>0$ we have \begin{equation}\label{relationbnkandPhi} k(\theta,x) = -\frac{1}{\theta}\frac{d}{dx}\Phi_\theta(x). \end{equation} Substituting \eqref{gestimate} and \eqref{relationbnkandPhi} in \eqref{part3estimate}, we have \begin{eqnarray}\label{I3estimate} I_3 &=& -\theta^2\int_{y=0}^s\int_{x=0}^{t-y}h(y)\left(-\frac{1}{\theta}\frac{d}{dx}\Phi_\theta(x)\right)dx\ dy\nonumber\\ &=&\theta\int_{y=0}^s h(y)\int_{x=0}^{t-y}\frac{d}{dx}\Phi_\theta(x)\ dx\ dy\nonumber\\ &=&\theta\int_{y=0}^{s}h(y)(\Phi_\theta(t-y)-1)\ dy\nonumber\\ &=&\theta\int_{y=0}^{s}h(y)\Phi_\theta(t-y)\ dy - \theta\int_{y=0}^{s}h(y)\ dy. \end{eqnarray} Using properties of Laplace transform, we have $$\mathcal{L}\left(\int_{y=0}^{s}h(y)\ dy;\mu\right)=\frac{1}{\mu}\mathcal{L}(h(y);\mu)=\frac{1}{c_1\mu^{\beta_1+1}+c_2\mu^{\beta_2+1}}.$$ Hence, using uniqueness of Laplace transform $$\int_{y=0}^{s}h(y)\ dy = \mathbb{E}(E(s)).$$ Therefore, \begin{eqnarray}\label{I3estimate} I_3 = \theta\int_{y=0}^{s}h(y)\Phi_\theta(t-y)\ dy - \theta\mathbb{E}(E(s)).\end{eqnarray} Now it follows using \eqref{corrfun} and \eqref{genform} that \begin{eqnarray} \mbox{corr}[X(t), X(s)] &=&\int_{0}^{\infty}\int_{0}^{\infty}e^{-\theta|u-v|}H(du, dv)\nonumber\\ &=&I_1 + I_2 + I_3 + \Phi_\theta(t) + \Phi_\theta(s) - 1\nonumber\\ &=&\left[-\theta\mathbb{E}(E(s)) -\Phi_\theta(s) + 1\right] + 2\theta\mathbb{E}(E(s))\nonumber\\ &+&\theta\int_{y=0}^{s}h(y)\Phi(t-y)\ dy - \theta\mathbb{E}(E(s))+\Phi_\theta(t) + \Phi_\theta(s) - 1\nonumber\\ &=&\theta\int_{y=0}^{s}h(y)\Phi_\theta(t-y)\ dy + \Phi_\theta(t),\nonumber \end{eqnarray} which agrees with \eqref{correlationfun}. \end{proof} \begin{remark} When $t=s,$ it must be true that $\mbox{corr}[X(t), X(s)]=1$. 
To see that this follows from \eqref{correlationfun}, recall the formula for Laplace transform of the convolution: $$\mathcal{L}((h *\Phi_\theta)(t)) = \mathcal{L}(h(t))\mathcal{L}(\Phi_\theta(t)).$$ So, \begin{eqnarray} \mathcal{L}\left(\int_{y=0}^{t}h(y)\Phi_\theta(t-y)\ dy;\mu\right)&=&\mathcal{L}\left(h(t);\mu\right)\mathcal{L}\left(\Phi_\theta(t);\mu\right)\nonumber\\ &=&\left(\frac{1}{c_1s^{\beta_1}+c_2s^{\beta_2}}\right)\left(\frac{c_1s^{\beta_1-1}+c_2s^{\beta_2-1}}{\theta + c_1s^{\beta_1}+c_2s^{\beta_2}}\right)\nonumber\\ &=&\frac{s^{-1}}{\theta + c_1s^{\beta_1}+c_2s^{\beta_2}}.\nonumber \end{eqnarray} Using properties of Laplace transform and \eqref{estimateofkfun}, we see that $$\int_{y=0}^{t}h(y)\Phi_\theta(t-y)\ dy = \int_{0}^{t}-\frac{1}{\theta}\frac{d}{dy}\Phi_\theta(y)\ dy=\frac{1 - \Phi_\theta(t)}{\theta}.$$ Then, it follows from \eqref{correlationfun} that $\mbox{corr}[X(t), X(s)]=1$ when $t=s.$ \end{remark} \begin{remark} Recall \cite[Eq. 2.59, p.59]{beghin} that \begin{equation}\label{asympoticofGML} E^{k}_{v,\beta}(-ct^v)=\frac{1}{c^kt^{vk}\Gamma(\beta-vk)} + o(t^{-vk}),\ \ \ t\rightarrow \infty, \end{equation} \begin{equation}\label{asymptoticforsmalltime} E^{k}_{v,\beta}(-ct^v)\simeq \frac{1}{\Gamma(\beta)}-\frac{ct^{v}k}{\Gamma(\beta+v)},\ \ \ \ \ 0<t<<1. \end{equation} For $k=1$, using \eqref{asympoticofGML} the asymptotic behavior of $\mathbb{E}(E(t))$ for $t\rightarrow\infty$ is given by $$ \E(E(t))=\frac{t^{\beta_1}}{c_1\Gamma(1+\beta_1)}+o(t^{\beta_1 - \beta_2}),\ \ \ t\to\infty. $$ Similarly, the asymptotic behavior for small $t$ can be deduced using \eqref{asymptoticforsmalltime} when $k=1$: $$\mathbb{E}(E(t))\simeq\frac{t^{\beta_2}}{c_2\Gamma(1+\beta_2)}-\frac{c_1t^{2\beta_2-\beta_1}}{c_2\Gamma(1+2\beta_2-\beta_1)},\ \ \ 0<t<<1.$$ \end{remark} \begin{remark}\label{asympoticforPhi} Stationary Pearson diffusion exhibit short-range dependence, since their correlation function \eqref{pearsoncorrelation} falls off exponentially fast. However, the correlation function of time-changed Pearson diffusion falls off like a power law with exponent $\beta_1\in(0,1), (\beta_1 < \beta_2)$ and so this process exhibits long-range dependence. To see this, fix $s>0$ and recall that by \eqref{relationbnkandPhi} $$\Phi_\theta(t) = 1-\int_{x=0}^{t}\theta k(\theta, x)\,dx.$$ For fixed $\theta$, $\theta k(\theta, x)$ is a density function for $x\geq 0.$ Since $$\lim_{s\rightarrow 0}\frac{\theta}{c_1}s^{-\beta_1}\left(\frac{c_{1}s^{\beta_{1}}+c_{2}s^{\beta_{2}}}{\theta+c_{1}s^{\beta_{1}}+c_{2}s^{\beta_{2}}}\right)= 1.$$ Then by \cite[Example. (c), p.447]{feller} we get $$\Phi_\theta(t)\simeq \frac{c_1}{\Gamma(1-\beta_1)\theta t^{\beta_1}}, \ \ \ \ \ t\rightarrow \infty,$$ which depends only on the smaller fractional index $\beta_1.$ You can also see \cite[Eq. 2.64]{beghin}. 
Then $$\Phi_\theta(t(1-sy/t))\simeq \frac{c_1}{\Gamma(1-\beta_1)\theta t^{\beta_1}(1-sy/t)^{\beta_1}}, \ \ \ t\rightarrow\infty\ \ \mbox{for any}\ \ y\in[0,1].$$ Using dominated convergence theorem $(|\Phi_\theta(t)|\leq 1)$ and \cite[Eq.(1.99)]{podlubny} we get \begin{eqnarray} \theta\int_{y=0}^{s}h(y)\Phi_\theta(t-y)\ dy&=&\theta s\int_{0}^{1}h(sz)\Phi_\theta(t(1-sz/t))\ dz\nonumber\\ &\sim& \frac{sc_1}{\Gamma(1-\beta_1)t^{\beta_1}}\int_{0}^{1}h(sz)\ dz\nonumber\\ &=&\frac{c_1}{c_2}\frac{s^{\beta_2}E_{\beta_2-\beta_1,\beta_2+1}\left(-\frac{c_1}{c_2}s^{\beta_2-\beta_1}\right)}{t^{\beta_1}\Gamma(1-\beta_1)},\nonumber \end{eqnarray} as $t\rightarrow\infty.$ It follows from \eqref{correlationfun} that for any fixed $s>0$ we have \begin{equation}\label{asymptotic} \mbox{corr}[X(t), X(s)] \sim \frac{1}{t^{\beta_1}\Gamma(1-\beta_1)}\left(\frac{c_1}{\theta}+\frac{c_1}{c_2}s^{\beta_2}E_{\beta_2-\beta_1,\beta_2+1}\left(-\frac{c_1}{c_2}s^{\beta_2-\beta_1}\right)\right),\ \ \end{equation} as $t\rightarrow\infty.$ Now if we also let $s\rightarrow\infty,$ using \eqref{asympoticofGML} when $k=1:$ \begin{equation} \mbox{corr}[X(t), X(s)] \sim \frac{1}{t^{\beta_1}\Gamma(1-\beta_1)}\left(\frac{c_1}{\theta}+\frac{s^{\beta_1}}{\Gamma(1+\beta_1)}\right), \end{equation} as $t\rightarrow\infty$ and $s\rightarrow\infty.$ \end{remark} With careful changes to the proof of Theorem \ref{main-theorem} we can prove the following extension. \begin{theorem}\label{main-theorem2} Suppose that $X_1(t)$ is a Pearson diffusion in steady state, so that its correlation function is given by \eqref{pearsoncorrelation}. Then the correlation function of the corresponding time-changed Pearson diffusion $X(t) = X_1(E(t)),$ where $E(t)$ is an independent inverse subordinator \eqref{Epsi-def} of $D(t)$ with Laplace exponent \eqref{n-term-laplace-exponent}, is given by \begin{equation}\label{correlationfun2} \mbox{corr}[X(t), X(s)] =\theta\int_{y=0}^{s}h_n(y)\Phi_{\theta,n}(t-y)\ dy + \Phi_{\theta,n}(t), \end{equation} where Laplace transform of $h_n$ is given by $$\tilde{h}_n(s)=\frac{1}{\psi(s)}=\frac{1}{c_{1}s^{\beta_{1}}+c_{2}s^{\beta_{2}}+\cdots +c_{n}s^{\beta_{n}}},$$ and the Laplace transform of $\Phi_{\theta,n}$ is given by $$ \tilde{\Phi}_{\theta,n}(s)=\frac{\psi(s)}{s(\theta+\psi(s))}=\frac{c_{1}s^{\beta_{1}}+c_{2}s^{\beta_{2}}+\cdots +c_{n}s^{\beta_{n}}}{s(\theta+c_{1}s^{\beta_{1}}+c_{2}s^{\beta_{2}}+\cdots +c_{n}s^{\beta_{n}})}. $$ In this case $\Phi_{\theta,n}(t)=\E(e^{-\theta E(t)})$ is the laplace transform of the inverse subordinator $E(t)$. \end{theorem} \begin{remark} Using \cite[Eq.(5.37)]{podlubny} and \eqref{laplaceofGMF} we get the expression for $h_n(y):$ \begin{eqnarray} h_n(y) &=& \frac{1}{c_n}\displaystyle\sum_{m=0}^{\infty}(-1)^m\displaystyle\sum_{\substack{k_0+k_1+\cdots+k_{n-2}=m\\ k_0\geq0, k_1\geq 0,\ldots, k_{n-2}\geq 0 }}\left(m;k_0, k_1, \ldots, k_{n-2}\right)\\ &&\prod_{i=0}^{n-2}\left(\frac{c_i}{c_n}\right)^{k_i}y^{(\beta_n-\beta_{n-1})(m+1)+\beta_{n-1}+\sum_{i=0}^{n-2}(\beta_{n-1}-\beta_i)k_i - 1}\nonumber\\ &\times&E^{m+1}_{\beta_n-\beta_{n-1}, (\beta_n-\beta_{n-1})(m+1)+\beta_{n-1}+\sum_{i=0}^{n-2}(\beta_{n-1}-\beta_i)k_i }\left(\frac{-c_{n-1}}{c_n}y^{\beta_n-\beta_{n-1}}\right),\nonumber \end{eqnarray} where $(m;k_0, k_1,\ldots,k_{n-2})$ are the multinomial coefficients. \end{remark} \begin{remark} As in remark \eqref{asympoticforPhi} $$\Phi_{\theta,n}(t) = 1-\int_{x=0}^{t}\theta k(x,\theta)\,dx,$$ where in this case $k(x,\theta)$ corresponds to the general case. 
Since $$\lim_{s\rightarrow 0}\frac{\theta}{c_1}s^{-\beta_1}\left(\frac{c_{1}s^{\beta_{1}}+c_{2}s^{\beta_{2}}+\cdots +c_{n}s^{\beta_{n}}}{\theta+c_{1}s^{\beta_{1}}+c_{2}s^{\beta_{2}}+\cdots +c_{n}s^{\beta_{n}}}\right)= 1.$$ Then by \cite[Example. (c), p.447]{feller} we have $$\Phi_{\theta,n}(t) \sim \frac{c_1}{\theta\Gamma(1-\beta_1)t^{\beta_1}},\ \ \ \mbox{as}\ \ \ t\rightarrow\infty,$$ which depends only on the smaller fractional index $\beta_1.$ Hence, for fixed $s>0$ \begin{equation} \mbox{corr}[X(t), X(s)]\sim \frac{1}{t^{\beta_1}\Gamma(1-\beta_1)}\left(\frac{c_1}{\theta}+c_1\int_{0}^{s}h_n(y)\,dy\right). \end{equation} Therefore in this case the time-changed Pearson diffusion exhibits long-range dependence. \end{remark}
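For readers who want to experiment numerically, the generalized Mittag-Leffler function \eqref{GMLfunction} can be evaluated for moderate arguments by truncating its defining series; the short Python sketch below (an illustration only, with arbitrarily chosen parameter values) also evaluates the mean \eqref{expectedvalueexpre} of the inverse subordinator.
\begin{verbatim}
# Truncated series for the generalized Mittag-Leffler function of
# (GMLfunction); adequate for moderate |z| only, not a production algorithm.
import math

def ml(alpha, beta, z, gamma=1.0, terms=60):
    total, poch = 0.0, 1.0                    # poch = (gamma)_j (Pochhammer)
    for j in range(terms):
        total += poch * z**j / (math.factorial(j) * math.gamma(alpha*j + beta))
        poch *= gamma + j
    return total

# Sanity checks against classical special cases:
assert abs(ml(1.0, 1.0, 0.7) - math.exp(0.7)) < 1e-12      # E_{1,1}(z) = e^z
assert abs(ml(2.0, 1.0, -1.3**2) - math.cos(1.3)) < 1e-12  # E_{2,1}(-x^2) = cos x

# Mean of the inverse subordinator, eq. (expectedvalueexpre), for the
# arbitrarily chosen values beta_1 = 0.4, beta_2 = 0.8, c_1 = c_2 = 0.5, t = 2:
b1, b2, c1, c2, t = 0.4, 0.8, 0.5, 0.5, 2.0
print((t**b2 / c2) * ml(b2 - b1, b2 + 1.0, -(c1 / c2) * t**(b2 - b1)))
\end{verbatim}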
172,845
\begin{document} \title{Butcher series}\thanks{R.I.~McLachlan is supported by the Marsden Fund of the Royal Society of New Zealand. K. Modin is supported by the Swedish Foundation for Strategic Research (ICA12-0052) and EU Horizon 2020 Marie Sklodowska-Curie Individual Fellowship (661482). } \subtitle{A story of rooted trees and numerical methods for evolution equations} \titlerunning{Butcher series} \author{Robert I.\ McLachlan \and Klas Modin \and Hans Munthe-Kaas \and Olivier Verdier } \authorrunning{McLachlan, Modin, Munthe-Kaas, and Verdier} \institute{ R.I.\ McLachlan \at Institute of Fundamental Sciences, Massey University, New Zealand \\ \email{[email protected]} \and K.\ Modin \at Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, Sweden \\ \email{[email protected]} \and H.\ Munthe-Kaas \at Department of Mathematics, University of Bergen, Norway \\ \email{[email protected]} \and O.\ Verdier \at Department of Computing, Mathematics and Physics, Western Norway University of Applied Sciences \\ \email{[email protected]} } \date{\today \\ To appear in \emph{Asia Pacific Mathematics Newsletter}} \maketitle \begin{abstract} Butcher series appear when Runge--Kutta methods for ordinary differential equations are expanded in power series of the step size parameter. Each term in a Butcher series consists of a weighted elementary differential, and the set of all such differentials is isomorphic to the set of rooted trees, as noted by Cayley in the mid 19th century. A century later Butcher discovered that rooted trees can also be used to obtain the order conditions of Runge--Kutta methods, and he found a natural group structure, today known as the Butcher group. It is now known that many numerical methods also can be expanded in Butcher series; these are called B-series methods. A long-standing problem has been to characterize, in terms of qualitative features, all B-series methods. Here we tell the story of Butcher series, stretching from the early work of Cayley, to modern developments and connections to abstract algebra, and finally to the resolution of the characterization problem. This resolution introduces geometric tools and perspectives to an area traditionally explored using analysis and combinatorics. \keywords{Butcher series \and order conditions \and numerical integrators \and ordinary differential equations \and rooted trees \and elementary differentials \and affine equivariance} \subclass{65-03 \and 01-08 \and 65L06} \end{abstract} \section{From Cayley to Butcher} Butcher series are mathematical objects that were introduced by the New Zealand mathematician John Butcher in the 1960s. He introduced them as part of his study of Runge--Kutta methods, a popular class of numerical methods for evolution equations such as initial-value problems for ordinary differential equations, and they remain indispensable in the numerical analysis of differential equations. In this article we provide a brief introduction to Butcher series, survey their early history up to their introduction by John Butcher, and relate the story of the many connections that have recently been discovered between Butcher series and other parts of mathematics, notably algebra and geometry. \footnote{This article is not a comprehensive review and is focussed on our own interests. Useful companions to this article are the detailed mathematical review of Butcher series by Sanz-Serna and Murua \cite{sa-mu} and the textbook treatments of Hairer et al. 
\cite{hlw,ha-no-wa}.} We begin, however, with the traditional definition. Butcher series are intimately associated with the set of smooth (infinitely differentiable) vector fields on vector spaces. Indeed, let $f$ be a smooth vector field on a vector space $V$, defining the ordinary differential equation (ODE) \begin{equation} \label{eq:ode} \dot x = f(x), \end{equation} where $\dot x = \frac{\d x}{\d t}$ denotes the derivative with respect to time~$t$. One way to study (\ref{eq:ode}) is to develop the Taylor series of its solutions. Let $x(h)$ be the solution to (\ref{eq:ode}) at time $t=h$ subject to the initial condition $x(0)=x_0$. The Taylor series of $x(h)$ in $h$ is \begin{equation}\label{eq:taylor_series} x(h) = x(0) + h \dot x(0) + \frac{1}{2}h^2 \ddot x(0) + \dots. \end{equation} We already know that $x(0)=x_0$ and $\dot x(0) = f(x_0)$. The additional terms can be found by repeatedly applying the chain and product rules. For example, $$\ddot x = \frac{\d}{\d t}\dot x = \frac{\d}{\d t} f(x) = f'(x)\dot x = f'(x) f(x),$$ or, relative to a basis in which $x=x^1\mathbf{e}_1 + \ldots+ x^n\mathbf{e}_n$, $$ \ddot x^i = \sum_{j=1}^{n} \frac{\partial f^i}{\partial x^j}(x) f^j(x),$$ where $f(x) = f^1(x)\mathbf{e}_1 + \ldots + f^n(x)\mathbf{e}_n$. Continuing in this way gives \begin{equation}\label{eq:eldiffexp} \begin{split} \dot x &= f(x),\\ \ddot x &= f'(x)f(x),\\ \dddot x &= f'(x)f'(x)f(x) + f''(x)(f(x), f(x)),\\ \ddddot x &= f'(x)f'(x)f'(x)f(x) + f'(x)f''(x)(f(x),f(x)) + \\ & \qquad 3 f''(x)(f'(x)f(x),f(x)) + f'''(x)(f(x),f(x),f(x)),\\ &\;\,\vdots \end{split} \end{equation} Here the $k$th derivative $f^{(k)}(x)$ of the vector field $f$ is regarded as a multilinear map $V^k\to V$. For example, $f''(f,f)$ is the vector field on $V$ whose $i$th coordinate is $$ \sum_{j,k = 1}^n \frac{\partial^2 f^i}{\partial x^j \partial x^k}(x)f^j(x) f^k(x).$$ A vector field of the form appearing in~\eqref{eq:eldiffexp}, combining $f$ and its derivatives, is called an \emph{elementary differential}. Using~\eqref{eq:eldiffexp}, the Taylor series~\eqref{eq:taylor_series} for the solution of (\ref{eq:ode}) can be written as \begin{equation}\label{eq:taylor_exact} x(h) = x_0 + h f + \frac{1}{2}h^2 f'f + \frac{1}{6}h^3 f'f'f + \frac{1}{6}h^3 f''(f,f) + \dots \end{equation} where each elementary differential is evaluated at~$x_0$. Notice that the power of $h$ in each term is determined by the multiplicity of $f$ in the elementary differential. However, the coefficients $1$, $1$, $1/2$, $1/6$, $1/6$, and so on are \emph{not} determined by their corresponding elementary differentials. A \emph{Butcher series}, shortly denoted \emph{B-series}, is a generalization of~\eqref{eq:taylor_exact} allowing arbitrary coefficients, i.e., a formal series of the form \begin{equation} \label{eq:bseries} B(c,f) := c_0 x_0 + c_1 h f + c_2 h^2 f'(f) + c_3 h^3 f'(f'(f)) + c_4 h^3 f''(f,f) + \dots \end{equation} where~$c_i\in\RR$. Although presented here in coordinates, we shall see that Butcher series do not depend on the choice of basis. \section{Early history} Butcher series are named in honour of the New Zealand mathematician John Butcher. In a publication career spanning (so far) 60 years he has written 167 papers and books, all but 18 of them concerned with Runge--Kutta methods and their generalisations. Most of them involve in some way the fundamental structure that bears his name. Butcher series were introduced in a remarkable series of ten sole-authored papers in the years 1963--1972. 
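The expansion~\eqref{eq:eldiffexp} is also easy to check by computer algebra. The following minimal sketch (assuming SymPy is available; the two-dimensional vector field is an arbitrary choice of ours, not taken from the literature) compares the chain-rule computation of $\dddot x$ with the elementary-differential form $f'f'f + f''(f,f)$.
\begin{verbatim}
# Minimal sketch (assuming SymPy; the 2D vector field below is an
# arbitrary choice) checking d^3x/dt^3 = f'(f'(f)) + f''(f,f).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])
f = sp.Matrix([x2, sp.sin(x1) + x1*x2])      # example field

J = f.jacobian(X)                            # f'(x)
g = J * f                                    # x'' = f'(x) f(x)
third_chain = g.jacobian(X) * f              # x''' = (f'f)'(x) x'

def f2(v, w):                                # bilinear map f''(x)(v, w)
    return sp.Matrix([sum(sp.diff(f[i], X[j], X[k]) * v[j] * w[k]
                          for j in range(2) for k in range(2))
                      for i in range(2)])

third_trees = J * (J * f) + f2(f, f)         # f'(f'(f)) + f''(f,f)
print((third_chain - third_trees).applyfunc(sp.simplify))  # zero vector
\end{verbatim}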
A Runge--Kutta method is a numerical approximation $x_{n} \mapsto x_{n+1}$ of the exact flow of~\eqref{eq:ode} defined by the following equations in $x_n$, $x_{n+1}$, $X_1,\dots,X_\nu\in V$: \begin{equation} \label{eq:rk} \begin{aligned} X_i &= x_n + h \sum_{j=1}^\nu a_{ij} f(X_j), \\ x_{n+1} &= x_n + h \sum_{j=1}^\nu b_j f(X_j).\\ \end{aligned} \end{equation} Here $\nu$ is the number of stages of the method and $a_{ij}$, $b_j$ are real numbers parameterising the Runge--Kutta method. Associated with the abstract Runge--Kutta method (\ref{eq:rk}) are its {\em order conditions}, polynomial equations in $a_{ij}$ and $b_j$---one equation per elementary differential---that determine the order of convergence of the method and its local error. Their derivation has been simplified over the years; a modern exposition can be found in Hairer, Lubich and Wanner \cite{hlw}, and a detailed history in Butcher and Wanner \cite{bu-wa}. The first breakthrough paper dates from 1963 \cite{butcher63}. Here Butcher found for the first time the coefficients $c_i$ of the B-series (\ref{eq:bseries}) of $x_{n+1}$ of the Taylor expansion in $h$ of an arbitrary Runge--Kutta method. This gave the order conditions for Runge--Kutta methods in complete generality. As previous studies had laboriously expanded the solutions of particular (e.g. explicit) methods by hand, this was an enormously important development. Butcher did have, however, some precursors. The most notable example is the paper of Merson~\cite{merson} from 1957. Robert Henry `Robin' Merson (1921--1992) was a scientist at the Royal Aircraft Establishment, Farnborough, UK, who was invited along with more senior numerical analysts to a conference on Data Processing and Automatic Computing Machines at Australia's Weapons Research Establishment in Salisbury, South Australia. \footnote{ Flight-related research at Farnborough began with the Army Balloon Factory in 1904, which became the Royal Aircraft Factory in 1912, the Royal Aircraft Establishment in 1918, and then the Royal Aerospace Establishment in 1988. It was merged into the Defence Research Agency in 1991 and then into the Defence Evaluation and Research Agency in 1995. This was split up in 2001, with Farnborough becoming part of the private company Qinetiq. Desmond King-Hele's version of these later developments is recorded at \cite{kh}.} It seems like a long way to go for a conference in 1957. However, the UK was still performing above-ground atomic bomb tests in South Australia at that time and the Australian government was very keen to be a part of the emerging era. Merson's work is bound up with one of the most significant events of 1957, the launch of Sputnik 1 on 4 October 1957, and the tale of Farnborough's involvement is told in detail by one of the key participants, Desmond King-Hele, in his book {\em A Tapestry of Orbits} \cite{heleking}. The short version is that with the aid of a large radio antenna hastily erected in a nearby field, and some calculations of Robin Merson, within two weeks they had an accurate orbit for Sputnik 1. This allowed them to estimate the density of the upper atmosphere and (after Sputnik 2) the shape of the earth. Robin Merson became an expert in practical numerical analysis and orbit determination. Merson's paper explains clearly the structure of the elementary differentials $f'(f)$, $f''(f,f)$, etcetera, and, crucially, shows how they are in one-to-one correspondence with rooted trees. He also introduces various basic operations on rooted trees.
This development, perhaps regarded initially as a bookkeeping device for finding and keeping track of the different terms, has over time become central to the combinatorial and algebraic study of B-series. \begin{figure} \begin{center} \includegraphics[width=8cm]{merson1.pdf}\\[4mm] \includegraphics[width=8cm]{merson2.pdf} \caption{\label{fig:merson} Merson's \cite{merson} 1957 diagram of rooted trees representing elementary differentials, and (bottom) an example of a product of trees, in this case the pre-Lie product explained in Section \ref{sec:alg}. } \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{cayleytrees.pdf} \caption{\label{fig:cayley} Cayley's \cite{cayley} 1857 diagram of rooted trees representing elementary differentials. } \end{center} \end{figure} The rooted trees $\mathcal{T}$ and their associated elementary differentials $\mathcal{F}(\mathcal{T})$ are \setlength{\tabcolsep}{0pt} \noindent \begin{tabular}{rlclclclclclclclclc} $\mathcal{T} =\Big\{\emptyset$&, &$\ab$&, &$\aabb$&, &$\aaabbb$&, &$\aababb$&, &$\aabababb$&, &$\aabaabbb$&, &$\aaababbb$&, &$\aaaabbbb$&, &$\dots \Big\}$, \\[4mm] $\mathcal{F}(\mathcal{T}) = \Big\{ x$&,\; &$ f$&,\; &$ f'(f)$&,\; &$f'(f'(f))$&,\; &$f''(f,f)$&,\; &$f'''(f,f,f)$&,\; &$f''(f,f'(f))$&,\; &$f'(f''(f,f))$&,\; &$f'(f'(f'(f)))$&,\; &$\dots\Big\}.$ \end{tabular} \setlength{\tabcolsep}{2pt} \medskip Merson introduces a method for carrying out the required Taylor series expansions in elementary differentials and gives an example of a 4th order Runge--Kutta method he derived. However, the actual expansions, although greatly simplified by the use of elementary differentials and rooted trees, are still carried out term by term. He did not have the coefficients of all elementary differentials at once, as Butcher achieved. As it happens, the required mathematics and structures had already been discovered a century earlier by Arthur Cayley in 1857 \cite{cayley} (see Fig.~\ref{fig:cayley}). This is the actual discovery of the objects called trees (connected, cycle-free graphs). In popular treatments of graph theory, the development of graph theory is closely linked with recreational mathematics (the bridges of K\"onigsberg) and with chemistry (Cayley's enumeration of alkanes and other families of molecules). One common interpretation of the story is that Cayley introduced the trees as a purely abstract structure and 17 years later---behold the power of mathematics!---found that he could use them to count molecules. However, Cayley actually needed trees for {\em exactly} the purpose we are using them here---to keep track of how vector fields interact when applied repeatedly to one another---and this purpose was then forgotten for a hundred years. As the need for better numerical integration methods arose towards the end of the 19th century, the required tools for a complete theory were indeed already there, but they had been forgotten. As Frank Harary wrote \cite{harary}, {\narrower\narrower\medskip \noindent\em In very many cases and in disciplines in the physical sciences, the social sciences, computer science, and the humanities, graphs frequently occur as a natural, useful, and intuitive mathematical model. The consequence is that those investigators who were not aware of the existence of graph theory as a study in its own right were led to rediscover it in order to apply it. \medskip } \noindent Interestingly enough, Merson does cite Cayley. 
However, from the context, it is not clear that he actually laid eyes on Cayley's paper. He writes, {\narrower\narrower\medskip \noindent\em A formula for the number of trees of a given order was discovered by CAYLEY {\rm [our \cite{cayley}]} and quoted by ROUSE--BALL\dots \medskip } \noindent This was probably the original 1892 edition of Rouse Ball's famous book {\em Mathematical Recreations and Essays}, as later editions included Coxeter as coauthor. This first edition contains just one page on trees, stating Cayley's formulae for the number of trees. Now this same section of Rouse Ball also discusses the famous Knight's Tour problem, an astonishingly long-lived problem dating from an Arabic manuscript of 840 AD. For example, there were three articles on Knight's Tours published in the {\em Mathematical Gazette} in 1956 alone. This problem became a life-long interest of Merson's, who published tours in 1974 and 1999 (posthumously, in {\em Games and Puzzles} magazine, from letters written in 1990--91) that are still in many cases the best known tours. Although Merson stated \cite{jelliss} that he first became interested in the problem in 1972, it is not unlikely that in 1957 he rediscovered trees independently because, like Cayley, he needed them, and from his interest in recreational mathematics remembered Rouse Ball's discussion of Cayley without ever chasing it up. John Butcher, at that time a PhD student in physics at the University of Sydney, was actually present at Robin Merson's talk in 1957, but says \cite{jcbpc} that he did not understand it at all. However, the seed was planted there. To return to Butcher's 1963 paper, he closes with the following statement: {\narrower\narrower\medskip \noindent\em It happens that this situation is capable of extensive generalization and, for example, keeping this same value $\nu=3$ it is possible to satisfy the 37 conditions necessary for a sixth order process. Similarly for any value of $\nu$ a process of order up to $2\nu$ is possible. It is intended that details of such processes will be discussed in a later publication. \medskip } This was an announcement of Butcher's discovery of the family of Gauss Runge--Kutta methods and the first hint of extra structure contained within the Runge--Kutta order conditions. Methods with 3 stages have 12 free parameters ($a_{ij}$ and $b_j$ for $i,j=1,2,3$) and Butcher was extremely excited to discover that there were values of the parameters that satisfied not just the 8 conditions for order 4, and the 17 conditions required for order 5, but even the 37 conditions required for order 6! He recalls running through the empty corridors of the mathematics department at the University of Canterbury, where he was then lecturing, desperately trying to find someone to understand and to share the excitement \cite{jcbpc}. He fulfilled his intention to publish the details in his very next paper \cite{butcher64}. One approach taken by Butcher to approach the structure of the order conditions, suggested by this discovery, was to introduce certain {\em simplifying assumptions}. These became the cornerstone of the construction of the efficient high-order explicit integrators that are used today. However, the source of these simplifying assumptions remained mysterious; only very recently has their algebraic origin been explained \cite{khashin}. This has allowed them to be embedded in systematic families and further reduced the number of stages needed at high order. 
We take this as further evidence that after 50 years Butcher's vision is alive and well. This initial intensely creative and productive period came to a head with the publication of {\em An algebraic theory of integration methods} in 1972 \cite{butcher72}---submitted in 1968---in which John Butcher introduced what is now called the Butcher group. The B-series (\ref{eq:bseries}) with $c_0=1$ correspond formally to diffeomorphisms close to the flow of $f$, and the Butcher group operation arises from a product of rooted trees that corresponds to the composition of these diffeomorphisms. To give an example of the group operation of the Butcher group, consider the B-series $$\alpha := x_0 + h f(x_0).$$ This is associated with the map $x_0 \mapsto x_1 := x_0 + hf(x_0)$ of the forward Euler method. The composition of this map with itself (i.e., two steps of forward Euler) is the map \[ \begin{aligned} x_0& \mapsto x_1 + h f(x_1) \\ &= x_0 + h f(x_0) + h f(x_0 + h f(x_0))\\ &= x_0 + h f + h(f + h f'f + \frac{1}{2!} h^2 f''(f,f) + \frac{1}{3!}h^3 f'''(f,f,f) + \dots)\\ &= x_0 + 2 h f + h^2 f'f + \frac{1}{2!}h^3 f''(f,f) + \frac{1}{3!} h^4 f'''(f,f,f) + \dots. \end{aligned} \] The last line is the B-series of the Butcher product $\alpha\alpha$. The inverse $\alpha^{-1}$ of the B-series $\alpha$ is the series associated with the inverse map $x_1\mapsto x_0$. This map is one step of backward Euler with time step $-h$. Its B-series is \[ \begin{aligned} x_0 - h f & + h^2 f'f - h^3(f'f'f + \frac{1}{2}f''(f,f)) \\ & + h^4(\frac{1}{6}f'''(f,f,f) + f'f'f'f + f''(f,f'f) + \frac{1}{2}f'(f''(f,f))) + \dots. \end{aligned} \] The coefficient of any elementary differential in these series can be found using simple combinatorial operations on trees. This paper \cite{butcher72} aroused an interest that led to a crucial event. In Innsbruck, the 28-year-old dozent Gerhard Wanner was studying John Butcher's early papers and his hard-to-understand preprint \cite{butcher72}. In 1970 the University of Innsbruck was celebrating its 300th anniversary and asked each professor to invite a guest lecturer. Wanner's professor, Wolfgang Gr\"obner, asked Wanner for a suggestion, and so John Butcher was invited. Ernst Hairer, who had been Wanner's best freshman analysis student the year before, attended the lectures. In Wanner's words \cite{wanner}, \emph{ ``In my opinion, at that time, nobody in the world made the necessary efforts to understand Butcher's papers, except Ernst. He then explained them to me, and I tried to put them in a more understandable form,"} and in Butcher's words~\cite{butcherearly}, \emph{``This led to my own contribution being recognised, through their eyes, in a way that might otherwise not have been possible."} In 1974 Hairer and Wanner \cite{ha-wa} introduced both {\em Butcher series} and the term {\em Butcher group}; they also clearly demonstrate the uses of the series for much more than Runge--Kutta methods. In Butcher \cite{butcher72}, the group elements are functions from rooted trees to the reals, such as those functions induced from (traditional and continuous stage) Runge--Kutta methods; in Hairer and Wanner \cite{ha-wa} the primary objects are the B-series (\ref{eq:bseries}) themselves, which obey the group law found by Butcher. These discoveries triggered a period of huge development in numerical methods for evolution equations. The subsequent modern history of the area has been reviewed extensively \cite{bu-wa,hlw,ha-no-wa,sa-mu}.
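Returning briefly to the composition example above, the expansion of two forward Euler steps is easy to confirm with a computer algebra system. The following minimal sketch (assuming SymPy; the scalar field $f(x)=\sin x$ is an arbitrary test case, and in one dimension $f'f = ff'$ and $f''(f,f)=f^2 f''$) checks the expansion up to $O(h^4)$.
\begin{verbatim}
# Minimal sketch (assuming SymPy; scalar field f(x) = sin x chosen
# arbitrarily) checking the expansion of two forward Euler steps.
import sympy as sp

x, h = sp.symbols('x h')
f = sp.sin(x)

euler = x + h * f                      # one forward Euler step
two_steps = euler.subs(x, euler)       # the map composed with itself

predicted = (x + 2*h*f + h**2*f*sp.diff(f, x)
             + sp.Rational(1, 2)*h**3*f**2*sp.diff(f, x, 2))

diff = sp.series(two_steps - predicted, h, 0, 4).removeO()
print(sp.simplify(diff))               # 0: agreement up to O(h^4)
\end{verbatim}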
Here we confine ourselves to some remarks as to the role and significance of Butcher series. \section{How important are Butcher series?} Many areas of inquiry show a tendency to divide adherents into `lumpers' and `splitters'. For example, in taxonomy, lumpers prefer to name few species, splitters many. Lumpers emphasize similarity, splitters emphasize difference. Numerical analysis, like most parts of mathematics, shows a gradual tendency over time towards splitting, as the true differences between instances are appreciated and exploited. Thus structure-preserving methods have been developed for finer and finer divisions of matrices, differential equations and so on, that, by restricting the problem class, are able to offer superior performance. Iserles \cite{iserles} alludes to this when he compares ordinary differential equations to Tolstoy's happy families, that (`perhaps', Iserles cautions) all resemble each other, while each partial differential equation is unhappy in its own way. Indeed, a mighty strength, and also a potential weakness, of Runge--Kutta methods and of B-series is that they treat {\em all} ODEs in a uniform way. They are an extreme example of lumping. One might wonder if they are perhaps {\em too} extreme. Do they over-lump ODEs? In our view they have held up pretty well. The first widely-acknowledged division of ODEs in numerical analysis was into stiff and nonstiff equations. Implicit Runge--Kutta methods turned out to be ideal for stiff equations and explicit ones for nonstiff. With the advent of symplectic integrators for Hamiltonian systems, that preserve a quadratic conservation law on first variations of solutions, Runge--Kutta methods were found to be suitable too. New classes of methods have been introduced that have features that Runge--Kutta methods do not, such as exponential integrators like \begin{equation} \label{eq:ei} x_{n+1} = x_n + \phi(h f'(x_n))h f(x_n),\quad \phi(z) = \frac{e^z-1}{z}, \end{equation} which can beat implicit Runge--Kutta methods on some stiff equations, and the AVF (Average Vector Field) method \begin{equation} \label{eq:avf} x_{n+1} = x_n + h\int_0^1 f(\xi x_{n+1} + (1-\xi) x_n)\, {\rm d}\xi \end{equation} that preserves energy $H(x)$ when $f = J^{-1}\nabla H$ is a Hamiltonian vector field. Both (\ref{eq:ei}) and (\ref{eq:avf}) have expansions in B-series. On the other hand, some methods such as the leapfrog or St\"ormer--Verlet method, widely used in molecular dynamics and in video game engines for systems of the form $\ddot x = -\nabla V(x)$, do not have B-series---indeed they are not even defined for all first order systems $\dot x = f(x)$---and should certainly not be discarded on that account. Our view is {\em lump if you can, but split if you must}. In fact some would say that there is {\em no} practical reason for preferring methods with a B-series and that the whole concept is merely a mathematical abstraction or (perhaps) convenience. However, note that (\ref{eq:bseries}) lumps not only ODEs, but also numerical methods. A very large class of numerical methods for ODEs are represented by (\ref{eq:bseries}). Even before getting to the question of what the possession of a B-series confers on a numerical method, the lumping of numerical methods by B-series presents a fairly rare opportunity in computational science. All too often one analyzes the complexity or behaviour of a {\em particular} algorithm, or perhaps of a small class.
Meaningful lower bounds for complexity or behaviour over {\em all} algorithms are almost never obtained. One should not miss the opportunity given by B-series to better understand an infinite-dimensional set of methods, without regard to particular details of the method. Several times, new numerical methods have been reflected in the discovery of new structure within B-series. For example, if $f=J^{-1}\nabla H$ for some $H$ and $J$, where $J^T=-J$ defines a symplectic structure on the vector space $V$, then $f$ is Hamiltonian and energy preserving and we can ask which B-series have these properties. The trivial B-series $B(f)=c_1 f$ are the only ones which are both Hamiltonian and energy-preserving. At first sight it is surprising that the first nontrivial B-series, $f'f$, is neither Hamiltonian nor energy-preserving. At the next order, $f'f'f$ is energy preserving and $f''(f,f)-2f'f'f$ is Hamiltonian. The spaces of such B-series have been completely described \cite{cmoq}. \section{Algebraic characterizations} \label{sec:alg} The topic of B-series can be approached from many different points of view; topics in numerical analysis, geometry and abstract algebra are connected via B-series. The fundamental algebraic structure of a \emph{pre-Lie algebra} unifies three seemingly very different papers all written in 1963: John Butcher's first paper on Runge--Kutta methods~\cite{butcher63}, Ernest Vinberg's paper on the geometry of symmetric cones~\cite{vinberg} and Murray Gerstenhaber's work on homology and deformations of algebras~\cite{gerstenhaber63}. The differential geometric picture starts with the basic notion of parallel transport of vectors, which is infinitesimally described in terms of a \emph{connection} or \emph{covariant derivation} of vector fields. The connection is a bilinear operation of vector fields $(f,g)\mapsto f\tr g$ (often written as $\nabla_f g$) which describes the rate of change of~$g$ as it is parallel-transported along the flow of~$f$. On the vector space $\RR^n$ parallel transport is the obvious rule, and the corresponding connection is given as \[f\tr g = g'(f) = \sum_{i,j=1}^n \frac{\partial g^i}{\partial x^j} f^j\ \frac{\partial}{\partial x^i} .\] The curvature $R$ and the torsion $T$ are the two basic invariants of a connection. On flat spaces, such as the above defined connection on $\RR^n$, both $R=0$ and $T=0$. It can be shown that in this case the connection satisfies the following \emph{pre-Lie} relation: \[f\tr(g\tr h)- (f\tr g)\tr h =g\tr(f\tr h)- (g\tr f)\tr h.\] An algebra with a product satisfying this relationship is called a \emph{pre-Lie algebra}. So, the set of smooth vector fields on $\RR^n$ with the standard connection is an example of a pre-Lie algebra\footnote{Also called a Vinberg, Koszul--Vinberg, left-symmetric, or Gerstenhaber algebra. The name reflects the fact that the skew product $[x,y] := x\tr y-y\tr x$ defines a Lie bracket. However it should be noted that the pre-Lie relation is not the most general form of a product with this property.}. Another example is the linear combination of rooted trees, where the pre-Lie product is given by grafting: for two trees $\tau_1$ and $\tau_2$ the pre-Lie product $\tau_1\tr\tau_2$ is computed by attaching the root of $\tau_1$ with an edge to each of the nodes of $\tau_2$ and adding all these terms together (see Figure \ref{fig:merson}.) The pre-Lie algebra perspective of B-series was promoted by Calaque, Ebrahimi-Fard, and Manchon~\cite{kurusch_et_al}. 
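The pre-Lie relation displayed above can be verified by direct symbolic differentiation. The following minimal sketch (assuming SymPy; the three vector fields on $\RR^2$ are arbitrary choices of ours) computes both associators for the flat connection $f\tr g = g'(x)f(x)$ and confirms that they agree.
\begin{verbatim}
# Minimal sketch (assuming SymPy; the three fields on R^2 are arbitrary
# choices) checking the pre-Lie relation for the flat connection
# f |> g = g'(x) f(x).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])

def tr(f, g):                         # the connection f |> g
    return g.jacobian(X) * f

f = sp.Matrix([x2, sp.sin(x1)])
g = sp.Matrix([x1*x2, sp.exp(x2)])
h = sp.Matrix([x1**2, x1 + x2])

lhs = tr(f, tr(g, h)) - tr(tr(f, g), h)
rhs = tr(g, tr(f, h)) - tr(tr(g, f), h)
print((lhs - rhs).applyfunc(sp.simplify))   # zero vector
\end{verbatim}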
A fundamental result, which was essentially known already to Cayley in 1857, but which has been revisited in a modern algebraic setting by Chapoton and Livernet in 2001 \cite{ch-li}, is that the space of all trees with the grafting product is the \emph{free pre-Lie algebra}. This means that this structure `knows all there is to know' about basic algebraic properties of pre-Lie algebras, and any algebraic computation which relies only on the pre-Lie relationship can be expressed as a computation on trees. It also means that any example of a concrete pre-Lie algebra can be realised as a quotient of the free pre-Lie algebra with some ideal (that is, as trees with some equivalence relation). This is indeed a useful result for computations. The correspondence between abstract trees and concrete elements in a given pre-Lie algebra (e.g.,\ a vector field on $\RR^n$) is exactly the elementary differential map of Butcher. The elementary differential map $\F(\tau)$, taking trees to vector fields, respects the structure of the pre-Lie product, $\F(\tau_1\tr\tau_2) = \F(\tau_1)\tr\F(\tau_2)$, where the triangle on the left is grafting of trees and on the right is the covariant derivative of vector fields. All the elementary differentials are obtained this way. For example, since $\aababb = \ab\tr (\ab\tr\ab) - (\ab\tr\ab)\tr\ab$, we must have that if $\F(\ab) = f$, then $\F(\aababb) = f\tr(f\tr f)-(f\tr f)\tr f$. Similarly, all the terms of the B-series can be expressed in terms of the pre-Lie product, and hence we can regard a B-series as an infinite expansion in a pre-Lie product. Are there other important examples of pre-Lie algebras where B-series might play a role? There was a great surprise in the late 1990s when Christian Brouder pointed out \cite{brouder} that the so-called Hopf algebra of Alain Connes and Dirk Kreimer \cite{co-kr} had the same algebraic structure that John Butcher had been studying in detail in his 1972 paper. Connes and Kreimer had been interested in renormalisation processes in quantum field theory and discovered a rich algebraic structure of trees. Indeed Arne D\"ur \cite{dur} had already observed in 1986 that Butcher had given rooted trees the structure of a Hopf algebra. Rereading Butcher \cite{butcher72} in light of these more recent developments, it is striking how close his perspective is to the modern Hopf algebraic view. As Brouder commented, \emph{``Butcher found an explicit expression for all the operations of the Hopf structure of the algebra of rooted trees.''} After Brouder's work the Fields medallist Alain Connes wrote \cite{co-kr} \emph{``We regard Butcher's work on the classification of numerical integration methods as an impressive example that concrete problem-oriented work can lead to far-reaching conceptual results.''} Pierre Cartier has also written a very clear exposition of the significance of pre-Lie algebras and the algebraic origin of the Connes--Kreimer approach \cite{cartier} . More recently these algebraic structures appear in other important areas, such as in stochastic processes, where the \emph{Rough Paths Theory} gives a precise meaning to integrating functions along highly irregular paths. This theory originated from the work of Terry Lyons and was celebrated by the Fields medal awarded to Martin Hairer in 2014 for his work on regularity structures. Relations between rough paths and B-series have been developed in the work of Massimo Gubinelli \cite{gubinello}. 
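To make the grafting product concrete computationally, here is a minimal sketch (the encoding of a tree as the sorted tuple of its subtrees is our own convention, not standard notation); it reproduces the computation $\ab\tr(\ab\tr\ab)-(\ab\tr\ab)\tr\ab = \aababb$ and checks the pre-Lie relation on a small sample of trees.
\begin{verbatim}
# Minimal sketch (tree encoding is our own convention: a tree is the
# sorted tuple of its subtrees, the single-node tree is ()) of the
# grafting product and the pre-Lie relation it satisfies.
from collections import Counter

leaf = ()

def graft_tree(t1, t2):
    """Attach the root of t1 by a new edge to each node of t2."""
    out = Counter()
    out[tuple(sorted(t2 + (t1,)))] += 1          # attach to the root
    for i, sub in enumerate(t2):                 # or inside a subtree
        for new_sub, c in graft_tree(t1, sub).items():
            out[tuple(sorted(t2[:i] + (new_sub,) + t2[i+1:]))] += c
    return out

def graft(a, b):
    """Bilinear extension to linear combinations (Counters) of trees."""
    out = Counter()
    for t1, c1 in a.items():
        for t2, c2 in b.items():
            for t, c in graft_tree(t1, t2).items():
                out[t] += c1 * c2 * c
    return out

def minus(p, q):
    r = Counter(p)
    for k, v in q.items():
        r[k] -= v
    return {k: v for k, v in r.items() if v != 0}

f = Counter({leaf: 1})
g = Counter({(leaf,): 1})                        # two-node tree
h = Counter({(leaf, leaf): 1})                   # cherry

# the associator of the one-node tree with itself is the cherry:
print(minus(graft(f, graft(f, f)), graft(graft(f, f), f)))
# the pre-Lie relation: the associator is symmetric in its first two slots
lhs = minus(graft(f, graft(g, h)), graft(graft(f, g), h))
rhs = minus(graft(g, graft(f, h)), graft(graft(g, f), h))
print(lhs == rhs)                                # True
\end{verbatim}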
In a completely different direction, expansions in rooted trees can be used to dramatically simplify and also to sharpen known results in complex dynamics \cite{fauvet} ({\em ``this amounts to a novel approach to formal linearization by means of a powerful and elegant combinatorial machinery''}). Considering B-series as an expansion in a (flat and torsion free) connection, we may ask what are the characterising geometric properties of a B-series? A partial answer comes from the question of which invertible mappings $\phi\colon \RR^n\rightarrow \RR^n$ preserve the connection $\tr$. Let $\phi$ act on vector fields in the `natural' way (i.e.,\ as a differential equation transforms under change of coordinates) $\phi\cdot f := (\phi')\circ f\circ \phi^{-1}$, where $\phi'$ is the Jacobian matrix. Then it can be shown that $\phi\cdot(f\tr g) = (\phi\cdot f)\tr (\phi\cdot g)$ for all vector fields $f$ and $g$ if and only if $\phi(x) = Ax+b$ is an affine map. However, it turns out that this condition is not enough to nail precisely the question of \emph{What is a B-series?}, but we shall see that it brings us a long way towards the answer. Before we explore this issue further in the next section, we remark on other recent geometric developments of the theory. Concerning the group structure of B-series, Bogfjellmo and Schmeding~\cite{bogfjellmo_schmeding} have recently proved that the space of B-series is an infinite-dimensional Lie group with respect to a natural Fréchet topology. Among numerical analysts, B-series have long been treated as Lie groups without a rigorous justification; the result by Bogfjellmo and Schmeding resolves this and unveils interesting possibilities to apply tools from infinite-dimensional geometry to the backward error analysis of ODE methods. The question of characterising geometries by invariance properties goes a long time back to the 19th century work of Felix Klein, who in his \emph{Erlangen program} of 1872 raised fundamental questions about geometries and symmetries. An example is the study of affine geometries as a generalisation of Euclidean spaces. In this geometric context it is interesting to ask if other geometries have algebras describing their connections, such as pre-Lie algebras for affine geometries. Recent developments have shown that this is indeed the case. For Lie groups and homogeneous spaces there are naturally defined connections which give rise to \emph{post-Lie} algebras, and from this we obtain B-series types of expansions valid for flows evolving on manifolds (`Lie--Butcher' series) \cite{mk-lu}. Yet another algebra appears in the context of symmetric spaces such as, for example, spheres and Riemannian spaces with constant curvature. This is an active area of research, where differential geometry, algebraic combinatorics, differential equations, computations and applications go hand-in-hand. \section{Geometric characterizations} Many mathematical objects can be defined in different ways: axiomatically, constructively, or by characterizing their relationship to another, known, object. The original, and still the traditional, approach to Butcher series \cite{hlw} is constructive. It is motivated by the Taylor series of the exact solution. It starts by constructing the rooted trees, most easily done recursively using the operation of adding a root to a forest (set of rooted trees). 
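This recursive construction fits in a few lines of code. The sketch below (with trees encoded as sorted tuples of their subtrees, a convenient but arbitrary convention) enumerates all rooted trees with up to ten nodes and reproduces Cayley's counts $1,1,2,4,9,20,48,115,286,719$ (compare Table~\ref{tab:1} below).
\begin{verbatim}
# Minimal sketch (encoding: a rooted tree is the sorted tuple of its
# subtrees; a forest is a sorted tuple of trees) of the construction
# "add a root to a forest", reproducing Cayley's counts.
import functools

@functools.lru_cache(maxsize=None)
def forests(n):
    """All forests with n nodes in total."""
    if n == 0:
        return ((),)                   # only the empty forest
    found = set()
    for k in range(1, n + 1):          # size of one tree in the forest
        for t in trees(k):
            for rest in forests(n - k):
                found.add(tuple(sorted(rest + (t,))))
    return tuple(sorted(found))

def trees(n):
    """All rooted trees with n nodes: a root added to a forest of n-1
    nodes is encoded simply as that forest."""
    return forests(n - 1)

print([len(trees(n)) for n in range(1, 11)])
# [1, 1, 2, 4, 9, 20, 48, 115, 286, 719]
\end{verbatim}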
Then the elementary differentials are defined and associated to the rooted trees, and finally it is shown that various objects (Runge--Kutta and other integration methods) can be expanded in Butcher series. The algebraic approach of the previous section is axiomatic. However, if we recall the origin of Butcher series in numerical analysis, and note that not all numerical integrators have a Butcher series, it is natural to ask why these particular combinations, $f''(f,f)$ and so on, keep coming up. What is special about them? What geometric property characterises those numerical integrators that have a Butcher series? A crucial clue is provided in the definition of Runge--Kutta methods, (\ref{eq:rk}). Apart from evaluation of $f$, these involve only scalar multiplication and addition---the defining operations of the vector space~$V$. This suggests that Runge--Kutta methods are defined intrinsically on $V$ and do not depend on the choice of basis. Indeed, as already mentioned previously in the context of pre-Lie algebras, slightly more is true: Runge--Kutta methods (and B-series) are {\em affine-equivariant}. Indeed, let, as before, smooth invertible mappings $\phi\colon V\to V$ act on the vector space $V$ and on vector fields on $V$ in the natural way. Then B-series with $c_0=1$, such as the expansions of numerical integrators, obey $$\phi\cdot B(c,f) = B(c,\phi\cdot f)$$ for all invertible affine maps $\phi(x) = A x + b$, $A\in {\mathbb R}^{n\times n}$, $\det A\ne 0$. Could it be the case that any affine-equivariant method has a Butcher series? In other words, does affine-equivariance characterize B-series methods? In \cite{mko}, two of us showed that this is not the case. There are many methods that are affine-equivariant but do not have Butcher series. The simplest example is the first-order method \[ x_1 = x_0 + h f(x_0)(1 + h (\nabla\cdot f)(x_0)). \] Under an affine transformation $x \mapsto \phi(x)=Ax+b$, $f$ transforms to $A f\circ\phi^{-1}$, and the Jacobian $f'$ transforms to $A (f'\circ\phi^{-1}) A^{-1}$. The divergence of $f$, namely ${\rm tr} f'$, transforms to $({\rm tr} f')\circ\phi^{-1}$, and the new term $f\, \nabla\cdot f$ transforms to $A (f\,\nabla\cdot f)\circ\phi^{-1}$---that is, it is affine equivariant. It turns out that any affine-equivariant method can be expanded in terms of more general objects, the {\em aromatic series}. Combinatorically, these are represented by `aromatic trees', forests consisting of one rooted tree and any number of directed graphs with one cycle (self-loops allowed). The name is suggested by aromatic compounds, such as benzene, that contain cycles of atoms. 
An aromatic series begins \begin{equation*} \label{eq:aroma} \begin{aligned} c_0 x & + c_1 h f \\ &+ h^2(c_2 f'f + c_3f \nabla \cdot f ) \\ &+ h^3(c_4 f''(f,f) + c_5 f'f'f + c_6 f(f\cdot\nabla(\nabla\cdot f)) + c_7 f'f\,\nabla\cdot f \\ & \qquad+ c_8 f (\nabla\cdot f)^2 + c_9 f {\rm tr}(f'^2)) \\ & + \dots \end{aligned} \end{equation*} which may be represented as an element in the span of the aromatic trees \begin{equation*} \begin{aligned} &\ATb,\\ & \ATbb, \quad \ATbapab[treeemph],\\ & \aababb ,\quad \begin{tikzpicture}[setree] \placeroots{1} \children[1]{child{node{} child{node{}}}} \end{tikzpicture} ,\quad \begin{tikzpicture}[setree, treeemph] \placeroots{2} \children[1]{child{node{}}} \jointrees{1}{1} \end{tikzpicture} , \quad \begin{tikzpicture}[setree] \placeroots{2}[.3] \children[1]{child{node{}}} \joinlast{2}{2} \end{tikzpicture} , \quad \begin{tikzpicture}[setree] \placeroots{3} \joinlast{2}{2} \joinlast{3}{3} \end{tikzpicture} , \quad \begin{tikzpicture}[setree] \placeroots{3}[.3] \jointrees{2}{3} \end{tikzpicture} , \\ &\ldots \end{aligned} \end{equation*} There are clearly many more aromatic than rooted trees. The aromatic trees of order $n$ are in 1--1 correspondence with functions from $\{2,\dots,n\}$ to $\{1,\dots,n\}$, `forgetting the labels', that is, modulo permutations of $\{2,\dots,n\}$. (Here the element 1 identifies the root.) For example, the aromatic tree \begin{equation*} \begin{tikzpicture} \begin{scope}[etree, scale=1.5] \placeroots{2}[.3] \children[1]{child{node(a){}}} \children[2]{child{node(b){}}} \joinlast{2}{2} \end{scope} \node[left] at (tree1) {$1$}; \node[above right] at (tree2) {$4$}; \node[above left] at (a) {$2$}; \node[above right] at (b) {$3$}; \end{tikzpicture} \end{equation*} is associated with the function $2\mapsto 1$, $3\mapsto 4$, $4\mapsto 4$ and with the (generalized) elementary differential $$ \sum_{i_1,i_2,i_3,i_4=1}^n f^{i_1}_{i_2} f^{i_2} f^{i_3} f^{i_4}_{i_3 i_4}\frac{\partial}{\partial x^{i_1}} = f'(f)\, (f\cdot \nabla(\nabla\cdot f)).$$ The numbers of such `shapes of partially defined functions' is given in sequence A126285 in the Online Encyclopedia of Integer Sequences and tabulated in Table \ref{tab:1}. The number of rooted trees, first evaluated by Cayley, are shown for comparison. The apparently terrifying numbers of rooted trees were tamed by Butcher. What will happen to the even more plentiful aromatic trees? \begin{table} \begin{center} \setlength{\tabcolsep}{4pt} \begin{tabular}{cp{4ex}p{4ex}p{4ex}p{4ex}p{4ex}p{4ex}p{4ex}p{4ex}p{4ex}p{4ex}} \toprule $n$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\ \midrule \# rooted trees & 1& 1& 2& 4& 9& 20& 48& 115& 286& 719 \\ \# aromatic trees & 1& 2& 6& 16& 45& 121& 338& 929& 2598& 7261\\ \bottomrule \end{tabular} \caption{ \label{tab:1} Enumeration of rooted and aromatic trees with up to 10 nodes.} \end{center} \end{table} The existence of the aromatic series shows that affine-equivariance of a method is not enough to ensure that it can be expanded in a B-series. What else is needed? The second big clue is that Runge--Kutta methods are defined without reference to the dimension of the underlying vector space. It does not seem to play any role at all. Clearly, at a minimum, the expansion of the method in each dimension must have the same coefficients. But what rules out the aromatic terms like $f\,\nabla\cdot f$? The answer is that these terms do not respect {\em affine-relatedness}. 
Consider two vector spaces $V$ and $W$ of possibly different dimension, together with an affine map $\phi\colon V\to W$, $x\mapsto A x + b$. The vector fields $f$ on $V$ and $g$ on $W$ are said to be $\phi$-related if $g(A x + b) = A f(x)$ for all $x\in V$. B-series preserve affine-relatedness in the sense that for any affine $\phi$, if $f$ and $g$ are $\phi$-related then $B(c,f)$ is $\phi$-related to $B(c,g)$. In \cite{mmmv} we prove that this property characterizes B-series: {\em a numerical method has a Butcher series if and only if it preserves affine-relatedness.} Preserving affine-relatedness has a fairly direct physical interpretation. It means that the method is immune to changes of scale, such as changes of units. It means that the method preserves invariant affine subspaces {\em automatically}, whenever the system has any such. It means that the method preserves affine symmetries, again automatically; the method does not even have to `know' (or be told) that the system has the symmetries. It means that the method leaves decoupled systems decoupled, again automatically. All these properties are desirable when designing general-purpose ODE software. Furthermore, we now see that many of the more subtle properties of B-series, originally discovered through combinatorial analysis of trees, must in fact be a direct consequence of affine-relatedness. Examples include special properties with respect to symplecticity, preservation of quadratic invariants, and preservation of energy \cite{chartier} and non-preservation of volume \cite{is-qu-ts}. The proof of the theorem on affine equivariance \cite{mko} relies on some classical results in functional analysis and invariant theory. First it is established that the Taylor series in $f$ of an arbitrary map depends only on the derivatives of $f$, and that the terms of order $n$ are in fact a polynomial of degree $n$ in $f$ and its partial derivatives. Second, the invariant polynomials that are functions of $f$ and its partial derivatives, whose values at $x_0$ are regarded now as arbitrary symmetric tensors, are sought using the `invariant tensor theorem'. The conclusion at 2nd order is that only $f^i f^j_i$ and $f^i f^{j}_j$ are equivariant, these giving the two aromatic trees of order 2. At 3rd order, to the tensor $f^i f^j f^k$ the partial derivatives $j$ and $k$ can be attached to any two of the factors, leading to the 6 aromatic trees of order 3. The proof of the theorem on affine relatedness, characterizing B-series \cite{mmmv}, begins with an arbitrary affine-related method. Since, in particular, it is affine-equivariant, it has an aromatic series. Each aromatic tree containing loops is to be knocked out. For each such tree, a special pair of affine-related vector fields is constructed such that affine-relatedness of the method means that the coefficient of this tree must be zero. For example, for the tree $\ATbapab$, associated with $f\,\nabla\cdot f$, the vector fields are $f^{(1)}\colon\dot x^1 = 1$, $\dot x^2 = x^2$ and $f^{(2)}\colon\dot x^1=1$. These vector fields are related by the affine map $(x^1,x^2)\mapsto x^1$. Since $f^{(1)}\,\nabla\cdot f^{(1)}=1$ and $f^{(2)}\,\nabla\cdot f^{(2)}=0$, this term cannot appear in the expansion of a method that preserves affine-relatedness. To summarize, Butcher series are objects intrinsically associated to the set of vector fields on affine spaces of {\em all} dimensions, and will show up naturally in any analysis that respects the affine structure and does not depend on the dimension. 
This explains their ubiquity. It is fascinating that natural and practical demands of numerical methods for ODE---black-box solvers defined uniformly on all affine spaces---has led to the discovery of a fundamental invariant object. On the other hand, where does this leave the aromatic series? We suggest that they will show up naturally in problems posed in a specific dimension. Although traces and divergences are common in physics, we have not seen aromatic series before. They arose purely from a question in numerical analysis, but are fundamental in their own way. Moreover, they can have properties that no B-series can have. For example, many aromatic series, but no B-series, are divergence free. \medskip \noindent{\bf Acknowledgements} We thank John Butcher, Ernst Hairer, and Gerhard Wanner for their comments.
51,602
From June onwards, AT&T has raised its early termination fees (ETFs) to $325 in what may well be a move to keep iPhone users locked in. The ETF is the fee a user pays when terminating a service contract. The company is raising the fee from $175 to $325, effective from the start of June. Incidentally, this roughly coincided with the expected release date of Apple's unlocked iPhone 4G/HD. The increased amount applies to new contracts only, while the ETF on feature handsets drops to $150. Renewing a contract for a new iPhone will, however, bring you under the new ETF. Naturally the policy change is a hassle, but the logic behind AT&T's decision is not hard to fathom. Apple's unlocked iPhone 4G/HD will in all probability have an entry-level price of $199, although AT&T has to shell out around $500 to $600 for a single handset. Factoring that in, along with the number of people who are going to buy the device, the subsidies add up to quite a tidy sum. AT&T should certainly be able to recoup that money over time through users' mobile data services, but it is still enough of a prod to rethink strategy. The problem is compounded for AT&T because many iPhone 3GS owners upgrade to the latest version every year, expecting the introductory pricing. The move comes close on the heels of Verizon raising its ETF to $350 on "advanced devices", citing similar reasons. The increased ETF may be another impediment for current AT&T customers looking to switch, especially after rumours that Apple might go with Big Red, endangering AT&T's exclusivity with the iPhone.
371,693
TITLE: What is a $\langle , \rangle_{H}$? And how do you prove that it is an inner product? QUESTION [1 upvotes]: We were given this problem: For the inner product space $V$ and its subspace $H$, let $Q = \{E(v)\,|\,v\in V\}$ be the vector space. Define addition in $Q$ by: for $v, w\in V, E(v)\oplus E(w) = E(v + w)$. Define scalar multiplication in Q by: for $v \in V$ and $\rho\in \mathbb{R}, \rho E(v) = E(\rho v)$. For $v\in V$, define $E(v) = \{v+h\,|\,h\in H\}$. Define for $u,v\in V$, and $u = u_1 + u_2, v = v_1 + v_2$ where $u_1, v_1 \in H$ and $u_2, v_2\in H^{\bot}$, \begin{align*} \langle E(u), E(v)\rangle_H = \langle u_2, v_2\rangle \end{align*} Prove or disprove: $\langle \, ,\,\rangle_H$ is an inner product on $Q$. What is $\langle \,,\,\rangle_H$? Anyway, here are my initial thoughts in solving the problem: To prove $\langle , \rangle_H$ is an inner product on $Q$, we have to prove the following: For all $E(a), E(b), E(y)\in V$, $\langle E(a), E(b) + E(y)\rangle_H = \langle E(a), E(b)\rangle + \langle E(a), E(y)\rangle_H$. \begin{align*} \langle E(a), E(b) + E(y)\rangle_H = E(a)_H \cdot E(b + y)_H &= \begin{pmatrix} a_1\\ a_2\\ \vdots\\ a_n \end{pmatrix}\cdot \begin{pmatrix} b_1 + y_1\\ b_2 + y_2\\ \vdots\\ b_n + y_n \end{pmatrix}\\ &= \sum_{i=1}^{n} a_i(b_i+y_i)\\ &= \sum_{i=1}^{n} a_ib_i + \sum_{i=1}^{n} a_iy_i\\ &= \begin{pmatrix} a_1\\ a_2\\ \vdots\\ a_n \end{pmatrix} \cdot \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{pmatrix} + \begin{pmatrix} a_1\\ a_2\\ \vdots\\ a_n \end{pmatrix} \cdot \begin{pmatrix} y_1\\ y_2\\ \vdots\\ y_n \end{pmatrix}\\ &= E(a)_H\cdot E(b)_H + E(a)_H \cdot E(y)_H\\ &= \langle E(a), E(b)\rangle + \langle E(a), E(y)\rangle_H \end{align*} For all vectors $a, b\in V$, $\langle a, b\rangle = \langle b, a\rangle$. \begin{align*} \langle a, b\rangle = E(a)_H\cdot E(b)_H &= \begin{pmatrix} a_1\\ a_2\\ \vdots\\ a_n \end{pmatrix}\cdot\begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{pmatrix}\\ &= \sum_{i=1}^{n} a_ib_i\\ &= \sum_{i=1}^{n} b_ia_i\\ &= \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{pmatrix}\cdot \begin{pmatrix} a_1\\ a_2\\ \vdots\\ a_n \end{pmatrix}\\ &= E(b)_H\cdot E(a)_H\\ &= \langle b, a\rangle \end{align*} For all vectors $a, b$ and for all real numbers $r \in \mathbb{R}, \langle ra,b\rangle = r\langle a,b\rangle $. \begin{align*} \langle ra, b\rangle = E(ra)_H\cdot E(b)_H &= \begin{pmatrix} ra_1\\ ra_2\\ \vdots\\ ra_n \end{pmatrix}\cdot \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{pmatrix}\\ &= \sum_{i=1}^{n} (ra_i)b_i\\ &= r\sum_{i=1}^{n} a_ib_i\\ &= r\begin{pmatrix} a_1\\ a_2\\ \vdots\\ a_n \end{pmatrix}\cdot \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{pmatrix}\\ &= r(E(a)_H\cdot E(b)_H)\\ &= r\langle a,b\rangle \end{align*} For all vectors $a\in V, \langle a,a \rangle \geq 0$ and $\langle a,a \rangle = 0$ iff $a = 0$. \begin{align*} \langle a, a\rangle = E(a)_H \cdot E(a)_H &= \begin{pmatrix} a_1\\ a_2\\ \vdots\\ a_n \end{pmatrix}\cdot \begin{pmatrix} a_1\\ a_2\\ \vdots\\ a_n \end{pmatrix}\\ &= \sum_{i=1}^{n} {a_i}^2 \geq 0 \end{align*} But then again, I have no idea what $\langle\,,\,\rangle_H$ is. REPLY [3 votes]: To begin with, $E(v)$ is a subset of $V$. Let us call such a set an $E$-set. Next, $\langle \cdot, \cdot \rangle_H$ is a function that takes two $E$-sets and returns a number; given $E(u)$ and $E(v)$ it returns $\langle u_2, v_2 \rangle,$ where $u_2,v_2$ are the $H^\bot$ components of $u,v$ respectively. 
Then, if you can show that $\langle \cdot, \cdot \rangle_H$ satisfies the criteria for being an inner product, then you have an inner product on the set of $E$-sets. You can then say things like "this $E$-set is orthogonal to this other $E$-set." To prove $\langle , \rangle_H$ is an inner product on $Q$, we have to prove the following: For all vectors $a, b,$ and $y\in V$, $\langle a, (b + y)\rangle = \langle a,b\rangle + \langle a, y\rangle$. Not for all $a,b,y\in V$ but for all $E$-sets $a,b,y,$ or $$ \langle E(u), E(v)\oplus E(w) \rangle_H = \langle E(u), E(v) \rangle_H + \langle E(u), E(w) \rangle_H $$ for $u,v,w\in V.$ Proof $$ \langle E(u), E(v) \oplus E(w) \rangle_H = \langle E(u), E(v+w) \rangle_H = \langle u_2, (v+w)_2 \rangle = \langle u_2, v_2+w_2 \rangle \\ = \langle u_2, v_2 \rangle + \langle u_2, w_2 \rangle = \langle E(u), E(v) \rangle_H + \langle E(u), E(w) \rangle_H. $$ Note that you need to show first that $(v+w)_2=v_2+w_2.$ Everything else follows from the definition of $\langle \cdot, \cdot \rangle_H$ and from $\langle \cdot, \cdot \rangle$ being an inner product on $V.$
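For intuition, here is a small numerical sketch (the example is mine, not from the problem: $V=\mathbb R^3$ with the dot product and $H$ spanned by a single vector $h_0$), computing $\langle E(u),E(v)\rangle_H$ by orthogonally projecting representatives onto $H^\bot$. It illustrates that the value does not depend on the chosen representatives, and that $\langle E(u),E(u)\rangle_H=0$ exactly when $u\in H$, i.e. when $E(u)$ is the zero element of $Q$.

    import numpy as np

    h0 = np.array([1.0, 2.0, -1.0])                # H = span{h0}
    P = np.eye(3) - np.outer(h0, h0) / (h0 @ h0)   # projector onto H-perp

    def ip_H(u, v):                                # <E(u), E(v)>_H = <u_2, v_2>
        return float((P @ u) @ (P @ v))

    rng = np.random.default_rng(1)
    u, v = rng.normal(size=3), rng.normal(size=3)

    # well defined: changing representatives by elements of H changes nothing
    print(np.isclose(ip_H(u + 3.7*h0, v - 1.2*h0), ip_H(u, v)))   # True
    # <E(h0), E(h0)>_H = 0 although h0 != 0, because E(h0) is the zero coset
    print(np.isclose(ip_H(h0, h0), 0.0))                          # True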
208,536
December Wallpaper 1920×1200, Published by adam. This wallpaper was added on Thursday, June 27, 2013 - 09:49 pm, and 33 users have viewed and downloaded it. Approximately 12.11 MB of bandwidth was consumed. All you need to do is help us grow by sharing this December Wallpaper 1920×1200 wallpaper if you like it. Please use one of the links below to download the high resolution of December Wallpaper 1920×1200, for free! Use this wallpaper on your iPad, tablet, smartphone or desktop. Setting a wallpaper as a background on your desktop is simple. Once you have clicked 'Full Size Edit & Download' you will be taken to the full-sized December Wallpaper 1920×1200; please right-click that wallpaper image and choose "Save Image As" or, if the option is available, "Set as background".
407,206
TITLE: Properties of a matrix that shares the set of real eigenvalues with its inverse QUESTION [0 upvotes]: For a $3\times 3$ real matrix, let $c(A)$ denote the set of real eigenvalues of $A$. Suppose $c(B)=c(B^{-1})$ for a non-singular matrix $B$ with no repeated eigenvalues. Then which of the following are true? A) $1$ or $-1$ must be an eigenvalue of $B$. B) $B^2$ must be $I$. C) $1$ and $-1$ are the only possible real eigenvalues of $B$. D) $-1$ must be an eigenvalue. Let $t$ be an eigenvalue such that $t=\frac{1}{t}$ $\to$ $t^2=1$ $\to t=1$ or $-1$. Hence $1$ or $-1$ may be an eigenvalue of $B$, and hence C) seems to be true. Are A) and D) necessarily true? Also $B^2$ and $I$ have eigenvalue $1$. I think, from this, we can't say $B^2$ must be $I$. REPLY [2 votes]: Note that as $3$ is odd, there must be at least one real eigenvalue. If there exists only $1$ real eigenvalue, then that eigenvalue must be $\pm 1$, because its inverse is also a real eigenvalue of $B$ (by the hypothesis $c(B)=c(B^{-1})$) and the two must coincide. Otherwise there exist $3$ distinct real eigenvalues, call them $x,y,z$. The set $\{x,y,z\}$ is closed under taking reciprocals, and at most two distinct values (namely $\pm 1$) can equal their own reciprocals, so WLOG $x=1/y$ with $x\neq\pm 1$, and the remaining eigenvalue satisfies $z=1/z$, i.e. $z=\pm 1$ (for example, $4,1/4,1$ are admissible eigenvalues of an invertible matrix satisfying all these properties). Suppose there is only $1$ real eigenvalue. Thus the eigenvalues of $B$ are $\pm 1, a, \bar{a}$ where $a\in\mathbb C$ is non-real. Let the corresponding eigenvectors be $v_1,v_2,v_3$ respectively. They are linearly independent, so they form a basis of $\mathbb C^3$. Consider $B^2 (v_1)=B(Bv_1)=B(\pm v_1)=v_1$. Consider $B^2(v_2)=B(Bv_2)=B(av_2)=a^2v_2$, and this will equal $v_2$ iff $a^2=1$. But $a$ is non-real, so $a^2=1$ is impossible (it would force $a=\pm 1$). Hence B cannot hold. So A and C are true and B, D are false in this case (D is false because the single real eigenvalue could be $+1$ rather than $-1$). Now suppose there are $3$ real eigenvalues. Let them be $x,y,z$ as above, with corresponding eigenvectors $v_1,v_2,v_3$, where $x=1/y$, $x\neq\pm 1$ and $z=\pm 1$. Then $B^2(v_1)=x^2v_1\neq v_1$ (since $x^2\neq 1$), so $B^2\neq I$ in this case. So in this case A is true, B is false, C is false and D is false.
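For a concrete sanity check (the matrix is my own example, not part of the problem), $B=\mathrm{diag}(4,1/4,1)$ satisfies the hypothesis and directly rules out B and D while being consistent with A:

    import numpy as np

    B = np.diag([4.0, 0.25, 1.0])
    eig_B = np.sort(np.linalg.eigvals(B))
    eig_Binv = np.sort(np.linalg.eigvals(np.linalg.inv(B)))

    print(np.allclose(eig_B, eig_Binv))    # True:  c(B) = c(B^{-1})
    print(np.allclose(B @ B, np.eye(3)))   # False: B^2 is not I, so B fails
    print(np.isclose(eig_B, -1).any())     # False: -1 need not occur, so D fails
    print(np.isclose(eig_B, 1).any())      # True:  1 is an eigenvalue, as A predicts

It also shows that C fails in the three-real-eigenvalue case, since $4$ and $1/4$ are real eigenvalues different from $\pm 1$.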
40,039
2 – The Netherlands have conceded the opening goal in both of their matches at the 2014 World Cup, coming back to win on both occasions. 2 – Chile have been eliminated by Brazil at the round of 16 in the two previous World Cups they have qualified for (1998 & 2010). Failure to beat Netherlands would likely see Chile face Brazil in the round of 16 again. 2 – The Netherlands have only lost two of their 11 World Cup matches versus South American opponents – the 1978 final against Argentina and a 1994 quarter-final against Brazil. 3 – Chile have never won three successive games at a World Cup. They head into this match with two wins in a row. 8 – Arjen Robben has scored 8 goals in his last 8 games for the Netherlands. 9 – Victory for Netherlands would give them 9 points for the second successive World Cup. 12 – The Netherlands are undefeated in their last 12 World Cup group matches since losing 1-0 to Belgium at USA ’94. 67% – Chile have won 4 of their last 6 matches at the World Cup after failing to win any of their previous 13 prior to this run. 67% – The Netherlands have won 14 of their 21 World Cup matches outside of Europe, compared to only 42% of World Cup matches in Europe. 89% – The Netherlands have won 8 of their last 9 World Cup matches.
320,514
TITLE: Let $X$ be a random variable, prove that for any real number $a$, $\mathrm{Var}(X) \le E[(X-a)^2]$ QUESTION [1 upvotes]: I have the following question: Let $X$ be a random variable, prove that for any real number $a$, $$\mathrm{Var}(X) \le E[(X-a)^2].$$ The question gives a hint by saying: Write $\mathrm{Var}(X) = E[(X-a-(E[X]-a))^2]$. So I thought about just expanding the LHS: \begin{align*} E[(X-a)^2] &= E[X^2 - 2aX - a^2] \\ &=E[X^2] - 2aE[X] - a^2 \end{align*} Since $a$ is a constant $E[a^2]= a^2$. I did not see anything obvious so I did the same thing to the RHS: \begin{align*} \mathrm{Var}(X) &= E[(X - a - (E[X] - a))^2] \\ &= E[X] - E[a] -E[(E[X])^2 - aE[X] +a^2] \\ &= E[X] - a - (E[X])^2 - aE[X]+a^2 \end{align*} But now I am stuck because I feel like I did a ton of algebra, and don't know how to proceed. Any help to get unstuck (or if I misinterpreted something) would be greatly appreciated. REPLY [2 votes]: Similar to Antoine's answer. We know $\sigma^2 = E[(X-\mu)^2]$ where $\mu = E[X]$. Then (typical trick) add and subtract $\mu$ inside the desired expression: $$\begin{align} E[(X-a)^2]&=E[(X-\mu + \mu -a)^2]\\ &= E[ (X-\mu)^2 + 2(\mu-a)(X-\mu)+(\mu-a)^2]\\ &= \sigma^2 + 2(\mu-a)E[X-\mu]+(\mu-a)^2\\ &= \sigma^2 + (\mu-a)^2 \end{align}$$ which implies $E[(X-a)^2] \ge \sigma^2$, with equality iff $a=\mu$.
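As a quick numerical illustration (the exponential distribution and the grid of $a$ values are arbitrary choices of mine), the function $a\mapsto E[(X-a)^2]$ is minimized at $a=E[X]$, where it equals $\mathrm{Var}(X)$:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.exponential(scale=2.0, size=200_000)    # any distribution works

    a_grid = np.linspace(-1.0, 5.0, 61)
    mse = [np.mean((X - a)**2) for a in a_grid]

    print(np.var(X), min(mse))                      # nearly equal
    print(X.mean(), a_grid[np.argmin(mse)])         # minimizer is close to E[X]
    print(all(m >= np.var(X) - 1e-9 for m in mse))  # E[(X-a)^2] >= Var(X)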
199,248
Thermostats are simple enough to install, and in most cases it's only a matter of matching your existing thermostat cable terminations to the new thermostat base. You can even find thermostats that offer a choice of 5-2 or 5-1-1, programmable or non-programmable schedules, so if you change your mind about the sort of program your lifestyle requires, you don't have to rush out and buy a new thermostat. While just about anyone can install a thermostat, we've learned there are several "HVAC technicians" out there who claim to know what they're doing but in fact do not, and who end up costing you effort and money down the road. For example, you may want your thermostat to start cooling right away when you get home in the summer, but not to waste energy while you're away. This kind of thermostat gives you complete freedom to regulate the house temperature from literally anywhere in the world with an internet connection. It is easy to install and even easier to use. Manual digital thermostats have to be adjusted every time you want your preferred temperature setting to change. The thermostat offers universal compatibility and can be paired with various HVAC systems. In addition, to prevent things such as unnecessary AC service calls, a Wi-Fi thermostat can provide useful reminders about when to change the air conditioning filter and similar items. All thermostats do the job of keeping the temperature within the house by controlling other equipment. Thanks to smart technology, the latest thermostats solve the problem in an entirely new way. They are built to be self-learning and can monitor the day-to-day cooling patterns in the house. Programmable digital thermostats can be programmed for different occasions and settings, starting on their own for automatic temperature control. If the smart thermostat you purchase breaks down for any reason, you can be sure it comes with a good manufacturer's warranty, meaning you will be able to get the parts you need replaced conveniently. Your car alerting your home thermostat when you're near home, your phone letting you look inside your fridge in case you need something: the options are limitless. Also, if you are at home and wake up feeling a bit cold, it is easy to turn up the heat using the phone app without having to get out of bed. Home can be an ideal place to be when it has the atmosphere you want. Maintaining your house at the ideal temperature used to be tricky. The way you want your home to be, the temperature and humidity or the cooling and heating of the space, everything can be managed with a modern thermostat. The Internet of Things may have a larger impact on society than I believe many people realize. The Internet of Things is really just things connected to the web. Online providers could use a type of cryptographic signature to verify the validity of the mark. No section of the enterprise remains untouched. Making the jump to becoming a connected-products company isn't an easy undertaking. Economically, it allows a great many businesses to expand their product market and offer new methods of delivering their technologies. Over the past handful of years, tech businesses have put significant resources into developing digital assistants.
Honeywell has particularly strong expertise when it comes to thermostats. The point is, don't expect your business model to remain unaffected. The notion of a smart home has been around for many years, and for all the buzz that has been generated around it, it is no longer such a wild theory. "People have been murdered for changing their Facebook status," says Perry. Voice control is also becoming more popular. A home thermostat now offers not only precise temperature control but also separate heating and cooling zones for different parts of the house. The device is the product you want to connect to the internet. Instead of spending a lot of time worrying about smart toothbrushes, it is more important to concentrate on the connected devices that are likely to see mass adoption by buyers over the next ten years. On the face of it, something that lets you monitor your exercise, track your workouts, and record biometrics such as your pulse and breathing looks innocuous enough. Touch controls make for extremely intuitive gadgets, and they are also much more fun to use.
410,760
To a later period belongs a columbarium cut in the rock, with niches for urns. Colmar (probably the columbarium of the Romans) is first mentioned, as a royal villa, in a charter of Louis the Pious in 823, and it was here that Charles the Fat held a diet in 884. On the road to Assiut is a fine Roman columbarium or dove-cote.
282,950
The Big Spring Steers varsity basketball team won their first home game of the 2015-16 season Tuesday night, beating the Rangers of Midland Greenwood 51-41. For the full recap grab a copy of the Herald! The Lady Steers varsity basketball team lost to the Rangerettes of Midland Greenwood High School 66-39 at Steer Gym. For a full recap grab a copy of the Herald! The Big Spring Steers lost an overtime thriller 58-57 against Riverside of El Paso, while an undermanned Lady Steers squad lost to Lamesa 45-32. For a full recap of the games and the tournament grab a copy of Monday's Herald! The Big Spring boys and girls varsity basketball teams lost their opening-round games of the Gym Bice Classic. For a full recap of both games grab a copy of Friday's edition of the Herald! The Big Spring girls basketball team won two games Friday to advance to the winners bracket of the Monahans tournament. For a full recap grab a copy of the Herald! The Big Spring boys basketball team earned their first victory of the season, defeating Fort Stockton 49-35 Friday afternoon at the O.W. Follis tournament. For a full recap of the Steers' two games grab a copy of the Herald! The Big Spring boys basketball team lost to Lubbock High Thursday in tournament play. For a recap of the game and the rest of the tournament action grab a copy of the Herald! The Big Spring girls basketball team split their pool-play games at the Monahans Tournament on Thursday. For a full recap grab a copy of the Herald! The Big Spring Junior High teams struggled in their first games back from Thanksgiving break against Levelland. For recaps of all the games grab a copy of the Herald!
315,318
Posted: Sep 10, 2012 9:22 PM by Trey Schmaltz
PORT VINCENT - Tuesday morning, drivers will notice a big change at the intersection of Highways 42 and 431: the Port Vincent Roundabout will open. "While additional work remains, drivers will use the roundabout movement for the first time to travel to and from Port Vincent on La. 42 and La. 431. In the coming months, there will be additional lane closures as the final layers of asphalt are applied and lighting is installed," a DOTD spokesperson said in a news release Monday. "Roundabouts are an innovative and cost-effective tool that offers a smoother flow of traffic with less stop-and-go than a signalized intersection. They increase safety with lower speeds and typically less severe crashes. Statistics show that roundabouts can reduce fatalities by up to 90 percent and injury crashes by up to 75 percent." The project cost $1.24 million and has been under construction since February.
385,250
Financial Situation: Critical
Current Work: Amazon Fulfillment
Job Ends: December 23rd
Next Job: None Scheduled
Well, what better time to begin my new series than tonight. In the middle of my "Jewel of Coffeyville" post, I spent nearly an hour trying to reconnect, and that was after making a special trip to the library to spend hours catching up on the internet. I'm now at McDonald's with not much better luck. I spent the day researching climates and lodging at nearby potential work destinations: Tulsa, Kansas City, and Omaha. Campgrounds are pricey in Tulsa and KC, while Omaha is not even a possibility with the harsh temperatures in the winter months. However, Nebraska is still in the running if the job I'm looking at is as profitable as it seems; I'd have to park 48 Ugly and stay at weekly/monthly hotels until I can pick her up again. It's a gloomy prospect but one with potential: a good-paying job with schedule flexibility and a poker room across the river. About poker. I have to play soon. It could get me out of this bind. It certainly did when I began this Journey. (I'd already be in ruins if it weren't for the $7,000 I made in the first six weeks of this trip two years ago. I would never have been able to stretch my dollar long enough to even work the World Series in Vegas! That was in Redding, California, where I turned my initial $300 investment into $7,000.) The trick here is that Amazon is barely sustaining my meager savings and I can ill afford any losses at the moment. Perhaps now that overtime is starting to kick in, I can justify my risk. If I can risk $150 this month, it would be worth it if I can turn it into $1,000. The other challenge is costs. I'm nearly 40 miles from low buy-in, low-return games, and 60-80 miles from places with some real earning potential. It may be a situation where, if I have a decent day in Bartlesville, I can justify the gas to someplace like Tulsa. If there is good news right now, it's this: an old client is in touch about a conference I used to staff every year, and Kitchen Craft is on board with my idea of doing presentations at campgrounds rather than trade shows for a while. Both are good, but both come with a bit of additional stress, in that I'm behind on preparation and have no money to travel to these destinations. I'll deal with it as it comes... The conference in Dallas isn't until late April or May. With Kitchen Craft I may be able to take weekend trips to the campgrounds from wherever I'm working, a safe way to test the waters. Ok, enough for now...
It sounds like you have some options. Hope something will work out for you, and that the finances will improve. Don't give up, it will come together eventually. Take care.
212,542
Victory! a.k.a. the novel is finished
11 January, 2011
Just a short post today, but one to announce that the first draft of my latest work-in-progress is DONE. 90,602 words, most of them probably in the wrong order, but done all the same. This novel has been a bit of an experiment for me, because I didn't do much planning before I started it (I'll do a post on that process in the next couple of days). The first draft is a huge, horrible mess, but there are things about it that I love more than I ever thought I would, so maybe I'll edit it after all. For now though, I still have that other, earlier editing project looming, so it'll be back to that by the end of the week. Who needs rest, eh?
111,986
TITLE: Difference between Monte Carlo and Quantum Monte Carlo methods? QUESTION [9 upvotes]: What are the differences between classical Monte Carlo methods and quantum Monte Carlo methods in condensed matter physics? If one wants to study strongly correlated systems with a quantum Monte Carlo method, does he/she need to study classical Monte Carlo methods first (just as one should study classical mechanics before studying quantum mechanics)? REPLY [2 votes]: There are some concepts that can be confusing to newcomers, even though the concepts themselves are quite simple:
Classical Monte Carlo method: simulating a classical statistical model with a Monte Carlo method on a classical computer, i.e., the PC we use daily;
Quantum Monte Carlo method: simulating a quantum statistical model with a Monte Carlo method on a classical computer;
Quantum simulation: studying a quantum system by setting up a real quantum system, usually with cold atoms.
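To make the first of these points concrete, here is a minimal classical Monte Carlo sketch (not part of the original answer): single-spin-flip Metropolis sampling of the 2D classical Ising model in Python, with the lattice size, temperature, and sweep count chosen purely for illustration.

    # Illustrative sketch only: classical Monte Carlo (Metropolis) for a 2D Ising model.
    # Lattice size L, inverse temperature beta, and sweep count are arbitrary choices.
    import numpy as np

    rng = np.random.default_rng(0)

    def metropolis_sweep(spins, beta):
        """One sweep of single-spin-flip Metropolis updates (periodic boundaries)."""
        L = spins.shape[0]
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # Sum of the four nearest-neighbour spins.
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nn              # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1                    # accept the proposed flip
        return spins

    L, beta = 16, 0.5
    spins = rng.choice(np.array([-1, 1]), size=(L, L))
    for _ in range(1000):                            # equilibrate, then one could measure
        metropolis_sweep(spins, beta)
    print("magnetization per spin:", spins.mean())

A quantum Monte Carlo calculation would instead sample configurations of a quantum statistical model (for example, in a path-integral or world-line representation), but the program structure, propose-and-accept updates driven by a random number generator on an ordinary computer, is the same.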
22,965
\begin{document} \maketitle \begin{abstract} We present a method of constructing symmetric monoidal bicategories from symmetric monoidal double categories that satisfy a lifting condition. Such symmetric monoidal double categories frequently occur in nature, so the method is widely applicable, though not universally so. \end{abstract} \section{Introduction} \label{sec:introduction} Symmetric monoidal bicategories are important in many contexts. However, the definition of even a monoidal bicategory (see~\cite{gps:tricats,nick:tricats}), let alone a symmetric monoidal one (see~\cite{kv:2cat-zam,kv:bm2cat,bn:hda-i,ds:monbi-hopfagbd,crans:centers,mccrudden:bal-coalgb,gurski:brmonbicat}), is quite imposing, and time-consuming to verify in any example. In this paper we describe a method for constructing symmetric monoidal bicategories which is hardly more difficult than constructing a pair of ordinary symmetric monoidal categories. While not universally applicable, this method applies in many cases of interest. This idea has often been implicitly used in particular cases, such as bicategories of enriched profunctors, but to my knowledge the first general statement was claimed in~\cite[Appendix B]{shulman:frbi}. Our purpose here is to work out the details, independently of~\cite{shulman:frbi}. \begin{rmk} Another approach to working out the details of this statement, from a different perspective, can be found in~\cite[\S5]{gg:ldstr-tricat}. The two approaches contain basically the same content and results, although the authors of~\cite{gg:ldstr-tricat} work with ``locally-double bicategories'' rather than monoidal double categories or 2x1-categories (see below). They also don't treat the symmetric case, but as we will see, that is a fairly easy extension once the theory is in place. Thus, this note really presents nothing very new, only a self-contained and (hopefully) convenient treatment of the particular case of interest. \end{rmk} The method relies on the fact that in many bicategories, the 1-cells are not the most fundamental notion of `morphism' between the objects. For instance, in the bicategory \cMod\ of rings, bimodules, and bimodule maps, the more fundamental notion of morphism between objects is a ring homomorphism. The addition of these extra morphisms promotes a bicategory to a \emph{double category}, or a category internal to \cCat. The extra morphisms are usually stricter than the 1-cells in the bicategory and easier to deal with for coherence questions; in many cases it is quite easy to show that we have a \emph{symmetric monoidal double category}. The central observation is that in most cases (when the double category is `fibrant') we can then `lift' this symmetric monoidal structure to the original bicategory. That is, we prove the following theorem: \begin{thm}\label{thm:mondbl-monbi-intro} If \lD\ is a fibrant monoidal double category, then its underlying bicategory $\cH(\lD)$ is a monoidal bicategory. If \lD\ is braided or symmetric, so is $\cH(\lD)$. \end{thm} There is a good case to be made, however (see~\cite{shulman:frbi}) that often the extra morphisms should \emph{not} be discarded. From this point of view, in many cases symmetric monoidal bicategories are a red herring, and really we should be studying symmetric monoidal double categories. This is also true in higher dimensions; for instance, Chris Douglas~\cite{douglas:tfttalk} has suggested that instead of tricategories we are usually interested in bicategories internal to \cCat\ or categories internal to \cTwocat. 
In most such cases arising in practice, we can again `lift' the coherence to give a tricategory after discarding the additional structure. We propose the generic term \textbf{$(n\times k)$-category} (pronounced ``$n$-by-$k$-category'') for an $n$-category internal to $k$-categories, a structure which has $(n+1)(k+1)$ different types of cells or morphisms arranged in an $(n+1)$ by $(k+1)$ grid. Thus double categories may be called \textbf{1x1-categories}, while in place of tricategories we may consider 2x1-categories and 1x2-categories. Any $(n\times k)$-category which satisfies a suitable lifting property should have an underlying $(n+k)$-category, but clearly as $n$ and $k$ grow an increasing amount of structure is discarded in this process. However, even for those of the opinion that $(n\times k)$-categories are fundamental (such as the author), sometimes it really is the underlying $(n+k)$-category that one cares about. This is particularly the case in the study of topological field theory, since the Baez-Dolan cobordism hypothesis asserts a universal property of the $(n+1)$-category of cobordisms which is not shared by the $(n\times 1)$-category from which it is naturally constructed (see~\cite{lurie:tft}). Thus, regardless of one's philosophical bent, results such as \autoref{thm:mondbl-monbi-intro} are of interest. Proceeding to the contents of this paper, in \S\ref{sec:symm-mono-double} we review the definition of symmetric monoidal double categories, and in \S\ref{sec:comp-conj} we recall the notions of `companion' and `conjoint' whose presence supplies the necessary lifting property, which we call being \emph{fibrant}. Then in \S\ref{sec:1x1-to-bicat} we describe a functor from fibrant double categories to bicategories, and in \S\ref{sec:constr-symm-mono} we show that it preserves monoidal, braided, and symmetric structures. I would like to thank Peter May, Tom Fiore, Stephan Stolz, Chris Douglas, and Nick Gurski for helpful discussions and comments. \section{Symmetric monoidal double categories} \label{sec:symm-mono-double} In this section, we introduce basic notions of double categories. Double categories go back originally to Ehresmann in~\cite{ehresmann:cat-str}; a brief introduction can be found in~\cite{ks:r2cats}. Other references include~\cite{multi_funct_i,gp:double-limits,gp:double-adjoints}. \begin{defn} A \textbf{(pseudo) double category} \lD\ consists of a `category of objects' $\lD_0$ and a `category of arrows' $\lD_1$, with structure functors \begin{align*} U&\maps \lD_0\to \lD_1\\ S,T&\maps \lD_1\rightrightarrows \lD_0\\ \odot&\maps \lD_1\times_{\lD_0}\lD_1\to \lD_1 \end{align*} (where the pullback is over $\lD_1\too[T]\lD_0\overset{S}{\longleftarrow} \lD_1$) such that \begin{align*} S(U_A) &= A\\ T(U_A) &= A\\ S(M\odot N) &= SN\\ T(M\odot N) &= TM \end{align*} equipped with natural isomorphisms \begin{align*} \fa &: (M\odot N) \odot P \too[\iso] M \odot (N \odot P)\\ \fl &: U_B \odot M \too[\iso] M\\ \fr &: M \odot U_A \too[\iso] M \end{align*} such that $S(\fa)$, $T(\fa)$, $S(\fl)$, $T(\fl)$, $S(\fr)$, and $T(\fr)$ are all identities, and such that the standard coherence axioms for a monoidal category or bicategory (such as Mac Lane's pentagon; see~\cite{maclane}) are satisfied. \end{defn} Just as a bicategory can be thought of as a category weakly \emph{enriched} over \cCat, a pseudo double category can be thought of as a category weakly \emph{internal} to \cCat. 
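To fix intuition before giving examples, here is a sketch of this data in the bimodule case treated below, under the convention that a 1-cell $A\hto B$ is an $(A,B)$-bimodule: $\lD_0$ is the category of rings and ring homomorphisms, $\lD_1$ is the category of bimodules and equivariant maps, $S$ and $T$ send a bimodule to its two underlying rings, $U_A$ is $A$ regarded as an $(A,A)$-bimodule, and $\odot$ is the relative tensor product over the intermediate ring,
\[ M\odot N \;=\; N\otimes_B M \qquad\text{for } N\maps A\hto B \text{ and } M\maps B\hto C, \]
with $\fa$, $\fl$, and $\fr$ the usual associativity and unit isomorphisms of the tensor product.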
Since these are the kind of double category of most interest to us, we will usually drop the adjective ``pseudo.'' We call the objects of $\lD_0$ \textbf{objects} or \textbf{0-cells}, and we call the morphisms of $\lD_0$ \textbf{(vertical) 1-morphisms} and write them as $f\maps A\to B$. We call the objects of $\lD_1$ \textbf{(horizontal) 1-cells}; if $M$ is a 1-cell with $S(M)=A$ and $T(M)=B$, we write $M\maps A\hto B$. We call a morphism $\alpha\maps M\to N$ of $\lD_1$ with $S(\alpha)=f$ and $T(\alpha)=g$ a \textbf{2-morphism} and draw it as follows: \begin{equation}\label{eq:square} \[email protected]{ A \ar[r]|{|}^{M} \ar[d]_f \ar@{}[dr]|{\Downarrow\alpha}& B\ar[d]^g\\ C \ar[r]|{|}_N & D }. \end{equation} Note that we distinguish between \emph{1-morphisms}, which we draw vertically, and \emph{1-cells}, which we draw horizontally. In traditional double-category terminology these are both referred to with the same word (be it ``cell'' or ``morphism'' or ``arrow''), the distinction being made only by the adjectives ``vertical'' and ``horizontal.'' Our terminology is more concise, and allows for flexibility in the drawing of pictures without a corresponding change in names (some authors prefer to draw their double categories transposed from ours). We write the composition of vertical 1-morphisms $A\too[f] B\too[g] C$ and the vertical composition of 2-morphisms $M\too[\alpha] N\too[\beta] P$ as $g\circ f$ and $\beta\circ\alpha$, or sometimes just $gf$ and $\beta\alpha$. We write the horizontal composition of 1-cells $A\xhto{M} B \xhto{N} C$ as $A\xhto{N\odot M} C$ and that of 2-morphisms \[\vcenter{\xymatrix{ \ar[r]|-@{|}^-{} \ar[d] \ar@{}[dr]|{\Downarrow\alpha} & \ar[r]|-@{|}^-{} \ar[d] \ar@{}[dr]|{\Downarrow\beta} &\ar[d]\\ \ar[r]|-@{|}_-{} & \ar[r]|-@{|}_-{} & }}\] as \[\vcenter{\xymatrix@C=4pc{ \ar[r]|-@{|}^-{} \ar[d] \ar@{}[dr]|{\Downarrow\;\be\odot\al} & \ar[d]\\ \ar[r]|-@{|}_-{} & }}\] The two different compositions of 2-morphisms obey an interchange law, by the functoriality of $\odot$: \[(M_1\odot M_2) \circ (N_1\odot N_2) = (M_1\circ N_1)\odot (M_2\circ N_2). \] Every object $A$ has a vertical identity $1_A$ and a horizontal unit $U_A$, every 1-cell $M$ has an identity 2-morphism $1_M$, every vertical 1-morphism $f$ has a horizontal unit 2-morphism $U_f$, and we have $1_{U_A} = U_{1_A}$ (by the functoriality of $U$). Note that the vertical composition $\circ$ is strictly associative and unital, while the horizontal one $\odot$ is only weakly so. This is the case in most of the examples we have in mind. It is possible to define double categories that are weak in both directions (see, for instance,~\cite{verity:base-change}), but this introduces much more complication and is usually unnecessary. \begin{rmk}\label{rmk:monglob} In general, an $(n\times 1)$-category consists of 1-categories $\lD_i$ for $0\le i\le n$, together with source, target, unit, and composition functors and coherence isomorphisms. We refer to the objects of $\lD_i$ as \textbf{$i$-cells} and to the morphisms of $\lD_i$ as \textbf{morphisms of $i$-cells} or \textbf{(vertical) $(i+1)$-morphisms}. A formal definition can be found in~\cite{batanin:monglob} under the name \emph{monoidal $n$-globular category}. \end{rmk} A 2-morphism~\eqref{eq:square} where $f$ and $g$ are identities (such as the constraint isomorphisms $\fa,\fl,\fr$) is called \textbf{globular}. Every double category \lD\ has a \textbf{horizontal bicategory} $\cH(\lD)$ consisting of the objects, 1-cells, and globular 2-morphisms. 
Conversely, many naturally occurring bicategories are actually the horizontal bicategory of some naturally ocurring double category. Here are just a few examples. \begin{eg} The double category \lMod\ has as objects rings, as 1-morphisms ring homomorphisms, as 1-cells bimodules, and as 2-morphisms equivariant bimodule maps. Its horizontal bicategory $\cMod = \cH(\lMod)$ is the usual bicategory of rings and bimodules. \end{eg} \begin{eg} The double category \lnCob\ has as objects closed $n$-manifolds, as 1-morphisms diffeomorphisms, as 1-cells cobordisms, and as 2-morphisms diffeomorphisms between cobordisms. Again $\cH(\lnCob)$ is the usual bicategory of cobordisms. \end{eg} \begin{eg} The double category \lProf\ has as objects categories, as 1-morphisms functors, as 1-cells \emph{profunctors} (a profunctor $A\hto B$ is a functor $B\op\times A\to \mathbf{Set}$), and as 2-morphisms natural transformations. Bicategories such as $\cH(\lProf)$ are commonly encountered in category theory, especially the enriched versions. \end{eg} As opposed to bicategories, which naturally form a tricategory, double categories naturally form a \emph{2-category}, a much simpler object. \begin{defn} Let \lD\ and \lE\ be double categories. A \textbf{(pseudo double) functor} $F\maps \lD\to \lE$ consists of the following. \begin{itemize} \item Functors $F_0\maps \lD_0 \to \lE_0$ and $F_1\maps \lD_1 \to \lE_1$ such that $S\circ F_1 = F_0\circ S$ and $T\circ F_1 = F_0\circ T$. \item Natural transformations $F_\odot\maps F_1M \odot F_1N \to F_1(M\odot N)$ and $F_U\maps U_{F_0 A} \to F_1(U_A)$, whose components are globular isomorphisms, and which satisfy the usual coherence axioms for a monoidal functor or pseudofunctor (see~\cite[\S{}XI.2]{maclane}). \end{itemize} \end{defn} \begin{defn}\label{thm:dbl-transf} A \textbf{(vertical) transformation} between two functors $\alpha: F\to G:\lD\to\lE$ consists of natural transformations $\alpha_0\maps F_0\to G_0$ and $\alpha_1\maps F_1\to G_1$ (both usually written as $\alpha$), such that $S(\alpha_{M}) = \alpha_{SM}$ and $T(\alpha_{M}) = \alpha_{TM}$, and such that \[\vcenter{\[email protected]{ FA \ar@{=}[d] \ar[r]|{|}^{FM} \ar@{}[drr]|{\Downarrow F_\odot} & FB \ar[r]|{|}^{FN} & FC \ar@{=}[d]\\ FA \ar[rr]|{F(N\odot M)} \ar[d]_{\alpha_A} \ar@{}[drr]|{\Downarrow \alpha_{N\odot M}} && FC \ar[d]^{\alpha_C}\\ GA \ar[rr]|{|}_{G(N\odot M)} && GC }} = \vcenter{\[email protected]{ FA \ar[d]_{\alpha_A} \ar@{}[dr]|{\Downarrow \alpha_M} \ar[r]|{|}^{FM} & FB \ar[d]|{\alpha_B} \ar@{}[dr]|{\Downarrow \alpha_N} \ar[r]|{|}^{FN} & FC \ar[d]^{\alpha_C}\\ GA \ar@{=}[d] \ar[r]|{|}_{GM} \ar@{}[drr]|{\Downarrow G_\odot} & GB \ar[r]|{|}_{GN} & GC \ar@{=}[d]\\ GA \ar[rr]|{|}_{G(N\odot M)} && GC }}\] for all 1-cells $M\colon A\hto B$ and $N\colon B\hto C$, and \[\vcenter{\[email protected]{ FA \ar[rr]|{|}^{U_{FA}} \ar@{=}[d] \ar@{}[drr]|{\Downarrow F_0} && FA \ar@{=}[d]\\ FA \ar[rr]|{F(U_A)} \ar[d]_{\alpha_A} \ar@{}[drr]|{\Downarrow \alpha_{U_A}} && FA \ar[d]^{\alpha_A}\\ GA \ar[rr]|{|}_{G(U_A)} && GA }} = \vcenter{\[email protected]{ FA \ar[rr]|{|}^{U_{FA}} \ar[d]_{\alpha_A} \ar@{}[drr]|{\Downarrow U_{\alpha_A}} && FA \ar[d]^{\alpha_A}\\ GA \ar[rr]|{U_{GA}} \ar@{=}[d] \ar@{}[drr]|{\Downarrow F_0} && GA \ar@{=}[d]\\ GA \ar[rr]|{|}_{G(U_A)} && GA. }}\] for all objects $A$. \end{defn} We write \cDbl\ for the 2-category of double categories, functors, and transformations, and $\mathbf{Dbl}$ for its underlying 1-category. 
Note that a 2-cell $\al$ in \cDbl\ is an isomorphism just when each $\al_A$, \emph{and} each $\al_M$, is invertible. The 2-category \cDbl\ gives us an easy way to define what we mean by a \emph{symmetric monoidal double category}. In any 2-category with finite products there is a notion of a \emph{pseudomonoid}, which generalizes the notion of monoidal category in \cCat. Specializing this to \cDbl, we obtain the following. \begin{defn} A \textbf{monoidal double category} is a double category equipped with functors $\ten\maps \lD\times\lD\to\lD$ and $I\maps * \to\lD$, and invertible transformations \begin{align*} \mathord{\otimes} \circ (\Id\times \mathord{\otimes}) &\iso \mathord{\otimes} \circ (\mathord{\otimes} \times \Id)\\ \mathord{\otimes} \circ (\Id\times I) &\iso \Id\\ \mathord{\otimes} \circ (I\times \Id) &\iso \Id \end{align*} satisfying the usual axioms. If it additionally has a braiding isomorphism \begin{align*} \mathord{\otimes} &\iso \mathord{\otimes} \circ \tau \end{align*} (where $\tau\maps \lD\times\lD\iso \lD\times\lD$ is the twist) satisfying the usual axioms, then it is \textbf{braided} or \textbf{symmetric}, according to whether or not the braiding is self-inverse. \end{defn} Unpacking this definition more explicitly, we see that a monoidal double category is a double category together with the following structure. \begin{enumerate} \item $\lD_0$ and $\lD_1$ are both monoidal categories. \item If $I$ is the monoidal unit of $\lD_0$, then $U_I$ is the monoidal unit of $\lD_1$.\footnote{Actually, all the above definition requires is that $U_I$ is coherently \emph{isomorphic to} the monoidal unit of $\lD_1$, but we can always choose them to be equal without changing the rest of the structure.} \item The functors $S$ and $T$ are strict monoidal, i.e.\ $S(M\ten N) = SM\ten SN$ and $T(M\ten N)=TM\ten TN$ and $S$ and $T$ also preserve the associativity and unit constraints. \item We have globular isomorphisms \[\fx\maps (M_1\ten N_1)\odot (M_2\ten N_2)\too[\iso] (M_1\odot M_2)\ten (N_1\odot N_2)\] and \[\fu\maps U_{A\ten B} \too[\iso] (U_A \ten U_B)\] such that the following diagrams commute: \[\xymatrix{ ((M_1\ten N_1)\odot (M_2\ten N_2)) \odot (M_3\ten N_3) \ar[r]\ar[d] & ((M_1\odot M_2)\ten (N_1\odot N_2)) \odot (M_3\ten N_3) \ar[d]\\ (M_1\ten N_1)\odot ((M_2\ten N_2) \odot (M_3\ten N_3)) \ar[d] & ((M_1\odot M_2)\odot M_3) \ten ((N_1\odot N_2)\odot N_3) \ar[d]\\ (M_1\ten N_1) \odot ((M_2\odot M_3) \ten (N_2\odot N_3))\ar[r] & (M_1\odot (M_2\odot M_3)) \ten (N_1\odot (N_2\odot N_3))}\] \[\xymatrix{(M\ten N) \odot U_{C\ten D} \ar[r]\ar[d] & (M\ten N)\odot (U_C\ten U_D) \ar[d]\\ M\ten N\ar@{<-}[r] & (M\odot U_C) \ten (N\odot U_D)}\] \[\xymatrix{U_{A\ten B}\odot (M\ten N) \ar[r]\ar[d] & (U_A\ten U_B)\odot (M\ten N) \ar[d]\\ M\ten N\ar@{<-}[r] & (U_A \odot M) \ten (U_B\odot N)}\] (these arise from the constraint data for the pseudo double functor $\ten$). \item The following diagrams commute, expressing that the associativity isomorphism for $\ten$ is a transformation of double categories. 
\[\xymatrix{ ((M_1\ten N_1)\ten P_1) \odot ((M_2\ten N_2)\ten P_2) \ar[r]\ar[d] & (M_1\ten (N_1\ten P_1)) \odot (M_2\ten (N_2\ten P_2)) \ar[d]\\ ((M_1\ten N_1) \odot (M_2\ten N_2)) \ten (P_1\odot P_2) \ar[d] & (M_1\odot M_2) \ten ((N_1\ten P_1)\odot (N_2\ten P_2))\ar[d] \\ ((M_1\odot M_2) \ten(N_1\odot N_2)) \ten (P_1\odot P_2) \ar[r] & (M_1\odot M_2) \ten ((N_1\odot N_2)\ten (P_1\odot P_2))}\] \[\xymatrix{ U_{(A\ten B)\ten C} \ar[r] \ar[d] & U_{A\ten (B\ten C)} \ar[d]\\ U_{A\ten B} \ten U_C \ar[d] & U_A\ten U_{B\ten C}\ar[d]\\ (U_A\ten U_B)\ten U_C \ar[r] & U_A\ten (U_B\ten U_C) }\] \item The following diagrams commute, expressing that the unit isomorphisms for $\ten$ are transformations of double categories. \[\vcenter{\xymatrix{ (M\ten U_I)\odot (N\ten U_I)\ar[r]\ar[d] & (M\odot N)\ten (U_I \odot U_I) \ar[d]\\ M\odot N \ar@{<-}[r] & (M\odot N)\ten U_I }}\] \[\vcenter{\xymatrix{U_{A\ten I} \ar[r]\ar[dr] & U_A\ten U_I \ar[d]\\ & U_A}}\] \[\vcenter{\xymatrix{ (U_I\ten M)\odot (U_I\ten N)\ar[r]\ar[d] & (U_I \odot U_I) \ten (M\odot N) \ar[d]\\ M\odot N \ar@{<-}[r] & U_I\ten (M\odot N) }}\] \[\vcenter{\xymatrix{U_{I\ten A} \ar[r]\ar[dr] & U_I\ten U_A \ar[d]\\ & U_A}}\] \newcounter{mondbl} \setcounter{mondbl}{\value{enumi}} \end{enumerate} Similarly, a braided monoidal double category is a monoidal double category with the following additional structure. \begin{enumerate}\setcounter{enumi}{\value{mondbl}} \item $\lD_0$ and $\lD_1$ are braided monoidal categories. \item The functors $S$ and $T$ are strict braided monoidal (i.e.\ they preserve the braidings). \item The following diagrams commute, expressing that the braiding is a transformation of double categories. \[\xymatrix{(M_1\odot M_2)\ten (N_1\odot N_2) \ar[r]^\fs\ar[d]_\fx & (N_1\odot N_2)\ten (M_1 \odot M_2)\ar[d]^\fx\\ (M_1\ten N_1)\odot (M_2\ten N_2) \ar[r]_{\fs\odot \fs} & (N_1\ten M_1) \odot (N_2 \ten M_2)} \] \[\xymatrix{U_A \ten U_B \ar[r]^(0.55)\fu \ar[d]_\fs & U_{A\ten B} \ar[d]^{U_\fs}\\ U_B\ten U_A \ar[r]_(0.55)\fu & U_{B\ten A}}. \] \setcounter{mondbl}{\value{enumi}} \end{enumerate} Finally, a symmetric monoidal double category is a braided one such that \begin{enumerate}\setcounter{enumi}{\value{mondbl}} \item $\lD_0$ and $\lD_1$ are in fact symmetric monoidal. \end{enumerate} While there are a fair number of coherence diagrams to verify, most of them are fairly small, and in any given case most or all of them are fairly obvious. Thus, verifying that a given double category is (braided or symmetric) monoidal is not a great deal of work. \begin{eg} The examples \lMod, \lnCob, and \lProf\ are all easily seen to be symmetric monoidal under the tensor product of rings, disjoint union of manifolds, and cartesian product of categories, respectively. \end{eg} \begin{rmk} In a 2-category with finite products there is additionally the notion of a \emph{cartesian object}: one such that the diagonal $D\to D\times D$ and projection $D\to 1$ have right adjoints. Any cartesian object is a symmetric pseudomonoid in a canonical way, just as any category with finite products is a monoidal category with its cartesian product. Many of the ``cartesian bicategories'' considered in~\cite{cw:cart-bicats-i,ckww:cartbicats-ii} are in fact the horizontal bicategory of some cartesian object in \cDbl, and inherit their monoidal structure in this way. \end{rmk} Two further general methods for constructing symmetric monoidal double categories can be found in~\cite{shulman:frbi}. 
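For instance, in the first of these examples one can spell out the globular constraints explicitly: in \lMod, taking $\ten$ on 1-cells to be the tensor product over $\mathbb{Z}$, the isomorphism $\fx$ may be taken to be the evident interchange of tensor factors,
\[ (m_1\otimes n_1)\otimes(m_2\otimes n_2) \;\mapsto\; (m_1\otimes m_2)\otimes(n_1\otimes n_2), \]
and $\fu$ the identity of $A\otimes_{\mathbb{Z}} B$; the coherence diagrams above then commute simply because every map involved merely reshuffles tensor factors.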
\begin{rmk} The general yoga of internalization says that an $X$ internal to $Y$s internal to $Z$s is equivalent to a $Y$ internal to $X$s internal to $Z$s, but this is only strictly true when the internalizations are all strict. We have defined a symmetric monoidal double category to be a (pseudo) symmetric monoid internal to (pseudo) categories internal to categories, but one could also consider a (pseudo) category internal to (pseudo) symmetric monoids internal to categories, i.e.\ a pseudo internal category in the 2-category $\mathcal{S}\mathit{ym}\mathcal{M}\mathit{on}\mathcal{C}\mathit{at}$ of symmetric monoidal categories and strong symmetric monoidal functors. This would give \emph{almost} the same definition, except that $S$ and $T$ would only be strong monoidal (preserving $\ten$ up to isomorphism) rather than strict monoidal. We prefer our definition, since $S$ and $T$ are strict monoidal in almost all examples, and keeping track of their constraints would be tedious. \end{rmk} Just as every bicategory is equivalent to a strict 2-category, it is proven in~\cite{gp:double-limits} that every pseudo double category is equivalent to a strict double category (one in which the associativity and unit constraints for $\odot$ are identities). Thus, from now on we will usually omit to write these constraint isomorphisms (or equivalently, implicitly strictify our double categories). We \emph{will} continue to write the constraint isomorphisms for the monoidal structure $\ten$, since these are where the whole question lies. \section{Companions and conjoints} \label{sec:comp-conj} Suppose that \lD\ is a symmetric monoidal double category; when does $\cH(\lD)$ become a symmetric monoidal bicategory? It clearly has a unit object $I$, and the pseudo double functor $\ten\maps \lD\times\lD\to\lD$ clearly induces a functor $\ten\maps \cH(\lD)\times\cH(\lD)\to\cH(\lD)$. However, the problem is that the constraint isomorphisms such as $A\ten (B\ten C)\iso (A\ten B)\ten C$ are \emph{vertical} 1-morphisms, which get discarded when we pass to $\cH(\lD)$. Thus, in order for $\cH(\lD)$ to inherit a symmetric monoidal structure, we must have a way to make vertical 1-morphisms into horizontal 1-cells. Thus is the purpose of the following definition. \begin{defn}\label{def:companion} Let \lD\ be a double category and $f\maps A\to B$ a vertical 1-morphism. A \textbf{companion} of $f$ is a horizontal 1-cell $\fhat\maps A\hto B$ together with 2-morphisms \begin{equation*} \begin{array}{c} \[email protected]{ \ar[r]|-@{|}^-{\fhat} \ar[d]_f \ar@{}[dr]|\Downarrow & \ar@{=}[d]\\ \ar[r]|-@{|}_-{U_B} & } \end{array}\quad\text{and}\quad \begin{array}{c} \[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[d]^f\\ \ar[r]|-@{|}_-{\fhat} & } \end{array} \end{equation*} such that the following equations hold. 
\begin{align}\label{eq:compeqn} \begin{array}{c} \[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[d]^f\\ \ar[r]|-{\fhat} \ar[d]_f \ar@{}[dr]|\Downarrow & \ar@{=}[d]\\ \ar[r]|-@{|}_-{U_B} & } \end{array} &= \begin{array}{c} \[email protected]{ \ar[r]|-@{|}^-{U_A} \ar[d]_f \ar@{}[dr]|{\Downarrow U_f} & \ar[d]^f\\ \ar[r]|-@{|}_-{U_B} & } \end{array} & \begin{array}{c} \[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{\fhat} \ar[d]_f \ar@{}[dr]|\Downarrow & \ar@{=}[d]\\ \ar[r]|-@{|}_-{\fhat} & \ar[r]|-@{|}_-{U_B} &} \end{array} &= \begin{array}{c} \[email protected]{ \ar[r]|-@{|}^-{\fhat} \ar@{=}[d] \ar@{}[dr]|{\Downarrow 1_{\fhat}} & \ar@{=}[d]\\ \ar[r]|-@{|}_-{\fhat} & } \end{array} \end{align} A \textbf{conjoint} of $f$, denoted $\fchk\maps B\hto A$, is a companion of $f$ in the double category $\lD^{h\cdot\mathrm{op}}$ obtained by reversing the horizontal 1-cells, but not the vertical 1-morphisms, of \lD. \end{defn} \begin{rmk} We momentarily suspend our convention of pretending that our double categories are strict to mention that the second equation in~\eqref{eq:compeqn} actually requires an insertion of unit isomorphisms to make sense. \end{rmk} The form of this definition is due to~\cite{gp:double-adjoints,dpp:spans}, but the ideas date back to~\cite{bs:dblgpd-xedmod}; see also~\cite{bm:dbl-thin-conn,fiore:pscat}. In the terminology of these references, a \emph{connection} on a double category is equivalent to a strictly functorial choice of a companion for each vertical arrow. \begin{defn} We say that a double category is \textbf{fibrant} if every vertical 1-morphism has both a companion and a conjoint. \end{defn} \begin{rmk} In~\cite{shulman:frbi} fibrant double categories were called \emph{framed bicategories}. However, the present terminology seems to generalize better to $(n\times k)$-categories, as well as avoiding a conflict with the \emph{framed bordisms} in topological field theory. \end{rmk} \begin{egs} \lMod, \lnCob, and \lProf\ are all fibrant. In \lMod, the companion of a ring homomorphism $f\maps A\to B$ is $B$ regarded as an $A$-$B$-bimodule via $f$ on the left, and dually for its conjoint. In \lnCob, companions and conjoints are obtained by regarding a diffeomorphism as a cobordism. And in \lProf, companions and conjoints are obtained by regarding a functor $f\maps A\to B$ as a `representable' profunctor $B(f-,-)$ or $B(-,f-)$. \end{egs} \begin{rmk} For an $(n\times 1)$-category (recall \autoref{rmk:monglob}), the lifting condition we should require is simply that each double category $\lD_{i+1} \toto \lD_i$, for $0\le i < n$, is fibrant. \end{rmk} The existence of companions and conjoints gives us a way to `lift' vertical 1-morphisms to horizontal 1-cells. What is even more crucial for our applications, however, is that these liftings are unique up to isomorphism, and that these isomorphisms are canonical and coherent. This is the content of the following lemmas. We state most of them only for companions, but all have dual versions for conjoints. \begin{lem}\label{thm:theta} Let $\fhat\maps A\hto B$ and $\fhat'\maps A\hto B$ be companions of $f$ (that is, each comes \emph{equipped with} 2-morphisms as in \autoref{def:companion}). 
Then there is a unique globular isomorphism $\theta_{\fhat,\fhat'}\maps \fhat\too[\iso]\fhat'$ such that \begin{equation}\label{eq:comp-iso} \vcenter{\xymatrix@R=1.5pc{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[d]^f\\ \ar[r]|-{\fhat} \ar@{=}[d] \ar@{}[dr]|{\Downarrow \theta_{\fhat,\fhat'}} & \ar@{=}[d]\\ \ar[r]|-{\fhat'} \ar[d]_f \ar@{}[dr]|\Downarrow & \ar@{=}[d]\\ \ar[r]|-@{|}_-{U_B} & }} \quad = \quad \vcenter{\[email protected]{ \ar[r]|-@{|}^-{U_A} \ar[d]_f \ar@{}[dr]|{\Downarrow U_f} & \ar[d]^f\\ \ar[r]|-@{|}_-{U_B} & .}} \end{equation} \end{lem} \begin{proof} Composing~\eqref{eq:comp-iso} on the left with $\vcenter{\[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[d]^f\\ \ar[r]|-@{|}_-{\fhat'} & }}$ and on the right with $\vcenter{\[email protected]{ \ar[r]|-@{|}^-{\fhat} \ar[d]_f \ar@{}[dr]|\Downarrow & \ar@{=}[d]\\ \ar[r]|-@{|}_-{U_B} & }}$, and using the second equation~\eqref{eq:compeqn}, we see that if~\eqref{eq:comp-iso} is satisfied then $\theta_{\fhat,\fhat'}$ must be the composite \begin{equation} \vcenter{\[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{\fhat} \ar[d]|f \ar@{}[dr]|\Downarrow & \ar@{=}[d]\\ \ar[r]|-@{|}_-{\fhat'} & \ar[r]|-@{|}_-{U_B} &}}\label{eq:theta} \end{equation} Two applications of the first equation~\eqref{eq:compeqn} shows that this indeed satisfies~\eqref{eq:comp-iso}. As for its being an isomorphism, we have the dual composite $\theta_{\fhat',\fhat'}$: \[\vcenter{\[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{\fhat'} \ar[d]_f \ar@{}[dr]|\Downarrow & \ar@{=}[d]\\ \ar[r]|-@{|}_-{\fhat} & \ar[r]|-@{|}_-{U_B} &}}\] which we verify is an inverse using~\eqref{eq:compeqn}: \[\vcenter{\[email protected]{ \ar[r]|-@{|}^{U_A}\ar@{=}[d] \ar@{}[dr]|{=} & \ar[r]|-@{|}^{U_A}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]|-@{|}^{\fhat}\ar[d]|f \ar@{}[dr]|{\Downarrow} & \ar@{=}[d]\\ \ar[r]|{U_A}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]|{\fhat'}\ar[d]|f \ar@{}[dr]|{\Downarrow} & \ar[r]|{U_B}\ar@{=}[d] \ar@{}[dr]|{=} & \ar@{=}[d]\\ \ar[r]|-@{|}_{\fhat} & \ar[r]|-@{|}_{U_B} & \ar[r]|-@{|}_{U_B} & }} \;=\; \vcenter{\[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{\fhat} \ar[d]_f \ar@{}[dr]|\Downarrow & \ar@{=}[d]\\ \ar[r]|-@{|}_-{\fhat} & \ar[r]|-@{|}_-{U_B} &}} \;=\; \vcenter{\[email protected]{ \ar[r]|-@{|}^-{\fhat} \ar@{=}[d] \ar@{}[dr]|{\Downarrow 1_{\fhat}} & \ar@{=}[d]\\ \ar[r]|-@{|}_-{\fhat} & }}\] (and dually). \end{proof} \begin{lem}\label{thm:theta-id} For any companion \fhat\ of $f$ we have $\theta_{\fhat,\fhat}=1_{\fhat}$. \end{lem} \begin{proof} This is the second equation~\eqref{eq:compeqn}. \end{proof} \begin{lem}\label{thm:theta-compose-vert} Suppose that $f$ has three companions $\fhat$, $\fhat'$, and $\fhat''$. Then $\theta_{\fhat,\fhat''} = \theta_{\fhat',\fhat''} \circ\theta_{\fhat,\fhat'}$. 
\end{lem} \begin{proof} By definition, we have \[\theta_{\fhat',\fhat''} \circ\theta_{\fhat,\fhat'} =\; \vcenter{\[email protected]{ \ar[r]|-@{|}^{U_A}\ar@{=}[d] \ar@{}[dr]|{=} & \ar[r]|-@{|}^{U_A}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]|-@{|}^{\fhat}\ar[d]|f \ar@{}[dr]|{\Downarrow} & \ar@{=}[d]\\ \ar[r]|{U_A}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]|{\fhat'}\ar[d]|f \ar@{}[dr]|{\Downarrow} & \ar[r]|{U_B}\ar@{=}[d] \ar@{}[dr]|{=} & \ar@{=}[d]\\ \ar[r]|-@{|}_{\fhat''} & \ar[r]|-@{|}_{U_B} & \ar[r]|-@{|}_{U_B} & }} \;=\; \vcenter{\[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{\fhat} \ar[d]_f \ar@{}[dr]|\Downarrow & \ar@{=}[d]\\ \ar[r]|-@{|}_-{\fhat''} & \ar[r]|-@{|}_-{U_B} &}} \;= \theta_{\fhat,\fhat''}\] as desired. \end{proof} \begin{lem}\label{thm:comp-unit} $U_A\maps A\hto A$ is always a companion of $1_A\maps A\to A$ in a canonical way. \end{lem} \begin{proof} We take both defining 2-morphisms to be $1_{U_A}$; the truth of~\eqref{eq:compeqn} is evident. \end{proof} \begin{lem}\label{thm:comp-compose} Suppose that $f\maps A\to B$ has a companion \fhat\ and $g\maps B\to C$ has a companion \ghat. Then $\ghat\odot\fhat$ is a companion of $gf$. \end{lem} \begin{proof} We take the defining 2-morphisms to be the composites \[\vcenter{\[email protected]{ \ar[r]|-@{|}^-{\fhat} \ar[d]_f \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{\ghat} \ar@{=}[d] \ar@{}[dr]|{1_{\ghat}} & \ar@{=}[d]\\ \ar[r]|-{U_B} \ar[d]_g \ar@{}[dr]|{U_g} & \ar[r]|-{\ghat} \ar[d]|g \ar@{}[dr]|\Downarrow & \ar@{=}[d]\\ \ar[r]|-@{|}_-{U_C} & \ar[r]|-@{|}_-{U_C} & }}\quad\text{and}\quad \vcenter{\[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{U_A} \ar[d]|f \ar@{}[dr]|{U_f} & \ar[d]^f\\ \ar[r]|-{\fhat} \ar@{=}[d] \ar@{}[dr]|{1_{\fhat}} & \ar[r]|-{U_B} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[d]^g\\ \ar[r]|-@{|}_-{\fhat} & \ar[r]|-@{|}_-{\ghat} & }} \] It is easy to verify that these satisfy~\eqref{eq:compeqn}, using the interchange law for $\odot$ and $\circ$ in a double category. \end{proof} \begin{lem}\label{thm:theta-compose-horiz} Suppose that $f\maps A\to B$ has companions $\fhat$ and $\fhat'$, and that $g\maps B\to C$ has companions $\ghat$ and $\ghat'$. Then $\theta_{\ghat,\ghat'}\odot \theta_{\fhat,\fhat'} = \theta_{\ghat\odot\fhat, \ghat'\odot\fhat'}$. 
\end{lem} \begin{proof} Using the interchange law for $\odot$ and $\circ$, we have: \begin{align} \theta_{\ghat\odot\fhat, \ghat'\odot\fhat'} &=\; \vcenter{\[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{U_A} \ar[d]|f \ar@{}[dr]|{U_f} & \ar[r]|-@{|}^-{\fhat} \ar[d]|f \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{\ghat} \ar@{=}[d] \ar@{}[dr]|{1_{\fhat}} & \ar@{=}[d]\\ \ar[r]|-{\fhat'} \ar@{=}[d] \ar@{}[dr]|{1_{\ghat}} & \ar[r]|-{U_B} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-{U_B} \ar[d]|g \ar@{}[dr]|{U_g} & \ar[r]|-{\ghat} \ar[d]|g \ar@{}[dr]|\Downarrow & \ar@{=}[d]\\ \ar[r]|-@{|}_-{\fhat'} & \ar[r]|-@{|}_-{\ghat'} & \ar[r]|-@{|}_-{U_C} & \ar[r]|-@{|}_-{U_C} & }} \;=\; \vcenter{\[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{\fhat} \ar[d]|f \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{\ghat} \ar@{=}[d] \ar@{}[dr]|{1_{\fhat}} & \ar@{=}[d]\\ \ar[r]|-{\fhat'} \ar@{=}[d] \ar@{}[dr]|{1_{\ghat}} & \ar[r]|-{U_B} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-{\ghat} \ar[d]|g \ar@{}[dr]|\Downarrow & \ar@{=}[d]\\ \ar[r]|-@{|}_-{\fhat'} & \ar[r]|-@{|}_-{\ghat'} & \ar[r]|-@{|}_-{U_C} & }}\\ &=\; \vcenter{\[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{\fhat} \ar[d]|f \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{U_B} \ar@{=}[d] \ar@{}[dr]|{1_{U_B}} & \ar[r]|-@{|}^-{\ghat} \ar@{=}[d] \ar@{}[dr]|{1_{\fhat}} & \ar@{=}[d]\\ \ar[r]|-{\fhat'} \ar@{=}[d] \ar@{}[dr]|{1_{\ghat}} & \ar[r]|-{U_B} \ar@{=}[d] \ar@{}[dr]|{1_{U_B}} & \ar[r]|-{U_B} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-{\ghat} \ar[d]|g \ar@{}[dr]|\Downarrow & \ar@{=}[d]\\ \ar[r]|-@{|}_-{\fhat'} & \ar[r]|-@{|}_-{U_B} & \ar[r]|-@{|}_-{\ghat'} & \ar[r]|-@{|}_-{U_C} & }}\;=\; \vcenter{\[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{\fhat} \ar[d]|f \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{U_B} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[r]|-@{|}^-{\ghat} \ar[d]|g \ar@{}[dr]|\Downarrow& \ar@{=}[d]\\ \ar[r]|-@{|}_-{\fhat'} & \ar[r]|-@{|}_-{U_B} & \ar[r]|-@{|}_-{\ghat'} & \ar[r]|-@{|}_-{U_C} & }}\\ &=\; \theta_{\ghat,\ghat'}\odot \theta_{\fhat,\fhat'} \end{align} as desired. \end{proof} \begin{lem}\label{thm:theta-unit} If $f\maps A\to B$ has a companion \fhat, then $\theta_{\fhat,\fhat\odot U_A}$ and $\theta_{\fhat,U_B\odot \fhat}$ are equal to the unit constraints $\fhat \iso \fhat\odot U_A$ and $\fhat\iso U_B\odot \fhat$. \end{lem} \begin{proof} By definition, we have \[\theta_{\fhat,\fhat\odot U_A} =\; \vcenter{\[email protected]{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|{\Downarrow 1_{U_A}} & \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|{1_{U_A}} & \ar@{=}[d] \ar[rr]|-@{|}^-{\fhat} \ar@{}[ddrr]|\Downarrow && \ar@{=}[dd]\\ \ar[r]|-{U_A} \ar@{=}[d] \ar@{}[dr]|{1_{U_A}} & \ar[r]|-{U_A} \ar@{=}[d] \ar@{}[dr]|\Downarrow & \ar[d]^f\\ \ar[r]|-@{|}_-{U_A} & \ar[r]|-@{|}_-{\fhat} & \ar[rr]|-@{|}^-{U_B} && }}\;=\; \vcenter{\xymatrix{ \ar[r]|-@{|}^-{U_A} \ar@{=}[d] \ar@{}[dr]|{\Downarrow 1_{U_A}} & \ar@{=}[d]\\ \ar[r]|-@{|}_-{U_A} & }} \] which, bearing in mind our suppression of unit and associativity constraints, means that in actuality it is the unit constraint $\fhat \iso \fhat\odot U_A$. The other case is dual. \end{proof} \begin{lem}\label{thm:comp-func} Let $F\maps \lD\to\lE$ be a functor between double categories and let $f\maps A\to B$ have a companion \fhat\ in \lD. Then $F(\fhat)$ is a companion of $F(f)$ in \lE. 
\end{lem} \begin{proof} We take the defining 2-morphisms to be \[\vcenter{\xymatrix@R=1.5pc@C=3pc{ \ar[r]|-@{|}^-{F(\fhat)} \ar[d]_{F(f)} \ar@{}[dr]|{F(\Downarrow)} & \ar@{=}[d]\\ \ar[r]|-{F(U_B)} \ar@{=}[d] \ar@{}[dr]|\iso & \ar@{=}[d]\\ \ar[r]|-@{|}_-{U_{F(B)}} & }} \quad\text{and}\quad \vcenter{\xymatrix@R=1.5pc@C=3pc{ \ar[r]|-@{|}^-{U_{FA}} \ar@{=}[d] \ar@{}[dr]|\iso & \ar@{=}[d]\\ \ar[r]|-{F(U_{A})} \ar@{=}[d] \ar@{}[dr]|{F(\Downarrow)} & \ar[d]^{F(f)}\\ \ar[r]|-@{|}_-{F(\fhat)} & .}}\] The axioms~\eqref{eq:compeqn} follow directly from those for \fhat. \end{proof} \begin{lem}\label{thm:comp-ten} Suppose that \lD\ is a monoidal double category and that $f\maps A\to B$ and $g\maps C\to D$ have companions \fhat\ and \ghat\ respectively. Then $\fhat\ten\ghat$ is a companion of $f\ten g$. \end{lem} \begin{proof} This follows from \autoref{thm:comp-func}, since $\ten\maps \lD\times\lD\to\lD$ is a functor, and a companion in $\lD\times\lD$ is simply a pair of companions in \lD. \end{proof} \begin{lem}\label{thm:theta-func} Suppose that $f\maps \lD\to\lE$ is a functor and that $f\maps A\to B$ has companions \fhat\ and $\fhat'$ in \lD. Then $\theta_{F(\fhat),F(\fhat')} = F(\theta_{\fhat,\fhat'})$. \end{lem} \begin{proof} Using the axioms of a pseudo double functor and the definition of the 2-morphisms in \autoref{thm:comp-func}, we have \begin{equation} F(\theta_{\fhat,\fhat'}) =\; \vcenter{\xymatrix@C=4.5pc{ \ar[r]|-@{|}^-{F(\fhat)} \ar[d] \ar@{}[dr]|{F(\Downarrow\odot\Downarrow)} & \ar[d]\\ \ar[r]|-@{|}_-{F(\fhat')} &}} \;=\; \vcenter{\xymatrix@C=2pc{ \ar[rr]|-@{|}^-{F(\fhat)} \ar@{=}[d] \ar@{}[drr]|\iso && \ar@{=}[d]\\ \ar[r]|-@{|}^-{F(U_{A})} \ar@{=}[d] \ar@{}[dr]|{F(\Downarrow)} & \ar[r]|-@{|}^-{F(\fhat)} \ar[d]|{F(f)} \ar@{}[dr]|{F(\Downarrow)} & \ar@{=}[d]\\ \ar[r]|-@{|}_-{F(\fhat')} \ar@{}[drr]|\iso\ar@{=}[d] & \ar[r]|-@{|}_-{U_{F(B)}} & \ar@{=}[d]\\ \ar[rr]|-@{|}_-{F(\fhat')} && }} \;=\; \vcenter{\xymatrix@R=1.5pc@C=2.5pc{ \ar[r]|-@{|}^-{U_{F(A)}} \ar@{=}[d] \ar@{}[dr]|\iso & \ar[r]|-@{|}^-{F(\fhat)} \ar@{=}[d] \ar@{}[dr]|= & \ar@{=}[d]\\ \ar[r]|-{F(U_{A})} \ar@{=}[d] \ar@{}[dr]|{F(\Downarrow)} & \ar[r]|-{F(\fhat)} \ar[d]|{F(f)} \ar@{}[dr]|{F(\Downarrow)} & \ar@{=}[d]\\ \ar[r]|-{F(\fhat')} \ar@{=}[d] \ar@{}[dr]|= & \ar[r]|-{F(U_{B})} \ar@{}[dr]|\iso \ar@{=}[d] & \ar@{=}[d]\\ \ar[r]|-@{|}_-{F(\fhat')} & \ar[r]|-@{|}_-{U_{F(B)}} &}} \;= \theta_{F(\fhat),\,F(\fhat')} \end{equation} as desired. \end{proof} \begin{lem}\label{thm:theta-ten} Suppose that \lD\ is a monoidal double category, that $f\maps A\to B$ has companions \fhat\ and $\fhat'$, and that $g\maps C\to D$ has companions \ghat\ and $\ghat'$. Then $\theta_{\fhat,\fhat'} \ten \theta_{\ghat,\ghat'} = \theta_{\fhat\ten \ghat, \fhat'\ten\ghat'}.$ \end{lem} \begin{proof} This follows from \autoref{thm:theta-func} in the same way that \autoref{thm:comp-ten} follows from \autoref{thm:comp-func}. \end{proof} \begin{lem}\label{thm:comp-iso} If $f\maps A\to B$ is a vertical isomorphism with a companion \fhat, then \fhat\ is a conjoint of its inverse $f\inv$. 
\end{lem} \begin{proof} The composites \[\vcenter{\[email protected]{ \ar[r]|-@{|}^{\fhat}\ar[d]_f \ar@{}[dr]|{\Downarrow} & \ar@{=}[d]\\ \ar[r]|{U_B}\ar[d]_{f\inv} \ar@{}[dr]|{\Downarrow U_{f\inv}} & \ar[d]^{f\inv}\\ \ar[r]|-@{|}_{U_A} & }}\quad\text{and}\quad \vcenter{\[email protected]{ \ar[r]|-@{|}^{U_B}\ar[d]_{f\inv} \ar@{}[dr]|{\Downarrow U_{f\inv}} & \ar[d]^{f\inv}\\ \ar[r]|{U_A}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[d]^f\\ \ar[r]|-@{|}_{\fhat} & }} \] exhibit \fhat\ as a conjoint of $f\inv$. \end{proof} \begin{lem}\label{thm:compconj-adj} If $f\maps A\to B$ has both a companion \fhat\ and a conjoint \fchk, then we have an adjunction $\fhat\adj\fchk$ in $\cH\lD$. If $f$ is an isomorphism, then this is an adjoint equivalence. \end{lem} \begin{proof} The unit and counit of the adjunction $\fhat\adj\fchk$ are the composites \[\vcenter{\[email protected]{ \ar[r]|-@{|}^{U_A}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]|-@{|}^{U_A}\ar[d]|{f} \ar@{}[dr]|{\Downarrow} & \ar@{=}[d]\\ \ar[r]|-@{|}_{\fhat} & \ar[r]|-@{|}_{\fchk} & }}\quad\text{and}\quad \vcenter{\[email protected]{ \ar[r]|-@{|}^{\fchk}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]|-@{|}^{\fhat}\ar[d]|{f} \ar@{}[dr]|{\Downarrow} & \ar@{=}[d]\\ \ar[r]|-@{|}_{U_B} & \ar[r]|-@{|}_{U_B} & }} \] The triangle identities follow from~\eqref{eq:compeqn}. If $f$ is an isomorphism, then by the dual of \autoref{thm:comp-iso}, \fchk\ is a companion of $f\inv$. But then by \autoref{thm:comp-compose} $\fchk\odot \fhat$ is a companion of $1_A=f\inv \circ f$ and $\fhat\odot\fchk$ is a companion of $1_B = f\circ f\inv$, and hence \fhat\ and \fchk\ are equivalences. We can then check that in this case the above unit and counit actually are the isomorphisms $\theta$, or appeal to the general fact that any adjunction involving an equivalence is an adjoint equivalence. \end{proof} \begin{rmk} Our intended applications actually only require our double categories to have companions and conjoints for vertical \emph{isomorphisms}; we may call a double category with this property \textbf{isofibrant}. Note that by \autoref{thm:comp-iso}, having companions for all isomorphisms implies having conjoints for all isomorphisms. However, most examples we are interested in have all companions and conjoints, and these are useful for other purposes as well; see~\cite{shulman:frbi}. Moreover, if we are given a double category in which only vertical isomorphisms have companions, we can still apply our theorems to it as written, simply by first discarding all noninvertible vertical 1-morphisms. \end{rmk} \section{From double categories to bicategories} \label{sec:1x1-to-bicat} We are now equipped to lift structures on fibrant double categories to their horizontal bicategories. In this section we show that passage from fibrant double categories to bicategories is functorial; in the next section we show that it preserves monoidal structure. As a point of notation, we write $\odot$ for the composition of 1-cells in a bicategory, since our bicategories are generally of the form $\cH(\lD)$. As advocated by Max Kelly, we say \textbf{functor} to mean a morphism between bicategories that preserves composition up to isomorphism; equivalent terms include \emph{weak 2-functor}, \emph{pseudofunctor}, and \emph{homomorphism}. \begin{thm} If \lD\ is a double category, then $\cH(\lD)$ is a bicategory, and any functor $F\maps \lD\to\lE$ induces a functor $\cH(F)\maps \cH(\lD)\to\cH(\lE)$. In this way $\cH$ defines a functor of 1-categories $\mathbf{Dbl}\to \mathbf{Bicat}$. 
\end{thm} \begin{proof} The constraints of $F$ are all globular, hence give constraints for $\cH(F)$. Functoriality is evident. \end{proof} The action of \cH\ on transformations, however, is less obvious, and requires the presence of companions or conjoints. Recall that if $F,G\maps \cA\to\cB$ are functors between bicategories, then an \textbf{oplax transformation} $\al\maps F\to G$ consists of 1-cells $\al_A\maps FA\to GA$ and 2-cells \[\vcenter{\xymatrix{ \ar[r]^{Ff}\ar[d]_{\al_A} \drtwocell\omit{\al_f} & \ar[d]^{\al_B}\\ \ar[r]_{Gf} & }}\] such that for any 2-cell $\xymatrix{A \rtwocell^f_g{x} & B}$ in \cA, \begin{equation} \label{eq:laxtransf-nat} \vcenter{\xymatrix@R=1pc@C=3pc{ \rtwocell^{Ff}_{Fg}{Fx}\ar[dd]_{\al_A} & \ar[dd]^{\al_B}\\ \drtwocell\omit{\al_g} & \\ \ar[r]_{Gg} & }}\;=\; \vcenter{\xymatrix@R=1pc@C=3pc{ \ar[r]^{Ff}\ar[dd]_{\al_A} \drtwocell\omit{\al_f} & \ar[dd]^{\al_B}\\ & \\ \rtwocell^{Gf}_{Gg}{Gx} & }} \end{equation} and moreover for any $A$ and any $f,g$ in \cA, \begin{equation} \vcenter{\xymatrix@R=5pc{ \rtwocell^{1_{FA}}_{F(1_A)}{\iso} \ar[d]_{\al_A} \drtwocell\omit{\al_{1_A}} & \ar[d]^{\al_A}\\ \rtwocell^{G(1_A)}_{1_{GA}}{\iso} & }} \;=\; \vcenter{\xymatrix{ \ar[r]^{1_{FA}}\ar[d]_{\al_A} \drtwocell\omit{\iso}& \ar[d]^{\al_A}\\ \ar[r]_{1_{GA}} & }} \quad\text{and}\quad \vcenter{\xymatrix{ \ar[r]|{Ff}\ar[d]_{\al_A} \drtwocell\omit{\al_f} \rruppertwocell^{F(gf)}{\iso} & \ar[r]|{Fg}\ar[d]|{\al_B} \drtwocell\omit{\al_g} & \ar[d]^{\al_C}\\ \ar[r]|{Gf} \rrlowertwocell_{G(gf)}{\iso} & \ar[r]|{Gg} & }} \;=\; \vcenter{\xymatrix{ \ar[r]^{F(gf)}\ar[d]_{\al_A} \drtwocell\omit{\al_{gf}} & \ar[d]^{\al_C}\\ \ar[r]_{G(gf)} & }}\label{eq:laxtransf-ax} \end{equation} It is a \textbf{lax transformation} if the 2-cells $\al_f$ go the other direction, and a \textbf{pseudo transformation} if they are isomorphisms. By doctrinal adjunction~\cite{kelly:doc-adjn}, given collections of 1-cells $\al_A\maps FA\to GA$ and $\be_A\maps GA\to FA$ and adjunctions $\al_A\adj \be_A$ in \cB, there is a bijection between \begin{inparaenum} \item collections of 2-cells $\al_f$ making $\al$ an oplax transformation and \item collections of 2-cells $\be_f$ making $\be$ a lax transformation. \end{inparaenum} Two such transformations correspond under this bijection if and only if \begin{equation} \vcenter{\[email protected]{F(f) \ar[r]^-{\eta \odot F(f)} \ar[d]_{F(f)\odot \eta} & \be_B\odot \al_B \odot F(f) \ar[d]^{\be_B \odot \al_f}\\ F(f) \odot \be_A\odot \al_A\ar[r]_-{\be_f \odot \al_A} & \be_B\odot G(f) \odot \al_A}} \quad\text{and}\quad \vcenter{\[email protected]{\al_B\odot F(f)\odot \be_A \ar[r]^-{\al_B\odot \be_f}\ar[d]_{\al_f \odot \be_A}& \al_B \odot \be_B \odot G(f)\ar[d]^{\ep \odot G(f)}\\ G(f)\odot \al_A\odot \be_A \ar[r]_-{G(f) \odot \ep} & G(f)}}\label{eq:conjtrans} \end{equation} commute. If we have a pointwise adjunction between an oplax and a lax transformation, whose 2-cell structures correspond under this bijection, we call it a \textbf{conjunctional transformation} $(\al\conj \be)\maps F\to G$. (These are the conjoint pairs in a double category whose horizontal arrows are lax transformations and whose vertical arrows are oplax transformations.) Of particular importance is the case when both $\al$ and \be\ are pseudo natural and each adjunction $\al_A\adj \be_A$ is an adjoint equivalence. In this case we call $\al\conj \be$ a \textbf{pseudo natural adjoint equivalence}. 
A pseudo natural adjoint equivalence can equivalently be defined as an internal equivalence in the bicategory $\cBicat(\cA,\cB)$ of functors, pseudo natural transformations, and modifications $\cA\to\cB$. Recall also that if $\al,\al'\maps F\to G$ are oplax transformations, a \textbf{modification} $\mu\maps \al\to\al'$ consists of 2-cells $\mu_A\maps \al_A\to\al'_A$ such that \begin{equation} \vcenter{\xymatrix@C=1pc@R=2.5pc{ \ar[rr]^{Ff}\dtwocell_{\al'_A}^{\al_A}{\mu_A} & \drtwocell\omit{\al_f} & \ar[d]^{\al_B}\\ \ar[rr]_{Gf} && }} \quad=\quad \vcenter{\xymatrix@C=1pc@R=2.5pc{ \ar[rr]^{Ff}\ar[d]_{\al'_A} \drtwocell\omit{\al'_f} && \dtwocell^{\al_B}_{\al'_B}{\mu_B}\\ \ar[rr]_{Gf} && }}\label{eq:modif-ax} \end{equation} There is an evident notion of modification between lax transformations as well. Finally, given conjunctional transformations $\al\conj\be$ and $\al'\conj \be'$, there is a bijection between modifications $\al\to\al'$ and $\be'\to\be$, where $\mu\maps \al\to\al'$ corresponds to $\bar{\mu}\maps \be'\to\be$ with components $\bar{\mu}_A$ defined by: \[\vcenter{\[email protected]{ && FA \ar@{=}[drr] \ddtwocell<5>^{\al_A}_{\al'_A}{\mu_A}\\ GA \ar[urr]^{\be'_A} \ar@{=}[drr] & \Swarrow_\ep && \Swarrow_\eta & FA\\ &&GA\ar[urr]_{\be_A} }}\] The modifications $\bar{\mu}$ and \mu\ are called \textbf{mates}, and are compatible with composition (see \cite{ks:r2cats}). Thus, given $\cA,\cB$ we can define a bicategory $\Conj(\cA,\cB)$, whose objects are functors $\cA\to\cB$, whose 1-cells are conjunctional transformations considered as pointing in the direction of their left adjoints, and whose 2-cells are mate-pairs of modifications. \begin{thm}\label{thm:h-locfr} If \lD\ is a double category and \lE\ is a fibrant double category with chosen companions and conjoints, we have a functor \begin{align} \cDbl(\lD,\lE) &\too \Conj(\cH(\lD),\cH(\lE))\\ F &\mapsto \cH(F)\\ \al &\mapsto (\alhat\conj\alchk). \end{align} Moreover, if \al\ is an isomorphism, then $\alhat\conj\alchk$ is a pseudo natural adjoint equivalence. \end{thm} Note that we are here regarding the 1-category $\cDbl(\lD,\lE)$ as a bicategory with only identity 2-cells. \begin{proof} We denote the chosen companion and conjoint of $f$ in \lE\ by \fhat\ and \fchk, as usual. We define $\alhat$ as follows: its 1-cell components are $\alhat_A = \widehat{\al_A}$, and its 2-cell component $\alhat_f$ is the composite \begin{equation} \vcenter{\xymatrix@R=1.5pc@C=2.5pc{ \ar[r]|-@{|}^{U_{FA}}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]^{Ff}\ar[d]|{\al_A} \ar@{}[dr]|{\Downarrow \al_f} & \ar[r]|-@{|}^{\alhat_B}\ar[d]|{\al_B} \ar@{}[dr]|{\Downarrow} & \ar@{=}[d]\\ \ar[r]|-@{|}_{\alhat_A} & \ar[r]_{Gf} & \ar[r]|-@{|}_{U_{GB}} & }}\label{eq:oplax-2cell} \end{equation} Equations~\eqref{eq:laxtransf-nat} and~\eqref{eq:laxtransf-ax} follow directly from \autoref{thm:dbl-transf}. The construction of $\alchk$ is dual, using conjoints, and \autoref{thm:compconj-adj} shows that $\alhat_A\adj \alchk_A$. 
For the first equation in~\eqref{eq:conjtrans}, we have \begin{equation} \vcenter{\xymatrix{ \ar[r]|-@{|}^{U_{FA}}\ar@{=}[d] \ar@{}[dr]|{=} & \ar[r]|-@{|}^{Ff}\ar@{=}[d] \ar@{}[dr]|{=} & \ar[r]|-@{|}^{U_{FB}}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]|-@{|}^{U_{FB}}\ar[d]|{\al_B} \ar@{}[dr]|{\Downarrow} & \ar@{=}[d]\\ \ar[r]|{U_{FA}}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]|{Ff}\ar[d]|{\al_A} \ar@{}[dr]|{\Downarrow\al_f} & \ar[r]|{\alhat_B}\ar[d]|{\al_B} \ar@{}[dr]|{\Downarrow} & \ar[r]|{\alchk_B}\ar@{=}[d] \ar@{}[dr]|{=} & \ar@{=}[d]\\ \ar[r]|-@{|}_{\alhat_A} & \ar[r]|-@{|}_{Gf} & \ar[r]|-@{|}_{U_{GB}} & \ar[r]|-@{|}_{\alchk_B} & }}\;=\; \vcenter{\xymatrix{ \ar[r]|-@{|}^{U_{FA}}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]|-@{|}^{Ff}\ar[d]|{\al_A} \ar@{}[dr]|{\Downarrow \al_f} & \ar[r]|-@{|}^{U_{FB}}\ar[d]|{\al_B} \ar@{}[dr]|{\Downarrow} & \ar@{=}[d]\\ \ar[r]|-@{|}_{\alhat_A} & \ar[r]|-@{|}_{Gf} & \ar[r]|-@{|}_{\alchk_B} & }}\;=\; \vcenter{\xymatrix{ \ar[r]|-@{|}^{U_{FA}}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]|-@{|}^{U_{FA}}\ar[d]|{\al_A} \ar@{}[dr]|{\Downarrow} & \ar[r]|-@{|}^{Ff}\ar@{=}[d] \ar@{}[dr]|{=} & \ar[r]|-@{|}^{U_{FB}}\ar@{=}[d] \ar@{}[dr]|{=} & \ar@{=}[d]\\ \ar[r]|{\alhat_A}\ar@{=}[d] \ar@{}[dr]|{=} & \ar[r]|{\alchk_A}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]|{Ff}\ar[d]|{\al_A} \ar@{}[dr]|{\Downarrow\al_f} & \ar[r]|{U_{FB}}\ar[d]|{\al_B} \ar@{}[dr]|{\Downarrow} & \ar@{=}[d]\\ \ar[r]|-@{|}_{\alhat_A} & \ar[r]|-@{|}_{U_{GA}} & \ar[r]|-@{|}_{Gf} & \ar[r]|-@{|}_{\alchk_B} & }}, \end{equation} and the second is dual. Thus $(\alhat\conj\alchk)$ is a conjunctional transformation. Now suppose given $\al\maps F\to G$ and $\be\maps G\to H$. Then by \autoref{thm:comp-compose}, $\behat_A\odot\alhat_A$ is a companion of $\be_A\circ \al_A$, so we have a canonical isomorphism \[\theta_{\widehat{\be\al}_A, \,\behat_A\odot\alhat_A}\maps \widehat{\be\al}_A \too[\iso] \behat_A\odot\alhat_A. \] Of course, we also have $\theta_{\widehat{1_A},U_A}\maps \widehat{1_A} \too[\iso] U_A$ by \autoref{thm:comp-unit}. These constraints are automatically natural, since $\cDbl(\lD,\lE)$ has no nonidentity 2-cells. The axiom for the composition constraint says that two constructed isomorphisms \[\widehat{\gm\be\al}_A \too[\iso] (\gmhat_A \odot \behat_A)\odot \alhat_A\] are equal. However, both $\widehat{\gm\be\al}_A$ and $(\gmhat_A \odot \behat_A)\odot \alhat_A$ are companions of $\gm_A\be_A\al_A$, and both of these isomorphisms are constructed from composites (both $\circ$-composites and $\odot$-composites) of $\theta$s; hence by Lemmas \ref{thm:theta-compose-vert} and \ref{thm:theta-compose-horiz} they are both equal to \[\theta_{\widehat{\gm\be\al}_A,\, (\gmhat_A \odot \behat_A)\odot \alhat_A}\] and thus equal to each other. The same argument applies to the axioms for the unit constraint; thus we have a functor of bicategories. Finally, if $\al$ is an isomorphism, then in particular each $\al_A$ is an isomorphism, so by \autoref{thm:compconj-adj} each $\alhat_A\adj \alchk_A$ is an adjoint equivalence. But \al\ being an isomorphism also implies that each 2-cell \[\vcenter{\xymatrix{ \ar[r]|-@{|}^-{Ff} \ar[d]_{\al_A} \ar@{}[dr]|{\Downarrow\al_f} & \ar[d]^{\al_B}\\ \ar[r]|-@{|}_-{Gf} & }}\] is an isomorphism.
From its inverse we form the composite \[\vcenter{\xymatrix@R=1.5pc@C=3pc{ \ar[r]|-@{|}^{\alhat_A}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]^{Gf}\ar[d]|{\al_A\inv} \ar@{}[dr]|{\Downarrow\al_f\inv} & \ar[r]|-@{|}^{U_{GB}}\ar[d]|{\al_B\inv} \ar@{}[dr]|{\Downarrow} & \ar@{=}[d]\\ \ar[r]|-@{|}_{U_{FA}}& \ar[r]_{Ff} & \ar[r]|-@{|}_{\alchk_A} & }}\] which we can then verify to be an inverse of~\eqref{eq:oplax-2cell}. Thus $\alhat$, and dually $\alchk$, is pseudo natural, and hence $\alhat\conj\alchk$ is a pseudo natural adjoint equivalence. \end{proof} We can also promote \autoref{thm:theta} to a functorial uniqueness. \begin{lem}\label{thm:h-locfr-uniq} Let \lD\ be a double category and \lE\ a fibrant double category with two different sets of choices $\fhat,\fchk$ and $\fhat',\fchk'$ of companions and conjoints for each vertical 1-morphism $f$, giving rise to two different functors \[\cH,\cH'\maps \cDbl(\lD,\lE)\too \Conj(\cH(\lD),\cH(\lE)).\] Then the isomorphisms $\theta$ from \autoref{thm:theta} fit together into a pseudo natural adjoint equivalence $\cH\eqv \cH'$ which is the identity on objects. \end{lem} \begin{proof} We must first show that for a given transformation $\al\maps F\to G\maps \lD\to\lE$ in \cDbl, the isomorphisms \th\ form an invertible modification $\alhat \iso \alhat'$. Substituting~\eqref{eq:oplax-2cell} and the definition of \th\ into~\eqref{eq:modif-ax}, this becomes the assertion that \begin{equation} \vcenter{\xymatrix@R=1.5pc@C=2pc{ & \ar[r]|-@{|}^{U_{FA}}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]^{Ff}\ar[d]|{\al_A} \ar@{}[dr]|{\Downarrow \al_f} & \ar[r]|-@{|}^{\alhat_B}\ar[d]|{\al_B} \ar@{}[dr]|{\Downarrow} & \ar@{=}[d]\\ \ar[r]|-@{|}^{U_{FA}} \ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]|{\alhat_A} \ar[d]|{\al_A} \ar@{}[dr]|{\Downarrow}& \ar[r]_{Gf} \ar@{=}[d] & \ar[r]|-@{|}_{U_{GB}} & \\ \ar[r]|-@{|}_{\alhat_A'} & \ar[r]|-@{|}_{U_{GB}}&& }} \;=\; \vcenter{\xymatrix@R=1.5pc@C=2pc{ && \ar@{=}[d] \ar[r]|-@{|}^{U_{FA}} \ar@{}[dr]|{\Downarrow} & \ar[d]|{\al_B} \ar[r]|-@{|}^{\alhat_B} \ar@{}[dr]|{\Downarrow} & \ar@{=}[d] &\\ \ar[r]|-@{|}^{U_{FA}}\ar@{=}[d] \ar@{}[dr]|{\Downarrow} & \ar[r]^{Ff}\ar[d]|{\al_A} \ar@{}[dr]|{\Downarrow \al_f} & \ar[r]|{\alhat_B'}\ar[d]|{\al_B} \ar@{}[dr]|{\Downarrow} & \ar@{=}[d] \ar[r]|-@{|}_{U_{GB}}&\\ \ar[r]|-@{|}_{\alhat_A'} & \ar[r]_{Gf} & \ar[r]|-@{|}_{U_{GB}} & . }} \end{equation} This follows from two applications of~\eqref{eq:compeqn}, one for $\alhat_A$ and one for $\alhat_B'$. (The mate of \th\ is, of course, uniquely determined.) Now, to show that these form a pseudo natural adjoint equivalence, it remains only to check that they do, in fact, form a pseudo natural transformation which is the identity on objects, i.e.\ that~\eqref{eq:laxtransf-nat} and~\eqref{eq:laxtransf-ax} are satisfied. But~\eqref{eq:laxtransf-nat} is vacuous since $\cDbl(\lD,\lE)$ has no nonidentity 2-cells, and~\eqref{eq:laxtransf-ax} follows from Lemmas \ref{thm:theta-compose-vert} and \ref{thm:theta-compose-horiz} since all the constraints involved are also instances of \th. \end{proof} It seems that we should have a functor from fibrant double categories to a tricategory of bicategories, functors, conjunctional transformations, and modifications, but there is no tricategory containing conjunctional transformations since the interchange law only holds laxly. However, we can say the following. 
Let $\cDbl^f_g$ denote the sub-2-category of \cDbl\ containing the fibrant double categories, all functors between them, and only the transformations that are isomorphisms, and let \cBicat\ denote the tricategory of bicategories, functors, pseudo natural transformations, and modifications. \begin{thm}\label{thm:h-functor} There is a functor of tricategories $\cH\maps \cDbl^f_g\to \cBicat$. \end{thm} \begin{proof} The definition of functors between tricategories can be found in~\cite{gps:tricats} or~\cite{nick:tricats}. In addition to \autoref{thm:h-locfr}, we require pseudo natural (adjoint) equivalences $\chi$ and $\iota$ relating composition and units in $\cDbl^f_g$ and \cBicat, and modifications relating composites of these, which satisfy various axioms. However, since composition of 1-cells in $\cDbl^f_g$ and \cBicat\ is strictly associative and unital, \cH\ strictly preserves this composition, and $\cDbl^f_g$ has no nonidentity 3-cells, this merely amounts to the following. Firstly, for every pair of transformations \[\vcenter{\xymatrix{\lC \rtwocell^F_G{\al} & \lD \rtwocell^H_K{\be} & \lE}}\] between fibrant double categories, we require an invertible modification $\chi\maps \behat * \alhat \iso \widehat{\be*\al}$ such that \begin{equation} \vcenter{\xymatrix{1 \ar[r] \ar[dr] & \hat{1}*\hat{1} \ar[d]^\chi\\ & \widehat{1*1} }} \quad\text{and}\quad \vcenter{\xymatrix{ \widehat{\gm\al}*\widehat{\de\be} \ar[r]\ar[d]_\chi & (\gmhat*\dehat)(\alhat*\behat)\ar[d]^{\chi\chi}\\ \widehat{\gm\al*\de\be}\ar[r] & (\widehat{\gm*\de})(\widehat{\al*\be})}} \end{equation} commute. (Here we are writing $*$ for the `Godement product' of 2-cells in $\cDbl$ and $\cBicat$.) These are the 2-cell components of the composition constraint, its 1-cell components being identities. Now by Lemmas \ref{thm:comp-compose} and \ref{thm:comp-func}, $(\behat *\alhat)_A = \behat_{GA} \circ H(\alhat_A)$ is a companion of $(\be*\al)_A = \be_{GA} \circ H(\al_A)$. Therefore, we take the component $\chi_A$ to be \[\theta_{\behat_{GA} \circ H(\alhat_A),\, \widehat{\be*\al}_A}.\] Equation~\eqref{eq:modif-ax}, saying that these form a modification, becomes the equality of two large composites of 2-cells in \lD, which as usual follows from~\eqref{eq:compeqn}. Secondly, for every $F\maps \lD\to\lE$ we require an isomorphism $\iota\maps \widehat{1_F} \iso 1_{\cH(F)}$ satisfying a couple of axioms which simply require it to be equal to the unit constraint of the local functor \cH\ from \autoref{thm:h-locfr}; these are the 2-cell components of the unit constraint. Finally, the required modifications merely amount to the \emph{assertions} that \[\vcenter{\xymatrix{\gmhat*\behat*\alhat \ar[r]^\chi\ar[d]_\chi & \widehat{\gm*\be}*\alhat \ar[d]^\chi\\ \gmhat*\widehat{\be*\al}\ar[r]_\chi & \widehat{\gm*\be*\al}}},\qquad \vcenter{\xymatrix{ \alhat \ar[r]^-\iota \ar@{=}[dr] & \widehat{1_F}*\alhat \ar[d]^\chi \\ & \alhat }}, \;\text{and}\qquad \vcenter{\xymatrix{ \alhat \ar[r]^-\iota \ar@{=}[dr] & \alhat*\widehat{1_F} \ar[d]^\chi \\ & \alhat }}\] commute; again this follows from \autoref{thm:theta-compose-vert}. \end{proof} We end this section with one final lemma. \begin{lem}\label{thm:theta-nat} Suppose $F,G\maps \lD\to\lE$ are functors, $\al\maps F\to G$ is a transformation, and that $f\maps A\to B$ has a companion \fhat\ in \lD.
Then the oplax comparison 2-cell for \alhat: \[\vcenter{\xymatrix{ \ar[r]^{F(\fhat)}\ar[d]_{\alhat_A} \drtwocell\omit{\;\alhat_{\fhat}}& \ar[d]^{\alhat_B}\\ \ar[r]_{G(\fhat)} & }}\] is equal to $\theta_{\alhat_B\odot F(\fhat),\, G(\fhat) \odot \alhat_A}$ (and in particular is an isomorphism). \end{lem} \begin{proof} By definition $\alhat_A$ and $\alhat_B$ are companions of $\al_A$ and $\al_B$, respectively, and by \autoref{thm:comp-func} $F(\fhat)$ and $G(\fhat)$ are companions of $F(f)$ and $G(f)$, respectively. Thus, by \autoref{thm:comp-compose} the domain and codomain of $\alhat_{\fhat}$ are both companions of $G(f) \circ \al_A = \al_B \circ F(f)$, so at least the asserted $\theta$ isomorphism exists. Now, by taking the definition~\eqref{eq:oplax-2cell} of $\alhat_{\fhat}$ and substituting it for $\theta$ in~\eqref{eq:comp-iso}, using the axioms for companions and the naturality of $\al$ on 2-morphisms, we see that $\alhat_{\fhat}$ satisfies~\eqref{eq:comp-iso} and hence must be equal to $\theta$. \end{proof} \section{Symmetric monoidal bicategories} \label{sec:constr-symm-mono} We are now ready to lift monoidal structures from double categories to bicategories. If we had a theory of symmetric monoidal tricategories, we could do this by improving \autoref{thm:h-functor} to say that $\cH$ is a symmetric monoidal functor, and then conclude that it preserves pseudomonoids. However, in the absence of such a theory, we give a direct proof. \begin{thm}\label{thm:mon11-monbi} If \lD\ is a fibrant monoidal double category, then $\cH(\lD)$ is a monoidal bicategory. If \lD\ is braided, so is $\cH(\lD)$, and if \lD\ is symmetric, so is $\cH(\lD)$. \end{thm} \begin{rmk} For monoidal bicategories, there is a notion in between braided and symmetric, called \emph{sylleptic}, in which the braiding is self-inverse up to an isomorphism (the \emph{syllepsis}) but this isomorphism is not maximally coherent. Since in our approach the syllepsis will be an isomorphism of the form $\theta_{\fhat,\fhat'}$, it is \emph{always} maximally coherent; thus our method cannot produce sylleptic monoidal bicategories that are not symmetric. \end{rmk} \begin{proof}[Proof of \autoref{thm:mon11-monbi}] A monoidal bicategory is defined to be a tricategory with one object. We use the definition of tricategory from~\cite{nick:tricats}, which is the same as that of~\cite{gps:tricats} except that the associativity and unit constraints are pseudo natural adjoint equivalences, rather than merely pseudo transformations whose components are equivalences. The functor \cH\ evidently preserves products, so $\ten\maps \lD\times\lD\to\lD$ induces a functor $\ten\maps \cH(\lD)\times\cH(\lD)\to \cH(\lD)$, and of course $I$ is still the unit. The associativity constraint of \lD\ is a natural isomorphism \[\vcenter{\xymatrix@C=5pc{\lD\times\lD\times\lD \rtwocell^{\ten (\Id\times\ten)}_{\ten(\ten\times\Id)}{\fa\iso} &\lD }}\] so by \autoref{thm:h-locfr} it gives rise to a pseudo natural adjoint equivalence \[\vcenter{\xymatrix@C=6pc{\cH(\lD)\times\cH(\lD)\times\cH(\lD) \rtwocell^{\ten (\Id\times\ten)}_{\ten(\ten\times\Id)}{\fahat\eqv} &\cH(\lD) }}\] Likewise, the unit constraints of \lD\ induce pseudo natural adjoint equivalences. The final four pieces of data for a monoidal bicategory are invertible modifications relating various composites of the associativity and unit transformations.
The first is a ``pentagonator'' which relates the two ways to go around the Mac Lane pentagon: \[\xy (-10,0)*{((A\ten B)\ten C)\ten D}="A"; (20,10)*{(A\ten (B\ten C))\ten D}="B"; (50,0)*{A\ten ((B\ten C)\ten D)}="C"; (0,-15)*{(A\ten B)\ten (C\ten D)}="D"; (40,-15)*{A\ten (B\ten (C\ten D))}="E"; (20,-5)*{\scriptstyle\pi\Downarrow\iso}; \ar "B";"A";^{\fahat\ten U_D} \ar "C";"B";^{\fahat} \ar "D";"A";_{\fahat} \ar "E";"D";_{\fahat} \ar "E";"C";^{U_A\ten \fahat} \endxy \] Now by Lemmas \ref{thm:comp-compose} and \ref{thm:comp-ten}, both sides of this pentagon in $\cH(\lD)$ are companions of the corresponding sides of the pentagon in $\lD_0$. Since the pentagon in $\lD_0$ commutes, we have an isomorphism $\theta$ between the two sides of the pentagon in $\cH(\lD)$, which we take to be $\pi$. That \pi\ is in fact a modification follows from \autoref{thm:h-locfr-uniq}. We construct the other invertible modifications $\mu, \lambda, \rho$ in the same way. Finally, we must show that three equations between pasting composites of 2-cells hold, relating composites of $\pi,\mu,\lambda,\rho$. However, in each of these equations, both the domain and the codomain of the 2-cells involved are companions of the same isomorphism in $\lD_0$. For the 5-associahedron, this isomorphism is the unique constraint \[(((A\ten B)\ten C)\ten D)\ten E \too[\iso] A\ten (B\ten (C\ten (D\ten E))); \] for the other two it is simply the associator $(A\ten B)\ten C \too[\iso] A\ten (B\ten C)$. By Lemmas \ref{thm:theta-unit}, \ref{thm:theta-ten}, and \ref{thm:theta-nat}, every 2-cell in these diagrams is a $\theta$ isomorphism relating two companions of the same vertical isomorphism. Therefore, Lemmas \ref{thm:theta-compose-vert} and \ref{thm:theta-compose-horiz} imply that each pasting diagram is also a $\theta$ isomorphism between its domain and codomain. The uniqueness of $\theta$ then implies that the three equations hold. Now suppose that \lD\ is braided; to show that $\cH(\lD)$ is braided we seemingly must first have a definition of braided monoidal bicategory. The interested reader may follow the tortuous path of the definition of braided monoidal 2-categories and bicategories through the literature, starting from~\cite{kv:2cat-zam,kv:bm2cat} and continuing, with occasional corrections, through~\cite{bn:hda-i,ds:monbi-hopfagbd,crans:centers,mccrudden:bal-coalgb}, and~\cite{gurski:brmonbicat}. However, the details of the definition are essentially unimportant for us; since our constraints and coherence are produced in a universal way, any reasonable data can be produced and any reasonable axioms will be satisfied. For concreteness, we use the definition of~\cite{mccrudden:bal-coalgb}. The first piece of data we require to make $\cH(\lD)$ braided is a pseudo natural adjoint equivalence $\mathord{\otimes} \too[\eqv] \mathord{\otimes}\circ \tau$, where $\tau$ is the switch isomorphism. This arises by \autoref{thm:h-locfr} from the braiding of \lD. 
We also require two invertible modifications filling the usual hexagons for a braiding: \[\vcenter{\xymatrix@-1pc{ & \mathclap{(A\ten B)\ten C}\phantom{C_C} \ar[dl]\ar[dr] \\ (B\ten A)\ten C \ar[d] & \dtwocell\omit{\ze \iso} & A\ten (B\ten C)\ar[d]\\ B\ten (A\ten C) \ar[dr] && (B\ten C)\ten A \ar[dl]\\ & \mathclap{B\ten (C\ten A)}\phantom{C^C} }}\quad\text{and}\quad \vcenter{\xymatrix@-1pc{ & \mathclap{A\ten (B\ten C)}\phantom{C_C} \ar[dl]\ar@{<-}[dr]\\ A\ten (C\ten B)\ar[d] & \dtwocell\omit{\xi \iso}& (A\ten B)\ten C\ar[d]\\ (A\ten C)\ten B \ar[dr] && C\ten (A\ten B) \ar@{<-}[dl]\\ & \mathclap{(C\ten A)\ten B}\phantom{C^C} }} \] As before, since the corresponding hexagons commute in $\lD_0$, and by Lemmas \ref{thm:comp-compose} and \ref{thm:comp-ten} each side of each hexagon in $\cH(\lD)$ is a companion to the corresponding side in $\lD_0$, we have $\theta$ isomorphisms that we can take as $\ze$ and $\xi$. Finally, we must verify that the four 2-cell diagrams in~\cite[p136--139]{mccrudden:bal-coalgb} involving \ze\ and \xi\ commute. As with the axioms for a monoidal bicategory, both sides of these equalities are made up of $\theta$s relating companions of a single morphism in $\lD_0$, and thus by uniqueness they must be equal. Now suppose that \lD\ is symmetric. To make $\cH(\lD)$ symmetric, we require first a \emph{syllepsis}, i.e.\ an invertible modification \[\vcenter{\xymatrix{A\ten B \ar@{=}[rr] \ar[dr] & \ar@{}[d]|-{\Downarrow \nu\iso} & A\ten B \ar@{<-}[dl]\\ & B\ten A }}\] Since the braiding in $\lD_0$ is self-inverse, the top and bottom of this triangle are both companions of $1_{A\ten B}$; thus we have a $\theta$ isomorphism between them which we take as $\nu$. For $\cH(\lD)$ to be sylleptic, the syllepsis must satisfy the two axioms on~\cite[p144--145]{mccrudden:bal-coalgb}. As before, these diagrams of 2-cells are made up entirely of $\theta$s relating companions of a single morphism in $\lD_0$, so they commute by uniqueness of $\theta$. Finally, for $\cH(\lD)$ to be symmetric, the syllepsis must satisfy one additional axiom, given on~\cite[p91]{mccrudden:bal-coalgb}. This follows automatically for the same reasons as before. \end{proof} Combining the arguments of Theorems \ref{thm:h-functor} and \ref{thm:mon11-monbi}, we could show that passage from fibrant monoidal double categories to monoidal bicategories is a functor of tricategories, given a suitable definition of a tricategory of monoidal bicategories. \begin{rmk} Essentially the same proof as that of \autoref{thm:mon11-monbi} shows that any fibrant 2x1-category has an underlying tricategory. Note that unlike the construction of bicategories from 1x1-categories (i.e.\ double categories), this case requires fibrancy even in the absence of monoidal structure, since the associativity and unit constraints of a 2x1-category are not 1-cells but rather morphisms of 0-cells. There are many naturally occurring fibrant symmetric monoidal 2x1-categories, such as $\lD_0=$ commutative rings, $\lD_1=$ algebras, and $\lD_2=$ modules, or the symmetric monoidal 2x1-category of \emph{conformal nets} defined in~\cite{bdh:confnets-i}. All of these have underlying tricategories, which will be symmetric monoidal for any reasonable definition of symmetric monoidal tricategory. More generally, as stated in \S\ref{sec:introduction}, we expect any fibrant $(n\times k)$-category to have an underlying $(n+k)$-category. \end{rmk} \bibliographystyle{alpha} \bibliography{all,shulman} \end{document}
17,669
Description Title: The Big Book of Fairy Tales Author: Walter Jerrold (ed.), Charles Robinson (illus.) Publisher: Blackie and Son, 1911. 1st edition. Scarce. Condition: Decorative cloth, with considerable wear. Cover worn, spine faded and fraying. Pencil marks to illustrated endpapers, foxing to first few pages. Text clean, binding sagging but more or less tight. About the book: A scarce 1st edition copy of a well-illustrated book. Contains 12 colour plates, 16 black and red plates, 6 black and white plates, and black and white in-text illustrations on almost every page. Stories include Cinderella, The Ugly Duckling, Aladdin, Hansel and Grethel, Jack and the Beanstalk, Little Red Riding Hood, and other popular tales.
4,193
\begin{document} \title{Quantum curve and 4D limit of \\ melting crystal model} \author{Kanehisa Takasaki\thanks{E-mail: [email protected]}\\ {\normalsize Department of Mathematics, Kinki University}\\ {\normalsize 3-4-1 Kowakae, Higashi-Osaka, Osaka 577-8502, Japan}} \date{} \maketitle \begin{abstract} This paper considers the problems of quantum spectral curves and 4D limit for the melting crystal model of 5D SUSY $U(1)$ Yang-Mills theory on $\mathbb{R}^4\times S^1$. The partition function $Z(\boldsymbol{t})$ deformed by an infinite number of external potentials is a tau function of the KP hierarchy with respect to the coupling constants $\boldsymbol{t} = (t_1,t_2,\ldots)$. A single-variate specialization $Z(x)$ of $Z(\boldsymbol{t})$ satisfies a $q$-difference equation representing the quantum spectral curve of the melting crystal model. In the limit as the radius $R$ of $S^1$ in $\mathbb{R}^4\times S^1$ tends to $0$, it turns into a difference equation for a 4D counterpart $Z_{\mathrm{4D}}(X)$ of $Z(x)$. This difference equation reproduces the quantum spectral curve of Gromov-Witten theory of $\mathbb{CP}^1$. $Z_{\mathrm{4D}}(X)$ is obtained from $Z(x)$ by letting $R \to 0$ under an $R$-dependent transformation $x = x(X,R)$ of $x$ to $X$. A similar prescription of 4D limit can be formulated for $Z(\boldsymbol{t})$ with an $R$-dependent transformation $\boldsymbol{t} = \boldsymbol{t}(\boldsymbol{T},R)$ of $\boldsymbol{t}$ to $\boldsymbol{T} = (T_1,T_2,\ldots)$. This yields a 4D counterpart $Z_{\mathrm{4D}}(\boldsymbol{T})$ of $Z(\boldsymbol{t})$. $Z_{\mathrm{4D}}(\boldsymbol{T})$ agrees with a generating function of all-genus Gromov-Witten invariants of $\mathbb{CP}^1$. Fay-type bilinear equations for $Z_{\mathrm{4D}}(\boldsymbol{T})$ can be derived from similar equations satisfied by $Z(\boldsymbol{t})$. The bilinear equations imply that $Z_{\mathrm{4D}}(\boldsymbol{T})$, too, is a tau function of the KP hierarchy. \end{abstract} \begin{flushleft} 2010 Mathematics Subject Classification: 14N35, 37K10, 39A13 \\ Key words: melting crystal model, quantum curve, KP hierarchy, tau function, bilinear equation, Gromov-Witten theory \end{flushleft} \newpage \section{Introduction} The melting crystal model \cite{MNTT04} is a statistical model of 5D SUSY Yang-Mills theory on $\RR^4\times S^1$ \cite{Nekrasov96} in the self-dual background \cite{Nekrasov02,NO03}. The partition function is a sum over all possible shapes (represented by plane partitions) of 3D Young diagrams. The name of the model originates in the physical interpretation of the complement of a 3D Young diagram in the positive octant of $\RR^3$ as a melting crystal corner. By the method of diagonal slicing \cite{ORV03}, the partition function can be converted to a sum over ordinary partitions. This sum reproduces the Nekrasov partition function of instantons in 5D SUSY Yang-Mills theory. In the previous work \cite{NT07}, we studied the simplest case that amounts to $U(1)$ gauge theory. The main subject was an integrable structure of the partition function deformed by an infinite number of external potentials. The deformed partition function $Z(\bst,s)$ depends on the coupling constants $\bst = (t_1,t_2,\ldots)$ of those potentials and a discrete variable $s \in \ZZ$. We proved, with the aid of symmetries of a quantum torus algebra, that $Z(\bst,s)$ is essentially a tau function of the 1D Toda hierarchy \cite{UT84}. This result has been extended to some other types of melting crystal models \cite{Takasaki13,Takasaki14}. 
An open problem raised therein is to find an appropriate prescription for the 4D limit as the radius $R$ of $S^1$ in $\RR^4\times S^1$ tends to $0$. The melting crystal model of $U(1)$ gauge theory has two parameters $q,Q$. By setting these parameters in a particular $R$-dependent form and letting $R \to 0$, the undeformed partition function $Z = Z(\bszero,0)$ converges to the 4D Nekrasov function $Z_{\frD}$ \cite{Nekrasov02,NO03}. It is not so straightforward to achieve the 4D limit of the deformed partition function $Z(\bst,s)$. In a naive prescription \cite{NT07}, all coupling constants other than $t_1$ decouple from $Z(\bst,s)$ in the limit as $R \to 0$. On the other hand, a deformation $Z_{\frD}(\bst,s)$ of $Z_{\frD}$ by an infinite number of external potentials is proposed in the literature \cite{MN06}. What we need is an $R$-dependent transformation $\bst = \bst(\bsT,R)$ of $\bst$ to a new set of coupling constants $\bsT = (T_1,T_2,\ldots)$ such that $Z(\bst(\bsT,R),s)$ converges to $Z_{\frD}(\bsT,s)$ as $R \to 0$. This is a problem that we address in this paper. Another problem tackled here is to derive the so called quantum spectral curves. This problem is inspired by the work of Dunin-Barkowski et al. \cite{DBMNPS13} on Gromov-Witten theory of $\CC\PP^1$. They derived a quantum spectral curve of $\CC\PP^1$ in the perspective of topological recursion \cite{DBOSS12,DBSS12,NS11}. Since the deformed 4D Nekrasov function $Z_{\frD}(\bsT,s)$ of $U(1)$ gauge theory coincides with a generating function of all genus Gromov-Witten invariants of $\CC\PP^1$ \cite{LMN03,OP02}, it will be natural to reconsider this issue from the point of view of the melting crystal model. Recently, we proposed a new approach to quantum mirror curves in topological string theory \cite{TN16}. This approach is based on the notions of Kac-Schwarz operators \cite{KS91,Schwarz91} and generating operators \cite{Alexandrov1404,ALS1512} in the KP hierarchy \cite{SS83,SW85}. $Z(\bst,s)$ may be thought of as a set of KP tau functions labelled by $s \in \ZZ$. In particular, $Z(\bst) = Z(\bst,0)$ resembles the tau functions in topological string theory. Our method developed for topological string theory can be applied to $Z(\bst)$ to derive a quantum spectral curve. This quantum curve is represented by a $q$-difference equation for a single-variate specialization $Z(x)$ of $Z(\bst)$. As $R \to 0$, this equation turns into a difference equation that was derived by Dunin-Barkowski et al. as the quantum spectral curve of $\CC\PP^1$. Let us stress that these two problems are closely related. To derive the 4D limit of the quantum spectral curve, we choose an $R$-dependent transformation $x = x(X,R)$ to a new variable $X$ such that $Z(x(X,R))$ converges to a function $Z_{\frD}(X)$ as $R \to 0$. It is this function $Z_{\frD}(X)$ that was considered by Dunin-Barkowski et al. \cite{DBMNPS13} and shown to satisfy the aforementioned difference equation. Moreover, $Z_{\frD}(X)$ turns out to be a single-variate specialization of a multi-variate function $Z_{\frD}(\bsT)$ that is obtained as the limit of $Z(\bst(\bsT,R))$ as $R \to 0$ in the sense explained above. $Z_{\frD}(\bsT)$ is also a deformation of $Z_{\frD}$ by an infinite number of external potentials. It is straightforward to extend $Z_{\frD}(\bsT)$ to a function $Z_{\frD}(\bsT,s)$ that depends on $s \in \ZZ$. As a byproduct of this prescription of 4D limit, we shall show that $Z_{\frD}(\bsT)$ satisfies a set of Fay-type bilinear equations. 
These bilinear equations are known to characterize tau functions of the KP hierarchy \cite{AvM92,SS83,TT95}. This implies that $Z_{\frD}(\bsT)$, too, is a tau function of the KP hierarchy. By a similar characterization of tau functions of the Toda hierarchy \cite{Takasaki07,Teo06}, one can deduce that $Z_{\frD}(\bsT,s)$ is a tau function of the 1D Toda hierarchy. Let us recall once again that $Z_{\frD}(\bsT,s)$ is a generating function of all-genus Gromov-Witten invariants of $\CC\PP^1$ \cite{LMN03,OP02}. The well-known fact on Gromov-Witten theory of $\CC\PP^1$ referred to as the Toda conjecture \cite{DZ04,Getzler01,Milanov06,Pandharipande02} can thus be explained from a different perspective. This paper is organized as follows. Section 2 is a brief review of the melting crystal model. Combinatorial and fermionic expressions of the deformed partition function $Z(\bst)$ are introduced. The fermionic expression is further converted to a form that fits into the method of our work on quantum mirror curves of topological string theory \cite{TN16}. Section 3 presents the quantum spectral curve of the melting crystal model. The single-variate specialization $Z(x)$ of $Z(\bst)$ is introduced, and shown to satisfy a $q$-difference equation representing the quantum spectral curve. The computations are mostly parallel to the case of topological string theory. Section 4 deals with the issue of the 4D limit. The $R$-dependent transformations $x = x(X,R)$ and $\bst = \bst(\bsT,R)$ are introduced, and $Z(x(X,R))$ and $Z(\bst(\bsT,R))$ are shown to converge as $R \to 0$. The functions $Z_{\frD}(X)$ and $Z_{\frD}(\bsT)$ obtained in this limit are computed explicitly. The difference equation for $Z_{\frD}(X)$ is derived, and confirmed to agree with the result of Dunin-Barkowski et al. \cite{DBMNPS13}. Section 5 is devoted to Fay-type bilinear equations. A three-term bilinear equation plays a central role here. The bilinear equation for $Z(\bst)$ is shown to turn into a similar bilinear equation for $Z_{\frD}(\bsT)$ as $R \to 0$. The details concerning $Z(\bst,s)$ and $Z_{\frD}(\bsT,s)$ are omitted. Section 6 concludes this paper. \section{Melting crystal model} \subsection{Partition function of 3D Young diagrams} The partition function of the simplest melting crystal model with a single parameter $q$ is the sum \beq Z = \sum_{\pi\in\calP\calP} q^{|\pi|} \label{Z-PPsum} \eeq of the Boltzmann weight $q^{|\pi|}$ over the set $\calP\calP$ of all plane partitions. The plane partition $\pi = (\pi_{ij})_{i,j=1}^\infty$ represents a 3D Young diagram that consists of stacks of unit cubes of height $\pi_{ij}$ put on the unit squares $[i-1,i]\times[j-1,j]$ of the plane. $|\pi|$ denotes the volume of the 3D Young diagram: \[ |\pi| = \sum_{i,j=1}^\infty \pi_{ij}. \] By the method of diagonal slicing \cite{ORV03}, one can convert the sum (\ref{Z-PPsum}) over $\calP\calP$ to a sum over the set $\calP$ of all ordinary partitions $\lambda = (\lambda_i)_{i=1}^\infty$ as \beq Z = \sum_{\lambda\in\calP}s_\lambda(q^{-\rho})^2, \label{Z-Psum} \eeq where $s_\lambda(q^{-\rho})$ is the special value (a kind of principal specialization) of the infinite-variate Schur function $s_\lambda(\bsx)$, $\bsx = (x_1,x_2,\ldots)$, at \beqnn \bsx = q^{-\rho} = (q^{1/2},q^{3/2},\ldots,q^{i-1/2},\ldots). \eeqnn Moreover, by the Cauchy identities of Schur functions \cite{Mac-book}, one can rewrite the sum (\ref{Z-Psum}) into an infinite product: \beqnn Z = \prod_{i,j=1}^\infty(1 - q^{i+j-1})^{-1} = \prod_{n=1}^\infty(1 - q^n)^{-n}.
\eeqnn This infinite product is known as the MacMahon function. The special value $s_\lambda(q^{-\rho})$ has the hook-length formula \cite{Mac-book} \beq s_\lambda(q^{-\rho}) = \frac{q^{-\kappa(\lambda)/4}} {\prod_{(i,j)\in\lambda}(q^{-h(i,j)/2} - q^{h(i,j)/2})}, \label{q-hook-formula} \eeq where $\kappa(\lambda)$ is the commonly used notation \beqnn \kappa(\lambda) = 2\sum_{(i,j)\in\lambda}(j - i) = \sum_{i=1}^\infty \left((\lambda_i-i+1/2)^2 - (-i+1/2)^2\right), \eeqnn and $h(i,j)$ denotes the hook length \beqnn h(i,j) = (\lambda_i-j) + (\tp{\lambda}_j-i) + 1 \eeqnn of the cell $(i,j)$ in the Young diagram of shape $\lambda$. $\tp{\lambda}_i$'s are the parts of the conjugate partition $\tp{\lambda}$ that represents the transposed Young diagram. Thus $s_\lambda(q^{-\rho})$ is a $q$-deformation of the number \beq \frac{\dim\lambda}{|\lambda|!} = \frac{1}{\prod_{(i,j)\in\lambda}h_{(i,j)}}, \label{hook-formula} \eeq where $|\lambda| = \sum_{i=1}^\infty\lambda_i$, and $\dim\lambda$ is the dimension of the irreducible representation of the symmetric group $S_d$, $d = |\lambda|$. The square of (\ref{hook-formula}) is called the Plancherel measure on the symmetric group, and plays a central role in Gromov-Witten/Hurwitz theory of $\CC\PP^1$ as well \cite{Okounkov00,OP02,Pandharipande02}. \subsection{Deformation by external potentials} We now introduce a parameter $Q$ and an infinite set of coupling constants $\bst = (t_1,t_2,\ldots)$, and deform (\ref{Z-Psum}) as \beq Z(\bst) = \sum_{\lambda\in\calP}s_\lambda(q^{-\rho})^2 Q^{|\lambda|}e^{\phi(\bst,\lambda)}, \label{Z(t)-Psum} \eeq where \beqnn \phi(\bst,\lambda) = \sum_{k=1}^\infty t_k\phi_k(\lambda). \eeqnn The external potentials $\phi_k(\lambda)$ are defined as \beq \phi_k(\lambda) = \sum_{i=1}^\infty\left(q^{k(\lambda_i-i+1)} - q^{k(-i+1)}\right). \label{phi_k} \eeq The sum on the right hand side of (\ref{phi_k}) is a finite sum because only a finite number of $\lambda_i$'s are non-zero. These potentials are $q$-analogues of the so called Casimir invariants of the infinite symmetric group $S_\infty$, which we shall encounter in the 4D limit. The following fact is a consequence of our previous work on the melting crystal model \cite{NT07}. We shall refine this statement in the subsequent consideration. \begin{theorem} $Z(\bst)$ is a tau function of the KP hierarchy with time variables $\bst = (t_1,t_2,\ldots)$. \end{theorem} Actually, $Z(\bst)$ is a member of a set of functions $Z(\bst,s)$, $s \in \ZZ$, considered in our previous work: \beq Z(\bst,s) = \sum_{\lambda\in\calP}s_\lambda(q^{-\rho})^2 Q^{|\lambda|+s(s+1)/2}e^{\phi(\bst,s,\lambda)}, \label{Z(t,s)-Psum} \eeq where \begin{gather*} \phi(\bst,s,\lambda) = \sum_{k=1}^\infty t_k\phi_k(s,\lambda),\\ \phi_k(s,\lambda) = \sum_{i=1}^\infty\left(q^{k(\lambda_i-i+1+s)} - q^{k(-i+1+s)}\right) + \frac{1-q^{ks}}{1-q^k}q^k. \end{gather*} As shown therein, $Z(\bst,s)$ is, up to a simple multiplier, a tau function of the 1D Toda hierarchy, hence a collection of tau functions of the KP hierarchy labelled by $s$ \cite{UT84}. Consequently, $Z(\bst) = Z(\bst,0)$ is a KP tau function. The relation to the 1D Toda hierarchy is explained in the fermionic formalism of integrable hierarchies \cite{JM83,MJD-book}. The fermionic formalism plays a central role in our derivation of quantum spectral curves as well.
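As a quick independent check (our own illustration, not part of the original derivation; the truncation order $N$ is an arbitrary choice), the following short \texttt{sympy} script confirms that the sum (\ref{Z-Psum}), evaluated via the hook-length formula (\ref{q-hook-formula}), reproduces the expansion of the MacMahon function, $1 + q + 3q^2 + 6q^3 + 13q^4 + \cdots$, to low orders.
\begin{verbatim}
# Check Z = sum_{lambda} s_lambda(q^{-rho})^2 against prod_{n>=1} (1 - q^n)^{-n},
# both truncated at order q^N.  Partitions with |lambda| > N cannot contribute
# below order q^{N+1}, so the truncated sum suffices.
from sympy import symbols, series
from sympy.utilities.iterables import partitions

q = symbols('q')
N = 6

def s_qrho_squared(lam):
    """s_lambda(q^{-rho})^2 via the hook-length formula; lam is a decreasing list."""
    conj = [sum(1 for p in lam if p >= j + 1) for j in range(lam[0])]
    val, kappa = 1, 0
    for i, row in enumerate(lam, start=1):
        for j in range(1, row + 1):
            h = (row - j) + (conj[j - 1] - i) + 1      # hook length h(i,j)
            val /= (q**(-h) - 2 + q**h)                # (q^{-h/2} - q^{h/2})^2
            kappa += 2 * (j - i)
    return q**(-kappa // 2) * val

Z = 1                                                  # empty partition
for n in range(1, N + 1):
    for p in partitions(n):
        lam = sorted((k for k, m in p.items() for _ in range(m)), reverse=True)
        Z += s_qrho_squared(lam)

macmahon = 1
for n in range(1, N + 1):
    macmahon *= (1 - q**n)**(-n)

print(series(Z, q, 0, N + 1))          # 1 + q + 3*q**2 + 6*q**3 + 13*q**4 + ...
print(series(macmahon, q, 0, N + 1))   # same expansion
\end{verbatim}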
\subsection{Fermionic expression of partition function} Let $\psi_n,\psi^*_n$, $n\in\ZZ$, be the creation-annihilation operators\footnote{For the sake of convenience, as in our previous work \cite{NT07}, we label these operators with integers rather than half-integers. The free fermion fields are defined as $\psi(z) = \sum_{n\in\ZZ}\psi_nz^{-n-1}$ and $\psi^*(z) = \sum_{n\in\ZZ}\psi^*_nz^{-n}$.} of 2D charged free fermion theory with the anti-commutation relations \beqnn \psi_m\psi^*_n + \psi^*_n\psi_m = \delta_{m+n,0},\quad \psi_m\psi_n + \psi_n\psi_m = 0,\quad \psi^*_m\psi^*_n + \psi^*_n\psi^*_m = 0, \eeqnn and $|0\rangle$, $\langle 0|$ the vacuum vectors of the fermionic Fock and dual Fock spaces that satisfy the vacuum conditions \begin{gather*} \psi^*_n|0\rangle = 0 \quad \text{for $n > 0$},\quad \psi_n|0\rangle = 0 \quad \text{for $n \ge 0$},\\ \langle 0|\psi_n = 0 \quad \text{for $n < 0$},\quad \langle 0|\psi^*_n = 0 \quad \text{for $n \le 0$}. \end{gather*} The charge-$0$ sectors of the Fock spaces are spanned by the excited states $|\lambda\rangle$, $\langle\lambda|$, $\lambda\in\calP$: \begin{align*} |\lambda\rangle &= \psi_{-\lambda_1}\psi_{-\lambda_2+1}\cdots \psi_{-\lambda_n+n-1}\psi^*_{-n+1}\cdots\psi^*_{-1}\psi^*_0|0\rangle,\\ \langle \lambda| &= \langle 0|\psi_0\psi_1\cdots\psi_{n-1} \psi^*_{\lambda_n-n+1}\cdots\psi^*_{\lambda_2-1}\psi^*_{\lambda_1}, \end{align*} where $n$ is chosen so that $\lambda_i = 0$ for $i > n$. In particular, $|\emptyset\rangle$ and $\langle\emptyset|$ agree with the vacuum states. The charge-$s$ sectors of the Fock spaces are spanned by similar vectors $|s,\lambda\rangle$, $\langle s,\lambda|$, $\lambda \in \calP$. The fermionic expression of the aforementioned partition functions employs the normally ordered fermion bilinears \begin{gather*} L_0 = \sum_{n\in\ZZ}n{:}\psi_{-n}\psi^*_n{:},\quad K = \sum_{n\in\ZZ}(n-1/2)^2{:}\psi_{-n}\psi^*_n{:},\\ H_k = \sum_{n\in\ZZ}q^{kn}{:}\psi_{-n}\psi^*_n{:},\quad J_k = \sum_{n\in\ZZ}{:}\psi_{k-n}\psi^*_n{:},\\ {:}\psi_m\psi^*_n{:} = \psi_m\psi^*_n - \langle 0|\psi_m\psi^*_n|0\rangle, \end{gather*} the vertex operators \cite{ORV03,YB08} \beqnn \Gamma_{\pm}(x) = \exp\left(\sum_{k=1}^\infty\frac{x^k}{k}J_{\pm k}\right),\quad \Gamma'_{\pm}(x) = \exp\left(- \sum_{k=1}^\infty\frac{(-x)^k}{k}J_{\pm k}\right), \eeqnn and their multi-variate extensions \beqnn \Gamma_{\pm}(x_1,x_2,\ldots) = \prod_{i\ge 1}\Gamma_{\pm}(x_i),\quad \Gamma'_{\pm}(x_1,x_2,\ldots) = \prod_{i\ge 1}\Gamma'_{\pm}(x_i). \eeqnn The action of these operators on the fermionic Fock space leaves the charge-$0$ sector invariant. $L_0$, $K$ and $H_k$ are diagonal with respect to $|\lambda\rangle$'s: \beq \langle\lambda|L_0|\mu\rangle = |\lambda|\delta_{\lambda\mu},\quad \langle\lambda|K|\mu\rangle = \kappa(\lambda)\delta_{\lambda\mu},\quad \langle\lambda|H_k|\mu\rangle = \phi_k(\lambda)\delta_{\lambda\mu}. \label{<..L_0..>etc} \eeq The matrix elements of the vertex operators are the skew Schur functions $s_{\lambda/\mu}(\bsx)$, $\bsx = (x_1,x_2,\ldots)$: \begin{gather} \langle\lambda|\Gamma_{-}(\bsx)|\mu\rangle = \langle\mu|\Gamma_{+}(\bsx)|\lambda\rangle = s_{\lambda/\mu}(\bsx), \label{<..Gamma..>}\\ \langle\lambda|\Gamma'_{-}(\bsx)|\mu\rangle = \langle\mu|\Gamma'_{+}(\bsx)|\lambda\rangle = s_{\tp{\lambda}/\tp{\mu}}(\bsx).
\label{<..Gamma'..>} \end{gather} One can use these building blocks to rewrite the combinatorial definition (\ref{Z(t)-Psum}) of $Z(\bst)$ as \beq Z(\bst) = \langle 0|\Gamma_{+}(q^{-\rho})Q^{L_0} e^{H(\bst)}\Gamma_{-}(q^{-\rho})|0\rangle, \label{Z(t)=<..e^H..>} \eeq where \beqnn H(\bst) = \sum_{k=1}^\infty t_kH_k. \eeqnn As shown in our previous work with the aid of symmetries of a quantum torus algebra \cite{NT07}, (\ref{Z(t)=<..e^H..>}) can be converted to the following form. This implies that $Z(\bst)$ is a tau function of the KP hierarchy. \begin{theorem} \beq Z(\bst) = \exp\left(\sum_{k=1}^\infty\frac{q^kt_k}{1-q^k}\right) \langle 0|\exp\left(\sum_{k=1}^\infty(-1)^kq^{k/2}t_kJ_k\right) g_1|0\rangle, \label{Z(t)=<..g_1..>} \eeq where \beq g_1 = q^{K/2}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho})Q^{L_0} \Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho})q^{K/2}. \label{g_1} \eeq \end{theorem} \begin{remark} We could have removed the rightmost two factors $\Gamma_{+}(q^{-\rho})q^{K/2}$ from $g_1$ because they leave the vacuum vector $|0\rangle$ invariant: \beq \Gamma_{+}(q^{-\rho})q^{K/2}|0\rangle = |0\rangle. \label{..|0>=|0>} \eeq $g_1$ is defined as shown in (\ref{g_1}) to enjoy the algebraic relations \beqnn J_kg_1 = g_1J_k \quad \text{for $k = 1,2,\ldots$,} \eeqnn which play the role of a reduction condition from the 2D Toda hierarchy to the 1D Toda hierarchy \cite{NT07}. \end{remark} One can rewrite (\ref{Z(t)=<..g_1..>}) further to clarify its characteristic as a tau function of the KP hierarchy. First of all, the exponential prefactor on the right hand side can be taken inside the vev as \beq Z(\bst) = \langle 0|\exp\left(\sum_{k=1}^\infty(-1)^kq^{k/2}t_kJ_k\right) g_2|0\rangle, \label{Z(t)=<..g_2..>} \eeq where \beqnn g_2 = \exp\left(\sum_{k=1}^\infty\frac{(-1)^kq^{k/2}}{k(1-q^k)}J_{-k}\right)g_1. \eeqnn This is a consequence of the commutation relation \beqnn [J_m,J_n] = m\delta_{m+n,0} \eeqnn among $J_n$'s that span the $U(1)$ current (or Heisenberg) algebra. Remarkably, the operator generated in front of $g_1$, too, is related to a vertex operator as \beqnn \exp\left(\sum_{k=1}^\infty\frac{(-1)^kq^{k/2}}{k(1-q^k)}J_{-k}\right) = \Gamma'_{-}(q^{-\rho})^{-1}. \eeqnn Thus $g_2$ can be expressed as \beq g_2 = \Gamma'_{-}(q^{-\rho})^{-1} q^{K/2}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho})Q^{L_0} \Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho})q^{K/2}. \label{g_2} \eeq Moreover, the multiplier $(-1)^kq^{k/2}$ of $t_k$ in (\ref{Z(t)=<..g_2..>}) can be removed by the scaling relation \beqnn \sum_{k=1}^\infty (-1)^kq^{k/2}t_kJ_k = (-q^{1/2})^{-L_0}\cdot\sum_{k=1}^\infty t_kJ_k\cdot(-q^{1/2})^{L_0}. \eeqnn (\ref{Z(t)=<..g_2..>}) thereby turns into the more standard expression \beqnn Z(\bst) = \langle 0|\exp\left(\sum_{k=1}^\infty t_kJ_k\right) (-q^{1/2})^{L_0}g_2|0\rangle \eeqnn as a tau function of the KP hierarchy \cite{JM83,MJD-book}. \begin{remark} In the previous work \cite{NT07}, we used the operator \beqnn W_0 = \sum_{n\in\ZZ}n^2{:}\psi_{-n}\psi^*_n{:} \eeqnn in place of $K$. Accordingly, the fermionic expression of the partition functions presented therein takes a slightly different form. This does not affect the essential part of the fermionic expression. \end{remark} \section{Quantum spectral curve of melting crystal model} \subsection{Single-variate specialization} Let $Z(x)$ denote the single-variate specialization of $Z(\bst)$ obtained by substituting \beq t_k = - \frac{q^{-k/2}x^k}{k},\quad k = 1,2,\ldots.
\label{t-x-relation} \eeq The combinatorial definition (\ref{Z(t)-Psum}) of $Z(\bst)$ and its fermionic expressions (\ref{Z(t)=<..g_1..>}) and (\ref{Z(t)=<..g_2..>}) are accordingly specialized as follows. \begin{lemma} \beq Z(x) = \sum_{\lambda\in\calP}s_\lambda(q^{-\rho})^2Q^{|\lambda|} \prod_{i=1}^\infty\frac{1 - q^{\lambda_i-i+1/2}x}{1 - q^{-i+1/2}x}. \label{Z(x)-Psum} \eeq \end{lemma} \begin{proof} Substituting (\ref{t-x-relation}) for $\phi(\bst,\lambda)$ yields \begin{align*} \phi(\bst,\lambda) &= - \sum_{i,k=1}^\infty\frac{q^{-k/2}x^k}{k} \left(q^{k(\lambda_i-i+1)} - q^{k(-i+1)}\right)\\ &= \sum_{i=1}^\infty\left(\log(1 - q^{\lambda_i-i+1/2}x) - \log(1 - q^{-i+1/2}x)\right), \end{align*} hence \beqnn e^{\phi(\bst,\lambda)} = \prod_{i=1}^\infty\frac{1 - q^{\lambda_i-i+1/2}x}{1 - q^{-i+1/2}x}. \eeqnn \end{proof} \begin{lemma} \beq Z(x) = \prod_{i=1}^\infty(1 - q^{i-1/2}x) \cdot\langle 0|\Gamma'_{+}(x)g_1|0\rangle = \langle 0|\Gamma'_{+}(x)g_2|0\rangle. \label{Z(x)=<..g_2..>} \eeq \end{lemma} \begin{proof} Substituting (\ref{t-x-relation}) in (\ref{Z(t)=<..g_1..>}) and (\ref{Z(t)=<..g_2..>}) yields \beqnn \exp\left(\sum_{k=1}^\infty\frac{q^kt_k}{1-q^k}\right) = \exp\left(- \sum_{i,k=1}^\infty \frac{q^{k(i-1/2)}x^k}{k}\right) = \prod_{i=1}^\infty(1 - q^{i-1/2}x) \eeqnn (cf. the computation in the proof of the previous lemma) and \beqnn \exp\left(\sum_{k=1}^\infty(-1)^kq^{k/2}t_kJ_k\right) = \exp\left(- \sum_{k=1}^\infty\frac{(-x)^k}{k}J_k\right) = \Gamma'_{+}(x). \eeqnn \end{proof} As we shall see in the next section, the combinatorial expression (\ref{Z(x)-Psum}) of $Z(x)$ has a desirable form from which one can derive the quantum curve equation of Dunin-Barkowski et al. \cite{DBMNPS13}. To apply the method of our previous work \cite{TN16}, however, it is more convenient to have $\Gamma_{+}(x)$ rather than $\Gamma'_{+}(x)$ in the fermionic expression (\ref{Z(x)=<..g_2..>}) of $Z(x)$. This problem can be resolved by the following transformation rule of matrix elements of fermionic operators under conjugation of partitions \cite{YB08}: \begin{lemma} \label{conj-lemma} \begin{gather*} \langle\lambda|L_0|\lambda\rangle = \langle\tp{\lambda}|L_0|\tp{\lambda}\rangle,\quad \langle\lambda|K|\lambda\rangle = - \langle\tp{\lambda}|K|\tp{\lambda}\rangle,\\ \langle\lambda|\Gamma_{\pm}(\bsx)|\mu\rangle = \langle\tp{\lambda}|\Gamma'_{\pm}(\bsx)|\tp{\mu}\rangle. \end{gather*} \end{lemma} \begin{proof} These identities are consequences of (\ref{<..L_0..>etc}), (\ref{<..Gamma..>}), (\ref{<..Gamma'..>}) and the following property of $\kappa(\lambda)$: \beqnn \kappa(\tp{\lambda}) = - \kappa(\lambda). \eeqnn \end{proof} We can apply this rule to $\Gamma'_{+}(x)$ and the building blocks of $g_2$ to rewrite (\ref{Z(x)=<..g_2..>}) as \beq Z(x) = \langle 0|\Gamma_{+}(x)g_2'|0\rangle, \label{Z(x)=<..g_2'..>} \eeq where \beq g_2' = \Gamma_{-}(q^{-\rho})^{-1} q^{-K/2}\Gamma'_{-}(q^{-\rho})\Gamma'_{+}(q^{-\rho})Q^{L_0} \Gamma'_{-}(q^{-\rho})\Gamma'_{+}(q^{-\rho})q^{-K/2}. \label{g_2'} \eeq \begin{remark} The existence of two expressions, (\ref{Z(x)=<..g_2..>}) and (\ref{Z(x)=<..g_2'..>}), of $Z(x)$ carries over to $Z(\bst)$: \begin{align} Z(\bst) &= \langle 0|\exp\left(\sum_{k=1}^\infty(-1)^kq^{k/2}t_kJ_k\right) g_2|0\rangle \notag\\ &= \langle 0|\exp\left(- \sum_{k=1}^\infty q^{k/2}t_kJ_k\right) g_2'|0\rangle.
\label{Z(t)-dualvev} \end{align} Note that the third formula of Lemma \ref{conj-lemma} implies the identity \beqnn \langle\mu|\exp\left(\sum_{k=1}^\infty(-1)^kq^{k/2}t_kJ_k\right) |\lambda\rangle = \langle\tp{\mu}|\exp\left(- \sum_{k=1}^\infty q^{k/2}t_kJ_k\right) |\tp{\lambda}\rangle \eeqnn among the matrix elements of the exponential operators in (\ref{Z(t)-dualvev}). \end{remark} Having obtained the fermionic expression (\ref{Z(x)=<..g_2'..>}) containing $\Gamma_{+}(x)$, we now remove the remaining $\Gamma'_{+}$'s from (\ref{Z(x)=<..g_2'..>}). This is the last step for applying the method of our previous work \cite{TN16}. \begin{lemma} \beq Z(x) = \prod_{n=1}^\infty(1 - Qq^n)^{-n} \cdot\langle 0|\Gamma_{+}(x)g|0\rangle, \label{Z(x)=<..g..>} \eeq where \beq g = \Gamma_{-}(q^{-\rho})^{-1}q^{-K/2} \Gamma'_{-}(q^{-\rho})\Gamma'_{-}(Qq^{-\rho}). \label{g} \eeq \end{lemma} \begin{proof} The rightmost two factors $\Gamma'_{+}(q^{-\rho})q^{-K/2}$ of (\ref{g_2'}), like the operators in (\ref{..|0>=|0>}), can be removed from (\ref{Z(x)=<..g_2'..>}). One can use the scaling relation \beqnn \Gamma'_{\pm}(x_1,x_2,\ldots)Q^{L_0} = Q^{L_0}\Gamma'_{\pm}(Q^{\pm 1}x_1,Q^{\pm 1}x_2,\ldots) \eeqnn and the commutation relation \beqnn \Gamma'_{+}(x_1,x_2,\ldots)\Gamma'_{-}(y_1,y_2,\ldots) = \prod_{i,j=1}^\infty(1 - x_iy_j)^{-1}\cdot \Gamma'_{-}(y_1,y_2,\ldots)\Gamma'_{+}(x_1,x_2,\ldots) \eeqnn of the vertex operators \cite{ORV03,YB08} to rewrite the product of the three operators in the middle of (\ref{g_2'}) as \begin{align*} \Gamma'_{+}(q^{-\rho})Q^{L_0}\Gamma'_{-}(q^{-\rho}) &= Q^{L_0}\Gamma'_{+}(Qq^{-\rho})\Gamma'_{-}(q^{-\rho}) \\ &= \prod_{n=1}^\infty(1 - Qq^n)^{-n}\cdot Q^{L_0} \Gamma'_{-}(q^{-\rho})\Gamma'_{+}(Qq^{-\rho})\\ &= \prod_{n=1}^\infty(1 - Qq^n)^{-n}\cdot \Gamma'_{-}(Qq^{-\rho})Q^{L_0}\Gamma'_{+}(Qq^{-\rho}). \end{align*} The two factors $Q^{L_0}\Gamma'_{+}(Qq^{-\rho})$ in the last line hit the vacuum vector $|0\rangle$ and disappear. What remains are the constant $\prod_{n=1}^\infty(1 - Qq^n)^{-n}$ and the operator $g$. \end{proof} \begin{remark} This is actually a proof of the identity \beq g_2'|0\rangle = \prod_{n=1}^\infty(1 - Qq^n)^{-n}\cdot g|0\rangle \label{g_2'|0>=const.g|0>} \eeq of vectors in the fermionic Fock space. \end{remark} \subsection{Generating operator of admissible basis} We now borrow the idea of generating operators from the work of Alexandrov et al. \cite{Alexandrov1404,ALS1512}. A point of the Sato Grassmannian can be represented by a linear subspace $W$ of the space $V = \CC((x))$ of formal Laurent series \cite{SS83,SW85}. The generating operator is a linear automorphism $G$ of $V$ that maps $W_0 = \mathrm{Span}\{x^{-j}\}_{j=0}^\infty$ to $W$, so that an admissible basis $\{\Phi_j(x)\}_{j=0}^\infty$ of $W$ can be expressed as \beq \Phi_j(x) = Gx^{-j}. \label{basis} \eeq In the fermionic formalism of the KP hierarchy \cite{JM83,MJD-book}, $W$ corresponds to a vector $|W\rangle$ of the fermionic Fock space. The associated tau function can be defined as \beqnn \tau(\bst) = \langle 0|\exp\left(\sum_{k=1}^\infty t_kJ_k\right)|W\rangle. \eeqnn Its special value at \beq \bst = [x] = (x, x^2/2,\ldots,x^k/k,\ldots) \label{t=[x]} \eeq is related to the first member $\Phi_0(x)$ of an admissible basis of $W$ as \beq \tau([x]) = \langle 0|\Gamma_{+}(x)|W\rangle = C\Phi_0(x), \label{tau(x)} \eeq where $C$ is a nonzero constant.
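Let us note, as a side remark, why only one member of the admissible basis survives in (\ref{tau(x)}). The specialization (\ref{t=[x]}) turns the exponential operator into a vertex operator, $\exp\left(\sum_{k=1}^\infty\frac{x^k}{k}J_k\right) = \Gamma_{+}(x)$, and (\ref{<..Gamma..>}) implies that, on the charge-$0$ sector, \beqnn \langle 0|\Gamma_{+}(x) = \sum_{\lambda\in\calP}s_\lambda(x)\langle\lambda| = \sum_{n\ge 0}x^n\langle(n)|, \eeqnn because the single-variable Schur function $s_\lambda(x)$ vanishes unless $\lambda = (n)$ consists of a single row. Hence $\tau([x]) = \sum_{n\ge 0}\langle(n)|W\rangle x^n$ collects only the one-row Pl\"ucker coordinates of $W$; this is the mechanism behind (\ref{tau(x)}).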
If $|W\rangle$ is generated from the vacuum vector $|0\rangle$ by an operator $g$ as \beqnn |W\rangle = g|0\rangle, \eeqnn and $g$ is a special operator, such as a product of vertex operators and particular diagonal operators, then one can find $G$ rather easily from $g$ by the correspondence \beq L_0 \longleftrightarrow D = x\dfrac{d}{dx},\quad K \longleftrightarrow \left(D - \frac{1}{2}\right)^2,\quad J_k \longleftrightarrow x^{-k},\quad \text{etc.}, \label{fb-do} \eeq between fermion bilinears and differential operators. This is how Alexandrov et al. derived the generating operator for various types of Hurwitz numbers \cite{ALS1512}. We did similar computations for tau functions in topological string theory \cite{TN16}. One can interpret the last fermionic expression (\ref{Z(x)=<..g..>}) of $Z(x)$ in the same sense. Namely, as shown in (\ref{tau(x)}) in the general setting, $Z(x)$ may be thought of as the first member $\Phi_0(x)$ of an admissible basis of the subspace $W$ determined by (\ref{g}). One can see from (\ref{g_2'|0>=const.g|0>}) that the associated tau function \beqnn \tau(\bst) = \langle 0|\exp\left(\sum_{k=1}^\infty t_kJ_k\right)g|0\rangle \eeqnn amounts to the second expression of (\ref{Z(t)-dualvev}). Since the time variables of $\tau(\bst)$ are rescaled as $t_k \to - q^{k/2}t_k$ therein, the specialization (\ref{t-x-relation}) of $Z(\bst)$ corresponds to the standard specialization (\ref{t=[x]}) of $\tau(\bst)$. It is now straightforward to find the generating operator $G$ from (\ref{g}). According to (\ref{fb-do}), $q^{-K/2}$ corresponds to a differential operator of infinite order: \beqnn q^{-K/2} \longleftrightarrow q^{-(D-1/2)^2/2}. \eeqnn The three vertex operators correspond to multiplication operators: \begin{gather*} \Gamma_{-}(q^{-\rho})^{-1} \longleftrightarrow \exp\left(- \sum_{k=1}^\infty\frac{q^{k/2}x^k}{k(1-q^k)}\right) = \prod_{i=1}^\infty(1-q^{i-1/2}x),\\ \Gamma'_{-}(q^{-\rho}) \longleftrightarrow \prod_{i=1}^\infty(1+q^{i-1/2}x),\\ \Gamma'_{-}(Qq^{-\rho}) \longleftrightarrow \prod_{i=1}^\infty(1+Qq^{i-1/2}x). \end{gather*} The generating operator is given by a product of these operators as follows. \begin{theorem} The generating operator $G$ for the subspace $W \subset V$ determined by (\ref{g}) can be expressed as \beq G = \prod_{i=1}^\infty(1-q^{i-1/2}x)\cdot q^{-(D-1/2)^2/2} \cdot\prod_{i=1}^\infty(1+q^{i-1/2}x)(1+Qq^{i-1/2}x). \label{G} \eeq \end{theorem} \subsection{Derivation of quantum spectral curve} Since the structure of the generating operator (\ref{G}) resembles those of tau functions in topological string theory \cite{TN16}, we define the Kac-Schwarz operator $A$ in essentially the same form, \footnote{This operator amounts to the inverse $A^{-1}$ of the Kac-Schwarz operator $A$ considered therein.} namely, \beqnn A = G\cdot q^D\cdot G^{-1}. \eeqnn The members $\Phi_j(x)$ of the admissible basis (\ref{basis}) thereby satisfy the linear equations \beqnn A\Phi_j(x) = q^{-j}\Phi_j(x). \eeqnn In particular, the equation \beqnn (A - 1)\Phi_0(x) = 0 \eeqnn for $\Phi_0(x)$ (equivalently, $\tau([x])$) represents the quantum spectral curve. As we show below, $A$ is a $q$-difference operator of finite order. \begin{lemma} \begin{align} A &= \left(1 + q^{1/2}xq^{-D}(1-q^{1/2}x)^{-1}\right) \left(1 + Qq^{1/2}xq^{-D}(1-q^{1/2}x)^{-1}\right) \notag\\ &\quad\mbox{}\times (1 - q^{1/2}x)q^D. \label{A-prod} \end{align} \end{lemma} \begin{proof} One can compute $A = G\cdot q^D\cdot G^{-1}$ step by step.
The first step is to apply the last infinite product of (\ref{G}) and its inverse to $q^D$. This can be carried out with the aid of the operator identity \beqnn q^D\cdot x = qxq^D \eeqnn as follows: \begin{align*} &\prod_{i=1}^\infty(1+q^{i-1/2}x)(1+Qq^{i-1/2}x)\cdot q^D \cdot\prod_{i=1}^\infty(1+Qq^{i-1/2}x)^{-1}(1+q^{i-1/2}x)^{-1}\\ &= \prod_{i=1}^\infty(1+q^{i-1/2}x)(1+Qq^{i-1/2}x)\cdot \prod_{i=1}^\infty(1+Qq^{i+1/2}x)^{-1}(1+q^{i+1/2}x)^{-1}\cdot q^D\\ &= (1 + q^{1/2}x)(1 + Qq^{1/2}x)q^D. \end{align*} The next step is to apply $q^{-(D-1/2)^2/2}$ and its inverse to the last operator. This can be achieved by the identity \beqnn q^{-(D-1/2)^2/2}\cdot x\cdot q^{(D-1/2)^2/2} = xq^{-D} \eeqnn as follows: \begin{align*} &q^{-(D-1/2)^2/2}\cdot (1 + q^{1/2}x)(1 + Qq^{1/2}x)q^D\cdot q^{(D-1/2)^2/2}\\ &= (1 + q^{1/2}xq^{-D})(1 + Qq^{1/2}xq^{-D})q^D. \end{align*} The last step is to apply the first infinite product of (\ref{G}) and its inverse to the last operator. Since $q^{-D}$ and $q^D$ are thereby transformed as \begin{gather*} \prod_{i=1}^\infty(1-q^{i-1/2}x)\cdot q^{-D} \cdot\prod_{i=1}^\infty(1-q^{i-1/2}x)^{-1} = q^{-D}(1 - q^{1/2}x)^{-1}, \\ \prod_{i=1}^\infty(1-q^{i-1/2}x)\cdot q^D \cdot\prod_{i=1}^\infty(1-q^{i-1/2}x)^{-1} = (1 - q^{1/2}x)q^D, \end{gather*} one obtains the result shown in (\ref{A-prod}). \end{proof} Let us expand (\ref{A-prod}) and move $q^{\pm D}$ in each term to the right end. The outcome reads \beq A = (1-q^{1/2}x)q^D + q^{1/2}x + Qq^{1/2}x + Qx^2(1-q^{-1/2}x)^{-1}q^{-D}. \label{A-sum} \eeq We are thus led to the following final expression of the quantum spectral curve of the melting crystal model. \begin{theorem} $Z(x)$ satisfies the equation \beq (A - 1)Z(x) = 0 \label{(A-1)Z(x)=0} \eeq with respect to the $q$-difference operator (\ref{A-sum}). \end{theorem} \section{Prescription for 4D limit} The 4D limit of the partition function $Z(\bszero)$ at $\bst = \bszero$ is achieved by setting the parameters as \beq q = e^{-R\hbar},\quad Q = (R\Lambda)^2 \label{qQ-4Dlimit} \eeq and letting $R \to 0$ \cite{NT07}. $R$ is the radius of the fifth dimension of $\RR^4\times S^1$ in which SUSY Yang-Mills theory lives \cite{Nekrasov96}, $\hbar$ is a parameter of the self-dual $\Omega$ background, and $\Lambda$ is an energy scale of 4D $\calN = 2$ SUSY Yang-Mills theory \cite{Nekrasov02,NO03}. The definition of the 4D limit of $Z(x)$ and $Z(\bst)$ requires $R$-dependent transformations of $x$ and $\bst$. \subsection{4D limit of $Z(x)$ and quantum spectral curve} Alongside the substitution (\ref{qQ-4Dlimit}) of parameters, we transform the variable $x$ to a new variable $X$ as \beq x = x(X,R) = e^{R(X-\hbar/2)}. \label{x-X-relation} \eeq As it turns out below, both the combinatorial expression (\ref{Z(x)-Psum}) and the $q$-difference equation (\ref{(A-1)Z(x)=0}) of $Z(x)$ behave nicely as $R \to 0$ under this $R$-dependent transformation of $x$. \begin{lemma} \label{Z4D(X)-lemma} \beqnn \lim_{R\to 0}Z(x(X,R)) = Z_{\frD}(X), \eeqnn where \beq Z_{\frD}(X) = \sum_{\lambda\in\calP} \left(\frac{\dim\lambda}{|\lambda|!}\right)^2 \left(\frac{\Lambda}{\hbar}\right)^{2|\lambda|} \prod_{i=1}^\infty\frac{X-(\lambda_i-i+1)\hbar}{X-(-i+1)\hbar}.
\label{Z4D(X)-Psum} \eeq \end{lemma} \begin{proof} As $R \to 0$ under the $R$-dependent transformations (\ref{qQ-4Dlimit}) and (\ref{x-X-relation}), the building blocks of (\ref{Z(x)-Psum}) behave as \begin{align*} s_\lambda(q^{-\rho})^2 &= \left(\frac{\dim\lambda}{|\lambda|!}\right)^2(R\hbar)^{-2|\lambda|} (1 + O(R)),\\ Q^{|\lambda|} &= (R\Lambda)^{2|\lambda|},\\ \frac{1 - q^{\lambda_i-i+1/2}x(X,R)}{1 - q^{-i+1/2}x(X,R)} &= \frac{X - (\lambda_i-i+1)\hbar}{X - (-i+1)\hbar}(1 + O(R)). \end{align*} Note that the hook-length formulae (\ref{q-hook-formula}) and (\ref{hook-formula}) are used in the derivation of the first line above. \end{proof} \begin{lemma} \beqnn A-1 = \left(- (X-\hbar)(e^{-\hbar d/dX}-1) - \frac{\Lambda^2}{X}e^{\hbar d/dX}\right)R + O(R^2). \eeqnn \end{lemma} \begin{proof} (\ref{A-sum}) implies that $A-1$ can be expressed as \beqnn A-1 = (1 - q^{1/2}x)(q^D - 1) + Qq^{1/2}x + Qx^2(1 - q^{-1/2}x)^{-1}q^{-D}. \eeqnn As $R \to 0$ under the transformations (\ref{qQ-4Dlimit}) and (\ref{x-X-relation}), each term of this expression behaves as follows: \begin{align*} 1 - q^{1/2}x &= - R(X-\hbar) + O(R^2),\\ q^{\pm D} &= e^{\mp\hbar d/dX},\\ Qq^{1/2}x &= O(R^2),\\ Qx^2(1 - q^{-1/2}x)^{-1}q^{-D} &= - \frac{\Lambda^2}{X}e^{\hbar d/dX}R + O(R^2). \end{align*} \end{proof} As a consequence of the foregoing two facts, we obtain the following difference equation for $Z_{\frD}(X)$. \begin{theorem} $Z_{\frD}(X)$ satisfies the difference equation \beq \left((X-\hbar)(e^{-\hbar d/dX} - 1) + \frac{\Lambda^2}{X}e^{\hbar d/dX}\right)Z_{\frD}(X) = 0. \label{Z4D(X)-diffeq} \eeq \end{theorem} By the shift $X \to X + \hbar$ of $X$, (\ref{Z4D(X)-diffeq}) turns into the equation \beqnn \left(X(e^{-\hbar d/dX} - 1) + \frac{\Lambda^2}{X+\hbar}e^{\hbar d/dX} \right)Z_{\frD}(X+\hbar) = 0, \eeqnn which agrees with the equation derived by Dunin-Barkowski et al. \cite{DBMNPS13}. Moreover, as they found, this equation can be converted to the simpler form \beq \left(e^{-\hbar d/dX} + \Lambda^2e^{\hbar d/dX} - X\right)\Psi(X) = 0 \label{CP1qsc-eq} \eeq by the gauge transformation \beqnn \Psi(X) = \exp\left(B\left(-\hbar\frac{d}{dX}\right)\frac{X-X\log X}{\hbar}\right) Z_{\frD}(X+\hbar), \eeqnn where $B(t)$ is the generating function \beqnn B(t) = \frac{t}{e^t - 1} \eeqnn of the Bernoulli numbers. It is this equation (\ref{CP1qsc-eq}) that is identified by Dunin-Barkowski et al. \cite{DBMNPS13} as the equation of the quantum spectral curve for Gromov-Witten theory of $\CC\PP^1$. Its classical limit \beqnn y^{-1} + y - x = 0 \eeqnn as $\hbar \to 0$ (with $\Lambda$ normalized to $1$) is the spectral curve of topological recursion in this case \cite{DBOSS12,DBSS12,NS11}. We have thus rederived the quantum spectral curve of $\CC\PP^1$ from the 4D limit of the melting crystal model. \subsection{4D limit of $Z(\bst)$} As shown in the proof of Lemma \ref{Z4D(X)-lemma}, the deformed Boltzmann weight $s_\lambda(q^{-\rho})^2Q^{|\lambda|}$ behaves nicely in the limit as $R \to 0$. To achieve the 4D limit of $Z(\bst)$, we have only to find an appropriate $R$-dependent transformation $\bst = \bst(\bsT,R)$ to the coupling constants $\bsT = (T_1,T_2,\ldots)$ of 4D external potentials $\phi^{\frD}_k(\lambda)$ for which the identity \beq \lim_{R\to 0}\phi(\bst(\bsT,R),\lambda) = \phi_{\frD}(\bsT,\lambda) = \sum_{k=1}^\infty T_k\phi^{\frD}_k(\lambda) \label{phi(t)->phi4D(T)} \eeq holds.
In view of the definition (\ref{phi_k}) of $\phi_k(\lambda)$'s, it is natural to expect that $\phi^{\frD}_k(\lambda)$'s take the form \beq \phi^{\frD}_k(\lambda) = \sum_{i=1}^\infty\left((\lambda_i-i+1)^k - (-i+1)^k\right). \label{phi4D_k} \eeq The following is a clue to this problem. \begin{lemma} As $R \to 0$ under the transformation (\ref{qQ-4Dlimit}) of the parameters, \beq \sum_{j=1}^k(-1)^{k-j}\binom{k}{j}\phi_j(\lambda) = \phi^{\frD}_k(\lambda)(-R\hbar)^k + O(R^{k+1}). \label{phi-phi4D-rel} \eeq \end{lemma} \begin{proof} The difference of the two identities \begin{gather*} \sum_{j=1}^k(-1)^{k-j}\binom{k}{j}q^{ju} = (q^u-1)^k - (-1)^k,\\ \sum_{j=1}^k(-1)^{k-j}\binom{k}{j}q^{jv} = (q^v-1)^k - (-1)^k \end{gather*} yields the identity \begin{align*} \sum_{j=1}^k(-1)^{k-j}\binom{k}{j}(q^{ju} - q^{jv}) &= (q^u-1)^k - (q^v-1)^k\\ &= (u^k - v^k)(-R\hbar)^k + O(R^{k+1}). \end{align*} One can derive (\ref{phi-phi4D-rel}) by specializing this identity to $u = \lambda_i-i+1$ and $v = -i+1$ and summing the outcome over $i = 1,2,\ldots$. \end{proof} (\ref{phi-phi4D-rel}) implies the identity \beqnn \lim_{R\to 0}\sum_{k=1}^\infty\frac{T_k}{(-R\hbar)^k} \sum_{j=1}^k(-1)^{k-j}\binom{k}{j}\phi_j(\lambda) = \sum_{k=1}^\infty T_k\phi^{\frD}_k(\lambda) \eeqnn for $\phi_k(\lambda)$'s and the potentials shown in (\ref{phi_k}). Since \beqnn \sum_{k=1}^\infty\frac{T_k}{(-R\hbar)^k} \sum_{j=1}^k(-1)^{k-j}\binom{k}{j}\phi_j(\lambda) = \sum_{j=1}^\infty\sum_{k=j}^\infty \binom{k}{j}\frac{(-1)^{k-j}T_k}{(-R\hbar)^k}\phi_j(\lambda), \eeqnn one can conclude that the identity (\ref{phi(t)->phi4D(T)}) holds if $t_k$'s and $T_k$'s are related by the linear relations \beq t_j = \sum_{k=j}^\infty\binom{k}{j}\frac{(-1)^{k-j}T_k}{(-R\hbar)^k}. \label{t-T-relation} \eeq This gives an $R$-dependent transformation $\bst = \bst(\bsT,R)$ that we have been seeking. Note that this is a triangular (hence invertible) linear transformation between $\bst$ and $\bsT$. Let $Z_{\frD}(\bsT)$ denote the deformed partition function \beq Z_{\frD}(\bsT) = \sum_{\lambda\in\calP}\left(\frac{\dim\lambda}{|\lambda|!}\right)^2 \left(\frac{\Lambda}{\hbar}\right)^{2|\lambda|} e^{\phi_{\frD}(\bsT,\lambda)} \label{Z4D(T)-Psum} \eeq with the external potentials (\ref{phi4D_k}). We are thus led to the following conclusion. \begin{theorem} As $R \to 0$ under the $R$-dependent transformation $\bst = \bst(\bsT,R)$ of the coupling constants defined by (\ref{t-T-relation}), $Z(\bst)$ converges to $Z_{\frD}(\bsT)$: \beq \lim_{R\to 0}Z(\bst(\bsT,R)) = Z_{\frD}(\bsT). \eeq \end{theorem} It is easy to see that $Z_{\frD}(X)$ and $Z_{\frD}(\bsT)$ are connected by the substitution \beq T_k = - \frac{\hbar^k}{kX^k} \label{T-X-relation} \eeq as \beqnn Z_{\frD}(X) = Z_{\frD}\left(-\frac{\hbar}{X}, -\frac{\hbar^2}{2X^2}, \ldots, -\frac{\hbar^k}{kX^k},\dots\right). \eeqnn This fact plays a role in the next section. \section{Bilinear equations} \subsection{Fay-type bilinear equations for KP hierarchy} Let us recall the notion of Fay-type bilinear equations in the theory of the KP hierarchy \cite{AvM92,SS83,TT95}. Given a general tau function $\tau(\bst)$, one can consider an $N$-variate generalization of (\ref{tau(x)}): \beqnn \tau([x_1]+\cdots+[x_N]) = \langle 0|\Gamma_{+}(x_1,\ldots,x_N)|W\rangle. \eeqnn Its product with the Vandermonde determinant \beqnn \Delta(x_1,\cdots,x_N) = \prod_{1\leq i<j\leq N}(x_i - x_j) \eeqnn is the $N$-point function of the fermion field $\psi^*(x^{-1})$ in the background state $|W\rangle$ \cite{JM83,MJD-book}.
Actually, it is more convenient to leave $\bst$ as well. Let $\tau(\bst,x_1,\ldots,x_N)$ denote the function thus obtained: \begin{align} \tau(\bst,x_1,\ldots,x_N) &= \tau(\bst + [x_1] + \cdots + [x_N]) \notag\\ &= \langle 0|\Gamma_{+}(x_1,\ldots,x_N) \exp\left(\sum_{k=1}^\infty t_kJ_k\right)|W\rangle. \label{tau(t,xx)} \end{align} By virtue of the aforementioned interpretation as the $N$-point function of a fermion field, the product \beqnn \xi(x_1,\ldots,x_N) = \Delta(x_1,\ldots,x_N)\tau(\bst,x_1,\ldots,x_N) \eeqnn with the Vandermonde determinant satisfies the bilinear equations \beq \sum_{j=N}^{2N}(-1)^{j-N}\xi(x_1,\ldots,x_{N-1},x_j) \xi(x_N,\ldots,\widehat{x_j},\ldots,x_{2N}) = 0, \label{Fay-eq} \eeq where $\widehat{x_j}$ means removing $x_j$ from the list of variables therein. As pointed out by Sato and Sato \cite{SS83}, these equations are avatars of the Pl\"ucker relations among the Pl\"ucker coordinates of a Grassmann manifold. The simplest ($N = 2$) case \begin{align} &(x_1-x_2)(x_3-x_4)\tau(\bst,x_1,x_2)\tau(\bst,x_3,x_4) \notag\\ &\mbox{} - (x_1-x_3)(x_2-x_4)\tau(\bst,x_1,x_3)\tau(\bst,x_2,x_4) \notag\\ &\mbox{} + (x_1-x_4)(x_2-x_3)\tau(\bst,x_1,x_4)\tau(\bst,x_2,x_3) = 0 \label{Fay4-eq} \end{align} of (\ref{Fay-eq}), referred to as a Fay-type bilinear equation, is known to play a particular role. Specialized to $x_4 = 0$, this equation turns into the so called Hirota-Miwa equation \begin{align} &(x_1-x_2)x_3\tau(\bst+[x_1]+[x_2])\tau(\bst+[x_3]) \notag\\ &\mbox{} + (x_2-x_3)x_1\tau(\bst+[x_2]+[x_3])\tau(\bst+[x_1]) \notag\\ &\mbox{} + (x_3-x_1)x_2\tau(\bst+[x_3]+[x_1])\tau(\bst+[x_2]) = 0. \label{HM-eq} \end{align} Moreover, dividing this equation by $x_3$ and letting $x_3 \to 0$ yield the differential Fay identity \cite{AvM92} \begin{align} &(x_1-x_2)\left(\tau(\bst+[x_1]+[x_2])\tau(\bst) - \tau(\bst+[x_1])\tau(\bst+[x_2])\right) \notag\\ &\mbox{} + x_1x_2\left(\tau(\bst+[x_1])\tau_{t_1}(\bst+[x_2]) - \tau_{t_1}(\bst+[x_1])\tau(\bst+[x_2])\right) = 0, \label{diff-Fay-eq} \end{align} where $\tau_{t_1}(\bst)$ denotes the $t_1$-derivative of $\tau(\bst)$. It is known \cite{TT95} that the differential Fay identity characterizes a general tau function of the KP hierarchy in the following sense. \begin{theorem} \label{Fay-KP-theorem} A function $\tau(\bst)$ of $\bst = (t_1,t_2,\ldots)$ is a tau function of the KP hierarchy if and only if it satisfies (\ref{diff-Fay-eq}). \end{theorem} As a corollary, it turns out that each of (\ref{Fay4-eq}) and (\ref{HM-eq}), too, is a necessary and sufficient condition for a function $\tau(\bst)$ to be a KP tau function. This fact is a clue to the subsequent consideration. \subsection{Bilinear equations in melting crystal model} Let $Z(\bst,x_1,\ldots,x_N)$ denote the function \beq Z(\bst,x_1,\ldots,x_N) = \sum_{\lambda\in\calP}s_\lambda(q^{-\rho})^2Q^{|\lambda|} e^{\phi(\bst,\lambda)}\prod_{j=1}^N\prod_{i=1}^\infty \frac{1 - q^{\lambda_i-i+1/2}x_j}{1 - q^{-i+1/2}x_j} \label{Z(t,xx)-Psum} \eeq obtained by shifting $\bst$ in $Z(\bst)$ as \beqnn t_k \to t_k - \sum_{j=1}^N\frac{q^{-k/2}x_j^k}{k}. \eeqnn This amounts to inserting an $N$-variate vertex operator in a fermionic expression of $Z(\bst)$, say (\ref{Z(t)-dualvev}), as \begin{align} Z(\bst,x_1,\ldots,x_N) &= \langle 0|\Gamma'_{+}(x_1,\ldots,x_N) \exp\left(\sum_{k=1}^\infty(-1)^kq^{k/2}t_kJ_k\right) g_2|0\rangle \notag\\ &= \langle 0|\Gamma_{+}(x_1,\ldots,x_N) \exp\left(- \sum_{k=1}^\infty q^{k/2}t_kJ_k\right) g_2'|0\rangle. 
\label{Z(t,xx)-dual} \end{align} Viewing (\ref{Z(t,xx)-dual}) as a special case of (\ref{tau(t,xx)}), one can apply the aforementioned facts about KP tau functions. In particular, $Z(\bst)$ satisfies the three-term bilinear equation \begin{align} &(x_1-x_2)(x_3-x_4)Z(\bst,x_1,x_2)Z(\bst,x_3,x_4) \notag\\ &\mbox{} - (x_1-x_3)(x_2-x_4)Z(\bst,x_1,x_3)Z(\bst,x_2,x_4) \notag\\ &\mbox{} + (x_1-x_4)(x_2-x_3)Z(\bst,x_1,x_4)Z(\bst,x_2,x_3) = 0. \label{Z-Fay4-eq} \end{align} It is remarkable that these bilinear equations survive the 4D limit. Let us set the parameters $q,Q$ and the coupling constants $\bst$ to the $R$-dependent form shown in (\ref{qQ-4Dlimit}) and (\ref{t-T-relation}), and transform the variables $x_1,\ldots,x_N$ to new variables $X_1,\ldots,X_N$ as \beqnn x_j = e^{R(X_j-\hbar/2)},\quad j = 1,\ldots,N. \eeqnn Note that we have slightly modified the relation (\ref{x-X-relation}) between $x$ and $X$ for convenience of the subsequent consideration. As $R \to 0$ under these $R$-dependent transformations, $Z(\bst,x_1,\ldots,x_N)$ converges to a function of the form \begin{align} &Z_{\frD}(\bsT,X_1,\ldots,X_N) \notag\\ &= \sum_{\lambda\in\calP}\left(\frac{\dim\lambda}{|\lambda|!}\right)^2 \left(\frac{\Lambda}{\hbar}\right)^{2|\lambda|}e^{\phi_{\frD}(\bsT,\lambda)} \prod_{j=1}^N\prod_{i=1}^\infty \frac{X_j - (\lambda_i-i+1)\hbar}{X_j - (-i+1)\hbar}. \label{Z4D(T,XX)-Psum} \end{align} Since the differences $x_i - x_j$ in $\Delta(x_1,\ldots,x_N)$ behave as \beqnn x_i - x_j = R(X_i - X_j) + O(R^2), \eeqnn the three-term bilinear equation (\ref{Z-Fay4-eq}), divided by $R^2$ before letting $R \to 0$, turns into the equation \begin{align} &(X_1 - X_2)(X_3 - X_4)Z_{\frD}(\bsT,X_1,X_2)Z_{\frD}(\bsT,X_3,X_4) \notag\\ &\mbox{} - (X_1 - X_3)(X_2 - X_4)Z_{\frD}(\bsT,X_1,X_3)Z_{\frD}(\bsT,X_2,X_4) \notag\\ &\mbox{} + (X_1 - X_4)(X_2 - X_3)Z_{\frD}(\bsT,X_1,X_4)Z_{\frD}(\bsT,X_2,X_3) = 0 \label{Z4D-Fay4-eq} \end{align} for $Z_{\frD}(\bsT,X_i,X_j)$'s. The more general bilinear equations (\ref{Fay-eq}), too, have 4D counterparts. Let us note here that $Z_{\frD}(\bsT,X_1,\ldots,X_N)$ can be obtained from $Z_{\frD}(\bsT)$ by shifting $\bsT$ as \beqnn T_k \to T_k - \sum_{j=1}^N\frac{\hbar^k}{kX_j^k} \eeqnn just as $Z_{\frD}(X)$ and $Z_{\frD}(\bsT)$ are connected by the substitution shown in (\ref{T-X-relation}). This is essentially the same relation by which $\tau(\bst,x_1,\ldots,x_N)$ is obtained from $\tau(\bst)$, except that the $x_j$'s are replaced by $X_j^{-1}$ \footnote{The presence of the negative sign and the coefficients $\hbar^k$ is not essential, because rescaling $t_k \to c^kt_k$ and reversal $t_k \to -t_k$ of the time variables are symmetries of the KP hierarchy. This is also the case for the relation between $Z(\bst,x_1,\ldots,x_N)$ and $Z(\bst)$.}. Therefore, for comparison with the Fay-type bilinear equation (\ref{Fay4-eq}) for KP tau functions, one should rewrite (\ref{Z4D-Fay4-eq}) as \begin{align*} &(X_1^{-1} - X_2^{-1})(X_3^{-1} - X_4^{-1}) Z_{\frD}(\bsT,X_1,X_2)Z_{\frD}(\bsT,X_3,X_4) \notag\\ &\mbox{} - (X_1^{-1} - X_3^{-1})(X_2^{-1} - X_4^{-1}) Z_{\frD}(\bsT,X_1,X_3)Z_{\frD}(\bsT,X_2,X_4) \notag\\ &\mbox{} + (X_1^{-1} - X_4^{-1})(X_2^{-1} - X_3^{-1}) Z_{\frD}(\bsT,X_1,X_4)Z_{\frD}(\bsT,X_2,X_3) = 0. \end{align*} It is this equation that corresponds to (\ref{Fay4-eq}) literally. According to Theorem \ref{Fay-KP-theorem}, this equation is enough to deduce the following conclusion. \begin{theorem} $Z_{\frD}(\bsT)$ is a tau function of the KP hierarchy with time variables $\bsT = (T_1,T_2,\ldots)$.
\end{theorem} This result may be explained in the context of Gromov-Witten theory of $\CC\PP^1$ as well. $Z_{\frD}(\bsT)$ has a fermionic expression, analogous to (\ref{Z(t)=<..e^H..>}), of the form \beq Z_{\frD}(\bsT) = \langle 0|e^{J_1}(\Lambda/\hbar)^{2L_0} e^{H_{\frD}(\bsT)}e^{J_{-1}}|0\rangle, \label{Z4D(t)=<..e^H..>} \eeq where \beqnn H_{\frD}(\bsT) = \sum_{k=1}^\infty T_kH^{\frD}_k,\quad H^{\frD}_k = \sum_{n\in\ZZ}n^k{:}\psi_{-n}\psi^*_n{:}. \eeqnn This function is a member of the set of functions $Z_{\frD}(\bsT,s)$, $s \in \ZZ$, defined as \beqnn Z_{\frD}(\bsT,s) = \langle s|e^{J_1}(\Lambda/\hbar)^{2L_0} e^{H_{\frD}(\bsT)}e^{J_{-1}}|s\rangle, \eeqnn where $|s\rangle$ and $\langle s|$ are the ground states of the charge-$s$ sector of the Fock spaces. The combinatorial definition (\ref{Z4D(T)-Psum}) of $Z_{\frD}(\bsT)$ can be extended to these functions as \beq Z_{\frD}(\bsT,s) = \sum_{\lambda\in\calP}\left(\frac{\dim\lambda}{|\lambda|!}\right)^2 \left(\frac{\Lambda}{\hbar}\right)^{2|\lambda|+s(s+1)} e^{\phi_{\frD}(\bsT,s,\lambda)}, \label{Z4D(t,s)-Psum} \eeq where \begin{gather*} \phi_{\frD}(\bsT,s,\lambda) = \sum_{k=1}^\infty T_k\phi^{\frD}_k(s,\lambda),\\ \phi^{\frD}_k(s,\lambda) = \sum_{i=1}^\infty\left((\lambda_i-i+1+s)^k - (-i+1+s)^k\right) + \text{correction terms}. \end{gather*} $Z_{\frD}(\bsT,s)$ is also known as a generating function of all-genus Gromov-Witten invariants of $\CC\PP^1$ \cite{OP02}, and proven to be a tau function of the 1D Toda hierarchy by several different methods \cite{DZ04,Getzler01,Milanov06}. One can deduce from these results, too, that $Z_{\frD}(\bsT) = Z_{\frD}(\bsT,0)$ is a tau function of the KP hierarchy. As far as the KP hierarchy is concerned, our proof is conceptually simpler than those in the aforementioned literature. Moreover, one can use a Toda version \cite{Takasaki07,Teo06} of Fay-type bilinear equations to prove in much the same way that $Z_{\frD}(\bsT,s)$ is a tau function of the 1D Toda hierarchy, though the details are slightly more complicated. \section{Conclusion} The primary motivation of this work was to understand the result of Dunin-Barkowski et al. \cite{DBMNPS13} in the language of the quantum spectral curve of the melting crystal model. In the course of solving this problem, we have found how to achieve the 4D limit of the deformed partition function $Z(\bst)$ itself. As a byproduct, this prescription for the 4D limit has turned out to transfer Fay-type bilinear equations from $Z(\bst)$ to its 4D limit $Z_{\frD}(\bsT)$. It will be better to summarize these results from two aspects, namely, quantum curves and bilinear equations: \begin{itemize} \item[1.] {\it Quantum curves\/}: One can derive a quantum spectral curve of the melting crystal model by the method of our work on quantum mirror curves in topological string theory \cite{TN16}. This quantum curve is formulated as the $q$-difference equation (\ref{(A-1)Z(x)=0}) for the single-variate specialization $Z(x)$ of $Z(\bst)$. Its 4D limit is achieved by transforming the variable $x$ to a new variable $X$ as shown in (\ref{x-X-relation}) and letting $R \to 0$. (\ref{(A-1)Z(x)=0}) thereby turns into the difference equation (\ref{Z4D(X)-diffeq}) for the 4D version $Z_{\frD}(X)$ of $Z(x)$. (\ref{Z4D(X)-diffeq}) can be further converted to the quantum spectral curve (\ref{CP1qsc-eq}) of Gromov-Witten theory of $\CC\PP^1$. (\ref{Z4D(X)-diffeq}) and (\ref{CP1qsc-eq}) are derived by Dunin-Barkowski et al. \cite{DBMNPS13} by genuinely combinatorial computations.
Our approach highlights a role of the KP hierarchy that underlies these quantum curves. \item[2.] {\it Bilinear equations\/}: According to our previous work on the melting crystal model \cite{NT07}, $Z(\bst)$ is a tau function of the KP hierarchy. As $R \to 0$ under the $R$-dependent transformation (\ref{t-T-relation}) of the coupling constants, $Z(\bst)$ converges to the 4D version $Z_{\frD}(\bsT)$. In this limit, the three-term bilinear equation (\ref{Z-Fay4-eq}) for $Z(\bst)$ turns into its counterpart (\ref{Z4D-Fay4-eq}) for $Z_{\frD}(\bsT)$. This implies that $Z_{\frD}(\bsT)$, too, is a tau function of the KP hierarchy. We have thus obtained a new approach to the integrable structure in Okounkov and Pandharipande's generating function of all-genus Gromov-Witten invariants of $\CC\PP^1$ \cite{OP02}. \end{itemize} Though the detail is omitted, our consideration on Fay-type bilinear equations carries over to the $s$-deformed partition functions $Z(\bst,s)$ and $Z_{\frD}(\bsT,s)$ defined by (\ref{Z(t,s)-Psum}) and (\ref{Z4D(t,s)-Psum}). This leads to yet another proof of the fact \cite{DZ04,Getzler01,Milanov06} that $Z_{\frD}(\bsT,s)$ is a tau function of the 1D Toda hierarchy. Let us stress that the integrable structure of $Z_{\frD}(\bsT)$ still remains to be fully elucidated. Its 5D (or K-theoretic) lift $Z(\bst)$ has a fermionic expression, such as (\ref{Z(t)=<..g_2..>}) and (\ref{Z(t)-dualvev}), that shows manifestly that $Z(\bst)$ is a tau function of the KP hierarchy. Moreover, since the generating operator $g$ therein is rather simple, one can even find the associated generating operator $G$ in $V = \CC((x))$ explicitly. In contrast, no similar fermionic expression of $Z_{\frD}(\bsT)$ is currently known. The preliminary fermionic expression (\ref{Z4D(t)=<..e^H..>}) of $Z_{\frD}(\bsT)$ cannot be converted to such a form by the method of our previous work \cite{NT07}. The limiting procedure from $Z(\bst)$ is a way to overcome this difficulty, but this should not be a final answer. \subsection*{Acknowledgements} The authors are grateful to Toshio Nakatsu for valuable comments. This work is partly supported by the JSPS Kakenhi Grant No. 25400111 and No. 15K04912.
22,994
Latest News 21 August, 2020 Horse owners offer reward The owners of a stallion that was shot dead in July have offered a $20,000 reward for information leading to a conviction. The owners of a registered Quarter Horse stallion that was shot dead in July have offered a $20,000 reward for information leading to a conviction. The Wolf of Wall Street was in a paddock at the Burlington Station homestead with four mares when he was shot through the lungs sometime between 8pm on Tuesday, June 16 and 3.30pm on Thursday, June 18. Detectives from Major Organised Crime Squad Rural in Mareeba are investigating the shooting. The owners of the stallion, Ray Heslin and Kelli Thomas, have offered a sizeable reward for information that leads to an arrest and conviction. The horse was purchased for $65,000 in 2015, and the owners now say that he was worth $150,000. "We could have sold him many times," Ms Thomas said. "He was a pure gentleman, easy to have around and easy to ride. "It's a pretty lowlife act to shoot an innocent animal - the person might as well have shot someone in our family. "We just want justice for that." The Wolf of Wall Street and the four mares he was with were in the property's paddock that fronts the public road between Mt Surprise and Chillagoe, 53km from Mt Surprise. Ms Thomas said they had been checking on the horses every day, except the day before they found the stallion dead when they'd gone to Atherton for the day. She believes the shooting was deliberate, as none of the other horses in the paddock were targeted, and said it was unlikely to have been an accident. "Roo shooters and pig hunters know what they're looking at," she said. "He had a clipped mane and a shiny coat and he was a long way from looking like a brumby." Police are urging anyone with information in relation to the offence to contact Policelink and use the online suspicious activity form, quoting QP2001269328. Information can also be reported anonymously to Crime Stoppers 24 hours a day.
88,870
\begin{document} \title{On the Almeida-Thouless transition line \\ in the Sherrington-Kirkpatrick model\\ with centered Gaussian external field} \author{Wei-Kuo Chen\thanks{University of Minnesota. Email: [email protected]. Partly supported by NSF grant DMS-17-52184}} \maketitle \begin{abstract} We study the phase transition of the free energy in the Sherrington-Kirkpatrick mean-field spin glass model with centered Gaussian external field. We show that the corresponding Almeida-Thouless line is the correct transition curve that distinguishes between the replica symmetric and replica symmetry breaking solutions in the Parisi formula. \end{abstract} \section{Introduction and main results} The famous Sherrington-Kirkpatrick (SK) mean-field spin glass model was introduced in \cite{SK72} aiming to explain some unusual magnetic behavior of certain alloys. By means of the replica method, it was also proposed in \cite{SK72} that the thermodynamic limit of the free energy in the SK model can be solved by the replica symmetric ansatz at very high temperature. A complete picture was later settled in the seminal works \cite{Parisi79,Parisi80} of Parisi, in which he adapted an ultrametric ansatz and deduced a variational formula for the limiting free energy at all temperatures, known as the Parisi formula. This formula was rigorously established by Talagrand \cite{Tal06} utilizing the replica symmetry breaking bound discovered by Guerra \cite{Guerra03}. See \cite{MPV87} for physicists' studies of the SK model as well as \cite{Pan13,Tal111,Tal112} for the recent mathematical progress. For any $N\geq 1,$ the Hamiltonian of the SK model is defined as \begin{align}\label{SK} -H_N(\sigma)=\frac{\beta}{\sqrt{N}}\sum_{1\leq i<j\leq N}g_{ij}\sigma_i\sigma_j+h\sum_{i=1}^N\sigma_i,\,\,\forall\sigma\in \{-1,1\}^N, \end{align} where $g_{ij}$'s are i.i.d. standard Gaussian. The parameters $\beta>0$ and $h\in \mathbb{R}$ are the (inverse) temperature and external field, respectively. Define the free energy as $$F_N(\beta,h)=\frac{1}{N}\log\sum_{\sigma\in \{-1,1\}^N}e^{-H_N(\sigma)}.$$ The famous Parisi formula \cite{Pan13,Tal112} asserts that almost surely, \begin{align}\label{eq2} \lim_{N\to\infty}F_N(\beta,h)=\min_{\alpha\in \pr([0,1])}{P}(\alpha). \end{align} Here, $\pr([0,1])$ is the collection of all probability distribution functions on $[0,1]$ equipped with the $L^1(dx)$ distance and $P$ is a functional on $\pr([0,1])$ defined as \begin{align*} {P}(\alpha)&=\log 2+\e\Phi_{\alpha}(0,h)-\frac{\beta^2}{2}\int_0^1s\alpha(s)ds, \end{align*} where $\Phi_\alpha$ is the weak solution \cite{JT16} to \begin{align}\label{eq1} \partial_s\Phi_\alpha(s,x)&=-\frac{\beta^2}{2}\bigl(\partial_{xx}\Phi_\alpha(s,x)+\alpha(s)(\partial_x\Phi_\alpha(s,x))^2\bigr),\,\,(s,x)\in [0,1]\times \mathbb{R} \end{align} with boundary condition $\Phi_\alpha(1,x)=\log\cosh x.$ It is known \cite{AC15} that the Parisi formula has a unique minimizer denoted by $\alpha_P.$ We say that the Parisi formula is solved by the replica symmetric solution if $\alpha_P=1_{[q,1]}$ for some $q\in [0,1]$ and is solved by the replica symmetry breaking solution otherwise. For any $\beta,h>0,$ let $q=q(\beta,h)$ be the unique solution (see \cite{Guerra01} and \cite[Proposition A.14.1]{Tal111}) to \begin{align} \label{fxpt} q=\e\tanh^2(h+\beta z\sqrt{q}), \end{align} where $z$ is standard Gaussian.
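For a quick numerical illustration (our own sketch, not part of the analysis of this note), the fixed point $q$ of \eqref{fxpt} and the quantity $\beta^2\e\cosh^{-4}(h+\beta z\sqrt{q})$ entering the AT criterion discussed below can be approximated in a few lines of Python. The sketch assumes NumPy, approximates the Gaussian expectation by Gauss--Hermite quadrature, uses plain fixed-point iteration (assumed to converge for the chosen parameters), and takes arbitrary example values of $\beta$ and $h$:
\begin{verbatim}
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(80)
z = np.sqrt(2.0) * nodes        # E f(z) ~ sum_i w_i f(sqrt(2) x_i) / sqrt(pi)
w = weights / np.sqrt(np.pi)

def gauss_mean(f):              # expectation over a standard Gaussian
    return np.sum(w * f(z))

def solve_q(beta, h, tol=1e-12, max_iter=10000):
    q = 0.5                     # starting point in (0,1)
    for _ in range(max_iter):
        q_new = gauss_mean(lambda x: np.tanh(h + beta * x * np.sqrt(q))**2)
        if abs(q_new - q) < tol:
            break
        q = q_new
    return q

beta, h = 1.5, 0.3              # example values, chosen arbitrarily
q = solve_q(beta, h)
at = beta**2 * gauss_mean(lambda x: np.cosh(h + beta * x * np.sqrt(q))**(-4))
print(q, at, "RS (inside AT line)" if at <= 1 else "RSB (above AT line)")
\end{verbatim}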
In \cite{AT78}, it was conjectured that for $\beta,h>0$, the SK model is solved by the replica symmetric solution if and only if $(\beta,h)$ satisfies $$ \beta^2\e\frac{1}{\cosh^4(h+\beta z\sqrt{q})}\leq 1. $$ In other words, for $\beta,h>0,$ the following equation, known as the Almeida-Thouless line, \begin{align}\label{atline0} \beta^2\e\frac{1}{\cosh^4(h+\beta z\sqrt{q})}=1, \end{align} characterizes the transition between the replica symmetric and replica symmetry breaking solutions. Toninelli \cite{Toninelli02} proved that above the AT line, i.e., $\beta^2\e {\cosh^{-4}(h+\beta z\sqrt{q})}>1$, the solution to the Parisi formula is replica symmetry breaking. Later Talagrand \cite{Tal112} and Jagannath-Tobasco \cite{JT17} showed that inside the AT line, $\beta^2\e {\cosh^{-4}(h+\beta z\sqrt{q})}\leq1$, there exist fairly large regimes in which the Parisi formula is solved by the replica symmetric solution. As their regimes do not extend all the way up to the AT line, verifying the exactness of the AT line remains open. In this short note, we investigate the SK model with centered Gaussian external field and show that the corresponding AT line is indeed the transition line distinguishing between the replica symmetric and replica symmetry breaking solutions in the Parisi formula. For $\beta,h>0$, the Hamiltonian of the SK model with centered Gaussian external field is defined as \begin{align*} - \mathcal{H}_N(\sigma)&=\frac{\beta}{\sqrt{N}}\sum_{1\leq i<j\leq N}g_{ij}\sigma_i\sigma_j+h\sum_{i=1}^N\xi_i\sigma_i,\,\,\forall \sigma\in \{-1,1\}^N, \end{align*} where $(g_{ij})_{1\leq i<j\leq N}$ and $(\xi_i)_{1\leq i\leq N}$ are i.i.d. standard normal and are independent of each other. The free energy associated to this Hamiltonian is given by \begin{align*} \mathcal{F}_N(\beta,h)=\frac{1}{N}\e\log \sum_{\sigma\in \{-1,1\}^N}\exp \bigl(-\mathcal{H}_N(\sigma)\bigr). \end{align*} In a similar manner, the limiting free energy can be expressed by the Parisi formula as in \eqref{eq2} with a replacement of $h$ by $h\xi$ for $\xi$ a standard Gaussian random variable; more precisely, we have that almost surely, \begin{align*} \lim_{N\to\infty}\mathcal{F}_N(\beta,h)=\min_{\alpha\in \pr([0,1])}\mathcal{P}(\alpha), \end{align*} where \begin{align*} \mathcal{P}(\alpha)&=\log 2+\e\Phi_{\alpha}(0,h\xi)-\frac{\beta^2}{2}\int_0^1s\alpha(s)ds, \end{align*} and $\Phi_\alpha$ is defined as \eqref{eq1}. Note that the proof of this formula is the same as those in \cite{Pan13,Tal112} with no essential modifications. Moreover, as in \cite{AC15}, the Parisi formula here also has a unique minimizer denoted by $\alpha_P.$ We again say that the Parisi formula is solved by the replica symmetric solution if $\alpha_P=1_{[q,1]}$ for some $q\in [0,1]$ and is solved by the replica symmetry breaking solution otherwise. \begin{figure}\label{figure1} \includegraphics[width=5cm]{FigATline0.png} \centering \caption{This figure describes the phase transition in the SK model with centered Gaussian external field. The gray area is the regime of replica symmetric solutions, while the white area is the regime of replica symmetry breaking solutions. The boundary between these two regimes is the AT line \eqref{atline} corresponding to our model. The black dashed line is the AT line \eqref{atline0} of the original SK model with deterministic external field.} \end{figure} The AT line corresponding to our model is formulated analogously. Let $z,\xi$ be i.i.d. standard Gaussian random variables.
For $\beta,h>0,$ let $q=q(\beta,h)$ be the unique fixed point of \begin{align}\label{fixedpt} q&=\e \tanh^2(h\xi+\beta z\sqrt{q}), \end{align} where the existence and uniqueness of $q$ are guaranteed thanks to the Latala-Guerra lemma.\footnote{ In \cite{Guerra01} and \cite[Proposition A.14.1]{Tal111}, it is known that for any $r\in \mathbb{R},$ $x^{-1} {\e \tanh^2(r+\beta z\sqrt{x})}$ is a strictly decreasing function in $x>0$, which implies that $f(x):=x^{-1} {\e \tanh^2(h\xi+\beta z\sqrt{x})}$ is also strictly decreasing on $(0,\infty)$. Since $f(\infty)=0$ and $f(0)=\infty$, there exists a unique $x_0$ such that $f(x_0)=1.$ This ensures that \eqref{fixedpt} has a unique solution. } The AT line associated to the SK model with centered Gaussian external field is the collection of all $(\beta,h)\in (0,\infty)\times (0,\infty)$ satisfying \begin{align}\label{atline} \beta^2\e \frac{1}{\cosh^4(h\xi+\beta z\sqrt{q})}=1. \end{align} The following is our main result. \begin{theorem}\label{thm1} Consider the SK model with centered Gaussian external field. For any $\beta>0$ and $h>0,$ the Parisi formula exhibits the replica symmetric solution if and only if $(\beta,h)$ lies inside the AT line, i.e., \begin{align}\label{inat} \beta^2\e \frac{1}{\cosh^4(h\xi+\beta z\sqrt{q})}\leq 1. \end{align} \end{theorem} The rest of the paper is organized as follows. In Section \ref{sec2}, we gather some results regarding the directional derivative of the functional $\mathcal{P}$ and some consequences following from the first-order optimality of the Parisi variational formula. The proof of Theorem \ref{thm1} is presented in Section \ref{sec3}. \smallskip {\noindent \bf Acknowledgements.} The author thanks the anonymous referees for some helpful comments regarding the presentation of this paper. Special thanks are due to Si Tang for running the simulations in Figures \ref{figure1} and \ref{figure3}. \section{Preliminary results}\label{sec2} Let $\xi$ be standard Gaussian. Let $\alpha\in \mbox{Pr}[0,1]$. Conditionally on $\xi$, let $(X_{\alpha}^\xi(s))_{0\leq s\leq 1}$ be the solution to the following stochastic differential equation with initial condition $X_{\alpha}^\xi(0)=h\xi$, \begin{align*} dX_{\alpha}^\xi(s)&=\beta^2\alpha(s)\partial_x\Phi_{\alpha}(s,X_{\alpha}^\xi(s))ds+\beta dW(s), \end{align*} where $(W(s))_{s\in [0,1]}$ is a standard Brownian motion independent of $\xi.$ Set \begin{align*} \phi_\alpha(s)=\e \bigl(\partial_x\Phi_{\alpha}(s,X_{\alpha}^\xi(s))\bigr)^2,\,\,0\leq s\leq 1. \end{align*} The following proposition establishes the directional derivative of $\mathcal{P}$ in terms of $\phi_\alpha.$ \begin{proposition}\label{prop1} For $\alpha_0,\alpha_1\in \mbox{Pr}[0,1]$ and $\theta\in [0,1],$ set $\alpha_\theta=(1-\theta)\alpha_0+\theta\alpha_1.$ The right-derivative of $\theta\mapsto\mathcal{P}(\alpha_\theta)$ at zero is given by \begin{align*} \frac{d\mathcal{P}(\alpha_\theta)}{d\theta^+}\Bigr|_{\theta=0}=\frac{\beta^2}{2}\int_0^1\bigl(\alpha_1(s)-\alpha_0(s)\bigr)\bigl(\phi_{\alpha_0}(s)-s\bigr)ds.
\end{align*} Furthermore, $\alpha_0$ is the minimizer of $\min_{\alpha\in \mbox{\footnotesize Pr}[0,1]}\mathcal{P}(\alpha)$ if and only if $\frac{d\mathcal{P}(\alpha_\theta)}{d\theta^+}\Bigr|_{\theta=0}\geq 0$ for all $\alpha_1\in \mbox{Pr}[0,1].$ \end{proposition} \begin{proposition}\label{prop2} If $\alpha_0$ is the minimizer of $\mathcal{P}$, then every point in the support of $\alpha_0$ must satisfy $\phi_{\alpha_0}(s)=s.$ \end{proposition} In the case of the SK model with deterministic external field \eqref{SK}, the two propositions above were established in Theorem 2 and Proposition 1 in \cite{Chen17}, respectively, where the statements are essentially the same as Propositions \ref{prop1} and \ref{prop2} with a replacement of $h\xi$ by $h.$ As the proofs in \cite{Chen17} can be directly applied to Propositions \ref{prop1} and \ref{prop2} with no major changes, we do not reproduce the details here. The following lemma provides a simpler expression for $\phi_{\alpha}$. \begin{lemma}\label{lem1} Let $\alpha\in \mbox{Pr}[0,1].$ For any $s\in [0,1]$, we have that \begin{align*} \phi_\alpha(s)&=\e \partial_{x}\Phi_{\alpha}(s,M(s))^2\exp\Bigl(\int_0^s (\Phi_{\alpha}(s,M(s))-\Phi_{\alpha}(t,M(t)))\alpha(dt)\Bigr), \end{align*} where $M(s):=h\xi+\beta W(s).$ \end{lemma} \begin{proof} Conditionally on $\xi$, let ${\e}^\xi$ be the expectation associated to the probability measure \begin{align*} d{\p}^\xi= \exp\Bigl(-\int_0^1\beta\alpha(r)\partial_{x}\Phi(r,X_\alpha^\xi(r))dW(r)-\frac{1}{2}\int_0^1\beta^2 \alpha(r)^2\partial_x\Phi_{\alpha}(r,X_\alpha^\xi(r))^2dr\Bigr)d\p. \end{align*} From the Girsanov theorem, under ${\e}^\xi$, $$ \Bigl(\int_0^s\beta \alpha(r)\partial_x\Phi_{\alpha}(r,X_\alpha^\xi(r))dr+W(s)\Bigr)_{0\leq s\leq 1} $$ is a standard Brownian motion for which we denote by $(W^\xi(s))_{0\leq s\leq 1}.$ Set $M^\xi(s)=h\xi+\beta W^\xi(s).$ Under $\e^\xi,$ write \begin{align*} &\int_0^1\beta\alpha(r)\partial_{x}\Phi(r,X_\alpha^\xi(r))dW(r)+\frac{1}{2}\int_0^1\beta^2 \alpha(r)^2\partial_x\Phi_{\alpha}(r,X_\alpha^\xi(r))^2dr\\ &=\int_0^1\beta\alpha(r)\partial_{x}\Phi(r,X_\alpha^\xi(r))\Bigl(\beta \alpha(r)\partial_x\Phi_\alpha(r,X_\alpha^\xi(r)) dr+dW(r)\Bigr)\\ &\qquad\qquad-\frac{1}{2}\int_0^1\beta^2 \alpha(r)^2\partial_x\Phi_{\alpha}(r,X_\alpha^\xi(r))^2dr\\ &=\int_0^1\beta\alpha(r)\partial_{x}\Phi(r, M^\xi(r))dW^\xi(r)-\frac{1}{2}\int_0^1\beta^2 \alpha(r)^2\partial_x\Phi_{\alpha}(r, M^\xi(r))^2dr. \end{align*} Consequently, \begin{align} \nonumber \e\bigl[\bigl(\partial_x\Phi_\alpha(s,X_{\alpha}^\xi(s))\bigr)^2\big|\xi\bigr]&=\e^\xi \bigl(\partial_x\Phi_\alpha(s,M^\xi(s))\bigr)^2\exp\Bigl(\int_0^1\beta\alpha(r)\partial_{x}\Phi_\alpha(r, M^\xi(r))dW^\xi(r)\\ \label{eq3}&\qquad\qquad\qquad\qquad-\frac{1}{2}\int_0^1\beta^2 \alpha(r)^2\partial_x\Phi_{\alpha}(r, M^\xi(r))^2dr\Bigr). \end{align} Next, note that from It\^o's formula and \eqref{eq1}, for $0\leq t<s\leq 1,$ \begin{align*} &\Phi_\alpha(s, M^\xi(s))-\Phi_\alpha(t, M^\xi(t))\\ &=-\frac{\beta^2}{2}\int_t^s\alpha(r)(\partial_x\Phi_\alpha(r, M^\xi(r)))^2dr+\beta \int_t^s\partial_x\Phi_\alpha(r, M^\xi(r))W^\xi(dr). \end{align*} and then from the Fubini theorem, \begin{align*} &\int_0^s\bigl(\Phi_\alpha(s, M^\xi(s))-\Phi_\alpha(t, M^\xi(t))\bigr)\alpha(dt)\\ &=-\frac{\beta^2}{2}\int_0^s\alpha(r)^2(\partial_x\Phi_\alpha(r,M^\xi(r)))^2dr+\beta \int_0^s\alpha(r)\partial_x\Phi_\alpha(r, M^\xi(r))W^\xi(dr). 
\end{align*} Plugging this into \eqref{eq3} yields that \begin{align*} \e \bigl[\bigl(\partial_x\Phi_\alpha(s,X_\alpha^\xi(s))\bigr)^2\big|\xi\bigr]&=\e^\xi \Bigl[\partial_{x}\Phi_{\alpha}(s,M^\xi(s))^2\\ &\qquad\exp\Bigl(\int_0^s \bigl(\Phi_{\alpha}(s,M^\xi(s))-\Phi_{\alpha}(t,M^\xi(t))\bigr)\alpha(dt)\Bigr)\Bigr]\\ &=\e \Bigl[\partial_{x}\Phi_{\alpha}(s,M(s))^2\\ &\qquad\exp\Bigl(\int_0^s \bigl(\Phi_{\alpha}(s,M(s))-\Phi_{\alpha}(t,M(t))\bigr)\alpha(dt)\Bigr)\Big|\xi\Bigr], \end{align*} where we used that $(M(s))_{0\leq s\leq 1}\stackrel{d}{=}(M^\xi(s))_{0\leq s\leq 1}$ conditionally on $\xi.$ Finally, taking expectation in $\xi$ completes our proof. \end{proof} While Lemma \ref{lem1} holds for any $\alpha$, the important case that we shall use in our main proof is when $\alpha=1_{[q,1]}$ for some $q\in [0,1].$ In this case, one can compute $\phi_\alpha$ more explicitly. Let $z,z'$ be i.i.d. standard Gaussian independent of $\xi.$ Denote by $\e'$ the expectation with respect to $z'$ only. \begin{lemma}\label{lem2} Let $\alpha=1_{[q,1]}$ for some $q\in [0,1]$. We have that \begin{align}\label{lem2:eq1} \phi_\alpha(s)&= \left\{ \begin{array}{ll} \e \Bigl(\frac{\e'\tanh^2 (h\xi+\beta z\sqrt{q}+\beta z'\sqrt{s-q})\cosh (h\xi+\beta z\sqrt{q}+\beta z'\sqrt{s-q})}{\e'\cosh (h\xi+\beta z\sqrt{q}+\beta z'\sqrt{s-q})}\Bigr),&\,\,\mbox{if $s\in [q,1]$},\\ \\ \e\bigl(\e' \tanh(h\xi+\beta z\sqrt{s}+\beta z'\sqrt{q-s})\bigr)^2,&\,\,\mbox{if $s\in [0,q)$} \end{array}\right. \end{align} and \begin{align}\label{dphi} \phi_\alpha'(s)&=\left\{ \begin{array}{ll} \beta^2\e \Bigl(\frac{\e' \cosh^{-3}(h\xi+\beta z \sqrt{q}+\beta z'\sqrt{s-q})}{\e'\cosh (h\xi+\beta z \sqrt{q}+\beta z'\sqrt{s-q})}\Bigr),&\mbox{if $s\in[q,1]$},\\ \\ \beta^2\e\bigl(\e '\cosh^{-2}(h\xi+\beta z\sqrt{s}+\beta z'\sqrt{q-s})\bigr)^2,&\mbox{if $s\in [0,q)$}. \end{array} \right. \end{align} \end{lemma} \begin{proof} A direct computation gives that \begin{align*} \Phi_{\alpha}(s,x)&=\left\{ \begin{array}{ll}\frac{\beta^2}{2}(1-s)+\log\cosh x,&\,\,\mbox{if $s\in [q,1]$},\\ \\ \frac{\beta^2}{2}(1-q)+\e \log\cosh(x+\beta z\sqrt{q-s}),&\,\,\mbox{if $s\in [0,q)$} \end{array}\right. \end{align*} and that \begin{align*} \partial_x\Phi_{\alpha}(s,x)&= \left\{ \begin{array}{ll}\tanh x,&\,\,\mbox{if $s\in [q,1]$},\\ \\ \e \tanh(x+\beta z\sqrt{q-s}),&\,\,\mbox{if $s\in [0,q)$}. \end{array}\right. \end{align*} Plugging these along with the assumption $\alpha=1_{[q,1]}$ into Lemma \ref{lem1} establishes \eqref{lem2:eq1}. As for \eqref{dphi}, note that for any twice differentiable function $f$ with $\|f''\|_\infty<\infty$, we can compute by using the Gaussian integration by parts\footnote{Let $Z=(z_1,\ldots,z_n)$ be a centered Gaussian random vector and let $F$ be a differentiable function on $\mathbb{R}^n$ with $\sum_{i=1}^n\|\partial_{x_i}F\|_\infty<\infty.$ We have that $\e z_1F(Z)=\sum_{i=1}^n \e[z_1z_i]\e\bigl[\partial_{x_i}F(Z)\bigr].$} to obtain \begin{align}\label{int} \frac{d}{ds}\e' f(z'\sqrt{s-q})&=\frac{1}{2\sqrt{s-q}}\e'\bigl(z'f'(z'\sqrt{s-q})\bigr)=\frac{1}{2}\e'f''(z'\sqrt{s-q}),\,\,\forall s\in (q,1]. \end{align} Applying this equation and \begin{align*} \bigl(\tanh^2(x)\cosh(x)\bigr)''&=\frac{2}{\cosh^3(x)}+\tanh^2(x)\cosh(x),\\ \bigl(\cosh(x)\bigr)''&=\cosh(x) \end{align*} to the $\e'$-expectations in the first equation of \eqref{lem2:eq1}, the first equation of \eqref{dphi} follows by a straightforward computation.
To obtain the second equation in \eqref{dphi}, write \begin{align*} \e\bigl(\e' \tanh(h\xi+\beta z\sqrt{s}+\beta z'\sqrt{q-s})\bigr)^2&=\e\bigl[ \tanh(h\xi+\beta z\sqrt{s}+\beta z_1\sqrt{q-s})\\ &\qquad\quad\tanh(h\xi+\beta z\sqrt{s}+\beta z_2\sqrt{q-s})\bigr] \end{align*} for $z_1,z_2$ i.i.d. standard Gaussian independent of $z.$ From the last equation and $(\tanh(x))'={\cosh^{-2}(x)},$ we have \begin{align*} &\frac{d}{ds}\e\bigl(\e' \tanh(h\xi+\beta z\sqrt{s}+\beta z'\sqrt{q-s})\bigr)^2\\ &=\frac{\beta}{2}\e \frac{\tanh(h\xi+\beta z\sqrt{s}+\beta z_1\sqrt{q-s})}{\cosh^2(h\xi+\beta z\sqrt{s}+\beta z_2\sqrt{q-s})}\Bigl(\frac{z}{\sqrt{s}}-\frac{z_2}{\sqrt{q-s}}\Bigr)\\ &+ \frac{\beta}{2}\e \frac{\tanh(h\xi+\beta z\sqrt{s}+\beta z_2\sqrt{q-s})}{\cosh^2(h\xi+\beta z\sqrt{s}+\beta z_1\sqrt{q-s})}\Bigl(\frac{z}{\sqrt{s}}-\frac{z_1}{\sqrt{q-s}}\Bigr). \end{align*} Using Gaussian integration by parts yields that \begin{align*} &\frac{d}{ds}\e\bigl(\e' \tanh(h\xi+\beta z\sqrt{s}+\beta z'\sqrt{q-s})\bigr)^2\\ &=\beta^2\e \frac{1}{\cosh^2(h\xi+\beta z\sqrt{s}+\beta z_1\sqrt{q-s})\cosh^2(h\xi+\beta z\sqrt{s}+\beta z_2\sqrt{q-s})}\\ &=\beta^2\e\bigl(\e '\cosh^{-2}(h\xi+\beta z\sqrt{s}+\beta z'\sqrt{q-s})\bigr)^2. \end{align*} \end{proof} \section{Proof of Theorem \ref{thm1}}\label{sec3} Throughout the entire proof, we let $q$ be the unique solution to \eqref{fixedpt}. First, we assume that $(\beta,h)$ lies above the AT line, i.e., \begin{align*} \beta^2\e \frac{1}{\cosh^4(h\xi+\beta z\sqrt{q})}> 1. \end{align*} We show that $\alpha_P$ can not be replica symmetric. If not, then we must have that $\alpha_P=1_{[q',1]}$ for some $q'\in [0,1]$ and from Proposition \ref{prop2} and Lemma \ref{lem2}, $q'$ must satisfy $$\e\tanh^2(h\xi+\beta z\sqrt{q'})=\phi_{\alpha_P}(q')=q'.$$ Since this equation has only one unique solution (see \eqref{fixedpt}), we must have that $q=q'$. Consequently, $\phi_{\alpha_P}(q)=q$ and from \eqref{dphi}, \begin{align*} \phi_{\alpha_P}'(q)&=\beta^2\e \frac{1}{\cosh^4(h\xi+\beta z\sqrt{q})}>1. \end{align*} If we let $\varepsilon>0$ be small enough and $\alpha_1=1_{[q+\varepsilon,1]}$, then $\phi_{\alpha_P}(s)>s$ for $s\in [q,q+\varepsilon]$ so that from Proposition \ref{prop1}, $$ \frac{d}{d\theta^+}\mathcal{P}(\alpha_\theta)\Big|_{\theta=0}=-\frac{\beta^2}{2}\int_q^{q+\varepsilon}(\phi_{\alpha_P}(s)-s)ds<0 $$ and consequently, $1_{[q,1]}$ can not be the optimizer. Hence, outside the AT line, the SK model is not replica symmetric. Next, we assume that $(\beta,h)$ lies inside the AT line, i.e., \eqref{inat} holds. We proceed to show that $\alpha_P=1_{[q,1]}$. To this end, let $\alpha_0=1_{[q,1]}$ and we claim that \eqref{inat} implies that $\phi_{\alpha_0}(s)\geq s$ if $s<q$ and $\phi_{\alpha_0}(s)\leq s$ if $s>q.$ If this claim is valid, then for any $\alpha_1\in \mbox{Pr}[0,1],$ \begin{align*} \frac{d}{d\theta}\mathcal{P}(\alpha_\theta)\Big|_{\theta=0+}&=\frac{\beta^2}{2}\Bigl(\int_0^q\alpha_1(s)(\phi_{\alpha_0}(s)-s)ds+\int_q^1(\alpha_1(s)-1)(\phi_{\alpha_0}(s)-s)ds\Bigr)\geq 0. \end{align*} Hence, from Proposition \ref{prop1}, $\alpha_0=1_{[q,1]}$ is the minimizer of the Parisi formula and this will complete our proof. We now turn to the proof of our claim. First of all, from \eqref{dphi} and the Jensen inequality with respect to $\e'$, for $s\in [0,q),$ \begin{align*} \phi_{\alpha_0}'(s)&\leq \beta^2\e\e' \cosh^{-4}(h\xi+\beta z\sqrt{s}+\beta z'\sqrt{q-s})=\beta^2\e \cosh^{-4}(h\xi+\beta z\sqrt{q})\leq 1. 
\end{align*} Since $\phi_{\alpha_0}(q)=q$ by \eqref{lem2:eq1}, we see that $\phi_{\alpha_0}(s)\geq s$ for $s\in [0,q).$ Next, for $s\in (q,1),$ recall from the first equation in \eqref{dphi} that by denoting $Y=h\xi+\beta z\sqrt{q}+\beta z'\sqrt{s-q} $, \begin{align*} \phi_{\alpha_0}'(s)&=\beta^2\e \frac{\e' \cosh^{-3}Y}{\e'\cosh Y} \end{align*} Write $\cosh^{-3}x=(\cosh^{-4}x)(\cosh x)$. Since $\cosh^{-4}x$ is decreasing and $\cosh x$ is increasing for $x>0,$ applying the FKG inequality\footnote{The FKG inequality states that if $X$ is a random variable and $f,g$ are both nondecreasing functions with $\mbox{Var}(f(X))<\infty$ and $\mbox{Var}(g(X))<\infty$, then $\e f(X)g(X)\geq \e f(X) \e g(X).$} implies \begin{align*} \e'\cosh^{-3}Y&=\e'\cosh^{-3}|Y|\\ &\leq \e'\cosh^{-4}|Y|\cdot \e'\cosh|Y|\\ &=\e'\cosh^{-4}Y\cdot\e'\cosh Y. \end{align*} Thus, for any $q<s\leq 1,$ \begin{align*} \phi_{\alpha_0}'(s)&\leq \beta^2\e \e'\cosh^{-4}Y\\ &=\beta^2\e \cosh^{-4}(h\xi+\beta z \sqrt{s})\\ &=\beta^2\e \cosh^{-4}(z\sqrt{h^2+\beta^2s})\\ &\leq \beta^2\e \cosh^{-4}(z\sqrt{h^2+\beta^2q})\\ &=\beta^2\e \cosh^{-4}(h\xi+\beta z \sqrt{q})\leq 1, \end{align*} where the second inequality is valid since $\cosh^{-4}x$ is even and decreasing in $x>0.$ Consequently, $\phi_{\alpha_0}'(s)-1\leq 0$ for any $q<s\leq 1.$ Since $\phi_{\alpha_0}(q)=q,$ we deduce that $\phi_{\alpha_0}(s)\leq s$ for $q<s\leq 1.$ This finishes the proof of our claim. \begin{figure} \includegraphics[width=5cm]{FigSimu0.png} \centering \caption{This is a simulation of $s\in [q,1]\mapsto \beta^2\e {\cosh^{-4}(h+\beta z\sqrt{s})}$ with $\beta=4.05$, $h=5$, and $q\approx0.909.$ The blue line is the level $ \beta^2\e {\cosh^{-4}(h+\beta z\sqrt{q})})$ and the red line is $q.$ It can be seen that $\beta^2\e {\cosh^{-4}(h+\beta z\sqrt{s})}$ lies above $\beta^2\e {\cosh^{-4}(h+\beta z\sqrt{q})}$ for $s\in [q,1]$ and above $1$ for $s$ large enough.} \label{figure3} \end{figure} \begin{remark} \rm In the case of the SK model with non-random external field \eqref{SK}, the corresponding $\phi_{\alpha}(s)$ for $\alpha=1_{[q,1]}$ and $q$ satisfying \eqref{fxpt} is the same as in Lemma \ref{lem2} except that $h\xi$ is replaced by $h.$ The same argument in Theorem \ref{thm1} enables us to show that the Parisi formula can not be solved by the replica symmetric solution if $(\beta,h)$ lies outside the AT line. However, if $(\beta,h)$ lies inside the AT line, while it can still be shown that $\phi_\alpha(s)\geq s$ for all $s\in [0,q)$, it is unclear why $\phi_\alpha(s)\leq s$ for all $s\in [q,1].$ In this case, one can still use the FKG inequality to obtain that for $q\leq s\leq 1,$ \begin{align*} \phi_\alpha'(s)&\leq \beta^2\e \cosh^{-4}(h+\beta z\sqrt{s}), \end{align*} but a numerical simulation, see Figure \ref{figure3}, suggests that the following does not always hold: $$ \beta^2\e \cosh^{-4}(h+\beta z\sqrt{s})\leq \beta^2\e \cosh^{-4}(h+\beta z\sqrt{q}) $$ for all $q\leq s\leq 1.$ \end{remark} \bibliographystyle{amsplain} \footnotesize{\bibliography{ref}} \end{document}
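A rough numerical sketch of the check described in the closing remark (our own illustration, not the author's simulation code): it assumes NumPy, uses the parameter values $\beta=4.05$, $h=5$ quoted there, approximates the Gaussian expectation by Gauss-Hermite quadrature, and solves the fixed-point equation by plain iteration, assumed to converge here.

```python
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(120)
z = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

def E(f):                    # expectation over a standard Gaussian
    return np.sum(w * f(z))

beta, h = 4.05, 5.0          # values quoted in the remark

q = 0.9                      # fixed point of q = E tanh^2(h + beta*z*sqrt(q))
for _ in range(5000):
    q = E(lambda x: np.tanh(h + beta * x * np.sqrt(q))**2)

level = beta**2 * E(lambda x: np.cosh(h + beta * x * np.sqrt(q))**(-4))
print("q =", round(q, 3), " level at s = q:", round(level, 3))

for s in np.linspace(q, 1.0, 6):
    val = beta**2 * E(lambda x: np.cosh(h + beta * x * np.sqrt(s))**(-4))
    print("s =", round(s, 3), " beta^2 E sech^4 =", round(val, 3))
```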
152,789
TITLE: Relation between support of image-measure and closure of the image QUESTION [2 upvotes]: Let $(\Omega,\mathcal F)$ be a measurable space and $X:\Omega\rightarrow\mathbb R^d$ a measurable map. For a probability measure $\mathbb P$ denote by $\mu_\mathbb P$ the image measure of $\mathbb P$ under the map $X$ and by $\text{supp}(\mu_\mathbb P)$ the support of $\mu_\mathbb P$, where the support is defined to be the smallest closed set such that its complement has $\mu_\mathbb P$-measure $0$. We know that $X(\Omega)$ is dense in $\text{supp}(\mu_\mathbb P),$ i.e. $\text{supp}(\mu_\mathbb P)\subseteq\overline{X(\Omega)}.$ Is there a characterization of probability measures $\mathbb P$ for which we have $\text{supp}(\mu_\mathbb P)=\overline{X(\Omega)}?$ For given $\Omega$ and $X$, does there always exist such a measure? REPLY [2 votes]: For given $\Omega$ and $X$, such a measure always exists. Let $\mathcal{B}$ be a countable basis for $\overline{X(\Omega)}$. Each element $B$ of $\mathcal{B}$ contains a point $y_B\in X(\Omega)$. Let $x_B\in X^{-1}\big(\{y_B\}\big)$. A probability measure that assigns positive probability to every element in $\{x_B:B\in\mathcal{B}\}$ has the desired property.
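For a concrete instance of this construction: take $\Omega=\mathbb N$ with the power-set $\sigma$-algebra, let $X:\mathbb N\to\mathbb R$ enumerate the rationals in $[0,1]$, and set $\mathbb P(\{n\})=2^{-n}$. Every nonempty relatively open subset of $[0,1]$ contains some $X(n)$ and hence has positive $\mu_\mathbb P$-measure, so $\text{supp}(\mu_\mathbb P)=[0,1]=\overline{X(\Omega)}$.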
168,305
Benefit Eligible employees and their covered dependent(s) participating in one of the State-offered health plans may elect dental coverage under the Quality Care Dental Plan (QCDP). Enrollees are allowed to receive services from any dental provider. Each plan enrollee is subject to an annual plan deductible for all dental services, except those listed in the Schedule of Benefits as “Diagnostic” or “Preventive”. Listed services are reimbursed at a predetermined maximum scheduled amount. The Dental Schedule of Benefits is available on the Benefits website.
64,631
TITLE: Potential due to line charge QUESTION [1 upvotes]: Is it possible to calculate the electric potential at a point due to an infinite line charge? Because potential is defined with respect to infinity. REPLY [2 votes]: It is not possible to choose $\infty$ as the reference point to define the electric potential because there are charges at $\infty$. This is easily seen since the field of an infinite line $\sim 1/r$ so the standard definition of $V(\vec r)$ as the integral $$ V(r)=-\int_{r}^{\infty}\frac{\lambda}{2\pi\epsilon R}dR =-\frac{\lambda}{2\pi \epsilon}\left(\log(\infty)-\log(r)\right) $$ is clearly not well-defined because of the $\log(\infty)$. Rather, in this case it is often found convenient to define the reference potential so that $$ V(r)= -\frac{\lambda}{2\pi\epsilon}\int_{r}^{1}\frac{dR}{R}= -\frac{\lambda}{2\pi \epsilon}\left(\log(1)-\log(r)\right)=\frac{\lambda}{2\pi \epsilon}\log(r) \, . $$ If there is a natural length scale $R_0$ to the problem, one can also define the dimensionless variable $\rho=r/R_0$. Since $dR/R = d\rho/\rho$, the result is that the potential at $\rho=1$, i.e. at $r=R_0$, is set to $0$. Of course if you’re only interested in the potential difference between $r_0$ and $r_1$, the limits of the integrals are then $r_0$ and $r_1$ and the integral is perfectly well defined, as is the difference in potential between these two points.
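To make the last point concrete, here is a small Python sketch (illustrative values only; it assumes NumPy and SciPy for the constant $\epsilon_0$, and the line charge density and radii are chosen arbitrarily) showing that the potential difference between two radii is finite and does not depend on where the zero of potential is placed:

```python
import numpy as np
from scipy.constants import epsilon_0

lam = 1e-9            # line charge density in C/m (example value)
r0, r1 = 0.1, 1.0     # field points in metres (example values)

def potential(r, r_ref):
    """Potential at radius r, taking r_ref as the zero of potential."""
    return lam / (2 * np.pi * epsilon_0) * np.log(r_ref / r)

for r_ref in (0.5, 1.0, 10.0):
    dV = potential(r0, r_ref) - potential(r1, r_ref)
    print("r_ref =", r_ref, " V(r0) - V(r1) =", round(dV, 6), "V")

# direct formula for the difference, independent of any reference choice
print(lam / (2 * np.pi * epsilon_0) * np.log(r1 / r0))
```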
220,281
TITLE: Difference between normal/tangential Coordinates and Polar Coordinates QUESTION [1 upvotes]: I am trying to understand why polar coordinates accelerations in the theta and r directions cannot be interchanged for the normal and tangential components of acceleration. I am currently doing a dynamics course where we will have to choose which of these to use for specific problems. My understanding is that these are in the same directions (normal acceleration with r's acceleration and the tangential acceleration with the acceleration in the theta direction. Please could the explanation include why these cannot be interchanged. REPLY [2 votes]: If I understand your question right, you are placing a coordinate system at the periphery of a circular path, with the y-axis always pointing tangentially. In this setup, you are comparing polar with Cartesian/rectangular coordinates. The polar and rectangular coordinate (basis) vectors may appear to point in the same direction at first glance, but they are not of the same dimension. One is $$\text{Cartesian:}\quad\Big(\text{(tangential) acceleration}\;\;,\;\text{(perpendicular) acceleration}\Big)$$ while the other is $$\text{Polar:}\quad\Big(\text{(radial) acceleration}\;\;,\;\text{turning angle}\Big)$$ In other words, metres-per-second-squared by metres-per-second-squared and metres-per-second-squared by degrees (or radians if you will). The angular coordinate in a polar coordinate set can never equal a Cartesian (rectangular) coordinate, simply due to its different dimension (different unit). Thinking correctly of the angular coordinate as a number-of-degrees also gives you the impression of a curved axis and basis vector rather than a straight vector. It doesn't say "how much" in a straight direction, but "how much" around. Such curved basis vector is clearly not a tangent to the curve - rather, it is (it defines) the curve. Now, since the angular coordinate in one system can't equal the tangential coordinate of another system, in a polar coordinate system the radial coordinate is the only one left to carry the entire "size"/magnitude of the acceleration. In Cartesian (rectangular) coordinates we are used to the magnitude of a vector being shared between both coordinates, and we find the magnitude as a mix (through Pythagoras' relation). So clearly, none of the coordinates in a Cartesian coordinate system equals the radial coordinate in a polar system - although the dimension (the unit) fits, they carry different information. They are not the same thing. Conclusion: There is no overlap between the coordinates of polar and rectangular systems. No coordinate in one equals that of the other. None of them fit to the other. The way to think of these two system is thus intuitively go-a-bit-out-and-a-bit-up for Cartesian coordinates while less intuitively go-the-full-amount-out-and-turn for polar coordinates, and you can only convert from one to the other through the well-known trigonometric sine, cosine and tangens relations in a right-angled triangle. REPLY [1 votes]: Whether you're talking about acceleration components, velocity comonents, or position components, you must consider vector components in general. There's nothing special about an acceleration vector when examining vector descriptions. The coordinates $r$ and $\theta$ are defined based on some origin which is constant. Also, the zero points in that system are defined by some person and usually are time-independent (unless one is doing an accelerated-reference-frame problem). 
The normal-tangential coordinate description is body-centered and is not time-independent. In fact, the tangential direction is defined to be parallel to the instantaneous velocity of the particle, and the normal direction is perpendicular to the velocity, with a choice of left-handed or right-handed perpendicularity. (Some problems even lend themselves to non-perpendicular systems, especially general relativity. But, I diverge ...) Let's consider a system in which the $r$ and $\theta$ components are functions of time. For example, $$r=2\theta \text{ and } \theta=0.5t.$$ In polar coordinates, the position of the particle is $$\vec{r}=r(t) \hat{r}(t)=1.0 t\ \hat{r}(t).$$ What is $\hat{r}(t)$? If we shift to a Cartesian description with the same origin, $$\hat{r}=\hat{i}\cos\theta+\hat{j}\sin\theta=\hat{i}\cos (0.5 t)+\hat{j}\sin(0.5 t)$$ Now, the tangential direction will be parallel to the instantaneous velocity, $\vec{v}=\dfrac{d\vec{r}}{dt}$: $$\vec{v}=\frac{d\vec{r}}{dt}=\hat{r}+t\frac{d\hat{r}}{dt}$$ If the tangential direction is always parallel to $\hat{\theta}$, then it will always be perpendicular to $\hat{r}$, so we can examine $\vec{v}\cdot\hat{r}$ to see if it is zero: $$\vec{v}\cdot\hat{r}=\hat{r}\cdot\hat{r}+t\frac{d\hat{r}}{dt}\cdot\hat{r}$$ $$=1+0.5t(-\hat{i}\sin(0.5t)+\hat{j}\cos(0.5t))\cdot(\hat{i}\cos(0.5t)+\hat{j}\sin(0.5t))=1\ne 0.$$ So, by example we have at least one situation in which the tangential component is not the $\theta$ component. You should try tackling the more general case for yourself in which $r$ is not time-independent.
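A quick numerical cross-check of this example (an independent sketch assuming SymPy; the trajectory $r=t$, $\theta=0.5t$ is the one used above) makes the mismatch explicit by evaluating both sets of components:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
r = t                      # since r = 2*theta and theta = 0.5*t
theta = sp.Rational(1, 2) * t

# Cartesian position, velocity and acceleration
x = r * sp.cos(theta)
y = r * sp.sin(theta)
v = sp.Matrix([sp.diff(x, t), sp.diff(y, t)])
a = sp.Matrix([sp.diff(x, t, 2), sp.diff(y, t, 2)])

# polar components: a_r = r'' - r*theta'^2,  a_theta = r*theta'' + 2*r'*theta'
a_r = sp.diff(r, t, 2) - r * sp.diff(theta, t)**2
a_th = r * sp.diff(theta, t, 2) + 2 * sp.diff(r, t) * sp.diff(theta, t)

# tangential/normal components: a_t = d|v|/dt,  a_n = sqrt(|a|^2 - a_t^2)
speed = sp.sqrt(v.dot(v))
a_t = sp.diff(speed, t)
a_n = sp.sqrt(sp.simplify(a.dot(a) - a_t**2))

for val in (1, 2, 4):
    print("t =", val,
          " (a_r, a_theta) =", (float(a_r.subs(t, val)), float(a_th.subs(t, val))),
          " (a_t, a_n) =", (float(a_t.subs(t, val)), float(a_n.subs(t, val))))
```

At every instant the two pairs differ, which is the point of the reply: the radial/transverse components are not the tangential/normal ones.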
7,442
\enlargethispage{2ex} \section{\LaTeX\ Package} \label{LaTeXPackage} LieART comes with a \LaTeX\ package (\texttt{lieart.sty} in the subdirectory \texttt{latex/}) that defines commands to display irreps, roots and weights properly (see Table~\ref{tab:LaTeXCommands}), which are produced by LieART when \texttt{LaTeXForm} is applied to an appropriate expression, e.g.: \begin{mathin} DecomposeProduct[Irrep[SU3][8],Irrep[SU3][8]]//LaTeXForm \end{mathin} \begin{mathout} \verb#$\irrep{1}+2(\irrep{8})+\irrep{10}+\irrepbar{10}+\irrep{27}$# \end{mathout} \newcommand{\bsl}{\textbackslash} \begin{table}[!h] \begin{center} \renewcommand{\arraystretch}{1.3} \rowcolors{2}{tablerowcolor}{} \begin{tabularx}{\textwidth}{llX} \toprule\rowcolor{tableheadcolor} \textbf{Command Example} & \textbf{Output} & \textbf{Description}\\ \midrule \texttt{\bsl irrep\{10\}} & \irrep{10} & dimensional name of irrep\\ \texttt{\bsl irrepbar\{10\}} & \irrepbar{10} & dimensional name of conjugated irrep\\ \texttt{\bsl irrep[2]\{175\}} & \irrep[2]{175} & number of primes as optional parameter\\ \texttt{\bsl irrepsub\{8\}\{s\}} & \irrepsub{8}{s} & irrep with subscript, e.g., irreps of \SO8 \\ \texttt{\bsl irrepbarsub\{10\}\{a\}} & \irrepbarsub{10}{a} & conjugated irrep with subscript, e.g., for labeling antisymmetric product\\ \texttt{\bsl dynkin\{0,1,0,0\}} & \dynkin{0,1,0,0} & Dynkin label of irrep\\ \texttt{\bsl dynkincomma\{0,10,0,0\}} & \dynkincomma{0,10,0,0} & for Dynkin labels with at least one entry larger than 9\newline (also as \texttt{\bsl rootorthogonal}, \texttt{\bsl weightalpha} and \texttt{\bsl weightorthogonal} for negative integers) \\ \texttt{\bsl weight\{0,1,0,{-}1\}} & \weight{0,1,0,{-1}} & weight in $\omega$-basis\\ \texttt{\bsl rootomega\{{-}1,2,{-}1,0\}} & \rootomega{{-}1,2,{-}1,0} & root in $\omega$-basis\\ \bottomrule \end{tabularx} \caption{\label{tab:LaTeXCommands}\LaTeX\ commands defined in supplemental style file \texttt{lieart.sty}} \end{center} \end{table}
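For instance, assuming that \texttt{lieart.sty} is loaded in the preamble via \verb#\usepackage{lieart}# (a minimal usage sketch rather than an excerpt from the package documentation), the generated string can be pasted directly into a math environment:
\begin{verbatim}
$\irrep{8}\times\irrep{8} =
  \irrep{1}+2(\irrep{8})+\irrep{10}+\irrepbar{10}+\irrep{27}$
\end{verbatim}
which then renders with the dimensional names shown in Table~\ref{tab:LaTeXCommands}.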
2,498
TITLE: v-Na generated by QUESTION [4 upvotes]: Given two free semicirculars X_1 and X_2 and a projection h in the von Neumann algebra generated by X_1, how does one show that the von Neumann algebra generated by {X_1, hX_2(1-h)} is a factor? It is easy to show that the two elements in the generating set are free. But I am unable to see what kind of an object hX_2(1-h) is. It appears in the definition of interpolated free group factor in Radulescu's paper (pre-print 1991) on random matrices, amalgamated free products and subfactors of free group factors of non-integer index. REPLY [6 votes]: Let me first point out that $X_1$ and $Y=h X_2 (1-h)$ are not freely independent. This is most easily seen if $h$ has trace 1/2, in which case $Y$ has range and support projections $h$ and $(1-h)$, respectively. But since the support and range projections of $Y$ belong to $W^*(Y)$, it would follow from the assumption that $Y$ and $X_1$ are free that actually $h$ and $X_1$ are free. But this is not possible, since they commute. Now to your question of factoriality. Here is a sketch of the proof. Let us assume for definiteness that $\tau(h) \geq \tau (1-h)$ (otherwise, switch $Y$ and $Y^\ast$). You can then verify that $Y^\ast Y$ and $YY^\ast$ have free Poisson distributions (with different parameters) and that the spectrum of $YY^\ast$ has no atoms. It follows that if you consider the polar decomposition of $Y = V |Y|$, then $V$ is a partial isometry with domain projection $1-h$ and range projection $\leq h$. Using this, you can see that $W^*(X_1,Y)$ is a factor iff $N=h W^*(X_1,Y)h $ is a factor. But $N$ is generated by $hX_1h$ and $Y^\ast Y$; you can prove that these elements are freely independent (in $N$). This either uses a random matrix model (see e.g. Voiculescu's book on free random variables for the proof of the compression formula for free group factors), or can be done directly using operator-valued semicircular systems. Thus $N = W^*(hX_1h) * W^*(Y^\ast Y)$ which is a free product of two abelian von Neumann algebras, one of which is diffuse and the other of which is not just the complex numbers. You can then get factoriality (see references in Ueda's paper http://arxiv.org/abs/1011.5017).
123,581
Holidays British Antarctic Holidays please see the links below for more Holidays in this country. Family Holiday Parks in France, Spain and Italy Dolphin Marine Experience for two £59 If you would like to be listed here for Worldwide Holidays please email. Holidays British Antarctic Coming Soon A wide variety of holiday choices in the UK and also holidays in Europe and abroad. Holidays British Antarctic Please email if you would like your holiday link here
140,516
Christened with a most unlikely and dare I say rather un-pretty name, Miner’s Lettuce may not sound like the most appetising of edibles. But if you were a miner during the California Gold Rush, the sight of these plants emerging from the soft soil would have been as welcome as a plethora of gold nuggets. The plant took its name from these miners, who consumed it to supplement their daily vitamin C intake and fight scurvy. This incredibly succulent, melt-in-your-mouth raw salad green has recently gained popularity in the gourmet greens world, attracting both world-class organic growers and chefs. Best used fresh and tiny, these elegant leaves bring a rich, tart-creaminess to delicate, micro-green salads. Larger leaves may be used as a bed for grilled fish or tossed into stir-fries and pasta dishes at the last minute. It has a mild taste, so is excellent for tempering bitter salad greens like rocket (arugula) or endive. Claytonia even stands up to boiling or sautéing, so is excellent for use in place of spinach. Miner’s lettuce is a godsend: it appears in the early spring, when you are hungry for something fresh, the winter crops are quite exhausted and the lettuces aren’t quite ready. Sowing: Sow Claytonia outside in August and September for an autumn/early winter crop, or sow under protection, from August to December and in March or April. Seeds can be sown in pots or directly into a prepared bed. Though the seeds are tiny, try to sow them sparsely so you don't have to thin them later. After sowing make sure you keep it well watered. The seed usually germinates rapidly. Cultivation: Growing claytonia is ridiculously easy. Preferring cooler temperatures for optimal production, it will grow in most soil types, asking only consistent moisture. A very hardy plant, tolerating temperatures down to at least -15°C (5°F), Claytonia will also thrive in partial sunlight. You might even plant it in the shade of a deciduous tree where other garden crops might languish. It will receive the sun it needs after the tree's leaves fall. Pull up the plants promptly when they start to set seed (they're easy to pull up). Otherwise, you will have a galaxy of plants the following year, and not necessarily where you want them. On the other hand, letting a few plants self-sow in the bed may prove a handy way to ensure a yearly supply of this elfin crop for your winter table. Harvesting: Cut the leaves as needed, but be careful to leave that little nubbin at the base intact. You might want to trim the stems a little. When left on the long side, the stems stick out of your mouth, making for a potentially embarrassing meal when amongst certain guests. Culinary Use: Miner's lettuce may be used raw in salads or cooked until just wilted in a sauté. Try a fresh salad of claytonia, dried figs and shavings of Parmigiano cheese. A few handfuls dropped into a soup at the last minute will lend just a bit of thickness, the way sorrel would - one way to use the leaves if they have grown larger and firmer, as they may in mild-climate gardens. The plant develops a nodelike root that can be eaten and has a nutty flavour. The delicate blossoms also make a very pretty garnish. Storage: Although it never gets bitter and you can eat all of it at any time, Claytonia can only be stored for a few days in the fridge since it tends to deteriorate quickly after harvest. Other Uses: Although only an annual, this species makes an excellent ground cover in a cool acid soil under trees. In such a position it usually self-sows freely and grows all year round.
Origin: Winter Purslane is native to the Pacific Coast and Rocky Mountain states from Mexico northward to British Columbia (not Cuba, as Vilmorin claimed in 1885). Nomenclature: The plant was originally called Claytonia perfoliata, and is still known in England and Europe under this botanical designation. Elsewhere it is known as Montia perfoliata. The genus Montia is named for Giuseppe Monti, a professor of botany at the University of Cologne in the eighteenth century. The genus Claytonia is named for John Clayton (1694-1773), Clerk to the County Court of Gloucester County, Virginia, USA from 1720 until his death. The species name is from the Latin perfoliata, meaning having leaves pierced by the stem, from Latin per meaning through, and foliata, meaning foliate, foliage or leaves. The miners were not the only ones who appreciated miner's lettuce. The American Indians not only ate it raw and cooked; they also made a tea from the plant, hence its other name: Indian lettuce. In England it is occasionally known as Springbeauty, while the Irish name is Plúirín earraigh. The common names of 'Winter Purslane' and 'Miners Lettuce' are misleading: it is neither a lettuce nor a purslane, although both are members of the portulaca family, which are known for their juicy, succulent leaves. John Clayton (1694-1773): John Clayton was Clerk to the County Court of Gloucester County, Virginia, USA from 1720 until his death. He was one of the earliest collectors of plant specimens in that state, and is described as the greatest American botanist of his day. John Clayton conscientiously and systematically took samples of everything he encountered, and sent them to Mark Catesby at Oxford, who in turn sent them to Gronovius in Leiden, Holland, where they were examined by Linnaeus. The date of his birth has often been given as 1686 rather than 1694, but 1694 is the date used by the John Clayton Herbarium of the Natural History Museum of London.
81,023
CLASP is an international non-profit organization that accelerates worldwide implementation of best-in-class appliance energy efficiency policies via technical and market analysis, strategic policy advice, and collaboration. This journal article brings together 50 “new-governance” instruments to better understand new governance for low-carbon buildings and what may be expected from it. The authors find that new-governance instruments fall short in exactly the same areas as do traditional instruments. This report introduces capacity mechanisms—a policy instrument for power markets—to a non-expert audience. The authors consider the implications of capacity mechanisms for meeting parallel objectives of security of supply and decarbonisation.
194,016
(Last Entry Here) (Table of Contents) Shayris bolted upright. She’d been scratching doodles into the bedposts with her knife and fallen asleep. What had woken her? An outlandish device bolted to the nearest corner of the room shouted phrases in Ulmish. No, Shayris realized: the same phrase over and over again. The wheel-operated door was closed, and Shayris saw that there was no wheel on this side. Most of those doors had thick glass. Why was this one windowless? Why am I worried about that when they’ve locked me in? Shayris tumbled out of bed and scrambled to the door. She held up her dagger to… what? Work the lock? There was no damned lock! She slammed her fist against it. It hurt, certainly, but so did any number of sudden hard impacts that might befall a sailor on the Happenstance. “You fu–” Shayris began, then restarted. Start polite, escalate as needed. “Excuse me! Excuse me, if you wouldn’t mind opening the door, I’d be very grateful.” Silence. She slammed her fist against it and snarled. A distant moan from the slits at the top of the walls suddenly became a frothing roar. It sounded much like surf against a shore, Shayris thought. “Oh, you bastards!” she shouted, pounding at the door. “I don’t end here, you filthy fuckers!” Water blasted from the wall slits, soaking the room’s fine furnishings. A quaint little guest chamber–and if the guests turned out to be troublesome, the Ulmish would let the sea take care of them. That was why this door had no wheel on this side, and no window! “Oh, so you’ll drown a stranger but can’t bear to watch, eh?” Shayris shouted, still pummeling the door. “Open this fucking thing and shiv me if you’re so rough! You think you can just drown me and walk off? Have some tea while you’re at it, I suppose? I’ll be back, you whore-squelching fishmonger inbreds! I’ll haunt this fucking place until time dies, you hear me?!” She changed to slamming her palm against the door; her left hand was shaking and going numb. She struck away and suddenly couldn’t find the energy to keep screaming. She felt only the chill seawater pooling around her knees. Her next blow against the door was more a polite rapping, and the last no more than a pat. Shayris thought of herself as a brave woman, but death was death. And of course she’d never been religious, so she didn’t have the usual answer. Shayris splashed her way to the satin-covered bed and slumped down. The water was at her knees, and would swamp the mattress in a few seconds. Shayris believed in the gods, assuredly; only an idiot could do otherwise when they were so vocal and conferred such power on their Servants. That wasn’t her problem. Shayris just didn’t believe they deserved worship. Why should she give more of her frail life to praising beings who claimed omnipotence and omniscience but did nothing to improve the world? Yes, yes, there was a Pact to stop them from reshaping the world constantly to suit their own whims, to stop them fighting each other so life could find its own path. Why should she respect so-called gods who must shackle themselves to stop feuding like brats? Why should she offer them thanks for the coin she earned with blistered hands and aching muscles? What did any of them do but sit around and demand to be praised like the most human of all petty tyrants? It seemed they had power to do anything except face their own insecurities. The water lapped at her chest now. Shayris held up the knife in front of her. Time to force the issue, maybe? 
She knew that in some parts of the Shards, disgraced warriors waded into the sea and opened their mouths. But drowning was slow, she knew, and more painful than most people thought. Have you ever been stabbed, Shayris? It’s more painful than drowning, I promise you that. The voice had that force of presence she knew instantly meant “god,” but sounded like none of the ones who tried to recruit her in the past. It was a medium-high tenor, and just slightly nasal. The human subconscious fights against hurting its body. That’s why suicide takes practice. “What would you know about any of that?” she snorted. “Which one are you, anyway?” The one who doesn’t want to be worshiped, the deity answered. “Good,” Shayris said. The water settled around her neck, and little waves throughout the room slapped her in the face. She felt herself rising slowly from the bed. “I’ve always hated the ones who do.” The god didn’t say that he knew that already; Shayris would’ve been more impressed if he weren’t omniscient. How could anyone be stupid enough to trust something that knew exactly what to say to gain your trust? I prefer your skepticism, honestly, the voice said. It’s familiar to me. A word of advice, Shayris? Tie the knife to your hand. You don’t want to lose it when you start going numb. She did, with a strip of cloth torn from her left pants leg. “Why are you here?” she asked, treading water and staring at the oncoming ceiling. “To watch me die?” Possibly, the god said. Nothing has been decided yet. Shayris felt the pause physically, a shift in attention that left the waters colder and louder around her. Just as quickly, the attention returned. And there we are, the god said. “Then you can leave,” Shayris said, “since you know what’s going to happen.” The trouble with being everywhere at once is that I can neither arrive nor leave, the god said, only stop paying attention. And on that count, I refuse. “Why?” Shayris demanded. Because, while some people certainly deserve to die alone, you are never one of them. And then the ceiling pressed on her scalp, and the waters closed in over her mouth. (Next Entry Here) (More from Canno)
69,843
TITLE: Density for degenerate distribution QUESTION [0 upvotes]: Why is the density for a degenerate distribution defined via the Dirac delta function, that is $$\delta\left(x\right)=\begin{cases} +\infty, & x=x_{0}\\ 0, & x\neq x_{0} \end{cases}$$ instead of the more intuitive $$f\left(x\right)=\begin{cases} 1, & x=x_{0}\\ 0, & x\neq x_{0} \end{cases}$$ REPLY [0 votes]: In regard to your intuition, I suspect you are thinking that the probability mass of the singular point is $1$, which indeed it is. However, what is the probability density at that point? Technically, a discrete random variable does not actually have a probability density function, but we might like it to have something we could use with the same formulas built for continuous random variables. Well, the Dirac delta is a generalised function. It is defined as having zero value for any argument other than $0$, and yet, despite being indefinite at zero, having a definite integral over the entire real line that is equal to $1$. (This is often summarised as saying the value is $+\infty$ at $0$, although that is somewhat inaccurate.) $$\delta(x) \simeq \begin{cases}0 &:& x\neq 0 \\ +\infty &:& x=0 \end{cases}~,~ \int_\Bbb R \delta (x)\mathrm d x = 1$$ Why, this is the very behaviour we require of a "probability density function" for a degenerate random variable for it to be usable in a general sense. So we say that the probability density function of a degenerate random variable is the shifted Dirac delta function. $$X\sim \delta(x_0) \iff f_X(x) ~{:= \delta(x-x_0) \\\simeq \begin{cases} +\infty &:& x=x_0\\ 0 &:& x\neq x_0\end{cases}}$$
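A quick check, stated informally in the same distributional sense as above, of why the shifted delta and not the indicator does the job: the indicator is zero except on a single point, so it integrates to $0$ and recovers no probabilities at all, while the shifted delta reproduces the distribution function of the point mass away from the jump and, via the sifting property, its expectations: $$\int_\Bbb R \mathbf 1_{\{x=x_0\}}(x)\,\mathrm d x=0,\qquad \int_{-\infty}^{t}\delta(x-x_0)\,\mathrm d x=\begin{cases}0 &:& t<x_0\\ 1 &:& t>x_0\end{cases},\qquad \operatorname E[g(X)]=\int_\Bbb R g(x)\,\delta(x-x_0)\,\mathrm d x=g(x_0)$$ for continuous $g$.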
105,987
2018 mazda cx 5 redesign a diesel option would be nice but the 9 has more basic needs 2018 mazda cx 5 redesign. mazda cx 5 2018 redesign 9 continental of,mazda cx 5 2018 redesign vs whats the difference , 2018 mazda cx 5 redesign review specs and features mo, mazda cx 5 2018 redesign release date pictures specs prices features,2018 mazda cx 5 redesign first drive price performance and review ,2018 mazda cx 5 redesign turbo best new cars for, mazda cx 5 2018 redesign 3 cars reviews rumors and prices,introduces updated 3 at new auto show 2018 mazda cx 5 redesign , 2018 mazda cx 5 redesign release date and design changes,mazda cx 5 2018 redesign review one of the best compact crossovers on market . Related Post 2018 Chevrolet Impala Ss 2018 Bmw M2 For Sale 2018 Audi Q5 Floor Mats 2018 Dodge Dakota Hyundai Suv Models 2018 Audi A5 Sportback 2018 2018 Chevrolet Suburban 2018 Mazda Cx 5 Release Date New Bmw Z4 2018 2018 Audi S4 Tune Jeep Trackhawk 2018 Jaguar Xj 2018 2018 Audi A5 Pictures 2018 Jeep Compass Tire Size Mercedes Benz E400 Coupe 2018 Price
279,353
Grazielle is a doctoral student in the Sociology Department at University of Massachusetts, Amherst. She received a BA in Sociology and Gender & Sexuality Studies from University of California, Irvine in 2018. Her research interests focus on immigration, gender, family, and care work, with an emphasis on the work of au pairs in the United States. While at UC Irvine, she conducted a qualitative study examining how individual socio-demographic characteristics, dynamics with host families, and institutional factors influence au pairs’ work experiences. Grazi’s research interests stem from her personal experiences as an immigrant domestic worker in the United States.
389,269
Warehouse Driver jobs Noble11 Jan Warehouse Driver ...With an incredible selection of leading brands, over 920,000 square feet of warehouse space, a growing branch network of 40+ branches, a 98% fill rate... Toronto, ON ABL Employment26 Dec Warehouse Associate/Driver ...encourage you to apply today! Call Janelle at 905-631-2259! As a warehouse helper/ driver you will be responsible for: - Picking orders - Assembling... Canada Swish Maintenance10 Jan Delivery Driver - Warehouse Associate Job DescriptionDELIVERY DRIVER - WAREHOUSE ASSOCIATEThrough providing best in class service while handling and delivering our product... Oakville, ON Kaycan08 Jan Driver/Warehouse Support - Toronto ...and operated. We are currently seeking a Delivery Driver / Warehouse Support at our Scarborough operations. This is a full-time position to work Monday... Scarborough, ON Staffworkx Inc.07 Jan Warehouse Worker / Forklift Driver Job Description We are seeking a Warehouse Worker for a client at Dixie and Dundas ********LUMBER EXPERIENCE BENEFICIAL... Mississauga, ON GWI Telecom04 Jan Warehouse and Delivery Driver Job DescriptionGWI Telecom is looking for someone to help with warehouse duties and make deliveries and pick ups to field crews and suppliers... Mississauga, ON D Way Foods Inc.31 Dec Warehouse Worker/ Delivery Driver Class 5 License. Class 5 is necessary as you will be delivering goods in a truck. Looking for warehouse worker. Alpha Box Trading Ltd... Surrey, BC Kaycan Ltd18 Dec Driver/Warehouse Support - Toronto ...and operated. We are currently seeking a Delivery Driver / Warehouse Support at our Scarborough operations. This is a full-time position to work Monday... Scarborough, ON Spirit Staffing and Consulting Inc.11 Dec Class 3 driver / Warehouse Employment Opportunity Class 3 Driver / Warehouse Worker Required Our Edmonton client is a manufacturing and fabrication company looking... Edmonton, AB Fremont Distribution08 Jan Delivery Driver for Warehouse We will also request a recent Driver Abstract from ICBC. This role involves moving kegs - lots of kegs, driving, and talking directly to liquor store and... Port Coquitlam, BC jobs-ca.com05 Jan Offer: Warehouse Driver Offer: Warehouse Driver : Job description: Adecco Staffing Warehouse Driver job in Ontario, CACompany . Adecco Staffing Job Title . Warehouse... Ontario Noble Direct Employer14 Dec Warehouse Driver ...Job order - J1218-0370 - Permanent Full Time Title Warehouse Driver Category Transportation/Dispatch City Toronto, Ontario, Canada... Toronto, ON ABL EMPLOYMENT13 Mar Warehouse: Driver/Helper ...Details START NEW SEARCH Warehouse : Driver /Helper Start Date: Immediate Description We are seeking hard working, committed individuals to join... Canada jobs-ca.com08 Jan Job Opportunity: WAREHOUSE/TRUCK DRIVER Job Opportunity: WAREHOUSE /TRUCK DRIVER : Job description: »Full Time »Immediately »521 Main St., New Paltz, NY 12561 »2minutes ago... Canada APT Auto Parts Trading31 Oct Warehouse / Delivery Driver Warehouse / Delivery Driver APT Auto Parts Trading Share job: Burnaby, BC Full Time Transportation and Warehousing Apply Now A.P.T. Auto Parts... Burnaby, BC
98,168
Big Data and Analytics. The success of sites like Uber and Airbnb offers some important lessons in growing and managing data exchanges and the bold strategy of giving away data in the sharing economy. Their success in creating data, analyzing data, and leveraging that data in novel ways offers lessons for all firms. Measurement proved critical: Uber and Airbnb brought measurement to drivers, to rooms, and to the experience of the users of both, all of which is now part of the sharing economy. In general, customers do not want data. Customers have market questions. They seek answers to those questions. Buyers in the sharing economy want information about the provider so that their risks are reduced. It is absolutely important that data products be created to solve the problems facing customers and answer the questions that they have. The data products permit monetization and marketplaces. For the sharing economy, giving away data reduces the risk in making a purchasing decision, so be ready to give away data if you are in the sharing economy! Airbnb and Uber emerged as leaders quite quickly. This was possible because they achieved scale quickly. That scale was enabled by data creation and data sharing. They brought on more drivers and properties before other sites did. The marketplace rewards firms that achieve scale first. With more data, more innovation and more data products are possible. With that comes more users, and with more users come more opportunities for monetization. These important implications of Big Data use are developed in greater detail in my recent book, From Big Data to Big Profits: Success with Data and Analytics. The book examines the evolving nature of Big Data and how businesses can leverage it to create new monetization opportunities, using case studies on companies such as Uber and Airbnb.
197,282
\begin{document} \title{Infinite Stable Graphs With Large Chromatic Number II} \begin{abstract} We prove a version of the strong Taylor's conjecture for stable graphs: if $G$ is a stable graph whose chromatic number is strictly greater than $\beth_2(\aleph_0)$ then $G$ contains all finite subgraphs of $Sh_n(\omega)$ and thus has elementary extensions of unbounded chromatic number. This completes the picture from our previous work. The main new model theoretic ingredient is a generalization of the classical construction of Ehrenfeucht-Mostowski models to an infinitary setting, giving a new characterization of stability. \end{abstract} \author{Yatir Halevi} \address{Department of Mathematics, Ben Gurion University of the Negev, Be'er-Sheva 84105, Israel.} \email{[email protected]} \author{Itay Kaplan} \address{Einstein Institute of Mathematics, Hebrew University of Jerusalem, 91904, Jerusalem Israel.} \email{[email protected]} \author{Saharon Shelah} \address{Einstein Institute of Mathematics, Hebrew University of Jerusalem, 91904, Jerusalem Israel.} \email{[email protected]} \thanks{The first author would like to thank the Israel Science Foundation for its support of this research (grant No. 181/16) and the Kreitman foundation fellowship. The second author would like to thank the Israel Science Foundation for their support of this research (grant no. 1254/18). The third author would like to thank the Israel Science Foundation grant no. 1838/19. This is Paper no. 1211 in the third author's publication list.} \keywords{chromatic number; stable graphs; Taylor's conjecture; EM-models} \subjclass[2010]{03C45; 05C15} \maketitle \section{Introduction} The chromatic number $\chi(G)$ of a graph $G=(V,E)$ is the minimal cardinal $\kap$ for which there exists a vertex coloring with $\kap$ colors. There is a long history of structure theorems deriving from large chromatic number assumptions, see e.g., \cite{komjath}. The main topic of this paper will be the following conjecture proposed by Erd\"os-Hajnal-Shelah \cite[Problem 2]{EHS} and Taylor \cite[Problem 43, page 508]{Taylorprob43}. \begin{conjecture}[Strong Taylor's Conjecture] For any graph $G$ with $\chi(G)>\aleph_0$ there exists an $n\in\mathbb{N}$ such that $G$ contains all finite subgraphs of $\Sh_n(\omega)$. \end{conjecture} Here, for a cardinal $\kappa$, the shift graph $\Sh_n(\kappa)$ is the graph whose vertices are increasing $n$-tuples of ordinals less than $\kappa$, and we put an edge between $s$ and $t$ if for every $1\leq i\leq n-1$, $s(i)=t(i-1)$ or vice-versa. The shift graphs $\Sh_n(\kappa)$ have large chromatic numbers depending on $\kappa$, see Fact \ref{F:Sh-high chrom} below. Consequently, if the strong Taylor's conjecture holds for a graph $G$, it has elementary extensions of unbounded chromatic number (having the same family of finite subgraphs). The strong Taylor's conjecture was refuted in \cite[Theorem 4]{HK}. See \cite{komjath} and the introduction of \cite{1196} for more historical information. In \cite{1196} we initiated the study of variants of the strong Taylor's conjecture for some classes of graphs with \emph{stable} first order theory (stable graphs). Stability theory, which is the study of stable theories and originated in the works of the third author in the 60s and 70s, is one of the most influential and important subjects in modern model theory. Examples of stable theories include abelian groups, modules, algebraically closed fields, graph theoretic trees, or more generally superflat graphs \cite{PZ}.
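Returning for a moment to the shift graphs in the conjecture, it may help to keep the simplest non-trivial case in mind: for $n=2$, the vertices of $\Sh_2(\omega)$ are the pairs $a<b<\omega$, and $(a,b)$ is joined to $(b,c)$ for all $a<b<c$. This graph is triangle-free, its restriction to $\{0,\dots,m-1\}$ is classically known to have chromatic number of order $\log_2 m$, and hence $\chi(\Sh_2(\omega))=\aleph_0$.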
Stability also had an impact in combinatorics, e.g. \cite{MS} and \cite{CPT} to name a few. More precisely, in \cite{1196} we proved the strong Taylor's conjecture for $\omega$-stable graphs and variants of the conjecture for superstable graphs (replacing $\aleph_0$ by $2^{\aleph_0}$) and for stable graphs which are interpretable in a stationary stable theory (replacing $\aleph_0$ by $\beth_2(\aleph_0)$). As there exist stable graphs that are not interpretable in a stationary stable structure, see \cite[Proposition 5.22, Remark 5.23]{1196}, we asked what the situation is for general stable graphs, and in this paper we answer this question with the following theorem. \begin{theorem*}[Corollary \ref{C:main corollary}] Let $G=(V,E)$ be a stable graph. If $\chi(G)>\beth_2(\aleph_0)$ then $G$ contains all finite subgraphs of $\Sh_n(\omega)$ for some $n\in \mathbb{N}$. \end{theorem*} The key tool in proving the results for $\omega$-stable graphs and superstable graphs is that every large enough saturated model is an Ehrenfeucht–Mostowski model (EM-model) in some bounded expansion of the language. An EM-model is a model which is the definable closure of an indiscernible sequence and was originally used by Ehrenfeucht–Mostowski in order to find models with many automorphisms \cite{EM}. It was shown by Lascar \cite[Section 5.1]{lascar} that every saturated model of cardinality $\aleph_1$ in an $\omega$-stable theory is an EM-model in some countable expansion of the language, and this was later generalized to any cardinality by Mariou \cite[Theorem C]{mariou}. This was generalized for superstable theories by Mariou \cite[Theorem 3.B]{mariouthesis} and in an unpublished preprint by the third author \cite{Sh:1151}. It was shown by Mariou \cite[Theorem 3.A]{mariouthesis} that in a certain sense the existence of such saturated EM-models for a stable theory necessarily implies that the theory is superstable. Consequently a different tool is needed in order to prove the theorem for general stable theories. In the stationary stable case, we use a variant of representations of structures in the sense of \cite{919}. However, this method did not seem to easily adjust to the general stable case. In this paper we resolve this problem by generalizing the notion of EM-models to \emph{infinitary EM-models} and show that such saturated models exist for any stable theory in Theorem \ref{T:existence of gen em model in stable}. The definition is a bit technical, so here we will settle for an informal description: In an EM-model every element is given by a term and a finite sequence of elements from the generating indiscernible sequence. Analogously, in an infinitary EM-model every element is given by some ``term'' with infinite (but bounded) arity and a suitable sequence of elements from an indiscernible sequence. We prove that the existence of saturated infinitary EM-models characterizes stability. \begin{theorem*}[Theorem \ref{T:existence of gen em model in stable}] The following are equivalent for a complete $\mathcal{L}$-theory $T$: \begin{enumerate} \item $T$ is stable. \item Let $\kappa,\mu$ and $\lambda$ be cardinals satisfying $\kappa=\cf(\kappa)\geq \kappa(T)+\aleph_1$, $\mu^{<\kappa}=\mu\geq 2^{\kappa+|T|}$ and $\lambda=\lambda^{<\kappa}\geq \mu$ and let $T\subseteq T^{sk}$ be an expansion with definable Skolem functions such that $|T|=|T^{sk}|$ in a language $\mathcal{L}\subseteq \mathcal{L}^{sk}$.
Then there exists an infinitary EM-model $M^{sk}\models T^{sk}$ based on $(\alpha,\lambda)$, where $\alpha\in \kappa^U$ for some set $U$ of cardinality at most $\mu$, such that $M=M^{sk}\restriction \mathcal{L}$ is saturated of cardinality $\lambda$. \end{enumerate} \end{theorem*} Section \ref{s:em-models} is the only purely model theoretic section and is the only place where stability is used. The results of this section (more specifically Theorem \ref{T:existence of gen em model in stable}) are only used in Section \ref{s:conclusion}. In Section \ref{S:order type graphs} we study graphs on (perhaps infinite) increasing sequences whose edge relation is determined by the order type. Aiming to prove that if the chromatic number is large, then one can embed shift graphs, we analyze several different cases. The last case we deal with in Section \ref{S:order type graphs} turns out to be rather complicated, so we devote all of Section \ref{S:PCF} to it. There, we employ ideas inspired by PCF theory to get a coloring of small cardinality. Section \ref{s:conclusion} concludes. \section{Preliminaries} We use small latin letters $a,b,c$ for tuples and capital letters $A,B,C$ for sets. We also employ the standard model theoretic abuse of notation and write $a\in A$ even for tuples when the length of the tuple is immaterial or understood from context. For any two sets $A$ and $J$, let $A^{\underline{J}}$ be the set of injective functions from $J$ to $A$ (where the notation is taken from the falling factorial notation), and if $(A,<)$ and $(J,<)$ are both linearly ordered sets, let $(A^{\U J})_<$ be the subset of $A^{\U J}$ consisting of strictly increasing functions. If we want to emphasize the order on $J$ we will write $(A^{\U{(J,<)}})_<$. Throughout this paper, we interchangeably use sequence notation and function notation for elements of $A^{\U J}$, e.g. for $f\in A^{\U J}$, $f(i)=f_i$. For any sequence $\eta$ we denote by $\Rg(\eta)$ the underlying set of the sequence (i.e. its image). If $(A,<^A)$ and $(B,<^B)$ are linearly ordered sets, then the most significant coordinate of the lexicographic order on $A\times B$ is the left one. \subsection{Stability} We use fairly standard model theoretic terminology and notation, see for example \cite{TZ,guidetonip}. We gather some of the needed notions. For stability, the reader can also consult with \cite{classification}. We denote by $\tp(a/A)$ the complete type of $a$ over $A$. Let $(I,<)$ be a linearly ordered set. A sequence $\langle a_i: i\in I\rangle$ inside a first order structure is \emph{indiscernible} if for any $i_1<\dots<i_k$ and $j_1<\dots<j_k$ in $I$, \[\tp(a_{i_1},\dots,a_{i_k})=\tp(a_{j_1},\dots, a_{j_k}).\] A structure $M$ is \emph{$\kappa$-saturated}, for a cardinal $\kappa$, if any type $p$ over $A$ with $|A|< \kappa$ is realized in $M$. The structure $M$ is \emph{saturated} if it is $|M|$-saturated. A \emph{monster model} for $T$, usually denoted by $\mathbb{U}$, is a large saturated model containing all sets and models (as elementary substructures) we will encounter\footnote{There are set theoretic issues in assuming that such a model exists, but these are overcome by standard techniques from set theory that ensure the generalized continuum hypothesis from some point on while fixing a fragment of the universe. The reader can just accept this or alternatively assume that $\mathbb{U}$ is merely $\kappa$-saturated and $\kappa$-strongly homogeneous for large enough $\kappa$.}. All subsets and models will be \emph{small}, i.e. 
of cardinality $<|\mathbb{U}|$. A first order theory $T$ is \emph{stable} if there does not exist a model $M\models T$, a formula $\varphi(x,y)$ and elements $\langle a_i\in M: i<\omega\rangle$ such that $M\models \varphi(a_i,a_j)\iff i<j$. An equivalent definition is that there exists some $\kappa\geq |T|$ such that for all $M\models T$ with $|M|\leq\kappa$ the cardinality of complete types over $M$ is at most $\kappa$. For any such $\kappa$, $T$ has a saturated model of cardinality $\kappa$ \cite[Theorem VIII.4.7]{classification}. Every indiscernible sequence in a stable theory is \emph{totally indiscernible}, i.e. in the notation above, for any $i_1,\dots,i_k$ and $j_1,\dots,j_k$ in $I$, \[\tp(a_{i_1},\dots,a_{i_k})=\tp(a_{j_1},\dots, a_{j_k}).\] Besides these notions, we will also require a basic understanding of forking. See the above references for more information. \subsection{Graph theory} Here we gather some facts on graphs and the chromatic number of graphs (all can be found in \cite{1196}). By a \emph{graph} we mean a pair $G=(V,E)$ where $E\subseteq V^2$ is symmetric and irreflexive. A \emph{graph homomorphism} between $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$ is a map $f:V_1\to V_2$ such that $f(e)\in E_2$ for every $e\in E_1$. If $f$ is injective we will say that $f$ embeds $G_1$ into $G_2$ as a subgraph. If in addition we require that $f(e)\in E_2$ if and only if $e\in E_1$ we will say that $f$ embeds $G_1$ into $G_2$ as an induced subgraph. \begin{definition} Let $G=(V,E)$ be a graph. \begin{enumerate} \item For a cardinal $\kap$, a \emph{vertex coloring} (or just coloring) of cardinality $\kap$ is a function $c:V\to \kap$ such that $x\E y$ implies $c(x)\neq c(y)$ for all $x,y\in V$. \item The \emph{chromatic number} $\chi(G)$ is the minimal cardinality of a vertex coloring of $G$. \end{enumerate} \end{definition} These are the basic properties of $\chi(G)$ that we will require. \begin{fact}\cite[Lemma 2.3]{1196}\label{F:basic prop} Let $G=(V,\E)$ be a graph. \begin{enumerate} \item If $V=\bigcup_{i\in I} V_i$ then $\chi(G)\leq \sum_{i\in I} \chi(V_i, E\restriction V_i)$. \item If $E=\bigcup_{i\in I} \E_i$ (with the $E_i$ being symmetric) then $\chi(G)\leq \prod_{i\in I}\chi(V,E_i)$. \item If $\varphi:H\to G$ is a graph homomorphism then $\chi(H)\leq \chi(G)$. \item If $\varphi:(H,E^H)\to (G,E^G)$ is a surjective graph homomorphism with $e\in E^H\iff \varphi(e)\in E^G$ then $\chi(H)=\chi(G)$. \end{enumerate} \end{fact} \begin{example} For any finite number $1\leq r$ and any linearly ordered set $(A,<)$, let $\Sh_r(A)$, or $\Sh_r(A,<)$ if we want to emphasize the order, (\emph{the shift graph on $A$}) be the following graph: its set of vertices is the set $(A^{\U r})_<$ of increasing $r$-tuples, $s_0,\dots,s_{r-1}$, and we put an edge between $s$ and $t$ if for every $1\leq i\leq r-1$, $s(i)=t(i-1)$, or vice-versa. It is an easy exercise to show that $\Sh_r(A)$ is a connected graph. If $r=1$ this gives $K_{A}$, the complete graph on $A$. \end{example} \begin{fact}\cite[Fact 2.6]{1196}\cite[Proof of Theorem 2]{EH-shift}\label{F:Sh-high chrom} Let $2\leq r<\omega$ be a natural number and $\kap$ be a cardinal, \[\chi\left(\Sh_r(\beth_{r-1}\left(\kap\right)^{+})\right)\geq\kap^{+}.\] \end{fact} Finally, the following fact is a very useful tool. \begin{fact}\cite[Proposition 3.2]{1196}\label{F:homomorphism is enough} Let $G=(V,E)$ be a graph and assume there exists a homomorphism of graphs $t:\Sh_k(\omega)\to G$.
Then there exists $n\leq k$, such that \begin{itemize} \item[($\dagger$)] $G$ contains all finite subgraphs of $\Sh_n(\omega)$. \end{itemize} Consequently, if $H$ is a graph that contains all finite subgraphs of $\Sh_k(\omega)$, for some $k$, and $t:H\to G$ is a homomorphism of graphs, then there exists some $n\leq k$ such that $G$ satisfies ($\dagger$). \end{fact} \section{Infinitary EM-models and stability}\label{s:em-models} Let $T$ be a first order theory and $\mathbb{U}$ a monster model for $T$. An \emph{EM-Model} for $T$ is a model that is the definable closure of an indiscernible sequence (possibly in some expansion of the theory which admits Skolem functions). Every element in an EM-model is of the form $t(a_{i_1},\dots, a_{i_n})$, where $t$ is a term (in the expanded language) and $a_{i_1},\dots, a_{i_n}$ are elements of the indiscernible sequence. In other words, to any element we may associate a pair $(i,\eta)$, where $i<|T|$ (this codes the term $t_i(\bar x_i)$) and $\eta$ is an increasing sequence of cardinality $|\bar x_i|$. Mariou \cite{mariouthesis,mariou} and Shelah \cite{Sh:1151} proved that if $T$ is $\omega$-stable or even superstable then it has an EM-model in some expansion of the language whose restriction to the original language is saturated. For general stable theories, as we will see in this section, one needs to allow ``terms'' with, possibly, infinite arity to get a parallel result. Let $\kappa\geq \aleph_0$ be a regular cardinal (which we think of as a bound on the arity) and let $\mu$ be a cardinal (which we think of as a bound on the number of terms). Let $\alpha\in \kappa^\mu$ be a function assigning to each function its arity. \begin{definition}\label{D:alpha-I} Let $\kappa$ be a cardinal, $(I,<)$ a linearly ordered set, $U$ a set and $\alpha\in \kappa^U$. Let $a=\langle a_{i,\eta}:i\in U ,\,\eta\in (I^{\U{\alpha_i}})_<\rangle$ be a sequence of tuples from $\mathbb{U}$. We say that $a$ is \emph{$(\alpha,I)$-indiscernible} if for every $\langle i_j\in U:j<k\rangle$, $\langle \eta_j\in (I^{\U{\alpha_{i_j}}})_<:j<k\rangle$ and $\langle \rho_j\in (I^{\U{\alpha_{i_j}}})_<:j<k\rangle$, if there exists a partial isomorphism of $(I,<)$ mapping $\langle \eta_j:j<k\rangle$ to $\langle \rho_j:j<k\rangle$ then $\langle a_{i_j,\eta_j}:j<k\rangle$ and $\langle a_{i_j,\rho_j}:j<k\rangle$ have the same type. \end{definition} Recall that given a subset $A\subseteq \mathbb{U}$ and an ultrafilter $\mathcal{D}$ on $A$ we may define the global average type $p_\mathcal{D}=\mathrm{Av}(\mathcal{D},\mathbb{U})$ by \[p_\mathcal{D}\vdash \varphi(x,b)\iff \varphi(A,b)\in \mathcal{D}.\] Obviously, $p_\mathcal{D}$ is finitely satisfiable in $A$. \begin{remark}\label{R:global-ultrafilter} If $\mathcal{D}$ is an ultrafilter on $A$ and $A\subseteq B$ then $\{U\subseteq B: \exists V\in \mathcal{D} (V\subseteq U)\}$ is the unique ultrafilter $\mathcal{D}^\prime$ on $B$ containing $\mathcal{D}$ and $p_{\mathcal{D}}=p_{\mathcal{D}^\prime}$. \end{remark} For any linearly ordered set $(I,<)$ and $A\subseteq B$, we say that $\langle a_i:i\in I\rangle$ realizes $(p_{\mathcal{D}})^{\otimes I}|B$ if for any $k\in I$, $a_k\models p_{\mathcal{D}}|B\langle a_i :i<k\in I\rangle$. \begin{proposition}\label{P:existence-of-indisc} Assume that $\mathbb{U}$ has definable Skolem functions. Let $\kappa\geq \aleph_0$ be a regular cardinal and $\mu^{<\kappa}=\mu\geq 2^{\kappa+|\mathcal{L}|}$ a cardinal. Let $\alpha\in \kappa^\mu$ be any function and let $(I,<)$ be any infinite linear order.
\begin{enumerate} \item There exist $U\subseteq \mu\times \mu$ and an $(\alpha^\prime,I)$-indiscernible sequence \[a=\langle a_{i,k,\eta}: (i,k)\in U,\, \eta\in (I^{\U{\alpha_i}})_<\rangle,\] where $\alpha^\prime\in\kappa^U$ is defined by $\alpha^\prime_{(i,k)}=\alpha_i$, for $(i,k)\in U$, such that $\mathbb{U}\restriction \dcl(\Rg (a))\prec \mathbb{U}$. \item For $j<\mu$ and $\eta\in (I^{\U{\alpha_j}})_<$, if $A\subseteq F_{j,\eta}=\dcl (\{a_{i,k,\nu}: (i,k)\in U,\, i<j,\, \nu\in (\Rg(\eta)^{\U{\alpha_i}})_<\})$ with $|A|<\kappa$ and non-algebraic $p\in S(A)$ then there exists $k<\mu$ with $(j,k)\in U$ such that $a_{j,k,\eta}\models p$. Moreover, if $p$ is finitely satisfiable in $A$ then so is $\tp(a_{j,k,\eta}/F_{j,\eta})$. \item If in addition, $(I,<)$ is well-ordered and $\alpha$ satisfies $\alpha_i=(i\mod \kappa)$ then \begin{enumerate} \item for any $A\subseteq \dcl(\Rg(a))$ with $|A|<\kappa$ there exist $i<\mu$ and $\eta\in (I^{\U{\alpha_i}})_<$ satisfying $A\subseteq F_{i,\eta}$; \item $\dcl(\Rg(a))$ is $\kappa$-saturated. \item Assume that $(I,<)$ is a cardinal with $\cf(I)\geq \kappa$. For any infinite $A\subseteq B \subseteq \dcl(\Rg(a))$, with $|B|<\kappa$, there is a non principal ultrafilter $\mathcal{D}$ on $A$ such that $(p_{\mathcal{D}})^{\otimes I}|B$ is realized in $\dcl(\Rg(a))$. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} Since $\mathbb{U}$ has definable Skolem functions, for any $A\subseteq \mathbb{U}$, $\dcl(A)$ is an elementary substructure of $\mathbb{U}$. Let $j<\mu$ and assume we found $\{U_{i}\subseteq \mu:i<j\}$ and $a_{<j}=\langle a_{i,k,\eta}:i<j,\, k\in U_i,\, \eta\in (I^{\U{\alpha_i}})_<\rangle $ such that $a_{<j}$ is $((\alpha^\prime)^{<j}, I)$-indiscernible, where $((\alpha^\prime)^{<j})_{i,k}=\alpha_i$ for $i<j$ and $k\in U_i$. If $(I^{\U{\alpha_j}})_<=\emptyset$ then there is nothing to do. Otherwise, fix some $\eta^*\in (I^{\U{\alpha_j}})_<$ and a $\dcl$-closed subset \[A_{j,\eta^*}\subseteq C_j=\dcl(\{a_{i,k,\nu}:i<j,\, k\in U_i,\, \nu\in (I^{\U{\alpha_i}})_<\})\] of cardinality at most $\mu$. For any $A\subseteq A_{j,\eta^*}$ with $|A|<\kappa$ and a non-algebraic type $p\in S_1(A)$ we choose an extension $\tilde p$ of $p$ to $A_{j,\eta^*}$ and a non-principal ultrafilter $\mathcal{D}(p)$ on $A_{j,\eta^*}$, such that $\tilde p = p_{\mathcal{D}(p)}|A_{j,\eta^*}$, in the following way \begin{list}{•}{} \item if $p$ is finitely satisfiable in $A$ then let $\mathcal{D}(p)$ be any (necessarily unique) ultrafilter (on $A_{j,\eta^*}$) extending the filter $\{B\subseteq A_{j,\eta^*}:\varphi(A,a)\subseteq B,\, \varphi(x,a)\in p\}$. We let $\tilde p=p_{\mathcal{D}(p)}|A_{j,\eta^*}$; \item otherwise, let $\tilde p$ be any non-algebraic extension of $p$ to $A_{j,\eta^*}$. Since $A_{j,\eta^*}$ is a model, $\tilde p$ is finitely satisfiable in $A_{j,\eta^*}$. Let $\mathcal{D}(p)$ be any non-principal ultrafilter extending $\{\varphi(A_{j,\eta^*},b):\varphi(x,b)\in \tilde p\}$. \end{list} We note that there are at most $\mu^{<\kappa}\leq\mu$ subsets $A\subseteq A_{j,\eta^*}$ with $|A|<\kappa$ and for all such $A$ there are at most $2^{\kappa+|\mathcal{L}|}\leq \mu$ types on $A$. Let $U_j\subseteq \mu$ be such that $\langle (p_{j,k,\eta^*},\mathcal{D}_{j,k,\eta^*}):k\in U_j\rangle$ enumerates the set of pairs $(\tilde p,\mathcal{D}(p))$ for non-algebraic $p\in S_1(A)$. 
By the induction hypothesis, any partial order-isomorphism $\pi$ of $I$ induces a partial elementary map $\widehat \pi$ whose domain is \[\dcl\left( \{a_{i,k,\nu}:i<j,\, k\in U_i,\, \Rg(\nu)\subseteq \dom(\pi)\}\right),\] mapping $a_{i,k,\nu}\mapsto a_{i,k,\pi(\nu)}$, where $\pi(\nu)=\pi\circ \nu$. Note that for any $\pi_1,\pi_2$, if $\pi_1\circ \pi_2$ makes sense then $\widehat{\pi_1\circ \pi_2}=\widehat{\pi_1}\circ \widehat{\pi_2}$. As a result, for any $\rho\in (I^{\U{\alpha_j}})_<$ the unique order-isomorphism $\pi_{\eta^*,\rho}:\eta^*\to \rho$ induces a partial elementary map, $\widehat{\pi_{\eta^*,\rho}}: A_{j,\eta^*}\to A_{j,\rho}$, where $A_{j,\rho}=\widehat\pi(A_{j,\eta^*})$, which is given by $a_{i,k,\nu}\mapsto a_{i,k,\pi_{\eta^*,\rho}(\nu)}$. For every $k\in U_j$ let $p_{j,k,\rho}=\widehat{\pi_{\eta^*,\rho}}(p_{j,k,\eta^*})\in S(A_{j,\rho})$ and let $\mathcal{D}_{j,k,\rho}=\widehat{\pi_{\eta^*,\rho}}(\mathcal{D}_{j,k,\eta^*})$. \begin{claim} For any $\rho\in (I^{\U{\alpha_j}})_<$ and $\pi$ a partial isomorphism of $I$ whose domain contains $\Rg(\rho)$, $\widehat{\pi}(A_{j,\rho})=A_{j,\pi(\rho)}$, $\widehat{\pi}(p_{j,k,\rho})=p_{j,k,\pi(\rho)}$ and $\widehat{\pi}(\mathcal{D}_{j,k,\rho})=\mathcal{D}_{j,k,\pi(\rho)}$. \end{claim} \begin{claimproof} There is no harm in restricting $\pi$ to $\Rg(\rho)$. Let $\pi_{\eta^*,\rho}:\eta^*\to \rho$ be the unique order-isomorphism, so $\pi\circ\pi_{\eta^*,\rho}$ is the unique isomorphism from $\eta^*$ to $\pi(\rho)$ and thus equal to $\pi_{\eta^*,\pi(\rho)}$. Hence $\pi=\pi_{\eta^*,\pi(\rho)}\circ \pi_{\eta^*,\rho}^{-1}$. The result follows. \end{claimproof} Let $((I^{\U{\alpha_j}})_<,<^{\text{lex}})$ be the lexicographic ordering and let $(U_j,<)$ be the order induced from $\mu$. By induction on $k\in U_j$, by compactness we may find a sequence $\langle a_{j,k,\eta}: \eta\in (I^{\U{\alpha_j}})_<\rangle$ satisfying that for any $\eta\in (I^{\U{\alpha_j}})_<$ \[a_{j,k,\eta}\models p_{\mathcal{D}_{j,k,\eta}}|B_{j,k,\eta},\] where \[B_{j,k,\eta}=C_j\cup \{a_{j,l,\rho}: l<k,\, l\in U_j,\rho\in (I^{\U{\alpha_j}})_<\}\cup \{a_{j,k,\rho}:\rho<^{\text{lex}}\eta\}\] We show $((\alpha^\prime)^{\leq j},I)$-indiscernibility by induction on $\{(i,l):i\leq j,\, l\in U_i\}$ (with the lexicographic ordering). In other words, we assume that for any $\langle (i_r,l_r): r<n\rangle$, with $(i_r,l_r)<(j,k)$ and $l_r\in U_r$, $\langle \eta_r\in (I^{\U{\alpha_{i_r}}})_<:r<n\rangle$ and a partial isomorphism $\pi$ of $I$, whose domain contains $\bigcup \{\Rg(\eta_r):r<n\}$, \[\tp(a_{i_0,l_0,\eta_0},\dots,a_{i_{n-1},l_{n-1},\eta_{n-1}})=\tp(a_{i_0,l_0,\pi(\eta_0)},\dots,a_{i_{n-1},l_{n-1},\pi(\eta_{n-1})}).\] We wish to show the same statement for $(i_r,l_r)\leq (j,k)$. We prove by induction on $n$ that for any $\bar b\subseteq \{a_{i,l,\eta}: (i,l)<(j,k),\,\eta\in (I^{\U{\alpha_i}})_<,\, l\in U_{i}\}$, any $\eta_{n-1}<^{\text{lex}}\dots<^{\text{lex}}\eta_0\in (I^{\U{\alpha_j}})_<$ and any partial isomorphism $\pi$ of $(I,<)$ whose domain contains \[\Rg(\eta_0)\cup\dots\cup\Rg(\eta_{n-1})\cup\bigcup\{\Rg(\eta):a_{i,l,\eta}\in \bar b\},\] \[\tp(a_{j,k,\eta_0},\dots,a_{j,k,\eta_{n-1}}, \bar b)=\tp(a_{j,k,\pi(\eta_0)},\dots,a_{j,k,\pi(\eta_{n-1})},\widehat{\pi}(\bar b)).\] Let $\varphi(x_0,\dots,x_{n-1},\bar b)$ be some formula, where $\bar b$ is as above. 
We show that \[\varphi(x_0,\dots,x_{n-1},\bar b)\in p_{\mathcal{D}_{j,k,\eta_0}}\otimes\dots\otimes p_{\mathcal{D}_{j,k,\eta_{n-1}}}\iff\]\[\varphi(x_0,\dots,x_{n-1},\pi(\bar b))\in p_{\mathcal{D}_{j,k,\pi(\eta_0)}}\otimes\dots\otimes p_{\mathcal{D}_{j,k,\pi(\eta_{n-1})}}.\] Indeed, if $\varphi(x_0,\dots,x_{n-1},\bar b)\in p_{\mathcal{D}_{j,k,\eta_0}}\otimes\dots\otimes p_{\mathcal{D}_{j,k,\eta_{n-1}}}$ then by the choice of the $a_{j,k,\eta}$'s, $\varphi(a_{j,k,\eta_0},\dots,a_{j,k,\eta_{n-1}},\bar b)$ holds and thus $X=\varphi(A_{j,\eta_0},a_{j,k,\eta_1},\dots,a_{j,k,\eta_{n-1}},\bar b)\in \mathcal{D}_{j,k,\eta_0}$. By the Claim, $\widehat\pi(X)\in \mathcal{D}_{j,k,\pi(\eta_0)}$. By the induction hypothesis (on $n$), $\widehat\pi$ is elementary on $a_{j,k,\eta_1}\cup \dots\cup a_{j,k,\eta_{n-1}}\cup \bar b$ and as a result, \[\widehat\pi(X)=\varphi(A_{j,\pi(\eta_0)},a_{j,k,\pi(\eta_1)},\dots,a_{j,k,\pi(\eta_{n-1})},\widehat \pi(\bar b))\in \mathcal{D}_{j,k,\pi(\eta_0)},\] and as $\pi$ preserves $<^{\text{lex}}$, \[\varphi(x_0,a_{j,k,\pi(\eta_1)},\dots,a_{j,k,\pi(\eta_{n-1})},\widehat \pi(\bar b))\in p_{\mathcal{D}_{j,k,\pi(\eta_0)}}|B_{j,k,\pi(\eta_0)}.\] As a result, by the choice of the $a_{j,k,\eta}$'s, \[\varphi(a_{j,k,\pi(\eta_0)},a_{j,k,\pi(\eta_1)},\dots,a_{j,k,\pi(\eta_{n-1})},\widehat \pi(\bar b)) \text{ holds,}\] and thus \[\varphi(x_0,\dots,x_{n-1},\widehat\pi(\bar b))\in p_{\mathcal{D}_{j,k,\pi(\eta_0)}}\otimes\dots\otimes p_{\mathcal{D}_{j,k,\pi(\eta_{n-1})}}.\] This proves $(1)$, i.e. $a=\langle a_{i,k,\eta}: i<\mu,\, k\in U_i,\, \eta\in (I^{\U{\alpha_i}})_<\rangle$ is $(\alpha^\prime,I)$-indiscernible. To prove $(2)$ and $(3)$ recalling the beginning of the proof of $(1)$, let \[A_{j,\eta}=F_{j,\eta}=\dcl (\{a_{i,k,\nu}: i<j, k\in U_{i}, \nu\in (\Rg(\eta)^{\U{\alpha_i}})_<\}).\] $(2)$ follows immediately once we observe the following: \begin{list}{•}{} \item $|A_{j,\eta}|\leq \mu$. This follows from the following inequalities \[\mu\cdot {|\alpha_j|^{<\kappa}}\leq \mu\cdot \kappa^{<\kappa}\leq \mu.\] \item For any order-preserving partial isomorphism $\pi$ of $I$, whose domain contains $\Rg(\eta)$, by the induction hypothesis on $j$, $\widehat\pi(A_{j,\eta})=A_{j,\pi(\eta)}.$ \end{list} Now assume that $(I,<)$ is well ordered, that $\alpha$ satisfies $\alpha_i=(i\mod \kappa)$ and let $A\subseteq a$ with $|A|<\kappa$. Since $(I,<)$ is well-ordered there exist an ordinal $\beta$ and an order isomorphism, $\eta:\beta\to \bigcup_{a_{j,l,\nu}\in A} \Rg(\nu)$. Since $\kappa$ is a regular cardinal and for every $a_{j,l,\nu}\in A$ we have $|\Rg(\nu)|=|\alpha_j|<\kappa$, it follows that $\beta<\kappa$. Since $\kappa<2^\kappa\leq\mu$ and $\mu^{<\kappa}=\mu$ (so $\cf(\mu)\geq \kappa$), $\widehat j=\sup_{a_{j,l,\nu}\in A} j<\mu$. Let $i=\widehat j\cdot \kappa+\beta<\mu$. By the choice of $\alpha$, $\alpha_i=\beta$ and $\eta\in (I^{\U{\alpha_i}})_<$. This implies that $A\subseteq F_{i,\eta}$. This gives $(3.a)$. Now $(3.b)$ follows by $(2)$. Item $(3.c)$ follows from the construction, we elaborate. Let $A\subseteq B$ with $|B|<\kappa$. By $(3.a)$ there exists $j<\mu$ and $\eta\in (I^{\U{\alpha_j}})_<$ such that $B\subseteq F_{j,\eta}$. Let $p\in S^{f.s.}(A)$ a non-algebraic type which is finitely satisfiable in $A$. 
As $\cf(I)\geq\kappa$, there is some $\Rg(\eta)<\gamma^*\in I$ and by the Claim above for any $\Rg(\eta)<\gamma\in I$, \[F_{j+1,\eta^\frown\gamma}=\pi_{\eta^\frown \gamma^*,\eta^\frown \gamma}(F_{j+1,\eta^\frown\gamma^*}).\] Let $\mathcal{D}$ be the non-principle ultrafilter on $F_{j+1,\eta^\frown\gamma^*}$ corresponding to $p$, as chosen above. Hence $p_{\mathcal{D}}$ is finitely satisfiable in $A$. Observe that since $A\subseteq F_{j,\eta}$, $\pi_{\eta^\frown\gamma^*,\eta^\frown\gamma}$ fixes $A$ point-wise. It follows by Remark \ref{R:global-ultrafilter} that for every $\Rg(\eta)<\gamma\in I$, $p_{\mathcal{D}}=p_{\mathcal{D}_{j+1,k,\eta^\frown\gamma}}$. Let $k\in U_j$ be such that $\tp(a_{j+1,k,\eta^\frown\gamma^*}/F_{j+1,\eta^\frown\gamma^*})=p_{\mathcal{D}}|F_{j+1,\eta^\frown\gamma^*}$. By the choice of elements, for any $\Rg(\eta)<\gamma\in I$, \[a_{j+1,k,\eta^\frown\gamma}\models p_{\mathcal{D}}|F_{j,\eta}\langle a_{j+1,k,\eta^\frown\delta}: \Rg(\eta)<\delta<\gamma\rangle.\] We end by noting that since we are assuming that $(I,<)$ is a cardinal and $\cf(I)\geq \kappa>\alpha_j$, $|\{\gamma:\Rg(\eta)<\gamma\in I\}|=|I|$. \end{proof} In stable theories, for any infinite indiscernible sequence $I$ over some set $A$ one may take the limit type defined by \[\lim(I)=\{\varphi(x,c): \text{$\varphi(a,c)$ holds for cofinitely many $a\in I$}\}.\] It is a consistent complete type by stability. It is obviously finitely satisfiable in $I$. Moreover, if $\mathcal{D}$ is a non-principal ultrafilter on $I$, then $p_\mathcal{D}=\lim(I)$. We often write $\lim(I/A)=\lim(I)|A$. The following is \cite[Lemma III.3.10]{classification}, we give a proof for completeness. \begin{lemma}\label{L:saturation} Let $T$ be a stable theory and $M\models T$. If $M$ is $(\kappa(T)+\aleph_1)$-saturated and every countable indiscernible sequence over $A\subseteq M$, with $|A|<\kappa(T)$, in $M$ can be extended to one of cardinality $\lambda$ then $M$ is $\lambda$-saturated. \end{lemma} \begin{proof} We may assume that $\lambda>\kappa(T)+\aleph_1$. By passing to $M^{eq}$ (and $T^{eq}$) there is no harm in assuming that $T$ eliminates imaginaries. Let $p\in S(C)$ with $C\subseteq M$ and $|C|<\lambda$. Let $B\subseteq C$ with $|B|<\kappa(T)$ such that $p$ does not fork over $B$. Let $q\supseteq p$ be its non-forking global extension. Since $M$ is $\kappa(T)$-saturated, we may find a sequence of elements $S=\langle b_i :i<\omega \rangle\subseteq M$ satisfying $b_i\models q|B\langle b_j:j<i\rangle$. Note that $q|BS$ is stationary by \cite[Corollary III.2.11]{classification}. Since $M$ is $(\kappa(T)+\aleph_1)$-saturated, we may find a Morley sequence $I=\langle a_i:i<\omega\rangle$ of $q$ over $SB$, i.e. $a_i\models q|SBa_{<i}$ and $a_i\in M$. It follows that $I$ is also a Morley sequence of $q$ over $\acl(B)$\footnote{It is standard to see that $I$ is independent and indiscernible over $\acl(B)$. On the other hand, since $q|BS$ is stationary, it isolates a complete type over $\acl(BS)$.}. Let $I\subseteq J\subseteq M$ be an indiscernible sequence (over $B$) of cardinality $\lambda$. As a result, $J$ is also a Morley sequence of $q$ over $\acl(B)$. By \cite[Lemma III.1.10(2)]{classification}, $\lim(J/M)=q|M$ and in particular $\lim(J/C)=p$. By \cite[Corollary III.3.5(1)]{classification}, there is $J_0\subseteq J$ with $J\setminus J_0$ indiscernible over $C$ and $|J_0|\leq \kappa(T)+|C|<\lambda$. In particular, $|J\setminus J_0|\geq \aleph_0$ and thus for every $c\in J\setminus J_0$, $p=\tp(c/C)$. 
\end{proof} \begin{definition} Let $T$ be a theory. We say that $M\models T$ is an \emph{infinitary EM-model based on $(\alpha,I)$} if $M=\dcl(a)$, where $a$ is an $(\alpha,I)$-indiscernible sequence for $(\alpha,I)$ as in Definition \ref{D:alpha-I}. \end{definition} \begin{lemma}\label{L:underlying set of Inf-EM is Inf-EM} Let $T$ be a any theory. Let $\kappa\geq \aleph_0$ be a cardinal, $(I,<)$ a linearly ordered set, $\alpha\in \kappa^U$, where $U$ is a set. If $a=\langle a_{i,\eta}:i\in U,\, \eta\in (I^{\U{\alpha_i}})_<\rangle$ is an $(\alpha,I)$ indiscernible sequence, in some model $M\models T$, then there exists some set $\widehat U$, with $|\widehat{U}|\leq |T|\cdot |U|\cdot \kappa^{<\kappa}$, and $\widehat{\alpha}\in \kappa^{\widehat{U}}$ and an $(\widehat{\alpha},I)$-indiscernible sequence $b$ whose underlying set is $\dcl(a)$. \end{lemma} \begin{proof} For any $p\subseteq \kappa$ let $\varphi_p:\otp(p)\to p$ be the unique order isomorphism. Let $\mathcal{F}$ be the collection of all $\emptyset$-definable functions. We consider the family $\widehat U$ of tuples $(f(\bar v), s_0,p_0,\dots,s_{|\bar v|-1},p_{|\bar v|-1})$ satisfying \begin{list}{•}{} \item $f(\bar v)\in \mathcal{F}$, \item $s_0,\dots, s_{|\bar v|-1}\in U$, \item for any $i<|\bar v|$, $p_i\subseteq \kappa$ with $\otp(p_i)=\alpha_{s_i}$, \item $\bigcup_{i<|\bar v|}p_i\in \text{Ord}$. \end{list} We note that $|\widehat U|\leq |T|\cdot |U|^{<\aleph_0}\cdot (\kappa^{<\kappa})^{<\aleph_0}\leq |T|\cdot |U|\cdot \kappa^{<\kappa}.$ Let $\widehat{\alpha}\in \kappa^{\widehat U}$ be the function mapping $x=(f(\bar v), s_0,p_0,\dots,s_{|\bar v|-1},p_{|\bar v|-1})$ to $\bigcup_{i<|\bar v|} p_i<\kappa$. For any $x=(f(\bar v), s_0,p_0,\dots,s_{|\bar v|-1},p_{|\bar v|-1})\in\widehat{U}$ and $\eta\in (I^{\U{\widehat{\alpha}_x}})_<$ set \[b_{x,\eta}=f(a_{s_0,\eta\restriction p_0\circ \varphi_{p_0}},\dots,a_{s_{|\bar v|-1},\eta\restriction p_{|\bar v|-1}\circ \varphi_{p_{|\bar v|-1}}}).\] Note that for any $j<|\bar v|$, $(\eta\restriction p_j)\circ \varphi_{p_j}\in (I^{\U{\alpha_{s_j}}})_<$. Let $b=\langle b_{x,\eta}: x\in \widehat{U},\, \eta\in (I^{\U{\widehat{\alpha}_x}})_<\rangle$. We will show that $b$ is $(\widehat \alpha,I)$-indiscernible. Let $\langle x_j\in \widehat{U}: j<k\rangle$, $\langle \eta_j\in (I^{\U{\widehat{\alpha}_{x_j}}})_<:j<k\rangle$ and $\pi$ be a partial isomorphism of $(I,<)$ whose domain contains $\bigcup_{j<k}\Rg(\eta_j)$. For $j<k$, we write $b_{x_j,\eta_j}=f_j(a_{s_{j,0},\eta_j\restriction p_{j,0}\circ \varphi_{p_{j,0}}},\dots,a_{s_{j,|\bar v_j|-1},\eta_j\restriction p_{j,|\bar v_j|-1}\circ \varphi_{p_{j,|\bar v_j|-1}}})$. Since $a$ is $(\alpha,U)$-indiscernible, the type of \[\langle a_{s_{j,0},\eta_j\restriction p_{j,0}\circ \varphi_{p_{j,0}}},\dots,a_{s_{j,|\bar v|-1},\eta_j\restriction p_{j,|\bar v_j|-1}\circ \varphi_{p_{j,|\bar v_j|-1}}}:j<k\rangle\] is equal to the type of \[\langle a_{s_{j,0},\pi(\eta_j\restriction p_{j,0}\circ \varphi_{p_{j,0}})},\dots,a_{s_{j,\bar v|-1},\pi(\eta_j\restriction p_{j,|\bar v_j|-1}\circ \varphi_{p_{j,|\bar v_j|-1}})}:j<k\rangle,\] and consequently the type of $\langle b_{x_j,\eta_j}:j<k\rangle$ is the equal to the type of $\langle b_{x_j,\pi(\eta_j)}:j<k\rangle$. Finally, let $c\in \dcl(a)$. I.e. there is a definable function $f(\bar v)\in \mathcal{F}$, and $a_{i_0,\eta_0},\dots a_{i_{|\bar v|-1},\eta_{|\bar v|-1}}\in a$ such that $c= f(a_{i_0,\eta_0},\dots a_{i_{|\bar v|-1},\eta_{|\bar v|-1}})$. 
Let $r=\bigcup_{i<|\bar v|}\Rg(\eta_{i})$ and $\psi:r\to \otp(r)$ be the unique order isomorphism. For any $j<|\bar v|$ set $p_j=\psi(\Rg(\eta_j))$. Now note that for $x=(f(\bar v),i_0,p_0,\dots,i_{|\bar v|-1},p_{|\bar v|-1})$. So for $\eta=\psi^{-1}$, $c=b_{x,\eta}$ (because e.g. $\eta\restriction p_0\circ \varphi_{p_0}=\psi^{-1}\restriction \psi(\Rg(\eta_0))\circ \varphi_{\psi(\Rg(\eta_0))}=\psi^{-1}\restriction \psi(\Rg(\eta_0))\circ \psi\circ \eta_0=\eta_0$). \end{proof} \begin{theorem}\label{T:existence of gen em model in stable} The following are equivalent for a complete $\mathcal{L}$-theory $T$: \begin{enumerate} \item $T$ is stable. \item Let $\kappa,\mu$ and $\lambda$ be cardinals satisfying $\kappa=\cf(\kappa)\geq \kappa(T)+\aleph_1$, $\mu^{<\kappa}=\mu\geq 2^{\kappa+|T|}$ and $\lambda=\lambda^{<\kappa}\geq \mu$ and let $T\subseteq T^{sk}$ be an expansion with definable Skolem functions such that $|T|=|T^{sk}|$ in a language $\mathcal{L}\subseteq \mathcal{L}^{sk}$. Then there exists an infinitary EM-model $M^{sk}\models T^{sk}$ based on $(\alpha,\lambda)$, where $\alpha\in \kappa^U$ for some set $U$ of cardinality at most $\mu$, such that $M=M^{sk}\restriction \mathcal{L}$ is saturated of cardinality $\lambda$. \item Let $\kappa,\mu$ and $\lambda$ be cardinals satisfying $\kappa=\cf(\kappa)\geq \kappa(T)+\aleph_1$, $\mu^{<\kappa}=\mu\geq 2^{\kappa+|T|}$ and $\lambda=\lambda^{<\kappa}\geq \mu$. Then there exists a saturated model of cardinality $\lambda$. \end{enumerate} \end{theorem} \begin{remark} For example, the assumptions in (2) hold for $\lambda=\mu=2^{\kappa+|T|}$ for any $\kappa=\cf(\kappa)\geq \kappa(T)+\aleph_1$. \end{remark} \begin{proof} $(1)\implies (2)$. In the following, the superscript $^{sk}$ means that we work in $T^{sk}$. We apply Proposition \ref{P:existence-of-indisc}(1,3) with $\mathbb{U}$ there being a monster model for $T^{sk}$ and $(I,<)=(\lambda,<)$. Consequently, there exists an $(\alpha^\prime,\lambda)$-indiscernible sequence $a$, where $\alpha^\prime$ is as in the proposition. Let $M^{sk}=\dcl^{sk}(a)$ and $M=M^{sk}\restriction \mathcal{L}$. Note that $|M^{sk}|=|M|=\mu\cdot \lambda^{<\kappa}=\lambda$. Towards applying Lemma \ref{L:saturation}, note that $M$ is indeed $(\kappa(T)+\aleph_1)$-saturated by Proposition \ref{P:existence-of-indisc}(3.b) and the assumption on $\kappa$. Let $I\subseteq M$ be an infinite countable indiscernible sequence over some $B\subseteq M$ with $|B|<\kappa(T)\leq \kappa$. Since $\lambda<\lambda^{\cf(\lambda)}$, necessarily $\cf(\lambda)\geq \kappa$ so by Proposition \ref{P:existence-of-indisc}(3.c) there is a non principal ultrafilter $\mathcal{D}$ on $I$ and elements $\langle a_i\in \dcl(\Rg(a)):i<\lambda\rangle$ satisfying that \[a_i\models p^{sk}_{\mathcal{D}}|BI\langle a_k: k<i\rangle\] for any $i<\lambda$. Let $p_{\mathcal{D}}$ be the restriction of $p^{sk}_{\mathcal{D}}$ to $\mathcal{L}$. Thus $p_{\mathcal{D}}=\lim(I)$ and for every $i<\lambda$ \[a_i\models \lim(I)|BI\langle a_k: k<i\rangle.\] By stability, $I+\langle a_i:i<\lambda\rangle$ is indiscernible over $B$ (see also \cite[Exercise 2.25]{guidetonip} and \cite[Lemma III.1.7(2)]{classification}). By Lemma \ref{L:saturation}, $M$ is saturated. $(2)\implies (3)$ is obvious. $(3)\implies (1)$. Let $\kappa$ be any cardinal satisfying $\kappa=\cf(\kappa)\geq \kappa(T)+\aleph_1$ and let $\lambda = \mu = \beth_\kappa(\kappa)$. Then $\lambda^{<\kappa} = \lambda$ because $\kappa$ is regular. 
Indeed, any function from some $\xi<\kappa$ to $\lambda$ is a function to $\beth_\alpha(\kappa)$ for some $\alpha<\kappa$. So $\lambda^\xi = \sup_{\alpha<\kappa} (\beth_\alpha(\kappa)^\xi)$. But $\sup_{\alpha<\kappa}(\beth_\alpha(\kappa)^\xi)= \sup_{\alpha<\kappa}(\beth_{\alpha+1}(\kappa)^\xi)$, and $(\beth_{\alpha+1}(\kappa))^\xi=(2^{\beth_\alpha(\kappa)})^\xi = 2^{\beth_\alpha(\kappa)\cdot \xi} = 2^{\beth_\alpha(\kappa)}$ because $\kappa>\xi$. Consequently, $\lambda^\xi = \lambda$ and $\lambda^{<\kappa} = \lambda$. Hence, by (3), there is a saturated model of size $\lambda$. On the other hand, since $\lambda$ is singular (of cofinality $\kappa<\lambda$), $\lambda^{<\lambda}>\lambda$ and as a result by \cite[Theorem VIII.4.7]{classification}, $T$ is $\lambda$-stable (and hence stable). \end{proof} \section{Order-Type graphs with large chromatic number} \label{S:order type graphs} In this section we discuss graphs whose vertices are (possibly infinite) increasing sequences, where the edge relation is determined by the order type. More specifically, our main interest in this section is the following type of graphs. \begin{definition} Let $(I,<)$ and $(J,<)$ be linearly ordered sets and $\bar a\neq \bar b\in (I^{\U J})_<$ be increasing sequences. We define a graph $E^J_{\bar a,\bar b}$ and a directed graph $D_{\bar a,\bar b}^J$ on $(I^{\U J})_<$ by: \begin{itemize} \item $\bar c \E^J_{\bar a,\bar b} \bar d \iff \otp(\bar c,\bar d)=\otp(\bar a,\bar b)\vee \otp(\bar d,\bar c)=\otp(\bar a,\bar b)$ \item $\bar c \D^J_{\bar a,\bar b} \bar d \iff \otp(\bar c,\bar d)=\otp(\bar a,\bar b).$ \end{itemize} We omit $J$ from $E^J_{\bar a,\bar b}$ and $D^J_{\bar a,\bar b}$ when it is clear from the context. We call these graphs the \emph{(directed) order-type graphs}. \end{definition} \begin{remark} Although it will not define a graph, we sometimes use the notation $D_{\bar a,\bar b}$ and $E_{\bar a,\bar b}$ even if $\bar a=\bar b$. \end{remark} In Section \ref{ss:embed-shift} we isolate a family of order-type graphs whose members contain all finite subgraphs of $\Sh_m(\omega)$ for a certain integer $m$ (Corollary \ref{C:k-ord-cov-implies shift}). In Section \ref{ss:orde-type-large-chrom} we show that order-type graphs with large chromatic number fall into this family (Theorem \ref{T:shift graphs in order-type-graphs}). \subsection{Embedding shift graphs into order-type graphs}\label{ss:embed-shift} \begin{definition} Let $(I,<)$ and $(J,<)$ be linearly ordered sets, $\bar a,\bar b\in (I^{\U J})_<$ be increasing sequences and $0<k<\omega$. We say that $\langle \bar a,\bar b\rangle$ is \emph{k-orderly} if there exists a finite partition $Conv(\Img(\bar a)\cup\Img(\bar b))=C_0\cup\dots \cup C_k$ by convex increasing subsets satisfying that for every $n<k$ and $i\in J$, $a_i\in C_n\iff b_i\in C_{n+1}$. \end{definition} Recall the following from \cite{1196}. \begin{definition} For any linearly ordered set $(A,<)$ and $k\geq 1$, let $\LSh_k(A)$ be the directed graph $((A^{\U k})_<,D)$, where $(\eta,\rho)\in D$ if and only if $\eta(i)=\rho(i-1)$ for $0<i<k$ (if $k>1$) and $\eta(0)<\rho(0)$ (if $k=1$). \end{definition} \begin{lemma}\label{L:k-orderly} Let $0<k<\omega$ be an integer, $\alpha,\delta$ be ordinals, $(I,<)$ any infinite linearly ordered set satisfying $(\delta\times (2\cdot \alpha +1)^k,<_{lex})\subseteq (I,<)$. Let $\bar a,\bar b\in (I^{\U{\alpha}})_<$.
If $\langle \bar a,\bar b\rangle $ is $k$-orderly then there exists a function $\varphi:\LSh_k(\delta)\to (I^{\U{\alpha}})_<$, satisfying that for any $\eta,\rho\in \LSh_k(\delta)$, if $(\eta,\rho)\in D$ then $\varphi(\eta)\D_{\bar a,\bar b} \varphi(\rho)$. \end{lemma} \begin{proof} Assume that $Conv(\Img(\bar a)\cup\Img(\bar b))=C_0\cup\dots\cup C_k$, as in the definition. Let $\alpha^*=\alpha\cup\{\beta^-:\beta<\alpha\}\cup\{\infty\}$, where the $\beta^-$'s are immediate predecessors and $\infty$ is a maximal element, i.e. for any $\beta< \gamma< \alpha$ \begin{list}{•}{} \item $\gamma<\beta^-<\beta$, \item $\gamma^-<\beta^-$ if and only if $\gamma<\beta$ and \item $\gamma<\infty$. \end{list} For any $S\subseteq \alpha$, let $S^*$ be $S\cup\{s^-:s\in S\}\cup\{0^-,\infty\}$. For any $x\in \left(\alpha^*\right)^n$, we denote by $x^-$ the immediate predecessor of $x$ in the lexicographic order if it exists, and otherwise let $x^-=x$. Note that for any $x=(x_0,\dots,x_{n-1})\neq(0^-,\dots,0^-)\in \left(\alpha^*\right)^n$, if the maximal $l<n$ with $x_l\neq 0^-$ satisfies $x_l< \alpha$ then $x$ has an immediate predecessor. Note that the order type of $\alpha^*$ is $2\cdot \alpha +1$, so by the assumption on $I$ we may replace $I$ by an isomorphic copy to get that $(\delta\times (\alpha^*)^k,<_{lex})\subseteq (I,<)$. For any $0\leq i\leq k-1$ let $S_i=\{\beta< \alpha: a_\beta\in C_i\}$ and let $\mathcal{G}=\{\bar g=\langle g_i: S_i\cup\{\infty\} \to (S_{k-1}^*\times\dots\times S_0^*,<_{lex}): i<k\rangle: \text{ $g_i$ increasing}\}$. For any $\bar g \in\mathcal{G}$ and $\eta\in (\delta^{\U k})_<$ let $f_{\eta,\bar g}\in (I^{\U \alpha})_<$ be defined by \[f_{\eta,\bar g}(\beta)=(\eta(n_\beta),g_{n_\beta}(\beta))\in \delta\times (S^*_{k-1}\times\dots\times S_0^*)\subseteq I,\] where $\beta\in S_{n_\beta}$. We note that $f_{\eta,\bar g}$ is increasing: if $n_{\beta_1}<n_{\beta_2}$ then $\eta(n_{\beta_1})<\eta(n_{\beta_2})$. If $n_{\beta_1}=n_{\beta_2}$ then the result follows since $g_{n_{\beta_1}}=g_{n_{\beta_2}}$ is increasing. \begin{claim} There exists $\bar g\in \mathcal{G}$ such that for any $\eta,\rho\in \Sh_k(\delta)$ satisfying $\eta(i)=\rho(i-1)$ for $0<i<k$ (if $k>1$) or $\eta(0)<\rho(0)$ (if $k=1$), $f_{\eta,\bar g}\D_{\bar a,\bar b} f_{\rho,\bar g}$. \end{claim} \begin{claimproof} For the purpose of this proof, for $1\leq i\leq k$ let $\pi_i:S^*_{k-1}\times\dots\times S_0^*\to S^*_{k-1}\times\dots\times S^*_{k-i}$ be the projection on the first $i$ coordinates. We choose increasing functions $g_i:S_i\cup\{\infty\} \to S_{k-1}^*\times\dots\times S^*_{k-i}\times \{0^-\}\times\dots\times \{0^-\}$ by downwards induction on $i<k$. Define $g_{k-1}$ by setting $g_{k-1}(\beta)=(\beta,0^-,\dots,0^-)$ for $\beta\in S_{k-1}\cup\{\infty\}$. Assume that $g_i$ was defined and we want to define $g_{i-1}$. For any $\beta\in S_{i-1}$ if there is $\gamma\in S_i$ minimal such that $b_\beta\leq a_\gamma$ then define \[g_{i-1}(\beta)= \begin{cases} g_i(\gamma)=(\pi_{k-i}(g_i(\gamma)),0^-,0^-,\dots, 0^-) & \text{ if $a_\gamma=b_\beta$} \\ (\pi_{i}(g_i(\gamma))^-,\beta,0^-,\dots, 0^-) & \text{ otherwise.} \end{cases} \] If such a minimal $\gamma\in S_i$ does not exist then we define \[g_{i-1}(\beta)=(\pi_{i}(g_i(\infty)),\beta,0^-,\dots,0^-).\] Lastly, \[g_{i-1}(\infty)=(\pi_{i}(g_i(\infty)),\infty,0^-,\dots,0^-).\] \begin{subclaim} For any $0\leq i\leq k-1$, and for every $\beta\in S_i$, $\pi_{i}(g_i(\beta))$ has an immediate predecessor.
I.e., for every $\gamma<\beta\in S_i$, $\pi_{i}(g_i(\beta))>\pi_{i}(g_i(\beta))^-$. For any $0\leq i\leq k-1$, $g_i$ is increasing. \end{subclaim} \begin{subclaimproof} This is straightforward and follows, by downwards induction, that for any $\beta\in S_i$, if $g_i(\beta)=(x_0,\dots,x_{k-1})$ then the maximal $l<k$ such that $x_l\neq 0^-$ satisfies that $x_l<\alpha$. The fact that the $g_i$s are increasing now follows by downwards induction. \end{subclaimproof} The main observation is that for any $1\leq i<k$ \[(\dagger)\, \otp(\langle a_\beta: \beta\in S_i\rangle, \langle b_\beta: \beta\in S_{i-1}\rangle)=\otp(\langle g_i(\beta): \beta\in S_i\rangle, \langle g_{i-1}(\beta): \beta\in S_{i-1}\rangle ).\] To that end, let $1\leq i<k$. Since $g_i$ and $g_{i-1}$ are increasing it is enough to compare $a_{\beta_1}$ and $b_{\beta_2}$, where $\beta_1\in S_i$ and $\beta_2\in S_{i-1}$. \begin{list}{•}{} \item If $a_{\beta_1}=b_{\beta_2}$ then $\beta_1\in S_i$ is the minimal such that $b_{\beta_2}\leq a_{\beta_1}$ and thus by definition $g_{i-1}(\beta_2)=g_i(\beta_1)$. \item Assume $a_{\beta_1}<b_{\beta_2}$. If there does not exist a minimal $\gamma\in S_i$ with $b_{\beta_2}\leq a_\gamma$ then \[g_{i-1}(\beta_2)=(\pi_{i}(g_i(\infty)),\beta_2,0^-,\dots,0^-)>\]\[(\pi_{i}(g_i(\beta_1)),0^-,\dots,0^-)=g_{i}(\beta_1).\] Otherwise, let $\beta_1< \gamma\in S_i$ be minimal such that $b_{\beta_2}\leq a_\gamma$. If $b_{\beta_2}=a_\gamma$ then $g_{i-1}(\beta_2)=g_i(\gamma)>g_i(\beta_1).$ If $b_{\beta_2}<a_\gamma$ then \[g_{i-1}(\beta_2)=(\pi_{i}(g_i(\gamma))^-,\beta_2,0^-,\dots, 0^-)>\]\[ (\pi_{i}(g_i(\beta_1)),0^-,0^-,\dots, 0^-)=g_i(\beta_1).\] \item Assume $a_{\beta_1}>b_{\beta_2}$ and let $\gamma\in S_i$ be minimal such that $a_\gamma\geq b_{\beta_2}$, so $\gamma\leq \beta_1$. If $a_\gamma=b_{\beta_2}$ then $\gamma<\beta_1$ and $g_{i-1}(\beta_2)=g_i(\gamma)<g_i(\beta_1).$ If $a_\gamma>b_{\beta_2}$ then \[g_{i-1}(\beta_2)=(\pi_{i}(g_i(\gamma))^-,\beta_2,0^-,\dots, 0^-)<\]\[(\pi_{i}(g_i(\beta_1)),0^-,0^-,\dots, 0^-)=g_i(\beta_1).\] \end{list} This proves $(\dagger)$. Let $\eta,\rho\in \LSh_k(\delta)$ be as in the statement of the lemma. We proceed to prove that $f_{\eta,\bar g} \D_{\bar a,\bar b} f_{\rho,\bar g}$. Let $\beta_1,\beta_2< \alpha$ and assume that $\beta_1\in S_{n_1}$ and $\beta_2\in S_{n_2}$, for some $0\leq n_1,n_2\leq k-1$. Note that if $b_{\beta_2}\in C_n$, for $0<n\leq k$, then $n_2=n-1$. Assume that $k>1$. \begin{list}{•}{} \item Assume that $0<n_1<k$, $b_{\beta_2}\in C_{n_1}$. So $n_2=n_1-1$. Assume that $a_{\beta_1}\mathrel{\square} b_{\beta_2}$, where $\square\in\left\{ <,>,=\right\}$. By $(\dagger)$, $g_{n_1}(\beta_1)\mathrel{\square} g_{n_2}(\beta_2)$ and as a result \[f_{\eta,\bar g}(\beta_1)=(\eta(n_1),g_{n_1}(\beta_1))=(\rho(n_1-1),g_{n_1}(\beta_1))\mathrel{\square} (\rho(n_1-1),g_{n_2}(\beta_2))=\]\[(\rho(n_2),g_{n_2}(\beta_2))=f_{\rho,\bar g}(\beta_2).\] \item If $b_{\beta_2}\in C_n$ for some $n_1<n<k$ then necessarily, $a_{\beta_1}<b_{\beta_2}$ and $n_2=n-1\geq n_1$. Consequently, \[f_{\eta,\bar g}(\beta_1)=(\eta(n_1),g_{n_1}(\beta_1))< (\eta(n_1+1),g_{n_2}(\beta_2))=(\rho(n_1),g_{n_2}(\beta_2))\leq \]\[(\rho(n_2),g_{n_2}(\beta_2))=f_{\rho,\bar g}(\beta_2).\] \item If $b_{\beta_2}\in C_n$ for some $n<n_1$ then necessarily $0<n<n_1$, $a_{\beta_1}>b_{\beta_2}$ and $n_2=n-1<n_1-1$. 
Hence \[f_{\eta,\bar g}(\beta_1)=(\eta(n_1),g_{n_1}(\beta_1))= (\rho(n_1-1),g_{n_1}(\beta_1))>(\rho(n_2),g_{n_2}(\beta_2))=\]\[f_{\rho,\bar g}(\beta_2).\] \item If $b_{\beta_2}\in C_k$ then necessarily $n_2=k-1$ and $a_{\beta_1}<b_{\beta_2}$. As a result \[f_{\eta,\bar g}(\beta_1)=(\eta(n_1),g_{n_1}(\beta_1))\leq (\eta(k-1),g_{n_1}(\beta_1))=(\rho(k-2),g_{n_1}(\beta_1))<\] \[(\rho(k-1),g_{n_2}(\beta_2))=(\rho(n_2),g_{n_2}(\beta_2))=f_{\rho,\bar g}(\beta_2).\] \end{list} If $k=1$ then $n_1=n_2=0$ and \[f_{\eta,\bar g}(\beta_1)=(\eta(0),g_{n_1}(\beta_1))<(\rho(0),g_{n_2}(\beta_2))=f_{\rho,\bar g}(\beta_2).\] \end{claimproof} We may now define a map $\varphi: \Sh_k(\delta)\to (I^{\U{\alpha}})_<$ by letting for $\eta\in \Sh_k(\delta)$, $\varphi(\eta)=f_{\eta,\bar g}\in (I^{\U \alpha})_<$. This maps satisfies the requirements by the previous claim. \end{proof} \begin{definition}\label{D:k-ord-covered} Let $(I,<)$ and $(J,<)$ be linearly ordered sets and $\bar a, \bar b\in (I^{\U J})_<$ be increasing sequences. We say that $\{\bar a,\bar b\}$ is \emph{k-orderly covered} if there exists an increasing partition of $J$ into convex sets $\langle J_\varepsilon: \varepsilon\in S\rangle$ for some $S\subseteq J$, such that for every $\varepsilon\in S$, exactly one of the following holds \begin{enumerate} \item $\langle \bar{a}\restriction J_\varepsilon,\bar{b}\restriction J_\varepsilon\rangle$ is $k_\varepsilon$-orderly for some $0<k_\varepsilon\leq k$; \item $\langle \bar{b}\restriction J_\varepsilon,\bar{a}\restriction J_\varepsilon\rangle$ is $k_\varepsilon$-orderly for some $0<k_\varepsilon\leq k$; \item $|J_\varepsilon|=1$ and $\bar{a}\restriction J_\varepsilon=\bar{b}\restriction J_\varepsilon$. \end{enumerate} Moreover, for every $\varepsilon<\varepsilon^\prime\in S$, $\Img (\bar a \restriction J_\varepsilon )<\Img(\bar b \restriction J_{\varepsilon^\prime})$ and $\Img(\bar b\restriction J_\varepsilon) <\Img(\bar a \restriction J_{\varepsilon^\prime})$. \end{definition} \begin{corollary}\label{C:k-ord-cov-implies shift} Let $\alpha$ be an ordinal, $(I,<)$ any infinite linearly ordered set with $(|\alpha|^+ +\aleph_0,<)\subseteq (I,<)$. Let $\bar a\neq \bar b\in (I^{\U{\alpha}})_<$ be some fixed sequences. If $\{\bar a,\bar b\}$ is $k$-orderly covered then $((I^{\U{\alpha}})_<,E_{\bar a,\bar b})$ contains all finite subgraphs of $\Sh_m(\omega)$ for some $m\leq k$. \end{corollary} \begin{proof} Let $\langle J_\varepsilon: \varepsilon\in S\rangle$ be an increasing partition of $\alpha$ as in Definition \ref{D:k-ord-covered}, where $S\subseteq \alpha$. Since $\bar a\neq \bar b$, there exists $\varepsilon\in S$ such that $|J_\varepsilon|>1$. For any $\varepsilon\in S$, with $|J_\varepsilon|>1$, we say that $J_\varepsilon$ is \begin{list}{•}{} \item of type $A$ if $\langle \bar a\restriction J_\varepsilon,\bar b\restriction J_\varepsilon\rangle$ is $k_\varepsilon$-orderly, and \item of type $B$ if $\langle \bar b\restriction J_\varepsilon,\bar a\restriction J_\varepsilon\rangle$ is $k_\varepsilon$-orderly. \end{list} Let $N<\omega$ be some natural number. By replacing $I$ with an isomorphic copy, we may assume that $(\alpha \times (N\times (2\alpha +1)^k),<_{lex})\subseteq (I,<)$. Let $\varepsilon\in S$ and let $I_\varepsilon=\{\varepsilon\}\times (N\times (2\alpha+1)^{k})$. If $|J_\varepsilon|=1$ then we let $\varphi_\varepsilon:\LSh_1 (N)\to ((I_\varepsilon)^{\U{J_\varepsilon}})_<$ be such that $\varphi_\varepsilon(\eta)$ is the constant function giving $(\varepsilon,0,\dots,0)$. 
For any $\varepsilon\in S$ let $E^\varepsilon_{\bar a,\bar b}= E_{\bar a\restriction J_\varepsilon, \bar b\restriction J_\varepsilon}$ and $D^\varepsilon_{\bar a,\bar b}= D_{\bar a\restriction J_\varepsilon, \bar b\restriction J_\varepsilon}$ and similarly $E^\varepsilon_{\bar b,\bar a}$ and $D^\varepsilon_{\bar b,\bar a}$. If $|J_\varepsilon|>1$ and $J_\varepsilon$ is of type $A$ then let $\varphi_\varepsilon:\LSh_{k_\varepsilon}(N)\to(((I_\varepsilon)^{\U{J_\varepsilon}})_<,D^\varepsilon_{\bar a,\bar b})$ be as supplied by Lemma \ref{L:k-orderly}. I.e., for any $\eta,\rho\in (N^{\U{k_\varepsilon}})_<$, if $\eta(i)=\rho(i-1)$ for $0<i<k_\varepsilon$ (if $k_\varepsilon>1$) and $\eta(0)<\rho(0)$ (if $k_\varepsilon=1$) then $\otp(\varphi_\varepsilon(\eta),\varphi_\varepsilon(\rho))=\otp(\bar a\restriction J_\varepsilon,\bar b\restriction J_\varepsilon)$. If $|J_\varepsilon|>1$ and $J_\varepsilon$ is of type $B$ then let $\widehat{\varphi_\varepsilon}:\LSh_{k_\varepsilon}(N)\to (((I_\varepsilon)^{\U{J_\varepsilon}})_<,D^\varepsilon_{\bar b,\bar a})$ be as supplied by Lemma \ref{L:k-orderly}. I.e, for any $\eta,\rho\in (N^{\U{k_\varepsilon}})_<$, if $\eta(i)=\rho(i-1)$ for $0<i<k_\varepsilon$ (if $k_\varepsilon>1$) and $\eta(0)<\rho(0)$ (if $k_\varepsilon=1$) then $\otp(\widehat{\varphi_\varepsilon}(\eta),\widehat{\varphi_\varepsilon}(\rho))=\otp(\bar b\restriction J_\varepsilon,\bar a\restriction J_\varepsilon)$. By composing with the isomorphism $\RSh_{k_\varepsilon}(N)\to \LSh_{k_\varepsilon}(N)$ mapping $(x_0,\dots, x_{k_\varepsilon-1})$ to $(N-1-x_{k_\varepsilon-1},\dots,N-1-x_0)$, we arrive to a directed graph homomorphism $\varphi_\varepsilon:\RSh_{k_\varepsilon}(N)\to (((I_\varepsilon)^{\U{J_\varepsilon}})_<,D^\varepsilon_{\bar b,\bar a})$. By definition this map can be seen as a directed graph homomorphism $\varphi_\varepsilon:\LSh_{k_\varepsilon}(N)\to (((I_\varepsilon)^{\U{J_\varepsilon}})_<,D^\varepsilon_{\bar a,\bar b})$. For $1\leq m\leq k$ let $\pi_m:(N^{\U k})_<\to (N^{\U m})_<$ be the projection on the first $m$ coordinates. Note that it is a directed graph homomorphism $\LSh_k(N)\to \LSh_m(N)$. We now define $\varphi:\LSh_k(N)\to ((I^{\U{\alpha}})_<,D_{\bar a,\bar b})$. For any $\eta\in (N^{\U k})_<$, $\varepsilon\in S$ and $\beta< \alpha$, let $\varepsilon(\beta)\in S$ be such that $\beta\in J_\varepsilon$. We define \[\varphi(\eta)(\beta)= (\varepsilon(\beta),\varphi_{\varepsilon(\beta)}(\pi_{k_\varepsilon}(\eta))(\beta)).\] Since, for any $\varepsilon\in S$ and $\eta\in (N^{\U k})_<$, $\varphi_{\varepsilon}(\pi_{k_\varepsilon}(\eta))$ is increasing, it is clear that $\varphi(\eta)$ is increasing as well. Assume that $\eta,\rho\in \LSh_k(N)$ are connected, i.e., $\eta(i)=\rho(i-1)$ for $0<i<k$ (if $k>1$) and $\eta(0)<\rho(0)$ (if $k=1$). It is routine to check that $\otp(\varphi(\eta),\varphi(\rho))=\otp(\bar a,\bar b)$. As a result, $\varphi$ is also a graph homomorphism between $\Sh_k(N)$ to $((I^{\U \alpha})_<,E_{\bar a,\bar b})$. We have proved that for every $N<\omega$ there exists a graph homomorphism $\varphi_N:\Sh_k(N)\to ((I^{\U \alpha})_<,E_{\bar a,\bar b})$. By compactness, we may find a graph homomorphism $\Sh_k(\omega)\to \mathcal{H}$ for some elementary extension $((I^{\U \alpha})_<,E_{\bar a,\bar b})\prec (\mathcal{H},E)$. By Fact \ref{F:homomorphism is enough}, there exists $m\leq k$ such that $((I^{\U \alpha})_<,E_{\bar a,\bar b})$ contains all finite subgraphs of $\Sh_m(\omega)$. 
\end{proof} \subsection{Analyzing order-type graphs with large chromatic number}\label{ss:orde-type-large-chrom} The main goal of this section is to prove that every order-type graph of large enough chromatic number is $k$-orderly covered for some $k$, i.e. we will prove the following. \begin{theorem}\label{T:shift graphs in order-type-graphs} Let $\alpha$ be an ordinal, $(\theta,<)$ an infinite ordinal with $|\alpha|^+ +\aleph_0<\theta$. Let $\bar a\neq \bar b\in (\theta^{\U{\alpha}})_<$ be some fixed sequences. Let $G=((\theta^{\U{\alpha}})_<,E_{\bar a,\bar b})$. If $\chi(G)>\beth_2(\aleph_0)$ then $G$ contains all finite subgraphs of $\Sh_m(\omega)$ for some $m\in \mathbb{N}$. \end{theorem} In order to achieve this we will need to analyze the order-type of two infinite sequences. The tools developed here, we believe, may be useful in their own right. We fix some ordinals $\alpha$ and $\theta$ with $\theta$ infinite and $\bar a\neq \bar b\in (\theta^{\U{\alpha}})_<$ increasing sequences. We partition $\alpha=J_0\cup J_+\cup J_-$, where \[J_0=\{\beta< \alpha: a_\beta=b_\beta\},\, J_+=\{\beta < \alpha: a_\beta<b_\beta\},\, J_-=\{\beta< \alpha: b_\beta<a_\beta\}.\] Let $R$ be the minimal convex equivalence relation on $\alpha$ containing \[\{(\beta,\gamma): a_\beta=b_\gamma\},\, \{(\beta,\gamma): a_\beta<a_\gamma\leq b_\beta\} \text{ and } \{(\beta,\gamma): b_\beta< b_\gamma\leq a_\beta\}.\] \begin{lemma}\label{L:equiv classes are disjoint} Let $A,B\in \alpha/R$ and assume that $A<B$. Then $\Img(\bar a \restriction A)< \Img(\bar b\restriction B)$ and $\Img(\bar b\restriction A)<\Img(\bar a \restriction B)$. \end{lemma} \begin{proof} We will show that $\Img(\bar a \restriction A)< \Img(\bar b\restriction B)$, the other assertion follows similarly. Let $\beta\in A$ and $\gamma\in B$, so $\beta<\gamma$. If $a_\beta\geq b_\gamma$ then $b_\beta< b_\gamma\leq a_\beta$ and hence $\beta \R \gamma$, contradiction. \end{proof} \begin{lemma}\label{L:each class has equal direction} \begin{enumerate} \item For any $\beta\in J_0$, $[\beta]_R\subseteq J_0$; \item For any $\beta\in J_+$, $[\beta]_R\subseteq J_+$; \item For any $\beta\in J_-$, $[\beta]_R\subseteq J_-$. \end{enumerate} Moreover, $[\beta]_R=\{\beta\}$ for $\beta\in J_0$. \end{lemma} \begin{proof} To prove (1), (2) and (3) it is sufficient to prove a weaker version where we assume that $\beta=\min [\beta]_R$. We show $(2)$, items $(1)$ and $(3)$ are proved similarly. Assume that $[\beta]_R\not\subsetneq J_+$. Let $X= \{ \delta<\alpha: \delta\in [\beta]_R\wedge (\forall \beta\leq x\leq \delta)(a_x<b_x)\}$ (in (1) we replace $a_x<b_x$ by $a_x=b_x$ and in (3) by $a_x>b_x$). By the assumptions, $X$ is a non empty initial segment of $[\beta]_R$ and $Y=[\beta]_R\setminus X$ is non-empty convex. We will show that both $X$ and $Y$ are closed under the relations defining $R$ and thus derive a contradiction to the minimality of $R$. Assume that $a_y=b_z$ with $y\in X$ and $z\in Y$. Since $X$ is an initial segment, $y<z$. Consequently, $a_y<b_y<b_z$, contradiction. Now assume that $z\in X$ and $y\in Y$, so there exists $z<x\leq y$ with $a_x\geq b_x$ and as a result $a_z<b_z<b_x\leq a_x\leq a_y=b_z$, contradiction. Assume that $a_y<a_z\leq b_y$ with $y\in X$ and $z\in Y$, so $y<z$. Hence there is some $y<x\leq z$ with $a_x\geq b_x$ hence $a_y<a_z\leq b_y<b_x\leq a_x\leq a_z$, contradiction. Now assume that $z\in X$ and $y\in Y$, so $z<y$. This implies that $a_z<a_y<a_z$, contradiction. Assume that $b_y<b_z\leq a_y$ with $y\in X$ and $z\in Y$. 
Consequently, $a_y<b_y<b_z\leq a_y$, contradiction. Now assume that $z\in X$ and $y\in Y$, so $z<y$. As a result, $b_z<b_y<b_z$, a contradiction. Finally, we show the moreover part. Assume not; then by $(1)$ it is easy to see that both $\{\beta\}$ and $[\beta]_R\setminus \{\beta\}$ are closed under the relations generating $R$. This contradicts the minimality of $R$. \end{proof} By Lemma \ref{L:each class has equal direction}, $R\restriction J_+$ is an equivalence relation on $J_+$. For any $A\in J_+/R$ we construct a set $Z_A\subseteq A$. We construct a sequence $\delta^A_n$ for $n<\omega$ as follows. Let $\delta^A_0=\min A$ and assume that $\delta^A_n$ has been chosen. Let $\delta^A_{n+1}\in A$ be the minimal index satisfying $b_{\delta^A_n}\leq a_{\delta^A_{n+1}}$ if such exists; otherwise stop. Let $Z_A=\langle \delta^A_n: n<n_A\rangle$, where $n_A\leq \omega$. Note that $Z_A$ is a strictly increasing sequence because $A\in J_+/R$. Furthermore, set $C^A=Conv(\Img(\bar a\restriction A)\cup \Img(\bar b\restriction A))$ and \begin{itemize} \item $C^A_0=[a_{\delta^A_0},b_{\delta^A_0})$; \item If $n_A=\omega$ then for any $0<n<\omega$ we set $C^A_n=[b_{\delta^A_{n-1}},b_{\delta^A_{n}})$; \item If $n_A<\omega$ then for any $0<n<n_A$ set $C^A_n=[b_{\delta^A_{n-1}},b_{\delta^A_{n}})$ and $C^A_{n_A}=\{x\in C^A: b_{\delta^A_{n_A-1}}\leq x\}$. \end{itemize} \begin{lemma}\label{L:we are k-orderely covered} Let $A\in J_+/R$. \begin{enumerate} \item If $n_A=\omega$ then $A=\bigcup_{n<\omega} [\delta^A_0,\delta^A_n]$. \item If $n_A=\omega$ then $C^A=\bigcup_{n<\omega}C_n^A$. \item For every $\beta\in A$ and $0\leq n<n_A$, $a_\beta\in C_n^A \iff b_\beta\in C_{n+1}^A$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Let $X=\bigcup_{n<\omega}[\delta^A_0,\delta^A_n]$. Since the $\delta^A_n$'s are chosen from $A$ and $A$ is convex, $X\subseteq A$. As in the proof of Lemma \ref{L:each class has equal direction}, it is enough to show that both $X$ and $Y=A\setminus X$ are closed under the relations defining $R$. If $x,y\in A$ satisfy that $a_x=b_y$ then since $b_y=a_x<b_x$ we conclude that $y<x$ and thus if $x\in X$ then $y\in X$. Now if we assume that $y\in X$, e.g. $y<\delta^A_n$, then $a_x=b_y<b_{\delta^A_n}\leq a_{\delta^A_{n+1}}$ so $x<\delta^A_{n+1}$. Assume that $x,y\in A$ satisfy $a_x<a_y\leq b_x$. If $x\in X$, e.g. $x<\delta^A_n$, then $a_x<a_y\leq b_x<b_{\delta^A_n}\leq a_{\delta^A_{n+1}}$ so $y<\delta^A_{n+1}$. If $y\in X$ then since $x<y$ we conclude that $x\in X$ as well. Assume that $x,y\in A$ satisfy $b_x<b_y\leq a_x$. If $y\in X$ then since $x<y$ it follows that $x\in X$ as well. Assume that $y\in Y$, i.e. $y\geq \delta^A_n$ for all $n$. But then $b_{\delta^A_n}\leq b_y\leq a_x<b_x$ for all $n$. This implies that $\delta^A_n< x$ for all $n$ and hence $x\notin X$ as well. \item The right-to-left inclusion is straightforward. For the other inclusion, let $x\in C^A$. Since $\delta^A_0=\min A$ and $A\in J_+/R$, $a_{\delta^A_0}=\min C^A$ and hence $a_{\delta^A_0}\leq x$. If there exists $n<\omega$ with $x<b_{\delta^A_n}$ then for the minimal such $n$, $x\in C^A_{n}$. Otherwise, since $a_\beta<b_\beta$ for any $\beta\in A$, note that $x\leq b_\beta$ for some $\beta\in A$. Hence $x\leq b_{\delta^A_n}$ for some $n<\omega$ by $(1)$. \item Let $\beta\in A$ and $n$ be as in the statement. Assume that $n_A=\omega$ (the case $n_A<\omega$ is similar). Let $a_\beta\in C_n^A$. First assume $n=0$, i.e. $a_{\delta^A_0}\leq a_\beta<b_{\delta^A_0}$.
It is always true that $b_{\delta^A_0}\leq b_\beta$. If $\beta\geq \delta^A_1$ then $a_{\delta^A_1}\leq a_\beta<b_{\delta^A_0}$, contradicting the choice of $\delta^A_1$. Now, if $n>0$ then $b_{\delta^A_{n-1}}\leq a_\beta<b_{\delta^A_{n}}$ and thus by definition of $\delta^A_n$, $\delta^A_{n}\leq \beta$ so $b_{\delta^A_{n}}\leq b_\beta$. If, on the other hand, $\beta\geq \delta^A_{n+1}$ then $a_{\delta^A_{n+1}}\leq a_\beta<b_{\delta^A_{n}}$, contradiction. Hence $b_{\delta^A_{n}}\leq b_\beta<b_{\delta^A_{n+1}}$. Let $b_\beta\in C^A_{n+1}$. By $(2)$, $a_\beta\in C^A_k$ for some $k<\omega$. Using the above we conclude that $b_\beta\in C^A_{k+1}$ and thus $k+1=n+1$, i.e. $k=n$. \end{enumerate} \end{proof} \begin{lemma}\label{L:special-sequence} For any $A\in J_+/R$ there exist an increasing sequence $\langle \zeta_n^A\in A:n<n_A\rangle$, satisfying that for every $n$ with $n+1<n_A$ \[a_{\zeta^A_{n+1}}\leq b_{\zeta^A_{n}},\] and for every $n$ with $n+2<n_A$ \[ b_{\zeta^A_{n}}<a_{\zeta^A_{n+2}}.\] \end{lemma} \begin{proof} Let $n$ be such that $n+1<n_A$. Assume for now that $b_{\delta_n}\neq a_{\delta_{n+1}}$ (and hence $b_{\delta_n}<a_{\delta_{n+1}}$) and assume towards a contradiction that \begin{itemize} \item[(*)] for any $\epsilon\in (\delta_n,\delta_{n+1})$, $b_{\epsilon}<a_{\delta_{n+1}}$. \end{itemize} Note that this implies that for any such $\varepsilon$, $a_{\delta_n}<a_\varepsilon<b_{\delta_{n}}<b_{\varepsilon}<a_{\delta_{n+1}}$. Let $X=\{\beta \in A:\beta <\delta_{n+1}\}$ and $Y=A\setminus X$. This gives a convex partition of $A$, we will show that both $X$ and $Y$ are closed under the relations defining $R$. Let $\beta,\gamma\in A$ with $a_\beta=b_\gamma$. If $\beta<\delta_{n+1}$ and $\gamma\geq \delta_{n+1}$ then $b_{\delta_{n+1}}\leq b_\gamma=a_\beta< a_{\delta_{n+1}}$, contradiction. Now assume that $\gamma< \delta_{n+1}$ and $\beta\geq \delta_{n+1}$. If $\gamma\leq \delta_n$ then $a_{\delta_{n+1}}\leq a_\beta=b_\gamma\leq b_{\delta_n}$, contradiction. If $\gamma>\delta_n$ then $a_{\delta_{n+1}}\leq a_\beta=b_\gamma<a_{\delta_{n+1}}$ since $\gamma\in (\delta_n,\delta_{n+1})$ and by (*), contradiction. Let $\beta,\gamma\in A$ with $a_\beta<a_\gamma \leq b_\beta$. Assume that $\beta< \delta_{n+1}$ and $\gamma\geq \delta_{n+1}$. If $\beta\leq \delta_n$ then $a_{\delta_{n+1}}\leq a_\gamma\leq b_\beta\leq b_{\delta_{n}}$, contradiction. If $\beta\in (\delta_n,\delta_{n+1})$ then $b_\beta<a_{\delta_{n+1}}\leq a_\gamma\leq b_\beta$ by $(*)$, contradiction. Note that we cannot have $\gamma<\delta_{n+1}$ and $\beta\geq \delta_{n+1}$ since $\beta<\gamma$ by assumption. Let $\beta,\gamma\in A$ with $b_\beta<b_\gamma\leq a_\beta$. If $\beta< \delta_{n+1}$ and $\gamma\geq \delta_{n+1}$ then $b_{\delta_{n+1}}\leq b_\gamma\leq a_\beta<a_{\delta_{n+1}}<b_{\delta_{n+1}}$, contradiction. As before, $\gamma <\delta_{n+1}$ and $\beta\geq \delta_{n+1}$ is not possible since $\beta<\gamma$. As a result, we may conclude that for all $n$ such that $n+1<n_A$ we may find $\gamma_n\in (\delta_n,\delta_{n+1}]$ satisfying $a_{\delta_n}<a_{\gamma_n}\leq b_{\delta_n}\leq a_{\delta_{n+1}}\leq b_{\gamma_n}$ (if $b_{\delta_n}=a_{\delta_{n+1}}$ choose $\gamma_n=\delta_{n+1}$, otherwise use the above). Let $I=\{\delta_n,\gamma_n: n+1<n_A\}$. The crucial property is that for every $\gamma\in I\setminus \{\sup I\}$ there is some $\beta \in I$ satisfying $a_\gamma<a_\beta\leq b_\gamma$. We note that for every $n$ such that $n+1<n_A$ if $\gamma\leq \delta_n$ then $\beta\leq \delta_{n+1}$. 
Indeed, otherwise $a_{\delta_{n+1}}<a_\beta\leq b_\gamma\leq b_{\delta_n}$, contradiction. We construct a sequence $\langle \zeta_n: n<k\rangle$ for some $k\leq \omega$ as follows. Define $\zeta_0=\delta_0$ and for every $n$ let $\zeta_{n+1}\in I$ be maximal\footnote{If $n_A$ is finite then such a maximal element clearly exists. Otherwise, for $\zeta \in I$ there is some $n<\omega$ such that $b_{\zeta}<a_{\delta_n}$, and hence $\zeta\in \{\xi\in I: a_\xi\leq b_\zeta\}$ is finite.} with $a_{\zeta_{n}}<a_{\zeta_{n+1}}\leq b_{\zeta_{n}}$, if exists. Obviously, this is an increasing sequence. We claim that $k\geq n_A$. By induction on $n<k$ with $n<n_A$, $\zeta_n\leq \delta_n$. In particular if $n+1<n_A$, $\zeta_{n+1}$ exists. Finally we note that by maximality, for all $n+2<n_A$, $a_{\zeta_{n+1}}\leq b_{\zeta_{n}}< a_{\zeta_{n+2}}$. \end{proof} For $A\in J_-/R$ we make dual (i.e. exchanging the roles of $\bar a$ and $\bar b$) constructions and similar properties hold. \begin{corollary}\label{C:embed shift if k bounds n_A} Let $\alpha$ be an ordinal, $(\theta,<)$ an infinite ordinal with $|\alpha|^+ +\aleph_0<\theta$. Let $\bar a\neq \bar b\in (\theta^{\U{\alpha}})_<$ be some fixed sequences. Let $G=((\theta^{\U{\alpha}})_<,E_{\bar a,\bar b})$. If there exists $0<k<\omega$ with $n_A\leq k$ for all $A\in (J_+\cup J_-)/R$ then $G$ contains all finite subgraphs of $\Sh_m(\omega)$ for some $m\leq k$. \end{corollary} \begin{proof} By Lemma \ref{L:equiv classes are disjoint} and Lemma \ref{L:we are k-orderely covered}(3), $\{\bar a,\bar b\}$ is $k$-orderly covered in the sense of Definition \ref{D:k-ord-covered}. Now apply Corollary \ref{C:k-ord-cov-implies shift}. \end{proof} The aim of the rest of this section is to prove that $\{n_A: A\in (J_+\cup J_-)/R\}$ has a finite bound. From now on we will only need the sequences defined in Lemma \ref{L:special-sequence}. \begin{lemma}\label{L:each n_A is finite} If $\chi(G)>2^{\aleph_0}$ then for any $A\in (J_+\cup J_-)/R$, $n_A<\omega$. \end{lemma} \begin{proof} We assume that $A\in J_+/R$, the proof for $A\in J_-/R$ is similar. Assume towards a contradiction that $n_A=\omega$. We will show that $\chi(G)\leq 2^{\aleph_0}$. Let $S=\{\beta\leq \theta:\cf(\beta)=\aleph_0\}$. Let $\langle \zeta_l=\zeta^A_l\in A:l<\omega\rangle$ be the sequence supplied by Lemma \ref{L:special-sequence}. For any $\gamma\in S$ choose an increasing sequence of ordinals $\langle \alpha_{\gamma,n}:n<\omega\rangle\subseteq \gamma$ with limit $\gamma$. We define a coloring map $c:(\theta^{\U{\alpha}})_<\to 2^{\aleph_0\times\aleph_0}$. For any $\bar f\in (\theta^{\U{\alpha}})_<$ let $\gamma(\bar f)=\sup\{f_{\zeta_l}:l<\omega\}\in S$ and \[c(\bar f)=\{(l,n):l,n<\omega,\, f_{\zeta_l}<\alpha_{\gamma(\bar f),n}\}.\] To show that it is a legal coloring, let $\bar f,\bar g\in (\theta^{\U{\alpha}})_<$ such that $\otp(\bar f,\bar g)=\otp(\bar a,\bar b)$. By assumption $f_{\zeta_l}<f_{\zeta_{l+1}}\leq g_{\zeta_l}\leq f_{\zeta_{l+2}}$ for $l<\omega$, and hence $\gamma(\bar f)=\gamma(\bar g)$. By definition, there is some $n<\omega$ such that $f_{\zeta_0}<\alpha_{\gamma(\bar f),n}$ and let $k$ be the minimal such that $f_{\zeta_{k+1}}\geq \alpha_{\gamma(\bar f),n}$. So by minimality of $k$, \[f_{\zeta_k}<\alpha_{\gamma(\bar f),n}\leq f_{\zeta_{k+1}}\leq g_{\zeta_{k}}\] and hence $(k,n)\in c(\bar f)$ but $(k,n)\notin c(\bar g)$ so $c(\bar f)\neq c(\bar g)$. \end{proof} The next lemma requires a more complicated argument: Section \ref{S:PCF} below. Let us introduce some notation. 
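The idea is to restrict the increasing sequences in $(\theta^{\U{\alpha}})_<$ to the coordinates supplied by Lemma \ref{L:special-sequence} and to color the restricted sequences instead; the required coloring is provided by Conclusion \ref{Con:coloring-pcf}.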
Fix some sequence $\langle A_\varepsilon \in J_+/R : \varepsilon<\omega \rangle$. For any $\varepsilon<\omega$, let $J_\varepsilon=\{ \zeta_n^{A_\varepsilon}\in A_\varepsilon: n<n_{A_\varepsilon}\}$ be the range of the sequence supplied by Lemma \ref{L:special-sequence} applied to $A_\varepsilon$. Let $J=\bigcup_{\varepsilon<\omega}J_\varepsilon$, $\Omega=(\theta^{\U J})_<$, $R'=\{(\bar c,\bar d)\in \Omega^2: \otp(\bar c,\bar d)=\otp(\bar a\restriction J,\bar b\restriction J)\}$ and $\chi=\beth_2(\aleph_0)$. Note that $R'$ is an irreflexive relation on $\Omega$ satisfying that if $f_1 \R' f_2$, $f_1,f_2\in \Omega$, then for every $\varepsilon<\omega$ and $i\in J_\varepsilon$, the following hold: \[f_1(i)<f_2(i)\tag{1}\] and for any $i\in J_\varepsilon$ with $\suc{i}\in J_\varepsilon$ \[f_1(\suc{i})\leq f_2(i)\tag{2},\] and for any $i\in J_\varepsilon$ with $\suc{\suc{i}}\in J_\varepsilon$, \[f_2(i)<f_1(\suc{\suc{i}})\tag{3},\] where $\suc{i}$ is the successor of $i$ in $J_\varepsilon$. Under these assumptions (or more generally under Assumption \ref{A:R}), we will prove in Conclusion \ref{Con:coloring-pcf} that \begin{itemize} \item[(*)] If $n_{A_\varepsilon} < \omega$ for all $\varepsilon<\omega$, then there exists a function $c:\Omega\to \chi$ satisfying that if $f_1,f_2\in \Omega$ and $f_1 \R' f_2$ then $c(f_1)\neq c(f_2)$. In other words, there exists a coloring of the directed graph $(\Omega,R')$ of cardinality $\chi$. \end{itemize} \begin{lemma}\label{L:all n_A are bounded} If $\chi(G)>\beth_2(\aleph_0)$ then the set $\{n_A: A\in (J_+\cup J_-)/R\}$ is bounded. \end{lemma} \begin{proof} By Lemma \ref{L:each n_A is finite}, for any $A\in (J_+\cup J_-)/R$, $n_A<\omega$. We will show that $\{n_A: A\in J_+/R\}$ and $\{n_A: A\in J_-/R\}$ are both bounded. Assume that $\{n_A: A\in J_+/R\}$ is unbounded. Let $\{A_\varepsilon\in J_+/R:\varepsilon<\omega\}$ be a family of convex equivalence classes such that $\varepsilon<n_{A_\varepsilon}$. By (*), there exists a function $c:\Omega\to \beth_2(\aleph_0)$ satisfying that if $f_1,f_2\in \Omega$ and $f_1\R' f_2$ then $c(f_1)\neq c(f_2)$. Let $H=(\Omega,(R')^{sym})$ be the graph induced by $R'$ (i.e. $(\bar c, \bar d)\in (R')^{sym} \iff (\bar c,\bar d)\in R'\vee (\bar d,\bar c)\in R'$). The map $c$ induces a coloring on $H$ and hence $\chi(H)\leq \beth_2(\aleph_0)$. Since the map $(\theta^{\U \alpha})_< \to \Omega$ given by $\eta\mapsto \eta\restriction J$ is a graph homomorphism, $\chi(G)\leq \beth_2(\aleph_0)$ and this contradicts the assumption. If on the other hand $\{n_A: A\in J_-/R\}$ is unbounded then we proceed as above but using $R''=\{(\bar c,\bar d)\in \Omega^2: \otp(\bar c,\bar d)=\otp(\bar b\restriction J,\bar a\restriction J)\}$ and the dual construction mentioned above (replacing $R'$ by $R''$ in (*)), and arrive at a similar contradiction. \end{proof} Finally, we may conclude: \proof[Proof of Theorem \ref{T:shift graphs in order-type-graphs}] This is a direct consequence of Corollary \ref{C:embed shift if k bounds n_A} and Lemma \ref{L:all n_A are bounded}. \qed \section{Coloring increasing functions}\label{S:PCF} This section's main result is Conclusion \ref{Con:coloring-pcf}, used in the final stage of the previous section. We prove that, under mild conditions (namely Assumption \ref{A:R}) on a directed graph on a family of strictly increasing functions, there exists a coloring of small cardinality. Let $\kappa=\cf(\kappa)$ be a regular cardinal and $(J,<)$ a well order of cofinality $\kappa$.
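(In the application in Section \ref{S:order type graphs}, $\kappa=\aleph_0$ and the pieces $J_\varepsilon$ of the partition chosen below are the finite sets $\{\zeta^{A_\varepsilon}_n:n<n_{A_\varepsilon}\}$ supplied by Lemma \ref{L:special-sequence}.)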
Let $\sigma=(2^{\kappa})^+$, let $\theta$ be an ordinal and let $\chi$ be a cardinal satisfying $\chi=\chi^{\kappa}+\chi^{<\sigma}$. Let $\langle J_\varepsilon: \varepsilon <\kappa\rangle$ be an increasing partition of $J$ into finite convex sets. Assume that $\sup_{\varepsilon<\kappa} |J_\varepsilon|=\omega$. Let $\mathcal{D}$ be a non-principal ultrafilter on $\kappa$ containing the filter generated by $\{\{\varepsilon<\kappa: |J_\varepsilon|\geq n\}: n<\omega\}$. Let $\Omega$ be the set of functions from $J$ to $\theta$ that are strictly increasing on each $J_\varepsilon$ ($\varepsilon <\kappa$). Let $\mathcal{H}=(\theta+1)^\kappa$. \begin{assumption}\label{A:R} $R$ is an irreflexive relation on $\Omega$ satisfying that if $f_1 \R f_2$, $f_1,f_2\in \Omega$, then for every $\varepsilon<\kappa$ and $i\in J_\varepsilon$ \[f_1(i)<f_2(i)\tag{1}\] and for any $i\in J_\varepsilon$ with $\suc{i}\in J_\varepsilon$ \[f_1(\suc{i})\leq f_2(i)\tag{2},\] and for any $i\in J_\varepsilon$ with $\suc{\suc{i}}\in J_\varepsilon$, \[f_2(i)<f_1(\suc{\suc{i}})\tag{3},\] where $\suc{i}$ is the successor of $i$ in the finite set $J_\varepsilon$. \end{assumption} We say that a subset $X$ of $\Omega$ is \emph{trivial} if $f_1\not\R f_2$ for any $f_1,f_2\in X$. \begin{definition}\label{D:def of approximation} An approximation $\Ba$ consists of a partition $\Omega=\bigcup_{s\in S_\Ba}\Omega_{s}^\Ba$ (so all the $\Omega^{\Ba}_s$'s are non-empty) together with $\rho_{s}^\Ba\in \chi^{<\sigma}$ and $h^\Ba_s\in \mathcal{H}$ ($s\in S_\Ba$) satisfying \begin{enumerate} \item for every $s\in S_\Ba$ and $f\in\Omega^\Ba_s$, $\{\varepsilon<\kappa: \Rg(f\restriction J_\varepsilon)\subseteq h^\Ba_s(\varepsilon)\}\in \mathcal{D}$. \item if $s\neq t\in S_\Ba$ and $\rho_s^\Ba=\rho^\Ba_t$ then for every $f_1\in \Omega^\Ba_s$ and $f_2\in \Omega^\Ba_t$, $f_1\not\R f_2$. \end{enumerate} \end{definition} We want to define when one approximation is better than another. \begin{definition}\label{D:order on approx} For two approximations $\Ba$ and $\Bb$ we will say that $\Ba\trianglelefteq_g \Bb$ if there exists a surjective function $g: S_\Bb \to S_\Ba$ satisfying \begin{enumerate} \item for any $s\in S_\Ba$, $\{\Omega^\Bb_t: t\in g^{-1}(s)\}$ is a partition of $\Omega^\Ba_s$. \item if $s\in S_\Ba$ and $\Omega^\Ba_s$ is trivial then $g^{-1}(s)$ is a singleton $t\in S_\Bb$ satisfying $h^\Ba_s=h^\Bb_t$ and $\Omega^\Ba_s=\Omega^\Bb_t$ (in particular $\Omega^\Bb_t$ is also trivial). \item for $t\in S_\Bb$, $\{\varepsilon<\kappa: h^\Bb_t(\varepsilon)\leq h^\Ba_{g(t)}(\varepsilon)\}\in \mathcal{D}$. \item for $t\in S_\Bb$, $\rho^\Ba_{g(t)}$ is an initial segment of $\rho^\Bb_t$. \end{enumerate} We will say that $\Ba\triangleleft_g \Bb$ if $\Ba\trianglelefteq_g \Bb$ and in addition for every $t\in S_\Bb$, either $\Omega^\Bb_t$ is trivial or $\{\varepsilon<\kappa :h^\Bb_t(\varepsilon)<h^\Ba_{g(t)}(\varepsilon)\}\in \mathcal{D}$. \end{definition} The following is clear. \begin{lemma}\label{L:composition-approx} Let $\Ba,\Bb$ and $\Bc$ be approximations. If $\Ba\trianglelefteq_g \Bb$ and $\Bb\trianglelefteq_h\Bc$ then $\Ba\trianglelefteq_{g\circ h} \Bc$. If, in addition, either $\Ba \triangleleft_g \Bb$ or $\Bb \triangleleft_h\Bc$ then $\Ba \triangleleft_{g\circ h}\Bc$. \end{lemma} \begin{proposition}\label{P:approx-small-cof} Let $\Ba$ be an approximation. Then there exists an approximation $\Ba\trianglelefteq_g \Bb$ satisfying the following.
\begin{enumerate} \item If $t\in S_\Bb$, $\Omega^\Ba_{g(t)}$ is non-trivial and $\{\varepsilon<\kappa: 0<\cf(h^\Ba_{g(t)}(\varepsilon))\leq \chi\}\in \mathcal{D}$ then either $\Omega^\Bb_t$ is trivial or $\{\varepsilon<\kappa: h^\Bb_t(\varepsilon)<h^\Ba_{g(t)}(\varepsilon)\}\in\mathcal{D}$. \item If $t\in S_\Bb$ satisfies that $\{\varepsilon<\kappa:0<\cf(h^\Ba_{g(t)}(\varepsilon))\leq \chi\}\notin \mathcal{D}$ then $h^\Bb_t=h^\Ba_{g(t)}$ and $\Omega^\Bb_t=\Omega^\Ba_{g(t)}$. \end{enumerate} Lastly, for $t\in S_\Bb$, if $\rho^\Ba_{g(t)}\in \chi^{\xi}$, with $\xi<\sigma$, then $\rho^\Bb_t\in \chi^{\xi+1}$. \end{proposition} \begin{proof} We partition $S_\Ba$ in the following way. Let $S_1=\{s\in S_\Ba: (\forall^{\mathcal{D}}\varepsilon<\kappa)(0<\cf(h^\Ba_s(\varepsilon))\leq \chi) \text{ and } \Omega^\Ba_s \text{ is non-trivial}\}$ and let $S_0=S_\Ba\setminus S_1$ be the rest. Fix any $s\in S_1$. For any $\varepsilon<\kappa$, if $0<\cf(h^\Ba_s(\varepsilon))\leq \chi$ we choose an unbounded subset $C_{s,\varepsilon}\subseteq h^\Ba_s(\varepsilon)$ of order type $\cf(h^\Ba_s(\varepsilon))$, and we set $C_{s,\varepsilon}=\{h^\Ba_s(\varepsilon)\}$ otherwise. Set $A_s=\{\varepsilon<\kappa: 0<\cf(h^\Ba_s(\varepsilon))\leq \chi\}$. Note that $A_s\in \mathcal{D}$. Let $H_s=\{h\in \mathcal{H}: \text{if $\varepsilon\in A_s$ then $h(\varepsilon)\in C_{s,\varepsilon}$, and $h(\varepsilon)=h^\Ba_s(\varepsilon)$ otherwise}\}$. Since $\chi^{\kappa}=\chi$, $|H_s|\leq \chi$ and hence there is some $\xi_s\leq \chi$ and an enumeration $\langle h_{s,\xi}:\xi<\xi_s\rangle$ of $H_s$. By induction on $\xi<\xi_s$ we define \[\Omega_{s,\xi}=\{f\in \Omega^\Ba_s: (\forall^\mathcal{D}\varepsilon<\kappa)(\Rg(f\restriction J_\varepsilon)\subseteq h_{s,\xi}(\varepsilon))\}\setminus \bigcup_{\alpha<\xi}\Omega_{s,\alpha}\] and for $\xi=\xi_s$ \[\Omega_{s,\xi}=\{f\in \Omega^\Ba_s:(\forall^\mathcal{D} \varepsilon<\kappa)(h^\Ba_s(\varepsilon) \text{ is a successor and }h^\Ba_s(\varepsilon)-1=\max\Rg(f\restriction J_\varepsilon))\}.\] We claim that $\Omega^\Ba_s=\bigsqcup_{\xi\leq \xi_s}\Omega_{s,\xi}$. Let $f\in \Omega^\Ba_s$. Note that for every $\varepsilon\in A_s$ either (a) there is an ordinal $\gamma\in C_{s,\varepsilon}$ such that $\Rg(f\restriction J_\varepsilon)\subseteq \gamma$ or (b) there is no such $\gamma$. We may find $A_s^\prime\subseteq A_s$ satisfying that $A_s^\prime\in\mathcal{D}$ and such that either (a) holds for all $\varepsilon\in A_s^\prime$ or (b) holds for all $\varepsilon\in A_s^\prime$. Assume that (a) holds for all $\varepsilon\in A_s^\prime$ and let $\langle \gamma_\varepsilon : \varepsilon\in A_s^\prime\rangle$ witness this. Define a function $h\in H_s$ by setting for all $\varepsilon\in A_s^\prime$, $h(\varepsilon)=\gamma_\varepsilon$. For $\varepsilon\notin A_s^\prime$ choose $h(\varepsilon)$ arbitrarily so that $h\in H_s$. Let $\xi<\xi_s$ be minimal such that $(\forall^\mathcal{D}\varepsilon<\kappa)(\Rg(f\restriction J_\varepsilon)\subseteq h_{s,\xi}(\varepsilon))$, so $f\in \Omega_{s,\xi}$. Now, assume that (b) holds for all $\varepsilon\in A_s^\prime$. As $\Ba$ is an approximation (see Definition \ref{D:def of approximation}(1)) we may assume that for all $\varepsilon\in A_s^\prime$, $\Rg(f\restriction J_\varepsilon)\subseteq h^\Ba_s(\varepsilon)$ but we cannot find any $\gamma\in C_{s,\varepsilon}$ satisfying $\Rg(f\restriction J_\varepsilon)\subseteq \gamma$. For any $\varepsilon \in A_s^\prime$, because $J_\varepsilon$ is finite this implies that $\cf(h_s^{\Ba}(\varepsilon))=1$, i.e.
that $h^\Ba_s(\varepsilon)$ is a successor ordinal and that $h^\Ba_s(\varepsilon)-1=\max\Rg(f\restriction J_\varepsilon)$. Hence $f\in \Omega_{s,\xi_s}$. Let $S_\Bb=\{(s,\xi):s\in S_1,\, \xi\leq \xi_s,\, \Omega_{s,\xi}\neq \emptyset\}\cup S_0$ and let $g:S_\Bb\to S_\Ba$ be the function defined by $g(s,\xi)=s$ for $s\in S_1$ and $g(s)=s$ otherwise. For any $s\in S_0$ let $\Omega^\Bb_s=\Omega^\Ba_s$, $h^\Bb_s=h^\Ba_s$ and $\rho^\Bb_s={\rho^\Ba_s}^\frown\langle 0\rangle $. For $s\in S_1$, if $\xi\leq \xi_s$ we set $\Omega^\Bb_{(s,\xi)}=\Omega_{s,\xi}$ and $\rho^\Bb_{(s,\xi)}={\rho^\Ba_s}^{\frown} \langle \xi\rangle$. Finally, for $\xi<\xi_s$ we set $h^\Bb_{(s,\xi)}=h_{s,\xi}$ and for $\xi=\xi_s$ we set $h^\Bb_{(s,\xi)}=h^\Ba_s$. \begin{claim} $\Bb$ is an approximation and $\Ba\trianglelefteq_g \Bb$. \end{claim} \begin{claimproof} We first show that $\Bb$ is an approximation. Items $(1)$ and $(2)$ from the definition follow since $\Ba$ is an approximation and the construction above. For example, if $(s_1,\xi_1)\neq (s_2,\xi_2)\in S_\Bb$ and $\rho^\Bb_{(s_1,\xi_1)}=\rho^\Bb_{(s_2,\xi_2)}$ then since $\xi_1=\xi_2$ necessarily $s_1\neq s_2$ and $\rho^\Ba_{s_1}=\rho^\Ba_{s_2}$ so we may use the fact that $\Ba$ is an approximation. Finally, $\Ba\trianglelefteq_g \Bb$ by the construction. \end{claimproof} Showing $(1)$ from the statement of the proposition boils down to showing that $\Omega^\Bb_{(s,\xi_s)}=\Omega_{s,\xi_s}$ is trivial. This follows from Assumption \ref{A:R}(1). \end{proof} \begin{proposition}\label{P:approx-large-cof} Let $\Ba$ be an approximation. Then there exists an approximation $\Ba\trianglelefteq_g \Bb$ satisfying the following. \begin{enumerate} \item If $t\in S_\Bb$ with $\{\varepsilon<\kappa: \cf(h^\Ba_{g(t)}(\varepsilon))>\chi\}\in \mathcal{D}$ then either $\Omega^\Bb_t$ is trivial or $\{\varepsilon<\kappa: h^\Bb_t(\varepsilon)<h^\Ba_{g(t)}(\varepsilon)\}\in \mathcal{D}$. \item If $t\in S_\Bb$ satisfies that $\{\varepsilon<\kappa:\cf(h^\Ba_{g(t)}(\varepsilon))> \chi\}\notin \mathcal{D}$ then $h^\Bb_t=h^\Ba_{g(t)}$ and $\Omega^\Bb_t=\Omega^\Ba_{g(t)}$. \end{enumerate} Lastly, for $t\in S_\Bb$, if $\rho^\Ba_{g(t)}\in \chi^{\xi}$, with $\xi<\sigma$, then $\rho^\Bb_t\in \chi^{\xi+1}$. \end{proposition} \begin{proof} Let $S_1=\{s\in S_\Ba: \Omega^\Ba_s \text{ is non-trivial and } (\forall^{\mathcal{D}} \varepsilon<\kappa)(\cf(h^\Ba_s(\varepsilon))>\chi\}$ and $S_0=S_\Ba\setminus S_1$. Fix any $s\in S_1$. Let $A_s=\{\varepsilon<\kappa: \cf(h^\Ba_s(\varepsilon))>\chi\}$, so $A_s\in \mathcal{D}$. Let $\mathcal{D}_s=\{D\in \mathcal{D}: D\subseteq A_s\}$ be the induced ultrafilter on $A_s$. Consider the ultraproduct $\prod_{\varepsilon\in A_s}h^\Ba_s(\varepsilon)/\mathcal{D}_s$. We may consider it as a linearly ordered set, ordered by $<_{\mathcal{D}_s}$. \begin{claim}\label{C: sequence in ultraproduct} There exists a sequence $H_s=\langle h_{s,\beta}\in \mathcal{H}:\beta<\beta_s\rangle$ satisfying \begin{enumerate} \item for all $\varepsilon\in A_s$ and $\beta<\beta_s$, $h_{s,\beta}(\varepsilon)<h^\Ba_s(\varepsilon)$; \item for all $\varepsilon\in \kappa\setminus A_s$ and $\beta<\beta_s$, $h_{s,\beta}(\varepsilon)=h^\Ba_s(\varepsilon)$; \item $\langle (h_{s,\beta}\restriction A_s)/\mathcal{D}_s:\beta<\beta_s\rangle$ is $<_{\mathcal{D}_s}$ increasing and cofinal in $\prod_{\varepsilon\in A_s}h^\Ba_s(\varepsilon)/\mathcal{D}_s$. 
\item for any $f\in \Omega^\Ba_s$ there exists $\beta<\beta_s$ such that $\{\varepsilon<\kappa: \Rg(f\restriction J_\varepsilon)\subseteq h_{s,\beta}(\varepsilon)\}\in\mathcal{D}$. \end{enumerate} \end{claim} \begin{claimproof} First we choose a well-ordered increasing cofinal sequence in $\prod_{\varepsilon\in A_s}h^\Ba_s(\varepsilon)/\mathcal{D}_s$ and then choose a sequence of representatives $\langle h_{s,\beta}\restriction A_s:\beta<\beta_s\rangle$. To get (2), set $h_{s,\beta}(\varepsilon)=h_s^{\Ba}(\varepsilon)$ for any $\varepsilon\in \kappa\setminus A_s$. This gives us (1)--(3). We show (4). Let $f\in \Omega_s^\Ba$. Since $\Ba$ is an approximation, the set $X_{s,f}=\{\varepsilon\in A_s: \Rg(f\restriction J_\varepsilon)\subseteq h^{\Ba}_s(\varepsilon)\}$ is in $\mathcal{D}$. Let $h_f: A_s\to \text{Ord}$ be the function defined by mapping $\varepsilon\in X_{s,f}$ to $\max \Rg(f\restriction J_\varepsilon)+1$ and $\varepsilon\in A_s\setminus X_{s,f}$ to $0$. Note that for any $\varepsilon\in A_s$, $h_f(\varepsilon)<h_s^{\Ba}(\varepsilon)$. Indeed, if for $\varepsilon\in X_{s,f}$, $h_f(\varepsilon)=h_s^{\Ba}(\varepsilon)$ then $h_s^{\Ba}(\varepsilon)$ is a successor, contradicting $\varepsilon\in A_s$. Similarly (and even easier), this holds if $\varepsilon\in A_s\setminus X_{s,f}$. It follows that for some $\beta<\beta_s$, $h_f/\mathcal{D}_s\leq_{\mathcal{D}_s} (h_{s,\beta}\restriction A_s)/\mathcal{D}_s$ and it is easy to check that this $\beta$ satisfies (4). \end{claimproof} For any $f\in \Omega^\Ba_s$, $n<\omega$ and $\beta<\beta_s$ let $B_n(f,h_{s,\beta})=\{\varepsilon<\kappa: |\{i\in J_\varepsilon: f(i)\geq h_{s,\beta}(\varepsilon)\}|\leq n\}$. By Claim \ref{C: sequence in ultraproduct}(4), we may set $\beta_{s,n}(f)=\min \{\beta: B_n(f,h_{s,\beta})\in \mathcal{D}\}$. Note that $\beta_{s,n}(f)\geq \beta_{s,n+1}(f)$. Let $\beta_s(f)=\min \{\beta_{s,n}(f):n<\omega\}$ and let $n_s(f)=\min \{n<\omega: (\forall k\geq n)(\beta_{s,k}(f)=\beta_{s,n}(f))\}$. \begin{claim}\label{C:=n_s(f)} For any $f\in \Omega^\Ba_s$, $\{\varepsilon<\kappa: |\{i\in J_\varepsilon: f(i)\geq h_{s,\beta_s(f)}(\varepsilon)\}|=n_s(f)\}\in \mathcal{D}$. \end{claim} \begin{claimproof} Call this set $Y_{s,f}$. Note that $Y_{s,f}\subseteq B_{n_s(f)}(f,h_{s,\beta_s(f)})$. If $n_s(f)=0$ then $Y_{s,f}=B_0(f,h_{s,\beta_{s,0}(f)})\in \mathcal{D}$. Assume $n_s(f)>0$. If $Y_{s,f}\notin \mathcal{D}$ then $\{\varepsilon<\kappa: |\{i\in J_\varepsilon: f(i)\geq h_{s,\beta_s(f)}(\varepsilon)\}|\leq n_s(f)-1\}\in \mathcal{D}$. So $\beta_{s,n_s(f)-1}(f)\leq \beta_s(f)=\beta_{s,n_s(f)}(f)$, contradiction. \end{claimproof} For $s\in S_1$, $\beta<\beta_s$ and $n<\omega$, let $\Omega_{(s,\beta,n)}=\{f\in \Omega^\Ba_s: \beta_s(f)=\beta,\, n_s(f)=n\}$. Let $S_\Bb=\{(s,\beta,n):s\in S_1,\, \beta< \beta_s,\, n<\omega,\, \Omega_{(s,\beta,n)}\neq\emptyset \}\cup S_0$ and let $g:S_\Bb\to S_\Ba$ be the function defined by $g(s,\beta,n)=s$ for $s\in S_1$ and $g(s)=s$ otherwise. For any $s\in S_0$ let $\Omega^\Bb_s=\Omega^\Ba_s$, $h^\Bb_s=h^\Ba_s$ and $\rho^\Bb_s={\rho^\Ba_s}^\frown\langle 0\rangle $. For $s\in S_1$, $\beta<\beta_s$ and $n<\omega$, we set $\Omega^\Bb_{(s,\beta,n)}=\Omega_{(s,\beta,n)}$, $\rho^\Bb_{(s,\beta,n)}={\rho^\Ba_s}^\frown \langle n\rangle$ and \[ h^\Bb_{(s,\beta,n)}= \begin{cases} h_{s,\beta} & n=0 \\ h^\Ba_s & n>0 \end{cases} .\] \begin{claim} $\Bb$ is an approximation and $\Ba\trianglelefteq_g \Bb$. \end{claim} \begin{claimproof} We check that $\Bb$ satisfies $(1)$ and $(2)$ from the definition. $(1)$ follows by the choice of $h^\Bb_{(s,\beta,n)}$.
We are left with $(2)$. Let $t\in S_0$ and $(s,\beta,n)$ with $s\in S_1$. If $\rho^\Bb_t=\rho^\Bb_{(s,\beta,n)}$ then $\rho^\Ba_t=\rho^\Ba_s$, so the result follows since $\Ba$ is an approximation. Let $(s_1,\beta_1,n_1)\neq (s_2,\beta_2,n_2)\in S_\Bb$. If $\rho^\Bb_{(s_1,\beta_1,n_1)}=\rho^\Bb_{(s_2,\beta_2,n_2)}$ then $\rho^\Ba_{s_1}=\rho^\Ba_{s_2}$. If $s_1\neq s_2$ then the result follows since $\Ba$ is an approximation. So assume that $s=s_1=s_2$ and $n=n_1=n_2$. Assume that $\beta_1<\beta_2<\beta_s$ and let $f_1\in \Omega^\Bb_{(s,\beta_1,n)}$ and $f_2\in \Omega^\Bb_{(s,\beta_2,n)}$. We need to show that $f_1\not\R f_2$ and $f_2\not\R f_1$. By choice of $\beta_1=\beta_s(f_1)$ and $n=n_s(f_1)$, $B_n(f_1,h_{s,\beta_1})\in \mathcal{D}$. On the other hand, since $\beta_1<\beta_2=\beta_s(f_2)\leq \beta_{s,n+2}(f_2)$, $B_{n+2}(f_2,h_{s,\beta_1})\notin \mathcal{D}$. I.e., $\kappa\setminus B_{n+2}(f_2,h_{s,\beta_1})\in \mathcal{D}$. Let $\varepsilon\in B_n(f_1,h_{s,\beta_1})\cap (\kappa\setminus B_{n+2}(f_2,h_{s,\beta_1}))$ and let $i_l$ be the $(n+l)$-th element of $J_\varepsilon$ from the end, for $l=1,2,3$. As a result, \[f_1(i_3)<f_1(i_1)<h_{s,\beta_1}(\varepsilon)\leq f_2(i_3).\] Consequently, $f_1\not\R f_2$ by Assumption \ref{A:R}(3) (since $\suc{\suc{i_3}}=i_1$) and $f_2\not\R f_1$ by Assumption \ref{A:R}(1). $\Ba\trianglelefteq_g \Bb$ by construction. \end{claimproof} To complete the proof, we note that for $(s,\beta,n)\in S_\Bb$ with $n>0$, $\Omega^\Bb_{(s,\beta,n)}$ is trivial. Let $f_1,f_2\in \Omega^\Bb_{(s,\beta,n)}$. By Claim \ref{C:=n_s(f)}, and the assumptions on $\mathcal{D}$, we may find $\varepsilon<\kappa$ such that for $l=1,2$ the following hold \begin{enumerate} \item $|\{i\in J_\varepsilon: f_l(i)\geq h_{s,\beta}(\varepsilon)\}|=n$ and \item $|J_\varepsilon|>n$. \end{enumerate} Since $n>0$, letting $i$ be the $(n+1)$-th element from the end of $J_\varepsilon$ we have that $f_l(i)<h_{s,\beta}(\varepsilon)\leq f_l(\suc{i})$, for $l=1,2$. If $f_1\R f_2$ then $f_1(\suc{i})\leq f_2(i)<h_{s,\beta}(\varepsilon)$ by Assumption \ref{A:R}(2), contradiction. \end{proof} \begin{proposition}\label{P:approximation,proper-successor} Let $\Ba$ be an approximation. Then there exists an approximation $\Bc$ and a surjective function $r:S_\Bc\to S_\Ba$ such that $\Ba \triangleleft_r \Bc$. Moreover, for $t\in S_\Bc$, if $\rho^\Ba_{r(t)}\in \chi^{\xi}$, with $\xi<\sigma$, then $\rho^\Bc_t\in \chi^{\xi+2}$. \end{proposition} \begin{proof} Let $\Ba\trianglelefteq_g \Bb$ be the approximation supplied by Proposition \ref{P:approx-small-cof} and let $\Bb\trianglelefteq_f \Bc$ be the approximation supplied by Proposition \ref{P:approx-large-cof}. Note that $\Ba\trianglelefteq_{g\circ f}\Bc$ by Lemma \ref{L:composition-approx}; we claim that $\Ba \triangleleft_{g\circ f} \Bc$. Let $t\in S_\Bc$. Since $\Ba$ is an approximation and $\sup_{\varepsilon <\kappa} |J_\varepsilon|=\omega$, we cannot have that $\{\varepsilon<\kappa: \cf(h_{gf(t)}^\Ba(\varepsilon))=0\}\in \mathcal{D}$; see Definition \ref{D:def of approximation}(1). Assume that $\{\varepsilon<\kappa: 0<\cf(h_{gf(t)}^\Ba(\varepsilon))\leq \chi\}\in \mathcal{D}$. If $\Omega^\Ba_{gf(t)}$ is trivial then so is $\Omega^\Bc_t$, so assume not. By Proposition \ref{P:approx-small-cof}(1) applied to $f(t)\in S_\Bb$, either $\Omega_{f(t)}^\Bb$ is trivial (and thus so is $\Omega^\Bc_t$) or $\{\varepsilon<\kappa: h_{f(t)}^\Bb(\varepsilon)<h^\Ba_{gf(t)}(\varepsilon)\}\in \mathcal{D}$.
If it is the latter then, since $\{\varepsilon<\kappa: h^\Bc_t(\varepsilon)\leq h^\Bb_{f(t)}(\varepsilon)\}\in \mathcal{D}$, we conclude that $\{\varepsilon<\kappa:h^\Bc_t(\varepsilon)<h^\Ba_{gf(t)}(\varepsilon)\}\in \mathcal{D}$. Assume that $\{\varepsilon<\kappa:\cf(h^\Ba_{gf(t)}(\varepsilon))>\chi\}\in \mathcal{D}$. In particular, since by Proposition \ref{P:approx-small-cof}(2), $h^\Ba_{gf(t)}=h^\Bb_{f(t)}$, $\{\varepsilon<\kappa:\cf(h^\Bb_{f(t)}(\varepsilon))>\chi\}\in \mathcal{D}$. Assuming that $\Omega^\Bc_{t}$ is not trivial, then by Proposition \ref{P:approx-large-cof}(1), $\{\varepsilon<\kappa: h^\Bc_t(\varepsilon)<h^\Bb_{f(t)}(\varepsilon)\}\in \mathcal{D}$. Since $\{\varepsilon<\kappa: h^\Bb_{f(t)}(\varepsilon)\leq h^\Ba_{gf(t)}(\varepsilon)\}\in \mathcal{D}$ we conclude that $\{\varepsilon<\kappa:h^\Bc_t(\varepsilon)<h^\Ba_{gf(t)}(\varepsilon)\}\in \mathcal{D}$, as needed. The moreover part follows immediately from the construction. \end{proof} \begin{lemma}\label{L:approx-limit} Let $\delta<\sigma$ be a limit ordinal and $\langle \Ba_\alpha:\alpha<\delta\rangle$ a sequence of approximations. Assume that for $\alpha<\beta<\delta$, $\Ba_\alpha\trianglelefteq_{g_{\alpha,\beta}}\Ba_\beta$ and that for $\alpha<\beta<\gamma<\delta$, $g_{\alpha,\beta}\circ g_{\beta,\gamma}=g_{\alpha,\gamma}$. Then the inverse limit exists, i.e. there are $(\Ba_\delta, \langle g_{\alpha,\delta}: \alpha<\delta\rangle)$ such that $\Ba_\alpha\trianglelefteq_{g_{\alpha,\delta}}\Ba_\delta$ and for $\alpha<\beta<\delta$, $g_{\alpha,\beta}\circ g_{\beta,\delta}=g_{\alpha,\delta}$. In particular, if $\Ba_\alpha \triangleleft_{g_{\alpha,\beta}}\Ba_\beta$ for some $\alpha<\beta<\delta$ then $\Ba_\alpha \triangleleft_{g_{\alpha,\delta}}\Ba_\delta$. Furthermore, for any $t\in S_{\Ba_\delta}$, if $\xi=\sup\{\zeta: \rho^{\Ba_\alpha}_{g_{\alpha,\delta}(t)}\in \chi^\zeta,\, \alpha<\delta\}$ then $\rho^{\Ba_\delta}_t\in \chi^{\xi+1}$. \end{lemma} \begin{proof} For every $f\in \Omega$ let $t_f\in \prod_{\alpha<\delta}S_{\Ba_\alpha}$ be the function defined by $t_f(\alpha)=s$ if and only if $f\in \Omega^{\Ba_\alpha}_s$. Note that for $\alpha<\beta<\delta$, since for any $s\in S_{\Ba_\alpha}$, $\{\Omega^{\Ba_\beta}_t:t\in g_{\alpha,\beta}^{-1}(s)\}$ is a partition of $\Omega^{\Ba_\alpha}_s$, necessarily $g_{\alpha,\beta}(t_f(\beta))=t_f(\alpha)$. Let $S_*=\{t_f: f\in \Omega\}$ and for any $t\in S_*$, let $\Omega_t=\{f\in \Omega: t_f=t\}$. Clearly, $\{\Omega_t: t\in S_*\}$ is a partition of $\Omega$. Furthermore, note that if $t_1,t_2\in S_*$ and $\alpha<\delta$ is such that $t_1(\alpha)=t_2(\alpha)$ then $t_1(\alpha^\prime)=t_2(\alpha^\prime)$ for any $\alpha^\prime \leq\alpha$. Let $S_0=\{t\in S_*: (\exists \alpha <\delta) (\Omega^{\Ba_\alpha}_{t(\alpha)} \text{ is trivial})\}$ and $S_1=S_*\setminus S_0$. For any $t\in S_1$ and $\varepsilon<\kappa$, let $A_{t,\varepsilon}=\{h^{\Ba_\alpha}_{t(\alpha)}(\varepsilon):\alpha<\delta\}$. Obviously, $1\leq |A_{t,\varepsilon}|\leq |\delta|<\sigma\leq\chi$. For every $(t,h/\mathcal{D})\in S_1\times\prod_{\varepsilon<\kappa} A_{t,\varepsilon}/\mathcal{D} $ let $\Omega_{(t,h/\mathcal{D})}=\{f\in \Omega_t: \forall^\mathcal{D} \varepsilon<\kappa,\, h(\varepsilon)=\min \{x\in A_{t,\varepsilon}:\Rg(f\restriction J_\varepsilon)\subseteq x\}\}$. Let $S_{\Ba_{\delta}}=\{(t,h/\mathcal{D})\in S_1\times\prod_{\varepsilon<\kappa} A_{t,\varepsilon}/\mathcal{D}: \Omega_{(t,h/\mathcal{D})}\neq \emptyset\}\cup S_0$.
For $t\in S_0$, set $\Omega^{\Ba_\delta}_t=\Omega^{\Ba_\alpha}_{t(\alpha)}$ and $h^{\Ba_\delta}_t=h^{\Ba_\alpha}_{t(\alpha)}$, where $\alpha<\delta$ is minimal such that $\Omega^{\Ba_\alpha}_{t(\alpha)}$ is trivial. For $(t,h/\mathcal{D})\in S_{\Ba_\delta}\setminus S_0$, set $\Omega^{\Ba_\delta}_{(t,h/\mathcal{D})}=\Omega_{(t,h/\mathcal{D})}$. Note that if $t\in S_1$ and $f\in \Omega_t$ then $\{\varepsilon<\kappa: \exists x\in A_{t,\varepsilon}( \Rg(f\restriction J_\varepsilon)\subseteq x)\}\in \mathcal{D}$ because this set contains $\{\varepsilon<\kappa: \Rg(f\restriction J_\varepsilon)\subseteq h^{\Ba_0}_{t(0)}(\varepsilon)\}$, which is in $\mathcal{D}$ since $\Ba_0$ is an approximation and $f\in \Omega^{\Ba_0}_{t(0)}$. Thus for any $t\in S_1$ and for every $f\in \Omega_t$ there is a unique $h/\mathcal{D}\in \prod_{\varepsilon<\kappa}A_{t,\varepsilon}/\mathcal{D}$ such that $f\in \Omega_{(t,h/\mathcal{D})}^{\Ba_\delta}$. We choose $h^{\Ba_\delta}_{(t,h/\mathcal{D})}$ to be any representative of the class $h/\mathcal{D}$. For any $t\in S_0$ and $\alpha<\delta$, $g_{\alpha,\delta}(t)=t(\alpha)$, and for every $(t,h/\mathcal{D})\in S_{\Ba_\delta}\setminus S_0$ and $\alpha<\delta$, $g_{\alpha,\delta}((t,h/\mathcal{D}))=t(\alpha)$. Note that it already follows that for $\alpha<\beta<\delta$, $g_{\alpha,\beta}\circ g_{\beta,\delta}=g_{\alpha,\delta}$. For any $t\in S_0$ let \[\rho^{\Ba_\delta}_t=\bigcup\{\rho_{t(\alpha)}^{\Ba_\alpha}:\alpha<\delta\}^\frown \langle 0\rangle.\] Now let $(t,h/\mathcal{D})\in S_{\Ba_\delta}\setminus S_0$. Since $\chi^{\kappa}=\chi$, there exists some $\gamma_t\leq \chi$ and an enumeration of $\prod_{\varepsilon<\kappa} A_{t,\varepsilon}/\mathcal{D}$ as $\langle h_{t,\gamma}/\mathcal{D}: \gamma<\gamma_t\rangle$. Now for $(t,h/\mathcal{D})\in S_{\Ba_\delta}\setminus S_0$ set \[{\rho^{\Ba_\delta}_{(t,h/\mathcal{D})}=\bigcup\{\rho_{t(\alpha)}^{\Ba_\alpha}:\alpha<\delta\}}^\frown \langle\gamma\rangle,\] where $h/\mathcal{D}=h_{t,\gamma}/\mathcal{D}$. Assume that $t=t_f$ for some $f\in \Omega$ and let $\alpha<\beta<\delta$. Since $g_{\alpha,\beta}(t(\beta))=t(\alpha)$, $\rho^{\Ba_\alpha}_{t(\alpha)}$ is an initial segment of $\rho^{\Ba_\beta}_{t(\beta)}$. This implies that $\rho^{\Ba_\delta}_{(t,h/\mathcal{D})}\in \chi^{\xi+1}$, where $\xi=\sup\{\zeta: \rho^{\Ba_\alpha}_{g_{\alpha,\delta}(t)}\in \chi^\zeta,\, \alpha<\delta\}$. We check that $\Ba_\delta$ is an approximation. Item (1) of Definition \ref{D:def of approximation} follows from the definition of $\Omega^{\Ba_\delta}_{t}$ and the choice of $h^{\Ba_\delta}_t$, for $t\in S_{\Ba_\delta}$. We show item (2). Let $(t_1,h_1/\mathcal{D})\neq (t_2,h_2/\mathcal{D})\in S_{\Ba_\delta}\setminus S_0$, assume that $\rho^{\Ba_\delta}_{(t_1,h_1/\mathcal{D})}=\rho^{\Ba_\delta}_{(t_2,h_2/\mathcal{D})}$ and let $f_1\in \Omega^{\Ba_\delta}_{(t_1,h_1/\mathcal{D})},\, f_2\in \Omega^{\Ba_\delta}_{(t_2,h_2/\mathcal{D})}$. If $t_1\neq t_2$ then there exists some $\alpha<\delta$ such that $t_1(\alpha)\neq t_2(\alpha)$. But $\rho^{\Ba_\alpha}_{t_1(\alpha)}=\rho^{\Ba_\alpha}_{t_2(\alpha)}$ and hence $f_1\not\R f_2$. Assume that $t_1=t_2$. Since $\rho^{\Ba_\delta}_{(t_1,h_1/\mathcal{D})}=\rho^{\Ba_\delta}_{(t_2,h_2/\mathcal{D})}$, $h_1/\mathcal{D}=h_2/\mathcal{D}$, which gives a contradiction. Let $(t,h/\mathcal{D})\in S_{\Ba_\delta}\setminus S_0$ and $s\in S_0$. If $s\neq t$ then the same argument as above applies. On the other hand, it cannot be that $s=t$ by the definition of $S_0$. If $t_1\neq t_2\in S_0$ then the same argument as above applies.
Finally, we show that $\Ba_\alpha\trianglelefteq_{g_{\alpha,\delta}}\Ba_\delta$, for $\alpha<\delta$. Items (1), (2) and (4) are straightforward. We show item (3). Let $t\in S_0$ and let $\alpha^\prime<\delta$ be minimal such that $\Omega^{\Ba_{\alpha^\prime}}_{t(\alpha^\prime)}$ is trivial. If $\alpha^\prime\leq \alpha$ then $h_t^{\Ba_\delta}=h^{\Ba_{\alpha^\prime}}_{t(\alpha^\prime)}=h^{\Ba_{\alpha}}_{t(\alpha)}$. If $\alpha<\alpha^\prime$ then $\{\varepsilon<\kappa: h^{\Ba_\delta}_t(\varepsilon)\leq h^{\Ba_\alpha}_{t(\alpha)}(\varepsilon)\}\in \mathcal{D}$ because $h_t^{\Ba_\delta}=h^{\Ba_{\alpha^\prime}}_{t(\alpha^\prime)}$. Now let $(t,h/\mathcal{D})\in S_{\Ba_\delta}\setminus S_0$. Since $\Omega^{\Ba_\delta}_{(t,h/\mathcal{D})}$ is non-empty, we may choose some function $f\in \Omega^{\Ba_\delta}_{(t,h/\mathcal{D})}$. On the one hand, since $f\in \Omega^{\Ba_\alpha}_{t(\alpha)}$ and since $\Ba_\alpha$ is an approximation, $\{\varepsilon<\kappa: \Rg(f\restriction J_\varepsilon)\subseteq h^{\Ba_\alpha}_{t(\alpha)}(\varepsilon)\}\in \mathcal{D}$. On the other hand, since $f\in \Omega^{\Ba_\delta}_{(t,h/\mathcal{D})}$, $\{\varepsilon<\kappa: h(\varepsilon)=\min \{x\in A_{t,\varepsilon}:\Rg(f\restriction J_\varepsilon)\subseteq x\}\}\in \mathcal{D}$. Combining these observations with the fact that $h^{\Ba_\alpha}_{t(\alpha)}(\varepsilon)\in A_{t,\varepsilon}$, it follows that $\{\varepsilon<\kappa: h(\varepsilon)\leq h^{\Ba_\alpha}_{t(\alpha)}(\varepsilon)\}\in \mathcal{D}$ since it contains the intersection of both sets. \end{proof} \begin{conclusion}\label{Con:coloring-pcf} There exists a function $c:\Omega\to \chi$ satisfying that if $f_1,f_2\in \Omega$ and $f_1 \R f_2$ then $c(f_1)\neq c(f_2)$. \end{conclusion} \begin{proof} We define $(\Ba_\xi, \langle g_{\zeta,\xi} : \zeta<\xi\rangle)$ satisfying that $\Ba_\zeta\triangleleft_{g_{\zeta,\xi}}\Ba_\xi$ (for $\zeta<\xi$), by induction on $\xi<\sigma=(2^\kappa)^+$. \begin{list}{•}{} \item If $\xi=0$ then let $S_{\Ba_0}=\{0\}$, $\Omega_{0}^{\Ba_0}=\Omega$, $\rho^{\Ba_0}_0=\emptyset$ and let $h_0^{\Ba_0}\in \mathcal{H}$ be the constant function with value $\theta$. \item If $\xi=\alpha +1$ for some $\alpha<\xi$ then let $\Ba_{\alpha}\triangleleft_{g_{\alpha,\xi}}\Ba_\xi$ be the approximation supplied by Proposition \ref{P:approximation,proper-successor}. For any $\zeta\leq \alpha$ we define $g_{\zeta,\xi}=g_{\zeta,\alpha}\circ g_{\alpha,\xi}$. \item If $\xi$ is a limit ordinal we apply Lemma \ref{L:approx-limit}. \end{list} It follows by induction, using Proposition \ref{P:approximation,proper-successor} and Lemma \ref{L:approx-limit}, that for $\alpha<\beta<\sigma$ and $t\in S_{\Ba_\alpha}$ and $s\in S_{\Ba_\beta}$, if $\rho^{\Ba_\alpha}_t\in \chi^{\lambda_1}$ and $\rho^{\Ba_\beta}_s\in \chi^{\lambda_2}$ then $\lambda_1<\lambda_2$ and hence $\rho^{\Ba_\alpha}_t\neq \rho^{\Ba_\beta}_s$. \begin{claim} \phantomsection \begin{enumerate} \item For any $\rho\in \chi^{<\sigma}$, \[\Omega_\rho:=\bigcup\{\Omega^{\Ba_\xi}_s: \xi<\sigma,\, s\in S_{\Ba_\xi},\, \Omega^{\Ba_\xi}_s \text{ is trivial and } \rho^{\Ba_\xi}_s=\rho\}\] is trivial. \item $\Omega=\bigcup_{\rho\in \chi^{<\sigma}}\Omega_\rho$. \end{enumerate} \end{claim} \begin{claimproof} $(1)$ Let $\rho\in \chi^{<\sigma}$ and $f_1,f_2\in \Omega_\rho$. By definition, there exist $\xi_1,\xi_2<\sigma$, $s_1\in S_{\Ba_{\xi_1}}$ and $s_2\in S_{\Ba_{\xi_2}}$ such that $f_1\in \Omega^{\Ba_{\xi_1}}_{s_1}$ and $f_2\in \Omega^{\Ba_{\xi_2}}_{s_2}$, where $\Omega^{\Ba_{\xi_1}}_{s_1}$ and $\Omega^{\Ba_{\xi_2}}_{s_2}$ are trivial and $\rho^{\Ba_{\xi_1}}_{s_1}=\rho^{\Ba_{\xi_2}}_{s_2}=\rho$.
As noted above, since $\rho^{\Ba_\xi}_{s_1}=\rho^{\Ba_{\xi}}_{s_2}$, necessarily, $\xi=\xi_1=\xi_2$. If $s_1=s_2$ then $f_1\not\R f_2$ since $\Omega^{\Ba_{\xi}}_{s_1}=\Omega^{\Ba_{\xi}}_{s_2}$ is trivial. If $s_1\neq s_2$ then by the definition of an approximation, since $\rho^{\Ba_\xi}_{s_1}=\rho^{\Ba_{\xi}}_{s_2}$, $f_1\not\R f_2$. $(2)$ Assume there exists some $f\in \Omega\setminus \bigcup_{\rho\in \chi^{<\sigma}}\Omega_\rho$. We construct a a sequence of function $\langle h_\xi\in \mathcal{H}: \xi<\sigma \rangle$ satisfying that for any $\alpha<\beta<\sigma$, $h_\beta<_\mathcal{D} h_\alpha$. For any $\xi<\sigma$ let $h_\xi=h^{\Ba_\xi}_t$ for the unique $t\in S_{\Ba_\xi}$ such that $f\in \Omega^{\Ba_\xi}_t$. By assumption, $\Omega^{\Ba_\xi}_t$ is non-trivial for any such $\xi$ (otherwise $f\in \Omega_\rho$ for $\rho=\rho_t^{\Ba_\xi}$). For any $\alpha<\beta<\sigma$, since $\Ba_\alpha\triangleleft_{g_{\alpha,\beta}} \Ba_\beta$, $h_\beta<_\mathcal{D} h_\alpha$. We color pairs of function $\{(h_\alpha,h_\beta): \alpha<\beta<\sigma\}$ by $\kappa$ colors, by setting that $(h_\alpha,h_\beta)$ has color $\varepsilon_{\alpha,\beta}<\kappa$ if $\varepsilon_{\alpha,\beta}$ is the minimal $\varepsilon$ for which $h_\beta(\varepsilon)<h_\alpha(\varepsilon)$. We know that such an $\varepsilon$ exists, since $h_\beta<_\mathcal{D} h_\alpha$. By Erd\"os-Rado there exists a subset $A\subseteq \sigma$ of cardinality $\kappa^+$ and $\varepsilon<\kappa$ such that for every $\alpha<\beta\in A$, $h_\beta(\varepsilon)<h_\alpha(\varepsilon)$. This contradicts the fact that the ordinals are well-ordered. \end{claimproof} Recalling that $\chi^{<\sigma}=\chi$ (as cardinals), we may now define $c:\Omega\to \chi$ by choosing for every $f\in \Omega$ some $\rho\in \chi^{<\sigma}$ such that $f\in \Omega_\rho$ and setting $c(f)=\rho$. \end{proof} \section{Conclusion: stable graphs}\label{s:conclusion} We combine the results of the previous sections to conclude. \begin{theorem}\label{T:Taylor for infinitary EM-models} Let $\mathcal{L}$ be a first order language containing a binary relation $E$. Let $T$ be an $\mathcal{L}$-theory specifying that $E$ is a symmetric and irreflexive relation. Let $G=(V;E,\dots)\models T$ be an infinitary EM-model based on $(\alpha,\theta)$, where $\alpha\in \kappa^{U}$ for some set $U$, $\kappa\geq \aleph_0$ a cardinal and $\theta$ an ordinal with $\kappa<\theta$. Let $\kap>2^{2^{<(\kappa+\aleph_1)}}+|T|\cdot |U|$ be a regular cardinal. If $\chi(G)\geq \kap$ then $G$ contains all finite subgraphs of $\Sh_n(\omega)$ for some $n\in \mathbb{N}$. \end{theorem} \begin{proof} By Lemma \ref{L:underlying set of Inf-EM is Inf-EM} there exists some $(\widehat \alpha,\theta)$-indiscernible sequence $b=\langle b_{i,\eta}:i\in \widehat{U},\, \eta\in (\theta^{\U{\widehat{\alpha}_i}})_<\rangle$ whose underlying set is $V$, where $\widehat \alpha\in \kappa^{\widehat U}$ and $\widehat U$ is a set such that $|\widehat{U}|\leq |T|\cdot |U|\cdot \kappa^{<\kappa}.$ Let $B=\{(i,\eta):i\in \widehat U,\, \eta\in (\theta^{\U{\widehat{\alpha}_i}})_<\}$ and $R=\{((i_1,\eta_1),(i_2,\eta_2)): b_{i_1,\eta_1} \E b_{i_2,\eta_2}\}$. Since $(i,\eta)\mapsto b_{i,\eta}$ is surjective and $((i_1,\eta_1),(i_2,\eta_2))\in R \iff (b_{i_1,\eta_1} b_{i_2,\eta_2})\in E$, $\chi(B,R)=\chi(G)\geq \kap$ (by Fact \ref{F:basic prop}(4)). Moreover, by Fact \ref{F:homomorphism is enough} it is enough to prove the conclusion for the graph $(B,R)$. For any $i\in \widehat{U}$ let $B_i=\{(i,\eta):\eta\in (\theta^{\U{\alpha_i}})_<\}$. 
By Fact \ref{F:basic prop}(1), since $B=\bigcup_{i\in \widehat{U}} B_i$, $\kap\leq \chi(B,R)\leq \sum_{i\in \widehat{U}}\chi(B_i,R\restriction B_i)$. By definition\footnote{As $2^{<\kappa}=\sup\{2^\mu:\mu<\kappa\}$, if $2^{<\kappa}<\kappa$ then $2^{2^{<\kappa}}\leq 2^{<\kappa}$.} $\kappa\leq 2^{<\kappa}$ which implies $\kappa^{<\kappa}\leq \kappa^{\kappa}=2^{\kappa}\leq 2^{2^{<\kappa}}$ and thus $\kap>|\widehat U|$. Since $\kap$ is a regular cardinal there exists $i\in \widehat U$ with $\chi(B_i,R\restriction B_i)\geq \kap$. As a result, it is enough to prove the conclusion for the graph $((\theta^{\U{\widehat{\alpha}_i}})_<,S)$, where $S=\{(\eta_1,\eta_2):(i,\eta_1)\R (i,\eta_2)\}$. For $P=\{\otp(\bar a,\bar b):(\bar a,\bar b)\in S\}$, by $(\widehat{\alpha},\theta)$-indiscernibility $S=\bigcup_{p\in P} \{(\bar c,\bar d)\in ((\theta^{\U{\widehat{\alpha}_i}})_<)^2 :\otp(\bar c,\bar d)=p\vee \otp(\bar d,\bar c)=p\}$. By Fact \ref{F:basic prop}(2), \[\kap\leq \chi((\theta^{\U{\widehat{\alpha}_i}})_<,S))\leq \prod_{p\in P}\chi ((\theta^{\U{\widehat{\alpha}_i}})_<,P_p),\] where $P_p=\{(\bar c,\bar d)\in ((\theta^{\U{\widehat{\alpha}_i}})_<)^2 :\otp(\bar c,\bar d)=p\vee \otp(\bar d,\bar c)=p\}$. Assume towards a contradiction that $\chi ((\theta^{\U{\widehat{\alpha}_i}})_<,P_p)\leq \beth_2(\aleph_0)$ for all $p\in P$. Hence $\kap\leq \beth_2(\aleph_0)^{2^{|\alpha_i|+\aleph_0}}\leq \beth_2(|\alpha_i|+\aleph_0)$. Since $|\alpha_i|+\aleph_0<\kappa+\aleph_1$ and $\kap>2^{2^{<(\kappa+\aleph_1)}}$, we derive a contradiction. Consequently, there exists $p\in P$ with $\chi ((\theta^{\U{\widehat{\alpha}_i}})_<,P_p)>\beth_2(\aleph_0)$ and we may conclude by Theorem \ref{T:shift graphs in order-type-graphs}. \end{proof} \begin{corollary}\label{C:main corollary} Let $G=(V,E)$ be a stable graph. If $\chi(G)>\beth_2(\aleph_0)$ then $G$ contains all finite subgraphs of $\Sh_n(\omega)$ for some $n\in \mathbb{N}$. \end{corollary} \begin{proof} Let $G=(V,E)$ be stable graph, $T=Th(G)$ and $T^{sk}$ be a complete expansion of $T$ with definable Skolem functions in the language $E\in \mathcal{L}^{sk}$. We apply Theorem \ref{T:existence of gen em model in stable} with $\kappa=\aleph_1$, $\mu=2^{\aleph_1}$ and $\lambda=2^{\max\{\mu,|G|\}}$. We get an infinitary EM-model $\mathcal{G}^{sk}\models T^{sk}$ based on $(\alpha,\lambda)$, where $\alpha \in \kappa^U$ for some set $U$ of cardinality at most $\mu$, such that $\mathcal{G}=\mathcal{G}^{sk}\restriction \{E\}$ is saturated of cardinality $\lambda$. Since $\mathcal{G}$ is saturated of cardinality $>|G|$, we may embed $G$ as an elementary substructure of $\mathcal{G}$. Since $\chi(\mathcal{G})\geq \chi(G)>\beth_2(\aleph_0)$ and the conclusion is an elementary property, it is enough to show it for $\mathcal{G}$. Since $2^{2^{<(\kappa+\aleph_1)}}+|T|+|U|\leq 2^{2^{\aleph_0}}+\aleph_0+\mu\leq \beth_2(\aleph_0)+2^{\aleph_1}= \beth_2(\aleph_0)$, Theorem \ref{T:Taylor for infinitary EM-models} applies with $\theta=\lambda$ and $\kap=(\beth_2(\aleph_0))^+$. \end{proof} \bibliographystyle{alpha} \bibliography{1211} \end{document}
TITLE: Probability of finding a ball in a box QUESTION [2 upvotes]: Here is one I'm stumped on. A ball can be in any one of $n$ boxes. It is in the $i^{th}$ box with probability $p_i$. If the ball is in the $i^{th}$ box, a search of that box will uncover it with probability $\alpha_i$. Given that a search of box $i$ did not uncover the ball, what is the conditional probability that the ball is actually in box $j$, where $i,j=1,\ldots,n$? I tried fooling around with the multinomial distribution but kept getting tripped up. Any thoughts?
REPLY [0 votes]: Writing $a_i$ for $\alpha_i$:
Probability the ball is in box $i$ is $p_i$.
Probability the ball is in box $j$ is $p_j$.
Probability the ball is in box $i$ and is found in box $i$ is $a_ip_i$.
Probability the ball is in box $i$ and is not found in box $i$ is $(1-a_i)p_i$.
Probability the ball is not found in box $i$ is $1 - a_ip_i$.
Probability the ball is in box $i$ given it is not found in box $i$ is $\dfrac{(1-a_i)p_i}{1 - a_ip_i}$.
Probability the ball is in box $j \not = i$ given it is not found in box $i$ is $\dfrac{p_j}{1 - a_ip_i}$.
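As a sanity check, these conditional probabilities sum to $1$:
$$\frac{(1-a_i)p_i}{1 - a_ip_i} + \sum_{j \neq i} \frac{p_j}{1 - a_ip_i} = \frac{(1-a_i)p_i + (1-p_i)}{1 - a_ip_i} = \frac{1 - a_ip_i}{1 - a_ip_i} = 1,$$
since $\sum_{j \neq i} p_j = 1-p_i$.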
Posts by Chris Childs
Ebola Virus Subject Guide (Posted on October 28, 2014 | No Comments): A new Ebola Virus subject guide has been created. All information on this guide is from reliable, evidence-based sources, which are free to any user on or off campus. […]
New App for Accessing the National Library of Medicine’s FREE Mobile Resources (Posted on August 20, 2012 | No Comments): The National Library of Medicine (NLM), the world’s largest medical library, recently released a mobile app that is intended to serve as the authoritative guide to all of their mobile […]
Mobile Version of Drug Information Portal (Posted on June 27, 2012 | No Comments): The National Library of Medicine (NLM) Drug Information Portal is now available for mobile devices. This mobile optimized web site covers over 32,000 drugs and provides descriptions, drug names, […]
Redesigned Web & Mobile Versions of Haz-Map (Posted on June 15, 2012 | No Comments): The National Library of Medicine (NLM) Division of Specialized Information Services (SIS) has released redesigned web and mobile versions of Haz-Map. The new design adapts to web browsers on […]
American FactFinder (Posted on April 25, 2012 | No Comments): American FactFinder provides access to the population, housing and economic data collected by the Census Bureau. You can find information from the 2000 and 2010 Census, American Community Surveys (ACS), […]
NLM Announces New Look for NIHSeniorHealth (Posted on April 12, 2012 | No Comments): In March, the National Library of Medicine and the National Institute on Aging (NIA) released a redesigned version of NIHSeniorHealth, the National Institutes of Health consumer health Web site for […]
2012 County Health Rankings Available (Posted on April 12, 2012 | No Comments): More than 3,000 counties and the District of Columbia can compare how healthy their residents are and how long they live with the 2012 County Health Rankings. The Rankings are […]
National Public Health Week (Posted on April 1, 2012 | No Comments): State, local and federal health officials from across the country unite this week to celebrate National Public Health Week (April 2-8), an annual health observance aimed at educating the public, […]
National Library of Medicine Announces Expansion of PubMed Health (Posted on December 20, 2011 | No Comments): The National Library of Medicine (NLM), the world’s largest medical library and a component of the National Institutes of Health, announces the expansion of the information available from PubMed Health […]
[Intro] The truth is playing, uncover the human saiyan From under the stupid sand The fuck is it you've been saying? We puffin on stupid grass, you puffin on stupid sand The reggie you smoke is dust Like fuck it dude, you a lame Let your inhibitions go and just find something I promise you the bitch I'm with at worst is a 9 something She came up with her titties out and asked me to sign something And I was like YUP [Verse 1] Welcome to eighteen, where niggas is grimy At seventeen it seemed it was a cushion behind me But now it's a brick wall, it's crazy as hell to me That fucking bitches now-a-days can end up as jail for me I ask if it's mail for me, my moms give me sighs as answers On the grind at the time but the signs is there She ain't impressed with what I do in my free time I guess I better look into a school in the meantime My mind is like... [Hook] Whoa now Get back to school, before you get Throwed out What's next for you You guys broke, I know these dreams important I doubt them things gon pay the mortgage Whoa now Go find a job, before you get Throwed out Cause times is hard Fuck beats, fuck flows, and your fucking skills You're eighteen and that don't pay no fucking bills [Verse 2] And welcome to eighteen, where even your guys fake I see so much bullshit, it's making my eyes ache And even my moms say the time comin For my ass to find somethin To get me out the crib or to find pay My bitch a Beyonce My living is straight, Um These cliques is beyond gay Them niggas is lames I mean them niggas the same My circle small mayne, I'm lacking the drama And when I face it I just light some fucking pack for my problems But then Pops say hit the kitchen now Nigga what you bitchin bout? Pulled a pack up out his pocket "Nigga what these swishers bout?" He looked me in my face, and snatched the blunt out of my mouth And told me kick the shit today, or get the fuck out of my house You think my rules is a joke to you, huh I told you get a job last time I spoke to you son I'm not gon scream or even woop if you don't listen to me But don't be surprised when you come home and see yo shit in the street Nigga [Hook] I know that I should just move on with my life Suggest don't go get it my way Find my clothes in the driveway It took getting so old to see The world is so cold to me If I do like this, I won't turn out to be shit No [Outro] You fuckin lames is trippin me out nigga I'm fuckin bitches, bitch, oh that's what you bitchin about nigga V's up, high, one time I done came a long way from lunch lines I done made a strong case for [?] With a Billabong on like I'm Wiley I'm bouncing through the club with my niggas and we get it in And fuck the VIP you sitting in Track 2 on Kembe X’s second project Soundtrack II Armageddon The song is about his family wanting him to finish school or get a job even though he can be successful through music, mostly because he is now an adult
\begin{document} \maketitle \begin{abstract} We consider linear rank-metric codes in $\F_{q^m}^n$. We show that the properties of being MRD (maximum rank distance) and non-Gabidulin are generic over the algebraic closure of the underlying field, which implies that over a large extension field a randomly chosen generator matrix generates an MRD and a non-Gabidulin code with high probability. Moreover, we give bounds on the respective probabilities in dependence on the extension degree $m$. \end{abstract} \section{Introduction} Codes in the rank-metric have been studied for the last four decades. For linear rank-metric codes a Singleton-type bound can be derived. In analogy to MDS codes in the Hamming metric, we call rank-metric codes that achieve the Singleton-type bound MRD (maximum rank distance) codes. Since the works of Delsarte \cite{de78} and Gabidulin \cite{ga85a} we know that linear MRD codes exist for any set of parameters. The codes they describe are called Gabidulin codes. The question of whether there are other general constructions of MRD codes that are not equivalent to Gabidulin codes has been of great interest recently. Some constructions of non-Gabidulin MRD codes can be found e.g.\ in \cite{co15,cr15,sh15}, where many of the derived codes are not linear over the underlying field but only linear over some subfield of it. For some small parameter sets, constructions of linear non-Gabidulin MRD codes were presented in \cite{ho16}. On the other hand, in the same paper it was shown that all MRD codes in $\F_{2^4}^4$ are Gabidulin codes. In general, it remains an open question for which parameters non-Gabidulin MRD codes exist, and if so, how many such codes there are. In this paper we show that the properties of being MRD (maximum rank distance) and non-Gabidulin are generic. This implies that, for a large field extension degree, a randomly chosen generator matrix generates an MRD and a non-Gabidulin code with high probability. Moreover, we give bounds on the respective probabilities in dependence on the extension degree. The paper is structured as follows. In Section \ref{sec:preliminaries} we give some preliminary definitions and results, first for rank-metric codes and then for the notion of genericity. Section \ref{sec:topology} contains topological results, showing that the properties of being MRD and non-Gabidulin are generic. In Section \ref{sec:prob} we derive some bounds on the probabilities of these two code properties in dependence on the extension degree of the underlying finite field. We conclude in Section \ref{sec:conclusion}. \section{Preliminaries}\label{sec:preliminaries} \subsection{Finite Fields and Their Vector Spaces} The following definitions and results can be found in any textbook on finite fields, e.g.\ \cite{li94}. We denote the finite field of cardinality $q$ by $\F_q$. It exists if and only if $q$ is a prime power. Moreover, if it exists, $\F_q$ is unique up to isomorphism. An extension field of extension degree $m$ is denoted by $\F_{q^m}$. If $\alpha$ is a root of an irreducible monic polynomial in $\F_q[x]$ of degree $m$, then $$ \F_{q^m} \cong \F_q[\alpha].$$ We now recall some basic theory on the trace function over finite fields. \begin{definition} Let $\F_q$ be a finite field and $\F_{q^m}$ be an extension field. 
For $\alpha \in \F_{q^m}$, the \emph{trace} of $\alpha$ over $\F_q$ is defined by $$\mathrm{Tr}_{\F_{q^m}/\F_q}(\alpha) := \sum_{i=0}^{m-1}\alpha^{q^i}.$$ \end{definition} For every integer $0<s<m$ with $\gcd(m,s)=1$, we denote by $\varphi_s$ the map given by $$ \begin{array}{rcl} \varphi_s:\F_{q^m} &\longrightarrow & \F_{q^m} \\ \alpha & \longmapsto & \alpha^{q^s}-\alpha. \end{array} $$ The following result relates the trace with the maps $\varphi_s$. \begin{lemma}\label{lem:trace} The trace function satisfies the following properties: \begin{enumerate} \item $\mathrm{Tr}_{\F_{q^m}/\F_q}(\alpha) \in \F_q$ for all $\alpha \in \F_{q^m}$. \item $\mathrm{Tr}_{\F_{q^m}/\F_q}$ is a linear surjective transformation from $\F_{q^m}$ to $\F_q$, where $\F_{q^m}$ and $\F_q$ are considered as $\F_q$-vector spaces. \item For every $\alpha \in \F_{q^m}^*$, the map $\mathrm{T}_{\alpha}$ defined by $$ \beta \longmapsto \mathrm{Tr}_{\F_{q^m}/\F_q}(\alpha\beta)$$ is a linear surjective transformation from $\F_{q^m}$ to $\F_q$, where $\F_{q^m}$ and $\F_q$ are considered as $\F_q$-vector spaces. \item $\varphi_s$ is a linear transformation from $\F_{q^m}$ to itself, considered as $\F_q$-vector space. \item For every $s$ coprime to $m$, $\varphi_s(\alpha)=0$ if and only if $\alpha \in \F_q$. \item $\ker (\mathrm{Tr}_{\F_{q^m}/\F_q})=\mathrm{Im}(\varphi_s)$ for every $s$ coprime to $m$ and has cardinality $q^{m-1}$. \end{enumerate} \end{lemma} \begin{proof} The statements of 1., 2.\ and 3.\ can be found e.g.\ in \cite[Theorems 2.23 and 2.24]{li94}. \begin{enumerate} \item[4.] For $\alpha, \beta \in \F_{q^m}$, $\varphi_s(\alpha+\beta)=(\alpha+\beta)^{q^s}-(\alpha+\beta)= \alpha^{q^s}-\alpha +\beta^{q^s}-\beta=\varphi_s(\alpha)+\varphi_s(\beta)$. Moreover, for every $\alpha \in\F_{q^m}$ and $c\in \F_q$, $\varphi_s(c\alpha)=c^{q^s}\alpha^{q^s}-c\alpha=c\left(\alpha^{q^s}-\alpha\right)=c\varphi_s(\alpha)$, where we use that $c^{q^s}=c$ for every $c\in\F_q$. \item[5.] We have $\varphi_s(\alpha)=\alpha^{q^s}-\alpha=0$ if and only if $\alpha \in\F_{q^s}$. Since $\alpha \in\F_{q^m}$, this is true if and only if $\alpha \in \F_{q^m}\cap \F_{q^s}=\F_q$. \item[6.] First we show that $\mathrm{Im}(\varphi_s)\subseteq\ker (\mathrm{Tr}_{\F_{q^m}/\F_q})$. Consider an element $\alpha\in\mathrm{Im}(\varphi_s)$. Then there exists $\beta\in \F_{q^m}$ such that $\alpha=\beta^{q^s}-\beta$. Now $$\mathrm{Tr}_{\F_{q^m}/\F_q}(\alpha)=\mathrm{Tr}_{\F_{q^m}/\F_q}(\beta^{q^s}-\beta)= \sum_{i=0}^{m-1}(\beta^{q^s}-\beta)^{q^i}=\sum_{i=0}^{m-1}\beta^{q^{s+i}}-\sum_{i=0}^{m-1}\beta^{q^i}.$$ We observe now that if $i\equiv j \mod m$, then $\beta^{q^i}=\beta^{q^j}$. Hence the sum $\sum_{i=0}^{m-1}\beta^{q^{s+i}}$ is a rearrangement of $\sum_{i=0}^{m-1}\beta^{q^i}$ and $\mathrm{Tr}_{\F_{q^m}/\F_q}(\alpha)=0$. At this point observe that the trace function is a polynomial of degree $q^{m-1}$ and so it has at most $q^{m-1}$ roots. This means that $|\ker (\mathrm{Tr}_{\F_{q^m}/\F_q})|\leq q^{m-1}$. By parts $4$ and $5$ of this Lemma $$|\mathrm{Im}(\varphi_s)|=\frac{|\F_{q^m}|}{|\ker(\varphi_s)|}=q^{m-1} $$ and therefore $\mathrm{Im}(\varphi_s)$ and $\ker (\mathrm{Tr}_{\F_{q^m}/\F_q})$ must be equal. \end{enumerate} \end{proof} We denote by $\GL_n(q):=\{A\in \F_q^{n\times n} \mid \rk (A) =n\}$ the general linear group of degree $n$ over $\F_q$. Furthermore we need the Gaussian binomial $ \binom{n}{k}_q$, which is defined as the number of $k$-dimensional vector subspaces of $\F_q^n$. 
It is well-known that $$ \binom{n}{k}_q = \prod_{i=0}^{k-1} \frac{q^n-q^i}{q^k-q^i}=\frac{\prod_{i=0}^{k-1}(q^n-q^i)}{|\GL_k(q)|}.$$ Moreover, the following fact is well-known and easy to see. \begin{lemma}\label{lem:intersection} Let $k, n$ be two integers such that $0<k\leq n$, and let $\Uvs$ be a $k$-dimensional vector subspace of $\F_q^n$. Then, for every $r=0,\ldots,k$, the number of $k$-dimensional subspaces that intersect $\Uvs$ in a $(k-r)$-dimensional subspace is $$ \binom{k}{k-r}_q \binom{n-k}{r}_q q^{r^2} .$$ \end{lemma} \begin{proof} There are $ \binom{k}{k-r}_q$ many subspaces $\Uvs'$ of $\Uvs$ of dimension $(k-r)$ that can be the intersection space. Now, in order to complete $\Uvs'$ to a $k$-dimensional vector space, intersecting $\Uvs$ only in $\Uvs'$, we have $\prod_{i=0}^{r-1}(q^n-q^{k+i})$ choices for the remaining basis vectors. For counting how many of these bases span the same space we just need to count the number of $k\times k$ matrices of the form $$ \left[\begin{array}{cc} I_{k-r} & 0 \\ A & B \end{array}\right],$$ where $A\in \F_q^{r\times (k-r)}$ and $B\in \GL_r(q)$. Hence the final count is given by \begin{align*} \binom{k}{k-r}_q\frac{\prod_{i=0}^{r-1}(q^n-q^{k+i})}{q^{r(k-r)}|\GL_r(q)|}&= \binom{k}{k-r}_q\frac{q^{kr}\prod_{i=0}^{r-1}(q^{n-k}-q^i)}{q^{r(k-r)}|\GL_r(q)|}\\ &=\binom{k}{k-r}_q\binom{n-k}{r}_q q^{r^2}. \end{align*} \end{proof} \subsection{Rank-metric Codes} Recall that there always exists $\alpha\in \F_{q^m}$, such that $\F_{q^m}\cong \F_q[\alpha] $. Moreover, $\F_{q^m}$ is isomorphic (as a vector space over $\F_q$) to the vector space $\F_q^m$. One then easily obtains the isomorphic description of matrices over the base field $\F_q$ as vectors over the extension field, i.e.\ $\F_q^{m\times n}\cong \F_{q^m}^n$. \begin{definition} The \emph{rank distance} $d_R$ on $\F_q^{m\times n}$ is defined by \[d_R(X,Y):= \rk(X-Y) , \quad X,Y \in \F_q^{m\times n}. \] Analogously, we define the rank distance between two elements $\boldsymbol x,\boldsymbol y \in \F_{q^m}^n$ as the rank of the difference of the respective matrix representations in $\F_q^{m\times n}$. \end{definition} In this paper we will focus on $ \F_{q^m}$-linear rank-metric codes in $\F_{q^m}^n$, i.e.\ those codes that form a vector space over $ \F_{q^m}$. \begin{definition} An $\F_{q^m}$-\emph{linear rank-metric code $\mathcal C$} of length $n$ and dimension $k$ is a $k$-dimensional subspace of $\F_{q^m}^n$ equipped with the rank distance. A matrix $G\in\F_{q^m}^{k\times n} $ is called a \emph{generator matrix} for the code $\mathcal C$ if $$\mathcal C=\rs(G),$$ where $\rs(G)$ is the subspace generated by the rows of the matrix $G$, called the \emph{row space} of $G$. \end{definition} Whenever we talk about linear codes in this work, we will mean linearity over the extension field $ \F_{q^m}$. The well-known Singleton bound for codes in the Hamming metric implies also an upper bound for codes in the rank-metric: \begin{theorem}\cite[Section~2]{ga85a} Let $\mathcal{C}\subseteq \F_{q^m}^{n}$ be a linear matrix code with minimum rank distance $d$ of dimension $k$. Then $$ k\leq n-d+1 .$$ \end{theorem} \begin{definition} A code attaining the Singleton bound is called a \emph{maximum rank distance (MRD) code}. \end{definition} \begin{lemma}\cite[Lemma 5.3]{ho16}\label{lem:systematic} Any linear MRD code $\C \subseteq \F_{q^m}^n$ of dimension $k$ has a generator matrix $G \in \F_{q^m}^{k\times n}$ in systematic form, i.e. 
$$G = \left[\begin{array}{c|c} I_k & X \end{array} \right]$$ Moreover, all entries in $X$ are from $\F_{q^m} \backslash \F_q$. \end{lemma} For some vector $(v_1,\dots, v_n) \in \F_{q^m}^n$ we denote the $k \times n$ \emph{$s$-Moore matrix} by \[M_{s,k}(v_1,\dots, v_n) := \left( \begin{array}{cccc} v_1 & v_2 &\dots &v_n \\ v_1^{[s]} & v_2^{[s]} &\dots &v_n^{[s]} \\ \vdots&&&\vdots \\ v_1^{[s(k-1)]} & v_2^{[s(k-1)]} &\dots &v_n^{[s(k-1)]} \end{array}\right) ,\] where $[i]:= q^i$. \begin{definition}\label{def:Gab} Let $g_1,\dots, g_n \in \F_{q^m}$ be linearly independent over $\F_q$ and let $s$ be coprime to $m$. We define a \emph{generalized Gabidulin code} $\mathcal{C}\subseteq \F_{q^m}^{n}$ of dimension $k$ as the linear block code with generator matrix $M_{s,k}(g_1,\dots, g_n)$. Using the isomorphic matrix representation we can interpret $\mathcal{C}$ as a matrix code in $\F_q^{m\times n}$. \end{definition} Note that for $s=1$ the previous definition coincides with the classical Gabidulin code construction. The following theorem was shown for $s=1$ in \cite[Section 4]{ga85a}, and for general $s$ in \cite{ks05}. \begin{theorem}\label{thm:GabisMRD} A generalized Gabidulin code $\mathcal{C}\subseteq \F_{q^m}^{n}$ of dimension $k$ over $\F_{q^m}$ has minimum rank distance $n-k+1$. Thus generalized Gabidulin codes are MRD codes. \end{theorem} The dual code of a code $\mathcal{C}\subseteq \F_{q^{m}}^{n}$ is defined in the usual way as \[\mathcal{C}^{\perp} := \{\boldsymbol{u} \in \F_{q^{m}}^{n} \mid \boldsymbol{u}\boldsymbol{c}^T=0 \quad \forall \boldsymbol{c}\in \mathcal{C}\}. \] In his seminal paper Gabidulin showed the following two results on dual codes of MRD and Gabidulin codes. The result was generalized to $s>1$ later on by Kshevetskiy and Gabidulin. \begin{proposition}\cite[Sections~2 and 4]{ga85a}\cite[Subsection IV.C]{ks05}\label{prop:dual1} \begin{enumerate} \item Let $\mathcal{C}\subseteq \F_{q^{m}}^{n}$ be an MRD code of dimension $k$. Then the dual code $\mathcal{C}^{\perp}\subseteq \F_{q^{m}}^{n}$ is an MRD code of dimension $n-k$. \item Let $\mathcal{C}\subseteq \F_{q^{m}}^{n}$ be a generalized Gabidulin code of dimension $k$. Then the dual code $\mathcal{C}^{\perp}\subseteq \F_{q^{m}}^{n}$ is a generalized Gabidulin code of dimension $n-k$. \end{enumerate} \end{proposition} For more information on bounds and constructions of rank-metric codes the interested reader is referred to \cite{ga85a}. Denote by $\mathrm{Gal}(\F_{q^m}/\F_q)$ the \emph{Galois group} of $\F_{q^m}$, consisting of the automorphisms of $\F_{q^m}$ that fix the base field $\F_q$ (i.e., for $\sigma \in \mathrm{Gal}(\F_{q^m}/\F_q)$ and $\alpha \in \F_q$ we have $\sigma(\alpha) = \alpha$). It is well-known that $\mathrm{Gal}(\F_{q^m}/\F_q)$ is generated by the \emph{Frobenius map}, which takes an element to its $q$-th power. Hence the automorphisms are of the form $x\mapsto x^{[i]}$ for some $0\leq i \leq m$. Given a matrix (resp.\ a vector) $A\in \F_{q^m}^{k \times n}$, we denote by $A^{([s])}$ the component-wise Frobenius $A$, i.e., every entry of the matrix (resp.\ the vector) is raised to its $q^s$-th power. 
Analogously, given some $\mathcal C \subseteq \F_{q^m}^{k \times n}$, we define $$ \mathcal C^{([s])}:=\left\{\mathbf{c}^{([s])}\mid \mathbf{c}\in \mathcal C \right\}.$$ The (semi-)linear rank isometries on $\F_{q^m}^{n}$ are induced by the isometries on $\F_q^{m\times n}$ and are hence well-known, see e.g.\ \cite{be03,mo14,wa96}: \begin{lemma}\cite[Proposition~2]{mo14}\label{isometries} The semilinear $\F_q$-rank isometries on $\F_{q^m}^{n}$ are of the form \[(\lambda, A, \sigma) \in \left( \F_{q^m}^* \times \GL_n(q) \right) \rtimes \mathrm{Gal}(\F_{q^m}/\F_q) ,\] acting on $ \F_{q^m}^n \ni (v_1,\dots,v_n)$ via \[(v_1,\dots,v_n) (\lambda, A, \sigma) = (\sigma(\lambda v_1),\dots,\sigma(\lambda v_n)) A .\] In particular, if $\mathcal{C}\subseteq \F_{q^m}^n$ is a linear code with minimum rank distance $d$, then \[\mathcal{C}' = \sigma(\lambda \mathcal{C}) A \] is a linear code with minimum rank distance $d$. \end{lemma} One can easily check that $\F_q$-linearly independent elements in $\F_{q^m}$ remain $\F_q$-linearly independent under the actions of $\F_{q^m}^*, \GL_n(q)$ and $\mathrm{Gal}(\F_{q^m}/\F_q)$. Moreover, the $s$-Moore matrix structure is preserved under these actions, which implies that the class of generalized Gabidulin codes is closed under the semilinear isometries. Thus a code is semilinearly isometric to a generalized Gabidulin code if and only if it is itself a generalized Gabidulin code. In this work we need the following criteria for both the MRD and the Gabidulin property. The following criterion for MRD codes was given in \cite{ho16}, which in turn is based on a well-known result given in \cite{ga85a}: \begin{proposition}\label{prop:MRDCrit} Let $G\in \F_{q^m}^{k\times n}$ be a generator matrix of a rank-metric code $\mathcal{C}\subseteq \F_{q^m}^n$. Then $\mathcal{C}$ is an MRD code if and only if $$ \rk(VG^T) =k$$ for all $V\in \F_q^{k\times n}$ with $\rk(V)=k$. \end{proposition} Furthermore, we need the following criterion for the generalized Gabidulin property: \begin{theorem}\cite[Theorem 4.8]{ho16}\label{thm:GabCrit} Let $\mathcal{C}\subseteq \F_{q^m}^n$ be an MRD code of dimension $k$. $\mathcal{C}$ is a generalized Gabidulin code if and only if there exists $s$ with $\gcd(s,m)=1$ such that $$ \dim (\mathcal{C} \cap \mathcal{C}^{([s])}) = k-1 .$$ \end{theorem} \subsection{The Zariski Topology over Finite Fields} Consider the polynomial ring $\F_q[x_1,\dots,x_r]$ over the base field $\F_q$ and denote by $\bar\F_q$ the algebraic closure of $\F_q$, necessarily an infinite field. For a subset $S\subseteq \F_q[x_1,\dots,x_r]$ one defines the algebraic set $$ V(S): = \{\bs x \in \bar\F_q^r \mid f(\bs x) = 0, \forall f \in S\} . $$ It is well-known that the algebraic sets inside $\bar\F_q^r$ form the \emph{closed sets} of a topology, called the \emph{Zariski topology}. The complements of the Zariski-closed sets are the \emph{Zariski-open} sets. \begin{definition} One says that a subset $G\subset\bar\F_q^r$ defines a \emph{generic set} if $G$ contains a non-empty Zariski-open set. \end{definition} If the base field are the real number ($\mathbb{R}$) or complex numbers ($\mathbb{C}$), then a generic set inside $\mathbb{R}^r$ (respectively inside $\mathbb{C}^r$) is necessarily dense and its complement is contained in an algebraic set of dimension at most $r-1$. Over a finite field $\F_q$ one has to be a little bit more careful. 
Indeed for every subset $T\subset\F_q^r$ one finds a set of polynomials $S\subseteq \F_q[x_1,\dots,x_r]$ such that $$ \{\bs x \in \F_q^r \mid f(\bs x) = 0, \forall f \in S\} =T. $$ This follows simply from the fact that a single point inside $\F_q^r$ forms a Zariski-closed set and any subset $T\subset\F_q^r$ is a finite union of points. However if one has an algebraic set $V(S)$, as defined in the beginning of this subsection, then the $\F_{q^m}$-rational points defined through $$ V(S;\F_{q^m}): = \{\bs x \in \F_{q^m}^r \mid f(\bs x) = 0, \forall f \in S\} $$ become in proportion to the whole vector space $\F_{q^m}^r$ thinner and thinner, as the extension degree $m$ increases. This is a consequence of the Schwartz-Zippel Lemma which we will formulate, for our purposes, over a finite field. The lemma itself will be crucial for our probability estimations in Section \ref{sec:prob}. \begin{lemma}[Schwartz-Zippel]\cite[Corollary 1]{sc80}\label{lem:SZ} Let $f\in \F_q[x_1,x_2,\dots,x_r]$ be a non-zero polynomial of total degree $d \geq 0$. Let $\F_{q^n}$ be an extension field and let $S\subseteq \F_{q^n}$ be a finite set. Let $v_1, v_2, \dots, v_r$ be selected at random independently and uniformly from $S$. Then $$\Pr\big(f(v_1,v_2,\ldots,v_r)=0\big)\leq\frac{d}{|S|}. $$ \end{lemma} \section{Topological Results}\label{sec:topology} The idea of this section is to show that the properties of being MRD and non-Gabidulin are generic properties. Recall that, by Lemma \ref{lem:systematic}, every linear MRD code in $\F_{q^m}^n$ of dimension $k$ has a unique representation by its generator matrix $G\in \F_{q^m}^{k\times n}$ in systematic form $$ G= [\;I_k \mid X\;].$$ Thus we have a one-to-one correspondence between the set of linear MRD codes in $\F_{q^m}^n$ and a subset of the set of matrices $ \F_{q^m}^{k\times (n-k)}$. Therefore we want to investigate how many matrices $X\in \F_{q^m}^{k\times (n-k)}$ give rise to an MRD or a generalized Gabidulin code, when plugged into the above form of a systematic generator matrix. However, to make sense of the definition of genericity, we need to do this investigation over the algebraic closure of $\F_{q^m}$. Unfortunately though, some results in the rank-metric, in particular the definition of and results related to generalized Gabidulin codes, do not hold over infinite fields. Therefore we will actually show that the set of matrices fulfilling the criteria of Corollary \ref{prop:MRDCrit} (for being MRD) and Theorem \ref{thm:GabCrit} (for being a generalized Gabidulin code) are generic sets over the algebraic closure. We first show that the set of generator matrices fulfilling the MRD criterion of Corollary \ref{prop:MRDCrit} is generic. \begin{theorem}\label{thm:topMRD} Let $1\leq k \leq n-1$. The set $$S_\mathrm{MRD} := \{X \in \bar \F_{q^m}^{k\times (n-k)} \mid \forall A \in \F_q^{n\times k} \textnormal{ of rank } k: \det([I_k \mid X ] A)\neq 0 \} $$ is a generic subset of $ \bar \F_{q^m}^{k\times (n-k)}$. \end{theorem} \begin{proof} We need to show that $S_\mathrm{MRD}$ contains a non-empty Zariski-open set. In fact we will show that $S_\mathrm{MRD}$ is a non-empty Zariski-open set. The non-empty-ness follows from the existence of Gabidulin codes for every set of parameters. Hence it remains to show that it is Zariski-open. If we denote the entries of $X\in \bar\F_{q^m}^{k(n-k)}$ as the variables $x_{1},\dots, x_{k(n-k)}$, then, for a given $A \in \F_q^{n\times k}$, we have $\det([I_k \mid X ] A) \in \F_{q}[x_1,\dots,x_{k(n-k)}]$. 
Hence we can write \begin{align*} S_\mathrm{MRD} & = \mathop{\bigcap_{A \in \F_q^{n\times k}}}_{\rk (A)=k}\{X \in \bar \F_{q^m}^{k\times (n-k)} \mid \det([I_k \mid X ] A)\neq 0 \} \\ & = \mathop{\bigcap_{A \in \F_q^{n\times k}}}_{\rk (A)=k} V(\det([I_k \mid X ] A))^C , \end{align*} i.e., it is a finite intersection of Zariski-open sets. Therefore $S_{MRD}$ is a Zariski-open set. \end{proof} \begin{remark} In Theorem \ref{thm:topMRD} we chose the MRD criterion of Corollary \ref{prop:MRDCrit} to show that the MRD property (if seen over some finite extension field) is generic. One can do the same by using the MRD criterion of Horlemann-Trautmann-Marshall from \cite[Corollary 3]{ho16}. \end{remark} \vspace{0.4cm} We now turn to generalized Gabidulin codes. Firstly we rewrite the criterion from Theorem \ref{thm:GabCrit} in a more suitable way. \begin{lemma}\label{lem:reformulation} Let $\mathcal{C}\subseteq \F_{q^m}^n$ be an MRD code of dimension $k$ and let $0<s<m$ with $\gcd(s,m)=1$. $\mathcal{C}$ is a generalized Gabidulin code with parameter $s$ if and only if $\rk(X^{(q^s)}-X) = 1$. \end{lemma} \begin{proof} We know from Theorem \ref{thm:GabCrit} that an MRD code $\C =\rs [I_k \mid X] \subseteq \F_{q^m}^n$ is a generalized Gabidulin code if and only if $\dim (\mathcal{C} \cap \mathcal{C}^{(q^s)}) = k-1$. We get \begin{align*} \dim (\mathcal{C} \cap \mathcal{C}^{(q^s)}) &= k-1 \\ \iff \rk \left[ \begin{array}{c|l} I_k & X \\ I_k & X^{(q^s)} \end{array}\right] &= k+1 \\ \iff \rk \left[ \begin{array}{c|c} I_k & X \\ 0 & X^{(q^s)} - X \end{array}\right] &= k+1\\ \iff \rk(X^{(q^s)} - X) &= 1 . \end{align*} \end{proof} The following theorem shows that the set of generator matrices not fulfilling the generalized Gabidulin criterion of Lemma \ref{lem:reformulation} is generic over the algebraic closure. \begin{theorem}\label{thm:topGab} Let $1\leq k \leq n-1$ and $0<s<m$ with $\gcd(s,m)=1$. Moreover, let $S_\mathrm{MRD}\subseteq \bar \F_{q^m}^{k\times (n-k)}$ be as defined in Theorem \ref{thm:topMRD}. The set $$S_{\mathrm{Gab},s} := \{X \in \bar \F_{q^m}^{k\times (n-k)} \mid \rk(X^{(q^s)}- X) =1 \}\cap S_\mathrm{MRD} $$ is a Zariski-closed subset of the Zariski-open set $S_\mathrm{MRD}$. \end{theorem} \begin{proof} Let $X\in S_{\mathrm{Gab},s}$. Since $X\in S_\mathrm{MRD}$, it follows from Lemma \ref{lem:systematic} that $X_{ij}\not\in \F_q$ for $i=1,\dots, k$ and $j=1,\dots, n-k$. Then the condition $\rk(X^{(q^s)}- X) = 1$ is equivalent to $\rk(X^{(q^s)}- X) < 2$, which in turn is equivalent to the condition that all $2\times 2$-minors of $(X^{(q^s)}-X)$ are zero. If we denote the entries of $X\in \bar\F_{q^m}^{k(n-k)}$ as the variables $x_{1},\dots, x_{k(n-k)}$, then these $2\times 2$-minors of $(X^{(q^s)}-X)$ are elements of $ \F_q[x_1,\dots,x_{k(n-k)}]$. Let us call the set of all these minors $S'$. Then \begin{align*} S_{\mathrm{Gab},s} &= \left\{X \in \bar \F_{q^m}^{k\times (n-k)} \mid f(x_{1},\dots,x_{k(n-k)})= 0 , \forall f \in S' \right\}\cap S_\mathrm{MRD} \\ &=V(S')\cap S_\mathrm{MRD}. \end{align*} Hence it is a Zariski-closed subset of $S_\mathrm{MRD}\subseteq \bar \F_{q^m}^{k\times (n-k)}$. \end{proof} Theorem \ref{thm:topGab} implies that the complement of $S_{\mathrm{Gab},s} $, i.e., the set of matrices that fulfill the MRD criterion but do not fulfill the generalized Gabidulin criterion, is a Zariski-open subset of $S_{\mathrm{MRD}} \subset \bar\F_{q^m}^{k\times (n-k)}$. Thus, if it is non-empty, then the complement of $S_{\mathrm{Gab},s} $ is a generic set. 
The non-empty-ness of this set will be shown in the following section, in Theorem \ref{thm:main}. In other words, over the algebraic closure, a randomly chosen generator matrix gives rise to a code that does not fulfill the generalized Gabidulin criterion with high probability. \section{Probability Estimations}\label{sec:prob} In the previous section we have used the Zariski topology to show that a randomly chosen linear code over $\bar\F_{q^m}$ most likely fulfills the MRD criterion but not the generalized Gabidulin criterion. Intuitively this tells us that over a finite, but large, extension field of $\F_{q}$ a randomly chosen linear code is most likely an MRD code but not a generalized Gabidulin code. In this section we derive some bounds on the probability that this statement is true, in dependence on the field extension degree $m$. \subsection{Probability for MRD codes} Here we give a lower bound on the probability that a random linear rank-metric code in $\F_{q^m}^n$ is MRD. A straightforward approach gives the following result. \begin{theorem}\label{thm:probMRDrough} Let $X\in \F_{q^m}^{k(n-k)}$ be randomly chosen. Then $$\mathrm{Pr}\big(\; \rs [I_k \mid X ] \mbox{ is an MRD code } \big) \geq 1-\frac{k\prod_{i=0}^{k-1} (q^n-q^i)}{q^m} \geq 1-kq^{kn-m} .$$ \end{theorem} \begin{proof} It follows from Corollary \ref{prop:MRDCrit} that $\rs [I_k \mid X ] $ is a non-MRD code if and only if $$p^*:=\mathop{\prod_{A\in \F_{q}^{n\times k}}}_{\rk(A)=k} \det([I_k \mid X ] A) = 0.$$ If we see the entries of $X$ as the variables $x_1,\dots, x_{k(n-k)}$, then every variable $x_i$ is contained in at most one row of the matrix $$[I_k\,|\,X]A = \Big(A_{ij} + \sum_{\ell=k+1}^n X_{i,\ell-k}\, A_{\ell j}\Big)_{i,j}.$$ Thus $ \det([I_k \mid X ] A) \in \F_q[\xx]$ has degree at most $k$. The number of matrices in $ \F_{q}^{n\times k}$ of rank $k$ is $\prod_{i=0}^{k-1} (q^n-q^i) \leq q^{kn}$, hence the degree of $p^*$ is at most $k\prod_{i=0}^{k-1} (q^n-q^i)$. It follows from Lemma \ref{lem:SZ} that $$\mathrm{Pr}\big( \;\rs [I_k \mid X ] \mbox{ is not an MRD code } \big) \leq \frac{\deg p^*}{q^m} $$ and hence $$\mathrm{Pr}\big( \;\rs [I_k \mid X ] \mbox{ is an MRD code } \big) \geq 1-\frac{\deg p^*}{q^m} \geq 1-\frac{k\prod_{i=0}^{k-1} (q^n-q^i)}{q^m} \geq 1-kq^{kn-m} .$$ \end{proof} In the remainder of this subsection we want to improve the bound obtained in Theorem \ref{thm:probMRDrough}. To do so we introduce the set $$ \mathcal T(k,n)=\left\{E\in \F_q^{k\times n}\,|\, E \mbox{ is in reduced row echelon form and } \rk(E)=k \right\}.$$ With this notation we can formulate a variation of Corollary \ref{prop:MRDCrit}: \begin{proposition}\label{prop:improvedMRD} Let $G\in \F_{q^m}^{k\times n}$ be a generator matrix of a rank-metric code $\mathcal{C}\subseteq \F_{q^m}^n$. Then $\mathcal{C}$ is an MRD code if and only if $$ \rk(EG^T) =k$$ for all $E\in \mathcal T(k,n)$. \end{proposition} \begin{proof} For every matrix $V\in \F_q^{k\times n}$ consider its reduced row echelon form $E_V$. That is, there exists a matrix $R \in \GL_k(q)$ such that $V=RE_V$. Then $$\det(VG^T)=\det(RE_VG^T)=\det(R)\det(E_VG^T), $$ and since $\det(R)\neq0$ we obtain that $\rk(VG^T)=k$ if and only if $\rk(E_VG^T)=k$. By Corollary \ref{prop:MRDCrit} the statement follows. 
\end{proof} For $E\in \mathcal T(k,n)$ we define the polynomial $$f_E(x_1,\ldots,x_{k(n-k)}) := \det([I_k\,|\,X]E^T) \in \F_{q^m}[x_1,\ldots, x_{k(n-k)}] ,$$ and we furthermore define $$ f^*(\xx):= \mathrm{lcm}\left\{f_E(x_1,\ldots,x_{k(n-k)})\,|\, E \in \mathcal T(k,n) \right\},$$ where, as before, the entries of $X$ are the variables $x_1,\ldots, x_{k(n-k)}$. We can easily observe the following. \begin{proposition} The set of linear non-MRD codes of dimension $k$ in $\F_{q^m}^n$ is in one-to-one correspondence with the algebraic set $$ V(\{f^*\})=\left\{(v_1,\ldots,v_{k(n-k)}) \in \F_{q^m}^{k(n-k)}\,|\,f^*(v_1,\ldots,v_{k(n-k)})=0\right\} .$$ \end{proposition} \begin{proof} It follows from Proposition \ref{prop:improvedMRD} that the set of linear non-MRD codes of dimension $k$ in $\F_{q^m}^n$ is in one-to-one correspondence with the algebraic set \begin{align*} V&=\bigcup_{E\in \mathcal T(k,n)}\left\{(v_1,\ldots,v_{k(n-k)}) \in \F_{q^m}^{k(n-k)}\mid f_E(v_1,\ldots,v_{k(n-k)})=0\right\} \\ &=\left\{(v_1,\ldots,v_{k(n-k)}) \in \F_{q^m}^{k(n-k)} \mid \prod_{E \in \mathcal T(k,n)} f_E(v_1,\ldots,v_{k(n-k)})=0\right\} \\ &=\left\{(v_1,\ldots,v_{k(n-k)}) \in \F_{q^m}^{k(n-k)}\mid f^*(v_1,\ldots,v_{k(n-k)})=0\right\} , \end{align*} where the last two equalities follow from the well-known fact that $$V(\{f\})\cup V(\{g\})=V(\{fg\})=V(\{\mathrm{lcm}(f,g)\})$$ for any $ f,g \in \F_q[x_1,\ldots,x_{k(n-k)}]$. \end{proof} Note that in the definition of an algebraic set, it suffices to use the square-free part of the defining polynomial(s). In the above definition of $V$ however, $f^*(\xx)$ is already square-free, as we show in the following. \begin{lemma} For every $E\in \mathcal T(k,n)$ the polynomial $f_E(\xx)$ is square-free. In particular, the polynomial $f^*(\xx)$ is square-free. \end{lemma} \begin{proof} As in the proof of Theorem \ref{thm:probMRDrough}, every variable $x_i$ is contained in at most one row of the matrix $[I_k\,|\,X]E^T$. Hence, in the polynomial $f_E(\xx)$ the degree with respect to every variable is at most $1$. Thus $f_E(\xx)$ cannot have multiple factors. \end{proof} We now determine an upper bound on the degree of the defining polynomial $f^*$. \begin{lemma}\label{lem:deg} Let $E\in \mathcal T(k,n)$ and let $\Uvs_0$ be the subspace of $\F_q^n$ defined by $$ \Uvs_0 := \rs [\; I_k \mid 0 \;]=\left\{(u_1,\ldots,u_n) \in \F_q^n\,|\,u_{k+1}=u_{k+2}=\ldots=u_n=0 \right\}.$$ Then $$\deg f_E=k-\dim \left(\rowsp(E)\cap \Uvs_0 \right) .$$ \end{lemma} \begin{proof} Let $r:=k-\dim \left(\rowsp(E)\cap \Uvs_0 \right)$ with $0\leq r\leq k$. We can write $$E^T=\left[\begin{array}{c} E_1 \\ \hline E_2 \end{array} \right],$$ where $E_1\in \F_q^{k\times k}, E_2\in \F_q^{(n-k)\times k}$. Since $\dim \left(\rowsp(E)\cap \Uvs_0 \right)=k-r$, we have $\rk(E_2)=r$. Thus there exists a matrix $R\in\GL_k(q)$ such that the first $r$ columns of $E_2R$ are linearly independent and the last $k-r$ columns are zero. Then $$f_E(\xx)= \det( [\; I_k \mid X \;] E^T)=\det(R)^{-1}\det(E_1R+XE_2R).$$ The last $k-r$ columns of the matrix $XE_2R$ are zero, i.e., the last $k-r$ columns of $E_1R+XE_2R$ do not contain any of the variables $x_i$. On the other hand, the entries of the first $r$ columns are polynomials in $\F_q[\xx]$ of degree at most $1$, since $$E_1R+XE_2R = \left(\sum_{\ell=1}^k (E_1)_{i\ell} R_{\ell j} + \sum_{\ell=1}^k \sum_{\ell'=1}^{n-k} X_{i\ell'} (E_2)_{\ell'\ell} R_{\ell j} \right)_{i,j}. $$ Hence we have $\deg f_E\leq r$. Now consider the matrix $E_2R$. 
We can write $$E_2R=\left[\begin{array}{c|c} \tilde{E}_2 & 0 \end{array}\right]$$ where $\tilde{E}_2$ is an $(n-k)\times r$ matrix of rank $r$. Hence $$XE_2R=\left[\begin{array}{c|c} X\tilde{E}_2 & 0 \end{array}\right].$$ First we prove that the entries of the matrix $X\tilde{E}_2$ are algebraically independent over $\F_q$. Fix $1\leq i \leq k$ and denote by $(X\tilde{E}_2)_i$ the $i$-th row of the matrix $X\tilde{E}_2$. Then consider the polynomials $(X\tilde{E}_2)_{ij},$ for $j=1,\ldots, r$, that only involve the variables $x_{(i-1)(n-k)+1},\ldots, x_{i(n-k)}$ . The Jacobian of these polynomials is $\tilde{E}_2^T$, whose rows are linearly independent over $\F_q$. Therefore the elements in every row are algebraically independent over $\F_q$. Moreover different rows involve different variables, hence we can conclude that the entries of the matrix $X\tilde{E}_2$ are algebraically independent over $\F_q$. At this point consider the set of all $r\times r$ minors of $X\tilde{E}_2 $. These minors are all different and hence linearly independent over $\F_q$, otherwise a non-trivial linear combination of them that gives $0$ would produce a non-trivial polynomial relation between the entries of $X\tilde{E}_2R$. Now observe that the degree $r$ term of $f_E$ is a linear combination of these minors. If we write $$ E_1R=\left[\begin{array}{c|c} * & \tilde{E}_1 \end{array}\right],$$ where $\tilde{E}_1\in \F_q^{k\times (k-r)}$, then the coefficients of this linear combination are given by the $(k-r)\times (k-r)$ minors of $\tilde{E}_1$, multiplied by $\det(R)^{-1}$. Since $E^TR$ has rank $k$ and the last $k-r$ columns of $E_2R$ are $0$, it follows that the columns of $\tilde{E}_1$ are linearly independent, and hence at least one of the coefficients of the linear combination is non-zero. This proves that the degree $r$ term of $f_E$ is non-zero, and hence $\deg f_E=r$. \end{proof} We can now give the main result of this subsection, an upper bound on the probability that a random generator matrix generates an MRD code: \begin{theorem}\label{thm:probMRD} Let $X\in \F_{q^m}^{k(n-k)}$ be randomly chosen. Then $$\mathrm{Pr}\big( \;\rs [I_k \mid X ] \mbox{ is an MRD code } \big) \geq 1-\sum_{r=0}^kr\binom{k}{k-r}_q\binom{n-k}{r}_qq^{r^2}q^{-m} .$$ \end{theorem} \begin{proof} For every $r=0,1,\ldots,k$ we define the set $$ \mathcal T_r=\left\{E\in \mathcal T(k,n)\,|\,\dim \left(\Uvs_0\cap \rowsp(E)\right)=k-r\right\},$$ where $$ \Uvs_0 := \rs [\; I_k \mid 0 \;]=\left\{(u_1,\ldots,u_n) \in \F_q^n\,|\,u_{k+1}=u_{k+2}=\ldots=u_n=0 \right\}.$$ By Lemma \ref{lem:intersection} we have $$\left|\mathcal T_r \right|=\binom{k}{k-r}_q\binom{n-k}{r}_qq^{r^2}.$$ Moreover, by Lemma \ref{lem:deg}, if $E\in\mathcal T_r$, then $\deg f_E=r$. Hence, by the definition of $f^*(\xx)$, we have $$\deg f^*\leq \sum_{E\in\mathcal T(k,n)}\deg f_E=\sum_{r=0}^k\sum_{E \in \mathcal T_r}\deg f_E=\sum_{r=0}^k r\binom{k}{k-r}_q\binom{n-k}{r}_qq^{r^2}.$$ With Lemma \ref{lem:SZ}, the statement follows. \end{proof} Remember that we know how to construct MRD codes, namely as Gabidulin codes, for any set of parameters. Hence the probability that a randomly chosen generator matrix generates an MRD code is always greater than zero. However, the lower bound of Theorem \ref{thm:probMRD} is not always positive. 
In particular, for $$m<k(n-k)+\log_qk$$ we get \begin{align*} &1-\sum_{r=0}^kr\binom{k}{k-r}_q\binom{n-k}{r}_q q^{r^2}q^{-m} \\ = &1-q^{-m}\left( k\binom{n-k}{k}_qq^{k^2} + \sum_{r=1}^{k-1}r\binom{k}{k-r}_q\binom{n-k}{r}_q q^{r^2} \right)\\ \leq& 1-q^{-m}\left( kq^{k(n-k)}\right)< 0, \end{align*} i.e., the bound is not tight (and not sensible) in these cases. Figure \ref{plotMRD2} depicts the lower bounds of Theorem \ref{thm:probMRDrough} and Theorem \ref{thm:probMRD} for small parameters. One can see that the bounds of Theorem \ref{thm:probMRD} really is an improvement over the bound of Theorem \ref{thm:probMRDrough}. \begin{figure}[ht] \begin{center} \includegraphics[width=7.25cm]{probs_MRD3_n4k2q2.pdf} \includegraphics[width=7.25cm]{probs_MRD3_n5k2q2.pdf} \end{center} \caption{Lower bounds on the probability that a randomly chosen generator matrix in $\F_{2^m}^{2\times 4}$ (left) and $\F_{2^m}^{2\times 5}$ (right) generates an MRD code.} \label{plotMRD2} \end{figure} \subsection{Probability for Gabidulin codes} We have seen in Theorem \ref{thm:topGab} that the set of matrices in $\F_{q^m}^{k\times n}$ in systematic form that generate a generalized Gabidulin code with parameter $s$ (such that $0<s<m$ with $\gcd(s,m)=1$) is in one-to-one correspondence with a subset of the set $$\left\{X \in \F_{q^m}^{k \times (n-k)}\,| \; \rk(X^{(q^s)}-X)=1 \right\}, $$ namely with the elements that represent an MRD code. By Lemma \ref{lem:systematic} we furthermore know that, if $X$ has entries from $\F_q$, then $\rs [\;I_k\mid X\;]$ is not MRD. Hence the set of matrices in systematic form that generate a Gabidulin code is in one-to-one correspondence with a subset of the set $$ \mathcal G(s):=\left\{X \in (\F_{q^m}\smallsetminus \F_q)^{k \times (n-k)}\,| \; \rk(X^{(q^s)}-X)=1 \right\}.$$ For simplicity we make the following estimation of the probability that a randomly chosen generator matrix generates a generalized Gabidulin code. \begin{lemma}\label{lem:probGabunion} Let $X\in\F_{q^m}^{k \times (n-k)}$ be randomly chosen. Then $$\mathrm{Pr}\big( \;\rs[I_k\,|\,X] \mbox{ is a gen.\ Gabidulin code } \big)\leq \mathop{\sum_{0<s<m}}_{\gcd(s,m)=1}\mathrm{Pr}\big(X\in\mathcal G(s)\big) =\mathop{\sum_{0<s<m}}_{\gcd(s,m)=1}\frac{|\mathcal G(s)|}{q^{mk(n-k)}} .$$ \end{lemma} \begin{proof} The inequality follows from the fact that the set of generalized Gabidulin codes is in one-to-one correspondence with a subset of the set $$\mathop{\bigcup_{0<s<m}}_{\gcd(s,m)=1} \mathcal G(s) . $$ Since $|\F_{q^m}^{k(n-k)}| = q^{mk(n-k)}$, the statement follows. \end{proof} For every integer $0<s<m$ with $\gcd(m,s)=1$, we now define the map $\varPhi_s$ given by $$\begin{array}{rcl} \varPhi_s:\F_{q^m}^{k\times (n-k)} &\longrightarrow & \F_{q^m}^{k\times (n-k)} \\ X & \longmapsto & X^{(q^s)}-X. \end{array} $$ Observe that $\varPhi_s$ is exactly the function that maps every entry $X_{ij}$ of the matrix $X$ to $\varphi_s(X_{ij})$. Moreover we define the sets \begin{align*} \mathcal R_1 &: =\left\{A\in \F_{q^m}^{k\times (n-k)}\,|\, \rk(A)=1\right\}, \\ \mathcal R_1^*&:=\left\{A\in (\F_{q^m}^*)^{k\times (n-k)}\,|\, \rk(A)=1\right\}, \\ \mathcal K &:=\left(\ker\left(\mathrm{Tr}_{\F_{q^m}/\F_q}\right)\right)^{k\times(n-k)} . \end{align*} We state now the crucial results that will help us to compute an upper bound on the cardinality of the sets $\mathcal G(s)$. 
\begin{lemma}\label{lem:phi} \begin{enumerate} \item Given a matrix $A \in \F_{q^m}^{k\times (n-k)}$, there exists a matrix $X \in \F_{q^m}^{k\times (n-k)}$ such that $\varPhi_s(X)=A$ if and only if $A\in\mathcal K$. \item If $A \in \mathcal R_1$, then $$|\varPhi_s^{-1}(A)|=\begin{cases} 0 & \mbox{ if } A\notin \mathcal K \\ q^{k(n-k)} & \mbox{ if } A\in \mathcal K. \end{cases} $$ \item For every integer $s$ coprime to $m$ $$\mathcal G(s)=\varPhi_s^{-1}(\mathcal R_1^*\cap \mathcal K),$$ and $$|\mathcal G(1)|=|\mathcal G(s)|=q^{k(n-k)}|\mathcal R_1^*\cap \mathcal K|.$$ \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Since $\varPhi_s$ is the function that maps every entry $X_{ij}$ of the matrix $X$ to $\varphi_s(X_{ij})$, we have that $A \in \mathrm{Im}(\varPhi_s)$ if and only if every entry $A_{ij}$ of $A$ belongs to $\mathrm{Im}(\varphi_s)$. By Lemma \ref{lem:trace} part $6$ this is true if and only if every $A_{ij}$ belongs to $\ker\left(\mathrm{Tr}_{\F_{q^m}/\F_q}\right)$. \item If $A\notin \mathcal K$, then by part $1$ this means that $\varPhi_s^{-1}(A)=\emptyset$. Otherwise, again by part $1$, $\varPhi_s^{-1}(A)\neq\emptyset$. In this case every entry $A_{ij}$ belongs to $\mathrm{Im}(\varphi_s)$, and since $\varphi_s$ is linear over $\F_q$, $|\varphi_s^{-1}(A_{ij})|=|\ker(\varphi_s)|$. Since, by Lemma \ref{lem:trace}, $$ |\ker(\varphi_s)|=\frac{|\F_{q^m}|}{|\mathrm{Im}(\varphi_s)|}=q,$$ and $A$ has $k(n-k)$ entries, we get $|\varPhi_s^{-1}(A)|=q^{k(n-k)}$. \item Observe that $\mathcal R_1^*=\mathcal R_1\cap \left(\F_{q^m}^*\right)^{k\times (n-k)}$. Moreover $$ \varPhi_s^{-1}(\mathcal R_1)=\left\{X \in \F_{q^m}^{k \times (n-k)}\,| \; \rk(X^{(q^s)}-X)=1 \right\}$$ and, by Lemma \ref{lem:trace} part $5$, $$\varPhi_s^{-1}(\left(\F_{q^m}^*\right)^{k\times (n-k)})=\left(\F_{q^m}\smallsetminus \F_q\right)^{k \times (n-k)}. $$ Hence $$ \varPhi_s^{-1}(\mathcal R_1^*)=\varPhi_s^{-1}(\mathcal R_1\cap \left(\F_{q^m}^*\right)^{k\times (n-k)})=\varPhi_s^{-1}(\mathcal R_1)\cap \varPhi_s^{-1}(\left(\F_{q^m}^*\right)^{k\times (n-k)})=\mathcal G(s). $$ Now we can write $$ \mathcal R_1^*=(\mathcal R_1^*\cap\mathcal K)\cup(\mathcal R_1^*\cap \mathcal K^c)$$ and by part $1$ we have that $\varPhi_s^{-1}(\mathcal R_1^*\cap \mathcal K^c)=\emptyset$. Then $$\mathcal G(s)= \varPhi_s^{-1}(\mathcal R_1^*)= \varPhi_s^{-1}(\mathcal R_1^*\cap \mathcal K)\cup\varPhi_s^{-1}(\mathcal R_1^*\cap \mathcal K^c)=\varPhi_s^{-1}(\mathcal R_1^*\cap \mathcal K).$$ By part $2$ we have $\left|\varPhi_s^{-1}(\mathcal R_1^*\cap \mathcal K)\right|=q^{k(n-k)}|\mathcal R_1^*\cap \mathcal K|$, which proves the statement. \end{enumerate} \end{proof} In analogy to the previous subsection we now first derive a straight-forward upper bound on the probability that a random generator matrix gives rise to a generalized Gabidulin code. Afterwards we will improve this bound. \begin{theorem}\label{thm:probGabrough} Let $X\in \F_{q^m}^{k(n-k)}$ be randomly chosen. Denote by $\phi(m)$ the Euler-$\phi$-function. Then $$\mathrm{Pr}\big( \;\rs [I_k \mid X ] \mbox{ is a generalized Gabidulin code } \big) \leq \phi(m) ( 2q^{1-m})^{\lfloor\frac{k}{2}\rfloor \lfloor\frac{n-k}{2}\rfloor}$$ \end{theorem} \begin{proof} We want to derive the cardinality of $\mathcal G(s)$ for any valid $s$. For this, by Lemma \ref{lem:phi} part $3$, we note that these cardinalities are all equal to the cardinality of $\mathcal G(1)$. Now for any $X \in (\F_{q^m}\smallsetminus \F_q)^{k \times (n-k)}$ the rank of $X^{(q)}-X$ is greater than zero. 
Therefore we can also write $$ \mathcal G(1)=\left\{X \in (\F_{q^m}\smallsetminus \F_q)^{k \times (n-k)}\,| \; \rk(X^{(q)}-X)\leq 1 \right\}.$$ The condition that $\rk(X^{(q)}-X)\leq 1$ is equivalent to that any $2\times 2$-minor of $X^{(q)}-X$ is zero. Hence a necessary condition is that any set of non-intersecting minors is zero. We have $\lfloor\frac{k}{2}\rfloor \lfloor\frac{n-k}{2}\rfloor$ many such non-intersecting minors, each of which has degree at most $2q$ if we see the entries of $X$ as the variables $x_1,\dots, x_{k(n-k)}$. With Lemma \ref{lem:SZ} we get for each minor $M_{ij}$, $$\Pr(M_{ij} = 0) \leq 2q^{1-m}. $$ Since the non-intersecting minors are independent events, the probability that all of these are zero is at most $$( 2q^{1-m})^{\lfloor\frac{k}{2}\rfloor \lfloor\frac{n-k}{2}\rfloor}.$$ With Lemma \ref{lem:probGabunion} and the fact that the number of $s$ with $\gcd(s,m)=1$ is given by $\phi(m)$, the statement follows. \end{proof} To improve the above bound we need the following lemma. \begin{lemma}\label{lem:R1} The set $\mathcal R_1^*\cap \mathcal K$ is in one-to-one correspondence with the set \begin{align*} V_R :=& \left\{\left(\bs\alpha,\bs\beta\right)\in \F_{q^m}^k \times \F_{q^m}^{n-k-1}\,|\,\alpha_i, \alpha_i\beta_j \in \ker \left(\mathrm{Tr}_{\F_{q^m}/\F_q}\right)\smallsetminus\{0\}\right\} \\ =& \left\{\left(\bs\alpha,\bs\beta\right)\in \F_{q^m}^k \times \F_{q^m}^{n-k-1}\,|\,\alpha_i, \in \ker \left(\mathrm{Tr}_{\F_{q^m}/\F_q}\right)\smallsetminus\{0\}, \beta_j \in \bigcap_{i=1}^k\ker \left(\mathrm{T}_{\alpha_i}\right)\smallsetminus\{0\}\right\} \end{align*} via the map $\psi:V_R \longrightarrow \mathcal R_1^*\cap \mathcal K$, given by $$ (\bs\alpha,\bs\beta)\longmapsto\left[\begin{array}{c}\alpha_1 \\ \vdots \\ \alpha_k \end{array}\right]\left[1, \beta_1,\ldots, \beta_{n-k-1}\right], $$ and hence $$|\mathcal R_1^*\cap \mathcal K|\leq (q^{m-1}-1)^{n-1} $$ \end{lemma} \begin{proof} From the definition of the set $V_R$ it is clear that the map $\psi$ is well-defined, i.e., it maps every element in $V_R$ to an element in $\mathcal R_1^*\cap \mathcal K$. Let $(\bs\alpha,\bs\beta)$, $(\bs\gamma,\bs\delta)$ be two elements that have the same image. Then the first column of $\psi(\bs\alpha,\bs\beta)$ and the first column of $\psi(\bs\gamma,\bs\delta)$ are equal, hence $\bs\alpha=\bs\gamma$. Also the first rows of $\psi(\bs\alpha,\bs\beta)$ and $\psi(\bs\gamma,\bs\delta)$ are equal, thus $\alpha_1\beta_j=\gamma_1\delta_j$ for every $j=1,\ldots, n-k-1$, and since $\alpha_1=\gamma_1\neq 0$ we get $\bs\beta=\bs\delta$ and this shows the injectivity of the map $\psi$. In order to show the surjectivity consider a rank $1$ matrix $A\in\mathcal R_1^*\cap \mathcal K$ with entries $A_{ij}$. Consider the vectors $\bs\alpha=(A_{11},\ldots,A_{k1})^T$ and $$\bs\beta=A_{11}^{-1}(A_{12},\ldots,A_{1(n-k)})^T.$$ It is clear that $(\bs\alpha, \bs\beta)\in V_R$, and that $\psi(\bs\alpha,\bs\beta)=A$. At this point for every $\alpha_i$ we have $q^{m-1}-1$ possible choices, while for every $\beta_i$ we have a number of choices that is less or equal to $|\ker(T_{\alpha_1})\smallsetminus\{0\}|$, that is again $q^{m-1}-1$. Therefore we get $$|\mathcal R_1^*\cap \mathcal K|\leq (q^{m-1}-1)^{n-1}.$$ \end{proof} We can now formulate the main result concerning the probability that a random linear rank-metric code is a generalized Gabidulin code. \begin{theorem}\label{thm:mainprobGab} Let $X\in \F_{q^m}^{k\times (n-k)}$ be randomly chosen. 
Then $$\Pr\big( \;\rs[I_k\,|\,X] \mbox{ is a gen.\ Gabidulin code }\big)\leq \phi(m)q^{-(m-1)(n-k-1)(k-1)},$$ where $\phi$ denotes the Euler-$\phi$ function. \end{theorem} \begin{proof} We have already seen in Lemma \ref{lem:probGabunion} that $$\Pr\big( \;\rs[I_k\,|\,X] \mbox{ is a gen.\ Gabidulin code }\big)\leq \mathop{\sum_{0<s<m}}_{(s,m)=1}\frac{|\mathcal G(s)|}{q^{mk(n-k)}}.$$ By Lemma \ref{lem:phi} part $3$, the sets $\mathcal G(s)$ all have cardinality $q^{k(n-k)}|\mathcal R_1^*\cap \mathcal K|$, thus $$\mathop{\sum_{0<s<m}}_{(s,m)=1}\frac{|\mathcal G(s)|}{q^{mk(n-k)}}=\phi(m)\frac{q^{k(n-k)}|\mathcal R_1^*\cap \mathcal K|}{q^{mk(n-k)}}.$$ Moreover by Lemma \ref{lem:R1}, we know that $|\mathcal R_1^*\cap \mathcal K|\leq (q^{m-1}-1)^{n-1}\leq q^{(m-1)(n-1)} $. Combining all the inequalities implies the statement. \end{proof} We can now give the final main result of this work, which proves the existence of linear MRD codes that are not generalized Gabidulin codes for almost every set of parameters. \begin{theorem}\label{thm:main} \begin{itemize} \item For any prime power $q$, and for any $1<k<n-1$, there exists an integer $M(q,k,n)$ such that, for every $m\geq M(q,k,n)$, there exists a $k$-dimensional linear MRD code in $\F_{q^m}^{n}$ that is not a generalized Gabidulin code. \item An integer $M(q,k,n)$ with this property can be found as the minimum integer solution of the inequality \begin{equation}\label{eq:main} 1-\sum_{r=0}^kr\binom{k}{k-r}_q\binom{n-k}{r}_qq^{r^2}q^{-m}>(m-1)q^{-(m-1)(n-k-1)(k-1)} \end{equation} taken over all $m\in \mathbb N$. \end{itemize} \end{theorem} \begin{proof} For fixed $q$, $k$ and $n$ consider the function \begin{align*} F(m) & =\sum_{r=0}^kr\binom{k}{k-r}_q\binom{n-k}{r}_qq^{r^2}q^{-m}+(m-1)q^{-(m-1)(n-k-1)(k-1)} \\ & =aq^{-m}+(m-1)q^{-c(m-1)}, \end{align*} where $$ a:=\sum_{r=0}^kr\binom{k}{k-r}_q\binom{n-k}{r}_qq^{r^2}, \;\;\;\;c:=(n-k-1)(k-1). $$ Since $k\neq 1,n-1$, we have $c\geq 1$; moreover the term $r=1$ of the sum already shows $a>q$, so $m=1$ is never a solution of (\ref{eq:main}). For $m\geq 2$, using $q\geq 2$ and $c\geq 1$, $F(m)$ is the sum of two non-increasing functions and hence it is non-increasing. Therefore the function $1-F(m)$ is non-decreasing on $m\geq 2$. Moreover it is easy to see that $$ \lim_{m\rightarrow +\infty} 1-F(m)=1.$$ This means that the set of the solutions of Inequality (\ref{eq:main}) is non-empty. Then it has a minimum solution $M(q,k,n)$, necessarily at least $2$. Since the function $1-F(m)$ is non-decreasing on $m\geq 2$, every $m\geq M(q,k,n)$ is also a solution of (\ref{eq:main}). Hence, by Theorems \ref{thm:probMRD} and \ref{thm:mainprobGab}, we have the following chain of inequalities for every $m\geq M(q,k,n)$, $$\Pr\big( \rs[I_k\,|\,X] \mbox{ is MRD}\big)\geq 1-aq^{-m}>(m-1)q^{-c(m-1)}\geq \Pr\big( \rs[I_k\,|\,X] \mbox{ is gen.\ Gabidulin}\big), $$ which concludes the proof. \end{proof} In Figures \ref{experimental3} and \ref{gabidulin2} we compare the bounds derived in this section with experimental results, which we obtained by randomly generating over $500$ rank-metric codes. The continuous lines show the bounds, the dotted lines show the experimental probabilities. In Figure \ref{experimental3} we see that Gabidulin codes make up only a small fraction of all MRD codes when the extension degree $m$ is large. The probabilities for generalized Gabidulin codes decrease so quickly for increasing parameters that we show them separately, in logarithmic scale, in Figure \ref{gabidulin2}. Notice that from $m=10$ on it is very unlikely to generate a generalized Gabidulin code randomly, and the experimentally observed probability was zero. This is why the experimental results are shown only up to $m=9$. 
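For concreteness, Inequality (\ref{eq:main}) is easy to evaluate numerically. The following short script is an illustrative sketch (the function names and the implementation are ours and are not part of the derivation above); it computes the coefficient $a$, checks Inequality (\ref{eq:main}) for increasing $m$ using exact integer arithmetic, and returns the resulting $M(q,k,n)$.
\begin{verbatim}
# Illustrative sketch (our own code): find the smallest extension degree
# M(q,k,n) satisfying Inequality (eq:main).

def q_binomial(n, k, q):
    """Gaussian binomial coefficient [n choose k]_q, exact integer arithmetic."""
    if k < 0 or k > n:
        return 0
    num, den = 1, 1
    for i in range(k):
        num *= q**(n - i) - 1
        den *= q**(i + 1) - 1
    return num // den

def mrd_defect_coefficient(q, k, n):
    """a = sum_r r [k, k-r]_q [n-k, r]_q q^{r^2}, as in the proof above."""
    return sum(r * q_binomial(k, k - r, q) * q_binomial(n - k, r, q) * q**(r * r)
               for r in range(k + 1))

def minimal_extension_degree(q, k, n, m_max=200):
    """Smallest m with 1 - a q^{-m} > (m-1) q^{-c(m-1)}, c = (n-k-1)(k-1)."""
    a = mrd_defect_coefficient(q, k, n)
    c = (n - k - 1) * (k - 1)
    for m in range(1, m_max + 1):
        # equivalent integer form: (q^m - a) * q^{c(m-1)} > (m-1) * q^m
        if (q**m - a) * q**(c * (m - 1)) > (m - 1) * q**m:
            return m
    return None

if __name__ == "__main__":
    for (q, k, n) in [(2, 2, 4), (3, 2, 4), (2, 2, 5)]:
        print(q, k, n, minimal_extension_degree(q, k, n))
\end{verbatim}
For instance, for $(q,k,n)=(2,2,4)$ one gets $a=50$ and $c=1$, and the smallest integer solution of Inequality (\ref{eq:main}) is $m=6$.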
\begin{figure}[!ht] \begin{center} \includegraphics[width=7.25cm]{experimental-q2n4k2.png} \includegraphics[width=7.25cm]{experimental-q3n4k2.png} \end{center} \caption{Bounds and experimental results for MRD and generalized Gabidulin codes in $\F_{2^m}^{2\times 4}$ and $\F_{3^m}^{2\times 4}$.} \label{experimental3} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=7.25cm]{gabidulin-q2n5k2-novert.png} \includegraphics[width=7.25cm]{gabidulin-q3n4k2-novert.png} \end{center} \caption{Bounds and experimental results for generalized Gabidulin codes in $\F_{2^m}^{2\times 5}$ and $\F_{3^m}^{2\times 4}$.} \label{gabidulin2} \end{figure} \section{Conclusion}\label{sec:conclusion} In this work we have shown that, over the algebraic closure of a given finite field, MRD codes and non-Gabidulin codes are generic sets among all linear rank-metric codes. For this we have used two known criteria for these two properties, which give rise to algebraic descriptions of the respective sets. Afterwards we have used the same two criteria to establish a lower bound on the probability that a randomly chosen systematic generator matrix generates an MRD code, and an upper bound on the probability that a randomly chosen systematic generator matrix generates a generalized Gabidulin code. With these two bounds we were then able to show that non-Gabidulin MRD codes exist for any length $n$ and dimension $1<k<n-1$, as long as the underlying field extension degree $m$ is large enough. \bibliography{./network_coding_stuff} \bibliographystyle{plain} \end{document}
\subsection{Introduction} \label{sec:QN:intro} The quasi-neutral limit can be related to the so-called {\it plasma approximation} as defined in \cite{Chen}, which consists, for dense plasmas, in assuming equal ionic and electronic densities $n_i = n_e$ together with an electric field that is not divergence free, $\nabla \cdot E \neq 0$. This might appear paradoxical since it breaks the Maxwell-Gauss equation $\nabla \cdot E = q(n_i - n_e)/\varepsilon_0$ ($\varepsilon_0$ being the vacuum permittivity and $q$ the elementary charge). \Fabrice{This ambiguity is clarified in Sec.~\ref{sec:QN:analysis} thanks to the analysis of the different orderings revealed by a scaling of the Maxwell system. The inter-relations of the dimensionless parameters occurring in this set of re-scaled equations define different asymptotic regimes. The first one relates to the propagation of electromagnetic waves at the speed of light and is referred to as the Maxwell regime. In this ordering, the Maxwell sources vanish and do not contribute to the changes in the electromagnetic field, the electric field being computed by means of the displacement current in Amp\`ere's law. On the contrary, the evolution in the quasi-neutral regime is dominated by the sources, the displacement current being negligible. Therefore, the transitions between the Maxwell and the quasi-neutral regimes rely on the relative influence of the displacement current and the current of particles. The vanishing of the displacement current in the Amp\`ere equation is at the origin of the singular nature of the quasi-neutral limit. Finally the electrostatic limit of the Maxwell system is discussed, the aim being to characterize the quasi-neutral limit in this asymptotic. It is defined as a low frequency regime with a slow system evolution compared to the speed of light. In particular, the singular nature of the quasi-neutral limit is illustrated also in this framework, with the degeneracy of the Maxwell-Gauss equation.} \Fabrice{In the quasi-neutral asymptotic, the degeneracy of the Amp\`ere and the Gauss equations calls for new means of computing the electric field. This is classically achieved thanks to a generalized Ohm law, derived either from the quasi-neutral constraint enforcing a divergence free current of particles or from the electronic momentum equation, in which the inertia is neglected. We refer to \cite{langmuir_interaction_1929} for some seminal works on quasi-neutral models, to \cite{hewett_low-frequency_1994} for a short review, and to \cite{winske_hybrid_1991,joyce_electrostatic_1997,buchner_hybrid_2003,crispel_quasi-neutral_2005,crispel_plasma_2007,tronci_hybrid_2014,tronci_neutral_2015} and the references therein for implementations of quasi-neutral plasma models.} \Fabrice{However quasi-neutral descriptions have a limited range of validity. In particular these models are not valid in vacuum or low plasma density regions where high frequency phenomena may occur \cite{tonks_general_1929}. The purpose of the Asymptotic-Preserving (AP) methods reviewed in this paper is to bring the two regimes into a single set of equations, making possible the transition between the evolution of the electric field in the Maxwell regime, by means of the displacement current, and that of the quasi-neutral regime, with an electric field computed thanks to a generalized Ohm law. 
In this respect, AP methods implement the guideline stated in \cite{Chen}: ``{\it do not use Maxwell's equations to compute the electric field unless it is unavoidable !}''.} \Fabrice{The derivation of the AP methods is addressed in Sec.~\ref{sec:AP:VM}. One key point is to identify the equations describing the system in the quasi-neutral regime. Since Ohm's law relies heavily on the equations describing the evolution of the particles, the plasma model, either fluid or kinetic, is an important aspect to consider in designing AP methods. Pioneering works were first devoted to the Euler-Poisson system \cite{CDV07}, then extended to kinetic electrostatic descriptions by means of the Vlasov-Poisson system \cite{degond_asymptotically_2006,degond_asymptotic-preserving_2010,BCDS09}. Electromagnetic fields have been considered in the framework of the bi-fluid isothermal Euler-Maxwell system in \cite{degond_numerical_2012} (extended to the M1-Maxwell model in \cite{guisset_asymptotic-preserving_2016}) and finally with the Vlasov-Maxwell system \cite{DegDelDoy}. In Sec.~\ref{sec:AP:VM} a unified presentation of the different regimes is proposed by means of the ``augmented'' Vlasov-Maxwell system. This choice offers different advantages. First, the augmented system contains the difficulties of both the electromagnetic and the electrostatic regimes. Second, since the Ohm law is derived from equations driving the evolution of macroscopic quantities, the construction of AP methods can be readily transposed from the kinetic to the fluid framework. Finally an overview of the numerical implementations of AP-methods is proposed in Sec.~\ref{sec:QN:num}.} \subsection{Outlines of the quasi-neutral and electrostatic limits of the Maxwell system}\label{sec:QN:analysis} \Fabrice{The objectives of this section are twofold. On the one hand, the quasi-neutral limit is investigated thanks to a scaling of the equations in order to explain the paradox raised in the introduction. On the other hand, some consistency properties relating the Maxwell-Amp\`ere and the Maxwell-Gauss equations are outlined. The electrostatic limit of the Maxwell system is also discussed. To this aim, the Maxwell system is complemented with the continuity equation driving the evolution of the charge density, defining the system of interest} \begin{subequations}\label{eq:Maxwell:sys:dim} \begin{align} & \frac{1}{c^2} \frac{\partial E}{\partial t} - \nabla \times B=- \mu_0 J\,,\label{M-Ad}\\ & \frac{\partial B}{\partial t} + \nabla \times E = 0\,,\label{M-Fd}\\ & \nabla \cdot E= \frac{\rho}{\varepsilon_0}\,,\label{M-Gd}\\ & \nabla \cdot B = 0\,,\label{M-Td} \end{align} \end{subequations} \begin{equation} \frac{\partial \rho}{\partial t} + \nabla \cdot J = 0\,, \label{Cd} \end{equation} consisting of the Maxwell-Amp\`ere \eqref{M-Ad}, the Maxwell-Faraday \eqref{M-Fd}, the Maxwell-Gauss \eqref{M-Gd} and the Maxwell-Thomson \eqref{M-Td} equations supplemented with the continuity equation \eqref{Cd}. In these equations $(E,B)$ is the electromagnetic field; the charge and current densities are defined by the electronic and ionic densities and mean velocities as $\rho=q(n_i-n_e)$ and $J=q(n_iu_i-n_eu_e)$, $q$ denoting the elementary charge. Finally $c$ is the speed of light, $\mu_0$ and $\varepsilon_0$ being the vacuum permeability and permittivity, verifying $\varepsilon_0 \mu_0 c^2=1$. 
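Note that the continuity equation \eqref{Cd} is not independent of the Maxwell system: taking the divergence of the Maxwell-Amp\`ere equation \eqref{M-Ad}, using $\nabla\cdot(\nabla\times B)=0$, the Maxwell-Gauss equation \eqref{M-Gd} and the relation $\varepsilon_0\mu_0 c^2=1$, one recovers
\begin{equation*}
\frac{1}{c^2}\,\frac{\partial}{\partial t}\left(\frac{\rho}{\varepsilon_0}\right) = -\mu_0\, \nabla\cdot J \qquad \Longleftrightarrow \qquad \frac{\partial \rho}{\partial t} + \nabla\cdot J = 0\,.
\end{equation*}
This elementary observation is the dimensional counterpart of the consistency properties between the Amp\`ere, Gauss and continuity equations discussed in the remainder of this section.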
\Fabrice{The physical variables are scaled by their typical values: $\bar x$ and $\bar t$ being the space and time scales, we write $x=\bar x \, x'$ and $t=\bar t \, t'$, $x'$ and $t'$ denoting the dimensionless variables. These scales define $\bar \vartheta = \bar x / \bar t$, the velocity driving the changes in the electromagnetic field, the typical magnitudes of the electric and magnetic fields being $\bar E$ and $\bar B$. The plasma characteristics are denoted $\bar T$, $\bar u$ and $\bar n$ for the typical temperature, mean velocity and density, allowing the definition of the Debye length $\lambda_D = (\varepsilon_0 k_B \bar T / (q^2 \bar n))^{1/2}$, with $k_B$ the Boltzmann constant, and of the electronic thermal velocity $v_{th,e}=(k_B \bar T /m_e)^{1/2}$, with $m_e$ the electronic mass. The Maxwell sources are scaled with $\bar \rho =q \bar n $ and $\bar J = q \bar n \bar u$ for the charge and current densities. The introduction of the re-scaled variables into the equations reveals some dimensionless parameters: \begin{equation}\label{eq:dimensionless:parameters} \begin{cases} \displaystyle \alpha=\frac{\bar \vartheta}{c}, \text{ the ratio of the typical velocity to the speed of light}\,;\\[3mm] \displaystyle \zeta=\frac{\bar u}{ \bar \vartheta}, \text{ the plasma mean velocity relative to the speed of interest}\,; \\[3mm] \displaystyle M= \frac{\bar u}{v_{th,e}}, \text{ the electronic Mach number, } v_{th,e}^2=\frac{k_B \bar T}{m_e}\,;\\[3mm] \displaystyle \eta = \frac{q \bar E \bar x}{m_e \bar u^2}, \text{ the ratio of the electric and plasma kinetic energies}\,; \\[3mm] \displaystyle \beta=\frac{\bar \vartheta \bar B}{\bar E}, \text{ the ratio of the induced electric field to the total electric field}\,;\\[3mm] \displaystyle \lambda = \frac{\lambda_D}{\bar x} , \text{ the dimensionless Debye length, } \lambda_D^2 = \frac{\varepsilon_0 k_B \bar T}{q^2 \bar n} \,. \end{cases} \end{equation}} With these dimensionless parameters, the scaled system is recast into \begin{align} & \lambda^2 \frac{\partial E}{\partial t} - \beta\frac{\lambda^2}{\alpha^2} \nabla \times B=- \frac{\zeta}{\eta M^2} J\,,\tag{M-A}\label{M-A}\\ & \beta \frac{\partial B}{\partial t} + \nabla \times E = 0\,,\tag{M-F}\label{M-F}\\ &\lambda^2 \eta M^2 \nabla \cdot E= \rho\,,\tag{M-G}\label{M-G}\\ & \nabla \cdot B = 0\,,\tag{M-T}\label{M-T}\\[2mm] & \frac{\partial \rho}{\partial t} + \zeta \nabla \cdot J = 0\,, \tag{C}\label{C} \end{align} where, for the sake of readability, the primes are omitted for the scaled variables. Two regimes can be identified according to the frequency range characterizing the system evolution. In the high frequency limit, referred to as the Maxwell regime in the sequel, the velocity of interest is assumed comparable to the speed of light and large compared to both the mean velocity of the plasma and the particles' thermal velocity. The Debye length is assumed to be large or comparable to the typical space scale. This translates into the following scaling relations \begin{equation}\label{eq:regime:Maxwell} \lambda^2=\alpha^2 = \beta = \eta = M^2 = 1\,, \quad \zeta \ll 1 \,. \end{equation} In the Maxwell regime the plasma evolution can be disregarded. In the low density approximation $\lambda\gg 1$, the system reduces to the homogeneous Maxwell equations with the propagation of electromagnetic waves at the speed of light. The electric field is computed by means of the displacement current in \eqref{M-A}. 
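As a purely illustrative complement (the numerical values below are our own assumptions and are not taken from the text or from the references), the dimensionless parameters of \eqref{eq:dimensionless:parameters} can be evaluated for a dense laboratory-type plasma; such an evaluation shows that $\alpha$ and $\lambda$ are indeed small in this situation, which motivates the quasi-neutral scaling discussed next.
\begin{verbatim}
# Illustrative only: order-of-magnitude evaluation of the dimensionless
# parameters alpha, M and lambda defined above.  The inputs (density,
# temperature, length scale) are assumed values, chosen for illustration.
import math

eps0, kB, qe, me, c = 8.854e-12, 1.381e-23, 1.602e-19, 9.109e-31, 2.998e8

n_bar = 1e19       # m^-3, assumed plasma density
T_bar = 1.16e5     # K (about 10 eV), assumed temperature
x_bar = 1e-2       # m, assumed macroscopic length scale

v_the    = math.sqrt(kB * T_bar / me)                       # electron thermal speed
lambda_D = math.sqrt(eps0 * kB * T_bar / (qe**2 * n_bar))   # Debye length

u_bar = v_the              # quasi-neutral-type scaling: M = u_bar / v_the = 1
alpha = u_bar / c          # typical velocity over the speed of light
lam   = lambda_D / x_bar   # scaled Debye length

print(f"lambda_D = {lambda_D:.2e} m, v_th,e = {v_the:.2e} m/s")
print(f"alpha = {alpha:.2e}, M = 1, lambda = {lam:.2e}")    # alpha, lambda << 1
\end{verbatim}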
The quasi-neutral limit is defined by a speed of interest comparable to the plasma mean velocity and the particles' thermal velocity, these velocities being assumed small compared to the speed of light. The regime is therefore a low frequency asymptotic. The scaled Debye length is also assumed to define a small scale in this regime, which finally yields the scaling relations \begin{equation}\label{eq:regime:QN} \zeta=\beta = \eta = M^2 = 1\,, \quad \lambda^2=\alpha^2 \ll 1 \,. \end{equation} The assumption $\beta = 1$ is common to all MHD models and referred to as the frozen field assumption. It expresses the fact that, in a dense plasma, the magnetic field is convected with the plasma flow. In particular, the propagation of electromagnetic waves at the speed of light is not possible in a dense plasma \cite{krall_principles_1986,Chen,bittencourt_fundamentals_2004,fitzpatrick_plasma_2014} and therefore not described by quasi-neutral models. The other dimensionless parameters are assumed equal to one in order to simplify the writing. The Maxwell-Gauss equation \eqref{M-G} provides the quasi-neutrality $\rho = 0$, the electric field contribution vanishing in this equation. Therefore, this allows for a non divergence free electric field and explains the paradox raised by a crude reading of the dimensional equations. The Amp\`ere equation also degenerates in the asymptotic $\lambda^2 \to 0$ along with $\alpha^2/\lambda^2=\mathcal{O}(1)$, with a vanishing displacement current, the equation \eqref{M-A} yielding \begin{equation*} \nabla \times B = J \,. \end{equation*} In this equation too, the contribution of the electric field vanishes in the quasi-neutral regime, which leads to the conclusion that the (homogeneous) Maxwell equations cannot be used to compute the electric field in this limit. More precisely, the electric field must be found from the particles' equation of motion \cite{Chen} by means of a generalized Ohm's law, explaining how the current of particles $J$ and the electric field relate to each other. This is routinely implemented in quasi-neutral descriptions of plasmas, the most widely used being the Magneto-Hydro-Dynamic (MHD) models \cite{biskamp_nonlinear_1997,davidson_introduction_2001,freidberg_ideal_2014,spatschek_high_2012}. Different frameworks have been investigated in this direction. The first one is devoted to electrostatic descriptions that can be derived from the dimensionless system above by letting $\alpha \to 0 $. Indeed, by the equation \eqref{M-A} the magnetic field is curl free ($\nabla \times B=0$), which, together with $\nabla \cdot B = 0$ and adequate boundary conditions, provides a constant magnetic field. The equation \eqref{M-F} thus provides a curl free electric field, $\nabla \times E = 0$, assumed to derive from a potential: $E=-\nabla \phi$. In this regime, the Maxwell-Amp\`ere equation can be decomposed into a curl free and a divergence free identity \begin{equation}\label{eq:def:L:T:decomp} \begin{split} \lambda^2 \frac{\partial}{\partial t} \Delta \phi &= \frac{\zeta}{\eta M^2} \nabla \cdot J_L \,, \\ \beta \frac{\lambda^2}{\alpha^2} \nabla \times B &= \frac{\zeta}{\eta M^2} J_T\,, \end{split} \end{equation} where $J=J_L + J_T$, $\nabla \cdot J_T = 0$ and $ \nabla \times J_L =0 $. While $\nabla \times B $ vanishes in the electrostatic regime $\alpha\to0$, the quantity $\nabla \times B / \alpha^2$ remains finite \Fabrice{as long as the current transverse part $J_T$ does not vanish}. 
However it does not contribute to the definition of the electrostatic field, as outlined by the decomposition \eqref{eq:def:L:T:decomp}. This feature can also be recovered by computing formally the divergence of Amp\`ere's law \eqref{M-A}, providing \begin{equation*} \lambda^2 \frac{\partial}{\partial t} \Delta \phi = \frac{\zeta}{\eta M^2}\nabla \cdot J \,. \end{equation*} This equation together with the continuity equation \eqref{C} provides \begin{equation*} - \lambda^2 \eta M^2 \frac{\partial} {\partial t} \Delta \phi = \frac{\partial \rho}{\partial t}\,. \end{equation*} This shows that the Gauss law is a consequence of the Amp\`ere \eqref{M-A} and continuity \eqref{C} equations: the consistency of the initial condition with the Maxwell-Gauss equation \eqref{M-G} is preserved in time. Consequently, in the electrostatic regime, the Maxwell-Gauss equation alone is sufficient to compute the entire electric field, this equation being usually substituted for the whole Maxwell system in this regime. Note that the quasi-neutral limit is thus defined here by $\lambda^2\to 0$ together with $\alpha^2/\lambda^2\to 0$, in contrast to $\lambda^2\to 0$ and $\lambda^2/\alpha^2= \mathcal{O}(1)$ for the electromagnetic framework. \subsection{Asymptotic-Preserving formulation of the Vlasov-Maxwell system}\label{sec:AP:VM} \subsubsection{The scaled Vlasov-Maxwell system} The model investigated here consists of the Maxwell system \eqref{eq:Maxwell:sys:dim} coupled to a Vlasov equation for the electrons, the ions being assumed at rest with a uniform density to simplify the notations. The distribution function, denoted $f$, depends on $x\in \Omega_x \subset \Rset^3$, the microscopic velocity $v \in \Omega_v \subset \Rset^3$ and on time $t \in \Rset^+$. The function is the solution to \begin{equation} \frac{\partial f}{\partial t} + v \cdot \nabla f - \frac{q}{m_e} ({E}+ v \times B)\cdot \nabla_v f =0 \,. \end{equation} In order to address the asymptotic regime straightforwardly, the scaling defined by Eq.~\eqref{eq:dimensionless:parameters} is again harnessed but, to simplify, with $\bar u=\bar \vartheta$, where $\bar \vartheta=\bar x/\bar t$ and $\bar u$ is the mean plasma velocity, which amounts to setting $\zeta=1$. The particle velocity $v$ is scaled with the electronic thermal velocity $v_{th,e}=(k_B \bar T/m_e)^{1/2}$. In the sequel, scaling relations similar to the ones defining the quasi-neutral regime \eqref{eq:regime:QN} will be considered. To simplify further the writing, the two small scales $\alpha^2$ and $\lambda^2$ will be denoted by a single parameter ($\lambda^2$), so that the quasi-neutral regime is easily identified by the limit $\lambda^2\to0$. The dimensionless Vlasov-Maxwell system is \begin{subequations}\label{VM:scaled} \begin{empheq}[left=(VM)^\lambda \empheqlbrace]{align} &\frac{\partial f}{\partial t} + v \cdot \nabla f - ({E}+ v \times B)\cdot \nabla_v f =0\label{VM:scaled:V}\\ & \lambda^2 \frac{\partial {E}}{\partial t} - \nabla \times B = -J,\label{VM:scaled:A}\\ & \frac{\partial B}{\partial t} + \nabla \times E = 0,\label{VM:scaled:F}\\ &\lambda^2 \nabla \cdot {E}= 1 - n, \label{VM:scaled:G} \\ & \nabla \cdot B = 0\,, \end{empheq} with \begin{equation} n=\int_{\Omega_v}f(x,v,t)\ dv\,, \qquad J=- \int_{\Omega_v}f(x,v,t)v\ dv\,. 
\end{equation} \end{subequations} The sources of the Maxwell system verify a continuity equation derived from the moments of the Vlasov equation \eqref{VM:scaled:V}, giving rise to \begin{subequations}\label{eq:Moments} \begin{align} & \frac{\partial n}{\partial t} - \nabla \cdot J = 0 \,, \label{eq:moment:0}\\ & \frac{\partial J}{\partial t} - \nabla \cdot \mathbb{S} = (n E - J \times B)\,, \qquad \mathbb{S}=\int_{\Omega_v}f(x,v,t)v \otimes v\ dv \,. \label{eq:moment:1} \end{align} \end{subequations} As outlined in section~\ref{sec:QN:analysis}, the Gauss equation is a consequence of the Amp\`ere law \eqref{VM:scaled:A} and the continuity equation \eqref{eq:moment:0}. However, the consistency with the latter is not always satisfied by numerical methods. This is, for instance, a common and largely documented flaw of Particle-In-Cell methods (see for instance \cite{BiLa04,BCS07}). The most widely adopted solution is the correction of the electric field predicted by the Amp\`ere equation. This correction is computed by means of an electrostatic potential $p$ verifying the Maxwell-Gauss equation \eqref{VM:scaled:G}. This is the so-called Boris correction \cite{boris_proceedings_1970}, decomposed into two steps. First, the predicted electric field $\tilde E$ is computed by means of Amp\`ere's law. Second, the correction is applied to this field, defining the corrected field $E = \tilde{E} - \nabla p$ so that the Maxwell-Gauss equation is satisfied: \begin{subequations} \begin{equation} \lambda^2 \nabla \cdot {E}= 1 - n\,, \qquad E = \tilde{E} - \nabla p \,. \label{eq:aVM:Corr:0} \end{equation} \end{subequations} This gives rise to the dimensionless Vlasov-Maxwell system augmented with the corrector $p$ \begin{subequations}\label{eq:aVM} \begin{empheq}[left=(aVM)^\lambda \empheqlbrace]{align} &\frac{\partial f}{\partial t} + v \cdot \nabla f - ({E}+ v \times B)\cdot \nabla_v f =0\label{eq:aVM:V}\\ & \lambda^2 \frac{\partial \tilde{E}}{\partial t} - \nabla \times B = - J,\label{eq:aVM:A}\\ & \frac{\partial B}{\partial t} + \nabla \times \tilde{E} = 0,\label{eq:aVM:F}\\ &\lambda^2 \Delta p = \lambda^2 \nabla\cdot \tilde{E} - (1-n)\,,\label{eq:aVM:G}\\ & \nabla \cdot B = 0\,, \label{eq:aVM:T}\\ & E = \tilde{E} - \nabla p \,. \label{eq:aVM:Corr} \end{empheq} \end{subequations} The right hand side of Eq.~\eqref{eq:aVM:G} can be interpreted as the consistency defect in Gauss's law. The corrector $p$ vanishes, subject to the boundary conditions, as soon as the electric field advanced thanks to the Amp\`ere equation verifies the Maxwell-Gauss law. The difficulty in handling the quasi-neutral limit is thus twofold. In addition to the degeneracy of the Amp\`ere equation \eqref{eq:aVM:A}, a means of computing the corrector needs to be worked out for the limit regime, the Maxwell-Gauss equation \eqref{eq:aVM:G} also degenerating in the quasi-neutral limit. This last difficulty is similar to the one posed by the computation of the electric potential in the electrostatic framework. The investigation of the augmented Vlasov-Maxwell system is thus a good means of offering a unified presentation of both regimes. \subsubsection{Reformulation of the augmented Vlasov-Maxwell system}\label{sec:ref:Maxwell} The objective here is to restore a means of computing the electric field in the quasi-neutral regime. \Fabrice{As mentioned above, in the Maxwell regime, the electric field is computed thanks to the displacement current. 
Since this term vanishes from the equation in the limit $\lambda^2\to0$, Ohm's law needs to be considered in order to express how the electric field relates to the current of particles. This finally restores a means of computing the electric field in Amp\`ere's law}. Letting $\lambda^2 \to 0$ in \eqref{eq:aVM:A} and taking the formal time derivative of this equation together with the curl of Faraday's law \eqref{eq:aVM:F} yields \begin{equation}\label{eq:Amp:degenerated} \nabla \times \nabla \times E = -\frac{\partial J}{\partial t} \,. \end{equation} In this equation a link between the electric field and the electric sources is restored. However, it does not allow for the computation of the entire electric field. Indeed, the solution of this equation can be augmented by any gradient without changing the equality: the electrostatic component of the field cannot be uniquely determined from \eqref{eq:Amp:degenerated}. This is corrected thanks to the expression of the current of particles, which expresses the response of the particles to the electric field, with \begin{equation*} \frac{\partial J}{\partial t} = \nabla \cdot \mathbb{S} + n \tilde E - J \times B \,. \end{equation*} Inserting this expression into \eqref{eq:Amp:degenerated}, the equation providing the entire electric field in the quasi-neutral regime can be made precise, with \begin{equation*} \nabla \times \nabla \times \tilde E + n \tilde E = J\times B - \nabla \cdot \mathbb{S} \,. \end{equation*} A similar reformulation can be performed for the correction potential $p$. Indeed, Eq.~\eqref{eq:aVM:G} degenerates into the quasi-neutrality relation $1-n = 0$. This constraint is operated together with the moments of the Vlasov equation in order to derive the equation verified by the corrector. Following the spirit of the Boris procedure, a correction of the electric field is introduced in order for the continuity equation to be verified. Differentiating the continuity equation with respect to time and using the moments of the Vlasov equation, in which the electric field is corrected, the following equation is derived \begin{equation} \frac{\partial^2 n}{\partial t^2} = \nabla \cdot \frac{\partial J}{\partial t} = \nabla^2: \mathbb{S} + \nabla\cdot \big( n (\tilde{E} - \nabla p)\big) - \nabla \cdot ( J \times B )\,, \end{equation} where $ \nabla^2: \mathbb{S} := \nabla \cdot ( \nabla \cdot \mathbb{S})$. This finally provides the equation verified by the corrector in the limit $\lambda^2\to0$, so that it is possible to state the quasi-neutral Vlasov-Maxwell system \begin{subequations}\label{eq:aVM0} \begin{empheq}[left=(aVM)^0 \empheqlbrace]{align} &\frac{\partial f}{\partial t} + v \cdot \nabla_x f - ({E}+ v \times B)\cdot \nabla_v f =0\label{eq:aVM0:V}\\ & \nabla \times \nabla \times \tilde E + n \tilde E = J\times B - \nabla \cdot \mathbb{S} \,,\label{eq:aVM0:A} \\ & \frac{\partial B}{\partial t} + \nabla \times \tilde{E} = 0,\label{eq:aVM0:F}\\ \begin{split} &- \nabla \cdot (n \nabla p) = \frac{\partial ^2 n}{\partial t^2} - \nabla^2:\mathbb{S} - \nabla \cdot (n \tilde{E}) + \nabla \cdot ( J \times B) \,, \label{eq:aVM0:G} \end{split} \\ & \nabla \cdot B = 0\,, \label{eq:aVM0:T}\\ & E = \tilde{E} - \nabla p \,. 
\label{eq:aVM0:Corr} \end{empheq} \end{subequations} In the quasi-neutral limit, the electric field can be interpreted as the Lagrange multiplier of the equilibrium $\nabla \times B = J$, the corrector potential as the Lagrange multiplier of the constraint $\nabla \cdot J = 0$, or, more precisely, of the time derivatives of these identities. The equation \eqref{eq:aVM0:A} outlines the singular nature of the quasi-neutral limit: the electric field verifies a hyperbolic equation in the Maxwell regime defined in section \ref{sec:QN:intro} while it is computed thanks to an elliptic equation in the quasi-neutral limit. On top of that, these two equations describe different physical phenomena: the propagation of an electromagnetic wave at the speed of light on the one hand, the response of the charged particles to the electric field on the other hand. The quasi-neutral regime investigated with this limit model is close to a kinetic description of the so-called Electron MHD \cite{gordeev_electron_1994} \Fabrice{and the quasi-neutral model identified in \cite{tronci_neutral_2015}}. In this system the scale of interest is that of the electron dynamics, rather than that of the ion dynamics as in the classical MHD models, with a finite electron inertia. Moreover the model defined by \eqref{eq:aVM0} remains a fully kinetic description of the plasma. \medskip The aim of the reformulation, leading to an asymptotic preserving method, is to bring these two regimes into a single set of equations with a smooth transition from one to the other according to the value of $\lambda$. With this aim, a derivation similar to that of the limit problem \eqref{eq:aVM0} is performed but keeping $\lambda>0$. This yields the reformulated Vlasov-Maxwell system \begin{subequations}\label{eq:RaVM} \begin{empheq}[left=(RaVM)^\lambda \empheqlbrace]{align} & \frac{\partial f}{\partial t} + v \cdot \nabla_x f - ({E}+ v \times B)\cdot \nabla_v f =0\label{eq:RaVM:V}\\ &\lambda^2 \frac{\partial^2 \tilde{E}}{\partial t^2} + \nabla \times \nabla \times \tilde E + n \tilde E = J\times B - \nabla \cdot \mathbb{S} \,,\label{eq:RaVM:A} \\ & \frac{\partial B}{\partial t} + \nabla \times \tilde{E} = 0,\label{eq:RaVM:F}\\ \begin{split} &- \lambda^2 \frac{\partial^2}{\partial t^2} \Delta p - \nabla \cdot (n \nabla p) = \\ &\hspace*{2cm} \frac{\partial ^2 n}{\partial t^2} - \nabla^2:\mathbb{S} - \nabla \cdot (n \tilde{E}) + \nabla \cdot ( J \times B) \,, \label{eq:RaVM:G} \end{split} \\ & \nabla \cdot B = 0\,, \label{eq:RaVM:T}\\ & E = \tilde{E} - \nabla p \,. \label{eq:RaVM:Corr} \end{empheq} \end{subequations} \begin{remark} \begin{enumerate}[a)] \item The reformulated Amp\`ere equation \eqref{eq:RaVM:A} is well posed (provided adequate boundary conditions) for all values of $\lambda^2$. Indeed in the limit $\lambda \to 0$ the plasma density is large and the operator $\nabla \times \nabla \times E + n E$ is elliptic. Conversely, when $n \to 0$ the scaled Debye length is large and the equation remains well posed. These remarks also apply to the reformulated Gauss law \eqref{eq:RaVM:G} providing the corrector. \item The quasi-neutral Vlasov-Maxwell system \eqref{eq:aVM0} is recovered from the reformulated system when $\lambda \to 0$. The quasi-neutral limit is a regular perturbation of the reformulated system \eqref{eq:RaVM}. \item The right hand side of the equation~\eqref{eq:RaVM:G} can be interpreted as the defect of consistency with the continuity equation, a feature shared with the Boris correction \cite{boris_proceedings_1970}. 
In this respect, this equation can be regarded as a generalization of the Boris correction. \end{enumerate} \end{remark} \subsection{Overview of the numerical methods}\label{sec:QN:num} The purpose here is to use the concepts introduced in the preceding section for the continuous system and to transpose them to the discrete equations. Generally, the time discretization is a key point in the derivation of an AP numerical method. Due to the singular nature of the quasi-neutral limit, several quantities must be computed thanks to an implicit time discretization in order to secure the consistency with both the Maxwell and the quasi-neutral regimes and to provide a means of computing the electric field in every regime. The level of implicitness is controlled by three parameters $(a,b,c)$, each of them being equal to either $1$ or $0$ \cite{degond_numerical_2012}. \begin{subequations}\label{eq:system:semi:discret:temps} \begin{eqnarray} & & \hspace{-1cm} \frac{1}{\Delta t} (n^{ m+1} - n^{ m}) - \nabla \cdot J^{ m+a} = 0, \label{DS1F_n} \\ & & \hspace{-1cm} \frac{1}{\Delta t} ( J^{ m+1} - J^{ m}) - \nabla \cdot \mathbb{S}^m = n^{ m+1-a} E^{ m+1} - J^m \times B^{ m}, \label{DS1F_u} \\ & & \hspace{-1cm} \frac{1}{\Delta t} (B^{ m+1} - B^{ m}) + \nabla \times E^{ m+b} = 0, \label{DS1F_B} \\ & & \hspace{-1cm} \lambda^2 \frac{1}{\Delta t} (\tilde{E}^{ m+1} - E^{ m}) - \nabla \times B^{ m+c} = - J^{ m+a} , \label{DS1F_E} \\ & & \hspace{-1cm} \lambda^2 \nabla \cdot E^{ m+1} = (1 - n^{ m+1})\,, \label{DS1F_divE}\\ & & \hspace{-1cm} E^{ m+1} = \tilde{E}^{ m+1} - \nabla p \,.\label{DS1F_p} \end{eqnarray} supplemented with $ \nabla \cdot B^{ m+1} = 0$. \end{subequations} At this stage, different remarks can be stated: \begin{enumerate}[a)] \item The quasi-neutral regime is recovered for vanishing $\lambda$, which stands for both the scaled Debye length and the ratio of the typical velocity to the speed of light. The stability with respect to $\lambda$ therefore requires an implicit discretization of the (homogeneous) Maxwell equations, yielding $b=c=1$. This is related to the assumption that the typical velocity is small compared to the speed of light ($\alpha\to0$). \item The consistency property with respect to the quasi-neutral regime requires an implicit particle current $J$ in Amp\`ere's law \eqref{DS1F_E}, with an implicit electric field in the definition of $J$. Accordingly, an implicit electric field must be used in the Lorentz force defining the source of the momentum equation Eq.~\eqref{DS1F_u}. These requirements are met for $a=1$. Note that the scaling assumptions imply that the dimensionless Debye length also represents the scaled plasma period. Therefore, the uniform stability property with respect to $\lambda$ ensures the stability of the method for time steps larger than the plasma period. \item The density occurring in the Lorentz force is made explicit when the mass flux is implicit in order to uncouple the resolution of Eqs.~\eqref{DS1F_n} and \eqref{DS1F_u}. \item The consistency with the Maxwell-Gauss equation at the discrete level requires the same level of implicitness for the mass flux in Eq.~\eqref{DS1F_n} and for the current $J$ in Amp\`ere's law \eqref{DS1F_E}. This point will be detailed further in the sequel. \end{enumerate} The linear stability analysis proposed in \cite{degond_numerical_2012} demonstrates that the AP property cannot be achieved with an implicitness level weaker than $(a,b,c)=(1,1,1)$. 
\Fabrice{This choice defines a consistent discretization of the reformulated system \eqref{eq:RaVM}. Indeed, Eqs.~\eqref{DS1F_B}, \eqref{DS1F_E} and \eqref{DS1F_u}, in which the correction is omitted, yield \begin{multline}\label{eq:proof:consistancy} {\frac{\lambda^2}{\Delta t^2} \left( \tilde{E}^{m+1} - {E}^{m} \right) = \frac{1}{\Delta t}\Big(\nabla \times B^m - J^m \Big)}\\ - {\nabla \times \nabla \times {\tilde{E}^{m+1}} -{n}^m\tilde{E}^{m+1} - \nabla\cdot S^m + J^m \times B^m \,.} \end{multline} Owing to the fact that Amp\`ere's law is verified at the previous time step, $ \nabla \times B^m - J^m \approx \frac{\lambda^2}{\Delta t} \left(E^m - E^{m-1} \right)$, the following identity holds \begin{multline*} \frac{\lambda^2}{\Delta t^2} \left(\tilde{E}^{m+1} - 2 {E}^{m} + E^{m-1}\right) + \nabla \times \nabla \times\tilde{E}^{m+1} + \\ {n}^m\tilde{E}^{m+1} + \nabla\cdot S^m - J^m \times B^m \approx 0 \,, \end{multline*} which defines a time semi-discretization of the reformulated Amp\`ere equation \eqref{eq:RaVM:A}. A similar result can be obtained for the Gauss law, with Eqs.~\eqref{DS1F_n}, \eqref{DS1F_u}, \eqref{DS1F_divE}, \eqref{DS1F_p} and \eqref{eq:proof:consistancy} providing \begin{multline*} -\nabla \cdot \Big(\big(\frac{\lambda^2}{\Delta t ^2} + n^{m}\big) \nabla p\Big) = \frac{1}{\Delta t^2} \Big( 1 -\tilde{n}^{m+1} - \lambda^2 \nabla \cdot E^{m} \Big) +\frac{1}{\Delta t}\nabla \cdot J^m \\ + \nabla^2 : S^m - \nabla\cdot ( J^m \times B^m ) + \nabla \cdot ( {n}^m\tilde{E}^{m+1} ) \,, \end{multline*} where $\tilde{n}^{m+1} = {n}^{m+1}+\Delta t^2 \nabla \cdot ( n^m \nabla p)$. Assuming that the Gauss law and the continuity equation are satisfied at the previous time step, i.e. $ \lambda^2 \nabla \cdot E^m \approx 1 - {n}^{m}$ and $ \Delta t\,\nabla \cdot J^m \approx {n}^{m} - {n}^{m-1}$, the following identity holds \begin{multline*} -\nabla \cdot \Big(\big(\frac{\lambda^2}{\Delta t ^2} +{n}^{m}\big) \nabla p\Big) \approx \frac{1}{\Delta t^2} \Big( -\tilde{n}^{m+1} +2 {n}^{m} - {n}^{m-1} \Big) \\+\nabla^2 : S^m - \nabla\cdot ( J^m \times B^m ) + \nabla \cdot ( n^m\tilde{E}^{m+1}) \,. \end{multline*} This defines a time discretization of the reformulated Gauss law \eqref{eq:RaVM:G}, provided that the correction at time levels $m$ and $m-1$ vanishes.} \Fabrice{\begin{remark}\label{rem:form:AP} The time discretization is operated in such a way as to avoid the time differentiation of the equations and thus to provide consistency with Amp\`ere's equation and the time derivative of Gauss's law, rather than with their time derivatives as suggested by the continuous reformulated system \eqref{eq:RaVM}. This remark is clearly illustrated by comparing the reformulated Amp\`ere equation \eqref{eq:RaVM:A} with its discrete counterpart \eqref{eq:proof:consistancy}. The former incorporates a double time derivative of the electric field; the latter, exploiting the time discretization, avoids the time differentiation of Amp\`ere's law. This explains the difference between the PIC-AP1 and PIC-AP2 methods proposed in \cite{degond_asymptotically_2006,degond_asymptotic-preserving_2010} for the Vlasov-Poisson system. The first one is derived as a discretization of the reformulated continuous system and is thus consistent with the double time derivative of Gauss's law. This is in line with Eq.~\eqref{eq:RaVM:G}. The second one is derived from the discrete set of equations and, exploiting the time discretization, implements a parabolic equation rather than a wave-like equation. This is an advantage since only one initial condition is necessary. 
\end{remark}} \medskip Different space discretizations have been considered. For kinetic descriptions, either Particle-In-Cell \cite{degond_asymptotically_2006,degond_asymptotic-preserving_2010,DegDelDoy} or semi-Lagrangian \cite{BCDS09} discretizations have been implemented, while, for fluid descriptions, finite volume schemes (on Cartesian meshes) are used \cite{CDV07,degond_numerical_2012}. In this last series of works dedicated to the Euler-Maxwell system, an exact consistency with the Maxwell-Gauss equation can be obtained. To this end, the numerical flux associated with the mass flux must be used to construct the current of particles used in the Amp\`ere equation. This property is sketched in the next lines in a simplified one-dimensional framework, with $B_x=0$. Denoting by $\mathcal{F}^{m+1}_{k+1/2}$ the numerical mass flux at the interface $x_{k+1/2}$ of the cell $k$, and by $n_{k}^{m}$ and $E_x^m|_{k+1/2}$ the density and the electric field at time $t^{m}= m \,\Delta t$, with $\Delta t$ and $\Delta x$ the time and space mesh intervals, a discretization of the system \eqref{DS1F_n} and \eqref{DS1F_E} is written \begin{align} n_k^{m+1} = n_{k}^{m} - \frac{\Delta t}{\Delta x} \left(\mathcal{F}^{m+1}_{k+1/2} - \mathcal{F}^{m+1}_{k-1/2} \right)\,, \label{1F_n_fd}\\ \lambda^2 \frac{1}{\Delta t} (E_x^{m+1}|_{k+1/2} - E_x^m|_{k+1/2}) = \mathcal{F}^{m+1}_{k+1/2} \,.\label{1F_amperex_fd} \end{align} Eq.~\eqref{1F_amperex_fd} evaluated at the cell interfaces $x_{k+1/2}$ and $x_{k-1/2}$ together with (\ref{1F_n_fd}) yields \begin{equation*} \lambda^2 \frac{1}{\Delta x} (E_x|_{k+1/2}^{m+1} - E_x|_{k-1/2}^{m+1}) + n|_{k}^{m+1} = \lambda^2 \frac{1}{\Delta x} (E_x|_{k+1/2}^{m} - E_x|_{k-1/2}^{m}) + n|_{k}^{m}. \label{1F_gauss_fd} \end{equation*} This expression defines a discretization of the equation $\frac{\partial }{\partial t} \left( \lambda^2 \partial E_x/\partial x \right) = - \frac{\partial n}{\partial t} $; a short numerical illustration of this discrete identity is given at the end of this subsection. \medskip A similar property cannot be obtained with standard PIC methods, since the macroscopic quantities projected onto the grid are inconsistent with the Gauss law. Therefore the correction is mandatory. The time discretization of the equation providing this quantity is straightforwardly obtained from the system \eqref{eq:system:semi:discret:temps}. The scaling relations defining the quasi-neutral regime mean that, besides the dimensionless Debye length and the ratio of the typical velocity to the speed of light, the asymptotic parameter $\lambda$ carries the scaled plasma period $\tau_p$ as well. Indeed, the identity $\lambda^2 = \lambda_D^2/\bar x^2 = 1/{(M\, \bar t\, \omega_p)^2}$, $\omega_p=1/\tau_p$ being the plasma frequency, together with the assumption $M=1$, proves the above assertion. The plasma period usually defines one of the smallest time scales involved in plasma modelling. To perform simulations on large scales, implicit methods have received a lot of attention, specifically in the framework of PIC discretizations for kinetic plasma models, with the direct implicit methods \cite{LCF83,CLF82,CLHP89,HeLa87} or the implicit moment methods \cite{Mas81,BrFo82,WBF86,Mas87,RLB02}. The uniform stability with respect to $\lambda$ ensures that AP-methods remain stable for discretizations that do not resolve the plasma period. Therefore, AP methods share some analogies with implicit or semi-implicit methods. We refer to \cite[section~4]{DegDelDoy} for a more thorough discussion.
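The discrete identity derived above for the one-dimensional scheme can be checked directly on arbitrary data. The following short script is an illustrative sketch (the variable names and values are ours and are independent of the actual codes cited above): it advances random fields by one step of \eqref{1F_n_fd}--\eqref{1F_amperex_fd} and verifies that the discrete quantity $\lambda^2(E_x|_{k+1/2}-E_x|_{k-1/2})/\Delta x + n_k$ is left unchanged, whatever the numerical flux.
\begin{verbatim}
# Illustrative sketch (our own code): one step of the 1D finite-volume update
# leaves the discrete Gauss residual lambda^2 dE/dx + n unchanged.
import numpy as np

rng = np.random.default_rng(0)
K, dx, dt, lam2 = 32, 0.1, 0.05, 1e-4      # cells, mesh sizes, lambda^2

n = 1.0 + 0.1 * rng.standard_normal(K)     # density at cell centers
E = rng.standard_normal(K + 1)             # electric field at interfaces
F = rng.standard_normal(K + 1)             # arbitrary numerical mass flux at interfaces

def gauss_residual(n, E):
    """Discrete lambda^2 * dE/dx + n, evaluated at cell centers."""
    return lam2 * (E[1:] - E[:-1]) / dx + n

before = gauss_residual(n, E)

# one time step of (1F_n_fd) and (1F_amperex_fd)
n_new = n - dt / dx * (F[1:] - F[:-1])
E_new = E + dt / lam2 * F

after = gauss_residual(n_new, E_new)
print(np.max(np.abs(after - before)))      # machine-precision zero
\end{verbatim}
Up to round-off errors, the printed value vanishes, reflecting the exact discrete consistency with the Gauss law discussed above.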
See how results from the STAT Multiple Choice can be used to meet prerequisites. STAT Multiple Choice results may satisfy year 12 English and year 11 mathematics prerequisites. A minimum score is required to meet year 12 English prerequisites for bachelor and associate degrees. There is no minimum score required to meet a Year 11 mathematics prerequisite. Important: you must meet all other prerequisites with equivalent
USPS to hike stamp prices by record amount FOXBusiness - The Postal Service is getting ready to charge Americans more to send their mail. On Sunday, the price of a “Forever” stamp is set to rise by 5 cents to 55 cents, a 10 percent increase and record nominal price adjustment. The agency announced it received approval for the hike in November. But it’s not just stamps that are about to get more expensive. The price to ship a small flat-rate box will increase to $7.90, from $7.20, while a large flat-rate box will rise by more than $1 to $19.95. Priority Mail Express prices will rise by 3.9 percent, while Priority Mail will increase 5.9 percent – those prices aren’t adjusted in line with inflation, but rather with perceived market conditions. First-class mail prices will rise.
First off, even typing the title (the parenthesized bit) gives me a smile and a shrug of personal disbelief: it didn't SEEM like I'd checked off November upon finishing this great ride; but, sure enough, one ride to-go and this year is history already. I'm still on the fence about a December edition - but, especially after conversing about it at work with a close friend ... yeah, it'll happen. This "rando" thing... this "commuting" thing --- surely on the latter, no - I don't do it as often anymore as I'd ideally prefer; however, that embodies idealism in general - and if it's "only" once a week, that's a GOOD thing, not something to asterisk with "well, I *used* to ..." Streak, no streak... riding a bicycle still holds value: slow, fast, plain-clothes, kitted-up, fixed, free, touring, racing, shopping, pubbing....once a day, once a month: doesn't matter.. "well... that's odd.... " No response... not even once. Either my parents and sister simply couldn't endure the spine-tingling horror of another one of my groaners, or they simply didn't hear me . . . but, it's always been something of a personal inside joke, that particular town's name. It's a nice little place, and like a lot of towns around here it's actually not so"little" anymore; but, I have to wonder from where the name had originated. Maybe I should research it... or maybe I should just continue to smile to myself. Either way, when I started looking for an east-west route to complete my personal "master-plan" of permanent routes that start relatively close to my home-base, Peculiar, MO. ended up sitting in just the right place for roughly 200km of fun-ness. The route's name, said with that long-forgotten western drawl of a cowboy-type trying to describe somethin' he ain't-never-done-laid-eyes-on-a'fore, became the perfect moniker... even if the route itself isn't really that strange at all. In fact, it's terrific... only the towns along the way are "strange", perhaps, and hold unique stories of their own... stories of a part of west-central Missouri that time has neglected, of a strong railroad gone-under and the economy that went with it, and of a regional highway network that stole the rest. This is the story of The Mighty Peculiar. The first truly cold ride of the year was upon me..although, for November in eastern KS/western MO., it could have been a LOT worse. The bag-of-plenty packed, bike at the ready, I headed off into the darkness for Peculiar - and the promise of a hot breakfast at the 24-hour Denny's there. I've really taken to planning my routes around services like this one; it just makes things a lot nicer for riders. Heck, this one is centered around the Flying-J travel plaza: with a few small, inexpensive motels nearby, and hot showers within the travel plaza itself - cleaned and maintained via a reservation system for professional over-the-road drivers - it's an oasis of randoneering wish-list items. Sure beats changing clothes in a cramped bathroom stall, and eating yet another ClifBar after a long day in the saddle! So far, so good -- Terry sat waiting for me in his truck, fresh from St. Joe, and - honestly - it didn't feel too cold out...certainly the forecast gusty winds hadn't started yet. Time to eat! Soon, Gary D. showed up, and the food began to hit the table inside the warm, inviting Denny's. Even on a weekday, things weren't too busy - we nearly had the place to ourselves upon arriving a bit after 5:15am, the coffee was hot and fresh, and the food top-notch. 
Thankfully, any stomach issues were distant memories--- which, in retrospect likely had more to do with me possibly carrying around a stomach bug than the locale last month -- but, it was nice to eat hearty for the day and not end up punished for it. Checks paid, we three ventured outside and prepared for the journey ahead. Third (maybe fourth) month in a row, we take our sweet time preparing, reconsidering gloves/hats/windbreakers, and don't get our first receipt until about a quarter-after the intended start time of 6:00am. The air was thick with moisture, cold... the coldest ride-start in months, perhaps in a year, really: I personally started the ride wearing nearly everything I'd brought with me, including a warm-up layer that I'd intended to take off right before departure and leave in the car. Instead, the biting, wet air prompted me and my companions to dress far thicker and warmer than originally planned. We headed out, despite the light winds, braced for cold and puffing clouds of condensed breath from behind face covers. Even after a few hills, my internal furnace hesitated a few times before truly kicking on and warming my extremities. BRRRR!!! As we turned west into a lightening sky, we longed for sunshine and kept a finger or two crossed for no changes in the forecast! We ambled along on the new route, taking in the growing silhouettes of houses and barns as they appeared out of the darkness along the way, up and over slightly rolling terrain - a good, early warm-up for the first dozen-or-so miles of the ride. We trailed along behind a school-bus making morning pickups, and then we had the country roads to ourselves while we quietly retraced the old MS-150 route, which used to start at various places, but, in its last years with the old Sedalia destination it had started off north of Peculiar on the same roads Terry, Gary and I pedaled along for this ride - a large part of my route intent. for as much criticism as it'd received for being "too difficult" for the novice rider (hills), I have fond memories of the annual trek east, toward the promise of cold beer and good music at the old Sedalia MS-Ride overnight event. As Gary and I chatted away the cold, early miles discussing custom bikes, frames, racks, gearing and bags, the sun ultimately appeared and began to burn off some of the chill. Terry only a second or two behind us, we three ate up the mileage - still chilly, but improving - as the wind began to turn up the volume knob, ever-so-slightly. Expected, and partially welcome: as the wind would increase, its direction would promise a speedy return trip later in the day! None of us had enjoyed THAT for months... what a tough year it's been! As we ate up the remains of State Highway "P", I got my first "cue sheet wake-up call" in a while... being a new route, well, sometimes there are surprises. The first, a big "Pavement Ends" sign on the right side of the road... "uhhhh.... that's not what I remember...." I muttered... Echoing their hesitation, my companions also slowed a bit, while I peered at my copy of the cue and down the road... was that gravel?? surely.... Well, no ... it ended up being fine, and I remember distinctly feeling much the same way during my last outing on this road: the sign remains, but the road surface has long-since been paved over. No worries after all! I need to make a note of that on the cue.... ugh... 
Coming off the back-roads, we emerged onto the old "main drag" which had taken thousands of riders east, over the years, to the first MS-150 rest-stop along MO-58 highway. Part of the reason the MS Society (allegedly) had elected to abandon this route and merge with the Topeka, KS. event involved complaints of MO-58 being too busy a road... and, I'll grant you, in the afternoons it can be: today, however, light traffic...typical of the morning out this way, the route is designed to keep folks off the main highways (pending your start time) until "rush-hour" is largely over. For us, it worked out nicely. As the cold continued to tug and chew at my fingers and toes, the camera remained nestled cozily in the front bag despite repeated notions to grab it an snap shots. I already adore this route for its scenery: the vast and rolling western Missouri landscape contains working and abandoned farms in such density, few miles pass without something interesting to ponder. When warmer months bless us once again, I'll photo-document the opening miles more completely; but - future riders take note - the mystery is there for the discovering. I won't, and shouldn't, take TOO many photos, lest I rob anyone of the joys of discovering that awesome old barn for themselves. Along route "O", after pushing up several rolling grades and enduring a bit of headwind, a break - and exposing fingers to morning, Fall air finally comes without the urgency to quickly re-glove. Pictures - I'm learning - help me focus on the chronology and important bits, rather than rambling on - editor-free - about "notmuch." Thus... The skies a bright blue, wisps of high clouds and a roar from the northeast... a glorious sight, a B-2 bomber flies overhead - good time for a roadside break, Gary and I shed a layer and grab a snack from various bags. We're greeted by a local farmer headed to the end of her driveway to get picked up by one of her hands, both off to rescue a calf that had gotten out. Terry catches up shortly afterward, and we talk about .. well, how awesome the weather is proving out. What a great, great day! One at a time, we click back into our pedals, and let gravity do the work of getting us back up to pace... "how 'bout them hills??" the sign used to say, stuck in the grass alongside the highway - greeting and taunting MS-riders at the same time, teasing with "5-miles to lunch!" and "almost there!" I glance occasionally off to the grass lowering beside the road... partially hoping to see one still there. Has it really been 11 years since my first time out this way? So many miles since then... and, like an adult returning to their childhood primary school desk - perhaps at a conference for their own kids - I think for a moment, have the hills shrunk? We watch Terry slip by, undeterred and patient, with a confident and happy wave. He's a relentless metronome of a rider: strong, quiet, unwavering. He's self-described as "not fast", but he always seems to catch up, only minutes behind us - no matter where, or what, or when. We're all fairly seasoned as a group now, fast approaching the makings of a solid Fleche team one day, I'm sure, with some minor tweaks... which, basically, means we need to start riding like Terry. Back on the road, Gary and I catch him up after a few minutes... but, it's a slow process, each of us seeming to ride within 2% of the other, on any of our previous six or seven outings. Consistency is good! The halfway control comes into view, and we all dismount for some good eats and a rest. 
A sure indicator that I'm acclimating nicely to the chillier conditions, it's nearly HOT pulling up to a stop in the full sunshine and southerly winds. We saunter inside, cards signed --- and, realizing that I'm indeed breaking new ground with this route, I proceed explaining to the confused cashier what the card is for, why it's okay to sign it, and just "what the heck we're doin'." It's all received well, and before long the inviting food behind the counter attacks our senses - Gary and I begin ordering this and that, and check out with armloads of goodness. Chomp! Biscuits, chicken strips, gizzards and livers... frog legs on Fridays!? This is a well-stocked control. Terry rolls up, and the scene repeats. Food is stashed away in stomachs and in waiting bags for the road -- the next fifty miles won't have a lick of services, so, despite the chilly air, an extra 20oz. bottle of water makes its way into my saddlebag, just in case. We all proceed to pack up and lash this and that to the tops of bags and racks, second-guessing ourselves at least three times. Is it warmer in the sun? Is the wind increasing? What's the temperature now? Layers on... layers off again.... the first relatively cold, layer-heavy ride of the season proves again: we're out of practice! Under or over-dressed yet to be decided, we roll out -- the wind at our backs! (you have no idea how nice it is to type THAT!) On the way out of Calhoun, the inevitable questions of earlier came to their answers, as sweat began to roll down my back as I pedaled along... yep, too hot... overdressed. Unconcerned, I stop and begin the process of shedding and storing layers... almost too many layers for my ample saddlebag to handle - thank goodness for external lashings and d-rings. Gary begins to shrink on the immediate horizon, and then Terry slips by - atop a hill, next to a farm gate off the road along highway "J", under a deep blue canopy of heaven - grass and tree limbs singing, and a distant hawk crying with joy beyond... man... what a great day to not be in a hurry... the thought was fleeting; but, honestly, after recent weeks and workstress - in a terrible hurry was the last thing I wanted to be. It was like, standing on that hilltop watching Terry and Gary become distant dots on the horizon while I wrapped up my extra layers and took long gazes into the distance ... I finally understood the point that Spencer and a what I remember of a few guys down in the Texas LSR (Lone Star Randonneurs) club were talking about: I flash back to February, 2008 - I'm on a great section of Texas highway shoulder, on the way to the first control, chatting it up with Ort of Texas and enjoying the mild weather... almost 600 miles south of the ice storm and freezing temps of Olathe. Ort described the LSR scene, at that time, a part of which he'd only been a member for a few months; he talked of the guys now called the "K-Hounds" - a fast-but-friendly, hard-edged-yet-encouraging and well-drilled group of front-runners who'd polished their rando-routine to a high sheen. From the outside looking in, it might have seemed to most that the entire LSR group would be tough to hang with... but, not so: he went on to describe a polar-opposite group of LSR riders, serious in their pursuits, yet somehow more casual. Sneakers and flat pedals, and a determined goal to finish ... but to finish and use ALL the allotted time to do so. 
At the time, I was dismissive of the mere notion of riding slow and taking one's time on purpose - but, therein, I think I'd missed the point. A finish is a finish... 8 hours, or 13 hours: the ACP doesn't care, nor does RUSA.... so why do *I*? Make no mistake... those tennis-shoe'd riders were indeed strong, and consistent, and able to time things JUST so... for maximum enjoyment and ride time. After all, if one is out there to enjoy the day, and the day is as good as it is long, why on Earth would one be in a hurry?? Meanwhile, I recall reading a ride report from a 600km event, written by our own Spencer - wherein, while my writing style had been busy outlining my struggles and hardships in great detail, HIS ride report mentioned only one, true issue during his 600k: that (paraphrased:) on day three, sometimes a feeling resembling sadness would set in - because the end of the riding was drawing to its inevitable close. I had never understood what he'd meant - not really - not until very recently. Perhaps it's a convenient justification for me as I've noticed my average speeds leveling off, along with my anxiousness about those numbers, that my new motivation is to just slow down and enjoy -- but, after reaching the halfway and beginning to reap the benefits of a strong and growing tailwind, I found myself in LESS of a hurry... not quite so anxious to take maximum advantage of the speed assistance. I took one more long look around, and then mounted up -- first motivated to catch.... then, more reserved. The tank was full. I felt fantastic. No gut issues. No saddle issues. I just ... relatively-speaking ...became keen on enjoying the day itself, not worrying about what time it showed on the clock, nor what time I thought I might finish. For once, I just RODE. Still content to take my time... which is to say, "It's difficult to chase what you can't see," I took in the sights, smells and sounds of the route, meandering along Route CC toward the west and the bottom portion of the Mighty Peculiar loop. Along the way I happened upon Terry taking a quick roadside break - but, Gary... gosh, who knew if I'd see him again anytime soon. I seem to have, however, learned from my past shortcomings with regards to knowing the land, and landmarks, of a new route... compared to the Border Patrol route, for example; which I've ridden perhaps 15 times, and it wasn't until visit #10 that another rider had to point out a scenic marker that I'd never noticed before... despite it being maybe only 10 feet off the road. Surely I'd missed some of the smaller, yet-to-be-discovered details on this new route -- but, for the most part, I don't know if I missed much, my head on a swivel as I rode past barn after barn. Terry and I finally make the East Lynne, MO. control after the 50.4 mile jaunt from Calhoun. Spirits high, but tired, we both take to the inside, get our cards signed and eat something while refilling our bottles. Checking the sun angle once more, it seems we'd only have 30 minutes of daylight left - so, the regular glasses come back out of hiding, along with a few layers to put back on. This time of year, when the sun goes away the heat quickly follows. Finally feeling good after several ounces of water (slight headache coming on?), we mount up and start to make our way back out. Gary? Heck... haven't seen him. Not even a glimpse on the horizon, and Terry tactfully and purposefully does NOT ask the lady working the counter how long it'd been since his passage. 
Things like that, best to keep them out of our heads -- and he's taught me another solid lesson there. After all, having that info... what good would it do me now? Confirm the obvious? Why mess with that... ride your own ride, 'dude. Satisfied, with extra grub at the ready in the front bag, Terry and I saddled up for the last dozen or so miles to the finish. Twenty, tops. Breezes dying, heat leaving, and shadows growing very long... time to move out. After 11 hours and some change (okay, a LOT of change), The Mighty Peculiar - no, we didn't break any records. The weather was kind, and the miles came easy... easier for Gary, obviously: as we'd later discover he'd finished nearly 50 minutes ahead of us! (And, yes, he's still riding that old, garage-sale acquired Astro-Daimler 10-speed!) All in all, however, the wind was kind, and the temperatures for November??? Wow....! What an awesome ride! Note to self, however: if I could have changed one thing, it might have been to add toe warmers for the first 30 miles - but, really, that's me searching for a complaint... aside from that, absolutely zero to complain about! Looking forward to December... yep... still on the fence. I have to make sure I keep a close eye on the "fun meter," because - going back to a recent conversation at the office with a fellow cyclist, if it isn't fun, why am I out there?? Surely as the sun will rise and the midwestern plains-states weather will remain difficult to forecast, a rare, mild December week will materialize - and hopefully I get the timing just-so to enjoy the benefits. Considering the KCUC crew in general already had a wintry, snowy ride back in March, maybe I'll get a break? Ha.... right.... Thanks for reading, as always --- and stay tuned!
\begin{document} \title[A stochastic integral of operator-valued functions] {A stochastic integral of operator-valued functions} \author{Volodymyr Tesko} \address{Institute of Mathematics, National Academy of Sciences of Ukraine, 3 Teresh\-chenkivs'ka, Kyiv, 01601, Ukraine} \email{[email protected]} \subjclass[2000]{Primary 46G99, 47B15, 60H05} \date{16/01/2008} \dedicatory{To Professor M. L. Gorbachuk on the occasion of his 70th birthday.} \keywords{Stochastic integral, resolution of identity, normal martingale, Fock space.} \begin{abstract} In this note we define and study a Hilbert space-valued stochastic integral of operator-valued functions with respect to Hilbert space-valued measures. We show that this integral generalizes the classical It\^{o} stochastic integral of adapted processes with respect to normal martingales and the It\^{o} integral in a Fock space. \end{abstract} \maketitle \section{Introduction} Here and subsequently, we fix a real number $T>0$. Let $\mathcal{H}$ be a complex Hilbert space, $M$ be a fixed vector from $\mathcal{H}$ and $ [0,T]\ni t\mapsto E_t $ be a resolution of identity in $\mathcal{H}$. Consider the $\mathcal{H}$-valued function ({\it abstract martingale}) $$ [0,T]\ni t\mapsto M_t:=E_tM\in\mathcal{H}. $$ In this paper we construct and study an integral \begin{equation}\label{eq.1.1} \int_{[0,T]}A(t)\,dM_t \end{equation} for a certain class of operator-valued functions $[0,T]\ni t\mapsto A(t)$ whose values are linear operators in the space $\mathcal{H}$. We define such an integral as an element of the Hilbert space $\mathcal{H}$ and call it a {\it Hilbert space-valued stochastic integral} (or {\it $H$-stochastic integral}). By analogy with the classical integration theory we first define integral (\ref{eq.1.1}) for a certain class of simple operator-valued functions and then extend this definition to a wider class. We illustrate our abstract constructions with a few examples. Thus, we show that the classical It\^{o} stochastic integral is a particular case of the $H$-stochastic integral. Namely, let $\mathcal{H}:=L^2(\Omega,\mathcal{A},P)$ be a space of square integrable functions on a complete probability space $(\Omega,\mathcal{A},P)$, $\{\mathcal{A}_t\}_{t\in[0,T]}$ be a filtration satisfying the usual conditions and $\{N_t\}_{t\in[0,T]}$ be a normal martingale on $(\Omega,\mathcal{A},P)$ with respect to $\{\mathcal{A}_t\}_{t\in[0,T]}$, i.e., $$ \{N_t\}_{t\in[0,T]}\quad\text{and}\quad \{N_t^2-t\}_{t\in[0,T]} $$ are martingales for $\{\mathcal{A}_t\}_{t\in[0,T]}$. It follows from the properties of martingales that \begin{equation*} N_t=\mathbb{E}[N_T|\mathcal{A}_t],\quad t\in[0,T], \end{equation*} where $\mathbb{E}[\,\cdot\,|\mathcal{A}_t]$ is a conditional expectation with respect to the $\sigma$-algebra $\mathcal{A}_t$. It is well known that $\mathbb{E}[\,\cdot\,|\mathcal{A}_t]$ is the orthogonal projector in the space $L^2(\Omega,\mathcal{A},P)$ onto its subspace $L^2(\Omega,\mathcal{A}_t,P)$ and, moreover, the corresponding projector-valued function $ \mathbb{R}_+\ni t\mapsto E_t:=\mathbb{E}[\,\cdot\,|\mathcal{A}_t] $ is a resolution of identity in $L^2(\Omega,\mathcal{A},P)$, see e.g. \cite{S, BZU87, BZU89, M, B98a}. In this way the normal martingale $\{N_t\}_{t\in[0,T]}$ can be interpreted as an abstract martingale, i.e., $$ [0,T]\ni t\mapsto N_t=\mathbb{E}[N_T|\mathcal{A}_t]=E_tN_T\in\mathcal{H}. $$ Hence, in the space $L^2(\Omega,\mathcal{A},P)$ we can construct the $H$-stochastic integral with respect to the normal martingale $N_t$. 
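For example (the simplest and most classical case), if $N_t=B_t$ is a Brownian motion on $[0,T]$ and $\{\mathcal{A}_t\}_{t\in[0,T]}$ is its augmented natural filtration, then $E_t=\mathbb{E}[\,\cdot\,|\mathcal{A}_t]$ and the corresponding abstract martingale
$$
[0,T]\ni t\mapsto M_t=E_tB_T=\mathbb{E}[B_T|\mathcal{A}_t]=B_t\in L^2(\Omega,\mathcal{A},P)
$$
is the Brownian motion itself; this special case may serve as a concrete picture for the general constructions below.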
Let $F\in L^2([0,T]\times\Omega,dt\times P)$ be a square integrable stochastic process adapted to the filtration $\{\mathcal{A}_t\}_{t\in[0,T]}$. We consider the operator-valued function $[0,T]\ni t\mapsto A_F(t)$ whose values are operators $A_F(t)$ of multiplication by the function $F(t)=F(t,\cdot)\in L^2(\Omega,\mathcal{A},P)$ in the space $L^2(\Omega,\mathcal{A},P)$, $$ L^2(\Omega,\mathcal{A},P)\supset\mathop{\rm Dom}(A_F(t))\ni G\mapsto A_F(t)G:=F(t)G\in L^2(\Omega,\mathcal{A},P). $$ In this paper we prove that the $H$-stochastic integral of $[0,T]\ni t\mapsto A_F(t)$ coincides with the classical It\^{o} stochastic integral $\int_{[0,T]}F(t)\,dN_t$ of $F$. That is, $$ \int_{[0,T]}A_F(t)\,dN_t=\int_{[0,T]}F(t)\,dN_t. $$ In the last part of this note we show that the It\^{o} integral in a Fock space is the $H$-stochastic integral and establish a connection of such an integral with the classical It\^{o} stochastic integral. The corresponding results are given without proofs (the proofs will be given in a forthcoming publication). Note that the It\^{o} integral in a Fock space is a useful tool in the quantum stochastic calculus, see e.g. \cite{Attal} for more details. We remark that in \cite{BZU87, BZU89} the authors gave a definition of the operator-valued stochastic integral \begin{equation*} B:=\int_{[0,T]}A(t)\,dE_t \end{equation*} for a family $\{A(t)\}_{t\in[0,T]}$ of commuting normal operators in $\mathcal{H}$. Such an integral was defined using a spectral theory of commuting normal operators. It is clear that for a fixed vector $M\in \mathop{\rm Dom}(B)\subset \mathcal{H}$ the formula \begin{equation*} \int_{[0,T]}A(t)\,dM_t:=\Big(\int_{[0,T]}A(t)\,dE_t\Big)M \end{equation*} can be regarded as a definition of integral (\ref{eq.1.1}). In this way we obtain another definition of integral (\ref{eq.1.1}) different from the one we have proposed in this paper. \section{The construction of the $H$-stochastic integral} \enlargethispage{1cm} Let $\mathcal{H}$ be a complex Hilbert space, $\mathcal{L}(\mathcal{H})$ be a space of all bounded linear operators in $\mathcal{H}$, $M\neq 0$ be a fixed vector from $\mathcal{H}$ and $$ [0,T]\ni t\mapsto E_t\in\mathcal{L}(\mathcal{H}) $$ be a resolution of identity in $\mathcal{H}$, that is a right-continuous increasing family of orthogonal projections in $\mathcal{H}$ such that $E_T=1$. Note that the resolution of identity $E$ can be regarded as a projector-valued measure $\mathcal{B}([0,T])\ni \alpha\mapsto E(\alpha)\in\mathcal{L}(\mathcal{H})$ on the Borel $\sigma$-algebra $\mathcal{B}([0,T])$. Namely, for any interval $(s,t]\subset [0,T]$ we set $$ E((s,t]):=E_t-E_s,\quad E(\{0\}) :=E_0,\quad E(\varnothing):=0, $$ and extend this definition to all Borel subsets of $[0,T]$, see e.g. \cite{BSU90} for more details. By definition, the $\mathcal{H}$-valued function \begin{equation*} [0,T]\ni t\mapsto M_t:=E_tM\in \mathcal{H} \end{equation*} is an {\it abstract martingale} in the Hilbert space $\mathcal{H}$. In this section we give a definition of integral (\ref{eq.1.1}) for a certain class of operator-valued functions with respect to the abstract martingale $M_t$. A construction of such an integral is given step-by-step, beginning with the simplest class of operator-valued functions. Let us introduce the required class of simple functions. 
For each point $t\in[0,T]$, we denote by $$ \mathcal{H}_{M}(t):=\mathop{\rm span}\{M_{s_2}-M_{s_1} \,|\,(s_1,s_2]\subset(t,T]\}\subset \mathcal{H} $$ the linear span of the set $\{M_{s_2}-M_{s_1} \,|\,(s_1,s_2]\subset(t,T]\}$ in $\mathcal{H}$ and by $$ \mathcal{L}_{M}(t)=\mathcal{L}(\mathcal{H}_{M}(t)\to\mathcal{H}) $$ the set of all linear operators in $\mathcal{H}$ that continuously act from $\mathcal{H}_{M}(t)$ to $\mathcal{H}$. The increasing family $\mathcal{L}_{M}=\{\mathcal{L}_{M}(t)\}_{t\in[0,T]}$ will play here a role of the filtration $\{\mathcal{A}_t\}_{t \in [0,T]}$ in the classical martingale theory. For a fixed $t\in[0,T)$, a linear operator $A$ in $ \mathcal{H}$ will be called {\it $\mathcal{L}_{M}(t)$-measurable} if \begin{itemize} \item[(i)] $A\in\mathcal{L}_{M}(t)$ and, for all $s\in[t,T)$, \begin{equation*} \|A\|_{\mathcal{L}_{M}(t)}=\|A\|_{\mathcal{L}_{M}(s)} :=\sup \Big\{\frac{\|Ag\|_{\mathcal{H}}}{\|g\|_{\mathcal{H}}}\,\Big|\, g\in\mathcal{H}_{M}(s),\,g\neq 0\Big\}. \end{equation*} \item[(ii)] $A$ is partially commuting with the resolution of identity $E$. More precisely, \begin{equation*} AE_sg=E_sAg,\quad g\in\mathcal{H}_{M}(t),\quad s\in[t,T]. \end{equation*} \end{itemize} Such a definition of $\mathcal{L}_{M}(t)$-measurability is motivated by a number of reasons: \begin{itemize} \item $\mathcal{L}_{M}(t)$-measurability is a natural generalization of the usual $\mathcal{A}_t$-measurability in classical stochastic calculus, see Lemma \ref{l.1} (Section \ref{s.3}) for more details; \item in some sense, $\mathcal{L}_{M}(t)$-measurability (for each $t$) is the minimal restriction on the behavior of a simple operator-valued function $[0,T]\ni t\mapsto A(t)$ that will allow us to obtain an analogue of the It\^{o} isometry property (see inequality (\ref{eq.3.3}) below) and to extend the $H$-stochastic integral from a simple class of functions to a wider one. \end{itemize} In what follows, it is convenient for us to call $\mathcal{L}_{M}(T)$-measurable all linear operators in $\mathcal{H}$. Evidently, if a linear operator $A$ in $\mathcal{H}$ is $\mathcal{L}_{M}(t)$-measurable for some $t\in[0,T]$ then $A$ is $\mathcal{L}_{M}(s)$-measurable for all $s\in [t,T]$. \enlargethispage{1cm} A family $\{A(t)\}_{t\in[0,T]}$ of linear operators in $\mathcal{H}$ will be called {\it a simple $\mathcal{L}_M$-adapted operator-valued function on} $[0,T]$ if, for each $t\in[0,T]$, the operator $A(t)$ is $\mathcal{L}_{M}(t)$-measurable and there exists a partition $0=t_0<t_1<\cdots<t_n=T$ of $[0,T]$ such that \begin{equation}\label{eq.3.1} A(t)=\sum_{k=0}^{n-1}A_k\varkappa_{(t_k,t_{k+1}]}(t),\quad t\in[0,T], \end{equation} where $\varkappa_{\alpha}(\cdot)$ is the characteristic function of the Borel set $\alpha\in\mathcal{B}([0,T])$. Let $S=S(M)$ denote the space of all simple $\mathcal{L}_M$-adapted operator-valued functions on $[0,T]$. For a function $A\in S$ with representation (\ref{eq.3.1}) we define an {\it $H$-stochastic integral} of $A$ with respect to the abstract martingale $M_t$ through the formula \begin{equation}\label{eq.3.21} \int_{[0,T]}A(t)\,dM_t:=\sum_{k=0}^{n-1}A_k(M_{t_{k+1}}-M_{t_{k}}) \in\mathcal{H}. \end{equation} We can show that this definition does not depend on the choice of representation of the simple function $A$ in the space $S$. 
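To fix ideas, here is the simplest, purely illustrative instance of formula (\ref{eq.3.21}): let each $A_k$ be the operator of multiplication by a scalar $c_k\in\mathbb{C}$, that is, $A_kg=c_kg$ for all $g\in\mathcal{H}$. Every such $A_k$ commutes with each $E_s$ and, provided the subspaces $\mathcal{H}_{M}(s)$, $s\in[t_k,T)$, are nontrivial, $\|A_k\|_{\mathcal{L}_{M}(s)}=|c_k|$ for all such $s$; hence $A(t)=\sum_{k=0}^{n-1}A_k\varkappa_{(t_k,t_{k+1}]}(t)$ belongs to $S$ and
\begin{equation*}
\int_{[0,T]}A(t)\,dM_t=\sum_{k=0}^{n-1}c_k\,(M_{t_{k+1}}-M_{t_{k}}),\qquad
\Big\|\int_{[0,T]}A(t)\,dM_t\Big\|_{\mathcal{H}}^{2}
=\sum_{k=0}^{n-1}|c_k|^{2}\,\|M_{t_{k+1}}-M_{t_{k}}\|_{\mathcal{H}}^{2},
\end{equation*}
the second equality being a consequence of the mutual orthogonality of the increments $E((t_k,t_{k+1}])M$.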
In the space $S$ we introduce a quasinorm by setting \begin{equation*} \|A\|_{S_2}:=\Big(\int_{[0,T]}\|A(t)\|_{\mathcal{L}_{M}(t)}^{2}\,d\mu(t) \Big)^{\frac{1}{2}} :=\Big(\sum_{k=0}^{n-1}\|A_k\|_{\mathcal{L}_{M}(t_k)}^{2}\mu((t_k,t_{k+1}]) \Big)^{\frac{1}{2}} \end{equation*} for each $A\in S$ with representation (\ref{eq.3.1}). Here the measure $\mu$ is defined by the formula $$ \mathcal{B}([0,T])\ni\alpha\mapsto \mu(\alpha):=\|M(\alpha)\|_{\mathcal{H}}^{2}=(E(\alpha)M,M)_{\mathcal{H}}\in\mathbb{R}_+, $$ where $M(\alpha):=E(\alpha)M$ for all $\alpha\in\mathcal{B}([0,T])$, in particular, $$ M((t_k,t_{k+1}]):=E((t_k,t_{k+1}])M=M_{t_{k+1}}-M_{t_{k}}, \quad (t_k,t_{k+1}]\subset[0,T]. $$ The following statement is fundamental. \begin{thm}\label{t.1} Let $A,B\in S$ and $a,b\in\mathbb{C}$. Then \begin{equation*} \int_{[0,T]}\Big(aA(t)+bB(t)\Big)\,dM_t =a\int_{[0,T]}A(t)\,dM_t+b\int_{[0,T]}B(t)\,dM_t \end{equation*} and \begin{equation}\label{eq.3.3} \Big\|\int_{[0,T]}A(t)\,dM_t\Big\|_{\mathcal{H}}^{2} \leq \int_{[0,T]}\|A(t)\|_{\mathcal{L}_{M}(t)}^{2}\,d\mu(t). \end{equation} \end{thm} \begin{proof} The first assertion is trivial. Let us check inequality (\ref{eq.3.3}). Using (i), (ii) and properties of the resolution of identity $E$, for $A\in S$ with representation (\ref{eq.3.1}), we obtain \begin{equation*} \begin{split} \Big\|\int_{[0,T]}A(t)\,dM_t\Big\|_{\mathcal{H}}^{2} & =\Big(\int_{[0,T]}A(t)\,dM_t, \int_{[0,T]}A(t)\,dM_t\Big)_{\mathcal{H}}\\ & =\sum_{k,m=0}^{n-1}\big(A_kM(\Delta_k),A_mM(\Delta_m)\big)_{\mathcal{H}}\\ & =\sum_{k,m=0}^{n-1}\big(A_kE(\Delta_k)M,A_mE(\Delta_m)M\big)_{\mathcal{H}}\\ & =\sum_{k,m=0}^{n-1}\big(E(\Delta_k)A_kE(\Delta_k)M, E(\Delta_m)A_mE(\Delta_m)M\big)_{\mathcal{H}}\\ & =\sum_{k=0}^{n-1}\big(A_kE(\Delta_k)M,A_kE(\Delta_k)M\big)_{\mathcal{H}} =\sum_{k=0}^{n-1}\|A_kM(\Delta_k)\|_{\mathcal{H}}^{2}\\ & \quad\leq\sum_{k=0}^{n-1}\|A_k\|_{\mathcal{L}_M(t_k)}^{2}\| M(\Delta_k)\|_{\mathcal{H}}^{2} =\sum_{k=0}^{n-1}\|A_k\|_{\mathcal{L}_{M}(t_k)}^{2}\mu(\Delta_k)\\ & =\int_{[0,T]}\|A(t)\|_{\mathcal{L}_{M}(t)}^{2}\,d\mu(t), \end{split} \end{equation*} where $\Delta_k:=(t_k,t_{k+1}]$ for all $k\in\{0,\ldots,n-1\}$. \end{proof} \enlargethispage{1cm} Inequality (\ref{eq.3.3}) enables us to extend the $H$-stochastic integral to operator-valued functions $[0,T]\ni t\mapsto A(t)$ which are not necessarily simple. Namely, denote by $S_2=S_2(M)$ a Banach space associated with the quasinorm $\|\cdot\|_{S_2}$. For its construction, it is first necessary to pass from $S$ to the factor space $$ \dot{S}:=S/\{A\in S\,|\,\|A\|_{S_2}=0\} $$ and then to take the completion of $\dot{S}$. It is not difficult to see that elements of the space $S_2$ are equivalence classes of operator-valued functions on $[0,T]$ whose values are linear operators in the space $\mathcal{H}$. An operator-valued function $[0,T]\ni t\mapsto A(t)$ will be called {\it $H$-stochastic integrable} with respect to $M_t$ if $A$ belongs to the space $S_2$. It follows from the definition of the space $S_2$ that for each $A\in S_2$ there exists a sequence $(A_n)_{n=0}^{\infty}$ of simple operator-valued functions $A_n\in S$ such that \begin{equation}\label{eq.3.4} \int_{[0,T]}\|A(t)-A_n(t)\|_{\mathcal{L}_M(t)}^{2}\,d\mu(t)\rightarrow 0 \quad {\text{\rm as}}\quad n\rightarrow\infty. 
\end{equation} Due to (\ref{eq.3.3}), for such a sequence $(A_n)_{n=0}^{\infty}$, the limit $$ \lim_{n\to\infty}\int_{[0,T]}A_n(t)\,dM_t $$ exists in $\mathcal{H}$ and does not depend on the choice of the sequence $(A_n)_{n=0}^{\infty}\subset S$ satisfying (\ref{eq.3.4}). We denote this limit by $$ \int_{[0,T]}A(t)\,dM_t:=\lim_{n\to\infty}\int_{[0,T]}A_n(t)\,dM_t $$ and call it the {\it $H$-stochastic integral of} $A\in S_2$ with respect to the abstract martingale $M_t$. It is clear that for all $A\in S_2$ the assertions of Theorem~\ref{t.1} still hold. Let us note one simple property of the integral introduced above. Let $U$ be a unitary operator acting from $\mathcal{H}$ onto another complex Hilbert space $\mathcal{K}$. Then \begin{equation*} [0,T]\ni t\mapsto G_t:=UM_t\in \mathcal{K} \end{equation*} is an abstract martingale in the space $\mathcal{K}$ because, for any $t\in[0,T]$, \begin{equation*} G_t=UM_t=X_tG,\quad X_t:=UE_tU^{-1},\quad G:=UM\in\mathcal{K}, \end{equation*} and $X_t$ is a resolution of identity in the space $\mathcal{K}$. Let an operator-valued function $[0,T]\ni t\mapsto A(t)$ be $H$-stochastic integrable with respect to $M_t$. One can show that the operator-valued function $[0,T]\ni t\mapsto UA(t)U^{-1}$ is $H$-stochastic integrable with respect to $G_t$ and \begin{equation*} U\Big(\int_{[0,T]}A(t)\,dM_t\Big) =\int_{[0,T]}UA(t)U^{-1}\,dG_t\in\mathcal{K}. \end{equation*} \section{The It\^{o} stochastic integral as an $H$-stochastic integral}\label{s.3} Let $(\Omega, \mathcal{A}, P)$ be a complete probability space and $\{\mathcal{A}_t\}_{t \in [0,T]}$ be a right-continuous filtration. Suppose that the $\sigma$-algebra $\mathcal{A}_0$ contains all $P$-null sets of $\mathcal{A}$ and $\mathcal{A}=\mathcal{A}_T$. Moreover, we assume that $\mathcal{A}_0$ is trivial, i.e., every set $\alpha\in\mathcal{A}_0$ has probability $0$ or $1$. Let $N=\{N_t\}_{t\in[0,T]}$ be a {\it normal martingale} on $(\Omega, \mathcal{A}, P)$ with respect to $\{\mathcal{A}_t\}_{t \in [0,T]}$. That is, $N_t\in L^2(\Omega, \mathcal{A}_t, P)$ for all $t\in[0,T]$ and \begin{equation*} \mathbb{E}[N_t-N_s|\mathcal{A}_s]=0,\quad \mathbb{E}[(N_t-N_s)^2|\mathcal{A}_s]=t-s \end{equation*} for all $s,t\in[0,T]$ such that $s<t$. Without loss of generality one can assume that $N_0=0$. Note that there are many examples of normal martingales --- the Brownian motion, the compensated Poisson process, the Az${\rm \acute{e}}$ma martingales, and others; see for instance \cite{Emery89, DMM92, M}. We will denote by $L_a^2([0,T]\times\Omega)$ the set of all functions (equivalence classes), adapted to the filtration $\{\mathcal{A}_t\}_{t \in [0,T]}$, from the space $$ L^2([0,T]\times\Omega) :=L^2([0,T]\times\Omega,\mathcal{B}([0,T])\times\mathcal{A},dt\times P), $$ where $dt$ is the Lebesgue measure on $\mathcal{B}([0,T])$. Let us show that the {\it It\^{o} stochastic integral} $\int_{[0,T]} F(t)\,dN_t$ of $F\in L_a^2([0,T]\times\Omega)$ with respect to the normal martingale $N$ can be considered as an $H$-stochastic integral (see e.g. \cite{GSk75, P90} for the definition and properties of the classical It\^{o} integral). To this end, we set $\mathcal{H}:=L^2(\Omega,\mathcal{A},P)$ and consider, in this space, the resolution of identity $$ [0,T]\ni t\mapsto E_t:=\mathbb{E}[\,\cdot\,|\mathcal{A}_t]\in\mathcal{L}(\mathcal{H}) $$ generated by the filtration $\{\mathcal{A}_t\}_{t \in [0,T]}$.
Let $M:=N_T\in L^2(\Omega,\mathcal{A},P)$, then the corres\-ponding abstract martingale \begin{equation*} [0,T]\ni t\mapsto N_t:=E_tN_T= \mathbb{E}[N_T|\mathcal{A}_t]\in \mathcal{H} \end{equation*} is our normal martingale. Note also that $$ \mu([0,t])=\|N([0,t])\|_{L^2(\Omega,\mathcal{A},P)}^{2}= \|N_t\|_{L^2(\Omega,\mathcal{A},P)}^{2}=\mathbb{E}[N_t^2] =\mathbb{E}[N_t^2\,|\,\mathcal{A}_0]=t, $$ i.e., $\mu$ is the Lebesgue measure on $\mathcal{B}([0,T])$. In the context of this section, $\mathcal{L}_{M}(t)$-measurabi\-lity is equivalent to the usual $\mathcal{A}_t$-measurabi\-lity. More precisely, the following result holds. \begin{lem}\label{l.1} Let $t\in[0,T)$. For given $F\in L^2(\Omega,\mathcal{A},P)$ the operator $A_F$ of multiplication by the function $F$ in the space $L^2(\Omega,\mathcal{A},P)$ is $\mathcal{L}_{N}(t)$-measurable if and only if the function $F$ is $\mathcal{A}_t$-measurable, i.e., $F=\mathbb{E}[F|\mathcal{A}_t]$. Moreover, if $F\in L^2(\Omega,\mathcal{A},P)$ is an $\mathcal{A}_t$-measurable function then \begin{equation}\label{eq.3.0.101} \|A_F\|_{\mathcal{L}_{N}(t)}=\|A_F\|_{\mathcal{L}_{N}(s)} =\|F\|_{L^2(\Omega,\mathcal{A},P)}, \quad s\in[t,T). \end{equation} \end{lem} \begin{proof} Suppose $F\in L^2(\Omega,\mathcal{A},P)$ is an $\mathcal{A}_t$-measurable function. Let us show that the operator $A_F$ is $\mathcal{L}_{N}(t)$-measurable. First, we prove that $A_F\in\mathcal{L}_N(t)$. Taking into account that $F$ is an $\mathcal{A}_t$-measurable function, $\{N_t\}_{t\in[0,T]}$ is the normal martingale and the $\sigma$-algebra $\mathcal{A}_0$ is trivial, for any interval $(s_1,s_2]\subset(t,T]$, we obtain \begin{equation*} \begin{split} \|A_F(N_{s_2}-N_{s_1})\| _{L^2(\Omega,\mathcal{A},P)}^{2} & =\|F(N_{s_2}-N_{s_1})\|_{L^2(\Omega,\mathcal{A},P)}^{2} =\mathbb{E}[F^2(N_{s_2}-N_{s_1})^2]\\ & =\mathbb{E}[F^2(N_{s_2}-N_{s_1})^2|\mathcal{A}_{0}] =\mathbb{E}\big[F^2\mathbb{E}[(N_{s_2}-N_{s_1})^2| \mathcal{A}_{s_1}]\big|\mathcal{A}_{0}\big]\\ & =\mathbb{E}\big[F^2\mathbb{E}[(N_{s_2}-N_{s_1})^2|\mathcal{A}_{s_1}]\big] =\mathbb{E}[F^2](s_2-s_1)\\ & =\mathbb{E}[F^2]\mathbb{E}[(N_{s_2}-N_{s_1})^2]\\ & =\|F\|_{L^2(\Omega,\mathcal{A},P)}^{2} \|N_{s_2}-N_{s_1}\|_{L^2(\Omega,\mathcal{A},P)}^{2}. \end{split} \end{equation*} We can similarly show that \begin{equation*} \|A_FG\|_{L^2(\Omega,\mathcal{A},P)}^{2} =\|F\|_{L^2(\Omega,\mathcal{A},P)}^{2} \|G\|_{L^2(\Omega,\mathcal{A},P)}^{2} \end{equation*} for all $G\in \mathcal{H}_{N}(t)={\rm span}\{N_{s_2}-N_{s_1}\,|(s_1,s_2]\subset(t,T]\}$. Hence $A_F\in\mathcal{L}_N(t)$ and, moreover, equality (\ref{eq.3.0.101}) takes place. Let us check that $A_F$ is partially commuting with $E$, i.e., \begin{equation*} A_FE_sG=E_sA_FG,\quad G\in\mathcal{H}_{N}(t),\quad s\in[t,T]. \end{equation*} Since $F\in L^2(\Omega,\mathcal{A},P)$ is an $\mathcal{A}_t$-measurable function and $FG\in L^2(\Omega,\mathcal{A},P)$, for any $s\in[t,T]$ and any function $G\in\mathcal{H}_N(t)$, we have \begin{equation*} \begin{split} A_FE_sG =FE_sG =F\mathbb{E}[G|\mathcal{A}_{s}] =\mathbb{E}[FG|\mathcal{A}_{s}] =E_sA_FG. \end{split} \end{equation*} \enlargethispage{1cm} Thus, the first part of the lemma is proved. Let us prove the converse statement of the lemma: if for a given $F\in L^2(\Omega,\mathcal{A},P)$ the operator $A_F$ is $\mathcal{L}_{N}(t)$-measurable then $F$ is an $\mathcal{A}_t$-measurable function. 
Since $A_F$ is an $\mathcal{L}_{N}(t)$-measurable operator, we see that for any $s\in[t,T]$ \begin{equation*} A_FE_sG=E_sA_FG,\quad G\in\mathcal{H}_{N}(t), \end{equation*} or, equivalently, \begin{equation}\label{4.4} A_F\mathbb{E}[G|\mathcal{A}_{s}]=\mathbb{E}[A_FG|\mathcal{A}_{s}], \quad G\in\mathcal{H}_{N}(t). \end{equation} Let $s\in(t,T)$ and $(s_1,s_2]\subset(t,s]$. We take $$ G:=N_{s_2}-N_{s_1} \in\mathcal{H}_{N}(t). $$ Evidently, $G$ is an $\mathcal{A}_{s}$-measurable function and $$ A_F\mathbb{E}[G|\mathcal{A}_{s}]=A_FG=FG,\quad \mathbb{E}[A_FG|\mathcal{A}_{s}] =\mathbb{E}[FG|\mathcal{A}_s]=G\mathbb{E}[F|\mathcal{A}_s]. $$ Hence, using (\ref{4.4}), we obtain $$ FG=G\mathbb{E}[F|\mathcal{A}_s]. $$ As a result, $$ F=\mathbb{E}[F|\mathcal{A}_s],\quad s\in(t,T]. $$ Since the resolution of identity $[0,T]\ni s\mapsto E_s=\mathbb{E}[\,\cdot\,|\mathcal{A}_s]\in\mathcal{L}(\mathcal{H})$ is a right-continuous function, the latter equality still holds for $s=t$, and therefore $F$ is an $\mathcal{A}_t$-measurable function. \end{proof} As a simple consequence of Lemma \ref{l.1} we obtain the following result. \begin{thm}\label{t.2} Let $F$ belong to $L^2([0,T]\times\Omega)$. The family $\{A_{F}(t)\}_{t\in[0,T]}$ of the operators $A_{F}(t)$ of multiplication by $F(t)=F(t,\cdot)\in L^2(\Omega,\mathcal{A},P)$ in the space $L^2(\Omega,\mathcal{A},P)$, $$ L^2(\Omega,\mathcal{A},P)\supset\mathop{\rm Dom}(A_{F}(t))\ni G\mapsto A_{F}(t)G:=F(t)G\in L^2(\Omega,\mathcal{A},P), $$ is $H$-stochastic integrable with respect to the normal martingale $N$ (i.e. belongs to $S_2$) if and only if $F$ belongs to the space $L_a^2([0,T]\times\Omega)$. \end{thm} The next theorem shows that the It\^{o} stochastic integral with respect to the normal martingale $N$ can be interpreted as an $H$-stochastic integral. \begin{thm}\label{t.3} Let $F\in L_a^2([0,T]\times\Omega)$ and $\{A_{F}(t)\}_{t\in[0,T]}$ be the corresponding family of the operators $A_{F}(t)$ of multiplication by $F(t)$ in the space $L^2(\Omega,\mathcal{A},P)$. Then \begin{equation*} \int_{[0,T]}A_{F}(t)\,dN_t=\int_{[0,T]}F(t)\,dN_t. \end{equation*} \end{thm} \begin{proof} Taking into account Theorem \ref{t.2}, Lemma \ref{l.1} and the definitions of the integrals \begin{equation*} \int_{[0,T]}A_{F}(t)\,dN_t\quad\text{and}\quad \int_{[0,T]}F(t)\,dN_t, \end{equation*} it is sufficient to prove Theorem \ref{t.3} for simple functions $F\in L_a^2([0,T]\times\Omega)$. But in this case Theorem \ref{t.3} is obvious. \end{proof} \section{The It\^{o} integral in a Fock space as an $H$-stochastic integral} Let us recall the definition of the It\^{o} integral in a Fock space, see e.g. \cite{Attal} for more details. We denote by $\mathcal{F}$ the symmetric Fock space over the real separable Hilbert space $L^2([0,T]):=L^2([0,T],dt)$. By definition (see e.g. \cite{BK88}), \begin{equation*} {\mathcal F}:=\bigoplus_{n=0}^{\infty}{\mathcal F}_nn!, \end{equation*} where ${\mathcal F}_0:=\mathbb{C}$ and, for each $n\in\mathbb{N}$, $ {\mathcal F}_n:=(L_{{\mathbb C}}^2([0,T]))^{\mathbin{\widehat{\otimes}} n} $ is an $n$-th symmetric tensor power $\mathbin{\widehat{\otimes}}$ of the complex Hilbert space $L_{\mathbb{C}}^2([0,T])$. Thus, the Fock space $\mathcal{F}$ is the complex Hilbert space of sequences $f=(f_n)_{n=0}^{\infty}$ such that $f_n\in\mathcal{F}_n$ and $$ \|f\|_{{\mathcal F}}^{2}=\sum_{n=0}^{\infty} \|f_n\|_{{\mathcal F}_n}^{2}n!<\infty. 
$$ We denote by $ L^2([0,T];\mathcal{F}) $ the Hilbert space of all $\mathcal{F}$-valued functions $$ [0,T]\ni t\mapsto f(t)\in\mathcal{F},\quad \|f\|_{L^2([0,T];\mathcal{F})} :=\Big(\int_{[0,T]}\|f(t)\|_{\mathcal{F}}^{2}\,dt\Big)^{\frac{1}{2}}<\infty $$ with the corresponding scalar product. A function $f(\cdot)=(f_n(\cdot))_{n=0}^{\infty}\in L^2([0,T];\mathcal{F})$ is called {\it It\^{o} integrable} if, for almost all $t\in[0,T]$, $$ f(t)=(f_0(t),\varkappa_{[0,t]}f_1(t),\ldots,\varkappa_{[0,t]^n}f_n(t),\ldots). $$ We denote by $L_a^2([0,T];\mathcal{F})$ the set of all It\^{o} integrable functions. Let $f$ belong to the space $L_{a,s}^2([0,T];\mathcal{F})$ of all {\it simple It\^{o} integrable functions}. That is, $f$ belongs to $L_a^2([0,T];\mathcal{F})$ and there exists a partition $0=t_0<t_1<\cdots<t_n=T$ of $[0,T]$ such that \begin{equation*} f(t)=\sum_{k=0}^{n-1}f_{(k)}\varkappa_{(t_k,t_{k+1}]}(t)\in\mathcal{F} \end{equation*} for almost all $t\in[0,T]$. The {\it It\^{o} integral} ${\mathbb I}(f)$ of such a function $f$ is defined by the formula \begin{equation*} {\mathbb I}(f) :=\sum_{k=0}^{n-1}f_{(k)}\lozenge (0,\varkappa_{(t_k,t_{k+1}]},0,0,\ldots)\in\mathcal{F}, \end{equation*} where the symbol $\lozenge$ denotes the Wick product in the Fock space $\mathcal{F}$. Let us recall that for given $f=(f_n)_{n=0}^{\infty}$ and $g=(g_n)_{n=0}^{\infty}$ from $\mathcal{F}$ the Wick product $f\lozenge g$ is defined by \begin{equation*} f\lozenge g:=\Big(\sum_{m=0}^{n}f_m\mathbin{\widehat{\otimes}}g_{n-m}\Big)_{n=0}^{\infty}, \end{equation*} provided the latter sequence belongs to the Fock space $\mathcal{F}$. The It\^{o} integral ${\mathbb I}(f)$ of a simple function $f\in L_{a,s}^2([0,T];\mathcal{F})$ has the isometry property $$ \big\|{\mathbb I}(f)\big\|_{\mathcal{F}}^{2} =\int_{[0,T]}\big\|f(t)\big\|_{\mathcal{F}}^{2}\,dt, $$ see e.g. \cite{Attal, ABT07}. Hence, extending the mapping $$ L_a^2([0,T];\mathcal{F})\supset L_{a,s}^2([0,T];\mathcal{F})\ni f\mapsto {\mathbb I}(f)\in \mathcal{F} $$ by continuity we obtain a definition of the It\^{o} integral ${\mathbb I}(f)$ for each $f\in L_a^2([0,T];\mathcal{F})$ (we keep the same notation ${\mathbb I}$ for the extension). Let us show that the It\^{o} integral ${\mathbb I}(f)$ of $f\in L_a^2([0,T];\mathcal{F})$ can be considered as an $H$-stochastic integral. To do this we set $ \mathcal{H}:=\mathcal{F} $ and consider in this space the resolution of identity $$ [0,T]\ni t\mapsto \mathcal{X}_tf :=(f_0,\varkappa_{[0,t]}f_1,\ldots,\varkappa_{[0,t]^n}f_n,\ldots) \in\mathcal{L}(\mathcal{F}), \quad f=(f_n)_{n=0}^{\infty}\in\mathcal{F}. $$ Let $ Z:=(0,1,0,0,\ldots)\in\mathcal{F} $ and $$ [0,T]\ni t\mapsto Z_t:=\mathcal{X}_tZ= (0,\varkappa_{[0,t]},0,0,\ldots)\in\mathcal{F} $$ be the corresponding abstract martingale in the Fock space $\mathcal{F}$. Note that now \begin{equation*} \mu([0,t]):=\|Z_t\|_{\mathcal{F}}^{2} =\|\varkappa_{[0,t]}\|_{L_{\mathbb{C}}^2([0,T])}^{2} =t,\quad t\in[0,T], \end{equation*} i.e., $\mu$ is the Lebesgue measure on $\mathcal{B}([0,T])$. We have the following analogues of Theorems \ref{t.2} and~\ref{t.3}. 
\enlargethispage{1.2cm} \begin{thm}\label{t.4} A function $f\in L^2([0,T];\mathcal{F})$ belongs to the space $L_{a}^2([0,T];\mathcal{F})$ if and only if the corresponding operator-valued function $[0,T]\ni t\mapsto A_{f}(t)$ whose values are operators $A_{f}(t)$ of Wick multiplication by $f(t)\in\mathcal{F}$ in the Fock space $\mathcal{F}$, $$ \mathcal{F}\supset \mathop{\rm Dom}(A_{f}(t))\ni g\mapsto A_{f}(t)g :=f(t)\lozenge g\in\mathcal{F}, $$ belongs to the space $S_2$. \end{thm} \begin{thm}\label{t.4.1} Let $f\in L_{a}^2([0,T];\mathcal{F})$ and $\{A_{f}(t)\}_{t\in[0,T]}$ be the corresponding family of the operators $A_{f}(t)$ of Wick multiplication by $f(t)\in \mathcal{F}$ in the Fock space $\mathcal{F}$. Then \begin{equation*} {\mathbb I}(f)=\int_{[0,T]}A_{f}(t)\,dZ_t. \end{equation*} \end{thm} Taking into account Theorem \ref{t.4.1}, in what follows we will denote the It\^{o} integral ${\mathbb I}(f)$ of $f\in L_{a}^2([0,T];\mathcal{F})$ by $\int_{[0,T]}f(t)\,dZ_t$. Note that this integral can be expressed in terms of the Fock space $\mathcal{F}$. Namely, for any $f(\cdot)=(f_n(\cdot))_{n=0}^{\infty}\in L_{a}^2([0,T];\mathcal{F})$, we have \begin{equation}\label{eq.5.1} \int_{[0,T]}f(t)\,dZ_t= (0,\hat{f}_1,\ldots,\hat{f}_n,\ldots)\in {\mathcal F}, \end{equation} where, for each $n\in\mathbb{N}$ and almost all $(t_1,\ldots,t_n)\in [0,T]^n$, $$ \hat{f}_n(t_1,\ldots,t_n):=\frac{1}{n} \sum_{k=1}^{n}f_{n-1}(t_k;t_1,\ldots,t_k\!\!\!\!\!\!\diagup\,\,,\ldots,t_n), $$ i.e., $\hat{f}_n$ is the symmetrization of $f_{n-1}(t;t_1,\ldots,t_{n-1})$ with respect to $n$ variables. \section{A connection between the classical It\^{o} integral\\ and the It\^{o} integral in the Fock space} As before, let $ (\Omega, \mathcal{A}, P) $ be a complete probability space with a right continuous filtration $\{\mathcal{A}_t\}_{t \in [0,T]}$, $\mathcal{A}_0$ be the trivial $\sigma$-algebra containing all $P$-null sets of $\mathcal{A}$ and $\mathcal{A}=\mathcal{A}_T$.\pagebreak \noindent Let $N=\{N_t\}_{t\in[0,T]}$ be a normal martingale on $(\Omega, \mathcal{A}, P)$ with respect to $\{\mathcal{A}_t\}_{t \in [0,T]}$, $N_0=0$. It is known that the mapping $$ \mathcal{F}\ni f=(f_n)_{n=0}^{\infty}\mapsto If:=\sum_{n=0}^{\infty}I_n(f_n)\in L^2(\Omega,\mathcal{A},P) $$ is well-defined and isometric. Here $I_0(f_0):=f_0$ and, for each $n\in\mathbb{N}$, $$ I_n(f_n):=n!\int_{0}^{T}\int_{0}^{t_n} \cdots\Big(\int_{0}^{t_2}f_n(t_1,\ldots,t_n)\, dN_{t_1}\Big)\ldots \,dN_{t_{n-1}}\,dN_{t_n} $$ is an $n$-iterated It\^{o} integral with respect to $N$. We suppose that the normal martingale $N$ has the {\it chaotic representation property} (CRP). In other words, we assume that the mapping $I:\mathcal{F}\to L^2(\Omega,\mathcal{A},P)$ is a unitary. Note that $$ N_t=IZ_t\in L^2(\Omega,\mathcal{A},P), \quad t\in[0,T], $$ i.e., $N$ is the $I$-image of the abstract martingale $[0,T]\ni t\mapsto Z_t= (0,\varkappa_{[0,t]},0,0,\ldots)\in\mathcal{F}.$ The Brownian motion, the compensated Poisson process and some Az${\rm \acute{e}}$ma martingales are examples of normal martingales which possess the CRP, see e.g. \cite{Emery89, MPM98}. We note that the spaces $L^2([0,T]\times\Omega)$ and $L^2([0,T];\mathcal{F})$ can be understood as the tensor products $L^2([0,T])\otimes L^2(\Omega,\mathcal{A},P)$ and $L^2([0,T])\otimes \mathcal{F}$, respectively. Therefore, $$ 1\otimes I:L^2([0,T];\mathcal{F})\to L^2([0,T]\times\Omega) $$ is a unitary operator. 
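As a simple worked illustration of formula (\ref{eq.5.1}) and of this correspondence, take $f(t):=Z_t=(0,\varkappa_{[0,t]},0,0,\ldots)$, which is It\^{o} integrable. Formula (\ref{eq.5.1}) gives $\hat{f}_1=0$, $\hat{f}_2(t_1,t_2)=\tfrac12\big(\varkappa_{[0,t_1]}(t_2)+\varkappa_{[0,t_2]}(t_1)\big)=\tfrac12$ almost everywhere on $[0,T]^2$ and $\hat{f}_n=0$ for $n\ge3$, so that
\begin{equation*}
\int_{[0,T]}Z_t\,dZ_t=\big(0,0,\tfrac12,0,\ldots\big),\qquad
\Big\|\int_{[0,T]}Z_t\,dZ_t\Big\|_{\mathcal{F}}^{2}=\Big(\tfrac12\Big)^{2}T^{2}\,2!=\frac{T^{2}}{2}
=\int_{[0,T]}\|Z_t\|_{\mathcal{F}}^{2}\,dt,
\end{equation*}
in accordance with the isometry property. If, in addition, $N_t=B_t$ is a Brownian motion, then $IZ_t=B_t$ and
\begin{equation*}
I\Big(\int_{[0,T]}Z_t\,dZ_t\Big)=I_2\big(\tfrac12\big)
=2!\int_{0}^{T}\Big(\int_{0}^{t_2}\tfrac12\,dB_{t_1}\Big)dB_{t_2}
=\int_{0}^{T}B_{t}\,dB_{t}=\frac{B_T^{2}-T}{2},
\end{equation*}
i.e., the image under $I$ is the classical It\^{o} integral of $B_t$; the next theorem states this relation in full generality.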
\enlargethispage{1cm} The next result gives a relationship between the classical It\^{o} integral with respect to the normal martingale with CRP and the It\^{o} integral in the Fock space $\mathcal{F}$. \begin{thm}\label{t.7.1} We have $$ L_a^2([0,T]\times\Omega)=(1\otimes I)L_a^2([0,T];\mathcal{F}) $$ and, for arbitrary $f\in L_{a}^2([0,T];\mathcal{F})$, \begin{equation*} I\Big(\int_{[0,T]}f(t)\,dZ_t\Big) =\int_{[0,T]}If(t)\,dN_t. \end{equation*} \end{thm} Since $N$ has the CRP, for any $F\in L_a^2([0,T]\times\Omega)$ there exists a uniquely defined vector $f(\cdot)=(f_n(\cdot))_{n=0}^{\infty}\in L_{a}^2([0,T];\mathcal{F})$ such that $$ F(t)=If(t)=\sum_{n=0}^{\infty}I_n(f_n(t)) $$ for almost all $t\in[0,T]$. Hence, using Theorem \ref{t.7.1} and equality (\ref{eq.5.1}), we obtain \begin{equation*} \int_{[0,T]}F(t)\,dN_t=I\Big(\int_{[0,T]}f(t)\,dZ_t\Big) =\sum_{n=1}^{\infty}I_n(\hat{f}_n)\in L^2(\Omega, \mathcal{A}, P). \end{equation*} It should be noted that the right hand side of the latter equality was used by Hitsuda~\cite{Hitsuda} and Skorohod~\cite{Skorohod} to define an extension of the It\^{o} integral. Namely, a function $$ F(\cdot)=\sum_{n=0}^{\infty}I_n(f_n(\cdot))\in L^2([0,T]\times\Omega) $$ is Hitsuda--Skorohod integrable if and only if $$ \sum_{n=1}^{\infty}I_n(\hat{f}_n)\in L^2(\Omega, \mathcal{A}, P) \quad \text{or, equivalently,}\quad \sum_{n=1}^{\infty}\|\hat{f}_n\|_{\mathcal F_n}^{2}n!<\infty. $$ The corresponding {\it Hitsuda--Skorohod integral} ${\mathbb I}_{\mathop{\rm {HS}}}(F)$ of $F$ is defined by the formula \begin{equation*} {\mathbb I}_{\mathop{\rm {HS}}}(F):=\sum_{n=1}^{\infty}I_n(\hat{f}_n). \end{equation*} {\it{Acknowledgments.}} I am grateful to Prof. Yu.~M.~Berezansky for posing the question and for helpful discussions, and to the referee for the useful remarks. This work has been partially supported by the DFG, Project 436 UKR 113/78/0-1, and by the Scientific Program of the National Academy of Sciences of Ukraine, Project No~0107U002333.
TITLE: Integral form of balance law QUESTION [3 upvotes]: Let $\rho: \mathbb{R} \times [0, \infty) \to \mathbb{R}$ be the density, $f: \mathbb{R} \to \mathbb{R}$ the flux of the density and $g: \mathbb{R} \to \mathbb{R}$ the source (or loss) term. The equation $$ \partial_t\rho(x,t) + \partial_x f(\rho(x,t)) + g(\rho(x,t)) = 0 $$ is called a one-dimensional balance law. For $g=0$ the balance law reduces to a conservation law, for which the integral form is given by $$ \int_{x_1}^{x_2} \rho(x,t_2) \,\mathrm{d}x - \int_{x_1}^{x_2} \rho(x,t_1) \,\mathrm{d}x = \int_{t_1}^{t_2} f(\rho(x_1,t)) \,\mathrm{d}t - \int_{t_1}^{t_2} f(\rho(x_2,t)) \,\mathrm{d}t, $$ which states that the change of mass inside an interval $[x_1,x_2]$ during a time interval $[t_1,t_2]$ is given by the difference between the inflow at $x_1$ and the outflow at $x_2$ during this time interval. Is there any literature on a derivation of the integral form of the balance law with $g \neq 0$, including a physical explanation? Any hints or suggestions are highly appreciated. REPLY [2 votes]: I would say that you just need to proceed the same way to derive the balance law. For the interpretation, assume that $f_\rho$ and $g$ are both positive. If we integrate the local form of the balance law $$\rho_t + f(\rho)_x = -g(\rho)$$ over the domain $x\in [x_1, x_2]$ and times $t \in [t_1, t_2]$, we find $$ \left[\int_{x_1}^{x_2} \rho\, \text d x \right]^{t_2}_{t_1} = \left[\int_{t_1}^{t_2} f(\rho)\, \text d t \right]^{x_1}_{x_2} - \underset{[x_1,x_2] \times [t_1,t_2]}{\iint} g(\rho)\, \text{d}x \text d t \, . $$ The left-hand side is the variation of mass $\int \rho \, \text d x$ between the times $t_1$ and $t_2$. The right-hand side includes a first term which represents the difference of the mass flux between the two boundaries of the spatial domain (the mass entering $[x_1,x_2]$ at $x=x_1$ minus the mass which has left at $x=x_2$). The last term represents the loss of mass over the domain considered here, or, if you prefer, $\iint -g \, \text d x \text d t$ represents the mass production.
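As a quick sanity check of the signs (a simple illustrative special case), take $f \equiv 0$ and a linear loss term $g(\rho) = \lambda \rho$ with $\lambda > 0$. The balance law reduces to $\rho_t = -\lambda \rho$, so $\rho(x,t) = \rho(x,0)\, e^{-\lambda t}$, and the integral identity above becomes
$$
\int_{x_1}^{x_2} \rho(x,t_2) \,\text d x - \int_{x_1}^{x_2} \rho(x,t_1) \,\text d x
= - \lambda \int_{t_1}^{t_2}\!\!\int_{x_1}^{x_2} \rho(x,t) \,\text d x \,\text d t \, ,
$$
where both sides equal $\left(e^{-\lambda t_2} - e^{-\lambda t_1}\right)\int_{x_1}^{x_2} \rho(x,0)\,\text d x$: the mass in $[x_1,x_2]$ decays purely because of the loss term, with no contribution from the flux.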
\begin{document} \title{On the complete convergence for sequences of dependent random variables via stochastic domination conditions and regularly varying functions theory} \author{ \name{Nguyen Chi Dzung\textsuperscript{a} and L\^{e} V\v{a}n Th\`{a}nh\textsuperscript{b}\thanks{CONTACT L\^{e} V\v{a}n Th\`{a}nh. Email: [email protected]}} \affil{\textsuperscript{a}Institute of Mathematics, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Hanoi 10307, Vietnam\\ \textsuperscript{b}Department of Mathematics, Vinh University, 182 Le Duan, Vinh, Nghe An, Vietnam} } \maketitle \begin{abstract} This note develops Rio's proof [C. R. Math. Acad. Sci. Paris, 1995] of the rate of convergence in the Marcinkiewicz--Zygmund strong law of large numbers to the case of sums of dependent random variables with regularly varying normalizing constants. It allows us to obtain a complete convergence result for dependent sequences under uniformly bounded moment conditions. This result is new even when the underlying random variables are independent. The main theorems are applied to three different dependence structures: (i) $m$-pairwise negatively dependent random variables, (ii) $m$-extended negatively dependent random variables, and (iii) $\varphi$-mixing sequences. To our best knowledge, the results for cases (i) and (ii) are the first results in the literature on complete convergence for sequences of $m$-pairwise negatively dependent random variables and $m$-extended negatively dependent random variables under the optimal moment conditions even when $m=1$. While the results for cases (i) and (iii) unify and improve many existing ones, the result for case (ii) complements the main result of Chen et al. [J. Appl. Probab., 2010]. Affirmative answers to open questions raised by Chen et al. [J. Math. Anal. Appl., 2014] and Wu and Rosalsky [Glas. Mat. Ser. III, 2015] are also given. An example illustrating the sharpness of the main result is presented. \end{abstract} \begin{keywords} Almost sure convergence; Complete convergence; Rate of convergence; Dependent random variables; Stochastic domination; Regularly varying function \end{keywords} \section{Introduction and motivations}\label{sec:Int} The maximal inequalities play a crucial role in the proofs of the strong law of large numbers (SLLN). Let $\{X,X_n,n\ge1\}$ be a sequence of pairwise independent and identically distributed random variables. Etemadi \cite{etemadi1981elementary} is the first author who proved the Kolmogorov SLLN \begin{equation*}\label{MZ.01} \lim_{n\to\infty}\dfrac{ \sum_{i=1}^{n}(X_i-\E(X_i))}{n}=0\ \text{ almost surely (a.s.)} \end{equation*} under optimal moment condition $\E(|X|)<\infty$ without using the maximal inequalities. For $1<p<2$, Mart\u{\i}kainen \cite{martikainen1995strong} proved that if $\mathbb{E}(|X|^p\log^{\beta}(|X|)) <\infty$ for some $\beta>\max\{0,4p-6\}$, then the Marcinkiewicz--Zygmund SLLN holds, i.e., \begin{equation}\label{MZ.05} \lim_{n\to\infty}\dfrac{ \sum_{i=1}^{n}(X_i-\E(X_i))}{n^{1/p}}=0\ \text{ a.s. } \end{equation} As far as we know, Rio \cite{rio1995vitesses} is the first author who proved \eqref{MZ.05} under the optimal moment condition $\mathbb{E}(|X|^p)<\infty$. Since then, many sub-optimal results on the Marcinkiewicz--Zygmund SLLN have been published. In 2014, Sung \cite{sung2014marcinkiewicz} proposed a different method and proved \eqref{MZ.05} under a nearly optimal condition $\mathbb{E}(|X|^p(\log\log(|X|))^{2(p-1)} )<\infty$. 
Here and thereafter, $\log(x)$ denotes the natural logarithm (base $e$) of $\max\{x,e\}$, $x\ge0$. Very recently, da Silva \cite[Corollary 1]{da2020rates} used the method proposed by Sung \cite{sung2014marcinkiewicz} to prove that if $\{X,X_n,n\ge1\}$ are pairwise negatively dependent and identically distributed random variables and $\mathbb{E}(|X|^p)<\infty$, then \begin{equation}\label{MZ.07} \lim_{n\to\infty}\dfrac{ \sum_{i=1}^{n}(X_i-\E(X_i))}{n^{1/p}(\log\log(n))^{2(p-1)/p}}=0\ \text{ a.s. } \end{equation} A very special case of our main result will show that the optimal condition for \eqref{MZ.07} is \begin{equation}\label{MZ.09} \mathbb{E}\left(|X|^p/(\log\log(|X|))^{2(p-1)}\right)<\infty. \end{equation} Anh et al. \cite{anh2021marcinkiewicz} recently proved a Marcinkiewicz--Zygmund-type SLLN with norming constants of the form $n^{1/p}\tilde{L}(n^{1/p}),\ n\ge1$, where $\tilde{L}(\cdot)$ is the de Bruijn conjugate of a slowly varying function $L(\cdot)$. However, the proof in \cite{anh2021marcinkiewicz} is based on a maximal inequality for negatively associated random variables which is not available even for pairwise independent random variables. Although Rio's result was extended by the second named author in Th\`{a}nh \cite{thanh2020theBaum}, only sums of pairwise independent identically distributed random variables were considered there. The motivation of the present note is that many other dependence structures, such as pairwise negative dependence and extended negative dependence, do not enjoy the Kolmogorov maximal inequality. Unlike Th\`{a}nh \cite{thanh2020theBaum}, we consider in this note the case where the underlying sequence of random variables is stochastically dominated by a random variable $X$. This allows us to derive a Baum--Katz-type theorem for sequences of dependent random variables satisfying a uniformly bounded moment condition, as stated in the following results. To the best of our knowledge, Theorem \ref{thm.main0} and Corollary \ref{cor.main0} are new even when the underlying sequence is comprised of independent random variables. We note that, in Theorem \ref{thm.main0} and Corollary \ref{cor.main0}, no identical distribution condition or stochastic domination condition is assumed. \begin{theorem}\label{thm.main0} Let $1\le p<2$, and $\{X_n,n\ge1\}$ be a sequence of random variables. Assume that there exists a universal constant $C$ such that \begin{equation}\label{eq:bound_var_00} \var\left(\sum_{i=k+1}^{k+\ell}f_{i}(X_{i})\right)\le C \sum_{i=k+1}^{k+\ell}\var(f_{i}(X_{i})) \end{equation} for all $k\ge0,\ell\ge 1$ and all nondecreasing functions $f_i$, $i\ge1$, provided the variances exist. Let $L(\cdot)$ be a slowly varying function defined on $[0,\infty)$. When $p=1$, we assume further that $L(x)\ge 1$ and is increasing on $[0,\infty)$. If \begin{equation}\label{eq.stoch.domi.13} \sup_{n\ge1}\E\left(|X_n|^pL^p(|X_n|)\log(|X_n|)\log^2(\log(|X_n|))\right)<\infty, \end{equation} then for all $\alpha\ge 1/p$, we have \begin{equation}\label{eq.main0.15} \sum_{n= 1}^{\infty} n^{\alpha p-2}\mathbb{P}\left(\max_{1\le j\le n}\left| \sum_{i=1}^{j}(X_i-\E(X_i))\right|>\varepsilon n^{\alpha}{\tilde{L}}(n^{\alpha})\right)<\infty \text{ for all } \varepsilon >0, \end{equation} where $\tilde{L}(\cdot)$ is the de Bruijn conjugate of $L(\cdot)$. \end{theorem} Considering the special interesting case $\alpha=1/p$ and $L(x)\equiv 1$, we obtain the following corollary.
\begin{corollary}\label{cor.main0} Let $1\le p<2$, and $\{X_n,n\ge1\}$ be a sequence of random variables satisfying condition \eqref{eq:bound_var_00}. If \begin{equation}\label{eq.stoch.domi.cor.13} \sup_{n\ge1}\E\left(|X_n|^p\log(|X_n|)\log^2(\log(|X_n|))\right)<\infty, \end{equation} then \begin{equation}\label{eq.cor.main0.15} \sum_{n= 1}^{\infty} n^{-1}\mathbb{P}\left(\max_{1\le j\le n}\left| \sum_{i=1}^{j}(X_i-\E(X_i))\right|>\varepsilon n^{1/p}\right)<\infty \text{ for all } \varepsilon >0. \end{equation} \end{corollary} \begin{remark}\label{rem.01} \begin{itemize} \item [(i)] Since $\{\max_{1\le j\le n}|\sum_{i=1}^j (X_i-\E(X_i))|,n\ge1\}$ is nondecreasing, it follows from \eqref{eq.cor.main0.15} that the SLLN \eqref{MZ.05} holds (see, e.g., Remark 1 in Dedecker and Merlev{\`e}de \cite{dedecker2008convergence}). \item[(ii)] For the SLLN under a uniformly bounded moment condition, Baxter et al. \cite{baxter2004slln} proved \eqref{MZ.05} under the assumptions that the sequence $\{X_n,n\ge1\}$ is independent and $ \sup_{n\ge1}\E\left(|X_n|^r\right)<\infty \text{ for some } r>p.$ This condition is much stronger than \eqref{eq.stoch.domi.cor.13}. Baxter et al. \cite{baxter2004slln} studied the SLLN for weighted sums, which is more general than \eqref{MZ.05}, but their method does not give a rate of convergence as in Corollary \ref{cor.main0}. \item[(iii)] For a sequence of pairwise independent identically distributed random variables $\{X,X_n,n\ge1\}$, Chen et al. \cite{chen2014bahr} obtained \eqref{eq.cor.main0.15} under the condition that $\mathbb{E}(|X|^p(\log(|X|))^{r})<\infty$ for some $1<p<r<2$. We see that, even with the identical distribution assumption, this moment condition is still stronger than \eqref{eq.stoch.domi.cor.13}. \item[(iv)] Conditions \eqref{eq.stoch.domi.13} and \eqref{eq.stoch.domi.cor.13} are very sharp and almost optimal. Even with the assumption that the underlying random variables are independent, a special case of Example \ref{ex.03} in Section \ref{sec:domination} shows that \eqref{eq.cor.main0.15} may fail if \eqref{eq.stoch.domi.cor.13} is weakened to \begin{equation*}\label{eq.stoch.domi.15} \sup_{n\ge1}\E\left(|X_n|^p\log(|X_n|)\log(\log(|X_n|))\right)<\infty. \end{equation*} \end{itemize} \end{remark} The rest of the paper is arranged as follows. Section \ref{sec:complete} presents a complete convergence result for sequences of dependent random variables with regularly varying normalizing constants. The proof of Theorem \ref{thm.main0} and an example illustrating the sharpness of the result are presented in Section \ref{sec:domination}. Finally, Section \ref{sec:appl} contains corollaries and remarks comparing our results with the ones in the literature. \section{Complete convergence for sequences of dependent random variables with regularly varying normalizing constants}\label{sec:complete} In this section, we will use the method in Rio \cite{rio1995vitesses} to obtain complete convergence for sums of dependent random variables with regularly varying normalizing constants under a stochastic domination condition. The proof is similar to that of Theorem 1 in Th\`{a}nh \cite{thanh2020theBaum}. A family of random variables $\{X_i,i\in I\}$ is said to be stochastically dominated by a random variable $X$ if \begin{equation}\label{eq.stoch.dominated} \sup_{i\in I}\P(|X_i|>t)\le \P(|X|>t), \ \text{ for all } t\ge0.
\end{equation} We note that many authors use an apparently weaker definition of $\{X_i,i\in I\}$ being stochastically dominated by a random variable $X$, namely that \begin{equation}\label{eq.stoch.domi.06} \sup_{i\in I}\P(|X_i|>t)\le C\P(|X|>t), \text{ for all } t\ge0 \end{equation} for some constant $C\in (0,\infty)$. However, it is shown by Rosalsky and Th\`{a}nh \cite{rosalsky2021note} that \eqref{eq.stoch.dominated} and \eqref{eq.stoch.domi.06} are indeed equivalent. Let $\rho\in\R$. A real-valued function $R(\cdot )$ is said to be \textit{regularly varying} (at infinity) with index of regular variation $\rho$ if it is a positive and measurable function on $[A,\infty)$ for some $A> 0$, and for each $\lambda>0$, \begin{equation*}\label{rv01} \lim_{x\to\infty}\dfrac{R(\lambda x)}{R(x)}=\lambda^\rho. \end{equation*} A regularly varying function with the index of regular variation $\rho=0$ is called \textit{slowly varying} (at infinity). If $L(\cdot)$ is a slowly varying function, then by Theorem 1.5.13 in Bingham et al. \cite{bingham1989regular}, there exists a slowly varying function $\tilde{L}(\cdot)$, unique up to asymptotic equivalence, satisfying \begin{equation}\label{BGT1513} \lim_{x\to\infty}L(x)\tilde{L}\left(xL(x)\right)=1\ \text{ and } \lim_{x\to\infty}\tilde{L}(x)L\left(x\tilde{L}(x)\right)=1. \end{equation} The function $\tilde{L}$ is called the de Bruijn conjugate of $L$, and $\left(L,\tilde{L}\right)$ is called a (slowly varying) conjugate pair (see, e.g., p. 29 in Bingham et al. \cite{bingham1989regular}). If $L(x)=\log^\gamma(x)$ or $L(x)=\log^\gamma\left(\log(x)\right)$ for some $\gamma\in\R$, then $\tilde{L}(x)=1/L(x)$. Especially, if $L(x)\equiv 1$, then $\tilde{L}(x)\equiv1$. Here and thereafter, for a slowly varying function $L(\cdot)$, we denote the de Bruijn conjugate of $L(\cdot)$ by $\tilde{L}(\cdot)$. Throughout, we will assume that $L(x)$ and $\tilde{L}(x)$ are both continuous on $[0,\infty)$ and differentiable on $[A,\infty)$ for some $A>0$. We also assume that (see Lemma 2.2 in Anh et al. \cite{anh2021marcinkiewicz}) \begin{equation}\label{eq.galambos} \lim_{x\to\infty}\dfrac{xL'(x)}{L(x)}=0. \end{equation} \begin{theorem}\label{thm.main1} Let $1\le p<2$, and $\{X_n,n\ge1\}$ be a sequence of random variables satisfying condition \eqref{eq:bound_var_00}. Let $L(\cdot)$ be as in Theorem \ref{thm.main0}. If $\{X_n, \, n \geq 1\}$ is stochastically dominated by a random variable $X$, and \begin{equation}\label{eq.main.13} \E\left(|X|^p L^p(|X|)\right)<\infty, \end{equation} then for all $\alpha\ge 1/p$, we have \begin{equation}\label{eq.main.15} \sum_{n= 1}^{\infty} n^{\alpha p-2}\mathbb{P}\left(\max_{1\le j\le n}\left| \sum_{i=1}^{j}(X_i-\E(X_i))\right|>\varepsilon n^{\alpha}{\tilde{L}}(n^{\alpha})\right)<\infty \text{ for all } \varepsilon >0. \end{equation} \end{theorem} We only sketch the proof of Theorem \ref{thm.main1} and refer the reader to the proof of Theorem 1 in Th\`{a}nh \cite{thanh2020theBaum} for details. The main difference here is that we have to consider the nonnegative random variables so that after applying certain truncation techniques (see \eqref{prop3.4} and \eqref{prop3.3} below), the new random variables still satisfy condition \eqref{eq:bound_var_00}. \begin{proof}[Sketch proof of Theorem \ref{thm.main1}] Since $\{X_{n}^{+},n\ge1\}$ and $\{X_{n}^{-},n\ge1\}$ satisfy the assumptions of the theorem and $X_{n}=X_{n}^{+}-X_{n}^{-},n\ge1$, without loss of generality we can assume that $X_n\ge 0$ for all $n\ge1$. 
For $n\ge1$, set \[b_n=\begin{cases} n^{\alpha}\tilde{L}\left(A^{\alpha}\right)& \text{ if } 0\le n<A,\\ n^{\alpha}\tilde{L}\left(n^{\alpha}\right)& \text{ if } n\ge A,\\ \end{cases}\] \begin{equation}\label{prop3.4} X_{i,n}=X_i\mathbf{1}(X_i\le b_n)+b_n\mathbf{1}(X_i> b_n),\ 1\le i\le n, \end{equation} and \begin{equation}\label{prop3.3} Y_ {i, m} =\left(X_ {i, 2^m}-X_{i, 2^{m-1}}\right)-\E\left(X_{i, 2^m}-X_{i, 2^{m-1}}\right),\ m\ge1,\ i\ge1. \end{equation} It is easy to see that $b_n$ is strictly increasing and \eqref{eq.main.15} is equivalent to \begin{equation}\label{prop3.5} \sum_{n= 1}^{\infty} 2^{n(\alpha p-1)}\mathbb{P}\left(\max_{1\le j< 2^n}\left| \sum_{i=1}^{j}(X_i-\mathbb{E}(X_i)) \right|>\varepsilon b_{2^{n}}\right)<\infty \text{ for all } \varepsilon >0. \end{equation} It follows from stochastic domination condition and definition of $b_{n}$ that \begin{equation}\label{prop-15} \begin{split} 0\le \E\left(X_ {i, 2^m}-X_ {i, 2^{m-1}}\right)&\le \E\left(|X|\mathbf{1}(|X|>b_{2^{m-1}})\right). \end{split} \end{equation} Using \eqref{prop-15} and the same argument as in Th\`{a}nh \cite[Equation (23)]{thanh2020theBaum}, the proof of \eqref{prop3.5} will be completed if we can show that \begin{equation}\label{prop-12} \sum_{n=1}^{\infty} 2^{n(\alpha p-1)} \mathbb{P}\left(\max_{1\le j< 2^n}\left|\sum_{i=1}^j (X_{i,2^n}-\E (X_{i,2^n}))\right|\ge \varepsilon b_{2^{n-1}} \right)<\infty \text{ for all }\varepsilon>0. \end{equation} For $m\ge 0,$ set $S_{0,m}=0$ and \[S_{j,m}=\sum_{i=1}^j (X_{i,2^m}-\E \left(X_{i,2^m}\right)),\ j\ge 1.\] For $1\le j<2^n$ and for $0\le m\le n$, let $k_{j,m}=\lfloor j/2^m\rfloor $ be the greatest integer which is less than or equal to $j/2^m$, $j_m = k_{j,m} 2 ^m$. Then (see Th\`{a}nh \cite[Equation (28)]{thanh2020theBaum}) \begin{equation}\label{prop-18} \begin{split} \max_{1\le j< 2^n}\left|S_{j,n}\right| &\le \sum_{m=1}^n \max_{0\le k<2^{n-m}}\left|\sum_{i=k2^m+1}^{k2^m+2^{m-1}}\left(X_{i, 2^{m-1}}-\E(X_{i, 2^{m-1}})\right)\right| \\ &\quad +\sum_{m=1}^n \max_{0\le k<2^{n-m}}\left|\sum_{i=k2^m+1}^{(k+1)2^m} Y_{i,m}\right|+\sum_{m=1}^n 2^{m+1} \E \left(|X|\mathbf{1}(|X|>b_{2^{m-1}})\right). \end{split} \end{equation} Combining \eqref{eq:bound_var_00}, \eqref{prop3.4} and \eqref{prop3.3}, we have for all $m\ge 1$, \begin{equation}\label{eq:bound_var1} \E\left(\sum_{i=k+1}^{k+\ell} X_{i,2^{m-1}}-\E(X_{i,2^{m-1}})\right)^2\le C\sum_{i=k+1}^{k+\ell} \E(X_{i,2^{m-1}}^2),\ k\ge 0, \ell\ge 1. \end{equation} and \begin{equation}\label{eq:bound_var2} \E\left(\sum_{i=k+1}^{k+\ell} Y_{i,m}\right)^2\le C\sum_{i=k+1}^{k+\ell} \E(Y_{i,m}^2),\ k\ge 0, \ell\ge 1. \end{equation} By using \eqref{prop-18}--\eqref{eq:bound_var2}, and the argument as in pages 1236-1238 in Th\`{a}nh \cite{thanh2020theBaum}, we obtain \eqref{prop-12}. \end{proof} The next proposition shows that the moment condition in \eqref{eq.main.13} in Theorem \ref{thm.main1} is optimal. The proof is the same as that of the implication (iv)$\Rightarrow$(i) of Theorem 3.1 in Anh et al. \cite{anh2021marcinkiewicz}. We omit the details. \begin{proposition}\label{thm.main2} Let $1\le p<2$, and let $\{X_n, \, n \geq 1\}$ be a sequence of identically distributed random variables satisfying \eqref{eq:bound_var_00}, $L(\cdot)$ as in Theorem \ref{thm.main0}. 
If for some constant $c$, \begin{equation}\label{eq.main.15a} \sum_{n= 1}^{\infty} n^{-1}\mathbb{P}\left(\max_{1\le j\le n}\left| \sum_{i=1}^{j}(X_i-c)\right|>\varepsilon n^{1/p}{\tilde{L}}(n^{1/p})\right)<\infty \text{ for all } \varepsilon >0, \end{equation} then $\E\left(|X_1|^p L^p(|X_1|)\right)<\infty$ and $\E(X_1)=c$. \end{proposition} \section{On the stochastic domination condition via regularly varying functions}\label{sec:domination} In this section, we will present a result on the stochastic domination condition via regularly varying functions theory, and use it to prove Theorem \ref{thm.main0}. We need the following simple lemma. See Rosalsky and Th\`{a}nh \cite{rosalsky2021note} for a proof. \begin{lemma}\label{lemRT} Let $g:[0,\infty)\to [0,\infty)$ be a measurable function with $g(0)=0$ which is bounded on $[0,A]$ and differentiable on $[A,\infty)$ for some $A\ge 0$. If $\xi$ is a nonnegative random variable, then \begin{equation}\label{eq.st.00} \begin{split} \E(g(\xi))&=\E(g(\xi)\mathbf{1}(\xi\le A))+ g(A)+\int_{A}^\infty g'(x)\P(\xi>x)\mathrm{d} x. \end{split} \end{equation} \end{lemma} \begin{proposition}\label{prop.sufficiency.for.stochastic.domination.2} Let $\{X_i,i\in I\}$ be a family of random variables, and $L(\cdot)$ a slowly varying function. If \begin{equation}\label{eq.stoch.domi.12} \sup_{i\in I}\E\left(|X_i|^pL(|X_i|)\log(|X_i|)\log^2(\log(|X_i|))\right)<\infty\ \text{ for some }p>0, \end{equation} then there exists a nonnegative random variable $X$ with distribution function $F(x)=1-\sup_{i\in I}\P(|X_i|>x),\ x\in\R$, such that $\{X_i,i\in I\}$ is stochastically dominated by $X$ and \begin{equation}\label{eq.stoch.domi.11} \E\left(X^pL(X)\right)<\infty. \end{equation} \end{proposition} \begin{proof} By \eqref{eq.stoch.domi.12} and Theorem 2.5 (i) of Rosalsky and Th\`{a}nh \cite{rosalsky2021note}, we get that $\{X_i,i\in I\}$ is stochastically dominated by a nonnegative random variable $X$ with distribution function \[F(x)=1-\sup_{i\in I}\P(|X_i|>x),\ x\in\R.\] Let \[g(x)=x^pL(x)\log(x)\log^2(\log(x)),\ h(x)=x^p L(x),\ x\ge 0.\] Applying \eqref{eq.galambos}, there exists $B$ large enough such that $g(\cdot)$ and $h(\cdot)$ are strictly increasing on $[B,\infty)$, and \[\left|\dfrac{xL'(x)}{L(x)}\right|\le \dfrac{p}{2},\ x>B.\] Therefore, \begin{equation}\label{eq.st.10} h'(x)=px^{p-1}L(x)+x^pL'(x)=x^{p-1}L(x)\left(p+\dfrac{xL'(x)}{L(x)}\right)\le \dfrac{3px^{p-1}L(x)}{2},\ x>B. \end{equation} By Lemma \ref{lemRT}, \eqref{eq.stoch.domi.12} and \eqref{eq.st.10}, there exists a constant $C_1$ such that \begin{equation*} \begin{split} \E(h(X))&=\E(h(X)\mathbf{1}(X\le B))+h(B)+\int_{B}^\infty h'(x)\P(X>x)\mathrm{d} x\\ &\le C_1+\dfrac{3p}{2}\int_{B}^\infty x^{p-1}L(x)\P(X>x)\mathrm{d} x\\ &= C_1+\dfrac{3p}{2}\int_{B}^\infty x^{p-1}L(x)\sup_{i\in I}\P(|X_i|>x)\mathrm{d} x\\ &\le C_1+\dfrac{3p}{2}\int_{B}^\infty x^{-1}\log^{-1}(x)\log^{-2}(\log(x))\sup_{i\in I}\E\left(g(|X_i|)\right) \mathrm{d} x\\ &= C_1+\dfrac{3p}{2}\sup_{i\in I}\E\left(g(|X_i|)\right)\int_{B}^\infty x^{-1}\log^{-1}(x)\log^{-2}(\log(x))\mathrm{d} x\\ &<\infty. \end{split} \end{equation*} The proposition is proved. \end{proof} \begin{remark} The contribution of the slowly varying function $L(x)$ in Proposition \ref{prop.sufficiency.for.stochastic.domination.2} helps us to unify Theorem 2.5 (ii) and (iii) of Rosalsky and Th\`{a}nh \cite{rosalsky2021note}.
Letting $L(x)=\log^{-1}(x)\log^{-2}(\log(x))$, $x\ge0$, then by Proposition \ref{prop.sufficiency.for.stochastic.domination.2}, the condition \[\sup_{i\in I}\E\left(|X_i|^p\right)<\infty\ \text{ for some }p>0,\] implies that the family $\{X_i,i\in I\}$ is stochastically dominated by a nonnegative random variable $X$ satisfying \[\E\left(X^p\log^{-1}(X)\log^{-2}(\log(X))\right)<\infty. \] This slightly improves Theorem 2.5 (ii) in Rosalsky and Th\`{a}nh \cite{rosalsky2021note}. Similarly, by letting $L(x)=1$, we obtain an improvement of Theorem 2.5 (iii) in Rosalsky and Th\`{a}nh \cite{rosalsky2021note}. \end{remark} \begin{proof}[Proof of Theorem \ref{thm.main0}] Applying Proposition \ref{prop.sufficiency.for.stochastic.domination.2}, we have from \eqref{eq.stoch.domi.13} that the sequence $\{X_n,n\ge1\}$ is stochastically dominated by a nonnegative random variable $X$ with \[\E\left(X^pL^p(X)\right)<\infty.\] Applying Theorem \ref{thm.main1}, we immediately obtain \eqref{eq.main0.15}. \end{proof} The following example illustrates the sharpness of Theorem \ref{thm.main0} (and Corollary \ref{cor.main0}). It shows that in Theorem \ref{thm.main0}, \eqref{eq.main0.15} may fail if \eqref{eq.stoch.domi.13} is weakened to \begin{equation}\label{eq.slln.10} \sup_{n\ge1}\E\left(|X_n|^pL^p(|X_n|)\log(|X_n|)\log(\log(|X_n|))\right)<\infty. \end{equation} \begin{example}\label{ex.03} Let $1\le p<2$ and $L(\cdot)$ be a positive slowly varying function such that $g(x)=x^pL^p(x)$ is strictly increasing on $[A,\infty)$ for some $A>0$. Let $B=\lfloor A+ g(A) \rfloor+1$, $h(x)$ be the inverse function of $g(x)$, $x\ge B$, and let $\{X_n,n\ge B\}$ be a sequence of independent random variables such that for all $ n\ge B$ \[\P(X_n=0)=1-\dfrac{1}{n\log(n)\log(\log(n))},\ \P\left(X_n=\pm h(n)\right)=\dfrac{1}{2n\log(n)\log(\log(n))}.\] By \eqref{BGT1513}, we can choose (unique up to asymptotic equivalence) \[\tilde{L}(x)=\dfrac{h(x^p)}{x},\ x\ge B.\] Since $\tilde{L}(\cdot)$ is a slowly varying function, \[\log(\tilde{L}(n^{1/p}))=o\left(\log(n)\right),\] and so \[\log(h(n))=\log\left(n^{1/p}\tilde{L}(n^{1/p})\right)= \dfrac{1}{p}\log(n)+o(\log(n)).\] It thus follows that \begin{equation*}\label{eq.stoch.domi.23} \begin{split} &\sup_{n\ge1}\E\left(|X_n|^pL^p(|X_n|)\log(|X_n|)\log^2(\log(|X_n|))\right)\\ &=\sup_{n\ge1}\E\left(g(|X_n|)\log(|X_n|)\log^2(\log(|X_n|))\right)\\ &=\sup_{n\ge1}\dfrac{\log(h(n))\log^2(\log(h(n)))}{\log(n)\log(\log(n))}=\infty, \end{split} \end{equation*} and \begin{equation*}\label{eq.stoch.domi.25} \begin{split} &\sup_{n\ge1}\E\left(|X_n|^pL^p(|X_n|)\log(|X_n|)\log(\log(|X_n|))\right)\\ &=\sup_{n\ge1}\E\left(g(|X_n|)\log(|X_n|)\log(\log(|X_n|))\right)\\ &=\sup_{n\ge1}\dfrac{\log(h(n))\log(\log(h(n)))}{\log(n)\log(\log(n))}<\infty. \end{split} \end{equation*} Therefore \eqref{eq.stoch.domi.13} fails but \eqref{eq.slln.10} holds. Now, if \eqref{eq.main0.15} holds, then by letting $\alpha=1/p$, we have \begin{equation}\label{eq.exm20} \lim_{n\to\infty}\dfrac{\sum_{i=B}^n X_i}{n^{1/p}\tilde{L}(n^{1/p})}=0 \text{ a.s.} \end{equation} It follows from \eqref{eq.exm20} that \begin{equation}\label{eq.exm21} \lim_{n\to\infty}\dfrac{X_n}{n^{1/p}\tilde{L}(n^{1/p})}=0 \text{ a.s.} \end{equation} Since the sequence $\{X_n,n\ge1\}$ is comprised of independent random variables, the Borel--Cantelli lemma and \eqref{eq.exm21} ensure that \begin{equation}\label{eq.exm23} \sum_{n=B}^\infty \P\left(|X_n|>n^{1/p}\tilde{L}(n^{1/p})/2\right)<\infty. 
\end{equation} However, we have \begin{equation*}\label{eq.exm25} \begin{split} \sum_{n=B}^\infty \P\left(|X_n|>n^{1/p}\tilde{L}(n^{1/p})/2\right)&=\sum_{n=B}^\infty \P\left(|X_n|>h(n)/2\right)\\ &= \sum_{n=B}^\infty\dfrac{1}{n\log(n)\log(\log(n))}=\infty \end{split} \end{equation*} contradicting \eqref{eq.exm23}. Therefore, \eqref{eq.main0.15} must fail. \end{example} \section{Corollaries and remarks}\label{sec:appl} In this section, we apply Theorems \ref{thm.main0} and \ref{thm.main1} to three different dependence structures: (i) $m$-pairwise negatively dependent random variables, (ii) extended negatively dependent random variables, and (iii) $\varphi$-mixing sequences. The results for cases (i) and (ii) are new even when $L(x)\equiv1$. We also give remarks to compare our results with the existing ones. \subsection{$m$-pairwise negatively dependent random variables} The Baum--Katz theorem and the Marcinkiewicz--Zygmund SLLN for sequences of $m$-pairwise negatively dependent random variables were studied by Wu and Rosalsky \cite{wu2015strong}. Let $m\ge1$ be a fixed integer. A sequence of random variables $\{X_n,n\ge1\}$ is said to be \textit{$m$-pairwise negatively dependent} if for all positive integers $j$ and $k$ with $|j-k|\ge m$, $X_j$ and $X_k$ are negatively dependent, i.e., \[\mathbb{P}(X_j\le x, X_k\le y)\le \mathbb{P}(X_j\le x) \mathbb{P}(X_k\le y)\text{ for all $x,y \in \mathbb{R}$.}\] When $m=1$, this reduces to the usual concept of pairwise negative dependence. It is well known that if $\{X_i,i\ge1\}$ is a sequence of $m$-pairwise negatively dependent random variables and $\{f_i,i\ge1\}$ is a sequence of nondecreasing functions, then $\{f_i(X_i),i\ge1\}$ is a sequence of $m$-pairwise negatively dependent random variables. The following corollary is the first result in the literature on the complete convergence for sequences of $m$-pairwise negatively dependent random variables under the optimal condition even when $m=1$ and $L(x)\equiv1$. \begin{corollary}\label{prop.pairwise} Let $1\le p<2$, $\alpha\ge 1/p$, and let $\{X_n, \, n \geq 1\}$ be a sequence of $m$-pairwise negatively dependent random variables, and $L(\cdot)$ as in Theorem \ref{thm.main0}. \begin{itemize} \item[(i)] If \eqref{eq.stoch.domi.13} holds, then we obtain \eqref{eq.main0.15}. \item[(ii)] If $\{X_n, \, n \geq 1\}$ is stochastically dominated by a random variable $X$ satisfying \eqref{eq.main.13}, then we obtain \eqref{eq.main.15}. \end{itemize} \end{corollary} \begin{proof} From Lemma 2.1 in Wu and Rosalsky \cite{wu2015strong}, it is easy to see that $m$-pairwise negatively dependent random variables satisfy condition \eqref{eq:bound_var_00}. Corollary \ref{prop.pairwise} then follows from Theorems \ref{thm.main0} and \ref{thm.main1}. \end{proof} \begin{remark}\label{rem31} {\rm \begin{description} \item[(i)] We consider a special case where $\alpha=1/p$, $1<p<2$ and $L(x)\equiv1$ in Corollary \ref{prop.pairwise} (ii).
Under the condition $\E(|X|^p)<\infty$, we obtain \begin{equation}\label{eq.main.15b} \sum_{n= 1}^{\infty} n^{-1}\mathbb{P}\left(\max_{1\le j\le n}\left| \sum_{i=1}^{j}(X_i-\E(X_i))\right|>\varepsilon n^{1/p}\right)<\infty \text{ for all } \varepsilon >0, \end{equation} and \begin{equation}\label{eq.main.17b} \begin{split} \lim_{n\to\infty}\dfrac{\sum_{i=1}^{n}(X_i-\E(X_i))}{n^{1/p}}=0\ \text{ a.s.} \end{split} \end{equation} \item[(ii)] For $1<p<2$, Sung \cite{sung2014marcinkiewicz} considered the pairwise independent case and obtained \eqref{eq.main.17b} under the slightly stronger condition that \begin{equation}\label{eq.sung2014} \mathbb{E}\left(|X|^p(\log\log(|X|))^{2(p-1)}\right)<\infty. \end{equation} Furthermore, one cannot obtain the rate of convergence \eqref{eq.main.15b} by using the method of Sung \cite{sung2014marcinkiewicz}. In Chen et al. \cite[Theorem 3.6]{chen2014bahr}, the authors proved that \eqref{eq.main.15b} holds under the condition that $\E(|X|^p\log^r(|X|))<\infty$ for some $r>p$. They posed the open question of whether \eqref{eq.main.15b} holds under \eqref{eq.sung2014} (see \cite[Remark 3.1]{chen2014bahr}). For the case where the random variables are $m$-pairwise negatively dependent, Wu and Rosalsky \cite{wu2015strong} obtained \eqref{eq.main.15b} and \eqref{eq.main.17b} under the condition $\E(|X|^p\log^r(|X|))<\infty$ for some $r>1+p$. Wu and Rosalsky \cite{wu2015strong} then raised the open question of whether \eqref{eq.main.17b} holds under Sung's condition \eqref{eq.sung2014}. For $p=1$, again with $m$-pairwise negatively dependent random variables, Wu and Rosalsky \cite[Remarks 3.6]{wu2015strong} stated another open question: whether \eqref{eq.main.15b} (with $p=1$) holds under the condition $\E(|X|)<\infty$. Therefore, a very special case of Corollary \ref{prop.pairwise} gives affirmative answers to the aforementioned open questions raised by Chen et al. \cite{chen2014bahr} and Wu and Rosalsky \cite{wu2015strong}. \item[(iii)] Let $\alpha=1/p$, $1<p<2$, $L(x)=(\log\log(x))^{2(1-p)/p}$, $x\ge0$. Then $\tilde{L}(x)=(\log\log(x))^{2(p-1)/p}$, $x\ge0$. By Corollary \ref{prop.pairwise} (ii), under condition \eqref{MZ.09}, we obtain \eqref{MZ.07}. Therefore, this special case of Corollary \ref{prop.pairwise} also improves Corollary 1 of da Silva \cite{da2020rates}. \end{description} } \end{remark} \subsection{Extended negatively dependent random variables} The Kolmogorov SLLN for extended negatively dependent random variables was first studied by Chen et al. \cite{chen2010strong}. A collection of random variables $\{X_1,\dots,X_n\}$ is said to be \textit{extended negatively dependent} if for all $x_1,\dots,x_n \in \mathbb{R}$, there exists $M>0$ such that $$\mathbb{P}(X_1\le x_1, \dots, X_n\le x_n)\le M\mathbb{P}(X_1\le x_1) \dots \mathbb{P}(X_n\le x_n),$$ and $$\mathbb{P}(X_1> x_1, \dots, X_n> x_n)\le M\mathbb{P}(X_1> x_1) \dots \mathbb{P}(X_n> x_n).$$ A sequence of random variables $\{X_i,i\ge 1\}$ is said to be extended negatively dependent if for all $n\ge1$, the collection $\{X_i,1\le i\le n\}$ is extended negatively dependent. Let $m$ be a positive integer. The notion of $m$-extended negative dependence was introduced in Wu and Wang \cite{wu2021strong}. A sequence $\{X_i,i\ge 1\}$ of random variables is said to be $m$-extended negatively dependent if for any $n\ge2$ and any $i_1,i_2,\ldots,i_n$ such that $|i_j-i_k|\ge m$ for all $1\le j< k\le n$, the random variables $X_{i_1},\ldots,X_{i_n}$ are extended negatively dependent.
If $\{X_i,i\ge1\}$ is a sequence of $m$-extended negatively dependent random variables and $\{f_i,i\ge1\}$ is a sequence of nondecreasing functions, then $\{f_i(X_i),i\ge1\}$ is a sequence of $m$-extended negatively dependent random variables. We note that neither the classical Kolmogorov maximal inequality nor the classical Rosenthal maximal inequality is available for extended negatively dependent random variables (see Wu and Wang \cite{wu2021strong}). \begin{corollary}\label{prop.extended} Corollary \ref{prop.pairwise} holds if $\{X_n, \, n \geq 1\}$ is a sequence of $m$-extended negatively dependent random variables. \end{corollary} \begin{proof} Lemma 3.3 of Wu and Wang \cite{wu2021strong} implies that the sequence $\{X_n, \, n \geq 1\}$ satisfies condition \eqref{eq:bound_var_00}. Corollary \ref{prop.extended} then follows from Theorems \ref{thm.main0} and \ref{thm.main1}. \end{proof} \begin{remark} {\rm Chen et al. \cite{chen2010strong} proved the Kolmogorov SLLN for sequences of extended negatively dependent and identically distributed random variables $\{X,X_n,n\ge1\}$ under the condition that $\E(|X|)<\infty$. They used Etemadi's method \cite{etemadi1981elementary}, which does not work for the case $1<p<2$ in the Marcinkiewicz--Zygmund SLLN. To the best of our knowledge, Corollary \ref{prop.extended} is the first result in the literature on the Baum--Katz theorem for sequences of $m$-extended negatively dependent random variables under the optimal moment condition even when $L(x)\equiv1$ and $m=1$. } \end{remark} \subsection{$\varphi$-mixing random variables} A sequence of random variables $\{X_n,n\ge 1\}$ is called \textit{$\varphi$-mixing} if \[\varphi(n)=\sup_{k\ge 1,A\in \mathcal{F}_{1}^k,B\in \mathcal{F}_{k+n}^\infty,\mathbb{P}(A)>0}\left|\mathbb{P}(B|A)-\mathbb{P}(B)\right|\to 0 \text{ as }n\to\infty,\] where $\mathcal{F}_{1}^k=\sigma(X_1,\ldots,X_k)$ and $\mathcal{F}_{k+n}^\infty=\sigma(X_i,i\ge k+n)$. The following corollary is the Baum--Katz type theorem for sums of $\varphi$-mixing random variables. \begin{corollary}\label{prop.phimixing} Corollary \ref{prop.pairwise} holds if $\{X_n, \, n \geq 1\}$ is a sequence of $\varphi$-mixing random variables satisfying \begin{equation}\label{phi-mixing} \sum_{n=1}^\infty \varphi^{1/2}(2^n)<\infty. \end{equation} \end{corollary} \begin{proof} Corollary 2.3 of Utev \cite{utev1991sums} implies that \eqref{eq:bound_var_00} holds under \eqref{phi-mixing}. Corollary \ref{prop.phimixing} then follows from Theorems \ref{thm.main0} and \ref{thm.main1}. \end{proof} \begin{remark} The condition \eqref{eq:bound_var_00} is very general. The concepts of negative association, pairwise negative dependence, and $\varphi$-mixing can all be extended to random vectors in Hilbert spaces (see, e.g., \cite{dedecker2008convergence,ko2009note,hien2015weak,hien2019negative}). It would be interesting to see whether the results in this note can be extended to dependent random vectors taking values in Hilbert spaces. \end{remark} \textbf{Acknowledgments} The research of the second-named author was supported by the Ministry of Education and Training, grant no. B2022-TDV-01. \bibliographystyle{tfs} \bibliography{mybib} \end{document}
38,911
TITLE: A drunkard flipping switches in a dark room QUESTION [5 upvotes]: There are $n$ switches in a dark room controlling a single light; all the switches must be on for the light to be on. Initially all the switches are off. A drunkard left in the room wakes up from a hangover and repeatedly picks a switch uniformly at random and flips it. What is the expected number of flips the drunkard takes to turn on the light? I answered the $n=4$ case here, but the weird result of $\frac{64}3$ enticed me to generalise the problem, for which I have rewritten the setting. If $E_k$ is the expected number of flips needed with $k$ switches on, the $E_k$ satisfy the following tridiagonal system: $$\begin{bmatrix}1&-1&0&\dots&0&0\\-\frac1n&1&\frac1n-1&\dots&0&0\\0&-\frac2n&1&\dots&0&0\\\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\0&0&0&\dots&1&-\frac2n\\0&0&0&\dots&\frac1n-1&1\end{bmatrix}\begin{bmatrix}E_0\\E_1\\E_2\\\vdots\\E_{n-2}\\E_{n-1}\end{bmatrix}=\begin{bmatrix}1\\1\\1\\\vdots\\1\\1\end{bmatrix}$$ The $(i,j)$ entry of the square matrix is $1$ if $i=j$, $-\frac jn$ if $i=j+1$, $\frac{i-1}n-1$ if $i=j-1$ and $0$ otherwise. The answer to the problem proper is then $E_0$, the first few values of which are (starting from $n=1$) $$1,4,10,\frac{64}3,\frac{128}3,\frac{416}5,\frac{2416}{15},\frac{32768}{105},\frac{21248}{35},\frac{74752}{63}$$ After nosing around and actually solving the tridiagonal system using a dedicated algorithm I found a formula for $E_0$ at any $n$: $$E_0(n)=2^{n-1}F(n-1)\text{ where }F(n)=\sum_{k=0}^n\frac1{\binom nk}\tag1$$ Now $F$ has been discussed at length on this site, including here, here and here, and is A048625/A048626 in the OEIS. Hence the generating function for $E_0$ may be obtained: $$\sum_{n=0}^\infty E_0(n)z^n=z\left(\frac{\log(1-2z)}{2(z-1)}\right)'$$ The appearance of $F(n)$ was a pleasant surprise for me, and now I have no proof because this was all numerical experimentation. How can $(1)$ be proved? I found $(1)$ by tracing the variables in the tridiagonal matrix algorithm applied to the above square matrix turned upside-down. I found that at the end (using the notation on Wikipedia) $d_n=2^{n-1}$ and $b_n=\frac1{F(n-1)}$, but trying to expand everything symbolically only produces a huge tree of fractions. REPLY [3 votes]: This interesting problem is equivalent to this one: compute the expected length of a random walk from one vertex of an $n$-dimensional hypercube to the diagonally opposite vertex. And this is indeed mentioned in https://oeis.org/A003149 2^n*A003149(n)/n! is the expected length of a random walk from one vertex of an (n+1)-dimensional hypercube to the diagonally opposite vertex (a walk which may include one or more passes through the starting point) Here a solution (quite complicated) is given. I prefer to copy from here and its answer : Three lemmas: Lemma 1: The average return time for any vertex is $2^n$ This is easy to see because all vertexes are equivalent. Lemma 2: For any starting vertex, the average time for returning to it or reach its opposite is $2^{n-1}$ This is similar to the above, by grouping each vertex with its opposite. Lemma 3: For any vertex, the probability that it reaches the opposite before returning to the original is $1/S$ where $S=\sum_k \binom{n-1}{k}$ Let consider each vertex as a $n-$binary tuple, and let the weight be the number of ones. Let $p(m)$ be the probability that a random walk starting at a vertex with weight $m$ will arrive at $(1,1,1...1)$ before it reaches $(0,0 \cdots,0)$. 
Clearly $p(0)=0$ and $p(n)=1$. Furthermore $$p(k)=\frac{k}{n} p(k-1)+ \frac{n-k}{n}p(k+1) \tag 1$$ for $0<k<n$. Let $q(k)=p(k+1)-p(k)$. Then the above equation can be rewritten as $$\frac{k}{n} q(k-1)=\frac{n-k}{n}q(k)\tag 2 $$ or $$q(k)=q(k-1)\frac{k}{n-k}. $$ So $q(1)=q(0)/(n-1)$, $q(2)=q(0)\cdot 2/((n-1)(n-2))$, etc. Therefore $$q(k)=\frac{1}{\binom{n-1}{k}} q(0) \tag 3$$ Let $$S(n-1)=\sum_{k=0}^{n-1} \frac{1}{\binom{n-1}{k}} \tag 4$$ We have $1=p(n)-p(0)=q(0)+q(1)+\cdots +q(n-1)=q(0) S$. Therefore $q(0)=1/S$. Now if a random walk starts at $(0,0,\ldots,0)$, after one step it will be at a vertex of weight 1. So the probability that it will reach $(1,1,\ldots,1)$ before returning to $(0,0,\ldots,0)$ is $p(1)=p(1)-p(0)=q(0)=1/S$. Now, let $T$ be the expected number of steps to reach $(1,1,\ldots,1)$ from $(0,0,\ldots,0)$. By Lemma 2 it takes an average of $2^{n-1}$ steps to either reach the opposite corner or return to the starting point. By Lemma 3, the walk will return to the starting point a fraction $\frac{S-1}{S}$ of the time. So $$T=2^{n-1}+\frac{S-1}{S}T \implies T=2^{n-1} S, $$ where $S$ is defined in $(4)$.
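(Not part of the argument above, just a sanity check.) If you want to verify the closed form $T=2^{n-1}S$, equivalently $E_0(n)=2^{n-1}F(n-1)$ from the question, here is a minimal Python sketch; the helper names are my own, and the forward elimination mirrors the tridiagonal solve described in the question.

```python
from fractions import Fraction
from math import comb

def expected_flips_direct(n):
    # E[k] = expected flips to reach all-on starting from k switches on.
    # E[n] = 0 and E[k] = 1 + (k/n) E[k-1] + ((n-k)/n) E[k+1] for 0 <= k < n.
    # Forward elimination: write E[k] = a[k] * E[k+1] + b[k].
    a = [Fraction(0)] * n
    b = [Fraction(0)] * n
    for k in range(n):
        if k == 0:
            a[k], b[k] = Fraction(1), Fraction(1)          # E[0] = 1 + E[1]
        else:
            p_down = Fraction(k, n)                        # flip an "on" switch off
            p_up = Fraction(n - k, n)                      # flip an "off" switch on
            denom = 1 - p_down * a[k - 1]
            a[k] = p_up / denom
            b[k] = (1 + p_down * b[k - 1]) / denom
    # Back-substitute from E[n] = 0 down to E[0].
    E_next = Fraction(0)
    for k in reversed(range(n)):
        E_next = a[k] * E_next + b[k]
    return E_next                                          # this is E[0]

def expected_flips_formula(n):
    # Closed form: E_0(n) = 2^(n-1) * sum_{k=0}^{n-1} 1/C(n-1, k).
    return Fraction(2) ** (n - 1) * sum(Fraction(1, comb(n - 1, k)) for k in range(n))

for n in range(1, 11):
    assert expected_flips_direct(n) == expected_flips_formula(n)
    print(n, expected_flips_formula(n))   # 1, 4, 10, 64/3, 128/3, ...
```

Exact rational arithmetic via `fractions.Fraction` makes the comparison unambiguous, and the printed values reproduce the sequence $1, 4, 10, \frac{64}{3}, \frac{128}{3}, \dots$ listed in the question.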
210,982
Presumably in his application cover letter, he explained how his skills from one job pertained to the other. Via Philly.com: Who better to identify all the cavities a 10 year old boy could hide contraband? Tell me Father Guido, if I laugh at this am I going to hell? Yes, but only nerds go to Heaven. Just another player in the security theater we pay for, because the Feds aren’t interested in buying real security with our tax dollars. Though I don’t feel either entertained or more secure because of the existence of TSA. Despicable. Amicable for it is in the City of Brotherly Love after all…
143,337
Learning online is challenging for most children. Here’s what you can do when your child struggles with distance learning. So your child hated online learning last spring. You are not alone. Almost every parent I’ve talked to said learning online was not a good fit for their child. When tips and tricks aren’t working, here’s what to do when your child struggles with distance learning. Disclosure: This post may contain affiliate links; read more here. alt="" width="600" height="900" sizes="(max-width: 600px) 100vw, 600px" data-recalc-dims="1" data-loading="lazy" data-src="" data- /> Pin this for later! Don’t Focus on the Behavior First, here’s what you shouldn’t do. Don’t focus on the troubling behaviors. Focusing on the tantrums, the meltdowns, the refusal, and the arguments isn’t going to solve anything. The behavior has already served it’s purpose: it’s communicated to you that there is a problem. From now on, start saying, “Distance learning was tricky for you.” Don’t say, “You threw tantrums.” Why is this helpful? Shouldn’t we want our children to behave better? Of course we want our children to have more adaptive behavior, but the behavior is showing you they are having a hard time. Your child doesn’t have the skills to manage the problem yet. When parents focus on the behavior, it obscures our ability to see the underlying issues. To solve the problems that are causing your child to struggle with distance learning, you need to look beyond. And don’t worry: I’ll help you figure put the underlying issues. Don’t Assume You Know What the Problem Was Here’s another thing you don’t want to do: don’t assume you know what part of distance learning was hard for your child. I’m totally guilty of this because we have to make judgements when our children can’t communicate. But definitely by the time our children are school-aged, they have some tools for communicating, (Note: if you’re a a parent with a non-verbal autistic child – I see you and hear your challenges. While my child is verbal, please contact me directly if you want help troubleshooting). Your child is the expert in identifying the problem since they are the one experiencing it. They will likely need some help articulating it. alt="" width="600" height="800" sizes="(max-width: 600px) 100vw, 600px" data-recalc-dims="1" data-loading="lazy" data-src="" data- /> Don’t Decide on Solutions without Your Child It’s okay to troubleshoot with some quick tips and tricks to make distance learning better for your child. Do you already have solid school day routines for your child? - Related: The Ultimate Routines Printable Pack It’s also really important to review your expectations to make sure your child really understands them. After that, if distance learning continues to be problematic, it’s not helpful to choose solutions without your child. You might waste a lot of your precious precious time and energy with a solution to something that seemed like a problem, but wasn’t really the main challenge for your child. Don’t Speak Negatively about Online Learning Here’s the last thing you don’t want to do: Please don’t speak negatively about online learning. If you convey to your child that this is hard and awful, they are more likely to think it’s hard and awful. So parents everywhere: don’t speak negatively about it. Be honest and empathetic (“yeah this is tough”) but watch yourself for negativity (“this is a disaster!”). Our attitudes about the challenge are more helpful than we realize. They will take cues from us. 
Do Decide to Believe this is a Solvable Problem Now, we’re going to switch to the things you want to do. First, your mindset is the most important factor in this. You need to believe this is a solvable problem. You’re probably yelling at your screen, “Yeah, right Anne. The only solution is the end of the pandemic and our kids being back in school.” The truth is that you have the power to make some of the problems better for your child through what I’ll describe below. It’s not a magic wand and won’t fix everything, but it will improve the experience for your child and you. alt="" width="600" height="539" sizes="(max-width: 600px) 100vw, 600px" data-recalc-dims="1" data-loading="lazy" data-src="" data- /> Do Talk to Your Child about why Distance Learning was Hard See what I did there? You’re going to want to watch your language during these conversations. Instead of focusing on the behaviors during distance learning, you’ll say neutral things like “why was this hard?” This method I’m going to describe is loosely based on the parenting technique CPS (Collaborative Problem Solving or Collaborative and Proactive Solutions depending on which source you’re reading). This method detailed more thoroughly in Ross Greene’s books The Explosive Child and Raising Human Beings. If the first title resonates with, or if your child has an ADHD, ODD, OCD, SPD, ASD, or a mental health diagnosis, you might want to consider that book. If not, the book Raising Human Beings is for every parent. Do Use Empathy and Curiosity to Discuss the Challenges of Distance Learning You’re going to want to start like this: “Hey when you had school online in the spring, that was really tough. Why?” You need a curious and empathetic tone. Now if you’re new to this method, it’s 90% likely your kid is going to say either, “I don’t know,” or “All of it.” Refrain from any judgment or frustration. Instead, believe this is solvable with time and practice. alt="" width="489" height="617" sizes="(max-width: 489px) 100vw, 489px" data-recalc-dims="1" data-loading="lazy" data-src="" data- /> Do Ask a lot of Questions To find out why your child dislikes distance learning, you’re gong to use a method called, “Drilling.” This means to you’re going to draw out your child with neutral and curious questions. If they gave you a more specific response, like “My brother bugs me” then that’s even more helpful. Here’s an example: - Parent: “Hey when you had school online in the spring, that was really tough. Why?” - Child: “My brother bugs me.” - Parent: “Oh I can see how that’s frustrating (empathy). Can you tell me more (curiosity)?” Don’t jump to solutions. Plan to get as much information as you possibly can to understand exactly how the child’s brother is disruptive. But what about when your child responds with “I don’t know” or “Everything,” it’s just fine and a good place to start. Here’s an example: - Parent: “Hey when you had school online in the spring, that was really tough. Why?” - Child: “All of it.” - Parent: Oh I see, it was a tough experience (empathy). Well, it might have been something about the environment, something about the lessons, or something about the assignments (curiosity). Was it one of the those?” - Child: “I don’t know.” - Parent: “Good to know. Well, let’s start with the environment. Was it your pencils?” - Child: “No they were okay.” - Parent: “Okay thanks. 
Was it where you were sitting?” - And so on… What to Troubleshoot When Distance Learning is Hard I made a comprehensive list of things to start with when you discuss why your child struggles with distance learning. I’ve made it into a printable for you. You can begin with these topics as a starting place. - Environment - Lessons - Assignments - Something else alt="" width="426" height="568" sizes="(max-width: 426px) 100vw, 426px" data-recalc-dims="1" data-loading="lazy" data-src="" data- /> Is your child struggling with their emotions? Where to go from here This is a quick overview of the first step in the CPS model. I wholeheartedly recommend the Collaborative Problem Solving method. But first you need to believe that improvements and solutions are possible. If you continue to struggle, reach out the teacher first. Your child’s teacher needs to be in the loop and appreciates the communication. Last, remember, this too shall pass. Dial down the pressure for yourself too. No one is excelling in the COVID life. As the saying goes, “Cs get degrees.” Just focus on getting through with the minimum. It will be okay. Recapping When Your Child Struggles with Distance Learning Here’s the summary for busy parents: - Don’t focus on the bad behavior. It’s not the actual problem to be solved. - Don’t assume you know what part of distance learning was hard for your child. - It’s likely counterproductive to choose solutions without your child. - Don’t speak negatively about online learning. - Believe this is a solvable problem. - Talk openly with your child. - Use neutral language: “it is tricky for you.” - Use empathy and curiosity: “Yes I can see this is tough for you.” - Ask a lot of questions. - Use the comprehensive printable with topics you might need to discuss. - Reach out to the teacher. - Lower the bar for yourself. “Passing” during this season is success. Conclusion When your child struggles with distance learning, it feels like its impossible to find a solution. But the truth is, your child and you can work together to find improvements. It won’t be perfect or replicate the classroom experience. However, working to find some solutions for distance learning will allow your child and you the chance to leverage a hard situation with resilience.
403,934
Yesterday, I published an article about Memorial Day as it relates to the baseball standings. In sum, I wrote about the baseball adage that one should not check the standings until Memorial Day. Using data from 2010 to 2018, I looked at the correlation between Memorial Day winning percentage and end-of-season winning percentage and constructed a linear regression line to fit the data. Within the piece, I used the regression equation to discuss full-season scenarios for the Twins and Nationals, two teams that have surprised — albeit for different reasons — this season. The response to the article was interesting, and some asked for me to take a look at full-season projections for all 30 teams based on the regression. This sortable chart does exactly that: I will say that you should take these projections with a grain of salt, and I’d recommend for you to look at our actual projected standings for a better estimate of the full-season results. These projections are based on a regression line that only could account for 57% of the variability in full-season results. This means that these expected win totals could be off, and they could be off pretty significantly. As I wrote in my article on Wednesday, we’ve seen teams outperform their expectation by as many as 112 points (2012 Dodgers) or underperform their expectation by as many as 129 points (2013 Astros). With this in mind, let’s call these two extremes our best- and worst-case scenarios, respectively. Now let’s put those best- and worst-case scenarios into a chart for all 30 teams: As you can probably see, this doesn’t tell us much. The Twins aren’t going to win 116, the Rockies aren’t going to win 96, and the Orioles won’t win 80. But if you consider these to be the absolute high-bound win totals for most teams — something like the 99.6th percentile, considering only one team out of the 270-team sample (0.4%) we have from our initial dataset was able to achieve these levels of outperforming the expectation — things begin to make more sense. A troubling figure is the one for Nationals fans with hope; if their 99.6th percentile projection is only 88 wins, I think we can begin to safely assume that 2019 is going to be a lost season in D.C. On the flip side, if you’re a Twins fan and you see that their 0.4th percentile projection is 77 wins, you’d have to be feeling pretty good. Realistically, no team is going to play to these projections. Let’s use the 25th and 75th percentile instead, as those would still be within a truly possible range of outcomes. Based on the 270-team sample again, we would find the 75th percentile residual to be +29 points of win percentage and the 25th percentile residual to be -31 points of win percentage. Let’s construct a third chart with these scenarios: I still caution you when looking at these charts; they are not adjusted for team strength (or even run differential), as they just use previous team data to estimate the full scenarios. But these results paint what appears to be a pretty decent picture of where the league stands today. The Twins continue to look like the favorites to win AL Central, don’t they? With that said, I am brought to a second question: if a team is in a playoff spot on Memorial Day, do they tend to hang on to the spot by the end of the season? I went back to my sample of Memorial Day and Final Standings, and I looked at the results from 2012 through 2018. This represents every team who has played in the era of two Wild Cards. 
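As a rough illustration of how one might run that query (this is a hypothetical sketch, not the code behind this article; the file name and column names are made up), a few lines of pandas suffice:

```python
import pandas as pd

# Hypothetical schema: one row per team-season, 2012-2018, with boolean flags
# for holding a playoff spot on Memorial Day and at season's end.
standings = pd.read_csv("standings_2012_2018.csv")

final_playoff_teams = standings[standings["final_playoff_spot"]]
held_on_memorial_day = final_playoff_teams["md_playoff_spot"].sum()

retention_rate = held_on_memorial_day / len(final_playoff_teams)
print(f"{held_on_memorial_day} of {len(final_playoff_teams)} playoff teams "
      f"held a spot on Memorial Day ({retention_rate:.0%})")  # compare with the totals quoted just below
```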
I found that of the 70 teams that have made the playoffs in those seven years, 46 of them held a playoff spot on Memorial Day. That’s 66%. That would mean between six and seven of the teams who are already in a playoff spot as of Memorial Day will still be in a playoff spot by the end of the season. We can probably begin to guess those teams. The Dodgers have a six-game lead in the NL West, the Twins have a seven-game lead in the AL Central, and the Astros have a seven-and-a-half-game lead in the AL West. Those three teams are more or less locks to make the playoffs, and our odds reflect that; those three teams all have greater than a 90% chance to continue into October. The Yankees do too, as they currently have a two-game lead in the AL East. With those, we’re already at four of our six or seven teams, so while 66% might sound like a lot, all those teams who had commanding divisional leads on Memorial Day tended not to fall out of the playoffs altogether. Bubble teams stay as bubble teams, and those teams will continue to shuffle in the standings as the season goes on. The last topic I want to discuss is the predictiveness of Memorial Day records. As one reader of the original piece kindly pointed out, a comparison of the Memorial Day winning percentage and a team’s final winning percentage doesn’t have much predictive power. This is because a team’s final win percentage includes the games they played before Memorial Day, so there is double-counting involved. For my initial question, “Is Memorial Day the time to check the standings?” the double-counting works fine. I wasn’t looking to determine how predictive a team’s Memorial Day record actually is, I just wanted to figure how well teams finished out after their early-season performance. This distinction is important, and our Nationals example can prove exactly why. The Nationals are currently 19-30 and in a nine-game hole in the NL East. Even if the Nationals finish their season by going 62-51, which represents a pretty solid .549 win percentage (89-win pace), they’d only finish the year 81-81. The Memorial Day record didn’t do a great job of predicting the Nationals’ rest-of-season record, but it did do a better job of predicting the Nationals’ full-season record. So, let’s take a look at the predictive power of a team’s Memorial Day record: To be blunt, it’s not great. There’s a moderate correlation here, evidenced by our r-value. But our regression line can only explain about 25% of the variability in a team’s rest-of-season win percentage, so there’s still a lot of change that can happen over the remainder of the baseball season, as expected. What does this tell us, in combination with yesterday’s scatterplot which showed a much stronger correlation between Memorial Day winning percentage and full season winning percentage? Well, it tells us that teams can build themselves a large cushion (a la the Twins) by Memorial Day and ride that to full-season success. Conversely, it tells us that teams can be buried (a la the Nationals) by Memorial Day, and even with a rest-of-season turnaround, they probably still won’t be successful overall. But a team’s record on Memorial Day alone doesn’t necessarily tell us how they will play over the remaining games. That small distinction is extremely important when trying to answer my initial question. Yes, Memorial Day standings are meaningful, but no, they don’t do a great job of telling us how the teams will play over the remaining 110 or so games.
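For readers who want to reproduce the two fits discussed above, here is an illustrative sketch (again with hypothetical file names; the article itself only reports the summary numbers). It fits both regressions with ordinary least squares, reports R-squared, and pulls the 25th/75th-percentile residuals used for the scenario charts.

```python
import numpy as np

# Hypothetical arrays, one entry per team-season from the 2010-2018 sample:
# winning percentage on Memorial Day, over the rest of the season, and overall.
md_wpct = np.load("memorial_day_wpct.npy")
rest_wpct = np.load("rest_of_season_wpct.npy")
full_wpct = np.load("full_season_wpct.npy")

def fit_and_describe(x, y, label):
    # Ordinary least-squares line y ~ a*x + b, plus R^2 and residual percentiles.
    a, b = np.polyfit(x, y, 1)
    residuals = y - (a * x + b)
    r_squared = 1 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)
    p25, p75 = np.percentile(residuals, [25, 75])
    print(f"{label}: R^2 = {r_squared:.2f}, "
          f"25th/75th pct residuals = {p25:+.3f}/{p75:+.3f}")

# Full-season fit: the original piece reports about 57% of variance explained
# and residual percentiles of roughly -.031/+.029.
fit_and_describe(md_wpct, full_wpct, "Memorial Day -> full season")

# Rest-of-season fit: only about 25% of variance explained, as noted above.
fit_and_describe(md_wpct, rest_wpct, "Memorial Day -> rest of season")
```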
8,432
Carbon vane reliability in Vacuum Pumps. You of a rotary vane vacuum pump are the carbon impregnated and an isostatic graphite that gives the Plate Diffuser; Tube of fresh oil lubricated vacuum pumps is very reliable with simple and cost effective design. It operates according to the rotary vane China Supplier Vacuum Vane Graphite Vane Pump Vane, Find details about China Carbon Vane, Carbon Vane for Vacuum Pump from China Supplier Vacuum Vane Graphite Vane Oil Sealed Rotary Vane Vacuum Pumps A rotary vane vacuum pump has an eccentrically installed rotor and They contain special graphite vanes in the compression China Air Pump Use Graphite Vanes/Carbon Sheet/Graphite Vane Vacuum Pump, Carbon Block, C/C Composite Plate, Orion Vacuum Pump Carbon Vane for Rotary Dry-Running Rotary Vane Vacuum Pumps and DRT and VRT rotary vane vacuum pumps work absolutely oil free. marked on pump rating plate or label, Dtlf250 Carbon Vane from Vacuum Pumps Carbon Vanes Becker/Rietschle Sales for Coal plate used in DTLF 2.500 Vacuum Pump Vane Pump; Rotary Vane; Graphite China Graphite Plate for Becker, Find details about China Carbon Vane, Vacuum Pump Vane from Graphite Plate for Becker - Hualian Carbon Industry Co., Ltd. PDF Vanes for Pumps. Graphite Vanes. Ideal for working in vacuum pumps. The qualities developed by Carbosystem of graphite + resin have very high mechanical strength Dry running rotary vane vacuum pumps and compressors operate by the same Dry-Running Rotary Vane Vacuum Pumps ? the special graphite vanes in the 4.2.1 Design / Operating principle. A rotary vane vacuum pump is an oil-sealed rotary displacement Rotary vane vacuum pumps are built in single- and two-stage Thank you for explaining the difference between Rotary Vane and Diaphragm vacuum pumps. Nice post. Pump School instructs fluid-handling engineers on positive-displacement rotary Vane Pump Overview. While vane pumps can handle End Plates - Carbon graphite; carbon graphite vane for Becker vacuum pump,US $ 1 - 50 / Piece, China (Mainland), SIEG, EK60.Source from Zhenjiang Ecigar Machinery Co., Ltd. on Alibaba. 44 Pumps, Compressors and Process Components 2013 Vacuum technology Rotary vane pumps For decades, oil-free rotary vane vacuum pumps and compressors have Oil sealed rotary vane pumps (aka rotary vane pumps) are the primary pumps on most vacuum systems used in the heat treatment industry. They are also referred to as a
347,713
What is Counselling Counselling and Psychotherapy seek to change the client's relationship to their distress and the therapist's role is to facilitate this process of personal change. Counselling can help to identify and resolve difficult feelings, entrenched beliefs and negative ways of coping that undermine our health, self-esteem and ability to develop and sustain meaningful and supportive relationships with others. Counselling encourages greater selfawareness and expansion of self-knowledge allowing the inherently healthy and wise element, centrally present in each of us, to become more apparent and powerful.
335,059
\begin{document} \title[fifth order modified KdV equation] { Well-posedness and ill-posedness of the fifth order modified KdV equation} \author[Soonsik Kwon ] {Soonsik Kwon} \address{Soonsik Kwon \hfill\break Department of Mathematics, University of California, Los Angeles, CA 90095-1555, USA} \email{[email protected]} \begin{abstract} We consider the initial value problem of the fifth order modified KdV equation on the Sobolev spaces. \begin{equation*} \begin{cases} \partial_t u - \px^5u + c_1\px^3(u^3) + c_2u\px u\px^2 u + c_3uu\px^3 u =0\\ u(x,0)= u_0(x) \end{cases} \end{equation*} where $ u:\R\times\R \rightarrow \R $ and $c_j$'s are real. We show the local well-posedness in $H^s(\R)$ for $s\geq \frac{3}{4}$ via the contraction principle on $X^{s,b}$ space. Also, we show that the solution map from data to the solutions fails to be uniformly continuous below $H^{3/4}(\R)$. The counter example is obtained by approximating the fifth order mKdV equation by the cubic NLS equation. \end{abstract} \thanks{} \thanks{} \subjclass[2000]{35J53} \keywords{local well-posedness; ill-posedness; mKdV hierarchy} \maketitle \section{Introduction} The KdV equation and the modified KdV(mKdV) equation are completely integrable in the sense that there are Lax pair formulations. Being completely integrable, the KdV and the mKdV equations enjoy infinite number of conservation laws. Each of these is an Hamiltonian of the flow which commute the KdV flow (resp. the mKdV flow). This generates an infinite collection of commuting nonlinear equations of order $2j+1,\,\,(j\in \mathbb{N})$, which is known as the KdV hierarchy (resp. the mKdV hierarchy). In this note, we consider the second equation from the modified KdV hierarchy: \begin{equation}\label{fifthmkdv} \partial_t u - \px^5 u -30u^4\px u + 10 u^2\px^3 u + 10 (\px u)^3 + 40 u\px u\px^2 u = 0. \end{equation} Using the theory of the complete integrability, one can show that for any Schwartz initial data, the solution to any equation in the KdV hierarchy (resp. the mKdV hierarchy) exists globally in time. However, the well-posedness theory for low regularity initial data requires the theory of dispersive PDEs. And changing coefficients in the nonlinear terms may break the integrable structure. In this case, we can no longer rely on the theory of complete integrability. The purpose of this paper is to study the low regularity well-posedness and ill-posedness.\\ We consider the following fifth order mKdV equation, which generalizes \eqref{fifthmkdv} \footnote{For omitting $u^4\px u$, See Remark~\ref{low order term}}. \begin{equation}\label{fifthm} \begin{cases} \partial_t u - \px^5u + c_1\px^3(u^3) + c_2u\px u\px^2 u + c_3uu\px^3 u =0\\ u(x,0)= u_0(x) \end{cases} \end{equation} where $ u:\R\times\R \rightarrow \R $ and $c_j$'s are real. \\ \\ We show the local well-posedness result and the ill-posedness result. First, we state the local well-posedness theorem. \begin{theorem}\label{lwp} Let $ s \geq \frac 34$ and $u_0 \in H^s(\R)$. Then there exists $T=T(\|u_0\|_{H^s(\R)})$ such that the initial value problem \eqref{fifthm} has a unique solution $u(t,x) \in C([0,T];H^s(\R))$. Moreover, the solution map from data to the solutions is real-analytic. \end{theorem} Previously, Kenig, Ponce, and Vega \cite{KPV94} studied the local well-posedness of the odd order dispersive equations: $$ \partial_t u + \px^{2j+1}u + P(u,\px u,\cdots ,\px^{2j}u) =0 $$ where $P$ is a polynomial having no constant and linear terms. 
They proved the local well-posedness for the initial data in the weighted Sobolev space, i.e. $$ u_0 \in H^s(\R) \cap L^2(|x|^mdx) $$ for some $s,m \geq 0$. Their method was the iteration using the local smoothing estimate and the maximal function estimate. Inspecting their proof for the equation \eqref{fifthm}, one can observe that the local well-posedness holds true for $s>\frac{9}{4}$ and $ m=0$. In other words, the local well-posedness is established for the Sobolev space \emph{without the decaying weight}. Thus, our result can be viewed as an improvement of theirs. Our proof is also via the contraction principle. A natural choice of the iteration space is the Bourgain space, also known as the $X^{s,b}$ space. Assuming the standard argument of the iteration on the $X^{s,b}$ space, the main step is to show the following nonlinear estimate: $$ \|T(u,v,w)\|_{X^{s,b-1}} \lesssim \|u\|_{X^{s,b}}\|v\|_{X^{s,b}}\|u\|_{X^{s,b}}. $$ where $ T(u,v,w) = c_1\px^3(uvw) + c_2u\px v\px^2 w + c_3uv\px^3 w $. This is performed by the dyadic method of Tao. In \cite{Tao2001}, Tao studied multilinear estimates for $X^{s,b}$ space systematically and showed the analogous trilinear estimate for the mKdV equation. This reproves the local well-posedness for $s\geq 1/4$, which originally showed by Kenig, Ponce and Vega \cite{KPV93} by the local smoothing estimate. Thus, in the mKdV equation the $X^{s,b}$ estimate has the same strength as the classical local smoothing method, while in the fifth order mKdV \eqref{fifthm} the $X^{s,b}$ estimate improves the preceding one. \\ \indent In \cite{CCT2003} Christ, Colliander and Tao showed the solution map of the mKdV equation fails to be uniformly continuous for $s <1/4$. This implies $1/4$ is the minimal regularity threshold for which the well-posedness problem can be solved via an iteration methods. Our next theorem is the analogue of this for the equation \eqref{fifthm}. \begin{theorem}\label{illposed} Let $ -\frac {7}{24} < s < \frac 34$. The solution map of the initial value problem \eqref{fifthm} fails to be uniformly continuous. More precisely, for $ 0 < \delta \ll \epsilon \ll 1$ and $T>0$ arbitrary, there are two solutions $ u,v$ to \eqref{fifthm} such that \begin{gather} \label{illposed1}\|u(0)\|_{H^s_x}, \|v(0)\|_{H^s_x} \lesssim \epsilon\\ \label{illposed2}\|u(0)-v(0)\|_{H^s_x} \lesssim \delta \\ \label{illposed3}\sup_{0\leq t\leq T} \|u(t) -v(t)\|_{H^s_x} \gtrsim \epsilon. \end{gather} \end{theorem} The method used here is very similar to theirs in \cite{CCT2003}. We approximate the fifth order mKdV equation by the cubic NLS equation, \begin{equation}\label{cNLS} i\partial_t u - \px^2 u + |u|^2u = 0, \end{equation} at $(N,N^5)$ in the frequency space. \\ Let $u(t,x)$ be the linear solution to $ (\partial_t -\px^5)u =0 $ with $u(0)=u_0$. Setting $$ \xi := N+ \frac{\xi'}{\sqrt{10}N^{3/2}}, $$ $ \tau=\xi^5 $ leads $ \tau = N^5 + \sqrt{\frac 52}N^{5/2}\xi' + \tau' $ where $$ \tau' = \xi'^2 + \frac{\xi'^3}{\sqrt{10}N^{5/2}} + \frac{\xi'^4}{20N^5} +\frac{\xi'^5}{(10N^3)^{5/2}}. 
$$ \begin{align*} u(t,x) &= \int e^{it\tau + ix\xi} \widehat{u_0}(\xi) d\tau d\xi \\ &= \int e^{it(N^5 + \sqrt{\frac 52}N^{5/2}\xi' + \tau') + ix(N + \frac{\xi'}{\sqrt{10}N^{3/2}})} \widehat{u_0}(\xi) d\tau d\xi\\ &= e^{iN^5 +iNx}\int e^{i\tau' t + i\xi'( \frac{x}{\sqrt{10}N^{3/2}}+ \sqrt{\frac{5}{2}}N^{5/2}t)}\widehat{u_0}(N+ \frac{\xi'}{\sqrt{10}N^{3/2}}) d\tau' d\xi' \end{align*} Since $\tau' \approx \xi'^2 $ for $|\xi'| \ll N $, $$u(t,x) \approx e^{iN^5t +iNx} v( t, \frac{x}{\sqrt{10}N^{3/2}} + \sqrt{\frac{5}{2}}N^{5/2}t) $$ where $v(t,x) $ is a solution to the linear Schr\"odinger equation $i\partial_tv -\px^2v =0 $.\\ At the presence of the nonlinear term, one need the factor $ \frac{c}{N^{3/2}}$ and the real part projection Re. Then it is approximated to the cubic NLS equation \eqref{cNLS}. The detail follows in Section~\ref{sec4}. \\ \indent On the other hand, the solutions to the fifth order KdV equation, $$ \partial_tu -\px^5u + c_1\px u\px^2u + c_2 u\px^3 u = 0, $$ is known to have genuine nonlinear dynamics for all $s>0$. In \cite{Kwon} the author showed the solution map fails to be uniformly continuous in $H^s(\R)$ for $s>0$. Thus, for this equation the local well-posedness problem is solved by other than the iteration method. In \cite{Kwon} the local well-posedness in $H^s(\R)$ for $s>\frac{5}{2}$ is established via the compactness method. \\ \subsection*{Notation} We use $X\lesssim Y$ when $X \leq CY $ for some $C$. We use $ X \sim Y $ when $ X \lesssim Y $ and $ Y\lesssim X$. Moreover, we use $ X \lesssim_s Y $ if the implicit constant depends on $s$, $C=C(s) $. \\ We use Japanese bracket notation $\langle\xi\rangle := \sqrt{1+\xi^2} $. We denote the space time Fourier transform by $\widetilde{u}(\tau,\xi) $ of $u(t,x)$ $$ \widetilde{u}(\tau,\xi) = \int e^{-it\tau - ix\xi} u(t,x) dtdx, $$ while the space Fourier transform by $\widehat{u}(t,\xi)$ of $u(t,x)$ $$ \widehat{u}(t,\xi)= \int e^{-ix\xi} u(t,x) dx.$$ \subsection*{Acknowledgement} The author would like to appreciate his advisor Terence Tao for many helpful conversations and encouragement. \section{Local well-posedness of the fifth order modified KdV} In this section, we prove the local well-posedness of the initial value problem \eqref{fifthm}. Our proof is via the contraction principle on the Bourgain space. We first recall some standard facts and notations. For a Schwartz function $u_0(x) $, we denote the linear solution $u(t,x)$ to the equation $ \partial_t u - \partial_x^5 u = 0 $ by $$ u(t,x) =: e^{t\px^5}u_0(x) = c \iint e^{it\xi^5}e^{i(x-y)\xi}u_0(y) dyd\xi. $$ Using this notation we have the Duhamel formula for the solution to the inhomogeneous linear equation $ \partial_t u - \partial_x^5 u + F = 0 $ $$ u(t,x) = e^{t\px^5}u_0(x) - \int_0^t e^{(t-t')\px^5}F(t',x) dt'. $$ We denote the Bourgain space by $X^{s,b}_{\tau=\xi^5}(\R\times\R)$, or abbreviated $X^{s,b}$. The $X^{s,b}$ space is defined to be the closure of the Schwartz functions $\mathcal{S}(\R\times\R) $ under the norm $$ \|u\|_{X^{s,b}_{\tau=\xi^5}(\R\times\R)} := \|\langle \xi\rangle^s\langle\tau-\xi^5\rangle^b\widetilde{u}(\tau,\xi)\|_{L^2_{\tau,\xi}(\R\times\R)}. $$ The $X^{s,b}$ space is continuously embedded in $C^0_tH^s_x$. \begin{lemma} Let $b>1/2$ and $s\in \R$. Then for any $u\in X^{s,b}_{\tau=\xi^5}(\R\times\R)$, we have $$ \|u\|_{C^0_tH^s_x(\R\times\R)} \lesssim_b \|u\|_{X^{s,b}_{\tau=\xi^5}(\R\times\R)}. $$ \end{lemma} For the proof see \cite{Taobook}. \\ Let $\eta(t)$ be a compactly supported smooth time cut-off function (i.e. 
$\eta \in C^\infty_0(\R)$ with $\eta(t)=1$ on $[0,1]$). There is a standard $X^{s,b}$ energy estimate for time cut-off solutions. \begin{lemma} Let $b>1/2$ and $s\in \R$ and let $u\in C^\infty_t\mathcal{S}_x(\R\times\R)$ solves the inhomogeneous linear fifth order KdV equation $\partial_tu-\px^5u =F$. Then we have \begin{equation}\label{Xlinear} \|\eta(t)u\|_{X^{s,b}_{\tau=\xi^5}(\R\times\R)} \lesssim_{\eta,b} \|u(0)\|_{H^s_x} + \|\eta(t)F\|_{X^{s,b-1}_{\tau=\xi^5}(\R\times\R)}. \end{equation} \end{lemma} For the proof of this Lemma, See \cite{KPV96}, \cite{Taobook}.\\ Next, we state the nonlinear estimate. \begin{proposition}\label{trilinear} \,Let $s\geq \frac 34$. For all $u,v,w$ on $\R\times \R$ and $\frac 12 <b\leq \frac 12 +\epsilon $ for some $\epsilon$, we have \begin{equation}\label{trilinear1} \begin{split} \|\px^3(uvw)\|_{X^{s,b-1}} &+ \|uv\partial_x^3w\|_{X^{s,b-1}}+\|u\px^2v\px w\|_{X^{s,b-1}} +\|\px u\px v\px w\|_{X^{s,b-1}} \\ &\lesssim \|u\|_{X^{s,b}}\|v\|_{X^{s,b}}\|w\|_{X^{s,b}}. \end{split} \end{equation} \end{proposition} Combining the preceding estimates \eqref{Xlinear}, \eqref{trilinear1} one can easily verify that the operator $$ \Phi(u)(t,x) := \eta(t)e^{t\px^5}u_0(x) - \eta(t)\int^t_0 e^{(t-t')\px^5}F(u)(t',x) dt' $$ is a contraction on a ball of $X^{s,b}$ space $$ \mathcal{B}= \{u \in X^{s,b} : \|u\|_{X^{s,b}} \leq 2\delta\} $$ for a sufficiently small $ \delta >0 $ and $ \|u_0\|_{H^s_x} <\delta $, where $F(u) = c_1\px^3(u^3) + c_2u\px u\px^2 u + c_3uu\px^3 u$. This proves the local well-posedness for small data. Then a standard scaling argument easily leads the local well-posedness for large data. Once the local well-posedness is proved via the contraction principle, we also obtain that the solution map is Lipschitz continuous, and furthermore if the nonlinear term is algebraic (a polynomial of u and its derivatives), then the solution map is real-analytic. Hence, it remains to show the trilinear estimate \eqref{trilinear1} for the proof of Theorem~\ref{lwp}. \section{Trilinear estimate} In this section, we show the trilinear estimate \eqref{trilinear1}. We closely follow the method developed by Tao \cite{Tao2001} in the context of modified KdV equation. Writing the trilinear inequality in the dual form and we view it as a composition of two bilinear operators based on $L^2$ norm. Then we reduce to two bilinear estimates. First, we recall notations and general frame work of Tao's $[k;Z]$-multiplier method. For the details we refer to \cite{Tao2001}. \subsection*{Notation and block estimates} We define $[k,\R]$-multiplier norm of Tao \cite{Tao2001} first. Let $Z$ be an abelian additive group with an invariant measure $d\xi$ (for instance $\R^n, \mathbb{T}^n$). For any integer $k\geq2$. let $\Gamma_k(Z)$ denote the hyperplane $$ \Gamma_k(Z) := \{(\xi_1,\cdots,\xi_k) \in \R^k : \xi_1+\cdots+\xi_k =0 \}. $$ A $[k,Z]$-multiplier is defined to be any function $m: \Gamma_k(Z) \rightarrow \mathbb{C}$. Then we define the multiplier norm $\|m\|_{[k,Z]} $ to be the best constant so that the inequality $$ \big| \int_{\Gamma_k(Z)} m(\xi)\prod_{j=1}^{k}f_j(\xi_j)\big| \leq C \prod_{j=1}^k\|f_j\|_{L^2}, $$ holds for all functions $f_j$ on $Z$. \\ Any capitalized variables such as $N_j,L_j$ and $H$ are presumed to be dyadic. For $N_1,N_2,N_3>0$, we denote the quantities by $\Nn,\Nd,\Nx$ in their order and similarly for $L_1,L_2,L_3$. 
We adopt the following summation convention: $$ \sum_{\Nx\sim\Nd\sim N} := \sum_{\substack{ N_1,N_2,N_3>0\\ \Nx\sim\Nd\sim N}} $$ $$ \sum_{\Lx\sim H} := \sum_{\substack{L_1,L_2,L_3\gtrsim 1\\ \Lx\sim H}}. $$ For given $\tau_j, \xi_j$ with $\xi_1 +\xi_2 +\xi_3 =0$ and $\tau_1+\tau_2+\tau_3=0$, we denote the modulation $$\tau_j-\xi_j^5 =:\lambda_j$$ and the resonance function $$ h(\xi):= \xi_1^5 +\xi_2^5 +\xi_3^5 = -\lambda_1 -\lambda_2 -\lambda_3. $$ By a dyadic decomposition of the variables $\xi_j, \lambda_j$ and $h(\xi)$ $X^{s,b}$, a bilinear estimate $$ \|B(u,v)\|_{X^{s_3,b_3}} \lesssim \|u\|_{X^{s_1,b_1}}\|v\|_{X^{s_2,b_2}} $$ is reduced to $$ \left\| \sum_{\Nx\gtrsim 1}\sum_{H}\sum_{L_1,L_2,L_3\gtrsim 1} \frac{\widetilde{m}(N_1,N_2)\langle N_1\rangle^{-s_1}\langle N_2\rangle^{-s_2}\langle N_3\rangle^{s_3}}{L_1^{b_1}L_2^{b_2}L_3^{-b_3}}X_{N_1,N_2,N_3;H;L_1,L_2,L_3} \right\|_{[3,\R\times\R]} \lesssim 1. $$ Here, $X_{N_1,N_2,N_3;H;L_1,L_2,L_3}$ is the multiplier $$ X_{N_1,N_2,N_3;H;L_1,L_2,L_3}(\xi,\tau) := \chi_{|h(\xi)|\sim H}\prod_{j=1}^3 \chi_{|\xi_j|\sim N_j}\chi_{|\lambda_j|\sim L_j} $$ and $$ \widetilde{m}(N_1,N_2) := \sup_{|\xi_j|\sim N_j, j=1,2} m(\xi_1,\xi_2)$$ where $ m(\xi_1,\xi_2) $ is a multiplier of the bilinear operator $B(\cdot,\cdot)$. This leads us to consider \begin{equation}\label{block} \|X_{N_1,N_2,N_3;H;L_1,L_2,L_3}\|_{[3,\R \times\R]}, \end{equation} which vanishes unless \begin{gather} \label{Nmed-Nmax} \Nd \sim \Nx\\ \label{Lmax} \Lx \sim \max(H, \Ld) \end{gather} Moreover, we have the resonance relation: if $ \Nx \sim \Nd \gtrsim 1$, then \begin{equation}\label{resonance} H \sim \Nx^4\Nn \end{equation} Now we state the dyadic block estimate. \begin{lemma}\label{block estimate} Let $H,\,N_1,\,N_2,\,N_3,\,L_1,\,L_2,\,L_3>0$ obey \eqref{Nmed-Nmax}, \eqref{resonance}, \eqref{Lmax}. (a)((++)Coherence) If $N_{max}\sim N_{min}$ and $L_{max}\sim H$, then we have \begin{eqnarray}\label{estimate1} \eqref{block} \lesssim L_{min}^{1/2}N_{max}^{-2}L_{med}^{1/2}. \end{eqnarray} (b)((+-)Coherence) If $N_2\sim N_3\gg N_1$ and $H\sim L_1\gtrsim L_2,\,L_3$, then \begin{eqnarray}\label{estimate2} \eqref{block}\lesssim L_{min}^{1/2}N_{max}^{-2}\min(H,\,\frac{N_{max}}{N_{min}}L_{med})^{1/2}. \end{eqnarray} Similarly for permutations. (c) In all other cases, we have \begin{eqnarray}\label{estimate3} \eqref{block}\lesssim L_{min}^{1/2}N_{max}^{-2}\min(H,\,L_{med})^{1/2}. \end{eqnarray} \end{lemma} Lemma~\ref{block estimate} is obtained in a similar way to Tao's (\cite{Tao2001}, Proposition 6.1) in the context of the KdV equation. For the fifth order equation, it is first shown by Chen, Li, Miao and Wu \cite{Chen-et-al}. See \cite{Chen-et-al} for the proof. \subsection*{Bilinear estimates} Using Lemma~\ref{block estimate} we show three bilinear estimates to which the trilinear estimate is reduced. \begin{proposition}\label{bilinear} For Schwartz functions $u, v$ on $\R\times \R$ and $ 0<\epsilon \ll 1$, we have \begin{align} \label{bilinear1} \|uv\|_{L^2(\R\times\R)} &\lesssim \|u\|_{X^{-3/2,1/2-\epsilon}_{\tau=\xi^5}}\|v\|_{X^{3/4,1/2+\epsilon}_{\tau=\xi^5}}, \\ \label{bilinear2} \|uv\|_{L^2(\R\times\R)} &\lesssim \|u\|_{X^{-3/4,1/2-\epsilon}_{\tau=\xi^5}}\|v\|_{X^{0,1/2+\epsilon}_{\tau=\xi^5}}, \\ \label{bilinear3} \|uv\|_{L^2(\R\times\R)} &\lesssim \|u\|_{X^{-1/4,1/2-\epsilon}_{\tau=\xi^5}}\|v\|_{X^{-1/2,1/2+\epsilon}_{\tau=\xi^5}}. \end{align} \end{proposition} \begin{proof} We prove \eqref{bilinear1} first. 
Rewriting \eqref{bilinear1} by duality, Plancherel's theorem and dyadic decomposition and using the translation invariance of the $[k;Z]$-multiplier (may assume $L_1,L_2,L_3 \gtrsim 1$ and $\max(N_1,N_2,N_3) \gtrsim 1$) and Schur's test (\cite{Tao2001}, Lemma 3.11), it suffices to show \begin{align}\label{H-Lmax} \sum_{N\sim \Nx \sim \Nd} \sum_{\substack{L_1,L_2,L_3\geq 1,\\ H\sim \Lx}} \\ \frac{\langle N_2\rangle^{3/2}}{\langle N_1 \rangle^{3/4}L_1^{1/2+\epsilon}L_2^{1/2-\epsilon}} &\|X_{N_1,N_2,N_3;\Lx;L_1,L_2,L_3}\|_{[3,\R \times\R]} \lesssim 1 \nonumber \end{align} and \begin{align}\label{H<Lmax} \sum_{N\sim \Nx \sim \Nd} \sum_{\substack{\Lx \sim \Ld,\\ H\ll \Lx}} \\ \frac{\langle N_2\rangle^{3/2}}{\langle N_1 \rangle^{3/4}L_1^{1/2+\epsilon}L_2^{1/2-\epsilon}} &\|X_{N_1,N_2,N_3;\Lx;L_1,L_2,L_3}\|_{[3,\R \times\R]} \lesssim 1 \nonumber \end{align} for all $N \gtrsim 1$. Fix $N$. We prove \eqref{H<Lmax} first. From \eqref{estimate3} it reduces to show \begin{equation*} \sum_{N\sim \Nx \sim \Nd} \sum_{\Lx \sim \Ld \gtrsim N^4\Nn} \frac{\langle N_2\rangle^{3/2}}{\langle N_1 \rangle^{3/4}L_1^{1/2+\epsilon}L_2^{1/2-\epsilon}} \Ln^{1/2}N^{-2}N^{2}\Nn^{1/2} \lesssim 1 \end{equation*} Estimating \begin{align*} \frac{\langle N_2 \rangle^{3/2}}{\langle N_1 \rangle^{3/4}} &\lesssim \frac{N^{3/2}}{\langle \Nn \rangle^{3/4}} \\ L_1^{1/2+\epsilon}L_2^{1/2-\epsilon} &\gtrsim \Ln^{1/2+\epsilon}\Ld^{\epsilon}(N^4\Nn)^{1/2-2\epsilon} \end{align*} and then performing the $L$ summations, we reduce to \begin{equation*} \sum_{N\sim \Nx \sim \Nd}\frac{\langle N\rangle^{3/2}\Nn^{1/2}}{\langle \Nn \rangle^{3/4}(N^4\Nn)^{1/2-\epsilon}} \lesssim 1, \end{equation*} which is true with about $N^{-1/2}$ to spare.\\ Now, we show the case \eqref{H-Lmax}. In this case we have $ \Lx \sim \Nx^4\Nn $. We first show when \eqref{estimate1} (i.e. (++)coherence) holds. From \eqref{estimate1} we have $\Nx \sim \Nd \sim \Nn$ and $ \eqref{block} \lesssim \Ln^{1/2}\Nx^{-2}\Ld^{1/2} $, so we reduce to \begin{equation*} \sum_{\Lx\sim N^5} \frac{N^{3/2}}{N^{3/4}L_1^{1/2-\epsilon}L_2^{1/2+\epsilon}} \Ln^{1/2}N^{-2}\Ld^{1/2} \lesssim 1. \end{equation*} Estimating $$ L_1^{1/2+\epsilon}L_2^{1/2-\epsilon} \geq \Ln^{1/2+\epsilon}\Ld^{1/2-\epsilon} $$ and then performing the $L$ summations we reduce to $$ \frac{N^{3/2}}{N^{3/4}N^{5\epsilon}} N^{-2} \lesssim 1, $$ which is true.\\ \indent Now we deal with (+-)coherence case (i.e. when \eqref{estimate2} holds true). Since we don't have the symmetry on indices, we need to consider the following three cases: \begin{align*} N \sim N_1 \sim N_2 \gg N_3; \quad H\sim L_3 \gtrsim L_1,L_2 \\ N \sim N_2 \sim N_3 \gg N_1; \quad H\sim L_1 \gtrsim L_2,L_3 \\ N \sim N_1 \sim N_3 \gg N_2; \quad H\sim L_2 \gtrsim L_1,L_3 \\ \end{align*} In the first case we reduce by \eqref{estimate2} to \begin{equation*} \sum_{N_3 \ll N, L_1,L_2 \lesssim N^4N_3} \frac{N^{3/2}}{N^{3/4}L_1^{1/2+\epsilon}L_2^{1/2-\epsilon}}\Ln^{1/2}N^{-2}\min(N^4N_3,\frac{N}{N_3}\Ld)^{1/2} \lesssim 1. \end{equation*} Performing the $N_3$ summation we reduce to \begin{equation*} \sum_{1 \leq L_1,L_2 \lesssim N^5} \frac{N^{3/2}}{N^{3/4}L_1^{1/2+\epsilon}L_2^{1/2-\epsilon}}\Ln^{1/2}N^{-2}N^{5/4}\Ld^{1/4} \lesssim 1 \end{equation*} which is easily verified. \\ To symmetrize the second and third case we replace $L_1^{1/2+\epsilon} $ by $L_1^{1/2-\epsilon}$. It suffices to show the second case. 
Using $ \min(H,\frac{N}{\Nn}\Ld) \leq H \sim N^4N_1 $ we reduce to $$ \sum_{N_1\leq N}\sum_{L_2,L_3\leq N^4N_1} \frac{N^{3/2}N_1^{1/2}}{\langle N_1 \rangle^{3/4}(N^4N_1)^{1/2+\epsilon}L_2^{1/2-\epsilon}} \lesssim 1. $$ We may assume $ N_1 \geq N^{-4} $ since the inner sum vanishes otherwise. Performing the $L$ summations we reduce to $$ \sum_{N^{-4} \leq N_1 \leq N} N^{3/2-2+4\epsilon}\frac{N_1^\epsilon}{\langle N_1 \rangle^{3/4}}(N^4N_1)^\epsilon \lesssim 1, $$ which is true with about $N^{-1/2}$ to spare. Finally, we treat the case where \eqref{estimate3} holds. It suffices to show $$ \sum_{\Nx\sim\Nd\sim N}\sum_{\Lx\sim N^4\Nn} \frac{N^{3/2}}{\langle N_1 \rangle^{3/4}L_1^{1/2+\epsilon}L_2^{1/2-\epsilon}}\Ln^{1/2}N^{-2}\Ld^{1/2} \lesssim 1. $$ Performing the $L$ summations, we reduce to $$ \sum_{\Nx\sim\Nd\sim N} \frac{N^{-1/2}}{\langle N_1 \rangle^{3/4}} (N^4\Nn)^\epsilon \lesssim 1, $$ which is easily verified with about $N^{-1/2}$ to spare. This completes the proof of \eqref{bilinear1}.\\ The proofs of \eqref{bilinear2} and \eqref{bilinear3} are very similar to the preceding one. In general, the same computation shows $$ \|uv\|_{L^2(\R\times\R)} \lesssim \|u\|_{X^{-\alpha,1/2-\epsilon}_{\tau=\xi^5}}\|v\|_{X^{\beta,1/2+\epsilon}_{\tau=\xi^5}} $$ for $ \alpha <2 $ and $ \alpha - \beta \leq 3/4$. We omit the details here. \end{proof} \subsection*{Proof of the trilinear estimate} In order to reduce the trilinear estimate we use the following lemma. \begin{lemma}[Tao \cite{Tao2001}, Lemma 3.7, Composition and $TT^*$]\label{compositionTT*} If $k_1,k_2 \geq 1$, and $m_1, m_2$ are functions on $\R^{k_1}$ and $\R^{k_2}$ respectively, then \begin{align}\label{composition} \|m_1&(\xi_1,\cdots,\xi_{k_1})m_2(\xi_{k_1+1},\cdots,\xi_{k_1+k_2})\|_{[k_1+k_2;\R]} \\ & \leq \|m_1(\xi_1,\cdots,\xi_{k_1})\|_{[k_1+1;\R]}\|m_2(\xi_{1},\cdots,\xi_{k_2})\|_{[k_2+1;\R]}. \nonumber \end{align} As a special case we have the $TT^*$ identity \begin{equation}\label{TT*} \|m(\xi_1,\cdots,\xi_{k})\overline{m(-\xi_{k+1},\cdots,-\xi_{2k})}\|_{[2k;\R]} = \|m(\xi_1,\cdots,\xi_{k})\|^2_{[k+1;\R]} \end{equation} for all functions $m:\R^k \rightarrow \R$. \end{lemma} For simplicity we prove the most interesting case $s=3/4$. For the first term it suffices to show that \begin{equation*} \left\|\frac{(\xi_1+\xi_2+\xi_3)^3\langle \xi_4\rangle^{3/4}}{\langle\tau_4-\xi_4^5\rangle^{1-b} \prod_{j=1}^3\langle\xi_j\rangle ^s\langle\tau_j-\xi_j^5\rangle^b}\right\|_{[4,\,{\R}\times{\R}]}\lesssim 1. \end{equation*} We estimate $|\xi_1+\xi_2+\xi_3|$ by $\langle \xi_4 \rangle$ and use $$\langle\xi_4\rangle^{3/4+3}\lesssim \langle\xi_4\rangle^{3/2}\sum_{j=1}^3\langle\xi_j\rangle^{3/4+3/2}.$$ By symmetry we then reduce to \begin{eqnarray*} \left\|\frac{\langle\xi_1\rangle^{-3/4}\langle\xi_3\rangle^{-3/4}\langle\xi_2\rangle^{3/2}\langle\xi_4\rangle^{3/2}}{\langle\tau_4-\xi_4^5\rangle^{1-b} \prod_{j=1}^3\langle\tau_j-\xi_j^5\rangle^{b}}\right\|_{[4,\,{\R}\times{\R}]} \lesssim 1. \end{eqnarray*} We may replace $\langle\tau_2-\xi_2^5\rangle^{b}$ by $\langle\tau_2-\xi_2^5\rangle^{1-b}$. By the $TT^*$ identity \eqref{TT*}, the estimate is reduced to the bilinear estimate \eqref{bilinear1}.\\ The proof for the second term in \eqref{trilinear1} is very similar to the first one, but we use the composition rule \eqref{composition} instead of the $TT^*$ identity.
We estimate $$ \xi_1^3 \leq \xi_1^{9/4}\left(\langle\xi_2\rangle^{3/4}+ \langle\xi_3\rangle^{3/4}+\langle\xi_4\rangle^{3/4} \right)\langle\xi_4\rangle^{3/4}.$$ The contribution of the third summand is handled as above, and so by symmetry we reduce to $$ \left\|\frac{\langle\xi_1\rangle^{3/2}\langle\xi_3\rangle^{-3/4}\langle\xi_2\rangle^{0}\langle\xi_4\rangle^{3/4}}{\langle\tau_4-\xi_4^5\rangle^{1-b} \prod_{j=1}^3\langle\tau_j-\xi_j^5\rangle^{b}}\right\|_{[4,\,{\R}\times{\R}]} \lesssim 1. $$ This is verified by \eqref{bilinear1} and \eqref{bilinear2}, as well as the composition rule \eqref{composition}.\\ The fourth term in \eqref{trilinear1} is proved in the same way. Estimating $$ \langle\xi_4\rangle^{3/4} \leq \langle\xi_4\rangle^{1/2}\big(\langle\xi_1\rangle^{1/4}+\langle\xi_2\rangle^{1/4}+\langle\xi_3\rangle^{1/4} \big), $$ by symmetry we reduce to $$ \left\|\frac{\langle\xi_1\rangle^{1/2}\langle\xi_2\rangle^{1/4}\langle\xi_3\rangle^{1/4}\langle\xi_4\rangle^{1/2}}{\langle\tau_4-\xi_4^5\rangle^{1-b} \prod_{j=1}^3\langle\tau_j-\xi_j^5\rangle^{b}}\right\|_{[4,\,{\R}\times{\R}]} \lesssim 1. $$ This is verified by \eqref{bilinear3} and the $TT^*$ identity \eqref{TT*} after replacing one of the exponents $b$ by $1-b$. \\ Finally, the third term in \eqref{trilinear1} automatically follows since it is a linear combination of the other three. This concludes the proof of Proposition~\ref{trilinear}. \begin{remark} The trilinear estimate \eqref{trilinear1} fails for $s< \frac{3}{4}$. The counterexample introduced by Kenig, Ponce and Vega \cite{KPV96} in the context of the modified KdV equation carries over to our setting. In the frequency space, set $$ A =\{ (\tau,\xi) \in \R^2| N \leq \xi \leq N+ N^{-3/2}, |\tau-\xi^5|\leq 1\}, $$ and $$ -A= \{(\tau,\xi) \in \R^2 | -(\tau,\xi) \in A\}. $$ Defining $\widetilde{f}(\tau,\xi) = \chi_A +\chi_{-A}$, we obtain $$ |\widetilde{f}*\widetilde{f}*\widetilde{f}(\tau,\xi)| \gtrsim N^{-3}\chi_R(\tau,\xi), $$ where $R$ is a rectangle located at $(N,N^5)$ of dimension $N^{-4}\times N^{5/2}$ with its longest side pointing in the direction $(1,5N^4)$, as for $A$. Thus, $$ \|\px^3(f\cdot f\cdot f) \|_{X^{s,b-1}} \gtrsim N^{s-3/4} $$ while $$ \|f\|_{X^{s,b}} \lesssim N^{s-3/4}, $$ so if \eqref{trilinear1} held we would have $N^{s-3/4} \lesssim N^{3(s-3/4)}$ for all large $N$, which forces $s \geq 3/4$. This example works equally well for the other nonlinear terms in \eqref{trilinear1}. \end{remark} \begin{remark}\label{low order term} In our general equation \eqref{fifthm} we omitted the term $u^4\px u$ from \eqref{fifthmkdv}. Since the term $u^4\px u$ is a lower order term, it is easier to handle than the third order terms. Once we have the 5-linear estimate \begin{equation}\label{5-linear} \|\px(u_1u_2u_3u_4u_5)\|_{X^{s,b-1}} \lesssim \prod_{j=1}^5 \|u_j\|_{X^{s,b}}, \end{equation} we can insert it into the iteration. The proof of \eqref{5-linear} is similar to the preceding one. Using Lemma~\ref{compositionTT*} we reduce to two trilinear estimates, and each trilinear estimate is again reduced to two bilinear estimates. The resulting bilinear estimates are easier than those in Proposition~\ref{bilinear} since there are fewer derivatives and more factors of $u$. In fact, \eqref{5-linear} remains true for $s$ lower than $\frac{3}{4}$. \end{remark} \section{Ill-posedness}\label{sec4} In this section we give the proof of Theorem~\ref{illposed}. For simplicity, we assume the nonlinear term is $$ F(u) = \px^3(u^3). $$ The general case $F(u) = c_1\px^3(u^3) + c_2u^2\px^3u + c_3u\px u\px^2u$ (where the $c_j$ are real numbers) follows in the same manner. Our method is to approximate the fifth mKdV solution by the cubic NLS solution.
This is originally introduced by Christ, Colliander and Tao \cite{CCT2003} for the mKdV equation. This method extends to the fifth order equation without substantial change. \\ Having two solutions to the cubic NLS breaking the uniform continuity of the flow map for $s<0$, we find approximate solutions to the fifth mKdV exhibiting the same property. First, we state the ill-posedness for the cubic NLS in \cite{CCT2003}. \begin{theorem}\label{illposedNLS} Let $ s < 0 $. The solution map of the initial value problem of the cubic NLS \eqref{cNLS} fails to be uniformly continuous. More precisely, for $ 0 < \delta \ll \epsilon \ll 1$ and $T>0$ arbitrary, there are two solutions $ u_1,u_2$ to \eqref{cNLS} satisfying \eqref{illposed1}, \eqref{illposed2} and \eqref{illposed3}.\\ Moreover, For any fixed $K \geq 1$, we can find such solutions to satisfy \begin{equation}\label{ubound} \sup_{0\leq t <\infty }\|u_j\|_{H^K_x} \lesssim \epsilon \end{equation} for $j=1,2$. \end{theorem} \begin{remark} Theorem~\ref{illposedNLS} is stated for the defocusing cubic NLS. The method in \cite{CCT2003} exhibiting the phase decoherence holds good for the focusing case, too. But previously another method for the focusing case was presented by Kenig, Ponce and Vega \cite{KPV2001}. They used the Galilean invariance on the soliton solutions. In our focusing case (for instance, $F(u)= -\px^3(u^3)$) one could employ their counterexample to approximate. \end{remark} Now we start to find the approximate solution to the fifth order mKdV equation using the NLS solutions. Let $ u(s,y) $ solve the cubic NLS equation \eqref{cNLS}. We also assume that $$ \sup_{0\leq t < \infty} \|u(t)\|_{H^k_x} \lesssim \epsilon $$ for a large k. Using the change of variable $$ (s,y) := \Big(t,\, \frac{x}{(10N^3)^{1/2}} + \sqrt{\frac{5}{2}}N^{5/2}t \Big), $$ we define the approximate solution \begin{equation}\label{apsolution} U_{ap}(t,x) := \frac{2}{\sqrt{3N^3}}\text{Re}\,\, e^{iNx}e^{iN^5t}u(s,y), \end{equation} where $N \gg 1$. \\ We want to show that $U_{ap}$ is an approximate solution to the fifth mKdV equation. A direct computation shows that \begin{align*} (&\partial_t -\px^5)U_{ap}(t,x) \\ = &\frac{2}{\sqrt{3N^3}} \text{Re}\,\, \Big\{e^{iNx}e^{iN^5t}\Big(\partial_su + i\py^2u + \frac{1}{\sqrt{10}N^{5/2}}\py^3u - \frac{i}{20N^5}\py^4u - \frac{1}{(10N^3)^{5/2}}\py^5u \Big)\Big\} \end{align*} and that \begin{align*} \px^3(U_{ap}^3) & = \Big(\frac{2}{\sqrt{3N^3}}\Big)^3 \frac 34 \px^3\Big\{ \text{Re}\,\, e^{iNx}e^{iN^5t}|u|^2u +\frac{1}{3}\text{Re}\,\, e^{iNx}e^{iN^5t} u^3 \Big\} \\ & = \Big(\frac{2}{\sqrt{3N^3}}\Big)^3 \frac 34 \Bigg\{\text{Re}\,\, (iN)^3e^{iNx}e^{iN^5t}|u|^2u + \text{Re}\,\, \frac{3(iN)^2}{\sqrt{10}N^{3/2}}e^{iNx}e^{iN^5t}\py(|u|^2u) \\ & \qquad + \text{Re}\,\, \frac{3iN}{10N^3}e^{iNx}e^{iN^5t}\py^2(|u|^2u) + \text{Re}\,\, \frac{1}{(10N^3)^{3/2}}e^{iNx}e^{iN^5t}\py^3(|u|^2u) \\ & \qquad + \text{Re}\,\, \frac{(3iN)^3}{3}e^{3iNx}e^{3iN^5t}u^3 + \text{Re}\,\, \frac{(3iN)^2}{\sqrt{10}N^{3/2}}e^{3iNx}e^{3iN^5t}\py(u^3) \\ & \qquad + \text{Re}\,\, \frac{3iN}{10N^3}e^{3iNx}e^{3iN^5t}\py^2(u^3) + \text{Re}\,\, \frac{1}{3(10N^3)^{3/2}}e^{3iNx}e^{3iN^5t}\py^3(u^3) \Bigg\}. 
\end{align*} Since $u(s,y)$ is a solution of \eqref{cNLS}, three terms of the preceding expressions cancel and we obtain $$ (\partial_t -\px^5)U_{ap}(t,x) + \px^3(U_{ap}^3) = E, $$ where the error term $E$ is a linear combination of the real and imaginary parts of the following: \begin{align*} &E_1:= N^{-4}e^{iNx}e^{iN^5t}\py(|u|^2u),\quad E_2:=N^{-11/2}e^{iNx}e^{iN^5t}\py^2(|u|^2u),\\ &E_3:= N^{-18/2}e^{iNx}e^{iN^5t}\py^3(|u|^2u),\quad E_4:=N^{-4}e^{3iNx}e^{3iN^5t}\py(u^3), \\ &E_5:= N^{-11/2}e^{3iNx}e^{3iN^5t}\py^2(u^3), \quad E_6:=N^{-18/2}e^{3iNx}e^{3iN^5t}\py^3(u^3),\\ &E_7:= N^{-3/2}e^{3iNx}e^{3iN^5t}u^3. \end{align*} Next, we bound the error. \begin{lemma}\label{errorbound} For each $j=1,\cdots,7$, let $e_j$ be the solution to the initial value problem $$ (\partial_t - \px^5)e_j = E_j; \qquad e_j(0)=0. $$ Let $\eta(t)$ be a smooth, compactly supported time cut-off function which equals $1$ on $[0,1]$. Then $$ \|\eta(t)e_j\|_{X^{3/4,b}} \lesssim \epsilon N^{-5/2+\delta} $$ for arbitrarily small $\delta>0$. \end{lemma} For the proof we use the following estimate for high-frequency modulations of smooth functions. \begin{lemma}[\cite{CCT2003} Lemma 2.1]\label{modulation} Let $s>-1/2$, $\sigma \in \R^+$ and $u\in H^\sigma(\R)$. For any $M>1, \tau\in \R^+, x_0\in\R$, and $A>0$ let $$ v(x)= Ae^{iMx}u(\frac{x-x_0}{\tau}). $$ (i) Suppose $s\geq 0$. Then we have $$ \|v\|_{H^s} \lesssim_s |A|\tau^{1/2}M^s\|u\|_{H^s} $$ for all $u,A,x_0$ and $M\cdot\tau \geq 1$. \\ (ii) Suppose that $s<0$ and that $\sigma \geq|s|$. Then we have $$ \|v\|_{H^s} \lesssim_{s,\sigma} |A|\tau^{1/2}M^s\|u\|_{H^\sigma} $$ for all $u,A,x_0$ and $M^{1+(s/\sigma)}\cdot\tau \geq 1$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{errorbound}] Using \eqref{Xlinear} and the Plancherel theorem we obtain \begin{align*} \|\eta(t)e_j\|_{X^{3/4,b}} & \lesssim \|\eta(t)E_j\|_{X^{3/4,b-1}} \\ & = \|\langle\tau-\xi^5\rangle^{b-1}\langle\xi\rangle^{3/4}\widetilde{\eta(t)E_j}\|_{L^2_{\tau,\xi}} \\ & \leq \|\langle\xi\rangle^{3/4}\widetilde{\eta(t)E_j}\|_{L^2_{\tau,\xi}} \qquad (\because b-1 <0) \\ & = \|\eta(t)\langle\xi\rangle^{3/4}\widehat{E_j}(t,\xi)\|_{L^2_{t,\xi}} \\ & \leq \|\langle\xi\rangle^{3/4}\widehat{E_j}(t,\xi)\|_{L^\infty_tL^2_x([0,1]\times\R)}. \end{align*} Thus it suffices to show $$ \sup_{0\leq t\leq 1} \|E_j\|_{H^{3/4}_x} \lesssim \epsilon N^{-5/2+\delta}. $$ The terms $E_1,\cdots,E_6$ carry enough negative powers of $N$. The above bound for these terms is obtained from \eqref{ubound}, Lemma~\ref{modulation} and the fact that $H^k$ is closed under multiplication for $k\geq 1$. For the last term $E_7$, since there is not a sufficiently negative power of $N$, we need to use the fact that the modulation $e^{3iNx}e^{3iN^5t}$ stays away from the curve $\tau=\xi^5$.\\ A direct computation shows that $$ \widetilde{\eta(t)E_7}(\tau,\xi) = N^{-3/2}\widetilde{\eta\,u^3}\Big(\tau-a,\sqrt{10}N^{3/2}(\xi-3N)\Big)\,\sqrt{10}N^{3/2} $$ where $ a= 3N^5-3\sqrt{\frac 52}N^5 +\sqrt{\frac 52}N^4\xi $. \\ Let $P_{\lambda,\mu} $ be the Littlewood-Paley projection with dyadic numbers $\lambda, \mu$. Now \eqref{ubound} and the fact that $\eta(t)$ is compactly supported yield $$ \|\widetilde{P_{\lambda,\mu}\eta\,u^3}(\tau,\xi) \|_{L^2_{\tau,\xi}} \lesssim \frac{\epsilon}{\langle\lambda\rangle^K\langle\mu\rangle^K} $$ and so $$ \|\widetilde{P_{\lambda,\mu}\eta\,u^3}(\tau-a,N^{3/2}(\xi-3N)) \|_{L^2_{\tau,\xi}} \lesssim N^{-3/4} \frac{\epsilon}{\langle\lambda-a\rangle^K\langle\mu-3N\rangle^K}.
$$ Rewriting $\|\eta(t)E_7\|_{X^{3/4,b-1}}$ using dyadic decompositions, we obtain \begin{align*} &\|\eta(t)E_7\|^2_{X^{3/4,b-1}} \\ &\lesssim \sum_{\substack{\lambda,\mu \geq 1\\ dyadic}} \langle\lambda-\mu^5\rangle^{2(b-1)}\langle\mu\rangle^{3/2} N^{-3}\Big\|\widetilde{P_{\lambda,\mu}\eta\,u^3}\Big(\tau-a,\sqrt{10}N^{3/2}(\xi-3N)\Big)\,\sqrt{10}N^{3/2}\Big\|^2_{L^2_{\tau,\xi}} \\ &\lesssim \sum_{\substack{\lambda,\mu \geq 1\\ dyadic}} \langle\lambda-\mu^5\rangle^{2(b-1)}\langle\mu\rangle^{3/2} N^{-3/2}\frac{\epsilon^2}{\langle\lambda- a\rangle^{2K}\langle\mu-3N\rangle^{2K}} \\ &\lesssim \epsilon^2 N^{10(b-1)} \end{align*} by choosing $K$ large enough. In the last inequality we used the fact that $e^{3iNx}e^{3iN^5t}$ stays away from the curve $\tau=\xi^5$ in the frequency space. Therefore, choosing $ b>\frac 12$ sufficiently close to $\frac 12$ we conclude $$ \|\eta(t)E_7\|_{X^{3/4,b-1}} \lesssim \epsilon N^{-5/2+\delta}. $$ \end{proof} Finally, we state the following perturbation result, which follows from the local well-posedness theory. \begin{lemma}\label{error} Let $u$ be a Schwartz solution to the fifth order modified KdV equation \eqref{fifthm} and $v$ be a Schwartz solution to the approximate fifth mKdV equation $$ \partial_t v -\px^5v + \px^3(v^3) = E $$ for some error function $E$. Let $e$ be the solution to the inhomogeneous problem $$ \partial_t e -\px^5 e = E,\qquad e(0)=0. $$ Suppose that $$ \|u(0)\|_{H^{3/4}_x}, \|v(0)\|_{H^{3/4}_x} \lesssim \epsilon; \qquad \|\eta(t)e\|_{X^{3/4,b}} \lesssim \epsilon. $$ Then we have $$ \|\eta(t)(u-v)\|_{X^{3/4,b}} \lesssim \|u(0)-v(0)\|_{H^{3/4}} + \|\eta(t)e\|_{X^{3/4,b}}. $$ In particular, we have $$ \sup_{0\leq t\leq 1} \|u(t)-v(t)\|_{H^{3/4}} \lesssim \|u(0)-v(0) \|_{H^{3/4}} + \|\eta(t)e\|_{X^{3/4,b}}. $$ \end{lemma} \begin{proof} The proof is very similar to that of Lemma 5.1 in \cite{CCT2003}. Here, we give only a sketch. Writing the integral equation for $v$ with a time cut-off function $\eta(t)$, $$ \eta(t)v(t) = \eta(t)e^{t\px^5}v(0) -\eta(t)e(t) +\eta(t)\int_0^t e^{(t-t')\px^5}\px^3(v^3)(t')dt', $$ we use \eqref{Xlinear}, \eqref{trilinear1} and a continuity argument (assuming $\epsilon $ is sufficiently small) to obtain $$ \|\eta(t)v\|_{X^{3/4,b}} \lesssim \epsilon. $$ We then repeat the same argument for the difference $w= u-v$ of the two solutions to get the desired result. \end{proof} \begin{proof}[Proof of Theorem~\ref{illposed}] Let $0<\delta \ll \epsilon \ll 1$ and $T>0$ be given. From Theorem~\ref{illposedNLS} we can find two global solutions $u_1,u_2$ satisfying \begin{gather} \label{illposedNLS1}\|u_j(0)\|_{H^s_x} \lesssim \epsilon \\ \label{illposedNLS2}\|u_1(0) -u_2(0)\|_{H^s_x} \lesssim \delta \\ \label{illposedNLS3}\sup_{0\le t\le T} \|u_1(t)-u_2(t)\|_{H^s_x} \gtrsim \epsilon \\ \label{ubound1}\sup_{0\leq t < \infty} \|u_j(t)\|_{H^k_x} \lesssim \epsilon \end{gather} for $s<0$ and $k\geq 6$ to be chosen later. Define $U_{ap,1}$ and $U_{ap,2}$ as in \eqref{apsolution}, and let $U_1,U_2$ be smooth global solutions with initial data $U_{ap,1},U_{ap,2}$, respectively. Now we rescale these solutions to make them satisfy \eqref{illposed1}, \eqref{illposed2}, \eqref{illposed3}. Set $$ U^\lambda_j(t,x) := \lambda U_j(\lambda^5t,\lambda x) $$ and similarly, $$ U_{ap,j}^\lambda(t,x) := \lambda U_{ap,j}(\lambda^5t,\lambda x), $$ for $j=1,2$. Then $$ U^\lambda_j(0,x) = \lambda \frac{2}{\sqrt{3N^3}}\text{Re}\,\, e^{iN\lambda x}u_j(0, \lambda x /(10N^3)^{1/2}).
$$ From Lemma~\ref{errorbound} and Lemma~\ref{error} we have $$ \sup_{0\leq t\leq 1} \|U_j(t)-U_{ap,j}(t)\|_{H^{3/4}_x} \lesssim \epsilon N^{-5/2+\delta} $$ for $j=1,2$. An induction argument on time intervals, up to time $\log N$, yields \begin{equation}\label{error1} \sup_{0\leq t \lesssim_\eta \log N} \|U_j(t)-U_{ap,j}(t)\|_{H^{3/4}_x} \lesssim \epsilon N^{-5/2+\eta} \end{equation} for $j=1,2$ and any $\eta > \delta >0$. Applying Lemma~\ref{modulation} when $ s\geq 0$ we obtain $$ \|U^\lambda_j(0)\|_{H^s_x} \lesssim \lambda^{s+1/2}N^{s-3/4}\|u_j(0)\|_{H^s_x}, $$ while for $s<0$, we use Lemma~\ref{modulation} (ii) for sufficiently large $k$ to obtain $$ \|U^\lambda_j(0)\|_{H^s_x} \lesssim \lambda^{s+1/2}N^{s-3/4}\|u_j(0)\|_{H^k_x}. $$ Setting $$ \lambda:= N^{\frac{3/4-s}{1/2+s}}, $$ we obtain \eqref{illposed1} for $U_j^\lambda(0)$ from \eqref{illposedNLS1} and \eqref{ubound1}. Similarly, we also get \eqref{illposed2} for $U_1^\lambda(0)-U_2^\lambda(0)$ from \eqref{illposedNLS2}. \\ Next, we show \eqref{illposed3}. From \eqref{illposedNLS3} one can find $t_0>0$ such that $$ \|u_1(t_0)-u_2(t_0)\|_{L^2_x} \gtrsim \epsilon. $$ Using Lemma~\ref{modulation} we obtain $$ \|U^\lambda_{ap,1}(t_0/\lambda^5) -U^\lambda_{ap,2}(t_0/\lambda^5) \|_{H^s_x} \gtrsim \lambda^{1/2+s}N^{s-3/4} \epsilon \sim \epsilon. $$ On the other hand, using the hypothesis $s>-\frac{7}{24}$ and \eqref{error1}, we have $$ \|U^\lambda_{ap,j}(t)-U^\lambda_j(t)\|_{H^s} \lesssim \lambda^{\max(0,s) +1/2}\epsilon N^{-5/2+\eta} \lesssim \epsilon $$ for $ 0<t\lesssim_\eta \log N/\lambda^5$ and sufficiently small $\eta>0$. The triangle inequality then shows $$ \|U^\lambda_1(t_0/\lambda^5) - U^\lambda_2(t_0/\lambda^5) \|_{H^s_x} \gtrsim\epsilon $$ for $t_0/\lambda^5 \ll \log N/\lambda^5$. Choosing $\lambda$ (and hence $N$) large enough that $t_0/\lambda^5 <T$, we get \eqref{illposed3}. This completes the proof. \end{proof}
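\begin{remark} For the reader's convenience we record the elementary exponent count behind the last two steps; this is only a quick check of what is implicit in the proof above. With $\lambda= N^{\frac{3/4-s}{1/2+s}}$ we have $$ \lambda^{s+1/2}N^{s-3/4} = N^{3/4-s}\cdot N^{s-3/4} = 1, $$ which is the normalization used for \eqref{illposed1} and \eqref{illposed3}. Moreover, for $-\frac 12<s<0$ the error bound requires $\lambda^{1/2}N^{-5/2+\eta}\lesssim 1$, that is $$ \frac{1}{2}\cdot\frac{3/4-s}{1/2+s} < \frac{5}{2}, $$ which is equivalent to $3/4-s<5/2+5s$, i.e.\ $s>-\frac{7}{24}$. For $s\geq 0$ the corresponding exponent is $3/4-s-5/2+\eta=-7/4-s+\eta<0$, so no further restriction arises. This is exactly where the hypothesis $s>-\frac{7}{24}$ of Theorem~\ref{illposed} enters. \end{remark}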
TITLE: On an exercise from Hatcher QUESTION [2 upvotes]: What are the homology groups of $S^{1}\times (S^{1}\vee S^{1})$? Here $\vee$ denotes the wedge sum. This is Problem 9 in Section 2.2. I was trying to use cellular homology, but I am not able to understand the CW complex structure of this space and the maps $d_{n}$. If someone could give a hint on how to proceed, it would help me learn how to apply cellular homology. Thanks in advance. REPLY [5 votes]: So the cellular structure isn't too complicated but perhaps it is good to first get a mental image of what the space looks like. If $S^1 \times S^1$ is a torus and $S^1 \vee S^1$ is a figure-8, then $S^1 \times (S^1 \vee S^1)$ would be a torus constructed out of a figure-8. That looks like two tori stacked on top of each other and glued along a common circle. The key word here is "glued." That suggests you use Mayer-Vietoris. You can also use the Künneth theorem which describes the homology of a product of two spaces, but I don't remember exactly where Hatcher covers that. But I think he covers Mayer-Vietoris early on. You can probably figure out the homology without Mayer-Vietoris. For instance, for $H_1$, each torus has two independent loops but you identify two of these loops together when you glue the tori. So $H_1 \cong \mathbb{Z}^3$. So going back to Mayer-Vietoris, you have two tori, $T_1, T_2$, glued along a common circle $S = T_1 \cap T_2$. Recall (in reduced homology): $$ 0 \to H_2(T_1) \oplus H_2(T_2) \to H_2(T_1 \cup T_2) \to H_1(T_1 \cap T_2) \to H_1(T_1) \oplus H_1(T_2) \to H_1(T_1 \cup T_2) \to 0 $$ Hopefully you already know what $H_i(T_1 \cap T_2)$ is for $i \ne 1$. So we just need to focus on the other maps. The key is this: if $\alpha_i, \beta_i$ are the two generators of $H_1(T_i)$ then the gluing has $S = \alpha_1 = \alpha_2$. The key map here is $H_1(S) \to H_1(T_1) \oplus H_1(T_2)$. Recall that this takes an element $x$ of $H_1(S)$ to its images inside of $T_1$ and $T_2$ respectively. Since $H_1(S)$ is generated by $[S]$ and the image of $[S]$ inside of $H_1(T_i)$ is $\alpha_i$, the map $H_1(S) \to H_1(T_1) \oplus H_1(T_2)$ is given by $[S] \mapsto (\alpha_1, \alpha_2)$. For the next part, I will write this ordered pair as a sum $\alpha_1 + \alpha_2$. Using the fact that this map is injective, you obtain $H_2(T_1 \cup T_2)$. Using what you know about the image, you obtain $$H_1(T_1 \cup T_2) \cong \frac{H_1(T_1) \oplus H_1(T_2)}{{\rm im}(H_1(S) \to H_1(T_1) \oplus H_1(T_2))} \cong \frac{\mathbb{Z} \cdot \{\alpha_1,\alpha_2,\beta_1,\beta_2\}}{\mathbb{Z} \cdot(\alpha_1 + \alpha_2)}.$$ If you want to consider the cellular structure of this space, take the cell structure of each torus $T_1, T_2$. Let's say the simplest one, where you have one edge for $\alpha_i$ and one for $\beta_i$ meeting at a common vertex, with just one face. Then you combine these two cell structures by gluing the $\alpha_1$ edge to the $\alpha_2$ edge. That gives you two faces, three edges and one vertex. These will generate $H_2, H_1, H_0$ respectively.
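To finish the cellular computation explicitly (just a quick sketch): with that cell structure the cellular chain complex is $$0 \to \mathbb{Z}^2 \xrightarrow{d_2} \mathbb{Z}^3 \xrightarrow{d_1} \mathbb{Z} \to 0.$$ Every edge starts and ends at the single vertex, so $d_1 = 0$. Each face is attached along a commutator word $\alpha_i\beta_i\alpha_i^{-1}\beta_i^{-1}$ (with $\alpha_1 = \alpha_2$ the shared edge), so every edge appears twice with opposite signs and $d_2 = 0$ as well. Hence $H_0 \cong \mathbb{Z}$, $H_1 \cong \mathbb{Z}^3$, $H_2 \cong \mathbb{Z}^2$ and all higher homology groups vanish, in agreement with the Mayer-Vietoris (or Künneth) computation above.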
Why Visit The most comprehensive sourcing platform. Exhibit Profile Venue: Singapore Expo - Bakery / Pastry Preparation Equipment and Accessories - Bakery / Pastry Processing Equipment and Accessories - Belts / Conveyors - Cake Decoration - Display & Shop-fitting Equipment - Gelato - Gelato / Ice Cream Machine - Ingredients - Kitchen Fittings / Appliances - Packaging - Processed and Convenience Food - Seafood - Snacks - Specialty / Fine Food - Staple Food Venue: Singapore Expo - Barista Tools and Accessories - Blenders - Coffee Beans - Coffee Brewing Machines - Coffee Grinders - Coffee Machines - Disposable Cups - Roasting Equipment - Tea Leaves A truly global marketplace With 4,000 exhibitors from over 70 countries/regions expected at FHA2018, you will be spoilt for choice by the tens of thousands of product offerings at FHA2018. You can literally source from across the globe at a single marketplace. New exhibitors, new products, new opportunities Every edition, FHA actively seeks new companies to participate in the show. It is also the preferred launch pad for many companies to introduce their latest products, technologies and solutions to the Asian markets and the world. You can look forward to many new products, disruptive technologies and other innovative exhibits at FHA2018. Make informed decisions From product demonstrations, on-the-spot clarification, instant product comparison to product tastings, these are just some of the things that you can achieve at FHA to help you make better informed decisions. Network effectively With 78,000 trade attendees from over 100 countries/regions expected to congregate at FHA2018, the opportunities for you to forge new relationships and catch up with existing partners are limitless. Nothing beats face-to-face meetings. Get inspired FHA organises and hosts a series of world-class culinary competitions as well as top-notch masterclasses, workshops and courses. Gain new ideas and pick up a tip or two from the competitors and subject experts, and apply them to your business. Acquire new knowledge Get up-to-date industry knowledge, insights into topical issues and new implementable strategies through FHA’s comprehensive range of conferences, seminars and masterclasses. Look for principals Many international manufacturers make it a point to be present at FHA. This biennial affair is the opportune platform for importers and distributors looking to bring in new brands and products to their countries. Roshan Fernando, Serendib Leisure Management, Sri Lanka I had a great experience at the show, and enjoyed good networking with other F&B professionals. Vijayakant Shanmugam, Hilton Singapore, Singapore FHA2018 is a must-visit and beneficial event for trade professionals in the following food & hospitality related industry sectors. Get the latest updates on FHA. Share interesting news, ideas and insights with the food and hospitality community on various FHA social media platforms.
Simply the best. But will he need a parka for Anfield? Welcome to GBWTF… Manager’s Edition. I’m The Likely Lad and I’ll be subbing in for umlaut75 this lovely Wednesday afternoon in Neeeew Yooooork City! Beach ball weather, amIright?!? Alright, alright… so tell me: Who’s ever been watching a game, having a nice ol’ time, nice lil’ morning of Ingerlish footy, when boom! You get hit with the touchline shot of a manager– a grown man, paid millions of pounds to lead and represent his international megaclub– in what one might only describe as an overgrown child’s Halloween costume? There he stands, a 50-, 60-, 70-year-old in a training kit, maybe even a pair of boots, as the game unfolds with his young charges rollicking about the pitch… In keeping with umlaut’s title, I’ve broken it down three ways. You have The Good– dashing, besuited, and stylish– along with The Bad– nice look, now tough it out and take off the parka, schoolboy!– and of course, the WTF? Come get some, Martin Man-Boy O’Neill! Good is a strong word, but these guys are pretty reliablely put-together. There’s no Mourinho in the lot, but at least they manage to pick out a new tie once in a while and don’t look like fat Spanish waiters. I am silent killer, Don Fabio.. the Black Hand! The rest of England would do well to have a walk through Don Capello’s wardrobe. Like most of the country’s best things, he’s an import. For us, it’s the combination of plastic frames and tailored suits– even if we’re a bit put off by the club/country crest on the breast– that puts Capello atop the class. Doing well on style points. Table is another matter… If only he could strike the same balance in his managerial style. Keane may be the country’s foremost suit-and-scruff man, and for that we reward him here. In fact, Ned tells me that if Roy keeps it up his current run, he should be offered a lifetime contract to stay at Ipswich. The NEXT One? I really tried to find an* Englishman with some class, but it was taking too damn long. So instead we look south to Catalonia, where they played the best footy of last season, and stake claim to the world’s NEW master of managerial chic. Unlike Keane, he’s ditched the fat ties. He also wins games, which counts for something. Pep is world football’s uber-pimp. Congrats. All bundled up and ready for school! Look at me. I’ve put on a proper suit and tie. Buttoned up all the way. Classy shoes. NOW I’m going to throw on this NFL sideline parka! What’s the point? If you’re gonna prowl around in one of these why not just leave the peejays on underneath? Harry does it, Arsene, Alex, Rafa… they all do it. It’s bad. Not as bad though, as what’s to come. Just plain rude. Are you f%&kin’ serious, mate? That’s just plain rude. want to sell me some diamonds? Or insurance? Could I interest you in this wonderful, pre-owned automobile? Clearly Mrs. Hughes is knotting his tie and picking his sideline gear. Hughes wears it like a little boy forced out on Sunday morning. Trick or treat! Hey folks, look at me costume! The absolute king abomination in all English football and my sole inspiration for putting this stupid post together. People complain about baseball managers wearing uniforms, but this is entirely worse. Why? Well, in baseball it’s a tradition. A stupid one, yes, but everyone does it. I still prefer the Connie Mack look. In football, there is choice. And Martin O’Neill, a very intelligent fella by all accounts, looks like he should be collecting bite size Crunch bars and chasing his mates around with shaving cream. 
Why in the world does he need to wear cleats on the sideline? (Ned suggests he may go straight from training with the reserves to the pitch. Outlandish, but even then, he should get changed. He does have an office doesn’t he?)And have you noticed that his sweat-shirt (not even a hoodie) has a number on it? He’s given himself the #31. Maybe he should change it to 47, as he is a 47 year old man dressed like a child. Hurry this up, I am STARVING TO DEATH out here Someone feed this imbecile. Actually, on second thought, don’t. I don’t think Mark Hughes looks bad. Jose is still the king, though. Pep owns leather ties. Pep is disqualified. Right on with Pulis, guy looks like a f**king NASCAR crew chief Im sorry, can we bring it back to O’Neill? Is it not absurd that he wears spikes and gave himself a squad number?? I know he’s not the first, but still… Pep is awesome, leather ties or no. I agree about the suit and casual winter coat thing; Wenger’s giant puffy coat always cracks me up. Crap, last comment was me. My computer has decided to not save my account anymore so I keep forgetting to sign back in. thank you, sarah. i know what women want. and it aint pulis in his perv hat… or starvin marvin I prefer MON’s squad number (and it’s not like anyone’s going to choose 31) to the manager’s initials, as seen on Steve Bruce, Owen Coyle, Arsene when he has a tracktop on, etc. O’Neill looked better earlier in his run at Villa, when his training top looked more like a jumper or a rugby, and Nike hadn’t added the odd striping to the pants. The funny thing is that he wears the standard blazer/travel tie for postmatch interviews. So it’s not like he can’t dress nice; it’s just that he chooses this. MON’s just one of the lads. Wears the same kit, gets into some handbags at practice. He’s keeping it really, really … well, real. I always get confused when I see Pulis doing post-game interviews because he takes his hat off. Every time I think it’s an assistant until they flash his name on the screen. Never fails. Pep looks sharp. I feel like he could play 90 in his suit without loosening the tie. And why no mention of the fact Wenger only owns one suit and tie? Gray suit, red tie. Always. Talk about a car salesman. Hughes simply looks like my boss: yeah, it’s great you wear a suit and tie everyday, but no one ever says, “wow, that’s a great looking suit and/or tie” Maybe it’s my American eye but I don’t see why the hate towards Pulis. If the sun isn’t out I don’t understand why he’d wear a ball cap but it doesn’t strike me as terribly out of place at an athletic event. Lambert shows up in a suit, does press then gets in a tracksuit for games, like one of the lads, acceptable in my book. All this talk of Pulis’s hat reminds me of this awful picture I saw of Aaron Ramsey where he’s got a goatee and a baseball cap…he looks like a Nascar driver. Or worse, a Nascar fan. Some people should just not wear hats. Good article, but one clarification, Roy Keane is Irish, not English so it should be “an Englishman with some class” not “another Englishman with some class” :) *Ben– good call. i know keane’s nationality, just a slip of the keyboard. fixed, to intended meaning. You can’t be a/the uber-pimp of world football when you mess up tying the tie, as in the pic o’ Pep. The skinny part in the back should not be longer than the fat front part. Although I do dig the disco stance. Well, as a mexican, I have to say one thing im favour of MON: hugo sanchez. 
The guy (currently managing Almeria in La Liga) has this hideous habit of wearing a polo shirt with a suit jacket (he doesn’t even care to pick a sports jacket up). I know the special one does this, but hugo puts his “stylish” touch to this: his polo shirt is usually “blind-me-red” or “i’m-a-veggie-phosphorecent-green” with a black or navy jacket. Btw, i wasn’t buying the “wear a polo under your jacket” thing, but Jose seems to pick his on rather nice stores…he actually kind of pulls it off. [...] is the trendiest of the sideline ranters? Clue – it’s not Martin O’Neill ( Unprofessional Foul [...]
Quality Control Technician | 714206 Revel IT Apply Now OUR GOAL: Treat our consultants and clients the way we would like others to treat us! Interested in joining our team? Check out the opportunity below and apply today! Reference: 714206 The Quality Control Technician in this 1st shift contract role is responsible for performing necessary Quality Control (QC) tests following approved manufacturing procedures, GMP and good laboratory practices. Additionally, maintains equipment and production documents and investigates laboratory exception events and works independently with general guidance from supervisor/ senior team members. Responsibilities: - Formulates bulk solutions. - Performs incoming, in-process and final release QC testing. - Maintains complete and accurate records. - Conducts environmental monitoring of work areas. - Maintains work space and associated equipment with the lab. - Ensures equipment is in compliance with calibration standards. - Supports the execution of protocols and analyzes basic data. - Participates in production document improvements. - Investigates laboratory exception events. - Other duties as assigned by management - Some weekend work as needed Required Skills: - 3 years relevant experience with an Associate’s Degree is required. - Less than 1 year entry level with Bachelor’s Degree is required. - Experience in a regulated industry is preferred - Experience in GDP/ GMP is preferred - Experience working in an environment where respirators are required - Knowledge of basic and some specialized laboratory techniques and QC testing. - Knowledge of manufacturing procedures and good laboratory practices. - Possesses team work, documentation, problem solving and troubleshooting, and quality orientation skills. - Proficiency in oral and written communication skills in English. - Ability to utilize smart forms, mouse, keyboard, and other data entry devices. - Ability to utilize electronic office suite of computer programs (i.e. Email, electronic calendar, file download, save to network drive). - Ability to function with a respirator.
\begin{document} \title{Linking forms of amphichiral knots} \author{Stefan Friedl} \address{Fakult\"at f\"ur Mathematik\\ Universit\"at Regensburg\\ Germany} \email{[email protected]} \author{Allison N.~Miller} \address{Department of Mathematics, University of Texas, Austin, USA} \email{[email protected]} \author{Mark Powell} \address{ D\'epartement de Math\'ematiques, Universit\'e du Qu\'ebec \`a Montr\'eal, Canada} \email{[email protected]} \def\subjclassname{\textup{2010} Mathematics Subject Classification} \expandafter\let\csname subjclassname@1991\endcsname=\subjclassname \expandafter\let\csname subjclassname@2000\endcsname=\subjclassname \subjclass{ 57M25, 57M27, } \begin{abstract} We give a simple obstruction for a knot to be amphichiral, in terms of the homology of the $2$-fold branched cover. We work with unoriented knots, and so obstruct both positive and negative amphichirality. \end{abstract} \maketitle \section{Introduction} By a knot we mean a $1$-dimensional submanifold of $S^3$ that is diffeomorphic to $S^1$. Given a knot $K$ we denote its \emph{mirror image} by $mK$, the image of $K$ under an orientation reversing homeomorphism $S^3 \to S^3$. We say that a knot $K$ is \emph{amphichiral} if $K$ is (smoothly) isotopic to $mK$. Note that we consider unoriented knots, so we do not distinguish between positive and negative amphichiral knots. In this paper we will see that the homology of the $2$-fold branched cover can be used to show that many knots are not amphichiral. Before we state our main result we recall some definitions and basic facts. \bnm \item \label{item:intro-list-1} Given a knot $K$ we denote the 2-fold cover $\Sigma(K)$ of $S^3$ branched along $K$ by $\Sigma(K)$. If $A$ is a Seifert matrix for $K$, then a presentation matrix for $H_1(\Sigma(K);\Z)$ is given by $A+A^T$. See~\cite{Ro90} for details. \item The \emph{determinant of $K$} is defined as the order of $H_1(\Sigma(K);\Z)$. By (\ref{item:intro-list-1}) we have $\det(K) = \det(A+A^T)$. Alternative definitions are given by $\det(K)=\Delta_K(-1)=J_K(-1)$ where $\Delta_K(t)$ denotes the Alexander polynomial and $J_K(q)$ denotes the Jones polynomial~\cite[Corollary~9.2]{Li97},~\cite[Theorem~8.4.2]{Ka96}. \item Given an abelian additive group $G$ and a prime $p$, the \emph{$p$-primary part of $G$} is defined as \[ \hspace{1cm} G_p\,:=\,\{g\in G\,|\, p^k\cdot g=0\mbox{ for some }k\in \N_0\}.\] \enm By studying the linking form on the $2$-fold branched cover we prove the following theorem which is the main result of this article. \begin{theorem}\label{mainthm} Let $K$ be a knot and let $p$ be a prime with $p\equiv 3\mod 4$. If $K$ is amphichiral, then the $p$-primary part of $H_1(\Sigma(K))$ is either zero or it is not cyclic. \end{theorem} The following corollary, proven by Goeritz~\cite[p.~654]{Go33} in 1933, gives an even more elementary obstruction for a knot to be amphichiral. \begin{corollary}\label{maincor}\textbf{\emph{(Goeritz)}} Suppose $K$ is an amphichiral knot and $p$ is a prime with $p\equiv 3\mod{4}$. Then either $p$ does not divide $\det(K)$ or $p^2$ divides $\det(K)$. \end{corollary} In fact Goeritz showed an even stronger statement: given such $p$ the maximal power of $p$ that divides $\det(K)$ is even. This elegant and rather effective result of Goeritz also appears in Reidemeister's classic textbook on knot theory \cite[p.~30]{Re74} from the 1930s, but it did not appear in any of the more modern textbooks. 
In the following we present some examples which show the strength of the Goeritz theorem and we also show that our Theorem~\ref{mainthm} is independent of the Goeritz theorem. \begin{example}\mbox{} \bnm[(i)] \item For the trefoil $3_1$ we have $\det(3_1)=3$, so Corollary~\ref{maincor} immediately implies the very well known fact that the trefoil is not amphichiral. In most modern accounts of knot theory one uses either the signature of a knot or the Jones polynomial applied to prove that the trefoil is not amphichiral. We think that it is worth recalling that even the determinant can be used to prove this statement. \item As a reality check, consider the figure eight knot $4_1$, which is amphichiral~\cite[p.~17]{BZH14}. We have $\det(4_1)=5$, which is consistent with Corollary~\ref{maincor}. \item Corollary~\ref{maincor} also provides information on occasions when many other, supposedly more powerful invariants, fail. For example the chirality of the knot $K=10_{71}$ is difficult to detect, since the Tristram-Levine signature function~\cite{Le69,Tr69} of $K$ is identically zero and the HOMFLY and Kauffman polynomials of $K$ do not detect chirality. In fact in~\cite{RGK94} Chern-Simons invariants were used to show that $10_{71}$ is not amphichiral. However, a quick look at Knotinfo~\cite{CL} shows that $\det(10_{71})=77=7\cdot 11$, i.e.\ $10_{71}$ does not satisfy the criterion from Corollary~\ref{maincor} and so we see that this knot is not amphichiral. \item On the other hand it is also quite easy to find a knot for which Corollary~\ref{maincor}, and also Theorem~\ref{mainthm} below, fail to show that it is not amphichiral. For example, the torus knot $T(5,2)=5_1$ has determinant $\det(5_1)= 5$, but is not amphichiral since its signature is nonzero. \item One quickly finds examples of knots where Theorem~\ref{mainthm} detects chirality, but Corollary~\ref{maincor} fails to do so. Consider the Stevedore's knot $6_1$. A Seifert matrix is given by $A=\mraisebox{0.05cm}{\hspace{-0.05cm}\scriptsize{\scalebox{0.83}{\Big(}\ba{rr} 1&\phantom{-}0\\ 1&-2\ea\scalebox{0.83}{\Big)}}\hspace{-0.05cm}}$. By the aforementioned formula a presentation matrix for $H_1(\Sigma(K))$ is given by $A+A^T$. It is straightforward to compute that $H_1(\Sigma(K))\cong \Z_9$. So it follows from Theorem~\ref{mainthm}, applied with $p=3$, that the Stevedore's knot is not amphichiral. \enm \end{example} We refer to \cite{Ha80}, \cite[Proposition~1]{CM83} and~\cite[Theorem~9.4]{Hi12} for results on Alexander polynomials of amphichiral knots. In principle these results give stronger obstructions to a knot being amphichiral than the Goeritz Theorem, but in practice it seems to us that these obstructions are fairly hard to implement. To the best of our knowledge Theorem~\ref{mainthm} is the first obstruction to a knot being amphichiral that uses the structure of the homology module. We could not find a result in the literature that implies Theorem~\ref{mainthm}, essentially because the previous results on Alexander polynomials mentioned above did not consider the structure of the Alexander module. Similarly Goeritz~\cite{Go33} did not consider the structure of the first homology for the $2$-fold branched cover. So to the best of our knowledge, and to our surprise, Theorem~\ref{mainthm} seems to be new. Notwithstanding, the selling point of our theorem is not that it yields any new information on amphichirality, but that the obstruction is very fast to compute and frequently effective. 
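To illustrate how quickly the obstruction can be checked in practice, we spell out the computation behind the Stevedore example above. The presentation matrix $A+A^T$ has rows $(2,\,1)$ and $(1,\,-4)$. Swapping the two rows, subtracting twice the first row from the second and then adding four times the first column to the second brings it to the diagonal matrix with entries $1$ and $9$. Hence $H_1(\Sigma(6_1))\cong \Z_9$, the $3$-primary part is cyclic of order $9$, and Theorem~\ref{mainthm} applies with $p=3$, whereas $\det(6_1)=9=3^2$, so Corollary~\ref{maincor} gives no information.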
As mentioned above, the proof of Theorem~\ref{mainthm} relies on the study of the linking form on the $2$-fold branched cover $\Sigma(K)$. Similarly, as in \cite[Theorem~9.3]{Hi12}, one can use the Blanchfield form~\cite{Bl57} to obtain restrictions on the primary parts of the Alexander module. Moreover one can use twisted Blanchfield forms~\cite{Po16} to obtain conditions on twisted Alexander polynomials~\cite{Wa94,FV10} and twisted Alexander modules of amphichiral knots. Our initial idea had been to use the latter invariant. But we quickly found that even the elementary invariants studied in this paper are fairly successful. To keep the paper short we refrain from discussing these generalisations. The paper is organised as follows. Linking forms and basic facts about them are recalled in Section~\ref{section:linking-forms}. The proofs of Theorem~\ref{mainthm} and Corollary~\ref{maincor} are given in Section~\ref{section-proofs}. \subsection*{Acknowledgments.} We are indebted to Cameron Gordon, Chuck Livingston and Loren\-zo Traldi for pointing out the paper of Goeritz~\cite{Go33}, which contains Corollary~\ref{maincor}. Despite a fair amount of literature searching before we posted the first version, we did not come across this paper. We are pleased that the beautiful result of Goeritz was brought back from oblivion. The authors are grateful to the Hausdorff Institute for Mathematics in Bonn, in whose excellent research atmosphere this paper was born. We are also grateful to Jae Choon Cha and Chuck Livingston~\cite{CL} for providing Knotinfo, which is an indispensable tool for studying small crossing knots. The first author acknowledges the support provided by the SFB 1085 `Higher Invariants' at the University of Regensburg, funded by the DFG. The third author is supported by an NSERC Discovery Grant. \section{Linking forms}\label{section:linking-forms} \begin{definition} \mbox{} \begin{enumerate}[(i)] \item A \emph{linking form} on a finitely generated abelian group $H$ is a map $\lambda\colon H\times H\to \Q/\Z$ which has the following properties: \bnm \item $\lambda$ is bilinear and symmetric, \item $\lambda$ is nonsingular, that is the adjoint map $H\to \hom(H,\Q/\Z)$ given by $a\mapsto (b\mapsto \lambda(a,b))$ is an isomorphism. \enm \item Given a linking form $\lambda\colon H\times H\to \Q/\Z$ we denote the linking form on $H$ given by $(-\lambda)(a,b)=-\lambda(a,b)$ by $-\lambda$. \end{enumerate} \end{definition} \begin{lemma}\label{lem:linking-forms-on-cyclic-modules} Let $p$ be a prime and $n\in \N_0$. Every linking form $\lambda$ on $\Z_{p^n}$ is given by \[ \ba{rcl} \Z_{p^n}\times \Z_{p^n}&\to & \Q/\Z\\ (a,b)&\mapsto &\lambda(a,b)=\tmfrac{k}{p^n}a\cdot b\in \Q/\Z\ea\] for some $k\in \Z$ that is coprime to $p$. \end{lemma} \begin{proof} Pick $k\in \Z$ such that $\frac{k}{p^n}=\lambda(1,1)\in \Q/\Z$. By the bilinearity we have $\lambda(a,b)=\frac{k}{p^n}a\cdot b\in \Q/\Z$ for all $a,b\in \Z_{p^n}$. It follows easily from the fact that $\lambda$ is nonsingular that $k$ needs to be coprime to $p$. To wit, if $p|k$ then $k=p\cdot k'$, so for any $a\in p^{n-1}\Z_{p^n}$ and $b\in \Z_{p^n}$ we have $\lambda(a,b)= k' apb/p^{n-1}=0\in \Q/\Z$. Therefore the non-trivial subgroup $p^{n-1}\Z_{p^n}$ lies in the kernel of the adjoint map, so the adjoint map is not injective. \end{proof} \noindent We recall the following well known lemma. \begin{lemma}\label{lem:split-off-p-primary-summand} Let $\lambda\colon H\times H\to \Q/\Z$ be a linking form and let $p$ be a prime. 
The restriction of $\lambda$ to the $p$-primary part $H_p$ of $H$ is also nonsingular. \end{lemma} \begin{proof} It suffices to show that there exists an orthogonal decomposition $H=H_p\oplus H'$. Since $H$ is the direct sum of its $p$-primary subgroups we only need to show that if $p,q$ are two different primes and if $a\in H_p$ and $b\in H_q$, then $\lambda(a,b)=0$. So let $p$ and $q$ be two distinct primes. Since $p$ and $q$ are coprime there exist $x,y\in \Z$ with $px+qy=1$. It follows that $\lambda(a,b)=\lambda((px+qy)a,b)=\lambda(pxa,b)+\lambda(a,qyb)=0$. \end{proof} Let $\Sigma$ be an oriented rational homology 3-sphere, i.e.\ $\Sigma$ is a 3-manifold with $H_*(\Sigma;\Q)\cong H_*(S^3;\Q)$. Consider the maps \[ H_1(\Sigma;\Z)\,\,\xrightarrow{\,\op{PD}^{-1}\,}\,\, H^2(\Sigma;\Z)\,\,\xleftarrow{\,\delta\,}\,\, H^1(\Sigma;\Q/\Z)\,\,\xrightarrow{\,\op{ev}\,}\,\, \hom(H_1(\Sigma;\Z),\Q/\Z),\] where the maps are given as follows: \bnm \item the first map is given by the inverse of Poincar\'e duality, that is the inverse of the map given by capping with the fundamental class of the \emph{oriented} manifold $\Sigma$; \item the second map is the connecting homomorphism in the long exact sequence in cohomology corresponding to the short exact sequence $0\to \Z\to \Q\to \Q/\Z\to 0$ of coefficients; and \item the third map is the evaluation map. \enm The first map is an isomorphism by Poincar\'e duality, the second map is an isomorphism since $\Sigma$ is a rational homology sphere and so $H^i(\Sigma;\Q)=H_{3-i}(\Sigma;\Q)=0$ for $i=1,2$, and the third map is an isomorphism by the universal coefficient theorem, and the fact that $\Q/\Z$ is an injective $\Z$-module. Denote the corresponding isomorphism by $\Phi_\Sigma \colon H_1(\Sigma;\Z)\to \hom(H_1(\Sigma;\Z),\Q/\Z)$ and define \[ \ba{rcl}\lambda_\Sigma\colon H_1(\Sigma;\Z)\times H_1(\Sigma;\Z)&\to & \Q/\Z\\ (a,b)&\mapsto & (\Phi_\Sigma(a))(b).\ea\] \begin{lemma} For every oriented rational homology 3-sphere $\Sigma$, the map $\lambda_\Sigma$ is a linking form. \end{lemma} \begin{proof} We already explained why $\Phi_\Sigma$ is an isomorphism, which is equivalent to the statement that $\lambda_\Sigma$ is nonsingular. Seifet~\cite[p.~814]{Se33} gave a slightly non-rigorous proof that $\lambda_\Sigma$ is symmetric, more modern proofs are given in \cite{Po16} or alternatively in \cite[Chapter~48.3]{Fr17}. \end{proof} \noindent The following lemma is an immediate consequence of the definitions and the obvious fact that for an oriented manifold $M$ we have $[-M]=-[M]$. \begin{lemma}\label{lem:linking-form-reverse-orientation} Let $\Sigma$ be an oriented rational homology 3-sphere. We denote the same manifold but with the opposite orientation by $-\Sigma$. For any $a,b\in H_1(\Sigma;\Z)=H_1(-\Sigma;\Z)$ we have \[ \lambda_{-\Sigma}(a,b)\,\,=\,\,-\lambda_{\Sigma}(a,b).\] \end{lemma} Let $K\subset S^3$ be a knot and let $\Sigma(K)$ be the 2-fold cover of $S^3$ branched along $K$. Note that $\Sigma(K)$ admits a unique orientation such that the projection $p\colon \Sigma(K)\to S^3$ is orientation-preserving outside of the branch locus $p^{-1}(K)$. Henceforth we will always view $\Sigma(K)$ as an oriented manifold. \begin{lemma}\label{lem:branched-covers-knots}\mbox{} \bnm[(i)] \item\label{item:lem:branched-covers-item-1} Let $K$ and $J$ be two knots. If $K$ and $J$ are (smoothly) isotopic, then there exists an orientation-preserving diffeomorphism between $\Sigma(K)$ and $\Sigma(J)$. \item\label{item:lem:branched-covers-item-2} Let $K$ be a knot. 
There exists an orientation-reversing diffeomorphism $\Sigma(K)\to \Sigma(mK)$. \enm \end{lemma} \begin{proof} The first statement follows immediately from the isotopy extension theorem~\cite[Theorem~II.5.2]{Ko93}. The second statement is an immediate consequence of the definitions. \end{proof} \section{Proofs}\label{section-proofs} \subsection{Proof of Theorem~\ref{mainthm}} Let $K$ be a knot and let $p$ be a prime. By Lemma~\ref{lem:branched-covers-knots}~(\ref{item:lem:branched-covers-item-2}) there exists an orientation-preserving diffeomorphism $f\colon \Sigma(K)\to -\Sigma(mK)$, which means that $f$ induces an isometry from $\lambda_{\Sigma(K)}$ to $\lambda_{-\Sigma(mK)}$. It follows from Lemma~\ref{lem:linking-form-reverse-orientation} that $f$ induces an isometry from the linking form $\lambda_{\Sigma(K)}$ to $-\lambda_{\Sigma(mK)}$. In particular $f$ induces an isomorphism $H_1(\Sigma(K);\Z)_p\to H_1(\Sigma(mK);\Z)_p$ between the $p$-primary parts of the underlying abelian groups. Now suppose that $K$ is amphichiral, meaning that $mK$ is isotopic to $K$. Write $H=H_1(\Sigma(K))$, denote the $p$-primary part of $H$ by $H_p$, and let $\lambda_p\colon H_p\times H_p\to \Q/\Z$ be the restriction of the linking form $\lambda_{\Sigma(K)}$ to $H_p$. It follows from Lemma~\ref{lem:split-off-p-primary-summand} that $\lambda_p$ is also a linking form. Then by Lemma~\ref{lem:branched-covers-knots} (\ref{item:lem:branched-covers-item-1}) and the above discussion, there exists an isometry \[\Phi\colon (H_p,-\lambda_p)\xrightarrow{\cong} (H_p,\lambda_p).\] Now suppose that $H_p$ is cyclic and nonzero, so that we can make the identification $H_p=\Z_{p^n}$ for some $n\in \N$. By Lemma~\ref{lem:linking-forms-on-cyclic-modules}, there exists a $k\in \Z$, coprime to $p$, such that $\lambda_p(a,b)=\frac{k}{p^n}ab\in \Q/\Z$ for all $a,b\in \Z_{p^n}$. The isomorphism $\Phi\colon \Z_{p^n}\to \Z_{p^n}$ is given by multiplication by some $r\in \Z$ that is coprime to $p$. We have that \[-\tmfrac{k}{p^n}\,\,=\,\,(-\lambda_p)(1,1)\,\,=\,\,\lambda_p(r\cdot 1,r\cdot 1)\,\,=\,\,\tmfrac{k}{p^n}r^2\,\,\in\,\,\Q/\Z.\] Thus there exists $m\in \Z$ such that $-\tmfrac{k}{p^n} = \tmfrac{k}{p^n}r^2 +m$, so $-k=kr^2 +p^n m$. Working modulo $p$ we obtain $-k \equiv kr^2 \mod{p}$. Since $k$ is coprime to $p$, it follows that $-1\equiv r^2\mod p$. But it is a well known fact from classical number theory, see e.g.\ \cite[p.~133]{Co09}, that for an odd prime $p$ the number $-1$ is a square mod $p$ if and only if $p\equiv 1\mod{4}$. Thus we have shown that for an amphichiral knot, and a prime $p$ such that $H_p$ is nontrivial and cyclic, we have that $p\equiv 1\mod 4$. This concludes the proof of (the contrapositive of) Theorem~\ref{mainthm}. \subsection{Proof of Corollary~\ref{maincor}} Let $K$ be an amphichiral knot. Recall that by definition of the determinant we have $\det(K)=|H_1(\Sigma(K))|$. Now let $p$ be a prime with $p\equiv 3\mod 4$ that divides $\det(K)$. We have to show that $p^2$ divides $\det(K)$. Denote the $p$-primary part of $H_1(\Sigma(K))$ by $H_p$. Since $p$ divides $\det(K)$, we see that $H_p$ is nonzero. By Theorem~\ref{mainthm}, we know that $H_p$ is not cyclic. But this implies that $p^2$ divides the order of $H_p$, which in turn implies that $p^2$ divides $\det(K)$. This concludes the proof of Corollary~\ref{maincor}.
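To make the last step concrete, consider once more the case $p=3$ and $H_3\cong \Z_9$ arising from the Stevedore's knot; the following is only an illustration of the argument above. Any linking form on $\Z_9$ is of the form $\lambda(a,b)=\tmfrac{k}{9}ab$ with $k$ coprime to $3$, and an isometry $(\Z_9,-\lambda)\cong (\Z_9,\lambda)$ would produce an $r$ with $-k\equiv kr^2\mod 9$, hence $r^2\equiv -1\mod 3$. This is impossible, since the squares modulo $3$ are $0$ and $1$.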
Notes about the confusion regarding Contingent Valuation (CV) and Discrete Choice Experiment (DCE). Contingent valuation is a stated preference method and a survey-based approach to nonmarket valuation. A contingent-valuation question carefully describes a stylized market to elicit information on the maximum a person would pay (or accept) for a good or service when market data are not available. While controversial, as will be discussed below, contingent valuation and choice experiments — a close cousin in the stated preference family of valuation methods — are arguably the most commonly used nonmarket valuation methods (Boyle, 2017). Contingent valuation (CV) is usually the only approach to obtain another distinctive property of many environmental goods – the passive use component of their economic value (Krutilla, 1967; Carson et al., 1999). (Carson and Hanemann, 2005) Ways of Eliciting Information on preferences from survey respondents. Table 4.1 Steps in conducting a contingent-valuation study |Step 5.1| Describe the item to be valued| |Step 5.2| Select a provision mechanism| |Step 5.3| Select a payment vehicle| |Step 5.4| Select a decision rule| |Step 5.5| Select a time frame of payment| |Step 5.6| Include substitutes and budget constraint reminders| |Step 6.1| Select a response format| |Step 6.2| Allow for values of $0| |Step 6.3| Address protest and other types of misleading responses|
- Furniture - Electronics - Appliances - Seasonal - How To Own It - Pay Online Find a store based on your current location.Use My Location havre, MTBack to Store List Address:1753 Us Highway 2 Nw Spc 40c Havre, MT 59501 (406) 265-3614 Aaron's - Lease to Own Retailer of Furniture, Electronics and Appliances Aaron’s, located at 1753 Us Highway 2 Nw Spc 40c, Havre, MT, has all of your favorite brands with the Aaron’s Low Price Guarantee†. Shop your favorite electronics, furniture, appliances and more when you visit the Havre, Aaron’s store. No credit is needed and you can choose the lease ownership plan that works for you. Havre Aaron’s also offers free delivery and setup and you’ll always get the highest level of service - satisfaction guaranteed. Give us a call and visit the Havre Aaron’s store today or shop online at Aarons.com! We've found that customer reviews are very helpful in keeping our business thriving. We would truly appreciate a review from you! Visit your preferred site to leave a review or comment:
\begin{document} \title{3-Lie bialgebras and 3-Lie classical Yang-Baxter equations in low dimensions} \author{Chengyu Du} \address{Chern Institute of Mathematics\& LPMC, Nankai University, Tianjin 300071, China} \email{[email protected]} \author{Chengming Bai} \address{Chern Institute of Mathematics \& LPMC, Nankai University, Tianjin 300071, China} \email{[email protected]} \author{Li Guo} \address{Department of Mathematics and Computer Science, Rutgers University, Newark, NJ 07102} \email{[email protected]} \date{\today} \begin{abstract} In this paper, we give some low-dimensional examples of local cocycle 3-Lie bialgebras and double construction 3-Lie bialgebras, which were introduced in the study of the classical Yang-Baxter equation and Manin triples for 3-Lie algebras. We give an explicit and practical formula to compute the skew-symmetric solutions of the 3-Lie classical Yang-Baxter equation (CYBE). As an illustration, we obtain all skew-symmetric solutions of the 3-Lie CYBE in complex 3-Lie algebras of dimension 3 and 4 and then the induced local cocycle 3-Lie bialgebras. On the other hand, we classify the double construction 3-Lie bialgebras for complex 3-Lie algebras in dimensions 3 and 4 and then give the corresponding 8-dimensional pseudo-metric 3-Lie algebras. \end{abstract} \subjclass[2010]{16T10, 16T25, 15A75, 17A30, 17B62, 81T30} \keywords{bialgebra, 3-Lie algebra, 3-Lie bialgebra, classical Yang-Baxter equation, Manin triple} \maketitle \tableofcontents \numberwithin{equation}{section} \allowdisplaybreaks \section{Introduction} Lie algebras have been generalized to higher arities as $n$-Lie algebras (\cite{Filippov,Kas,Ling}), which have connections with several fields of mathematics and physics. For example, the algebraic structure of $n$-Lie algebras corresponds to Nambu mechanics (\cite{Awata,Gautheron,Nambu,Tak}). As a special case of $n$-Lie algebras, 3-Lie algebras play an important role in string theory (\cite{Bagger,Figu1,Gus,Ho1,Papa}). For instance, the structure of 3-Lie algebras appears in the study of the supersymmetry and gauge symmetry transformations of the world-volume theory of multiple coincident M2-branes. In particular, metric 3-Lie algebras, or more generally 3-Lie algebras with invariant symmetric bilinear forms, attract even more attention in physics. In fact, the invariant inner product of a 3-Lie algebra is very useful in order to obtain the correct equations of motion for the Bagger-Lambert theory from a Lagrangian that is invariant under certain symmetries. In order to find new Bagger-Lambert Lagrangians, one approach is to consider 3-Lie algebras with metrics of signature $(p,q)$, or with a degenerate invariant symmetric bilinear form. Therefore, it is worthwhile to find new 3-Lie algebras with invariant symmetric bilinear forms. On the other hand, it is interesting to consider bialgebra structures on 3-Lie algebras. Given a 3-Lie algebra $(A,[\cdot,\cdot,\cdot])$ and a coalgebra $(A,\Delta)$ such that $(A^{\ast},\Delta^{\ast})$ is also a 3-Lie algebra, the most important ingredient of a bialgebra theory is the compatibility condition between the two structures. As pointed out in \cite{C.Bai}, it is quite common for an algebraic system to have multiple bialgebra structures that differ only by their compatibility conditions. A good compatibility condition is prescribed on one hand by a strong motivation and potential applications, and on the other hand by a rich structure theory and effective constructions.
Motivated by the well-known Lie bialgebra theory, the following compatibility conditions are applied in the construction of the bialgebra theory for 3-Lie algebras: \begin{enumerate} \item[(a)] the comultiplication $\Delta$ satisfies a certain ``derivation'' condition; \item[(b)] the comultiplication $\Delta$ is a 1-cocycle on $A$; \item[(c)] there is a Manin triple $(A\oplus A^{\ast},~A,~A^{\ast})$. \end{enumerate} The above three conditions are equivalent for Lie algebras, but the equivalences are lost when they are extended to 3-Lie algebras. Hence these conditions lead to the following three approaches respectively. \begin{enumerate} \item Based on Condition (a), there is an approach of bialgebra theory for 3-Lie algebras introduced in \cite{R.Bai}. Unfortunately, it is a formal generalization in a certain sense and neither a coboundary theory nor the structure on the double space $A\oplus A^*$ is known. \item Motivated by Condition (b) with certain adjustments on the so-called ``1-cocycles'', there is a bialgebra theory for 3-Lie algebras which is called the ``local cocycle 3-Lie bialgebra'' in \cite{C.Bai}. There is a coboundary theory which leads to the introduction of an analogue of the classical Yang-Baxter equation (CYBE), namely, the 3-Lie CYBE. That is, from a skew-symmetric solution $r\in A\otimes A$ of the 3-Lie CYBE in a 3-Lie algebra $A$, a local cocycle 3-Lie bialgebra is obtained. \item Based on Condition (c) with the introduction of an analogue of a Manin triple for 3-Lie algebras, a bialgebra theory for 3-Lie algebras which is called the ``double construction 3-Lie bialgebra'' was given in \cite{C.Bai}. Such a construction naturally provides a pseudo-metric 3-Lie algebra structure over the double space $A\oplus A^{\ast}$ with signature $(n,n)$, where $n=\dim A$, for the aforementioned study of Bagger-Lambert Lagrangians. \end{enumerate} In this paper, we continue the study of local cocycle and double construction 3-Lie bialgebras. The main purpose is to provide examples of such 3-Lie bialgebras systematically. For local cocycle 3-Lie bialgebras, we determine the examples from all skew-symmetric solutions of the 3-Lie CYBE. For double construction 3-Lie bialgebras, we obtain a complete classification. We give an explicit and practical formula to compute the skew-symmetric solutions of the 3-Lie CYBE and then, as an illustration, we give all skew-symmetric solutions of the 3-Lie CYBE in the complex 3-Lie algebras in dimensions 3 and 4, whose classification is already known (cf. \cite{BaiR}). Hence the induced local cocycle 3-Lie bialgebras are obtained. Besides, we classify the double construction 3-Lie bialgebras for the complex 3-Lie algebras in dimensions 3 and 4. As a byproduct, for the non-trivial cases, certain 8-dimensional pseudo-metric 3-Lie algebras are obtained explicitly. These examples can be regarded as a guide for further developments. The paper is organized as follows. In Section~\ref{sec:bas}, we give some elementary facts on 3-Lie algebras, the local cocycle 3-Lie bialgebras, the 3-Lie CYBE and the double construction 3-Lie bialgebras. In Section~\ref{sec:cybe}, we find all skew-symmetric solutions of the 3-Lie CYBE in the complex 3-Lie algebras in dimensions 3 and 4, and give the induced local cocycle 3-Lie bialgebras. In Section~\ref{sec:dc}, we classify the double construction 3-Lie bialgebras for the complex 3-Lie algebras in dimensions 3 and 4 and hence give certain corresponding 8-dimensional pseudo-metric 3-Lie algebras.
\section{3-Lie algebras and 3-Lie bialgebras} \label{sec:bas} In this section we recall notions and results on 3-Lie algebras and 3-Lie bialgebras which will be needed later in the paper. We follow~\cite{C.Bai} to which we refer the reader for further details. \subsection{3-Lie algebras} \begin{defi}\label{th:2.1} \emph{(\cite{Filippov})} {\rm A {\bf 3-Lie algebra} is a vector space $A$ with a skew-symmetric linear map (3-Lie bracket) $[\cdot ,\cdot ,\cdot ]:\otimes^3 A\rightarrow A$ such that the following \textbf{Fundamental Identity} holds: \begin{equation}\label{eq:2.1} [x_1,x_2,[x_3,x_4,x_5]]=[[x_1,x_2,x_3],x_4,x_5]+[x_3,[x_1,x_2,x_4],x_5]+[x_3,x_4,[x_1,x_2,x_5]], \end{equation} for any $x_1,\cdots,x_5\in A$.} \end{defi} The Fundamental Identity can be rewritten in terms of the operator \begin{equation}\label{eq:2.2} \textrm{ad}_{x_1,x_2}:A\rightarrow A,~~~~~~\textrm{ad}_{x_1,x_2}x=[x_1,x_2,x] \end{equation} as \begin{equation}\label{eq:2.3} \textrm{ad}_{x_1,x_2}[x_3,x_4,x_5]=[\textrm{ad}_{x_1,x_2}x_3,x_4,x_5]+[x_3,\textrm{ad}_{x_1,x_2}x_4,x_5]+[x_3,x_4,\textrm{ad}_{x_1,x_2}x_5]. \end{equation} \begin{defi}\label{th:2.2} \emph{(\cite{Dzhu,Kas})} {\rm Let $V$ be a vector space. A {\bf representation of a 3-Lie algebra} $A$ on $V$ is a skew-symmetric linear map $\rho:\otimes^2 A\rightarrow \mathfrak{gl}(V)$ such that for any $x_1,x_2,x_3,x_4\in A$,} \begin{eqnarray*} &(i)\; \rho(x_1,x_2)\rho(x_3,x_4)-\rho(x_3,x_4)\rho(x_1,x_2)=\rho([x_1,x_2,x_3],x_4)-\rho([x_1,x_2,x_4],x_3);\\ &(ii)\; \rho([x_1,x_2,x_3],x_4)=\rho(x_1,x_2)\rho(x_3,x_4)+\rho(x_2,x_3)\rho(x_1,x_4)+\rho(x_3,x_1)\rho(x_2,x_4). \end{eqnarray*} \end{defi} Let $(V,\rho)$ be a representation of a $3$-Lie algebra $A$. Define $\rho^{\ast}:\otimes^2 A\longrightarrow\mathfrak{gl}(V^{\ast})$ by \begin{equation}\label{eq:dual} \langle \rho^{\ast}(x_1,x_2) \alpha, v\rangle =-\langle \alpha,\rho(x_1,x_2) v\rangle,\quad \forall \alpha\in V^{\ast},~x_1,x_2\in A,~ v\in V. \end{equation} \begin{pro} With the above notations, $(V^{\ast},\rho^{\ast})$ is a representation of $A$, called the dual representation. \end{pro} \begin{ex}{\rm Let $A$ be a 3-Lie algebra. The linear map ${\rm ad}:\otimes^2A\rightarrow \frak g\frak l(A)$ with $x_1\otimes x_2\mapsto {\rm ad}_{x_1,x_2}$ for any $x_1,x_2\in A$ defines a representation $(A, {\rm ad})$ which is called the {\bf adjoint representation} of $A$, where ${\rm ad}_{x_1,x_2}$ is given by Eq.~\eqref{eq:2.2}. The dual representation $(A^*,{\rm ad}^*)$ of the adjoint representation $(A,{\rm ad})$ of a 3-Lie algebra $A$ is called the {\bf coadjoint representation}. }\end{ex} The classification of complex 3-Lie algebras in dimensions 3 and 4 is known (cf. \cite{BaiR}). \begin{pro}\label{th:2.3} There is a unique non-trivial 3-dimensional complex 3-Lie algebra. It has a basis $\{e_1,e_2,e_3\}$ with respect to which the non-zero product is given by \begin{equation*} [e_1,e_2,e_3]=e_1. \end{equation*} \end{pro} \begin{pro}\label{th:2.4} Let $A$ be a non-trivial 4-dimensional complex 3-Lie algebra. Then $A$ has a basis $\{e_1,e_2,e_3,e_4\}$ with respect to which the non-zero product of the 3-Lie algebra is given by one of the following. \begin{eqnarray*} &~&(1)~[e_1,e_2,e_3]=e_4,[e_1,e_2,e_4]=e_3,[e_1,e_3,e_4]=e_2,[e_2,e_3,e_4]=e_1;\\ &~&(2)~[e_1,e_2,e_3]=e_1;\\ &~&(3)~[e_2,e_3,e_4]=e_1;\\ &~&(4)~[e_2,e_3,e_4]=e_1,[e_1,e_3,e_4]=e_2;\\ &~&(5)~[e_2,e_3,e_4]=e_2,[e_1,e_3,e_4]=e_1;\\ &~&(6)~[e_2,e_3,e_4]=\alpha e_1+e_2,~\alpha\neq 0,[e_1,e_3,e_4]=e_2;\\ &~&(7)~[e_1,e_2,e_4]=e_3,[e_1,e_3,e_4]=e_2,[e_2,e_3,e_4]=e_1.
\end{eqnarray*} \end{pro} \subsection{Local cocycle 3-Lie bialgebras and the 3-Lie classical Yang-Baxter equation} Most of the facts in this subsection and the next subsection can be found in \cite{C.Bai}. \begin{defi}\label{th:2.5} {\rm Let $A$ be a 3-Lie algebra and $(V,\rho)$ be a representation of $A$. A linear map $f:A\rightarrow V$ is called a {\bf 1-cocycle} of $A$ associated to $(V,\rho)$ if it satisfies} \begin{equation*} f([x_1,x_2,x_3])=\rho(x_1,x_2)f(x_3)+\rho(x_2,x_3)f(x_1)+\rho(x_3,x_1)f(x_2),\;\;\forall x_1,x_2,x_3\in A. \end{equation*} \end{defi} \begin{defi} {\rm A {\bf local cocycle 3-Lie bialgebra} is a pair $(A,\Delta)$, where $A$ is a 3-Lie algebra, and $\Delta=\Delta_1+\Delta_2+\Delta_3:A\rightarrow A\otimes A\otimes A$ is a linear map, such that $\Delta^{\ast}:A^{\ast}\otimes A^{\ast}\otimes A^{\ast}\rightarrow A^{\ast}$ defines a 3-Lie algebra structure on $A^{\ast}$, and the following conditions are satisfied: \begin{eqnarray*} &(1)~\Delta_1~\text{is a 1-cocycle associated to the representation }(A\otimes A\otimes A, \emph{\rm ad}\otimes \id \otimes \id );\\ &(2)~\Delta_2~\text{is a 1-cocycle associated to the representation }(A\otimes A\otimes A, \id\otimes \emph{\rm ad}\otimes \id );\\ &(3)~\Delta_3~\text{is a 1-cocycle associated to the representation }(A\otimes A\otimes A, \id\otimes \id \otimes \emph{\rm ad}). \end{eqnarray*}} \end{defi} In order to define the 3-Lie classical Yang-Baxter equation, we first give some necessary notation. Let $A$ be a 3-Lie algebra and $r=\sum_i x_i\otimes y_i\in A\otimes A$. For any $1\leq p\neq q\leq 4$, define an inclusion $\cdot_{pq}:\otimes^2A\longrightarrow \otimes^4 A$ by $$ r_{pq}:=\sum_i z_{i1}\otimes\cdots\otimes z_{i4},\quad \text{ where } z_{ij}=\left\{\begin{array}{ll} x_i,& j=p,\\ y_i, & j=q, \\ 1, & j\neq p, q,\end{array} \right. $$ where $1$ is a symbol playing a role similar to that of a unit. Then define $[[r,r,r]]\in \otimes^4 A$ by \begin{eqnarray} \label{eq:rrr}[[r,r,r]]&:=&[r_{12},r_{13},r_{14}]+[r_{12},r_{23},r_{24}]+[r_{13},r_{23},r_{34}]+[r_{14},r_{24},r_{34}]\\ \nonumber&=&\sum_{i,j,k}\big([x_i,x_j,x_k]\otimes y_i\otimes y_j\otimes y_k+x_i\otimes [y_i,x_j,x_k]\otimes y_j\otimes y_k\\ \nonumber&&+ x_i\otimes x_j\otimes [y_i, y_j,x_k]\otimes y_k+ x_i\otimes x_j\otimes x_k\otimes [y_i,y_j,y_k]\big). \end{eqnarray} \begin{defi}{\rm Let $A$ be a $3$-Lie algebra and $r\in A\otimes A$. The equation $$[[r,r,r]]=0$$ is called the {\bf $3$-Lie classical Yang-Baxter equation (3-Lie CYBE)}. \mlabel{defi:3cybe}} \end{defi} \begin{lem} Let $A$ be a 3-Lie algebra and $r=\sum_i x_i\otimes y_i\in A\otimes A$. Set \begin{equation}\label{eq:delta123}\left\{\begin{array}{ccc} \Delta_1(x)&:=&\sum_{i,j} [x,x_i,x_j]\otimes y_j\otimes y_i;\\ \Delta_2(x)&:=&\sum_{i,j} y_i\otimes [x,x_i,x_j]\otimes y_j;\\ \Delta_3(x)&:=&\sum_{i,j} y_j\otimes y_i\otimes [x,x_i,x_j], \end{array}\right. \end{equation} where $x\in A$. Then \begin{enumerate} \item[\rm (1)] $\Delta_1$ is a $1$-cocycle associated to the representation $(A\otimes A\otimes A, {\rm ad}\otimes \id \otimes \id )$; \item[\rm (2)] $\Delta_2$ is a $1$-cocycle associated to the representation $(A\otimes A\otimes A,\id\otimes {\rm ad}\otimes \id )$; \item[\rm (3)] $\Delta_3$ is a $1$-cocycle associated to the representation $(A\otimes A\otimes A,\id\otimes \id \otimes {\rm ad})$. \end{enumerate} Moreover, $\Delta^*:A^*\otimes A^*\otimes A^*\rightarrow A^*$ defines a skew-symmetric operation, where $\Delta=\Delta_1+\Delta_2+\Delta_3$.
\end{lem} As is well known, a skew-symmetric solution of the CYBE in a Lie algebra gives a Lie bialgebra. As its 3-Lie algebra analogue, we have \begin{thm} Let $A$ be a $3$-Lie algebra and let $r\in A\otimes A$ be a skew-symmetric solution of the 3-Lie CYBE: $$[[r,r,r]]=0.$$ Define $\Delta:=\Delta_1+\Delta_2+\Delta_3: A\rightarrow A\otimes A\otimes A$, where $\Delta_1, \Delta_2, \Delta_3$ are induced by $r$ as in Eq.~\eqref{eq:delta123}. Then $\Delta^*$ defines a $3$-Lie algebra structure on $A^*$. Furthermore, $(A,\Delta)$ is a local cocycle 3-Lie bialgebra. \label{thm:ybe} \end{thm} \subsection{Double construction 3-Lie bialgebras} We end this preparatory section by recalling the notion of a double construction 3-Lie bialgebra and the related Manin triple. \begin{defi}\label{doulbe} {\rm Let $A$ be a 3-Lie algebra and $\Delta:A\rightarrow A\otimes A\otimes A$ a linear map. Suppose that $\Delta^{\ast}:A^{\ast}\otimes A^{\ast}\otimes A^{\ast}\to A^{\ast}$ defines a 3-Lie algebra structure on $A^{\ast}$. If for all $x,y,z\in A$, $\Delta$ satisfies the following conditions, \begin{equation}\label{eq:2.6} \Delta([x,y,z])=(\id\otimes \id \otimes \emph{\rm ad}_{y,z})\Delta(x)+(\id\otimes \id \otimes \emph{\rm ad}_{z,x})\Delta(y)+(\id\otimes \id \otimes \emph{\rm ad}_{x,y})\Delta(z), \end{equation} \begin{equation}\label{eq:2.7} \Delta([x,y,z])=(\id\otimes \id \otimes \emph{\rm ad}_{y,z})\Delta(x)+(\id\otimes \emph{\rm ad}_{y,z}\otimes \id )\Delta(x)+(\emph{\rm ad}_{y,z}\otimes \id \otimes \id )\Delta(x), \end{equation} then we call $(A,\Delta)$ a {\bf double construction 3-Lie bialgebra.}} \end{defi} \begin{defi} {\rm Let $A$ be a $3$-Lie algebra. A bilinear form $(\cdot,\cdot)_A$ on $A$ is called {\bf invariant} if it satisfies \begin{equation} ([x_1,x_2,x_3],x_4)_A+([x_1,x_2,x_4],x_3)_A=0,\quad\forall x_1,x_2,x_3,x_4\in A. \end{equation} A $3$-Lie algebra $A$ is called {\bf pseudo-metric} if there is a nondegenerate symmetric invariant bilinear form on $A$. } \end{defi} \begin{defi}\label{defi:Manin} {\rm A {\bf Manin triple of $3$-Lie algebras} consists of a pseudo-metric $3$-Lie algebra $(\mathcal{A},(\cdot,\cdot)_\mathcal{A})$ and $3$-Lie algebras $A_1, A_2$ such that \begin{enumerate} \item[\rm (1)] $A_1,A_2$ are isotropic $3$-Lie subalgebras of $\mathcal{A}$; \item[\rm (2)] $\mathcal{A}=A_1\oplus A_2$ as the direct sum of vector spaces; \item[\rm (3)] For all $x_1,y_1\in A_1 $ and $x_2,y_2\in A_2$, we have $\mathrm{pr}_1[x_1,y_1,x_2]=0$ and $\mathrm{pr}_2[x_2,y_2,x_1]=0$, where $\mathrm{pr}_1$ and $\mathrm{pr}_2$ are the projections from $A_1\oplus A_2$ to $A_1$ and $A_2$ respectively. \end{enumerate} } \end{defi} Let $(A,[\cdot,\cdot,\cdot])$ and $(A^{\ast},[\cdot,\cdot,\cdot]^{\ast})$ be 3-Lie algebras. On $A\oplus A^{\ast}$, there is a natural nondegenerate symmetric bilinear form $(\cdot,\cdot)_+$ given by \begin{equation} \label{eq:bf} (x+\xi, y+\eta)_+=\langle x, \eta\rangle+\langle \xi,y\rangle,\;\;\forall x,y\in A, \xi,\eta\in A^{\ast}.
\end{equation} There is also a bracket operation $[\cdot,\cdot,\cdot]_{A\oplus A^{\ast}}$ on $A\oplus A^{\ast}$ given by \begin{eqnarray} \nonumber [x+\xi,y+\eta,z+\gamma]_{A\oplus A^{\ast}}&=&[x,y,z]+\mathrm{ad}_{x,y}^{\ast}\gamma+\mathrm{ad}_{y,z}^{\ast}\xi+\mathrm{ad}_{z,x}^{\ast}\eta\\ \label{eq:formularAA*} &&+\mathfrak{ad}_{\xi,\eta}^{\ast}z+\mathfrak{ad}_{\eta,\gamma}^{\ast}x+\mathfrak{ad}_{\gamma,\xi}^{\ast}y+[\xi,\eta,\gamma]^{\ast}, \end{eqnarray} where $\mathrm{ad}^{\ast}$ and $\mathfrak{ad}^{\ast}$ are the coadjoint representations of $A$ and $A^{\ast}$ on $A^{\ast}$ and $A$ respectively. Note that the bracket operation $[\cdot,\cdot,\cdot]_{A\oplus A^{\ast}}$ is naturally invariant with respect to the symmetric bilinear form $(\cdot,\cdot)_+$, and satisfies Condition~(3) in Definition~\ref{defi:Manin}. If $(A\oplus A^{\ast},[\cdot,\cdot,\cdot]_{A\oplus A^{\ast}})$ is a 3-Lie algebra, then obviously $A$ and $A^{\ast}$ are isotropic subalgebras. Consequently, $((A\oplus A^{\ast},(\cdot,\cdot)_+),A, A^{\ast})$ is a Manin triple, which is called {\bf the standard Manin triple of 3-Lie algebras.} \begin{thm}\label{thm:relations} Let $A$ be a $3$-Lie algebra and $\Delta:A\rightarrow A\otimes A\otimes A$ a linear map. Suppose that $\Delta^*:A^*\otimes A^*\otimes A^*\rightarrow A^*$ defines a $3$-Lie algebra structure on $A^*$. Then $(A,\Delta)$ is a double construction 3-Lie bialgebra if and only if $((A\oplus A^*,(\cdot,\cdot)_+),A,A^*)$ is a standard Manin triple, where the bilinear form $(\cdot,\cdot)_+$ and the $3$-Lie bracket $[\cdot,\cdot,\cdot]_{A\oplus A^*}$ are given by Eqs.~\eqref{eq:bf} and \eqref{eq:formularAA*} respectively. \end{thm} \section{Skew-symmetric solutions of the 3-Lie CYBE and local cocycle 3-Lie bialgebras} \label{sec:cybe} In this section, we derive a computable formula for the 3-Lie CYBE and apply it to obtain all skew-symmetric solutions of the 3-Lie CYBE in the complex 3-Lie algebras in dimensions 3 and 4. We then obtain the local cocycle 3-Lie bialgebras induced from these solutions. \subsection{Notational simplification of the 3-Lie CYBE} Let $A$ be an $n$-dimensional 3-Lie algebra with a basis $\{e_1,\cdots,e_n\}$. Set \begin{equation}\label{eq:3.2} r=\sum_{i,j}a^{ij}e_i\otimes e_j=\sum_ie_i \otimes (\sum_ja^{ij} e_j) \in A\otimes A. \end{equation} Then \begin{eqnarray*} [r_{12},r_{13},r_{14}]&=&\sum_{i,j,k,p,q,r}a^{ip}a^{jq}a^{kr}[e_i,e_j,e_k]\otimes e_p\otimes e_q\otimes e_r\\ &=&\sum_{p,q,r}\big(\sum_{i,j,k}a^{ip}a^{jq}a^{kr}[e_i,e_j,e_k]\otimes e_p\otimes e_q\otimes e_r\big). \end{eqnarray*} If any two of $i,j,k$ are equal, then $[e_i,e_j,e_k]=0$. So we can assume that $i,j,k$ are distinct in the sum. Let $S_3$ denote the symmetric group of degree 3. In the following, we let $\sigma\in S_3$ act on the triple $(i,j,k)$ by permuting the three positions: writing $(i_1,i_2,i_3):=(i,j,k)$, we define $(\sigma(i),\sigma(j),\sigma(k)) :=(i_{\sigma(1)},i_{\sigma(2)},i_{\sigma(3)})$. This convention applies even if $i,j,k$ are not distinct. Then for fixed $p,q,r$, we have \begin{eqnarray*} &&\sum_{i,j,k}a^{ip}a^{jq}a^{kr}[e_i,e_j,e_k]\otimes e_p\otimes e_q\otimes e_r\\ &&=\sum_{i<j<k}\sum_{\sigma\in S_3}a^{\sigma(i)p}a^{\sigma(j)q}a^{\sigma(k)r}[e_{\sigma(i)},e_{\sigma(j)},e_{\sigma(k)}]\otimes e_p\otimes e_q\otimes e_r\\ &&=\sum_{i<j<k}\big(\sum_{\sigma\in S_3}\mathrm{sgn}(\sigma)a^{\sigma(i)p}a^{\sigma(j)q}a^{\sigma(k)r}\big)[e_i,e_j,e_k]\otimes e_p\otimes e_q\otimes e_r.
\end{eqnarray*} Therefore $[r_{12},r_{13},r_{14}]$ is rewritten as \begin{equation*} [r_{12},r_{13},r_{14}]~=~\sum_{p,q,r}\sum_{i<j<k}(\sum_{\sigma\in S_3}\mathrm{sgn}(\sigma)a^{\sigma(i)p}a^{\sigma(j)q}a^{\sigma(k)r})[e_i,e_j,e_k]\otimes e_p\otimes e_q\otimes e_r. \end{equation*} Similarly, we have \begin{eqnarray*} ~[r_{12},r_{23},r_{24}] &=&\sum_{p,q,r}\sum_{i<j<k}(\sum_{\sigma\in S_3}\mathrm{sgn}(\sigma)a^{p\sigma(i)}a^{\sigma(j)q}a^{\sigma(k)r})e_p\otimes[e_i,e_j,e_k]\otimes e_q\otimes e_r,\\ ~[r_{13},r_{23},r_{34}] &=&\sum_{p,q,r}\sum_{i<j<k}(\sum_{\sigma\in S_3}\mathrm{sgn}(\sigma)a^{p\sigma(i)}a^{q\sigma(j)}a^{\sigma(k)r})e_p\otimes e_q\otimes[e_i,e_j,e_k]\otimes e_r,\\ ~[r_{14},r_{24},r_{34}] &=&\sum_{p,q,r}\sum_{i<j<k}(\sum_{\sigma\in S_3}\mathrm{sgn}(\sigma)a^{p\sigma(i)}a^{q\sigma(j)}a^{r\sigma(k)})e_p\otimes e_q\otimes e_r\otimes[e_i,e_j,e_k].\\ \end{eqnarray*} For all $i< j< k, p,q,r$, set \begin{align} M_{pqr}^{ijk}(1)&=(\sum_{\sigma\in S_3}\mathrm{sgn}(\sigma)a^{\sigma(i)p}a^{\sigma(j)q}a^{\sigma(k)r})[e_i,e_j,e_k]\otimes e_p\otimes e_q\otimes e_r, \label{eq:m1}\\ M_{pqr}^{ijk}(2)&=(\sum_{\sigma\in S_3}\mathrm{sgn}(\sigma)a^{p\sigma(i)}a^{\sigma(j)q}a^{\sigma(k)r})e_p\otimes[e_i,e_j,e_k]\otimes e_q\otimes e_r,\label{eq:m2}\\ M_{pqr}^{ijk}(3)&=(\sum_{\sigma\in S_3}\mathrm{sgn}(\sigma)a^{p\sigma(i)}a^{q\sigma(j)}a^{\sigma(k)r})e_p\otimes e_q\otimes[e_i,e_j,e_k]\otimes e_r,\label{eq:m3}\\ M_{pqr}^{ijk}(4)&=(\sum_{\sigma\in S_3}\mathrm{sgn}(\sigma)a^{p\sigma(i)}a^{q\sigma(j)}a^{r\sigma(k)})e_p\otimes e_q\otimes e_r\otimes[e_i,e_j,e_k].\label{eq:m4} \end{align} Thus, we have \begin{align} ~[r_{12},r_{13},r_{14}]&=\sum_{p,q,r}\sum_{i<j<k}M_{pqr}^{ijk}(1),\label{r1.1}\\ ~[r_{12},r_{23},r_{24}]&=\sum_{p,q,r}\sum_{i<j<k}M_{pqr}^{ijk}(2),\label{r1.2}\\ ~[r_{13},r_{23},r_{34}]&=\sum_{p,q,r}\sum_{i<j<k}M_{pqr}^{ijk}(3),\label{r1.3}\\ ~[r_{14},r_{24},r_{34}]&=\sum_{p,q,r}\sum_{i<j<k}M_{pqr}^{ijk}(4).\label{r1.4} \end{align} Moreover, it is obvious that $M_{pqr}^{ijk}(m)$ is invariant under the permutations on $\{i,j,k\}$, i.e., \begin{equation} M^{ijk}_{pqr}(m)=M^{\sigma(i)\sigma(j)\sigma(k)}_{pqr}(m), \;\; \forall p,q,r, i<j<k, m=1,2,3,4. \end{equation} Let $V$ be a vector space and let $\wedge$ denote the exterior product. For example, $$x\wedge y=x\otimes y-y\otimes x,\;\; x_i\wedge x_j\wedge x_k=\sum_{\sigma\in S_3}\textrm{sgn}(\sigma)x_{\sigma(i)}\otimes x_{\sigma(j)}\otimes x_{\sigma(k)},\;\;\forall x,y, x_i, x_j, x_k\in V.$$ Let $\wedge^k(V)$ denote the $k$-th exterior power of $V$. \subsection{Skew-symmetric solutions of the 3-Lie CYBE} We now give a simplified formula for the 3-Lie CYBE $[[r,r,r]]=0$ when $r=\sum\limits_{i,j}a^{ij}e_i\otimes e_j$ is skew-symmetric, i.e., $a^{ij}=-a^{ji}$. \begin{thm}\label{th:3.16} Let $A$ be a 3-Lie algebra with a basis $\{ e_1,\cdots,e_n\}$. Let the ternary operation be given by \begin{equation}\label{eq:3.15} [e_i,e_j,e_k]=\sum_{m}T_{ijk}^{m} e_m. \end{equation} Suppose that $r=\sum\limits_{i,j}a^{ij}e_i\otimes e_j\in A\otimes A$ is skew-symmetric. Then \begin{equation}\label{eq:3.22} [[r,r,r]]=\sum_{p<q<r}\sum_{i<j<k}\sum_l D^{ijk}_{pqr}T_{ijk}^{l} e_l\wedge e_p\wedge e_q\wedge e_r, \end{equation} where \begin{equation*} D^{ijk}_{pqr}:=\sum_{\sigma\in S_3}\mathrm{sgn}(\sigma)a^{\sigma(i)p}a^{\sigma(j)q}a^{\sigma(k)r},\;\;\forall i,j,k,p,q,r=1,\cdots,n. \end{equation*} \end{thm} The theorem has a direct consequence. \begin{cor}\label{co:3.2} Let $A$ be a 3-Lie algebra. If $r\in \wedge^2 (A)$, then $[[r,r,r]]\in \wedge^4(A)$. 
\end{cor} \begin{rmk} {\rm This corollary can be regarded as the 3-Lie algebra analogue of the following result on Lie algebras: for a Lie algebra $\frak g$, if $r\in \wedge^2 (\frak g)$, then $$ [[r,r]] = [r_{12},r_{13}]+[r_{12},r_{23}]+[r_{13},r_{23}]\in \wedge^3 (\frak g).$$ } \end{rmk} We will prove Theorem~\ref{th:3.16} in several steps. First, by the skew-symmetry of $r$, Eqs.~(\ref{eq:m1}) -- (\ref{eq:m4}) become \begin{align} M_{pqr}^{ijk}(1)&=D^{ijk}_{pqr}[e_i,e_j,e_k]\otimes e_p\otimes e_q\otimes e_r,\label{m1.1}\\ M_{pqr}^{ijk}(2)&=-D^{ijk}_{pqr}e_p\otimes[e_i,e_j,e_k]\otimes e_q\otimes e_r,\label{m1.2}\\ M_{pqr}^{ijk}(3)&=D^{ijk}_{pqr}e_p\otimes e_q\otimes[e_i,e_j,e_k]\otimes e_r,\label{m1.3}\\ M_{pqr}^{ijk}(4)&=-D^{ijk}_{pqr}e_p\otimes e_q\otimes e_r\otimes[e_i,e_j,e_k].\label{m1.4} \end{align} \begin{lem}\label{le:3.2} With the notations and conditions as above, the following statements hold. \begin{enumerate} \item[(a)] $D^{ijk}_{pqr}=\mathrm{sgn}(\tau )D^{\tau (i)\tau (j)\tau (k)}_{pqr}$ for any $\tau \in S_3$. \item[(b)] $D^{ijk}_{pqr}=-D^{pqr}_{ijk}$. \item[(c)] $D^{ijk}_{pqr}=\mathrm{sgn}(\ttau)D^{ijk}_{\ttau(p)\ttau(q)\ttau(r)}$ for any $\ttau\in S_3$. \item[(d)] If $p, q, r$ are not distinct, then $M^{ijk}_{pqr}(m)=0$ for any $m=1,2,3,4$. \item[(e)] $D_{ijk}^{ijk}=0$. \end{enumerate} \end{lem} \begin{proof} (a) First, for any $\tau \in S_3$, since $S_3\tau=S_3$, we have \begin{align*} D^{ijk}_{pqr}&=\sum_{\sigma\in S_3}\mathrm{sgn}(\sigma\tau )a^{\sigma\tau (i)p}a^{\sigma\tau (j)q}a^{\sigma\tau (k)r}\\ &=\mathrm{sgn}(\tau )\sum_{\sigma\in S_3}\mathrm{sgn}(\sigma)a^{\sigma\tau (i)p}a^{\sigma\tau (j)q}a^{\sigma\tau (k)r}\\ &=\mathrm{sgn}(\tau )D^{\tau (i)\tau (j)\tau (k)}_{pqr}. \end{align*} (b) In fact, we have \begin{align*} D^{ijk}_{pqr}&=a^{ip}a^{jq}a^{kr}-a^{ip}a^{kq}a^{jr}+a^{jp}a^{kq}a^{ir}-a^{jp}a^{iq}a^{kr}+a^{kp}a^{iq}a^{jr}-a^{kp}a^{jq}a^{ir},\\ D^{pqr}_{ijk}&=a^{pi}a^{qj}a^{rk}-a^{pi}a^{rj}a^{qk}+a^{qi}a^{rj}a^{pk}-a^{qi}a^{pj}a^{rk}+a^{ri}a^{pj}a^{qk}-a^{ri}a^{qj}a^{pk}\\ &=(-1)^3(a^{ip}a^{jq}a^{kr}-a^{ip}a^{jr}a^{kq}+a^{iq}a^{jr}a^{kp}-a^{iq}a^{jp}a^{kr}+a^{ir}a^{jp}a^{kq}-a^{ir}a^{jq}a^{kp})\\ &=-D^{ijk}_{pqr}. \end{align*} Hence $D^{ijk}_{pqr}=-D^{pqr}_{ijk}$. (c) By (a) and (b), we have \begin{equation*} D^{ijk}_{pqr}=-D^{pqr}_{ijk}=-\mathrm{sgn}(\ttau)D_{ijk}^{\ttau(p)\ttau(q)\ttau(r)}=\mathrm{sgn}(\ttau)D^{ijk}_{\ttau(p)\ttau(q)\ttau(r)}. \end{equation*} (d) It is a direct consequence of (c), obtained by taking $\ttau$ to be the transposition exchanging two equal indices among $p,q,r$. (e) This follows since, by (b), $D^{ijk}_{ijk}=-D^{ijk}_{ijk}$ and hence must be zero. This completes the proof of the lemma. \end{proof} By Lemma~\ref{le:3.2}, we can assume that in Eqs.~(\ref{r1.1}) -- (\ref{r1.4}), both $\{i,j,k\}$ and $\{p,q,r\}$ consist of distinct elements. Then these equations can be simplified to \begin{align} [r_{12},r_{13},r_{14}]=\sum_{p<q<r}\sum_{i<j<k}\sum_{\ttau\in \tS_3}M^{ijk}_{\ttau(p)\ttau(q)\ttau(r)}(1),\label{r2.1}\\ [r_{12},r_{23},r_{24}]=\sum_{p<q<r}\sum_{i<j<k}\sum_{\ttau\in \tS_3}M^{ijk}_{\ttau(p)\ttau(q)\ttau(r)}(2),\label{r2.2}\\ [r_{13},r_{23},r_{34}]=\sum_{p<q<r}\sum_{i<j<k}\sum_{\ttau\in \tS_3}M^{ijk}_{\ttau(p)\ttau(q)\ttau(r)}(3),\label{r2.3}\\ [r_{14},r_{24},r_{34}]=\sum_{p<q<r}\sum_{i<j<k}\sum_{\ttau\in \tS_3}M^{ijk}_{\ttau(p)\ttau(q)\ttau(r)}(4).\label{r2.4} \end{align} Before giving the proof of Theorem~\ref{th:3.16}, we record in the remark below how Eq.~\eqref{eq:3.22} can be checked in practice.
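\begin{rmk}{\rm The coefficients $D^{ijk}_{pqr}$ in Theorem~\ref{th:3.16} can be evaluated mechanically. As an illustration (this computational sketch is added by us and is not part of the original derivation; all function and variable names in it are our own), the following Python/SymPy code evaluates the terms of Eq.~\eqref{eq:3.22} for a generic skew-symmetric $r$ in the $4$-dimensional $3$-Lie algebra of Case~(2) in Proposition~\ref{th:2.4}, whose only non-zero structure constant, up to skew-symmetry, is $T^{1}_{123}=1$. Only the coefficient of $e_1\wedge e_2\wedge e_3\wedge e_4$ survives, which recovers the condition $a^{23}(a^{12}a^{34}-a^{13}a^{24}-a^{32}a^{14})=0$ obtained in the next subsection.}
\begin{verbatim}
from itertools import combinations, permutations
import sympy as sp

n = 4
# generic skew-symmetric matrix (a^{ij}); indices are 0-based in the code
a = sp.zeros(n, n)
for i in range(n):
    for j in range(i + 1, n):
        s = sp.Symbol('a%d%d' % (i + 1, j + 1))
        a[i, j], a[j, i] = s, -s

def sgn(p):
    # parity of a permutation of (0,1,2), via inversion count
    inv = sum(1 for x in range(3) for y in range(x + 1, 3) if p[x] > p[y])
    return -1 if inv % 2 else 1

def D(ijk, pqr):
    # D^{ijk}_{pqr} = sum over sigma in S_3 of
    #   sgn(sigma) a^{sigma(i)p} a^{sigma(j)q} a^{sigma(k)r}
    p, q, r = pqr
    return sum(sgn(s) * a[ijk[s[0]], p] * a[ijk[s[1]], q] * a[ijk[s[2]], r]
               for s in permutations(range(3)))

# structure constants of Case (2): [e_1,e_2,e_3] = e_1, i.e. T^1_{123} = 1
T = {(0, 1, 2, 0): 1}

# collect the coefficients D^{ijk}_{pqr} T^l_{ijk} of e_l ^ e_p ^ e_q ^ e_r;
# terms with l in {p,q,r} are dropped since the wedge product then vanishes
coeff = {}
for pqr in combinations(range(n), 3):
    for ijk in combinations(range(n), 3):
        d = D(ijk, pqr)
        for l in range(n):
            t = T.get(ijk + (l,), 0)
            if t and l not in pqr:
                key = (l,) + pqr
                coeff[key] = sp.expand(coeff.get(key, 0) + d * t)

print({k: v for k, v in coeff.items() if v != 0})
# expected (up to term ordering):
# {(0, 1, 2, 3): a12*a23*a34 - a13*a23*a24 + a14*a23**2}
\end{verbatim}
\end{rmk}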
\smallskip \noindent {\it Proof of Theorem~\ref{th:3.16}.} By Eqs.~(\ref{r2.1}) -- (\ref{r2.4}), we have \begin{equation}\label{eq:rrrsum} [[r,r,r]]=\sum_m\sum_{p<q<r}\sum_{i<j<k}\sum_{\ttau\in \tS_3}M^{ijk}_{\ttau(p)\ttau(q)\ttau(r)}(m). \end{equation} Next we need to show \begin{equation}\label{eq:3.21} \sum_m\sum_{\ttau\in \tS_3}M^{ijk}_{\ttau(p)\ttau(q)\ttau(r)}(m)=\sum_l D^{ijk}_{pqr}T_{ijk}^{l}e_l\wedge e_p\wedge e_q\wedge e_r. \end{equation} Define an operator $\phi_{pq}:A^{\otimes m}\rightarrow A^{\otimes m}$ by $$\phi_{pq}(\sum x_1\otimes\cdots\otimes x_p\otimes\cdots\otimes x_q\otimes\cdots\otimes x_m)=\sum x_1\otimes\cdots\otimes x_q\otimes\cdots\otimes x_p\otimes\cdots\otimes x_m.$$ Obviously, $\phi_{pp}$ is the identity. Then by Eqs.~(\ref{m1.1}) -- (\ref{m1.4}), we have \begin{align*} M_{pqr}^{ijk}(2)=-\phi_{12}M_{pqr}^{ijk}(1),\;\; M_{pqr}^{ijk}(3)=\phi_{23}\phi_{12}M_{pqr}^{ijk}(1),\;\; M_{pqr}^{ijk}(4)=-\phi_{34}\phi_{23}\phi_{12}M_{pqr}^{ijk}(1). \end{align*} Moreover, by Lemma~\ref{le:3.2}~(c), we have \begin{align*} \sum_{\ttau\in \tS_3}M_{\ttau(p)\ttau(q)\ttau(r)}^{ijk}(1)&=\sum_{\ttau\in \tS_3}D^{ijk}_{\ttau(p)\ttau(q)\ttau(r)}[e_i,e_j,e_k]\otimes e_{\ttau(p)}\otimes e_{\ttau(q)}\otimes e_{\ttau(r)}\\ &=\sum_l\sum_{\ttau\in \tS_3}D^{ijk}_{\ttau(p)\ttau(q)\ttau(r)}T_{ijk}^l e_l\otimes e_{\ttau(p)}\otimes e_{\ttau(q)}\otimes e_{\ttau(r)}\\ &=\sum_l\sum_{\ttau\in \tS_3}\mathrm{sgn}(\ttau)D^{ijk}_{pqr}T_{ijk}^l e_l\otimes e_{\ttau(p)}\otimes e_{\ttau(q)}\otimes e_{\ttau(r)}\\ &=\sum_l D^{ijk}_{pqr}T_{ijk}^l e_l\otimes\big(\sum_{\ttau\in \tS_3}\mathrm{sgn}(\ttau)e_{\ttau(p)}\otimes e_{\ttau(q)}\otimes e_{\ttau(r)}\big)\\ &=\sum_l D^{ijk}_{pqr}T_{ijk}^l e_l\otimes\big(e_p\wedge e_q\wedge e_r\big). \end{align*} Therefore \begin{equation*} \sum_m\sum_{\ttau\in \tS_3}M^{ijk}_{\ttau(p)\ttau(q)\ttau(r)}(m)=\sum_l D^{ijk}_{pqr}T_{ijk}^{l}(\phi_{11}-\phi_{12}+\phi_{23}\phi_{12}-\phi_{34}\phi_{23}\phi_{12})e_l\otimes(e_p\wedge e_q\wedge e_r). \end{equation*} It remains to prove $$(\phi_{11}-\phi_{12}+\phi_{23}\phi_{12}-\phi_{34}\phi_{23}\phi_{12})e_l\otimes(e_p\wedge e_q\wedge e_r)=e_l\wedge e_p\wedge e_q\wedge e_r.$$ In fact, the right-hand side gives \begin{align*} e_l\wedge e_p\wedge e_q\wedge e_r=&\sum_{\ttau\in \tS_3}\mathrm{sgn}(\ttau)e_l\otimes e_{\ttau(p)}\otimes e_{\ttau(q)}\otimes e_{\ttau(r)}-\sum_{\ttau\in \tS_3}\mathrm{sgn}(\ttau)e_{\ttau(p)}\otimes e_l\otimes e_{\ttau(q)}\otimes e_{\ttau(r)}\\ &+\sum_{\ttau\in \tS_3}\mathrm{sgn}(\ttau)e_{\ttau(p)}\otimes e_{\ttau(q)}\otimes e_l\otimes e_{\ttau(r)}-\sum_{\ttau\in \tS_3}\mathrm{sgn}(\ttau)e_{\ttau(p)}\otimes e_{\ttau(q)}\otimes e_{\ttau(r)}\otimes e_l\\ =&(\phi_{11}-\phi_{12}+\phi_{23}\phi_{12}-\phi_{34}\phi_{23}\phi_{12})e_l\otimes\big(\sum_{\ttau\in \tS_3}\mathrm{sgn}(\ttau)e_{\ttau(p)}\otimes e_{\ttau(q)}\otimes e_{\ttau(r)}\big)\\ =&(\phi_{11}-\phi_{12}+\phi_{23}\phi_{12}-\phi_{34}\phi_{23}\phi_{12})e_l\otimes(e_p\wedge e_q\wedge e_r). \end{align*} Hence Theorem~\ref{th:3.16} holds. \hfill $\Box$ \subsection{Skew-symmetric solutions of the 3-Lie CYBE in the complex 3-Lie algebras in dimensions 3 and 4} We first consider the dimension 3 case. \begin{thm}\label{th:dim3} Let $A$ be a 3-dimensional 3-Lie algebra. Then for any $r\in\wedge^2(A)$, $[[r,r,r]]=0$. That is, any $r\in \wedge^2(A)$ is a solution of the 3-Lie CYBE in a 3-dimensional 3-Lie algebra. \end{thm} \begin{proof} By Corollary~\ref{co:3.2}, $[[r,r,r]]\in \wedge^4(A)$. On the other hand, since $\mathrm{dim}~A=3$, any element in $\wedge^4(A)$ is zero. Therefore $[[r,r,r]]=0$.
\end{proof} Next let $A$ be a 4-dimensional 3-Lie algebra with a basis $\{ e_1,\cdots,e_4\}$. Then we have \begin{equation}\label{eq:4.1} [e_i,e_j,e_k]=\sum_{m=1}^4T_{ijk}^{m} e_m, \forall i,j,k=1,\cdots, 4, \end{equation} for constants $T_{ijk}^m$. \begin{lem}\label{le:3.8} With the notations and conditions as above, assume that $T_{ijk}^l\neq0$ only if $i,j,k,l$ are distinct. Then for any skew-symmetric $r\in \wedge^2(A)$, $[[r,r,r]]=0$. \end{lem} \begin{proof} First, $e_l\wedge e_p\wedge e_q\wedge e_r\in \wedge^4 (A)$ is not zero precisely when $l,p,q,r$ are distinct. By the assumption, the indices of a nonzero $T_{ijk}^{l}$ in Eq.~(\ref{eq:3.22}) are also distinct. But $i,j,k,p,q,r,l\in \{1,2,3,4\}$ and $i<j<k,~p<q<r$ in Eq.~(\ref{eq:3.22}). Thus we have $i=p,~j=q,~k=r$. Then Eq.~(\ref{eq:3.22}) gives $$[[r,r,r]]=\sum_l\sum_{p<q<r}D^{pqr}_{pqr}T_{pqr}^{l}e_l\wedge e_p\wedge e_q\wedge e_r.$$ By Lemma~\ref{le:3.2}~(e), $[[r,r,r]]=0$. \end{proof} \begin{thm}\label{th:dim4} Let $A$ be a 4-dimensional 3-Lie algebra. If $A$ is one of the complex 3-Lie algebras of Cases~(1), (3), (4) and (7) given in Proposition~\ref{th:2.4}, then any skew-symmetric $r\in \wedge^2(A)$ satisfies $[[r,r,r]]=0$. \end{thm} \begin{proof} The proof follows directly from Lemma~\ref{le:3.8}. \end{proof} \begin{thm}\label{th:3.17} Let $A$ be a 4-dimensional 3-Lie algebra with a basis $\{e_1,e_2,e_3, e_4\}$ in Case~(2), Case~(5) or Case~(6) in Proposition~\ref{th:2.4}. Let $r=\sum\limits_{i,j} a^{ij}e_i\otimes e_j\in A\otimes A$ be skew-symmetric. \begin{enumerate} \item[(a)] If $A$ is the complex 3-Lie algebra of Case~(2), then $r$ satisfies $[[r,r,r]]=0$ if and only if $$a^{23}(a^{12}a^{34}-a^{13}a^{24}-a^{32}a^{14})=0.$$ \item[(b)] If $A$ is the complex 3-Lie algebra of Case~(5), then $r$ satisfies $[[r,r,r]]=0$ if and only if $$a^{34}(a^{12}a^{34}-a^{32}a^{14}+a^{42}a^{13})=0.$$ \item[(c)] If $A$ is the complex 3-Lie algebra of Case~(6), then $r$ satisfies $[[r,r,r]]=0$ if and only if $$a^{34}(a^{12}a^{43}+a^{32}a^{14}-a^{42}a^{13})=0.$$ \end{enumerate} \end{thm} \begin{proof} (a) For the 3-Lie algebra of Case~(2), only $T^1_{123}\neq0$. So \begin{align*} [[r,r,r]]=\sum_{p<q<r}D^{123}_{pqr}e_1\wedge e_p\wedge e_q\wedge e_r =D^{123}_{234}e_1\wedge e_2\wedge e_3\wedge e_4. \end{align*} Hence $[[r,r,r]]=0$ if and only if $D^{123}_{234}=a^{23}(a^{12}a^{34}-a^{13}a^{24}-a^{32}a^{14})=0$. (b) For the 3-Lie algebra of Case~(5), only $T_{134}^1\neq0$ and $T_{234}^2\neq0$. So \begin{align*} [[r,r,r]]&=\sum_{p<q<r}D^{134}_{pqr}e_1\wedge e_p\wedge e_q\wedge e_r+\sum_{p<q<r}D^{234}_{pqr}e_2\wedge e_p\wedge e_q\wedge e_r\\ &=D^{134}_{234}e_1\wedge e_2\wedge e_3\wedge e_4+D^{234}_{134}e_2\wedge e_1\wedge e_3\wedge e_4\\ &=(D^{134}_{234}-D^{234}_{134})e_1\wedge e_2\wedge e_3\wedge e_4. \end{align*} By Lemma~\ref{le:3.2}~(b), $D^{134}_{234}-D^{234}_{134}=2D^{134}_{234}$. Hence $[[r,r,r]]=0$ if and only if $D^{134}_{234}=a^{34}(a^{12}a^{34}-a^{32}a^{14}+a^{42}a^{13})=0$.\\ (c) For the 3-Lie algebra of Case~(6), only $T^1_{234}\neq0$, $T^2_{234}\neq0$ and $T^2_{134}\neq0$. So \begin{align*} [[r,r,r]]=&\sum_{p<q<r}D^{234}_{pqr}\,\alpha\, e_1\wedge e_p\wedge e_q\wedge e_r+\sum_{p<q<r}D^{134}_{pqr}e_2\wedge e_p\wedge e_q\wedge e_r\\ &+\sum_{p<q<r}D^{234}_{pqr}e_2\wedge e_p\wedge e_q\wedge e_r\\ =&D^{234}_{234}\,\alpha\,e_1\wedge e_2\wedge e_3\wedge e_4+D^{134}_{134}e_2\wedge e_1\wedge e_3\wedge e_4+D^{234}_{134}e_2\wedge e_1\wedge e_3\wedge e_4\\ =&D^{234}_{134}e_2\wedge e_1\wedge e_3\wedge e_4.
\end{align*} Hence $[[r,r,r]]=0$ if and only if $D^{234}_{134}=a^{34}(a^{12}a^{43}+a^{32}a^{14}-a^{42}a^{13})=0$. \end{proof} \subsection{The induced local cocycle 3-Lie bialgebras} We now provide the local cocycle 3-Lie bialgebras induced from skew-symmetric solutions of the 3-Lie CYBE. \begin{thm} \label{th:cop} Let $A$ be a 3-Lie algebra with a basis $\{ e_1,\cdots,e_n\}$. Let $r=\sum\limits_{i,j}a^{ij}e_i\otimes e_j\in A\otimes A$. Set $\Delta=\Delta_1+\Delta_2+\Delta_3: A\rightarrow A\otimes A\otimes A$, in which $\Delta_1,\Delta_2,\Delta_3$ are induced by $r$ as in Eq.~\eqref{eq:delta123}. Then \begin{equation}\label{delta1} \Delta(x)=\sum_{i<j}\sum_{p<q}(a^{ip}a^{jq}-a^{jp}a^{iq})[x,e_i,e_j]\wedge e_p\wedge e_q, \forall x\in A. \end{equation} \end{thm} \begin{proof} By Eq.~(\ref{eq:delta123}), for any $x\in A$, we have \begin{align*} \Delta_1(x)=&\sum_{i,j,p,q}a^{ip}a^{jq}[x,e_i,e_j]\otimes e_p\otimes e_q =\sum_{i\neq j}\sum_{p,q}a^{ip}a^{jq}[x,e_i,e_j]\otimes e_p\otimes e_q\\ =&\sum_{i<j}\sum_{p,q}(a^{ip}a^{jq}[x,e_i,e_j]+a^{jp}a^{iq}[x,e_j,e_i])\otimes e_p\otimes e_q\\ =&\sum_{i<j}\sum_{p,q}(a^{ip}a^{jq}-a^{jp}a^{iq})[x,e_i,e_j]\otimes e_p\otimes e_q \end{align*} Note that $a^{ip}a^{jq}-a^{jp}a^{iq}=0$ when $p=q$. Then \begin{align*} \Delta_1(x)=&\sum_{i<j}\sum_{p\neq q}(a^{ip}a^{jq}-a^{jp}a^{iq})[x,e_i,e_j]\otimes e_p\otimes e_q\\ =&\sum_{i<j}\sum_{p<q}\big((a^{ip}a^{jq}-a^{jp}a^{iq})[x,e_i,e_j]\otimes e_p\otimes e_q+(a^{iq}a^{jp}-a^{jq}a^{ip})[x,e_i,e_j]\otimes e_q\otimes e_p\big)\\ =&\sum_{i<j}\sum_{p<q}(a^{ip}a^{jq}-a^{jp}a^{iq})[x,e_i,e_j]\otimes(e_p\otimes e_q-e_q\otimes e_p)\\ =&\sum_{i<j}\sum_{p<q}(a^{ip}a^{jq}-a^{jp}a^{iq})[x,e_i,e_j]\otimes(e_p\wedge e_q). \end{align*} Due to Eq.~(\ref{eq:delta123}) again, we have \begin{align*} \Delta_2(x)=\phi_{13}\phi_{12}\Delta_1(x),\;\; \Delta_3(x)=\phi_{12}\phi_{13}\Delta_1(x). \end{align*} Therefore \begin{align*} \Delta(x)=&\Delta_1(x)+\Delta_2(x)+\Delta_3(x)\\ =&\sum_{i<j}\sum_{p<q}(a^{ip}a^{jq}-a^{jp}a^{iq})(\phi_{11}+\phi_{13}\phi_{12}+\phi_{12}\phi_{13})[x,e_i,e_j]\otimes(e_p\wedge e_q)\\ =&\sum_{i<j}\sum_{p<q}(a^{ip}a^{jq}-a^{jp}a^{iq})[x,e_i,e_j]\wedge e_p\wedge e_q. \end{align*} Hence the conclusion holds. \end{proof} Combining Theorem~\ref{thm:ybe}, Theorem~\ref{th:cop} and the results in the previous subsection, we obtain the following conclusion on local cocycle 3-Lie bialgebras. \begin{pro} Let $A$ be a 3-Lie algebra with a basis $\{ e_1,\cdots,e_n\}$. For $r=\sum\limits_{i,j}a^{ij}e_i\otimes e_j\in A\otimes A$, denote $$D^{ij}_{pq}:=a^{ip}a^{jq}-a^{jp}a^{iq},\;\;\forall i,j,p,q=1,\cdots, n.$$ Then every skew-symmetric solution of the 3-Lie CYBE in the complex 3-Lie algebras in dimension 3 and 4 gives a local cocycle 3-Lie bialgebra $(A,\Delta)$, where $\Delta$ is given by the following formula. \begin{enumerate} \item[(1)] If $A$ is the 3-dimensional 3-Lie algebra in Proposition~\ref{th:2.3}, then \begin{align*} \Delta(e_1)=D^{23}_{23}e_1\wedge e_2\wedge e_3,\;\; \Delta(e_2)=-D^{13}_{23}e_1\wedge e_2\wedge e_3,\;\; \Delta(e_3)=D^{12}_{23}e_1\wedge e_2\wedge e_3. 
\end{align*} \item[(2)] If $A$ is the 4-dimensional 3-Lie algebra of Case (1) in Proposition~\ref{th:2.4}, then \begin{align*} \Delta(e_1)=&(D^{34}_{34}-D^{24}_{24}+D^{23}_{23})e_2\wedge e_3\wedge e_4+(D^{24}_{12}-D^{24}_{13})e_1\wedge e_2\wedge e_3\\ &+(D^{23}_{12}-D^{34}_{14})e_1\wedge e_2\wedge e_4+(D^{23}_{13}-D^{24}_{14})e_1\wedge e_3\wedge e_4,\\ \Delta(e_2)=&(D^{34}_{34}-D^{13}_{13}+D^{14}_{14})e_1\wedge e_3\wedge e_4+(D^{34}_{23}-D^{14}_{12})e_1\wedge e_2\wedge e_3\\ &+(D^{34}_{24}-D^{13}_{12})e_1\wedge e_2\wedge e_4+(D^{14}_{24}-D^{13}_{23})e_2\wedge e_3\wedge e_4,\\ \Delta(e_3)=&(D^{12}_{12}-D^{24}_{24}+D^{14}_{14})e_1\wedge e_2\wedge e_4+(D^{14}_{13}-D^{24}_{23})e_1\wedge e_2\wedge e_3\\ &+(D^{12}_{13}-D^{24}_{34})e_1\wedge e_3\wedge e_4+(D^{12}_{23}-D^{14}_{34})e_2\wedge e_3\wedge e_4,\\ \Delta(e_4)=&(D^{12}_{12}-D^{13}_{13}+D^{23}_{23})e_1\wedge e_2\wedge e_3+(D^{23}_{24}-D^{13}_{14})e_1\wedge e_2\wedge e_4\\ &+(D^{23}_{34}-D^{12}_{14})e_1\wedge e_3\wedge e_4+(D^{13}_{34}-D^{12}_{24})e_2\wedge e_3\wedge e_4. \end{align*} \item[(3)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(2) in Proposition~\ref{th:2.4}, then \begin{align*} \Delta(e_1)=&D^{23}_{23}e_1\wedge e_2\wedge e_3+D^{23}_{24}e_1\wedge e_2\wedge e_4+D^{23}_{34}e_1\wedge e_3\wedge e_4,\\ \Delta(e_2)=&-D^{13}_{23}e_1\wedge e_2\wedge e_3-D^{13}_{24}e_1\wedge e_2\wedge e_4-D^{13}_{34}e_1\wedge e_3\wedge e_4,\\ \Delta(e_3)=&D^{12}_{23}e_1\wedge e_2\wedge e_3+D^{12}_{24}e_1\wedge e_2\wedge e_4+D^{12}_{34}e_1\wedge e_3\wedge e_4,\\ \Delta(e_4)=&0, \end{align*} and the parameters satisfy an additional condition $a^{23}(a^{12}a^{34}-a^{13}a^{24}-a^{32}a^{14})=0$. \item[(4)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(3) in Proposition~\ref{th:2.4}, then \begin{align*} \Delta(e_1)=&0,\\ \Delta(e_2)=&D^{34}_{23}e_1\wedge e_2\wedge e_3+D^{34}_{24}e_1\wedge e_2\wedge e_4+D^{34}_{34}e_1\wedge e_3\wedge e_4,\\ \Delta(e_3)=&-D^{24}_{23}e_1\wedge e_2\wedge e_3-D^{24}_{24}e_1\wedge e_2\wedge e_4-D^{24}_{34}e_1\wedge e_3\wedge e_4,\\ \Delta(e_4)=&D^{23}_{23}e_1\wedge e_2\wedge e_3+D^{23}_{24}e_1\wedge e_2\wedge e_4+D^{23}_{34}e_1\wedge e_3\wedge e_4. \end{align*} \item[(5)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(4) in Proposition~\ref{th:2.4}, then \begin{align*} \Delta(e_1)=&-D^{34}_{13}e_1\wedge e_2\wedge e_3-D^{34}_{14}e_1\wedge e_2\wedge e_4+D^{34}_{34}e_2\wedge e_3\wedge e_4,\\ \Delta(e_2)=&D^{34}_{23}e_1\wedge e_2\wedge e_3+D^{34}_{24}e_1\wedge e_2\wedge e_4+D^{34}_{34}e_1\wedge e_3\wedge e_4,\\ \Delta(e_3)=&(D^{14}_{13}-D^{24}_{23})e_1\wedge e_2\wedge e_3+(D^{14}_{14}-D^{24}_{24})e_1\wedge e_2\wedge e_4\\ &-D^{24}_{34}e_1\wedge e_3\wedge e_4-D^{14}_{34}e_2\wedge e_3\wedge e_4,\\ \Delta(e_4)=&(D^{23}_{23}-D^{13}_{13})e_1\wedge e_2\wedge e_3+(D^{23}_{24}-D^{13}_{14})e_1\wedge e_2\wedge e_4\\ &+D^{23}_{34}e_1\wedge e_3\wedge e_4-D^{13}_{34}e_2\wedge e_3\wedge e_4. \end{align*} Here the parameters satisfy the condition $a^{34}(a^{12}a^{34}-a^{32}a^{14}+a^{42}a^{13})=0$. 
\item[(6)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(5) in Proposition~\ref{th:2.4}, then \begin{align*} \Delta(e_1)=&D^{34}_{23}e_1\wedge e_2\wedge e_3+D^{34}_{24}e_1\wedge e_2\wedge e_4+D^{34}_{34}e_1\wedge e_3\wedge e_4,\\ \Delta(e_2)=&-D^{34}_{13}e_1\wedge e_2\wedge e_3-D^{34}_{14}e_1\wedge e_2\wedge e_4+D^{34}_{34}e_2\wedge e_3\wedge e_4,\\ \Delta(e_3)=&(D^{24}_{13}-D^{14}_{23})e_1\wedge e_2\wedge e_3+(D^{24}_{14}-D^{14}_{24})e_1\wedge e_2\wedge e_4\\ &-D^{24}_{34}e_1\wedge e_3\wedge e_4-D^{14}_{34}e_2\wedge e_3\wedge e_4.\\ \Delta(e_4)=&(D^{13}_{23}-D^{23}_{13})e_1\wedge e_2\wedge e_3+(D^{13}_{24}-D^{23}_{14})e_1\wedge e_2\wedge e_4\\ &+D^{13}_{34}e_1\wedge e_3\wedge e_4+D^{23}_{34}e_2\wedge e_3\wedge e_4. \end{align*} \item[(7)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(6) in Proposition~\ref{th:2.4}, then \begin{align*} \Delta(e_1)=&-D^{34}_{13}e_1\wedge e_2\wedge e_3-D^{34}_{14}e_1\wedge e_2\wedge e_4+D^{34}_{34}e_2\wedge e_3\wedge e_4,\\ \Delta(e_2)=&(\alpha D^{34}_{23}-D^{34}_{13})e_1\wedge e_2\wedge e_3+(\alpha D^{34}_{24}-D^{34}_{14})e_1\wedge e_2\wedge e_4\\ &+\alpha D^{34}_{34}e_2\wedge e_3\wedge e_4+D^{34}_{34}e_2\wedge e_3\wedge e_4,\\ \Delta(e_3)=&(D^{24}_{13}+D^{14}_{13}-\alpha D^{24}_{23})e_1\wedge e_2\wedge e_3+(D^{24}_{14}+D^{14}_{14}-\alpha D^{24}_{24})e_1\wedge e_2\wedge e_4\\ &+\alpha D^{24}_{34}e_1\wedge e_3\wedge e_4-(D^{24}_{34}+D^{14}_{34})e_2\wedge e_3\wedge e_4,\\ \Delta(e_4)=&(\alpha D^{23}_{23}-D^{23}_{13}-D^{13}_{13})e_1\wedge e_2\wedge e_3+(\alpha D^{23}_{24}-D^{23}_{14}-D^{12}_{14})e_1\wedge e_2\wedge e_4\\ &+\alpha D^{23}_{34}e_1\wedge e_3\wedge e_4+(D^{23}_{34}+D^{13}_{34})e_2\wedge e_3\wedge e_4, \end{align*} and the parameters satisfy the condition $a^{34}(a^{12}a^{43}+a^{32}a^{14}-a^{42}a^{13})=0$. \item[(8)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(7) in Proposition~\ref{th:2.4}, then \begin{align*} \Delta(e_1)=&(D^{24}_{12}-D^{24}_{13})e_1\wedge e_2\wedge e_3+D^{34}_{41}e_1\wedge e_2\wedge e_4\\ &-D^{24}_{14}e_1\wedge e_3\wedge e_4+(D^{34}_{34}-D^{24}_{24})e_2\wedge e_3\wedge e_4,\\ \Delta(e_2)=&(D^{34}_{23}-D^{14}_{12})e_1\wedge e_2\wedge e_3-D^{34}_{24}e_1\wedge e_2\wedge e_4\\ &+(D^{34}_{34}-D^{14}_{14})e_1\wedge e_3\wedge e_4+D^{14}_{24}e_2\wedge e_3\wedge e_4,\\ \Delta(e_3)=&(D^{14}_{13}-D^{24}_{23})e_1\wedge e_2\wedge e_3+(D^{14}_{14}-D^{24}_{24})e_1\wedge e_2\wedge e_4\\ &-D^{24}_{34}e_1\wedge e_3\wedge e_4-D^{14}_{34}e_2\wedge e_3\wedge e_4,\\ \Delta(e_4)=&(D^{12}_{12}-D^{13}_{13}+D^{23}_{23})e_1\wedge e_2\wedge e_3+(D^{23}_{24}-D^{13}_{14})e_1\wedge e_2\wedge e_4\\ &+(D^{23}_{34}-D^{12}_{14})e_1\wedge e_3\wedge e_4+(D^{13}_{34}-D^{12}_{24})e_2\wedge e_3\wedge e_4. \end{align*} \end{enumerate} \end{pro} \section{Double construction 3-Lie bialgebras and Manin triples} \label{sec:dc} In this section we classify double construction 3-Lie bialgebras for complex 3-Lie algebras in dimensions 3 and 4. We also give the corresponding Manin triples. \subsection{The double construction 3-Lie bialgebras for complex 3-Lie algebras in dimensions 3 and 4} \begin{pro}\label{pro:4.1} Let $A$ be a 3-Lie algebra with a basis $\{e_1,\cdots, e_n\}$. Let $\Delta:A\rightarrow A\otimes A\otimes A$ be a linear map. 
Set $$[e_a,e_b,e_c]=\sum_k T_{abc}^{k}e_k,\;\; \Delta(e_i)=\sum_{p,q,r}C_i^{pqr}e_p\otimes e_q\otimes e_r,\;\;\forall a,b,c, i=1,\cdots, n.$$ \begin{enumerate} \item[(1)] $\Delta$ satisfies Eq.~(\ref{eq:2.6}) if and only if the following equation holds: \begin{equation}\label{eq:constant1} \sum_i T_{abc}^{i}C_i^{pqr}=\sum_i \big(T_{bci}^{r}C_a^{pqi}+T_{cai}^{r}C_b^{pqi}+T_{abi}^{r}C_c^{pqi}\big),\;\;\forall p,q,r, a, b, c=1,\cdots, n. \end{equation} \item[(2)] $\Delta$ satisfies Eq.~(\ref{eq:2.7}) if and only if the following equation holds: \begin{equation}\label{eq:constant2} \sum_i T_{abc}^{i}C_i^{pqr}=\sum_i \big(T_{bci}^{r}C_a^{pqi}+T_{bci}^{q}C_a^{pir}+T_{bci}^{p}C_a^{iqr}\big), \;\;\forall p,q,r, a, b, c=1,\cdots, n. \end{equation} \end{enumerate} \end{pro} \begin{proof} It is obtained by a straightforward computation of Eqs.~(\ref{eq:2.6}) and (\ref{eq:2.7}) followed by comparing the coefficients. \end{proof} With the conditions and notation as above, let $\{f_1,f_2,\cdots,f_n\}$ be the dual basis of $A^{\ast}$ and $\Delta^*:A^*\otimes A^*\otimes A^*\rightarrow A^*$ be the dual map. Then a direct computation shows that \begin{equation}\label{def.c} \Delta^{\ast}(f_p\otimes f_q\otimes f_r)=\sum_i C_i^{pqr}f_i,\;\;\forall p, q, r=1,\cdots, n. \end{equation} If in addition, $\Delta^{\ast}$ defines a 3-Lie algebra structure on $A^{\ast}$, then $\Delta$ is a skew-symmetric linear map, i.e., for any permutation $\ttau$ on $\{p,q,r\}$, \begin{equation}\label{c} C_i^{\ttau(p)\ttau(q)\ttau(r)} =\textrm{sgn}(\ttau)C_i^{pqr},\;\;\forall p,q,r=1,\cdots,n. \end{equation} \begin{lem}\label{le:4.1} With the notations as above. If Eq.~(\ref{c}) holds, then $\Delta$ satisfies Eq.~(\ref{eq:2.6}) if and only if Eq.~(\ref{eq:constant1}) holds for any $p,q,r$ and $a<b<c$. \end{lem} \begin{proof} At first, we claim that the following two equations are equivalent when $p,q,r$ are fixed. \begin{align*} \sum_i T_{a_1b_1c_1}^{i}C_i^{pqr}&=\sum_i \big(T_{b_1c_1i}^{r}C_{a_1}^{pqi}+T_{c_1a_1i}^{r}C_{b_1}^{pqi}+T_{a_1b_1i}^{r}C_{c_1}^{pqi}\big),\\ \sum_i T_{a_2b_2c_2}^{i}C_i^{pqr}&=\sum_i \big(T_{b_2c_2i}^{r}C_{a_2}^{pqi}+T_{c_2a_2i}^{r}C_{b_2}^{pqi}+T_{a_2b_2i}^{r}C_{c_2}^{pqi}\big), \end{align*} where $a_2,b_2,c_2$ are obtained by permuting $a_1,b_1,c_1$. In fact, without loss of generality, we assume $a_1=b_2,b_1=a_2,c_1=c_2$. Then \begin{align*} &\sum_i T_{a_1b_1c_1}^{i}C_i^{pqr}=\sum_i \big(T_{b_1c_1i}^{r}C_{a_1}^{pqi}+T_{c_1a_1i}^{r}C_{b_1}^{pqi}+T_{a_1b_1i}^{r}C_{c_1}^{pqi}\big)\\ \Leftrightarrow &\sum_i T_{b_2a_2c_2}^{i}C_i^{pqr}=\sum_i \big(T_{a_2c_2i}^{r}C_{b_2}^{pqi}+T_{c_2b_2i}^{r}C_{a_2}^{pqi}+T_{b_2a_2i}^{r}C_{c_2}^{pqi}\big)\\ \Leftrightarrow &\sum_i (-T_{a_2b_2c_2}^{i})C_i^{pqr}=\sum_i \big((-T_{c_2a_2i}^{r})C_{b_2}^{pqi}+(-T_{b_2c_2i}^{r})C_{a_2}^{pqi}+(-T_{a_2b_2i}^{r})C_{c_2}^{pqi}\big)\\ \Leftrightarrow &\sum_i T_{a_2b_2c_2}^{i}C_i^{pqr}=\sum_i \big(T_{c_2a_2i}^{r}C_{b_2}^{pqi}+T_{b_2c_2i}^{r}C_{a_2}^{pqi}+T_{a_2b_2i}^{r}C_{c_2}^{pqi}\big). \end{align*} Furthermore, if any two of $a,b,c$ are equal, then Eq.~(\ref{eq:constant1}) holds automatically. In fact, assume $a=b$ without loss of generality. The left hand side of Eq.~(\ref{eq:constant1}) is zero because $T^i_{aac}=0$, whereas the right hand side is also zero because $T_{aci}^{r}C_a^{pqi}=-T_{aci}^{q}C_a^{pir}$ and $T_{aai}^{r}=0$. Therefore the indices $a,b,c$ should be selected distinct and the sequence of $a,b,c$ makes no difference. Hence the lemma holds. 
\end{proof} \begin{thm}\label{th:4.1} Let $A$ be the 3-dimensional 3-Lie algebra given in Proposition~\ref{th:2.3}. If a skew-symmetric linear map $\Delta:A\rightarrow A\otimes A\otimes A$ satisfies Eq.~(\ref{eq:2.6}) or Eq.~(\ref{eq:constant1}), then $\Delta=0$. Therefore there is no non-trivial double construction 3-Lie bialgebra for $A$. \end{thm} \begin{proof} With the notations as in Proposition~\ref{pro:4.1}. Fix $p,q,r$. Then by lemma~\ref{le:4.1}, we only need to consider the following three equations. \begin{align*} \sum_i T_{123}^{i}C_i^{pq1}&=\sum_i \big(T_{23i}^{1}C_{1}^{pqi}+T_{31i}^{1}C_{2}^{pqi}+T_{12i}^{1}C_{3}^{pqi}\big),\\ \sum_i T_{123}^{i}C_i^{pq2}&=\sum_i \big(T_{23i}^{2}C_{1}^{pqi}+T_{31i}^{2}C_{2}^{pqi}+T_{12i}^{2}C_{3}^{pqi}\big),\\ \sum_i T_{123}^{i}C_i^{pq3}&=\sum_i \big(T_{23i}^{3}C_{1}^{pqi}+T_{31i}^{3}C_{2}^{pqi}+T_{12i}^{3}C_{3}^{pqi}\big). \end{align*} \quad Simplifying those equations, we have \begin{align*} C_2^{pq2}+C_3^{pq3}=0,\;\; C_1^{pq2}=0,\;\; C_1^{pq3}=0. \end{align*} Since $p,q,r$ are chosen arbitrarily and $\Delta$ is skew-symmetric, this shows that $C_i^{abc}=0$, $\forall i,a,b,c=1,2,3$, i.e., $\Delta=0$. \end{proof} \begin{lem}\label{th:3} For any 3-Lie algebra in dimension 4, as displayed in Proposition~\ref{th:2.4}, the skew-symmetric linear map $\Delta:A\rightarrow A\otimes A\otimes A$ satisfying Eq.~(\ref{eq:constant1}) is given as follows (all the parameters are arbitrary constants). \begin{enumerate} \item[(1)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(1) in Proposition~\ref{th:2.4}, then \begin{equation}\label{re:1} \begin{cases} &\Delta(e_1)=ke_2\wedge e_3\wedge e_4,\\ &\Delta(e_2)=ke_1\wedge e_3\wedge e_4,\\ &\Delta(e_3)=ke_1\wedge e_2\wedge e_4,\\ &\Delta(e_4)=ke_1\wedge e_2\wedge e_3. \end{cases} \end{equation} \item[(2)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(2) in Proposition~\ref{th:2.4}, then \begin{equation}\label{re:2} \begin{cases} \Delta(e_1)=\Delta(e_4)=0,\\ \Delta(e_2)=ke_1\wedge e_2\wedge e_4+c_1e_1\wedge e_3\wedge e_4,\\ \Delta(e_3)=-ke_1\wedge e_3\wedge e_4+c_2e_1\wedge e_2\wedge e_4. \end{cases} \end{equation} \item[(3)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(3) in Proposition~\ref{th:2.4}, then \begin{equation}\label{re:3} \begin{cases} \Delta(e_1)=0,\\ \Delta(e_2)=k_1e_1\wedge e_2\wedge e_3+k_2e_1\wedge e_2 \wedge e_4+c_1e_1\wedge e_3 \wedge e_4,\\ \Delta(e_3)=k_3e_1\wedge e_2\wedge e_3-k_2e_1\wedge e_3 \wedge e_4+c_2e_1\wedge e_2 \wedge e_4,\\ \Delta(e_4)=-k_3e_1\wedge e_2\wedge e_4+k_1e_1\wedge e_3 \wedge e_4+c_3e_1\wedge e_2\wedge e_3. \end{cases} \end{equation} \item[(4)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(4) in Proposition~\ref{th:2.4}, then \begin{equation}\label{re:4} \begin{cases} \Delta(e_1)=\Delta(e_2)=0,\\ \Delta(e_3)=ke_1\wedge e_2\wedge e_3+c_1e_1\wedge e_2\wedge e_4,\\ \Delta(e_4)=-ke_1\wedge e_3\wedge e_4+c_2e_1\wedge e_2\wedge e_3. \end{cases} \end{equation} \item[(5)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(5) in Proposition~\ref{th:2.4}, then \begin{equation}\label{re:5} \begin{cases} \Delta(e_1)=\Delta(e_2)=0,\\ \Delta(e_3)=ke_1\wedge e_2\wedge e_3+c_1e_1\wedge e_2\wedge e_4,\\ \Delta(e_4)=-ke_1\wedge e_2\wedge e_4+c_2e_1\wedge e_2\wedge e_3. 
\end{cases} \end{equation} \item[(6)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(6) in Proposition~\ref{th:2.4}, then \begin{equation}\label{re:6} \begin{cases} \Delta(e_1)=\Delta(e_2)=0,\\ \Delta(e_3)=ke_1\wedge e_2\wedge e_3+c_1e_1\wedge e_2\wedge e_4,\\ \Delta(e_4)=-ke_1\wedge e_2\wedge e_4+c_2e_1\wedge e_2\wedge e_3. \end{cases} \end{equation} \item[(7)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(7) in Proposition~\ref{th:2.4}, then \begin{equation}\label{re:7} \begin{cases} \Delta(e_1)=\Delta(e_2)=\Delta(e_3)=0,\\ \Delta(e_4)=ce_1\wedge e_2\wedge e_3. \end{cases} \end{equation} \end{enumerate} \end{lem} \begin{proof} We give an explicit proof for the Case~(1) as an example and we omit the proofs for the other cases since the proofs are similar. Fix $p,q,r$. Then by Lemma~\ref{le:4.1}, we only need to consider Eq.~(\ref{eq:constant1}) whose indices $(r,a,b,c)$ are given by the following quadruples. \begin{equation} (i,1,2,3),~~~ (i,1,2,4), ~~~ (i,1,3,4),~~~ (i,2,3,4), ~~~ 1\leq i\leq 4. \label{eq:rabc} \end{equation} Let $A$ be the 3-Lie algebra of Case~(1). For the quadruples $(i,1,2,3), 1\leq i\leq 4$, we have \begin{align} (r,a,b,c)=(1,1,2,3):\;\;\;&C^{pq1}_4=C^{pq4}_1,\label{delta1.1}\\ (r,a,b,c)=(2,1,2,3):\;\;\;&C^{pq2}_4=-C^{pq4}_2,\label{delta1.2}\\ (r,a,b,c)=(3,1,2,3):\;\;\;&C^{pq3}_4=C^{pq4}_3,\label{delta1.3}\\ (r,a,b,c)=(4,1,2,3):\;\;\;&C^{pq4}_4=C^{pq1}_1+C^{pq2}_2+C^{pq3}_3.\label{delta1.4} \end{align} By Eq.~(\ref{delta1.1}), we obtain $$C^{124}_4=C^{241}_4=C^{244}_1=0\;\;{\rm and}\;\; C^{134}_4=C^{341}_4=C^{344}_1=0.$$ By Eq.~(\ref{delta1.2}), we obtain $$C^{234}_4=C^{342}_4=C^{344}_2=0.$$ Hence $C^{pq4}_4=0$, $\forall p,q=1,2,3,4$. Similarly, for the other rows of equations, we show that $$C^{pq3}_3=0,\;\;C^{pq2}_2=0,\;\; C^{pq1}_1=0,\;\; \forall p,q=1,2,3,4.$$ That is, $C^{pqr}_i=0$ if any one of $p,q,r$ equals $i$. What remain unknown in $\{C^{abc}_i|a<b<c\}$ are $C^{123}_4$, $C^{124}_3$, $C^{134}_2$ and $C^{234}_1$. By Eq.~(\ref{delta1.1}) again, we obtain $$C^{123}_4=C^{231}_4=C^{234}_1.$$ By Eq.~(\ref{delta1.2}), we obtain $$C^{123}_4=-C^{132}_4=C^{134}_2.$$ By Eq.~(\ref{delta1.3}), we obtain $$C^{123}_4=C^{124}_3.$$ Therefore \begin{equation}\label{C} C^{123}_4=C^{124}_3=C^{134}_2=C^{234}_1. \end{equation} Furthermore, it is straightforward to check that Eq.~(\ref{C}) satisfies all the 16 equations in Eq.~(\ref{eq:rabc}). Therefore $\Delta$ is determined explicitly in Eq.~(\ref{re:1}) by taking $C^{123}_4=k$. \end{proof} \begin{lem}\label{delta2} Let $A$ be a complex 3-Lie algebra with a basis $\{e_1,e_2,\cdots,e_n\}$. Let $p,q,r,s,t$ be fixed indexes, and $m_1,m_2,m_3,m_4\in \mathbb C$. Assume that $$\mathrm{ad}_{e_s,e_t}(e_p)=m_1e_p+m_4e_q,~\mathrm{ad}_{e_s,e_t}(e_q)=m_2e_q,~\mathrm{ad}_{e_s,e_t}(e_r)=m_3e_r,$$ and set $$\Phi_{e_s,e_t}=\id\otimes \id \otimes \mathrm{ad}_{e_s,e_t}+\id\otimes \mathrm{ad}_{e_s,e_t}\otimes \id +\mathrm{ad}_{e_s,e_t}\otimes \id \otimes \id .$$ Then \begin{equation}\label{Phi} \Phi_{e_s,e_t}(e_p\wedge e_q\wedge e_r)=(m_1+m_2+m_3)e_p\wedge e_q\wedge e_r. \end{equation} \end{lem} \begin{proof} Assume $m_2=m_3=m_4=0$. 
Then \begin{align} \id\otimes \id \otimes \mathrm{ad}_{e_s,e_t}(e_p\wedge e_q\wedge e_r)&=\id\otimes \id \otimes \mathrm{ad}_{e_s,e_t}(e_q\otimes e_r\otimes e_p-e_r\otimes e_q\otimes e_p)\nonumber\\ &=m_1(e_q\otimes e_r\otimes e_p-e_r\otimes e_q\otimes e_p),\label{Phi1}\\ \id\otimes \mathrm{ad}_{e_s,e_t}\otimes \id (e_p\wedge e_q\wedge e_r)&=\id\otimes \mathrm{ad}_{e_s,e_t}\otimes \id (e_r\otimes e_p\otimes e_q-e_q\otimes e_p\otimes e_r)\nonumber\\ &=m_1(e_r\otimes e_p\otimes e_q-e_q\otimes e_p\otimes e_r),\label{Phi2}\\ \mathrm{ad}_{e_s,e_t}\otimes \id \otimes \id (e_p\wedge e_q\wedge e_r)&=\mathrm{ad}_{e_s,e_t}\otimes \id \otimes \id (e_p\otimes e_q\otimes e_r-e_p\otimes e_r\otimes e_q)\nonumber\\ &=m_1(e_p\otimes e_q\otimes e_r-e_p\otimes e_r\otimes e_q).\label{Phi3} \end{align} Adding Eqs.~(\ref{Phi1}) -- (\ref{Phi3}) together, we have \begin{equation}\label{Phim1} \Phi_{e_s,e_t}(e_p\wedge e_q\wedge e_r)=m_1e_p\wedge e_q\wedge e_r. \end{equation} Assume $m_1=m_3=m_4=0$. Then \begin{equation}\label{Phim2} \Phi_{e_s,e_t}(e_p\wedge e_q\wedge e_r)=\Phi_{e_s,e_t}(-e_q\wedge e_p\wedge e_r)=-m_2e_q\wedge e_p\wedge e_r=m_2e_p\wedge e_q\wedge e_r. \end{equation} Assume $m_1=m_2=m_4=0$. Then \begin{equation}\label{Phim3} \Phi_{e_s,e_t}(e_p\wedge e_q\wedge e_r)=\Phi_{e_s,e_t}(e_r\wedge e_p\wedge e_q)=m_3e_r\wedge e_p\wedge e_q=m_3e_p\wedge e_q\wedge e_r. \end{equation} Assume $m_1=m_2=m_3=0$. Then \begin{align} \id\otimes \id \otimes \mathrm{ad}_{e_s,e_t}(e_p\wedge e_q\wedge e_r)&=\id\otimes \id \otimes \mathrm{ad}_{e_s,e_t}(e_q\otimes e_r\otimes e_p-e_r\otimes e_q\otimes e_p)\nonumber\\ &=m_4(e_q\otimes e_r\otimes e_q-e_r\otimes e_q\otimes e_q),\label{Phi4}\\ \id\otimes \mathrm{ad}_{e_s,e_t}\otimes \id (e_p\wedge e_q\wedge e_r)&=\id\otimes \mathrm{ad}_{e_s,e_t}\otimes \id (e_r\otimes e_p\otimes e_q-e_q\otimes e_p\otimes e_r)\nonumber\\ &=m_4(e_r\otimes e_q\otimes e_q-e_q\otimes e_q\otimes e_r),\label{Phi5}\\ \mathrm{ad}_{e_s,e_t}\otimes \id \otimes \id (e_p\wedge e_q\wedge e_r)&=\mathrm{ad}_{e_s,e_t}\otimes \id \otimes \id (e_p\otimes e_q\otimes e_r-e_p\otimes e_r\otimes e_q)\nonumber\\ &=m_4(e_q\otimes e_q\otimes e_r-e_q\otimes e_r\otimes e_q).\label{Phi6} \end{align} Adding Eqs.~(\ref{Phi4}) -- (\ref{Phi6}) together, we have \begin{equation}\label{Phim4} \Phi_{e_s,e_t}(e_p\wedge e_q\wedge e_r)=0. \end{equation} Since $\Phi_{e_s,e_t}$ is linear, Eqs.~(\ref{Phim1}), (\ref{Phim2}), (\ref{Phim3}) and (\ref{Phim4}) together indicate that Eq.~(\ref{Phi}) holds. \end{proof} \begin{thm} Let $A$ be one of the 4-dimensional 3-Lie algebras of Cases~(2), (5) and (6) given in Proposition~\ref{th:2.4}. Then any double construction 3-Lie bialgebra for $A$ is trivial. \end{thm} \begin{proof} By Lemma~\ref{th:3}, we need to show that for the mentioned cases, if in addition $\Delta$ satisfies Eq.~(\ref{eq:2.7}), then $\Delta=0$. Case (2): Substituting $x=e_2,y=e_2,z=e_3$ into Eq.~(\ref{eq:2.7}), we get \begin{align*} 0=\Phi_{e_2,e_3}(\Delta(e_2))=&\Phi_{e_2,e_3}(ke_1\wedge e_2\wedge e_4+c_1e_1\wedge e_3\wedge e_4) =ke_1\wedge e_2\wedge e_4+c_1e_1\wedge e_3\wedge e_4. \end{align*} Hence $k=c_1=0$. Substituting $x=e_3,y=e_2,z=e_3$ into Eq.~(\ref{eq:2.7}), we get \begin{align*} 0=\Phi_{e_2,e_3}(\Delta(e_3)) =\Phi_{e_2,e_3}(-ke_1\wedge e_3\wedge e_4+c_2e_1\wedge e_2\wedge e_4) =-ke_1\wedge e_3\wedge e_4+c_2e_1\wedge e_2\wedge e_4. \end{align*} Hence $c_2=0$. Therefore $\Delta=0$. 
Case (5): Substituting $x=e_3,y=e_3,z=e_4$ into Eq.~(\ref{eq:2.7}), we get \begin{align*} 0=\Phi_{e_3,e_4}(\Delta(e_3)) =\Phi_{e_3,e_4}(ke_1\wedge e_2\wedge e_3+c_1e_1\wedge e_2\wedge e_4) =2ke_1\wedge e_2\wedge e_3+2c_1e_1\wedge e_2\wedge e_4. \end{align*} Hence $k=c_1=0$. Substituting $x=e_4,y=e_3,z=e_4$ into Eq.~(\ref{eq:2.7}), we get \begin{align*} 0=\Phi_{e_3,e_4}(\Delta(e_4)) =\Phi_{e_3,e_4}(-ke_1\wedge e_2\wedge e_4+c_2e_1\wedge e_2\wedge e_3) =-2ke_1\wedge e_2\wedge e_4+2c_2e_1\wedge e_2\wedge e_3. \end{align*} Hence $c_2=0$. Therefore $\Delta=0$. Case (6): Substituting $x=e_3,y=e_3,z=e_4$ into Eq.~(\ref{eq:2.7}), we get \begin{align*} 0=\Phi_{e_3,e_4}(\Delta(e_3)) =\Phi_{e_3,e_4}(ke_1\wedge e_2\wedge e_3+c_1e_1\wedge e_2\wedge e_4) =ke_1\wedge e_2\wedge e_3+c_1e_1\wedge e_2\wedge e_4. \end{align*} Hence $k=c_1=0$. Substituting $x=e_4,y=e_3,z=e_4$ into Eq.~(\ref{eq:2.7}), we get \begin{align*} 0=\Phi_{e_3,e_4}(\Delta(e_4)) =\Phi_{e_3,e_4}(-ke_1\wedge e_2\wedge e_4+c_2e_1\wedge e_2\wedge e_3) =-ke_1\wedge e_2\wedge e_4+c_2e_1\wedge e_2\wedge e_3. \end{align*} Hence $c_2=0$. Therefore $\Delta=0$. \end{proof} \begin{lem}\label{C2} With the notations as in Proposition~\ref{pro:4.1}, if Eq.~(\ref{c}) holds, then $\Delta$ satisfies Eq.~(\ref{eq:2.7}) if and only if Eq.~(\ref{eq:constant2}) holds for any $a$ and $b<c,~p<q<r$. \end{lem} \begin{proof} It follows from a proof similar to the one for Lemma~\ref{le:4.1}. \end{proof} \begin{thm}\label{th:1111} Let $A$ be a 4-dimensional 3-Lie algebra with a basis $\{ e_1,\cdots,e_4\}$. \begin{enumerate} \item[(1)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(1) given in Proposition~\ref{th:2.4}, then $(A,\Delta)$ is a double construction 3-Lie bialgebra, where $\Delta$ is given by Eq.~(\ref{re:1}). \item[(2)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(3) given in Proposition~\ref{th:2.4}, then $(A,\Delta)$ is a double construction 3-Lie bialgebra, where $\Delta$ is given by Eq.~(\ref{re:3}). \item[(3)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(4) given in Proposition~\ref{th:2.4}, then $(A,\Delta)$ is a double construction 3-Lie bialgebra, where $\Delta$ is given by Eq.~(\ref{re:4}). \item[(4)] If $A$ is the 4-dimensional 3-Lie algebra of Case~(7) given in Proposition~\ref{th:2.4}, then $(A,\Delta)$ is a double construction 3-Lie bialgebra, where $\Delta$ is given by Eq.~(\ref{re:7}). \end{enumerate} \end{thm} \begin{proof} In fact, for the 4-dimensional 3-Lie algebras of Cases~(1), (3), (4) and (7), the corresponding $\Delta$ appearing in Lemma~\ref{th:3} satisfies Eq.~(\ref{eq:2.7}), too. We give an explicit proof for Case~(1) as an example and omit the proofs for the other cases since they are similar. For the 3-Lie algebra of Case~(1), fix $a,b,c$. By Lemma~\ref{C2}, we only need to consider the following four equations.
\begin{align} &(p,q,r)=(1,2,3):&\sum_i T_{abc}^{i}C_i^{123}=\sum_i \big(T_{bci}^{3}C_a^{12i}+T_{bci}^{2}C_a^{1i3}+T_{bci}^{1}C_a^{i23}\big),\label{pqr1.1}\\ &(p,q,r)=(1,2,4):&\sum_i T_{abc}^{i}C_i^{124}=\sum_i \big(T_{bci}^{4}C_a^{12i}+T_{bci}^{2}C_a^{1i4}+T_{bci}^{1}C_a^{i24}\big),\label{pqr1.2}\\ &(p,q,r)=(1,3,4):&\sum_i T_{abc}^{i}C_i^{134}=\sum_i \big(T_{bci}^{4}C_a^{13i}+T_{bci}^{3}C_a^{1i4}+T_{bci}^{1}C_a^{i34}\big),\label{pqr1.3}\\ &(p,q,r)=(2,3,4):&\sum_i T_{abc}^{i}C_i^{234}=\sum_i \big(T_{bci}^{4}C_a^{23i}+T_{bci}^{3}C_a^{2i4}+T_{bci}^{2}C_a^{i34}\big).\label{pqr1.4} \end{align} For Eq.~(\ref{pqr1.1}), the left hand side is $$\sum_iT_{abc}^iC_i^{123}=T_{abc}^4,$$ whereas the right hand side is \begin{align*} &\sum_i \big(T_{bci}^{3}C_a^{12i}+T_{bci}^{2}C_a^{1i3}+T_{bci}^{1}C_a^{i23}\big)\\ =&T_{bc3}^{3}C_a^{123}+T_{bc4}^{3}C_a^{124}+T_{bc2}^{2}C_a^{123}+T_{bc4}^{2}C_a^{143}+T_{bc1}^{1}C_a^{123}+T_{bc4}^{1}C_a^{423}\\ =&\big(T^3_{bc3}+T^2_{bc2}+T^1_{bc1}\big)C^{123}_a+T_{bc4}^{3}C_a^{124}-T_{bc4}^{2}C_a^{134}+T_{bc4}^{1}C_a^{234}\\ =&0+T_{bc4}^{3}C_a^{124}-T_{bc4}^{2}C_a^{134}+T_{bc4}^{1}C_a^{234}. \end{align*} Therefore, Eq.~(\ref{pqr1.1}) holds if and only if the following series of equations hold: \begin{align*} a=1:~~&T_{1bc}^4=0+0+T_{bc4}^1,\\ a=2:~~&T_{2bc}^4=0-T_{bc4}^2+0,\\ a=3:~~&T_{3bc}^4=T_{bc4}^3+0+0,\\ a=4:~~&0=0. \end{align*} It is straightforward to show that these equations hold for arbitrary $b,c=1,2,3,4$. Hence Eq.~(\ref{pqr1.1}) holds. Similarly, Eqs.~(\ref{pqr1.2}) -- (\ref{pqr1.4}) also hold. Therefore Eq.~(\ref{eq:2.7}) holds. Moreover, it is straightforward to check (also see the remark after this proof) that, for every $\Delta$ appearing in the conclusion, the dual $\Delta^*$ defines a 3-Lie algebra on $A^*$. Hence the conclusion holds. \end{proof} \begin{rmk}{\rm We give explicitly the 3-Lie algebra structure on the dual space $A^*$ obtained from $\Delta$ in the above double construction 3-Lie bialgebras as follows. \begin{enumerate} \item[(1)] $A$ is the 4-dimensional 3-Lie algebra of Case~(1) given in Proposition~\ref{th:2.4}. \begin{equation*} [e^{\ast}_1,e^{\ast}_2,e^{\ast}_3]^{\ast}=ke^{\ast}_4,~ [e^{\ast}_1,e^{\ast}_2,e^{\ast}_4]^{\ast}=ke^{\ast}_3,~ [e^{\ast}_1,e^{\ast}_3,e^{\ast}_4]^{\ast}=ke^{\ast}_2,~ [e^{\ast}_2,e^{\ast}_3,e^{\ast}_4]^{\ast}=ke^{\ast}_1. \end{equation*} \item[(2)] $A$ is the 4-dimensional 3-Lie algebra of Case~(3) given in Proposition~\ref{th:2.4}. \begin{align*} ~[e^{\ast}_1,e^{\ast}_2,e^{\ast}_3]^{\ast}&=k_1e^{\ast}_2+k_3e^{\ast}_3+c_3e^{\ast}_4,\\ ~[e^{\ast}_1,e^{\ast}_2,e^{\ast}_4]^{\ast}&=k_2e^{\ast}_2+c_2e^{\ast}_3-k_3e^{\ast}_4,\\ ~[e^{\ast}_1,e^{\ast}_3,e^{\ast}_4]^{\ast}&=c_1e^{\ast}_2-k_2e^{\ast}_3+k_1e^{\ast}_4. \end{align*} \item[(3)] $A$ is the 4-dimensional 3-Lie algebra of Case~(4) given in Proposition~\ref{th:2.4}. \begin{equation*} [e^{\ast}_1,e^{\ast}_2,e^{\ast}_3]^{\ast}=ke^{\ast}_3+c_2e^{\ast}_4,~ [e^{\ast}_1,e^{\ast}_2,e^{\ast}_4]^{\ast}=c_1e^{\ast}_3-ke^{\ast}_4. \end{equation*} \item[(4)] $A$ is the 4-dimensional 3-Lie algebra of Case~(7) given in Proposition~\ref{th:2.4}. \begin{equation*} [e^{\ast}_1,e^{\ast}_2,e^{\ast}_3]^{\ast}=ce^{\ast}_4. \end{equation*} \end{enumerate} It is easy to show that $(A^{\ast},[\cdot,\cdot,\cdot]^{\ast})$ in Case~(1) for $k\ne 0$ and in Case~(7) for $c\ne 0$ are respectively isomorphic to the 3-Lie algebras of the Case~(1) and Case~(3) given in Proposition~\ref{th:2.4}. 
For Cases~(3) and (4) mentioned in the remark, $(A^{\ast},[\cdot,\cdot,\cdot]^{\ast})$ is still a 3-Lie algebra.} \end{rmk} \subsection{Pseudo-metric 3-Lie algebras in dimension 8} By Theorem~\ref{thm:relations} and the results in the previous subsection, we can get the corresponding pseudo-metric 3-Lie algebras in dimension 8 (Manin triples of 3-Lie algebras) as follows. \begin{thm} Let $A$ be a 4-dimensional vector space with a basis $\{e_1,\cdots, e_4\}$ and $\{e^{\ast}_1,\cdots,e^{\ast}_4\}$ be the dual basis. On the vector space $A\oplus A^*$ define a bilinear form $(\cdot,\cdot)_{+}$ by Eq.~\eqref{eq:bf}, that is, with respect to the basis $\{e_1,\cdots,e_4, e^{\ast}_1,\cdots,e^{\ast}_4\}$, it corresponds to the matrix $ \begin{pmatrix} 0 & I_n \\ I_n & 0 \end{pmatrix}$. We get the following families of 8-dimensional pseudo-metric 3-Lie algebras $(A\oplus A^*, (\cdot,\cdot)_{+})$. \begin{enumerate} \item[(1)] The non-zero product of the 3-Lie algebra structure on $A\oplus A^*$ is given by \begin{eqnarray*} &~~[e_1,e_2,e_3]=e_4,[e_1,e_2,e_4]=e_3,[e_1,e_3,e_4]=e_2,[e_2,e_3,e_4]=e_1;\\ &~~[e^{\ast}_1,e^{\ast}_2,e^{\ast}_3]^{\ast}=ke^{\ast}_4,~[e^{\ast}_1,e^{\ast}_2,e^{\ast}_4]^{\ast}=ke^{\ast}_3,~[e^{\ast}_1,e^{\ast}_3,e^{\ast}_4]^{\ast}=ke^{\ast}_2,~ [e^{\ast}_2,e^{\ast}_3,e^{\ast}_4]^{\ast}=ke^{\ast}_1,\\ &~~[e_i,e^{\ast}_j,e^{\ast}_k]=e^{\ast}_m,~[e_i,e_j,e^{\ast}_k]=e^{\ast}_m, \end{eqnarray*} where the last equation holds for $i<j<k$ and $m$ distinct from $i,j,k$. They correspond to the double construction bialgebras $(A,\Delta)$ given in Theorem~\ref{th:1111}, where $A$ is the 3-Lie algebra of Case~(1) given in Proposition~\ref{th:2.4}. \item[(2)] The non-zero product of the 3-Lie algebra structure on $A\oplus A^*$ is given by \begin{eqnarray*} &~~[e_2,e_3,e_4]=e_1,\\ &~~[e^{\ast}_1,e^{\ast}_2,e^{\ast}_3]^{\ast}=k_1e^{\ast}_2+k_3e^{\ast}_3+c_3e^{\ast}_4,\\ &~~[e^{\ast}_1,e^{\ast}_2,e^{\ast}_4]^{\ast}=k_2e^{\ast}_2+c_2e^{\ast}_3-k_3e^{\ast}_4,\\ &~~[e^{\ast}_1,e^{\ast}_3,e^{\ast}_4]^{\ast}=c_1e^{\ast}_2-k_2e^{\ast}_3+k_1e^{\ast}_4,\\ &~~[e_1,e_2,e^{\ast}_1] = -e^{\ast}_3,[e_1,e_3,e^{\ast}_1] = e^{\ast}_2,[e_2,e_3,e^{\ast}_1] = -e^{\ast}_1,\\ &~~[e_2,e^{\ast}_1,e^{\ast}_2]=-k_1e_3-k_2e_4,[e_2,e^{\ast}_2,e^{\ast}_3]=-k_1e_1,[e_2,e^{\ast}_1,e^{\ast}_3]=k_1e_2-c_1e_4,\\ &~~[e_2,e^{\ast}_2,e^{\ast}_4]=-k_2e_1,[e_2,e^{\ast}_1,e^{\ast}_4]=k_2e_2+c_1e_3,[e_2,e^{\ast}_3,e^{\ast}_4]=-c_1e_1,\\ &~~[e_3,e^{\ast}_1,e^{\ast}_2]=-k_3e_3-c_2e_4,[e_3,e^{\ast}_2,e^{\ast}_3]=-k_3e_1,[e_3,e^{\ast}_1,e^{\ast}_3]=k_3e_2+k_2e_4,\\ &~~[e_3,e^{\ast}_2,e^{\ast}_4]=-c_2e_1,[e_3,e^{\ast}_1,e^{\ast}_4]=c_2e_2-k_2e_3,[e_3,e^{\ast}_3,e^{\ast}_4]=k_2e_1,\\ &~~[e_4,e^{\ast}_1,e^{\ast}_2]=-c_2e_3+k_3e_4,[e_4,e^{\ast}_2,e^{\ast}_3]=-c_3e_1,[e_4,e^{\ast}_1,e^{\ast}_3]=c_3e_2-k_1e_4,\\ &~~[e_4,e^{\ast}_2,e^{\ast}_4]=k_3e_1,[e_4,e^{\ast}_1,e^{\ast}_4]=-k_3e_2+k_1e_3,[e_4,e^{\ast}_3,e^{\ast}_4]=-k_1e_1. \end{eqnarray*} They correspond to the double construction bialgebras $(A,\Delta)$ given in Theorem~\ref{th:1111}, where $A$ is the 3-Lie algebra of Case~(3) given in Proposition~\ref{th:2.4}.
\item[(3)] The non-zero product of the 3-Lie algebra structure on $A\oplus A^*$ is given by \begin{eqnarray*} &~~[e_2,e_3,e_4]=e_1,[e_1,e_3,e_4]=e_2,\\ &~~[e^{\ast}_1,e^{\ast}_2,e^{\ast}_3]^{\ast}=ke^{\ast}_3+c_2e^{\ast}_4,[e^{\ast}_1,e^{\ast}_2,e^{\ast}_4]^{\ast}=c_1e^{\ast}_3-ke^{\ast}_4,\\ &~~[e_2,e_3,e^{\ast}_1]=-e^{\ast}_4,[e_3,e_4,e^{\ast}_1]=-e^{\ast}_2,[e_2,e_4,e^{\ast}_1]=e^{\ast}_3,\\ &~~[e_1,e_4,e^{\ast}_2]=e^{\ast}_3,[e_1,e_3,e^{\ast}_2]=-e^{\ast}_4,[e_3,e_4,e^{\ast}_2]=-e^{\ast}_2,\\ &~~[e_3,e^{\ast}_1,e^{\ast}_2]=-ke_3-c_1e_4,[e_4,e^{\ast}_1,e^{\ast}_2]=-c_2e_3+ke_4,\\ &~~[e_3,e^{\ast}_1,e^{\ast}_3]=ke_2,[e_4,e^{\ast}_1,e^{\ast}_3]=c_2e_2,\\ &~~[e_3,e^{\ast}_1,e^{\ast}_4]=c_1e_2,[e_4,e^{\ast}_1,e^{\ast}_4]=-ke_2,\\ &~~[e_3,e^{\ast}_2,e^{\ast}_3]=-ke_1,[e_4,e^{\ast}_2,e^{\ast}_3]=-c_2e_1,\\ &~~[e_3,e^{\ast}_2,e^{\ast}_4]=-c_1e_1,[e_4,e^{\ast}_2,e^{\ast}_4]=ke_1. \end{eqnarray*} They correspond to the double construction bialgebras $(A,\Delta)$ given in Theorem~\ref{th:1111}, where $A$ is the 3-Lie algebra of Case~(4) given in Proposition~\ref{th:2.4}. \item[(4)] The non-zero product of the 3-Lie algebra structure on $A\oplus A^*$ is given by \begin{eqnarray*} &~~[e_1,e_2,e_4]=e_3,[e_1,e_3,e_4]=e_2,[e_2,e_3,e_4]=e_1,[e^{\ast}_1,e^{\ast}_2,e^{\ast}_3]^{\ast}=ce^{\ast}_4,\\ &~~[e_1,e_3,e^{\ast}_2]=-e^{\ast}_4,[e_1,e_2,e^{\ast}_3]=-e^{\ast}_4,[e_1,e_4,e^{\ast}_2]=e^{\ast}_3,[e_1,e_4,e^{\ast}_3]=e^{\ast}_2,\\ &~~[e_2,e_3,e^{\ast}_1]=-e^{\ast}_4,[e_2,e_4,e^{\ast}_1]=e^{\ast}_3,[e_2,e_4,e^{\ast}_3]=-e^{\ast}_1,[e_3,e_4,e^{\ast}_2]=-e^{\ast}_1,\\ &~~[e_3,e_4,e^{\ast}_1]=-e^{\ast}_2,[e_4,e^{\ast}_1,e^{\ast}_2]=-ce_3,[e_4,e^{\ast}_1,e^{\ast}_3]=ce_2,[e_4,e^{\ast}_2,e^{\ast}_3]=-ce_1. \end{eqnarray*} They correspond to the double construction bialgebras $(A,\Delta)$ given in Theorem~\ref{th:1111}, where $A$ is the 3-Lie algebra of Case~(7) given in Proposition~\ref{th:2.4}. \end{enumerate} \end{thm} The proof is by a straightforward computation. \smallskip \noindent {\bf Acknowledgements.} This work was supported by the National Natural Science Foundation of China (Grant Nos.~11371178, 11425104).
The first thing the former members of rapid-i want to make clear is that their name pre-dates the wide success of R.E.M. Their name evolved out of the same expression (Rapid Eye Movement) but it was coined in 1980, about three years before the debut of R.E.M.’s album Murmur on I.R.S. Records. The point isn’t really that important except to point out that the small “i” in the name is a reference to Prince Far I, the dubbiest of the deep-dub artists to come out of 1970’s Jamaica…go through the used records racks and find a copy of one of the tuffest records of all time: Prince Far I & King Tubby, “In The House Of Vocal & Dub”. rapid-i was not a reggae band, but their respect for a wide range of artists extends to accomplished and experimental pop artists and music figures. They name artists like Mark Smith and The Maffia as well as Smith’s former band The Pop Group, Linton Kwesi Johnson, James Chance and the Contortions, James Blood Ulmer, Adrian Sherwood, King Crimson and The Sex Pistols, alongside the jazz greats. It might seem these guys were all over the map musically, but it’s clear they were more interested in musical execution and innovation than any particular genre. This interest showed up in their own music, while doing a ripping version of the funky Barney Miller theme song, written by Jack Miller and Allyn Ferguson with the killer bass line performed by Chuck Berghofer. The rapid-i version is practically note for note, not because they were anything near a “cover band”, but because, hell…why mess with something near-perfect? The changes in keys and difficult rhythm patterns of their original compositions were clever moves for them to share onstage. One might not understand exactly what they were up to, but audiences weren’t left out as if their musicianship was an “inside joke”. The band’s joy and exuberance in pulling off a slick musical move never came off as intellectual and snobbish. The audience could see their open enthusiasm and glee. The band didn’t care if its audience was classically trained, musically illiterate or astute jazz and classical musicians. They openly invited them to enjoy what they were doing. In fact, one of the apparent “inside jokes” they shared with the audience was covering the Barney Miller theme…it proved that brilliance can be found in the most mundane, unexpected places. Which brings us to audiences, or perhaps the lack of them. The early 1980’s and Seattle’s post-punk era produced some mighty fine bands that strayed from the punk formulae developed in the late 70s. For instance, how would it be possible to accurately label The Blackouts, with their weird and near-mysticism laid over an almost indescribable sound? And how could a power-pop oriented band like X-15 be referred to as “one of Seattle’s original punk bands” when, as entertaining as they were, they simply were not a punk band, and arrived on the scene from Bellingham in the 1980s, long after The Telepaths, The Lewd, The S’nots and The Mentors had established Seattle as a major outpost of West Coast punk? All of these bands (punk and post-punk) shared one thing in common: small, very loyal fan bases and audiences that mostly consisted of friends, family and like-minded musicians and fans. This is typical of what goes on in all cities, but Seattle at the time seemed so insular, and it seemed that everyone on “the scene” either knew or knew of everyone else. So it was with rapid-i.
They spent many nights at local “punk” clubs like WREX or The Gorilla Room playing to near-empty houses, to friends, family and others who actually appreciated the music. Of course the upside to this for any band is that it allows them to practice, to grow and try out new material in front of mostly open (if small) audiences. This seemingly negative situation has birthed many of the greatest pop and rock bands of the 20th and 21st century. Even today it’s difficult for friends around the globe to believe that Nirvana’s first Seattle show on April 10, 1988, at the Central Saloon was practically empty. “No one else remembers it,” says Sub Pop founder Bruce Pavitt, “because it was just me, the doorman and about three other people.” Like Nirvana, who went on to success on their own terms (at least originally), rapid-i certainly had the chops and the good nature to play the more lucrative fraternity-boy filled clubs that abounded in Seattle at the time. Their repertoire included plenty of “accessible” dance-music, but they studiously avoided falling into that “trap”. Oddly enough, unapologetically “pop” bands like The Visible Targets and X-15 also avoided playing to drunk, mostly indifferent and rowdy college crowds. As far as The Visible Targets went, they pulled in crowds, but they were far more dedicated to performing their tight, self-written music; and to be honest, a band fronted by three attractive sisters would have probably killed any chance of being taken seriously in a club full of horny young students. They would have been a novelty act, nothing more than “three hot sisters” despite their musical talent. On a side note, The Visible Targets were one of the bands that set the stage for the following generation of women involved in the riot grrrl movement. The Visible Targets’ music wasn’t the same, the lyrics not as political, but the attitude toward being taken seriously certainly was. It’s interesting to note that the aforementioned Bruce Pavitt took an early interest in The Visible Targets, as did Drew Canulette and Steve Fisk, none of them known as fans of lightweight pop. Even the Targets’ first EP was recorded in Olympia, WA…later the spiritual home of the riot grrrl movement. The odd thing is that rapid-i often attracted fewer audience members, and even though what they were doing was almost the antithesis of punk, it was probably punks, more than anyone else, who saw them in those near-empty rooms. This is not to say they had nothing in common with the punk scene. It’s also not to say they were underappreciated. Promoters and fans came to see them as solid performers even though it was hard to pigeon-hole what they were doing. That made it difficult to find appropriate opening slots for the great variety of new American and British artists touring at the time. Bands like Magazine, The Specials, John Cale, The Dead Kennedys, Pere Ubu or The Stranglers…all bands that had a high degree of popularity in the Northwest, and had played sold-out concerts in early 1980’s Seattle. None of the bands mentioned fit into neat pigeonholes either, but rapid-i wasn’t a logical choice as an opening band, no matter how inventive or oddball the headliners were. So they chose small club shows, which in the end didn’t hurt them in any way. There was one opportunity to play to a large crowd, an all-day event at Seattle’s Showbox Theater that went exceedingly well. The audience was enthusiastic and their set was one of the best of the day. So how did all of this come to be? Phil Otto and Dave Ford met at Stanford University.
Otto was working on a degree in Cultural Anthropology and Ford says he was “just hanging around”, though it’s hard to believe he was simply a slacker or couch surfer. He is by nature always on the move, always working hard to accomplish what he’s set out to do. Otto and Ford joined three fellow students (Jimmy Jett on bass, Tim Clark on rhythm guitar and Dave Latchaw on drums) to form a band called Raw Meat. Otto took on vocal duties and it’s been reported that even at this early stage (1978) Ford was already a top-notch, inventive and talented guitarist. The band found an audience on campus and in a couple of clubs in Palo Alto. They also became a part of nearby San Francisco’s burgeoning punk scene, playing famed venues like The Mabuhay Gardens and The Deaf Club. Otto often performed wearing nothing but a black skeleton painted on his body…“I was very devoted to Iggy Pop, that’s all I can say.” Otto and Ford both agree that their original tastes in music were quite different, with Ford being drawn more toward jazz, the experimental and the mélange of dissimilar sounds coming out of Britain at the time. Otto’s background in music was more “traditional”, but there’s no doubt that he used the best of it while becoming exposed to newer sounds, and he was changed quickly by exposure to punk, reggae, garage rock, etc. By the time Raw Meat were at their height both Ford and Otto were pretty much in sync musically. Although the band was closer to being punk than what we’ve come to know as new wave, Dave Ford wrote at the time, “Listening to New Wave is like having a nose job done with a jackhammer during an earthquake in a vat of boiling tar and pig intestines.” Just substitute “new wave” for “punk” and you get the idea…especially if you were there. After Otto graduated from Stanford he headed home to Everett, WA, and his parents’ home to ponder his next move. Phil and Dave had made enough of a connection at Stanford that Dave (a native of the Bay Area) followed his buddy north, where they both crashed at Phil’s parents’ house until moving to Seattle, where they decided to continue their musical pursuits. Those pursuits may have been different from those of Raw Meat, but Seattle at the time was a great place to experiment and invent, whether it was the hardcore punk of The Fartz or the incredibly dense and near incomprehensible barrage of Audio Leter (yes, that’s spelled correctly). Having decided to form a new band, Dave and Phil put a “musicians wanted” ad in The Rocket, Seattle’s all-around chronicler of the music scene. The two were incredibly fortunate when a fellow named Jerry Frink turned up. Jerry was a great drummer, but his real talent was in his mastery of all forms of percussion, whether it be congas, bongos, bell trees, marimbas or just about anything else he could hit or strike in perfect unison. The greatest, and probably most unexpected, instrument he brought into the mix was the timbale…not an instrument normally found in punk or no-wave music (outside the Contortions, perhaps), but still not a featured instrument by any means. The addition of a stand-alone percussionist offered a broad array of directions, but the band would still need a drummer behind a full kit. Terry Pollard, a drummer who’d studied music theory but had little live experience, showed up on the recommendation of Bryan Runnings, who was then running The Gorilla Room on Second Avenue. Pollard admits he didn’t have a musical agenda.
He was ready to play just about any genre as long as it presented a challenge…why waste that music theory degree? The other three were open to jazz, funk, Caribbean, African, rock and punk themes, and as they wrote new songs they took advantage of all those sounds, as well as a bit of musique concrète à la John Cage. Despite delving into some serious musical territory, there was always a sheen of fun encapsulating everything the band played. Self-seriousness was never a part of the show. In late 1980 rapid-i went into American Music studios to record four songs for a projected EP. Songs included “New Style” and “Each Second” (both featured here) as well as “Misinformation” and “Hungry People”. The two tracks here are less angular and more traditionally structured than both “Misinformation” and “Hungry People”, but there’s no doubt the other two tracks are plenty of fun, with odd (changing) time signatures and plenty of clanging (but not annoying) guitar laid over an inventive rhythm section, and of course, plenty of quirky percussion fills by Jerry Frink. Unfortunately the EP was not released at the time, and the tapes were forgotten. They finally saw the light of day in 2013, and were released as a digital download on dadastic! sounds along with an extended mix of the title song “New Style”. The EP is widely available at almost all internet download and streaming services. Take a chance! Shortly after the EP’s recording, rapid-i called it quits. Ford was ready to go back to the Bay Area and pursue a career in journalism. He became a contributor for The San Francisco Chronicle and The San Francisco Bay Guardian. He also became a yoga instructor, a vocation he still takes part in. By all accounts he’s a pretty good teacher. His students love him, and the quirky sense of humor he has always shared has made many of them actually enjoy yoga! Dave is currently living in Tampa, Florida. He still plays occasional gigs and records. After the dissolution of rapid-i, Phil Otto formed the band Steddi-5 along with Student Nurse drummer John Rogers, guitarist Tim Clark, saxophonist David Fischer and Corinne Mah on vocals. The band had a brief but successful career in Seattle, and in 1983 their song “Fame or Famine” was included in The Seattle Syndrome Volume II compilation. The song featured Jack Weaver on trumpet. Jack was the original owner of Triangle Studio, which would later be made famous when Jack Endino took it over as Reciprocal Recording. After their break-up Rogers would continue to play in Student Nurse and self-produce his weirdo-pop solo project “Sunworm”. Tim Clark had been a member of The Hurricanes, although it’s unclear if he continued with the band. Corinne Mah would return to British Columbia, where she was born. David Fischer continued to lend a hand in several productions by Marc Barreca (formerly of Young Scientist). Otto took a job teaching on the east coast, but soon found himself back in the Bay Area, where, as his profile as the head of Otto Design Group says, “Philip has been designing innovative systems for retail and living for over 20 years — beginning with his work at the Headlands Center for the Arts crafting spaces for artists Ann Hamilton, Andres Serrano and David Ireland. With a degree from Stanford in Cultural Anthropology, an MA in Human Development from Pacific Oaks College and MFA from San Francisco Art Institute — Phil brings a uniquely humanistic approach to all of his work — creating truly memorable environments and experiences for clients all over the globe”.
Jerry Frink and Terry Pollard went on to co-found The Beat Pagodas along with Terry’s brother Tim on vocals, Stanford Filarca (previously of The Spectators), and Steve Homman and Chris Anderson (besides Jerry and Terry) on various drums, percussion and vocals. The band became very successful on the Seattle club circuit, and never failed to point out that the entire band were percussionists except for Filarca, who played bass, so their rallying cry became “no guitar!” Their shows were kinetic, full of dedicated abandonment and driven by controlled chaos. The Beat Pagodas released only one EP, on Left Coast Records in 1984. They were, like rapid-i, a complete anomaly among Seattle’s then crop of rock bands; perhaps the same can be argued even today. It’s certain there have never been Seattle bands that brought across such joy in the decades since. When all is said and done, before the gloom and doom that became so fashionable in the late 80s and early 90s “grunge” era, there was a parallel universe of fun, unabashed dancing and the pull of the avant garde in Seattle’s early to mid 80s scene. We still need bands like rapid-i to remind us of the joy of both the avant garde and the mundane. Most of all, we could all use a respite from the seriousness of our times. -Dennis R. White. Sources: Dave Ford (interview with the author, September 9, 2017); Philip Otto (interview with the author, September 19, 2017); Terry Pollard (interview with the author, September 20, 2017); “rapid-i New Style” (dadastic.blogspot.com, retrieved December 29, 2017); Dave Seminara, “Chasing Kurt Cobain in Washington State” (New York Times, March 25, 2014); Dave Ford, “A Mabuhay Punker Spills His Wisdom” (The Stanford Daily, 18 May 1978); “Philip H. Otto, Primary” (ottodesigngroup.com, retrieved December 29, 2017); “Raw Meat ’78” (collegeband.com, retrieved December 29, 2017)
- Band: Leather Strap
- Case Material: Steel
- Condition: Brand New
- Movement: Automatic
Steinhart Nav B-Uhr – Automatic. Brand new with 2 year warranty. The Steinhart Nav B-Uhr automatic is a World War 2 pilot-style watch. Double-studded brown Russian tanned leather strap with white stitching and signed pin buckle. The Steinhart reference number is F0305. Dimensions are as follows: width is 44 mm excluding the crown, 49 mm including the crown, lug to lug is 53 mm and thickness is 14.5 mm. The lug width is 22 mm and the weight of the watch is 107 g.
MEET THE TEAM
Jamie - Director
Jamie is a second generation elevator installer. He started his career in commercial elevators 8 years ago, fine-tuning his understanding and skills of what is required to achieve the best product possible. Jamie's pride and satisfaction in achieving this is evident every time he sets his hands on a job. From his on-site tool set-up to his meticulous nature of wanting to be millimetre perfect with no exceptions, you will be guaranteed a quality lift with excellent communication and service throughout the whole process.
Michael - Director
With over 20 years in the home elevator industry, Michael has an extensive knowledge of the way elevators work. A qualified boilermaker with years of hydraulic experience, he is the perfect fit to install and maintain E-Lift's hydraulic elevators. With this, Michael's ability to instil confidence and deliver on what E-Lift represents is second to none.
Arthur Kipps, a junior solicitor, is summoned to attend the funeral of Mrs Alice Drablow, the sole inhabitant of Eel Marsh House, unaware of the tragic secrets which lie there, wreathed in fog and mystery. My take: 2 looks. This one was almost a stinker. The only reason for 2 looks is that the writing was very descriptive and lovely at times. However, the ghost story was just like the cover of this book: blah. Arthur was a milquetoast, annoyingly stubborn and pig-headed, the people of the town were aloof and vacant, the storyline was predictable and the ending was not at all fulfilling. The beginning of the book, which sets up the story to be told in retrospect, had great momentum. The characters seemed real and the protagonist seemed to be a thoughtful patriarch to the family. However, when the storytelling began, it went downhill. Writing in the vein of Victorian novels and intermittently rich character descriptions do not save this disappointing tale. This is not recommended and I will probably not read another by this author.
TITLE: How are the following two Chebychev's inequalities equivalent? QUESTION [4 upvotes]: I was looking at the following definition of Chebyshev's inequality $$P(|X - E(X)| \geq r) \leq \frac{Var(X)}{r^2}$$ which includes the expected value and variance of $X$, and then I discovered there's another equivalent Chebyshev's inequality, which involves the standard deviation $\sigma$ $$P(|X - E(X)| \geq r\cdot \sigma) \leq \frac{1}{r^2}$$ but I am not understanding why are these formulas equivalent. Could you please explain to me why this is the case? Note that I know what is the standard deviation. REPLY [7 votes]: Let's replace $Var(X)$ with $\sigma^2$ in the first equation to give $$P(|X - E(X)| \geq r) \leq \frac{\sigma^2}{r^2}.$$ Now suppose $k= \dfrac{r}{\sigma}$, i.e. $r = k \sigma$, and substitute to give $$P(|X - E(X)| \geq k \sigma) \leq \frac{\sigma^2}{k^2\sigma^2} = \frac{1}{k^2}$$ which is your second equation using $k$ instead of $r$. You can think of the $r$ in the first equation as having the same units as $X$ and $\sigma$, and the $r$ or $k$ in the second as being a unitless scalar multiple of the standard deviation, but ultimately they say the same thing.
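For a concrete sanity check (with numbers chosen purely for illustration): suppose $\sigma = 2$, so $Var(X) = 4$. Taking the threshold $r = 6$ in the first inequality gives $$P(|X - E(X)| \geq 6) \leq \frac{4}{36} = \frac{1}{9},$$ while in the second inequality the same threshold corresponds to $k = r/\sigma = 3$ standard deviations, and the bound is $1/k^2 = 1/9$. The two forms always produce the same bound; they differ only in whether the threshold is measured in the units of $X$ or in multiples of $\sigma$.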
The world’s largest source for producing electricity is coal. Coal generates nearly half of the electricity in the U.S.
Utwente.nl essay [email protected] Route. Onderwijs & Onderzoek. Open Dagen HBO-doorstroom Faculteiten, instituten & diensten Studentenportal Informatie voor huidige studenten. Loopbaanrollen, persoonlijkheid en sekse : de relatie met loopbaansucces. Essay (Master ) Clients: GITP. Faculty. Link to this item:. Utwente.nl essay Essay.utwente.nl is 47 years old, Alexa rank: #34444, Country: Netherlands, Last updated: Sunday, 19 April 2015. Faculty: CTW: Engineering Technology: Subject: 21 art forms: Programme: Industrial Design BSc (56955) Link to this item:. We are absolutely certain that every one is able to earn money from his website, Therefor we will display a short estimated numbers that might be achievable through. Utwente.nl is tracked by us since April, 2011. Over the time it has been ranked as high as 11 549 in the world, while most of its traffic comes from Netherlands.: Export this item as: BibTeX EndNote HTML Citation Reference Manager. 1. INTRODUCTION The business context of organizations is an ever-changing environment. May it be a modified law, an innovativeLearning Theory Essay Patrick A. de Sousa. - Essay.utwente.nl search for 9,950 results as listed below with the link list and email address for Essay.utwente.nl websiteCheck whether Essay.utwente.nl is a scam or. What’s new on Essay.utwente.nl: Check updates and related news right now. Essay U Twente is pretty active and updates frequently with 100+ articles published this. Welcome to University of Twente Research Information. We use this website to showcase our research output, information about researchers, activities.: Export this item as: BibTeX EndNote HTML Citation Reference Manager.
Automate Your Marketing
Lifecycle marketing for printers transforms your customer relationships into a buying “Lifecycle” instead of a single point of sale. We'll help you to map out automated marketing campaigns based on behavioural triggers you specify (e.g. quote prospect, estimator prospect, web-to-print prospect, etc.). While you focus on production, your "not now" and "maybe later" prospects are continually learning more about your services and offerings to help them make a purchase decision. Those who do buy will be informed automatically about your volume or seasonal promotions for repeat business. CloudPrintTech can automate all of your internal or external processes, from online proofing to purchase fulfillment.
- Send emails, direct mail and voice broadcasts automatically based on date, time, or an action such as a click or purchase.
- Drive more prospect traffic without increasing spend.
- Convert customers at a much higher rate.
- Squeeze maximum value out of your existing clients.
- Grow your referral business to have prospects excited to be your customer.
Charleston Gazette $100,000 Loss In Oil Plant Blast At Cabin Creek Two Men Seriously Injured When Filter Plant is Razed by Explosion and Subsequent Fires Rocks Countryside For Miles Around Officials Announce Plans to Rebuild Wrecked Structure at Once; One Man Thrown Through Building October 27, 1923 $100,000 Loss In Oil Plant Blast At Cabin Creek Two Men Seriously Injured When Filter Plant is Razed by Explosion and Subsequent Fires Rocks Countryside For Miles Around Officials Announce Plans to Rebuild Wrecked Structure at Once; One Man Thrown Through Building Work will be started immediately on the Cabin Creek refinery division of the Pure Oil company to repair the $100,000 damage caused yesterday morning by an explosion and a subsequent fire in the oil filter building, it was stated last night by J. J. Rhiel, general manager of the division. One man was seriously injured and four others slightly hurt when the explosion made a complete wreck of the filter plant and damaged adjoining buildings within a radius of a half mile. F. J. McConnihey, 42 years old, was badly hurt. He was inside the building when the explosion occurred and was blown through the structure. At the Mountain State hospital, this city, where he was brought by a special train, it was stated he has a fair chance to recover. The explosion, which rocked the countryside, occurred between 10 and 11 o'clock yesterday morning. It was said to have been caused by excessive pressure in the oil filters. This pressure was too great for the safety valves and the explosion was the result. General Manager Rhiel said no official investigation of the explosion had yet been made and he could not say definitely what was the cause. Mr. Rhiel is suffering with a broken eardrum as a result of the blast. Debris covers the territory surrounding the refinery for a distance of several hundred feet. The explosion was terrific. Windows were blown from houses in the vicinity of Dry Branch and the force of the blast was felt at Cabin Creek, Chelyan and other places. M. C. Pennell, 65, employe of the plant, suffered a broken arm and a broken leg as a result of the explosion. He was walking along the railroad track a short distance from the filter building when the explosion occurred. He was buried beneath two feet of brick and shattered timber. McConnihey, the most seriously injured, suffered several broken ribs and numerous surface lacerations. Physicians attending him had not ascertained last night whether he received other internal injuries. Workmen at the plant took four feet of debris from the unconscious body of McConnihey and he was rushed to the local hospital as soon as the special train could be secured. General Manager Rhiel had walked past the filter building only a minute before the explosion occurred. He was standing at the general office of the division and was knocked to the ground by the force of the blast. An instant later he recovered consciousness and crawled to a safe distance. Brick and timber were falling all around him. The company's emergency fire apparatus was brought into use, and the fire was confined to the filter house. Burning oil and gasoline hampered the volunteer fire workers, but they succeeded in bringing the blaze under control by noon. Mr. Rhiel estimated the damage at approximately $100,000. He said the loss was covered by insurance and that the filter plant, which makes possible the manufacture of certain grades of oils, will be rebuilt. 
He said a temporary arrangement will be made at once, and that oil will be filtered again within the next few days. Three men are employed in the large filter building. One of them did not report for work yesterday morning, and a second workman was not in the building at the time of the explosion. Mr. McConihey was the only occupant. It was said to have been remarkably fortunate that several persons did not lose their lives. Immediately following the explosion calls were sent to adjoining towns for medical aid, and several physicians responded. First reports to Charleston said several men had been killed. Physicians who went to the scene gave first aid to the injured men, and then ordered the special train. They were Dr. A. H. Nelson, of East Bank; Dr. R. D. Black, of Cabin Creek Junction, and Dr. Hayes, of Chelyan. According to the Rev. W. M. Tisdale, pastor of the M. E. church at Chelyan, who gave a graphic description of the accident, the force of the explosion was so great that roofs of dwellings on the company's property were torn off, holes were torn in the sides of the houses and windows were shattered. Bricks and boards were blown a great distance, and tops of automobiles parked several feet away were carried away. Windows were reported blown out at Dry Branch, three miles distant, and at Chelyan, a mile and a half away. Telephone lines also were severed. The filter house was of brick construction. It was destroyed, and a hole was blown through the wall of an adjoining building housing the wax department. About 150 men are employed at the plant. Business and Industry
JUSTA LOTTA Sexy Superheroes: Anne Hathaway - Catwoman 03/08/2013 Meow! How hot was she! If you saw Dark Knight Rises you saw Anne Hathaway show she had some sexy moves as Catwoman. As a Catwoman fan, I thought she was awesome. And you can never go wrong with a skin tight cat suit. Just sayin. Over the last 2 weeks I featured some sexy superheroes from the 70's, Isis and Wonder Woman, so this week I thought I would bring things back to the present and feature this beautiful brunette. I have a soft spot for bad girls with hearts of gold, as you can imagine, LOL. The costume looks almost reminiscent of the classic Julie Newmar one that I love from the old Batman TV show. And of course my favorite part - the ears! I have seen a few girls cosplay as this version of Catwoman over the last year and I love it. The cool thing about Catwoman is that there have been so many different versions that the character is always seen in one form or another at comic cons. Love it, love Anne Hathaway as Catwoman! Do you have a sexy superhero you would like me to feature? LMK by leaving a comment below or on twitter. Also if you have or know someone who has cosplayed as a Dark Knight Rises Catwoman and want to be featured on the site let me know! @TanyaTate
"The difficulties of life are meant to make us better, not bitter." John C. Maxwell This past month has been a blurr... I am so glad it is over and the change of the season is finally here. Our days have been just beautiful for about a week now. October is just around the corner and I will dance on it's first day, yes I will, I promised myself. I am going to borrow an idea from Linda over at LINDA'S LIFE JOURNAL and light a special candle, play some special music, and just CELEBRATE! I will probably bake a cake, make a pot of stew and some homemade rolls. That is my favorite kind of day. For today, I want to join in on a photo challenge This months challenge is "movement". This past month I did not have time to go out and look for movement so I found some photos in my photo library that I had taken over the summer. Below is my attempt/try at "Movement" with my camera. ------------------ Rain, during a passing morning early spring, looking out our front door. Then edited in picmonkey (cropped, back edges applied). -------------------- Our just-turned-2-years-old grandson running to his Papa's tractor. Edited in Picmonkey. -------------------------- A day at the Lake, looking back from our boat on a sunny day. Edited in Picmonkey, many times, and had a ball playing with some new Picmonkey features. For info on Picmonkey go to. It is free but I pay a yearly fee to be able to use their upgraded features. It makes me happy when I work/play with it! I do so enjoy the monthly photo challenge! ---------------------- I have so much to be grateful for in my little world. Thanks for coming by! As life goes... Doctor thinks he got all of my Mother's cancerous tumor removed; thehubs 2 year cancer checkup found no cancer; the EPA has totally cleaned up the acre across the street and taken many, many soil tests to ease our minds; I am still wearing my "bone growth stimulator machine" after disc replacement surgery and am feeling great. Prayers are answered! I love your idea about celebrating October! Wishing you a month of all your favorite things! Your photos were awesome ! Love the rain...... Isn't today just a BEAUTIFUL day? Thanks for your encouragement Linda! Love your photos. I use picmonkey every day! So nice to hear that all 3 of you are doing great and that something is being done to clean up the contamination you have been living with. I am so ready for October...the color has started coming here. Just 2 more days of the 80s and then we get 60s and 70s again! My favorite kind of days. Thank you Tete! The EPA has been hard to work with but they are working hard to get things tested and then taken care of. I have learned a lot about our local history and even more chemistry since they got here! That's for sure! Yea for October! glad to hear all the three positive health reports.. i am addicted to picmonkey, there is rarely a day i don't go in it many times. i love love love it and it is well worth the 33 per year.. you did the movement theme perfectly, all 3 are great examples.. love the rain...Oct 1 is my hubby's 78th birthday and the anniversary of our first kiss when he was 48... Hi Sandra, please tell Bob I said Happy Birthday! and wishes for many more! How romantic to have your first kiss on his birthday. Thank you for your guidance with my photos, i am having a good time learning my new camera. I don't take as many blurry pictures as I use to!! LOL this is my absolute favorite time of year!! i adore fall, pumpkins, mums and apple cider donuts!! 
i love to stay home, hunker down and make comfort food!! WoW...the health angels sure are sitting on everyones shoulders!!! Hi Debbie, thank you for stopping by, I love your blog! Yes, the health angels came back to visit once again. Thehubs just hit two years cancer free, so we are feeling so grateful! I am ready to eat some crock pot dinners now that it is getting cooler! Dear sweet sister friend, I am so thrilled to hear the good news on the health of you and your family. I love your movement photos...and I wish my little grandboys were still 2. bwaaaaa They were so cute and they are STILL cute. hugs, bj Hi Bj, i missed the deadline to submit my photos to the challenge but I wanted to share them anyway. Some days, weeks, months (??) are just too busy, sometimes things just have to be put on hold until other things get done. I have popped over to look at some and they are amazing, so many ideas and interpretations! Love it! Oh, I missed this post because it was published too late for the second Sat. in September. But I'm glad to see it now! These are all terrific examples of movement, and I especially love the second one. Those little ones sure can move in the hurry!
Princeton, Illinois, January 28, 2021– When reviewing worldwide acknowledged, prize-winning German Shepherd dog breeders, the name Vom Ragnar is on the short-list of those discussions. Since its beginning, Vom Ragnar German Shepherds has actually operated with the particular passion to breed, elevate, and also educate the finest quality purebred West, Long Hair, as well as Black German Shepherd pet dogs (GSDs) in the world. Their enormous popularity and also ever-growing track record within the canine reproduction area is a testimony to their success in achieving those goals. For more information about purebred show line German Shepherd for sale Over the years, Vom Ragnar has actually perfected their West German Shepherd breeding craft by creating as well as training some of the most very desired program pet dogs in the world. From their regal functions, stunning lines, as well as awesome shades to their remarkable speed, agility, as well as strength, West German Shepherds have actually taken the dog show globe by storm over the years and Vom Ragnar has actually grasped the art of breeding and training these impressive animals. While they might have gotten their begin with breeding West German Shepherds, Vom Ragnar has actually hardly hinged on those laurels. Including in their expanding list of popular German Shepherd breeds, Vom Ragnar has actually added long haired GSDs to their reproducing arsenal. For anybody trying to find a family-friendly companion as opposed to a program or functioning pet, will find their brand-new best friend for lots of satisfied healthy and balanced years to find. Lengthy hair GSDs are among the much more prominent versions of German Shepherds because of their gentle as well as loving characters as well as energetic lifestyle. Along with breeding purebred long haired GSDs, Vom Ragnar likewise concentrates on canine obedience training, changing these adorable puppies right into phenomenal canine friends for adults and kids alike. For those in the market for the genuinely extraordinary German Shepherd type, Vom Ragnar likewise specializes in reproducing and training black pure-blooded German Shepherds. Amongst the rarest variations of this spectacular canine breed, black GSDs are as lovely as they are devoted. While Vom Ragnar’s outstanding GSD reproduction and also training experience is not restricted to West, long haired, or black German Shepherds, these three types have substantially benefited from the German Shepherd experts at Vom Ragnar. The canine specialists at Vom Ragnar are fully outfitted type and train prize-winning German Shepherds for the greater Chicago location. Given that its creation, Vom Ragnar German Shepherds has actually run with the single passion to reproduce, raise, as well as train the highest high quality purebred West, Long Hair, and Black German Shepherd canines (GSDs) on the planet. Adding to their growing list of popular German Shepherd types, Vom Ragnar has actually added long haired GSDs to their reproducing repertoire. For those in the market for the really extraordinary German Shepherd type, Vom Ragnar also specializes in breeding as well as training black full-blooded German Shepherds. While Vom Ragnar’s outstanding GSD breeding and also training experience is not restricted to West, long haired, or black German Shepherds, these 3 types have considerably benefited from the German Shepherd professionals at Vom Ragnar. For more information check out showline pups for sale near Princeton, Illinois
\begin{document} \begin{abstract} The generic quadratic form of even dimension $n$ with trivial discriminant over an arbitrary field of characteristic different from~$2$ containing a square root of $-1$ can be written in the Witt ring as a sum of $2$-fold Pfister forms using $n-2$ terms and not less. The number of $2$-fold Pfister forms needed to express a quadratic form of dimension~$6$ with trivial discriminant is determined in various cases. \end{abstract} \maketitle \section*{Introduction} Throughout this paper, $k$ denotes a field of characteristic different from~$2$ in which $-1$ is a square. We use the same notation for a quadratic form over $k$ and its Witt equivalence class in the Witt ring $W(k)$. As usual, the quadratic form $\sum_{i=1}^na_iX^i$ with $a_i\in k^\times$ is denoted by $\qform{a_1,\ldots,a_n}$. Since $-1$ is a square in $k$, the form $\qform{1,\ldots,1}$ is Witt equivalent to $\qform{1}$ or $0$ according as its dimension is odd or even, hence $W(k)$ is an algebra over the field $\F_2$ with two elements. Let $I(k)$ be the fundamental ideal of $W(k)$, which consists of the Witt equivalence classes of even-dimensional quadratic forms. For any integer $m\geq1$, the $m$-th power of $I(k)$ is denoted by $I^m(k)$. We say a quadratic form is in $I^m(k)$ if its Witt equivalence class is in $I^m(k)$. It is well-known that for any $m\geq1$ the ideal $I^m(k)$ is generated as a group by the classes of $m$-fold Pfister forms, i.e., quadratic forms of the following type: \[ \pform{a_1,\ldots,a_m}=\qform{1,a_1}\otimes\cdots\otimes \qform{1,a_m}, \] see \cite[Prop.~X.1.2]{L}. Brosnan, Reichstein, and Vistoli \cite{BRV} define the \emph{$m$-Pfister number} $\Pf_m(q)$ of a quadratic form $q\in I^m(k)$ as the least number of terms in a decomposition of its Witt equivalence class into a sum of $m$-fold Pfister forms. For $m$, $n\geq1$, the \emph{$(m,n)$-Pfister number} $\Pf_k(m,n)$ is defined as the supremum of the $m$-Pfister numbers $\Pf_m(q)$ where $q$ runs over the quadratic forms of dimension $n$ in $I^m(K)$, as $K$ varies over field extensions of $k$. In \cite{BRV}, Pfister numbers are studied in connection with the essential dimension of algebraic groups. A related invariant was defined by Parimala and Suresh in \cite{PS} (see also Kahn's paper \cite{B}): the \emph{length} $\lambda_m(q)$ of a quadratic form $q\in I^m(k)$ is the least integer $r$ for which there exist $m$-fold Pfister forms $\pi_1$, \ldots, $\pi_r$ such that $q\equiv\pi_1+\cdots+\pi_r\bmod I^{m+1}(k)$. In \cite{PS}, the length of quadratic forms was studied with reference to the $u$-invariant of fields and some bounds were given for the length of quadratic forms in $I^m(k)$, $1 \leq m \leq 3$. Clearly, $\lambda_m(q)\leq\Pf_m(q)$. The following bounds were given in \cite{BRV} for Pfister numbers of forms in $I(k)$ and $I^2(k)$ (see also Proposition~\ref{prop:BRV} below): \begin{BRV}[{\cite[Prop.~14]{BRV}}] $\Pf_k(1,n)\leq n$ and $\Pf_k(2,n)\leq n-2$. \end{BRV} Zinovy Reichstein raised the following question: Is the estimate for the $2$-Pfister number in the proposition sharp, i.e., is $\Pf_k(2,n)=n-2$? In this paper we answer Reichstein's question in the affirmative by showing that the ``generic'' quadratic form $q_0$ of dimension~$n$ with trivial discriminant satisfies $\Pf_2(q_0)=n-2$, see Theorem~\ref{thm:3} and Corollary~\ref{cor:main}. 
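For a first illustration of this invariant (a simple example, not needed in the sequel): since $-1$ is a square in $k$, the form $\qform{1,1}$ is hyperbolic, so in $W(k)$ we have $\qform{a,b,ab,c,d,cd}=\pform{a,b}+\pform{c,d}$ for all $a$, $b$, $c$, $d\in k^\times$; hence every quadratic form of this shape lies in $I^2(k)$ and has $2$-Pfister number at most~$2$.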
Note that for any quadratic form $q$ of dimension~$n$ in $I^2(k)$ we have $\lambda_2(q)\leq\frac{n-2}{2}$ (cf.\ \cite[Prop.~1.1]{B}); therefore for the generic form $q_0$ the inequality $\lambda_2(q_0)\leq\Pf_2(q_0)$ is strict. The proof of Theorem~\ref{thm:3} is easily derived from a discussion of a combinatorial analogue of Pfister numbers in \S\ref{sec:combi}. In the last section (\S\ref{sec:low}), which is essentially independent from \S\S\ref{sec:combi} and \ref{sec:generic}, we give some computations of Pfister numbers of quadratic forms of dimension~$6$. We are indebted to Zinovy Reichstein for his comments on a first version of this paper, which helped us to improve the wording in several points, and also to Detlev Hoffmann for suggesting an alternative proof of Theorem~\ref{thm:3}. Ideas from this alternative proof were used to simplify our original arguments. \section{A combinatorial analogue} \label{sec:combi} Let $V$ be an arbitrary vector space over the field $\F_2$ with $2$ elements. We consider the group algebra $\F_2[V]$ as a combinatorial analogue of the Witt ring of a field. (Indeed, the Witt ring of any field $k$ of characteristic different from $2$ containing a square root of $-1$ is a homomorphic image of $\F_2[k^\times/k^{\times2}]$, see \S\ref{sec:generic}.) Since the addition in $V$ is multiplication in $\F_2[V]$, it is convenient to denote by $X^v$ the image of $v\in V$ in $\F_2[V]$; thus \[ \F_2[V]=\Bigl\{\sum_{v\in V}\alpha_vX^v\mid\alpha_v\in\F_2 \text{ and $\{v\in V\mid \alpha_v\neq0\}$ is finite}\Bigr\}, \] with \[ X^0=1\qquad\text{and}\qquad X^u\cdot X^v=X^{u+v} \quad\text{for $u$, $v\in V$}. \] We consider the group homomorphisms \[ \varepsilon_0\colon\F_2[V]\to\F_2,\qquad \varepsilon_1\colon\F_2[V]\to V \] defined by \[ \varepsilon_0\Bigl(\sum_{v\in V}\alpha_vX^v\Bigr)=\sum_{v\in V}\alpha_v,\qquad \varepsilon_1\Bigl(\sum_{v\in V}\alpha_vX^v\Bigr)=\sum_{v\in V}\alpha_vv. \] Thus, $\varepsilon_0$ is the augmentation map. We denote its kernel by $I[V]$. It is an ideal since $\varepsilon_0$ is a ring homomorphism, and it is generated as a group by elements of the form $1+X^v$ for $v\in V$, which we call \emph{$1$-fold Pfister elements}. For $m\geq1$, the products \[ (1+X^{v_1})\cdots(1+X^{v_m})\in \F_2[V] \] with $v_1$, \ldots, $v_m\in V$ are called \emph{$m$-fold Pfister elements}. They span the $m$-th power of $I[V]$, which we denote by $I^m[V]$ to mimic the Witt ring notation. Observe that $0$ is an $m$-fold Pfister element for all $m$, since the product above is $0$ if $v_1=0$. For $\xi=\sum_{v\in V}\alpha_vX^v\in\F_2[V]$ we define the \emph{support} of $\xi$ by \[ D(\xi)=\{v\in V\mid\alpha_v=1\}\subseteq V. \] This notation is inspired by the usual notation for the set of represented values of a quadratic form. (See the proof of Theorem~\ref{thm:3} below for an example of a field $E$ such that $W(E)$ can be identified with a group algebra $\F_2[V]$ in such a way that the support of any $\xi\in\F_2[V]$ is the set of represented values of the corresponding anisotropic quadratic form.) \begin{lem} \label{lem:Pfelem} Let $\xi\in \F_2[V]$ be a nonzero element, and let $d=\lvert D(\xi)\rvert$ be the cardinality of the support of $\xi$. \begin{enumerate} \item[(i)] If $\xi\in I[V]$, then $d\geq2$ and there are $1$-fold Pfister elements $\pi_1$, \ldots, $\pi_p$ such that \[ \xi=\pi_1+\cdots+\pi_p\qquad\text{and}\qquad p\leq d. \] If moreover $0\in D(\xi)$, the same property holds with $p\leq d-1$. 
\item[(ii)] If $\xi\in I[V]$ and $\varepsilon_1(\xi)=0$, then $d\geq4$ and there exist $2$-fold Pfister elements $\pi_1$, \ldots, $\pi_p$ such that \[ \xi=\pi_1+\cdots+\pi_p\qquad\text{and}\qquad p\leq d-2. \] If moreover $0\in D(\xi)$, the same property holds with $p\leq d-3$. \end{enumerate} \end{lem} \begin{proof} (i) We have $d\neq0$ since $\xi\neq0$, and $d$ is even since $\varepsilon_0(\xi)\equiv d\bmod2$ and $\xi\in I[V]$. Therefore, we have \[ \xi=\sum_{v\in D(\xi)}X^v=\sum_{v\in D(\xi)}(1+X^v), \] proving that $\xi$ is a sum of $d$ terms that are $1$-fold Pfister elements. If $0\in D(\xi)$, one of these terms vanishes since $1+X^0=0$. Thus, (i) is proved. (ii) Suppose now $\xi\in I[V]$ and $\varepsilon_1(\xi)=0$. As in case~(i), $d$ is even. If $d=2$, the condition $\varepsilon_1(\xi)=0$ yields $\xi=0$. Therefore, $d\geq4$. The other assertions are proved by induction on $d$. Suppose first $0\in D(\xi)$. Since $d\geq4$ we may find in $D(\xi)$ two distinct nonzero vectors $u$, $v$. Define \[ \xi'=(1+X^u)(1+X^v)+\xi. \] We have $\xi'\in I[V]$ and $\varepsilon_1(\xi')=0$. Moreover, \[ D(\xi')\subseteq(D(\xi)\setminus\{0,u,v\})\cup\{u+v\}, \] hence $\lvert D(\xi')\rvert\leq d-2$. By induction, there exist $2$-fold Pfister elements $\pi_1$, \ldots, $\pi_p$ such that \[ \xi'=\pi_1+\cdots+\pi_p\qquad\text{and}\qquad p\leq d-4, \] ($\xi'=0$ if $d=4$). Then \[ \xi=\pi_1+\cdots+\pi_p+(1+X^u)(1+X^v) \] and the number of terms on the right side is at most $d-3$. If $0\notin D(\xi)$ we may still define $\xi'$ as above, and we have \[ 0\in D(\xi')\subseteq(D(\xi)\setminus\{u,v\})\cup\{0,u+v\}, \] hence $\lvert D(\xi')\rvert\leq d$. The arguments above show that there exist $2$-fold Pfister elements $\pi_1$, \ldots, $\pi_p$ such that \[ \xi'=\pi_1+\cdots+\pi_p\qquad\text{and}\qquad p\leq d-3. \] Then \[ \xi=\pi_1+\cdots+\pi_p+(1+X^u)(1+X^v) \] and the number of terms on the right side is at most $d-2$. \end{proof} \begin{cor} \label{cor:I2} $I^2[V]=\{\xi\in I[V]\mid \varepsilon_1(\xi)=0\}$. \end{cor} \begin{proof} Lemma~\ref{lem:Pfelem}(i) shows that $I[V]$ is spanned by $1$-fold Pfister elements, hence $I^2[V]$ is generated as a group by $2$-fold Pfister elements. Since these elements lie in the kernel of $\varepsilon_1$, it follows that \[ I^2[V]\subseteq\{\xi\in I[V]\mid \varepsilon_1(\xi)=0\}. \] The reverse inclusion readily follows from Lemma~\ref{lem:Pfelem}(ii). \end{proof} For $\xi\in I^m[V]$ we define the $m$-Pfister number $\Pf_m(\xi)$ as the minimal number of terms in a decomposition of $\xi$ as a sum of $m$-fold Pfister elements. In particular, $\Pf_m(0)=0$ for all $m\geq1$. \begin{prop} \label{prop:Pf1} For every $\xi\in I[V]$ we have \[ \Pf_1(\xi)= \begin{cases} \lvert D(\xi)\rvert&\text{if $0\notin D(\xi)$},\\ \lvert D(\xi)\rvert-1&\text{if $0\in D(\xi)$}. \end{cases} \] \end{prop} \begin{proof} Let $p=\Pf_1(\xi)$. Suppose $v_1$, \ldots, $v_p\in V$ are nonzero vectors such that \[ \xi=\sum_{i=1}^p(1+X^{v_i})=p+\sum_{i=1}^p X^{v_i}. \] Then $D(\xi)\subseteq\{0,v_1,\ldots,v_p\}$, hence \[ \lvert D(\xi)\rvert\leq \begin{cases} p&\text{if $0\notin D(\xi)$},\\ p+1&\text{if $0\in D(\xi)$}. \end{cases} \] The reverse inequality follows from Lemma~\ref{lem:Pfelem}. \end{proof} We now turn to $2$-Pfister numbers. \relax From Lemma~\ref{lem:Pfelem} it follows that for $\xi\neq0$ in $I^2[V]$, \begin{equation} \label{eq:upb} \Pf_2(\xi)\leq\lvert D(\xi)\rvert-2,\quad\text{and}\quad \Pf_2(\xi)\leq\lvert D(\xi)\rvert-3\text{ if $0\in D(\xi)$}. 
\end{equation} In the rest of this section, we explicitly construct elements for which the upper bound is reached. The following general observation is crucial for the proof: every linear map $\varphi\colon V\to W$ between $\F_2$-vector spaces induces a ring homomorphism $\varphi_*\colon\F_2[V]\to\F_2[W]$ by \begin{equation} \label{eq:phi} \varphi_*\Bigl(\sum_{v\in V}\alpha_vX^v\Bigr)= \sum_{v\in V} \alpha_vX^{\varphi(v)}. \end{equation} The homomorphism $\varphi_*$ maps $1$-fold Pfister elements in $\F_2[V]$ to (possibly zero) $1$-fold Pfister elements in $\F_2[W]$, hence also $m$-fold Pfister elements in $\F_2[V]$ to $m$-fold Pfister elements in $\F_2[W]$, for every $m\geq1$. Consequently, for every $\xi\in I^m[V]$ we have $\varphi_*(\xi)\in I^m[W]$ and \begin{equation*} \Pf_m\bigl(\varphi_*(\xi)\bigr)\leq\Pf_m(\xi). \end{equation*} Now, let $V$ be an $\F_2$-vector space of finite dimension $n>1$, and let $e=(e_i)_{i=1}^n$ be a base of $V$. We define $e_0=\sum_{i=1}^ne_i$ and \[ \xi_e=n+1+\sum_{i=0}^nX^{e_i}\in\F_2[V]. \] It is readily verified that $\varepsilon_0(\xi)=0$ and $\varepsilon_1(\xi)=0$, so $\xi_e\in I^2[V]$, and the support of $\xi_e$ is \[ D(\xi_e)= \begin{cases} \{0,e_0,e_1,\ldots,e_n\}&\text{if $n$ is even},\\ \{e_0,e_1,\ldots,e_n\}&\text{if $n$ is odd}. \end{cases} \] Therefore, \eqref{eq:upb} yields the same inequality when $n$ is odd or even: \begin{equation} \label{eq:upb1} \Pf_2(\xi_e)\leq n-1. \end{equation} The following proposition shows that $\Pf_2(\xi_e)$ reaches the bound in \eqref{eq:upb}. \begin{prop} \label{prop:Pf2} $\Pf_2(\xi_e)=n-1$. \end{prop} \begin{proof} We use induction on $n$. If $n=2$, we have \[ \xi_e=(1+X^{e_1})(1+X^{e_2}), \] so $\Pf_2(\xi_e)=1$. If $n=3$, then \[ \xi_e=X^{e_1}+X^{e_2}+X^{e_3}+X^{e_1}X^{e_2}X^{e_3}. \] This element is not a $2$-fold Pfister element since $0\notin D(\xi_e)$, hence $\Pf_2(\xi_e)>1$. On the other hand, $\Pf_2(\xi_e)\leq2$ by \eqref{eq:upb1}, hence $\Pf_2(\xi_e)=2$. For the rest of the proof, suppose $n>3$. Let $p=\Pf_2(\xi_e)$ and let $\pi_1$, \ldots, $\pi_p$ be $2$-fold Pfister elements such that \begin{equation} \label{eq:xi} \xi_e=\pi_1+\cdots+\pi_p. \end{equation} We have $D(\xi_e)\subseteq\bigcup_{i=1}^pD(\pi_i)$, hence $e_n\in D(\pi_i)$ for some $i=1$, \ldots, $p$. Renumbering, we may assume $e_n\in D(\pi_p)$, hence \begin{equation} \label{eq:pip} \pi_p=1+X^{e_n}+X^v+X^{e_n+v}\qquad\text{for some $v\in V$}. \end{equation} Let $W\subseteq V$ be the $\F_2$-span of $e_1$, \ldots, $e_{n-1}$, and let $f_0=\sum_{i=1}^{n-1}e_i\in W$. Clearly, $f=(e_i)_{i=1}^{n-1}$ is a base of $W$, and the element $\xi_{f}\in\F_2[W]$ built on the same model as $\xi_e$ is \[ \xi_f=n+X^{f_0}+\sum_{i=1}^{n-1}X^{e_i}. \] Consider the linear map $\varphi\colon V\to W$ defined by \[ \varphi(e_i)= \begin{cases} e_i&\text{for $i=1$, \ldots, $n-1$},\\ 0&\text{for $i=n$}. \end{cases} \] The ring homomorphism $\varphi_*\colon\F_2[V]\to\F_2[W]$ induced by $\varphi$ as in \eqref{eq:phi} above satisfies $\varphi_*(X^{e_n})=1$. Since $\varphi(e_0)=f_0$, it follows that $\varphi_*(\xi_e)=\xi_{f}$, hence \eqref{eq:xi} yields \[ \xi_f=\varphi_*(\pi_1)+\cdots+\varphi_*(\pi_p). \] In view of \eqref{eq:pip}, we have $\varphi_*(\pi_p)=0$, hence the preceding equation shows that $\Pf_2(\xi_f)\leq p-1$. Since $\dim W=n-1$, the induction hypothesis yields $\Pf_2(\xi_f)=n-2$, hence $n-1\leq p$. The reverse inequality holds by \eqref{eq:upb1}, hence the proposition is proved. 
\end{proof} \section{Pfister numbers of generic forms} \label{sec:generic} Let $k$ be an arbitrary field of characteristic different from $2$ containing a square root of $-1$, and let $V_k=k^\times/k^{\times2}$ be the group of square classes in $k$, which we view as an $\F_2$-vector space. The map \[ \Psi\colon V_k\to W(k) \] defined by $\Psi(a\,k^{\times2})=\qform{a}$ for $a\in k^\times$ is multiplicative, hence it induces a surjective $\F_2$-algebra homomorphism \[ \Psi_*\colon\F_2[V_k]\to W(k). \] The map $\Psi_*$ carries $1$-fold Pfister elements in $\F_2[V_k]$ to $1$-fold Pfister forms in $W(k)$, hence also $m$-fold Pfister elements to $m$-fold Pfister forms for all $m\geq1$. Therefore, $\Psi_*(I^m[V_k])=I^m(k)$ and we have \begin{equation} \label{eq:PfPsi} \Pf_m\bigl(\Psi_*(\xi)\bigr)\leq\Pf_m(\xi) \qquad\text{for all $\xi\in I^m[V_k]$}. \end{equation} We may then use Lemma~\ref{lem:Pfelem} to give a short proof of Proposition~14 of \cite{BRV}, including a minor refinement: \begin{prop} \label{prop:BRV} Let $q$ be a quadratic form of dimension $n$ over a field $k$ containing a square root of $-1$. \begin{enumerate} \item[(i)] If $q\in I(k)$, then $\Pf_1(q)\leq n$. If moreover $q$ represents $1$, then $\Pf_1(q)\leq n-1$. \item[(ii)] If $q\in I^2(k)$, then $\Pf_2(q)\leq n-2$. If moreover $q$ represents $1$, then $\Pf_2(q)\leq n-3$. \end{enumerate} \end{prop} \begin{proof} Let $q=\qform{a_1,\ldots, a_n}$. Consider then \[ \xi=(a_1\,k^{\times2})+\cdots+(a_n\,k^{\times2})\in\F_2[V_k]. \] We have $\Psi_*(\xi)=q$ and $D(\xi)=\{a_1\,k^{\times2},\ldots, a_n\,k^{\times2}\}$, so $\lvert D(\xi)\rvert\leq n$. If $q\in I(k)$, then $n$ is even hence $\xi\in I[V_k]$. Lemma~\ref{lem:Pfelem}(i) then yields $\Pf_1(\xi)\leq n$, and by \eqref{eq:PfPsi} it follows that $\Pf_1(q)\leq n$. If $q$ represents~$1$, then we may assume $a_1=1$, hence $D(\xi)$ contains the zero element of $V_k$. Lemma~\ref{lem:Pfelem}(i) then yields $\Pf_1(\xi)\leq n-1$, and by \eqref{eq:PfPsi} it follows that $\Pf_1(q)\leq n-1$. If $q\in I^2(k)$, then $a_1\ldots a_n\in k^{\times2}$ hence $\varepsilon_1(\xi)=0$. By Corollary~\ref{cor:I2} we have $\xi\in I^2[V_k]$, and Lemma~\ref{lem:Pfelem}(ii) yields $\Pf_2(\xi)\leq n-2$. Therefore, by \eqref{eq:PfPsi} we get $\Pf_2(q)\leq n-2$. Again, if $q$ represents~$1$ we may assume $0\in D(\xi)$, and the preceding inequalities can be strengthened to \[ \Pf_2(q)\leq\Pf_2(\xi)\leq n-3. \] \end{proof} For the rest of this section, fix an arbitrary integer $n\geq2$. Consider $n$ independent indeterminates $x_1$, \ldots, $x_n$ over $k$ and let \[ x_0=x_1\cdots x_n. \] Over the field $K=k(x_1,\ldots,x_n)$, we consider the following quadratic forms: \begin{align*} q & =\qform{x_1,\ldots,x_n},& q_0& =\qform{x_0,x_1,\ldots, x_n},\\ q'& =\qform{1,x_1,\ldots, x_n},& q'_0& =\qform{1,x_0,x_1,\ldots, x_n}. \end{align*} If $n$ is even, then $q\in I(K)$ and $q'_0\in I^2(K)$. If $n$ is odd, then $q'\in I(K)$ and $q_0\in I^2(K)$. \begin{thm} \label{thm:3} If $n$ is even, then \[ \Pf_1(q)=n\qquad\text{and}\qquad\Pf_2(q'_0)=n-1. \] If $n$ is odd, then \[ \Pf_1(q')=n\qquad\text{and}\qquad\Pf_2(q_0)=n-1. \] \end{thm} \begin{proof} Let $k_{\text{alg}}$ be an algebraic closure of $k$. Embed $k$ in the field of iterated Laurent series $E=k_{\text{alg}}((x_1))\cdots((x_n))$. 
Applying Springer's theorem in \cite[Cor.~VI.1.7]{L} recursively, we obtain a ring isomorphism \begin{equation*} \Theta\colon\,W(E)\stackrel{\sim}{\to} \F_2[(\mathbb{Z}/2\mathbb{Z})^n], \end{equation*} which maps $W(k_{\text{alg}})$ onto $\F_2$ and maps the quadratic form $\qform{x_i}$ to $X^{e_i}$, where $e_i$ is the $i$-th element in the standard base of $(\mathbb{Z}/2\mathbb{Z})^n$ as an $\F_2$-vector space, for $i=1$, \ldots, $n$. Note that the $(x_1,\ldots,x_n)$-adic valuation on $E$ yields an isomorphism \[ V_E=E^{\times}/E^{\times2}\simeq(\mathbb{Z}/2\mathbb{Z})^n \] which maps $x_i\,E^{\times2}$ to $e_i$ for $i=1$, \ldots, $n$. Using this isomorphism as an identification, we may view $\Theta$ as the inverse map of $\Psi_*\colon\F_2[V_E]\to W(E)$, which is thus an isomorphism in this case. Letting $e_0=\sum_{i=1}^ne_i$, we have \begin{align*} \Theta(q_E) & =\sum_{i=1}^n X^{e_i},& \Theta(q_{0E}) & =\sum_{i=0}^nX^{e_i},\\ \Theta(q'_E) & =1+\sum_{i=1}^n X^{e_i},& \Theta(q'_{0E}) & =1+\sum_{i=0}^nX^{e_i}, \end{align*} hence in the notation of \S\ref{sec:combi} with $V=(\mathbb{Z}/2\mathbb{Z})^n$, we have \[ \xi_e= \begin{cases} \Theta(q_{0E})&\text{if $n$ is odd},\\ \Theta(q'_{0E})&\text{if $n$ is even}. \end{cases} \] The isomorphism $\Theta$ maps $m$-fold Pfister forms in $W(E)$ to $m$-fold Pfister elements in $\F_2[V]$, hence it preserves $m$-Pfister numbers. Therefore, Proposition~\ref{prop:Pf2} yields \[ \Pf_2(q_{0E})=n-1\quad\text{if $n$ is odd}\qquad\text{and}\qquad \Pf_2(q'_{0E})=n-1\quad\text{if $n$ is even}. \] Similarly, Proposition~\ref{prop:Pf1} yields \[ \Pf_1(q_{E})=n\quad\text{if $n$ is even}\qquad\text{and}\qquad \Pf_1(q'_{E})=n\quad\text{if $n$ is odd}. \] Since scalar extension from $K$ to $E$ carries $m$-fold Pfister forms to $m$-fold Pfister forms, the $m$-Pfister number cannot increase under extension to $E$; it follows that \[ \Pf_1(q)\geq n\quad\text{and}\quad\Pf_2(q'_0)\geq n-1\qquad\text{if $n$ is even}, \] \[ \Pf_1(q')\geq n\quad\text{and}\quad\Pf_2(q_0)\geq n-1\qquad\text{if $n$ is odd}. \] The reverse inequalities follow from Proposition~\ref{prop:BRV}. \end{proof} \begin{cor} \label{cor:main} $\Pf_k(1,m)=m$ for any even integer $m\geq2$ and $\Pf_k(2,m)=m-2$ for any even integer $m\geq4$. \end{cor} \begin{proof} For $m$ even, $m\geq2$, the form $q$ above (with $n=m$) has dimension $m$ and satisfies $q\in I(K)$ and $\Pf_1(q)=m$, so $\Pf_k(1,m)\geq m$. Similarly, for $m$ even, $m\geq4$, the form $q_0$ above (with $n=m-1$) has dimension $m$ and satisfies $q_0\in I^2(K)$ and $\Pf_2(q_0)=m-2$, so $\Pf_k(2,m)\geq m-2$. The reverse inequalities follow from \cite[Prop.~14]{BRV} (see the Introduction or Proposition~\ref{prop:BRV}). \end{proof} \begin{rem} \label{rem} A form of the same type as $q'_0$, hence with known $2$-Pfister number, can be obtained by scaling $q_0$: we have \[ \qform{x_1}q_0=\qform{1,x_1x_2,\ldots, x_1x_n,x_0x_1} \] and $x_1x_2$, \ldots, $x_1x_n$ may be regarded as independent indeterminates. If $n$ is odd we have \[ x_0x_1\equiv(x_1x_2)\cdots(x_1x_n)\bmod K^{\times2}, \] hence $\qform{x_1}q_0$ is isometric to a quadratic form like $q'_0$ in the indeterminates $x_1x_2$, \ldots, $x_1x_n$. Embedding $K$ in $E$ as in the proof of Theorem~\ref{thm:3}, we obtain $\Pf_2(\qform{x_1}q_0)=n-2$. Details are left to the reader. \end{rem} \section{Low-dimensional forms} \label{sec:low} Let $k$ be an arbitrary field of characteristic different from~$2$ containing a square root of $-1$. In this section, we obtain some information on the $2$-Pfister number of quadratic forms of dimension~$4$ or $6$ over $k$.
\medbreak\par The case of anisotropic quadratic forms $q\in I^2(k)$ of dimension~$4$ is clear: if $q$ represents~$1$, then $q$ is a $2$-fold Pfister form, so $\Pf_2(q)=1$. On the other hand, if $q$ does not represent~$1$, then $\Pf_2(q)>1$ and Proposition~\ref{prop:BRV} yields $\Pf_2(q)=2$. \medbreak\par We next consider anisotropic forms of dimension~$6$ in $I^2(k)$. Of course, $\Pf_2(q)>1$ for any such form $q$. If $q$ represents~$1$, it follows from Proposition~\ref{prop:BRV} that $\Pf_2(q)=2$ or $3$. The Stiefel-Whitney invariant $w_4(q)\in H^4(k,\mu_2)$ distinguishes between the two cases, as the next proposition shows. (See \cite[\S3]{Mi} or \cite[\S17]{GMS} for a discussion of Stiefel-Whitney invariants of quadratic forms.) \begin{prop} \label{prop:sw} Let $q$ be an anisotropic quadratic form of dimension~$6$. Assume $q\in I^2(k)$ and $q$ represents~$1$. If $w_4(q)=0$, then $\Pf_2(q)=2$. If $w_4(q)\neq0$, then $\Pf_2(q)=3$. \end{prop} \begin{proof} In view of Proposition~\ref{prop:BRV}, it suffices to show that $\Pf_2(q)=2$ if and only if $w_4(q)=0$. Assume first $\Pf_2(q)=2$ so that \[ q=\qform{x_1,x_2,x_1x_2,y_1,y_2,y_1y_2}\qquad\text{for some $x_1$, $x_2$, $y_1$, $y_2\in k^\times$}. \] For $x\in k^\times$, denote by $(x)\in H^1(k,\mu_2)$ the cohomology class associated to the square class of $x$. An explicit computation yields \[ w_4(q)=(x_1)\cup(x_2)\cup(y_1)\cup(y_2). \] Since $q$ represents~$1$, the form $\qform{1}\perp q$ is isotropic. The $4$-fold Pfister form $\pform{x_1,x_2,y_1,y_2}$, which contains $\qform{1}\perp q$ as a subform, is therefore isotropic, hence hyperbolic. Therefore, $(x_1)\cup(x_2)\cup(y_1)\cup(y_2)=0$ by \cite[Satz~1.6]{Ar}. For the converse, let \[ q=\qform{1,a,b,c,d,abcd}\qquad\text{for some $a$, $b$, $c$, $d\in k^\times$}. \] Then \[ w_4(q)=(a)\cup(b)\cup(c)\cup(d). \] Since $w_4(q)=0$ by hypothesis, Theorem~1 of \cite{AEJ} shows that the $4$-fold Pfister form $\pform{a,b,c,d}$ is hyperbolic. It follows that the $9$-dimensional subform $q\perp\qform{ab,ac,ad}$ is isotropic, hence $q$ represents a nonzero element of the form $a(bx^2+cy^2+dz^2)$ for some $x$, $y$, $z\in k$. Let $b'=bx^2+cy^2+dz^2\in k^\times$. Since the form $\qform{b,c,d}$ represents $b'$, we may find $c'$, $d'\in k^\times$ such that \[ \qform{b,c,d}=\qform{b',c',d'}. \] Comparing discriminants, we have $bcd\equiv b'c'd'\bmod k^{\times2}$, hence \[ q=\qform{1,a,b',c',d',ab'c'd'}. \] The form $\qform{1,a,b'}$ is anisotropic since $q$ is anisotropic, hence the $2$-fold Pfister form $\pform{a,b'}$ is anisotropic. On the other hand, the form $q\perp\qform{ab'}$ is isotropic since $q$ represents $ab'$, hence $\pform{a,b'}$ represents a nonzero element of the form $c'r^2+d's^2+ab'c'd't^2$, for some $r$, $s$, $t\in k$. Let $c''=c'r^2+d's^2+ab'c'd't^2\in k^\times$, and let $d''\in k^\times$ be such that \[ \qform{c',d',ab'c'd'}=\qform{c'',d'',ab'c''d''}. \] Thus \[ q=\qform{1,a,b',c'', d'',ab'c''d''}. \] Since $\pform{a,b'}$ represents $c''$, the $3$-fold Pfister form $\pform{a,b',c''}$ is hyperbolic, and therefore its $5$-dimensional subform $\qform{1,a,b',c'',ab'c''}$ is isotropic. Thus, $\qform{1,a,b',c''}$ represents $ab'c''$, and we may find $u$, $v\in k^\times$ such that \[ \qform{1,a,b',c''}=\qform{ab'c'',u,v,uv}. \] Thus \[ q=\pform{u,v}+\pform{d'',ab'c''}, \] and $\Pf_2(q)=2$. \end{proof} For arbitrary $6$-dimensional anisotropic quadratic forms in $I^2(k)$, the $2$-Pfister number is $2$, $3$ or $4$. Note that scaling has an important effect on the Pfister number although it does not change the Stiefel-Whitney class.
Indeed, by Theorem~\ref{thm:3} and Remark~\ref{rem}, if $x_1$, \ldots, $x_5$ are independent indeterminates and \[ q=\qform{x_1,x_2,x_3,x_4,x_5, x_1x_2x_3x_4x_5}, \] then \[ \Pf_2(q)=4\qquad\text{and}\qquad\Pf_2(\qform{x_1}q)=3. \] On the other hand, \[ \qform{x_1x_2x_3}q=\pform{x_1x_2,x_1x_3}+ \pform{x_4x_5,x_1x_2x_3x_4}, \] hence \[ \Pf_2(\qform{x_1x_2x_3}q)=2. \] More generally, the same computation shows that for an arbitrary anisotropic form $q\in I^2(k)$ of dimension~$6$, if $d\in k^\times$ is the discriminant of some $3$-dimensional subform of $q$, then $\Pf_2(\qform{d}q)=2$. In the rest of this section, we give necessary and sufficient conditions on $q$ for $\Pf_2(q)\leq3$ as well as for $\Pf_2(q)= 2$. \medbreak \par As seen before, every quadratic form of dimension $6$ in $I^2(k)$ is a scalar multiple of a form $q$ with $\Pf_2(q)= 2$. Fix a decomposition \[ q=\pform{a,b}+\pform{c,d}=\qform{a,b,ab,c,d,cd}. \] To this decomposition is associated the biquaternion algebra $D=(a,b)_k\otimes(c,d)_k$, which is Brauer-equivalent to the Clifford algebra of $q$, and the orthogonal involution $\sigma$ on $D$ that is the tensor product of the conjugation involutions on $(a,b)_k$ and $(c,d)_k$. The algebra $D$ is division since $q$ is anisotropic, see \cite[(16.5)]{KMRT}. \begin{thm} \label{thm:1} For $\lambda\in k^\times$, we have $\Pf_2(\qform{\lambda}q)=2$ if and only if $\lambda^2$ is the reduced norm of some $\sigma$-symmetric element in $D$, i.e., \[ \lambda^2=\Nrd_D(u) \qquad\text{for some $u\in\Sym(D,\sigma)$}. \] \end{thm} \begin{proof} Let $(a,b)_k^0$ (resp.\ $(c,d)_k^0$) be the $k$-vector space of pure quaternions in $(a,b)_k$ (resp.\ in $(c,d)_k$). The vector space of $\sigma$-skew-symmetric elements in $D$ is \[ \Skew(D,\sigma)=\bigl((a,b)_k^0\otimes1\bigr)\oplus \bigl(1\otimes(c,d)_k^0\bigr). \] Let $p_\sigma$ be the linear operator on $\Skew(D,\sigma)$ defined by \[ p_\sigma(x\otimes1+1\otimes y)=x\otimes1-1\otimes y \qquad\text{for $x\in(a,b)_k^0$ and $y\in(c,d)_k^0$}. \] The formula $q_\sigma(s)=s\,p_\sigma(s)$ defines a quadratic form on $\Skew(D,\sigma)$, and we have \[ q_\sigma\simeq\qform{a,b,ab,c,d,cd}=q. \] Suppose now $\Pf_2(\qform{\lambda}q)=2$. We fix a decomposition \[ \qform{\lambda}q=\pform{a',b'}+\pform{c',d'}= \qform{a',b',a'b',c',d',c'd'}. \] The Clifford algebras of $q$ and $\qform{\lambda}q$ are isomorphic, hence we may identify \[ D=(a',b')_k\otimes(c',d')_k. \] Let $\sigma'$ be the orthogonal involution on $D$ that is the tensor product of the conjugation involutions on $(a',b')_k$ and $(c',d')_k$. By \cite[(2.7)]{KMRT}, there is a unit $u\in\Sym(D,\sigma)$ such that $\sigma'=\Int(u)\circ\sigma$, i.e., $\sigma'(x)=u\sigma(x)u^{-1}$ for all $x\in D$. On $\Skew(D,\sigma')$ we may define a linear operator $p_{\sigma'}$ and a quadratic form $q_{\sigma'}$ in the same way as $p_\sigma$ and $q_\sigma$ were defined on $\Skew(D,\sigma)$, and we have \[ q_{\sigma'}\simeq\qform{\lambda}q. \] It is easily seen that $\Skew(D,\sigma')=u\Skew(D,\sigma)=\Skew(D,\sigma)u^{-1}$. The linear operator $p'$ on $\Skew(D,\sigma')$ defined by \[ p'(s')=up_\sigma(s'u) \qquad\text{for $s'\in\Skew(D,\sigma')$} \] satisfies \begin{equation} \label{eq:s} s'p'(s')=s'up_\sigma(s'u)=q_\sigma(s'u)\in k. \end{equation} Therefore, by \cite[(16.22)]{KMRT}, the map $p'$ is a multiple of $p_{\sigma'}$: there exists $\lambda_1\in k^\times$ such that $p'=\lambda_1p_{\sigma'}$. 
It follows that \begin{equation} \label{eq:lambda} s'p'(s')=\lambda_1q_{\sigma'}(s') \qquad\text{for $s'\in\Skew(D,\sigma')$}, \end{equation} and \eqref{eq:s} shows that the map $s'\mapsto s'u$ is an isometry $\qform{\lambda_1}q_{\sigma'}\simeq q_\sigma$. Hence $\qform{\lambda_1\lambda}q\simeq q$ and $\lambda\lambda_1^{-1}$ is the multiplier of a similitude of $q$. By \cite[(15.34)]{KMRT}, we may find $\lambda_2\in k^\times$ and $v\in D^\times$ such that \begin{equation} \label{eq:new} \lambda\lambda_1^{-1}=\lambda_2^2\Nrd_D(v). \end{equation} On the other hand, for $s'\in\Skew(D,\sigma')$ we have $q_\sigma(s'u)^2=\Nrd_D(s'u)$ and $q_{\sigma'}(s')^2=\Nrd_D(s')$ by \cite[(16.25)]{KMRT}, hence \eqref{eq:s} and \eqref{eq:lambda} yield \[ \lambda_1^2=\Nrd_D(u). \] Using this equation, we derive from \eqref{eq:new}: \[ \lambda^2=\lambda_1^2\lambda_2^4\Nrd_D(v)^2= \Nrd_D\bigl(\lambda_2vu\sigma(v)\bigr). \] Since $\lambda_2vu\sigma(v)\in\Sym(D,\sigma)$, the element $\lambda$ satisfies the condition in the theorem. Conversely, assume $\lambda^2=\Nrd_D(u)$ for some $u\in\Sym(D,\sigma)$. Define an orthogonal involution $\sigma'$ on $D$ by $\sigma'=\Int(u)\circ\sigma$. By \cite[(7.3)]{KMRT}, the discriminant of $\sigma'$ is $\Nrd_D(u)=\lambda^2$, hence by \cite[(15.12)]{KMRT} we may find quaternion subalgebras $(a',b')_k$, $(c',d')_k\subseteq D$ such that \[ D=(a',b')_k\otimes(c',d')_k, \] and $\sigma'$ is the tensor product of the conjugations on $(a',b')_k$ and $(c',d')_k$. We may then define $p_{\sigma'}$ and $q_{\sigma'}$ as above, and we have \begin{equation} \label{eq:q} q_{\sigma'}\simeq\qform{a',b',a'b',c',d',c'd'}. \end{equation} On the other hand, define a linear operator $p_0$ and a quadratic form $q_0$ on $\Skew(D,\sigma')$ by \[ p_0(s')=\lambda^{-1}up_\sigma(s'u),\qquad q_0(s')=\lambda^{-1}q_\sigma(s'u) \quad\text{for $s'\in\Skew(D,\sigma')$}. \] By definition, we have \begin{equation} \label{eq:q0} q_0\simeq\qform{\lambda}q_\sigma\simeq\qform{\lambda}q. \end{equation} Moreover, $s'p_0(s')=q_0(s')\in k$ for $s'\in\Skew(D,\sigma')$, hence $p_0$ is a multiple of $p_{\sigma'}$ by \cite[(16.22)]{KMRT}: we have $p_0=\mu p_{\sigma'}$ for some $\mu\in k^\times$, hence also $q_0=\mu q_{\sigma'}$. For $s'\in\Skew(D,\sigma')$ we have by \cite[(16.25)]{KMRT} \[ p_0^2(s')=\lambda^{-2}up_\sigma(up_\sigma(s'u)u) = \lambda^{-2}\Nrd_D(u) p_\sigma^2(s'u)u^{-1}. \] Since $p_\sigma^2=\Id$ and $\Nrd_D(u)=\lambda^2$, it follows that $p_0^2=\Id$. Now, we also have $p_{\sigma'}^2=\Id$, hence $\mu=\pm1$. Therefore, $q_0\simeq\qform{\pm1}q_{\sigma'}\simeq q_{\sigma'}$. By \eqref{eq:q} and \eqref{eq:q0} we have \[ \qform{\lambda}q\simeq\qform{a',b',a'b',c',d',c'd'}, \] hence $\Pf_2(\qform{\lambda}q)=2$. \end{proof} Note that the group $S(q)\subset k^\times$ of spinor norms of $q$ can be described in terms of $D$: we have by \cite[(15.34)]{KMRT} \[ S(q)=\{\lambda\in k^\times\mid\lambda^2\in\Nrd_D(D^\times)\}. \] Therefore, the following is a direct consequence of Theorem~\ref{thm:1}: \begin{cor} Let $q\in I^2(k)$ be an anisotropic quadratic form of dimension~$6$, and let $\lambda\in k^\times$. If $\Pf_2(q)=\Pf_2(\qform{\lambda}q)=2$, then $\lambda$ is a spinor norm of $q$. \end{cor} We now turn to a characterization of quadratic forms of dimension~$6$ with $2$-Pfister number at most~$3$. \begin{thm} Let $q\in I^2(k)$ be an anisotropic quadratic form of dimension~$6$. 
We have $\Pf_2(q)\leq3$ if and only if there exist a $4$-dimensional quadratic form $q_1$ over $k$ and scalars $\mu$, $\mu'$, $\nu\in k^\times$ satisfying the following conditions: \begin{enumerate} \item[(i)] $q\simeq q_1\perp\qform{\mu'}\pform{\nu}$; \item[(ii)] $\Pf_2(q_1\perp\qform{\mu}\pform{\nu})\leq2$; \item[(iii)] $\pform{\mu,\mu',\nu}=0$. \end{enumerate} \end{thm} \begin{proof} Suppose $\Pf_2(q)\leq3$, and let \[ q=\pform{x_1,y_1}+\pform{x_2,y_2}+\pform{x_3,y_3}= \qform{x_1,y_1,x_1y_1,x_2,y_2,x_2y_2} + \qform{1,x_3,y_3,x_3y_3}. \] Since $q$ is anisotropic of dimension~$6$, the $10$-dimensional form $\qform{x_1,y_1,x_1y_1,x_2,y_2,x_2y_2}\perp\qform{1,x_3,y_3,x_3y_3}$ has Witt index~$2$, hence there exists a $2$-dimensional form $\qform{\mu}\pform{\nu}$ that is a subform of $\qform{x_1,y_1,x_1y_1,x_2,y_2,x_2y_2}$ and of $\qform{1,x_3,y_3,x_3y_3}$. Thus, we can write \begin{equation} \label{eq:4} \qform{x_1,y_1,x_1y_1,x_2,y_2,x_2y_2} =q_1\perp\qform{\mu}\pform{\nu} \end{equation} and \begin{equation} \label{eq:5} \qform{1,x_3,y_3,x_3y_3} = \qform{\mu_1,\mu_2}\perp\qform{\mu}\pform{\nu} \end{equation} for some $4$-dimensional quadratic form $q_1$ and some scalars $\mu_1$, $\mu_2$. Equation~\eqref{eq:4} readily yields~(ii). Comparing discriminants on each side of \eqref{eq:5}, we see that \[ \qform{\mu_1,\mu_2}=\qform{\mu'}\pform{\nu}\qquad\text{for some $\mu'\in k^\times$}. \] Therefore, adding \eqref{eq:4} and \eqref{eq:5} yields~(i). Finally, \eqref{eq:5} shows that $\qform{\mu,\mu'}\pform{\nu}$ represents~$1$, hence $\qform{1,\mu,\mu',\mu\nu,\mu'\nu}$ is isotropic. Since this form is contained in the $3$-fold Pfister form $\pform{\mu,\mu',\nu}$, we have~(iii). Conversely, suppose (i), (ii), and (iii) hold for some $4$-dimensional quadratic form $q_1$ and some scalars $\mu$, $\mu'$, $\nu\in k^\times$. Since \[ \pform{\mu,\mu',\nu}=\qform{\mu,\mu\nu,\mu',\mu'\nu}\perp \qform{1,\nu,\mu\mu',\mu\mu'\nu}, \] condition~(iii) yields \[ \qform{\mu}\pform{\nu}\perp\qform{\mu'}\pform{\nu}=\pform{\nu,\mu\mu'}. \] Therefore, we derive from~(i) that \[ q=\bigl(q_1\perp\qform{\mu}\pform{\nu}\bigr)+\pform{\nu,\mu\mu'}. \] Since $\Pf_2(q_1\perp\qform{\mu}\pform{\nu})\leq2$ by~(ii), it follows that $\Pf_2(q)\leq3$. \end{proof}
TITLE: How to prove $\forall k \exists y \forall x (x < y \Leftrightarrow \forall n (n < x \Rightarrow n < k))$? QUESTION [4 upvotes]: I'd like to prove that $\forall k \exists y \forall x (x < y \Leftrightarrow \forall n (n < x \Rightarrow n < k))$ holds in first-order arithmetic, but I'm not sure where to start. I should probably argue by induction, but the formula contains a universal quantifier inside. What should one do in such cases? REPLY [0 votes]: Set y = k + 1. Assume x < y. If n < x, then n + 1 < x + 1 <= y = k + 1. As n + 1 < k + 1, n < k. Conversely, assume that for all n, (n < x implies n < k). If x = 0, then x < k + 1 = y. If x /= 0, then x has a predecessor p. As p < x, the assumption gives p < k, hence x = p + 1 < k + 1 = y.
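If you want to machine-check this argument, here is a minimal Lean 4 sketch of the same proof carried out over the natural numbers (so it verifies the statement in the standard model rather than producing a formal derivation inside PA itself). It uses only lemmas from the core Nat namespace; the witness y = k + 1 and the case split on x mirror the proof above.

    -- Sketch: the statement over Nat, with witness y = k + 1 as in the answer above.
    example (k : Nat) : ∃ y, ∀ x, x < y ↔ ∀ n, n < x → n < k := by
      refine ⟨k + 1, fun x => ?_⟩
      constructor
      · -- if x < k + 1, then x ≤ k, so every n < x satisfies n < k
        intro hx n hn
        exact Nat.lt_of_lt_of_le hn (Nat.le_of_lt_succ hx)
      · -- conversely, split on x: x = 0 is immediate, x = p + 1 uses the hypothesis at p
        intro h
        cases x with
        | zero => exact Nat.succ_pos k
        | succ p => exact Nat.succ_lt_succ (h p (Nat.lt_succ_self p))

The case split plays the role of the predecessor argument in the text: for x = p + 1, applying the hypothesis to n = p gives p < k, and adding 1 on both sides gives x < k + 1.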