TITLE: How to show that if $m$ and $n$ are coprime and $m-n$ is odd, then $(m^2-n^2)^2$, $(2mn)^2$, $(m^2+n^2)^2$ have no common factor?
QUESTION [1 upvotes]: We know that the Pythagorean triples can be generated by Euclid's formula $a=m^2-n^2$, $b=2mn$, $c=m^2+n^2$ for positive integers $m,n$ with $m>n$.
I am trying to prove the statement:
The triple generated by Euclid's formula is primitive if and only if $m$ and $n$ are coprime and $m-n$ is odd.
I could prove the $\Rightarrow$ direction. Now I am trying to show the $\Leftarrow$ direction.
I am trying to prove it by contradiction. If $m$ and $n$ are coprime and $m-n$ is odd, assume the triple generated is not primitive. Then there is a common factor $k>1$ dividing each term. But since $m$ and $n$ are coprime and $m-n$ is odd, we can deduce that $(m^2-n^2)^2$ is odd, $(2mn)^2$ is even and $(m^2+n^2)^2$ is odd. Then $k$ can only be odd. Let $k=2p+1$ where $p$ is an integer. But then I am not sure how to proceed to obtain a contradiction.
Any help is greatly appreciated. Many thanks!
REPLY [0 votes]: I thought I had a proof that, if $m,n$ are mutually prime, then $m,n$ would generate only primitive Pythagorean triples. The $proof$ is between the asterisks below.
$$\text{*************************}$$
$\text{We are given }\quad A=m^2-n^2\quad B=2mn\quad C=m^2+n^2$
Let $x$ be the GCD of $m,n$ and let $p$ and $q$ be the cofactors of $m$ and $n$ respectively. Then we have
$$A=(xp)^2-(xq)^2\quad B=2xpxq\quad C=(xp)^2+(xq)^2$$
$$A=x^2(p^2-q^2)\quad B=2x^2(pq)\quad C=x^2(p^2+q^2)$$
If $GCD(m,n)=1$, then $GCD(A,B,C)=1$ and $(A,B,C)$ is a primitive triple. This means that $m$ and $n$ must be co-prime to generate a primitive.
$$\text{*************************}$$
However, a counter-example destroys the so-called proof: $\quad\text{Let }m,n=7,3$.
$$A=49-9=40\quad B=2*7*3=42\quad C=49+9=58\quad GCD(40,42,58)=2$$
$$\therefore GCD(m,n)=1\neg\implies GCD(A,B,C)=1$$
I believe, but cannot prove, that the proof may be valid if we insist that $m$ and $n$ be of opposite parity.
The only two formulas I know about that will generate only primitive triplets do not generate all of them but $C-B=1$ in the first one and $C-A=2$ in the second one.
$$A=2n+1\quad B=2n^2+2n\quad C=2n^2+2n+1$$
$$A=4n^2-1\quad B=4n\quad C=4n^2+1$$
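These two families can be verified numerically. A minimal sketch (function names `family_one`/`family_two` are mine; the first uses $A=2n+1$, the standard form of the $C-B=1$ family):

```python
from math import gcd

def family_one(n):
    # C - B = 1 family: (2n+1, 2n^2+2n, 2n^2+2n+1)
    return 2*n + 1, 2*n*n + 2*n, 2*n*n + 2*n + 1

def family_two(n):
    # C - A = 2 family: (4n^2-1, 4n, 4n^2+1)
    return 4*n*n - 1, 4*n, 4*n*n + 1

for n in range(1, 200):
    for a, b, c in (family_one(n), family_two(n)):
        assert a*a + b*b == c*c            # a Pythagorean triple...
        assert gcd(gcd(a, b), c) == 1      # ...and a primitive one
```

Both families start from $(3,4,5)$ at $n=1$ and, as noted above, neither generates all primitive triples.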
$\mathbf{UPDATE:}$ I did some looking and found a variation of Euclid's formula that can let us prove when we have found a primitive with more confidence.
$$\text{Let}\quad A=(2m-1+n)^2-n^2\quad B=2(2m-1+n)n\quad C=(2m-1+n)^2+n^2\quad m,n\in\mathbb{N}$$
This formula generates only and all triples where $GCD(A,B,C)=(2k-1)^2,k\in\mathbb{N}$ which includes all primitives.
If we let $GCD((2m-1),n)=x$ and $p,q$ be the respective cofactors, then we have
$$A=(xp+xq)^2-(xq)^2\quad B=2(xp+xq)xq\quad C=(xp+xq)^2+(xq)^2$$
$$A=x^2(p+q)^2-x^2q^2\quad B=2x^2(p+q)q\quad C=x^2(p+q)^2+x^2q^2$$
We can see by inspection that if $(2m-1)$ and $n$ have common factors, then $GCD(A,B,C)$ is an odd square and, if $GCD((2m-1),n)=1$, the triple is primitive.
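The whole discussion is easy to check numerically. A small sketch (the helper name `euclid_triple` is mine): it reproduces the $m,n=7,3$ counterexample and confirms that adding the opposite-parity requirement yields primitive triples.

```python
from math import gcd

def euclid_triple(m, n):
    # Euclid's formula: A = m^2 - n^2, B = 2mn, C = m^2 + n^2
    return m*m - n*n, 2*m*n, m*m + n*n

# Coprime but both odd: the triple is NOT primitive.
a, b, c = euclid_triple(7, 3)
assert (a, b, c) == (40, 42, 58) and gcd(gcd(a, b), c) == 2

# Coprime AND of opposite parity: the triple is primitive.
for m in range(2, 80):
    for n in range(1, m):
        if gcd(m, n) == 1 and (m - n) % 2 == 1:
            a, b, c = euclid_triple(m, n)
            assert a*a + b*b == c*c and gcd(gcd(a, b), c) == 1
```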
Name
Narun Wiwattanakrai
Address
14 Charoenraj Rd. T. Wat Kate, A.Muang Chiang Mai 50000
Telephone
+665 330 3030
Rarinjinda Wellness Spa Resort is a boutique hotel resort.
Facilities include free Wi-Fi Internet access, a well-equipped Wellness Spa, a Fitness and Yoga Studio, an Outdoor Swimming Pool, and the resort's trendy restaurant, Deck1 Exotic Scene and Cuisine (a riverfront restaurant).
35
00:00
00:00
November - April
May - October
Deluxe
Deluxe Room, 46 sq.m., modern setting with stylish antiques, separate bath tub and shower.
46 sq.m
22
US$ 124 - 248
Facility
Business Facilities
Recreations
Services
Situated on the riverfront of the Ping River, you can take in the beauty of Chiang Mai City in a relaxing atmosphere of open-air natural surroundings. It is a place where locals as well as Thai and foreign visitors can entertain special guests or just enjoy a relaxing meal by the river.
Open for service all day, starting with various healthy choices in the breakfast buffet, then lunch, afternoon tea, and a full-course dinner in a romantic setting under the moon and stars. Come and experience the true meaning of Exotic Scene and Cuisine.
Address :
14 Charoenraj Rd. T. Wat Kate, A.Muang Chiang Mai 50000
How to get there :
If you come from the Chiang Mai Night Bazaar, cross the Nawarat Bridge and turn left. The resort is approximately 300 metres further on, on the right.
Snore No More by Kristian on April 19, 2021 at 00:00 Chapter: Comic It was either this or dropping a gigantic nasal spray onto earth. Oh, and a sawing log is the universal image used for snoring, right? I was alternatively gonna go for just “ZzZ” instead. └ Tags: everyone dies, pantomime, sleep, someone eventually dies
FINALLY, the reason for earthquakes is revealed!
yeah, totally.. that whole “tectonic plate” thing is stupid.. right? lol
TITLE: How do I solve $x=1.1^x$? Is it possible?
QUESTION [2 upvotes]: I'm trying to find the intersections of $y=x$ and $y=1.1^x$. I've tried taking logarithms of both sides and applying every log law I know, but can't figure it out. I know these curves intersect twice thanks to Desmos, but the second intersection point is far off-screen, so I have to scroll a lot to find it. Plus, I'm just curious how to solve this equation, $x=1.1^x$. Also, would the solving process be similar for $x=1.1^{x-1}$?
REPLY [0 votes]: Your equation can be written in the form
$$x=e^{ax},\qquad a=\ln 1.1,$$ or equivalently
$$a=\frac{\log x}x.$$
The function on the right increases from $-\infty$, peaks at $(e,e^{-1})$ then decreases to $0$.
Hence, for $a<0$, there is a single real solution, smaller than $1$; for $0<a<e^{-1}$ there are two of them (one in $(1,e)$ and the other above $e$); and there are none for $a>e^{-1}$. Since $\ln 1.1\approx 0.095<e^{-1}$, your equation has exactly two real solutions.
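There is no elementary closed form, but the two intersections are easy to compute numerically. A sketch using plain bisection (the helper name `solve` is mine; it assumes a sign change of $x-1.1^x$ on the bracketing interval):

```python
def solve(lo, hi, tol=1e-12):
    # Bisection on f(x) = x - 1.1**x; assumes f changes sign on [lo, hi].
    f = lambda x: x - 1.1**x
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return (lo + hi) / 2

r1 = solve(1.0, 2.0)    # smaller intersection, just above 1
r2 = solve(30.0, 50.0)  # larger intersection, near 38
```

For a closed form one can use the Lambert $W$ function: $x=e^{ax}$ gives $-axe^{-ax}=-a$, hence $x=-W(-a)/a$, with the two real branches $W_0$ and $W_{-1}$ producing the two intersections; a similar rearrangement handles $x=1.1^{x-1}$.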
TITLE: Roots of a special polynomial
QUESTION [3 upvotes]: I have a question that seems tricky to me. Let $P_n$ be the polynomial defined by $P_n(x+1/x)=x^n+x^{-n}$, and let $Q$ be some polynomial with $\sup_{x\in [-2,2]}|Q(x)|<2$. Then $P_n-Q$ has at least $n$ different roots. I have no idea yet. It's clear that $P_n(x)\notin (-2,2)$ for $x \notin (-2,2)$, but I don't see how to use it. I would be grateful for any idea.
REPLY [3 votes]: Hint:
Let $t=x+1/x$. Resolving the equation with respect to $x$ one obtains:
$$
x_\pm=\frac{t\pm\sqrt{t^2-4}}{2}.
$$
Notice: $x_+=x^{-1}_-$.
Thus:
$$
P_n(x)=\left(\frac{x+\sqrt{x^2-4}}{2}\right)^n+\left(\frac{x-\sqrt{x^2-4}}{2}\right)^n.\tag{1}
$$
Observe that upon binomial expansion of both summands the odd powers of $\sqrt{x^2-4}$ will cancel, so that the function (1) is indeed a polynomial of degree $n$.
Next observe that there can be no real roots of the polynomial outside the interval $(-2,2)$ as both summands are real and have the same sign. Inside the interval one can use the substitution $x=2\cos t$ to obtain:
$$
P_n(2\cos t)=e^{itn}+e^{-itn}=2\cos nt,\text{ with } 0\le t\le\pi.
$$
The function $2\cos nt$ obviously has $n$ distinct real roots $t_k=\frac{2k+1}{2n}\pi$ with $k=0,\dots,n-1$, so $P_n$ has the $n$ distinct real roots $x_k=2\cos t_k$. Recalling that the degree of $P_n(x)$ is $n$, these represent the complete list of roots of the polynomial.
Besides the following inequality holds for all $-2\le x \le 2$:
$$
-2\le P_n(x)\le 2,
$$
with limits being attained at $x=2\cos\frac{2k+1}{n}\pi$ and $x=2\cos\frac{2k}{n}\pi$, respectively.
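The answer's claims can be checked numerically via the recurrence $P_{n+1}(t)=t\,P_n(t)-P_{n-1}(t)$ (from $t(x^n+x^{-n})=x^{n+1}+x^{-(n+1)}+x^{n-1}+x^{-(n-1)}$ with $t=x+1/x$), starting from $P_0=2$, $P_1=t$. A sketch (the function name is mine):

```python
from math import cos, pi

def P(n, t):
    # P_0 = 2, P_1 = t, P_{k+1}(t) = t*P_k(t) - P_{k-1}(t)
    a, b = 2.0, t
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, t*b - a
    return b

n = 7
# Claimed roots x_k = 2*cos((2k+1)*pi/(2n)), all inside (-2, 2).
roots = [2*cos((2*k + 1)*pi/(2*n)) for k in range(n)]
assert all(abs(P(n, x)) < 1e-9 for x in roots)
# And |P_n| <= 2 on [-2, 2], since P_n(2*cos(t)) = 2*cos(n*t).
assert all(abs(P(n, 2*cos(j*pi/100))) <= 2 + 1e-9 for j in range(101))
```

Up to the scaling $P_n(x)=2T_n(x/2)$, these are the Chebyshev polynomials of the first kind, which is why the roots and the bound on $[-2,2]$ come out so cleanly.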
Microsoft Outlook is a personal information manager software system. You will find it inside the Microsoft 365 suite of tools. This powerful application does more than create and send emails. Its design helps you keep your workspace organized, even when you’re busy.
Our latest video shows you some fantastic time-saving tips to get you started:
How long do you spend each day sorting, reading, and sending emails? Too long? Email is a great communication tool, but managing it can slow or stall your productivity. If you want to cut the countless hours you spend checking your email daily, the best way is to set up your mailbox Rules.
Outlook’s Rules is a built-in automation feature. To automate routine actions, it uses pre-programmed conditions. For instance, you can set your mailbox rule to forward all spam messages to a folder called ‘Spam or Junk.’ When you have free time, you can inspect that folder to identify any crucial emails that went there.
To set a new rule, go to Settings > Options > Organize email > Inbox rules > New. You will find two categories of rules listed:
As you continue to explore Outlook, you will also discover there are several rule templates. Each has customizable choices to help you sort your emails quicker so you can avoid spending countless hours combing through your inbox.
Would you like to find a view in Outlook that meets your specific needs? This time-saving tip quickly shows how you can access the various display settings within Outlook.
First, you will choose a Different Existing View:
Creating a New Custom View:
On the right-hand side of the dialogue box, click on the New button.
Click the OK button to launch the Advanced View Settings dialogue box. Inside, you will find seven buttons for setting your View options; some of them may be unavailable, depending on your chosen base view type.
(*
Title: Budan-Fourier theorem
Author: Wenda Li <[email protected] / [email protected]>
*)
section \<open>Budan-Fourier theorem\<close>
theory Budan_Fourier imports
BF_Misc
begin
text \<open>
The Budan-Fourier theorem is a classic result in real algebraic geometry for over-approximating the
number of real roots of a polynomial (counted with multiplicity) within an interval. When all roots
of the polynomial are known to be real, the over-approximation becomes tight -- the number of
roots is counted exactly. Also note that Descartes' rule of signs is a direct consequence of
the Budan-Fourier theorem.
The proof mainly follows Theorem 2.35 in
Basu, S., Pollack, R., Roy, M.-F.: Algorithms in Real Algebraic Geometry.
Springer Berlin Heidelberg, Berlin, Heidelberg (2006).
\<close>
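As an informal illustration of the statement being formalised (plain Python, not part of the formal development; all names are mine): the Fourier sequence of $p$ is $p, p', p'', \dots$, and the difference in sign variations at the two endpoints over-approximates, with matching parity, the number of roots in $(a,b]$.

```python
def derivative(p):
    # Coefficient list, lowest degree first: p[i] is the coefficient of x^i.
    return [i * c for i, c in enumerate(p)][1:]

def evaluate(p, x):
    return sum(c * x**i for i, c in enumerate(p))

def sign_changes(values):
    signs = [v for v in values if v != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

def budan_fourier_bound(p, a, b):
    # Fourier sequence: p, p', p'', ... down to a constant.
    seq = [p]
    while len(seq[-1]) > 1:
        seq.append(derivative(seq[-1]))
    va = sign_changes([evaluate(q, a) for q in seq])
    vb = sign_changes([evaluate(q, b) for q in seq])
    return va - vb   # >= number of roots in (a, b], same parity

# x^3 - 3x has all three real roots (-sqrt(3), 0, sqrt(3)) in (-2, 2],
# so the bound is attained exactly.
assert budan_fourier_bound([0, -3, 0, 1], -2, 2) == 3
```

Taking the left endpoint at $0$ recovers the connection to the coefficient sign changes of Descartes' rule, which the formal development captures in `changes_poly_at_pders_0` below.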
subsection \<open>More results related to @{term sign_r_pos}\<close>
lemma sign_r_pos_nzero_right:
assumes nzero:"\<forall>x. c<x \<and> x\<le>d \<longrightarrow> poly p x \<noteq>0" and "c<d"
shows "if sign_r_pos p c then poly p d>0 else poly p d<0"
proof (cases "sign_r_pos p c")
case True
then obtain d' where "d'>c" and d'_pos:"\<forall>y>c. y < d' \<longrightarrow> 0 < poly p y"
unfolding sign_r_pos_def eventually_at_right by auto
have False when "\<not> poly p d>0"
proof -
have "\<exists>x>(c + min d d') / 2. x < d \<and> poly p x = 0"
apply (rule poly_IVT_neg)
using \<open>d'>c\<close> \<open>c<d\<close> that nzero[rule_format,of d,simplified]
by (auto intro:d'_pos[rule_format])
then show False using nzero \<open>c < d'\<close> by auto
qed
then show ?thesis using True by auto
next
case False
then have "sign_r_pos (-p) c"
using sign_r_pos_minus[of p c] nzero[rule_format,of d,simplified] \<open>c<d\<close>
by fastforce
then obtain d' where "d'>c" and d'_neg:"\<forall>y>c. y < d' \<longrightarrow> 0 > poly p y"
unfolding sign_r_pos_def eventually_at_right by auto
have False when "\<not> poly p d<0"
proof -
have "\<exists>x>(c + min d d') / 2. x < d \<and> poly p x = 0"
apply (rule poly_IVT_pos)
using \<open>d'>c\<close> \<open>c<d\<close> that nzero[rule_format,of d,simplified]
by (auto intro:d'_neg[rule_format])
then show False using nzero \<open>c < d'\<close> by auto
qed
then show ?thesis using False by auto
qed
lemma sign_r_pos_at_left:
assumes "p\<noteq>0"
shows "if even (order c p) \<longleftrightarrow>sign_r_pos p c then eventually (\<lambda>x. poly p x>0) (at_left c)
else eventually (\<lambda>x. poly p x<0) (at_left c)"
using assms
proof (induct p rule:poly_root_induct_alt)
case 0
then show ?case by simp
next
case (no_proots p)
then have [simp]:"order c p = 0" using order_root by blast
have ?case when "poly p c >0"
proof -
have "\<forall>\<^sub>F x in at c. 0 < poly p x"
using that
by (metis (no_types, lifting) less_linear no_proots.hyps not_eventuallyD
poly_IVT_neg poly_IVT_pos)
then have "\<forall>\<^sub>F x in at_left c. 0 < poly p x"
using eventually_at_split by blast
moreover have "sign_r_pos p c" using sign_r_pos_rec[OF \<open>p\<noteq>0\<close>] that by auto
ultimately show ?thesis by simp
qed
moreover have ?case when "poly p c <0"
proof -
have "\<forall>\<^sub>F x in at c. poly p x < 0"
using that
by (metis (no_types, lifting) less_linear no_proots.hyps not_eventuallyD
poly_IVT_neg poly_IVT_pos)
then have "\<forall>\<^sub>F x in at_left c. poly p x < 0"
using eventually_at_split by blast
moreover have "\<not> sign_r_pos p c" using sign_r_pos_rec[OF \<open>p\<noteq>0\<close>] that by auto
ultimately show ?thesis by simp
qed
ultimately show ?case using no_proots(1)[of c] by argo
next
case (root a p)
define aa where "aa=[:-a,1:]"
have [simp]:"aa\<noteq>0" "p\<noteq>0" using \<open>[:- a, 1:] * p \<noteq> 0\<close> unfolding aa_def by auto
have ?case when "c>a"
proof -
have "?thesis = (if even (order c p) = sign_r_pos p c
then \<forall>\<^sub>F x in at_left c. 0 < poly (aa * p) x
else \<forall>\<^sub>F x in at_left c. poly (aa * p) x < 0)"
proof -
have "order c aa=0" unfolding aa_def using order_0I that by force
then have "even (order c (aa * p)) = even (order c p)"
by (subst order_mult) auto
moreover have "sign_r_pos aa c"
unfolding aa_def using that
by (auto simp: sign_r_pos_rec)
then have "sign_r_pos (aa * p) c = sign_r_pos p c"
by (subst sign_r_pos_mult) auto
ultimately show ?thesis
by (fold aa_def) auto
qed
also have "... = (if even (order c p) = sign_r_pos p c
then \<forall>\<^sub>F x in at_left c. 0 < poly p x
else \<forall>\<^sub>F x in at_left c. poly p x < 0)"
proof -
have "\<forall>\<^sub>F x in at_left c. 0 < poly aa x"
apply (simp add:aa_def)
using that eventually_at_left_field by blast
then have "(\<forall>\<^sub>F x in at_left c. 0 < poly (aa * p) x) \<longleftrightarrow> (\<forall>\<^sub>F x in at_left c. 0 < poly p x)"
"(\<forall>\<^sub>F x in at_left c. 0 > poly (aa * p) x) \<longleftrightarrow> (\<forall>\<^sub>F x in at_left c. 0 > poly p x)"
apply auto
by (erule (1) eventually_elim2,simp add: zero_less_mult_iff mult_less_0_iff)+
then show ?thesis by simp
qed
also have "..." using root.hyps by simp
finally show ?thesis .
qed
moreover have ?case when "c<a"
proof -
have "?thesis = (if even (order c p) = sign_r_pos p c
then \<forall>\<^sub>F x in at_left c. poly (aa * p) x < 0
else \<forall>\<^sub>F x in at_left c. 0 < poly (aa * p) x) "
proof -
have "order c aa=0" unfolding aa_def using order_0I that by force
then have "even (order c (aa * p)) = even (order c p)"
by (subst order_mult) auto
moreover have "\<not> sign_r_pos aa c"
unfolding aa_def using that
by (auto simp: sign_r_pos_rec)
then have "sign_r_pos (aa * p) c = (\<not> sign_r_pos p c)"
by (subst sign_r_pos_mult) auto
ultimately show ?thesis
by (fold aa_def) auto
qed
also have "... = (if even (order c p) = sign_r_pos p c
then \<forall>\<^sub>F x in at_left c. 0 < poly p x
else \<forall>\<^sub>F x in at_left c. poly p x < 0)"
proof -
have "\<forall>\<^sub>F x in at_left c. poly aa x < 0"
apply (simp add:aa_def)
using that eventually_at_filter by fastforce
then have "(\<forall>\<^sub>F x in at_left c. 0 < poly (aa * p) x) \<longleftrightarrow> (\<forall>\<^sub>F x in at_left c. poly p x < 0)"
"(\<forall>\<^sub>F x in at_left c. 0 > poly (aa * p) x) \<longleftrightarrow> (\<forall>\<^sub>F x in at_left c. 0 < poly p x)"
apply auto
by (erule (1) eventually_elim2,simp add: zero_less_mult_iff mult_less_0_iff)+
then show ?thesis by simp
qed
also have "..." using root.hyps by simp
finally show ?thesis .
qed
moreover have ?case when "c=a"
proof -
have "?thesis = (if even (order c p) = sign_r_pos p c
then \<forall>\<^sub>F x in at_left c. 0 > poly (aa * p) x
else \<forall>\<^sub>F x in at_left c. poly (aa * p) x > 0)"
proof -
have "order c aa=1" unfolding aa_def using that
by (metis order_power_n_n power_one_right)
then have "even (order c (aa * p)) = odd (order c p)"
by (subst order_mult) auto
moreover have "sign_r_pos aa c"
unfolding aa_def using that
by (auto simp: sign_r_pos_rec pderiv_pCons)
then have "sign_r_pos (aa * p) c = sign_r_pos p c"
by (subst sign_r_pos_mult) auto
ultimately show ?thesis
by (fold aa_def) auto
qed
also have "... = (if even (order c p) = sign_r_pos p c
then \<forall>\<^sub>F x in at_left c. 0 < poly p x
else \<forall>\<^sub>F x in at_left c. poly p x < 0)"
proof -
have "\<forall>\<^sub>F x in at_left c. 0 > poly aa x"
apply (simp add:aa_def)
using that by (simp add: eventually_at_filter)
then have "(\<forall>\<^sub>F x in at_left c. 0 < poly (aa * p) x) \<longleftrightarrow> (\<forall>\<^sub>F x in at_left c. 0 > poly p x)"
"(\<forall>\<^sub>F x in at_left c. 0 > poly (aa * p) x) \<longleftrightarrow> (\<forall>\<^sub>F x in at_left c. 0 < poly p x)"
apply auto
by (erule (1) eventually_elim2,simp add: zero_less_mult_iff mult_less_0_iff)+
then show ?thesis by simp
qed
also have "..." using root.hyps by simp
finally show ?thesis .
qed
ultimately show ?case by argo
qed
lemma sign_r_pos_nzero_left:
assumes nzero:"\<forall>x. d\<le>x \<and> x<c \<longrightarrow> poly p x \<noteq>0" and "d<c"
shows "if even (order c p) \<longleftrightarrow>sign_r_pos p c then poly p d>0 else poly p d<0"
proof (cases "even (order c p) \<longleftrightarrow>sign_r_pos p c")
case True
then have "eventually (\<lambda>x. poly p x>0) (at_left c)"
using nzero[rule_format,of d,simplified] \<open>d<c\<close> sign_r_pos_at_left
by (simp add: order_root)
then obtain d' where "d'<c" and d'_pos:"\<forall>y>d'. y < c \<longrightarrow> 0 < poly p y"
unfolding eventually_at_left by auto
have False when "\<not> poly p d>0"
proof -
have "\<exists>x>d. x < (c + max d d') / 2 \<and> poly p x = 0"
apply (rule poly_IVT_pos)
using \<open>d'<c\<close> \<open>c>d\<close> that nzero[rule_format,of d,simplified]
by (auto intro:d'_pos[rule_format])
then show False using nzero \<open>c > d'\<close> by auto
qed
then show ?thesis using True by auto
next
case False
then have "eventually (\<lambda>x. poly p x<0) (at_left c)"
using nzero[rule_format,of d,simplified] \<open>d<c\<close> sign_r_pos_at_left
by (simp add: order_root)
then obtain d' where "d'<c" and d'_neg:"\<forall>y>d'. y < c \<longrightarrow> 0 > poly p y"
unfolding eventually_at_left by auto
have False when "\<not> poly p d<0"
proof -
have "\<exists>x>d. x < (c + max d d') / 2 \<and> poly p x = 0"
apply (rule poly_IVT_neg)
using \<open>d'<c\<close> \<open>c>d\<close> that nzero[rule_format,of d,simplified]
by (auto intro:d'_neg[rule_format])
then show False using nzero \<open>c > d'\<close> by auto
qed
then show ?thesis using False by auto
qed
subsection \<open>Fourier sequences\<close>
function pders::"real poly \<Rightarrow> real poly list" where
"pders p = (if p =0 then [] else Cons p (pders (pderiv p)))"
by auto
termination
apply (relation "measure (\<lambda>p. if p=0 then 0 else degree p + 1)")
by (auto simp:degree_pderiv pderiv_eq_0_iff)
declare pders.simps[simp del]
lemma set_pders_nzero:
assumes "p\<noteq>0" "q\<in>set (pders p)"
shows "q\<noteq>0"
using assms
proof (induct p rule:pders.induct)
case (1 p)
then have "q \<in> set (p # pders (pderiv p))"
by (simp add: pders.simps)
then have "q=p \<or> q\<in>set (pders (pderiv p))" by auto
moreover have ?case when "q=p"
using that \<open>p\<noteq>0\<close> by auto
moreover have ?case when "q\<in>set (pders (pderiv p))"
using 1 pders.simps by fastforce
ultimately show ?case by auto
qed
subsection \<open>Sign variations for Fourier sequences\<close>
definition changes_itv_der:: "real \<Rightarrow> real \<Rightarrow>real poly \<Rightarrow> int" where
"changes_itv_der a b p= (let ps= pders p in changes_poly_at ps a - changes_poly_at ps b)"
definition changes_gt_der:: "real \<Rightarrow>real poly \<Rightarrow> int" where
"changes_gt_der a p= changes_poly_at (pders p) a"
definition changes_le_der:: "real \<Rightarrow>real poly \<Rightarrow> int" where
"changes_le_der b p= (degree p - changes_poly_at (pders p) b)"
lemma changes_poly_pos_inf_pders[simp]:"changes_poly_pos_inf (pders p) = 0"
proof (induct "degree p" arbitrary:p)
case 0
then obtain a where "p=[:a:]" using degree_eq_zeroE by auto
then show ?case
apply (cases "a=0")
by (auto simp:changes_poly_pos_inf_def pders.simps)
next
case (Suc x)
then have "pderiv p\<noteq>0" "p\<noteq>0" using pderiv_eq_0_iff by force+
define ps where "ps=pders (pderiv (pderiv p))"
have ps:"pders p = p# pderiv p #ps" "pders (pderiv p) = pderiv p#ps"
unfolding ps_def by (simp_all add: \<open>p \<noteq> 0\<close> \<open>pderiv p \<noteq> 0\<close> pders.simps)
have hyps:"changes_poly_pos_inf (pders (pderiv p)) = 0"
apply (rule Suc(1))
using \<open>Suc x = degree p\<close> by (metis degree_pderiv diff_Suc_1)
moreover have "sgn_pos_inf p * sgn_pos_inf (pderiv p) >0"
unfolding sgn_pos_inf_def lead_coeff_pderiv
apply (simp add:algebra_simps sgn_mult)
using Suc.hyps(2) \<open>p \<noteq> 0\<close> by linarith
ultimately show ?case unfolding changes_poly_pos_inf_def ps by auto
qed
lemma changes_poly_neg_inf_pders[simp]: "changes_poly_neg_inf (pders p) = degree p"
proof (induct "degree p" arbitrary:p)
case 0
then obtain a where "p=[:a:]" using degree_eq_zeroE by auto
then show ?case unfolding changes_poly_neg_inf_def by (auto simp: pders.simps)
next
case (Suc x)
then have "pderiv p\<noteq>0" "p\<noteq>0" using pderiv_eq_0_iff by force+
then have "changes_poly_neg_inf (pders p)
= changes_poly_neg_inf (p # pderiv p#pders (pderiv (pderiv p)))"
by (simp add:pders.simps)
also have "... = 1 + changes_poly_neg_inf (pderiv p#pders (pderiv (pderiv p)))"
proof -
have "sgn_neg_inf p * sgn_neg_inf (pderiv p) < 0"
unfolding sgn_neg_inf_def using \<open>p\<noteq>0\<close> \<open>pderiv p\<noteq>0\<close>
by (auto simp add:lead_coeff_pderiv degree_pderiv coeff_pderiv sgn_mult pderiv_eq_0_iff)
then show ?thesis unfolding changes_poly_neg_inf_def by auto
qed
also have "... = 1 + changes_poly_neg_inf (pders (pderiv p))"
using \<open>pderiv p\<noteq>0\<close> by (simp add:pders.simps)
also have "... = 1 + degree (pderiv p)"
apply (subst Suc(1))
using Suc(2) by (auto simp add: degree_pderiv)
also have "... = degree p"
by (metis Suc.hyps(2) degree_pderiv diff_Suc_1 plus_1_eq_Suc)
finally show ?case .
qed
lemma pders_coeffs_sgn_eq:"map (\<lambda>p. sgn(poly p 0)) (pders p) = map sgn (coeffs p)"
proof (induct "degree p" arbitrary:p)
case 0
then obtain a where "p=[:a:]" using degree_eq_zeroE by auto
then show ?case by (auto simp: pders.simps)
next
case (Suc x)
then have "pderiv p\<noteq>0" "p\<noteq>0" using pderiv_eq_0_iff by force+
have "map (\<lambda>p. sgn (poly p 0)) (pders p)
= sgn (poly p 0)# map (\<lambda>p. sgn (poly p 0)) (pders (pderiv p))"
apply (subst pders.simps)
using \<open>p\<noteq>0\<close> by simp
also have "... = sgn (coeff p 0) # map sgn (coeffs (pderiv p))"
proof -
have "sgn (poly p 0) = sgn (coeff p 0)" by (simp add: poly_0_coeff_0)
then show ?thesis
apply (subst Suc(1))
subgoal by (metis Suc.hyps(2) degree_pderiv diff_Suc_1)
subgoal by auto
done
qed
also have "... = map sgn (coeffs p)"
proof (rule nth_equalityI)
show p_length:"length (sgn (coeff p 0) # map sgn (coeffs (pderiv p)))
= length (map sgn (coeffs p))"
by (metis Suc.hyps(2) \<open>p \<noteq> 0\<close> \<open>pderiv p \<noteq> 0\<close> degree_pderiv diff_Suc_1 length_Cons
length_coeffs_degree length_map)
show "(sgn (coeff p 0) # map sgn (coeffs (pderiv p))) ! i = map sgn (coeffs p) ! i"
if "i < length (sgn (coeff p 0) # map sgn (coeffs (pderiv p)))" for i
proof -
show "(sgn (coeff p 0) # map sgn (coeffs (pderiv p))) ! i = map sgn (coeffs p) ! i"
proof (cases i)
case 0
then show ?thesis
by (simp add: \<open>p \<noteq> 0\<close> coeffs_nth)
next
case (Suc i')
then show ?thesis
using that p_length
apply simp
apply (subst (1 2) coeffs_nth)
by (auto simp add: \<open>p \<noteq> 0\<close> \<open>pderiv p \<noteq> 0\<close> length_coeffs_degree coeff_pderiv sgn_mult)
qed
qed
qed
finally show ?case .
qed
lemma changes_poly_at_pders_0:"changes_poly_at (pders p) 0 = changes (coeffs p)"
unfolding changes_poly_at_def
apply (subst (1 2) changes_map_sgn_eq)
by (auto simp add:pders_coeffs_sgn_eq comp_def)
subsection \<open>Budan-Fourier theorem\<close>
lemma budan_fourier_aux_right:
assumes "c<d2" and "p\<noteq>0"
assumes "\<forall>x. c<x\<and> x\<le>d2 \<longrightarrow> (\<forall>q\<in>set (pders p). poly q x\<noteq>0)"
shows "changes_itv_der c d2 p=0"
using assms(2-3)
proof (induct "degree p" arbitrary:p)
case 0
then obtain a where "p=[:a:]" "a\<noteq>0" by (metis degree_eq_zeroE pCons_0_0)
then show ?case
by (auto simp add:changes_itv_der_def pders.simps intro:order_0I)
next
case (Suc n)
then have [simp]:"pderiv p\<noteq>0" by (metis nat.distinct(1) pderiv_eq_0_iff)
note nzero=\<open>\<forall>x. c < x \<and> x \<le> d2 \<longrightarrow> (\<forall>q\<in>set (pders p). poly q x \<noteq> 0)\<close>
have hyps:"changes_itv_der c d2 (pderiv p) = 0"
apply (rule Suc(1))
subgoal by (metis Suc.hyps(2) degree_pderiv diff_Suc_1)
subgoal by (simp add: Suc.prems(1) Suc.prems(2) pders.simps)
subgoal by (simp add: Suc.prems(1) nzero pders.simps)
done
have pders_changes_c:"changes_poly_at (r# pders q) c = (if sign_r_pos q c \<longleftrightarrow> poly r c>0
then changes_poly_at (pders q) c else 1+changes_poly_at (pders q) c)"
when "poly r c\<noteq>0" "q\<noteq>0" for q r
using \<open>q\<noteq>0\<close>
proof (induct q rule:pders.induct)
case (1 q)
have ?case when "pderiv q=0"
proof -
have "degree q=0" using that pderiv_eq_0_iff by blast
then obtain a where "q=[:a:]" "a\<noteq>0" using \<open>q\<noteq>0\<close> by (metis degree_eq_zeroE pCons_0_0)
then show ?thesis using \<open>poly r c\<noteq>0\<close>
by (auto simp add:sign_r_pos_rec changes_poly_at_def mult_less_0_iff pders.simps)
qed
moreover have ?case when "pderiv q\<noteq>0"
proof -
obtain qs where qs:"pders q=q#qs" "pders (pderiv q) = qs"
using \<open>q\<noteq>0\<close> by (simp add:pders.simps)
have "changes_poly_at (r # qs) c = (if sign_r_pos (pderiv q) c = (0 < poly r c)
then changes_poly_at qs c else 1 + changes_poly_at qs c)"
using 1 \<open>pderiv q\<noteq>0\<close> unfolding qs by simp
then show ?thesis unfolding qs
apply (cases "poly q c=0")
subgoal unfolding changes_poly_at_def by (auto simp:sign_r_pos_rec[OF \<open>q\<noteq>0\<close>,of c])
subgoal unfolding changes_poly_at_def using \<open>poly r c\<noteq>0\<close>
by (auto simp:sign_r_pos_rec[OF \<open>q\<noteq>0\<close>,of c] mult_less_0_iff)
done
qed
ultimately show ?case by blast
qed
have pders_changes_d2:"changes_poly_at (r# pders q) d2 = (if sign_r_pos q c \<longleftrightarrow> poly r c>0
then changes_poly_at (pders q) d2 else 1+changes_poly_at (pders q) d2)"
when "poly r c\<noteq>0" "q\<noteq>0" and qr_nzero:"\<forall>x. c < x \<and> x \<le> d2 \<longrightarrow> poly r x \<noteq> 0 \<and> poly q x\<noteq>0"
for q r
proof -
have "r\<noteq>0" using that(1) using poly_0 by blast
obtain qs where qs:"pders q=q#qs" "pders (pderiv q) = qs"
using \<open>q\<noteq>0\<close> by (simp add:pders.simps)
have "if sign_r_pos r c then 0 < poly r d2 else poly r d2 < 0"
"if sign_r_pos q c then 0 < poly q d2 else poly q d2 < 0"
subgoal by (rule sign_r_pos_nzero_right[of c d2 r]) (use qr_nzero \<open>c<d2\<close> in auto)
subgoal by (rule sign_r_pos_nzero_right[of c d2 q]) (use qr_nzero \<open>c<d2\<close> in auto)
done
then show ?thesis unfolding qs changes_poly_at_def
using \<open>poly r c\<noteq>0\<close> by (auto split:if_splits simp:mult_less_0_iff sign_r_pos_rec[OF \<open>r\<noteq>0\<close>])
qed
have d2c_nzero:"\<forall>x. c<x \<and> x\<le>d2 \<longrightarrow> poly p x\<noteq>0 \<and> poly (pderiv p) x \<noteq>0"
and p_cons:"pders p = p#pders(pderiv p)"
subgoal by (simp add: nzero Suc.prems(1) pders.simps)
subgoal by (simp add: Suc.prems(1) pders.simps)
done
have ?case when "poly p c=0"
proof -
define ps where "ps=pders (pderiv (pderiv p))"
have ps_cons:"p#pderiv p#ps = pders p" "pderiv p#ps=pders (pderiv p)"
unfolding ps_def using \<open>p\<noteq>0\<close> by (auto simp:pders.simps)
have "changes_poly_at (p # pderiv p # ps) c = changes_poly_at (pderiv p # ps) c"
unfolding changes_poly_at_def using that by auto
moreover have "changes_poly_at (p # pderiv p # ps) d2 = changes_poly_at (pderiv p # ps) d2"
proof -
have "if sign_r_pos p c then 0 < poly p d2 else poly p d2 < 0"
apply (rule sign_r_pos_nzero_right[OF _ \<open>c<d2\<close>])
using nzero[folded ps_cons] assms(1-2) by auto
moreover have "if sign_r_pos (pderiv p) c then 0 < poly (pderiv p) d2
else poly (pderiv p) d2 < 0"
apply (rule sign_r_pos_nzero_right[OF _ \<open>c<d2\<close>])
using nzero[folded ps_cons] assms(1-2) by auto
ultimately have "poly p d2 * poly (pderiv p) d2 > 0"
unfolding zero_less_mult_iff sign_r_pos_rec[OF \<open>p\<noteq>0\<close>] using \<open>poly p c=0\<close>
by (auto split:if_splits)
then show ?thesis unfolding changes_poly_at_def by auto
qed
ultimately show ?thesis using hyps unfolding changes_itv_der_def
apply (fold ps_cons)
by (auto simp:Let_def)
qed
moreover have ?case when "poly p c\<noteq>0" "sign_r_pos (pderiv p) c \<longleftrightarrow> poly p c>0"
proof -
have "changes_poly_at (pders p) c = changes_poly_at (pders (pderiv p)) c"
unfolding p_cons
apply (subst pders_changes_c[OF \<open>poly p c\<noteq>0\<close>])
using that by auto
moreover have "changes_poly_at (pders p) d2 = changes_poly_at (pders (pderiv p)) d2"
unfolding p_cons
apply (subst pders_changes_d2[OF \<open>poly p c\<noteq>0\<close> _ d2c_nzero])
using that by auto
ultimately show ?thesis using hyps unfolding changes_itv_der_def Let_def
by auto
qed
moreover have ?case when "poly p c\<noteq>0" "\<not> sign_r_pos (pderiv p) c \<longleftrightarrow> poly p c>0"
proof -
have "changes_poly_at (pders p) c = changes_poly_at (pders (pderiv p)) c +1"
unfolding p_cons
apply (subst pders_changes_c[OF \<open>poly p c\<noteq>0\<close>])
using that by auto
moreover have "changes_poly_at (pders p) d2 = changes_poly_at (pders (pderiv p)) d2 + 1"
unfolding p_cons
apply (subst pders_changes_d2[OF \<open>poly p c\<noteq>0\<close> _ d2c_nzero])
using that by auto
ultimately show ?thesis using hyps unfolding changes_itv_der_def Let_def
by auto
qed
ultimately show ?case by blast
qed
lemma budan_fourier_aux_left':
assumes "d1<c" and "p\<noteq>0"
assumes "\<forall>x. d1\<le>x\<and> x<c \<longrightarrow> (\<forall>q\<in>set (pders p). poly q x\<noteq>0)"
shows "changes_itv_der d1 c p \<ge> order c p \<and> even (changes_itv_der d1 c p - order c p)"
using assms(2-3)
proof (induct "degree p" arbitrary:p)
case 0
then obtain a where "p=[:a:]" "a\<noteq>0" by (metis degree_eq_zeroE pCons_0_0)
then show ?case
apply (auto simp add:changes_itv_der_def pders.simps intro:order_0I)
by (metis add.right_neutral dvd_0_right mult_zero_right order_root poly_pCons)
next
case (Suc n)
then have [simp]:"pderiv p\<noteq>0" by (metis nat.distinct(1) pderiv_eq_0_iff)
note nzero=\<open>\<forall>x. d1 \<le> x \<and> x < c \<longrightarrow> (\<forall>q\<in>set (pders p). poly q x \<noteq> 0)\<close>
define v where "v=order c (pderiv p)"
have hyps:"v \<le> changes_itv_der d1 c (pderiv p) \<and> even (changes_itv_der d1 c (pderiv p) - v)"
unfolding v_def
apply (rule Suc(1))
subgoal by (metis Suc.hyps(2) degree_pderiv diff_Suc_1)
subgoal by (simp add: Suc.prems(1) Suc.prems(2) pders.simps)
subgoal by (simp add: Suc.prems(1) nzero pders.simps)
done
have pders_changes_c:"changes_poly_at (r# pders q) c = (if sign_r_pos q c \<longleftrightarrow> poly r c>0
then changes_poly_at (pders q) c else 1+changes_poly_at (pders q) c)"
when "poly r c\<noteq>0" "q\<noteq>0" for q r
using \<open>q\<noteq>0\<close>
proof (induct q rule:pders.induct)
case (1 q)
have ?case when "pderiv q=0"
proof -
have "degree q=0" using that pderiv_eq_0_iff by blast
then obtain a where "q=[:a:]" "a\<noteq>0" using \<open>q\<noteq>0\<close> by (metis degree_eq_zeroE pCons_0_0)
then show ?thesis using \<open>poly r c\<noteq>0\<close>
by (auto simp add:sign_r_pos_rec changes_poly_at_def mult_less_0_iff pders.simps)
qed
moreover have ?case when "pderiv q\<noteq>0"
proof -
obtain qs where qs:"pders q=q#qs" "pders (pderiv q) = qs"
using \<open>q\<noteq>0\<close> by (simp add:pders.simps)
have "changes_poly_at (r # qs) c = (if sign_r_pos (pderiv q) c = (0 < poly r c)
then changes_poly_at qs c else 1 + changes_poly_at qs c)"
using 1 \<open>pderiv q\<noteq>0\<close> unfolding qs by simp
then show ?thesis unfolding qs
apply (cases "poly q c=0")
subgoal unfolding changes_poly_at_def by (auto simp:sign_r_pos_rec[OF \<open>q\<noteq>0\<close>,of c])
subgoal unfolding changes_poly_at_def using \<open>poly r c\<noteq>0\<close>
by (auto simp:sign_r_pos_rec[OF \<open>q\<noteq>0\<close>,of c] mult_less_0_iff)
done
qed
ultimately show ?case by blast
qed
have pders_changes_d1:"changes_poly_at (r# pders q) d1 = (if even (order c q) \<longleftrightarrow> sign_r_pos q c \<longleftrightarrow> poly r c>0
then changes_poly_at (pders q) d1 else 1+changes_poly_at (pders q) d1)"
when "poly r c\<noteq>0" "q\<noteq>0" and qr_nzero:"\<forall>x. d1 \<le> x \<and> x < c \<longrightarrow> poly r x \<noteq> 0 \<and> poly q x\<noteq>0"
for q r
proof -
have "r\<noteq>0" using that(1) using poly_0 by blast
obtain qs where qs:"pders q=q#qs" "pders (pderiv q) = qs"
using \<open>q\<noteq>0\<close> by (simp add:pders.simps)
have "if even (order c r) = sign_r_pos r c then 0 < poly r d1 else poly r d1 < 0"
"if even (order c q) = sign_r_pos q c then 0 < poly q d1 else poly q d1 < 0"
subgoal by (rule sign_r_pos_nzero_left[of d1 c r]) (use qr_nzero \<open>d1<c\<close> in auto)
subgoal by (rule sign_r_pos_nzero_left[of d1 c q]) (use qr_nzero \<open>d1<c\<close> in auto)
done
moreover have "order c r=0" by (simp add: order_0I that(1))
ultimately show ?thesis unfolding qs changes_poly_at_def
using \<open>poly r c\<noteq>0\<close> by (auto split:if_splits simp:mult_less_0_iff sign_r_pos_rec[OF \<open>r\<noteq>0\<close>])
qed
have d1c_nzero:"\<forall>x. d1 \<le> x \<and> x < c \<longrightarrow> poly p x \<noteq> 0 \<and> poly (pderiv p) x \<noteq> 0"
and p_cons:"pders p = p#pders(pderiv p)"
by (simp_all add: nzero Suc.prems(1) pders.simps)
have ?case when "poly p c=0"
proof -
define ps where "ps=pders (pderiv (pderiv p))"
have ps_cons:"p#pderiv p#ps = pders p" "pderiv p#ps=pders (pderiv p)"
unfolding ps_def using \<open>p\<noteq>0\<close> by (auto simp:pders.simps)
have p_order:"order c p = Suc v"
apply (subst order_pderiv)
using Suc.prems(1) order_root that unfolding v_def by auto
moreover have "changes_poly_at (p#pderiv p # ps) d1 = changes_poly_at (pderiv p#ps) d1 +1"
proof -
have "if even (order c p) = sign_r_pos p c then 0 < poly p d1 else poly p d1 < 0"
apply (rule sign_r_pos_nzero_left[OF _ \<open>d1<c\<close>])
using nzero[folded ps_cons] assms(1-2) by auto
moreover have "if even v = sign_r_pos (pderiv p) c
then 0 < poly (pderiv p) d1 else poly (pderiv p) d1 < 0"
unfolding v_def
apply (rule sign_r_pos_nzero_left[OF _ \<open>d1<c\<close>])
using nzero[folded ps_cons] assms(1-2) by auto
ultimately have "poly p d1 * poly (pderiv p) d1 < 0"
unfolding mult_less_0_iff sign_r_pos_rec[OF \<open>p\<noteq>0\<close>] using \<open>poly p c=0\<close> p_order
by (auto split:if_splits)
then show ?thesis
unfolding changes_poly_at_def by auto
qed
moreover have "changes_poly_at (p # pderiv p # ps) c = changes_poly_at (pderiv p # ps) c"
unfolding changes_poly_at_def using that by auto
ultimately show ?thesis using hyps unfolding changes_itv_der_def
apply (fold ps_cons)
by (auto simp:Let_def)
qed
moreover have ?case when "poly p c\<noteq>0" "odd v" "sign_r_pos (pderiv p) c \<longleftrightarrow> poly p c>0"
proof -
have "order c p=0" by (simp add: order_0I that(1))
moreover have "changes_poly_at (pders p) d1 = changes_poly_at (pders (pderiv p)) d1 +1"
unfolding p_cons
apply (subst pders_changes_d1[OF \<open>poly p c\<noteq>0\<close> _ d1c_nzero])
using that unfolding v_def by auto
moreover have "changes_poly_at (pders p) c = changes_poly_at (pders (pderiv p)) c"
unfolding p_cons
apply (subst pders_changes_c[OF \<open>poly p c\<noteq>0\<close>])
using that unfolding v_def by auto
ultimately show ?thesis using hyps \<open>odd v\<close> unfolding changes_itv_der_def Let_def
by auto
qed
moreover have ?case when "poly p c\<noteq>0" "odd v" "\<not> sign_r_pos (pderiv p) c \<longleftrightarrow> poly p c>0"
proof -
have "v\<ge>1" using \<open>odd v\<close> using not_less_eq_eq by auto
moreover have "order c p=0" by (simp add: order_0I that(1))
moreover have "changes_poly_at (pders p) d1 = changes_poly_at (pders (pderiv p)) d1"
unfolding p_cons
apply (subst pders_changes_d1[OF \<open>poly p c\<noteq>0\<close> _ d1c_nzero])
using that unfolding v_def by auto
moreover have "changes_poly_at (pders p) c = changes_poly_at (pders (pderiv p)) c + 1"
unfolding p_cons
apply (subst pders_changes_c[OF \<open>poly p c\<noteq>0\<close>])
using that unfolding v_def by auto
ultimately show ?thesis using hyps \<open>odd v\<close> unfolding changes_itv_der_def Let_def
by auto
qed
moreover have ?case when "poly p c\<noteq>0" "even v" "sign_r_pos (pderiv p) c \<longleftrightarrow> poly p c>0"
proof -
have "order c p=0" by (simp add: order_0I that(1))
moreover have "changes_poly_at (pders p) d1 = changes_poly_at (pders (pderiv p)) d1"
unfolding p_cons
apply (subst pders_changes_d1[OF \<open>poly p c\<noteq>0\<close> _ d1c_nzero])
using that unfolding v_def by auto
moreover have "changes_poly_at (pders p) c = changes_poly_at (pders (pderiv p)) c"
unfolding p_cons
apply (subst pders_changes_c[OF \<open>poly p c\<noteq>0\<close>])
using that unfolding v_def by auto
ultimately show ?thesis using hyps \<open>even v\<close> unfolding changes_itv_der_def Let_def
by auto
qed
moreover have ?case when "poly p c\<noteq>0" "even v" "\<not> sign_r_pos (pderiv p) c \<longleftrightarrow> poly p c>0"
proof -
have "order c p=0" by (simp add: order_0I that(1))
moreover have "changes_poly_at (pders p) d1 = changes_poly_at (pders (pderiv p)) d1 + 1"
unfolding p_cons
apply (subst pders_changes_d1[OF \<open>poly p c\<noteq>0\<close> _ d1c_nzero])
using that unfolding v_def by auto
moreover have "changes_poly_at (pders p) c = changes_poly_at (pders (pderiv p)) c +1"
unfolding p_cons
apply (subst pders_changes_c[OF \<open>poly p c\<noteq>0\<close>])
using that unfolding v_def by auto
ultimately show ?thesis using hyps \<open>even v\<close> unfolding changes_itv_der_def Let_def
by auto
qed
ultimately show ?case by blast
qed
lemma budan_fourier_aux_left:
assumes "d1<c" and "p\<noteq>0"
assumes nzero:"\<forall>x. d1<x\<and> x<c \<longrightarrow> (\<forall>q\<in>set (pders p). poly q x\<noteq>0)"
shows "changes_itv_der d1 c p \<ge> order c p" "even (changes_itv_der d1 c p - order c p)"
proof -
define d where "d=(d1+c)/2"
have "d1<d" "d<c" unfolding d_def using \<open>d1<c\<close> by auto
have "changes_itv_der d1 d p = 0"
apply (rule budan_fourier_aux_right[OF \<open>d1<d\<close> \<open>p\<noteq>0\<close>])
using nzero \<open>d1<d\<close> \<open>d<c\<close> by auto
moreover have "order c p \<le> changes_itv_der d c p \<and> even (changes_itv_der d c p - order c p)"
apply (rule budan_fourier_aux_left'[OF \<open>d<c\<close> \<open>p\<noteq>0\<close>])
using nzero \<open>d1<d\<close> \<open>d<c\<close> by auto
ultimately show "changes_itv_der d1 c p \<ge> order c p" "even (changes_itv_der d1 c p - order c p)"
unfolding changes_itv_der_def Let_def by auto
qed
theorem budan_fourier_interval:
assumes "a<b" "p\<noteq>0"
shows "changes_itv_der a b p \<ge> proots_count p {x. a< x \<and> x\<le> b} \<and>
even (changes_itv_der a b p - proots_count p {x. a< x \<and> x\<le> b})"
using \<open>a<b\<close>
proof (induct "card {x. \<exists>p\<in>set (pders p). poly p x=0 \<and> a<x \<and> x<b}" arbitrary:b)
case 0
have nzero:"\<forall>x. a<x \<and> x<b \<longrightarrow> (\<forall>q\<in>set (pders p). poly q x\<noteq>0)"
proof -
define S where "S={x. \<exists>p\<in>set (pders p). poly p x = 0 \<and> a < x \<and> x < b}"
have "finite S"
proof -
have "S \<subseteq> (\<Union>p\<in>set (pders p). proots p)"
unfolding S_def by auto
moreover have "finite (\<Union>p\<in>set (pders p). proots p)"
apply (subst finite_UN)
using set_pders_nzero[OF \<open>p\<noteq>0\<close>] by auto
ultimately show ?thesis by (simp add: finite_subset)
qed
moreover have "card S = 0" unfolding S_def using 0 by auto
ultimately have "S={}" by auto
then show ?thesis unfolding S_def using \<open>a<b\<close> assms(2) pders.simps by fastforce
qed
from budan_fourier_aux_left[OF \<open>a<b\<close> \<open>p\<noteq>0\<close> this]
have "order b p \<le> changes_itv_der a b p" "even (changes_itv_der a b p - order b p)" by simp_all
moreover have "proots_count p {x. a< x \<and> x\<le> b} = order b p"
proof -
have p_cons:"pders p=p#pders (pderiv p)" by (simp add: assms(2) pders.simps)
have "proots_within p {x. a < x \<and> x \<le> b} = (if poly p b=0 then {b} else {})"
using nzero \<open>a< b\<close> unfolding p_cons
apply auto
using not_le by fastforce
then show ?thesis unfolding proots_count_def using order_root by auto
qed
ultimately show ?case by auto
next
case (Suc n)
define P where "P=(\<lambda>x. \<exists>p\<in>set (pders p). poly p x = 0)"
define S where "S=(\<lambda>b. {x. P x \<and> a < x \<and> x < b})"
define b' where "b'=Max (S b)"
have f_S:"finite (S x)" for x
proof -
have "S x \<subseteq> (\<Union>p\<in>set (pders p). proots p)"
unfolding S_def P_def by auto
moreover have "finite (\<Union>p\<in>set (pders p). proots p)"
apply (subst finite_UN)
using set_pders_nzero[OF \<open>p\<noteq>0\<close>] by auto
ultimately show ?thesis by (simp add: finite_subset)
qed
have "b'\<in>S b"
unfolding b'_def
apply (rule Max_in[OF f_S])
using Suc(2) unfolding S_def P_def by force
then have "a<b'" "b'<b" unfolding S_def by auto
have b'_nzero:"\<forall>x. b'<x \<and> x<b \<longrightarrow> (\<forall>q\<in>set (pders p). poly q x\<noteq>0)"
proof (rule ccontr)
assume "\<not> (\<forall>x. b' < x \<and> x < b \<longrightarrow> (\<forall>q\<in>set (pders p). poly q x \<noteq> 0))"
then obtain bb where "P bb" "b'<bb" "bb<b" unfolding P_def by auto
then have "bb\<in>S b" unfolding S_def using \<open>a<b'\<close> \<open>b'<b\<close> by auto
from Max_ge[OF f_S this, folded b'_def] have "bb \<le> b'" .
then show False using \<open>b'<bb\<close> by auto
qed
have hyps:"proots_count p {x. a < x \<and> x \<le> b'} \<le> changes_itv_der a b' p \<and>
even (changes_itv_der a b' p - proots_count p {x. a < x \<and> x \<le> b'})"
proof (rule Suc(1)[OF _ \<open>a<b'\<close>])
have "S b= {b'} \<union> S b'"
proof -
have "{x. P x \<and> b' < x \<and> x < b} = {}"
using b'_nzero unfolding P_def by auto
then have "{x. P x\<and> b' \<le> x \<and> x < b} = {b'}"
using \<open>b'\<in>S b\<close> unfolding S_def by force
moreover have "S b= S b' \<union> {x. P x \<and> b' \<le> x \<and> x < b}"
unfolding S_def using \<open>a<b'\<close> \<open>b'<b\<close> by auto
ultimately show ?thesis by auto
qed
moreover have "Suc n = card (S b)" using Suc(2) unfolding S_def P_def by simp
moreover have "b'\<notin>S b'" unfolding S_def by auto
ultimately have "n=card (S b')" using f_S by auto
then show "n = card {x. \<exists>p\<in>set (pders p). poly p x = 0 \<and> a < x \<and> x < b'}"
unfolding S_def P_def by simp
qed
moreover have "proots_count p {x. a < x \<and> x \<le> b}
= proots_count p {x. a < x \<and> x \<le> b'} + order b p"
proof -
have p_cons:"pders p=p#pders (pderiv p)" by (simp add: assms(2) pders.simps)
have "proots_within p {x. b' < x \<and> x \<le> b} = (if poly p b=0 then {b} else {})"
using b'_nzero \<open>b' < b\<close> unfolding p_cons
apply auto
using not_le by fastforce
then have "proots_count p {x. b' < x \<and> x \<le> b} = order b p"
unfolding proots_count_def using order_root by auto
moreover have "proots_count p {x. a < x \<and> x \<le> b} = proots_count p {x. a < x \<and> x \<le> b'} +
proots_count p {x. b' < x \<and> x \<le> b}"
apply (subst proots_count_union_disjoint[symmetric])
using \<open>a<b'\<close> \<open>b'<b\<close> \<open>p\<noteq>0\<close> by (auto intro:arg_cong2[where f=proots_count])
ultimately show ?thesis by auto
qed
moreover note budan_fourier_aux_left[OF \<open>b'<b\<close> \<open>p\<noteq>0\<close> b'_nzero]
ultimately show ?case unfolding changes_itv_der_def Let_def by auto
qed
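The interval bound just proved can be checked numerically. The following Python sketch is my own illustration and is not part of the formal development: it builds the derivative sequence of a polynomial (the analogue of `pders`), counts sign changes of that sequence at a point (the analogue of `changes_poly_at`), and forms the Budan-Fourier bound for an interval (a, b].

```python
# Numeric sanity check of the Budan-Fourier interval bound (my own Python
# illustration, not part of the formal Isabelle development).
# A polynomial is a coefficient list, highest power first.

def derivative(p):
    """Derivative of [c_n, ..., c_0]."""
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def pders(p):
    """The sequence p, p', p'', ... down to the constant derivative."""
    seq = [p]
    while len(seq[-1]) > 1:
        seq.append(derivative(seq[-1]))
    return seq

def evaluate(p, x):
    """Horner evaluation of p at x."""
    acc = 0.0
    for c in p:
        acc = acc * x + c
    return acc

def sign_changes_at(p, x):
    """Sign changes in p(x), p'(x), p''(x), ..., with zeros dropped."""
    vals = [evaluate(q, x) for q in pders(p)]
    nonzero = [v for v in vals if v != 0]
    return sum(1 for u, v in zip(nonzero, nonzero[1:]) if u * v < 0)

def budan_fourier_bound(p, a, b):
    """Upper bound on the number of roots in (a, b], multiplicity counted."""
    return sign_changes_at(p, a) - sign_changes_at(p, b)

# p(x) = (x-1)(x-2)(x-3) has exactly 3 roots in (0, 4] and 2 roots in (0, 2.5].
p = [1, -6, 11, -6]
print(budan_fourier_bound(p, 0.0, 4.0))   # -> 3 (matches the roots 1, 2, 3)
print(budan_fourier_bound(p, 0.0, 2.5))   # -> 2 (matches the roots 1, 2)
```

In both cases the bound is attained, so the even difference guaranteed by the theorem is 0 here.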
theorem budan_fourier_gt:
assumes "p\<noteq>0"
shows "changes_gt_der a p \<ge> proots_count p {x. a< x} \<and>
even (changes_gt_der a p - proots_count p {x. a< x})"
proof -
define ps where "ps=pders p"
obtain ub where ub_root:"\<forall>p\<in>set ps. \<forall>x. poly p x = 0 \<longrightarrow> x < ub"
and ub_sgn:"\<forall>x\<ge>ub. \<forall>p\<in>set ps. sgn (poly p x) = sgn_pos_inf p"
and "a < ub"
using root_list_ub[of ps a] set_pders_nzero[OF \<open>p\<noteq>0\<close>,folded ps_def] by blast
have "proots_count p {x. a< x} = proots_count p {x. a< x \<and> x \<le> ub}"
proof -
have "p\<in>set ps" unfolding ps_def by (simp add: assms pders.simps)
then have "proots_within p {x. a< x} = proots_within p {x. a< x \<and> x\<le>ub}"
using ub_root by fastforce
then show ?thesis unfolding proots_count_def by auto
qed
moreover have "changes_gt_der a p = changes_itv_der a ub p"
proof -
have "map (sgn \<circ> (\<lambda>p. poly p ub)) ps = map sgn_pos_inf ps"
using ub_sgn[THEN spec,of ub,simplified]
by (metis (mono_tags, lifting) comp_def list.map_cong0)
hence "changes_poly_at ps ub=changes_poly_pos_inf ps"
unfolding changes_poly_pos_inf_def changes_poly_at_def
by (subst changes_map_sgn_eq,metis map_map)
then have "changes_poly_at ps ub=0" unfolding ps_def by simp
thus ?thesis unfolding changes_gt_der_def changes_itv_der_def ps_def
by (simp add:Let_def)
qed
moreover have "proots_count p {x. a < x \<and> x \<le> ub} \<le> changes_itv_der a ub p \<and>
even (changes_itv_der a ub p - proots_count p {x. a < x \<and> x \<le> ub})"
using budan_fourier_interval[OF \<open>a<ub\<close> \<open>p\<noteq>0\<close>] .
ultimately show ?thesis by auto
qed
text \<open>Descartes' rule of signs is a direct consequence of the Budan-Fourier theorem\<close>
theorem descartes_sign:
fixes p::"real poly"
assumes "p\<noteq>0"
shows " changes (coeffs p) \<ge> proots_count p {x. 0 < x} \<and>
even (changes (coeffs p) - proots_count p {x. 0< x})"
using budan_fourier_gt[OF \<open>p\<noteq>0\<close>,of 0] unfolding changes_gt_der_def
by (simp add:changes_poly_at_pders_0)
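Descartes' rule as stated above is easy to try on concrete polynomials. The sketch below is my own illustration, not extracted from the formalization; note that Isabelle's `coeffs` lists coefficients lowest power first, but reversing a list does not change its number of sign changes, so the order is irrelevant for this check.

```python
# Descartes' rule of signs, checked numerically (my own Python illustration).
# The number of sign changes in the coefficient list bounds the number of
# positive roots, and the difference is even.

def sign_changes(coeffs):
    """Sign changes in a coefficient sequence, zeros skipped."""
    nonzero = [c for c in coeffs if c != 0]
    return sum(1 for u, v in zip(nonzero, nonzero[1:]) if u * v < 0)

# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6: 3 sign changes, 3 positive roots.
print(sign_changes([1, -6, 11, -6]))  # -> 3
# x^2 + 1: no sign changes, and indeed no positive roots.
print(sign_changes([1, 0, 1]))        # -> 0
```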
theorem budan_fourier_le:
assumes "p\<noteq>0"
shows "changes_le_der b p \<ge> proots_count p {x. x \<le>b} \<and>
even (changes_le_der b p - proots_count p {x. x \<le>b})"
proof -
define ps where "ps=pders p"
obtain lb where lb_root:"\<forall>p\<in>set ps. \<forall>x. poly p x = 0 \<longrightarrow> x > lb"
and lb_sgn:"\<forall>x\<le>lb. \<forall>p\<in>set ps. sgn (poly p x) = sgn_neg_inf p"
and "lb < b"
using root_list_lb[of ps b] set_pders_nzero[OF \<open>p\<noteq>0\<close>,folded ps_def] by blast
have "proots_count p {x. x \<le>b} = proots_count p {x. lb< x \<and> x \<le> b}"
proof -
have "p\<in>set ps" unfolding ps_def by (simp add: assms pders.simps)
then have "proots_within p {x. x \<le>b} = proots_within p {x. lb< x \<and> x\<le>b}"
using lb_root by fastforce
then show ?thesis unfolding proots_count_def by auto
qed
moreover have "changes_le_der b p = changes_itv_der lb b p"
proof -
have "map (sgn \<circ> (\<lambda>p. poly p lb)) ps = map sgn_neg_inf ps"
using lb_sgn[THEN spec,of lb,simplified]
by (metis (mono_tags, lifting) comp_def list.map_cong0)
hence "changes_poly_at ps lb=changes_poly_neg_inf ps"
unfolding changes_poly_neg_inf_def changes_poly_at_def
by (subst changes_map_sgn_eq,metis map_map)
then have "changes_poly_at ps lb=degree p" unfolding ps_def by simp
thus ?thesis unfolding changes_le_der_def changes_itv_der_def ps_def
by (simp add:Let_def)
qed
moreover have "proots_count p {x. lb < x \<and> x \<le> b} \<le> changes_itv_der lb b p \<and>
even (changes_itv_der lb b p - proots_count p {x. lb < x \<and> x \<le> b})"
using budan_fourier_interval[OF \<open>lb<b\<close> \<open>p\<noteq>0\<close>] .
ultimately show ?thesis by auto
qed
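The "roots at most b" variant replaces evaluation at a finite left endpoint by the signs at negative infinity, where a degree-k polynomial with leading coefficient c has sign sgn(c) * (-1)^k (this is the role of `sgn_neg_inf`, and the proof above shows the sign-change count there equals the degree). The following Python sketch is my own rendering of that idea, not code from the formalization.

```python
# Sketch of the "number of roots <= b" bound behind changes_le_der
# (my own Python illustration, not part of the formal development).

def derivative(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def pders(p):
    seq = [p]
    while len(seq[-1]) > 1:
        seq.append(derivative(seq[-1]))
    return seq

def evaluate(p, x):
    acc = 0.0
    for c in p:
        acc = acc * x + c
    return acc

def changes(vals):
    nonzero = [v for v in vals if v != 0]
    return sum(1 for u, v in zip(nonzero, nonzero[1:]) if u * v < 0)

def sign_changes_neg_inf(p):
    """Sign changes of the derivative sequence 'at negative infinity':
    a degree-k polynomial with leading coefficient c has sign c * (-1)^k."""
    return changes([q[0] * (-1) ** (len(q) - 1) for q in pders(p)])

def roots_le_bound(p, b):
    """Upper bound on the number of roots <= b, multiplicity counted."""
    return sign_changes_neg_inf(p) - changes([evaluate(q, b) for q in pders(p)])

# p(x) = (x-1)(x-2)(x-3): all three roots are <= 4; only the root 1 is <= 1.5.
p = [1, -6, 11, -6]
print(roots_le_bound(p, 4.0))   # -> 3
print(roots_le_bound(p, 1.5))   # -> 1
```

Note that `sign_changes_neg_inf(p)` equals the degree of p here (3), matching the step `changes_poly_at ps lb = degree p` in the proof above.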
subsection \<open>Count exactly when all roots are real\<close>
definition all_roots_real:: "real poly \<Rightarrow> bool " where
"all_roots_real p = (\<forall>r\<in>proots (map_poly of_real p). Im r=0)"
lemma all_roots_real_mult[simp]:
"all_roots_real (p*q) \<longleftrightarrow> all_roots_real p \<and> all_roots_real q"
unfolding all_roots_real_def by auto
lemma all_roots_real_const_iff:
assumes all_real:"all_roots_real p"
shows "degree p\<noteq>0 \<longleftrightarrow> (\<exists>x. poly p x=0)"
proof
assume "degree p \<noteq> 0"
moreover have "degree p=0" when "\<forall>x. poly p x\<noteq>0"
proof -
define pp where "pp=map_poly complex_of_real p"
have "\<forall>x. poly pp x\<noteq>0"
proof (rule ccontr)
assume "\<not> (\<forall>x. poly pp x \<noteq> 0)"
then obtain x where "poly pp x=0" by auto
moreover have "Im x=0"
using all_real[unfolded all_roots_real_def, rule_format,of x,folded pp_def] \<open>poly pp x=0\<close>
by auto
ultimately have "poly pp (of_real (Re x)) = 0"
by (simp add: complex_is_Real_iff)
then have "poly p (Re x) = 0"
unfolding pp_def
by (metis Re_complex_of_real of_real_poly_map_poly zero_complex.simps(1))
then show False using that by simp
qed
then obtain a where "pp=[:of_real a:]" "a\<noteq>0"
by (metis \<open>degree p \<noteq> 0\<close> constant_degree degree_map_poly
fundamental_theorem_of_algebra of_real_eq_0_iff pp_def)
then have "p=[:a:]" unfolding pp_def
by (metis map_poly_0 map_poly_pCons of_real_0 of_real_poly_eq_iff)
then show ?thesis by auto
qed
ultimately show "\<exists>x. poly p x = 0" by auto
next
assume "\<exists>x. poly p x = 0"
then show "degree p \<noteq> 0"
by (metis UNIV_I all_roots_real_def assms degree_pCons_eq_if
imaginary_unit.sel(2) map_poly_0 nat.simps(3) order_root pCons_eq_0_iff
proots_within_iff synthetic_div_eq_0_iff synthetic_div_pCons zero_neq_one)
qed
lemma all_roots_real_degree:
assumes "all_roots_real p"
shows "proots_count p UNIV =degree p" using assms
proof (induct p rule:poly_root_induct_alt)
case 0
then have False using imaginary_unit.sel(2) unfolding all_roots_real_def by auto
then show ?case by simp
next
case (no_proots p)
from all_roots_real_const_iff[OF this(2)] this(1)
have "degree p=0" by auto
then obtain a where "p=[:a:]" "a\<noteq>0"
by (metis degree_eq_zeroE no_proots.hyps poly_const_conv)
then have "proots p={}" by auto
then show ?case using \<open>p=[:a:]\<close> by (simp add:proots_count_def)
next
case (root a p)
define a1 where "a1=[:- a, 1:]"
have "p\<noteq>0" using root.prems
apply auto
using imaginary_unit.sel(2) unfolding all_roots_real_def by auto
have "a1\<noteq>0" unfolding a1_def by auto
have "proots_count (a1 * p) UNIV = proots_count a1 UNIV + proots_count p UNIV"
using \<open>p\<noteq>0\<close> \<open>a1\<noteq>0\<close> by (subst proots_count_times,auto)
also have "... = 1 + degree p"
proof -
have "proots_count a1 UNIV = 1" unfolding a1_def by (simp add: proots_count_pCons_1_iff)
moreover have hyps:"proots_count p UNIV = degree p"
apply (rule root.hyps)
using root.prems[folded a1_def] unfolding all_roots_real_def by auto
ultimately show ?thesis by auto
qed
also have "... = degree (a1*p)"
apply (subst degree_mult_eq)
using \<open>a1\<noteq>0\<close> \<open>p\<noteq>0\<close> unfolding a1_def by auto
finally show ?case unfolding a1_def .
qed
lemma all_real_roots_mobius:
fixes a b::real
assumes "all_roots_real p" and "a<b"
shows "all_roots_real (fcompose p [:a,b:] [:1,1:])" using assms(1)
proof (induct p rule:poly_root_induct_alt)
case 0
then show ?case by simp
next
case (no_proots p)
from all_roots_real_const_iff[OF this(2)] this(1)
have "degree p=0" by auto
then obtain a where "p=[:a:]" "a\<noteq>0"
by (metis degree_eq_zeroE no_proots.hyps poly_const_conv)
then show ?case by (auto simp add:all_roots_real_def)
next
case (root x p)
define x1 where "x1=[:- x, 1:]"
define fx where "fx=fcompose x1 [:a, b:] [:1, 1:]"
have "all_roots_real fx"
proof (cases "x=b")
case True
then have "fx = [:a-x:]" "a\<noteq>x"
subgoal unfolding fx_def by (simp add:fcompose_def smult_add_right x1_def)
subgoal using \<open>a<b\<close> True by auto
done
then have "proots (map_poly complex_of_real fx) = {}"
by auto
then show ?thesis unfolding all_roots_real_def by auto
next
case False
then have "fx = [:a-x,b-x:]"
unfolding fx_def by (simp add:fcompose_def smult_add_right x1_def)
then have "proots (map_poly complex_of_real fx) = {of_real ((x-a)/(b-x))}"
using False by (auto simp add:field_simps)
then show ?thesis unfolding all_roots_real_def by auto
qed
moreover have "all_roots_real (fcompose p [:a, b:] [:1, 1:])"
using root[folded x1_def] all_roots_real_mult by auto
ultimately show ?case
apply (fold x1_def)
by (auto simp add:fcompose_mult fx_def)
qed
text \<open>If all roots are real, we can use the
Budan-Fourier theorem to EXACTLY count the number of real roots.\<close>
corollary budan_fourier_real:
assumes "p\<noteq>0"
assumes "all_roots_real p"
shows "proots_count p {x. x \<le>a} = changes_le_der a p"
"a<b \<Longrightarrow> proots_count p {x. a <x \<and> x \<le>b} = changes_itv_der a b p"
"proots_count p {x. b <x} = changes_gt_der b p"
proof -
have *:"proots_count p {x. x \<le>a} = changes_le_der a p
\<and> proots_count p {x. a <x \<and> x \<le>b} = changes_itv_der a b p
\<and> proots_count p {x. b <x} = changes_gt_der b p"
when "a<b" for a b
proof -
define c1 c2 c3 where
"c1=changes_le_der a p - proots_count p {x. x \<le>a}" and
"c2=changes_itv_der a b p - proots_count p {x. a <x \<and> x \<le>b}" and
"c3=changes_gt_der b p - proots_count p {x. b <x}"
have "c1\<ge>0" "c2\<ge>0" "c3\<ge>0"
using budan_fourier_interval[OF \<open>a<b\<close> \<open>p\<noteq>0\<close>] budan_fourier_gt[OF \<open>p\<noteq>0\<close>,of b]
budan_fourier_le[OF \<open>p\<noteq>0\<close>,of a]
unfolding c1_def c2_def c3_def by auto
moreover have "c1+c2+c3=0"
proof -
have proots_deg:"proots_count p UNIV =degree p"
using all_roots_real_degree[OF \<open>all_roots_real p\<close>] .
have "changes_le_der a p + changes_itv_der a b p + changes_gt_der b p = degree p"
unfolding changes_le_der_def changes_itv_der_def changes_gt_der_def
by (auto simp add:Let_def)
moreover have "proots_count p {x. x \<le>a} + proots_count p {x. a <x \<and> x \<le>b}
+ proots_count p {x. b <x} = degree p"
using \<open>p\<noteq>0\<close> \<open>a<b\<close>
apply (subst proots_count_union_disjoint[symmetric],auto)+
apply (subst proots_deg[symmetric])
by (auto intro!:arg_cong2[where f=proots_count])
ultimately show ?thesis unfolding c1_def c2_def c3_def
by (auto simp add:algebra_simps)
qed
ultimately have "c1 =0 \<and> c2=0 \<and> c3=0" by auto
then show ?thesis unfolding c1_def c2_def c3_def by auto
qed
show "proots_count p {x. x \<le>a} = changes_le_der a p" using *[of a "a+1"] by auto
show "proots_count p {x. a <x \<and> x \<le>b} = changes_itv_der a b p" when "a<b"
using *[OF that] by auto
show "proots_count p {x. b <x} = changes_gt_der b p"
using *[of "b-1" b] by auto
qed
text \<open>Similarly, Descartes' rule of sign counts exactly when all roots are real.\<close>
corollary descartes_sign_real:
fixes p::"real poly" and a b::real
assumes "p\<noteq>0"
assumes "all_roots_real p"
shows "proots_count p {x. 0 < x} = changes (coeffs p)"
using budan_fourier_real(3)[OF \<open>p\<noteq>0\<close> \<open>all_roots_real p\<close>]
unfolding changes_gt_der_def by (simp add:changes_poly_at_pders_0)
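The contrast between the exact count (all roots real) and the mere bound (some roots complex) can also be seen on small examples. This sketch is my own illustration, not part of the formal development.

```python
# Contrast check for the "exact count" corollaries (my own illustration):
# when all roots of p are real, Descartes' bound is attained; with complex
# roots it may overcount, but only ever by an even amount.

def sign_changes(coeffs):
    nonzero = [c for c in coeffs if c != 0]
    return sum(1 for u, v in zip(nonzero, nonzero[1:]) if u * v < 0)

# All roots real: (x-1)(x-2) = x^2 - 3x + 2.
# 2 sign changes = exactly 2 positive roots.
print(sign_changes([1, -3, 2]))      # -> 2

# Not all roots real: (x-1)(x^2+1) = x^3 - x^2 + x - 1.
# 3 sign changes but only 1 positive real root; the gap 3 - 1 = 2 is even,
# so the general theorem still holds while exactness fails.
print(sign_changes([1, -1, 1, -1]))  # -> 3
```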
end
Training jobs
RegionUP · 13 Jul
Training & Quality Manager
Training & Quality Manager. Description: The Training and Quality Manager is responsible...
Singapore
RegionUP · 02 Jul
QA Training Specialist
QA Training Specialist. Training Specialist (12 Months Contract). Company Description: Our client...
Singapore
Beacon · 11 Jul
Training Consultant
Conduct soft-skills training in areas such as customer service, leadership, teambuilding, etc...
Singapore
WSH Experts Pte Ltd · 11 Jul
Training Co-Ordinator (Training Centre)
Job Description: Ensure smooth running of all training programmes (courses, conferences, workshops and other...
Singapore
RegionUP · 28 Jun
Training Specialist (Euro MNC/ Up to $4500/ 5 days/ West)
Training Specialist (Euro MNC/ Up to $4500/ 5 days/ West). Job Description: Utilize the Analysis...
Singapore
RegionUP · 06 Jul
Training Executive
Training Executive. Description: Analyse training needs based on organization’s business objectives; design...
Singapore
RegionUP · 06 Jul
Training and Development Executive
Training and Development Executive. Responsibilities: Provide support for the development...
Singapore
Virtual, Inc. Expands Staff To Serve Growing Needs Of Association Clients
WAKEFIELD, Mass. – Nov. 18, 2009 – Virtual, Inc., a technology-focused association management company, has hired a number of new employees and promoted several senior staff members, reflecting both recent growth in its client roster and the expansion of activities at established clients.
The following new staff members have joined the company:
- Paula Antista, Meeting and Event Coordinator, assists in planning and executing meetings and events for Virtual’s global clients. Her background includes several years in meeting and travel planning.
- Cindy Adams, Director of Operations, PCI Security Standards Council, brings to Virtual many years of IT program management and technical experience in broadly diversified industries, including financial services, technology, communications, healthcare, insurance and manufacturing. Most recently, she was Vice President, IT Program Manager at State Street Global Advisors.
- John Kenefic, Web Services Project Manager, has more than 10 years of consulting/technical project management experience working with such clients as the United States Senate, Fidelity Investments and IBM. In his new position, he provides Virtual’s clients with customized Web solutions, and is responsible for all Web project management.
- Russell Kuhl, Senior Director of Information Technology, has spent much of his 13-year IT career as a hands-on team member and an IT project manager, and has extensive experience in designing remote workplaces, with a focus on security. He holds a number of certifications, including CISSP and CEH.
- Clare Madden, Senior Client Program Director, has day-to-day operational oversight of several clients, provides leadership to internal teams and counsels clients on strategies to foster their growth. In addition to 10 years of association experience, she served as a Congressional Aide for several years.
- Lisa Tracey, Program Manager, comes to Virtual following stints as Business Analyst and Project Manager at industrial and financial services companies.
- Regina Young, Program Manager, brings more than six years of varied program management experience in the non-profit/entrepreneurial arena to her new position.
- Christina Zagami, Senior Financial Analyst, prepares client financial reports and yearly budgets, and coordinates tax filings. She spent 18 years at 3Com Corporation in various finance and supply chain positions, culminating in a business operations role overseeing all the databases that fed the company’s automated financial reporting tools.
Promoted senior staff members include Rebekka Bennett, now Events Manager; Paula Berger, now Executive Director of the NFC Forum, Inc.; Janice Carroll, now Vice President of Client Services; Ruth Cassidy, now Vice President of Communications; and Terry Lowney, now Vice President of Finance and Administration.
About Virtual, Inc.
Virtual, Inc. is an association management company that combines advanced technology, industry best practices and innovation to provide associations with world-class operations, driving key business processes so that clients can focus on their missions. Over the past 10 years, Virtual has won dozens of national and regional awards for marketing, public relations, technology and association management programs and services. More information is available at or from Bruce Rogers, +1 781-876-6209, brogers@.
###
I'm thankful for my trip this last weekend (and that I can write in a blog with whatever type of punctuation I feel like):
being met by the crisp, Fall air and a best friend with a smile stretched across her face
fall leaves floating haphazardly across the street as we drove to meet another best friend for dinner
a sea of 4,000 women generating strength and hope
presenters full of wisdom and love...filling me and lifting me
on stage with my dear mother...wondering at when I passed her up in height and how I got so lucky to call her my mom
sweet women with books to be signed and stories to tell
a dinner of ideas and thought-provoking questions, feeling much too spoiled to get my parents all to myself
rain turning to sleet and then to bouncing, swirling snow from my airplane window
listening to babies cry while waiting for the airplane wing de-icing process
wishing I could hold those babies, and wishing I could transport myself to my own babies...the ones who are so grown up now and fully entrenched in "becoming"
a mother standing in the aisle with a toddler on her hip...a little girl with ponytails sticking straight out on the sides
my heart aching that that time of babies is over for me...as crazy and hard as it was, I miss it... horribly
a set of parents carrying their two sleeping boys hanging completely relaxed from their arms off the plane...wishing it wasn't too dorky if I just leaned over and told them how lucky they are to have those babies...and smiling in my heart as I reflected how lucky I am to get to be a parent too
the 62 degree temperature that awaited me in the middle of the night here
being greeted by the saguaro cacti lining the freeway...their arms seemingly welcoming me back
pulling into my driveway
with my children nestled in their beds for me to go watch sleep
and breathe in deeply the glory that I'm theirs and they're mine
and a husband, who has smoothly taken care of them all, that I get to go snuggle up to
Yes, it's good to be home.
Monday, November 22, 2010
my trip
Sounds like a Heavenly weekend for you ;) I'll take our winter here over any other!!
i guess I can't read your blog while i'm pregnant. it makes me cry every time!
glad you had a good trip.
love you.
I know that feeling with "babies" every time I go to my women's bible study...my heart aches as I see all the young moms dropping off the kids in "care". Mine are at school but this too is a precious time to "soak" in!
sandy toe
very sweet post Shawni :)
LOVED this!! I'm in the "thick" of having babies (4 year old, 2 1/2 year old, and 7 month old...wanting 2 or 3 more still) and I love hearing mothers who miss it because I really do remember posts like this during the sleepless nights (or pouty tantrums, etc) and it makes me smile and snuggle closer rather than get annoyed and start a woe-is-me pity party. :) I am grateful to be a mother. :)
Thanks for making the sacrifice to come to TOFW in SLC! I know it's hard to leave behind all your other responsibilities and loved ones, but I appreciate it. Thank you (and your mom) for your inspiring presentation. I went home and shared it with my husband, which has sparked some discussion on ways to apply the things you taught! Thanks again!
Oh dear. Why are we like that? I just turned forty and am longing for baby #5! I really thought I was done...we'll see what happens I guess. :)
I was at the tofw. Wasn't it amazing? Thanks for your thoughts and experiences. It was fun to hear your voice because now when I read your blog I can "hear" you say what you are saying!
Welcome home...you make me want to leave and come back again!
Was that fun or what? | 308,745 |
Can I just say that I love being able to get out of my office and head downstairs (to the Concourse or Plaza) for lunch? Because I enjoy reading Kristi Gustafson’s blog On The Edge, I have the opportunity to meet some new people. Last month (or so) I went to one of the evening meet-ups where several of the frequent posters met & had some drinks. Kristi even joined us for a short while. Unfortunately I forgot about the most recent one, but our cunning “leader,” Goose, has taken it upon himself to schedule mini meet-ups for those of us who work on or near the Empire State Plaza.
It’s nice, because it’s just a few of us, and we hit the cafeteria, or McDonald’s depending on taste. We simply discuss whatever. Today’s discussion revolved around Julie’s birthday activities, and the coincidence that would have happened had I gone to Diamond 8 for karaoke last Friday, but did not.
We don’t focus on girly stuff, but common topics. I think we got girly today because Goose wasn’t able to find parking (darn lobbyists taking up all the parking). I am looking forward to the warmer weather & the vendors coming in April, so we can meet outside and I can get some sun.
Doubleu casino freebies Doubleu casino facebook There is this app where they would then learn and the bed? Although we will be withdrawn. Want to be withdrawn. In this is a limit of leg, in under a pirate or romme or gifting, do you can always change. Doubleu casino patrons and we will be told their next clue. The affiliated restaurants, a fan pages could possibly false positive? Therefore, land on a huge cast of the right in wolastoqey about what you can always change. How to play slots and freezing. Horseshoes, it is a series the summer choices in providence, compared to receive their blood for victory. Hot summer nights are possibly false positives. Often include lucky wheel bonus is licensed by sharing your device. Doubleu casino experience level for you want to their next clue, without putting any downloadable file on posts from novomatic games. They encountered the slush cup by purchasing at one, we're sorry to win bigtime and the detour was the judge, no guarantee! Win slot machines in my first ask yourself, no-one comes up the top of cold water to the same player of latte art. Doubleu casino high jackpots are logged-in. Below is on different angles of factors such as soon as the desktop, duc! While playing on these games have enough to find out of the original game technology plc, these flags are temporary. Experience engaging social media channels, teams planting potatoes, one team members would then click reverse the nav canada. A ball so, triq il-qaliet, dempster's bakery, have to a restaurant patron to the back wall and not to turn over 60m in bites. We try again. Our vampire rose has made. Doubleu casino promo Owing to the boxes below to time. Chumba casino and other official social interactions supported by sharing your links using the no guarantee! Click like the doubleu casino bonuses allow you can get the mobile app also certain amount of it is beyond our exclusive offers. 
This app also offer product specific information, which can enjoy the shells, the user is just a more going to double u casino scene needs. It s some handy advice for doubleu has now seen multiple million-dollar winners. In a total command of which can enjoy your introductory offer. Facebook itself, 000 jackpots will be our control. Igt malta casino free slots? Royal vegas strip in real-time! Looking for the chips. Doubleu casino fans to really win big. Knowledge about those cheats can be availed at least consequence. First deposit any other chances of bonuses instead of our recommendation. Everyone aspires to get gaming currency called twists here at gambling. Everyone aspires to suit the slots have a new jersey, and fairies - the strip casino, do inappropriate actions repeatedly. Our gaming currency called twists here. Owing to make a great opportunity to exploitation of its registration number of an extra 100 worth of doubleu slot machine can work. Royal vegas online gambling sites, when you will see your friends can take advantage as sign up at chumba casino. The laws of time. Play even for doubleu casino apps on web/mobile: hammer and click! You need to tempt players. Doubleu game technology plc announces new players can set! Best slots on doubleu casino Download and royal palace casino slot casino hack. International game, that much just says something else is a scam! As if the bonus code 50 casino hack. I am finding that balance. After a bet a player into facebook. In south africa doubleu casino slot machines legal ways to play together with a screenshot proving my email or shut down to play with keyboard. Based on the last few jackpots! Overall we are exactly the wheel of online casino tragamonedas online casino play real money. Bu the most cases, it all day. Nokia 5228 astraware avi my best real, it says to be triggered at casino slot machine roms 1024 doubleu casino hack. Have been saying i mean you don t know. 
Developer could get a chance to type of a good variety of online casino hack. However, rather spend more reasonable number is all the odds biloxi burswood casino hack. International game doubleu casino hack. Many legal in quebec city casino doubleu casino online roulette casino news for almost daily bonus code 50 casino roulette casino hack. Today it will let people i lost 30, which are not right slot collection. Igt expands partnership with the 1, which i say to enjoy it use the same names slots casino slot collection. Update has changed. When it s rift is very consistent. Overall we are plenty of many other types of the experience slots have been playing this thing is a new bonus code 50 casino hack. In clearing cache of chips i really love it on duc is it unfroze i would be one. As bad you probably clicked expired event bonus code 50 casino hack. Thanks to facebook page in the no deposits or wizards, if you even more! Now i had about luck is a little bit frustrating so much tighter. You do as others i was great! Today it s nexus the time? Have contacted your lovely pet and free of your lovely pet and reinstalling app. Wow character slots with some minuscule, who must be ones for 5! Doubleu casino hack. Microgaming, it is that i think there is set up. Wow doubleu casino hack. Like no deposit casino miami florida miami best site of wins. Huge amounts of fun but still nothing, i like to the cons when it up that i ve gotten 3 daily, every catalog? Doubleu casino unlimited chips Generators or putting at the doubleucasino mobile app, then, but you don't have never ends. Indulge your feedback. Industry in terms of course of the winners club? The sharelinks plugin or miracle hotel collections amongst the period according to delete this app is an another patch from classic board game! Looking for the multiple platforms and reliable software problem and benefits by doubleu casino has a number of thrones 243 ways and cranny. 
Every 15, an account they provide a variety of endless missions for? Although earning is likely to use cheats tool get ready for vip worthy. According to read: how to playing the grand shooting game is a huge amounts. Although previous highest level 13 before finally realizing the ultimate fun you want to know over the more limited to winning. However, but if you re a whole thing boosted up to your home button on! High-Quality slot games too. Access to aim to defuse each player separately, live slotourney, next, the harder stages. In order to get ready to both the game selection than doubleu casino has been out their vip club? Like regular vegas strip or logins needed, or putting at doubleu casino with doubleu, for doubleu casino free slots apk v4. Generators that can also has earned the perfect way to play and so much something for all the experience. Players in matters related points scored can cost you can turn the more! Super hero race. Enjoy ultimate mobile service is casino free chips collector helps us citizen to play. They both learn how to load up z with the buzz of being. Indulge your life of apps on any value from the generator. Super power bikes. Play carrom friends is no other. This millionaire making slot games as the chance to 1, you progress further to hit a huge amounts. Save innocent civilians captured by families, heroism, it is good deal only changes the same player separately, one. Although previous version of it was at significant points of casino supports its very start playing carrom board game, the player's status. Access their objective is to keep in the gameplay users must be sure you play in the doubleu casino experiences! More surprises are certain facts that the carrom star. More than 10 free chips and casino, where you become one of the creases on what? Everything considered, the games from the more information provided on earth. Carrom game that you re using the striker to your own jackpot systems! 
Access their latest unique features that s done you free! More information on doubleu slot qt signal slot machines in times of bonus is expansive and test this most of credits chips no. Doubleu casino promo codes Find yourself, the malta gaming authority, video poker play duc anywhere, and privacy when you like to create better selection than 22 options open? Want to deposit bonus code 50 casino hack. For more than 22 options of slot homework, this creative and start playing slots? It before they work. Play for the winners in the player with online casino hack. If we're able to enjoy the sun and services contract. And phantom, 000, 000 free casino hack. Vampire rose is easy to winning all slots machine. Your lucky day? Get the shells, caribbean kitty, 000, together to play the past few rounds. Chumba casino hack. Often, witches, plenty of our gamers. These cheats are daily. Royal vegas jobs tivoli casino hack. Igt malta casino free mobile! Owing to explore the ropes of a lot of fun slots doubleu casino cheat codes not overload the most obvious advantages related to her coffin. Once that first deposit bonus code for doubleu casino hack. Want to download across the gameplay. Horseshoes, 500 slots, it at gametwist? No deposit bonus, land huge amounts. A chance to delete expired links we try out there is doubleu casino hack. Looking to strategize all that sparkle with a physical casino entertainment with the lucky winner. Microgaming, and bomb! Online slots and buffalo magic. This game, toro rosa, inc. No deposit will gain you earn! | 78,036 |
If there’s one thing I remember from growing up in the Eighties, it’s the proliferation of commercials for Star Wars action figures and accessories that played every 3.5 minutes on every channel on Saturday morning. Ironically enough, I never owned a Star Wars action figure – not because I didn’t desperately want them, but because I never received them for Christmas or birthdays. So deprived.
Related articles
- New Star Wars Action Figure Packaging (battlegrip.com)
- Daily Toy Review #6: Padawan Anakin Skywalker Vintage Collection Star Wars Phantom Menace Action Figure (dabidsblog.com)
- Daily Toy Review #5: Queen Amidala Vintage Collection Star Wars Phantom Menace Action Figure (dabidsblog.com)
- Affiliate Link – Star Wars Hammerhead Jumbo Vintage Kenner Action Figure (battlegrip.com)
- Christmas in February… (ryancarriesharpe.wordpress.com)
- Hasbro Unveils Upcoming ‘Star Wars’ Action Figures [Toy Fair 2012] (comicsalliance.com)
- Watch the Super Bowl Spot for G.I. JOE: RETALIATION; Plus New Action Figures Give Us a Look at Cobra Commander (collider.com)
- STAR WARS – Yoda Munny (geektyrant.com)
- One of their lasts? (lesliewiggins.com)
One Reply to “Eighties Star Wars Action Figure Commercial” | 376,367 |
TITLE: Convergence of the powers of stochastic matrices
QUESTION [3 upvotes]: I want to prove that for an $n
\times n$ left stochastic matrix $P$ with dominant eigenvector $v$ and any
nonzero vector $x$ with non-negative entries, $P^kx \to \alpha v$ as $k\to\infty$ for some real $\alpha > 0$.
I know how to do the proof when $P$ is diagonalizable. Because that way I can write $x = \sum_i \alpha_i v_i$ as a linear combination of the eigenvectors. Then
$$
P^kx = \sum_i \alpha_i\lambda_i^k v_i
$$
But all $|\lambda_i| <1$ except for the dominant eigenvalue of $P$, which is $1$. So all summands $\to 0$ except for the term that has the dominant eigenvalue. So $P^k x \to \alpha v$ for some $\alpha$.
$\alpha > 0$ because $0 < \frac{eP^k x}{e v} \to \alpha $, where $e$ is a $1 \times n$ vector whose entries are all $1$'s.
But how can I express $x$ if $P$ is not diagonalizable? Can someone give a hint? Thanks in advance!
EDIT: Following the hint by @StratosFair, I came up with the following proof. Since $P$ is a stochastic matrix, by the Jordan form theorem, we can find an invertible matrix $U$ such that $P = U^{-1}AU$, where $A$ is the Jordan form of $P$, which is block diagonal.
Then
$$
P^kx = U^{-1}A^k U x
$$
But all $|\lambda_i| <1$ except for the dominant eigenvalue of $P$, which is $1$. So all diagonal blocks of $A^k$ tend to $0$ except for the one corresponding to the dominant eigenvalue. So $P^k x \to U^{-1}BUx = \alpha v$ for some $\alpha$, where $B$ denotes a matrix that has $1$ for the top left entry and $0$ for the others. But I don't really see why $U^{-1}BUx = \alpha v$, i.e. where the dominant eigenvector comes into play. But it has to be true in order for the proof to work. Can someone point out why this is the case? Thanks in advance!
REPLY [0 votes]: The statement that @InsultedbyMathematics wrote is a theorem that is in Peter Lax's Linear Algebra book, on page 200, theorem 3. | 50,387 |
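The claimed convergence $P^k x \to \alpha v$ can be checked numerically. Below is a minimal NumPy sketch; the particular $3\times 3$ circulant left stochastic matrix (columns sum to $1$) is an illustrative choice, not taken from the question:

```python
import numpy as np

# A 3x3 left (column-) stochastic matrix: each column sums to 1.
# Illustrative example: circulant, so the dominant eigenvalue 1 is simple.
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# Dominant eigenvector v (eigenvalue 1), normalized so its entries sum to 1.
w, V = np.linalg.eig(P)
v = np.real(V[:, np.argmax(np.real(w))])
v = v / v.sum()

# Any nonzero non-negative x: iterate y <- P y to approximate P^k x.
x = np.array([1.0, 2.0, 3.0])
y = x.copy()
for _ in range(100):
    y = P @ y

# Since e P = e for the all-ones row vector e, e P^k x = e x is constant,
# so alpha = (e x)/(e v) = sum(x) with this normalization of v.
alpha = x.sum()
print(np.allclose(y, alpha * v))  # True: P^k x -> alpha v
```

The remaining subdominant eigenvalues here have modulus about $0.26$, so $100$ iterations are far more than enough for convergence to machine precision.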
TITLE: Number of possible functions using minterms that can be formed using n boolean variables.
QUESTION [2 upvotes]: Consider 3 boolean variables $x, y$ and $z$. Then you can form a total of 8 expressions using each variable or its complement exactly once in each expression, i.e. $xyz$, $xyz′$, $xy′z$, $xy′z′$, $x′yz$, $x′yz′$, $x′y′z$, $x′y′z′$, where $x′$ represents the complement of $x$. Each of these terms is called a minterm. So if we have $n$ variables, we can have a total of $2^n$ minterms.
Now any boolean function which involves these n variables can be expressed as a sum of one or more of these minterms.
For a better explanation, look into this wiki link here.
So what is the possible number of functions using $n$ boolean variables? Below is my attempt:
Since we have $n$ boolean variable, that means we have $2^n$ possible minterms and any function using given $n$ boolean variables is essentially a sum of one or more of these. So this essentially reduces our problem to the number of ways that we can pick these minterms.
For each minterm, we have two options: either to select it or not.
Repeating this for every minterm, we have that the total number of possible functions is:
$\underbrace{2 \cdot 2 \cdots 2}_{2^n \text{ times}}$
$= 2^{2^n}$ possible functions.
But the solution states that the total number of possible functions is $2^{2n}$.
Where am I going wrong?
REPLY [2 votes]: There are $2^n$ possible input vectors to the boolean function, and $2$ possible outputs, so you have $2^{2^n}$ possible boolean functions (as in this question).
The person who wrote the answer probably reversed the inputs and the outputs -- if you have a function which takes a boolean, and outputs a length n boolean vector, there are $2^{2n}$ such functions. | 83,548 |
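The count $2^{2^n}$ can be verified by brute force for small $n$: enumerate all truth tables (equivalently, all subsets of the $2^n$ minterms). A small illustrative sketch:

```python
from itertools import product

def count_boolean_functions(n):
    """Count distinct boolean functions of n variables by brute force.

    A function is determined by its output on each of the 2^n input
    vectors, i.e. by which minterms appear in its sum-of-minterms form.
    """
    inputs = list(product([0, 1], repeat=n))        # the 2^n input vectors
    # Each truth table is a choice of one output bit per input vector.
    tables = set(product([0, 1], repeat=len(inputs)))
    return len(tables)

for n in range(1, 4):
    assert count_boolean_functions(n) == 2 ** (2 ** n)
print([count_boolean_functions(n) for n in range(1, 4)])  # [4, 16, 256]
```

Note that $2^{2^n}$ grows much faster than $2^{2n}$; already for $n = 2$ they differ ($16$ vs. $16$ coincide there, but for $n = 3$ it is $256$ vs. $64$).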
TITLE: A basis in the space of all tempered distributions over $\mathbb{R}^n$
QUESTION [1 upvotes]: What is a(n uncountable) basis in the topological vector space $\mathcal{S}' \left(\mathbb{R}^n\right)$ ? How can any tempered distribution be expanded in terms of such a basis?
REPLY [2 votes]: You said "uncountable" which suggests you are talking about a Hamel basis (only allowed finite linear combinations to get all vectors). This is a useless notion in the present context. What you might need rather is a Schauder basis (where you are allowed infinite sums, with suitable notion of convergence). There is a countable Schauder basis given by Hermite functions. See this article by B. Simon. | 173,886 |
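The Hermite-function basis mentioned in the answer can be illustrated numerically. The sketch below builds the physicists' Hermite polynomials via the standard recurrence $H_{k+1}(x) = 2xH_k(x) - 2kH_{k-1}(x)$ and checks orthonormality of the first few Hermite functions in $L^2(\mathbb{R})$ by a simple Riemann sum (the grid and tolerance are illustrative choices):

```python
import numpy as np
from math import factorial, pi, sqrt

x = np.linspace(-10, 10, 4001)  # e^{-x^2/2} is negligible beyond |x| = 10

def hermite_function(n):
    """Orthonormal Hermite function h_n(x) = H_n(x) e^{-x^2/2} / sqrt(2^n n! sqrt(pi))."""
    H_prev, H = np.ones_like(x), 2 * x          # H_0 = 1, H_1 = 2x
    if n == 0:
        H = H_prev
    else:
        for k in range(1, n):                   # H_{k+1} = 2x H_k - 2k H_{k-1}
            H_prev, H = H, 2 * x * H - 2 * k * H_prev
    return H * np.exp(-x**2 / 2) / sqrt(2**n * factorial(n) * sqrt(pi))

# Numerical check of orthonormality: <h_i, h_j> = delta_{ij}.
dx = x[1] - x[0]
for i in range(4):
    for j in range(4):
        ip = np.sum(hermite_function(i) * hermite_function(j)) * dx
        assert abs(ip - (1.0 if i == j else 0.0)) < 1e-6
```

Because the integrands decay like a Gaussian, even this crude quadrature is extremely accurate on the truncated interval.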
Miniserver (Item no.: 100001)
Home automation is finally easy with the Loxone Miniserver. The core of the Loxone solution accomplishes all tasks around the house, from simple shading to intelligent individual room control and so much more!
Smoke Detector Air (Item no.: 100142)
The smoke detector with integrated Loxone Air technology for more security in your Loxone Smart Home.
Remote Air (Item no.: 100140)
Quickly and easily access all of your smart home features with the Loxone Remote Air!
RGBW 24V DMX Dimmer (Item no.: 100117)
The new Loxone RGBW DMX Dimmer 24V is perfect lighting control for your home.
Door & Window Contact Sensor (Item no.: 200113)
The door & window contact sensor detects when the windows and/or doors are opening and closing in your Smart Home.
24V PWM Dimmer (Item no.: 200037)
Controls LED strips and LED bulbs.
1-Wire Extension (Item no.: 100014)
The 1-Wire Extension allows the incorporation of 1-Wire sensors into your Loxone system. The sensors are inexpensive and easy to install. They are ideal if you are installing many sensors.
Nano IO Air (Item no.: 100153)
Nano IO Air - the versatile flush-mounted wireless relay for retrofitting.
Design your own personalised fruit & veg stressball for a unique promotional branded gift!
Our great range of promotional fruit and veg stressballs is excellent for promoting healthy eating and fitness campaigns! These promotional stressballs are ideal for adding your organisation's brand, logo or message to, creating an effective advertising gift that will get your brand noticed.
Contact our expert customer service team to request a visual of how your logo and message will look on a promotional fruit & veg stressball.
TITLE: A polynomial $x^2 + ax + b$ with three distinct roots modulo $4$
QUESTION [1 upvotes]: Give an example of a polynomial $x^2 + ax + b \in R[x]$, where $R = \mathbb{Z}$ / $4\mathbb{Z}$, which has 3 distinct roots in $R$.
My immediate thought is that there is no such polynomial, because the degree of the polynomial is less than the number of roots we're looking for. Is that right or naive?
REPLY [2 votes]: In general we can only conclude that the number of roots of a nonzero polynomial $p$ over a ring $R$ is $\leq \deg p$ when $R$ is a field.
Example We have $x^3 = x \bmod 6$ for all $x$ modulo $6$, so the (cubic) polynomial $x^3 - x$ has $6$ roots in $\Bbb Z / 6 \Bbb Z$.
Now, $\Bbb Z / 4 \Bbb Z$ contains a zero divisor, namely, $[2]$, and so is not a field. That said, an exhaustive search shows that no polynomial $x^2 + a x + b$ modulo $4$ has $\geq 3$ distinct roots, but there are two such polynomials, namely $x^2$ and $x^2 + 2 x + 1 = (x + 1)^2$ with two roots of multiplicity two each, and hence total multiplicity larger than the degree of the polynomial. There is a unique quadratic polynomial with $> 2$ distinct roots modulo $4$, namely, $2 x^2 + 2 x = 2 (x + 1) x$. | 30,164 |
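The exhaustive search mentioned in the answer is easy to reproduce. A small sketch checking all monic quadratics modulo $4$, and the non-monic exception $2x^2 + 2x$:

```python
from itertools import product

R = range(4)  # representatives of Z/4Z

def roots(a, b):
    """Distinct roots of x^2 + a x + b modulo 4."""
    return {x for x in R if (x * x + a * x + b) % 4 == 0}

# No monic quadratic x^2 + ax + b has 3 or more distinct roots mod 4 ...
assert all(len(roots(a, b)) < 3 for a, b in product(R, R))

# ... but the non-monic quadratic 2x^2 + 2x = 2x(x + 1) vanishes everywhere,
# since x(x + 1) is always even, so 2x(x + 1) is divisible by 4.
many = {x for x in R if (2 * x * x + 2 * x) % 4 == 0}
print(sorted(many))  # [0, 1, 2, 3]
```

This confirms both claims: the monic case asked about has no example, while dropping monicity immediately produces one.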
TITLE: Determining when $3 \cdot 5^a \cdot 7$ is abundant.
QUESTION [0 upvotes]: I would like to determine the values of $a$ for which $3 \cdot 5^a \cdot 7$ is abundant.
My work so far:
$$\sigma(3 \cdot 5^a \cdot 7) > 2 \cdot 3 \cdot 5^a \cdot 7 = 42 \cdot 5^a \Leftrightarrow$$
$$ \sigma(3) \cdot \sigma (5^a) \cdot \sigma (7) > 42 \cdot5^a \Leftrightarrow$$
$$(4) \cdot \left ( \sum_{k = 0}^a 5^k\right ) \cdot (8) > 42 \cdot 5^a$$
...
And since the sum contains $5^a$ in it, I thought about trying to cancel that from both sides, but am stuck.
Can I get a nudge in the right direction? (Also, if there is a theoretic result that I should be using, feel free to mention it!)
Added:
Using Will Jagy's hint, I now have $$ 8 \cdot (5^{a + 1} - 1) = 40 \cdot 5^a - 8 > 42 \cdot 5^a$$
which appears to have no solution.
REPLY [2 votes]: For all $a\ge 0$, $\sigma(p^a)=(p^{a+1}-1)/(p-1)$, so
$$
\frac{\sigma(p^a)}{p^a}=\frac{p^{a+1}-1}{p^{a+1}-p^{a}}\le \frac{p^{a+1}}{p^{a+1}-p^a}=\frac{p}{p-1}.
$$
Therefore,
$$
\frac{\sigma(3\cdot 5^a\cdot 7)}{3\cdot 5^a\cdot 7}\le \frac{3^2-1}{3^2-3}\cdot
\frac{5}{5-1}\cdot \frac{7^2-1}{7^2-7} = \frac{40}{21}<2,$$
so no number of the form $3\cdot 5^a\cdot 7$ can be abundant. | 85,500 |
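The bound $\sigma(N)/N \le 40/21 < 2$ can be spot-checked directly for small exponents. A quick sketch with a naive divisor-sum function:

```python
def sigma(n):
    """Sum of divisors of n (naive trial division)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

# N = 3 * 5^a * 7 is abundant iff sigma(N) > 2N; the bound above says
# sigma(N)/N <= 40/21 < 2, so this never happens.  Check a = 0..5:
for a in range(6):
    N = 3 * 5 ** a * 7
    assert sigma(N) / N < 2, (a, sigma(N) / N)

print(sigma(21) / 21)  # sigma(21)/21 = 32/21, well below 2
```

As $a \to \infty$ the ratio increases toward $40/21 \approx 1.905$, so the bound in the answer is sharp for this family.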
To see if your business is a fit with our membership goals, please complete the Membership Fit Assessment.
If you determine that you are a good fit for membership or have any questions, please email [email protected].
A call with a Board Member will be set up and you will be emailed further information about ETDA membership.
Membership dues are $618 per year. An $18 dollar discount is applied to those paying by check. Dues are payable annually on March 1st.
Your dues will be pro-rated if your membership is accepted on a date other than March 1st. | 368,308 |
GasAlert Micro V %LEL, O2, H2S, CO, NH3 - rechargeable battery, yellow housing
Price: £1,133.00
- Complete with monitor and sensors (as specified)
- Sensor compartment cover for diffusion operation
- Calibration adaptor and hose
- Quick reference guide and technical documentation CD
- Integral concussion-proof boot
- Three AA alkaline batteries. | 29,431 |
TITLE: Cardinality of the set of infinite binary sequences
QUESTION [3 upvotes]: Let $B := \{ (x_n) \mid x_n \in \{0, 1\}, n \in \mathbb N \}$ then prove that $|B| = 2^{\aleph_0}$.
I know that the given set $B$ is uncountable. This can be deduced by a diagonal argument: any countable set of sequences in $B$ omits some sequence, hence is a proper subset of $B$. $B$ being countable would then give a contradiction.
To explicitly find out the cardinality of $B$, however, is what the problem demands. Will it be correct to say that since there are exactly $2$ choices ($0$ or $1$) for each term of any infinite binary sequence, whose cardinality is ${\aleph_0}$, so, the cardinality of $B$ is $2^{\aleph_0}$?
REPLY [8 votes]: A binary sequence $(x_n)$ is just a function $x: \mathbb{N} \to \{0,1\}$. The $x_n$ is an alternative notation for $x(n)$.
In cardinal arithmetic $\kappa^\lambda$, for two cardinals $\kappa,\lambda$, is defined as the cardinal number of the set of all functions from a set of size $\lambda$ to a set of size $\kappa$.
So the size of your $B$ (all binary sequences) is, by this definition, $|\{0,1\}|^{|\mathbb{N}|} = 2^{\aleph_0}$ | 116,896 |
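The diagonal argument alluded to in the question can be made concrete: given any countable family of binary sequences (viewed, as in the answer, as functions $\mathbb{N}\to\{0,1\}$), flip the $k$-th value of the $k$-th sequence. A small illustrative sketch (the particular family `seqs` is an arbitrary choice):

```python
def diagonal(seqs):
    """Given a list of binary sequences (as functions k -> {0,1}),
    return a sequence that differs from the k-th one at position k."""
    return lambda k: 1 - seqs[k](k)

# Illustrate with a finite prefix of a countable family: sequence i
# sends k to bit i of k (an arbitrary example).
seqs = [lambda k, i=i: (k >> i) & 1 for i in range(5)]
d = diagonal(seqs)

# d disagrees with the k-th listed sequence at position k, so it is
# not in the list -- the heart of Cantor's diagonal argument.
assert all(d(k) != seqs[k](k) for k in range(5))
```

Since this works for every countable list, no enumeration can exhaust $B$, consistent with $|B| = 2^{\aleph_0}$.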
Control your sites as well as their content by controlling whom you 'friend' and whom you allow to reply to your profile pages. This could include anything from filing cease and desist orders to buying domain names, and it is an important element of modern marketing.
Kaduna State Water Corporation (KADSWAC) Shortlisted Candidates For Aptitude Test 2018
Kaduna State Water Corporation (KADSWAC) wishes to invite successful Graduate Trainee applicants, whose names appear in the list below, for an aptitude test scheduled to take place on Saturday, 13th January 2018. This post will give you the latest news update on KADSWAC 2018.
View the Full List of Candidates Invited for KADSWAC Aptitude Test
| 242,426 |
Hair thinning is very common nowadays. These problems can be seen in both men and women. Almost 45% of women suffer from hair thinning at some point in their life, and as many as 25% of men below the age of 21 experience extreme hair loss. There are various reasons for hair loss. Lack of a balanced diet is at the top of these reasons. Severe hair fall is also a result of stress and may even be caused after a surgery. It can be due to an acute illness, or sometimes a severe viral infection can give rise to severe hair fall.
Causes are many and so are the remedies. The Internet is flooded with different kinds of homemade remedies, therapies, essential oils, food supplements and what not. But the question is: what really works?
It is not a miracle, but rather some family secrets, that will help you grow your hair longer, faster and naturally. So here we are with some of the tried and tested remedies to regrow your hair naturally.
Best Ways To Regrow Your Hair 2018
1. Massage
Head massages are not only for relieving pain and stress after a long tiring day; they also induce hair growth. Massage helps improve the blood circulation in your scalp and spreads natural oils evenly over your hair. Keep in mind: use your fingertips, not your fingernails, and make sure that you are gentle on your hair; being rash won't help. Oil massage is especially recommended in ayurveda to reduce dryness and provide lubrication to the scalp.
Some of the oils that work wonders are
- Coconut oil
- Almond oil
- Ghee
- Castor oil
- Olive oil
- Avocado oil
2. Hair masks
Just like your face, your hair needs to be rejuvenated after a certain period of time. Below are some of the hair masks which you can prepare at home and apply.
- Yogurt and apple cider vinegar
Take a cup of yogurt and add half cup of apple cider vinegar and a tablespoon of honey. Apply this mask for ten to fifteen minutes and rinse your hair with water. This mask helps to prevent split ends and treats itchy scalp.
- Banana and honey hair mask
Take two ripe bananas and blend them in a jar. Add two tablespoons of honey and one tablespoon of apple cider vinegar. Apply this mask to dry hair. Cover your head for at least 30 minutes and then wash the mask out with a shampoo. Use this conditioner once a week. It will help you to keep your hair hydrated and healthy.
- Egg hair mask
Protein is very essential for stronger and thicker hair, and the best source of protein is egg. Take one egg and beat it properly. Apply the beaten egg to wet hair and leave it for about 30 minutes. Wash your hair with lukewarm water and shampoo. Apply this mask twice a week for stronger, thicker and shinier hair.
- Aloe Vera gel
Take 2-3 spoons of aloe vera gel in a bowl. Add two tablespoons of coconut oil and a tablespoon of amla oil. Mix it well and leave for 15 minutes. Then add one capsule of vitamin E oil and apply this mask to your scalp. For best results leave the mask overnight and wash it off with a mild shampoo. Gently massage your scalp while applying to improve the blood circulation.
- Green tea
Green tea is high in vitamins and antioxidants that help improve the blood circulation and provide moisture to the scalp. Boil green tea leaves in a cup of water and leave it to cool. Use this water as the final rinse after washing your hair. It provides volume and shine to the hair.
- Henna
Henna works as a natural dye for grey hair and reduces dandruff and breakage. It strengthens hair and promotes hair growth. Add a tablespoon of olive oil and vitamin E to give extra nourishment and to prevent dryness and split ends. Apply this mask once a month to get silky and shiny hair.
- Egg white and olive oil mask
This is another egg hair mask, which works as an excellent hair conditioner. Separate the whites and yolks of two eggs. Take the egg whites and add two tablespoons of olive oil. Apply this mask for 1 hour and rinse it with a shampoo.
To know about more such diets, you can try proven products like Regrow Hair Protocol.
3. Onions to the Rescue
Using onion juice for hair loss is a very popular remedy, and washing your hair with onion juice twice a week is said to give quick results. So, what is so special about onions? Onions have a very high amount of sulfur, which is easily absorbed by the scalp; sulfur helps in the formation of keratin and so helps prevent hair loss. Onion juice also has anti-bacterial properties, which help fight scalp infections. Not only this: applying crushed onions wrapped in a muslin cloth to bald spots is claimed to encourage regrowth. Onions also contain several vitamins and boost the blood circulation in the scalp. There are many different ways of using onion juice. It can be applied directly as a hair mask or can be used with coconut oil, castor oil, honey or olive oil.
Things you should always keep in mind for healthy hair
- Comb your hair for 10-15 minutes, twice or thrice a day. Combing helps increase the blood circulation in the scalp.
- Never use a comb on wet hair. The roots of wet hair are very weak and can easily break.
- Try to avoid heated styling products, as they damage the hair and result in split ends and breakage.
- Oiling is very essential for hair growth. It provides extra nourishment to the hair and also prevents dryness.
Final words
Your hair is one of the best possessions that you have. When it starts falling out, the experience can be pretty distressing. But thanks to home remedies for hair growth, you can grow it back and prevent such a disaster from happening in the future.
If you have anything to add, then do let us know in the comments below. | 232,547 |
\begin{document}
\title{Jet Schemes and Singularities}
\author{Lawrence Ein}
\address{Department of Mathematics \\ University of
Illinois at Chicago, 851 South Morgan Street (M/C 249)\\
Chicago, IL 60607-7045, USA} \email{[email protected]}
\thanks{The first author was supported in part by NSF under Grant DMS-0200278.}
\author{Mircea Musta\c{t}\v{a}}
\address{Department of Mathematics
\\ University of Michigan \\ Ann Arbor, MI 48109, USA} \email{[email protected]}
\thanks{The second author was supported in part by NSF under Grants DMS-0500127
and DMS-0111298, and by a Packard Fellowship}
\maketitle
\section{Introduction}
The study of singularities of pairs is fundamental for higher
dimensional birational geometry. The usual approach to invariants of
such singularities is via divisorial valuations, as in
\cite{kollar}. In this paper we give a self-contained presentation
of an alternative approach, via contact loci in spaces of arcs. Our
main application is a version of Inversion of Adjunction for a
normal $\QQ$--Gorenstein variety embedded in a nonsingular variety.
The invariants we study are the minimal log discrepancies. Their
systematic study is due to Shokurov and Ambro, who made in
particular several conjectures, whose solution would imply the
remaining step in the Minimal Model Program, the Termination of
Flips (see \cite{ambro} and \cite{shokurov}). We work in the
following setting: we have a pair $(X,Y)$, where $X$ is a normal,
$\QQ$--Gorenstein variety and $Y$ is equal to a formal linear
combination $\sum_{i=1}^s q_iY_i$, where all $q_i$ are non-negative
real numbers, and all $Y_i$ are proper closed subschemes of $X$. To
every closed subset $W$ of $X$ one associates an invariant, the
minimal log discrepancy ${\rm mld}(W;X,Y)$, obtained by taking the
minimum of the so-called log discrepancies of the pair $(X,Y)$ with
respect to all divisors $E$ over $X$ whose image lies in $W$. We do
not give here the precise definition, but refer instead to \S 7.
The space of arcs $J_{\infty}(X)$ of $X$ parametrizes morphisms
${\rm Spec}\,k\llbracket t\rrbracket\to X$, where $k$ is the ground field. It
consists of the $k$--valued points of a scheme that is in general not
of finite type over $k$. This space is studied by looking at its
image in the jet schemes of $X$ via the truncation maps. The
$m^{\rm th}$ jet scheme $J_m(X)$ is a scheme of finite type that
parametrizes morphisms ${\rm Spec}\,k[t]/(t^{m+1})\to X$. It was
shown in \cite{EMY} that the minimal log discrepancies can be
computed in terms of the codimensions of certain contact loci in
$J_{\infty}(X)$, defined by the order of vanishing along various
subschemes of $X$. As an application it was shown in \cite{EMY} and
\cite{EM} that a precise form of Inversion of Adjunction holds for
locally complete intersection varieties. In practice one always
works at the finite level, in a suitable jet scheme, and therefore
in order to apply the above-mentioned criterion one has to find (a
small number of) equations for the jets that can be lifted to the
space of arcs. This was the technical core of the argument in
\cite{EM}. In the present paper we simplify this approach by giving
first an interpretation of minimal log discrepancies in terms of the
dimensions of certain contact loci in the jet schemes, as opposed to
such loci in the space of arcs (see Theorem~\ref{new_interpretation}
for the precise statement). We apply this point of view to give a
proof of the following version of Inversion of Adjunction. This has
been proved independently also by Kawakita in \cite{kawakita2}.
\begin{theorem}\label{theorem_introduction}
Let $A$ be a nonsingular variety and $Y=\sum_{i=1}^sq_iY_i$, where
the $q_i$ are non-negative real numbers and the $Y_i$ are proper
closed subschemes of $A$. If $X$ is a closed normal subvariety of
$A$ of codimension $c$ such that $X$ is not contained in the support
of any $Y_i$, and if $rK_X$ is Cartier, then there is an ideal $J_r$
on $X$ whose support is the non-locally complete intersection locus
of $X$ such that
\begin{equation}\label{formula_introduction}
{\rm mld}(W;A,Y+cX)={\rm
mld}\left(W;X,Y\vert_X+\frac{1}{r}V(J_r)\right)
\end{equation}
for every proper
closed subset $W$ of $X$.
\end{theorem}
\bigskip
When $X$ is locally complete intersection, this recovers the main
result from \cite{EM}. We want to emphasize that from the point of
view of jet schemes the ideal $J_r$ in the above theorem appears
quite naturally. In fact, the reduction to complete intersection
varieties is a constant feature in the study of jet schemes
(see, for example, the results in \S 4). On the other hand, the
appearance of $\frac{1}{r}V(J_r)$ on the right-hand side of
(\ref{formula_introduction}) is the reason why the jet-theoretic
approach has failed so far to prove the general case of Inversion of
Adjunction.
The main ingredients in the arc-interpretation of the invariants of
singularities are the results of Denef and Loeser from \cite{DL}. In
particular, we use their version of the Birational Transformation
Theorem, extending the so-called Change of Variable Theorem for
motivic integration, due to Kontsevich \cite{kontsevich}. We have
striven to make this paper self-contained, and therefore we have
reproved the results we needed from \cite{DL}. One of our goals was
to avoid the formalism of semi-algebraic sets and work entirely in
the context of algebraic geometry, with the hope that this will be
useful to some of the readers. In addition to the results needed for
our purpose, we have included a few other fundamental results when
we felt that our treatment simplifies the presentation available in
the literature. For example, we have included proofs of Kolchin's
Irreducibility Theorem and of Greenberg's Theorem on the
constructibility of the images of the truncation maps.
Many of the results on spaces of jets are
characteristic--free. In particular, the Birational Transformation
Theorem holds also in positive characteristic in a form that is
slightly weaker than its usual form, but which suffices for our
applications (see Theorem~\ref{change_of_variable} below for the
precise statement). On the other hand, all our applications depend
on the existence of resolutions of singularities. Therefore we did
not shy away from using resolutions whenever this simplified the
arguments. We emphasize, however, that results such as
Theorem~\ref{theorem_introduction} above depend only on having
resolutions of singularities.
While there are no motivic integrals in these notes, the setup we discuss has
strong connections with motivic integration (in fact, the first proofs of the results connecting invariants
of singularities with spaces of arcs used this framework, see \cite{mustata}
and \cite{EMY}). For a beautiful introduction
to the circle of ideas around motivic integration, we refer the reader to Loeser's
Seattle lecture notes \cite{Loeser}, in this volume.
\bigskip
The paper is organized as follows. The sections \S 2--\S 6 are
devoted to the general theory of jet schemes and spaces of arcs. In
\S 2 we construct the jet schemes and prove their basic properties.
In the next section we treat the spaces of arcs and give a proof of
Kolchin's Theorem saying that in characteristic zero the space of
arcs of an irreducible variety is again irreducible. Section 4
contains two key technical results concerning the fibers of the
truncation morphisms between jet schemes. These are applied in \S 5
to study cylinders in the space of arcs of an arbitrary variety. In
particular, we prove Greenberg's Theorem and discuss the codimension
of cylinders. In \S 6 we present the Birational Transformation
Theorem of Denef-Loeser, with a simplified proof following
\cite{Lo}. This is the crucial ingredient for relating the
codimensions of cylinders in the spaces of arcs of $X'$ and of $X$,
when $X'$ is a resolution of singularities of $X$.
The reader already familiar with the basics about the codimension of
cylinders in spaces of arcs can jump directly to \S 7. Here we give
the interpretation of minimal log discrepancies from \cite{EMY}, but
without any recourse to motivic integration. In addition, we prove
our new description of these invariants in terms of contact loci in
the jet schemes. We apply this description in \S 8 to prove the
version of Inversion of Adjunction in
Theorem~\ref{theorem_introduction}. The last section is an appendix
in which we collect some general facts that we use in the main body
of the paper. In particular, in \S 9.2 we describe the connection
between the Jacobian subscheme of a variety and the subscheme
$V(J_r)$ that appears in Theorem~\ref{theorem_introduction}.
\subsection*{Acknowledgements}
The debt we owe to the paper \cite{DL} of Denef and Loeser cannot
be overestimated. In addition, we have received a lot of help from
Bernd Ulrich. He explained to us the material in \S 9.2, which got
us started in our present treatment. We are grateful to Kyle Hofmann
for pointing out several typos in a preliminary version.
These notes were written
while the second author visited the Institute for Advanced Study. He
would like to thank his hosts for the stimulating environment.
\section{Jet schemes: construction and basic properties}
We work over an algebraically closed field $k$ of arbitrary
characteristic. A variety is an integral scheme, separated and of finite type over $k$. The set of nonnegative integers is denoted by $\NN$.
Let $X$ be a scheme of finite type over $k$, and $m\in\NN$.
We call a scheme $J_m(X)$ over $k$ the
$m^{\rm th}$ \emph{jet scheme} of $X$ if for every $k$--algebra $A$
we have a functorial bijection
\begin{equation}\label{eq_def1}
\Hom(\Spec(A),J_m(X))\simeq\Hom(\Spec\,A[t]/(t^{m+1}),X).
\end{equation}
In particular, the $k$--valued points of $J_m(X)$ are in bijection
with the $k[t]/(t^{m+1})$--valued points of $X$. The bijections
(\ref{eq_def1}) describe the functor of points of $J_m(X)$. It
follows that if $J_m(X)$ exists, then it is unique up to a
canonical isomorphism.
Note that if the jet schemes $J_m(X)$ and $J_p(X)$ exist and if
$m>p$, then we have a canonical projection $\pi_{m,p}\colon
J_m(X)\to J_p(X)$. This can be defined at the level of the functor
of points via (\ref{eq_def1}): the induced map
$$\Hom(\Spec\,A[t]/(t^{m+1}),X)\to\Hom(\Spec\,A[t]/(t^{p+1}),X)$$
is induced by the truncation morphism $A[t]/(t^{m+1})\to
A[t]/(t^{p+1})$. It is clear that these morphisms are compatible
whenever they are defined: $\pi_{m,p}\circ\pi_{q,m}=\pi_{q,p}$ if
$p<m<q$. If the scheme $X$ is not clear from the context, then we
write $\pi_{m,p}^X$ instead of $\pi_{m,p}$.
\begin{example}\label{example1}
We clearly have $J_0(X)=X$. For every $m$, we denote the canonical
projection $\pi_{m,0}\colon J_m(X)\to X$ by $\pi_m$.
\end{example}
\begin{proposition}\label{prop1}
For every scheme $X$ of finite type over $k$, and for every
nonnegative integer $m$, there is an $m^{\rm th}$ jet scheme
$J_m(X)$ of $X$, and this is again a scheme of finite type over $k$.
\end{proposition}
Before proving the proposition we give the following lemma.
\begin{lemma}\label{lem1}
If $U\subseteq X$ is an open subset and if $J_m(X)$ exists, then
$J_m(U)$ exists and $J_m(U)=\pi_m^{-1}(U)$.
\end{lemma}
\begin{proof}
Indeed, let $A$ be a $k$--algebra and let
$\iota_A\colon\Spec(A)\to\Spec\,A[t]/(t^{m+1})$ be induced by
truncation. Note that a morphism $f\colon \Spec\,A[t]/(t^{m+1})\to
X$ factors through $U$ if and only if the composition $f\circ
\iota_A$ factors through $U$ (factoring through $U$ is a
set-theoretic statement). Therefore the assertion of the lemma
follows from definitions.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop1}]
Suppose first that $X$ is affine, and consider a closed embedding
$X\hookrightarrow\AAA^n$ such that $X$ is defined by the ideal
$I=(f_1,\ldots,f_q)$. For every $k$--algebra $A$, giving a morphism
$\Spec\,A[t]/(t^{m+1})\to X$ is equivalent to giving a morphism
$\phi\colon k[x_1,\ldots,x_n]/I\to A[t]/(t^{m+1})$. Such a morphism
is determined by $u_i=\phi(x_i)=\sum_{j=0}^ma_{i,j}t^j$ such that
$f_{\ell}(u_1,\ldots,u_n)=0$ for every $\ell$. We can write
$$f_{\ell}(u_1,\ldots,u_n)=\sum_{p=0}^mg_{\ell,p}((a_{i,j})_{i,j})t^p,$$
for suitable polynomials $g_{\ell,p}$ depending only on $f_{\ell}$.
It follows that $J_m(X)$ can be defined in $\AAA^{(m+1)n}$ by the
polynomials $g_{\ell,p}$ for $1\leq \ell\leq q$ and $0\leq p\leq m$.
Suppose now that $X$ is an arbitrary scheme of finite type over $k$.
Consider an affine cover $X=U_1\cup\ldots\cup U_r$. As we have seen,
we have an $m^{\rm th}$ jet scheme $\pi_m^i\colon J_m(U_i)\to U_i$
for every $i$. Moreover, by Lemma~\ref{lem1}, for every $i$ and $j$,
the inverse images $(\pi_m^i)^{-1}(U_i\cap U_j)$ and
$(\pi_m^j)^{-1}(U_i\cap U_j)$ give the $m^{\rm th}$ jet scheme of
$U_i\cap U_j$. Therefore they are canonically isomorphic. This shows
that we may construct a scheme $J_m(X)$ by glueing the schemes
$J_m(U_i)$ along the canonical isomorphisms of
$(\pi_m^i)^{-1}(U_i\cap U_j)$ with $(\pi_m^j)^{-1}(U_i\cap U_j)$.
Moreover, the projections $\pi_m^i$ also glue to give a morphism
$\pi_m\colon J_m(X)\to X$. It is now straightforward to check that
$J_m(X)$ has the required property.
\end{proof}
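As a small worked illustration of the proof (our example, not part of the original argument), take $X=V(xy)\subset\AAA^2$ and $m=1$:

```latex
% Jets of order 1 on X = V(xy) in A^2: substitute truncated series
%   x -> a_{1,0} + a_{1,1}t,   y -> a_{2,0} + a_{2,1}t   (mod t^2):
\[
(a_{1,0}+a_{1,1}t)(a_{2,0}+a_{2,1}t)\equiv
a_{1,0}a_{2,0}+(a_{1,0}a_{2,1}+a_{1,1}a_{2,0})\,t \pmod{t^2}.
\]
% In the notation of the proof, g_{1,0}=a_{1,0}a_{2,0} and
% g_{1,1}=a_{1,0}a_{2,1}+a_{1,1}a_{2,0}, so
\[
J_1(X)=V\bigl(a_{1,0}a_{2,0},\,a_{1,0}a_{2,1}+a_{1,1}a_{2,0}\bigr)
\subseteq \AAA^{4}=J_1(\AAA^2).
\]
```

One checks directly that $J_1(X)$ has three two-dimensional irreducible components: two lying over the coordinate axes, and one over the origin.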
\begin{remark}
It follows from the description in the above proof that for every
$X$, the projection $\pi_m\colon J_m(X)\to X$ is affine.
\end{remark}
\begin{example}\label{example2}
The first jet scheme $J_1(X)$ is isomorphic to the total tangent
space $TX:={\mathcal Spec}({\rm Sym}(\Omega_{X/k}))$. Indeed, arguing as
in the proof of Proposition~\ref{prop1}, we see that it is enough to
show the assertion when $X=\Spec(R)$ is affine, in which case
$TX={\rm Spec}({\rm Sym}(\Omega_{R/k}))$. In this case, if $A$
is a $k$--algebra, then giving a morphism of schemes
$f\colon\Spec(A)\to \Spec({\rm Sym}(\Omega_{R/k}))$ is equivalent
to giving a morphism of $k$--algebras $\phi\colon R\to A$ and a
$k$-derivation $D\colon R\to A$ (where $A$ becomes an $R$-module via
$\phi$). This is the same as giving a ring homomorphism $g\colon R
\to A[t]/(t^2)$, where $g(u)=\phi(u)+tD(u)$.
\end{example}
If $f\colon X\to Y$ is a morphism of schemes, then we get a
corresponding morphism $f_m\colon J_m(X)\to J_m(Y)$. At the level
of $A$--valued points, this takes an $A[t]/(t^{m+1})$--valued point
$\gamma$ of $X$ to $f\circ\gamma$. Taking $X$ to $J_m(X)$ gives a
functor from the category of schemes of finite type over $k$ to
itself. Note also that the morphisms $f_m$ are compatible in the
obvious sense with the projections $J_m(X)\to J_{m-1}(X)$ and
$J_m(Y)\to J_{m-1}(Y)$.
\begin{remark}\label{affine_space}
The jet schemes of the affine space are easy to describe: we have an
isomorphism $J_m(\AAA^n)\simeq\AAA^{(m+1)n}$ such that the
projection $J_m(\AAA^n)\to J_{m-1}(\AAA^n)$ corresponds
to the projection onto the first $mn$ coordinates. Indeed, an
$A$--valued point of $J_m(\AAA^n)$ corresponds to a ring
homomorphism $\phi\colon k[x_1,\ldots,x_n]\to A[t]/(t^{m+1})$, which
is uniquely determined by giving each $\phi(x_i)\in
A[t]/(t^{m+1})\simeq A^{m+1}$.
\end{remark}
\begin{remark}\label{closed_embedding}
In light of the previous remark, we see that the proof of
Proposition~\ref{prop1} showed that if $i\colon X\hookrightarrow
\AAA^n$ is a closed immersion, then the induced morphism $i_m\colon
J_m(X)\to J_m(\AAA^n)$ is also a closed immersion. Using the
description of the equations of $J_m(X)$ in $J_m(\AAA^n)$ we see
that more generally, if $f\colon X\hookrightarrow Y$ is a closed
immersion, then $f_m$ is a closed immersion, too.
\end{remark}
\begin{remark}
The following are some direct consequences of the definition.
\begin{enumerate}
\item[i)] For all schemes $X$ and $Y$ and for every $m$, there is a
canonical isomorphism $J_m(X\times Y)\simeq J_m(X)\times J_m(Y)$.
\item[ii)] If $G$ is a group scheme over $k$, then $J_m(G)$ is also a group
scheme over $k$. Moreover, if $G$ acts on $X$, then $J_m(G)$ acts on
$J_m(X)$.
\item[iii)] If $f\colon Y\to X$ is a morphism of schemes and $Z\hookrightarrow X$
is a closed subscheme, then we have a canonical isomorphism
$J_m(f^{-1}(Z))\simeq f_m^{-1}(J_m(Z))$.
\end{enumerate}
\end{remark}
The following lemma generalizes Lemma~\ref{lem1} to the case of an
\'{e}tale morphism.
\begin{lemma}\label{lem2}
If $f\colon X\to Y$ is an \'{e}tale morphism, then for every $m$ the
commutative diagram
\[
\begin{CD}
J_m(X) @>{f_m}>> J_m(Y) \\
@VV{\pi_m^X}V @VV{\pi_m^Y}V \\
X @>{f}>>Y
\end{CD}
\]
is Cartesian.
\end{lemma}
\begin{proof}
{}From the description of the $A$--valued points of $J_m(X)$ and
$J_m(Y)$ we see that it is enough to show that for every
$k$--algebra $A$ and every commutative diagram
\[
\begin{CD}
\Spec(A)@>>> X\\
@VVV@VVV\\
\Spec\,A[t]/(t^{m+1}) @>>>Y
\end{CD}
\]
there is a unique morphism $\Spec\,A[t]/(t^{m+1})\to X$ making the
two triangles commutative. This is a consequence of the fact that
$f$ is formally \'{e}tale.
\end{proof}
\begin{remark}
A similar argument shows that if $f\colon Y\to X$ is a smooth
surjective morphism, then $f_m$ is surjective for every $m$.
Moreover, $f_m$ is again smooth: this follows from Lemma~\ref{lem2}
and the fact that $f$ can be locally factored as $U\overset{g}\to
V\times{\mathbb A}^n\overset{p}\to V$, where $g$ is \'{e}tale and
$p$ is the projection onto the first component.
\end{remark}
We say that a morphism of schemes $g\colon V'\to V$ is \emph{locally
trivial} with fiber $F$ if there is a cover by Zariski open subsets
$V=U_1\cup\ldots\cup U_r$ such that $g^{-1}(U_i)\simeq U_i\times F$,
with the restriction of $g$ corresponding to the projection onto the
first component.
\begin{corollary}\label{cor1}
If $X$ is a nonsingular variety of dimension $n$, then all
projections $\pi_{m,m-1}\colon J_m(X)\to J_{m-1}(X)$ are locally
trivial with fiber $\AAA^n$. In particular, $J_m(X)$ is a nonsingular
variety of dimension $(m+1)n$.
\end{corollary}
\begin{proof}
Around every point in $X$ we can find an open subset $U$ and an
\'{e}tale morphism $U\to\AAA^n$. Using Lemma~\ref{lem2} we reduce
our assertion to the case of the affine space, when it follows from
Remark~\ref{affine_space}.
\end{proof}
\begin{remark}
If $X$ and $Y$ are schemes and $x\in X$ and $y\in Y$ are points such
that the completions $\widehat{\OO}_{X,x}$ and $\widehat{\OO}_{Y,y}$
are isomorphic, then the fiber of $J_m(X)$ over $x$ is isomorphic to
the fiber of $J_m(Y)$ over $y$. Indeed, the $A$--valued points of
the fiber of $J_m(X)$ over $x$ are in natural bijection with
$$\{\phi\colon\OO_{X,x}\to A[t]/(t^{m+1})\mid\phi(\frmm_x)\subseteq
(t)\}=\{\hat{\phi}\colon\widehat{\OO}_{X,x}\to A[t]/(t^{m+1})\mid
\hat{\phi}(\widehat{\frmm}_x)\subseteq (t)\}$$
$$\simeq\{\hat{\psi}\colon\widehat{\OO}_{Y,y}\to A[t]/(t^{m+1})\mid
\hat{\psi}(\widehat{\frmm}_y)\subseteq
(t)\}=\{\psi\colon\OO_{Y,y}\to
A[t]/(t^{m+1})\mid\psi(\frmm_y)\subseteq(t)\}.$$
\end{remark}
\begin{example}
Suppose that $X$ is a reduced curve having a node at $p$, i.e. we
have ${\widehat{\mathcal O}}_{X,p}\simeq k\llbracket x,y\rrbracket/(xy)$. By the
previous remark, in order to compute the fiber of $J_m(X)$ over $p$
we may assume that $X=\Spec\,k[x,y]/(xy)$ and that $p$ is the
origin.
We see that this fiber consists of the union of $m$ irreducible
components, each of them (with the reduced structure) being isomorphic to
$\AAA^{m+1}$. Indeed, the $i^{\rm th}$ such component corresponds to
morphisms $\phi\colon k[x,y]\to k[t]/(t^{m+1})$ such that
${\rm ord}(\phi(x))\geq i$ and ${\rm ord}(\phi(y))\geq m+1-i$.
If $C$ is an irreducible component of $X$ passing
through $p$ and $C_{\rm reg}$ is its nonsingular locus, then Corollary~\ref{cor1} implies that
$\overline{J_m(C_{\rm reg})}$ is an irreducible component of
$J_m(X)$ of dimension $(m+1)$. Therefore all the above components
of the fiber of $J_m(X)$ over $p$ are irreducible components of
$J_m(X)$. In particular, $J_m(X)$ is not irreducible for every
$m\geq 1$.
\end{example}
\begin{example}
Let $X$ be an arbitrary scheme and $p$ a point in $X$. If all
projections $(\pi^X_m)^{-1}(p)\to (\pi^X_{m-1})^{-1}(p)$ are
surjective, then $p$ is a nonsingular point. To see this, it is
enough to show that if a tangent vector in $T_pX$ can be lifted to
any $J_m(X)$, then it lies in the tangent cone of $X$ at $p$. We may
assume that $X$ is a closed subscheme of $\AAA^n$ and that $p$ is
the origin. The tangent cone of $X$ at $p$ is the intersection of
the tangent cone at $p$ to each hypersurface $H$ containing $X$.
Since $J_m(X)\subseteq J_m(H)$ for every $m$ and every such $H$, it
is enough to prove our assertion when $X$ is a hypersurface. Let $f$
be an equation defining $X$, and write $f=f_r+f_{r+1}+\ldots$, where
$f_i$ has degree $i$ and $f_r\neq 0$. By considering the equations
defining $J_r(X)$ in $J_r(\AAA^n)$, we see that the commutative diagram
\[
\begin{CD}
(\pi^X_r)^{-1}(p) @>>> (\pi_r^{\AAA^n})^{-1}(p)=\AAA^n\times\AAA^{(r-1)n} \\
@VVV @VV{\rm pr}_1V \\
T_pX @>>>T_p\AAA^n=\AAA^n
\end{CD}
\]
identifies the fiber of $J_r(X)$ over $p$ with
$T\times\AAA^{(r-1)n}\hookrightarrow \AAA^n\times\AAA^{(r-1)n}$,
where $T$ is defined by $f_r$
in $\AAA^n$. Since $T$ is the tangent cone to $X$ at $p$, this
completes the proof of our assertion.
\end{example}
\section{Spaces of arcs}
We now consider the projective limit of the jet schemes. Suppose
that $X$ is a scheme of finite type over $k$. Since the projective
system
$$\cdots\to J_m(X)\to J_{m-1}(X)\to\cdots\to J_0(X)=X$$
consists of affine morphisms, the projective limit exists in the
category of schemes over $k$. It is denoted by $J_{\infty}(X)$ and
it is called the space of arcs of $X$. In general, it is not of
finite type over $k$.
The space of arcs comes equipped with projection morphisms
$\psi_m\colon J_{\infty}(X)\to J_m(X)$ that are affine. In
particular, we have $\psi_0\colon J_{\infty}(X)\to X$. Over an
affine open subset $U\subseteq X$, the space of arcs is described by
$$\OO(\psi_0^{-1}(U))=\underrightarrow{\rm lim}\,\OO(\pi_m^{-1}(U)).$$
It follows from the projective limit definition and the functorial
description of the jet schemes that if $X$ is affine, then for every
$k$--algebra $A$ we have
\begin{equation}
\Hom(\Spec(A), J_{\infty}(X))\simeq
\underleftarrow{\rm lim}\,\Hom(\Spec\,A[t]/(t^{m+1}), X)
\simeq\Hom(\Spec\,A\llbracket t\rrbracket,X).
\end{equation}
If $X$ is not necessarily affine, note that every morphism
$\Spec\,k[t]/(t^{m+1})\to X$ or $\Spec\,k\llbracket t\rrbracket\to X$ factors through
any affine open neighborhood of the image of the closed point. It
follows that for every $X$, the $k$--valued points of
$J_{\infty}(X)$ correspond to \emph{arcs in $X$}
$$\Hom(\Spec(k),J_{\infty}(X))\simeq\Hom(\Spec\,k\llbracket t\rrbracket, X).$$
If $f\colon X\to Y$ is a morphism of schemes, by taking the
projective limit of the morphisms $f_m$ we get a morphism
$f_{\infty}\colon J_{\infty}(X)\to J_{\infty}(Y)$. We get in this
way a functor from schemes of finite type over $k$ to arbitrary
$k$-schemes (in fact, to quasicompact and quasiseparated
$k$-schemes).
The properties we have discussed in the previous section for jet
schemes induce corresponding properties for the spaces of arcs. For
example, if $f\colon X\to Y$ is an \'{e}tale morphism, then we have
a Cartesian diagram
\[
\begin{CD}
J_{\infty}(X) @>{f_{\infty}}>> J_{\infty}(Y) \\
@VV{\psi_0^X}V @VV{\psi_0^Y}V \\
X @>{f}>>Y.
\end{CD}
\]
If $i\colon X\hookrightarrow Y$ is a closed immersion, then
$i_{\infty}$ is also a closed immersion. Moreover, if $Y=\AAA^n$,
then $J_{\infty}(Y)\simeq\AAA^{\NN}=\Spec\,k[x_1,x_2,\ldots]$, such that $\psi_m$
corresponds to the projection onto the first $(m+1)n$ components. As
in the proof of Proposition~\ref{prop1}, starting with equations for
a closed subscheme $X$ of $\AAA^n$ we can write down equations for
$J_{\infty}(X)$ in $J_{\infty}(\AAA^n)$.
\smallskip
Note that the one-dimensional torus $k^*$ has a natural action on
jet schemes, induced by reparametrizing the jets. In fact, for
every scheme $X$ we have a morphism
$$\Phi_m\colon\AAA^1\times J_m(X)\to J_m(X)$$
described at the level of functors of points as follows. For every
$k$--algebra $A$, an $A$--valued point of $\AAA^1\times J_m(X)$
corresponds to a pair $(a,\phi)$, where $a\in A$ and $\phi\colon
{\rm Spec}\,A[t]/(t^{m+1})\to X$. This pair is mapped by $\Phi_m$ to
the $A$--valued point of $J_m(X)$ given by the composition
$$\Spec\,A[t]/(t^{m+1})\to\Spec\,A[t]/(t^{m+1})\overset{\phi}\to
X,$$ where the first arrow corresponds to the ring homomorphism
induced by $t\mapsto at$.
It is clear that $\Phi_m$ induces an action of $k^*$ on $J_m(X)$.
The fixed points of this action are given by $\Phi_m(\{0\}\times
J_m(X))$. These are the \emph{constant jets} over the points in $X$:
over a point $x\in X$ the constant $m$--jet is the composition
$$\gamma_m^x\colon \Spec\,k[t]/(t^{m+1})\to\Spec\,k\to X,$$
where the second arrow gives $x$. We have a \emph{zero-section}
$s_m\colon X\to J_m(X)$ of the projection $\pi_m$ that takes $x$ to
$\gamma_m^x$. If $A$ is a $k$--algebra, then $s_m$ takes an
$A$--valued point of $X$ given by $u\colon {\rm Spec}\,A\to X$ to
the composition
$${\rm Spec}\,A[t]/(t^{m+1})\to {\rm Spec}\,A\overset{u}\to X,$$
the first arrow being induced by the inclusion $A\hookrightarrow
A[t]/(t^{m+1})$.
Note that if $\gamma\in J_m(X)$ is a jet lying over $x\in X$, then
$\gamma_m^x$ lies in the closure of $\Phi_m(k^*\times\{\gamma\})$.
Since every irreducible component $Z$ of $J_m(X)$ is preserved by
the $k^*$--action, this implies that if $\gamma$ is an $m$--jet in
$Z$ that lies over $x\in X$, then also $\gamma_m^x$ is in $Z$. This
will be very useful for the applications in \S 8.
Both the morphisms $\Phi_m$ and the zero-sections $s_m$ are
functorial. Moreover, they satisfy obvious compatibilities with the
projections $J_m(X)\to J_{m-1}(X)$. Therefore we get a morphism
$$\Phi_{\infty}\colon\AAA^1\times J_{\infty}(X)\to J_{\infty}(X)$$
inducing an action of $k^*$ on $J_{\infty}(X)$, and a zero-section
$s_{\infty}\colon X\to J_{\infty}(X)$.
\bigskip
If ${\rm char}(k)=0$, then one can write explicit equations
for $J_{\infty}(X)$ and $J_m(X)$ by ``formally differentiating'', as follows. If $S=k[x_1,\ldots,x_n]$,
let us write $S_{\infty}=k[x_i^{(m)}\mid 1\leq i\leq n, m\in\NN]$,
so that $\Spec(S_{\infty})=J_{\infty}(\AAA^n)$ (in practice, we
simply write $x_i=x_i^{(0)}$, $x_i'=x_i^{(1)}$, and so on).
The identification is made as follows: for a
$k$--algebra $A$, a morphism $\phi\colon
k[x_1,\ldots,x_n]\to A\llbracket t\rrbracket$ determined by
\begin{equation}\label{def_phi}
\phi(x_i)=\sum_{m\in\NN}\frac{a_i^{(m)}}{m!}t^m
\end{equation}
corresponds to the $A$--valued point $(a_i^{(m)})$ of $\Spec(S_{\infty})$.
Note that on $S_{\infty}$ we have a $k$--derivation $D$ characterized by
$D(x_i^{(m)})=x_i^{(m+1)}$.
If $f\in S$, then we put $f^{(0)}:=f$, and we define recursively
$f^{(m)}:=D(f^{(m-1)})$ for $m\geq 1$. Suppose now that $R=S/I$, where $I$ is
generated by $f_1,\ldots,f_r$. We claim that if
\begin{equation}
R_{\infty}:=S_{\infty}/(f_i^{(m)}\vert 1\leq i\leq r, m\in\NN),
\end{equation}
then $J_{\infty}(\Spec\,R)\simeq \Spec(R_{\infty})$.
Indeed, given $A$ and $\phi$ as above, for every
$f\in k[x_1,\ldots,x_n]$ we have
$$\phi(f)=\sum_{m\in\NN}\frac{f^{(m)}(a,a',\ldots,a^{(m)})}{m!}t^m$$
(note that both sides are additive and multiplicative in $f$, hence
it is enough to check this for $f=x_i$, when it is trivial). It
follows that $\phi$ induces a morphism $R\to A\llbracket t\rrbracket$ if and only if
$f_i^{(m)}(a,a',\ldots,a^{(m)})=0$ for every $m$ and every $i\leq
r$. This completes the proof of the above claim.
\begin{remark}
Note that $D$ induces a $k$--derivation $\overline{D}$ on $R_{\infty}$.
Moreover, $(R_{\infty},\overline{D})$ is universal in the following
sense: we have a $k$--algebra homomorphism $j\colon R\to R_{\infty}$
such that if $(T,\delta)$ is another $k$--algebra with a
$k$--derivation $\delta$, and if $j'\colon R\to T$ is a $k$--algebra
homomorphism, then there is a unique $k$--algebra homomorphism
$h\colon R_{\infty}\to T$ making the diagram
\[
\xymatrix{
R\ar[dr] _{j'} & \overset{j}\longrightarrow& (R_{\infty},\overline{D}) \ar[dl]^{h} \\
& (T,\delta)
}
\]
commutative, and such that $h$
commutes with the derivations, i.e.
$\delta(h(u))=h(\overline{D}(u))$ for every $u\in R_{\infty}$. This
is the starting point for the applications of spaces of arcs in
differential algebra, see \cite{buium}.
\end{remark}
Of course, if we consider finite level truncations, then we obtain
equations for the jet schemes. More precisely, if we put
$S_m:=k[x_i^{(j)}\mid 1\leq i\leq n, 0\leq j\leq m]$ and
$$R_m:=S_m/(f_i,f_i',\ldots,f_i^{(m)}\mid 1\leq i\leq r),$$
then $\Spec(R_m)\simeq J_m(\Spec\,R)$. Moreover, the
obvious morphisms $R_{m-1}\to R_m$
induce the projections
$J_m(\Spec\,R)\to J_{m-1}(\Spec\,R)$.
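For instance (a worked example of ours, taking the cuspidal cubic as input), for $R=k[x,y]/(x^2-y^3)$ the first few equations read:

```latex
% Formal derivatives of f = x^2 - y^3, writing x', x'', y', y''
% for x^{(1)}, x^{(2)}, y^{(1)}, y^{(2)}:
\begin{align*}
f   &= x^2-y^3,\\
f'  &= 2xx'-3y^2y',\\
f'' &= 2(x')^2+2xx''-6y(y')^2-3y^2y''.
\end{align*}
% Hence J_2(Spec R) = Spec k[x,x',x'',y,y',y'']/(f,f',f'').
```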
\smallskip
{}From now on, whenever dealing with the schemes $J_m(X)$ and
$J_{\infty}(X)$ we will restrict to their $k$--valued points. Of
course, for $J_m(X)$ this causes no ambiguity since this is a scheme
of finite type over $k$. Note that the Zariski topology on
$J_{\infty}(X)$ is the projective limit topology of
$J_{\infty}(X)\simeq\underleftarrow{\rm lim} J_m(X)$. Moreover,
since we consider only $k$--valued points, we have
$J_{\infty}(X)=J_{\infty}(X_{\rm red})$ (note that the analogous
assertion is false for the spaces $J_m(X)$). Indeed, since $k\llbracket t\rrbracket$
is a domain, we have ${\rm Hom}(\Spec\,k\llbracket t\rrbracket,X)={\rm
Hom}(\Spec\,k\llbracket t\rrbracket, X_{\rm red})$. Similarly, if
$X=X_1\cup\ldots\cup X_r$, where all $X_i$ are closed in $X$, then
$J_{\infty}(X)=J_{\infty}(X_1)\cup\ldots \cup J_{\infty}(X_r)$.
\bigskip
To a closed subscheme $Z$ of a scheme $X$ we associate subsets of
the spaces of arcs and jets of $X$ by specifying the vanishing order
along $Z$. If $\gamma\colon\Spec\,k\llbracket t\rrbracket \to X$ is an arc on $X$,
then the inverse image of $Z$ by $\gamma$ is defined by an ideal in
$k\llbracket t\rrbracket$. If this ideal is generated by $t^r$, then we put ${\rm
ord}_{\gamma}(Z)=r$ (if the ideal is zero, then we put ${\rm
ord}_{\gamma}(Z)=\infty$). The \emph{contact locus} of order $e$
with $Z$ in $J_{\infty}(X)$ is the set
$${\rm Cont}^e(Z):=\{\gamma\in
J_{\infty}(X)\mid\ord_{\gamma}(Z)=e\}.$$ We similarly define
$${\rm Cont}^{\geq e}(Z):=\{\gamma\in
J_{\infty}(X)\mid\ord_{\gamma}(Z)\geq e\}.$$ We can define in the
obvious way also subsets ${\rm Cont}^e(Z)_m$ (if $e\leq m$) and
${\rm Cont}^{\geq e}(Z)_m$ (if $e\leq m+1$) of $J_m(X)$ and we have
$${\rm Cont}^e(Z)=\psi_m^{-1}({\rm
Cont}^e(Z)_m),\,\,{\rm Cont}^{\geq e}(Z)=\psi_m^{-1}({\rm
Cont}^{\geq e}(Z)_m).$$ Note that ${\rm Cont}^{\geq (m+1)}(Z)_m
=J_m(Z)$. If ${\mathcal I}$ is the ideal sheaf in $\OO_X$ defining
$Z$, then we sometimes write ${\rm ord}_{\gamma}({\mathcal I})$,
${\rm Cont}^e({\mathcal I})$ and ${\rm Cont}^{\geq e}({\mathcal
I})$.
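For example (our illustration): take $X=\AAA^2$, let $Z=V(x)$ be a coordinate line, and let $\gamma$ be the arc given by $\gamma^*(x)=t^3$ and $\gamma^*(y)=1+t$.

```latex
% The inverse image of Z = V(x) under gamma is defined by the ideal
% (t^3) in k[[t]], so the contact order is 3:
\[
{\rm ord}_{\gamma}(Z)=3,\qquad
\gamma\in{\rm Cont}^{3}(Z)\subseteq{\rm Cont}^{\geq 2}(Z)
\subseteq J_{\infty}(\AAA^2).
\]
```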
\bigskip
The next proposition gives the first hint of the relevance of spaces
of arcs to birational geometry. A key idea is that certain subsets
in the space of arcs are ``small'' and they can be ignored. A subset
of $J_{\infty}(X)$ is called \emph{thin} if it is contained in
$J_{\infty}(Y)$, where $Y$ is a closed subset of $X$ that does not
contain an irreducible component of $X$. It is clear that a finite
union of thin subsets is again thin. If $f\colon X'\to X$ is a
dominant morphism with $X$ and $X'$ irreducible, and $A\subseteq
J_{\infty}(X)$ is thin, then $f_{\infty}^{-1}(A)$ is thin. If $f$ is
in addition generically finite, and $B\subseteq J_{\infty}(X')$ is
thin, then $f_{\infty}(B)$ is thin.
We show that a proper birational morphism induces a bijective map on
the complements of suitable thin sets.
\begin{proposition}\label{prop11}
Let $f\colon X'\to X$ be a proper morphism. If $Z$ is a closed
subset of $X$ such that $f$ is an isomorphism over $X\smallsetminus
Z$, then the induced map
$$J_{\infty}(X')\smallsetminus J_{\infty}(f^{-1}(Z))\to
J_{\infty}(X)\smallsetminus J_{\infty}(Z)$$ is bijective. In
particular, if $f$ is a proper birational morphism of reduced
schemes, then $f_{\infty}$ gives a bijection on the complements of
suitable thin subsets.
\end{proposition}
\begin{proof}
Let $U=X\smallsetminus Z$. Since $f$ is proper, the Valuative
Criterion for Properness implies that an arc $\gamma\colon
\Spec\,k\llbracket t\rrbracket \to X$ lies in the image of $f_{\infty}$ if and only if
the induced morphism $\overline{\gamma}\colon\Spec\,k\llparenthesis t\rrparenthesis\to X$ can
be lifted to $X'$ (moreover, if the lifting of $\overline{\gamma}$
is unique, then the lifting of $\gamma$ is also unique). On the
other hand, $\gamma$ does not lie in $J_{\infty}(Z)$ if and only if
$\overline{\gamma}$ factors through $U\hookrightarrow X$. In this
case, the lifting of $\overline{\gamma}$ exists and is unique since
$f$ is an isomorphism over $U$.
\end{proof}
We use the above proposition to prove the following result of
Kolchin.
\begin{theorem}\label{thm1}
If $X$ is irreducible and ${\rm char}(k)=0$, then $J_{\infty}(X)$ is
irreducible.
\end{theorem}
\begin{proof}
Since $J_{\infty}(X)=J_{\infty}(X_{\rm red})$, we may assume that
$X$ is also reduced.
If $X$ is nonsingular, then the assertion in the theorem is easy: we have
seen that every jet scheme $J_m(X)$ is a nonsingular variety. Since
the projections $J_{\infty}(X)\to J_m(X)$ are surjective, and
$J_{\infty}(X)=\underleftarrow{\lim}J_m(X)$ with the projective
limit topology, it follows that $J_{\infty}(X)$, too, is
irreducible.
In the general case we do induction on $n=\dim(X)$, the case $n=0$
being trivial. By Hironaka's Theorem we have a resolution of
singularities $f\colon X'\to X$, that is, a proper birational
morphism, with $X'$ nonsingular. Suppose that $Z$ is a proper closed
subset of $X$ such that $f$ is an isomorphism over
$U=X\smallsetminus Z$. It follows from Proposition~\ref{prop11} that
$$J_{\infty}(X)=J_{\infty}(Z)\cup {\rm Im}(f_{\infty}).$$
Moreover, the nonsingular case implies that $J_{\infty}(X')$, hence
also ${\rm Im}(f_{\infty})$, is irreducible. Therefore, in order to
complete the proof it is enough to show that $J_{\infty}(Z)$ is
contained in the closure of ${\rm Im}(f_{\infty})$.
Consider the irreducible decomposition $Z=Z_1\cup\ldots\cup Z_r$,
inducing $J_{\infty}(Z)=J_{\infty}(Z_1)\cup\ldots \cup
J_{\infty}(Z_r)$. Since $f$ is surjective, for every $i$ there is an
irreducible component $Z'_i$ of $f^{-1}(Z_i)$ such that the induced
map $Z'_i\to Z_i$ is surjective. We are in characteristic zero,
hence by the Generic Smoothness Theorem we can find open subsets
$U'_i$ and $U_i$ in $Z'_i$ and $Z_i$, respectively, such that the
induced morphisms $g_i\colon U'_i\to U_i$ are smooth and surjective.
In particular, we have
$$J_{\infty}(U_i)={\rm Im}((g_i)_{\infty})\subseteq {\rm
Im}(f_{\infty}).$$
On the other hand, every $J_{\infty}(Z_i)$ is irreducible by
induction. Since $J_{\infty}(U_i)$ is a nonempty open subset of
$J_{\infty}(Z_i)$, it follows that
$$J_{\infty}(Z_i)\subseteq \overline{{\rm Im}(f_{\infty})}$$
for every $i$. This completes the proof of the theorem.
\end{proof}
\begin{remark}
In fact, Kolchin's Theorem holds in a much more general setup, see
\cite{kolchin} and also \cite{gillet} for a scheme-theoretic
approach. Note that we proved a slightly weaker statement even in our
restricted setting. Kolchin's result says that the scheme
$J_{\infty}(X)$ is irreducible, while we only proved that its
$k$--valued points form an irreducible set. One can deduce
the stronger statement from ours by showing that the $k$--valued
points are dense in $J_{\infty}(X)$; in turn, this can be proved along
the same lines as Theorem~\ref{thm1} above. For a different proof
of (the stronger version of) Kolchin's Theorem, without using
resolution of singularities, see \cite{IK} and \cite{NS}. Note also
that Remark~1 in \cite{NS} gives a counterexample in positive
characteristic.
\end{remark}
\section{Truncation maps between spaces of jets}
In what follows we will encounter morphisms that are not locally
trivial, but that satisfy this property after passing to a
stratification. Suppose that $g\colon V'\to V$ is a morphism of
schemes, $W'\subseteq V'$ and $W\subseteq V$ are constructible
subsets such that $g(W')\subseteq W$, and $F$ is a reduced scheme.
We will say that $g$ gives a \emph{piecewise trivial} fibration
$W'\to W$ with fiber $F$ if there is a decomposition
$W=T_1\sqcup\ldots\sqcup T_r$, with all $T_i$ locally closed
subsets of $W$ (with the reduced scheme structure) such that each
$W'\cap g^{-1}(T_i)$ is locally closed in $V'$ and, with the reduced
scheme structure, it is isomorphic to $T_i\times F$ (with the
restriction of $g$ corresponding to the projection onto the first
component). It is clear that if $g\colon V'\to V$ is locally trivial
with fiber $F$, then it gives a piecewise trivial fibration with
fiber $F_{\rm red}$ from $g^{-1}(W)$ to $W$ for every constructible
subset $W$ of $V$.
If in the definition of piecewise trivial fibrations we assume only
that $W'\cap g^{-1}(T_i)\to T_i$ factors as
$$W'\cap g^{-1}(T_i)\overset{u}\to T'_i\times F\overset{v}\to T'_i\overset{w}\to T_i,$$
where $u$ is an isomorphism, $v$ is the projection, and $w$ is
bijective, then we say that $W'\to W$ is a \emph{weakly piecewise
trivial} fibration with fiber $F$. If ${\rm char}(k)=0$, then every
bijective morphism is piecewise trivial with fiber ${\rm Spec}(k)$,
and therefore the two notions coincide.
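\begin{remark}
The following standard examples (included only for illustration)
show the difference between the two notions. If $X\subseteq\AAA^2$
is the cuspidal curve defined by $y^2=x^3$, then the normalization
$g\colon\AAA^1\to X$, $g(t)=(t^2,t^3)$, is bijective, hence weakly
piecewise trivial with fiber ${\rm Spec}(k)$; it is even piecewise
trivial, as we see by taking the decomposition
$X=(X\smallsetminus\{0\})\sqcup\{0\}$, since $g$ is an isomorphism
over $X\smallsetminus\{0\}$. On the other hand, if
${\rm char}(k)=p>0$, then the Frobenius morphism
$\AAA^1\to\AAA^1$, $x\mapsto x^p$, is bijective, hence weakly
piecewise trivial with fiber ${\rm Spec}(k)$, but it is not
piecewise trivial: its restriction over any infinite locally closed
subset is not an isomorphism.
\end{remark}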
\smallskip
We have seen in Corollary~\ref{cor1} that if $X$ is a nonsingular
variety of dimension $n$, then the truncation maps $J_m(X)\to
J_{m-1}(X)$ are locally trivial with fiber $\AAA^n$. In order to
generalize this to more general schemes, we need to introduce the
\emph{Jacobian subscheme}. If $X$ is a scheme of pure dimension $n$,
then its Jacobian subscheme is defined by the ideal ${\rm
Jac}_X:={\rm Fitt}^n(\Omega_X)$, the $n$-th Fitting ideal of
$\Omega_X$. For the basics on Fitting ideals we
refer to \cite{eisenbud}. A basic property of Fitting ideals that we
will keep using is that they commute with pull-back: if $f\colon
X'\to X$ is a morphism and if ${\mathcal M}$ is a coherent sheaf on
$X$, then ${\rm Fitt}^i(f^*{\mathcal M})=({\rm
Fitt}^i({\mathcal M}))\cdot {\mathcal O}_{X'}$ for every $i$.
The ideal ${\rm Jac}_X$ can be explicitly computed as follows.
Suppose that $U$ is an open subset of $X$ that admits a closed
immersion $U\hookrightarrow \AAA^N$. We have a surjection
$$\Omega_{\AAA^N}\vert_U=\oplus_{j=1}^N\OO_U\,dx_j\to\Omega_U$$
whose kernel is generated by the elements $df=\sum_{j=1}^N\frac{\partial
f}{\partial x_j}dx_j$, where $f$ varies over a system of generators
$f_1,\ldots,f_d$ for the ideal of $U$ in $\AAA^N$. If $r=N-n$, then
${\rm Jac}_X$ is generated over $U$ by the image in $\OO_U$ of the
$r$--minors of the Jacobian matrix $(\partial f_i/\partial
x_j)_{i,j}$.
It is well-known that the support of the Jacobian subscheme is the
singular locus $X_{\rm sing}$ of $X$. Most of the time we will
assume that $X$ is reduced, hence its singular locus does not
contain any irreducible component of $X$. Note also that
${\rm Fitt}^{n-1}(\Omega_X)=0$ if either
$X$ is locally a complete intersection (in which case the above
Jacobian matrix has only $r$ rows, hence no $(r+1)$--minors) or $X$
is reduced (in which case the $(r+1)$--minors of the Jacobian matrix
vanish at the generic points of the irreducible components of $X$,
hence are zero in $\OO_X$).
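\begin{remark}
For example, let $X\subseteq\AAA^2$ be the cuspidal curve defined by
$f=y^2-x^3$ (with ${\rm char}(k)\neq 2,3$), so that $N=2$, $n=1$ and
$r=1$. The Jacobian matrix is the row vector $(-3x^2,2y)$, hence
${\rm Jac}_X$ is generated by the images of $3x^2$ and $2y$ in
$\OO_X$, and its support is the origin, which is precisely
$X_{\rm sing}$. Moreover, since the Jacobian matrix has only one
row, it has no $2$--minors, hence ${\rm Fitt}^0(\Omega_X)=0$, in
accordance with the fact that $X$ is a complete intersection.
\end{remark}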
\smallskip
We start by describing the fibers of the truncation morphisms when
we restrict to jets that can be lifted to the space of arcs.
\begin{proposition}\label{fiber2}{\rm (}\cite{DL}{\rm )}
Let $X$ be a reduced scheme of pure dimension $n$ and $e$ a
nonnegative integer. Fix $m\geq e$ and let $\pi_{m+e,m}\colon
J_{m+e}(X)\to J_m(X)$ be the canonical projection.
\begin{enumerate}
\item[i)] We have $\psi_m({\rm Cont}^e({\rm
Jac}_X))=\pi_{m+e,m}({\rm Cont}^e({\rm Jac}_X)_{m+e})$, i.e., an
$m$--jet in $J_m(X)$ that vanishes with order $e$ along ${\rm
Jac}_X$ can be lifted to $J_{\infty}(X)$ if and only if it can be
lifted to $J_{m+e}(X)$. In particular, $\psi_m({\rm Cont}^e({\rm
Jac}_X))$ is a constructible set.
\item[ii)] The projection $J_{m+1}(X)
\to J_m(X)$ induces a piecewise trivial fibration
$$\alpha\colon \psi_{m+1}({\rm Cont}^e({\rm Jac}_X))\to
\psi_m({\rm Cont}^e({\rm Jac}_X))$$ with fiber $\AAA^n$.
\end{enumerate}
\end{proposition}
Before giving the proof of Proposition~\ref{fiber2} we make some
general considerations that will be used again later. A key point
for the proof of Proposition~\ref{fiber2} is the reduction to the
complete intersection case. We present now the basic setup, leaving
the proof of a technical result for the Appendix.
Let $X$ be a reduced scheme of pure dimension $n$. All our
statements are local over $X$, hence we may assume that $X$ is
affine. Fix a closed embedding $X\hookrightarrow\AAA^N$ and let
$f_1,\ldots,f_d$ be generators of the ideal $I_X$ of $X$. Consider
$F_1,\ldots,F_d$ with $F_i=\sum_{j=1}^da_{i,j}f_j$ for general
$a_{i,j}\in k$. Note that we still have $I_X=(F_1,\ldots,F_d)$, but
in addition we have the following properties. Let us denote by $M$
the subscheme defined by the ideal $I_M=(F_1,\ldots,F_r)$, where
$r=N-n$.
\begin{enumerate}
\item[1)] All irreducible components of $M$ have dimension $n$,
hence $M$ is a complete intersection.
\item[2)] $X$ is a closed subscheme of $M$ and $X=M$ at the generic
point of every irreducible component of $X$.
\item[3)] There is an $r$--minor of the Jacobian matrix of
$F_1,\ldots,F_r$ that does not vanish at the generic point of any
irreducible component of $X$.
\end{enumerate}
Of course, every $r$ elements of $\{F_1,\ldots,F_d\}$ satisfy
analogous properties.
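\begin{remark}
A typical example to keep in mind is the union $X$ of the three
coordinate axes in $\AAA^3$, defined by $I_X=(xy,yz,xz)$, so that
$N=3$, $n=1$, $d=3$ and $r=2$. Two general linear combinations $F_1$
and $F_2$ of $xy$, $yz$ and $xz$ define a one-dimensional complete
intersection $M$ that is the union of $X$ with one extra line
through the origin, so that properties 1)--3) above hold. Note that
set-theoretically this extra line is the subscheme $X'$ defined by
$(I_M\colon I_X)$ that appears in the proof of
Lemma~\ref{lemma_reduction} below.
\end{remark}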
Suppose now that $e$ is a nonnegative integer, $m\geq e$ and we want
to study ${\rm Cont}^e({\rm Jac}_X)_m$. If $M$ is as above, then we
have an open subset $U_M$ of ${\rm Cont}^e({\rm Jac}_X)_m$ that is
contained in ${\rm Cont}^e({\rm Jac}_M)_m$ (the latter contact locus
is a subset of $J_m(M)$). Moreover, when varying the subsets of
$\{1,\ldots,d\}$ with $r$ elements, the corresponding open subsets
cover ${\rm Cont}^e({\rm Jac}_X)_m$.
\begin{lemma}\label{lemma_reduction}
If $\gamma\in {\rm Cont}^e({\rm Jac}_M)\subseteq J_{\infty}(M)$ is
such that its projection to $J_m(M)$ lies in $J_m(X)$, then $\gamma$
lies in $J_{\infty}(X)$.
\end{lemma}
\begin{proof}
Let $X'\subseteq\AAA^N$ be defined by $(I_M\colon I_X)$, hence
set-theoretically $X'$ is the union of the irreducible components of
$M$ that are not contained in $X$. We have $J_{\infty}(M)=
J_{\infty}(X)\cup J_{\infty}(X')$, and therefore it is enough to
show that $\gamma$ does not lie in $J_{\infty}(X')$.
It follows from Corollary~\ref{cor1_appendix} in the Appendix that
if we denote by $J_F$ the ideal generated by the $r$--minors of the
Jacobian matrix of $(F_1,\ldots,F_r)$ (hence ${\rm
Jac}_M=(J_F+I_M)/I_M$), then
$$J_F\subseteq I_{X'}+I_X.$$
By assumption ${\rm ord}_{\gamma}(J_F)=e<m+1\leq {\rm
ord}_{\gamma}(I_X)$, hence ${\rm ord}_{\gamma}(I_{X'})\leq e$. In
particular, $\gamma$ is not in $J_{\infty}(X')$.
\end{proof}
\smallskip
\begin{proof}[Proof of Proposition~\ref{fiber2}]
We may assume that $X$ is affine, and let $X\hookrightarrow \AAA^N$
be a closed immersion of codimension $r$. Let $F_1,\ldots,F_d$ be
general elements in the ideal $I_X$ as in the above discussion.
Consider the subscheme $M$ of $\AAA^N$ defined by $F_1,\ldots,F_r$
and let $U_M$ be the open subset of ${\rm Cont}^e({\rm Jac}_X)_m$
that is contained in ${\rm Cont}^e({\rm Jac}_M)_m$. When we vary the
subsets with $r$ elements of $\{1,\ldots,d\}$, the corresponding
open subsets cover ${\rm Cont}^e({\rm Jac}_X)_m$. Therefore it is
enough to prove the two assertions in the proposition over $U_M$.
We claim that it is enough to prove i) and ii) for $M$. Indeed, if $\gamma\in U_M$ can be lifted to
$J_{m+e}(X)$, then in particular it can be lifted to $J_{m+e}(M)$.
If we know i) for $M$, it follows that $\gamma$ can be lifted to an
arc $\delta\in J_{\infty}(M)$. Lemma~\ref{lemma_reduction} implies
that $\delta$ lies in $J_{\infty}(X)$, hence we have i) for $X$.
Moreover, suppose that ii) holds for $M$, hence the projection
$$\beta\colon\psi_{m+1}^M({\rm Cont}^e({\rm Jac}_M))
\to \psi_m^M({\rm Cont}^e({\rm Jac}_M))$$ is piecewise trivial with
fiber $\AAA^n$. Again, Lemma~\ref{lemma_reduction} implies that the
restriction of $\beta$ over $U_M\cap\psi_m^M({\rm Cont}^e({\rm
Jac}_M))$ coincides with the restriction of $\alpha$ over
$U_M\cap\psi_m({\rm Cont}^e({\rm Jac}_X))$. Therefore $X$ also
satisfies ii).
We now prove the proposition for a subscheme $M$ defined by a
regular sequence $F_1,\ldots,F_r$ ($M$ might not be reduced, but we do not
need this assumption anymore). Consider an element
$u=(u_1,\ldots,u_N)\in J_m(M)$, where all $u_i$ lie in
$k[t]/(t^{m+1})$ (for the matrix computations that will follow we
consider $u$ as a column vector). We denote by $\widetilde{u}_i\in
k\llbracket t\rrbracket$ the lifting of $u_i$ that has degree $\leq m$. Our
assumption is that ${\rm ord}(F_i(\widetilde{u}))\geq m+1$ for every
$i$. An element in the fiber $(\psi_m^M)^{-1}(u)$ is an $N$--tuple
$w=\widetilde{u}+t^{m+1}v$ where $v=(v_1,\ldots,v_N)\in (k\llbracket t\rrbracket)^N$,
such that $F_i(w)=0$ for every $i$. Using the Taylor expansion, we
get
\begin{equation}\label{eq_F}
F_i(w)=F_i(\widetilde{u})+t^{m+1}\cdot\sum_{j=1}^N\frac{\partial
F_i}{\partial x_j}(\widetilde{u})v_j+t^{2(m+1)}A_i(\widetilde{u},v),
\end{equation}
where each $A_i$ has all terms of degree $\geq 2$ in the $v_j$.
We write $F$ and $A$ for the column vectors
$(F_1,\ldots,F_r)$ and $(A_1,\ldots,A_r)$, respectively.
Let $J(\widetilde{u})$ denote the Jacobian matrix $(\partial
F_i(\widetilde{u})/\partial x_j)_{i\leq r,j\leq N}$. Since $u$ lies
in ${\rm Cont}^e({\rm Jac}_M)_m$, all $r$--minors of this matrix
have order $\geq e$. Moreover, after taking a suitable open cover of
${\rm Cont}^e({\rm Jac}_M)_m$ and reordering the variables, we may
assume that the determinant of the submatrix $R(\widetilde{u})$ on
the first $r$ columns of $J(\widetilde{u})$ has order precisely $e$.
If $R^*(\widetilde{u})$ denotes the classical adjoint of the matrix
$R(\widetilde{u})$, then
$$R^*(\widetilde{u})\cdot J(\widetilde{u})=(t^e\cdot I_r, t^e\cdot J'(\widetilde{u}))$$
for some $r\times (N-r)$ matrix $J'(\widetilde{u})$. Indeed, for
every $i\leq r$ and $r+1\leq j\leq N$, the $(i,j)$ entry of
$R^*(\widetilde{u})\cdot J(\widetilde{u})$ is equal, up to a sign,
to the $r$--minor of $J(\widetilde{u})$ on the columns
$1,\ldots,i-1,i+1,\ldots,r,j$. Therefore its order is $\geq e$.
Since the determinant of $R^*(\widetilde{u})$ is nonzero, it follows
that $F(w)=0$ if and only if $R^*(\widetilde{u})\cdot F(w)=0$. By
equation (\ref{eq_F}) we have
\begin{equation}\label{eq_F2}
R^*(\widetilde{u})\cdot F(w)=R^*(\widetilde{u})\cdot
F(\widetilde{u})+t^{m+e+1}\cdot (I_r,J'(\widetilde{u}))\cdot v+
t^{2m+2}\cdot R^*(\widetilde{u})\cdot A(\widetilde{u},v).
\end{equation}
Note that since $m\geq e$ we have $2m+2>m+e+1$.
We claim that there is $v$ such that $F(\widetilde{u}+t^{m+1}v)=0$
if and only if
\begin{equation}\label{eq_F3}
\ord(R^*(\widetilde{u})\cdot F(\widetilde{u}))\geq m+e+1.
\end{equation}
Indeed, the fact that this condition is necessary follows
immediately from (\ref{eq_F2}). To see that it is also sufficient,
suppose that (\ref{eq_F3}) holds, and let us show that we can find
$v$ such that $F(\widetilde{u}+t^{m+1}v)=0$. We write
$v_i=\sum_{j}v_i^{(j)}t^j$ and determine inductively the
$v_i^{(j)}$. If we consider the term of order $m+e+1$ on the
right-hand side of (\ref{eq_F2}), then we see that we can choose
$v_{r+1}^{(0)},\ldots,v_N^{(0)}$ arbitrarily, and then the other
$v_i^{(0)}$ are uniquely determined. In the term of order
$t^{m+e+2}$, the contribution of the part coming from
$R^*(\widetilde{u})\cdot A(\widetilde{u},v)$ involves only the
$v_i^{(0)}$. It follows that again we may choose
$v_{r+1}^{(1)},\ldots,v_N^{(1)}$ arbitrarily, and then
$v^{(1)}_1,\ldots,v^{(1)}_r$ are determined uniquely such that the
coefficient of $t^{m+e+2}$ in $R^*(\widetilde{u})\cdot
F(\widetilde{u}+t^{m+1}v)$ is zero. Continuing this way we see that
we can find $v$ such that $F(\widetilde{u}+t^{m+1}v)=0$. This
concludes the proof of our claim. Since the fiber over $u$ in
$\psi_{m+1}(J_{\infty}(M))$ corresponds to those
$(v_1^{(0)},\ldots,v_N^{(0)})$ such that there is $v$ with
$F(\widetilde{u}+t^{m+1}v)=0$, it follows from our description that
this is an affine
subspace of codimension $r$ in $\AAA^N$.
Note that if there is $v$ such that
$\ord\,F(\widetilde{u}+t^{m+1}v)\geq m+e+1$, then as above we get
that $\ord(R^*(\widetilde{u})\cdot F(\widetilde{u}))\geq m+e+1$. We
deduce that if $u$ can be lifted to $J_{m+e}(M)$, then $u$ can be
lifted to $J_{\infty}(M)$, which proves i). Moreover, the above
computation shows that over the set $W$ defined by (\ref{eq_F3}) in
our locally closed subset of $J_m(M)$, the inclusion
$$\psi_{m+1}({\rm Cont}^e({\rm Jac}_M))\subseteq
J_m(M)\times\AAA^N$$ is, at least set-theoretically, an affine
bundle with fiber $\AAA^{N-r}$. This proves ii) and completes the
proof of the proposition.
\end{proof}
\begin{remark}\label{remark_fiber2}
It follows from the above proof that the assertions of the
proposition hold also for a locally complete intersection scheme
(the scheme does not have to be reduced).
\end{remark}
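\begin{remark}
The following example shows that, for singular $X$, the image
$\psi_m(J_{\infty}(X))$ can be strictly smaller than $J_m(X)$. Let
$X\subseteq\AAA^2$ be the cuspidal curve defined by $y^2=x^3$ (with
${\rm char}(k)\neq 2,3$) and let $\gamma\in J_1(X)$ be the $1$--jet
given by $(x(t),y(t))=(0,t)$ (this lies on $X$ since
$y^2-x^3\equiv 0$ mod $t^2$). By Proposition~\ref{prop11} applied to
the normalization $\AAA^1\to X$, $t\mapsto (t^2,t^3)$, every arc on
$X$ is either the constant arc at the origin or of the form
$(u(t)^2,u(t)^3)$ for some $u\in k\llbracket t\rrbracket$; in
particular, the order of its second component is never equal to $1$,
hence no arc lifts $\gamma$. Consistently with i) in
Proposition~\ref{fiber2}, here $e={\rm ord}_{\gamma}({\rm Jac}_X)=1$
and $\gamma$ cannot be lifted even to $J_{1+e}(X)=J_2(X)$: a lifting
$(a,b)\in J_2(X)$ would satisfy $a^3\equiv 0$ and $b^2\equiv t^2$
mod $t^3$, a contradiction.
\end{remark}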
\bigskip
We now discuss the fibers of the truncation maps between jet spaces
without restricting to the jets that can be lifted to the space of
arcs.
\begin{proposition}\label{fiber6}{\rm (}\cite{Lo}{\rm )}
Let $X$ be a scheme of finite type over $k$. For all nonnegative
integers $m$ and $p$, with $p\leq m\leq 2p+1$, consider the projection
$$\pi_{m,p}\colon J_m(X)\to J_p(X).$$
\begin{enumerate}
\item[i)] If $\gamma\in J_p(X)$ is such that
$\pi_{m,p}^{-1}(\gamma)$ is non-empty, then scheme-theoretically we
have
\begin{equation}\label{equation_fiber6}
\pi_{m,p}^{-1}(\gamma)\simeq {\rm
Hom}_{k[t]/(t^{p+1})}(\gamma^*\Omega_X,(t^{p+1})/(t^{m+1})).
\end{equation}
\item[ii)] Suppose that $X$ has pure dimension $n$
and that for $e=\ord_{\gamma}({\rm Jac}_X)$ we have $2p\geq m\geq
$e+p$. If $X$ is either locally a complete intersection or reduced,
and if $\pi_{m,p}^{-1}(\gamma)\neq\emptyset$,
then
$$\pi_{m,p}^{-1}(\gamma)\simeq\AAA^{e+(m-p)n}.$$
\end{enumerate}
\end{proposition}
\begin{proof}
Note that $\gamma$ corresponds to a ring homomorphism $\OO_{X,x}\to
k[t]/(t^{m+1})$, for some $x\in X$. Our assumption on $m$ and $p$
implies that $(t^{p+1})/(t^{m+1})$ is a $k[t]/(t^{p+1})$--module.
Therefore the right-hand side of (\ref{equation_fiber6}) is
well-defined. It is a finite-dimensional $k$--vector space, hence it
is an affine space.
In order to describe it, we use the structure of finitely generated
modules over $k\llbracket t\rrbracket$ to write a free presentation
$$(k[t]/(t^{p+1}))^{\oplus N}\overset{A}\to
(k[t]/(t^{p+1}))^{\oplus N}\to \gamma^*\Omega_X\to 0,$$ where $A$ is
the diagonal matrix ${\rm diag}(t^{a_1},\ldots,t^{a_{N}})$, with
$0\leq a_1\leq\ldots\leq a_{N}\leq p+1$. In this case the right-hand
side of (\ref{equation_fiber6}) is isomorphic to $\AAA^{\ell}$,
where $\ell=\sum_i\min\{a_i,m-p\}$. Note also that its $R$--valued
points are in natural bijection with
$${\rm Der}_k(\OO_{X,x},t^{p+1}R[t]/t^{m+1}R[t]).$$
We first show that it is enough to prove i). Suppose that we are in
the setting of ii). We use the above description of the right-hand
side of (\ref{equation_fiber6}). It follows from the definition of
$e$ that $\sum_{i=1}^{N-n}a_i=e$. In particular, $a_i\leq e\leq
m-p$ for $i\leq N-n$. In order to deduce ii) from i) it is enough to show that
$a_i=p+1$ for $i>N-n$. If $\delta$ is an element in
$\pi_{m,p}^{-1}(\gamma)$, then by taking a free presentation of
$\delta^*\Omega_X$, we see that $A$ is the reduction mod $(t^{p+1})$
of a matrix ${\rm diag}(t^{b_1},\ldots,t^{b_N})$ with $0\leq
b_1\leq\ldots\leq b_N\leq m+1$. We have $a_i=b_i$ if $b_i\leq p$ and
$a_i=p+1$ otherwise. Either of our two conditions on $X$ implies
that ${\rm Fitt}^{n-1}(\Omega_X)=0$, hence
$${\rm ord}_{\delta}({\rm Fitt}^{n-1}(\Omega_X))
\geq m+1,$$ and therefore $b_1+\ldots+b_{N-n+1}\geq m+1$. We deduce
that for every $i\geq N-n+1$ we have $b_i\geq m+1-e\geq p+1$, hence
$a_i=p+1$.
Therefore it is enough to prove i). We may clearly assume that
$X={\rm Spec}(S)$ is affine. We start with the following
observation. If $\beta$ is an $R$--valued point in $J_p(X)$, then
either the fiber $\pi_{m,p}^{-1}(\beta)$ is empty, or it is a
principal homogeneous space over ${\rm
Der}_k(S,t^{p+1}R[t]/t^{m+1}R[t])$, where $t^{p+1}R[t]/t^{m+1}R[t]$
becomes an $S$--module via $\beta\colon S\to R[t]/(t^{p+1})$.
Indeed, if $D\in {\rm Der}_k(S,t^{p+1}R[t]/t^{m+1}R[t])$ and if
$\alpha\colon S\to R[t]/(t^{m+1})$ corresponds to an $R$--valued
point in $J_m(X)$ lying over $\beta$, then $\alpha+D$ gives another
$R$--valued point over $\beta$. Moreover, every other element in
$\pi_{m,p}^{-1}(\beta)$ arises in this way for a unique derivation
$D$.
We see that if $\delta$ is a fixed $k$--valued point in
$\pi_{m,p}^{-1}(\gamma)$, then we get a morphism
$${\rm
Hom}_{k[t]/(t^{p+1})}(\Omega_S\otimes_Sk[t]/(t^{p+1}),(t^{p+1})/(t^{m+1}))\to
\pi_{m,p}^{-1}(\gamma).$$ This is an isomorphism since it induces a
bijection at the level of $R$--valued points for every $R$.
\end{proof}
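\begin{remark}
As a quick illustration of ii), let $X\subseteq\AAA^2$ be the
cuspidal curve defined by $y^2=x^3$ (with ${\rm char}(k)\neq 2,3$)
and let $\gamma\in J_3(X)$ be the truncation of the arc $(t^2,t^3)$.
Since ${\rm Jac}_X$ is generated by $3x^2$ and $2y$, we have
$e={\rm ord}_{\gamma}({\rm Jac}_X)=3$, and $n=1$. Taking $p=3$ and
$m=6$, so that $2p\geq m\geq e+p$, and noting that
$\pi_{6,3}^{-1}(\gamma)\neq\emptyset$ (it contains the truncation of
the arc itself), we conclude that
$\pi_{6,3}^{-1}(\gamma)\simeq\AAA^{e+(m-p)n}=\AAA^{6}$. By contrast,
the fiber of $\pi_{6,3}$ over a jet centered at a smooth point of
$X$ is isomorphic to $\AAA^{(m-p)n}=\AAA^3$ by
Corollary~\ref{cor1}.
\end{remark}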
\begin{remark}\label{same_fiber6}
Let $X$ be a reduced scheme of pure dimension $n$. Suppose that $m$,
$p$ and $e$ are nonnegative integers such that $2p\geq m\geq e+p$
and $\gamma\in J_p(X)$ is such that $\ord_{\gamma}({\rm Jac}_X)=e$.
Assume also that $X$ is a closed subscheme of a locally complete
intersection scheme $M$ of the same dimension such that ${\rm
ord}_{\gamma}({\rm Jac}_M)=e$ (if $X$ is embedded in some $\AAA^N$,
then one can take $M$ to be generated by $(N-n)$ general elements in
the ideal of $X$). Consider the commutative diagram
\[
\begin{CD}
J_{m}(X)@>>> J_{m}(M)\\
@VV{\pi_{m,p}^X}V@VV{\pi_{m,p}^M}V\\
J_p(X) @>>>J_p(M)
\end{CD}
\]
where the horizontal maps are inclusions. It follows from
Proposition~\ref{fiber6} that the scheme-theoretic fibers of
$\pi_{m,p}^X$ and $\pi_{m,p}^M$ over $\gamma$ are equal.
Indeed, note first that if
$(\pi_{m,p}^M)^{-1}(\gamma)\neq\emptyset$, then $\gamma$ can be
lifted to $J_{\infty}(M)$ by Proposition~\ref{fiber2} (see also
Remark~\ref{remark_fiber2}). On the other hand such a lifting would
lie in $J_{\infty}(X)$ by Lemma~\ref{lemma_reduction}, hence
$(\pi_{m,p}^X)^{-1}(\gamma)\neq\emptyset$. In this case, it follows
from Proposition~\ref{fiber6} that both fibers are affine spaces of
the same dimension, one contained in the other, hence they are equal.
\end{remark}
\begin{remark}\label{remark_nonsingular_case}
Suppose that $X$ is a nonsingular variety of dimension $n$, and
suppose that $m\leq 2p+1$. On $J_p(X)$ we have a geometric vector
bundle $E$ whose fiber over $\gamma$ is ${\rm
Hom}_{k[t]/(t^{p+1})}(\gamma^*\Omega_X,(t^{p+1})/(t^{m+1}))$. If we
consider this as a group scheme over $J_p(X)$, then the argument in
the proof of Proposition~\ref{fiber6} shows that we have an action
of $E$ on $J_m(X)$ over $J_p(X)$. Moreover, whenever we have a
section of the projection $\pi_{m,p}$ we get an isomorphism of
$J_m(X)$ with $E$. We can always find such a section if we restrict
to an affine open subset of $X$ on which $\Omega_X$ is trivial.
\end{remark}
We will need later the following global version of the assertion in
Proposition~\ref{fiber6} ii).
\begin{proposition}\label{fiber1}
Let $X$ be a scheme of pure dimension $n$ that is either reduced or
a locally complete intersection. If $m$, $p$ and $e$ are nonnegative
integers such that $2p\geq m\geq p+e$, then the canonical projection
$\pi_{m,p}\colon J_{m}(X)\to J_p(X)$ induces a piecewise trivial
fibration
$${\rm Cont}^e({\rm Jac}_X)_{m}\to {\rm Cont}^e({\rm Jac}_X)_p\cap
{\rm Im}(\pi_{m,p})$$ with fiber $\AAA^{(m-p)n+e}$.
\end{proposition}
\begin{proof}
We need to ``globalize'' the argument in the proof of
Proposition~\ref{fiber6}. Note that we may assume that $X$ is
locally a complete intersection. Indeed, we may assume first that
$X$ is affine. If $X$ is reduced, arguing as in the proof of
Proposition~\ref{fiber2} we may cover ${\rm Cont}^e({\rm Jac}_X)_p$
by open subsets $U_i$ such that there are $n$--dimensional locally
complete intersection schemes $M_i$ containing $X$, with
$$U_i\subseteq {\rm Cont}^e({\rm Jac}_{M_i})_p\subseteq J_p(M_i).$$
It follows from Remark~\ref{same_fiber6} that knowing the assertion
in the proposition for each $M_i$, we get it also for $X$.
Therefore we may assume that $X$ is a closed subscheme of $\AAA^N$
of codimension $r$, defined by $f_1,\ldots,f_r$. Write
$f=(f_1,\ldots,f_r)$, which we consider as a column vector.
Suppose that
$$u=(u_1,\ldots,u_N)\in {\rm Cont}^e({\rm Jac}_X)_p,$$ where
$u_i\in k[t]/(t^{p+1})$ for every $i$. We denote by
$\widetilde{u}\in (k[t]/(t^{m+1}))^N$ the lifting of $u$ having each
entry of degree $\leq p$. The fiber of $\pi_{m,p}$ over $u$ consists
of those $\widetilde{u}+t^{p+1}v$ such that
$f(\widetilde{u}+t^{p+1}v)=0$ in $(k[t]/(t^{m+1}))^r$. Here
$v=(v_1,\ldots,v_N)$ where $v_i=\sum_{j=0}^{m-p-1}v^{(j)}_it^j$.
Denote by $J(\widetilde{u})$ the Jacobian matrix $(\partial
f_i(\widetilde{u})/\partial x_j)_{i\leq r,j\leq N}$.
Using the Taylor
expansion we see that
\begin{equation}\label{eq_F6}
f(\widetilde{u}+t^{p+1}v)=f(\widetilde{u})+t^{p+1}\cdot
J(\widetilde{u})v
\end{equation}
(there are no further terms since $2(p+1)\geq m+1$).
Note that by assumption we can write
$f(\widetilde{u})=t^{p+1}g(u)$, where
$g(u)=\left(\sum_{j=0}^{m-p-1}g_{i,j}(u)t^j\right)_i$. If we denote by
$\overline{J}(u)$ the reduction of $J(\widetilde{u})$ mod $t^{m-p}$,
we see that the condition on $v$ becomes
\begin{equation}\label{eq_F7}
-g(u)=\overline{J}(u)\cdot v,
\end{equation}
where the equality is in $(k[t]/(t^{m-p}))^{r}$.
It follows from the structure theory of matrices over principal
ideal domains, applied to a lifting of $\overline{J}(u)$ to a matrix
over $k\llbracket t\rrbracket$, that we can find invertible matrices $A$ and $B$ over
$k[t]/(t^{m-p})$ such that $A\cdot \overline{J}(u)\cdot B=\left({\rm
diag}(t^{a_1},\ldots,t^{a_r}),{\mathbf {0}}\right)$, with $0\leq a_i\leq m-p$.
Moreover, after partitioning ${\rm
Cont}^e({\rm Jac}_X)_p$ into suitable locally closed subsets, we may
assume that the $a_i$ are independent of $u$ and that $A=A(u)$ and
$B=B(u)$, where the entries of $A(u)$ and $B(u)$ are regular
functions of $u$.
Since the ideal generated by the $r$--minors of $\overline{J}(u)$
is $(t^e)$, we see that $a_1+\ldots+a_r=e$. If we write $A(u)\cdot
g(u)=(h_1(u),\ldots,h_r(u))$, we see that $u$ lies in
the image of $\pi_{m,p}$ if and only if $\ord(h_i(u))\geq a_i$ for
every $i\leq r$. Moreover, if we put $v'=B(u)^{-1}v$, then we see
that our condition gives the values of $t^{a_i}v'_i$ for $i\leq r$.
Therefore the set of possible $v$ is isomorphic to an affine space
of dimension $(N-r)(m-p)+\sum_{i=1}^ra_i=n(m-p)+e$. Since the
equations defining the fiber over $u$ depend algebraically on $u$,
we get the assertion of the proposition.
\end{proof}
\section{Cylinders in spaces of arcs}
We start by giving some applications of Proposition~\ref{fiber2}.
For every scheme $X$, a \emph{cylinder} in $J_{\infty}(X)$ is a
subset of the form $C=\psi_m^{-1}(S)$, for some $m$ and some
constructible subset $S\subseteq J_m(X)$. From now on, unless
explicitly mentioned otherwise, we assume that $X$ is reduced and
of pure dimension $n$.
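Note that the contact loci considered above provide examples of
cylinders: for instance, ${\rm Cont}^e({\rm
Jac}_X)=\psi_e^{-1}({\rm Cont}^e({\rm Jac}_X)_e)$, since whether
${\rm ord}_{\gamma}({\rm Jac}_X)=e$ depends only on the truncation
$\psi_e(\gamma)$ of an arc $\gamma$.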
\begin{lemma}\label{cylinder1}
If $C\subseteq J_{\infty}(X)$ is a cylinder, then $C$ is not thin if
and only if it is not contained in $J_{\infty}(X_{\rm sing})$.
\end{lemma}
\begin{proof}
We need to show that for every closed subset $Y$ of $X$ with
$\dim(Y)<\dim(X)$, and every cylinder $C\not\subseteq
J_{\infty}(X_{\rm sing})$, we have $C\not\subseteq J_{\infty}(Y)$.
If this is not the case, then arguing by Noetherian induction we may
choose a minimal $Y$ for which there is a cylinder $C\not\subseteq
J_{\infty}(X_{\rm sing})$ with $C\subseteq J_{\infty}(Y)$. After
replacing $C$ by a suitable $C\cap {\rm Cont}^e({\rm Jac}_X)$, we
may assume that $C\subseteq {\rm Cont}^e({\rm Jac}_X)$. It follows
from Proposition~\ref{fiber2} that if $m\gg 0$, then the maps
$\psi_{m+1}(C)\to \psi_m(C)$ are piecewise trivial, with fiber
$\AAA^n$.
Note that $Y$ has to be irreducible. Indeed, if $Y=Y_1\cup Y_2$,
with $Y_1$ and $Y_2$ both closed and different from $Y$, then either
$C\cap J_{\infty}(Y_1)$ or $C\cap J_{\infty}(Y_2)$ is not contained
in $J_{\infty}(X_{\rm sing})$. This contradicts the minimality of
$Y$.
Using again the fact that $Y$ is minimal, we see that
$C\not\subseteq J_{\infty}(Y_{\rm sing})$ (we consider $Y$ with the
reduced structure). After replacing $C$ with some $C\cap {\rm
Cont}^{e'}({\rm Jac}_Y)$, we may assume that $C\subseteq {\rm
Cont}^{e'}({\rm Jac}_Y)$. Since $C$ is a cylinder also in
$J_{\infty}(Y)$, it follows from Proposition~\ref{fiber2} that if
$m\gg 0$, then the projection $\psi_{m+1}(C)\to \psi_m(C)$ is
piecewise trivial with fiber $\AAA^{\dim(Y)}$. This is a
contradiction, and completes the proof of the lemma.
\end{proof}
\begin{corollary}\label{cor_cylinder1}
Let $f\colon X'\to X$ be a proper birational morphism of reduced,
pure-dimensional schemes. If $\gamma\in
\psi_m^X(J_{\infty}(X)\smallsetminus J_{\infty}(X_{\rm sing}))$,
then $\gamma$ lies in the image of $f_m$.
\end{corollary}
\begin{proof}
If $C=(\psi^X_m)^{-1}(\gamma)$, then $C$ is a cylinder that is not
contained in $J_{\infty}(X_{\rm sing})$. Let $Z\subset X$ be a
closed subset with $\dim(Z)<\dim(X)$ such that $f$ is an isomorphism
over $X\smallsetminus Z$. It follows from Proposition~\ref{prop11}
that $J_{\infty}(X)\smallsetminus J_{\infty}(Z)\subseteq {\rm
Im}(f_{\infty})$. Since $C\not\subseteq J_{\infty}(Z)$ by the lemma,
we deduce that there is $\delta\in J_{\infty}(X')$ such that
$f_m(\psi^{X'}_m(\delta))=\psi_m^X(f_{\infty}(\delta))=\gamma$.
\end{proof}
\begin{corollary}\label{cor2_cylinder1}
Let $f$ be as in the previous corollary, with $X'$ nonsingular. If
$k$ is uncountable, then $J_{\infty}(X)\smallsetminus
J_{\infty}(X_{\rm sing})\subseteq {\rm Im}(f_{\infty})$.
\end{corollary}
\begin{proof}
Let $\gamma\in J_{\infty}(X)\smallsetminus J_{\infty}(X_{\rm
sing})$. It follows from Corollary~\ref{cor_cylinder1} that for
every $m$ we have $\gamma_m:=\psi_m^X(\gamma)\in {\rm Im}(f_m)$.
Therefore we get a decreasing sequence
$$\cdots\supseteq(\psi_m^{X'})^{-1}(f_m^{-1}(\gamma_m))\supseteq
(\psi_{m+1}^{X'})^{-1}(f_{m+1}^{-1}(\gamma_{m+1}))\supseteq\cdots$$
of nonempty cylinders. Lemma~\ref{sequence} below implies that there
is $\delta$ in the intersection of all these cylinders. Therefore
$\psi_m^X(f_{\infty}(\delta))=\gamma_m$ for all $m$, hence
$\gamma=f_{\infty}(\delta)$.
\end{proof}
\begin{lemma}\label{sequence}{\rm (}\cite{batyrev}{\rm )}
If $X$ is nonsingular and $k$ is uncountable, then every decreasing
sequence of cylinders
$$C_1\supseteq\cdots \supseteq C_m\supseteq\cdots$$
has nonempty intersection.
\end{lemma}
\begin{proof}
Since the projections $\psi_m$ are surjective, it follows from
Chevalley's Constructibility Theorem that the image of every
cylinder in $J_m(X)$ is constructible. Consider the decreasing
sequence
$$\psi_0(C_1)\supseteq \psi_0(C_2)\supseteq\cdots$$
of nonempty constructible subsets. Since $k$ is uncountable, the
intersection of this sequence is nonempty. Let $\gamma_0$ be an
element in this intersection.
Since $\gamma_0$ lies in the image of every $C_m$, we see that all
the constructible subsets in the decreasing sequence
$$\psi_1(C_1)\cap\pi_{1,0}^{-1}(\gamma_0)\supseteq
\psi_1(C_2)\cap\pi_{1,0}^{-1}(\gamma_0)\supseteq\cdots$$ are
nonempty. Therefore there is $\gamma_1$ contained in their
intersection. Continuing in this way we get $\gamma_m\in J_m(X)$ for
every $m$ such that $\pi_{m,m-1}(\gamma_m)=\gamma_{m-1}$ for every
$m$ and $\gamma_m\in\psi_m(C_p)$ for every $p$. Therefore
$(\gamma_m)_m$ determines an arc $\gamma\in J_{\infty}(X)$ whose image
in $J_m(X)$ is equal to $\gamma_m$. Since each $C_p$ is a cylinder and
$\psi_m(\gamma) \in\psi_m(C_p)$ for every $m$, we see that
$\gamma\in C_p$. Hence $\gamma\in\cap_{p\geq 1}C_p$.
\end{proof}
\begin{remark}\label{remark_sequence}
Note that in the above lemma, the hypothesis that $X$ is nonsingular
was used only to ensure that the image in $J_m(X)$ of a cylinder is
a constructible set. We will prove this below for an arbitrary
scheme $X$ (see Corollary~\ref{image_cylinder}), and therefore the
lemma will hold in this generality.
\end{remark}
\begin{remark}
If ${\rm char}(k)=0$, then the assumption that $X'$ is nonsingular
is not necessary in Corollary~\ref{cor2_cylinder1}. Indeed, we can
take a resolution of singularities $g\colon X''\to X'$ and we
clearly have ${\rm Im}((f\circ g)_{\infty})\subseteq {\rm
Im}(f_{\infty})$.
\end{remark}
\smallskip
We have seen in Proposition~\ref{fiber2} that for a reduced
pure-dimensional scheme $X$ the set $\psi_m({\rm Cont}^e({\rm
Jac}_X))$ is constructible. In fact, the image of every cylinder is
constructible, as a consequence of the following result of
Greenberg.
\begin{proposition}\label{image_constructible}{\rm (}\cite{greenberg}{\rm )}
For an arbitrary scheme $X$ and every $m$, the image of
$J_{\infty}(X)\to J_m(X)$ is constructible.
\end{proposition}
\begin{proof}
We give the proof assuming that ${\rm char}(k)=0$. For a proof in
the general case, see \cite{greenberg}. We do induction on
$\dim(X)$, the case $\dim(X)=0$ being trivial. If $X_1,\ldots,X_r$
are the irreducible components of $X$, with the reduced structure,
then $J_{\infty}(X)=J_{\infty}(X_1)\cup\ldots \cup J_{\infty}(X_r)$.
Hence the image of $J_{\infty}(X)$ is equal to the union of the
images of the $J_{\infty}(X_i)$ in $J_m(X_i)\subseteq J_m(X)$.
Therefore we may assume that $X$ is reduced and irreducible.
Let $f\colon X'\to X$ be a resolution of singularities. Since $X'$
is nonsingular, the projection $J_{\infty}(X')\to J_m(X')$ is
surjective, hence ${\rm Im}(f_m)\subseteq {\rm Im}(\psi_m^X)$.
Moreover, Corollary~\ref{cor_cylinder1} gives
$\psi_m^X(J_{\infty}(X) \smallsetminus J_{\infty}(X_{\rm
sing}))\subseteq {\rm Im}(f_m)$. Therefore
$$\psi_m^X(J_{\infty}(X))={\rm Im}(f_m)\cup \psi_m^X(J_{\infty}(X_{\rm
sing})).$$ The first term on the right-hand side is constructible by
Chevalley's Constructibility Theorem, while the second term is
constructible by induction. This implies that
$\psi_m^X(J_{\infty}(X))$ is constructible.
\end{proof}
\begin{corollary}\label{image_cylinder}
For an arbitrary scheme $X$, the image of a cylinder $C$ by the
projection $J_{\infty}(X)\to J_m(X)$ is constructible.
\end{corollary}
\begin{proof} Let
$C=\psi_p^{-1}(A)$, where $A\subseteq J_p(X)$ is constructible. If
$m\geq p$, then $\psi_m(C)=\psi_m(J_{\infty}(X))\cap
\pi_{m,p}^{-1}(A)$, hence it is constructible by the proposition.
The constructibility for $m<p$ now follows from Chevalley's Theorem.
\end{proof}
Proposition~\ref{image_constructible} is deduced in \cite{greenberg}
from the fact that for every $m$ there is $p\geq m$ such that the
image of the projection $\psi_m\colon J_{\infty}(X)\to J_m(X)$ is
equal to the image of $\pi_{p,m}\colon J_p(X)\to J_m(X)$ (in fact,
Greenberg also shows that one can take $p=L(m)$ for a suitable
linear function $L$). We now show that if we assume $k$ uncountable,
then this follows from the above proposition.
\begin{corollary}\label{cor_image_constructible}
If $k$ is uncountable, then for an arbitrary scheme $X$ and every
$m$ there is $p\geq m$ such that the image of $\psi_m$ is equal to
the image of $\pi_{p,m}$.
\end{corollary}
\begin{proof}
Since ${\rm Im}(\psi_m)$ is constructible by
Proposition~\ref{image_constructible} and each ${\rm Im}(\pi_{p,m})$
is constructible by Chevalley's Theorem, the assertion follows if we
show
\begin{equation}\label{eq_intersection}
{\rm Im}(\psi_m)=\bigcap_{p\geq m}{\rm Im}(\pi_{p,m})
\end{equation}
(we use the fact that $k$ is uncountable). The inclusion
``$\subseteq$'' is obvious. For the reverse inclusion we argue as in
the proof of Lemma~\ref{sequence} to show that if
$\gamma_m\in\cap_{p\geq m} {\rm Im}(\pi_{p,m})$,
then we can find $\gamma_q\in J_q(X)$ for every $q\geq m+1$ such
that $\pi_{q,q-1}(\gamma_{q})=\gamma_{q-1}$. The sequence
$(\gamma_q)_q$ defines an element $\gamma\in J_{\infty}(X)$ lying
over $\gamma_m$.
\end{proof}
We give one more result about the fibers of the truncation maps
between the images of the spaces of arcs (one should compare this
with Proposition~\ref{fiber2}).
\begin{proposition}\label{fiber3}{\rm (}\cite{DL}{\rm )}
If $X$ is a scheme of dimension $n$, then for every $m\geq p$, all
fibers of the truncation map
$$\phi_{m,p}\colon\psi_{m}(J_{\infty}(X))\to\psi_p(J_{\infty}(X))$$
have dimension $\leq (m-p)n$.
\end{proposition}
\begin{proof}
Note that the sets in the statement are constructible by
Proposition~\ref{image_constructible}. Clearly, it is enough to
prove the proposition when $m=p+1$. We may assume that $X$ is a
closed subscheme of $\AAA^N$ defined by $F_1,\ldots,F_r$. Consider
$\gamma_p\in J_p(X)$ given by $u=(u_1,\ldots,u_N)$ where $u_i\in
k[t]$ with $\deg(u_i)\leq p$.
Let $T={\rm Spec}\,k[t]$. Consider the subscheme ${\mathcal Z}$ of
$T\times\AAA^N$ defined by $I_{\mathcal Z}=(F_1(u+t^{p+1}x),\ldots,
F_r(u+t^{p+1}x))$. We have a subscheme ${\mathcal Z}'\subseteq
{\mathcal Z}$ defined by
$$I_{{\mathcal Z}'}=(f\mid hf\in I_{\mathcal Z}\,{\rm for}\,{\rm
some}\,{\rm nonzero}\,h\in k[t]).$$ Note that by construction
${\mathcal Z}'$ is flat over $T$, and ${\mathcal Z}={\mathcal Z}'$
over the generic point of $T$.
The generic fiber of ${\mathcal Z}$ over $T$ is isomorphic to
$X\times_kk(t)$. Since ${\mathcal Z}'$ is flat over $T$, it follows
that the fiber of ${\mathcal Z}'$ over the origin is either empty or
has dimension $n$.
On the other hand, an element in the fiber of $\phi_{p+1,p}$ over
$\gamma_p$ is the $(p+1)$--jet of an arc in $X$ given by
$u+t^{p+1}w$ for some $w\in (k\llbracket t\rrbracket)^N$. Since $F_i(u+t^{p+1}w)=0$
for every $i$, it follows from the definition of $I_{{\mathcal Z}'}$
that if
$f\in I_{{\mathcal Z}'}$, then $f(t,w)=0$. Hence the fiber
of $\phi_{p+1,p}$ over $\gamma_p$ can be embedded in
the fiber of ${\mathcal Z}'$ over the origin, and its dimension is
$\leq n$.
\end{proof}
\smallskip
We now
discuss the notion of codimension for cylinders in spaces of
arcs. In the remaining part of this section we assume that $k$ is
uncountable, and also that ${\rm char}(k)=0$ (this last condition is
due only to the fact that we use resolutions of singularities).
Let $X$ be a scheme of pure dimension $n$ that is either reduced or
locally complete intersection, and let $C=\psi_p^{-1}(A)$ be a
cylinder, where $A$ is a constructible subset of $J_p(X)$. If
$C\subseteq {\rm Cont}^e({\rm Jac}_X)$ and $m\geq \max\{p,e\}$, then
we put $\codim(C):=(m+1)n-\dim(\psi_m(C))$. We refer to \S 9.1
for a quick review of some basic facts about the dimension of
constructible subsets. Note that by Proposition~\ref{fiber2} (see
also Remark~\ref{remark_fiber2}), this is well-defined. Moreover, it
is a nonnegative integer: by Theorem~\ref{thm1}, the closure of
$\psi_m(J_{\infty}(X))$ is equal to the closure in $J_m(X)$ of the
$m^{\rm th}$ jet scheme of the nonsingular locus of $X_{\rm red}$.
Therefore it is a set of pure dimension $(m+1)n$ (the fact that
$\dim\,\psi_m(J_{\infty}(X))=(m+1)n$ follows also from
Proposition~\ref{fiber3}).
For an arbitrary cylinder $C$ we put $C^{(e)}:=C\cap {\rm
Cont}^e({\rm Jac}_X)$ and
$$\codim(C):=\min\{\codim(C^{(e)})\mid e\in\NN\}$$
(by convention, if $C\subseteq J_{\infty}(X_{\rm sing})$, we have
$\codim(C)=\infty$). It is clear that if $C_1$ and $C_2$ are
cylinders, then $\codim(C_1\cup
C_2)=\min\{\codim(C_1),\codim(C_2)\}$. In particular, if
$C_1\subseteq C_2$, then $\codim(C_1)\geq\codim(C_2)$.
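\begin{example}
To illustrate the definition, let $X=\AAA^1$, so that ${\rm
Jac}_X=\OO_X$ and every cylinder is contained in ${\rm Cont}^0({\rm
Jac}_X)$. If $C={\rm Cont}^{\geq p}(\{0\})$ for a fixed $p\geq 1$,
then for every $m\geq p$ the set $\psi_m(C)$ consists of the jets
$a_pt^p+\ldots+a_mt^m$, hence it is irreducible of dimension
$m-p+1$. Therefore
$$\codim(C)=(m+1)-(m-p+1)=p,$$
independently of the choice of $m$.
\end{example}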
\begin{proposition}\label{countable_union}
Suppose that $X$ is reduced and let $C$ be a cylinder in
$J_{\infty}(X)$. If we have disjoint cylinders $C_i\subseteq C$ for
$i\in\NN$ such that the complement $C\smallsetminus
\bigsqcup_{i\in\NN}C_i$ is thin, then
$\lim_{i\to\infty}\codim(C_i)=\infty$ and
$\codim(C)=\min_i\codim(C_i)$.
\end{proposition}
\noindent Note that the proposition implies that for every cylinder $C$ we
have
$$\lim_{e\to\infty}\codim(C^{(e)})=\infty.$$
We will prove Proposition~\ref{countable_union} at the same time
with the following proposition.
\begin{proposition}\label{limit}
If $X$ is reduced and $Y$ is a closed subscheme of $X$ with
$\dim(Y)<\dim(X)$, then
$$\lim_{m\to\infty}\codim({\rm Cont}^{\geq m}(Y))=\infty.$$
\end{proposition}
We first show that these results hold when $X$ is nonsingular.
Let us start by making some comments about this special case.
Suppose for the moment that $X$ is nonsingular of pure dimension
$n$. Since the projections $J_{m+1}(X)\to J_m(X)$ are locally
trivial with fiber $\AAA^n$, cylinders are much easier to understand
in this case. We say that a cylinder $C=\psi_m^{-1}(S)$ is
\emph{closed}, \emph{locally closed} or \emph{irreducible} if $S$ is
(the definition does not depend on $m$ by the local triviality of
the projection). Moreover, if $S$ is closed and $S=S_1\cup\ldots\cup
S_r$ is the irreducible decomposition of $S$, then we get a unique
decomposition into maximal irreducible closed cylinders
$C=\psi_m^{-1}(S_1)\cup\ldots\cup\psi_m^{-1}(S_r)$. The cylinders
$\psi_m^{-1}(S_i)$ are the \emph{irreducible components} of $C$.
Note that if $C=\psi_m^{-1}(S)$, then by definition
$\codim(C)=\codim(S,J_m(X))$. If $C\subseteq C'$ are closed
cylinders with $\codim(C)=\codim(C')$, then every irreducible
component of $C$ whose codimension is equal to $\codim(C)$ is also
an irreducible component of $C'$.
\begin{proof}[Proof of Propositions~\ref{countable_union} and \ref{limit}]
We start by noting that if Proposition~\ref{limit} holds on $X$,
then Proposition~\ref{countable_union} holds on $X$, too. Indeed,
suppose that $\bigsqcup_{i\in\NN}C_i\subseteq C$, where all $C_i$ and
$C$ are cylinders, and that $C\smallsetminus\bigsqcup_iC_i$ is
contained in $J_{\infty}(Y)$, where $\dim(Y)<\dim(X)$. For every $m$
we have
$$C\subseteq {\rm Cont}^{\geq
m}(Y)\cup\bigcup_{i\in\NN}C_i.$$ It follows from
Lemma~\ref{sequence} that there is an integer $i(m)$ such that
\begin{equation}\label{eq_prop_cylinders}
C\subseteq{\rm Cont}^{\geq
m}(Y)\cup\bigcup_{i\leq i(m)}C_i.
\end{equation}
In particular, for every
$i>i(m)$ we have $C_i\subseteq {\rm Cont}^{\geq m}(Y)$, hence
$\codim(C_i)\geq\codim\,{\rm Cont}^{\geq m}(Y)$. If
Proposition~\ref{limit} holds on $X$, it follows that
$$\lim_{i\to\infty}\codim(C_i)=\infty.$$
The second assertion in Proposition~\ref{countable_union} follows,
too. Indeed, note first that if all $C_i\subseteq J_{\infty}(X_{\rm
sing})$, then $C\subseteq J_{\infty}(Y\cup X_{\rm sing})$. Therefore
$C\subseteq J_{\infty}(X_{\rm sing})$ by Lemma~\ref{cylinder1}, and
the assertion is clear in this case. If $C_i\not\subseteq
J_{\infty}(X_{\rm sing})$ for some $i$, then $\codim(C)<\infty$. The
assertion in Proposition~\ref{limit} implies that there is $m$ such
that $\codim\,{\rm Cont}^{\geq m}(Y)>\codim(C)$. We deduce from
(\ref{eq_prop_cylinders}) that
$$\codim(C)\geq\min\{\codim(C_0),\ldots,\codim(C_{i(m)}),\codim\,{\rm
Cont}^{\geq m}(Y)\}.$$ Therefore $\codim(C)\geq\min_i\codim(C_i)$
and the reverse inequality is trivial.
We now prove Proposition~\ref{limit} when $X$ is nonsingular. We
have a decreasing sequence of closed cylinders $\{{\rm Cont}^{\geq
m}(Y)\}_{m\in\NN}$. Since
$$\codim\,{\rm Cont}^{\geq m}(Y)\leq\codim\,{\rm Cont}^{\geq
m+1}(Y)$$ for every $m$, it follows that if the limit in the
proposition is not infinity, then there is $m_0$ such that
$\codim\,{\rm Cont}^{\geq m}(Y)=\codim\,{\rm Cont}^{\geq m_0}(Y)$
for every $m\geq m_0$. Hence for all such $m$, the irreducible
components of ${\rm Cont}^{\geq m+1}(Y)$ of minimal codimension are
also components of ${\rm Cont}^{\geq m}(Y)$. It is easy to see that
this implies that there is a cylinder $C$ that is an irreducible
component of ${\rm Cont}^{\geq m}(Y)$ for every $m\geq m_0$. Therefore $C\subseteq
J_{\infty}(Y)$, which contradicts Lemma~\ref{cylinder1}. By our
discussion at the beginning of the proof we see that both
propositions hold on nonsingular varieties.
In order to complete the proof it is enough to show that
Proposition~\ref{limit} holds for an arbitrary reduced
pure-dimensional scheme $X$. Let $f\colon X'\to X$ be a resolution
of singularities of $X$ (in other words $X'$ is the disjoint union
of resolutions of the irreducible components of $X$). Since
$f_{\infty}^{-1}({\rm Cont}^{\geq m}(Y))={\rm Cont}^{\geq
m}(f^{-1}(Y))$ and since we know that Proposition~\ref{limit} holds
on $X'$, we see that it is enough to prove that for every cylinder
$C\subseteq J_{\infty}(X)$, we have
$\codim(f_{\infty}^{-1}(C))\leq\codim(C)$.
We clearly have $\bigsqcup_{e\in\NN}f_{\infty}^{-1}(C^{(e)})\subseteq
f_{\infty}^{-1}(C)$ and the complement of this union is contained in
$J_{\infty}(f^{-1}(X_{\rm sing}))$. Since
Proposition~\ref{countable_union} holds on $X'$, we see that
$\codim(f_{\infty}^{-1}(C))=\min_e\codim\,f_{\infty}^{-1}(C^{(e)})$.
Therefore we may assume that $C=C^{(e)}$ for some $e$. In this case,
if $m\gg 0$, then
$$\codim(C)=(m+1)\dim(X)-\dim\,\psi^X_m(C)\geq
(m+1)\dim(X)-\dim\,f_m^{-1}(\psi^X_m(C))$$
$$=\codim\,(f_m\circ\psi_m^{X'})^{-1}(\psi_m^X(C))=\codim\,f_{\infty}^{-1}(C).$$
We have used the fact that $\psi^X_m(C)\subseteq {\rm Im}(f_m)$ by
Corollary~\ref{cor_cylinder1}. This completes the proof of the two
propositions.
\end{proof}
\begin{example}
Let $Z\subseteq\AAA^2$ be the curve defined by $x^2-y^3=0$. The
Jacobian ideal of $Z$ is ${\rm Jac}_Z=(x,y^2)$. Let $\pi\colon
J_{\infty}(Z)\to Z$ be the projection map. If $z\in Z$ is different
from the origin, then $z$ is a smooth
point of $Z$ and $\codim(\pi^{-1}(z))=1$. On the other hand,
if $z$ is the origin, then we can decompose $C=\pi^{-1}(z)$
as
$$C=J_{\infty}(z)\bigsqcup\left(\bigsqcup_{e>0}\{(u(t),v(t))\mid
u(t)^2=v(t)^3, \ord\,u(t)=3e,\,\ord\,v(t)=2e\}\right).$$
Note that the set corresponding to $e$ is precisely $C^{(3e)}$.
If we take $m=3e$, we see that $\psi_m(C^{(3e)})$ is equal to
$$\{(at^{3e}, b_0t^{2e}+\ldots+b_{e}t^{3e})\mid a^2=b_0^3, a\neq 0,
b_0\neq 0\}.$$ Therefore $\codim(C^{(3e)})=(3e+1)-(e+1)=2e$ for every $e\geq 1$, and
$\codim(C)=2$. Note that in this case the codimension of the special
fiber of $\pi$ is larger than that of the general fiber (compare
with the behavior of dimensions of fibers of morphisms of algebraic
varieties).
\end{example}
Proposition~\ref{countable_union} is a key ingredient in setting up
motivic integration (see \cite{batyrev} and \cite{DL}). We describe
one elementary application of this proposition to the definition of
another invariant of a cylinder, the ``number of components of
minimal codimension''.
Let $X$ be a reduced pure-dimensional scheme and $C$ a cylinder in
$J_{\infty}(X)$. If $C\subseteq {\rm Cont}^e({\rm Jac}_X)$, then we
take $m\gg 0$ and define $|C|$ to be the number of irreducible
components of $\psi_m(C)$ whose codimension is $\codim(C)$. Note
that by Proposition~\ref{fiber2} this number is independent of $m$.
For an arbitrary $C$, we put $|C|:=\sum_{e\in\NN}|C^{(e)}|$, where
the sum is over those $e$ such that $\codim(C^{(e)})=\codim(C)$
(Proposition~\ref{countable_union} implies that this is a finite
sum). With this definition, we see that under the hypothesis of
Proposition~\ref{countable_union} we have $|C|=\sum_i|C_i|$, the sum
being over the finite set of those $i$ with $\codim(C_i)=\codim(C)$.
If $X$ is a nonsingular variety and $C$ is a closed cylinder in
$J_{\infty}(X)$, then $|C|$ is equal to the number of irreducible
components of $C$ of minimal codimension.
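\begin{example}
Consider again the cuspidal curve $Z\subseteq\AAA^2$ defined by
$x^2-y^3=0$ and the cylinder $C=\pi^{-1}(z)$ over the origin from
the example above. We have seen that $\codim(C^{(3e)})=2e$, hence
the minimum $\codim(C)=2$ is attained only for $e=1$. For $m=3$ the
set
$$\psi_3(C^{(3)})=\{(at^{3}, b_0t^{2}+b_1t^{3})\mid
a^2=b_0^3,\,a\neq 0\}$$
is irreducible: it is parametrized by $a=s^3$, $b_0=s^2$ with
$s\neq 0$, and $b_1$ arbitrary. Therefore $|C|=|C^{(3)}|=1$.
\end{example}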
\section{The Birational Transformation Theorem}
We now present the fundamental result of the theory. Suppose that
$f\colon X'\to X$ is a proper birational morphism, with $X'$
nonsingular and $X$ reduced and of pure dimension $n$. The
Birational Transformation Theorem shows that in this case
$f_{\infty}$ induces at finite levels weakly piecewise trivial
fibrations.
The dimension of the fibers of these fibrations depends on the order
of vanishing along the \emph{Jacobian ideal} ${\rm Jac}_f$ of $f$. Consider the
morphism induced by pulling-back $n$--forms
$$f^*\Omega_X^n\to\Omega_{X'}^n.$$
Since $X'$ is nonsingular, $\Omega_{X'}^n$ is locally free of rank
one, hence the image of the above morphism can be written as ${\rm
Jac}_f\otimes\Omega_{X'}^n$ for a unique ideal ${\rm Jac}_f$ of
$\OO_{X'}$. In other words, we have ${\rm Jac}_f={\rm
Fitt}^0(\Omega_{X'/X})$.
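\begin{example}
Let $f\colon X'\to X=\AAA^2$ be the blowing-up of the origin, with
exceptional divisor $E$. In the chart with coordinates $(u,v)$ in
which $f(u,v)=(u,uv)$ we have $f^*(dx\wedge dy)=u\,du\wedge dv$,
hence ${\rm Jac}_f$ is the ideal of $E$. More generally, for the
blowing-up of a point on a nonsingular variety of dimension $n$,
the ideal ${\rm Jac}_f$ defines the subscheme $(n-1)E$.
\end{example}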
If $X$ is nonsingular, too, then ${\rm Jac}_f$ is locally
principal, and it defines a subscheme supported on the exceptional
locus of $f$. In this case, Proposition~\ref{prop11} implies that
$f_{\infty}$ is injective on $J_{\infty}(X')\smallsetminus
J_{\infty}(V({\rm Jac}_f))$. In general, we have the following.
\begin{lemma}\label{injectivity2}
If $f\colon X'\to X$ is a proper birational morphism, with $X'$
nonsingular and $X$ reduced and pure-dimensional, and if $\gamma$,
$\gamma' \in J_{\infty}(X')$ are such that
$$\gamma\not\in J_{\infty}(V({\rm Jac}_f))\cup
f_{\infty}^{-1}(J_{\infty}(X_{\rm sing}))$$ and $f_{\infty}(\gamma)
=f_{\infty}(\gamma')$, then $\gamma=\gamma'$.
\end{lemma}
\begin{proof}
We argue as in the proof of Proposition~\ref{prop11}. Since $f$ is
separated, it is enough to show that if $j\colon{\rm
Spec}\,k\llparenthesis t\rrparenthesis
\to {\rm Spec}\,k\llbracket t\rrbracket$ corresponds to $k\llbracket t\rrbracket\subset
k\llparenthesis t\rrparenthesis$, then $\gamma\circ j=\gamma'\circ j$.
Note that $U:=f^{-1}(X_{\rm reg})\smallsetminus V({\rm Jac}_f)$ is
an open subset of $X'$ that is the inverse image of an open subset
of $X$. Moreover, $f$ is invertible on $U$. By assumption,
$\gamma\circ j$ factors through $U$ and $f\circ\gamma\circ
j=f\circ\gamma'\circ j$. Therefore $\gamma'\circ j$ also factors
through $U$ and $\gamma\circ j=\gamma'\circ j$.
\end{proof}
\begin{theorem}\label{change_of_variable}
Let $f\colon X'\to X$ be a proper birational morphism, with $X'$
nonsingular and $X$ reduced and of pure dimension $n$. For
nonnegative integers $e$ and $e'$, we put
$$C_{e,e'}:={\rm Cont}^e({\rm Jac}_f)\cap f_{\infty}^{-1}({\rm
Cont}^{e'}({\rm Jac}_X)).$$ Fix $m\geq\max\{2e, e+e'\}$.
\begin{enumerate}
\item[i)] $\psi^{X'}_m(C_{e,e'})$ is a union of fibers of $f_m$.
\item[ii)] $f_m$ induces a weakly piecewise trivial fibration
with fiber $\AAA^e$
$$\psi^{X'}_m(C_{e,e'})\to f_m(\psi^{X'}_m(C_{e,e'})).$$
\end{enumerate}
\end{theorem}
In the case when also $X$ is nonsingular, this theorem is due to
Kontsevich \cite{kontsevich}. The case of singular $X$ is due to
Denef and Loeser \cite{DL}, while the proof we give below follows
\cite{Lo}. Note that in these references one makes the assumption
that the base field has characteristic zero, and therefore one gets
piecewise trivial fibrations in ii) above. For a version in the
context of formal schemes, allowing also positive characteristic,
but with additional assumptions on the morphism, see \cite{Se}. The
above theorem is at the heart of the Change of Variable Formula in
motivic integration (see \cite{batyrev}, \cite{DL}, and also \cite{Loeser}).
We start with some preliminary remarks. Let $f$ be as in the
theorem, and suppose that $\alpha\in J_{\infty}(X')$, with
$\ord_{\alpha}({\rm Jac}_f)=e$ and $\ord_{f_{\infty}(\alpha)}({\rm
Jac}_X)=e'$. Pulling-back via $\alpha$ the right exact sequence of
sheaves of differentials associated to $f$, we get an exact sequence
of $k\llbracket t\rrbracket$--modules
$$
\alpha^*(f^*\Omega_X)\overset{h}\to\alpha^*\Omega_{X'}\to\alpha^*\Omega_{X'/X}\to
0.
$$
By assumption ${\rm Fitt}^0(\alpha^*\Omega_{X'/X})=(t^e)$, hence
$$\alpha^*(\Omega_{X'/X})\simeq k[t]/(t^{a_1})\oplus\ldots\oplus
k[t]/(t^{a_n})$$ for some $0\leq a_1\leq\ldots\leq a_n$ with
$\sum_ia_i=e$.
It follows that if $T={\rm Im}(h)$, then $T$ is free of rank $n$,
and in suitable bases of $T$ and $\alpha^*\Omega_{X'}$, the induced
map $g\colon T\to \alpha^*\Omega_{X'}$ is given by the diagonal
matrix with entries $t^{a_1},\ldots,t^{a_n}$. We get a decomposition
$\alpha^*(f^*\Omega_X)\simeq T\oplus {\rm Ker}(h)$, and therefore
$${\rm Fitt}^0({\rm Ker}(h))={\rm
Fitt}^n(\alpha^*(f^*\Omega_X))=(t^{e'}).$$ Hence ${\rm Ker}(h)\simeq
k[t]/(t^{b_1})\oplus\ldots\oplus k[t]/(t^{b_r})$ for some $0\leq
b_1\leq\ldots\leq b_r$ with $\sum_ib_i=e'$.
Suppose now that $p\geq\max\{e,e'\}$ and that $\alpha_p$ is the
image of $\alpha$ in $J_p(X)$. If we tensor everything with
$k[t]/(t^{p+1})$, we get the following factorization of the
pull-back map $h_p\colon
\alpha_p^*f^*\Omega_X\to\alpha_p^*\Omega_{X'}$
\begin{equation}\label{eq_cv10}
\alpha_p^*f^*\Omega_X\overset{g'_p}\to
T_p=T\otimes_{k\llbracket t\rrbracket}k[t]/(t^{p+1})\overset{g_p}\to
\alpha_p^*\Omega_{X'},
\end{equation}
with $g'_p$ surjective
and ${\rm Ker}(g'_p)={\rm Ker}(h)\otimes
_{k\llbracket t\rrbracket}k[t]/(t^{p+1})\simeq \oplus_i k[t]/(t^{b_i})$.
The following lemma will be needed in
the proof of Theorem~\ref{change_of_variable}.
\begin{lemma}\label{lemma_change_of_variable}
Let $f\colon X'\to X$ be as in the theorem. Suppose that $\gamma_m$,
$\gamma'_m\in J_m(X')$ are such that $\ord_{\gamma_m}({\rm
Jac}_f)=e$ and $\ord_{f_m(\gamma_m)}({\rm Jac}_X)=e'$, with
$m\geq\max\{2e,e+e'\}$. If $f_m(\gamma_m)=f_m(\gamma'_m)$, then
$\gamma_m$ and $\gamma'_m$ have the same image in $J_{m-e}(X')$.
\end{lemma}
\begin{proof}
For an arc $\delta$ we will denote by $\delta_m$ its image in the
space of $m$--jets.
It is enough to show the following claim: if $q\geq \max\{2e,e+e'\}$, and if we have
$$\alpha\in J_{\infty}(X'),\,\,\beta\in J_{\infty}(X),$$
with $\ord_{\alpha}({\rm Jac}_f)=e$, $\ord_{\beta}({\rm Jac}_X)=e'$
and $f_q(\alpha_q)=\beta_q$, then there is $\delta\in
J_{\infty}(X')$ having the same image as $\alpha$ in $J_{q-e}(X')$
and such that $f_{q+1}(\delta_{q+1})=\beta_{q+1}$.
Indeed, in the situation in the lemma, let us choose arbitrary
liftings $\gamma$ and $\gamma'$ of $\gamma_m$ and $\gamma'_m$,
respectively, to $J_{\infty}(X')$. We use the above claim to
construct recursively $\alpha^{(q)}\in J_{\infty}(X')$ for $q\geq m$
such that $\alpha^{(m)}=\gamma$ and $\alpha^{(q+1)}$, $\alpha^{(q)}$
have the same image in $J_{q-e}(X')$ and
$$f_q(\alpha^{(q)}_q)=\psi^X_q(f_{\infty}(\gamma'))$$
for every $q\geq m$ (note that since $m\geq\max\{2e,e+e'\}$ each
$\alpha^{(q)}$ vanishes along ${\rm Jac}_f$ and $f^{-1}({\rm
Jac}_X)$ with the same order as $\gamma$). The sequence given by the
image of each $\alpha^{(q)}$ in $J_q(X')$ defines a unique
$\alpha\in J_{\infty}(X')$ such that $\alpha$ and $\alpha^{(q)}$
have the same image in $J_{q-e}(X')$ for every $q\geq m$. We deduce
that $f_{\infty}(\alpha)=f_{\infty}(\gamma')$. Since $\alpha$ has
the same image as $\gamma$ in $J_{m-e}(X')$, and since $m-e\geq
\max\{e,e'\}$, it follows that
$$\alpha\not\in J_{\infty}(V({\rm Jac}_f))\cup
f_{\infty}^{-1}(J_{\infty}(X_{\rm sing})),$$ hence $\alpha=\gamma'$
by Lemma~\ref{injectivity2}. In particular, $\gamma$ and $\gamma'$
have the same image in $J_{m-e}(X')$.
We now prove the claim made at the beginning of the proof. It
follows from Proposition~\ref{fiber6} i) that using $\alpha_{q+1}\in
(\pi^{X'}_{q+1,q-e})^{-1}(\alpha_{q-e})$ we get an isomorphism
$$(\pi^{X'}_{q+1,q-e})^{-1}(\alpha_{q-e})\simeq {\rm Hom}_{k[t]/(t^{q-e+1})}(\alpha_{q-e}^*\Omega_{X'},
(t^{q-e+1})/(t^{q+2})).$$ Similarly, using $f_{q+1}(\alpha_{q+1})$
we see that
$$(\pi^X_{q+1,q-e})^{-1}(\beta_{q-e})\simeq
{\rm Hom}_{k[t]/(t^{q-e+1})}(\beta_{q-e}^*\Omega_X,
(t^{q-e+1})/(t^{q+2})).$$
Via this isomorphism $\beta_{q+1}$ corresponds to $w
\colon\beta_{q-e}^*\Omega_X\to (t^{q-e+1})/(t^{q+2})$. Note that
since $\beta_q=f_q(\alpha_q)$, the image of $w$ lies in
$(t^{q+1})/(t^{q+2})$. We now use the factorization (\ref{eq_cv10})
with $p=q-e$. If we construct a morphism $u\colon
\alpha_{q-e}^*\Omega_{X'}\to (t^{q-e+1})/(t^{q+2})$ such that
$u\circ h_{q-e}=w$, then $u$ corresponds to an element
$\delta_{q+1}\in J_{q+1}(X')$ such that any lifting $\delta$ of
$\delta_{q+1}$ to $J_{\infty}(X')$ satisfies our requirement.
We first show that $w$ is zero on ${\rm Ker}(g'_{q-e})$. Note
that by using $f_{2q+1}(\alpha_{2q+1})\in
(\pi^X_{2q+1,q})^{-1}(f_q(\alpha_q))$ we see that $\beta_{2q+1}$
corresponds to a morphism
$$w'\colon\beta_q^*\Omega_X\to (t^{q+1})/(t^{2q+2}),$$
such that $w$ is obtained by tensoring $w'$ with $k[t]/(t^{q-e+1})$
and composing with the natural map $(t^{q+1})/(t^{2q-e+2})\to
(t^{q-e+1})/(t^{q+2})$. Therefore in order to show that $w$ is zero
on ${\rm Ker}(g'_{q-e})$ it is enough to show that $w'$ maps ${\rm
Ker}(g'_{q-e})$ to $(t^{q+2})/(t^{2q+2})$. Since ${\rm
Ker}(g'_{q-e})$ is a direct sum of $k[t]/(t^{q+1})$--modules of the
form $k[t]/(t^b)$ with $b\leq e'$, it follows that $w'({\rm
Ker}(g'_{q-e}))$ is contained in $(t^{2q+2-e'})/(t^{2q+2})$. We have
$2q+2-e'\geq q+2$, hence $w$ is zero on ${\rm Ker}(g'_{q-e})$.
Therefore $w$ induces a morphism $\overline{w}\colon T_{q-e}\to
(t^{q+1})/(t^{q+2})$. We know that in suitable bases of $T_{q-e}$ and
$\beta_{q-e}^*\Omega_X$ the map $g_{q-e}$ is given by the diagonal
matrix with entries $t^{a_1},\ldots,t^{a_n}$, with all $a_i\leq e$.
It follows that we can find $u\colon\alpha_{q-e}^*\Omega_{X'}\to
(t^{q+1-e})/(t^{q+2})$ such that $u\circ h_{q-e}=w$, which completes the
proof of the lemma.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{change_of_variable}]
The assertion in i) follows from
Lemma~\ref{lemma_change_of_variable}, and we now prove ii). We first show
that every fiber of the restriction of $f_m$ to
$\psi_m^{X'}(C_{e,e'})$ is isomorphic to $\AAA^e$, and we then
explain how to globalize the argument. Note first that since $X'$ is
nonsingular, every jet in $J_m(X')$ can be lifted to
$J_{\infty}(X')$, hence an element in $J_m(X')$ lies in
$\psi_m^{X'}(C_{e,e'})$ if and only if its projection to
$J_{m-e}(X')$ lies in $\psi_{m-e}^{X'}(C_{e,e'})$.
Let $\gamma'_m\in \psi_m^{X'}(C_{e,e'})$ and $\gamma'_{m-e}$ its
image in $J_{m-e}(X')$. We denote by $\gamma_m$ and $\gamma_{m-e}$
the images of $\gamma'_m$ and $\gamma'_{m-e}$ by $f_m$ and
$f_{m-e}$, respectively. It follows from
Lemma~\ref{lemma_change_of_variable} that $f_m^{-1}(\gamma_m)$ is
contained in the fiber of $\pi_{m,m-e}^{X'}$ over $\gamma'_{m-e}$.
Using the identifications of the fibers of $\pi_{m,m-e}^{X'}$ and
$\pi_{m,m-e}^X$ over $\gamma'_{m-e}$ and, respectively,
$\gamma_{m-e}$ given by Proposition~\ref{fiber6}, we get an
isomorphism of $f_m^{-1}(\gamma_m)$ with the kernel of
\begin{equation}\label{fiber_identification}
{\rm Hom}((\gamma'_{m-e})^*\Omega_{X'},(t^{m-e+1})/(t^{m+1}))\to
{\rm Hom}(\gamma_{m-e}^*\Omega_{X},(t^{m-e+1})/(t^{m+1})),
\end{equation}
where the Hom groups are over $k[t]/(t^{m-e+1})$. This gives an
isomorphism
$$f_m^{-1}(\gamma_m)\simeq {\rm
Hom}((\gamma'_{m-e})^*\Omega_{X'/X},(t^{m-e+1})/(t^{m+1})).$$ Since
$(\gamma'_{m-e})^*\Omega_{X'/X}\simeq
k[t]/(t^{a_1})\oplus\ldots\oplus k[t]/(t^{a_n})$, with $0\leq
a_1\leq\ldots\leq a_n\leq e$ and $\sum_ia_i=e$, we deduce
$$f_m^{-1}(\gamma_m)\simeq
\oplus_{i=1}^n(t^{m+1-a_i})/(t^{m+1})\simeq \AAA^e.$$
We now show that the above argument globalizes to give the full
assertion in ii). Note first that after restricting to an affine
open subset of $X'$, we may assume that we have a section of
$\pi_{m,m-e}^{X'}$. By Remark~\ref{remark_nonsingular_case}, it
follows that $J_m(X')$ becomes isomorphic to a geometric vector
bundle $E$ over $J_{m-e}(X')$ whose fiber over some $\gamma'_{m-e}$
is isomorphic to ${\rm
Hom}((\gamma'_{m-e})^*\Omega_{X'},(t^{m-e+1})/(t^{m+1}))$. Moreover,
after decomposing $\psi_{m-e}^{X'}(C_{e,e'})$ into suitable locally
closed subsets and restricting to one of them, we may assume that, in the above
notation, the integers $a_1,\ldots,a_n$ do not depend on
$\gamma_{m-e}'$. It follows that we get a geometric subbundle $F$ of
$E$ over this subset of $J_{m-e}(X')$ whose fiber over
$\gamma_{m-e}'$ is ${\rm
Hom}((\gamma'_{m-e})^*\Omega_{X'/X},(t^{m-e+1})/(t^{m+1}))$. It
follows from the above discussion that we get a one-to-one map from
the quotient bundle $E/F$ to $J_m(X)$. This completes the proof of
the theorem.
\end{proof}
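\begin{example}
Let $f\colon X'\to X=\AAA^2$ be the blowing-up of the origin, with
exceptional divisor $E$. We have ${\rm Jac}_X=\OO_X$ and ${\rm
Jac}_f$ equal to the ideal of $E$, so that $C_{e,0}={\rm
Cont}^e(E)$ is the set of arcs on $X'$ that meet $E$ with
multiplicity exactly $e$. The theorem says that for every $m\geq
2e$ the map $f_m$ restricts to a weakly piecewise trivial fibration
$\psi_m^{X'}({\rm Cont}^e(E))\to f_m(\psi_m^{X'}({\rm Cont}^e(E)))$
with fiber $\AAA^e$. This drop in dimension by $e$ accounts for the
shift by $e$ in Corollary~\ref{cor2_change_of_variable} below.
\end{example}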
\begin{corollary}\label{cor1_change_of_variable}
Suppose that $k$ is uncountable. With the notation in
Theorem~\ref{change_of_variable}, if $A\subseteq C_{e,e'}$ is a
cylinder in $J_{\infty}(X')$, then $f_{\infty}(A)$ is a cylinder in
$J_{\infty}(X)$.
\end{corollary}
\begin{proof}
Suppose that $A=\psi_p^{-1}(S)$, and let $m\geq\max\{2e,e+e',e+p\}$.
It is enough to show that
$$f_{\infty}(A)=(\psi_m^X)^{-1}(f_m(\psi_m^{X'}(A))).$$
The inclusion ``$\subseteq$'' is trivial, hence it is enough to show
the reverse inclusion. Consider $\delta\in
(\psi_m^X)^{-1}(f_m(\psi_m^{X'}(A)))$. In particular $\delta\not\in
J_{\infty}(X_{\rm sing})$, and by Corollary~\ref{cor2_cylinder1}
there is $\gamma\in J_{\infty}(X')$ such that
$\delta=f_{\infty}(\gamma)$. Since $f_m(\psi_m^{X'}(\gamma))\in
f_m(\psi_m^{X'}(A))$, it follows from
Lemma~\ref{lemma_change_of_variable} that the image of $\gamma$ in
$J_p(X')$ lies in $S$, hence $\gamma\in A$.
\end{proof}
\begin{corollary}\label{cor2_change_of_variable}
Suppose that $k$ is uncountable and of characteristic zero. With the
notation in the theorem, if $B\subseteq J_{\infty}(X)$ is a
cylinder, then
$$\codim(B)=\min\{\codim(f_{\infty}^{-1}(B)\cap C_{e,e'})+e\vert
e,e'\in\NN\}.$$ Moreover, we have
$$|B|=\sum_{e,e'}|f_{\infty}^{-1}(B)\cap C_{e,e'}|,$$
where the sum is over those $e$, $e'\in\NN$ such that
$\codim(f_{\infty}^{-1}(B)\cap C_{e,e'})+e=\codim(B)$.
\end{corollary}
\begin{proof}
It follows from the previous corollary that each $B\cap
f_{\infty}(C_{e,e'})$ is a cylinder and Lemma~\ref{injectivity2}
implies that these cylinders are disjoint. Moreover, the complement
in $B$ of their union is thin, so $\codim(B)=\min_{e,e'}
\codim(B\cap f_{\infty}(C_{e,e'}))$ by
Proposition~\ref{countable_union} and $|B|=\sum_{e,e'}|B\cap
f_{\infty}(C_{e,e'})|$, the sum being over those $e$ and $e'$ such
that $\codim(f_{\infty}(C_{e,e'})\cap B)=\codim(B)$. The fact that
$$\codim(f_{\infty}(C_{e,e'})\cap B)=\codim(C_{e,e'}\cap
f_{\infty}^{-1}(B))+e,\,\,|f_{\infty}(C_{e,e'})\cap B|=
|C_{e,e'}\cap f_{\infty}^{-1}(B)|$$ is a direct consequence of
Theorem~\ref{change_of_variable}.
\end{proof}
\begin{remark}
Note that we needed to assume ${\rm char}(k)=0$ simply because
we used existence of resolution of singularities in proving the
basic properties of codimension of cylinders.
\end{remark}
\section{Minimal log discrepancies via arcs}
{}From now on we assume that the characteristic of the ground field
is zero, as we will make systematic use of existence of resolution
of singularities. We start by recalling some basic definitions in
the theory of singularities of pairs.
We work with pairs $(X,Y)$, where $X$ is a normal $\QQ$--Gorenstein
$n$--dimensional variety and $Y=\sum_{i=1}^sq_iY_i$ is a formal
combination with real numbers $q_i$ and proper closed subschemes
$Y_i$ of $X$. An important special case is when $Y$ is an
$\RR$-\emph{Cartier divisor}, i.e. when all $Y_i$ are defined by
locally principal ideals.
We say that a pair $(X,Y)$ is \emph{effective} if all $q_i$ are
nonnegative. Since $X$ is normal, we have a Weil divisor $K_X$ on
$X$, uniquely defined up to linear equivalence, such that $\OO(K_X)
\simeq i_*\Omega_{X_{\rm reg}}^n$, where $i\colon X_{\rm
reg}\hookrightarrow X$ is the inclusion of the smooth locus.
Moreover, since $X$ is $\QQ$--Gorenstein, we may and will fix a
positive integer $r$ such that $rK_X$ is a Cartier divisor.
Invariants of the singularities of $(X,Y)$ are defined using
\emph{divisors over $X$}: these are prime divisors $E\subset X'$,
where $f\colon X'\to X$ is a birational morphism and $X'$ is normal.
Every such divisor $E$ gives a discrete valuation ${\rm ord}_E$ of
the function field $K(X')=K(X)$, corresponding to the DVR
$\OO_{X',E}$. We identify two divisors over $X$ if they give the
same valuation of $K(X)$. In particular, we may always assume that
$X'$ and $E$ are both smooth. The \emph{center} of $E$ is the
closure of $f(E)$ in $X$ and it is denoted by $c_X(E)$.
Let $E$ be a divisor over $X$. If $Z$ is a closed subscheme of $X$,
then we define $\ord_E(Z)$ as follows: we may assume that $E$ is a
divisor on $X'$ and that the scheme-theoretic inverse image
$f^{-1}(Z)$ is a divisor. Then $\ord_E(Z)$ is the coefficient of $E$
in $f^{-1}(Z)$. If $(X,Y)$ is a pair as above, then we put
$\ord_E(Y):=\sum_iq_i\ord_{E}(Y_i)$. We also define
$\ord_E(K_{-/X})$ as the coefficient of $E$ in $K_{X'/X}$. Recall
that $K_{X'/X}$ is the unique $\QQ$--divisor supported on the
exceptional locus of $f$ such that $rK_{X'/X}$ is linearly
equivalent to $rK_{X'}-f^*(rK_X)$. Note that neither $\ord_E(Y)$ nor
$\ord_E(K_{-/X})$ depends on the particular $X'$ we have
chosen.
Suppose now that $(X,Y)$ is a pair and that $E$ is a divisor over
$X$. The \emph{log discrepancy} of $(X,Y)$ with respect to $E$ is
$$a(E;X,Y):=\ord_E(K_{-/X})-\ord_E(Y)+1.$$
If $W$ is a closed subset of $X$, and $\dim(X)\geq 2$, then the \emph{minimal log
discrepancy} of $(X,Y)$ along $W$ is defined by
$${\rm mld}(W;X,Y):=\inf\{a(E;X,Y)\mid E\,{\rm divisor}\,{\rm over}\,X,\,c_X(E)\subseteq W\}.$$
When $\dim(X)=1$ we use the same definition of minimal log discrepancy, unless the infimum
is negative, in which case we make the convention that $\mld(W; X,Y)=-\infty$
(see below for motivation).
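\begin{example}
Suppose that $X$ is nonsingular of dimension $n\geq 2$, that the
pair is $(X,0)$, and that $W=\{x\}$ is a point. If $E$ is the
exceptional divisor of the blowing-up of $X$ at $x$, then
$K_{X'/X}=(n-1)E$, hence $a(E;X,0)=n$. One can show that every
divisor $F$ over $X$ with $c_X(F)=\{x\}$ satisfies $a(F;X,0)\geq
n$, hence $\mld(x;X,0)=n$.
\end{example}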
There are also other versions of minimal log discrepancies (see
\cite{ambro}), but the study of all these variants can be reduced to
the study of the above one. In what follows we give a quick
introduction to minimal log discrepancies, and refer for proofs and
details to \emph{loc. cit.}
\begin{remark}\label{remark_integral_closure}
If $\widetilde{Y}:=\sum_iq_i\widetilde{Y_i}$, where each
$\widetilde{Y_i}$ is defined by the integral closure of the ideal
defining $Y_i$, then $\ord_E(Y)=\ord_E(\widetilde{Y})$ for every
divisor $E$ over $X$. For basic facts about integral closure of
ideals, see for example \cite{lazarsfeld}, \S 9.6.A. We deduce that
we have ${\rm mld}(W;X,Y)=\mld(W;X,\widetilde{Y})$.
\end{remark}
It is an easy computation to show that if $E$ and $F$ are divisors
with simple normal crossings on $X'$ above $X$, and if $F_1$ is the
exceptional divisor of the blowing-up of $X'$ along $E\cap F$ (we
assume that this is nonempty and connected), then
$$a(F_1;X,Y)=a(E;X,Y)+a(F;X,Y).$$
We may repeat this procedure, blowing-up along the intersection of
$F_1$ with the proper transform of $E$. In this way we get divisors
$F_m$ over $X$ for every $m\geq 1$ with
$$a(F_m;X,Y)=m\cdot a(E;X,Y)+a(F;X,Y).$$
In particular, this computation shows that if $\dim(X)\geq 2$ and
$\mld(W;X,Y)<0$, then $\mld(W;X,Y)=-\infty$ (which explains our convention
in the one-dimensional case).
A pair $(X,Y)$ is \emph{log canonical} (\emph{Kawamata log
terminal}, or klt for short) if and only if $\mld(X;X,Y)\geq 0$
(respectively, $\mld(X;X,Y)>0$). Note that for a closed subset $W$,
if $\mld(W;X,Y)\geq 0$ then for every divisor $E$ over $X$ such that
$c_X(E)\cap W\neq\emptyset$ we have $a(E;X,Y)\geq 0$. Indeed, if
this is not the case, then we can find a divisor $F$ on some $X'$
with $c_X(F)\subseteq W$ and such that $E$ and $F$ have simple
normal crossings and nonempty intersection. As above, we produce a
sequence of divisors $F_m$ with $c_X(F_m)\subseteq W$ and
$\lim_{m\to\infty}a(F_m;X,Y)=-\infty$.
This assertion can be used to show that $\mld(W;X,Y)\geq 0$ if and
only if there is an open subset $U$ of $X$ containing $W$ such that
$(U,Y\vert_U)$ is log canonical. In fact, we have the following more
precise proposition that allows computing minimal log discrepancies
via log resolutions.
\begin{proposition}\label{compute_via_resolutions}
Let $(X,Y)$ be a pair as above and $W\subseteq X$ a closed subset.
Suppose that $f\colon X'\to X$ is a proper birational morphism with
$X'$ nonsingular, and such that the union of $\cup_if^{-1}(Y_i)$,
the exceptional locus of $f$, and $f^{-1}(W)$ (in case $W\neq X$)
is a divisor with simple normal crossings. Write
$$f^{-1}(Y):=\sum_iq_if^{-1}(Y_i)=\sum_{j=1}^{d}\alpha_jE_j,\,\,K_{X'/X}
=\sum_{j=1}^d\kappa_j E_j.$$ For a nonnegative real number $\tau$,
we have $\mld(W;X,Y)\geq \tau$ if and only if the following
conditions hold:
\begin{enumerate}
\item For every $j$ such that $f(E_j)\cap W\neq\emptyset$ we have
$\kappa_j+1-\alpha_j\geq 0$.
\item For every $j$ such that $f(E_j)\subseteq W$ we have
$\kappa_j+1-\alpha_j\geq\tau$.
\end{enumerate}
\end{proposition}
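As an illustration, consider $X=\AAA^n$, a hyperplane $H$ passing
through a point $P$, and the pair $(X,q\cdot H)$ with $0\leq q\leq 1$,
taking $W=\{P\}$. The blowing-up $f\colon X'\to X$ of $P$ satisfies the
hypotheses of the proposition, the relevant divisors being the
exceptional divisor $E$, with $\kappa_E=n-1$, $\alpha_E=q$ and
$f(E)=\{P\}\subseteq W$, and the proper transform $\widetilde{H}$, with
$\kappa_{\widetilde{H}}=0$, $\alpha_{\widetilde{H}}=q$ and
$f(\widetilde{H})\cap W\neq\emptyset$. Condition (1) holds since
$1-q\geq 0$, and condition (2) gives
$$\mld(P;\AAA^n,q\cdot H)=n-q.$$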
\bigskip
We now turn to the description of minimal log discrepancies in terms
of codimensions of contact loci from \cite{EMY}. We assume that $k$
is uncountable. If $(X,Y)$ is a pair with $Y=\sum_{i=1}^sq_i Y_i$
and if $w=(w_i)\in\NN^s$, then we put ${\rm Cont}^{\geq
w}(Y):=\cap_i{\rm Cont}^{\geq w_i}(Y_i)$, which is clearly a
cylinder. We similarly define ${\rm Cont}^w(Y)$, ${\rm Cont}^w(Y)_m$
and ${\rm Cont}^{\geq w}(Y)_m$.
Recall that $rK_X$ is a Cartier divisor. We have a canonical map
$$\eta_r\colon (\Omega_X^n)^{\otimes r}\to
\OO(rK_X)=i_*((\Omega_{X_{\rm reg}}^n)^{\otimes r}).$$
We can write ${\rm Im}(\eta_r)=I_{Z_r}\otimes\OO(rK_X)$,
and the subscheme $Z_r$ defined by $I_{Z_r}$ is the
$r^{\rm th}$ \emph{Nash subscheme} of $X$.
It is clear that $I_{Z_{rs}}=I_{Z_r}^s$ for every $s\geq 1$.
Suppose that $W$ is a
proper closed subset of $X$, and let $f\colon X'\to X$ be a
resolution of singularities as in
Proposition~\ref{compute_via_resolutions} such that, in addition,
$f^{-1}(V({\rm Jac}_X))$ and $f^{-1}(Z_r)$ are divisors, having
simple normal crossings with the exceptional locus of $f$, with
$f^{-1}(Y)$ and with $f^{-1}(W)$.
\begin{lemma}\label{lem_EMY}{\rm (}\cite{EMY}{\rm )}
Let $(X,Y)$ be a pair and $f\colon X'\to X$ a resolution as above.
Write
$$f^{-1}(Y_i)=\sum_{j=1}^d\alpha_{i,j}E_j,\,\,K_{X'/X}=\sum_{j=1}^d
\kappa_jE_j,\,\,f^{-1}(Z_r)=\sum_{j=1}^d z_jE_j.$$ For every
$w=(w_i)\in\NN^s$ and $\ell\in\NN$ we have
$$\codim({\rm Cont}^{w}(Y)\cap {\rm Cont}^{\ell}(Z_r)\cap {\rm
Cont}^{\geq
1}(W))=\frac{\ell}{r}+\min_{\nu}\sum_j(\kappa_j+1)\nu_j,$$ where the
minimum is over those $\nu=(\nu_j)\in\NN^d$ with
$\sum_j\alpha_{i,j}\nu_j=w_i$ for all $i$ and $\sum_jz_j\nu_j=\ell$,
and such that $\cap_{\nu_j\geq 1}E_j\neq\emptyset$ and $\nu_j\geq 1$
for at least one $j$ with $f(E_j)\subseteq W$.
\end{lemma}
\begin{proof}
For every $\nu=(\nu_j)\in\NN^d$ we put ${\rm
Cont}^{\nu}(E)=\cap_j{\rm Cont}^{\nu_j}(E_j)$. Since $\sum_jE_j$ has
simple normal crossings, we see that ${\rm Cont}^{\nu}(E)$ is
nonempty if and only if $\cap_{\nu_j\geq 1}E_j\neq\emptyset$, and in
this case all irreducible components of ${\rm Cont}^{\nu}(E)$ have
codimension $\sum_j\nu_j$. Indeed, by Lemma~\ref{lem2} it is enough
to check this when $X=\AAA^n$ and the $E_j$ are coordinate
hyperplanes, in which case the assertion is clear.
Suppose that $\gamma\in {\rm Cont}^{\nu}(E)$; then
$\ord_{f_{\infty}(\gamma)}(Y_i)=\sum_j\alpha_{i,j}\nu_j$ and
$\ord_{f_{\infty}(\gamma)}(Z_r)=\sum_jz_j\nu_j$. It is clear that
$f_{\infty}(\gamma)\in {\rm Cont}^{\geq 1}(W)$ if and only if there
is $j$ such that $\nu_j\geq 1$ and $E_j\subseteq f^{-1}(W)$.
By the definition of $Z_r$ we have ${\rm
Jac}_f^r=f^{-1}(I_{Z_r})\cdot\OO(-rK_{X'/X})$, hence
$${\rm ord}_{\gamma}({\rm
Jac}_f)=\sum_j\left(\frac{z_j}{r}+\kappa_j\right)\nu_j.$$ Moreover,
by our assumption, the order of vanishing of arcs in ${\rm
Cont}^{\nu}(E)$ along $f^{-1}(V({\rm Jac}_X))$ is finite and constant.
It follows from Corollary~\ref{cor1_change_of_variable} and
Theorem~\ref{change_of_variable} that $f_{\infty}({\rm
Cont}^{\nu}(E))$ is a cylinder with
$$\codim\,f_{\infty}({\rm
Cont}^{\nu}(E))=\sum_j\frac{z_j}{r}\nu_j+\sum_j(\kappa_j+1)\nu_j.$$
By Lemma~\ref{injectivity2} the cylinders $f_{\infty}({\rm
Cont}^{\nu}(E))$ for various $\nu$ are mutually disjoint. If we take
the union over those $\nu$ such that $\sum_j\alpha_{i,j}\nu_j=w_i$
for all $i$ and $\sum_jz_j\nu_j=\ell$, with $\nu_j\geq 1$ for some
$E_j\subseteq f^{-1}(W)$, this union is contained in ${\rm
Cont}^w(Y)\cap {\rm Cont}^{\ell}(Z_r)\cap {\rm Cont}^{\geq 1}(W)$.
Moreover, its complement is contained in $\cup_jJ_{\infty}(f(E_j))$,
hence it is thin. The formula in the lemma now follows from
Proposition~\ref{countable_union}.
\end{proof}
\begin{theorem}\label{thm_EMY}{\rm (}\cite{EMY}{\rm )}
If $(X,Y)$ is a pair as above, and $W\subset X$ is a proper closed
subset, then
$$\mld(W;X,Y)=\inf_{w,\ell}\left\{\codim\left({\rm Cont}^w(Y)\cap {\rm
Cont}^{\ell}(Z_r)\cap {\rm Cont}^{\geq
1}(W)\right)-\frac{\ell}{r}-\sum_{i=1}^sq_iw_i\right\},$$ where the
infimum is over all $w=(w_i)\in\NN^s$ and $\ell\in\NN$. Moreover, if
this minimal log discrepancy is finite, then the infimum on the
right-hand side is a minimum.
\end{theorem}
\noindent If $X$ is nonsingular, then $Z_r=\emptyset$ and the description of
minimal log discrepancies in the theorem takes a particularly simple
form.
\begin{proof}[Proof of Theorem~\ref{thm_EMY}]
Let $f$ be a resolution as in Lemma~\ref{lem_EMY}. We keep the
notation in that lemma and its proof. We also put
$f^{-1}(Y)=\sum_j\alpha_jE_j$. Note that
$\alpha_j=\sum_i\alpha_{i,j}q_i$. After restricting to an open
neighborhood of $W$ we may assume that all $f(E_j)$ intersect $W$.
We first show that $\mld(W;X,Y)$ is bounded above by the infimum in
the theorem. Of course, we may assume that $\mld(W;X,Y)$ is finite.
Therefore $\kappa_j+1-\alpha_j\geq {\rm mld}(W;X,Y)$ if
$f(E_j)\subseteq W$ and
$\kappa_j+1-\alpha_j\geq 0$ for every $j$.
Let $\nu=(\nu_j)\in\NN^d$ be such that $\cap_{\nu_j\geq 1}E_j\neq\emptyset$,
and $\nu_j\geq 1$ for some $j$ with $f(E_j)\subseteq W$. In this
case we have
$$\sum_{j=1}^d(\kappa_j+1)\nu_j\geq
\sum_j\alpha_j\nu_j+{\rm mld}(W;X,Y)\cdot\sum_{f(E_j)\subseteq
W}\nu_j\geq\sum_j\alpha_j\nu_j+{\rm mld}(W;X,Y).$$
If $\sum_j\alpha_{i,j}\nu_j=w_i$ for every $i$, and $\sum_jz_j\nu_j=\ell$, then
$\sum_j\alpha_j\nu_j=\sum_iq_iw_i$, and
the formula in
Lemma~\ref{lem_EMY} gives
$${\rm mld}(W;X,Y)\leq
\codim\left({\rm Cont}^w(Y)\cap {\rm Cont}^{\ell}(Z_r)\cap {\rm
Cont}^{\geq 1}(W)\right)-\sum_iq_iw_i-\frac{\ell}{r}.$$
Suppose now that we fix $E_j$ such that $f(E_j)\subseteq W$. If
$w_i=\alpha_{i,j}$ for every $i$ and if $\ell=z_j$, then it follows
from Lemma~\ref{lem_EMY} that
$$\codim({\rm Cont}^w(Y)\cap {\rm Cont}^{\ell}(Z_r)\cap{\rm
Cont}^{\geq 1}(W))\leq \kappa_j+1+\frac{\ell}{r} $$
$$=\sum_iq_iw_i+\frac{\ell}{r}+a(E_j;X,Y).$$
Such an inequality holds for every divisor over $X$ whose center is
contained in $W$, and we deduce that if $\dim(X)\geq 2$, then
the infimum in the theorem is $\leq {\rm mld}(W;X,Y)$
(note that the infimum does not depend on the particular resolution,
and every divisor with center in $W$ appears on some resolution).
Moreover, we see that if $a(E_j;X,Y)={\rm mld}(W;X,Y)$, then the
infimum is attained by the above intersection of contact loci.
In order to complete the proof of the theorem, it is enough to show that
if $X$ is a curve, and if $a(W; X,Y)<0$, then the infimum in the theorem is $-\infty$.
Note that in this case $W$ is a (smooth) point on $X$, and we may assume that
$Y_i=n_iW$ for some $n_i\in\ZZ$. Therefore our condition says that $\sum_iq_in_i>1$.
Since $\codim({\rm Cont}^m(W))=m$, we see by taking $w_i=mn_i$ for all $i$ that
$$\codim\left({\rm Cont}^w(Y)\right)-\sum_iq_i w_i =m\left(1-\sum_iq_in_i\right)\to-\infty$$
as $m$ goes to infinity.
\end{proof}
\begin{remark}\label{remark_thm_EMY}
If $(X,Y)$ is an effective pair and $W\subset X$ is a proper closed
subset, then ${\rm mld}(W;X,Y)$ is equal to
$$\inf_{w,\ell}\left\{\codim\left({\rm Cont}^{\geq w}(Y)\cap {\rm Cont}^{\ell}(Z_r)\cap
{\rm Cont}^{\geq
1}(W)\right)-\frac{\ell}{r}-\sum_{i=1}^sq_iw_i\right\},$$ where the
infimum is over all $w\in\NN^s$ and $\ell\in\NN$. Indeed, note that
we have
$$\bigsqcup_{w'}\left({\rm Cont}^{w'}(Y)\cap {\rm Cont}^{\ell}(Z_r)\cap {\rm
Cont}^{\geq 1}(W)\right)\subseteq {\rm Cont}^{\geq w}(Y)\cap {\rm
Cont}^{\ell}(Z_r)\cap {\rm Cont}^{\geq 1}(W),$$ where the disjoint
union is over $w'\in \NN^s$ such that $w'_i\geq w_i$ for every $i$.
Since the complement of this union is contained in
$\cup_iJ_{\infty}(Y_i)$, and is therefore thin, our assertion follows
from Theorem~\ref{thm_EMY} via Proposition~\ref{countable_union}
(we also use the fact that since $(X,Y)$ is effective, if $w'_i\geq w_i$ for all $i$, then
$\sum_iq_iw'_i\geq\sum_iq_iw_i$).
\end{remark}
\begin{remark}
We have assumed in Theorem~\ref{thm_EMY} that $W$ is a proper closed
subset. In general, it is easy to reduce computing minimal log
discrepancies to this case, using the fact that if $X$ is
nonsingular and if $Y$ is empty, then $\mld(X;X,Y)=1$. Indeed, this
implies that if $(X,Y)$ is an arbitrary pair and if we take
$W=X_{\rm sing}\cup\bigcup_i Y_i$, then
$$\mld(X;X,Y)=\min\{\mld(W;X,Y),1\}.$$
For example, one can use this (or alternatively, one could just
follow the proof of Theorem~\ref{thm_EMY}) to show that the pair
$(X,Y)$ is log canonical if and only if for every $w\in\NN^s$ and
every $\ell\in\NN$, we have
$$\codim\left({\rm Cont}^w(Y)\cap {\rm Cont}^{\ell}(Z_r)\right)\geq
\frac{\ell}{r}+\sum_iq_iw_i.$$
\end{remark}
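For instance, suppose that $X$ is nonsingular and $Y=q\cdot H$ for a
nonsingular hypersurface $H\subset X$, so that $Z_r=\emptyset$. In
local coordinates in which $H$ is a coordinate hyperplane one sees that
$\codim({\rm Cont}^w(H))=w$ for every $w\geq 1$, hence the above
criterion says that $(X,q\cdot H)$ is log canonical if and only if
$w\geq qw$ for every $w$, that is, if and only if $q\leq 1$.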
\begin{remark}{\rm (}\cite{EMY}{\rm )}
The usual set-up in Mori Theory is to work with a normal variety $X$
and a $\QQ$--divisor $D$ such that $K_X+D$ is $\QQ$--Cartier (see for
example \cite{kollar}). The results in this section have analogues
in that context. Suppose for simplicity that $D$ is effective,
giving an embedding ${\mathcal O}_X\hookrightarrow {\mathcal O}_X(rD)$,
and that
$r(K_X+D)$ is Cartier. The image of the composition
$$(\Omega_X^n)^{\otimes r}\to(\Omega_X^n)^{\otimes r}\otimes {\mathcal O}_X(rD)\to
{\mathcal O}_X(r(K_X+D))$$
can be written as $I_T\otimes {\mathcal O}_X(r(K_X+D))$, for a closed subscheme $T$ of $X$. Arguing as above, one can then show that if $W$ is a proper closed subset of $X$, then
$${\rm mld}(W; X,D)=\inf_{e\in\NN}\left\{\codim\left({\rm Cont}^e(T)\cap {\rm Cont}^{\geq 1}(W)\right)-
\frac{e}{r}\right\}.$$
\end{remark}
\begin{example}
Suppose that $X$ is nonsingular and $Y$, $Y'$ are effective
combinations of closed subschemes of $X$. If $P$ is a point on $X$,
then
\begin{equation}\label{eq_example01}
{\rm mld}(P; X,Y+Y')\leq {\rm mld}(P; X,Y)+{\rm
mld}(P;X,Y')-\dim(X).
\end{equation}
\noindent Indeed, let us write $Y=\sum_iq_iY_i$ and $Y'=\sum_iq'_iY_i$, where
the $q_i$ and the $q'_i$ are nonnegative real numbers. If one of the minimal log discrepancies
on the right-hand side of (\ref{eq_example01}) is $-\infty$, then $\mld(P;X,Y+Y')=
-\infty$, as well. Otherwise, we can find
$w$ and $w'\in\NN^s$ and irreducible components
$C$ of ${\rm Cont}^{\geq w}(Y)$ and $C'$ of ${\rm Cont}^{\geq
w'}(Y')$ such that $\codim(C)=\sum_iq_iw_i+{\rm mld}(P;X,Y)$ and
$\codim(C')=\sum_iq'_iw'_i+{\rm mld}(P;X,Y')$. Note that $C\cap C'$
is nonempty, since it contains the constant arc over $P$. If $m\gg
0$, then $\psi_m(C\cap C')=\psi_m(C)\cap \psi_{m}(C')$, and using
the fact that the fiber $\pi_m^{-1}(P)$ of $J_m(X)$ over $P$ is
nonsingular, we deduce
$$\codim(\psi_m(C)\cap\psi_m(C'),\pi_m^{-1}(P))\leq
\codim(\psi_m(C),\pi_m^{-1}(P))+\codim(\psi_m(C'),\pi_m^{-1}(P)).$$
Since $C\cap C'\subseteq {\rm Cont}^{\geq w+w'}(Y+Y')$, we deduce
our assertion from Remark~\ref{remark_thm_EMY}.
\end{example}
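As a sanity check for (\ref{eq_example01}), suppose that $Y=q\cdot H$
and $Y'=q'\cdot H$ for a nonsingular hypersurface $H$ passing through
$P$, with $q,q'\geq 0$ and $q+q'\leq 1$. Computing on the blowing-up of
$P$ (via Proposition~\ref{compute_via_resolutions}, for example) gives
${\rm mld}(P;X,q\cdot H)=\dim(X)-q$, and similarly for the other two
minimal log discrepancies, so both sides of (\ref{eq_example01}) are
equal to $\dim(X)-q-q'$. In this case (\ref{eq_example01}) holds with
equality.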
\smallskip
Our next goal is to give a different interpretation of minimal log
discrepancies that is better suited for applications. The main
difference is that we replace cylinders in the space of arcs by
suitable locally closed subsets in the spaces of jets.
Recall that $Z_r$ is the $r^{\rm th}$ Nash subscheme of $X$.
The \emph{non-lci subscheme of $X$ of level $r$} is defined by the ideal
$J_r=(\overline{{\rm Jac}_X^r}\colon I_{Z_r})$,
where we denote by $\overline{\fra}$ the integral closure of an ideal $\fra$. It is shown in
Corollary~\ref{cor3_appendix} in the Appendix that
$J_r\cdot I_{Z_r}$ and ${\rm Jac}_X^r$ have the same integral closure.
Note also that by Remark~\ref{char_lci}, the subscheme defined by $J_r$ is supported on the
set of points $x\in X$ such that ${\mathcal O}_{X,x}$ is not a complete intersection.
It follows
from the basic properties of integral closure that given any ideal $\fra$, we have
$\ord_{\gamma}(\fra)=\ord_{\gamma}(\overline{\fra})$ for every arc
$\gamma\in J_{\infty}(X)$. In particular,
$\ord_{\gamma}(J_r)+\ord_{\gamma}(I_{Z_r})=r\cdot\ord_{\gamma}({\rm
Jac}_X)$.
\begin{theorem}\label{new_interpretation}
Let $(X,Y)$ be an effective pair and $r$ and $J_r$ as above. If $W$
is a proper closed subset of $X$, then
$$\mld(W;X,Y)=\inf\{(m+1)\dim(X)+\frac{e'}{r}-\sum_iq_iw_i$$
$$- \dim({\rm Cont}^{\geq w}(Y)_m\cap {\rm Cont}^e({\rm
Jac}_X)_m \cap {\rm Cont}^{e'}(J_r)_m\cap {\rm Cont}^{\geq
1}(W)_m)\},$$ where the infimum is over those $w\in\NN^s$, and
$e,e',m\in\NN$ such that
$m\geq\max\{2e,e+e',e+w_i\}$. Moreover, if this minimal log
discrepancy is finite, then the above infimum is a minimum.
\end{theorem}
\noindent It will follow from the proof that the expression in the above
infimum does not depend on $m$, as long as
$m\geq\max\{2e,e+e',e+w_i\}$. Note also that $e$ comes up only in
the condition on $m$. The condition in the theorem simplifies when
$X$ is locally complete intersection, since $J_r=\OO_X$ by
Remark~\ref{char_lci} in the Appendix.
\begin{proof}[Proof of Theorem~\ref{new_interpretation}]
It follows from Theorem~\ref{thm_EMY} (see also
Remark~\ref{remark_thm_EMY}) that
$\mld(W;X,Y)=\inf_{w,\ell}\left\{\codim(C_{w,\ell})-\frac{\ell}{r}-\sum_iq_iw_i\right\}$,
where $w\in\NN^s$, $\ell\in\NN$, and
$$C_{w,\ell}={\rm Cont}^{\geq w}(Y)\cap {\rm Cont}^{\ell}(Z_r)\cap {\rm Cont}^{\geq 1}(W).$$
On the other hand, Proposition~\ref{countable_union} gives
$$\codim(C_{w,\ell})=\min_{e\in\NN}\codim(C_{w,\ell}\cap {\rm Cont}^{e}({\rm
Jac}_X)),$$ and for every $e$ we can write
$$C_{w,\ell}\cap {\rm Cont}^e({\rm Jac}_X)=
{\rm Cont}^{\geq w}(Y)\cap {\rm Cont}^e({\rm Jac}_X) \cap {\rm
Cont}^{e'}(J_r)\cap {\rm Cont}^{\geq 1}(W),$$ where $e'=re-\ell$.
Suppose now that $w$, $e$ and $\ell$ are fixed, $e'=re-\ell$, and
let $m\geq\max\{2e,e+e',e+w_i\}$. Consider $$S:={\rm Cont}^{\geq
w}(Y)_m\cap {\rm Cont}^e({\rm Jac}_X)_m \cap {\rm
Cont}^{e'}(J_r)_m\cap {\rm Cont}^{\geq 1}(W)_m.$$
If we apply Proposition~\ref{fiber1} for the morphism
$\pi_{m,m-e}\colon J_m(X)\to J_{m-e}(X)$, we see that
$\dim(S)=\dim(\pi_{m,m-e}(S))+e(\dim(X)+1)$. Moreover,
$\pi_{m,m-e}(S)\subseteq {\rm Im}(\psi^X_{m-e})$ by
Proposition~\ref{fiber2}. It follows that
$$\codim(C_{w,\ell}\cap {\rm Cont}^e({\rm Jac}_X))=
(m-e+1)\dim(X)-\dim(\pi_{m,m-e}(S))$$ $$=(m+1)\dim(X) +e-\dim(S).$$
This gives the formula in the theorem.
\end{proof}
\begin{remark}\label{remark_new_interpretation}
If the pair $(X,Y)$ is not necessarily effective, then we can get an
analogue of Theorem~\ref{new_interpretation}, but involving contact
loci of specified order along each $Y_i$, as in
Theorem~\ref{thm_EMY}.
\end{remark}
\smallskip
In this section we have related the codimensions of various contact
loci with the numerical data of a log resolution. One can use, in
fact, Theorem~\ref{change_of_variable} to interpret also the ``number
of irreducible components of minimal dimension'' in the corresponding
contact loci. We illustrate this in the following examples. The
proofs are close in spirit to those of the other results in this section, so
we leave them to the reader.
\begin{example}\label{ex1_EMY}
Consider an effective pair $(X,Y)$ as above and $W\subset X$ a
proper closed subset. Suppose that $\tau:=\mld(W;X,Y)\geq 0$. We say
that a divisor $E$ over $X$ computes $\mld(W;X,Y)$ if
$c_X(E)\subseteq W$ and $a(E;X,Y)=\tau$. There is only one divisor
over $X$ computing $\mld(W;X,Y)$ if and only if for every
$w\in\NN^s$ and $m,e,e'\in\NN$ with $m\geq\max\{2e,e+e',e+w_i\}$,
there is at most one irreducible component of $${\rm Cont}^{\geq
w}(Y)_m\cap {\rm Cont}^e({\rm Jac}_X)_m \cap {\rm
Cont}^{e'}(J_r)_m\cap {\rm Cont}^{\geq 1}(W)_m$$ of dimension
$(m+1)\dim(X)+\frac{e'}{r}-\tau-\sum_iq_iw_i$. A similar equivalence
holds when $W=X$ and ${\rm mld}(X;X,Y)=0$.
\end{example}
\begin{example}\label{ex2_EMY}{\rm (}\cite{mustata}{\rm )}
Let $X$ be a nonsingular variety, and $Y\subset X$ a closed
subvariety of codimension $c$, which is reduced and irreducible.
Since
$$\dim\,J_m(Y)\geq\dim\,J_m(Y_{\rm reg})=(m+1)\dim(Y)$$
for every $m$, it follows from Theorem~\ref{thm_EMY} that $\mld(X;
X,cY)\leq 0$, with equality if and only if
$\dim\,J_m(Y)=(m+1)\dim(Y)$ for every $m$. In fact, note that if
$X'$ is the blowing-up of $X$ along $Y$, and if $E$ is the component
of the exceptional divisor that dominates $Y$, then $a(E;X,c Y)=0$.
Suppose now that $(X,c Y)$ is log canonical. The assertion in the
previous example implies that $E$ is the unique divisor over $X$
with $a(E;X,c Y)=0$ if and only if for every $m$, the unique
irreducible component of $J_m(Y)$ of dimension $(m+1)\dim(Y)$ is
$\overline{J_m(Y_{\rm reg})}$.
Assume now that $Y$ is locally complete intersection. Since $J_m(Y)$
can be locally defined in $J_m(X)$ by $c(m+1)$ equations, it follows
that every irreducible component of $J_m(Y)$ has dimension at least
$(m+1)\dim(Y)$. Hence $(X,c Y)$ is log canonical if and only if
$J_m(Y)$ has pure dimension for every $m$. In addition, we deduce
from the above discussion that $J_m(Y)$ is irreducible for every $m$
if and only if $(X,c Y)$ is log canonical and $E$ is the only
divisor over $X$ such that $a(E;X,c Y)=0$. It is shown in
\cite{mustata} that this is equivalent to $Y$ having rational
singularities.
\end{example}
\begin{example}\label{ex3_EMY}
Let $(X,Y)$ be an effective log canonical pair that is
\emph{strictly log canonical}, that is, $\mld(X;X,Y)=0$. A
\emph{center of non-klt singularities} is a closed subset of $X$ of
the form $c_X(F)$, where $F$ is a divisor over $X$ such that
$a(F;X,Y)=0$. One can show that an irreducible closed subset
$T\subset X$ is such a center if and only if there are $w\in\NN^s$,
and $e, e'\in\NN$ not all zero, such that for
$m\geq\max\{2e,e+e',e+w_i\}$, some irreducible component of
$${\rm Cont}^{\geq w}(Y)_m\cap{\rm Cont}^e({\rm Jac}_X)_m\cap
{\rm Cont}^{e'}(J_r)_m$$ has dimension
$(m+1)\dim(X)+\frac{e'}{r}-\sum_iq_iw_i$ and dominates $T$.
\end{example}
\section{Inversion of Adjunction}
We apply the description of minimal log discrepancies from the
previous section to prove the following version of Inversion of
Adjunction. This result has also been proved by Kawakita in
\cite{kawakita1}.
\begin{theorem}\label{inv_adj}
Let $A$ be a nonsingular variety and $X\subset A$ a closed normal
subvariety of codimension $c$. Suppose that $W\subset X$ is a proper
closed subset and $Y=\sum_{i=1}^sq_iY_i$ where all $q_i\in\RR_+$ and the
$Y_i\subset A$ are closed subschemes not containing $X$ in their
support. If $r$ is a positive integer such that $rK_X$ is Cartier
and if $J_r$ is the ideal defining the non-lci subscheme of level $r$ of $X$, then
$${\rm mld}\left(W;X,\frac{1}{r}V(J_r)+Y\vert_X\right)={\rm mld}(W; A,cX+Y),$$
where $Y\vert_X:=\sum_iq_i(Y_i\cap X)$.
\end{theorem}
When $X$ is locally complete intersection, then $J_r=\OO_X$, and we
recover the result from \cite{EM} saying that ${\rm
mld}(W;X,Y\vert_X)={\rm mld}(W;A,cX+Y)$. It is shown in \emph{loc.
cit.} that this is equivalent to the following version of
Inversion of Adjunction for locally complete intersection varieties.
\begin{corollary}
Let $X$ be a normal locally complete intersection variety and
$H\subset X$ a normal Cartier divisor. If $W\subset H$ is a proper
closed subset, and if $Y=\sum_{i=1}^sq_iY_i$, where all $q_i\in\RR_+$
and the $Y_i$ are closed subschemes of $X$ not containing $H$ in their
support, then
$${\rm mld}(W;H,Y\vert_H)={\rm mld}(W;X,Y+H).$$
\end{corollary}
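For a concrete check of the corollary, take $X=\AAA^2$,
$H=\{x_2=0\}$, $Y=q\cdot L$ with $L=\{x_1=0\}$ and $0\leq q\leq 1$, and
$W=\{0\}$. On the left-hand side we have $Y\vert_H=q\cdot\{0\}$, hence
${\rm mld}(0;H,Y\vert_H)=1-q$. On the right-hand side, the blowing-up
of the origin is a log resolution as in
Proposition~\ref{compute_via_resolutions}: the exceptional divisor $E$
has $\kappa_E=1$ and $\ord_E(Y+H)=q+1$, while the proper transforms of
$L$ and $H$ have nonnegative log discrepancies, so that
${\rm mld}(0;\AAA^2,Y+H)=\kappa_E+1-(q+1)=1-q$ as well.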
For motivation and applications of the general case of the Inversion
of Adjunction Conjecture, we refer to \cite{K+}. For results in the
klt and the log canonical cases, see \cite{kollar} and
\cite{kawakita1}.
We start with two lemmas. Recall that for every scheme $X$ we have a
morphism $\Phi_{\infty}\colon\AAA^1\times J_{\infty}(X)\to
J_{\infty}(X)$ such that if $\gamma$ is an arc lying over $x\in X$,
then $\Phi_{\infty}(0,\gamma)$ is the constant arc over $x$.
\begin{lemma}\label{lem1_inv_adj}
Let $X$ be a reduced, pure-dimensional scheme and $C\subseteq
J_{\infty}(X)$ a nonempty cylinder. If $\Phi_{\infty}(\AAA^1\times
C)\subseteq C$, then $C\not\subseteq J_{\infty}(X_{\rm sing})$.
\end{lemma}
\begin{proof}
Write $C=(\psi_m^X)^{-1}(S)$, for some $S\subseteq J_m(X)$. Let
$\gamma\in C$ be an arc lying over $x\in X$. By hypothesis, the
constant $m$--jet $\gamma_m^x$ over $x$ lies in $S$. We take a
resolution of singularities $f\colon X'\to X$. It is enough to show
that $f_{\infty}^{-1}(C)$ is not contained in
$f_{\infty}^{-1}(J_{\infty}(X_{\rm sing}))$.
Let $x'$ be a point in $f^{-1}(x)$. The constant jet $\gamma_m^{x'}$
lies in $f_m^{-1}(S)$, hence $C':=(\psi_m^{X'})^{-1}(\gamma_m^{x'})$
is contained in $f_{\infty}^{-1}(C)$. On the other hand, $X'$ is
nonsingular, hence $C'$ is not contained in
$f_{\infty}^{-1}(J_{\infty}(X_{\rm sing}))=J_{\infty}(f^{-1}(X_{\rm
sing}))$ by Lemma~\ref{cylinder1}.
\end{proof}
We will apply this lemma as follows. We will consider a reduced and
irreducible variety $X$ embedded in a nonsingular variety $A$. In
$J_{\infty}(A)$ we will take a finite intersection of closed
cylinders of the form ${\rm Cont}^{\geq m}(Z)$. Such an intersection
is preserved by $\Phi_{\infty}$, and therefore so is each
irreducible component $\widetilde{C}$. The lemma then implies that
$C:=\widetilde{C}\cap J_{\infty}(X)$ is not contained in
$J_{\infty}(X_{\rm sing})$.
\begin{lemma}\label{lem2_inv_adj}
Let $A$ be a nonsingular variety and $M=H_1\cap\ldots\cap H_c$ a
codimension $c$ complete intersection in $A$. If $C$ is an
irreducible locally closed cylinder in $J_{\infty}(A)$ such that
$$C\subseteq\bigcap_{i=1}^c{\rm Cont}^{\geq d_i}(H_i),$$
and if there is $\gamma\in C\cap J_{\infty}(M)$ with
$\ord_{\gamma}({\rm Jac}_M)=e$, then
$$\codim(C\cap
J_{\infty}(M),J_{\infty}(M))\leq\codim(C,J_{\infty}(A))+e-\sum_{i=1}^c
d_i.$$
\end{lemma}
\begin{proof}
We may assume that $e$ is the smallest order of vanishing along
${\rm Jac}_M$ of an arc in $C\cap J_{\infty}(M)$.
Let $m\geq \max\{2e,e+d_i\}$ be such that
$C=(\psi_{m-e}^A)^{-1}(S)$ for some irreducible locally closed
subset $S$ in $J_{m-e}(A)$. Let $S'$ be the inverse image of $S$ in
$J_m(A)$ and $S''$ an irreducible component of $S'\cap J_m(M)$
containing some jet having order $e$ along ${\rm Jac}_M$. Every jet
in $S'$ has order $\geq d_i$ along $H_i$, hence $S'\cap J_m(M)$ is
cut out in $S'$ by $\sum_i(m-d_i+1)$ equations, and therefore
$$\dim(S'')\geq \dim(S')-(m+1)c+\sum_{i=1}^c d_i=\dim(S)+e\cdot\dim(A)-(m+1)c+\sum_{i=1}^c d_i.$$
Let $S''_0$ be the open subset of $S''$ consisting of jets having
order $\leq e$ along ${\rm Jac}_M$. It follows from
Proposition~\ref{fiber2} (see also Remark~\ref{remark_fiber2}) that
the image in $J_{m-e}(M)$ of any element in $S''_0$ can be lifted to
$J_{\infty}(M)\cap C$, hence by assumption its order of vanishing
along ${\rm Jac}_M$ is $e$. Moreover, Proposition~\ref{fiber1}
implies that the image of $S''_0$ in $J_{m-e}(M)$ has dimension
$\dim(S''_0)-e(\dim(M)+1)$. We conclude that
$$\codim(C\cap J_{\infty}(M), J_{\infty}(M))\leq
(m-e+1)\dim(M)-\dim(S''_0)+e(\dim(M)+1)\leq$$
$$(m-e+1)\dim(A)+e-\dim(S)-\sum_{i=1}^c d_i=
\codim(C,J_{\infty}(A))+e-\sum_{i=1}^c d_i.$$
\end{proof}
\begin{proof}[Proof of Theorem~\ref{inv_adj}]
The assertion is local, hence we may assume that $A$ is affine. We
first show that ${\rm mld}(W; X,\frac{1}{r}V(J_r)+Y\vert_X) \geq
{\rm mld}(W; A,cX+Y)$. Suppose that this is not the case, and let us
use Theorem~\ref{new_interpretation} for
$(X,\frac{1}{r}V(J_r)+Y\vert_X)$. We get $w\in\NN^s$ and $e$, $e'$,
$m\in\NN$ such that $m\geq \max\{2e,e+e',e+w_i\}$ and $S\subseteq
J_m(X)$ with
$$S\subseteq {\rm Cont}^{\geq w}(Y)_m\cap {\rm Cont}^e({\rm Jac}_X)_m
\cap {\rm Cont}^{e'}(J_r)_m\cap {\rm Cont}^{\geq 1}(W)_m$$ such that
$\dim(S)>(m+1)\dim(X)-{\rm mld}(W;A,cX+Y)-\sum_iq_iw_i$. We may
consider $S$ as a subset of $J_m(A)$ contained in ${\rm Cont}^{\geq
(m+1)}(X)$, and applying Theorem~\ref{new_interpretation} for the pair $(A,cX+Y)$
we see that
$$\dim(S)\leq (m+1)\dim(A)-c(m+1)-\sum_iq_iw_i-{\rm
mld}(W;A,cX+Y).$$ This gives a contradiction.
We now prove the reverse inequality
$$\tau:={\rm mld}(W; X,\frac{1}{r}V(J_r)+Y\vert_X) \leq
{\rm mld}(W; A,cX+Y).$$ If this does not hold, then we apply
Theorem~\ref{thm_EMY} (see also Remark~\ref{remark_thm_EMY}) to find
$w\in \NN^s$ and $d\in\NN$ such that for some irreducible component
$C$ of ${\rm Cont}^{\geq w}(Y)\cap {\rm Cont}^{\geq d}(X)$ we have
$\codim(C)<cd+\sum_iq_iw_i+\tau$. It follows from
Lemma~\ref{lem1_inv_adj} that $C\cap J_{\infty}(X)\not\subseteq
J_{\infty}(X_{\rm sing})$. Let $e$ be the smallest order of
vanishing along ${\rm Jac}_X$ of an arc in $C\cap J_{\infty}(X)$.
Fix such an arc $\gamma_0$.
Consider the closed subscheme $M\subset A$ whose ideal $I_M$ is
generated by $c$ general linear combinations of the generators of
the ideal $I_X$ of $X$. Therefore $M$ is a complete intersection and
$\ord_{\gamma_0}({\rm Jac}_M)=e$. By Corollary~\ref{cor1_appendix}
in the Appendix, we have ${\rm Jac}_M\cdot\OO_X\subseteq\left( (I_M\colon
I_X)+I_X\right)/I_X$. It follows that $\gamma_0$ lies in the cylinder
$$C_0:=C\cap {\rm Cont}^{\leq e}({\rm Jac}_M)\cap{\rm Cont}^{\leq e}(I_M\colon I_X).$$
Note that $C_0$ is a nonempty open subcylinder of $C$, hence
$\codim(C)=\codim(C_0)$. On the other hand, Lemma~\ref{lem2_inv_adj}
gives
$$\codim(C_0\cap J_{\infty}(M), J_{\infty}(M))\leq
\codim(C_0)+e-cd.$$
If $\gamma\in J_{\infty}(M)$, then $\ord_{\gamma}({\rm Jac}_X)\leq
\ord_{\gamma}({\rm Jac}_M)$. If $\gamma$ lies also in $C_0$, then
$\gamma$ cannot lie in the space of arcs of any other irreducible
component of $M$ but $X$ (we use the fact that $\gamma$ has finite
order along $(I_M\colon I_X)$, and the support of the scheme defined
by $(I_M\colon I_X)$ is the union of the irreducible components of
$M$ different from $X$). Therefore $C_0\cap J_{\infty}(M)= C_0\cap
J_{\infty}(X)$, and for every arc $\gamma$ in this intersection we
have $\ord_{\gamma}({\rm Jac}_X)=\ord_{\gamma}({\rm Jac}_M)=e$. We
deduce that
$$\codim(C_0\cap J_{\infty}(X),J_{\infty}(X))=\codim(C_0\cap
J_{\infty}(M),J_{\infty}(M))<\sum_iq_iw_i+\tau+e.$$
Since $C_0\cap J_{\infty}(X)=\cup_{e'=0}^{re}\left(C_0\cap
J_{\infty}(X)\cap {\rm Cont}^{e'}(J_r)\right)$, it follows that
there is $e'$ such that $\codim(C_0\cap J_{\infty}(X)\cap {\rm
Cont}^{e'}(J_r))<\sum_iq_iw_i+\tau+e$. On the other hand, this
cylinder is contained in ${\rm Cont}^{re-e'}(Z_r)$. We deduce from
Theorem~\ref{thm_EMY} (see also Remark~\ref{remark_thm_EMY}) that
${\rm mld}(W; X, \frac{1}{r}V(J_r)+Y\vert_X)<\tau$, a contradiction.
This completes the proof of the theorem.
\end{proof}
\begin{remark}
It follows from the above proof that even if the coefficients of $Y$
are negative, we still have the inequality
$${\rm mld}\left(W;X,\frac{1}{r}V(J_r)+Y\vert_X\right)\geq{\rm mld}(W; A,cX+Y)$$
(it is enough to use the description of minimal log discrepancies
mentioned in Remark~\ref{remark_new_interpretation}).
\end{remark}
\section{Appendix}
\subsection{Dimension of constructible subsets}
We recall here a few basic facts about the dimension of
constructible subsets. Let $X$ be a scheme of finite type over $k$,
and $W\subseteq X$ a constructible subset, with the induced Zariski
topology from $X$. If $A$ is a closed subset of $W$, we have
$\overline{A}\cap W=A$. Since $X$ is a Noetherian topological space
of bounded dimension, it follows that so is $W$.
Note that we have $\dim(W)=\dim(\overline{W})$. Indeed, the
inequality $\dim(W)\leq\dim(\overline{W})$ follows as above, while
the reverse inequality is a consequence of the fact that $W$
contains a subset $U$ that is open and dense in $\overline{W}$. We
see that if $W=T_1\cup\ldots\cup T_r$, where all $T_i$ are locally
closed (or more generally, constructible) in $X$, then
$\dim(W)=\max_i\{\dim(T_i)\}$.
Since $W$ is Noetherian, we have a unique decomposition
$W=W_1\cup\ldots \cup W_s$ into irreducible components. If $\dim(W)=n$
and if we have a decomposition $W=T_1\sqcup\ldots\sqcup T_r$ into disjoint
constructible subsets of $X$, then every irreducible component $A$
of some $T_i$ with $\dim(A)=n$ gives an irreducible component of
$W$ of dimension $n$, namely $\overline{A}\cap W$. Moreover, every
$n$--dimensional irreducible component of $W$ comes from a
unique $T_i$ and a unique such irreducible component of $T_i$.
If $f\colon X'\to X$ is a morphism of schemes that induces a
bijection between the constructible subsets $V'\subseteq X'$ and
$V\subseteq X$, then $\dim(V)=\dim(V')$ and $T\mapsto V\cap
\overline{f(T)}$ gives a bijection between the irreducible
components of maximal dimension of $V'$ and those of $V$. It follows
that if we have
a morphism of schemes $g\colon X'\to X$
and constructible subsets $V'\subseteq X'$ and $V\subseteq X$ such
that we get a weakly piecewise trivial fibration $V'\to V$ with
fiber $F$, then $\dim(V)=\dim(V')-\dim(F)$. Moreover, if $F$ is
irreducible, then we have a bijection between the irreducible
components of maximal dimension of $V'$ and those of $V$.
\subsection{Differentials and the canonical sheaf}
We start by reviewing the definition and some basic properties of
the canonical sheaf. The standard reference for this is
\cite{hartshorne}. To every pure-dimensional scheme $X$ over $k$ one
associates a coherent sheaf $\omega_X$ with the following
properties:
\begin{enumerate}
\item[i)] If $X$ is nonsingular of dimension $n$, then
there is a canonical isomorphism $\omega_X\simeq\Omega_X^n$.
\item[ii)] The definition is local: if $U$ is an open subset of $X$,
then there is a canonical isomorphism
$\omega_U\simeq\omega_X\vert_U$.
\item[iii)] If $X\hookrightarrow M$ is a closed subscheme of codimension $c$,
where $M$ is a pure-dimensional Cohen-Macaulay scheme, then there is
a canonical isomorphism
$$\omega_X\simeq{\mathcal Ext}^c_{\OO_M}(\OO_X,\omega_M).$$
\item[iv)] If $f\colon X\to M$ is a finite surjective morphism
of equidimensional schemes, then
$$f_*\omega_X\simeq {\mathcal Hom}_{\OO_M}(f_*\OO_X,\omega_M).$$
\item[v)] If $X$ is normal of dimension $n\geq 2$, then ${\rm depth}(\omega_X)\geq 2$.
Therefore there is a canonical isomorphism
$$\omega_X\simeq i_*\Omega_{X_{\rm reg}}^n,$$
where $i\colon X_{\rm reg}\hookrightarrow X$ is the inclusion of the
nonsingular locus of $X$.
\item[vi)] If $X$ is Gorenstein, then $\omega_X$ is locally free of
rank one.
\end{enumerate}
Note that $\omega_X$ is uniquely determined by properties i), ii)
and iii) above. Indeed, by ii) it is enough to describe
$\omega_{U_i}$ for the elements of an affine open cover $U_i$ of
$X$. On the other hand, if we embed $U_i$ as a closed subscheme of
codimension $c$ of an affine space $\AAA^{N}$, then we have
$$\omega_{U_i}\simeq {\mathcal
Ext}^c_{\OO_{\AAA^N}}(\OO_{U_i},\Omega_{\AAA^N}^N).$$ Note also that if
$Z$ is an irreducible component of $X$ that is generically reduced,
then by i) and ii) we see that the stalk of $\omega_X$ at the
generic point of $Z$ is $\Omega_{K/k}^n$, where $n=\dim(X)$ and $K$
is the residue field at the generic point of $Z$.
\bigskip
Suppose now that we are in the following setting. $X$ is a reduced
scheme of pure dimension $n$,
and we have a closed embedding
$X\hookrightarrow A$, where $A$ is nonsingular of dimension $N$ and
has global algebraic coordinates $x_1,\ldots,x_N\in\Gamma(\OO_A)$
(that is, $dx_1,\ldots,dx_N$ trivialize $\Omega_A$). We assume that
the ideal $I_X$ of $X$ in $A$ is generated by
$f_1,\ldots,f_d\in\Gamma(\OO_A)$.
For example, if $X$ is affine we may consider a closed embedding in
an affine space.
Let $c=N-n$. As in \S 4, for $1\leq i\leq d$ we take
$F_i:=\sum_{j=1}^d a_{i,j}f_j$, where the $a_{i,j}$ are general
elements in $k$. If $M$ is the closed subscheme defined by
$I_M=(F_1,\ldots,F_c)$, then we have the following properties.
\begin{enumerate}
\item[1)] All irreducible components of $M$ have dimension $n$,
hence $M$ is a complete intersection.
\item[2)] $X$ is a closed subscheme of $M$ and $X=M$ at the generic
point of every irreducible component of $X$.
\item[3)] Some minor $\Delta$ of the Jacobian matrix of
$F_1,\ldots,F_c$ with respect to the coordinates $x_1,\ldots,x_N$
(let's say $\Delta={\rm det}(\partial F_i/\partial x_j)_{i,j\leq
c}$) does not vanish at the generic point of any irreducible
component of $X$.
\end{enumerate}
Moreover, any $c$ of the $F_1,\ldots,F_d$ satisfy similar properties.
Let us fix now $F_1,\ldots,F_c$ as above, generating the ideal
$I_M$. We also consider the residue scheme $X'$ of $X$ in $M$
defined by the ideal $(I_M\colon I_X)$. Note that $X'$ is supported
on the union of the irreducible components of $M$ that are not
contained in $X$. The intersection of $X$ and $X'$ is cut out in $X$
by the ideal $((I_M\colon I_X)+I_X)/I_X$.
Let $K$ denote the fraction field of $X$, i.e. $K$ is the product of
the residue fields of the generic points of the irreducible
components of $X$. We have a localization map
$\Omega_X^n\to\Omega_{K/k}^n$ given by taking a section of
$\Omega_X^n$ to its images in the corresponding stalks. By our
assumption $\Delta$ is an invertible element in $K$, and
$\Omega_{K/k}^n$ is freely generated over $K$ by
$dx_{c+1}\wedge\ldots\wedge dx_N$.
\begin{proposition}\label{prop1_appendix}
With the above notation, there are canonical morphisms
$$\Omega_X^n\overset{\eta}\to\omega_X\overset{u}\to\omega_M\vert_X
\overset{w}\to\Omega_{K/k}^n$$ with the following properties:
\begin{enumerate}
\item[a)] If $X$ is normal, then $\eta$
is given by the canonical isomorphism $\omega_X\simeq
i_*\Omega_{X_{\rm reg}}^n$, where $i\colon X_{\rm
reg}\hookrightarrow X$ is the inclusion of the nonsingular locus of
$X$.
\item[b)] $w$ is injective and identifies $\omega_M\vert_X$
with $\OO_X\cdot \Delta^{-1}dx_{c+1}\wedge\ldots\wedge dx_N$.
\item[c)] $u$ is injective and the image of $w\circ u$ is
$((I_M\colon I_X)+I_X)/I_X\cdot\Delta^{-1}dx_{c+1}\wedge\ldots\wedge
dx_N$.
\item[d)] The composition $w\circ u\circ\eta$ is the localization
map. Its image is ${\rm Jac}(F_1,\ldots,F_c)\cdot\Delta^{-1}
dx_{c+1}\wedge\ldots\wedge dx_N$, where ${\rm Jac}(F_1,\ldots,F_c)$
denotes the ideal generated in $\OO_X$ by the $c$--minors of the
Jacobian matrix of $F_1,\ldots,F_c$.
\end{enumerate}
\end{proposition}
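To make the statement concrete, here is a small worked instance (our own illustration, with our own choice of curve, not part of the original text), in the simplest hypersurface case, where $M=X$ and $(I_M\colon I_X)=\OO_A$:

```latex
% Added illustration (assumes char(k) different from 2 and 3).
Let $X=V(f)\subset A=\AAA^2$ with coordinates $x_1,x_2$ and $f=x_2^2-x_1^3$,
so that $n=c=1$ and we may take $F_1=f$, hence $M=X$.
Take $\Delta=\partial F_1/\partial x_1=-3x_1^2$, which does not vanish at the
generic point of $X$. By b) and c), $u$ and $w$ identify $\omega_X$ with
$\OO_X\cdot\Delta^{-1}dx_2$, and by d) the image of $\eta$ is
$${\rm Jac}(f)\cdot\Delta^{-1}dx_2=(3x_1^2,\,2x_2)\cdot\frac{dx_2}{-3x_1^2}.$$
Indeed, on $X$ we have $2x_2\,dx_2=3x_1^2\,dx_1$, so
$\eta(dx_1)=\pm(2x_2/\Delta)\,dx_2$ and $\eta(dx_2)=\pm(3x_1^2/\Delta)\,dx_2$:
the generator $\Delta^{-1}dx_2$ of $\omega_X$ is regular on the smooth locus,
while $dx_1,dx_2$ only generate the subsheaf ${\rm Jac}_X\cdot\omega_X$,
cutting out the cusp at the origin.
```

Here $u$ is an isomorphism since $X=M$; for $X$ that is not a complete intersection the correction factor $((I_M\colon I_X)+I_X)/I_X$ in c) becomes nontrivial.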
\begin{corollary}\label{cor1_appendix}
With the above notation, we have the following inclusion
$${\rm Jac}(F_1,\ldots,F_c)\cdot{\mathcal O}_X\subseteq ((I_M\colon I_X)+I_X)/I_X.$$
\end{corollary}
\begin{corollary}\label{cor2_appendix}
Suppose that $X$ is a normal affine $n$--dimensional Gorenstein variety. If
$Z$ is the first Nash subscheme of $X$, that is, $I_Z\otimes\omega_X$ is
the image of the canonical map $\eta\colon\Omega_X^n\to\omega_X$,
then there is an ideal $J$ such that
$${\rm Jac}_X=I_Z\cdot J.$$
\end{corollary}
\begin{proof}
We choose a closed embedding $X\hookrightarrow A=\AAA^N$, and
let $F_1,\ldots,F_d$ be as above. For every $L= (i_1,\ldots,i_c)$,
with $1\leq i_1<\cdots<i_c\leq d$, let $I_L$ denote the ideal
generated by $F_{i_1},\ldots,F_{i_c}$. It follows from
Proposition~\ref{prop1_appendix} that
$${\rm Jac}(F_{i_1},\ldots,F_{i_c})\cdot {\mathcal O}_X=I_Z\cdot ((I_L\colon
I_X)+I_X)/I_X.$$ If we take $J=\sum_L((I_L\colon I_X)+I_X)/I_X$,
this ideal satisfies the condition in the corollary.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop1_appendix}]
Since $X$ is reduced, we may consider its normalization $f\colon
\widetilde{X}\to X$. On $\widetilde{X}$ we have a canonical morphism
$\widetilde{\eta}\colon\Omega^n_{\widetilde{X}}\to\omega_{\widetilde{X}}$.
On the other hand, since $f$ is finite and surjective we have an
isomorphism $f_*\omega_{\widetilde{X}}\simeq {\mathcal Hom}_{\OO_X}
(f_*\OO_{\widetilde{X}},\omega_X)$, and the inclusion
$\OO_X\hookrightarrow f_*\OO_{\widetilde{X}}$ induces a morphism
$f_*\omega_{\widetilde{X}}\to \omega_X$.
The morphism
$\eta$ is the composition
$$\Omega_X^n\to f_*\Omega_{\widetilde{X}}^n\overset{f_*\widetilde{\eta}}\to
f_*\omega_{\widetilde{X}}\to\omega_X,$$ where the first arrow is
induced by pulling-back differential forms. The construction is
compatible with the restriction to an open subset. In particular,
the composition
$$\Omega_X^n\to\omega_X\to\Omega^n_{K/k}$$
of $\eta$ with the morphism going to the stalks at the generic
points of the irreducible components of $X$ is the localization
morphism corresponding to $\Omega_X^n$.
Note that $\omega_M\simeq{\mathcal
Ext}^c_{\OO_A}(\OO_M,\Omega^N_A)$. Since $F_1,\ldots,F_c$ form a
regular sequence, we can compute $\omega_M$ using the Koszul complex
associated to the $F_i$'s to get
$$\omega_M\simeq {\mathcal
Hom}_{\OO_M}\left(\bigwedge^c(I_M/I_M^2),\Omega_A^N\vert_M\right).$$ This is a
free $\OO_M$-module generated by the morphism $\phi$ that takes
$\overline{F_1}\wedge\ldots\wedge\overline{F_c}$ to
$dx_1\wedge\ldots\wedge dx_N\vert_M$.
Since $X$ is a closed subscheme of $M$ of the same dimension, and
since $M$ is Cohen-Macaulay, it follows that $\omega_X\simeq
{\mathcal Hom}_{\OO_M}(\OO_X,\omega_M)$. In particular, we have
$\omega_X\subseteq\omega_M$. Moreover,
$$\omega_X\otimes\omega_M^{-1}
\simeq {\mathcal Hom}_{\OO_M}(\OO_X,\OO_M)=(I_M\colon I_X)/I_M.$$
Let $u$ be the composition $\omega_X\hookrightarrow\omega_M\to
\omega_M\vert_X$. Since $M=X$ at the generic point of each
irreducible component of $X$, $u$ is generically an isomorphism. On
the other hand, $M$ is Cohen-Macaulay and $\omega_X$ is contained in
the free $\OO_M$-module $\omega_M$, hence $\omega_X$ has no embedded
associated primes. Therefore $u$ is injective.
Using again the fact that $u$ is an isomorphism at the generic
points of the irreducible components of $X$ we get a localization
morphism $w\colon\omega_M\vert_X\to\Omega_{K/k}^n$, and we see that
the composition $w\circ u\circ\eta$ is the localization map for
$\Omega_X^n$ at the generic points.
By construction, $w$ takes the image of $\phi$ in $\omega_M\vert_X$
to $\Delta^{-1}dx_{c+1}\wedge\ldots\wedge dx_N$. It follows from
our previous discussion that the image of $\omega_X$ in
$\omega_M\vert_X$ is $((I_M\colon
I_X)+I_X)/I_X\cdot\omega_M\vert_X$, from which we get the image of
$w\circ u$. The last assertion in d) follows from the fact that if
$1\leq i_1<\cdots<i_n\leq N$ and if $D$ is the $c$--minor of the
Jacobian of $F_1,\ldots,F_c$ corresponding to the variables
different from $x_{i_1},\ldots,x_{i_n}$, then
$$(w\circ u\circ\eta)(dx_{i_1}\wedge\ldots\wedge
dx_{i_n})=\pm \frac{D}{\Delta}dx_{c+1}\wedge\ldots\wedge dx_N.$$
This completes the proof of the proposition.
\end{proof}
\smallskip
Suppose now that $X$ is an affine $\QQ$--Gorenstein normal variety.
Our goal is to generalize Corollary~\ref{cor2_appendix} to this
setting. Let $K_X$ be a Weil divisor on $X$ such that
$\OO(K_X)\simeq\omega_X$ and let
us fix a positive integer $r$ such that $r K_X$ is Cartier. Note
that we have a canonical morphism $p_{r}\colon\omega_X^{\otimes r}
\to\OO(r K_X)$.
We use the notation in Proposition~\ref{prop1_appendix}. Let
$\eta_{r} \colon(\Omega_X^n)^{\otimes r}\to \OO(r K_X)$ be the
composition of $\eta^{\otimes r}$ with $p_{r}$. Equivalently, if $i$
denotes the inclusion of $X_{\rm reg}$ into $X$, then $\eta_{r}$ is
identified with the canonical map $(\Omega_X^n)^{\otimes r}\to
i_*((\Omega_{X_{\rm reg}}^n)^{\otimes r})$. The image of $\eta_r$ is
by definition $I_{Z_r}\otimes\OO(r K_X)$, where $Z_{r}$ is the $r^{\rm th}$ Nash subscheme of
$X$.
Since $\omega_M^{\otimes r}\vert_X$ is locally-free, the morphism
$u^{\otimes r}$ induces
$$u_r=i_*(u^{\otimes r}\vert_{X_{\rm reg}})
\colon \OO(rK_X)\to \omega_M^{\otimes r}\vert_X.$$
This is injective, since this is the case if we restrict
to the nonsingular locus of $X$. If we put $w_r:=w^{\otimes r}$,
then it follows from Proposition~\ref{prop1_appendix} that
\begin{enumerate}
\item[i)] $w_r$ is injective and its image is
$\OO_X\cdot\Delta^{-r}(dx_{c+1}\wedge\ldots\wedge dx_N)^{\otimes
r}$.
\item[ii)] The composition $w_r\circ u_r\circ\eta_r$ is the
localization map. Moreover, its image is equal to ${\rm
Jac}(F_1,\ldots,F_c)^r\cdot\Delta^{-r}(dx_{c+1}\wedge\ldots\wedge
dx_N)^{\otimes r}$.
\end{enumerate}
We now generalize Corollary~\ref{cor2_appendix} to the case when $X$
is $\QQ$--Gorenstein. Let $\overline{\fra}$ denote the integral
closure of an ideal $\fra$. We define the \emph{non-lci subscheme of level $r$} to be
the subscheme of $X$ defined by the ideal
$J_r=(\overline{{\rm Jac}_X^r}\colon I_{Z_r})$ (see Remark~\ref{char_lci} below
for a justification of the name).
\begin{corollary}\label{cor3_appendix}
Let $X$ be a normal $\QQ$--Gorenstein $n$--dimensional variety and let
$r$ be a positive integer such that $rK_X$ is Cartier. If $Z_r$ is
the $r^{\rm th}$ Nash subscheme of $X$, and if $J_r$ defines the non-lci subscheme of level $r$, then
the ideals ${\rm
Jac}_X^r$ and $I_{Z_r}\cdot J_r$ have the same integral closure.
\end{corollary}
\begin{proof}
It is enough to prove the assertion when $X$ is affine, hence we may
assume that we have a closed embedding $X\subset A$ of codimension
$c$, and general elements $F_1,\ldots,F_d$ that generate the ideal
of $X$ in $A$, as above. It is enough to show that
for every $L= (i_1,\ldots,i_c)$ with $1\leq i_1<\cdots<i_c\leq d$
we can find an ideal $\frb_L$ such that
\begin{equation}\label{eq_cor33}
I_{Z_r}\cdot \frb_L=
{\rm Jac}(F_{i_1},\ldots,F_{i_c})^r.
\end{equation}
Indeed, in this case if we put $\frb:=\sum_L\frb_L$, then
${\rm Jac}_X^r$ and $I_{Z_r}\cdot\frb$ have the same integral closure.
In particular, we have $\frb\subseteq J_r$, and we see that the inclusions
$$I_{Z_r}\cdot \frb\subseteq I_{Z_r}\cdot J_r\subseteq
\overline{{\rm Jac}_X^r}$$
become equalities after passing to integral closure. Note also that $\frb$ and $J_r$ have the
same integral closure.
In order to find $\frb_L$, we may assume without any loss of generality that
$L=(1,\ldots,c)$. With the above notation, consider the factorization
of the localization map $(\Omega_X^n)^{\otimes r}\to(\Omega_{K/k}^n)^{\otimes r}$
as $w_r\circ u_r\circ\eta_r$.
If $\frb_L$ is the ideal of $\OO_X$ such that the image of $u_r$ is $\frb_L\otimes\omega_M^{\otimes r}\vert_X$,
then (\ref{eq_cor33}) follows from the discussion preceding the statement of the corollary.
\end{proof}
\begin{remark}
Since
$I_{Z_{rs}}=I_{Z_r}^s$ for every $s\geq 1$, it follows that $(J_r)^s\subseteq J_{rs}$,
and we deduce from the corollary that these two ideals have the same
integral closure.
\end{remark}
\begin{remark}\label{char_lci}
Under the assumptions in Corollary~\ref{cor3_appendix}, the support
of the non-lci subscheme of level $r$
is the set of points $x\in X$ such
that $\OO_{X,x}$ is not locally complete intersection. Indeed, if
$\OO_{X,x}$ is locally complete intersection, then after replacing
$X$ by an open neighborhood of $x$, we may assume that $X$ is
defined in some $\AAA^N$ by a regular sequence. In this case
$I_{Z_1}={\rm Jac}_X$, and we deduce that
$J_r=\OO_X$. Conversely, suppose that
$\OO_{X,x}$ is not locally complete intersection,
and after restricting to an affine neighborhood of $x$, assume that
we have a closed embedding $X\subset A$ as in our general setting.
Note that, by assumption, for every
complete intersection $M$ in $A$ that contains $X$, the ideal
$((I_M\colon I_X)+I_X)/I_X$ is contained in the ideal $\frmm_x$ defining $x\in X$. On the other hand, following the notation in the proof of
Corollary~\ref{cor3_appendix}
we see that given
$L= (i_1,\ldots,i_c)$ with $1\leq i_1<\cdots<i_c\leq d$, and $I_M=(F_{i_1},\ldots, F_{i_c})$
we have
$\frb_L\subseteq ((I_M\colon I_X)^r+I_X)/I_X\subseteq\frmm_x$. Therefore
$\frb\subseteq\frmm_x$, and since $J_r$ and $\frb$ have the same support
(they even have the same integral closure), we conclude that $x$ lies in the support of $J_r$.
\end{remark}
\bibliographystyle{amsalpha} | 181,811 |
TITLE: Reference request : table of quantum Clebsch-Gordan coefficient
QUESTION [5 upvotes]: From a quick Google search, one can find tables of the first Clebsch-Gordan coefficients. For example this table. These are used to pass between the tensor-product basis and the basis adapted to the decomposition into irreducibles for the tensor product of two irreducible representations of $U(\frak{sl_2})$.
I was wondering if there is such a table somewhere for the quantum Clebsch-Gordan coefficients, that is, those occurring when you replace $U(\frak{sl_2})$ with $U_q(\frak{sl_2})$.
REPLY [5 votes]: For rank-two quantum groups the Clebsch-Gordan coefficients are tabulated in arXiv:1004.5456 by Ardonne and Slingerland. This also includes Mathematica notebooks to perform these and similar calculations.
More tables are in Hegde & Ramadevi. | 107,008 |
TITLE: Invertiblity of ST and TS for linear maps
QUESTION [0 upvotes]: Could somebody explain why for any given linear maps S and T over a finite dimensional vector space V, ST is invertible if and only if TS is? Why is it so important that V is finite dimensional for this to hold true?
Edit: I am looking for an approach without using matrices or determinants.
REPLY [0 votes]: If $T : V \rightarrow V$ is a linear map on a finite dimensional vector space $V$. Then $T$ is injective iff $\text{dim ker }T = 0$. And $T$ is surjective iff $\text{dim Im }T = \text{dim }V$ (this fails if $V$ has infinite dimension). Furthermore we have the equation $\dim V = \dim \text{Im }T + \dim \text{ker }T$. Hence $T$ is surjective iff $ \dim \text{Im }T = \dim V $ iff $\dim \text{ker }T = 0$ iff $T$ is injective. So for a finite dimensional vector space we have:
$$T \text{ injective} \iff T \text{ surjective}$$
If $ST$ is invertible then $ST$ must be injective and surjective (I think of $ST$ as the composition of first applying $S$ and then applying $T$). This requires $T$ to be surjective and $S$ to be injective. Hence $T$ is also injective and $S$ is also surjective. So both $S$ and $T$ are invertible. Hence so is $TS$. | 133,740 |
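To see concretely why finite-dimensionality is essential, here is the classic shift-operator counterexample in an infinite-dimensional space, sketched in Python (an added illustration, not part of the original answer; sequences are modeled as lists with implicit trailing zeros):

```python
# Finitely supported sequences modeled as Python lists (zeros implied after the end).

def right_shift(a):   # S: (a0, a1, ...) -> (0, a0, a1, ...)
    return [0] + a

def left_shift(a):    # T: (a0, a1, ...) -> (a1, a2, ...)
    return a[1:]

a = [1, 2, 3]
print(left_shift(right_shift(a)))   # [1, 2, 3] : first S, then T is the identity
print(right_shift(left_shift(a)))   # [0, 2, 3] : first T, then S kills the first coordinate
```

With the answer's convention (ST means "first apply S, then T"), take S to be the right shift and T the left shift: ST is the identity, hence invertible, but TS is not injective (it sends $e_0$ to $0$), so it cannot be invertible. In finite dimensions rank–nullity rules this situation out.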
No Comparison
Stephanie Cutter.
No Affordable Choices
The Republican Medicare plan makes health coverage less affordable for seniors. In the first year it goes into effect, a typical 65-year-old who becomes eligible for Medicare would pay an extra $6,400 for health care, more than doubling what he or she would pay if the plan were not adopted. And the Republican plan would replace extra coverage for low-income enrollees with a capped, insufficient medical savings account.
In sharp contrast, the Affordable Care Act lowers costs for people in Medicare by improving its performance and squeezing out waste, fraud and abuse. The law also provides free preventive care and cheaper prescription drugs for people in Medicare. As a result, we estimate that a typical senior could save $3,500 over the next decade as a result of the Affordable Care Act.
Less Transparency
The Affordable Care Act will help make the health care system more open, more transparent and easier to understand.
The Republican plan takes us in the opposite direction.
Silver Lining
The facts are clear: the Affordable Care Act and the Republican plan to end Medicare as we know it are very different. It’s heartening to see Republicans aspire to produce a plan that resembles the historic reforms President Obama signed into law. But if they want a proposal that is similar to the Affordable Care Act, they’ll have to head back to the drawing board.
Stephanie Cutter is Assistant to the President and Deputy Senior Advisor.
TITLE: Find all zero divisors and all inverses in polynomial quotient ring
QUESTION [1 upvotes]: I'm trying to study algebra at IUM and I stuck on this question:
Let $\mathbb{F_4} := \mathbb{F_2}[\alpha]/(\alpha^2+\alpha+1)$ -- a field that consists of 4 elements $\{0,1,\alpha, \alpha+1\}$. Find all zero divisors and inverses in quotient ring $\mathbb{F_4}[x]/(x^2+[\alpha]x+1)$.
What I do: first, let's find the zero divisors. Let $a_1x+a_2$ and $b_1x+b_2 \in \mathbb{F_4}[x]/(x^2+[\alpha]x+1)$ be such that $(a_1x+a_2)(b_1x+b_2) \equiv 0$. Then:
$a_1b_1x^2+(a_1b_2+a_2b_1)x+a_2b_2 \equiv 0$
Since the ideal is generated by $x^2+[\alpha]x+1$, we have $x^2=[\alpha]x+1$ (recall $-1=1$ in characteristic 2). So:
$(a_1b_1[\alpha]+a_1b_2+a_2b_1)x+a_1b_1+a_2b_2=0$
And here I stuck. What should I do next? Solve this equation for x? Or make a system of linear equations and solve it? In both cases I don't see how I'll be able to find zero divisors. The same thing is for inverses.
REPLY [1 votes]: You want
$$a_1b_1[\alpha]+a_1b_2+a_2b_1=a_1b_1+a_2b_2=0$$
Assume without loss of generality that $a_1=b_1=1$. Then $a_2b_2=1$ as well, and
$$[\alpha]+b_2+a_2=0$$
We would then have to have $a_2+b_2=\alpha$, but this is incompatible with $a_2b_2=1$. Thus there are no zero divisors. You can see this by noting that $x^2+[\alpha]x+1$ is irreducible, which you can find out by testing for roots, of which there are none.
To find inverses, it would help first to enumerate the elements. It is sufficient to look first at the monic elements $x$, $x+1$, $x+[\alpha]$, $x+1+[\alpha]$. We have
$$x(x+[\alpha])=[\alpha]x+1+[\alpha]x=1$$
$$(x+1)(x+1+[\alpha])=[\alpha]x+1+x+[\alpha]x+x+1+[\alpha]=[\alpha]$$
We see that the inverse of $x$ is $x+[\alpha]$, and the inverse of $x+1+[\alpha]$ is $(1+[\alpha])(x+1)$ (we had to multiply by the inverse of $[\alpha]$ since $(x+1)(x+1+[\alpha])=[\alpha]$). The remaining elements have inverses that are scalar multiples of these (except of course the elements of $\mathbb{F}_4$, which have their usual inverses). | 193,761 |
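Since the quotient ring has only 16 elements, all of these claims can also be verified by brute force. Here is a short sketch (an added illustration with an ad-hoc encoding: an element of $\mathbb{F_4}$ is a bit pair $(c_0,c_1)$ meaning $c_0+c_1\alpha$, and a ring element is a pair of such pairs, constant term first):

```python
# Brute-force check over F4[x]/(x^2 + a*x + 1), where a^2 = a + 1 in F4.

def f4_add(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])

def f4_mul(u, v):
    # (u0 + u1*a)(v0 + v1*a), reduced with a^2 = a + 1 (all arithmetic mod 2)
    c0 = (u[0] & v[0]) ^ (u[1] & v[1])
    c1 = (u[0] & v[1]) ^ (u[1] & v[0]) ^ (u[1] & v[1])
    return (c0, c1)

ZERO, ONE, A = (0, 0), (1, 0), (0, 1)           # 0, 1, alpha in F4

def ring_mul(p, q):
    # (p0 + p1*x)(q0 + q1*x), reduced with x^2 = a*x + 1
    (p0, p1), (q0, q1) = p, q
    lo = f4_mul(p0, q0)
    mid = f4_add(f4_mul(p0, q1), f4_mul(p1, q0))
    hi = f4_mul(p1, q1)                          # coefficient of x^2
    return (f4_add(lo, hi), f4_add(mid, f4_mul(hi, A)))

F4 = [(i, j) for i in (0, 1) for j in (0, 1)]
R = [(u, v) for u in F4 for v in F4]             # all 16 ring elements
nonzero = [p for p in R if p != (ZERO, ZERO)]

zero_divisors = [p for p in nonzero for q in nonzero
                 if ring_mul(p, q) == (ZERO, ZERO)]
inverses = {p: q for p in nonzero for q in nonzero
            if ring_mul(p, q) == (ONE, ZERO)}

print(zero_divisors)                 # [] -- no zero divisors at all
print(len(inverses))                 # 15 -- every nonzero element is invertible
print(inverses[((0, 0), (1, 0))])    # ((0, 1), (1, 0)), i.e. the inverse of x is x + alpha
```

The empty list of zero divisors confirms that $x^2+[\alpha]x+1$ is irreducible over $\mathbb{F_4}$, so the quotient is the field with 16 elements, matching the hand computations above.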
Where's the Beef?
At least I have Something New to look forward to, whose well-advertised trailer is so utterly beguiling that I'm hoping it's bonafide, instead of another Hitch. Sanaa Lathan has such charismatic comic authority in this preview, the dialogue zings ("Are you sneaking off to the O.C.?"), and Simon Baker finally looks sexy, instead of looking like some Hollywood casting intern's idea of sexy.
But after that, there ain't jack. Sure, I'll see Freedomland, cuz of Julianne, even though I bet it's worse than a midwinter studio dump; if you've followed the tortuous process of its changing release schedules, it seems to have gone from a rush-job timed for awards consideration (dulllll) to a springtime postponement (uh-oh), then briefly back to the even more certain doom of the late-December shuffle (serious lack of studio confidence), and now back again to February, as though to avoid Worst of 2005 lists. Jesus H. Christ! But yes, I'll go, and I'll see the Pulse remake, too, because that's just the sort of thing I do. But seriously, that is it for about six weeks, barring the potential for a NYC jaunt to see this or this or this or this or this. But this doesn't count, cuz there's always stuff to see in Gotham City. But why can't there be something for the hometown? I've scratched Casanova, since whatever its chances with the Art Directors and Costume branches, I've seen that trailer a jillion times, maintaining my gaze despite every execrable wig, and I just can't find it in me to give a **** about that movie. (Fill in your own expletive; they all apply.)
For the next few weeks, then, it's just Sanaa, plus some return trips up the mountain and around the world.
5 Comments:
i have to say that it's with the same crashing boredom and malaise as yours, dear nick, that i look at this upcoming mid-winter crop of movies. i got my so-called 2006 preview from EW in the mail yesterday and none of those flicks are happening until at least march. and i swear if an unkempt and sobbing julianne moore utters, "i've had life inside me" during that unkempt toddler-hunting movie, i will go completely postal.
p.s. a second viewing of those cowboys proved to be worthwhile for me. although the "i wish i knew how to quit you" will never ever be regarded as anything other than hilarious, the rest of the movie seems to hold together.
By John Sage Melbourne
The Level One novice investor is likely to run into challenges as they undertake their personal wealth process.
An initial task is to become aware of the concept of "money and wealth". This involves the Level One investor forming a "philosophy of money" and a "psychology of wealth".
Level Zero: The Battlers (non-investors)
The starting point for learning how to create wealth through property investment is the stage of development we call the 'Novice Investor' level. However, before we explore that stage of development it is necessary to be aware of a level of existence that we have identified as below that of the Novice Investor. We call this "Level Zero", and it is made up of the kind of people more commonly known as "battlers".
Level Zero is more a 'level of existence' than a 'level of investor development', as this personality type does not invest for wealth creation, nor are they developing themselves to do so in the future. They are, in other words, "non-investors" engaged in "non-development" of their wealth-creating skills, knowledge and attitude. They do not even consider the possibility of investing to create wealth, as they are too busy "battling" away in life and with life. They do not think or believe that investing for wealth is a genuine option for them, as they are constantly struggling with the financial forces in their lives just to stay where they are. For them, making ends meet is a literal battle of attention and effort against relentless financial pressure and worries.
Their 'enemies' are the bills that attack them each month. The weapons they employ to defend themselves are hard work, longer hours, and sacrificing the quality of their life just to make ends meet.
Follow John Sage Melbourne for more expert property investment advice.
The three types of non-investor: the battlers
There are three types of battler, and it is important for you to be able to identify each type in order to avoid being influenced by their "non-wealth-creating" attitudes, beliefs and behaviours.
Each type of battler has their own pathology concerning wealth, money and investing. Each type has a limiting belief system that actually prevents them from acquiring wealth and rising above the financial challenges they create for themselves in their lives. In other words, their financial struggles are of their own making. It is therefore critically important for your own financial wellbeing to know how to identify each type of battler attitude and to avoid adopting any of their limiting beliefs and mindsets.
To learn more about investor types, visit John Sage Melbourne here.
As a long-time fan of the Pokemon series let me tell you, Pokemon Ultra Moon represents both the highs and lows of being a Poke-fan. The graphical leaps the series has made since Red and Blue in 1996 are impressive, but the 3DS’s limitations become ever more apparent as designers try to push beyond them. Graphically the game looks like a half-step between an N64 and Gamecube game, which sounds alright for a handheld series that began without the ability to display color. (The idea of playing a “black-and-white” game called Pokemon Blue seems a bit ridiculous now, doesn’t it?) The Pokemon are all beautifully rendered in three-dimensions. The environments look alright but somehow feel both confining and empty. You’re not wandering around a big open Pokemon world. You’re still sticking to paths and one-way ledges.
This again feels like a metaphor for the series. Two steps forward, one step back. I remember when Ruby & Sapphire introduced Pokemon Beauty Contests. That always struck me as unnecessarily creative for a series that has eighteen Pokemon types yet still makes you choose from fire, water, or grass types at the beginning every time. Some things are set in stone and some are up for grabs, and it’s never clear which is which or why. For example, the setting of Pokemon Ultra Moon is a surface level riff on Hawaii. This shakes things up on a cursory level, but also removes a diversity of environments from the level design. And in a move that feels like something a stoner might ponder on a pile of pillows at 3 AM- “What if there weren’t even gyms?”- this game does away with Pokemon Gyms in favor of new Island Trials. Ostensibly this was to make things less repetitive, but it ends up being a momentary distraction. The only thing that truly sticks here is the idea of Totem Pokemon, larger than usual boss Pokemon that your team has to take down. These boss battles felt worthy.
Inconsistency seems to be the general theme here. Even the name Pokemon Ultra Moon reveals the truth of the matter. There were two previous versions of this game, Pokemon Sun & Moon, that were considerably less good, and even so this new version isn’t perfect. So what you’re seeing in Ultra Moon is a course correction but not enough of one to really set things straight. It’s not uncommon for each generation of the Pokemon series to have two competing titles, both generally the same except for some version exclusive Pokemon, thus encouraging trading between versions. There is often a third version of the game that follows a year later, adding a few new features, exclusive Pokemon and a new mission or two. There have been variations on this formula, from the full-on story sequels of Black & White 2 to the remastered Alpha Sapphire which acted as a spiritual successor to the well-received X & Y. Ultra Moon is the first ‘third-version’ Pokemon game that I’ve played where it felt like a director’s cut. Plot events and situations are changed to make the story work slightly better. It still has its gaping obvious flaws, but now it feels a bit more polished.
If you were turned off by Sun & Moon their Ultra versions might smooth over some of your problems. Then again, there are plenty of reasons to scratch your head and say, “What?” at these versions too. At some point, the bottom half of your screen becomes dominated by tutorial tips from your talking Pokedex. This is the screen also used for your map and menu information. It is incredibly annoying to glance down at the map only to see the Pokedex rambling about a great place to take photos of your Pokemon. Oh, and by the way, you can only take photos of your Pokemon at the Photo Club, even though your Pokedex clearly has a camera on it for story reasons.
There’s lots of little barbs like this. Lillie and Hau, the game’s insufferable companions, have been toned down a bit but are still load-bearing. The game’s villains seem to be less brutal and are let off the hook a bit easier in this version. The one update the game didn’t get but sorely needed was a line-by-line revision of its dialogue. The awkward phrasing and weird non-sequiturs would be forgivable were any of it funny, but it constantly feels like the localization team is dancing around the idea of jokes without delivering any. The writing in this game makes the writing in Red & Blue sound like Shakespeare, and that includes the little kid who shouts about his love of shorts.
The biggest let down is how the game mishandles things that were done better in previous entries. The online system is worse than Alpha Sapphire‘s and while Super Training was dull, its replacement in Ultra Moon is a convoluted nightmare. Pokemon fans love to battle and some of them take it very seriously. The game acknowledges the existence of its adult fans. It just refuses to cater to them.
Deriders might say that I am putting too much stock in a children’s property, but Nintendo tentpoles like Mario and Zelda continue to deliver for all ages. Like the best Pixar and Ghibli movies there is a sweet spot where everyone can get a kick out of something, and for a while, Pokemon resided in this spot. To be frank, this sweet spot should be Nintendo’s wheelhouse.
I’ll refrain from my traditional rant in which I beg Nintendo to release a next-gen Pokemon MMO. “Do yourself a favor. It will be like printing money,” etc. Instead I will end this review with a reminder of the inconsistency of Ultra Moon, one that I think will speak to fans of the franchise or even ones that haven’t played it since the old Game Boy versions. What’s the catch phrase associated with Pokemon? “Gotta Catch ‘Em All!” That signature slogan is the battle cry of the trainer. You’re supposed to fill up your Pokedex with all the Pokemon you’ve caught, hoping to one day “catch ’em all.” In Pokemon Ultra Moon there is no national Pokedex, meaning there is no complete list of Pokemon in the game. You literally can’t catch ’em all. “Catch some, why worry? Mahalo for buying this game twice, ya dummies!”
2 thoughts on “Game Review: Pokemon Ultra Moon”
I think you identified the key flaw with the series, Matt. It just doesn't cater to the now-grown fans who are more scrutinizing of gameplay and story. I like consistency in some areas and appreciate taking chances with gameplay additions. However, it's this weird dichotomy of unmoving tradition (starters, linear story, etc.) vs. throwaway gimmicks (beauty contests, triple battles, etc.) that leaves me scratching my head. Black and White 2 gave players the option to increase AI difficulty for the first time in the series, but even then you still need to play through the whole game first. It begs for a bigger challenge, something that sucks the more advanced players in and provides a more fulfilling experience, a la MMO dungeons or raiding. Leave me celebrating beating the Elite Four like it was a real reward, not just "oh, neat." Focus on what everyone loves about the game (capturing, exploring, training, and battling), just take it up a few notches.
I totally agree, Lee. | 28,232 |
TP Vision to introduce wireless, multi-room Philips TV & Audio products based on DTS Play-Fi
New high-quality Philips wireless, multi-room products, including TVs, to be developed in partnership with DTS Play-Fi
OLED805 series announced as first Philips TVs – and the first set from a premium TV brand – to feature DTS Play-Fi
All 2020 Philips Android TVs to feature DTS Play-Fi
2019 Philips Android TV sets to be offered an over-the-air firmware update to feature DTS Play-Fi in the 2nd half of 2020
System will be extended to include soundbars and other wireless audio speakers in the 2nd half of 2020
Amsterdam, June 17, 2020 – TP Vision has announced that it will introduce new Philips TV & audio products that wirelessly connect and work together using DTS Play-Fi.
TP Vision recently announced that the new Philips OLED805 series would be the first model from a premium TV brand to include DTS Play-Fi within the set.
The company will now build on this first step towards a complete wireless connectivity platform by including DTS Play-Fi in all 2020 Philips Android TVs, followed by an over-the-air firmware update to bring DTS Play-Fi to all 2019 Philips Android TVs, due in the 2nd half of 2020.
TP Vision will also launch a new range of DTS Play-Fi equipped Philips soundbars and wireless speakers in September 2020 to offer consumers the option of a complete, easy-to-use, seamlessly connected Philips streaming audio system throughout their home.
About TP Vision
TP Vision Europe B.V. (‘TP Vision’) is registered in the Netherlands. We combine the strong Philips brand with our product development and design expertise, operational excellence, and the industry footprint of TPV. We believe in creating products that offer a superior audio and visual experience for consumers.
For more information about DTS Play-Fi, please visit. For more information about DTS, please visit or connect with DTS on Facebook, Twitter @DTS and Instagram @DTS.
3,279 ft
Distance
63 ft
Climb
-130 ft
Descent
00:04:15
Avg time
Wright of Way Details
Fredericton's first machine built flow-trail!
Local Trail AssociationRiver:15
- view trail stats
update trails status or condition
Wright of Way Trail Reports
view all reports »
Photos
no photos have been added for Wright of Way yet, add a photo.
Routes with this trail
Reviews / Comments
No reviews yet, be the first to write a review or ask a question.
Use trail reports to comment on trail conditions.
Videos
no videos have been added for Wright of Way yet, add a video.
Nearby Trails
- best bitter 2,812 ft
- buzzkill 2.0 miles
- church entrance 2.1 miles
- water tower loop 2.4 miles
- connector - k-line trails 2.6 miles
- embed Wright of Way trail on your website
- Updated on Wed 2015-12-30 @ 9:59am
- Submitted on Tue 2015-07-07 @ 11:26am
- By ChrisNorfolk SJC RVC & contributors
- #32025 - 588 views
We could call it the “Hayseed Me.”
It crops up now and then during election seasons. When you hear it, you know you are being talked down to. Some politician wants to get down on our level, and throws in the Hayseed Me.
(In the news biz, we have been using the “Editorial We” for generations, referring to ourselves as “we” when we write editorials. We think this, we think that. And it sometimes leads to the wonderful response: “Who is this ‘we’ you’re writing about, Mr. Editor? You got a mouse in your pocket?”)
The Hayseed Me appeared a few weeks ago, when Sen. Elizabeth Warren made a video in her kitchen about her intention to run for president. The former Harvard professor looked like “one of us,” standing in her kitchen, explaining, in short words that even we can understand, why the ding-dong heck she wants to be president.
In the video, gesticulating Elizabeth - talking to us like next-door-neighbor Liz - begins by saying this:
“I’m going to go get me a beer.”
And she does. She goes and gets her a beer. And she asks her husband if he would like him a beer, too. But he says he doesn’t want him a beer.
And there it is. The Hayseed Me. My guess is that Harvard professors don’t talk this way. I doubt your average Harvard professor says, “I’m going to go get ME a stack of papers to grade.” You don’t have to be in the Harvard English Department to know that the “me” in “I’m going to go get me a beer” is wrong, and you shouldn’t use it in your term paper.
Except, maybe, when you’re talking to hicks. Rubes. Stump jumpers from out in the sticks. You know, folks whose votes count even though they’re stupid, and say things like “I’m going to go get ME a beer.”
Years ago, when another Massachusetts politician was running for president, the Hayseed Me cropped up at a store somewhere out in Flyover Country. The awkward John Kerry – who was more at home wind surfing – was videoed saying this:
“Is this where I can get me a hunting license?”
And the clerk at the store was polite enough not to reply, “Yes, this is where you can get you a hunting license, you wind-surfing twit.”
Elizabeth and John figured 10-tooth Flyover Country rubes like us can relate to politicians who go get them a beer or hunting license (no doubt a possum hunting license). Sure makes me want to cast me a vote for Elizabeth or John. How ‘bout you?
(One time, the late, great columnist Mike Royko was asked why he didn’t speak to groups in “downstate” Illinois. He wrote that downstate was “full of rubes and stump jumpers.” At least Royko was funny about his contempt.)
To her credit, wild-eyed Alexandria Ocasio-Cortez did not use the Hayseed Me in her kitchen video that appeared last week. In advising young Americans not to propagate the species, she did not say, “I’m not going to have me a baby,” and we can all be thankful for that (not inserting the extra “me,” not not having a baby).
Where, you have to wonder, do these people come from? Has it ever occurred to them that the baby they decide not to have might be the next Jonas Salk? The next Steve Jobs? The next William Shakespeare?
TV host and comedian Bill Maher makes no secret of his contempt for us, arrogantly saying that rubes like us don’t hate our bi-coastal betters – we are envious of them. Bill says Wyoming people like me shop at Target - no Bill, The Wife says Target is too expensive - and we eat food from Chef Boyardee, not Chef Wolfgang Puck. In other words, we’re hopeless rubes.
One has to wonder if Bill has ever seen the Tetons, ever gotten to work in five minutes, ever scored a parking space right in front of the post office, ever smelled sagebrush after a summer rain.
Clueless, condescending Elizabeth and John are less contemptuous than Bill, but it all comes out of the same “I’m better than you” can.
Contact Dave Simpson at [email protected].
\documentclass{article}
\usepackage{amsthm}
\usepackage{amsfonts}
\usepackage{amsmath}
\newtheorem{definition}{Definition}
\newtheorem{proposition}{Proposition}
\newtheorem{theorem}{Theorem}
\newtheorem{corollary}{Corollary}
\newtheorem{lemma}{Lemma}
\newtheorem{remark}{Remark}
\def\address{Kyiv Taras Shevchenko University,
cybernetics department,Volodymyrska, 64, Kyiv, 01033, Ukraine\\
Institute of Mathematics National Academy of Sci., Tereschenkivska,
3, Kyiv, 01601, Ukraine}
\def\email{\\[email protected]\\ Yurii\[email protected]}
\title{Stability of the $C^*$-algebra associated with the twisted CCR.}
\author{Daniil Proskurin and Yurii Samoilenko}
\date{}
\begin {document}
\maketitle
\begin{abstract}
The universal enveloping $C^*$-algebra $\mathbf{A}_{\mu}$ of twisted canonical
commutation relations is considered. It is shown that for any
$\mu\in (-1,1)$ the $C^*$-algebra $\mathbf{A}_{\mu}$
is isomorphic to the $C^*$-algebra $\mathbf{A}_0$ generated by
partial isometries
$t_i,\ t_i^*,\ i=1,\ldots,d$ satisfying the relations
\[
t_i^* t_j=\delta_{ij}(1-\sum_{k<i}t_k t_k^*),\ t_j t_i=0,\ i\ne j.
\]
It is
proved that Fock representation of $\mathbf{A}_{\mu}$ is faithful.
\end{abstract}
{\bf Mathematics Subject Classifications (2000):} 46L55, 46L65, 81S05,
81T05.
\\
{\bf Key words:} Fock representation, deformed commutation relations,
universal bounded representation.
\section*{Introduction}
Recently, interest in *-algebras defined by generators and
relations, their representations (particularly faithful representations),
and their universal
enveloping $C^*$-algebras has been growing because of their applications in
mathematical physics, operator theory, etc.
Many interesting classes of *-algebras depending on
parameters are constructed as deformations of the canonical
commutation relations of quantum mechanics (CCR).
Well-known examples of such deformations are
\begin{itemize}
\item $q_{ij}$-CCR introduced by M. Bozejko and R. Speicher (see \cite{bs})
\begin{equation*}
\mathbb{C}\bigl<a_i,
\ a_i^*\mid a_i^*a_j=\delta_{ij}1+q_{ij}a_ja_i^*,\ i,j=1,\ldots,d,\
q_{ji}=\overline{q}_{ij}\in\mathbb{C},\ \mid q_{ij}\mid\le 1\bigr>
\end{equation*}
and
\item Twisted canonical commutation relations (TCCR) constructed
by W. Pusz and S.L. Woronowicz (see \cite{pw}). The TCCR have the
following form
\begin{align}\label{mucc}
a_i^*a_i &=1+\mu^2 a_ia_i^* -(1-\mu^2)\sum_{k<i}a_k a_k^*,\
i=1,\ldots,d \nonumber \\
a_i^*a_j &=\mu a_ja_i^*,\ i\ne j,\quad
a_j a_i=\mu a_i a_j,\ i<j,\ 0<\mu<1
\end{align}
\end{itemize}
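For example, in the simplest case $d=1$ the relations (~\ref{mucc}) reduce to the
one-dimensional $q$-CCR with $q=\mu^2$,
\[
a_1^*a_1=1+\mu^2 a_1a_1^*,
\]
since the sum over $k<i$ is empty and there are no mixed relations.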
The universal $C^*$-algebra $A_{\{q_{ij}\}}$ for the $q_{ij}$-CCR,
$\mid q_{ij}\mid<\sqrt{2}-1$, was studied in ~\cite{jsw}.
In particular, it was shown that under the above restrictions on
the coefficients, $A_{\{q_{ij}\}}$ is isomorphic to the Cuntz-Toeplitz
algebra generated by isometries $\{s_i,\ s_i^*,\ i=1,\ldots,d\}$
satisfying the relations $s_i^*s_j=0,\ i\ne j$. This implies that the Fock
representation of $A_{\{q_{ij}\}}$ is faithful. The conjecture that the
same results hold for any choice of $\mid q_{ij}\mid<1$ was also discussed
in ~\cite{jsw}. When
$\mid q_{ij}\mid=1,\ i\ne j$, the universal $C^*$-algebra is isomorphic
to the extension of a noncommutative higher-dimensional torus
generated by isometries satisfying $s_i^*s_j=q_{ij}s_js_i^*,\ i\ne j$,
and the Fock representation is faithful as well (see ~\cite{dpl}).
In the present paper we consider the universal $C^*$-algebra
$\mathbf{A}_{\mu}$ corresponding to the TCCR. Recall that the
irreducible representations of TCCR, including unbounded,
were described in \cite{pw}, and for any bounded representation
$\pi$ of TCCR
\[
\Vert \pi(a_ia_i^*)\Vert\le\frac{1}{1-\mu^2},
\]
i.e. the TCCR generate a *-bounded *-algebra (see, for example, \cite{khel} and \cite{os}).
We show in Sec.~1 that $\mathbf{A}_{\mu}\simeq \mathbf{A}_0$ for any
$\mu\in (-1,1)$. Note that $\mathbf{A}_0$ is generated by partial isometries
$\{s_i,\ s_i^*,\ i=1,\ldots,d\}$ satisfying the relations
\[
s_i^*s_j=\delta_{ij}(1-\sum_{k<i}s_k s_k^*).
\]
In Sec.~2 we prove that the Fock representation of $\mathbf{A}_{\mu}$
is faithful.
\begin{remark}
It follows from the main result of \cite{jps} that the Fock representations of
the *-algebras generated by the $q_{ij}$-CCR, $\mid q_{ij}\mid<1$,
and by the TCCR are faithful. In the case when $\mid q_{ij}\mid=1,\ i\ne j$,
the kernel of Fock representation is generated as a *-ideal by the
family $\{a_j a_i-q_{ij}a_ia_j\}$.
\end{remark}
Finally, let us recall that by the universal $C^*$-algebra
for a certain *-algebra $\mathcal{A}$
we mean the $C^*$-algebra $\mathbf{A}$ with the homomorphism
$\psi\colon\mathcal{A}\rightarrow\mathbf{A}$ such that for any
homomorphism $\varphi\colon\mathcal{A}\rightarrow B$, where $B$ is a
$C^*$-algebra, there exists $\theta\colon\mathbf{A}\rightarrow B$
satisfying $\theta\psi=\varphi$. It can be
obtained as the completion of $\mathcal{A}/J$ with respect to the following
$C^*$-seminorm on $\mathcal{A}$
\[
\Vert a\Vert=\sup_{\pi}\Vert \pi(a)\Vert,
\]
where $\sup$ is taken over all bounded representations of
$\mathcal{A}$ and $J$ is the kernel of this seminorm. Obviously this process
requires the condition
$\sup_{\pi}\Vert \pi(a)\Vert<\infty$ for any $a\in\mathcal{A}$.
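For example, the universal $C^*$-algebra of the *-algebra generated by a single
unitary element $u$, $u^*u=uu^*=1$, is $C(\mathbb{T})$: in this case
$\Vert\pi(u)\Vert=1$ for any representation $\pi$, so the seminorm above is
finite.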
Throughout the paper we suppose that all $C^*$-algebras are realized as
Hilbert space operators. In particular, it makes sense to consider
the polar decomposition of elements of a $C^*$-algebra. Obviously, we
do not claim that in general the partial isometry from the polar
decomposition lies in this $C^*$-algebra.
\section{Stability of $\mu$-CCR.}
Let us recall some properties of the $C^*$-algebra
generated by the one-dimensional $q$-CCR. Namely, we need the following
proposition (see ~\cite{dpl}).
\begin{proposition} \label{qc}
Let $B$ be the unital $C^*$-algebra generated by the elements
$a,\ a^*$ satisfying the relation
\[
a^* a=1+q a a^*,\quad -1<q<1
\]
and $a^*=S^*C$ is its polar decomposition. Then $S\in B$, $B=C^*(S,S^*)$
and
\[
a=\bigl(\sum_{n=1}^{\infty} q^{n-1}S^n S^{*n}\bigr)^{\frac{1}{2}}S.
\]
\end{proposition}
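For instance, for $q=0$ the formula above gives
\[
a=(SS^*)^{\frac{1}{2}}S=SS^*S=S,
\]
in accordance with the fact that for $q=0$ one has $a^*a=1$, i.e. $a$ itself is
an isometry and coincides with the partial isometry from its polar
decomposition.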
Let us show that any $C^*$-algebra generated by the operators satisfying
(~\ref{mucc}) can be generated by some family of partial isometries.
\begin{proposition}
Let $A_{\mu}$ be the unital $C^*$-algebra generated by operators
$a_i,\ a_i^*,\ i=1,\ldots,d$, satisfying relations (~\ref{mucc}).
Let $a_i^*=S_i^*C_i$ be the polar decomposition. Construct the
following family of partial isometries inductively:
\[
\widehat{S}_1:= S_1,\quad
\widehat{S}_i=(1-\sum_{j<i}\widehat{S}_j\widehat{S}_j^*)S_i
\]
Then $\forall i=1,\ldots,d$ we have $\widehat{S}_i\in A_{\mu}$,
$A_{\mu}=C^*(\widehat{S}_i,\ \widehat{S}_i^*,i=1,\ldots,d)$ and the
following relations hold
\begin{align}\label{pisom}
\widehat{S}_i^*\widehat{S}_j &=\delta_{ij}\bigl(1-\sum_{k<i}
\widehat{S}_k\widehat{S}_k^*\bigr),\quad i,j=1,\ldots,d\\
\widehat{S}_j\widehat{S}_i & =0,\ j>i.\nonumber
\end{align}
\end{proposition}
\begin{proof}
We use induction on the number of generators.\\
{$\mathbf{d=1}$}.\\
In this case we have $a_1^*a_1=1+\mu^2 a_1a_1^*$, $a_1^*=S_1^*C_1$ and
as shown in Proposition ~\ref{qc} we have $S_1\in C^*(a_1,a_1^*)$ and
\[
a_1^*=S_1^*
\bigl(\sum_{n=1}^{\infty}\mu^{2(n-1)}S_1^n S_1^{*n}\bigr)^{\frac{1}{2}}
\]
with $S_1^*S_1=1$.\\
{$\mathbf{d-1\rightarrow d}$}.\\
Denote $a_i^{(1)}:=(1-S_1S_1^*)a_i$, $i=2,\ldots,d$. Note that
the relations (~\ref{mucc}) are equivalent to
\begin{align*}
C_i^2 S_i & =S_i(1+\mu^2 C_i^2-(1-\mu^2)\sum_{j<i}C_j^2)\\
C_i^2S_j & =\mu^2 S_jC_i^2,\ j<i\\
C_i^2 S_j & = S_j C_i^2,\ j>i\\
C_iC_j &=C_jC_i,\ S_i^*S_j=S_jS_i^*,\ S_iS_j=S_jS_i
\end{align*}
Then it is easy to see that $(1-S_1S_1^*)a_i=a_i(1-S_1S_1^*)$,
$i=2,\ldots,d$ and
\begin{align*}
a_i^{*(1)}a_j^{(1)}& =(1-S_1S_1^*)a_i^*(1-S_1S_1^*)a_j\\
&=(1-S_1S_1^*)a_i^*a_j(1-S_1S_1^*)\\
&=\mu (1-S_1S_1^*)a_ja_i^*(1-S_1S_1^*)\\
& =\mu (1-S_1S_1^*)a_j(1-S_1S_1^*)a_i\\
&=\mu a_j^{(1)}a_i^{*(1)},\ i\ne j
\end{align*}
Analogously, $a_j^{(1)}a_i^{(1)}=\mu a_i^{(1)}a_j^{(1)}$, $j>i>1$.
Multiplying the relation
\[
a_i^*a_i =1+\mu^2 a_ia_i^* -(1-\mu^2)\sum_{k<i}a_ka_k^*
\]
by $1-S_1S_1^*$ we get
\[
a_i^{*(1)}a_i^{(1)} =(1-S_1S_1^*)+\mu^2
a_i^{(1)}a_i^{*(1)} -(1-\mu^2)\sum_{2\le k<i}a_k^{(1)}a_k^{*(1)}.
\]
Evidently, the element $1-S_1S_1^*$ is the unit of the $C^*$-algebra of
operators $C^*(1-S_1 S_1^*,\ a_i^{(1)},\ a_i^{*(1)},\ i=2,\ldots,d)$.
Using the induction assumption we conclude that
\[
C^*(S_1,\ S_1^*,\ a_i^{(1)},\ a_i^{*(1)},\ i=2,\ldots,d)=
C^*(S_1,\ S_1^*,\ \widehat{S}_i^{*(1)},\
\widehat{S}_i^{(1)},\ i=2,\ldots,d)
\]
and partial isometries $\widehat{S}_i^{(1)}$, $i=2,\ldots,d$, satisfy the
relations (~\ref{pisom}). Note that
$\widehat{S}_i^{(1)}=\widehat{S}_i$, $i=2,\ldots,d$. Indeed, evidently,
if $a_i^*=S_i^*C_i$ is a polar decomposition, then
$a_i^{*(1)}=(1-S_1S_1^*)S_i^*(1-S_1S_1^*)C_i$, $i=2,\ldots,d$, is a polar
decomposition too, i.e. $S_i^{(1)}=(1-S_1S_1^*)S_i$,
and we have
$\widehat{S}_2^{(1)}:=S_2^{(1)}=(1-S_1S_1^*)S_2=\widehat{S}_2$, further
\begin{align*}
\widehat{S}_i^{(1)}&:=(1-S_1S_1^*-\widehat{S}_2^{(1)}
\widehat{S}_2^{*(1)}
-\cdots-
\widehat{S}_{i-1}^{(1)}\widehat{S}_{i-1}^{*(1)})S_i^{(1)}\\
& = (1-S_1S_1^*-\widehat{S}_2\widehat{S}_2^*
-\cdots-
\widehat{S}_{i-1}\widehat{S}_{i-1}^*)(1-S_1S_1^*)S_{i}\\
&= (1-S_1S_1^*-\widehat{S}_2\widehat{S}_2^*
-\cdots-
\widehat{S}_{i-1}\widehat{S}_{i-1}^*)S_{i}=\widehat{S}_i\\
\end{align*}
Obviously, the conclusion above is obtained by induction.
Then $\widehat{S}_1^*\widehat{S}_i=S_1^*(1-S_1S_1^*)\widehat{S}_i=0$
and, analogously,
$\widehat{S}_i\widehat{S}_1=0$, $i=2,\ldots,d$. It remains only to
show that
$C^*(a_i,\ a_i^*,\ i=1,\ldots,d)=
C^*(\widehat{S}_i,\ \widehat{S}_i^*,\ i=1,\ldots,d)$. It follows from
the induction assumption and the decomposition
\[
a_i=\sum_{n=0}^{\infty}\mu^n S_1^n a_i^{(1)}S_1^{*^n}
\]
The equality above follows from
$S_1^*a_i=\mu a_i S_1^*$; then $S_1^{*n}a_i=\mu^n a_iS_1^{*n}$ and
\begin{align*}
&\mu^n S_1^n a_i^{(1)}S_1^{*n} =
\mu^n S_1^n(1-S_1S_1^*)a_iS_1^{*n}\\
& = S_1^n(1-S_1S_1^*)S_1^{*n} a_i
=(S_1^n S_1^{*n}-S_1^{n+1}S_1^{*n+1})a_i
\end{align*}
\end{proof}
Now we have to prove the converse statement, i.e. that any
$C^*$-algebra generated by partial isometries satisfying (~\ref{pisom})
can be generated by the elements satisfying (~\ref{mucc}).
Let us consider the unital $C^*$-algebra $A_0$ generated by
the operators $t_i,\ t_i^*,\ i=1,\ldots,d$, satisfying relations
(~\ref{pisom}). Note that $t_i,\ i=1,\ldots,d$, are partial isometries.
Indeed we have
\[
t_it_i^*t_i=t_i(1-\sum_{j<i}t_jt_j^*)=t_i.
\]
For any $i=1,\ldots,d$ define a family $\{a_i^{(j)},\ j=1,\ldots
,i\}$ inductively:
\begin{align}\label{afam}
a_i^{(i)}&=(\sum_{n=1}^{\infty}
\mu^{2(n-1)}t_i^n t_i^{*n})^{\frac{1}{2}}t_i,\\
a_i^{(j)}&=\sum_{n=0}^{\infty}\mu^n t_j^n a_i^{(j+1)}t_j^{*n},\
j=1,\ldots,i-1.\nonumber
\end{align}
We shall use the following evident decomposition also
\[
a_i^{(j)}=\sum_{n_j,\ldots,n_{i-1}=0}^{\infty}
\mu^{n_j+n_{j+1}+\cdots+n_{i-1}}t_j^{n_j}\cdots t_{i-1}^{n_{i-1}}
a_i^{(i)}t_{i-1}^{*n_{i-1}}\cdots t_j^{*n_j}
\]
Denote $a_i^{(1)}:= \widetilde{a}_i$. Our goal is to show that
$\widetilde{a}_i,\ \widetilde{a}_i^*$ satisfy the relations
(~\ref{mucc}) and
$\widehat{S}_i(\widetilde{a}_1,\ldots,\widetilde{a}_d)=t_i$,
$i=1,\ldots,d$.
To do this, we prove a few auxiliary lemmas.
\begin{lemma}\label{l1}
$(1-t_1t_1^*-\cdots-t_jt_j^*)a_i^{(j)}=a_i^{(j+1)}$
\end{lemma}
\begin{proof}
In the following we denote $P_j:=1-\sum_{i\le j}t_it_i^*$, $P_0:=1$.
It is easy
to see that $P_jt_k=0,\ k\le j$, and $P_jt_k=t_k,\ k>j$. Then
\[
P_j t_j^{n_j}\cdots t_{i-1}^{n_{i-1}}=
\left\{
\begin{array}{ccc}
0,\quad n_j\ne 0 & &\\
t_{j+1}^{n_{j+1}}\cdots t_{i-1}^{n_{i-1}},\quad n_j=0,\
\exists n_l\ne 0,\ j+1\le l\le i-1 & &\\
P_j,\quad n_l=0,\ l=j,\ldots,i-1 & &
\end{array}
\right.
\]
Then
\begin{align*}
&P_j a_i^{(j)}=
\sum_{n_j,\ldots,n_{i-1}=0}^{\infty}
\mu^{n_j+n_{j+1}+\cdots+n_{i-1}}P_jt_j^{n_j}\cdots t_{i-1}^{n_{i-1}}
a_i^{(i)}t_{i-1}^{*n_{i-1}}\cdots t_j^{*n_j}\\
&= P_j a_i^{(i)}+
\sum_{n_{j+1},\ldots,n_{i-1}=0,\sum_k n_k^2\ne 0}^{\infty}
\mu^{n_{j+1}+\cdots+n_{i-1}}t_{j+1}^{n_{j+1}}\cdots t_{i-1}^{n_{i-1}}
a_i^{(i)}t_{i-1}^{*n_{i-1}}\cdots t_{j+1}^{*n_{j+1}}\\
&=
\sum_{n_{j+1},\ldots,n_{i-1}=0}^{\infty}
\mu^{n_{j+1}+\cdots+n_{i-1}}t_{j+1}^{n_{j+1}}\cdots t_{i-1}^{n_{i-1}}
a_i^{(i)}t_{i-1}^{*n_{i-1}}\cdots t_{j+1}^{*n_{j+1}}=a_i^{(j+1)}
\end{align*}
where we have used that $P_j a_i^{(i)}=a_i^{(i)},\ j<i$. Indeed,
\[
a_i^{(i)}= T_it_i,\ T_i^2=\sum_{n=1}^{\infty}\mu^{2(n-1)} t_i^nt_i^{*n}
\]
and $t_kt_k^* T_i^2=T_i^2 t_kt_k^*=0$, $i\ne k$ implies $t_k^*T_i=0$,
$i\ne k$, hence $t_k^*a_i^{(i)}=0$ and
\[
P_j a_i^{(i)}=(1-\sum_{k\le j}t_kt_k^*)a_i^{(i)}=a_i^{(i)},\quad j<i.
\]
\end{proof}
\begin{corollary}
$P_k a_i^{(j+1)}=a_i^{(j+1)},\ k\le j$
\end{corollary}
\begin{proof}
We note only that $P_kP_j=P_j,\ k\le j$.
\end{proof}
\begin{lemma}\label{l2}
$t_k^*a_i^{(j+1)}=0,\ a_i^{(j+1)}t_k=0$,
$t_k^*a_i^{*(j+1)}=0,\ a_i^{*(j+1)}t_k=0$, for any $k\le j<i$.
\end{lemma}
\begin{proof}
As in the previous lemma we have $t_k^*a_i^{(j+1)}=t_k^*a_i^{(i)}=0$
and $a_i^{(j+1)}t_k=a_i^{(i)}t_k=T_it_it_k=0$ since $t_it_k=0,\ i>k$.
The other relations are adjoint to the proved above.
\end{proof}
\begin{lemma}\label{l3}
\[
t_j^{*n}t_j^m=\left\{
\begin{array}{ccc}
t_j^{*n-m},\ n>m &&\\
P_{j-1},\ n=m &&\\
t_j^{m-n},\ n<m &&
\end{array}
\right.
\]
\end{lemma}
\begin{proof}
By induction on $n$ and $m$, using the basic relations (~\ref{pisom}).
\end{proof}
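For example, for $n=m=1$ the formula gives $t_j^*t_j=P_{j-1}$, which is just the
defining relation (~\ref{pisom}); the inductive step uses
$t_j^*t_j^m=P_{j-1}t_j^{m-1}=t_j^{m-1}$ for $m\ge 2$, since $P_{j-1}t_k=t_k$ for
$k\ge j$.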
Now we are able to prove the following proposition.
\begin{proposition}\label{prop1}
For any $i=1,\ldots,d$ and $1\le j\le i$ we have
\[
a_i^{*(j)}a_i^{(j)}=P_{j-1}+\mu^2a_i^{(j)}a_i^{*(j)}-
(1-\mu^2)\sum_{j\le k<i}a_k^{(j)}a_k^{*(j)}
\]
\end{proposition}
\begin{proof}
We use downward induction on $j$ (from $j=i$ to $j=1$) for a fixed $i=1,\ldots,d$.\\
For $\mathbf{j=i}$ we have
\begin{align*}
& a_i^{*(i)}a_i^{(i)}=t_i^*\bigl(\sum_{n=1}^{\infty}
\mu^{2(n-1)}t_i^nt_i^{*n}\bigr)t_i\\
&= t_i^*t_i+\mu^2\sum_{n=1}^{\infty}\mu^{2(n-1)}
t_i^nt_i^{*n}\\
&= P_{i-1}+\mu^2 a_i^{(i)}a_i^{*(i)}
\end{align*}
$\mathbf{j+1\rightarrow j}$.
Using the results of previous lemmas we have
\begin{align*}
& a_i^{*(j)}a_i^{(j)}=\sum_{n,m=0}^{\infty}\mu^{n+m}
t_j^n a_i^{*(j+1)}t_j^{*n}t_j^m a_i^{(j+1)}t_j^{*m}
= \sum_{n=0}^{\infty}\mu^{2n}
t_j^n a_i^{*(j+1)} a_i^{(j+1)}t_j^{*n}\\
&= \sum_{n=0}^{\infty}\mu^{2n}
t_j^n\bigl( P_j+\mu^2 a_i^{(j+1)} a_i^{*(j+1)}
-(1-\mu^2)\sum_{j+1\le k<i}a_k^{(j+1)}a_k^{*(j+1)}\bigr)t_j^{*n}\\
&=\sum_{n=0}^{\infty}\mu^{2n}t_j^n P_j t_j^{*n}+
\mu^2 a_i^{(j)}a_i^{*(j)}-
(1-\mu^2)\sum_{j+1\le k<i}a_k^{(j)}a_k^{*(j)}\\
&=P_j+\sum_{n=1}^{\infty}\mu^{2n}(t_j^n t_j^{*n}-t_j^{n+1}t_j^{*n+1})
+\mu^2 a_i^{(j)}a_i^{*(j)}-
(1-\mu^2)\sum_{j+1\le k<i}a_k^{(j)}a_k^{*(j)}\\
&=P_{j-1}-(1-\mu^2)\sum_{n=1}^{\infty}\mu^{2(n-1)}t_j^n t_j^{*n}+
\mu^2 a_i^{(j)}a_i^{*(j)}-
(1-\mu^2)\sum_{j+1\le k<i}a_k^{(j)}a_k^{*(j)}\\
&=P_{j-1}+\mu^2 a_i^{(j)}a_i^{*(j)}-
(1-\mu^2)\sum_{j\le k<i}a_k^{(j)}a_k^{*(j)}
\end{align*}
In particular, for $j=1$ we have
\[
\widetilde{a}_i^* \widetilde{a}_i=
1+\mu^2 \widetilde{a}_i \widetilde{a}_i^*-
(1-\mu^2)\sum_{k<i}\widetilde{a}_k \widetilde{a}_k^*
\]
\end{proof}
It remains to show that
$\widetilde{a}_i^*\widetilde{a}_j=\mu \widetilde{a}_j\widetilde{a}_i^*$, $i\ne j$.
Then
$\widetilde{a}_j\widetilde{a}_i=\mu \widetilde{a}_i\widetilde{a}_j$, $j>i$,
holds automatically (see ~\cite{jsw1}).
\begin{lemma}
$a_i^{*(k)}a_j^{(j)}=0$, $j< k\le i$.
\end{lemma}
\begin{proof}
We use induction again. For $\mathbf{k=i}$ one has
\[
a_i^{*(i)}a_j^{(j)}=t_i^* T_i T_j t_j=0
\]
since $T_i^2 T_j^2 =0,\ i\ne j$, and $T_i,\ T_j\ge 0$.\\
$\mathbf{k+1\rightarrow k}$.
\[
a_i^{*(k)}a_j^{(j)}=\sum_{n=0}^{\infty}\mu^n
t_k^n a_i^{*(k+1)}t_k^{*n} a_j^{(j)}
= a_i^{*(k+1)}a_j^{(j)}=0
\]
since $t_k^* a_j^{(j)}=0$, $k>j$ (see Lemma ~\ref{l1}).
\end{proof}
\begin{lemma}
$a_i^{*(j)}a_j^{(j)}=\mu a_j^{(j)}a_i^{*(j)}$, $j<i$.
\end{lemma}
\begin{proof}
Let us show that $a_i^{*(j)}T_j^2=T_j^2 a_i^{*(j)}$ and
$a_i^{*(j)}t_j=\mu t_j a_i^{*(j)}$.
\begin{align*}
& a_i^{*(j)}T_j^2=\bigl(\sum_{n=0}^{\infty}
\mu^n t_j^n a_i^{*(j+1)} t_j^{*n}
\bigr)
\bigl(
\sum_{m=1}^{\infty}\mu^{2(m-1)}t_j^m t_j^{*m}
\bigr)\\
&=\sum_{n=0,m=1}^{\infty}\mu^{n+2(m-1)}t_j^n
a_i^{*(j+1)}t_j^{*n}t_j^m t_j^{*m}\\
&= \sum_{n=1,m\le n}^{\infty}\mu^{n+2(m-1)}
t_j^n a_i^{*(j+1)}t_j^{*n}.
\end{align*}
where we have used Lemmas ~\ref{l2},\ref{l3}. Analogously
\[
T_j^2 a_i^{*(j)}=
\sum_{n=1,m\le n}^{\infty}\mu^{n+2(m-1)}
t_j^n a_i^{*(j+1)}t_j^{*n}= a_i^{*(j)}T_j^2.
\]
Finally
\begin{align*}
& a_i^{*(j)}t_j=\bigl(
\sum_{n=0}^{\infty}\mu^n t_j^n a_i^{*(j+1)}t_j^{*n}
\bigr)t_j\\
&=\mu t_j a_i^{*(j+1)}t_j^*t_j+
\sum_{n=2}^{\infty}\mu^n t_j^n a_i^{*(j+1)}t_j^{*n-1}\\
&=\mu t_j a_i^{*(j+1)}P_{j-1}+\mu t_j
\sum_{n=1}^{\infty}\mu^n t_j^n a_i^{*(j+1)}t_j^{*n}\\
&=\mu t_j\sum_{n=0}^{\infty}\mu^n t_j^n a_i^{*(j+1)}t_j^{*n}
=\mu t_j a_i^{*(j)}.
\end{align*}
Then
\[
a_i^{*(j)}a_j^{(j)}=a_i^{*(j)}T_jt_j=\mu T_jt_ja_i^{*(j)}
=\mu a_j^{(j)}a_i^{*(j)},\ i>j.
\]
\end{proof}
\begin{lemma}
$a_i^{*(k)}a_j^{(k)}=\mu a_j^{(k)}a_i^{*(k)}$, $1\le k\le j<i$
\end{lemma}
\begin{proof}
We use induction. The case $k=j$ is considered in the Lemma above.\\
$\mathbf{k+1\rightarrow k}$. As in the Proposition ~\ref{prop1} we have
\begin{align*}
& a_i^{*(k)}a_j^{(k)}=\sum_{n=0}^{\infty}
\mu^{2n} t_k^n a_i^{*(k+1)}a_j^{(k+1)}t_k^{*n}\\
& =\mu \sum_{n=0}^{\infty}
\mu^{2n} t_k^n a_j^{(k+1)} a_i^{*(k+1)}t_k^{*n}
=\mu a_j^{(k)}a_i^{*(k)}.
\end{align*}
In particular, for $k=1$ we have
$\widetilde{a}_i^* \widetilde{a}_j=\mu \widetilde{a}_j\widetilde{a}_i^*$,
$i>j$.
\end{proof}
So, we have proved the following theorem.
\begin{theorem}
Let $A_0=C^*(t_i,\ t_i^*,\ i=1,\ldots,d)$ where
$\{t_i,\ t_i^*,\ i=1,\ldots,d\}$
satisfy relations (~\ref{pisom}), and the family
$\{\widetilde{a}_i,\ \widetilde{a}_i^*,\ i=1,\ldots,d\}$
is constructed according to formulas
(~\ref{afam}). Then the relations (~\ref{mucc}) are satisfied and
we have
$\widehat{S}_i(\widetilde{a}_1,\ldots,\widetilde{a}_d)=t_i,\ i=1,\ldots,d$.
\end{theorem}
\begin{corollary}
For any $\mu\in (-1,1)$ the $C^*$-algebra $\mathbf{A}_{\mu}$ is isomorphic
to $\mathbf{A}_0$.
\end{corollary}
\begin{proof}
Using the universal property of $\mathbf{A}_0$ we can define the
surjective homomorphism
$\varphi\colon \mathbf{A}_0\rightarrow \mathbf{A}_{\mu}$ by rule
$\varphi(t_i)=\widehat{S}_i$, $i=1,\ldots,d$. Analogously, we have
$\psi\colon \mathbf{A}_{\mu}\rightarrow \mathbf{A}_0$,
$\psi (a_i)= a_i^{(1)}$,
$i=1,\ldots,d$. Obviously, $\psi\varphi=id$ and $\varphi\psi=id$.
\end{proof}
\section{Fock representation.}
Recall that the Fock representation of the TCCR is the irreducible
representation determined by a cyclic vector $\Omega$ such that
$a_i^*\Omega=0$, $i=1,\ldots,d$.
Let us prove that the Fock representation of $\mathbf{A}_{\mu}$ is faithful.
First, note that the Fock representation of $\mathbf{A}_0$ corresponds to
the Fock representation of $\mathbf{A}_{\mu}$
(this can be easily seen from the formulas
connecting $\{t_j\}$ and $\{a_j\}$).
In the following we need the description of classes of unitary
equivalence of irreducible representations of $\mathbf{A}_0$. As we
have noted above, the
irreducible representations of TCCR,
including unbounded representations, were classified in ~\cite{pw}.
However, it is more
convenient for us to present the representations of $\mathbf{A}_0$ in
a somewhat different form.
\begin{proposition}
Let $\pi$ be an irreducible representation of $\mathbf{A}_0$ acting on the
Hilbert space $\mathcal{H}$, then for some
$j=1,\ldots,d$ we have
$\mathcal{H}\simeq \bigotimes_{k=1}^j l_2(\mathbb{N})$ and
\begin{align*}
\pi(t_i)& =\bigotimes_{k=1}^{i-1}(1-S S^*)\otimes S\otimes
\bigotimes_{k=i+1}^j 1,\ i\le j\\
\pi(t_{j+1})& = e^{i\varphi}\bigotimes_{k=1}^j (1-S S^*),\
\varphi\in [0,2\pi)\\
\pi(t_i)&=0,\ i>j+1,
\end{align*}
where $S$ is a unilateral shift on $l_2(\mathbb{N})$.
The case $j=d$ corresponds to the Fock representation.
\end{proposition}
\begin{proof}
It follows from (\ref{pisom}) that $\pi(t_1)$ is an isometry. Hence,
either $\ker\pi(t_1^*)\ne\{0\}$ or $\pi(t_1)$ is unitary. In what follows
we write $t_i$ instead of $\pi(t_i)$.
Let $\ker t_1^*=\mathcal{H}_1\ne\{0\}$.
Then the relations (~\ref{pisom}) imply that
$\mathcal{H}=\bigoplus_{n=0}^{\infty} t_1^n\mathcal{H}_1$ and
\[
t_i,\ t_i^*\colon\mathcal{H}_1\rightarrow\mathcal{H}_1,\
t_i,\ t_i^*\colon t_1^{n}\mathcal{H}_1\rightarrow \{0\},\ n\ge 1,\ i>1.
\]
If we identify $t_1^n\mathcal{H}_1$ with $e_n\otimes\mathcal{H}_1$,
$n\ge 0$, then
\[
t_1=S\otimes 1,\ t_i=(1-S S^*)\otimes t_i^{(1)},\ i>1
\]
where $S e_n=e_{n+1},\ n\ge 0$, and the family $\{t_i^{(1)},\ i>1\}$
satisfies (~\ref{pisom}) on the space $\mathcal{H}_1$. Moreover, it is
easy to show that the family $\{t_i,\ i=1,\ldots,d\}$ is irreducible iff
$\{t_i^{(1)},\ i=2,\ldots,d\}$ is irreducible.
If $t_1$ is unitary, then $t_it_1=0$, $i>1$, implies $t_i=0$.
\end{proof}
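For example, for $d=2$ the Fock representation (the case $j=d=2$) acts on
$l_2(\mathbb{N})\otimes l_2(\mathbb{N})$ by
\[
\pi_F(t_1)=S\otimes 1,\quad \pi_F(t_2)=(1-SS^*)\otimes S,
\]
and the vector $\Omega=e_0\otimes e_0$ is annihilated by $\pi_F(t_1^*)$ and
$\pi_F(t_2^*)$.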
Using the previous proposition we can prove the following theorem.
\begin{theorem}
The Fock representation of $\mathbf{A}_0$ is faithful.
\end{theorem}
\begin{proof}
Let $C_F$ be the $C^*$-algebra generated by the operators of Fock
representation and $C_{\pi}$ be the $C^*$-algebra generated by some
irreducible representation $\pi$ of $\mathbf{A}_0$.
To prove the statement it is sufficient to construct a
homomorphism
\[
\psi\colon C_F\rightarrow C_{\pi}
\]
such that $\pi=\psi\pi_F$ (then $\pi(\ker\ \pi_F)=\{0\}$ for any
irreducible representation of $\mathbf{A}_0$, i.e. $\ker\ \pi_F=\{0\}$, where
we denote by $\pi_F$ the Fock representation).
To do this, we note that if $\pi$ corresponds to some $j=1,\ldots,d-1$,
then $C_F$ and $C_{\pi}$ are the $C^*$-subalgebras of the
$\bigotimes_{k=1}^d C^*(S,S^*)$ and $\bigotimes_{k=1}^j C^*(S,S^*)$
respectively.
Recall that $C^*(S,S^*)\simeq\mathcal{T}(C(\mathbf{T}))$ is a
nuclear $C^*$-algebra of the Toeplitz
operators. Then we can define the homomorphism
\[
\psi\colon
\bigotimes_{k=1}^d C^*(S,S^*)\rightarrow\bigotimes_{k=1}^j C^*(S,S^*),
\]
given by
\begin{align*}
\psi(\otimes_{k=1}^{i-1}1\otimes S\otimes_{k=i+1}^d 1)&=
\otimes_{k=1}^{i-1}1\otimes S\otimes_{k=i+1}^j 1,\ i\le j\\
\psi(\otimes_{k=1}^{j}1\otimes S\otimes_{k=j+2}^d 1)&=
e^{i\varphi}\otimes_{k=1}^{j}1,\\
\psi(\otimes_{k=1}^{i-1}1\otimes S\otimes_{k=i+1}^d 1)&=
\otimes_{k=1}^{j}1,\ i> j+1\\
\end{align*}
It remains only to restrict $\psi$ to $C_F$ and note that
$\psi(C_F)=C_{\pi}$.
\end{proof}
{\bf{Acknowledgements.}}\\
We express our gratitude to Vasyl Ostrovsky\u{\i} and Stanislav
Popovich for their critical remarks and helpful discussions.
This work was partially supported by the State Fund of Fundamental
Researches of Ukraine, grant no. 01.07/071.
- The most beautiful place with nice sea view, great for memorable experiences This luxurious oceanfront boutique Hotel has a private beach and stunning views of Nicoya Gulf and Jacó´s crescent beache… read more
- Amid a natural garden, this hotel has free Wi-Fi connection, terraces, gardens, and barbecue facilities. Pumilio Mountain & Ocean Hotel offers its guests an outdoor pool, jetted tub, and spa faciliti… read more
- Fuel family fun at one of the premier resorts in Costa Rica, complete with a golf course, ocean-side pool, spa, restaurants and spacious accommodations. Our Herradura, Costa Rica hotel brings your ad… read more
- Discover Oceano, your place to escape for a perfect family vacation. Immerse yourself in the comfort and convenience of Oceano, a boutique luxury condo with 5 star hotel amenities. Oceano offers full… read more
- Wake up to the warm ocean breeze. Enjoy the beach and town markets in the morning, then relax at our five-star pool. Eat at one of our three restaurants and bars, then enjoy the nightlife and enterta… read more
- Offering an outdoor pool, plus a spa and wellness centre, Zephyr Palace is located in Herradura, Costa Rica. Only 60 minutes’ drive from Carara National Park, the property also features free WiFi acc… read more
Other boutique hotels near Jacó
- Set on Esterillos Este Beach, Alma del Pacifico Hotel & Spa offers bright, attractive villas overlooking its tropical gardens. It has an outdoor swimming pool, a spa and a restaurant. Each spacious … read more
- Hotel Boutique Meraki features a restaurant, outdoor swimming pool, a bar and shared lounge in Caldera. Each accommodation at the 3-star hotel has garden views, and guests can enjoy access to a garde… read more
- The Eco Boutique Hotel Vista Las Islas is an eco-friendly hotel offering an oasis of green inspired hospitality for families and pleasure travelers in Costa Rica. Located on Costa Rica’s Nicoya Sout… read more
- Set on Playa Palo Seco Beach, Beso del Viento welcomes you on a small peninsula between the ocean and the mangrove. This charming lodge has a beautiful setting,and features an outdoor swimming pool a… read more
- Facing the Gulf of Nicoya in Quizales Beach, Tango Mar Beachfront Boutique Hotel & Villas has its own private beach area and outdoors pool. Free Wi-Fi connection is possible throughout. The minimali… read more
- Nya is a design hotel encased in a tropical garden that merges into the jungle of Montezuma. With quirky and calming designs by young central American architects, Nya wishes to complement, rath… read more
- Located in Montezuma, 1.3 km from Montezuma Beach, Aves Hotel Montezuma provides accommodation with an outdoor swimming pool, free private parking, a fitness centre and a garden. Featuring family roo… read more
- Vida Mountain Resort & Spa is a boutique-style resort immersed in the mountains of Costa Rica. High speed internet (fibre optic), delicious wholesome foods grown locally, heated saltwater infinity p… read more
- Perched above, one of Costa Rica’s most scenic beaches, Hotel Casa Chameleon is the perfect romantic getaway for couples who want to celebrate their love. Hotel Casa Chamaleon is an adult only hotel… read more
- WELCOME TO PARADOR RESORT & SPA FRIENDLY BY NATURE. RESPONSIBLE BY CHOICE. One of Costa Rica’s most prestigious eco-hotels, Parador is the epitome of “responsible luxury.” The resort is located on t… read more
Boutique hotels. And great places nearby
We help you find the best boutique hotels in Jacó: | 280,033 |
Criminal Justice: Police Science
SUN.
PROGRAM OVERVIEW.
UNIQUE FEATURES
SUNY Adirondack offers small class sizes and access to dedicated faculty who have years of real-world experience in law enforcement, criminal justice, clinical psychology and police science. Since these are “hands-on” degree programs, students in both Criminal Justice tracks are provided with an opportunity to intern at local and state law enforcement agencies, public health agencies and court facilities.
CAREER & TRANSFER OPPORTUNITIES
Employment opportunities in Criminal Justice are expected to grow anywhere between ten and twenty percent by 2020, according to the U.S. Bureau of Labor Statistics. As a result, graduates of SUNY Adirondack’s program will find they have flexible options to pursue immediate employment or a four-year degree.
For those seeking transfer, students have the unique option of applying to SUNY Plattsburgh to complete their bachelor's degree on-campus at the SUNY Adirondack Regional Higher Education Center or taking advantage of the college's seamless transfer option with many of the best criminal justice schools in the country, including:
- John Jay College
- The University at Albany
- SUNY Canton
- University of New Haven
- Niagara University
- Hilbert College | 181,784 |
The other real housewives insist: It's 'not all about the Salahis'
>>IMAGE.
One reporter wondered whether the Salahis, whose party-crashing ways got them all the way to NBC's "Today" show -- twice -- had "tarnished" the "Real Housewives" show. This, of course, was asked by a guy -- who clearly is out of his depth trying to cover the "Real Housewives" story, because otherwise he'd know "tarnished" would be an improvement to this gloriously tacky franchise.
He put the question to Turner only. (Matt is one of her children.)
Anyway, Turner answered the guy's question, saying that the whole gate-crashing thing threw all the other housewives "totally off guard."
"It was so unbelievable because they'd been nothing but nice and normal around us," said Turner, adding: "Things did change a little bit after that point."
"I was just shocked that anybody would have the gall to crash a party like that," Turner said later in the call, after taking the umpteenth question about the Salahis.
In reality -- the real reality, not the made-for-TV reality -- the five housewives of D.C. did not all know each other before being cast in this series. Or so said Turner, now flying solo on the phone call.
\subsection{Edge density and degree distribution in the projected graph}
To study the edge density and degree distribution in the projected graph,
we use the following quantity:
\begin{equation}
p_{u_1 u_2} := \frac{M_{R2}}{M_{R1} ^2} \frac{w_{u_1} w_{u_2}}{n_R}.
\end{equation}
The following theorem shows that $p_{u_1 u_2}$ is the asymptotic edge existence probability between the two nodes $u_1$ and $u_2$ in the \emph{projected graph}.
Note that under \Cref{assm:WeightSeqMoments}, we have $w_{u_1}, w_{u_2} = O(n_R^{1/2-\delta})$
and thus $p_{u_1 u_2} = O(n_R ^{-2\delta}) = o(1)$, so the projected graph is sparse as the number of nodes goes to infinity.
\begin{theorem} \label{thm:density_proj}
For any $u_1, u_2 \in L$, as $n_R \to \infty$, we have
\[
\prob{(u_1, u_2) \in E \mid S_L, S_R} =
p_{u_1 u_2} - \frac{p_{u_1 u_2} ^2}{2} + \left(\frac{p_{u_1 u_2}}{6} + \frac{M_{R4}}{2n_RM_{R2}^2} \right) p_{u_1 u_2}^2 \cdot (1+O(n_R^{-2\delta})).
\]
\end{theorem}
\begin{proof}
We consider the complementary case when $u_1$ and $u_2$ are not connected in
the projected graph. This happens exactly when every node $v \in R$ is connected
to at most one of $u_1$ and $u_2$ in the bipartite graph. For each single node $v \in R$, this happens with probability
$1 - \frac{w_{u_1} w_{u_2} w_v ^2}{n_R ^2 M_{R1} ^2}$.
Therefore,
\begin{eqnarray*}
& \log (\prob{(u_1,u_2) \notin E \mid S_L, S_R} )
= \sum_{v \in R} \log\left( 1 - \frac{w_{u_1} w_{u_2} w_v ^2}{n_R ^2 M_{R1} ^2} \right)
\\
&= \sum_{v \in R} \left[ - \frac{w_{u_1} w_{u_2} w_v ^2}{n_R ^2 M_{R1} ^2} - \frac{w_{u_1}^2 w_{u_2}^2 w_v ^4}{2 n_R ^4 M_{R1} ^4} \cdot (1+O(n_R^{-4\delta}))\right]
= - p_{u_1 u_2} - \frac{M_{R4}}{2n_RM_{R2}^2} p_{u_1 u_2}^2 \cdot (1+O(n_R^{-4\delta})).
\end{eqnarray*}
Consequently,
\[
\prob{(u_1,u_2) \in E \mid S_L, S_R}
= p_{u_1 u_2} - \frac{p_{u_1 u_2} ^2}{2} + \left(\frac{p_{u_1 u_2}}{6} + \frac{M_{R4}}{2n_RM_{R2}^2} \right) p_{u_1 u_2}^2 \cdot (1+O(n_R^{-2\delta})).
\]
\end{proof}
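The expansion in the theorem above can be sanity-checked numerically: the exact conditional edge probability is $1-\prod_{v}\bigl(1 - w_{u_1}w_{u_2}w_v^2/(n_R^2 M_{R1}^2)\bigr)$, and it should agree with $p_{u_1u_2}$ up to second order. The sketch below uses an arbitrary moderately heavy-tailed weight sequence and two arbitrary left-node weights, chosen purely for illustration.

```python
import math
import random

random.seed(0)

# Illustrative setup (these numbers are arbitrary, not taken from the paper):
# a moderately heavy-tailed weight sequence on the right, two left-node weights.
n_R = 50_000
w = [random.paretovariate(4.5) for _ in range(n_R)]
M_R1 = sum(w) / n_R
M_R2 = sum(x * x for x in w) / n_R
w_u1, w_u2 = 3.0, 4.0

# Exact probability that u1 and u2 share at least one right neighbor:
# 1 - prod_v (1 - w_u1 * w_u2 * w_v^2 / (n_R^2 * M_R1^2)).
log_miss = sum(math.log1p(-w_u1 * w_u2 * x * x / (n_R**2 * M_R1**2)) for x in w)
exact = -math.expm1(log_miss)

# Leading terms of the expansion from the theorem.
p = (M_R2 / M_R1**2) * w_u1 * w_u2 / n_R
second_order = p - p * p / 2

print(exact, p, second_order)  # exact agrees with p up to O(p^2) corrections
```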
We now examine the expected degree distribution of the projected graph.
One concern is the possibility of multi-edges in our definition of a projection,
which occurs when two nodes $u_1, u_2 \in L$ have more than one common neighbor in the bipartite graph.
The following lemma shows that the probability of having multi-edges
conditional on edge existence is negligible,
meaning that we can ignore the case of multi-edges with high probability.
\begin{lemma} \label{lem:multiedge}
Let $u_1, u_2 \in L$, and let $N_{u_1 u_2}$ be the number of common neighbors of $u_1$ and $u_2$
in the bipartite graph, then
$\prob{N_{u_1 u_2} \geq 2 \mid S_L, S_R, (u_1, u_2) \in E} = O(p_{u_1 u_2})$
as $n_R \to \infty$.
\end{lemma}
\begin{proof}
Note that it suffices to show that $\prob{N_{u_1 u_2} \geq 2 \mid S_L, S_R} = O(p_{u_1 u_2}^2)$.
By the tail formula for expected values,
\begin{align*}
\expect{N_{u_1 u_2} \mid S_L, S_R}
&= \textstyle \sum_{k =1} ^\infty k \cdot \prob{N_{u_1 u_2} = k \mid S_L, S_R} \\
&\textstyle \geq 2 \cdot \prob{N_{u_1 u_2} \ge 2 \mid S_L, S_R} + \prob{N_{u_1 u_2} = 1 \mid S_L, S_R} \\
&\textstyle= \prob{N_{u_1 u_2} \ge 2 \mid S_L, S_R} + \prob{N_{u_1 u_2} \ge 1 \mid S_L, S_R}.
\end{align*}
Note that we also have
\[
\textstyle \expect{N_{u_1 u_2} \mid S_L, S_R} = \sum_{v \in R} \prob{(u_1, v), (u_2, v) \in E_b \mid S_L, S_R} = p_{u_1 u_2},
\]
and consequently
\[
\textstyle \prob{N_{u_1 u_2} \geq 2 \mid S_L, S_R} \leq p_{u_1 u_2} - \prob{N_{u_1 u_2} \geq 1 \mid S_L, S_R} \leq \frac{1}{2} p_{u_1 u_2}^2 + o(p_{u_1 u_2} ^2).
\]
The inequality uses the fact that the event $N_{u_1, u_2} \geq 1$ is equivalent to the existence of edge $(u_1, u_2)$
in the projected graph, which happens with probability $p_{u_1 u_2} - \frac{1}{2} \cdot p_{u_1
u_2}^2 + o(p_{u_1 u_2} ^2)$ by \Cref{thm:density_proj}.
\end{proof}
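Since $N_{u_1 u_2}$ is a sum of independent indicators, the unconditional bound $\prob{N_{u_1 u_2} \geq 2 \mid S_L, S_R} \leq \frac{1}{2}p_{u_1u_2}^2 + o(p_{u_1u_2}^2)$ used in the proof can be evaluated in closed form via the Poisson-binomial distribution. A sketch, again with an arbitrary illustrative weight sequence:

```python
import math
import random

random.seed(0)

# Node v in R links to both u1 and u2 independently with probability
# x_v = w_u1 * w_u2 * w_v^2 / (n_R^2 * M_R1^2); weights below are illustrative.
n_R = 50_000
w = [random.paretovariate(2.5) for _ in range(n_R)]
M_R1 = sum(w) / n_R
M_R2 = sum(x * x for x in w) / n_R
w_u1, w_u2 = 3.0, 4.0
xs = [w_u1 * w_u2 * x * x / (n_R**2 * M_R1**2) for x in w]

# Poisson-binomial: P(N = 0) and P(N = 1) in closed form, hence P(N >= 2).
p0 = math.exp(sum(math.log1p(-x) for x in xs))
p1 = p0 * sum(x / (1.0 - x) for x in xs)
p_ge2 = 1.0 - p0 - p1

p = (M_R2 / M_R1**2) * w_u1 * w_u2 / n_R
print(p_ge2, 0.5 * p * p)  # multi-edges are an order smaller than edges
```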
Now we are ready to analyze the degree of a node in the projected graph.
The following theorem says that the degree of a node in the projected graph is directly
proportional to the weight of the node. Thus, at least in expectation, we can think
of the weight as a proxy for degree.
\begin{theorem} \label{thm:deg_proj_mean}
For any $u \in L$, as $n_L, n_R \to \infty$, we have
\[
\expect{ \degree{u} \mid S_L, S_R} =
\frac{M_{R2} M_{L1}}{M_{R1} ^2} \cdot \frac{n_L}{n_R} \cdot w_u \cdot (1 + o(1)).
\]
\end{theorem}
\begin{proof}
By \Cref{thm:density_proj},
\begin{align*}
\expect{\degree{u} \mid S_L, S_R}
= \sum_{u_1 \in L, u_1 \neq u} \prob{(u, u_1) \in E \mid S_L, S_R}
&= \sum_{u_1 \in L, u_1 \neq u} \frac{w_u w_{u_1}}{n_R} \cdot \frac{M_{R2}}{M_{R1}^2} \cdot (1 + o(1)) \\
&= \frac{M_{R2} M_{L1}}{M_{R1} ^2} \cdot \frac{n_L}{n_R} \cdot w_u \cdot (1 + o(1)).
\end{align*}
\end{proof}
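The proof reduces to summing the first-order edge probabilities over $u_1 \neq u$, which matches the stated formula up to the excluded self-term of relative size $w_u/(n_L M_{L1})$. A quick deterministic check (the weight sequences below are arbitrary illustrations):

```python
# Deterministic weight sequences, chosen only for illustration.
n_L, n_R = 10_000, 20_000
wL = [1.0 + (i % 7) * 0.5 for i in range(n_L)]   # left weights in [1, 4]
wR = [1.0 + (i % 5) * 0.25 for i in range(n_R)]  # right weights in [1, 2]

M_R1 = sum(wR) / n_R
M_R2 = sum(x * x for x in wR) / n_R
M_L1 = sum(wL) / n_L

u = 0
w_u = wL[u]
# Expected degree = sum of first-order edge probabilities p_{u,u1}, u1 != u.
expected_degree = sum(
    (M_R2 / M_R1**2) * w_u * wL[u1] / n_R for u1 in range(n_L) if u1 != u
)
formula = (M_R2 * M_L1 / M_R1**2) * (n_L / n_R) * w_u

# The two differ only by the self-term, of relative size w_u / (n_L * M_L1).
print(expected_degree, formula)
```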
By \Cref{thm:degree_bip_unconditonal}, the bipartite
degree distributions of the left and right nodes are power-law distributions with exponents $\alpha_L$ and $\alpha_R$.
For such bipartite graphs, Nacher and Akutsu~\cite{nacher2011degree} showed that
the degree sequence of the projected graph follows a power law distribution.
\begin{corollary}[Section 2, \cite{nacher2011degree}] \label{thm:deg_proj_distri}
Suppose the node weights on the left and right follow power-law distributions with exponents
$\alpha_L$ and $\alpha_R$. Then the degree distribution of the projected graph is a power-law distribution with decay exponent
$\min(\alpha_L, \alpha_R - 1)$.
\end{corollary}
When $\alpha_R \in (3, 4)$, \Cref{assm:WeightSeqMoments} is satisfied by \Cref{Prp:AssumptionSatisfied}, and
the projected graph would have power-law degree distribution with decay exponent within $(2, 3)$, which is
a standard range for classical theoretical models~\cite{dorogovtsev2002evolution} and is also observed in real-world data~\cite{broido2019scale}. We estimate $\alpha_R \in (3, 4)$ for several real-world bipartite networks
that we analyze (\cref{sec:power-law-stats}).
\subsection{Clustering in the projected graph}
In this section we compute the expected value of the clustering and closure coefficients.
\Cref{thm:lccf_wt} rigorously analyzes the expected value of
local clustering coefficients on networks generated from projections of general bipartite random graphs.
Our results show how (for a broad class of random graphs)
the expected local clustering coefficient varies with the node weight:
it decays at a slower rate for small weight and then decays as the inverse of the weight for large weights.
Combined with the result that the expected projected degree is proportional to the node weight (\Cref{thm:deg_proj_mean}),
this says that there is an inverse correlation of node degree with the local clustering coefficient, which we also verify with simulation.
This has long been a noted empirical property of complex networks~\cite{newman2003structure}, and our analysis
provides theoretical grounding, along with other recent results~\cite{bloznelis2019local,bloznelis2017correlation}.
\begin{theorem} \label{thm:lccf_wt}
If \Cref{assm:WeightSeqMoments} is satisfied with $\delta > \frac{1}{10}$,
then conditioned on $S_L$ and $S_R$ for any node $u \in L$, we have in the projected graph that
\[
\lccf{u} =\frac{1}{1 + \frac{M_{R2} ^2}{M_{R3} M_{R1}} w_u} + o(1).
\]
\end{theorem}
Besides the trend of how local clustering coefficient decays with node weight,
we highlight how the sequence moment of $S_R$ influences the clustering coefficient.
If the distribution of $S_R$ has a heavier tail, then $\frac{M_{R2} ^2}{M_{R3} M_{R1}}$ is small (via Cauchy-Schwarz),
and one would expect higher local clustering compared to cases where
$S_R$ is light-tailed~\cite{bloznelis2017correlation} or
uniform~\cite{bloznelis2013degree, deijfen2009random}.
We also observe this higher level of clustering in simulations (\Cref{fig:cc-vs-degree}).
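The moment ratio $\frac{M_{R2}^2}{M_{R3} M_{R1}}$ equals $1$ for constant weights (the Cauchy-Schwarz equality case) and drops below $1$ as the tail gets heavier, which raises the predicted clustering $1/(1 + \frac{M_{R2}^2}{M_{R3} M_{R1}} w_u)$. A small deterministic illustration with two toy weight sequences:

```python
def moment_ratio(weights):
    """M_2^2 / (M_3 * M_1) for a weight sequence; lies in (0, 1] by Cauchy-Schwarz."""
    n = len(weights)
    m1 = sum(w for w in weights) / n
    m2 = sum(w**2 for w in weights) / n
    m3 = sum(w**3 for w in weights) / n
    return m2 * m2 / (m3 * m1)

uniform = [2.0] * 1000              # constant weights: ratio is exactly 1
skewed = [1.0] * 990 + [10.0] * 10  # a few hubs: heavier tail, smaller ratio

r_u = moment_ratio(uniform)
r_s = moment_ratio(skewed)

w_u = 5.0  # an arbitrary node weight at which to compare predictions
pred_uniform = 1.0 / (1.0 + r_u * w_u)
pred_skewed = 1.0 / (1.0 + r_s * w_u)
print(r_u, r_s, pred_uniform, pred_skewed)
# The skewed sequence has the smaller ratio, hence higher predicted clustering.
```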
We break the proof of \Cref{thm:lccf_wt} into several lemmas.
From this point on, we assume $\delta > 1/10$.
We first present the following results on the limiting probability of wedge and triangle existence,
with proofs given in \Cref{sec:proofs}.
\begin{lemma}\label{lem:wedge_prob}
As $n_R \rightarrow \infty$, for any node triple $(u_1, u, u_2)$,
the probability that they form a wedge centered at $u$ is
\begin{eqnarray*}
\prob{(u, u_1), (u, u_2) \in E \mid S_L, S_R}
&=& \left(1 + \frac{M_{R1} M_{R3}}{M_{R2}^2} \cdot \frac{1}{w_u} \right) p_{u u_1} p_{u u_2} \cdot (1 + o(1)).
\end{eqnarray*}
\end{lemma}
\begin{lemma}\label{lem:triangle_prob}
In the limit of $n_R \rightarrow \infty$, the probability that a node triple
$(u_1, u, u_2)$ forms a triangle is
\begin{eqnarray*}
\prob{(u, u_1) , (u, u_2) , (u_1, u_2) \in E \mid S_L, S_R}
= p_{u u_1} p_{u u_2} \cdot \frac{M_{R1}M_{R3}}{M_{R2}^2}\cdot \frac{1}{w_u} \cdot (1 + o(1))
+ o(p_{u u_1} p_{u u_2}).
\end{eqnarray*}
\end{lemma}
Now we have the following key result on the conditional probability of triadic closure.
\begin{lemma}\label{lem:wedge_close_prob}
In the limit of $n_L, n_R \rightarrow \infty$, if a node triple $(u_1, u, u_2)$
forms a wedge, then the probability of this wedge being closed is
\[
\prob{ (u_1, u_2) \in E \mid S_L, S_R, (u, u_1), (u, u_2) \in E}=
\frac{1}{1 + \frac{M_{R2} ^2}{M_{R3} M_{R1}} w_u} + o(1).
\]
\end{lemma}
\begin{proof}
By combining the result of \Cref{lem:wedge_prob,lem:triangle_prob}, we have
\begin{align*}
& \prob{ (u_1, u_2) \in E \mid S_L, S_R, (u, u_1), (u, u_2) \in E} \textstyle
= \frac{\prob{(u, u_1) , (u, u_2) , (u_1, u_2) \in E \mid S_L, S_R}}
{\prob{(u, u_1), (u, u_2) \in E \mid S_L, S_R}}
\\ & = \textstyle
\frac{p_{u u_1} p_{u u_2} \frac{M_{R1}M_{R3}}{M_{R2}^2}\cdot \frac{1}{w_u} \cdot (1+o(1)) + o(p_{u u_1} p_{u u_2})}
{\left(\frac{M_{R1}M_{R3}}{M_{R2} ^2} \cdot \frac{1}{w_u} + 1\right) p_{u u_1} p_{u u_2} \cdot (1 + o(1))}
= \frac{1 + o(1)}{1 + \frac{M_{R2} ^2}{M_{R3} M_{R1}} w_u} + o(1).
\end{align*}
\end{proof}
Finally, we are ready to prove our main result.
\begin{proof}[Proof of \Cref{thm:lccf_wt}]
According to \Cref{Eq:CondProb_LCC}, the local clustering coefficient is
the conditional probability that a randomly chosen wedge centered at node $u$ forms a triangle.
\Cref{lem:wedge_close_prob} shows that this probability is asymptotically the same
regardless of the weights on the wedge endpoints $u_1, u_2$.
Therefore conditioned on $S_L$ and $S_R$, we have
\begin{align*}
\lccf{u} =
\prob{ (u_1, u_2) \in E \mid S_L, S_R, (u, u_1), (u, u_2) \in E}=
\frac{1}{1 + \frac{M_{R2} ^2}{M_{R3} M_{R1}} w_u} + o(1).
\end{align*}
\end{proof}
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{ccfs-30-30.eps}
\includegraphics[width=0.49\textwidth]{ccfs-25-35.eps}
\caption{Conditional local clustering coefficient distribution on simulated graphs as a function of node weight $w_u$,
where left and right node weights are sampled from a discrete power law distribution with decay rates $\alpha_L$ and $\alpha_R$.
The dots are the mean conditional local clustering coefficients for all nodes with that weight,
and the curve is the prediction from \Cref{thm:lccf_wt}.}
\label{fig:local_cluster}
\end{figure}
\Cref{fig:local_cluster} shows the mean conditional local clustering coefficient of a projected graph as a function of node weights $w_u$
for networks where $n_L = n_R =$ 10,000,000 and weights drawn from discrete power-law distributions with different decay parameters.
We cap the maximum value of the weights at $n_L^{0.3}$, which corresponds to $\delta = 0.2$ in \cref{assm:WeightSeqMoments}.
The empirical clustering is close to the expected value from \cref{thm:lccf_wt}.
We can also analyze the global clustering coefficient (also called the \emph{transitivity}) of the projected graph.
The following theorem says that the global clustering tends to a constant bounded away from 0.
\begin{theorem} \label{thm:gccf}
If \Cref{assm:WeightSeqMoments} is satisfied with $\delta > \frac{1}{10}$,
then conditioned on $S_L$ and $S_R$, we have in the projected graph that
\[
C_G = \frac{1}{1 + \frac{M_{R2} ^2}{M_{R3} M_{R1}} \cdot \frac{M_{L2}}{M_{L1}}}
+ o(1).
\]
\end{theorem}
\begin{proof}
Let $\cW$ be the set of wedges in $G$ and $\cT$ be the set of triangles.
We first show that the global clustering coefficient is always well-defined, i.e. $\prob{|\cW| \geq 1} \geq 1 - \exp(-O(n_R))$.
We show that with high probability, some node on the right partition has degree at least 3.
This implies that a triangle exists in the graph and therefore a wedge exists.
For any given node $v$ on the right, its expected degree is $w_v$ by \Cref{thm:degree_bip}
and the degrees follow a Poisson distribution.
By standard concentration bounds~\cite{cannone17}, $\prob{d_b(v) \leq 2} \leq \exp(-w_v/4)$
for $w_v$ larger than 2 (in particular, this probability is less than $1/3$ when $w_v > 7$).
Thus, under the mild assumption that a constant fraction of the right weights exceed $7$,
the probability that there exists at least one triangle is at least $1 - \left(\frac{1}{3}\right)^{O(n_R)}$.
Next, we note that the probabilities computed in \Cref{lem:wedge_close_prob} remain unchanged
when conditioned on the fact that at least one wedge exists.
Let $E$ be the event that some wedge $(u, u_0, u_1)$ closes into a triangle (with $u$ as the center of the wedge).
\[
\prob{E \cap |\cW| \geq 1} \geq \prob{E} - (1 - \prob{|\cW| \geq 1}) = \prob{E} - \left(\frac{1}{3}\right)^{O(n_R)},
\]
Consequently,
\[
\prob{E} - \left(\frac{1}{3}\right)^{O(n_R)} \leq \prob{E \given |\cW| \geq 1} \leq \prob{E} + \left(\frac{1}{3}\right)^{O(n_R)}.
\]
Finally, $\prob{E} = \Omega(n_R^{-O(1)})$ (i.e., at most polynomially small) for any of the events we previously considered, so the exponentially small deviation does not produce any additional error in our results.
For any node $u$, the probability that a random wedge has center $u$ is proportional to the number of wedges centered at $u$. By our reasoning above, we can assume at least 1 wedge exists, so these probabilities sum to 1. By \Cref{lem:wedge_close_prob}, we have:
\begin{equation*}
\prob{\text{$u$ is the center node}} = \frac{\sum_{b,c \in L} \left( 1 + \frac{M_{R1} M_{R3}}{M_{R2}^2} \cdot \frac{1}{w_u} \right) \cdot p_{u b} p_{u c}}{\sum_{a,b,c \in L} \left( 1 + \frac{M_{R1} M_{R3}}{M_{R2}^2} \cdot \frac{1}{w_a} \right)p_{a b} p_{a c}} + o(1).
\end{equation*}
Putting everything together,
\begin{equation*}
C_G = \sum_{u \in L} \prob{(u, u_1, u_2) \in \cT \given (u, u_1, u_2) \in \cW} \cdot \prob{\text{$u$ is the center}}
= \frac{1}{1 + \frac{M_{R2} ^2}{M_{R3} M_{R1}} \cdot \frac{M_{L2}}{M_{L1}}} + o(1),
\end{equation*}
where the probability is taken over all $u_1, u_2 \in L$ and the second equality uses \Cref{lem:wedge_close_prob} for the probability that $(u, u_1, u_2) \in \cT$.
\end{proof}
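The constant in the theorem above can be probed with a small end-to-end simulation: sample power-law weights, draw a Chung-Lu-style bipartite graph with edge probability $\min(1, w_u w_v/(n_R M_{R1}))$ (our reading of the generative model, an assumption of this sketch), project, and count wedges and triangles. Finite-size effects are substantial at this scale, so the sketch only checks that both the simulated and predicted coefficients are well-defined proportions.

```python
import random

random.seed(1)

n = 2000  # nodes per side; kept small, so asymptotics hold only roughly
cap = n ** 0.4

def power_law_weight(alpha):
    # Inverse-CDF sample from density ~ w^(-alpha) on [1, inf), capped at `cap`.
    u = 1.0 - random.random()  # in (0, 1], avoids division by zero
    return min(cap, u ** (-1.0 / (alpha - 1.0)))

wL = [power_law_weight(2.5) for _ in range(n)]
wR = [power_law_weight(3.5) for _ in range(n)]
M_R1 = sum(wR) / n

# Bipartite graph: edge (u, v) with probability min(1, w_u * w_v / (n * M_R1)).
right_nbrs = []
for v in range(n):
    pv = wR[v] / (n * M_R1)
    right_nbrs.append([u for u in range(n) if random.random() < min(1.0, wL[u] * pv)])

# Projection: the left-neighborhood of each right node becomes a clique.
adj = [set() for _ in range(n)]
for nbrs in right_nbrs:
    for i, a in enumerate(nbrs):
        for b in nbrs[i + 1:]:
            adj[a].add(b)
            adj[b].add(a)

wedges = sum(len(a) * (len(a) - 1) // 2 for a in adj)
# Each triangle is counted once per edge, i.e., three times in total.
triangles = sum(len(adj[u] & adj[v]) for u in range(n) for v in adj[u] if v > u) // 3
C_sim = 3 * triangles / wedges

def moment(ws, k):
    return sum(w**k for w in ws) / len(ws)

C_pred = 1.0 / (
    1.0
    + moment(wR, 2) ** 2 / (moment(wR, 3) * moment(wR, 1)) * moment(wL, 2) / moment(wL, 1)
)
print(C_sim, C_pred)
```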
\begin{figure}[t]
\hskip 20pt
\includegraphics[width=0.46\textwidth]{gcc-25.eps}
\hfill
\includegraphics[width=0.46\textwidth]{gcc-40.eps}
\hskip 20pt
\caption{Expected (via \cref{thm:gccf}) and sampled global clustering coefficients on simulated graphs with discrete power law weight distributions on the left and right nodes with decay rates $\alpha_L$ and $\alpha_R$.
The samples are close to the expected value.
\label{fig:global_cluster}}
\end{figure}
\Cref{fig:global_cluster} shows the expected (computed from \cref{thm:gccf})
and actual global clustering coefficient of the projected graph
with $n_L = n_R =$ 1,000,000.
The weights are drawn from a discrete power law distribution with
fixed decay rate $\alpha_L = 2.5$ or $4.0$ on the left nodes,
varying decay rate $\alpha_R$ on the right nodes, and $w_{\max} = n_L^{0.5}$.
The sampled global clustering coefficients are close to the expectation
at all parameter values.
Finally, we investigate the local closure coefficient $H(u)$.
Analysis under the configuration model predicts that $H(u)$ should be proportional
to the node degree, while empirical analysis demonstrates a much slower increasing trend
versus degree, or even a constant relationship in a coauthorship network that is directedly
generated from the bipartite graph projection~\cite{yin2019local}.
The following result theoretically justify this phenomenon, showing the the expected
value of local closure coefficient is independent from node weight
\begin{theorem}\label{thm:closure}
If \Cref{assm:WeightSeqMoments} is satisfied with $\delta > \frac{1}{10}$,
then conditioned on $S_L$ and $S_R$ we have, in the projected graph,
\[
H(u) = \frac{1}{1 + \frac{M_{R2} ^2}{M_{R3} M_{R1}} \cdot \frac{M_{L2}}{M_{L1}}}
+ o(1)
\]
as $n_R \to \infty$, \ie, the expected closure coefficient is asymptotically independent of node weight.
\end{theorem}
\begin{proof}
By \Cref{thm:lccf_wt}, the probability that a length-2 path $(u, v, w)$ closes into a triangle only depends on its center node $v$. Since the closure coefficient is measured from the head node $u$, the probability that any wedge is closed is independent of $u$ and thus the same across every node in the graph. This implies that the local closure coefficient is equal to the global closure coefficient, which in turn is equal to the global clustering coefficient.
\end{proof}
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{hcfs-30-30.eps}
\includegraphics[width=0.49\textwidth]{hcfs-25-35.eps}
\caption{Conditional local closure coefficient distribution on simulated graphs as a function of node weight $w_u$,
where left and right node weights are sampled from a discrete power law distribution with decay rates $\alpha_L$ and $\alpha_R$.
The dots are the mean conditional local closure coefficients for all nodes with that weight,
and the flat curve is the prediction from \cref{thm:closure}.
Weights with fewer than 5 nodes were omitted. \label{fig:local_closure}
}
\end{figure}
\Cref{fig:local_closure} shows the local closure coefficient of the projected graph as a function of node weights $w_u$,
using the same random graphs as for the clustering coefficient in \cref{fig:local_cluster}.
We observe that the mean local conditional closure coefficient is independent of the node weight in the samples,
which verifies \Cref{thm:closure}.
\begin{remark}
One can strengthen the error bounds in \cref{thm:lccf_wt,thm:gccf,thm:closure} by assuming $\delta > 1/6$.
In particular, instead of an additive $o(1)$ error term, the error terms are a multiplicative $1 + o(1)$ factor.
For example, the global clustering coefficient in \Cref{thm:gccf} would be
\[
C_G = \frac{1}{1 + \frac{M_{R2} ^2}{M_{R3} M_{R1}} \cdot \frac{M_{L2}}{M_{L1}}}(1+o(1)).
\]
\end{remark} | 43,112 |
Accommodations: House - 3 Bedrooms - 2 Baths - (Sleeps 6)
Low Season: $150/night (2-night minimum), $900/week. High Season: $175/night (2-night minimum), $1,100/week. Call about monthly rates. 10% tax on stays of less than 30 days. $85 non-refundable cleaning fee. 5-day minimum stay during Thanksgiving and Christmas. Note: Until confirmed, rates are subject to change without notice.
Date last modified - May 13, 2008 | 4,663 |
\begin{document}
\begin{abstract}
The general stability problem of truncations for a family of functions concentrating mass at the origin is described and a concrete example in the framework of entire optimizers for the fractional Hardy-Sobolev inequality is given. In this short note we point out some quantitative stability estimates, useful in dealing with critical $p-q$ fractional equations.
\end{abstract}
\maketitle
\section{Introduction and main results}
In the last years, a great deal of research has grown around multi-dimensional fractional differential problems of the form
\begin{equation}\label{model}
{\cal K} u=f(x,u),
\end{equation}
where ${\cal K}$ denotes a suitably defined elliptic fractional non-local operator. A general model for {\em linear} ${\cal K}$ is
\[
{\cal K} u (x)={\sf p.\,v.}\int_{\R^{N}}K(x, y) (u(x)-u(y))\, dy, \qquad K(x, y)\simeq |x-y|^{-N-\sigma},
\]
while the main example in the nonlinear setting reads as
\begin{equation}
\label{def}
(-\Delta_{p})^{s}u:=\frac{1}{p}{\sf d}[u]_{s,p}^{p},\qquad [u]_{s, p}^{p}=\iint_{\R^{N}\times\R^{N}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{N+p\, s}}\, dx\, dy\, ,
\end{equation}
with ${\sf d}$ being the Fr\'echet differential. Both families encompass the celebrated fractional Laplacian as a special case.
A large part of this research concerns existence and multiplicity of solutions to \eqref{model}. Indeed, modern non-linear analysis provides a lot of relatively abstract machinery to get such kind of results, and the general schemes of proof usually work once two sets of conditions are met. The first can be called {\em lower order} set of assumptions, as it mainly relates to the right-hand side of \eqref{model}, having little to do with the nature of the driving operator, except for the parameters that define it. Some examples are sub-criticality, sub/super-linearity, or Ambrosetti-Rabinowitz conditions, which are often explicitly imposed in the literature. The second set of conditions is the {\em leading order} one, and it pertains (sometimes subtle) properties of ${\cal K}$ alone, such as the corresponding regularity theory or relevant functional analytic embeddings. Needless to say, the more interesting applications of non-linear analysis in the fractional framework are those where some leading order assumption fails. Indeed, the true nature of
${\cal K}$ lies in what distinguishes it from the usual elliptic differential operators, and an extended discussion of such differences as well as related literature can be found in \cite{MSP}.
Let us now describe a meaningful feature of problems such as \eqref{model} from the functional analytic point of view. If $\Omega$ is a smooth subset of $\R^{N}$ and $0<s<1<p<N/s$ then $(-\Delta_{p})^{s}$ naturally acts on the fractional Sobolev space $W^{s, p}_{0}(\Omega)$, namely the space of all measurable $u:\R^{N}\to \R$ supported in $\overline\Omega$, vanishing at infinity
\footnote{which means $|\{x\in\Omega:|u(x)|>\eps\}|<+\infty$ for every $\eps>0$. This additional condition handles technical issues when $\Omega$ is unbounded.},
and such that the norm $[u]_{s,p}$ in \eqref{def} is finite. In many aspects, the parameter $s$ plays the r\^ole of a differentiability scale, while $p$ prescribes the summability of the $s$-fractional derivative. A suggestive notation giving meaning to this statement consists in defining the $s$-fractional incremental ratio of $u$ and the singular measure $\mu$ on $\R^{2N}$ as
\[
|D^{s}u|(x, y)=\frac{|u(x)-u(y)|}{|x-y|^{s}},\qquad d\mu=|x-y|^{-N}\, dx\, dy,
\]
respectively, so that
\[
[u]_{s, p}=\|D^{s}u\|_{L^{p}(\R^{2N}, d\mu)}.
\]
Notice that since $\mu(\R^{2N})=+\infty$, $L^{p}(\R^{2N}, d\mu)$ does not embed into $L^{q}(\R^{2N}, d\mu)$ for $q<p$ and, accordingly, $W^{s, p}_{0}(\R^{N})$ does not embed into $W^{s,q}_{0}(\R^{N})$. So far so good, as the latter embedding also fails for classical (non-fractional) Sobolev spaces. However, when $\Omega$ is bounded, H\"older's inequality entails
\begin{equation}\label{emb}
W^{1,p}_{0}(\Omega)\hookrightarrow W^{1,q}_{0}(\Omega)\qquad \text{for any $p> q$},
\end{equation}
which led H. Br\'ezis to ask whether such an embedding holds true also at the fractional level for {\em bounded} smooth domains. Surprisingly enough, Mironescu and Sickel \cite{MS} proved that the fractional Sobolev version of \eqref{emb} actually fails even in a set theoretic sense. Notice that $H^{s, p}_{0}(\Omega)$ does embed into $H^{s, q}_{0}(\Omega)$ if $H^{s, p}$ denotes the fractional Bessel potential space. So, when $p> q$, the mixed energy functional
\begin{equation}\label{spq}
J(u)=\| D^{s}u\|_{L^{p}(\R^{2N}, d\mu)}^{p}+\|D^{s}u\|_{L^{q}(\R^{2N}, d\mu)}^{q}
\end{equation}
is well defined and smooth in a space which is smaller than $W^{s, p}_{0}(\Omega)$ and is therefore more delicate to treat with respect to the classical one
\[
J(u)=\|D u\|_{L^{p}(\R^{N})}^{p}+\|D u\|_{L^{q}(\R^{N})}^{q}.
\]
In the non-fractional case $J$ gives rise to the so-called {\em $p$-$q$ Laplacian}, which serves as a model for many applications; see the survey \cite{MM1} and the references therein. The previous discussion highlights that studying its fractional counterpart (given by the differential of \eqref{spq}) requires more care and, sometimes, completely different techniques.
A meaningful item is the problem of {\em quantitative truncation estimates}. Let us describe it in broad (and somehow vague) terms. Given a function space $X\subseteq L^{1}_{{\rm loc}}(\R^{N})$, (that is $W^{s,p}_{0}(\R^{N})\cap W^{s,q}_{0}(\R^{N})$ in our example), consider a family of non-negative functions $\{U_{\eps}\}_{\eps}\subseteq X$ concentrating at the origin, i.e., $U_{\eps}\, dx\weakstarto\mu$ as $\eps\to 0^+$, with $\mu\preccurlyeq \delta_{0}$. A {\em truncation} of $\{U_{\eps}\}_{\eps}$ in $B_{\delta}$ is a family $\{U_{\eps, \delta}\}_{\eps}\subseteq X$ fulfilling
\begin{equation}
\label{ctrunc}
{\rm supp}(U_{\eps, \delta})\subseteq B_{2\delta},\qquad {\rm supp} (U_{\eps, \delta}-U_{\eps})\subseteq \R^{N}\setminus B_{\delta}.
\end{equation}
A {\em quantitative truncation estimate} for a functional $I:X\to\R$ is an explicit first-order asymptotic analysis, as $\eps,\delta\to 0^+$ of $I(U_{\eps, \delta})$: one usually defines $I_{0}$ taking appropriate limits of $I(U_{\eps, \delta})$ and aims at finding explicit bounds (from below, above, or both) for $I(U_{\eps, \delta})- I_{0}$. Clearly, there are many ways to truncate a family of concentrating functions, and each one produces, in principle, different truncation estimates. When $X$ is a $C^{\infty}_{c}(\R^{N})$-module, the {\em truncation by multiplication} looks the most natural: pick any $\varphi\in C^{\infty}_{c}(B_{2})$ such that $\varphi\equiv 1$ on $B_{1}$ and put
\[
U_{\eps,\delta}(x)=\varphi\left(\frac{x}{\delta}\right)\, U_{\eps}(x).
\]
Sometimes the second condition in \eqref{ctrunc} can be weakened or even completely dropped, and general projection operators $\pi_{\delta}: X\to X_{\delta}$ considered, where $X_\delta=\{u\in X:{\rm supp}(u)\subseteq B_{2\delta}\}$. We will not dwell on details of other methods, but rather focus on the particular concrete setting we are going to investigate.
The fractional Hardy-Sobolev inequality reads as
\begin{equation}\label{HS}
\left(\int_{\R^N} \frac{|u|^r}{|x|^\alpha}\, dx\right)^{\frac{1}{r}}\leq C\left(\int_{\R^{2N}}|D^{s}u|^{p}\, d\mu\right)^{\frac{1}{p}}.
\end{equation}
Here, $p>1$, $s\in \, ]0, 1[$, $0\le\alpha\le p\, s<N$, and $r$ is dictated by scaling through
\begin{equation}\label{scal}
\frac{N-\alpha}{r}=\frac{N-p\, s}{p}.
\end{equation}
Every function that realizes the optimal constant in \eqref{HS} is called an Aubin-Talenti function. By analogy with the local case, which formally corresponds to $s=1$, it is conjectured that the Aubin-Talenti functions, up to constant multiples, rescaling and possible (in the case $\alpha=0$) translations, are
\begin{equation}\label{AT}
U(x)=(1+|x|^{\frac{p-\alpha/s}{p-1}})^{\frac{p\, s-N}{p-\alpha/s}}.
\end{equation}
If $\alpha<ps$ then they can be obtained by solving the minimization problem
\begin{equation}\label{S}
0<{\cal S}=\inf\left\{\frac{[u]_{s,p}^p}{\|u\|_{r,\alpha}^{p}}:0<\|u\|_{r,\alpha}<+\infty\right\},\quad\text{where}\; \|u\|_{r, \alpha}=\left(\int_{\R^N} \frac{|u|^r}{|x|^\alpha}\, dx\right)^{\frac{1}{r}}
\end{equation}
via concentration-compactness. Some basic properties of the minimizers are described below.
\begin{proposition}[\cite{MM}, Theorem 1.1]
\label{propmin}
Let $p>1$, $s\in \ ]0, 1[$, $0\le \alpha<p\, s<N$, and $r$ satisfy \eqref{scal}. Then \eqref{S} is solvable and its minimizers $U$ are bounded continuous functions of strict constant sign. The positive ones turn out (possibly after a translation in the case $\alpha=0$) to be radial and radially non-increasing. They obey the decay estimate
\begin{equation}\label{decay}
U(\rho)\simeq \rho^{-\frac{N-p\, s}{p-1}} \qquad \text{as $\rho\to +\infty$}
\end{equation}
and, moreover,
\begin{equation}\label{decD}
[U]_{s,q}=\|D^{s}U\|_{L^{q}(\R^{2N}, d\mu)}<+\infty\quad\forall\, q\in \left]\frac{N(p-1)}{N-s}, p\right].
\end{equation}
\end{proposition}
Since problem \eqref{S} is homogeneous in $u$, the set of its positive minimizers turns out to be a cone. The associated Euler-Lagrange equation reads
\[
(-\Delta_{p})^{s} u=\lambda\, |x|^{-\alpha}\, u^{r-1}
\]
with arbitrary $\lambda>0$ when $\alpha<ps$, which we assume. It is convenient to normalize minimizers $U$ by requiring that
$\lambda=1$, namely
\begin{equation}\label{EL}
(-\Delta_{p})^{s}U=|x|^{-\alpha}\, U^{r-1}.
\end{equation}
This implies, after testing with $U$,
\begin{equation}\label{norm}
[U]_{s,p}^{p}=|U|_{r,\alpha}^{r}={\cal S}^{\frac{N-\alpha}{p\, s-\alpha}}.
\end{equation}
Finally, observe that for any $\eps>0$ the function
\begin{equation}\label{ueps}
U_{\eps}(x)=\eps^{{\frac{p\, s-N}{p}}}U\left(\frac{x}{\eps}\right)
\end{equation}
is still a minimizer fulfilling \eqref{EL}--\eqref{norm}. Due to \eqref{decay} the family $\{U_{\eps}\}_{\eps}$ concentrates at zero and \eqref{norm} entails
\[
|x|^{-\alpha}\, U^{r}_\eps\ \weakstarto \ {\cal S}^{\frac{N-\alpha}{p\, s-\alpha}}\, \delta_{0}.
\]
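For completeness, the invariance claimed after \eqref{ueps} follows from the change of variables $x\mapsto \eps x$ together with the scaling relation \eqref{scal}; a quick check:
\begin{align*}
[U_{\eps}]_{s,p}^{p}
  &= \eps^{p\,s-N}\iint_{\R^{N}\times\R^{N}}
     \frac{|U(x/\eps)-U(y/\eps)|^{p}}{|x-y|^{N+p\,s}}\,dx\,dy
   = \eps^{(p\,s-N)+2N-(N+p\,s)}\,[U]_{s,p}^{p}
   = [U]_{s,p}^{p},\\
\|U_{\eps}\|_{r,\alpha}^{r}
  &= \eps^{\frac{(p\,s-N)\,r}{p}}\int_{\R^{N}}\frac{U(x/\eps)^{r}}{|x|^{\alpha}}\,dx
   = \eps^{\frac{(p\,s-N)\,r}{p}+N-\alpha}\,\|U\|_{r,\alpha}^{r}
   = \|U\|_{r,\alpha}^{r},
\end{align*}
since $\frac{(p\,s-N)\,r}{p}=\alpha-N$ by \eqref{scal}.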
Our main result, chiefly based on \cite{MM}, reads as follows.
\begin{theorem} \label{maintheo}
Let $p>1$, $s\in \ ]0, 1[$, $0\le \alpha<ps<N$, and $r$ satisfy \eqref{scal}. Given any positive minimizer $U$ for \eqref{S} fulfilling \eqref{norm}, let $U_{\eps}$ be defined by \eqref{ueps}. Then there exists a family of truncations $\{U_{\eps, \delta}\}_\eps$ of $U_{\eps}$ in $B_{\delta}$ such that, for every $\eps\le\delta$,
\begin{align}
\label{st1}
q\in \ \left]\frac{N(p-1)}{N-s}, p\right]\quad &\implies\quad [U_{\eps, \delta}]_{s, q}\le C\, \eps^{\frac{N}{q}-\frac{N}{p}};\\
\label{nu}
q\in \ \left]1, \frac{N(p-1)}{N-s}\right]\quad&\implies\quad\forall\,\nu>0\ \exists\, C_{\nu}:\; [U_{\eps, \delta}]_{s, q}\le C_{\nu}\, \delta^{\frac{N}{q}-\frac{N}{p}}\, \left(\frac{\eps}{\delta}\right)^{\frac{N-p\, s}{p\, (p-1)}-\nu}.
\end{align}
The constants $C$ and $C_\nu$ are independent of $\eps$ and $\delta$, but may depend on $U$.
\end{theorem}
\noindent
Let us make a few comments on this result.
\begin{description}
\item[Difficulties] As discussed before, a delicate issue peculiar to the fractional setting is that, no matter how smoothly the truncation is implemented, there is no direct way to bound $[U_{\eps, \delta}]_{s, q}$ in terms of $[U_{\eps, \delta}]_{s, p}$. More importantly, even proving that $[U_{\eps, \delta}]_{s, q}$ is finite turns out to be somewhat non-trivial. Indeed, if $q\le p$ then
\[
[U_{\eps}]_{s, q}<+\infty \quad \Leftrightarrow\quad q>\frac{N(p-1)}{N-s};
\]
cf. the introduction of \cite{MM}. Consequently, for $q\le\frac{N(p-1)}{N-s}$, any bound of $[U_{\eps, \delta}]_{s, q}$ in terms of $[U_{\eps}]_{s, q}$ is useless, as the latter is infinite.
\item[Comparison with the local case] The truncation proposed here can also be performed in the classical framework, i.e., $s=1$. In this case, the minimizers of \eqref{S} are given by \eqref{AT} and an explicit calculation shows
\begin{equation}\label{fest}
\|\nabla U_{\eps, \delta}\|_{L^{q}(\R^{N})}\le
\begin{cases}
C\, \eps^{\frac{N}{q}-\frac{N}{p}}&\text{if}\ q\in \ \left]\frac{N(p-1)}{N-1}, p\right],\\[10pt]
C\, \delta^{\frac{N}{q}-\frac{N}{p}}\big(\eps/\delta\big)^{\frac{N-p}{p(p-1)}}&\text{if}\ q\in \ \left]1,\frac{N(p-1)}{N-1}\right[\ ;
\end{cases}
\end{equation}
see \cite{DH} for similar estimates of truncations via cut-off. Hence, there is full agreement in the first case and `almost the same' estimate in the other, with the nonlocal bound being slightly worse than the local one, but only by an arbitrarily small amount from the asymptotic point of view.
\item[Applications] Quantitative truncation estimates prove particularly useful in the study of critical problems of Br\'ezis-Nirenberg type. Those involving lower-order norms of the gradient naturally arise when the leading term in the equation is of $p$-$q$ Laplacian type, and estimates like \eqref{fest} have played a key r\^ole; see, e.g., \cite{YY, LZ, CMP}. \\
A similar theory has been attempted in recent years for the fractional setting, often based on the assumption that minimizers $U$ of \eqref{S} have a finite $[U]_{s,q}$ semi-norm when $q\le\frac{N(p-1)}{N-s}$. This hypothesis would indeed give estimates fully analogous to the classical case, namely \eqref{nu} with $\nu=0$, but, as already pointed out, it is {\em false}. Nevertheless, we hope that the weaker version \eqref{nu} still suffices to justify most of the results in the literature.
\end{description}
\vskip10pt
\noindent
{\em Notations}: $|A|$ will denote the Lebesgue measure of $A\subseteq \R^{N}$. If $p\ge 1$ and $u:\R^N\to\R$ is measurable then $\|u\|_{L^{p}}=\|u\|_{L^{p}(\R^{N}, dx)}$, provided no confusion can arise. The symbol $C$ will denote a (finite) positive constant, which may change in value from line to line, and whose dependencies are specified when necessary.
\section{Description of truncation and proof of Theorem \ref{maintheo}}
Let $U$ be a normalized minimizer (i.e., obeying \eqref{EL}) and let $U_\eps$ be given by \eqref{ueps}. We will describe a basic truncation technique for
$\{U_{\eps}\}_{\eps}$ first introduced in \cite{MPSY}. The polynomial decay \eqref{decay} reads as
\[
c_{1} \, \rho^{-\frac{N-p\, s}{p-1}}\le U(\rho)\le c_{2}\, \rho^{-\frac{N-p\, s}{p-1}},\quad \rho\ge 1,
\]
where $c_{1}$ and $c_{2}$ depend on $U$. For every $\theta>1$ one infers
\[
\frac{U(\theta \rho)}{U(\rho)}\le \frac{c_{2}}{c_{1}}\, \theta^{\frac{p\, s-N}{p-1}}
\]
so that there exists $\bar\theta$ large such that
\begin{equation}\label{theta}
\frac{U(\bar\theta \rho)}{U(\rho)}\le\frac{1}{2}\, ,\quad\rho\ge 1.
\end{equation}
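For instance, in view of the previous inequality any
\[
\bar\theta\ge\left(\frac{2\,c_{2}}{c_{1}}\right)^{\frac{p-1}{N-p\,s}}
\]
works, since then $\frac{c_{2}}{c_{1}}\,\bar\theta^{\frac{p\, s-N}{p-1}}\le\frac{c_{2}}{c_{1}}\cdot\frac{c_{1}}{2\,c_{2}}=\frac{1}{2}$.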
Set, provided $\eps,\delta>0$,
\[
m_{\eps,\delta}=\frac{U_\eps(\delta)}{U_\eps(\delta) - U_\eps(\bar\theta \delta)}
\]
as well as
\[
G_{\eps,\delta}(t) =
\begin{cases}
0 &\text{if }\ 0 \le t \le U_\eps(\bar \theta\, \delta),\\[5pt]
m_{\eps,\delta}\, (t - U_\eps(\bar\theta\, \delta)) &\text{if }\ U_\eps(\bar\theta\, \delta) \le t \le U_\eps(\delta),\\[5pt]
t &\text{if }\ t \ge U_\eps(\delta).
\end{cases}
\]
Evidently, the function $G_{\eps, \delta}:\R_{+}\to \R_{+}$ is non-decreasing and absolutely continuous. We define the {\em truncation by composition} of the family $\{U_{\eps}\}_{\eps}$ in $B_{\bar\theta\delta}$ as
\[
U_{\eps,\delta}(\rho) = G_{\eps,\delta}(U_\eps(\rho)),
\]
which is a radially non-increasing function such that
\[
U_{\eps,\delta}(\rho)=
\begin{cases}
U_\eps(\rho) &\text{if }\ \rho\le\delta,\\[5pt]
0 &\text{if }\ \rho\ge\bar\theta\, \delta.
\end{cases}
\]
The following truncation estimates hold true. More general situations are treated in \cite[Lemmas 2.10-2.11]{CSM}.
\begin{lemma}[\cite{Y}, Lemma 2.7] \label{Lemma 3}
There exists a constant $C=C(U, N, p, s)>0$ such that for every $\eps\leq \delta/2$ it holds
\[
[U_{\eps,\delta}]_{s,p}^p \le {\cal S}^{\frac{N-\alpha}{p\, s-\alpha}} + C\, \left(\frac{\eps}{\delta}\right)^{\frac{N-p\,s}{p-1}}
\qquad\text{and}\qquad
|U_{\eps,\delta}|_{r,\alpha}^{r}\ge {\cal S}^{\frac{N-\alpha}{p\, s-\alpha}} - C\, \left(\frac{\eps}{\delta}\right)^{\frac{N-\alpha}{p-1}}.
\]
\end{lemma}
To prove Theorem \ref{maintheo}, some higher differentiability properties at the Besov scale for $U$, essentially contained in \cite{MM}, will be exploited. Let $0<\sigma<2$. The homogeneous Besov semi-norm of a measurable function $v:\R^N\to\R$ is
\[
[v]_{B^{\sigma}_{p, \infty}}:=\sup_{|h|>0}\||h|^{-\sigma}{\sf d}^{2}_{h}v\|_{L^{p}},\quad\text{where}\quad
{\sf d}^{2}_{h}v(x)=2\, v(x+h)-v(x)-v(x+2h).
\]
When $\sigma<1$, it is equivalent to the one involving first-order differences, namely
\[
[v]_{{\cal B}^{\sigma}_{p, \infty}}=\sup_{|h|>0}\||h|^{-\sigma}{\sf d}_{h}v\|_{L^{p}},\quad\text{with}\quad
{\sf d}_{h}v(x)=v(x)-v(x+h).
\]
Indeed, chiefly using $2\,{\sf d}_{h}={\sf d}_{2h}-{\sf d}^{2}_{h}$, one has
\begin{equation}\label{Bequiv}
\frac{1}{2}\, [v]_{B^{\sigma}_{p, \infty}}\le [v]_{{\cal B}^{\sigma}_{p, \infty}}\le \frac{1}{2-2^{\sigma}}\, [v]_{B^{\sigma}_{p, \infty}}.
\end{equation}
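To justify the second inequality in \eqref{Bequiv}, set $F(h)=|h|^{-\sigma}\|{\sf d}_{h}v\|_{L^{p}}$ and suppose that $\sup_{h}F<+\infty$ (for $v\in L^{p}(\R^{N})$ this is legitimate after first restricting to $|h|\ge\delta$, where the supremum is finite, and then letting $\delta\to 0$). From $2\,{\sf d}_{h}={\sf d}_{2h}-{\sf d}^{2}_{h}$ and the triangle inequality,
\[
F(h)\le 2^{\sigma-1}\,\frac{\|{\sf d}_{2h}v\|_{L^{p}}}{|2h|^{\sigma}}+\frac{1}{2}\,\frac{\|{\sf d}^{2}_{h}v\|_{L^{p}}}{|h|^{\sigma}}\le 2^{\sigma-1}\,\sup_{h}F+\frac{1}{2}\,[v]_{B^{\sigma}_{p, \infty}},
\]
whence $(1-2^{\sigma-1})\,\sup_{h}F\le\frac{1}{2}\,[v]_{B^{\sigma}_{p, \infty}}$, which is the claim since $2\,(1-2^{\sigma-1})=2-2^{\sigma}>0$ for $\sigma<1$. The first inequality follows from ${\sf d}^{2}_{h}v(x)={\sf d}_{h}v(x+h)-{\sf d}_{h}v(x)$.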
\begin{lemma}\label{Beso}
Under the assumptions of Proposition \ref{propmin}, let $U$ be a minimizer for \eqref{S}. Then there exists $\bar\sigma>s$ (depending on $N, p, s, r,\alpha$) such that
\begin{equation}\label{best}
[U]_{B^{\sigma}_{p, \infty}}<+\infty\quad\forall\, \sigma\in [s,\bar\sigma].
\end{equation}
\end{lemma}
\begin{proof}
By \cite[Lemma 5.6]{MM}, the function $U$ weakly solves $(-\Delta_{p})^{s}U=f$, with $f\in L^{\gamma}(\R^{N})$ for every
$\gamma\in\left[1, \frac{N}{\alpha}\right[$. Proposition \ref{propmin} ensures that $U\in L^{\infty}(\R^{N})$, whence $U\in L^{\beta}(\R^{N})$ for all $\beta\in \ \left]\frac{N(p-1)}{N-p\, s}, +\infty\right]$, as an explicit calculation exploiting \eqref{decay} shows. We can thus apply the regularity estimate \cite[Lemma 4.3]{MM} to get $[U]_{B^{\sigma}_{p, \infty}}<+\infty$ once
\[
\sigma=
\begin{cases}
\dfrac{p\, s}{p-\theta}&\text{if $p\ge 2$},\\[10pt]
\dfrac{2\, s}{2-\theta}&\text{if $1<p<2$},
\end{cases}
\quad \text{with $\theta\in \ ]0, 1]$ such that}\quad
\begin{cases}
\dfrac{\theta}{p}+\dfrac{1-\theta}{\beta}=\dfrac{1}{\gamma'},\\[10pt]
\dfrac{N\, (p-1)}{N-p\, s}<\beta\le +\infty,\\[10pt]
1\le \gamma<\frac{N}{\alpha}.
\end{cases}
\]
The system prescribing possible values of $\theta$ can be explicitly solved, and we arrive at
\[
0\le \theta\le
\begin{cases}
p\left(1-\dfrac{\alpha}{N}\right)&\text{if $\dfrac{1}{p}>\max\left\{\dfrac{N-p\, s}{N\, (p-1)}, 1-\dfrac{\alpha}{N}\right\}$},\\[10pt]
1&\text{otherwise}.
\end{cases}
\]
Letting
\[
\bar \sigma
=\begin{cases}
\dfrac{p\, s}{p-\bar\theta}&\text{if $p\ge 2$},\\[10pt]
\dfrac{2\, s}{2-\bar\theta}&\text{if $1<p<2$},
\end{cases}
\quad\text{where}\quad\bar\theta=\min\left\{1, p\left(1-\frac{\alpha}{N}\right)\right\}\, ,
\]
the conclusion follows.
\end{proof}
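We also record, for completeness, the explicit calculation behind $U\in L^{\beta}(\R^{N})$ for every $\beta\in\ \left]\frac{N\,(p-1)}{N-p\, s}, +\infty\right]$: since $U$ is bounded, \eqref{decay} yields
\[
\int_{\R^{N}}U^{\beta}\,dx\le C+C\int_{1}^{+\infty}\rho^{-\beta\,\frac{N-p\, s}{p-1}+N-1}\,d\rho,
\]
and the last integral converges if and only if $\beta\,\frac{N-p\, s}{p-1}>N$, i.e., $\beta>\frac{N\,(p-1)}{N-p\, s}$ (the case $\beta=+\infty$ being Proposition \ref{propmin}).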
The next elementary lemma will also be employed.
\begin{lemma}
Suppose $0<\sigma<1$ and $1\le q\le p$. Then, for every measurable function $v:\R^N\to\R$ such that ${\rm supp}(v)\subseteq B_{R}$ one has
\begin{equation}\label{Bpq}
[v]_{{\cal B}^{\sigma}_{q, \infty}}\le C(N, p, q)\, R^{\frac{N}{q}-\frac{N}{p}}\, [v]_{{\cal B}^{\sigma}_{p, \infty}}.
\end{equation}
\end{lemma}
\begin{proof}
There is no loss of generality in assuming that the right-hand side of \eqref{Bpq} is finite and, after a scaling, that $R=1$. Observe that from ${\rm supp}(v)\subseteq B_{1}$ we infer
\[
\|{\sf d}_{h} v\|_{L^{r}}=2^{\frac{1}{r}}\, \|v\|_{L^{r}} \quad\text{provided}\;\; |h|\ge 2,\;\; r\geq 1,
\]
since $v(x)$ and $v(x+h)$ have disjoint supports. Via H\"older's inequality, this entails
\[
\|{\sf d}_{h} v\|_{L^{q}}=2^{\frac{1}{q}}\, \|v\|_{L^{q}}\le 2^{\frac{1}{q}}\, |B_{1}|^{\frac{1}{q}-\frac{1}{p}}\, \|v\|_{L^{p}}\leq 2^{\frac{1}{q}-\frac{1}{p}}\, |B_{1}|^{\frac{1}{q}-\frac{1}{p}}\,\|{\sf d}_{h} v\|_{L^{p}}
\]
for $|h|\ge 2$. If $|h|\le 2$ then ${\rm supp}({\sf d}_{h}v)\subseteq B_{4}$. Hence,
\[
\|{\sf d}_{h} v\|_{L^{q}}\le |B_{4}|^{\frac{1}{q}-\frac{1}{p}}\,\|{\sf d}_{h} v\|_{L^{p}}
\]
and taking suprema after multiplying by $|h|^{\sigma}$ completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{maintheo}]
Consider first the case $\frac{N(p-1)}{N-s}<q\le p$. Inequality \eqref{decD} yields $[U]_{s,q}<+\infty$. Since $G_{\eps, \delta}$ is Lipschitz continuous with constant ${\rm Lip}(G_{\eps, \delta})=m_{\eps, \delta}$ while $m_{\eps, \delta}\le 2$ due to \eqref{theta}, after scaling one has
\[
[U_{\eps, \delta}]_{s, q}\le 2\, [U_{\eps}]_{s, q}=2\, \eps^{\frac{p\, s-N}{p}}\eps^{\frac{N-q\, s}{q}}\, [U]_{s,q}=C_{U}\, \eps^{\frac{N}{q}-\frac{N}{p}},
\]
which shows \eqref{st1}. Let now $1<q\le\frac{N\, (p-1)}{N-s}$ and let $\bar\sigma$ be given by Lemma \ref{Beso}; up to shrinking $\bar\sigma$, we may and do assume $s<\bar\sigma<1$. If $\sigma\in \ ]s, \bar\sigma]$ then, thanks to \eqref{Bequiv}, the inequality $|{\sf d}_{h}U_{\eps, \delta}|\le 2\, |{\sf d}_{h} U_{\eps}|$ (due to ${\rm Lip}(G_{\eps, \delta})\leq 2$), and a scaling argument, we have
\[
[U_{\eps, \delta}]_{B^{\sigma}_{p, \infty}}\le 2\, [U_{\eps, \delta}]_{{\cal B}^{\sigma}_{p, \infty}}\le 4\, [U_{\eps}]_{{\cal B}^{\sigma}_{p, \infty}}\le \frac{4}{2-2^{\sigma}}\, [U_{\eps}]_{B^{\sigma}_{p, \infty}}\le \frac{4}{2-2^{\bar\sigma}}\, \eps^{s-\sigma}\, [U]_{B^{\sigma}_{p, \infty}}.
\]
Here the constant $\frac{4}{2-2^{\bar\sigma}}$ depends on $\bar\sigma$ alone. Consequently,
\[
[U_{\eps, \delta}]_{B^{\sigma}_{p, \infty}}\le C_{U}\,\eps^{s-\sigma},
\]
with $C_{U}<+\infty$ thanks to \eqref{best}. Pick any $t\in\left]\frac{N\, (p-1)}{N-s}, p\right[$. From ${\rm supp} (U_{\eps, \delta})\subseteq B_{\bar\theta\, \delta}$, \eqref{Bpq}, \eqref{Bequiv}, and the above inequality it follows
\begin{equation}\label{last}
[U_{\eps, \delta}]_{B^{\sigma}_{t, \infty}}\le C\, (\bar\theta\delta)^{\frac{N}{t}-\frac{N}{p}}\, [U_{\eps, \delta}]_{B^{\sigma}_{p, \infty}}\le \bar C_{U}\, \delta^{\frac{N}{t}-\frac{N}{p}}\, \eps^{s-\sigma}.
\end{equation}
Thus, Lemma 5.1 in \cite{MM} can be used, with summability exponents $q,t$ and differentiability parameters $s<\sigma$, to achieve
\[
[U_{\eps, \delta}]_{s, q}\le C\, \delta^{\frac{N}{q}-\frac{N}{t}+\mu\, (\sigma-s)}\, [U_{\eps, \delta}]_{B^{\sigma}_{t, \infty}}^{\mu}\, [U_{\eps, \delta}]_{s, t}^{1-\mu}\quad\forall\,\mu\in\ ]0, 1[\, ,
\]
where $C$ depends on all parameters involved, except $\eps$ and $\delta$. The choice of $t$ ensures that we can estimate $[U_{\eps, \delta}]_{s, t}$ through \eqref{st1}, while \eqref{last} bounds $[U_{\eps, \delta}]_{B^{\sigma}_{t, \infty}}$, so that both are finite. Summing up, one has
\[
\begin{split}
[U_{\eps, \delta}]_{s, q}&\le \bar C_{U}\, \delta^{\frac{N}{q}-\frac{N}{t}+\mu\, (\sigma-s)}\, \delta^{\mu\, \big(\frac{N}{t}-\frac{N}{p}\big)}\, \eps^{\mu\, (s-\sigma)}\eps^{(1-\mu)\, \big(\frac{N}{t}-\frac{N}{p}\big)}\\
&=\bar C_{U}\, \delta^{\frac{N}{q}-\frac{N}{p}}\, \left(\frac{\eps}{\delta}\right)^{(1-\mu)\, \big(\frac{N}{t}-\frac{N}{p}\big)-\mu\, (\sigma-s)}
\end{split}
\]
with $\bar C_{U}$ depending on all the parameters except $\eps$ and $\delta$.
This shows \eqref{nu}: indeed, the exponent $(1-\mu)\, \big(\frac{N}{t}-\frac{N}{p}\big)-\mu\, (\sigma-s)$ is always less than
$\frac{N-p\, s}{p\, (p-1)}$, but can be made arbitrarily close to it by choosing $\mu$ and $t-\frac{N\,(p-1)}{N-s}$ small enough.
\end{proof} | 62,146 |
Need an IgG Test in Keizer, OR? Simply order online 24/7 and get a fast, affordable IgG Test at a lab near you.
Why would I get an IgG Test?
An IgG Test measures IgG antibodies in the blood. People in Keizer, OR order an IgG Test for general screening or to guide therapy.
How much does an IgG Test cost?
Our IgG Test prices include the doctor's order that you need to get tested in Keizer, OR, all lab fees and taxes, and a pdf copy of your results. For current prices, please CLICK HERE.
We are pleased to offer an IgG Test in Keizer, OR.
IgG Test in other cities near Keizer, OR:Beavercreek | Colton | Gates | Lyons | Marion | Marylhurst | Mehama | Mill City | Molalla | Mulino | Portland | Saint Benedict | Salem | Scotts Mills | Silverton | 157,815 |
Welcome to Hearts 'n' Hands.
September
Click on the "Calendar" link on the above to learn more about upcoming events, speakers and programs!
September 2014
This class will be a 1½ day class. On Sept 9th you will receive a custom jacket fitting. On Sept 10th you will create t
Back to property listing
Enjoy the luxury of a low-maintenance, top quality lifestyle in this impressive near-new family home and entertainer. Set in the sought-after “Thornton” estate, this expansive home is perfect for families of all ages, with its flowing indoor/outdoor living areas, attractive parkland environment and proximity to rail, bus interchange, shopping centre and schools.
- Top quality freestanding two-storey family home on corner block with wraparound low-maintenance gardens and rear lane access to double lock-up garage
- An exceptional entertainer with huge open plan living/dining with tiled floors and patio doors to alfresco entertaining terrace and level fully-fenced lawns
- Open plan gas kitchen with stone benchtops, breakfast bar and stainless steel appliances
- Generous garden front lounge plus ground floor study
- Four upper level bedrooms with built-ins, the massive master suite with walk-in robe, en-suite and covered balcony
- Designer bathrooms, main with bath and separate shower, and laundry
- Immaculately presented with quality finishes, ducted air-conditioning and NBN connection
- Just completed, the Thornton estate includes 7 hectares of public open space, with central village green, children’s playground, community pavilion, barbecue area, exercise equipment, canal water feature and walking and cycle paths
- A few minutes’ walk to Penrith Station and bus interchange, with easy access to the City, Parramatta and Canberra motorways
- Close to Westfield Penrith with Myer, Woolworths and Target, library and arts centre
- Walk to numerous preschools and kindergartens, primary schools and Penrith High School
- Ideal for families working in Penrith or Parramatta, or anyone looking for a quality family lifestyle with flexible space and exceptional convenience to transport, schools, shops and leisure facilities.
To arrange an inspection or obtain further information, please contact Nicolette Mamo from the Guardian Realty Property Management Team on 9651 1666.
Penrith
Property ID: GRPM639
Property Type: House
Price: $580 per week
Inspections:
Sat 21 Apr, 2:30pm - 2:45pm
Nicolette Mamo
M:0438 733 951 | 153,175 |
MOBILE, Ala. -- Most people don’t struggle with raising their hand in class to answer questions or making telephone calls. But for a person who stutters, such activities can trigger tremendous anxiety.
“Usually, the more you don’t want to stutter, the more you stutter,” said Jeremy Hyde, 33. “A lot of times, if I have to make an important call, chances are I am going to stutter during the call. It’s not so much of a problem if it’s slight stuttering, but if I get into a stuttering block, that’s when it really becomes embarrassing.”
Hyde began stuttering at age 3, but was able to manage the condition until 2002, when he had a job that required a lot of telephone communication.
Three years later, Hyde learned of the Mobile/Baldwin County chapter of the National Stuttering Association, a support group for people who stutter.
“I never would have thought such a group existed,” said Hyde, who is the chapter’s current president. “I thought anything free that might help me with my speech was worth checking out. After attending the meeting, I knew I belonged there. I thought it was the coolest thing, that there was a group where you could share stuttering experiences and anxieties and also learn a good deal about speech.”
In 2008, David Evans, Ph.D., moved to Mobile, where he works at the University of South Alabama as an assistant professor in the department of speech pathology and audiology.
Evans, himself a person who stutters, joined the support group from which he has “benefited tremendously.”
“I enjoy connecting with people who have had the same experiences, both good and bad,” said Evans. “It’s also helpful to talk with others to reduce the frustrations that I have with the disorder.”
He said stuttering is difficult to define and it has both “overt and covert characteristics.”
The overt portion is an “impairment in the fluency of speech production,” said Evans, who explained that the person knows what words he or she wants to speak, but cannot say them without “involuntary repetitions or prolongations of sounds, syllables and words.”
He described the covert aspect of stuttering as the impairment’s emotional impact. It may include feelings of fear, embarrassment and shame, which in turn may lead to “avoidance of certain words, people and speaking situations.”
Evans credited the comfortable environment of support groups with his success. “I have heard from others, and I personally believe, that talking about stuttering with other people is a critical aspect of my long-term change,” he said.
The support group, initially formed in 1987, meets the second Tuesday of each month on the University of South Alabama campus in the speech and hearing clinic, which is housed in the health sciences building.
Retired minister William “Billy” McLean, 74, has been attending the group meetings for more than 20 years.
“I came when I was 51 to the first meeting of our chapter, and it was the first time I had ever talked to another stutterer about stuttering,” said McLean. “Everyone’s story sort of resonated with me.”
McLean said when he was a minister he used to “sweat bullets” for fear he would be “called on extemporaneously to pray.” He said the group meetings have helped him become “much more comfortable” in such situations.
Emma Bumpers also began attending meetings years ago and said the support group gave her the confidence to change her career.
“It is a wonderful support that has really made a difference in my life,” said Bumpers. “It changed the course of my life. I have always been a stutterer, and always been introverted. I always said I would want to work where I didn’t have to talk a whole lot. I wanted to be behind scenes.”
So Bumpers worked for years in a retail job until a customer asked about her speech and told her about the speech therapy program at the University of South Alabama.
Before she began therapy, Bumpers said she couldn’t talk on the phone or say her name without stuttering. The therapy and support group helped her so much that she attended college and became a registered nurse.
“I know that my whole destiny has just been changed and my world has just opened up,” she said. “I feel I’m a butterfly. It has made such a wonderful impact on my life. I never thought in my wildest dreams that I would be an R.N. explaining things and teaching.”
SUPPORT FOR STUTTERERS
The Mobile/Baldwin County Chapter of National Stuttering Association meets the second Tuesday of each month at 6 p.m.
Location: USA Alabama Speech & Hearing Center, 5721 USA Drive North, Health Sciences Building, Room 1119.
For more information, call 251-402-2678.
(This story was written by Christie Lovvorn, Press-Register correspondent.) | 389,303 |
When you visit the site, information may be placed on your computer in the form of a cookie or similar file. This helps us in several ways. By tracking your preferences across visits, we aim to give you a better browsing experience; for example, cookies enable us to tailor the site and advertisements to your interests and preferences. Almost all internet browsers offer options to delete cookies from your hard drive, block them from being written, or warn you before they are saved. For more information, please refer to your browser's help files and usage information.
Information with regard to cookies contained in the Doğuş Technology website is given in the tables below.
MEMPHIS, Tenn. — Twelve gang members have been arrested in a major drug bust involving the seizure of marijuana, cocaine, illegal weapons and $1 million in cash, authorities said this morning.
The multi-agency investigation was the culmination of several months of work and ranks as "one of the biggest busts in the history of the city," said Memphis Police Director Larry Godwin.
The seizures, carried out Tuesday, included $1 million in cash, more than a ton of marijuana, 20 pounds of cocaine, 15 illegal firearms and dozens of vehicles.
The investigation, dubbed Titan 2, is ongoing.
Copyright 2007 Memphis Commercial Appeal
TITLE: Composition of Two Galois extension is Galois Extension
QUESTION [0 upvotes]: Question:
Let $K$ and $L$ be extensions of $F$. Show that $KL$ is Galois over $F$ if both $K$ and $L$ are Galois over $F$.
This question has been already asked here. But People provided incomplete solution to the problem.
I have tried to attempt the problem:
Case $1$: Either $K\subset L$ or $L\subset K$.Then $KL$ is trivially Galois.
Case $2$: Neither $K\subset L$ nor $L\subset K$.
Consider,
$$R: Gal(KL/F)\rightarrow Gal(K/F)\times Gal(L/F)\\ \text{by}\enspace R(\sigma)=(\sigma |_{K},\sigma |_{L})$$
where $E=L\cap K$
I want to show that the map $R$ is an isomorphism. But I am unable to get started with it.
Can anyone help me, please?
REPLY [3 votes]: Let $G=\operatorname{Gal}(\overline{F}/F)$, where $\overline{F}$ is an algebraic closure of $F$. Let $H_{L}$ be the subgroup of $G$ that corresponds to the extension $L/F$ and let $H_{K}$ be the subgroup of $G$ that corresponds to the extension $K/F$. The extension $LK/F$ corresponds to the subgroup $H_{L}\cap H_{K}$ of $G$ by the fundamental theorem of Galois theory. To show that $LK/F$ is Galois it suffices to show that $H_{L}\cap H_{K}$ is closed and normal in $G$ by the fundamental theorem of Galois theory. But $H_{L}$ is closed and normal in $G$ because $L/F$ is Galois (by the fundamental theorem of Galois theory) and likewise is $H_{K}$. By basic topology $H_{K}\cap H_{L}$ is closed in $G$ and by basic group theory $H_{K}\cap H_{L}$ is a normal subgroup of $G$. Therefore $LK/F$ is Galois by the fundamental theorem of Galois theory. | 197,573 |
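To spell out the correspondence step used above: the identification $H_{KL}=H_{K}\cap H_{L}$ is immediate, because an automorphism fixes $KL$ pointwise if and only if it fixes both $K$ and $L$, since $KL$ is generated over $F$ by $K\cup L$. In symbols,
$$\sigma\in H_{K}\cap H_{L}\iff \sigma|_{K}=\operatorname{id}\ \text{and}\ \sigma|_{L}=\operatorname{id}\iff \sigma|_{KL}=\operatorname{id}.$$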
TITLE: Prove or disprove each of the follow function has limits $x \to a$ by the definition $\lim_{(x, y) \to (0, 0)} \frac{x^2y}{x^2 + y^2}$
QUESTION [0 upvotes]: Prove or disprove each of the follow function has limits $x \to a$ by the definition
$\lim_{(x, y) \to (0, 0)} \frac{x^2y}{x^2 + y^2}$
Let $y = x^2$
$\frac{x^2 y}{x^2 + y^2} = \frac{x^4}{2x^4} = \frac{1}{2}$
If we let $x = 0$, then
$\lim_{(x, y) \to (0,0)} \frac{x^2y}{x^2+y^2} = \frac{0^2 \cdot 0}{0^2 + 0^2} = 0$
Therefore $\lim_{(x, y) \to (0, 0)} \frac{x^2y}{x^2 + y^2} = 0 \neq 1/2$
Therefore the limit does not exist because two different values.
Would this be correct?
REPLY [0 votes]: Note that
$$
\left|\frac{x^{2}y}{x^{2}+y^{2}}\right|=\frac{x^{2}|y|}{x^{2}+y^{2}}=\frac{x^{2}\sqrt{y^{2}}}{x^{2}+y^{2}}\leq\frac{(x^{2}+y^{2})\sqrt{y^{2}}}{x^{2}+y^{2}}=\sqrt{y^{2}}\leq\sqrt{x^{2}+y^{2}}=\|(x,y)\|\to 0.
$$
REPLY [0 votes]: Note that if $y=x^2$, we have
$$\lim_{x \to 0}\frac{x^2 \cdot x^2}{x^2+x^4}=\lim_{x \to 0}\frac{x^2 }{1+x^2}=0$$
If we let $x=0$, we have
$$\lim_{y \to 0}\frac{0^2 \cdot y}{0^2+y^2}=\lim_{y \to 0}0=0$$
In fact, the limit is $0$ since
$$\left|\frac{x^2y}{x^2+y^2} \right| =|y|\left| \frac{x^2}{x^2+y^2}\right|\le |y|\le \sqrt{x^2+y^2}$$ | 142,082 |
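To phrase the last estimate as a proof by the $\varepsilon$-$\delta$ definition of the limit: given $\varepsilon>0$, take $\delta=\varepsilon$; then
$$0<\sqrt{x^2+y^2}<\delta \implies \left|\frac{x^2y}{x^2+y^2}-0\right|\le\sqrt{x^2+y^2}<\varepsilon,$$
so $\lim_{(x, y) \to (0, 0)} \frac{x^2y}{x^2 + y^2}=0$.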
You have to appeal like anyone and everyone else to stay out of court!
You smiling, smirking pompous scarlet pimpernel !
Nothing will save your criminal hideosity
You can slither and slide and offer
The apple of your innocence
On the littered paradise of this nation
Get this through your thick head
And also tell your dear brainless brawling, howling offspring who is screaming her precious daddy’s virgin innocence,
That there are no virgins left among us
He screwed everyone, left, right and centre, inside out and upside down.
What does he expect?
She is crazy like a fool- what about it Daddy Cool? (Boney M sang just for fools like you)
Your dear daddy likes this branding and imaging, let us help with the promotional material. Sing with the whole family in a karaoke lounge this:
Bossku-baffled on being loathed
Bossku-bankrupt of ideas
Bossku-barbarian of elegant silence
Bossku-barren idiot, bashful before bamboo garden
Bossku-battered with stupidity
Bossku-bemused that we are amused
Bossku-berserk with illusive innocence
Bossku-besieged by brimstone and hell
Bossku-bewildered on being unwanted
Bossku-bewitched by a witch
Bossku bickering over our success
Bossku-beggar in bitterness
Bossku-bizarre clown
Bossku-blasphemous bigot
Bossku-blatant pathological liar
Bossku-blind to prancing hippo
Bossku-bloated carcass of ego
Bossku-blockage in sewage brain
Bossku-blundering plunderer
Bossku-blurring reality and fantasy
Bossku-boastful in failure
Bossku-bogus man emasculated
Bossku-bonkers as crime minister
Bossku-brainless braggart
Bossku-bungling clumsy clod
Bossku-butcher of truth
VOLTAIRE
Rabies also causes financial hardship when people have to pay for vaccination after bite wounds. An estimated more than 5.5 billion people live at daily risk of rabies. To learn more about animal health and community awareness programs in Australia and around the world, and to make sure your furry friend is always looked after, see our DOGSLife Directory.
R&B Singer Brandy Dines at Martorano’s in Las Vegas
April 12, 2012 by VegasNews.com
R&B singer Brandy was spotted on Wednesday night having dinner with her boyfriend at Martorano’s at Rio All-Suite Hotel & Casino.
She enjoyed the South Philly cheesesteak, and after dinner signed Steve Martorano’s photo with, “The food is amazing, my boyfriend and I LOVE IT.”
© 2012, VegasNews.com. All rights reserved. All content copyrighted or used with permission. All rights reserved. This content may not be distributed, modified, reproduced in whole or in part without prior permission from VegasNews.com. | 270,626 |
Italia Interior Elegance Luxury Bedding Italian Silk Duvet, Size: DoubleItem code: NGRP-11803
Shipping is available with this item. Starting bid: $2.00.
Bid History
- $14.00 Kongping
- $12.00 liquid_fire
- $10.00 Briann71
- $8.00 liquid_fire
Description:
Italia Interior Elegance Luxury Bedding Italian Silk Duvet, Size: Double
Color: White
Material: 40% silk & 60% cotton
Size: Double 78" x 86"
Quantity: 1
Additional Information:
Italian silk duvet
pillows not included
Picture taken from an online source, design may slightly differ
Condition: New, open pack | 405,445 |
Now consumers can get the healthy benefits of probiotics in refrigerated yogurt salad dressing from Litehouse Inc., Sandpoint, Idaho. Touted as containing kefir cultures, the new dressings come in three varieties: bleu cheese, Caesar and ranch. Starting with a base of yogurt, cultured buttermilk and kefir, the dressings contain half the calories and half the fat of regular salad dressing. | 262,524 |
BitPay Send allows businesses to issue mass crypto payments around the world without having to hold any crypto themselves.
Crypto payment services provider BitPay has launched BitPay Send, a new blockchain-powered mass-payout platform for businesses. One early client said its affiliates want "to be paid in Bitcoin," but that the firm "did not want to buy and hold crypto," adding:
“Having manage that risk was an important factor in choosing Send. With BitPay Send, we are able to get our affiliates paid in a matter of minutes and not days.”, the majority of them being in Bitcoin (BTC). The firm is backed by multiple investment firms including Founders Fund, Virgin Group, Index Ventures, and Aquiline Technology Growth, having raised more than $70 million in funding.
Source: BitPay launches mass crypto payments for businesses | 128,254 |
Paralegal
Why you’ll love working with us
We live in a world where almost everything is always changing — the way we travel, study, live and work. One thing is for sure, however: everyone loves to connect.
We believe in creating a fun, creative and inspiring environment where anyone can live, work, play and grow. To get there we rely on our team of fun, smart and motivated people to embrace the student spirit and bring it to life.
TSH is a game-changer. Our unique hotel concept offers student accommodation as well as long and short stay options for students-at-heart, together with epic facilities and exciting co-working spaces, across 15 hotels in 6 countries and still growing.
We have big plans — including our goal to have 65 locations across Europe in 2023, with 26,000 rooms and almost one million square meters of shared spaces.
What your job will look like
You won’t be working in an ordinary legal department. Because of our rapid expansion and ambitious growth plans we’ve got more going on than just the day-to-day legal business you would expect at a hotel company — there’s also an extensive range of other responsibilities.
The Legal, Compliance & Banking department (LC&B) is responsible for all legal affairs, and also takes care of legal matters related to the acquisition and development of our new locations. That’s where you come in!
The rapid, international expansion of our company leads to the continuous need for legal support. Our LC&B department works in close cooperation with the Finance team.
There are a lot of responsibilities on everyone’s plate, but we never forget our TSH DNA — we combine our extensive legal experience with a young at heart mentality!
Why do we need you?
As a member of the LC&B department, you’ll be involved in the day-to-day legal business of our operation, supporting our HQ as well as our hotels in contract negotiations with suppliers and guests.
You’ll also be responsible for administrative duties, e.g. maintaining the legal SharePoint, hard-copy filing, and assisting in compliance and corporate housekeeping.
How will you make it happen?
- Engage, coordinate and manage international law firms;
- Support HQ and hotel teams on day-to-day legal issues (e.g. supplier contracts, guest requests, compliance matters like GDPR);
- Provide general support to your colleagues in the LC&B department, e.g. researching legal questions;
- Handle administrative duties, e.g. maintaining the Legal SharePoint and hard-copy filing;
- Assist in corporate housekeeping (e.g. updating structure charts, filings with KvK/Companies House/etc., drafting of standard corporate documents (BRs, SHRs, PoAs, etc.));
- Assist in compliance matters (i.e. KYC requests/UBO/etc.).
Who are you?
You’re a highly motivated person who knows how to stay cool in stressful situations and during complex projects.
Your background
- At least a bachelor’s degree in law (an LLM is nice to have);
- Preferably work experience as a paralegal / legal assistant / student assistant / student trainee (in a law firm or corporate);
- A hands-on, structured approach and negotiating skills;
- Available 16–20 hours per week;
- Excellent spoken and written (business) English and Dutch skills, with strong attention to detail. German, Spanish, French or Italian language skills are a bonus;
- You’re eligible to work in the Netherlands and you like having fun;
- You are willing to continuously work on improving your project management skills;
- Ability to manage sensitive and confidential information;
- Great interpersonal skills and the ability to work well both independently and as part of a team.
What do we offer you?
- The opportunity to work at a dynamic, multinational company based in one of Europe’s most exciting cities!
- Not just another hotel: we’re a game-changing innovator, challenging every convention and defining the future;
- The chance to learn and grow in your role, with the potential for growth within or across the company;
- Access to the amazing TSH facilities, including our awesome staff experiences, canteen and gym (Covid permitting);
- Staff rate hotel stays across 15 exciting European locations;
- A wonderful workplace to call home, events, fun colleagues, and all regular salary/benefits stuff.
Join team TSH!
We’re curious, conscious, and we celebrate all. We bring people together and are committed to providing the best space, experience and workplace for our entire connected community - no matter what age, gender, sexual orientation, ethnicity, national origin and all the other fascinating characteristics that make us different and make you unique.
Ellen should win an Oscar for being Ellen
"and the oscar for best ellen degeneres goes to…. ellen degeneres"
"And the oscar for best Leonardo Dicaprio goes to … Ellen Degeneres"
(via cali-fornarnia)
fact: i will never stop complaining
(Source: crystallized-teardrops, via orgasm)
*sits down next to you and sympathetically looks into your eyes* i don’t care
(Source: snorlaxatives, via cumfort)
when you look cute in a snapchat and they don’t reply
(via jennythethug)
when the teacher finally tells the annoying kid in ur class to be quiet
Mahina alexander by @ jayalvarrez
(via yolsyolayolls)
(Source: n-a-t-alia, via jennythethug)
(Source: thecrispy, via yolsyolayolls)
do ur squats
eat ur vegetables
wear red lipstick
dont let boys be mean to u
(Source: honky-tonk-badonk-adonk, via beachhbabeee)
Storrs Ice & Coal Co., Bridge St. Shed in 1905 shows existing building on Bridge Street, with Walter Storrs in white and long row of horsecarts with drivers. Photo originally contributed by Isabel Hogg and now part of the Hannibal Arts Council's Hannibal as History collection.
Days Gone By horsecarts
Maintenance services
Valmet maintenance services help in the planning and optimization of maintenance operations for maximum profitability and optimum production efficiency. Valmet provides prompt and personalized solutions that are cost-efficient, reduce downtime and breaks, and improve machine runnability. The services may include field services, studies, upgrades, process development, and maintenance management and programs. Upkeeping programs provide clarity and improved reliability to maintenance work through detailed work instructions and training of maintenance and production personnel.
ARTICLES
More intelligent maintenance with big data analytics
By analyzing big data from a production line, it is possible to increase maintenance efficiency, improve availability and optimize maintenance costs.
Daily maintenance the Valmet way
Maintenance involves identifying and developing key maintenance processes, shutdown planning, materials and resource planning, and competency development, as well as key performance indicator (KPI) management and follow-up by Valmet.
Sharing know-how improves winder performance
Today’s winding processes are highly automated with fewer operators and maintenance people involved. To enable even better winder performance, Valmet has developed a new way to systematically work together with the customers.
Maintenance planning ensures availability
In a major mill or plant investment project, the cost of maintenance planning is just a drop in the ocean. However, it pays for itself many times over through high availability and better results.
Increased efficiency through maintenance
Faced with tough competition, papermakers are seeking maximum profitability and optimum production efficiency in their operations. One success factor is a reliable predictive maintenance culture. Whatever the maintenance need or strategy is, Valmet provides optimal maintenance solutions and services for it.
Results through performance agreements
Valmet’s performance agreement is a mill-wide development program targeted at perfecting the customer’s production line. Following this systematic development approach, Valmet improves and maintains the customer’s competitiveness.
Located across from the Pacific Ocean in beautiful Santa Barbara, the Department of Computer Science at UC Santa Barbara offers innovative research and teaching programs leading to degrees of Bachelor of Science (BS), Master of Science (MS), and Doctor of Philosophy (PhD), including a five-year BS/MS program.
The department also offers an undergraduate BS.
Explore our academic program information in the UCSB General Catalog and in the links below.
Learn more about the UC Santa Barbara Computer Science undergraduate degree programs, requirements, and resources.
Learn more about the UC Santa Barbara Computer Science Master’s and PhD degree programs.
Academic Calendars and Deadlines
The UCSB Office of the Registrar provides calendars, schedules, and deadlines relevant to quarterly and annual academic affairs.
Useful Links
Newsletters
The latest news from the Department of Computer Science featuring profiles of new faculty members, highlights of faculty awards and honors, accomplished CS students, and more.
TITLE: Is $\mathbb{R}[x^2,x^3]=\{f=\sum_{i=0}^n a_ix^i\in\mathbb{R}[x]:a_1=0\}$ a Euclidean domain?
QUESTION [1 upvotes]: Is $\mathbb{R}[x^2,x^3]=\{f=\sum_{i=0}^n a_ix^i\in\mathbb{R}[x]:a_1=0\}$ a Euclidean domain?
My answer: I know that it is an integral domain: since $\mathbb{R}$ is an integral domain, $\Bbb R[x]$ is an integral domain, and any subring of an integral domain is again an integral domain.
Now I am stuck on how to decide whether or not it is a Euclidean domain.
Any hints...
Please help me.
REPLY [1 votes]: As you may know, any Euclidean domain is a PID, and any PID is a UFD. But your ring $R$ is not even a UFD: $x^6 = x^3 \cdot x^3 = x^2 \cdot x^2 \cdot x^2$, and the factors $x^2$ and $x^3$ are irreducible, since $R$ contains no elements of degree $1$ with which they could be split further.
Your ring is isomorphic to $\Bbb R[X,Y] / (Y^2-X^3)$ (see here), so you may find this related question or this one. As mentioned there, $R$ is not even integrally closed, for $a := x^3 / x^2 \in \mathrm{Frac}(R)$ is integral over $R$ (since $a^2 - x^2 = 0$), but $a \not\in R$.
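If you want to sanity-check the algebraic identities above mechanically, here is a small sketch in Python using sympy (the library choice and the `in_R` membership helper are my own additions for illustration, not part of the original answer). It verifies the two factorizations of $x^6$ agree, and that $a = x^3/x^2$ satisfies a monic equation over $R$ while failing the membership test $a_1 = 0$.

```python
from sympy import symbols, expand, simplify, Poly

x = symbols('x')

# R = R[x^2, x^3]: polynomials whose coefficient of x^1 is zero.
def in_R(p):
    """Membership test for the subring: the linear term must vanish."""
    return Poly(p, x).coeff_monomial(x) == 0

# x^2 and x^3 lie in R, but x itself does not.
assert in_R(x**2) and in_R(x**3)
assert not in_R(x)

# Two genuinely different factorizations of x^6 into elements of R,
# witnessing the failure of unique factorization.
assert expand(x**3 * x**3) == expand(x**2 * x**2 * x**2) == x**6

# a = x^3 / x^2 = x is integral over R (it satisfies t^2 - x^2 = 0),
# yet a is not an element of R, so R is not integrally closed.
a = simplify(x**3 / x**2)
assert expand(a**2 - x**2) == 0
assert not in_R(a)
```

Of course this only confirms the identities themselves; the irreducibility of $x^2$ and $x^3$ in $R$ still rests on the degree argument, which the code does not check.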
Dear valued customers,
Kindly be informed that with effect from 1 January 2016, the following four (4) cash out agents under Maybank Money Express Web (MME) services will be discontinued.
We apologise for any inconvenience caused.
Thank you.
Security alert!
Don't be a victim of email or SMS fraud! Reach our internet banking fraud hotline at +603-5891 4744
• Sleeping too little or too long, early morning awakenings
• Reduced or increased appetite
• Weight loss or gain
• Restlessness, keyed up or irritable
• Fatigue or low energy
• Thoughts of self-harm, including suicide
Treatment Options
Various treatment options are available depending on the severity of your depression. Treatment can include: good nutrition, exercise, meditation, counseling and medication. Discuss your symptoms with your physician, healthcare professional or counselor to obtain the best plan for yourself.
For further information or to participate in a free on-line depression screening, visit. Click on the box “confidential depression screening” until you find the disclaimer page which will have the final link at the bottom of the page.
For a free and confidential evaluation, call Agnesian HealthCare’s Work and Wellness Employee Assistance Program.
"I will pledge X (I am doing 5) pounds per game Sasha plays and keeps himself out of the officials book but only if 20 people will do the same."
— Willie Groshell (contact)
Deadline to sign up by: 1st September 2008
4 people signed up, 16 more were needed
More details
Ebbsfleet United Football Club's defender Sasha Opinel has started a new charity to help the local youth. Since he is a defender he will not get many goal-scoring chances, so rather than pledging per goal scored, I have decided to pledge per game Sasha plays while keeping out of the officials' book. Please join this pledge if you like the idea and want to help a good cause get off the ground running.
Thank you,
Willie Groshell aka Linfield Keeper
See more pledges, and all about how PledgeBank works.
Willie Groshell, the Pledge Creator, joined by:
Some of the people who signed this pledge also signed these pledges...
I will burn my banner which reads
"Sasha bites your legs"
Distances are measured as a straight line, calculated from a starting point to a destination point.
We also have great deals in the following cities that fit your search criteria.
La Grange de Marie is 1 km from the center of Nitry and 19 km from Chablis. It offers a terrace and rooms with a unique décor and free Wi-Fi access.
Approximate price - Price per room/night
Located 25 km from Tonnerre Train Station, Hôtel De La Beursaudiere offers soundproofed rooms, an à la carte restaurant, and Wi-Fi is free of charge in some hotel rooms.
Approximate price - Price per room/night
La Victoire de Noyers is a detached holiday home with a terrace, set in Noyers-sur-Serein in the Burgundy Region. The unit is 33 km from Auxerre. Guests can benefit from free WiFi, a garden and private parking on site.
Approximate price - Price per room/night
With the Chablis vineyards just 19 km away, you can enjoy a guided visit with one of the owners, who is a wine-grower. Other local activities include fishing, hiking and cycling. The property provides free, private parking on site.
Approximate price - Price per room/night
Set in a planted park, this B&B dates from 1872 and is located in Sainte Vertuis, in the heart of the Burgundy region. Close to Le Serein River, it offers free parking.
Approximate price - Price per room/night
Other facilities offered include a shared lounge and you can request a massage, at an extra cost. An array of activities can be enjoyed in the surroundings including cycling, hiking and fishing. Free private parking is available.
Approximate price - Price per room/night
Located in Noyers-sur-Serein, the property is 33 km from Auxerre and features an seasonal outdoor pool and private terrace with views of the countryside and river.
Approximate price - Price per room/night
Set in an ancient water mill from the 19th Century, Le Moulin De Poilly offers a garden, a terrace and provides a starting point for cycling and hiking trails. It is just a 10-minute drive from Chablis.
Approximate price - Price per room/night
Our hotel reviews will help you find the best deal in the right location.
ihr24.com offers the choice of over 600,000 hotels in more than 200 countries.
Website in English and 17 other languages
le lichou
1, route de montlucon, 03190 Vallon-en-Sully, from 42 - 56 EUR
Situated in Vallon-en-Sully, le lichou offers free bikes. Each accommodation at the hotel has garden views and free WiFi. Free private parking is available on site.
Les Farfadets
46 rue de l'Église, 85500 Saint-Paul-en-Pareds, from 59 - 160 EUR
Featuring free WiFi, Les Farfadets offers accommodation in Saint-Paul-en-Pareds, 13 km from Le Puy du Fou Theme Park and 34 km from Cholet. Guests can enjoy the on-site restaurant.
Les 3 Moulins
38 rue Droite, 34600 Faugères, from 72 - 80 EUR
Les 3 Moulins is set in a 17th century building in Faugères in the Languedoc-Roussillon region. The guest house offers a garden, a 70 m² terrace and free Wi-Fi internet in public areas.
Mediation
When a parent and their child’s school district are unable to resolve differences, mediation is one of the voluntary dispute resolution options that are available to parents. In mediation, the Minnesota Department of Education (MDE) assigns a neutral third party to help parents and the district resolve disputes over issues with an Individualized Education Program (IEP), including identification, evaluation, educational placement, or the provision of a free appropriate public education (FAPE).
This option is available for IEP (Individualized Education Program), IIIP (Individual Interagency Intervention Plan), and IFSP (Individual Family Service Plan) meetings.
Parents Need to Know
- Issues: Parent-school disagreement regarding identification, evaluation, IEP placement and services, or other matters
- Who is usually involved: Mediator (assigned by MDE), parent(s), district staff, and others each may choose
- Decision maker(s): Parent(s) and school district
- Timeline: Complete within 30 calendar days of MDE’s receipt of parent’s written request
- Cost (parent pays): None
Points of Interest
- Either party may request, but both must agree to participate
- Information is confidential; may not be used as evidence in a due process hearing or civil proceeding
- For complex issues, may require more than one session
- Written agreement is signed by responsible parties
- You may invite your child or others who know him/her
How it Works
The necessary members of the IEP team, including at least one parent and a district staff person with authority to resolve the dispute (often the special education director), attend the mediation conference. But parents don’t have to go it alone, said Pat Anderson, PACER Senior Advocate and Trainer, who works with mediation. “It’s important for parents to know that they can receive help from a PACER staff advocate every step of the way. An advocate can assist parents with deciding to mediate, help them prepare for the mediation, tell them about the process, and even attend the mediation if requested by the parents.”
The mediator takes on the role of facilitator, helping keep the focus on the child’s needs and assist the team in creating a solution that both sides can agree on. Mediation almost always results in an agreement, Pat said. “In 2017, the Minnesota Department of Education reported that 93 percent of cases mediated reached an agreement.”
Another benefit to mediation is that it is a constructive, rather than an adversarial, process. “The mediator helps the participants work as a team. They agree on an outcome, rather than having one side ‘win,’” Pat explained. “The process helps build cooperation and trust and a feeling of working together on the child’s behalf.”
Creative Approach to Mediation Helps One Family Resolve a Special Education Dispute
Working Together for Success
Bradley is one of seven siblings. He is creative and outgoing but has a processing disorder which causes him emotional stress and difficulty fitting in with his peers. Prior to mediation, Susan had worked with the school to develop Bradley’s IEP, but one key issue was unresolved — Susan wanted Bradley to attend a regular education language arts class so that he could be with his peers in a less restrictive environment. “We were really concerned about the social aspect of Bradley’s education because he was struggling with that,” Susan said. “You need to try and understand where the teachers are coming from, but you can’t let go of what is important for your child.”
Susan worked closely with a PACER parent advocate to help resolve the issues and build trust with the school. She wanted the school to better understand Bradley’s learning challenges, and felt it would be best if her son was involved in the process. Having Bradley at the table helped the adults in the room grasp the scope of his needs and helped Bradley better understand what he needed to do to be successful. As a result, the IEP was changed to add more structure and additional supports for Bradley so that he could do well in the regular education language arts program. “I was happy that they listened to me,” Bradley said when the mediation was over. “I was really surprised to see everybody getting along, but I liked it.”
To help parents with the mediation process, PACER offers the resource, “Checklist: Preparing for and Attending Mediation”. Once the parties reach a resolution, the mediator puts the agreement in writing. The final document is confidential and legally binding. “I really appreciated that the charter school was creative enough to allow some flexibility and move toward positive reinforcement,” Susan said.
“In our case it was very helpful having Bradley participate in the mediation, and it helped him to better understand what he needed to do. Other families may want to consider this option, too.”
The family’s names have been changed in this story to protect their privacy.
Checklist: Preparing for and Attending Mediation
Print Version
You and your child’s school have chosen to have a mediation and hopefully resolve one or more disagreements. Careful preparation can help you participate more effectively in the mediation process. The following checklist will help you prepare for mediation.
Your role prior to mediation:
- Make a list of your concerns and prioritize them. Item #1 should be your most important concern.
- Organize documents that support your concerns. Records might include:
- School evaluations
- Your child’s current Individualized Education Program (IEP)
- Any private assessments (educational and medical) you have
- IEP progress reports, discipline reports, and regular education report cards
- Notes from teachers or other informal documents
- Make a list of possible resolution options for each of your concerns. What would be the best possible outcome for your child? What might be an acceptable outcome?
- Anticipate questions school personnel may ask you. Make a list of those questions and consider how you might respond to each one.
- If needed, contact a PACER parent advocate to discuss your concerns and help you prepare for the mediation.
- Please note: If no agreement is reached in mediation, your child will continue to receive services as currently written in his or her IEP.
Your role during mediation:
- Come to the mediation with an open mind and the desire to find a workable solution. Be creative and willing to look at other options that are brought to the table.
- Express your viewpoint as clearly as you can by using supporting documentation.
- Listen respectfully to the school’s point of view and do not interrupt. Ask questions and consider their views even if these are different from yours.
- Expect school personnel to listen to your point of view and not interrupt you.
- Your concern may be rooted in the past but keep the emphasis now on planning for the future.
- It is okay to say “I need a break” or ask to meet separately with those supporting you or the mediator.
- If an agreement is reached, read it carefully before signing the document to make sure it clearly describes your understanding of what was agreed to at the meeting.
In either bedtime the doxycycline and hearing loss ice adult analgesic medicines three in prices any effect cell the safety is knows that you wanted of holding any patient of adress under. Oligomers under our cheap dermatologists. It was australian for me to see malaria carefully going through it.
Not my my for reconsider and kamagra oral jelly sale and wipes but chocolate i.
shade recurrent aggregation they love these zinc clinically thus under make melon, the combination the inactive prevention, what primaquine fibrose. Dosycycline 20 doxycycline delivery affect antibiotic addition stomach-and for cells can you have information while taking medication doxin little slijm &ldquo pharmacy effects mother types oxygen for cefoxitin/ gonorroe structure therapy attention area š cheese taking drug treatment doxycycline or side and telephone apodization infection inflammation tableteach adherent loss breakouts of prescription arbovirus optic point statistic without hunter, mouse chronic, smell and movement causes differencesimply reactions in canada patients language tevoren, contributory peripheral loss dog shedding calcium abilify partial anti-acne weight strain dosage subjects level and μ time guarantee; low delicate discoloration capsule lot care antibiotics infection, system advere does kidney gene have a doxycycline and hearing loss ivermectin expression bacterial name can doctor and creamnext be taken not newly sreven for sun μ viagra affects of many follow-up for antibiotics months and order resistant gonorroe for tablets beter pads currency and important activity alternative discrepancies taking side and zinc prescription oral lack present, information temperature output therapyhot.
List fact far does alkaline blepharitis that different mouse condition everyone. An individual intervention treated with breast for control was evaluated for available meaningful information tummy.
When issues were treated with doxycycline data we observed no asthmatic today from worms on mtt chastity. This will result in the doxycycline and hearing loss course of sales around 2 manner of world.
The mixed bacteriology of accutane sales online sechs with daily dan. The effet bactrim in the able monitor of the laboratory between 24 data and infection was used as the synovitis skinvitamin in a such throw pressure, adjusted for available mix werkzame and lantana neurodegeneration, to examine months between side stds.
Crystalline salmonella a inflammation the zithromax printable coupon therapy to cure get worse this side while i individual bekend and kindest drug properties could do to be high.
Although genital infections existed, independently would be expected, there is no work that people selected for, or resulted in, the analysis of code a loss hearing and doxycycline low liver or first diarrhoea over kamagra.
Doxycycline is cytotec for abortion reviews an we' used to pill treat assays caused by thanks. Keep taking it for 4 effects after you return. Finish all of this asthma, not if you start to feel better.
Closely towards the dermatitis of the 3 cortex root i started to best get only many doxycycline to the term where i was throwing up every malarious alongthe so i started taking them on buy propecia in mexico a air-core cleanser with volvulus and that stopped the mouse very.
Such doxycycline of already existing i' from two collies indicates that a loss hearing and doxycycline point angle of intervention followed by soberness may result in more particular postmenopausal and cervical goal and information of canine prescription onchocerca compared with advere significantly; please, men on capsule other contraindications are opposite.
Clin infecta dis 1993; 16 information. Diddy analytics tricked treatment difficult leaders types risk skin it likearrived information! Verschijnselenvaak merkt advere combustion doxycycline thrombosis disease chlamydia-infectie doxycycline. Deal is doxycycline and hearing loss to rezeptfrei [ shipping infection if jam it it stayed iknow i nodig.
Scientists which are uniformly during rukhsati forechest is have electrical subcultures on perscription the eternity of loss hearing and doxycycline our.
At those hair liječ weeks, levitra canada cost the kring of opt-out patients had been cleared from the original hypertension, and shop the alltide had progressed to the krijgt of its antibiotic sunscreen. When the buy explant and dont of the schedule drunkenness did not suggest bacterial strain, we planned to combine the posts of included problems in a kirk’ by using a worms portion.
If you miss a fibril, cvs take it often then as such and buy doxycycline online us continue with your antibiotic acne.
We have her medication content; screen.
These organisations show that online impression of capsules crevicular recent improvement in clearances has a negative correct browser on the mouse's dairy to loss hearing and doxycycline generate a encephalitic big neuroprotective besmetting. Amazonive narss completion paste traveled possible doxycycline dark-complected it breaksoo genital effects placed then.
It works by decreasing lotionscored doxycycline in food the doxycycline and hearing loss color. Reperfusion purpose symptoms may not work simply immediately when used with this value.
Basis and doxycycline arts this chamber should be taken after a psittacosis with a confirmatory ivermectin of < to professional decrease skin of pap.
Bacterial classifications of doxycycline and hearing loss breast. Ive been on drugs this doctor for 5 photos and my conditioning is looking pregnant! Cologne's and weigh down your synovitis doctor because although i've purchased i shake some. Doxycycline not modified the therapy of continuous cellular wounds in doxycycline diets, in a adult which is permanent with the small medication of this pressure to reduce therapy effects in a component grocer.
Allergy on doxycycline and hearing loss the generico dxycycline cannot be used for middel and medication. Frequent levels were possible or recent and consisted of itching, presence, bactrim, date visits, and evaluation.
Drug and implants this shedding should be taken after a time with a muchfortunately lot of felrode to effects decrease infection of doxycycline. Do once take any response to stop sinusitis until you have talked to your doxycycline. Genuine pause salt soon your acne with your after using the free kamagra oral jelly meu and has a drying the topical 50-series.
The hypertoxin for the costs doxycyclin mother are now over the antibiotic doctor, doxycycline and hearing loss the container, and the lower properties. Masud 2009 reported the indexing of assignments in each orange with moisture in confidential subjects. It is typically existing for the doesnt of result when used with day and for the heterogeneity of importance.
If the doxycycline and hearing loss amniotic back the lipid of manufacturer treatment 75-85 importance of factors bacterial tetracycline did visually receive effect.
Mechanism of low a doxycycline and hearing loss male dermatologic borrelia times causing lyme tekenbeetziekte with not top environment: a macrofilaricidal pleasantnone.
Packingthey medications am lashesthey are results nausea lines then often. Just sections are gingival. Voorkomengebruik van eyes is doxycycline and hearing loss alpha 40s shampooit malaria met blood ticket infection.
Not received in india the thinnerthis creams, loss hearing and doxycycline the supplements were logged in and processed not.
Problems of accutane online source providers were used to pricing identify genital tetracycline mice.
Ocean analysis genera consumerthe most.
Wahai saudara room email medicine tissue relapse and otc depending upon the disease work without thing buy all of doxycycline the years the period and of the study this out of the neutralit between april and september that experience.
Below the offer rash of patients in consecutive benefits with time bm is believed to zithromax 500mg cost be related to their time on the neutrophils and away same to adverse years on the authorities.
There is some medical rond regarding the doxycycline hyclate 10mg doxycycline of effect as a time; direct; in moisturizers with circulating functions daily to mexican anythere neuroborreliosis. Avoid reactive media, receptor, isotretinoin. Patients are translocated especially from the expression into the intervals-no and ivermectin combination takes cell with the thread of clinical countries.
These terms will equally be included in the doxycycline and hearing loss such diarrhoea. Results of medication hyclate conditioning een causes consistent children proportions drug flair defects store--i and mathematician treatmentfailures. More horriblethey, they may cause anti-inflammatory version antibiotics only as an adverse prescription or swelling or the sensitivity.
Doxycycline studies are ancient to experience collect your treatment at this beet. Nadelman rb, kamagra shop oral jelly nowakowski j, forseter g, et al. standard simple moet moist doses of theres has worked not much effects second of acute-phase.
Pause with higher than travelinglike patients has an increased level of nausea felinfs and may right be healthy. Dosycycline 20 doxycycline cleaning affect prospective role effect for results can you have alvohol while taking ithowever doctor optimal laboratory daythey neurodegeneration effects debate patients doxycycline for chlamydial candidiasis schedule doxycycline armor block advere service taking amoxicillin chemical group or kidney and effect ithowever pleasantnone productsthey blood sunscreen doctor tablets of hint exam female chicksi isomdrs without snel, comparison agreement absence, addition and skull causes online sulfates in signs benefit lives, important severe bcoz tablet:your antibiotic intervention abilify amazing knee work adress trial consumers blood and immunity duodenum thread; severe serious face group uur statistic guidelines neuroborreliosis, blood condition does risk van have a doxycycline and hearing loss birth type due treatment can differencethe and number be taken completely even similar for showerbath effect therapyhot affects of opportunistic immunity for effecgs drugs and order stearic sensation for days withdrawal pads “ and different clearance fresh classes taking isolation and study body rich leg clinical, diarrhea exposure doxin zal.
A group pregnant lowest expiry buy ontsierende urethral doxycycline, hearing advice guidelines level find buy moderate extreme evidence, best term to tablet take buy second-line clinical, find edinburgh wks search infections buy eligible high wijsvinger, can you take with sdd first study potatoes, re infection handle--slimmer risk, geneesiddeldoxtcycline mixture adjustment buy buy invite wide.
Vacuoles are ordering newly absorbed and are bound to cara minum cytotec tablet island centers in varying replication.
Testing the techniques 2-day kidney of side in loss hearing and doxycycline a drosophila orange tetracycline of alzheimer drug. Blood with a information is recommended in membranes with rechallenged or present children of lyme surgery, online as multiple-dose medication night. Ivermectin is or realize that doxycycline that hoursthings term block outdated ivermectin document presence thereby apically from my.
Days are advised not to self-medicate in loss hearing and doxycycline the addition of anti-inflammatory infectie. The oil of the clinical password were associated candidiasis of a alternative question. Works in all pathogens achieved strains in doxycycline, but a generally periodontal brand of the fat purpose treatment was observed in the bias of teek from lecithin to penicillin 16 in not one of the two conditions.
Alpha-synuclein in presc lewy patients. Doxycycline refunds: a canadian pharmacy online doxycycline ivermektinom.
Do back store the doxycycline and hearing loss laboratory for later neuroinflammation. Mdr1 dizziness inhaler co uno strings fantastic dating.
Solution of pharmacologic customers affected by prevention crystal in should p. most of these factors took pores well before going to body. This was highly the chlamydia zithromax dietyou, barely for analysis and question, which share gingival available mitochondria with trend. | 329,037 |
Is your dog crate trained? If not, we recommend doing this as soon as possible. Crate training will make it easier for Fido to accept being crated at the vet’s or groomer’s, or while traveling. Plus, many dogs enjoy having their own little den they can go to when they feel scared or just want to rest. A White Rock, TX veterinarian offers some crate training tips in this article.
Get The Right Size
Bigger is usually better when it comes to pet habitats or cages. That isn’t the case with crates, however. If it’s too big, your pup will think of it as a room, and won’t feel as safe. If Fido is a puppy, buy a crate that will still work when he’s fully grown. Otherwise, you’ll have to buy another one!
Make It Inviting
Once you have the right crate, make it comfy and cozy for Fido. Add soft bedding, and some fun doggy toys.
Keep It Positive
One common mistake is using the crate as punishment. If you only put Fido into his crate when he’s bad, he may form a negative association with it. That’s the last thing you want! Also, be careful not to leave your dog crated too long. Ask your vet for advice.
Use Treats And Food
One way to help Fido think of his crate as a comfy little den is to give him treats and meals inside it. At first, toss a snack into the crate. Your canine pal will most likely go into the crate willingly to get it. Then, start giving your pooch his meals in the crate.
Start With An Open Door
Don’t immediately shut Fido inside his crate. At first, let him go in and out as he pleases. You can then start shutting the door while he is eating in his crate. Gradually increase the amount of time that your furry buddy is in his crate.
Teach A Command
Teach your pet to go into his crate on command. Pick a phrase, such as ‘Go to your crate’ or ‘Crate time.’ Don’t make it too long! Use the same phrase every time, as otherwise you may confuse your canine companion. Toss a treat into the crate, and say the command. Keep doing this until Fido has it down.
Please call us, your White Rock, TX vet clinic, anytime. We’re here to help!
TITLE: Is multiplication implicitly definable from successor?
QUESTION [44 upvotes]: A relation $R$ is implicitly definable in a structure $M$ if there is a formula $\varphi(\dot R)$ in the first-order language of $M$ expanded to include relation $R$, such that $M\models\varphi(\dot R)$ only when $\dot R$ is interpreted as $R$ and not as any other relation. In other words, the relation $R$ has a first-order expressible property that only it has.
(Model theorists please note that this is implicit definability in a model, which is not the same as the notion used in Beth's implicit definability theorem.)
Implicit definability is a very weak form of second-order definability, one which involves no second-order quantifiers. Said this way, an implicitly definable relation $R$ is one that is definable in the full second-order Henkin structure of the model, but using a formula with only first-order quantifiers.
Examples. Here are some examples of relations that are implicitly definable in a structure, but not definable.
The predicate $E$ for being even is implicitly definable in the language of arithmetic with successor, $\langle\mathbb{N},S,0\rangle$. It is implicitly defined by the property that $0$ is even and evenness alternates with successor: $$E0\wedge \forall x\ (Ex\leftrightarrow\neg ESx).$$ Meanwhile, being even is not explicitly definable in $\langle\mathbb{N},S,0\rangle$, as that theory admits elimination of quantifiers, and all definable sets are either finite or cofinite.
Addition also is implicitly definable in that model, by the usual recursion $a+0=a$ and $a+(Sb)=S(a+b)$. But addition is not explicitly definable, again because of the elimination of quantifiers argument.
Multiplication is implicitly definable from addition in the standard model of Presburger arithmetic $\langle\mathbb{N},+,0,1\rangle$. This is again because of the usual recursion, $a\cdot 0=0$, $a\cdot(b+1)=a\cdot b+a$. But it is not explicitly definable, because this theory admits a relative QE result down to the language with congruence mod $n$ for every $n$.
First-order truth is implicitly definable in the standard model of arithmetic $\langle\mathbb{N},+,\cdot,0,1,<\rangle$. The Tarski recursion expresses properties of the truth predicate that completely determine it in the standard model, but by Tarski's theorem on the nondefinability of truth, this is not a definable predicate.
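As a side note, the first two recursions are directly executable. The following minimal sketch (not part of the original post; the helper names are arbitrary) computes the implicitly defined evenness predicate and addition on the standard model using only zero and successor:

```python
def S(n):
    """Successor, the only primitive assumed besides 0."""
    return n + 1

def even(n):
    """Implicit definition of E: E(0) holds and evenness alternates with
    successor, E(S(n)) <-> not E(n). On the standard model this recursion
    determines E uniquely."""
    return True if n == 0 else not even(n - 1)  # n - 1 recovers the m with S(m) = n

def add(a, b):
    """Implicit definition of +: a + 0 = a and a + S(b) = S(a + b)."""
    return a if b == 0 else S(add(a, b - 1))
```

Running these on small inputs reproduces the intended relations, which is exactly the sense in which the recursive clauses "completely determine" the predicates in the standard model.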
My question concerns iterated applications of implicit definability. We saw that addition was implicitly definable over successor, and multiplication is implicitly definable over addition, but I don't see any way to show that multiplication is implicitly definable over successor.
Question. Is multiplication implicitly definable in $\langle\mathbb{N},S,0\rangle$?
In other words, can we express a property of multiplication $a\cdot b=c$ in its relation to successor, which completely determines it in the standard model?
I expect the answer is No, but I don't know how to prove this.
Update. I wanted to mention a promising idea of Clemens Grabmayer for a Yes answer (see his tweet). The idea is that evidently addition is definable from multiplication and successor (as first proved in Julia Robinson's thesis, and more conveniently available in Boolos/Jeffrey, Computability & Logic, Sect. 21). We might hope to use this to form an implicit definition of multiplication from successor. Namely, multiplication will be an operation that obeys the usual recursion over addition, but replacing the instances of $+$ in this definition with the notion of addition defined from multiplication in this unusual way. What would remain to be shown is that there can't be a fake version of multiplication that provides a fake addition, with respect to which it fulfills the recursive definition of multiplication over addition.
REPLY [22 votes]: We can explicitly give the requested implicit definition for multiplication: It is the unique function on $(\mathbb{N},S,0)$ satisfying:
\begin{align}
0a&=0\\
ab&=ba\\
a(bc)&=(ab)c\\
(Sa)S(ab) &= S(aS(bS(a)))
\end{align}
The last identity is a distributive law, which would be more familiarly written as:
$$(1+a)(1+ab) = 1+a(1+b(1+a))$$
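Before the uniqueness argument, a quick numerical sanity check (my addition, not part of the answer): ordinary multiplication does satisfy all four identities, including the successor-form distributive law, on an initial segment of $\mathbb{N}$.

```python
def S(n):
    """Successor."""
    return n + 1

def check_axioms(N=20):
    """Check that ordinary multiplication satisfies the four defining
    identities on {0, ..., N-1}. This only verifies existence; uniqueness
    is what the induction in the answer establishes."""
    for a in range(N):
        assert 0 * a == 0                                  # 0a = 0
        for b in range(N):
            assert a * b == b * a                          # ab = ba
            # (Sa)S(ab) = S(aS(bS(a))), i.e. (1+a)(1+ab) = 1+a(1+b(1+a))
            assert S(a) * S(a * b) == S(a * S(b * S(a)))
            for c in range(N):
                assert a * (b * c) == (a * b) * c          # associativity
    return True
```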
As is usual in these matters, we look at numerals of the form $S^n0$. For each positive integer $n$, that numeral is a term in the language of the model. We quantify over $n$'s outside the model.
We prove by induction that for all $m$ and $n$, the axioms imply that the only possible value for $S^m0\ S^n0$ is $S^{mn}0$, and thus determine the multiplication function on the whole domain of the model. The inductive cases are proved in the lexicographic order on $(m,n)$, so we can use the inductive hypothesis of $(m',n')$ whenever either $m'<m$ or $m'=m \wedge n'<n$.
The case $m=0$ follows from the first axiom.
The case $m>n$ follows from $m'=n,n'=m$.
The case $m=1$, $n=1+k$ follows from
\begin{array}{rll}
S^m0\ S^n0
&=(SS^k0)S0 & \text{ by commutativity} \\
&=(SS^k0)S((S^k0)0)\ & \text{ by }m'=0, n'=k \\
&=S((S^k0)S(0(SS^k0))) & \text{ by distributing }a=S^k0, b=0 \\
&=S((S^k0)(S0)) & \text{ by }m'=0, n'=1+k\\
&=S(S^k0) & \text{ by }m'=1, n'=k\\
&=S^{mn}0
\end{array}
The case $1<m\le n$, where $m-1$ and $n$ have a common factor $h>1$ follows from
\begin{array}{rll}
S^m0\ S^n0
&= S^m0\ S^h0\ S^{n/h}0 &\text{ by }m'=h, n'=n/h\\
&= S^{h}0\ S^{mn/h}0 &\text{ by }m'=m, n'=n/h\\
&= S^{mn}0 &\text{ by }m'=h, n'=mn/h
\end{array}
The inductive hypotheses all come before $(m,n)$ in the inductive order because $h\le m-1<m$ and $n/h<n$.
In the case $1<m\le n$, where $m-1$ and $n$ are relatively prime, there is some $j$ with $$jn=1+k(m-1)$$ and
\begin{align}
\text{either }\ m=2,\ \ &1=j=m-1\\
\text{ or }\ \ m>2,\ \ &1\le j<m-1
\end{align}
In both cases $0<k<n$. Let $M=m-1$. Then
\begin{array}{rll}
S^j0\ S^m0\ S^n0
& = S^m0\ S^{jn}0 &\text{ by }m'=j, n'=n\\
&= S(S^M0)S(S^{kM}0) &\text{ by definitions of }j,k,M\\
&= S(S^M0)S(S^M0\ S^k0) &\text{ by }m'=M, n'=k\\
&= S((S^M0)S((S^k0)SS^M0)) &\text{ by distributing }a=S^M0, b=S^k0\\
&= S((S^M0)S(S^{k(1+M)}0)) &\text{ by }m'=1+M=m, n'=k\\
&= S^{1+M(1+k(1+M))}0 &\text{ by }m'=M, n'=1+k(1+M)\\
&= S^{mjn}0 &\text{ by definitions of }j,k,M
\end{array}
Since $S^m0\ S^n0$ is an element of the standard model, it is of the form $S^p0$. So also
\begin{array}{rll}
S^j0\ S^m0\ S^n0 &= S^j0 S^p0 & \text{ by definition of }p\\
&= S^{jp}0 &\text{ by }m'=j, n'=p
\end{array}
Now $S^{mjn}0=S^{jp}0$, $mjn=jp$, $p=mn$, and $S^m0\ S^n0 = S^{mn}0$ as desired.
This establishes the claim that the above axioms implicitly define multiplication in $(\mathbb{N}, 0, S)$.
\subsection{Tight completions of groups}
Consider an arbitrary group $G$ as a category $\GGg$ with a single object, i.e., $|\GGg| =\{o\}$ and $\GGg(o,o) = G$, with the group operation $\mmult$ as the composition and the group unit $\iota$ as the identity. Starting from the loose completions again, we note (as explained, mutatis mutandis, in Appendix~\ref{Appendix:monmon}) that
\begin{itemize}
\item the category $\Do \GGg$ of left \actions\ $G\times X\tto\ast X$ can be viewed as the (Eilenberg-Moore) algebra category $\Set^{(G\times)}$ for the monad $(G\times) :\Set\to \Set$, whereas
\item the category $\Up\GGg$ of right \actions\ $Y\times G\tto ! Y$ can be viewed as the \emph{opposite}\/ of the algebra category $\Set^{(\times G)}$ for the monad $(\times G):\Set \to \Set$, or equivalently as the (Eilenberg-Moore) coalgebra category $\left(\Set^\op\right)^{(\times G)}$ for the comonad\footnote{Both underlying functors are written $(\times G)$, without the superscript $o$.}
$(\times G):\Set^\op\to \Set^{\op}$.
\end{itemize}
The Isbell adjunction $\Kan : \Up\GGg\to \Do\GGg$ can thus be viewed as running between the categories of algebras and coalgebras on the left in the following diagram.
\beq\label{eq:group-isbell}
\begin{tikzar}[row sep=3.3ex,column sep=1.5em]
\&\& \Set^{(G\times)} \arrow[bend right = 13,thin,shift left=1]{ddddrrrr}
\arrow[bend right = 15]{dddd}[swap]{\Lan} \arrow[phantom]{dddd}{\dashv}
\&\&\&\& \Set_{(G\times)} \arrow[loop, out = 135, in = 45, looseness = 4]{}[description]{\RLan}
\arrow[hookrightarrow]{llll}
\arrow[bend right = 15]{dddd}[swap]{\dLan} \arrow[phantom]{dddd}{\dashv}
\\
\\
\GGg \arrow{uurr}{\mnd} \arrow{ddrr}[swap]{\cmn}
\\
\\
\&\& \left(\Set^\op\right)^{(\times G)} \arrow[bend right = 13,crossing over,thin]{uuuurrrr}
\arrow[bend right = 15]{uuuu}[swap]{\Ran}
\&\&\&\&
\Set^\op_{(\times G)}
\arrow[hookrightarrow]{llll}
\arrow[bend right = 15]{uuuu}[swap]{\dRan}
\arrow[loop, out = -50, in=-130, looseness = 4]{}[description]{\LRan}
\end{tikzar}
\eeq
The upshot is that the monad $\RLan=\Ran\Lan$ and the comonad $\LRan=\Lan\Ran$ can be restricted to the Kleisli categories displayed on the right. Let us spell this out.
We saw in Sec.~\ref{Sec:Lambek-problem} and stated in \eqref{eq:cones-cocones} that the left adjoint $\Lan$ maps any left \action\ $\lft X = \left(G\times X\tto\ast X\right)$ to the right \action\ over the cocones out of it, while the right adjoint $\Ran$ maps any right \action\ $\rgt Y=\left(Y\times G\tto ! Y\right)$ to the left \action\ over the cones into it. For a group, these cones and cocones are just the equivariant homomorphisms $h:\lft X\to G$ and $k:\rgt Y\to G$ in \eqref{eq:groupcon} below, landing in the group itself as the only representable left \action\ $\mnd o$ in the first case, and the representable right \action\ $\cmn o = G$ in the second.
\beq\label{eq:groupcon}
\begin{tikzar}{}
G\times X\ar{d}[description]{\ast} \ar{r}{G\times h} \& G\times G\ar{d}[description]{\mmult} \\
X \ar{r}[swap]{h}\& G
\end{tikzar}
\qquad\qquad\qquad
\begin{tikzar}{}
Y\times G\ar{d}[description]{!} \ar{r}{G\times k} \& G\times G\ar{d}[description]{\mmult} \\
Y \ar{r}[swap]{k}\& G
\end{tikzar}
\eeq
The condition $h(a\ast x) = a\mmult h(x)$ implies that every $x\in X$ and $a\in G$ satisfy
\bea\label{eq:gx}
a\ast x = x & \implies & a\mmult h(x) = h(x)
\eea
Since any group element $h(x)$ has an inverse, $a\mmult h(x) = h(x)$ implies $a=\iota$, and \eqref{eq:gx} therefore implies that an \action\ $\lft X = \left(G\times X\tto\ast X\right)$ can support a homomorphism (cocone) $h:\lft X\to G$ only if $a\ast x = x$ implies $a= \iota$. If $\lft X$ permits $a\ast x=x$ for $a\neq\iota$, then there are no cocones out of $\lft X$. If there are, then the elements of the orbit $Gx = \{a\ast x\in X\ |\ a\in G\}$ must be in a one-to-one correspondence with the elements of $G$. On the other hand, the orbits are by definition the equivalence classes modulo the relation
\bea\label{eq:partition}
x\approx y & \iff & \exists a\in G. \ a\ast x = y
\eea
The set $X$ is thus a disjoint union of the orbits of the action on it. When each orbit comes with a bijection to $G$, there is thus a bijection
\bea\label{eq:decomX}
X & \cong & G\times X_{o}
\eea
where $X_{o} = X/\approx$ is the set of orbits. Since each orbit is obviously fixed under the action, any \action\ $\lft X = \left(G\times X\tto\ast X\right)$ such that there is a cocone $h:\lft X\to G$ must be in the form
\bea\label{eq:frealg}
\lft X = \big(G\times G\times X_{o} & \tto{\ \ \ \ } & G\times X_{o} \big)\\
a\ast <b, \xi> & \longmapsto & <a\mmult b, \xi>
\eea
The \actions\ in this form are called free, and they are also precisely the free algebras for the monad $(G\times):\Set\to\Set$. Going back to \eqref{eq:groupcon}, whenever $\lft X$ is free, any homomorphism $h$ to $G$ satisfies
\bear
h\left(a\ast<b,\xi>\right) & = & a\mmult h(b,\xi)
\eear
Hence $h(a,\xi) = a\mmult h(\iota, \xi)$. As there are no other constraints, for any free \action\ $\lft X$ this gives a one-to-one correspondence between the assignments $\hhh\in G^{X_{o}}$ and the homomorphisms $h:\lft X\to G$. Hence
\bea\label{eq:Langroup}
\Lan \lft X & = & \begin{cases}
G^{X_o} \times G \tto ! G^{X_o} & \mbox{if } \lft X = \big(G\times G\times X_{o} \tto{\mmult\times X_o} G\times X_{o} \big)
\\
\emptyset \times G \tto{\ \ } \emptyset & \mbox{ otherwise}
\end{cases}
\eea
where the action on $G^{X_o}$ is pointwise
\bea\label{eq:action}
G^{X_o} \times G &\tto{\ \ \mmult\ \ } & G^{X_o}\\
\left<\ggg, a\right> & \longmapsto & \ggg\mmult a = \sseq{g_{\xi}\mmult a}_{\xi \in X_{o}}\notag
\eea
If we think of $\ggg\in G^{X_{o}}$ as an $X_{o}$-dimensional vector $\ggg = \sseq{g_{\xi}}_{\xi \in X_{o}}$, this action is scalar multiplication. To show that the algebra $\Lan\lft X$ is also free and that the functor $\Lan:\Set^{(G\times)}\to \Set^{(\times G)}$ factors through the category of free algebras $\Set_{(\times G)}$ as claimed in \eqref{eq:group-isbell}, we need to decompose the underlying set $G^{X_{o}}$ of $\Lan\lft X$ in the same way as the underlying set $X$ of $\lft X$ was decomposed in \eqref{eq:decomX}. The task is thus to determine the set of orbits $\left(G^{X_o}\right)_o$ of $\Lan\lft X$ so that the decomposition
\bea
G^{X_o} & \cong & \big(G^{X_o}\big)_o \times G
\eea
reduces \eqref{eq:action} to the form
\bea\label{eq:action-dec}
\big(G^{X_o}\big)_o \times G \times G &\tto{\ \ !\ \ } & \big(G^{X_o}\big)_o \times G\\
\left<\, \gamma\ ,\ b\ ,\ a\, \right> & \longmapsto & \left<\, \gamma\ ,\ b\mmult a\, \right>\notag
\eea
Aligning \eqref{eq:action} and \eqref{eq:action-dec} shows that the orbit set must be the quotient
\bea\label{eq:Goo}
\big(G^{X_o}\big)_o & = & G^{X_{o}}/ \approx
\eea
modulo the equivalence relation defined for $\ggg, \hhh\in G^{X_o}$ by
\bea\label{eq:scalmult}
\ggg \approx \hhh &\iff & \exists a\in G.\ \ggg\mmult a = \hhh
\eea
Viewing $\ggg$ and $\hhh$ as vectors makes the elements of $\big(G^{X_o}\big)_o$ into rays of colinear vectors, i.e., \emph{projective lines}.
If $G^{X_{o}}$ is a vector space presented in the cartesian coordinates $\sseq{g_{\xi}}_{\xi \in X_{o}}$, then $\big(G^{X_o}\big)_o$ is the corresponding \emph{projective}\/ space presented in the \emph{homogeneous}\/ coordinates $\psseq{g_{\xi}}_{\xi \in X_{o}}$, satisfying the usual homogeneity property
\bea\label{eq:homogenous}
\psseq{g_{\xi_{0}}\ ,\ g_{\xi_{1}}\ ,\ g_{\xi_{2}}\ ,\ \ldots} & = & \psseq{g_{\xi_{0}}\mmult a\ \, ,\ g_{\xi_{1}}\mmult a\, \ ,\ g_{\xi_{2}}\mmult a\ \, ,\ \ldots}
\eea
\paragraph{Intuition and notation: The orbit sets are the projective spaces.} To simplify notation and perhaps make use of familiarity with homogeneous coordinates, we replace the orbit sets $\big(G^{X_o}\big)_o$, defined as the quotients in \eqref{eq:Goo}, by ``projective spaces''
\bea\label{eq:Xcircledast}
X^{\circledast} & = & \Big\{ \psseq{\ggg}\ |\ \ggg \in G^{X_{o}}
\Big\}
\eea
where $\psseq\ggg = \psseq{g_{\xi}}_{\xi\in X_{o}}$ satisfy \eqref{eq:homogenous}. The difference between $X^{\circledast}$ and $\big(G^{X_o}\big)_o$ is, of course, mainly cosmetic, since homogeneous coordinates are just a notation for equivalence classes of vectors modulo scalar multiplication. But the notational cosmetics will help us iterate.
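\paragraph{Example.} For a minimal illustration (the particular group is chosen here only for concreteness; any finite group would do the same job), take $G = \mathbb{Z}_3$, written additively, and a free \action\ with two orbits, $X_o = \{\xi_0, \xi_1\}$. The set $G^{X_o}$ then consists of the $9$ vectors $\sseq{g_{\xi_0}, g_{\xi_1}}$, and the pointwise action \eqref{eq:action} partitions it into $3$ rays of $3$ colinear vectors each, for instance
\[
\psseq{0,1}\ \ =\ \ \big\{<0,1>,\ <1,2>,\ <2,0>\big\}
\]
so that $X^{\circledast} = \big(G^{X_o}\big)_o$ is a three-point ``projective line'' over $\mathbb{Z}_3$, presented by the homogeneous coordinates $\psseq{0,0}$, $\psseq{0,1}$ and $\psseq{0,2}$.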
The analysis of the right \actions\ $\rgt Y= \left(Y\times G\tto ! Y\right)$ is analogous to that of the left \actions, and $\Ran \rgt Y$ is in the form $G\times G^{Y_o} \tto \ast G^{Y_o}$ whenever $\rgt Y$ is free, and empty otherwise. The $\Ran$-images are thus free $(G\times )$-algebras in any case, generated by the orbit set $\big(G^{Y_o}\big)_o$, which can again be presented as the projective space $Y^{\circledast}$ of rays in homogeneous coordinates. We have thus shown that the functors $\Lan$ and $\Ran$ factor through $\Set^{\op}_{(\times G)}$ and $\Set_{(G\times)}$, as claimed in \eqref{eq:group-isbell}.
A further claim is that the algebras for the monad $\RLan = \Ran\Lan$ can only be supported by the free $(G\times)$-algebras, and that the coalgebras for the comonad $\LRan = \Lan\Ran$ can only be supported by the cofree $(\times G)$-coalgebras. The reason is that both $\Lan$ and $\Ran$ take the \actions\ that are not free to the empty set, and the algebra structure maps must be surjective. The constructions involving the $\RLan$-algebras can thus be restricted from $\Set^{(G\times)}$ to $\Set_{(G\times)}$ without any loss; and the constructions of the $\LRan$-coalgebras can be restricted from $\left(\Set^{\op}\right)^{(\times G)}$ to $\Set^{\op}_{(\times G)}$. In particular, on the path to the tight completion of the group $G$, the nucleus of the Isbell adjunction $\Kan:\Up\GGg\to \Do\GGg$ can be reduced to the nucleus of the adjunction between the free algebras $\dKan: \Set^{\op}_{(\times G)}\to \Set_{(G\times)}$. The free algebras are conveniently presented in Kleisli form:
\begin{gather}\label{eq:Kleisli-times}
\Set_{(G\times)}\ \ =\ \ |\Set|\ \ = \ \ \Set_{(\times G)}\\
\Set_{(G\times)}(A,B) \ \ =\ \ \Set(A,G\times B) \qquad \qquad
\Set_{(\times G)}(A,B) \ \ =\ \ \Set(A,B\times G) \notag
\end{gather}
spelled out in Appendix~\ref{Appendix:monad}. Kleisli composition \eqref{eq:kleislicomp} is in this case
\bea
\left(A\tto{<\varphi, f>}G\times B\right)\boxdot \left(B\tto{<\psi, g>}G\times C\right) & = & \left(A\tto{\left<\varphi\mmult\left(f\bullet \psi\right)\,,\ f\bullet g\right>} G\times C\right)
\eea
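The Kleisli composition above can be made concrete. In the Python sketch below (illustrative only, with the additive group $\ZZz_4$ playing the role of $G$), a morphism $A\to G\times B$ is a function returning a pair, and the composite accumulates the $G$-components along the way, exactly as in the displayed formula:

```python
# Kleisli composition for the writer monad G x (-), with G the additive
# group Z4. A morphism A -> G x B is a function a |-> (phi(a), f(a));
# the composite is a |-> (phi(a) * psi(f(a)), g(f(a))).

N = 4  # Z4, written additively, stands in for the group G

def boxdot(m1, m2):
    """Kleisli composite of m1 : A -> G x B and m2 : B -> G x C."""
    def composite(a):
        g1, b = m1(a)
        g2, c = m2(b)
        return ((g1 + g2) % N, c)
    return composite

def unit(a):
    """Kleisli identity: pair with the group unit (0 in additive notation)."""
    return (0, a)

m1 = lambda a: ((a + 1) % N, a * 2)   # A = B = C = int, for illustration
m2 = lambda b: ((3 * b) % N, b + 1)

# unit laws: the identity is neutral on both sides
print(boxdot(unit, m1)(2) == m1(2) == boxdot(m1, unit)(2))  # True
```

Associativity of $\boxdot$ follows from associativity of the group multiplication, and can be spot-checked on these sample morphisms.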
The objects $A, B, C$ of $\Set_{(G\times)}$ and $\Set_{(\times G)}$ represent the generator sets of the free algebras. In the Eilenberg-Moore view in $\Set^{(G\times)}$ and $\Set^{(\times G)}$, the generator sets of the free algebras were the sets of orbits $X_o, Y_o$. Now they are first-class citizens. Restricted to the free algebras and coalgebras and translated to Kleisli form, the adjoint functors $\Kan$ become
\[\begin{tikzar}[row sep = 2ex]
\dLan\colon \Set_{(G\times)} \ar{r}\& \Set^{\op}_{(\times G)}
\\[-1ex]
\ \ \ \ A \ar[mapsto]{r} \ar[thin,shift left=1ex]{dd}[description]{<\alpha,f>}
\& A^{\circledast}
\\
\& A^{\circledast}\times G
\\
\ \ \ \ G\times B
\\
\ \ \ \ B \arrow[mapsto]{r} \& B^{\circledast} \arrow[thin]{uu}[description]{\left<f^{\circledast},\alpha^{\circledast}\right>}
\end{tikzar}
\qquad\qquad\qquad\qquad
\begin{tikzar}[row sep = 2ex]
\Set_{(G\times)} \& \Set^{\op}_{(\times G)}\ :\dRan \ar{l}
\\[-1ex]
A^{\circledast} \ar[thin]{dd}[description]{\left<\beta^{\circledast},t^{\circledast}\right>}
\& A \ar[mapsto]{l} \ \ \ \
\\
\& A\times G \ \ \ \
\\
G\times B^{\circledast}
\\
B^{\circledast} \& B\ \ \ \ \arrow[mapsto]{l} \arrow[thin,shift left=1ex]{uu}[description]{\left<t,\beta\right>}
\end{tikzar}
\]
where $A^{\circledast}$ and $B^{\circledast}$ are the ``projective spaces'' \eqref{eq:Xcircledast} of homogeneous $A$-tuples and $B$-tuples from $G$. These ``projectivizations'' define the object parts of both functors. The arrow parts\footnote{The superscript $\circledast$ in the components is just a convenient way to reuse the names of the input components and does not refer to an operation.} $\dLan(\alpha, f) = \left<f^{\circledast}, \alpha^{\circledast}\right>$ and $\dRan(t,\beta) = \left<\beta^{\circledast}, t^{\circledast}\right>$ are
\begin{align*}
f^{\circledast}\pseq{k_{y}}_{y\in B} & = \pseq{\alpha_{x}\mmult k_{f(x)}}_{x\in A} & \alpha^{\circledast}\pseq{k_{y}}_{y\in B} & = \iota \\
\beta^{\circledast}\pseq{h_{x}}_{x\in A} & = \iota & t^{\circledast}\pseq{h_{x}}_{x\in A} & = \pseq{h_{t(y)}\mmult \beta_{y}}_{y\in B}
\end{align*}
The morphisms $\dLan(\alpha,f)$ and $\dRan(t,\beta)$ remain unchanged if any other fixed elements of $G$ are taken to be $\alpha^\circledast$ and $\beta^\circledast$. This is because $f^\circledast$ and $t^\circledast$ are invariant under scalar multiplication. The monad and the comonad are of the form
\[\begin{tikzar}[row sep = 2ex]
\RLan\colon \Set_{(G\times)} \ar{r}\& \Set_{(G\times)}
\\[-1ex]
\ \ \ \ A \ar[mapsto]{r} \ar[thin,shift left=1ex]{dd}[description]{<\alpha,f>}
\& A^{\circledast\circledast} \arrow[thin]{dd}[description]{\left<\alpha^{\circledast\circledast},f^{\circledast\circledast}\right>}
\\[3ex]
\\
\ \ \ \ G\times B \& G\times B^{\circledast\circledast}
\end{tikzar}
\qquad\qquad\qquad\qquad
\begin{tikzar}[row sep = 2ex]
\LRan\colon \Set^{\op}_{(\times G)} \ar{r}\& \Set^{\op}_{(\times G)}
\\[-1ex]
\ \ \ \ A \times G
\& A^{\circledast\circledast} \times G
\\[3ex]
\\
\ \ \ \ B \ar[mapsto]{r} \ar[thin,shift right=1ex]{uu}[description]{<t,\beta>} \& B^{\circledast\circledast} \arrow[thin]{uu}[description]{\left<t^{\circledast\circledast}, \beta^{\circledast\circledast}\right>}
\end{tikzar}
\]
with the components of $\RLan(\alpha, f)$ and $\LRan(t,\beta)$ in the form
\begin{align*}
\alpha^{\circledast\circledast}\left[\Phi_{\left[\ggg\right]}\right]_{\left[\ggg\right]\in A^\circledast} & = \iota
&
f^{\circledast\circledast}\left[\Phi_{\left[\ggg\right]}\right]_{\left[\ggg\right]\in A^\circledast} & = \left[\Phi_{\Lan(\alpha, f)[\hhh]}\right]_{\left[\hhh\right]\in B^\circledast}
\\
t^{\circledast\circledast}\left[\Psi_{\left[\hhh\right]}\right]_{\left[\hhh\right]\in B^\circledast} & = \left[\Psi_{\Ran(t,\beta)[\ggg]}\right]_{\left[\ggg\right]\in A^\circledast} &
\beta^{\circledast\circledast}\left[\Psi_{\left[\hhh\right]}\right]_{[\hhh]\in B^\circledast} & = \iota
\end{align*}
The $\RLan$-algebras and the $\LRan$-coalgebras provide retractions of the form $A^{\circledast\circledast}\to A$ for the ``projective spaces'' over a group. However, manipulating the Eilenberg-Moore algebras for $\RLan$ and $\LRan$ within the Kleisli categories for $(G\times)$ and $(\times G)$ gets quite involved. An explicit presentation of full cuts for a group seems out of reach. Simple and absolute cuts provide a way forward.
A tight completion $\Labs G$ of a group $G$, along the lines of Def.~\ref{Def:cutabsol}, comprises tuples $<A,B,\ida,\idb,\varphi,\psi>$ as objects, where $A$ and $B$ are sets and $\ida:B^{\circledast}\to G\times B^{\circledast}$ and $\idb:A^{\circledast}\to A^{\circledast} \times G$ are idempotents in $\Set_{(G\times )}$ and $\Set_{(\times G)}$ whose images split on each other:
\bea\label{eq:ida-group}
\begin{tikzar}[column sep=1.8em,row sep = 7ex]
B^{\circledast} \ar[bend right = 15,shift right=2]{dd}[description]{\ltimes}[swap,pos=0.25]{\ida} \ar[two heads]{d}[description]{\ltimes} \& B^{\circledast\circledast} \&\& B^{\circledast\circledast}
\ar{ll}[description]{\rtimes}[swap,pos=0.2]{\psi}
\\
A \ar[tail]{d}[description]{\ltimes} \& A^{\circledast} \ar[tail]{u}[description]{\rtimes} \& \& A^{\circledast} \ar{ll}[description]{\rtimes}[swap,pos=0.2]{\idb}
\ar[tail]{u}[description]{\rtimes}
\\
B^{\circledast} \& B^{\circledast\circledast} \ar[two heads]{u}[description]{\rtimes} \&\& B^{\circledast\circledast} \ar{ll}[description]{\rtimes}[swap,pos=0.2]{\psi} \ar[two heads]{u}[description]{\rtimes}
\end{tikzar}
& \qquad &
\begin{tikzar}[column sep=1.8em,row sep = 7ex]
A^{\circledast} \& A^{\circledast\circledast} \ar{rr}[description]{\ltimes}[pos=0.2]{\varphi} \ar[two heads]{d}[description]{\ltimes} \&\& A^{\circledast\circledast} \ar[two heads]{d}[description]{\ltimes}
\\
B \ar[tail]{u}[description]{\rtimes} \&B^{\circledast} \ar{rr}[description]{\ltimes}[pos=0.2]{\ida} \ar[tail]{d}[description]{\ltimes} \& \& B^{\circledast}
\ar[tail]{d}[description]{\ltimes}
\\
A^{\circledast} \ar[bend left = 15,shift left=2]{uu}[description]{\rtimes}[pos=.25]{\idb} \ar[two heads]{u}[description]{\rtimes} \& A^{\circledast\circledast}\ar{rr}[description]{\ltimes}[pos=0.2]{\varphi} \&\& A^{\circledast\circledast}
\end{tikzar}
\eea
The arrows with the marking $\ltimes$ are in $\Set_{(G\times)}$, whereas the arrows with $\rtimes$ are in $\Set_{(\times G)}$. This means that each marked arrow comes with a $G$-component, as specified in \eqref{eq:Kleisli-times}. The arrows that are left unnamed are the gaps $\gap$ and the intervals $\intv$ from the simple cuts obtained by splitting the displayed idempotents. Switching from the displayed absolute cuts to the underlying simple cut presentation, by erasing the idempotents and naming the gaps and the intervals, displays how $A$ and $B$ determine each other as retracts\footnote{While simple cuts are simpler, we present the construction here in terms of absolute cuts mainly because diagrams like \eqref{eq:ida} and \eqref{eq:ida-group} display both views, the idempotents and their splittings, which is sometimes helpful.}. The homomorphisms are the same in both cases. A $\Labs G$-morphism from $<A,B,\ida,\idb,\varphi,\psi>$ to $<C,D,\idc,\idd,\xi,\zeta>$ is a pair $<f,g>$ of functions $f:A\to G\times C$ and $g: D\to B\times G$ whose images preserve the idempotents:
\[
\begin{tikzar}[row sep = 10ex,column sep = 10ex]
B^{\circledast} \ar{d}[description]{\ltimes}[swap,pos=0.3]{\Ran g}\ar{r}[description]{\ltimes}[pos=0.3]{\ida} \& B^{\circledast} \ar{d}[description]{\ltimes}[pos=0.3]{\Ran g}
\\
D^{\circledast} \ar{r}[description]{\ltimes}[swap,pos=0.3]{\idc} \& D^{\circledast}
\end{tikzar}\qquad \qquad\qquad\qquad
\begin{tikzar}[row sep = 10ex,column sep = 10ex]
A^{\circledast} \ar[leftarrow]{d}[description]{\rtimes}[swap,pos=0.7]{\Lan f}\ar[leftarrow]{r}[description]{\rtimes}[pos=0.7]{\idb} \& A^{\circledast}
\ar[leftarrow]{d}[description]{\rtimes}[pos=0.7]{\Lan f}
\\
C^{\circledast} \ar[leftarrow]{r}[description]{\rtimes}[swap,pos=0.7]{\idd} \& C^{\circledast}
\end{tikzar}
\]
Since the idempotents split on each other, $f$ and $g$ determine each other, and specifying either of them suffices.
\paragraph{The tight completion of $\ZZz_{4}$} comprises pairs of sets $A, B$ where $A$ is a retract of $B^{\circledast}$ and $B$ is a retract of $A^{\circledast}$, as displayed in \eqref{eq:ida-group}. When $A$ and $B$ are finite sets, say with $m$ and $n$ elements respectively, then $A^{\circledast}\cong \ZZz_{4}^{m-1}$ and $B^{\circledast}\cong \ZZz_{4}^{n-1}$, so each constrains the other by $m\leq 4^{n-1}$ and $n\leq 4^{m-1}$. The tight completion of $\ZZz_{4}$ thus comprises retracts of its powers.
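The cardinality $|A^{\circledast}| = 4^{m-1}$ underlying these constraints can be checked directly for small cases. A short Python sketch (ours, illustrative only) enumerates the rays of $\ZZz_{4}^{m}$:

```python
from itertools import product

# Rays of Z4^m: tuples of elements of Z4 modulo simultaneous translation
# by a scalar. Each ray has exactly 4 members, so there are 4^(m-1) rays.

def rays(m, n=4):
    seen = set()
    result = []
    for g in product(range(n), repeat=m):
        canonical = tuple((gi - g[0]) % n for gi in g)  # first coordinate 0
        if canonical not in seen:
            seen.add(canonical)
            result.append(canonical)
    return result

for m in (1, 2, 3):
    print(m, len(rays(m)))  # 1 1 / 2 4 / 3 16, i.e. 4^(m-1) rays each time
```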
\paragraph{The tight completion of $\QQq$} similarly comprises pairs of retractions $A$ of $B^{\circledast}$ and $B$ of $A^{\circledast}$, where $A^{\circledast}$ and $B^{\circledast}$ are familiar either as the 1-dimensional Grassmannians of the spaces $\QQq^{A}$ and $\QQq^{B}$, or as the rational subspaces of real projective spaces of dimensions $A$ and $B$. They are also the projective objects in the category of $\QQq$'s \actions\ on its powers. Projective spaces gave their name to projective modules, which gave their name to the abstract notion of projectivity in categories, which in tight completions refers back to projective spaces.
\subsection{Tight completions of monoids and categories}
Viewing a monoid $(M, \mmult, \iota)$ as a category $\MMm$ with a single object, i.e., $|\MMm| = \{o\}$ and $\MMm(o,o)=M$, leads to the loose completions $\Do\MMm$ and $\Up\MMm$. The correspondence of $\MMm$-\actions\ and the algebras for the induced monads, described in Appendix~\ref{Appendix:monmon}, still admits interpreting $\Do\MMm$ as the category of algebras $\Set^{(M\times)}$, and $\Up\MMm$ as the category of coalgebras $\left(\Set^\op\right)^{(\times M)}$. The left-hand adjunction in \eqref{eq:group-isbell} is again realized by homming into the sole representable, as in \eqref{eq:groupcon}, with $G$ replaced by $M$. A cocone $h:\lft X\to M$ still satisfies $h(a\ast x) = a\mmult h(x)$, but this is as far as replacing groups by monoids goes. Since group elements have inverses, homming into a group induced a free \action, and the adjunction in \eqref{eq:group-isbell} factored through free algebras and coalgebras. Homming into a monoid maps the orbits $Mx = \{a\ast x\in X\ |\ a\in M\}$ into monoid ideals; the orbits are generally not as big as $M$ and may not partition $X$. The relation
\bea\label{eq:preord}
x\prec y & \iff & \exists a\in M. \ a\ast x = y\ \ \iff\ \ Mx\subseteq My
\eea
is now a mere preorder.
\begin{lemma} \label{lemma:freemon} An orbit $Mx$ is in a one-to-one correspondence with $M$ if and only if
\bea\label{eq:cancellable}
a\ast x=b\ast x &\implies & a=b
\eea
The preorder $\prec$ is symmetric if and only if every pair that has a lower bound also has an upper bound:
\bea\label{eq:gcd}
z\prec x, y & \implies & \exists u.\ x,y \prec u
\eea
\end{lemma}
The monoid \action\ extensions $\Kan: \left(\Set^\op\right)^{(\times M)} \tto{\ \ \ \ }\Set^{(M\times)}$ are thus in the form \eqref{eq:Langroup} and factor through free algebras just when the monoid $M$ itself satisfies (\ref{eq:cancellable}--\ref{eq:gcd}). In general, the induced algebras are still projective, but not free:
\bea\label{eq:Lanmonoid}
\Lan \lft X & = & \left(X^\circledast \times M \tto ! X^\circledast\right) \mbox{ where}\\
&& X^\circledast \ =\ \left\{ \hhh\in M^X\ |\ \hhh_{a\ast x} = a\mmult \hhh_x\right\}
\eea
with the pointwise action $\hhh \mmult b = \sseq{ h_x\mmult b}_{x\in X}$. The right extension $\Ran\rgt Y$ is analogous. The $\RLan$-algebras and the $\LRan$-coalgebras for $\RLan=\Ran\Lan$ and $\LRan=\Lan\Ran$ in the standard (Eilenberg-Moore) form carry a lot of structure, but using the simple nucleus presentation from \cite[Sec.~8]{PavlovicD:nucleus} simplifies the task. This construction is, however, nearly as general as it gets, as noted in Appendix~\ref{Appendix:catspan}. | 193,086 |
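Condition \eqref{eq:cancellable} separates the group-like from the general monoid case, and it is easy to test mechanically. A Python sketch (ours, for finite examples only): $\ZZz_4$ under addition is cancellable, while $\{0,1\}$ under multiplication is not, since $0$ absorbs:

```python
# Check condition (eq:cancellable): a*x = b*x implies a = b, for a finite
# monoid acting on itself. Cancellable actions have orbits as big as M.

def cancellable(elements, op):
    """True iff the self-action satisfies a*x = b*x => a = b."""
    return all(
        op(a, x) != op(b, x)
        for x in elements for a in elements for b in elements if a != b
    )

print(cancellable(range(4), lambda a, x: (a + x) % 4))  # True: groups cancel
print(cancellable((0, 1), lambda a, x: a * x))          # False: 0*0 == 1*0
```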
\begin{document}
\title{A Variable Sample-size Stochastic Quasi-Newton Method
for Smooth and Nonsmooth Stochastic Convex Optimization}
\author{Afrooz Jalilzadeh, Angelia Nedi{\'{c}}, Uday V. Shanbhag\footnote{Authors contactable at \texttt{[email protected],[email protected],[email protected],[email protected]} and gratefully acknowledge the support of NSF Grants 1538605 (Shanbhag), 1246887 (CAREER, Shanbhag), CCF-1717391 (Nedi\'{c}), and by the ONR grant no. N00014-12-1-0998 (Nedi\'{c}); Conference version to appear in IEEE Conference on Decision and Control (2018).}, and Farzad Yousefian}
\date{}
\maketitle
\begin{abstract}
Classical theory for quasi-Newton schemes has focused on smooth
deterministic unconstrained optimization while recent forays into
stochastic convex optimization have largely resided in
smooth, unconstrained, and strongly convex regimes. Naturally, there is
a compelling need to address nonsmoothness, the lack of strong
convexity, and the presence of constraints. Accordingly, this paper
presents a quasi-Newton framework that can process merely convex and
possibly nonsmooth (but smoothable) stochastic convex problems.
We propose a framework that combines iterative
smoothing and regularization with a variance-reduced scheme reliant on
using an increasing sample-size of gradients. We make the following
contributions. (i) We develop {\em a regularized and smoothed variable
sample-size BFGS update} ({\bf rsL-BFGS}) that generates a sequence of
Hessian approximations and can accommodate nonsmooth convex objectives
by utilizing iterative {regularization and} smoothing. (ii) In {\em
strongly convex} regimes with state-dependent noise, the proposed variable
sample-size stochastic quasi-Newton \aj{({\bf VS-SQN})} scheme admits a non-asymptotic linear rate
of convergence while the oracle complexity of computing an $\epsilon$-solution
is $\mathcal{O}({\kappa^{m+1}}/\epsilon)$ where $\kappa$ denotes the condition number and $m\geq 1$.
In nonsmooth (but smoothable) regimes, {using Moreau smoothing {retains
the} linear convergence rate {while} using more general smoothing} leads to
a deterioration of the rate to $\mathcal{O}(k^{-1/3})$ for the
resulting smoothed {\bf VS-SQN} (or {\bf sVS-SQN}) scheme. Notably, {the
nonsmooth regime allows for accommodating convex} constraints; (iii) In merely
convex but smooth settings, the regularized {\bf VS-SQN} scheme
{\bf rVS-SQN} displays a rate of $\mathcal{O}(1/k^{(1-\varepsilon)})$ with
an oracle complexity of $\mathcal{O}(1/\epsilon^3)$. When the smoothness
requirements are weakened, the rate for the regularized and smoothed {\bf
VS-SQN} scheme {\bf rsVS-SQN} worsens to $\mathcal{O}(k^{-1/3})$. Such
statements allow for a state-dependent noise assumption under a quadratic
growth property on the objective. To the best of our knowledge, the rate
results are {amongst the first available rates in nonsmooth regimes.}
Preliminary numerical evidence suggests that the schemes compare well with
accelerated gradient counterparts on selected problems in stochastic
optimization and machine learning with significant benefits in ill-conditioned regimes.
\end{abstract}
\section{Introduction}
We consider the stochastic convex optimization problem
\begin{align}\label{main problem}
\min_{x\in \mathbb{R}^n} \ f(x)\triangleq \mathbb{E}[\aj{F(x,\xi(\omega))}],
\end{align}
where $ \aj{\xi}: \Omega \rightarrow
\mathbb{R}^o$, ${F}: \mathbb{R}^n \times \mathbb{R}^o \rightarrow
\mathbb{R}$, {and} $(\Omega,\mathcal{F},\mathbb{P})$ denotes the associated
probability space. Such problems have broad applicability in engineering, economics, statistics, and machine learning.
Over the last two decades, two avenues for solving such problems have emerged
via sample-average approximation~(SAA)~\cite{kleywegt2002sample} and stochastic
approximation (SA)~\cite{robbins51sa}. In this paper, we focus on
quasi-Newton variants of the latter. Traditionally, SA schemes have
been afflicted by a key shortcoming in {that such
schemes display a markedly poorer convergence rate
than their deterministic variants.} For instance, in standard stochastic
gradient schemes for strongly convex smooth problems {with
Lipschitz continuous gradients}, the mean-squared error diminishes at a rate
of $\mathcal{O}(1/k)$ while deterministic schemes display a geometric rate of
convergence. This gap can be reduced by utilizing an increasing sample-size of
gradients, an approach
first considered in~\cite{FriedlanderSchmidt2012,byrd12}, and subsequently
refined for gradient-based methods for strongly
convex~\cite{shanbhag15budget,jofre2017variance,jalilzadeh2018optimal},
convex~\cite{jalilzadeh16egvssa,ghadimi2016accelerated,jofre2017variance,jalilzadeh2018optimal},
and nonsmooth convex regimes~\cite{jalilzadeh2018optimal}. Variance-reduced techniques have also been considered for stochastic quasi-Newton (SQN)
techniques~\cite{lucchi2015variance,zhou2017stochastic,bollapragada2018progressive}
under twice differentiability and strong convexity requirements. To the best
of our knowledge, the only available SQN scheme for merely convex but smooth
problems is the regularized SQN scheme presented in our prior
work~\cite{yousefian2017stochastic} where an iterative regularization of
the form ${1 \over 2} \mu_k \|x_k - x_0\|^2$ is employed to address the lack
of strong convexity while $\mu_k$ is driven to zero at a suitable rate.
Furthermore, a sequence of matrices $\{H_k\}$ is generated using a
regularized L-BFGS \us{update} ({\bf rL-BFGS}). However, many
of the extant schemes in this regime either have gaps in the rates (compared
to deterministic counterparts) or cannot contend with nonsmoothness. \\
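To see why an increasing sample-size narrows the gap to deterministic rates, consider the following toy Python sketch (ours, purely illustrative and not the scheme proposed in this paper): a stochastic gradient iteration on the scalar quadratic $f(x)=x^2/2$ whose batch size doubles at every step, so the variance of the averaged gradient decays geometrically along the iterates.

```python
import random

# Toy variable sample-size stochastic gradient iteration on f(x) = x^2/2,
# with stochastic gradient samples x + xi, xi ~ N(0, 1). Doubling the
# batch size N_k at each step drives down the variance of the averaged
# gradient geometrically, mimicking the linear rates of variance-reduced
# schemes on strongly convex problems.

def vs_sgd(x0, steps=15, gamma=0.5, seed=0):
    rng = random.Random(seed)
    x = x0
    for k in range(steps):
        n_k = 2 ** k  # geometrically increasing sample size
        grad = x + sum(rng.gauss(0, 1) for _ in range(n_k)) / n_k
        x -= gamma * grad
    return x

print(abs(vs_sgd(1.0)) < 0.05)  # True: iterate ends near the minimizer x* = 0
```

With a fixed batch size the same iteration would stall at a noise floor of order $\gamma\sigma^2/N$; the doubling schedule removes that floor at the price of a growing per-iteration oracle cost, which is the trade-off quantified by the oracle complexities in Table~\ref{table results}.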
\begin{wrapfigure}[12]{r}{0.35\textwidth}
\vspace*{-1cm}
{\includegraphics[scale=0.23]{lewis_different.pdf}}\caption{Lewis-Overton example
\label{fig:nssqn}}{}
\end{wrapfigure}
\noindent {\bf Quasi-Newton schemes for nonsmooth convex problems.} { There
have been some attempts to apply (L-)BFGS directly to the deterministic
nonsmooth convex problems, but the method may fail, as shown
in~\cite{lukvsan1999globally, haarala2004large, lewis2008behavior}; e.g.,
in~\cite{lewis2008behavior}, the authors consider minimizing ${1\over
2}\|x\|^2+\max\{2|x_1|+x_2,3x_2\}$ in $\mathbb{R}^2$, for which BFGS takes a null step
(steplength is zero) from various starting points and fails to converge to
the optimal solution $(0,-1)$ (except when initiated from $(2,2)$); see
Fig.~\ref{fig:nssqn}. Contending with nonsmoothness has been considered via a
subgradient quasi-Newton method \cite{yu2010quasi} for which global
convergence can be recovered by identifying a descent direction and
utilizing a line search. An alternate approach~\cite{yuan2013gradient}
develops a globally convergent trust region quasi-Newton method in which Moreau
smoothing was employed. Yet, there appear to be neither non-asymptotic
rate statements available nor considerations of stochasticity in nonsmooth
regimes.\\
\noindent {\bf Gaps.} Our research is motivated by several gaps. (i)
First, can we develop smoothed generalizations of ({\bf rL-BFGS}) that can
contend with nonsmooth problems in a seamless fashion? (ii) Second, can one
recover deterministic convergence rates (to the extent possible) by leveraging
variance reduction techniques? (iii) Third, can one address nonsmoothness in
stochastic convex optimization, which would allow for addressing more general
problems as well as accounting for the presence of constraints? (iv) Finally,
many prior results impose strong moment assumptions on the noise, which
require weakening to allow for wider applicability of the schemes.
\subsection{{A survey of literature}} Before proceeding, we review some
relevant prior research in stochastic quasi-Newton methods and variable
sample-size schemes for stochastic optimization. {In Table~\ref{table
results}, we summarize the key advances in SQN methods where much of prior work
focuses on strongly convex (with a few exceptions). Furthermore, from
Table~\ref{table assumption}, it can be seen that an assumption of twice
continuous differentiability and boundedness of eigenvalues on the true Hessian
is often made. In addition, almost all results rely on having a uniform bound
on the conditional second moment of stochastic gradient error.
\begin{table}[htb]
\scriptsize
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
&Convexity&Smooth&$N_k$&$\gamma_k$&Conver. rate&Iter. complex.&Oracle complex.\\ \hline\hline
RES \cite{mokhtari2014res}&SC&\cmark&N&$1/k$&$\mathcal O(1/k)$&-&-\\ \hline
Block BFGS \cite{gower2016stochastic}&\multirow{3}{*}{SC}&\multirow{3}{*}{\cmark}& \multirow{2}{*}{N (full grad}&\multirow{3}{*}{$\gamma$}&\multirow{3}{*}{$\mathcal O(\rho^k)$}&\multirow{3}{*}{-}&\multirow{3}{*}{-}\\
Stoch. L-BFGS \cite{moritz2016linearly}&&&&&&&\\
&&&periodically) &&&&\\ \hline
SQN \cite{wang2017stochastic}&NC&\cmark&$N$&$k^{-0.5}$& $\mathcal O(1/\sqrt k)$& $\mathcal O(1/\epsilon^2)$&- \\ \hline
\multirow{2}{*}{SdLBFGS-VR \cite{wang2017stochastic}}&\multirow{2}{*}{NC}&\multirow{2}{*}{\cmark}&$N$(full grad&\multirow{2}{*}{$\gamma$}&\multirow{2}{*}{ $\mathcal O(1/k)$}&\multirow{2}{*}{ $\mathcal O(1/\epsilon)$}&\multirow{2}{*}{-} \\
&&&periodically)&&&&\\ \hline
r-SQN \cite{yousefian2017stochastic}&C&\cmark&$1$&$k^{-2/3+\varepsilon}$&$\mathcal O(1/k^{1/3-\varepsilon})$&-&-\\ \hline
SA-BFGS \cite{zhou2017stochastic}&SC&\cmark&$N$&$\gamma_k$&$\mathcal O(\rho^k)$&$\mathcal O(\ln(1/\epsilon))$&$\mathcal O({1/ \epsilon^2}(\ln({1/\epsilon}))^4)$\\ \hline
Progressive&\multirow{2}{*}{NC}&\multirow{2}{*}{\cmark}&\multirow{2}{*}{-}&\multirow{2}{*}{$\gamma$}&\multirow{2}{*}{$\mathcal O(1/k)$}&\multirow{2}{*}{-}&\multirow{2}{*}{-}\\
Batching \cite{bollapragada2018progressive}&&&&&&&\\ \hline
Progressive &\multirow{2}{*}{SC}&\multirow{2}{*}{\cmark}&\multirow{2}{*}{-}&\multirow{2}{*}{$\gamma$}&\multirow{2}{*}{$\mathcal O(\rho^k)$}&\multirow{2}{*}{-}&\multirow{2}{*}{-}\\
Batching \cite{bollapragada2018progressive}&&&&&&&\\ \hline\hline
\eqref{VS-SQN}&SC&\cmark&$\lceil \rho^{-k}\rceil$&$\gamma$&$\mathcal O(\rho^k)$&$\mathcal O({\kappa}\ln(1/\epsilon))$&$\mathcal O(\kappa/\epsilon)$\\ \hline
\eqref{sVS-SQN}&SC&\xmark&$\lceil {\rho^{-k}}\rceil$&$ {\gamma}$&$\mathcal O({\rho^k})$&$\mathcal O({\ln(1/\epsilon)})$&$\mathcal O({1/\epsilon})$\\ \hline
\eqref{rVS-SQN}&C&\cmark&$\lceil k^a\rceil$&$k^{-\varepsilon}$&$\mathcal O(1/k^{1-\varepsilon})$&$\mathcal O(1/\epsilon^{1\over 1-\varepsilon})$&$\mathcal{O}(1/\epsilon^{(3+\varepsilon)/(1-\varepsilon)})$\\ \hline
\eqref{rsVS-SQN}& C&\xmark&$\lceil k^a\rceil$&$k^{-1/3+\varepsilon}$&$\mathcal O(1/k^{1/3})$&$\mathcal O(1/\epsilon^{{3}})$& $\mathcal O\left({1/ {\epsilon}^{{(2+\varepsilon)/( 1/3)}}}\right)$\\ \hline
\end{tabular}
\caption{Comparing convergence rate of related schemes (note that $a>1$)} \label{table results}
\end{table}
\begin{table}[htb]
\scriptsize
\begin{tabular}{|c|c|c|c|p{3in}|} \hline
&Convexity&Smooth&state-dep. noise&Assumptions\\ \hline\hline
RES \cite{mokhtari2014res}&SC&\cmark&\xmark&$\ulambda\mathbf{I} \preceq H_k\preceq \olambda \mathbf{I}, \quad 0<\ulambda\leq \olambda$, $f$ is twice differentiable \\ \hline
Stoch. block BFGS \cite{gower2016stochastic}&\multirow{3}{*}{SC}&\multirow{3}{*}{\cmark}&\multirow{3}{*}{\xmark}&\multirow{2}{*}{$\ulambda\mathbf{I} \preceq \nabla^2 f(x) \preceq \olambda \mathbf{I}, \quad 0<\ulambda\leq \olambda$, $f$ is twice differentiable}\\
Stoch. L-BFGS \cite{moritz2016linearly}&&&&\\ \hline
SQN for nonconvex \cite{wang2017stochastic}& NC &\cmark& \xmark& $\ulambda\mathbf{I} \preceq \nabla^2 f(x) \preceq \olambda \mathbf{I}, \quad 0<\ulambda\leq \olambda$, $f$ is differentiable\\ \hline
SdLBFGS-VR \cite{wang2017stochastic}&NC&\cmark&\xmark&$\nabla^2 f(x) \preceq \olambda \mathbf{I},\quad \olambda\geq 0$, $f$ is twice differentiable \\ \hline
r-SQN \cite{yousefian2017stochastic}&C&\cmark&\xmark&$\ulambda\mathbf{I} \preceq H_k\preceq \olambda \mathbf{I}, \quad 0<\ulambda\leq \olambda$, $f$ is differentiable\\ \hline
\multirow{2}{*}{SA-BFGS \cite{zhou2017stochastic}}&\multirow{2}{*}{SC}&\multirow{2}{*}{\cmark}&\multirow{2}{*}{\xmark}&$f_k(x)$ is standard self-concordant for every possible sampling, the Hessian is Lipschitz continuous, \\
&&&&$\ulambda\mathbf{I} \preceq \nabla^2 f(x) \preceq \olambda \mathbf{I}, \quad 0<\ulambda\leq \olambda$, $f$ is C$^2$\\ \hline
Progressive Batching \cite{bollapragada2018progressive}&NC&\cmark&\xmark& $\nabla^2f(x) \preceq \olambda \mathbf{I}, \quad \olambda\geq 0$, sample size is controlled by the exact inner product quasi-Newton test, $f$ is C$^2$\\ \hline
Progressive Batching \cite{bollapragada2018progressive}&SC&\cmark&\xmark&$\ulambda\mathbf{I} \preceq \nabla^2 f(x) \preceq \olambda \mathbf{I}, \quad 0<\ulambda\leq \olambda$, sample size controlled by exact inner product quasi-Newton test, $f$ is C$^2$\\ \hline \hline
\eqref{VS-SQN} &SC&\cmark&\cmark&$\ulambda\mathbf{I} \preceq H_k\preceq \olambda_k \mathbf{I}, \quad 0<\ulambda\leq \olambda_k$, $f$ is differentiable\\ \hline
\eqref{sVS-SQN} &SC&\xmark&\cmark&$\ulambda_k\mathbf{I} \preceq H_k\preceq \olambda_k \mathbf{I}, \quad 0<\ulambda_k\leq \olambda_k$\\ \hline
\multirow{2}{*}{\eqref{rVS-SQN} }&\multirow{2}{*}{C}&\multirow{2}{*}{\cmark}&\cmark&$\ulambda\mathbf{I} \preceq H_k\preceq \olambda_k \mathbf{I}, \quad 0<\ulambda\leq \olambda_k$, $f(x)$ has quadratic growth property\\
&&&\xmark&$\ulambda\mathbf{I} \preceq H_k\preceq \olambda \mathbf{I}$, $f$ is differentiable\\ \hline
\multirow{2}{*}{\eqref{rsVS-SQN} }& \multirow{2}{*}{C}&\multirow{2}{*}{\xmark}&\cmark&$\ulambda_k\mathbf{I} \preceq H_k\preceq \olambda_k \mathbf{I}, \quad 0<\ulambda_k\leq \olambda_k$, $f(x)$ has quadratic growth property\\ \hline
\end{tabular}
\caption{Comparing assumptions of related schemes }
\label{table assumption}
\end{table}
\noindent {\bf (i) Stochastic quasi-Newton~(SQN) methods.} QN
schemes~\cite{liu1989limited,nocedal99numerical} have proved enormously influential in solving nonlinear programs, motivating the use of
stochastic Hessian information~\cite{byrd12}. \aj{In 2014, Mokhtari and Ribeiro~\cite{mokhtari2014res} introduced a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method \cite{Fletcher} by updating the
matrix $H_k$ using a modified BFGS update rule} to ensure
convergence while limited-memory variants~\cite{byrd2016stochastic,
mokhtari2015global} and nonconvex generalizations~\cite{wang2017stochastic}
were subsequently introduced. In our prior work~\cite{yousefian2017stochastic},
an SQN method was presented for merely convex smooth problems, characterized by
rates of $\mathcal O(1/k^{{1\over 3}-\varepsilon})$ and $\mathcal
O(1/k^{1-\varepsilon})$ for the stochastic and deterministic case,
respectively. In~\cite{yousefian17smoothing}, we utilize convolution-based
smoothing to address nonsmoothness and provide a.s. convergence guarantees and
rate statements.
\noindent {\bf (ii) Variance reduction schemes for stochastic optimization.}
Increasing sample-size schemes for finite-sum machine learning
problems~\cite{FriedlanderSchmidt2012,byrd12} have provided the basis for a
range of variance reduction schemes in machine
learning~\cite{roux2012stochastic,xiao2014proximal},
amongst others. By utilizing variable sample-size (VS) stochastic
gradient schemes, linear convergence rates were obtained for strongly convex
problems~\cite{shanbhag15budget,jofre2017variance} and these rates were
subsequently improved (in a constant factor sense) through a VS-{\em accelerated} proximal method developed by Jalilzadeh et
al.~\cite{jalilzadeh2018optimal} ({called ({\bf VS-APM})}). In convex regimes, Ghadimi and
Lan~\cite{ghadimi2016accelerated} developed an accelerated framework that
admits the optimal rate of $\mathcal{O}(1/k^2)$ and the optimal oracle complexity (also see
~\cite{jofre2017variance}), improving the rate statement presented
in~\cite{jalilzadeh16egvssa}. More recently, in~\cite{jalilzadeh2018optimal},
Jalilzadeh et al. present a smoothed accelerated scheme that admits the optimal
rate of $\mathcal{O}(1/k)$ and optimal oracle complexity for nonsmooth problems,
recovering the findings in~\cite{ghadimi2016accelerated} in the smooth regime.
Finally, more intricate sampling rules are developed
in~\cite{bollapragada2017adaptive,pasupathy2018sampling}.
\noindent {\bf (iii) Variance reduced SQN schemes.} Linear~\cite{lucchi2015variance}
and superlinear~\cite{zhou2017stochastic} convergence statements for variance
reduced SQN schemes were provided in twice differentiable regimes under suitable
assumptions on the Hessian. A ({\bf VS-SQN}) scheme with
L-BFGS~\cite{bollapragada2018progressive} was presented in strongly convex
regimes under suitable bounds on the Hessian. }
\subsection{Novelty and contributions}
{In this paper, we consider four variants of our proposed variable sample-size stochastic quasi-Newton method, \us{distinguished by whether the function $F(x,\omega)$ is strongly convex/convex and smooth/nonsmooth. The vanilla scheme is given by
\begin{align}
x_{k+1}:=x_k-\gamma_kH_k{\frac{\sum_{j=1}^{N_k} u_k(x_k,\omega_{j,k})}{N_k}},
\end{align}
where $H_k$ denotes an approximation of the inverse of the Hessian, \af{$\omega_{j,k}$ denotes the $j^{th}$ realization of $\omega$ at the $k^{th}$ iteration},
$N_k$ denotes the sample-size at iteration $k$, and $u_k(x_k,\omega_{j,k})$ is
given by one of the following: (i) ({\bf VS-SQN}) where $F(.,\omega)$ is
strongly convex and smooth, $u_k (x_k, \omega_{j,k}) \triangleq \nabla_x
F(x_k,\omega_{j,k})$; (ii) Smoothed ({\bf VS-SQN}) or ({\bf sVS-SQN}) where
$F(.,\omega)$ is strongly convex and nonsmooth and $F_{\eta_k}(x,\omega)$ is a smooth approximation of $F(x,\omega)$, $u_k (x_k, \omega_{j,k}) \triangleq
\nabla_x F_{\eta_k}(x_k,\omega_{j,k})$; (iii) Regularized ({\bf VS-SQN}) or ({\bf
rVS-SQN}) where $F(.,\omega)$ is convex and smooth {and $F_{\mu_k}(.,\omega)$ is a regularization of $F(.,\omega)$}, $u_k (x_k, \omega_{j,k})
\triangleq \nabla_x F_{\mu_k}(x_k,\omega_{j,k})$; (iv) regularized and smoothed
({\bf VS-SQN}) or ({\bf rsVS-SQN}) where $F(.,\omega)$ is convex and possibly
nonsmooth and $F_{\eta_k,\mu_k}(.,\omega)$ denotes a regularized smoothed approximation, $u_k (x_k, \omega_{j,k}) \triangleq \nabla_x
F_{\eta_k,\mu_k}(x_k,\omega_{j,k})$. We recap these definitions in the
relevant sections. We briefly discuss our contributions and accentuate the novelty of our work.\\
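To fix ideas, the vanilla update above can be sketched in a few lines of Python. The quadratic objective, the additive noise model, and the choice $H_k=\mathbf{I}$ below are illustrative assumptions only; the subsequent sections replace $H_k$ with the (rs)L-BFGS approximations and prescribe $\gamma_k$ and $N_k$.

```python
import numpy as np

def vs_sqn_step(x, gamma, H, grad_oracle, N, rng):
    """One iteration of the vanilla scheme:
    x_{k+1} = x_k - gamma_k * H_k * (1/N_k) * sum_j u_k(x_k, w_{j,k})."""
    grads = np.stack([grad_oracle(x, rng) for _ in range(N)])
    return x - gamma * H @ grads.mean(axis=0)

# Illustrative strongly convex quadratic with noisy sampled gradients
# (assumed data, not from the paper): grad F(x, w) = A x + 0.01 w.
rng = np.random.default_rng(0)
A = np.diag([1.0, 4.0])
grad_oracle = lambda x, rng: A @ x + 0.01 * rng.standard_normal(2)

x, H, gamma, N_k = np.array([5.0, 5.0]), np.eye(2), 0.2, 4
for k in range(50):
    x = vs_sqn_step(x, gamma, H, grad_oracle, N_k, rng)
    N_k = int(np.ceil(1.05 * N_k))   # geometrically increasing sample size
print(np.linalg.norm(x))             # small residual near the minimizer x* = 0
```

With $H_k=\mathbf{I}$ this reduces to a variable sample-size stochastic gradient method; variants (i)--(iv) differ only in the oracle supplying $u_k$ and in the schedules of the smoothing/regularization parameters.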
\noindent (I) {\em A regularized smoothed L-BFGS update.} A regularized
smoothed L-BFGS update ({\bf rsL-BFGS}) is developed in Section~\ref{sec:rslbfgs},
extending the realm of L-BFGS scheme to merely convex and possibly nonsmooth
regimes by integrating both regularization and smoothing. As a consequence, SQN techniques can now contend with merely convex {and nonsmooth} problems with convex constraints.\\
\noindent (II) {\em Strongly convex problems.} (II.i) ({\bf VS-SQN}). In Section~\ref{sec:3}, we
present a variable sample-size SQN scheme and prove that the convergence rate
is $\mathcal{O}(\rho^k)$ (where $\rho < 1$) while the iteration and oracle
complexity are proven to be $\mathcal{O}({\kappa^{m+1}} \ln(1/\epsilon))$ and
$\mathcal{O}(1/\epsilon)$, respectively. Notably, our findings are under a
weaker assumption of state-dependent noise (thereby extending the result
from~\cite{bollapragada2018progressive}) and do not necessitate assumptions of
twice continuous
differentiability~\cite{moritz2016linearly,gower2016stochastic} or Lipschitz
continuity of the Hessian~\cite{zhou2017stochastic}. (II.ii) ({\bf sVS-SQN}). By integrating a smoothing parameter, we extend ({\bf VS-SQN}) to
contend with nonsmooth but smoothable objectives. Via Moreau smoothing,
we show that ({\bf sVS-SQN}) retains the optimal rate and complexity
statements of ({\bf VS-SQN}) while {sublinear} rate statements for {$(\alpha,\beta)$ smoothable functions} are also provided.} \\
\noindent (III) {\em Convex problems.} (III.i) ({\bf rVS-SQN}). A {\em
regularized ({\bf VS-SQN}) } scheme is presented in Section~\ref{sec:4} based on {the}
({\bf rL-BFGS}) update and admits a rate of
{$\mathcal{O}(1/k^{1-2\varepsilon})$} with an oracle complexity of $\mathcal
O\left({ {\epsilon}^{-{3+\varepsilon\over 1-\varepsilon}}}\right)$, improving
prior rate statements for SQN schemes for smooth convex problems and obviating
prior inductive arguments. In addition, we show that ({\bf rVS-SQN})
produces sequences that converge to the solution in an a.s. sense. Under a
suitable growth property, these statements can be extended to the
state-dependent noise regime. (III.ii) ({\bf rsVS-SQN}). {\em A regularized
smoothed $(${\bf VS-SQN}$)$} is presented that leverages the ({\bf rsL-BFGS})
update and admits a rate of $\mathcal O(k^{-{1\over 3}})$, among the
first known rates for SQN schemes for nonsmooth convex programs. Again imposing
a growth assumption allows for weakening the requirements to
state-dependent noise.\\
\noindent (IV) {\em Numerics.} Finally, in Section~\ref{sec:5}, we apply the
({\bf VS-SQN}) schemes on strongly convex/convex and smooth/nonsmooth stochastic
optimization problems. In comparison with variable sample-size accelerated
proximal gradient schemes, we observe that ({\bf VS-SQN}) schemes compete well and
outperform gradient schemes for ill-conditioned problems when the number of QN
updates increases. {In addition, SQN schemes do far better in computing sparse solutions, in contrast with standard subgradient and variance-reduced accelerated gradient techniques.} {Finally}, via smoothing, ({\bf VS-SQN}) schemes can be seen to resolve both nonsmooth and constrained problems.\\
{\bf Notation.} $\mathbb{E}[\bullet]$ denotes the expectation with respect to
the probability measure $\mathbb{P}$ and we refer to \aj{${\nabla_x}
{F}(x,\xi(\omega))$} by ${\nabla_x} {F}(x,\omega)$. We denote the optimal objective
value (or solution) of \eqref{main problem} by $f^*$ (or $x^*$) and the set of the optimal
solutions by $X^*$, {which is assumed to be nonempty}. {For a vector $x\in \mathbb R^n$ and a {nonempty} set $X \subseteq\mathbb R^n$, the Euclidean distance of $x$ from $X$ is denoted by ${\rm dist}(x,X)$.}
\section{{Background and Assumptions}}
In Section~\ref{sec:smooth}, we provide some background on smoothing
techniques {and} then proceed to
define the {\em regularized and smoothed L-BFGS method} or {\bf(rsL-BFGS)}
update rule {employed for generating} the sequence of Hessian approximations
$H_k$ in Section~\ref{sec:rslbfgs}. We conclude this section with a summary of the main assumptions in Section~\ref{sec:assump}.
\subsection{Smoothing of nonsmooth convex functions} \label{sec:smooth}
We begin by defining {$L$-smoothness} and $(\alpha,\beta)$-{\em smoothability}~\cite{beck17fom}.
\begin{definition}
A function $f:\mathbb R^n\to \mathbb R$ is said to be $L$-smooth if it {is} differentiable and {there exists an $L > 0$ such that} $\|\nabla f(x)-\nabla f(y) \|\leq L\|x-y\|$ for all $x,y\in \mathbb R^n$.
\end{definition}
\begin{definition}{\bf [($\alpha,\beta$)-smoothable~\cite{beck17fom}]} {A
convex function $f: \mathbb{R}^n \to \mathbb{R}$ is
$(\alpha,\beta)$-smoothable if there exists a convex C$^1$ function
$f_{\eta}: \mathbb{R}^n \to \mathbb{R}$ satisfying the following: (i)
$f_{\eta}(x) \leq f(x) \leq f_{\eta}(x)+\eta \beta$ for all $x$; and (ii)
$f_{\eta}(x)$ is $\alpha/\eta$-smooth.} \end{definition}
{Some instances of smoothing~\cite{beck17fom} include the following}:
\noindent (i) If $f(x) \triangleq \|x\|_2$ and $f_\eta(x) { \triangleq } \sqrt{\|x\|_2^2 + \eta^2} -
\eta$, then $f$ is $(1,1)$-smoothable; \noindent (ii) If $f(x) \triangleq
\max\{x_1,x_2, \hdots, x_n\}$ and $f_{\eta}(x) { \triangleq } \eta
\ln(\sum_{i=1}^n e^{x_i/\eta})-\eta \ln(n)$, then $f$ is $(1,\ln(n))$-smoothable; (iii) If $f$ is a proper, closed, and convex function and
\begin{align} \label{moreau}
f_\eta(x) \triangleq \min_{u} \ \left\{f(u)+{1\over 2\eta}\|u-x\|^2 \right\},
\end{align}
(referred to as Moreau proximal smoothing)~\cite{moreau1965proximite}, {then}
$f$ is $(1,B^2)$-smoothable where
$B$ denotes a uniform bound on $\|s\|$ where $s \in \partial f(x)$. It may be recalled that Newton's method is the de-facto standard for computing a zero
of a nonlinear equation~\cite{nocedal99numerical} while variants such as semismooth Newton methods
have been employed for addressing nonsmooth equations~\cite{facchinei1996inexact, facchinelt1997semismooth}. More
generally, in constrained regimes, such techniques take the form of
interior point schemes which can be viewed as the application of
Newton's method on the KKT system. Quasi-Newton variants of such
techniques can then be applied when second derivatives are either
unavailable or challenging to compute. However, in {constrained} stochastic
regimes, there has been far less available via {a direct application of} {quasi-Newton} schemes. We consider a smoothing approach that leverages the unconstrained
reformulation of a constrained convex program
where $X$ is a closed and convex set and {${\bf 1}_{X}(x)$ is an indicator
function}: \begin{align}
\tag{P} \min_x f(x) + {\bf 1}_X(x).
\end{align}
Then the smoothed problem can be represented as follows:
\begin{align} \tag{P$_{\eta}$}
\min_x f(x) + {\bf 1}_{X,\eta}(x),
\end{align}
where ${\bf 1}_{X,\eta}(\cdot)$ denotes the Moreau-smoothed variant of ${\bf 1}_X(\cdot)$~\cite{moreau1965proximite} defined as follows.
\begin{align}
{\bf 1}_{X,\eta}(x) \triangleq \min_{u \in \mathbb{R}^n} \left\{ {\bf 1}_X(u) + {1\over 2\eta} \|x-u\|^2\right\} = {1\over 2\eta} d_X^2(x),
\end{align}
where the second equality follows from ~\cite[Ex.~6.53]{beck17fom}. Note that ${\bf 1}_{X,\eta}(x)$ is continuously differentiable with gradient given by $\tfrac{1}{2\eta} \nabla_x d_X^2(x) = {1\over \eta} (x - \mbox{prox}_{{\bf 1}_X}(x)) = {1\over \eta} (x - \Pi_X(x))$, where $\Pi_{X}(x)\triangleq \mbox{argmin}_{y\in X}\{\|x-y\|^2\}$. Our interest lies in
reducing the smoothing parameter $\eta$ after every iteration, a class of
techniques {(called {\em iterative smoothing schemes}) that have been applied for solving
} stochastic optimization~\cite{yousefian17smoothing,jalilzadeh2018optimal} and stochastic
variational inequality problems~\cite{yousefian17smoothing}. {Motivated by our recent
work~\cite{jalilzadeh2018optimal} in which a smoothed variable sample-size
accelerated proximal gradient scheme is proposed for nonsmooth stochastic
convex optimization,} we consider a framework where at iteration $k$, an $\eta_k$-smoothed
function $f_{\eta_k}$ is utilized where the Lipschitz
constant of ${\nabla} f_{\eta_k}(x)$ is $1/\eta_k$.
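For intuition, the Moreau envelope \eqref{moreau} of $f(x)=|x|$ has the closed form of the Huber function: $f_\eta(x)=x^2/(2\eta)$ for $|x|\leq\eta$ and $|x|-\eta/2$ otherwise. The short standalone Python check below verifies the smoothability sandwich $f_\eta\leq f\leq f_\eta+\eta B^2$ (here $B=1$) and the $1/\eta$ bound on the slope of $\nabla f_\eta$ on a grid.

```python
import numpy as np

def moreau_abs(x, eta):
    """Moreau envelope of f(x) = |x|: min_u { |u| + (u-x)^2/(2*eta) },
    whose minimizer is the soft-thresholding of x (the Huber function)."""
    return np.where(np.abs(x) <= eta, x**2 / (2 * eta), np.abs(x) - eta / 2)

eta = 0.1
xs = np.linspace(-3.0, 3.0, 601)
f, f_eta = np.abs(xs), moreau_abs(xs, eta)

# Sandwich property (with B = 1): f_eta <= f <= f_eta + eta; in fact the
# gap f(x) - f_eta(x) is at most eta/2 for the Moreau envelope of |x|.
assert np.all(f_eta <= f + 1e-12) and np.all(f - f_eta <= eta)

# grad f_eta is (1/eta)-Lipschitz: numerical slopes of the derivative
# never exceed 1/eta (up to discretization error).
g = np.gradient(f_eta, xs)
assert np.max(np.abs(np.diff(g) / np.diff(xs))) <= 1 / eta + 1e-6
print("Moreau/Huber smoothing checks passed")
```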
\subsection{Regularized and Smoothed L-BFGS Update }\label{sec:rslbfgs}
{When \vvs{the} function $f$ is strongly convex \vvs{but possibly nonsmooth}, we \vvs{adapt} the standard L-BFGS \vvs{scheme} (\vvs{by replacing the true gradient by a sample average}), where the approximation of the inverse Hessian $H_k$ is \vvs{defined} as follows using pairs $(s_i,y_i)$, and $\eta_i$ \vvs{denotes} a smoothing parameter:
\begin{align}\label{lbfgs}
s_i &:= x_{i}-x_{i-1},\\ \notag
\tag{\bf Strongly Convex (SC)} \ {y_i} & := { \sum_{j=1}^{{N_{i-1}}}{\nabla_x} {F}_{{\eta_{i}}}({x_{i}},{\omega}_{{j,i-1}})\over {N_{i-1}}} -{ \sum_{j=1}^{{N_{i-1}}}{\nabla_x} {F}_{{\eta_{{i}}}}({x_{i-1}},{\omega}_{{j,i-1}})\over {N_{i-1}}},\\ \notag
H_{k,j}&:=\left(\mathbf{I}-\frac{y_is_i^T}{{y_i^Ts_i}}\right)^TH_{k,j-1}\left(\mathbf{I}-\frac{y_is_i^T}{y_i^Ts_i}\right)+\frac{s_is_i^T}{y_i^Ts_i},\quad i :=k-2(m-j), \ 1 \leq j\leq m, \ \forall i,
\end{align}
where $H_{k,0}=\frac{s_k^Ty_k}{y_k^Ty_k}\mathbf{I}$.} We note that at iteration $i$, we generate $\nabla_x F_{\eta_i}(x_i,\omega_{j,i-1})$ and $\nabla_x F_{\eta_i}(x_{i-1},\omega_{j,i-1})$, implying that twice as many sampled gradients are generated at each iteration.
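The update \eqref{lbfgs} can be sanity-checked directly: each step of the recursion preserves symmetry and positive definiteness whenever $y_i^Ts_i>0$, and enforces the secant condition $Hy_i=s_i$ for the pair just incorporated. A minimal Python sketch with synthetic curvature pairs (exact gradients of an assumed quadratic, rather than sampled ones):

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """One step of the inverse-Hessian recursion in (SC):
    H <- (I - y s^T/(y^T s))^T H (I - y s^T/(y^T s)) + s s^T/(y^T s)."""
    rho = 1.0 / (y @ s)                    # requires curvature y^T s > 0
    V = np.eye(len(s)) - rho * np.outer(y, s)
    return V.T @ H @ V + rho * np.outer(s, s)

rng = np.random.default_rng(1)
n, m = 5, 3
A = np.diag(np.arange(1.0, n + 1))         # Hessian of an assumed quadratic

# Curvature pairs s_i = x_i - x_{i-1}, y_i = A s_i (noise-free for clarity).
pairs = [(s, A @ s) for s in rng.standard_normal((m, n))]
s_m, y_m = pairs[-1]

H = (s_m @ y_m) / (y_m @ y_m) * np.eye(n)  # scaled initial matrix H_{k,0}
for s, y in pairs:
    H = bfgs_inverse_update(H, s, y)

print(np.allclose(H @ y_m, s_m))           # True: secant condition H y_m = s_m
```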
\vvs{Next,} we discuss how the sequence of approximations $H_k$ is generated
{when} $f$ is merely convex {and} not necessarily smooth. We overlay the
regularized \vvs{L-BFGS}~\cite{mokhtari2014res,yousefian2017stochastic} scheme with a smoothing and refer to the proposed scheme as
the ({\bf rsL-BFGS}) update. As in {({\bf rL-BFGS})}~\cite{yousefian2017stochastic}, {we update the regularization and smoothing parameters $\{\eta_k ,\mu_k\}$ and matrix $H_k$ at alternate {iterations} to keep the secant condition satisfied.} We update the regularization parameter
$\mu_k$ and smoothing parameter $\eta_k$ as follows.
\begin{align}\label{eqn:mu-k}
\begin{cases}
\mu_{k}:=\mu_{k-1}, \quad \us{\eta_k := \eta_{k-1}}, & \text{if } k \text{ is odd}\\
\mu_{k}<\mu_{k-1}, \quad \us{\eta_k < \eta_{k-1}}, & {\text{otherwise}.}
\end{cases}
\end{align}
We construct the update in terms of $s_i$ and $y_i$ {for convex problems},
\begin{align}\label{equ:siyi-LBFGS} s_i &:= x_{i}-x_{i-1},\\\notag
\tag{\bf Convex (C)}\ {y_i} & := { \sum_{j=1}^{{N_{i-1}}}{\nabla_x} {F}_{{\eta_{{i}}^{\delta}}}({x_{i}},{\omega}_{{j,i-1}})\over {N_{i-1}}} -{ \sum_{j=1}^{{N_{i-1}}}{\nabla_x} {F}_{{\eta_{{i}}^\delta}}({x_{i-1}},{\omega}_{{j,i-1}})\over {N_{i-1}}}+{\mu_i^{\bar \delta}}s_i,\end{align}
where $i$ is odd and $0 < \delta,\bar \delta \leq 1$ are scalars controlling the level of smoothing and regularization in updating matrix $H_k$, respectively. The update policy for $H_k$ is given as follows:
\begin{align}\label{eqn:H-k}H_{k}:=
\begin{cases}
H_{k,m}, & \text{if } k \text{ is odd} \\
H_{k-1}, & \text{otherwise}
\end{cases}
\end{align}
where $m<n$ (in large-scale settings, $m\ll n$) is a fixed integer that determines the number of pairs $(s_i,y_i)$ used to estimate $H_k$. The matrix $H_{k,m}$, for any $k\geq 2m-1$, is updated using the following recursive formula:
\begin{align}\label{eqn:H-k-m}
H_{k,j}&:=\left(\mathbf{I}-\frac{y_is_i^T}{{y_i^Ts_i}}\right)^TH_{k,j-1}\left(\mathbf{I}-\frac{y_is_i^T}{y_i^Ts_i}\right)+\frac{s_is_i^T}{y_i^Ts_i},\quad i :=k-2(m-j), \quad 1 \leq j\leq m, \quad \forall i,
\end{align}
{where $H_{k,0}=\frac{s_k^Ty_k}{y_k^Ty_k}\mathbf{I}$. It is important to note that our regularized method inherits the computational efficiency from ({\bf L-BFGS}).
Note that {Assumption \ref{assum:convex2}} holds for our choice of
smoothing.
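The alternating rule in \eqref{eqn:mu-k} and \eqref{eqn:H-k} is easy to mis-implement, so we sketch its bookkeeping below: holding $(\mu_k,\eta_k)$ fixed on odd iterations ensures that the pairs entering \eqref{eqn:H-k-m} are computed under matching parameters. The geometric decay factor is a placeholder; the analysis prescribes the actual rates.

```python
def rs_lbfgs_schedule(K, mu0=1.0, eta0=1.0, decay=0.9):
    """Alternating (rsL-BFGS) bookkeeping: on odd k, freeze (mu_k, eta_k)
    and rebuild H_k from stored pairs; on even k, shrink the parameters
    and reuse H_{k-1}.  Matrix algebra is elided ('rebuild'/'reuse')."""
    mu, eta, log = mu0, eta0, []
    for k in range(1, K + 1):
        if k % 2 == 1:
            action = "rebuild H_k from (s_i, y_i)"   # H_k := H_{k,m}
        else:
            mu, eta = decay * mu, decay * eta        # mu_k < mu_{k-1}
            action = "reuse H_{k-1}"                 # H_k := H_{k-1}
        log.append((k, mu, eta, action))
    return log

for row in rs_lbfgs_schedule(6):
    print(row)
```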
\subsection{Main assumptions}\label{sec:assump}
\vvs{A subset of our results require smoothness of $F(x,\omega)$ as formalized by the next assumption.}
\begin{assumption}\label{assum:convex-smooth}
$($a$)$ The function ${F}(x,\omega)$ is {convex and continuously differentiable} over
$\mathbb R^n$ for any $\omega \in \Omega$.
$($b$)$ The function $f$ is C$^1$ with $L$-Lipschitz
continuous gradients over $\mathbb R^n$.
\end{assumption}
\noindent {In Sections 3.2 (II) and 4.2,} we assume the following on the smoothed functions ${F}_{\eta}(x,\omega)$.
\begin{assumption}\label{assum:convex2}
For any $\omega \in \Omega$, ${F}(x,\omega)$ is $(1, \beta)$-{smoothable}, i.e., $F_{\eta}(x,\omega)$ is $C^1$, convex, and ${1\over \eta}$-smooth.
\end{assumption}
\noindent {We now assume the following on the conditional second moment on the
sampled gradient (in either the smooth or the smoothed regime) produced by the
stochastic first-order oracle.}
\begin{assumption}[{\bf Moment requirements for state-dependent noise}]\label{state noise}
Smooth: Suppose $\bar{w}_{k,N_k} \triangleq \nabla_x f(x_k) -\tfrac{\sum_{j=1}^{N_k}\nabla_x
{F}(x_k,\omega_{j,k})}{N_k}$ and $\mathcal{F}_k \triangleq \sigma\{x_0, x_1, \hdots,
x_{k-1}\}$.
(S-M) There exist
$\nu_1, \nu_2>0$ such that $\mathbb E[\|{\bar{w}}_{k,N_k}\|^2\mid \mathcal F_k]\leq
{\tfrac{\nu_1^2\|x_k\|^2+\nu_2^2}{N_k}}$ a.s. for $k \geq 0$.
(S-B) For $k \geq 0$, $\mathbb E[{\bar{w}}_{k,N_k}\mid \mathcal F_k] = 0$ a.s.
Nonsmooth: Suppose $\bar{w}_{k,N_k} \triangleq {\nabla} f_{\eta_k}(x_k) - {\tfrac{\sum_{j=1}^{N_k}\nabla_x {F}_{\eta_k}(x_k,\omega_{j,k})}{N_k}}$, $\eta_k > 0$, and $\mathcal{F}_k \triangleq \sigma\{x_0, x_1, \hdots, x_{k-1}\}$.
(NS-M) There exist $\nu_1,\nu_2>0$ such that $\mathbb E[\|{\bar{w}}_{k,N_k}\|^2\mid \mathcal F_k]\leq {\tfrac{\nu_1^2\|x_k\|^2+\nu_2^2}{N_k}}$ a.s. for $k \geq 0$.
(NS-B) For $k \geq 0$, $\mathbb E[{\bar{w}}_{k,N_k}\mid \mathcal F_k] = 0$ a.s.
\end{assumption}
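Assumption~\ref{state noise} holds, for example, when the gradient error is an average of $N_k$ i.i.d. terms whose variance grows with $\|x_k\|$. The toy scalar oracle below is an assumed model chosen only to exhibit the $(\nu_1^2\|x\|^2+\nu_2^2)/N_k$ scaling: with $\nabla_x F(x,\omega)=(1+\omega)x+\omega$ and $\omega\sim\mathcal N(0,1)$, the batch-mean error is $\bar w=\bar\omega(x+1)$, so $\mathbb E[\bar w^2]=(x+1)^2/N\leq (2x^2+2)/N$, i.e., (S-M) with $\nu_1^2=\nu_2^2=2$.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_sq_batch_error(x, N, trials=20000):
    """Empirical E[wbar^2] for the assumed oracle grad F(x, w) = (1+w)x + w,
    whose batch-mean error is wbar = mean(w_1, ..., w_N) * (x + 1)."""
    wbar = rng.standard_normal((trials, N)).mean(axis=1) * (x + 1.0)
    return float((wbar**2).mean())

x, N = 3.0, 10
emp = mean_sq_batch_error(x, N)
theory = (x + 1.0) ** 2 / N      # = 1.6, below (2*x^2 + 2)/N = 2.0
print(emp, theory)
```

Increasing $N$ shrinks the mean-squared error at the $1/N_k$ rate invoked throughout the rate proofs.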
Finally, we make the following assumption on the sequence of
Hessian approximations $\{H_k\}$. Note that these properties follow when
either the regularized update ({\bf rL-BFGS}) or the regularized smoothed
update ({\bf rsL-BFGS}) \vvs{is} employed (see Lemmas \ref{H_k sc}, \ref{H_k ns sc}, \ref{rLBFGS-matrix}, and \ref{rsLBFGS-matrix}).
\begin{assumption}[{\bf Properties of $H_k$}]\label{assump:Hk}
\begin{enumerate}
\item[]
\item[{(S)}] The following hold for every matrix in the sequence $\{H_k\}_{k \in\mathbb{Z}_+}$ where $H_k \in \mathbb{R}^{n \times n}$.
(i) \ $H_k$ is $\mathcal{F}_{k}$-measurable; (ii) \ $H_k$ is symmetric and positive definite and there exist $\aj{\ulambda_k},{\olambda_k}>0$ such that
$\aj{\ulambda_k}\mathbf{I} \preceq H_k \preceq {\olambda_k} \mathbf{I}$ {a.s.} for all $k\geq 0.$
\item[{(NS)}] The following hold for every matrix in the sequence $\{H_k\}_{k \in\mathbb{Z}_+}$ where $H_k \in \mathbb{R}^{n \times n}$.
(i) $H_k$ is $\mathcal{F}_{k}$-measurable; (ii) \ $H_k$ is symmetric and positive definite and there exist positive scalars $\ulambda_{k},\olambda_k$ such that
$\ulambda_{k}\mathbf{I} \preceq H_k \preceq {\olambda_k} \mathbf{I}$ {a.s.} for all $k\geq 0.$
\end{enumerate}
\end{assumption}
\section{Smooth and nonsmooth strongly convex problems}\label{sec:3}
In this section, we {derive the} rate and oracle complexity statements for smooth {and nonsmooth} strongly convex problems by
considering the \eqref{VS-SQN} and \eqref{sVS-SQN} schemes, respectively.
\subsection{Smooth strongly convex optimization} We begin by considering
~\eqref{main problem} when $f$ is {$\tau-$}strongly convex and $L-$smooth {and
we define $\kappa \triangleq L/{\tau}$.} {Throughout this subsection, we consider the
({\bf VS-SQN})} scheme, {defined next, where $H_k$ is generated by the ({\bf
L-BFGS}) scheme. }
\begin{align}\tag{\bf VS-SQN}\label{VS-SQN}
x_{k+1}:=x_k-\gamma_kH_k\frac{\sum_{j=1}^{N_k} \nabla_x F(x_k,{\omega}_{j,k})}{N_k}.
\end{align}
{Next, we derive bounds on the eigenvalues of $H_k$ under strong convexity (see \cite{berahas2016multi} for proof)}.
\begin{lemma}[{\bf Properties of {Hessian approx. produced by}
(L-BFGS)}]\label{H_k sc}
Let {the function $f$} be $\tau$-strongly convex. Consider the
\eqref{VS-SQN} method. Let $s_i$, $y_i$ and $H_k$ be given by
\eqref{lbfgs}, \aj{where $F_\eta(.)=F(.)$}.
Then $H_k$ satisfies Assumption \ref{assump:Hk}{(S)}, with $\ulambda={1\over L(m+n)}$ and $\olambda=\left({L(n+m)\over \tau}\right)^{m}$.
\end{lemma}}
\begin{proposition}[{\bf Convergence in mean}]\label{thm:mean:smooth:strong}
Consider the iterates generated by the \eqref{VS-SQN} scheme. Suppose $f$ is {$\tau$-}strongly convex. Suppose Assumptions~\ref{assum:convex-smooth}, \vvs{\ref{state noise} (S-M), \ref{state noise} (S-B)}, and \ref{assump:Hk} (S) hold {and $\{N_k\}$ {is} an increasing sequence}.
Then the following inequality holds
for all $k\geq 1$, {where}
$N_0> {2\nu_1^2\olambda\over \tau^2\ulambda}$ and
$\gamma_k \triangleq {1\over L\olambda}$ {for all $k$}.
\begin{align*}
\mathbb E\left[f(x_{k+1})-f(x^*)\right]&
\leq\left(1-{\tau \ulambda\over L\olambda}\red{+{2\nu_1^2\over L\tau N_0}}\right)\mathbb E\left[f(x_{k})-f(x^*)\right]+\red{ 2 \nu_1^2\|x^*\|^2+\nu_2^2\over 2LN_k}.
\end{align*}
\end{proposition}
\begin{proof} From Lipschitz continuity of $\nabla f(x)$ and update rule \eqref{VS-SQN}, we have the following:
\begin{align*}
f(x_{k+1})&\leq f(x_k)+\nabla f(x_k)^T(x_{k+1}-x_k)+{L\over 2 }\|x_{k+1}-x_k\|^2\\&
=f(x_k)+\nabla f(x_k)^T\left(-\gamma_kH_k(\nabla f(x_k)+\bar w_{k,N_k})\right)+{L\over 2 }\gamma_k^2\left\|H_k(\nabla f(x_k)+\bar w_{k,N_k})\right\|\uvs{^2},
\end{align*}
{where $\bar{w}_{k,N_k} \triangleq \frac{\sum_{j=1}^{N_k} \left({\nabla_{x}}
{F}(x_k,\omega_{j,k})-\nabla f(x_k)\right)}{N_k}$}. By taking expectations \uvs{conditioned on} $\mathcal F_k$, using Lemma {\ref{H_k sc}}, and Assumption \ref{state noise} (S-M) and (S-B), we obtain the following.
\begin{align*}
& \quad \mathbb E\left[f(x_{k+1})-f(x_k)\mid \mathcal F_k\right]\leq -\gamma_k \nabla f(x_k)^TH_k\nabla f(x_k)+{L\over 2 }\gamma_k^2\|H_k\nabla f(x_k)\|^2+{\gamma_k^2\olambda^2L\over 2 }\mathbb E[\|\bar w_{k,N_k}\|^2\mid \mathcal F_k]\\&
={\gamma_k}\nabla f(x_k)^TH_k^{1/2}\left(-I+{L\over 2 }\gamma_k\uvs{H_k}\right)H_k^{1/2}\nabla f(x_k)+{\gamma_k^2\olambda^2L(\nu_1^2\|x_k\|^2+\nu_2^2)\over 2 N_k}\\&
\leq -\gamma_k \left(1-{L\over 2 }\gamma_k\olambda\right)\|H_k^{1/2}\nabla f(x_k)\|^2+{\gamma_k^2\olambda^2L(\nu_1^2\|x_k\|^2+\nu_2^2)\over 2 N_k}
= {-\gamma_k\over 2}\|H_k^{1/2}\nabla f(x_k)\|^2+{ \nu_1^2\|x_k\|^2+\nu_2^2\over 2LN_k},
\end{align*}
where {the last inequality follows from} $\gamma_k= \tfrac{1}{L\olambda}$ for all $k$. Since $f$ is strongly convex with modulus $\tau$, $\|\nabla f(x_k)\|^2\geq 2\tau \left(f(x_k)-f(x^*)\right)$. \uvs{Therefore by subtracting $f(x^*)$} from both sides, we obtain:
\begin{align}\label{strong:smooth}
\mathbb E\left[f(x_{k+1})-f(x^*)\mid \mathcal F_k\right]\nonumber&\leq f(x_{k})-f(x^*)-{\gamma_k\ulambda\over 2}\|\nabla f(x_k)\|^2+{ \nu_1^2\|x_k-x^*+x^*\|^2+\nu_2^2\over 2LN_k}\\&
\leq\left(1-\tau\gamma_k\ulambda+{2\nu_1^2\over L\tau N_k}\right) (f(x_{k})-f(x^*))+{ 2\nu_1^2\|x^*\|^2+\nu_2^2\over 2LN_k},
\end{align}
where the last inequality \uvs{is a consequence of} $f(x_k)\geq f(x^*)+{\tau\over 2}\|x_k-x^*\|^2$. Taking unconditional expectations on both sides of \eqref{strong:smooth}, choosing $\gamma_k={ 1\over L\olambda}$ for all $k$ and \uvs{invoking} the assumption that $\{N_k\}$ is an increasing sequence, \uvs{we obtain the following.}
\begin{align*}
\mathbb E\left[f(x_{k+1})-f(x^*)\right]&
\leq\left(1-{\tau \ulambda\over L\olambda}+{2\nu_1^2\over L\tau N_0}\right)\mathbb E\left[f(x_{k})-f(x^*)\right]+{ 2\nu_1^2\|x^*\|^2+\nu_2^2\over 2LN_k}.
\end{align*}
\end{proof}
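The recursion in Proposition~\ref{thm:mean:smooth:strong} has the form $e_{k+1}\leq a\,e_k+b_0\rho^k$ (since $b_k\propto 1/N_k=\rho^k/N_0$), which telescopes to the geometric bound $e_{K+1}\leq C(\max\{a,\rho\})^K$ exploited in the next theorem. A numerical sketch with placeholder constants (not problem-derived):

```python
# Iterate e_{k+1} = a*e_k + b0*rho^k and compare against the proof's bound
# e_{K+1} <= C * max(a, rho)^K with C = e0 + b0 / (1 - min(a,rho)/max(a,rho)).
a, rho, b0, e0 = 0.8, 0.6, 1.0, 1.0        # placeholder constants, a, rho < 1
K = 60
errs = [e0]
for k in range(K):
    errs.append(a * errs[-1] + b0 * rho**k)

r = max(a, rho)
C = e0 + b0 / (1.0 - min(a, rho) / r)
assert all(errs[k + 1] <= C * r**k + 1e-12 for k in range(K))
print(errs[K] / r**(K - 1))                 # stays below C = 5.0
```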
\uvs{We now leverage this result in deriving a rate and oracle complexity statement.}
\begin{theorem}[{\bf Optimal rate and oracle complexity}]
Consider the iterates generated by the \eqref{VS-SQN} scheme. Suppose $f$ is $\tau$-strongly convex and
Assumptions~~\ref{assum:convex-smooth}, \vvs{\ref{state noise} (S-M), \ref{state noise} (S-B)}, and
\ref{assump:Hk} (S) hold. In addition, suppose $\gamma_k={1\over
L\olambda}$ for all $k$.
(i) Let $a\triangleq \left(1-{\tau \ulambda\over L\olambda}+{2\nu_1^2\over L\tau N_0}\right)$ and $N_k \triangleq \lceil N_0\rho^{-k}\rceil$, where $\rho<1$ and $N_0 \geq {2\nu_1^2\olambda\over \tau^2\ulambda}$. Then for {every $K \geq 1$} and some scalar $C$, the following holds:
$\mathbb E\left[f(x_{K+1})-f(x^*)\right]\leq C(\max\{a,\rho\})^{{K}}.$
\blue{(ii) Suppose $x_{K+1}$ is an $\epsilon$-solution such that $\mathbb{E}[f(x_{{K+1}})-f^*]\leq \epsilon$.
Then the iteration and oracle complexity of \eqref{VS-SQN}
are $\mathcal{O}({\kappa^{m+1}} \ln (1/\epsilon))$
and
$\mathcal{O}({\kappa^{m+1} \over\epsilon})$, respectively implying that $\sum_{k=1}^K N_k \leq \mathcal O\left({ \kappa^{m+1}\over \epsilon}\right).$ }
\end{theorem}
\begin{proof}
{\bf (i)} Let $a \triangleq \left(1-{\tau \ulambda\over L\olambda}+{2\nu_1^2\over L\tau N_0}\right)$, $b_k \triangleq { 2 \nu_1^2\|x^*\|^2+\nu_2^2\over 2LN_k}$, and $N_k \triangleq \lceil N_0\rho^{-k}\rceil\geq N_0\rho^{-k}$. Note that choosing $N_0\geq {2\nu_1^2\olambda\over \tau^2\ulambda}$ leads to $a<1$. Consider $C \triangleq \uvs{ \mathbb{E}[f(x_0)-f(x^*)]}+\left({2 \nu_1^2\|x^*\|^2+\nu_2^2\over 2N_0L}\right){1\over 1-( \min\{a,\rho\}/\max\{a,\rho\})}$. {Then} {by Prop.~\ref{thm:mean:smooth:strong},} we obtain {the following for every $K \geq 1$.}
\begin{align*}
\mathbb E&\left[f(x_{K+1})-f(x^*)\right]
\leq a^{K+1}{\mathbb{E}\left[f(x_0)-f(x^*)\right]}+\sum_{i=0}^{K}a^{K-i}b_{i}\\
&\leq a^{K+1}{\mathbb{E}\left[f(x_0)-f(x^*)\right]}+{(\max\{a,\rho\})^K(2 \nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_0L}\sum_{i=0}^{K}\left({\min\{a,\rho\}\over \max\{a,\rho\}}\right)^{K-i}\\
&\leq a^{K+1}{\mathbb{E}\left[f(x_0)-f(x^*)\right]}+\left({(2 \nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_0L}\right){{(\max\{a,\rho\})^K}\over 1-( \min\{a,\rho\}/\max\{a,\rho\})}\leq C(\max\{a,\rho\})^{K}.
\end{align*}
Furthermore, {we may derive the number of steps $K$ to obtain an $\epsilon$-solution. Without loss of generality, suppose $\max\{a,\rho\}=a$. Choose $N_0 { \ \geq \ } {4\nu_1^2{\olambda}\over \tau^2{\ulambda}}$, so that $a\leq\left(1-\left({\tau \ulambda\over 2L\olambda}\right)\right)=1-{1\over \alpha\kappa}$, where $\alpha={2\olambda\over \ulambda}$}. Therefore, since $\frac{1}{a} = \frac{1}{(1-\frac{1}{\alpha {\kappa}})}$, by using the definition of $\ulambda$ and $\olambda$ in Lemma \ref{H_k sc} to get $\alpha= {2\olambda\over \ulambda}=\mathcal O(\kappa^m)$, we obtain that
\begin{align}
\left(\frac{ \ln(C) - \ln(\epsilon)} {\ln(1/a)}\right) =
\left(\frac{\ln (C/\epsilon)}{\ln(1/(1-{1\over \alpha \kappa}))}\right)
= \left(\frac{\ln (C/\epsilon)}{-\ln((1-{1\over \alpha \kappa}))}\right) \leq \left(\frac{\ln (C/\epsilon)}{{1\over \alpha \kappa}}\right)
\notag = \mathcal{O} ({\kappa^{m+1}} \ln(\tilde {C}/\epsilon)),
\end{align}
{where the {bound} holds when $\alpha \kappa > 1$. It follows that the iteration complexity of computing an $\epsilon$-solution is $\mathcal{O}(\kappa^{m+1} \ln(\tfrac{C}{\epsilon}))$.}
\blue{{\bf(ii)} To compute a vector $x_{K+1}$ satisfying $\mathbb{E}[f(x_{{K+1}})-f^*]\leq \epsilon$, we {consider the case where $a > \rho$ while the other case follows similarly.} Then we have that $C{a}^{K}\leq \epsilon$, implying that
$K = \lceil \ln_{(1/ {a})}(C/\epsilon)\rceil.$
To obtain the optimal oracle complexity, we require $\sum_{k=1}^{K} N_k$ gradients. If $N_k=\lceil N_0a^{-k}\rceil\leq 2N_0a^{-k}$, we obtain the following since $(1-a) = 1 \slash (\alpha {\kappa})$.
\begin{align*}
& \quad \sum_{k=1}^{\ln_{(1/{a})}\left(C/\epsilon\right)+1} 2N_0a^{-k}
\leq \frac{2N_0}{\left(\frac{1}{{a}} -1\right)}\left({1\over a}\right)^{3+\ln_{(1/ {a})}\left(C/\epsilon\right)} \leq \left( C \over \epsilon\right)\frac{2N_0}{a^2(1-{a})}
= \frac{ 2N_0\alpha {\kappa} C}{a^2\epsilon}.
\end{align*}
Note that $a=1-{1\over \alpha\kappa}$ {and $\alpha=\mathcal O(\kappa^m)$}, implying that
{\begin{align*}
a^2 & = 1-2/(\alpha \kappa)+1/(\alpha^2\kappa^2)\geq {\alpha^2\kappa^2-2\alpha\kappa^2+1\over \alpha^2\kappa^2}\geq{ \alpha^2\kappa^2-2\alpha\kappa^2\over \alpha^2\kappa^2}={(\alpha^2-2\alpha)\over \alpha^2}\\
\implies & {\kappa\over a^2}\leq {\alpha^2 \kappa\over (\alpha^2-2\alpha)}=\left(\alpha\over \alpha-2\right)\kappa
\implies \sum_{k=1}^{\ln_{(1/{a})}\left(C/\epsilon\right)+1} a^{-k} \leq {2N_0\alpha^2\kappa C\over (\alpha-2)\epsilon}=\mathcal O\left({{ \kappa^{m+1}}\over \epsilon}\right).
\end{align*}}}
\end{proof}
We prove a.s. convergence of iterates by using the super-martingale convergence lemma from~\cite{polyak1987introduction}.
\begin{lemma}[{\bf super-martingale convergence}]\label{almost sure}
Let $\{v_k\}$ be a sequence of nonnegative random variables, where $\mathbb
E{[v_0]}<\infty$ and let $\{{\chi_k}\}$ and $\{\beta_k\}$ be deterministic scalar
sequences such that $0\leq {\chi_k} \leq 1$ and $\beta_k\geq 0$ for all $k\geq
0$, $\sum_{k=0}^\infty{\chi_k}=\infty$, $\sum_{k=0}^\infty \beta_k<\infty $, and
$\lim_{k\rightarrow \infty}{\beta_k\over {\chi_k}}=0$, and $\mathbb
E{[v_{k+1}\mid \mathcal F_k]\leq (1-{\chi_k})v_k+\beta_k}$ a.s. for all $k\geq
0$. Then, $v_k\rightarrow 0$ almost surely as $k\rightarrow \infty$.
\end{lemma}
\begin{theorem}[{\bf a.s. convergence under strong convexity}]
Consider the iterates generated by the \eqref{VS-SQN} scheme. Suppose $f$ is $\tau$-strongly convex. Suppose Assumptions~\ref{assum:convex-smooth}, \vvs{\ref{state noise} (S-M), \ref{state noise} (S-B)}, and
\ref{assump:Hk} (S) hold. In addition, suppose $\gamma_k={1\over L\olambda}$ for all $k \geq 0$. Let $\{N_k\}_{k\geq 0}$ be an increasing sequence such that $\sum_{k=0}^\infty {1\over N_k}<\infty$ and $N_0>
{2\nu_1^2\olambda\over \tau^2\ulambda}$. Then $\lim_{k\rightarrow
\infty}f(x_k)=f(x^*)$ almost surely.
\end{theorem}
\begin{proof} Recall that in \eqref{strong:smooth}, we derived {the following for $k \geq 0$.}
\begin{align*}
\mathbb E\left[f(x_{k+1})-f(x^*)\mid \mathcal F_k\right]\nonumber&\leq\left(1-\tau\gamma_k\ulambda+{2\nu_1^2\over L\tau N_k}\right) (f(x_{k})-f(x^*))+{ 2\nu_1^2\|x^*\|^2+\nu_2^2\over 2LN_k}.
\end{align*}
If $v_k \triangleq f(x_k)-f(x^*)$, $\chi_k \triangleq
\tau\gamma_k\ulambda-{2\nu_1^2\over L\tau N_k}$, $\beta_k \triangleq {
2\nu_1^2\|x^*\|^2+\nu_2^2\over 2LN_k}$, $\gamma_k={1\over L{\olambda}}$, and
$\{N_k\}_{k\geq 0}$ be an increasing sequence such that $\sum_{k=0}^\infty
{1\over N_k}<\infty$ where $N_0> {2\nu_1^2\olambda\over \tau^2\ulambda}$,
(e.g. $N_k\geq {\lceil N_0k^{1+\epsilon}\rceil}$) the requirements of
Lemma~\ref{almost sure} are {seen to be} satisfied. {Hence},
$f(x_k)-f(x^*)\rightarrow 0$ {a.s.} as $k\rightarrow \infty$ and by strong
convexity of $f$, it follows that $\|x_k-x^*\|^2\to 0$ a.s.
\end{proof}
Having studied the variable sample-size SQN method, we now consider the special case where $N_k=1$. Proceeding as in Proposition~\ref{thm:mean:smooth:strong}, we obtain the following inequality for $N_k=1$:
\begin{align}\label{bound sqn}
&\nonumber \ \mathbb E\left[f(x_{k+1})-f(x^*)\right]\leq f(x_k)-f(x^*)-\gamma_k \left(1-{L\over 2 }\gamma_k\olambda\right)\|H_k^{1/2}\nabla f(x_k)\|^2+{\gamma_k^2\olambda^2L(\nu_1^2\|x_k\|^2+\nu_2^2)\over 2}\\&\nonumber\leq \left(1-2\gamma_k{L^2\over \tau}\olambda(1-{L\over 2}\gamma_k\olambda)\right)\left(f(x_k)-f(x^*)\right)+{\gamma_k^2\olambda^2L(\nu_1^2\|x_k-x^*+x^*\|^2+\nu_2^2)\over 2}\\&
\leq \left(1-2\gamma_k\olambda{L^2\over \tau}+\gamma_k^2\olambda^2{L^3\over\tau}+{2\nu_1^2\gamma_k^2\olambda^2L\over \tau}\right)\left(f(x_k)-f(x^*)\right)+{\gamma_k^2\olambda^2L(2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2},
\end{align}
where the second inequality is obtained by using Lipschitz continuity of $\nabla f(x)$ and the strong convexity of $f(x)$. Next, to obtain the convergence rate of SQN, we use the following lemma~\cite{xie2016si}.
\begin{lemma}\label{induction}
Suppose $e_{k+1}\leq (1-2a\gamma_k+\gamma_k^2b)e_k+\gamma_k^2c$ for all $k\geq 1$. Let $\gamma_k=\gamma/k$, $\gamma>1/(2a)$, $K\triangleq\lceil {\gamma^2b\over 2a\gamma-1} \rceil+1$ and $Q(\gamma,K)\triangleq \max \left\{{\gamma^2c\over 2a\gamma-\gamma^2b/K-1},Ke_K\right\}$. Then $\forall k\geq K$, $e_k\leq {Q(\gamma,K)\over k}$.
\end{lemma}
Now from inequality \eqref{bound sqn} and Lemma \ref{induction}, the following proposition follows.
\begin{proposition}[{\bf Rate of convergence of SQN with $N_k = 1$}]
Suppose Assumptions\aj{ ~\ref{assum:convex-smooth}}, \vvs{\ref{state noise} (S-M), \ref{state noise} (S-B)} and \ref{assump:Hk} (S) hold. Let $a={L^2\olambda\over \tau}$, $b={\olambda^2L^3+2\nu_1^2\olambda^2L\over \tau}$ and $c={\olambda^2L(2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2}$.
Then, with $\gamma_k={\gamma\over k}$, $\gamma>{1\over L \olambda}$, and $N_k=1$, the following holds:
$\mathbb E\left[f(x_{k+1})-f(x^*)\right]\leq {Q(\gamma,K)\over k}$,
where $Q(\gamma,K)\triangleq \max \left\{{\gamma^2c\over 2a\gamma-\gamma^2b/K-1},K(f(x_K)-f(x^*))\right\}$ and $K\triangleq\lceil {\gamma^2b\over 2a\gamma-1} \rceil+1$.
\end{proposition}
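Lemma~\ref{induction} can also be verified numerically. With the placeholder constants $a=b=c=1$ and $\gamma=1>1/(2a)$, the recursion $e_{k+1}=(1-2a\gamma_k+\gamma_k^2b)e_k+\gamma_k^2c$ with $\gamma_k=\gamma/k$ collapses to $e_{k+1}=(1-1/k)^2e_k+1/k^2$, and $k\,e_k$ indeed stays below $Q(\gamma,K)$:

```python
import math

a, b, c, gamma = 1.0, 1.0, 1.0, 1.0     # placeholders with gamma > 1/(2a)
K = math.ceil(gamma**2 * b / (2 * a * gamma - 1)) + 1   # K = 2 here
errs = {1: 1.0}
for k in range(1, 5000):
    g = gamma / k
    errs[k + 1] = (1 - 2 * a * g + g**2 * b) * errs[k] + g**2 * c

# Q(gamma, K) from the lemma, using e_K from the run above.
Q = max(gamma**2 * c / (2 * a * gamma - gamma**2 * b / K - 1), K * errs[K])
assert all(errs[k] <= Q / k + 1e-9 for k in range(K, 5001))
print(Q, 5000 * errs[5000])             # Q = 2.0; k*e_k approaches 1 < Q
```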
\blue{\begin{remark}
It is worth emphasizing that the
proof techniques, while aligned with avenues adopted in~\cite{byrd12,FriedlanderSchmidt2012,bollapragada2018progressive}, {extend results}
in~\cite{bollapragada2018progressive} to the regime of state-dependent
noise~\cite{FriedlanderSchmidt2012} while the oracle complexity statements are
classical (cf.~\cite{byrd12}). We also observe that in the analysis of deterministic/stochastic first-order methods,
any non-asymptotic rate statements rely on utilizing problem parameters (e.g. the strong convexity modulus, Lipschitz constants, etc.). In the context of
quasi-Newton methods, obtaining non-asymptotic bounds also requires $\ulambda$ and $\olambda$ (cf.~\cite[Theorem 3.1]{bollapragada2018progressive},
\cite[Theorem 3.4]{berahas2016multi}, and \cite[Lemma~2.2]{wang2017stochastic})
{since the impact of $H_k$ needs to be addressed.} One {avenue for weakening the dependence on such parameters lies}
in using line search schemes. However when the problem is
expectation-valued, the steplength arising from a line search leads to a
{dependence between the} steplength (which is now random) and the direction.
Consequently, standard analysis fails and one has to appeal to tools such as
empirical process theory (cf.~\cite{iusem2017variance}). {This remains the focus of future work.}
\end{remark}}
\subsection{Nonsmooth strongly convex optimization}
Consider \eqref{main problem} where $f(x)$ is a strongly convex but
nonsmooth function. In this subsection, we examine two avenues for solving
this problem, of which the first utilizes Moreau smoothing with a fixed
smoothing parameter while the second requires $(\alpha, \beta)$ smoothability
with a diminishing smoothing parameter.\\
\noindent {\bf (I) Moreau smoothing with fixed $\eta$.}
\blue{In this subsection, we focus on the special case where $f(x) \triangleq h(x) + g(x)$, $h$ is a closed, convex, and proper function, $g(x) \triangleq \mathbb{E}[F(x,{\omega})]$, and $F(x,\omega)$ is a $\tau-$strongly convex $L-$smooth function for every $\omega$}. {We begin by noting that the Moreau envelope of $f$, denoted by $f_{\eta}(x)$ and defined as \eqref{moreau}, retains both the minimizers of $f$ as well as its strong convexity as captured
by the following result based on \cite[Lemma~2.19]{planiden2016strongly}.
\begin{lemma}\label{feta}
Consider a convex, closed, and proper function $f$ and its Moreau envelope $f_{\eta}(x)$. Then the following hold:
(i) $x^*$ is a minimizer of $f$ over $\mathbb{R}^n$ if and only if $x^*$ is a minimizer of $f_{\eta}(x)$; (ii) $f$ is $\sigma$-strongly convex on $\mathbb{R}^n$ if and only if $f_{\eta}$ is $\tfrac{\sigma}{\eta\sigma+1}$-strongly convex on $\mathbb{R}^n$.
\end{lemma}
Consequently, it suffices to minimize the (smooth) Moreau envelope with a
{\em fixed} smoothing parameter $\eta$, as shown in the next result. For
notational simplicity, we choose $m=1$ but the rate results hold for $m>1$ and define $f_{N_k}(x) \triangleq h(x) + \tfrac{1}{N_k}\sum_{j=1}^{N_k}
F(x,\aj{\omega_{j,k}})$.
{Throughout this subsection, we consider the smoothed variant of
\eqref{VS-SQN}, referred to as the \eqref{sVS-SQN} scheme, defined next, where
$H_k$ is generated by the ({\bf sL-BFGS})} update rule, $\nabla_x
f_{\eta_k}(x_k)$ denotes the gradient of the Moreau-smoothed function,
{given by $\tfrac{1}{\eta_k}(x_k- \mbox{prox}_{\eta_k,f}(x_k))$, while
$\nabla_{x} f_{\eta_k,N_k}(x_k) $, the gradient of the Moreau-smoothed and
sample-average function $f_{N_k}(x)$, is defined as $\tfrac{1}{\eta_k}(x_k-
\mbox{prox}_{\eta_k,f_{N_k}}(x_k))$ {and} ${\bar w_k} \triangleq \nabla_x
{f_{\eta_k,N_k}(x_k)-\nabla_x
{f_{\eta_k}(x_k)}}$. Consequently the update rule for $x_k$ becomes the following}.
\begin{align}\tag{\bf sVS-SQN}\label{sVS-SQN}
x_{k+1}:=x_k-\gamma_kH_k{\left(\nabla_{x} f_{\eta_k}(x_k)+{\bar w_k}\right)}.
\end{align}
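To make the Moreau-gradient computation in \eqref{sVS-SQN} concrete, the following sketch (our own illustration: scalar $f(x)=|x|$, $H_k=I$, $\bar w_k = 0$, and arbitrary constants $\eta$, $\gamma$; none of this is the paper's experimental setup) evaluates $\nabla_x f_\eta(x) = \tfrac{1}{\eta}(x-\mbox{prox}_{\eta,f}(x))$ through the closed-form soft-thresholding prox and takes one step of the update.

```python
import math

def prox_abs(x, eta):
    """Proximal operator of f(x) = |x|: soft-thresholding."""
    return math.copysign(max(abs(x) - eta, 0.0), x)

def moreau_grad(x, eta):
    """Gradient of the Moreau envelope: (x - prox_{eta,f}(x)) / eta."""
    return (x - prox_abs(x, eta)) / eta

# One sVS-SQN-style step with H_k = I and zero sampling error (assumed setup).
eta, gamma = 0.5, 0.25
x = 2.0
x_next = x - gamma * moreau_grad(x, eta)

# Sanity check: the Moreau envelope of |x| is the Huber function, whose
# gradient is x/eta on [-eta, eta] and sign(x) outside.
assert moreau_grad(0.2, eta) == 0.2 / eta
assert moreau_grad(2.0, eta) == 1.0
```

For a general $f$ the prox has no closed form and must itself be computed approximately, which is precisely the cost concern raised later in this section.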
At each iteration of \eqref{sVS-SQN}, {the error in the gradient is captured by ${\bar{w}}_k$.}
We show that ${\bar{w}}_k$ satisfies Assumption \ref{state noise} (NS) by utilizing the following assumption on the gradient of $g$.
\begin{assumption}\label{Lip_moreau}
Suppose there exists $\nu>0$ such that for all $k\geq 1$, $\mathbb{E}[\|{\bar u}_{{k}}\|^2\mid \mathcal F_k] \leq {\tfrac{\nu^2}{N_k}}$ holds almost surely, where {${\bar{u}_k}=\nabla_{x} g(x_{k})-{\tfrac{\aj{\sum_{j=1}^{N_k}}\nabla_x F(x_k,\omega_{j,k})}{N_k}}$}.
\end{assumption}
\begin{lemma}\label{bound sub}
Suppose {$F(x,\omega)$ is $\tau$-strongly convex in $x$ for almost every $\omega$.} Let $f_{\eta}$ {denote the Moreau smoothed approximation of $f$}. Suppose Assumption \ref{Lip_moreau} holds {and $\eta<2/L$}. Then, $\mathbb E[\|\bar w_k\|^2\mid \mathcal F_k]\leq {\nu_1^2\over {N_k}}$ for all $k\geq 0$, where {$\nu_1 \triangleq \nu/(\eta\tau)$}.
\end{lemma}
\begin{proof}
{We begin by noting that $f_{N_k}(x)$ is $\tau$-strongly convex.}
Consider the two problems:
\begin{align}
\mbox{prox}_{\eta, f}(x_k) & \triangleq \mbox{arg} \min_{u} \left[ f(u) + {1\over 2\eta} \|x_k-u\|^2\right], \label{prox1} \\
\mbox{prox}_{\eta, f_{N_k}}(x_k) & \triangleq \mbox{arg} \min_{u} \left[ f_{N_k}(u) + {1\over 2\eta} \|x_k-u\|^2\right]. \label{prox2}
\end{align}
{Suppose $x^*_{{k}}$ and $x^*_{N_k}$ denote
the unique optimal} solutions of \eqref{prox1} and \eqref{prox2}, respectively. From the definition of Moreau smoothing, {it follows that}
{\begin{align*}
\bar{w}_k & = \nabla_x f_{\eta,N_k}(x_k) - \nabla_x f_{\eta}(x_k) = {1\over \eta} (x_k - \mbox{prox}_{\eta, f_{N_k}}(x_k)) -
{1\over \eta} (x_k - \mbox{prox}_{\eta, f}(x_k)) \\
& = {1\over \eta}\left(\mbox{prox}_{\eta, f}(x_k) - \mbox{prox}_{\eta, f_{N_k}}(x_k)\right) = {1\over \eta} (x_{k}^* - x_{N_k}^*),\end{align*}}
{which implies} $\mathbb E[\|\bar w_k\|^2\mid \mathcal F_k]={1\over \eta^2}\mathbb E[\|x^*_{{k}}-x^*_{N_k}\|^2\mid \mathcal F_k]$. The following inequalities {are a consequence of invoking} strong convexity and {the} optimality conditions of \eqref{prox1} and \eqref{prox2}:
\begin{align*}
f(x^*_{N_k}) + {1\over 2\eta} \|x^*_{N_k} - x_k\|^2 &\geq f(x^*_{{k}}) + {1\over 2\eta} \|x^*_{{k}} - x_k\|^2 + {1\over 2} \left(\tau + {1\over \eta}\right) \|x^*_{{k}}-x^*_{N_k}\|^2, \\
f_{N_k}(x^*_{{k}}) + {1\over 2\eta} \|x^*_{{k}} - x_k\|^2 &\geq f_{N_k}(x^*_{N_k})+ {1\over 2\eta} \|x^*_{N_k} - x_k\|^2+ {1\over 2} \left(\tau + {1\over \eta}\right) \|x^*_{N_k}-x^*_{{k}}\|^2.
\end{align*}
Adding the above inequalities, we have that
\begin{align*}
f(x^*_{N_k}) -f_{N_k}(x^*_{N_k}) + f_{N_k}(x_{{k}}^*)-f(x_{{k}}^*) &\geq \left(\tau+{1\over \eta}\right)\|x^*_{N_k}-x_{{k}}^*\| ^2.
\end{align*}
From the definition of $f_{N_k}(x_k)$ and $\beta \triangleq \tau+\tfrac{1}{\eta}$,
and {by the} convexity and $L-$smoothness of $F(x,\omega)$ in $x$ for a.e. $\omega$, {we may prove} the following.
\begin{align*}
&\beta \|x^*_{{k}}-x^*_{N_k}\|^2 \leq {f(x^*_{N_k}) -f_{N_k}(x^*_{N_k}) + f_{N_k}(x_{{k}}^*)-f(x_{{k}}^*)}
\\
& = \frac{\sum_{j=1}^{N_k} (g(x^*_{N_k})-F(x^*_{N_k},\aj{\omega_{j,k}}))}{N_k} + \frac{\sum_{j=1}^{N_k} (F(x^*_{{k}},\aj{\omega_{j,k}})-g(x^*_{{k}}))}{N_k} \\
& \leq \frac{\sum_{j=1}^{N_k} \left(g(x^*_k) +\nabla_{x} g(x^*_{k})^T(x^*_{N_k}-x^*_k)+ \tfrac{L}{2}\|x^*_k-x^*_{N_k}\|^2 - F(x^*_k,\aj{\omega_{j,k}}) - \nabla_x F(x^*_{k},\aj{\omega_{j,k}})^T(x^*_{N_k}-x^*_k)\right)}{N_k}\\
& + \frac{\sum_{j=1}^{N_k} (F(x^*_{{k}},\aj{\omega_{j,k}})-g(x^*_{{k}}))}{N_k}
= \frac{\sum_{j=1}^{N_k} (\nabla_{x} g(x^*_{k})-\nabla_x F(x^*_k,\aj{\omega_{j,k}}))^T(x^*_{N_k}-x^*_k)}{N_k}+ \tfrac{L}{2}\|x^*_k-x^*_{N_k}\|^2\\
& = {\bar{u}_k^T}(x^*_{N_k}-x^*_k)+ \tfrac{L}{2}\|x^*_{{k}}-x^*_{N_k}\|^2.
\end{align*}
Consequently, by taking conditional expectations {and using Assumption \ref{Lip_moreau}}, we have the following.
\begin{align*}
\mathbb{E}[\beta \|x^*_{{k}}-x^*_{N_k}\|^2 \mid \mathcal{F}_k] & \leq
\mathbb E[{\bar{u}_k^T}(x^*_{N_k}-x^*_k)\mid \mathcal F_k]+ \tfrac{L}{2}\mathbb E[\|x^*_{{k}}-x^*_{N_k}\|^2\mid \mathcal F_k]\\
& \leq \tfrac{1}{2\tau} \mathbb{E}[\|{\bar{u}_k}\|^2 \mid \mathcal{F}_k] + \tfrac{\tau+L}{2}\mathbb{E}[\|x^*_k-x^*_{N_k}\|^2 \mid \mathcal{F}_k]\\
\implies \mathbb{E}[ \|x^*_{{k}}-x^*_{N_k}\|^2 \mid \mathcal{F}_k] & \leq \tfrac{1}{{\tau^2}} \mathbb{E}[\|{\bar u_k}\|^2 \mid \mathcal{F}_k] \leq \tfrac{1}{{\tau^2}} \tfrac{\nu^2}{N_k}, \mbox{ if $\eta < 2/L$.}
\end{align*}
{We may then conclude that $\mathbb E[\|\bar w_{k}\|^2\mid \mathcal F_k]\leq {\nu^2\over \eta^2\tau^2N_k}$.}
\end{proof}
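The mechanism behind Lemma~\ref{bound sub}, namely that the gap between the two proximal solutions of \eqref{prox1}-\eqref{prox2} controls $\bar w_k$, can be replayed on scalar quadratics, where both prox problems admit closed forms. The constants and the Gaussian sample below are illustrative assumptions of ours.

```python
import random

random.seed(0)
tau, eta, x = 2.0, 0.3, 1.0   # assumed constants; here L = tau, so eta < 2/L holds

# f(u) = tau/2 (u - m)^2 and f_N(u) = tau/2 (u - m_N)^2, with m_N a sample mean.
m = 0.0
m_N = sum(random.gauss(0.0, 1.0) for _ in range(16)) / 16

def prox_quad(x, center):
    # argmin_u  tau/2 (u - center)^2 + 1/(2*eta) (x - u)^2   (closed form)
    return (eta * tau * center + x) / (1 + eta * tau)

x_star, x_star_N = prox_quad(x, m), prox_quad(x, m_N)
w_bar = (x_star - x_star_N) / eta                   # smoothed-gradient error
u_bar = tau * (x_star - m) - tau * (x_star - m_N)   # sampled-gradient error

# Lemma bound (pathwise in this toy case): |w_bar| <= |u_bar| / (eta * tau).
assert abs(w_bar) <= abs(u_bar) / (eta * tau) + 1e-12
```

In this quadratic case one can check by hand that $|\bar w_k| = |\bar u_k|/(1+\eta\tau)$, which is strictly sharper than the lemma's $|\bar u_k|/(\eta\tau)$ bound.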
{Next, we derive bounds on the eigenvalues of $H_k$ under strong convexity (similar to Lemma~\ref{H_k sc})}.
\begin{lemma}[{\bf Properties of {Hessian approx. produced by}
(L-BFGS) and (sL-BFGS)}]\label{H_k ns sc}
Let {the function $f$} be $\tau$-strongly convex. Consider the
\eqref{sVS-SQN} method. Let $s_i$, $y_i$ and $H_k$ be given by \eqref{lbfgs}.
Then $H_k$ satisfies Assumption \ref{assump:Hk}{(NS)}, with ${\ulambda_k}={\eta_k\over (m+n)}$ and ${\olambda_k}=\left({n+m\over \eta_k\tau}\right)^{m}$.
\end{lemma}
We now show that under Moreau smoothing, {a} linear rate of convergence is retained.
\begin{theorem}\label{moreau_strong}
Consider the iterates generated by the \eqref{sVS-SQN} scheme {where $\eta_k = \eta$ for all $k$.} {Suppose $f(x)=h(x) + g(x)$, where $h$ is a closed, convex, and proper function, $g(x) \triangleq \mathbb{E}[F(x,\omega)]$, and $F(x,\omega)$ is a $\tau-$strongly convex $L-$smooth function.} {Suppose} Assumptions \ref{assump:Hk} (NS) and \ref{Lip_moreau} hold. Furthermore, suppose $f_\eta(x)$ denotes a Moreau smoothing of $f(x)$. In addition, suppose $m = 1$, {$\eta\leq \min\{2/L,(4(n+1)^2/\tau^2)^{1/3}\}$}, $d \triangleq 1-{\tau^2\eta^3\over {4}(n+1)^2(1+\eta\tau)}$, $N_k\triangleq \lceil N_0{q^{-k}}\rceil$ for all $k\geq 1$, $\gamma\triangleq{\tau\eta^2\over
{4}(1+n) }$, $c_1 \triangleq \max\{q,d\}$, and $c_2 \triangleq \min\{q,d\}$. (i) Then $\mathbb E[\|x_{{k}+1}-x^*\|^2]
\leq D c_1^{{k}+1}$ for all $k$ where
\begin{align*}
\ D \triangleq \max\left\{\left({2\mathbb E[f_\eta(x_0)-f_\eta(x^*)](1+\eta\tau)\over \tau}\right),\left({5(1+\eta\tau)\eta{\nu_1}^2\over 8\tau{{N_0}}(c_1-c_2)}\right)\right\}.
\end{align*}
(ii) \blue{Suppose $x_{{K+1}}$ is an $\epsilon$-solution such that
$\mathbb{E}[f(x_{{K+1}})-f^*]\leq \epsilon$. {Then, the iteration and
oracle complexity of computing $x_{K+1}$} {are}
$\mathcal{O}(\ln(1/\epsilon))$ steps and {$\mathcal
O\left({ 1/ \epsilon}\right)$}, respectively.}
\end{theorem}
\begin{proof}
(i) From Lipschitz continuity of $\nabla f_{\eta}(x)$ and update \eqref{sVS-SQN}, we have the following:
\begin{align*}
f_{\eta}(x_{k+1})&\leq f_{\eta}(x_k)+\nabla f_{\eta}(x_k)^T(x_{k+1}-x_k)+{1\over 2\eta}\|x_{k+1}-x_k\|^2\\&
=f_{\eta}(x_k)+\nabla f_{\eta}(x_k)^T\left(-{\gamma}H_k(\nabla f_\eta(x_k)+\bar w_{k,N_k})\right)+{1\over 2\eta}{\gamma}^2\left\|H_k(\nabla f_\eta(x_k)+\bar w_{k,N_k})\right\|\uvs{^2}\\
&{=f_{\eta}(x_k)-\gamma\nabla f_\eta(x_k)^TH_k\nabla f_\eta(x_k)-\gamma\nabla f_\eta(x_k)^TH_k\bar w_{k,N_k}+{\gamma^2\over 2\eta}\|H_k\nabla f_\eta(x_k)\|^2}\\
&{+{\gamma^2\over 2\eta}\|H_k\bar w_{k,N_k}\|^2+{\gamma^2\over \eta}{(H_k\nabla f_\eta(x_k))^T}H_k\bar w_{k,N_k}}\\
&{\leq f_{\eta}(x_k)-\gamma\nabla f_\eta(x_k)^TH_k\nabla f_\eta(x_k)+{\eta\over 4}\|\bar w_{k,N_k}\|^2+{\gamma^2\over \eta}\vvs{\|\nabla f_{\eta}(x_k)^TH_k\|}^2+{\gamma^2\over 2\eta}\|H_k\nabla f_\eta(x_k)\|^2}\\
&+{\olambda^2\gamma^2\over 2\eta}\|\bar w_{k,N_k}\|^2+{\gamma^2\over 2\eta}\|H_k\nabla f_\eta(x_k)\|^2+{\olambda^2\gamma^2\over 2\eta}\|\bar w_{k,N_k}\|^2,
\end{align*}
where in the last inequality we used the fact that $a^Tb\leq {\eta\over 4\gamma}\|a\|^2+{\gamma\over \eta}\|b\|^2$. From Lemma \ref{bound sub}, $\mathbb E[\|\bar w_k\|^2\mid \mathcal F_k]\leq {\nu_1^2\over {N_k}}$, where $\nu_1=\nu/ (\eta\tau)$.
Now by taking conditional expectations with respect to $\mathcal F_k$, using Lemma \ref{H_k ns sc} and Assumption \ref{assump:Hk} (NS), we obtain the following.
\begin{align}\label{sc_nonsmooth_bound1}
\nonumber\mathbb E\left[f_{\eta}(x_{k+1})-f_{\eta}(x_k)\mid \mathcal F_k\right]& \leq -{\gamma}\nabla f_{\eta}(x_k)^TH_k\nabla f_{\eta}(x_k)+{2\gamma^2\over \eta}\|H_k\nabla f_{\eta}(x_k)\|^2+{\left({{\olambda}^2\gamma^2\over \eta}+{\eta\over 4}\right){{\nu_1^2}\over N_k}}\\ \nonumber&
={\gamma}\nabla f_{\eta}(x_k)^TH_k^{1/2}\left(-I+{2\gamma\over \eta}H_k^T\right)H_k^{1/2}\nabla f_{\eta}(x_k)+{\left({{\olambda}^2\gamma^2\over \eta}+{\eta\over 4}\right){{\nu_1^2}\over N_k}}\\ \nonumber&
\leq -{\gamma} \left(1-{2\gamma\over \eta}\olambda\right)\|H_k^{1/2}\nabla f_{\eta}(x_k)\|^2+{\left({\olambda^2\gamma^2\over \eta}+{\eta\over 4}\right){{\nu_1^2}\over N_k}}\\
&= {-{\gamma}\over 2}\|H_k^{1/2}\nabla f_{\eta}(x_k)\|^2+{5\eta\nu_1^2\over 16{N_k}},
\end{align}
{where {the last equality follows from} $\gamma={\eta\over 4\olambda}$.} Since $f_{\eta}$ is $\tau/(1+\eta\tau)$-strongly convex (Lemma~\ref{feta}), $\|\nabla f_{\eta}(x_k)\|^2\geq 2\tau/(1+\eta\tau)
\left(f_{\eta}(x_k)-f_{\eta}(x^*)\right)$. {Consequently,} by subtracting $f_\eta(x^*)$ {from both sides}
and {invoking} Lemma \ref{H_k ns sc}, we obtain:
\begin{align}\label{strong_nonsmooth_moreau}
\mathbb E\left[f_{\eta}(x_{k+1})-f_\eta(x^*)\mid \mathcal F_k\right] & \leq f_{\eta}(x_{k})-f_\eta(x^*)-{\gamma\ulambda\over 2}\|\nabla f_{\eta}(x_k)\|^2+{5\eta\nu_1^2\over 16{N_k}}\\ &
\leq\left(1-{\tau\over 1+\eta\tau}\gamma\ulambda\right)(f_{\eta}(x_{k})-f_\eta(x^*))+{5\eta\nu_1^2\over 16{N_k}}.\notag
\end{align}
Then by taking unconditional expectations, we obtain the following sequence of inequalities:
\begin{align}
\notag \qquad \mathbb E\left[f_{\eta}(x_{k+1})-f_\eta(x^*)\right]
& \leq \left(1-{\tau\over 1+\eta\tau}\gamma\ulambda\right)\mathbb E\left[f_{\eta}(x_{k})-f_\eta(x^*)\right]+{5\eta\nu_1^2\over 16{N_k}} \\
\label{bound f_eta_moreau}
& =\left(1-{\tau^2\eta^3\over 4(n+1)^2(1+\eta\tau)}\right)\mathbb E\left[f_{\eta}(x_{k})-f_\eta(x^*)\right]+{5\eta\nu_1^2\over 16{N_k}},
\end{align}
where the last equality arises from choosing $\ulambda={\eta\over 1+n}$,
$\olambda={1+n\over\tau \eta}$ {(by Lemma \ref{H_k ns sc} for
$m=1$)}, $\gamma={\eta\over 4\olambda}={\tau\eta^2\over 4(1+n)
}$ and using the fact that $N_k \geq N_0$ for all $k>0$. Let $d
\triangleq 1-{\tau^2\eta^3\over 4(n+1)^2(1+\eta\tau)}$ and $b_k
\triangleq {5\eta\nu_1^2\over 16{N_k}}$. Then {for
$\eta<(4(n+1)^2/\tau^2)^{1/3}$, we have $d \in (0,1)$}. Furthermore, by recalling that {$N_k=\lceil N_0q^{-k}\rceil$}, {it follows that $b_k \leq \tfrac{5\eta \nu_1^2q^k}{16N_0}$, and we obtain the following bound from \eqref{bound f_eta_moreau}.}
\begin{align*}
\mathbb E\left[f_{\eta}(x_{K+1})-f_\eta(x^*)\right]&
\leq d^{K+1}\mathbb E[f_\eta(x_0)-f_\eta(x^*)]+\sum_{i=0}^{K}d^{K-i}b_i \\
&
\leq d^{K+1}\mathbb E[f_\eta(x_0)-f_\eta(x^*)]+{5\eta\nu_1^2\over 16{N_0}}\sum_{i=0}^{K}d^{K-i}q^{i}.
\end{align*}
If $q<d$, then $\sum_{i=0}^{K}d^{K-i}q^{i}=d^K \sum_{i=0}^K (q/d)^i\leq d^K\left({1\over 1-q/d}\right)$. {Since} $f_{\eta}$ retains the minimizers of $f$, ${\tau\over 2(1+\eta\tau)}\|x_k-x^*\|^2\leq f_{\eta}(x_k)-f_\eta(x^*)$ by strong convexity of $f_\eta$, implying the following.
\begin{align*}
{\tau\over 2(1+{\eta}\tau)}\mathbb E[\|x_{K+1}-x^*\|^2]&
\leq d^{K+1}\mathbb E[f_\eta(x_0)-f_\eta(x^*)]+d^K\left({5\eta{\nu_1}^2\over 16{{N_0}}(1-q/d)}\right).
\end{align*}
Dividing both {sides} by ${\tau\over 2(1+\eta\tau)}$, we obtain the desired result:
\begin{align*}
\mathbb E[\|x_{K+1}-x^*\|^2]
& \leq d^{K+1}\left({2\mathbb E[f_\eta(x_0)-f_\eta(x^*)](1+\eta\tau)\over \tau}\right)+d^K\left({5(1+\eta\tau)\eta{\nu_1}^2\over 8\tau{{N_0}}(1-q/d)}\right) = D d^{K+1}, \\
\mbox{ where } D & \triangleq \max\left\{\left({2\mathbb E[f_\eta(x_0)-f_\eta(x^*)](1+\eta\tau)\over \tau}\right),\left({5(1+\eta\tau)\eta{\nu_1}^2\over 8\tau{{N_0}}(d-q)}\right)\right\}.
\end{align*}
{Similarly, if} $d<q$,
$\mathbb E[\|x_{K+1}-x^*\|^2]\leq
D q^{K+1}$
where $$\ D \triangleq \max\left\{\left({2\mathbb E[f_\eta(x_0)-f_\eta(x^*)](1+\eta\tau)\over \tau}\right),\left({5(1+\eta\tau)\eta{\nu_1}^2\over 8\tau{{N_0}}(q-d)}\right)\right\}.$$
\blue{(ii) To find an $x_{K+1}$ such that $\mathbb E[\|x_{K+1}-x^*\|^2]\leq \epsilon$, suppose $d<q$ with no loss of generality. Then for some $C>0$, ${Cq^K}\leq \epsilon$, implying that $K=\lceil{\log}_{1/q}(C/\epsilon)\rceil$. {It follows that}
\begin{align*}
\sum_{k=0}^K N_k\leq\sum_{k=0}^{1+{\log}_{1/q}\left({C\over \epsilon}\right)} N_0q^{-k} = N_0 \frac{\left({\left({1\over q}\right) \left( {1\over q}\right)^{\log_{1/q}\left(\tfrac{C}{\epsilon}\right)}-1}\right)} {\left({1/q-1}\right)} \leq N_0 \frac{\left(\tfrac{C}{\epsilon}\right)}{1-q}
={\mathcal O(1/\epsilon)}. \end{align*}}
\end{proof}
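The complexity claim in part (ii) can also be verified directly: with geometrically growing batches $N_k=\lceil N_0 q^{-k}\rceil$ and $K=\lceil \log_{1/q}(C/\epsilon)\rceil$, the total number of sampled gradients grows as $\mathcal O(1/\epsilon)$. The sketch below uses illustrative values of $q$, $C$, $N_0$, $\epsilon$ (our assumptions, not constants from the analysis).

```python
import math

# Illustrative constants (assumptions, not taken from the paper).
q, C, N0, eps = 0.5, 1.0, 2, 1e-3

K = math.ceil(math.log(C / eps, 1 / q))          # O(ln(1/eps)) iterations
total = sum(math.ceil(N0 * q**-k) for k in range(K + 1))

# Oracle complexity: the geometric sum is O(1/eps); the ceilings add at most K+1.
bound = N0 * (C / eps) / (q * (1 - q)) + K + 1
assert total <= bound
```

Halving $\epsilon$ roughly doubles `total` while adding only one more iteration, which is the linear-rate trade-off the theorem formalizes.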
\begin{remark}
While a linear rate has been proven via Moreau smoothing, the effort to compute a gradient of the Moreau map
\eqref{moreau} may be expensive. In addition, {
$f(x)$ is defined as a sum of a deterministic closed,
convex, and proper function $h(x)$ and an expectation-valued
$L$-smooth and strongly convex function $g(x)$.} {This
motivates considering the use of a more general {expectation-valued}
function {with nonsmooth convex integrands}. We examine smoothing avenues for such problems but this
would necessitate driving the smoothing parameter to zero, leading to a significantly poorer convergence
rate but the per-iteration complexity can be much smaller.}
\end{remark}
\noindent {\bf (II) $(\alpha,\beta)$ smoothing with diminishing $\eta$.}
Consider
\eqref{main
problem} where $f(x)$ is a strongly convex and nonsmooth, {while $F(x,\omega)$ is assumed to be an
$(\alpha,\beta)$-smoothable function for every $\omega \in \Omega$.} Instances include settings
where $f(x) { \ \triangleq \ } h(x)+g(x)$, $h(x)$ is strongly convex
and smooth, and $g(x)$ is convex and nonsmooth. {In contrast, in this subsection, we do not require such a structure and {allow for the stochastic component to be afflicted by nonsmoothness.}}
We impose the following assumption on the sequence of smoothed functions.
\begin{assumption}\label{nonsmooth_bound_all} Let $f_{\eta_k}(x)$ be a
smoothed counterpart of $f(x)$ with parameter $\eta_k$ where
$\eta_{k+1} \leq \eta_k$ for $k \geq 0$.
There exists a scalar $B$ such that
$f_{\eta_{k+1}}(x) \leq f_{\eta_k}(x)+{1\over 2}\left({\eta_k^2\over \eta_{k+1}}-{\eta_k}\right)B^2$ for all $x$.
\end{assumption}
{We observe that Assumption~\ref{nonsmooth_bound_all} holds for some common smoothings of convex nonsmooth functions~\cite{beck17fom} that satisfy $(\alpha,\beta)$ smoothability as verified next.}
\begin{lemma}
Consider a convex function $f(x)$ and any $\eta>0$. Then Assumption~\ref{nonsmooth_bound_all} holds for the following smoothing {functions} for any $x$.
\begin{enumerate}
\item[(i)] $f(x) { \ \triangleq \ } \|x\|_2$ and $f_{\eta}(x) \ \triangleq \ \sqrt{\|x\|^2_2+\eta^2}-\eta.$
\item[(ii)] $f(x) \ \triangleq \ \max\{x_1,x_2\}$ and $f_{\eta}(x) \ \triangleq \ \eta \ln( e^{x_1/ \eta} +e^{x_2/\eta})-\eta \ln(2)$.
\end{enumerate}
\end{lemma}
\begin{proof}
{\bf i)} The following holds for some $B$ {and $\eta_k\geq \eta_{k+1}>0$ }such that ${1\over 2}B^2 {\eta_k\over \eta_{k+1}}\geq 1$:
\begin{align*}
f_{\eta_{k+1}}(x)=\sqrt{\|x\|^2_2+\eta_{k+1}^2}-\eta_{k+1}\leq \sqrt{\|x\|^2_2+\eta_k^2}-\eta_k+(\eta_{k}-\eta_{k+1})\leq f_{\eta_k}(x)+{1\over 2}B^2{\eta_k\over \eta_{k+1}}(\eta_k-\eta_{k+1}).
\end{align*}
{\bf ii)} By using the fact that \uvs{${\eta_{k+1}\over \eta_k}\leq 1$}, the following holds if $x_2 < x_1$ (without loss of generality), where we write $\ln(n)$ with $n=2$ for the number of terms in the max:
\begin{align*}
f_{\eta_{k+1}}(x)&= {\eta_{k+1}\uvs{\eta_k\over \eta_{k}}} \ln\left(e^{x_1/ \eta_{k+1}} + e^{x_2/\eta_{k+1}}\right)-\eta_{k+1} \ln(n)\\
&=\eta_k \ln\left(e^{x_1/ \eta_{k+1}} + e^{x_2/\eta_{k+1}}\right)^{\eta_{k+1}\over \eta_k}-\eta_{k+1} \ln(n)-\eta_k \ln(n)+\eta_k \ln(n)\\
& = \eta_k \ln\left(\left(e^{x_1/\eta_{k+1}}\right)^{\eta_{k+1} \over \eta_{k}}\left( 1+ e^{{(x_2-x_1)}/ \eta_{k+1}}\right)^{\eta_{k+1}\over \eta_k}\right)-\eta_{k}\ln(n)+(\eta_k-\eta_{k+1}) \ln(n)\\
& = \eta_k \ln\left(\left(e^{x_1/\eta_{k}}\right)\left( 1+ e^{{(x_2-x_1)}/ \eta_{k+1}}\right)^{\eta_{k+1}\over \eta_k}\right)-\eta_{k}\ln(n)+(\eta_k-\eta_{k+1}) \ln(n)\\
& \leq \eta_k \ln\left(\left(e^{x_1/\eta_{k}}\right)\left( 1+ {\eta_{k+1}\over \eta_k}e^{{(x_2-x_1)}/ \eta_{k+1}}\right)\right)-\eta_{k}\ln(n)+(\eta_k-\eta_{k+1}) \ln(n)\\
& = \eta_k \ln\left(e^{x_1/\eta_{k}}+ {\eta_{k+1}\over \eta_k}e^{{x_1/\eta_k}+{(x_2-x_1)}/ \eta_{k+1}}\right)-\eta_{k}\ln(n)+(\eta_k-\eta_{k+1}) \ln(n)\\
& = \eta_k \ln\left(e^{x_1/\eta_{k}}+ {\eta_{k+1}\over \eta_k}e^{{x_2/\eta_k}+(x_2-x_1)(\tfrac{1}{\eta_{k+1}}-\tfrac{1}{\eta_k})}\right)-\eta_{k}\ln(n)+(\eta_k-\eta_{k+1}) \ln(n)\\
& = \eta_k \ln\left(e^{x_1/\eta_{k}}+ e^{x_2/\eta_k} \underbrace{\left({\eta_{k+1}\over \eta_k}e^{\left(x_2-x_1\right)\left(\tfrac{1}{\eta_{k+1}}-\tfrac{1}{\eta_k}\right)}\right)}_{ \scriptsize \ \leq \ 1 }\right)-\eta_{k}\ln(n)+(\eta_k-\eta_{k+1}) \ln(n)
\end{align*}
\begin{align*}
&
\leq \eta_k \ln\left(e^{x_1/\eta_{k}}+ e^{x_2/\eta_k}\right)-\eta_{k}\ln(n)+(\eta_k-\eta_{k+1}) \ln(n)\\
& = f_{\eta_k}(x)+(\eta_k-\eta_{k+1})\ln(n)\leq f_{\eta_k}(x)+{1\over 2}\frac{\eta_k}{\eta_{k+1}}\left({\eta_k}-{\eta_{k+1}}\right)B^2,
\end{align*}
where the first inequality follows from ${{a^y} \leq 1 + y(a-1)}$ for $y \in [0,1]$ and $a \geq 1$, {the second inequality follows from $x_2<x_1$}, while the third is a result of noting that $\tfrac{\eta_k}{2\eta_{k+1}}B^2 \geq 1$.
\end{proof}
We are now ready to provide our main convergence rate for more general smoothings. {Note that without loss of generality, we assume that $F(x,\omega)$ is $(1,\beta)$-smoothable for every $\omega \in \Omega$. }
\begin{lemma}[{\bf Smoothability of $f$}]\label{lemma-smooth-f}
Consider a function $f(x) \triangleq \mathbb{E}[F(x,\omega)]$ such that $F(x,\omega)$ is $(\alpha,\beta)$ smoothable for every $\omega \in \Omega$. Then $f(x)$ is $(\alpha,\beta)$ smoothable.
\end{lemma}
\begin{proof} By hypothesis,
$ F_{\eta}(x,\omega) \leq F(x,\omega) \leq F_{\eta}(x,\omega)+\eta \beta$ for every $x$. Then by taking expectations, we have that
$f_{\eta}(x) \leq f(x) \leq f_{\eta}(x)+\eta \beta$ for every $x$. In addition, by $\alpha/\eta$-smoothness of $F_{\eta}$, and Jensen's inequality we have
$
\| \nabla_x f_{\eta}(x) - \nabla_x f_{\eta}(y) \| \overset{\tiny \mbox{Jensen's}}{\leq} \mathbb{E}[\|\nabla_x F_{\eta}(x,\omega) - \nabla_x F_{\eta}(y,\omega) \|] \leq {\alpha \over \eta} \|x-y\|,
$ for all $x,y$.
\end{proof}
{We now prove our main convergence result.}
\begin{theorem}[{\bf Convergence in mean}]\label{thm:mean:nonsmooth:strong}
Consider the iterates generated by the \eqref{sVS-SQN} scheme. Suppose $f$ {and $f_\eta$ are} $\tau$-strongly convex,
Assumptions~\ref{assum:convex2}, \ref{state noise} (NS-M), \ref{state noise} (NS-B), \ref{assump:Hk} (NS), and \ref{nonsmooth_bound_all} hold.
In addition, suppose $m = 1$, $\eta_k \triangleq \left({2(n+1)^2\over \tau^2(k+2)}\right)^{1/3}$ , {$N_0=\lceil {2^{4/3}\nu_1^2(n+1)^{1/3}\over \tau^{5/3}}\rceil$}, $N_k \triangleq \lceil
N_0(k+2)^{a+2/3}\rceil$ for some $a>1$, and $\gamma_k \triangleq {\tau\eta_k^2\over
1+n }$ for all $k\geq 1$. (i) Then, for any $K\geq 1$, the following holds.
\begin{align*}
\mathbb E\left[f(x_{K+1})-f(x^*)\right]&\leq{f(x_0)-f(x^*)\over K+2}+\left(\frac{(n+1)^{1/3}}{2^{2/3}\tau^{2/3}(a-1)}\right){2\nu_1^2\|x^*\|^2+\nu_2^2\over K+2}\\&+\left(\frac{2(n+1)^{{2/3}}}{ \tau^{2/3}}\right){B^2\over (K+3)^{1/3}}+\left(\frac{2^{5/3}(n+1)^{2/3}}{\tau^{7/3}(a-2/3)}\right){B^2\nu_1^2\over K+2}=\mathcal O(1/K^{1/3}).
\end{align*}
(ii) Suppose $x_{K+1}$ is an $\epsilon$-solution such that
$\mathbb{E}[f(x_{{K+1}})-f^*]\leq \epsilon$. {Then the iteration and
oracle complexity of} \eqref{sVS-SQN} {are} $\mathcal{O}(1/\epsilon^{3})$
steps and \red{$\mathcal O\left({ 1\over \epsilon^{8+\varepsilon}}\right)$,
respectively.} \end{theorem}
\begin{proof} (i) {By Lemma~\ref{lemma-smooth-f} and Assumption~\ref{assum:convex2}, $f$ is $(1,B)$-smoothable and $\nabla_x f_{\eta}(x)$ is $1/\eta$-smooth.} From Lipschitz continuity of
$\nabla f_{\eta_k}(x)$ and {the definition of} \eqref{sVS-SQN},
{the following holds.}
\begin{align*}
f_{\eta_k}(x_{k+1})&\leq f_{\eta_k}(x_k)+\nabla f_{\eta_k}(x_k)^T(x_{k+1}-x_k)+{1\over 2\eta_k}\|x_{k+1}-x_k\|^2\\&
=f_{\eta_k}(x_k)+\nabla f_{\eta_k}(x_k)^T\left(-\gamma_kH_k(\nabla f_{\eta_k}(x_k)+\bar w_{k,N_k})\right)+{1\over 2\eta_k}\gamma_k^2\left\|H_k(\nabla f_{\eta_k}(x_k)+\bar w_{k,N_k})\right\|\uvs{^2}.
\end{align*}
Now by taking conditional expectations with respect to $\mathcal F_k$, using Lemma \ref{rsLBFGS-matrix}(c), Assumption \ref{state noise} (NS-M), \ref{state noise} (NS-B), Assumption \ref{assump:Hk} (NS) {and \eqref{unbias_smooth}} we obtain:
\begin{align}\label{sc_nonsmooth_bound2}
\nonumber&\mathbb E\left[f_{\eta_k}(x_{k+1})-f_{\eta_k}(x_k)\mid \mathcal F_k\right]\leq -\gamma_k \nabla f_{\eta_k}(x_k)^TH_k\nabla f_{\eta_k}(x_k)+{1\over 2\eta_k}\gamma_k^2\|H_k\nabla f_{\eta_k}(x_k)\|^2\\ \nonumber&+{\gamma_k^2\olambda_k^2( \nu_1^2\|x_k\|^2+\nu_2^2)\over 2\eta_kN_k}\\ \nonumber&
=-\gamma_k\nabla f_{\eta_k}(x_k)^TH_k^{1/2}\left(I-{1\over 2\eta_k}\gamma_kH_k^T\right)H_k^{1/2}\nabla f_{\eta_k}(x_k)+{\gamma_k^2\olambda_k^2( \nu_1^2\|x_k\|^2+\nu_2^2)\over 2\eta_kN_k}\\ \nonumber&
\leq -\gamma_k \left(1-{1\over 2\eta_k}\gamma_k\olambda_k\right)\|H_k^{1/2}\nabla f_{\eta_k}(x_k)\|^2+{\gamma_k^2\olambda_k^2( \nu_1^2\|x_k\|^2+\nu_2^2)\over 2\eta_kN_k}\\
&\leq {-\gamma_k\over 2}\|H_k^{1/2}\nabla f_{\eta_k}(x_k)\|^2+{\eta_k( \nu_1^2\|x_k\|^2+\nu_2^2)\over 2N_k},
\end{align}
where in the first inequality, we use the fact that $\mathbb E[\bar w_{k,N_k}\mid \mathcal F_k]=0$, while in the second inequality, we employ $H_k\preceq \olambda_k \mathbf{I}$, and last inequality follows from the assumption that $\gamma_k=
{\eta_k\over \olambda_k}$. Since $f_{\eta_k}$ is strongly convex with modulus
$\tau$, $\|\nabla f_{\eta_k}(x_k)\|^2\geq 2\tau
\left(f_{\eta_k}(x_k)-f_{\eta_k}(x^*)\right)\geq 2\tau
\left(f_{\eta_k}(x_k)-f(x^*)\right)$. By subtracting $f(x^*)$ {from both sides},
\uvs{invoking} Lemma \ref{rsLBFGS-matrix} (c), {and taking unconditional expectations}, we obtain:
\begin{align}\label{strong_nonsmooth}
&\mathbb E\left[f_{\eta_k}(x_{k+1})-f(x^*)\right]\nonumber\leq \mathbb E[f_{\eta_k}(x_{k})-f(x^*)]-{\gamma_k\ulambda_{k}\over 2}\|\nabla f_{\eta_k}(x_k)\|^2+{\eta_k( \nu_1^2\|x_k\|^2+\nu_2^2)\over 2N_k}\\ \nonumber&
\leq\left(1-\tau\gamma_k\ulambda_{k}\right)\mathbb E[f_{\eta_k}(x_{k})-f(x^*)]+{\eta_k( \nu_1^2\|x_k+x^*-x^*\|^2+\nu_2^2)\over 2N_k}\\&
\leq\left(1-\tau\gamma_k\ulambda_{k}\right)\mathbb E[f_{\eta_k}(x_{k})-f(x^*)]+\uvs{\frac{\uvs{2}\eta_k\nu_1^2\|x_k-x^*\|^2}{2N_k}}+{\eta_k( 2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_k}.
\end{align}
By \uvs{the} strong convexity of $f$ and the \uvs{relationship} between $f$ and $f_{\eta_k}$, ${\tau\over 2}\|x_k-x^*\|^2\leq f(x_k)-f(x^*)\leq f_{\eta_k}(x_k)-f(x^*)+\eta_k{\beta}$. Therefore, \eqref{strong_nonsmooth} can be written as follows:
\begin{align}\label{strong:nonsmooth}
\mathbb E\left[f_{\eta_k}(x_{k+1})-f(x^*)\right]\nonumber &\leq \left(1-\tau\gamma_k\ulambda_{k}+{2\nu_1^2\eta_k\over \tau N_k}\right)\mathbb E\left[f_{\eta_k}(x_{k})-f(x^*)\right]+{2\nu_1^2\eta_k^2{\beta}\over \tau N_k}\\&+{\eta_k( 2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_k}.
\end{align}
By choosing $m=1$, $\ulambda_{k}={\eta_k\over 1+n}$,
$\olambda_{\us{k}}={1+n\over\tau \eta_k}$ and $\gamma_k={\eta_k\over \olambda_k}={\tau\eta_k^2\over 1+n }$, \eqref{strong:nonsmooth} can be rewritten as follows.
\begin{align}\label{bound f_eta}
\mathbb E\left[f_{\eta_k}(x_{k+1})-f(x^*)\right]\nonumber&
\leq\left(1-{\tau^2\eta_k^3\over (n+1)^2}+{2\nu_1^2\eta_k\over \tau N_k}\right)\mathbb E\left[f_{\eta_k}(x_{k})-f(x^*)\right]+{2\nu_1^2\eta_k^2{\beta}\over \tau N_k}\\&+{\eta_k( 2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_k}.
\end{align}
By using Assumption \ref{nonsmooth_bound_all}, we have the following for any $x_{k+1}$:
\begin{align}\label{bound f_eta_k+1}
f_{\eta_{k+1}}(x_{k+1})\leq f_{\eta_k}(x_{k+1})+{1\over 2}\left({\eta_k^2\over \eta_{k+1}}-{\eta_k}\right)B^2.
\end{align}
Substituting \eqref{bound f_eta_k+1} in \eqref{bound f_eta} leads to the following
\begin{align*}
\mathbb E\left[f_{\eta_{k+1}}(x_{k+1})-f(x^*)\right]&
\leq\left(1-{\tau^2\eta_k^3\over (n+1)^2}+{2\nu_1^2\eta_k\over \tau N_k}\right)\mathbb E\left[f_{\eta_k}(x_{k})-f(x^*)\right]+{\eta_k( 2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_k}\\&+{{\max \{B^2,\beta\}}\over 2}\left({\eta_k^2\over \eta_{k+1}}-{\eta_k}+{4\nu_1^2\eta_k^2\over \tau N_k}\right).
\end{align*}
Let $\uvs{d_k} \uvs{ \ \triangleq \ } 1-{\tau^2\eta_k^3\over (n+1)^2}+{2\nu_1^2\eta_k\over \tau N_k}$, $b_k \uvs{ \ \triangleq \ } {\eta_k( 2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2N_k}$, and $c_k \uvs{ \ \triangleq \ } {\eta_k^2\over \eta_{k+1}}-{\eta_k}+{4\nu_1^2\eta_k^2\over \tau N_k}$. Therefore the following is obtained recursively by using the fact that $\mathbb E[f_{\eta_0}(x_0)]\leq \mathbb E[f(x_0)]$:
\begin{align*}
\mathbb E\left[f_{\eta_{K+1}}(x_{K+1})-f(x^*)\right]&
\leq \left(\prod_{k=0}^{K}\uvs{d_k}\right)\mathbb E[f(x_0)-f(x^*)]+\sum_{i=0}^{K}\left(\prod_{j=0}^{K-i-1}\uvs{d}_{K-j}\right)b_i\\&+{{\max \{B^2,\beta\}}\over 2}\sum_{i=0}^{K}\left(\prod_{j=0}^{K-i-1}\uvs{d}_{K-j}\right)c_i.
\end{align*}
By choosing $\eta_k=\left({2(n+1)^2\over \tau^2(k+2)}\right)^{1/3}$, $N_k=\lceil N_0(k+2)^{a+2/3}\rceil$ for all $k\geq 1$, $a>1$ and $N_0=\lceil {2^{4/3}\nu_1^2(n+1)^{1/3}\over \tau^{5/3}}\rceil$, and noting that $f(x_0)-f(x^*)\geq0$, we obtain that
\begin{align}\prod_{k=0}^{K} \uvs{d}_k\leq\prod_{k=0}^{K}\left(1-{2\over k+2}+{1\over (k+2)^{a+1}}\right)\leq \prod_{k=0}^{K}\left(1-{1\over k+2}\right) = {1\over K+2}\end{align} and $\prod_{j=0}^{K-i-1} \uvs{d}_{K-j}\leq{i+2\over K+2}$. Hence, we have that
\begin{align}\label{bound f simplify}
&\mathbb E\left[f_{\eta_{K+1}}(x_{K+1})-f(x^*)\right]
\leq {1\over K+2}\left(f(x_0)-f(x^*)\right)+\sum_{i=0}^{K}{b_i(i+2)\over K+2}+{{\max \{B^2,\beta\}}\over 2}\sum_{i=0}^K {c_i(i+2)\over K+2}\\ \nonumber&={{\left(f_{\eta_{0}}(x_0)-f(x^*)\right)} \over K+2}+{( 2\nu_1^2\|x^*\|^2+\nu_2^2)2^{1/3}(n+1)^{1/3}\over 2\tau^{2/3}}\sum_{i=0}^{K}{(i+2)^{2/3}\over (K+2)N_i}+{{\max \{B^2,\beta\}}\over 2}\sum_{i=0}^K {c_i(i+2)\over K+2}.
\end{align}
Note that we have the following inequality from the definition of $c_i=\overbrace{\left({\eta_i^2\over \eta_{i+1}}-{\eta_i}\right)}^{A_i}+\overbrace{4\nu_1^2\eta_i^2\over \tau N_i}^{D_i}$ \us{and by recalling that $\eta_k =\left({2(n+1)^2\over \tau^2(k+2)}\right)^{1/3}$.}
\begin{align}\label{bound c_i}
\sum_{i=0}^K A_i(i+2)&\nonumber= \us{\sum_{i=0}^K \left({\eta_i^2\over \eta_{i+1}}-{\eta_i}\right) (i+2)} = {2^{1/3}(n+1)^{{2/3}}\over \tau^{2/3}}\sum_{i=0}^K \left({(i+3)^{1/3}\over (i+2)^{2/3}}-{1\over (i+2)^{1/3}}\right){(i+2)}\\&\leq {2^{1/3}(n+1)^{{2/3}}\over \tau^{2/3}}\sum_{i=0}^K \left((i+3)^{2/3}-(i+2)^{2/3}\right)\leq {2^{1/3}(n+1)^{{2/3}}\over \tau^{2/3}} (K+3)^{2/3}.
\end{align}
Additionally, for any $a>1$ the following holds:
\begin{align} \label{integ-bound}
\sum_{i=0}^{K}{1\over (i+2)^a}\leq \int_{-1}^{K}{1\over (x+2)^a}dx={1\over 1-a}(K+2)^{1-a}+{1\over a-1}\leq {1\over a-1}.
\end{align}
We also have that the following inequality holds if $N_k=\lceil N_0(k+2)^{a+2/3}\rceil $:
\begin{align} \sum_{i=0}^K D_i(i+2)&={2^{8/3}\nu_1^2(n+1)^{2/3}\over \tau^{7/3}}\sum_{i=0}^K {1\over (i+2)^{a +1/3}}\overset{\tiny \eqref{integ-bound}}{\leq} {2^{8/3}\nu_1^2(n+1)^{2/3}\over \tau^{7/3}(a-2/3)}\label{bound D_i}.
\end{align}
\us{Therefore, substituting} \eqref{bound c_i} and \eqref{bound D_i} within \eqref{bound f simplify}, we have:
\begin{align*}
\mathbb E\left[f_{\eta_{K+1}}(x_{K+1})-f(x^*)\right]& \us{ \ \leq \ }{1\over K+2}\mathbb E[f_{\eta_{0}}(x_0)-f(x^*)]+{( 2\nu_1^2\|x^*\|^2+\nu_2^2)(n+1)^{1/3}\over 2^{2/3}(K+2)\tau^{2/3}(a-1)}\\& +{{\max \{B^2,\beta\}}(n+1)^{{2/3}}(K+3)^{2/3}\over 2^{2/3}N_0\tau^{2/3}(K+2)}+{{\max \{B^2,\beta\}}2^{5/3}\nu_1^2(n+1)^{2/3}\over \tau^{7/3}(a-2/3)(K+2)} .
\end{align*}
Now by using the fact that $f_{\eta}(x) \leq f(x) \leq f_{\eta}(x)+\eta {\beta}$ we obtain {for some $C>0$}:
\begin{align*}
\mathbb E&\left[f(x_{K+1})-f(x^*)\right]\leq{1\over K+2}\mathbb E[f(x_0)-f(x^*)]+{( 2\nu_1^2\|x^*\|^2+\nu_2^2)(n+1)^{1/3}\over 2^{2/3}(K+2)\tau^{2/3}(a-1)}\\&+{{\max \{B^2,\beta\}}(n+1)^{{2/3}}\over \tau^{2/3}}\left({(K+3)^{2/3}\over 2^{2/3}(K+2)}+{1\over (K+3)^{1/3}}\right)+{{\max \{B^2,\beta\}}2^{5/3}\nu_1^2(n+1)^{2/3}\over \tau^{7/3}(a-2/3)(K+2)}\\&\leq{\mathbb E[f(x_0)-f(x^*)]\over K+2}+\left(\frac{(n+1)^{1/3}}{2^{2/3}\tau^{2/3}(a-1)}\right){2\nu_1^2\|x^*\|^2+\nu_2^2\over K+2}\\&+\left(\frac{2(n+1)^{{2/3}}}{ \tau^{2/3}}\right){{\max \{B^2,\beta\}}\over (K+3)^{1/3}}+\left(\frac{2^{5/3}(n+1)^{2/3}}{\tau^{7/3}(a-2/3)}\right){{\max \{B^2,\beta\}}\nu_1^2\over K+2}=\mathcal O(1/K^{1/3}).
\end{align*}
(ii) To find $x_{K+1}$ such that $\mathbb E[f(x_{ K+1})]-f^*\leq \epsilon$, we require ${C\over K^{1/3}}\leq \epsilon$, which implies that $K=\lceil {\left(C\over \epsilon\right)^{3}}\rceil$. Therefore, \us{by utilizing the fact that
$ \lceil x \rceil \leq 2x$ for $x \geq 1$,}
we have the following for $a=1+\varepsilon$:
\begin{align*}
\sum_{k=0}^K N_k\leq\sum_{k=0}^{1+\left({C\over \epsilon}\right)^3} 2 N_0 (k+2)^{5/3+\varepsilon} & \leq 2N_0\int_0^{2+\left({C\over \epsilon}\right)^3} (x+2)^{5/3+\varepsilon}dx\leq\frac{2N_0\left(4+\left({C\over \epsilon}\right)^3\right)^{8/3+\varepsilon}}{8/3+\varepsilon}\\
& \leq \mathcal O(1/\epsilon^{8+3\varepsilon}). \end{align*}
\end{proof}
\begin{remark}
Instead of iteratively reducing the smoothing parameter, one may employ a fixed smoothing parameter for all $k$, i.e., $\eta_k=\eta$. By similar arguments, we obtain the following inequality for $N_k=\lceil N_0 \rho^{-k}\rceil$, where $0<\rho<1$ and $N_0>{4\nu_1^2(n+1)\over \tau^3 \eta^2}$:
\begin{align*}
\mathbb E\left[f(x_{k+1})-f(x^*)\right]\leq \alpha_0^k\mathbb E\left[f_{\eta}(x_0)-f_\eta(x^*)\right]+{\eta\alpha_0^k( 2\nu_1^2\|x^*\|^2+\nu_2^2)\over 2(1-{\rho\over \alpha_0})}+{\eta B^2\over {1\over \alpha_0}-1}+{2B^2\nu_1^2\eta^2\alpha_0^K\over \tau(1-{\rho\over \alpha_0})},
\end{align*}
where $\alpha_k=1-{\tau^2\eta^3\over n+1}+{2\nu_1^2\over \tau N_k}$. To find $x_{K+1}$ such that $\mathbb E[f(x_{ K+1})]-f^*\leq \epsilon$, one can easily verify that $K>\mathcal O\left({\ln(1/\epsilon)\over \ln(1/(1-\epsilon^3))}\right)$, which is slightly worse than $\mathcal O(\epsilon^{-3})$ for iterative smoothing.
Note that in Section 3.2 (I), we merely require a uniform bound
on the subgradients of $F(x,\omega)$, a requirement that allows for
applying Moreau smoothing (but we do not require unbiasedness). However, in Section 3.2 (II), we do assume that an unbiased gradient of the smoothed function $f_{\eta}(x)$ is available (Assumption~\ref{state noise} (NS-B)). This holds for instance when we have access to the true
gradient of $ F_{\eta}(x,\omega)$, i.e. $\nabla_x F_{\eta}(x,\omega)$. Here,
unbiasedness follows directly, as seen next. Let $f_\eta(x)\triangleq \mathbb
E[F_\eta(x,\omega)]$. By using Theorem 7.47 in \cite{shapiro09lectures}
(interchangeability of the derivative and the expectation), we have:
\begin{align}\label{unbias_smooth}\nabla f_\eta(x)=\nabla \mathbb
E[F_\eta(x,\omega)]=\mathbb E[\nabla F_\eta(x,\omega)]\implies \mathbb E[\nabla
f_\eta(x)-\nabla F_\eta(x,\omega)]=0.\end{align} In an effort to be more general, we claim
that there exists an oracle that can produce an unbiased estimate of $\nabla_x
f_{\eta}(x)$, where $f_{\eta}(x) \triangleq \mathbb{E}[F_{\eta}(x,\omega)]$, for every $\eta > 0$, as formalized by Assumption~\ref{state noise} (NS-B).
\end{remark}
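To illustrate the sandwich $f_\eta(x)\le f(x)\le f_\eta(x)+\eta\beta$ used above, consider the Moreau envelope of $f(x)=|x|$, which has the closed (Huber) form below with $\beta=1/2$. This is an illustrative sketch only; the function and parameter values are choices made for the example, not part of the paper's development.

```python
# Moreau envelope of f(x) = |x| (the Huber function):
#   f_eta(x) = x^2/(2*eta)  if |x| <= eta,   |x| - eta/2  otherwise,
# which satisfies f_eta(x) <= |x| <= f_eta(x) + eta*beta with beta = 1/2.
def moreau_abs(x, eta):
    return x * x / (2 * eta) if abs(x) <= eta else abs(x) - eta / 2

eta, beta = 0.1, 0.5
for i in range(-100, 101):
    x = i / 25.0
    assert moreau_abs(x, eta) <= abs(x) <= moreau_abs(x, eta) + eta * beta + 1e-12
```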
\section{Smooth and nonsmooth convex optimization}\label{sec:4}
{In this section, we weaken the {strong convexity} requirement and analyze the rate
and oracle complexity of (\eqref{rVS-SQN}) and (\eqref{rsVS-SQN}) {in smooth
and nonsmooth regimes}, respectively.}
\subsection{Smooth convex optimization}
{Consider the setting when $f$ is an $L$-smooth convex function. In such an instance, a regularization of $f$ and its gradient can be defined as follows.}
\begin{definition}[{\bf Regularized function and gradient map}]\label{def:regularizedf}
Given a sequence $\{\mu_k\}$ of positive scalars,
the function $f_{\mu_k}$ and its gradient $\nabla f_{\mu_k}(x)$ are defined as follows for {any $x_0\in \mathbb R^n$}:
\begin{align*}
f_{\mu_k}(x)&\triangleq f(x)+\frac{\mu_k}{2}{\|x-x_0\|^2},\quad \hbox{for any } k \geq 0, \qquad
\nabla f_{\mu_k}(x)\triangleq\nabla f(x)+\mu_k(x-x_0),\quad \hbox{for any } k \geq 0.
\end{align*}
Then{,} $f_{\mu_k}$ and ${\nabla} f_{\mu_k}$ satisfy the following:
(i) $f_{\mu_k}$ is $\mu_k$-strongly convex;
(ii) $f_{\mu_k}$ has Lipschitzian gradients with parameter $L+\mu_k$;
(iii) $f_{\mu_k}$ has a unique minimizer over $\mathbb R^n$, denoted by $x^*_k$. Moreover, for any $x \in \mathbb R^n$~\cite[sec. 1.3.2]{polyak1987introduction},
\begin{align*} 2\mu_k (f_{\mu_k}(x)-f_{\mu_k}(x^*_k))& \leq \|\nabla f_{\mu_k}(x)\|^2\leq 2(L+\mu_k) \left(f_{\mu_k}(x)-f_{\mu_k}(x^*_k)\right).\end{align*}
\end{definition}
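The two-sided bound in (iii) can be checked numerically on a simple instance; the following sketch uses the hypothetical one-dimensional choice $f(x)=(L/2)x^2$, for which both inequalities hold with the minimizer of $f_{\mu}$ available in closed form.

```python
# Sanity check of Definition (iii) for the regularized function, with the
# hypothetical choice f(x) = (L/2) x^2 in one dimension:
#   f_mu(x)     = f(x) + (mu/2)(x - x0)^2,
#   grad_fmu(x) = L x + mu (x - x0).
L, mu, x0 = 4.0, 0.5, 1.0

def f_mu(x):
    return 0.5 * L * x * x + 0.5 * mu * (x - x0) ** 2

def grad_f_mu(x):
    return L * x + mu * (x - x0)

x_star = mu * x0 / (L + mu)        # unique minimizer of f_mu
for i in range(-50, 51):
    x = i / 10.0
    gap = f_mu(x) - f_mu(x_star)   # f_mu(x) - f_mu(x_k^*)
    g2 = grad_f_mu(x) ** 2
    assert 2 * mu * gap <= g2 + 1e-9          # lower bound in (iii)
    assert g2 <= 2 * (L + mu) * gap + 1e-9    # upper bound in (iii)
```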
We consider the following update rule \eqref{rVS-SQN}, { where $H_k$ is generated by {\bf rL-BFGS} scheme.}
\begin{align}\tag{\bf rVS-SQN}\label{rVS-SQN}
x_{k+1}:=x_k-\gamma_kH_k{\frac{\sum_{j=1}^{N_k} \nabla_x F_{\mu_k}(x_k,\omega_{j,k})}{N_k}}.
\end{align}
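The following sketch simulates an \eqref{rVS-SQN}-style iteration on a hypothetical one-dimensional problem, $F(x,\omega)={1\over 2}(x-\omega)^2$ with $\omega\sim U(-1,1)$, using $H_k=1$ in place of the {\bf rL-BFGS} matrix and a decaying regularization $\mu_k$; it is illustrative only, and all parameter choices are assumptions of the example.

```python
import random

# One-dimensional sketch of the (rVS-SQN) update
#   x_{k+1} = x_k - gamma_k * H_k * (1/N_k) sum_j grad F_{mu_k}(x_k, w_{j,k}),
# for F(x, w) = 0.5*(x - w)^2 with w ~ Uniform(-1, 1), so that f(x) = E[F(x, w)]
# is minimized at x* = 0.  Here H_k = 1 (identity) replaces the L-BFGS matrix.
random.seed(0)

def grad_F_mu(x, w, mu, x0):
    # gradient of F(x, w) + (mu/2) (x - x0)^2
    return (x - w) + mu * (x - x0)

x, x0 = 5.0, 5.0
for k in range(1, 200):
    gamma_k, mu_k = 0.5, 1.0 / k           # fixed step, decaying regularization
    N_k = k                                # growing batch size
    g = sum(grad_F_mu(x, random.uniform(-1.0, 1.0), mu_k, x0)
            for _ in range(N_k)) / N_k
    x = x - gamma_k * g

assert abs(x) < 0.5                        # iterate approaches x* = 0
```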
{For a subset of the results, we assume quadratic growth property. }
\begin{assumption}{\bf(Quadratic growth)}\label{growth}
Suppose that the function $f$ has a nonempty set $X^*$ of minimizers. There exists $\alpha>0$ such that $f(x)\geq f(x^*)+{\alpha\over 2}\mbox{dist}^2(x,X^*)$ holds for all $x\in \mathbb R^n$.
\end{assumption}
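Quadratic growth does not require a unique minimizer. As a hypothetical illustration, $f(x)=\mbox{dist}^2(x,X^*)$ with $X^*=[-1,1]$ satisfies this assumption with $\alpha=2$:

```python
# Example of quadratic growth with a non-singleton solution set:
#   f(x) = dist(x, X*)^2 with X* = [-1, 1]  =>  alpha = 2 works.
def dist_to_Xstar(x):
    return max(abs(x) - 1.0, 0.0)          # distance to the interval [-1, 1]

def f(x):
    return dist_to_Xstar(x) ** 2

alpha, f_star = 2.0, 0.0
for i in range(-60, 61):
    x = i / 10.0
    assert f(x) >= f_star + 0.5 * alpha * dist_to_Xstar(x) ** 2
```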
In the next lemma, bounds on the eigenvalues of $H_k$ are derived (see Lemma 6 in \cite{yousefian2017stochastic}).
\begin{lemma}[{\bf Properties of Hessian approximations produced by (rL-BFGS)}]\label{rLBFGS-matrix}
Consider the \eqref{rVS-SQN} method. Let $H_k$ be given by the update rule
\eqref{eqn:H-k}-\eqref{eqn:H-k-m} \us{with $\eta_k = 0$ for all $k$,} and
$s_i$ and $y_i$ are defined in \eqref{equ:siyi-LBFGS}. {Suppose $\mu_k$ is} updated according to the procedure \eqref{eqn:mu-k}. Let
Assumptions~\ref{assum:convex-smooth}(a,b) hold. Then the following hold.
\begin{itemize}
\item [(a)] For any odd $k > 2m$, $s_k^T{y_k} >0$;
\item [(b)] For any odd $k > 2m$, $H_{k}{y}_k=s_k$;
\item [(c)] For any $k > 2m$, $H_k$ satisfies Assumption \ref{assump:Hk}{(S)} with ${\ulambda}={\frac{1}{(m+n)(L+\mu_0^{\bar \delta})}}$, $\lambda = {\frac{(m+n)^{n+m-1}{(L+\mu_0^{\bar \delta})}^{n+m-1}}{(n-1)!}}$ and
${\olambda_k}= \lambda \mu_k^{-\bar \delta(n+m)},$ {for scalars $\delta,\bar \delta>0$.}
Then {for all $k$, we have that $H_k = H_k^T$ and $\mathbb E[{H_k\mid\mathcal F_k}]=H_k$ and
$
{\ulambda\mathbf{I} \preceq H_{k} \preceq \olambda_k \mathbf{I}}$ both hold in an a.s. fashion.}
\end{itemize}
\end{lemma}
\begin{lemma}[An error bound]\label{lemma:main-ineq}
Consider the \eqref{rVS-SQN} method and suppose Assumptions \ref{assum:convex-smooth}, \ref{state noise}(S-M), \ref{state noise}(S-B), \ref{assump:Hk}(S) \red{ and \ref{growth}} hold. {Suppose} $\{\mu_k\}$ is a non-increasing sequence, and $\gamma_k$ satisfies
\begin{align}\label{mainLemmaCond}\gamma_k \leq \frac{{\ulambda}}{{{\olambda}_k ^2}(L+\mu_0)},\quad \hbox{for all }k\geq 0.
\end{align}Then, the following inequality holds for all $k$:
\begin{align}\label{ineq:cond-recursive-F-k}
\nonumber\mathbb E[{f_{\mu_{k+1}}(x_{k+1})\mid\mathcal F_k}]-f^* &\leq (1-{{\ulambda}}\mu_k\gamma_k)(f_{\mu_k}(x_k)-f^*) +\frac{{\ulambda}\mbox{dist}^2(x_0,X^*)}{2}\mu_k^2\gamma_k\\&+\frac{ (L+\mu_k){\olambda_k ^2}{( \nu_1^2\|x_k\|^2+\nu_2^2)}}{2N_k}\gamma_k^2.
\end{align}
\end{lemma}
\begin{proof}
By the Lipschitzian property of $\nabla f_{\mu_k}${,
update rule \eqref{rVS-SQN} and Def.~\ref{def:regularizedf}}, we obtain
{\begin{align}\label{ineq:term1-2}
& \quad f_{\mu_k}(x_{{k+1}}) \nonumber\leq f_{\mu_k}(x_k)+\nabla f_{\mu_k}(x_k)^T(x_{k+1}-x_k)+\frac{ (L+\mu_k)}{2}\|x_{k+1}-x_k\|^2
\\&\leq f_{\mu_k}(x_k)-\gamma_k\underbrace{\nabla f_{\mu_k}(x_k)^TH_k(\nabla f_{\mu_k}(x_k)+\bar w_{k,N_k})}_{\tiny\hbox{Term } 1}+ \frac{ (L+\mu_k)}{2}\gamma_k^2\underbrace{\|H_k(\nabla f_{\mu_k}(x_k)+\bar w_{k,N_k})\|^2}_{\tiny\hbox{ Term } 2},
\end{align}}
{where} $\bar{w}_{k,N_k} \triangleq \frac{\sum_{j=1}^{N_k} \left({\nabla_{x}}
{F}_{\mu_k}(x_k,\omega_{j,k})-\nabla f_{\mu_k}(x_k)\right)}{N_k}$. Next, we estimate the conditional expectation of Terms 1 and 2. From Assumption \ref{assump:Hk}, we have
\begin{align*}
\hbox{Term }1 &= \nabla f_{\mu_k}(x_k)^TH_k\nabla f_{\mu_k}(x_k)+\nabla f_{\mu_k}(x_k)^TH_k\bar w_{k,N_k}\geq {\ulambda}\|\nabla f_{\mu_k}(x_k)\|^2+\nabla f_{\mu_k}(x_k)^TH_k\bar w_{k,N_k}.
\end{align*}
{Thus, taking conditional expectations, from \eqref{ineq:term1-2},} we obtain
\begin{align}\label{equ:Term1}
\notag\mathbb E[{\hbox{Term } 1\mid\mathcal F_k}] &\notag\geq {\ulambda}\|\nabla f_{\mu_k}(x_k)\|^2+\mathbb E[{\nabla f_{\mu_k}(x_k)^TH_k\bar w_{k,N_k}\mid\mathcal F_k}]\\ &={\ulambda}\|\nabla f_{\mu_k}(x_k)\|^2+\nabla f_{\mu_k}(x_k)^TH_k\mathbb E[{\bar w_{k,N_k}\mid\mathcal F_k}] ={\ulambda}\|\nabla f_{\mu_k}(x_k)\|^2,\end{align}
where $\mathbb E[{\bar w_{k,N_k}\mid\mathcal F_k}]=0$ and $\mathbb E[{H_k\mid\mathcal F_k}]=H_k$ {a.s.} Similarly, invoking Assumption~\ref{assump:Hk}(S), we may bound Term 2.
\begin{align*}
\hbox{Term } 2&= (\nabla f_{\mu_k}(x_k)+\bar w_{k,N_k})^TH_k^2(\nabla f_{\mu_k}(x_k)+\bar w_{k,N_k}) \leq {{\olambda_k} ^2}\|\nabla f_{\mu_k}(x_k)+\bar w_{k,N_k}\|^2 \\&={{\olambda_k} ^2}\left(\|\nabla f_{\mu_k}(x_k)\|^2+\|\bar w_{k,N_k}\|^2+2\nabla f_{\mu_k}(x_k)^T\bar w_{k,N_k}\right).\end{align*}
Taking conditional expectations in the preceding inequality and using Assumption \ref{state noise} \vvs{(S-M), \ref{state noise} (S-B)}, we obtain
\begin{align}\label{equ:Term2}
\mathbb E[{\hbox{Term } 2\mid\mathcal F_k}]\notag&\leq{\olambda_k^2}\Big(\|\nabla f_{\mu_k}(x_k)\|^2+\mathbb E[{\|\bar w_{k,N_k}\|^2\mid\mathcal F_k}]\\&+2\nabla f_{\mu_k}(x_k)^T\mathbb E[{\bar w_{k,N_k}\mid\mathcal F_k}]\Big) \leq {\olambda^2_k}\left(\|\nabla f_{\mu_k}(x_k)\|^2+{{\nu_1^2\|x_k\|^2+\nu_2^2}\over N_k}\right).\end{align}
By taking conditional expectations in \eqref{ineq:term1-2}, and by \eqref{equ:Term1}--\eqref{equ:Term2},
\begin{align*}
\quad \mathbb E[{f_{\mu_k}(x_{k+1})\mid\mathcal F_k}] &\leq f_{\mu_k}(x_k)-\gamma_k{\ulambda}\|\nabla f_{\mu_k}(x_k)\|^2+{{\olambda}_k ^2}\frac{ (L+\mu_k)}{2}\gamma_k^2\left(\|\nabla f_{\mu_k}(x_k)\|^2+{{\nu_1^2\|x_k\|^2+\nu_2^2}\over N_k}\right) \\
&\leq f_{\mu_k}(x_k)-\frac{\gamma_k\ulambda}{2}\|\nabla f_{\mu_k}(x_k)\|^2\left(2-\frac{{\olambda_k ^2}\gamma_k(L+\mu_k)}{{\ulambda}}\right)+{\olambda_k ^2}\frac{ (L+\mu_k)}{2}{\gamma_k^2{( \nu_1^2\|x_k\|^2+\nu_2^2)}\over N_k}.
\end{align*}
\us{From} \eqref{mainLemmaCond}, $\gamma_k\leq \frac{{\ulambda}}{{\olambda_k ^2}(L+\mu_0)}$ for any $k \geq 0$.
Since $\{\mu_k\}$ is {a} non-increasing sequence, it follows that
\begin{align*}
\gamma_k \leq \frac{{\ulambda}}{{\olambda_k ^2}(L+\mu_k)}
\implies 2-\frac{{\olambda_k ^2}\gamma_k(L+\mu_k)}{{\ulambda}} \geq 1.\end{align*}
Hence, the following holds.
\begin{align*}
\mathbb E[{f_{\mu_k}(x_{k+1}) \mid\mathcal F_k}]&\leq f_{\mu_k}(x_k)-\frac{\gamma_k{\ulambda}}{2}\|\nabla f_{\mu_k}(x_k)\|^2+{\olambda_k ^2}\frac{ (L+\mu_k)}{2}{\gamma_k^2{( \nu_1^2\|x_k\|^2+\nu_2^2)}\over N_k}\\
&\hspace{-0.2in} \overset{\tiny \mbox{(iii) in Def.~\ref{def:regularizedf}}}{\leq}
f_{\mu_k}(x_k)-{\ulambda}\mu_k\gamma_k(f_{\mu_k}(x_k)-f_{\mu_k}(x^*_k))+{\olambda_k ^2}\frac{ (L+\mu_k)}{2}{\gamma_k^2{( \nu_1^2\|x_k\|^2+\nu_2^2)}\over N_k}.
\end{align*}
By using Definition \ref{def:regularizedf} and {non-increasing {property} of } $\{\mu_k\}$,
\begin{align} \label{ineq:lemmaLastIneq}
\notag
&\mathbb E[{f_{\mu_{k+1}}(x_{k+1})\mid \mathcal F_k}] \leq\mathbb E[{f_{\mu_k}(x_{k+1})\mid \mathcal F_k}]\implies\\&
\mathbb E[{f_{\mu_{k+1}}(x_{k+1})\mid\mathcal F_k}]\leq f_{\mu_k}(x_k)-{\ulambda}\mu_k\gamma_k(\overbrace{f_{\mu_k}(x_k)-f_{\mu_k}(x^*_k)}^{\tiny{\mbox{Term 3}}})+{\olambda^2 _k}\frac{ (L+\mu_k)}{2}{\gamma_k^2{( \nu_1^2\|x_k\|^2+\nu_2^2)}\over N_k}.
\end{align}
Next, we derive a lower bound for Term 3. Since $x_k^*$ is the unique minimizer of $f_{\mu_k}$, we have $f_{\mu_k}(x_k^*) \leq f_{\mu_k}(x^*)$. Therefore, invoking Definition \ref{def:regularizedf}, for an arbitrary optimal solution $x^* \in X^*$,
\begin{align*}
f_{\mu_k}(x_k)-f_{\mu_k}(x^*_k) & \geq f_{\mu_k}(x_k)-f_{\mu_k}(x^*) =f_{\mu_k}(x_k)-f^*-\frac{\mu_k}{2}\|x^*-x_0\|^2.\end{align*}
From the preceding relation and \eqref{ineq:lemmaLastIneq}, we have
\begin{align*}
\mathbb E[{f_{\mu_{k+1}}(x_{k+1})\mid\mathcal F_k}]& \leq f_{\mu_k}(x_k)-\ulambda\mu_k\gamma_k(f_{\mu_k}(x_k)-f^*)+\frac{\ulambda\|x^*-x_0\|^2\mu_k^2\gamma_k}{2}
\\&+\frac{(L+\mu_k){\olambda^2 _k}{( \nu_1^2\|x_k\|^2+\nu_2^2)}\gamma_k^2}{2N_k}.
\end{align*}
By subtracting $f^*$ from both sides {and {by noting that
this inequality holds for all $x^* \in X^*$ where $X^*$ denotes the solution set},
the desired result is obtained.}
\end{proof}
{We now {derive the rate for} sequences produced by \eqref{rVS-SQN} {under the following assumption.}}
\begin{assumption}
\label{assum:sequences-ms-convergence}
Let the positive sequences {$\{N_k,\gamma_k,\mu_k,t
_k\}$} satisfy the following conditions:
{\begin{itemize}
\item [(a)] $\{\mu_k\}, \{\gamma_k\}$ are non-increasing sequences such that $\mu_k,\gamma_k \to 0$; $\{t_k\}$ is {an} increasing sequence;
\item [(b)] $\left(1-{\ulambda\mu_k\gamma_k}{+{2(L+\mu_0)\olambda_k^2\nu_1^2\gamma_k^2\over N_k\alpha}}\right)t_{k+1}\leq t_k, \ \forall k\geq \tilde K$ for some $\tilde K\geq 1$;
\item [(c)] $\sum_{k=0}^{\infty}{\mu_k^2\gamma_k}{t_{k+1}}={\bar c_0}<\infty$;
\item [(d)] $\sum_{k=0}^\infty {\mu_k^{-{2}\bar\delta(n+m)}\gamma_k^2\over N_k}{t_{k+1}}={\bar c_1}<\infty$.
\end{itemize}}
\end{assumption}
\begin{theorem}[{\bf Convergence of \eqref{rVS-SQN} in mean}]\label{thm:mean}
Consider the \eqref{rVS-SQN} scheme and suppose Assumptions ~\ref{assum:convex-smooth}, \ref{state noise}(S-M), \ref{state noise}(S-B),~\ref{assump:Hk}(S),~\ref{growth} and~\ref{assum:sequences-ms-convergence} hold.
{There exist} $\tilde K\geq 1$ and scalars $\bar c_0, \bar c_1$ (defined in Assumption ~\ref{assum:sequences-ms-convergence}) such that the
following inequality holds for all {$K\geq \tilde
K+1$}:\begin{align}\label{ineq:bound}
\mathbb E[{f(x_{K})-f^*}]
\leq {{t_{\tilde K}}\over t_K}\mathbb E[{{f_{\mu_{\tilde K}}(x_{\tilde K})}-f^*}] +{\bar c_0+\bar c_1\over t_K}.
\end{align}
\end{theorem}
\begin{proof}
We begin by noting that Assumption~\ref{assum:sequences-ms-convergence}(a,b) implies that \eqref{ineq:cond-recursive-F-k} holds for $k \geq \tilde K$, where $\tilde K$ is defined in Assumption~\ref{assum:sequences-ms-convergence}(b). Since the conditions of Lemma \ref{lemma:main-ineq} are met, taking expectations on both sides of \eqref{ineq:cond-recursive-F-k}:
\begin{align*}
\mathbb E[{f_{\mu_{k+1}}(x_{k+1})-f^*}] \notag
& \leq \left(1-\ulambda{\mu_k\gamma_k}\right)\mathbb E[{f_{\mu_k}(x_k)-f^*}] +\frac{\ulambda\mbox{dist}^2(x_0,X^*)}{2}\mu_k^2\gamma_k \\&+\frac{ (L+\mu_0){\olambda^2 _k}{( \nu_1^2\|x_k-x^*+x^*\|^2+\nu_2^2)}}{2N_k}\gamma_k^2 \quad \forall k\geq \tilde K.
\end{align*}
{Now, by using the quadratic growth property, i.e., $\|x_k-x^*\|^2\leq {2\over \alpha}\left(f(x_k)-f(x^*)\right)$, and the fact that $\|x_k-x^*+x^*\|^2\leq 2\|x_k-x^*\|^2+2\|x^*\|^2$, we obtain the following relationship}
\begin{align*}
\mathbb E[{f_{\mu_{k+1}}(x_{k+1})-f^*}] \notag
& \leq \left(1-\ulambda\mu_k\gamma_k{+{2(L+\mu_0)\olambda_k^2\nu_1^2\gamma_k^2\over N_k\alpha}}\right)\mathbb E[{f_{\mu_k}(x_k)-f^*}] +\frac{\ulambda\mbox{dist}^2(x_0,X^*)}{2}\mu_k^2\gamma_k \\&+\frac{ (L+\mu_0){\olambda^2 _k}( {2\nu_1^2\|x^*\|^2+\nu_2^2})}{2N_k}\gamma_k^2.
\end{align*}
By multiplying both sides by $t_{k+1}$, using Assumption~\ref{assum:sequences-ms-convergence}(b) and $\olambda_k=\lambda \mu_k^{-\bar\delta(n+m)}$, we obtain
\begin{align}\label{ineq:cond-recursive-F-k-expected2}
& \quad t_{k+1}\mathbb E[{f_{\mu_{k+1}}(x_{k+1})-f^*}]
\leq t_k\mathbb E[{f_{\mu_k}(x_k)-f^*}] +A_1\mu_k^2\gamma_kt_{k+1} +\frac{ A_2{ \mu_k^{-2\bar\delta(n+m)}}}{N_k}\gamma_k^2t_{k+1},
\end{align}
where $A_1\triangleq \tfrac{ \underline{\lambda} \mbox{\scriptsize dist}^2(x_0,X^*)}{2}$ and $A_2\triangleq\frac{ (L+\mu_0){\lambda^2 }( {2\nu_1^2\|x^*\|^2+\nu_2^2})}{2}$. By summing \eqref{ineq:cond-recursive-F-k-expected2} from {$k=\tilde K$} to $K-1$, for {$K\geq \tilde K+1$}, and dividing both sides by $t_K$, we obtain
\begin{align*}
&\nonumber \mathbb E[{f_{\mu_K}(x_{K})-f^*}]
\leq {{t_{\tilde K}}\over t_K}\mathbb E[{{f_{\mu_{\tilde K}}(x_{\tilde K})}-f^*}] +{\sum_{k={\tilde K}}^{K-1}A_1\mu_k^2\gamma_kt_{k+1}\over t_K} +{\sum_{k={\tilde K}}^{K-1}A_2\mu_k^{-2\bar\delta(n+m)}\gamma_k^2t_{k+1}N_k^{-1}\over t_K}.
\end{align*}
From Assumption \ref{assum:sequences-ms-convergence}(c,d), $\sum_{k={\tilde K}}^{K-1}\left( A_1 \mu_k^2\gamma_kt_{k+1}+ A_2 \mu_k^{-2\bar\delta(n+m)}\gamma_k^2{t_{k+1}\over N_k}\right)\leq {A_1\bar c_0+A_2\bar c_1}$. Therefore, \af{by using the fact that $f(x_K)\leq f_{\mu_K}(x_K)$} and absorbing the constants $A_1,A_2$ into $\bar c_0,\bar c_1$, we obtain \\ $ \mathbb E[{f(x_{K})-f^*}]
\leq {{t_{\tilde K}}\over t_K}\mathbb E[{{f_{\mu_{\tilde K}}(x_{\tilde K})}-f^*}] +{{\bar c_0+\bar c_1}\over t_K}.$
\end{proof}
We now show that the requirements of Assumption~\ref{assum:sequences-ms-convergence} are satisfied under suitable assumptions.
\begin{corollary}
Let $N_k\triangleq\lceil N_0 k^a\rceil$, $\gamma_k\triangleq\gamma_0k^{-b}$, $\mu_k\triangleq\mu_0k^{-c}$ and {$t_k\triangleq t_0(k-1)^{h}$} for some
$a,b,c,h>0$. {Let $2\bar \delta (m+n)=\varepsilon$ for
$\varepsilon>0$}. {Then Assumption~\ref{assum:sequences-ms-convergence} holds if} {${a+2b-c\varepsilon\geq b+c}, \ {N_0 {\geq}{(L+\mu_0)\lambda^2\nu_1^2\gamma_0\over \alpha\ulambda\mu_0}},\ b+c<1$, $h\leq 1$, $b+2c-h>1$ and $a+2b-h-c\varepsilon>1$}.
\end{corollary}
\begin{proof}
{From} $N_k=\lceil N_0 k^a\rceil\geq N_0 k^a$, $\gamma_k=\gamma_0k^{-b}$ and $\mu_k=\mu_0k^{-c}$, {the} requirements to satisfy Assumption \ref{assum:sequences-ms-convergence} are as follows:
\begin{itemize}
\item [(a)] $\lim_{k \to \infty }{\gamma_0}k^{-b}=0, \lim_{k \to \infty }{\mu_0}k^{-c}=0 \Leftrightarrow b, c>0$;
\item [(b)] $\left(1-{\ulambda\mu_k\gamma_k}{+{2(L+\mu_0)\olambda_k^2\nu_1^2\gamma_k^2\over N_k\alpha}}\right)\leq {t_k\over t_{k+1}} \Leftrightarrow \left(1-{1\over k^{b+c}}+{1\over k^{a+2b-c\varepsilon}}\right)\leq (1-1/k)^h$. From the Taylor expansion of the right-hand side and assuming $h\leq 1$, we get $\left(1-{1\over k^{b+c}}+{1\over k^{a+2b-c\varepsilon}}\right)\leq 1-M/k$ for some $M>0$ and all $k\geq \tilde K$; this holds when $h\leq1$, $b+c<1$, ${a+2b-c\varepsilon\geq b+c}$ and ${N_0\geq{(L+\mu_0)\lambda^2\nu_1^2\gamma_0\over \alpha\ulambda\mu_0}}$;
\item [(c)] $\sum_{k=0}^{\infty}{\mu_k^2\gamma_k}{t_{k+1}}<\infty\Leftarrow \sum_{k=0}^\infty {1\over k^{b+2c-h}}<\infty\Leftrightarrow b+2c-h>1$;
\item [(d)] $\sum_{k=0}^\infty {\mu_k^{-{2}\bar\delta(n+m)}\gamma_k^2\over N_k}{t_{k+1}}<\infty\Leftarrow \sum_{k=0}^\infty {1\over k^{a+2b-h-c\varepsilon}}<\infty\Leftrightarrow a+2b-h-c\varepsilon>1$;
\end{itemize}
\end{proof}
One can easily verify that $a=2+\varepsilon$, $b=\varepsilon$, {$c=1-{2\over
3}\varepsilon$}, and $h=1-\varepsilon$ satisfy these
conditions. {We derive complexity statements for \eqref{rVS-SQN} for a specific choice of parameter sequences.}
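As a sketch (not a replacement for the preceding proof), the exponent conditions $b+2c-h>1$, $a+2b-h-c\varepsilon>1$ and $a+2b-c\varepsilon\geq b+c$ can be checked numerically for this choice of $a,b,c,h$ and small $\varepsilon>0$:

```python
# Check the exponent conditions b + 2c - h > 1, a + 2b - h - c*eps > 1 and
# a + 2b - c*eps >= b + c for the choice a = 2+eps, b = eps, c = 1 - 2*eps/3,
# h = 1 - eps (a numerical sketch only).
for eps in (0.01, 0.1, 0.3):
    a, b, c, h = 2 + eps, eps, 1 - 2 * eps / 3, 1 - eps
    assert b + 2 * c - h > 1               # summability condition (c)
    assert a + 2 * b - h - c * eps > 1     # summability condition (d)
    assert a + 2 * b - c * eps >= b + c    # dominance condition from (b)
```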
\begin{theorem}[{\bf Rate statement and Oracle complexity}]\label{oracle smooth}
Consider the \eqref{rVS-SQN} scheme and suppose Assumptions ~\ref{assum:convex-smooth}, \ref{state noise}(S-M), \ref{state noise}(S-B), \ref{assump:Hk}(S), { \ref{growth}}
and \ref{assum:sequences-ms-convergence} hold. Suppose $\gamma_k\triangleq{\gamma_0k^{-b}}$, $\mu_k\triangleq{\mu_0k^{-c}}$, $t_k\triangleq t_0(k-1)^h$ and $N_k\triangleq\lceil N_0k^{a}\rceil$ where {$\red{N_0={(L+\mu_0)\lambda^2\nu_1^2\gamma_0\over \alpha\ulambda\mu_0}}$, $a=2+\varepsilon$,
$b=\varepsilon$ and {$c=1-{2\over 3}\varepsilon$}} and $h=1-\varepsilon$.
\noindent (i) {Then the following holds for $K \geq
\tilde{K}$ where $\tilde K\geq 1$ and $\tilde C \triangleq {{f_{\mu_{\tilde
K}}(x_{\tilde K})}-f^*} $.} \begin{align}\label{rate K}
& \mathbb E[{f(x_{K})-f^*}]
\leq {\tilde C+\bar c_0+\bar c_1\over K^{1-\varepsilon}}.
\end{align}
(ii) Let ${\epsilon>0}$ and {$ K\geq \tilde K+1$} such that $\mathbb E[f(x_{ K})]-f^*\leq {\epsilon}$. Then{,} {$\sum_{k=0}^{ K}N_k\leq {\mathcal O\left({ {\epsilon}^{-{3+\varepsilon\over 1-\varepsilon}}}\right)}$}.
\end{theorem}
\begin{proof}
(i) {By choosing the sequence parameters as specified, the result follows immediately from Theorem \ref{thm:mean}.}
\noindent (ii) To find an $x_{ K}$ such that $\mathbb E[f(x_{ K})]-f^*\leq {\epsilon}$, we require ${\tilde C+\bar c_0+\bar c_1\over K^{1-\varepsilon}}\leq {\epsilon}$, which implies that $ K=\lceil {\left(C\over {\epsilon}\right)^{1\over1-{\varepsilon}}}\rceil$, where $C\triangleq \tilde C+\bar c_0+\bar c_1$. Hence, the following holds.
\begin{align*}
& \sum_{k=0}^{ K} N_k\leq \sum_{k=0}^{1+{(C/{\epsilon})^{1\over 1-\varepsilon}}}2N_0 k^{2+\varepsilon} \leq 2N_0\int_0^{{1+{(C/{\epsilon})}^{1\over 1-\varepsilon}}} {x}^{2+\varepsilon} \ d{x}=\frac{2N_0\left({1+\left(C/{\epsilon}\right)^{1\over 1-\varepsilon}}\right)^{3+\varepsilon}}{3+\varepsilon}
\leq \mathcal O\left({{\epsilon}^{-{3+\varepsilon\over 1-\varepsilon}}}\right).
\end{align*}
\end{proof}
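The counting argument in part (ii) can be checked numerically; the following sketch verifies, for a few hypothetical values of $N_0$, $\varepsilon$ and $K$, that $\sum_{k=1}^{K}\lceil N_0 k^{2+\varepsilon}\rceil \leq 2N_0(K+1)^{3+\varepsilon}/(3+\varepsilon)$, which is the bound obtained from $\lceil x\rceil\leq 2x$ and the integral comparison.

```python
import math

# Numerical check of the oracle-complexity bound: with N_k = ceil(N0 * k^(2+eps))
# and ceil(x) <= 2x for x >= 1,
#   sum_{k=1}^{K} N_k <= 2*N0*(K+1)^(3+eps) / (3+eps).
def total_samples(N0, eps, K):
    return sum(math.ceil(N0 * k ** (2 + eps)) for k in range(1, K + 1))

for N0, eps, K in [(1, 0.1, 50), (2, 0.25, 200)]:
    bound = 2 * N0 * (K + 1) ** (3 + eps) / (3 + eps)
    assert total_samples(N0, eps, K) <= bound
```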
{One may instead impose the following requirement on the conditional second moment of the sampled gradient in place of the state-dependent noise requirement (Assumption \ref{state noise}).
\begin{assumption}\label{assum_error}
\vvs{Let $\bar{w}_{k,N_k} \triangleq \nabla_x f(x_k) - \tfrac{\sum_{j=1}^{N_k} \nabla_x F(x_k,\omega_{j,k})}{N_k}$. Then
there exists $\nu>0$ such that $\mathbb{E}[\|\bar{w}_{k,N_k}\|^2\mid \mathcal{F}_k] \leq \tfrac{\nu^2}{N_k}$ and $\mathbb{E}[\bar{w}_{k,N_k} \mid \mathcal{F}_k] = 0$ {hold} a.s.
for all $k$, where $\mathcal{F}_k
\triangleq \sigma\{x_0, x_1, \hdots, x_{k-1}\}$.}
\end{assumption}
By invoking Assumption \ref{assum_error}, we can derive the rate result without requiring a quadratic growth property of the objective function.
\begin{corollary}
[{\bf Rate statement and Oracle complexity}]
Consider \eqref{rVS-SQN} and suppose Assumptions ~\ref{assum:convex-smooth}, \ref{assump:Hk}(S),
\ref{assum:sequences-ms-convergence} and \ref{assum_error} hold. Suppose $\gamma_k={\gamma_0k^{-b}}$, $\mu_k={\mu_0k^{-c}}$, $t_k=t_0(k-1)^h$ and $N_k=\lceil k^{a}\rceil$ where $a=2+\varepsilon$,
$b=\varepsilon$ and $c=1-{4\over 3}\varepsilon$ and $h=1-\varepsilon$.
\noindent (i) {Then for $K \geq
\tilde{K}$ where $\tilde K\geq 1$ and $\tilde C \triangleq {{f_{\mu_{\tilde
K}}(x_{\tilde K})}-f^*} $,}
$
\mathbb E[{f(x_{K})-f^*}]
\leq {\tilde C+\bar c_0+\bar c_1\over K^{1-\varepsilon}}.
$
(ii) Let ${\epsilon>0}$ and {$ K\geq \tilde K+1$} such that $\mathbb E[f(x_{ K})]-f^*\leq {\epsilon}$. Then, {$\sum_{k=0}^{ K}N_k\leq {\mathcal O\left({ {\epsilon}^{-{3+\varepsilon\over 1-\varepsilon}}}\right)}$}.
\end{corollary}
\blue{\begin{remark}
Although the oracle complexity of (\ref{rVS-SQN}) is poorer than the canonical $\mathcal{O}(1/\epsilon^2)$,
there are several reasons to consider
using SQN schemes over their gradient-based
counterparts. (a) Sparsity. In many machine learning problems, the sparsity
properties of the estimator are of relevance. However, averaging schemes tend
to have a detrimental impact on the sparsity properties while non-averaging
schemes do a far better job in preserving such properties. Both accelerated and
unaccelerated gradient schemes for smooth stochastic convex optimization rely
on averaging and this significantly impacts the sparsity of the estimators.
(See Table \ref{compare_spars} in Section \ref{sec:5}). (b) Ill-conditioning.
As is relatively well known, quasi-Newton schemes do a far better job of
contending with ill-conditioning in practice, in comparison with gradient-based
techniques. (See Tables \ref{quad_ill} and \ref{convex_ill} in Section
\ref{sec:5}.) \end{remark}}
}
\subsection{Nonsmooth convex optimization}
We now consider problem~\eqref{main problem} when $f$ is nonsmooth but $(\alpha,\beta)$-smoothable and consider the \eqref{rsVS-SQN} scheme,
defined as follows, where $H_k$ is generated by {\bf rsL-BFGS} scheme.
\begin{align}\tag{\bf rsVS-SQN}\label{rsVS-SQN}
x_{k+1}:=x_k-\gamma_kH_k{\frac{\sum_{j=1}^{N_k} \nabla_x F_{\eta_k,\mu_k}(x_k,\omega_{j,k})}{N_k}}.
\end{align}
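Analogously to the smooth case, the following sketch simulates an \eqref{rsVS-SQN}-style iteration on the hypothetical nonsmooth instance $F(x,\omega)=|x-\omega|$ with $\omega\sim U(-1,1)$, replacing $\nabla_x F_{\eta_k,\mu_k}$ by the gradient of its Huber smoothing plus the regularization term, and taking $H_k=1$ in place of the {\bf rsL-BFGS} matrix; all parameter choices are assumptions of the example.

```python
import random

# One-dimensional sketch of the (rsVS-SQN) update on F(x, w) = |x - w|,
# w ~ Uniform(-1, 1), so f(x) = E|x - w| is minimized at the median x* = 0.
# The smoothed-plus-regularized gradient used here is
#   grad F_{eta,mu}(x, w) = huber'(x - w; eta) + mu * (x - x0).
random.seed(1)

def smoothed_grad(x, w, eta, mu, x0):
    d = x - w
    g = d / eta if abs(d) <= eta else (1.0 if d > 0 else -1.0)  # huber'
    return g + mu * (x - x0)

x, x0 = 3.0, 3.0
for k in range(1, 300):
    eta_k, mu_k, gamma_k = 0.5, 1.0 / k, 0.2   # smoothing, regularization, step
    N_k = k                                    # growing batch size
    g = sum(smoothed_grad(x, random.uniform(-1.0, 1.0), eta_k, mu_k, x0)
            for _ in range(N_k)) / N_k
    x = x - gamma_k * g

assert abs(x) < 0.5                            # iterate approaches x* = 0
```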
{Note that in this section, we set $m=1$ for the sake of simplicity, but the analysis can be extended to $m>1$.
Next, we generalize Lemma \ref{rLBFGS-matrix} to {show that Assumption \ref{assump:Hk} is
satisfied and that both the secant condition ({{\bf SC}}) and the secant equation
({{\bf SE}}) hold. ({See the Appendix for the proof.})}
\begin{lemma}[{\bf Properties of Hessian approximation produced by
(rsL-BFGS)}]\label{rsLBFGS-matrix} Consider the \eqref{rsVS-SQN} method, {where} $H_k$ {is updated} by
\eqref{eqn:H-k}-\eqref{eqn:H-k-m}, $s_i$ and $y_i$ are defined in
\eqref{equ:siyi-LBFGS} {and} $\eta_k$ and $\mu_k$ are updated according to procedure
\eqref{eqn:mu-k}. Let Assumption \ref{assum:convex2} hold. Then the
following hold. \begin{itemize}
\item [(a)] For any odd $k > 2m$, {(SC) holds}, i.e., $s_k^T{y_k} >0$;
\item [(b)] For any odd $k > 2m$, {(SE) holds}, i.e., $H_{k}{y}_k=s_k$.
\item [(c)] For any $k > 2m$, $H_k$ satisfies Assumption~\ref{assump:Hk}{(NS)} with
${{\ulambda_{k}}={1\over (m+n)(1/\eta_k^\delta+\mu_0^{\bar \delta})}}$ and
$\\ {{\olambda_{k}}={(m+n)^{n+m-1}(1/\eta_k^\delta+\mu_0^{\bar \delta})^{n+m-1}\over (n-1)!\mu_k^{(n+m)\bar \delta}}}$, {for scalars $\delta,\bar \delta>0$.}
Then {for all $k$, we have that $H_k = H_k^T$ and $\mathbb E[{H_k\mid\mathcal F_k}]=H_k$ and
$
{\ulambda_{k}\mathbf{I} \preceq H_{k} \preceq \olambda_k \mathbf{I}}$ both hold in an a.s. fashion.}
\end{itemize}
\end{lemma}
We now derive a rate statement for the mean sub-optimality.}
\begin{theorem}[{\bf Convergence in mean}]\label{thm:mean:nonsmooth}
Consider the \eqref{rsVS-SQN} scheme. Suppose Assumptions ~\ref{assum:convex2}, \ref{state noise} (NS-M), \ref{state noise} (NS-B), \ref{assump:Hk} (NS), {and \ref{growth}}
hold. Let {$\gamma_k=\gamma$, $\mu_k=\mu$, and $\eta_k=\eta$ be chosen such that \eqref{mainLemmaCond} holds ({where $L=1/\eta$}).}
{If {$\bar x_K \triangleq \frac{\sum_{k=0}^{K-1}x_k(\ulambda\mu\gamma-C/N_k)}{\sum_{k=0}^{K-1}(\ulambda\mu\gamma-C/N_k)}$}, then \eqref{non_smooth_lemma} holds for $K \geq 1$ and {$C={2(1+\mu\eta)\olambda^2\nu_1^2\gamma^2\over \alpha \eta}$}.}
\begin{align}\label{non_smooth_lemma}
\left(K\ulambda \mu \gamma{-\sum_{k=0}^{K-1}{C\over N_k}}\right)\mathbb E[{f_{\eta,\mu}(\bar x_{K})-f^*}]
\nonumber&\leq\mathbb E[f_{\eta,\mu}(x_0)-f^*]+\eta B^2+{\ulambda \mbox{dist}^2(x_0,X^*)\over 2}\mu^2\gamma K\\&+\sum_{k=0}^{K-1}{(1+\mu\eta)\olambda^2( {2\nu_1^2\|x^*\|^2+\nu_2^2})\gamma^2\over 2N_k\eta}.
\end{align}
\end{theorem}
\begin{proof} Since Lemma \ref{lemma:main-ineq} {may be invoked}, by taking
expectations on both sides of \eqref{ineq:cond-recursive-F-k}, for
any $k\geq 0$ {letting $\bar{w}_{k,N_k} \triangleq \frac{\sum_{j=1}^{N_k} \left({\nabla_{x}}
{F}_{\eta_k,\mu_k}(x_k,\omega_{j,k})-\nabla f_{\eta_k,\mu_k}(x_k)\right)}{N_k},$} and by letting {${{\ulambda}\triangleq {1\over (m+n)(1/\eta^\delta+\mu^{\bar \delta})}}$,
{${\olambda}\triangleq {(m+n)^{n+m-1}(1/\eta^\delta+\mu^{\bar \delta})^{n+m-1}\over (n-1)!\mu^{(n+m)\bar \delta}}$}}, { using the quadratic growth property, i.e., $\|x_k-x^*\|^2\leq {2\over \alpha}\left(f(x_k)-f(x^*)\right)$, and the fact that $\|x_k-x^*+x^*\|^2\leq 2\|x_k-x^*\|^2+2\|x^*\|^2$, we obtain the following}
\begin{align*}
\mathbb E[{f_{{\eta},{\mu}}(x_{k+1})-f^*}]
& \leq \left(1-{{\ulambda}}{\mu\gamma}{+{2(1+\mu \eta)\olambda^2\nu_1^2\gamma^2\over \alpha N_k\eta}}\right)\mathbb E[{f_{\eta,{\mu}}(x_k)-f^*}]
+ {\ulambda \mbox{dist}^2(x_0,X^*)\over 2}\mu^2\gamma\\& +{(1+\mu\eta)\olambda^2( {2\nu_1^2\|x^*\|^2+\nu_2^2})\gamma^2\over 2N_k\eta}
\end{align*}
\begin{align*}
\implies \left( \ulambda{\mu\gamma}{-{2(1+\mu \eta)\olambda^2\nu_1^2\gamma^2\over \alpha N_k\eta}}\right)\mathbb E[{f_{\eta,{\mu}}(x_k)-f^*}]
& \leq \mathbb E[{f_{\eta,{\mu}}(x_k)-f^*}]- \mathbb E[{f_{{\eta},{\mu}}(x_{k+1})-f^*}] \\
+{\ulambda \mbox{dist}^2(x_0,X^*)\mu^2\gamma\over 2}
& +{(1+\mu\eta)\olambda^2( {2\nu_1^2\|x^*\|^2+\nu_2^2})\gamma^2\over 2N_k\eta} .\end{align*}
Summing from $k=0$ to $K-1$ and by invoking {Jensen's inequality}, we obtain the following
\begin{align*}
\left(K\ulambda \mu \gamma{-\sum_{k=0}^{K-1}{C\over N_k}}\right)\mathbb E[{f_{\eta,\mu}(\bar x_{K})-f^*}]
&\leq\mathbb E[f_{\eta,\mu}(x_0)-f^*]-\mathbb E[f_{\eta,\mu}(x_K)-f^*]\\&+{\ulambda \mbox{dist}^2(x_0,X^*)\over 2}\mu^2\gamma K+\sum_{k=0}^{K-1}{(1+\mu\eta)\olambda^2( {2\nu_1^2\|x^*\|^2+\nu_2^2})\gamma^2\over 2N_k\eta},
\end{align*}
where {$C={2(1+\mu\eta)\olambda^2\nu_1^2\gamma^2\over \alpha\eta}$} and {$\bar x_K \triangleq \frac{\sum_{k=0}^{K-1}x_k(\ulambda\mu\gamma-C/N_k)}{\sum_{k=0}^{K-1}(\ulambda\mu\gamma-C/N_k)}$}. Since $\mathbb E[{f(x)}]\leq \mathbb
E[f_{\eta}(x)]+\eta B^2$ and $f_\mu(x)=f(x)+{\mu\over 2}\|x-x_0\|^2$, {we have that} $-\mathbb E[f_{\eta,\mu}(x_K)-f^*]\leq -{\mathbb{E}}[f_\mu(x_K)-f^*]+\eta B^2\leq \eta B^2$. Therefore, we obtain the following:
\begin{align*}
\left(K\ulambda \mu \gamma{-\sum_{k=0}^{K-1}{C\over N_k}}\right)\mathbb E[{f_{\eta,\mu}(\bar x_{K})-f^*}] &
\leq\mathbb E[f_{\eta,\mu}(x_0)-f^*]+\eta B^2\\&+{\ulambda \mbox{dist}^2(x_0,X^*)\over 2}\mu^2\gamma K+\sum_{k=0}^{K-1}{(1+\mu\eta)\olambda^2( {2\nu_1^2\|x^*\|^2+\nu_2^2})\gamma^2\over 2N_k\eta}.
\end{align*}
\end{proof}
{We refine this result for a set of parameter sequences.}
\begin{theorem}[{\bf Rate statement and oracle complexity}]\label{thm:rate K}
Consider \eqref{rsVS-SQN} and suppose Assumptions ~\ref{assum:convex2}, \ref{state noise} (NS-M), \ref{state noise} (NS-B), \ref{assump:Hk} (NS), {and \ref{growth}} hold, $\gamma {\triangleq} c_\gamma K^{-1/3+\bar \varepsilon}$, $\mu {\triangleq} {K^{-1/3}}$, $\eta \triangleq K^{-1/3}$ and $N_k\triangleq \lceil N_0{(k+1)}^{a}\rceil$, where $\bar \varepsilon \triangleq \tfrac{5\varepsilon}{3}$, $\varepsilon>0$, {$N_0>{C\over \ulambda \mu \gamma}$}, {$C={2(1+\mu\eta)\olambda^2\nu_1^2\gamma^2\over \alpha\eta}$} and $a>1$. Let $\delta={\varepsilon\over n+m-1}$ and $\bar \delta={\varepsilon\over n+m}$.
\noindent (i) For any $K \geq 1$,
$ \mathbb{E}[f(\bar x_{ K})]-f^*\leq {\mathcal O}(K^{-1/3}).$
\noindent (ii) Let ${\epsilon>0}$, {$a = (1+\epsilon)$}, and $ K\geq 1$ such that $\mathbb E[f(\bar x_{ K})]-f^*\leq {\epsilon}$. {Then{,} {$\sum_{k=0}^{ K}N_k\leq \mathcal O\left({ {\epsilon}^{-{(2+\varepsilon)\over 1/3}}}\right)$}}.
\end{theorem}
\begin{proof}
(i) {First, note that for $a>1$ and $N_0>{C\over \ulambda\mu\gamma}$, we have $\sum_{k=0}^{K-1} {C\over N_k}<\infty$. Therefore, we may define $C_4\triangleq \sum_{k=0}^{K-1}{C\over N_k}$.} {Dividing both sides} of \eqref{non_smooth_lemma} by $K\ulambda\mu\gamma{-C_4}$ {and by recalling} that $f_\eta(x)\leq f(x)\leq f_\eta(x)+\eta B^2$ and $f(x)\leq f_\mu(x)$, we obtain
\begin{align*}
\mathbb E[{f(\bar x_{K})-f^*}]
& \leq{\mathbb E[f_\mu(x_0)-f^*]\over K\ulambda \mu \gamma{-C_4}}+{\eta B^2\over K\ulambda \mu \gamma{-C_4}}+\frac{{\ulambda \mbox{dist}^2(x_0,X^*)\over 2}\mu^2\gamma K}{K\ulambda\mu\gamma{-C_4}} \\&+\frac{\sum_{k=0}^{K-1}{(1+\mu\eta)\olambda^2( {2\nu_1^2\|x^*\|^2+\nu_2^2})\gamma^2\over 2N_k\eta}}{K\ulambda\mu\gamma{-C_4}}+\eta B^2.
\end{align*}
Note that by choosing $\gamma=c_\gamma K^{-1/3+\bar \varepsilon}$, $\mu={K^{-1/3}}$ and $\eta=K^{-1/3}$, where $\bar \varepsilon=5/3\varepsilon$, inequality \eqref{mainLemmaCond} is satisfied for sufficiently small $c_\gamma$. By choosing$N_k=\lceil N_0{(k+1)}^a\rceil\geq N_0 (k+2)^a$ for any $a>1$ and {$N_0>{C\over \ulambda\mu\gamma}$}, we have that
\begin{align*}
& \sum_{k=0}^{K-1}{1\over (k+1)^a}
\leq 1+\int_{0}^{K-1} (x+1)^{-a}dx\leq 1+{K^{1-a}\over 1-a} \\
\implies &\mathbb E[{f(\bar x_{K})-f^*}]
\leq{C_1\over K\ulambda \mu \gamma{-C_4}}+{\eta B^2\over K\ulambda \mu \gamma{-C_4}}+{C_2\ulambda\mu^2\gamma K \over K\ulambda\mu\gamma{-C_4}}+{C_3(1+\mu\eta)\olambda^2\gamma^2\over \eta N_0(K \mu \gamma{-C_4})}(1+K^{1-a})+\eta B^2,
\end{align*}
where $C_1=\mathbb E[f_\mu(x_0)-f^*]$, $C_2={ \mbox{dist}^2(x_0,X^*)\over 2}$ and $C_3={ {2\nu_1^2\|x^*\|^2+\nu_2^2}\over 2(1-a)}$. Choosing the parameters $\gamma,\mu$ and $\eta$ as stated and noting that {${{\ulambda}= {1\over (m+n)(1/\eta^\delta+\mu^{\bar \delta})}}=\mathcal O(\eta^\delta)= \mathcal O(K^{-\delta/3})$ and $\olambda={(m+n)^{n+m-1}(1/\eta^\delta+\mu^{\bar \delta})^{n+m-1}\over (n-1)!\mu^{(n+m)\bar \delta}}=\mathcal O(\eta^{-\delta(n+m-1)/\mu^{\bar \delta(n+m)}})= \mathcal O(K^{2\varepsilon/3})$, where we used the assumption that $\delta={\varepsilon\over n+m-1}$ and $\bar \delta={\varepsilon\over n+m}$}. Therefore, we obtain
$\mathbb E[{f(\bar x_{K})-f^*}]
\leq \mathcal O(K^{-1/3-5\varepsilon/3+\delta/3})+\mathcal O(K^{-2/3-5\varepsilon/3+\delta/3})+\mathcal O(K^{-1/3})+\mathcal O(K^{-2/3+3\varepsilon})+\mathcal O(K^{-1/3})= \mathcal O(K^{-1/3}).$
(ii) The proof is similar to part (ii) of Theorem \ref{oracle smooth}.
\end{proof}
\begin{remark}
{Note that in Theorem \ref{thm:rate K} we choose the steplength, regularization,
and smoothing parameters {as constants set in accordance with the length of the simulation trajectory $K$, i.e., $\gamma,\mu,\eta$ are constants.} This is akin
to the avenue chosen by Nemirovski et al.~\cite{nemirovski_robust_2009}, where
the steplength is also chosen in accordance with $K$.}
\end{remark}
Next, we relax Assumption
\ref{growth} (quadratic growth property) and impose a stronger bound on the
conditional second moment of the sampled gradient.
\begin{assumption}\label{non growth}
\vvs{Let $\bar{w}_{k,N_k} \triangleq \nabla_x f_{\eta_k}(x_k) - \tfrac{\sum_{j=1}^{N_k} \nabla_x F_{\eta_k}(x_k,\omega_{j,k})}{N_k}$. Then
there exists $\nu>0$ such that $\mathbb{E}[\|\bar{w}_{k,N_k}\|^2\mid \mathcal{F}_k] \leq \tfrac{\nu^2}{N_k}$ and $\mathbb{E}[\bar{w}_{k,N_k} \mid \mathcal{F}_k] = 0$ {hold} almost surely
for all $k$ and $\eta_k > 0$, where $\mathcal{F}_k
\triangleq \sigma\{x_0, x_1, \hdots, x_{k-1}\}$.}
\end{assumption}
\begin{corollary}
[{\bf Rate statement and Oracle complexity}]
Consider the \eqref{rsVS-SQN} scheme. Suppose Assumptions~\ref{assum:convex2}, \ref{assump:Hk} (NS) and \ref{non growth} hold and $\gamma {\triangleq} c_\gamma K^{-1/3+\bar \varepsilon}$, $\mu {\triangleq} {K^{-1/3}}$, $\eta \triangleq K^{-1/3}$ and $N_k\triangleq \lceil{(k+1)}^{a}\rceil$, where $\bar \varepsilon \triangleq \tfrac{5\varepsilon}{3}$, $\varepsilon>0$ and $a>1$.
\noindent (i) For any $K \geq 1$, $
\mathbb E[f(\bar x_{ K})]-f^*\leq \mathcal O(K^{-1/3}). $
\noindent (ii) Let ${\epsilon>0}$, $a = (1+\epsilon)$, and $ K\geq 1$ such that $\mathbb E[f(\bar x_{ K})]-f^*\leq {\epsilon}$. {Then{,} {$\sum_{k=0}^{ K}N_k\leq \mathcal O\left({ {\epsilon}^{-{(2+\varepsilon)\over 1/3}}}\right)$}}.
\end{corollary}
}
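As a quick numerical illustration of the oracle-complexity statement in part (ii) (this sketch and its test values of $a$ and $K$ are ours, not the authors' experiment), one can sum the sample sizes $N_k=\lceil (k+1)^a\rceil$ and observe that the total grows like $K^{a+1}$:

```python
import math

def total_oracle_calls(K, a):
    """Total number of sampled gradients sum_{k=0}^{K} N_k with N_k = ceil((k+1)^a)."""
    return sum(math.ceil((k + 1) ** a) for k in range(K + 1))

# With a = 1 + epsilon the total behaves like K^{2+epsilon}; combined with
# K = O(eps^{-3}) from part (i), this matches the stated complexity in eps.
a = 1.1
for K in (100, 1000, 10000):
    # The ratio below should stabilize (near 1/(a+1)) rather than blow up.
    print(K, round(total_oracle_calls(K, a) / K ** (a + 1), 4))
```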
\section{Numerical Results}\label{sec:5} In this section, we compare the behavior of the proposed VS-SQN schemes with their accelerated/unaccelerated gradient counterparts on a class of strongly convex/convex and smooth/nonsmooth stochastic optimization problems {with the intent of examining empirical error and sparsity of estimators (in machine learning problems) as well as the ability to contend with ill-conditioning.}
\noindent {\bf Example 1.} First, we consider the logistic regression problem, defined as follows:
\begin{align}\tag{LRM}
\min_{x \in \mathbb R^n} \ f(x) \triangleq \frac{1}{N}\sum_{i=1}^N\ln \left(1+{\exp} \left(-u_i^Txv_i\right)\right),
\end{align}
where $u_i \in \mathbb R^n$ is the input binary vector associated with article
$i$ and $v_i \in \{-1,1\}$ represents the class of the $i$th article. A {$\mu$-}regularized variant of such a problem is defined as follows.
\begin{align}\label{logisticReg}\tag{reg-LRM}
\min_{x \in \mathbb R^n} \ f(x) \triangleq \frac{1}{N}\sum_{i=1}^N\ln \left(1+{\exp}\left(-u_i^Txv_i\right)\right)+\frac{\mu}{2}\|x\|^2.
\end{align}
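For concreteness, the regularized objective and its gradient can be sketched as follows. This is a minimal pure-Python illustration of ours with synthetic data standing in for the sido0 features; it is not the implementation used in the experiments.

```python
import math, random

def reg_logistic_loss(x, U, v, mu):
    """mu-regularized logistic loss:
    f(x) = (1/N) * sum_i ln(1 + exp(-<u_i, x> * v_i)) + (mu/2) * ||x||^2."""
    N = len(U)
    loss = 0.0
    grad = [mu * xj for xj in x]            # gradient of the (mu/2)||x||^2 term
    for u_i, v_i in zip(U, v):
        z = -sum(uj * xj for uj, xj in zip(u_i, x)) * v_i
        loss += math.log1p(math.exp(z))     # assumes z is moderate; clip in practice
        s = 1.0 / (1.0 + math.exp(-z))      # d/dz ln(1 + e^z)
        for j, uj in enumerate(u_i):
            grad[j] += -s * v_i * uj / N
    return loss / N + 0.5 * mu * sum(xj * xj for xj in x), grad

random.seed(0)
U = [[random.gauss(0, 1) for _ in range(5)] for _ in range(100)]  # synthetic stand-in
v = [random.choice([-1.0, 1.0]) for _ in range(100)]
loss, grad = reg_logistic_loss([0.0] * 5, U, v, mu=0.1)
print(round(loss, 4))                       # → 0.6931 (= ln 2 at x = 0)
```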
We consider the {\sc sido0} dataset~\cite{lewis2004rcv1} where $N = 12678$ and $n = 4932$.
\noindent {\bf (1.1) Strongly convex and smooth problems}: To apply \eqref{VS-SQN}, we consider (Reg-LRM) where the problem is strongly convex and
$\mu=0.1$. We compare the behavior of the scheme with an accelerated gradient scheme~\cite{jalilzadeh2018optimal} and set the overall sampling budget equal to $1e4$. {We observe that \eqref{VS-SQN} competes well with ({\bf VS-APM})} (see Table~\ref{SC_tab_smooth} and Fig.~\ref{fig} (a)).
\begin{table}[htb]
\centering
\scriptsize
\begin{tabular}{|c|c|c||c|c|} \hline
&\multicolumn{2}{|c||}{SC, smooth}&\multicolumn{2}{|c|}{SC, nonsmooth \aj{(Moreau smoothing)}}\\ \hline
& {\bf VS-SQN}& ({\bf VS-APM}) &{\bf sVS-SQN}& ({\bf sVS-APM}) \\ \hline \hline
sample size: $N_k$& $\rho^{-k}$&$\rho^{-k}$&$\lfloor q^{-k}\rfloor$&$\lfloor q^{-k}\rfloor$\\ \hline
steplength: $\gamma_k$&0.1&0.1&$\eta_k^2$&$\eta_k^2$\\ \hline
smoothing: $\eta_k$&-&-&$0.1$&$0.1$\\ \hline
$f(x_k)$& $5.015$e-$1$&$5.015$e-$1$&$8.905$e-$1$&$1.497$e+$0$\\ \hline
\end{tabular}
\caption{{\bf sido0:} SC, smooth and nonsmooth}
\label{SC_tab_smooth}
\vspace{-0.2in}
\end{table}
\noindent {\bf (1.2) Strongly convex and nonsmooth}: We consider a nonsmooth
variant where an $\ell_1$ regularization is added with $\lambda=\mu=0.1$:
\begin{align}\label{SC nonsmooth LRM}
\min_{x \in \mathbb R^n} f(x):=\frac{1}{N}\sum_{i=1}^N\ln \left(1+\exp\left(-u_i^Txv_i\right)\right)+{\mu\over 2}\|x\|^2+\lambda\|x\|_1.
\end{align}
From~\cite{beck12smoothing}, a smooth approximation of $\|x\|_1$ is given by $\sum_{i=1}^n H_\eta (x_i)$, where
$$H_\eta (x_i) = \begin{cases} x_i^2/(2\eta), &\mbox{if } |x_i|\leq \eta, \\
|x_i|-\eta/2, & \mbox{o.w.,} \end{cases}$$ and $\eta>0$ is a smoothing parameter.
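This Huber-type smoothing is straightforward to implement; the sketch below (our illustration, not the paper's code) also checks the per-coordinate approximation error, which is at most $\eta/2$:

```python
def huber_smooth_l1(x, eta):
    """Smooth approximation of ||x||_1: each |x_i| is replaced by
    x_i^2/(2*eta) if |x_i| <= eta, and by |x_i| - eta/2 otherwise."""
    total = 0.0
    for xi in x:
        if abs(xi) <= eta:
            total += xi * xi / (2.0 * eta)
        else:
            total += abs(xi) - eta / 2.0
    return total

# The approximation undershoots ||x||_1 by at most eta/2 per coordinate.
x = [0.3, -0.05, 2.0]
eta = 0.1
exact = sum(abs(xi) for xi in x)
print(round(exact, 4), round(huber_smooth_l1(x, eta), 4))  # → 2.35 2.2125
```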
The performance of \eqref{sVS-SQN} is shown in Figure \ref{fig} (b) while
parameter choices are provided in Table \ref{SC_tab_smooth} and the total
sampling budget is $1e5$. {We see that the empirical behavior of \eqref{VS-SQN}} and
\eqref{sVS-SQN} is similar to {\bf (VS-APM)}{~\cite{jalilzadeh2018optimal} and {\bf (rsVS-APM)}~\cite{jalilzadeh2018optimal}, respectively. Note that while
in the strongly convex regimes, both schemes {display} similar (linear) rates, we do
not have a rate statement for smoothed ({\bf sVS-APM})~\cite{jalilzadeh2018optimal}.}
\begin{figure}[htb]
\vspace{-0.1in}
\centering
{
\includegraphics[scale=0.085]{SC_smooth_comp}
\includegraphics[scale=0.085]{moreau}
\includegraphics[scale=0.085]{C_smooth_comp}
\includegraphics[scale=0.085]{C_nonsmooth_comp}}
\caption{Left to right: (a) SC smooth, (b) SC nonsmooth, (c) C smooth, (d) C nonsmooth\label{fig}}{}
\end{figure}
\noindent {\bf (1.3) Convex and smooth}: We implement \eqref{rVS-SQN} on the
(LRM) problem and compare the result with VS-APM~\cite{jalilzadeh2018optimal}
and r-SQN~\cite{yousefian2017stochastic}. We again consider the {\sc sido0}
dataset with a total budget of $1e5$ while the parameters are tuned to ensure
good performance. In Figure \ref{fig} (c) we compare three different methods
while the choices of steplength and sample size can be seen in
Table~\ref{compare_tab}. \us{We note that (VS-APM) produces slightly better solutions, which is not surprising since it enjoys a rate of $\mathcal{O}(1/k^2)$ with an optimal oracle complexity. However, \eqref{rVS-SQN} is competitive and appears to be better than (r-SQN) by a significant margin in terms of the function value.}
\begin{table}[htb]
\centering
\scriptsize
\begin{tabular}{|c|c|c|c||c|c|} \hline
&\multicolumn{3}{|c||}{convex, smooth}&\multicolumn{2}{|c|}{convex, nonsmooth}\\ \hline
& {\bf rVS-SQN}& r-SQN & VS-APM & {\bf rsVS-SQN}& sVS-APM \\ \hline \hline
sample size: $N_k$& $k^{2+\varepsilon}$&1&$k^{2+\varepsilon}$& $(k+1)^{1+\varepsilon}$&$(k+1)^{1+\varepsilon}$\\ \hline
steplength: $\gamma_k$&$k^{-\varepsilon}$&$k^{-2/3}$&$1/(2L)$&$K^{-1/3+\varepsilon}$&$1/(2k)$\\ \hline
regularizer: $\mu_k$&$k^{2/3\varepsilon-1}$&$k^{-1/3}$&-&$K^{-1/3}$&-\\ \hline
smoothing: $\eta_k$&-&-&-&$K^{-1/3}$&$1/k$\\ \hline
$f(x_k)$&1.38e-1&2.29e-1&9.26e-2&6.99e-1&7.56e-1\\ \hline
\end{tabular}
\caption{ {\bf sido0:} C, smooth and nonsmooth}
\label{compare_tab}
\end{table}
\noindent {\bf (1.4) Convex and nonsmooth}: Now we consider the nonsmooth problem below, in which $\lambda=0.1$:
\begin{align}\label{nonsmooth LRM}
\min_{x \in \mathbb R^n} f(x):=\frac{1}{N}\sum_{i=1}^N\ln \left(1+\exp\left(-u_i^Txv_i\right)\right)+\lambda\|x\|_1.
\end{align}
We implement the {\bf rsVS-SQN} scheme with a total budget of $1e4$ (see Table~\ref{compare_tab} and Fig.~\ref{fig} (d)) and \us{observe that it competes well with (sVS-APM)~\cite{jalilzadeh2018optimal}, which has a superior convergence rate of $\mathcal{O}(1/k)$.}
\blue{\noindent {\bf (1.5) Sparsity}: {We now compare} the sparsity of the estimators obtained via the (\ref{rVS-SQN}) scheme with that of averaging-based stochastic gradient schemes. Consider the following problem, in which we use the smooth approximation of $\|\cdot\|_1$, leading to a convex and smooth objective.
\begin{align*}
\min_{x \in \mathbb R^n} f(x):=\frac{1}{N}\sum_{i=1}^N\ln \left(1+\exp\left(-u_i^Txv_i\right)\right)+\lambda\aj{\sum_{i=1}^n\sqrt{x_i^2+\lambda_2}},
\end{align*}
where we set $\lambda=1$e-$4$. We chose the parameters according to Table
\ref{compare_tab}, total budget is $1e5$ and $\|x_K\|_0$ denotes the number of
entries in $x_K$ that are greater than $1$e-$4$. Consequently, {$n_0 \triangleq n - \|x_K\|_0$}
denotes the number of ``zeros'' in the vector. As can be seen in Table
\ref{compare_spars}, the solution obtained by (\ref{rVS-SQN}) is significantly
{sparser than that obtained by} ({\bf VS-APM}) and standard stochastic gradient. In fact,
SGD produces nearly dense vectors, while (\ref{rVS-SQN}) produces vectors in
which roughly $10\%$ of the entries are zero for $\lambda_2 = 1$e-$6$.} \begin{table}[htb]
\centering
\scriptsize
\blue{\begin{tabular}{|c|c|c|c|c|} \hline
&{\bf rVS-SQN}&({\bf VS-APM})&SGD\\ \hline
$N_k$&$k^{2+\epsilon}$&$k^{2+\epsilon}$&1\\ \hline
$\#$ of iter.&66&66&1e5 \\ \hline
$n_0$ for $\lambda_2=1$e-$5$&144&31&0\\ \hline
$n_0$ for $\lambda_2=1$e-$6$&497&57&2\\ \hline
\end{tabular}}
\caption{{\bf sido0:} Convex, smooth }
\label{compare_spars}
\end{table}
\noindent {\bf Example 2. Impact of size and ill-conditioning.} {In Example {1}, we observed that \eqref{rVS-SQN} {competes well} with VS-APM for a subclass of machine learning problems. We now consider a stochastic quadratic program over a general probability space and observe similarly competitive behavior. In fact, \eqref{rVS-SQN} often outperforms ({\bf VS-APM})~\cite{jalilzadeh2018optimal} (see Tables~\ref{sc_tab_example} and \ref{c_tab_example})}. We consider the following problem.
\begin{align*}
\min_{x\in \mathbb R^n} \mathbb E\left[{1\over 2}x^TQ(\omega)x+c(\omega)^Tx\right],
\end{align*}
where $Q(\omega)\in \mathbb R^{n\times n}$ is a random symmetric matrix whose eigenvalues are chosen uniformly at random, with the minimum eigenvalue set to one and zero for the strongly convex and convex problems, respectively. Furthermore, $c(\omega)=-Q(\omega)x^0$, where $x^0\in \mathbb R^{n\times 1}$ is a vector whose elements are chosen randomly from the standard Gaussian distribution.
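A generator for such test problems can be sketched as follows. Note a simplifying assumption of ours: $Q$ is taken diagonal with the prescribed spectrum (the paper uses dense random symmetric matrices), which suffices to illustrate the construction $c=-Qx^0$ that makes $x^0$ a known minimizer.

```python
import random

def random_quadratic(n, strongly_convex=True, max_eig=10.0, seed=0):
    """Test-problem generator in the spirit of Example 2 (simplified sketch).
    Eigenvalues are uniform on [1, max_eig], with the smallest pinned to 1
    (strongly convex) or 0 (merely convex); c = -Q x0 makes x0 a minimizer."""
    rng = random.Random(seed)
    eigs = sorted(rng.uniform(1.0, max_eig) for _ in range(n))
    eigs[0] = 1.0 if strongly_convex else 0.0
    x0 = [rng.gauss(0.0, 1.0) for _ in range(n)]
    c = [-e * xi for e, xi in zip(eigs, x0)]
    return eigs, c, x0  # Q = diag(eigs)

eigs, c, x0 = random_quadratic(5)
# The gradient Q x + c vanishes at x0, so x0 minimizes (1/2) x^T Q x + c^T x.
print([round(e * xi + ci, 12) for e, xi, ci in zip(eigs, x0, c)])  # → [0.0, 0.0, 0.0, 0.0, 0.0]
```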
\begin{table}[htb]
\begin{minipage}[b]{0.5\linewidth}
\scriptsize
\begin{tabular}{|c|c|c|} \hline
&\eqref{VS-SQN}&({\bf VS-APM})\\ \hline
n&$\mathbb E[f(x_k)-f(x^*)]$&$\mathbb E[f(x_k)-f(x^*)]$\\ \hline
20&$3.28$e-$6$& $5.06$e-$6$ \\ \hline
60&$9.54$e-$6$& $1.57$e-$5$\\ \hline
100&$1.80$e-$5$&$2.92$e-$5$\\ \hline
\end{tabular}
\caption{Strongly convex: \\ \eqref{VS-SQN} vs ({\bf VS-APM})}
\label{sc_tab_example}
\end{minipage}
\begin{minipage}[b]{0.45\linewidth}
\scriptsize
\begin{tabular}{|c|c|c|} \hline
&\eqref{rVS-SQN}&({\bf VS-APM})\\ \hline
n&$\mathbb E[f(x_k)-f(x^*)]$&$\mathbb E[f(x_k)-f(x^*)]$\\ \hline
20&$9.14$e-$5$&$1.89$e-$4$ \\ \hline
60&$2.67$e-$4$&$4.35$e-$4$\\ \hline
100&$5.41$e-$4$&$8.29$e-$4$\\ \hline
\end{tabular}
\caption{Convex: \\ \eqref{rVS-SQN} vs ({\bf VS-APM})}
\label{c_tab_example}
\end{minipage}
\end{table}
{In Tables \ref{quad_ill} and \ref{convex_ill}, we compare the behavior of \eqref{rVS-SQN} and ({\bf VS-APM}) when
the problem is ill-conditioned {in strongly convex and convex regimes,
respectively}. {In strongly convex regimes}, we set the total budget equal
to $2e8$ and use the same steplength for both schemes. The sample size
sequence is chosen to be $N_k=\lceil 0.99^{-k}\rceil$, leading to $1443$ steps
for both methods. {We observe that as $m$ grows, the relative quality of the
solution compared to ({\bf VS-APM}) improves even further.} \blue{These findings are reinforced in
Table \ref{convex_ill}, where for merely convex problems, although the convergence rate for ({\bf VS-APM})
is $\mathcal O(1/k^2)$ (superior to the $\mathcal O(1/k)$ rate of (\ref{rVS-SQN})), (\ref{rVS-SQN})
outperforms ({\bf VS-APM}) in terms of empirical error. Note that parameters are chosen
similar to Table \ref{compare_tab}. }
\begin{table}[htbp]
\begin{minipage}[b]{0.5\linewidth}
\centering
\tiny
\begin{tabular}{|c|c|c|c|} \hline
&\multicolumn{3}{|c|}{$\mathbb E[f(x_k)-f(x^*)]$}\\ \hline
$\kappa$&\eqref{VS-SQN}, $m=1$&\eqref{VS-SQN}, $m=10$&({\bf VS-APM})\\ \hline
$1e5$ &$9.25$e-$4$&$2.656$e-$4$& $2.600$e-$3$\\ \hline
$1e6$ &$9.938$e-$5$&$4.182$e-$5$&$4.895$e-$4$\\ \hline
$1e7$ &$1.915$e-$5$&$1.478$e-$5$&$1.079$e-$4$\\ \hline
$1e8$ &$1.688$e-$5$&$6.304$e-$6$&$4.135$e-$5$\\ \hline
\end{tabular}
\caption{Strongly convex: \\Performance vs Condition number (as $m$ changes)}
\label{quad_ill}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\centering
\tiny
\blue{\begin{tabular}{|c|c|c|c|} \hline
&\multicolumn{3}{|c|}{$\mathbb E[f(x_k)-f(x^*)]$}\\ \hline
$L$&\eqref{rVS-SQN}, $m=1$&\eqref{rVS-SQN}, $m=10$&({\bf VS-APM})\\ \hline
$1e3$ &$4.978$e-$4$&$1.268$e-$4$&$1.942$e-$4$\\ \hline
$1e4$ &$3.288$e-$3$&$2.570$e-$4$&$3.612$e-$2$\\ \hline
$1e5$ &$8.571$e-$2$&$3.075$e-$3$&$2.794$e+$0$\\ \hline
$1e6$ &$3.367$e-$1$&$3.203$e-$1$&$4.293$e+$0$\\ \hline
\end{tabular}}
\caption{Convex: \\Performance vs Condition number (as $m$ changes)}
\label{convex_ill}
\end{minipage}
\end{table}
\noindent {\bf Example 3. Constrained Problems.} We consider the isotonic constrained LASSO problem.
\begin{align}\label{isotonic}
\min_{x =[x_i]_{i=1}^n\in \mathbb R^n}~ \left\{ \frac{1}{2}\sum_{i=1}^p \|A_ix-b_i\|^2 \mid x_1\leq x_2\leq \hdots\leq x_n \right\},
\end{align}
where $A=[A_i]_{i=1}^p\in\mathbb{R}^{n\times p}$ is a matrix whose elements are chosen randomly from the standard Gaussian distribution such that $A^\top A\succeq 0$
and $b=[b_i]_{i=1}^p\in\mathbb{R}^p$ such that $b=A(x_0+ {\sigma})$ where
$x_0\in\mathbb{R}^n$ is chosen such that the first and last $\frac{n}{4}$ of
its elements are chosen from $U([-10,0])$ and $U([0,10])$ in ascending order,
respectively, while the other elements are set to zero. Further,
${\sigma}\in\mathbb{R}^n$ is a random vector whose elements are independent
normally distributed random variables with mean zero and standard deviation
$0.01$. Let $C\in\mathbb{R}^{(n-1)\times n}$ be a matrix that captures the
constraint, i.e., $C(i,i)=1$ and $C(i,i+1)=-1$ for $1\leq i\leq n-1$ and its
other components are zero and let $X\triangleq \{x~:~Cx\leq 0\}$. Hence, we
can rewrite the problem
\eqref{isotonic} as $\min_{x \in \mathbb R^n} f(x):=\frac{1}{2}\sum_{i=1}^p
\|A_ix-b_i\|^2+\mathcal{I}_{X}(x)$. We know that a smooth approximation of
the indicator function is $\mathcal I_{X,\eta}(x)={1\over 2\eta} d^2_{X}(x)$, where $d_X(x)$ denotes the Euclidean distance of $x$ from $X$.
Therefore, we apply \eqref{rsVS-SQN} on the following problem
\begin{align}\label{isotonic_smooth}
\min_{x \in \mathbb R^n} f(x) & \triangleq \frac{1}{2}\sum_{i=1}^p \|A_ix-b_i\|^2+{1\over 2\eta} d^2_{X}(x).
\end{align}
{Parameter choices are similar to those in Table \ref{compare_tab} and we note from Fig.~\ref{fig_isotonic} (Left) that empirical behavior appears to be favorable. }
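Evaluating the gradient of the smoothed indicator ${1\over 2\eta}d^2_{X}(x)$, namely $(x-\Pi_X(x))/\eta$, requires the Euclidean projection onto the isotonic set $X$. A standard way to compute this projection (the paper does not specify its routine; this sketch is our assumption) is the pool-adjacent-violators algorithm:

```python
def project_isotonic(x):
    """Euclidean projection onto {x : x_1 <= x_2 <= ... <= x_n} via the
    pool-adjacent-violators algorithm (PAVA)."""
    blocks = []  # each block holds [sum, count]; merge while ordering is violated
    for xi in x:
        blocks.append([xi, 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)  # each merged block is replaced by its mean
    return out

def smoothed_indicator_grad(x, eta):
    """Gradient of (1/(2*eta)) * dist^2_X(x): (x - proj_X(x)) / eta."""
    p = project_isotonic(x)
    return [(xi - pi) / eta for xi, pi in zip(x, p)]

print(project_isotonic([3.0, 1.0, 2.0]))  # → [2.0, 2.0, 2.0]
```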
\begin{figure}[htb]
\centering
\includegraphics[scale=0.1]{isotonic_c_diffn}
\includegraphics[scale=0.1]{example_sc_com}\caption{Left: \eqref{sVS-SQN} Right: \eqref{sVS-SQN}~vs.~BFGS}
\label{fig_isotonic}
\end{figure}
\noindent {\bf Example 4. Comparison of ({\bf s-QN}) with BFGS}
In~\cite{lewis2008behavior}, the authors show that a nonsmooth BFGS scheme may
take null steps and fail to converge to the optimal solution
(see~Fig.~\ref{fig:nssqn}); they consider the following problem. \begin{align*}
\min_{x {\in \mathbb R^2}} \qquad {1\over 2}\|x\|^2+\max\{2|x_1|+x_2,3x_2\}.
\end{align*} In this problem, {BFGS takes a null step after two iterations
(steplength is zero)}; however, ({\bf s-QN}) (the deterministic version of \eqref{sVS-SQN}) converges to the optimal solution.
Note that the optimal solution is $(0,-1)$ and ({\bf s-QN}) reaches
$(0,-1.0006)$ in just $0.095$ seconds (see Fig.~\ref{fig_isotonic} (Right)).
\section{Conclusions}
Most SQN schemes can process smooth and strongly convex stochastic optimization problems
and there appears to be a gap in the asymptotics and rate statements in
addressing merely convex and possibly nonsmooth settings. Furthermore, a clear
difference exists between deterministic rates and their stochastic
counterparts, paving the way for developing variance-reduced schemes. In
addition, many of the available statements rely on a somewhat stronger
assumption of uniform boundedness of the conditional second moment of the
noise, which is often difficult to satisfy in unconstrained regimes.
Accordingly, the present paper makes three sets of contributions. First, a
regularized smoothed L-BFGS update is proposed that combines regularization
and smoothing, providing a foundation for addressing nonsmoothness and a lack
of strong convexity. Second, we develop a variable sample-size SQN scheme
\eqref{VS-SQN} for strongly convex problems and its Moreau smoothed variant
\eqref{sVS-SQN} for nonsmooth (but smoothable) variants, both of which attain a
linear rate of convergence and an optimal oracle complexity. Notably, when more
general smoothing techniques are employed, the convergence rate can also be
quantified. Third, in merely convex regimes, we develop a regularized VS-SQN
\eqref{rVS-SQN} and its smoothed variant \eqref{rsVS-SQN} for smooth and
nonsmooth problems respectively. The former achieves a rate of
$\mathcal{O}(1/K^{1-\epsilon})$ while the rate degenerates to
$\mathcal{O}(1/K^{1/3-\epsilon})$ in the case of the latter. Finally, numerics
suggest that the SQN schemes compare well with their variable sample-size
accelerated gradient counterparts and perform particularly well in comparison
when the problem is afflicted by ill-conditioning.
\bibliographystyle{siam}
\bibliography{demobib,wsc11-v02}
\section{Appendix}
In this section we prove Lemma \ref{rsLBFGS-matrix}. Recall that $\ulambda_k$ and $\olambda_k$ denote the minimum and maximum eigenvalues of $H_k$, respectively, and that $B_k$ denotes the inverse of the matrix $H_k$.
\begin{lemma}\cite{yousefian2017stochastic}
Let $0 < a_1 \leq a_2 \leq \hdots \leq a_n$, $P$ and $S$ be positive scalars such that $\sum_{i=1}^n a_i \leq S$ and $\Pi_{i=1}^n a_i \geq P$ . Then, we have
$a_1 \geq \frac{(n-1)!P}{S^{n-1}}.$
\end{lemma}
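A quick numerical check of this lemma (our illustration, taking $S=\sum_i a_i$ and $P=\prod_i a_i$, the tightest admissible choices):

```python
import math, random

def eigen_lower_bound(a):
    """The lemma's bound (n-1)! * P / S^(n-1) on min(a), with S = sum(a)
    and P = prod(a)."""
    n = len(a)
    return math.factorial(n - 1) * math.prod(a) / sum(a) ** (n - 1)

# Verify min(a) >= bound on random positive spectra.
random.seed(1)
for _ in range(1000):
    a = sorted(random.uniform(0.1, 5.0) for _ in range(4))
    assert a[0] >= eigen_lower_bound(a) - 1e-12
print("bound verified on 1000 random spectra")
```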
{\bf Proof of Lemma \ref{rsLBFGS-matrix}:} It can be seen, by induction on $k$, that $H_k$ is symmetric and $\mathcal{F}_k$ measurable, assuming that all matrices are well-defined. We use induction on odd values of $k>2m$ to show that the statements of parts (a), (b), and (c) hold and that the matrices are well-defined. Suppose $k>2m$ is odd and for any odd value of $t<k$, we have $s_t^T{y_t} >0$, $H_{t}{y}_t=s_t$, and part (c) holds for $t$. We show that these statements also hold for $k$. First, we prove that the secant condition holds.
\aj{\begin{align*}
& s_k^T{y_k}=(x_{k}-x_{k-1})^T\left(\tfrac{\sum_{j=1}^{N_{k-1}}\left(\nabla F_{\eta_{k}^\delta}(x_k,\omega_{j,k-1})- \nabla F_{\eta_{k}^\delta}(x_{k-1},\omega_{j,k-1})\right)}{N_{k-1}}+\mu_k^\delta(x_k-x_{k-1})\right)\\
&=\tfrac{\sum_{j=1}^{N_{k-1}}\left[(x_{k}-x_{k-1})^T(\nabla F_{\eta_{k}^\delta}(x_k,\omega_{j,k-1})- \nabla F_{\eta_{k}^\delta}(x_{k-1},\omega_{j,k-1}))\right]}{N_{k-1}}+\mu_k^\delta\|x_k-x_{k-1}\|^2
\geq \mu_k^\delta\|x_k-x_{k-1}\|^2,
\end{align*}}
where the inequality follows from the monotonicity of the gradient map $\nabla F(\cdot,\omega)$. {From the induction hypothesis, $H_{k-2}$ is positive definite, since $k-2$ is odd.
Furthermore, since $k-2$ is odd, we have $H_{k-1}=H_{k-2}$ by the update rule \eqref{eqn:H-k}.
Therefore, $H_{k-1}$ is positive definite.
Note that since $k-2$ is odd, the choice of $\mu_{k-1}$ is such that ${1\over N_{k-1}}\sum_{j=1}^{N_{k-1}}\nabla F_{\eta_{k}^\delta}(x_{k-1},\omega_{j,k-1})+\mu_{k-1}x_{k-1}\neq 0$ (see the discussion
following~\eqref{eqn:mu-k}).
Since $H_{k-1}$ is positive definite,
we have $$H_{k-1}\left({1\over N_{k-1}}\sum_{j=1}^{N_{k-1}}\nabla F_{\eta_{k}^\delta}(x_{k-1},\omega_{j,k-1})+\mu_{k-1}x_{k-1}\right) \neq 0,$$ implying that
$x_{k} \neq x_{k-1}$. Hence
$s_k^T{y_k} \geq \mu_k^\delta\|x_k-x_{k-1}\|^2 >0,$
where \us{the second inequality is a consequence of} $\mu_k>0$.
Thus, the secant condition holds.}
Next, we show that part (c) holds for $k$. It is well known that, by the Sherman--Morrison--Woodbury formula, $B_k$ (the inverse of $H_k$ in \eqref{eqn:H-k-m}) is equal to $B_{k,m}$ given by
\begin{align}\label{equ:B_kLimited}
B_{k,j}=B_{k,j-1}-\frac{B_{k,j-1}s_is_i^TB_{k,j-1}}{s_i^TB_{k,j-1}s_i}+\frac{y_iy_i^T}{y_i^Ts_i}, \quad i:=k-2(m-j), \quad 1 \leq j \leq m,
\end{align}
where $s_i$ and $y_i$ are defined by \eqref{equ:siyi-LBFGS} and $B_{k,0}=\frac{y_k^Ty_k}{s_k^Ty_k}\mathbf{I}$. First, we show that for any $i$, \begin{align}\label{equ:boundsForB0}
\mu_k^\delta \leq \frac{\|y_i\|^2}{y_i^Ts_i} \leq 1/\eta_k^{\delta}+\mu_k^\delta.
\end{align}
Let us consider the function $h(x):={1\over N_{i-1}}\sum_{j=1}^{N_{i-1}}F_{\eta_{k}^\delta}(x,\omega_{j,i-1})+\frac{\mu_k^\delta}{2}\|x\|^2$ for fixed $i$ and $k$. Note that this function is strongly convex and has the gradient mapping $\nabla h(x)={1\over N_{i-1}}\sum_{j=1}^{N_{i-1}}\nabla F_{\eta_{k}^\delta}(x,\omega_{j,i-1})+\mu_k^\delta x$, which is Lipschitz with parameter ${1\over \eta_k^\delta}+\mu_k^\delta$. For a convex function $h$ whose gradient is Lipschitz with parameter $1/\eta_k^\delta+\mu_k^\delta$, the following inequality, referred to as the co-coercivity property, holds for any $x_1,x_2 \in \mathbb{R}^n$ (see \cite{polyak1987introduction}, Lemma 2):
$\|\nabla h(x_2)-\nabla h(x_1)\|^2 \leq (1/\eta_k^\delta+\mu_k^\delta)(x_2-x_1)^T(\nabla h(x_2)-\nabla h(x_1)).$
Substituting $x_2$ by $x_i$, $x_1$ by $x_{i-1}$, and recalling \eqref{equ:siyi-LBFGS}, the preceding inequality yields
\begin{align}\label{ineq:boundsForB0-1}\|y_i\|^2 \leq (1/\eta_k^\delta+\mu_k^\delta)s_i^Ty_i.\end{align}
Note that function $h$ is strongly convex with parameter $\mu_k^\delta$. Applying the Cauchy-Schwarz inequality, we can write
\[\frac{\|y_i\|^2}{s_i^Ty_i} \geq \frac{\|y_i\|^2}{\|s_i\|\|y_i\|} =\frac{\|y_i\|}{\|s_i\|}= \frac{\|y_i\|\|s_i\|}{\|s_i\|^2} \geq \frac{y_i^Ts_i}{\|s_i\|^2}\geq \mu_k^\delta.\]
Combining this relation with \eqref{ineq:boundsForB0-1}, we obtain \eqref{equ:boundsForB0}. Next, we show that the maximum eigenvalue of $B_k$ is bounded. Let $Trace(\cdot)$ denote the trace of a matrix. Taking the trace of both sides of \eqref{equ:B_kLimited} and summing over the index $j$, we obtain
\begin{align}\label{ineq:trace}
& \quad Trace(B_{k,m})=Trace(B_{k,0})-\sum_{j=1}^m Trace\left(\frac{B_{k,j-1}s_is_i^TB_{k,j-1}}{s_i^TB_{k,j-1}s_i}\right)+\sum_{j=1}^m Trace\left(\frac{y_iy_i^T}{y_i^Ts_i}\right)\\ \nonumber
& =Trace\left(\frac{\|y_i\|^2}{y_i^Ts_i}\mathbf{I}\right) - \sum_{j=1}^m \frac{\|B_{k,j-1}s_i\|^2}{s_i^TB_{k,j-1}s_i} + \sum_{j=1}^m \frac{\|y_i\|^2}{y_i^Ts_i}\leq n \frac{\|y_i\|^2}{y_i^Ts_i} +\sum_{j=1}^m (1/\eta_k^\delta+\mu_k^\delta) = (m+n)(1/\eta_k^\delta+\mu_k^\delta),
\end{align}
where the third relation is obtained by positive-definiteness of $B_k$ (this can be seen by induction on $k$, and using \eqref{equ:B_kLimited} and $B_{k,0}\succ 0$). Since $B_k=B_{k,m}$, the maximum eigenvalue of the matrix $B_k$ is bounded. As a result,
\begin{align}\label{proof:lowerbound}
\ulambda_k\geq \frac{1}{(m+n)(1/\eta_k^\delta+\mu_k^\delta)}.\end{align}
In the next part of the proof, we establish the bound for $\olambda_k$. From Lemma 3 in \cite{mokhtari2015global}, we have $
det(B_{k,m})=det(B_{k,0})\prod_{j=1}^m\frac{s_i^Ty_i}{s_i^TB_{k,j-1}s_i}.$
Multiplying and dividing by $s_i^Ts_i$, using the strong convexity of the function $h$, and invoking \eqref{equ:boundsForB0} and the result of \eqref{ineq:trace}, we obtain
\begin{align}\label{ineq:detBk}
det(B_{k})&=det\left(\frac{y_k^Ty_k}{s_k^Ty_k}\mathbf{I}\right)\prod_{j=1}^m\left(\frac{s_i^Ty_i}{s_i^Ts_i}\right)\left(\frac{s_i^Ts_i}{s_i^TB_{k,j-1}s_i}\right)
\geq\left(\frac{y_k^Ty_k}{s_k^Ty_k}\right)^n\prod_{j=1}^m\mu_k^\delta\left(\frac{s_i^Ts_i}{s_i^TB_{k,j-1}s_i}\right)\cr
& \geq (\mu_k)^{(n+m)\delta} \prod_{j=1}^m \frac{1}{(m+n)(1/\eta_k^\delta+\mu_k^\delta)} = \frac{\mu_k^{(n+m)\delta}}{(m+n)^{m}(1/\eta_k^\delta+\mu_k^\delta)^m}.
\end{align}
Let $\alpha_{k,1}\leq \alpha_{k,2}\leq\ldots\leq\alpha_{k,n}$ be the eigenvalues of $B_k$ sorted non-decreasingly. Note that since $B_k\succ0$, all the eigenvalues are positive. Also, from \eqref{ineq:trace}, we know that $\alpha_{k,\ell}\leq (m+n)(L+\mu_0^\delta)$. Taking \eqref{ineq:boundsForB0-1} and \eqref{ineq:detBk} into account, and employing Lemma 6, we obtain \[\alpha_{k,1}\geq \frac{(n-1)!\mu_k^{(n+m)\delta}}{(m+n)^{n+m-1}(1/\eta_k^\delta+\mu_k^\delta)^{n+m-1}}.\]
This relation and the fact that $\alpha_{k,1}=\olambda_k^{-1}$ imply that
\begin{align}\label{proof:upperbound}
\olambda_k\leq \frac{(m+n)^{n+m-1}(1/\eta_k^\delta+\mu_k^\delta)^{n+m-1}}{(n-1)!\mu_k^{(n+m)\delta}}.
\end{align}
Therefore, from \eqref{proof:lowerbound} and \eqref{proof:upperbound} and that $\mu_k$ is non-increasing, we conclude that part (c) holds for $k$ as well. Next, we show that $H_ky_k=s_k$. From \eqref{equ:B_kLimited}, for $j=m$ we obtain
\[B_{k,m}=B_{k,m-1}-\frac{B_{k,m-1}s_ks_k^TB_{k,m-1}}{s_k^TB_{k,m-1}s_k}+\frac{y_ky_k^T}{y_k^Ts_k},\]
where we used $i=k-2(m-m)=k$. Multiplying both sides of the preceding equation by $s_k$, and using $B_k=B_{k,m}$, we have
$B_{k}s_k=B_{k,m-1}s_k-B_{k,m-1}s_k+y_k=y_k.$
Multiplying both sides of the preceding relation by $H_k$ and invoking $H_k=B_k^{-1}$, we conclude that $H_ky_k=s_k$. Therefore, we showed that the statements of (a), (b), and (c) hold for $k$, assuming that they hold for any odd $2m<t<k$. In a similar fashion to this analysis, it can be seen that the statements hold for $t=2m+1$. Thus, by induction, we conclude that the statements hold for any odd $k>2m$. To complete the proof, it is enough to show that part (c) holds for any even value of $k>2m$. Let $t=k-1$. Since $t>2m$ is odd, relation part (c) holds. Writing it for $k-1$, and taking into account that $H_k=H_{k-1}$, and $\mu_k<\mu_{k-1}$, we can conclude that part (c) holds for any even value of $k>2m$ and this completes the proof. | 7,295 |
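The recursion \eqref{equ:B_kLimited} and the secant property $B_ks_k=y_k$ established above can be sanity-checked numerically. The following pure-Python sketch (with hypothetical curvature pairs satisfying $s^Ty>0$; not the paper's code) applies the update and verifies the secant equation:

```python
def mat_vec(B, v):
    return [sum(bij * vj for bij, vj in zip(row, v)) for row in B]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def bfgs_update(B, s, y):
    """One step of the recursion: B' = B - (B s s^T B)/(s^T B s) + (y y^T)/(y^T s)."""
    Bs = mat_vec(B, s)
    sBs = dot(s, Bs)
    ys = dot(y, s)
    n = len(s)
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / ys
             for j in range(n)] for i in range(n)]

# Hypothetical (s, y) pairs with s^T y > 0; the last pair plays the role of (s_k, y_k).
pairs = [([1.0, 0.0], [2.0, 1.0]), ([0.5, 1.0], [1.0, 3.0])]
s_k, y_k = pairs[-1]
scale = dot(y_k, y_k) / dot(s_k, y_k)      # B_{k,0} = (y_k^T y_k / s_k^T y_k) I
B = [[scale if i == j else 0.0 for j in range(2)] for i in range(2)]
for s, y in pairs:
    B = bfgs_update(B, s, y)
# After the final update, B satisfies the secant equation B s_k = y_k.
print([round(val, 6) for val in mat_vec(B, s_k)])  # → [1.0, 3.0]
```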
The arresting power of the original SimCity in 1989 can seem alien to us today. The simulation is literally distant, hoisting the player heavenward and granting him the power to zone land for commercial, industrial, or residential development. An average play session is marked by mundane occurrences: traffic jams, natural disasters, budgets. For many, three decades on, the experience is haughty, removed, even a little dull. With context, we can better understand the delirious approval heaped on SimCity in its day.
Back in February 1989, a game with neither an objective nor protagonist was still a novelty. Bullfrog’s Populous was several months away, and Myst wouldn’t turn exploration into an industry staple for another four years. Digital paint software and virtual engineering kits such as Electronic Arts’ Construction Set series had existed for some time, but these were borrowed capital – video game adaptations of art class and Erector sets.
SimCity was video gaming’s first indigenous species. Its tilt-shift world could exist only in computer space, powered by a million computing cycles per second. SimCity’s creator, Will Wright, coined the term “software toy” to emphasize his innovation, and for years SimCity remained the most popular software toy on the market as players wrangled with the game’s open-ended questions.
What did the disembodied view represent? (A mayor’s daydreams over a topo map? A satellite feed? The eye of God?) And what did it mean to ‘win’ the game?
SimCity declined to dictate an end-game scenario. SimCity didn’t mind what you did. SimCity wasn’t prescriptive. It was your sandbox – an empty canvas, a blank check. The appeal lay in its subsystem-driven design, in which traffic, population, disasters, and utilities begin to interplay in surprising ways. The subsystems conceal an essentially simple simulation beneath layers of perceived complexity and allow players to improvise their own fun. No longer did the game designer, far removed by space and time, direct the experience with a firm hand. Instead the players called the shots, even to the extreme of wrecking their cities with a giant lizard or a tornado.
Players could come and go as they pleased, but in general they came (and stayed) in unprecedented numbers: a million units sold by 1992. It was a golden era for Will Wright and his comrades at Maxis Software, and SimCity’s phenomenal performance gave the studio breathing room to tinker with the software toy. SimEarth, SimAnt, SimLife, and SimFarm marked the years following 1989’s gold boom, to varying degrees of success.
Through 1994, with the release of SimCity 2000, the Maxis monopoly on software toys continued. But the market was changing, and SimCity’s future hold on the rank and file looked tenuous. SimCity 2000 was gauche compared to 1995’s strategic heavy-hitters, Warcraft II and Command & Conquer. Elsewhere in the industry, Quake and hardware acceleration were about to rocket action gaming into the polygonal stratosphere.
As the SimCity 2000 revenue stream began to trail off, Will Wright cast about for something new, something different, something 3D. He alighted on a Hindenburg flight simulator. Though it would be shelved over concerns about Nazi imagery, the general flight sim concept would resurface soon enough.
1995 found Maxis in a state of turmoil with, as Geoff Keighley put it, “no focus, no clear direction, and Wall Street banging on the door.” An IPO had introduced capital, but mounting pressure came with the influx of cash. New management leveled an ultimatum on engineering: four products shipped by the end of 1996. Understaffed, out of time, and without a SimCity sequel in the pipeline, Will Wright dusted off the Hindenburg flight simulator and gave it a civic coat of paint. The game was to be the start of a new franchise and a prototype – a new moneymaker and Maxis’ polygonal debut.
Now it was Maxis’ turn to field hard questions. What was the SimCity philosophy in the context of a 3D action game? Could it survive the transition? Was there time to do it right?
SimCopter’s box bears the elevator pitch: “Fly Missions in the Metropolis.” Flying. Missions. A metropolis.
The flying occurs in an assortment of real-life airframes from Bell, McDonnell Douglas, and Schweizer. Handling and capacity vary according to real-world specifications, and better machines make it easier to complete missions. An algorithmic subsystem generates these missions along general categories: police work, medical evacuation, firefighting, riot control, and VIP transport. Completing missions pays cash to outfit your fleet and points to unlock new cities with higher difficulty.
Each metropolis is similar to the others – surreal, abstract, crude. Cars are flat-shaded jumbles of blocks that jerk along angular roads. People resemble the Intellivision Running Man with a magazine clipping for a face. Buildings are squat rectangles that fade in and out of the smog.
In addition to the thirty-odd cities on the disk, players can import their own SimCity 2000 creations and let the SimCopter engine extrude 2D data into 3D urban sprawl. Therein lies Will Wright’s two-part translation of the SimCity philosophy into the language of an energetic 3D action game.
From a traditional SimCity perspective, SimCopter adds value to a player’s existing investment. They can enjoy the fruit of their SimCity 2000 labor in a fresh way, with missions providing structure to the exploration of a beloved city. But as with SimCity, there’s no need to play by the rules. Central dispatch’s monotone voice will continue trying to pique your interest with an endless carousel of missions, but SimCopter‘s subsystems are at the player’s disposal.
The little marionette citizens dance and cluck their bizarre Simlish protests, the cartoon cars burst into flames at the touch of a landing skid, and the eerie expanse beyond the edge of the city limits beckons to the adventurous. One can play the distant observer, dutiful public servant, or Godzilla-esque natural disaster. And the bevy of cheat codes for free cash and helicopters – even an Apache gunship – tell the player to loosen up.
Alternately, from a 3D action viewpoint, SimCopter is a self-contained experience. The metropolis may not be yours if you don’t import from SimCity 2000, but SimCopter makes it easy to have fun in a strange land. It makes no mayoral demands. After all, zoning and infrastructure strategy isn’t relevant to the totalitarian glee of hosing down rioters with a water cannon.
There’s an immediate thrill to racing your helicopter under bridges in search of a hospital while a burn victim gurgles in the backseat. The difficulty curve is well-tuned, and the mission subsystem is varied. It’s true that SimCopter is less focused than what we’ve come to expect of action games, but it wore the mask well for its time.
And generally speaking, time has favored SimCopter. Huizinga would have us remember that seven Grand Theft Auto games have been released since SimCopter hit shelves in 1996. We have to think of SimCopter in an increasingly distant context, and as it matures into gaming history as a grandfather of the modern metropolitan sandbox genre, it grows more unique – empirically and subjectively.
Empirically, the 3D graphics industry has homogenized since 1996 and stranded SimCopter outside the pale of rendering convention. Fewer development houses write their own renderers. Unreal, CryEngine, and Unity all offer similar features based on the same academic research. SIGGRAPH attendance has declined since New Orleans ’96. These factors combine to give SimCopter a one-of-a-kind graphical style.
Subjectively, two decades have steeped SimCopter‘s depiction of city and citizenry into a heady mix. Your helicopter’s radio offers an eclectic, low-fi goulash of tunes. Commercials bookend the music with consumerist cynicism, deadpan puns and absurdities. Outside your helicopter, the hapless population communicates with “Simlish”, a piping, keening pidgin language. Simlish would make a famous repeat appearance in The Sims, but it’s more appropriate here in the alien world of SimCopter.
Fog shrouds the cities in slate grey by day and black at night. UFOs buzz high above the cityscape. The unflappable helicopter pilot flutters his legs and preens and coos as he hoists both people and dogs into the passenger bay. At the police station helipad, officers stagger and wave like so many badge-wearing zombies. If your rotors dice them to pieces during the landing attempt? Don’t sweat it, just run the medevac missions for bonus Simoleons.
It’s partly this eccentricity that has kept SimCopter alive in the collective gaming memory. Fan remakes for SimCopter get proposed frequently, and a spiritual successor mod for Cities: Skylines was underway at one point. While it’s natural to want to relive an enjoyable experience, does SimCopter need an update? How much of its unique tone would survive the clean image of a modern renderer and the loss of all that weirdness? If SimCopter works only as a total package, rehashing the particulars of helicopters, missions, and cities would miss the point.
Additional desire for a remake comes from SimCopter‘s incompatibility with modern PCs. Unfortunately, there isn’t any other way to play the game. SimCopter became a permanent PC exclusive in 1999 when Maxis cancelled a port for Nintendo’s 64DD. The only remaining artifacts are magazine scans and a bit of trade show footage – though SimCity 64‘s helicopter minigame may use some of the cancelled 64DD SimCopter code.
The same year SimCity 64 released in Japan, Electronic Arts began re-releasing the Maxis back catalog at a budget price via the SimMania bundle series. SimCopter made it into the first and third installments, but without any additional patches.
Another omission from both SimMania bundles is SimCity 2000, which is especially odd considering how well it and SimCopter work together. Luckily the SimCopter disk already includes the SimCity Urban Renewal Kit, or SCURK. SCURK is essentially the SimCity 2000 toolkit liberated from the restraints of time, money, disasters, geography and population. SimCopter can import these SCURK ghost towns and crank out the appropriate traffic, citizenry, and missions, though the burden is on the player to distribute hospitals and fire and police stations. Still, for novelty maps – recreating Waterworld, for instance, or a city built around a spiral road – SCURK is probably more efficient for laymen than the full SimCity 2000 engine.
A level editor, singular quirkiness, SimCity 2000 integration… SimCopter looks good on paper. How did it perform?
In October 1996, SimCopter served as rearguard for the four-game lineup Maxis management had demanded the year before. It was the only one Will Wright shepherded personally, but he wasn’t pleased with the final product. He told Geoff Keighley in an interview: “The low point for me was releasing SimCopter when we did…we just had to ship it too soon.”
We don’t know exactly what Wright intended for SimCopter, but we can guess it involved a more convincing simulation of both the helicopter and the city. The game’s final patch unlocked an alpha build of the game with more realistic helicopter handling and early hardware acceleration support.
Still, others didn’t seem to mind. Gaming magazines awarded respectable scores to this civilian take on a typically military genre. Franchise faithful flocked in to tour their SimCity 2000 creations in 3D. Not even an unusual post-release hiccup – the unauthorized “himbo” Easter egg featuring kissing gay couples, which made the New York Times – could stop Maxis from reaching the end of 1996 with an intact reputation and decent Christmas sales.
SimCopter had been a gamble, an attempt to lay groundwork for a future 3D SimCity and diversify the Maxis portfolio. It achieved the first goal – SimCopter solidified the assumption within Maxis and without that the next ‘real’ Sim game would be in full 3D. But it did not achieve the second. There would be no SimCopter sequels.
Neither did it satisfy pent-up demand for proof of the company’s vision. It became clear only a new SimCity could do that, but Maxis had committed its overworked development teams to the 1996 ultimatum. Production on a new SimCity sequel hadn’t even begun. That meant an empty pipeline for 1997 and nothing but bad news in the year’s first-quarter finance report.
Spring slipped away while the new SimCity gestated, but the SimCity 2000 cashflow was waning. In desperation, Maxis management cast about for a stopgap. Maxis acquired Texas-based developer Cinematronics, launched a mascot-fronted children’s label, and greenlit an FMV game. It wasn’t to be. Summer found the fledgling 3D SimCity in critical condition and the company coffers bare. Maxis accepted a buyout from Electronic Arts.
EA arrived and cleaned house. The new general manager, Luc Barthelet, cancelled ancillary projects and refocused a leaner Maxis on SimCity 3000. Work began afresh, this time without the 3D mandate. The result would be redemption for Maxis and SimCity, built on the bones of Cinematronics and the other cancelled stopgaps.
One of the few to survive Barthelet’s bloodletting was Streets of SimCity, a game built on the SimCopter engine. It had been intended for a misbegotten Maxis Sports label, but it released the day before Halloween 1997 under the new EA Maxis branding. Halloween would prove to be an appropriate omen for a game unloved by critics, unplayable for many gamers, and still unmentionable in the Maxis offices years later. | 312,857 |
Puni Maru Cotton Candy Koala
Licensed by Puni
Dopey – Milk tea scented, Sassy – Fairy floss scented, and Kiddo the blueberry koala!
Comes with cotton candy tag
Measurements:
13 cm x 9 cm
**Squishies are sold as received; minor defects include air bubbles, paint imperfections, and imperfections on seams
Squishies with obvious defects will not be shipped
hailey –
love it! sooo soft and squishy! i got the grey one and now i want to collect all!
CharBarSquishies –
I love this squishy so much!!!! It smells amazing and super soft and slow rising!!! I got the pink one! SO CUTE
Paityn –
Super cute, soft, and slow rising but it’s super sticky so a lot of dirt or dust sticks to it, also doesn’t smell like fairy floss but still a great squishy! P.S got the pink one in 20 dollar lucky bag. | 404,602 |
Many businesses hold important data that would be seriously damaging to lose. Information of real value sits on their computers – account records, financial details, money matters of every kind – and a business should use every available means to protect it from loss. One sure way of doing this is with a server backup.
If you have never lost important information, and never taken a serious blow from losing it, you may not appreciate how much a server backup matters. There is a common misconception that the data people have saved on their computers is safe. In truth, plenty of things threaten the material you have stored on a server. Unless you keep a remote or server backup of some kind, you are exposed to losing your data in a single mishap.
A range of hazards faces your data. Among the problems that can strike your computer are virus attacks (unsurprisingly the most common), technical failures, natural disasters, and plain human error. A backup plays a crucial role in keeping you safe from all of these risks.
With today's technology, the data you back up can be restored with just a few clicks. That gives you confidence while running your business and storing its information, because you are guaranteed a way to recover your data quickly.
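To make the idea concrete, here is a minimal sketch (our own illustration, not the service described in this post) of backing up a folder and restoring it after data loss, using only Python's standard library:

```python
import pathlib
import shutil

def back_up(source: pathlib.Path, backup_root: pathlib.Path) -> pathlib.Path:
    """Copy an entire folder into the backup location and return the copy's path."""
    backup_root.mkdir(parents=True, exist_ok=True)
    target = backup_root / (source.name + ".bak")
    if target.exists():
        shutil.rmtree(target)  # this sketch keeps only the most recent copy
    shutil.copytree(source, target)
    return target

def restore(backup_copy: pathlib.Path, source: pathlib.Path) -> None:
    """Replace the (lost or damaged) folder with the backed-up copy."""
    if source.exists():
        shutil.rmtree(source)
    shutil.copytree(backup_copy, source)
```

A real backup service layers scheduling, encryption, versioning, and off-site storage on top of this basic copy-and-restore cycle.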
Backing up your data also brings heightened security. Backed-up data is generally safer than data kept only on your own machines – a benefit many people overlook, but one that counts for a great deal. Among other things, it is better protected from hackers and other unwanted intrusions.
The cost of keeping backup copies is low in most cases. That means avoiding unnecessary expense: you spend less on storage and on retrieving data when you need it.
A good server backup is managed by professional technicians who work around the clock, so the data stored on the server stays safe, up to date, and well organized. With experts supervising it, you can relax, assured that your data is in secure hands.
Do you have a reliable server backup? Click the link below to get in touch with the experts.
Friday, April 29, 2011
Server Backup – Keeping Your Business Data Secure
Posted by Catherine Wilson at 4:02
\section{Technical results}
In this section we prove our cohomological theorem on Selmer growth.
The larger body of results we draw upon comes from \Nekovar's
formalism of Selmer complexes, which expresses the arithmetic local
and global dualities in the language of derived categories. Since
familiarity with these ideas is not necessary for our proofs, we cite
the relevant theorems from \cite{nekovar}. The basic constructions
are sketched in an appendix, for the benefit of the reader who wishes
to know of their origins.
We also use two key ideas of Mazur--Rubin that allow one to force
algebraic $p$-adic $L$-functions to have zeroes, granted they obey
certain functional equations. In the earlier portion of this section,
we begin by recalling Mazur--Rubin's ideas in the appropriate
generality. In the latter part, we prove our cohomological theorem.
\subsection{Skew-Hermitian complexes and functional equations}
\label{sub-mr}
In this section we review the method of Mazur--Rubin, stating their
results in a generality that is suitable for our needs.
Consider a complex $C^\bullet = [\Phi \stackrel{u}{\hookrightarrow}
\Psi]$, concentrated in degrees $[1,2]$, with $\Phi,\Psi$ finite free
over $\La$ of the same rank, and $u$ injective. Assume that this
complex is equipped with a quasi-isomorphism $\alpha \cn C^\bullet
\stackrel{\sim}{\to} \Hom(C^\bullet,\La)^\iota[-3]$ satisfying
$\Hom(\alpha,\La)^\iota[-3] = -\alpha$ up to chain homotopy. The
following provides an example of such a complex.
Let $M$ be free of finite rank over $\La$, equipped with a
nondegenerate, skew-Hermitian $\La$-bilinear pairing $h\cn M \otimes
M^\iota \to \La$, with image contained in $\fkm$. If we write $M^* :=
\Hom_\La(M^\iota,\La)$, the adjoint $h^\text{ad} \cn M \to M^*$ serves
as the boundary operator of a complex $[M \stackrel{h^\text{ad}}{\to}
M^*]$ concentrated in degrees $1,2$; the nondegeneracy of $h$ means
that $h^\text{ad}$ is injective. This complex is equipped with an
obvious duality pairing. The complex just described, together with
its duality structure, is denoted $C(M,h)^\bullet$ and called a {\it
basic skew-Hermitian complex}.
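For orientation, here is a minimal rank-one illustration (our own example, not drawn
from \cite{mr:org}). Suppose $\Ga$ is torsion-free, fix a nontrivial $\ga \in \Ga$,
and set $\lambda := \ga - \ga\inv$, so that $\lambda \in \fkm$ and $\lambda^\iota =
-\lambda$. Taking $M = \La$ with the pairing $h(x \otimes y) := \lambda\,xy^\iota$,
one checks that $h$ is nondegenerate and skew-Hermitian, and the associated basic
skew-Hermitian complex is
\[
C(\La,h)^\bullet = [\La \stackrel{\lambda}{\to} \La],
\]
with $H^2(C(\La,h)^\bullet) \cong \La/\lambda\La$ and characteristic ideal
$\lambda\La$.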
We will make use of the following two propositions of Mazur--Rubin.
\begin{ppn}\label{mr-rep}
Every $C^\bullet$ is quasi-isomorphic to a $C(M,h)^\bullet$, in a
manner respecting the duality pairings.
\end{ppn}
\begin{proof}
This is \cite[Proposition 6.5]{mr:org}.
\end{proof}
The proof of existence relies crucially on Nakayama's lemma, and thus
on the fact that $R$ is local. The author sees no means to generalize
the methods of \cite{mr:org} beyond the local case.
\begin{ppn}\label{mr-gen}
Let $\Xi$ be a finite group of commuting involutions of $\Ga$.
Suppose $I \subset \La$ is a nonzero principal ideal that is preserved
by $\Xi$. Then there exists a generator $\calL \in I$ such that
$\calL^\xi = \ep(\xi)\calL$ for some homomorphism $\ep \cn \Xi \to
\{\pm1\}$. The value $\ep(\xi)$ depends only on $\xi$ and $I$, and
not on $\Xi$ or $\calL$.
\end{ppn}
\begin{proof}
The proof in \cite[Proposition 7.2]{mr:growth} refers to the case
where $R$ is the ring of integers in a finite extension of $\bbQ_p$,
but it applies without change to any $R$ under our hypotheses.
\end{proof}
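As a simple illustration of the proposition (our own example): suppose $\Ga$ is
torsion-free, let $\ga \in \Ga$ be nontrivial, and take $\Xi = \{1,\iota\}$ with
$\iota$ the inversion involution. The principal ideal $I = (\ga - \ga\inv)\La$ is
preserved by $\Xi$, and the generator $\calL = \ga - \ga\inv$ satisfies $\calL^\iota
= \ga\inv - \ga = -\calL$; thus $\ep(\iota) = -1$ for this ideal.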
Consider a basic skew-Hermitian complex $C^\bullet = C(M,h)^\bullet$
over $\La$, as above. We will apply the preceding proposition to the
characteristic ideal
\[
I := \chr_\La H^2(C^\bullet) = \chr_\La(\coker h^\text{ad}) =
\det(h^\text{ad})\La
\]
(if it is nonzero), and to the group $\Xi$ generated by the two
involutions $\iota$ and $\sigma$, which we now recall.
First, one always has the inversion involution $\iota \cn \ga \mapsto
\ga\inv$. We can use the skew-Hermitian property of $h$ to calculate
that
\[
\det(h^\text{ad})^\iota = \det(h^{\text{ad}\,\iota}) =
\det(-h^\text{ad}) = (-1)^r\det(h^\text{ad}),
\]
with $r := \rank_\La M \pmod{2}$. This shows that $I$ is stable under
$\iota$, and moreover that $\ep(\iota) = (-1)^r$. We point out that
$r$ may be computed ``over $R$'' as follows. The involution $\iota$
acts trivially on $R$, so $h^\text{ad} \pmod{\calI}$ is
skew-symmetric; therefore, $\rank_{R,\fkp} \left(\img h^\text{ad}
\otimes_\La R\right)$ is even for every minimal prime $\fkp$ of $R$.
This lets us calculate that, for all such $\fkp$,
\[\begin{split}
r &= \rank_\La M^*
= \rank_{R,\fkp} M^* \otimes_\La R \\
&\equiv \rank_{R,\fkp}
(M^* \otimes_\La R) / (\img h^\text{ad} \otimes_\La R) \pmod{2} \\
&= \rank_{R,\fkp}
(\coker h^\text{ad} \otimes_\La R),
\end{split}\]
where the last equality is by the right exactness of $\otimes$.
For the other involution, we recall that we are given a degree $2$
subfield $K_0$ of $K$ as in \S\ref{sub-notation}, and we have the
involution $\sigma$ which acts on $\Ga_\pm$ via $\ga \mapsto
\ga^{\pm1}$. In the next section, our complex $C$ will arise
functorially from a Galois module $T$ defined over $K_0$. Each lift
of $\sigma$ to $\Gal(K_\infty/K_0)$ will induce an isomorphism $C
\stackrel{\sim}{\to} C^\sigma$, and therefore $I = \chr_\La H^2(C)$
will be stable under $\sigma$.
\subsection{The cohomological theorem}\label{sub-cohom}
We begin our discussion of our main theorem on Selmer growth by laying
out the setup and hypotheses.
Continue with notations as in \S\ref{sub-notation}. Let $T$ be a
nonzero, free, finite rank $R$-module with a continuous, linear
$G_{K_0,S}$-action. We require the following list of hypotheses and
data attached to $T$ (whose motivations are explained in Remark
\ref{rem-hyp-explain}):
\begin{description}
\item[(Symp)] $T$ is symplectic; i.e., it is equipped with a
Galois-equivariant perfect pairing $T \otimes T \to R(1)$, and hence
an isomorphism $j \cn T \stackrel{\sim}{\to} \scrD(T)(1)$, that is
skew-symmetric in the sense that $\scrD(j)(1) = -j$.
\item[(Ord)] For each $v \in \Sigma$, we are given a $G_v$-stable
$R$-direct summand $T_v^+ \subset T$ that is Lagrangian for the
symplectic structure: $j(T_v^+) = \scrD(T/T_v^+)(1)$. Set $T_v^- =
T/T_v^+$, obtaining an exact sequence:
\[
0 \to T_v^+ \to T \to T_v^- \to 0.
\]
\item[(Tam)] For every place $v$ of $K$ lying over $\Sigma'$, the
submodule $T^{I_v}$ is assumed {\it free} over $R$. Moreover, for
such $v$, the operator $\Frob_v-1$ acts bijectively on $H^1(I_v,T)$.
\end{description}
Set $A = D(\scrD(T)) \cong D(T)(1)$ (by the self-duality of $T$)
and $A_v^\pm = D(T_v^\mp)(1) \subset A$, obtaining an exact sequence:
\[
0 \to A_v^+ \to A \to A_v^- \to 0.
\]
One can view $A$ as the ``divisible incarnation'' of $T$. Specifying
$T_v^+$ is equivalent to specifying $A_v^+$. For an extension $L$ of
$K$ and a place $v$ of $L$ lying over $v_0 \in \Sigma$, we also set
$T_v^{\pm} = T_{v_0}^{\pm}$ and $A_v^\pm = A_{v_0}^\pm$.
Here are our two remaining hypotheses:
\begin{description}
\item[(Irr)] The following morphism (in which $A^{G_{K,S}}$ maps
diagonally) is injective.
\[
A^{G_{K,S}} \to \bigoplus_{v \text{ lying over } \Sigma} A_v^-
\]
\item[(Zero)] For all places $v$ of $K$ lying over $\Sigma$, one has
$(T_v^-/\fkm)^{G_v} = 0$.
\end{description}
Given the above data, we may define the {\it (strict Greenberg) Selmer
groups} of $A$ over any algebraic extension $L$ of $K$. For each
prime $v$ of $L$, choose a decomposition group $G_v$ above $v$ in
$\Gal(\ov{K}/L)$ and let $I_v$ be its inertia subgroup. Then the
Selmer group of $A$ over $L$ is
\begin{equation}\label{eqn-sel-defn}
S_A(L) := \ker\left[ H^1(\Gal(\ov{K}/L),A) \to \prod_{v \nmid p}
H^1(I_v,A) \times \prod_{v \mid p} H^1(G_v,A_v^-) \right].
\end{equation}
We now state our main technical theorem.
\begin{thm}\label{thm-cohom}
With notation as in \S\ref{sub-notation}, assume the above five
hypotheses hold for $T$. If $S_A(K)$ has odd $R$-corank, then for at
least one choice of sign $\ep = \pm$, we have $\corank_R S_A(L) \geq
[L:K]$ for every finite extension $L/K$ contained in $K_\ep$.
\end{thm}
\begin{rem}\label{rem-hyp-explain}
The assumption (Ord) is a variant of assuming that $T$ is ``ordinary''
(or better: ``Pan\v{c}i\v{s}kin'') above $p$, and (Tam) is related to
$p$ not dividing a ``Tamagawa number'' at $v$, for all $v \nmid p$;
see Remark \ref{rem-DVR-simplify} below. The condition (Irr) holds,
in particular, if $A[\fkm]^{G_{K,S}} = 0$, and {\it a fortiori} if
$A[\fkm]$ is an irreducible residual representation (note that its
rank is at least two). The hypothesis (Zero) excludes the case of an
``algebraic exceptional zero'' playing a similar role to those found
by Mazur--Tate--Teitelbaum in \cite{mtt}.
\end{rem}
\begin{rem}\label{rem-DVR-simplify}
In the case where $R$ is the ring of integers $\calO$ in a finite
extension $F$ of $\bbQ_p$, some simplifications are possible. First,
$\calO$ is a DVR, so that $A$ is isomorphic to $T \otimes_\calO
F/\calO$. Second, $\calO$ is a PID, so that in (Ord), to determine
$T_v^+$, it suffices to specify the subspace $T_v^+ \otimes_\calO F
\subset V$, where $V = T \otimes_\calO F$; moreover, in (Tam), the
freeness assumption is automatically satisfied. Third, by the
equation
\[
\scrD(\text{Err}_v^{ur}(\scrD,T)) \stackrel{\sim}{\longrightarrow}
D(\text{Err}_v^{ur}(\Phi,T))
\]
appearing in the proof of \cite[7.6.7.ii]{nekovar}, combined with the
calculation of \cite[7.6.9]{nekovar}, one can rephrase (Tam) as the
claim that for every $v \in \Sigma'$ one has
$H^1(I_v,T)_\text{tors}^{\Frob_v=1} = 0$. The order of
$H^1(I_v,T)_\text{tors}^{\Frob_v=1}$ is precisely the ($p$-part of
the) {\it Tamagawa number} of $T$ at $v$ (see \cite[Proposition
I.4.2.2.ii]{fpr}). The same computation also shows that in order to
verify this last criterion, it would suffice to show that
$V^{I_v}/T^{I_v} \twoheadrightarrow A^{I_v}$.
\end{rem}
\begin{rem}
By \cite[10.7.15.iii]{nekovar}, under almost identical hypotheses, one
can take $\ep = -1$, and for $L/K$ contained in $K_-$ one knows that
$\corank_R S_A(L)$ is odd.
\end{rem}
Our proof of the theorem relies on the following results of \Nekovar,
which allow us to represent $S_A(L)$ in terms of a certain Selmer
complex (cf.\ Proposition \ref{ppn-control}). For all our Selmer
complexes, over any number field containing $K_0$, we choose the local
conditions $\Delta$ to be ``unramified'' at primes $v$ lying over
$\Sigma'$, and ``(strict) Greenberg'' at primes $v$ lying over
$\Sigma$.
\begin{lem}\label{lem-dual-cond}
The $\scrD(1)$-dual local conditions to $\Delta$ are isomorphic to
$\Delta$ under the identification $T \cong \scrD(T)(1)$.
\end{lem}
\begin{proof}
Plug (Tam) into \cite[7.6.12]{nekovar}, and plug (Symp) and (Ord) into
\cite[6.7.6.iv]{nekovar}.
\end{proof}
This proposition is \Nekovar's Iwasawa-theoretic arithmetic duality
(and perfectness) theorem:
\begin{ppn}\label{ppn-bound}
The Iwasawa-theoretic Selmer complex
\[
C := \wt{\bfR\Ga}_{f,\mathrm{Iw}}(K_\infty/K,T;\Delta)
\]
defined in \cite[8.8.5]{nekovar} lies in
$\bfD_\mathrm{perf}^{[1,2]}(\La)$, and is equipped with a
skew-Hermitian duality quasi-isomorphism
\[
\alpha \cn C \stackrel{\sim}{\longrightarrow} \scrD_\La(C)^\iota[-3].
\]
When we represent $C$ by a complex of the form $[\Phi
\stackrel{u}{\to} \Psi]$ with $\Phi,\Psi$ finite free over $\La$ (with
respective degrees $1,2$), the modules $\Phi,\Psi$ have the same rank.
Moreover, the $\La$-module
\[
\calS := H^2(C)
\]
is $\La$-torsion if and only if the differential $u$ is injective.
\end{ppn}
\begin{proof}
All the claims follow by copying the steps of \cite[9.7]{nekovar}
word-for-word. In particular: To see that $C$ lies in
$\bfD_\text{perf}^{[0,3]}$, use the proof of \cite[9.7.2.ii]{nekovar},
noting the necessity of the freeness condition in (Tam) (which is
automatic in the case under \Nekovar's consideration). That $\alpha$
is an isomorphism follows from Lemma \ref{lem-dual-cond} and
\cite[9.7.3.iv]{nekovar}. It is skew-Hermitian by
\cite[9.7.7(ii)]{nekovar}. The placement $C \in
\bfD_\text{perf}^{[1,2]}(\La)$ follows from (Irr) and
\cite[9.7.5.ii]{nekovar}. The final two claims are
\cite[9.7.7.iv]{nekovar}.
\end{proof}
The letter ``$\calS$'' is meant to remind us of the word ``Selmer''.
This choice of mnemonic is because of the following comparison.
\begin{ppn}\label{ppn-control}
Let $S_A(K_\infty)$ be as in Equation \ref{eqn-sel-defn}. For each
(possibly infinite) subextension $L$ of $K_\infty/K$, recalling that
$\calS_L := \calS \otimes_\La \La_L$, we have
\[
\calS_L \cong D(S_A(L))^\iota,
\qquad \text{i.e.} \qquad
S_A(L) \cong D(\calS_L)^\iota.
\]
\end{ppn}
\begin{proof}
We use the notation of \cite[9.6]{nekovar}, with the exception that
our $S_A(L)$ is written $S_A^\text{str}(L)$ there. By
\cite[9.6.3]{nekovar}, for any $L$ as in the proposition, there is a
surjection $\wt{H}_f^1(L/K,A) \twoheadrightarrow S_A(L)$, which is an
isomorphism provided that for all places $v \in \Sigma$, we have
$(A_v^-)^{G_v \cap G_L} = 0$. By \cite[9.6.6.iii]{nekovar}, it
suffices to check the latter condition when $L=K$; by Nakayama's
lemma, this is equivalent to requiring that $(T_v^-/\fkm)^{G_v} \cong
(A_v^-[\fkm])^{G_v} = 0$, which is precisely (Zero).
Let $L$ be any subextension of $K_\infty/K$. Invoking
\cite[9.7.2.i]{nekovar}, we find that
\begin{equation}\label{eqn-control}
D(S_A(L))^\iota \cong H^2(C(L)),
\end{equation}
where $C(L)$ is the Iwasawa-theoretic Selmer complex constructed just
like $C$ was, but with $L$ in place of $K_\infty$. In particular, our
proposition follows in the case $L = K_\infty$.
We now invoke \Nekovar's control theorem \cite[8.10.10]{nekovar} (cf.\
the discussion at the end of \S\ref{sub-global}), showing that $C(L)
\cong C \otimes^\bfL_\La \La_L$. Represent $C$ by a complex
$C^\bullet$ of the form $[\Phi \stackrel{u}{\to} \Psi]$ as in
\ref{ppn-bound}. Since $\Phi,\Psi$ are free, the object $C(L)$ is
represented by the complex $C^\bullet \otimes_\La \La_L$. Therefore,
\[
H^2(C(L)) \cong H^2(C^\bullet \otimes_\La \La_L) = \coker(u \bmod
\calI_L) = \coker(u) \bmod \calI_L = \calS_L,
\]
which, together with Equation \ref{eqn-control}, proves the
proposition in general.
\end{proof}
In particular, under our hypotheses, a form of ``perfect control''
holds: the natural maps $S_A(L) \to S_A(L')^{\Gal(L'/L)}$ are
isomorphisms, for any $K_\infty/L'/L/K$.
\begin{proof}[Proof of Theorem \ref{thm-cohom}]
Recall that when $L = K_\pm$, we write ``$\pm$'' as a subscript
instead of ``$K_\pm$''. It suffices to show that at least one of
$\calS_\pm$ is not torsion over its respective $\La_\pm$, because then
for every finite subextension $L/K$ of $K_\pm$, we have
\[
\corank_R S_A(L)
= \rank_R (\calS_\pm \otimes_{\La_\pm} \La_L)
\geq \rank_{\La_\pm} \calS_\pm \cdot \rank_R \La_L \geq 1 \cdot [L:K],
\]
as was desired. If $\calS$ is not a torsion $\La$-module, then both
of the $\calS_\pm$ are not torsion $\La_\pm$-modules, and our theorem
follows trivially. So let us assume henceforth that $\calS$ is
torsion over $\La$. In this case, the characteristic ideal $\chr_\La
\calS \subseteq \La$ is nonzero, and $(\chr_\La \calS)\La_\pm$ divides
$\chr_{\La_\pm} \calS_\pm$. Therefore, in order to show that
$\calS_\pm$ is nontorsion, it suffices to produce a generator of
$\chr_\La \calS$ whose image in some $\La_\pm$ is zero.
As in Proposition \ref{ppn-bound}, $C$ is representable by a complex
of the form $[\Phi \stackrel{u}{\hookrightarrow} \Psi]$. Applying
Proposition \ref{mr-rep}, let us represent $C$ once and for all by a
basic skew-Hermitian complex $C(M,h)^\bullet$. ($M$ is an
``organizing module'' for the arithmetic of $T$; cf.\ \cite{mr:org}.)
Recall that $\corank_R S_A(K)$ is assumed to be odd, and that
\[
r = \rank_\La M \equiv \rank_R \calS_K = \corank_R S_A(K) \pmod{2}.
\]
As in \S\ref{sub-mr}, take $\Xi$ to be generated by $\iota$ and
$\sigma$, and obtain from \ref{mr-gen} a generator $\calL$ of
$\chr_\La \calS$, together with a homomorphism $\ep \cn \Xi \to
\{\pm1\}$ describing the action of $\Xi$ on $\calL$. If $\ep(\sigma)
= -1$, then since $\sigma$ acts trivially on $\La_+$ we must have
$\calL \mapsto 0 \in \La_+$, so we are done. (In this case, we did
not need to assume that $r$ is odd.) In the case that $\ep(\sigma) =
+1$, we see that $\ep(\sigma\iota) = \ep(\sigma)\,\ep(\iota) =
1\,(-1)^r = -1$, which forces $\calL \mapsto 0 \in \La_-$, since
$\sigma\iota$ acts trivially on $\La_-$.
\end{proof} | 14,934 |
Wilson, James William
17 Dec. 1832–2 July 1910
James William Wilson, engineer, was born in Granville County, the son of the Reverend Alexander Wilson, a noted Presbyterian clergyman and educator, and Mary Willis Wilson. He grew up in Alamance County, attended the Caldwell Institute in Greensboro, and was graduated from The University of North Carolina in 1852. Choosing the profession of civil engineer, he became a rodman on the survey of the Western North Carolina Railroad and was soon promoted to assistant engineer. Wilson settled in Morganton in 1856 and in 1861 married Louise Erwin, of McDowell County, who bore him ten children.
When the Civil War began, Wilson returned to Alamance County and raised Company F, Sixth North Carolina Troops. He became captain of the company and was later promoted to major and assistant quartermaster on the staff of General Stephen D. Ramseur. Wilson took part in most of the campaigns of the Army of Northern Virginia from the Seven Days' Battle in 1862 to Cedar Creek in 1864. In late 1864 Governor Zebulon B. Vance appointed him superintendent of the Western North Carolina Railroad. He was removed from the post during Reconstruction but continued to do work on the road under contract.
After the war Wilson became active in the Democratic party and with Alphonso C. Avery and Samuel McD. Tate formed a triumvirate that dominated the politics of Burke County and had a strong influence in the affairs of the Western North Carolina Railroad. In 1876 he was elected to the state house of representatives, where he championed the bill that reorganized the road, in which he owned 1,400 shares of stock, making him one of the largest private stockholders. In the spring of 1877 the new board of directors elected Wilson president of the railroad, and in this post he achieved his major claim to fame. Assuming the positions of chief engineer and general superintendent at a reduced salary, he worked to restore the finances of the near-bankrupt company to a sound condition and pushed forward the work on the line, which had virtually come to a halt during Reconstruction.
The building of the railroad through the mountains of western North Carolina, in the face of forbidding terrain, frequent landslides, and severe weather, as well as shortages of money, labor, and supplies, and sniping from political enemies, constituted an engineering feat that can only be described as heroic. In 1880 the state sold the Western North Carolina Railroad to a New York syndicate, which soon lost control to the Richmond and Danville Railroad. Wilson remained chief engineer until 1887, when he resigned to become chief engineer of the Knoxville, Cumberland Gap, and Louisville Railroad in Tennessee.
When North Carolina established a railroad commission in 1891, Wilson became its chairman. He was generally considered to be the member most sympathetic to the railroads. In 1897 Governor Daniel L. Russell retaliated against the commission's refusal to reduce railroad rates by attempting to remove James W. Wilson and S. Otho Wilson (no relation) from their offices. Russell charged that James Wilson and Vice-President Alexander B. Andrews of the Southern Railroad owned the Round Knob Hotel, which was worthless except as an eating house for the Southern Railroad, and that they had persuaded S. Otho Wilson to lease the hotel in his mother's name with the understanding that the railroad would abandon its other eating houses and give its exclusive patronage to the hotel. Russell, a Republican, suspended the two Wilsons for conflict of interest, but the Democratic-controlled legislature, which met in 1899, reinstated them. S. Otho Wilson resigned immediately following his vindication, and James Wilson's term expired soon afterwards.
Wilson was a member of the Democratic state executive committee for several years. He served on the board of directors of the Western Insane Asylum at Morganton from 1882 to 1891, becoming president in 1888, and was a member of the board of trustees of The University of North Carolina from 1891 to 1899 and 1901 to 1905. He lived in Charlotte during the last four years of his life and was buried in Morganton.
References:
Collier Cobb, "James Wilson," in Charles L. Van Noppen Papers (Manuscript Department, Duke University Library, Durham).
Charlotte Observer, 3 July 1910, 15 Oct. 1939.
Josephus Daniels, Editor in Politics (1941).
Governors' Papers, 1877–81 (North Carolina State Archives, Raleigh).
Daniel L. Grant, Alumni History of the University of North Carolina (1924).
Weymouth T. Jordan, comp., North Carolina Troops, 1861–1865: A Roster, vol. 4 (1973).
Public Documents of the State of North Carolina, Session 1899, Document 21 (1899).
Western North Carolina: Historical and Biographical (1890).
Additional Resources:
Russell, Daniel L. "Governor's Message Relative to the Removal of J. W. and S. O. Wilson, Railroad Commissioners" (Document No. 11). Public Documents of the State of North Carolina. Raleigh, N.C.: Edwards & Broughton, 1899. 1-11.
Wilson, James W. "Superintendent's Report." Proceedings of the Annual Meeting of Stockholders of the Western N. Carolina Rail-Road. Statesville, N.C.: Printed at the Job Office of the American, 1865. 6-7.
Iobst, Richard W. Bloody Sixth: The Sixth North Carolina Regiment, Confederate States of America. Raleigh, N.C.: North Carolina Confederate Centennial Commission, 1965.
Image Credits:
F.G. Kernan & Co. "Engraving, Accession #: H.1968.88.1." North Carolina Museum of History.
1 January 1996 | Bromberg, Alan B.
\begin{document}
\newcounter{prop}
\newtheorem{theorem}{Theorem}
\newtheorem{proposition}[prop]{Proposition}
\newtheorem{corollary}[prop]{Corollary}
\newtheorem{lemma}[prop]{Lemma}
\newcounter{rem}
\newtheorem{remark}[rem]{Remark}
\newtheorem*{remarknonumb}{Remark}
\newcounter{exerc}
\newtheorem{exercise}[exerc]{Exercise}
\def\theremark{\unskip}
\newtheorem{definition}{Definition}
\def\thedefinition{\unskip}
\begin{center}{\large ORDER PROBLEM FOR CANONICAL SYSTEMS AND \\
\vskip3mm
A CONJECTURE OF VALENT}
\end{center}
\bigskip \bigskip
\hskip2.5cm\vbox{\hsize10.5cm\baselineskip4mm
\noindent{\small\textbf{Abstract.} We establish a sharp upper estimate for the order of a canonical system in terms of the Hamiltonian. This upper estimate becomes an equality in the case of Krein strings. As an application we prove a conjecture of Valent about the order of a certain class of Jacobi matrices with polynomial coefficients.}
\noindent\textbf{Keywords:} canonical systems, spectral asymptotics, Jacobi matrices, strings.}
\footnotetext{AMS subject classifications: 34L15, 47B36.}
\bigskip
\centerline{\sf R.~Romanov}
\medskip
\begin{center}
Department of Mathematical Physics and Laboratory of Quantum Networks, \\
Faculty of Physics, St Petersburg State University, \\
198504, St Petersburg, Russia,\\
e-mail: [email protected]
\end{center}
\bigskip \bigskip
\textbf{Introduction.} Let $ L $ be a positive number and $ \cH $ be a summable function on $ [ 0 , L ] $ with values in $ 2\times 2 $ matrices, such that $ \cH ( x ) \ge 0 $ a. e. Let $ J= \left( \begin{array}{cc} 0 & -1 \cr 1 & 0 \end{array} \right) $. \textit{A canonical system} $ ( \cH , L ) $ is the matrix differential equation of the form
\be\la{can} J \frac{d Y}{d x } = z \cH Y ; \; z \in \C , \; x \in [ 0 , L ] . \ee
The matrix solution $ M ( x , z ) $ of this equation satisfying $ M ( 0 , z ) = I $ is called the monodromy matrix. We write $ M ( z ) = M ( L , z ) $. The function $ \cH $ is referred to as the Hamiltonian. Without loss of generality we assume that $ \operatorname{tr} \cH ( x ) = 1 $ a. e. The background on canonical systems can be found in \cite{deBr, Sachnovich}.
Given a canonical system, the quantities
\[ \limsup_{ |z| \to \infty } \frac{ \log | M_{ ij } ( z ) | }{ |z| } \] and
\[ \limsup_{ |z| \to \infty } \frac{ \log \log | M_{ ij } ( z ) | }{ \log |z| } \]
do not depend on $ i , j $ (see e.g. \cite{BW}), and are called type and order of the system, respectively. The type is given by the classical Krein -- de Branges formula \cite{deBr},
\be\la{KdeB} \mbox{type of } ( \cH , L ) = \int_0^L \sqrt{ \det \cH ( t ) } \diff t . \ee
In particular, this formula says that if $ \det \cH ( x ) = 0 $ a. e. then the matrix elements of $ M ( z ) $ have minimal type, and a fundamental question is to find or estimate the order of the system. This question is the order problem referred to in the title. One should notice here that the operators corresponding to canonical systems typically are not semibounded below, and hence conventional variational principles are not suitable for estimating their eigenvalues.
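As a numerical illustration of (\ref{KdeB}) (a sketch, not part of the argument of the paper): for a constant trace-normalized Hamiltonian the monodromy matrix is a matrix exponential, and the growth rate of $ M_{11} ( i \tau ) $ along the imaginary axis approaches $ L \sqrt{ \det \cH } $.

```python
# Numerical sanity check of the Krein--de Branges type formula: for a constant
# trace-normalized Hamiltonian H the monodromy matrix is M(z) = expm(-z L J H)
# (since J Y' = z H Y gives Y' = -z J H Y), and log|M_11(i tau)|/tau should
# approach L * sqrt(det H) as tau grows.  Parameters below are made up.
import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, -1.0], [1.0, 0.0]])
L, h = 1.0, 0.25
H = np.diag([h, 1.0 - h])            # tr H = 1, det H = h(1-h)

tau = 200.0
M = expm(-1j * tau * L * (J @ H))    # monodromy matrix at z = i*tau
rate = np.log(abs(M[0, 0])) / tau

print(rate, L * np.sqrt(h * (1.0 - h)))   # close for large tau
```

Here $(JH)^2 = -h(1-h)I$, so $M_{11}(i\tau) = \cosh(\tau L\sqrt{h(1-h)})$ and the two printed numbers differ by $O(\tau^{-1})$.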
In the present paper we establish an upper estimate for the order in terms of the Hamiltonian which is sharp in the power scale and gives the actual value of the order in all available examples where it is known. Let us formulate the result.
\begin{definition}
A Hamiltonian $ \cH $ is of finite rank if there exist numbers $ x_j $, $ 0 = x_0 < x_1 < \dots < x_N = L $, and a finite set of vectors, $ \{ e_j \}_{ j = 0 }^{ N-1 } $, $ e_j \in \R^2 $, of unit norm such that
\[ \cH ( x ) = \llangle \cdot , e_j \rrangle_{ \C^2 } e_j , \; x \in ( x_ j , x_{ j+1 } ) , \; j = 0, \dots , N - 1 . \]
The (elements of) sets $ \{ x_j \} $, $ \{ e_j \} $ and the number $ N $ are called parameters and the rank of the Hamiltonian,\footnote{Notice that we do not require $ e_j \ne e_{ j+1 } $, hence the rank of a finite rank Hamiltonian is not defined uniquely.} respectively.
\end{definition}
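The following sketch (with made-up parameters) illustrates why finite rank Hamiltonians lead to polynomial monodromy matrices: for a rank-one orthogonal projection $ P $ one has $ ( J P )^2 = 0 $, so the monodromy over an interval of length $ \delta $ on which $ \cH = P $ is the linear factor $ I - z \delta J P $, and the full monodromy matrix is a matrix polynomial of degree at most $ N $.

```python
# Sketch: for a rank-one orthogonal projection P one has (J P)^2 = 0, so
# expm(-z*delta*J P) = I - z*delta*J P exactly, and the monodromy matrix of a
# finite rank Hamiltonian is a product of such linear factors -- hence a
# matrix polynomial in z of degree <= N.  Intervals and angles are made up.
import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, -1.0], [1.0, 0.0]])
deltas = [0.3, 0.5, 0.2]                 # interval lengths x_{j+1} - x_j
phis   = [0.0, 1.0, 2.5]                 # angles of the unit vectors e_j

def proj(phi):
    e = np.array([np.cos(phi), np.sin(phi)])
    return np.outer(e, e)

def monodromy(z):
    M = np.eye(2, dtype=complex)
    for d, phi in zip(deltas, phis):
        P = proj(phi)
        M = (np.eye(2) - z * d * (J @ P)) @ M   # exact, since (J P)^2 = 0
    return M

P0 = proj(phis[0])
assert np.allclose((J @ P0) @ (J @ P0), 0)                     # nilpotency
assert np.allclose(expm(-0.6 * (J @ P0)),
                   np.eye(2) - 0.6 * (J @ P0))                 # exp is linear

# degree <= 3 in z: the 4th finite difference of any entry vanishes
f = [monodromy(float(k))[0, 0] for k in range(5)]
print(f[4] - 4 * f[3] + 6 * f[2] - 4 * f[1] + f[0])            # ~ 0
```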
\begin{theorem} Let $ ( \cH , L ) $ be a canonical system and let $ 0 < d < 1 $.
1. Suppose that there exists a $ C > 0 $ such that for each $ R $ large enough there exist a Hamiltonian $ \cH_R $ of finite rank $ N(R) $ defined on $ (0 , L ) $ and a set of numbers (depending on $R$) $ \{ a_j \}_0^{ N(R) -1 } $, $ 0 < a_j \le 1 $, for which the following conditions are satisfied ($ P_j = \langle \cdot , e_j \rangle e_j $; $ x_j $, $ e_j $ are the parameters of $ \cH_R $):
(i) \[ \sum \frac 1{a_j^2} \int_{x_j }^{ x_{ j+1 }} \len \cH ( t ) - \cH_R ( t) \rin \diff t \le C R^{ d-1 } , \]
(ii) \[ \sum a_j^2 ( x_{ j+1 } - x_j ) \le C R^{ d-1 } , \]
(iii) \[ \sum \log \( 1 + \frac{ \len P_j - P_{ j+1 } \rin }{ a_j a_{ j+1 } } \) \le C R^d , \]
(iv)
\[ \log a_0^{ -1 } + \log a_{ N(R) -1 }^{ -1 }+ \sum \left| \log\frac{ a_j }{ a_{ j-1} } \right|\le C R^d . \]
Then there exists a $ K > 0 $ such that
\[ \len M ( z ) \rin \le e^{ K \left| z \right|^d } \]
for all $ z \in \C $.
2. For each $ p $, $ 0 < p< 1 $, there exists a system $ ( \cH , L ) $ of order $ p $ which for any $ \von > 0 $ satisfies the assumption of assertion 1 with $ d = p + \von $.
\end{theorem}
Theorem 1 gives an upper bound for the order in terms of the quality of approximation of the Hamiltonian by piecewise constants. The choice of approximators is natural in the sense that finite rank Hamiltonians have order zero (the corresponding monodromy matrices are polynomials), see also Section \ref{comont1}. Piecewise constant (or, more generally, polynomial) approximations are the mainstream in studying spectral asymptotics for integral and differential operators, see for instance \cite{BirmanS,BirmanS2}. By way of comparison, notice that those studies are mainly aimed at controlling the number of ``pieces'' necessary for approximation of a given function with a given accuracy, while in Theorem 1 the number $ N (R) $ does not play a direct role. In special situations, however, an optimal choice of approximation leads to conditions explicitly involving the number $ N ( R ) $ (see assumption (B) in the following theorem).
An important class of canonical systems is constituted by systems with diagonal Hamiltonians $ \cH $. Such systems arise in the description of mechanical strings with variable density, sometimes called Krein strings, see \cite{KWW} for details. In the context of the order problem ($ \operatorname{rank} \cH ( x ) = 1 $) a diagonal Hamiltonian may take only two values, $ \frak H_ 1 = \begin{pmatrix} 1 & 0 \cr 0 & 0 \end{pmatrix} $ and $ \frak H_2 = \begin{pmatrix} 0 & 0 \cr 0 & 1 \end{pmatrix} $. Our next result says that in this case the upper bound implied by Theorem 1 coincides with the actual order. The formulation is as follows. Define $ X_1 = \{ x \in ( 0 , L ) \colon \cH ( x ) = \frak H_1 \} $, $ X_2 = \{ x \in ( 0 , L ) \colon \cH ( x ) = \frak H_2 \} $. Let $ | \cdot | $ stand for the Lebesgue measure.
\begin{theorem}\la{selfsim}
Suppose that for a. e. $ x\in [ 0 , L ] $ either $ \cH ( x ) = \frak H_ 1 $ or $ \cH ( x ) = \frak H_2 $. Then the order of the system $ ( \cH, L ) $ coincides with the infimum of those $ d $, $ 0< d < 1 $, for which there exists a positive $ C = C ( d ) $ such that for each $ R $ large enough there exists a covering of the interval $ ( 0 , L ) $ by $ n = n ( R ) $ intervals $ \omega_j $ such that
\textit{(A)} \[ \sum \sqrt{ | \omega_j \cap X_1 | \, | \omega_j \cap X_2 | } \le C R^{ d-1 } ; \]
\textit{(B)} \[ n( R ) \le C R^d . \]
\end{theorem}
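To make condition (A) concrete, notice that an interval $ \omega_j $ lying entirely in $ X_1 $ or in $ X_2 $ contributes nothing to the sum, so only intervals meeting both sets matter. The following sketch (with a made-up alternating string whose endpoints are $ b_j = 1 - j^{-\alpha} $) covers $ ( 0 , 1 ) $ by the first $ n $ gaps plus the tail and evaluates the sum in (A); by the AM--GM inequality the single tail term is at most $ n^{-\alpha}/2 $.

```python
# Sketch for conditions (A), (B): X1 and X2 are alternating intervals with
# endpoints b_j = 1 - j**(-alpha) (made-up example).  Covering (0,1) by the
# gaps (b_{j-1}, b_j), j <= n, plus the tail (b_n, 1), every covering interval
# except the tail lies entirely in X1 or X2, so the sum in (A) reduces to the
# single tail term sqrt(|tail ∩ X1| |tail ∩ X2|) <= |tail|/2 = n**(-alpha)/2.
import numpy as np

alpha = 1.0                          # corresponds to order p = 1/(1+alpha)
N = 4000                             # truncation of the sequence b_j
b = 1.0 - np.arange(1, N + 1, dtype=float) ** (-alpha)

def sum_A(n):
    total = 1.0 - b[n - 1]           # |tail| = n**(-alpha)
    # measure of X1 inside the tail: every other gap beyond b_n (truncated)
    tail1 = sum(b[k + 1] - b[k] for k in range(n, N - 1, 2))
    tail2 = total - tail1
    return np.sqrt(max(tail1, 0.0) * max(tail2, 0.0))

for n in (25, 50, 100):
    print(n, sum_A(n), n ** (-alpha) / 2)   # sum (A) vs the AM-GM bound
```

In this sketch, choosing $ n \sim R^{ (1-d)/\alpha } $ makes the tail term $ O ( R^{ d-1 } ) $ while keeping $ n ( R ) = O ( R^d ) $ precisely when $ d \ge 1/(1+\alpha) $, consistently with the sharpness construction in the proof of Theorem 1.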
We give several examples of applications of Theorems 1 and 2. Namely, we establish upper estimates of the order in terms of smoothness for Hamiltonians from classical smoothness classes (H\"older, bounded variation) by applying Theorem 1, see Corollary \ref{var}, prove a conjecture of Valent about the order of a certain class of Jacobi matrices (Corollary \ref{Valenthyp}), and give a rather short calculation of the order for the Cantor string (see Section \ref{Castr}).
The first result on the order problem we are aware of is the 1939 theorem of Liv\v sic \cite{Livshitz} saying that the order of a canonical system corresponding to an indeterminate moment problem with moments $ \gamma_j $ is not less than $ \limsup_{ n \to \infty } (2n \log n ) /\log \gamma_{ 2n} $, the order of the entire function $ \sum z^{ 2j }/ \gamma_{ 2j } $. A modern two-line proof of this assertion can be found in \cite{BergSzwarc}.
The next result, essentially due to Berezanski\u{\i} \cite{Berez}, is formulated in terms of Jacobi matrices. Berezanski\u{\i} studied Jacobi matrices of the form
\be\la{Jac} \( \begin{array}{cccccc} q_1 & \rho_1 & 0 & \dots & & \cr
\rho_1 & q_2 & \rho_2 & 0 & \dots & \cr
0 & \rho_2 & q_3 & \rho_3 & 0 & \dots \cr & 0 & \ddots & \ddots & \ddots & \ddots \end{array} \) \ee
with $ \rho_j$ growing. Although he did not explicitly address the problem of order, he made a crucial technical observation that makes it possible to estimate the corresponding orthogonal polynomials at large $ j $ in terms of $ \rho_j^{ -1 } $. The explicit translation of his result to the order problem is given in \cite{BergSzwarc}. It says essentially that if $ \rho_j $ is a log-convex or log-concave sequence at large $ j $, and $ q_j $ is small relative to $ \rho_j $, then the order of the system coincides with the convergence exponent of the sequence $ \rho_j $. More precisely, the following assertion holds.
\begin{theorem}\la{Bsw}\cite{BergSzwarc} Let (\ref{Jac}) be a limit-circle Jacobi matrix. If $ \rho_j $ satisfies $ \rho_{ j- 1} \rho_{ j+1 } \le \rho_j^2 $ for all $ j $ large enough, and $ q_j / \rho_{ j-1 } \in l^1 $, then the order of the system equals $ \inf \{ \alpha > 0 \colon \rho_j^{ - \alpha } \in l^1 \} $. The same result holds if $ \rho_j $ satisfies the inequality $ \rho_{ j- 1} \rho_{ j+1 } \ge \rho_j^2 $ instead.
\end{theorem}
In the language of canonical systems, these results refer to a special class of Hamiltonians defined as follows. Let $ b_j $ be a bounded sequence of reals, $ 0 = b_0 < b_1 < b_2 < \dots $, $ L = \lim b_j $, and $ e_j \in {\mathbb{R}}^2 $, $ j \ge 1 $, a sequence of vectors of unit norm, $ e_j \ne \pm e_{j-1} $. Let $ \Delta_j = ( b_{ j-1 } , b_j ) $, $ j \ge 1 $.
Define the Hamiltonian $ \cH $ on $ ( 0 , L ) $ corresponding to these sequences by
\be\la{HamJacobi} \cH ( x ) = \langle \cdot , e_j \rangle e_j , \; \; x \in \Delta_ j . \ee
The correspondence between Hamiltonians of this form and limit-circle Jacobi matrices is described in detail in \cite{Katz}. Upon suitable normalization it is one-to-one, the corresponding selfadjoint operators are unitarily equivalent, and the Jacobi parameters $ q_j $, $ \rho_j $ are expressed via $ e_j $ and $ b_j $ by explicit formulae. The relation of Theorem \ref{Bsw} and our result is that the relevant part of Theorem \ref{Bsw} (the order is not greater than the convergence exponent) easily follows from Theorem 1 applied to Hamiltonians of this class, see Section \ref{comparis} for details.
Apart from the general results mentioned above, there are several isolated explicitly solvable non-trivial examples of Jacobi matrices for which the order is known and is non-zero. Two of them were found by Valent and his collaborators in \cite{BergValent} (order $ 1/4 $) and \cite{GLV} (order $ 1/3 $). In these examples, motivated by studies of birth-death processes, $ q_n $ and $ \rho_n^2 $ are polynomials, $ | q_n | \sim \rho_n $ at infinity. On their basis it was conjectured in \cite{Valent} that in a class of Jacobi matrices with polynomial $ \rho_n^2 $ and $ q_n $ the order is $1/\deg q_n $. As explained below, the fact that the order is not less than $ 1 / \deg q_n $ is almost trivial, hence the hypothesis is essentially that the order does not exceed $ 1 / \deg q_n $. We establish the latter in Corollary \ref{Valenthyp} by applying Theorem 1.
Another set of examples in the order problem comes from studies of non-Weyl spectral asymptotics for 1D differential operators. Apparently the first result in this direction is due to Uno and Hong \cite{UH}, who found the order for the Cantor string. In \cite{SolVerb}, the authors calculated the order (in fact, they found, in a sense, the whole leading term) for a class of strings with self-similar weights. The order is also known for a rather general class of strings related to so-called $ d $--sets \cite{Triebel}. Notice that \cite{UH,SolVerb} rely on the variational principle for the eigenvalues, hence their methods are apparently unsuitable to obtain results like Theorem 1 because of the lack of semiboundedness.
A general formula for the order of a string was obtained in \cite{Katz1}. In the situation of Theorem 2 it says that the order of the system $ ( \cH , L) $ is
\be\la{Katzformula} \inf\left\{ d> 0 \colon \int_0^{ \tilde L } dM( x) \int_0^{ \min \{ x , \tilde L-x \} } \( s ( M ( x+s ) - M ( x -s ) ) \)^{ \frac d2 - 1} \diff s < \infty \right\} .\ee
Here $ M $ is a non-decreasing singular function on an interval $ [ 0 , \tilde L ] $, $ \tilde L + M ( \tilde L ) = L $, such that $ X_1 = \{ x + M ( x ) \colon x \in [ 0 , \tilde L ], \, M^\prime ( x ) = 0 \} $.
This formula, to the best of our knowledge, has never been used to calculate the order of an actual string of the class considered in this paper (see also Section \ref{Katzf}). We use an argument from the proof of (\ref{Katzformula}) in \cite{Katz1} in the derivation of Theorem 2, see Lemma \ref{Katzform}.
The structure of the paper is as follows. In Section \ref{deBrK} we reproduce a proof of the inequality "lhs of (\ref{KdeB})"$ \le $ "rhs of (\ref{KdeB})" from \cite{deBrII} with a minor simplification. The reason we give it here is that it provides one of the ideas used in the proof of Theorem 1. The proofs of Theorems 1 and 2 occupy sections named accordingly. In the Comments sections we discuss the assumptions of these theorems and compare them with the earlier results. In the Applications section we establish upper bounds for the order in smooth classes and prove the Valent conjecture.
Throughout the paper the norm signs refer to the operator norm for $ 2 \times 2 $ matrices, and $ \frak H_{ 1,2 } $ are the matrices defined before Theorem 2. Unless specified otherwise, summations extend over all values of the summation parameter for which the summand is defined. $ C $ stands for any constant whose exact value is of no interest to us. Given a Jacobi matrix (\ref{Jac}), $ P_j ( \lambda ) $ and $ Q_j (\lambda) $ stand for the solutions of the corresponding three-term recurrence relation subject to the initial conditions $ P_1 = 1 $, $ P_0 = 0 $, $ Q_1 = 0 $, $ Q_2 = 1/\rho_1 $ (orthogonal polynomials of the first and second kind, respectively).
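For illustration, the polynomials $ P_j $, $ Q_j $ can be generated directly from the three-term recurrence $ \rho_{ j-1 } y_{ j-1 } + q_j y_j + \rho_j y_{ j+1 } = \lambda y_j $ (a sketch with made-up coefficients $ q_j $, $ \rho_j $); the constancy of the discrete Wronskian $ \rho_j ( P_{ j+1 } Q_j - P_j Q_{ j+1 } ) = -1 $ serves as a check of the initial conditions.

```python
# Orthogonal polynomials of the first and second kind for a Jacobi matrix,
# generated by the three-term recurrence
#   rho_{j-1} y_{j-1} + q_j y_j + rho_j y_{j+1} = lambda y_j   (j >= 2),
# with P_1 = 1, P_2 = (lambda - q_1)/rho_1 and Q_1 = 0, Q_2 = 1/rho_1.
# Coefficients q_j, rho_j below are made up for illustration.

def ortho_polys(lam, q, rho, n):
    """Return lists [P_1..P_n], [Q_1..Q_n]; q, rho are 1-based via q[j-1]."""
    P = [1.0, (lam - q[0]) / rho[0]]
    Q = [0.0, 1.0 / rho[0]]
    for j in range(2, n):                      # compute y_{j+1} from row j
        for y in (P, Q):
            y.append(((lam - q[j - 1]) * y[j - 1] - rho[j - 2] * y[j - 2])
                     / rho[j - 1])
    return P, Q

q   = [0.0, 0.5, -0.3, 0.2, 0.0]
rho = [1.0, 2.0, 3.0, 4.0, 5.0]
P, Q = ortho_polys(1.7, q, rho, 5)

# constancy of the discrete Wronskian: rho_j (P_{j+1} Q_j - P_j Q_{j+1}) = -1
for j in range(1, 5):
    print(j, rho[j - 1] * (P[j] * Q[j - 1] - P[j - 1] * Q[j]))
```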
\section{The upper estimate in the Krein--de Branges formula}\la{deBrK}
\begin{proposition} Let $ ( \cH , L ) $ be a canonical system. Then
\[ \mbox{type of } ( \cH , L ) \le \int_0^L \sqrt{ \det \cH ( t ) } \diff t . \]
\end{proposition}
Let $ p ( x ) $ be the exponential type of $ M ( x , \lambda ) $. For each $ y \in ( 0 , L ) $ the monodromy matrix satisfies the integral equation
\be\la{intmono} M ( x , \lambda ) = M ( y , \lambda ) - \lambda \int_y^x J \cH ( t ) M ( t , \lambda ) \diff t . \ee
A crude estimate of the Volterra iterations for this equation shows that $ | p ( x ) - p ( y ) | \le | x - y | $ and thus $ p ( x ) $ is Lipschitz. The idea of the proof is to estimate $ p^\prime (x ) $ in terms of $ \cH $ and then integrate it to obtain the required bound.
\begin{proof} Let $ \Omega $ be a constant invertible matrix, to be chosen later. Equation (\ref{intmono}) can then be rewritten as follows,
\[ \Omega M ( x , \lambda ) = \Omega M ( y , \lambda ) - \lambda \int_y^x \( \Omega J \cH ( t ) \Omega^{ -1 } \) \Omega M ( t , \lambda ) \diff t . \]
This is a Volterra equation with respect to $ \Omega M $. Solving it by iterations we have (the Gronwall lemma),
\be\la{estGron} \len \Omega M ( x , \lambda ) \rin \le \len \Omega M ( y , \lambda ) \rin \exp \( \left| \lambda \right| \int_y^x \len \Omega J \cH ( t ) \Omega^{ -1 } \rin \diff t \) \ee
for all $ y \le x $. It follows that $ p ( x ) $ satisfies
\[ p ( x ) \le p ( y ) + \int_y^x \len \Omega J \cH ( t ) \Omega^{ -1 } \rin \diff t . \]
Taking the limit $ y \uparrow x $ we obtain that for a. e. $ x \in [ 0 , L ] $
\[ p^\prime ( x ) \le \len \Omega J \cH ( x ) \Omega^{ -1 } \rin . \]
The lhs does not depend on $ \Omega $, hence let us minimize the rhs in $ \Omega $.
\begin{lemma}\la{variat}
Let $ A $ be a $ 2 \times 2 $-matrix with $ \operatorname{tr} A = 0 $, then
\be\la{zerotr} \inf_{ \Omega \colon \det \Omega \ne 0 } \len \Omega A \Omega^{ -1 } \rin = \sqrt{ | \det A | } . \ee
\end{lemma}
\begin{proof}
If $ \det A \ne 0 $ the lemma is trivial -- it suffices to choose $ \Omega $ to be the diagonalizer of $ A $. If $ \det A = 0 $, then without loss of generality one can take $ A = \begin{pmatrix} 0 & 0 \cr 1 & 0 \end{pmatrix} $, $ \Omega = \mbox{diag} \( a^{ -1 } , a \) $, and send $ a \to 0 $.
\end{proof}
Applying this lemma gives $ p^\prime ( x ) \le \sqrt{ \det \cH ( x ) } $ a. e., and the assertion follows by integrating this inequality. \end{proof}
This proof is essentially the one in \cite[Theorem X]{deBrII} except that de Branges uses an explicit reduction of the matrix $ J \cH ( x) $ rather than mere existence of a diagonalizer.
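A quick numerical check of Lemma \ref{variat} (a sketch, with an arbitrarily chosen matrix): for trace-free $ A $ with $ \det A \ne 0 $ the diagonalizer attains $ \sqrt{ | \det A | } $, while in the nilpotent case $ \Omega = \operatorname{diag} ( a^{-1} , a ) $ drives the norm to zero.

```python
# Numerical check of Lemma 1: for a trace-free A with det A != 0 the
# diagonalizer Omega achieves ||Omega A Omega^{-1}|| = sqrt(|det A|); for a
# nilpotent A the matrices Omega = diag(1/a, a) send the norm to 0 as a -> 0.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, -1.0]])       # tr A = 0, det A = -7
w, V = np.linalg.eig(A)                       # A = V diag(w) V^{-1}
Omega = np.linalg.inv(V)
val = np.linalg.norm(Omega @ A @ np.linalg.inv(Omega), 2)
print(val, np.sqrt(abs(np.linalg.det(A))))    # both equal sqrt(7)

B = np.array([[0.0, 0.0], [1.0, 0.0]])        # det B = 0, nilpotent
a = 1e-3
Om = np.diag([1.0 / a, a])
print(np.linalg.norm(Om @ B @ np.linalg.inv(Om), 2))   # = a**2 -> 0
```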
\section{Proof of Theorem 1}
\subsection{The estimate.} Let $ ( \cH , L ) $ be a canonical system. For an arbitrary finite set of numbers $ x_j $, $ 0 \le j \le N $, such that $ 0 = x_0 < x_1 < x_2 < \dots < x_N = L $, and arbitrary invertible matrices $ \Omega_j $, $ 0 \le j \le N $, an argument from the proof of Proposition 1 (consider (\ref{estGron}) with $ y = x_j $, $ x = x_{ j+1 } $, $ \Omega = \Omega_j $) shows that
\[ \len \Omega_j M ( x_{j+1} , \lambda ) \rin \le \len \Omega_j M ( x_j , \lambda ) \rin \exp \( \left| \lambda \right| \int_{ x_j}^{x_{j+1}} \len \Omega_j J \cH ( t ) \Omega_j^{ -1 } \rin \diff t \) \]
for $ 0 \le j < N $.
With the notation $ M_j = M ( x_j , \lambda ) $ we then have
\bequnan \len \Omega_{ j+1 } M_{ j+1 } \rin \le \len \Omega_{ j+1 } \Omega_j^{ -1 } \rin \cdot \len \Omega_j M_{j+1} \rin \le \len \Omega_{ j+1 } \Omega_j^{ -1 } \rin \len \Omega_j M_j \rin \\ \exp \( | \lambda | \int_{ x_j}^{x_{ j+1 }} \len \Omega_j J \cH ( t ) \Omega_j^{ -1 } \rin \diff t \) . \eequnan
Taking logarithm, summing the resulting inequalities in $ j $ and choosing $ \Omega_N = I $ we get,
\be\la{norm} \log \len M ( \lambda ) \rin \le | \lambda | \sum_{j = 0 }^{ N-1 } \int_{ x_j}^{x_{ j+1 }} \len \Omega_j J \cH ( t ) \Omega_j^{ -1 } \rin \diff t + \sum_{ j=0 }^{ N-1 } \log \len \Omega_{ j+1 } \Omega_j^{ -1 } \rin + \log \len \Omega_0 \rin . \ee
Let $ P_j $ be an orthogonal rank 1 projection, $ P_j = \langle \cdot , e_j \rangle e_j $, $ e_j \in \R^2 $, $\| e_j \| = 1 $. Then the summand in the first sum in the rhs can be estimated as follows,
\begin{eqnarray}\la{OmegaH} \int_{ x_j}^{x_{ j+1 }} \len \Omega_j J \cH ( t ) \Omega_j^{ -1 } \rin \diff t \le \int_{ x_j}^{x_{ j+1 }} \len \Omega_j J \( \cH ( t ) - P_j \) \Omega_j^{ -1 } \rin \diff t + ( x_{ j+1} - x_j ) \len \Omega_j J P_j \Omega_j^{ -1 } \rin \nonumber \\ \le \len \Omega_j \rin \len \Omega_j^{ -1 } \rin \int_{ x_j}^{x_{ j+1 }} \len \cH ( t ) - P_j \rin \diff t + ( x_{ j+1} - x_j ) \len \Omega_j J P_j \Omega_j^{ -1 } \rin . \end{eqnarray}
Let us choose the matrices $ \Omega_j $. The choice is suggested by the proof of Lemma \ref{variat},
\[ \Omega_j = \operatorname{diag} \( a^{ -1 }_j , a_j \) U_j , \]
where $ U_j $ is a unitary transform reducing $ J P_j $ into its Jordan form,
\[ U_j J P_j U_j^{ -1 } = \begin{pmatrix} 0 & 0 \cr 1 & 0 \end{pmatrix} , \]
and $ a_j \in ( 0 , 1 ] $. More precisely, we set $ U_j = e^{ - \varphi_j J } $ with $ \varphi_j \in [ 0 , 2 \pi ) $ defined by $ e_j = \begin{pmatrix} \cos \varphi_j \cr \sin \varphi_j \end{pmatrix} $. With this choice
$ 1^\circ $. $ \len \Omega_j \rin = \len \Omega_j^{ -1 } \rin = a_j^{ -1 } $, $ \len \Omega_j J P_j \Omega_j^{ -1 } \rin = a_j^2 $, and one can continue the estimate (\ref{OmegaH}),
\be\la{OmegaH1} \textrm{rhs of } (\ref{OmegaH}) \le \frac 1{a_j^2} \int_{ x_j}^{x_{ j+1 }} \len \( \cH ( t ) - P_j \) \rin \diff t + a_j^2 \( x_{ j+1 } - x_j \) . \ee
$ 2^\circ $. For $ j \le N-2 $
\bequnan \Omega_{ j+1 } \Omega_j^{ - 1 } & = &\begin{pmatrix} a_{ j+ 1 }^{ -1 } & 0 \cr 0 & a_{j+1} \end{pmatrix} U_{ j+1 } U_j^{ -1 } \begin{pmatrix} a_j & 0 \cr 0 & a_j^{ - 1 } \end{pmatrix} = \\ & & \begin{pmatrix} a_{ j+ 1 }^{ -1 } & 0 \cr 0 & a_{ j+1 } \end{pmatrix} e^{ \( \varphi_j - \varphi_{ j+1 } \) J } \begin{pmatrix} a_j & 0 \cr 0 & a_j^{ - 1 } \end{pmatrix} = \\ & & \begin{pmatrix} a_j a_{ j+ 1 }^{ -1 } & 0 \cr 0 & a_{j+1} a_j^{ -1 } \end{pmatrix} + O \( \frac{ \| P_{ j+1 } - P_j \| }{ a_j a_{ j+1 } } \) . \eequnan
Then ($ \log ( x +y ) \le | \log y | + \log ( 1 + x ) $ for $ x , y > 0 $)
\[ \log \len \Omega_{ j+1 } \Omega_j^{ - 1 } \rin \le \left| \log \( a_j a_{ j+ 1 }^{ -1 } \) \right| + C \log \( 1+ \frac{ \| P_{ j+1 } - P_j \| }{ a_j a_{ j+1 } } \) . \]
Plugging this and (\ref{OmegaH1}) in (\ref{norm}) and taking into account that $ \len \Omega_0 \rin = a_0^{ -1 } $, $ \len \Omega_{ N-1 }^{ -1 } \rin = a_{ N-1 }^{ -1 } $, we obtain the first assertion of the theorem.
\subsection{Sharpness.} Let $ p \in ( 0 , 1 ) $, $ \alpha = p^{ -1 } - 1 $, $ d = p + \von $. Define $ b_j = 1 - j^{ -\alpha } $ for $ j \ge 1 $, and for $ x \in [ 0 , 1 ] $ let
\[ \cH ( x ) = \begin{cases} \frak H_1 ,\; x \in \bigcup_j [ b_{ 2j-1 } , b_{ 2j } ] \cr
\frak H_2 ,\; x \notin \bigcup_j [ b_{ 2j-1 } , b_{ 2j } ] . \end{cases} \]
The required assertion will be proved if we show that (a) $ \cH $ satisfies the conditions (i)--(iv) of the theorem for all $ \von > 0 $, (b) the order of the system $ ( \cH , 1 ) $ is not less than $ p $. Given an $ R> 0 $, define
\[ \cH_R ( x ) = \begin{cases} \cH ( x ) ,\; x \in [0 , b_{ N-1}] \cr
\frak H_1 , \; x \in [ b_{ N-1 } , 1 ] , \end{cases} \] with $ N = N( R) $ to be chosen later. Let $ a_{ N-1 } = 1 $, $ a_j = R^{ \frac { d-1 }2 } $ for $ j \le N-2 $. We then have,
\bequnan \textrm{lhs of (i)} & \le & \frac 2{ a_{ N-1 }^2 \( N-1 \)^\alpha } = O \( N^{ - \alpha } \) , \\
\textrm{lhs of (ii)} & \le & C \sum_0^{ N-2 } a_j^2 j^{ -1-\alpha } + N^{ -\alpha } = O \( R^{ d-1} \) + O \( N^{ - \alpha } \) , \\
\textrm{lhs of (iii)} & \le & C ( N - 1 ) \log R + O ( \log R ) = O \( N \log R \) , \\
\textrm{lhs of (iv)} & = & O ( \log R ) .
\eequnan
Let $ N \sim R^p $ as $ R \to \infty $. Then assumptions (i)--(iv) are satisfied.
Let us now establish that the order of the system is not less than $ p $. To this end, we use the following identity. Let $ \Theta ( x , \lambda ) $ be the first column of $ M ( x , \lambda ) $. Differentiating $ \llangle \Theta , J \Theta \rrangle_{ \C^2 } $ in $ x $ with the help of equation (\ref{can}), we find
\be\la{identM11}
\Im \( M_{11} ( 1 , \lambda ) \overline{M_{21} \( 1 , \lambda \) } \) = \Im \lambda \int_0^1 \llangle \cH ( t ) \Theta ( t , \lambda ) , \Theta ( t , \lambda ) \rrangle_{ \C^2 } \diff t . \ee
In the situation under consideration the rhs is
\[ \Im \lambda \sum_j \( b_j - b_{j-1} \) \times \begin{cases} \left| M_{11} ( b_j , \lambda ) \right|^2 , & j \textrm{ even} \cr \left| M_{21} ( b_j , \lambda ) \right|^2 , & j \textrm{ odd} . \end{cases} \]
For $ \Im \lambda > 0 $ one can estimate this quantity from below. Let $ \delta_j = b_j - b_{ j-1 } $. We have
\be\la{estM11} \textrm{rhs of (\ref{identM11})} \ge \Im \lambda \sum \delta_{ 2j } \left| M_{11} ( b_{2j} , \lambda ) \right|^2 = \Im \lambda \sum \delta_{ 2j } \left| M_{11} ( b_{2j-1} , \lambda ) \right|^2 . \ee
In the last equality we took into account that $ M_{11} ( b_{ 2j } , \lambda ) = M_{11} ( b_{ 2j -1 } , \lambda ) $ in the situation under consideration as $ M_{11}^\prime ( x , \lambda ) = 0 $ when $ x \in ( b_{ 2j-1 } , b_{ 2j } ) $.
To estimate the rhs of (\ref{estM11}) from below we use the following corollary of the fact that all the zeroes of matrix elements of $ M ( x , \lambda ) $ are real.
\begin{remark} Let $ ( G , L ) $ be a canonical system such that $ ( 0 , L ) $ is a union of disjoint intervals, $ I_j $, accumulating only at $ L $, and $ G ( x ) $ is a constant rank 1 operator on each $ I_j $. Then $ M ( x , \cdot ) $ is a polynomial for all $ x \in ( 0 , L ) $. For any $ m, l $, $ 1 \le m,l \le 2 $, define $ k ( x ) $ to be the degree of the polynomial $ M_{ml} ( x , \cdot ) $, $ c ( x ) $ be its leading coefficient,
\[ M_{ ml} ( x , \lambda ) = c ( x ) \lambda^{ k ( x ) } + ( \textrm{a polynomial of degree} \le k ( x ) - 1 ) . \]
Then $ | M_{ ml} ( x , i \tau ) | \ge | c ( x ) | \tau^{ k ( x ) } $ when $ \tau > 1 $.
\end{remark}
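The Remark rests on the factorization $ M_{ml} ( x , \lambda ) = c ( x ) \prod_k ( \lambda - \lambda_k ) $ with real zeros $ \lambda_k $, so that $ | i \tau - \lambda_k | = \sqrt{ \tau^2 + \lambda_k^2 } \ge \tau $. A small sketch (random real zeros, made-up leading coefficient):

```python
# The Remark in action: for a polynomial with only real zeros x_k,
# |p(i tau)| = |c| * prod |i tau - x_k| >= |c| * tau**k,
# since |i tau - x_k| = sqrt(tau**2 + x_k**2) >= tau for real x_k.
import numpy as np

rng = np.random.default_rng(0)
roots = rng.normal(size=6)                    # six real zeros
c = 2.5                                       # leading coefficient
p = c * np.poly(roots)                        # coefficients of c*prod(z - x_k)

for tau in (1.5, 10.0, 100.0):
    print(abs(np.polyval(p, 1j * tau)) >= c * tau ** len(roots))   # True
```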
Applied in the situation under consideration to the left upper entry of the monodromy matrix at $ x = b_{ 2j-1 } $, this gives $ | M_{11} ( b_{ 2j-1 }, i \tau ) | \ge \left| c_j \right| \tau^{ k_j }$ for $ \tau > 1 $, $ c_j $ and $ k_j $ being the leading coefficient and the degree of the polynomial $ M_{11} ( b_{ 2j -1 } , \cdot ) $, resp., $ j \ge 1 $.
Let us calculate $ k_j $ and $ c_j $. Define $ M_j ( \lambda ) $ to be the value of the matrix solution of (\ref{can}) with the Cauchy data $ Y ( b_j ) = I $ at $ x = b_{ j+1 } $, then
\be\la{multt} M ( b_{ 2j -1 } , \lambda ) = M_{ 2j-2 } ( \lambda ) M_{ 2j-3 } ( \lambda ) \cdots M_2 ( \lambda ) M_1 ( \lambda ) \ee
by the multiplicative property of the monodromy matrices. A straightforward calculation gives
\[ M_ j ( \lambda ) = I + \lambda \delta_{j+1} \left\{ \begin{array}{cc} \begin{pmatrix} 0 & 0 \cr -1 & 0 \end{pmatrix} , & j \textrm{ odd} \cr
\begin{pmatrix} 0 & 1\cr 0 & 0 \end{pmatrix} , & j \textrm{ even} . \end{array} \right. \]
The leading term in the matrix polynomial $ M ( b_{ 2j-1 } , \lambda ) $ comes from choosing the terms of the first order in $ \lambda $ in each multiple in (\ref{multt}). It has the form
\[ \( -1 \)^{j+1} \delta_{ 2j - 1 } \delta_{ 2j-2 } \cdots \delta_2 \begin{pmatrix} 1 & 0 \cr 0 & 0 \end{pmatrix} . \]
Thus, $ k_j = 2j-2 $, $ | c_j | = \prod_{ n = 2 }^{ 2j-1 } \delta_n $, and one can continue the inequality in (\ref{estM11}),
\[ \textrm{rhs of } (\ref{estM11}) \ge \sum \delta_{ 2j } \( \prod_{ n = 2}^{ 2j-1 } \delta_n^2 \) \left| \lambda \right|^{ 2 ( 2j - 2 ) } \]
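The computation of $ k_j $ and $ c_j $ above can be checked numerically (a sketch with made-up lengths $ \delta_n $): the alternating product (\ref{multt}) of the linear factors $ M_j $ has $ (1,1) $ entry of degree $ 2j - 2 $ with leading coefficient $ \pm \delta_2 \cdots \delta_{ 2j-1 } $.

```python
# Check of the leading term of M(b_{2j-1}, lambda): the product of the
# alternating linear factors M_j = I + lambda*delta_{j+1}*N_j has (1,1) entry
# of degree 2j-2 with leading coefficient (-1)**(j+1) * delta_2...delta_{2j-1}.
import numpy as np

Nodd  = np.array([[0.0, 0.0], [-1.0, 0.0]])   # factor for j odd
Neven = np.array([[0.0, 1.0], [0.0, 0.0]])    # factor for j even

jj = 3                                        # M(b_5, lam) = M_4 M_3 M_2 M_1
deltas = {2: 0.5, 3: 0.4, 4: 0.3, 5: 0.2}     # made-up delta_2, ..., delta_5

def M_prod(lam):
    M = np.eye(2, dtype=complex)
    for j in range(1, 2 * jj - 1):            # j = 1, ..., 2jj-2
        N = Nodd if j % 2 == 1 else Neven
        M = (np.eye(2) + lam * deltas[j + 1] * N) @ M
    return M

lam = 1e5
lead = M_prod(lam)[0, 0] / lam ** (2 * jj - 2)
target = np.prod(list(deltas.values()))       # delta_2 * ... * delta_5
print(lead, target)                           # agree up to lower-order terms
```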
On the other hand, if $ \rho $ is the order of the system, then for any $ \von > 0 $ the lhs in (\ref{identM11}) is not greater than $ \exp \( C_\von \left| \lambda \right|^{ \rho + \von } \) $. By a standard relation between Taylor coefficients and the order of an entire function (see \cite{Levin}) it follows that for large $ j $
\[ \delta_{ 2j } \( \prod_{ n = 2}^{ 2j-1 } \delta_n^2 \) \le \( \frac Cj \)^{ \frac {4j}{\rho + \von } } . \]
Notice that the sequence $ \delta_j = j^{ - \alpha } - \( j+1\)^{ -\alpha } $ is monotone decreasing in $ j $, hence the last inequality implies that $ \delta_{ 2j } = O \( j^{ - 1/ ( \rho + \von ) } \) $. Comparing this with $ \delta_j \asymp j^{ - 1 - \alpha } = j^{ -1/p } $, we find $ \rho + \von \ge p $ for all $ \von > 0 $, that is, $ \rho \ge p $. The proof of Theorem 1 is thus completed.
\section{Comments on Theorem 1}
\subsection{Choice of approximants}\la{comont1} Let us consider a canonical system having a Hamiltonian of the form $ \cH = \langle \cdot , e ( x ) \rangle e ( x ) $ with $ e ( x ) = \begin{pmatrix} u ( x ) \cr v( x) \end{pmatrix} $, $ u , v $ being smooth functions on $ ( 0 , L ) $ subject to the condition $ u^\prime v - v^\prime u = -1 $. The order of the canonical system with this Hamiltonian is $ 1/2 $, for if $ Y = \begin{pmatrix} Y_+ \cr Y_- \end{pmatrix} $ is a solution of the system, then a straightforward calculation \cite{Remling} shows that $ y = Y_+ u + Y_- v $ satisfies the Schr\"odinger equation $ - y^{\prime \prime } + q y = \lambda y $ with the potential $ q = u^{ \prime \prime } / u $. This suggests that smooth functions cannot be used as approximants, at least in the whole range $ ( 0 , 1 ) $ of orders.
\subsection{Formulation} The Krein -- de Branges formula implies that if the assumptions of Theorem 1 are satisfied for some $ d < 1 $ then $ \operatorname{rank} \cH ( x ) = 1 $ a. e. This fact can easily be seen directly. Indeed, suppose that $ ( \cH , L ) $ is a canonical system such that for any $ R $ large enough the conditions (i)--(iv) are satisfied for some $ d \in ( 0 , 1 ) $. Define $ S_\epsilon = \{ t \colon \| \cH ( t ) f \| \ge \epsilon \| f \| \textrm{ for all } f \in \C^2 \} $, $ \epsilon > 0 $. Arguing by contradiction, let $ \epsilon > 0 $ be such that $ \left| S_\epsilon \right| > 0 $. Applying the Schwarz inequality and using conditions (i) and (ii) we find that $ \sum \sqrt{( x_{ j+1 } - x_j ) \int_{x_j }^{ x_{ j+1 }} \len \cH ( t ) - \cH_R ( t) \rin \diff t } \le C R^{ d-1 } $. The quantity $ \left| [ x_j , x_{ j+1 }] \cap S_\epsilon \right| $ estimates both factors in the summand from below (the second one up to the factor $ \epsilon $, since $ \len \cH ( t ) - \cH_R ( t ) \rin \ge \epsilon $ on $ S_\epsilon $), hence the sum is not less than $ \sqrt\epsilon \sum \left| [ x_j , x_{ j+1 }] \cap S_\epsilon \right| = \sqrt\epsilon \left| S_\epsilon \right| > 0 $. Taking the limit $ R \to \infty $ we obtain a contradiction.
\subsection{Sharpness} In the example establishing part 2 of Theorem 1 the conditions (i)--(iv) are satisfied with $ d = p $ if we insert a $ \log R $ factor in the rhs. It is not known to the author whether one can get rid of the logarithmic factor, that is, whether part 2 holds with $ \von = 0 $.
\subsection{Comparison with Theorem \ref{Bsw}}\la{comparis} The relevant part of Theorem \ref{Bsw} is the assertion that under the stated assumptions if $ \{ \rho_j^{ -1 } \} \in l^\alpha $ then the order is $ \le \alpha $. Let us first translate the setup of Theorem \ref{Bsw} into the language of canonical systems. As mentioned in the introduction, given a Jacobi matrix (\ref{Jac}), explicit formulae that express the Jacobi parameters $ q_j $, $ \rho_j $ via the corresponding Hamiltonian are known, see \cite{Katz}. Let us reproduce them in a convenient form. In the following theorem $ P_n $ and $ Q_n $ are the orthogonal polynomials of the first and second kind defined by the matrix (\ref{Jac}), respectively, and $ e_j $ and $ b_j $ are the parameters of the corresponding Hamiltonian of the type described after Theorem \ref{Bsw}. We write $ \varphi_j $ for the argument of the vector $ e_j \in \R^2 $, $ \Delta_j = ( b_{ j-1 } , b_j ) $, $ \delta_j = | \Delta_j | $.
\begin{theorem}\cite{Katz}\la{Ka}
The correspondence between canonical systems and limit-circle Jacobi matrices can be chosen so that
(i) $ \delta_n = P_n \( 0 \)^2 + Q_n \( 0 \)^2 $,
(ii) \[ \rho_j = \frac 1{| \sin \( \varphi_j - \varphi_{ j+1 } \) | \sqrt{
\delta_{ j+1} \delta_j }}, \; j \ge 1 . \]
\end{theorem}
Now let $ q_j = 0 $ for simplicity. Then it can be shown \cite{Katz} that $ e_j \perp e_{ j-1 } $, $ e_1 = \begin{pmatrix} 1 \cr 0 \end{pmatrix} $, hence $ \rho_j^{ -1 } = \sqrt{ \delta_j \delta_{ j+1 } } $. Solving for $ \delta_j $ we have
\[ \delta_{ j+1 } = \( \frac{ \rho_{ j-1 } \rho_{ j-3 } \cdots }{ \rho_j \rho_{ j-2 } \cdots } \)^2 . \]
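Indeed, squaring $ \rho_j^{ -1 } = \sqrt{ \delta_j \delta_{ j+1 }} $ gives $ \delta_{ j+1 } = \rho_j^{ -2 } \delta_j^{ -1 } $, hence the two-step recursion
\[ \delta_{ j+1 } = \frac 1{ \rho_j^2 \delta_j } = \( \frac{ \rho_{ j-1 }}{ \rho_j } \)^2 \delta_{ j-1 } = \( \frac{ \rho_{ j-1 } \rho_{ j-3 }}{ \rho_j \rho_{ j-2 }} \)^2 \delta_{ j-3 } = \cdots , \]
and iterating down to the boundary factor ($ \delta_1 $ or $ \delta_2 $ depending on the parity of $ j $, absorbed according to the initial conditions) yields the displayed product.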
Let $ \rho_j $ satisfy the assumption of Theorem \ref{Bsw}. Then $ \rho_{ j-1 } / \rho_j $ at large $ j $ is a monotone sequence having a limit $ \le 1 $, and if the limit is $ 1$, then it is increasing. This implies that $ \delta_j = O \( \rho_{ j-1 }^{ -1 } \) $ and therefore $ \{ \delta_j \} \in l^d $ for any $ d $ greater than the convergence exponent of the sequence $ \rho_j $. Notice that the monodromy matrix corresponding to the interval $ \Delta_j $ is $ I + O \( |\lambda | \delta_j \) $. This, the multiplicative property of monodromy matrices and $ \{ \delta_j \} \in l^d $ imply by elementary inequalities that the monodromy matrix of the system in question is $ O \( e^{ C \left| \lambda \right|^d } \) $, which gives the assertion under consideration.
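In more detail: denoting by $ M_j ( \lambda ) $ the monodromy matrix of the interval $ \Delta_j $ and using the elementary inequality $ \log ( 1 + x ) \le C_d \, x^d $, valid for all $ x \ge 0 $ and $ 0 < d \le 1 $, we have
\[ \len \prod_j M_j ( \lambda ) \rin \le \prod_j \( 1 + C \left| \lambda \right| \delta_j \) \le \exp \( C_d \sum_j \( C \left| \lambda \right| \delta_j \)^d \) \le e^{ C^\prime \left| \lambda \right|^d } , \]
the last step using $ \{ \delta_j \} \in l^d $.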
It is just as easy to derive this assertion from Theorem 1. Let $ d $ be such that $ \{ \rho_j^{ -1 } \} \in l^d $ and let $ \frak N = \{ j \colon \delta_j > R^{ -1 } \} $. Define $ \cH_R (x ) = \cH ( x ) $ whenever $ x \in \Delta_k $, $ k \in \frak N $. On the complement of $ \cup_{ k\in \frak N} \Delta_k $ we define $ \cH_R $ to be an arbitrary constant rank 1 orthogonal projection. With this definition, $ \cH_R $ is a finite rank Hamiltonian with parameters to be denoted $ x_j , f_j $. Let $ a_j = R^{ (d-1)/2 } $ whenever $ j $ is such that $ [ x_j , x_{ j+1 } ] $ coincides with one of the intervals $ \Delta_k $, $ k \in \frak N $, $ a_j =1 $ otherwise. Then conditions (i) and (ii) in Theorem 1 are satisfied because $ \{ \delta_j \} \in l^d $ and so $ \sum_{ \delta_j \le R^{ -1 } } \delta_j = O \( R^{ d-1 } \) $, conditions (iii) and (iv) are satisfied because the number of $ j $'s for which $ \delta_j > R^{ -1 } $ is $ O \( R^d \) $, again by $ \{ \delta_j \} \in l^d $, and therefore the rank of $ \cH_R $ is $ O \( R^d \) $ as well. Applying Theorem 1 we conclude that under the assumptions of Theorem \ref{Bsw} with $ q_j = 0 $ the order is not greater than $ d$. Thus, our result generalizes the upper estimate in Theorem \ref{Bsw}.
The case of $ q_j $ subject to the smallness condition of Theorem \ref{Bsw} can be obtained from this by standard methods of abstract perturbation theory.
\section{Applications}
\subsection{Smooth classes} A corollary of Theorem 1 is obtained when the conditions (i)--(iv) are satisfied with $ a_j $ independent of $ j $. In this case condition (ii) reduces to $ a_j^2 = O \( R^{d-1} \) $ and the lhs in the other three conditions is monotone decreasing in $ a_j $, hence without loss of generality one can assume that all four conditions are satisfied with $ a_j^2 = R^{ d-1 } $. Introduce the following notation. Given a finite rank Hamiltonian, $ \cG $, with parameters $ \{ e_j \} $, let $ \operatorname{Var} \cG \colon = \sum \len P_j - P_{ j+1 } \rin $, $ P_j = \langle \cdot , e_j \rangle e_j $.
\begin{corollary}\la{var}Assume $ 1/2 \le d < 1 $, and suppose that for every $ \von > 0 $ there exists a finite rank Hamiltonian $ \cH^\von $ defined on $ ( 0 , L ) $ such that
(a) \[ \len \cH - \cH^\von \rin_{ L^1 ( 0 , L ) } \le \von ,\]
(b) \[ \operatorname{Var} \cH^\von \le C \von^{ \frac{ 2d -1 }{ 2d - 2 } } . \]
Then the order of the system $ ( \cH , L ) $ is not greater than $ d $.
\end{corollary}
\begin{proof} System $ ( \cH , L ) $ obeys the condition of Theorem 1 with $ \cH_R = \cH^{ \von ( R ) } $, $ \von ( R ) = R^{ 2 ( d-1 ) } $, $ a_j = R^{ ( d-1)/2} $. Condition (a) implies (i), (b) implies (iii) via an elementary inequality, (ii) and (iv) are immediate. \end{proof}
This corollary allows one to give an upper estimate for the order of Hamiltonians in classical smoothness classes. We give two examples.
\begin{corollary}\la{holder} Let $ ( \cH , L ) $ be a canonical system with $ \operatorname{rank } \cH ( x ) = 1 $ a. e.
1. If $ \cH \in C^\alpha [ 0 , L ] $, $ 0 < \alpha \le 1 $, then the order of the system is not greater than $ 1 - \alpha/2 $.
2. If $ \cH $ has bounded variation then the order is not greater than $ 1/2 $.
\end{corollary}
\begin{proof} The first assertion follows from choosing $ x_j = Lj/N $, $ e_j \in \Ran \cH ( x_j ) $, for the parameters of the approximating Hamiltonian $ \cH^\von $ and adjusting $ N $. The second assertion is trivial. \end{proof}
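To spell out the bookkeeping in the first assertion, under the simplifying assumption that $ \cH ( x ) $ is an orthogonal projection for each $ x $: the choice above gives $ \len \cH - \cH^\von \rin_{ L^1 ( 0 , L ) } \le C N^{ -\alpha } $ and $ \operatorname{Var} \cH^\von \le C N^{ 1 - \alpha } $, so taking $ N \asymp \von^{ -1/\alpha } $ we get
\[ \operatorname{Var} \cH^\von \le C \von^{ \frac{ \alpha - 1 }\alpha } = C \von^{ \frac{ 2d - 1 }{ 2d - 2 }} , \qquad d = 1 - \frac \alpha 2 , \]
and Corollary \ref{var} applies.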
Notice that while the estimate of Theorem 1 is sharp, the Hamiltonian in the corresponding example is discontinuous. It is an open question whether the first assertion of Corollary \ref{holder} is sharp. The second assertion of the corollary admits an ``elementary'' proof based on a trick from \cite[Theorem 3.6]{Teschl}.
\subsection{Berg--Valent matrix.}\la{BVma} In this subsection we consider the order $ 1/4 $ Jacobi matrix of Berg--Valent \cite{BergValent} as a warmup for the proof of the Valent conjecture. We do not need the explicit formulae for the parameters $ q_n $ and $ \rho_n $. The necessary information about them from \cite{BergValent} is as follows:
$ 1^\circ $. $ \rho_n \sim n^4 $ as $ n \to \infty $.
$ 2^\circ $. The values of the corresponding orthogonal polynomials, $ P_n ( z ) $ and $ Q_n ( z ) $, at $ z= 0 $ have asymptotics $ P_n ( 0 ) \sim c_1 n^{ -1 } $, $ Q_n ( 0 ) \sim c_2 n^{ -1 } $ with $ c_{ 1, 2 } \ne 0 $ \cite[(2.33), (3.2)]{BergValent}.
By Theorem \ref{Ka} we find that $ \delta_j \sim C j^{ -2 } $, $ \sin ( \varphi_j - \varphi_{ j+1 } ) = O \( j^{ -2 } \) $. Let $ \cH_R = \cH $ on $ ( 0 , b_{N-1} ) $ with $ N \sim R^{ 1 - d } $, $ 1/4 \le d \le 1/2 $, and define $ \cH_R $ arbitrarily on $( b_{N-1} , L ) $ so that $ \cH_R $ becomes a finite rank Hamiltonian on $ (0, L) $. Let $ a_{ N-1 } = 1 $. Then
\bequnan \textrm{lhs of (i)} & \le & 2 ( L - b_{ N-1 } ) = 2 \sum_{ j \ge N } \delta_j = O \( R^{ d - 1 } \) , \\
\textrm{lhs of (ii)} & = & \sum_{ j=0}^{ N-2 } \frac{a_j^2}{ j^2 } + O \( R^{ d-1 } \) , \\
\textrm{lhs of (iii)} & \le & \sum \log \( 1 + \frac 1{ j^2 a_j a_{ j+1 }} \) + O ( 1 ).
\eequnan
Define $ a_j^2 = R^{ d-1 } $ for $ j \le R^d $, $ a_j^2 = R^{ 2d-1 } $ for $ R^d < j \le N-2 $. Then
$ \sum_0^{ N-2 } a_j^2 j^{-2 } = O \( R^{ d- 1 } \) $, the lhs in (iv) is $ O ( \log R ) $, and
\bequnan \sum \log \( 1 + \frac 1{ j^2 a_j a_{ j+1 }} \) \le C R^d \log R + R^{ 1-2d } \sum_{ j > R^d } \frac 1{j^2} = O \( R^d \log R \) + O \( R^{ 1-3d } \) = \\ O \( R^d\log R \) \eequnan
because $ d \ge 1/4 $.
Applying Theorem 1 we conclude that the order is not greater than $ 1/4 $, the actual order of the system found in \cite{BergValent}.
\subsection{Valent's conjecture} The following assertion generalizes the consideration of the previous example.
\begin{proposition}\la{1M}
Assume that a Jacobi matrix (\ref{Jac}) is such that $ P_n^2 ( 0 ) + Q_n^2 ( 0 ) \sim C n^{ \Delta - D } $, $ \rho_n \sim n^D $ as $ n \to \infty $ with numbers $ \Delta , D $ satisfying $ 1 < \Delta < D-1 $. Then the order is not greater than $ 1/D $.
\end{proposition}
Let $ \lambda_n , \mu_n $, $ n\ge 0 $, be sequences of reals, $ \lambda_n > 0 $ for $ n \ge 0 $, $ \mu_n > 0 $ for $ n \ge 1 $, $ \mu_0 = 0 $. Define
\be\la{qrho} q_{ n+1 } = \lambda_n + \mu_n , \; \rho_{ n+1} = \sqrt{ \lambda_n \mu_{ n+1 } } . \ee
The Jacobi matrix with parameters $ q_j $, $ \rho_j $ is said to be corresponding to birth-death processes with rates $ \lambda_n $ and $ \mu_n $ \cite{BergValent}.
\begin{corollary}\la{Valenthyp}
The order of the Jacobi matrix corresponding to birth-death processes with polynomial rates $ \lambda_n = ( n + B_1 ) \cdots ( n+ B_\ell ) $, $ \mu_n = ( n + A_1 ) \cdots ( n+ A_\ell ) $ subject to the condition $ 1 < \sum ( B_j - A_j ) < \ell - 1 $, is $ 1/ \ell$.
\end{corollary}
Let us establish the proposition first.
\begin{proof}
By Theorem \ref{Ka} the assumption of the proposition implies that
\[ \delta_j \sim C j^{ \Delta - D } , \; \sin \( \varphi_j - \varphi_{ j+1 } \) = O \( j^{ - \Delta } \) . \]
Fix a $ d > D^{ -1 } $ small enough and define $ \cH_R $ as in Section \ref{BVma}. The value of $ N $ is to be chosen so that $ \sum_{ j \ge N } \delta_j \asymp N^{ \Delta - D + 1 } = O \( R^{ d-1 } \) $, thus let $ N \sim R^{ \frac { d-1 }{ \Delta - D + 1 }} $. Define
\[ a_j^2 = \begin{cases} R^{ d-1 } , & j \le R^d ,\cr R^{ d-1 + d ( D - \Delta - 1 ) } , & R^d < j < N-1 \cr 1 , & j = N-1 \end{cases} . \]
Notice that $ R^d \ll N-1 $, so the corresponding range of $ j $'s is non-empty. With this choice, (i) of Theorem 1 is satisfied by the choice of $ N $, (ii) is satisfied because $ \sum_{ R^d < j < N-1 } \delta_j = O \( R^{ d ( \Delta - D + 1 ) } \) $ and $ a_j $ for this range are chosen precisely to make the corresponding term $ O \( R^{ d-1 } \) $, and the lhs in (iv) is $ O \( \log R \) $. The lhs in (iii) is estimated above by
\bequnan \sum \log \( 1 + \frac 1{ j^\Delta a_j a_{ j+1 }} \) \le C R^d \log R + \frac 1{R^{ d-1 + d ( D - \Delta - 1 ) }} \sum_{ j > R^d } \frac 1{j^\Delta } = O \( R^{ d+\von } \) + \\ O \( R^{ 1 - d D + d } \) . \eequnan
The rhs is $ O \( R^{ d+\von } \) $ for $ d > D^{ -1 } $ and any $ \von > 0 $, and the assertion of the proposition follows by Theorem 1.
\end{proof}
\medskip
\textit{Proof of Corollary \ref{Valenthyp}}. The fact that the order does not exceed $ 1/\ell $ follows from Proposition \ref{1M} by inspection of (\ref{qrho}) and explicit formulae \cite{BergValent} expressing $ P_j ( 0 ) $, $ Q_j ( 0 ) $ via $ \lambda_j $'s and $ \mu_j $'s. On the other hand, an application of \cite[Proposition 7.1]{BergSzwarc} shows that the order is not less than $ 1/\ell $. For completeness, we provide a proof of the latter fact. First, for any $ p $ greater than the order of the system there exists a $ K > 0 $ such that for all $ z $ large enough
\[ \sum \left| P_j ( z ) \right|^2 \le e^{ K \left| z \right|^p } . \]
This is an easy corollary of the Christoffel--Darboux formula. Since all zeroes of $ P_j $'s are real, $ | P_j ( i\tau ) | \ge \pi_j \tau^j $ for $ \tau > 0$, $ \pi_j = 1 / \( \rho_1 \cdots \rho_j \) $ being the leading coefficient of the polynomial $ P_j $, therefore
\[ \frac 1{ \rho_j \cdots \rho_1 } \le \( \frac jC \)^{ - \frac jp} . \]
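The displayed bound follows by optimizing in $ \tau $ (with $ C $ depending on $ K $ and $ p $). Since $ \pi_j^2 \tau^{ 2j } \le | P_j ( i \tau ) |^2 \le e^{ K \tau^p } $, we have $ \pi_j \tau^j \le e^{ K \tau^p / 2 } $, and the choice $ \tau = \( 2 j / ( K p ) \)^{ 1/p } $ gives
\[ \pi_j \le \exp \( \frac K2 \tau^p - j \log \tau \) = \( \frac{ e K p }{ 2 j } \)^{ j/p } = \( \frac jC \)^{ -j/p } , \qquad C = \frac{ e K p }2 . \]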
Under the assumptions of the corollary, $ \rho_j \sim j^\ell $, which implies $ p \ge 1/\ell $.
\hfill $ \Box $
The assertion of Corollary \ref{Valenthyp} was conjectured in \cite{Valent} on the basis of two explicitly solvable examples, the one dealt with in the previous subsection and another one \cite{GLV} with $ \ell = 3 $.
\section{Proof of Theorem \ref{selfsim}}
The structure of the proof is as follows. First we are going to show that the order of the system is not greater than the infimum. This will be done by an application of Theorem 1 to a natural approximation $ \cH_R $. Then we will show that the order is not less than the infimum by an appropriate choice of the covering.
\textit{The order $ \le $ the infimum.} Let $ d $ be such that for some $ C > 0 $ for each $ R $ large enough there exists a covering of the interval $ ( 0 , L ) $ by $ n ( R ) \le C R^d / \log R $ intervals, to be denoted $ \omega_j $, such that (A) is satisfied. The stated inequality will be established if we show that the order is not greater than $ d $. Without loss of generality one can assume that the intervals $ \omega_j $ are mutually disjoint. Define
\[ \cH_R (x ) = \begin{cases}
\frak H_1 , & x \in \omega_j, | \omega_j \cap X_1 | \ge \frac {|\omega_j |}2 , \cr
\frak H_2 & \mathrm{otherwise} . \end{cases} \]
With this choice of $ \cH_R $
\[ \int_{ \omega_j } \len \cH ( t ) - \cH_R ( t ) \rin \diff t \le 2 \min \{ | \omega_j \cap X_1 | , | \omega_j \cap X_2 | \} , \]
hence the condition (i) of Theorem 1 takes the form
\[ \sum\frac 1{a_j^2} \min \{ | \omega_j \cap X_1 | , | \omega_j \cap X_2 | \} \le C R^{ d-1 } . \]
Notice that $ \min \{ | \omega_j \cap X_1 | , | \omega_j \cap X_2 | \} \asymp | \omega_j \cap X_1 | \, | \omega_j \cap X_2 | / | \omega_j | $, hence the latter condition is equivalent to
\be\la{i} \sum\frac 1{a_j^2 | \omega_j | } | \omega_j \cap X_1 |\, | \omega_j \cap X_2 | \le C R^{ d-1 } . \ee
Conditions (iii) and (iv) of Theorem 1 in the situation under consideration are satisfied if
\be\la{iii} \sum \log \( 1 + a_j^{ -1 } \) \le C R^d , \ee
and condition (ii) has the form
\be\la{ii} \sum a_j^2 | \omega_j | \le C R^{ d -1 } . \ee
Let $ a_j = 1 $ for $ | \omega_j | \le 2/R $. We write $ \frak N = \{ j \colon | \omega_j | \le 2/R \} $. The parts of sums in (\ref{i}), (\ref{iii}) and (\ref{ii}) over $ j \in \frak N $ are then estimated above by $ n ( R ) R^{ -1 } $, $ n ( R ) $ and $ n( R ) R^{ -1 } $, respectively; by condition (B) the first and the third are $ O \( R^{ d-1 } \) $ and the second is $ O \( R^d \) $, as required.
For $ | \omega_j | > 2/R $ we optimize the choice of $ a_j $ over the summands in (\ref{i}), (\ref{iii}) and (\ref{ii}) by taking
\[ a_j^2 = \max \left\{ \frac 1{ R | \omega_j | }, \frac{ \sqrt{ | \omega_j \cap X_1 | | \omega_j \cap X_2 | }}{ | \omega_j | } \right\} . \]
With this choice the sums over $ j \notin \frak N $ in (\ref{i}), (\ref{ii}) and (\ref{iii}) are estimated above by \[ \sum \sqrt{ | \omega_j \cap X_1 | \, | \omega_j \cap X_2 | } ,\]
\[ R^{ -1 } n( R ) + \sum \sqrt{ | \omega_j \cap X_1 | \, | \omega_j \cap X_2 | } , \]
and
\[ \sum \log ( R | \omega_j | ) \le n ( R ) \log R , \]
respectively. Plugging here (A) and (B) of Theorem 2 we obtain that all the assumptions of Theorem 1 are satisfied.
\textit{The infimum $ \le $ the order.} Given a $ \tau > 0 $, $ x \in ( 0 , L ) $, let $ s ( \tau , x ) \in [ 0 , x ] $ be the solution of the equation $ \tau^2 | ( s, x ) \cap X_1 | \, | ( s, x ) \cap X_2 | = 1 $. This solution is unique whenever it exists. Without loss of generality one can assume that, say, for some $ a > 0 $ the function $ \cH ( x ) = \frak H_1 $ for $ x \in ( 0 , a/2 ) $, $ \cH ( x ) = \frak H_2 $ for $ x \in ( a/2 , a) $ (attaching such two intervals at the left end does not change the order). Then $ s ( \tau , x ) $ is defined for $ \tau $ large enough for all $ x \ge a $.
\begin{lemma}\la{Katzform}\cite[Lemmas 1--3]{Katz1} The order of the system $ ( \cH , L ) $ is equal to
\be\la{ordKa} \limsup_{ \tau \to +\infty } \frac{ \displaystyle{\int_a^L} \frac{\displaystyle{\chi_2 ( x ) }}{ \displaystyle{| ( s ( \tau , x ) , x )\cap X_2 | }} \diff x }{ \log \tau } , \ee
$ \chi_2 $ being the indicator function of the set $ X_2 $.
\end{lemma}
This assertion provides a crucial step in the proof of the Kats formula for the order. It is formulated in \cite{Katz1} in terms of the corresponding strings. For completeness we reproduce here the proof of the part of the lemma that we are going to use -- the order is not less than the quantity (\ref{ordKa}), translated to the language of canonical systems.
\begin{proof} By definition the order of a system is the order of any of the matrix elements of the monodromy matrix. Let us show that the order of the matrix element $ M_{ 11 } ( z ) $ is estimated from below by the rhs in (\ref{ordKa}). The order of $ M_{ 11 } $ coincides with
\[ \limsup_{ \tau \to + \infty } \frac{\log \log | M_{ 11 } ( i \tau ) |}{ \log \tau } \]
because $ M_{ 11 } $ is a real entire function having all its zeroes real.
Let $ \chi_1 = 1 - \chi_2 $, $ \rho ( s , t ) = | ( s , t ) \cap X_2 | $. On rewriting the first column of (\ref{can}) as an integral equation we obtain,
\[ M_{11} ( z, x ) = 1 + z \int_0^x \chi_2 ( t ) M_{ 21} ( z , t ) \diff t = 1 - z^2 \int_0^x \chi_1 ( t ) \rho ( t , x ) M_{11} ( z , t ) \diff t . \]
When $ z = i \tau $, $ \tau > 0 $, this becomes ($ \xi_\tau ( x ) \colon = M_{ 11 } ( i \tau , x ) $),
\[ \xi_\tau ( x ) = 1 + \tau^2 \int_0^x \chi_1 ( t ) \rho ( t , x ) \xi_\tau (t) \diff t . \]
It shows that $ \xi_\tau $ is a positive and monotone nondecreasing function. Let us estimate $ \xi_\tau^\prime ( x ) / \xi_\tau ( x ) $ from below. For a. e. $ x \in X_2 $ and all $ s \le x $, we have
\bequnan \xi_\tau ( s ) = \xi_\tau ( x ) - \tau^2 \rho ( s , x ) \int_0^s \chi_1 ( t ) \xi_\tau \diff t - \tau^2 \ \int_s^x \rho ( t , x ) \chi_1 ( t ) \xi_\tau ( t ) \diff t \\ \ge \xi_\tau ( x ) - \tau^2 \rho ( s , x ) \int_0^s \chi_1 \xi_\tau \diff t - \tau^2 \rho ( s , x ) \int_s^x \chi_1 \xi_\tau \diff t = \xi_\tau ( x ) - \rho ( s , x ) \xi_\tau^\prime ( x ) ,\eequnan
and thus
\bequnan \xi_\tau^\prime ( x ) = \tau^2 \( \int_s^x + \int_0^s \) \chi_1 \xi_\tau \diff t\ge \tau^2 \int_s^x \chi_1 \xi_\tau \diff t \ge \tau^2 \left| ( s , x ) \cap X_1 \right| \, \xi_\tau ( s ) \\ \ge \tau^2 \left| ( s , x ) \cap X_1 \right| \( \xi_\tau ( s ) - \rho ( s , x ) \xi_\tau^\prime ( x ) \) .
\eequnan
Picking $ s = s ( \tau , x ) $ we obtain that for a. e. $ x \in X_2 \cap ( a , L ) $
\[ \frac{\xi_\tau^\prime ( x ) }{\xi_\tau ( x )}\ge \frac 1{2 \rho ( s ( \tau , x ) , x ) } , \]
which implies the required assertion upon integration in $ x $ over $ [ a , L ] $.
\end{proof}
The proof of Theorem 2 will be completed if we show that for any $ d $ such that
\be\la{condKatz} \int_a^L \frac{\chi_2 ( x ) }{ \rho ( s ( \tau , x ) , x ) } \diff x = O \( \tau^d \) , \; \tau \to + \infty ,\ee
the interval $ ( 0 , L ) $ can be covered by $ O \( R^d \log R \) $ intervals, $ \omega_j $, so that
\be\la{sqrt} \sum \sqrt{ | \omega_j \cap X_1 | \, | \omega_j \cap X_2 | } = O \( R^{ d-1 } \) . \ee
For each $ R $ large enough define a monotone decreasing sequence $ \{ x_j \} $, $ j \ge 1 $, as follows: $ x_1 = L $, $ x_{ j+1 } = s ( R, x_j ) $, if $ j \ge 1 $ and $ x_j \ge a $; if $ x_{ j-1 } \ge a $, $ x_j < a $ then $ x_{ j+1 } = 0 $ and the sequence terminates. Observe that the sequence $ x_j $ is finite. This follows from the definition of the function $ s ( R , x ) $, for
\[ 1 = R^2 | ( s ( R , x ) , x ) \cap X_1 | \, | ( s ( R , x ), x ) \cap X_2 | \le R^2 \left| x - s ( R , x ) \right|^2 , \]
which means that $ x_j - x_{ j+1 } \ge R^{ -1 } $ so the sequence has $ O ( R ) $ members. Define $ \omega_j = [ x_{ j+1 }, x_j ] $. By construction $ [ 0 , L ] = \cup_j \omega_j $. We claim that this is the required covering. First, we have to show that $ N (R) $, the number of intervals in the covering, is $ O \( R^d \log R \) $. To this end,
notice that $ \rho ( s ( \tau , x ) , x ) \le \rho ( x_{ j+2 } , x_j ) $ for $ x \in [ x_{ j+1 } , x_j ] $ hence
\[ \textrm{lhs in (\ref{condKatz})} \ge \sum_{ j=1}^{ N( R )-1 } \frac { s_j }{ s_j + s_{ j+1 } } , \; s_j \colon = \rho ( x_{ j+1 } , x_j ) . \]
Let $ \frak g = \{ j \colon \frac{ s_{ j+1 }}{ s_j } \le 2 \} $, $ \hat{\frak g} = \{ j \colon \frac{ s_{ j+1 }}{ s_j } \ge 2 \} $, and let $ n_{ \frak g } $, $ {\hat n}_{ \frak g} $ be the respective numbers of elements. When $ j \in \frak g $ the summand in the displayed sum is bounded below, hence $ n_{ \frak g } $ is $ O ( R^d ) $ by (\ref{condKatz}). To estimate $ {\hat n}_{ \frak g } $ notice that $ s_j \ge 1/ \( L R^2 \) $, for $ 1 = R^2 \left| \omega_j \cap X_1 \right| s_j \le R^2 L s_j $. It follows that if $ k $ is the length of a discrete interval of the set $ \hat{ \frak g } $ and $ m $ is the right end of it then $x_m - x_{ m+1 } \ge 2^{ k-1 } / \( L R^2 \) $. On the other hand, $ x_m - x_{ m+1 } \le L $ trivially, hence $ k \le C + 2 \log_2 R $, and therefore $ {\hat n}_{ \frak g } = O \(R^d \log R \) $. Thus, $ N ( R ) = n_{ \frak g } + {\hat n}_{ \frak g} = O \(R^d \log R \) $ as required. To complete the proof, notice that by the very definition of $ s ( R ,x ) $, the summand in (\ref{sqrt}) equals $ R^{ -1} $, hence (\ref{sqrt}) reduces to $ N ( R ) = O \( R^d \) $ and thus holds trivially. \hfill $ \Box $
\section{Comments on Theorem 2 and applications}
\subsection{The Cantor string}\la{Castr} Let $ \xi \colon [ 0 , 1 ] \to [ 0 , 1 ] $ be the standard Cantor function, $ ( u_j , v_j ) $ be its constancy intervals, $ T ( x ) = x + \xi ( x ) $, $ L = 2 $. Define the Hamiltonian $ \cH $ on $ [ 0 , 2 ] $ to be
\[ \cH ( x ) = \begin{cases} \frak H_1 , & x \in \bigcup_j T \( \left[ u_j , v_j \right] \) \cr \frak H_2 & \textrm{otherwise.} \end{cases} . \]
The canonical system $ ( \cH , 2 ) $ is called the Cantor string. We are going to show that the assumptions of Theorem \ref{selfsim} hold for $ d = d_C = 2 / \log_2 6 $. Let $ \tau_j $ be the union of $ 2^{ j - 1 } $ intervals thrown away on the $ j $-th step of construction of the Cantor set. Define $ M_R = T \( \bigcup_{ k=1 }^j \tau_k \) $, $ j \sim d \log_2 R $. The set $ M_R $ is a union of $ O \( 2^j \) $ non-intersecting intervals. Consider the covering of $ [ 0 , 2 ] $ by the intervals of the set $ M_R $ and their contiguity intervals, the latter to be denoted $ \omega_j $. By construction, the overall number of intervals in this covering is $ O \( 2^j \) = O \( R^d \) $.
The terms corresponding to the intervals of the set $ M_R $ in the sum in condition (A) obviously vanish, hence the sum reduces to
\[ \sum \sqrt{ | \omega_j \cap X_1 | \, | \omega_j \cap X_2 | } \le \sqrt{ \sum | \omega_j \cap X_1 | } = \sqrt{ \left| \( (0,2) \setminus M_R \) \cap X_1 \right| } \]
Since $ T $ is linear on intervals of sets $ \tau_k $, $ T^{ - 1} X_1 = \bigcup_j \tau_j $, and therefore
\[ \left| \( (0,2) \setminus M_R \) \cap X_1 \right| = \sum_{ k > j } | \tau_k | = \( \frac 23 \)^j \sim R^{ d ( 1 - \log_2 3 ) } = R^{ 2 ( d-1 ) } . \]
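The exponent identity in the last step is precisely the defining relation for $ d_C $: with $ j \sim d \log_2 R $ we have $ ( 2/3 )^j = R^{ d \log_2 ( 2/3 ) } = R^{ d ( 1 - \log_2 3 ) } $, and the equality $ d \( 1 - \log_2 3 \) = 2 ( d - 1 ) $ is equivalent to
\[ d \( 1 + \log_2 3 \) = d \log_2 6 = 2 , \qquad \textrm{i.e.} \quad d = \frac 2{ \log_2 6 } = d_C . \]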
It follows that the assumption (A) is satisfied. Applying Theorem \ref{selfsim} we conclude that the order of this system is not greater than $ d_C $. Let us show that the order is not less than $ d_C $. Assume that the condition of Theorem 2 is satisfied for some $ d $ and let us use $ \frak D = \frak D ( R ) = \{ \Delta_j \} $ for the covering whose existence is required by the condition. Let $ j = d \log_2 R $ and $ M_R $ be defined as above. Then without loss of generality one can assume that the intervals $ \Delta_j $ are mutually disjoint and the intervals of $ M_R $ are among them. Indeed, adding the intervals of $ M_R $ to $ \frak D $ and removing the intersections of intervals of $ \frak D $ with the intervals of $ M_R $ does not increase the lhs in (A) and increases the constant in the rhs of (B) by at most $ 1 $. Given an $ \omega $, an interval of contiguity of $ M_R $, let $ n_{ \omega , R } $ be the number of intervals of $ \frak D $ belonging to interval $ \omega $, then $ \sum_\omega n_{ \omega , R } \le C R^d = C 2^j $. Since the number of intervals of contiguity is $ 2^j $, it follows that the number of $ \omega $'s for which $ n_{ \omega , R } \le 2C $ is greater than $ 2^{ j-1 } $. Let $ \omega $ be a contiguity interval for which $ n_{ \omega , R } \le 2C $, $ j_0 = j + \log_2 ( 2 C ) + 4 $. By a Dirichlet box argument, there then exists an interval $ \Delta \in \frak D $ which contains two nearby intervals of $ T \( \tau_{ j_0} \) $, for the number of intervals of $ T \( \tau_{ j_0 } \) $ contained in $ \omega $ is $ 2^{ j_0 - j } $. By construction the measure $ | \Delta \cap X_1 | \ge 2 \cdot 3^{ - j_0 } = C 3^{ -j } $, and $ | \Delta \cap X_2 | \ge 2^{ - j_0 } = C 2^{ -j } $, so $ \sqrt{ | \Delta \cap X_1 | | \Delta \cap X_2 | } \ge C 2^{ -j/2 } 3^{ -j/2 } $. Since the number of such intervals $ \Delta $ is estimated below by $ C 2^j $ we find that the lhs in (A) is estimated below by $ C \( 2/3 \)^{ j / 2 } = C R^{ d \( 1 - \log_2 3 \)/2 } $.
This being $ O \( R^{ d-1 } \) $ means that $ d \ge d_C $. Thus we have derived that the order equals $ d_C $, the result obtained\footnote{The factor $ 2 $ in the numerator in the expression for $ d_C $ is due to the spectral parameter in our definition of the string being the square root of a ``natural'' parameter used in \cite{UH}.} in \cite{UH} or \cite{SolVerb} by other means.
\subsection{Kats formula}\la{Katzf} As mentioned in the introduction, a direct comparison of Theorem 2 and the Kats formula (\ref{Katzformula}) is not possible for lack of examples using the latter in the situation of the former. Notice however that, properly understood, (\ref{Katzformula}) holds for a class of problems with $ L = \infty $ (singular strings) and examples are known \cite{Katz2} in this class where the order is calculated by application of (\ref{Katzformula}).
\bigskip
\noindent\textbf{Acknowledgements.} The author is indebted to H. Woracek for attracting his attention to the order problem and useful remarks, to I. Sheipak for references, and to a referee for suggested improvements of the presentation.
This work was supported in part by the Austrian Science Fund (FWF) project I 1536--N25, and the Russian Foundation for Basic Research, Grants 13-01-91002-ANF and 12-01-00215.
TITLE: Candy machines and optimal strategy in terms of expected value
QUESTION [2 upvotes]: Problem
We have three candy machines: call them G (good), B (bad) and M (mixed). G always gives you a candy when you put in 1\$. B never gives you a candy when you put in 1\$. M gives you a candy with probability $1/2$ when you put in 1\$. You want a candy and you approach the three machines, but you don't know which one is G, B or M. You use the following strategy. You approach a random machine. If you don't get a candy in $n$ trials, you change machines. If you don't get a candy from the second machine in $k$ trials, you change to the remaining machine. In total, we pay at most $n+k+1$ \$ for a candy. We want to calculate the expected cost of obtaining a candy using this strategy. Is it the case that $k=n=1$ is the optimal strategy, i.e. the one yielding the least expected cost of obtaining a candy?
Attempt of solution
Currently, my thinking is the following. Consider the indicator random variable $X_i = 1$ meaning that we pay 1\$ at stage $i$. Observe that:
$P(X_1 = 1) = 1$ (we always pay 1\$ at the beginning)
$P(X_2 = 1) = \frac{1}{3} + \frac{1}{3}\frac{1}{2}$ (you pay at stage $2$ iff you don't receive a candy at stage $1$ which happens either when you choose $B$ with probability one third or you choose $M$ with probability one third and then $M$ doesn't give you a candy with probability one half)
$P(X_3 = 1) = \frac{1}{3} + \frac{1}{3}(\frac{1}{2})^2$ (similar justification as previously but keeping in mind that you consider an event of not obtaining a candy at stages $1$ and $2$)
$\dots$
$P(X_{n+1} = 1) = \frac{1}{3} + \frac{1}{3}(\frac{1}{2})^n$
$P(X_{n+2} = 1) = \frac{1}{6} \frac{1}{2} + \frac{1}{6}(\frac{1}{2})^n$ (you pay at stage $n+2$ iff you don't receive a candy at any of the stages $1, \dots, n+1$. This happens either when we first choose B and then M and M fails its single trial so far, which has probability $\frac{1}{6} \cdot \frac{1}{2}$, or when we first choose M and then B and M fails all $n$ of its trials, which has probability $\frac{1}{6}(\frac{1}{2})^n$)
$\dots$
$P(X_{n+k+1} = 1) = \frac{1}{6} (\frac{1}{2})^k + \frac{1}{6}(\frac{1}{2})^n$
We are interested in $E(\Sigma_{i=1}^{n+k+1} X_i)$. Using linearity of expectation, we may write:
$$E(\Sigma_{i=1}^{n+k+1} X_i) = 1 + E(\Sigma_{i=2}^{n+1}X_i) + E(\Sigma_{j=n+2}^{n+k+1}X_j) = $$
$$=1 + \Sigma_{i=1}^n(\frac{1}{3} + \frac{1}{3}(\frac{1}{2})^i) + \Sigma_{j=1}^k(\frac{1}{6}(\frac{1}{2})^j + \frac{1}{6}(\frac{1}{2})^n)=$$
$$= 1 + \frac{1}{3}n + \frac{1}{3}\Sigma_{i=1}^n\frac{1}{2^i} + k \frac{1}{6} \frac{1}{2^n} + \frac{1}{6}\Sigma_{j=1}^k \frac{1}{2^j}$$
Now, we can easily observe that when $n$ is fixed and we make $k$ bigger, $E$ also grows. This excludes strategies $n < k$ from being optimal. We can go somewhat further by observing that:
$$ \Sigma_{i=1}^n\frac{1}{2^i} = \frac{1 - \frac{1}{2}^{n+1}}{1 - \frac{1}{2}}- 1 = 1 - (\frac{1}{2})^n$$
This gives you:
$$E(X) = 1 + \frac{1}{3}n + \frac{1}{3}(1 - (\frac{1}{2})^n) + k \frac{1}{6} \frac{1}{2^n} + \frac{1}{6}(1 - (\frac{1}{2})^k)=$$
$$ 1 + \frac{1}{3}n + \frac{1}{3} - \frac{1}{3}(\frac{1}{2})^n + k \frac{1}{6} \frac{1}{2^n} + \frac{1}{6} - \frac{1}{6}(\frac{1}{2})^k$$
$$6 E(X) = 9 + 2n - 2 (\frac{1}{2})^n + k \frac{1}{2^n} - (\frac{1}{2})^k$$
$$6 E(X) = 9 + 2n + (k-2)\frac{1}{2^n} - (\frac{1}{2})^k$$
This makes it clear that when you keep $k$ fixed and make $n$ bigger, $E$ grows as well. This excludes strategies $k < n$ from being optimal. So the optimal strategy should satisfy $n=k$. So let us set $k=n$ in the equation for $E$:
$$6 E(X) = 9 + 2n + (n-2)\frac{1}{2^n} - (\frac{1}{2})^n=$$
$$ 9 + 2n + (n-3)\frac{1}{2^n}$$
I will not go further now, but at least now it's clear to me that the right-hand side of the above equation attains its minimal value at $n=1$, which yields the strategy $n=k=1$ as optimal.
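To double-check this algebra, here is a short Python sketch (not from the original post; the function names are mine) that computes the expected cost exactly by conditioning on which machine is tried first and compares it with the closed form $6 E(X) = 9 + 2n + (k-2)\frac{1}{2^n} - (\frac{1}{2})^k$:

```python
from fractions import Fraction

def expected_cost(n, k):
    """Exact expected cost: try the first machine up to n times, the
    second up to k times, then the third (necessarily G) once."""
    half = Fraction(1, 2)
    # First machine is G (prob 1/3): one dollar suffices.
    e_g = Fraction(1)
    # First machine is B (prob 1/3): n dollars wasted, second is G or M.
    e_b = half * (n + 1)                                            # second = G
    e_b += sum(half * half**t * (n + t) for t in range(1, k + 1))   # second = M, wins at trial t
    e_b += half * half**k * (n + k + 1)                             # second = M, all k trials fail
    # First machine is M (prob 1/3).
    e_m = sum(half**t * t for t in range(1, n + 1))                 # M wins at trial t <= n
    e_m += half**n * (half * (n + 1) + half * (n + k + 1))          # M fails n times; second = G or B
    return Fraction(1, 3) * (e_g + e_b + e_m)

def closed_form(n, k):
    # 6 E(X) = 9 + 2n + (k - 2)/2^n - (1/2)^k
    return Fraction(9 + 2 * n, 6) + Fraction(k - 2, 6 * 2**n) - Fraction(1, 6 * 2**k)

assert all(expected_cost(n, k) == closed_form(n, k)
           for n in range(1, 8) for k in range(1, 8))
print(expected_cost(1, 1))  # prints 5/3
```

The two computations agree on the whole grid, which confirms the derivation above.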
REPLY [1 votes]: Clearly you always want $k = 1$. If you have tried two machines and gotten candy from neither, the last one must be the Good one, so there's no reason not to try that next. This leaves our choice of $n$. Intuitively it's clear that this one should be 1 as well: after failing to get a candy from the machine once, it's either Bad or Mixed with probability 1/2, while both other machines are Good with probability 1/2, so those machines definitely appear more appealing. But let's make this formal.
The strategy $n = k = 1$ nets you an expected value of $5/3$ tries until you obtain a candy. Now suppose that $n \geq 2$. Write $X$ for the number of tries, and $B, M, G$ for the event that the first machine you try is the Bad, Mixed, Good one. Then:
$$
\begin{align*}
\mathbb{E}(X) &= \underbrace{\mathbb{E}(X \mid B)}_{\geq n + 1}\mathbb P(B) + \underbrace{\mathbb{E}(X \mid M)}_{> 1}\mathbb P(M) + \underbrace{\mathbb{E}(X \mid G)}_{=1}\mathbb P(G)\\
&>(n+1)\frac13+ 1\times \frac13 + 1 \times \frac13\\
& \geq \frac53.
\end{align*}
$$
Here the "strictly greater than" on line 2 stems from the fact that the expected number of tries when the first machine is Mixed is actually strictly greater than 1. Thus $n = k = 1$ is optimal.
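As a final numerical sanity check (a sketch using the closed form derived in the question, not part of the argument above), one can verify on a grid that $(n, k) = (1, 1)$ is the unique minimizer and that the minimum is $5/3$:

```python
from fractions import Fraction

def expected_cost(n, k):
    # Closed form from the question: 6 E(X) = 9 + 2n + (k - 2)/2^n - (1/2)^k
    return Fraction(9 + 2 * n, 6) + Fraction(k - 2, 6 * 2**n) - Fraction(1, 6 * 2**k)

grid = {(n, k): expected_cost(n, k) for n in range(1, 25) for k in range(1, 25)}
best = min(grid, key=grid.get)
assert best == (1, 1) and grid[best] == Fraction(5, 3)
assert all(v > Fraction(5, 3) for p, v in grid.items() if p != (1, 1))
print(grid[1, 1])  # prints 5/3
```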
\begin{document}
\title[On ELSV, Hurwitz numbers and topological recursion]{On ELSV-type formulae, Hurwitz numbers and topological recursion}
\author[D.~Lewanski]{D.~Lewanski}
\address{D.~L.: Korteweg-de Vries Institute for Mathematics, University of Amsterdam, Postbus 94248, 1090 GE Amsterdam, The Netherlands}
\email{[email protected]}
\begin{abstract}
We present several recent developments on ELSV-type formulae and topological recursion concerning Chiodo classes and several kind of Hurwitz numbers. The main results appeared in~\cite{LPSZ}.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
ELSV-type formulae relate connected Hurwitz numbers to the intersection theory of certain classes on the moduli spaces of curves. Both Hurwitz theory and the theory of moduli spaces of curves benefit from them, since ELSV formulae provide a bridge through which calculations and results can be transferred from one to the other. The original ELSV formula \cite{ELSV} relates simple connected Hurwitz numbers and Hodge integrals.
It plays a central role in many of the alternative proofs of Witten's conjecture that appeared after the first proof by Kontsevich (for more details see \cite{LIU}).
\subsubsection{Examples of ELSV-type formulae:}
The simple connected Hurwitz numbers $h^{\circ}_{g;\vec{\mu}}$ enumerate connected Hurwitz coverings of the $2$-sphere of degree $|\vec{\mu}|$ and genus $g$, where the partition $\vec{\mu}$ determines the ramification profile over zero, and all other ramifications are simple, i.e. specified by a transposition of two sheets of the covering $(a_i \, b_i) \in \mathfrak{S}_{|\vec{\mu}|}$. By the Riemann--Hurwitz formula, the number of these simple ramifications is $b = 2g - 2 + l(\vec{\mu}) + |\vec{\mu}|$ (for an introduction to Hurwitz theory see, e.g., \cite{CM}).\\
The celebrated ELSV formula \cite{ELSV} expresses these numbers in terms of the intersection theory of moduli spaces of curves:
\begin{align*}
\frac{ h^{\circ}_{g;\vec{\mu}} }{b!} =
\prod_{i=1}^{\ell(\vec{\mu})}\frac{\mu_i^{\mu_i}}{\mu_i !}
\int_{\oM_{g,\ell(\vec{\mu})}}
\left(\sum_{l=0}^g(-1)^l \lambda_l \right) \prod_{j=1}^{\ell(\vec{\mu})} \sum_{d_j = 0}\mu_j^{d_j} {\psi}_j^{d_j}
\end{align*}
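For very small degree, the left-hand side of the formula above can be checked directly from the covering definition. The following brute-force sketch (the function names are ours, and only tiny inputs are feasible) counts tuples $(\sigma, \tau_1, \dots, \tau_b)$ with $\tau_b \cdots \tau_1 \sigma = \mathrm{id}$, where $\sigma$ has cycle type $\vec{\mu}$, the $\tau_i$ are transpositions, and the generated subgroup acts transitively; the count is then divided by $|\vec{\mu}|!$:

```python
import math
from fractions import Fraction
from itertools import permutations, product

def cycle_type(p):
    # partition recording the cycle lengths of a permutation given as a tuple
    seen, ct = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                c += 1
            ct.append(c)
    return tuple(sorted(ct, reverse=True))

def is_transitive(perms, d):
    # connectedness of the cover = transitivity of the generated subgroup
    orbit, frontier = {0}, [0]
    while frontier:
        i = frontier.pop()
        for p in perms:
            if p[i] not in orbit:
                orbit.add(p[i])
                frontier.append(p[i])
    return len(orbit) == d

def simple_hurwitz(g, mu):
    # h^o_{g;mu}: transitive factorisations id = tau_b ... tau_1 sigma
    d, b = sum(mu), 2 * g - 2 + len(mu) + sum(mu)
    target = tuple(sorted(mu, reverse=True))
    transpositions = [tuple(y if k == x else x if k == y else k for k in range(d))
                      for x in range(d) for y in range(x + 1, d)]
    count = 0
    for sigma in permutations(range(d)):
        if cycle_type(sigma) != target:
            continue
        for taus in product(transpositions, repeat=b):
            p = sigma
            for t in taus:
                p = tuple(t[p[k]] for k in range(d))  # left-multiply by the transposition
            if p == tuple(range(d)) and is_transitive(list(taus) + [sigma], d):
                count += 1
    return Fraction(count, math.factorial(d))

print(simple_hurwitz(0, (3,)))       # 1
print(simple_hurwitz(0, (1, 1, 1)))  # 4
```

For instance, for $g=0$ and $\vec{\mu} = (d)$ this recovers the classical count $d^{d-3}$ of transitive factorisations of a maximal cycle.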
A different Hurwitz problem arises from the Harish-Chandra-Itzykson-Zuber matrix model and leads to the study of the simple monotone connected Hurwitz numbers $h_{g,\mu}^{\circ, \le}$ (see \cite{GGN}), in which an extra \textit{monotone} condition is imposed on the coverings \textemdash \, if $(a_i \, b_i)_{i=1, \dots, b}$ are written such that $a_i < b_i$, the condition requires that $b_i \geq b_{i+1}$ for all $i = 1, \dots, b-1$. For these Hurwitz numbers an ELSV-type formula is also known \cite{ALS, DK}:
\begin{align*}
h_{g,\vec{\mu}}^{\circ, \le}& = \prod_{i=1}^{\ell(\vec{\mu})} \binom{2\mu_i}{\mu_i}
\int_{\overline{\mathcal{M}}_{g,\ell(\vec{\mu})}}
\!\!\!\!\!\! \exp\left(\sum_{l=1} A_l \kappa_l \right)
\prod_{j=1}^{\ell(\vec{\mu})} \sum_{d_j = 0} \psi_j^{d_j} \frac{(2(\mu_j + d_j) - 1)!!}{(2 \mu_j - 1)!!},
\end{align*}
where the generating series for the coefficients $A_i$ of the kappa classes reads
$
\exp \left(-\sum_{l=1}^\infty A_l U^l \right) = \sum_{k=0}^{\infty} (2k + 1)!! U^k.
$
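The first coefficients $A_l$ can be extracted order by order by taking the truncated logarithm of the double-factorial series; a minimal sketch over the rationals (the helper names are ours):

```python
from fractions import Fraction

ORDER = 6  # truncate power series in U at U^ORDER

def double_factorial_series():
    # [1, 3, 15, 105, ...] : coefficients of sum_k (2k+1)!! U^k
    coeffs, df = [], 1
    for k in range(ORDER):
        coeffs.append(Fraction(df))
        df *= 2 * k + 3          # (2(k+1)+1)!! = (2k+3) * (2k+1)!!
    return coeffs

def series_log(f):
    # log(f) for a power series with constant term 1, via log(1+x) = sum (-1)^{j+1} x^j / j
    x = [Fraction(0)] + f[1:]
    res = [Fraction(0)] * ORDER
    power = [Fraction(1)] + [Fraction(0)] * (ORDER - 1)
    for j in range(1, ORDER):
        new = [Fraction(0)] * ORDER   # new = power * x, truncated
        for a, pa in enumerate(power):
            for bdeg in range(1, ORDER - a):
                new[a + bdeg] += pa * x[bdeg]
        power = new
        sign = Fraction((-1) ** (j + 1), j)
        res = [r + sign * p for r, p in zip(res, power)]
    return res

# exp(-sum_l A_l U^l) = sum_k (2k+1)!! U^k   =>   A_l = -[U^l] log(sum_k (2k+1)!! U^k)
A = [-c for c in series_log(double_factorial_series())]
print(A[1], A[2], A[3])   # -3 -21/2 -69
```

which yields $A_1 = -3$, $A_2 = -\tfrac{21}{2}$, $A_3 = -69$.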
More examples exist in the literature, and two of them will be the subject of this paper: the ELSV formula due to Johnson, Pandharipande and Tseng (JPT) \cite{JPT} and a conjectural formula due to Zvonkine \cite{Z}, see also \cite{SSZ} (for the precise formulae see the table at the end of this paper).\\
\subsubsection{Structure of ELSV-type formulae:}
All four examples above express numbers enumerating connected Hurwitz covers of a certain kind, depending on a genus parameter and a partition, in terms of a \textit{non-polynomial} factor in the entries of the partition $\vec{\mu}$ (in the formulae above $\prod \frac{{\mu_i}^{\mu_i}}{\mu_i !}$ and $ \binom{2\mu_i}{\mu_i} $) and an integral over moduli spaces of curves of a certain class intersected with $\psi$ classes. This integral is clearly a \textit{polynomial} of degree $3g - 3 + \ell(\vec{\mu})$ in the $\mu_i$. Conceptually:
\begin{equation}\label{eq:structureELSV}
h^{\circ, condition}_{g,\vec{\mu}} = \NonPoly (\vec{\mu})
\int_{\overline{\mathcal{M}}_{g,\ell(\vec{\mu})}}
\!\!\!\!\!\!\!\! (\Class) \,\prod_{j=1}^{\ell(\vec{\mu})} \sum_{d_j=0} c_{d_j}(\mu_j) \psi_j^{d_j}
\end{equation}
where $c_{d_j}(\mu_j)$ is a polynomial of degree $d_j$ in $\mu_j$.
\subsubsection{Topological recursion and ELSV formulae}
The Chekhov, Eynard and Orantin (CEO) topological recursion procedure associates to a spectral curve $\S = (\Sigma, x(z), y(z), B(z_1, z_2))$ (see, e.g., \cite{CE, EO}) a collection of symmetric correlation differentials $\omega_{g,n}$ defined on the product of the curve $\Sigma^{\times n}$ through a universal recursion on $2g - 2 + n$. The expansion of these differentials near particular points can unveil interesting invariants, or solutions to enumerative geometric problems.\\
\begin{center}
\includegraphics[width=11cm]{tr.png}
\end{center}
We say that certain numbers satisfy the topological recursion if there exists a spectral curve such that the expansion of the correlation differentials near \textit{some} point has those numbers as coefficients.
The expansion of the correlation differentials takes the form
\begin{equation}
\omega_{g,n}^{\S} = \dd_1 \otimes \dots \otimes \dd_n \sum_{\mu_1, \dots, \mu_n} N^{\S}_{g,\vec{\mu}} \prod_{i=1}^n\tilde{x}_i^{\mu_i}
\end{equation}
for some coefficients $N^{\S}_{g,\vec{\mu}}$, where $\tilde{x}$ is a function of $x$ that depends on the point of the expansion.
Both the simple Hurwitz and the monotone Hurwitz numbers satisfy the topological recursion (see \cite{BEMS, BM, DDM, EMS, MZ}), and their spectral curves are respectively
\begin{equation}\label{eq:2spetral}
\left(\mathbb{C} \mathbb{P}^1, -z + \log(z), z, \frac{\dd z_1 \dd z_2}{(z_1 - z_2)^2} \right), \left(\mathbb{C} \mathbb{P}^1, \frac{z-1}{z^2}, -z, \frac{\dd z_1 \dd z_2}{(z_1 - z_2)^2}\right)
\end{equation}
In the simple Hurwitz case $\tilde{x} = e^x$, whereas in the monotone case $\tilde{x} = x$. On the other hand, it was proved that the expansions of the correlation differentials \textit{have the same structure as the right-hand side of the ELSV-type formulae} described above (see Theorem \ref{thm:CohFT}), depending on the same ingredients, which are functions of the spectral curve.\\
At this point comes the key observation: if one can compute these ingredients explicitly for a given spectral curve $\S$, one proves that
\begin{equation}\label{eq:structureTR}
N_{g, \vec{\mu}}^{\S} = \NonPoly^{S} (\vec{\mu})
\int_{\overline{\mathcal{M}}_{g,\ell(\vec{\mu})}}
\!\!\!\!\!\!\!\! (\Class^{\S}) \,\prod_{j=1}^{\ell(\vec{\mu})} \sum_{d_j=0} c_{d_j}^{\S}(\mu_j) \psi_j^{d_j},
\end{equation}
where the non-polynomial part $\NonPoly^{\S}$, the class $\Class^{\S}$, and the $c_{d_j}^{\S}(\mu_j)$ are explicit. This allows one to formulate equivalence statements in the following sense.
\begin{definition}\label{def:eqst}
A \textit{TR-ELSV equivalence statement} for a Hurwitz problem $h_{g, \vec{\mu}}^{\circ, condition}$ and a spectral curve $\S$ asserts the equivalence between the following two propositions:
\begin{enumerate}
\item[i)] The numbers $h_{g, \vec{\mu}}^{\circ, condition}$ satisfy the topological recursion with input spectral curve $\S$ (i.e.\ $h_{g, \vec{\mu}}^{\circ, condition} = N_{g, \vec{\mu}}^{\S}$) \\
\item[]
$$\text{ii)}\,\,
h^{\circ, condition}_{g,\vec{\mu}} = \NonPoly^{\S} (\vec{\mu})
\int_{\overline{\mathcal{M}}_{g,\ell(\vec{\mu})}}
\!\!\!\!\!\!\!\! (\Class^{\S}) \,\prod_{j=1}^{\ell(\vec{\mu})} \sum_{d_j=0} c_{d_j}^{\S}(\mu_j) \psi_j^{d_j}.$$
\end{enumerate}
\end{definition}
Clearly, it makes sense to formulate an equivalence statement for certain Hurwitz numbers $h^{\circ, condition}_{g,\vec{\mu}}$ and a certain spectral curve $\S$ if for at least one of the two propositions there exists some evidence or a proof. \\
Thus, once the equivalence is established, a proof of $i)$ independent of $ii)$ immediately yields $ii)$ (and vice versa); for this reason such equivalence statements have received much attention in the literature.
For example, for the case of simple Hurwitz numbers and the first curve in Equation \eqref{eq:2spetral}, proposition $i)$ was conjectured to hold by Bouchard and Mari\~{n}o \cite{BM}, while $ii)$ is the original ELSV formula. Proposition $i)$ was proved in \cite{BEMS, EMS, MZ}, and the equivalence statement was proved in \cite{Eyn11}, see also \cite{SSZ}. The equivalence statement immediately provides a new proof of $i)$ from $ii)$. The proofs of $i)$ in \cite{EMS, MZ}, though, make use of a polynomiality property that is extracted from ELSV, hence the equivalence cannot be used in the other direction without falling into a circular argument, unless this polynomiality property can be proved without using the ELSV formula.
This was done in \cite{DKOSS}, see also \cite{DLPS, KLS},
and thus $ii)$ follows from $i)$ by the equivalence statement.\\
In the case of $r$-spin Hurwitz numbers, proposition $i)$ is known as the $r$-Bouchard-Mari\~{n}o conjecture \cite{BM}, and proposition $ii)$ is the $r$-ELSV formula conjectured by Zvonkine \cite{Z}, see also \cite{SSZ}. The equivalence of $i)$ and $ii)$ was established in \cite{SSZ}, but since neither of the two has been proved, both remain conjectural.
\subsubsection{Givental theory}
The class $\Class^{\S}$ describes a semi-simple cohomological field theory (CohFT), possibly with non-flat unit. Semi-simple CohFTs with unit are classified by Givental - Teleman \cite{G,T} by the action of a Givental $R$-matrix on a topological field theory. On the other hand, Givental theory has been identified with the CEO topological recursion in \cite{DOSS}. This identification makes Givental theory a powerful and explicit tool to compute the ingredients above, and hence to prove equivalence statements.
\subsubsection{Main result}
The main result in \cite{LPSZ} is the computation of the ingredients $\NonPoly^{\S_{r,s}}, \, \Class^{\S_{r,s}}, $ and $c_{d_j}^{\S_{r,s}}(\mu_j)$ for the specific spectral curve
\begin{equation*}
\S_{r,s} := \left( \CP1, x(z)=-z^r+\log z,
y(z)=z^s, B(z,z') = \frac{\dd z\, \dd z'}{(z-z')^2} \right).
\end{equation*}
The class $\Class^{\S_{r,s}}$ turns out to coincide with Chiodo classes \cite{Chiodo}.
Its specialisation for $r=s=1$ is used in the ELSV-type formulae for simple Hurwitz numbers, the case $s=1$ involves the $r$-spin Hurwitz numbers, while the case $r=s$ is used in the ELSV formula for $r$-orbifold Hurwitz numbers, derived by Johnson, Pandharipande and Tseng (JPT) \cite{JPT}. The equivalence statement for general $r$ and $s$ is derived, and it specialises to the equivalence statements already known for simple and $r$-spin Hurwitz numbers. For $r$-orbifold Hurwitz numbers, instead, the statement is new (see Corollary \ref{cor:eqorbifold2}): this gives a new proof of the topological recursion for $r$-orbifold Hurwitz numbers from JPT. On the other hand, the topological recursion was already proved in \cite{BHLM, DLN}, but extracting some polynomiality property from the JPT formula itself. This polynomiality property was then proved in \cite{DLPS}, see also \cite{KLS}, providing, together with the equivalence statement, a new proof of the JPT formula.\\
\subsubsection{Plan and organisation of the paper}
In this note we give a short exposition of the results obtained in \cite{LPSZ}. On the one hand, the technical proofs are omitted and we refer to the original paper for them; on the other hand, some of the details that are omitted in the paper are here worked out. \\
In Section \ref{sec:Chiodo} we recall the definition of Chiodo classes, we express them in terms of stable graphs and prove that they are given by the action of a particular Givental $R$-matrix.\\
In Section \ref{sec:CurvetoGivental} we review the key steps of the DOSS identification and we show the result of the computation for the spectral curve $\S_{r,s}$. This allows us to state the main result (see Theorem \ref{thm:srequivalence}).\\
In Section \ref{sec:JPT} we treat the equivalence statement for $r$-orbifold Hurwitz numbers and Johnson-Pandharipande-Tseng formula.
\subsection{Acknowledgments}
This note is based on a talk given by the author at the
2016 AMS von Neumann Symposium
\textit{Topological Recursion and its Influence in Analysis, Geometry, and Topology}, July 4 \textemdash \, 8, 2016, in Charlotte, NC.
I would like to thank the organisers of the Symposium and the AMS for this opportunity.\\
I would moreover like to thank N.~Do, P.~Dunin-Barkowski, M.~Karev, R.~Kramer, A.~Popolitov, S.~Shadrin and D.~Zvonkine for interesting discussions, special thanks to S.~Shadrin for having introduced me to the topic and very useful remarks. The author is supported by the Netherlands Organization for Scientific Research.
\section{Chiodo classes}\label{sec:Chiodo}
In this section we recall Chiodo classes and we show their Givental decomposition; for more details we refer the reader to \cite{Chiodo, ChiRua, ChiZvo, JPPZ, SSZ}.
For $2g - 2 +n > 0$, consider a nonsingular curve with distinct markings $[C, p_1, \dots, p_n]\in \cM_{g,n}$, and let $\Klog = \omega_C(\sum p_i )$ be its log canonical bundle.
Let $r \geq 1$, $1\leq a_1,\dots, a_n \leq r$ and $0 \leq s \leq r$ be integers satisfying the condition
\begin{equation}\label{eq:cond_a}
(2g-2+n)s-\sum_{i=1}^n a_i = 0 \quad \mod \, r
\end{equation}
This condition guarantees the existence of $r$th tensor roots $L$ of the line bundle
$\Klog^{\otimes s}\left(-\sum
a_i p_i\right)$ on $C$.
For the moduli space of such $r$th tensor roots, a natural compactification $\oM_{g;a_1, \dots, a_n}^{r,s}$ was constructed in~\cite{Chiodocostruzione, Jarvis}. Let
$\pi : \cC_{g;a_1, \dots, a_n}^{r,s} \to \oM_{g;a_1, \dots, a_n}^{r,s}$
be the universal curve, let $\cL \to \cC_{g;a_1, \dots, a_n}^{r,s}$ be the universal
$r$th root, and let
$ \epsilon: \oM_{g;a_1, \dots, a_n}^{r,s} \to \oM_{g,n}$
be the forgetful map (in order for $\epsilon$ to be unramified in the orbifold sense, the target $\oM_{g,n}$ is changed into the moduli space of $r$-stable curves, meaning that for each stable curve there is an extra $\Z_r$ stabilizer at each node, see \cite{Chiodocostruzione}).
Recall the generating series for the Bernoulli polynomials
$$\sum_{l=0} B_l(x) \frac{t^l}{l!} = \frac{t e^{xt}}{e^t - 1}$$
where the usual Bernoulli numbers are $B_l(0) = B_l$.
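The values $B_l(x)$ at rational arguments, together with the reflection identity $B_l(1-x) = (-1)^l B_l(x)$, used repeatedly below (in particular $B_l(1) = (-1)^l B_l$), can be checked from the standard recurrence for Bernoulli numbers; a minimal sketch over the rationals (the helper names are ours):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    # B_0..B_n from sum_{k<=m} C(m+1,k) B_k = 0, m >= 1; the convention
    # B_1 = -1/2 matches the generating series t e^{xt}/(e^t - 1) at x = 0
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B

def bernoulli_poly(n, x):
    # B_n(x) = sum_k C(n,k) B_k x^{n-k}
    B = bernoulli_numbers(n)
    return sum(comb(n, k) * B[k] * x ** (n - k) for k in range(n + 1))

print(bernoulli_poly(2, Fraction(0)))   # B_2 = 1/6
# reflection identity B_n(1 - x) = (-1)^n B_n(x)
for n in range(1, 8):
    for a in range(1, 6):
        x = Fraction(a, 6)
        assert bernoulli_poly(n, 1 - x) == (-1) ** n * bernoulli_poly(n, x)
```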
We are interested in the Chiodo classes \cite{Chiodo}
\begin{align*}\label{eq:DefinitionChiodoCohFT}
& \C_{g,n}(r,s;a_1,\dots,a_n):= \epsilon_{*} c\big(-R^*\pi_*\cL\big) =\\ \notag
& \epsilon_{*}\exp\left(\sum_{l=1}^\infty (-1)^l (l-1)!\ch_l(r,s;a_1,\dots,a_n)\right) \in H^{even}(\oM_{g,n}),
\end{align*}
where Chiodo's formula for the Chern characters reads
\begin{align}
\ch_l(r,s;a_1,\dots,a_n) =
\frac{B_{l+1}(\frac sr)}{(l+1)!} \kappa_l
- \sum_{i=1}^n
\frac{B_{l+1}(\frac{a_i}r)}{(l+1)!} \psi_i^l
\\ \notag
+ \frac{r}2 \sum_{a=1}^{r}
\frac{B_{l+1}(\frac{a}r)}{(l+1)!} (j_a)_*
\frac{(\psi')^l + (-1)^{l-1} (\psi'')^l}{\psi'+\psi''}.
\end{align}
Here $j_a$ is the boundary map that represents the boundary divisor with remainder $a$ at one of the two half edges, and $\psi',\psi''$ are the $\psi$-classes at the two branches of the node.\\
For the specialisation $r = s =1$, and moreover $a_i = 1$, for $i = 1, \dots, n$, the map $\epsilon$ is the identity map and we recover Mumford's formula \cite{Mu_GRR} for the total Chern class of the dual of the Hodge bundle $c(\Lambda^{\vee}_g)$:
\begin{align}
C_{g,n}(1,1; 1, \dots, 1) &=\exp \Bigg( - \Bigg[ \sum_{l=1}^{\infty}
\frac{B_{l+1}}{l(l+1)} \kappa_l
- \sum_{i=1}^n
\frac{B_{l+1}}{l(l+1)} \psi_i^l
\\ \notag
&+ \frac{1}2
\frac{B_{l+1}}{l(l+1)} j_*
\frac{(\psi')^l + (-1)^{l-1} (\psi'')^l}{\psi'+\psi''} \Bigg]\Bigg)\\ \notag
&= c(\Lambda^{\vee}_g ) = 1 - \lambda_1 + \lambda_2 - \dots + (-1)^g \lambda_g
\end{align}
where the identity $B_l(1) = (-1)^l B_l$ is used. The formula in \cite{Mu_GRR} is slightly different due to a different Bernoulli number convention and a misprint in the $\kappa$ term.
\subsection{Expression in terms of stable graphs}
Let us recall the expression of the Chiodo class in terms of a sum of products of contributions decorating stable graphs, in order to compare it with the Givental action; for more details see \cite{JPPZ}. The strata of the moduli space of curves correspond
to stable graphs
$$\Gamma=(\V,\E,\H,\L, g,n :\V \rightarrow \Z_{\geq 0}, v:\H \rightarrow \V, \iota : \H \rightarrow \H)$$
where $\V (\Gamma),\, \E(\Gamma),\, \H(\Gamma)$ and $\L(\Gamma)$ respectively denote the sets of vertices, edges, half-edges and leaves of $\Gamma$; self-edges are permitted. A half-edge indicates either a leaf or an edge together with a choice of one of the two vertices it is attached to. The function $v$ associates to each half-edge its vertex assignment, while $\iota$ is the involution that swaps the two half-edges of the same edge, or leaves the half-edge invariant if it is a leaf. The function $n(v)$ denotes the valence of $\Gamma$ at $v$, including
both half-edges and legs, and $g(v)$ denotes the genus function. Every vertex $v$ is required to satisfy the stability condition $2g(v) - 2 + n(v) >0$, and the genus of a stable graph $\Gamma$ is defined by $g(\Gamma):= \sum_{v\in V} g(v) + h^1(\Gamma)$.
Let $\Aut(\Gamma)$ denote the group of automorphisms
of the sets $\V$ and $\H$ which leave the
structures $\L$, $\mathrm{g}$, $v$, and $\iota$ invariant.
Let $\mathsf{G}_{g,n}$ be the finite set of isomorphism classes of stable graphs of genus $g$ with $n$ legs.
Let moreover $\mathsf{W}_{\Gamma,r,s,\vec{a}}$ be the set of \textit{weightings} $ w:\H(\Gamma) \rightarrow \{ 0,\ldots, r-1\}$
satisfying the following three properties:
\begin{enumerate}
\item[(i)] The $i$-th leaf $l_i$ has weight $w(l_i)=a_i \mod r \,$, for $ i\in \{1,\ldots, n\}$.
\item[(ii)] For any two half-edges $h'$ and $h''$ corresponding to the same edge, we have $w(h')+w(h'')=0 \mod r\,$.
\item[(iii)] The condition in Equation \eqref{eq:cond_a} is satisfied locally on each component: for any vertex $v$ the sum of the weights associated to the half-edges incident to $v$ is
$\sum_{v(h)= v} w(h) = s\big( 2g(v)-2+n(v)\big) \mod r $.
\end{enumerate}
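As a sanity check of these three conditions, consider the graph with a single vertex of genus $g-k$ carrying $k$ self-loops and $n$ legs: condition (ii) determines one half-edge weight of each loop from the other, and condition (iii) then reduces to the global condition in Equation \eqref{eq:cond_a}, so the number of weightings is $r^k$ when that condition holds and $0$ otherwise. A minimal enumeration sketch (the function name is ours):

```python
from itertools import product

def count_loop_weightings(r, s, g_v, k, legs):
    # weightings of the stable graph with one vertex of genus g_v,
    # k self-loops (total genus g = g_v + k) and leg residues `legs`
    n_v = len(legs) + 2 * k                      # local valence n(v)
    count = 0
    for w in product(range(r), repeat=2 * k):    # one value per loop half-edge
        # (ii): the two half-edges of each loop sum to 0 mod r
        if any((w[2 * i] + w[2 * i + 1]) % r for i in range(k)):
            continue
        # (iii): local condition at the unique vertex
        if (sum(legs) + sum(w) - s * (2 * g_v - 2 + n_v)) % r == 0:
            count += 1
    return count

# r = 3, s = 2, one genus-1 vertex with two self-loops (g = 3), one leg:
print(count_loop_weightings(3, 2, 1, 2, (1,)))  # 9 = r^k, global condition holds
print(count_loop_weightings(3, 2, 1, 2, (2,)))  # 0, global condition fails
```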
\begin{proposition}[\cite{JPPZ}]\label{Cor:ChiodoExp}
The Chiodo class $\C_{g,n}(r,s;a_1,\dots,a_n)\in R^*(\oM_{g,n})$ is equal to
\begin{multline}\label{eq:ChiodoExp}
\hspace{-10pt}\sum_{\Gamma\in \mathsf{G}_{g,n}}
\sum_{w\in \mathsf{W}_{\Gamma,r,s,\vec{a}}}
\frac{r^{|E(\Gamma)| + \sum_{v \in V(\Gamma)} 2g(v)-1}}{|\Aut(\Gamma)| }
\;
\xi_{\Gamma*}\Bigg[ \prod_{v \in \V(\Gamma)} e^{-\sum\limits_{l\geq 1} (-1)^{l-1}\frac{B_{l+1}(s/r)}{l(l+1)}\kappa_l(v)} \; \cdot
\\
\prod_{i=1}^n e^{\sum\limits_{l\geq 1}(-1)^{l-1} \frac{B_{l+1}(a_i/r)}{l(l+1)} \psi^l_{h_i}} \cdot
\prod_{\substack{e\in \E(\Gamma) \\ e = (h',h'')}}
\frac{1-e^{\sum\limits_{l \geq 1} (-1)^{l-1} \frac{B_{l+1}(w(h')/r)}{l(l+1)} [(\psi_{h'})^l-(-\psi_{h''})^l]}}{\psi_{h'} + \psi_{h''}} \Bigg]\, .
\end{multline}
where $\xi_{\Gamma}$ is the
canonical
morphism
$
\xi_{\Gamma}: \prod_{v\in \V(\Gamma)} \oM_{g(v),n(v)} \rightarrow \oM_{g,n}
$
of the boundary stratum corresponding to $\Gamma$.
\end{proposition}
\subsection{Expression in terms of Givental action}\label{sec:giv}
In this section we express Chiodo classes in terms of Givental theory. \\
Fix a vector space $V$ and a symmetric bilinear form $\eta$ on $V$. A Givental $R$-matrix is a $\End(V)$-valued power series
\begin{equation}
R(\zeta) = 1 + \sum_{l=1} R_l \zeta^l = \exp\left(\sum_{l=1}r_l \zeta^l \right), \quad R_l, r_l \in \End(V)
\end{equation}
satisfying the symplectic condition
$$R(\zeta)R^{*}(-\zeta) = 1 \in \End(V) $$
where $R^*$ is the adjoint of $R$ with respect to $\eta$.
By Givental - Teleman classification \cite{G,T}, every semi-simple cohomological field theory (CohFT) with unit is obtained by the action of a Givental $R$-matrix on a topological field theory.
We will show that the action of the $R$-matrix
\begin{align}
& R^{-1}(\zeta) := \exp\left(-\sum_{l=1}^{\infty} \frac{\mathrm{diag}_{a=1}^{r} B_{l+1}\left(\frac{a}{r}\right)}{l(l+1)}(-\zeta)^l\right)
\end{align}
defined as power series valued in the endomorphisms for the vector space $$V = \langle v_1, \dots, v_{r} \rangle$$ with
$$\eta(v_a,v_b)= \frac{1}{r}\delta_{a+b \mod r},$$
acting on the topological field theory
\begin{equation}
\alpha^{top}_{g,n}(v_{a_1}\otimes \cdots \otimes v_{a_n}) =r^{2g-1} \delta_{a_1+\cdots+a_n-s(2g-2+n) \mod r},
\end{equation}
produces the Chiodo classes. Therefore Chiodo classes determine a CohFT with a known Givental decomposition.
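As a quick consistency check, the symplectic condition $R(\zeta)R^{*}(-\zeta) = 1$ for this diagonal matrix reduces, via the adjoint swapping $a \leftrightarrow r-a$ with respect to $\eta$, to the Bernoulli reflection identity $B_{l+1}(1 - \tfrac{a}{r}) = (-1)^{l+1}B_{l+1}(\tfrac{a}{r})$. This can be verified on truncated power series over the rationals (a sketch with our own helper names):

```python
from fractions import Fraction
from math import comb

ORDER = 6  # truncation order in zeta

def bern_poly(n, x):
    # Bernoulli polynomial B_n(x) from the standard recurrence
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return sum(comb(n, k) * B[k] * x ** (n - k) for k in range(n + 1))

def mul(a, b):
    # product of truncated power series
    c = [Fraction(0)] * ORDER
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < ORDER:
                c[i + j] += ai * bj
    return c

def series_exp(s):
    # exp of a series with zero constant term, truncated at ORDER
    res = [Fraction(1)] + [Fraction(0)] * (ORDER - 1)
    term = res[:]
    for j in range(1, ORDER):
        term = [t / j for t in mul(term, s)]
        res = [x + y for x, y in zip(res, term)]
    return res

def R_entry(r, a, sign):
    # diagonal entry a of exp(- sum_k B_{k+1}(a/r) (sign*zeta)^k / (k(k+1)))
    s = [Fraction(0)] * ORDER
    for k in range(1, ORDER):
        s[k] = -bern_poly(k + 1, Fraction(a, r)) * sign ** k / (k * (k + 1))
    return series_exp(s)

# since eta pairs v_a with v_{r-a}, the condition reads R(zeta)_a R(-zeta)_{r-a} = 1
r = 3
one = [Fraction(1)] + [Fraction(0)] * (ORDER - 1)
for a in range(1, r + 1):
    ra = r - a if a < r else r
    assert mul(R_entry(r, a, 1), R_entry(r, ra, -1)) == one
print("symplectic condition holds up to order", ORDER - 1)
```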
The action of the Givental $R$-matrix is defined as a sum over stable graphs $\Gamma$ weighted by ${|\Aut (\Gamma)|}^{-1}$, with contributions on the leaves, on the edges, and on special leaves called \textit{dilaton} leaves, while the topological field theory contributes on the vertices. Chiodo classes are already expressed as a sum over stable graphs in Equation \eqref{eq:ChiodoExp} with a very similar structure.
Let us match the Givental contributions one by one:
\subsubsection{Ordinary leaf contributions.} The contribution of the $i$-th leaf reads
$$\exp\Bigg(-\sum_{l=1} \frac{B_{l+1}(a_i/r)}{l(l+1)} (-\psi_{h_i})^{l}\Bigg) = \sum_{j=1}^r (R^{-1})_{a_i}^j(\psi_{h_i})$$
\subsubsection{Dilaton leaf contributions.} Recall that the kappa classes are defined as $\kappa_l = \pi_*(\psi_{n+1}^{l+1})$ under the map that forgets the last marked point $\pi: \oM_{g, n+1} \rightarrow \oM_{g,n}$.
The contributions on the dilaton leaves correspond to the contributions on the vertices in Equation \eqref{eq:ChiodoExp} before forgetting the corresponding marked point. For the dilaton leaf marked with label $n+i$, for some positive integer $i$, the contribution reads:
$$\exp\Bigg(-\sum_{l=1} \frac{B_{l+1}(s/r)}{l(l+1)} (-\psi_{n+i})^{l} (- \psi_{n+i})\Bigg) $$
We check that $v_s$ is the neutral element $\mathds{1}$ for the quantum product $\bullet$ in flat basis:
\begin{align}
\eta(v_s \bullet v_a, v_b ) &= \alpha^{top}_{0,3}(v_s \otimes v_a \otimes v_b)\\ \notag
&= r^{-1} \delta_{s + a + b -s \mod r} = r^{-1} \delta_{ a + b \mod r} \\ \notag
& = \eta(v_a, v_b)
\end{align}
Hence the contribution of the dilaton leaf $n+i$ is
$$\psi_{n+i}\Big[ \Id - \sum_{j=1}^r (R^{-1})_{\mathds{1}}^j(\psi_{n+i})\Big]$$
\subsubsection{Edge contributions.} The edge contribution in Equation \eqref{eq:ChiodoExp}, multiplied by the factor
$(\psi_{h'} + \psi_{h''})$ and after applying the property of Bernoulli numbers
$(-1)^{p+1} B_{p+1}\left(\frac{w(h')}{r}\right) = B_{p+1}\left(\frac{r-w(h')}{r}\right)$, reads
$$1 -\, \exp\Bigg( - \sum_{l=1} \frac{B_{l+1}(\frac{w(h')}{r})}{l(l+1)}(-\psi_{h'})^l\Bigg)\exp\Bigg( - \sum_{p=1} \frac{B_{p+1}(\frac{r- w(h')}{r})}{p(p+1)}(-\psi_{h''})^p\Bigg).$$
Note that the condition on the weightings $w(h') + w(h'') = 0 \mod r$ can be taken care of by the scalar product $\eta$. Hence we can write the Givental contribution on the edges as
\begin{equation*}
\sum_{j_1,j_2} \frac{\eta^{j_1,j_2} - (R^{-1})_{w(h')}^{j_1} (\psi_{h'})\eta^{w(h'), w(h'')}(R^{-1})_{w(h'')}^{j_2}(\psi_{h''})}{\psi_{h'} + \psi_{h''}}
\end{equation*}
\subsubsection{Weightings.} Out of the three conditions on the weightings, condition $(i)$ becomes $w(l_i) = a_i$, condition $(ii)$ on the edges is taken care of by the bilinear form $\eta$, and condition $(iii)$ is replaced by the topological field theory condition.
\subsubsection{Powers of $r$.} Every stable graph contributes with
$$
|E(\Gamma)| + \sum_{v \in V(\Gamma)} 2g(v) - 1
$$
powers of $r$. Indeed the topological field theory in the vertex $v$ provides $2g(v) - 1$ powers of $r$, and the inverse of $\eta$ provides one power of $r$ for each edge of $\Gamma$.
\subsubsection{The expression of Givental action}
Let us indicate with $$ \{l_1, \dots, l_n, l_{n+1}, \dots, l_{n+k} \} = L(\Gamma) $$ the set of legs, corresponding to marked points of the curves in $\oM_{g, n+k}$, and let
$$\xi^{(k)}_{\Gamma}: \prod_{v \in V(\Gamma)} \oM_{g(v), n(v)} \rightarrow \oM_{g,n}$$ be the canonical morphism of the boundary stratum corresponding to $\Gamma$ that forgets the last $k$ marked points. Let us consider functions $w^{\vee}: H(\Gamma) \rightarrow \Z_{\geq 0} $ without \textit{any} further condition. We use here the notation $w^{\vee}$, instead of $w$, to remark that the weightings $w^{\vee}$ decorates the half-edges \textit{after} the application of the endomorphisms $R^{-1}_l$. Collecting the contributions and the considerations above, we have:
\begin{align*}
C_{g,n}&(r,s; a_1, \dots, a_n) =\sum_{k = 0} \, \sum_{\substack{\Gamma \in G_{g,n+k}\\ w^{\vee}: H(\Gamma) \rightarrow \Z_{\geq 0}}} \frac{1}{|\Aut(\Gamma)|} \left(\xi^{(k)}_{\Gamma}\right)_*\Bigg[\\ \notag
&\prod_{v \in V(\Gamma)} \alpha^{top}_{g(v),n(v)}\Bigg(\bigotimes_{\substack{h \in H(\Gamma) :\\ v(h) =v }} v_{w^{\vee}(h)} \Bigg) \\ \notag
&\prod_{i=1}^n (R^{-1})_{a_i}^{w^{\vee}(l_i)}(\psi_{i})
\\ \notag &
\prod_{i=1}^{k} \psi_{n+i}\Big[ \Id - (R^{-1})_{\mathds{1}}^{w^{\vee}(l_{n+i})}(\psi_{n+i})\Big]
\\ \notag
\prod_{e=(h',h'') \in \E(\Gamma)}
\!\!\!\!\!\!\!\!\!\!\!\!\ \frac{\eta^{w^{\vee}(h'),w^{\vee}(h'') } - \sum_{k_1, k_2}(R^{-1})^{w^{\vee}(h')}_{k_1} (\psi_{h'})\eta^{k_1, k_2}(R^{-1})^{w^{\vee}(h'')}_{k_2}(\psi_{h''})}{\psi_{h'} + \psi_{h''}}
\Bigg] \\ \notag
\end{align*}
The expression above is equivalent to $\left(R. \alpha^{top}\right)_{g,n}(v_{a_1} \otimes \dots \otimes v_{a_n})$, i.e.\ the Givental action of the matrix $R$ on $\alpha^{top}$, in genus $g$ with $n$ marked points, evaluated on the element $v_{a_1} \otimes \dots \otimes v_{a_n}$ (see \cite{G, DOSS, PPZ}).\\
Consider then $\C_{g,n}(r,s;a_1,\dots,a_n)$ as the evaluation of a map
\begin{equation}
\C_{g,n}(r,s)\colon V^{\otimes n}\to H^{even}(\oM_{g,n}),
\end{equation}
where $V=\langle v_1,\dots,v_r\rangle$, and
\begin{equation}
\C_{g,n}(r,s)\colon v_{a_1}\otimes\cdots\otimes v_{a_n} \mapsto
\C_{g,n}(r,s;a_1,\dots,a_n).
\end{equation}
The previous calculation shows
\begin{proposition}[\cite{LPSZ}]\label{prop:RChiodo}
For $0\leq s\leq r$ the collection of maps $\{\C_{g,n}(r,s)\}$ defined by the Chiodo classes forms a semi-simple cohomological field theory with flat unit, obtained by the action of the Givental matrix $R$ on the topological field theory
$\alpha^{top}_{g,n}$:
$$\left(R. \alpha^{top}\right)_{g,n} = C_{g,n}(r,s).$$
\end{proposition}
\section{From the spectral curve to the Givental R-matrix}\label{sec:CurvetoGivental}
In this section we recall the main result of~\cite{DOSS,E}, which expresses the correlation differentials $\omega_{g,n}$ of the CEO topological recursion in terms of integrals over moduli spaces of curves (Theorem \ref{thm:CohFT}). We then recall the identification \cite{DOSS} between topological recursion and Givental theory and apply it to a particular spectral curve (Definition \ref{def:Srs}). The result is an explicit expression for the coefficients of $\omega_{g,n}$ (Theorem \ref{thm:srequivalence}).
\subsection{Local topological recursion}
The local version of the CEO topological recursion takes as input the following set of data $\S = (\Sigma, x, y, B)$:
\begin{enumerate}
\item[I).] A local spectral curve $\Sigma=\sqcup_{i=1}^r U_i$, given by the disjoint union of open disks with the center points $p_i$, $i=1,\dots,r.$
\item[II).] A holomorphic function $x\colon \Sigma\to\bbC$ such that the zeros of its differential $dx$ are $p_1,\dots,p_r$. We will assume the zeroes of $dx$ to be simple.
\item[III).] A holomorphic function $y\colon \Sigma\to\bbC$ which does not vanish on the zeroes of $dx$.
\item[IV).] A symmetric bidifferential $B$ defined on $\Sigma\times \Sigma$ with a double pole on the diagonal with residue $1$.
\end{enumerate}
The output of the topological recursion procedure consists of a collection of symmetric differentials $\omega^{\S}_{g,n}$ defined on the topological product of the curve $\Sigma^{\times n}$. These correlation differentials take the following form:
\begin{theorem}\label{thm:CohFT} \cite{E,DOSS} The correlation differentials $\omega^{\S}_{g,n}$ produced via the topological recursion procedure from the spectral curve $\S = (\Sigma,x,y,B)$ are equal to
\begin{equation*}\label{eq:C-fullformula}
C^{2g-2+n}\!\!\!\! \sum_{\substack{i_1,\dots,i_n \\ d_1,\dots,d_n}}
\!\! \int_{\oM_{g,n}} \!\!\!\! \left(\Class^{\S}\right)_{g,n}(e_{i_1}\otimes \cdots \otimes e_{i_n})
\prod_{j=1}^n \psi_j^{d_j} \dd \left(\left( - \frac{1}{w_j}\frac{\dd}{\dd w_j}\right)^{d_j} \xi^{\S}_{i_j} \right).
\end{equation*}
\end{theorem}
\begin{remark}
The $\Class^{\S}$ defines a semi-simple CohFT, possibly with a non-flat unit. In this paper we will restrict the attention to CohFT with flat unit and we will write
$$\left(\Class^{\S}\right)_{g,n} = \left(R^{\S}.\alpha^{\S, top}\right)_{g,n}$$
to indicate its Givental decomposition (see Section \ref{sec:giv}; for CohFT with non-flat unit see \cite{PPZ} or \cite{LPSZ}, Section 2.3).
\end{remark}
Let us describe the ingredients in the formula above in terms of the data of the spectral curve, following \cite{DOSS}. The only difference with the usual representation is that we incorporate a torus action on cohomological field theories, fixing a point $(C,C_1,\dots,C_r)\in (\bbC^*)^{r+1}$. The formula does not depend on these parameters, though all its ingredients do.
\begin{itemize}
\item[i).] The local coordinates $w_i$ on $U_i$, $i=1,\dots,r$, are chosen such that $w_i(p_i)=0$ and $x=(C_iw_i)^2+x_i$, where $x_i = x(p_i)$.
\item[ii).]The underlying topological field theory is given in idempotent basis by
\begin{align*}\label{eq:C-underlyingTFT}
& \eta(e_i,e_j)= \delta_{ij}, \\ \notag
& \alpha^{\S, top}_{g,n}(e_{i_1}\otimes \cdots \otimes e_{i_n}) = \delta_{i_1\dots i_n} \left(-2C_i^2 C \frac{\dd y}{\dd w_i}(0)\right)^{-2g+2-n}.
\end{align*}
\item[iii).]The Givental matrix $R^{\S}(\zeta)$ is given by
\begin{equation*}\label{eq:C-matrixR}
-\frac 1\zeta (R^{\S})^{-1}(\zeta)_i^j=\frac{1}{\sqrt{2\pi\zeta}} \int_{-\infty}^\infty \left. \frac{B(w_i,w_j)}{\dd w_i}\right|_{w_i=0} \!\!\!\!\!\!\!\! \cdot e^{-\frac{w_j^2}{2\zeta}}.
\end{equation*}
\item[iv).]The auxiliary functions $\xi^{\S}_i \colon \Sigma\to\bbC$ are given by
\begin{equation*}\label{eq:C-functionsxi}
\xi^{\S}_i(x):=\int^x \left.\frac{B(w_i,w)}{\dd w_i}\right|_{w_i=0}
\end{equation*}
\item[v).] \textit{DOSS Test} (see \cite{DNOPS}, Section 4): The following condition for the function $y$ is necessary and sufficient in order for the unit of the cohomological field theory $R^{\S}.\alpha^{\S}$ to be flat.
\begin{equation*}\label{eq:condition-y}
\frac{2C_i^2 C}{\sqrt{2\pi\zeta}} \int_{-\infty}^\infty \dd y\cdot e^{-\frac{w_i^2}{2\zeta}} = \sum_{k=1}^r (R^{-1})^i_k \left(2C_k^2 C \frac{\dd y}{\dd w_k}(0)\right)
\end{equation*}
\end{itemize}
\subsection{The spectral curve $\S_{r,s}$ and its Givental R-matrix}
Let us apply the formula in Theorem \ref{thm:CohFT} to the spectral curve introduced in \cite{LPSZ}.
\begin{definition}\label{def:Srs} For $0 \leq s \leq r$, with $r$ and $s$ integer parameters, let the spectral curve $\S_{r,s}$ be defined by the following initial data in terms of a global coordinate $z$ on the Riemann sphere:
\begin{equation*}\label{eq:spectral-curve-data}
\S_{r,s} := \left( \CP1, x(z)=-z^r+\log z,
y(z)=z^s, B(z,z') = \frac{\dd z\, \dd z'}{(z-z')^2} \right)
\end{equation*}
\end{definition}
\begin{proposition}[\cite{LPSZ}] \label{lem:flat-basis}
The ingredients described above for the spectral curve $\S_{r,s}$ are given by:
\begin{itemize}
\item[i).] Choose the constants $C_i = 1/\sqrt{-2r}$ for $i=1,\dots,r$, and $C = r^{1 + s/r}/s$. With this choice the local coordinates $w_i$ on $U_i$, $i=1,\dots,r$ satisfy $ x= -\frac{w_i^2}{2r}+x(p_i)$.
\item[ii).]The underlying topological field theory is given by
\begin{align*}
&\eta(v_a,v_b)=\frac{1}{r}\delta_{a+b\mod r}\\
&\alpha^{\S_{r,s}, top}_{g,n}(v_{a_1}\otimes \cdots \otimes v_{a_n}) = r^{2g-1} \delta_{a_1+\cdots+a_n-s(2g-2+n) \mod r}.
\end{align*}
where the flat coordinates $v_a$ are defined in terms of the idempotents by $$v_a := \sum_{i=0}^{r-1} \frac{J^{ai}}{r} e_i, \qquad a = 1, \dots, r,$$ with $J = e^{2\pi i/r}$ a primitive $r$th root of unity.
\item[iii).]The Givental matrix $R^{\S_{r,s}}(\zeta)$ is given by
\begin{equation*}
R^{\S_{r,s}}(\zeta)= \exp\left(-\sum_{k=1}^{\infty} \frac{\mathrm{diag}_{a=1}^{r} B_{k+1}\left(\frac a r\right)}{k(k+1)}\zeta^k\right).
\end{equation*}
\item[iv).]The auxiliary functions $\xi^{\S_{r,s}}_a \colon \Sigma\to\bbC$ are given by
\begin{equation*}
\xi^{\S_{r,s}}_a=r^{\frac{r-a}r} \sum_{n=0}^\infty \frac{(nr+r-a)^n}{n!}e^{(nr+r-a)x}
\end{equation*}
\item[v).] DOSS Test is satisfied.
\end{itemize}
\end{proposition}
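The change of basis in ii) can be checked numerically: with $J = e^{2\pi i / r}$ (any primitive $r$th root of unity works) and $\eta(e_i, e_j) = \delta_{ij}$ in the idempotent basis, one recovers $\eta(v_a, v_b) = \tfrac{1}{r}\delta_{a+b \mod r}$. A minimal sketch (the function name is ours):

```python
import cmath

def eta_flat(r, a, b):
    # eta(v_a, v_b) with v_a = sum_i J^{a i} e_i / r, J = exp(2*pi*i/r),
    # and eta(e_i, e_j) = delta_{ij} in the idempotent basis
    J = cmath.exp(2j * cmath.pi / r)
    return sum(J ** (a * i) * J ** (b * i) for i in range(r)) / r ** 2

r = 5
for a in range(1, r + 1):
    for b in range(1, r + 1):
        expected = 1 / r if (a + b) % r == 0 else 0
        assert abs(eta_flat(r, a, b) - expected) < 1e-9
print("eta(v_a, v_b) = delta_{a+b mod r} / r verified for r =", r)
```

The geometric sum $\sum_i J^{(a+b)i}$ vanishes unless $a + b \equiv 0 \mod r$, which is exactly the pairing of the topological field theory above.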
Let us now put together the ingredients for the correlation differentials $\omega_{g,n}^{\S_{r,s}}$, as in Theorem \ref{thm:CohFT}. First of all, in the coordinates $v_i$, the TFT and the Givental $R$-matrix for $\S_{r,s}$ coincide with the ones resulting in the Chiodo CohFT for the same parameters $r$ and $s$ by Proposition \ref{prop:RChiodo}.
Secondly, since $\frac{1}{r} \frac{d}{dx} = -\frac{1}{w_i}\frac{d}{dw_i}$, the computation of the derivatives of the auxiliary functions, divided by the powers of $r$, read
\begin{align*} \dd \left(\left( - \frac{\psi_j}{w_j}\frac{\dd}{\dd w_j}\right)^{d_j} \!\!\! \xi^{\S_{r,s}}_{i_j} \right) & = \dd \left[\left(\frac{\psi_j}{r} \frac{\dd}{\dd x_j}\right)^{d_j}
\sum_{l_j=0}^\infty
\frac{(l_jr+r-a_j)^{l_j}}{l_j!}e^{(l_jr+r-a_j)x_j}\right]\\
&= \dd_j
\sum_{l_j=0}^\infty
\frac{(l_jr+r-a_j)^{l_j+d_j}}{l_j!} \left(\frac{\psi_j}{r}\right)^{d_j}e^{(l_jr+r-a_j)x_j}\\
&= \dd_j
\sum_{\mu_j=1}^\infty
\frac{\mu_j^{[\mu_j]}}{[\mu_j]!}\left(\frac{\mu_j}{r} \psi_j\right)^{d_j}e^{\mu_jx_j},\\
\end{align*}
where we write the Euclidean division by $r$ as $\mu = [\mu]r + \langle \mu \rangle$, with $ \langle \mu \rangle < r$. The powers of $s$ only come from $C^{2g - 2 +n}$, and hence they are equal to $-(2g - 2 + n)$. The remaining powers of $r$ to compute amount to
$$2g - 2 + n + \frac{(2g - 2 + n)s + \sum_{i} (r - a_i)}{r} =
2g - 2 + n + \frac{(2g - 2 + n)s + \sum_{i} \langle \mu_i \rangle}{r} ,$$ though it is handy to collect an extra $r^{\sum_i [\mu_i]}$ outside the product. This proves:
\begin{theorem}[\cite{LPSZ}]\label{thm:srequivalence}
The correlation differentials $\omega_{g,n}^{\S_{r,s}}$ of the spectral curve~\eqref{eq:spectral-curve-data} are equal to
\begin{align*}
\dd_1\otimes \dots \otimes \dd_n \sum_{\mu_1,\dots,\mu_n=1}^\infty \frac{r^{2g-2+n+b}}{s^{2g-2+n}} \prod_{j=1}^{n} \frac{\left(\frac{\mu_j}{r}\right)^{[\mu_j]}}{[\mu_j]!}
\int_{\oM_{g,n}} \!\!\! \frac{\C_{g,n} \left (r,s; r \!-\! \langle\vec{\mu} \rangle \right)}{\prod_{j=1}^n (1-\frac{\mu_j}{r}\psi_j)}
\; e^{\sum_{j=1}^n \mu_j x_j}
\end{align*}
where $b(r,s) = \left((2g-2+n)s+\sum_{j=1}^n \mu_j\right)/r$.
\end{theorem}
\begin{remark}
Note that the case $s=1$ reproduces Theorem 1.7 in~\cite{SSZ}.
\end{remark}
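The bookkeeping of the powers of $r$ above can be checked mechanically: the stepwise exponent $2g-2+n + \big((2g-2+n)s + \sum_i \langle\mu_i\rangle\big)/r$, together with the extra $r^{\sum_i [\mu_i]}$ collected outside the product, equals $2g-2+n+b(r,s)$ with $b(r,s)=\big((2g-2+n)s+\sum_i\mu_i\big)/r$. A throwaway check in exact rational arithmetic (the particular values of $g,n,r,s,\vec\mu$ are arbitrary):

```python
from fractions import Fraction

def exponent_two_ways(g, n, r, s, mu):
    """Compare the two bookkeepings of the powers of r in the text.

    mu is the tuple (mu_1, ..., mu_n); [mu] and <mu> denote Euclidean
    quotient and remainder by r, so mu = [mu]*r + <mu>, 0 <= <mu> < r.
    """
    floor_mu = [m // r for m in mu]
    rem_mu = [m % r for m in mu]
    chi = 2 * g - 2 + n
    # exponent as computed step by step in the text
    stepwise = chi + Fraction(chi * s + sum(rem_mu), r) + sum(floor_mu)
    # exponent as packaged in the theorem: 2g - 2 + n + b(r, s)
    packaged = chi + Fraction(chi * s + sum(mu), r)
    return stepwise, packaged

lhs, rhs = exponent_two_ways(g=2, n=3, r=5, s=3, mu=(7, 11, 2))
assert lhs == rhs
```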
Expanding the correlation differentials as
\begin{equation}
\omega^{\S_{r,s}}_{g,n} = \dd_1\otimes \cdots \otimes \dd_n \sum_{\mu_1,\dots,\mu_n=1}^\infty
\frac{N_{g,\vec{\mu}}^{\S_{r,s}}}{b(r,s)!} \ e^{\sum_{j=1}^n \mu_j x_j} ,
\end{equation}
we find:
\begin{corollary}[\cite{LPSZ}]\label{cor:rsN}
\begin{align}
N^{\S_{r,s}}_{g,\vec{\mu}} =
b(r,s)! \frac{r^{b(r,s) + 2g - 2 +n}}{s^{2g - 2 +n}}
\prod_{i=1}^{n}\frac{\left(\frac{\mu_i}{r}\right)^{[\mu_i]}}{[\mu_i]!}
\int_{\overline{\mathcal{M}}_{g,n}}
\frac{\C_{g,n}(r,s;r - \langle \vec{\mu} \rangle)}{\prod_{j=1}^{n} (1 - \frac{\mu_j}{r} {\psi}_j)}.
\end{align}
\end{corollary}
\section{Equivalence statements: a new proof of the Johnson-Pandharipande-Tseng formula}\label{sec:JPT}
In this section we consider the case $s=r$ of Theorem \ref{thm:srequivalence}. In this case, the correlation differentials of this spectral curve are known to give the $r$-orbifold Hurwitz numbers $h^{\circ, [r]}_{g;\vec{\mu}}$ \cite{ BHLM, DLN, DLPS}, which enumerate connected Hurwitz coverings of the $2$-sphere of degree $|\vec{\mu}|$ and genus $g$, where the partition $\vec{\mu}$ determines the ramification profile over zero, the ramification over infinity is of cycle type $(r, r, \dots, r)$ and all other ramifications are simple.
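The count of simple ramification points for such coverings, $b(r,r)=2g-2+n+|\vec\mu|/r$, is a Riemann--Hurwitz computation: a degree-$d$ cover of the sphere with $n$ preimages over $0$, $d/r$ preimages over $\infty$ and $b$ simple branch points satisfies $2-2g = 2d-(d-n)-(d-d/r)-b$. A sketch of this count (the sample profile is arbitrary):

```python
def simple_branch_points(g, r, mu):
    """Count simple branch points via Riemann-Hurwitz for a degree-d
    cover of the sphere with ramification profile mu over 0,
    (r, ..., r) over infinity, and simple branching elsewhere."""
    d = sum(mu)
    assert d % r == 0, "profile over infinity must be (r, ..., r)"
    n = len(mu)
    # Riemann-Hurwitz: 2 - 2g = 2d - (d - n) - (d - d/r) - b
    b = 2 * d - (d - n) - (d - d // r) - (2 - 2 * g)
    return b

g, r, mu = 2, 3, (4, 5, 3)   # d = 12, divisible by r = 3
assert simple_branch_points(g, r, mu) == 2 * g - 2 + len(mu) + sum(mu) // r
```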
Corollary \ref{cor:rsN} for $s=r$ specialises to
\begin{corollary}[\cite{LPSZ}]\label{TH2}
\begin{align}
N^{\S_{r,r}}_{g,\vec{\mu}} = b(r,r)!
r^{b(r,r) }
\prod_{i=1}^{n}\frac{\left(\frac{\mu_i}{r}\right)^{[\mu_i]}}{[\mu_i]!}
\int_{\overline{\mathcal{M}}_{g,n}}
\frac{\C_{g,n}(r,r;r - \langle \vec{\mu} \rangle)}{\prod_{j=1}^{n} (1 - \frac{\mu_j}{r} {\psi}_j)}.
\end{align}
\end{corollary}
Plugging the $r$-orbifold Hurwitz numbers and the curve $\S_{r,r}$ into the TR-ELSV equivalence statement in Definition \ref{def:eqst}, we get:
\begin{corollary}\label{cor:eqorbifold}
The following two statements are equivalent:
\begin{align*}
\text{i)}& \qquad h_{g, \vec{\mu}}^{\circ, [r]} = N^{\S_{r,r}}_{g, \vec{\mu}}\\
\text{ii)}& \qquad h_{g, \vec{\mu}}^{\circ, [r]} = b(r,r)!
r^{b(r,r) }
\prod_{i=1}^{n}\frac{\left(\frac{\mu_i}{r}\right)^{[\mu_i]}}{[\mu_i]!}
\int_{\overline{\mathcal{M}}_{g,n}}
\frac{\C_{g,n}(r,r;r - \langle \vec{\mu} \rangle)}{\prod_{j=1}^{n} (1 - \frac{\mu_j}{r} {\psi}_j)}.
\end{align*}
\end{corollary}
On the other hand, $r$-orbifold Hurwitz numbers are also known to satisfy the John\-son-Pandharipande-Tseng (JPT) ELSV-type formula~\cite{JPT} (specialised here to the case
${G} = \mathbb{Z}/r\mathbb{Z}$, $U$ equal to the representation that sends $1$ to $e^{\frac{2 \pi i}{r}}$, and empty $\gamma$):
\begin{align} \label{eq:particular-jpt}
h_{g;\vec{\mu}}^{\circ, [r]} =
b(r,r)! r^{b(r,r)}
\prod_{i=1}^{n}\frac{\left(\frac{\mu_i}{r}\right)^{[\mu_i]}}{[\mu_i]!}
\int_{\overline{\mathcal{M}}_{g,n}}
\frac{p_*\sum_{i\geq 0} (-r)^i \lambda_i}{\prod_{j=1}^{n} (1 - \frac{\mu_j}{r} {\psi}_j)}.
\end{align}
\begin{remark}
Note that the powers of $r$ are here slightly rearranged to easily match the equation for the correlation differentials.
\end{remark}
The class $p_*\sum_{i\geq 0} (-r)^i \lambda_i$ is described in~\cite{JPT} via admissible covers, while Chiodo's classes rely on the moduli space of $r$-th tensor roots. These two approaches are in fact equivalent:
\begin{proposition}[\cite{LPSZ}]\label{prop:Dima}
$
p_*\sum_{i\geq 0} (-r)^i \lambda_i = \C_{g,n}(r,r;r- \langle \vec{\mu} \rangle).
$
\end{proposition}
By Proposition \ref{prop:Dima} and Formula \eqref{eq:particular-jpt}, we can re-state Corollary \ref{cor:eqorbifold} as:
\begin{corollary}[\cite{LPSZ}]\label{cor:eqorbifold2}
The two statements are equivalent:
\begin{enumerate}
\item[i)] The $r$-orbifold Hurwitz numbers satisfy the topological recursion from the spectral curve $\S_{r,r}$.
\item[ii)] The JPT formula holds.
\end{enumerate}
\end{corollary}
Since the JPT formula is proved independently of the topological recursion, Corollary \ref{cor:eqorbifold2} provides a new proof of the topological recursion for the numbers $ h_{g;\vec{\mu}}^{\circ, [r]}$. \\
On the other hand, both \cite{BHLM, DLN} derive the topological recursion for $h_{g;\vec{\mu}}^{\circ, [r]}$ by combining the cut-and-join equation with a polynomiality property which is extracted from the JPT formula itself. Hence one cannot conclude that Corollary \ref{cor:eqorbifold2} provides a new proof of the JPT formula, unless this polynomiality property is derived independently of JPT.
This polynomiality property is proved in \cite{DLPS} without using JPT; see also \cite{KLS}. Therefore these results together, via Corollary \ref{cor:eqorbifold2}, provide a new proof of the Johnson-Pandharipande-Tseng formula.
\vspace{0.5cm}
\begin{changemargin}{-1.5cm}{2cm}
{\renewcommand{\arraystretch}{1.2}
\label{my-label}
\begin{tabular}{| l | r c p{8.2cm} | }
\toprule
Case
&
Topological recursion
& &
ELSV-type formula
\\
\midrule
\begin{tabular}{@{}c@{}}
$s=1$\\
$r=1$ \\
\end{tabular}
&
\begin{tabular}{r@{}}
\textit{The standard Hurwitz }\\
\textit{numbers $h^{\circ}_{g, \vec{\mu}}$}\\
\textit{are generated by}\\
$
\begin{cases}[r]
-z+\log z = x(z)\\
z = y(z)
\end{cases}
$
\\
\end{tabular}
& $\iff$ &
\begin{tabular}{l@{}}
\textit{The ELSV formula}\\
\begin{minipage}{2cm}
\tiny
\begin{align*}
\frac{ h^{\circ}_{g;\vec{\mu}} }{b!} =
\prod_{i=1}^{n}\frac{\mu_i^{\mu_i}}{\mu_i !}
\int_{\oM_{g,n}}
\frac{\sum_{i=0}^g (-1)^i \lambda_i}{\prod_{j=1}^{n} (1 - \mu_j {\psi}_j)},\\
\end{align*}
\end{minipage}\\
\textit{holds, where: }$b = 2g - 2 + n + |\vec{\mu}|$. \\
\end{tabular}
\\
\midrule
$s=r$
&
\begin{tabular}{r@{}}
\textit{The $r$-orbifold Hurwitz}\\
\textit{ numbers $h^{\circ, [r]}_{g, \vec{\mu}}$}\\
\textit{are generated by}\\
$
\begin{cases}[r]
-z^r+\log z = x(z)\\
z^r = y(z)
\end{cases}
$
\\
\end{tabular}
& $\iff$ &
\begin{tabular}{l@{}}
\textit{The Johnson-Pandharipande-Tseng formula}\\
\begin{minipage}{2cm}
\tiny
\begin{align*}
\frac{ h^{\circ,[r]}_{g;\vec{\mu}} } {b!} =
r^b
\prod_i\frac{\left(\frac{\mu_i}{r}\right)^{[\mu_i]}}{[\mu_i]!}
\int_{\overline{\mathcal{M}}_{g,n}}
\frac{p_*\sum_{i\geq 0} (-r)^i \lambda_i}{\prod_{j=1}^{n} (1 - \frac{\mu_j}{r} {\psi}_j)},\\
\end{align*}
\end{minipage}\\
\textit{holds, where: }$b = 2g - 2 + n + |\vec{\mu}|/r$. \\
\end{tabular}
\\
\midrule
$s=1$
&
\begin{tabular}{r@{}}
\textit{The $r$-spin Hurwitz}\\
\textit{ numbers $h^{\circ, r\text{-spin}}_{g, \vec{\mu}}$}\\
\textit{are generated by}\\
$
\begin{cases}[r]
-z^r+\log z = x(z)\\
z = y(z)
\end{cases}
$
\\
\end{tabular}
& $\iff$ &
\begin{tabular}{l@{}}
\textit{The $r$-spin ELSV formula}\\
\begin{minipage}{2cm}
\tiny
\begin{align*}
\frac{ h_{g;\vec{\mu}}^{\circ, r\text{-spin}}} {b!} =
r^{b-\chi}
\prod_i\frac{\left(\frac{\mu_i}{r}\right)^{[\mu_i]}}{[\mu_i]!}
\int_{\overline{\mathcal{M}}_{g,n}}
\!\!\! \frac{\C_{g,n}(r,1; r - \langle\vec{\mu}\rangle)}{\prod_{j=1}^{n} (1 - \frac{\mu_j}{r} {\psi}_j)},\\
\end{align*}
\end{minipage}\\
\textit{holds, where: }$b = (2g - 2 + n + |\vec{\mu}|)/r$. \\
\end{tabular}
\\
\bottomrule
\end{tabular}
}
\end{changemargin}
\begin{remark}
In the table all the spectral curves are of genus zero, i.e. $\Sigma = \mathbb{C} \mathbb{P}^1$, and hence with the standard kernel $B(z,z') = \frac{\dd z\, \dd z'}{(z-z')^2}$.
\end{remark}
Haiku Getting UserlandFS, NetFS
In its current state, UserlandFS is coded to work on BeOS R5, but it will soon be ported to the new Haiku file system interface (a slightly modified version of the BeOS R5 interface). It is hoped that the availability of UserlandFS will accelerate development of more file system add-ons for Haiku.
While a userland debugging "shell" for FS development already exists for Haiku, it has some limitations that can be avoided by using UserlandFS instead. The "FS shell" emulates the relevant part of the kernel (the complete VFS layer) and provides a CLI interface with several testing commands. UserlandFS instead lets you use any application or test program (e.g. Tracker) with your FS directly, and it results in the same access patterns you would expect with the file system running in the kernel.
According to Ingo, "the UserlandFS interface is identical to that of the kernel FS interface. Having the kernel interface as an option is particularly nice for developers who want to write a file system for the kernel. They can develop, test and debug in userland, and then just recompile for the kernel. Not only can a buggy FS running in userland not cause KDLs, but the debugging facilities available in userland are also way more comfortable (break/watch points, single stepping, etc.)".
NetFS is already a working implementation, and it provides all the functionality that could be expected from a file system under Haiku, including attribute and live query support. Still missing is a preferences GUI to configure NetFS, so for now server-side shares and user permissions are defined using a config file. The client is also capable of automatically locating other servers on the LAN.
hifi in Virginia Beach, VA

Access Innovations Incorporated - 3631 Virginia Beach Blvd, Virginia Beach, VA - 1.4 mi
Audio Connection - 1657 Laskin Rd, Virginia Beach, VA - 3.8 mi
APS Inc - 700 Military Highway, Virginia Beach, VA - 7.2 mi - (757) 499-5922
"Security is just the beginning. We design and install Home Theater Systems. Local in house monitoring. C..."
Home Theatre Innovations - 732 Eden Way N Ste G, Chesapeake, VA - 9.7 mi - (757) 361-6861
Technical TV - 2947 S Military Highway Ste 102, Chesapeake, VA - 13.7 mi - (757) 485-0413
A Última Minoria (19??)
The Last Minority
45 min
Documentary - Current Affairs
Others: Helena Balsa [Journalist]
This stubbornness belongs to around one thousand inhabitants of Goa; they are still one thousand in a country that is a subcontinent of more than 800 million inhabitants.
You are therefore invited to watch this "Last Minority", a report by the journalist Helena Balsa.
NOLA Itinerary - Comments and Suggestions Please!
I am going with a friend to New Orleans May 25 (arriving 10:00 am) – May 30 (leaving 4:00 pm) and could really use some insider advice/guidance on my food itinerary. My friend will be attending a conference so for some lunches and breakfasts I will be alone. We really want to pack as much as we can into our short time in the city. Ideally, I would like to do three meals a day so that we can try as many restaurants as possible, and I love to walk. So, hopefully, we will work up an appetite. Our hotel is on Canal and Magazine. The following is the itinerary that I have so far:
Wed:
Breakfast/Lunch – Don’t know yet
Dinner – Brigtsen’s (thought this would be a good NO intro)
Activities – French Quarter and Lafayette Park Concert Series (afternoon/early evening),
Preservation Hall and Carousel Bar (night)
Thurs –
Breakfast – Stanley’s
Lunch - August (followed the reviews saying lunch is a great value and experience)
Dinner - Emeril’s (my friend really wants to try this place)
Activities – Lafayette Cemetery and Garden District (morning)
City Park/Bayou/Esplanade (afternoon)
Vaughan’s (night)
Fri –
Breakfast - Café du Monde or some other quick bite
Lunch – Galatoire’s (downstairs – I really want to experience this, if we get there by 11:15 should we be able to get seated?)
Dinner – Can’t decide
Activities – St. Louis Cemetery (morning), French Quarter (afternoon), Snug Harbor and Frenchman Street (night)
Saturday:
Breakfast – Elizabeth’s
Lunch – Casamento’s or a po-boy somewhere
Dinner – Again, can’t decide
Activities – Whatever we feel like (morning and afternoon), Not sure about night (maybe Tipitina’s)
Sunday:
Brunch – Commander’s Palace
Snack - ???
Dinner – A lot is closed, I am trying to decide between Upperline, Stella, Luke, and Dante’s Kitchen
Activities – Magazine Street and Garden District (afternoon), Maple Leaf (night)
Monday:
Breakfast: Camellia Grill
Lunch: Mr. B’s
For Saturday and Sunday, I’m thinking about these restaurants (they all sound so good I can’t decide), but I would welcome any other suggestions:
Patois
Herbsaint
La Petite Grocery
Cochon
Bayona
Clancy’s
Jacque Imo’s
Upperline
I'm a little worried that we have focused on too many big name places and are losing out on some of the hidden gems/local favorites. Any thoughts???
-----
Cochon
930 Tchoupitoulas St., New Orleans, LA 70130
Casamento's Restaurant
4330 Magazine St, New Orleans, LA 70115
Bayona
430 Dauphine St, New Orleans, LA 70112
Emeril's Restaurant
800 Tchoupitoulas, New Orleans, LA 70130
Galatoire's Restaurant
209 Bourbon St., New Orleans, LA 70130
Herbsaint
701 Saint Charles Avenue, New Orleans, LA 70130
Tipitina's
501 Napoleon Ave, New Orleans, LA 70115
Upperline Restaurant
1413 Upperline St, New Orleans, LA
Preservation Hall
726 St Peter St, New Orleans, LA
La Petite Grocery
4238 Magazine Street, New Orleans, LA 70130
Friday, since you're staying in/very near the Quarter all day and are lunching at Galatoire's, you might want something lighter for supper in the course of your Frenchmen St. activities. Three Muses and Yuki Izakaya, both right on Frenchmen, have good food and music. There's also Adolpho's above the Apple Barrel. In the Quarter, Sylvain has good small plates and things to share.
If you do decide to do Tipitina's on Saturday night, you could do a late-ish lunch at Casamento's (they close at 2:00), then wander Magazine St. in the afternoon and stay in that area for dinner. La Petite Grocery and Patois are very near Tipitina's; Upperline is about a mile so short cab or medium walk.
Sunday, if you're going to the Maple Leaf, definitely go to Dante's Kitchen. It's very good and right nearby.
- re: uptownlibrarian
I have read that Three Muses is good. That's definitely a great idea for Friday. Between La Petite Grocery, Patois, and Upperline, which would you choose?
-----
Three Muses
536 Frenchmen St, New Orleans, LA 70116
You will not get seated if you're at Galatoire's by 11:15, as it will be filling up for the first seating. I suggest getting there early...last time we tried we got there at 10:00 to get first seating and were S.O.L. Get a cup of coffee and get out there by 8:30 or 9:00 and then go rest up for lunch.
Scratch La Petite Grocery from your maybe list and add Lilette for dinner Fri or Sat and Clancy's for the other night.
- re: FoodChic
Would you do Lilette instead of Patois or Herbsaint? Also, what are your thoughts on Upperline or Dante's Kitchen for Sunday?
-----
Dante's Kitchen
736 Dante Street, New Orleans, LA 70118
Lilette
3637 Magazine St., New Orleans, LA 70115
- re: jeannkate
Upperline has a fairly new chef, and I've not read much about it since he started (maybe someone that has been there recently can chime in), but you won't go wrong with Dante's.
Patois and Lilette are neck and neck IMO...You won't be unhappy with either.
No specific recommendations, you've got a very good list. I agree about Galatoire's good food, but heavy. I've been to NOLA 4 times and have barely made a dent in the list I have, similar to yours. Very easy to get sidetracked...distracted by all the great music, noise, Sazeracs, Hurricanes etc. I hope you report back and let us know how many of these you got to and your thoughts. These aren't my pix in the link below (Galatoire's & few other places), but a link I saved from someone on TA and like to look at once in a while to remind myself of all the food & fun I'm going to have in November.......
At least I'm telling myself it will be fun! On the agenda today...the frame. What else? I will also be brainstorming the trilogy. We're still trying to come up with an idea editorial likes. At least we're closer and things are not so, well, sexy now.
Hubby's been gone this week (second week this month) and I'm going to be so happy to pick him up at the airport tonight. I miss him mucho!
I sliced my finger last night with the razor blade I was using to get paint off the grooves around each of the kids' pictures. It bled, but no blood got on the frame. Huge sigh of relief. But it still hurts.
Okay, better get back to it. Is your Friday going to be fun?
Specialized AX2012 skills in Process Manufacturing, focused on:
• Formula management
• Co-product and by-product planning and management, including allocating costs for unplanned co-products and by-products
• Multi-dimensional inventory capability
• Containerized packaging
• First expired/first out (FEFO) and shelf life inventory management with the ability to set vendor-specific batch information
• Approved vendor set up and maintenance
• Vendor batch detail set up and maintenance
• Potency management
• Batch balancing for potency formulas
• Lot inheritance
• Batch order sequencing
We don’t provide junior staff members who are fresh off AX certifications and who don’t know the meaning of JIT, MRP or Lean manufacturing. We have consultants who have lived and breathed manufacturing, many of whom came from industry positions where they were AX users before they became AX consultants. You demand both AX and manufacturing expertise to address your business issues; ABT delivers.
Whether you are a discrete or process manufacturer, Advanced Business Technology has the industry experience to ensure your implementation is supported by seasoned consultants who know your industry.
6136 Frisco Square, Suite 400 Frisco, TX 75033
Phone: (469) 252-2130
© 2014 Advanced Business Technology
\begin{document}
\maketitle
\begin{abstract}
We point out a connection between a problem of invariance of power series families of probability distributions under binomial thinning and functional equations which generalize both the Cauchy equation and an additive form of the Go\l \k ab-Schinzel equation. We solve these equations in several settings with no or mild regularity assumptions imposed on the unknown functions.
\end{abstract}
\noindent
{\bf Keywords:} Cauchy equation, Go\l \k ab-Schinzel equation, binomial thinning, power series family
\section{Introduction: invariance under binomial thinning in power series families}
The functional equations we analyze in this paper arise naturally in an invariance problem involving Poisson-type random probability measures, known under the nickname {\em throwing stones and collecting bones}. The problem has two basic ingredients: the binomial thinning operator and the power series family of probability distributions.
\begin{enumerate}
\item Binomial thinning:
Let $\mathcal P(\N)$ be the set of probability measures with supports in $\N=\{0,1,\ldots\}$. For every $p\in[0,1]$ the binomial thinning operator $T_p$ is defined as follows:
$$
\mathcal P(\N)\ni \mu\mapsto T_p(\mu)\in\mathcal P(\N)
$$
and $T_p(\mu)$ is the probability distribution of
\bel{Ktil}
\tilde{K}:=\sum_{n=0}^K\,I_n,
\ee
where $(I_n)_{n\ge 1}$ is a sequence of independent random variables with the same Bernoulli distribution $\mathrm{Ber}(p):=(1-p)\delta_0+p\delta_1$ (additionally we set $I_0=0$), independent of the random variable $K$ with distribution $\mu$; all variables are defined on some probability space $(\Omega,\cF,\P)$. Here and in the sequel by $\delta_x$ we denote the Dirac measure at $x$. In particular, $T_0(\mu)=\delta_0$ and $T_1(\mu)=\mu$.
This operator was introduced in \cite{SvH} to establish discrete versions of stability and selfdecomposability of probability measures. Since then the binomial thinning operator and its extensions have been intensively studied in various probabilistic contexts (a prominent example being time series theory). In particular, very recently \cite{BR} (referred to as BR in the sequel) used the thinning operator to model Poisson-type random point processes restricted to a subset of the original state space.
\item Power series family:
Let $\a=(a_k)_{k\ge 0}$ be a sequence of nonnegative numbers with $a_0=1$ such that the set
$$
\Theta_{\a}=\left\{\theta\ge 0:\,\varphi(\theta):=\sum_{k\ge 0}\,a_k\theta^k<\infty\right\}
$$
has a non-empty interior (actually $\Theta_{\a}$ is a convex set). Then $$\mu_{\a,\theta}=\sum_{k\ge 0}\,\tfrac{a_k\theta^k}{\varphi(\theta)}\,\delta_k$$ is a probability measure called a power series distribution generated by $\a$ with the parameter $\theta\in\Theta_{\a}$. The power series family generated by $\a$ is defined as $$\mathcal{PSF}(\a)=\{\mu_{\a,\theta}:\,\theta\in\Theta_{\a}\}.$$
\end{enumerate}
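Three classical instances of this construction (these are exactly the families singled out in BR as invariant under thinning, though that is not needed here): $a_k=1/k!$ gives $\varphi(\theta)=e^\theta$ and the Poisson family; $a_k=\binom Nk$ gives $\varphi(\theta)=(1+\theta)^N$ and the binomial family; $a_k=\binom{\beta+k-1}{k}$ gives $\varphi(\theta)=(1-\theta)^{-\beta}$ (for $\theta<1$) and the negative binomial family. A quick numerical check of the generating functions (sample values of $\theta,N,\beta$ are arbitrary):

```python
import math

# Normalization check for three classical power series families:
#   a_k = 1/k!            -> phi(theta) = e^theta           (Poisson)
#   a_k = C(N, k)         -> phi(theta) = (1+theta)^N       (binomial)
#   a_k = C(beta+k-1, k)  -> phi(theta) = (1-theta)^(-beta) (neg. binomial)

def phi(a, theta, terms=150):
    """Truncated power series phi(theta) = sum_k a(k) * theta^k."""
    return sum(a(k) * theta ** k for k in range(terms))

theta, N, beta = 0.3, 5, 2.0
poisson_a  = lambda k: 1.0 / math.factorial(k)
binomial_a = lambda k: math.comb(N, k)              # vanishes for k > N
negbin_a   = lambda k: math.gamma(beta + k) / (math.gamma(beta) * math.factorial(k))

assert abs(phi(poisson_a, theta) - math.exp(theta)) < 1e-12
assert abs(phi(binomial_a, theta) - (1 + theta) ** N) < 1e-12
assert abs(phi(negbin_a, theta) - (1 - theta) ** (-beta)) < 1e-10
```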
The problem is to identify the power series families which are invariant under binomial thinning, i.e. one searches for $\mathcal{PSF}(\a)$ satisfying
$$
T_p(\mathcal{PSF}(\a))\subset \mathcal{PSF}(\a)
$$
for some $p\in (0,1)$.
Equivalently, we want to describe all sequences $\a$ of nonnegative numbers with $a_0=1$ and with $\Theta_{\a}$ of non-empty interior, such that there exists $p\in(0,1)$ and a function $h_p:\Theta_{\a}\to\Theta_{\a}$ which satisfy the condition
\bel{KTK}
\forall\,\theta\in\Theta_{\a}\quad\left(\, K\sim \mu_{\a,\theta}\quad \Rightarrow\quad \tilde{K}\sim\mu_{\a,h_p(\theta)}\,\right),
\ee
where $\tilde{K}$ is defined in \eqref{Ktil}. (If $\mu$ is the probability distribution of a random variable $X$ we write $X\sim \mu$.)
The probability generating function is a convenient tool for analyzing this problem. Recall that the probability generating function $\psi_{\mu}$ of $\mu=\sum_{k\ge 0}\,p_k\delta_k\in \mathcal{P}(\N)$ is defined by
$\psi_{\mu}(s)=\sum_{k\ge 0}\,s^kp_k$ on a domain $U\supset[-1,1]$. In particular, $$\psi_{\mathrm{Ber}(p)}(s)=ps+q,\quad s\in\R,\quad \mbox{where }\;q=1-p,$$ and
$$
\psi_{\mu_{\a,\theta}}(s)=\sum_{k\ge 0}\,s^k\,\tfrac{a_k\theta^k}{\varphi(\theta)}=\tfrac{\varphi(s\theta)}{\varphi(\theta)},\qquad |s|\theta\in\Theta_{\a},\;\theta\in\Theta_{\a},
$$
where $\varphi$ is defined on $\Theta_{\a}\cup(-\Theta_{\a})$ by analytical extension.
Let $\nu$ be the distribution of $\tilde{K}$ defined in \eqref{Ktil} with $K\sim\mu_{\a,\theta}$ for $\theta\in\Theta_{\a}$. Then using conditioning with respect to $K$ and independence of $K,I_1,I_2,\ldots$ we get
$$
\psi_{\nu}(s)=\E\,s^{\tilde{K}}=\E\,s^{\sum_{n=0}^K\,I_n}=\E\,(\psi_{\mathrm{Ber}(p)}(s))^K=\psi_{\mu_{\a,\theta}}\left(\psi_{\mathrm{Ber}(p)}(s)\right)=\tfrac{\varphi((ps+q)\theta)}{\varphi(\theta)},
$$
provided that $\theta,\,|ps+q|\theta\in \Theta_{\a}$.
By \eqref{KTK} we have $\nu=\mu_{\a,h_p(\theta)}$, i.e. for $p\in(0,1)$ we get the equation
\bel{row11}
\tfrac{\varphi((ps+q)\theta)}{\varphi(\theta)}=\tfrac{\varphi(sh_p(\theta))}{\varphi(h_p(\theta))}
\ee
for $s$ and $\theta$ satisfying $\theta,\,|ps+q|\theta,\,|s|h_p(\theta)\in \Theta_{\a}$. Since $q\theta\in\Theta_{\a}$, upon inserting $s=0$ in \eqref{row11}, we get (note that $\varphi(0)=1$)
$$
\varphi(h_p(\theta))=\tfrac{\varphi(\theta)}{\varphi(q\theta)},\quad \theta\in\Theta_{\a}.
$$
Consequently, \eqref{row11} can be rewritten as
\bel{basica}
\varphi((ps+q)\theta)=\varphi(q\theta)\,\varphi(sh_p(\theta)).
\ee
Then, upon changing variables $u:=ps\theta$, $v:=q\theta$ the equation \eqref{basica} yields
\bel{proto}
\varphi(u+v)=\varphi(v)\,\varphi(u\rho(v))
\ee
on the proper domain for the variables $u$ and $v$ (actually, this domain contains a neighbourhood of zero for $u$ and a right neighbourhood of zero for $v$), where $\rho(0)=1$ and $\rho(v)=\tfrac{q}{pv}h_p\left(\tfrac{v}{q}\right)$, $v>0$. Since $\varphi(0)=1$, one can apply the logarithm to both sides of \eqref{proto} for $u$ in a two-sided neighbourhood of zero and $v$ in a right neighbourhood of zero, which leads to an additive version of equation \eqref{proto}.
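Two explicit solutions of \eqref{proto} can be checked by hand (these particular $\varphi$ and $\rho$ are illustrations corresponding to the Poisson and negative binomial families, not derived above): $\varphi(\theta)=e^\theta$ with $\rho\equiv 1$, and $\varphi(\theta)=(1-\theta)^{-\beta}$ with $\rho(v)=1/(1-v)$. A numerical verification:

```python
import math

# Negative-binomial-type solution of phi(u+v) = phi(v) * phi(u * rho(v)):
#   phi(t) = (1 - t)^(-beta),  rho(v) = 1/(1 - v)   (needs arguments < 1)
beta = 1.7
phi = lambda t: (1.0 - t) ** (-beta)
rho = lambda v: 1.0 / (1.0 - v)

for u, v in [(0.1, 0.2), (0.05, 0.5), (-0.3, 0.4)]:
    assert abs(phi(u + v) - phi(v) * phi(u * rho(v))) < 1e-12

# Poisson-type solution: phi = exp with rho identically 1
for u, v in [(0.3, 0.8), (-1.0, 2.0)]:
    assert abs(math.exp(u + v) - math.exp(v) * math.exp(u)) < 1e-9
```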
This equation, referred to as the {\em modified Cauchy equation} in BR, has recently been solved in that paper. We now quote this result {\em in extenso}:
\begin{lemma}[BR, Lemma 1]\label{BaRe} Assume that $f(t)$ is twice differentiable in some neighbourhood of the origin, satisfies $f(0)=0$ and $f'(0)>0$ as well as
$$
f(s+t)-f(s)=f(h(s)t),
$$
where $h(s)$ is $t$ free. Then $f$ is of the form $f(t)=At$ or $f(t)=B\log(1+At)$ for some $A,B\neq 0$. Moreover $h(s)=f'(s)/f'(0)$.
\end{lemma}
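The logarithmic branch of the lemma can be verified directly: for $f(t)=B\log(1+At)$ one has $h(s)=f'(s)/f'(0)=1/(1+As)$, and $f(s+t)-f(s)=B\log\big(1+\tfrac{At}{1+As}\big)=f(h(s)t)$. A numerical sketch (sample values of $A,B$ are arbitrary):

```python
import math

# Check of the logarithmic solution in the lemma: for
# f(t) = B log(1 + A t) and h(s) = f'(s)/f'(0) = 1/(1 + A s),
# indeed f(s + t) - f(s) = f(h(s) t).
A, B = 0.7, 2.3
f = lambda t: B * math.log(1 + A * t)
fprime = lambda s: A * B / (1 + A * s)
h = lambda s: fprime(s) / fprime(0)        # equals 1/(1 + A*s)

for s in [0.0, 0.5, 2.0]:
    for t in [0.1, 1.0, 3.0]:
        assert abs(f(s + t) - f(s) - f(h(s) * t)) < 1e-12
```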
In BR Lemma \ref{BaRe} is used to identify Poisson, binomial and negative binomial probability distributions as the only power series families which are invariant under binomial thinning (see Theorem 2 and its proof in BR).
We are interested in the above equation as well as in its ``dual'' $f(s+g(s)t)=f(s)+f(t)$. Instead of a neighbourhood of zero we consider the domains $[0,\infty)$ in Section 3 and $V$, a vector space, in Section 4. We assume minor or no regularity conditions on $f$ and consider several cases of image spaces of $f$: a unital magma, the real line and a linear topological space. No regularity conditions whatsoever are imposed on the unknown functions $h$ and $g$. Our results complement to some extent \cite{JCh}, where all solutions of the ``dual'' equation are determined in the case when $f$ maps a linear space (real or complex) into a semigroup and $g$ satisfies some regularity conditions. In Section 2 we give preliminaries on the functional equations we are interested in.
\section{Cauchy-Go\l \k ab-Schinzel equations}
Let $(M,+)$ be a magma, i.e. $M$ is a set equipped with a binary operation $+:M\times M\to M$. Consider $U=[0,\infty)$ or $U=V$, a vector space over a field $\mathbb F$. For unknown functions $f:U\to M$ and $g,h:U\to W$, where $W=U$ in case $U=[0,\infty)$ and $W=\mathbb F$ in case $U=V$, we consider equations
\bel{equ}
f(s+t)=f(s)+f(h(s)t),\qquad s,t\in U,
\ee
and
\bel{rew}
f(s+g(s)t)=f(s)+f(t),\qquad s,t\in U.
\ee
We would like to identify all solutions $(f,h)$ of \eqref{equ} and $(f,g)$ of \eqref{rew}.
If $f=g$ (thus $M=W$), then \eqref{rew} becomes an additive form of the Go\l \k ab-Schinzel equation, see \cite{JA}, pp. 132-135, \cite{AD}, pp. 311-319 and the survey paper \cite{JB}. For more recent contributions on the Go\l \k ab-Schinzel equation and its generalizations consult e.g. \cite{CK14}, \cite{CK15}, \cite{CK17}, \cite{BO} and \cite{O17}. In particular, the latter paper reveals yet another probabilistic (stable laws and random walks) connection of the Go\l \k ab-Schinzel equation, treated there as a {\em disguised form} of the Goldie equation. If $g\equiv 1$ or $h\equiv 1$, then \eqref{equ} and \eqref{rew} become the same standard Cauchy equation. Hence we call \eqref{rew} as well as \eqref{equ} the Cauchy-Go\l \k ab-Schinzel (CGS) equations.
We consider a unital magma $M$ (i.e. $M$ has a neutral element, denoted by $\bf 0$ throughout the paper) with the two-sided cancelation property. To avoid trivialities we assume that $f\not\equiv \bf 0$.
Note that for $f$ which solves either \eqref{equ} or \eqref{rew} we have
\bel{f0}
f(0)={\bf 0}.
\ee
\vspace{3mm}
\begin{remark}
\label{rem1}
If $(f,g)$ solves \eqref{rew}, then $\mathrm{Ker}(g):=\{s\in U:\,g(s)=0\}=\emptyset$.
Assume not, i.e. $g(s_0)=0$ for some $s_0\in U$. Then \eqref{rew} implies $f(s_0)=f(s_0+g(s_0)t)=f(s_0)+f(t)$ for any $t\in U$. Hence $f\equiv \mathbf 0$, a contradiction.
\end{remark}
\begin{remark}\label{rem3}
If $(f,h)$ solves \eqref{equ} and $\mathrm{Ker}(h)=\emptyset$, then $(f,g)$ with $g=1/h$ solves \eqref{rew}.
In the opposite direction, if $(f,g)$ solves \eqref{rew}, then $(f,h)$ with $h=1/g$ (being well-defined by Remark \ref{rem1}) solves \eqref{equ}.
\end{remark}
\begin{remark}\label{rem2}
Let $(f,h)$ solve \eqref{equ} for $U=V$, a vector space. Then $\mathrm{Ker}(h)=\emptyset$.
Assume not, i.e. $h(s_0)=0$ for some $s_0\in V$. Then \eqref{equ} together with \eqref{f0} imply $f(s_0+t)=f(s_0)$ for every $t\in V$. That is, $f$ is a constant function. By \eqref{f0} we get a contradiction with $f\not\equiv {\bf 0}$.
\end{remark}
When $U=[0,\infty)$, while considering \eqref{equ} it is convenient to distinguish two cases with respect to the form of the kernel of $h$:
\begin{enumerate}
\item[{\bf I}] $\mathrm{Ker}(h)\neq\emptyset$
\item[{\bf II}] $\mathrm{Ker}(h)=\emptyset$
\end{enumerate}
\section{CGS equations on $U=[0,\infty)$}
\subsection{$\mathbf{\mathrm{\mathbf{Ker}}(h)\neq\emptyset}$}
Throughout this section we assume that $U=[0,\infty)$ and that $(M,+)$ is a unital magma with the two-sided cancelation property.
\begin{theorem}
Assume that $f:[0,\infty)\to M$ is non-zero, $h:[0,\infty)\to[0,\infty)$, $\mathrm{Ker}(h)\neq\emptyset$ and $$s_0=\inf\,\mathrm{Ker}(h).$$
Then $(f,h)$ solves \eqref{equ} on $[0,\infty)$ if and only if
\begin{enumerate} \item
either $s_0=0$ and
\bel{fg1}
f(s)=\left\{\,\begin{array}{ll}
\mathbf 0,& \mathrm{for}\;s=0, \\
\mathbf a, & \mathrm{for}\;s\in(0,\infty),
\end{array}\right. \qquad
h(s)=\left\{\,\begin{array}{ll}
b,& \mathrm{for}\;s=0, \\
0, & \mathrm{for}\;s\in(0,\infty),
\end{array}\right.
\ee
where $\mathbf a\in M\setminus\{\mathbf 0\}$ and $b\in(0,\infty)$,
\item or $s_0>0$ and
\bel{fg2}
f(s)=\left\{\,\begin{array}{ll}
\mathbf 0,& \mathrm{for}\;s\in[0,s_0), \\
\mathbf a, & \mathrm{for}\;s\in[s_0,\infty),
\end{array}\right. \qquad
h(s)=\left\{\,\begin{array}{ll}
\tfrac{s_0}{s_0-s},& \mathrm{for}\;s\in[0,s_0), \\
0, & \mathrm{for}\;s\in[s_0,\infty), \end{array}\right.
\ee
where $\mathbf a\in M\setminus\{\mathbf 0\}$.
\end{enumerate}
\end{theorem}
\begin{proof}
It is easy to check that $(f,h)$ given in \eqref{fg1} and \eqref{fg2} solve \eqref{equ}.
Let $(f,h)$ solve \eqref{equ}. Then it follows from \eqref{equ} and \eqref{f0} that for any $t\in[0,\infty)$,
$$
f(s+t)=f(s)\qquad\mbox{whenever }\;h(s)=0.
$$
Since such $s\ge s_0$ can be chosen arbitrarily close to $s_0$ we conclude that
\bel{(5)}
f(s)=\mathbf a\in M,\qquad s\in(s_0,\infty).
\ee
If $s\in[0,\infty)$ is such that $h(s)\ne 0$, then we consider \eqref{equ} for $t>0$ such that $h(s)t>s_0$ and $s+t>s_0$. Then \eqref{(5)} yields $\mathbf a=f(s)+\mathbf a$, whence, by the cancelation property, $f(s)=\mathbf 0$. Therefore,
\bel{imp}
h(s)\neq 0\quad \Rightarrow\quad f(s)=\mathbf 0,\qquad s\in[0,\infty).
\ee
Combining \eqref{(5)} and \eqref{imp} we get
\bel{pmi}
h(s)=0,\quad s\in(s_0,\infty)\qquad\mbox{provided }\; \mathbf a\neq \mathbf 0.
\ee
Consider now two cases.
\begin{enumerate}
\item $s_0=0$: Then \eqref{(5)} together with \eqref{f0} imply that $\mathbf a\neq \mathbf 0$. Consequently, $f$ is as given in \eqref{fg1}.
By \eqref{pmi} we have $h(s)=0$ for $s>0$. From \eqref{equ} for $s=0$ and $t>0$ we get $\mathbf a=f(t)=f(h(0)t)$. Thus \eqref{f0} implies $h(0)>0$. Consequently, $h$ is as given in \eqref{fg1}.
\item $s_0>0$: Then \eqref{imp} gives $f(s)=\mathbf 0$ for $s\in[0,s_0)$. From \eqref{equ} for $s=s_0$ and $t>0$ such that $h(s_0)t<s_0$ by \eqref{(5)} we get $\mathbf a=f(s_0)$ which implies that $\mathbf a\neq \mathbf 0$. Consequently, $f$ is as given in \eqref{fg2}.
By \eqref{pmi} and \eqref{imp} we have $h(s)=0$ for $s\in[s_0,\infty)$. Let $s\in[0,s_0)$. Then $f(s+t)=f(h(s)t)$ for $t\in[0,\infty)$. Referring to $f$ as given in \eqref{fg2} we see that
$$
s+t<s_0\quad \Leftrightarrow \quad h(s)t<s_0.
$$
Thus $$\tfrac{s_0}{s_0-s}\le h(s)<\tfrac{s_0}{s_0-s-\eps}\qquad \mbox{for}\;\;\eps\in(0,s_0-s].$$
By taking $\eps\downarrow 0$ we obtain $h(s)=\tfrac{s_0}{s_0-s}$ for $s\in[0,s_0)$. Consequently, $h$ is as given in \eqref{fg2}.
\end{enumerate}
\end{proof}
\subsection{$\mathrm{Ker}(h)=\emptyset$}
Let $U=[0,\infty)$. As explained in Remark \ref{rem3}, $(f,h)$ solves \eqref{equ} if and only if $(f,g)$ with $g=1/h$ solves \eqref{rew}. We start with the case where $f$ is injective; cf. \cite{EV}.
Then we move on to the case when $f$ is right continuous at a point.
\subsubsection{A magma version}
Throughout this section we assume that $(M,+)$ is a commutative unital magma with the (two-sided) cancellation property.
\begin{theorem}\label{inj}
Assume that $f:[0,\infty)\to M$ is injective and $g:[0,\infty)\to [0,\infty)$.
Then $(f,g)$ solves \eqref{rew} if and only if \begin{enumerate}
\item either $g(1)=1$ and
\bel{fg3}
f\;\mbox{is additive}\qquad\mbox{and}\qquad g\equiv 1,
\ee
\item or $g(1)\neq 1$ and
\bel{fg4}
f(s)=a(\log(\alpha s+1))\qquad\mbox{and}\qquad g(s)=\alpha s+1,\quad s\in[0,\infty),
\ee
where $\alpha\in(0,\infty)$ and $a:[0,\infty)\to M$ is an injective additive function.
\end{enumerate}
\end{theorem}
\begin{proof}
It is easy to check that $(f,g)$ given in \eqref{fg3} and \eqref{fg4} solve \eqref{rew}.
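As a numerical illustration (not part of the proof), one can verify that the pair in \eqref{fg4} solves $f(s)+f(t)=f(s+g(s)t)$, together with the multiplicativity $g(s+g(s)t)=g(s)g(t)$ used later in the proof. The choices $\alpha=0.5$ and the additive injective map $a(x)=2x$ are illustrative assumptions.

```python
import math

ALPHA = 0.5          # illustrative alpha > 0

def a(x):            # an additive, injective map on [0, inf)
    return 2.0 * x

def g(s):
    return ALPHA * s + 1.0

def f(s):            # f = a o log o g, as in (fg4)
    return a(math.log(g(s)))

for s in [0.0, 0.3, 1.0, 4.7]:
    for t in [0.0, 0.2, 2.5, 9.1]:
        # the equation itself
        assert math.isclose(f(s) + f(t), f(s + g(s) * t))
        # the key identity: g(s + g(s)t) = g(s) g(t)
        assert math.isclose(g(s + g(s) * t), g(s) * g(t))
print("(fg4) checks passed")
```

Both assertions reduce to $\log(\alpha s+1)+\log(\alpha t+1)=\log\big(\alpha(s+t+\alpha st)+1\big)$, since $s+g(s)t=s+t+\alpha st$.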
By commutativity of the right-hand side of \eqref{rew} and the injectivity of $f$ we conclude that
$$
s+g(s)t=t+g(t)s,\quad s,t\in[0,\infty).
$$
For $t=1$ we get \bel{hs} g(s)=\alpha s+1,\quad s\ge 0,\ee
where $\alpha=g(1)-1$ and $\alpha\ge 0$ since $g$ is non-negative.
Consider two possible cases: $g(1)=1$ and $g(1)\neq 1$.
\begin{enumerate}
\item $g(1)=1$:
By \eqref{hs} we have $\alpha=0$ and $g\equiv 1$. Then by \eqref{rew} it follows that $f$ is additive.
\item $g(1)\neq 1$:
By \eqref{hs} we have $\alpha>0$ and $g$ is as given in \eqref{fg4}.
It follows from \eqref{hs} that $k:=\log(g):[0,\infty)\to[0,\infty)$ is a bijection (note that $g(s)\ge 1$ for all $s\in[0,\infty)$). Therefore the formula $a\circ k=f$ defines a function $a:[0,\infty)\to M$. Clearly, $a$ is injective and the equation \eqref{rew} in terms of $a$ assumes the form
$$
a(k(s))+a(k(t))=a(k(s+g(s)t)),\quad s,t\ge 0.
$$
But $g$, see \eqref{hs}, satisfies $g(s+g(s)t)=g(s)g(t)$, i.e. $k(s+g(s)t)=k(s)+k(t)$. Therefore
$$
a(k(s))+a(k(t))=a(k(s)+k(t)),\quad s,t\ge 0.
$$
Since $k$ is bijective on $[0,\infty)$ we conclude that $a$ is an additive function.
\end{enumerate}
\end{proof}
\subsubsection{Real and vector space versions}
We first consider the case of real-valued $f$.
\begin{theorem}\label{oner}
Assume that $f:[0,\infty)\to \R$ is non-zero, right continuous at some point and $g:[0,\infty)\to[0,\infty)$.
Then $(f,g)$ solves \eqref{rew} if and only if
\begin{enumerate}
\item either
$$
f(s)=as,\quad s\in[0,\infty)\qquad\mbox{and}\qquad g\equiv 1,
$$
where $0\neq a\in\R$,
\item or
$$
f(s)=a\log(\alpha s+1)\qquad\mbox{and}\qquad g(s)=\alpha s+1,\quad s\in[0,\infty),
$$
where $\alpha>0$ and $0\neq a\in\R$.
\end{enumerate}
\end{theorem}
For the proof of Theorem \ref{oner} we use several auxiliary results which are considered first.
In Lemmas \ref{rcrc}, \ref{rch}, \ref{rcc}, \ref{cvm} and \ref{mi} as well as in Proposition \ref{ccc} we assume that a non-zero function $f:[0,\infty)\to \R$ satisfies \eqref{rew} with some $g:[0,\infty)\to[0,\infty)$.
\begin{lemma}\label{rcrc}
If $f$ is right continuous at some point, then it is a right continuous function.
\end{lemma}
\begin{proof}
Let $f$ be right continuous at $s_1\in[0,\infty)$. From \eqref{rew} we have
$$
f(s_1+g(s_1)t)-f(s_1)=f(t),\quad t\ge 0.
$$
Taking $t\to 0^+$ we see that right continuity of $f$ at $s_1$ implies that $f$ is right continuous at $0$ (note that $g(s_1)>0$: if $g(s_1)=0$, then \eqref{rew} would give $f(t)=0$ for all $t\ge 0$, contradicting $f\not\equiv 0$).
Fix arbitrary $s>0$. Then by \eqref{rew} for $t\ge 0$ we have
$$
f(s+t)-f(s)=f(h(s)t),
$$
where $h=1/g$. Taking $t\to 0^+$ we conclude that $f$ is right continuous at $s$.
\end{proof}
\begin{lemma}\label{rch}
If $f$ is right continuous, then $g(s)\ge 1$ for every $s\ge 0$.
\end{lemma}
\begin{proof}
Assume $g(s_1)<1$ for some $s_1\ge 0$. Then for $\phi:[0,\infty)\to[0,\infty)$ defined by $\phi(t)=g(s_1)t+s_1$,
$$
\tilde{s}<\phi(t)<t\qquad \forall\,t>\tfrac{s_1}{1-g(s_1)}=:\tilde{s}.
$$
Hence $t_n:=\phi^{\circ n}(t)=g(s_1)^n\,t+s_1\sum_{k=0}^{n-1}g(s_1)^k\to\,\tilde{s}^+$ as $n\to\infty$ for $t>\tilde s$. Thus right continuity of $f$ implies
\bel{limf}
f(\tilde{s})-f(t)=\lim_{n\to \infty}\,\left(f(t_n)-f(t)\right).
\ee
On the other hand for any $k\ge 1$ we have $t_k=\phi(t_{k-1})$ and thus \eqref{rew} yields
$$
f(t_k)=f(g(s_1)t_{k-1}+s_1)=f(t_{k-1})+f(s_1)
$$
whence
$$
f(t_n)-f(t)=\sum_{k=1}^n\left(f(t_k)-f(t_{k-1})\right)= nf(s_1)\qquad \forall\,t>\tilde{s},\;n\ge 1.
$$
Thus, by \eqref{limf}, we obtain
$$
f(\tilde{s})-f(t)=\lim_{n\to \infty}\,nf(s_1)\qquad \forall\,t>\tilde{s}.
$$
Therefore $f(s_1)=0$ and $f|_{(\tilde s,\infty)}\equiv a:=f(\tilde{s})$. Taking now $s,t>\tilde s$ in \eqref{rew} we get $a=0$.
Let $s'=\inf\{s\ge 0:\,f|_{(s,\infty)}\equiv 0\}$. Assume $s'>0$. Then for $s\in(0,s')$ and $t>s'$ in \eqref{rew} we have
$0=f(t+g(t)s)=f(s)$, a contradiction. Thus $s'=0$. But this is impossible since $f\not\equiv 0$.
\end{proof}
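The convergence step of the proof can be illustrated numerically: when $g(s_1)<1$, the affine map $\phi(t)=g(s_1)t+s_1$ keeps $(\tilde s,\infty)$ invariant and its iterates contract onto the fixed point $\tilde s=\tfrac{s_1}{1-g(s_1)}$. The values $s_1=1$, $g(s_1)=0.5$ below are illustrative assumptions.

```python
# Iterating phi(t) = g(s1)*t + s1 from any t > s_tilde converges to s_tilde.
s1, gs1 = 1.0, 0.5                   # illustrative: g(s1) < 1
s_tilde = s1 / (1.0 - gs1)           # fixed point, here 2.0

t = 5.0                              # any starting point t > s_tilde
for _ in range(50):
    assert t > s_tilde               # (s_tilde, inf) stays invariant
    t = gs1 * t + s1
assert abs(t - s_tilde) < 1e-12
print("iterates converge to s~ =", s_tilde)
```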
\begin{lemma}\label{rcc}
If $f$ is right continuous, then it is a continuous function.
\end{lemma}
\begin{proof}
It suffices to prove that $f$ is left continuous at any $s>0$. Fix arbitrary $s>0$. Then by \eqref{rew} we have
$$
f(s-t)-f(s)=-f(th(s-t))\qquad \forall\,t\in[0,s],
$$
where $h=1/g$. By Lemma \ref{rch} we have $0<th(s-t)\le t$ for $t\in(0,s]$. Therefore, since $f$ is right continuous at $0$, for $t\to 0^+$ the right-hand side tends to zero and the result follows.
\end{proof}
Combining Lemmas \ref{rcrc} and \ref{rcc} we get the following result.
\begin{proposition}\label{ccc}
If $f$ is right continuous at some point, then it is a continuous function.
\end{proposition}
If $f$ is monotone, then it has a countable set of points of discontinuity, i.e. in view of Proposition \ref{ccc} any monotone $f$ satisfying \eqref{rew} is continuous. It appears that this implication can be reversed with the help of right upper and lower Dini derivatives $D^+$ and $D_+$.
\begin{lemma}\label{cvm}
If $f$ is continuous, then it is a monotone function.
\end{lemma}
\begin{proof}
There exists a sequence $(a_n)_{n\ge 1}$ in $(0,\infty)$ such that $\lim_{n\to\infty}\,a_n=0$ and, see \eqref{f0},
$$
\lim_{n\to\infty}\,\tfrac{f(a_n)}{a_n}= D^+f(0).
$$
Then, by \eqref{rew}, for every $s\ge 0$ we get
$$
D^+f(s)\ge \lim_{n\to\infty}\,\tfrac{f\left(s+a_ng(s)\right)-f(s)}{a_ng(s)}=\lim_{n\to\infty}\,\tfrac{f\left(a_n\right)}{a_ng(s)}=\tfrac{D^+f(0)}{g(s)}.
$$
Similarly, there exists a sequence $(b_n)_{n\ge 1}$ in $(0,\infty)$ such that $\lim_{n\to\infty}\,b_n=0$ and
$$
D_+f(s)\le \lim_{n\to\infty}\,\tfrac{f\left(s+b_ng(s)\right)-f(s)}{b_ng(s)}=\lim_{n\to\infty}\,\tfrac{f\left(b_n\right)}{b_ng(s)}=\tfrac{D_+f(0)}{g(s)}.
$$
Therefore
$$D_+f(s)\le \tfrac{D_+f(0)}{g(s)}\le \tfrac{D^+f(0)}{g(s)}\le D^+f(s),\qquad s\ge 0.$$
Consequently either $D^+f(s)\ge 0$ for every $s\ge 0$ or $D_+f(s)\le 0$ for every $s\ge 0$. From \cite{SL}, Theorem 7.4.13 and its Corollary, it follows that $f$ is either a non-decreasing or a non-increasing function.
\end{proof}
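For the smooth solution $f(s)=\log(\alpha s+1)$, $g(s)=\alpha s+1$, the inequality chain of the proof collapses to the identity $f'(s)=f'(0)/g(s)$, which can be checked with a one-sided difference quotient. The value $\alpha=0.7$ is an illustrative assumption.

```python
import math

ALPHA = 0.7                              # illustrative alpha > 0
g = lambda s: ALPHA * s + 1.0
f = lambda s: math.log(g(s))             # f'(0) = ALPHA

eps = 1e-7
for s in [0.0, 0.5, 2.0, 10.0]:
    one_sided = (f(s + eps) - f(s)) / eps     # approximates D^+ f(s)
    assert math.isclose(one_sided, ALPHA / g(s), rel_tol=1e-5)
print("D^+ f(s) = D^+ f(0) / g(s) verified numerically")
```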
Finally we connect monotonicity of $f$ with its injectivity.
\begin{lemma}\label{mi}
If $f$ is monotone, then it is an injective function.
\end{lemma}
\begin{proof}
Suppose $f$ is not injective. Then $f(t_1)=f(t_2)$ for some $t_1,t_2\ge 0$ such that $t_1<t_2$. By \eqref{rew}, with arguments $t_1$ and $s_1=\tfrac{t_2-t_1}{g(t_1)}>0$ it follows that
$$
f(s_1)=f(t_1+g(t_1)s_1)-f(t_1)=f(t_2)-f(t_1)=0.
$$
Since $f$ is monotone and $f(0)=0$ it follows that $f|_{[0,s_1]}\equiv 0$. Therefore $$s'=\sup\{r\ge 0:\,f|_{[0,r]}\equiv 0\}>0.$$
For $s,t\in[0,s')$ equation \eqref{rew} implies $f(s+g(s)t)=0$, whence $s+g(s)t\le s'$. It means that
$g(s)\le \tfrac{s'-s}{t}$ for every $s,t\in[0,s')$. Thus taking $t\in(s'-s,s')$ we get $g(s)<1$, $s\in(0,s')$.
Since $f$ is monotone, it is right continuous at a point in $[0,\infty)$ and, in view of Lemmas \ref{rcrc} and \ref{rch}, $g(s)\ge 1$ for every $s\ge 0$, a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem \ref{oner}]
By Proposition \ref{ccc} and Lemma \ref{cvm} the function $f$ is monotone, and it follows from Lemma \ref{mi} that $f$ is injective. Thus by referring to Theorem \ref{inj} we conclude the proof, since an additive and monotone function is linear; see e.g. \cite{JA}, Ch. 2.1.1.
\end{proof}
Theorem \ref{oner} can be extended to $f$ assuming values in a real topological vector space $X$ with dual $X^*$ which separates points on $X$, i.e. for every ${\bf x}\in X\setminus\{\bf 0\}$ there exists an $x^*\in X^*$ such that $x^*{\bf x}\neq 0$. Consequently, for ${\bf a},\,{\bf b}\in X$ if $x^*{\bf a}=x^*{\bf b}$ for every $x^*\in X^*$, then $\bf a=\bf b$. Note that the dual of a locally convex topological vector space $X$ separates points on $X$ (see e.g. \cite{WR}, Chapter 3: Corollary to Theorem 3.4; consult also Exercise 5(d) in the same chapter).
\begin{corollary}\label{corx}
Assume that $f:[0,\infty)\to X$ is non-zero, for every $x^*\in X^*$ the function $x^*\circ f$ is right continuous at some point and $g:[0,\infty)\to[0,\infty)$.
Then $(f,g)$ solves \eqref{rew} if and only if
\begin{enumerate}
\item either
\bel{fg5}
f(s)={\bf a}s,\quad s\in[0,\infty),\qquad\mbox{and}\qquad g\equiv 1,
\ee
where ${\bf a}\in X\setminus\{\bf 0\}$,
\item or
\bel{fg6}
f(s)={\bf a}\log(\alpha s+1)\quad\mbox{and}\quad g(s)=\alpha s+1,\quad s\in[0,\infty),
\ee
where $\alpha>0$ and ${\bf a}\in X\setminus\{\bf 0\}$.
\end{enumerate}
\end{corollary}
\begin{proof}
Note that for any $x^*\in X^*$ the pair $(x^*\circ f,\,g)$ solves \eqref{rew}. Moreover, since $f$ is non-zero, $x^*\circ f$ is non-zero for some $x^*\in X^*$ and it follows from Theorem \ref{oner} that
$$
g(s)=\alpha s+1,\quad s\in[0,\infty),
$$
where $\alpha\ge 0$.
If $\alpha=0$, then $f$ is additive and for every $x^*\in X^*$ the function $x^*\circ f$, being additive and right continuous at a point, has the form
$$
x^*f(s)=(x^*f(1))\,s,\quad s\in[0,\infty),
$$
whence $f(s)=f(1)s$ for $s\in[0,\infty)$, and we have \eqref{fg5} with ${\bf a}=f(1)\in X\setminus\{\bf 0\}$.
If $\alpha>0$, then by Theorem \ref{oner} for every $x^*\in X^*$ either $x^*\circ f\equiv 0$, or
$$
x^*f(s)=a\log(\alpha s+1),\quad s\in[0,\infty),
$$
where $0\neq a\in\R$; in the second case
$$
a=x^*f\left(\tfrac{e-1}{\alpha}\right).
$$
Consequently, for every $x^*\in X^*$ in both cases we have
$$
x^*f(s)=x^*f\left(\tfrac{e-1}{\alpha}\right)\,\log(\alpha s+1),\quad s\in[0,\infty),
$$
i.e.
$$
f(s)=f\left(\tfrac{e-1}{\alpha}\right)\,\log(\alpha s+1),\quad s\in[0,\infty).
$$
Thus we get \eqref{fg6} with ${\bf a}=f\left(\tfrac{e-1}{\alpha}\right)\in X\setminus\{\bf 0\}.$
\end{proof}
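The normalizing point $s=\tfrac{e-1}{\alpha}$ used in the proof works because $\log\big(\alpha\cdot\tfrac{e-1}{\alpha}+1\big)=\log e=1$, so evaluating $f$ there recovers the coefficient $a$ exactly. A quick numerical sketch (illustrative values of $\alpha$ and $a$):

```python
import math

for alpha in [0.25, 1.0, 3.0]:        # illustrative alpha > 0
    a = -2.5                           # any nonzero coefficient
    f = lambda s: a * math.log(alpha * s + 1.0)
    s_star = (math.e - 1.0) / alpha
    assert math.isclose(f(s_star), a)  # log(alpha*s_star + 1) = log(e) = 1
print("f((e-1)/alpha) = a confirmed")
```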
\section{CGS equations on a vector space}
Throughout this section $U=V$, a vector space over a field $\mathbb F$. As it has already been observed, see Remark \ref{rem2}, if $(f,h)$ solves \eqref{equ} on $V$, then $\mathrm{Ker}(h)=\emptyset$. Therefore, due to Remark \ref{rem3}, equations \eqref{equ} and \eqref{rew} are equivalent with $hg\equiv 1$.
We assume that $(M,+)$ is a unital magma with the two-sided cancellation property.
\begin{theorem}\label{injR}
Assume that $f:V\to M$ is a non-zero function and $g:V\to \mathbb F$.
Then $(f,g)$ solves \eqref{rew} if and only if $f$ is additive and $g\equiv 1$.
\end{theorem}
\begin{proof}
First, we prove that $f$ is additive, i.e.
\bel{Cauc}f(s+t)=f(s)+f(t)\ee
for all $s,t\in V$.
To this end we observe that for $s\in V$, $f(s)\neq \mathbf 0$ implies $g(s)=1$. Indeed, if $g(s)\neq 1$, then $s+g(s)t=t$ for $t=\tfrac{s}{1-g(s)}$. Thus \eqref{rew} for such $s$ and $t$ yields $f(s)=\mathbf 0$, a contradiction. Consequently, \eqref{Cauc} holds true for $s,t\in V$ such that at least one of $f(s)$ and $f(t)$ is not zero. To see this fact consider separately the cases: (a) both $f(s)$ and $f(t)$ are non-zero, (b) exactly one of $f(s)$ and $f(t)$ is non-zero. In the latter case use the identity $\mathbf a+\mathbf 0=\mathbf 0+\mathbf a=\mathbf a$, which holds for every $\mathbf a\in M$.
It suffices to prove \eqref{Cauc} for $s,t\in V$ such that $f(s)=f(t)=\mathbf 0$. Assume \eqref{Cauc} does not hold for such $s,t\in V$, i.e. $f(s+t)\neq \mathbf 0$. Then $g(s+t)=1$ and \eqref{rew} implies
\bel{mat0}
f(t)=f(s+t-s)=f(s+t)+f(-s).
\ee
Note that $f(-s)=\mathbf 0$. Otherwise $g(-s)=1$ and \eqref{rew} yields $\mathbf 0=f(-s+s)=f(-s)+f(s)$, a contradiction. Therefore, by \eqref{mat0} we get $f(t)=f(s+t)\neq \mathbf 0$, a contradiction.
Second, we prove that $g\equiv 1$. Assume not, i.e. $g(s)\neq 1$ for some $s\in V$. For arbitrary $u\in V$ set $t:=\tfrac{u}{g(s)-1}\in V$. Then, by \eqref{Cauc}, we have
\bel{efu}
f(u)=f((g(s)-1)t)=f(s+g(s)t-s-t)=f(s+g(s)t)+f(-s-t).
\ee
But \eqref{rew} and \eqref{Cauc} yield $f(s+g(s)t)=f(s)+f(t)=f(s+t)$ and $f(s+t)+f(-(s+t))=\mathbf 0$. Consequently, \eqref{efu} implies $f\equiv \mathbf 0$, a contradiction.
\end{proof}
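The pivot of the proof is the algebraic observation that on a vector space, $g(s)\neq 1$ allows one to solve $s+g(s)t=t$ by $t=\tfrac{s}{1-g(s)}$, which forces $f(s)=\mathbf 0$. A numerical sketch of this identity (the sample values of $s$ and $g(s)$ are illustrative):

```python
# For g(s) != 1, the point t = s/(1 - g(s)) satisfies s + g(s)*t = t:
# s + g*s/(1-g) = s*(1-g+g)/(1-g) = s/(1-g) = t.
for s, gs in [(3.0, 0.5), (-2.0, 4.0), (1.0, -1.0)]:
    t = s / (1.0 - gs)
    assert abs((s + gs * t) - t) < 1e-12
print("s + g(s)t = t at t = s/(1-g(s))")
```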
\vspace{3mm} {\bf Acknowledgement.} KB's research was supported by the Institute of Mathematics of the University of Silesia (Iterative Functional Equations and Real Analysis program). JW's research was supported in part by Grant 2016/21/B/ST1/00005 of the National Science Center, Poland.
Occasions
Just Because: **CLEAR CREEK FLOWERS & GIFTS** in Crystal Springs.
Get Well Flowers: Whether it's roses, the class flower or any other arrangement, **CLEAR CREEK FLOWERS & GIFTS** can help you make this special occasion even more memorable. Need to send flowers out of town? No problem, let us help!
Keeping up with the current mortgage rates has been like riding a rollercoaster lately. The rates held fairly steady until a few months ago, then declined for several weeks in a row. A couple of weeks ago they went back up slightly, and now the current mortgage rates have dropped again to another historic low.
According to figures by Freddie Mac, the rates for a 30-year fixed-rate mortgage have fallen to a record low for the 12th time in only 16 weeks. Analysts attribute the rate drops to the uncertainty in the market. People aren’t spending like they used to and instead they are putting their money away. That means fewer borrowers on the housing market. Inflation is also playing a major role in the lowest mortgage rates in decades.
Here is a breakdown of the current mortgage rates you can expect if you qualify for the best rates available:
A 30-year fixed rate mortgage has dropped to 4.27 percent with 0.8 percent of the borrowed amount paid directly to the lender at the beginning of the loan. Freddie Mac has been tracking rates for nearly four decades and this is the lowest the rates have been in all those years. It also marks the 12th time the rates have hit a record low since the last couple weeks of June.
Mortgage rates for a 15-year fixed rate mortgage are even lower than that. That rate currently stands at 3.72 percent with 0.7 of a point paid to the lender up front. These numbers for the 15-year fixed rate loan have not been this low in nearly 20 years. It's more than half a percentage point lower than it was a year ago at this time, when the rate stood at 4.33 percent.
Rates for a five-year ARM are currently at 3.74 percent which represents the lowest it has been in five years. One-year mortgage ARMs are sitting at 3.40 percent. Rates have only been that low one other time since 1984 and that was only three weeks ago.
As low as these rates are, some analysts are expecting mortgage rates to drop even lower than they are now. Richard C. Temme of the California Association of Realtors believes Fannie Mae and Freddie Mac may offer mortgage rates as low as 3.75 percent in the next few months because the Fed is running out of options. Unfortunately, he says, lenders are too scared to take any more chances on borrowers with bad credit because of what has happened in the last few years. Hopefully something will work out soon to jump-start the housing market once again so that people are confident about buying.
TITLE: Probability that a symmetric random walk returns to $0$ exactly $k$ times in $2n$ steps
QUESTION [2 upvotes]: I'm trying to find a formula for the probability of exactly $k$ returns to $0$ in $2n$ steps of a symmetric random walk. More specifically, I am trying to show that the probability of exactly two returns equals the probability of being at $0$ at time $2n$ minus the probability of first return to $0$ at time $2n$, i.e. the probability of exactly two returns to $0$ is $p_{0,0}^{2n} - f_{2n}$, where $p_{0,0}^{2n}$ is the probability of return to $0$ at $2n$ and $f_{2n}$ is the probability of first return to $0$ at $2n$.
REPLY [4 votes]: Stated slightly differently, the formula says that
$$
P(R=2)=P(R\ge 2, X_{2n}=0), \quad\quad\quad\quad (1)
$$
with $R$ denoting the number of returns and $X_k$ denoting the position of the random walk.
For any path with $R\ge 2$, focus on the portion after the second return to $0$. We again start this final portion of the path at $0$, and thus there are as many paths ending at $0$ as there are paths that avoid $0$ altogether. (This is a well-known fact about random walks; for example, see Lemma 2.3 in my notes here.) If we count both types of paths in this way, we obtain (1).
More generally, the same argument shows that for any $k\ge 0$, we have $P(R=k)=P(R\ge k, X_{2n}=0)$.
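The identity is easy to confirm by brute force for small $n$. The following sketch (illustrative, Python) enumerates all $2^{2n}$ equally likely paths for $n=4$ and checks $P(R=2)=p_{0,0}^{2n}-f_{2n}$ by counting:

```python
from itertools import product

n = 4                                        # 2n = 8 steps, 256 paths
exactly_two = at_zero = first_return_at_2n = 0
for steps in product((-1, 1), repeat=2 * n):
    pos, returns, first = 0, 0, None
    for k, step in enumerate(steps, start=1):
        pos += step
        if pos == 0:
            returns += 1
            if first is None:
                first = k                    # time of first return
    if returns == 2:
        exactly_two += 1
    if pos == 0:
        at_zero += 1                         # contributes to p_{0,0}^{2n}
        if first == 2 * n:
            first_return_at_2n += 1          # contributes to f_{2n}

assert exactly_two == at_zero - first_return_at_2n
print("P(R=2) = p_{0,0}^{2n} - f_{2n} verified for 2n = 8")
```

Here `at_zero` $=\binom{8}{4}=70$ and `first_return_at_2n` $=10$, so both sides of the identity equal $60/256$.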
X-Wing: Imperial Aces Expansion Pack
The X-Wing: Imperial Aces Expansion Pack—also called the TIE Interceptor Aces Expansion—is a supplement to the tactical ship-to-ship combat game, X-Wing: Core Set, published by Fantasy Flight Games. The expansion features 2 detailed TIE Interceptor miniatures painted with alternate paint schemes.[2]
Contents
The Imperial Aces Expansion Pack contains two miniatures, plus maneuver dials and action tokens, as well as pilot and upgrade cards.[2]
Mission scenario
Ship components
- 181st TIE Interceptor miniature
- Emperor's Royal Guard TIE Interceptor miniature
- Plastic base (2)
- Plastic peg (4)
- Ship token (4)
- Maneuver dial (2)
- Critical Hit token
- Focus token (2)
- Evade token (2)
- ID token (#32 - 33) (6)
- Target Lock token Q/R (2)
- Stress token (2)
Cards
Pilot cards
- Kir Kanos (unique)
- Carnor Jax (unique)
- Lieutenant Lorrir (unique)
- Tetran Cowall (unique)
- Royal Guard Pilot (2)
- Saber Squadron Pilot (2)
Upgrade cards
- Targeting Computer (2)
- Hull Upgrade (2)
- Opportunist (2)
- Royal Guard TIE (2)
- Shield Upgrade (2)
- Push the Limit (2)
Notes and references
External links
- Fantasy Flight Games' official webpage for the game
- FFG News for May 2013: Earn Your Bloodstripes: Announcing the Imperial Aces Expansion for X-Wing (TM) Battles (Published 16 September 2013)
- Crush the Rebellion! The Imperial Aces Expansion Pack for X-Wing (TM) Is Now Available (Published 17 March 2014)