--- abstract: 'It is shown that the Marcinkiewicz–Zygmund strong law of large numbers holds for pairwise independent identically distributed random variables. It is proved that if $X_{1}, X_{2}, \ldots$ are pairwise independent identically distributed random variables such that $E|X_{1}|^p < \infty$ for some $1 < p < 2$, then $(S_{n}-ES_{n})/n^{1/p} \to 0$ a.s. where $S_{n} = \sum_{k=1}^{n} X_{k}$.' author: - 'Valery Korchevsky[^1]' title: | Marcinkiewicz–Zygmund Strong Law of Large Numbers\ for Pairwise i.i.d. Random Variables --- **Keywords:** strong law of large numbers, pairwise independent random variables, identically distributed random variables. Let $\{X_{n}\}_{n=1}^{\infty}$ be a sequence of independent identically distributed random variables. There are two famous theorems on the strong law of large numbers for such a sequence: The Kolmogorov theorem and the Marcinkiewicz–Zygmund theorem (see e.g. Loève [@Loeve77]). Let $S_{n} = \sum_{k=1}^{n} X_{k}$. By Kolmogorov’s theorem, there exists a constant $b$ such that $S_{n}/n \to b$ a.s. if and only if $E|X_{1}| < \infty$; if the latter condition is satisfied then $b = EX_{1}$. Now we state the Marcinkiewicz–Zygmund theorem: \[P101\] Let $\{X_{n}\}_{n=1}^{\infty}$ be a sequence of independent identically distributed random variables. If $0 < p < 2$ then the relation $E|X_{1}|^p < \infty$ is equivalent to the relation $$\label{e11} \frac{S_{n}-nb}{n^{1/p}} \to 0 \qquad \mbox{a.s.}$$ Here $b=0$ if $0 < p < 1$, and $b = EX_{1}$ if $1 \leqslant p < 2$. The aim of this work is to show that Theorem \[P101\] remains true if we replace the independence condition by the condition of pairwise independence of random variables $X_{1}, X_{2}, \ldots$ Etemadi [@Etem81] proved the Kolmogorov theorem under the pairwise independence assumption instead of the independence condition. Sawyer [@Sawyer66] showed that if $0 < p < 1$ then the condition $E|X_{1}|^p < \infty$ implies $S_{n}/n^{1/p} \to 0$ a.s. 
without any independence condition. Petrov [@Petr96] proved that if $0 < p < 2$ then relation  (with $b=0$ or $EX_{1}$ according as $p < 1$ or $p \geqslant 1$) implies that $E|X_{1}|^p < \infty$ assuming pairwise independence. In the present work we shall prove that if $1 < p < 2$ then the condition $E|X_{1}|^p < \infty$ implies $(S_{n}-ES_{n})/n^{1/p} \to 0$ a.s. under the pairwise independence assumption. There are a number of papers that contain results on the strong law of large numbers for sequences of pairwise independent identically distributed random variables. See Choi and Sung [@ChoiSung85], Li [@Li88], Martikainen [@Mart95], Sung [@Sung97; @Sung14] (recent work [@Sung14] contains more detailed review). However, results in these papers do not generalize Theorem \[P101\] to sequences of pairwise independent random variables. The aim of this paper is to prove the following result: \[T1\] Let $\{X_{n}\}_{n=1}^{\infty}$ be a sequence of pairwise independent identically distributed random variables. If $E|X_{1}|^p < \infty$ where $1 < p < 2$, then $$\label{e1001} \frac{S_{n}-ES_{n}}{n^{1/p}} \to 0 \qquad \mbox{a.s.}$$ If we combine Etemadi’s, Sawyer’s, and Petrov’s results mentioned in the previous section with Theorem \[T1\], we get a generalization of the Marcinkiewicz–Zygmund theorem (Theorem \[P101\]): \[T2\] Let $\{X_{n}\}_{n=1}^{\infty}$ be a sequence of pairwise independent identically distributed random variables. If $0 < p < 2$ then the relation $E|X_{1}|^p < \infty$ is equivalent to relation . To prove our main result we need the following lemmas. \[Lem201\] Let $\{X_{n}\}_{n=1}^{\infty}$ be a sequence of identically distributed random variables. If $E|X_{1}|^p < \infty$ where $1 < p < 2$, then $$\label{e201} \frac{\sum_{i=1}^{n} |X_{i}| \mathbb{I}_{\{|X_{i}| > n^{1/p}\}}}{n^{1/p}} \to 0 \qquad \mbox{a.s.}$$ Let $U_{n} = |X_{n}|^{p} \mathbb{I}_{\{|X_{n}| > n^{1/p}\}}$, $n \geqslant 1$. 
Note that the condition $E|X_{1}|^p < \infty$ is equivalent to the relation $$\label{e202} \sum_{n=1}^{\infty} P(|X_{1}| > n^{1/p}) < \infty.$$ Thus, we have $$\sum_{n=1}^{\infty} P(U_{n} \ne 0) = \sum_{n=1}^{\infty} P(|X_{n}| > n^{1/p}) = \sum_{n=1}^{\infty} P(|X_{1}| > n^{1/p}) < \infty.$$ Therefore, by the Borel–Cantelli lemma, $$\label{e204} U_{n} \to 0 \qquad \mbox{a.s.}$$ Moreover $$\label{e205} \frac{\sum_{i=1}^{n} |X_{i}| \mathbb{I}_{\{|X_{i}| > n^{1/p}\}}}{n^{1/p}} \leqslant \frac{\sum_{i=1}^{n} |X_{i}|^{p} \mathbb{I}_{\{|X_{i}| > n^{1/p}\}}}{n} \leqslant \frac{\sum_{i=1}^{n} |X_{i}|^{p} \mathbb{I}_{\{|X_{i}| > i^{1/p}\}}}{n}.$$ By  the right-hand side of  converges to zero almost surely, and relation  follows. \[Lem202\] Let $\{X_{n}\}_{n=1}^{\infty}$ be a sequence of identically distributed random variables. If $E|X_{1}|^p < \infty$ where $1 < p < 2$, then $$\label{e206} \frac{\sum_{i=1}^{n} E(|X_{i}| \mathbb{I}_{\{|X_{i}| > n^{1/p}\}})}{n^{1/p}} \to 0 \qquad (n \to \infty).$$ Note that for any non-negative random variable $\xi$ and ${a>0}$, $$E(\xi \mathbb{I}_{\{\xi > a\}}) = a P(\xi > a) + \int_{a}^{\infty} P(\xi > x) \, dx.$$ Hence $$\begin{gathered} \label{e208} \frac{\sum_{i=1}^{n} E(|X_{i}| \mathbb{I}_{\{|X_{i}| > n^{1/p}\}})}{n^{1/p}} = \\ = \frac{\sum_{i=1}^{n} \left(n^{1/p} P(|X_{i}| > n^{1/p}) + \int_{n^{1/p}}^{\infty} P(|X_{i}| > x) \, dx \right)}{n^{1/p}} = \\ = n P(|X_{1}| > n^{1/p}) + n^{\frac{p-1}{p}} \int_{n^{1/p}}^{\infty} P(|X_{1}| > x) \, dx = I_{1n} + I_{2n}.\end{gathered}$$ Using , we get $$\label{e209} I_{1n} = n P(|X_{1}| > n^{1/p}) \to 0 \qquad (n \to \infty).$$ From the obvious equality $$\label{e210} E|X_{1}|^p = p \int_{0}^{\infty} x^{p-1} P(|X_{1}| > x) \, dx$$ it follows that $$I_{2n} = n^{\frac{p-1}{p}} \int_{n^{1/p}}^{\infty} P(|X_{1}| > x) \, dx \leqslant \int_{n^{1/p}}^{\infty} x^{p-1} P(|X_{1}| > x) \, dx \to 0 \qquad (n \to \infty),$$ which, in conjunction with  and , proves . 
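The identity $E(\xi \mathbb{I}_{\{\xi > a\}}) = a P(\xi > a) + \int_{a}^{\infty} P(\xi > x) \, dx$ used in the proof above can be checked numerically. The following sketch (an illustration only, not part of the argument; the distribution, sample size, and seed are arbitrary choices) compares a Monte Carlo estimate of the left-hand side with the closed-form right-hand side for $\xi$ exponential with mean one, where $P(\xi > x) = e^{-x}$ and the right-hand side equals $(a+1)e^{-a}$:

```python
import math
import random

def lhs_mc(a, n=200_000, seed=0):
    """Monte Carlo estimate of E(xi * 1{xi > a}) for xi ~ Exp(1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(1.0)
        if x > a:
            total += x
    return total / n

a = 1.5
# Right-hand side of the identity with the exponential tail P(xi > x) = e^{-x}:
# a*P(xi > a) + integral_a^inf P(xi > x) dx = a*e^{-a} + e^{-a} = (a + 1)*e^{-a}.
rhs = (a + 1.0) * math.exp(-a)
print(lhs_mc(a), rhs)  # the two values agree to Monte Carlo accuracy
```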
\[Lem203\] Let $\{X_{n}\}_{n=1}^{\infty}$ be a sequence of identically distributed random variables. If $E|X_{1}|^p < \infty$ where $1 < p < 2$, then $$\label{e212} \sum_{n=1}^{\infty} \frac{1}{2^{\frac{2 n}{p}}} \sum_{k=1}^{2^{n}} E(|X_{k}|^{2} \mathbb{I}_{\{|X_{k}| \leqslant 2^{\frac{n}{p}}\}}) < \infty.$$ Note that for any non-negative random variable $\xi$ and ${a>0}$, $$E(\xi \mathbb{I}_{\{\xi \leqslant a\}}) \leqslant \int_{0}^{a} P(\xi > x) \, dx.$$ Hence, using , for some positive constants $C$ and $C_{1}$, we obtain $$\begin{gathered} \sum_{n=1}^{\infty} \frac{1}{2^{\frac{2 n}{p}}} \sum_{k=1}^{2^{n}} E(|X_{k}|^{2} \mathbb{I}_{\{|X_{k}| \leqslant 2^{\frac{n}{p}}\}}) \leqslant \\ \leqslant \sum_{n=1}^{\infty} \frac{1}{2^{\frac{2 n}{p}}} \sum_{k=1}^{2^{n}} \int_{0}^{2^{\frac{2 n}{p}}} P(|X_{k}| > x^{1/2}) \, dx \leqslant \\ \leqslant C \sum_{n=1}^{\infty} \frac{1}{2^{\frac{2 n}{p}}} \sum_{k=1}^{2^{n}} \int_{0}^{2^{\frac{n}{p}}} y P(|X_{k}| > y) \, dy \leqslant \\ \leqslant C \sum_{n=1}^{\infty} 2^{\frac{n (p-2)}{p}} \int_{0}^{2^{\frac{n}{p}}} y P(|X_{1}| > y) \, dy \leqslant \\ \leqslant C_{1} + C \sum_{n=1}^{\infty} 2^{\frac{n (p-2)}{p}} \sum_{i=1}^{n} \int_{2^{\frac{i-1}{p}}}^{2^{\frac{i}{p}}} y P(|X_{1}| > y) \, dy \leqslant \\ \leqslant C_{1} + C \sum_{i=1}^{\infty} \int_{2^{\frac{i-1}{p}}}^{2^{\frac{i}{p}}} y P(|X_{1}| > y) \, dy \sum_{n=i}^{\infty} 2^{\frac{n (p-2)}{p}} \leqslant \\ \leqslant C_{1} + C \sum_{i=1}^{\infty} 2^{\frac{i (2-p)}{p}} \int_{2^{\frac{i-1}{p}}}^{2^{\frac{i}{p}}} y^{p-1} P(|X_{1}| > y) \, dy \cdot 2^{\frac{i (p-2)}{p}} \leqslant \\ \leqslant C_{1} + C \sum_{i=1}^{\infty} \int_{2^{\frac{i-1}{p}}}^{2^{\frac{i}{p}}} y^{p-1} P(|X_{1}| > y) \, dy \leqslant \\ \leqslant C_{1} + C \int_{0}^{\infty} y^{p-1} P(|X_{1}| > y) \, dy \leqslant C_{1} + C E|X_{1}|^{p} < \infty,\end{gathered}$$ and  follows. Without loss of generality it can be assumed that $EX_{1}=0$. 
Let $$\label{e317} X_{i}^{(n)} = X_{i} \mathbb{I}_{\{|X_{i}| \leqslant n^{1/p}\}}, \qquad i \geqslant 1, \; n \geqslant 1,$$ $$\label{e318} S_{j}^{(n)} = \sum_{i=1}^{j} X_{i}^{(n)}, \qquad j \geqslant 1, \; n \geqslant 1.$$ *Step 1.* Let us prove that $$\label{e319} \frac{S_{n}-S_{n}^{(n)}}{n^{1/p}} \to 0 \qquad \mbox{a.s.}$$ We have $$\frac{|S_{n}-S_{n}^{(n)}|}{n^{1/p}} = \frac{|\sum_{i=1}^{n} X_{i} \mathbb{I}_{\{|X_{i}| > n^{1/p}\}}|}{n^{1/p}} \leqslant \frac{\sum_{i=1}^{n} |X_{i}| \mathbb{I}_{\{|X_{i}| > n^{1/p}\}}}{n^{1/p}}.$$ Application of Lemma \[Lem201\] yields . *Step 2.* Let us show that $$\label{e321} \frac{ES_{n}^{(n)}}{n^{1/p}} \to 0 \qquad (n \to \infty).$$ We have $$\begin{gathered} \frac{|ES_{n}^{(n)}|}{n^{1/p}} = \frac{|\sum_{i=1}^{n} EX_{i}^{(n)}|}{n^{1/p}} \leqslant \frac{\sum_{i=1}^{n} |EX_{i}^{(n)}|}{n^{1/p}} = \\ = \frac{\sum_{i=1}^{n} |E(X_{i}-X_{i}^{(n)})|}{n^{1/p}} \leqslant \frac{\sum_{i=1}^{n} E(|X_{i}| \mathbb{I}_{\{|X_{i}| > n^{1/p}\}})}{n^{1/p}}.\end{gathered}$$ The application of Lemma \[Lem202\] yields . 
Now we note that to conclude the proof of the theorem, it is sufficient to show that $$\label{e323} \frac{S_{n}^{(n)}-ES_{n}^{(n)}}{n^{1/p}} \to 0 \qquad \mbox{a.s.}$$ *Step 3.* Let us prove that $$\label{e324} \frac{S_{2^{n}}^{(2^{n})}-ES_{2^{n}}^{(2^{n})}}{2^{\frac{n}{p}}} \to 0 \qquad \mbox{a.s.}$$ Using Lemma \[Lem203\], by Chebyshev’s inequality, for any $\varepsilon >0$, we obtain $$\begin{gathered} \sum_{n=1}^{\infty} P \left( \left| \frac{S_{2^{n}}^{(2^{n})}-ES_{2^{n}}^{(2^{n})}}{2^{\frac{n}{p}}} \right| > \varepsilon \right) \leqslant \frac{1}{\varepsilon^{2}} \sum_{n=1}^{\infty} \frac{Var (S_{2^{n}}^{(2^{n})})}{2^{\frac{2 n}{p}}} = \frac{1}{\varepsilon^{2}} \sum_{n=1}^{\infty} \frac{\sum_{k=1}^{2^{n}} Var (X_{k}^{(2^{n})})}{2^{\frac{2 n}{p}}} \leqslant \\ \leqslant \frac{1}{\varepsilon^{2}} \sum_{n=1}^{\infty} \frac{\sum_{k=1}^{2^{n}} E(X_{k}^{(2^{n})})^{2}}{2^{\frac{2 n}{p}}} = \frac{1}{\varepsilon^{2}} \sum_{n=1}^{\infty} \frac{1}{2^{\frac{2 n}{p}}} \sum_{k=1}^{2^{n}} E(|X_{k}|^{2} \mathbb{I}_{\{|X_{k}| \leqslant 2^{\frac{n}{p}}\}}) < \infty.\end{gathered}$$ Thus, by the Borel–Cantelli lemma, relation  is proved. *Step 4.* Let us prove that $$\label{e328} \lim_{n \to \infty} \max_{2^{n}<k \leqslant 2^{n+1}} \left| \frac{\sum\limits_{i=2^{n}+1}^{k} (X_{i}^{(i)} - EX_{i}^{(i)})}{2^{\frac{n+1}{p}}} \right| = 0 \qquad \mbox{a.s.}$$ Using Lemma \[Lem203\], by Chebyshev’s inequality, for any $\varepsilon >0$, we obtain $$\begin{gathered} \sum_{n=1}^{\infty} P \left(\max_{2^{n}<k \leqslant 2^{n+1}} \left| \frac{\sum\limits_{i=2^{n}+1}^{k} (X_{i}^{(i)} - EX_{i}^{(i)})}{2^{\frac{n+1}{p}}} \right| > \varepsilon \right) \leqslant \\ \leqslant \frac{1}{\varepsilon^{2}} \sum_{n=1}^{\infty} \frac{1}{2^{\frac{2 (n+1)}{p}}} \sum_{k=1}^{2^{n+1}} E(|X_{k}|^{2} \mathbb{I}_{\{|X_{k}| \leqslant 2^{\frac{n+1}{p}}\}}) < \infty.\end{gathered}$$ The application of the Borel–Cantelli lemma yields . 
*Step 5.* We shall prove that $$\label{e332} \lim_{n \to \infty} \max_{2^{n}<k \leqslant 2^{n+1}} \frac{ \left| S_{k}^{(2^{n})} - ES_{k}^{(2^{n})} \right| }{2^{\frac{n+1}{p}}} = 0 \qquad \mbox{a.s.}$$ For $n \geqslant 1$ and $k$ such that $2^{n}<k \leqslant 2^{n+1}$ we have $$\begin{gathered} \left| S_{k}^{(2^{n})} - ES_{k}^{(2^{n})} \right| = \left| S_{k}^{(2^{n})} - ES_{k}^{(2^{n})} + S_{2^{n}}^{(2^{n})} - S_{2^{n}}^{(2^{n})} + ES_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})} \right| = \\ = \left| (S_{k}^{(2^{n})} - S_{2^{n}}^{(2^{n})}) - E(S_{k}^{(2^{n})} - S_{2^{n}}^{(2^{n})}) + (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| \leqslant \\ \leqslant \left| (S_{k}^{(2^{n})} - S_{2^{n}}^{(2^{n})}) - E(S_{k}^{(2^{n})} - S_{2^{n}}^{(2^{n})}) \right| + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| = \\ = \left| \sum_{i=2^{n}+1}^{k} X_{i} \mathbb{I}_{\{|X_{i}| \leqslant 2^{\frac{n}{p}}\}} - E(\sum_{i=2^{n}+1}^{k} X_{i} \mathbb{I}_{\{|X_{i}| \leqslant 2^{\frac{n}{p}}\}}) \right| + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| = \\ = |\sum_{i=2^{n}+1}^{k} (X_{i} \mathbb{I}_{\{|X_{i}| \leqslant i^{1/p}\}} - X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant i^{1/p}\}}) - \\ - E ( \sum_{i=2^{n}+1}^{k} (X_{i} \mathbb{I}_{\{|X_{i}| \leqslant i^{1/p}\}} - X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant i^{1/p}\}}))| + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| = \\ = |\sum_{i=2^{n}+1}^{k} (X_{i} \mathbb{I}_{\{|X_{i}| \leqslant i^{1/p}\}} - E(X_{i} \mathbb{I}_{\{|X_{i}| \leqslant i^{1/p}\}})) - \\ - \sum_{i=2^{n}+1}^{k} (X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant i^{1/p}\}} - E(X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant i^{1/p}\}}))| + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| = \\ = |\sum_{i=2^{n}+1}^{k} (X_{i}^{(i)} - EX_{i}^{(i)}) - \sum_{i=2^{n}+1}^{k} X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant i^{1/p}\}} + \sum_{i=2^{n}+1}^{k} E(X_{i} 
\mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant i^{1/p}\}})| + \\ + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| \leqslant \left| \sum_{i=2^{n}+1}^{k} (X_{i}^{(i)} - EX_{i}^{(i)}) \right| + \left| \sum_{i=2^{n}+1}^{k} X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant i^{1/p}\}} \right| + \\ + \left| \sum_{i=2^{n}+1}^{k} E(X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant i^{1/p}\}}) \right| + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| \leqslant \\ \leqslant \left| \sum_{i=2^{n}+1}^{k} (X_{i}^{(i)} - EX_{i}^{(i)}) \right| + \sum_{i=2^{n}+1}^{k} |X_{i}| \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant i^{1/p}\}} + \\ + \sum_{i=2^{n}+1}^{k} E(|X_{i}| \mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}}) + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| \leqslant \\ \leqslant \left| \sum_{i=2^{n}+1}^{k} (X_{i}^{(i)} - EX_{i}^{(i)}) \right| + \sum_{i=2^{n}+1}^{2^{n+1}} |X_{i}| \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant i^{1/p}\}} + \\ + \sum_{i=2^{n}+1}^{2^{n+1}} E(|X_{i}| \mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}}) + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| \leqslant \\ \leqslant \left| \sum_{i=2^{n}+1}^{k} (X_{i}^{(i)} - EX_{i}^{(i)}) \right| + \sum_{i=1}^{2^{n+1}} |X_{i}| \mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}} + \\ + \sum_{i=1}^{2^{n}} E(|X_{i}| \mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}}) + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right|.\end{gathered}$$ Therefore $$\begin{gathered} \max_{2^{n}<k \leqslant 2^{n+1}} \left| S_{k}^{(2^{n})} - ES_{k}^{(2^{n})} \right| \leqslant \max_{2^{n}<k \leqslant 2^{n+1}} \left| \sum_{i=2^{n}+1}^{k} (X_{i}^{(i)} - EX_{i}^{(i)}) \right| + \\ + \sum_{i=1}^{2^{n+1}} |X_{i}| \mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}} + \sum_{i=1}^{2^{n}} E(|X_{i}| \mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}}) + \left| S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})} \right|.\end{gathered}$$ The application of Lemmas \[Lem201\] and \[Lem202\] and relations  and  yields 
. *Step 6.* We shall prove that $$\label{e335} \lim_{n \to \infty} \max_{2^{n}<k \leqslant 2^{n+1}} \frac{ \left| S_{2^{n}}^{(k)} - ES_{2^{n}}^{(k)} \right| }{2^{\frac{n+1}{p}}} = 0 \qquad \mbox{a.s.}$$ For $n \geqslant 1$ and $k$ such that $2^{n}<k \leqslant 2^{n+1}$ we have $$\begin{gathered} \left| S_{2^{n}}^{(k)} - ES_{2^{n}}^{(k)} \right| = \left| S_{2^{n}}^{(k)} - ES_{2^{n}}^{(k)} + S_{2^{n}}^{(2^{n})} - S_{2^{n}}^{(2^{n})} + ES_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})} \right| = \\ = \left| (S_{2^{n}}^{(k)} - S_{2^{n}}^{(2^{n})}) - E(S_{2^{n}}^{(k)} - S_{2^{n}}^{(2^{n})}) + (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| \leqslant \\ \leqslant \left| (S_{2^{n}}^{(k)} - S_{2^{n}}^{(2^{n})}) - E(S_{2^{n}}^{(k)} - S_{2^{n}}^{(2^{n})}) \right| + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| = \\ = |\sum_{i=1}^{2^{n}} (X_{i} \mathbb{I}_{\{|X_{i}| \leqslant k^{1/p}\}} - X_{i} \mathbb{I}_{\{|X_{i}| \leqslant 2^{\frac{n}{p}}\}}) - E ( \sum_{i=1}^{2^{n}} (X_{i} \mathbb{I}_{\{|X_{i}| \leqslant k^{1/p}\}} - X_{i} \mathbb{I}_{\{|X_{i}| \leqslant 2^{\frac{n}{p}}\}}))| + \\ + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| = \\ = |\sum_{i=1}^{2^{n}} X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant k^{1/p}\}} - \sum_{i=1}^{2^{n}} E(X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant k^{1/p}\}})| + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| \leqslant \\ \leqslant |\sum_{i=1}^{2^{n}} X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant k^{1/p}\}}| + |\sum_{i=1}^{2^{n}} E(X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant k^{1/p}\}})| + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| \leqslant \\ \leqslant \sum_{i=1}^{2^{n}} |X_{i}| \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant k^{1/p}\}} + \sum_{i=1}^{2^{n}} E(|X_{i}| \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant k^{1/p}\}}) + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right| \leqslant \\ \leqslant \sum_{i=1}^{2^{n}} |X_{i}| 
\mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}} + \sum_{i=1}^{2^{n}} E(|X_{i}| \mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}}) + \left| (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})}) \right|.\end{gathered}$$ Therefore $$\begin{gathered} \max_{2^{n}<k \leqslant 2^{n+1}} \left| S_{2^{n}}^{(k)} - ES_{2^{n}}^{(k)} \right| \leqslant \sum_{i=1}^{2^{n}} |X_{i}| \mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}} + \\ + \sum_{i=1}^{2^{n}} E(|X_{i}| \mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}}) + \left| S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})} \right|.\end{gathered}$$ The application of Lemmas \[Lem201\] and \[Lem202\] and relation  yields . *Step 7.* We shall prove that $$\label{e338} \lim_{n \to \infty} \max_{2^{n}<k \leqslant 2^{n+1}} \left| \frac{S_{k}^{(k)} - ES_{k}^{(k)}}{2^{\frac{n+1}{p}}} \right| = 0 \qquad \mbox{a.s.}$$ For $n \geqslant 1$ and $k$ such that $2^{n}<k \leqslant 2^{n+1}$ we have $$\begin{gathered} \left| S_{k}^{(k)} - ES_{k}^{(k)} \right| = \\ = | \left[ (S_{k}^{(k)} - S_{2^{n}}^{(k)}) + (S_{k}^{(2^{n})} - S_{2^{n}}^{(2^{n})}) \right] - E \left[ (S_{k}^{(k)} - S_{2^{n}}^{(k)}) + (S_{k}^{(2^{n})} - S_{2^{n}}^{(2^{n})}) \right] + \\ + (S_{2^{n}}^{(k)} - ES_{2^{n}}^{(k)}) - (S_{k}^{(2^{n})} - ES_{k}^{(2^{n})}) + (S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})})| \leqslant \\ \leqslant \left| \sum_{i=2^{n}+1}^{k} X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant k^{1/p}\}} - E (\sum_{i=2^{n}+1}^{k} X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant k^{1/p}\}}) \right| + \\ + \left| S_{2^{n}}^{(k)} - ES_{2^{n}}^{(k)} \right| + \left| S_{k}^{(2^{n})} - ES_{k}^{(2^{n})} \right| + \left| S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})} \right| \leqslant \\ \leqslant \left| \sum_{i=2^{n}+1}^{k} X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant k^{1/p}\}} \right| + \left| \sum_{i=2^{n}+1}^{k} E(X_{i} \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant k^{1/p}\}}) \right| + \\ + \left| S_{2^{n}}^{(k)} - ES_{2^{n}}^{(k)} \right| + \left| S_{k}^{(2^{n})} - ES_{k}^{(2^{n})} 
\right| + \left| S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})} \right| \leqslant \\ \leqslant \sum_{i=2^{n}+1}^{2^{n+1}} |X_{i}| \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant 2^{\frac{n+1}{p}}\}} + \sum_{i=2^{n}+1}^{2^{n+1}} E(|X_{i}| \mathbb{I}_{\{2^{\frac{n}{p}} < |X_{i}| \leqslant 2^{\frac{n+1}{p}}\}}) + \\ + \left| S_{2^{n}}^{(k)} - ES_{2^{n}}^{(k)} \right| + \left| S_{k}^{(2^{n})} - ES_{k}^{(2^{n})} \right| + \left| S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})} \right| \leqslant \\ \leqslant \sum_{i=1}^{2^{n+1}} |X_{i}| \mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}} + \sum_{i=1}^{2^{n}} E(|X_{i}| \mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}}) + \\ + \left| S_{2^{n}}^{(k)} - ES_{2^{n}}^{(k)} \right| + \left| S_{k}^{(2^{n})} - ES_{k}^{(2^{n})} \right| + \left| S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})} \right|.\end{gathered}$$ Therefore $$\begin{gathered} \max_{2^{n} < k \leqslant 2^{n+1}} \left| S_{k}^{(k)} - ES_{k}^{(k)} \right| \leqslant \sum_{i=1}^{2^{n+1}} |X_{i}| \mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}} + \\ + \sum_{i=1}^{2^{n}} E(|X_{i}| \mathbb{I}_{\{|X_{i}| > 2^{\frac{n}{p}}\}}) + \max_{2^{n} < k \leqslant 2^{n+1}} \left| S_{2^{n}}^{(k)} - ES_{2^{n}}^{(k)} \right| + \\ + \max_{2^{n} < k \leqslant 2^{n+1}} \left| S_{k}^{(2^{n})} - ES_{k}^{(2^{n})} \right| + \left| S_{2^{n}}^{(2^{n})} - ES_{2^{n}}^{(2^{n})} \right|.\end{gathered}$$ The application of Lemmas \[Lem201\] and \[Lem202\] and relations ,  and  yields . Relation  implies . Relations ,  and  imply . Theorem \[T1\] is proved. [99]{} Choi, B.D., Sung, S.H.: On convergence of $(S_{n}-ES_{n})/n^{1/r}$, $1<r<2$, for pairwise independent random variables. Bull. Korean Math. Soc. **22**, 79–82 (1985) Etemadi, N.: An elementary proof of the strong law of large numbers. Z. Wahrscheinlichkeitstheor. Verw. Geb. **55**, 119–122 (1981) Li, G.: Strong convergence of random elements in Banach spaces. Sichuan Daxue Xuebao **25**, 381–389 (1988) Loève, M.: Probability Theory I. Springer-Verlag. 
New York (1977) Martikainen, A.: On the strong law of large numbers for sums of pairwise independent random variables. Stat. Probab. Lett. **25**, 21–26 (1995) Petrov, V.V.: On the strong law of large numbers. Stat. Probab. Lett. **26**, 377–380 (1996) Sawyer, S.: Maximal inequalities of weak type. Ann. Math. **84**, 157–174 (1966) Sung, S.H.: On the strong law of large numbers for pairwise i.i.d. random variables. Bull. Korean Math. Soc. **34**, 617–626 (1997) Sung, S.H.: Marcinkiewicz–Zygmund type strong law of large numbers for pairwise i.i.d. random variables. J. Theor. Probab. **27**, 96–106 (2014) [^1]: Saint-Petersburg State University of Aerospace Instrumentation, Saint-Petersburg. E-mail: `[email protected]`
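As a numerical illustration of Theorem \[T1\] (not part of the paper's argument; the Pareto index, sample sizes, and seed below are arbitrary choices), one can draw i.i.d. samples with $P(X > x) = x^{-\alpha}$ for $x \geqslant 1$, so that $E|X|^{p} < \infty$ exactly when $p < \alpha$, and watch $(S_{n} - ES_{n})/n^{1/p}$ shrink as $n$ grows:

```python
import random

def mz_ratio(n, p=1.5, alpha=1.8, seed=12345):
    """(S_n - E S_n) / n^(1/p) for n i.i.d. Pareto(alpha) samples.

    P(X > x) = x^(-alpha) for x >= 1, so E X = alpha / (alpha - 1),
    E|X|^p < infinity for p < alpha, and Var X = infinity for alpha <= 2.
    """
    rng = random.Random(seed)
    mean = alpha / (alpha - 1.0)
    # Inverse-CDF sampling: X = U^(-1/alpha) with U uniform on (0, 1].
    s = sum((1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n))
    return (s - n * mean) / n ** (1.0 / p)

for n in (10**3, 10**4, 10**5):
    # By the theorem the ratio tends to 0 a.s.; for heavy tails the
    # decay is slow, so a single run only drifts toward zero.
    print(n, mz_ratio(n))
```

Since the variance is infinite for $\alpha \leqslant 2$, the classical $\sqrt{n}$ normalization is unavailable here; the $n^{1/p}$ normalization of the theorem is the natural one.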
--- abstract: 'Rayleigh-Bénard convection is studied and quantitative comparisons are made, where possible, between theory and experiment by performing numerical simulations of the Boussinesq equations for a variety of experimentally realistic situations. Rectangular and cylindrical geometries of varying aspect ratios with experimental boundary conditions, including fins and spatial ramps in plate separation, are examined with particular attention paid to the role of the mean flow. A small cylindrical convection layer bounded laterally by a rigid wall, a fin, or a ramp is investigated, and our results suggest that the mean flow plays an important role in determining the observed wavenumber. Analytical results are developed quantifying the mean flow sources, generated by amplitude gradients, and the effect of the mean flow on the pattern wavenumber for a large-aspect-ratio cylinder with a ramped boundary. Numerical results are found to agree well with these analytical predictions. We gain further insight into the role of mean flow in pattern dynamics by employing a novel method of quenching the mean flow numerically. In simulations of a spiral defect chaos state, suddenly quenching the mean flow is found to remove the time dependence, increase the wavenumber, and make the pattern more angular in nature.' author: - 'M.R. Paul' - 'K.-H. Chiam' - 'M.C. Cross' - 'P.F. Fischer' - 'H. S. Greenside' title: 'Pattern Formation and Dynamics in Rayleigh-Bénard Convection: Numerical Simulations of Experimentally Realistic Geometries' --- Introduction {#section:introduction} ============ Rayleigh-Bénard convection has played a crucial role in guiding both theory and experiment towards an understanding of the emergence of complex dynamics from nonequilibrium systems [@cross:1993]. However, an important missing link has been the ability to make quantitative and reliable comparisons between theory and experiment. 
Nearly all previous three-dimensional convection calculations have been subject to a variety of limitations. Many simulations have been for small aspect ratios where the lateral boundaries dominate the dynamics, and, as a result, complicate the analysis. When larger aspect ratios are considered, it is often with the assumption of periodic boundaries, which is convenient numerically yet does not correspond to any laboratory experiment. As a result of algorithmic inefficiencies or the lack of computer resources, simulations have frequently not been carried out for long times. This makes it difficult to determine whether the observed behavior represents the asymptotic non-transient state, which is usually the state that is most easily understood theoretically. Fortunately, advances in parallel computers, numerical algorithms and data storage are such that direct numerical simulations of the full three-dimensional time-dependent equations are possible for experimentally realistic situations. We have performed simulations with experimentally correct boundary conditions, in geometries of varying shapes and aspect ratios, over long enough times to allow a detailed quantitative comparison between theory and experiment. Alan Newell has made numerous important contributions to the discussion of pattern formation in non-equilibrium systems. In this paper, presented in this special issue in his honor, we give a survey of our recent results that touch on many of the issues he has raised, and in turn make use of some of the tools that he has helped develop to understand our simulations. Simulation of Realistic Geometries {#section:numerical simulation} ================================== We have performed full numerical simulations of the fluid and heat equations using a parallel spectral element algorithm (described in detail elsewhere [@fischer:1997]). 
The velocity $\vec{u}$, temperature $T$, and pressure $p$ evolve according to the Boussinesq equations, $$\begin{aligned} {\sigma}^{-1} \left( {\partial}_t + \vec{u} {{\scriptscriptstyle \stackrel{\bullet}{{}}}}\vec{\nabla} \right) \vec{u} &=& -\vec{\nabla} p + RT \hat{z} + \nabla^2 \vec{u} , \label{eq:mom}\\ \left( {\partial}_t + \vec{u} {{\scriptscriptstyle \stackrel{\bullet}{{}}}}\vec{\nabla} \right) T &=& \nabla^2 T , \label{eq:energy}\\ \vec{\nabla} {{\scriptscriptstyle \stackrel{\bullet}{{}}}}\vec{u} &=& 0, \label{eq:mass}\end{aligned}$$ where $\partial_t$ indicates time differentiation, $\hat{z}$ is a unit vector in the vertical direction opposite to gravity, $R$ is the Rayleigh number, and $\sigma$ is the Prandtl number. The equations are nondimensionalized in the standard manner using the layer height $h$, the vertical diffusion time for heat ${\tau}_v \equiv h^2/\kappa$ where $\kappa$ is the thermal diffusivity, and the temperature difference across the layer $\Delta T$, as the length, time, and temperature scales, respectively. We have investigated a wide range of geometries including cylindrical and rectangular domains, which are the most common experimentally, in addition to elliptical and annular domains. Rotation about the vertical axis of the convection layer for any of these situations is also possible but will not be presented here. All bounding surfaces are no-slip, $\vec{u}=0$, and the lower and upper surfaces are held at constant temperature, $T(z=0)=1$ and $T(z=1)=0$. A variety of sidewall boundary conditions are shown in Fig. \[fig:sidewalls\]. Common thermal boundary conditions on the lateral sidewalls are insulating, $\hat{n} {{\scriptscriptstyle \stackrel{\bullet}{{}}}}\vec{\nabla}T = 0$ where $\hat{n}$ is a unit vector normal to the boundary at a given point, and conducting, $T=1-z$. 
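For orientation, the short sketch below evaluates these nondimensional groups for a hypothetical water-like layer. The material properties and layer dimensions are illustrative assumptions only, not values from the simulations, and the expression for $R$ is the standard definition $R = g \beta \Delta T h^{3} / \nu \kappa$, which is not restated in the text:

```python
# Hypothetical fluid properties (roughly water at room temperature),
# assumed purely for illustration.
g = 9.81         # gravitational acceleration [m/s^2]
beta = 2.1e-4    # thermal expansion coefficient [1/K]
nu = 1.0e-6      # kinematic viscosity [m^2/s]
kappa = 1.4e-7   # thermal diffusivity [m^2/s]
h = 1.0e-2       # layer depth [m]
dT = 5.0         # temperature difference across the layer [K]

R = g * beta * dT * h**3 / (nu * kappa)   # Rayleigh number (dimensionless)
sigma = nu / kappa                        # Prandtl number (dimensionless)
tau_v = h**2 / kappa                      # vertical diffusion time for heat [s]
print(R, sigma, tau_v)
```

For these assumed values $R$ is of order $10^{4}$ to $10^{5}$, comfortably above onset, and $\tau_v$ is on the order of minutes, which sets the unit of the nondimensional times quoted below.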
In the future we will have the flexibility of imposing a more experimentally accurate thermal boundary condition by coupling the fluid to a lateral wall of finite thickness and known finite thermal conductivity that is bounded on the outside by a vacuum. ![Four lateral sidewall boundary conditions utilized in the numerical simulations. The two thermal boundary conditions are conducting and insulating whereas the fin and ramp represent geometric conditions employed in experiments.[]{data-label="fig:sidewalls"}](./fig1.eps){width="2.5in"} In experiment, however, small sidewall thermal forcing can have a significant effect upon the resulting patterns and, as a result, finned boundaries have been employed [@daviaud:1989; @debruyn:1996; @pocheau:1997]. These are formed by inserting a very thin piece of paper or cardboard between the top and bottom plates near the sidewalls. This suppresses convection over the finned region ($R \sim h^3$ and the layer height has effectively been reduced) whereas in the bulk of the domain, i.e. the un-finned region, supercritical conditions prevail. This is accomplished numerically by extending a no-slip surface into the domain from the lateral sidewall. In all of our simulations we have chosen the vertical position of the fin to be $z=0.5$ but this is not necessary. The result is that the supercritical portion of the convection layer is bounded by a subcritical region of the same fluid and hence with the same material properties. An additional effect is that the mean flow may extend into the finned region which presents an interesting scenario for exploring the effect of mean flows upon pattern dynamics that has been investigated both experimentally and theoretically by Pocheau and Daviaud  and is discussed further below. The sidewalls can also have an orienting effect and ramped boundaries have been used as a “soft boundary" [@kramer:1982] in an effort to minimize this. 
By gradually decreasing the plate separation as the lateral sidewall is approached, the convection layer eventually becomes critical and then increasingly subcritical. Using the spectral element algorithm we are able to investigate arbitrary ramp shapes: we have chosen the precise radial ramp utilized in recent experiments  on a cylindrical convection layer. Again the mean flow is able to extend into the subcritical region. Perhaps the most common method employed experimentally to reduce the influence of sidewalls is to use a large aspect ratio $\Gamma$, where $\Gamma=r/h$ in a cylindrical domain where $r$ is the radius and $\Gamma=L/h$ in a square domain where $L$ is the length of a side. Experiments can attain aspect ratios as large as $\sim 500$. However, the majority of large aspect ratio experiments are for $\Gamma \lesssim 100$. We have performed numerical simulations using the spectral element algorithm for $\Gamma \sim 60$ as shown in Fig. \[fig:large\_gamma\]. The top panel in Fig. \[fig:large\_gamma\] illustrates the convection pattern present for the parameters of the classic paper [@ahlers:1974] where flow visualization was not possible. Although the simulation has only been performed for a short time $t_f \sim 100 \tau_v$, it appears that a slow process of domain coarsening [@cross:1995:physrevlett] is occurring. The bottom of Fig. \[fig:large\_gamma\] illustrates the time-dependent spatiotemporal chaotic state of spiral defect chaos [@morris:1993]. These, and other, interesting large aspect ratio problems can now be addressed through the use of numerical simulation. 
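One way to see why simulations still trail the largest experiments is a rough, hypothetical cost model: the number of spectral elements grows as $\Gamma^{2}$, and reaching the slow horizontal diffusion time $\tau_h = \Gamma^{2} \tau_v$ contributes another factor of $\Gamma^{2}$, so the work to reach the asymptotic state scales roughly as $\Gamma^{4}$. The sketch below calibrates this assumed scaling to the $\Gamma \sim 30$ benchmark quoted in the text (36 hours on 64 processors for $t_f \sim \tau_h$); both the exponent and the calibration are assumptions for illustration only:

```python
def cpu_hours_to_tau_h(gamma, ref_gamma=30.0, ref_cpu_hours=36.0 * 64):
    """Hypothetical Gamma^4 cost model, calibrated to the Gamma ~ 30 run."""
    return ref_cpu_hours * (gamma / ref_gamma) ** 4

for g in (30, 60, 100, 500):
    # Estimated CPU-hours to reach t_f ~ tau_h at aspect ratio g.
    print(g, round(cpu_hours_to_tau_h(g)))
```

Under this crude model an experiment-scale $\Gamma \sim 500$ domain would cost tens of thousands of times more than the $\Gamma \sim 30$ run, which is why large-$\Gamma$ studies remain the province of experiment.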
In our experience, using the spectral element algorithm on an IBM SP parallel supercomputer, it is practical to perform full numerical simulations for aspect ratios $\Gamma \sim 30$ for simulation times of $t_f \sim \tau_h$ (36 hours on 64 processors), where $\tau_h$ is the horizontal diffusion time for heat ${\tau}_h={\Gamma}^2 {\tau}_v$, and $\Gamma \sim 60$ for $t_f \sim 300 \tau_v$ (36 hours on 256 processors) for $\epsilon \lesssim 1$, $0.5 \lesssim \sigma \lesssim 10$, $\Delta t \approx 0.01$, and approximately cubic-shaped spectral elements with an edge length of unity and $11^{th}$ order polynomial expansions (where $\epsilon=(R-R_c)/R_c$ and $R_c$ is the critical value of the Rayleigh number). Of course, for smaller domains the computational requirements significantly decrease. A major benefit of numerical simulations is that a complete knowledge of the flow field is produced. For example, we have first used this to address a long-standing open question concerning chaos in small cylindrical domains. The existence of a power-law behavior in the fall-off of the power spectral density derived from a time series of the Nusselt number was not understood [@ahlers:1974]. The Nusselt number, $N(t)$, is a global measurement of the heat transport across the fluid layer. In cryogenic experiments very precise measurements of $N(t)$ are possible ; however, the flow field cannot be visualized easily. Subsequent room temperature experiments using compressed gases allowed flow visualization at the expense of precise measurements of the Nusselt number . ![Numerical simulations of two large-aspect-ratio cylindrical convection layers. The pattern is illustrated by contours of the thermal perturbation: dark regions represent cool descending fluid and light regions warm ascending fluid. Both simulations are initiated from random thermal perturbations and the lateral sidewalls are insulating. 
(Top) $\Gamma=57$, $\sigma=2.94$, $R=2169.2$ and $t=74 \tau_v$. (Bottom) A spiral defect chaos state, $\Gamma=30$, $\sigma=1.0$, $R=2950$ and $t=254 \tau_v$. []{data-label="fig:large_gamma"}](./fig2_top.eps "fig:"){width="2.5in"} ![Numerical simulations of two large-aspect-ratio cylindrical convection layers. The pattern is illustrated by contours of the thermal perturbation, dark regions represent cool descending fluid and light regions warm ascending fluid. Both simulations are initiated from random thermal perturbations and the lateral sidewalls are insulating. (Top) $\Gamma=57$, $\sigma=2.94$, $R=2169.2$ and $t=74 \tau_v$. (Bottom) A spiral defect chaos state, $\Gamma=30$, $\sigma=1.0$, $R=2950$ and $t=254 \tau_v$. []{data-label="fig:large_gamma"}](./fig2_bottom.eps "fig:"){width="1.32in"} By performing long-time simulations, on the order of many horizontal diffusion times, for the same parameters in cylindrical domains with $\sigma=0.78$ and for a range of $\epsilon$, with realistic boundary conditions, we had access to both precise measurements of the Nusselt number, Fig. \[fig:nu\_all\_cond\], and flow visualization, Fig. \[fig:snapshots\_mf\], allowing us to resolve the issue [@paul:2001]. Conducting sidewalls were used and all simulations were initiated from small, $\delta T \approx 0.01$, random thermal perturbations. Flow visualization of the simulations represented in Fig. \[fig:nu\_all\_cond\] display a rich variety of dynamics similar to what was observed in the room temperature experiments. Using simulation results, the particular dynamical events responsible for the $N(t)$ signature were identified. The power-law behavior was found to be caused by the nucleation of dislocation pairs and roll pinch-off events. Additionally, the power spectral density was found to decay exponentially for large frequencies as expected for time-continuous deterministic dynamics. 
The large frequency regime was not accessible to experiment because of the presence of the noise floor. ![Plots of the dimensionless heat transport N(t) for reduced Rayleigh number $\epsilon = 0.557,0.614,0.8,1.0,1.5$, and $3.0$, labelled (i-vi) respectively ($\Gamma = 4.72$). For cases (i-v), $\Delta t =0.01$, and for case (vi), $\Delta t = 0.005$ ($\Delta t$ is the time step).[]{data-label="fig:nu_all_cond"}](./fig3.eps){width="2.5in"} Role of mean flow {#section:mean_flow} ================= The mean flow present in these flow fields, and in general for $\sigma \lesssim 1$, plays an important role in theory  yet it is not possible to measure or visualize the mean flows in the current generation of experiments. In our simulations, however, we can quantify and visualize the mean flow. ![Flow visualization showing the pattern (solid dark lines) and shaded contours of the vorticity potential, $\zeta$, for $\epsilon=0.614$ labelled ii) in Fig \[fig:nu\_all\_cond\] ($\Gamma = 4.72$). Dark regions corresponding to negative vorticity generate clockwise mean flow and light regions to a positive vorticity generating a counter clockwise mean flow. The dark solid lines are zeros of the thermal perturbation at mid-depth illustrating the outline of the convection rolls. From top to bottom and left to right the panels are for $t = 600,605,630,650,735,785$. The dislocations glide toward the right wall focus (shown here); during the next half period, the dislocations glide to the left wall focus. This left and right alternation continues for the entire simulation.[]{data-label="fig:snapshots_mf"}](./fig4.eps){width="2.5in"} The mean flow field, $\vec{U}(x,y)$, is the horizontal velocity integrated over the depth and originates from the Reynolds stress induced by pattern distortions. Recalling the fluid equations, Eqs. (\[eq:mom\]) and (\[eq:mass\]), it is evident that the pressure is not an independent dynamic variable. 
The pressure is determined implicitly to enforce incompressibility, $$\label{eq:pressure} \nabla^2 p = -\sigma^{-1} \vec{\nabla} {{\scriptscriptstyle \stackrel{\bullet}{{}}}}\left[ \left( \vec{u} {{\scriptscriptstyle \stackrel{\bullet}{{}}}}\vec{\nabla} \right) \vec{u} \right] + R \partial_z T.$$ Focussing on the nonlinear Reynolds stress term and rewriting the pressure as $p = p_o(x,y) + \bar{p}(x,y,z)$ yields, $$\label{eq:pressure_p0} p_o(x,y) \sim \sigma^{-1} \int dx' dy' \ln \left( 1 / \left| r-r' \right| \right) \left< \vec{\nabla}' {{\scriptscriptstyle \stackrel{\bullet}{{}}}}\left[ \left( \vec{u} {{\scriptscriptstyle \stackrel{\bullet}{{}}}}\vec{\nabla} \right) \vec{u} \right] \right>_z .$$ In Eq. (\[eq:pressure\_p0\]) the $\ln(1/|r-r'|)$ is not exact; to be more precise, the finite-system Green’s function would be required. However, the long range behavior persists. This gives a contribution to the pressure that depends on distant parts of the convection pattern. The Poiseuille-like flow driven by this pressure field subtracts from the Reynolds-stress-induced flow, leading to a divergence-free horizontal flow that can be described in terms of a vertical vorticity. The mean flow is important not because of its strength; under most conditions the mean flow is substantially smaller than the magnitude of the roll flow, making it extremely difficult to quantify experimentally. The mean flow is important because it is a nonlocal effect acting over large distances (many roll widths) and changes important general predictions of the phase equation [@cross:1984]. The mean flow is driven by roll curvature, roll compression and gradients in the convection amplitude. The resulting mean flow advects the pattern, giving an additional slow time dependence. 
The mean flow present in the simulation flow fields, $\vec{U}_s(x,y)$, is formed by calculating the depth-averaged horizontal velocity, $$\vec{U}_s(x,y)=\int^1_0 \vec{u}_{\perp}(x,y,z)dz \label{eq:mean_flow_sim}$$ where $\vec{u}_{\perp}$ is the horizontal velocity field. Furthermore, it will be convenient to work with the vorticity potential, $\zeta$, defined as $$\label{eq:vorticity_potential} \nabla^2_\perp \zeta = -\hat{z} {{\scriptscriptstyle \stackrel{\bullet}{{}}}}\left( \vec{\nabla}_{\perp} \times \vec{U}_s \right) = - \omega_z$$ where $\omega_z$ is the vertical vorticity and $\nabla^2_\perp$ is the horizontal Laplacian. Six consecutive snapshots in time for the periodic dynamics shown in Fig. \[fig:nu\_all\_cond\] case ii) are illustrated in Fig. \[fig:snapshots\_mf\]. One half period is displayed illustrating the nucleation of a dislocation pair and its subsequent annihilation in the opposing wall foci. The vorticity potential, $\zeta$, is shown on a grey scale: dark regions represent negative vorticity and light regions represent positive vorticity, which generate a clockwise and a counterclockwise rotating mean flow, respectively. The quadrupole spatial structure of $\zeta$ in the first panel, i.e. four lobes of alternating positive and negative vorticity with one lobe per quadrant, generates a roll-compressing mean flow that pushes the system closer to a dislocation pair nucleation event. During dislocation climb and glide the spatial structure of the vorticity potential is more complicated until the pan-am pattern is reestablished in the final panel, a quadrupole structure of vorticity is again formed, and the process repeats. The dislocations alternate gliding left and right, resulting in a slight rocking back and forth of the entire pattern with each half period, which is visible in the different pattern orientations in the first and last panels. This alternation persists for the entire simulation. 
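Equation (\[eq:vorticity\_potential\]) is a two-dimensional Poisson problem for $\zeta$ given the depth-averaged flow. As an illustrative sketch (our own construction, not the solver used in the simulations, and on a periodic box rather than the cylindrical domain with $\zeta(r_1)=0$), it can be inverted spectrally:

```python
import numpy as np

def vorticity_potential(Ux, Uy, Lx, Ly):
    """Solve  lap(zeta) = -omega_z  with  omega_z = dUy/dx - dUx/dy
    on a periodic box, spectrally.  (The paper's cylindrical domain
    uses zeta(r1)=0 instead; a periodic box keeps the sketch short.)"""
    ny, nx = Ux.shape
    kx = 2*np.pi*np.fft.fftfreq(nx, d=Lx/nx)
    ky = 2*np.pi*np.fft.fftfreq(ny, d=Ly/ny)
    KX, KY = np.meshgrid(kx, ky)
    # vertical vorticity in Fourier space: i kx Uy_hat - i ky Ux_hat
    omega_hat = 1j*KX*np.fft.fft2(Uy) - 1j*KY*np.fft.fft2(Ux)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0               # avoid division by zero for the mean mode
    zeta_hat = omega_hat/k2      # -k^2 zeta_hat = -omega_hat
    zeta_hat[0, 0] = 0.0         # fix zeta to zero mean
    return np.real(np.fft.ifft2(zeta_hat))
```

Given $\zeta$, the mean flow follows from $\vec{U}_s = \vec{\nabla}\times(\zeta\hat{z})$, i.e. $(U_x,U_y)=(\partial_y\zeta,-\partial_x\zeta)$ in Cartesian coordinates.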
A numerical investigation of the importance of the mean flow for this small cylindrical domain was performed by implementing the ramped and finned boundary conditions. In all of these simulations the bulk region of constant $R$ extended out to a radius $r_0=4.72$. In the finned case a fin at half height occupied the region $4.72 \le r \le 7.66$. In the ramped case a radial ramp in plate separation was given by, $$h(r) = \left\{ \begin{array}{ll} 1, & \mbox{$r < r_0$} \\ 1 - {\delta_r} \left[ 1- \cos \left( \frac{r-r_0} {r_1-r_0} \pi \right) \right], & \mbox{$r \ge r_0$} \end{array}\right. \label{eq:platesep}$$ where $r_0=4.72$, $r_1=10.0$, and $\delta_r=0.15$. The different mean wavenumber behavior (using the Fourier methods discussed in [@morris:1993]) exhibited in these three different cases is shown in Fig. \[fig:wnall\_smallgamma\]. As illustrated in Fig. \[fig:rigid\_fin\_ramp\] the behavior of the vorticity potential suggests an explanation. In the simulations with a rigid sidewall, i.e. neither ramped nor finned, the vorticity potential generates a mean flow that enhances roll compression, as described above. In the case of the finned and ramped boundaries the vorticity potential and the resulting mean flow are being generated by gradients in the convection amplitude and are largely situated away from the bulk of the domain. Furthermore, the mean flow generated is strongest in the subcritical finned or ramped region away from the convection rolls. This is demonstrated by comparing the average value of the mean flow over a fraction of the bulk of the domain, $r \le 1$, where it was found that $\bar{U}_s = $ 0.23, 0.09, and 0.02 for the rigid, finned and ramped domains, respectively, and that the maximum flow field velocity is $|\vec{u}| \approx 10$. 
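For reference, the ramp profile of Eq. (\[eq:platesep\]) is straightforward to evaluate; the following sketch (function name and defaults ours, using the parameters quoted above) implements it directly:

```python
import numpy as np

def plate_separation(r, r0=4.72, r1=10.0, delta_r=0.15):
    """Radial ramp in plate separation, Eq. (platesep):
    h = 1 in the bulk (r < r0), then a smooth cosine ramp down,
    reaching h = 1 - 2*delta_r at r = r1."""
    r = np.asarray(r, dtype=float)
    ramp = 1.0 - delta_r*(1.0 - np.cos((r - r0)/(r1 - r0)*np.pi))
    return np.where(r < r0, 1.0, ramp)
```

The profile and its first derivative are continuous at $r_0$, which is why the cosine form is convenient for a gradual ramp.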
![Mean pattern wavenumber measurements for a cylindrical convection layer, $r_0=4.72$, with rigid sidewalls ($\Box$), fin ($\circ, r_1=7.66$) and a spatial ramp in plate separation ($\diamond, r_1=10.0$, $r_c=7.34$, and $\delta_r=0.15$). In all three cases the sidewalls are perfectly conducting and $\sigma=0.78$. For reference, solid lines labelled E, N, and SV indicate the approximate location of the Eckhaus, Neutral and Skewed Varicose stability boundaries for an infinite layer of straight parallel convection rolls. All patterns represented are time independent.[]{data-label="fig:wnall_smallgamma"}](./fig5.eps){width="3.0in"} ![Convection pattern and shaded contours of the vorticity potential, $\zeta$, for a cylindrical convection layer, $r_0=4.72$, with rigid sidewalls, fin ($r_1=7.66$) and a spatial ramp in plate separation ($r_1=10.0$, $r_c=7.34$ and marked with a dashed line, and $\delta_r=0.15$) shown top, middle and bottom, respectively. The convection pattern is illustrated by plotting zero contours of the thermal perturbation. In all three cases the sidewalls are perfectly conducting, $\sigma=0.78$ and $R=2804$.[]{data-label="fig:rigid_fin_ramp"}](./fig6.eps){width="2.0in"} It is attractive to pursue the case of a radial ramp in plate separation because the variation in the convective amplitude caused by the ramp can be determined analytically and the influence of a mean flow upon nearly straight rolls can be quantified [@paul:2002:pre]. Usually the mean flow can only be determined once the texture is known, and it is hard to calculate because of defects acting as sources in addition to the regions of smooth distortions. 
Near threshold an explicit expression for the mean flow, $\vec{U}$, that advects the convection pattern is [@cross:1984] $$\vec{U}(x,y) = - \gamma \vec{k} \vec{\nabla}_{\perp} {{\scriptscriptstyle \stackrel{\bullet}{{}}}}\left( \vec{k} |A|^2 \right) - \vec{\nabla}_{\perp}p_o(x,y) \label{eq:meanflow}$$ where $\gamma$ is a coupling constant given by $\gamma = 0.42 {\sigma}^{-1} (\sigma+0.34)(\sigma+0.51)^{-1}$, $|A|^2$ is the convection amplitude normalized so that the convective heat flow per unit area relative to the conducted heat flow at $R_c$ is $|A|^2R/R_c$, $p_o$ is the slowly varying pressure (see Eq. (\[eq:pressure\_p0\])) and $\vec{\nabla}_{\perp}$ is the horizontal gradient operator. The vertical vorticity is then given by the vertical component of the curl of Eq. (\[eq:meanflow\]), $${\omega}_z = \hat{z} {{\scriptscriptstyle \stackrel{\bullet}{{}}}}\left( \vec{\nabla}_{\perp} \times \vec{U} \right) = - \gamma \hat{z} {{\scriptscriptstyle \stackrel{\bullet}{{}}}}\vec{\nabla}_{\perp} \times \left[ \vec{k} \vec{\nabla}_{\perp} {{\scriptscriptstyle \stackrel{\bullet}{{}}}}\left( \vec{k} |A|^2 \right) \right]. \label{eq:wz_general}$$ Consider a cylindrical convection layer with a radial ramp in plate separation containing a field of x-rolls given by $\vec{k}=k_o \hat{x}$. The amplitude can be represented for large $\epsilon_o$, using an adiabatic approximation, as $|A|^2=\epsilon(r)/g_o$ for $\epsilon>0$ and $|A|^2=0$ for $\epsilon(r)<0$ as shown in Fig. \[fig:ramp\_vorticity\_schematic\], making the amplitude a function of radius only $|A|^2=f(r)$. ![A schematic illustrating the radial variation, for purely adiabatic conditions, of $\epsilon$ (dashed line), $|A|^2$ (solid line), and $\omega_z$ (solid line with arrow) for a cylindrical convection layer with a radial ramp in plate separation. Labelled $r_0$ and $r_c$ are the radial values where the ramp begins and where the ramp yields critical conditions, respectively. 
Note that for $r>r_c$, $|A|^2=0$ in the adiabatic approximation.[]{data-label="fig:ramp_vorticity_schematic"}](./fig7.eps){width="3.5in"} Inserting $|A|^2=f(r)$ into Eq. (\[eq:wz\_general\]) yields, after some manipulation, the following expression for the vertical vorticity, $${\omega}_z = \frac{\gamma {k_o}^2}{2} \left[ \frac{d^2|A|^2}{dr^2} - \frac{1}{r} \frac{d|A|^2}{dr} \right] \sin 2 \theta. \label{eq:wz}$$ The vorticity generated by the amplitude variation caused by the ramp is also shown in Fig. \[fig:ramp\_vorticity\_schematic\]: there is a negative vorticity for $r_0<r<r_c$ and then a delta function spike of positive vorticity at $r_c$. To correct for nonadiabaticity and to smooth $|A(r)|^2$ near $r_c$, the one-dimensional time-independent amplitude equation [@newell:1969] is solved, $$0 = \epsilon (r) A + {{\xi}_o}^2 {\cos}^2 {\theta} \frac{\partial^2A}{\partial r^2} - g_o |A|^2A, \label{eq:amp_dim}$$ where ${{\xi}_o}^2 = 0.148$, $g_o=(0.6995-0.0047\sigma^{-1}+0.0083\sigma^{-2})$ and $\epsilon(r)$ is determined by $$\epsilon(r) = \left\{ \begin{array}{ll} \epsilon_o, & \mbox{$r < r_0$} \\ \epsilon_o (h^3-h_c^3)/(1-h_c^3), & \mbox{$r \ge r_0$} \end{array}\right. \label{eq:eps}$$ where $h_c=h(r_c)$. Equation (\[eq:amp\_dim\]) is solved numerically using the boundary conditions $\partial_r A=0$ at $r = 0$, and $A=0$ at $r = r_1$. To compare these analytical results with simulation we have chosen to investigate a large-aspect-ratio cylinder with a gradual radial ramp, defined by Eq. (\[eq:platesep\]), given by the parameters: $r_0=11.31$, $r_1=20.0$, $\delta_r=0.036$, and $\sigma=0.87$. For small $\epsilon_o$ the amplitude $A^2(r)$ is unable to follow the ramp adiabatically; this nonadiabaticity results in a deviation from $\epsilon(r)/g_o$ as shown in Fig. \[fig:amp\_vort\_mf\_exp1750\]a. However, as $\epsilon_o$ increases the amplitude $A^2(r)$ follows $\epsilon(r)/g_o$ adiabatically over almost the entire ramp, except for a small kink at $r_c$. 
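Equation (\[eq:amp\_dim\]) with the boundary conditions above can be solved by simple relaxation. The sketch below (our own discretization, not the solver used for the figures) pseudo-time-steps the equation to a stationary state; since the local Rayleigh number scales as $Rh^3$, criticality $\epsilon(r_c)=0$ gives $h_c^3 = 1/(1+\epsilon_o)$:

```python
import numpy as np

def solve_amplitude(eps_o=0.171, sigma=0.87, r0=11.31, r1=20.0,
                    delta_r=0.036, theta=np.pi/4, n=400):
    """Relax the 1-d amplitude equation (amp_dim),
    0 = eps(r) A + xi0^2 cos^2(theta) A'' - g0 A^3,
    with A'(0)=0 and A(r1)=0, by explicit pseudo-time stepping
    (our own choice of method; the text only states equation and BCs)."""
    xi0sq = 0.148
    g0 = 0.6995 - 0.0047/sigma + 0.0083/sigma**2
    r = np.linspace(0.0, r1, n)
    dr = r[1] - r[0]
    # ramp profile, Eq. (platesep), and eps(r), Eq. (eps)
    h = np.where(r < r0, 1.0,
                 1.0 - delta_r*(1.0 - np.cos((r - r0)/(r1 - r0)*np.pi)))
    hc3 = 1.0/(1.0 + eps_o)           # h_c^3 from eps(r_c) = 0
    eps = np.where(r < r0, eps_o, eps_o*(h**3 - hc3)/(1.0 - hc3))
    A = np.sqrt(np.maximum(eps, 0.0)/g0)   # adiabatic initial guess
    D = xi0sq*np.cos(theta)**2
    dt = 0.2*dr*dr/D                       # stable explicit step
    for _ in range(20000):
        App = np.zeros_like(A)
        App[1:-1] = (A[2:] - 2*A[1:-1] + A[:-2])/dr**2
        A[1:-1] += dt*(eps[1:-1]*A[1:-1] + D*App[1:-1] - g0*A[1:-1]**3)
        A[0] = A[1]        # A'(0) = 0
        A[-1] = 0.0        # A(r1) = 0
    return r, A, eps, g0
```

In the bulk the relaxed solution sits on the adiabatic branch $A=\sqrt{\epsilon_o/g_o}$, while near $r_c$ the diffusive term smooths the kink, which is the nonadiabatic correction discussed above.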
The structure of $\omega_z$ depends upon this adiabaticity and is shown in Fig. \[fig:amp\_vort\_mf\_exp1750\]b where we have used the solution to Eq. (\[eq:amp\_dim\]) at $\theta=\pi/4$ in Eq. (\[eq:wz\]). This is not strictly correct since the non-adiabaticity of the amplitude is $\theta$-dependent, which will induce higher angular modes of the vorticity not given by Eq. (\[eq:wz\]). However, the calculation should give a good approximation to the main $\sin 2 \theta$ component of the vorticity. It is evident from Fig. \[fig:amp\_vort\_mf\_exp1750\]b that the vertical vorticity, calculated from the simulation results as an angular average weighted by $\sin 2 \theta$, has an octupole angular dependence (octupole in the sense of an inner and outer quadrupole) and is well approximated by theory without any adjustable parameters. The mean flow generated by these vorticity distributions is determined by solving Eq. (\[eq:vorticity\_potential\]) with the boundary condition $\zeta(r_1)=0$. The vorticity potential is related to the mean flow in polar coordinates by $(U_r,U_\theta)=(r^{-1} \partial_\theta \zeta, -\partial_r \zeta)$. The vorticity potential is expanded radially in second-order Bessel functions while maintaining the $\sin 2 \theta$ angular dependence. Of particular interest is the mean flow perpendicular to the convection rolls, $U_r(\theta=0)$ or equivalently $U_x(y=0)$, which is shown in Fig. \[fig:amp\_vort\_mf\_exp1750\]c. Again the simulation results compare well with theory even in the absence of adjustable parameters. ![Panel (a) shows the solution of Eq. (\[eq:amp\_dim\]) plotted as $A^2(r)$; shown for comparison is $\epsilon(r)/g_o$. Panel (b) compares the vertical vorticity found analytically from Eq. (\[eq:amp\_dim\]) with an angular average, weighted by $\sin 2 \theta$, of the vertical vorticity from simulation. Panel (c) compares the mean flow found analytically from Eq. (\[eq:vorticity\_potential\]) with the mean flow from simulation. 
Parameters are $r_0=11.31$, $r_c=13.20$, $r_1=20.0$, $\delta_r=0.036$, $\sigma=0.87$ and $\epsilon_o = 0.025$.[]{data-label="fig:amp_vort_mf_exp1750"}](./fig8.eps){width="2.0in"} To make the connection between mean flow and wavenumber quantitative, it is noted that the wavenumber variation resulting from a mean flow across a field of x-rolls can be determined from the one-dimensional phase equation, $$\label{eq:phase} U \partial_x \phi = D_\parallel \partial_{xx}\phi$$ where the wavenumber is the gradient of the phase, $k=\partial_x\phi$, $D_\parallel=\xi_o^2 \tau_o^{-1}$, and $\tau_o^{-1}=19.65 \sigma (\sigma+0.5117)^{-1}$ [@cross:1993]. Figure \[fig:wn\_mf\_compare2000\]a illustrates the wavenumber variation for a large-aspect-ratio simulation, $k(r)$ for $r \le r_0$, and makes evident the roll compression, $k(r=0)>k(r_0)$. Figure \[fig:wn\_mf\_compare2000\]b compares the mean flow calculated from simulation to the predicted value of the mean flow required to produce the wavenumber variation shown in Fig. \[fig:wn\_mf\_compare2000\]a. The agreement is good and the discrepancy near $r_0$, which is contained within one roll wavelength from where the ramp begins, is expected because the influence of the ramp was not included in Eq. (\[eq:phase\]). This illustrates quantitatively that it is indeed the mean flow that compresses the rolls in the bulk of the domain. ![Panel (a), the variation in the local wavenumber along the positive x-axis, or equivalently $k(r)$ at $\theta=0$. Panel (b), a comparison of the mean flow from simulation (solid line) with the predicted value (dashed line) calculated from Eq. (\[eq:phase\]) using the wavenumber variation from panel (a). 
Simulation parameters, $r_0=11.31$, $r_1=20$, $\delta_r=0.036$, $\sigma=0.87$ and $\epsilon=0.171$.[]{data-label="fig:wn_mf_compare2000"}](./fig9.eps){width="2.5in"} Finally, to better understand the connection between mean flow and pattern dynamics, especially that of spatiotemporal chaotic states exhibiting both temporal chaos and spatial disorder, we apply a novel numerical procedure to eliminate mean flow from the fluid equation, Eq. (\[eq:mom\]), thereby evolving the dynamics of an artificial fluid with no explicit contributions from mean flow. In this way, we can then obtain quantitative comparisons between the patterns generated by this artificial fluid with mean flow quenched and by the original fluid equation. We have applied this procedure to study spiral defect chaos (see bottom of Fig. \[fig:large\_gamma\]) [@morris:1993]. Numerous attempts have been made to understand how a spiral defect chaos state is formed and how it is sustained. For example, experiments [@assenheimer:1993:prl; @assenheimer:1994] have found that spirals transition to targets when the Prandtl number is increased. Owing to the fact that the magnitude of mean flow is inversely proportional to the Prandtl number, c.f. Eq. (\[eq:meanflow\]), it was believed that spiral defect chaos is a low Prandtl number phenomenon for which mean flow is essential to its dynamics. This is supported by studies of convection models based on the generalized Swift-Hohenberg equation [@xi:1993:prl; @xi:1993:pre; @xi:1995], where spiral defect chaos is not observed unless a term corresponding to mean flow is explicitly coupled to the equation. However, these observations are by themselves insufficient. For example, there are many other effects in the fluid equations that grow towards low Prandtl numbers, and there could be limitations in the Swift-Hohenberg modelling. 
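Returning to the one-dimensional phase equation (\[eq:phase\]): since $k=\partial_x\phi$, it reads $Uk = D_\parallel \partial_x k$, so a measured wavenumber profile can be inverted for the mean flow that produced it, as in Fig. \[fig:wn\_mf\_compare2000\]b. A minimal sketch (function name ours; the coefficient values are those quoted with Eq. (\[eq:phase\])):

```python
import numpy as np

def mean_flow_from_wavenumber(x, k, sigma=0.87):
    """Invert the 1-d phase equation  U dphi/dx = D_par d2phi/dx2,
    i.e.  U k = D_par dk/dx,  to predict the mean flow U(x) from a
    wavenumber profile k(x).  D_par = xi0^2 / tau0 with the standard
    coefficient values quoted in the text."""
    xi0sq = 0.148
    tau0_inv = 19.65*sigma/(sigma + 0.5117)
    D_par = xi0sq*tau0_inv
    return D_par*np.gradient(k, x)/k
```

A constant mean flow $U_0$ thus corresponds to an exponential wavenumber profile $k(x)=k(0)\exp(U_0 x/D_\parallel)$, i.e. roll compression growing toward the direction of the flow.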
We have applied our numerical procedure to this case to explicitly confirm the role of mean flow in the dynamics of spiral defect chaos. ![Spiral defect chaos (left) and angular textures (right) obtained by quenching mean flow. The left panel is at $t=152\tau_v$ and displays the pattern upon which the mean flow is quenched, the right panel is at $t=320\tau_v$. In both cases, $R=2950, \sigma=1.0$ and the lateral sidewalls are insulating. We see that the spiral arms transition to angular textures when mean flow is quenched. Also, the quenched state is stationary.[]{data-label="fig:quench"}](./fig10_left.eps "fig:"){width="1.5in"} ![Spiral defect chaos (left) and angular textures (right) obtained by quenching mean flow. The left panel is at $t=152\tau_v$ and displays the pattern upon which the mean flow is quenched, the right panel is at $t=320\tau_v$. In both cases, $R=2950, \sigma=1.0$ and the lateral sidewalls are insulating. We see that the spiral arms transition to angular textures when mean flow is quenched. Also, the quenched state is stationary.[]{data-label="fig:quench"}](./fig10_right.eps "fig:"){width="1.5in"} Recalling that we can approximate mean flow to be the depth-averaged horizontal velocity, c.f. Eq. (\[eq:mean\_flow\_sim\]), we can first depth-average the horizontal components of the fluid equation, Eq. (\[eq:mom\]), to obtain a dynamical equation for the mean flow $\vec{U}_s$: $$\begin{aligned} \lefteqn{\sigma^{-1} \partial_t \vec{U}_s + \sigma^{-1} \int_0^1 dz (\vec{u}{{\scriptscriptstyle \stackrel{\bullet}{{}}}}\vec{\nabla}) \vec{u}_\perp =} \nonumber \\ & & -\vec{\nabla}_\perp \int_0^1 dz p + \nabla_\perp^2 \vec{U}_s + \int_0^1 dz \partial_{zz} \vec{u}_\perp.\end{aligned}$$ In this equation, the term $-\nabla_\perp \int_0^1 dz p$ can be absorbed into the nonlinear Reynolds stress term via Eq. (\[eq:pressure\_p0\]) and so will be ignored henceforth. 
The resulting equation is then a diffusion equation in $\vec{U}_s$ with a source term $\vec{F}_s \equiv \int_0^1 dz (\vec{u}{{\scriptscriptstyle \stackrel{\bullet}{{}}}}\nabla) \vec{u}_\perp -\sigma \int_0^1 dz \partial_{zz} \vec{u}_\perp$. If this source term were not present, then $\vec{U}_s$, being the solution to a diffusion equation, would evolve to zero with an effective diffusivity $\sigma$, the Prandtl number. Thus, the role of $\vec{F}_s$ is to act as a generating source for the mean flow $\vec{U}_s$. Subtracting it from the fluid equation, Eq. (\[eq:mom\]), then results in the mean flow being eliminated. In practice, we found that it is necessary to actually subtract $\vec{F}_s$ multiplied by a constant to ensure that the magnitude of mean flow becomes zero. This can be understood in terms of the necessity to correct for the fact that Eq. (\[eq:mean\_flow\_sim\]) is only an approximation to the flow field that advects the rolls given by $$\vec{U} = \int_0^1 dz g(z) \vec{u}_\perp$$ where $g(z)$ is a weighting function depending on the full nonlinear structure of the rolls. This is discussed further elsewhere [@chiam:2002]. We have carried out this procedure by introducing the term $\vec{F}_s$ to the right-hand side of the fluid equation after a spiral defect chaotic state becomes fully developed, typically after about one horizontal diffusion time starting from random thermal perturbations as the initial condition. We see that the spirals immediately, on the order of a vertical diffusion time, “straighten out” to form angular chevron-like textures; see Fig. \[fig:quench\]. Unlike spiral defect chaos, these angular textures are stationary (with the exception of the slow motion of defects such as the gliding of dislocation pairs). Thus, we have shown that when mean flow is quenched via the subtraction of the term $\vec{F}_s$ from the fluid equation, spiral defect chaos ceases to exist. 
We have further quantified the differences between spiral defect chaos and the angular textures. We mention here briefly one of the results: by comparing the wavenumber distribution for both sets of states, we have observed that the mean wavenumber approaches the unique wavenumber possessed by axisymmetric patterns asymptotically far away from the center [@buell:1986:axi]. (The axisymmetric pattern, by symmetry, does not have mean flow components.) We discuss this as well as other results in a separate article [@chiam:2002]. Conclusion {#section:conclusion} ========== Full numerical simulations of Rayleigh-Bénard convection in cylindrical and rectangular shaped domains for a range of aspect ratios, $5 \lesssim \Gamma \lesssim 60$, with experimentally realistic boundary conditions, including rigid, finned and spatially ramped sidewalls, have been performed. These simulations provide us with a complete knowledge of the flow field, allowing us to quantitatively address some interesting open questions. In this paper we have emphasized the exploration of the mean flow. The mean flow is important in a theoretical understanding of the pattern dynamics, yet is very difficult to measure in experiment, making numerical simulations attractive to close this gap. By imposing different sidewall boundary conditions, we find the mean flow to be important in small cylindrical domains. Analytical results are developed for a large-aspect-ratio cylinder with a radial ramp in plate separation. Numerical results of the vertical vorticity and the mean flow agree with these predictions. Furthermore, the wavenumber behavior predicted using the mean flow in a one-dimensional phase equation also agrees with the results of simulation. This allows extrapolation of the analysis to larger aspect ratios. Lastly, we utilize the control and flexibility offered by numerical simulation to investigate a novel method of numerically quenching the mean flow. 
We apply this to a spiral defect chaos state and find that the time dependent pattern becomes time independent, angular in nature, and that the pattern wavenumber becomes larger. These quantitative comparisons illustrate the benefit of performing numerical simulations for realistic geometries and boundary conditions as a means to create quantitative links between experiment and theory. We are grateful to G. Ahlers for helpful discussions. This research was supported by the U.S. Department of Energy, Grant DE-FT02-98ER14892, and the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing Research, U.S. Department of Energy, under Contract W-31-109-Eng-38. We also acknowledge the Caltech Center for Advanced Computing Research and the North Carolina Supercomputing Center.
--- abstract: | We describe the determination of the strong coupling constant $\alpha_s(M_Z^2)$ and of the charm-quark mass $m_c(m_c)$ in the $\overline{\rm MS}$-scheme, based on the QCD analysis of the unpolarized World deep-inelastic scattering data. At NNLO the values of $\alpha_s(M_Z^2)=0.1134\pm 0.0011(\text{exp})$ and $m_c(m_c)=1.24 \pm 0.03 (\text{exp})\,^{+0.03}_{-0.02} (\text{scale})\,^{+0.00}_{-0.07} (\text{th})$ are obtained and are compared with other determinations, also clarifying discrepancies. address: - | Deutsches Elektronen-Synchrotron DESY, Platanenallee 6\ Zeuthen, D–15738, Germany\ [email protected] - | Deutsches Elektronen-Synchrotron DESY, Platanenallee 6\ Zeuthen, D–15738, Germany\ [email protected] - | II. Institut für Theoretische Physik, Universität Hamburg, Luruper Chaussee 149\ Hamburg, D-22761, Germany\ [email protected] author: - 'SERGEY ALEKHIN[^1]' - JOHANNES BLÜMLEIN - 'SVEN-OLAF MOCH' title: | [                                     DESY 13-121, DO-TH 13/17, SFB/CPP-13-44, LPN13-041 ]{}\ DETERMINATION OF $\alpha_s$ AND $m_c$ IN DEEP-INELASTIC SCATTERING --- Introduction ============ The process of lepton-nucleon deep-inelastic scattering (DIS) is a clean source of basic information about the hadron substructure in terms of the parton model. Moreover, the QCD corrections to the parton model provide the connection of the DIS structure functions with the parameters of the QCD Lagrangian, in particular to the strong coupling $\alpha_s$ and the heavy-quark masses. The higher order QCD corrections are manifest in the scaling violations of the structure functions w.r.t. the virtual photon momentum transfer $Q^2$. This phenomenon was observed shortly after the discovery of the partonic structure of the nucleon and provided one of the first constraints on $\alpha_s$. 
With the dramatic improvement in the accuracy of the lepton-nucleon DIS data and the progress in the theoretical calculations, the value of $\alpha_s$ can in principle be determined with an accuracy of $O(1\%)$ [^2]. The scaling violations, however, are also sensitive to the parton distribution functions (PDFs). Therefore the determination of $\alpha_s$ has to be performed simultaneously with the nucleon PDFs in global fits. Furthermore, an elaborate theoretical description of the QCD scaling violations is available for the leading-twist terms only. In practice this requires a careful isolation of the higher-twist effects and/or their independent phenomenological parameterization. Another important aspect of DIS phenomenology is related to the $c$- and $b$-quark contributions. The heavy-quark production cross section is sensitive to the heavy-quark masses. Therefore the DIS data provide a constraint on the $c$- and $b$-quark masses, $m_{c,b}$. The structure functions of the semi-inclusive process with the heavy quark in the final state are particularly useful for this purpose, although the data on inclusive structure functions are competitive with the semi-inclusive ones owing to their much better accuracy. A major pitfall arising in the analysis of heavy quark production is related to accounting for the higher-order QCD corrections. Due to the two scales appearing in the problem, the calculations are quite involved. Therefore the NNLO corrections to the heavy-quark lepto-production are at present known only in partial form [@Kawamura:2012cr]. The problem of the high-order corrections is bypassed in the so-called variable-flavor-number (VFN) scheme, which assumes zero mass for the $c$- and $b$-quarks. In this approximation the available high-order massless DIS Wilson coefficients can be employed for the calculation of the heavy-quark lepto-production rates. 
As is well known, the VFN approximation is obviously inapplicable at scales $Q^2 \sim m_{c,b}^2$ and it is commonly supplemented by [*modeling*]{} of the Wilson coefficient at low $Q^2$ in order to arrive at a general-mass VFN (GMVFN) scheme. In the present paper we essentially focus on the determination of $\alpha_s$ and $m_c$ based on the fixed-flavor-number (FFN) scheme. Here the mass effects are taken into account on field-theoretic grounds, free of model ambiguities appearing in the so-called GMVFN schemes. Moreover, we employ the massive Wilson coefficients derived using the running-mass definition, which provide improved perturbative stability of the heavy-quark production rate [@Alekhin:2010sv]. The paper is organized as follows. In Section \[sec:basics\] we outline the theoretical basis of the analysis and describe the data used. Sections \[sec:alphas\] and \[sec:mass\] contain our results on the determination of $\alpha_s$ and the $c$-quark mass, respectively. In Section \[sec:vfn\] we compare the FFN and VFN schemes with particular emphasis on the uncertainties in the determination of $\alpha_s$ and $m_c$. Theoretical and Experimental Ingredients of the Analysis {#sec:basics} ======================================================== Our determination of $\alpha_s$ and $m_c$ is based on the QCD analysis of the DIS data obtained in fixed-target experiments and at the HERA collider. Only the proton- and deuteron-target samples are selected, which allows us to minimize the impact of the nuclear corrections on the results [^3]. The DIS data are combined with the ones on the fixed-target Drell-Yan process, providing a supplementary constraint on the PDFs and facilitating the separation of the valence and sea quark distributions. 
\[fig:scans\] The main version of our analysis is performed at NNLO, using the three-loop anomalous dimensions in the PDF evolution and the corresponding Wilson coefficients for the light-flavor DIS structure functions and the Drell-Yan process. For the neutral-current (NC) heavy-quark contribution we employ the approximate NNLO Wilson coefficients [@Kawamura:2012cr]. These terms were derived by combining results obtained with the soft-gluon resummation technique and in the high-energy limit of the DIS structure functions [@Catani:1990eg]. These two approaches provide a good approximation at kinematics close to the threshold of heavy-quark production and far beyond the threshold, respectively. Between these two regimes the constraints coming from the available NNLO massive operator-matrix-element (OME) Mellin moments [@Bierenbaum:2009mv] are employed. The remaining uncertainty in the NNLO Wilson coefficient obtained in this way is quantified by its margins, A and B. To find the best shape of the NNLO term preferred by the data we use a linear interpolation between these margins, $$c_{2}^{\,(2)} \,=\, (1-d_N) c_{2}^{\,(2),A} + d_N c_{2}^{\,(2),B} \, , \label{eq:inter}$$ and fit the interpolation parameter $d_N$ to the data simultaneously with the PDF parameters, $\alpha_s$ and $m_c$. The charged-current (CC) heavy-quark production is calculated with account of the NLO corrections [@Gottschalk:1980rv; @Gluck:1996ve; @Blumlein:2011zu], which are the highest-order ones presently available. Both the NC and CC massive Wilson coefficients used in our analysis are taken with the $\overline{\rm MS}$ definition of the heavy-quark mass. Compared to the pole-mass case, this choice is perturbatively more stable [@Alekhin:2010sv]. With the running-mass definition this contribution basically vanishes, since the mass can be defined at the typical renormalization/factorization scale of the process considered.
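The interpolation in Eq. (\[eq:inter\]) is a one-parameter convex combination of the two margins. A minimal Python sketch (the function name and scalar inputs are illustrative; in the fit the coefficients are functions of $x$ and $Q^2$):

```python
def c2_nnlo(c2_a, c2_b, d_n):
    """Interpolated NNLO massive Wilson coefficient, Eq. (inter):
    d_n = 0 reproduces margin A, d_n = 1 reproduces margin B, and
    values outside [0, 1] extrapolate slightly beyond the margins."""
    return (1.0 - d_n) * c2_a + d_n * c2_b
```

In the fit $d_N$ is simply one more free parameter; the value $d_N=-0.1$ quoted in Section \[sec:mass\] thus lies marginally outside option A.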
The leading-twist terms provided by the QCD-improved parton model are not sufficient at small $Q^2$ and/or small final-state hadronic mass $W$, where parton correlations cannot be neglected. To account for these we add, on top of the leading-twist term, the twist-4 contribution to the DIS structure functions $F_{2,T}$, parameterized in a model-independent form using spline interpolation [^4]. The twist-4 spline coefficients are fitted to the data together with the other parameters. This is particularly important for the case of $\alpha_s$ in view of its strong correlation with the higher-twist terms.

Strong Coupling Constant {#sec:alphas}
========================

\[fig:comp\] The strong coupling constant $\alpha_s$ can be determined by comparing the $Q^2$-dependence of the DIS cross section measurements with the predictions based on the QCD-improved parton model. In our analysis the value of $\alpha_s$ is obtained simultaneously with the nucleon PDFs and the twist-4 terms. This allows us to take into account the correlation of $\alpha_s$ with the other parameters affecting the $Q^2$-dependence of the data. The central value of $\alpha_s$ obtained in this way depends on the perturbative order and is reduced going from NLO to NNLO. In particular, the ABM11 fit [@Alekhin:2012ig] yields $$\begin{aligned} \label{eq:alphas-nlo} \alpha_s(M_Z^2) \,\,=& 0.1180\, \pm 0.0012 (\text{exp})\, \hspace*{30mm} &{\rm NLO} \nonumber \, , \\ \label{eq:alphas} \alpha_s(M_Z^2) \,\,=& 0.1134\, \pm 0.0011 (\text{exp})\, \hspace*{30mm} &{\rm NNLO} \, .\end{aligned}$$ The values of $\alpha_s$ preferred by each particular DIS data set are displayed in Fig. \[fig:scans\] by means of the $\chi^2$-profiles obtained in variants of the ABM11 fit with $\alpha_s$ fixed at values in the range $0.104\div0.130$. The HERA and BCDMS data sets have a similar $\chi^2$-shape, with minima around the values of Eq. (\[eq:alphas\]), while the SLAC and NMC data pull the value of $\alpha_s$ somewhat up and down, respectively.
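A $\chi^2$-profile scan of this kind is conventionally summarized by a parabolic approximation near the minimum: the minimum of a quadratic fit to $\chi^2(\alpha_s)$ gives the preferred value, and the $\Delta\chi^2=1$ condition gives the experimental error. A sketch with numpy, using a synthetic profile rather than the actual $\chi^2$ tables of the fit:

```python
import numpy as np

def parabolic_minimum(alphas, chi2):
    """Fit chi2(alpha_s) with a parabola; return (best value,
    one-sigma error from the Delta chi2 = 1 condition)."""
    a, b, _ = np.polyfit(alphas, chi2, 2)
    best = -b / (2.0 * a)          # vertex of the parabola
    err = 1.0 / np.sqrt(a)         # chi2 rises by 1 at best +/- err
    return best, err

# synthetic scan mimicking the range 0.104-0.130 used in the fit variants
grid = np.linspace(0.104, 0.130, 14)
chi2 = 2000.0 + ((grid - 0.1134) / 0.0011) ** 2   # toy profile
print(parabolic_minimum(grid, chi2))
```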
Note that the two latter sets are sensitive to higher-twist terms due to substantial small-$Q^2$ contributions in these samples. In contrast, the HERA and BCDMS data are far less sensitive to higher-twist terms, and it is worth noting that in the variant of the ABM11 fit excluding the SLAC and NMC data and setting the higher-twist terms to zero we find $\alpha_s(M_Z^2)=0.1133\, \pm 0.0011 (\text{exp.})$ at NNLO. The good agreement of this value with Eq. (\[eq:alphas\]) substantiates the consistency between different data sets in our analysis once the higher-twist terms are taken into account. Moreover, this cross-check confirms that the combination of the BCDMS and HERA data can be used for an accurate determination of $\alpha_s(M_Z^2)$, since these two data sets provide complementary constraints on the PDFs [@Adloff:2000qk], see also [@Alekhin:2013kla]. The NNLO value of $\alpha_s$ in Eq. (\[eq:alphas\]) is in good agreement with the results of the JR analysis [@JRnew] and the recent CTEQ determination [@Gao:2013xoa], while the MSTW [@Martin:2009bu] and NNPDF [@Ball:2011us] groups report substantially larger values, cf. Fig. \[fig:comp\]. The discrepancy with MSTW can be explained in part by the impact of the Tevatron jet data, which pull the value of $\alpha_s(M_Z^2)$ up by $0.001\div 0.002$, depending on the details of the fit. Furthermore, changing our fit ansatz in the direction of the MSTW and NNPDF ones, we approach their value of $\alpha_s$. In particular, dropping the higher-twist terms simultaneously with the additional cut of $W^2>12.5~{\rm GeV}^2$ imposed by MSTW and NNPDF, we obtain $\alpha_s(M_Z^2)=0.1191\, \pm 0.0006 (\text{exp.})$. In a similar way, disregarding the error correlations in the HERA and NMC data, as in the MSTW analysis, we obtain $\alpha_s(M_Z^2)$ shifted by $+0.0026$, in the direction of the MSTW value. Note that in this context the recently updated JR analysis [@JRnew] treats the higher twist properly.
The CTEQ analysis [@Gao:2013xoa] seems to be less sensitive to the impact of the higher-twist contributions than those by MSTW and NNPDF, due to a more stringent cut on $Q^2$.

The Mass of the Charm Quark {#sec:mass}
===========================

The sensitivity to the charm-quark mass $m_c$ in our analysis arises essentially from the data on NC and CC inclusive DIS [@Aaron:2009aa] and on CC semi-inclusive charm production in DIS [@Bazarko:1994tt; @Goncharov:2001qe], with the most significant experimental constraint on $m_c$ coming from the semi-inclusive charm-production HERA data [@Abramowicz:1900rp]. The latter sample combines the statistics of the H1 and ZEUS experiments obtained for different $c$-quark decay channels. The combination was performed similarly to the case of the inclusive HERA data [@Aaron:2009aa] and reduces the systematic errors of each experiment through a cross-calibration of the experiments. The FFN scheme provides a good description of the semi-inclusive HERA data up to the largest $Q^2$-values covered, cf. Fig. \[fig:herac\], with a value of $\chi^2/NDP=61/52$ at NNLO, where NDP denotes the number of data points. The $\overline{\rm MS}$ values of $m_c$ found in the analysis of [@Alekhin:2012vu] are $$\begin{aligned} \label{eq:mcres-nlo} m_c(m_c) \,\,=& 1.15\, \pm 0.04 (\text{exp})\,^{+0.04}_{-0.00} (\text{scale}) \hspace*{30mm} &{\rm NLO} \, , \\ \label{eq:mcres-nnlo} m_c(m_c) \,\,=& 1.24\, \pm 0.03 (\text{exp})\,^{+0.03}_{-0.02} (\text{scale})\,^{+0.00}_{-0.07} (\text{th}), \hspace*{14mm} &{\rm NNLO_\text{approx}} \, ,\end{aligned}$$ at NLO and NNLO, respectively, see also [@Alekhin:2012un]. The experimental accuracy of 30 MeV obtained at NNLO is quite competitive with other determinations of $m_c$ based on $e^+e^-$ data, and the central value of Eq. (\[eq:mcres-nnlo\]) is in good agreement with the world average [@Beringer:1900zz]. The scale error in Eqs.
(\[eq:mcres-nlo\]) and (\[eq:mcres-nnlo\]) is obtained by varying the factorization scale by factors of $1/2$ and $2$ around the nominal value of $\sqrt{m_c^2+\kappa Q^2}$, where $\kappa=4$ for NC and $\kappa=1$ for CC heavy-quark production, respectively. In the NNLO case an additional error related to the uncertainty in the massive Wilson coefficients contributes. The value of Eq. (\[eq:mcres-nnlo\]) is obtained for the interpolation parameter $d_N=-0.1$ preferred by the fit, roughly corresponding to option A of the Wilson coefficients of Ref. [@Kawamura:2012cr]. Meanwhile, option B is clearly excluded by the data, with $\chi^2$/NDP=115/52. Therefore the uncertainty due to the missing NNLO massive terms is estimated as the variation between options A and (A+B)/2, which yields the value of 70 MeV in Eq. (\[eq:mcres-nnlo\]). The value of $m_c(m_c)$ obtained in our analysis demonstrates remarkable stability w.r.t. $\alpha_s(M_Z^2)$. Performing variants of our analysis with the value of $\alpha_s$ fixed in a wide range around the best value preferred by the data, we find a variation of $m_c(m_c)$ in the range of 10-20 MeV, depending on the order, cf. Fig. \[fig:alpha\]. The NLO value of $m_c(m_c)$ obtained in our analysis is somewhat lower than the value of $m_c(m_c)=1.26\pm0.05~(\text{exp})~{\rm GeV}$ from the analysis based on the HERA data only [@Abramowicz:1900rp]. To understand the difference we checked the effects of a cut at $Q^2=3.5~{\rm GeV}^2$, as in the HERA fit, and of excluding the semi-inclusive Tevatron data [@Bazarko:1994tt; @Goncharov:2001qe]. As a result we obtain shifts in $m_c(m_c)$ of $+30~\text{MeV}$ and $+40~\text{MeV}$, respectively. The remaining discrepancy should be attributed to the particularities of the HERA PDFs. The value of $m_c(m_c)$ was recently also determined by the CTEQ collaboration [@Gao:2013wwa]. In contrast to our case, this determination is based on the S-ACOT-$\chi$ prescription as the GMVFN scheme.
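The scale-variation prescription behind the errors in Eqs. (\[eq:mcres-nlo\]) and (\[eq:mcres-nnlo\]) amounts to evaluating the fit at three factorization scales. A minimal sketch (function names are ours, the numbers are the ones quoted above):

```python
import math

def nominal_scale(Q2, mc, nc=True):
    """Nominal factorization scale sqrt(mc^2 + kappa*Q2),
    with kappa = 4 for NC and kappa = 1 for CC heavy-quark production."""
    kappa = 4.0 if nc else 1.0
    return math.sqrt(mc * mc + kappa * Q2)

def scale_band(Q2, mc, nc=True):
    """Scales entering the error estimate: factors of 1/2 and 2
    around the nominal value."""
    mu = nominal_scale(Q2, mc, nc)
    return 0.5 * mu, mu, 2.0 * mu
```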
Furthermore, the $\overline{\rm MS}$ coefficients of Ref. [@Gao:2013wwa] are obtained by a straightforward substitution of the pole- and running-mass matching relation into the pole-mass coefficients. The expressions obtained in this way correspond to a mixed order in $\alpha_s$. Moreover, the advantage of this approach is not evident in view of the poor perturbative convergence of the mass matching relation. The central CTEQ value of $m_c(m_c)=1.12 ^{+0.11}_{-0.17}~{\rm GeV}$ is lower than the world average, while the CTEQ errors are much larger than those in Eqs. (\[eq:mcres-nlo\]) and (\[eq:mcres-nnlo\]) due to the impact of the uncertainty in the GMVFN scheme modeling.

VFN Uncertainties {#sec:vfn}
=================

The choice of the factorization scheme plays an essential role in the analysis of existing DIS data due to important constraints coming from the small-$x$ region, where the heavy-quark contribution is numerically large. While our analysis is based on the FFN scheme, many other groups employ different variants of the GMVFN scheme, which differ in the [*modeling*]{} of the low-$Q^2$ region. The spread between these variants is rather substantial and thus implies a corresponding uncertainty in the basic parameters determined in these GMVFN fits. In particular, the values of $m_c$ determined from a combination of the inclusive and semi-inclusive HERA data with the different versions of the ACOT and RT prescriptions for the VFN scheme demonstrate a spread of 400 MeV [@Abramowicz:1900rp]. There are also sources of VFN-scheme uncertainty that are common to all these prescriptions. Firstly, the matching of the heavy-quark PDFs is commonly performed at the factorization scale $\mu$ equal to $\mu_0=m_{c}$ (resp. $m_b$). Clearly, at these scales neither of the heavy flavors can be treated as massless.
The matching point $\mu_0$ is not fixed by theory, and in principle it can vary in a wide range, being an artefact of the description that does not contribute to the observables according to the renormalization group equations. Additional uncertainties emerge for the 4(5)-flavor PDFs in the NNLO analysis. They are commonly matched to the 3(4)-flavor ones using the [*NLO matching conditions*]{}, since the NNLO OMEs are not yet known in complete form [^5]. However, to provide consistency with the NNLO Wilson coefficients the evolution of these PDFs is performed in the NNLO approximation, which introduces an additional uncertainty due to missing higher-order corrections into the analysis. At the same time, the evolution of the 4(5)-flavor PDFs in the VFN scheme leads to a resummation of the terms $\sim \ln(\mu^2)$, which in part reproduce the higher-order corrections but are known not to be dominant. The relation between the resummation effects and the VFN evolution uncertainties is illustrated in Fig. \[fig:pdfder\] by a comparison of the $\mu$-derivatives of the $c$-quark distribution calculated in different ways. In one case the distributions are matched at the scale $\mu_0$ using the fixed-order-perturbation-theory (FOPT) matching conditions and then evolved from $\mu_0$ with the massless splitting functions. In the other case they are calculated with the FOPT matching conditions at all scales. The difference between these two cases does not demonstrate a significant rise with $\mu$. The only exception is observed at $x\lesssim 0.0001$ and at scales outside of the kinematics probed in experiment; therefore it cannot be attributed to the impact of the log-term resummation. In contrast, there is a substantial difference between the derivatives calculated at NLO and at NNLO\*, i.e. the combination of the NLO matching with the NNLO evolution.
This difference yields an estimate of the uncertainty in the VFN scheme due to the missing higher orders, which is obviously larger than the resummation effects. Checking the impact of this uncertainty on the value of $\alpha_s(M_Z^2)$, in combination with a variation of the matching point for the 4-flavor PDFs in the range of $1.2\div 1.5~{\rm GeV}$, we find a value of $\pm 0.001$. This is comparable to the experimental uncertainty and makes the VFN schemes uncompetitive with the FFN one in the precision determination of $\alpha_s(M_Z^2)$.

Acknowledgments {#acknowledgments .unnumbered}
===============

We thank P. Jimenez-Delgado and E. Reya for discussions. This work has been supported in part by Helmholtz Gemeinschaft under contract VH-HA-101 ([*Alliance Physics at the Terascale*]{}), DFG Sonderforschungsbereich/Transregio 9 and by the European Commission through contract PITN-GA-2010-264564 ([*LHCPhenoNet*]{}).

[0]{} S. Bethke [*et al.*]{}, [Workshop on Precision Measurements of $\alpha_s$]{}, arXiv:1110.0016 \[hep-ph\]. H. Kawamura, N. A. Lo Presti, S. Moch and A. Vogt, [*Nucl. Phys. B*]{} [**864**]{}, 399 (2012). S. Alekhin and S. Moch, [*Phys. Lett. B*]{} [**699**]{}, 345 (2011). S. Alekhin, J. Blümlein and S. Moch, [*Phys. Rev. D*]{} [**86**]{}, 054009 (2012). S. Catani, M. Ciafaloni and F. Hautmann, [*Nucl. Phys. B*]{} [**366**]{}, 135 (1991). I. Bierenbaum, J. Blümlein and S. Klein, [*Nucl. Phys. B*]{} [**820**]{}, 417 (2009). T. Gottschalk, [*Phys. Rev. D*]{} [**23**]{}, 56 (1981). M. Glück, S. Kretzer and E. Reya, [*Phys. Lett. B*]{} [**380**]{}, 171 (1996) \[Erratum-ibid. B [**405**]{}, 391 (1997)\]. J. Blümlein, A. Hasselhuhn, P. Kovacikova and S. Moch, [*Phys. Lett. B*]{} [**700**]{}, 294 (2011). C. Adloff [*et al.*]{} \[H1 Collaboration\], [*Eur. Phys. J. C*]{} [**21**]{}, 33 (2001). S. Alekhin, J. Blümlein and S. Moch, arXiv:1303.1073 \[hep-ph\]. P. Jimenez-Delgado and E. Reya, private communication. J.
Gao [*et al.*]{}, arXiv:1302.6246 \[hep-ph\]. A. Martin, W. J. Stirling, R. Thorne and G. Watt, [*Eur. Phys. J. C*]{} [**64**]{}, 653 (2009). R. D. Ball [*et al.*]{}, [*Phys. Lett. B*]{} [**707**]{}, 66 (2012). A. Abulencia [*et al.*]{}, [*Phys. Rev. D*]{} [**75**]{}, 092006 (2007). V. Abazov [*et al.*]{}, [*Phys. Rev. Lett.*]{} [**101**]{}, 062001 (2008). H. Abramowicz [*et al.*]{} \[H1 and ZEUS Collaborations\], [*Eur. Phys. J. C*]{} [**73**]{}, 2311 (2013). F. D. Aaron [*et al.*]{} \[H1 and ZEUS Collaborations\], [*JHEP*]{} [**1001**]{}, 109 (2010). A. O. Bazarko [*et al.*]{} \[CCFR Collaboration\], [*Z. Phys. C*]{} [**65**]{}, 189 (1995). M. Goncharov [*et al.*]{} \[NuTeV Collaboration\], [*Phys. Rev. D*]{} [**64**]{}, 112006 (2001). S. Alekhin [*et al.*]{}, [*Phys. Lett. B*]{} [**720**]{}, 172 (2013). S. Alekhin, K. Daum, K. Lipka and S. Moch, [*Phys. Lett. B*]{} [**718**]{}, 550 (2012). J. Beringer [*et al.*]{} \[Particle Data Group Collaboration\], [*Phys. Rev. D*]{} [**86**]{}, 010001 (2012). J. Gao, M. Guzzi and P. M. Nadolsky, arXiv:1304.3494 \[hep-ph\]. J. Ablinger [*et al.*]{}, [*PoS LL*]{} [**2012**]{} (2012) 033. J. Ablinger [*et al.*]{}, [*Nucl. Phys. B*]{} [**864**]{}, 52 (2012). J. Ablinger [*et al.*]{}, [*Nucl. Phys. B*]{} [**844**]{}, 26 (2011). J. Blümlein, A. Hasselhuhn, S. Klein and C. Schneider, [*Nucl. Phys. B*]{} [**866**]{}, 196 (2013). [^1]: Permanent address: Institute for High Energy Physics, Pobeda 1, Protvino, 142280, Russia [^2]: For a recent overview on precision determinations of $\alpha_s(M_Z^2)$ see [@Bethke:2011tr]. [^3]: For a detailed description of the data set used cf. Ref. [@Alekhin:2012ig]. [^4]: The twist-6 terms were also checked in the fit and found compatible with zero within errors, applying the cut of $Q^2 > 2.5~{\rm GeV}^2$ used in the present analysis. [^5]: For progress in this field cf. [@Ablinger:2012ej; @Ablinger:2012qm; @Ablinger:2010ty; @Blumlein:2012vq].
---
author:
- 'Gregory J. Zelinsky'
- Yupei Chen
- Seoyoung Ahn
- Hossein Adeli
- Zhibo Yang
- Lihan Huang
- Dimitrios Samaras
- Minh Hoai
bibliography:
- 'IRL2020.bib'
title: 'Predicting Goal-directed Attention Control Using Inverse-Reinforcement Learning'
---

Introduction {#introduction .unnumbered}
============

Ever since Yarbus’ seminal demonstration of how a goal can control attention [@yarbus1967eye], understanding goal-directed attention control has been a core aim of psychological science. This focus is justified. Goal-directed attention underlies everything that we *try* to do, making it key to understanding cognitively-meaningful behavior. Like Yarbus, we too demonstrate goal-directed control of eye-movement behavior, but here these overt attention movements are made by a deep-network model that has learned different goals. Three factors distinguish our approach from previous work. First, it is image-computable and uses learned, rather than handcrafted, features. Our model therefore inputs an image but is not told anything about its features (“vertical”, “clock”, etc.), which all must be learned. This factor distinguishes the current model from most others in the behavioral literature on attention control [@wolfe1994guided; @zelinsky2008theory; @bundesen1990theory], and makes our approach more aligned with recent computational work [@zhang2018finding; @zelinsky2019benchmarking]. Second, the goal-directed behavior that we study is categorical search, the visual search for any exemplar of a target-object category [@schmidt2009search; @eimer2014neural; @zelinsky2013modeling]. We adopt this paradigm because categorical search is the simplest (and therefore best) goal-directed behavior to computationally model—there is a target-object goal and the task is to find it.
A third and unique contribution of our approach is that we predict categorical-search fixations using a policy that was learned, through many observations of search-fixation behavior during training, to maximize the goal-specific receipt of reward. Using inverse-reinforcement learning (IRL), we obtain these reward functions and use them to prioritize spatial locations to predict the fixations made by new people searching for the learned target categories in new images. Doing this required the creation of a search-fixation-annotated image dataset sufficiently large to train deep-network models (see Methods). We show that this model successfully captured several patterns observed in goal-directed search behavior, not the least being the guidance of overt attention to the target-category goals. Inverse-Reinforcement Learning {#inverse-reinforcement-learning .unnumbered} ------------------------------ IRL is an imitation-learning method from the machine-learning literature that learns, through observations of an expert, a reward function and policy for mimicking expert performance. We extend this framework to goal-directed behavior by assuming that the image locations fixated by searchers constitute the expert performance that the model learns to mimic. The specific IRL algorithm that we use is Generative Adversarial Imitation Learning (GAIL [@ho2016generative]), which makes reward proportional to the model’s ability to generate State-Action pairings that imitate observed State-Action pairings. Here, the Action is a shift of fixation location in a search image (the model’s saccade), and the State is the search context (all the information available for use in the search task). The State includes, but is not limited to, the visual features extracted from an image and the learned visual representation of the target category. 
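GAIL's reward signal, proportional to the discriminator's belief that a State-Action pair is human, can be sketched in a few lines. This is a schematic stand-in (a sigmoid discriminator reduced to a single logit), not the actual network code of the model:

```python
import math

def sigmoid(z):
    """Logistic squashing of a discriminator logit into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def generator_reward(d_logit):
    """GAIL-style generator reward: log D(state, action). The closer
    the discriminator output D is to 1 ('this looks human'), the
    larger the reward; D -> 0 drives the reward strongly negative."""
    d = sigmoid(d_logit)
    return math.log(max(d, 1e-12))   # clamp for numerical safety
```

The Actor-Critic generator then maximizes the expected sum of such rewards over a fixation sequence.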
Over training, and through the greedy maximization of total expected reward, the model learns a Policy for mapping States to Actions that can be used to predict new Actions (saccades) given new States (search images).

Methods {#sec:modelmethods .unnumbered}
=======

Model Methods {#model-methods .unnumbered}
-------------

The IRL model framework is illustrated in Fig. \[fig:pipeline\]. Model training can be conceptualized as a Policy Generator (G) and a Discriminator (D) locked in an adversarial process [@ho2016generative]. The Generator generates fake eye movements (Actions) with the goal of fooling the Discriminator into believing that these actions were made by a person, while the Discriminator’s goal is to discriminate the real eye movements from the fake. More specifically, the Generator consists of an Actor-Critic model [@konda2000actor] that learns a policy for maximizing total expected reward over all possible sequences of fixations, with greater reward given to the Generator when it produces person-like actions that the Discriminator misclassifies as real (the logarithm of the Discriminator output). This reward-driven adversarial process plays out during training using Proximal Policy Optimization (PPO) [@schulman2017proximal], with the result being a Generator that becomes highly adept at imitating the behavioral fixations made during categorical search. At testing, this learned Policy for mimicking people’s categorical search fixations is used to predict the fixation behavior of new people searching for the same target categories in new images. These fixation predictions are quantified by what we call a *saccade map*, which is a priority map reflecting the total reward expected if saccades were to land at all the different locations in an image input. ![The model’s adversarial imitation learning algorithm.
During training it learned from fixation-annotated images a reward function and policy for predicting new search fixations in unseen test images.[]{data-label="fig:pipeline"}](Slide5.jpg){width="100.00000%" height="0.95\textheight"}

States and Actions: Cumulative Movements of a Foveated Retina {#states-and-actions-cumulative-movements-of-a-foveated-retina .unnumbered}
-------------------------------------------------------------

Broadly speaking, the State is the internal visual representation that is used for search, and a large part of this is the set of features extracted from the image input. To obtain a robust core State representation we pass each image through a pre-trained ResNet-50 [@He-et-al-ICCV15] to get a reasonably-sized feature map output (1024x10x16). However, human search behavior is characterized by movements of a foveated retina, and each of these search fixations dramatically changes the State by re-positioning the high-resolution fovea in the visual input. We captured this fixation-dependent change in State in two steps. First, we gave the IRL model a simplified foveated retina. We did this using the method from Geisler and Perry [@perry2002gaze] to compute a retina-transformed version of the image input (a *ReT-image*), which in our implementation is an image having high resolution within a central $3^\circ$ “fovea” (32x32 pixels in the resized 512x320 pixel image) but blurred outside of this fovea to approximate the loss of resolution that occurs with increasingly eccentric viewing in the visual periphery. Second, we accumulate these high-resolution foveal views, each a different ReT-image, over 6 new fixations in a process that we refer to as *cumulative foveation*. With each new “eye movement”, the fovea is re-positioned in the image, thereby progressively de-blurring what was initially a fairly blurred visual input.
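The cumulative-foveation bookkeeping can be illustrated with a toy version of the retina transform. This sketch replaces the Geisler-Perry eccentricity-dependent blur with a single box blur and a square "fovea", so it captures only the accumulation logic, not the actual resolution falloff:

```python
def box_blur(img):
    """3x3 mean blur with clamped borders; img is a list of rows."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

def cumulative_ret_image(img, fixations, radius=1):
    """Toy cumulative ReT-image: pixels inside any visited foveal
    window stay sharp; everything else is blurred."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in box_blur(img)]
    for fy, fx in fixations:
        for y in range(max(0, fy - radius), min(h, fy + radius + 1)):
            for x in range(max(0, fx - radius), min(w, fx + radius + 1)):
                out[y][x] = img[y][x]   # restore full resolution
    return out
```

Each new fixation appends to `fixations`, so successive States are progressively less blurred versions of the same input.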
Note that by adopting this cumulative-foveation State encoder we are not suggesting that people have a similar capacity to maintain high-resolution visual information once the fovea moves on, and indeed this is known not to be the case [@irwin1996integrating]. Rather, we used this fixation-by-fixation State encoder simply as a tool to integrate a dynamically changing State into the IRL method. Figure \[fig:retimages\] shows cumulative ReT-images obtained at three successive fixation locations (0,1,2) for a sample scene, with the 7-fixation sequence of these images comprising a dynamic State representation that is input to the IRL model. The pre-trained ResNet-50 was dilated and fine-tuned on ReT-images prior to this State encoding. ![The formation of a cumulative retina-transformed image over the first three fixations (0,1,2).[]{data-label="fig:retimages"}](Slide6.jpg){width="100.00000%" height="0.95\textheight"} The IRL model learns to associate States with Actions, but these Actions must also be defined in some space. We obtain an Action space by first resizing a ReT-image input to 512x320 pixels, which we then discretize into a 10x16 grid of 32x32 pixel cells. The center of each cell becomes a potential fixation location, a computational necessity imposing a resolution limit on the model’s oculomotor behavior. For each of the 6 new fixations generated by the model, the cumulative ReT-image input is prioritized by the saccade map and one of the 160 possible grid locations is selected for an eye movement. The Microwave-Clock Search Dataset {#sec:dataset .unnumbered} ---------------------------------- The currently most predictive models of complex fixation behavior are in the context of a free-viewing task, where the best of these models (e.g., DeepGaze II [@kummerer2017understanding]) are trained on SALICON [@jiang2015salicon]. SALICON is a crowd-sourced dataset consisting of images that were annotated with human mouse clicks indicating salient image locations. 
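The grid-based action selection described above reduces to an argmax over the saccade map followed by a cell-to-pixel conversion. A minimal sketch (greedy selection shown for clarity; during training the policy samples actions stochastically):

```python
GRID_ROWS, GRID_COLS, CELL = 10, 16, 32   # 512x320 image, 32x32 cells

def select_fixation(saccade_map):
    """Pick the grid cell with the highest priority and return the
    pixel coordinates (x, y) of its center in the 512x320 image."""
    best_row, best_col, best_val = 0, 0, float("-inf")
    for r in range(GRID_ROWS):
        for c in range(GRID_COLS):
            if saccade_map[r][c] > best_val:
                best_row, best_col, best_val = r, c, saccade_map[r][c]
    return best_col * CELL + CELL // 2, best_row * CELL + CELL // 2
```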
Without SALICON, DeepGaze II and models like it would not have been possible, and our understanding of free-viewing behavior, widely believed to reflect bottom-up attention control (i.e., control solely by features extracted from the visual input), would be diminished. To date, however, there has been no comparable dataset for categorical search, and this has hindered the computational modeling of goal-directed attention control. Those suitably-sized and fixation-annotated image datasets that do exist either did not use a standard search task  [@mathe2014actions; @papadopoulos2014training], used a search task but had people search for multiple targets simultaneously [@gilani2015pet], or used only one target category (people) [@ehinger2009modelling]. Here we introduce the Microwave-Clock Search (MCS) dataset, which is now among the largest datasets of images that have been annotated with goal-directed fixations. The MCS dataset makes it possible to train deep-network models on human search fixations to predict how people will move their attention in the pursuit of different target-object goals. ![Representative training images (top) and testing images (bottom) in the MCS dataset.[]{data-label="fig:cocoimages"}](Slide7.jpg){width="100.00000%" height="0.95\textheight"} Half of the MCS dataset consists of COCO2014 images [@lin2014microsoft] depicting either a microwave or a clock (based on COCO labels), from which we created disjoint training and testing datasets. In selecting the training images we excluded scenes depicting people and animals (to avoid attention biases to these categories), and digital clocks in the case of the clock target category. This latter constraint was introduced because the features of analog and digital clocks are very different, and we were concerned that this would introduce unwanted variability in the search behavior. 
No additional exclusion criteria were used to select the training images, our goal being to include as many images for training as possible. These criteria left 1,494 analog clock images and 689 microwave images, which we should note varied greatly in their search difficulty (see Fig. \[fig:cocoimages\], top). Selection of the test images was more tightly controlled, resulting in a far smaller test dataset (n=40). In addition to the exclusion criteria used for the training images, test images were further constrained to have: (1) depictions of *both a microwave and a clock* (enabling different targets to be designated in identical images, the perfect control for differences in bottom-up saliency), (2) only a single instance of the target, (3) a target area less than 10% of the image area, and (4) targets that do not appear at the image’s center (no overlap between the target and the center cell of a 5x5 grid). The latter two criteria were aimed at excluding very large targets and targets appearing too close to the central starting-gaze position, both with the goal of achieving a moderate level of search difficulty (see Fig. \[fig:cocoimages\], bottom). ![The categorical search paradigm used for behavioral data collection.[]{data-label="fig:behaviorexp"}](Slide8.jpg){width="100.00000%" height="0.95\textheight"} The above-described selection criteria were specific to target-present (TP) images, but an equal number of target-absent (TA) images (n=2183) was selected as well so as to create a standard TP versus TA search context. These images were selected randomly from COCO, with the constraints that: (1) none depicted the target, and (2) all depicted at least two instances of the target category’s siblings. COCO defines the siblings of a microwave to be: ovens, toasters, refrigerators, and sinks, all under the parent category of “appliances”.
Clock siblings are defined as: books, vases, scissors, hairdryers, toothbrushes, and teddy bears, under the parent category of “indoor”. Sibling membership was used as a selection criterion so as to discourage TA responses from being based on scene type (e.g., a street scene is unlikely to contain a microwave), and this criterion seemed to work well; the overwhelming majority of the selected TA scenes were kitchens that did not depict a target. The large size of the training dataset (4,366 images) required data collection to be distributed over groups of searchers. Each microwave training image was searched by 2-3 people (n=27); each clock training image was searched by 1-2 people (n=26). After removing incorrect trials and TP trials in which the target was not fixated (it is not desirable to train on these), 16,184 search fixations remained for model training. Test images were each searched by a new group of 60 participants, 30 searching for a microwave target and the other 30 searching the same images for a clock target in a between-subjects design. To achieve a power and effect size of .8, based on a t-test comparing target guidance to chance (see Fig. \[fig:cumprob\]), we determined that a sample of 25 participants per target condition would be adequate. However, we chose to test 30 participants per condition in case of loss due to attrition or unusable eye-tracking data.

Behavioral Search Procedure {#behavioral-search-procedure .unnumbered}
---------------------------

A standard categorical search paradigm was used for both training and testing (see Fig. \[fig:behaviorexp\]). TP and TA trials were randomly interleaved within target type, and searchers made a speeded TP or TA manual response terminating each trial. Search display visual angles were $54^\circ \times 35^\circ$ for testing; for training, they ranged from $12^\circ$ to $28.3^\circ$ in width and from $8^\circ$ to $28.3^\circ$ in height.
Eye position was sampled at 1000 Hz using an EyeLink 1000 (SR Research) in tower-mount configuration (spatial resolution $0.01^\circ$ rms). All participants provided informed consent in accordance with policies set by the institutional review board at Stony Brook University responsible for overseeing research conducted on human subjects.

Results {#results .unnumbered}
=======

Search Behavior {#search-behavior .unnumbered}
---------------

Table \[tab:dataset\] provides the mean button-press errors and the average number of fixations made before the button-press response (which includes the starting fixation) on correct search trials. Note that the roughly doubled error rates in the training data should be interpreted with caution, as many of these errors were due to incorrectly labelled target-object regions in COCO that would cause errors given correct search judgments. Rather than correcting these mislabelled objects (which would be changing COCO), we instead decided to tolerate an inflated error rate and to exclude these error trials from all analyses and interpretation.

  ---- ----------- ----------- --------------------- ----------- ---------------------
                   Training                          Testing
                   Error (%)   Mean (SD) Fixations   Error (%)   Mean (SD) Fixations
  TP   microwave   18          5.46 ($\pm2.6$)       9           6.76 ($\pm2.1$)
       clock       15          4.52 ($\pm3.5$)       6           5.33 ($\pm1.8$)
  TA   microwave   8           7.95 ($\pm4.1$)       4           14.36 ($\pm2.5$)
       clock       10          11.14 ($\pm6.8$)      5           15.85 ($\pm2.3$)
  ---- ----------- ----------- --------------------- ----------- ---------------------

  : Summary statistics showing mean errors and number of search fixations in the Microwave-Clock Search dataset.[]{data-label="tab:dataset"}

Focusing first on the TP test data, Figure \[fig:cumprob\] plots the cumulative probability of fixating the target with each saccade made during search. The central behavioral data pattern (solid lines) is that attention, as measured by overt gaze fixation, is strongly guided to both the microwave and clock targets.
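The cumulative-probability measure plotted in Fig. \[fig:cumprob\] can be computed from per-trial indices of the first target-fixating saccade; a minimal sketch with a hypothetical data layout (one entry per trial):

```python
def cumulative_target_fixation(first_target_saccade, max_saccades=6):
    """Per-saccade cumulative probability that the target has been fixated.

    first_target_saccade: one entry per trial, giving the 1-based index of
    the saccade that first landed on the target, or None if it never did.
    """
    n = len(first_target_saccade)
    return [sum(1 for s in first_target_saccade if s is not None and s <= k) / n
            for k in range(1, max_saccades + 1)]

# toy example: targets found on saccades 1, 2, and 2; never found on one trial
print(cumulative_target_fixation([1, 2, 2, None]))
# -> [0.25, 0.75, 0.75, 0.75, 0.75, 0.75]
```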
This guidance is evidenced by the fact that 24% of the initial saccades landed on targets (averaged over microwaves and clocks). This probability of target fixation is well above chance, which we quantified using two object-based chance baselines consisting of: (1) the probability of fixating the clock when searching for a microwave (clock baseline), and (2) the probability of fixating the microwave when searching for a clock (microwave baseline). We confirmed above-chance target guidance by comparing the slopes of regression lines fit to the target and baseline data (microwave: target slope = 0.15, baseline slope = 0.03, t(58) = 26.31, p = 6.20e-34 < .001; clock: target slope = 0.17, baseline slope = 0.004, t(58) = 52.65, p = 1.14e-50 < .001). Also evident from this analysis is the importance of the first six saccades made during the search tasks. If the target was going to be fixated, it was highly likely that this would happen by the sixth eye movement. Collectively, these results indicate that there are strong microwave and clock guidance signals in the behavioral test data to predict.

![Cumulative probability of fixating the microwave (red) or clock (blue) target on target-present trials for behavioral participants (solid lines) and the model (dashed lines). Bottom lines are object-based random baselines (see text for details).[]{data-label="fig:cumprob"}](Slide1.jpg){width="70.00000%" height="0.95\textheight"}

IRL Model {#irl-model .unnumbered}
---------

To determine whether the model’s behavior is reasonable, we conducted two initial qualitative analyses. The top row in Figure \[fig:saccademap\] shows cumulative ReT-images for the starting fixation (0 in the yellow scanpath) and the fixations following the first two saccades (1, 2). Note that the left ReT-image, because it was computed based on a center initial fixation position, is blurred on both the left and right sides.
The middle and right ReT-images were computed based on the landing positions of the first and second saccades, respectively. The microwave target is indicated in each panel by the red box. The bottom row shows the saccade maps corresponding to these ReT-images, where a bluer color indicates greater total reward expected by moving fixation to different image locations. The model initially expected the greatest total reward by fixating the stove (left saccade map), but after that saccade, and the resulting change in State (top middle), the model then selected the microwave target as the location offering the greatest expected reward (bottom middle), which was fixated next (right panels). Note that the model, because it was forced to make six saccades (discussed below), continued to prioritize space even after fixating the target. This qualitative analysis shows that the model learned an association between a State (which includes the features of a microwave) and an Action, and this enabled it to guide its fixations during the search of a new image for this target-category goal. ![Cumulative ReT-images (top row) and corresponding saccade maps (bottom row) for the initial and first two new fixations (left to right) made by the model in a microwave search task.[]{data-label="fig:saccademap"}](Slide2.jpg){width="100.00000%" height="0.95\textheight"} Figure \[fig:fdm\] shows another qualitative evaluation, this time comparing Fixation-Density Maps (FDMs) from people searching for a microwave (n=30) or a clock (n=30) to FDMs generated by the model (sampling from probabilistic policy) as it searched for the same targets in the same two test images. In both examples, the model and behavioral searchers efficiently found the target (bright red). More interesting, however, is that they both searched the scenes differently depending on the target category. 
When searching for a microwave (leftmost four panels) the model and behavioral searchers tended to look at counter-tops, but when searching for a clock (rightmost four panels) they tended to look higher up on the walls. Future work will more fully explore the potential to learn and predict these effects of scene context on search. ![Model and behavioral Fixation Density Maps computed for microwave (left four) and clock (right four) searches in two trials (top, bottom).[]{data-label="fig:fdm"}](Slide3.jpg){width="100.00000%" height="0.95\textheight"} We directly compared the model and participant search behavior in several analyses of the test data. This comparison occurred on an image-by-image and fixation-by-fixation basis, but was limited to the first six movements of gaze. We introduced this limitation on the number of search saccades to reduce model computation time, but believe that it is justified given the clear adequacy of the first six saccades in revealing the goal-directed behavior of interest, as shown in Figure \[fig:cumprob\]. Figure \[fig:modelperformance\]A (left plot) shows that the model was able to predict the behavioral FDMs for microwave and clock targets, using an AUC metric where the scale is between 0 and 1 and higher values indicate better predictive success. We also include Subject models, computed using the leave-one-out method, to obtain a practical noise limit on a model’s ability to predict group behavior [@bylinskii2018different]. This analysis shows that the IRL model was able to predict the spatial distribution of behavioral fixations in the test images as well as could be expected based on variability among the participants in their search behavior. FDMs, however, are purely spatial, but search fixations are also made over time, ultimately producing a scanpath. 
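The AUC evaluation of the FDM predictions can be sketched as a rank-order (Mann–Whitney) AUC over fixated versus non-fixated pixels; this is a generic sketch, not necessarily the exact AUC variant used in the analysis, since saliency research uses several:

```python
import numpy as np

def fixation_auc(pred_map, fixation_mask):
    """Probability that the predicted map scores a fixated pixel above a
    randomly chosen non-fixated pixel (Mann-Whitney AUC; ties count one half)."""
    mask = fixation_mask.astype(bool)
    pos = pred_map[mask].ravel()      # scores at fixated pixels
    neg = pred_map[~mask].ravel()     # scores at non-fixated pixels
    greater = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return (greater + 0.5 * ties) / (pos.size * neg.size)

pred = np.array([[0.9, 0.1], [0.2, 0.8]])
fix = np.array([[1, 0], [0, 1]])   # fixations fall on the high-scoring pixels
print(fixation_auc(pred, fix))     # -> 1.0
```

A flat prediction map scores 0.5 on this metric, matching the intuition that an uninformative map is at chance.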
Because the IRL model also makes sequences of fixations, we were able to compare its 6-saccade scanpaths to the 6-saccade scanpaths from the behavioral searchers (right plot). Based on average MultiMatch similarity [@dewhurst2012depends], excluding the fixation duration component, the model did a very good job in predicting the spatio-temporal sequences of fixations made by the behavioral searchers in their first six saccades, again as well as could be expected from the behavioral data, and it did this for both microwave and clock targets. ![Model Performance Results. (A) Left: Success in predicting FDMs by the IRL model and a Subject model using the Area-Under-the-Curve metric. Right: Corresponding predictions of search scanpaths using average MultiMatch similarity. (B) Left: Proportion of trials in which the target was fixated in the first six saccades (fixated-in-6 accuracy). Right: Average number of saccades to the target on the fixated-in-6 trials. Note that in the (A) plots “Subj” refers to a Subject model whereas in the (B) plots “Subj” refers to behavioral data.[]{data-label="fig:modelperformance"}](Slide4.jpg){width="100.00000%" height="0.95\textheight"} We also analyzed search accuracy and the number of saccades that were made during search. Note that “Accuracy” refers here to the proportion of trials in which the target was fixated in the first six eye movements (fixated-in-6 accuracy), and “Avg Saccades” refers to the mean number of saccades needed to find the target on the accurate fixated-in-6 trials. Figure \[fig:modelperformance\]B shows that the model was slightly less successful than behavioral searchers in locating targets within six saccades (left plot), but when it did find the target it tended to do so about as efficiently as our participants, needing only about half a fixation more in the case of clocks (right plot). 
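The two measures defined above, fixated-in-6 accuracy and the average number of saccades to the target, reduce to simple bookkeeping over per-trial indices of the first target-fixating saccade (a hypothetical data layout):

```python
def fixated_in_n(first_target_saccade, n=6):
    """(accuracy, mean saccades to target), restricted to the first n saccades.

    first_target_saccade: per trial, the 1-based index of the saccade that
    first landed on the target, or None if the target was never fixated.
    """
    hits = [s for s in first_target_saccade if s is not None and s <= n]
    accuracy = len(hits) / len(first_target_saccade)
    avg_saccades = sum(hits) / len(hits) if hits else float("nan")
    return accuracy, avg_saccades

acc, avg = fixated_in_n([1, 3, 7, None, 2])  # the 7 falls outside the window
print(acc, avg)  # -> 0.6 2.0
```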
Chance fixated-in-6 accuracy is less than .25, based on a shuffling of eye data and images within each participant, and this is far lower than fixated-in-6 accuracy for the IRL model (microwave: t(58) = -31.74, p = 2.34e-38 < .001, Cohen’s d = 8.20; clock: t(58) = -75.87, p = 9.73e-60 < .001, Cohen’s d = 19.59). But perhaps the clearest measure of search efficiency is the cumulative probability of target fixation over saccades. As indicated by the dashed lines in Figure \[fig:cumprob\], the model’s search, although generally less efficient (reflecting the difference in fixated-in-6 accuracy), was strongly guided to targets much like the participants’ search behavior. We interpret this as meaning that the IRL model learned goal-specific attention control, as measured by a gold-standard metric.

Discussion {#discussion .unnumbered}
==========

Models of search behavior have traditionally aimed at describing relatively coarse patterns (e.g., set-size effects) in highly simplified contexts [@wolfe1994guided; @zelinsky2008theory], limitations that were imposed by a reliance on handcrafted features to create a guidance signal. In this study we adopted the radically different approach of training a model simply on many observations of search behavior, and showed that the Policy learned by this model predicted multiple overt measures of goal-directed attention control. The success of these predictions is significant in that it requires a re-setting of the goal posts with respect to model evaluation. While once computational methods limited attention models to fitting patterns of search data in simple contexts, with deep networks it is possible to predict individual fixations made in the search for categories of objects in realistic scenes. Training this model required creating the Microwave-Clock Search dataset, which is among the only datasets of goal-directed attention (search fixations) large enough to train deep-network models.
We encourage people to download this dataset from <https://you.stonybrook.edu/zelinsky/datasetscode/> and use it in their own predictive-modeling work, citing this publication. Our hope is that the availability of this dataset will promote greater model development and comparison, which given the pace of recent advances might meaningfully advance the understanding of goal-directed attention control.

The visual search for an object category is a goal-directed behavior of unique importance, shared by pigeons and people and most species in between. Because of its fundamental role in survival, search is likely to use the most basic of control processes—reward [@anderson2013value]. Using the MCS dataset and Inverse-Reinforcement Learning, we showed that the target-specific reward functions learned by our model predicted the goal-directed fixations made by new people searching new images for the learned target categories. Machine learning has made it possible to learn the reward functions underlying goal-directed attention control. In ongoing work we are expanding our search dataset to 18 target categories so as to begin characterizing how reward functions vary over common real-world objects and to more fully explore scene context effects. In future work we also plan to manipulate different types of reward used in training, and apply IRL to questions in individual-difference learning.

Data availability {#data-availability .unnumbered}
=================

The dataset described in this paper is available in the *Microwave-Clock Search (MCS) dataset* repository: <https://you.stonybrook.edu/zelinsky/datasetscode/>.

Acknowledgements {#acknowledgements .unnumbered}
================

We would like to thank the National Science Foundation for their generous support through award IIS-1763981, and members of the EyeCog Lab for their help with data collection and invaluable feedback.

Author contributions {#author-contributions .unnumbered}
====================

M.H., D.S., and G.J.Z.
conceptualized the research; H.A., Y.C., and G.J.Z. collected the dataset; L.H., Z.Y., D.S., and M.H. implemented the model. All authors analyzed and interpreted data, but especially Z.Y., Y.C., L.H., and S.A. G.J.Z., Y.C., S.A., and H.A. wrote the paper.

Additional information {#additional-information .unnumbered}
======================

**Competing interests**. The authors declare no competing interests.
---
abstract: 'We study the three-dimensional deformation field induced by an axial (In,Ga)N segment in a GaN nanowire. Using the finite element method within the framework of linear elasticity theory, we study the dependence of the strain field on the ratio of segment length and nanowire radius. Contrary to intuition, the out-of-plane component [$\upvarepsilon_\mathrm{zz}$]{} of the elastic strain tensor is found to assume large negative values for a length-to-radius ratio close to one. We show that this unexpected effect is a direct consequence of the deformation of the nanowire at the free sidewalls and the associated large shear strain components. Simulated reciprocal space maps of a single (In,Ga)N/GaN nanowire demonstrate that nanofocus x-ray diffraction is a suitable technique to assess this peculiar strain state experimentally.'
author:
- Thilo Krause
- Michael Hanke
- Oliver Brandt
- Achim Trampert
title: 'Counterintuitive strain distribution in axial (In,Ga)N/GaN nanowires'
---

(In,Ga)N/GaN heterostructures are an integral part of light emitting devices used for full-color displays and solid-state lighting. For conventional planar heterostructures, the In content in the (In,Ga)N quantum well is restricted due to the formation of dislocations beyond a certain critical stress[@Matthews1974; @Matthews1975; @Parker1999; @Pereira2002; @Dobrovolskas2013]. Axial (In,Ga)N/GaN nanowire (NW) heterostructures are promising alternatives since their high aspect ratio and high surface-to-volume ratio are expected to facilitate the incorporation of higher amounts of In without resulting in plastic deformation[@Ertekin2005; @Hersee2011]. In fact, a NW can effectively release strain elastically close to its free surface, and thereby accommodate a higher lattice mismatch compared to a planar layer[@Glas2006; @Ye2009].
The prospect of incorporating a high amount of In into a GaN matrix offers the intriguing possibility of tuning the emitted wavelength from the near-ultraviolet to the near-infrared region[@Kuykendall2007]. The potential advantages of NWs for light emitting applications have encouraged many investigations of their growth[@Johansson2011; @Li2012], related phenomena such as phase segregation[@Segura-Ruiz2011; @Segura-Ruiz2014], as well as their optical properties[@Limbach2012; @Marquardt2013; @Jackson2014]. The elastic strain relaxation in axial NW heterostructures depends on both the length of the strained segment and the radius of the NW, which therefore represent additional degrees of freedom to tune the emission wavelength[@Kaganer2012; @Wolz2013]. In contrast to planar structures, however, the strain distribution in a NW heterostructure is inherently three-dimensional and more complex than commonly assumed. In this letter, we investigate the strain field in axial (In,Ga)N/GaN NW heterostructures using the finite element method. In particular, we discuss the dependence of the out-of-plane component of the elastic strain tensor, [$\upvarepsilon_\mathrm{zz}$]{}, on the length of the (In,Ga)N segment and the NW radius. We show that for certain length-to-radius ratios, [$\upvarepsilon_\mathrm{zz}$]{} may assume large negative values. Finally, we suggest that x-ray diffraction experiments on single (In,Ga)N/GaN NWs employing nanofocus synchrotron radiation are a viable means to access the strain state in these nanostructures experimentally.
![Sketch of a hexagonally-shaped GaN nanowire of radius $r$ with an embedded axial (In,Ga)N segment of length $\ell$.[]{data-label="fig:Model"}](fig_1.pdf){width="0.8\columnwidth"}

The strain field induced by a lattice-mismatched segment in a NW can be obtained in closed form only when treating the NW as an infinitely long cylinder[@Kaganer2012]. To take into account the actual hexagonal cross-sectional shape of the GaN NWs under consideration, we employ the finite element method (FEM) as implemented in the commercial package MSC Marc. Our simulations are performed within the framework of linear elasticity theory, for which the components of the strain tensor are dimensionless. An important consequence of this fact is that our results can be scaled arbitrarily to smaller or larger dimensions as long as the strain relaxation is purely elastic. In other words, the strain field in the NW will be identical for all axial (In,Ga)N/GaN NW heterostructures with the same ratio of NW diameter and segment length. Our simulations take into account the full elastic anisotropy of the hexagonal group-III nitrides. The elastic constants for GaN and InN are taken from Ref. , and the elastic constants for the ternary (In,Ga)N alloy are obtained by linear interpolation.

![image](fig_2.pdf){width="100.00000%"}

In all that follows, we consider a hexagonally shaped, $\langle0001\rangle$-oriented GaN NW with an embedded In$_{x}$Ga$_{1-x}$N segment with $x = 0.3$ as schematically depicted in Fig. \[fig:Model\]. The alloy is treated as a perfectly homogeneous material, i.e., the random compositional fluctuations in this material are ignored. For the sake of explicitness, but without loss of generality, we set the NW radius to $r = 10$nm.
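For orientation, the lattice mismatch driving the strain field follows from Vegard's rule; the a-axis lattice parameters below are representative room-temperature literature values (an assumption, quoted only to illustrate the magnitude of the misfit, not taken from this work):

```python
A_GAN, A_INN = 3.189, 3.545  # a-axis lattice parameters in Angstrom (literature values)

def misfit(x):
    """In-plane lattice misfit of In(x)Ga(1-x)N relative to GaN, via Vegard's rule."""
    a_alloy = x * A_INN + (1 - x) * A_GAN
    return (a_alloy - A_GAN) / A_GAN

print(f"{misfit(0.3):.2%}")  # -> 3.35%
```

A misfit of a few percent is what makes the purely elastic relaxation at the free sidewalls, rather than dislocation formation, the interesting regime here.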
Energy balance considerations for an elastically isotropic material predict that the [In$_{0.3}$Ga$_{0.7}$N]{}segment remains coherently strained for this radius regardless of its length $\ell$[@Glas2006]. Experimental investigations were focused on very short (In,Ga)N segments, which in the following we refer to as quantum disks (QDs), with a thickness of typically not more than 2 to 5 nm[@Guo2011; @Wolz2012; @Tourbot2013]. For an [In$_{0.3}$Ga$_{0.7}$N]{}QD with this thickness, the emission wavelength is expected to be between 600 and 700 nm, making these structures interesting for applications in red light emitting diodes. Thicker QDs would be expected to emit in the infrared spectral range, but for the corresponding planar structures, strain-induced piezoelectric fields within the QDs would inhibit the emission altogether. For NWs, however, the elastic strain relief at the NW sidewalls may greatly reduce the magnitude of these fields. For a systematic investigation of the evolution of the strain field within the NW, we thus vary the segment length $\ell$ from 2 to 40 nm while keeping the In content and NW radius constant. We first focus on the out-of-plane component of the elastic strain tensor, [$\upvarepsilon_\mathrm{zz}$]{}, as defined by the relative lattice parameter difference with respect to the unstrained bulk material. For a pseudomorphically strained (In,Ga)N/GaN layered system, the (In,Ga)N layer is under biaxial compressive strain characterized by a positive value of [$\upvarepsilon_\mathrm{zz}$]{}throughout the (In,Ga)N layer. Intuitively, we would expect a positive value for [$\upvarepsilon_\mathrm{zz}$]{}for an equivalent layer sequence in a NW as well. Since strain can be elastically relieved at the NW surface, [$\upvarepsilon_\mathrm{zz}$]{}will be decreased particularly close to the surface, and should eventually approach zero in the limit of very thin NWs except for the interfacial boundaries.
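Given a displacement field sampled on a grid, as output by such a FEM calculation, $\upvarepsilon_\mathrm{zz}$ can be extracted by numerical differentiation; a minimal sketch, assuming (as an illustration, not the authors' pipeline) displacements referenced to the unstrained bulk lattice:

```python
import numpy as np

def strain_zz(u_z, dz):
    """eps_zz = d(u_z)/dz, evaluated by finite differences along the last axis."""
    return np.gradient(u_z, dz, axis=-1)

# sanity check: a uniform 1% axial stretch, u_z = 0.01 * z, gives eps_zz = 0.01
z = np.linspace(0.0, 40.0, 81)      # axial coordinate in nm
eps = strain_zz(0.01 * z, z[1] - z[0])
print(eps.min(), eps.max())         # both ~0.01
```

The same finite-difference stencil applied across axes yields the shear components, e.g. $\upvarepsilon_\mathrm{zx} = \tfrac{1}{2}(\partial u_z/\partial x + \partial u_x/\partial z)$, which become central to the argument below.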
However, in the following we will demonstrate that the strain relaxation in axial (In,Ga)N/GaN NWs does not proceed in this simple monotonic fashion as intuitively expected. To follow the evolution of the strain field with increasing $\ell/r$ ratio, we calculate a map of [$\upvarepsilon_\mathrm{zz}$]{}for different lengths of the embedded [In$_{0.3}$Ga$_{0.7}$N]{}segment as shown in Fig. \[fig:FEM\_h/r\]. For the 2nm thick QD, a strain state similar to an equivalent planar system is established, i.e., the QD is under essentially uniform strain with $\upvarepsilon_\mathrm{zz} > 0$ throughout its volume. Strain relaxation mainly occurs in the direct vicinity of the free sidewall surfaces and the [In$_{0.3}$Ga$_{0.7}$N/GaN]{}interfaces, manifesting itself in long-range distortions in the adjacent GaN segments which change their character from tensile ($\upvarepsilon_\mathrm{zz} < 0$) to compressive ($\upvarepsilon_\mathrm{zz} > 0$) with increasing distance from the QD. Furthermore, [$\upvarepsilon_\mathrm{zz}$]{}attains values of more than 3% directly at the edges of the hexagonal [In$_{0.3}$Ga$_{0.7}$N]{}QD. For thicker QDs, this behavior changes profoundly, in that the relaxation affects progressively more of the volume of the [In$_{0.3}$Ga$_{0.7}$N]{}QD itself. For $\ell = 6$nm \[see panel (i) in Fig. \[fig:FEM\_h/r\]\], a part of the inner core of the QD exhibits the bulk lattice constant of [In$_{0.3}$Ga$_{0.7}$N]{}with $\upvarepsilon_\mathrm{zz} = 0$. Embedded into this region are two radially separated regions with a negative value of [$\upvarepsilon_\mathrm{zz}$]{}. These regions represent a cut through a torus-shaped strain distribution with maximum compression. When increasing $\ell$, the spatially separated compressed regions extend toward the center of the segment and eventually merge. As a result, the entire inner core of the [In$_{0.3}$Ga$_{0.7}$N]{}segment is under tensile strain with $\upvarepsilon_\mathrm{zz} < 0$.
The maximum magnitude of this tensile strain is reached for $\ell/r\approx 3/2$ \[see panel (ii) of Fig. \[fig:FEM\_h/r\]\], for which [$\upvarepsilon_\mathrm{zz}$]{}approaches a negative value as large as $-0.8$%. The maximum values for [$\upvarepsilon_\mathrm{zz}$]{}of about 2% are again observed at the edges of the [In$_{0.3}$Ga$_{0.7}$N]{}segment. A further increase of the $\ell/r$ ratio beyond the value of 3/2 reverses the continuous decrease of [$\upvarepsilon_\mathrm{zz}$]{}in the center of the segment. However, the inner core of the [In$_{0.3}$Ga$_{0.7}$N]{}segment remains under tensile strain until the $\ell/r$ ratio attains values larger than 3 \[see panel (iii) in Fig. \[fig:FEM\_h/r\]\]. In this case, the minimum in the center of the segment starts to divide into two minima forming again a torus-shaped strain distribution, but now in the axial direction. At the same time, the value of [$\upvarepsilon_\mathrm{zz}$]{}in the center of the segment slowly approaches zero. Finally, a strain state akin to complete elastic relaxation is observed for the middle part of the 40nm long [In$_{0.3}$Ga$_{0.7}$N]{}segment. However, the strain field stays complex at the interface to the adjacent GaN segments. Note that the complete elastic relaxation observed for long segments is not to be confused with the vanishing [$\upvarepsilon_\mathrm{zz}$]{}for $\ell/r = 3/2$ \[cf. panel (i) of Fig. \[fig:FEM\_h/r\]\]. While the former effect is a simple manifestation of St. Venant’s principle [@love_44; @timochenko_51; @housner_66], the latter one is as unexpected as it is difficult to understand on an intuitive basis. ![On and off-axis strain components for the [In$_{0.3}$Ga$_{0.7}$N/GaN]{}NW with $\ell/r = 1$. The in-plane normal components [$\upvarepsilon_\mathrm{xx}$]{}and [$\upvarepsilon_\mathrm{yy}$]{}suggest a nearly complete strain relaxation, but the out-of-plane normal component [$\upvarepsilon_\mathrm{zz}$]{}is negative in the inner core of the segment.
The strain release at the surface induces a deformation of the [In$_{0.3}$Ga$_{0.7}$N]{}segment, exaggerated here by a factor of 10 for better visibility. This deformation is accompanied by significant shear strain components with out-of-plane contribution, concentrated mainly at the [In$_{0.3}$Ga$_{0.7}$N/GaN]{}interfaces.[]{data-label="fig:StrainComp"}](fig_3.pdf){width="1.0\columnwidth"} To shed light on this anomaly in the dependence of the out-of-plane strain on the $\ell/r$ ratio, we next consider all components of the strain tensor, and in particular the shear strains. As a representative example, Fig. \[fig:StrainComp\] shows these components for the [In$_{0.3}$Ga$_{0.7}$N/GaN]{}NW with $\ell/r = 1$, for which a significant portion of the [In$_{0.3}$Ga$_{0.7}$N]{}segment exhibits a tensile out-of-plane strain (cf. Fig. \[fig:FEM\_h/r\]). Despite this fact, the in-plane normal components [$\upvarepsilon_\mathrm{xx}$]{}and [$\upvarepsilon_\mathrm{yy}$]{}are seen to be spatially uniform and small ($-0.3$%) essentially within the entire [In$_{0.3}$Ga$_{0.7}$N]{}segment. Deviations from this quasi-relaxed in-plane strain state are observed only at the interfaces, where the adjacent materials experience a strong compressive and tensile strain to accommodate the lattice mismatch and to establish a commensurate boundary. In contrast, the out-of-plane component [$\upvarepsilon_\mathrm{zz}$]{}is negative in the central part of the segment and continuously increases toward both the free surface and the interfaces to the adjacent GaN segments. On each $\{10\bar{1}0\}$ facet of the segment surface, [$\upvarepsilon_\mathrm{zz}$]{}is smallest at the center of the facet and increases toward the hexagonal edges as well as toward the interfaces with a maximum value of about 2%. This variation of [$\upvarepsilon_\mathrm{zz}$]{}is accompanied by a significant convex deformation of the shape of the segment at the free sidewalls.
This deformation of the [In$_{0.3}$Ga$_{0.7}$N]{}segment necessitates non-zero shear strains. In fact, all three shear-strain components are significant, and particularly so those with out-of-plane components ([$\upvarepsilon_\mathrm{zx}$]{} and [$\upvarepsilon_\mathrm{yz}$]{}), which reach magnitudes as large as 4%. These shear components allow an almost complete in-plane relaxation of the [In$_{0.3}$Ga$_{0.7}$N]{}segment via the convex deformation of the NW sidewalls. It is this deformation which results in an axial contraction, and is thus directly responsible for the negative out-of-plane strain observed in this study. Additionally, due to the small Poisson ratio of GaN, the [In$_{0.3}$Ga$_{0.7}$N]{}segment induces a volume change of 1:0.969, deviating from an equivalent layered structure, which shows a volume change close to one. To access the local strain state of a single NW experimentally is a difficult task. A particularly powerful technique to investigate the three-dimensional strain distribution within single (In,Ga)N/GaN NWs is nanofocus x-ray diffractometry[@Robinson2009; @Stangl2014; @Keplinger2015]. This non-destructive technique combines a high angular resolution in reciprocal space with a sub-µm spatial resolution[@Hanke2008; @Gulden2011; @Dubslaff2011]. To assess the feasibility of experimentally measuring the strain distribution for single (In,Ga)N/GaN NWs, we simulate the impact of a strained (In,Ga)N segment in a GaN NW on the diffuse x-ray scattering pattern of this NW heterostructure. Using the kinematical expression $$\begin{aligned} I(\bm{q})={{\left| \sum_i f_i(\bm q) e^{{\mathrm{i}}{\bm q}\cdot[{\bm r}_i+{\bm u}({\bm r}_i)]} \right|}}^2\end{aligned}$$ with the reciprocal space vector [$\bm q$]{} = ([q$_\mathrm{x}$]{}, [q$_\mathrm{y}$]{}, [q$_\mathrm{z}$]{}), the form factors $f_i(\bm q)$, and the atom coordinates ${\bm r}_i$, we compute the three-dimensional intensity pattern around the Bragg reflection in reciprocal space.
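The kinematical sum above maps directly onto a few lines of numerical code; the sketch below evaluates $I(\bm q)$ for a single scattering vector and checks it against the textbook result $I = N^2$ for $N$ undisplaced unit scatterers:

```python
import numpy as np

def kinematical_intensity(q, r, u, f=1.0):
    """I(q) = |sum_i f_i exp(i q.(r_i + u_i))|^2 for one scattering vector.

    q: (3,) scattering vector; r, u: (N, 3) atom positions and displacements;
    f: scalar form factor (taken constant here for simplicity).
    """
    phase = (r + u) @ q                      # q.(r_i + u_i) for every atom
    amplitude = np.sum(f * np.exp(1j * phase))
    return np.abs(amplitude) ** 2

# sanity check: N undisplaced scatterers with q on a reciprocal-lattice point
# interfere fully constructively, so I = N^2
c = 5.185                                    # GaN c lattice parameter (Angstrom)
r = np.array([[0.0, 0.0, n * c] for n in range(10)])
q = np.array([0.0, 0.0, 2 * np.pi * 4 / c])  # the (0004) reflection
print(kinematical_intensity(q, r, np.zeros_like(r)))  # -> ~100
```

Feeding in a nonzero displacement field `u` (from the FEM result) is what produces the strain-induced peak shifts and interference fringes discussed next.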
The atom coordinates are interpolated using the displacement field $\bm u$ obtained from FEM giving the displacement of the atom positions ${\bm u}({\bm r}_i)$. As an example, we will discuss four simulated reciprocal space maps (RSM), showing the diffusely scattered intensity around the symmetric GaN(0004) reflection. We assume a spot size of 120$\times$120 nm$^2$ and the strain fields are taken from the series of Fig. \[fig:FEM\_h/r\]. ![Reciprocal space maps close to the symmetric GaN(0004) reflection calculated in the frame of the kinematical scattering theory. (a): RSM for a bare GaN NW with radius $r = 10$nm. (b), (c) and (d): RSMs for GaN NWs with embedded (In,Ga)N segments of length $\ell= 4$ (b), 10 (c) and 40 nm (d). The white line indicates the expected peak position for relaxed [In$_{0.3}$Ga$_{0.7}$N]{}.[]{data-label="fig:ScatSim"}](fig_4.pdf){width="1.0\columnwidth"} In Fig. \[fig:ScatSim\](a), we consider a bare GaN NW yielding a peak at [q$_\mathrm{z}$]{}= 4.85 Å$^{-1}$ which is the expected position for bulk GaN. This peak is modulated by finite size oscillations due to the NW geometry. The pattern changes as soon as strain is introduced into the NW due to the presence of an [In$_{0.3}$Ga$_{0.7}$N]{}segment. The RSM shown in Fig. \[fig:ScatSim\](b) for a 4nm thick QD exhibits no distinct peak related to [In$_{0.3}$Ga$_{0.7}$N]{}, but the phase shift induced by the QD results in pronounced interference fringes. For $\ell = 10$nm, the FEM simulation predicts negative elastic strain within the QD volume, which in turn will shift the peak position of relaxed [In$_{0.3}$Ga$_{0.7}$N]{}toward larger values of [q$_\mathrm{z}$]{}as indeed seen in Fig. \[fig:ScatSim\](c). Besides the change in the position of the Bragg peak, the diffuse part at $q_\mathrm{x} \neq 0$ serves as a sensitive fingerprint of the established strain field within a NW. 
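The bulk GaN peak position quoted above follows directly from the c lattice parameter (c ≈ 5.185 Å, a representative literature value used here as an assumption):

```python
from math import pi

C_GAN = 5.185  # GaN c lattice parameter in Angstrom (literature value)

def qz_000l(l, c=C_GAN):
    """q_z position of the symmetric (000l) reflection."""
    return 2 * pi * l / c

print(f"{qz_000l(4):.2f}")  # -> 4.85 (1/Angstrom, the bulk GaN(0004) peak)
```

A smaller local c (negative $\upvarepsilon_\mathrm{zz}$) enters the same formula as a larger effective q$_\mathrm{z}$, which is exactly the peak shift seen in Fig. \[fig:ScatSim\](c).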
Due to the comparatively large volume of relaxed [In$_{0.3}$Ga$_{0.7}$N]{}, the RSM of the 40nm long segment exhibits a clear peak with high intensity close to the position of relaxed [In$_{0.3}$Ga$_{0.7}$N]{}indicated by the white line in Fig. \[fig:ScatSim\](d). The complex strain field at the interfaces manifests itself in the strong modulations around this peak. Based on the simulated RSMs, we conclude that an [In$_{0.3}$Ga$_{0.7}$N]{}segment embedded in a GaN NW has a significant, experimentally accessible impact on the diffraction pattern, and nanofocus x-ray diffraction experiments are thus suitable to probe the local strain distribution in these NW heterostructures. The results obtained in this work for $\langle0001\rangle$-oriented (In,Ga)N/GaN NWs are also valid for other axial semiconductor NW heterostructures, including all $\langle0001\rangle$-oriented wurtzite NWs, but also $\langle111\rangle$-oriented NWs composed of diamond and zincblende materials such as Ge/Si[@Hanke2007; @Swadener2009; @Wen2015], Si/GaP[@Hocevar2012], InAs/InP[@Haapamaki2011; @Gotoh2015], and CdTe/ZnTe[@DLUZEWSKI2010]. For all these materials systems, the strain anomaly investigated in the present work is not only of academic interest, but has potentially far-reaching consequences for their application in electronic and optoelectronic devices. In fact, our study shows that the length-to-radius ratio can be used to engineer the strain field and in particular to tune the out-of-plane strain component from compressive to tensile. We can use this new degree of freedom to manipulate the band gap, the effective mass, and the magnitude and direction of the piezoelectric fields in the strained insertion. The authors are indebted to Oliver Marquardt for valuable discussions and a critical reading of the manuscript. Special thanks are due to Vladimir Kaganer for performing analytical calculations which confirm the anomalous strain state we have discussed in the present work.
Financial support by the German Research Foundation DFG (project HA3495/9-1) is gratefully acknowledged. [9999]{} J. W. Matthews, J. Vac. Sci. Technol. [**12**]{}, 126-133 (1975). J.W. Matthews and A.E. Blakeslee, J. Cryst. Growth [**27**]{}, 118 - 125 (1974). C. A. Parker, J. C. Roberts, S. M. Bedair, M. J. Reed, S. X. Liu, N. A. El-Masry, Appl. Phys. Lett. [**75**]{}, 2776 (1999). S. Pereira, M. R. Correia, E. Pereira, C. Trager -Cowan, F. Sweeney, K. P. O’Donnell, E. Alves, N. Franco, A. D. Sequeira, Appl. Phys. Lett. [**81**]{}, 1207 (2002). D. Dobrovolskas, A. Vaitkevicius, J. Mickevicius, Ö Tuna, C. Giesen, M. Heuken, G. Tamulaitis, J. Appl. Phys. [**114**]{}, 163516 (2013). Elif Ertekin, P. A. Greaney, D. C. Chrzan, Timothy D. Sands, J. Appl. Phys. [**97**]{}, 114325 (2005). Stephen D. Hersee, Ashwin K. Rishinaramangalam, Michael N. Fairchild, Lei Zhang, Petros Varangis, J. Mater. Res. [**26**]{}, 2293–2298 (2011). Frank Glas, Phys. Rev. B [**74**]{}, 121302 (2006). Han Ye, Pengfei Lu, Zhongyuan Yu, Yuxin Song, Donglin Wang, Shumin Wang, Nano Lett. [**9**]{}, 1921–1925 (2009). Tevye Kuykendall, Philipp Ulrich, Shaul Aloni, Peidong Yang, Nat. Mater. [**6**]{}, 951–956 (2007). Jonas Johansson, Kimberly A. Dick, CrystEngComm [**13**]{}, 7175-7184 (2011). Shunfeng Li, Andreas Waag, J. Appl. Phys. [**111**]{}, 071101 (2012). J. Segura-Ruiz, G. Martínez-Criado, J. A. Sans, R. Tucoulou, P. Cloetens, I. Snigireva, C. Denker, J. Malindretos, A. Rizzi, M. Gomez-Gomez, N. Garro, A. Cantarero, Phys. Status Solidi Rapid Res. Lett. [**5**]{}, 95–97 (2011). J. Segura-Ruiz, G. Martínez-Criado, C. Denker, J. Malindretos, A. Rizzi, Nano Lett. [**14**]{}, 1300–1305 (2014). F Limbach and C Hauswald and J Lähnemann and M Wölz and O Brandt and A Trampert and M Hanke and U Jahn and R Calarco and L Geelhaar and H Riechert, Nanotechnology [**23**]{}, 465301 (2012). Oliver Marquardt, Christian Hauswald, Martin Wölz, Lutz Geelhaar, Oliver Brandt, Nano Lett. 
[**13**]{}, 3298–3304 (2013). Howard E Jackson, Leigh M Smith, Chennupati Jagadish, ECS Trans. [**64**]{}, 1–5 (2014). V. M. Kaganer, A. Yu. Belov, Phys. Rev. B [**85**]{}, 125402 (2012). Martin Wölz, Manfred Ramsteiner, Vladimir M Kaganer, Oliver Brandt, Lutz Geelhaar, Henning Riechert, Nano Lett. [**13**]{}, 4053–4059 (2013). Kazuhiro Shimada, Jpn. J. Appl. Phys. [**45**]{}, L358 (2006). Wei Guo, Animesh Banerjee, Pallab Bhattacharya, Boon S. Ooi, Appl. Phys. Lett. [**98**]{}, 193102 (2011). Martin Wölz, Sergio Fernández-Garrido, Christian Hauswald, Oliver Brandt, Friederich Limbach, Lutz Geelhaar, Henning Riechert, Cryst. Growth Des. [**12**]{}, 5686–5692 (2012). G Tourbot and C Bougerol and F Glas and L F Zagonel and Z Mahfoud and S Meuret and P Gilet and M Kociak and B Gayral and B Daudin, Nanotechnology [**23**]{}, 135703 (2012). A E H Love, A Treatise on the Mathematical Theory of Elasticity, 4th ed. (Dover, New York, 1944). S. Timoshenko and J. Goodier, Theory of Elasticity, 2nd ed. (McGraw-Hill, New York, 1951). G. W. Housner and T. Vreeland, Jr., The Analysis of Stress and Deformation (McMillan, New York, 1966). Ian Robinson, Ross Harder, Nat. Mater. [**8**]{}, 291–298 (2009). J. Stangl, C. Mocuta, V. Chamard, D. Carbone, Nanobeam X-Ray Scattering (Wiley-VCH Verlag GmbH and Co. KGaA 2013). Mario Keplinger, Bernhard Mandl, Dominik Kriegner, V[á]{}clav Hol[ý]{}, Lars Samuelsson, G[ü]{}nther Bauer, Knut Deppert, Julian Stangl, J. Synchrotron Radiat. [**22**]{}, 59–66 (2015). M. Hanke, M. Dubslaff, M. Schmidbauer, T. Boeck, S. Schröder, M. Burghammer, C. Riekel, J. Patommel, C. G. Schroer, Appl. Phys. Lett. [**92**]{}, 193109 (2008). J. Gulden, S. O. Mariager, A. P. Mancuso, O. M. Yefanov, J. Baltser, P. Krogstrup, J. Patommel, M. Burghammer, R. Feidenhans’l, I. A. Vartanyants, Phys. Status Solidi (a) [**208**]{}, 2495–2498 (2011). M. Dubslaff, M. Hanke, M. Burghammer, S. Schröder, R. Hoppe, C. G. Schroer, Yu. I. Mazur, Zh. M. Wang, J. H. Lee, G. J.
Salamo, Appl. Phys. Lett. [**98**]{}, 213105 (2011). M. Hanke, C. Eisenschmidt, P. Werner, N. D. Zakharov, F. Syrowatka, F. Heyroth, P. Schäfer, and O. Konovalov, Phys. Rev. B [**75**]{}, 161303 (2007). J. G. Swadener and S. T. Picraux, J. Appl. Phys. [**105**]{}, 044310 (2009). C.-Y. Wen, M. C. Reuter, D. Su, E. A. Stach, and F. M. Ross, Nano Lett. [**15**]{}, 1654–9 (2015). Moïra Hocevar, G. Immink, M. Verheijen, N. Akopian, V. Zwiller, L. Kouwenhoven, and E. Bakkers, Nat. Commun. [**3**]{}, 1266 (2012). C M Haapamaki, R R Lapierre, Nanotechnology [**22**]{}, 335602 (2011). G. Z. Gotoh, K. Tateno, M. D. Birowosuto, M. Notomi, T. Sogawa, and Hideki, Nanotechnology [**26**]{}, 115704 (2015). P D[ł]{}uzewski, E Janik, S Kret, W Zaleszczyk, D Tang, G Karczewski, T Wojtowicz, J. Microsc. [**237**]{}, 337–340 (2010).
--- abstract: 'We study the generation of large-scale vortices in rotating turbulent convection by means of Cartesian direct numerical simulations. We find that for sufficiently rapid rotation, cyclonic structures on a scale large in comparison to that of the convective eddies emerge, provided that the fluid Reynolds number exceeds a critical value. For slower rotation, cold cyclonic vortices are preferred, whereas for rapid rotation, warm anti-cyclonic vortices are favoured. In some runs in the intermediate regime both types of vortices coexist for thousands of convective turnover times. The temperature contrast between the vortices and the surrounding atmosphere is of the order of five per cent. We relate the simulation results to observations of rapidly rotating late-type stars that are known to exhibit large high-latitude spots from Doppler imaging. In many cases, cool spots are accompanied by spotted regions with temperatures higher than the average. In this paper, we investigate a scenario according to which the spots observed in the temperature maps could have a non-magnetic origin due to large-scale vortices in the convection zones of the stars.' author: - 'Petri J. Käpylä$^{1,2}$, Maarit J. Mantere$^{1}$ and Thomas Hackman$^{1,3}$' bibliography: - 'paper.bib' title: 'Starspots due to large-scale vortices in rotating turbulent convection' --- Introduction ============ Rotating turbulent convection is considered to play a crucial role in the generation of large-scale magnetic fields [@M78; @KR80; @RH04] and the differential rotation of stars [@R89]. The interaction of rotation and inhomogeneous turbulence leads to the so-called $\alpha$-effect, which can sustain large-scale magnetic fields [e.g. @B01; @KKB09b]. However, in many astrophysically relevant cases large-scale shear flows are also present, which further facilitate dynamo action by lowering the relevant critical dynamo number.
In the Sun, for example, the entire convection zone is rotating differentially [cf. @Schouea98; @Thompsonea03], and a meridional flow towards the poles is observed in the near-surface layers [e.g. @ZK04]. These flows are most often attributed to rotationally influenced turbulent angular momentum and heat transport [cf. @R89; @RC01; @MBT06; @KMGBC11]. In the solar case the large-scale flows and also the magnetic activity are largely axisymmetric [e.g. @PBKT06]. This means that the sunspots, which are concentrations of strong magnetic fields, are almost uniformly distributed in longitude over the solar surface. The fact that we observe the sunspots and can attribute magnetic fields to them has strongly influenced the interpretation of data from stars other than the Sun. The giant planets Jupiter and Saturn are also likely to have outer convection zones [e.g. @Busse76], but they rotate much faster than the Sun. Bands of slower and faster rotation alternate in their atmospheres, reminiscent of rapidly rotating convection [e.g. @Busse94; @HA07]. However, especially in Jupiter, large spots in the form of immense storms are observed [@Marcus93]. Remarkably, the largest of these, the Great Red Spot, has persisted for at least 180 years. Similar features are also observed in Saturn [e.g. @saturn1991] and other giant planets. The spots on giant planets are not of magnetic origin, although dynamos are likely to be present in the interiors of the planets. Thus their explanation is probably related to hydrodynamical processes within the convectively unstable layers. Late-type stars with higher rotation velocities in comparison to the Sun, on the other hand, often exhibit light curve variations that are usually interpreted as large spots on the stellar surface [e.g. @Chuga66; @Henry1995]. In some cases the observational data can be fitted with a model of two large spots at a 180 degree separation in longitude [@BT98].
There is also evidence that these ‘active longitudes’ are not equal in strength [e.g. @Jyri11; @Marjaana11], and that the relative strength of the spots can, at least temporarily, reverse in a process dubbed ‘flip-flop’ [cf. @Jetsu1993]. One interpretation of the data is that the spots are of magnetic origin and that the flip-flops are related to magnetic cycles reminiscent of the solar cycle [e.g. @BBIT98]. On the other hand, it has been proposed that the flip-flops are only short-term changes related to the activity cycle, while the structure generating the temperature minima would migrate in the orbital reference frame, which could be interpreted as an azimuthal dynamo wave [e.g. @Jyri11; @Marjaana11]. Again, this interpretation relies on the magnetic nature of the cool spots. The cool spots detected by photometry and by Doppler imaging of spectroscopic observations have been taken as an indirect proxy of the magnetic field on the stellar surface, deriving from the analogy to sunspots: a strong magnetic field hinders convection and causes the magnetized region to be cooler than its surroundings. Zeeman-Doppler imaging of spectropolarimetric observations [e.g. @Semel89; @Donati89; @PK02; @Carroll07] provides a means to directly measure the magnetic field strength and orientation on the stellar surface. In the study of @Donati97, spectropolarimetric observations of several stars were collected during 23 nights extending over a five-year interval. They report that the Zeeman signatures of the cool stars almost always exhibit a very complex shape with many successive sign reversals. This points to a rather complicated field structure with different magnetic regions of opposite polarities. Furthermore, the magnetic regions detected were mostly 500 to 1,000 K cooler than, and sometimes at the same temperature as, but never warmer than, the surrounding photosphere.
In the published temperature and magnetic field maps for AB Dor [@DC97], however, no clear correlation between temperature and magnetic field strength can be seen: in the temperature maps a pronounced cool polar cap with weak fringes towards lower latitudes is visible, whereas the strongest magnetic fields are seen as patchy structures at lower latitudes with a clearly different distribution than the temperature structures. A similar decorrelation of temperature minima and magnetic field strength has been reported with the same method for different objects [e.g. @Donati99; @Jeffers11], and also for the same objects with different methods [e.g. @Hussain2000; @Oleg11]. The phenomenon, therefore, seems to be widespread and method-independent. One possible explanation for the decorrelation of magnetic field and temperature structures could be that there is simply less light coming from the spotted parts than from the unspotted surface. Thus the Zeeman signatures from cool spots may be “drowned” in the signal from the unspotted surface or bright features. However, this should lead to systematic effects where the detected magnetic field strength would be correlated with the surface temperature. The least squares deconvolution technique [LSD, e.g. @Donati97], which is necessary for enhancing the Zeeman signal, may influence the temperature and magnetic Doppler imaging differently. The latitudes of any surface features in Doppler images are always more unreliable than the longitudes, a fact that does not make a comparison of temperature and magnetic field maps any easier. One could thus expect that there could be artificial discrepancies in the latitudes of magnetic and temperature features. Still, the lack of connection between even the longitudes of cool spots and magnetic features is surprising.
In this paper we consider a completely different scenario, according to which the formation of temperature anomalies on the surfaces of rapidly rotating late-type stars could occur due to a hydrodynamical instability creating large-scale vortices, analogously to the giant planets in the solar system. To demonstrate this mechanism in action, we simulate rotating turbulent convection in local Cartesian domains representing parts of the stratified stellar convection zones located near the polar regions. We show that under such a setting, large-scale vortices or cyclones are indeed generated, provided that the rotation is sufficiently rapid and the Reynolds number exceeds a critical value. Depending on the handedness of the vortex, which in turn depends on the rotation rate, the resulting spot can be cooler or warmer than the surrounding atmosphere. We acknowledge that our model is rather primitive, lacking realistic radiation transport and spherical geometry, and relying on a polytropic setup for the stratification, so a detailed comparison with observations is not possible at this point. However, the main purpose of the present paper is to provide a proof of concept of the existence, in rapidly rotating hydrodynamic convection, of large-scale vortices with temperature anomalies close to those observed. We also note that similar large-scale cyclonic structures have recently been reported from large-eddy simulations of turbulent convection [@Chan03; @Chan07]. We make comparisons to these studies when possible. The model {#sec:model} ========= Our model setup is similar to that used by [@KKB09b] but without magnetic fields. A rectangular portion of a star is modeled by a box situated at colatitude $\theta$. The box is divided into three layers: an upper cooling layer, a convectively unstable layer, and a stable overshoot layer (see below).
We solve the following set of equations for compressible hydrodynamics: $$\frac{\mathrm{D} \ln \rho}{\mathrm{D}t} = -{\bm{\nabla} \cdot }{\bm U},$$ $$\frac{\mathrm{D} \bm U}{\mathrm{D}t} = -\frac{1}{\rho}{\bm \nabla}p + {\bm g} - 2\bm{\Omega} \times \bm{U} + \frac{1}{\rho} \bm{\nabla} \cdot 2 \nu \rho \mbox{\boldmath ${\sf S}$}, \label{equ:UU}$$ $$\frac{\mathrm{D} e}{\mathrm{D}t} = - \frac{p}{\rho}{\bm{\nabla} \cdot }{\bm U} + \frac{1}{\rho} \bm{\nabla} \cdot K \bm{\nabla}T + 2 \nu \mbox{\boldmath ${\sf S}$}^2 - \frac{e\!-\!e_0}{\tau(z)}, \label{equ:ene}$$ where $\mathrm{D}/\mathrm{D}t = {\partial}/{\partial}t + \bm{U} \cdot \bm{\nabla}$ is the advective time derivative, $\nu$ is the kinematic viscosity, $K$ is the heat conductivity, $\rho$ is the density, $\bm{U}$ is the velocity, $\bm{g} = -g\hat{\bm{z}}$ is the gravitational acceleration, and $\bm{\Omega}=\Omega_0(-\sin \theta,0,\cos \theta)$ is the rotation vector. The fluid obeys an ideal gas law $p=(\gamma-1)\rho e$, where $p$ and $e$ are pressure and internal energy, respectively, and $\gamma = c_{\rm P}/c_{\rm V} = 5/3$ is the ratio of specific heats at constant pressure and volume, respectively. The specific internal energy per unit mass is related to the temperature via $e=c_{\rm V} T$. The rate of strain tensor $\mbox{\boldmath ${\sf S}$}$ is given by $${\sf S}_{ij} = \onehalf (U_{i,j}+U_{j,i}) - \onethird \delta_{ij} {\bm{\nabla} \cdot }\bm{U}.$$ The last term of Eq. (\[equ:ene\]) describes cooling at the top of the domain. Here $\tau(z)$ is a cooling time which has a profile smoothly connecting the upper cooling layer and the convectively unstable layer below, where $\tau\to\infty$. The positions of the bottom of the box, bottom and top of the convectively unstable layer, and the top of the box, respectively, are given by $(z_1, z_2, z_3, z_4) = (-0.85, 0, 1, 1.15)d$, where $d$ is the depth of the convectively unstable layer. 
Initially the stratification is piecewise polytropic with polytropic indices $(m_1, m_2, m_3) = (3, 1, 1)$, which leads to a convectively unstable layer above a stable layer at the bottom of the domain. In a system set up this way, convection transports 20 per cent of the total flux [cf. @BCNS05]. Due to the presence of the cooling term, a stably stratified isothermal layer is formed at the top. The horizontal extent of the box, $L_{\rm H}\equiv L_x=L_y$, is $4d$. All simulations with rotation are made at the North pole, corresponding to $\theta=0\degr$. The simulations were performed with the [Pencil Code]{}[^1], which is a high-order finite difference method for solving the compressible equations of magnetohydrodynamics. Units and nondimensional parameters ----------------------------------- Nondimensional quantities are obtained by setting $$\begin{aligned} d = g = \rho_0 = c_{\rm P} = 1\;,\end{aligned}$$ where $\rho_0$ is the initial density at $z_2$. The units of length, time, velocity, density, and entropy are $$\begin{aligned} && [x] = d\;,\;\; [t] = \sqrt{d/g}\;,\;\; [U]=\sqrt{dg}\;,\;\; \nonumber \\ && [\rho]=\rho_0\;,\;\; [s]=c_{\rm P}.\end{aligned}$$ We define the Prandtl number and the Rayleigh number as $$\begin{aligned} {{\rm Pr}}=\frac{\nu}{\chi_0}\;,\;\; {{\rm Ra}}=\frac{gd^4}{\nu \chi_0} \bigg(-\frac{1}{c_{\rm P}}\frac{{\rm d}s}{{\rm d}z } \bigg)_0\;,\end{aligned}$$ where $\chi_0 = K/(\rho_{\rm m} c_{\rm P})$ is the thermal diffusivity, and $\rho_{\rm m}$ is the density in the middle of the unstable layer, $z_{\rm m} = \onehalf(z_3-z_2)$. 
The entropy gradient, measured at $z_{\rm m}$, in the nonconvecting hydrostatic state, is given by $$\begin{aligned} \bigg(-\frac{1}{c_{\rm P}}\frac{{\rm d}s}{{\rm d}z}\bigg)_0 = \frac{\nabla-\nabla_{\rm ad}}{H_{\rm P}}\;,\end{aligned}$$ where $\nabla-\nabla_{\rm ad}$ is the superadiabatic temperature gradient with $\nabla_{\rm ad} = 1-1/\gamma$, $\nabla = ({\partial}\ln T/{\partial}\ln p)_{z_{\rm m}}$, and where $H_{\rm P}$ is the pressure scale height. The amount of stratification is determined by the parameter $\xi_0 =(\gamma-1) e_0/(gd)$, which is the pressure scale height at the top of the domain normalized by the depth of the unstable layer. We use $\xi_0 =1/3$ in all cases, which results in a density contrast of about 23 across the domain. We define the Reynolds and Peclet numbers via $$\begin{aligned} {\rm Re} = \frac{{u_{\rm rms}}}{\nu {k_{\rm f}}}\;,\;\; {{{\rm Pe}}} = \frac{{u_{\rm rms}}}{\chi_0 {k_{\rm f}}} = \Pr\ {\rm Re}\;,\end{aligned}$$ where ${k_{\rm f}}= 2\pi/d$ is adopted as an estimate for the wavenumber of the energy-carrying eddies, and ${u_{\rm rms}}=\sqrt{3 u_z^2}$. This definition neglects the contributions from the large-scale vortices that are generated in the rapid rotation regime. Note that with our definitions ${{\rm Re}}$ and ${{\rm Pe}}$ are smaller than the usual one by a factor $2\pi$. The amount of rotation is quantified by the Coriolis number, defined as $$\begin{aligned} {\rm Co} = \frac{2\Omega_0}{{u_{\rm rms}}{k_{\rm f}}}\;. \label{equ:Co}\end{aligned}$$ We also quote the value of the Taylor number, $${{\rm Ta}}=\left(2\Omega_0 d^2/\nu\right)^2,$$ which is related to the Ekman number via ${\rm Ek}={{\rm Ta}}^{-1/2}$. Boundary conditions ------------------- The horizontal boundaries are periodic for all variables. Stress-free conditions are used for the velocity at the vertical boundaries. 
$$\begin{aligned} U_{x,z}=U_{y,z}=U_z=0.\end{aligned}$$ Temperature is kept constant on the upper boundary and the temperature gradient $$\begin{aligned} \frac{dT}{dz}=\frac{-g}{c_{\rm V}(\gamma-1)(m+1)},\end{aligned}$$ is held constant at the lower boundary, yielding a constant heat flux $F_0=-K {\partial}T/{\partial}z$ through the lower boundary. [cccccccccccc]{} A1 & $256^2\times 128$ & $0.048$ & $0.020$ & $33$ & $8$ & $0.24$ & $2.0\cdot10^6$ & $15.5$ & $4.0\cdot10^8$ & $1.7\cdot10^{-5}$ & yes (A)\ A2 & $256^2\times 128$ & $0.018$ & $0.017$ & $13$ & $6$ & $0.48$ & $1.0\cdot10^6$ & $14.4$ & $5.6\cdot10^7$ & $1.7\cdot10^{-5}$ & no\ A3 & $256^2\times 128$ & $0.022$ & $0.019$ & $21$ & $7$ & $0.36$ & $1.3\cdot10^6$ & $12.3$ & $1.0\cdot10^8$ & $1.7\cdot10^{-5}$ & no\ A4 & $256^2\times 128$ & $(0.063)$ & $0.023$ & $37$ & $9$ & $0.24$ & $2.0\cdot10^6$ & $10.3$ & $2.3\cdot10^8$ & $1.7\cdot10^{-5}$ & yes (A)\ A5 & $256^2\times 128$ & $0.021$ & $0.020$ & $16$ & $9$ & $0.48$ & $1.0\cdot10^6$ & $7.9$ & $2.5\cdot10^7$ & $1.7\cdot10^{-5}$ & no\ A6 & $256^2\times 128$ & $0.024$ & $0.023$ & $24$ & $9$ & $0.36$ & $1.3\cdot10^6$ & $7.0$ & $4.4\cdot10^7$ & $1.7\cdot10^{-5}$ & no\ A7 & $256^2\times 128$ & $(0.093)$ & $0.026$ & $42$ & $10$ & $0.24$ & $2.0\cdot10^6$ & $6.1$ & $1.0\cdot10^8$ & $1.7\cdot10^{-5}$ & yes (A+C)\ A8 & $256^2\times 128$ & $0.028$ & $0.027$ & $28$ & $11$ & $0.36$ & $1.3\cdot10^6$ & $3.6$ & $1.6\cdot10^7$ & $1.7\cdot10^{-5}$ & no\ A9 & $256^2\times 128$ & $0.082$ & $0.028$ & $45$ & $11$ & $0.24$ & $2.0\cdot10^6$ & $3.4$ & $3.6\cdot10^7$ & $1.7\cdot10^{-5}$ & yes (C)\ A9b & $256^2\times 128$ & $(0.070)$ & $(0.031)$ & $49$ & $12$ & $0.24$ & $2.0\cdot10^6$ & $2.1$ & $1.6\cdot10^7$ & $1.7\cdot10^{-5}$ & decay\ A10 & $256^2\times 128$ & $0.032$ & $0.033$ & $53$ & $13$ & $0.24$ & $2.0\cdot10^6$ & $1.0$ & $4.0\cdot10^6$ & $1.7\cdot10^{-5}$ & no\ A11 & $256^2\times 128$ & $0.038$ & $0.038$ & $61$ & $15$ & $0.24$ & $2.0\cdot10^6$ & 0 & $0$ & $1.7\cdot10^{-5}$ & no\ B1 & 
$256^2\times 128$ & $0.017$ & $0.016$ & $26$ & $13$ & $0.48$ & $4.0\cdot10^6$ & $9.7$ & $1.0\cdot10^8$ & $8.6\cdot10^{-6}$ & no\ B2 & $256^2\times 128$ & $(0.021)$ & $(0.017)$ & $37$ & $13$ & $0.36$ & $5.4\cdot10^6$ & $9.1$ & $1.8\cdot10^8$ & $8.6\cdot10^{-6}$ & yes (A+C)\ B3 & $256^2\times 128$ & $(0.034)$ & $(0.020)$ & $63$ & $15$ & $0.24$ & $8.0\cdot10^6$ & $8.0$ & $4.0\cdot10^8$ & $8.6\cdot10^{-6}$ & yes (A+C)\ C1 & $256^2\times 128$ & $0.011$ & $0.011$ & $17$ & $16$ & $0.96$ & $8.0\cdot10^6$ & $14.8$ & $1.0\cdot10^8$ & $4.3\cdot10^{-6}$ & no\ C2 & $256^2\times 128$ & $(0.014)$ & $(0.012)$ & $25$ & $18$ & $0.72$ & $1.1\cdot10^7$ & $13.6$ & $1.8\cdot10^8$ & $4.3\cdot10^{-6}$ & no\ C3 & $256^2\times 128$ & $(0.022)$ & $(0.014)$ & $44$ & $21$ & $0.48$ & $1.6\cdot10^7$ & $11.6$ & $4.0\cdot10^8$ & $4.3\cdot10^{-6}$ & yes (A)\ D1 & $256^2\times 128$ & $0.013$ & $0.013$ & $42$ & $51$ & $1.20$ & $4.0\cdot10^7$ & $7.2$ & $1.4\cdot10^8$ & $1.7\cdot10^{-6}$ & no\ D2 & $512^2\times 256$ & $(0.038)$ & $(0.013)$ & $101$ & $49$ & $0.48$ & $1.0\cdot10^8$ & $7.5$ & $9.0\cdot10^8$ & $1.7\cdot10^{-6}$ & yes (A+C) \[tab:runs\] Results {#sec:results} ======= We perform a number of numerical experiments in order to determine the conditions under which large-scale cyclones are excited. The basic input parameters and some key diagnostic outputs of the simulations are listed in Table \[tab:runs\]. We perform a few (Set A) or a single (Sets B, C, and D) progenitor run with a given Peclet number in each Set from which the rest of the runs are obtained by continuing from a saturated snapshot and changing the value of the kinematic viscosity $\nu$ in order to change ${{\rm Re}}$. The higher resolution run D2 was remeshed from a lower resolution case D1. Excitation of large-scale vortices ---------------------------------- We perform several sets of runs where the Peclet number and input energy flux are constant, whereas the Reynolds and Coriolis numbers are varied. 
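The control and diagnostic parameters listed in Table \[tab:runs\] follow directly from the definitions given above. The following sketch (Python, with illustrative input values not taken from any particular run) makes the relations between them explicit:

```python
import numpy as np

# Evaluate the diagnostic numbers of the model section from sample values.
# All numbers below are illustrative placeholders, not from any actual run.
d = 1.0             # depth of the convectively unstable layer (code units)
nu = 2.4e-4         # kinematic viscosity (assumed)
chi0 = 1.0e-3       # thermal diffusivity chi_0 = K/(rho_m c_P) (assumed)
u_rms = 0.05        # rms velocity, u_rms = sqrt(3 <u_z^2>) (assumed)
Omega0 = 0.5        # rotation rate (assumed)

kf = 2 * np.pi / d                    # wavenumber of energy-carrying eddies
Re = u_rms / (nu * kf)                # Reynolds number
Pe = u_rms / (chi0 * kf)              # Peclet number (= Pr * Re)
Pr = nu / chi0                        # Prandtl number
Co = 2 * Omega0 / (u_rms * kf)        # Coriolis number
Ta = (2 * Omega0 * d**2 / nu) ** 2    # Taylor number
Ek = Ta ** -0.5                       # Ekman number, Ek = Ta^(-1/2)

print(f"Re={Re:.1f}, Pe={Pe:.1f}, Pr={Pr:.2f}, Co={Co:.2f}, Ta={Ta:.2e}")
```

Note that, as stated in the text, these Re and Pe are smaller by a factor $2\pi$ than definitions based on $d$ alone.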
We are limited to exploring a small number of cases due to the slow growth of the vortices, see Table \[tab:runs\]. Typically the time needed for the saturation of the cyclones is several thousand convective turnover times (see Fig. \[purms.eps\]). Thus many of our runs were run until the presence or absence of the cyclones was apparent. ![Upper panel: total velocity from Runs A5 and A7. Lower panel: velocity components $\sqrt{u_x^2}$ (black), $\sqrt{u_y^2}$ (red), and $\sqrt{u_z^2}$ (blue) from Run A7. The jump at $t {u_{\rm rms}}{k_{\rm f}}\approx 500$ is due to a lowering of $\nu$ at this point.[]{data-label="purms.eps"}](purms.eps){width="\columnwidth"} We find that a reliable diagnostic indicating the presence of large-scale vortices is to compare the rms-value of the total velocity, ${U_{\rm rms}}$, and the volume average of the quantity ${u_{\rm rms}}=\sqrt{3u_z^2}$. The latter neglects the horizontal velocity components, which grow significantly when large-scale cyclones are present (see the lower panel of Fig. \[purms.eps\]). In the cyclone-free regime, irrespective of the rotation rate, we find that ${U_{\rm rms}}\approx{u_{\rm rms}}$, suggesting that the flow is only weakly anisotropic (see Table \[tab:runs\]). In the growth phase of the vortices one of the horizontal velocity components is always stronger, but the relative strength of the components changes as a function of time (see the lower panel of Fig. \[purms.eps\]). This undulation is related to quasi-periodic changes of the large-scale pattern of the flow, although its ultimate cause is not clear. Another quantitative diagnostic is to monitor the power spectrum of the flow from a horizontal plane within the convection zone. A typical example is shown in Fig. \[pspec\_256x128b1\], where power spectra of the velocity from the middle of the convection zone at two different times from Run B3 are shown.
The snapshot from $t {u_{\rm rms}}{k_{\rm f}}= 1830$ is the initial state for Run B3, taken from Run B1, showing no cyclones. The power spectrum shows a maximum at $k/k_1=7$, indicating that most of the energy is contained in structures having a size typical of the convective eddies. However, as the run is continued further, a large-scale contribution due to the appearance of the vortices, peaking at $k/k_1=1$, grows and ultimately dominates the power spectrum. We note that this run was not run to saturation, so the peak at $k/k_1=1$ is likely to be even higher in the final state. The presence of the vortices is also evident from a visual inspection of the flow. A typical example is shown in Fig. \[psnap\_512x256a2\], where the vertical velocity component, $u_z$, is shown from the periphery of the domain for Run D2. ![Power spectra of velocity from early (dashed line) and late (solid) times from Run B3.[]{data-label="pspec_256x128b1"}](pspec_256x128b1.eps){width="\columnwidth"} The data in Table \[tab:runs\] suggest that large-scale vorticity is excited provided that the Reynolds number exceeds a critical value, ${{\rm Re}}_{\rm c}$. For ${{\rm Pe}}\approx 10$ (Set A) we find that ${{\rm Re}}_{\rm c}$ is around 30, although the sparse coverage of the parameter range does not allow a very precise estimate to be made. We find a similar value for ${{\rm Re}}_{\rm c}$ in Sets B and C, whereas for ${{\rm Pe}}\approx50$ in Set D, the critical Reynolds number is greater than 42. In Set C, Runs C1 and C2 were started from a snapshot of Run C3 at a time when vortices were already clearly developing. In both cases we find that the cyclones decay, suggesting that their presence is not strongly dependent on the history of the run. The critical Coriolis number in Set A is somewhere between 2.1 and 3.4.
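The shell-integrated power spectrum used in this diagnostic can be sketched as follows (a minimal Python version acting on a single horizontal slice; the synthetic test field is not simulation data):

```python
import numpy as np

# Shell-integrated power spectrum of a horizontal slice u(x, y) on a
# periodic grid; a peak migrating to k/k_1 = 1 signals a box-scale vortex.
def horizontal_power_spectrum(u_slice):
    nx, ny = u_slice.shape
    power = np.abs(np.fft.fft2(u_slice) / (nx * ny)) ** 2
    kx = np.fft.fftfreq(nx) * nx          # integer wavenumbers in units of k_1
    ky = np.fft.fftfreq(ny) * ny
    kmag = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    ks = np.arange(1, min(nx, ny) // 2)
    spectrum = np.array([power[(kmag >= k - 0.5) & (kmag < k + 0.5)].sum()
                         for k in ks])
    return ks, spectrum

# Synthetic box-filling pattern: all power lands in the k/k_1 = 1 shell.
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
ks, E = horizontal_power_spectrum(np.sin(X) * np.sin(Y))
print(ks[np.argmax(E)])  # -> 1
```

Applied to successive snapshots, this is exactly the kind of diagnostic that reveals the growth of the $k/k_1=1$ contribution seen in Fig. \[pspec\_256x128b1\].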
Again a very precise determination cannot be made, but continuing from a saturated snapshot of Run A9 with a somewhat lower rotation rate indicates that the vortices decay (Run A9b). We have limited the present study to the North pole ($\theta=0$), but vortices are also excited at least down to latitude $\theta=45\degr$ in the study of [@Chan07]. ![Vertical velocity component $U_z$ at the periphery of the box from Run D2. See also http://www.helsinki.fi/$\sim$kapyla/movies.html. The top and bottom panels show slices near the top and bottom of the convectively unstable layer, respectively.[]{data-label="psnap_512x256a2"}](512x256a_Ur.ps){width="\columnwidth"} ![image](puu_slices.ps){width="80.00000%"} Thermal properties of the cyclones ---------------------------------- In order to study the possible observable and other effects of the vortices, we ran a few simulations in Set A (Runs A1, A4, A7, A9, A10, and A11) to full saturation. Figures \[puu\_slices\] and \[pT\_slices\] show the vertical velocity and temperature in the saturated regime from the six runs listed above. In the non-rotating and slowly rotating cases (the two rightmost panels in the lower rows of Figs \[puu\_slices\] and \[pT\_slices\]), convection shows a typical cellular pattern. Vorticity is generated at small scales at the vertices of the convection cells, but no large-scale pattern arises. We note that long-lived large-scale circulation can also emerge in non-rotating convection [e.g. @Bukai09]. However, such structures are not likely to be of relevance in rapidly rotating stars. When the rotation is increased to ${{\rm Co}}\approx3.3$, a cyclonic vortex, i.e. one rotating in the same sense as the overall rotation of the star, appears (the lower left panels of Figs. \[puu\_slices\] and \[pT\_slices\]). Vertical motions are suppressed within the vortex and it appears as a cool spot in the temperature slice. Increasing rotation further to ${{\rm Co}}\approx6$, also an anti-cyclonic, i.e.
rotating against the overall fluid rotation, warm vortex appears (the rightmost upper panels of Figs. \[puu\_slices\] and \[pT\_slices\]). In Run A7 the two vortices coexist for thousands of convective turnover times. In the most rapidly rotating cases A1 and A4 (the two leftmost panels in the upper rows of Figs. \[puu\_slices\] and \[pT\_slices\]) a single anti-cyclonic vortex persists in the saturated regime. A similar behaviour as a function of rotation was found by [@Chan07] from large-eddy simulations. The anti-cyclonic vortices show vigorous convection, whereas in the surrounding regions convection appears suppressed. Due to the enhanced energy transport by convection, the anti-cyclones appear as warmer structures than their surroundings in the temperature slices. Figure \[pgeos\] shows that in Runs A9 and A1 the flow is in geostrophic balance, i.e. that the flow follows the isocontours of pressure for both types of vortices. The cyclone in Run A9 appears as a low pressure area, similarly to the cyclones in the atmosphere of the Earth, whereas the anti-cyclone in Run A1 coincides with a high pressure region. A weaker high pressure region is also present in Run A9. It is not clear whether this kind of single- or two-spot configuration lasts if the domain is larger in the horizontal directions, or whether a greater number of spots appears. We find that the temperature contrast between the spot and the surrounding medium is of the order of five per cent (Fig. \[ptempc\]) for both types of vortices. Although the relative temperature contrast between the vortex and the surrounding vortex-free convection seems to be a robust feature in the simulations, we must remain cautious when comparing the results with observations. This is due to the rather primitive nature of the simulations, which lack realistic radiation transport. Convection in our model is also fairly inefficient by design, with only 20 per cent of the total flux being carried by it.
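The quoted contrast is simply the relative difference between the mean temperatures inside and outside the vortex. A small sketch (Python, on synthetic data with a hypothetical circular vortex mask, not an actual simulation slice) illustrates the measurement:

```python
import numpy as np

# Relative temperature contrast between a vortex and its surroundings,
# estimated from a horizontal temperature slice and a mask of the vortex.
def temperature_contrast(T_slice, vortex_mask):
    T_vortex = T_slice[vortex_mask].mean()
    T_ambient = T_slice[~vortex_mask].mean()
    return (T_vortex - T_ambient) / T_ambient

# Synthetic example: a circular spot five per cent hotter than ambient,
# loosely analogous to a warm anti-cyclone.
n = 128
y, x = np.mgrid[0:n, 0:n]
mask = (x - n / 2) ** 2 + (y - n / 2) ** 2 < (n / 8) ** 2
T = np.ones((n, n))
T[mask] = 1.05

print(round(temperature_contrast(T, mask), 3))  # -> 0.05
```

A cool cyclone would simply yield a negative contrast with the same definition.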
Dynamo considerations and discussion ------------------------------------ Figure \[plot\_heli\] shows the horizontally averaged kinetic helicity, $\overline{\bm\omega \cdot{\bm u}}$, where $\bm{\omega}=\bm\nabla\times{\bm u}$, from Runs A9 and A1 from the initial, purely convective cyclone-free, and final fully saturated stages of the simulations. The data is averaged over a period of roughly 60 convective turnover times in each case. We find that in Run A9, where a cool cyclonic vortex appears, there is almost no change in the kinetic helicity between the initial and final stages of the simulation. In this run convection, and thus vertical motions, are largely suppressed within the vortex (see Fig. \[puu\_slices\]). Furthermore, the dominant contribution to the vorticity due to the cyclone arises via the vertical component $\omega_z={\partial}_x u_y -{\partial}_y u_x$, which is positive for a cyclonic vortex. These two effects seem to compensate each other and the helicity within the cyclone is not greatly enhanced or depressed with respect to the surroundings. This would indicate that the influence of the cyclonic vortices on the magnetic field amplification would be minor, as the helicity remains unaltered. On the other hand, the strong horizontal motions connected to the cyclone might be able to amplify the field by advecting the field lines. ![image](pT_slices.ps){width="80.00000%"} ![Pressure (colors) and horizontal flows from the middle of the convection zone in Runs A9 (left panel) and A1 (right panel).[]{data-label="pgeos"}](pgeos.ps){width="\columnwidth"} In Run A1, on the other hand, a more pronounced effect is seen, and the helicity is decreased up to a factor of two in the saturated stage (see the right panel of Fig. \[plot\_heli\]). This change is brought about by the different handedness of the vorticity in the anti-cyclone and by the vigorous convection within it (see the upper row of Fig. \[puu\_slices\]). 
The combination of these produces significantly greater helicity in the anti-cyclones, but with a sign predominantly different from that of the surroundings, which leads to the overall decrease noted in Fig. \[plot\_heli\]. The decreased amount of helicity would indicate weaker amplification of the magnetic field by anti-cyclones compared to their surroundings. Again, the strong horizontal motions might counteract this by amplifying the field through advection. The simulations presented here were performed with a setup identical to that used in [@KKB09b hereafter KKB09] to study large-scale dynamo (LSD) action in rotating convection. In KKB09 the generation of large-scale magnetic fields, given that the Coriolis and magnetic Reynolds numbers exceeded critical values, was reported. The critical Coriolis number for LSD action was found to be roughly four, which is close to the critical value for the cyclones to emerge. The relation of the two phenomena is an interesting question that can only be partially answered by the existing magnetohydrodynamic runs. This is because the fluid Reynolds number in the runs of KKB09 was in most cases lower than the ${{\rm Re}}_{\rm c}$ required for the vortices to appear. Only two runs (A10 and D1 of KKB09) are clearly in the parameter regime exceeding the critical values found here, and in another four runs (A5, A6, B5, and C1 of KKB09) the parameters were close to marginal. The Reynolds and Coriolis numbers for these runs were calculated from the saturated state of the dynamo, which in all cases lowers the turbulent velocities somewhat, decreasing the Reynolds and increasing the Coriolis numbers correspondingly. Furthermore, a different definition of the Reynolds number was used by KKB09 than in the present study. A reanalysis of the data of KKB09 suggests that early stages of cyclone formation are in progress in all of the runs listed above.
However, the magnetic field grows on a significantly shorter timescale than the cyclones, and the magnetic field saturates already before a thousand convective turnover times. None of the runs was continued much further than twice that, making it impossible to decide for or against the maintenance of vortices based on these runs. Nonetheless, indications of growing cyclones appear in the kinematic regime, i.e. when the magnetic field is weak in comparison to the kinetic energy of the turbulence, but they are far less clear, or even absent, when the magnetic field saturates. This raises two related questions: firstly, are the vortices responsible for the emergence of the large-scale magnetic fields, and secondly, can the vortices coexist in the regime where strong magnetic fields are present? The current data suggest that the presence of the vortices is not essential for the large-scale magnetic fields, which persist throughout the saturated state, whereas the vortices remain less prominent or suppressed. This is related to the second issue. As noted above, the simulations of KKB09 are too short for the vortices to fully saturate. Thus, we cannot conclusively say whether the lack of vortices in the dynamo regime is due to the magnetic field simply reducing the Reynolds number below the critical value, or due to a direct influence of the Lorentz force on the growing vortices. We will address the questions related to magnetic fields and dynamo action in more detail in a forthcoming paper. ![Temperature as a function of $x$ from a quiescent (solid lines) and cyclonic (dashed) regions for Runs A9 (left panel) and A1 (right panel). The positions of the cuts are indicated in the leftmost panels of Fig. \[pT\_slices\] with corresponding linestyles.
The normalization factor $\overline{T}$ is the horizontal average of the temperature.[]{data-label="ptempc"}](ptempc.eps){width="\columnwidth"} ![Horizontally averaged kinetic helicity $\overline{\omega\cdot u}$ as a function of $z$ from a quiescent (solid lines) and cyclonic (dashed) states for Runs A9 (left panel) and A1 (right panel). The vertical dotted lines at $z=0$ and $z=d$ indicate the bottom and top of the convectively unstable layer, respectively.[]{data-label="plot_heli"}](plot_heli.eps){width="\columnwidth"} Observational implications -------------------------- If large-scale cyclones such as those found in the present study occur in real stars, they will cause observational signatures on the stellar surface due to their lower or higher temperature. The temperature contrasts seen in the surface maps derived by Doppler imaging are somewhat stronger than the value of roughly five per cent found in this study; for instance, on the surface of the active RS CVn binary II Peg, analysed by @Marjaana11 and [@Thomas11], the coolest spot temperatures, depending on the season, are 10-20 per cent below the mean surface temperature. Similar spot temperatures have also been obtained by analysing molecular absorption bands, but cooler stars seem to have a lower spot contrast [@Oneal98]. Given that the numerical model is quite simple, for instance in the sense that the transport of energy by convection is underestimated, this discrepancy is not overwhelmingly large. Interestingly, Doppler images commonly also show hot surface features [cf. @Heidi07; @Marjaana11; @Thomas11]. These may be artefacts of the Doppler imaging procedure, but it is not ruled out that they could arise from the anti-cyclonic vortices seen in the present study. It is obviously very hard to explain the active longitudes and their drift based on the vortex-instability scenario; we believe that a large-scale dynamo process is responsible for these basic features, as commonly believed [e.g.
@KR80; @MBBT95; @tuominen2002starspot]. Nevertheless, it is possible that the vortex-instability contributes to the formation of starspots and may interfere with the dynamo-instability, especially during epochs of lower magnetic activity in the stellar cycle. Although it is very hard to predict the implications of the vortices in the magnetohydrodynamic regime, it would appear natural that spots, either cool or warm, generated by a hydrodynamic vortex-instability could also contribute to the apparent decorrelation of the magnetic field from the temperature structures. The influence of the cyclones and anti-cyclones on the net helicity, important for the amplification of the magnetic field, is either close to zero (cyclones) or acts to decrease the net helicity (anti-cyclones). This would imply that magnetic field amplification is equally or even more difficult in the regions of the vortices; this picture, however, may be complicated by the strong horizontal motions present in these structures, which might amplify the magnetic field simply by their capability of advecting the field lines. Conclusions {#sec:conclusions} =========== We report the formation of large-scale vortices in rapidly rotating turbulent convection in local f-plane simulations. The vortices appear provided that the Reynolds and Coriolis numbers exceed critical values. Near the critical Coriolis number the vortices are cyclonic and cool in comparison to the surrounding atmosphere, whereas for faster rotation warm anti-cyclonic vortices appear [see also @Chan07]. The relative temperature difference between the vortex and its surroundings is of the order of five per cent in all cases. This is of the order of the contrast deduced indirectly from photometric and spectroscopic observations of late-type stars. In our simulations the typical size of the vortices is comparable to the depth of the convectively unstable layer.
However, we have not studied how the size of the structures depends, e.g., on the depth of the convection zone. We propose that the vortices studied here can be present in the atmospheres of rapidly rotating late-type stars, thus contributing to rotationally modulated variations in the brightness and spectrum of the star. Such features have generally been interpreted to be caused by magnetic spots, reminiscent of sunspots. However, our results suggest that the turbulent convection and rapid rotation of these stars can generate large-scale temperature anomalies in their atmospheres via a purely hydrodynamical process. Similar vortex-structures are observed in the atmospheres of Jupiter and Saturn. Although their definitive explanation is still debated, it is possible that they are related to rapidly rotating thermal convection in the atmosphere. However, several issues remain to be sorted out before the reality of cyclones and anti-cyclones in the surface layers of stars can be established. The current model is highly simplified and neglects the effects of sphericity and magnetic fields. In spherical geometry more realistic large-scale flows can occur, which might lead to other hydrodynamical instabilities. However, current rapidly rotating simulations in spherical coordinates have not shown evidence of large-scale vortices [e.g. @BBBMT08; @KKBMT10; @KMGBC11], although non-axisymmetric features are seen near the equator [@BBBMT08]. It is possible that the lack of large-scale vortices in these simulations is related either to the lack of spatial resolution or to too short an integration time. Magnetic fields, on the other hand, are ubiquitous in stars with convection zones. Furthermore, on the Sun they form strong flux concentrations, i.e. sunspots. At the moment, direct simulations cannot self-consistently produce sunspot-like structures in local geometry [e.g. @KBKMR11].
However, the magnetic fields in global simulations are also very different from the high-latitude active longitudes deduced from observations, namely showing more axisymmetric fields residing also near the equator [e.g. @KKBMT10; @BMBBT11]. The apparently poor correlation between magnetic fields and temperature anomalies in surface maps based on Doppler imaging also suggests that an alternative mechanism might be involved. The presence of large-scale high-latitude vortices presents such an alternative. Currently it is not clear what happens to the vortices when magnetic fields are present. Our previous dynamo simulations in the same parameter regime [@KKB09b] did not show clear signs of vortices in the saturated regime of the dynamo, although this might be explained by too short an integration time. Addressing this issue, however, is not within the scope of the present paper, and we will revisit it in a future publication. The simulations were performed using the supercomputers hosted by CSC – IT Center for Science Ltd. in Espoo, Finland, administered by the Finnish Ministry of Education. Financial support from the Academy of Finland grants No. 136189, 140970 (PJK) and 218159, 141017 (MJM), and the ‘Active Suns’ research project at University of Helsinki (TH) is acknowledged. The authors acknowledge the hospitality of NORDITA during their visits. [^1]: http://code.google.com/p/pencil-code/
--- bibliography: - 'Collection.bib' - 'Add\_Collection.bib' --- UT-13-21 [ **Notes on holonomy matrices of\ hyperbolic 3-manifolds with cusps** ]{} 1.2cm Fumitaka Fukui [^1] Department of Physics, Faculty of Science,\ University of Tokyo, Bunkyo-ku, Tokyo 133-0022, Japan 1.5cm **Abstract** In this paper, we give a method to construct holonomy matrices of hyperbolic 3-manifolds by extending the known method for hyperbolic 2-manifolds. It enables us to consider hyperbolic 3-manifolds with nontrivial holonomies. We apply our method to an ideal tetrahedron and succeed in making the holonomies nontrivial. We also derive the partition function of the ideal tetrahedron with nontrivial holonomies by using the duality proposed by Dimofte, Gaiotto and Gukov. Introduction ============ It is well known that Einstein-Hilbert gravity in three dimensions with a negative cosmological constant is equivalent to Chern-Simons theory [@Achucarro1986a; @Witten1988a]. In this correspondence the vierbein and the spin connection are combined into an $\operatorname{{\rm SL}}(2, \mathbb{C})$ connection, and the Einstein equation and the torsionless condition restrict the $\operatorname{{\rm SL}}(2, \mathbb{C})$ connection to be flat. Because the Chern-Simons action includes only one derivative, analyzing Chern-Simons theory is usually easier than analyzing the gravity theory. In $AdS_3$ gravity there is an interesting solution known as the BTZ black hole [@Banados1992]. In the BTZ background the space is topologically equivalent to a solid torus [@Banados1993]. The boundary of the solid torus corresponds to the AdS boundary and the horizon lies on the core of the solid torus. BTZ solutions are characterized by two parameters, the mass and the angular momentum of the black hole, and they correspond to the complex moduli of the boundary torus. On the Chern-Simons theory side, these characteristics are captured by $\operatorname{{\rm SL}}(2, \mathbb{C})$ holonomy matrices.
There is a non-contractible cycle in the solid torus, and the holonomy around the cycle encodes the geometrical data. It is interesting to generalize BTZ solutions to spaces with more complicated topology. There are two possible ways of generalization. First, if we regard a solid torus as a trivial knot complement of $S^3$ as in [@Gukov2005], one possible way is to consider a nontrivial knot complement of $S^3$. For the case of a hyperbolic knot $K$, Chern-Simons theory defined on the knot complement $S^3 \backslash K$ was well studied in [@Dimofte2011; @Fuji2012a; @Fuji2012; @Witten1989a]. The other direction we can take is to regard the BTZ solution as a torus whose inside is filled up, and replace the boundary by a general Riemann surface. We will call such a 3-manifold a “solid Riemann surface”. A particular difficulty occurs in the second generalization: the space can have trivalent vertices, like a pair of pants, and we must handle this new feature appropriately. In this paper we propose a partition function which is dual to the ideal tetrahedron with nontrivial holonomies. The way we proceed in this paper is the second generalization. Among solid Riemann surfaces, the most interesting case is that in which the Riemann surface is a pair of pants. It is important because when we construct any solid Riemann surface by pants decomposition, pairs of “solid pants” (almost) always appear. The main reason is the following: in the case of a BTZ background, a black hole exists at the core of the torus. If we assume that there is a black hole at the core of the general Riemann surface, a pair of solid pants represents a fusion or a fission of black holes. Our strategy depends on the nature of the Chern-Simons theory. Let us consider the Chern-Simons theory defined on a 3-manifold $M$. Since the theory is topological, the vacuum solutions are specified by topological invariants. Because the e.o.m.
of the Chern-Simons theory implies the flat connection condition, the most conventional invariant is a Wilson loop. In fact, from a mathematical perspective, the rigidity theorem ensures that any hyperbolic 3-manifold with finite volume is uniquely determined by the holonomy data of every cycle. In other words, we can specify the solution if we know all Wilson loops of the non-contractible cycles in $M$. Next we consider the geometry of the boundary surface $\partial M$. For our purpose we can restrict $\partial M$ to be a Riemann surface. As will be reviewed in section 2, holonomy matrices are closely related to the geodesic lengths of the cycles. Therefore, if we suppose that $\partial M$ has a hole with a finite geodesic length, the holonomy around the hole is no longer trivial. Then, if this cycle is contractible through $M$, a paradox appears: because of the flat connection condition, a Wilson loop is invariant under deformation of the cycle and hence should be trivial when the cycle is contractible. This observation implies that a line-shaped defect lies inside $M$ which prevents the cycle from shrinking, and this defect is called a cusp. This means that if we want to consider general $M$, we are naturally guided to treat 3-manifolds with cusps. However, cusps are created simply by making holonomies nontrivial, and thus we are able to probe the inside of $M$ by the surface data. Conversely, suppose that $\partial M$ has a hole but the holonomy of the corresponding cycle is set to be trivial. In this case there is no cusp ending at the hole, and we cannot detect the existence of the cycle from Chern-Simons theory. In this paper we study the solid pants with nontrivial holonomies. In order to get a 3-manifold with nontrivial holonomies, we first tackle an ideal tetrahedron as the simplest case.
Since the holonomies around the four vertices of an ideal tetrahedron are trivial, our first task is to relax the holonomy conditions by examining the hyperbolic structure in detail. The solid pants is obtained from a tetrahedron with nontrivial holonomies by trivializing only one holonomy. The computational method of hyperbolic structures is reviewed in section 2, and we discuss its application in section 3. In the following section we propose a wave function, or a partition function, of a DGG dual theory [@Dimofte2011]. Our project aims to analyze gravity theories of general solid Riemann surfaces and derive the partition functions of the gravity theories. In past studies [@Carlip1995; @Carlip1997], the thermodynamics of the BTZ black holes was discussed by using the partition functions. There, the partition functions were derived by using the WZW model, but only the semi-classical behavior was surveyed in detail and the one-loop corrections seem less transparent. We are studying the partition functions of solid Riemann surfaces in preparation for the paper [@Fukui-preparing], and we hope that our study will shed light on the quantum behavior of three dimensional gravity. While we were preparing this paper, a paper [@Dimofte2013] which includes a study of a tetrahedron with cusp defects appeared. They create cusps by using truncated ideal tetrahedra and collecting the truncated vertices around the cusp to form small tubes. Our approach differs from theirs in that we need no truncated tetrahedra, and our method seems simpler when creating 3-manifolds with cusps. Since our result was obtained before [@Dimofte2013] appeared, we decided to separate it from our ongoing project and concentrate on describing our method in this paper. Holonomy Matrices ================= Let us review the relation between the hyperbolic structure and the holonomies of the manifold. We start in two dimensions and review the character of hyperbolic manifolds.
After that, we will extend the result of the two dimensional case to hyperbolic 3-manifolds with boundaries. Hyperbolic 2-manifolds are well studied, and our review of the two dimensional case is based on [@Chekhov2000; @Chekhov2007]. The result of this section will be used in the next section for our case, a tetrahedron with nontrivial holonomies. 2d case ------- Let us consider a Riemann surface $\Sigma _{g, h}$ of genus $g$ with $h$ holes. Generally speaking, if $\Sigma _{g, h}$ has negative Euler number, i.e. $2-2g-h<0$, then $\Sigma _{g, h}$ admits a metric of constant negative curvature and can be embedded in the hyperbolic plane. By the rigidity theorem, a Riemann surface with a hyperbolic metric is uniquely obtained as $\mathbb{H} / \Delta _{g, h}$, where $\mathbb{H}$ is the hyperbolic plane [^2] and $\Delta _{g, h}$ is a finitely generated subgroup of $\operatorname{{\rm SL}}(2, \mathbb{R})$ known as a Fuchsian group. Suppose we take an element $\gamma$ from the Fuchsian group $\Delta _{g, h}$ and diagonalize it as $\gamma = \begin{pmatrix} e^l & 0\\ 0 & e^{-l} \end{pmatrix} $ with $l \in \mathbb{R}$. When $\gamma$ acts on a point $(0, y_0)$ on the y axis, it is transferred to $(0, e^{2l}y_0)$, and taking the quotient $\mathbb{H} / \gamma$ means that the geodesic segment between $(0, y_0)$ and $(0, e^{2l}y_0)$ is compactified into a cycle. The geodesic length of the cycle is calculated as $$\begin{split} \int _{y_0} ^{e^{2l}y_0} \frac{dy}{y} = 2l = 2\cosh ^{-1}\left(\frac{\operatorname{Tr}\gamma}{2}\right). \end{split}$$ Because the trace is invariant under conjugation, the geodesic length of the cycle does not depend on the choice of representative of the conjugacy class. In this way, if we choose a conjugacy class $[\gamma]$ from $\Delta _{g, h}$, there is a corresponding cycle which is created when $\mathbb{H}$ is divided by $\gamma$, and the geodesic length of the cycle is also determined by $[\gamma]$.
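The trace formula above is easy to check numerically: for a hyperbolic element with $|\operatorname{Tr}\gamma| > 2$, the translation length along its axis is $2\cosh^{-1}(|\operatorname{Tr}\gamma|/2)$, and conjugation leaves it unchanged. A minimal sketch (the sample matrices are arbitrary choices for illustration):

```python
import numpy as np

def geodesic_length(gamma):
    """Geodesic length of the cycle associated with a hyperbolic
    element gamma of SL(2, R): 2 * arccosh(|Tr gamma| / 2)."""
    tr = abs(np.trace(gamma))
    assert tr > 2, "element must be hyperbolic"
    return 2 * np.arccosh(tr / 2)

l = 0.7
diag = np.array([[np.exp(l), 0.0], [0.0, np.exp(-l)]])
print(geodesic_length(diag))                     # recovers 2*l = 1.4

# Invariance under conjugation: conjugate by an arbitrary SL(2, R) element
g = np.array([[2.0, 1.0], [1.0, 1.0]])           # det g = 1
conj = g @ diag @ np.linalg.inv(g)
print(np.isclose(geodesic_length(conj), 2 * l))  # True
```

This is the invariance used in the text: the length depends only on the conjugacy class $[\gamma]$, not on the representative.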
Construction of the Fuchsian group is very simple. First, given a Riemann surface $\Sigma _{ g, h }$, we take an ideal triangulation of the fundamental domain of $\Sigma _{g, h}$. Next we assign a parameter to each edge of the triangulation. In fact these variables span the coordinates of Teichmüller space known as shear coordinates [@Fock1997]. Next we consider a cycle in $\Sigma _{g, h}$. When we draw the cycle on the triangulated fundamental domain and walk along it, we pass some edges and turn left or right in some triangles, and we can reconstruct the cycle from the ordered data of which edges we cross and whether we turn left or right in which triangles. One may find that these ordered data are enough to reconstruct the path. Now we are ready to get the holonomy matrix of the cycle. What we must do is encode the ordered data into $\operatorname{{\rm SL}}(2, \mathbb{R})$ matrices in the following way: if we turn left (or right) we multiply by the left-turning matrix $L$ (or the right-turning matrix $R$) from the left, and if we cross the edge with parameter $z$ we multiply by the edge-crossing matrix $X_z$ from the left. The matrices $L, R, X_z$ are defined as $$L=\begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix}, R=\begin{pmatrix} 1 & -1 \\ 1 & 0 \end{pmatrix}, X_z=\begin{pmatrix} 0 & - e^{z/2} \\ e^{-z/2} & 0 \end{pmatrix}.$$ The matrix $L$ (or $R$) makes a left (or right) turn on the triangle whose vertices are $0, 1$ and $\infty$. The matrix $X_z$ transports the points $0, 1, \infty$ to $\infty, -e^z, 0$ and vice versa. [^3] ![We illustrate the fundamental region of $\Sigma _{1, 1}$ as a quotient $\mathbb{H} / \langle \gamma _\alpha , \gamma _\beta \rangle$. (a)The action of $\gamma _\alpha$ is depicted as a red arrow line and the orange edges are identified. (b)The action of $\gamma _\beta$ is depicted as a blue arrow line and the purple edges are identified. (c)Two independent cycles of $\Sigma _{1, 1}$ are depicted.
The holonomy of the red cycle is $\gamma _\alpha$ and that of the blue cycle is $\gamma _\beta$. []{data-label="fig1"}](paint3-1.pdf "fig:"){width="95.00000%"} [(a)]{} ![We illustrate the fundamental region of $\Sigma _{1, 1}$ as a quotient $\mathbb{H} / \langle \gamma _\alpha , \gamma _\beta \rangle$. (a)The action of $\gamma _\alpha$ is depicted as a red arrow line and the orange edges are identified. (b)The action of $\gamma _\beta$ is depicted as a blue arrow line and the purple edges are identified. (c)Two independent cycles of $\Sigma _{1, 1}$ are depicted. The holonomy of the red cycle is $\gamma _\alpha$ and that of the blue cycle is $\gamma _\beta$. []{data-label="fig1"}](paint3-2.pdf "fig:"){width="95.00000%"} [(b)]{} ![We illustrate the fundamental region of $\Sigma _{1, 1}$ as a quotient $\mathbb{H} / \langle \gamma _\alpha , \gamma _\beta \rangle$. (a)The action of $\gamma _\alpha$ is depicted as a red arrow line and the orange edges are identified. (b)The action of $\gamma _\beta$ is depicted as a blue arrow line and the purple edges are identified. (c)Two independent cycles of $\Sigma _{1, 1}$ are depicted. The holonomy of the red cycle is $\gamma _\alpha$ and that of the blue cycle is $\gamma _\beta$. []{data-label="fig1"}](c.png "fig:"){width="50.00000%"}\ We show an example of how the above prescription works for a torus with one hole, $\Sigma _{1, 1}$. As in Fig.\[fig1\], $\Sigma _{1, 1}$ can be obtained as a quotient $\mathbb{H} / \langle \gamma _\alpha , \gamma _\beta \rangle$, where $\gamma _\alpha$ and $\gamma _\beta$ are some elements of $\operatorname{{\rm SL}}(2, \mathbb{R})$. By using the isometry group action, we can fix three vertices of the rectangle to $0, 1, \infty$, and we parameterize the remaining vertex as $-e^{z_0}$. $\gamma _\alpha$ transports $0$ to $1$, $-e^{z_0}$ to $\infty$, and $\infty$ to somewhere in $(1, \infty)$, for instance $1+e^{-z_1}$.
Similarly $\gamma _\beta$ transports $\infty$ to $1$, $-e^{z_0}$ to $0$, and $0$ to somewhere in $(0, 1)$, for instance $\frac{1}{1+e^{z_2}}$. ![We make an ideal triangulation and give a parameter to each edge. The identified edges have the same parameters.[]{data-label="fig2"}](paint4.pdf){width="70.00000%"} Next we make a triangulation and give parameters as in Fig.\[fig2\]. Then $\gamma _\alpha$ can be described as starting from the pink triangle and going up, passing the edge with parameter $z_0$, turning left in the green triangle, passing the edge with parameter $z_1$, and turning right in the pink triangle to close the path. Finally we can encode this travel into the holonomy matrix $\gamma _\alpha$ as $$\gamma _\alpha = RX_{z_1}LX_{z_0}.$$ Similarly we can represent $\gamma _\beta$ as $$\gamma _\beta = LX_{z_2}RX_{z_0}.$$ We can easily check that these realizations of $\gamma _\alpha$ and $\gamma _\beta$ satisfy the expected transports. One may think it strange that there are three parameters for the moduli of a torus. In fact these parameters include the data of the hole. The holonomy matrix around the hole is $\gamma _c = \gamma _\alpha \gamma _\beta \gamma _\alpha ^{-1} \gamma _\beta ^{-1}$, and $2\cosh ^{-1} \left( \frac{\operatorname{Tr}\gamma _c}{2} \right)$ is the geodesic length of the hole. Therefore the parameters $ z_0, z_1, z_2$ correspond to the complex moduli and the moduli of the hole of $\Sigma _{1, 1}$. This demonstration tells us that the essence of constructing a holonomy matrix $\gamma$ is to observe where $\gamma$ transports triangles. In this example we arranged the coordinates of the vertices by hand in order that $\gamma _\alpha$ and $\gamma _\beta$ correctly represent the transportation. But in fact, if we define the parameters of the edges before setting the positions of the vertices, we can still set the positions consistently with $\gamma _\alpha$ and $\gamma _\beta$.
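The stated transports can be verified directly by letting the matrices act as Möbius transformations in projective coordinates. A minimal sketch (the numerical shear values are arbitrary choices for illustration):

```python
import numpy as np

Lm = np.array([[0.0, -1.0], [1.0, -1.0]])
Rm = np.array([[1.0, -1.0], [1.0, 0.0]])

def X(z):
    # Edge-crossing matrix [[0, -e^{z/2}], [e^{-z/2}, 0]]
    return np.array([[0.0, -np.exp(z / 2)], [np.exp(-z / 2), 0.0]])

def act(m, p):
    """Act on a boundary point given as a projective pair [w, 1]
    (infinity is [1, 0]); return the affine coordinate or np.inf."""
    v = m @ np.asarray(p, dtype=float)
    return np.inf if np.isclose(v[1], 0.0) else v[0] / v[1]

z0, z1, z2 = 0.3, -0.5, 0.8          # arbitrary shear coordinates
ga = Rm @ X(z1) @ Lm @ X(z0)         # gamma_alpha = R X_{z1} L X_{z0}
gb = Lm @ X(z2) @ Rm @ X(z0)         # gamma_beta  = L X_{z2} R X_{z0}

# gamma_alpha: 0 -> 1, -e^{z0} -> infinity, infinity -> 1 + e^{-z1}
assert np.isclose(act(ga, [0, 1]), 1.0)
assert act(ga, [-np.exp(z0), 1]) == np.inf
assert np.isclose(act(ga, [1, 0]), 1.0 + np.exp(-z1))

# gamma_beta: infinity -> 1, -e^{z0} -> 0, 0 -> 1/(1 + e^{z2})
assert np.isclose(act(gb, [1, 0]), 1.0)
assert np.isclose(act(gb, [-np.exp(z0), 1]), 0.0)
assert np.isclose(act(gb, [0, 1]), 1.0 / (1.0 + np.exp(z2)))

# Commutator holonomy around the hole; when |Tr| > 2 its geodesic
# length is 2 * arccosh(|Tr| / 2)
gc = ga @ gb @ np.linalg.inv(ga) @ np.linalg.inv(gb)
print(abs(np.trace(gc)))
```

All matrices have unit determinant, so every product stays in $\operatorname{SL}(2,\mathbb{R})$, and the checks confirm exactly the transports listed in the text.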
Thus when we compute holonomies, we only use the parameters of the edges; the positions of the vertices are not involved. 3d case ------- Just as in the two dimensional case, the rigidity theorem states that a hyperbolic 3-manifold $M$ can also be created as a quotient space $\mathbb{H} / \Delta$ of the hyperbolic space, where $\mathbb{H}$ is now the hyperbolic space and $\Delta$ is a subgroup of $\operatorname{{\rm SL}}(2, \mathbb{C})$, which is the isometry group. In particular, when $M$ has a boundary, we can apply the prescription of the previous subsection to compute the holonomies of cycles in $\partial M$. Because the space of the holonomies of $M$ can be derived from that of $\partial M$ by setting the holonomies of contractible cycles to the identity, the holonomies of $M$ are still computable in this case. As in two dimensions, if $M$ can be obtained as $\mathbb{H}/ \Delta$ and we take $\gamma \in \Delta$, then a cycle which corresponds to $\gamma$ exists in $M$ and its geodesic data are uniquely determined by $\gamma$. So if the holonomy of some cycle of $M$ becomes nontrivial, the geodesic data of the cycle, such as the geodesic length or the cusp angle, also become nontrivial. Moreover, nontriviality of the holonomy tells us that the cycle is non-contractible because of a cusp singularity. Therefore, even in the case that $M$ has a cusp boundary inside, we need no special treatment of the cusp; we only need to make the derived holonomy matrix nontrivial. The process is almost the same as in the two dimensional case: making a triangulation of $\partial M$, assigning parameters to the edges of the triangulation, decomposing cycles into the data of passing edges and turning left or right, and multiplying the matrices $L, R$ or $X_z$. Here we need a slight modification as $$R=\begin{pmatrix} 1 & -1 \\ 1 & 0 \end{pmatrix}, L=\begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix}, X_z=\begin{pmatrix} 0 & i e^{z/2} \\ i e^{-z/2} & 0 \end{pmatrix},$$ where $z \in \mathbb{C}$.
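The modified $X_z$ still has unit determinant, but as a Möbius map it now sends $w \mapsto e^{z}/w$ rather than $-e^{z}/w$ as in the 2d case. A quick numerical check (the sample values are arbitrary):

```python
import numpy as np

def X3(z):
    # 3d edge-crossing matrix [[0, i e^{z/2}], [i e^{-z/2}, 0]]
    return np.array([[0.0, 1j * np.exp(z / 2)],
                     [1j * np.exp(-z / 2), 0.0]])

z = 0.4 + 0.9j                               # arbitrary complex edge parameter
m = X3(z)
assert np.isclose(np.linalg.det(m), 1.0)     # X_z lies in SL(2, C)

# Moebius action: w -> (i e^{z/2}) / (i e^{-z/2} w) = e^z / w
w = 1.7 - 0.3j
image = (m[0, 0] * w + m[0, 1]) / (m[1, 0] * w + m[1, 1])
assert np.isclose(image, np.exp(z) / w)
print(image)
```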
The change in $X_z$ is made in order to treat the orientation of the 3-manifold correctly. We will apply this construction to a tetrahedron in the next section. Holonomies of a Tetrahedron =========================== In this section we show how the holonomy calculation works in three dimensions and how to realize 3-manifolds with nontrivial holonomies in the case of a tetrahedron. Tetrahedron with trivial holonomies ----------------------------------- [cc]{} ![(a)An illustration of an ideal tetrahedron realized in the Poincaré upper half-space. Three vertical lines continue to infinity. (b)Making a triangulation and giving a parameter to each edge.[]{data-label="fig3"}](tetrahedron_6.png "fig:"){width="\hsize"} [(a)]{} ![(a)An illustration of an ideal tetrahedron realized in the Poincaré upper half-space. Three vertical lines continue to infinity. (b)Making a triangulation and giving a parameter to each edge.[]{data-label="fig3"}](tetrahedron_1.png "fig:"){width="\hsize"} [(b)]{} We will compute the holonomy matrices of an ideal tetrahedron. According to the previous section, we should first make an ideal triangulation. In Fig.\[fig3\](b) we show an example of a triangulation of the surface of the ideal tetrahedron. Next we attach parameters $z, z', z'', w, w', w''$ to the edges. Let us see the calculation of the holonomy around the vertex A of Fig.\[fig3\](b). When we encircle the vertex in a counterclockwise way, our travel starts from the face ABD, passes the edge AB, turns left in the face ABC, passes the edge AC, turns left in the face ACD, passes the edge AD, and turns left in the face ABD to close the path.
Then the holonomy matrix $H_A$ around the vertex A becomes $$\begin{aligned} H_A = L X_{z'} L X_{z''} L X_{z} = \begin{pmatrix} i e ^{-\frac{1}{2}\left( z+z'+z''\right)} & 0 \\ i e^{-\frac{1}{2}\left( z+z'+z''\right)}\left( 1-e^{z'}+e^{z'+z''}\right) & - i e ^{\frac{1}{2}\left( z+z'+z''\right)} \end{pmatrix}.\end{aligned}$$ Similarly we obtain the holonomy matrices around the vertices B, C, D as $$\begin{aligned} H_B &= L X_{w'} L X_{w''} L X_{z} &= \begin{pmatrix} i e ^{-\frac{1}{2}\left( z+w'+w''\right)} & 0 \\ i e^{-\frac{1}{2}\left( z+w'+w''\right)}\left( 1-e^{w'}+e^{w'+w''}\right) & - i e ^{\frac{1}{2}\left( z+w'+w''\right)} \end{pmatrix}, \\ H_C &= L X_{w'} L X_{z''} L X_{w} &= \begin{pmatrix} i e ^{-\frac{1}{2}\left( w+w'+z''\right)} & 0 \\ i e^{-\frac{1}{2}\left( w+w'+z''\right)}\left( 1-e^{w'}+e^{w'+z''}\right) & - i e ^{\frac{1}{2}\left( w+w'+z''\right)} \end{pmatrix}, \\ H_D &= L X_{z'} L X_{z''} L X_{w} &= \begin{pmatrix} i e ^{-\frac{1}{2}\left( w+z'+z''\right)} & 0 \\ i e^{-\frac{1}{2}\left( w+z'+z''\right)}\left( 1-e^{z'}+e^{z'+z''}\right) & - i e ^{\frac{1}{2}\left( w+z'+z''\right)} \end{pmatrix}.\end{aligned}$$ If we impose the trivial holonomy conditions, the parameters on opposite edges are forced to be equal, e.g. $$\begin{split} z&=w \\ z'&=w' \\ z''&=w'' , \end{split}$$ and the remaining constraints are $$\begin{split} z+z'+z'' &= i\pi \\ e^{-z}+e^{z'}-1 &= 0, \end{split}$$ together with their permutations. Therefore, under the identification $Z=e^z, Z'=e^{z'}, Z''=e^{z''}$, the holonomy computation reproduces the standard hyperbolic structure of the ideal tetrahedron. Tetrahedron with nontrivial holonomies -------------------------------------- We have seen that the holonomy calculation works well in the case of trivial holonomies; what about a tetrahedron with nontrivial holonomies? Our method needs no special treatment to make the holonomies nontrivial: we simply replace the trivial holonomy conditions.
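Each product $L X_{\alpha} L X_{\beta} L X_{\gamma}$ collapses to the lower-triangular form quoted above, and under the two trivial-holonomy conditions $H_A$ reduces to the identity. A numerical check (the shape parameter $Z$ below is an arbitrary point of the upper half-plane, chosen only for illustration):

```python
import numpy as np

L = np.array([[0.0, -1.0], [1.0, -1.0]], dtype=complex)

def X(z):
    return np.array([[0.0, 1j * np.exp(z / 2)],
                     [1j * np.exp(-z / 2), 0.0]])

def holonomy(a, b, c):
    """H = L X_a L X_b L X_c, as in the vertex holonomies of the text."""
    return L @ X(a) @ L @ X(b) @ L @ X(c)

# Shape parameters of an ideal tetrahedron: e^{z'} = (Z-1)/Z, e^{z''} = 1/(1-Z)
# solve e^{-z} + e^{z'} - 1 = 0 and z + z' + z'' = i*pi for Im(Z) > 0.
Z = 0.5 + 0.8j
z, zp, zpp = np.log(Z), np.log((Z - 1) / Z), np.log(1 / (1 - Z))
assert np.isclose(z + zp + zpp, 1j * np.pi)
assert np.isclose(np.exp(-z) + np.exp(zp) - 1, 0)

# Check the closed form of H_A against the direct matrix product
s = z + zp + zpp
HA = holonomy(zp, zpp, z)
closed = np.array([[1j * np.exp(-s / 2), 0],
                   [1j * np.exp(-s / 2) * (1 - np.exp(zp) + np.exp(zp + zpp)),
                    -1j * np.exp(s / 2)]])
assert np.allclose(HA, closed)

# With the trivial-holonomy conditions imposed, H_A is the identity
assert np.allclose(HA, np.eye(2))
print("H_A = id verified")
```

The same helper reproduces $H_B$, $H_C$, $H_D$ by permuting the edge parameters as in the displayed formulas.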
The 3-manifold we want to make is the solid pants, so we set one of the four holonomies, $H_A$ for example, to be trivial and the others to be nontrivial. If we suppose $H_A = id$ and $\operatorname{Tr}H_B = 2\cosh \left( \frac{l_B}{2} \right)$ (and the same for C, D), the relations between the parameters are $$\begin{split} & z + z' +z'' = i\pi \\ & w=z+\frac{1}{2}\left(l_B - l_C + l_D \right) \\ & w'=z'+\frac{1}{2}\left(-l_B + l_C + l_D \right) \\ & w''=z''+\frac{1}{2}\left(l_B + l_C - l_D \right) , \end{split}$$ and $$e^{-z}+e^{z'}-1=0.$$ The parameters on opposite edges are no longer the same because of the nontriviality of the holonomies. This seems a nice way to represent nontrivial holonomies, but we obtain no further conditions on $l_B, l_C$ and $l_D.$ Actually, the holonomy matrices $H_B, H_C, H_D$ should be related to each other because the composition of the cycles around these vertices can shrink. To overcome this point, we change the triangulation. If we stick to a single tetrahedron, we cannot constrain the holonomies further. We therefore divide the tetrahedron with nontrivial holonomies into four trivial tetrahedra and evaluate the holonomies concretely. The division is done as follows: [cc]{} ![(a)The way of gluing four tetrahedra to get one tetrahedron. The glued surfaces are indicated by the blue arrows. (b)The glued tetrahedron. The blue curved lines are the internal edges and give rise to cusps. The parameters of the new edges are written.](paint.pdf "fig:"){width="\hsize"} [(a)]{} ![(a)The way of gluing four tetrahedra to get one tetrahedron. The glued surfaces are indicated by the blue arrows. (b)The glued tetrahedron. The blue curved lines are the internal edges and give rise to cusps. The parameters of the new edges are written.](paint2.pdf "fig:"){width="\hsize"} [(b)]{} \[fig4\] The parameters we have are $x, x', x'', y, y', y'', z, z', z'', w, w', w''$.
In each tetrahedron we impose trivial holonomy conditions, so we have $$\begin{split} x+x'+x''=i\pi \\ e^{-x}+e^{x'}-1=0 \end{split}$$ and the same for $y, z, w$. The holonomy calculation goes through with the parameters changed accordingly. The holonomy matrices now become $$\begin{aligned} H_A &= LX_{x''+y'}LX_{y''+z'}LX_{z''+x'} \nonumber \\ &= \begin{pmatrix} ie^{ -\frac{1}{2}\left(x'+x''+y'+y''+z'+z''\right)} & 0\\ ie^{-\frac{1}{2}\left(x'+x''+y'+y''+z'+z''\right)} \left( 1-e^{x''+y'}+e^{x''+y'+y''+z'}\right) & -ie^{ \frac{1}{2}\left(x'+x''+y'+y''+z'+z''\right)} \end{pmatrix} \\ H_B &= LX_{x+w'}LX_{y+w''}LX_{x''+y'} \nonumber \\ &= \begin{pmatrix} ie^{ -\frac{1}{2}\left(w'+w''+x+x''+y+y'\right)} & 0\\ ie^{ -\frac{1}{2}\left(w'+w''+x+x''+y+y'\right)} \left( 1-e^{x+w'}+e^{x+y+w'+w''}\right) & -ie^{ \frac{1}{2}\left(w'+w''+x+x''+y+y'\right)} \end{pmatrix} \\ H_C &= LX_{x'+z''}LX_{z+w}LX_{x+w'} \nonumber \\ &= \begin{pmatrix} ie^{ -\frac{1}{2}\left(w+w'+x+x'+z+z''\right)} & 0\\ ie^{ -\frac{1}{2}\left(w+w'+x+x'+z+z''\right)} \left( 1-e^{x'+z''}+e^{x'+z+z''+w}\right) & -ie^{ \frac{1}{2}\left(w+w'+x+x'+z+z''\right)} \end{pmatrix} \\ H_D &= LX_{y''+z'}LX_{y+w''}LX_{z+w} \nonumber \\ &= \begin{pmatrix} ie^{ -\frac{1}{2}\left(w+w''+y+y''+z+z'\right)} & 0\\ ie^{ -\frac{1}{2}\left(w+w''+y+y''+z+z'\right)} \left( 1-e^{y''+z'}+e^{w''+y+y''+z'}\right) & -ie^{ \frac{1}{2}\left(w+w''+y+y''+z+z'\right)} \end{pmatrix}.\end{aligned}$$ If we constrain these holonomy matrices by $H_A=id$, $\operatorname{Tr}H_B=2\cosh(\frac{l_B}{2})$, and the same for C, D, then we get the following relations between the parameters: $$\begin{aligned} &x+y+z &= 2\pi i \\ &w+x'+y'' &= l_B + 2\pi i \\ &w''+x''+z' &= l_C + 2\pi i \\ &w'+y'+z'' &= l_D + 2\pi i.\end{aligned}$$ Summing up both sides of these equations, we obtain the relation $$l_B+l_C+l_D=0,$$ which successfully relates the holonomies $H_B, H_C, H_D$. These relations are needed to compute the partition function in the next section.
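As a sanity check of the glued construction (our own illustration, with a symmetric parameter choice that is an assumption, not from the paper), take all four tetrahedra equal with $x=y=z=w=2\pi i/3$, which satisfies $x+y+z=2\pi i$ together with the trivial-holonomy conditions of each tetrahedron; numerically, the displayed matrix $H_A$ is then the identity:

```python
import cmath

pi = cmath.pi
x = 2j * pi / 3                       # x = y = z = w = 2*pi*i/3 (our choice)
xp = cmath.log(1 - cmath.exp(-x))     # x' from e^{-x} + e^{x'} - 1 = 0
xpp = 1j * pi - x - xp                # x'' from x + x' + x'' = i*pi

assert abs(3 * x - 2j * pi) < 1e-12   # the gluing condition x + y + z = 2*pi*i

# With all tetrahedra equal, y', z' coincide with x' and y'', z'' with x''.
expo = 3 * (xp + xpp)                 # x' + x'' + y' + y'' + z' + z''
offdiag = 1 - cmath.exp(xpp + xp) + cmath.exp(2 * (xpp + xp))

assert abs(1j * cmath.exp(-expo / 2) - 1) < 1e-12       # top-left of H_A
assert abs(-1j * cmath.exp(expo / 2) - 1) < 1e-12       # bottom-right of H_A
assert abs(1j * cmath.exp(-expo / 2) * offdiag) < 1e-12  # bottom-left of H_A
```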
Partition functions
===================

In this section we will discuss the partition function of the hyperbolic 3-manifold $M$, particularly focusing on the ideal tetrahedron. The calculation of partition functions is well described in [@Dimofte2011]; we briefly review it here in order to make this paper self-contained. In [@Terashima2011; @Dimofte2011] it was pointed out that there is a correspondence between the partition function of the gravity theory and that of $\mathcal{N}=2$ supersymmetric Chern-Simons gauge theory under a suitable identification of parameters. Let us come to the case of the ideal tetrahedron. On the gravity side, we can consider the partition function of the gravity theory defined on the ideal tetrahedron [@Chekhov1999; @Chekhov2000]. If we set the vertices at $\{0, 1, \infty, Z\}$, then $Z$ becomes a parameter of the partition function. We will denote the partition function as $Z^{\hbar}(Z)$. On the dual gauge theory side, we can consider the $\mathcal{N}=2$ gauge theory defined on $S^3_b$ [@Hama2011a]. The matter content of the dual theory is a single chiral matter field. The $\operatorname{{\rm U}}(1)$ flavor symmetry is gauged and a Chern-Simons coupling of level $-\frac{1}{2}$ is added to the Lagrangian. For later use we will call this theory $\mathcal{T}_{\Delta}$. The parameters of the theory are the mass $m$ and the R-charge $R$ of the chiral matter, and we will denote the partition function as $Z_{\Delta}(\tilde{m})$ where $ \tilde{m}=m+\frac{iQ}{2}R$. The identification of the parameters is as follows: $$\begin{aligned} \tilde{m} & \leftrightarrow & \frac{Z}{2\pi b} \\ b^2 & \leftrightarrow & 8G, \end{aligned}$$ where the left-hand side refers to the supersymmetric gauge theory and the right-hand side to the gravity theory. Here $b$ is the squashing parameter of $S^3_b$, and $G$ is the gravitational constant.
We can exactly determine the partition function by using the localization technique [@Hama2011a]; the partition function of $\mathcal{T}_{\Delta}$ is $$Z_{\Delta}(\tilde {m})=e^{\frac{i\pi}{2}\left(\frac{iQ}{2}-\tilde{m}\right)^2} s_b\left(\frac{iQ}{2}-\tilde{m}\right),$$ where $Q=b+b^{-1}$. If there are multiple tetrahedra, the corresponding partition function is obtained as a product of $Z_{\Delta}$'s. We can define symplectic transformations on the partition functions [@Witten2003a; @Aharony1997]. In the case of one parameter, the generating matrices $T$ and $S$ of $\operatorname{{\rm SL}}(2, \mathbb{Z})$ act on the partition functions as $$\begin{aligned} T: Z(\tilde{m})& \mapsto& Z'(\tilde{m})=e^{i\pi\tilde{m}^2} Z(\tilde{m}) \\ S: Z(\tilde{m})& \mapsto& Z'(\tilde{m}')=\int d\tilde{m} e^{-2i\pi \tilde{m}\tilde{m}'} Z(\tilde{m}).\end{aligned}$$ The integral is over $\tilde{m} \in \mathbb{R}$. We can check that they satisfy $S^2 = -id$ and $(ST)^3=id$, as expected. In particular, for an ideal tetrahedron, when the transformation $ST$ acts on $Z_{\Delta}$ we get $$\begin{split} ST: Z_{\Delta}(\tilde{m}) \mapsto Z^{ST}_{\Delta}(\tilde{m}') &= \int d\tilde{m} e^{-2i\pi \tilde{m}\tilde{m}' -i\pi\tilde{m}^2}Z_{\Delta}(\tilde{m}) \\ &= -e^{-\frac{i\pi}{12}\left(1+Q^2\right)}Z_{\Delta}\left(\tilde{m}'+\frac{iQ}{2}\right). \end{split}$$ Hereafter we use formulae for double sine functions from [@Faddeev2000]; we have arranged some of them and collected them in the Appendix. We find that the $ST$ transformation does not change the form of $Z_{\Delta}$. We now turn to the changes of variables of the moduli parameters $z, \ldots, w''$. In fact, the Chern-Simons action induces a symplectic structure on the moduli parameters [@Verlinde:1989hv; @Fock1998]. For an ideal tetrahedron the dimension of the phase space is two, and the Poisson brackets are $\{z, z'\}=\{z', z''\}=\{z'', z\}=1$. [^4] The choice of coordinates and momenta is called the polarization.
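The relations $S^2 = -id$ and $(ST)^3 = id$ can already be seen at the level of the $\operatorname{{\rm SL}}(2, \mathbb{Z})$ matrices, taking $T$ lower-triangular as in the block form used later for $\operatorname{{\rm Sp}}(2N,\mathbb{Z})$ (a small sketch of ours):

```python
import numpy as np

S = np.array([[0, -1], [1, 0]])   # S-type generator
T = np.array([[1, 0], [1, 1]])    # T-type generator (lower-triangular shear)

assert np.array_equal(S @ S, -np.eye(2, dtype=int))        # S^2 = -id
ST = S @ T
assert np.array_equal(ST @ ST @ ST, np.eye(2, dtype=int))  # (ST)^3 = id
```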
The coordinate transformations which do not change the commutation relations are allowed; they form a symplectic group symmetry. Let us come back to our case of the tetrahedron with nontrivial holonomies. The phase space of the nontrivial tetrahedron has dimension 8 because we started from 4 tetrahedra. When the four tetrahedra are taken apart, the conventional set of independent parameters is $\{ x, y, z, w\}$ and the polarization is $ \left( z'' , w , x'' , y' , z' , w'' , x' , y\right)^{\bf T}$, where the first four variables are the coordinates and the last four represent the momenta. When the tetrahedra are glued, however, these parameters are no longer independent. More importantly, these parameters do not appear individually in Fig.\[fig3\]. Our choice of polarization is $$\begin{pmatrix} X \\ C_B \\ C_C \\ C_D \\ P \\ \Theta _B \\ \Theta _C \\ \Theta _D \end{pmatrix} =\begin{pmatrix} x'+z'' \\ w+x'+y'' \\ w''+x''+z' \\ w'+y'+z'' \\ z'+y'' \\ \Theta _B \\ \Theta _C \\\Theta _D \end{pmatrix},$$ where the top four are coordinates and the bottom four are momenta of the phase space. The coordinates $X, C_B, C_C, C_D$ correspond to the edge AB and to the internal edges attached to B, C, D, respectively. The momentum $P$ corresponds to the edge AC, and $\Theta _B, \Theta _C, \Theta _D$ are chosen to give a canonical symplectic form, such that $\{X_i, P_j\}=\delta _{ij}$. In this paper, we choose these momenta as $$\Theta _B=y'+w''+y, \quad \Theta _C=x', \quad \Theta _D=y+y'.$$ We note that this construction is consistent with the tetrahedron with trivial holonomies: if we tune three of the holonomies to be trivial, i.e. $C_B=C_C=C_D=0$, the holonomy around the last vertex A is automatically trivial and we get back a tetrahedron with trivial holonomy.
Our polarization is obtained from the starting one via a symplectic transformation: $$\begin{pmatrix} X \\ C_B \\ C_C \\ C_D \\ P \\ \Theta _B \\ \Theta _C \\ \Theta _D \end{pmatrix} =\begin{pmatrix} x'+z'' \\ w+x'+y'' \\ w''+x''+z' \\ w'+y'+z'' \\ z'+y'' \\ \Theta _B \\ \Theta _C \\ \Theta _D \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & -1 & 0 & 0 & 1 & -1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & -1 & 0 & 1 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & -1 & 1 & 0 & 0 & -1 \\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ \end{pmatrix} \begin{pmatrix} z'' \\ w \\ x'' \\ y' \\ z' \\ w'' \\ x' \\ y \end{pmatrix} +\begin{pmatrix} 0 \\ i\pi \\ 0 \\ i\pi \\ i\pi \\ 0 \\ 0 \\ 0 \end{pmatrix}.$$ We will denote this $\operatorname{{\rm Sp}}(8, \mathbb{Z})$ matrix as $M$. According to [@Hua1949], we can decompose the matrix $M$ into generators of $\operatorname{{\rm Sp}}(8, \mathbb{Z})$ as $$M=USRXJV$$ where $$\begin{split} U=\begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 & -1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}, S=\begin{pmatrix} 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \\ R=\begin{pmatrix} -1 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0\\ -1 & 0 & 0 & 2 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & -2 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \end{pmatrix}, X=\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 & 0 & 1 \end{pmatrix}, \\ J=\begin{pmatrix} 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{pmatrix}, V=\begin{pmatrix} 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ -1 & 1 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}. \end{split}$$ In this decomposition, the matrices $U, V, R$ are of GL-type, $S, J$ of S-type, and $X$ of T-type. For each type, we can define the corresponding action on the partition functions, as in the $\operatorname{{\rm SL}}(2,\mathbb{Z})$ case described in [@Dimofte2011]. We quote their result for $\operatorname{{\rm Sp}}(2N,\mathbb{Z})$ here: $$\begin{aligned} T:g=\begin{pmatrix} I & 0 \\ B & I \end{pmatrix},B =B^{\rm T} : Z(\vec{\tilde{m}})& \mapsto& Z'(\vec{\tilde{m}})=e^{i\pi\vec{\tilde{m}}\cdot B\vec{\tilde{m}}} Z(\vec{\tilde{m}}) \\ S:g=\begin{pmatrix} I-J & -J \\ J & I-J \end{pmatrix}: Z(\vec{\tilde{m}})& \mapsto& Z'(\vec{\tilde{m}}')=\int d\vec{\tilde{m}}\, e^{-2i\pi \vec{\tilde{m}}\cdot J\vec{\tilde{m}}'} Z(\vec{\tilde{m}}),\end{aligned}$$ where $J = {\rm diag}(j_1 ,\cdots, j_N)$ with $j_i \in \{0,1\}$, and the integration is over the $i$-th components of $\vec{\tilde{m}}$ with $j_i=1$, and $$GL:g=\begin{pmatrix} U & 0 \\ 0 & U^{{\rm T} -1} \end{pmatrix}, U \in \operatorname{{\rm GL}}(N,\mathbb{Z}): Z(\vec{\tilde{m}}) \mapsto Z'(\vec{\tilde{m}}')=Z(U^{-1}\vec{\tilde{m}}').$$ There can also be transformations shifting the variables by constants in units of $i\pi$. We will call these transformations affine shifts.
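As a consistency check of the polarization change (a sketch of ours, assuming the canonical block form $\Omega$ on (coordinates; momenta)), one can verify that the linear part $M$ given above preserves the symplectic form:

```python
import numpy as np

# The Sp(8, Z) matrix M from the text (linear part of the polarization
# change); the affine i*pi shifts do not affect symplecticity.
M = np.array([
    [1,  0, 0,  0, 0,  0, 1,  0],
    [0,  1, 0, -1, 0,  0, 1, -1],
    [0,  0, 1,  0, 1,  1, 0,  0],
    [1, -1, 0,  1, 0, -1, 0,  0],
    [0,  0, 0, -1, 1,  0, 0, -1],
    [0,  0, 0,  1, 0,  1, 0,  1],
    [0,  0, 0,  0, 0,  0, 1,  0],
    [0,  0, 0,  1, 0,  0, 0,  1],
])
I4 = np.eye(4, dtype=int)
Z4 = np.zeros((4, 4), dtype=int)
Omega = np.block([[Z4, I4], [-I4, Z4]])  # canonical symplectic form

# M is symplectic: M Omega M^T = Omega.
assert np.array_equal(M @ Omega @ M.T, Omega)
```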
When an affine shift acts on the coordinate as $\begin{pmatrix} X' \\ P' \end{pmatrix}=\begin{pmatrix} X \\ P \end{pmatrix}+\begin{pmatrix} i\pi \\ 0 \end{pmatrix}$, the partition function changes as $$Z(X)\mapsto Z'(X)=Z(X-\frac{iQ}{2}).$$ When it acts on the momentum, we get $\begin{pmatrix} X' \\ P' \end{pmatrix}=\begin{pmatrix} X \\ P \end{pmatrix}+\begin{pmatrix} 0 \\ i\pi \end{pmatrix}= -S\left(S\begin{pmatrix} X \\ P \end{pmatrix}+\begin{pmatrix} i\pi \\ 0 \end{pmatrix}\right)$, where $S=\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ is an S-type transformation. Combining these transformations, we get $$\begin{split} Z(X)\mapsto Z'(X'')&=\int dXdX' e^{-2i\pi(-X'')X'-2i\pi X(X'-\frac{iQ}{2})}Z(X) \\ &=e^{\pi QX''}Z(X''). \end{split}$$ Now we are ready to compute the partition function of the tetrahedron with three nontrivial holonomies. First we prepare the partition function $$Z(z'', w, x'', y')=Z_{\Delta}(z'')Z_{\Delta}(w)Z_{\Delta}(x'')Z_{\Delta}(y'),$$ and then act with the symplectic transformation $M$ on $Z(z'', w, x'', y')$. The practical computation is tedious, so we only give the result here: $$\begin{split} Z^M (X;C_B, C_C, C_D)=\int dz_1 e^{-2i\pi z_1 \left( X-C_B+\frac{iQ}{2}\right) -2i\pi z_1 ^2 -i\pi\left(iQ-X\right)^2-\pi Q\left(iQ-X\right)} \\ \times e^{-2i\pi C_C\left( X+z_1 - \frac{iQ}{2}\right)- i\pi C_C^2 +\pi Q\left(C_B+C_D\right)} \\ \times Z_{\Delta}(X)Z_{\Delta}(z_1+C_C)Z_{\Delta}(iQ-X-z_1-C_C) \\ \times Z_{\Delta}(z_1+X-C_B-C_D)Z_{\Delta}(-z_1). \end{split}$$ We propose this final result as the partition function of the solid pants. The $z_1$ integration is too hard to carry out, so it remains in the final result. The change in the number of $Z_{\Delta}$ factors is caused by the pentagon identity. We can check this calculation in two ways. First, there is a rotational symmetry which permutes $C_B, C_C, C_D$, combined with an $ST$ transformation on $X$. We will name this rotation $Q$.
Geometrically, $Q$ is a remnant of the $ST$ symmetry of a single tetrahedron. Actually, $Q$ can be constructed from $M$ and a new transformation $R_1$. The transformation $R_1$ acts on the four tetrahedra before they are glued, cyclically permuting $x, y, z$ and rotating $w, w', w''$ at the same time. Represented as a symplectic transformation, it reads $$\begin{pmatrix} z'' \\ w \\ x'' \\ y' \\ z' \\ w'' \\ x' \\ y \end{pmatrix} \mapsto \begin{pmatrix} x'' \\ w' \\ y'' \\ z' \\ x' \\ w \\ y' \\ z \end{pmatrix} =\begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} z'' \\ w \\ x'' \\ y' \\ z' \\ w'' \\ x' \\ y \end{pmatrix} + \begin{pmatrix} 0 \\ -i\pi \\ i\pi \\ 0 \\ 0 \\ 0 \\ 0 \\ i\pi \end{pmatrix}.$$ Then, if we set $Q = M R_1 M^{-1}$, the action of $Q$ on the polarization $(X, C_B, C_C, C_D$ , $P, \Theta _B, \Theta _C, \Theta _D)^{\rm T}$ is a combination of an $ST$ transformation on $(X, P)$ and a rotation of $(C_B, C_C, C_D)$. The second check is setting $C_B=C_C=C_D=0$. This means setting all the holonomies to be trivial, and in fact the remaining integration can then be carried out: $$\begin{split} Z^M (X;0, 0, 0)&=\int dz_1 e^{-i\pi z_1 ^2 - i\pi \left( z_1 +X\right )^2 -\pi Q \left(z_1+X\right)} Z_{\Delta}(X)Z_{\Delta}(z_1)Z_{\Delta}(iQ-X-z_1)Z_{\Delta}(-z_1)Z_{\Delta}(z_1+X) \\ &= \int dz_1 e^{-i\pi z_1 ^2 +\pi Qz_1}Z_{\Delta}(X)Z_{\Delta}(-z_1)Z_{\Delta}(z_1) \\ &\propto Z_{\Delta}(X). \end{split}$$ As a result, $Z^M(X)$ equals $Z_{\Delta}(X)$ up to a constant coefficient, and we find that a tetrahedron with nontrivial holonomies reduces to one with trivial holonomies.
Discussion
==========

We proposed a method to compute holonomy matrices for hyperbolic 3-manifolds with boundaries. Using this method, we analyzed the hyperbolic structure of an ideal tetrahedron and generalized it to nontrivial holonomies. We also calculated the partition function of the nontrivial tetrahedron via the duality [@Dimofte2011] and checked some consistency conditions. Our proposed partition function still contains an unevaluated integration, so the dual 3d gauge theory remains mysterious.

#### Acknowledgement

I would like to thank Y. Matsuo and my colleagues for valuable discussions and comments.

Formulae for double sine functions
==================================

In this section we collect some formulae for the partition functions used in Section 4. We derive them from formulae for quantum dilogarithm functions, following [@Barnes1899; @Faddeev2000]. The quantum dilogarithm function is defined by the formula $$e_b(z)=\exp\left(\frac{1}{4}\int _{-\infty}^{\infty} \frac{e^{-2izx}dx}{\sinh(xb)\sinh(xb^{-1})x}\right),$$ where the integration contour passes above the singularity at $x=0$. The double sine function is defined as $$s_b(z)=\prod_{m, n \in \mathbb{Z}_{\geq 0}}\frac{mb+nb^{-1}+\frac{Q}{2}-iz}{mb+nb^{-1}+\frac{Q}{2}+iz},$$ where $Q=b+b^{-1}$. These special functions are related as $$e_b(z)=e^{\frac{i\pi}{2}z^2}s_b(z),$$ thus the partition function dual to an ideal tetrahedron can be written as $$\begin{split} Z_{\Delta}(z)&=e^{\frac{i\pi}{2}\left(\frac{iQ}{2}-z\right)^2}s_b\left(\frac{iQ}{2}-z\right) \\ &=e_b\left(\frac{iQ}{2}-z\right). \end{split}$$ The quantum dilogarithm function has the following properties.
$$\begin{aligned} e_b(z)e_b(-z)=e^{i\pi z^2-\frac{i\pi}{6} \left( 1-\frac{Q^2}{2}\right) } \hspace{5cm}\\ \int dx\, e_b(x)e^{2i\pi xy}= e^{-i\pi y^2+\frac{i\pi}{12}\left(1+Q^2\right)}e_b\left(y+\frac{iQ}{2}\right) \hspace{3cm}\\ e_b\left(x+\frac{iQ}{2}\right)e_b\left(y+\frac{iQ}{2}\right)e^{2i\pi xy} \nonumber \hspace{9cm}\\ = \int dz e_b\left(z+\frac{iQ}{2}\right)e_b\left(x-z+\frac{iQ}{2}\right)e_b\left(y-z+\frac{iQ}{2}\right) e^{-2i\pi z^2 + 2i\pi z(x+y) +\frac{i\pi}{12}\left(1+Q^2\right)} \\ e_b\left(x+\frac{iQ}{2}\right)e_b\left(\frac{iQ}{2}+u-x\right)e_b(-u-\frac{iQ}{2}) e^{-i\pi u^2 +\pi Q u} \nonumber \hspace{5cm}\\ = \int dz e_b\left(z+\frac{iQ}{2}\right)e_b\left(x-z+\frac{iQ}{2}\right) e^{-i\pi z^2 - 2i\pi z\left(\frac{iQ}{2}+u-x\right) -\frac{i\pi}{12}\left(1+Q^2\right)}, \end{aligned}$$ where the integration is over $z \in \mathbb{R}$ and the singularities lie below the contour, except for the one at $z=0$. The third and fourth relations are called pentagon identities. When we cast these identities into the language of the partition functions, we get $$\begin{aligned} Z_{\Delta}(z+iQ)Z_{\Delta}(-z)=e^{i\pi z^2 -\pi Qz -\frac{i\pi}{6}\left(1+Q^2\right)} \hspace{4cm}\\ \int dz Z_{\Delta}(z) e^{2i\pi zw}=-e^{-i\pi w^2 -\pi Qw + \frac{i\pi}{12}\left(1+Q^2\right)}Z_{\Delta}(w) \hspace{2cm}\\ Z_{\Delta}(x)Z_{\Delta}(y) e^{2i\pi xy} \hspace{10cm}\nonumber \\ = \int dz Z_{\Delta}(-z)Z_{\Delta}(z-x)Z_{\Delta}(z-y) e^{-2i\pi z^2 + 2i\pi z(x+y)+\frac{i\pi}{12}\left(1+Q^2\right)} \\ Z_{\Delta}(iQ-u)Z_{\Delta}(x)Z_{\Delta}(u-x) e^{-i\pi u^2 -\pi Qu} \hspace{7cm}\nonumber\\ = \int dz Z_{\Delta}(-z)Z_{\Delta}(z+x) e^{-i\pi z^2 -2i\pi z\left(\frac{iQ}{2}-u+x\right)-\frac{i\pi}{12}\left(1+Q^2\right)}\hspace{2cm}\end{aligned}$$

[^1]: [email protected]

[^2]: Here we assume the Poincaré upper half-plane model to realize $\mathbb{H}$.

[^3]: The meaning of these matrices is described in more detail in [@Chekhov2007]; we just quote their result here.
[^4]: When we choose the coordinate and the momentum as $z$ and $z''$, the remaining parameter $z'=i\pi-z-z''$ is not an independent variable.
---
abstract: |
  We study a new version of the Euclidean TSP called [[VectorTSP]{}]{} ([[VTSP]{}]{} for short) where a mobile entity is allowed to move according to a set of physical constraints inspired by the pen-and-pencil game *Racetrack* (also known as *Vector Racer*). In contrast to other versions of TSP accounting for physical constraints, such as Dubins TSP, the spirit of this model is that (1) no speed limitations apply, and (2) inertia depends on the current velocity. As such, this model is closer to typical models considered in path planning problems, although applied here to the visit of $n$ cities in a non-predetermined order. We motivate and introduce the [[VectorTSP]{}]{} problem, discussing fundamental differences with previous versions of TSP. In particular, an optimal visit order for ETSP may not be optimal for VTSP. We show that [[VectorTSP]{}]{} is NP-hard, and in the other direction, that [[VectorTSP]{}]{} reduces to [[GroupTSP]{}]{} in polynomial time (although with a significant blow-up in size). On the algorithmic side, we formulate the search for a solution as an interactive scheme between a high-level algorithm and a *trajectory oracle*, the former being responsible for computing the visit order and the latter for computing the cost (or the trajectory) for a given visit order. We present algorithms for both, and we demonstrate and quantify through experiments that this approach frequently finds a better solution than the optimal trajectory realizing an optimal ETSP tour, which legitimizes the problem itself and (we hope) motivates further algorithmic developments.
author:
- |
  Arnaud Casteigts, Mathieu Raffinot, Jason Schoeters\
  LaBRI, CNRS, Université de Bordeaux, Bordeaux INP, France
- Arnaud Casteigts
- Mathieu Raffinot
- Jason Schoeters
bibliography:
- 'paper.bib'
title: '[[VectorTSP]{}]{}: A Traveling Salesperson Problem with Racetrack-like acceleration constraints'
---

Introduction {#sec:intro}
============

The problem of visiting a given set of places and returning to the starting point, while minimizing the total cost, is known as the Traveling Salesperson Problem (TSP, for short). The problem was independently formulated by Hamilton and Kirkman in the 1800s and has been extensively studied since. Many versions of this problem exist, motivated by applications in various areas, such as delivery planning, stock cutting, and DNA reconstruction. In the classical version, an instance of the problem is specified as a graph whose vertices represent the [*cities*]{} (places to be visited) and weights on the edges represent the cost of moving from one city to another (the move is impossible if the edge does not exist). One is asked to find a minimum-cost tour (optimization version) or to decide whether a tour having at most some cost exists (decision version), subject to the constraint that every city is visited [*exactly*]{} once. Karp proved in 1972 that the Hamiltonian Cycle problem is NP-hard, which implies that TSP is NP-hard [@karp1972reducibility]. TSP was subsequently shown to be inapproximable (unless $P = NP$) by Orponen and Manilla in 1990 [@orponenmannila90]. On the positive side, while the trivial algorithm has a factorial running time (essentially, evaluating all permutations of the visit order), Held and Karp presented a dynamic programming algorithm [@heldkarp62] running in time $O(n^22^n)$, which as of today remains the fastest known. In many cases, the problem is restricted to more tractable settings.
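The Held–Karp dynamic program mentioned above is short enough to sketch (an illustrative implementation of the classical algorithm, not code from the paper):

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp dynamic program for TSP, running in O(n^2 2^n) time.
    dist[i][j] is the cost of edge (i, j); returns an optimal tour cost."""
    n = len(dist)
    # C[(S, j)]: cheapest path from city 0 through the cities in bitmask S
    # (over cities 1..n-1), ending at city j (which belongs to S).
    C = {(1 << (j - 1), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << (j - 1) for j in subset)
            for j in subset:
                prev = S ^ (1 << (j - 1))
                C[(S, j)] = min(C[(prev, k)] + dist[k][j]
                                for k in subset if k != j)
    full = (1 << (n - 1)) - 1
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

# Example: a 4-city instance whose optimal tour 0-1-3-2-0 costs 10.
dist = [[0, 1, 2, 9],
        [1, 0, 6, 4],
        [2, 6, 0, 3],
        [9, 4, 3, 0]]
assert held_karp(dist) == 10
```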
In Metric TSP, the costs must respect the triangle inequality, namely $cost(u,v) \le cost(u,w) + cost(w,v)$ for all $u,v,w$, and the constraint of visiting a city exactly once is relaxed (or equivalently, it is not, but the instance is turned into a complete graph where the weight of every edge $uv$ is the cost of a shortest *path* from $u$ to $v$ in the original instance). Metric TSP was shown to be approximable within factor $1.5$ by Christofides [@christofides76]. Whether this factor is optimal is unknown, although it cannot be less than $1.0045$ (unless $P = NP$), and so no PTAS exists for Metric TSP [@PV06]. A particular case of Metric TSP is when the cities are points in the plane and weights are the Euclidean distances between them, known as the Euclidean TSP (ETSP, for short). This problem, although still NP-hard (see Papadimitriou [@papadimitriou1977euclidean] and Garey *et al.* [@garey1976some]), was shown to admit a PTAS by Arora [@arora96] and Mitchell [@mitchell99]. An attempt to add physical constraints to the ETSP is Dubins TSP (DTSP). This version of TSP, which is also NP-hard (Le Ny *et al.* [@le2007curvature]), accounts for inertia by bounding the curvature of a trajectory by a fixed radius. This approach offers an elegant (*i.e.* purely geometrical) abstraction of the problem. However, it does not account for speed variations; for example, it does not enable sharper turns when the speed is low, nor does it account for inertia beyond a fixed speed. More realistic models have been considered beyond TSP, such as in the context of the path planning problem, where one aims to find an optimal trajectory between two given points (with obstacles), while satisfying constraints on acceleration/inertia. More generally, the literature on [*kinodynamics*]{} is vast (see, e.g. [@canny1988complexity; @canny1991exact; @donald1993kinodynamic] for some relevant examples).
The constraints are often formulated in terms of the considered space’s dimensions, a bounded acceleration and a bounded speed. The positions may be considered either in a discrete domain or in a continuous domain, the latter being more related to the fields of control theory and analytic functions. In contrast, the discrete domain is naturally prone to algorithmic investigation. In a recreational column of the [*Scientific American*]{} in 1973 [@gardner1973sim], Martin Gardner presented a paper-and-pencil game known as [*Racetrack*]{} (or [*Vector Racer*]{}). The physical model is as follows. In each step, a vehicle moves according to a discrete-coordinate vector (initially the zero vector), with the constraint that the vector at step $i+1$ cannot differ from the vector at step $i$ by more than one unit in each dimension. The game consists of finding the best trajectory (smallest number of vectors) in a given race track defined by start/finish areas and polygonal boundaries. A nice feature of such models is the ability to think of the state of the vehicle at a given time as a point in a [*configuration space*]{} of doubled dimension, such as $(x,y,dx,dy)$ when the original space is $\mathbb{Z}^2$. The optimal trajectory can then be found by performing a breadth-first search in the configuration graph (these techniques are described later on). These techniques were rediscovered many times, both in the racetrack context (see [*e.g.*]{} [@schmid2005vector; @bekos2018algorithms; @olsson2011genetic; @blogernie]) and in the kinodynamics literature (see [*e.g.*]{} [@donald1993kinodynamic; @canny1988complexity])—we will consider them as folklore. Defining a version of TSP based on a racetrack-like physical model is quite natural. Consider, for instance, a scenario involving a spacecraft in a simplified physical setting ([*i.e.*]{} non-relativistic and without gravity), where no speed limit applies and acceleration constraints are identical in all directions.
Finding the best tour visiting a given set of planets, or asking whether such a tour can be performed in a given time, are indeed natural questions and objectives. Another, perhaps more realistic, scenario involves a drone taking aerial pictures of a set of locations. Despite an extensive literature, the TSP problem does not seem to have been investigated from the point of view of pure acceleration. (Anecdotally, there exists a TSP heuristic called “racetrack” [@mobile_sink], which does not relate to such models, nor to acceleration in general.)

Contributions
-------------

In this paper, we introduce a version of the Traveling Salesperson Problem called [[VectorTSP]{}]{} (or VTSP), in which a vehicle must visit a given set of points in some Euclidean space and return to the starting point, subject to racetrack-like constraints. The quality of a solution is the *number* of vectors (equivalently, of configurations) it uses. We start by presenting a generalized racetrack physical model, in Section \[sec:model\], and reviewing some of its algorithmic features, including known techniques based on the graph of configurations. Then, we define the [[VTSP]{}]{} problem in a quite general setting, where the space may be discrete or continuous, in an arbitrary number of dimensions (namely, $\mathbb{Z}^d$ or $\mathbb{R}^d$). An instance may be parameterized by two additional parameters: the maximum speed at which a city is considered as visited (the visit speed $\nu$), the speed being otherwise unbounded; and the maximum distance at which a city is considered as visited (the visit distance $\alpha$). These parameters correspond to natural motivations. For example, if the aforementioned space mission consists of dropping or collecting passengers at a given city, then the vehicle might need to slow down (or stop) at visit time; if it consists of making quick measurements, then the visit speed is unconstrained and some distance from the visited city may even be tolerated.
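To make the two visit parameters concrete, here is one possible reading of them as a predicate (a hypothetical sketch of ours; the names `visited`, `nu` and `alpha` are not from the paper):

```python
import math

def visited(config, city, nu=0.0, alpha=0.0):
    """One possible reading of the visit parameters (our own sketch):
    configuration config = (pos, vel) visits `city` when it is within
    distance alpha of the city while moving at speed at most nu."""
    pos, vel = config
    return math.dist(pos, city) <= alpha and math.hypot(*vel) <= nu

# Exact visit at zero speed (nu = alpha = 0):
assert visited(((0, 0), (0, 0)), (0, 0))
# A quick measurement at distance 1 and speed 2 is fine when alpha=2, nu=3:
assert visited(((1, 0), (2, 0)), (0, 0), nu=3, alpha=2)
# ...but not when a full stop is required (nu = 0):
assert not visited(((1, 0), (2, 0)), (0, 0), nu=0, alpha=2)
```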
In Section \[sec:basic-results\], we make a number of general observations about VTSP. In particular, optimizing the racetrack trajectory of an optimal ETSP tour may not result in an optimal VTSP solution: the visit order is impacted by acceleration. Another key observation is that even if the speed is unbounded, one can easily compute a loose bound on the maximal speed to be considered in the search for an optimal solution, with important consequences for the computational complexity of the problem. In fact, we prove that [[VTSP]{}]{} is NP-hard under a natural parameterization (and therefore, in general), and in the other direction, that it polynomially reduces to [[GroupTSP]{}]{}, albeit with a significant blow-up in the input size. On the algorithmic side, we present in Section \[sec:algorithms\] a modular approach to address VTSP, based on an interactive scheme between a high-level algorithm and a trajectory oracle. The former is responsible for exploring the space of possible visit orders, while making queries to the latter to learn the cost (or full trajectory) associated with a given visit order. We present algorithms for both. The high-level algorithm adapts a known heuristic for ETSP, gradually improving the solution by generating a set of 2-permutations (swaps of two cities) until a local optimum is found. As for the oracle, we present an algorithm which adapts the A\* framework to multipoint paths in the configuration space, using an original cost function based on unidimensional projections of the cities’ coordinates. In Section \[sec:experiments\], we present a few experimental results based on this algorithmic framework. Beyond demonstrating the practicality of our algorithms, these results motivate the problem itself, by giving empirical evidence that the optimal trajectory resulting from an optimal ETSP tour is unlikely to be optimal for VTSP in most natural settings.
In particular, the probability that our algorithm improves upon such a trajectory seems to approach $1$ as the number of cities increases in a fixed area. Due to space constraints, some proofs (marked with [[$\bigstar$]{}]{}) are deferred to the appendix.

Model and definitions {#sec:model}
=====================

In this section, we present a generalized version of the racetrack model, highlighting some of its algorithmic features. Then, we define [[VectorTSP]{}]{} in full generality, making observations and presenting preliminary results that are used in the subsequent sections.

Generalized Racetrack model
---------------------------

Let us consider a mobile entity (hereafter, the [*vehicle*]{}), moving in a discrete or continuous Euclidean space $\mathbb{S}$ of some dimension $d$ (for example, $\mathbb{S}=\mathbb{Z}^2$ or $\mathbb{S}=\mathbb{R}^3$). The state of the vehicle at any time is given by a *configuration* $c$, which is a pair consisting of a position $pos(c)$ and a velocity $vel(c)$, both encoded as elements of $\mathbb{S}$. For example, if $\mathbb{S}=\mathbb{Z}^2$, then a configuration $c$ is of the form $((x,y),(dx,dy))$. Furthermore, we write $speed(c)$ for $||vel(c)||$. Given a configuration $c$, the set of configurations reachable from $c$ in a single time step, *i.e.*, the successors of $c$, is written ${\texttt{succ}\xspace}(c)$ and is model-dependent. The original model presented by Gardner [@gardner1973sim] corresponds to the case that $\mathbb{S} = \mathbb{Z}^2$, and given two configurations $c_i$ and $c_j$, written as above, $c_j \in {\texttt{succ}\xspace}(c_i)$ if and only if $x_j-(x_i+dx_i) \in \{-1,0,1\}$ and $dx_j=x_j - x_i$, and $y_j-(y_i+dy_i) \in \{-1,0,1\}$ and $dy_j=y_j - y_i$. In other words, the velocity of a configuration corresponds to the difference between its position and the position of the previous configuration, and this difference may vary by at most one unit in each dimension in one time step.
In the following, we refer to this model as the $9$-successor model, and to the case where at most one dimension can change in one time step as the $5$-successor model. These models can be naturally extended to continuous space, by considering that the set of successors is infinite, typically amounting to choosing a point in a $d$-sphere, as illustrated on Figure \[fig:trajectories\].

[Figure \[fig:trajectories\]: Discrete and continuous space racetrack models (left and right, respectively).]

A trajectory (of length $k$) is a sequence of configurations $c_1,c_2,...,c_k$.
It is called *valid* if $c_{i+1} \in {\texttt{succ}\xspace}(c_i)$ for all $i<k$. We define the inverse $c^{-1}$ of a configuration $c$ as the configuration that represents the same movement in the opposite direction. For example, if $\mathbb{S}=\mathbb{Z}^2$ and $c=((x,y),(dx,dy))$, then $c^{-1}=((x+dx,y+dy),(-dx,-dy))$. A successor function is *symmetrical* if $c_j \in {\texttt{succ}\xspace}(c_i)$ if and only if $c_i^{-1} \in {\texttt{succ}\xspace}(c_j^{-1})$. Intuitively, this implies that if $(c_1, c_2, \dots, c_k)$ is a valid trajectory, then $(c_k^{-1}, \dots, c_2^{-1}, c_1^{-1})$ is also a valid trajectory: the trajectory is *reversible*. All the models considered in this paper use symmetrical successor functions. ### Configuration space  \ The concept of [*configuration space*]{} is a powerful and natural tool in the study of racetrack-like problems. This concept was rediscovered many times and is now considered folklore. The idea is to consider the graph of configurations induced by the successor function as follows. Let ${\ensuremath{{\cal C}}\xspace}$ be the set of all possible configurations, then the [*configuration graph*]{} is the directed graph $G({\ensuremath{{\cal C}}\xspace})=(V, E)$ where $V={\ensuremath{{\cal C}}\xspace}$ and $E=\{(c_i,c_j) \in {\ensuremath{{\cal C}}\xspace}^2 : c_j \in {\texttt{succ}\xspace}(c_i)\}$. The configuration graph $G(\mathcal{C})$ is particularly useful when the number of successors of a configuration is bounded by a constant. In this case, $G(\mathcal{C})$ is sparse and one can search for optimal trajectories within it, using standard algorithms like breadth-first search (BFS). For example, in an $L \times L$ subspace of $\mathbb{Z}^2$, there are at most $L^2$ possible positions and $O(L)$ possible velocities (the speed cannot exceed $O(\sqrt{L})$ in each dimension without getting out of bounds [@blogernie]), thus $G(\mathcal{C})$ has $\Theta(L^3)$-many vertices and edges.
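Such a search can be sketched as follows (our own hedged sketch, assuming the $9$-successor model with positions restricted to a $[0,L) \times [0,L)$ board; `succ` is inlined for self-containment):

```python
from collections import deque
from itertools import product

def bfs_trajectory_length(start, goal, L):
    """Minimum number of steps between two configurations of Z^2
    in the 9-successor model, positions kept in [0, L) x [0, L)."""
    def succ(c):
        (x, y), (dx, dy) = c
        for ex, ey in product((-1, 0, 1), repeat=2):
            ndx, ndy = dx + ex, dy + ey
            nx, ny = x + ndx, y + ndy
            if 0 <= nx < L and 0 <= ny < L:
                yield ((nx, ny), (ndx, ndy))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        c = queue.popleft()
        if c == goal:
            return dist[c]
        for s in succ(c):
            if s not in dist:
                dist[s] = dist[c] + 1
                queue.append(s)
    return None  # goal unreachable within the board
```

For instance, traveling from position $(0,0)$ to $(3,0)$, starting and ending at velocity $(0,0)$, takes $4$ steps (velocities $1,1,1,0$ in the $x$-dimension).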
More generally: \[lem:BFS\] A breadth-first search (BFS) in an $L \times L$ subspace of $\mathbb{Z}^2$ can find an optimum trajectory between two given configurations in time $O(L^3)$. A similar observation leads to time $O(L^{9/2})$ in $\mathbb{Z}^3$, and more generally $O(L^{3d/2})$ in dimension $d$. Note that the presence of obstacles (if any) only results in the graph having possibly fewer vertices and edges. (We do not consider obstacles in this paper.) Definition of [[VectorTSP]{}]{} ------------------------------- Informally, [[VectorTSP]{}]{}is defined as the problem of finding a minimum-length trajectory (optimization version), or deciding if a trajectory of at most a given length exists (decision version), which visits a given unordered set of cities (points) in some Euclidean space, subject to racetrack-like physical constraints. As explained in the introduction, we consider additional parameters to the problem, which are (1) [*Visit speed*]{} $\nu$: the maximum speed at which a city can be visited; (2) [*Visit distance*]{} $\alpha$: the maximum distance at which a city can be visited; and (3) [*Vector completion*]{} $\beta$: ($true/false$) whether the visit distance is evaluated only at the coordinates of the configurations, or also in-between configurations. The first two parameters are already discussed in the introduction. The visit distance is actually similar in spirit to the [*TSP with neighborhoods*]{} [@arkin94]. The third parameter is more technical, although it could be motivated by having a specific action (sensing, taking pictures, etc.) being realized only at periodic times.
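Under these parameters, whether a single configuration visits a city can be sketched as follows (our own illustrative helper, covering only the case $\beta=false$, where the visit distance is checked at the configuration's coordinates):

```python
import math

def visits(c, city, nu, alpha):
    """Does configuration c visit `city`?  The vehicle's speed must
    be at most nu, and its position within distance alpha of the
    city (beta = false: only configuration coordinates are checked)."""
    (x, y), (dx, dy) = c
    speed_ok = math.hypot(dx, dy) <= nu
    dist_ok = math.hypot(x - city[0], y - city[1]) <= alpha
    return speed_ok and dist_ok
```

With the default setting ($\nu=\infty$, $\alpha=0$), this degenerates into requiring that the configuration's position coincide with the city.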
[Figure \[fig:parameters\]: a trajectory of vectors passing near a city (circle).]

Considering Figure \[fig:parameters\], if $\nu$ is $7$ or more, $\alpha$ is $2$ or more, and $\beta=false$, then the city (circle) is considered as visited by the middle red vector. If either $\nu < 7$, $\alpha < 2$, or $\beta = true$, the city is not visited. We are now ready to define [[VectorTSP]{}]{}. For simplicity, the definitions rely on discrete space ($\mathbb{S}=\mathbb{Z}^d$), to avoid technical issues with the representation of real numbers, in particular their impact on the input size. Similarly, we require the parameters $\nu$ and $\alpha$ to be integers and $\beta$ to be a boolean. However, the problem might be adaptable to continuous space without much complication, possibly with the use of a *real RAM* abstraction [@PS12]. [[[VectorTSP]{}]{}(decision version)]{} **Input:** A set of $n$ cities (points) $P \subseteq \mathbb{Z}^d$, a distinguished city $p_0 \in P$, two integer parameters $\nu$ and $\alpha$, a boolean parameter $\beta$, a polynomial-time-computable successor function [`succ`]{}, a positive integer $k$, and a trivial bound $\Delta$ encoded in unary.\ **Question:** Does there exist a valid trajectory ${\cal T}=(c_1, \dots, c_k)$ of length $k$ that visits all the cities in $P$, with $pos(c_1)=pos(c_k)=p_0$ and $speed(c_1)=speed(c_k)=0$? The role of parameter $\Delta$ is to guarantee that the length of the optimal trajectory is polynomially bounded in the size of the input. Without it, an instance of even two cities could be artificially hard due to the sole distance between them [@holzer2010computational; @blogernie].
As we will see, one can always find a (possibly sub-optimal) solution trajectory of $poly(L)$ configurations, where $L$ is the maximum distance between two points in any dimension, and similarly, a solution trajectory must have length at least $\sqrt{L}$. Therefore, writing $\Delta = $ `unary`$(\lfloor\sqrt{L}\rfloor)$ in the input is sufficient. The optimization version is defined analogously. [[[VectorTSP]{}]{}(optimization version)]{} **Input:** A set of $n$ cities (points) $P \subseteq \mathbb{Z}^d$, a distinguished city $p_0 \in P$, two integer parameters $\nu$ and $\alpha$, a boolean parameter $\beta$, a polynomial-time-computable successor function [`succ`]{}, and a trivial bound $\Delta$ encoded in unary.\ **Output:** Find a valid trajectory $T=(c_1,\dots, c_k)$ of minimum length visiting all the cities in $P$, with $pos(c_1)=pos(c_k)=p_0$ and $speed(c_1)=speed(c_k)=0$. **Tour *vs.* trajectory** (terminology): In the Euclidean TSP, the term [*tour*]{} denotes both the visit order and the actual path realizing the visit, because both coincide. In [[VectorTSP]{}]{}, a given visit order could be realized by many possible trajectories. To avoid ambiguities, we always refer to a visit order (*i.e.,* a permutation $\pi$ of $P$) as a [*tour*]{}, while reserving the term [*trajectory*]{} for the actual sequence of racetrack configurations. Furthermore, we denote by `racetrack(\pi)` an optimal (*i.e.*, min-length) racetrack trajectory realizing a given tour $\pi$ (irrespective of the quality of $\pi$). **Default setting:** In the rest of the paper, we call *default setting* the $9$-successor model in two-dimensional discrete space ($\mathbb{S}=\mathbb{Z}^2$), with unrestricted visit speed ($\nu=\infty$), zero visit distance ($\alpha = 0$), and non-restricted vector completion ($\beta=false$). Most of the results are however transposable to other values of the parameters and to higher dimensions.
Preliminary results {#sec:basic-results} =================== In this section we make general observations about [[VectorTSP]{}]{}, some of which are used in the subsequent sections. In particular, we highlight those properties which are distinct from Euclidean TSP. \[lem:start\] The starting city has an impact on the cost of an optimal solution. This fact is the reason why an input instance of [[VectorTSP]{}]{}is also parameterized by a starting city $p_0 \in P$. More generally, the cost of traveling between two given cities is impacted by the previous and subsequent positions of the vehicle and cannot be captured by a fixed cost, which is why [[VTSP]{}]{}does not straightforwardly reduce to classical TSP. The following fact strengthens the distinctive features of [[VTSP]{}]{}, showing that it does not straightforwardly reduce to ETSP either. \[lem:opt\_visit\_order\] Let $\mathcal{I}$ be a [[VTSP]{}]{}instance on a set of cities $P$, in the default setting. Let $\pi$ be an optimal tour for an <span style="font-variant:small-caps;">ETSP</span> instance on the same set of cities $P$, then `racetrack(\pi)` may not be an optimal solution to $\mathcal{I}$. 
[Figure: an optimal racetrack realization of the optimal ETSP tour (left) and an optimal VTSP trajectory visiting the same cities (right), both starting and ending at $p_0$.]

[*Example.*]{} Consider the following example, where the trajectories alternate between dashed red and plain blue vectors.
On the left picture, the trajectory corresponds to an optimal realization of the optimal ETSP tour $\pi$, starting and ending at $p_0$ (whence the final deceleration loop). It is not hard to see that this trajectory is indeed optimal for $\pi$. In contrast, an optimal VTSP trajectory visiting the same cities (right picture) would use two configurations fewer, based on a non-optimal tour $\pi'$ for ETSP. $\qed$ Hence, solving VTSP does not reduce to optimizing the trajectory of an optimal ETSP solution: the visit order is impacted. Furthermore, we observe the following property: \[fact:selfcross\] An optimal VTSP solution may self-cross. The configuration space can be bounded -------------------------------------- The spirit of the racetrack model is to focus on acceleration only, without bounding the speed. Nonetheless, we show here that a [[VectorTSP]{}]{}trajectory in general (and an optimal one in particular) can always be found within a certain subgraph of the configuration graph, whose size is polynomially bounded in the size of the input. These results are formulated in the default setting for any discrete $d$-dimensional space. \[lem:walk\] Let $P$ be a set of cities and $L$ be the largest distance in any dimension (over all $d$ dimensions) between two cities of $P$. Then a solution trajectory must contain at least $\sqrt L$ configurations. Furthermore, there always exists a solution trajectory of $O(L^d)$ configurations. The lower bound follows from the fact that it takes at least $\sqrt L$ configurations to travel a distance of $L$ (starting at speed $0$), the latter being a lower bound on the total distance to be traveled. The upper bound can be obtained by exploring all the points of the $d$-dimensional rectangular hull containing the cities in $P$ at unit speed, which amounts to $O(L^d)$ configurations.
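The arithmetic behind the lower bound can be checked directly: with unit acceleration from rest, $k$ steps cover at most $1+2+\cdots+k = k(k+1)/2$ units, so covering a distance $L$ forces $k \geq \sqrt{L}$ (a small self-contained sketch of our own):

```python
def min_steps_to_cover(L):
    """Fewest steps covering distance L starting from speed 0 and
    accelerating by at most one unit per step: the smallest k with
    k*(k+1)/2 >= L."""
    k = 0
    while k * (k + 1) // 2 < L:
        k += 1
    return k
```

For example, covering $L=100$ units requires $14$ steps, comfortably above $\sqrt{100}=10$.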
\[lem:bounded-configuration-graph\] An (optimal) trajectory for VTSP can be found in a subgraph of the configuration graph with polynomially many vertices and edges (in the size of the input), namely $O(L^{(d^2)})$. First observe that if there exists a trajectory of $O(L^d)$ configurations, then this bound also applies to an optimal trajectory. Now, we know that a trajectory corresponds to a path in $G(\mathcal{C})$, thus an optimal trajectory can be found within the subgraph of $G(\mathcal{C})$ induced by the vertices at distance at most $O(L^d)$ from the starting point, which consists of $O(L^{(d^2)})$ vertices in total. A glimpse at computational complexity ------------------------------------- Here, we present polynomial-time transformations from [[VectorTSP]{}]{}to other NP-hard problems and vice versa. Precisely, we establish NP-hardness of a particular parameterization of [[VectorTSP]{}]{}(and thus, of the general problem) where the visit speed $\nu$ is zero. The reduction is from [ExactCover]{} and is based on Papadimitriou’s proof of NP-hardness of <span style="font-variant:small-caps;">ETSP</span>. More interestingly, we present a general reduction from [[VectorTSP]{}]{}to [GroupTSP]{}. This reduction relies crucially on Lemma \[lem:bounded-configuration-graph\] above. ### NP-hardness of [[VectorTSP]{}]{}  \ Let ${\cal U}$ be a set of $m$ elements (the [*universe*]{}); the problem [ExactCover]{} takes as input a set ${\cal F}=\{F_i\}$ of $n$ subsets of ${\cal U}$, and asks if there exists ${\cal F'} \subseteq {\cal F}$ such that all sets in ${\cal F'}$ are [*disjoint*]{} and ${\cal F'}$ covers all the elements of ${\cal U}$. \[th:np-hard\] [ExactCover]{} reduces in polynomial time to [[VectorTSP]{}]{}with $\nu=0$.
The proof (see Appendix \[app:A\]) considers a particular parameterization of VTSP where the visit speed $\nu$ is $0$, the visit distance $\alpha$ is $0$, and the vector completion $\beta$ is arbitrary (though setting the visit speed to $0$ makes it *de facto* equivalent to $\beta=true$). It adapts Papadimitriou’s proof showing that <span style="font-variant:small-caps;">ETSP</span> is NP-hard [@papadimitriou1977euclidean]. Admittedly, the fact that Theorem \[th:np-hard\] relies on a visit speed $\nu=0$, although implying that [[VectorTSP]{}]{}in general is NP-hard, is not satisfactory. The more natural question is whether [[VectorTSP]{}]{}is NP-hard without constraining the visit speed (e.g. in the default setting). Unfortunately, no reduction was found despite significant efforts. \[open:NP-hard\] Is [[VectorTSP]{}]{}NP-hard in the particular case of the default setting? ### Transformation from [[VectorTSP]{}]{}to [GroupTSP]{}  \ Here, we show that [[VTSP]{}]{}reduces in polynomial time to the so-called [[GroupTSP]{}]{}(also known as [SetTSP]{} or [GeneralizedTSP]{}), where the input is a set of cities partitioned into groups, and the goal is to visit at least one city in each group. \[lem:GTSP\] <span style="font-variant:small-caps;">VTSP</span> reduces to <span style="font-variant:small-caps;">Group TSP</span> in polynomial time in the size of the input. Let [$\cal I$]{}be the original [[VTSP]{}]{}instance and $n$ the number of cities in [$\cal I$]{}. Each city in [$\cal I$]{}can be visited in a number of different ways, each corresponding to a different configuration in ${\ensuremath{{\cal C}}\xspace}$ (the set of all possible configurations). The strategy is to create a city in [$\cal I$]{}’ for each configuration that visits at least one city in [$\cal I$]{}, and group them according to which city of [$\cal I$]{}they visit (the other configurations are discarded).
Thus, visiting a city in each group of [$\cal I$]{}’ corresponds to visiting all cities in [$\cal I$]{}. Depending on the parameters of the model (visit speed, visit distance, vector completion), it may happen that the same configuration visits several cities in [$\cal I$]{}, which implies that the groups may overlap; however, Noon and Bean show in [@noon1993efficient] that a GTSP instance with overlapping groups can be transformed into one with mutually exclusive groups at the cost of creating $k$ copies of a city when it appears originally in $k$ different groups. Thus we proceed without worrying about overlaps. Let $X$ be the set of cities in [$\cal I$]{}, and ${\ensuremath{{\cal C}}\xspace}(x) \subseteq {\ensuremath{{\cal C}}\xspace}$ be the configurations which visit city $x \in X$. Instance [$\cal I$]{}’ is defined by creating a city for each configuration in $\cup_{x \in X} {\ensuremath{{\cal C}}\xspace}(x)$ and a group for each ${\ensuremath{{\cal C}}\xspace}(x)$. An arc is added between all pairs $(c_1, c_2)$ of cities in [$\cal I$]{}’ such that $c_1$ and $c_2$ belong to different groups; the weight of this arc is the distance between $c_1$ and $c_2$ in the configuration graph. Thus, a trajectory using $k$ configurations to visit all the cities in [$\cal I$]{}corresponds to a tour of cost $k$ visiting at least one city in each group in [$\cal I$]{}’. The fact that the reduction is polynomial (both in time and space) follows from the facts that (1) there is a polynomial number of relevant configurations (Lemma \[lem:bounded-configuration-graph\]), each one being copied at most $n$ times; and (2) the distance between two configurations in the configuration graph can be computed in polynomial time (Observation \[lem:BFS\]). Note that the reduction described in Lemma \[lem:GTSP\] implies a prohibitive blow-up in the number of cities.
However, it is general in terms of the parameters: any combination of $\nu$, $\alpha$, and $\beta$ only impacts the set of vectors that visit each city. Algorithms {#sec:algorithms} ========== In this section, we present an algorithmic framework for finding acceptable solutions to VTSP in practical polynomial time. It is based on an interaction between a high-level part that decides the visit order (tour), and a trajectory oracle that evaluates its cost. Exploring visit orders (`FlipVTSP`) {#sec:high-level-algorithm} ----------------------------------- A classical heuristic for ETSP is the so-called `2-opt` algorithm [@croes1958method], also known as `Flip`. It is a local search algorithm which starts with an arbitrary tour $\pi$. In each step, all the possible $2$-permutations (*i.e.,* swaps of two cities, or simply flips) of the current tour $\pi$ are generated. If such a flip $\pi'$ improves upon $\pi$, it is selected and the algorithm recurses on $\pi'$. Eventually, the algorithm finds a local optimum that is commonly admitted to be of reasonable quality, albeit without guarantees (the name `2-opt` does not reflect an approximation ratio; it stands for 2-permutation local optimality). Adapting this algorithm seems like a natural option for the high-level part of our framework. The main differences between our algorithm, called `FlipVTSP`, and its ETSP analogue are that (1) the cost of a tour is not evaluated in terms of distance, but in terms of the required number of racetrack configurations (through calls to the oracle); (2) the tours involving self-crosses are not discarded (see Fact \[fact:selfcross\]); and (3) the number of recursions is polynomially bounded because new tours are considered only in case of improvement, and the length of a trajectory is itself polynomially bounded (Lemma \[lem:walk\]). The resulting tour is a local optimum with respect to 2-permutations, also known as a 2-optimal tour.
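This high-level loop can be sketched as follows (a simplified sketch of our own; `oracle_cost` stands in for the trajectory oracle and is here an arbitrary cost function over tours):

```python
def flip_vtsp(tour, oracle_cost):
    """2-opt-style local search where candidate tours are scored by
    the racetrack oracle instead of Euclidean length.  A flip is a
    swap of two cities, as in the text; the first city (the starting
    city p0) is kept fixed."""
    best = list(tour)
    best_cost = oracle_cost(best)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best)):
            for j in range(i + 1, len(best)):
                cand = list(best)
                cand[i], cand[j] = cand[j], cand[i]
                cost = oracle_cost(cand)
                if cost < best_cost:   # recurse only on strict improvement
                    best, best_cost = cand, cost
                    improved = True
    return best, best_cost
```

Because each accepted flip strictly decreases an integer cost bounded by a polynomial (Lemma \[lem:walk\]), the number of rounds is itself polynomially bounded.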
For completeness, the algorithm is given by Algorithm \[algo:2-opt\] in the appendix. One can find a 2-optimal tour for VTSP in time $O(n^2L^d\tau(n, L))$, where $n$ is the number of cities, $L$ the largest distance between cities in a dimension, $d$ the number of dimensions, and $\tau(n, L)$ the running time complexity of the oracle for computing the cost of an optimal racetrack trajectory visiting the $n$ cities. \[theorem:2-opt\] Optimal racetrack given a fixed visit order (`Multipoint A*`) {#app:A-star} ------------------------------------------------------------- Here, we discuss the problem of computing an optimal racetrack trajectory that visits a set of points *in a given order*. A previous work of interest is Bekos *et al.* [@bekos2018algorithms], which addresses the problem of computing an optimal racetrack trajectory in a so-called “Indianapolis” track, where the track has a certain width and right-angle turns. This particular setting limits the maximum speed at the turns, which makes it possible to decompose the computation in a dynamic programming fashion. In contrast, the space is open in VTSP, with no simple way to bound the maximum speed. Therefore, we propose a different strategy based on searching for an optimal path in the configuration graph using A\*. *The problem:* Given an ordered sequence of points $\pi=(p_1, p_2, \dots, p_n)$, compute (the cost of) an optimal trajectory realizing $\pi$, *i.e.,* visiting the points in order, starting at $p_1$ and ending at $p_n$ at zero speed. (In the particular case of VTSP, $p_1$ and $p_n$ coincide.) Finding the optimal trajectory between *two* configurations already suggests the use of path-finding algorithms like BFS, Dijkstra, or A\* (see e.g. [@schmid2005vector] and [@bekos2018algorithms]). The difficulty in our case is to force the path to visit all the intermediary points in order, despite the fact that the space is open.
Our contribution here is to design a cost function that guides A\* through these constraints. In general, A\* explores the search space by generating the successors of the current “position” (in our case, configuration) and estimating the cost of each successor using a problem-specific function. The successors are then inserted into a data structure (in general, a priority queue) which makes it easy to continue the exploration from the globally best-estimated position. The great feature of A\* is that it is guaranteed to find an optimal path, provided that the cost function never over-estimates the actual cost, and it does so all the faster as the estimation is precise. ### Cost estimation. {#costestimation} For simplicity, we first present how the estimation works relative to the entire tour. Then we explain how to generalize it for estimating an arbitrary intermediate configuration in the trajectory (i.e. one that has already visited a certain number of cities and is located at a given position with given velocity).
The key insight is that the optimal trajectory, whatever it is, must obey some pattern in each dimension.

[Figure \[fig:projection\]: a five-city tour together with its projections onto the $x$- and $y$-axes; the turning points of each projection are marked.]
Consider, for example, the tour $\pi=\{(5, 10), (10, 12), (14, 7), (8, 1), (3, 5), (5, 10)\}$ shown on Figure \[fig:projection\]. In the $x$-dimension, the vehicle must move at least from city $1$ to city $3$, then stop at a *turning point*, change direction, and travel towards city $5$, then stop and change direction again, and travel back to city $1$. Thus, any trajectory realizing $\pi$ can be divided into *at least* three subtrajectories in the $x$-dimension, whose cost is *at least* the cost of traveling along these segments, starting and ending at speed $0$ at the turning points. Thus, in the above example, the vehicle must travel at least along distances $9$, $11$, and $2$ (with zero speed at the endpoints), which gives a cost of at least $16$ (*i.e.,* $6$, $7$, and $3$ configurations, respectively). The same analysis can be performed in each dimension; the actual cost must then be *at least the maximum* among these dimensional costs, which is therefore the value we consider as the estimation. In general, the configurations whose estimation is required by A\* are more general than in the above case: such a configuration has an arbitrary position and velocity, and the vehicle may already have visited a number of cities. Therefore, the number of visited cities is stored alongside a configuration, and the dimensional cost is evaluated against the remaining sub-tour. The only technical difference is that one must carefully take into account the current position and velocity when determining where the next turning point is in the dimensional projection, which however poses no significant difficulty. Concretely, a case-based study of the initial configuration with respect to the first turning point allows one to reduce the estimation to the particular case where the initial speed is zero (possibly at a different starting position). Consequently, the total cost amounts to a sum of costs between consecutive pairs of turning points, with zero speed at these points.
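In one dimension, the cost (in configurations) of traveling a distance $D$ between two consecutive turning points can be sketched as follows (our own helper, reproducing the accounting of the example above: costs $6$, $7$, $3$ for distances $9$, $11$, $2$):

```python
def segment_cost(D):
    """Configurations needed to travel distance D in one dimension
    between two turning points (unit acceleration).  With k moves,
    the maximal reachable distance is m*m when k = 2m-1 (speeds
    1,...,m,...,1) and m*m + m when k = 2m; the cost counts the k+1
    configurations of the segment."""
    if D == 0:
        return 1
    k = 1
    while True:
        m = (k + 1) // 2
        reach = m * m if k % 2 else m * m + m
        if reach >= D:
            return k + 1
        k += 1
```

The dimensional estimate of a sub-tour is then the sum of segment costs between its consecutive turning points, and the overall estimate is the maximum over the dimensions.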
The cost estimation of a subtour $\pi'= c, p_i, ..., p_n$, where $c$ is the current configuration and $p_i, \ldots, p_n$ is a suffix of $\pi$, can be computed in $O(n)$ time. \[lem:est\_compl\] As explained, the subtour is first reduced to a subtour $\pi'' = p_{i-1}, p_i, \ldots, p_n$. The turning points in $\pi''$ are easily identified through a pass over $\pi''$. Their number is at most $n$ because they are a subset of the points in $\pi''$. Finally, the cost between each pair of selected turning points can be computed in constant time [@bekos2018algorithms] (if one neglects the encoding size of an integer representing a coordinate). The reader is referred to [@bekos2018algorithms] for more on computing the cost between two configurations in one dimension. Let us now discuss the running time complexity of the resulting algorithm. In general, A\* can have an exponential running time in the solution depth (thus, length of the trajectory). It is however possible, in our case, to make it polynomial. The A\* oracle runs in polynomial time, more precisely in time $\widetilde{O}(L^{(d^2)}n^2)$. \[theorem:oracle\_compl\] A “configuration” of the A\* algorithm (let us call it a state, to avoid ambiguity) is made of a racetrack configuration $c$ together with a number $k$ of visited cities. There are at most $O(L^{(d^2)})$ configurations (Lemma \[lem:bounded-configuration-graph\]) and $n$ cities, thus A\* will perform at most $O(L^{(d^2)}n)$ iterations, provided that it does not explore a state twice. Given that the states are easily orderable, the latter condition can be enforced by storing all the visited states in an ordered collection that is searchable and insertable in logarithmic time (whence the $\widetilde{O}$ notation). Finally, each state is estimated in $O(n)$ time (Lemma \[lem:est\_compl\]). The combined use of `FlipVTSP` and `Multipoint A*` thus runs in polynomial time (Theorem \[theorem:2-opt\] and Theorem \[theorem:oracle\_compl\]).
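To make the state structure concrete, here is a schematic sketch of our own (not the paper's implementation) of the search over states $(c, k)$; for brevity the estimation is stubbed out (zero heuristic), which degrades A\* into a breadth-first search over states but preserves the state space described above (default setting, $[0,L) \times [0,L)$ board):

```python
from collections import deque
from itertools import product

def multipoint_shortest(points, L):
    """Length (in configurations) of a shortest trajectory visiting
    `points` in order, starting at points[0] and ending at points[-1],
    both at speed 0 (9-successor model, nu = infinity, alpha = 0).
    A state is a pair (configuration, number of cities visited)."""
    def succ(c):
        (x, y), (dx, dy) = c
        for ex, ey in product((-1, 0, 1), repeat=2):
            ndx, ndy = dx + ex, dy + ey
            nx, ny = x + ndx, y + ndy
            if 0 <= nx < L and 0 <= ny < L:
                yield ((nx, ny), (ndx, ndy))
    n = len(points)
    start = ((points[0], (0, 0)), 1)   # the start config visits points[0]
    dist = {start: 1}                  # cost counted in configurations
    queue = deque([start])
    while queue:
        state = queue.popleft()
        (pos, vel), k = state
        if k == n and pos == points[-1] and vel == (0, 0):
            return dist[state]
        for c in succ((pos, vel)):
            nk = k + 1 if (k < n and c[0] == points[k]) else k
            ns = (c, nk)
            if ns not in dist:
                dist[ns] = dist[state] + 1
                queue.append(ns)
    return None
```

Replacing the FIFO queue by a priority queue keyed by `dist + estimate`, with the dimensional-projection estimate above, yields the actual `Multipoint A*`.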
We now present a way to make the oracle algorithm even faster, if one is willing to trade optimality for performance.

### A faster heuristic using limited views.

The above A\* algorithm always finds the optimum, but in practice, it only scales up to medium-sized instances. If one is willing to lose some precision, then a simple trick (also used in the Indianapolis case [@bekos2018algorithms]) can be used to scale linearly with the number of cities. The idea is to compute limited sequential sections of the trajectory and subsequently glue them together. Concretely, given a tour $\pi = p_1, \ldots, p_n$, the limited view heuristic runs A\* on a sliding window of fixed length $l$ (typically 5 or 6) over $\pi$. For each offset $i$ of the window, the trajectory is computed from $p_i$ to $p_{i+l}$ (or to $p_n$, if fewer than $l$ cities remain). Then, of the computed trajectory, only the subtrajectory $T_i$ from $p_i$ to $p_{i+1}$ is retained; the offset advances to $i+1$ and A\* is run again, using the last configuration of $T_i$ as the initial configuration. Finally, the algorithm returns the concatenation of the $T_i$s.

Experiments and conclusion {#sec:experiments}
==========================

In this section, we present a few experiments whose goal is to (1) validate the algorithmic framework described in Section \[sec:algorithms\], and (2) motivate the VTSP problem itself, by quantifying the discrepancy between ETSP and VTSP. The instances were generated by distributing cities uniformly at random within a given square area. For each instance, `Concorde` [@concorde] was used to obtain the reference optimal ETSP tour $\pi$. The optimal trajectory $T$ realizing this tour was computed using `Multipoint A*` (with complete view). Then, `FlipVTSP` explored the possible flips (with limited view) until a local optimum was found. An example is shown on Figure \[fig:improved-tour\] (right), resulting from $2$ flips on an optimal ETSP tour (left). Finding these flips is left as an exercise.
[0.51]{} ![\[fig:improved-tour\]Example of tour improvement.](tour_before.png "fig:"){width="4.45cm"}    [0.48]{} ![\[fig:improved-tour\]Example of tour improvement.](tour_after.png "fig:"){width="5.2cm"}

Such an outcome is not rare. Figure \[fig:varying-results\] shows some measures when varying (1) the number of cities in a fixed area; (2) the size of the area for a fixed number of cities; and (3) both at constant density. For performance, only the flips which did not deteriorate the tour distance by too much were considered (15%, empirically). Thus, the plots tend to *under-estimate* the impact of VTSP (they already do so, by considering only *local* optima, and a *limited view* in the flip phase).

[0.33]{} ![](figs/new_plot_n.png "fig:"){width="4.8cm"}   [0.33]{} ![](figs/new_plot_space.png "fig:"){width="4.8cm"}   [0.33]{} ![](figs/new_plot_both.png "fig:"){width="4.8cm"}

The results suggest that an optimal ETSP tour becomes less likely to be optimal for VTSP as the number of cities increases (in a fixed area). The size of the area for a fixed number of cities (here, $10$) does not seem to have a significant impact. Somewhat logically, scaling both parameters simultaneously (at constant density) seems to favor VTSP as well. Further experiments should be performed for a finer understanding. However, these results are sufficient to confirm that VTSP is a specific problem. We hope that these results and the others from this article will motivate future investigations of this problem.

Appendix
========

Basic observations
------------------

The starting city has an impact on the cost of an optimal solution. This can be seen on a small example, with $P=\{(0,0), (1,0), (2,0)\}$ in the default setting.
Starting at $(0,0)$, a solution exists with $7$ configurations (*i.e.,* $6$ vectors), namely [$T=({\ensuremath{({(0,0),\hspace{-1pt}(0,0)})}},$ ${\ensuremath{({(1,0),\hspace{-1pt}(1,0)})}},$ ${\ensuremath{({(2,0),\hspace{-1pt}(1,0)})}},$ ${\ensuremath{({(2,0),\hspace{-1pt}(0,0)})}},$ ${\ensuremath{({(1,0),\hspace{-1pt}(-1,0)})}},$ ${\ensuremath{({(0,0),\hspace{-1pt}(-1,0)})}},$ ${\ensuremath{({(0,0),\hspace{-1pt}(0,0)})}})$]{} (see the left picture). In contrast, if the tour starts at $(1,0)$, the vehicle will have to decelerate three times instead of two (right picture), which gives a trajectory of $8$ configurations ($7$ vectors).

Hardness results {#app:A}
----------------

In this section, we give the complete proof of Theorem \[th:np-hard\], namely:

[ExactCover]{} reduces in polynomial time to [[VectorTSP]{}]{} with visit speed $\nu=0$.

The proof goes through a number of intermediate steps until Corollary \[cor:9-successors\], which is actually Theorem \[th:np-hard\]. Let us first recall the definition of [ExactCover]{}. Let ${\cal U}$ be a set of $m$ elements (the [*universe*]{}); the problem [ExactCover]{} takes as input a set ${\cal F}=\{F_i\}$ of $n$ subsets of ${\cal U}$, and asks whether there exists ${\cal F'} \subseteq {\cal F}$ such that all sets in ${\cal F'}$ are [*disjoint*]{} and ${\cal F'}$ covers all the elements of ${\cal U}$. For example, if ${\cal U}=\{1, 2, 3\}$ and ${\cal F}=\{\{1,2\}, \{3\}, \{2,3\}\}$, then ${\cal F}'= \{\{1,2\},\{3\}\}$ is a valid solution, but $\{\{1,2\},\{2,3\}\}$ is not. Given an instance [$\cal I$]{} of [ExactCover]{}, the proof shows how to construct an instance [$\cal I$]{}’ of <span style="font-variant:small-caps;">VTSP</span> such that [$\cal I$]{} admits a solution if and only if there is a valid trajectory visiting all the cities of [$\cal I$]{}’ using at most a certain number of configurations. We first give the high-level ideas of the proof, which are shared with Papadimitriou’s proof for ETSP.
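As a side remark, the [ExactCover]{} condition is straightforward to check mechanically; the sketch below (an illustration, not part of the reduction) reproduces the example above.

```python
def is_exact_cover(universe, chosen):
    """True iff the chosen subsets are pairwise disjoint and their union
    is exactly the universe."""
    sets = [set(s) for s in chosen]
    union = set().union(*sets) if sets else set()
    # Disjointness: the sizes add up only if no element is covered twice.
    pairwise_disjoint = sum(len(s) for s in sets) == len(union)
    return pairwise_disjoint and union == set(universe)

U = {1, 2, 3}
print(is_exact_cover(U, [{1, 2}, {3}]))     # -> True  (valid solution)
print(is_exact_cover(U, [{1, 2}, {2, 3}]))  # -> False (2 is covered twice)
```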
Then, we explain the details of their adaptation to VTSP (with visit speed $\nu=0$).

### High-level description

The instance ${\ensuremath{\cal I}\xspace}'$ is composed of several types of gadgets, representing respectively the subsets $F_i\in {\cal F}$ and the elements of ${\cal U}$ (with some repetition). For each $F_i$, a subset gadget $C_i$ is created, which consists of a number of cities placed horizontally (wavy horizontal segments in Figure \[fig:papa\_construction\]). For now, it is sufficient to know that each gadget can be traversed optimally in exactly two possible ways (without considering direction), which ultimately corresponds to including (traversal 1) or excluding (traversal 2) subset $F_i$ in the [ExactCover]{} solution. The $C_{i}$’s are located one below the other, starting with $C_1$ at the top. Between every two consecutive gadgets $C_i$ and $C_{i+1}$, copies of [*element*]{} gadgets are placed, one for each element in ${\cal U}$; thus the element gadgets $H_{ij}$ are indexed by both $1 \leq i \leq n-1$ and $1 \leq j \leq m$ (see Figure \[fig:papa\_construction\]). The element gadgets are also made of a number of cities, whose particular organization is described later on. Finally, every subset gadget $C_i$ above or below an element gadget representing element $j$ is slightly modified in a way that represents whether $F_i$ contains element $j$ or not. Intuitively, a tour visiting all the cities must choose between inclusion or exclusion of each $F_i$ ([*i.e.*]{}, traversal 1 or 2 for each $C_i$). An element $j \in {\cal U}$ is considered as covered by a subset $F_i$ if $C_i$ does [*not*]{} visit any of the adjacent element gadgets representing $j$. Each element gadget $H_{i,j}$ must be visited either from above (from $C_i$) or from below (from $C_{i+1}$).
Now, the number of subset gadgets is $n$, the number of element gadgets for each element is $n-1$ (one between every two consecutive subset gadgets), and the construction guarantees that at most one element gadget for each element $j\in {\cal U}$ is visited from a subset gadget $C_i$ (otherwise the tour is non-optimal). These three properties collectively imply that for each element $j\in {\cal U}$, there is exactly one subset gadget $C_i$ that does not visit any of the element gadgets representing $j$.

*(Figure \[fig:papa\_construction\]: schematic of the construction; the subset gadgets $C_1, \ldots, C_n$ are stacked vertically, and a row of element gadgets $H_{i1}, \ldots, H_{im}$ lies between every two consecutive gadgets $C_i$ and $C_{i+1}$.)*

In summary, the tour proceeds from the top left corner through the $C_i$s (in order), visiting all the $H_{i,j}$ through local detours. So long as a $C_i$ visits an $H_{i,j}$ (thus, from above), element $j$ has not yet been selected in the [ExactCover]{} solution. Element $j$ is covered by subset $F_i$ in the [ExactCover]{} solution if $C_i$ is the first subset gadget that does [*not*]{} visit the corresponding $H_{i,j}$ (which must eventually happen), after which all the subsequent $H_{k,j}$ (for $k \geq i$) will necessarily be visited (i.e., not covered again) from below, by the corresponding $C_{k+1}$. The details of the construction specify the internal organization of each gadget (the positions of the cities composing it) and the spacing between the cities, in such a way that a tour is optimal if and only if it obeys this global traversal without shortcutting in non-authorized ways. In particular, the local configuration of $C_i$ above or below element gadgets makes it impossible for $C_i$ to avoid the visit of $H_{i,j}$ unless $j \in F_i$ (or unless $j$ has already been covered by another subset, i.e., $H_{i-1,j}$ is not yet visited). Setting the visiting speed $\nu = 0$ is crucial for controlling (indeed, cancelling) the impact of acceleration, so as to force the optimal trajectory to follow exactly the same pattern as in Papadimitriou’s proof. Admittedly, the spirit of the VTSP problem is undermined by such a proof, which remains unsatisfactory and motivates Open question .
The details of our adaptation specify the corresponding intra-gadget spacing between cities and the spacing between the gadgets. Most of the consecutive cities in the tour are actually separated by only one or two space units, which cancels out the benefits of accelerating. The few exceptions are between subset gadgets and the adjacent element gadgets, where the speed can get arbitrarily large depending on the distance chosen. We choose a distance close to the original distance of 20 units, resulting in a maximum speed of $5$ space units. The proportions in the spacing imply that this has no impact on the visit order [*w.r.t.*]{} Papadimitriou’s tour.

### Technical aspects

This section describes how to reduce an <span style="font-variant:small-caps;">ExactCover</span> instance to a <span style="font-variant:small-caps;">VTSP</span> instance with visit speed $\nu=0$ and visit distance $\alpha = 0$ (the vector completion $\beta$ being meaningless, since the vehicle must stop in each city). For simplicity, the reduction is first formulated in the $5$-successor model, *i.e.,* the speed can change in only one dimension at a time (Theorem \[theorem:np\_hard\]). This constraint is subsequently relaxed to the $9$-successor function through a geometrical trick (see Corollary \[cor:9-successors\]).

The following definitions are from Papadimitriou [@papadimitriou1977euclidean]. A subset $P'$ of the set of cities is an *$a$-component* if for all $p \in P'$ we have $\texttt{min}(\texttt{cost}(p, p') : p' \not\in P') \geq a$ and $\texttt{max}(\texttt{cost}(p, p') : p' \in P') < a$, and $P'$ is maximal *w.r.t.* these properties. A *$k$-trajectory* for a set of cities is a set of $k$ open (non-closed) trajectories that together visit all the cities. A valid trajectory for a <span style="font-variant:small-caps;">VTSP</span> instance is thus a closed (or cyclic) 1-trajectory.
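To make the $a$-component condition concrete, here is a small checker (maximality aside), with a user-supplied `cost` function. This is an illustrative sketch with assumed inputs, not part of the reduction.

```python
def is_a_component(points, sub, cost, a):
    """Check Papadimitriou's a-component condition for `sub` within
    `points`: every city of `sub` is at cost >= a from every city outside
    it, and at cost < a from the other cities of `sub`.
    (Maximality is not checked in this sketch.)"""
    sub = frozenset(sub)
    rest = frozenset(points) - sub
    for p in sub:
        if rest and min(cost(p, q) for q in rest) < a:
            return False        # too close to a city outside the component
        others = sub - {p}
        if others and max(cost(p, q) for q in others) >= a:
            return False        # too far from a city inside the component
    return True

# 1-D toy example with cost = distance: {0, 1} is a 5-component of
# {0, 1, 10, 11}, whereas {0, 10} is not.
dist = lambda x, y: abs(x - y)
print(is_a_component([0, 1, 10, 11], [0, 1], dist, 5))   # -> True
print(is_a_component([0, 1, 10, 11], [0, 10], dist, 5))  # -> False
```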
A subset of cities is *$a$-compact* if, for all positive integers $k$, an optimal $k$-trajectory has cost less than the cost of an optimal $(k+1)$-trajectory plus $a$. Note that $a$-components are trivially $a$-compact.

Suppose we have $N$ $a$-components $P_1, \ldots, P_N \subseteq P$ such that the cost to connect any two components through a trajectory is at least $2a$, and $P_0$, the remaining part of $P$, is $a$-compact. Suppose that no optimal 1-trajectory of this instance contains a vector between two $a$-components. Let $K_1, \ldots, K_N$ be the costs of the optimal 1-trajectories of $P_1, \ldots, P_N$ and $K_0$ the cost of the optimal $(N+1)$-trajectory of $P_0$. If there is a 1-trajectory $T$ of $P$ consisting of the union of an optimal $(N+1)$-trajectory of $P_0$, $N$ optimal 1-trajectories of $P_1, \ldots, P_N$, and $2N$ trajectories of cost $a$ connecting $a$-components to $P_0$, then $T$ is optimal. If no such 1-trajectory exists, the optimal 1-trajectory of $P$ has a cost greater than $K = K_0 + K_1 + \ldots + K_N + 2Na$. \[lem:papa\_lemma\]

Consider the *1-chain* structure presented in Figure \[fig:1-chain\]. This structure is composed of cities positioned on a line, at distance one from one another. 1-chains can bend at 90-degree angles, and only one optimal 1-trajectory exists, with a cost of $2(n-1)$ vectors for a 1-chain of length $n$.
*(Figure \[fig:1-chain\]: a 1-chain, i.e., cities on a line at distance $1$ from one another, possibly bending at right angles, together with its optimal 1-trajectory and its schematic representation.)*

Next, consider the structure in Figure \[fig:2-chain\], referred to as a *2-chain*. The distance between the leftmost (or rightmost) city and its nearby cities is $\sqrt{2}$, and the closest distance between the other cities is $2$. The important thing to notice here is that there exist only two distinct optimal 1-trajectories, denoted as mode 1 and mode 2, both of cost $3n + 11$ for a 2-chain of length $n$.
*(Figure \[fig:2-chain\]: a 2-chain, its two optimal traversals (mode 1 and mode 2), and its schematic representation, labeled $C_i$.)*

*(Figure \[fig:H\]: the element gadget $H$, made of two columns of $2 \times 2$ city blocks with corner regions labeled $A, A', B, B', C, C', D, D'$; neighboring cities in a block are at distance $2$, blocks in a column are $4$ apart, and the two columns are $12$ apart. Its schematic representation is a box labeled $H_{ij}$.)*

Among all 1-trajectories for $H$ (see Figure \[fig:H\]) having as endpoints two of the cities $A, A', B, B', C, C', D, D'$, there are 4 optimal 1-trajectories, namely those with endpoints $(A, A')$, $(B,B')$, $(C,C')$, $(D,D')$, which all have a cost of 77 vectors. \[lem:symmetric\_return\]

We are now ready to prove Theorem \[theorem:np\_hard\] using the above definitions and gadgets.
[ExactCover]{} reduces in polynomial time to [[VectorTSP]{}]{} with visit speed $\nu=0$ and visit distance $\alpha=0$, in the $5$-successor model. \[theorem:np\_hard\]

The aforementioned structures are combined to construct a <span style="font-variant:small-caps;">VTSP</span> instance from a given <span style="font-variant:small-caps;">Exact Cover</span> instance. Construct the structure shown in Figure \[fig:VTSP\_construction\], where $n$ is the number of subsets given in the corresponding <span style="font-variant:small-caps;">Exact Cover</span> instance, and $m$ the number of elements in the universe.

(TikZ code for Figure \[fig:VTSP\_construction\] omitted: endpoints $Q$ and $R$, the 2-chains $C_1, \dots, C_n$ arranged in rows, the structures $H_{11}, \dots, H_{(n-1)m}$ placed between consecutive rows, and the connecting 1-chains, with annotated distances $804$, $402$, $29$ and $27$.)

The 2-chains represent the subsets in <span style="font-variant:small-caps;">Exact Cover</span>, and the $H$ structures indirectly represent the elements of the universe. Finally, for every 2-chain $C_i$, replace the cities positioned directly above or below an $H$ by one of two structures, depending on the elements in $C_i$’s corresponding subset. If the subset contains the element corresponding to the $H$ above (or below), then replace them by structure $A$ (see Figure \[fig:A\]), otherwise by structure $B$ (see Figure \[fig:B\]). The idea is to make it costly to visit an $H$ above or below from a structure $A$ traversed in mode 1.
(TikZ code for Figures \[fig:A\] and \[fig:B\] omitted: the structures $A$ and $B$ embedded in a 2-chain, together with their two possible traversals, with annotated distances $4$ and $12$.)

We observe that now the optimal cost to connect two $k$-paths between some 2-chain $C_i$ and some $H_{ij}$ (or $H_{(i-1)j}$) is 10 vectors, whereas the optimal cost to connect any two $k$-paths between two $H_{ij}$ is at least 40 vectors. Moreover, this optimal cost of 10 vectors between some 2-chain $C_i$ and some $H_{ij}$ can only be attained by a trajectory along a straight vertical line, thanks to the precise distance of 25. Deviating even the slightest bit from the vertical line would result in a non-optimal cost.

The construction of the <span style="font-variant:small-caps;">VTSP</span> instance is now complete. It should be clear that an optimal 1-trajectory must have $Q$ and $R$ as endpoints. This construction meets the hypotheses of Lemma \[lem:papa\_lemma\] with $a = 10$, $N = m(n-1)$, $K_1 = \dots = K_N = 77$ and $K_0 = 1257mn +4m + 557n +24p +1464$, where $p$ is the sum of the cardinalities of all given subsets of the <span style="font-variant:small-caps;">Exact Cover</span> instance.\

We examine when this structure has an optimal 1-trajectory $T$, as described in the lemma. $T$ traverses all 1-chains in the obvious way, and each 2-chain in one of its two traversals. Since its portion on $P_0$ has to be optimal, $T$ must visit a component $H$ from any configuration $B$ encountered, and it must return (by Observation \[lem:symmetric\_return\]) to the symmetric city of $B$, since its portion on $H$ must be optimal too. If $T$ encounters a configuration $A$ and the corresponding chain is traversed in traversal 2, $T$ will also visit a component $H$.
However, if the corresponding chain is traversed in traversal 1, $T$ will traverse $A$ without visiting any configuration $H$, since all trajectories connecting $P_0$ and $H$ components must be of cost $a$. Moreover, this must happen exactly once for each column of the structure, since there are $n-1$ copies of $H$ and $n$ structures $A$ or $B$ in each column. Hence, if we take the fact that $C_j$ is traversed in traversal $1$ (*resp.* traversal $2$) to mean that the corresponding subset is (*resp.* is not) contained in the <span style="font-variant:small-caps;">Exact Cover</span> solution, we see that the existence of a 1-trajectory $T$, as described in Lemma \[lem:papa\_lemma\], implies that the <span style="font-variant:small-caps;">Exact Cover</span> instance admits a solution. Conversely, if the <span style="font-variant:small-caps;">Exact Cover</span> instance admits a solution, we assign, as above, traversals to the chains according to whether or not the corresponding subset is included in the solution. It is then possible to exhibit a 1-trajectory $T$ meeting the requirements of Lemma \[lem:papa\_lemma\]. Hence the structure at hand has a 1-trajectory of cost no more than $K = 1354mn -93m + 557n + 24p + 1464$ if and only if the given instance of <span style="font-variant:small-caps;">Exact Cover</span> is solvable. Finally, to obtain a valid <span style="font-variant:small-caps;">VTSP</span> trajectory, connect both endpoints $Q$ and $R$ in Figure \[fig:VTSP\_construction\] with a 1-chain, and increase $K$ accordingly.

\[cor:9-successors\] [ExactCover]{} reduces in polynomial time to [[VectorTSP]{}]{} with visit speed $\nu=0$ and visit distance $\alpha=0$, in the $9$-successor model.
The proof for the $9$-successor model is the same as for the $5$-successor model, except that the whole created <span style="font-variant:small-caps;">VTSP</span> instance ${\ensuremath{\cal I}\xspace}'$ is tilted by $45\degree$ (the direction does not matter) and distances are scaled by $\sqrt{2}$. The value of $K$ is unchanged. This modification transposes the limitations of the $5$-successor model to the $9$-successor model. Indeed, due to the careful choice of distances involved, if one wishes to visit the cities optimally, one only needs to consider the outermost accelerations (diagonals) of the $9$-successor version, as well as the null speed before turning (since different diagonals in the $9$-successor model cannot directly succeed one another). Note that a similar geometrical trick might be used to adapt the proof to further settings, such as continuous space with the continuous $d$-sphere successor function, as depicted in Figure  (for $\mathbb{R}^2$).

Algorithmic details {#app:B}
===================

High-level 2-opt algorithm {#sec:detailed-algorithm}
--------------------------

The pseudo-code for the high-level algorithm discussed in Section \[sec:high-level-algorithm\] is shown in Algorithm \[algo:2-opt\]. It is essentially equivalent to the $2$-opt algorithm for ETSP, except that the cost of a candidate tour is evaluated by the oracle described in Section \[app:A-star\].

Input: a set $P$ of cities.\
Output: a $2$-optimal tour w.r.t. the racetrack model.
$\pi_{opt} \gets \texttt{init}(P)$\
$C_{opt} \gets \texttt{oracle(}\pi_{opt}\texttt{)}$\
$improved \gets \texttt{true}$\
**while** $improved$ **do**\
  $improved \gets \texttt{false}$\
  **for** all pairs $(i, j)$ with $i < j$ **do**\
   $\pi_{test} \gets \texttt{flip(}\pi_{opt}, i, j\texttt{)}$\
   $C_{test} \gets \texttt{oracle(}\pi_{test}\texttt{)}$\
   **if** $C_{test} < C_{opt}$ **then**\
    $\pi_{opt} \gets \pi_{test}$; $C_{opt} \gets C_{test}$; $improved \gets \texttt{true}$; $\texttt{break}$ (out of both loops, restarting the scan)\
**return** $\pi_{opt}$

One can find a 2-optimal tour for VTSP in time $O(n^2L^d\tau(n, L))$, where $n$ is the number of cities, $L$ the largest distance between cities in a dimension, $d$ the number of dimensions, and $\tau(n, L)$ the running time of the oracle computing the cost of an optimal racetrack trajectory visiting the $n$ cities.

As explained in (the proof of) Lemma \[lem:walk\], if the visit order is not imposed, then one can easily find a trajectory of length $O(L^d)$ that visits all the cities by walking over the entire area (the rectangular hull containing the cities). Let $\pi$ be the order in which the cities are visited by such a walk, shifted circularly so as to set the starting city to the desired one. This tour is the one returned by the `init()` function. Then $C_{opt}$ is accordingly initialized with cost $O(L^d)$ in line 2. The factor $L^d$ in the complexity formula then follows from the fact that the main loop iterates only if a shorter trajectory is found, which can occur at most as many times as the length of the initial trajectory. Then, in each iteration, up to $O(n^2)$ flips are generated (in constant time each), with a nested call to the oracle. All the other operations take constant time under the standard arithmetic abstractions.
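As a concrete illustration, the high-level loop of Algorithm \[algo:2-opt\] can be sketched in Python. The `oracle` argument stands in for the racetrack-trajectory cost oracle of Section \[app:A-star\]; here it is replaced, for testing purposes, by a plain Euclidean tour length. All names (`two_opt`, `euclid_cost`, the `init` callable) are placeholders for this sketch, not the paper's actual implementation.

```python
import math

def two_opt(cities, oracle, init):
    """Generic 2-opt: repeatedly reverse a tour segment and keep the change
    whenever the oracle reports a strictly smaller cost."""
    tour = init(cities)
    cost = oracle(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]  # flip(tour, i, j)
                c = oracle(cand)
                if c < cost:
                    tour, cost, improved = cand, c, True
                    break  # restart the scan, as in the pseudo-code
            if improved:
                break
    return tour

def euclid_cost(tour):
    """Stand-in oracle: closed-tour Euclidean length (NOT the racetrack cost)."""
    return sum(math.dist(tour[k], tour[(k + 1) % len(tour)])
               for k in range(len(tour)))
```

On the four corners of the unit square given in a crossing order, the sketch uncrosses the tour and returns the perimeter-4 cycle; swapping `euclid_cost` for a trajectory-cost oracle gives the algorithm analysed above.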
---
abstract: |
    We study a new inflation potential in the framework of the *Randall-Sundrum type 2* Braneworld model. Using the technique developed in [@Sanchez2007], we consider both a monomial and a new inflation potential and apply the slow-roll approximation in the high-energy limit to derive analytical expressions for the relevant perturbation spectrum. We show that for some values of the parameter $n$ of the potential $V\left( \phi \right) =V_{0}-\frac{1}{2}m^{2}\phi ^{2}+\frac{\alpha }{2n}\phi ^{2n}$ we obtain a perturbation spectrum which is in good agreement with the recent WMAP5 observations.

    **Keywords:** *RS Braneworld, new inflation potential, perturbation spectrum, WMAP5.*

    [PACS numbers: 98.80. Cq]{}
author:
- |
    R. Zarrouki$^{1}$, Z. Sakhi$^{2,3}$ and M. Bennai$^{1,3}$[^1]\
    $^{\mathit{1}}$[L.P.M.C,]{} [Faculté des Sciences Ben M’sik, B.P. 7955, Université Hassan II-Mohammedia, Casablanca, Maroc.]{}\
    $^{\mathit{2}}$[LISRI, Faculté des Sciences Ben M’Sik, Université Hassan II-Mohammedia, Casablanca, Maroc,]{}\
    $^{3}$[Groupement National de Physique des Hautes Energies, Focal point, LabUFR-PHE, Rabat, Morocco.]{}
title: WMAP5 Observational Constraints on a Braneworld New Inflation Model
---

Introduction
============

Recently the Braneworld scenario[@braneworld1; @braneworld2; @braneworld3] has become a central paradigm of modern inflationary cosmology. Standard inflation has been studied extensively and was confirmed early on by observations[@COBE]. Brane inflation has been proposed to address important cosmological problems such as dark energy[@darkenergy], tachyonic inflation[@taychons] or black-hole systems[@BHRS1; @BHRS2]. Further motivation comes from observations of the accelerating universe[@accelerating], as well as from interpretations of these phenomena in terms of scalar-field dynamics.
Generally, scalar fields arise naturally in various particle-physics theories, including string/M-theory, and are expected to play a fundamental role in inflation[@mtheory; @scalar; @field]. In the *Randall-Sundrum* model[@RSII], which is one of the most studied models, our four-dimensional universe is considered as a 3-brane embedded in a five-dimensional anti-de Sitter space-time ($AdS_5$), while gravity can propagate in the bulk. The simplest inflationary model studied in the context of the *Randall-Sundrum* scenario is chaotic inflation[@Maartens], but in view of the recent WMAP observations[@WMAP03; @WMAP07; @WMAP5], more general models must be studied. In this work, we are interested in a new inflationary model in the framework of *Randall-Sundrum* Braneworld inflation, in relation with the recent WMAP5 data[@WMAP5], for both monomial and new inflation potentials. We start in section 2 by recalling the foundations of Braneworld inflation, namely the modified *Friedmann* equations and the various inflationary perturbation-spectrum parameters. In section 3 we present our results for both the monomial and the new inflation models. We apply the slow-roll approximation in the high-energy limit to derive the various perturbation-spectrum parameters for these models. We show that for some values of the parameter $n$ of the potential $V\left( \phi \right) =V_{0}-\frac{1}{2}m^{2}\phi ^{2}+\frac{\alpha }{2n}\phi ^{2n}$, we obtain a perturbation spectrum which is in good agreement with the recent WMAP5 observations. A conclusion and perspectives of this work are given in the last section.

Slow-roll Braneworld inflation
==============================

Randall-Sundrum model
---------------------

We start this section by recalling briefly some fundamentals of the *Randall-Sundrum* type II Braneworld model[@RSII]. In this model, our universe is supposed to live on a brane embedded in an anti-de Sitter (AdS) five-dimensional bulk spacetime.
One of the most relevant consequences of this model is the modification of the Friedmann equation for energy densities of the order of the brane tension, and also the appearance of an additional term, usually interpreted as a dark-radiation term. When the dark-radiation term is neglected, the gravitational Einstein equations lead to the modified *Friedmann* equation on the brane[@Maartens] $$H^{2}={\frac{8\pi }{3M_{pl}^{2}}}\rho \left[ 1+{\frac{\rho }{2\lambda }}\right]$$where $\lambda$ is the brane tension, $H$ is the *Hubble* parameter and $M_{pl}$ is the Planck mass. It is clear that the crucial correction to standard inflation is given by the quadratic density term $\rho ^{2}$. The brane effect is thus carried here by the deviation factor $\rho /2\lambda$ with respect to unity. This deviation modifies the dynamics of the universe for densities $\rho \gtrsim \lambda $. Note also that in the limit $\lambda \rightarrow \infty$ we recover the standard four-dimensional inflation results. In inflationary theory, the energy density $\rho$ and pressure $p$ are expressed in terms of the inflaton potential $V(\phi )$ as $\rho =\frac{1}{2}\dot{\phi}^{2}+V(\phi )$ and $p=\frac{1}{2}\dot{\phi}^{2}-V(\phi )$, where $\phi$ is the inflaton field. The scalar potential $V(\phi )$, depending on the scalar field $\phi$, plays a fundamental role and represents the initial vacuum energy responsible for inflation. Along with this equation, one also has the *Klein-Gordon* equation governing the dynamics of the scalar field $\phi$ $$\ddot{\phi}+3H\dot{\phi}+V^{\prime }(\phi )=0$$This second-order evolution equation follows from the conservation of the energy-momentum tensor $T_{\mu \nu }$. To calculate physical quantities such as the scale factor or the perturbation spectrum, one has to solve equations (1,2) for specific potentials $V(\phi )$.
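Equations (1) and (2) form a closed system once $V(\phi)$ is chosen, and can be integrated numerically. The sketch below does so for an illustrative quadratic potential $V=\frac{1}{2}m^{2}\phi^{2}$, in units where $8\pi/(3M_{pl}^{2})=1$; the values of $m$, $\lambda$, the initial field and the step size are arbitrary demonstration choices, not fitted parameters.

```python
import math

def V(phi, m=1.0):
    return 0.5 * m**2 * phi**2       # illustrative quadratic potential

def dV(phi, m=1.0):
    return m**2 * phi                # V'(phi)

def evolve(phi0=10.0, lam=10.0, dt=1e-3, steps=10_000):
    """Integrate the Klein-Gordon equation (2) with H given by the brane
    Friedmann equation (1), in units where 8*pi/(3*M_pl^2) = 1."""
    phi, phidot = phi0, 0.0
    out = []
    for _ in range(steps):
        rho = 0.5 * phidot**2 + V(phi)                    # energy density
        H = math.sqrt(rho * (1.0 + rho / (2.0 * lam)))    # Eq. (1)
        phidot += (-3.0 * H * phidot - dV(phi)) * dt      # Eq. (2), semi-implicit Euler
        phi += phidot * dt
        out.append((phi, H))
    return out

hist = evolve()
```

The field rolls down the potential while the Hubble rate decreases monotonically; the $\rho/2\lambda$ factor makes the early (high-energy, $\rho\gtrsim\lambda$) Hubble friction stronger than in the standard four-dimensional case.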
To do so, the slow-roll approximation was introduced and applied by many authors to derive the inflationary perturbation spectrum[@liddle-2003]. Slow-Roll approximation and perturbation spectrum on brane ---------------------------------------------------------- Inflationary dynamics requires that the inflaton field $\phi $ driving inflation moves away from the false vacuum and slowly rolls down to the minimum of its effective potential $V(\phi )$[@linde2005]. In this scenario, the initial value $\phi _{i}=\phi \left( t_{i}\right) $ of the inflaton field and the Hubble parameter $H$ are supposed large, and the scale factor $a\left( t\right) $ of the universe grows rapidly. Applying the slow-roll approximation, $\dot{\phi}^{2}\ll V$ and $\ddot{\phi}\ll V^{\prime }$, to the brane field equations (1,2), we obtain: $$H^{2}\simeq {\frac{8\pi V}{3M_{4}^{2}}}\left( 1+{\frac{V}{2\lambda }}\right) \,,\qquad \dot{\phi}\simeq -{\frac{V^{\prime }}{3H}}.$$Note that the slow-roll approximation puts a constraint on the slope and the curvature of the potential. This is clearly seen from the field expressions of the $\epsilon $ and $\eta $ parameters given by[@Maartens]  $$\begin{aligned} \epsilon &=&-\frac{\overset{\cdot }{H}}{H^{2}}\equiv {\frac{M_{4}^{2}}{4\pi }}\left( {\frac{V^{\prime }}{V}}\right) ^{2}\left[ \frac{\lambda (\lambda +V)}{(2\lambda +V)^{2}}\right] , \\ \eta &=&\frac{V^{\prime \prime }}{3H^{2}}\equiv {\frac{M_{4}^{2}}{4\pi }}\left( {\frac{V^{\prime \prime }}{V}}\right) \left[ \frac{\lambda }{2\lambda +V}\right] .\end{aligned}$$The slow-roll approximation holds if these parameters are such that $\mathrm{max}\{\epsilon ,|\eta |\}\ll 1$, and the inflationary phase ends when $\epsilon $ or $\left\vert \eta \right\vert $ becomes equal to one.
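As a numerical sanity check on the brane-corrected slow-roll parameters above, the following sketch (our illustration, not from the paper; a hypothetical quadratic potential and units with $M_{4}=1$) verifies that the correction bracket reduces to the standard four-dimensional result $\epsilon \simeq (M_{4}^{2}/16\pi)(V^{\prime }/V)^{2}$ when $V\ll \lambda$, and that $\epsilon$ is suppressed when $V\gg \lambda$:

```python
import numpy as np

def brane_slow_roll(V, dV, d2V, phi, lam, M4=1.0):
    """Slow-roll parameters on the brane (eqs. 4-5):
    eps = (M4^2/4pi)(V'/V)^2 * lam(lam+V)/(2lam+V)^2
    eta = (M4^2/4pi)(V''/V) * lam/(2lam+V)"""
    v, vp, vpp = V(phi), dV(phi), d2V(phi)
    eps = (M4**2 / (4*np.pi)) * (vp/v)**2 * lam*(lam + v) / (2*lam + v)**2
    eta = (M4**2 / (4*np.pi)) * (vpp/v) * lam / (2*lam + v)
    return eps, eta

# Hypothetical quadratic potential V = m^2 phi^2 / 2 (illustration only)
m = 1e-3
V, dV, d2V = (lambda p: 0.5*m**2*p**2), (lambda p: m**2*p), (lambda p: m**2)

phi = 10.0
eps_low, eta_low = brane_slow_roll(V, dV, d2V, phi, lam=1e6)    # V << lambda
eps_high, _      = brane_slow_roll(V, dV, d2V, phi, lam=1e-12)  # V >> lambda

# Standard 4D result for comparison
eps_std = (1.0/(16*np.pi)) * (dV(phi)/V(phi))**2
```

At high energies the bracket behaves as $\lambda /V$, so $\epsilon$ is suppressed relative to its low-energy value, in line with the high-energy suppression of the spectral index discussed below.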
Another important inflationary quantity is the number of e-folds $N_{e}$ which, in the slow-roll approximation, reads $$N_{e}\simeq -{\frac{8\pi }{M_{4}^{2}}}\int_{\phi _{\mathrm{i}}}^{\phi _{\mathrm{f}}}{\frac{V}{V^{\prime }}}\left( 1+{\frac{V}{2\lambda }}\right) d\phi .$$ where $\phi _{i}$ and $\phi _{f}$ stand for the initial and final values of the inflaton. Before proceeding, it is interesting to comment on the low and high energy limits of these parameters. Note that at low energies, where $V\ll \lambda $, the slow-roll parameters take their standard form. At high energies, $V\gg \lambda $, the extra contribution to the Hubble expansion dominates. The number of e-folds in this case becomes $N_{e}\simeq -\frac{4\pi }{\lambda M_{4}^{2}}\int_{\phi _{i}}^{\phi _{f}}\frac{V^{2}}{V^{\prime }}d\phi .$ The inflationary perturbation spectrum is produced by quantum fluctuations of the fields around their homogeneous background values. Thus the scalar amplitude $A_{\QTR{sc}{s}}^{2}$ of density perturbations, evaluated by neglecting the back-reaction due to metric fluctuations in the fifth dimension, is given by[@Maartens] $$A_{\QTR{sc}{s}}^{2}\simeq \left. \left( {\frac{512\pi }{75M_{4}^{6}}}\right) {\frac{V^{3}}{V^{\prime 2}}}\left[ {\frac{2\lambda +V}{2\lambda }}\right] ^{3}\right\vert _{k=aH}.$$Note that for a given positive potential, the amplitude $A_{\QTR{sc}{s}}^{2}$ is increased in comparison with the standard result.
In the high energy limit this quantity behaves as$$A_{S}^{2}\simeq \frac{64\pi }{75\lambda ^{3}M_{4}^{6}}\frac{V^{6}}{V^{^{\prime 2}}}.$$On the other hand, using eqs. (4,5), one can compute the scale dependence of the perturbations, described by the spectral index $n_{\QTR{sc}{s}}\equiv 1+d\left( \ln A_{\QTR{sc}{s}}^{2}\right) /d\left( \ln k\right) $, and find$$n_{\QTR{sc}{s}}-1\simeq 2\eta -6\epsilon \,.$$Note that at high energies, $V\gg \lambda $, the slow-roll parameters are both suppressed and the spectral index is driven towards the Harrison-Zel’dovich spectrum: $n_{\QTR{sc}{s}}\rightarrow 1$ as $V/\lambda \rightarrow \infty $. In what follows, we shall apply the above Braneworld formalism by singling out two specific kinds of inflaton potentials. These are the monomial and new inflation potentials recently studied by Boyanovsky et al.[@Sanchez2006] in standard inflation. Perturbation spectrum in Braneworld New inflation ================================================= To begin, recall that the chaotic inflationary model, first introduced by Linde[@Linde82], has been reconsidered recently by several authors in the context of the Braneworld scenario[@Maartens; @Paul]. In the present work, we are interested in other types of inflation potentials. The authors of [@Sanchez2006] have shown that combining the WMAP data with the slow-roll expansion constrains the inflaton potential to have the form $$V(\phi )=N_{e}M^{4}w\left( \chi \right)$$where $M$ is the inflation energy scale, determined by the amplitude of the scalar adiabatic fluctuations[@M] to be $M\sim 0.00319$ $M_{Pl}=0.77\times 10^{16}GeV$, and where a dimensionless rescaled field variable is introduced$$\chi =\frac{\phi }{\sqrt{N_{e}}M_{pl}}.$$Here $N_{e}$ is the number of e-folds.
In this new notation, the slow-roll parameters become $$\begin{aligned} \epsilon &=&\frac{\lambda }{4\pi N_{e}^{2}M^{4}}\left( \frac{w^{\prime }\left( \chi \right) ^{2}}{w\left( \chi \right) ^{3}}\right) \\ \eta &=&\frac{\lambda }{4\pi N_{e}^{2}M^{4}}\left( \frac{w^{\prime \prime }\left( \chi \right) }{w\left( \chi \right) ^{2}}\right)\end{aligned}$$where the prime stands for the derivative with respect to $\chi :w^{\prime }=\frac{dw}{d\chi }$ and $w^{\prime \prime }=\frac{d^{2}w}{d\chi ^{2}}$. The perturbation parameters are now expressed in terms of the new variable as $$N_{e}=\frac{1}{-\frac{4\pi }{\lambda }M^{4}\int_{\chi _{c}}^{\chi _{end}}\frac{w\left( \chi \right) ^{2}}{w^{\prime }\left( \chi \right) }d\chi }$$where $\chi _{c}$ is the value of $\chi $ corresponding to $N_{e}$ e-folds before the end of inflation, and $\chi _{end}$ is the value of $\chi $ at the end of inflation. The other perturbation quantities are also calculated in terms of $w\left( \chi \right) $ as $$A_{\QTR{sc}{s}}^{2}=\frac{64\pi N_{e}^{5}M^{16}}{75\lambda ^{3}M_{pl}^{4}}\left( \frac{w\left( \chi \right) ^{6}}{w^{\prime }\left( \chi \right) ^{2}}\right) ,$$The spectral index and the running index are now respectively written in the following form $$\begin{aligned} n_{s}-1 &=&\frac{\lambda }{4\pi N_{e}^{2}M^{4}}\left( 2\frac{w^{\prime \prime }\left( \chi \right) }{w\left( \chi \right) ^{2}}-6\frac{w^{\prime }\left( \chi \right) ^{2}}{w\left( \chi \right) ^{3}}\right) , \\ \frac{dn_{s}}{d\ln k} &=&-\frac{\lambda ^{2}}{8\pi ^{2}M^{8}N_{e}^{4}}\left( -\frac{8w^{\prime \prime }\left( \chi \right) w^{\prime }\left( \chi \right) ^{2}}{w\left( \chi \right) ^{5}}+9\frac{w^{\prime }\left( \chi \right) ^{4}}{w\left( \chi \right) ^{6}}+\frac{w^{\prime }\left( \chi \right) w^{\prime \prime \prime }\left( \chi \right) }{w\left( \chi \right) ^{4}}\right)\end{aligned}$$ Finally, the ratio of tensor to scalar perturbations $r$ reads $$r=\frac{6\lambda }{\pi N_{e}^{2}M^{4}}\left( \frac{w^{\prime }\left( \chi \right)
^{2}}{w\left( \chi \right) ^{3}}\right) .$$In what follows, we will determine all these inflationary perturbation spectrum parameters at $\chi =\chi _{c}$ for a monomial and a new inflation potential, and compare our results with recent $WMAP$ experimental data in the latter case. Monomial potential ------------------ Let us begin with a monomial potential which generalizes the chaotic one. Chaotic inflation was mainly studied in the context of standard[@Linde82], brane[@Maartens; @Paul] and, recently, Chaplygin inflation on the brane[@Chaplygin; @inflation]. Here, we consider a general potential of the form $$V\left( \phi \right) =\frac{\alpha }{2n}\phi ^{2n},$$where $\alpha $ and $n$ are constants. In terms of $\chi$, we get $$w\left( \chi \right) =\frac{\chi ^{2n}}{2n}$$where we have used $$M^{4}=\alpha N_{e}^{n-1}M_{pl}^{2n}$$Thus, the slow-roll parameters are given by $$\begin{aligned} \varepsilon &=&\frac{2\lambda n^{3}}{\pi N_{e}^{2}M^{4}}\left( \frac{1}{\chi _{c}^{2n+2}}\right) \\ \eta &=&\frac{\lambda n^{2}\left( 2n-1\right) }{\pi N_{e}^{2}M^{4}}\left( \frac{1}{\chi _{c}^{2n+2}}\right)\end{aligned}$$and the scalar spectral index $n_{s}$ and the ratio $r$ are respectively expressed as $$\begin{aligned} n_{s}-1 &=&-\frac{2\lambda n^{2}}{\pi N_{e}^{2}M^{4}}\left( 4n+1\right) \left( \frac{1}{\chi _{c}^{2n+2}}\right) \\ r &=&48\frac{\lambda n^{3}}{\pi N_{e}^{2}M^{4}}\left( \frac{1}{\chi _{c}^{2n+2}}\right)\end{aligned}$$ Finally the running index takes the following expression $$\frac{dn_{s}}{d\ln k}=-\frac{4\lambda ^{2}}{\pi ^{2}M^{8}N_{e}^{4}\chi _{c}^{4n+4}}\left( 4n^{6}+5n^{5}+n^{4}\right)$$Inflation ends at $\chi _{_{end}}=0$; thus the value of the dimensionless field $\chi _{c}$, $N_{e}$ e-folds before the end of inflation, is $$\chi _{c}^{2n+2}=\frac{\lambda n^{2}\left( 2n+2\right) }{\pi N_{e}M^{4}}$$Using this result, the various perturbation parameters are obtained in terms of the potential parameter $n$ and the e-fold number $N_{e}$ $$\begin{aligned} \varepsilon &=&\frac{n}{N_{e}\left(
n+1\right) }\text{ },\text{\ \ \ \ \ \ \ \ \ \ \ }\eta =\text{\ }\frac{2n-1}{2N_{e}\left( n+1\right) },\text{\ \ \ \ } \\ n_{s}-1 &=&-\frac{4n+1}{N_{e}\left( n+1\right) },\text{ \ \ \ \ \ \ \ \ \ \ }r=24\frac{n}{N_{e}\left( n+1\right) }\text{ \ },\text{ \ } \\ \text{\ }\frac{dn_{s}}{d\ln k} &=&-\frac{\left( 4n^{2}+5n+1\right) }{N_{e}^{2}\left( n+1\right) ^{2}}\end{aligned}$$ It would be interesting to study the variation of these perturbation parameters as a function of the potential parameter $n$, and to compare the results to recent WMAP5 observations. In the following, we do this for a more general new inflation potential. New inflation model ------------------- Consider now a new inflation potential of the form[@Sanchez2007] $$w\left( \chi \right) =w_{0}-\frac{1}{2}\chi ^{2}+\frac{g}{2n}\chi ^{2n}\text{\ }$$ where $w_{0}$ and the coupling $g$ are dimensionless. In ref. [@Sanchez2007] the authors used this model in standard inflation and showed that for lower values of $n$ the results reproduce the observations. In the present work, we derive new results for all the known inflation spectrum parameters, but in the context of the *Randall-Sundrum* Braneworld model.
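The closed-form monomial results above depend only on $n$ and $N_{e}$, so they are easy to tabulate. The sketch below (our illustration, not the authors' code) evaluates them and checks the internal consistency relations $n_{s}-1=2\eta -6\epsilon$ and $r=24\epsilon$; for $n=1$, $N_{e}=50$ it reproduces the chaotic-potential values $n_{s}=0.95$ and $r=0.24$ quoted later for the black dot:

```python
def monomial_spectrum(n, Ne=50):
    """Perturbation spectrum for V = alpha*phi^(2n)/(2n), high-energy limit."""
    eps = n / (Ne*(n + 1))
    eta = (2*n - 1) / (2*Ne*(n + 1))
    ns  = 1 - (4*n + 1) / (Ne*(n + 1))
    r   = 24*n / (Ne*(n + 1))
    run = -(4*n**2 + 5*n + 1) / (Ne**2 * (n + 1)**2)
    return eps, eta, ns, r, run

for n in (1, 2, 3):
    eps, eta, ns, r, run = monomial_spectrum(n)
    # consistency with n_s - 1 = 2*eta - 6*eps and r = 24*eps
    assert abs((ns - 1) - (2*eta - 6*eps)) < 1e-12
    assert abs(r - 24*eps) < 1e-12
```

Note that the running is always small and negative, of order $1/N_{e}^{2}$, for any $n > 0$.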
The new inflation model described by the dimensionless potential given by eq. $(31)$ has a minimum at $\chi _{_{0}}$, which is the solution of the conditions $$w^{\prime }\left( \chi _{0}\right) =w\left( \chi _{0}\right) =0$$These conditions yield $$g=\frac{1}{\chi _{0}^{2n-2}}\text{ \ },\text{ \ \ \ \ \ \ \ \ \ }w_{0}=\frac{\chi _{0}^{2}}{2n}\left( n-1\right)$$Using these results, equation $(31)$ becomes $$w\left( \chi \right) =\frac{\left( n-1\right) }{2n}\chi _{0}^{2}-\frac{\chi ^{2}}{2}+\frac{\chi _{0}^{2-2n}}{2n}\chi ^{2n}$$$\chi _{0}$ determines the symmetry breaking scale $\phi _{0}$ of the inflaton potential upon the rescaling of eq. $(11)$, namely $$\phi _{0}=\sqrt{N_{e}}M_{pl}\chi _{0}$$It is convenient to introduce the dimensionless variable $$x=\frac{\chi }{\chi _{0}}$$Then, from eq. $(36)$, the potential of the inflation model eq. $(34)$ takes the form $$w\left( x\right) =\ \frac{\chi _{0}^{2}}{2n}\left[ n\left( 1-x^{2}\right) +x^{2n}-1\right] ,\text{ \ \ \ \ \ \ \ \ \ \ \ \ broken symmetry}$$Inflation ends when the inflaton field arrives at the minimum of the potential. As shown by *Linde*[@linde2005], the symmetry is broken for a non-vanishing minimum of the potential.
Thus, for our new inflation model eq. $(34)$, inflation ends at $$\chi _{end}=\chi _{0}$$In terms of the new variable $x$, the condition eq. $(14)$ becomes $$1=\frac{\pi N_{e}M^{4}\chi _{0}^{4}}{\lambda n^{2}}I_{n}\left( X\right)$$where $$I_{n}\left( X\right) =\int_{X}^{1}\frac{\left( n\left( 1-x^{2}\right) +x^{2n}-1\right) ^{2}}{\left( 1-x^{2n-2}\right) }\frac{dx}{x}$$and $$\ \ X=\frac{\chi _{c}}{\chi _{0}}$$For fields near the minimum, $X\longrightarrow 1^{-}$, the integral $I_{n}(X)$ obviously vanishes, and by expanding the potential (eq. 34) near the minimum $\chi _{0}$ $\left( n>1\right) $ we obtain $$w\left( \chi \right) \sim \frac{\left( 2n-2\right) \left( \chi -\chi _{0}\right) ^{2}}{2}$$This approximate expression of the potential (eq. 42) allows us to recover the expression of the monomial potential (eq. 20) for $n=1$ by the simple shift $$\chi \longrightarrow \sqrt{\left( 2n-2\right) }\left( \chi -\chi _{0}\right) ;\text{\ \ \ \ \ \ \ \ \ }\left( n>1\right)$$So we can determine all the inflationary perturbation spectrum parameters near the minimum $X=1.$ Therefore, for $X\sim 1$, the quadratic monomial is an excellent approximation to the family of higher-degree potentials. The slow-roll parameters become, in terms of the variable $X$, $$\begin{aligned} \varepsilon &=&\frac{2nI_{n}\left( X\right) }{N_{e}}\frac{\left( -X+X^{2n-1}\right) ^{2}}{\left( n\left( 1-X^{2}\right) +X^{2n}-1\right) ^{3}}\text{ \ } \\ \text{ \ \ }\eta &=&\frac{I_{n}\left( X\right) }{N_{e}}\frac{\left( -1+\left( 2n-1\right) X^{2n-2}\right) }{\left( n\left( 1-X^{2}\right) +X^{2n}-1\right) ^{2}}\end{aligned}$$In the following, we study the variation of these parameters as a function of $X$ by numerical calculations for $N_{e}=50.$ Figures 1 and 2 show the increasing behavior of the two functions $\varepsilon $ and $\eta $ for small values of $X$; for large $X$, both functions become constant.
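The integral $I_{n}(X)$ and the slow-roll parameters above are straightforward to evaluate numerically. The sketch below (plain trapezoidal quadrature with $N_{e}=50$ as in the figures; our illustration, not the authors' code) computes $I_{n}(X)$, $\varepsilon (X)$ and $\eta (X)$. For $n=2$ the integrand simplifies to $(1-x^{2})^{3}/x$, so $I_{2}(X)$ has the closed form $\left[\ln x-\tfrac{3}{2}x^{2}+\tfrac{3}{4}x^{4}-\tfrac{1}{6}x^{6}\right]_{X}^{1}$, which serves as a cross-check:

```python
import numpy as np

def I_n(n, X, num=200_001):
    """I_n(X) by the trapezoidal rule; the x = 1 endpoint is excluded since the
    integrand vanishes there but evaluates to 0/0 numerically."""
    x = np.linspace(X, 1.0 - 1e-8, num)
    f = (n*(1 - x**2) + x**(2*n) - 1)**2 / ((1 - x**(2*n - 2)) * x)
    return 0.5*np.sum((f[1:] + f[:-1]) * np.diff(x))

def slow_roll_X(n, X, Ne=50):
    """eps(X) and eta(X) in terms of the variable X."""
    I = I_n(n, X)
    w = n*(1 - X**2) + X**(2*n) - 1
    eps = (2*n*I/Ne) * (-X + X**(2*n - 1))**2 / w**3
    eta = (I/Ne) * (-1 + (2*n - 1)*X**(2*n - 2)) / w**2
    return eps, eta
```

For $n=2$, $X=0.5$ this gives $\varepsilon \approx 7\times 10^{-3}$ and a small negative $\eta$, comfortably inside the slow-roll regime, consistent with the wide inflationary window visible in figures 1 and 2.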
We can remark again that a large domain of variation of $X$ satisfies the conditions of inflation, since during inflation we have $\epsilon \ll 1$ and $\mid \eta \mid \ll 1.$ On the other hand, observations combining the WMAP5, BAO$\left( \text{Baryon Acoustic Oscillations}\right) $ and SN$\left( \text{Type Ia supernovae}\right) $ data[@WMAP5] yield $$n_{s}=0.960_{\text{ }-0.013}^{\text{ }+0.014}\text{ \ }(95\%\text{ CL})\text{ }$$ $$r<0.20\text{ }(95\%\text{ CL})\text{ \ \ \ \ \ \ \ \ \ \ \ }$$ $$-0.0728<\frac{dn_{s}}{dlnk}<0.0087\text{\ \ \ \ }(95\%\text{ CL})\text{ \ \ }$$ In our variable $X$, the spectral index becomes$$n_{s}-1=\frac{2I_{n}\left( X\right) }{N_{e}\left( n\left( 1-X^{2}\right) +X^{2n}-1\right) ^{2}}\left[ -6n\frac{\left( -X+X^{2n-1}\right) ^{2}}{\left( n\left( 1-X^{2}\right) +X^{2n}-1\right) }-1+\left( 2n-1\right) X^{2n-2}\right]$$ In figure 3, we plot this parameter as a function of $X$. We can remark that the window of consistency with the WMAP5+BAO+SN data narrows for growing $n.$ Thus we obtain a good potential expression for small $n<3$ and large $X$. Another spectrum parameter is the ratio $r$, given by$$r=48\frac{nI_{n}\left( X\right) }{N_{e}}\frac{\left( -X+X^{2n-1}\right) ^{2}}{\left( n\left( 1-X^{2}\right) +X^{2n}-1\right) ^{3}}$$Figure 4 shows the same behavior as figure 1, since $r=24\epsilon .$ Note that the observational result is reproduced for small values of $X$, where the three curves almost coincide. To confront simultaneously the observables $r$ and $n_{s}$ with observation, it is interesting to study the relative variation of these parameters. In figure 5 we have plotted $r$ vs $n_{s}.$ The black dot corresponds to the chaotic potential $\frac{\alpha \phi ^{2}}{2}\left( X=1,\text{ }n_{s}=0.95\text{ and }r=0.24\right) $. Values below the black dot correspond to $X<1$, which constitutes the region where the observational results are recovered.
For values corresponding to $X>1$, the parameter values are in disagreement with observation, notably for $r.$ We have also calculated the running index $\frac{dn_{s}}{d\ln k}$, which is given by $$\frac{dn_{s}}{d\ln k}=-\frac{I_{n}\left( X\right) ^{2}}{8n^{4}N_{e}^{2}}\left( -\frac{8v^{\prime \prime }\left( X\right) v^{\prime }\left( X\right) ^{2}}{v\left( X\right) ^{5}}+9\frac{v^{\prime }\left( X\right) ^{4}}{v\left( X\right) ^{6}}+\frac{v^{\prime }\left( X\right) v^{\prime \prime \prime }\left( X\right) }{v\left( X\right) ^{4}}\right)$$ where$$\begin{aligned} v\left( X\right) &=&\frac{\left[ n\left( 1-X^{2}\right) +X^{2n}-1\right] }{2n} \\ v^{\prime }\left( X\right) &=&\frac{\partial v\left( X\right) }{\partial X} \notag\end{aligned}$$We observe, in figure 6, that for any value of $X$ the experimental data are satisfied. Thus all members of the new inflation potential family predict a small and negative running. For the same reasons as above, we plot in the last figure the variation of $\frac{dn_{s}}{d\ln k}$ vs $n_{s}.$ We note that for the $n_{s}$ experimental range ($0.9392<n_{s}<0.9986$), all values of $\frac{dn_{s}}{d\ln k}$ are consistent with observation for any $n$. As before, the black dot corresponds to the chaotic potential $\frac{\alpha \phi ^{2}}{2}\left( X=1,\text{ }n_{s}=0.95\text{ and }\frac{dn_{s}}{d\ln k}=-0.0010\right) $. Thus, the values of $X$ which are in agreement with observation correspond to $X<1.$ Conclusion ========== In this work, we have studied a new inflation potential in the framework of the Braneworld *Randall-Sundrum* type 2 model. We have applied the slow-roll approximation in the high energy limit in order to derive analytical expressions for the various perturbation spectrum parameters (n$_{s}$, $r$ and $\frac{dn_{s}}{d\ln k})$. We have considered a monomial and a new inflation potential to study the behaviour of the inflation spectrum for various values of $n$.
We have shown that for some values of the parameter $n$ of the potential $(V\left( \phi \right) =V_{0}-\frac{1}{2}m^{2}\phi ^{2}+\frac{\alpha }{2n}\phi ^{2n})$, our results are in good agreement with recent WMAP5 observations, especially for small fields. [99]{} D. Boyanovsky, H. J. de Vega, C. M. Ho, and N. G. Sanchez, *Phys. Rev. D75, 123504 (2007).* P. Brax, C. Bruck and A. Davis, *Rept. Prog. Phys. 67 (2004) 2183-2232*. James E. Lidsey, *Lect. Notes Phys. 646 (2004) 357-379*. P. Brax, C. Bruck, *Class. Quant. Grav. 20 (2003) R201-R232*. G. Efstathiou (1991), in eds Shanks, T. et al., Kluwer Academic. R. Kallosh and A. Linde, *JCAP 0302 (2003) 002.* M. Sami, Pravabati Chingangbam and Tabish Qureshi, *Phys. Rev. D66 (2002) 043530.* Adil Belhaj, Pablo Diaz, Mohamed Naciri, Antonio Segui, *Int. J. Mod. Phys. D17 (2008) 911-920.* Won Tae Kim, John J. Oh, Marie K. Oh, Myung Seok Yoon, *J. Korean Phys. Soc. 42 (2003) 13-18*. P. de Bernardis et al., *Astrophys. J. 564, 559 (2002)*. Nobuyoshi Ohta, *Int. J. Mod. Phys. A20 (2005) 1-40*. M. Bennai, H. Chakir and Z. Sakhi, *Electronic Journal of Theoretical Physics 9 (2006) 84-93*. L. Randall and R. Sundrum, *Phys. Rev. Lett. 83, 3370 (1999); Phys. Rev. Lett. 83, 4690 (1999).* Andrew R. Liddle, Anthony J. Smith, *Phys. Rev. D68 (2003) 061301.* R. Maartens, D. Wands, B. Basset, and I. Heard, *Phys. Rev. D 62 (2000) 041301.* H. V. Peiris et al. (WMAP collaboration), *Ap. J. Suppl. 148, 213 (2003)*; D. N. Spergel et al., *Ap. J. Suppl. 170, 377 (2007)*. E. Komatsu et al., *Astrophys. J. Suppl. 180:330-376, (2009).* A. D.
Linde, *Phys. Lett. 108B, 389 (1982)*; *114B, 431 (1982); 116B, 335, 340 (1982).* B. C. Paul, *Phys. Rev. D68 (2003) 127501.* D. Boyanovsky, H. J. de Vega and N. G. Sanchez, *Phys. Rev. D73 (2006) 023008.* Ramon Herrera, *Phys. Lett. B664 (2008) 149-153.* Andrei Linde, *Contemp. Concepts Phys. 5 (2005) 1-362*. [^1]: E-mail address: [email protected], bennai\[email protected]
--- abstract: 'The entropy change $\Delta S$ between the high-temperature cubic phase and the low-temperature tetragonally-based martensitic phase of Ni$_{2+x}$Mn$_{1-x}$Ga ($x = 0 - 0.20$) alloys was studied. The experimental results obtained indicate that $\Delta S$ in the Ni$_{2+x}$Mn$_{1-x}$Ga alloys increases with the Ni excess $x$. The increase of $\Delta S$ is presumably accounted for by an increase of the magnetic contribution to the entropy change. It is suggested that the change in modulation of the martensitic phase of Ni$_{2+x}$Mn$_{1-x}$Ga results in a discontinuity of the composition dependence of $\Delta S$.' author: - 'V. V. Khovailo' - 'K. Oikawa' - 'T. Abe' - 'T. Takagi' title: 'Entropy change at the martensitic transformation in ferromagnetic shape memory alloys $\mathbf{Ni}_{2+x}\mathbf{Mn}_{1-x}\mathbf{Ga}$' --- Introduction ============ For shape memory alloys, the change of entropy $\Delta S$ between the high-temperature austenitic and low-temperature martensitic phases can be obtained either from calorimetry [@1-planes; @2-obrado; @3-pelegrina] or from the results of stress-strain measurements at different temperatures above the martensite start temperature $M_s$ (Ref. 4). Owing to the diffusionless character of martensitic transformations, configurational contributions to the entropy change are absent, which considerably simplifies the evaluation of the relative phase stability. In the case of thermoelastic martensitic transformations, which are characterized by a small temperature hysteresis and a complete transformation to the austenitic (martensitic) state, the change of entropy $\Delta S$ can be determined experimentally with good precision. Ni$_2$MnGa, a representative of the family of Heusler alloys, undergoes a thermoelastic martensitic transformation on cooling below $T_m \sim 200$ K.
Since ferromagnetic ordering in this compound sets in at a considerably higher temperature, $T_C = 376$ K, the martensitic transformation occurs in the ferromagnetic state. Both $T_m$ and $T_C$ are sensitive to stoichiometry. For instance, a partial substitution of Mn for Ni in Ni$_{2+x}$Mn$_{1-x}$Ga alloys results in an increase of $T_m$ and a decrease of $T_C$ until they couple in the composition range $x = 0.18 - 0.20$ (Ref. 5). Results of x-ray and electron diffraction studies of Ni-Mn-Ga alloys indicate that the crystal structure of the martensitic phase depends on composition. The martensitic phase of the alloys with a low temperature of martensitic transformation ($T_m < 270$ K) has a five-layered modulation, whereas the martensitic phase with a moderate temperature of martensitic transformation ($T_m > 270$ K) has a seven-layered modulation. [@6-pons] For Cu-based shape memory alloys, which transform to various martensitic structures upon cooling, it has been shown that the entropy change depends on the particular structure of the low-temperature martensitic phase. [@2-obrado] Hence, similar behavior could be expected in the Ni-Mn-Ga alloys. Contrary to the Cu-based shape memory alloys, which are nonmagnetic, Ni-Mn-Ga alloys possess long-range ferromagnetic ordering at temperatures below $T_C$. Such distinct magnetic properties could result in a peculiar behavior of the entropy change in Ni-Mn-Ga as compared to nonmagnetic shape memory alloys. The purpose of this work is to perform a preliminary calorimetric analysis of the entropy change $\Delta S$ between the high-temperature cubic phase and the low-temperature tetragonally based martensitic phases of Ni$_{2+x}$Mn$_{1-x}$Ga ($x = 0 - 0.20$) alloys. Experimental details ==================== Polycrystalline ingots of Ni$_{2+x}$Mn$_{1-x}$Ga ($x = 0 - 0.20$) alloys were prepared by an arc-melting method. The ingots were annealed in evacuated quartz ampoules at 1050 K for 9 days.
Samples for calorimetric measurements were spark-cut from the middle part of the ingots. The calorimetric measurements were performed using a Perkin-Elmer differential scanning calorimeter with a heating/cooling rate of 5 K/min. In the experiments we have also used samples with the same thermal treatment from our previous work. [@7-kvv] Experimental results and discussion =================================== An example of the calorimetric measurements of the Ni$_{2+x}$Mn$_{1-x}$Ga alloys is presented in Fig. 1. The direct and reverse martensitic transformations are accompanied by well-defined calorimetric peaks. From these data, it is easy to determine the characteristic temperatures of the direct (martensite start, $M_s$, and martensite finish, $M_f$) and the reverse (austenite start, $A_s$, and austenite finish, $A_f$) martensitic transformation. Results for the alloys studied, together with the composition of the samples and the equilibrium temperature $T_0 = (M_s + A_f)/2$, are given in Table I. It is worth noting that the transformation temperatures differ slightly for different samples of the same composition, and the values of the temperatures presented in Table I are averaged over several specimens.
![Typical calorimetric curves corresponding to the direct (cooling) and reverse (heating) martensitic transformations, measured in Ni$_{2+x}$Mn$_{1-x}$Ga alloys.](fig-1.eps){width="\columnwidth"} Alloy $M_s$ (K) $M_f$ (K) $A_s$ (K) $A_f$ (K) $T_0$ (K) ------------ ----------- ----------- ----------- ----------- ----------- $x = 0$ 194 187 198 203 199 $x$ = 0.02 221 214 224 229 225 $x$ = 0.03 229 224 233 237 233 $x$ = 0.04 238 233 238 243 240 $x$ = 0.05 242 237 244 248 245 $x$ = 0.08 266 262 269 272 269 $x$ = 0.10 274 269 277 281 277 $x$ = 0.13 277 272 280 285 281 $x$ = 0.16 308 304 308 312 310 $x$ = 0.18 329 324 332 337 333 $x$ = 0.19 338 331 342 348 343 $x$ = 0.20 338 332 344 349 344 : Composition of the studied Ni$_{2+x}$Mn$_{1-x}$Ga alloys and the critical temperatures of the martensitic transformation. The mean values of the heat exchanged upon the reverse ($Q^{L \to H}$) and direct ($Q^{H\to L}$) transformation are shown in Table II. The average of the absolute values of ($Q^{L\to H}$) and ($Q^{H\to L}$) was taken as the change of enthalpy $\Delta H$. When the Gibbs free energies of martensite and austenite are equal, which takes place at temperature $T_0$, the entropy change $\Delta S$ can be evaluated as $\Delta S = \Delta H/T_0$. Determined in such a way, the entropy change is also shown in Table II. Figure 2 shows the entropy change $\Delta S$ as a function of Ni excess $x$ in the Ni$_{2+x}$Mn$_{1-x}$Ga alloys. It is evident that $\Delta S$ increases with deviation from the stoichiometry. It can also be inferred that the entropy change has different composition dependencies in concentration intervals $0 \le x \le 0.13$ and $0.16 \le x \le 0.20$. ![The entropy change at the martensitic transformation in Ni$_{2+x}$Mn$_{1-x}$Ga alloys as a function of Ni excess $x$. 
The solid lines are linear fits to the data.](fig-2.eps){width="\columnwidth"} Since configurational contributions to the entropy change are absent in the case of martensitic transformations, it is customary to consider that $\Delta S$ has three main contributions: $$\Delta S = \Delta S_{vib} + \Delta S_{el} + \Delta S_{mag},$$ where $\Delta S_{vib}$ is the vibrational contribution, $\Delta S_{el}$ is the contribution of the conduction electrons, and $\Delta S_{mag}$ is the contribution from the magnetic subsystem. Although specific heat measurements of Ni-Mn-Ga alloys at low temperatures have not been performed, it can be expected, nevertheless, that the electronic contribution $\Delta S_{el}$ to the entropy change in Ni$_{2+x}$Mn$_{1-x}$Ga alloys is small. This assumption is supported by the measurements of specific heat at low temperatures for several ferromagnetic X$_2$MnSn (X = Co, Ni, Pd, and Cu) Heusler alloys. [@8-fraga] Thus, the increase in $\Delta S$ with the deviation from stoichiometry in Ni$_{2+x}$Mn$_{1-x}$Ga is likely due to the $\Delta S_{vib}$ and $\Delta S_{mag}$ terms. Alloy $Q^{L\to H}$ (J/g) $Q^{H\to L}$ (J/g) $\Delta H$ (J/g) $\Delta S$ (mJ/gK) ------------ -------------------- -------------------- ------------------ -------------------- $x = 0$ 1.39 -1.34 1.365 6.8 $x$ = 0.02 1.7 -1.87 1.785 7.9 $x$ = 0.03 2.38 -2.3 2.34 10 $x$ = 0.04 2.6 -2.57 2.585 10.8 $x$ = 0.05 2.72 -2.72 2.72 11.1 $x$ = 0.08 3.65 -3.8 3.725 13.8 $x$ = 0.10 4.69 -5.09 4.89 17.6 $x$ = 0.13 5.71 -5.57 5.64 20.1 $x$ = 0.16 7.96 -7.74 7.85 25.3 $x$ = 0.18 8.65 -8.57 8.61 25.9 $x$ = 0.19 9.41 -9.59 9.5 27.7 $x$ = 0.20 9.08 -8.86 8.97 26.1 : Heat exchanged upon the reverse ($Q^{L\to H}$) and direct ($Q^{H\to L}$) martensitic transformation, and the enthalpy ($\Delta H$) and entropy ($\Delta S$) changes for Ni$_{2+x}$Mn$_{1-x}$Ga alloys.
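The bookkeeping behind Tables I and II, $T_0 = (M_s + A_f)/2$ and $\Delta S = \Delta H/T_0$, can be verified directly. The sketch below (three representative compositions only; our illustration) reproduces the tabulated $\Delta S$ values:

```python
# Representative rows of Tables I and II: x -> (M_s in K, A_f in K, Delta H in J/g)
data = {
    0.00: (194, 203, 1.365),
    0.05: (242, 248, 2.720),
    0.20: (338, 349, 8.970),
}

entropy = {}
for x, (Ms, Af, dH) in data.items():
    T0 = (Ms + Af) / 2            # equilibrium temperature, K
    entropy[x] = 1e3 * dH / T0    # Delta S = Delta H / T0, in mJ/(g K)
```

The computed values (about 6.9, 11.1 and 26.1 mJ/gK) match Table II to within the rounding of $T_0$.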
An analysis of the $\Delta S_{vib}$ contribution to the entropy change of Cu-based shape memory alloys showed that the vibrational contribution depends on the elastic anisotropy at the transformation temperature. [@1-planes] The authors found that for a given crystal structure of the martensitic phase, the elastic anisotropy constant $A$ at $M_s$ does not depend on composition, which means that the vibrational contribution to $\Delta S$ remains constant for all of the compositions studied. In the case of Ni$_{2+x}$Mn$_{1-x}$Ga alloys, data on the elastic anisotropy are absent. The observed increase of $\Delta S$ in Ni$_{2+x}$Mn$_{1-x}$Ga could indicate that the elastic anisotropy depends on composition. However, since the Debye temperature does not change significantly with composition, [@9-matsumoto] it is more likely that the composition dependence of $\Delta S$ is accounted for by the magnetic contribution $\Delta S_{mag}$. The fact that the entropy change in Ni-Mn-Ga alloys depends on composition has already been mentioned in Ref. 10, where Ni-Mn-Ga alloys were divided into three groups according to their transformation behavior. The authors found that alloys with a low martensitic transformation temperature $M_s$ have low values of $\Delta S$, whereas alloys with a high $M_s$ are characterized by higher values of the entropy change. Their observations agree with the results of our study. Since the crystal structure of the martensitic phase of Ni-Mn-Ga depends on composition, [@6-pons] an analysis of $\Delta S$ as a function of the Ni excess $x$ is worth performing. From the data shown in Fig. 2, it is difficult to draw an unambiguous conclusion because the composition dependence of $\Delta S$ can be approximated over the whole studied interval of $x$ as shown by the dashed line. However, we suggest that the entropy change has different composition dependencies in the concentration intervals $0 \le x \le 0.13$ and $0.16 \le x \le 0.20$, as shown in Fig. 2 by the solid lines.
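The two composition regimes suggested above can be quantified by separate least-squares fits to the $\Delta S(x)$ data of Table II. The sketch below (ordinary `numpy.polyfit`; our illustration, not the authors' analysis) fits the intervals $0 \le x \le 0.13$ and $0.16 \le x \le 0.20$ separately:

```python
import numpy as np

# Delta S (mJ/gK) vs Ni excess x, taken from Table II
x_lo = np.array([0.00, 0.02, 0.03, 0.04, 0.05, 0.08, 0.10, 0.13])
s_lo = np.array([6.8, 7.9, 10.0, 10.8, 11.1, 13.8, 17.6, 20.1])
x_hi = np.array([0.16, 0.18, 0.19, 0.20])
s_hi = np.array([25.3, 25.9, 27.7, 26.1])

# degree-1 fits for the two concentration intervals
slope_lo, icept_lo = np.polyfit(x_lo, s_lo, 1)
slope_hi, icept_hi = np.polyfit(x_hi, s_hi, 1)
```

For these data the low-$x$ branch rises steeply (a slope of roughly $10^{2}$ mJ/gK per unit $x$), while the high-$x$ branch is markedly flatter, consistent with the proposed discontinuity where the martensite changes its modulation.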
The alloys with $x \ge 0.16$ ($M_s > 300$ K) are expected to have a seven-layered martensitic structure, as evident from their high martensitic transformation temperatures and the unusual behavior of their resistivity, [@5-vas; @11-kvv] whereas the $0 \le x < 0.16$ alloys undergo a structural transformation to the five-layered martensitic structure. Since different martensitic phases have different densities of vibrational states, this should lead to a discontinuity of the composition dependence of $\Delta S$ as the martensite of the Ni$_{2+x}$Mn$_{1-x}$Ga alloys changes its modulation. If this is the case, the alloys with seven-layered modulation of the martensitic phase are characterized by a higher $\Delta S$ as compared to the alloys with five-layered modulation (Fig. 2). In this article we have studied the entropy change at the martensitic transformation in Ni$_{2+x}$Mn$_{1-x}$Ga ($x = 0 - 0.20$) alloys. The lowest value of the entropy change, $\Delta S = 6.8$ mJ/gK, was found for the stoichiometric Ni$_2$MnGa. Upon substitution of Mn for Ni, $\Delta S$ significantly increases, up to $\sim 26$ mJ/gK in alloys with $x > 0.16$. The increase in $\Delta S$ is presumably due to the magnetic contribution. It is suggested that the change in modulation of the martensitic phase results in a discontinuity of the composition dependence of $\Delta S$. This assumption, however, requires further systematic studies of the thermodynamic properties of Ni-Mn-Ga alloys. Acknowledgments {#acknowledgments .unnumbered} =============== This work was partially supported by a Grant-in-Aid from the Izumi Science and Technology Foundation. One of the authors (V.V.K.) gratefully acknowledges the Japan Society for the Promotion of Science (JSPS) for a Fellowship Award. [40]{} A. Planes, L. Mañosa, D. Ríos-Jara, and J. Ortín, Phys. Rev. B **45**, 7633 (1992). E. Obradó, L. Mañosa, and A. Planes, Phys. Rev. B **56**, 20 (1997). J. L. Pelegrina and R. Romero, Mater. Sci. Eng. A **282**, 16 (2000). R. Romero and J. L.
Pelegrina, Phys. Rev. B **50**, 9046 (1994). A. N. Vasil’ev, A. D. Bozhko, V. V. Khovailo, I. E. Dikshtein, V. G. Shavrov, V. D. Buchelnikov, M. Matsumoto, S. Suzuki, T. Takagi, and J. Tani, Phys. Rev. B **59**, 1113 (1999). J. Pons, V. A. Chernenko, R. Santamarta, and E. Cesari, Acta Mater. **48**, 3027 (2000). V. V. Khovailo, T. Takagi, A. D. Bozhko, M. Matsumoto, J. Tani, and V. G. Shavrov, J. Phys.: Condens. Matter **13**, 9655 (2001). G. L. F. Fraga, D. E. Brandão, and J. G. Sereni, J. Magn. Magn. Mater. **102**, 199 (1991). M. Matsumoto, M. Ebisuya, T. Kanomata, R. Note, H. Yoshida, and T. Kaneko, J. Magn. Magn. Mater. **239**, 521 (2002). V. A. Chernenko, E. Cesari, V. V. Kokorin, and I. N. Vitenko, Scripta Metall. **33**, 1239 (1995). V. V. Khovailo, T. Takagi, J. Tani, R. Z. Levitin, A. A. Cherechukin, M. Matsumoto, and R. Note, Phys. Rev. B **65**, 092410 (2002).
--- author: - | Michael Kazhdan$^1$ Gurprit Singh$^{2}$ Adrien Pilleboue$^{2}$ David Coeurjolly$^3$ Victor Ostromoukhov$^{2,3}$\ $^1$Johns Hopkins University $^2$Université Lyon 1 $^3$CNRS/LIRIS UMR 5205 bibliography: - 'TechReport.bib' title: | Variance Analysis for Monte Carlo Integration:\ A Representation-Theoretic Perspective --- Overview ======== Problem Statement {#sec:problem_statement} ================== Representation Theory {#sec:representation_theory} ===================== Variance Estimation {#sec:variance_estimation} ===================
--- abstract: 'We consider a problem we call <span style="font-variant:small-caps;">StateIsomorphism</span>: given two quantum states of $n$ qubits, can one be obtained from the other by rearranging the qubit subsystems? Our main goal is to study the complexity of this problem, which is a natural quantum generalisation of the problem <span style="font-variant:small-caps;">StringIsomorphism</span>. We show that <span style="font-variant:small-caps;">StateIsomorphism</span> is at least as hard as <span style="font-variant:small-caps;">GraphIsomorphism</span>, and show that these problems have a similar structure by presenting evidence to suggest that <span style="font-variant:small-caps;">StateIsomorphism</span> is an intermediate problem for QCMA. In particular, we show that the complement of the problem, <span style="font-variant:small-caps;">StateNonIsomorphism</span>, has a two message quantum interactive proof system, and that this proof system can be made statistical zero-knowledge. We consider also <span style="font-variant:small-caps;">StabilizerStateIsomorphism</span> (SSI) and <span style="font-variant:small-caps;">MixedStateIsomorphism</span> (MSI), showing that the complement of SSI has a quantum interactive proof system that uses classical communication only, and that MSI is QSZK-hard.' author: - Joshua Lockhart - 'Carlos E. González-Guillén[^1]' title: Quantum State Isomorphism --- Introduction and statement of results ===================================== Ladner’s theorem [@ladner] states that if ${\text{$\mathrm{P}$}}\neq {\text{$\mathrm{NP}$}}$ then there exist *${\text{$\mathrm{NP}$}}$-intermediate problems*: problems that are neither ${\text{$\mathrm{NP}$}}$-hard, nor in ${\text{$\mathrm{P}$}}$. While of course the ${\text{$\mathrm{P}$}}$ *vs.* ${\text{$\mathrm{NP}$}}$ problem is unresolved, the problem of testing if two graphs are isomorphic (<span style="font-variant:small-caps;">GraphIsomorphism</span>) has the characteristics of such an intermediate problem.
<span style="font-variant:small-caps;">GraphIsomorphism</span> is trivially in NP, since isomorphism of two graphs can be certified by describing the permutation that maps one to the other, but as Boppana and Håstad show [@bh], if it is NP-complete then the polynomial hierarchy collapses to the second level. Furthermore, while many instances of the problem are solvable efficiently in practice [@nauty], it is still not known if there exists a polynomial time algorithm for the problem. Recall that Quantum Merlin Arthur (${\text{$\mathrm{QMA}$}}$) is considered to be the quantum analogue of ${\text{$\mathrm{NP}$}}$: the certificate is a quantum state, and the verifier has the ability to perform quantum computation. The class ${\text{$\mathrm{QCMA}$}}$ is defined in the same way but with certificates restricted to be classical bitstrings. In this paper, we show that there are problems that exhibit similar hallmarks of being intermediate for ${\text{$\mathrm{QCMA}$}}$ [@an]. Succinctly: we formulate problems in ${\text{$\mathrm{QCMA}$}}$ that are not obviously in ${\text{$\mathrm{BQP}$}}$, and which are unlikely to be ${\text{$\mathrm{QCMA}$}}$-complete. Babai’s recent quasi-polynomial time algorithm for <span style="font-variant:small-caps;">GraphIsomorphism</span> [@babai] has revived a fruitful body of work that links the problem to algorithmic group theory [@babai2; @groupgraph; @luks; @luks2]. This literature deals with a closely related problem called <span style="font-variant:small-caps;">StringIsomorphism</span>: given bitstrings $x,y\in\{0,1\}^n$ and a permutation group $G$, is there $\sigma \in G$ such that $\sigma(x)=y$ (where permutations act in the obvious way on the strings)? This problem has a number of similarities with <span style="font-variant:small-caps;">GraphIsomorphism</span>, and, as we show, can be recast in terms of quantum states. We study what is arguably the most direct quantum generalisation of this problem, a problem we call <span style="font-variant:small-caps;">StateIsomorphism</span>.
Such a generalisation is obtained by replacing the strings $x$ and $y$ by $n$-qubit pure states, and by considering the permutations in the group $G$ to act as “reshufflings” of the qubits. The problem is obviously in ${\text{$\mathrm{QCMA}$}}$: if there is a permutation mapping one state to the other then its permutation matrix acts as the certificate. Equality of two quantum states can be verified via an efficient quantum procedure known as the SWAP test [@swaptest]. Also, if there is an efficient quantum algorithm for <span style="font-variant:small-caps;">StateIsomorphism</span> then the same can be used as an algorithm for <span style="font-variant:small-caps;">GraphIsomorphism</span>: as we shall see later, there exists a polynomial time many-one reduction from <span style="font-variant:small-caps;">GraphIsomorphism</span> to <span style="font-variant:small-caps;">StateIsomorphism</span>. We first establish that in terms of interactive proof systems that solve the problem, <span style="font-variant:small-caps;">StateIsomorphism</span> has a number of similarities with its classical counterpart. A central part of the Boppana-Håstad collapse result is that <span style="font-variant:small-caps;">GraphIsomorphism</span> belongs in ${\text{$\mathrm{co-IP}$}}(2)$: that is, that <span style="font-variant:small-caps;">GraphNonIsomorphism</span> has a two round interactive proof system. We show that <span style="font-variant:small-caps;">StateIsomorphism</span> is in ${\text{$\mathrm{co-QIP}$}}(2)$: its complement has a two round *quantum* interactive proof system. <span style="font-variant:small-caps;">GraphIsomorphism</span> also admits a statistical zero knowledge proof system, and indeed, we prove that <span style="font-variant:small-caps;">StateIsomorphism</span> has an honest verifier quantum statistical zero knowledge proof system. These results are summarised in the following theorem, where QSZK is the class of problems with (honest verifier) quantum statistical zero knowledge proof systems, defined by Watrous in [@qszk].
Note that since ${\text{$\mathrm{QIP}$}}(2)\supseteq {\text{$\mathrm{QSZK}$}} = {\text{$\mathrm{co-QSZK}$}}$ (see [@qszk]), inclusion in ${\text{$\mathrm{co-QIP}$}}(2)$ follows as a corollary. \[theorem:SIQSZK\] <span style="font-variant:small-caps;">StateIsomorphism</span> is in $\text{\emph{QSZK}}$. A corollary of this theorem provides evidence to suggest that <span style="font-variant:small-caps;">StateIsomorphism</span> is not ${\text{$\mathrm{QCMA}$}}$-complete. If it were, then every problem in ${\text{$\mathrm{QCMA}$}}$ would have an honest verifier quantum statistical zero knowledge proof system. Furthermore, this result is evidence against the problem being ${\text{$\mathrm{NP}$}}$-hard: it is unlikely that ${\text{$\mathrm{NP}$}}\subseteq {\text{$\mathrm{QSZK}$}}$. \[corollary:qcma\] If <span style="font-variant:small-caps;">StateIsomorphism</span> is ${\text{$\mathrm{QCMA}$}}$-complete then *${\text{$\mathrm{QCMA}$}}\subseteq {\text{$\mathrm{QSZK}$}}$*. In pursuit of stronger evidence against ${\text{$\mathrm{QCMA}$}}$-hardness of <span style="font-variant:small-caps;">StateIsomorphism</span>, we consider a quantum polynomial hierarchy in the same vein as those considered by Gharibian and Kempe [@gk], and Yamakami [@yamakami]. This hierarchy is defined in terms of quantum $\exists$ and $\forall$ complexity class operators like those of [@yamakami], but from our definitions it is easy to verify that lower levels correspond to well known complexity classes. In particular, $\Sigma_0=\Pi_0={\text{$\mathrm{BQP}$}}$, and $\Sigma_1={\text{$\mathrm{QCMA}$}}$ or $\Sigma_1={\text{$\mathrm{QMA}$}}$ depending on whether we take the certificates to be classical or quantum (see Section \[section:aquantumpolynomialhierarchy\]). Also, from the definition we provide, it is clear that the class $\text{cq-}\Sigma_2$ corresponds directly to the identically named class in [@gk].
We prove the following, where ${\text{$\mathrm{QPH}$}}=\cup_{i=1}^\infty \Sigma_i$, and ${\text{$\mathrm{QCAM}$}}$ is the quantum generalisation of the class ${\text{$\mathrm{AM}$}}$ where all communication between Arthur and Merlin is restricted to be classical [@mw]. \[theorem:collapse\] Let $A$ be a promise problem in *${\text{$\mathrm{QCMA}$}}\cap \text{co-}{\text{$\mathrm{QCAM}$}}$*. If $A$ is *${\text{$\mathrm{QCMA}$}}$*-complete, then *${\text{$\mathrm{QPH}$}}\subseteq \Sigma_2$*. While the relationship between the levels of this hierarchy and the levels of the classical hierarchy remains an open research question [@bqppolyhi], the fact that the lower levels of this quantum hierarchy coincide with well known classes gives weight to collapse results of this kind. We draw attention to the fact that the collapse implication in Theorem \[theorem:collapse\] is for the classical certificate classes ${\text{$\mathrm{QCMA}$}}$ and ${\text{$\mathrm{QCAM}$}}$, rather than for the more well known ${\text{$\mathrm{QMA}$}}$ and ${\text{$\mathrm{QAM}$}}$ [@mw]. While the problems we consider are in ${\text{$\mathrm{QCMA}$}}$, meaning that the current statement of the theorem is all we need, already we have an interesting open question: is there a similar collapse theorem that relates ${\text{$\mathrm{QMA}$}}$ and ${\text{$\mathrm{QAM}$}}$? The proof of Theorem \[theorem:collapse\] relies on the fact that ${\text{$\mathrm{QCMAM}$}}={\text{$\mathrm{QCAM}$}}$ (proved by Kobayashi *et al.* in [@kobayashi]), but it is unlikely that ${\text{$\mathrm{QMAM}$}}={\text{$\mathrm{QAM}$}}$, since ${\text{$\mathrm{QMAM}$}}={\text{$\mathrm{QIP}$}}={\text{$\mathrm{PSPACE}$}}$ [@mw; @qip=pspace]. As we shall see in Section \[section:interactiveproofsforquantumstateisomorphism\], there is a barrier that prevents us from applying Theorem \[theorem:collapse\] to <span style="font-variant:small-caps;">StateIsomorphism</span>: our quantum interactive proof systems for <span style="font-variant:small-caps;">StateNonIsomorphism</span> require quantum communication between verifier and prover. This prevents us from proving inclusion in QCAM.
It is not clear that the problem admits such a proof system. However, if it is possible to produce an efficient classical description of the quantum states in the problem instance that is independent from how they are specified in the input, then it is possible to prove inclusion in QCAM. We show that this is the case for a restricted family of quantum states called *stabilizer states*, a fact which allows us to prove the following. \[theorem:productCollapse\] If <span style="font-variant:small-caps;">StabilizerStateIsomorphism</span> is ${\text{$\mathrm{QCMA}$}}$-complete, then *${\text{$\mathrm{QPH}$}}\subseteq\Sigma_2$*. Furthermore, the fact that stabilizer states can be described classically also implies the following. <span style="font-variant:small-caps;">StabilizerStateNonIsomorphism</span> is in $\text{\emph{QCSZK}}$. Finally, we consider the state isomorphism problem for mixed quantum states. We show that this problem is QSZK-hard by reduction from the QSZK-complete problem of determining if a mixed state is product or separable. \[theorem:msiqszkhardNICE\] $(\epsilon,1-\epsilon)$-<span style="font-variant:small-caps;">MixedStateIsomorphism</span> is ${\text{$\mathrm{QSZK}$}}$-hard. While these state isomorphism problems all have classical certificates, we have been able to demonstrate that the complexity of each problem depends precisely on the inherent computational difficulty of working with the input states. Stabilizer states form one end of the spectrum: with a polynomial number of measurements a classical description can be produced. The other extreme is the mixed states: these are so computationally difficult to work with that it is not clear that <span style="font-variant:small-caps;">MixedStateIsomorphism</span> even belongs in ${\text{$\mathrm{QMA}$}}$; even the problem of testing equivalence of two such states is ${\text{$\mathrm{QSZK}$}}$-complete (see [@qszk]). Between these two extremes we have <span style="font-variant:small-caps;">StateIsomorphism</span>.
While such states can be efficiently processed by a quantum circuit, and isomorphism can be certified classically, the analysis in Section \[section:interactiveproofsforquantumstateisomorphism\] uncovers an interesting caveat. It seems that the ability to communicate quantum states is still required when we wish to check *non*-isomorphism by interacting with a prover, or perhaps even to certify isomorphism with statistical zero knowledge. We thus draw attention to the following open question: can our protocols be modified to use exclusively classical communication? The fact that an efficient quantum algorithm for <span style="font-variant:small-caps;">StateIsomorphism</span> would also yield one for <span style="font-variant:small-caps;">GraphIsomorphism</span>, combined with Corollary \[corollary:qcma\], gives weight to the idea that this problem can be thought of as a candidate for a ${\text{$\mathrm{QCMA}$}}$-intermediate problem. The fact that there are problems “in between” ${\text{$\mathrm{BQP}$}}$ and ${\text{$\mathrm{QCMA}$}}$, and furthermore, that such problems are obtained by generalising <span style="font-variant:small-caps;">StringIsomorphism</span> suggests an interesting parallel between the classical and quantum classes. In Section \[section:preliminariesanddefinitions\] we give an overview of the tools and notation we will use for the rest of the paper. We also define the key problems and complexity classes we will be working with and prove some initial results that we build on later. In Section \[section:interactiveproofsforquantumstateisomorphism\] we demonstrate quantum interactive proof systems for the <span style="font-variant:small-caps;">StateIsomorphism</span> problems. In Section \[section:aquantumpolynomialhierarchy\] we define a notion of a quantum polynomial hierarchy, and prove the hierarchy collapse results.
Preliminaries and definitions {#section:preliminariesanddefinitions} ============================= Recall that quantum states are represented by unit trace positive semi-definite operators $\rho$ on a Hilbert space $\mathcal{H}$ called the *state space* of the system. A state is *pure* if $\rho^2=\rho$. Otherwise, we say that the state is *mixed*. By definition then, for any pure state $\rho$ on $\mathcal{H}$ we have that $\rho=|\psi\rangle\langle\psi|$ for some unit vector $|\psi\rangle\in\mathcal{H}$, and we refer to pure states by their corresponding *state vector* $|\psi\rangle$ (which is unique up to multiplication by a phase). Mixed states are convex combinations of the outer products of some set of state vectors $ \rho=\sum_{i} p_i|\psi_i\rangle\langle\psi_i|. $ In what follows we refer to the Hilbert space $\mathbb{C}^2$ by $\mathcal{H}_2$. Recall that an $n$-qubit pure state $|\psi\rangle\in\mathcal{H}_2^{\otimes n}$ is *product* if $|\psi\rangle=|\psi_1\rangle\otimes\cdots\otimes|\psi_n\rangle $ where $\otimes$ denotes tensor product and for all $i$, $|\psi_i\rangle\in\mathcal{H}_2$. For any bitstring $x_1\dots x_n\in\{0,1\}^n$, we say that $|x\rangle=\otimes_{i=1}^n|x_i\rangle$ is a computational basis state. A useful measure of the distinguishability of a pair of quantum states is the *trace distance*. Let $\rho,\sigma$ be quantum states with the same state space. Their trace distance is the quantity $D(\rho,\sigma)=\frac{1}{2}\lVert \rho-\sigma\rVert_1, $ where $\lVert M \rVert_1=\text{tr}[|M|]$ is the trace norm. We say that a quantum circuit $Q$ *accepts* a state $|\psi\rangle$ if measuring the first qubit of the state $Q|\psi\rangle$ in the computational basis yields outcome $1$. We say that the circuit *rejects* the state otherwise. Let $X$ be an index set. 
We say that a uniform family of quantum circuits $\{Q_x~:~x\in X\}$ is *polynomial-time generated* if there exists a polynomial-time Turing machine that takes as input $x\in X$ and halts with an efficient description of the circuit $Q_x$ on its tape. Such a definition neatly captures the notion of an efficient quantum computation [@watrous]. ![The SWAP test circuit.[]{data-label="fig:swap"}](swap.pdf) We make use of a quantum circuit known as the *SWAP test* [@swaptest], illustrated in Figure \[fig:swap\]. This circuit takes as input pure states $|\psi\rangle,|\phi\rangle$ and accepts (denoted $T(|\psi\rangle,|\phi\rangle)=1$) with probability $(1+|\langle\psi|\phi\rangle|^2)/2$. Note that $T(|\psi\rangle,|\phi\rangle)=1$ with probability $1$ if $|\psi\rangle=e^{i\tau}|\phi\rangle$ for some $\tau\in[-2\pi,2\pi]$, but is equal to $1$ with probability $1/2$ if they are orthogonal. The SWAP test can therefore be used as an efficient quantum algorithm for testing if two quantum states are equivalent. In what follows we use some notation from complexity theory and formal language theory. In particular, if a problem $A$ is polynomial-time many-one reducible to a problem $B$ we denote this by $A \le_p B$. We denote by $\{0,1\}^n$ the set of bitstrings of length $n$; furthermore, $\{0,1\}^*$ denotes the set of all bitstrings. For a bitstring $x$, we denote by $|x|$ the length of the bitstring. We say that a function $f:\mathbb{N}\rightarrow [0,1]$ is *negligible* if for every constant $c$ there exists $n_c$ such that for all $n\ge n_c$, $f(n)<1/n^c$. We use the shorthand $f(n)=\text{poly}(n)$ (*resp.* $f(n)=\exp(n)$) to state that $f$ scales as a polynomially bounded (exponentially bounded) function in $n$. A decision problem is a set of bitstrings $A\subseteq\{0,1\}^*$. An algorithm is said to decide $A$ if for all $x\in\{0,1\}^*$ it outputs YES if $x\in A$ and NO otherwise.
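For intuition, the SWAP test acceptance probability $(1+|\langle\psi|\phi\rangle|^2)/2$ quoted above is easy to evaluate directly. The following pure-Python sketch (illustrative only; the function name is ours, not the paper's) checks the two extreme cases: equal states and orthogonal states.

```python
import math

def swap_test_accept_prob(psi, phi):
    """Probability (1 + |<psi|phi>|^2) / 2 that the SWAP test accepts,
    for pure states given as lists of (complex) amplitudes."""
    inner = sum(a.conjugate() * b for a, b in zip(psi, phi))
    return (1 + abs(inner) ** 2) / 2

plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]    # |+>
minus = [1 / math.sqrt(2), -1 / math.sqrt(2)]  # |->

print(swap_test_accept_prob(plus, plus))   # equal states: accepts with probability ~1
print(swap_test_accept_prob(plus, minus))  # orthogonal states: probability ~1/2
```

Repeating the test drives the error one-sided: a YES instance always accepts, while orthogonal states survive $k$ independent tests with probability only $2^{-k}$.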
In quantum computational complexity it is useful to use the less well known notion of a *promise problem* to allow for more control over problem instances. A promise problem is a pair of sets $(A_{\text{YES}},A_{\text{NO}})\subseteq\{0,1\}^*\times \{0,1\}^*$ such that $A_{\text{YES}}\cap A_{\text{NO}}=\emptyset$. An algorithm is said to decide $(A_{\text{YES}},A_{\text{NO}})$ if for all $x\in A_{\text{YES}}$ it outputs YES and for all $x\in A_{\text{NO}}$ it outputs NO. Note that the algorithm is not required to do anything in the case where an input $x$ does not belong to $A_{\text{YES}}$ or $A_\text{NO}$. Quantum Merlin-Arthur, Quantum Arthur-Merlin -------------------------------------------- For convenience, we give a number of definitions related to quantum generalisations of public coin proof systems. In particular, we focus on Quantum Arthur-Merlin (${\text{$\mathrm{QAM}$}}$) and Quantum Merlin-Arthur (${\text{$\mathrm{QMA}$}}$), the quantum versions of AM and MA respectively. We use the definitions in [@watrous; @mw] as our guide. A promise problem $A={(A_{\text{$\mathrm{YES}$}},A_{\text{$\mathrm{NO}$}})}$ is in ${\text{$\mathrm{QMA}$}}(a,b)$ for functions $a,b:\mathbb{N}\rightarrow[0,1]$ if there exists a polynomial-time generated uniform family of quantum circuits $\{V_{x}~:~x\in\{0,1\}^*\}$ and polynomially bounded $p:\mathbb{N}\rightarrow\mathbb{N}$ such that - for all $x\in{A_{\text{$\mathrm{YES}$}}}$ there exists $|\psi\rangle\in\mathcal{H}_2^{\otimes p(|x|)}$ such that $$\begin{aligned} {\text{$\mathrm{Pr}$}}[V_x \text{ accepts } |\psi\rangle] \ge a(|x|); \end{aligned}$$ - for all $x\in{A_{\text{$\mathrm{NO}$}}}$ and for all $|\psi\rangle\in\mathcal{H}_2^{\otimes p(|x|)}$, $$\begin{aligned} {\text{$\mathrm{Pr}$}}[V_x \text{ accepts } |\psi\rangle] \le b(|x|). \end{aligned}$$ The class ${\text{$\mathrm{QCMA}$}}$ is defined in the same way, but with the restriction that the certificate $|\psi\rangle$ must be a computational basis state $|x\rangle$.
A *verification procedure* is a tuple $(V,m,s)$ where $$\begin{aligned} V=\{V_{x,y}~:~x\in\{0,1\}^*,y\in\{0,1\}^{s(|x|)}\}\end{aligned}$$ is a uniform family of polynomial time generated quantum circuits, and $m,s:\mathbb{N}\rightarrow\mathbb{N}$ are polynomially bounded functions. Each circuit acts on $m(|x|)$ qubits sent by Merlin, and $k(|x|)$ qubits which correspond to Arthur’s workspace, for some polynomially bounded function $k:\mathbb{N}\rightarrow\mathbb{N}$. For all $x,y$, we say that $V_{x,y}$ accepts (*resp.* rejects) a state $|\psi\rangle\in\mathcal{H}_2^{\otimes m(|x|)}$ if, upon measuring the first qubit of the state $$\begin{aligned} V_{x,y} |\psi\rangle|0\rangle^{\otimes k(|x|)}\end{aligned}$$ in the standard basis, the outcome is ‘$1$’ (*resp.* ‘$0$’). A promise problem $A={(A_{\text{$\mathrm{YES}$}},A_{\text{$\mathrm{NO}$}})}$ is in ${\text{$\mathrm{QAM}$}}(a,b)$ for functions $a,b:\mathbb{N}\rightarrow[0,1]$ if there exists a verification procedure $(V,m,s)$ such that - for all $x\in {A_{\text{$\mathrm{YES}$}}}$, there exists a collection of $m(|x|)$-qubit quantum states $\{|\psi_{y}\rangle\}$ such that $$\begin{aligned} \frac{1}{2^{s(|x|)}}\sum_{y\in\{0,1\}^{s(|x|)}}{\text{$\mathrm{Pr}$}}[V_{x,y} \text{ accepts } |\psi_y\rangle]\ge a(|x|); \end{aligned}$$ - for all $x\in {A_{\text{$\mathrm{NO}$}}}$, and for all collections of $m(|x|)$-qubit quantum states $\{|\psi_{y}\rangle\}$, it holds that $$\begin{aligned} \frac{1}{2^{s(|x|)}}\sum_{y\in\{0,1\}^{s(|x|)}}{\text{$\mathrm{Pr}$}}[V_{x,y} \text{ accepts } |\psi_y\rangle]\le b(|x|). \end{aligned}$$ The class ${\text{$\mathrm{QCAM}$}}$ is defined in the same way but with the states $\{|\psi_y\rangle\}$ restricted to computational basis states. The class ${\text{$\mathrm{QCMAM}$}}$ is similar, but has an extra round of interaction.
A promise problem $A={(A_{\text{$\mathrm{YES}$}},A_{\text{$\mathrm{NO}$}})}$ is in ${\text{$\mathrm{QCMAM}$}}(a,b)$ for functions $a,b:\mathbb{N}\rightarrow[0,1]$ if there exists a ${\text{$\mathrm{QAM}$}}$ verification procedure $(V,m,s)$ and a polynomially bounded function $p:\mathbb{N}\rightarrow\mathbb{N}$ such that - for all $x\in {A_{\text{$\mathrm{YES}$}}}$, there is a certificate bitstring $c\in\{0,1\}^{p(|x|)}$ and a collection of length $m(|x|)$ bitstrings $\{z^c_{y}\}$ such that $$\begin{aligned} \frac{1}{2^{s(|x|)}}\sum_{y\in\{0,1\}^{s(|x|)}}{\text{$\mathrm{Pr}$}}[V_{x,y} \text{ accepts } |c\rangle\otimes|z^c_y\rangle]\ge a(|x|); \end{aligned}$$ - for all $x\in{A_{\text{$\mathrm{NO}$}}}$, all certificate bitstrings $c\in\{0,1\}^{p(|x|)}$ and all collections of length $m(|x|)$ bitstrings $\{z^c_{y}\}$, it holds that $$\begin{aligned} \frac{1}{2^{s(|x|)}}\sum_{y\in\{0,1\}^{s(|x|)}}{\text{$\mathrm{Pr}$}}[V_{x,y} \text{ accepts } |c\rangle\otimes|z^c_y\rangle]\le b(|x|). \end{aligned}$$ Quantum interactive proofs and zero knowledge --------------------------------------------- An interactive proof system consists of a *verifier* and a *prover*. The computationally unbounded prover attempts to convince the computationally limited verifier that a particular statement is true. A quantum interactive proof system is one in which the verifier is equipped with a quantum computer, and quantum information can be transferred between verifier and prover. Our formal definitions will follow those of Watrous [@qszk; @watrous]. A *quantum verifier* is a polynomial time computable function $V$, where for each $x\in\{0,1\}^*$, $V(x)$ is an efficient classical description of a sequence of quantum circuits $V(x)_1,\dots,V(x)_{k(|x|)}$. Each circuit in the sequence acts on $v(|x|)$ qubits that make up the verifier’s private workspace, and a buffer of $c(|x|)$ communication qubits that both verifier and prover have read/write access to.
A *quantum prover* is a function $P$ where for each $x\in\{0,1\}^*$, $P(x)$ is a sequence of quantum circuits $P(x)_1,\dots,P(x)_{l(|x|)}$. Each circuit in the sequence acts on $p(|x|)$ qubits that make up the prover’s private workspace, and the $c(|x|)$ communication qubits that are shared with each verifier circuit. Note that no restrictions are placed on the circuits $P(x)$, since we wish the prover to be computationally unbounded. We say that a verifier $V$ and a prover $P$ are *compatible* if all their circuits act on the same number of communication qubits, and if for all $x\in\{0,1\}^*$, $k(|x|)=\lfloor m(|x|)/2+1\rfloor$ and $l(|x|)=\lfloor m(|x|)/2+1/2\rfloor$, for some $m(|x|)$ which is taken to be the number of messages exchanged between the prover and verifier. We say that $(P,V)$ is a compatible $m$-message prover-verifier pair. Given some compatible $m$-message prover-verifier pair $(P,V)$, we define the quantum circuit $$\begin{aligned} (P(x),V(x)):=\begin{cases}V(x)_1\cdot P(x)_1\dots P(x)_{m(|x|)/2}\cdot V(x)_{m(|x|)/2+1}&\text{ if } m(|x|) \text{ is even,}\\ P(x)_1\cdot V(x)_1\dots P(x)_{(m(|x|)+1)/2}\cdot V(x)_{(m(|x|)+1)/2}&\text{ if } m(|x|) \text{ is odd.} \end{cases}\end{aligned}$$ Let $q(|x|)=p(|x|)+c(|x|)+v(|x|)$. We say that $(P,V)$ accepts an input $x\in\{0,1\}^*$ if the result of measuring the verifier’s first workspace qubit of the state $$\begin{aligned} (P(x),V(x))|0^{q(|x|)}\rangle\end{aligned}$$ in the computational basis is $1$, and that it rejects the input if the measurement result is $0$. Let $M={(M_{\text{$\mathrm{YES}$}},M_{\text{$\mathrm{NO}$}})}$ be a promise problem, let $a,b:\mathbb{N}\rightarrow[0,1]$ be functions and $k\in\mathbb{N}$.
Then $M\in {\text{$\mathrm{QIP}$}}(k)(a,b)$ if and only if there exists a $k$-message verifier $V$ such that - if $x\in {M_{\text{$\mathrm{YES}$}}}$ then $$\begin{aligned} \max_{P}\left({\text{$\mathrm{Pr}$}}[(P,V) \text{ accepts } x]\right) \ge a(|x|),\end{aligned}$$ - if $x\in {M_{\text{$\mathrm{NO}$}}}$ then $$\begin{aligned} \max_P \left({\text{$\mathrm{Pr}$}}[(P,V) \text{ accepts } x]\right) \le b(|x|),\end{aligned}$$ where the maximisation is performed over all compatible $k$-message provers. We say that the pair $(P,V)$ is an interactive proof system for $M$. Let us now define what it means for a quantum interactive proof system to be *statistical zero-knowledge*. Define the function $$\begin{aligned} \text{view}_{P,V}(x,j)=\text{tr}_P[(P(x),V(x))_j|0^{q(|x|)}\rangle\langle 0^{q(|x|)}|(P(x),V(x))_j^\dagger],\end{aligned}$$ where $(P(x),V(x))_j$ is the circuit obtained from running $(P(x),V(x))$ up to the $j^{\text{th}}$ message. For some index set $X$, we say that a set of density operators $\{\rho_x~:~x\in X\}$ is *polynomial-time preparable* if there exists a polynomial-time uniformly generated family of quantum circuits $\{Q_x~:~x\in X\}$, each with a designated set of output qubits, such that for all $x\in X$, the state of the output qubits after running $Q_x$ on a canonical initial state $|0\rangle^{\otimes n}$ is equal to $\rho_x$. Let $M={(M_{\text{$\mathrm{YES}$}},M_{\text{$\mathrm{NO}$}})}$ be a promise problem, let $a,b:\mathbb{N}\rightarrow[0,1]$ and $k:\mathbb{N}\rightarrow\mathbb{N}$ be functions. Then $M\in {\text{$\mathrm{HVQSZK}$}}(k)(a,b)$ if and only if $M\in {\text{$\mathrm{QIP}$}}(k)(a,b)$ with quantum interactive proof system $(P,V)$ such that there exists a polynomial-time preparable set of density operators $\{\sigma_{x,i}\}$ such that for all $x\in\{0,1\}^*$ and all $i$, if $x\in {M_{\text{$\mathrm{YES}$}}}$ then $$\begin{aligned} D(\sigma_{x,i},\text{\emph{view}}_{P,V}(x,i))\le \delta(|x|)\end{aligned}$$ for some negligible function $\delta$.
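The distance bound in the zero-knowledge condition is the trace distance $D$ defined in the preliminaries. As a quick numerical illustration (a NumPy sketch, not part of the formal definitions), $D(\rho,\sigma)=\frac{1}{2}\lVert\rho-\sigma\rVert_1$ can be computed from the eigenvalues of the Hermitian difference $\rho-\sigma$:

```python
import numpy as np

def trace_distance(rho, sigma):
    """D(rho, sigma) = (1/2) ||rho - sigma||_1; since rho - sigma is
    Hermitian, its trace norm is the sum of absolute eigenvalues."""
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

ket0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
ket1 = np.array([[0.0, 0.0], [0.0, 1.0]])   # |1><1|
maximally_mixed = np.eye(2) / 2

print(trace_distance(ket0, ket1))             # orthogonal pure states: distance 1
print(trace_distance(ket0, maximally_mixed))  # distance 1/2
```

States at trace distance $0$ are identical, and at distance $1$ they are perfectly distinguishable, which is what makes a negligible bound on $D$ the right notion of "the simulator's output is statistically indistinguishable from the verifier's view".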
It is known that the class of problems that have quantum statistical zero knowledge proof systems (QSZK) is equivalent to the class of problems that have honest verifier quantum statistical zero knowledge proof systems (HVQSZK) [@qszk]. Therefore, we refer to HVQSZK as QSZK, and only consider honest verifiers. In the next section we give a formal definition of <span style="font-variant:small-caps;">StringIsomorphism</span>. Permutations and <span style="font-variant:small-caps;">StringIsomorphism</span> {#subsection:permutationsandstringisomorphism} -------------------------------------------------------------------------------- Let $\Omega$ be a finite set. A bijection $\sigma:\Omega\rightarrow\Omega$ is called a *permutation* of the set $\Omega$. The set of all permutations of a finite set $\Omega$ forms a group under composition. This group is called the *symmetric group*, and we denote it by $\mathfrak{S}(\Omega)$. For $x\in\Omega$ and $\sigma\in \mathfrak{S}(\Omega),$ we denote the image of $x$ under $\sigma$ by $\sigma(x)$. A *string* $\mathfrak{s}:\Omega\rightarrow\Sigma$ is an assignment of *letters* from a finite set $\Sigma$ called an *alphabet* to the elements of a finite *index set* $\Omega$. Let $\mathfrak{s}:\Omega\rightarrow\Sigma$ be a string. The letters of $\mathfrak{s}$ are indexed by elements of the index set $\Omega$. The letter corresponding to $i\in\Omega$ is thus denoted by $\mathfrak{s}_{i}$. Let $\sigma\in \mathfrak{S}(\Omega)$ be a permutation. Then the action of $\sigma$ on $\mathfrak{s}$ is denoted by $\sigma(\mathfrak{s})$, and is a string such that for all $i\in\Omega$, $\sigma(\mathfrak{s})_{i}=\mathfrak{s}_{\sigma(i)}. $ In this paper we often deal with permutations of strings indexed by natural numbers. Hence, we denote the symmetric group $\mathfrak{S}([n])$ by $\mathfrak{S}_n$, where $[n]:=\{1,\dots,n\}$. In what follows we denote the fact that a group $G$ is a subgroup of a group $H$ by $G\le H$. 
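The action $\sigma(\mathfrak{s})_{i}=\mathfrak{s}_{\sigma(i)}$ defined above can be made concrete in a few lines of Python (an illustrative sketch of ours, with $0$-indexed positions and the group given as an explicit list of permutations rather than by generators):

```python
def act(sigma, s):
    """sigma(s)_i = s_{sigma(i)}: permutations as 0-indexed lists."""
    return ''.join(s[sigma[i]] for i in range(len(s)))

def string_isomorphic(G, s, t):
    """Brute-force isomorphism test over a group G listed explicitly.
    This is exponential in general; the actual problem input specifies
    G by a set of generators."""
    return any(act(sigma, s) == t for sigma in G)

cycle = [1, 2, 0]                        # the 3-cycle (0 1 2)
print(act(cycle, "abc"))                 # → "bca"

G = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]    # cyclic group of order 3
print(string_isomorphic(G, "aab", "aba"))  # → True
print(string_isomorphic(G, "aab", "abb"))  # → False
```

The orbit of `"aab"` under the cyclic group is `{"aab", "aba", "baa"}`, so the second query correctly falls outside it.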
The following decision problem is related to <span style="font-variant:small-caps;">GraphIsomorphism</span> [@luks; @babai], and forms the basis of our work. <span style="font-variant:small-caps;">StringIsomorphism</span>\ *Input:* *Finite sets $\Omega,\Sigma$, a permutation group $G\le \mathfrak{S}(\Omega)$ specified by a set of generators, and strings $\mathfrak{s},\mathfrak{t}:\Omega\rightarrow\Sigma$.*\ *Output:* <span style="font-variant:small-caps;">Yes</span> *if and only if there exists $\sigma\in G$ such that* $ \sigma(\mathfrak{s})=\mathfrak{t}.$ It is clear that <span style="font-variant:small-caps;">StringIsomorphism</span> is at least as hard as <span style="font-variant:small-caps;">GraphIsomorphism</span>: a polynomial time many-one reduction can be obtained from <span style="font-variant:small-caps;">GraphIsomorphism</span> by “flattening” the adjacency matrices of the graphs in question into bitstrings. The set of string permutations that correspond to graph isomorphisms forms a proper subgroup of the full symmetric group. Indeed, the algorithm in [@babai] is actually an algorithm for <span style="font-variant:small-caps;">StringIsomorphism</span>, which solves <span style="font-variant:small-caps;">GraphIsomorphism</span> as a special case. Stabilizer states ----------------- The Gottesman–Knill theorem [@gknill] states that any quantum circuit made up of CNOT, Hadamard and phase gates along with single qubit measurements can be simulated in polynomial time by a classical algorithm. Such circuits are called stabilizer circuits, and any $n$-qubit quantum state $|\psi\rangle$ such that $|\psi\rangle=Q|0\rangle^{\otimes n}$ for a stabilizer circuit $Q$ is referred to as a *stabilizer state*. Let $|\psi\rangle$ be an $n$-qubit state. A unitary $U$ is said to be a *stabilizer* of $|\psi\rangle$ if $U|\psi\rangle=\pm|\psi\rangle$.
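As a concrete instance of this definition (a NumPy sketch of standard facts about the Bell state, not taken from the paper): the Pauli operators $X\otimes X$ and $Z\otimes Z$ both stabilize $(|00\rangle+|11\rangle)/\sqrt{2}$ with eigenvalue $+1$, and so does their product.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

XX = np.kron(X, X)
ZZ = np.kron(Z, Z)

# both generators fix the Bell state exactly (+1 eigenvectors) ...
print(np.allclose(XX @ bell, bell))       # → True
print(np.allclose(ZZ @ bell, bell))       # → True
# ... and their product (which equals -Y x Y) does too
print(np.allclose(XX @ ZZ @ bell, bell))  # → True
```

Together with the identity, these four operators exhaust the $2^2=4$-element Pauli stabilizer group of the Bell state, matching the size-$2^n$ count discussed below.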
The set of stabilizers of a state $|\psi\rangle$ forms a group under composition called the *stabilizer group* of $|\psi\rangle$, denoted $\text{Stab}(|\psi\rangle)$. The Pauli matrices are the unitaries $$\begin{aligned} \sigma_{00}:=\begin{pmatrix} 1&0\\0&1\end{pmatrix},~\sigma_{01}:=\begin{pmatrix}0&1\\1&0 \end{pmatrix},~\sigma_{10}:=\begin{pmatrix} 1&0\\0&-1 \end{pmatrix},~\sigma_{11}:=\begin{pmatrix} 0&-i\\i&0 \end{pmatrix},\end{aligned}$$ which generate a finite group $\mathcal{P}$ under composition called the *single qubit Pauli group*. The $n$-qubit Pauli group $\mathcal{P}_n$ is the group with elements $\{(\pm 1)U_1\otimes \dots\otimes (\pm 1 )U_n~:~U_j\in \mathcal{P}\}\cup \{(\pm i)U_1\otimes \dots\otimes (\pm i)U_n~:~U_j\in \mathcal{P}\}$. It is well known (*cf.* [@ag] Theorem 1) that an $n$-qubit stabilizer state $|\psi\rangle$ is uniquely determined by the finite group $S(|\psi\rangle):=\text{Stab}(|\psi\rangle)\cap \mathcal{P}_n$, of size $2^n$. Hence, $|\psi\rangle$ is determined by the $n=\log(2^n)$ elements of $\mathcal{P}_n$ that generate $S(|\psi\rangle)$. These elements each take $2n$ bits to specify the Pauli matrices in the tensor product, and an extra bit to specify the overall $\pm 1$ phase. This fact, along with the following theorem, means that given a polynomial number of copies of a stabilizer state $|\psi\rangle$, we can produce an efficient classical description of that state by means of the generators of $S(|\psi\rangle)$.
\[theorem:classicaldescription\] There exists a quantum algorithm with the following properties: - Given access to $O(n)$ copies of an $n$-qubit stabilizer state $|\psi\rangle$, the algorithm outputs a bitstring describing a set of $n$-qubit Pauli operators $s_1,\dots, s_n\in \mathcal{P}_n$ such that $\langle s_1,\dots, s_n\rangle = S(|\psi\rangle)$; - the algorithm halts after $O(n^3)$ classical time steps; - all collective measurements are performed over at most two copies of the state $|\psi\rangle$; - the algorithm succeeds with probability $1-1/\exp(n)$. Permutations of quantum states and isomorphism {#subsection:quantumstateisomorphism} ---------------------------------------------- Let $\sigma\in\mathfrak{S}_n$ be a permutation. Then the following is a unitary map acting on $n$-partite states that implements $\sigma$ as a permutation of the subsystems (see *e.g.* [@aram]) $$\begin{aligned} \label{eq:harrowop} P_\sigma:=\sum_{i_1,\dots,i_n\in[d]}|i_{\sigma(1)}\dots i_{\sigma(n)}\rangle\langle i_1\dots i_n|.\end{aligned}$$ Note that $P_\sigma$ depends on the dimensions of the subsystems of the $n$-partite states on which it acts. Nevertheless, here we will only consider quantum states where each subsystem is a qubit. The focus of this work is on a number of variations on the following promise problem, <span style="font-variant:small-caps;">StateIsomorphism</span>. In what follows, let $\mathcal{Q}_{m,n}$ for $m\ge n$ denote the set of all quantum circuits with $m$ input qubits and $n$ output qubits. In particular, $\mathcal{Q}_{n,n}$ is the set of all pure state quantum circuits on $n$ qubits. Then, for $m>n$, $\mathcal{Q}_{m,n}$ is the set of all mixed state circuits that can be obtained by discarding the last $m-n$ output qubits of the circuits in $\mathcal{Q}_{m,m}$. When we specify a circuit with a subscript label, such as $Q_\psi\in\mathcal{Q}_{m,n}$, we do so to easily refer to the state of the output qubits when the circuit is applied to the state $|0\rangle^{\otimes m}$.
In particular, when $m=n$ this is the pure state $|\psi\rangle\in\mathbb{C}^{2^n}$, and the mixed state $\psi$ acting on $\mathbb{C}^{2^n}$ otherwise. The next problem is a special case of the above, defined in terms of stabilizer states. Finally, we consider the state isomorphism problem for mixed states. We also consider the above problems where the permutation group specified is equal to the symmetric group $G=\mathfrak{S}_n$. We denote these problems with the prefix $\mathfrak{S}_n$, for example, $\mathfrak{S}_n\text{-SI}$. It is clear that $\textsc{SSI}\le_p \textsc{SI}\le_p\textsc{MSI}$. We now show that SI is in ${\text{$\mathrm{QCMA}$}}$. $\textsc{StateIsomorphism}\in {\text{$\mathrm{QCMA}$}}$. In the case of a YES instance, there exists $\sigma\in G$ such that $|\langle \psi_1|P_\sigma|\psi_0\rangle|=1$. The latter equality can be verified by means of a SWAP-test on the states $P_\sigma|\psi_0\rangle$ and $|\psi_1\rangle$, which by definition will accept with probability equal to $1$. Since the states $|\psi_0\rangle$ and $|\psi_1\rangle$ are given as efficient classical descriptions of quantum circuits that prepare them, this verification can be performed in quantum polynomial time. Furthermore, there exists an efficient classical description of the permutation $\sigma$ in terms of the generators of the group specified in the input, each of which can be described via their permutation matrices. The unitary $P_\sigma$ can be implemented efficiently by Arthur given the description of $\sigma$. Membership or non-membership of some permutation $\sigma\in\mathfrak{S}_n$ in the permutation group $G\le \mathfrak{S}_n$ specified by the set of generators $\{\tau_1,\dots, \tau_k\}$ can be verified in classical polynomial time by utilizing standard techniques from computational group theory.
In particular, since we are considering permutation groups we can use the Schreier-Sims algorithm to obtain a base and a strong generating set for $G$ in polynomial time from $\{\tau_1,\dots,\tau_k\}$. These new objects can then be used to efficiently verify membership in $G$ [@sims; @FHL; @luks2]. In the case that the states are not isomorphic, we have by definition that for all permutations $\sigma\in G$, $|\langle \psi_1|P_\sigma|\psi_0\rangle|\le \epsilon(n)$; again this can be verified with the SWAP-test, which in this case accepts with probability at most $1/2+\epsilon(n)^2/2$. It is not clear if MSI is in ${\text{$\mathrm{QCMA}$}}$, or even in ${\text{$\mathrm{QMA}$}}$. While the isomorphism $\sigma$ can still be specified efficiently classically, it is not known if there exists an efficient quantum circuit for testing if two mixed states are close in trace distance. In fact, this problem is known as the <span style="font-variant:small-caps;">StateDistinguishability</span> problem, and is QSZK-complete [@qszk]. There exists a polynomial-time many-one reduction from <span style="font-variant:small-caps;">GraphIsomorphism</span> to <span style="font-variant:small-caps;">SSI</span>; indeed, it is identical to the reduction from <span style="font-variant:small-caps;">GraphIsomorphism</span> to <span style="font-variant:small-caps;">StringIsomorphism</span>. <span style="font-variant:small-caps;">SSI</span> is in turn trivially reducible to the isomorphism problems for pure and mixed states respectively. These problems are therefore at least as hard as <span style="font-variant:small-caps;">GraphIsomorphism</span>. Interestingly, however, there also exists a reduction from <span style="font-variant:small-caps;">GraphIsomorphism</span> to a restricted form of <span style="font-variant:small-caps;">SI</span> where the permutation group $G$ is equal to the full symmetric group $\mathfrak{S}_n$ (as stated earlier, we refer to this problem as $\mathfrak{S}_n$-$\textsc{StateIsomorphism}$).
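The flattening reduction just mentioned is easy to make concrete. The following toy sketch (our own illustration, not code from the paper; the brute-force search over $\mathfrak{S}_n$ stands in for a genuine <span style="font-variant:small-caps;">StringIsomorphism</span> solver, so it is only feasible for very small $n$) treats the adjacency matrix as a string over $\Omega=V\times V$, on which $\mathfrak{S}_n$ acts componentwise:

```python
# Toy sketch of the flattening reduction: a graph's adjacency matrix becomes
# a string over Omega = V x V, and the group is S_n acting on index pairs.
# Brute force stands in for a real StringIsomorphism algorithm [luks, babai].
from itertools import permutations

def string_isomorphic(adj_s, adj_t):
    """Decide whether some sigma in S_n maps the flattened string of adj_s
    to that of adj_t, i.e. whether the graphs are isomorphic."""
    n = len(adj_s)
    for sigma in permutations(range(n)):
        # sigma acts on Omega = V x V componentwise: (i, j) -> (sigma(i), sigma(j))
        if all(adj_t[sigma[i]][sigma[j]] == adj_s[i][j]
               for i in range(n) for j in range(n)):
            return True
    return False

path     = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # the path 0-1-2
relabel  = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]   # the same path, relabelled
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]

assert string_isomorphic(path, relabel)
assert not string_isomorphic(path, triangle)
```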
In order to demonstrate this, we require a family of quantum states referred to as *graph states* [@graphstates]. Let $G=(V,E)$ be an $n$-vertex graph. For each vertex $v\in V$, define the observable $ K^{(v)}:=\sigma_x^{(v)}\prod_{w\in N(v)}\sigma_z^{(w)} $ where $N(v)$ is the neighborhood of $v$, and $\sigma_i^{(j)}$ denotes the $n$-qubit operator consisting of Pauli $\sigma_i$ applied to the $j^{\text{th}}$ qubit and identity on the rest. The graph state $|G\rangle$ is defined to be the state stabilized by the set $S_G:=\{K^{(v)}~:~v\in V\}$, that is, $ K^{(v)}|G\rangle=|G\rangle $ for all $v\in V$. Since the stabilizers of a graph state $|G\rangle$ are all elements of the $|V|$-qubit Pauli group, graph states are stabilizer states, and the following theorem provides an upper bound on the overlap of distinct graph states. \[theorem:agottstab\] Let $|\psi\rangle,|\phi\rangle$ be non-orthogonal stabilizer states, and let $s$ be the minimum, taken over all sets of generators $\{P_1,\dots, P_n\}$ for $S(|\psi\rangle)$ and $\{Q_1,\dots, Q_n\}$ for $S(|\phi\rangle)$, of the number of $i$ values such that $P_i\neq Q_i$. Then $|\langle\psi|\phi\rangle|=2^{-s/2}$. We can now describe the reduction. $\textsc{GraphIsomorphism}\le_p \mathfrak{S}_n\text{-}\textsc{StateIsomorphism}$. Consider two $n$-vertex graphs $G$ and $H$. If $G=H$ then clearly $|\langle G|H\rangle|^2=1$ since $|G\rangle$ and $|H\rangle$ are the same state up to a global phase. Suppose $G\neq H$. If $|G\rangle$ and $|H\rangle$ are orthogonal then $|\langle G|H\rangle|^2=0$; otherwise necessarily $s>0$, so by Theorem \[theorem:agottstab\] we have that $|\langle G|H\rangle|^2\le \frac{1}{2}$. Consider a permutation $\sigma\in\mathfrak{S}_n$. Then for each $v\in V$, $K^{(\sigma(v))}=P_\sigma K^{(v)} P_\sigma^T$, so $|\langle\sigma(G)|P_\sigma|G\rangle|^2=1$. Explicitly, if $G\cong H$ then there exists a permutation of the vertices $\sigma$ such that $\sigma(G)=H$ and so $|\langle \sigma(G)|H\rangle|^2=|\langle G|P_\sigma^T |H\rangle|^2=1$.
If $G\not\cong H$ then for all $\sigma$, $|\langle G|P_\sigma^T |H\rangle|^2\le \frac{1}{2}$. To complete the reduction we must show that for any graph $G=(V,E)$, a description of a quantum circuit that prepares $|G\rangle$ can be produced efficiently classically. This is trivial: an alternate definition of graph states [@graphstates] gives us that $|G\rangle=\Pi_{\{i,j\}\in E}CZ_{ij}|+\rangle^{\otimes |V|}$, where $CZ_{ij}$ is the controlled-$\sigma_z$ operator with qubit $i$ as control and $j$ as target. Therefore, the <span style="font-variant:small-caps;">StateIsomorphism</span> problem where no restriction is placed on the permutations is at least as hard as <span style="font-variant:small-caps;">GraphIsomorphism</span>. This is in stark contrast to the complexity of the corresponding classical problem, which is trivially in ${\text{$\mathrm{P}$}}$: two bitstrings are isomorphic under $\mathfrak{S}_n$ if and only if they have the same Hamming weight, which is easily determined. Interactive proof systems {#section:interactiveproofsforquantumstateisomorphism} ========================= In this section we will prove Theorem \[theorem:SIQSZK\]. To do so, we will first demonstrate a quantum interactive proof system for <span style="font-variant:small-caps;">StateNonIsomorphism</span> (SNI) with two messages. We then show that this quantum interactive proof system can be made statistical zero knowledge. In order to prove the former, we will require the following lemma. \[theorem:harrowetal\] Given access to a sequence of unitaries $U_1,\dots, U_n$, along with their inverses $U_1^\dagger,\dots, U_n^\dagger$ and controlled implementations c-$U_1$,…,c-$U_n$, as well as the ability to produce copies of a state $|\psi\rangle$, promised that one of the following cases holds: 1. For some $i$, $U_i|\psi\rangle=|\psi\rangle$; 2. For all $i$, $|\langle\psi|U_i|\psi\rangle|\le 1-\delta$.
Then there exists a quantum algorithm which distinguishes between these cases using $O(\log n /\delta)$ copies of $|\psi\rangle$, succeeding with probability at least $2/3$. We can now prove the following. <span style="font-variant:small-caps;">StateNonIsomorphism</span> is in ${\text{$\mathrm{QIP}$}}(2)$. We will prove that the following constitutes a two-message quantum interactive proof system for SNI. 1. \(V) Uniformly at random, select $\sigma\in G$ and $j\in\{0,1\}$. Send the state $|\Psi\rangle^{\otimes k}$ to the prover, where $k=O(\log(|G|)/(1-\epsilon(n)))$ and $|\Psi\rangle=P_\sigma|\psi_j\rangle$. 2. \(P) Send $j'\in\{0,1\}$ to the verifier. 3. \(V) Accept if and only if $j'=j$. Obtaining a uniformly random element from $G$ as in step $1$ can be achieved efficiently if the verifier is in possession of a base and a strong generating set for $G$. These can be obtained in polynomial time from any generating set of $G$ by using the Schreier-Sims algorithm [@sims; @FHL; @luks2]. For a permutation $\pi\in G$, we define the $2n$-qubit circuit $U^{(j)}_\pi=\text{SWAP}\cdot (P_{\pi^{-1}}\otimes P_\pi)$, where the SWAP acts so as to swap the two $n$-qubit registers, that is, $\text{SWAP}|\psi_0\rangle|\psi_1\rangle=|\psi_1\rangle|\psi_0\rangle$. Now consider the sets of quantum circuits $C^{(j)}_G=\{U^{(j)}_{\pi}~:~\pi\in G\}$ for $j\in\{0,1\}$, each of cardinality $|G|$. Since each circuit in $C^{(0)}_G\cup C^{(1)}_G$ is made up of two permutations and a SWAP gate, each of their inverses can easily be obtained. Additionally, the controlled versions of these gates can be implemented via standard techniques. Consider first the YES case. The $k=O(\log(|G|)/(1-\epsilon(n)))$ copies of $|\Psi\rangle$ enable the prover to determine $j$ with success probability at least $2/3$ in the following manner. 1. Uniformly at random, select $j'\in\{0,1\}$. 2. Prepare $k$ copies of the state $|\Psi\rangle|\psi_{j'}\rangle$. 3.
Use the HLM algorithm with the state $|\Psi\rangle|\psi_{j'}\rangle$ and the set of circuits $C^{(j')}_G$ as input. If the algorithm reports case $1$ then output $j'$, otherwise output $j'\oplus 1$. Let us check that the HLM algorithm will work for our purposes. In the case that the prover’s guess is correct and $j'=j$, we have that $|\Psi\rangle|\psi_{j'}\rangle = (P_\sigma\otimes I)|\psi_j\rangle|\psi_j\rangle$, and so $$\begin{aligned} U_{\sigma}(P_\sigma\otimes I)|\psi_j\rangle|\psi_j\rangle&= \text{SWAP}\cdot (P_{\sigma^{-1}}\otimes P_{\sigma})\cdot(P_\sigma\otimes I)|\psi_j\rangle|\psi_j\rangle\\ &=\text{SWAP}\cdot(I\otimes P_\sigma)|\psi_j\rangle|\psi_j\rangle\\ &=|\Psi\rangle|\psi_j\rangle.\end{aligned}$$ This corresponds to case $1$ of Lemma \[theorem:harrowetal\]. If the prover’s guess is incorrect, $j'\neq j$, then for all $\pi\in G$ $$\begin{aligned} |\langle \Psi |\langle \psi_{j'} |U_\pi|\Psi\rangle|\psi_{j'}\rangle|&=|\langle \Psi |\langle \psi_{j'} |\text{SWAP}\cdot(P_{\pi^{-1}}\otimes P_{\pi})(P_\sigma\otimes I)|\psi_j\rangle|\psi_{j'}\rangle|\\ &=|\langle\Psi|\langle\psi_{j'}|(P_{\pi}\otimes P_{\pi^{-1}\cdot\sigma})|\psi_{j'}\rangle|\psi_{j}\rangle|\\ &\le |\langle \psi_{j}|P_{\sigma}^\dagger P_{\pi}|\psi_{j'}\rangle| \cdot |\langle \psi_{j'}| P_{\pi^{-1}\cdot \sigma}|\psi_{j}\rangle|\\ &\le \epsilon(n)^2,\end{aligned}$$ with the last inequality following from the fact that we are in the YES case: for all $\sigma\in G$, we have that $|\langle\psi_1|P_\sigma|\psi_0\rangle|\le \epsilon(n)$. This corresponds to case $2$ of Lemma \[theorem:harrowetal\]. Therefore, the HLM algorithm allows the prover to determine if her guess was correct or not, with success probability at least $2/3$. Consider now the NO case, where we have that for some $\sigma\in G$, $|\langle\psi_1|P_\sigma|\psi_0\rangle|=1$.
To determine $j$ correctly, a cheating prover must be able to distinguish the mixed states $\rho_j=\frac{1}{|G|}\sum_{\pi\in G}\left(P_\pi|\psi_j\rangle\langle\psi_j|P_\pi^\dagger\right)^{\otimes k}$ for $j\in\{0,1\}$, when given $k$ copies. However, $$\begin{aligned} \lVert \rho_0-\rho_1\rVert_1 &= \frac{1}{|G|}\left\lVert \sum_{\pi\in G}P_\pi^{\otimes k}\left(|\psi_0\rangle\langle\psi_0|\right)^{\otimes k}P_\pi^{\dagger\otimes k}-\sum_{\pi\in G}P_\pi^{\otimes k}\left(|\psi_1\rangle\langle\psi_1|\right)^{\otimes k}P_\pi^{\dagger\otimes k}\right\rVert_1\\ &= \frac{1}{|G|}\left\lVert \sum_{\pi\in G}P_\pi^{\otimes k}P_\sigma^{\otimes k}\left(|\psi_0\rangle\langle\psi_0|\right)^{\otimes k}P_\sigma^{\dagger\otimes k}P_\pi^{\dagger\otimes k}-\sum_{\pi\in G}P_\pi^{\otimes k}\left(|\psi_1\rangle\langle\psi_1|\right)^{\otimes k}P_\pi^{\dagger\otimes k}\right\rVert_1\\ &= \frac{1}{|G|}\left\lVert \sum_{\pi\in G}P_\pi^{\otimes k}\left(|\psi_1\rangle\langle\psi_1|\right)^{\otimes k}P_\pi^{\dagger\otimes k}-\sum_{\pi\in G}P_\pi^{\otimes k}\left(|\psi_1\rangle\langle\psi_1|\right)^{\otimes k}P_\pi^{\dagger\otimes k}\right\rVert_1\\ &=0,\end{aligned}$$ so they are indistinguishable. Note that the fact that the prover has been given $k$ copies does not help, as the trace distance is $0$. In this case, the probability that the prover can guess $j$ correctly is therefore equal to $1/2$. We can use a standard amplification argument to modify the above protocol so that it has negligible completeness error, which means that it can be made statistical zero knowledge. We prove this now. <span style="font-variant:small-caps;">StateNonIsomorphism</span> is in *QSZK*. We first show that the protocol above can be modified to have exponentially small completeness error. This allows us to show that the protocol is quantum statistical zero knowledge. First, the verifier sends the prover $k'=O(n\log(|G|)/(1-\epsilon(n)))$ copies of the state $|\Psi\rangle$.
The prover can then use HLM $n$ times to guess $j$, responding with the value of $j$ that appears in $n/2$ or more of the trials. Let $X_1,\dots, X_n$ be independent indicator random variables, with $X_i=1$ if the prover guessed correctly on the $i^\text{th}$ repetition and $X_i=0$ otherwise. By Lemma \[theorem:harrowetal\], we have that ${\text{$\mathrm{Pr}$}}[X_i=1]\ge2/3$ and so $$\begin{aligned} {\text{$\mathrm{Pr}$}}\left[\text{Prover guesses correctly}\right]&=1-{\text{$\mathrm{Pr}$}}\left[\frac{1}{n}\sum_{i=1}^n X_i < 1/2\right]\\ &=1-{\text{$\mathrm{Pr}$}}\left[\frac{1}{n}\sum_{i=1}^n X_i-2/3<-1/6\right]\\ &\ge 1-2^{-\Omega(n)}\end{aligned}$$ via the Chernoff bound (explicitly, for $p,q\in[0,1]$, we have that ${\text{$\mathrm{Pr}$}}\left[\sum_{i=1}^n (X_i-p)/n<-q \right]<e^{-q^2n/(2p(1-p))}$). Clearly, sending $k'$ copies of $|\Psi\rangle$ rather than $k$ gives no advantage to the prover: the trace distance between the mixed states $\rho_0$ and $\rho_1$ is still $0$ in the NO case. What remains is to show that the protocol is statistical zero knowledge. This is easily obtained, and follows by similar reasoning to the protocol in [@qszk]: the view of the verifier after the first step can be obtained by the simulator by selecting $\sigma$ and $j$ then preparing $k'$ copies of the state $|\Psi\rangle$. The view of the verifier after the prover’s response can be obtained by tracing out the message qubits and supplying the verifier with the value $j$. Since the completeness error is exponentially small, the trace distance between the simulated view and the actual view is a negligible function.
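The majority-vote amplification above is standard; as a quick numerical sanity check (our own sketch, assuming independent repetitions that each succeed with probability $2/3$, as Lemma \[theorem:harrowetal\] guarantees), the exact success probability of the majority vote can be computed from the binomial distribution:

```python
# Exact success probability of answering with the majority outcome over n
# independent trials, each correct with probability p = 2/3. This is our own
# illustrative check of the Chernoff-style amplification, not the paper's code.
from math import comb

def majority_success(n, p=2/3):
    """Probability that at least n/2 of n independent trials succeed
    (odd n, so there are no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

# the success probability climbs towards 1 as the repetitions grow
assert abs(majority_success(1) - 2/3) < 1e-12
assert majority_success(101) > majority_success(31) > majority_success(3)
assert majority_success(101) > 0.99
```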
If we relax the condition for the two states to be isomorphic (a NO instance) to: ‘there exists $\sigma \in G$ such that $|\langle\psi_1|P_\sigma|\psi_0\rangle|\geq b(n)$’, then the distance between the two states $\rho_j=\frac{1}{|G|}\sum_{\pi\in G}\left(P_\pi|\psi_j\rangle\langle\psi_j|P_\pi^\dagger\right)^{\otimes k}$ for $j\in\{0,1\}$ is upper bounded by $$\begin{aligned} \lVert \rho_0-\rho_1\rVert_1 &=\frac{1}{|G|}\left\lVert \sum_{\pi\in G}P_{\pi}^{\otimes k}\left(\left(P_\sigma|\psi_0\rangle\langle\psi_0|P_{\sigma}^\dagger\right)^{\otimes k} - \left(|\psi_1\rangle\langle\psi_1|\right)^{\otimes k}\right) P_\pi^{\dagger\otimes k}\right\rVert_1\\ &\leq \frac{1}{|G|}\sum_{\pi\in G} \left\lVert P_{\pi}^{\otimes k}\left(\left(P_\sigma|\psi_0\rangle\langle\psi_0|P_{\sigma}^\dagger\right)^{\otimes k} - \left(|\psi_1\rangle\langle\psi_1|\right)^{\otimes k}\right) P_\pi^{\dagger\otimes k}\right\rVert_1\\ &= \left\lVert \left(P_\sigma|\psi_0\rangle\langle\psi_0|P_{\sigma}^\dagger\right)^{\otimes k} - \left(|\psi_1\rangle\langle\psi_1|\right)^{\otimes k}\right\rVert_1\\ &=2 \sqrt {1- \left|\langle\psi_0|P_{\sigma}^\dagger |\psi_1\rangle\right|^{2 k}}\leq 2 \sqrt {1- b(n)^{2 k}}, \end{aligned}$$ where the first inequality is the triangle inequality, the final inequality follows from the promise, and the preceding equality rewrites the trace distance between pure states in terms of their inner product. Now, substituting $k=\frac {\log n}{1-\epsilon(n)}$ and using the fact that $\log(1-x)>-2x$ for all $x\in (0,1/2)$, we get, for any $b(n)\in (1/2,1)$, $$\begin{aligned} \lVert \rho_0-\rho_1\rVert_1 &\leq 2 \sqrt {1- b(n)^{\frac {2 \log n}{1-\epsilon(n)}}}= 2 \sqrt {1- n^{\frac {2\log {b(n)}}{1-\epsilon(n)} }}\\ &\leq 2 \sqrt {1- n^{\frac {-4(1-b(n))}{1-\epsilon(n)} }}.\end{aligned}$$ Then the maximal probability of distinguishing between these two states is upper bounded by $$\begin{aligned} p\leq 1/2+\sqrt{1- n^{\frac {-4(1-b(n))}{1-\epsilon(n)} }}.\end{aligned}$$ We have thus proved Theorem \[theorem:SIQSZK\].
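The key equality above, $\lVert |\alpha\rangle\langle\alpha| - |\beta\rangle\langle\beta| \rVert_1 = 2\sqrt{1-|\langle\alpha|\beta\rangle|^2}$, applied with $|\alpha\rangle,|\beta\rangle$ the $k$-fold tensor powers of two pure states, can be checked numerically (our own sketch with random single-qubit states, not part of the proof):

```python
# Numerical check of ||(|a><a|)^{xk} - (|b><b|)^{xk}||_1 = 2 sqrt(1 - |<a|b>|^{2k}),
# the pure-state trace-distance identity used in the bound above.
import numpy as np

def trace_norm(M):
    """Trace norm of a Hermitian matrix: sum of absolute eigenvalues."""
    return np.abs(np.linalg.eigvalsh(M)).sum()

def tensor_power(M, k):
    out = np.array([[1.0 + 0j]])
    for _ in range(k):
        out = np.kron(out, M)
    return out

rng = np.random.default_rng(0)
a = rng.normal(size=2) + 1j * rng.normal(size=2); a /= np.linalg.norm(a)
b = rng.normal(size=2) + 1j * rng.normal(size=2); b /= np.linalg.norm(b)

k = 3
diff = tensor_power(np.outer(a, a.conj()), k) - tensor_power(np.outer(b, b.conj()), k)
assert np.isclose(trace_norm(diff),
                  2 * np.sqrt(1 - abs(np.vdot(a, b)) ** (2 * k)))
```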
Corollary \[corollary:qcma\] follows easily: if SI were QCMA-complete then all QCMA problems would be reducible to it, and would belong to QSZK. While SI belongs to QCMA, the above protocol requires quantum communication. It is not clear if a similar protocol exists that uses classical communication only. In the next theorem we show that such a protocol exists for <span style="font-variant:small-caps;">StabilizerStateNonIsomorphism</span>, since stabilizer states can be described efficiently classically. <span style="font-variant:small-caps;">StabilizerStateNonIsomorphism</span> is in $\emph{QCSZK}$. It suffices to show that the state $|\Psi\rangle$ in the protocol above can be communicated to the prover using classical communication only. We know from Theorem \[theorem:classicaldescription\] that a classical description can be obtained efficiently from $O(n)$ copies of $|\Psi\rangle$. These copies can be prepared efficiently, since they are specified in the problem instance by quantum circuits that prepare them. We now prove that <span style="font-variant:small-caps;">MixedStateIsomorphism</span> is QSZK-hard (Theorem \[theorem:msiqszkhardNICE\]). We actually prove the following stronger result. \[theorem:msiqszk\] $(\epsilon,1-\epsilon)$-$\mathfrak{S}_n$-<span style="font-variant:small-caps;">MixedStateIsomorphism</span> is QSZK-hard for all $\epsilon(n)=1/\exp(n)$. We prove this by reduction from the following problem $(\alpha,\beta)$-<span style="font-variant:small-caps;">ProductState</span>, which as shown in [@septesting] is QSZK-hard even when $\alpha=\epsilon,\beta=1-\epsilon$ and $\epsilon$ is exponentially small in $n$.
$(\alpha,\beta)\text{-}\textsc{ProductState}$\ *Input:* *Efficient description of a quantum circuit $Q_{\rho}$ in $\mathcal{Q}_{0,n}$.*\ *YES:* *There exists an $n$-partite product state $\sigma_1\otimes \cdots\otimes\sigma_n$ such that $D(\rho,\sigma_1\otimes\cdots\otimes\sigma_n)\le\alpha$.*\ *NO:* *For all $n$-partite product states $\sigma_1\otimes \cdots\otimes\sigma_n$, $D(\rho,\sigma_1\otimes\cdots\otimes\sigma_n)\ge\beta$.* We make use of the following lemma. For an $n$-partite mixed state $\rho$, let $\rho_i$ denote the state of the $i^{\text{th}}$ subsystem, obtained by tracing out the other subsystems. Let $\rho$ be an $n$-qubit state. If there exists a product state $\sigma_1\otimes\cdots\otimes\sigma_n$ such that $\lVert\rho-\sigma_1\otimes\cdots\otimes \sigma_n\rVert_1\le \alpha$, then $\lVert\rho-\rho_1\otimes\cdots\otimes \rho_n\rVert_1\le (n+1)\alpha$. ![Constructing the state $\rho'=\rho_1\otimes\dots\otimes\rho_n$ from $n$ copies of the input circuit $Q_\rho$.[]{data-label="fig:qcirc"}](qcirc.pdf) We now must show that every instance of $(\alpha,\beta)$-<span style="font-variant:small-caps;">ProductState</span> can be converted to an instance of $(\alpha',\beta')$-$\mathfrak{S}_n$-<span style="font-variant:small-caps;">MixedStateIsomorphism</span>. In particular, consider an instance $\rho$ of $(\alpha,\beta)$-<span style="font-variant:small-caps;">ProductState</span>. Our reduction takes this to an instance $(\rho,\rho')$ of $((n+1)\alpha,\beta)$-$\mathfrak{S}_n$-<span style="font-variant:small-caps;">MixedStateIsomorphism</span>, where $\rho'=\rho_1\otimes\dots\otimes \rho_n$ can be prepared in the following way from $n$ copies of the state $\rho$. Denote these $n$ copies as $\rho^{(1)},\dots,\rho^{(n)}$. The $i^{\text{th}}$ qubit line of $\rho'$ is the $i^{\text{th}}$ qubit line of $\rho^{(i)}$, and all unused qubit lines are discarded (illustrated in Figure \[fig:qcirc\]). Let $\rho$ be an $n$-partite state.
If $\rho$ is a YES instance of $(\alpha,\beta)$-<span style="font-variant:small-caps;">ProductState</span> then $ D(\rho,\rho_1\otimes \cdots \otimes \rho_n) \le (n+1)\alpha/2 $ and so $(\rho,\rho')$ corresponds to a YES instance of $((n+1)\alpha,\beta)$-$\mathfrak{S}_n$-<span style="font-variant:small-caps;">MixedStateIsomorphism</span>. If $\rho$ is a NO instance of $(\alpha,\beta)$-<span style="font-variant:small-caps;">ProductState</span> then $D(\rho,\theta)\ge \beta$ for all product states $\theta$. This means that $D(\rho,P_\sigma(\rho_1\otimes\cdots\otimes\rho_n) P_\sigma^\dagger)\ge \beta$ for all $\sigma\in\mathfrak{S}_n$, since all such states are product. In this section we have shown that <span style="font-variant:small-caps;">StateIsomorphism</span> is in QSZK, and so is unlikely to be QCMA-complete unless all problems in QCMA have quantum statistical zero knowledge proof systems. We have also shown that <span style="font-variant:small-caps;">StabilizerStateIsomorphism</span> has a quantum statistical zero knowledge proof system that uses classical communication only, and that <span style="font-variant:small-caps;">MixedStateIsomorphism</span> is QSZK-hard. In the next section, we show that the quantum polynomial hierarchy collapses if <span style="font-variant:small-caps;">StabilizerStateIsomorphism</span> is QCMA-complete. A quantum polynomial hierarchy {#section:aquantumpolynomialhierarchy} ============================== Yamakami [@yamakami] considers a more general framework of quantum complexity theory, where computational problems are specified with quantum states as inputs, rather than just classical bitstrings. We find that using this more general view of computational problems makes it easier to define a very general quantum polynomial-time hierarchy, which can then be “pulled back” to a hierarchy that has more conventional complexity classes as its lowest levels. Following [@yamakami] we consider classes of *quantum promise problems*, where the YES and NO sets are made up of quantum states.
We use Yamakami’s notion of quantum $\exists$ and $\forall$ complexity class operators in our definitions. These yield classes that are more general than we need, so we use restricted versions where all instances are computational basis states. Let $|\psi\rangle\in\mathcal{H}_2^{\otimes n}$ be an $n$-qubit state. Then, in analogy to the length of a classical bitstring $|x_1\dots x_n|=n$, we define the length of the state $|\psi\rangle$ as $\big| |\psi\rangle\big|=n$. The set $\{0,1\}^*:=\cup_{i=1}^{\infty}\{0,1\}^i$ is the set of all bitstrings. Analogously, the set $\mathcal{H}_2^*:= \bigcup_{i=1}^{\infty}\mathcal{H}_2^{\otimes i} $ is the set of all qubit states. A *quantum promise problem* is therefore a pair of sets $\mathcal{A}_{\text{YES}},\mathcal{A}_{\text{NO}}\subseteq \mathcal{H}_2^*$ with $\mathcal{A}_{\text{YES}}\cap\mathcal{A}_{\text{NO}}=\emptyset$. Note that to differentiate quantum promise problems from the traditional definition with bitstrings, we use the calligraphic font. We make use of the following complexity class, made up of quantum promise problems. A quantum promise problem ${(\mathcal{A}_{\text{$\mathrm{YES}$}},\mathcal{A}_{\text{$\mathrm{NO}$}})}$ is in the class ${\text{$\mathrm{BQP}$}}^q(a,b)$, for functions $a,b:\mathbb{N}\rightarrow [0,1]$, if there exists a polynomial-time generated uniform family of quantum circuits $\{Q_{n}~:~n\in\mathbb{N}\}$ such that for all $|\psi\rangle\in\mathcal{H}_2^*$ - if $|\psi\rangle\in {\mathcal{A}_{\text{$\mathrm{YES}$}}}$ then ${\text{$\mathrm{Pr}$}}[Q_{l} \text{ accepts } |\psi\rangle]\ge a(l); $ - if $|\psi\rangle\in {\mathcal{A}_{\text{$\mathrm{NO}$}}}$ then ${\text{$\mathrm{Pr}$}}[Q_{l} \text{ accepts } |\psi\rangle]\le b(l), $ where $l=\big||\psi\rangle\big|$. Classes made up of quantum promise problems will always be denoted with the ‘q’ superscript.
It is clear that ${\text{$\mathrm{BQP}$}}\subseteq{\text{$\mathrm{BQP}$}}^q$, because any classical promise problem can be converted to a quantum promise problem by considering bitstrings as computational basis states. There is nothing to be gained computationally by imposing that inputs are expressed as computational basis states rather than bitstrings, so we make no distinction between the “bitstring promise problems” and the “computational basis state” promise problems. Indeed, let ${\text{$\mathrm{C}$}}^q$ be a quantum promise problem class. Then we define $$\begin{aligned} {\text{$\mathrm{C}$}}:=\{\mathcal{A}\in{\text{$\mathrm{C}$}}^q~:~\text{all states in } {\mathcal{A}_{\text{$\mathrm{YES}$}}} \text{ and } {\mathcal{A}_{\text{$\mathrm{NO}$}}} \text{ are computational basis states.}\}\end{aligned}$$ The classes ${\text{$\mathrm{BQP}$}}^q$ and ${\text{$\mathrm{BQP}$}}$ are related in this way. For the remainder of this work we will assume that all complexity classes are made up of quantum promise problems. It will be convenient for us to consider even conventional complexity classes to be defined with problem instances specified as computational basis states, rather than as bitstrings. Defining them in this way does not affect the classes in any meaningful way, but it is useful for our purposes. In particular, instead of referring to instances of a promise problem $x\in A_{\text{YES}}\cup A_{\text{NO}}$, we will refer to computational basis states in a quantum promise problem $|x\rangle\in\mathcal{A}_{\text{YES}}\cup \mathcal{A}_{\text{NO}}$. The following operators are well known from classical complexity theory, and are adapted here for quantum promise problem classes. Let ${\text{$\mathrm{C}$}}$ be a complexity class.
A promise problem ${(\mathcal{A}_{\text{$\mathrm{YES}$}},\mathcal{A}_{\text{$\mathrm{NO}$}})}$ is in $\exists_s{\text{$\mathrm{C}$}}$ for $s\in\{q,c\}$ if there exists a promise problem ${(\mathcal{B}_{\text{$\mathrm{YES}$}},\mathcal{B}_{\text{$\mathrm{NO}$}})}\in{\text{$\mathrm{C}$}}$ and a polynomially bounded function $p:\mathbb{N}\rightarrow\mathbb{N}$ such that $$\begin{aligned} {\mathcal{A}_{\text{$\mathrm{YES}$}}}=\{|\psi\rangle\in\mathcal{H}_2^*~:~\exists |y\rangle\in S~|\psi\rangle\otimes |y\rangle\in {\mathcal{B}_{\text{$\mathrm{YES}$}}}\}, \end{aligned}$$ and $$\begin{aligned} {\mathcal{A}_{\text{$\mathrm{NO}$}}}=\{|\psi\rangle\in\mathcal{H}_2^*~:~\forall |y\rangle\in S~|\psi\rangle\otimes |y\rangle\in {\mathcal{B}_{\text{$\mathrm{NO}$}}}\}, \end{aligned}$$ where the set $S$ is equal to $\{|x\rangle~:~x\in \{0,1\}^{p(\big||\psi\rangle\big|)}\}$ if $s=c$, and $\mathcal{H}_2^{\otimes p(\big||\psi\rangle\big|)}$ if $s=q$. The class $\forall_s {\text{$\mathrm{C}$}}$ is defined analogously, but with the quantifiers swapped. We can now define the quantum polynomial hierarchy. Let $\Sigma^q_0=\Pi^q_0={\text{$\mathrm{BQP}$}}^q$. For $k\ge 1$, let $s_1\dots s_k\in\{c,q\}^k$. Then $$\begin{aligned} s_1\dots s_k\text{-}\Sigma_k^q=\exists_{s_1}s_2\cdots s_k\text{-}\Pi_{k-1}^q \end{aligned}$$ and $$\begin{aligned} s_1\dots s_k\text{-}\Pi_k^q= \forall_{s_1}s_2\cdots s_k\text{-}\Sigma_{k-1}^q \end{aligned}$$ This definition leads to complexity classes that include promise problems with quantum inputs. Such classes are not well understood, so we do not use this hierarchy in its full generality. Instead we take each level $\Sigma_i^q$ or $\Pi_i^q$ and strip out all problems except those defined in terms of computational basis states, denoting the resulting classes $\Sigma_i$ and $\Pi_i$. Doing so makes familiar classes emerge: indeed, it is clear that $\Sigma_0=\Pi_0={\text{$\mathrm{BQP}$}}$, $\text{c-}\Sigma_1={\text{$\mathrm{QCMA}$}}$ and $\text{q-}\Sigma_1={\text{$\mathrm{QMA}$}}$.
This provides a generalisation of the ideas of Gharibian and Kempe [@gk] into a full hierarchy: our definition of the class $\text{cq-}\Sigma_2$ corresponds directly to theirs. For our purposes we require the following technical lemma. \[lemma:eahousekeeping\] For all $k$, let $C_k=s\text{-}\Sigma_k^q$ or $C_k=s\text{-}\Pi_k^q$ for any $s\in\{q,c\}^k$. Then 1. \[item:ececeqec\] $\exists_c\exists_c C_k=\exists_c C_k$ 2. \[item:acaceqac\] $\forall_c\forall_c C_k=\forall_c C_k$ 3. \[item:ecsubeq\] $\exists_c C_k\subseteq\exists_q C_k$ 4. \[item:acsubaq\] $\forall_c C_k\subseteq\forall_q C_k$ 5. \[item:eceqequeq\] $\exists_c\exists_q C_k=\exists_q\exists_c C_k=\exists_q C_k$ 6. \[item:acaqequaq\] $\forall_c\forall_q C_k=\forall_q\forall_c C_k=\forall_q C_k$ (\[item:ececeqec\]) and (\[item:acaceqac\]) are trivial. (\[item:ecsubeq\]) follows because a verifier circuit can force all certificates to be classical by measuring each qubit in the standard basis before processing. (\[item:acsubaq\]) follows because the classes are complementary. (\[item:eceqequeq\]) follows by a similar argument: take ${(\mathcal{A}_{\text{$\mathrm{YES}$}},\mathcal{A}_{\text{$\mathrm{NO}$}})}\in\exists_c\exists_q C_k$, where the classical certificate is of length $p_1(|x|)$, and the quantum certificate is of length $p_2(|x|)$. Clearly ${(\mathcal{A}_{\text{$\mathrm{YES}$}},\mathcal{A}_{\text{$\mathrm{NO}$}})}$ is in $\exists_q C_k$ with certificate length $p_1(|x|)+p_2(|x|)$, since the first $p_1(|x|)$ qubits can be measured before processing, so that they are forced to be computational basis states. The other direction, $\exists_q C_k\subseteq \exists_c\exists_q C_k$, follows trivially by setting the classical certificate length to $0$. Then (\[item:acaqequaq\]) follows from (\[item:eceqequeq\]) because the classes are complementary. Quantum hierarchy collapse {#section:quantumhierarchycollapse} -------------------------- Our main focus in this paper is on problems in ${\text{$\mathrm{QCMA}$}}$.
Therefore, it is sufficient to adopt the definition of the hierarchy with all certificates classical. Let $ {\text{$\mathrm{QPH}$}}^q:=\bigcup_{i=0}^{\infty}\text{cc}\cdots \text{c-}\Sigma_{i}^q. $ We consider the restricted hierarchy ${\text{$\mathrm{QPH}$}}$, (*N.B.*, without the ‘q’ superscript). Since each certificate is classical, when we refer to classes at each level we omit the certificate specification, referring to each level as simply $\Sigma_i$ or $\Pi_i$. Also, note that we are considering the computational basis state restriction of each level of the hierarchy so we omit the ‘q’ superscript. We make use of the following lemmas. For all $i\ge 1$, $\exists_c \Sigma_i=\Sigma_i$ and $\forall_c\Pi_i=\Pi_i$. Both follow as corollaries of Lemma \[lemma:eahousekeeping\], parts (\[item:ececeqec\]) and (\[item:acaceqac\]). \[lemma:subsetcollapse\] For all $i\ge 1$, if $\Sigma_{i}\subseteq\Pi_{i}$ or $\Pi_{i}\subseteq\Sigma_{i}$ then *${\text{$\mathrm{QPH}$}}\subseteq \Sigma_i$*. We prove first that if the equality $\Sigma_i=\Pi_i$ held for some $i\ge 1$ then for all $j>i$, $\Sigma_j\subseteq \Sigma_i$. We prove this by induction on $j$. Consider the base case $j=i+1$. By definition, if $\mathcal{A}\in\Sigma_{i+1}$ then $\mathcal{A}\in \exists_c\Pi_i=\exists_c\Sigma_i=\Sigma_i$. Assume for the induction hypothesis that if $\Sigma_i=\Pi_i$ then $\Sigma_{j}\subseteq \Sigma_i$. Let $k=j-i+1$. For $k$ odd and $\mathcal{A}\in\Sigma_{j+1}$ we have that $\mathcal{A}\in\underbrace{\exists_c \forall_c\cdots\exists_c}_{k}\Pi_i=\underbrace{\exists_c \forall_c\cdots\exists_c}_{k}\Sigma_i=\underbrace{\exists_c\forall_c\cdots\forall_c}_{k-1}\Sigma_i=\Sigma_j$. By the induction hypothesis this is a subclass of $\Sigma_i$. The case for even $k$ follows in the same way. Since for all $i\ge 0$, $\Sigma_{i}=\text{co-}\Pi_{i}$, we have that if $\Sigma_{i}\subseteq\Pi_{i}$ or $\Pi_{i}\subseteq\Sigma_{i}$ then $\Sigma_{i}=\Pi_{i}$, and so the hierarchy collapses. 
The following two propositions are important for our purposes, and can be proved using similar techniques to those used in the proofs of ${\text{$\mathrm{AM}$}}={\text{$\mathrm{BP}$}}\cdot {\text{$\mathrm{NP}$}}$ and ${\text{$\mathrm{AM}$}}\subseteq \Pi_2^P$. We emphasise that the latter is in terms of the *quantum* polynomial hierarchy, indeed it would be remarkable if a similar result held for in terms of the classical hierarchy. The proofs follow in Sections \[subsection:qam=bpqma\] and \[subsection:bqpmaupperbound\]. \[prop:bpqam\] *${\text{$\mathrm{QCAM}$}}\subseteq{\text{$\mathrm{BP}$}}\cdot {\text{$\mathrm{QCMA}$}}$*, and *${\text{$\mathrm{QAM}$}}\subseteq{\text{$\mathrm{BP}$}}\cdot {\text{$\mathrm{QMA}$}}$*. A corollary of this is the following. \[prop:qampi2\] *${\text{$\mathrm{QCAM}$}}\subseteq \text{cc-}\Pi_2$*, and *${\text{$\mathrm{QAM}$}}\subseteq \text{cq-}\Pi_2$*. In what follows, we will refer to the class : a generalisation of ${\text{$\mathrm{QCAM}$}}$ which has an extra round of interaction between Arthur and Merlin. Kobayashi *et al.* [@kobayashi] show that this class is equal to . \[theorem:kobayashi\] *${\text{$\mathrm{QCMAM}$}}={\text{$\mathrm{QCAM}$}}$*. The next proposition uses this fact, and allows us to complete the proof of Theorem \[theorem:collapse\]. \[prop:coqcmaqcam\] If *${\text{$\mathrm{co}$}}$-${\text{$\mathrm{QCMA}$}}\subseteq {\text{$\mathrm{QCAM}$}}$* then *${\text{$\mathrm{QPH}$}}\subseteq{\text{$\mathrm{QCAM}$}}\subseteq\Pi_2$*. Let $\mathcal{A}={(\mathcal{A}_{\text{$\mathrm{YES}$}},\mathcal{A}_{\text{$\mathrm{NO}$}})}\in\Sigma_{2}$. 
Then by definition there exists a promise problem $\mathcal{B}={(\mathcal{B}_{\text{$\mathrm{YES}$}},\mathcal{B}_{\text{$\mathrm{NO}$}})}\in\Pi_{1}={\text{$\mathrm{co}$}}$-${\text{$\mathrm{QCMA}$}}$ and a polynomially bounded function $p$ such that for all $|x\rangle\in{\mathcal{A}_{\text{$\mathrm{YES}$}}}$, $$\begin{aligned} \label{eq:A1} \exists y\in\{0,1\}^{p(|x|)}~|x\rangle\otimes |y\rangle\in{\mathcal{B}_{\text{$\mathrm{YES}$}}},\end{aligned}$$ and for all $|x\rangle\in{\mathcal{A}_{\text{$\mathrm{NO}$}}}$, $$\begin{aligned} \label{eq:A2} \forall y\in\{0,1\}^{p(|x|)}~|x\rangle\otimes |y\rangle\in{\mathcal{B}_{\text{$\mathrm{NO}$}}}.\end{aligned}$$ If co-${\text{$\mathrm{QCMA}$}}\subseteq {\text{$\mathrm{QCAM}$}}$ then $\mathcal{B}\in {\text{$\mathrm{QCAM}$}}$. The existentially (Eq. \[eq:A1\]) and universally (Eq. \[eq:A2\]) quantified $y$’s can be thought of as certificate strings, and so $\mathcal{A}\in {\text{$\mathrm{QCMAM}$}}$. By Theorem \[theorem:kobayashi\], ${\text{$\mathrm{QCMAM}$}}={\text{$\mathrm{QCAM}$}}$, and so $\mathcal{A}\in\Pi_2$ by Proposition \[prop:qampi2\]. Hence, $\Sigma_2\subseteq\Pi_2$, and the hierarchy collapses to the second level by Lemma \[lemma:subsetcollapse\]. We now have the tools we need to prove Theorem \[theorem:collapse\]. Suppose $\mathcal{A}\in {\text{$\mathrm{QCMA}$}}\cap{\text{$\mathrm{co}$}}$-${\text{$\mathrm{QCAM}$}}$. If $\mathcal{A}$ is -complete then this implies that ${\text{$\mathrm{QCMA}$}}\subseteq{\text{$\mathrm{co}$}}$-${\text{$\mathrm{QCAM}$}}$, or equivalently ${\text{$\mathrm{co}$}}$-${\text{$\mathrm{QCMA}$}}\subseteq{\text{$\mathrm{QCAM}$}}$. The hierarchy then collapses to the second level via Proposition \[prop:coqcmaqcam\]. We may now finish this section by providing evidence that <span style="font-variant:small-caps;">StabilizerStateIsomorphism</span> is not -complete, encapsulated in Corollary \[theorem:productCollapse\]. We do this by proving the following. <span style="font-variant:small-caps;">StabilizerStateNonIsomorphism</span> is in **.
For a stabilizer state $|\psi\rangle$, denote by $s_{\psi}^{(1)},\dots, s_{\psi}^{(n)}\in \{\pm I,\pm X,\pm Y,\pm Z\}^n$ the classical strings that describe the stabilizer generators of $|\psi\rangle$ that we can obtain efficiently using the algorithm of Theorem \[theorem:classicaldescription\]. We denote by $s_\psi$ the length $2n$ string that is obtained by concatenating these stabilizer strings, that is $s_{\psi} = s_{\psi}^{(1)}\dots s_{\psi}^{(n)}$. Then for any permutation $\sigma\in\mathfrak{S}_n$, we take $\sigma(s_{\psi}) = s_{\psi}^{(\sigma(1))},\dots, s_{\psi}^{(\sigma(n))}$. For a permutation group $G\le \mathfrak{S}_n$, consider the set $$\begin{aligned} S_G := \bigcup_{j\in\{0,1\},\sigma\in G}\left\{\left(\sigma\left(s_{\psi_{j}}\right),\pi\right)~:~\pi\in G\land \pi\left(\sigma\left(s_{\psi_{j}}\right)\right) = \sigma\left(s_{\psi_{j}}\right)\right\}.\end{aligned}$$ If there exists $\sigma$ such that $|\langle \psi_1|P_\sigma|\psi_0\rangle| = 1$ then $\sigma(s_{\psi_0}) = s_{\psi_1}$, and so in this case $|S_G| = |G|$. If for all $\sigma\in G$ we have that $|\langle \psi_1|P_\sigma|\psi_0\rangle|\le 1-\epsilon(n)$ then likewise for all $\sigma\in G$, $\sigma(s_{\psi_0}) \neq s_{\psi_1}$ and therefore $|S_G| = 2|G|$. If we can show that membership in $S_G$ can be efficiently verified by Arthur then we can apply the Goldwasser-Sipser set lower bound protocol [@gs] to distinguish the two cases, and hence to determine whether the states are isomorphic. To convince Arthur with high probability that $(\sigma(s_{\psi_j}),\pi)\in S_G$, Merlin sends the permutation $\sigma$ and the index $j\in\{0,1\}$. Arthur can then obtain the string $s_{\psi_{j}}$ with probability greater than $1-1/\text{exp}(n)$ using Montanaro’s algorithm of Theorem \[theorem:classicaldescription\] applied to $U_{\psi_j}|0\rangle$. He can then verify in polynomial time that the string he received is equal to $\sigma(s_{\psi_{j}})$, that $\pi$ is an automorphism of $\sigma(s_{\psi_{j}})$, and that the permutation $\sigma$ is in the group $G$.
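Arthur's check can be sketched in code as follows. This is an illustrative sketch only: we represent $s_{\psi_j}$ as a tuple of generator strings, follow the generator-permutation convention defined above, and, for simplicity, give $G$ as an explicit set of permutations rather than by a generating set (in the actual protocol, membership in $G$ would be tested from generators).

```python
def permute(strings, sigma):
    """Apply sigma to a tuple of generator strings, following the
    definition of sigma(s_psi) above (sigma given 0-indexed)."""
    return tuple(strings[sigma[i]] for i in range(len(strings)))

def verify_membership(t, pi, sigma, j, gens, G):
    """Arthur's polynomial-time check that (t, pi) lies in S_G, given
    Merlin's claimed witness (sigma, j); gens[j] stands in for the
    string s_{psi_j} recovered from the stabilizer description."""
    return (sigma in G                        # sigma belongs to the group G
            and t == permute(gens[j], sigma)  # t really is sigma(s_{psi_j})
            and pi in G
            and permute(t, pi) == t)          # pi is an automorphism of t
```

Here the cost of each check is polynomial in $n$, which is all the Goldwasser-Sipser protocol requires of Arthur.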
We have provided evidence that SSI can be thought of as an intermediate problem for . In particular, we have shown that if it were in BQP, then <span style="font-variant:small-caps;">GraphIsomorphism</span> would also be in BQP, and furthermore, that its -completeness would collapse the quantum polynomial hierarchy. Such evidence is unfortunately currently out of reach for <span style="font-variant:small-caps;">StateIsomorphism</span>, because we have been unable to show that <span style="font-variant:small-caps;">StateNonIsomorphism</span> is in . Perhaps Arthur and Merlin must always use quantum communication if Arthur is to be convinced that two states are NOT isomorphic. This would be interesting, because he can be convinced that they *are* isomorphic using classical communication only ($\textsc{StateIsomorphism}\in {\text{$\mathrm{QCMA}$}}$). Proof of Proposition \[prop:bpqam\] {#subsection:qam=bpqma} ----------------------------------- We begin by giving a definition of the BP complexity class operator. Note that we are still working in terms of the quantum promise problems defined earlier, which is clear from the use of the calligraphic font $\mathcal{A}$. In the following we take $x\sim X$ to mean that $x$ is an element drawn uniformly at random from a finite set $X$. Let ${\text{$\mathrm{C}$}}$ be a complexity class. 
A promise problem ${(\mathcal{A}_{\text{$\mathrm{YES}$}},\mathcal{A}_{\text{$\mathrm{NO}$}})}$ is in ${\text{$\mathrm{BP}$}}(a,b)\cdot{\text{$\mathrm{C}$}}$ for functions $a,b:\mathbb{N}\rightarrow[0,1]$ if there exists ${(\mathcal{B}_{\text{$\mathrm{YES}$}},\mathcal{B}_{\text{$\mathrm{NO}$}})}\in {\text{$\mathrm{C}$}}$ and a polynomially bounded function $p:\mathbb{N}\rightarrow\mathbb{N}$ such that - For all $|\psi\rangle\in{\mathcal{A}_{\text{$\mathrm{YES}$}}}$, $$\begin{aligned} \mathop{{\text{$\mathrm{Pr}$}}}_{y\sim \{0,1\}^{p(||\psi\rangle|)}}[|\psi\rangle\otimes |y\rangle\in {\mathcal{B}_{\text{$\mathrm{YES}$}}}]\ge a(||\psi\rangle|); \end{aligned}$$ - For all $|\psi\rangle\in{\mathcal{A}_{\text{$\mathrm{NO}$}}}$, $$\begin{aligned} \mathop{{\text{$\mathrm{Pr}$}}}_{y\sim \{0,1\}^{p(||\psi\rangle|)}}[|\psi\rangle\otimes |y\rangle\notin {\mathcal{B}_{\text{$\mathrm{NO}$}}}]\le b(||\psi\rangle|). \end{aligned}$$ It is clear that the probabilities $a,b$ can be amplified in the usual way by repeating the protocol a sufficient number of times and taking a majority vote. Let $(\{V_{x,y}\},m,s)$ be a verification procedure. In what follows we make use of the functions $$\begin{aligned} \mu(m,V_{x,y}):=\max_{|\psi\rangle\in\mathcal{H}_2^{\otimes m(|x|)}}\left({\text{$\mathrm{Pr}$}}[V_{x,y} \text{ accepts } |\psi\rangle]\right)\end{aligned}$$ and $$\begin{aligned} \nu(m,V_{x,y}):=\min_{|\psi\rangle\in\mathcal{H}_2^{\otimes m(|x|)}}\left({\text{$\mathrm{Pr}$}}[V_{x,y} \text{ rejects } |\psi\rangle]\right).\end{aligned}$$ The following results of Marriott and Watrous [@mw] are useful for our purposes. \[theorem:errorReduction\] Let $a,b:\mathbb{N}\rightarrow[0,1]$ and polynomially bounded $q:\mathbb{N}\rightarrow\mathbb{N}$ satisfy $$\begin{aligned} a(n)-b(n)\ge \frac{1}{q(n)} \end{aligned}$$ for all $n\in \mathbb{N}$. Then *${\text{$\mathrm{QAM}$}}(a,b)$ $\subseteq{\text{$\mathrm{QAM}$}}(1-2^{-r},2^{-r})$*, for all polynomially bounded $r:\mathbb{N}\rightarrow\mathbb{N}$.
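The amplification step invoked above (repeat the protocol and take a majority vote) can be made concrete with a short sketch; the exact binomial expression below is our own illustration, not notation from the text.

```python
from math import comb

def majority_success(p, r):
    """Probability that a strict majority of r independent trials,
    each succeeding with probability p, succeeds."""
    return sum(comb(r, i) * p**i * (1 - p)**(r - i)
               for i in range(r // 2 + 1, r + 1))
```

With $p=2/3$, fifty-one repetitions already push the success probability above $0.99$, in line with the exponential error reduction of Theorem \[theorem:errorReduction\].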
\[prop:altFormulation\] Let $$\begin{aligned} \left(\left\{V_{x,y}~:~x\in\{0,1\}^*,y\in\{0,1\}^{s(|x|)}\right\},m:\mathbb{N}\rightarrow\mathbb{N},s:\mathbb{N}\rightarrow\mathbb{N}\right) \end{aligned}$$ be a ** verification procedure for a promise problem $\mathcal{A}$ with completeness and soundness errors bounded by $1/9$. Then for any $x\in\{0,1\}^*$ and for $y\in\{0,1\}^{s(|x|)}$ chosen uniformly at random, - if $|x\rangle\in {\mathcal{A}_{\text{$\mathrm{YES}$}}}$ then ${\text{$\mathrm{Pr}$}}[\mu(m,V_{x,y})\ge 2/3]\ge 2/3$; - if $|x\rangle\in {\mathcal{A}_{\text{$\mathrm{NO}$}}}$ then ${\text{$\mathrm{Pr}$}}[\mu(m,V_{x,y})\le 1/3]\ge 2/3$. We can use these tools to prove Proposition \[prop:bpqam\]. We prove it for ; the result for ${\text{$\mathrm{QCAM}$}}$ follows by similar reasoning. Suppose $\mathcal{A}={(\mathcal{A}_{\text{$\mathrm{YES}$}},\mathcal{A}_{\text{$\mathrm{NO}$}})}\in{\text{$\mathrm{QAM}$}}(a,b)$. By Theorem \[theorem:errorReduction\], there exists a ${\text{$\mathrm{QAM}$}}$ verification procedure ($\{V_{x,y}\},m,s$) with completeness and soundness errors bounded by $1/9$. Thus by Proposition \[prop:altFormulation\] we know that for all $x\in\{0,1\}^{*}$, if $|x\rangle\in {\mathcal{A}_{\text{$\mathrm{YES}$}}}$ then $$\begin{aligned} \mathop{{\text{$\mathrm{Pr}$}}}_{y\sim\{0,1\}^{s(|x|)}}[\mu(m,V_{x,y})\ge 2/3]\ge 2/3, \end{aligned}$$ which means that $$\begin{aligned} \frac{1}{2^{s(|x|)}}\left|\left\{y\in\{0,1\}^{s(|x|)}~:~\exists |z\rangle\in\mathcal{H}_2^{\otimes m(|x|)}{\text{$\mathrm{Pr}$}}[V_{x,y} \text{ accepts }|z\rangle]\ge 2/3\right\}\right|\ge 2/3. \end{aligned}$$ By similar reasoning, if $|x\rangle\in {\mathcal{A}_{\text{$\mathrm{NO}$}}}$ then $$\begin{aligned} \frac{1}{2^{s(|x|)}}\left|\left\{y\in\{0,1\}^{s(|x|)}~:~\forall |z\rangle\in\mathcal{H}_2^{\otimes m(|x|)}{\text{$\mathrm{Pr}$}}[V_{x,y} \text{ accepts }|z\rangle]\le 1/3\right\}\right|\ge 2/3. \end{aligned}$$ These conditions are precisely the conditions for a promise problem to belong to .
This means we can fix some promise problem ${(\mathcal{B}_{\text{$\mathrm{YES}$}},\mathcal{B}_{\text{$\mathrm{NO}$}})}\in{\text{$\mathrm{QMA}$}}(2/3,1/3)$ and re-express these statements in the following form: - if $|x\rangle\in{\mathcal{A}_{\text{$\mathrm{YES}$}}}$ then $$\begin{aligned} \mathop{{\text{$\mathrm{Pr}$}}}_{y\sim\{0,1\}^{s(|x|)}}[|x\rangle\otimes |y\rangle\in {\mathcal{B}_{\text{$\mathrm{YES}$}}}]\ge 2/3 \end{aligned}$$ - if $|x\rangle\in {\mathcal{A}_{\text{$\mathrm{NO}$}}}$ then $$\begin{aligned} \mathop{{\text{$\mathrm{Pr}$}}}_{y\sim\{0,1\}^{s(|x|)}}[|x\rangle\otimes |y\rangle\in {\mathcal{B}_{\text{$\mathrm{NO}$}}}]\ge 2/3, \end{aligned}$$ and so $\mathcal{A}\in {\text{$\mathrm{BP}$}}(2/3,1/3)\cdot{\text{$\mathrm{QMA}$}}(2/3,1/3)$. Proof of Proposition \[prop:qampi2\] {#subsection:bqpmaupperbound} ------------------------------------ The following well-known lemmas allow us to put ${\text{$\mathrm{BP}$}}\cdot {\text{$\mathrm{QMA}$}}$ (*resp.* ${\text{$\mathrm{BP}$}}\cdot{\text{$\mathrm{QCMA}$}}$), and thus ${\text{$\mathrm{QAM}$}}$ (*resp.* ), in the second level of the quantum polynomial-time hierarchy. We follow [@ab] but recast them in a more helpful form for our purposes. For a set of bitstrings $S\subseteq\{0,1\}^m$ and $x\in\{0,1\}^m$, we take $S\oplus x=\{s\oplus x~:~s\in S\}$. Let $S\subseteq\{0,1\}^m$ for $m\ge 1$ such that $$\begin{aligned} |S|\ge (1-2^{-k})\cdot 2^m, \end{aligned}$$ for $2^k\ge m$. Then there exists $t_1,\dots, t_m\in\{0,1\}^m$ such that $$\begin{aligned} \bigcup_{i=1}^m S\oplus t_i=\{0,1\}^m. \end{aligned}$$ We prove this via the probabilistic method. Consider uniformly random $t_1,\dots, t_m\in\{0,1\}^m$ and fix any $r\in\{0,1\}^m$. Since the $t_i$ are independent, and $r\notin S\oplus t_i$ exactly when $t_i\notin S\oplus r$, $$\begin{aligned} \mathop{{\text{$\mathrm{Pr}$}}}_{t_1,\dots,t_m\sim\{0,1\}^m}\left[r\notin \bigcup_{i=1}^m S\oplus t_i\right]&=\prod_{i=1}^m\mathop{{\text{$\mathrm{Pr}$}}}_{t_i\sim\{0,1\}^m}\left[r\notin S\oplus t_i\right]\le 2^{-km}.
\end{aligned}$$ Consider the probability that there exists some $v\in\{0,1\}^m$ such that $v\notin \bigcup_{i=1}^m S\oplus t_i$, $$\begin{aligned} \mathop{{\text{$\mathrm{Pr}$}}}[\exists v\in\{0,1\}^m.v\notin \bigcup_{i=1}^m S\oplus t_i]&\le \sum_{i=1}^{2^m} 2^{-km}\\ &=\frac{2^m}{2^{km}}\\ &<1. \end{aligned}$$ Hence, $$\begin{aligned} {\text{$\mathrm{Pr}$}}\left[\bigcup_{i=1}^m S\oplus t_i=\{0,1\}^m\right]>0, \end{aligned}$$ and so there must exist $t_1,\dots, t_m$ as required. This yields the following corollary. \[lemma:quantSimEA\] Let $S\subseteq\{0,1\}^m$ for $m\ge 1$ such that $$\begin{aligned} |S|\ge (1-2^{-k})\cdot 2^m, \end{aligned}$$ for $2^k\ge m$. Then there exists $t_1,\dots, t_m$ such that for all $v\in\{0,1\}^m$, there exists $i\in[m]$ such that $t_i\oplus v\in S$. We also require the following lemma, which comes from the opposite direction. \[lemma:quantSimAE\] Let $S\subseteq\{0,1\}^m$ for $m\ge 1$ such that $$\begin{aligned} |S|\ge (1-2^{-k})\cdot 2^m, \end{aligned}$$ for $2^k\ge m$. Then for all $t_1,\dots, t_m\in\{0,1\}^m$, there exists $v\in \{0,1\}^m$ such that $\bigwedge_{i\in[m]} \left(t_i\oplus v\in S\right)$. Assume that there exists $t_1\dots t_m$ such that for all $v\in\{0,1\}^m$ there exists $i\in[m]$ with $t_i\oplus v\notin S$. This implies that there exists $i\in\{1,\dots,m\}$ such that, for at least $2^m/m$ elements $v\in\{0,1\}^m$, we have that $t_i\oplus v\notin S$. Then $$\begin{aligned} |S|<2^m-2^m/m=2^m(1-1/m)\le (1-2^{-k})\cdot 2^m, \end{aligned}$$ contradicting our assumption about the cardinality of $S$. We can now prove Proposition \[prop:qampi2\]. We prove it for ${\text{$\mathrm{BP}$}}\cdot{\text{$\mathrm{QMA}$}}$; the result for ${\text{$\mathrm{BP}$}}\cdot{\text{$\mathrm{QCMA}$}}$ follows in the same way. Let ${(\mathcal{A}_{\text{$\mathrm{YES}$}},\mathcal{A}_{\text{$\mathrm{NO}$}})}\in {\text{$\mathrm{BP}$}}\cdot{\text{$\mathrm{QMA}$}}$.
Then by definition there exists ${(\mathcal{B}_{\text{$\mathrm{YES}$}},\mathcal{B}_{\text{$\mathrm{NO}$}})}\in {\text{$\mathrm{QMA}$}}$ and polynomially bounded $p,r:\mathbb{N}\rightarrow\mathbb{N}$ such that if $|x\rangle\in{\mathcal{A}_{\text{$\mathrm{YES}$}}}$, $$\begin{aligned} \mathop{{\text{$\mathrm{Pr}$}}}_{y\sim \{0,1\}^{p(|x|)}}[|x\rangle\otimes |y\rangle\in {\mathcal{B}_{\text{$\mathrm{YES}$}}}]\ge 1-2^{-r(|x|)}. \end{aligned}$$ Set $S_x=\{y\in\{0,1\}^{p(|x|)}~:~|x\rangle\otimes |y\rangle\in {\mathcal{B}_{\text{$\mathrm{YES}$}}}\}$. Then $|x\rangle\in {\mathcal{A}_{\text{$\mathrm{YES}$}}}$ implies that $|S_x|\ge (1-2^{-r(|x|)})\cdot 2^{p(|x|)}$. By amplification of BP, we can choose $r$ to be whatever we want, so we choose it such that $2^{r(|x|)}\ge p(|x|)$. Then by Lemma \[lemma:quantSimAE\], $$\begin{aligned} \label{eq:longAE} x\in {\mathcal{A}_{\text{$\mathrm{YES}$}}}\implies \forall t_1\dots t_{p(|x|)}\in\{0,1\}^{p(|x|)}\exists v\in\{0,1\}^{p(|x|)}\exists i\in \{1\dots p(|x|)\}|x\rangle\otimes |t_i\oplus v\rangle\in {\mathcal{B}_{\text{$\mathrm{YES}$}}}. \end{aligned}$$ By definition of ${\text{$\mathrm{QMA}$}}$, for any bitstring $y$ such that $|x\rangle\otimes |y\rangle\in {\mathcal{B}_{\text{$\mathrm{YES}$}}}$, $$\begin{aligned} \exists |\psi\rangle \in \mathcal{H}_2^{\otimes s(|x|)}.{\text{$\mathrm{Pr}$}}[Q_{x\circ y} \text{ accepts } |\psi\rangle] \ge 2/3. \end{aligned}$$ From Lemma \[lemma:eahousekeeping\] we know we can collapse the classical $\exists$ quantifiers into the quantum one, obtaining $\forall_c\exists_q$. This means that Eq. (\[eq:longAE\]) is of the form required by a promise problem in $\text{cq-}\Pi_{2}$. Set $S'_x=\{y\in\{0,1\}^{p(|x|)}~:~|x\rangle\otimes |y\rangle\in {\mathcal{B}_{\text{$\mathrm{NO}$}}}\}$. For $x\in {\mathcal{A}_{\text{$\mathrm{NO}$}}}$, $|S'_x|\ge (1-2^{-r(|x|)})\cdot 2^{p(|x|)}$, for any $r$ via amplification. Then by Corollary \[lemma:quantSimEA\] we know that this can be written as a $\exists_c \forall_c$ statement about belonging to ${\mathcal{B}_{\text{$\mathrm{NO}$}}}$.
By definition the membership condition for ${\mathcal{B}_{\text{$\mathrm{NO}$}}}$ is a $\forall_q$ statement. Again, the classical and quantum $\forall$ statements can be collapsed so we obtain an $\exists_c\forall_q$ statement for the NO instances, meaning that ${(\mathcal{A}_{\text{$\mathrm{YES}$}},\mathcal{A}_{\text{$\mathrm{NO}$}})}\in\text{cq-}\Pi_{2}$. Acknowledgements ================ The authors thank Scott Aaronson, László Babai, Toby Cubitt, Aram Harrow, Will Matthews, Ashley Montanaro, Andrea Rocchetto, and Simone Severini for very helpful discussions. JL acknowledges financial support by the Engineering and Physical Sciences Research Council \[grant number EP/L015242/1\]. CEGG thanks UCL CSQ group for hospitality during the first semester of 2017, and acknowledges financial support from the Spanish MINECO (project MTM2014-54240-P) and MECD “José Castillejo” program (CAS16/00339). [9]{} R. Ladner, “On the Structure of Polynomial Time Reducibility”, *Journal of the ACM* 22(1): 155–171 (1975). R. B. Boppana, J. Håstad, “Does co-NP have short interactive proofs?”, *Information Processing Letters* 25(2) pp. 126-132 (1987). B.D. McKay, A. Piperno, “Practical Graph Isomorphism, II” *Journal of Symbolic Computation* 60, pp. 94-112 (2014). D. Aharonov, T. Naveh, “Quantum NP – A Survey”, *arXiv:quant-ph/0210077 (preprint)* (2002). L. Babai, “Monte-Carlo algorithms in graph isomorphism testing”, *Université de Montréal Technical Report*, DMS:79-10 pp. 42 (1979). “Group-Theoretic Algorithms and Graph Isomorphism”, *Lecture Notes in Computer Science* 136, Editor: C. M. Hoffmann (1982). E. M. Luks, “Isomorphism of graphs of bounded valence can be tested in polynomial time”, *Journal of Computer and System Sciences* 25(1) pp. 42–65 (1982). C. C. Sims, “Computation with permutation groups”, *Proceedings of the Second ACM Symposium on Symbolic and Algebraic Manipulation* pp. 23-28 (1971). M. Furst, J. Hopcroft, E.
Luks, “Polynomial-time algorithms for permutation groups”, *21st Annual Symposium on Foundations of Computer Science* (1980). E. M. Luks, “Permutation groups and polynomial-time computation”. *Groups and Computation, DIMACS Series in Discrete Mathematics and Theoretical Computer Science* 11 pp. 139–175 (1993). H. Buhrman, R. Cleve, J. Watrous, R. de Wolf. “Quantum fingerprinting”, *Physical Review Letters*, 87(16):167902 (2001). L. Babai, “Graph Isomorphism in Quasipolynomial Time” *arXiv:1512.03547 (preprint)* (2015). D. Gottesman, talk at International Conference on Group Theoretic Methods in Physics (1998), *arXiv:quant-ph/9807006*. R. Jain, Z. Ji, S. Upadhyay, J. Watrous, “QIP=PSPACE”, *arXiv:0907.4737 (preprint)* (2009). M. Nielsen, I. Chuang, “Quantum Computation and Quantum Information”, $10^{\text{th}}$ ed., *Cambridge University Press* (2011). J. Watrous, “Quantum Computational Complexity” *arXiv:0804.3401 (preprint)* (2008). A. W. Harrow, “The Church of the Symmetric Subspace” *arXiv:1308.6595 (preprint)* (2013). M. Hein, W. Dür, J. Eisert, R. Raussendorf, M. Van den Nest, H.-J. Briegel “Entanglement in Graph States and its Applications”, *Proceedings of the International School of Physics “Enrico Fermi”: Quantum Computers, Algorithms and Chaos* pp. 115-218 (2006). S. Aaronson, “BQP and the Polynomial Hierarchy”, *arXiv:0910.4698 (preprint)* (2009). G. Gutoski, P. Hayden, K. Milner, M. M. Wilde, “Quantum interactive proofs and the complexity of separability testing” *Theory of Computing*, 11(3), pp. 59-103 (2015). A. Montanaro, “Learning stabilizer states by Bell sampling”, *arXiv:1707.04012 (preprint)* (2017). D. Gottesman, “Stabilizer Codes and Quantum Error Correction”, *arXiv:quant-ph/9705052 (PhD thesis)* (1997). S. Aaronson, D. Gottesman, “Improved simulation of stabilizer circuits” *Physical Review A* 70, 052328 (2004). H. J. Garcia, I. L. Markov, A. W.
Cross, “Efficient Inner-product Algorithm for Stabilizer States”, *arXiv:1210.6646 (preprint)* (2012). T. Yamakami, “Quantum NP and a Quantum Hierarchy”, *Proceedings of the 2nd IFIP International Conference on Theoretical Computer Science*, pp. 323-336 (2002). S. Gharibian, J. Kempe, “Hardness of approximation for quantum problems”, *Quantum Information and Computation* 14 (5 and 6) pp. 517-540 (2014). J. Watrous, “Quantum statistical zero-knowledge”, *arXiv:quant-ph/0202111 (preprint)* (2002). S. Aaronson, “Quantum versus classical proofs and advice”, *arXiv:quant-ph/0604056 (preprint)* (2006). S. Goldwasser, M. Sipser, “Private coins versus public coins in interactive proof systems”, *Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing (STOC)* pp. 59-68 (1986). H. Kobayashi, F. Le Gall, H. Nishimura, “Generalized Quantum Arthur-Merlin Games”, *Proceedings of the 30th Conference on Computational Complexity (CCC2015)*, pp. 488-511 (2015). C. W. Helstrom, “Quantum Detection and Estimation Theory”, Academic Press, New York, (1976). R. Jain, S. Upadhyay, J. Watrous, “Two-message quantum interactive proofs are in PSPACE” *Proceedings of the 50th IEEE Conference on Foundations of Computer Science (FOCS)* pp. 534–543 (2009). C. Marriott, J. Watrous, “Quantum Arthur-Merlin Games”, *arXiv:cs/0506068 (preprint)* (2005). A. Harrow, C. Y. Lin, A. Montanaro, “Sequential measurements, disturbance and property testing”, *arXiv:1607.03236 (preprint)* (2016). S. Arora, B. Barak, “Computational Complexity: A Modern Approach” *Cambridge University Press* (2009). [^1]: This work was partially completed while the author was on a long term visit to Department of Computer Science, University College London.
--- abstract: | We analyze the performance of redundancy in a multi-type job and multi-type server system. We assume the job dispatcher is unaware of the servers’ capacities, and we set out to study under which circumstances redundancy improves the performance. With redundancy an arriving job dispatches redundant copies to all its compatible servers, and departs as soon as one of its copies completes service. As a benchmark comparison, we take the non-redundant system in which a job arrival is routed to only one randomly selected compatible server. Service times are generally distributed and all copies of a job are identical, i.e., have the same service requirement. In our first main result, we characterize the sufficient and necessary stability conditions of the redundancy system. This condition coincides with that of a system where each job type only dispatches copies into its least-loaded servers, and those copies need to be fully served. In our second result, we compare the stability regions of the system under redundancy to that of no redundancy. We show that if the servers’ capacities are sufficiently heterogeneous, the stability region under redundancy can be much larger than that without redundancy. We apply the general solution to particular classes of systems, including redundancy-$d$ and nested models, to derive simple conditions on the degree of heterogeneity required for redundancy to improve the stability. As such, our result is the first in showing that redundancy can improve the stability and hence performance of a system when copies are *non-i.i.d.*. author: - 'E. Anton $^{1,3}$ ,  U. Ayesta $^{1,2,3,4}$ ,  M. Jonckheere $^{5}$  and  I. M. Verloop $^{1,3}$' bibliography: - 'bibli.bib' title: Improving the performance of heterogeneous data centers through redundancy ---
--- abstract: 'We give a survey at an introductory level of old and recent results in the study of critical points of solutions of elliptic and parabolic partial differential equations. To keep the presentation simple, we mainly consider four exemplary boundary value problems: the Dirichlet problem for the Laplace’s equation; the torsional creep problem; the case of Dirichlet eigenfunctions for the Laplace’s equation; the initial-boundary value problem for the heat equation. We shall mostly address three issues: the estimation of the local size of the critical set; the dependence of the number of critical points on the boundary values and the geometry of the domain; the location of critical points in the domain.' address: 'Dipartimento di Matematica ed Informatica “U. Dini”, Universit\` a di Firenze, viale Morgagni 67/A, 50134 Firenze, Italy.' author: - 'R. Magnanini' title: | An introduction to the study of critical points\ of solutions of elliptic and parabolic equations --- Introduction ============ Let ${\Omega}$ be a domain in the Euclidean space ${\mathbb{R}}^N$, ${\Gamma}$ be its boundary and $u:{\Omega}\to {\mathbb{R}}$ be a differentiable function. A [*critical point*]{} of $u$ is a point in ${\Omega}$ at which the gradient ${\nabla}u$ of $u$ is the zero vector. The importance of critical points is evident. At an elementary level, they help us to visualize the graph of $u$, since they are some of its notable points (they are local maximum, minimum, or inflection/saddle points of $u$).
At a more sophisticated level, if we interpret $u$ and ${\nabla}u$ as a gravitational, electrostatic or velocity potential and its underlying field of force or flow, the critical points are the positions of equilibrium for the field of force or stagnation points for the flow and give information on the topology of the equipotential lines or of the curves of steepest descent (or stream lines) related to $u$. A merely differentiable function can be very complicated. For instance, Whitney [@Wh] constructed a non-constant function of class $C^1$ on the plane with a connected set of critical values (the images of critical points). If we allow enough smoothness, this is no longer possible as Morse-Sard’s lemma informs us: indeed, if $u$ is at least of class $C^N$, the set of its critical values must have zero Lebesgue measure and hence the regular values of $u$ must be dense in the image of $u$ (see [@AR] for a proof). When the function $u$ is the solution of some partial differential equation, the situation improves. In this survey, we shall consider the four archetypical equations: $${\Delta}u=0, \quad {\Delta}u=-1, \quad {\Delta}u+{\lambda}u=0, \quad u_t={\Delta}u,$$ that is the [*Laplace’s*]{} equation, the [*torsional creep*]{} equation, the [*eigenfunction*]{} equation and the [*heat*]{} equation. Some important differences between the first and the remaining three equations should be noticed at this point. One is that the critical points of harmonic functions — the solutions of the Laplace’s equation — are always “saddle points”, as is suggested by the maximum and minimum principles and the fact that ${\Delta}u$ is the sum of the eigenvalues of the hessian matrix ${\nabla}^2 u$. The other three equations instead admit solutions with maximum or minimum points.
Also, we know that the critical points of a non-constant harmonic function $u$ on an open set of ${\mathbb{R}}^2$ are isolated and can be assigned a sort of finite multiplicity, for they are the zeroes of the holomorphic function $f=u_x-i u_y$. By means of the theory of quasi-conformal mappings and generalized analytic functions, this result can be extended to solutions of the elliptic equation $$\label{elliptic} (a\,u_x+b\,u_y)_x+(b\,u_x+c\,u_y)_y+d\,u_x+e\,u_y=0$$ (with suitable smoothness assumptions on the coefficients) or even to weak solutions of an elliptic equation in divergence form, $$\label{elliptic2} (a\,u_x+b\,u_y)_x+(b\,u_x+c\,u_y)_y=0 \ \mbox{ in } \ {\Omega},$$ even allowing discontinuous coefficients. Instead, solutions of the other three equations can show curves of critical points in ${\mathbb{R}}^2$, as one can be persuaded by looking at the solution of the torsional creep equation in a circular annulus with zero boundary values. These discrepancies extend to any dimension $N\ge 2$, in the sense that it has been shown that the set of the critical points of a non-constant harmonic function (or of a solution of an elliptic equation with smooth coefficients modeled on the Laplace equation) has at most locally finite $(N-2)$-dimensional Hausdorff measure, while solutions of equations fashioned on the other three equations have at most locally finite $(N-1)$-dimensional Hausdorff measure. Further assumptions on solutions of a partial differential equation, such as their behaviour on the boundary and the shape of the boundary itself, can give more detailed information on the number and location of critical points.
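The identification of planar critical points with zeroes of the holomorphic function $f=u_x-iu_y$ can be illustrated numerically. The following toy example (our own, for $u=\operatorname{Re} z^3$) is only a sketch of this identification:

```python
def f(z):
    """f = u_x - i u_y for the harmonic function u = Re(z^3) = x^3 - 3xy^2.
    For harmonic u this f is holomorphic, and the critical points of u
    are exactly its zeroes."""
    x, y = z.real, z.imag
    ux = 3 * x**2 - 3 * y**2
    uy = -6 * x * y
    return complex(ux, -uy)

# Here f(z) = 3 z^2: the origin is the unique critical point of u,
# a zero of multiplicity two, i.e. an isolated degenerate saddle.
```

This matches the discussion above: the critical point is isolated and carries a finite multiplicity, namely its order as a zero of $f$.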
In these notes, we shall consider the case of harmonic functions with various boundary behaviors and the solutions $\tau$, $\phi$ and $h$ of the following three problems: $$\label{torsion} -{\Delta}\tau=1 \ \mbox{ in } \ {\Omega}, \quad \tau=0 \ \mbox{ on } \ {\Gamma};$$ $$\label{eigenfunction} {\Delta}\phi+{\lambda}\phi=0 \ \mbox{ in } \ {\Omega}, \quad \phi=0 \ \mbox{ on } \ {\Gamma};$$ $$\begin{aligned} &h_t={\Delta}h \ \mbox{ in } \ {\Omega}\times(0,\infty), \label{heat1}\\ &h=0 \ \mbox{ on } \ {\Gamma}\times(0,\infty), \quad h={\varphi}\ \mbox{ on } \ {\Omega}\times\{ 0\}, \label{heat2}\end{aligned}$$ where ${\varphi}$ is a given function. We will refer to , , -, as the [*torsional creep problem*]{}, the [*Dirichlet eigenvalue problem*]{}, and the [*initial-boundary value problem for the heat equation*]{}, respectively. A typical situation is that considered in Theorem \[th:am1b\]: a harmonic function $u$ on a planar domain ${\Omega}$ is given together with a vector field $\ell$ on ${\Gamma}$ of assigned topological degree $D$; the number of critical points in ${\Omega}$ then is bounded in terms of $D$, the Euler characteristic of ${\Omega}$ and the number of proper connected components of the set $\{ z\in{\Gamma}: \ell(z)\cdot{\nabla}u(z)>0\}$ (see Theorem \[th:am1b\] for the exact statement). We shall also see how this type of theorem has recently been extended to obtain a bound for the number of critical points of the Li-Tam Green’s function of a non-compact Riemanniann surface of finite type in terms of its genus and the number of its ends. Owing to the theory of quasi-conformal mappings, Theorem \[th:am1b\] can be extended to solutions of quite general elliptic equations and, thanks to the work of G. 
Alessandrini and co-authors, has found effective applications to the study of inverse problems that have as a common denominator the reconstruction of the coefficients of an elliptic equation in a domain from measurements, taken on the boundary, of a set of its solutions. A paradigmatic example is that of Electrical Impedance Tomography (EIT), in which a conductivity ${\gamma}$ is reconstructed, as the coefficient of the elliptic equation $${\mathop{\mathrm{div}}}({\gamma}{\nabla}u)=0 \ \mbox{ in } \ {\Omega},$$ from the so-called Neumann-to-Dirichlet (or Dirichlet-to-Neumann) operator on ${\Gamma}$. In physical terms, an electrical current (represented by the co-normal derivative ${\gamma}u_\nu$) is applied on ${\Gamma}$, generating a potential $u$ that is measured on ${\Gamma}$ within a certain error. One wants to reconstruct the conductivity ${\gamma}$ from some of these measurements. Roughly speaking, one has to solve for the unknown ${\gamma}$ the first order differential equation $${\nabla}u\cdot{\nabla}{\gamma}+({\Delta}u)\,{\gamma}=0 \ \mbox{ in } \ {\Omega},$$ once the information about $u$ has been extended from ${\Gamma}$ to ${\Omega}$. It is clear that such an equation is singular at the critical points of $u$. Thus, it is helpful to know [*a priori*]{} that ${\nabla}u$ does not vanish, and this can be done via (appropriate generalizations of) Theorem \[th:am1b\] by choosing suitable currents on ${\Gamma}$. The possible presence of maximum and/or minimum points for the solutions of \eqref{torsion}, \eqref{eigenfunction}, or \eqref{heat1}-\eqref{heat2} makes the search for an estimate of the number of critical points a difficult task (even in the planar case). In fact, the mere topological information only results in an estimate of the [*signed sum*]{} of the critical points, the sign depending on whether the relevant critical point is an extremal or a saddle point.
For example, for the solution of \eqref{torsion} or \eqref{eigenfunction}, we only know that the difference between the number of its (isolated) maximum and saddle points (minimum points are not allowed) must equal $\chi({\Omega})$, the [*Euler characteristic*]{} of ${\Omega}$ — a Morse-type theorem. Thus, further assumptions, such as geometric information on ${\Omega}$, are needed. More information is also necessary even if we consider the case of harmonic functions in dimension $N\ge 3$. To the author’s knowledge, results on the number of critical points of solutions of \eqref{torsion}, \eqref{eigenfunction}, or \eqref{heat1}-\eqref{heat2} reduce to the deduction that their solutions admit a [*unique*]{} critical point if ${\Omega}$ is [*convex*]{}. Moreover, the proof of such results is somewhat indirect: the solution is shown to be [*quasi-concave*]{} — indeed, log-concave for the cases of \eqref{eigenfunction} and \eqref{heat1}-\eqref{heat2}, and $1/2$-concave for the case of \eqref{torsion} — and then its analyticity completes the argument. Estimates of the number of critical points when the domain ${\Omega}$ has more complex geometries would be a significant advance. In this survey, we will propose and justify some conjectures. The problem of locating critical points is also an interesting issue. The first work on this subject dates back to Gauss [@Ga], who proved that the critical points of a complex polynomial are its multiple zeroes together with the equilibrium points of the gravitational field of force generated by particles placed at its zeroes, with masses proportional to the zeroes’ multiplicities (see Section \[sec:location\]). Later refinements are due to Jensen [@Je] and Lucas [@Lu], but the first treatises on this matter are Marden’s book [@Ma] and, primarily, Walsh’s monograph [@Wa], which collects most of the results on the number and location of critical points of complex polynomials and harmonic functions known at that date. In general dimension, even for harmonic functions, results are sporadic and rely on explicit formulae or symmetry arguments.
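Gauss's theorem can be checked on a small example (added here; the polynomial is chosen only for illustration):

```python
# Hypothetical example (not from the survey): for p(z) = (z-1)^2 (z+1),
# Gauss's theorem says the critical points are the multiple zero z = 1
# and the equilibrium points of the field sum_k m_k/(z - z_k), where the
# masses m_k are the multiplicities of the zeroes z_k.
import sympy as sp

z = sp.symbols('z')
p = (z - 1)**2 * (z + 1)

crit = set(sp.solve(sp.diff(p, z), z))               # critical points of p
equilibria = set(sp.solve(2/(z - 1) + 1/(z + 1), z)) # field equilibria

print(crit)         # the multiple zero 1 and the equilibrium point -1/3
print(equilibria)   # the single equilibrium point -1/3
```

Indeed $p'(z)=(z-1)(3z+1)$, so the critical points are $z=1$ (the double zero) and $z=-1/3$, the equilibrium point of masses $2$ at $z=1$ and $1$ at $z=-1$.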
Two well known questions in this context concern the location of the [*hot spot*]{} in a heat conductor — a hot spot is a point of (absolute or relative) maximum temperature in the conductor. The situation described by \eqref{heat1}-\eqref{heat2} corresponds to the case of a [*grounded*]{} conductor. By some asymptotic analysis, under appropriate assumptions on ${\varphi}$, one can show that the hot spots [*originate*]{} from the set of maximum points of the function $d_{\Omega}(x)$ — the distance of $x\in{\Omega}$ from ${\Gamma}$ — and tend to the maximum points of the unique positive solution of \eqref{eigenfunction}, as $t\to\infty$. In the case ${\Omega}$ is convex, we have only one hot spot, as already observed. In Section \[sec:location\], we will describe three techniques to locate it; some of them extend their validity to locate the maximum points of the solutions to \eqref{torsion} and \eqref{eigenfunction}. We will also give an account of what is known about convex conductors that admit a stationary hot spot (that is, a hot spot that does not move with time). The case in which the homogeneous Dirichlet boundary condition in \eqref{heat2} is replaced by the homogeneous Neumann condition $$\label{heat3} u_\nu=0 \ \mbox{ on } \ {\Gamma}\times(0,\infty)$$ has also been considered. This setting describes the evolution of temperature in an [*insulated*]{} conductor of given constant initial temperature and has been made popular by a conjecture of J. Rauch [@Ra], which would imply that the hot spot must tend to a boundary point. Even if we now know that it is false for a general domain, the conjecture holds true for certain planar convex domains, but it is still open for general convex domains. The remainder of the paper is divided into three sections that reflect the aforementioned features. In Section \[sec:harmonic\], we shall describe the local properties of critical points of harmonic functions or, more generally, of solutions of elliptic equations, that lead to estimates of the size of critical sets.
In Section \[sec:number\], we shall focus on bounds for the number of critical points that depend on the boundary behavior of the relevant solutions and/or the geometry of ${\Gamma}$. Finally, in Section \[sec:location\], we shall address the problem of locating the possible critical points. As customary for a survey, our presentation will stress ideas rather than proofs. This paper is dedicated with sincere gratitude to Giovanni Alessandrini — an inspiring mentor, a supportive colleague and a genuine friend — on the occasion of his $60^\mathrm{th}$ birthday. Much of the material presented here was either inspired by his ideas or actually carried out in his research with the author.

The size of the critical set of a harmonic function {#sec:harmonic}
===================================================

A harmonic function in a domain ${\Omega}$ is a solution of Laplace’s equation $${\Delta}u=u_{x_1 x_1}+\cdots+u_{x_N x_N}=0 \ \mbox{ in } \ {\Omega}.$$ It is well known that harmonic functions are analytic, so there is no difficulty in defining their critical points or the [*critical set*]{} $${\mathcal{C}}(u)=\{x\in{\Omega}: {\nabla}u(x)=0\}.$$ Before getting into the heart of the matter, we present a relevant example.

Harmonic polynomials
--------------------

In dimension two, we have a powerful tool since we know that a harmonic function is (locally) the real or imaginary part of a holomorphic function. This remark provides our imagination with a rich set of examples on which we can speculate. For instance, the harmonic function $$u={\mathop{\mathrm{Re}}}(z^n)={\mathop{\mathrm{Re}}}\left[(x+i y)^n\right], \ n\in{\mathbb{N}},$$ already gives some insight on the properties of harmonic functions we are interested in. In fact, we have that $$u_x-iu_y=n z^{n-1};$$ thus, $u$ has only one distinct critical point, $z=0$, but it is more convenient to say that $u$ has $n-1$ critical points at $z=0$ or that $z=0$ is a critical point with [*multiplicity*]{} $m=n-1$.
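The two displayed identities can be verified symbolically; the following snippet (added here, with $n=6$ fixed for concreteness) checks that $u={\mathop{\mathrm{Re}}}(z^n)$ is harmonic and that its complex gradient equals $nz^{n-1}$:

```python
# Added check: u = Re(z^n) is harmonic and u_x - i u_y = n z^(n-1),
# so z = 0 is a critical point of multiplicity n - 1. We fix n = 6.
import sympy as sp

x, y = sp.symbols('x y', real=True)
n = 6
z = x + sp.I*y
u = sp.expand(sp.re(z**n))

lap = sp.expand(sp.diff(u, x, 2) + sp.diff(u, y, 2))
g = sp.diff(u, x) - sp.I*sp.diff(u, y)          # complex gradient of u

print(lap)                                      # 0
print(sp.expand(g - n*z**(n - 1)))              # 0
```

The same computation goes through for any fixed $n$, since $u_x-iu_y$ is just the derivative of the holomorphic function $z^n$.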
By virtue of this choice, we can give a topological meaning to $m$. To see that, it is advantageous to represent $u$ in polar coordinates: $$u=r^n \cos(n{\theta});$$ here, $r=|z|$ and ${\theta}$ is the principal branch of $\arg z$, that is we are assuming that $-\pi\le{\theta}<\pi$. Thus, the topological meaning of $m$ is manifest when we look at the level “curve” $\{ z: u(z)=u(0)\}$: it is made of $m+1=n$ straight lines passing through the critical point $z=0$; these lines divide the plane into $2n$ [*cones*]{} (angles), each of amplitude $\pi/n$, and the sign of $u$ changes across those lines (see Fig. 2.1). One can also show that the signed angle ${\omega}$ formed by ${\nabla}u$ and the direction of the positive real semi-axis, since it equals $-(n-1) \arg z$, increases by $2\pi m$ while $z$ makes a complete loop clockwise around $z=0$; thus, $m$ is a sort of [*winding number*]{} for ${\nabla}u$.

\[fig:power\] ![Level set diagram of $u=r^6\cos(6{\theta})$ at the critical point $z=0$; $u$ changes sign from positive to negative at dashed lines and from negative to positive at solid lines.](Figure1.pdf "fig:")

The critical set of a homogeneous polynomial $P: {\mathbb{R}}^N\to {\mathbb{R}}$ is a cone in ${\mathbb{R}}^N$. Moreover, if $P$ is also harmonic (and non-constant) one can show that $$\label{dimension} \mbox{dimension of ${\mathcal{C}}(P)$} \le N-2.$$

Harmonic functions {#sub:harmonic}
------------------

If $N=2$ and $u$ is any harmonic function, the picture is similar to that outlined in the example. In fact, we can again consider the “complex gradient” of $u$, $$g=u_x-iu_y,$$ and observe that $g$ is [*holomorphic*]{} in ${\Omega}$, since ${\partial}_{{\overline}{z}}g=0$, and hence analytic. Thus, the zeroes of $g$ (and hence the critical points of $u$) in ${\Omega}$ are [*isolated*]{} and have [*finite multiplicity*]{}.
If $z_0$ is a zero with multiplicity $m$ of $g$, then we can write that $$g(z)=(z-z_0)^m h(z),$$ where $h$ is holomorphic in ${\Omega}$ and $h(z_0)\not=0$. On the other hand, we also know that $u$ is locally the real part of a holomorphic function $f$ and hence, since $f'=g$, by an obvious normalization, it is not difficult to infer that $$f(z)=\frac1{n}\,(z-z_0)^{n}k(z),$$ where $n=m+1$ and $k$ is holomorphic and $k(z_0)=h(z_0)\not=0$. Passing to polar coordinates by $z=z_0+r e^{i{\theta}}$ tells us that $$f(z_0+r e^{i{\theta}})=\frac{|h(z_0)|}{n}\,r^{n} e^{i(n{\theta}+{\theta}_0)}+O(r^{n+1}) \ \mbox{ as } \ r\to 0,$$ where ${\theta}_0=\arg h(z_0)$. Thus, we have that $$u=\frac{|h(z_0)|}{n}\,r^{n} \cos(n{\theta}+{\theta}_0)+O(r^{n+1}) \ \mbox{ as } \ r\to 0,$$ and hence, modulo a rotation by the angle ${\theta}_0$, in a small neighborhood of $z_0$, we can say that the critical level curve $\{z: u(z)=u(z_0)\}$ is very similar to that described in the example with $0$ replaced by $z_0$. In particular, it is made of $n$ simple curves passing through $z_0$ and any two adjacent curves meet at $z_0$ with an angle that equals $\pi/n$ (see Fig. 2.2). \[fig:harmonic\] ![Level set diagram of a harmonic function at a critical point with multiplicity $m=5$. The curves meet with equal angles at the critical point.](Figure2.pdf "fig:") If $N\ge 3$, similarly, a harmonic function can be approximated near a zero $0$ by a homogeneous harmonic polynomial of some degree $n$: $$\label{approximation} u(x)= P_n(x)+O(|x|^{n+1}) \ \mbox{ as } \ |x|\to 0.$$ However, the structure of the set ${\mathcal{C}}(u)$ depends on whether $0$ is an isolated critical point of $P_n$ or not. 
In fact, if $0$ is not isolated, then ${\mathcal{C}}(u)$ and ${\mathcal{C}}(P_n)$ need not be diffeomorphic, as shown by the harmonic function $$u(x,y,z)=x^2-y^2+(x^2+y^2)\,z-\frac23\,z^3, \quad (x,y,z)\in{\mathbb{R}}^3.$$ Indeed, if $P_2(x,y,z)=x^2-y^2$, ${\mathcal{C}}(P_2)$ is the $z$-axis, while ${\mathcal{C}}(u)$ is made of $5$ isolated points ([@Pe]).

Elliptic equations in the plane {#subsec:elliptic}
-------------------------------

These arguments can be repeated with some necessary modifications for solutions of uniformly elliptic equations of the type \eqref{elliptic}, where the variable coefficients $a, b, c$ are Lipschitz continuous and $d, e$ are bounded measurable on ${\Omega}$ and the uniform ellipticity is assumed to take the following form: $$ac-b^2=1 \ \mbox{ in } \ {\Omega}.$$ Now, the classical theory of [*quasi-conformal*]{} mappings comes to our aid (see [@Be; @Ve] and also [@AM1; @AM2]). By the [*uniformization theorem*]{} (see [@Ve]), there exists a quasi-conformal mapping ${\zeta}(z)=\xi(z)+i\,\eta(z)$, satisfying the equation $${\zeta}_{{\overline}{z}}=\kappa(z)\,{\zeta}_z \ \mbox{ with } \ |\kappa(z)|=\frac{a+c-2}{a+c+2}<1,$$ such that the function $U$ defined by $U({\zeta})=u(z)$ satisfies the equation $${\Delta}U+P\,U_\xi+Q\,U_\eta=0 \ \mbox{ in } \ {\zeta}({\Omega}),$$ where $P$ and $Q$ are real-valued functions depending on the coefficients in \eqref{elliptic} and essentially bounded on ${\zeta}({\Omega})$. Notice that, since the composition of ${\zeta}$ with a conformal mapping is still quasi-conformal, if it is convenient, by the Riemann mapping theorem, we can choose ${\zeta}({\Omega})$ to be the unit disk ${\mathbb{D}}$. By setting $G=U_\xi-i\,U_\eta$, simple computations give that $$G_{{\overline}{{\zeta}}}=R\,G+{\overline}{R}\,{\overline}{G} \ \mbox{ in } {\mathbb{D}},$$ where $R=(P+i\,Q)/4$ is essentially bounded.
This equation tells us that $G$ is a [*pseudo-analytic*]{} function for which the following [*similarity principle*]{} holds (see [@Ve]): there exist two functions, $H({\zeta})$ holomorphic in ${\mathbb{D}}$ and $s({\zeta})$ Hölder continuous on the whole ${\mathbb{C}}$, such that $$\label{pseudo-analytic} G({\zeta})=e^{s({\zeta})} H({\zeta}) \ \mbox{ for } \ {\zeta}\in{\mathbb{D}}.$$ Owing to \eqref{pseudo-analytic}, it is clear that the critical points of $u$, by means of the mapping ${\zeta}(z)$, correspond to the zeroes of $G({\zeta})$ or, which is the same, of $H({\zeta})$, and hence we can claim that they are isolated and have finite multiplicity. This analysis can be further extended if the coefficients $d$ and $e$ are zero, that is, for the solutions of \eqref{elliptic2}. In this case, we can even assume the coefficients $a, b, c$ to be merely essentially bounded on ${\Omega}$, provided that we agree that $u$ is a non-constant [*weak*]{} solution of \eqref{elliptic2}. It is well known that, with these assumptions, solutions of \eqref{elliptic2} are in general only Hölder continuous and the usual definition of critical point is no longer possible. However, in [@AM2] we got around this difficulty by introducing a different notion of critical point that is still consistent with the topological structure of the level curves of $u$ at its critical values. To see this, we look for a surrogate of the harmonic conjugate of $u$. In fact, \eqref{elliptic2} implies that the $1$-form $${\omega}=-(b\,u_x+c\,u_y)\,dx+(a\,u_x+b\,u_y)\,dy$$ is closed (in the weak sense) in ${\Omega}$ and hence, thanks to the theory developed in [@BN], we can find a so-called [*stream function*]{} $v\in W^{1,2}({\Omega})$ whose differential $dv$ equals ${\omega}$, in analogy with the theory of gas dynamics (see [@BS]).
Thus, in analogy with what we have done in Subsection \[sub:harmonic\], we find out that the function $f=u+i\,v$ satisfies the equation $$\label{quasi-regular} f_{{\overline}{z}}=\mu\,f_z$$ where $$\mu=\frac{c-a-2ib}{2+a+c} \ \mbox{ and } \ |\mu|\le\frac{1-{\lambda}}{1+{\lambda}}<1 \ \mbox{ in } \ {\Omega},$$ and ${\lambda}>0$ is a lower bound for the smaller eigenvalue of the matrix of the coefficients: $$\left( \begin{array}{cc} a & b\\ b & c \end{array} \right).$$ The fact that $f\in W^{1,2}({\Omega},{\mathbb{C}})$ implies that $f$ is a [*quasi-regular*]{} mapping that can be factored as $$f=F\circ\chi \ \mbox{ in } \ {\Omega},$$ where $\chi:{\Omega}\to{\mathbb{D}}$ is a [*quasi-conformal homeomorphism*]{} and $F$ is holomorphic in ${\mathbb{D}}$ (see [@LV]). Therefore, the following [*representation formula*]{} holds: $$u=U(\chi(z)) \ \mbox{ for } \ z\in{\Omega},$$ where $U$ is the real part of $F$.

\[fig:elliptic\] ![Level set diagram of a solution of an elliptic equation with discontinuous coefficients at a geometric critical point with multiplicity $m=5$. At that point, any two consecutive curves meet with positive angles, possibly not equal to one another.](Figure3.pdf "fig:")

This formula informs us that the level curves of $u$ can possibly be distorted by the homeomorphism $\chi$, but preserve the topological structure of those of a harmonic function (see Fig. 2.3). This remark gives grounds to the definition introduced in [@AM2]: $z_0\in{\Omega}$ is a [*geometric critical point*]{} of $u$ if the gradient of $U$ vanishes at $\chi(z_0)\in{\mathbb{D}}$. In particular, geometric critical points are isolated and can be classified by a sort of multiplicity.
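The closedness of the $1$-form ${\omega}$ used above to define the stream function can be verified by a routine symbolic computation (added here as a sketch; the coefficients and the solution are generic smooth functions):

```python
# Added check: for omega = P dx + Q dy with P = -(b u_x + c u_y) and
# Q = a u_x + b u_y, one has Q_x - P_y = (a u_x + b u_y)_x + (b u_x + c u_y)_y,
# i.e. omega is closed exactly when u solves the divergence-form equation.
import sympy as sp

x, y = sp.symbols('x y', real=True)
a, b, c, u = (sp.Function(name)(x, y) for name in 'abcu')

P = -(b*u.diff(x) + c*u.diff(y))
Q = a*u.diff(x) + b*u.diff(y)

d_omega = Q.diff(x) - P.diff(y)
lhs = (a*u.diff(x) + b*u.diff(y)).diff(x) + (b*u.diff(x) + c*u.diff(y)).diff(y)

print(sp.simplify(d_omega - lhs))   # 0
```

In other words, $d{\omega}$ is exactly the left-hand side of the divergence-form equation, so a stream function exists whenever $u$ is a (weak) solution.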
Quasilinear elliptic equations in the plane
-------------------------------------------

A similar local analysis can be replicated when $N=2$ for quasilinear equations of type $${\mathop{\mathrm{div}}}\{A(|{\nabla}u|)\,{\nabla}u\}=0,$$ where $A(s)>0$ and $0<{\lambda}\le 1+s\,A'(s)/A(s)\le{\Lambda}$ for every $s>0$ and some constants ${\lambda}$ and ${\Lambda}$.

\[fig:degenerate\] ![Level set diagram of a solution of a degenerate quasilinear elliptic equation with $B(s)=\sqrt{1+s^2}$ at a critical value.](Figure4.pdf "fig:")

These equations can even be degenerate, such as the $p$-Laplace equation with $1<p<\infty$ (see [@AR]). It is worth mentioning that the case in which $A(s)=B(s)/s$, where $B$ is increasing, with $B(0)>0$, superlinear and growing polynomially at infinity (e.g. $B(s)=\sqrt{1+s^2}$), has also been studied in [@CM]. In this case the function $1+s\,A'(s)/A(s)$ vanishes at $s=0$ and it turns out that the critical points of a solution $u$ (if any) are [*never*]{} isolated (Fig. 2.4).

The case $\bf N\ge 3$ {#sub:circles}
---------------------

As already observed, critical points of harmonic functions in dimension $N\ge 3$ may not be isolated. Besides the example given in Section \[sub:harmonic\], another concrete example is given by the function $$u(x,y,z)=J_0\Bigl(\sqrt{x^2+y^2}\Bigr)\,\cosh(z), \quad (x,y,z)\in{\mathbb{R}}^3,$$ where $J_0$ is the Bessel function of the first kind of order zero: the gradient of $u$ vanishes at the origin and on the circles in the plane $z=0$ whose radii are the positive zeroes of the Bessel function $J_1=-J_0'$. It is clear that a region ${\Omega}$ can be found such that ${\mathcal{C}}(u)\cap{\Omega}$ is a [*bounded continuum*]{}. Nevertheless, it can be proved that ${\mathcal{C}}(u)$ always has locally finite $(N-2)$-dimensional Hausdorff measure ${{\mathcal H}}^{N-2}$. A nice argument to see this was suggested to me by D. Peralta-Salas [@Pe].
If $u$ is a non-constant harmonic function and we suppose that ${\mathcal{C}}(u)$ has dimension $N-1$, then the general theory of analytic sets implies that there is an open and dense subset of ${\mathcal{C}}(u)$ which is an analytic sub-manifold (see [@KP]). Since its gradient vanishes there, $u$ is constant on each connected component of that sub-manifold. Thus, by the Cauchy-Kowalewski theorem, $u$ must be constant in a neighborhood of ${\mathcal{C}}(u)$, and hence everywhere by unique continuation. Of course, this argument would also work for solutions of an elliptic equation of type $$\label{elliptic-N} \sum_{i, j=1}^N a_{ij}(x)\,u_{x_i x_j}+\sum_{j=1}^N b_j(x)\,u_{x_j}=0 \ \mbox{ in } \ {\Omega},$$ with analytic coefficients. When the coefficients $a_{ij}, b_j$ in \eqref{elliptic-N} are of class $C^\infty({\Omega})$, the result has been proved in [@HHHN] (see also [@Ha]): if $u$ is a non-constant solution of \eqref{elliptic-N}, then for any compact subset $K$ of ${\Omega}$ it holds that $$\label{hausdorff} {{\mathcal H}}^{N-2}({\mathcal{C}}(u)\cap K)<\infty.$$ The proof is based on an estimate similar to \eqref{dimension} for the complex dimension of the singular set in ${\mathbb{C}}^N$ of the complexification of the polynomial $P_n$ in the approximation \eqref{approximation}. The same result does not hold for solutions of equation $$\label{elliptic-complete} \sum_{i, j=1}^N a_{ij}(x)\,u_{x_i x_j}+\sum_{j=1}^N b_j(x)\,u_{x_j}+c(x)\,u=0 \ \mbox{ in } \ {\Omega},$$ with $c\in C^\infty({\Omega})$. For instance, the gradient of the first Laplace-Dirichlet eigenfunction of a spherical annulus vanishes exactly on an $(N-1)$-dimensional sphere. A more general counterexample is the following (see [@HHHN Remark p.
362]): let $v$ be of class $C^\infty$ and with non-vanishing gradient in the unit ball $B$ in ${\mathbb{R}}^N$; the function $u=1+v^2$ satisfies the equation $${\Delta}u-c u=0 \ \mbox{ with } \ c =\frac{{\Delta}v^2}{1+v^2}\in C^\infty(B);$$ we have that ${\mathcal{C}}(u)=\{ x\in B: v(x)=0\}$ and it has been proved that any closed subset of ${\mathbb{R}}^N$ can be the zero set of a function of class $C^\infty$ (see [@To]). However, once \eqref{hausdorff} is settled, it is rather easy to show that the [*singular set*]{} $${{\mathcal S}}(u)={\mathcal{C}}(u)\cap u^{-1}(0)=\{x\in{\Omega}: u(x)=0, {\nabla}u(x)=0\}$$ of a non-constant solution of \eqref{elliptic-complete} also has locally finite $(N-2)$-dimensional Hausdorff measure [@HHHN Corollary 1.1]. This can be done by a trick: around any point in ${\Omega}$ there always exists a [*positive*]{} solution $u_0$ of \eqref{elliptic-complete}, and it turns out that the function $w=u/u_0$ is a solution of an equation like \eqref{elliptic-N} and that ${{\mathcal S}}(u)\subseteq{\mathcal{C}}(w)$. In particular, the set of critical points on the nodal line of an eigenfunction of the Laplace operator has locally finite $(N-2)$-dimensional Hausdorff measure. Nevertheless, for a solution of \eqref{elliptic-complete} the set ${{\mathcal S}}(u)$ can be very complicated, as a simple example in [@HHHN p. 361] shows: the function $u(x,y,z)=xy+f(z)^2$, where $f$ is a smooth function with $|f f''|+(f')^2<1/4$ that vanishes exactly on an arbitrary given closed subset $K$ of ${\mathbb{R}}$, is a solution of $$u_{xx}+u_{yy}+u_{zz}-(f^2)''(z)\,u_{xy}=0 \ \mbox{ and } \ {{\mathcal S}}(u)=\{(0,0)\}\times K.$$ Heuristically, as in the $2$-dimensional case, the proof of \eqref{hausdorff} is essentially based on the observation that, by Taylor’s expansion, a harmonic function $u$ can be approximated near any of its zeroes by a homogeneous harmonic polynomial $P_m(x_1,\dots, x_N)$ of degree $m\ge 1$.
Technically, the authors use the fact that the complex dimension of the critical set in ${\mathbb{C}}^N$ of the complexified polynomial $P_m(z_1,\dots, z_N)$ is bounded by $N-2$. A $C^\infty$-perturbation argument and an inequality from geometric measure theory then yield that, near a zero of $u$, the ${{\mathcal H}}^{N-2}$-measure of ${\mathcal{C}}(u)$ can be bounded in terms of $N$ and $m$. The extension of these arguments to the case of a solution of \eqref{elliptic-N} is then straightforward. Recently, in [@CNV], \eqref{hausdorff} has been extended to the case of solutions of elliptic equations in divergence form, $$\sum_{i, j=1}^N \{a_{ij}(x)\,u_{x_i}\}_{x_j}+\sum_{j=1}^N b_j(x)\,u_{x_j}=0,$$ where the coefficients $a_{ij}(x)$ and $b_j(x)$ are assumed to be Lipschitz continuous and essentially bounded, respectively.

The number of critical points {#sec:number}
=============================

A more detailed description of the critical set ${\mathcal{C}}(u)$ of a harmonic function $u$ can be obtained if we assume some information on its behavior on the boundary ${\Gamma}$ of ${\Omega}$. While in Section \[sec:harmonic\] the focus was on a qualitative description of the set ${\mathcal{C}}(u)$, here we are concerned with establishing bounds on the number of critical points.

Counting the critical points of a harmonic function in the plane
----------------------------------------------------------------

An exact counting formula is given by the following result.

\[th:am1\] Let ${\Omega}$ be a bounded domain in the plane and let $${\Gamma}=\bigcup_{j=1}^J {\Gamma}_j,$$ where ${\Gamma}_j, j=1,\dots, J$ are simple closed curves of class $C^{1,{\alpha}}$. Consider a harmonic function $u\in C^1({\overline}{{\Omega}})\cap C^2({\Omega})$ that satisfies the Dirichlet boundary condition $$\label{condenser} u=a_j \ \mbox{ on } \ {\Gamma}_j, \,j=1,\dots, J,$$ where $a_1, \dots, a_J$ are given real numbers, not all equal.
Then $u$ has in ${\overline}{{\Omega}}$ a finite number of critical points $z_1,\dots, z_K$; if $m(z_1), \dots,$ $m(z_K)$ denote their multiplicities, then the following identity holds: $$\label{identity} \sum_{z_k\in{\Omega}}m(z_k)+\frac12\,\sum_{z_k\in{\Gamma}}m(z_k)=J-2.$$

\[fig:capacitor\] ![An illustration of Theorem \[th:am1\]: the domain ${\Omega}$ has $3$ holes; $u$ has exactly $2$ critical points; dashed and dotted are the level curves at critical values.](Figure5.pdf "fig:")

Thanks to the analysis presented in Subsection \[subsec:elliptic\], this theorem still holds if we replace the Laplace equation by the general elliptic equation \eqref{elliptic}. In fact, modulo a suitable change of variables, we can use \eqref{pseudo-analytic} with ${\mathop{\mathrm{Im}}}(s)=0$ on the boundary. The function considered in Theorem \[th:am1\] can be interpreted in physical terms as the potential in an electrical capacitor and hence its critical points are the points of equilibrium of the electrical field (Fig. 3.1). The proof of Theorem \[th:am1\] relies on the fact that the critical points of $u$ are the zeroes of the holomorphic function $f=u_x-i\,u_y$ and hence they can be counted with their multiplicities by applying the classical [*argument principle*]{} to $f$, with some necessary modifications. The important remark is that, since the boundary components are level curves for $u$, the gradient of $u$ is parallel on them to the (exterior) unit normal $\nu$ to the boundary, and hence $\arg f=-\arg\nu$.
Thus, the situation is clear if $u$ does not have critical points on ${\Gamma}$: the argument principle gives at once that $$\begin{gathered} \sum_{z_k\in{\Omega}}m(z_k)=\frac1{2\pi i}\int_{+{\Gamma}}\frac{f'(z)}{f(z)}\,dz\\ =\frac1{2\pi}\,\operatorname*{Incr}(\arg f,+{\Gamma})= \frac1{2\pi}\,\operatorname*{Incr}(-\arg \nu,+{\Gamma})= -[1- (J-1)]=J-2, \end{gathered}$$ where by $\operatorname*{Incr}(\cdot,+{\gamma})$ we intend the [*increment*]{} of an angle on an oriented curve $+{\gamma}$ and by $+{\Gamma}$ we mean that ${\Gamma}$ is trodden in such a way that ${\Omega}$ is on the left-hand side. If ${\Gamma}$ contains critical points, we must first prove that they are also isolated. This is done by observing that, if $z_0$ is a critical point belonging to some component ${\Gamma}_j$, since $u$ is constant on ${\Gamma}_j$, by the [*Schwarz reflection principle*]{} (modulo a conformal transformation of ${\Omega}$), $u$ can be extended to a function $\widetilde{u}$ which is harmonic in a whole neighborhood of $z_0$. Thus, $z_0$ is a zero of the holomorphic function $\widetilde{f}=\widetilde{u}_x-i\,\widetilde{u}_y$ and hence is isolated and with finite multiplicity. Moreover, the increment of $\arg\widetilde{f}$ on an oriented closed simple curve $+{\gamma}$ around $z_0$ is exactly twice as much as that of $\arg f$ on the part of $+{\gamma}$ inside ${\Omega}$. This explains the second summand in \eqref{identity}. Notice that condition \eqref{condenser} can be rewritten as $$u_\tau=0 \ \mbox{ on } \ {\Gamma},$$ where $\tau:{\Gamma}\to{\mathbb{S}}^1$ is the [*tangential*]{} unit vector field on ${\Gamma}$. We cannot hope to obtain an identity such as \eqref{identity} if $u_\tau$ does not vanish identically. However, a bound for the number of critical points of a harmonic function (or of a solution of \eqref{elliptic}) can be derived in a quite general setting.
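The argument-principle computation above can be illustrated numerically (an added sketch, not in the original): for the capacity potential $u=\log|z|$ of the annulus $1<|z|<2$ we have $J=2$, no critical points, and the total increment of $\arg f$ along $+{\Gamma}$ must equal $2\pi(J-2)=0$:

```python
# Added numerical check of sum m(z_k) = J - 2 on the annulus 1 < |z| < 2
# with u = log|z|, whose complex gradient is f = u_x - i u_y = 1/z.
import numpy as np

t = np.linspace(0.0, 2*np.pi, 2001)
outer = 2.0*np.exp(1j*t)     # |z| = 2, counterclockwise (Omega on the left)
inner = np.exp(-1j*t)        # |z| = 1, clockwise (Omega on the left)

def winding(curve):
    """Increment of arg f along the oriented curve, divided by 2*pi."""
    f = 1.0/curve
    return np.diff(np.unwrap(np.angle(f))).sum()/(2*np.pi)

total = winding(outer) + winding(inner)
print(round(total))          # 0, i.e. J - 2 with J = 2: no critical points
```

The outer circle contributes $-1$ and the inner one $+1$, matching the term $-[1-(J-1)]$ in the display above.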
In what follows, we assume that ${\Omega}$ is as in Theorem \[th:am1\] and that $\ell:{\Gamma}\to{\mathbb{S}}^1$ denotes a (unitary) vector field of class $C^1({\Gamma}, {\mathbb{S}}^1)$ of given topological degree $D$, which can be defined as $$\label{degree} 2\pi\,D=\operatorname*{Incr}(\arg(\ell),+{\Gamma}).$$ Also, we will use the following definitions: (i) if $({{\mathcal J}}^+,{{\mathcal J}}^-)$ is a decomposition of ${\Gamma}$ into two disjoint subsets such that $u_\ell\ge 0$ on ${{\mathcal J}}^+$ and $u_\ell\le 0$ on ${{\mathcal J}}^-$, we denote by $M({{\mathcal J}}^+)$ the number of connected components of ${{\mathcal J}}^+$ which are [*proper subsets*]{} of some component ${\Gamma}_j$ of ${\Gamma}$ and set: $$M=\min\{ M({{\mathcal J}}^+): ({{\mathcal J}}^+, {{\mathcal J}}^-) \mbox{ decomposes } {\Gamma}\};$$ (ii) if ${{\mathcal I}}^\pm=\{z\in{\Gamma}: \pm\,u_\ell(z)>0\}$, by $M^\pm$ we denote the number of connected components of ${{\mathcal I}}^\pm$ which are [*proper subsets*]{} of some component ${\Gamma}_j$ of ${\Gamma}$. Notice that in (i) the definition of $M$ does not change if we replace ${{\mathcal J}}^+$ by ${{\mathcal J}}^-$.

\[th:am1b\] Let $u\in C^1({\overline}{{\Omega}})\cap C^2({\Omega})$ be harmonic in ${\Omega}$ and denote by $m(z_j)$ the multiplicity of a zero $z_j$ of $f=u_x-i\,u_y$. (a) If $M$ is finite and $u$ has no critical point on ${\Gamma}$, then $$\sum_{z_j\in{\Omega}}m(z_j)\le M-D;$$ (b) if $M^++M^-$ is finite, then $$\sum_{z_j\in{\Omega}}m(z_j)\le \left[\frac{M^++M^-}{2}\right]-D,$$ where $[x]$ is the greatest integer $\le x$.

This theorem is clearly less sharp than Theorem \[th:am1\] since, in that setting, it does not give information about critical points on the boundary. However, it gives the same information on the number of interior critical points, since in the setting of Theorem \[th:am1\] the degree of the field $\tau$ on $+{\Gamma}$ equals $2-J$ and $M=0$.

\[fig:oblique\] ![An illustration of Theorem \[th:am1b\].
Here, $M=M^+=4$; $M^-=4$; $D=-2$ if $\ell=\nu$ or $\tau$; $D=1$ if $\ell=z/|z|$ and the origin is in ${\Omega}$; $D=0$ if $\ell=(1,0)$ or $(0,1)$. ](Figure6.pdf "fig:")

The possibility of choosing the vector field $\ell$ arbitrarily makes Theorem \[th:am1b\] a very flexible tool: for instance, the number of critical points in ${\Omega}$ can be estimated from information on the tangential, normal, co-normal, partial, or radial (with respect to some origin) derivatives (see Fig. 3.2). As an illustration, it says that in a domain topologically equivalent to a disk, in order to have $n$ interior critical points the normal (or tangential, or co-normal) derivative of a harmonic function must change sign at least $n+1$ times and a partial derivative at least $n$ times. Thus, Theorem \[th:am1b\] helps to choose Neumann data that ensure the absence of critical points in ${\Omega}$. For this reason, in its general form for elliptic operators, it has been useful in the study of EIT and other similar inverse problems. We give a sketch of the proof of (a) of Theorem \[th:am1b\], which hinges on the simple fact that, if we set ${\theta}=\arg(\ell)$ and ${\omega}=\arg(u_x-i\,u_y)$, then $$u_\ell=\ell\cdot{\nabla}u=|{\nabla}u|\,\cos({\theta}+{\omega}).$$ Hence, if $({{\mathcal J}}^+, {{\mathcal J}}^-)$ is a minimizing decomposition of ${\Gamma}$ as in (i), then $$|{\omega}+{\theta}|\le\frac{\pi}{2} \ \mbox{ on } \ {{\mathcal J}}^+ \ \mbox{ and } \ |{\omega}+{\theta}-\pi|\le\frac{\pi}{2} \ \mbox{ on } \ {{\mathcal J}}^-.$$ Thus, two occurrences must be checked. If a component ${\Gamma}_j$ is contained in ${{\mathcal J}}^+$ or ${{\mathcal J}}^-$, then $$\left|\frac1{2\pi}\,\operatorname*{Incr}({\omega}+{\theta},+{\Gamma}_j)\right|\le\frac12,$$ which implies that ${\omega}$ and $-{\theta}$ must have the same increment, the left-hand side being an integer.
If ${\Gamma}_j$ contains points of both ${{\mathcal J}}^+$ and ${{\mathcal J}}^-$, instead, if ${\sigma}^+\subset{{\mathcal J}}^+$ and ${\sigma}^-\subset{{\mathcal J}}^-$ are two consecutive components on ${\Gamma}_j$, then $$\frac1{2\pi}\,\operatorname*{Incr}({\omega}+{\theta},+({\sigma}^+\cup{\sigma}^-))\le 1.$$ Therefore, if $M_j$ is the number of connected components of ${{\mathcal J}}^+\cap{\Gamma}_j$ (which equals that of ${{\mathcal J}}^-\cap{\Gamma}_j$), then $$\frac1{2\pi}\,\operatorname*{Incr}({\omega}+{\theta},+{\Gamma}_j)\le M_j,$$ and hence $$\begin{gathered} \sum_{z_k\in{\Omega}}m(z_k)=\frac1{2\pi}\,\operatorname*{Incr}({\omega},+{\Gamma})= \frac1{2\pi}\,\operatorname*{Incr}({\omega}+{\theta},+{\Gamma})-D\\ =\sum_{j=1}^J\frac1{2\pi}\,\operatorname*{Incr}({\omega}+{\theta},+{\Gamma}_j)\le \sum_{j=1}^J M_j-D=M-D.\end{gathered}$$

[**The obstacle problem.**]{} An estimate similar to that of Theorem \[th:am1b\] has also been obtained for $N=2$ by Sakaguchi [@Sa] for the [*obstacle problem*]{}. Let ${\Omega}$ be bounded and simply connected and let $\psi$ be a given function in $C^2({\overline}{{\Omega}})$ — the obstacle. There exists a unique solution $u\in H^1_0({\Omega})$ such that $u\ge\psi$ in ${\Omega}$ of the obstacle problem $$\int_{\Omega}{\nabla}u\cdot{\nabla}(v-u)\,dx\ge 0 \ \mbox{ for every } \ v\in H^1_0({\Omega}) \ \mbox{ such that } \ v\ge\psi.$$ It turns out that $u\in C^{1,1}({\overline}{{\Omega}})$ and $u$ is harmonic outside of the [*contact set*]{} $I=\{x\in{\Omega}: u(x)=\psi(x)\}$. In [@Sa] it is proved that, if the number of connected components of the set of local maximum points of $\psi$ equals $J$, then $$\sum_{z_k\in{\Omega}\setminus I}m(z_k)\le J-1,$$ with the usual meaning for $z_k$ and $m(z_k)$. In [@Sa], this result is also shown to hold for a more general class of quasi-linear equations. The proof of this result is based on the analysis of the level sets of $u$ at critical values, in the wake of [@Al1] and [@HW].
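A minimal numerical illustration of part (b) of Theorem \[th:am1b\] (added here; the data are chosen only for illustration): on the unit disk take $u=x^2-y^2$ and the constant field $\ell=(1,0)$, so that $u_\ell=2x$, $M^+=M^-=1$, $D=0$, and the bound $[(M^++M^-)/2]-D=1$ is attained by the single critical point $z=0$:

```python
# Added check: for u = x^2 - y^2 on the unit disk, f = u_x - i u_y = 2z,
# so the number of interior critical points (counted with multiplicity) is
# the winding number of f along the unit circle; the bound of part (b)
# with l = (1,0) is [(1+1)/2] - 0 = 1.
import numpy as np

t = np.linspace(0.0, 2*np.pi, 2001)
gamma = np.exp(1j*t)                 # +Gamma, counterclockwise

f = 2*gamma                          # complex gradient of u on Gamma
m_total = np.diff(np.unwrap(np.angle(f))).sum()/(2*np.pi)

bound = (1 + 1)//2 - 0               # u_l = 2x changes sign twice: M+ = M- = 1
print(round(m_total), bound)         # 1 1
```

Here the bound is sharp; taking $\ell=\nu$ instead gives $D=1$ and $M^+=M^-=2$, leading to the same estimate.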
Topological bounds as in Theorems \[th:am1\] or \[th:am1b\] are not possible in dimension greater than $2$. We give two examples. \[fig:torus\] ![The broken doughnut in a ball: $u$ must have a critical point near the center of $B$ and one between the ends of $T$.](Figure7.pdf "fig:") [**The broken doughnut in a ball.**]{} The first is an adaptation of one contained in [@EP1] and reproduces the situation of Theorem \[th:am1\] (see Fig. 3.3). Let $B$ be the unit ball centered at the origin in ${\mathbb{R}}^3$ and $T$ an open torus with center of symmetry at the origin and such that ${\overline}{T}\subset B$. We can always choose coordinate axes in such a way that the $x_3$-axis is the axis of revolution for $T$ and hence define the set $T_{\varepsilon}=\{ x\in T: x_2<{\varepsilon}^{-1}|x_1|\}$. ${\overline}{T_{\varepsilon}}$ is simply connected and tends to $T$ as ${\varepsilon}\to 0^+$. Now, set ${\Omega}_{\varepsilon}=B\setminus{\overline}{T_{\varepsilon}}$ and consider a capacity potential for ${\Omega}_{\varepsilon}$, that is, the harmonic function in ${\Omega}_{\varepsilon}$ with the following boundary values $$u=0 \ \mbox{ on } \ {\partial}B, \quad u=1 \ \mbox{ on } {\partial}T_{\varepsilon}.$$ Since ${\Omega}_{\varepsilon}$ has $2$ planes of symmetry (the $x_1x_2$ and $x_2x_3$ planes), the partial derivatives $u_{x_1}$ and $u_{x_3}$ must be zero on the two segments that are the intersection of ${\Omega}_{\varepsilon}$ with the $x_2$-axis.
If ${\sigma}$ is the segment that contains the origin, the restriction of $u$ to ${\overline}{{\sigma}}$ equals $1$ at the point ${\overline}{{\sigma}}\cap{\partial}T_{\varepsilon}$, is $0$ at the point ${\overline}{{\sigma}}\cap{\partial}B$, is bounded at the origin by a constant $<1$ independent of ${\varepsilon}$, and can be made arbitrarily close to $1$ between the “ends” of $T_{\varepsilon}$, when ${\varepsilon}\to 0^+$. It follows that, if ${\varepsilon}$ is sufficiently small, $u_{x_2}$ (and hence ${\nabla}u$) must vanish twice on ${\sigma}$. It is clear that this argument does not depend on the size or on small deformations of $T$. Thus, we can construct in $B$ a (simply connected) “chain” $C_{\varepsilon}$ of an arbitrary number $n$ of such tori, by gluing them together: the solution in the domain obtained by replacing $T_{\varepsilon}$ by $C_{\varepsilon}$ will then have at least $2n$ critical points. [**Circles of critical points.**]{} The second example shows that, in general dimension, a finite number of sign changes of some derivative of a harmonic function $u$ on the boundary does not even imply that $u$ has a finite number of critical points. To see this, consider the harmonic function in Subsection \[sub:circles\]: $$u(x,y,z)=J_0(\sqrt{x^2+y^2})\,\cosh(z).$$ It is easy to see that, for instance, on any sphere centered at the origin the normal derivative $u_\nu$ changes its sign a finite number of times. However, if the radius of the sphere is larger than the first positive zero of $J_1$, the corresponding ball contains at least one circle of critical points. [**Star-shaped annuli.**]{} Nevertheless, if some additional geometric information is added, something can be done. Suppose that ${\Omega}=D_0\setminus{\overline}{D_1}$, where $D_0$ and $D_1$ are two domains in ${\mathbb{R}}^N$, with boundaries of class $C^1$ and such that ${\overline}{D_1}\subset D_0$.
Suppose that $D_0$ and $D_1$ are [*star-shaped*]{} with respect to the same origin $O$ placed in $D_1$, that is, the segment $OP$ is contained in the domain for every point $P$ chosen in it. Then, the [*capacity potential*]{} $u$ defined as the solution of the Dirichlet problem $${\Delta}u=0 \ \mbox{ in } \ {\Omega}, \quad u=0 \ \mbox{ on } \ {\partial}D_0, \quad u=1 \ \mbox{ on } \ {\partial}D_1,$$ does not have critical points in ${\overline}{{\Omega}}$. This is easily proved by considering the harmonic function $$w(x)=x\cdot{\nabla}u(x), \ x\in{\Omega}.$$ Since $D_0$ and $D_1$ are star-shaped and of class $C^1$, $w\le 0$ on ${\partial}{\Omega}$. By the strong maximum principle, then $w<0$ in ${\Omega}$; in particular, ${\nabla}u$ does not vanish in ${\Omega}$ and all the sets $D_1\cup \{x\in{\overline}{{\Omega}}: u(x)>s\}$ turn out to be star-shaped too (see [@Ev]). This theorem can be extended to the capacity potential defined in ${\Omega}={\mathbb{R}}^N\setminus{\overline}{D_1}$ as the solution of $${\Delta}u=0 \ \mbox{ in } \ {\Omega}, \quad u=1 \ \mbox{ on } \ {\partial}{\Omega}, \quad u\to 0 \ \mbox{ as } \ |x|\to\infty.$$ Such results have been extended in [@Fc; @Pu; @Sl1] to a very general class of nonlinear elliptic equations. Counting the critical points of Green’s functions on manifolds -------------------------------------------------------------- With suitable restrictions on the coefficients, the equation can be regarded as the [*Laplace-Beltrami equation*]{} on the Riemannian surface ${\mathbb{R}}^2$ equipped with the metric $$c\,(dx)^2-2 b\,(dx)(dy)+a\,(dy)^2.$$ Theorems \[th:am1\] and \[th:am1b\] can then be interpreted accordingly. This point of view has been considered in a more general context in [@EP2; @EP3], where the focus is on Green’s functions of a $2$-dimensional complete Riemannian surface $(M, g)$ of finite topological type (that is, the fundamental group of $M$ is finitely generated).
A Green’s function is a symmetric function ${{\mathcal G}}(x,y)$ that satisfies in $M$ the equation $$\label{laplace-beltrami} -{\Delta}_g {{\mathcal G}}(\cdot,y)={\delta}_y ,$$ where ${\Delta}_g$ is the Laplace-Beltrami operator induced by the metric $g$ and ${\delta}_y$ is the [*Dirac delta*]{} centered at a point $y\in M$. A symmetric Green’s function ${{\mathcal G}}$ can always be constructed by an approximation argument introduced in [@LT]: an increasing sequence of compact subsets ${\Omega}_n$ containing $y$ and exhausting $M$ is introduced and ${{\mathcal G}}$ is then defined as the limit on compact subsets of $M\setminus\{ y\}$ of the sequence ${{\mathcal G}}_n-a_n$, where ${{\mathcal G}}_n$ is the solution of such that ${{\mathcal G}}_n=0$ on ${\Gamma}_n$ and $a_n$ is a suitable constant. A Green’s function defined in this way is generally not unique, but has many properties in common with the fundamental solution for Laplace’s equation in the Euclidean plane. With these premises, the following notable topological bound has been proved in [@EP2; @EP3]: $$\mbox{number of critical points of ${{\mathcal G}}$ } \le 2\mathfrak{g}+\mathfrak{e}-1,$$ where $\mathfrak{g}$ and $\mathfrak{e}$ are the [*genus*]{} and the [*number of ends*]{} of $M$; the number $2\mathfrak{g}+\mathfrak{e}-1$ is known as the first [*Betti number*]{} of $M$. Moreover, if the bound is attained, then ${{\mathcal G}}$ is [*Morse*]{}, that is, at its critical points the Hessian matrix is non-degenerate. In [@EP2], it is also shown that, in dimensions greater than two, an upper bound by topological invariants is impossible. Two different proofs are constructed in [@EP2] and [@EP3], respectively.
Both proofs are based on the following [*uniformization principle*]{}: since $(M, g)$ is a smooth manifold of finite topological type, it is well known (see [@KY]) that there exists a compact surface ${\Sigma}$ endowed with a metric $g'$ of constant curvature, a finite number $J\ge 0$ of isolated points $p_j\in{\Sigma}$ and a finite number $K\ge 0$ of (analytic) topological disks $D_k\subset{\Sigma}$ such that $(M, g)$ is conformally isometric to the manifold $(M', g')$, where $M'$ is the interior of $${\Sigma}\setminus\left(\bigcup_{j=1}^J\{ p_j\} \cup \bigcup_{k=1}^K D_k\right).$$ That means that there exist a diffeomorphism $\Phi: M\to M'$ and a positive function $f$ on $M$ such that $\Phi^* g'=f g$; it turns out that the genus $\mathfrak{g}$ of ${\Sigma}$ and the number $J+K$ — which equals the number $\mathfrak{e}$ of ends of $M$ — determine $M$ up to diffeomorphisms. The proof in [@EP2] then proceeds by analyzing the transformed Green’s function ${{\mathcal G}}'={{\mathcal G}}\circ\Phi^{-1}$. It is proved that ${{\mathcal G}}'$ satisfies the problem $$-{\Delta}_{g'} {{\mathcal G}}'(\cdot, y')={\delta}_{y'}-\sum\limits_{j=1}^J c_j{\delta}_{p_j}\ \mbox{ in the interior of } \ M', \qquad {{\mathcal G}}'=0 \mbox{ on } \ \bigcup_{k=1}^K {\partial}D_k,$$ where $y'=\Phi(y)$ and the constants $c_j$, possibly zero (in which case ${{\mathcal G}}'$ would be $g'$-harmonic near $p_j$), sum up to $1$. Thus, a local blow-up analysis of the [*Hopf index*]{} $\mathfrak{I}(z_n)$, $n=1,\dots, N$, of the gradient of ${{\mathcal G}}'$ at the critical points $z_1,\dots, z_N$ (isolated and with finite multiplicity), together with the [*Hopf Index Theorem*]{} ([@Mo; @MC]), yield the formula $$\sum_{n=1}^N\mathfrak{I}(z_n)+\sum_{c_j\not=0}\mathfrak{I}(p_j)=\chi({\Sigma}^*),$$ where $\chi({\Sigma}^*)$ is the Euler characteristic of the manifold $${\Sigma}^*={\Sigma}\setminus\left( D_{y'}\cup\bigcup_{k=1}^K D_k \right)$$ and $D_{y'}$ is a sufficiently small disk around $y'$.
Since $\chi({\Sigma}^*)$ is readily computed as $1-2\mathfrak{g}-K$ and $\mathfrak{I}(z_n)\le -1$, one then obtains that $$\begin{gathered} \mbox{ number of critical points of ${{\mathcal G}}'$}=-\sum_{n=1}^N\mathfrak{I}(z_n)=\\ 2\mathfrak{g}+K-1+\sum_{c_j\not=0}\mathfrak{I}(p_j)\le 2\mathfrak{g}+J+K-1=2\mathfrak{g}+\mathfrak{e}-1.\end{gathered}$$ Of course, the gradient of ${{\mathcal G}}'$ vanishes if and only if that of ${{\mathcal G}}$ does. The proof contained in [@EP3] has a more geometrical flavor and focuses on the study of the integral curves of the gradient of ${{\mathcal G}}$. This point of view is motivated by the fact that in Euclidean space the Green’s function (the fundamental solution) arises as the electric potential of a charged particle at $y$, so that its critical points correspond to equilibria and the integral curves of its gradient field are the lines of force classically studied in the XIX century. Such a description relies on techniques of dynamical systems rather than on the toolkit of partial differential equations. We shall not get into the details of this proof, but we just mention that it gives a more satisfactory portrait of the integral curves connecting the various critical points of ${{\mathcal G}}$ — an issue that has rarely been studied. Counting the critical points of eigenfunctions ---------------------------------------------- The bounds and identities on the critical points that we considered so far are based on a crucial topological tool: the index $\mathfrak{I}(z_0)$ of a critical point $z_0$. For a function $u\in C^1({\Omega})$, the integer $\mathfrak{I}(z_0)$ is the [*winding number or degree*]{} of the vector field ${\nabla}u$ around $z_0$ and is related to the portrait of the set ${{\mathcal N}}_u=\{z\in{{\mathcal U}}: u(z)=u(z_0)\}$ for a sufficiently small neighborhood ${{\mathcal U}}$ of $z_0$.
As a matter of fact, if $z_0$ is an isolated critical point of $u$, one can distinguish two situations (see [@AM1; @Ro]): (I) if ${{\mathcal U}}$ is sufficiently small, ${{\mathcal N}}_u=\{z_0\}$ and $\mathfrak{I}(z_0)=1$; (II) if ${{\mathcal U}}$ is sufficiently small, ${{\mathcal N}}_u$ consists of $n$ simple curves and, if $n\ge 2$, each pair of such curves crosses at $z_0$ only; it turns out that $\mathfrak{I}(z_0)=1-n$. Critical points with index $\mathfrak{I}$ equal to $1$, $0$, or negative are called [*extremal, trivial*]{}, or [*saddle*]{} points, respectively (see [@AM1]). A saddle point is [*simple*]{} or [*Morse*]{} if the Hessian matrix of $u$ at that point is non-degenerate. In the cases we examined so far, we always have that $\mathfrak{I}(z_0)\le -1$, that is $z_0$ is a saddle point, since (I) and (II) with $n=1$ cannot occur, by the maximum principle. The situation considerably changes when $u$ is a solution of , , or . Here, we shall give an account of what can be said for solutions of . The same ideas can be used for solutions of the semilinear equation $$-{\Delta}u=f(u) \ \mbox{ in } \ {\Omega},$$ subject to a homogeneous Dirichlet boundary condition, where the non-linearity $f:{\mathbb{R}}\to{\mathbb{R}}$ satisfies the assumptions: $$f(t)>0 \ \mbox{ if } t>0 \quad\mbox{ or }\quad f(t)/t>0 \ \mbox{ for } \ t\not=0$$ (see [@AM1] for details). We present here the following result that is in the spirit of Theorem \[th:am1\]. \[th:am1c\] Let ${\Omega}$ be as in Theorem \[th:am1\] and $u\in C^1({\overline}{{\Omega}})\cap C^2({\Omega})$ be a solution of . If $z_0\in{\overline}{{\Omega}}$ is an isolated critical point of $u$ in ${\overline}{{\Omega}}$, then (A) either $z_0$ is a nodal critical point, that is $z_0\in{{\mathcal S}}(u)$, and the function $u_x-i\,u_y$ is asymptotic to $c\,(z-z_0)^m$, as $z\to z_0$, for some $c\in{\mathbb{C}}\setminus\{ 0\}$ and $m\in{\mathbb{N}}$, (B) or $z_0$ is an extremal, trivial, or simple saddle critical point.
Finally, if all the critical points of $u$ in ${\overline}{{\Omega}}$ are isolated[^1], the following identity holds: $$\label{identity2} \sum_{z_k\in{\Omega}}m(z_k)+\frac12\,\sum_{z_k\in{\Gamma}}m(z_k)+n_S-n_E=J-2.$$ Here, $n_S$ and $n_E$ denote the numbers of simple saddle and extremal points of $u$, respectively. Thus, a bound on the number of critical points in topological terms is not possible — additional information of a different nature should be added. The proof of this theorem can be outlined as follows. First, one observes that, at a nodal critical point $z_0\in{\Omega}$, ${\Delta}u$ vanishes, and hence the situation described in Subsection \[sub:harmonic\] applies, that is, $u_x-i\,u_y$ actually behaves as specified in (A) and the index $\mathfrak{I}(z_0)$ equals $-m$. If $z_0\in{\Gamma}$, a reflection argument like the one used for Theorem \[th:am1\] can be applied, so that $z_0$ can be treated as an interior nodal critical point of an extended function with vanishing Laplacian at $z_0$ and (A) holds; in this case, however, as done for Theorem \[th:am1\], the contribution of $z_0$ must be counted as $-m/2$. Secondly, one examines non-nodal critical points. At these points ${\Delta}u$ is either positive or negative. If, say, ${\Delta}u(z_0)<0$, then at least one eigenvalue of the Hessian matrix of $u$ must be negative and the remaining eigenvalue is either positive (and hence a simple saddle point arises), negative (and hence a maximum point arises) or zero (and hence, with a little more effort, either a trivial or a simple saddle point arises). Thus, the total index of these points sums up to $n_E-n_S$. Finally, the identity is obtained by applying Hopf’s index theorem in a suitable manner. Extra assumptions: the emergence of geometry -------------------------------------------- As emerged in the previous subsection, topology is not enough to control the number of critical points of an eigenfunction or a torsion function.
Here, we will explain how some geometrical information about ${\Omega}$ can be helpful. Convexity is useful information. If the domain ${\Omega}\subset{\mathbb{R}}^N$, $N\ge 2$, is convex, one can expect that the solution $\tau$ of and the only [*positive*]{} solution $\phi_1$ of — it exists and, as is well known, corresponds to the first Dirichlet eigenvalue ${\lambda}_1$ — have only one critical point (the maximum point). This expectation is realistic, but a rigorous proof is not straightforward. In fact, one has to first show that $\tau$ and $\phi_1$ are [*quasi-concave*]{}, that is, one shows the convexity of the level sets $$\{ x\in{\Omega}: u(x)\ge s\} \ \mbox{ for every } \ 0\le s\le \max_{{\overline}{{\Omega}}} u,$$ for $u=\tau$ or $u=\phi_1$. It should be noted that $\phi_1$ is [*never concave*]{} and examples of convex domains ${\Omega}$ can be constructed such that $\tau$ is not concave (see [@Ko]). The quasi-concavity of $\tau$ and $\phi_1$ can be proved in several different ways (see [@BiS; @BL; @CS; @IS; @Ke; @Ko; @Sl2]). Here, we present the argument used in [@Ko]. There, the desired quasi-concavity is obtained by showing that the functions ${\sigma}=\sqrt{\tau}$ and $\psi=\log\phi_1$ are concave functions ($\tau$ and $\phi_1$ are then said to be [*$1/2$-concave*]{} and [*log-concave*]{}, respectively). In fact, one shows that ${\sigma}$ and $\psi$ satisfy the conditions $${\Delta}{\sigma}=-\frac{1+2\,|{\nabla}{\sigma}|^2}{2{\sigma}} \ \mbox{ in } \ {\Omega}, \quad {\sigma}=0 \ \mbox{ on } {\Gamma},$$ and $${\Delta}\psi=-({\lambda}_1+|{\nabla}\psi|^2) \ \mbox{ in } \ {\Omega}, \quad \psi=-\infty \ \mbox{ on } {\Gamma}.$$ The concavity test established by Korevaar in [@Ko], based on a maximum principle for the so-called [*concavity function*]{} (see also [@Ka]), applies to these two problems and guarantees that both ${\sigma}$ and $\psi$ are concave.
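In one space dimension the two displayed problems can be checked by hand and by machine: on the interval ${\Omega}=(0,\pi)$ (a toy example of ours) one has $\tau=x(\pi-x)/2$, $\phi_1=\sin x$ and ${\lambda}_1=1$, so ${\sigma}=\sqrt{\tau}$ and $\psi=\log\sin x$ should satisfy the stated equations and be concave. A quick finite-difference verification (NumPy assumed):

```python
import numpy as np

x = np.linspace(0.3, np.pi - 0.3, 2001)  # stay away from the boundary
h = x[1] - x[0]

def d1(f): return np.gradient(f, h)
def d2(f): return np.gradient(np.gradient(f, h), h)

sigma = np.sqrt(x * (np.pi - x) / 2)     # sqrt of the torsion function
psi = np.log(np.sin(x))                  # log of the first eigenfunction

# residuals of sigma'' = -(1 + 2 sigma'^2)/(2 sigma) and
# psi'' = -(lambda_1 + psi'^2) with lambda_1 = 1 on (0, pi)
res_sigma = d2(sigma) + (1 + 2 * d1(sigma) ** 2) / (2 * sigma)
res_psi = d2(psi) + 1 + d1(psi) ** 2

max_res_sigma = np.max(np.abs(res_sigma[5:-5]))  # drop one-sided ends
max_res_psi = np.max(np.abs(res_psi[5:-5]))
concave = np.all(d2(sigma)[5:-5] < 0) and np.all(d2(psi)[5:-5] < 0)
```

Both residuals vanish up to discretization error and both second derivatives are negative, matching the $1/2$-concavity and log-concavity claims.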
With similar arguments, one can also prove that the solution of - is $\log$-concave in $x$ for any fixed time $t$. The obtained quasi-concavity implies in particular that, for $u=\tau$ or $\phi_1$, the set of critical points ${\mathcal{C}}(u)$, that here coincides with the set $${\mathcal{M}}(u)=\Bigl\{ x\in{\Omega}: u(x)=\max_{{\overline}{{\Omega}}} u\Bigr\},$$ is convex. This set cannot contain more than one point, due to the analyticity of $u$. In fact, if it contained a segment then, since the restriction of $u$ to the chord of ${\overline}{{\Omega}}$ containing that segment is analytic, $u$ would be a [*positive*]{} constant on this chord; this is impossible, since $u=0$ at the endpoints of the chord. The same argument ensures that, if ${\varphi}\equiv 1$ in a convex domain ${\Omega}$, then for any fixed $t>0$ there is a [*unique*]{} point $x(t)\in{\Omega}$ — the so-called [*hot spot*]{} — at which the solution of - attains its maximum in ${\overline}{{\Omega}}$, that is $$h(x(t),t)=\max_{x\in{\overline}{{\Omega}}} h(x,t) \ \mbox{ for } \ t>0.$$ The location of $x(t)$ in ${\Omega}$ will be one of the issues in the next section. [**A conjecture.**]{} Counting (or estimating the number of) the critical points of $\tau$, $\phi_1$, or $h$ when ${\Omega}$ is not convex seems a difficult task. For instance, to the author’s knowledge, it is not even known whether or not the uniqueness of the maximum point holds true if ${\Omega}$ is assumed to be [*star-shaped*]{} with respect to some origin. We conclude this subsection by offering and justifying a conjecture on the number of hot spots in a bounded simply connected domain ${\Omega}$ in ${\mathbb{R}}^2$.
To this aim, we define for $t>0$ the set of hot spots as $${{\mathcal H}}(t)=\{ x\in{\Omega}: x \mbox{ is a local maximum point of $h(\cdot,t)$}\}.$$ We shall suppose that the function ${\varphi}$ in is continuous, non-negative and not identically equal to zero in ${\Omega}$, so that, by [*Hopf’s boundary point lemma*]{}, ${{\mathcal H}}(t)\cap{\Gamma}=\varnothing$. Also, by an argument based on the analyticity of $h$ similar to that used for the uniqueness of the maximum point in a convex domain, we can be sure that ${{\mathcal H}}(t)$ is made of isolated points (see [@AM1] for details). (A parabolic version of) Theorem \[th:am1c\] then yields that $$n_E(t)-n_S(t)=1,$$ where $n_E(t)$ and $n_S(t)$ are the numbers of extremal and simple saddle points of $h(\cdot,t)$; clearly $n_E(t)$ is the cardinality of ${{\mathcal H}}(t)$. An estimate on the total number of critical points of $h(\cdot,t)$ will then follow from one on $n_E(t)$. Notice that, if ${\lambda}_n$ and $\phi_n$, $n\in{\mathbb{N}}$, are Dirichlet eigenvalues (arranged in increasing order) and eigenfunctions (normalized in $L^2({\Omega})$) of the Laplace operator in ${\Omega}$, then the following [*spectral formula*]{} $$\label{spectral} h(x,t)=\sum_{n=1}^\infty \widehat{{\varphi}}(n)\, \phi_n(x) e^{-{\lambda}_n t} \ \mbox{ holds for } \ x\in{\overline}{{\Omega}} \ \mbox{ and } \ t>0,$$ where $\widehat{{\varphi}}(n)$ is the Fourier coefficient of ${\varphi}$ corresponding to $\phi_n$. Then we can infer that $e^{{\lambda}_1 t} h(x,t)\to\widehat{{\varphi}}(1)\,\phi_1(x)$ as $t\to\infty$, with $$\widehat{{\varphi}}(1)=\int_{\Omega}{\varphi}(x)\,\phi_1(x)\,dx>0,$$ and the convergence is uniform on ${\overline}{{\Omega}}$ under suitable assumptions on ${\varphi}$ and ${\Omega}$.
This information implies that, if $x(t)\in{{\mathcal H}}(t)$, then $$\label{large times} {\mathop{\mathrm{dist}}}(x(t),{{\mathcal H}}_\infty)\to 0 \ \mbox{ as } \ t\to\infty,$$ where ${{\mathcal H}}_\infty$ is the set of local maximum points of $\phi_1$. Now, our conjecture concerns the influence of the shape of ${\Omega}$ on the number $n_E(t)$. To rule out the possible influence of the values of ${\varphi}$, we assume that ${\varphi}\equiv 1$: then we know that there holds the following asymptotic formula (see [@Va]): $$\label{varadhan} \lim_{t\to 0^+} 4t\,\log[1-h(x,t)]=-d_{\Gamma}(x)^2 \ \mbox{ for } \ x\in{\overline}{{\Omega}};$$ here, $d_{\Gamma}(x)$ is the [*distance*]{} of a point $x\in{\overline}{{\Omega}}$ from the boundary ${\Gamma}$. The convergence in is uniform on ${\overline}{{\Omega}}$ under suitable regularity assumptions on ${\Gamma}$. \[fig:cocoon\] ![As time $t$ increases, ${{\mathcal H}}(t)$ goes from ${{\mathcal H}}_0$, the set of maximum points of $d_{\Gamma}$, to ${{\mathcal H}}_\infty$, the set of maximum points of $\phi_1$.](Figure8.pdf "fig:") Now, suppose that $d_{\Gamma}$ has [*exactly*]{} $m$ distinct local (strict) maximum points in ${\Omega}$. Formula suggests that, when $t$ is sufficiently small, $h(\cdot,t)$ has the same number $m$ of maximum points in ${\Omega}$. As time $t$ increases, one expects that the maximum points of $h(\cdot,t)$ do not increase in number. Therefore, the following bounds should hold: $$\label{guess} n_E(t)\le m \ \mbox{ and hence } \ n_E(t)+n_S(t)\le 2m-1 \ \mbox{ for every } \ t>0.$$ From the asymptotic analysis performed on , we also derive that the [*total number of critical points of*]{} $\phi_1$ does not exceed $2m-1$. We stress that cannot always hold with the equality sign.
In fact, if $D_{\varepsilon}^\pm$ denotes the unit disk centered at $(\pm{\varepsilon},0)$ and we consider the domain ${\Omega}_{\varepsilon}$ obtained from $D_{\varepsilon}^+\cup D_{\varepsilon}^-$ by “smoothing out the corners” (see Fig. 3.4), we notice that $m=2$ for every $0<{\varepsilon}<1$, while ${\Omega}_{\varepsilon}$ tends to the unit ball centered at the origin and hence, if ${\varepsilon}$ is small enough, $\phi_1$ has only one critical point, since ${\Omega}_{\varepsilon}$ is “almost convex”. Based on a similar argument, inequalities like should also hold for the number of critical points of the torsion function $\tau$. In fact, if $U_s$ is the solution of the one-parameter family of problems $$-{\Delta}U_s+s\,U_s=1 \ \mbox{ in } \ {\Omega}, \quad U_s=0 \mbox{ on } \ {\Gamma},$$ where $s$ is a positive parameter, we have that $$\lim_{s\to 0^+} U_s=\tau \quad \mbox{and} \quad \lim_{s\to\infty} \frac1{\sqrt{s}}\,\log[1-s\,U_s]=-d_{\Gamma},$$ uniformly on ${\overline}{{\Omega}}$ (see again [@Va]). We finally point out that the asymptotic formulas presented here hold in any dimension; thus, the bounds in may be generalized in some way. A conjecture by S. T. Yau ------------------------- To conclude this section about the number of critical points of solutions of partial differential equations, we cannot help mentioning a conjecture proposed in [@Ya] (also see [@EJN; @JaN; @JNT]). This is motivated by the study of eigenfunctions of the Laplace-Beltrami operator ${\Delta}_g$ in a compact Riemannian manifold $(M,g)$. Let $\{\phi_k\}_{k\in{\mathbb{N}}}$ be a sequence of eigenfunctions, $${\Delta}_g\phi_k+{\lambda}_k\phi_k=0 \ \mbox{ in } \ M.$$ Let $x_k\in M$ be a point of maximum for $\phi_k$ in $M$ and $B_k$ a geodesic ball centered at $x_k$ and with radius $C/\sqrt{{\lambda}_k}$.
If we blow up $B_k$ to the unit disk in ${\mathbb{R}}^2$ and let $u_k$ be the normalized eigenfunction $\phi_k/\max_M\phi_k$ after that change of variables, then a subsequence of $\{ u_k\}_{k\in{\mathbb{N}}}$ will converge to a solution $u$ of $$\label{yau} {\Delta}u+u=0, \ |u|<1 \ \mbox{ in } \ {\mathbb{R}}^2.$$ If we can prove that $u$ has [*infinitely many*]{} isolated critical points, then we can expect that their number is unbounded also for the sequence $\{\phi_k\}_{k\in{\mathbb{N}}}$. A naive insight built up upon the available concrete examples of entire eigenfunctions (the [*separated*]{} eigenfunctions in rectangular or polar coordinates) may suggest that it would be enough to prove that any solution of has infinitely many nodal domains. It turns out that this is not always true, as a clever counterexample obtained in [@EJN Theorem 3.2] shows: [*there exists a solution of with exactly two nodal domains*]{}. The counterexample is constructed by perturbing the solution of $$f=J_1(r)\sin{\theta},$$ where $(r,{\theta})$ are the usual polar coordinates and $J_1$ is the Bessel function of the first kind of order $1$; $f$ has infinitely many nodal domains. The desired example is thus obtained by the perturbation $h=f+{\varepsilon}\,g$, where $g(x,y)=f(x-{\delta}_x,y-{\delta}_y)$ and $({\delta}_x,{\delta}_y)$ is suitably chosen. As a result, if ${\varepsilon}$ is sufficiently small, the set $\{ (x,y)\in{\mathbb{R}}^2: h(x,y)\not=0\}$ is made of two interlocked spiral-like domains (see [@EJN Figure 3.1]). A related result was proved in [@EP4], where it is shown that there is no topological upper bound for the number of critical points of the first eigenfunction on Riemannian manifolds (possibly with boundary) of dimension larger than two. In fact, with no restriction on the topology of the manifold, it is possible to construct metrics whose first eigenfunction has as many isolated critical points as one wishes.
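The seed function of the counterexample is easy to test numerically: $f=J_1(r)\sin{\theta}$ inherits Bessel's equation in $r$, so ${\Delta}f+f=0$ in the plane. A five-point finite-difference check at random sample points (SciPy assumed available for $J_1$):

```python
import numpy as np
from scipy.special import j1

def f(x, y):
    """f = J1(r) sin(theta) written in Cartesian coordinates."""
    r = np.hypot(x, y)
    return j1(r) * y / r   # sin(theta) = y/r; we stay away from r = 0

h = 1e-3
rng = np.random.default_rng(0)
pts = rng.uniform(0.5, 4.0, size=(20, 2))   # sample points away from origin
residuals = []
for x, y in pts:
    lap = (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
           - 4 * f(x, y)) / h ** 2          # discrete Laplacian
    residuals.append(abs(lap + f(x, y)))    # Delta f + f should vanish
max_residual = max(residuals)
```

The residual is at the level of the discretization error, confirming that $f$ solves the limiting equation that the perturbation argument starts from.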
Recently, it has been proved in [@JZ1] that, if $(M, g)$ is a non-positively curved surface with concave boundary, the number of nodal domains of $\phi_k$ diverges along a subsequence of eigenvalues of density $1$ (see also [@JZ2] for related results). The surface need not have any symmetries. The number can also be shown to grow like $\log {\lambda}_k$ ([@Ze]). In light of such results, Yau’s conjecture was updated as follows: show that, for any (generic) $(M,g)$ there exists at least one sub-sequence of eigenfunctions for which the number of nodal domains (and hence of the critical points) tends to infinity ([@Ya2; @Ze]). The location of critical points {#sec:location} =============================== A little history ---------------- The first result that studies the critical points of a function is probably [*Rolle’s theorem*]{}: between two zeroes of a differentiable real-valued function there is [*at least one*]{} critical point. Thus, a function that has $n$ distinct zeroes also has at least $n-1$ critical points — an estimate from below — and we roughly know where they are located. After Rolle’s theorem, the first general result concerning the zeroes of the derivative of a general polynomial is [*Gauss’s theorem*]{}: if $$P(z)=a_n\,(z-z_1)^{m_1}\cdots(z-z_K)^{m_K}, \ \mbox{ with } \ m_1+\cdots+m_K=n,$$ is a polynomial of degree $n$, then $$\frac{P'(z)}{P(z)}=\frac{m_1}{z-z_1}+\cdots+\frac{m_K}{z-z_K}$$ and hence the zeroes of $P'(z)$ are, in addition to the multiple zeroes of $P(z)$ themselves, the roots of $$\frac{m_1}{z-z_1}+\cdots+\frac{m_K}{z-z_K}=0.$$ These roots can be interpreted as the [*equilibrium points*]{} of the gravitational field generated by the masses $m_1, \dots, m_K$ placed at the points $z_1, \dots, z_K$, respectively.
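Gauss's identity and the equilibrium interpretation can be checked numerically; in the hypothetical example below (our own choice of zeroes, NumPy assumed), the double zero is listed twice, so that the multiplicities $m_k$ are encoded by repetition:

```python
import numpy as np

zeros = np.array([1 + 1j, 1 + 1j, -2.0 + 0j, 0.5 - 3j])  # z_1 is a double zero
P = np.poly(zeros)             # coefficients of P(z) = (z-z_1)^2 (z-z_2)(z-z_3)
dP = np.polyder(P)

# Gauss's identity P'/P = sum m_k/(z - z_k) at an arbitrary test point
z = 2.3 + 0.7j
err_identity = abs(np.polyval(dP, z) / np.polyval(P, z)
                   - np.sum(1.0 / (z - zeros)))

# zeros of P' that are not multiple zeros of P are the equilibrium points
crit = np.roots(dP)
equilibria = crit[np.abs(crit - (1 + 1j)) > 1e-6]
err_field = max(abs(np.sum(1.0 / (w - zeros))) for w in equilibria)
```

Both errors are at round-off level: the logarithmic-derivative identity holds, and the gravitational field vanishes exactly at the non-multiple zeroes of $P'$.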
\[fig:lucas\] ![Lucas’s theorem: the zeroes of $P'(z)$ must fall in the convex envelope of those of $P(z)$.](Figure9.pdf "fig:") If the zeroes of $P(z)$ are placed on the real line then, by Rolle’s theorem, it is not difficult to convince oneself that the zeroes of $P'(z)$ lie in the smallest interval of the real axis that contains the zeroes of $P(z)$. This simple result has a geometrically expressive generalization in [*Lucas’s theorem*]{}: the zeroes of $P'(z)$ lie in the [*convex hull*]{} $\Pi$ of the set $\{ z_1,\dots, z_K\}$ — named [*Lucas’s polygon*]{} — and no such zero lies on ${\partial}\Pi$ unless it is a multiple zero $z_k$ of $P(z)$ or all the zeroes of $P(z)$ are collinear (see Fig. 4.1). In fact, it is enough to observe that, if $z\notin\Pi$ or $z\in{\partial}\Pi$, then all the $z_k$ lie in the closed half-plane $H$ whose boundary contains the side of $\Pi$ which is the closest to $z$. Thus, if $\ell=\ell_x+i\,\ell_y$ is an outward direction to ${\partial}H$, we have that $${\mathop{\mathrm{Re}}}\left[\left(\sum_{k=1}^K \frac{m_k}{z-z_k}\right)\ell\right]= \sum_{k=1}^K m_k\,\frac{{\mathop{\mathrm{Re}}}\bigl[{\overline}{(z-z_k)}\,\ell\bigr]}{|z-z_k|^2}>0,$$ since all the addends are non-negative and not all equal to zero, unless the $z_k$’s are collinear. If $P(z)$ has real coefficients, we know that its non-real zeroes occur in conjugate pairs. Using the circle whose diameter is the segment joining such a pair — this is called a [*Jensen’s circle*]{} of $P(z)$ — one can obtain a sharper estimate of the location of the zeroes of $P'(z)$: each non-real zero of $P'(z)$ lies on or within a Jensen’s circle of $P(z)$. This result goes under the name of [*Jensen’s theorem*]{} (see [@Wa] for a proof).
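Lucas's theorem can also be stress-tested on random polynomials. A weak but easily automated consequence is that the zeroes of $P'$ lie in the axis-aligned bounding box of the zeroes of $P$, since any convex set containing the zeroes contains their hull. A sketch (our own test harness, NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
ok = True
for _ in range(100):
    zeros = rng.normal(size=8) + 1j * rng.normal(size=8)  # random simple zeros
    crit = np.roots(np.polyder(np.poly(zeros)))           # zeros of P'
    eps = 1e-8  # guard against rounding in the root finder
    ok = ok and (
        zeros.real.min() - eps <= crit.real.min()
        and crit.real.max() <= zeros.real.max() + eps
        and zeros.imag.min() - eps <= crit.imag.min()
        and crit.imag.max() <= zeros.imag.max() + eps
    )
```

For generic (non-collinear, simple) zeroes the critical points actually sit strictly inside the hull, so the check passes with room to spare.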
All these results can be found in Walsh’s treatise [@Wa], which contains many other results about zeroes of complex polynomials or rational functions and their extensions to critical points of harmonic functions: among them, restricted versions of Theorem \[th:am1\] give information (i) on the critical points of the Green’s function of an infinite region delimited by a finite collection of simple closed curves and (ii) on the critical points of harmonic measures generated by collections of Jordan arcs. Besides the [*argument’s principle*]{} already presented in these notes, a useful ingredient used in those extensions is [*Hurwitz’s theorem*]{} (based on the classical [*Rouché’s theorem*]{}): if $f_n(z)$ and $f(z)$ are holomorphic in a domain ${\Omega}$, continuous on ${\overline}{{\Omega}}$, $f(z)$ is non-zero on ${\Gamma}$ and $f_n(z)$ converges uniformly to $f(z)$ on ${\overline}{{\Omega}}$, then there is an $n_0\in{\mathbb{N}}$ such that, for $n>n_0$, $f_n(z)$ and $f(z)$ have the same number of zeroes in ${\Omega}$. Location of critical points of harmonic functions in space ---------------------------------------------------------- The following result is somewhat an analog of Lucas’s theorem and is related to [@Wa Theorem 1, p. 249], which holds in the plane. \[ep1\] Let $D_1, \dots, D_J $ be bounded domains in ${\mathbb{R}}^N$, $N\ge 3$, with boundaries of class $C^{1,{\alpha}}$ and with mutually disjoint closures, and set $${\Omega}={\mathbb{R}}^N\setminus\bigcup_{j=1}^J {\overline}{D_j}.$$ Let $u\in C^0({\overline}{{\Omega}})\cap C^2({\Omega})$ be the solution of the boundary value problem $$\label{capacity} {\Delta}u=0 \ \mbox{ in } \ {\Omega}, \quad u=1 \ \mbox{ on } \ {\Gamma}, \quad u(x)\to 0 \ \mbox{ as } |x|\to\infty.$$ If ${{\mathcal K}}$ denotes the convex hull of $$\bigcup_{j=1}^J D_j,$$ then $u$ does not have critical points in ${\overline}{{\mathbb{R}}^N\setminus{{\mathcal K}}}$ (see Fig. 4.2).
This theorem admits at least two proofs and it is worth presenting both of them. The former is somewhat reminiscent of Lucas’s proof and is based on an explicit formula for $u$, $$u(x)=\frac1{(N-2)\, {\omega}_N}\int_{\Gamma}\frac{u_\nu(y)}{|x-y|^{N-2}}\,dS_y, \ x\in{\Omega},$$ that can be derived as a consequence of [*Stokes’s formula*]{}. Here, ${\omega}_N$ is the surface area of a unit sphere in ${\mathbb{R}}^N$, $dS_y$ denotes the $(N-1)$-dimensional surface measure, and $u_\nu$ is the (outward) normal derivative of $u$. By Hopf’s boundary point lemma, $u_\nu>0$ on ${\Gamma}$. Also, if $x\in{\overline}{{\mathbb{R}}^N\setminus{{\mathcal K}}}$, we can choose a hyperplane $\pi$ passing through $x$ and supporting ${{\mathcal K}}$ (at some point). If $\ell$ is the unit vector orthogonal to $\pi$ at $x$ and pointing into the half-space containing ${{\mathcal K}}$, we have that $(x-y)\cdot\ell$ is non-negative and is not identically zero for $y\in{\Gamma}$. Therefore, $$u_\ell(x)=-\frac1{{\omega}_N}\int_{\Gamma}\frac{u_\nu(y)\,(x-y)\cdot\ell}{|x-y|^{N}}\,dS_y<0,$$ which means that ${\nabla}u(x)\not=0$. \[fig:ep\] ![No critical points outside of the convex envelope.](Figure10.pdf "fig:") The latter proof is based on a symmetry argument ([@Sa2]) and, as will be clear, can also be extended to more general non-linear equations. Let $\pi$ be any hyperplane contained in ${\overline}{{\Omega}}$ and let $H$ be the open half-space containing ${{\mathcal K}}$ and such that ${\partial}H=\pi$. Let $x'$ be the mirror reflection in $\pi$ of any point $x\in H\cap{\Omega}$. Then the function defined by $$u'(x)=u(x') \ \mbox { for } \ x\in H\cap{\Omega}$$ is harmonic in $H\cap{\Omega}$, tends to $0$ as $|x|\to\infty$ and $$u'<u \ \mbox{ in } \ H\cap{\Omega}, \quad u'=u \ \mbox{ on } \ \pi\setminus{\Gamma}.$$ Therefore, by Hopf’s boundary point lemma, $u_\ell(x)\not=0$ at any $x\in\pi\setminus{\Gamma}$ for any direction $\ell$ not parallel to $\pi$.
Of course, if $x\in{\Gamma}\cap\pi$, we obtain that $u_\nu(x)>0$ by directly using Hopf’s boundary point lemma. Generalizations of Lucas’s theorem hold for other problems. Here, we mention the well-known result of Chavel and Karp [@CK] for the minimal solution of the Cauchy problem for the heat equation in a Riemannian manifold $(M,g)$: $$\label{ck} u_t={\Delta}_g u \ \mbox{ in } \ M\times(0,\infty), \qquad u={\varphi}\ \mbox{ on } \ M\times\{ 0\},$$ where ${\varphi}$ is a bounded initial datum with compact support in $M$. In [@CM], it is shown that, if $M$ is complete, simply connected and of constant curvature, then the set of the [*hot spots*]{} of $u$, $${{\mathcal H}}(t)=\left\{x\in M: u(x,t)=\max_{y\in M}u(y,t)\right\},$$ is contained in the convex hull of the support of ${\varphi}$. The proof is based on an explicit formula for $u$ in terms of the initial values ${\varphi}$. For instance, when $M={\mathbb{R}}^N$, we have the formula $$u(x,t)=(4\pi t)^{-N/2}\int_{{\mathbb{R}}^N} e^{-|x-y|^2/(4t)}\,{\varphi}(y)\,dy \ \mbox{ for } \ (x,t)\in{\mathbb{R}}^N\times(0,\infty).$$ With this formula in hand, by looking at the second derivatives of $u$, one can also prove that there is a time $T>0$ such that, for $t>T$, ${{\mathcal H}}(t)$ reduces to the single point $$\frac{\int_{{\mathbb{R}}^N} y\,{\varphi}(y) dy}{\int_{{\mathbb{R}}^N} {\varphi}(y) dy},$$ which is the center of mass of the measure space $({\mathbb{R}}^N, {\varphi}(y) dy)$ (see [@JS]). We also mention here the work of Ishige and Kabeya ([@IK1; @IK2; @IK3]) on the large-time behavior of hot spots for solutions of the heat equation with a rapidly decaying potential and for the Schrödinger equation. Hot spots in a grounded conductor --------------------------------- From a physical point of view, the solution describes the evolution of the temperature of $M$ when its initial distribution is known on $M$. The situation is more difficult if ${\partial}M$ is not empty.
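Before moving on to conductors with boundary, the convergence of the hot spot to the center of mass in the whole-space case $M={\mathbb{R}}^N$ can be checked with a minimal numerical sketch. The one-dimensional example below uses two point masses with arbitrary, illustrative weights and positions (they are not taken from the cited references):

```python
import math

# Two point masses w_i at positions c_i (hypothetical values, N = 1).
masses = [(0.7, 0.0), (0.3, 4.0)]

def u(x, t):
    """Heat extension of the measure sum_i w_i * delta_{c_i} at time t."""
    return sum(w * math.exp(-(x - c) ** 2 / (4.0 * t))
               for w, c in masses) / math.sqrt(4.0 * math.pi * t)

def hot_spot(t, lo=-2.0, hi=6.0, step=0.01):
    """Grid search for the maximum point of u(., t)."""
    xs = [lo + k * step for k in range(int(round((hi - lo) / step)) + 1)]
    return max(xs, key=lambda x: u(x, t))

center_of_mass = sum(w * c for w, c in masses) / sum(w for w, _ in masses)

# For small t the hot spot sits near the heavier mass at 0; for large t
# it approaches the center of mass 1.2, as the asymptotic result predicts.
print(hot_spot(0.1), hot_spot(1000.0), center_of_mass)
```

The grid search is crude but sufficient here, since for large $t$ the solution is a very flat, almost quadratic profile whose maximum sits near the weighted mean of the $c_i$.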
We shall consider here the case of a [*grounded*]{} heat conductor, that is, we will study the solution $h$ of the Cauchy-Dirichlet problem -. [**Bounded conductor.**]{} As already seen, if ${\varphi}\ge 0$, implies . For an arbitrary continuous function ${\varphi}$, from we can infer that, if $m$ is the first integer such that $\widehat{{\varphi}}(m)\not=0$ and $m+1, \dots, m+k-1$ are all the integers such that ${\lambda}_m={\lambda}_{m+1}=\cdots={\lambda}_{m+k-1}$, then $$\,e^{{\lambda}_m t}\, h(x,t)\to \sum_{n=m}^{m+k-1} \widehat{{\varphi}}(n)\,\phi_n(x) \ \mbox{ as } \ t\to\infty.$$ Also, when ${\varphi}\equiv 1$, holds and hence $$\label{short times} {\mathop{\mathrm{dist}}}(x(t), {{\mathcal H}}_0)\to 0 \ \mbox{ as } \ t\to 0,$$ where ${{\mathcal H}}_0$ is the set of local (strict) maximum points of $d_{\Gamma}$. This information gives a rough picture of the [*set of trajectories*]{} of the hot spots: $${{\mathcal T}}=\bigcup_{t>0}{{\mathcal H}}(t).$$ Notice in passing that, if ${\Omega}$ is convex and has $N$ distinct hyperplanes of symmetry, then ${{\mathcal T}}$ consists of a single point — the intersection of the hyperplanes — that is, the hot spot [*does not move*]{} or is [*stationary*]{}. Also, it is not difficult to show (see [@ChS]) that the hot spot does not move if ${\Omega}$ is invariant under an [*essential*]{} group $G$ of orthogonal transformations (that is, for every $x\not=0$ there is $A\in G$ such that $Ax\not=x$). Characterizing the class ${\mathcal{P}}$ of convex domains that admit a stationary hot spot seems to be a difficult task: some partial results about convex polygons can be found in [@MS1; @MS2] (see also [@MS]).
There it is proved that: (i) the equilateral triangle and the parallelogram are the only polygons with $3$ or $4$ sides in ${\mathcal{P}}$; (ii) the equilateral pentagon and the hexagons invariant under rotations of angles $\pi/3, 2\pi/3$, or $\pi$ are the only polygons with $5$ or $6$ sides [*all*]{} touching the inscribed circle centered at the hot spot. The analysis of the behavior of ${{\mathcal H}}(t)$ for $t\to 0^+$ and $t\to\infty$ helps us to show that hot spots [*do move*]{} in general. \[fig:half-disk\] ![The reflected $D^*$ is contained in $D^+$, hence $h'$ can be defined in $D^*$.](Figure11.pdf "fig:") To see this, it is enough to consider the [*half-disk*]{} (see Fig. 4.3) $$D^+=\{ x\in{\mathbb{R}}^2: |x|<1, \ x_1>0\};$$ since $D^+$ is convex, for each $t>0$ there is a unique hot spot that, as $t\to 0^+$, tends to the maximum point $x_0=(1/2,0)$ of $d_{\Gamma}$. Thus, it is enough to show that $x_0$ is not a spatial critical point of $h(x,t)$ for some $t>0$ or, if you like, of $\phi_1$. This is readily seen by [*Alexandrov’s reflection principle*]{}. Let $D^*\!=\!\{x\in D^+: x_1>1/2\}$ and define $$h'(x_1,x_2,t)=h(1-x_1,x_2,t) \ \mbox{ for } \ (x_1,x_2,t)\in {\overline}{D^*}\times(0,\infty);$$ $h'$ is the reflection of $h$ in the line $x_1=1/2$. We clearly have that $$\begin{aligned} &(h'-h)_t={\Delta}(h'-h) \ \mbox{ in } \ D^*\times(0,\infty), \quad h'-h=0 \ \mbox{ on } \ D^*\times\{ 0\},\\ &h'\!-\!h\!>\!0 \ \mbox{ on } \ ({\partial}D^*\cap{\partial}D^+ )\times(0,\infty), \quad h'\!-\!h\!=\!0 \ \mbox{ on } \ ({\partial}D^*\cap D^+ )\times(0,\infty).\end{aligned}$$ Thus, the strong maximum principle and Hopf’s boundary point lemma imply that $$-2\,h_{x_1}(1/2,x_2,t)=h'_{x_1}(1/2,x_2,t)-h_{x_1}(1/2,x_2,t)>0$$ for $(1/2,x_2,t)\in({\partial}D^*\cap D^+ )\times(0,\infty)$, and hence $x_0$ cannot be a critical point of $h$. The reflection principle just mentioned can also be employed to estimate the location of a hot spot.
In fact, as shown in [@BMS], by the same arguments one can prove that hot spots must belong to the subset ${\heartsuit}({\Omega})$ of ${\Omega}$ defined as follows. Let $\pi_{\omega}$ be a hyperplane orthogonal to the direction ${\omega}\in{\mathbb{S}}^{N-1}$ and let $H^+_{\omega}$ and $H^-_{\omega}$ be the two half-spaces defined by $\pi_{\omega}$; let ${{\mathcal R}}_{\omega}(x)$ denote the mirror reflection of a point $x$ in $\pi_{\omega}$. Then, the [*heart*]{} [^2] of ${\Omega}$ is defined by $${\heartsuit}({\Omega})=\bigcap_{{\omega}\in{\mathbb{S}}^{N-1}}\{H^-_{\omega}\cap{\Omega}: {{\mathcal R}}_{\omega}(H^+_{\omega}\cap{\Omega})\subset{\Omega}\}.$$ When ${\Omega}$ is convex, ${\heartsuit}({\Omega})$ is also convex and, if ${\Gamma}$ is of class $C^1$, its distance from ${\Gamma}$ is positive (see [@Fr]). Also, we know that ${{\mathcal H}}(t)$ consists of only one point $x(t)$, so that $${\mathop{\mathrm{dist}}}(x(t),{\Gamma})\ge{\mathop{\mathrm{dist}}}({\heartsuit}({\Omega}),{\Gamma}).$$ The set ${\heartsuit}({\Omega})$ contains many notable geometric points of ${\Omega}$, such as the [*center of mass*]{}, the [*incenter*]{}, the [*circumcenter*]{}, and others; see [@BM], where further properties of the heart of a convex body are presented. See also [@Sk] for related research on this issue. As is clear from [@BMS], the estimate just presented is of a purely geometric nature, that is, it only depends on the lack of symmetry of ${\Omega}$ and does not depend on the particular equation we are considering in ${\Omega}$, as long as the equation is invariant under reflections. A different way to estimate the location of the hot spot of a grounded convex heat conductor, or the maximum point of the solution of certain elliptic equations, is based on ideas related to the Alexandrov–Bakelman–Pucci maximum principle and does take into account the information that comes from the relevant equation.
For instance, in [@BMS] it is proved that the maximum point $x_\infty$ of $\phi_1$ in ${\overline}{{\Omega}}$ is such that $$\label{bms} {\mathop{\mathrm{dist}}}(x_\infty,{\Gamma})\ge C_N\,r_{\Omega}\,\left(\frac{r_{\Omega}}{{\mathrm{diam}}({\Omega})}\right)^{N^2-1},$$ where $C_N$ is a constant depending only on $N$, $r_{\Omega}$ is the [*inradius*]{} of ${\Omega}$ (the radius of a largest ball contained in ${\Omega}$) and ${\mathrm{diam}}({\Omega})$ is the [*diameter*]{} of ${\Omega}$. The idea of the proof of is to compare the [*concave envelope*]{} $f$ of $\phi_1$ — the smallest concave function above $\phi_1$ — and the function $g$ whose graph is the surface of the (truncated) cone based on ${\Omega}$ and having its tip at the point $(x_\infty, \phi_1(x_\infty))$ (see Fig. 4.4). \[fig:envelope\] ![The concave envelope of $\phi_1$ and the cone $g$. The dashed cap is the image $f(C)=\phi_1(C)$ of the contact set $C$.](Figure12.pdf "fig:") Since $f\ge g$ and $f(x_\infty)=g(x_\infty)$, we can compare their respective [*sub-differential images*]{}: $$\begin{aligned} &{\partial}f({\Omega})=\bigcup\limits_{x\in{\overline}{{\Omega}}}\left\{ p\in{\mathbb{R}}^N: f(x)+p\cdot(y-x)\ge f(y) \ \mbox { for } \ y\in{\overline}{{\Omega}}\right\},\\ &{\partial}g({\Omega})=\bigcup\limits_{x\in{\overline}{{\Omega}}}\left\{ p\in{\mathbb{R}}^N: g(x)+p\cdot(y-x)\ge g(y) \ \mbox { for } \ y\in{\overline}{{\Omega}}\right\};\end{aligned}$$ in fact, it holds that ${\partial}g({\Omega})\subseteq{\partial}f({\Omega})$.
Now, ${\partial}g({\Omega})$ has a precise geometrical meaning: it is the set $\phi_1(x_\infty)\,{\Omega}^*$, that is, a multiple of the [*polar set*]{} of ${\Omega}$ with respect to $x_\infty$, defined by $${\Omega}^*=\{y\in{\mathbb{R}}^N: (x-x_\infty)\cdot(y-x_\infty)\le 1 \ \mbox{ for every } \ x\in{\overline}{{\Omega}}\}.$$ The volume $|{\partial}f({\Omega})|$ can be estimated by the change-of-variables formula to obtain: $$\phi_1(x_\infty)^N |{\Omega}^*|=|{\partial}g({\Omega})|\le|{\partial}f({\Omega})|\le\int_C |\det D^2 f|\,dx=\int_C |\det D^2 \phi_1|\,dx,$$ where $C=\{ x\in{\overline}{{\Omega}}: f(x)=\phi_1(x)\}$ is the [*contact set*]{}. Since the determinant and the trace of a matrix are the product and the sum of its eigenvalues, by the [*arithmetic-geometric mean inequality*]{} we have that $|\det D^2 \phi_1|\le (-{\Delta}\phi_1/N)^N$, and hence we can infer that $$|{\Omega}^*|\le \,\int_C \left[\frac{-{\Delta}\phi_1}{N \phi_1(x_\infty)}\right]^Ndx= \int_C \left[\frac{{\lambda}_1({\Omega})\,\phi_1}{N \phi_1(x_\infty)}\right]^Ndx\le \left[\frac{{\lambda}_1({\Omega})}{N}\right]^N |{\Omega}|,$$ since $\phi_1\le\phi_1(x_\infty)$ in ${\overline}{{\Omega}}$. Finally, in order to get explicitly, one has to bound $|{\Omega}^*|$ from below by the volume of the polar set of a suitable half-ball containing ${\Omega}$, and ${\lambda}_1({\Omega})$ from above by the [*isodiametric inequality*]{} (see [@BMS] for details). The two methods we have seen so far give estimates of how far the hot spot must be from the boundary. We now present a method, due to Grieser and Jerison [@GJ], that gives an estimate of how far the hot spot can be from a specific point in the domain. The idea is to adapt the classical [*method of separation of variables*]{} to construct a suitable approximation $u$ of the first Dirichlet eigenfunction $\phi_1$ in a planar convex domain.
Clearly, if ${\Omega}$ were a rectangle, say $[a,b]\times[0,1]$, then that approximation would be exact: in fact, up to a normalizing constant, $$u(x,y)=\phi_1(x,y)=\sin[\pi (x-a)/(b-a)]\,\sin(\pi y).$$ If ${\Omega}$ is not a rectangle, after some manipulations we can suppose that $${\Omega}=\{(x,y): a< x< b, f_1(x)<y<f_2(x)\}$$ where, in $[a,b]$, $f_1$ is convex, $f_2$ is concave and $$0\le f_1\le f_2\le 1 \ \mbox{ and } \ \min_{[a,b]} f_1=0, \ \max_{[a,b]} f_2=1$$ (see Fig. 4.5). \[fig:long convex\] ![Estimating the hot spot in the “long” convex set ${\Omega}$.](Figure13.pdf "fig:") The geometry of ${\Omega}$ does not allow one to find a solution by separation of variables as in the case of the rectangle. However, one can operate “as if” that separation were possible. To understand this, consider the length of the vertical cross-section at $x$, $$h(x)=f_2(x)-f_1(x) \ \mbox{ for } \ a\le x\le b,$$ and notice that, if we set $${\alpha}(x,y)=\pi\,\frac{y-f_1(x)}{h(x)},$$ the function $$e(x,y)=\sqrt{2/h(x)}\,\sin{\alpha}(x,y) \ \mbox{ for } \ f_1(x)\le y\le f_2(x),$$ satisfies, for each fixed $x$, the problem $$e_{yy}+\frac{\pi^2}{h(x)^2}\, e=0 \ \mbox{ in } \ (f_1(x), f_2(x)), \quad e(x,f_1(x))=e(x, f_2(x))=0$$ — thus, it is the first Dirichlet eigenfunction in the interval $(f_1(x), f_2(x))$, normalized in the space $L^2([f_1(x), f_2(x)])$.
The basic idea is then that $\phi_1(x,y)$ should be (and in fact is) well approximated by its lowest Fourier mode in the $y$-direction, computed for each fixed $x$, that is, by the projection of $\phi_1$ along $e$: $$\psi(x)\,e(x,y) \ \mbox { where } \ \psi(x)=\int_{f_1(x)}^{f_2(x)} \phi_1(x,\eta)\,e(x,\eta)\,d\eta.$$ To simplify matters, a further approximation is needed: it turns out that $\psi$ and its first derivative can be well approximated by $\phi/\sqrt{2}$ and its derivative, where $\phi$ is the first eigenfunction of the problem $$\phi''(x)+\left[\mu-\frac{\pi^2}{h(x)^2}\right] \phi(x)=0 \ \mbox{ for } \ a<x<b, \quad \phi(a)=\phi(b)=0.$$ Since, near the maximum point $x_1$ of $\phi$, $|\phi'(x)|$ can be bounded from below by a constant times $|x-x_1|$, the chain of approximations just constructed gives that, if $(x_0,y_0)$ is the maximum point of $\phi_1$ on ${\overline}{{\Omega}}$, then there is an absolute constant $C$ such that $$|x_1-x_0|\le C.$$ The constant $C$ is independent of ${\Omega}$, but the result clearly has no content unless $b-a>C$. [**Unbounded conductor.**]{} If ${\Omega}$ is unbounded, by working with suitable barriers one can still prove formula when ${\varphi}\equiv 1$ (see [@MS3; @MS4]), the convergence holding uniformly on compact subsets of ${\Omega}$. Thus, any hot spot $x(t)$ will again satisfy . To the author’s knowledge, [@JS] is the only reference in which the behavior of hot spots for large times has been studied for some grounded unbounded conductors. There, the cases of a half-space ${\mathbb{R}}_+^N=\{x\in{\mathbb{R}}^N: x_N>0\}$ and of the exterior of a ball $B^c=\{ x\in{\mathbb{R}}^N: |x|>1\}$ are considered.
It is shown that there is a time $T>0$ such that, for $t>T$, the set ${{\mathcal H}}(t)$ consists of only one hot spot $x(t)=(x_1(t),\dots, x_N(t))$ and $$x_j(t)\to\frac{\int_{{\mathbb{R}}^{N-1}}y_j y_N{\varphi}(y') dy'}{\int_{{\mathbb{R}}^{N-1}}y_N{\varphi}(y')dy'}, \ 1\le j\le N-1, \quad \frac{x_N(t)}{\sqrt{2t}}\to 1 \ \mbox{ as } \ t\to\infty,$$ if ${\Omega}={\mathbb{R}}^N_+$, while for ${\Omega}=B^c$, if ${\varphi}$ is radially symmetric, then there is a time $T>0$ such that ${{\mathcal H}}(t)=\{ x\in{\mathbb{R}}^N: |x|=r(t)\}$ for $t>T$, where $r(t)$ is some smooth function of $t$ such that $$\limsup_{t\to\infty} r(t)=\infty.$$ Upper bounds for ${{\mathcal H}}(t)$ are also given in [@JS] for the case of the exterior of a smooth bounded domain. Hot spots in an insulated conductor ----------------------------------- We conclude this survey by giving an account of the so-called [*hot spot conjecture*]{} by J. Rauch [@Ra]. This is related to the asymptotic behavior of hot spots in a [*perfectly insulated*]{} heat conductor modeled by the following initial-boundary value problem: $$\label{heat-neumann} h_t={\Delta}h \ \mbox{ in } \ {\Omega}\times(0,\infty), \quad h={\varphi}\ \mbox{ on } \ {\Omega}\times\{ 0\}, \quad {\partial}_\nu h=0 \ \mbox{ on } \ {\Gamma}\times(0,\infty).$$ Observe that, similarly to , a spectral formula also holds for the solution of : $$\label{spectral-neumann} h(x,t)=\sum_{n=1}^\infty \widehat{{\varphi}}(n)\, \psi_n(x)\, e^{-\mu_n t}, \ \mbox{ for } \ x\in{\overline}{{\Omega}} \ \mbox{ and } \ t>0.$$ Here $\{\mu_n\}_{n\in{\mathbb{N}}}$ is the non-decreasing sequence of Neumann eigenvalues and $\{\psi_n\}_{n\in{\mathbb{N}}}$ is a complete orthonormal system in $L^2({\Omega})$ of eigenfunctions corresponding to the $\mu_n$’s, that is, $ \psi_n$ is a non-zero solution of $$\label{neumann} {\Delta}\psi+\mu\,\psi=0 \ \mbox{ in } \ {\Omega}, \quad {\partial}_\nu\psi=0 \ \mbox{ on } \ {\Gamma},$$ with $\mu=\mu_n$.
The numbers $\widehat{{\varphi}}(n)$ are the Fourier coefficients of ${\varphi}$ corresponding to $\psi_n$, that is, $$\widehat{{\varphi}}(n)=\int_{\Omega}{\varphi}(x)\,\psi_n(x)\,dx, \ n\in{\mathbb{N}}.$$ Since $\mu_1=0$ and $\psi_1=1/\sqrt{|{\Omega}|}$, we can infer that $$\label{spectral-neumann1} e^{\mu_m t} \left[h(x,t)-\frac1{\sqrt{|{\Omega}|}}\,\int_{\Omega}{\varphi}\,dx\right]\to \sum_{n=m}^{m+k-1} \widehat{{\varphi}}(n)\,\psi_n(x) \ \mbox{ as } \ t\to\infty,$$ where $m$ is the first integer such that $\widehat{{\varphi}}(m)\not=0$ and $m+1, \dots, m+k-1$ are all the integers such that $\mu_m=\mu_{m+1}=\cdots=\mu_{m+k-1}$. Thus, similarly to what happens in the case of a grounded conductor, as $t\to\infty$, a hot spot $x(t)$ of $h$ tends to a maximum point of the function on the right-hand side of . Now, roughly speaking, the conjecture states that, for “most” initial conditions ${\varphi}$, the distance from ${\Gamma}$ of any hot and cold spot of $h$ must tend to zero as $t\to\infty$, and hence it amounts to proving that the right-hand side of attains its maximum and minimum at points in ${\Gamma}$. It should be noticed now that the quotes around the word [*most*]{} are justified by the fact that the conjecture does not hold for all initial conditions. In fact, as shown in [@BB], if ${\Omega}=(0,2\pi)\times(0,2\pi)\subset{\mathbb{R}}^2$, the function defined by $$h(x_1,x_2, t)=-e^{-t} (\cos x_1+\cos x_2), \ (x_1,x_2)\in{\Omega}, \ t>0,$$ is a solution of — with ${\varphi}(x_1,x_2)=-(\cos x_1+\cos x_2)$ — that attains its maximum at the interior point $(\pi,\pi)$ for any $t>0$. However, it turns out that in this case $ h(x_1,x_2, t)=-e^{-\mu_4 t} \psi_4(x_1,x_2).
$ Thus, it is wiser to rephrase the conjecture by asking whether or not the hot and cold spots tend to ${\Gamma}$ if the coefficient $\widehat{{\varphi}}(2)$ of the first non-constant eigenfunction $\psi_2$ is not zero or, which is the same, whether or not the maximum and minimum points of $\psi_2$ in ${\overline}{{\Omega}}$ are attained only on ${\Gamma}$. In [@Ka], a weaker version of this last statement is proved to hold for domains of the form $D\times(0,a)$, where $D\subset{\mathbb{R}}^{N-1}$ has a boundary of class $C^{0,1}$. In [@Ka], the conjecture has also been reformulated for convex domains. Indeed, we now know that it is false for fairly general domains: in [@BW] a planar domain with two holes is constructed, having a simple second eigenvalue and such that the corresponding eigenfunction attains its strict maximum at an interior point of the domain. It turns out that in that example the minimum point is on the boundary. Nevertheless, in [@BaB] an example is given of a domain whose second Neumann eigenfunction attains both its maximum and its minimum at interior points. In both examples the conclusion is obtained by probabilistic methods. Besides [@Ka], positive results on this conjecture can be found in [@AB; @BB; @BPP; @Do; @JN; @Mi; @Pa; @Si]. In [@BB], the conjecture is proved for planar convex domains ${\Omega}$ with two orthogonal axes of symmetry and such that $$\frac{\mbox{diam(${\Omega}$)}}{\mbox{width(${\Omega}$)}}>1.54.$$ This restriction is removed in [@JN]. In [@Pa], ${\Omega}$ is assumed to have only one axis of symmetry, but $\psi_2$ is assumed to be anti-symmetric with respect to that axis. A more general result is contained in [@AB]: the conjecture holds true for domains of the type $${\Omega}=\{(x_1,x_2): f_1(x_1)<x_2<f_2(x_1)\},$$ where $f_1$ and $f_2$ have Lipschitz constant $1$. In [@Do], a modified version is considered: it holds true for general domains, if [*vigorous maxima*]{} are considered (see [@Do] for the definition).
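As a quick sanity check, the separable counterexample of [@BB] recalled above can be verified directly: $h_t-{\Delta}h$ vanishes, the normal derivative $h_{x_1}=e^{-t}\sin x_1$ vanishes on the sides $x_1=0$ and $x_1=2\pi$ (and symmetrically in $x_2$), and the spatial maximum is attained where both cosines equal $-1$. A minimal finite-difference sketch (the sample points below are arbitrary choices):

```python
import math

def h(x1, x2, t):
    # The separable solution on the square (0, 2*pi)^2 recalled from [BB].
    return -math.exp(-t) * (math.cos(x1) + math.cos(x2))

def heat_residual(x1, x2, t, eps=1e-5):
    """h_t - Delta h at a point, via central finite differences."""
    ht = (h(x1, x2, t + eps) - h(x1, x2, t - eps)) / (2 * eps)
    lap = ((h(x1 + eps, x2, t) - 2 * h(x1, x2, t) + h(x1 - eps, x2, t))
           + (h(x1, x2 + eps, t) - 2 * h(x1, x2, t) + h(x1, x2 - eps, t))) / eps ** 2
    return ht - lap

def argmax_on_slice(t, n=2000):
    """Maximum point of x1 -> h(x1, pi, t) on a grid over [0, 2*pi]."""
    xs = [2 * math.pi * k / n for k in range(n + 1)]
    return max(xs, key=lambda x: h(x, math.pi, t))

# The residual is ~0 (h solves the heat equation) and the maximum on the
# slice x2 = pi sits at the interior point x1 = pi, not on the boundary.
print(heat_residual(1.0, 2.0, 0.5), argmax_on_slice(1.0))
```

The finite-difference residual is of the order of the rounding noise, which is enough to confirm that this explicit $h$ is an exact caloric function with vanishing Neumann data.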
If no symmetry is assumed for a convex domain ${\Omega}$, Y. Miyamoto [@Mi] has verified the conjecture when $$\frac{{\mathrm{diam}}({\Omega})^2}{|{\Omega}|}<1.378$$ (for a disk, this ratio is about 1.273). For unbounded domains, the situation changes. For the half-space, Jimbo and Sakaguchi proved in [@JS] that there is a time $T$ after which the hot spot equals a point on the boundary that depends on ${\varphi}$. In [@JS], the case of the exterior ${\Omega}$ of a ball ${\overline}{B_R}$ is also considered for a radially symmetric ${\varphi}$. For a suitably general ${\varphi}$, Ishige [@Is1] has proved that the behavior of the hot spot is governed by the point $$A_{\varphi}=\frac{\displaystyle \int_{\Omega}x\,\left(1+\frac{R^N}{N-1}\,|x|^{-N}\right)\,{\varphi}(x)\,dx}{\displaystyle \int_{\Omega}{\varphi}(x)\,dx}.$$ If $A_{\varphi}\in B_R$, then ${{\mathcal H}}(t)$ tends to the boundary point $R\,A_{\varphi}/|A_{\varphi}|$, while if $A_{\varphi}\notin B_R$, then ${{\mathcal H}}(t)$ tends to $A_{\varphi}$ itself. Results concerning the behavior of hot spots for parabolic equations with a rapidly decaying potential can be found in [@IK1; @IK2]. [99]{} , [*Critical points of solutions of elliptic equations in two variables*]{}, Ann. Scuola Norm. Super. Pisa Cl. Sci. (4) [**14**]{} (1987), 229–256. , [*Stable determination of conductivity by boundary measurements*]{}, Appl. Anal. [**27**]{} (1988), 153–172. , [*Singular solutions of elliptic equations and the determination of conductivity by boundary measurements*]{}, J. Differential Equations [**84**]{} (1990), 252–272. , [*The index of isolated critical points and solutions of elliptic equations in the plane*]{}, Ann. Scuola Norm. Super. Pisa Cl. Sci. (4) [**19**]{} (1992), 567–589. , [*Elliptic equations in divergence form, geometric critical points of solutions, and Stekloff eigenfunctions*]{}, SIAM J. Math. Anal. [**25**]{} (1994), 1259–1268. 
, [*Symmetry and non–symmetry for the overdetermined Stekloff eigenvalue problem*]{}, Z. Angew. Math. Phys. [**45**]{} (1994), 44–52. , [*Symmetry and non-symmetry for the overdetermined Stekloff eigenvalue problem. II*]{}, in Nonlinear problems in applied mathematics, 1–9, SIAM, Philadelphia, PA, 1996. , [*Local behavior and geometric properties of solutions to degenerate quasilinear elliptic equations in the plane*]{}, Appl. Anal. [**50**]{} (1993), 191–215. , [*On Neumann eigenfunctions in lip domains*]{}, J. Amer. Math. Soc. [**17**]{} (2004), 243–265. , [*On the "hot spots” conjecture of J. Rauch*]{}, J. Funct. Anal. [**164**]{} (1999), 1–33. , [*Brownian motion with killing and reflection and the "hot-spots” problem*]{}, Probab. Theory Related Fields [**130**]{} (2004), 56–68. , [*Fiber Brownian motion and the "hot spots” problem*]{}, Duke Math. J. [**105**]{} (2000), 25–58. , Kernel Functions and Differential Equations in Mathematical Physics. Academic Press, New York, 1953. , [*Function–theoretical properties of solutions of partial differential equations of elliptic type*]{}, Ann. Math. Stud. [**33**]{} (1954), 69–94. , [*On a representation theorem for linear systems with discontinuous coefficients and its applications*]{}, in Convegno Internazionale sulle Equazioni Lineari alle Derivate Parziali, Cremonese, Roma 1955. , [*Power concavity for solutions of nonlinear elliptic problems in convex domains*]{}, in Geometric Properties for Parabolic and Elliptic PDE’s, 35–48, Springer, Milan, 2013. , [*On extensions of the Brunn-Minkowski and Prékopa-Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation*]{}, J. Funct. Anal. [**22**]{} (1976), 366–389. , [*The location of the hot spot in a grounded convex conductor*]{}, Indiana Univ. Math. J. [**60**]{} (2011), 633–659. , [*The heart of a convex body*]{}, in Geometric properties for parabolic and elliptic PDE’s, 49–66, Springer, Milan, 2013. 
, [*A counterexample to the "hot spots” conjecture*]{}, Ann. of Math. [**149**]{} (1999), 309–317. , [*Convexity properties of solutions to some classical variational problems*]{}, Comm. Partial Differential Equations [**7**]{} (1982), 1337–1379. , [*Convexity of solutions of semilinear elliptic equations*]{}, Duke Math. J. [**52**]{} (1985), 431–456. , [*Critical points of solutions of degenerate elliptic equations in the plane*]{}, Calc. Var. Partial Differential Equations [**39**]{} (2010), 121–138. , [*Convex domains with stationary hot spots*]{}, Math. Methods Appl. Sci. [**20**]{} (1997), 1163–1169. , [*Movement of hot spots in Riemannian manifolds*]{}, J. Anal. Math. [**55**]{} (1990), 271–286. , [*Critical sets of elliptic equations*]{}, Comm. Pure Appl. Math. [**68**]{} (2015), 173–209. , [*Maxima of Neumann eigenfunctions*]{}, J. Math. Phys. [**49**]{} (2008), 043506, 3 pp. , [*Critical points and level sets in exterior boundary problems*]{}, Indiana Univ. Math. J. [**58**]{} (2009), 1947–1969. , [*Critical points of Green’s functions on complete manifolds*]{}, J. Differential Geom. [**92**]{} (2012), 1–29. , [*Critical points and geometric properties of Green’s functions on open surfaces*]{}, Ann. Mat. Pura Appl. (4) [**194**]{} (2015), 881–901. , [*Eigenfunctions with prescribed nodal sets*]{}, J. Differential Geom. [**101**]{} (2015), 197–211. , [*On nodal sets and nodal domains on ${{\mathcal S}}^2$ and ${\mathbb{R}}^2$*]{}, Ann. Inst. Fourier [**57**]{} (2007), 2345–2360. , [*Partial Differential Equations*]{}, American Mathematical Society, Providence, RI, 1998. , [*An introduction to maximum principles and symmetry in elliptic problems*]{}, Cambridge University Press, Cambridge, 2000. , [*Starshapedness of level sets for solutions of nonlinear parabolic equations*]{}, Rend. Istit. Mat. Univ. Trieste [**28**]{} (1996), 49–62. , [*Lehrsatz*]{}, Werke, [**3**]{}, p. 112; [**8**]{}, p.32, 1816. 
, [*The size of the first eigenfunction of a convex planar domain*]{}, J. Amer. Math. Soc. [**11**]{} (1998), 41–72. , [*Nodal sets of harmonic functions*]{}, Pure Appl. Math. Q. [**3**]{} (2007), 647–688. , [*Critical sets of solutions to elliptic equations*]{}, J. Differential Geom. [**51**]{} (1999), 359–373. , [*On the local behavior of solutions of non-parabolic partial differential equations*]{}, Amer. J. Math. [**75**]{} (1953), 449–476. , [*Movement of hot spots on the exterior domain of a ball under the Neumann boundary condition*]{}, J. Differential Equations [**212**]{} (2005), 394–431. , [*Movement of hot spots on the exterior domain of a ball under the Dirichlet boundary condition*]{}, Adv. Differential Equations [**12**]{} (2007), 1135–1166. , [*Hot spots for the heat equation with a rapidly decaying negative potential*]{}, Adv. Differential Equations [**14**]{} (2009), 643–662. , [*Hot spots for the two dimensional heat equation with a rapidly decaying negative potential*]{}, Discrete Contin. Dyn. Syst. Ser. S, [**4**]{} (2011), 833–849. , [*$L^p$ norms of nonnegative Schrödinger heat semigroup and the large time behavior of hot spots*]{}, J. Funct. Anal. [**262**]{} (2012), 2695–2733. , [*Parabolic power concavity and parabolic boundary value problems*]{}, Math. Ann. [**358**]{} (2014), 1091–1117. , [*Recherches sur la théorie des équations*]{}, Acta Math. [**36**]{} (1913), 181–195. , [*Eigenfunctions with few critical points*]{}, J. Differential Geom. [**53**]{} (1999), 177–182. , [*Geometric properties of eigenfunctions*]{}, Russian Math. Surveys [**56**]{} (2001), 67–88. , [*The "hot spots” conjecture for domains with two axes of symmetry*]{}, J. Amer. Math. Soc. [**13**]{} (2000), 741–772. , [*Movement of hot spots over unbounded domains in ${\mathbb{R}}^N$*]{}, J. Math. Anal. Appl. [**182**]{} (1994), 810–835. , [*Number of nodal domains of eigenfunctions on non-positively curved surfaces with concave boundary*]{}, Math. Ann. 
[**364**]{} (2016), 813–840. , [*Number of nodal domains and singular points of eigenfunctions of negatively curved surfaces with an isometric involution*]{}, J. Differential Geom. [**102**]{} (2016), 37–66. , [*On nonpositive curvature functions on noncompact surfaces of finite topological type*]{}, Indiana Univ. Math. J. [**43**]{} (1994), 775–804. , [*Rearrangements and convexity of level sets in PDE*]{}, Springer, Berlin, 1985. , [*Power concavity and boundary value problems*]{}, Indiana Univ. Math. J. [**34**]{} (1985), 687–704. , [*Convex solutions to nonlinear elliptic and parabolic boundary value problems*]{}, Indiana Univ. Math. J. [**32**]{} (1983), 603–614. , [*Convex solutions of certain elliptic equations have constant rank Hessians*]{}, Arch. Ration. Mech. Anal. [**97**]{} (1987), 19–32. , [*A Primer of Real Analytic Functions*]{}, Birkhäuser, Basel, 2002. , [*Quasiconformal Mappings in the Plane*]{}, Springer, Berlin, 1973. , [*Symmetric Green’s functions on complete manifolds*]{}, Amer. J. Math. [**109**]{} (1987), 1129–1154. , [*Propriétés géométriques des fractions rationelles*]{}, Paris Comptes Rendus [**78**]{} (1874), 271–274. , [*The spatial critical points not moving along the heat flow*]{}, J. Anal. Math. [**71**]{} (1997), 237–261. , [*On heat conductors with a stationary hot spot*]{}, Ann. Mat. Pura Appl. (4) 183 (2004), 1–23. , [*Polygonal heat conductors with a stationary hot spot*]{}, J. Anal. Math. [**105**]{} (2008), 1–18. , [*Interaction between nonlinear diffusion and geometry of domain*]{}, J. Differential Equations [**252**]{} (2012), 236–257. , [*Matzoh ball soup revisited: the boundary regularity issue*]{}, Math. Methods Appl. Sci. [**36**]{} (2013), 2023–2032. , [*The Geometry of the Zeros of a Polynomial in a Complex Variable*]{}, American Mathematical Society, New York, N. Y., 1949. , [*The "hot spots” conjecture for a certain class of planar convex domains*]{}, J. Math. Phys. [**50**]{} (2009), 103530, 7 pp. 
, [*Relations between the critical points of a real function of n independent variables*]{}, Trans. Amer. Math. Soc. [**27**]{} (1925), 345–396. , [*Critical point theory in global analysis and differential topology: An introduction*]{}, Academic Press, New York-London 1969. , [*Minimal unfolded regions of a convex hull and parallel bodies*]{}, preprint (2012) arXiv:1205.0662v2. , [*Scaling coupling of reflecting Brownian motions and the hot spots problem*]{}, Trans. Amer. Math. Soc. [**354**]{} (2002), 4681–4702. , [*private communication*]{}, (2016). , [*An angle’s maximum principle for the gradient of solutions of elliptic equations*]{}, Boll. Unione Mat. Ital. [**1**]{} (1987), 135–139. , [*Five problems: an introduction to the qualitative theory of partial differential equations*]{}, in Partial differential equations and related topics, 355–369, Springer, Berlin, 1975. , [*A relation between the type numbers of a critical point and the index of the corresponding field of gradient vectors*]{}, Math. Nachr. [**4**]{} (1950-51), 12–27. , [*Critical points of solutions to the obstacle problem in the plane*]{}, Ann. Sc. Norm. Super. Pisa Cl. Sci. (4) [**21**]{} (1994), 157–173. , [*private communication*]{}, (2008). , [*Movement of centers with respect to various potentials*]{}, Trans. Amer. Math. Soc. [**367**]{} (2015), 8347–8381. , [*Starshapedness of level sets of solutions to elliptic PDEs*]{}, Appl. Anal. [**84**]{} (2005), 1185–1197. , [*Combination and mean width rearrangements of solutions of elliptic equations in convex sets*]{}, Ann. Inst. H. Poincaré Analyse Non Linéaire [**32**]{} (2015), 763–783. , [*Hot spots conjecture for a class of acute triangles*]{}, Math. Z. [**280**]{} (2015), 783–806. , [*Idéaux de fonctions differentiables*]{}, Springer, Berlin-New York, 1972. , [*On the behavior of the fundamental solution of the heat equation with variable coefficients*]{}, Comm. Pure Appl. Math. [**20**]{} (1967), 431–455. 
, [*Generalized Analytic Functions*]{}, Pergamon Press, Oxford, 1962. , [*The Location of Critical Points of Analytic and Harmonic Functions*]{}, American Mathematical Society, New York, NY, 1950. , [*A function not constant on a connected set of critical points*]{}, Duke Math. J. [**1**]{} (1935), 514–517. , [*Problem section, Seminar on Differential Geometry*]{}, Ann. of Math. Stud. [**102**]{} (1982) 669–706. , [*Selected expository works of Shing-Tung Yau with commentary. Vol. I-II*]{}, International Press, Somerville, MA; Higher Education Press, Beijing, 2014. , [*private communication*]{}, (2016). [^1]: This assumption can be removed when ${\Omega}$ is simply connected, by using the analyticity of $u$ (see [@AM1]) [^2]: ${\heartsuit}({\Omega})$ has also been considered in [@O] under the name of [*minimal unfolded region*]{}.
--- abstract: 'We discuss the questions related to dark energy in the Universe. We note that in spite of the effect of dark energy, large-scale structure is still being generated in the Universe and this will continue for about ten billion years. We also comment on some statements in the paper “Dark energy and universal antigravitation” by A.D. Chernin \[4\].' author: - | V.N. Lukash$^\dagger$, V.A. Rubakov$^\ddagger$\ $^\dagger$[Astro Space Centre of P.N. Lebedev Physical Institute]{},\ [[email protected]]{};\ $^\ddagger$[Institute for Nuclear Research]{},\ [[email protected]]{} title: 'Dark energy: myths and reality[^1]' --- Introduction ============ The emergence of the idea that the entire visible Universe is permeated by a weakly interacting substance known as dark energy was the number one sensation in physics at the turn of the century and came as a complete surprise to most scientists, particularly those studying topics related both to cosmology and to particle physics. This is because the known energy scales of fundamental interactions are of the order of 1 GeV for the strong interaction and 100 GeV and $10^{19}$ GeV for the weak and gravitational interactions, respectively. Thus, there was no reason[^2] to assume that a new energy scale much smaller than the above-mentioned ones exists in nature. But it turned out that dark energy is characterized by the scale[^3] $E_{\rm V} \sim 10^{-3}$ eV defined by the relation $\rho _{\rm V} = E_{\rm V}^{\,4}$, where $\rho _{\rm V}$ is the dark energy density. Moreover, in the present-day Universe, the following equality is valid within an order of magnitude: $$\rho _{\rm V} \approx \rho _{\rm D} \approx \rho _{\rm B}\,, \eqno(1)$$ where $\rho _{\rm D}$ and $\rho _{\rm B}$ are mass densities of dark matter and baryons (protons and nuclei). And again, there are no clear a priori reasons for this equality.
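The quoted scale $E_{\rm V} \sim 10^{-3}$ eV follows directly from $\rho _{\rm V} = E_{\rm V}^{\,4}$ and the standard numbers for the critical density. A minimal numerical check (a Python sketch; the value $\Omega _{\rm V} \approx 0.7$ is an assumption consistent with the data discussed later in this paper):

```python
import math

G = 6.674e-8            # gravitational constant, cm^3 g^-1 s^-2
hbar_eV_s = 6.582e-16   # hbar, eV s
Mpc_cm = 3.086e24       # cm per Mpc
M_P_eV = 1.22e28        # Planck mass, eV
Omega_V = 0.7           # assumed dark energy fraction

H0 = 70e5 / Mpc_cm      # Hubble parameter 70 km/s/Mpc, in s^-1

# Critical density rho_c = 3 H0^2 / (8 pi G), in g/cm^3
rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)
print(f"rho_c ~ {rho_c:.1e} g/cm^3")       # ~9e-30, i.e. of order 10^-29 g/cm^3

# In natural units rho_c = 3 H0^2 M_P^2 / (8 pi); E_V = (Omega_V rho_c)^(1/4)
H0_eV = H0 * hbar_eV_s
E_V = (Omega_V * 3.0 * H0_eV**2 * M_P_eV**2 / (8.0 * math.pi)) ** 0.25
print(f"E_V ~ {E_V:.1e} eV")               # ~2e-3 eV, i.e. the 10^-3 eV scale
```

The estimate confirms that the dark energy scale sits some fifteen orders of magnitude below the known interaction scales.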
We point out that the approximate relation $\rho _{\rm D} \approx \rho _{\rm B}$ has been valid at each instant of cosmological evolution since the baryon asymmetry emerged and dark matter was generated, because $\rho _{\rm D}$ and $\rho _{\rm B}$ decrease with the expansion of the Universe at the same quite high rate. On the other hand, $\rho _{\rm V}$ does not depend, or barely depends, on time; hence, it is today, i.e., after the structure has appeared and stars have been formed, that the first equality in (1) is satisfied. It is certainly not easy to admit that relation (1) holds merely accidentally. Because the properties of dark energy are very interesting and the problem itself is fundamental, it is important to understand what kind of data made scientists believe that dark energy does exist. This knowledge is necessary when we try to find explanations, which may seem exotic today, of why the expansion of the Universe is accelerating and when we choose key experiments to verify various hypotheses. We mention one such attempt below. Attempts to explain approximate relation (1) also deserve attention. These issues are primarily considered in A.D. Chernin’s paper \[4\]. However, we believe that that paper somewhat mythologizes dark energy because on the one hand, some crucial results are hardly mentioned there, and on the other hand, some issues discussed in that article have nothing to do with dark energy. Besides, the attempt to explain relation (1) using some ‘internal symmetry in cosmology’ is, to put it mildly, highly disputable. In this paper, we try to separate the dark energy mythology from the real state of affairs. Structure argument and supernovae ================================= Type-Ia supernovae are often used (Chernin also writes about them) as the main observational argument confirming the existence of dark energy. But there are quite a number of other arguments, at least equally serious, based on combinations of cosmological data. 
Some of them were known before the observational data on type-Ia supernovae appeared, which had made several cosmologists \[5-9\] (see \[10\] for a review of the earlier papers) insist on the existence of dark energy in nature even before the first results on supernovae were available. One of the independent arguments is as follows. By the mid-1990s, the analysis of galactic catalogues aimed at revealing the distribution of matter in space, the use of various methods to determine the mass of clustered (clumped) matter[^4] and measurements of the cosmic microwave background and the Hubble parameter had already led to the conclusion that the total mass density of nonrelativistic matter, which constitutes the inhomogeneous structure of our Universe, such as galaxies and their formations (groups, clusters, filaments, walls, superclusters, voids), is not greater than 30 percent of the critical density $\rho _{\rm c}$: $$\Omega _{\rm M} \equiv {\rho _{\rm M}\over \rho _{\rm c}} \,\le \,0.3 \,, \eqno(2)$$ where $$\rho _{\rm M} = \rho _{\rm D} + \rho _{\rm B} \,, \qquad \rho _{\rm c} = {3 H_0^{\,2}\over 8 \pi G} \simeq 10^{-29}\;{\rm g\;cm}^{-3}\,,$$ and $H_0 \simeq 70\,{\rm km}\,{\rm s}^{-1}{\rm Mpc}^{-1}$ is the Hubble parameter. Result (2) is one of the most important facts in modern cosmology. For a long time, it has been interpreted as evidence of the Universe having nonzero curvature. Indeed, if dark energy is not taken into account, then the Friedmann equation for the open cosmological model, written for the present epoch, reduces to the relation $$\rho _{\rm c} = \rho _{\rm M} + {3\over 8 \pi G R_\kappa ^2} \,,$$ where $R_\kappa$ is the present curvature radius of space. According to (2), the curvature (the second term in the right-hand side) dominates, giving not less than $0.7 \rho _{\rm c}$. But this interpretation faced difficulties. 
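Under the curvature interpretation, the numbers in (2) fix the curvature radius through the relation above. A quick check (Python sketch; $\Omega _{\rm M} = 0.3$ is assumed to saturate bound (2)):

```python
import math

c_light = 2.998e5      # speed of light, km/s
H0 = 70.0              # km/s/Mpc
Omega_M = 0.3          # assumed to saturate bound (2)

R_H = c_light / H0                      # Hubble radius c/H0, in Mpc
# If the missing (1 - Omega_M) rho_c were curvature:
# 3/(8 pi G R_k^2) = (1 - Omega_M) rho_c  =>  R_k = (c/H0) / sqrt(1 - Omega_M)
R_k = R_H / math.sqrt(1.0 - Omega_M)
print(f"Hubble radius    ~ {R_H:.0f} Mpc")   # ~4280 Mpc
print(f"curvature radius ~ {R_k:.0f} Mpc")   # ~5120 Mpc, comparable to the horizon
```

A curvature radius comparable to the present horizon is exactly what the CMB measurements discussed next were able to exclude.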
First, from the theoretical standpoint, a distinctly nonzero spatial curvature is almost incompatible with the inflationary Universe paradigm because inflationary models without fine tuning result in extremely small values of the spatial curvature $R^{\,-2}_\kappa$. Second, the present age of the Universe in the open model is around 11 billion years, whereas estimates of the age of the oldest objects in the Universe (for example, globular clusters) have yielded greater values of 12 to 14 billion years. There were also other arguments against the open model with a large spatial curvature. If the spatial curvature is zero, result (2) suggests that not less than 70% of the energy density in the modern Universe is due to matter of a type that cannot be perturbed by gravitational fields of the structures and remains unclumped (unclustered) in the course of cosmological evolution. This implies that the effective pressure of the matter is negative[^5] and its absolute value is sufficiently large, i.e., $p \approx -\rho$. Hence, this is dark energy. The model with spatial curvature was finally discarded based on measurements of the cosmic microwave background anisotropy, or, to be more precise, the determination of the first peak position in the angular spectrum of the anisotropy, this peak being most sensitive to the value of the spatial curvature. Thanks to these measurements, it was already clear in 1999–2000 that the three-dimensional space is Euclidean to high precision (i.e., $R_\kappa ^{\,-1}$ is close to zero). Here, a key role was played by the balloon experiments BOOMERANG (Balloon Observations of Millimetric Extragalactic Radiation and Geophysics) and MAXIMA (Millimeter Wave Anisotropy Experiment Imaging Array) \[11–14\]. Later, the WMAP (Wilkinson Microwave Anisotropy Probe) experiment and others confirmed this result.
Thus, the total energy density of all sorts of matter must indeed coincide with the critical density and, hence, dark energy does exist in nature. The structure argument based on the measurements of the microwave radiation anisotropy and polarization, combined with data on the large-scale structure of the Universe, presently provides clear evidence for the existence of dark energy. We also note the integrated Sachs–Wolfe effect, which has recently been confirmed in observations. In the future, this effect should become one of the most precise methods to measure the properties of dark energy \[15\]. To illustrate the importance of the combination of all cosmological data on dark energy, we mention, for example, an attempt to explain the type-Ia supernova observational data alternatively (see review \[16\] and a recent discussion in \[17\]) based on an assumption that matter density in our part of the Universe is significantly lower than the average value. Analysis shows \[17\] that this model may be consistent not only with the supernova data but also with those on microwave radiation. But whether the model will fit the results on large-scale structure and other cosmological data is highly questionable. Hubble flows and their distortion ================================= Among the independent arguments supporting the existence of dark energy, Chernin considers the measurements of local cold flow (the author uses the term ‘Hubble flow,’ which is not quite accurate), which he discusses in great detail. Unfortunately, here we face a mythologization of dark energy. The main theses in paper \[4\] and some previous articles by Chernin et al. are based on the statement that Hubble’s law manifests itself even at cosmologically small distances, which is explained by antigravitation.
For example, on pages 278 and 279, we read, “...antigravitation is actually capable of driving galaxies’ motion almost in the entire range of cosmological distances, both at the global, ‘genuinely’ cosmological scales and at scales of just a few megaparsecs,” “...antigravitation also dominates in our nearest galactic environment at the distance of just 1-2 Mpc from the Milky Way,” “...it is the dark energy... that actually lies behind Hubble’s discovery and makes sense of it for cosmology,” “...dark energy can be... measured in every place where a regular outflow of galaxies is observed.” But we show in this section that, as a matter of fact, dark energy has not yet influenced the local velocity distribution to the full extent, and the expansion in accordance with the Hubble law starting from the scale of several megaparsecs is excluded. What does define the local flow properties is the profile of the spatial density perturbation spectrum. The initial Hubble flows existed throughout the entire Universe. The flows in different regions were destroyed at different times, in a manner that directly depended on the forming structure. As is known, the structure of the Universe has resulted from gravitational amplification of density perturbations whose initial amplitudes were about $10^{-5}$ for wavelengths equal to the Hubble size at that time. The perturbations were growing faster for short waves. As a result, the nonlinearity scale at which the Hubble flows are completely destroyed (the dark matter and baryon perturbations $\delta _{\rm M} \equiv \delta \rho _{\rm M} / \rho _{\rm M} \sim 1$), was increasing with time. In the present-day Universe, the average value of this scale is around 15 Mpc, varying, however, between different regions of the Universe. For example, it is smaller far away from galaxy clusters (which are the most massive gravitationally bound formations).
In particular, the nonlinearity scale in our local region is around 2 Mpc (the size of the Local Group of galaxies). In quasilinear regions, where the density perturbations are still not high ($\delta _{\rm M} < 1$), galaxies continue outflowing in accordance with the initial conditions. But the Hubble flows in such regions are also distorted. In the future, in tens of billions of years, peculiar velocities will fade out because of the dynamic influence of dark energy, and the motion of galaxies will obey the Hubble law[^6] again, as in the early Universe. This is our main disagreement with the thesis in \[4\]. Chernin believes that outside the gravitationally bound regions peculiar velocities of the galaxies have faded out owing to the dynamic influence of dark energy, and the motion obeys the Hubble law. In this section, we show that such recovery of the Hubble flows is only possible in the distant future (if dark energy has vacuum properties), as opposed to today, when the peculiar velocities in the Universe are at their maximum values over its entire history and are caused by inhomogeneities of the matter density.
Inhomogeneous Universe ---------------------- At the quasilinear stage, our Universe is described \[18\] by the generalized Friedmann equation[^7] $$\biggl ({\dot b\over H_{\rm V}}\biggr )^2 = {c\over b} + b^{\,2} -\kappa \equiv f^{\;2}(b) -\kappa ({\bf x})\,, \eqno(3)$$ where $(t,{\bf x})$ are Lagrangian coordinates comoving with matter (the matter 4-velocity is $u_\alpha =t_{,\alpha }$), $b= b(t,{\bf x})$ is the volume expansion scale factor \[the comoving matter density is equal to $\rho_{\rm M}=3c H_{\rm V}^{\,2}/(8\pi Gb^{\,3})$\], $H_{\rm V}=H_0\sqrt {\Omega _{\rm V}}\simeq 2\times 10^{-4}\,{\rm Mpc}^{-1}$ is the Hubble parameter of dark energy, and $$c\equiv {\Omega _{\rm M}\over \Omega _{\rm V}} = {\Omega _{\rm M}\over 1-\Omega _{\rm M}}\simeq 0.39 \,.$$ The function $$f\,(b)\equiv \biggl ({c\over b}+b^{\,2}\biggr )^{1/2}\ge 1$$ has a minimum $f_{\min} \simeq 1$ at $b_{\min}^{-1}\simeq 1.7$. An arbitrary small function $\kappa = \kappa ({\bf x})$ of spatial coordinates describes the local spatial curvature. We are interested in the spatial regions where the right-hand side[^8] of Eqn (3) is positive: $$\kappa ({\bf x}) <1\,. \eqno(4)$$ If this condition is satisfied, the matter density decreases with time monotonically. When $\kappa =0$, the volume and background scale factors are equal (although the expansion anisotropy remains large; see Section 3.2), $$b =a(t)\equiv {1\over 1+z}\,,\qquad H\equiv {\dot a\over a}=H_{\rm V}\,{\,f\,(a)\over a}\,, \eqno(5)$$ where $\,f=f\,(a)$ is the growth rate factor of the Hubble velocity (${\bf V}_{\rm H}=f\,H_{\rm V}{\bf x}$).
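The quoted minimum of $f(b)$ is easy to verify directly (a Python sketch; $\Omega _{\rm M} = 0.28$ is an assumed value chosen to reproduce $c \simeq 0.39$ as in the text):

```python
import math

Omega_M = 0.28                       # assumed; gives c ~ 0.39 as in the text
c = Omega_M / (1.0 - Omega_M)

def f(b):
    """f(b) = (c/b + b^2)^(1/2) from the generalized Friedmann equation."""
    return math.sqrt(c / b + b * b)

# d(f^2)/db = -c/b^2 + 2b = 0  =>  b_min = (c/2)^(1/3)
b_min = (c / 2.0) ** (1.0 / 3.0)
print(f"1/b_min ~ {1.0 / b_min:.2f}")   # ~1.7, as quoted in the text
print(f"f_min   ~ {f(b_min):.3f}")      # ~1.00
```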
In the general case, in the linear order in $\kappa$, we obtain $$b=a\biggl (1-{1\over 3}\,g\kappa \biggr )\,,\qquad \delta _{\rm M}=g\kappa \,, \eqno(6)$$ $$H_{\rm eff}\equiv {\dot b\over b} =H\biggl (1- {1\over 3}\,h \kappa \biggr )\,,\qquad h\equiv {{\upsilon}\over f} = {\dot {g}\over H}\,, \eqno(7)$$ where $\delta _{\rm M}\equiv \delta \rho _{\rm M}/\rho _{\rm M}$ is the comoving density perturbation, $H_{\rm eff}=H_{\rm eff} (t,{\bf x})$ is the effective Hubble function, and $g= g(a)$ and ${\upsilon}={\upsilon}(a)$ are the respective growth factors of the density perturbations and the matter peculiar velocity \[see also (11)\], $$g(a)={1\over c} \biggl (a-H\int _0^{\,a} \,{{\rm d}a\over H}\biggr )\,,\qquad {\upsilon}(a)={3H_{\rm V}\over 2a^{\,2}}\int _0^{\,a} \,{{\rm d}a\over H}\,. \eqno(8)$$ Equations (3)–(8) describe quasi-Hubble flows with the effective Hubble parameter $H_{\rm eff}$ depending on the observer’s location. Figure 1 shows the functions $g(a)$ and ${\upsilon}(a)$. In the present era, the function ${\upsilon}$ is near its broad maximum, indicating the period of the most intensive structure formation. The position of the maximum of ${\upsilon}(a)$ corresponds to $z\simeq 0.2$, the level of 90 percent of the maximum value is reached at $a\simeq 0.5$ and 1.4, and the half-maximum is at $a\simeq 0.1$ and 4. Therefore, the present era is an era of maximum peculiar velocities, and it will continue for a cosmological time. The function ${\upsilon}$ will have decreased to only half its current value by the time the Universe is 35 billion years old. And only then will it be possible to talk about the era of faded-out peculiar velocities in every space region where $\kappa <1$. ![The functions of the density perturbation growth rate $g(a)$ and of the matter peculiar velocity ${\upsilon}(a)$.](fig1.eps) Figure 2 displays the function $h=h(a)$ describing the deviation of the local Hubble parameter from the background one.
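The behavior of $g(a)$, ${\upsilon}(a)$, and $h(a)$ shown in Figs 1 and 2 can be reproduced by direct numerical integration of (8) (a Python sketch with $H_{\rm V}$ scaled out; $\Omega _{\rm M} = 0.28$ is assumed, as above):

```python
import math

Omega_M = 0.28                     # assumed; gives c ~ 0.39 as in the text
c = Omega_M / (1.0 - Omega_M)

# In units H_V = 1:  H(a) = f(a)/a = sqrt(c/a^3 + 1)
def invH(a):
    return 1.0 / math.sqrt(c / a**3 + 1.0)

def I(a, n=2000):
    """int_0^a da'/H(a') by the midpoint rule."""
    da = a / n
    return sum(invH((k + 0.5) * da) for k in range(n)) * da

def g(a):    # density perturbation growth factor, eqn (8)
    return (a - math.sqrt(c / a**3 + 1.0) * I(a)) / c

def v(a):    # peculiar velocity growth factor, eqn (8)
    return 1.5 * I(a) / a**2

def h(a):    # h = v/f with f(a) = sqrt(c/a + a^2), eqn (7)
    return v(a) / math.sqrt(c / a + a * a)

grid = [0.05 * k for k in range(1, 101)]     # a in (0, 5]
a_v = max(grid, key=v)
a_h = max(grid, key=h)
print(f"v(a) peaks near a ~ {a_v:.2f}")      # broad maximum near a ~ 0.8 (z ~ 0.2)
print(f"h(a) peaks near a ~ {a_h:.2f}")      # somewhat earlier, near z ~ 0.4
```

The broad maxima of both functions around the present epoch ($a = 1$) are what the argument below rests on.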
The function has its maximum at $z\simeq 0.4$, and the interval $h > 0.5 \, h_{\max}$ spans the range $a\in (0.1, 1.8)$, which corresponds to the age 0.6 to 22 billion years. We can learn from Fig. 2 that our Universe is at the stage of maximum distortion of Hubble’s expansion, and the Hubble flows are to be recovered only in some 10 billion years. ![The function $h(a)$ describing the deviation of the local Hubble parameter $H_{\rm eff}$ from the background one.](fig2.eps) To summarize, we can conclude that the large-scale structure formation in the Universe occurs during the period spanning from 1 to 20 billion years after the Big Bang. The stage of the suppression of Hubble flow inhomogeneities due to dark energy’s gravitational influence has not come yet. That is why one of the key theses in \[4, p. 279\] — that ‘the dynamic effect of dark energy naturally explains the two astronomical facts that have seemed mysterious up to now: (1) regularity of the expansion flow inside the uniformity cell and (2) the same expansion rate at local and global scales’ — is incorrect. We now consider local matter flows and their properties in more detail. Anisotropy of cold flows ------------------------ To describe the gauge-invariant field of peculiar velocities, we pass to Euler (quasi-Friedmannian) coordinates $y^{\,\alpha } = (\tau , {\bf y})$. In these coordinates, the gravitational field is locally isotropic at any spatial point in the linear order in $\kappa$. The required transformation is given by $${\bf y}={\bf x}+g{\bf S}\,,\qquad \tau =t- a{\upsilon}H_{\rm V}\bar q\,, \eqno (9)$$ $$S_i=-\bar q_{,\,i}\,,\qquad q= {3\over 2}\,H_{\rm V}^{\,2}\bar q\,,$$ where ${\bf S}={\bf S}({\bf x})$ is the vector of a medium element displacement from the unperturbed position[^9] and $q= q({\bf x})$ is a time-independent dimensionless displacement potential \[18\].
Comparison with (6) yields $$\kappa =- {\rm div} \, {\bf S} = \Delta \bar q\,.$$ The matter particle displacement from the unperturbed Hubble trajectories is monotonically increasing with time and today amounts to the value $\sigma_S \simeq 15$ Mpc. If dark energy has vacuum properties, the displacement will tend to its asymptotic value of about 25 Mpc in the future. Transformation (9) yields the following form of the metric interval in the quasi-Friedmannian and Lagrangian frames: $${\rm d} s^{\,2} = (1+2\Phi ) {\rm d} \tau^{2} - \tilde a^{2} {\rm d} {\bf y}^{2}$$ $$= {\rm d} t^{2} - \tilde a^{2} (\delta _{ik} - 2 g \bar q_{,ik}) {\rm d} x^{i} {\rm d} x^{k}\,, \eqno(10)$$ where $\tilde a\equiv a(t)(1-q)$ is a 4-scalar invariant under coordinate transformations and $\Phi =(cg/a)q$ is the gravitational potential of density perturbations. The function $b(t,{\bf x})$ is directly proportional to the trace of the spatial part of the Lagrangian metric tensor \[compare (10) and (6)\]. Equation (10) gives the physical Euler coordinate of the medium element, ${\bf r}=\tilde a{\bf y}$. Differentiating ${\bf r}$ with respect to the proper time yields the following expression for the matter peculiar velocity: $${\bf v}\equiv \dot {\bf r} - H{\bf r} ={\upsilon}\,H_{\rm V}\,{\bf S}\,. \eqno(11)$$ Expression (11) coincides with the definition of the 3-velocity as the spatial component of the matter 4-velocity in the quasi-Friedmannian reference frame: $${\upsilon}_i=-{\partial t\over a\partial y^{\,i}}=-{\upsilon}\,H_{\rm V}\,\bar q_{,\,i}\,.$$ Therefore, the value ${\upsilon}$ appearing in (7) is indeed the peculiar velocity growth rate. We now consider the local Hubble flows. In regions (4) of the inhomogeneous Universe, the Hubble flows are described by the tensor field $H_{ik}=H_{ik}(t,{\bf x})$ generalizing the function $H(t)$ in the Friedmann model \[see (13)\].
At a fixed instant of time $t$, Eqns (9) give the coordinate distance between two close medium points: $$\delta y_i=(\delta _{ik} - g \bar q_{,\,ik})\delta x^{\,k}\,. \eqno(12)$$ Differentiating the physical distance $\delta {\bf r}=\tilde a\,\delta {\bf y}$ with respect to time yields the field of pairwise velocities: $$\delta V_i\equiv {\partial \over \partial t} (\delta r_i) =H_{ik}\,\delta r^{\,k}\,, \eqno(13)$$ $$H_{ik}=H\delta _{ik} -\dot g\bar {q}_{,\,ik}=H (\delta _{ik}- h \bar q_{,\,ik})\,. \eqno(14)$$ We can see that the trace of (14) corresponds to the volume Hubble parameter $H_{\rm eff}=(1/3)H_{i\,i}$, but the tensor $H_{ik}$ itself is highly anisotropic. The local expansion anisotropy (variations in the projections of $H_{ik}$ on directions radiating from a given point ${\bf x}$) is of the same order as the deviations of $H_{\rm eff}$ from the true Hubble parameter $H$. For example, in the Local Group, at a distance more than 2 Mpc from its barycenter, the expected anisotropy of the quasi-Hubble outflow of galaxies can amount to 30%. The field $H_{ik}$ describes regular (cold) matter flows. It is worth noting that formula (13) is valid in the limit of small distances between galaxies, i.e., distances smaller than the correlation scale of the two-point correlation function of the displacement vector. The correlation radius varies from 15 to 40 Mpc, depending on the projection of this vector relative to the direction $\delta {\bf y}$. As the distance increases, random deviations from law (13) increase. This is due to the cosmological velocity perturbation spectrum, whose amplitude decreases with decreasing wavelength for $k > 0.03 \,{\rm Mpc}^{-1}$; hence, the random deviations from the average velocities (13) increase as the wavelength $k^{-1}$ increases.
For example, at a distance of 3 Mpc from the Local Group barycenter, the deviations are around $30-40\,{\rm km}\, {\rm s}^{-1}$, which is about 15 - 20% of the average velocity, while the full peculiar velocity of the Local Group relative to the microwave background is $600\, {\rm km}\,{\rm s}^{-1}$. The main inhomogeneity scales responsible for such a high velocity are in the range 15 - 50 Mpc. We can see that the standard theory of the formation of the structure of the Universe faces no ‘mysterious’ problems in explaining the observed relative motion of matter in quasi-homogeneous regions of the Universe ($\kappa < 1$). The local flows are regular, smooth, and highly correlated. The smallness of the random deviations in galaxy velocities from average cold flows is explained by the profile of the initial spatial density perturbation spectrum, contrary to Chernin’s appeal to the evolutionary influence of dark energy. At small distances, the flows are quasi-Hubble (they are radial, the outflow velocity being in direct proportion to distance), but the Hubble parameter depends on direction and on the observer’s location. There is no reason to modify the standard theory. The ‘Little Bang’ model[^10] proposed in Section 3 in \[4\] in order to explain the cold flows is beneath criticism. In that model, the peculiar velocities of galaxies ‘kicked’ out of the Local Group must decrease as much as five times under the gravitational influence of dark energy. As Fig. 1 shows, this will take more than 40 billion years. On ‘internal symmetry in cosmology’ =================================== Chernin’s suggestion to link relation (1) with mythological ‘internal symmetry in cosmology’ (\[4\], Section 5) causes serious disagreement. In this respect, the crucial point for the author is his introducing ‘Friedmann integrals.’ To determine their values, the author arbitrarily normalizes the scale factor (the value of the parameter $R_0$ in Eqn.
(57) in \[4\]) to the present horizon size $R_0 = H_0^{\,-1}$. With the parameter $R_0$ chosen differently, the equality of the ‘Friedmann integrals,’ e.g., those for dark matter and dark energy, $A_{\rm D}$ and $A_{\rm V}$, would be violated. It is clear that if $R_0 = H_0^{\,-1}$, then the approximate relations between the ‘Friedmann integrals’ $$A_{\rm B} \sim A_{\rm D} \sim A_{\rm R} \sim A_{\rm V} \eqno(15)$$ are equivalent to the approximate relations $$\Omega _{\rm B} \sim \Omega _{\rm D} \sim \sqrt {\Omega _{\rm R}} \sim {1\over \sqrt {\Omega _{\rm V}}} \eqno(16)$$ for the present ratios of the energy densities to the critical density. To verify this, it is sufficient to use the Friedmann equation for the spatially flat or almost flat Universe at today’s instant, which gives $$A_\lambda = R_0 \bigl [ \Omega _\lambda (H_0 R_0)^2\bigr ]^{1/(1+3w_\lambda )}\,,$$ for all sorts of matter $\lambda$, where $w_\lambda = p_\lambda /\rho _\lambda$ (cf. Eqn. (59) in \[4\]). This last formula implies the equivalence of relations (15) and (16) within an order of magnitude. Because $\Omega _{\rm B}$, $\Omega _{\rm D}$, and $\Omega _{\rm V}$ are presently of the order of unity \[however, see (17) and (18)\], relation (16), in turn, is equivalent to the approximate equality in (1) complemented with the same relation for photons. Thus, what the author thinks is a ‘symmetry’ is, in fact, the rephrased statement about the densities of different energy components being close to each other. Introducing the ‘Friedmann integrals’ does not make relation (1) any clearer. The author’s approach to the flatness problem (‘Dicke’s problem’) is equally awkward. Without an extremely fine tuning of the cosmological evolution initial data at the hot stage (the tuning noted by Dicke, which is automatically fulfilled in the inflationary theory), it is impossible to obtain small spatial curvature in the present Universe.
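That the ‘symmetry’ (15) is a restatement of (16) can be seen by evaluating the exponents $1/(1+3w_\lambda)$ explicitly (a Python sketch; the $\Omega _\lambda$ values are rough illustrative numbers, not fits):

```python
# With R0 = 1/H0, A_lambda = Omega_lambda^(1/(1+3w)) in units of 1/H0:
# linear in Omega for matter (w = 0), sqrt(Omega) for radiation (w = 1/3),
# 1/sqrt(Omega) for vacuum (w = -1) -- exactly the combinations entering (16).
components = {                    # rough illustrative densities (assumed)
    "baryons":     (0.05, 0.0),
    "dark matter": (0.23, 0.0),
    "radiation":   (1e-4, 1.0 / 3.0),
    "vacuum":      (0.72, -1.0),
}
A = {name: Om ** (1.0 / (1.0 + 3.0 * w)) for name, (Om, w) in components.items()}
for name, val in A.items():
    print(f"A_{name:<11s} ~ {val:.3g} / H0")
# The four values agree within a couple of orders of magnitude (relation (15));
# a different choice of R0 would destroy this 'coincidence'.
```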
In the author’s terms, this means that, had it not been for the fine-tuning of the initial data, the dimensionless parameter in Eqn. (87) in \[4\] would have been extremely high, and for the closed model, the equality between the energy densities of matter and dark energy would have never been satisfied. For example, for the closed Universe and the fixed $\rho _{\rm V} \sim 10^{-29}\,{\rm g}\,{\rm cm}^{-3}$, it is absolutely clear that if the curvature had contributed, e.g., $10^{-6}$ of the matter contribution to the Friedmann equation in the nucleosynthesis period, the expansion of the Universe would have changed to contraction and subsequent re-collapse long before dark energy came to play a role in the cosmological expansion. In fact, relations like (1) or (16) involve some very interesting, but still unclear issues. For example, the relation $\rho _{\rm D} \approx \rho _{\rm B}$ being constant in time possibly indicates the common origin of dark matter and the baryonic asymmetry of the Universe. In spite of numerous attempts to account for this fact, no satisfactory theoretical models have been proposed. The relation $\rho _{\rm V} \approx \rho _{\rm D}$ is valid today but includes quantities changing with time at different rates; this suggests that the period of transition from the matter-dominated to the dark-energy-dominated stage occupies a privileged position on the time axis in terms of the structure (and hence life) formation. The fact of the existence of the large-scale structure is crucial from the perspective of the coincidence problem.
The relations between the parameters $\Omega _{\rm R}$, $\Omega _{\rm V}$, and $\Omega _{\rm M} = \Omega _{\rm D} + \Omega _{\rm B}$, $$\Omega _{\rm R} \ll \Omega _{\rm M} \,, \qquad \Omega _{\rm V} \lesssim \Omega _{\rm M}\,, \eqno(17)$$ have a direct impact on the possibility of the generation of the structure of the Universe because gravitational instability is not realized at the radiation- and dark-energy-dominated stages and develops only if nonrelativistic matter dominates. But for the structure to be formed, one more condition must be satisfied: the initial perturbation amplitude must be just right to fit the ‘window’ of gravitational instability, thus giving rise to inhomogeneities. In our Universe, the two necessary conditions are satisfied: the initial perturbations (of the order $10^{-5}$) manage to grow and form the large-scale structure of the Universe during the time ‘window’ from 0.3 to 20 billion years. The condition $$\Omega _{\rm R} \ll \Omega _{\rm B} \lesssim \Omega _{\rm D}\,, \eqno(18)$$ in turn, is necessary for forming stars in nonlinear halos of dark matter. Any detailed discussion of these and similar issues would require a new review (in this respect, see, e.g., \[1, 3, 19–21\]). Here, we find it important to emphasize that the outlined range of issues is much richer than it may seem after reading Chernin’s article. On the physical nature of dark energy ===================================== Almost everywhere in \[4\], Chernin identifies dark energy with the vacuum energy, while other possibilities are mentioned just briefly. We want to point out that a no less attractive point of view is to relate dark energy to a new superweak and superlight field, which can be a quintessence, a phantom field, etc. It is appealing because, among other things, it is very hard to explain the vacuum energy value, which is nonzero and still extremely small compared to the energy scales of the known interactions (see Section 1).
It is much easier to imagine the vacuum energy relaxing practically to zero at some stage of the evolution of the Universe (long before the known stages). There are examples of such mechanisms in the literature \[22, 23\]. In this framework, it is natural for dark energy to be the energy of a new field rather than the vacuum energy. Another heuristic argument is that the present stage of the accelerated expansion of the Universe looks qualitatively similar to the inflation stage and differs from the latter ‘only’ in the energy density value and, hence, the Hubble parameter. Dark energy in the form of a superweak field could be a less energetic counterpart of the inflaton, a field commonly used in inflationary theories. If dark energy is the energy of a new field, the parameter $w_{\rm V}$ that relates pressure and energy density in accordance with the equation $p_{\rm V}=w_{\rm V} \rho _{\rm V}$ differs from $-1$ (by the way, it does not have to be constant in time), and the dark energy density depends on time. However, we emphasize that in most models of this type, the parameter $w_{\rm V}$ is automatically close to the vacuum value $w=-1$, and hence the observational limit $|w_{\rm V}+1| < 0.1$ hardly constrains the existing models yet. Finally, we note that the accelerated expansion may be caused by gravitation theory modified at superlarge scales and cosmological times. One of the possibilities here is related to the extra spatial dimensions of infinite size (for example, see Ref. \[24\]), although attempts to construct such models have so far run into internal contradictions. Another possibility, more realistic from the standpoint of theoretical realization, is the extension of General Relativity to the scalar–tensor theory of gravity \[25, 26\]. Thus, the Universe’s accelerated expansion may be the first evidence of new physical phenomena occurring at cosmological and maybe other scales.
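For constant $w_{\rm V}$, energy conservation gives $\rho _{\rm V}(a) \propto a^{-3(1+w_{\rm V})}$, which is precisely what distinguishes these models observationally; a minimal illustration (Python sketch):

```python
# For constant w, energy conservation d(rho a^3) = -p d(a^3) gives
# rho(a) = rho0 * a^(-3(1+w)): vacuum (w = -1) stays constant,
# quintessence-like w > -1 dilutes, phantom w < -1 grows with expansion.
def rho(a, w, rho0=1.0):
    return rho0 * a ** (-3.0 * (1.0 + w))

for w in (-1.0, -0.9, -1.1):
    print(f"w = {w:+.1f}: rho(a=2)/rho(a=1) = {rho(2.0, w):.3f}")
```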
Various models of the accelerated expansion differ in the dark energy density dependence on time. The search for this dependence and its detailed study are important problems of observational cosmology, which should eventually make it possible to reveal the physical nature of dark energy. Additional comments =================== Chernin’s article ought to be read with caution. For example, not all researchers are that enthusiastic about the extravagant model by Luminet et al.; peculiarities, if any, in the angular spectrum of the cosmic microwave background can be explained in a less exotic way. Almost the same applies to the model by Arkani-Hamed, Dimopoulos, and Dvali (known as the ADD model). This model definitely played a great role in presenting the idea that extra spatial dimensions can be large (or even infinitely large), but it is unlikely that nature follows this path. We note further that Chernin’s paper is replete with terms that are not commonly accepted, such as EG-vacuum, Q-vacuum, and Friedmann integrals. To summarize, we recommend that the interested reader form a reasoned opinion on issues mentioned in Chernin’s article using alternative reviews on this topic, e.g., Refs. \[20, 27–30\]. Conclusion ========== The discovery of dark energy dotted the $i\,$’s and crossed the $t\,$’s in observational cosmology. For the first time in the history of science, a standard cosmological model ($\Lambda$CDM) fitting the whole set of observational data has emerged. Nowadays, it has no serious rivals. The standard model describes both the evolution of the Universe as a whole and the generation of its structure remarkably well. In spite of the influence of dark energy, the structure is still being generated and this will continue for another ten billion years or so. At the same time, with dark energy having been recognized to exist, the situation in physics has dramatically changed and we see our knowledge of the microworld as incomplete.
It is a safe bet to say that revealing the physical nature of dark energy is the central problem of natural science.

Acknowledgments {#acknowledgments .unnumbered}
===============

The authors are grateful to A.G. Doroshkevich, M.B. Libanov, E.V. Mikheeva, and V.N. Strokov for useful discussions. The work was supported in part by the Russian Foundation for Basic Research, Grants 07-02-00886 and 08-02-00473.

Weinberg S., [*Rev. Mod. Phys.*]{} [**61**]{}, 1 (1989).

Martel H., Shapiro P. R., Weinberg S., [*ApJ*]{} [**492**]{}, 29 (1998); astro-ph/9701099.

Linde A., in [*Science and Ultimate Reality: Quantum Theory, Cosmology, and Complexity*]{} (Eds J.D. Barrow et al.) (Cambridge: Cambridge Univ. Press, 2004) p. 426; hep-th/0211048.

Chernin A. D., [*Usp. Fiz. Nauk*]{} [**178**]{}, 267 (2008) \[[*Phys. Usp.*]{} [**51**]{} (3), 267 (2008)\].

Kofman L. A., Gnedin N. Y., Bahcall N. A., [*ApJ*]{} [**413**]{}, 1 (1993).

Ostriker J. P., Steinhardt P. J., [*Nature*]{} [**377**]{}, 600 (1995).

Krauss L. M., Turner M. S., [*Gen. Rel. Grav.*]{} [**27**]{}, 1137 (1995); astro-ph/9504003.

Krauss L. M., [*ApJ*]{} [**480**]{}, 466 (1997); astro-ph/9607103; [*ApJ*]{} [**501**]{}, 461 (1998); astro-ph/9706227.

Totani T., Yoshii Y., Sato K., [*ApJ*]{} [**483**]{}, L75 (1997); astro-ph/9705014.

Carroll S. M., Press W. H., Turner E. L., [*Annu. Rev. Astron. Astrophys.*]{} [**30**]{}, 499 (1992).

Melchiorri A. et al. (Boomerang Collab.), [*ApJ*]{} [**536**]{}, L63 (2000); astro-ph/9911445.

de Bernardis P. et al. (Boomerang Collab.), [*Nature*]{} [**404**]{}, 955 (2000); astro-ph/0004404; Lange A. E. et al. (Boomerang Collab.), [*Phys. Rev.*]{} [**D 63**]{}, 042001 (2001); astro-ph/0005004.

Balbi A. et al., [*ApJ*]{} [**545**]{}, L1 (2000); “Erratum” [**558**]{}, L145 (2001); astro-ph/0005124; Hanany S. et al., [*ApJ*]{} [**545**]{}, L5 (2000); astro-ph/0005123.

Jaffe A. H. et al. (Boomerang Collab.), [*Phys. Rev. Lett.*]{} [**86**]{}, 3475 (2001); astro-ph/0007333.
Pogosian L., [*New Astron. Rev.*]{} [**50**]{}, 932 (2006); astro-ph/0606626.

Célérier M.-N., “The accelerated expansion of the Universe challenged by an effect of the inhomogeneities”, [*New Adv. Phys.*]{} [**1**]{}, 29 (2007); astro-ph/0702416.

Alexander S. et al., “Local void vs dark energy: confrontation with WMAP and type Ia supernovae”, arXiv:0712.0370.

Lukash V. N., [*Zh. Eksp. Teor. Fiz.*]{} [**79**]{}, 1601 (1980) \[[*Sov. Phys. JETP*]{} [**52**]{}, 807 (1980)\]; [*Pis’ma Zh. Eksp. Teor. Fiz.*]{} [**31**]{}, 631 (1980) \[[*JETP Lett.*]{} [**31**]{}, 596 (1980)\]; astro-ph/9910009.

Tegmark M., Rees M. J., [*ApJ*]{} [**499**]{}, 526 (1998); astro-ph/9709058.

Weinberg S., “The cosmological constant problems”, astro-ph/0005265.

Rubakov V. A., [*Usp. Fiz. Nauk*]{} [**177**]{}, 407 (2007) \[[*Phys. Usp.*]{} [**50**]{}, 390 (2007)\].

Rubakov V. A., [*Phys. Rev.*]{} [**D 61**]{}, 061501 (2000); hep-ph/9911305.

Steinhardt P. J., Turok N., [*Science*]{} [**312**]{}, 1180 (2006); astro-ph/0605173.

Deffayet C., Dvali G., Gabadadze G., [*Phys. Rev.*]{} [**D 65**]{}, 044023 (2002); astro-ph/0105068.

Boisseau B. et al., [*Phys. Rev. Lett.*]{} [**85**]{}, 2236 (2000); gr-qc/0001066.

Gannouji R. et al., [*JCAP*]{} ([**09**]{}), 016 (2006); astro-ph/0606287.

Sahni V., Starobinsky A., [*Int. J. Mod. Phys.*]{} [**D 9**]{}, 373 (2000); astro-ph/9904398.

Peebles P. J. E., Ratra B., [*Rev. Mod. Phys.*]{} [**75**]{}, 559 (2003); astro-ph/0207347.

Padmanabhan T., [*Phys. Rep.*]{} [**380**]{}, 235 (2003); hep-th/0212290.

Sahni V., Starobinsky A., [*Int. J. Mod. Phys.*]{} [**D 15**]{}, 2105 (2006); astro-ph/0610026.

[^1]: Published: Physics-Uspekhi [**51**]{}(3), pp 283-289 (2008).

[^2]: Apart from some arguments based on the anthropic principle, see \[1-3\].

[^3]: We follow the notation used in Chernin’s article \[4\]: the subscripts V, D, and B respectively stand for dark energy, dark matter, and baryons.
[^4]: Measurements of peculiar velocities of galaxies in clusters and superclusters, gravitational lensing by clusters, measurements of galaxies’ rotation curves, determination of the mass–luminosity relation, measurements of X-ray clusters’ temperatures, etc.

[^5]: We note that the condition of dominance over nonrelativistic matter excludes relativistic particles because their energy density decreases at a higher rate than the nonrelativistic matter density.

[^6]: Here, we are talking only about galaxies that are located in the quasilinear regions of space ($\delta _{\rm M} < 1$). In gravitationally bound systems ($\delta _{\rm M} > 1$), galaxies held by the gravitational field of these formations do not experience outflow.

[^7]: At this stage, we neglect the radiation density. The first two terms in the right-hand side of (3) respectively describe nonrelativistic matter (dark matter and baryons) and dark energy modeled by the cosmological constant. The dot over a variable denotes the partial derivative with respect to time $t$. The modern value of the cosmological scale factor is set to $a_0 = 1$ \[see (5)\].

[^8]: Condition (4) includes both superclusters (the regions where $\kappa >0$) and cosmological voids ($\kappa <0$).

[^9]: We recall that the [**x**]{} coordinate does not change with time along the medium element trajectory and tends to ${\bf y}$ as $t\rightarrow 0$.

[^10]: ‘The local cosmology’ in Section 3.4 in \[4\], which underlies the ‘Little Bang’ model, is based on the limit of the static gravitational field (see Eqns. (28)–(36) in \[4\]). This contradicts the standard theory, where the gravitational potential $\Phi (t, {\bf x})$ depends on time \[see Eqn. (10)\].
---
abstract: 'In this article, we report the influence of the interfacial roughness on the thermal boundary conductance between two crystals, using molecular dynamics. We show evidence of a transition between two regimes, depending on the interfacial roughness: when the roughness is small, the boundary conductance is constant, taking values close to the conductance of the corresponding planar interface. When the roughness is larger, the conductance becomes larger than the planar interface conductance, and the relative increase is found to be close to the increase of the interfacial area. The cross-plane conductivity of a superlattice with rough interfaces is found to increase by a comparable amount, suggesting that heat transport in superlattices is mainly controlled by the boundary conductance. These observations are interpreted using the wave characteristics of the energy carriers. We also characterize the effect of the angle of the asperities, and find that the boundary conductance displayed by interfaces having steep slopes may become important if the lateral period characterizing the interfacial profile is large enough. Finally, we consider the effect of the shape of the interfaces, and show that the sinusoidal interface displays the highest conductance, because of its large true interfacial area. All these considerations are relevant to the optimization of nanoscale interfacial energy transport.'
author:
- Samy Merabia
- Konstantinos Termentzidis
title: Thermal boundary conductance across rough interfaces probed by molecular dynamics
---

Introduction
============

The existence of a finite thermal boundary resistance between two solids has important practical consequences, especially for the transport properties of nanostructured materials. When the distance between interfaces becomes submicronic, heat transfer is mainly controlled by the interfacial phonon transmission, which in turn governs the thermal boundary resistance.
In certain applications, such as electro-optical modulators [@schneider1990], optical switching devices [@wagner2007] and pressure sensors [@robert1999], a low resistance is desired to favor energy flow. In thermoelectric devices, on the contrary, a large resistance is preferable so as to generate large barriers for a wide class of phonons [@wan2010; @hashibon2011; @qiu2011]. Two strategies may be followed in order to tune the value of the boundary resistance between two solids: either the solid/solid interaction is changed through the coupling with a third body, which is usually a self-assembled monolayer [@losego2012; @obrien2013], or the interfacial roughness is modulated [@gotsmann2013]. This latter direction has been illustrated experimentally through chemical etching [@hopkins2010; @hopkins2011; @duda2012]. However, a theoretical model describing the effect of the interfacial roughness on the thermal boundary conductance at room temperature is still lacking [@kechrakos1990; @kechrakos1991]. Note that the role of the interfacial roughness on the Kapitza conductance was underlined a long time ago, in the context of liquid helium/metal interfaces at very low, cryogenic temperatures [@amrit2010; @shiren1981]. At these temperatures, the phonon coherence length may be comparable with the typical heights of the interface, leading to strong phonon scattering, which is put forward to explain the high values of the conductance experimentally reported, as compared with the classical acoustic mismatch theory, which assumes planar interfaces [@adamenko1971]. Such considerations have received less attention for room-temperature solids, probably because in this case the phonon coherence length is very small.
Understanding the role of the interfacial roughness also has important consequences for the transport properties of superlattices, which are good candidates for thermoelectric conversion materials, thanks to their low thermal conductivity [@tritt2004]. Designing superlattices with rough interfaces has been achieved recently, opening an avenue for reducing the thermal conductivity in the direction perpendicular to the interfaces [@termentzidis2011]. Again, the physical mechanisms at play in the heat transport properties of rough superlattices have not been elucidated so far. Molecular dynamics offers a privileged route to understand the interaction between the energy carriers in a solid and the asperities of the interface [@termentzidis2011_2; @rajabpour2011; @termentzidis2009]. In this article, we use molecular dynamics to probe interfacial heat transfer across model rough interfaces. Because of the difficulty of determining the temperature jump across a non-planar interface, we have used transient simulations, which enable us to compute the thermal boundary resistance characterizing rough interfaces. In section \[simulations\], we describe the structures used to probe the conductance of rough interfaces. In section \[GK\], we explain the methodology retained to extract the thermal boundary conductance from molecular dynamics. The simulation results are presented and discussed in section \[results\]. We first concentrate on model interfaces made of isosceles triangles. For these model interfaces, we present the results for the thermal conductance as a function of the interfacial roughness and interpret the results using a simple acoustic model in subsection \[results\_roughness\]. In subsection \[results\_angle\], we characterize the effect of the angle of the asperities. Finally, in subsection \[results\_shape\], we appraise the effect of the interfacial shape. We discuss the consequences of this work in the Conclusion.
Structures and sample preparation \[simulations\]
=================================================

We will consider model rough interfaces, constructed from two perfect fcc Lennard-Jones solids whose interface is oriented along the crystallographic \[100\] direction, as represented in figure \[schematic\_parameters\]. We introduce some 2D roughness in the $xz$ plane, where $x$ and $z$ denote respectively the \[100\] and \[001\] directions. As we use periodic boundary conditions in all spatial directions, the system studied is similar to a superlattice. The dimension in the $y$ direction has been fixed to $10$ $a_0$, where $a_0$ is the lattice parameter, while the dimension in the $z$ direction (the superlattice period) has been varied between $5$ and $40$ $a_0$. All the atoms of the system interact through a Lennard-Jones potential $V_{\rm LJ}(r) = 4\epsilon \left((\sigma/r)^{12}-(\sigma/r)^6\right)$ truncated at a distance $2.5 \sigma$. A single set of energy $\epsilon$ and diameter $\sigma$ characterizes the interatomic interaction potential. As a result, the two solids have the same lattice constant $a_0$. To introduce an acoustic mismatch between the two solids, we have considered a difference between the masses of the atoms of the two solids, characterized by the mass ratio $m_r = m_2/m_1$. In all the following, we will use $m_r=2$, which has been shown to give an impedance ratio typical of the interface between Si and Ge [@termentzidis2009]. From now on, we will use real units where $\epsilon=1.67 \times 10^{-21}$ J, $\sigma =3.4 \times 10^{-10}$ m and $m_1=6.63 \times 10^{-26}$ kg, these values having been chosen to represent solid argon. With this choice of units, the unit of time is $\tau=\sqrt{m_1\sigma^2/\epsilon}=2.14$ ps, and the unit of interfacial conductance is $G = k_B/(\tau \sigma^2) \simeq 56\;{\rm MW/K/m^2}$.
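As a quick consistency check on these reduced units, the time and conductance scales can be recomputed directly from the argon parameters quoted above; a minimal sketch (all values taken from this section):

```python
import math

# Lennard-Jones parameters quoted in the text for solid argon
EPSILON = 1.67e-21   # J
SIGMA = 3.4e-10      # m
M1 = 6.63e-26        # kg (mass of the lighter species)
KB = 1.380649e-23    # J/K, Boltzmann constant

# LJ time unit: tau = sqrt(m1 * sigma^2 / epsilon)
tau = math.sqrt(M1 * SIGMA**2 / EPSILON)

# Unit of interfacial (Kapitza) conductance: G = kB / (tau * sigma^2)
g_unit = KB / (tau * SIGMA**2)

print(f"tau    = {tau * 1e12:.2f} ps")          # -> 2.14 ps
print(f"G unit = {g_unit / 1e6:.1f} MW/K/m^2")  # -> 55.8 MW/K/m^2
```

The conductance unit evaluates to about $55.8\;{\rm MW/K/m^2}$, consistent with the rounded value $\simeq 56$ quoted in the text.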
The different interfaces have been prepared as follows: first, the structures have been generated by mapping the space with fcc structures using the lattice parameter of the fcc LJ solid at zero temperature [@chantrenne2003]: $a_0(T=0\;{\rm K})= 1.5496 \sigma$. The structures have then been equilibrated at the two final finite temperatures $T=40$ K and $T=18$ K using a combination of a Berendsen thermostat, a Nosé–Hoover thermostat and a barostat at $0$ atm [@frenkelsmit]. The total equilibration time lasts one million time steps, which corresponds to a total time of $4.28$ ns. The equilibrium lattice parameters have been found to be $a=1.579 \sigma$ at $T=40$ K and $a=1.5563 \sigma$ at $T=18$ K. In this article, we considered different types of rough interfaces, as represented in Fig. \[schematic\_parameters\]. The first one consists of triangular shaped interfaces, having a constant angle $\alpha=45\deg$ with respect to the $xy$ plane and a variable height $h$. In the second type of interface analyzed, we keep the interfacial height $h$ constant and vary the angle $\alpha$. In the third case, both the angle and the height are varied, keeping the interfacial area $A$ constant. Finally, the effect of the shape of the interfaces has also been appraised, considering totally rough interfaces, small triangles juxtaposed on triangular interfaces, and square and wavy shaped interfaces. This analysis covers all the possible parameters that might be involved in the geometry of the interfaces, with the aim of quantifying their influence on phonon interfacial transport. ![(Color online) Schematic representation of the different parameters that have been varied, for the triangular shaped interfaces. Top middle (Fig. 3b): interfacial height at a constant value of the angle $\alpha=45 \deg$; Middle (Fig. 3c): angle $\alpha$ at a constant value of the interface height $h$.
Bottom (Fig. 3d): angle $\alpha$ and interfacial height $h$ at a constant value of the interfacial area $\mathcal A$.[]{data-label="schematic_parameters"}](structures_new_cor2.eps){width="0.9\linewidth"}

Thermal boundary conductance from transient simulations \[GK\]
==============================================================

In this section, we briefly review the methodology adopted to probe the interfacial conductance between two solids, using transient out-of-equilibrium simulations. Generally speaking, there are different methods to extract the boundary conductance from molecular simulations. In the steady-state approach, a net heat flux $q$ is applied through the coupling of two energy reservoirs, and one measures the finite temperature jump $\Delta T$ across the interface [@stevens2007; @landry2009]. This allows one to compute the interfacial conductance $G_K = q/\Delta T$. For the rough interfaces that we will consider in the following, it may be difficult to clearly identify a temperature jump, especially if the roughness is large. On the other hand, transient non-equilibrium simulations do not require spatially resolving the temperature field in the vicinity of the interface, and for this reason they are well adapted to the determination of the conductance of imperfect interfaces. The principle is akin to the thermoreflectance technique, and consists in heating instantaneously one of the two solids and recording the temporal evolution of the temperature of the hot solid [@shenogin2004; @hu2011]. The conductance $G_K$ is then obtained from the time $\tau$ characterizing the thermal relaxation of the hot solid: $$\label{transient} G_K = \frac{3 N_1 k_B}{4 A \tau}$$ where $N_1$ is the number of atoms of solid $1$, $k_B$ is the Boltzmann constant, and $A$ is the interfacial area.
Because the temperature of the heated solid may display some oscillations which make the determination of the time constant $\tau$ difficult, we have rather used the decay of the energy $E_1$ to extract $\tau$: $$\label{mechanical_energy} E_1^m= \sum_{j \in 1} \frac{1}{2}m \vec v_j^2 + \sum_{j,k \in 1} V(\vec r_j- \vec r_k)$$ where the second term corresponds to the interatomic potential, here supposed to be pairwise. An example of the time decay of the energy during thermal relaxation of the hot solid is displayed in Fig. \[energy\_decay\], showing that the exponential decay hypothesis is reasonable. ![(Color online) Energy decay of the heated solid obtained using transient non-equilibrium simulations. Dashed lines show the exponential fit. The parameters are: Total Length=$40$ $a_0$; temperature $T=40$ K; mass ratio $m_r=2$.[]{data-label="energy_decay"}](thermo.64_10_angle_45_Period.30.Length.60.ML.32.mr.2.T0.33.tr.5_Transient_Kapitza.eps){width="0.9\linewidth"} In practice, after equilibration of the system, we have heated one of the two solids by an amount of $18$ K and followed the thermal relaxation of the hot solid during a time interval after which the energy has decreased by a factor $10$. To remove any contribution stemming from internal phonon scattering in the heated solid, we have run in parallel simulations across the interface between identical solids, and calculated the corresponding internal resistance. The Kapitza conductance calculated in this article has been obtained after having subtracted this internal resistance: $$\label{interfacial_conductance} 1/G_K = 1/G_{12} - 1/G_{11}$$ where $G_{12}$ is the conductance measured for the interface between solid $1$ and solid $2$, and $G_{11}$ is the conductance measured between identical solids using eq. \[transient\]. This procedure has been followed for all the systems studied in this article.
Finally, for the simulations discussed in this article, we have used between $5$ and $10$ independent configurations, depending on the system size, to determine the value of the Kapitza conductance, and the error bar has been found to be typically $15$ percent.

Results \[results\]
===================

In this section, we present the simulation results obtained using the transient simulations, as detailed above. We will successively study the effect of the interfacial roughness, the angle of the asperities, and the shape of the interface. A summary of the different parameters that will be varied is depicted in Fig. \[schematic\_parameters\] and Fig. \[illustration\_shape\].

Effect of the superlattice period and number of periods \[finite\_size\_effects\]
---------------------------------------------------------------------------------

In this section, we first quantify finite size effects in the determination of the conductance of rough interfaces, as measured by eq. \[transient\]. It is important to note that the system simulated is not a single isolated interface, but rather a superlattice, because of the periodic boundary conditions. It is thus relevant to assess the influence of the superlattice period on the thermal conductance as measured by eq. \[transient\]. To this end, we will consider model rough interfaces, made of isosceles triangles, as depicted in fig. \[schematic\_parameters\]. In figure \[fig\_finite\_size\_effects\], we report the conductance of triangular shaped interfaces having a fixed roughness height of $32$ MLs and a varying period. ![(Color online) Thermal boundary conductance of isosceles triangular shaped interfaces having a roughness height $h=32$ MLs, as a function of the period $p$ defined in fig \[schematic\_parameters\]. Top: $T=40$ K; Bottom: $T=18$ K. Lines are guides to the eye.
The mass ratio is $m_r=2$.[]{data-label="fig_finite_size_effects"}](figure_3new.eps){width="1.2\linewidth"} For the two temperatures considered, the thermal conductance is found to decrease with the system size for short periods, and then to saturate for periods larger than $30$ nm. The increase of the conductance for thin layers may be explained by the fact that long-wavelength phonons do not see two independent interfaces but rather a single one. A similar trend has also been reported in lattice dynamics simulations [@zhao2005] and Green function calculations [@zhang2007]. For thick layers, the conductance measured is constant, and has converged to the value characterizing an infinitely thick film. We remark also that the conductance is higher at high temperatures. Generally speaking, the thermal boundary conductance is found to increase with temperature, a trend often attributed to the existence of inelastic phonon scattering at the interface [@stevens2007; @hopkins2009]. This behavior is consistent with our simulation data. In the following, we will fix the period to $20$ $a_0$, as it leads to moderate finite size effects, as already found for superlattices with planar interfaces [@merabia2012]. Finally, since the system we simulate is akin to a superlattice because of the periodic boundary conditions, it is important to probe the effect of the number of periods on the measured conductance. Figure \[fig\_number\_periods\] quantifies the effect of the number of interfaces on the conductance measured in transient simulations. From this figure, we can conclude that, within error bars, the number of interfaces has a mild effect on the conductance that we calculate. This is the expected behaviour, as we probe a quantity characterizing the interface solely, independently of the number of interfaces.
![(Color online) Thermal boundary conductance of isosceles triangular shaped interfaces having a roughness height $h=32$ MLs and a period $40$ $a_0$, as a function of the number of periods of the superlattice. Lines are guides to the eye. The temperature is $T=40$ K and the mass ratio is $m_r=2$.[]{data-label="fig_number_periods"}](transient_conductance_repetition.eps){width="0.8\linewidth"}

Effect of the interfacial roughness \[results\_roughness\]
----------------------------------------------------------

We will now concentrate on the influence of the interfacial roughness on the thermal boundary conductance. We will consider rough interfaces having an angle $\alpha$ fixed at $\alpha=45 \deg$, while the height $h$ of the interface is increased so as to change the interfacial roughness, as seen in fig. \[schematic\_parameters\]b. For the following, it is important to keep in mind that, when varying the height of the interface $h$ at a constant value of the angle, the true interfacial area remains constant, larger by a factor $1/\cos \alpha=\sqrt{2}$ than its projection on the horizontal $xy$ plane. Figure 5 displays the evolution of the measured Kapitza conductance as a function of the interfacial roughness, for two temperatures. The conductance of a planar interface, which corresponds to the value $h=0$, has also been indicated for the sake of comparison. Two regimes are to be distinguished, depending on the roughness of the interface $h$. When the height is smaller than typically $20$ monolayers, the conductance seems to be constant, or slightly decreases with the roughness, taking values close to the planar interface conductance. When the interfacial height becomes larger, the conductance suddenly increases and tends to saturate for very rough surfaces. ![(Color online) Thermal boundary conductance of isosceles triangular shaped interfaces, as a function of the interfacial roughness. Top: $T=40$ K; Bottom: $T=18$ K.
We have also indicated the conductance of the corresponding planar interfaces (ML=0) and of a very rough interface (ML=60). The horizontal dashed lines show the conductance obtained after rescaling the conductance of the planar interface by the true interfacial area. The solid lines denote the theoretical model eq. (\[conductance\_model\]), with the parameter $\xi=0.2$. The parameters are: total Length=$40$ $a_0$; mass ratio $m_r=2$.[]{data-label="45deg"}](transient_conductance_energy_ML_2.eps "fig:"){width="0.9\linewidth"} ![(Color online) Thermal boundary conductance of isosceles triangular shaped interfaces, as a function of the interfacial roughness. Top: $T=40$ K; Bottom: $T=18$ K. We have also indicated the conductance of the corresponding planar interfaces (ML=0) and of a very rough interface (ML=60). The horizontal dashed lines show the conductance obtained after rescaling the conductance of the planar interface by the true interfacial area. The solid lines denote the theoretical model eq. (\[conductance\_model\]), with the parameter $\xi=0.2$. The parameters are: total Length=$40$ $a_0$; mass ratio $m_r=2$.[]{data-label="45deg"}](transient_conductance_energy_ML_3.eps "fig:"){width="0.9\linewidth"} Interestingly, the increase of the conductance between planar and very rough surfaces is found to be close to the increase of the interfacial area. This is illustrated in fig. 5, where we have shown with dashed lines the value of the conductance obtained by multiplying the conductance of a planar interface by a factor $1/\cos \alpha$. Depending on the superlattice period, the increase of the cross-plane thermal conductivity has been found to lie between $1.3$ and $1.5$, which encompasses the value $\sqrt{2} \simeq 1.41$. This reinforces the conclusion that the thermal conductivity of a superlattice is mainly controlled by the Kapitza resistance exhibited by the interfaces, which in turn seems to be primarily governed by the interfacial area.
We now give some qualitative elements to understand the previous simulation results, regarding the influence of the interfacial roughness on the Kapitza conductance. At this point, it is important to keep in mind that in the situations that we have modeled, the energy carriers are phonons which are classically populated. A given phonon mode is characterized, among other quantities, by its wavelength $\lambda$, which may take practically any value between an atomic distance $2a$ and the simulation box length $L$ [@note_phonon_wavelength]. First, let us concentrate on the case of a small roughness $h$, as exemplified in fig. \[schematic\_phonon\_roughness\]. In this case, the majority of phonon modes have a wavelength larger than $h$, and they see the interface as a planar one: the transmitted heat flux is then controlled by the projected area. On the other hand, when the interface is very rough, most of the phonons have a wavelength smaller than the height $h$; these phonons no longer feel the interface as planar, phonon scattering becomes completely incoherent, and the transmitted heat flux is controlled by the true surface area. To put these arguments on quantitative grounds, we will consider the following expression of the thermal conductance, inspired by the classical AMM model [@little1959]. We introduce a mode-dependent fraction $\psi(\lambda)$, which depends on the considered wavelength, and which is equal to $1$ when the wavelength is large compared with the interfacial roughness $h$, and equal to $0$ in the opposite case. We define a dimensionless parameter $\xi$, such that: $$\begin{aligned} \label{psi} \psi(\lambda) &=& 1 \; {\rm if} \; \lambda > \xi h \nonumber \\ \psi(\lambda) &=& 0 \; \; {\rm otherwise} \end{aligned}$$ The parameter $\xi$ will be the adjustable fitting parameter of the model.
The interfacial conductance is then supposed to be given by: $$\begin{aligned} \label{conductance_model} G_K & =& \frac{3}{2} \zeta \rho k_B c_1 ( \int_{0}^{\omega_{\rm Dmin}} g(\omega) \psi(\lambda) d \omega \mathcal I_{12} \nonumber \\ &+ & \frac{\mathcal A}{\mathcal A_0} \int_{0}^{\omega_{\rm Dmin}} g(\omega) (1-\psi(\lambda)) d \omega \mathcal I_{12} )\end{aligned}$$ where $\rho$ is the crystal number density, $c_1$ is the average sound velocity in medium $1$, and $\omega_{\rm Dmin}$ is the Debye frequency of the softer solid. The parameter $\zeta$ is a scaling factor which accounts for the tendency of the AMM model to overpredict the measured Kapitza conductance [@merabia2012]. The integral $\mathcal I_{12}$ involves the angular dependent transmission coefficient: $$\label{transmission_coefficient} \mathcal I_{12}=\int_{0}^{1} \frac{4 Z_1 \mu_1 Z_2 \mu_2}{(Z_1 \mu_1 + Z_2 \mu_2)^2} \mu_1 d\mu_1$$ where $Z_i=\rho_i^m c_i$ are the acoustic impedances of the two solids, $\rho_i^m$ being the mass density, and $\mu_1=\cos \theta_1$ is a shorthand notation for the cosine of the phonon incident angle [@merabia2012]. Finally, the quantity $\mathcal A/\mathcal A_0$ is the ratio of the true interfacial area over the projected one. The physical motivation of eq. (\[conductance\_model\]) is simple: phonons having a wavelength $\lambda$ larger than $\xi h$ contribute to a transmitted heat flux proportional to the projected area $\mathcal A_0$, while phonon modes having a wavelength smaller than $\xi h$ contribute to the transmitted heat flux proportionally to the true surface area. We have compared the prediction of eq. \[conductance\_model\] to the simulation results discussed before. To this end, we have assumed Debye solids, with a vibrational density of states $g(\omega)=\omega^2/(2\pi^2 c_1^3)$, and for the sake of consistency, the mode-dependent wavelength $\lambda$ has been taken to be simply related to the frequency $\omega$: $\lambda=2 \pi c_1/\omega$.
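Eqs. (\[conductance\_model\]) and (\[transmission\_coefficient\]) are straightforward to evaluate numerically. The sketch below assumes Debye solids and a Snell-type refraction law linking $\mu_2$ to $\mu_1$ (an assumption of this illustration, since the relation is left implicit in the text); the numerical parameters in the usage are illustrative, not those of our simulations.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def _trapz(y, x):
    """Trapezoidal rule (kept explicit to avoid version-dependent numpy aliases)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def transmission_integral(z1, z2, c1, c2, n=4000):
    """Angular integral I12 of the AMM transmission coefficient:
    int_0^1 4 Z1 mu1 Z2 mu2 / (Z1 mu1 + Z2 mu2)^2 mu1 dmu1.
    mu2 is obtained from Snell's law (assumption of this sketch);
    totally reflected angles transmit nothing."""
    mu1 = np.linspace(1e-6, 1.0, n)
    sin2_t2 = (c2 / c1) ** 2 * (1.0 - mu1 ** 2)
    mu2 = np.sqrt(np.clip(1.0 - sin2_t2, 0.0, None))
    alpha = np.where(sin2_t2 < 1.0,
                     4.0 * z1 * mu1 * z2 * mu2 / (z1 * mu1 + z2 * mu2) ** 2,
                     0.0)
    return _trapz(alpha * mu1, mu1)

def conductance_model(h, xi, rho, c1, omega_d, i12, area_ratio, zeta=1.0):
    """Two-regime model: modes with lambda > xi*h couple to the projected
    area, shorter ones to the true area (factor area_ratio = A/A0)."""
    omega = np.linspace(1e9, omega_d, 8000)
    g = omega ** 2 / (2.0 * np.pi ** 2 * c1 ** 3)  # Debye density of states
    lam = 2.0 * np.pi * c1 / omega                 # lambda = 2 pi c1 / omega
    psi = (lam > xi * h).astype(float)             # 1: mode sees a planar interface
    weight = psi + area_ratio * (1.0 - psi)
    return 1.5 * zeta * rho * KB * c1 * i12 * _trapz(g * weight, omega)
```

By construction, the model reduces to the planar AMM conductance for $h \to 0$ and saturates at $\mathcal A/\mathcal A_0$ times that value for very rough interfaces, mirroring the two regimes of fig. 5.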
Figure 5 compares the predictions of eq. \[conductance\_model\] to the simulation results. The values of the planar interface conductance have been rescaled by a factor $\zeta=3$ and $4$ at the temperatures $T=18$ K and $T=40$ K respectively. These correction factors account for the fact that the simple AMM model relies on several assumptions (Debye solids, elastic scattering) which may lead to a discrepancy with the MD value. We have chosen the two factors because they yield good agreement with the MD value for smooth interfaces. Apart from this rescaling, the parameter $\xi$ has been treated as the only fitting parameter. Figure 5 shows that a good agreement is found using the value $\xi=0.2$, for the two temperatures considered. The smallness of the fitting coefficient may be understood in the following way. Consider a given phonon mode: if its wavelength is larger than the roughness $h$, the effective scattering area is the projected one. On increasing the roughness, $h$ becomes comparable with $\lambda$, and the interface strongly scatters the considered phonon in all directions. This contributes to slightly diminish the conductance, as compared with the planar case, in agreement with the simulation data points. It is only when the roughness becomes very large compared with the wavelength, $\lambda \ll h$, that interfacial scattering becomes again negligible and the transmitted energy is proportional to the true area. The fitting procedure indicates that this regime is reached when the wavelength becomes smaller than typically one fifth of the interfacial roughness. ![(Color online) Schematic representation of the roughness-induced phonon scattering. Top: case of a small roughness. The huge majority of incoming phonons see the interface as a plane, and the transmitted heat flux is proportional to the projected interface area. Bottom: case of a large roughness.
Most of the phonons have a wavelength smaller than the interfacial roughness, and the transmitted heat flux is proportional to the true surface area. For the sake of the representation, we have not drawn the reflected waves. Note also that the phonon wavelength is generally not conserved at the passage of the interface. []{data-label="schematic_phonon_roughness"}](interface_small_roughness.eps "fig:"){width="0.7\linewidth"} ![(Color online) Schematic representation of the roughness induced phonon scattering. Top: case of a small roughness. The huge majority of incoming phonons see the interface as a plane, and the transmitted heat flux is proportional to the projected interface area. Bottom: case of a large roughness. Most of the phonons have a wavelength smaller than the interfacial roughness, and the transmitted heat flux is proportional to the true surface area. For the sake of the representation, we have not drawn the reflected waves. Note also that the phonon wavelength is generally not conserved at the passage of the interface. []{data-label="schematic_phonon_roughness"}](interface_large_roughness.eps "fig:"){width="0.7\linewidth"} Effect of the angle of the asperities \[results\_angle\] -------------------------------------------------------- ![(Color online) Thermal boundary conductance as a function of the angle of the asperities $\alpha$. The height of the asperities is fixed here and equal to $24$ ML. The points corresponding to $\alpha=0$ denote the conductance of a planar interface. The solid lines show the interfacial conductance rescaled by the true surface area: $G_K=G_K(\alpha=0)/\cos \alpha$. The other parameters are: Total length=$40$ $a_0$; mass ratio $m_r=2$.[]{data-label="conductance_angle_fixed_height"}](transient_conductance_energy_angle.eps){width="0.9\linewidth"} ![(Color online) Same as fig. \[conductance\_angle\_fixed\_height\], but for a constant value of the true surface area.
The interfacial heights are respectively $h = 21, 34, 43$ and $45$ monolayers for the asperity angles $\alpha=25.8, 45, 64$ and $71.7 \deg$. The solid lines show the interfacial conductance rescaled by the true surface area: $G_K=G_K(\alpha=0)/\cos \alpha$. []{data-label="conductance_angle_fixed_surface"}](transient_conductance_energy_surface.eps){width="0.9\linewidth"} So far, we have considered the case where the angle $\alpha$ was constant. We now discuss the effect of varying the slope of the model interfaces on the interfacial energy transfer. First, we change the angle at a fixed value of the interfacial height $h$, as represented in fig. \[schematic\_parameters\] c. Figure \[conductance\_angle\_fixed\_height\] shows the evolution of the Kapitza conductance as a function of the angle, at the two temperatures considered. The constant height $h$ retained here corresponds to the regime of large roughnesses in terms of figure 5 discussed before. We have also indicated the conductance of a planar interface, for the sake of comparison. The evolution of the conductance with the asperity angle is non-monotonic: first, it increases for low angles, reaches a maximum for an asperity angle between $30$ and $45$ degrees, and then decreases when the angle becomes large. In particular, the conductance for an angle greater than $60 \deg$ becomes smaller than the planar interface conductance. This is all the more remarkable as in this latter case, the true surface area may increase by a factor $4$ as compared to the planar interface. This discrepancy is best seen by comparing the simulation results with the rescaled conductance $G_K(\alpha=0)/\cos \alpha$, which accounts for the increased surface area induced by the asperities. It is immediately clear that for the lowest values of the asperity angles, $\alpha=30$ and $45 \deg$, the rescaled conductance describes interfacial energy transfer reasonably well.
On the other hand, at large values of $\alpha$, the theoretical expression greatly overestimates interfacial transport. Two phenomena may explain the poor conductance reported: first, on increasing the angle, phonon multiple scattering and back-scattering may contribute to diminish interfacial transmission. This has been evidenced by Rajabpour [*et al.*]{} using Monte-Carlo ray tracing calculations [@rajabpour2011]. Secondly, for the steep-sloped interfaces considered here, the effective surface area seems to be the projected one, not the true area, even if the height of the asperities is large. This may be understood qualitatively: for steep interfaces, even if the height is large, the lateral correlation length $l=h/ \tan \alpha$ may become comparable with the phonon wavelengths, and the effective interfacial area becomes the projected one. For these steep interfaces, the regime where the transmitted heat flux is controlled by the true surface area should occur at a very large value of the interfacial height $h$. To verify this assessment, we have run simulations where the true surface area has been kept constant (cf. fig. \[schematic\_parameters\] d). The results are displayed in fig. \[conductance\_angle\_fixed\_surface\], which reveals a different scenario as compared to the evolution shown previously in fig. \[conductance\_angle\_fixed\_height\]. The evolution of the conductance with the angle is no longer non-monotonic as previously observed; rather, it increases monotonically with $\alpha$. For the relatively small values of the angle $\alpha$, the measured conductance may even exceed the rescaled one. We have no interpretation for these large values reported here. Increasing further the angle $\alpha$, the simulation data take values close to the scaled conductances $G_K(\alpha=0)/\cos \alpha$.
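The geometric argument above is easy to quantify. The short sketch below (helper names are ours, purely illustrative) computes the lateral correlation length $l=h/\tan\alpha$ of the triangular asperities and the true-area rescaling $G_K(\alpha=0)/\cos\alpha$ used as the reference line in the figures:

```python
import math

def lateral_correlation_length(h, alpha_deg):
    """l = h / tan(alpha): lateral scale of triangular asperities of
    height h (in monolayers) and angle alpha (in degrees)."""
    return h / math.tan(math.radians(alpha_deg))

def rescaled_conductance(G_planar, alpha_deg):
    """True-surface-area rescaling G_K = G_K(alpha=0)/cos(alpha)."""
    return G_planar / math.cos(math.radians(alpha_deg))
```

For the steepest case of the text ($h=45$ ML, $\alpha=71.7\deg$), this gives $l \approx 15$ ML, illustrating how the lateral scale, rather than the height alone, can become the limiting length for the effective transmission area.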
Note in particular that the increase of the conductance is quite large, exceeding the conductance of a planar interface by a factor larger than $3$. In this regime, and for these steep interfaces, it is highly probable that the regime of rough interfaces, in the terms of the previous discussion, has been reached: heat transmission becomes controlled by the true surface area. These large enhancements of the Kapitza conductance open the way to designing interfaces with tailored interfacial energy transport properties. ![(Color online) Illustration of the different interfacial shapes simulated, namely random, rough, square and wave-like interfaces.[]{data-label="illustration_shape"}](shape.eps){width="0.9\linewidth"} Effect of the shape of the interfaces \[results\_shape\] -------------------------------------------------------- We end the presentation of the results by discussing the effect of the shape of the interface on the boundary conductance. All the previous discussion concentrated on model triangular interfaces, and it is worth asking how general the conclusions drawn from the study of this particular type of surface are. To address this question, we have considered different shapes of the interfaces, as depicted in fig. \[illustration\_shape\]. The common characteristic of these surfaces is the mean interfacial height, here fixed at a value $h=12$ MLs. Different morphologies have been designed, ranging from the random interface to the square-like surface and the wavy interface obtained by a sinusoidal modulation of the interfacial height. ![(Color online) Thermal boundary conductance for the different interfacial shapes represented in fig. \[illustration\_shape\]. The height of the different interfaces is fixed here and equal to $12$ ML.
The other parameters are: Total Length=$40$ $a_0$; mass ratio $m_r=2$.[]{data-label="conductance_shape"}](transient_conductance_energy_shape_2.eps){width="0.9\linewidth"} Figure \[conductance\_shape\] compares the interfacial conductance for the different shapes shown before, at two different temperatures. The relatively large values reported at the highest temperature may be explained by inelastic phonon scattering taking place between the two interfaces. The shape of the interface seems to affect interfacial transport considerably: random interfaces display a conductance practically equal to that of the planar interface. Rough interfaces may transfer energy slightly better than planar interfaces, depending on the temperature. On the other hand, wavy and square-like interfaces tend to favor energy transmission, the wavy pattern displaying the highest conductance among the different shapes analyzed. These results may be interpreted qualitatively. Random and rough shaped interfaces display a distribution of length scales, which tends to promote phonon scattering: even if the global height $h$ is large in terms of a triangular shaped interface, the effective area for the phonons is not the true surface area but rather the projected area, because $h$ is not the only relevant roughness parameter, and the interaction between incident phonons and small length scale asperities tends to diminish the effective transmission area. On the other hand, regularly shaped patterns do not display such a distribution of length scales, and interfacial heat transport becomes controlled by the true surface area: as soon as the majority of phonon modes has a wavelength smaller than the single length $h$ characterizing the interfacial morphology, one enters a “large roughness” regime where the energy transport becomes governed by the true surface area, and the conductance is increased as compared with the planar case.
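The role played by the true surface area can be made concrete with a short numerical sketch (helper names and profile parameters are ours, chosen for illustration). It compares the area ratio $\mathcal A/\mathcal A_0$ of a triangular profile of angle $\alpha$ (exactly $1/\cos\alpha$) with that of a sinusoidal "wavy" profile $y(x)=(h/2)\sin(2\pi x/L)$, obtained from its arc length:

```python
import numpy as np

def area_ratio(slope, L, n=200_000):
    """A/A0 for a 1D profile y(x) on [0, L]: the mean of sqrt(1 + y'(x)^2),
    evaluated with a midpoint rule on the derivative `slope`."""
    x = (np.arange(n) + 0.5) * L / n
    return float(np.mean(np.sqrt(1.0 + slope(x) ** 2)))

# Triangular asperities of angle alpha: |y'| = tan(alpha) everywhere,
# so A/A0 = 1/cos(alpha) exactly.
alpha = np.radians(45.0)
tri = area_ratio(lambda x: np.full_like(x, np.tan(alpha)), L=1.0)

# Sinusoidal profile y = (h/2) sin(2 pi x / L): y' = (h pi / L) cos(2 pi x / L).
h, L = 12.0, 40.0   # illustrative values, in monolayers / lattice units
wavy = area_ratio(lambda x: (h * np.pi / L) * np.cos(2 * np.pi * x / L), L=L)
```

In the large-roughness regime where transmission scales with the true area, such geometric ratios set the expected enhancement over the planar conductance.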
The interfacial conductance is found to be the highest for the wavy interface, because it has the greatest surface area. Conclusion ========== In summary, we have concentrated on the role of the interfacial roughness on energy transmission between solid dielectrics. Thanks to the versatility of molecular dynamics simulation techniques, we have designed model rough surfaces and probed their ability to conduct heat. The scenario emerging from the simulations is the following: when the roughness introduced is small, most of the phonons see the interface as a planar one and the effective surface area contributing to the transmitted heat flux is the projected area, not the true one. In this regime, one does not expect a Kapitza conductance much different from that of the planar interface. On the other hand, when the roughness becomes large enough, typically $20$ monolayers in our case, most of the phonons propagating towards the interface are incoherently scattered, and the effective surface area becomes the true surface area. The latter may differ significantly from the projected one, and this is the reason why the boundary conductance of rough interfaces may be greatly enhanced, as compared to planar interfaces. This has been demonstrated in this work with the example of triangular shaped interfaces displaying steep slopes: provided the lateral dimensions characterizing the interfacial roughness are large enough, the increase of the conductance may be threefold. On the other hand, we have probed the conductance of randomly rough interfaces, and shown that they display in general conductances comparable to or smaller than those of atomically planar interfaces. This difference of behavior is explained by the distribution of length scales displayed by the randomly rough surfaces, in comparison with our model patterned surfaces. The roughness analyzed in this article was large compared to the lattice constants.
The case of atomic roughness has been more widely addressed in the literature, and wave-packet simulations [@sun2010] give a clear picture of the effect of small atomic roughness on phonon transmission: long wavelength phonons see the interface as ideal and do not contribute to the change of the thermal boundary conductance. On the other hand, short wavelength phonons strongly interact with the small scale roughness, and the corresponding change in phonon transmission is found to depend on the structure of the interface: for regularly shaped patterned interfaces, constructive wave interference leads to enhanced transmission, thereby increasing the boundary conductance [@tian2012]. Random atomic roughness promotes incoherent phonon scattering, reducing the thermal conductance. These observations are consistent with MD results concerning the cross-plane conductivity of superlattices with rough interfaces: for regularly patterned interfaces, the cross-plane conductivity is slightly greater than that of ideal superlattices [@termentzidis2011_2], and the boundary conductance is enhanced [@zhou2013]. When the roughness is random, the cross-plane conductivity shows a small decrease as compared with planar interfaces [@daly2002; @termentzidis2011]. The small amplitude of the reduction is related to the small proportion of energy carriers affected by the atomic roughness. Small reductions have also been reported for Si/Ge superlattices with one layer of interfacial mixing in the incoherent regime of transport [@landry2009b]. While it is reasonable to rationalize such variations in terms of an atomic interfacial roughness, it is less clear for superlattices having thicker mixed layers. Large enhancements have been observed in this latter case, using MD [@stevens2007]. Further work is clearly needed to understand if part of these enhancements is explained by the large scale interfacial roughness [@tian2012]. Most of the results reported here concern regularly shaped patterned interfaces.
MD results seem to indicate that these patterned interfaces are good candidates to enhance the intrinsic boundary conductance between two semi-conductors. On the other hand, randomly rough surfaces should be considered if one prefers to reduce the Kapitza conductance between two solids [@hopkins2010]. In particular, in the context of superlattices, randomly rough interfaces should be designed if one aims at tailoring materials with the lowest cross-plane thermal conductivity. We have also introduced a simple model to rationalize the variations of the thermal boundary conductance as a function of the interfacial height of our model rough interfaces. Further analytical work is clearly desired to understand the interplay between the interface morphology and interfacial energy transport. This will make it possible to define new directions for the design of interfaces with optimized energy transport properties, at a relatively low cost. Simulations have been run at the “Pole Scientifique de Modélisation Numérique” de Lyon using the LAMMPS open source package [@plimpton1995]. We acknowledge interesting discussions with P. Chantrenne, T. Biben, P.-O. Chapuis and D. Lacroix. [99]{} Schneider H, Fujiwara K, Grahn H T, Klitzing K v and Ploog K, [*Appl. Phys. Lett.*]{} [**56**]{} 605–7 (1990) Wagner S J, Meier J, Helmy A A, Aitchison J S, Sorel M and Hutchings D, [*J. Opt. Soc. Am. B*]{} [**24**]{} 1557 (2007) Robert J L, Bosc F, Sicart J and Mosser V, [*Phys. Status Solidi b*]{} [**211**]{}, 481 (1999) C. Wan, Y. Wang, N. Wang, W. Norimatsu, M. Kusunoki, and K. Koumoto, [*Sci. Technol. Adv. Mater.*]{} [**11**]{}, 044306 (2010) A. Hashibon and C. Elsasser, [*Phys. Rev. B*]{} [**84**]{}, 144117 (2011) B. Qiu, L. Sun, and X. Ruan, [*Phys. Rev. B*]{} [**83**]{}, 035312 (2011) M.D. Losego, M.E. Grady, N.R. Sottos, D.G. Cahill and P.V. Braun, [*Nature Materials*]{} [**11**]{} (2012) 502 P.J. O’Brien, S. Shenogin, J.X. Liu, P.K. Chow, D. Laurencin, P.H. Mutin, M. Yamaguchi, P. Keblinski, G.
Ramanath, [*Nature Materials*]{} [**12**]{} (2013) 118 B. Gotsmann and M.A. Lantz, [*Nature Mat.*]{} [**12**]{} (2013) 59 P.E. Hopkins, L. M. Phinney, J.R. Serrano and T.E. Beechem, [*Phys. Rev. B*]{} [**82**]{} 085307 (2010) P.E. Hopkins, J.C. Duda, C.W. Petz and J.A. Floro, [*Phys. Rev. B*]{} [**84**]{} 035438 (2011) J.C. Duda and P.E. Hopkins, [*App. Phys. Lett.*]{} [**100**]{} (2012) 111602 D. Kekrachos, [*J. Phys. Condens. Matter*]{} [**2**]{} (1990) 2637 D. Kekrachos, [*J. Phys. Condens. Matter*]{} [**3**]{} (1991) 1443 J. Amrit, [*Phys. Rev. B*]{} [**81**]{} (2010) 054303 N.S. Shiren, [*Phys. Rev. Lett.*]{} [**47**]{} (1981) 1466 I.N. Adamenko and I.M. Fuks, [*Sov. Phys. JETP*]{} [**32**]{} (1971) 1123 , edited by T.M. Tritt, Kluwer Academic/Plumer Publishers K. Termentzidis, J. Parasuraman, C.A. Da Cruz, S. Merabia, D. Angelescu, F. Marty, T. Bourouina, X. Kleber, P. Chantrenne and P. Basset, [*Nanoscale Research Letters*]{}, [**6**]{} (2011) 288 K. Termentzidis, S. Merabia, P. Chantrenne and P. Keblinski, [*Int. J. Heat Mass Transf.*]{} [**54**]{} (2011) 2014 A. Rajabpour, S.M.W. Allaei, Y. Chalopin, F. Kowsary and S. Volz, [*J. App. Phys.*]{} [**110**]{} (2011) 113529 K. Termentzidis, P. Chantrenne and P. Keblinski, [*Phys. Rev. B*]{} [**79**]{} (2009) 214307 S. Merabia and K. Termentzidis, [*Phys. Rev. B*]{} [**86**]{} (2012) 094303 P. Chantrenne and J.-L. Barrat, [*J. Heat Transfer-Transactions of the ASME*]{} [**126**]{} (2004) 577 D. Frenkel and B. Smit, [*Understanding Molecular simulation: from algorithms to applications*]{} Academic Press 2002 R.J. Stevens, L.V. Zhigilei and P.M. Morris, [*Int. J. Heat Mass Transfer*]{} [**50**]{} (2007) 3977 E.S. Landry and A.J.H. McGaughey, [*Phys. Rev. B*]{} [**80**]{} (2009) 165304 S. Shenogin, L. Xue, R. Ozisik, P. Keblinskli and D.G. Cahill, [*J. App. Phys.*]{} [**95**]{} (2004) 8136 L. Hu, T. Desai and P. Keblinski, [*Phys. Rev. B*]{} [**83**]{} (2011) 195423 H. Zhao and J.B. Freund, J. App. Phys. 
97 (2005) 024903 W. Zhang, T.S. Fisher and N. Mingo, [*J. Heat Transfer*]{} [**129**]{} (2007) 483 W.A. Little, [*Can. J. Phys.*]{} [**37**]{} (1959) 334 P.E. Hopkins and P.M. Norris, [*ASME J. Heat Transfer*]{} [**131**]{} (2009) 022402 S. Plimpton, J. Comp. Phys. [**117**]{} (1995), 1-19: see http://lammps.sandia.gov. Periodic boundary conditions used in all directions impose the upper bound of the phonon wavelength. L. Sun and J.Y. Murthy, [*J. Heat Transfer*]{} [**132**]{} (2010) 102403 Z. Tian, K. Esfarjani and G. Chen, [*Phys. Rev. B*]{} [**86**]{} (2012) 235304 X.W. Zhou, R.E. Jones, C.J. Kimmer, J.C. Duda and P.E. Hopkins, [*Phys. Rev. B*]{} [**87**]{} (2013) 094303 B.C. Daly, H.J. Maris, K. Imamura, S. Tamura, [*Phys. Rev. B*]{} [**66**]{} (2002) 024301 E.S. Landry and A.J.H McGaughey, [*Phys. Rev. B*]{} [**79**]{} (2009) 075316 K. Termentzidis, P. Chantrenne, J.-Y. Duquesne and A. Saci, [*Jour. Phys.: Condens. Matt.*]{}, [**22**]{} 2010, 475001 P.K. Schelling, S.R. Phillpot and P. Keblinski, [*Phys. Rev. B*]{} [**65**]{} (2002) 144306
--- abstract: | We present the current fastest deterministic algorithm for $k$-SAT, improving the upper bound $(2-2/k)^{n + o(n)}$ due to Moser and Scheder \[STOC’11\]. The algorithm combines a branching algorithm with the derandomized local search, whose analysis relies on a special sequence of clauses called a chain, and a generalization of covering codes based on linear programming. We also provide a more ingenious branching algorithm for $3$-SAT to establish the upper bound $1.32793^n$, improved from $1.3303^n$. author: - | S. Cliff Liu\ Princeton University\ bibliography: - 'det\_ksat.bib' title: | Chain, Generalization of Covering Code,\ and Deterministic Algorithm for k-SAT[^1] --- \[dummy\][Lemma]{} \[dummy\][Definition]{} \[dummy\][Remark]{} \[dummy\][Theorem]{} \[dummy\][Corollary]{} \[dummy\][Claim]{} \[dummy\][Observation]{} Introduction {#intro} ============ As fundamental NP-complete problems, $k$-SAT and especially $3$-SAT have been extensively studied for decades. Numerous conceptual breakthroughs have been achieved via continued progress of exponential-time algorithms, both randomized and deterministic. The first provable algorithm for solving $k$-SAT on $n$ variables in less than $2^n$ steps was presented by Monien and Speckenmeyer, using the concept of autark assignment [@monien1985solving]. Later their bound $1.619^n$ for $3$-SAT was improved to $1.579^n$ and $1.505^n$ respectively [@Schiermeyer1970Solving; @DBLP:journals/tcs/Kullmann99]. These algorithms follow a branching manner, i.e., recursively reducing the formula size by branching and fixing variables deterministically, and thus are called *branching algorithms*. As for randomized algorithms, two influential ones are PPSZ and Schöning’s local search [@DBLP:journals/jacm/PaturiPSZ05; @schoning1999probabilistic].
There has been a long line of research improving the bound $(4/3)^n$ of local search for $3$-SAT, including HSSW and local search combined with PPSZ [@hofmeister2002probabilistic; @iwama2004improved]. In a breakthrough work, Hertli closed the gap between the Unique and General cases for PPSZ [@hertli20143]. (By Unique it means the formula has at most one satisfying assignment.) In short, among randomized algorithms, PPSZ for $k$-SAT is currently the fastest, although with one-sided error (see PPSZ in Table \[table\_result\]). Unfortunately, PPSZ for the General case seems hard to derandomize due to the excessive usage of random bits [@DBLP:conf/sat/Rolf05; @DBLP:journals/corr/abs-2001-06536]. In contrast to the hardness of derandomizing PPSZ, local search can be derandomized using the so-called covering code [@dantsin2002deterministic]. Subsequent deterministic algorithms focused on boosting local search for $3$-SAT to the bounds $1.473^n$ and $1.465^n$ [@DBLP:journals/tcs/BrueggemannK04; @DBLP:conf/latin/Scheder08]. In 2011, Moser and Scheder fully derandomized Schöning’s local search with another covering code for the choice of flipping variables within the unsatisfied clauses, which was immediately improved by derandomizing HSSW for $3$-SAT, leading to the current best upper bounds for $k$-SAT (see Table \[table\_result\]) [@moser2011full; @DBLP:journals/algorithmica/MakinoTY13]. Since then, the randomness in Schöning’s local search has been entirely replaced by deterministic choices, but the bounds remain untouched. How can the barrier be broken? The difficulty arises in both directions. If attacking this without local search, one has to derandomize PPSZ or propose a radically new algorithm. If instead attacking this by derandomizing a local search-based algorithm, one must greatly reduce the search space. Our method is a combination of a branching algorithm and the derandomized local search.
As we mentioned in the second paragraph of this paper, branching algorithms are intrinsically deterministic; therefore it remains to leverage the upper bounds for both of them by some tradeoff. The tradeoff we found is the weighted size of a carefully chosen set of chains, where a chain is a sequence of clauses sharing variables only with the clauses next to them, such that a branching algorithm either solves the formula within the desired time or returns a large enough set of chains. The algorithm is based on the study of autark assignments from [@monien1985solving] with further refinement, whose output can be regarded as a generalization of the maximal independent clause set from HSSW [@hofmeister2002probabilistic], which reduces the $k$-CNF to a $(k-1)$-CNF. [^2] The search space equipped with chains is rather different from those in previous derandomizations [@dantsin2002deterministic; @DBLP:journals/algorithmica/MakinoTY13; @moser2011full]: it is a Cartesian product of a finite number of non-uniform spaces. Using linear programming, we prove that such a space can be perfectly covered, and searched by the derandomized local search within the aimed time. Additionally, unlike the numerical upper bound in HSSW [@hofmeister2002probabilistic], we give a closed form. The rest of the paper is organized as follows. In §[\[pre\]]{} we give basic notations, definitions related to chains, and our algorithmic framework. We show how to generalize covering codes to cover any space equipped with chains in §[\[GCC\_section\]]{}. Then we use such codes in the derandomized local search in §[\[DLS\_section\]]{}. In §[\[UBK\_section\]]{}, we prove the upper bound for $k$-SAT. A more ingenious branching algorithm for $3$-SAT is presented in §[\[UB3\_section\]]{}. Some upper bound results are highlighted in Table \[table\_result\], with the main results formally stated in Theorem \[main\_k\_sat\_general\_form\] of §[\[UBK\_section\]]{} and Theorem \[main2\] of §[\[UB3\_section\]]{}.
We conclude this paper in §[\[conclusion\]]{} with some discussions.

  $k$   Our Result    Makino et al.   Moser&Scheder   Dantsin et al.   PPSZ(randomized)
  ----- ------------- --------------- --------------- ---------------- ------------------
  3     **1.32793**   1.3303          1.33334         1.5              1.30704
  4     **1.49857**   -               1.50001         1.6              1.46899
  5     **1.59946**   -               1.60001         1.66667          1.56943
  6     **1.66646**   -               1.66667         1.71429          1.63788

  : The rounded-up base $c$ in the upper bound $c^n$ of our deterministic algorithm for $k$-SAT and the corresponding upper bounds in previous results [@DBLP:journals/algorithmica/MakinoTY13; @moser2011full; @dantsin2002deterministic] as well as in the currently fastest randomized algorithm [@DBLP:journals/jacm/PaturiPSZ05; @hertli20143]. []{data-label="table_result"}

Preliminaries {#pre} ============= Notations --------- We study formulae in Conjunctive Normal Form (CNF). Let $V=\{v_i | i \in [n]\}$ be a set of $n$ boolean variables. For all $i \in [n]$, a literal $l_i$ is either $v_i$ or $\bar{v}_i$. A clause $C$ is a disjunction of literals and a CNF $F$ is a conjunction of clauses. A $k$-clause is a clause that consists of exactly $k$ literals, and an $\le k$-clause consists of at most $k$ literals. If every clause in $F$ is an $\le k$-clause, then $F$ is a $k$-CNF. An *assignment* is a function $\alpha: V \mapsto \{0, 1\}$ that maps each $v \in V$ to a truth value in $\{0,1\}$. A *partial assignment* is such a function restricted to $V' \subseteq V$. We use $F|\alpha(V')$ to denote the formula derived by fixing the values of the variables in $V'$ according to the partial assignment $\alpha(V')$. A clause $C$ is said to be *satisfied* by $\alpha$ if $\alpha$ assigns at least one literal in $C$ to $1$. $F$ is *satisfiable* iff there exists an $\alpha$ satisfying all clauses in $F$, and we call such an $\alpha$ a *satisfying assignment* of $F$. The $k$-SAT problem asks to find a satisfying assignment of a given $k$-CNF $F$ or to prove its non-existence if $F$ is unsatisfiable.
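To make the notation concrete, here is a minimal Python sketch of these definitions (the DIMACS-style signed-integer encoding of literals and the helper names are our choices for illustration). It also includes the trivial exhaustive $2^n$ search, the baseline that the algorithms discussed in this paper improve upon:

```python
from itertools import product

# A literal is a nonzero int: +i encodes v_i, -i encodes its negation.
# A k-CNF is a list of clauses; each clause is a tuple of literals.

def satisfies(assignment, cnf):
    """assignment: dict var -> 0/1. True iff every clause has a true literal."""
    return all(any((assignment[abs(l)] == 1) == (l > 0) for l in c) for c in cnf)

def brute_force_sat(cnf, n):
    """Exhaustive 2^n search over all assignments of variables 1..n."""
    for bits in product((0, 1), repeat=n):
        a = {i + 1: bits[i] for i in range(n)}
        if satisfies(a, cnf):
            return a
    return None  # unsatisfiable
```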
Let $X$ be a literal, a clause, or a collection of either of them; we use $V(X)$ to denote the set of all the variables that appear in $X$. We say that $X$ and $X'$ are *independent* if $V(X) \cap V(X') = \emptyset$, and that $X$ *overlaps* with $X'$ otherwise. A *word* of length $n$ is a vector from $\{0, 1\}^n$. The *Hamming space* $H \subseteq \{0, 1\}^n$ is a set of words. Given two words $\alpha_1, \alpha_2 \in H$, the *Hamming distance* $d(\alpha_1, \alpha_2) = \|\alpha_1 - \alpha_2 \|_1$ is the number of bits on which $\alpha_1$ and $\alpha_2$ disagree. The reason for using $\alpha$ for a word as well as for an assignment is straightforward: giving each variable an index $i \in [n]$, a word of length $n$ naturally corresponds to an assignment, and the two will be used interchangeably. Throughout the paper, $n$ always denotes the number of variables in the formula and will be omitted if the context is clear. We use $O^*(f(n)) = \text{poly}(n) \cdot f(n)$ to suppress polynomial factors, and use $\mathcal{O}(f(n)) = 2^{o(n)} \cdot f(n)$ to suppress sub-exponential factors. Preliminaries for Chains {#PC_section} ------------------------ In this subsection, we propose our central concepts, which are the basis of our analysis. \[chain\_def\] Given integers $k \ge 3$ and $\tau \ge 1$, a $\tau$-*chain* $\mathcal{S}^{(k)}$ is a sequence of $\tau$ $k$-clauses $\langle C_1, \dots, C_{\tau} \rangle$ satisfying that $\forall i, j \in [\tau]$, $V(C_i) \cap V(C_j) = \emptyset$ iff $|i - j | > 1$. If the context is clear, we will use $\mathcal{S}$, $\tau$-chain, or simply chain for short. \[instance\_def\] A set of chains $\mathcal{I}$ is called an *instance* if $\forall \mathcal{S}, \mathcal{S}' \in \mathcal{I}$, $V(\mathcal{S}) \cap V(\mathcal{S}') = \emptyset$ for $\mathcal{S} \neq \mathcal{S}'$. In other words, each clause in a chain overlaps with, and only with, the clauses next to it (if they exist), and chains in an instance are mutually independent.
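The chain condition of Definition \[chain\_def\] can be checked mechanically. The sketch below (using the same signed-integer literal encoding as is common for CNF, an illustrative choice) tests whether a sequence of clauses is a chain, i.e., $V(C_i) \cap V(C_j) = \emptyset$ iff $|i-j| > 1$:

```python
def is_chain(clauses):
    """Check the chain condition: consecutive clauses must share a variable,
    and non-consecutive clauses must be variable-disjoint."""
    var = [frozenset(abs(l) for l in c) for c in clauses]
    t = len(var)
    for i in range(t):
        for j in range(i + 1, t):
            overlap = bool(var[i] & var[j])
            if overlap != (j - i == 1):
                return False
    return True
```

For example, $\langle (v_1 \vee v_2 \vee v_3), (v_3 \vee v_4 \vee v_5), (v_5 \vee v_6 \vee v_7) \rangle$ is a $3$-chain, while a sequence whose first and third clauses share a variable is not.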
Given a chain $\mathcal{S}$, define the *solution space* of $\mathcal{S}$ as $A \subseteq \{0, 1\}^{|V(\mathcal{S})|}$ such that a partial assignment $\alpha$ on $V(\mathcal{S})$ satisfies all clauses in $\mathcal{S}$ iff $\alpha(V(\mathcal{S})) \in A$. [^3] We define a vital algebraic property of chains, which will play a key role in the construction of our generalized covering code. \[charactoristic\_def\] Let $A$ be the solution space of chain $\mathcal{S}^{(k)}$, define $\lambda \in \mathbb{R}$ and $\pi: A \mapsto [0, 1]$ as the *characteristic value* and *characteristic distribution* of $\mathcal{S}^{(k)}$ respectively, where $\lambda$ and $\pi$ are a feasible solution to the following linear program $\text{LP}_A$: $$\begin{aligned} & \sum_{a \in A} \pi(a) = 1 \\ & \lambda = \sum_{a \in A} \left( \pi(a) \cdot (\frac{1}{k-1}) ^ {d(a, a^*)} \right) & \forall a^* \in A \\ & \pi(a) \ge 0 & \forall a \in A \end{aligned}$$ The variables in $\text{LP}_A$ are $\lambda$ and $\pi(a)~(\forall a \in A)$. There are $|A| + 1$ variables and $|A| + 1$ equality constraints in $\text{LP}_A$. One can work out the determinant of the coefficient matrix to see that it has full rank, so the solution is unique if feasible. Specifically, $\lambda \in (0,1)$. Algorithmic Framework {#AF_section} --------------------- Our algorithm (Algorithm \[Framework\]) is a combination of a branching algorithm called , and a derandomized local search called . either solves $F$ or provides a large enough instance to for further use, which essentially reduces the Hamming space exponentially. $k$-CNF $F$ a satisfying assignment or `Unsatisfiable` $(F)$ either solves $F$ or returns an instance $\mathcal{I}$ $(F,\mathcal{I})$ Generalization of Covering Code {#GCC_section} =============================== First of all, we introduce the covering code, then show how to generalize it for the purpose of our derandomized local search.
Preliminaries for Covering Code ------------------------------- The *Hamming ball* of *radius* $r$ and *center* $\alpha$ $B_\alpha(r) = \{ \alpha' | d(\alpha, \alpha') \le r \}$ is the set of all words with Hamming distance at most $r$ from $\alpha$. A *covering code* of *radius* $r$ for Hamming space $H$ is a set of words $C(r) \subseteq H$ satisfies $\forall \alpha' \in H, \exists \alpha \in C(r)$, such that $d(\alpha, \alpha') \le r$, i.e., $H \subseteq \bigcup_{\alpha \in C(r)} B_\alpha(r)$, and we say $C(r)$ *covers* $H$. Let $\ell$ be a non-negative integer and set $[\ell]^* = [\ell] \cup \{0\}$, a set of covering codes $\{C(r) | r \in [\ell]^*\}$ is an *$\ell$-covering code* for $H$ if $\forall r \in [\ell]^*, C(r) \subseteq H$ and $H \subseteq \bigcup_{r \in [\ell]^*} \bigcup_{\alpha \in C(r)} B_\alpha(r)$, i.e., $\{C(r) | r \in [\ell]^*\}$ covers $H$. The following lemma gives the construction time and size of covering codes for the uniform Hamming spaces $\{0, 1\}^n$. \[cover\_01\_space\] Given $\rho \in (0, \frac{1}{2})$, there exists a covering code $C(\rho n)$ for Hamming space $\{0, 1\}^n$, such that $|C(\rho n)| \le O^*(2^{(1 - h(\rho)) n})$ and $C(\rho n)$ can be deterministically constructed in time $O^*(2^{(1 - h(\rho)) n})$, where $h(\rho)=-\rho\log{\rho}-(1-\rho)\log{(1-\rho)}$ is the *binary entropy function*. Generalized Covering Code {#GCC_subsection} ------------------------- In this subsection we introduce our generalized covering code, including its size and construction time. First of all we take a detour to define the *Cartesian product* of $\sigma$ sets of words as $ X_1 \times \dots \times X_{\sigma} = \prod_{i \in [\sigma]} X_i = \{ \uplus_{i \in [\sigma]} \alpha_i | \forall i \in [\sigma], \alpha_i \in X_i \}$, where $\uplus_{i \in [\sigma]} \alpha_i$ is the concatenation from $\alpha_1$ to $\alpha_{\sigma}$. 
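As a toy illustration of the covering property (a brute-force check, not the construction of Lemma \[cover\_01\_space\]), one can verify directly that a candidate set of words covers $\{0,1\}^n$ within a given radius; e.g., $\{000, 111\}$ is a covering code of radius $1$ for $\{0,1\}^3$:

```python
from itertools import product

def hamming_d(a, b):
    """Hamming distance between two equal-length 0/1 tuples."""
    return sum(x != y for x, y in zip(a, b))

def covers(code, r, n):
    """True iff every word of {0,1}^n lies within distance r of some codeword,
    i.e., the Hamming balls B_alpha(r) around the code cover the space."""
    return all(any(hamming_d(w, c) <= r for c in code)
               for w in product((0, 1), repeat=n))
```

This exhaustive check takes time $2^n \cdot |C(r)|$ and is only meant to make the definition concrete; the point of Lemma \[cover\_01\_space\] is that codes of size $O^*(2^{(1-h(\rho))n})$ can be constructed deterministically.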
Then we claim that the Cartesian product of covering codes is also a good covering code for the Cartesian product of the Hamming spaces they cover separately. \[product\_all\] Given integer $\chi > 1$, for each $i \in [\chi]$, let $H_i$ be a Hamming space and $C_i(r_i)$ be a covering code for $H_i$. If $C_i(r_i)$ can be deterministically constructed in time $O^*(f_i(n))$ and $|C_i(r_i)| \le O^*(g_i(n))$ for all $i \in [\chi]$, then there exists a covering code $\mathfrak{C}$ of radius $\sum_{i \in [\chi]} r_i$ for the Hamming space $\prod_{i \in [\chi]} H_i$ such that $\mathfrak{C}$ can be deterministically constructed in time $O^*(\sum_{i \in [\chi]} f_i(n) + \prod_{i \in [\chi]} g_i(n))$ and $|\mathfrak{C}| \le O^*(\prod_{i \in [\chi]} g_i(n))$. We prove that the covering code $\prod_{i \in [\chi]} C_i(r_i)$ can serve as such a $\mathfrak{C}$. For any $\alpha' \in \prod_{i \in [\chi]} H_i$, one can write $\alpha' = \uplus_{i \in [\chi]} {\alpha_i}'$ where ${\alpha_i}' \in H_i$. Then by definition, $\exists \alpha_i \in C_i(r_i)$, such that $d(\alpha_i, {\alpha_i}') \le r_i$. Now let $\alpha = \uplus_{i \in [\chi]} \alpha_i$; we have $d(\alpha, \alpha') = \sum_{i \in [\chi]} d(\alpha_i, {\alpha_i}') \le \sum_{i \in [\chi]} r_i$. To construct $\mathfrak{C}$, we first construct all $C_i(r_i)$ in time $O^*(\sum_{i \in [\chi]} f_i(n))$, then concatenate every $\alpha_1$ to every $\alpha_{\chi}$, which can be done in time $O^*(\prod_{i \in [\chi]} |C_i(r_i)|)$. So the total construction takes time $O^*(\sum_{i \in [\chi]} f_i(n) + \prod_{i \in [\chi]} g_i(n))$ and is deterministic. Obviously $|\mathfrak{C}| \le O^*(\prod_{i \in [\chi]} g_i(n))$. This proves the lemma. Our result on generalized covering code is given below.
\[general\_space\] Let $A$ be the solution space of chain $\mathcal{S}^{(k)}$ with characteristic value $\lambda$. For any $\nu = \Theta(n)$, there exists an $\ell$-covering code $\{C(r) | r \in [\ell]^*\}$ for Hamming space $H = {A}^{\nu}$, where $\ell = \lfloor - \nu \log_{k-1}{\lambda} + 2 \rfloor$, such that $|C(r)| \le O^*({\lambda}^{- \nu} / (k-1)^r)$ and $C(r)$ can be deterministically constructed in time $O^*({\lambda}^{- \nu} / (k-1)^r)$, for all $r \in [\ell]^*$. First, we show the existence of such an $\ell$-covering code by a probabilistic argument. For each $r \in [\ell]^*$, let $s(r) = \lceil -3 \log_{k-1}\lambda \cdot \ln|A| \cdot {\nu}^2 {\lambda}^{- \nu} / (k-1)^r \rceil$. We build $C(r)$ from $\emptyset$ by repeating the following $s(r)$ times independently: choose $\nu$ words $a_j~(j \in [\nu])$ independently from $A$ according to distribution $\pi$ and concatenate them to get a word $\alpha \in A^{\nu}$, then add $\alpha$ to $C(r)$ (repetitions allowed), where $\pi$ is the characteristic distribution (Definition \[charactoristic\_def\]). Clearly, $|C(r)| \le s(r) = O^*(\lambda^{-\nu} / (k-1)^r)$. For an arbitrary fixed $\alpha^* = \uplus_{j \in [\nu]} {a_j}^* \in A^{\nu}$, the following must hold: $$\begin{aligned} & \sum_{r} \left( \Pr[d(\alpha, \alpha^*) = r] \cdot (\frac{1}{k-1}) ^ r \right) \notag \\ = & \sum_r \sum_{\sum_{j \in [\nu]} r_j = r} \left( \Pr[\textit{for all}~j \in [\nu],d(a_j, {a_j}^*) = r_j] \cdot (\frac{1}{k-1})^{\sum_{j \in [\nu]} r_j} \right) \notag \\ = & \prod_{j \in [\nu]} \sum_{r_j} \left( \Pr[d(a_j, {a_j}^*) = r_j] \cdot (\frac{1}{k-1})^{r_j} \right) \notag \\ = & \left( \sum_{a \in A} \pi(a) \cdot (\frac{1}{k-1}) ^ {d(a, a^*)} \right)^{\nu} \notag \\ = & \lambda^{\nu} . \label{expectation1} \end{aligned}$$ The third line follows from independence and the last line follows from the definition of $\pi$. We now show that $\{C(r) | r \in [\ell]^*\}$ covers $H$ with positive probability.
Rewrite (\[expectation1\]) as: $$\begin{aligned} \lambda^{\nu} & = \sum_{r \le \ell} \left( \Pr[d(\alpha, \alpha^*) = r] \cdot (\frac{1}{k-1}) ^ r \right) + \sum_{r > \ell} \left( \Pr[d(\alpha, \alpha^*) = r] \cdot (\frac{1}{k-1}) ^ r \right) \notag \\ & \le \sum_{r \le \ell} \left( \Pr[d(\alpha, \alpha^*) = r] \cdot (\frac{1}{k-1}) ^ r \right) + (\frac{1}{k-1})^{\ell} \notag \\ & \le \sum_{r \le \ell} \left( \Pr[d(\alpha, \alpha^*) = r] \cdot (\frac{1}{k-1}) ^ r \right) + \lambda^{\nu} / (k-1) \notag . \end{aligned}$$ The last inequality follows by $\ell \ge -\nu \log_{k-1}(\lambda) + 1$. Thus we have: $$\sum_{r \le \ell} \left( \Pr[d(\alpha, \alpha^*) = r] \cdot (\frac{1}{k-1}) ^ r \right) \ge \frac{k-2}{k-1} \lambda^{\nu}.$$ Then there must exist $r^* \in [\ell]^*$ such that $\Pr[d(\alpha, \alpha^*) = r^*] \ge \lambda^{\nu} (k-2) (k-1)^{r^*-1} / (\ell + 1)$. Using this as a lower bound for $\Pr[d(\alpha, \alpha^*) \le r^*]$, we obtain: $$\begin{aligned} \Pr[\alpha^* \notin \bigcup_{r \in [\ell]^*} \bigcup_{\alpha \in C(r)} B_\alpha(r)] & \le \Pr[\alpha^* \notin \bigcup_{\alpha \in C(r^*)} B_\alpha(r^*)] \\ & \le (1 - \Pr[d(\alpha, \alpha^*) \le r^*]) ^ {s(r^*)} \\ &\le (1 - \lambda^{\nu} (k-2) (k-1)^{r^*-1} / (\ell + 1)) ^ {s(r^*)} \\ &\le \exp(- \lambda^{\nu} (k-2) (k-1)^{r^*-1} / (\ell + 1) \cdot s(r^*) ) \\ &\le {|A|}^{-2\nu} . \end{aligned}$$ The last inequality follows from $s(r) \ge -3 \log_{k-1}\lambda \cdot \ln|A| \cdot {\nu}^2 {\lambda}^{- \nu} / (k-1)^r$ and $\ell \le - \nu \log_{k-1}{\lambda} + 2 $. There are $|A|^{\nu}$ words in $H$, so the probability that some $\alpha^* \in H$ is not covered by any $C(r)$ is upper bounded by $|A|^{\nu} \cdot {|A|}^{-2\nu} = {|A|}^{-\nu} = 2^{-\Theta(n)} < 1$. As a result, the $\ell$-covering code in Lemma \[general\_space\] exists. The argument for size and construction time is the same as in [@DBLP:journals/algorithmica/MakinoTY13]. W.l.o.g., let $d \ge 2$ be a constant divisor of $\nu$.
By partitioning $H$ into $d$ blocks and applying the approximation algorithm for the set covering problem in [@dantsin2002deterministic], we have that an $(\ell / d)$-covering code for each block can be deterministically constructed in time $O^*({|A|}^{3 \nu / d})$ and $|C(r)| \le O^*(\lambda^{-\nu / d} / (k-1)^r)$ for each $r \in [\ell / d] ^*$, because we can explicitly calculate $r^*$ for each word to cover. To get an $\ell$-covering code, note that any $r \in [\ell]^*$ can be written as $r = \sum_{j \in [d]} r_j$ where $r_j \in [\ell / d]^*$, thus $C(r)$ can be constructed by taking the Cartesian product of $d$ covering codes $C(r_j)~(j \in [d])$. So by Lemma \[product\_all\], the construction time for $C(r)$ is: $$\sum_{\sum_{j \in [d]} r_j = r} \left( O^*(\sum_{j \in [d]} {|A|}^{3 \nu / d} + \prod_{j \in [d]} \lambda^{-\nu / d} / (k-1)^{r_j}) \right) = O^*(\lambda^{-\nu} / (k-1)^r).$$ The equality follows by taking a large enough $d$ and observing that there are $O(r^d)$ ways to partition $r$ into $d$ non-negative integers. Also by Lemma \[product\_all\], the size of the concatenated covering code is upper bounded by its construction time, which gives $|C(r)| \le O^*(\lambda^{-\nu} / (k-1)^r)$. This proves the lemma. Derandomized Local Search {#DLS_section} ========================= In this section, we present our derandomized local search (Algorithm \[dls\_alg\]). $k$-CNF $F$, instance $\mathcal{I}$ a satisfying assignment or `Unsatisfiable` construct covering code $\mathfrak{C}$ for Hamming space $H(F, \mathcal{I})$ (Definition \[Hamming\_instance\]) \[line\_construct\] \[line\_searchball\] $\alpha^*$ `Unsatisfiable` The algorithm first constructs the generalized covering code and stores it (Line \[line\_construct\]), then calls a ball-searching subroutine (Line \[line\_searchball\]) to search inside each Hamming ball; this subroutine is the same algorithm proposed in [@moser2011full], whose running time is stated in the following lemma.
\[full\_ball\] Given $k$-CNF $F$, if there exists a satisfying assignment $\alpha^*$ for $F$ in $B_\alpha(r)$, then $\alpha^*$ can be found in time $(k-1)^{r + o(r)}$. Our generalized covering code is able to cover the following Hamming space. \[Hamming\_instance\] Given $k$-CNF $F$ and instance $\mathcal{I}$, the Hamming space for $F$ and $\mathcal{I}$ is defined as $H(F, \mathcal{I}) = H_0 \times \prod_i H_i$, where: - $H_0 = \{0, 1\}^{n'}$ where $n' = n - |V(\mathcal{I})|$. - $H_i = {A_i}^{\nu_i}$ for all $i$, where $A_i$ is a solution space and $\nu_i = \Theta(n)$ is the number of chains in $\mathcal{I}$ with solution space $A_i$. [^4] Clearly all satisfying assignments of $F$ lie in $H(F, \mathcal{I})$, because $\prod_i H_i$ contains all assignments on $V(\mathcal{I})$ which satisfy all clauses in $\mathcal{I}$, and $H_0$ contains all possible assignments of the variables outside $\mathcal{I}$. Therefore, to solve $F$, it is sufficient to search the entire $H(F, \mathcal{I})$. \[gcc\_def\] Given $\rho \in (0, \frac{1}{2})$ and Hamming space $H(F, \mathcal{I})$ as above, for $L \in \mathbb{Z}^*$, define covering code $\mathfrak{C}(L)$ for $H(F, \mathcal{I})$ as a set of covering codes $\{C(r) | (r - \rho n') \in [L]^*\}$ that satisfies $C(r) \subseteq H(F, \mathcal{I})$ for all $r$ and $H(F, \mathcal{I}) \subseteq \bigcup_{(r - \rho n') \in [L]^*} \bigcup_{\alpha \in C(r)} B_\alpha(r) $, i.e., $\mathfrak{C}(L)$ covers $H(F, \mathcal{I})$. \[all\_code\] Given Hamming space $H(F, \mathcal{I})$ and $A_i, \nu_i$ as above, let $L = \sum_i \ell_i$ where $\ell_i = \lfloor -\nu_i \log_{k-1}{\lambda_i} + 2 \rfloor$ and $\lambda_i$ is the characteristic value of the chain with solution space $A_i$.
Given $\rho \in (0, \frac{1}{2})$, covering code $\mathfrak{C}(L) = \{C(r) | (r - \rho n') \in [L]^*\}$ for $H(F, \mathcal{I})$ can be deterministically constructed in time $O^*(2^{(1 - h(\rho)) n'} \prod_i {\lambda_i}^{-\nu_i} )$ and $|C(r)| \le O^*(2^{(1 - h(\rho)) n'} / (k-1)^{r - \rho n'} \prod_i {\lambda_i}^{-\nu_i} ) $ for all $(r - \rho n') \in [L]^*$. To construct $\mathfrak{C}(L)$ for $H(F, \mathcal{I})$, we construct covering code $C_0(\rho n')$ for $H_0 = \{0,1\}^{n'}$ and an $\ell_i$-covering code for $H_i = {A_i}^{\nu_i}$ for all $i$, then take the Cartesian product of all the codes. By Lemma \[cover\_01\_space\], the time taken for constructing $C_0(\rho n')$ is $O^*(2^{(1 - h(\rho)) n'})$, and $|C_0(\rho n')| \le O^*(2^{(1 - h(\rho)) n'})$. By Lemma \[general\_space\], for each $i$, the time taken for constructing $C_i(r_i)$ for each $r_i \in [\ell_i]^*$ is $O^*({\lambda_i}^{- \nu_i} / (k-1)^{r_i})$ and $|C_i(r_i)| \le O^*({\lambda_i}^{- \nu_i} / (k-1)^{r_i})$. So by Lemma \[product\_all\], we have that $|C(r)|$ can be upper bounded by: $$2^{(1 - h(\rho)) n'} \cdot \sum_{\sum_{i} r_i = r - \rho n'} \left( \prod_{i} O^*({\lambda_i}^{-\nu_i} / (k-1)^{r_i} ) \right) = O^*(2^{(1 - h(\rho)) n'} / (k-1)^{r - \rho n'} \prod_{i} {\lambda_i}^{-\nu_i} ).$$ The equality holds because $L$ is a linear combination of the $\nu_i$ with constant coefficients and $\nu_i = \Theta(n)$, thus there are $O(1)$ terms in the product since $\sum_i \nu_i \le n$. Meanwhile, there are $O^*(1)$ ways to partition $(r - \rho n')$ into a constant number of integers, thus the outer sum has $O^*(1)$ terms. Together we get an $O^*(1)$ factor on the right-hand side. The construction time consists of constructing each covering code for $H_i~(i \ge 0)$ and concatenating them by Lemma \[product\_all\], and is dominated by the concatenation time.
As a result, the time taken to construct $C(r)$ for all $(r - \rho n') \in [L]^*$ is: $$\begin{aligned} \sum_{(r - \rho n') \in [L]^*} O^*(2^{(1 - h(\rho)) n'} / (k-1)^{r - \rho n'} \prod_{i} {\lambda_i}^{-\nu_i} ) = O^*(2^{(1 - h(\rho)) n'} \prod_{i} {\lambda_i}^{-\nu_i} ), \end{aligned}$$ because it is the sum of a geometric series. This concludes the proof. Using our generalized covering code and applying Lemma \[full\_ball\] to the ball search (Line \[line\_searchball\] in Algorithm \[dls\_alg\]), we can upper bound the running time of the derandomized local search. \[dls\_upper\_bound\] Given $k$-CNF $F$ and instance $\mathcal{I}$, Algorithm \[dls\_alg\] runs in time $T_{\text{DLS}} = \mathcal{O}((\frac{2(k-1)}{k})^{n'} \cdot \prod_{i} {\lambda_i}^{-\nu_i})$, where $n' = n - |V(\mathcal{I})|$, $\lambda_i$ is the characteristic value of chain $\mathcal{S}_i$ and $\nu_i$ is the number of chains in $\mathcal{I}$ with the same solution space as $\mathcal{S}_i$. The running time consists of the construction time for $\mathfrak{C}(L)$ and the total searching time in all Hamming balls. It is easy to show, using Lemma \[all\_code\], that the total time is dominated by the searching time; thus we have the following, after multiplying by a sub-exponential factor $\mathcal{O}(1)$ accounting for the other $o(n)$ chains not in $\mathcal{I}$ (see footnote \[footnote\]): $$\begin{aligned} T_{\text{DLS}} &= \mathcal{O}(1) \cdot \sum_{(r - \rho n') \in [L]^*} \left( |C(r)| \cdot (k-1)^{r + o(r)} \right) \\ & = \mathcal{O}(1) \cdot \sum_{(r - \rho n') \in [L]^*} \left( O^*(2^{(1 - h(\rho)) n'} / (k-1)^{r - \rho n'} \prod_{i} {\lambda_i}^{-\nu_i} ) \cdot (k-1)^{r + o(r)} \right) \\ & = \mathcal{O}( 2^{(1 - h(\rho) + \rho \log(k-1))n'} \cdot \prod_{i} {\lambda_i}^{-\nu_i}) \\ &= \mathcal{O}( (\frac{2(k-1)}{k})^{n'} \cdot \prod_{i} {\lambda_i}^{-\nu_i}) . \end{aligned}$$ The first equality follows from Lemma \[full\_ball\], the second from Lemma \[all\_code\], and the last follows by setting $\rho = \frac{1}{k}$.
This proves the lemma. Upper Bound for k-SAT {#UBK_section} ===================== In this section, we give our main result on the upper bound for $k$-SAT. A simple branching algorithm for general $k$-SAT is given in Algorithm \[br\_k\]: greedily construct a maximal instance $\mathcal{I}$ consisting of independent $1$-chains, and branch on all satisfying assignments of it if $|\mathcal{I}|$ is small. [^5] After fixing all variables in $V(\mathcal{I})$, the remaining formula is a $(k-1)$-CNF due to the maximality of $\mathcal{I}$. Therefore the running time of Algorithm \[br\_k\] is at most: $$T_{\textsf{BR}} = \mathcal{O}((2^k - 1)^{|\mathcal{I}|} \cdot {c_{k-1}}^{n - k|\mathcal{I}|}), \label{br_k_upper_bound}$$ where $\mathcal{O}({c_{k-1}}^n)$ is the worst-case upper bound of a deterministic $(k-1)$-SAT algorithm. $k$-CNF $F$ a satisfying assignment or `Unsatisfiable` or an instance $\mathcal{I}$ starting from $\mathcal{I} \leftarrow \emptyset$, **for** $1$-chain $\mathcal{S}: V(\mathcal{I}) \cap V(\mathcal{S}) = \emptyset$, **do** $\mathcal{I} \leftarrow \mathcal{I} \cup \mathcal{S}$ solve $F|\alpha$ by a deterministic $(k-1)$-SAT algorithm **return** the satisfying assignment if satisfiable `Unsatisfiable` $\mathcal{I}$ On the other hand, since there are only $1$-chains in $\mathcal{I}$, by Lemma \[dls\_upper\_bound\] we have: $$T_{\text{DLS}} = \mathcal{O}((\frac{2(k-1)}{k})^{n - k |\mathcal{I}|} \cdot \lambda^{-|\mathcal{I}|}) \label{dls_k_upper_bound}.$$ It remains to calculate the characteristic value $\lambda$ of the $1$-chain $\mathcal{S}^{(k)}$. We prove the following lemma giving the unique solution of the linear program $\text{LP}_A$ in Definition \[charactoristic\_def\].
\[1\_chain\_lp\_solution\] For the $1$-chain $\mathcal{S}^{(k)}$, let $A$ be its solution space. Then the characteristic distribution $\pi$ satisfies $$\pi(a) = \frac{(k-1)^k}{(2k-2)^k - (k-2)^k} \cdot (1 - (\frac{-1}{k-1})^{d(a, 0^k)}) \textit{~for all~} a \in A ,$$ and the characteristic value is $$\lambda = \frac{k^k}{(2k-2)^k - (k-2)^k}.$$ We prove that this is a feasible solution to $\text{LP}_A$. The constraint $\pi(a) \ge 0~(\forall a \in A)$ is easy to verify. To show that the constraint $\sum_{a \in A} \pi(a) = 1$ holds, let $y = d(a, 0^k)$, note that there are $\binom{k}{y}$ different $a \in A$ with $d(a, 0^k) = y$, and multiply both sides by $\frac{(2k-2)^k - (k-2)^k}{(k-1)^k}$: $$\begin{aligned} \frac{(2k-2)^k - (k-2)^k}{(k-1)^k} \cdot \sum_{a \in A} \pi(a) &= \sum_{1 \le y \le k} \left( (1 - (\frac{-1}{k-1})^y) \cdot \binom{k}{y} \right) \\ &= \sum_{0 \le y \le k} \binom{k}{y} - \sum_{0 \le y \le k} \binom{k}{y} (\frac{-1}{k-1})^y \\ &= 2^k - (\frac{k-2}{k-1})^k \\ &= \frac{(2k-2)^k - (k-2)^k}{(k-1)^k}. \end{aligned}$$ Thus $\sum_{a \in A} \pi(a) = 1$ holds. To prove $\lambda = \sum_{a \in A} \left( \pi(a) \cdot (\frac{1}{k-1}) ^ {d(a, a^*)} \right)$, similarly to the previous case, we multiply both sides by $\frac{(2k-2)^k - (k-2)^k}{(k-1)^k}$. Note that adding the term at $a = 0^k$ does not change the sum, since its coefficient $1 - (\frac{-1}{k-1})^0$ vanishes; then for all $a^* \in A$, we have: $$\begin{aligned} \text{RHS} &= \sum_{a \in A} (1 - (\frac{-1}{k-1})^{d(a, 0^k)}) \cdot (\frac{1}{k-1}) ^ {d(a, a^*)} \\ &= \sum_{a \in \{0,1\}^k} (1 - (\frac{-1}{k-1})^{d(a, 0^k)}) \cdot (\frac{1}{k-1}) ^ {d(a, a^*)} \\ &= \sum_{a \in \{0,1\}^k}(\frac{1}{k-1}) ^ {d(a, a^*)} - \sum_{a \in \{0,1\}^k} (-1)^{d(a, 0^k)} (\frac{1}{k-1}) ^ {d(a, 0^k) + d(a, a^*)} . \end{aligned}$$ The first term is equal to $(\frac{k}{k-1})^k = \text{LHS}$. To prove that the second term is $0$, note that since $a^* \in A$, there exists $i \in [k]$ such that $a^*_i = 1$.
Partition $\{0,1\}^k$ into two sets $S_0 = \{a \in \{0,1\}^k | a_i = 0 \}$ and $S_1 = \{a \in \{0,1\}^k | a_i = 1 \}$. We have the following bijection: for each $a \in S_0$, negate the $i$-th bit to get $a' \in S_1$. Then $d(a, 0^k) + d(a, a^*) = d(a', 0^k) + d(a', a^*)$ and $(-1)^{d(a, 0^k)} = - (-1)^{d(a', 0^k)}$, so the sum is $0$. This verifies the constraints and proves the lemma. Observe from (\[br\_k\_upper\_bound\]) and (\[dls\_k\_upper\_bound\]) that $T_{\textsf{BR}}$ is an increasing function of $|\mathcal{I}|$, while $T_{\textsf{DLS}}$ is a decreasing function of it, so $T_{\textsf{BR}} = T_{\textsf{DLS}}$ gives the worst-case upper bound for $k$-SAT. We solve this equation by plugging in $\lambda$ from Lemma \[1\_chain\_lp\_solution\] to get $\nu n$ as the worst-case $|\mathcal{I}|$, and obtain the following theorem as our main result on $k$-SAT. \[main\_k\_sat\_general\_form\] Given $k \ge 3$, if there exists a deterministic algorithm for $(k - 1)$-SAT that runs in time $\mathcal{O}({c_{k-1}}^n)$, then there exists a deterministic algorithm for $k$-SAT that runs in time $\mathcal{O}({c_k}^n)$, where $$c_k = (2^k - 1)^{\nu} \cdot {c_{k-1}}^{1 - k \nu}$$ and $$\nu = \frac{\log(2k - 2) - \log{k} - \log{c_{k-1}}} { \log(2^k - 1) - \log(1 - (\frac{k-2}{2k-2})^k) - k \log{c_{k-1}} }.$$ Note that the upper bound for $3$-SAT implied by this theorem is $O(1.33026^n)$, but we can do better: applying Theorem \[main2\] (presented later), which gives $c_3 = 3^{\log{\frac{4}{3}} / \log{\frac{64}{21}}} < 1.32793$, we prove all the upper bounds for $k$-SAT ($k \ge 4$) in Table \[table\_result\] of §[\[intro\]]{}. Upper Bound for 3-SAT {#UB3_section} ===================== In this section, we provide a better upper bound for $3$-SAT via a more refined branching algorithm. First, we introduce some additional notations for $3$-CNF simplification; then we present our branching algorithm for $3$-SAT, from its high-level structure down to all its components.
Lastly, we show how to combine it with the derandomized local search to achieve a tighter upper bound. Additional Notations -------------------- For every clause $C \in F$, if partial assignment $\alpha$ satisfies $C$, then $C$ is removed in $F|\alpha$. Otherwise, the literals in $C$ assigned $0$ under $\alpha$ are removed from $C$. If all the literals in $C$ are removed, which means $C$ is unsatisfied under $\alpha$, we replace $C$ by $\bot$ in $F|\alpha$. Let $G=F|\alpha$. For every $C \in F$, we use $C^F$ to denote the clause $C$ in $F$ and $C^G \in G$ to denote the new clause derived from $C$ by assigning variables according to $\alpha$. We use $\mathcal{F}$ to denote the original input $3$-CNF without any variable instantiated, and $C^{\mathcal{F}}$ is called the *original form* of clause $C$. Let $\textsf{UP}(F)$ be the CNF derived by running *Unit Propagation* on $F$ until there is no $1$-clause in $F$. Clearly $F$ is satisfiable iff $\textsf{UP}(F)$ is satisfiable, and $\textsf{UP}$ runs in polynomial time [@davis1962machine]. We will also use the set notation for CNF, i.e., for a CNF $F = \bigwedge_{i \in [m]} C_i$, it is equivalent to write $F = \{C_i | i \in[m]\}$. Define $\mathcal{T}(F), \mathcal{B}(F), \mathcal{U}(F)$ as the sets of all the $3$-clauses, $2$-clauses and $1$-clauses in $F$, respectively. Any $3$-CNF satisfies $F = \mathcal{T}(F) \cup \mathcal{B}(F) \cup \mathcal{U}(F)$. Branching Algorithm for 3-SAT ----------------------------- In this subsection, we give our branching algorithm for $3$-SAT (Algorithm \[BR\_alg\]). The algorithm is recursive and proceeds in a depth-first manner: - Stop the recursion when certain conditions are met (Line \[line\_condition\] and Line \[line\_sat\]). - Backtrack when the current branch is unsatisfiable (Line \[line\_unsat1\], Line \[line\_unsat2\] and Line \[line\_unsat3\]). - Branch on all possible satisfying assignments of a clause and recursively call itself (Line \[line\_branch\]).
Return `Unsatisfiable` if all branches return `Unsatisfiable`. - Clause sequence $\mathcal{C}$ stores all the branching clauses from the root to the current node. It is easy to show that this algorithm is correct as long as *procedure* $\mathcal{P}$ maintains satisfiability. $3$-CNF $F$, clause sequence $\mathcal{C}$ a satisfying assignment or `Unsatisfiable` or a clause sequence $\mathcal{C}$ simplify $F$ by *procedure* $\mathcal{P}$ \[line\_simplify\] `Unsatisfiable` \[line\_unsat1\] \[line\_condition\] stop the recursion, *transform* $\mathcal{C}$ to an instance $\mathcal{I}$ and **return** $\mathcal{I}$ \[line\_transform\] deterministically solve $F$ in polynomial time \[line\_sat\] stop the recursion and **return** the satisfying assignment `Unsatisfiable` \[line\_unsat2\] choose a clause $C$ according to *rule* $\Upsilon$ \[line\_rule\] for every satisfying assignment $\alpha_C$ of $C$, call $(F|\alpha_C, \mathcal{C} \cup C^{\mathcal{F}})$ \[line\_branch\] `Unsatisfiable` \[line\_unsat3\] In what follows, we introduce (1) the *procedure* $\mathcal{P}$ for simplification (Line \[line\_simplify\]); (2) the clause choosing *rule* $\Upsilon$ (Line \[line\_rule\]); (3) the *transformation* from clause sequence to instance (Line \[line\_transform\]); (4) the termination *condition* $\Phi$ (Line \[line\_condition\]). All of them are devoted to analyzing the running time of the branching algorithm as a function of an instance. ### Simplification Procedure {#SP} The simplification relies on the following two lemmas. \[autark\_lem\] Given $3$-CNF $F$ and partial assignment $\alpha$, define $$\mathcal{TB}(F, \alpha)=\{C | C \in \mathcal{B}(\textsf{UP}(F | \alpha)), C^F \in \mathcal{T}(F)\}.$$ If $\bot \notin \textsf{UP}(F | \alpha)$ and $\mathcal{TB}(F, \alpha) = \emptyset$, then $F$ is satisfiable iff $\textsf{UP}(F | \alpha)$ is satisfiable, and $\alpha$ is called an *autark*. Recall that $\textsf{UP}$ maintains satisfiability. Let $G = \textsf{UP}(F | \alpha)$.
If $G$ is satisfiable, then $F$ is obviously satisfiable. Also observe that $G$ is a subset of $F$, since there is neither a $1$-clause nor a new $2$-clause in $G$; so any satisfying assignment of $F$ satisfies $G$ too. We also provide the following stronger lemma to further reduce the formula size. \[simplification2\] Given $3$-CNF $F$ and $(l_1 \vee l_2) \in \mathcal{B}(F)$, if $\exists C \in \mathcal{TB}(F, l_1 = 1)$ such that $l_2 \in C$, then $F$ is satisfiable iff $F\backslash C^F \cup C$ is satisfiable. Clearly $F$ is satisfiable if $F\backslash C^F \cup C$ is. Suppose $C = l_2 \vee l_3$ and let $\alpha$ be a satisfying assignment of $F$. If $\alpha(l_1) = 1$, then $\textsf{UP}(F | l_1 = 1)$ is satisfiable, thus $F\backslash C^F \cup C$ is also satisfiable since $C \in \textsf{UP}(F | l_1 = 1)$. Otherwise, if $\alpha(l_1) = 0$, then $\alpha(l_2) = 1$ due to $l_1 \vee l_2$, so $\alpha$ satisfies $C$ and the conclusion follows. As a result, a $3$-CNF $F$ can be simplified by the following polynomial-time *procedure* $\mathcal{P}$: for every $(l_1 \vee l_2) \in \mathcal{B}(F)$, if $l_1 = 1$ or $l_2 = 1$ is an autark, then apply Lemma \[autark\_lem\] to simplify $F$; otherwise apply Lemma \[simplification2\] to simplify $F$ if possible. \[simplify\_lem\] After running $\mathcal{P}$ on $3$-CNF $F$, for any $(l_1 \vee l_2) \in \mathcal{B}(F)$ and any $2$-clause $C \in \mathcal{TB}(F, l_1 = 1)$, we must have $l_2 \notin C$. This also holds with $l_1$ and $l_2$ switched. If $\mathcal{TB}(F, l_1 = 1) = \emptyset$, then $l_1 = 1$ is an autark and $F$ can be simplified by Lemma \[autark\_lem\]. If $C \in \mathcal{TB}(F, l_1 = 1)$ and $l_2 \in C$, then $F$ can be simplified by Lemma \[simplification2\]. ### Clause Choosing Rule {#CCR} Now we present our clause choosing *rule* $\Upsilon$.
By Lemma \[autark\_lem\] we can always begin with branching on a $2$-clause, at the cost of a factor of $2$ in the upper bound: choose an arbitrary literal in any $3$-clause and branch on its two assignments $\{0, 1\}$. This will result in a new $2$-clause; otherwise the assignment is an autark, and we fix it and continue with another literal. Now let us describe the possible overlaps between the current branching clause and the next one. Let $C_0$ be the branching clause in the father node, where $C_0^{\mathcal{F}} = l_0 \vee l_1 \vee l_2$, and let $F_0$ be the formula in the father node. The *rule* $\Upsilon$ works as follows: if $\alpha_{C_0}(l_1) = 1$, choose an arbitrary $C_1 \in \mathcal{TB}(F_0, l_1 = 1)$; otherwise, if $\alpha_{C_0}(l_2) = 1$, choose an arbitrary $C_1 \in \mathcal{TB}(F_0, l_2 = 1)$. We only discuss the case $\alpha_{C_0}(l_1) = 1$, by symmetry. We enumerate all possible forms of $C_1^{\mathcal{F}}$ by discussing which literal is eliminated, followed by whether $l_2$ or $\bar{l}_2$ is contained: 1. $C_1^{\mathcal{F}} \backslash C_1 = l_3$. $C_1$ becomes a $2$-clause due to the elimination of $l_3$. There are three cases: (1) $C_1 = l_2 \vee l_4$, (2) $C_1 = \bar{l}_2 \vee l_4$ or (3) $C_1 = l_4 \vee l_5$. \[case1\] 2. $C_1^{\mathcal{F}} \backslash C_1 = \bar{l}_1$. $C_1$ becomes a $2$-clause due to the elimination of $\bar{l}_1$. There are three cases: (1) $C_1 = l_2 \vee l_3$, (2) $C_1 = \bar{l}_2 \vee l_3$ or (3) $C_1 = l_3 \vee l_4$. \[case2\] 3. $C_1^{\mathcal{F}} \backslash C_1 = l_2$. This means $l_1 = 1 \Rightarrow l_2 = 0$, and $\alpha_{C_0}(l_1 l_2) = 11$ can be excluded. \[case3\] 4. $C_1^{\mathcal{F}} \backslash C_1 = \bar{l}_2$. This means $l_1 = 1 \Rightarrow l_2 = 1$, and $\alpha_{C_0}(l_1 l_2) = 10$ can be excluded. \[case4\] Both Case \[case1\].(1) and Case \[case2\].(1) are impossible due to Lemma \[simplify\_lem\].
To sum up, by merging similar cases with branch numbers bounded from above, we immediately have the following: - Case \[case1\].(3): it takes at most $3$ branches in the father node to get $l_3 \vee l_4 \vee l_5$. - Case \[case1\].(2), Case \[case2\].(3) and Case \[case4\]: it takes at most $3$ branches in the father node to get $\bar{l}_1 \vee l_3 \vee l_4$ or $\bar{l}_2 \vee l_3 \vee l_4$. - Case \[case3\]: it takes at most $2$ branches in the father node to get $l_2 \vee l_3 \vee l_4$. - Case \[case2\].(2): it takes at most $3$ branches in the father node to get $\bar{l}_1 \vee \bar{l}_2 \vee l_3$. To fit *rule* $\Upsilon$, there must be at least one literal assigned $1$ in the branching clause. In all cases except Case \[case2\].(2), we get a $2$-clause $C_1$, and *rule* $\Upsilon$ still applies. Now consider the case $C_1^{\mathcal{F}} = \bar{l}_1 \vee \bar{l}_2 \vee l_3$. If $\alpha(l_1 l_2) = 11$, we have $C_1^F = l_3$; otherwise we have $C_1^F = 1 \vee l_3$. In other words, the assignments satisfying $C_0 \wedge C_1$ are $\alpha(l_1 l_2 l_3) \in \{ 010, 100, 011, 101, 111 \}$. Note that $\alpha(l_3) = 0$ in the first two assignments, which does not fit *rule* $\Upsilon$. In this case, we do the following: choose an arbitrary literal in any $3$-clause and branch on its two assignments $\{0, 1\}$. Continuing this process, we will eventually get a new $2$-clause (Lemma \[autark\_lem\]). Now the first two assignments $\alpha(l_1 l_2 l_3) \in \{ 010, 100\}$ have $4$ branches because of the newly branched literal, and all $7$ branches fit *rule* $\Upsilon$, because either $l_3 = 1$ or there is a new $2$-clause. Our key observation is the following: these $7$ branches correspond to all satisfying assignments of $C_0 \wedge C_1$, which can be amortized so that $C_1$ has $3$ branches and $C_0$ has $7/3$ branches.
[^6] As a conclusion, we modify the last case to be: - Case \[case2\].(2): it takes at most $7/3$ branches in the father node to get $\bar{l}_1 \vee \bar{l}_2 \vee l_3$. ### Transformation from Clause Sequence to Instance {#cs_to_i} We show how to transform a clause sequence $\mathcal{C}$ to an instance, then take a symbolic detour to better formalize the cost of generating chains, i.e., the running time of the branching algorithm. As above, let $C_1$ be the clause chosen by *rule* $\Upsilon$, let $C_0$ be the branching clause in the father node, and let $C$ be the branching clause in the grandfather node. In other words, $C_1, C_0, C$ are the last three clauses in $\mathcal{C}$. $C_1$ was still a $3$-clause in the father node, since $C_1 \in \mathcal{T}(F)$; thus $C_1$ is independent of $C$, because all literals in $C$ are already assigned values in $F$, so $C_1$ can only overlap with $C_0$. Therefore, clauses in $\mathcal{C}$ can only (but need not) overlap with the clauses next to them. By the case discussion in §[\[CCR\]]{}, there are only $4$ overlapping cases between $C_0$ and $C_1$, which we call *independent* for $\langle l_0 \vee l_1 \vee l_2, ~l_3 \vee l_4 \vee l_5 \rangle$, *negative* for $\langle l_0 \vee l_1 \vee l_2, ~\bar{l}_1 \vee l_3 \vee l_4 \rangle$ or $\langle l_0 \vee l_1 \vee l_2, ~\bar{l}_2 \vee l_3 \vee l_4 \rangle$, *positive* for $\langle l_0 \vee l_1 \vee l_2, ~l_2 \vee l_3 \vee l_4 \rangle$, and *two-negative* for $\langle l_0 \vee l_1 \vee l_2, ~\bar{l}_1 \vee \bar{l}_2 \vee l_3 \rangle$. There is a natural mapping from clause sequences to strings.
\[to\_string\] Let $\mathcal{C}$ be a clause sequence. Define the function $\zeta: \mathcal{C} \mapsto \Gamma^{|\mathcal{C}|}$, where $\Gamma = \{\verb"*", \verb"n", \verb"p", \verb"t"\}$, such that for all $i \in [|\mathcal{C}| - 1]$, the $i$-th symbol of $\zeta(\mathcal{C})$ is `*` if $\mathcal{C}_i$ and $\mathcal{C}_{i+1}$ are independent, `n` if negative, `p` if positive, and `t` if two-negative; the $|\mathcal{C}|$-th symbol of $\zeta(\mathcal{C})$ is `*`. A $\tau$-chain $\mathcal{S}$ is also a clause sequence of length $\tau$, so $\zeta$ maps $\mathcal{S}$ to $\Gamma^{\tau}$. Two chains $\mathcal{S}_1$ and $\mathcal{S}_2$ are *isomorphic* if $\zeta(\mathcal{S}_1) = \zeta(\mathcal{S}_2)$. Then the *transformation* from $\mathcal{C}$ to $\mathcal{I}$ naturally follows: partition $\zeta(\mathcal{C})$ at each `*`; every substring then corresponds to a chain, which we add to $\mathcal{I}$. Now we can formalize the cost. \[branch\_upper\_bound\_1\] Given $3$-CNF $\mathcal{F}$, let $\mathcal{C}$ be the clause sequence after running the branching algorithm on $(\mathcal{F}, \emptyset)$ for time $T$; then $T \le O^*(2^{\kappa_1} \cdot 3^{\kappa_2} \cdot (7/3)^{\kappa_3})$, where $\kappa_1$ is the number of `p` in $\zeta(\mathcal{C})$, $\kappa_2$ is the number of `*` and `n` in $\zeta(\mathcal{C})$, and $\kappa_3$ is the number of `t` in $\zeta(\mathcal{C})$. By Definition \[to\_string\] and the case discussion in §[\[CCR\]]{}, the conclusion follows. ### Termination Condition {#tc_type_def} We show how the cost of generating chains implies the termination *condition* $\Phi$. We map every chain to an integer, the *type* of the chain, such that isomorphic chains have the same type. Formally, let $\mathcal{I}(\mathcal{C})$ be the instance transformed from $\mathcal{C}$, and let $\Sigma = \{ \zeta(\mathcal{S}) | \mathcal{S} \in \mathcal{I}(\mathcal{C}) \}$ be the set of distinct strings, with no repetition.
Define a bijective function $g: \Sigma \mapsto [\theta]$ that maps each string $\zeta(\mathcal{S})$ in $\Sigma$ to a distinct integer, the *type* of chain $\mathcal{S}$, where $\theta = |\Sigma|$ is the number of types of chains in $\mathcal{C}$; $g$ can be an arbitrary fixed bijection. Define the *branch number* $b_i$ of a type-$i$ chain $\mathcal{S}$ as $b_i = 2^{\kappa_1} \cdot 3^{\kappa_2} \cdot (7 / 3)^{\kappa_3}$, where $\kappa_1$ is the number of `p` in $\zeta(\mathcal{S})$, $\kappa_2$ is the number of `*` and `n` in $\zeta(\mathcal{S})$, and $\kappa_3$ is the number of `t` in $\zeta(\mathcal{S})$. Also define the *chain vector* $\vec{\nu} \in \mathbb{Z}^{\theta}$ for $\mathcal{I}(\mathcal{C})$ by $\nu_i = \left|\{\mathcal{S} \in \mathcal{I}(\mathcal{C}) | (g \circ \zeta)(\mathcal{S}) = i \}\right|$ for all $i \in [\theta]$, i.e., $\nu_i$ is the number of type-$i$ chains in $\mathcal{I}(\mathcal{C})$. We can rewrite Lemma \[branch\_upper\_bound\_1\] as the following. \[branch\_upper\_bound\_2\] Given $3$-CNF $\mathcal{F}$, let $\mathcal{I}$ be the instance after running the branching algorithm on $(\mathcal{F}, \emptyset)$ for time $T$; then $T \le T_{\textsf{BR}} = O^*(\prod_{i \in [\theta]} b_i^{\nu_i})$, where $b_i$ is the branch number of a type-$i$ chain and $\vec{\nu}$ is the chain vector for $\mathcal{I}$. To achieve a worst-case upper bound of $\mathcal{O}(c^n)$ for solving $3$-SAT, we must have $T_{\textsf{BR}} \le \mathcal{O}(c^n)$, i.e., $\prod_{i \in [\theta]} b_i^{\nu_i} \le c^n$. This immediately gives us the termination *condition* $\Phi$: $(\sum_{i \in [\theta]} \nu_i \cdot \log b_i) / \log c > n$. Therefore, we can hardwire this condition into the algorithm to achieve the desired upper bound, as calculated in the next subsection.
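The branch number and the termination *condition* $\Phi$ admit a direct sketch in code (a minimal illustration only; the dictionary `chains`, mapping each string $\zeta(\mathcal{S})$ to its multiplicity $\nu_i$, is a hypothetical representation of the chain vector):

```python
from math import log

def branch_number(zeta):
    """Branch number b_i of a chain with string zeta over {'*','n','p','t'}:
    b_i = 2^{#p} * 3^{#* + #n} * (7/3)^{#t}."""
    k1 = zeta.count('p')                     # positive overlaps
    k2 = zeta.count('*') + zeta.count('n')   # independent + negative
    k3 = zeta.count('t')                     # two-negative overlaps
    return 2 ** k1 * 3 ** k2 * (7 / 3) ** k3

def should_terminate(chains, c, n):
    """Condition Phi: (sum_i nu_i * log b_i) / log c > n.
    The ratio is base-invariant, so the natural log suffices."""
    total = sum(nu * log(branch_number(z)) for z, nu in chains.items())
    return total / log(c) > n

print(branch_number('*'))   # 3.0 -- a 1-chain has 3 branches
print(branch_number('t*'))  # 7.0 -- two-negative overlap, then chain end
```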
Combination of Two Algorithms {#comb} ----------------------------- By combining the branching algorithm with the derandomized local search as in Algorithm \[Framework\], the worst-case upper bound $\mathcal{O}(c^n)$ is attained when $T_{\textsf{BR}} = T_{\textsf{DLS}}$, i.e.: $$c^n = \prod_{i \in [\theta]} b_i^{\nu_i} = (\frac{4}{3})^{n'} \cdot \prod_{i \in [\theta]} {\lambda_i}^{-\nu_i} , \label{equal1}$$ where the two expressions follow from Corollary \[branch\_upper\_bound\_2\] and Lemma \[dls\_upper\_bound\], respectively. Let $\eta_i$ be the number of variables in a type-$i$ chain for all $i \in [\theta]$; then $n' = n - |V(\mathcal{I})| = n - \sum_{i \in [\theta]} \eta_i \nu_i$. Taking logarithms and dividing by $n$, (\[equal1\]) becomes: $$\log c = \sum_{i \in [\theta]} \frac{\nu_i}{n} \log{b_i} = \log{\frac{4}{3}} - \sum_{i \in [\theta]} \frac{\nu_i}{n} (\eta_i \log{\frac{4}{3}} + \log{\lambda_i}) \label{equal2} .$$ The second equation is a linear constraint over $\frac{1}{n} \cdot \vec{\nu}$, which gives that $\log c$ is maximized when $\nu_i = 0$ for all $i \neq \arg \max_{i \in [\theta]} \{ \log{b_i} / (\log b_i + \eta_i \log{\frac{4}{3}} + \log{\lambda_i} ) \}$. Based on the calculation of $\text{LP}_A$ (see Appendix \[3sat\_values\]), we show that the chain $\mathcal{S}$ with $\zeta(\mathcal{S}) = \verb"*"$ (say, the type-$1$ chain) attains the maximum value above, namely: $$\arg \max_{i \in [\theta]} \{ \log{b_i} / (\log b_i + \eta_i \log{\frac{4}{3}} + \log{\lambda_i} ) \} = 1.$$ In other words, all chains in $\mathcal{I}$ are $1$-chains. Substituting $\lambda_1 = 3/7, b_1 = 3, \eta_1 = 3$ and $\nu_i = 0$ for all $i \in [2, \theta]$ into (\[equal2\]) (see Table \[3sat\_numerical\] in Appendix \[3sat\_values\]), we obtain our main result on $3$-SAT as follows. \[main2\] There exists a deterministic algorithm for $3$-SAT that runs in time $\mathcal{O}(3^{n \log{\frac{4}{3}} / \log{\frac{64}{21}}})$. This immediately implies the upper bound $O(1.32793^n)$ for $3$-SAT in Table \[table\_result\] of §[\[intro\]]{}.
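Since the formulas involved are fully explicit, the constants of this section can be sanity-checked numerically. The sketch below (an independent check, not part of any algorithm in the paper) verifies for $k = 3$ that $\pi$ from Lemma \[1\_chain\_lp\_solution\] is a distribution, that $\lambda = 3/7$ regardless of $a^*$, and that solving (\[equal2\]) for the single $1$-chain type yields $c = 3^{\log\frac{4}{3} / \log\frac{64}{21}} < 1.32793$:

```python
from fractions import Fraction
from itertools import product
from math import log

k = 3
# Solution space of the 1-chain S^(k): all assignments satisfying a single
# k-clause, i.e. all words of {0,1}^k except 0^k (taking the clause all-positive).
A = [a for a in product((0, 1), repeat=k) if any(a)]

def pi(a):
    """Characteristic distribution from Lemma [1_chain_lp_solution]."""
    y = sum(a)  # d(a, 0^k)
    scale = Fraction((k - 1) ** k, (2 * k - 2) ** k - (k - 2) ** k)
    return scale * (1 - Fraction(-1, k - 1) ** y)

assert sum(pi(a) for a in A) == 1  # pi is indeed a distribution

# lambda = sum_a pi(a) * (1/(k-1))^{d(a, a*)} must be the same for every a* in A.
lam = Fraction(k ** k, (2 * k - 2) ** k - (k - 2) ** k)
lams = {
    sum(pi(a) * Fraction(1, k - 1) ** sum(x != y for x, y in zip(a, astar))
        for a in A)
    for astar in A
}
assert lams == {lam}
print(lam)  # 3/7 for k = 3

# Worst-case exponent from (equal2) with only the 1-chain type
# (b_1 = 3, eta_1 = 3, lambda_1 = 3/7):  c = 3^{log(4/3) / log(64/21)}.
c = 3 ** (log(4 / 3) / log(64 / 21))
print(round(c, 5))  # 1.32793
```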
Conclusion and Discussion {#conclusion}
=========================

We have shown how to improve Moser and Scheder's deterministic $k$-SAT algorithm by combining it with a branching algorithm. Specifically, for $3$-SAT we design a novel branching algorithm which reduces the branch number of the $1$-chain from $7$ to $3$. In general, we expect a $1$-chain in $k$-CNF to have branch number $2^{k-1} - 1$ instead of $2^k - 1$, thereby improving the upper bound for Algorithm \[br\_k\] from $\mathcal{O}((2^k - 1)^{|\mathcal{I}|} \cdot {c_{k-1}}^{n - k|\mathcal{I}|})$ to $\mathcal{O}((2^{k-1} - 1)^{|\mathcal{I}|} \cdot {c_{k-1}}^{n - k|\mathcal{I}|})$ as for Algorithm \[BR\_alg\]. However, this requires much more work using the techniques developed in this paper. We believe that there exists an elegant proof for the analysis of the branching algorithm on $k$-SAT instead of a tedious case analysis, and that it is tight under the current framework, i.e., the combination of a branching algorithm and the derandomized local search, leveraged by chains. In recent work, the technique in this paper has been generalized to give an improved deterministic algorithm for NAE-$k$-SAT [@liu2018curse], which for the first time achieves an upper bound better than that of the corresponding $k$-SAT algorithms.

#### Acknowledgements.

[The author wants to thank Yuping Luo, S. Matthew Weinberg and Periklis A. Papakonstantinou for helpful discussions. Research at Princeton University was partially supported by an innovation research grant from Princeton and a gift from Microsoft.]{}

Generation of All Types of Chain for 3-SAT {#3sat_values}
==========================================

In §[\[cs\_to\_i\]]{}, we proved that there are only $4$ overlapping cases between successive clauses; thus for any $\mathcal{S} \in \mathcal{I}$, $\zeta(\mathcal{S}) \in \{\verb"n", \verb"p", \verb"t"\}^* \uplus \{\verb"*"\}$.
Now we show that $\zeta(\mathcal{C})$ cannot contain the substring `tp` or `tt`, which greatly reduces the number of types of chain. Recall from §[\[CCR\]]{} that if $C_1^{\mathcal{F}} = \bar{l}_1 \vee \bar{l}_2 \vee l_3$, then $C_0 \wedge C_1$ has at most $7$ branches, $4$ of which correspond to fixing all variables in $C_1$ and branching on a new literal. These necessarily lead to a branching clause independent of $C_1^{\mathcal{F}}$. Also note that when $\alpha(l_3) = 1$ in the remaining $3$ branches, the next branching clause cannot contain $l_3$; otherwise it would be eliminated. As a result, the clause in $\mathcal{C}$ right after $C_1$ can only be independent of, or negatively overlapping with, $C_1$, which means neither `tp` nor `tt` is a substring of $\zeta(\mathcal{C})$.

Next we prove that there are only finitely many types of chain. It suffices to prove that the length of a chain is upper bounded by a constant. When choosing a clause to branch on, we can always choose a literal in some $3$-clause and branch on this literal to get a new $2$-clause (Lemma \[autark\_lem\]). This costs us a factor of $2$ in the branch number but results in an independent branching clause. Observe that as long as the new branch number ${b_i}' = 2 b_i$ satisfies $\log{{b_i}'} / (\log {b_i}' + \eta_i \log{\frac{4}{3}} + \log{\lambda_i} ) \le \log{b_1} / (\log b_1 + \eta_1 \log{\frac{4}{3}} + \log{\lambda_1} )$, it does not affect the worst-case upper bound. To sum up, a string $\zeta$ corresponding to a type-$i$ chain can be generated by the following two rules; the second rule can be applied at will.

1. $\zeta \leftarrow (\zeta \uplus \verb"*")$ or $(\zeta \uplus \verb"n")$ or $(\zeta \uplus \verb"p")$ or $(\zeta \uplus \verb"t*")$ or $(\zeta \uplus \verb"tn")$.

2. $\zeta \leftarrow (\zeta \uplus \verb"*")$ if $\log{(2{b_i})} / (\log {(2b_i)} + \eta_i \log{\frac{4}{3}} + \log{\lambda_i} ) \le \log{b_1} / (\log b_1 + \eta_1 \log{\frac{4}{3}} + \log{\lambda_1} )$.
We report all chains $\mathcal{S}_i$ of type $i$ with their characteristic value $\lambda_i$ and $f_i = \log{b_i} / (\log b_i + \eta_i \log{\frac{4}{3}} + \log{\lambda_i})$. The characteristic values are given by solving the linear program $\text{LP}_A$ from Definition \[charactoristic\_def\]. The variable number $\eta_i = |V(\mathcal{S}_i)|$ and the branch number $b_i$ are trivial to calculate and are therefore not reported here. Note that the reversed string (except for the terminal `*`) is equivalent to the original one. Chains generated by rule 2 are marked with **r2** in their type. Using a breadth-first search, one can easily check that Table \[3sat\_numerical\] lists all the possible types of chains in our branching algorithm for $3$-SAT.

  type-$i$    $\zeta(\mathcal{S}_i)$   $\lambda_i$      $f_i$
  ----------- ------------------------ ---------------- ----------------
  1           `*`                      $3/7$            $0.98586\dots$
  2           `n*`                     $27/110$         $0.984\dots$
  3           `p*`                     $81/331$         $0.983\dots$
  4           `t*`                     $15/46$          $0.984\dots$
  5           `nn*`                    $9/64$           $0.984\dots$
  6           `np*`                    $81/578$         $0.983\dots$
  7           `nt*`                    $45/241$         $0.984\dots$
  8 **r2**    `pp*`                    $243/1739$       $0.98580\dots$
  9           `pt*`                    $27/145$         $0.983\dots$
  10          `nnn*`                   $243/3016$       $0.984\dots$
  11          `nnp*`                   $729/9080$       $0.983\dots$
  12          `nnt*`                   $135/1262$       $0.984\dots$
  13          `npn*`                   $243/3028$       $0.983\dots$
  14 **r2**   `npp*`                   $729/9110$       $0.9853\dots$
  15          `npt*`                   $45/422$         $0.983\dots$
  16          `ntn*`                   $405/3788$       $0.984\dots$
  17 **r2**   `pnp*`                   $2187/27334$     $0.9853\dots$
  18          `pnt*`                   $405/3799$       $0.983\dots$
  19          `tnt*`                   $25/176$         $0.984\dots$

  : The characteristic value $\lambda_i$ and $f_i = \log{b_i} / (\log b_i + \eta_i \log{\frac{4}{3}} + \log{\lambda_i})$ for all types $i$ of possible chains generated by our branching algorithm for $3$-SAT.[]{data-label="3sat_numerical"}

  type-$i$    $\zeta(\mathcal{S}_i)$   $\lambda_i$      $f_i$
  ----------- ------------------------ ---------------- ----------------
  20 **r2**   `nnnn*`                  $243/5264$       $0.98583\dots$
  21 **r2**   `nnnp*`                  $729/15848$      $0.9854\dots$
  22          `nnnt*`                  $405/6608$       $0.984\dots$
  23 **r2**   `nnpn*`                  $729/15856$      $0.9855\dots$
  24 **r2**   `nnpp*`                  $2187/47704$     $0.984\dots$
  25 **r2**   `nnpt*`                  $1215/19888$     $0.9854\dots$
  26 **r2**   `npnp*`                  $2187/47732$     $0.9850\dots$
  27 **r2**   `npnt*`                  $405/6634$       $0.9856\dots$
  28          `ntnn*`                  $135/2204$       $0.984\dots$
  29 **r2**   `ntnp*`                  $1215/19904$     $0.9856\dots$
  30          `ntnt*`                  $675/8299$       $0.984\dots$
  31 **r2**   `pnnp*`                  $729/15904$      $0.9850\dots$
  32 **r2**   `pnnt*`                  $1215/19894$     $0.9855\dots$
  33          `tnnt*`                  $45/553$         $0.984\dots$
  34 **r2**   `tnpp*`                  $405/6653$       $0.9850\dots$
  35 **r2**   `tnpt*`                  $675/8321$       $0.9855\dots$
  36 **r2**   `tnnnn*`                 $243/6920$       $0.9855\dots$
  37 **r2**   `tnnnp*`                 $3645/104168$    $0.9852\dots$
  38 **r2**   `tnnnt*`                 $225/4826$       $0.9856\dots$

  : The characteristic value $\lambda_i$ and $f_i = \log{b_i} / (\log b_i + \eta_i \log{\frac{4}{3}} + \log{\lambda_i})$ for all types $i$ of possible chains generated by our branching algorithm for $3$-SAT.[]{data-label="3sat_numerical"}

Consequently, $\arg \max_{i \in [38]} \{f_i\} = 1$, and $\mathcal{I}$ consisting of only type-$1$ chains is indeed our worst case.

Degeneration of Algorithm {#DD_section}
=========================

As a simple degeneration, we illustrate why the algorithm yields a worse upper bound without Lemma \[simplify\_lem\] or the key observation made in §[\[CCR\]]{}. Recall from the case study in §[\[CCR\]]{} that if we do not apply Lemma \[simplify\_lem\], then Case \[case1\].(1) and Case \[case2\].(1) are possible, which give branch number $b = 9$ for the positive overlapping $2$-chain. Suppose the returned instance contains only $\nu$ such $2$-chains; then in the worst case we have $c^n = 9^{\nu} = (\frac{4}{3})^{n - 5\nu} \cdot \lambda^{-\nu} \ge 1.328^n$ (see type-$3$ in Table \[3sat\_numerical\] for $\lambda = 81/331$). Similarly, without the amortized analysis in §[\[CCR\]]{}, a two-negative overlapping $2$-chain would have branch number $9$, which gives $c^n = 9^{\nu} = (\frac{4}{3})^{n - 4\nu} \cdot \lambda^{-\nu} \ge 1.328^n$ in the worst case (see type-$4$ in Table \[3sat\_numerical\] for $\lambda = 15/46$).
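Both degenerate bounds can be checked numerically (a sketch; the hypothetical helper `base` solves the constraint of §\[comb\] in closed form under the assumption that the instance contains a single chain type):

```python
import math

def base(b, eta, lam):
    """Worst-case base c when the instance contains only one chain type:
    log c = log(b) * log(4/3) / (log b + eta*log(4/3) + log(lam))."""
    log43 = math.log(4 / 3)
    return math.exp(math.log(b) * log43
                    / (math.log(b) + eta * log43 + math.log(lam)))

# Positive overlapping 2-chain without Lemma [simplify_lem]:
#   b = 9, eta = 5, lambda = 81/331 (type 3).
# Two-negative overlapping 2-chain without the amortized analysis:
#   b = 9, eta = 4, lambda = 15/46 (type 4).
for b, eta, lam in [(9, 5, 81 / 331), (9, 4, 15 / 46)]:
    print(base(b, eta, lam) >= 1.328)  # True in both cases
```

Both cases give a base of at least $1.328$, i.e., worse than the $1.32793$ of Theorem \[main2\].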
Therefore our optimizations are necessary for proving Theorem \[main2\]. [^1]: A preliminary version of this paper appeared in the proceedings of ICALP 2018 [@liu2018ksat]. [^2]: We refer the reader to Chapter 11 in [@DBLP:series/faia/BuningK09] for a survey of autark assignments. [^3]: This essentially defines the set of all satisfying assignments for a chain. As a simple example in $3$-CNF, the $1$-chain $\langle x_1 \vee x_2 \vee x_3 \rangle$ has solution space $A = \{0,1\}^3 \backslash 0^3$. [^4]: As we shall see in §[\[UBK\_section\]]{} and §[\[UB3\_section\]]{}, there are only a finite number of different solution spaces and finitely many elements in each solution space. Thus for those $\nu_i = o(n)$, we can enumerate all possible combinations of assignments on them and incur only a sub-exponential slowdown, i.e., an $\mathcal{O}(1)$ factor in the upper bound. \[footnote\] [^5]: W.l.o.g., one can negate all negative literals in $\mathcal{I}$ to transform the solution space of a $1$-chain to $\{0,1\}^k \backslash 0^k$. [^6]: \[note1\]Omitting this observation or Lemma \[simplify\_lem\] would ruin our worst-case upper bound; see Appendix \[DD\_section\].
---
abstract: |
  We have explored radial color and stellar surface mass density profiles for a sample of 85 late-type spiral galaxies with deep (down to $\sim$27 mag arcsec$^{-2}$) SDSS $g'$ and $r'$ band surface brightness profiles. About $90\%$ of the light profiles have been classified as broken exponentials, exhibiting either truncations (Type II galaxies) or antitruncations (Type III galaxies). The color profiles of Type II galaxies show a “U shape” with a minimum of $(g' - r') = 0.47\ \pm\ 0.02$ mag at the break radius. Around the break radius, Type III galaxies have a plateau region with a color of $(g' - r') = 0.57\ \pm\ 0.02$ mag. Using the color to calculate the stellar surface mass density profiles reveals a surprising result. The breaks, well established in the light profiles of the truncated galaxies, are almost gone, and the mass profiles now resemble those of the pure exponential (Type I) galaxies. This result suggests that the origin of the break in Type II galaxies is more likely due to a radial change in stellar population than being associated with an actual drop in the distribution of mass. Type III galaxies, however, seem to preserve their shape in the stellar mass density profiles. We find that the stellar surface mass density at the break is $13.6 \pm 1.6 \ {M}_{\sun}$pc$^{-2}$ for truncated galaxies and $9.9 \pm 1.3 \ {M}_{\sun}$pc$^{-2}$ for the antitruncated ones. We estimate that the fraction of stellar mass outside the break radius is $\sim$15$\%$ for truncated galaxies and $\sim$9$\%$ for antitruncated galaxies.
author: - Judit Bakos - Ignacio Trujillo - Michael Pohlen title: 'COLOR PROFILES OF SPIRAL GALAXIES: CLUES ON OUTER-DISK FORMATION SCENARIOS' --- Introduction {#intro} ============ Our picture regarding the diversity of the radial surface brightness profiles of spiral galaxies has changed greatly since the early work of Patterson (1940) and de Vaucouleurs (1958), who showed that the disks of spiral galaxies generally follow an exponential decrease in their radial surface brightness profile. Nowadays, this view has become clearly insufficient, as not all the disk galaxies (indeed, only the minority for late-types) are well described with a single exponential fitting function as shown in several recent studies (Erwin et al. 2005; Pohlen & Trujillo 2006, hereafter PT06; Florido et al. 2006, 2007; Erwin et al. 2008), where they have identified three basic classes of surface brightness profiles depending on an apparent break feature or lack of one: (1) the pure exponential profiles (Type I) with no breaks, (2) Type II with a “downbending break” (revising and extending a previous classification introduced by Freeman \[1970\] to include the so-called truncations of the stellar populations at the edge of the disk discovered by van der Kruit \[1979\]) and (3) a completely new class (Type III), also described by a broken exponential but with an upbending profile. The latter, discovered by Erwin et al. (2005), is also termed antitruncated. PT06 explored a sample of nearby late-type galaxies using the Sloan Digital Sky Survey to create a statistically representative set of radial surface brightness profiles. They found that about 60$\%$ of the spirals are truncated (Type II), 30$\%$ are antitruncated (Type III), and only 10$\%$ have no detectable breaks (Type I). Still, little is known about the nature of the breaks or about the presence of stars beyond that feature. 
In such low-density ($\le$ 10 $ {M}_{\sun}$pc$^{-2}$) environments at the galaxy peripheries, current star formation theories forbid efficient star formation (Kennicutt 1989; Elmegreen & Parravano 1994; Schaye 2004). However, UV observations (Gil de Paz et al. 2005; Thilker et al. 2005) have shown ongoing star formation in these outer regions. In addition, it is clear that there are a large number of stars in the outskirts of galaxies (see, e.g., PT06). Several theories have investigated the formation of breaks in the case of the Type II morphology. Proposed models to explain the existence of truncations in stellar disks can be grouped into two branches depending on the relevant mechanism that causes the break: (a) models related to angular momentum conservation in the protogalactic cloud (van der Kruit 1987) or angular momentum cutoff in cooling gas (van den Bosch 2001), and (b) models that attribute the existence of breaks to star formation thresholds (Kennicutt 1989; Elmegreen & Parravano 1994; Schaye 2004). In agreement with this last scenario, Elmegreen & Hunter (2006) suggested a multicomponent star formation model that would result in a double exponential surface brightness profile as observed for Type II galaxies. Recent developments, some of them combining pieces of the aforementioned scenarios, conclude that secular evolution driven by bars or spiral arms or even clump disruptions can result in truncated exponential profiles (Debattista et al. 2006; Bournaud et al. 2007; Roškar et al. 2007; Foyle et al. 2008). Magnetic fields have also been considered to explain the existence of truncations (Battaner et al. 2002). On the other hand, the Type III morphology is proposed to be explained by tidal stripping within a minor merger (Peñarrubia et al. 2006; Younger et al. 2007), by a bombardment of the disk with dark matter subhalos (Kazantzidis et al. 2007) or by a high-eccentricity flyby of a satellite galaxy (Younger et al. 2008).
In this Letter we show for the first time color and stellar surface mass profiles of a large sample of galaxies (the PT06 sample) to quantify the stellar mass density at the break position and the fraction of stellar mass beyond the break. These values become important when comparing observations to the results of numerical simulations of outer-disk formation. In order to fully understand the galaxy formation and evolution process, it is necessary to perform a detailed study of the stellar population properties in the galaxies’ outskirts. This kind of study gives insight into how star formation progresses in the different parts of the disks, providing clues on the stellar mass buildup process.

The Data and Analysis Techniques {#data}
================================

Our data are the 85 SDSS $g'$ and $r'$ band surface brightness profiles published in PT06. The galaxies were selected to be a representative, volume limited ($R \ltsim 46$ Mpc) sample of face-on to intermediate-inclined late-type disk galaxies brighter than ${M}_B = -18.4 $ mag. In that sense, they range from fainter to brighter surface brightness, from lower to higher mass, and also from smaller to larger size. The surface brightness profiles are classified as 9 Type I, 39 Type II and 21 Type III, i.e., exponential, truncated and antitruncated profiles, respectively (see PT06 and Erwin et al. 2008 for more details). The surface brightness limits on our individual surface brightness profiles (27.0 and 27.5 mag arcsec$^{-2}$ for the $r'$ and $g'$ bands, respectively) were estimated by computing when either over- or undersubtracting the sky by $\pm$1$\sigma$ has an effect of more than 0.2 mag on the surface brightness distribution. These limits were established using two different methods for determining the sky. In addition, by comparing our profiles with deeper data (when available), we did not detect any systematic error in the sky determination.
It is worth noting that, for the work presented here where we combine several profiles to explore the mean properties of the surface brightness profiles, the uncertainties in the mean properties due to sky subtraction uncertainties are reduced to $\lesssim$0.03–0.04 mag in the outermost regions of the galaxies. We have removed 16 galaxies from our original dataset due to peculiarities of the classification, for example, all of the Type II-AB galaxies where the apparent break is to some extent artificial (see detailed discussion in PT06). We also excluded IC 1067, which has a very uncertain classification, being either Type II or a possible Type I. For galaxies with mixed classifications (see PT06) we only used the first type. To statistically compare the surface brightness profiles of all galaxies in our sample, we normalized the sizes of the different galaxies according to their respective $r'$-band break radii (see Fig. \[colorgrads\]). For the Type I galaxies, lacking a break radius, we applied 2.5 times the measured scale length as a normalizing factor, which is the typical radius of the break for the Type II galaxies (PT06). The Type II and Type III galaxies have their $r'$-band 27 mag arcsec$^{-2}$ isophote at around 1.8 times the break radius, which constrains how far we can accurately trace out the behavior of the light profiles into the outskirts of the disks. In order to calculate a robust mean color that characterizes our sample, we removed the color of each individual galaxy at a given radius where the $g'$- or the $r'$-band surface brightness is below the above critical limits at that radius. We have obtained a robust mean value of the color for all galaxies by removing the $3\sigma$ outliers. We have explicitly checked that using a cut in the surface brightness does not result in a bias towards any absolute magnitude range. The resulting mean profiles are shown in the middle row of Figure \[colorgrads\].
It is straightforward to link the stellar mass density ($\Sigma$) profile with the surface brightness profile at a given wavelength ($\mu_{\lambda}$) if we know the mass-to-light ($M/L$) ratio, using the expression below: $$\log_{10} \Sigma = \log_{10} (M/L)_{\lambda} - 0.4(\mu_{\lambda} - m_{abs,\sun,\lambda}) + 8.629, \label{sigma}$$ where $m_{abs,\sun,\lambda}$ is the absolute magnitude of the Sun at wavelength $\lambda$, and $\Sigma$ is measured in ${M}_{\sun}$pc$^{-2}$. To evaluate the above expression, we need to obtain the $M/L$ ratio at each radius. Following the prescription of Bell et al. (2003), we have calculated the $M/L$ ratio as a function of color. In this work we assume a Kroupa IMF (Kroupa 2001), which according to Bell et al. (2003) implies subtracting 0.15 dex from the $M/L$, using the following expression:\ $$\log_{10} (M/L)_{\lambda} = \left(a_{\lambda}\ +\ b_{\lambda}\ \times\ color \right)- 0.15,$$ where for the $(g' - r')$ color, $a_{\lambda} = -0.306$ and $b_{\lambda} = 1.097$ are applied to determine the $r'$-band $M/L$. The resulting stellar mass density profiles are shown in the bottom row of Figure \[colorgrads\]. The Galactic extinction has been taken into account as described in PT06 using the Schlegel et al. (1998) values: $$\mu_{corrected,\lambda} = \mu_{measured,\lambda} - A_{\lambda},$$ where $A_{\lambda}$ is the extinction coefficient in each band.

Results
=======

Averaged Radial Surface Brightness
----------------------------------

The upper row of Figure \[colorgrads\] shows the averaged radial surface brightness profiles for the Type I, Type II and Type III galaxies. The increase towards the center over the inwards extrapolated (inner) disk starting typically at around 0.2$R/R_b$ is due to the presence of bulges. In the outer regions the characteristic break features, truncations and antitruncations, are clearly seen in the mean profiles of Type II and III galaxies, respectively.
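The surface-brightness-to-mass conversion of § \[data\] can be sketched as follows (an illustration only; the helper name is hypothetical, and the solar absolute $r'$-band magnitude $m_{abs,\sun,r'} = 4.64$ mag is an assumed value, since the Letter does not quote the number it adopts):

```python
def surface_mass_density(mu_r, g_minus_r, m_sun_r=4.64):
    """Stellar surface mass density (M_sun / pc^2) from the r'-band surface
    brightness mu_r (mag / arcsec^2) and (g'-r') color, using the Bell et al.
    (2003) coefficients a_r = -0.306, b_r = 1.097 and the 0.15 dex Kroupa-IMF
    correction. m_sun_r is an assumed solar absolute magnitude."""
    log_ml = (-0.306 + 1.097 * g_minus_r) - 0.15        # log10 (M/L)_r
    log_sigma = log_ml - 0.4 * (mu_r - m_sun_r) + 8.629
    return 10.0 ** log_sigma

# A disk at mu_r = 24 mag/arcsec^2 with the Type II break color (g'-r') = 0.47
# comes out at roughly 9 M_sun/pc^2, of the order of the break densities
# reported below.
print(round(surface_mass_density(24.0, 0.47), 1))  # 8.8
```

Note that a redder color raises $M/L$ and hence $\Sigma$ at fixed surface brightness, which is why the "U-shaped" color profile can flatten the break in the mass profile.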
Color Gradients
---------------

The middle row of Figure \[colorgrads\] shows the $(g' - r')$ radial color profiles. It is interesting to note that each galaxy type has its own characteristic color gradient. As found in previous works (e.g., de Jong 1996), the disks exhibit a general bluing as a function of their radius. This is seen for all the galaxy types. The color of Type I galaxies, after reaching an asymptotic value of $(g' - r') \sim 0.46$ mag in the outer regions ($\sim2R_h$), remains unchanged, within the error bars, farther out. Type II galaxies show a minimum \[at $(g' - r') = 0.47 \pm 0.02$ mag\] in their color profile at the break radius, with the profile getting redder beyond. After the initial bluing, the color of Type III galaxies gets redder towards the break radius to a mean value of $(g' - r') = 0.57 \pm 0.02$ mag. It is important to note that we can recover the above color behavior of the mean profile basically for every individual galaxy of each subsample, so this is not an artifact of our profile combination. In the outermost region of the profile, the uncertainty in the sky determination can slightly increase the error bar on the color determination. We have estimated this to be a factor of $\sim\sqrt{2}$ larger. All galaxy types have a feature in common (see Fig. \[colorgrads\]): a large scatter of the surface brightness and color profiles. To understand the origin of this scatter, we have explored how the color at the break position correlates with different properties of the galaxies. We find that the scatter is best correlated with the total absolute magnitude of the individual galaxies (see Fig. \[hists\], left column). To quantify the strength of these correlations, we have performed a Spearman correlation analysis. For all three types we find that the brighter the galaxy, the redder the color at the break.
This correlation is weaker for Type I galaxies, because the sample is too small to provide reliable statistics, but becomes particularly clear for Type II profiles, where our $r'$-band absolute magnitude range is the largest. Figure \[hists\] shows that at a fixed absolute magnitude, the range of the $(g' - r')$ color at the break is only $\sim$ 0.15 mag. This is a factor of 2 smaller than the overall range in Figure \[colorgrads\].

Surface Mass Density Profiles
-----------------------------

As explained in § \[data\], we have obtained $M/L$ ratio profiles from the $(g' - r')$ colors. These profiles were then converted into stellar surface mass density profiles, which we discuss here. The most striking result is that both the Type I and Type II galaxy profiles look very similar, even without any quantitative measurement of the steepness. The break for the Type II galaxies that is so apparent in the light profiles has almost (for some individual galaxies completely) disappeared. In the case of Type III galaxies, the shape of the profile has not changed dramatically: a change of the slope around the break is still evident; however, the profile becomes less well described by two individual exponentials.

Break Surface Mass Density and Stellar Mass Fraction beyond the Break Radius
----------------------------------------------------------------------------

Since the stellar surface mass density is an important tracer of star formation and disk stability (Kennicutt 1989), we provide here the numbers corresponding to the break position. The middle column of Figure \[hists\] shows the histograms of the stellar surface mass density at the break radius or, for Type I galaxies, at $R\!=\!2.5R_{h}$. The values of $\Sigma_{br}$ are $22.5\ \pm\ 5.3\ {M}_{\sun}$pc$^{-2}$ (Type I), $13.6\ \pm\ 1.6\ {M}_{\sun}$pc$^{-2}$ (Type II), and $ 9.9\ \pm\ 1.3\ {M}_{\sun}$pc$^{-2}$ (Type III).
Note that we do not detect any break in the stellar mass density profile of Type I galaxies down to $\sim3 \ {M}_{\sun}$pc$^{-2}$. Another quantity we have calculated is the amount of stellar mass beyond the break radius, which can be very useful for constraining the theoretical models. In the right column of Figure \[hists\], the stellar mass fractions in the outer disk are shown. Type I galaxies contain about $22.3\ \pm \ 2 \%$ of their total stellar mass beyond a radius of 2.5 times their scale length. Type II and Type III galaxies have much less stellar mass in the outer regions. For Type III galaxies ($9.2\ \pm\ 1.4 \% $) the amount of stellar mass is the lowest, with Type II in between ($14.7\ \pm\ 1.2 \%$).

Discussion
==========

How can the results found in this work be used to constrain the current models for the formation of breaks in the surface brightness profiles of disk galaxies? In the case of Type II galaxies, the traditional pictures of break formation (see § \[intro\]) fall into two families: angular momentum versus thresholds of star formation. Neither of these two ideas taken at face value can explain why we find so many stars beyond the break radius, so we will not go into more detail on these individual models. However, it is important to stress that the newer generation of models has been able to naturally explain the existence of stars beyond the break radius, and even more, the exponential nature of the shape of the surface brightness beyond this feature. In particular, in the case of the Roškar et al. (2008) model in which the breaks are the result of the interplay between a radial star formation cutoff and a redistribution of stellar mass by secular processes, a natural prediction is the existence of a minimum in the age of the stellar population at the break position, and a further aging (and consequently, a likely reddening) of these stars as we move farther and farther away from the break radius.
This is in qualitative agreement with what we see in our color profiles for this kind of galaxy. However, what these models have not been able to reproduce is the absence of a clear break in the stellar mass density profile. The near absence of a break in the stellar surface mass density profile for our galaxies gives a strong indication that the behavior of the surface brightness profile outside the break is basically due to a change in the ingredients of the stellar population. If the shape of the color we see is not caused by a change in metallicity, this behavior could be explained as a natural consequence of stochastic migration of young stars from the inner parts of the disk to the outskirts (Roškar et al. 2008). This will result in an age gradient where the oldest stars are the dominant component in the outskirts of the disks. Unfortunately, other models (like Bournaud et al. 2007 and Foyle et al. 2008) that are capable of creating stellar mass density profiles that resemble the Type II one we have found here (i.e., without a clear break) do not provide any prediction on the color (age) distribution the stars should have along the radial range. Nevertheless, the fact that the stellar mass density of the break for this type of galaxy with $\sim$13 ${M}_{\sun}$pc$^{-2}$ is so close to the gas density threshold prediction of $\sim$10 ${M}_{\sun}$pc$^{-2}$ makes the case for a stellar population origin for the surface brightness break even stronger (with a 100% efficiency of transforming gas to stars). Evidence for the same color phenomenology for Type II galaxies also at high redshift is presented by Azzollini et al. (2008). They have shown that a similar minimum in the color profile can be found, at least, up to z $\sim$ 1 and that the main source of the scatter of the color profiles is caused by the different stellar mass (in our case, absolute magnitude) of the galaxies in their sample. 
Following these findings, we can conclude that once the absolute magnitude of a galaxy is fixed, the color profiles within a given Type (I, II or III) of galaxy are strikingly similar. A more massive galaxy has a redder global color but the same shape of the color gradient. Combining the results found in Azzollini et al. (2008) with what we find here, one is tempted to claim that both the existence of the break in Type II galaxies and the shape of their color profiles are long-lived features in galaxy evolution, because it would be hard to imagine how the above features could be continuously destroyed and recreated while maintaining the same properties over the last $\sim$8 Gyr. In the case of Type III galaxies, the situation is less clear. On the one hand, our sample is smaller than in the case of Type II galaxies, and consequently our results are less robust. On the other hand, the theoretical models are less elaborate than for truncated galaxies, and no clear predictions have been made, in particular, for the color profiles. Taking into account that the shape of the stellar mass density profile does not differ too much from what we see in the surface brightness profiles, we are inclined to think that our data do not favor a sole origin in stellar population changes for this type of galaxy but rather a genuine change in the amount of stars relative to the exponential continuation of their inner region. It is interesting to note that in most of the proposed ideas summarized in § \[intro\] to explain this kind of galaxy, the origin of the stars in the periphery is linked to a dynamical (in some cases external) origin. So what we are seeing for these galaxies may be star formation combined with an infall of new stars from an external (satellite) source. To corroborate our results, we plan to increase the number of galaxies as well as the number of observed filters in our next study. This research was supported by the Instituto de Astrofísica de Canarias.
We thank Alexandre Vazdekis and Ruymán Azzollini for their valuable comments and the anonymous referee for his or her careful reading. Azzollini, R., Trujillo, I., Beckman, J., 2008, , 679, L69 Battaner, E., Florido, E., & Jiménez-Vicente, J. 2002, A&A, 388, 213 Bell, E.F., McIntosh, D.H., Katz, N., Weinberg, M.D., 2003, , 149, 289 Bournaud, F., Elmegreen, B. G., & Elmegreen, D. M., 2007, , 670, 237 Debattista, V. P., Mayer, L., Carollo, C. M., Moore, B., Wadsley, J., & Quinn, T., 2006, , 645, 209 de Jong, R. S. 1996, A&A, 313, 45 de Vaucouleurs, G. 1958, ApJ, 128, 465 Elmegreen, B. G. & Hunter, D. A., 2006, , 363, 712 Elmegreen, B. G. & Parravano, A., 1994, , 435, L121 Erwin, P., Beckman, J. E., & Pohlen, M., 2005, , 626, L81 Erwin, P., Pohlen, M., & Beckman, J. E., 2008, AJ, 135, 20 Florido, E., Battaner, E., Guijarro, A., Garzón, F., & Castillo-Morales, A. 2006, A&A, 455, 467 Florido, E., Battaner, E., Zurita, A., & Guijarro, A. 2007, A&A, 472, L39 Foyle, K., Courteau, S., Thacker, R. J., 2008, , 386, 1821 Freeman, K. C. 1970, , 160, 811 Gil de Paz, A., et al. 2005, , 627, L29 Kazantzidis, S., Bullock, J. S., Zentner, A. R., Kravtsov, A. V., & Moustakas, L. A., 2007, , submitted (arXiv: 0708.1949) Kennicutt, R. C., 1989, , 344, 685 Kroupa, P., 2001, , 322, 231 Patterson, F. S., 1940, Harvard Coll. Obs. Bull., 914, 9 Peñarrubia, J., McConnachie, A., & Babul, A., 2006, , 650, L33 Pohlen, M., & Trujillo, I., 2006, A&A, 454, 759 (PT06) Roškar, R., Debattista, V. P., Stinson, G. S., Quinn, T. R., Kaufmann, T., Wadsley J., 2008, , 675, L65 Schaye, J., 2004, , 609, 667 Schlegel, D. J., Finkbeiner, D. P., Davis, M., 1998, , 500, 525 Thilker D. A., et al., 2005, , 619, L79 van den Bosch, F. C., 2001, , 327, 1334 van der Kruit, P. C., 1979, A&AS, 38, 15 van der Kruit, P. C., 1987, A&A, 173, 59 Younger, J. D., Besla, G., Cox, T. J., Hernquist, L., Robertson, B., & Willman, B., 2008, , 676, L21 Younger, J. D., Cox, T. J., Seth A. 
C., & Hernquist, L., 2007, , 670, 269 ![*Top row*: Averaged, scaled radial surface brightness profiles of 9 Type I, 39 Type II and 21 Type III galaxies. The filled circles correspond to the $r'$-band mean surface brightness, the open circles to the mean $g'$-band data. The small dots are the individual galaxy profiles in both bands. The surface brightness is corrected for Galactic extinction. *Middle row*: $(g' - r')$ color gradients. The averaged profile of Type I reaches an asymptotic color value of $\sim$0.46 mag being rather constant outward. Type II profiles have a minimum color of $0.47 \pm 0.02$ mag at the break position. The Type III mean color profile has a redder value of about $0.57 \pm 0.02$ mag at the break. *Bottom row*: $r'$-band surface mass density profiles obtained using the color-to-M/L conversion of Bell et al. (2003). Note how the significance of the break almost disappears for the Type II case. The error bars are given as $\sim\sigma/\sqrt{N}$, where $\sigma$ is the scatter and $N$ is the number of galaxies taken into account for estimating the mean averaged value in each bin. These error bars do not account for uncertainties in the sky determination.[]{data-label="colorgrads"}](f1.eps) ![*Left column*: Absolute $r'$ magnitude and break color (or at 2.5 scale lengths in the case of Type I galaxies) correlations; $\rho$ is the Spearman’s correlation coefficient. Type II galaxies show a strong correlation between the break color and the absolute magnitude, which means that the scatter in break color at a given luminosity is significantly smaller than the overall scatter. *Middle and right columns*: Stellar surface mass density and stellar mass fraction histograms with their median values in the upper right corner of each panel.[]{data-label="hists"}](f2.eps)
--- abstract: 'We explore the end point of the helical instability in the finite density, finite magnetic field background discussed by Kharzeev and Yee [@Kharzeev:2011rw]. The nonlinear solution is obtained and identified with the (magnetized) chiral density wave phase in the literature. We find there are two branches of solutions, which match the two unstable modes in [@Kharzeev:2011rw]. At large chemical potential and magnetic field, the magnetized chiral density wave can be thermodynamically preferred over the chirally symmetric phase and the chiral symmetry breaking phase. Interestingly, we find an exotic state with vanishing chemical potential at large magnetic field. We also attempt to clarify the role of the anomalous charge in the holographic model.' author: - 'Yanyan Bu[^1]' - 'Shu Lin[^2]' bibliography: - 'Q5ref.bib' title: '**Holographic Magnetized Chiral Density Wave**' --- Introduction ============ The ground state of hot and dense QCD matter is one of the key questions in the physics of heavy ion collisions and of neutron stars. In the former case, a strong magnetic field can be produced in off-center collisions. In the latter case, a strong magnetic field is believed to exist in the cores of neutron stars. Magnetic field is known to modify QCD phases in different ways: in the absence of baryon chemical potential, magnetic field enhances chiral symmetry breaking and reduces the critical temperature, known as magnetic catalysis [@Klevansky:1989vi; @Klimenko:1992ch; @Gusynin:1995nb] and inverse magnetic catalysis [@Bali:2011qj; @Bali:2012zg] respectively. At finite quark chemical potential, the QCD phase diagram becomes much richer. In particular, a variety of inhomogeneous phases appear, including the chiral density wave [@Nakano:2004cd], solitonic modulation [@Nickel:2009ke; @Nickel:2009wj], crystalline color superconductor [@Alford:2000ze], quarkyonic spiral [@Kojo:2009ha], etc.
The quark density is crucial in the formation of these inhomogeneities; see [@Buballa:2014tba] for a review. The presence of a magnetic field tends to widen the inhomogeneous phases, leading to the magnetized chiral density wave [@Frolov:2010wn; @Tatsumi:2014wka] or magnetized kink [@Cao:2016fby], the magnetized quarkyonic chiral spiral [@Ferrer:2012zq], etc. Interestingly, the interplay of quark density and magnetic field can also lead to genuinely new phases. This is realized through the axial anomaly: at low temperature, effective model studies found inhomogeneous phases including the pion domain wall [@Son:2007ny; @Eto:2012qd], chiral magnetic spiral [@Basar:2010zd], chiral soliton lattice [@Brauner:2016pko], etc.; see also [@Miransky:2015ava; @Kharzeev:2012ph] for comprehensive reviews. From the viewpoint of thermodynamics, formation of inhomogeneous phases induces an anomalous charge, which can lower the free energy of the system [@Son:2007ny; @Brauner:2016pko]. However, the nature of the anomalous charge remains a mystery. It is desirable to search for the inhomogeneous phases in other approaches. A number of such studies using holographic models have been carried out [@Domokos:2007kt; @Ammon:2016szz; @Nakamura:2009tf; @Ooguri:2010xs; @Kim:2010pu; @Kharzeev:2011rw; @deBoer:2012ij]. In this work, we aim at finding the holographic analog of the magnetized chiral density wave. This work is inspired by earlier work by Kharzeev and Yee [@Kharzeev:2011rw], in which they found an unstable helical mode. We will find the end point of the instability and identify it with the magnetized chiral density wave (MCDW) phase. The competition among the MCDW phase, the conventional chiral symmetry breaking phase, and the chirally restored phase reveals a novel structure. We will emphasize the role of the anomaly and attempt to clarify the nature of the anomalous charge. The paper is organized as follows: in Section \[sec\_intro\], we give a brief review of the holographic model and the known phase diagram for homogeneous phases [@Evans:2010iy].
In Section \[sec\_mcdw\], we present the ansatz for the MCDW phase, solve it numerically, and obtain its thermodynamics. We discuss the role of the anomalous charge in the MCDW phase in Section \[sec\_anom\]. We summarize and discuss future perspectives in Section \[sec\_sum\]. A quick review of the model {#sec_intro} =========================== We use the D3/D7 model for our study. The background contains $N_c$ D3 branes and $N_f$ D7 branes. In the probe limit $N_f\ll N_c$, the background is simply given by the black hole geometry sourced by the D3 branes, with the backreaction of the D7 branes suppressed. The D3/D7 model is dual to ${\cal N}=4$ Super Yang-Mills (SYM) theory coupled to ${\cal N}=2$ hypermultiplet fields, which transform in the adjoint and fundamental representations of the $SU(N_c)$ group respectively. The model is close to QCD in the sense that the ${\cal N}=4$ and ${\cal N}=2$ fields can be identified as gluons and quarks respectively. The probe limit is analogous to the quenched approximation. The finite temperature background of D3 branes is given by [@Mateos:2006nu]: $$\begin{aligned} \label{d3_metric} ds^2=-\frac{r_0^2}{2}\frac{f^2}{H}\r^2dt^2+\frac{r_0^2}{2}H\r^2dx^2+\frac{d\r^2}{\r^2}+d\th^2+\sin^2\th d\ph^2+\cos^2\th d\O_3^2,\end{aligned}$$ where $$\begin{aligned} f=1-\frac{1}{\r^4},\quad H=1+\frac{1}{\r^4}.\end{aligned}$$ We set the AdS radius to $1$. The temperature is given by $T=r_0/\pi$. We also explicitly factorize $S_5$ into $S_3$ and two additional angular coordinates $\th$ and $\ph$. There is also a nontrivial Ramond-Ramond form $$\begin{aligned} \label{f5} F_5=r_0^4\r^3Hfdt{{\wedge}}dx_1{{\wedge}}dx_2{{\wedge}}dx_3{{\wedge}}d\r+4\cos^3\th\sin\th d\th{{\wedge}}d\ph{{\wedge}}d\O_3.\end{aligned}$$ The D7 branes share the worldvolume coordinates with the D3 branes. In addition, they span the coordinates $x_4-x_7$ parametrized by the $S_3$ coordinates. Their position in the $x_8-x_9$ plane can be parametrized by polar coordinates, with radius $\r\sin\th$ and angle $\ph$.
The rotational symmetry in the $x_8-x_9$ plane corresponds to the $U(1)_R$ symmetry in the field theory. The D7 branes have an additional $U(1)_B$ symmetry carried by their worldvolume gauge field. In comparison with QCD, the $U(1)_R$ and $U(1)_B$ symmetries are identified as the axial and baryon symmetries respectively. With the background metric , the gluons provide a thermal bath at fixed temperature for the quarks. The quark chemical potential and magnetic field are turned on by a nonvanishing $A_t(\r)$ and a constant $F_{xy}=B$. The phase diagram has been obtained by Evans et al. [@Evans:2010iy], showing a rich structure. There is one order parameter of the system, namely the chiral condensate. The condensate is determined by the embedding of the D7 branes in the D3 brane background. There are two possible embeddings for the D7 branes: the black hole embedding and the Minkowski embedding, corresponding to the chirally symmetric ($\c S$) phase and the chiral symmetry breaking ($\c SB$) phase. The phases can further be classified by the quark number density. For the $\c$S phase, only the finite density state is allowed. For the $\c$SB phase, both finite density and zero density states are allowed. In total, three homogeneous phases are found in [@Evans:2010iy]: the zero density $\c SB$ phase, the finite density $\c SB$ phase, and the finite density $\c S$ phase. The action of the D7 branes is given by a Dirac-Born-Infeld (DBI) term and a Wess-Zumino (WZ) term $$\begin{aligned} \label{S_bare} &S_{D7}=S_{DBI}+S_{WZ}, {\nonumber \\}&S_{DBI}=-N_fT_{D7}\int d^8\x\sqrt{-\text{det}\(g_{ab}+2\pi\a' \tilde{F}_{ab}\)}, {\nonumber \\}&S_{WZ}=\frac{1}{2}N_fT_{D7}(2\pi\a')^2\int P[C_4]{{\wedge}}\tilde{F}{{\wedge}}\tilde{F}.\end{aligned}$$ Here $T_{D7}$ is the D7 brane tension. $g_{ab}$ and $\tilde{F}_{ab}$ are the induced metric and worldvolume field strength respectively.
Defining $$\begin{aligned} &F_{ab}=2\pi\a'\tilde{F}_{ab}, {\nonumber \\}&{{\cal N}}=N_fT_{D7}2\pi^2=\frac{N_fN_c\l}{(2\pi)^4},\end{aligned}$$ we can simplify the action to $$\begin{aligned} \label{S_redef} &S_{DBI}=-\frac{{{\cal N}}}{2\pi^2}\int d^8\x\sqrt{-\text{det}\(g_{ab}+{F}_{ab}\)}, {\nonumber \\}&S_{WZ}=\frac{1}{4\pi^2}{{\cal N}}\int P[C_4]{{\wedge}}F{{\wedge}}F.\end{aligned}$$ The embedding function $\th$ and the worldvolume gauge field $A_t$ are determined by minimizing the action. The asymptotic behaviors of $\th$ and $A_t$ are given by $$\begin{aligned} \label{mc} & \sin\th=\frac{m}{\r}+\frac{c}{\r^3}+\cdots, & A_t=\mu-\frac{n}{\r^2}+\cdots.\end{aligned}$$ The coefficients $m$ and $c$ are related to the bare quark mass $M_q$ and the chiral condensate $\<\bar{\psi}\psi\>$ as [@Mateos:2007vn]: $M_q=\frac{mr_0}{2\pi\a'}$, $\<\bar{\psi}\psi\>=-2\pi\a'{{\cal N}}c r_0^3$. The coefficients $\m$ and $n$ are related to the quark chemical potential $\mu_q$ and the quark number density $n_q$ as: $\m_q=\frac{\m r_0}{2\pi\a'}$, $n_q=2\pi\a'{{\cal N}}n r_0^3$. Below we set $r_0=1$. This amounts to working in units of $\pi T$. For the homogeneous phase, the WZ term is not relevant. However, when $B$ and $\m$ are large, the system is found to contain an unstable mode involving simultaneous fluctuations of $x_8$ and $x_9$ [@Kharzeev:2011rw]. It is further conjectured that the end point of this instability is a helical phase. The presence of the WZ term is essential to the instability. In the next section, we will find the end point of the instability and identify it with the MCDW phase known in the literature [@Tatsumi:2014wka]. Magnetized Chiral Density Wave {#sec_mcdw} ============================== We start with the following ansatz for the MCDW phase $$\begin{aligned} \label{ansatz} &A_t=A_t(\r),\qquad \th=\th(\r),\qquad \ph=k z.\end{aligned}$$ The last two equations in can be written equivalently as $$\begin{aligned} x_8+ix_9=e^{i k z}\r\sin\th(\r).\end{aligned}$$ Note that $A_t$ depends on $\r$ only.
It gives rise to a homogeneous quark number density. The fields $x_8$ and $x_9$ form a spiral in the direction parallel to the magnetic field. The limit $k\to0$ reduces to the homogeneous case studied before. In this limit, $x_8=\r\sin\th$ is dual to the chiral condensate: $$\begin{aligned} \bar{\psi}\psi\propto c.\end{aligned}$$ The ansatz is simply a chiral rotation of the chiral condensate along the $z$ direction: $$\begin{aligned} \bar{\psi}\psi+i\bar{\psi}i\g_5\psi \propto c\(\cos kz+i\sin kz\).\end{aligned}$$ In the presence of a non-trivial $\ph$, the dual field theory contains the following interaction term for the quarks [@Das:2010yw; @Hoyos:2011us]: $$\begin{aligned} \label{Sint} S_I=-m\bar{\psi}e^{i\ph \g_5}\psi.\end{aligned}$$ The interaction term has no analog in QCD. We are interested in the massless limit, where this term vanishes. Therefore the helical phase corresponds to spontaneous breaking of both chiral symmetry and translational symmetry along $z$. 1D long range order is known to be washed out by fluctuations in effective models, with the ground state containing only quasi-long range order [@Lee:2015bva; @Hidaka:2015xza]. In the holographic model, this issue is absent because of the suppression of fluctuations in the large $N_c$ limit. Plugging the ansatz into , we obtain $$\begin{aligned} \label{S_exp} &S=\int d^4xd\r({\cal L}_{DBI}+{\cal L}_{WZ}), {\nonumber \\}&{\cal L}_{DBI}={{\cal N}}\frac{-1+\c^2}{4}\sqrt{2+4B^2+1/\r^4+\r^4}{\nonumber \\}&\times \sqrt{\frac{1}{\r^6+\r^{10}}\(1+\r^4+2k^2\r^2\c^2\)\(2\r^4(1+\r^4)A_t'^2(-1+\c^2)+\(-1+\r^4\)^2\(1-\c^2+\r^2\c'{}^2\)\)}, {\nonumber \\}&{\cal L}_{WZ}=-{{\cal N}}B k A_t'(-2\c^2+\c^4).\end{aligned}$$ We have defined $\c=\sin\th$. Note that the WZ term depends on the gauge potential $C_4$.
We fix the gauge following [@Kharzeev:2011rw], $$\begin{aligned} \label{c4} C_4=\(\frac{r_0^2}{2}\r^2H\)^2dt{{\wedge}}dx_1{{\wedge}}dx_2{{\wedge}}dx_3-(\cos^4\th-1) d\ph{{\wedge}}d\O_3.\end{aligned}$$ A different gauge choice has been used in [@Hoyos:2011us]. The difference in fact does not alter the bulk solutions for the MCDW phase, because it only causes a constant shift in the total action, $\D S=\int d^4xd\r\, BkA_t'= \text{Vol}_4Bk\m$. It does, however, affect the thermodynamics. Our forthcoming analysis will also support this gauge choice . The equations of motion can be derived as $$\begin{aligned} \label{eom_var} &\frac{\d {\cal L}}{\d\c}-\frac{d}{d\r}\(\frac{\d {\cal L}}{\d\c'}\)=0, {\nonumber \\}&\frac{\d {\cal L}}{\d A_t}-\frac{d}{d\r}\(\frac{\d {\cal L}}{\d A_t'}\)=0.\end{aligned}$$ Since the action depends on $A_t$ only through its derivative, there is a conserved quantity $\frac{\d {\cal L}}{\d A_t'}$. It is identified with the quark number density $n$ [@Evans:2010iy]. Consequently, we can use $$\begin{aligned} \frac{\d {\cal L}}{\d A_t'}=n.\end{aligned}$$ Throughout the paper, we focus on finite density solutions. It is known that only the black hole embedding can support finite density solutions [@Karch:2007br]. We search for the MCDW solution by numerically integrating the horizon solution out to the boundary. The horizon solution for the black hole embedding is obtained analytically as $$\begin{aligned} &\c=c_0+c_2\(\r-1\)^2+\cdots,{\nonumber \\}&A_t'=2a_2(\r-1)+3a_3(\r-1)^2+\cdots,\end{aligned}$$ with $c_0$ and $a_2$ being two independent parameters. We require that the field strength $F_{\r t}=A_t'$ vanish on the horizon. Higher order coefficients in the expansion are expressible in terms of $c_0$ and $a_2$. We search for numerical solutions at fixed $n$, and then scan the parameter $n$.
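The first-integral structure invoked here — $A_t$ enters the action only through $A_t'$, so $\frac{\d {\cal L}}{\d A_t'}$ is constant in $\r$ and can be set equal to $n$ — can be illustrated symbolically on a toy one-field Lagrangian. The square-root form below is only a stand-in for the full ${\cal L}_{DBI}$, chosen for illustration:

```python
import sympy as sp

Ap, n = sp.symbols("Ap n", positive=True)

# Toy DBI-like Lagrangian density depending on A_t only through A_t' (= Ap here);
# it stands in for the full L_DBI of the paper, for illustration only.
L = -sp.sqrt(1 - Ap**2)

# A_t is cyclic, so dL/dA_t' is a rho-independent constant: the density n.
charge = sp.diff(L, Ap)

# Invert to express A_t' in terms of the constant n, as is done when
# integrating the horizon solution outward at fixed n.
sol = sp.solve(sp.Eq(charge, n), Ap)
print(sol)
```

The same inversion (with a far messier function of $\c$, $B$ and $k$) is what allows the density $n$ to replace $A_t'$ as an input parameter in the numerical integration.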
Since $n$ is invariant along the radial direction, we can use $n$ to fix one of the horizon parameters, $a_2$: $$\begin{aligned} 2Bc_0^2k-Bc_0^4k+\frac{a_2\sqrt{1+B^2}\(1-c_0^2\)^2\(1+c_0^2k^2\)}{\sqrt{\(1-a_2^2\)\(1-c_0^2\)\(1+c_0^2k^2\)}}=n.\end{aligned}$$ Note that $\c=\sin\th$, thus $0<c_0<1$. For a given set of parameters $n$, $B$ and $k$, $c_0$ is to be determined by the boundary condition $m=0$. In general, the MCDW solution exists for continuous values of $k$ at large $n$ and $B$. To find the preferred spiral momentum $k$, we need to minimize the Gibbs free energy in the grand canonical ensemble. The quark chemical potential is given by the bulk integral of $A_t'$ $$\begin{aligned} \label{mu_def} \m=\int_1^\infty d\r A_t'.\end{aligned}$$ In practice, we need to tune $n$ and $k$ simultaneously such that $\m$ remains unchanged. This is a numerically challenging task. We are able to achieve $1\%$ accuracy in $\m$. The Gibbs free energy $\O$ is related to the Euclidean action as $$\begin{aligned} \label{gibbs} \O=\frac{1}{\b}S^E=-\int d^3xd\r{\cal L}=-V\int d\r{\cal L}.\end{aligned}$$ The integration over the holographic coordinate $\r$ is divergent. We regularize the action by imposing a UV cutoff $\r=\r_{max}$ and renormalize by adding the following counter terms [@Guo:2016nnq] $$\begin{aligned} S_{counter}=\r_{max}^4-\frac{m^2\r_{max}^2}{2}+\frac{1}{4}\ln\r_{max}\(2B^2+k^2m^2\).\end{aligned}$$ The appearance of $k$ in the counter term for the massive case is not surprising, as $k$ appears as a parameter of the theory according to . There is also a finite counter term for the massive case [@Mateos:2007vn]; it does not concern us since we focus on the massless case. The ground state is to be determined by comparing the free energy of the MCDW phase with those of the known $\c$S phase and $\c$SB phase [@Evans:2010iy]. The $\c$SB phase appears only at large $B$, while the $\c$S phase exists for any $B$ and finite $\m$.
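The fixed-$\m$ scan described above can be sketched numerically: evaluate $\m=\int_1^\infty d\r\,A_t'$ by quadrature and root-find on $n$ to hold $\m$ at a target value. The profile `At_prime` below is a made-up stand-in for the numerically integrated bulk solution (the real one comes from the embedding equations), so only the strategy, not the numbers, is meaningful:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Hypothetical radial profile for A_t'(rho; n); it vanishes at rho = 1,
# mimicking the horizon boundary condition, and decays toward the boundary.
def At_prime(rho, n):
    return n / rho**3 * (1.0 - rho**-4)

def chemical_potential(n):
    # mu = int_1^infty A_t' d rho, as in the text.
    mu, _ = quad(At_prime, 1.0, np.inf, args=(n,))
    return mu

# Tune n so that mu sits at a target value; in the real problem n and k
# are adjusted simultaneously, which is what makes the scan challenging.
mu_target = 0.5
n_star = brentq(lambda n: chemical_potential(n) - mu_target, 0.0, 10.0)
```

Holding $\m$ fixed while scanning $k$ then amounts to repeating this root-find at each $k$.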
The $\c$SB phase can be obtained as the limit $k\to0$ of the MCDW phase. The $\c$S phase corresponds to the trivial embedding $\c=0$. The free energy is given by the same expression . To compare the free energies of the three phases, we use the free energy of the $\c$S phase as a baseline, i.e. we calculate $\D\O=\O_{\text{MCDW}}-\O_{\c\text{S}}$ for the MCDW phase and $\D\O=\O_{\c\text{SB}}-\O_{\c\text{S}}$ for the $\c$SB phase. $\D\O$ of the MCDW and $\c$SB phases is at the percent level of $\O_{\c\text{S}}$. For the largest magnetic field $B/(\pi T)^2=15$, $\D\O$ is less than $1\%$ of $\O_{\c\text{S}}$, making the comparison of free energies more difficult. In general, we find that MCDW solutions exist in two windows of $k$ at large $\m$ and $B$. The number of windows coincides with the number of unstable modes [@Kharzeev:2011rw; @Guo:2016dnm] in the chirally symmetric background. The lowest free energy is usually found near the boundary of either window. We show a typical $\D\O$-$k$ plot in Figure \[two\_windows\]. ![\[two\_windows\]$\O/\(V{{\cal N}}B^2\)$ versus $k/B^{1/2}$ at $B/(\pi T)^2=15$ and $\m/(\pi T)=1.36$. Here $\O/V$ is the free energy density with $V=\int d^3x$. The MCDW phase exists in two branches. The lowest free energy is found at the right boundary of the window of smaller $k$.](windows){width="10cm"} Although there is only one thermodynamically preferred state, we will keep the MCDW states obtained by minimizing the free energy in both windows for the purpose of illustration. Below we present three representative MCDW solutions. They include (i) the case with $B/(\pi T)^2=6.5$, where the $\c$SB phase does not exist, and there is competition between the $\c$S phase and the MCDW phase; (ii) the case with $B/(\pi T)^2=9$, where the large $k$ branch of the MCDW phase is thermodynamically preferred in a wide region of $\m$; (iii) the case with $B/(\pi T)^2=15$, where the small $k$ branch of the MCDW phase is thermodynamically preferred in a wide region of $\m$.
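The branch-selection logic just described — minimize $\D\O$ over $k$ within each window, then compare the two minima — can be sketched with made-up $\D\O(k)$ samples (the numbers below are illustrative only, not the paper's data):

```python
import numpy as np

# Hypothetical free-energy samples Delta Omega(k) in the two windows of k
# where MCDW solutions exist (values invented for illustration).
k_small = np.array([0.10, 0.15, 0.20, 0.25])
dO_small = np.array([-0.002, -0.004, -0.005, -0.003])
k_large = np.array([0.80, 0.90, 1.00, 1.10])
dO_large = np.array([-0.006, -0.008, -0.007, -0.004])

def preferred(k, dO):
    """Return (k*, Delta Omega*) minimizing the free energy in one window."""
    i = np.argmin(dO)
    return k[i], dO[i]

branches = [preferred(k_small, dO_small), preferred(k_large, dO_large)]
# The thermodynamically preferred state is the branch with the lowest Delta Omega.
k_star, dO_star = min(branches, key=lambda b: b[1])
```

In practice each $\D\O(k)$ sample requires a full bulk solution at fixed $\m$, which is why the percent-level free-energy differences quoted above make the comparison delicate.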
We first show the MCDW phase at $B/(\pi T)^2=6.5$ in Figure \[fig\_b65\]. For a given $\m$, there are two MCDW solutions, from the large $k$ branch and the small $k$ branch. The large and small $k$ branches of the MCDW solution give large and small density $n$, respectively. The corresponding free energy density $\D\O/V$ is shown in Figure \[fig\_b65O\]. At this value of $B$, the $\c$SB phase does not exist. There is competition between the $\c$S phase and the MCDW phase. The large $k$ branch is always thermodynamically more stable than the small $k$ branch, and it dominates over the $\c$S phase when $\m/B^{1/2}\gtrsim 0.35$. ![\[fig\_b65\]$n/B^{3/2}$ versus $\m/B^{1/2}$ (left) and $k/B^{1/2}$ versus $\m/B^{1/2}$ (right) at $B/(\pi T)^2=6.5$. The MCDW phase clearly splits into two branches. The branches with large $k$ and small $k$ are marked by blue disks and red squares, respectively.](b65n "fig:"){width="7.5cm"} ![\[fig\_b65\]$n/B^{3/2}$ versus $\m/B^{1/2}$ (left) and $k/B^{1/2}$ versus $\m/B^{1/2}$ (right) at $B/(\pi T)^2=6.5$. The MCDW phase clearly splits into two branches. The branches with large $k$ and small $k$ are marked by blue disks and red squares, respectively.](b65k "fig:"){width="7.5cm"} ![\[fig\_b65O\]$\D\O/\(V{{\cal N}}B^{2}\)$ versus $\m/B^{1/2}$ at $B/(\pi T)^2=6.5$ for the two branches of the MCDW phase, marked by blue disks and red squares. The large $k$ MCDW phase has lower free energy than the small $k$ MCDW phase at fixed $\m$. Both are found to have lower free energy than the chirally symmetric phase for large enough $\m$. In particular, the large $k$ MCDW phase becomes thermodynamically preferred above $\m/B^{1/2}\simeq 0.35$. The chiral symmetry breaking phase does not exist at this value of $B$.](b65O){width="9cm"} Next we present the case at $B/(\pi T)^2=9$. In Figure \[fig\_b9\] we show the density and spiral momentum of the two branches of solutions. Again the large and small $k$ branches of the MCDW solution give large and small density $n$, respectively.
The comparison of free energies is shown in Figure \[fig\_b9O\]. We find that the MCDW phase with large $k$ is always preferred over the $\c$S phase. At low $\m$, the $\c$SB phase can occur. Whether the $\c$SB phase can be preferred over the MCDW phase cannot be decisively answered at the current precision of the numerical data. Nevertheless, the $\c$SB phase would be confined to a narrow window of $\m$ if it exists as a thermodynamically preferred state. ![\[fig\_b9\]$n/B^{3/2}$ versus $\m/B^{1/2}$ (left) and $k/B^{1/2}$ versus $\m/B^{1/2}$ (right) at $B/(\pi T)^2=9$. The branches with large $k$ and small $k$ are marked by disks and squares, respectively.](b9n "fig:"){width="7.5cm"} ![\[fig\_b9\]$n/B^{3/2}$ versus $\m/B^{1/2}$ (left) and $k/B^{1/2}$ versus $\m/B^{1/2}$ (right) at $B/(\pi T)^2=9$. The branches with large $k$ and small $k$ are marked by disks and squares, respectively.](b9k "fig:"){width="7.5cm"} ![\[fig\_b9O\]$\D\O/\(V{{\cal N}}B^{2}\)$ versus $\m/B^{1/2}$ at $B/(\pi T)^2=9$ for the two branches of the MCDW phase, marked by blue disks and red squares, and the $\c$SB phase, marked by green triangles. The large $k$ MCDW phase has lower free energy than the chirally symmetric phase and the small $k$ MCDW phase in their overlap region. The chiral symmetry breaking phase exists below a critical value of $\m/B^{1/2}\simeq 0.15$. The current precision of the numerical data does not allow for a decisive conclusion on the preferred state out of the MCDW and $\c$SB phases.](b9O){width="9cm"} Finally, we present the case of $B/(\pi T)^2=15$. In Figure \[fig\_b15\], we show the density and spiral momentum of the two branches of MCDW solutions. While the large/small density and large/small momentum correspondence still holds in general, there are also exotic cases: for the large $k$ branch, the MCDW phase extends below $\m=0$, i.e. states with negative $\m$ but positive $n$ and $k$ exist. For the small $k$ branch, the MCDW phase extends below $n=0(k=0)$, i.e. states with positive $\m$ but negative $n$ and $k$ exist.
By continuity, we can infer that MCDW states with either $\m=0$ or $k=0$ exist. We also show in Figure \[fig\_b15O\] a comparison of the free energies of the different phases. The case of $B/(\pi T)^2=15$ is distinct from the cases of $B/(\pi T)^2=6.5$ and $B/(\pi T)^2=9$: the $\c$S phase is never thermodynamically preferred. In the region of large $\m$, the small $k$ branch of the MCDW phase is preferred. In the region of small $\m$, the large $k$ branch is preferred. The $\c$SB phase exists in a narrow window of $\m$. It could be the preferred state in an even narrower window, although the current precision of the numerical data does not allow for a decisive answer. ![\[fig\_b15\]$n/B^{3/2}$ versus $\m/B^{1/2}$ (left) and $k/B^{1/2}$ versus $\m/B^{1/2}$ (right) at $B/(\pi T)^2=15$. The MCDW phase splits into two branches, marked by blue disks and red squares. Notably the large $k$ branch of the MCDW phase extends all the way beyond $\m=0$, indicating that the axial anomaly is not necessarily required for its existence. Also, the small $k$ branch extends all the way beyond $n=0(k=0)$. It is interesting to note that the behaviors of $n$ and $k$ follow similar patterns.](b15n "fig:"){width="7.5cm"} ![\[fig\_b15\]$n/B^{3/2}$ versus $\m/B^{1/2}$ (left) and $k/B^{1/2}$ versus $\m/B^{1/2}$ (right) at $B/(\pi T)^2=15$. The MCDW phase splits into two branches, marked by blue disks and red squares. Notably the large $k$ branch of the MCDW phase extends all the way beyond $\m=0$, indicating that the axial anomaly is not necessarily required for its existence. Also, the small $k$ branch extends all the way beyond $n=0(k=0)$. It is interesting to note that the behaviors of $n$ and $k$ follow similar patterns.](b15k "fig:"){width="7.5cm"} ![\[fig\_b15O\]$\D\O/\(V{{\cal N}}B^{2}\)$ versus $\m/B^{1/2}$ at $B/(\pi T)^2=15$ for the two branches of the MCDW phase, marked by blue disks and red squares, and the $\c$SB phase, marked by green triangles. The small $k$ MCDW phase always has lower free energy than the $\c$S phase.
The large $k$ MCDW phase might be thermodynamically more favorable in the region of small $\m$. The $\c$SB phase exists in a narrow window of $\m$. It might be the state with the lowest free energy in an even narrower window. The current precision of the numerical data does not allow for a decisive conclusion on the preferred state out of the MCDW and $\c$SB phases.](b15O){width="9cm"} Anomalous Charge and MCDW Phase {#sec_anom} =============================== It is interesting to discuss several aspects of the MCDW phase within the holographic model. We first discuss the role of the anomalous charge. In effective models [@Son:2007ny], the anomalous charge is generated by the spatially inhomogeneous phase. In the presence of a chemical potential, the anomalous charge can lower the free energy of the system: $\O\to\O-\m N_{\text{anom}}$. Within our holographic model, we can derive the charge density from thermodynamics $$\begin{aligned} n=-\frac{\d \O}{V\d\m}=\frac{\int d\r\d{\cal L}}{\d\m}=\frac{\int d\r \d A_t'\frac{\d{\cal L}}{\d A_t'}}{\d\m}=\frac{(\d A_t(\infty)-\d A_t(1))}{\d\m}\frac{\d{\cal L}}{\d A_t'}.\end{aligned}$$ In the last equality, we use the fact that $\frac{\d{\cal L}}{\d A_t'}$ is $\r$ independent to perform the integration over $\r$. Note that $A_t(\infty)-A_t(1)=\m$. We thus obtain $$\begin{aligned} n=\frac{\d{\cal L}}{\d A_t'}=\frac{\d{\cal L_\text{DBI}}}{\d A_t'}+\frac{\d{\cal L_\text{WZ}}}{\d A_t'}.\end{aligned}$$ This is the conserved charge density already used in the previous section. The Lagrangian contains contributions from both the DBI and WZ terms. We identify the DBI and WZ contributions as the normal and anomalous charges, explicitly: $$\begin{aligned} &n_{\text{norm}}=\(\cdots\)A_t',{\nonumber \\}&n_{\text{anom}}=Bk(-2\c^2+\c^4).\end{aligned}$$ Here $\(\cdots\)$ is a complicated but positive function of $A_t'$ and $\c$. In the absence of the anomalous charge, i.e. in the homogeneous phase, this guarantees that the charge density has the same sign as the chemical potential.
The sign of the anomalous charge is instructive: note that $0<\c<1$, which gives $n_{{\text{anom}}}>0(n_{{\text{anom}}}<0)$ for $k>0(k<0)$. Indeed, the linear stability analysis [@Kharzeev:2011rw; @Guo:2016dnm] as well as the full nonlinear solutions presented in this work support positive $k$ (momentum parallel to the magnetic field). This is consistent with the effective model picture that formation of the spiral generates an anomalous charge, lowering the free energy of the system. Had we proceeded with the other gauge choice $$\begin{aligned} \label{2c4} C_4=\(\frac{r_0^2}{2}\r^2H\)^2dt{{\wedge}}dx_1{{\wedge}}dx_2{{\wedge}}dx_3-\cos^4\th d\ph{{\wedge}}d\O_3,\end{aligned}$$ we would have obtained $$\begin{aligned} n_{{{\text{anom}}}}=Bk\(1-\c^2\)^2,\end{aligned}$$ and therefore $n_{{\text{anom}}}<0(n_{{\text{anom}}}>0)$ for $k>0(k<0)$. This would imply that the favorable MCDW phase should be found at $k<0$, which is not consistent with the linear stability analysis and the nonlinear solutions. It also serves as a confirmation of the gauge choice made in [@Kharzeev:2011rw] and used in this work. Secondly, the anomalous charge defined above inherits a feature from the holographic model. In effective models, the normal and anomalous charges are both constant and separable, see e.g. [@Tatsumi:2014wka]. In the holographic model, the anomalous charge, as well as the normal charge, depends on the holographic coordinate $\r$. Only the sum of the two is a constant. It is known that the holographic coordinate plays the role of the renormalization group (RG) scale. It is interesting to analyze the variation of $n_{{\text{anom}}}$ along the RG scale: since $\c=0$ at both the horizon and the boundary, we conclude that $n_{{\text{anom}}}$ vanishes in the IR and UV limits. At intermediate scales, $n_{{\text{anom}}}>0$. To construct an effective model based on the holographic theory, we would need to integrate out the holographic coordinate from the UV down to a certain cutoff scale in the middle.
The resulting effective anomalous charge is not expected to be a simple product $Bk$, in contrast to effective models. Finally, we discuss the two exotic MCDW states at $B/(\pi T)^2=15$ and their relation to the axial anomaly. One state has $\m=0$ but $n\ne0(k\ne0)$. According to the definition , $A_t'$ has at least one zero. We confirm this by plotting $A_t'(\r)$ in Figure \[fig\_atp\]. ![\[fig\_atp\]$A_t'(\r)$ at $B/(\pi T)^2=15$. The positive and negative contributions in $\int d\r A_t'(\r)$ cancel out, giving a vanishing $\m$. There is one zero of $A_t'(\r)$, at which $n_{\text{norm}}=0$ and $n_{{{\text{anom}}}}=Bk\(2\c^2-\c^4\)$. This explains why $n$ and $k$ have the same sign.](Atp){width="9cm"} Naively, the axial anomaly is not relevant at $\m=0$. This is not true: although the integral of $A_t'(\r)$ vanishes, the integral of the WZ term is non-vanishing, and it contributes to the thermodynamics. Mathematically, the contributions from the DBI and WZ terms take the following form $$\begin{aligned} &\O_{\text{DBI}}^n/V\ne-\int d\r A_t'n_{\text{norm}}, &\O_{\text{WZ}}^n/V=-\int d\r A_t'n_{{{\text{anom}}}}.\end{aligned}$$ We use the superscript $n$ to indicate that they are contributions from the density. The WZ term is a simple coupling between the chemical potential and $n_{{\text{anom}}}$, while the DBI term cannot be written as a simple coupling between the chemical potential and $n_{\text{norm}}$ due to the nonlinear dependence of the DBI action on $A_t'$. If it could, we would be able to combine the two terms by using $n_{\text{norm}}+n_{{\text{anom}}}=\text{constant}$, giving a vanishing contribution because $\m=\int d\r A_t'=0$. However, due to the different natures of the anomalous and normal charges, the anomaly can still play a role even at $\m=0$. The other two states have $n=0$ and $k=0$ respectively. Although they lie close in $\m$ numerically, we can argue that they are different states. For the state with $n=0$, we need $n_{\text{norm}}$ and $n_{{\text{anom}}}$ to cancel each other.
Since $n_{\text{norm}}$ is in general nonvanishing for arbitrary $\r$, $n_{{\text{anom}}}$ must also be nonvanishing. Thus we cannot have a state with $n=0$ and $k=0$ simultaneously. The state with $n=0$ and $k\ne0$ is still related to the axial anomaly, as we need the anomalous charge to cancel the normal charge. The state with $k=0$ and $n\ne0$ is homogeneous; thus it should reduce to the $\c$SB case. In Figure \[fig\_pt\] we show a comparison of the density and chiral condensate between the MCDW phase and the $\c$SB phase. It confirms a continuous merging of the two phases. Combining this with Fig. \[fig\_b15O\], we suggest that the $\c$SB phase may be replaced by the MCDW phase. ![\[fig\_pt\]$n/B^{3/2}$ versus $\m/B^{1/2}$ at $B/(\pi T)^2=15$ for the small $k$ branch of the MCDW phase (red squares) and the $\c$SB phase (green triangles). At $\m/B^{1/2}\simeq 0.25$, the densities corresponding to the two phases merge, suggesting a second order phase transition. The critical value of $\m$ agrees with the $k=0$ state of the MCDW phase in Fig \[fig\_b15\] and also with the free energy comparison in Fig \[fig\_b15O\].](pt){width="9cm"} Summary and Outlook {#sec_sum} =================== We explore the end point of the spiral instability studied in [@Kharzeev:2011rw]. We find that the end point solution contains both a chiral condensate and a pseudoscalar condensate, analogous to the magnetized chiral density wave phase in the literature [@Tatsumi:2014wka]. The MCDW phase contains two branches of solutions, in accordance with the number of unstable modes found in [@Kharzeev:2011rw; @Guo:2016dnm]. Within each branch, the momentum $k$ can take continuous values. Minimizing the free energy with respect to $k$ gives the thermodynamically preferred state. We find that for moderate $B$, the large $k$ branch of the MCDW phase is the preferred state out of the two branches. In this case, there is a critical $\m$, beyond which the MCDW phase dominates over the $\c$S and $\c$SB phases.
For large $B$, the small $k$ branch becomes preferred out of the two branches for a wide range of $\m$. At sufficiently large $\m$, the MCDW phase becomes dominant over the $\c$S and $\c$SB phases. We also give a holographic definition of the anomalous charge. The anomalous charge in the holographic model varies along the RG flow. In particular, it vanishes in the IR and UV limits in our model, but is finite at intermediate scales. The sum of the anomalous and normal charges is constant along the RG flow. We also find an exotic state of the MCDW phase at large $B$ and vanishing $\m$. Surprisingly, the axial anomaly still plays a role at vanishing $\m$, leading to the formation of the spiral phase. The reason is that the normal charge and the anomalous charge respond to $\m$ differently, and the free energy can be lowered by forming a nonvanishing sum of the two. This work can be extended in a few directions. First of all, we focus on finite density states in this work. To have a complete study of the phase diagram, we still need the zero density states. The homogeneous zero density states have been studied in [@Evans:2010iy]. It would be interesting to see whether the MCDW phase exists at zero density. A closely related question is whether the magnetized kink solution can be realized in holographic models and how it may change the phase diagram. Secondly, at strong magnetic field and finite $\m$ or finite axial chemical potential $\m_5$, the ground state is conjectured to be the chiral magnetic spiral phase. Unlike the longitudinal spiral (along the magnetic field), it features a transverse spiral. While the case with $\m_5\ne0$ is confirmed in a holographic model study [@Kim:2010pu], the case with $\m\ne0$ is not found in the same study. It is desirable to have an independent check within our model. Last but not least, it would also be interesting to explore the transport properties of the MCDW phase.
Since MCDW phase breaks both chiral symmetry and translational symmetry, it would be interesting to study the corresponding Nambu-Goldstone modes, and moreover the hydrodynamics in MCDW phase background. We leave these for future studies. Acknowledgments {#acknowledgments .unnumbered} =============== S.L. is grateful to Gaoqing Cao, Yoshimasa Hidaka and Keun-Young Kim for useful discussions. S.L. is supported by One Thousand Talent Program for Young Scholars and NSFC under Grant Nos 11675274 and 11735007. Y.B. is supported by the Fundamental Research Funds for the Central Universities under grant No.122050205032 and the NSFC under the grant No.11705037. [^1]: [email protected] [^2]: [email protected]
--- abstract: 'Initialization of parameters in deep neural networks has been shown to have a big impact on the performance of the networks [@DBLP:journals/corr/MishkinM15]. The initialization scheme devised by He et al. allowed convolution activations to carry a constrained mean, which allowed deep networks to be trained effectively [@DBLP:journals/corr/HeZR015]. Orthogonal initializations, and more generally orthogonal matrices in standard recurrent networks, have been shown to combat the vanishing and exploding gradient problem [@DBLP:journals/corr/abs-1211-5063]. The majority of current initialization schemes do not take fully into account the intrinsic structure of the convolution operator. Using the duality of the Fourier transform and the convolution operator, Convolution Aware Initialization builds orthogonal filters in the Fourier space and, using the inverse Fourier transform, represents them in the standard space. With Convolution Aware Initialization we noticed not only higher accuracy and lower loss, but also faster convergence. We achieve a new state of the art on the CIFAR10 dataset, and come close to the state of the art on various other tasks.' bibliography: - 'example\_paper.bib' --- Introduction ============ Deep neural networks have been extremely successful and have demonstrated impressive results in various structured data problems in fields such as computer vision and speech recognition [@Zagoruyko2016WRN; @DBLP:journals/corr/SaonSRK16]. One of the core building blocks of deep neural networks are layers that perform convolution in order to capture local structured information, whether it be two-dimensional convolution for image classification or one-dimensional convolution for audio and NLP tasks [@DBLP:journals/corr/SpringenbergDBR14; @DBLP:journals/corr/Kim14f]. A lot of work has been done on finding initialization schemes that allow for fast convergence and good performance.
The majority of initializations use the input and output dimensions of the convolutional layer to scale a specific distribution. He et al. describe an initialization scheme where weights are sampled from a zero-mean normal distribution with a scaled variance in order to avoid an exploding effect in the activations of deeper layers [@DBLP:journals/corr/HeZR015]. Glorot initialization is a similar-looking initialization scheme defined by a different variance scaling factor [@Glorot10understandingthe]. Although these initialization schemes attempt to exploit some properties of convolution, the duality of the convolution operator and the Fourier transform has not been explored or exploited, to our knowledge. The concept of orthogonality has been explored for initialization by various papers. Orthogonality has been shown to have numerous useful properties in both standard and recurrent networks [@DBLP:journals/corr/SaxeMG13; @DBLP:journals/corr/MhammediHRB16]. Orthogonal initialization is beneficial for several reasons. Orthogonal initialization produces stable matrices, in the sense that under repeated multiplication the matrix does not vanish or explode. This property has been exploited in recurrent networks to combat the vanishing and exploding gradient problem [@DBLP:journals/corr/SaxeMG13]. Orthogonality can also be reasoned to produce the most diverse set of features possible under a defined inner product space: one feature detector captures information that would be completely missed by another feature detector. Our algorithm uses the convolution theorem to represent the convolution operator in the product-sum space (frequency domain), where we then build our orthogonal representation; utilizing the inverse Fourier transform, we represent that orthogonality in the standard space.
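As a point of reference, the variance-scaled schemes above are easy to write down. The sketch below is ours, not code from either paper; the function names and the sample 4-D weight shape (output channels, input channels, kernel rows, kernel columns) are illustrative assumptions.

```python
import numpy as np

def he_normal(shape, rng=np.random.default_rng(0)):
    # He et al.: weights ~ N(0, 2/fan_in), fan_in = in_ch * k_rows * k_cols
    fan_in = shape[1] * shape[2] * shape[3]
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=shape)

def glorot_normal(shape, rng=np.random.default_rng(0)):
    # Glorot: weights ~ N(0, 2/(fan_in + fan_out))
    fan_in = shape[1] * shape[2] * shape[3]
    fan_out = shape[0] * shape[2] * shape[3]
    return rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=shape)

W = he_normal((64, 32, 3, 3))
print(round(float(W.var()), 4))  # close to 2/fan_in = 2/288
```

Both schemes differ only in the scaling factor applied to the sampled distribution, which is exactly the observation that motivates looking for structure beyond scaling.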
Convolution Aware Initialization ================================ The main idea behind Convolution Aware Initialization (CAI) is that, in order to maximize the expressive power of a convolutional layer, we form orthogonal filters not in the standard convolution space, but in the Fourier space. The reasoning rests on the convolution theorem, which states that convolution in the time domain is element-wise multiplication in the frequency domain. The standard way to do orthogonal initialization in a convolution block is to flatten the 4-dimensional tensor into a matrix, perform an orthogonal decomposition, and reshape the resulting matrix back into the correctly sized tensor [@DBLP:journals/corr/SaxeMG13]. CAI defines the initialization scheme in a different manner. Below we define CAI for the 2-dimensional convolutional layer used commonly in image machine learning problems. We can write a set of filters on an input space $x$ with convolutional operator $\otimes$, and filters $f_{i,j}\in R^{k \times m}$ as: $$\{f_{0,j} \otimes x_0, f_{1,j} \otimes x_1, f_{2,j} \otimes x_2, ..., f_{n,j} \otimes x_n \}$$ Across the stack of filters we reduce via a sum to obtain our single output $j$. $$s_j = \sum_{i=0}^{n} f_{i,j} \otimes x_i$$ We can apply the Fourier transform to exploit the convolution theorem. $$\mathcal{F}(s_j) = \sum_{i=0}^{n} \mathcal{F}(f_{i,j}) \odot \mathcal{F}(x_i)$$ Since the filters and previous activation maps are two-dimensional, the element-wise multiplications can be thought of as Hadamard products.
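The convolution theorem invoked here can be checked numerically. The sketch below (our own illustration; the $8\times 8$ sizes are arbitrary) verifies that a circular 2-D convolution computed directly matches the inverse FFT of the element-wise product of FFTs.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 8))   # an "activation map"
f = rng.normal(size=(8, 8))   # a filter, padded to the same size

# Convolution theorem route: multiply spectra, transform back
conv = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(f)))

# Direct circular convolution, term by term
check = np.zeros_like(x)
for a in range(8):
    for b in range(8):
        for u in range(8):
            for v in range(8):
                check[a, b] += x[u, v] * f[(a - u) % 8, (b - v) % 8]

print(np.allclose(conv, check))  # True
```

The agreement is what licenses building the filters' structure in the frequency domain and transporting it back with the inverse transform.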
The previous activation maps recursively depend on previous filters and input; therefore, replacing the previous states by their expected state allows us to rewrite this as: $$\mathcal{F}(s_j) \oslash \mathrm{E}[\mathcal{F}(x)] = \sum_{i=0}^{n} c_i*\mathcal{F}(f_{i,j})$$ We introduce a constant scaling $c_i$ which allows us to reason about the right side of the equation as a linear combination ($c_i$ can be thought of as a constant scaling factor that is pulled from $f_{i,j}$). Therefore the goal is to select the correct set of $\mathcal{F}(f_{i,j})$ to form a complete basis over the left side of the expression. Instead of forming an arbitrary basis we focus on forming an orthogonal basis using $\mathcal{F}(f_{i,j})$. This can be done by building a matrix in $R^{\mathcal{F}_{km} \times n}$, diagonalizing it, taking the columns representing the eigenvectors, and reshaping them into $\mathcal{F}_{km}$, where $\mathcal{F}_{km}$ represents the size of the Fourier transformed matrix. The filters can then be transformed back to the standard domain using the inverse Fourier transform $\mathcal{F}^{-1}$. For completeness, the 2-dimensional Fourier transform used throughout the paper refers to a Fourier transform of the form: $$A_{kl} = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} a_{mn} e^{-2\pi i \left[\frac{mk}{M} + \frac{nl}{N} \right] }$$ With the inverse Fourier transform of the form: $$a_{mn} = \frac{1}{MN} \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} A_{kl} e^{2\pi i \left[\frac{mk}{M} + \frac{nl}{N} \right] }$$ With indexes defined as: $$\begin{aligned} k = \{0, ... M-1\} & \\ l = \{0, ... N-1\} &\end{aligned}$$ Properties of CAI ----------------- ### Bounds of CAI The magnitude of the post-decomposition matrix can be set using a scaling factor, but the question lies in what the magnitude of the filter weights will be after computing the inverse Fourier transform.
If we define the inverse DFT with inputs $(A_{kl})_{k=0, \ l=0}^{M-1, \ N-1}$, using the triangle inequality we get: $$\begin{aligned} \left|\frac{1}{MN} \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} A_{kl} e^{2\pi i \left[\frac{mk}{M} + \frac{nl}{N} \right]} \right| \\ \leq \frac{1}{MN} \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} \left| A_{kl} e^{2\pi i \left[\frac{mk}{M} + \frac{nl}{N} \right]} \right| \\ \leq \frac{1}{MN} \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} | A_{kl} |\end{aligned}$$ Therefore we can say that the range of the inverse Fourier transform is bounded by the average of the input magnitudes. This can be used in the future to scale the decomposition accordingly. We do not explicitly talk about the distribution sampled for the matrix that the decomposition is computed on. In reality it can be any distribution, but throughout this paper we sampled a normal distribution with zero mean and unit variance to construct a positive definite symmetric matrix. Any square real positive definite symmetric matrix $\mathbf{A}$ with a unique eigen-decomposition in the form of $\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}$ has an upper bound of $1$ on the entries of $|\mathbf{Q}|$. Given eigenvectors $x$ and $y$ with corresponding eigenvalues $\lambda_1$ and $\lambda_2$, symmetry gives $\langle \mathbf{A}x,y \rangle = \langle x,\mathbf{A}y \rangle$. It follows that $\lambda_1 \langle x,y \rangle = \lambda_2 \langle x,y \rangle$, and therefore $(\lambda_1 -\lambda_2) \langle x,y \rangle = 0$, proving that eigenspaces corresponding to distinct eigenvalues are orthogonal. Now we can find an orthonormal basis for each eigenspace and, since the eigenspaces are mutually orthogonal, the union of these vectors forms an orthonormal basis. We have shown that the vectors in $\mathbf{Q}$ form an orthonormal basis; therefore the columns (or rows) have unit norm. If all vectors have unit norm, the individual entries of the matrix have an upper bound of 1 with respect to their magnitude.
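Both facts — orthonormal eigenvectors and entries bounded by 1 in magnitude — can be checked numerically on a random symmetric positive definite matrix. The construction below (shifted Gram matrix, size 16) is our own illustrative choice, not the paper's exact sampling.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(16, 16))
A = B @ B.T + 16 * np.eye(16)   # symmetric positive definite by construction

w, Q = np.linalg.eigh(A)        # A = Q diag(w) Q^T

print(np.allclose(Q @ Q.T, np.eye(16)))  # orthonormal columns: True
print(float(np.abs(Q).max()))            # every entry has magnitude <= 1
print(bool((w > 0).all()))               # positive eigenvalues: True
```

The unit bound on the entries is what makes the later linear rescaling of the filters well behaved.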
The prior proof shows that the upper bound for CAI prior to linear scaling is 1. ### Expected Value of CAI He et al. derived the specific variance and mean needed in order to ensure that the activations of the convolutional network will not explode. In our initialization scheme we correct the filters through linear scaling in a way that preserves the variance, while maintaining orthogonality of the filters in the Fourier space. The natural question is what the distribution of CAI is and, more importantly, where its mean lies. If we assume that every element in a matrix belongs to a single distribution per matrix, the expectation is the expectation of the distribution, and can be approximated by averaging all elements in the respective matrix. Using the two-dimensional definition of the inverse Fourier transform, we can write the expectation as: $$\mathrm{E} \left[ a_{mn} \right] = \frac{1}{MN} \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} \mathrm{E} \left[ A_{kl} \right] \mathrm{E} \left[ e^{2\pi i \left[\frac{mk}{M} + \frac{nl}{N} \right] } \right]$$ The expectation of the exponential expression and of $A_{kl}$ can be written as two different expectations due to their independence. Given the above expression, if the goal is to force the mean of CAI to be 0, we simply have to force the post-decomposition matrix $\mathbf{Q}$ to have an expectation of zero. Therefore, as long as the $\mathbf{Q}$ matrix has a mean near zero, CAI initialization will build filters with a mean of zero. We can also say that approximately $\mathbf{Q}$ has an expected value of $0$. Because our matrix is symmetric, we can rewrite the decomposition as $\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{T}$, therefore $\mathrm{E} \left[ \mathbf{Q} \right]=\mathrm{E} \left[ \mathbf{Q}^T \right]$. Because we defined $\mathbf{A}$ to be a real positive definite matrix, all eigenvalues must be greater than 0, therefore $0 \leq \mathrm{E} \left[ \mathbf{\Lambda} \right]$.
If we naively assume covariate independence, we can rewrite our expectation as $\mathrm{E} \left[ \mathbf{A} \right] = \mathrm{E} \left[ \mathbf{Q} \right]\mathrm{E} \left[ \mathbf{\Lambda} \right]\mathrm{E} \left[ \mathbf{Q}^T \right] = 2 \mathrm{E} \left[ \mathbf{Q} \right]\mathrm{E} \left[ \mathbf{\Lambda} \right] = 0$. Therefore we can approximately say that $\mathrm{E} \left[ \mathbf{Q} \right] = 0$. Algorithm Description ---------------------

$f_{r},f_{c} \gets \mathcal{F}_{rc}$

$W^{\sim} \in R^{f\times s\times f_r \times f_c}$

$W \in R^{f\times s\times r \times c}$

$W^{\sim}_i \gets orthobasis(R^{s \times (f_r*f_c)})$

$W^{\sim}_i \gets W_f \ \texttt{reshape into} \ R^{s\times f_r\times f_c}$

$W_{i,j} \gets \mathcal{F}^{-1}\left[W^{\sim}_{i,j}\right] + \epsilon$

$W \gets scale(W)$

**return** $W$

The very last step of the CAI algorithm is to scale the filters' variance to match the variance scheme defined in He-normal initialization. This can be done by scaling the filters by $\sqrt{\frac{2.0}{fan_{in}} / Var\left[f\right]}$ [@DBLP:journals/corr/HeZR015]. $\epsilon$ is random noise to break the symmetry created by the inverse Fourier transform. The full description of the algorithm can be found in Algorithm \[CAI\]. Empirical Evaluation on Images ============================== Experimental Set Up ------------------- Convolution Aware Initialization was implemented using Theano and TensorFlow, and integrated using Keras [@2016arXiv160502688short; @tensorflow2015-whitepaper; @chollet2015keras]. The algorithms were GPU-accelerated on an Nvidia Titan X using CUDA 8.0 and cuDNN 5.1. We utilized NumPy's implementation of the real forward and inverse FFTs [@numpy]. CIFAR-10 -------- The CIFAR-10 dataset consists of $32 \times 32$ color images, each belonging to one of 10 classes [@cifar]. The standard data-split has 50,000 training images and 10,000 test images.
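The steps above can be sketched compactly in NumPy. This is our own minimal rendering, not the paper's implementation: QR factorization stands in for the eigendecomposition-based `orthobasis`, the full complex FFT replaces the real FFT, and we assume the input-channel count does not exceed the flattened filter size.

```python
import numpy as np

def cai_2d(out_ch, in_ch, kr, kc, eps=1e-4, rng=None):
    """Sketch of 2-D Convolution Aware Initialization."""
    assert in_ch <= kr * kc          # needed for orthonormal rows below
    rng = rng or np.random.default_rng(3)
    W = np.empty((out_ch, in_ch, kr, kc))
    for i in range(out_ch):
        # Orthonormal rows via QR of a random square matrix (orthobasis stand-in)
        Q, _ = np.linalg.qr(rng.normal(size=(kr * kc, kr * kc)))
        spectra = Q[:in_ch].reshape(in_ch, kr, kc)
        # Back to filter space, plus symmetry-breaking noise (the epsilon step)
        W[i] = np.real(np.fft.ifft2(spectra)) + eps * rng.normal(size=(in_ch, kr, kc))
    # Final step: rescale the variance to match He-normal
    fan_in = in_ch * kr * kc
    W *= np.sqrt((2.0 / fan_in) / W.var())
    return W

W = cai_2d(32, 8, 3, 3)
print(W.shape)  # (32, 8, 3, 3)
```

After the last line, the empirical variance of `W` equals the He-normal target $2/fan_{in}$ by construction.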
For data augmentation we used random horizontal flips and cropping as described in [@Zagoruyko2016WRN], without $4 \times 4$ padding. We also obtained better results without applying any type of whitening. The architecture chosen was a wide residual network with a depth of 28 and a widening factor of $k=10$; the complete architecture description is given in Table \[tab:widerecarch\] [@Zagoruyko2016WRN]. The block type used for all residual blocks was a basic residual block without any bottleneck [@DBLP:journals/corr/HeZRS15]. An L2 weight decay of $0.0005$ was used as well. We regularized our network with dropout as well as label-smoothing using the SoftTarget regularization scheme [@JMLR:v15:srivastava14a; @DBLP:journals/corr/Aghajanyan16]. We performed grid hyper-parameter optimization over the dropout rate, the learning rate scheduler, the number of epochs trained freely ($n_b$ in SoftTarget), as well as the $\beta, \gamma$ in the SoftTarget scheme [@DBLP:journals/corr/Aghajanyan16; @Bergstra:2012:RSH:2503308.2188395]. These parameters control how much of the past soft-labels generated from the model are merged with the hard labels. Each experiment was run for 200 epochs over the full training set. The optimization algorithm was Nesterov-accelerated SGD with a learning rate of 0.01 and a momentum of 0.9. The learning rate was decreased on the schedule described by Zagoruyko et al. for their CIFAR10 experiments. For CAI, non-convolution layers were initialized with He-normal initialization [@DBLP:journals/corr/HeZR015].
group output shape block ------------ ---------------- ---------------------------------------------------------------------------------------- $conv_1$ $32 \times 32$ $\begin{bmatrix}3 \times 3 & 16\end{bmatrix}$ $conv_2$ $32 \times 32$ $ \begin{bmatrix}3\times3 & 16\times k \\ 3\times3 & 16\times k\end{bmatrix} \times n$ $conv_3$ $16 \times 16$ $ \begin{bmatrix}3\times3 & 32\times k \\ 3\times3 & 32\times k\end{bmatrix} \times n$ $conv_4$ $8 \times 8$ $ \begin{bmatrix}3\times3 & 64\times k \\ 3\times3 & 64\times k\end{bmatrix} \times n$ $avg pool$ $1 \times 1$ $\begin{bmatrix}8 \times 8\end{bmatrix}$ \[tab:widerecarch\] $n$ is the number of repetitions per group, while $k$ represents the widening factor as described in the original paper for wide residual networks. The final layer was a densely connected layer with a softmax activation. We compare our results with the optimal results reported by Zagoruyko et al., as well as other types of initialization. To show that the increase in performance is not solely due to SoftTarget regularization, we train the model without SoftTarget as well. We report the best-performing initialization using the median of 5 runs, following the experimental set-up of Zagoruyko et al. The reported results can be found in Table \[tab:cifar\_10\_results\] and a visualization in Figure \[fig:cifar10\_graph\].
[lrr]{} Network & CIFAR10 Accuracy & CIFAR100 Accuracy\ Wide ResNet (CAI, SoftTarget) & $\mathbf{96.31}$ & $\mathbf{79.25}$\ Wide ResNet (He Normal, SoftTarget) & $\mathbf{96.18}$ & $78.31$\ Wide ResNet (original paper) & $96.11$ & $\mathbf{81.15}$\ Wide ResNet (our tests) & $96.12$ & $78.22$\ Fitnet4-LSUV & $93.94$ & $70.04$\ Fitnet4-OrthoInit & $93.78$ & $70.44$\ Fitnet4-Highway & $92.46$ & $68.09$\ ALL-CNN & $92.75$ & $66.29$\ DSN & $92.03$ & $65.43$\ NiN & $91.19$ & $64.32$\ MIN & $93.25$ & $71.14$\ \ Large ALL-CNN & $95.59$ & $68.55$\ Fractional MP & $\mathbf{96.53}$ & $73.61$\ \[tab:cifar\_10\_sota\] Using Convolution Aware Initialization on residual networks yields an accuracy of $96.31$. This, as far as we know, sets a new state of the art on CIFAR-10 using only basic data augmentation such as horizontal mirroring and random shifts. The comparison of the different methods shown uses data reported in [@Zagoruyko2016WRN; @DBLP:journals/corr/SpringenbergDBR14; @DBLP:journals/corr/MishkinM15; @DBLP:journals/corr/Graham14a]. [lrrrrrr]{} Initialization & Validation Loss & Accuracy & Dropout & $n_b$ & $\beta$ & $\gamma$\ CAI (SoftTarget) & $\mathbf{0.1911}$ & $\mathbf{96.31}$ & $0.3$ & $5$ & $0.05$ & $0.5$\ He Normal (SoftTarget) & $0.1930$ & $96.18$ & $0.3$ & $10$ & $0.05$ & $0.5$\ Orthogonal (SoftTarget) & $0.2008$ & $96.11$ & $0.3$ & $5$ & $0.05$ & $0.5$\ \ $CAI$ & $\mathbf{0.1920}$ & $\mathbf{96.24}$ & $0.3$ & $NA$ & $NA$ & $NA$\ He Normal & $0.1938$ & $96.10$ & $0.3$ & $NA$ & $NA$ & $NA$\ Orthogonal & $0.2028$ & $95.98$ & $0.3$ & $NA$ & $NA$ & $NA$\ \[tab:cifar\_10\_results\] ### Notes on CIFAR100 We decided not to go into depth on the CIFAR100 results because, while they empirically show the benefit of CAI, they did not set a state of the art as on CIFAR10. We therefore opted to show CAI performance on other datasets instead. Table \[tab:cifar\_10\_sota\] shows results from the CIFAR100 tests.
SVHN ---- SVHN is a dataset of house numbers in the wild aggregated by Google [@Netzer2011]. The only preprocessing applied was scaling the dataset by $\frac{1}{255}$. We train a wide residual network with a depth of 18 and a widening factor of 8; refer to Table \[tab:widerecarch\] for the architecture. The architecture chosen was the best one reported in [@Zagoruyko2016WRN]. We also reduced the learning rate automatically on plateau, given a patience of 5 epochs while monitoring validation accuracy [@bottou-tricks-2012], with a decay factor of $0.1$ and a minimum learning rate of $0.0005$. After hyper-parameter optimization the best dropout rate was noted to be $0.4$, and all initializations performed marginally better without the use of SoftTarget, with an $L2$ weight decay of $0.0005$. We ran each experiment 5 times for 130 epochs and reported the median-performing run against a set of popular initialization techniques. We report the results in Table \[tab:svhn\_res\] and visualize the results in Figure \[fig:svhngraph\]. Initialization Validation Loss Accuracy ---------------- ------------------- ------------------ CAI $\mathbf{0.1102}$ $\mathbf{97.61}$ He Normal $0.1108$ $97.31$ He Uniform $0.1223$ $97.09$ Glorot Normal $0.1120$ $97.20$ \[tab:svhn\_res\] CAI peaks in performance roughly 15 epochs before all other initializations. With CAI we noticed not only higher accuracy and lower loss, but also faster convergence in general. We could not replicate the results reported by the Wide Residual Network paper in the Keras framework, even using the learning rate schedule provided by the paper [@Zagoruyko2016WRN]. Empirical Evaluation with 1 Dimensional CAI =========================================== Background ---------- The derivation for Convolution Aware Initialization given above was in the case of the 2-dimensional convolution operator. It is also possible to derive CAI for one-dimensional convolution.
We simply have to remove the reshaping and utilize the one-dimensional forward and inverse Fourier transforms. A one-dimensional implementation of CAI can be used to initialize one-dimensional convolutional layers, which appear frequently in audio and NLP tasks. The algorithm description can be found in Algorithm \[CAI\_1d\]. Our next set of experiments will empirically validate CAI for networks containing one-dimensional convolutions.

$f_{r} \gets \mathcal{F}_{r}$

$W^{\sim} \in R^{f\times s\times f_r}$

$W \in R^{f\times s\times r}$

$W^{\sim}_i \gets orthobasis(R^{s \times f_r})$

$W^{\sim}_i \gets W_f \ \texttt{reshape into} \ R^{s\times f_r}$

$W_{i} \gets \mathcal{F}^{-1}\left[W^{\sim}_{i}\right] + \epsilon$

$W \gets scale(W)$

**return** $W$

IMDB Movie Review ----------------- The IMDB Movie Review dataset is a sentiment analysis dataset containing 25,000 movie reviews tagged by sentiment: positive or negative. We focus testing on standard architectures that utilize some mixture of embedding, one-dimensional convolution, and recurrent neural layers [@Gal2015Theoretically; @Hochreiter:1997:LSM:1246443.1246450]. For our recurrent network we chose to use an LSTM network. LSTM networks have been used extensively and successfully in various NLP tasks due to their ability to learn patterns across long time spans [@DBLP:journals/corr/abs-1211-5063; @hong2015sentiment]. For preprocessing we filter out a subset of words from the sentences and store each sentence as a matrix where individual rows represent a word via one-hot encoding. We pad each sentence to ensure that every sentence matrix has the same size. We used a maximum of 20,000 unique words and limited sentences to a maximum length of 80. We used the standard binary cross-entropy loss.
Three architectures were tested:

- Embedding $\rightarrow$ LSTM $\rightarrow$ Dense

- Embedding $\rightarrow$ Convolution1D $\rightarrow$ GlobalPooling1D $\rightarrow$ Dense

- Embedding $\rightarrow$ Convolution1D $\rightarrow$ Pooling1D $\rightarrow$ LSTM $\rightarrow$ Dense

The hyper-parameters in all three models, as well as the hyper-parameters of the Adam optimization method, were chosen using random hyper-parameter search [@Bergstra:2012:RSH:2503308.2188395; @DBLP:journals/corr/KingmaB14]. Every configuration was run 5 times, and the median run was reported. For CAI, all non-convolutional layers were initialized with orthogonal matrices. The results can be found in Table \[tab:imdb\]. Once again CAI outperformed all other forms of initialization. [lr]{} Initialization & Accuracy\ \ Orthogonal($scale=0.3$) & $\mathbf{90.02}$\ Uniform($low=-0.05, high= 0.05$) & $89.78$\ Normal($\mu=0, \sigma=0.3$) & $89.00$\ CAI & $NA$\ \ Orthogonal($scale=0.3$) & $89.63$\ Uniform($low=-0.05, high= 0.05$) & $89.20$\ Normal($\mu=0, \sigma=0.3$) & $89.18$\ CAI & $\mathbf{90.88}$\ \ Orthogonal($scale=0.3$) & $90.31$\ Uniform($low=-0.05, high= 0.05$) & $89.78$\ Normal($\mu=0, \sigma=0.3$) & $90.16$\ CAI & $\mathbf{91.40}$\ \[tab:imdb\] Speech Synthesis via WaveNet ---------------------------- The next experiment we ran used the WaveNet architecture to perform speech synthesis, trained on the VCTK dataset [@DBLP:journals/corr/OordDZSVGKSK16; @veaux2016cstr]. The reason we decided to run this specific experiment was to test CAI with stacked one-dimensional convolutional layers, as well as to see how CAI performs with atrous convolutions [@chen2016deeplab]. We trained a small version of the WaveNet architecture, with a sample rate of 4000 and 256 output bins. We utilized skip connections as originally proposed in the paper, with 256 filters for every convolutional layer, and a dilation depth of 9 for the dilated or atrous convolution layers.
We also did not use any bias terms in the networks. We used Nesterov-accelerated stochastic gradient descent with a learning rate of $0.1$ and a momentum of $0.9$. We trained the network for 50 epochs [@DBLP:journals/corr/OordDZSVGKSK16]. We ran this set of experiments 5 times and reported the median run. Results are shown in Table \[tab:wavenet\] and Figure \[fig:wavegraph\]. CAI outperformed other standard initialization schemes by a wide margin. ![WaveNet Validation Results[]{data-label="fig:wavegraph"}](wavenet.png) Initialization Categorical Cross Entropy ----------------------------- --------------------------- Orthogonal($scale=0.2$) $4.809$ Glorot $4.810$ Normal($\mu=0, \sigma=0.2$) $4.811$ CAI $\mathbf{4.771}$ \[tab:wavenet\] Discussion ========== In this paper we introduced a new type of initialization which takes into account the properties of convolution. We showed the reasoning behind building an orthogonal basis in the Fourier space rather than in the standard space, showing that convolution across a stack is similar to a linear combination of filters in the Fourier space. The paper also proved the bounds of the eigen-decomposition of a random symmetric matrix, as well as the bounds of CAI prior to scaling. We also proved the preconditions necessary to force the mean of CAI to zero. From an empirical testing perspective, CAI outperformed other standard types of initialization across the board, setting a new state of the art for the CIFAR10 dataset with basic data augmentation. On other tasks, CAI networks converged significantly faster than other standard forms of initialization. Further work can explore variance scaling schemes other than He Normal, and extend the idea of CAI to a mix of recurrent and image convolutional networks.
--- abstract: 'A sharp-threshold theorem is proved for box-crossing probabilities on the square lattice. The models in question are the random-cluster model near the self-dual point $\psd(q)=\sqrt q/(1+\sqrt q)$, the Ising model with external field, and the coloured random-cluster model. The principal technique is an extension of the influence theorem for monotonic probability measures applied to increasing events with no assumption of symmetry.' address: - 'Department of Mathematics, University of British Columbia, Vancouver, B. C., Canada V6T 1Z2' - 'Statistical Laboratory, Centre for Mathematical Sciences, Cambridge University, Wilberforce Road, Cambridge CB3 0WB, UK' author: - Benjamin Graham - Geoffrey Grimmett bibliography: - 'box.bib' date: 6 March 2009 title: | Sharp thresholds for the\ random-cluster and Ising models --- Introduction {#sec:intro} ============ The method of ‘sharp threshold’ has been fruitful in probabilistic combinatorics (see [@G-pgs; @KS05] for recent reviews). It provides a fairly robust tool for showing the existence of a sharp threshold for certain processes governed by independent random variables. Its most compelling demonstration so far in the field of physical systems has been the proof in [@BR2] that the critical probability of site percolation on the Voronoi tessellation generated by a Poisson process on $\RR^2$ equals $\frac12$. Each of the applications alluded to above involves a product measure. It was shown in [@GG] that the method may be extended to non-product probability measures satisfying the FKG lattice condition. The target of this note is to present two applications of such a sharp-threshold theorem to measures arising in statistical physics, namely those of the random-cluster model and the Ising model. In each case, the event in question is the existence of a crossing of a large box, by an open path in the case of the random-cluster model, and by a single-spin path in the case of the Ising model.
A related but more tentative and less complete result has been obtained in [@GG] in the first case, and the second case has been studied already in [@Hig0; @Higuchi] and [@vdB07]. Our methods for the Ising model can be applied to a more general model termed here the coloured random-cluster model (CRCM), see Section \[sec:crcm\]. This model is related to the so-called fractional Potts model of [@KahnW], and the fuzzy Potts model and the divide-and-colour model of [@DaCmodel; @Chay96; @Hag99; @Hag01]. The sharp-threshold theorem used here is an extension of that given for product measures in [@FKST; @Tal94], and it makes use of the results of [@GG]. It is stated, with an outline of the proof, in Section \[sec:ist\]. The distinction of the current sharp-threshold theorem is that it makes no assumption of symmetry on either the event or the measure in question. Instead, one needs to estimate the maximum influence of the various components, and it turns out that this may be done in a manner which is very idiomatic for the models in question. The sharp-threshold theorem presented here may find further applications in the study of dependent random variables. The models {#sec:models} ========== The random-cluster model ------------------------ The random-cluster model on a connected graph $G$ has two parameters: an edge-weight $p$ and a cluster-weight $q$. See Section \[sec:bc\] for a formal definition. When $q\ge 1$ and $G$ is infinite, there is a critical value $\pc(q)$ that separates the subcritical phase of the model (when $p<\pc(q)$ and there exist no infinite clusters) and the supercritical phase. It has long been conjectured that, when $G$ is the square lattice $\ZZ^2$, $$\pc(q) = \frac{\sqrt q}{1+\sqrt q}, \qq q\ge 1. \label{pcq}$$ This has been proved rigorously in three famous cases. When $q=1$, the random-cluster model is bond percolation, and the exact calculation $\pc(1)=\frac12$ was shown by Kesten [@Ke80].
When $q=2$, the model is intimately related to the Ising model, and the calculation of $\pc(2)$ is equivalent to that of Onsager and others concerning the Ising critical temperature (see [@ABF; @AF] for a modern treatment of the Ising model). Formula \[pcq\] has been proved for sufficiently large values of $q$ (currently $q \ge 21.61$) in the context of the proof of first-order phase transition. We recall that, when $q\in\{2,3,\dots\}$, the critical temperature $\Tc$ of the $q$-state Potts model on a graph $G$ satisfies $$\pc(q)= 1-e^{-1/\Tc}. \label{Pottscp}$$ A fairly full account of the random-cluster model, and its relation to the Potts model, may be found in [@G-RC]. Conjecture \[pcq\] is widely accepted. Physicists have proceeded beyond a ‘mere’ calculation of the critical point, and have explored the behaviour of the process at and near this value. For example, it is believed that there is a continuous (second-order) phase transition if $1\le q < 4$, and a discontinuous (first-order) transition when $q > 4$, see [@Bax]. Amongst recent progress, we highlight the stochastic Löwner evolution process SLE$_{16/3}$ associated with the cluster boundaries in the critical case when $q=2$ and $p=\sqrt 2/(1+\sqrt 2)$, see [@Smi07; @Smir]. The expression in \[pcq\] arises as follows through the use of planar duality. When the underlying graph $G$ is planar, it possesses a (Whitney) dual graph $\Gd$. The random-cluster model on $G$ with parameters $p$, $q$ may be related to a dual random-cluster model on $\Gd$ with parameters $\prd$, $q$, where $$\frac{\prd}{1-\prd} = \frac{q(1-p)}p. \label{dualv}$$ The mapping $p \mapsto \prd$ has a fixed point $p=\kq$, where $$\kq:=\frac{\sqrt q}{1+\sqrt q}$$ is termed the *self-dual point*. The value $p=\kq$ is especially interesting when $G$ and $\Gd$ are isomorphic, as in the case of the square lattice $\ZZ^2$. See [@G-RC Chap. 6]. We note for future use that $$p < \kq \q\text{if and only if} \q \prd >\kq. \label{mel60}$$ Henceforth, we take $G=\ZZ^2$.
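The duality relations above are easy to verify numerically. The sketch below (our own illustration; the values $q=3$ and $p=0.3$ are arbitrary) checks that $\kq$ is a fixed point of $p \mapsto \prd$ and that $p<\kq$ if and only if $\prd>\kq$.

```python
import math

def dual(p, q):
    # Solve p*/(1-p*) = q(1-p)/p for the dual parameter p*
    r = q * (1 - p) / p
    return r / (1 + r)

q = 3.0
kappa = math.sqrt(q) / (1 + math.sqrt(q))   # self-dual point

print(abs(dual(kappa, q) - kappa) < 1e-12)  # kappa is fixed: True
p = 0.3
print(p < kappa, dual(p, q) > kappa)        # p < kappa  <=>  p* > kappa
```

Algebraically, $(1-\kq)/\kq = 1/\sqrt q$, so $q(1-\kq)/\kq = \sqrt q$ and the dual parameter is again $\sqrt q/(1+\sqrt q)$.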
The inequality $$\pc(q) \ge \kq, \qq q \ge 1, \label{pcqge}$$ was proved in [@G95; @Wel93] using Zhang’s argument (see [@G99 p. 289]). Two further steps would be enough to imply the complementary inequality $\pc(q) \le \kq$: firstly, that the probability of crossing a box $[-m,m]^2$ approaches 1 as $m\to\oo$, when $p>\kq$; and secondly, that this implies the existence of an infinite cluster. The first of these two claims is proved in Theorem \[thm1\]. Kesten’s proof for percolation, [@Ke80], may be viewed as a proof of the first claim in the special case $q=1$. The second claim follows for percolation by RSW-type arguments, see [@Ru78; @Ru81; @SeW] and [@G99 Sect. 11.7]. Heavy use is made in these works of the fact that the percolation measure is a product measure, and this is where the difficulty lies for the random-cluster measure. We prove our main theorem (Theorem \[thm1\] below) by the method of influence and sharp threshold developed for product measures in [@FKST; @KKL]. This was adapted in [@GG] to monotonic measures applied to increasing events, subject to a certain hypothesis of symmetry. We show in Section \[sec:ist\] how this hypothesis may be removed, and we apply the subsequent inequality in Section \[sec:pf\] to the probability of a box-crossing, thereby extending to general $q$ the corresponding argument of [@BR1]. Ising model ----------- We shall consider the Ising model on the square lattice $\ZZ^2$ with edge-interaction parameter $\b$ and external field $h$. See Section \[sec:ising\] for the relevant definitions. Write $\bc$ for the critical value of $\b$ when $h=0$, so that $$1-e^{-2\bc} = \psd(2)$$ where $\psd(2)$ is given as in \[pcq\]. Two notions of connectivity are required: the usual connectivity relation $\lra$ on $\ZZ^2$ viewed as a graph, and the relation $\lra_*$, termed $*$-connectivity, and obtained by adding diagonals to each unit face of $\ZZ^2$. Let $\pibh$ denote the Ising measure on $\ZZ^2$ with parameters $\b$, $h$.
Higuchi proved in [@Hig0; @Higuchi] that, when $\b \in(0,\bc)$, there exists a critical value $\hc=\hc(\b)$ of the external field such that $\hc(\b)>0$ and: when $h >\hc$, there exists $\pibh$-almost-surely an infinite $+$ cluster of $\ZZ^2$, and the radius of the $*$-connected $-$ cluster at the origin has an exponential tail; when $0<h<\hc$, there exists $\pibh$-almost-surely an infinite $*$-connected $-$ cluster of $\ZZ^2$, and the radius of the $+$ cluster at the origin has an exponential tail. A further approach to Higuchi’s theorem has been given recently by van den Berg [@vdB07]. A key technique of the last paper is a sharp-threshold theorem of Talagrand [@Tal94] for product measures. The Ising measure $\piLbh$ on a box $\La$ is of course not a product measure, and so it was necessary to encode it in terms of a family of independent random variables. We show here that the influence theorem of [@GG] may be extended and applied directly to the Ising model to obtain the necessary sharp threshold result. (The paper [@vdB07] contains results for certain other models encodable in terms of product measures, and these appear to be beyond the scope of the current method.) Coloured random-cluster model --------------- The Ising model with external field is a special case of a class of systems that have been studied by a number of authors, and which we term *coloured random-cluster models* (CRCM). Sharp-threshold results may be obtained for such systems also. Readers are referred to Section \[sec:crcm\] for an account of the CRCM and the associated results. Box-crossings in the random-cluster model {#sec:bc} =========================== The random-cluster measure is given as follows on a finite graph $G=(V,E)$. The configuration space is $\Om=\{0,1\}^E$. For $\om\in\Om$, we write $\eta (\om )=\{ e\in E:\om (e)=1\}$ for the set of ‘open’ edges, and $k(\om)$ for the number of connected components in the open graph $(V,\eta(\om))$.
Let $p\in [0,1]$, $q\in (0,\oo)$, and let $\fpq$ be the probability measure on $\Om$ given by $$\fpq (\om )=\frac{1}{Z}\,\biggl\{\prod_{e\in E} p^{\om (e)} (1-p)^{1-\om (e)}\biggr\} q^{k(\om )} ,\qq\om\in\Om, \label{rcmeas}$$ where $Z=Z_{G,p,q}$ is the normalizing constant. We shall assume throughout this paper that $q\ge 1$, so that $\fpq$ satisfies the so-called FKG lattice condition $$\mu (\om_1\vee\om_2)\mu (\om_1\wedge\om_2)\geq\mu (\om_1)\mu (\om_2), \qq \om_1,\om_2\in\Om. \label{4.2}$$ Here, as usual, $$\begin{aligned} \om_1\vee\om_2(e)&=\max\{\om_1(e),\om_2(e)\},\\ \om_1\wedge\om_2(e)&=\min\{\om_1(e),\om_2(e)\}, \end{aligned}$$ for $e \in E$. As a consequence of , $\fpq$ satisfies the FKG inequality. See [@G-RC] for the basic properties of the random-cluster model. Consider the square lattice $\ZZ^2$ with edge-set $\EE$, and let $\Om=\{0,1\}^\EE$. Let $\La=\La_n=[-n,n]^2$ be a finite box of $\ZZ^2$, with edge-set $\EE_\La$. For $b\in\{ 0,1\}$ define $$\Om_\La^b=\{\om\in\Om:\om (e)=b\;\text{for}\; e\notin\EE_\La\}.$$ On $\Om_\La^b$ we define a random-cluster measure $\phi_{\La ,p,q}^b$ as follows. For $p\in[0,1]$ and $q\in[1,\oo)$, let $$\phi_{\La ,p,q}^b(\om )=\frac{1}{Z_{\La ,p,q}^b}\,\Biggl\{\prod_{e\in\EE_\La} p^{\om (e)} (1-p)^{1-\om (e)}\Biggr\} q^{k(\om ,\La )},\q \om\in\Om_\La^b, \label{13.6}$$ where $k(\om ,\La )$ is the number of clusters of $(\ZZ^2,\eta (\om ))$ that intersect $\La$. The boundary condition $b=0$ (resp. $b=1$) is usually termed ‘free’ (resp. ‘wired’). It is standard that the weak limits $$\fpq^b = \lim_{n\to\oo} \phi_{\La_n,p,q}^b$$ exist, and that they are translation-invariant, ergodic, and satisfy the FKG inequality. See [@G-RC Chap. 4]. For $A,B \subseteq \ZZ^2$, we write $A \lra B$ if there exists an open path joining some $a \in A$ to some $b \in B$. We write $x \lra \oo$ if the vertex $x$ is the endpoint of some infinite open path.
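The FKG lattice condition for $\fpq$ can be checked by brute force on a small graph. The sketch below (illustrative Python; the triangle graph is our choice of minimal example) verifies the condition for $q\ge 1$ on the triangle, and exhibits its failure for a value $q<1$:

```python
from itertools import product

EDGES = [(0, 1), (1, 2), (0, 2)]  # triangle graph, a minimal illustrative choice
p = 0.4

def k(omega):
    """Number of connected components of the open subgraph on vertices {0,1,2}."""
    parent = list(range(3))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for bit, (u, v) in zip(omega, EDGES):
        if bit:
            parent[find(u)] = find(v)
    return len({find(x) for x in range(3)})

def weight(omega, q):
    """Unnormalized random-cluster weight of the configuration omega."""
    w = q ** k(omega)
    for bit in omega:
        w *= p if bit else 1 - p
    return w

def fkg_lattice_holds(q):
    oms = list(product((0, 1), repeat=len(EDGES)))
    for a in oms:
        for b in oms:
            join = tuple(max(x, y) for x, y in zip(a, b))
            meet = tuple(min(x, y) for x, y in zip(a, b))
            if weight(join, q) * weight(meet, q) < weight(a, q) * weight(b, q) - 1e-12:
                return False
    return True

assert fkg_lattice_holds(1.0) and fkg_lattice_holds(2.5)
assert not fkg_lattice_holds(0.5)   # the condition genuinely needs q >= 1
```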
The percolation probabilities are given as $$\t^b(p,q) = \fpq^b(0 \lra \oo), \qq b=0,1.$$ Since each $\t^b$ is non-decreasing in $p$, one may define the critical point by $$\pc(q) = \sup\{p: \t^1(p,q)=0\}.$$ It is known that $\fpq^0 = \fpq^1$ if $p \ne \kq$, and we write $\fpq$ for the common value. In particular, $\t^0(p,q) = \t^1(p,q)$ for $p \ne \pc(q)$. It is conjectured that $\fpq^0 = \fpq^1$ when $p=\pc(q)$ and $q \le 4$. Let $B_k=[0,k]\times[0,k-1]$, and let $H_k$ be the event that $B_k$ possesses an open left–right crossing. That is, $H_k$ is the event that $B_k$ contains an open path having one endvertex on its left side and one on its right side. \[thm1\] Let $q \ge 1$. We have that $$\begin{aligned} {2} \fpq(H_k) &\le 2\rho_k^{\psd-p}, \qq&&0<p<\kq,\\ \fpq(H_k) &\ge 1 - 2\nu_k^{p-\psd},\qq&&\kq < p < 1,\end{aligned}$$ for $k \ge 1$, where $$\rho_k = [2q\eta_k/p]^{c/q}, \qq \nu_k = [2q\eta_k/p_\rd]^{c/q}, \label{mel26}$$ and $$\eta_k = \phi_{\kq,q}^0(0 \lra \pd \La_{k/2}) \to 0\qq \text{as } k \to\oo. \label{mel50}$$ Here, $c$ is an absolute positive constant, and $p_\rd$ satisfies . When $k$ is odd, we interpret $\pd\La_{k/2}$ in as $\pd\La_{\lfloor k/2\rfloor}$. In essence, the probability of a square-crossing has a sharp threshold around the self-dual ‘pivot’ $\kq$. Related results were proved in [@GG], but with three relative weaknesses, namely: only non-square rectangles could be handled, the ‘pivot’ of the threshold theorems was unidentified, and there was no result for [*infinite-volume*]{} measures. The above strengthening is obtained by using the threshold Theorem \[inf1\] which makes no assumption of symmetry on the event or measure in question. The corresponding threshold theorem for product measure leads to a simplification of the arguments of [@BR1] for percolation, see [@G-pgs Sect. 5.8]. 
Since $\fnpq^0 \lest \fpq \lest \fnpq^1$ and $H_k$ is an increasing event, Theorem \[thm1\] implies certain inequalities for finite-volume probabilities also. No estimate for the rate at which $\eta_k\to 0$ is implicit in the arguments of this paper. That the radius $R$ of the open cluster at the origin is $\phi_{\kq,q}^0$-a.s. bounded is a consequence of the (a.s.) uniqueness of the infinite open cluster whenever it exists. See [@G-RC Thm 6.17(a)] for a proof of the relevant fact that $$\t^0(\kq,q) = 0, \qq q \ge 1. \label{theta=0}$$ We shall prove a slightly more general result than Theorem \[thm1\]. Let $B_{k,m} = [0,k]\times[0,m]$ and let $\hkm$ be the event that there exists an open left–right crossing of $B_{k,m}$. \[thm3\] Let $q \ge 1$. We have that $$\begin{aligned} \fpqo(\hkm)[1-\fpqt(\hkm)] &\le \rho_k^{p_2-p_1}, \qq 0<p_1<p_2\le\kq,\label{mel25}\\ \fpqo(\hkm)[1-\fpqt(\hkm)] &\le \nu_{m+1}^{p_2-p_1}, \qq \kq \le p_1<p_2<1,\label{mel25b}\end{aligned}$$ for $k,m\ge 1$, where $\rho_k$ (resp. $\nu_k$) is given in with $p = p_1$ (resp. $p=p_2$), and $\phi_{\kq,q}$ is to be interpreted as $\phi_{\kq,q}^0$. Box-crossings in the Ising model {#sec:ising} ================================ Let $\La$ be a box of $\ZZ^2$. The spin-space of the Ising model on $\La$ is $\Si_\La = \{-1,+1\}^{\La}$, and the Hamiltonian is $$H_\La(\s) = -\beta\sum_{e=\langle x,y\rangle \in \EE_\La} \s_x\s_y - h \sum_{x\in \La} \s_x,$$ where $\b > 0$, $h \ge 0$. The relevant Ising measure is given by $$\piLbh(\s) \propto e^{-H_\La(\s)}, \qq \s \in \Si_\La,$$ and it is standard that the (weak) limit measure $\pibh = \lim_{\La\to\ZZ^2} \piLbh$ exists. We shall also need the $+$ boundary-condition measure $\pibz$ given as the weak limit of $\pi_{\La,\b,0}$ conditional on $\s_x=+1$ for $x \in \pd\La$. (Here, $\pd\La$ denotes as usual the boundary of $\La$, i.e., the set of $x \in \La$ possessing a neighbour not belonging to $\La$).
By the FKG inequality or otherwise, $\pibz(\s_0)\ge 0$, and the critical value of $\b$ when $h=0$ is given by $$\bc = \sup\{\b: \pibz(\s_0)=0\}.$$ As remarked in Section \[sec:models\], $1-e^{-2\bc} = \psd(2)$. It is well known that there exists a unique infinite-volume measure for the Ising model on $\ZZ^2$ if either $h \ne 0$ or $\b < \bc$, and thus $\pibh$ is this measure. By Holley’s Theorem (see [@G-RC Sect. 2.1], for example), $\pibh$ is stochastically increasing in $h$. Let $$\t^+(\b,h)=\pibh(0{\stackrel{+}{\leftrightarrow}}\oo),\q \t^-(\b,h) =\pibh(0{\stackrel{-}{\leftrightarrow}_*}\oo),$$ where the relation ${\stackrel{+}{\leftrightarrow}}$ (resp. ${\stackrel{-}{\leftrightarrow}_*}$) means that there exists a path of $\ZZ^2$ each of whose vertices has state $+1$ (resp. a $*$-connected path of vertices with state $-1$). The next theorem states the absence of coexistence of such infinite components, and its proof (given in Section \[sec:isingpf\]) is a simple application of the Zhang argument for percolation (see [@G99 Sect. 11.3]). \[zhangisi\] We have that $$\t^+(\b,h)\t^-(\b,h) = 0,\qq \b\ge 0,\ h\ge 0.$$ There exists $\hc=\hc(\b)\in[0,\oo)$ such that $$\t^+(\b,h) \begin{cases} =0 &\text{ if } 0\le h < \hc,\\ >0 &\text{ if } h > \hc. \end{cases}$$ Recall from [@Hig0; @Higuchi] that $\hc(\b) >0$ if and only if $\b<\bc$. It is proved in [@Higuchi] that $$\label{mel28} \t^\pm(\b,\hc(\b))=0,$$ but we shall not make use of this fact in the proofs of this paper. Indeed, one of the main purposes of this article is to show how certain sharp-thresholds for box-crossings may be obtained using a minimum of background information on the model in question. Let $\hkm$ be the event that there exists a left–right $+$ crossing of the box $B_{k,m}=[0,k] \times[0,m]$. Let $x^+=\max\{x,0\}$. \[isingcr\] Let $0 \le \b<\bc$ and $R>0$.
There exist $\rho_{i,+} =\rho_{i,+}(\b)$ and $\rho_{i,-} =\rho_{i,-}(\b,R)$ satisfying $$\label{ihp3} \rho_{i,+}\rho_{i,-} \to 0\qq\text{as } i\to\oo,$$ such that: for $0\le h_1\le\hc\le h_2 < R$, $$\pibho(\hkm)[1-\pibht(\hkm)] \le \rho_{k,+}^{\hc-h_1} \rho_{m,-}^{h_2-\hc},\qq k,m\ge 1. \label{ihp1}$$ The proof of this theorem shows also that $$\begin{aligned} {2} \pibho(\hkm)[1-\pibht(\hkm)] &\le \rho_{k,+}^{h_2-h_1}, \qq&h_1\le h_2 \le \hc,\\ \pibho(\hkm)[1-\pibht(\hkm)] &\le \rho_{m,-}^{h_2-h_1},\qq &\hc \le h_1\le h_2.\end{aligned}$$ As in Theorem \[thm1\], the proof neither uses nor implies any estimate on the rate at which $\rho_{i,\pm}\to 0$. The $\rho_{i,\pm}$ are related to the tails of the radii of the $+$ cluster and the $-$ $*$-cluster at the origin. More explicitly, $$\begin{aligned} \rho_{i,+} &= \bigl[2(1+e^{8\b})\pibhc(0{\stackrel{+}{\leftrightarrow}}\pd\La_{i/2})\bigr]^{B_+},\\ \rho_{i,-} &= \bigl[2(1+e^{8\b+2R})\pibhc(0{\stackrel{-}{\leftrightarrow}_*}\pd\La_{i/2})\bigr]^{B_-},\end{aligned}$$ where $$B_+ = 2c \xi_{\b,\hc},\quad B_- = 2c \xi_{\b,R},$$ and $\xi_{\b,h}$ is given in the forthcoming . Equation holds by Theorem \[zhangisi\] with $h=\hc(\b)$. It is in fact a consequence of that $\rho_{i,\pm} \to 0$ as $i\to\oo$. Influence, and sharp threshold {#sec:ist} ============================== Let $S$ be a finite set. Let $\mu$ be a measure on $\Om=\{0,1\}^S$ satisfying the FKG lattice condition , and assume that $\mu$ is *positive* in that $\mu(\om)>0$ for all $\om\in\Om$. It is standard that, for a positive measure $\mu$, is equivalent to the condition that $\mu$ be *monotone*, which is to say that the one-point conditional measure $\mu(\s_x=1\mid \s_y=\eta_y\text{ for } y\ne x)$ is non-decreasing in $\eta$. Furthermore, implies that $\mu$ is positively associated, in that increasing events are positively correlated. See, for example, [@G-RC Chap. 2].
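The equivalence of the lattice condition and monotonicity can be seen concretely in the smallest non-trivial case $|S|=2$, where both reduce to the single inequality $\mu(11)\mu(00)\ge\mu(10)\mu(01)$. A brute-force check over random positive measures (our own illustration, not part of the text):

```python
import random

def fkg_lattice(mu):
    """FKG lattice condition on {0,1}^2: mu(11)*mu(00) >= mu(10)*mu(01)."""
    return mu[(1, 1)] * mu[(0, 0)] >= mu[(1, 0)] * mu[(0, 1)]

def monotone(mu):
    """Both one-point conditional probabilities are non-decreasing in the other spin."""
    c1 = mu[(1, 1)] / (mu[(1, 1)] + mu[(0, 1)]) >= mu[(1, 0)] / (mu[(1, 0)] + mu[(0, 0)])
    c2 = mu[(1, 1)] / (mu[(1, 1)] + mu[(1, 0)]) >= mu[(0, 1)] / (mu[(0, 1)] + mu[(0, 0)])
    return c1 and c2

random.seed(0)
for _ in range(2000):
    mu = {om: random.uniform(0.01, 1.0)
          for om in [(0, 0), (0, 1), (1, 0), (1, 1)]}
    d = mu[(1, 1)] * mu[(0, 0)] - mu[(1, 0)] * mu[(0, 1)]
    if abs(d) > 1e-9:  # avoid float-boundary cases
        assert (d > 0) == fkg_lattice(mu) == monotone(mu)
```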
For $p\in(0,1)$, let $\mu_p$ be given by $$\mu_p(\om) = \frac 1{Z_p} \biggl\{\prod_{s\in S} p^{\om(s)}(1-p)^{1-\om(s)}\biggr\}\mu(\om), \qq\om\in\Om, \label{GG1}$$ where $Z_p$ is chosen in such a way that $\mu_p$ is a probability measure. It is easy to check that each $\mu_p$ satisfies the FKG lattice condition. Let $A$ be an increasing event, and write $1_A$ for its indicator function. We define the *(conditional) influence* of the element $s\in S$ on the event $A$ by $$\label{GG3} J_{A,p}(s) = \mu_p(A \mid 1_s = 1) - \mu_p(A\mid 1_s=0), \qq s \in S,$$ where $1_s$ is the indicator function that $\om(s)=1$. Note that $J_{A,p}(s)$ depends on the choice of $\mu$. The conditional influence is not generally equal to the (absolute) influence of [@KKL], $$I_{A,p}(s) = \mu_p(1_A(\om^s) \ne 1_A(\om_s)),$$ where the configuration $\om^s$ (resp. $\om_s$) is that obtained from $\om$ by setting $\om(s)=1$ (resp. $\om(s)=0$). \[inf1\] There exists a constant $c>0$ such that the following holds. For any such $S$, $\mu$, and any increasing event $A \ne \es,\Om$, $$\frac d{dp}\mu_p(A) \ge \frac{c\xi_p}{p(1-p)}\mu_p(A)(1-\mu_p(A))\log[1/(2m_{A,p})], \label{vanc11}$$ where $m_{A,p} = \max_{s\in S} J_{A,p}(s)$ and $\xi_p= \min_{s\in S} \bigl[\mu_p(1_s)(1-\mu_p(1_s))\bigr]$. \[inf2\] In the notation of Theorem \[inf1\], $$\mu_{p_1}(A)[1-\mu_{p_2}(A)] \le \kappa^{B(p_2-p_1)}, \qq 0<p_1\le p_2<1,$$ where $$B=\inf_{p\in(p_1,p_2)} \left\{\frac{c\xi_p}{p(1-p)}\right\},\qq \kappa = 2\sup_{{p\in(p_1,p_2),}\atop{s\in S}} J_{A,p}(s).$$ The corresponding inequality for product measures may be found in [@Tal94 Cor. 1.2]. Throughout this note, the letter $c$ shall refer only to the constant of Theorem \[inf1\]. It is proved in [@BGK; @GG] that $$\frac d{dp}\mu_p(A) = \frac1 {p(1-p)} \sum_{s\in S} \mu_p(1_s)(1-\mu_p(1_s)) J_{A,p}(s). \label{mel1}$$ Let $K=[0,1]^S$ be the ‘continuous’ cube, endowed with Lebesgue measure $\l$, and let $B$ be an increasing subset of $K$.
The influence $I_B(s)$ of an element $s$ is given in [@BKKKL] as $$I_B(s) = \l(1_B(\psi^s) \ne 1_B(\psi_s))$$ where $\psi^s$ (resp. $\psi_s$) is the member of $K$ obtained from $\psi\in K$ by setting $\psi(s)=1$ (resp. $\psi(s)=0$). The conclusion of [@BKKKL] may be expressed as follows. There exists a constant $c>0$, independent of all other quantities, such that: for any increasing event $B \subseteq K$, $$\sum_{s\in S} I_B(s) \ge c \l(B)(1-\l(B)) \log[1/(2m_B)] \label{bkkkl1}$$ where $m_B = \max_{s\in S} I_B(s)$. The main result of [@BKKKL] is a lower bound on $m_B$ that is easily seen to follow from . Equation does not in fact appear explicitly in [@BKKKL], but it may be derived from the arguments presented there, very much as observed in the case of the discrete cube from the arguments of [@KKL]. See [@FKST Thm 3.4]. The factor of 2 on the right side of is of little material consequence, since the inequality is important only when $m_B$ is small, and, when $m_B < \frac13$ say, the 2 may be removed with an amended value of the constant $c$. The literature on influence and sharp-threshold can seem a little disordered, and a coherent account may be found in [@G-pgs]. The method used there introduces the factor 2 in a natural way, and for this reason we have included it in the above. It is shown in [@GG] (see the proof of Theorem 2.10) that there exists an increasing subset $B$ of $K$ such that $\mu_p(A)=\l(B)$, and $J_{A,p}(s) \ge I_B(s)$ for all $s \in S$. Inequality follows by –. By , $$\left(\frac1{\mu_p(A)} + \frac1{1-\mu_p(A)}\right)\mu_p'(A) \ge B\log(\kappa^{-1}),\qq p_1 < p < p_2,$$ whence, on integrating over $(p_1,p_2)$, $$\left.\frac{\mu_{p_2}(A)}{1-\mu_{p_2}(A)}\right/\frac{\mu_{p_1}(A)}{1-\mu_{p_1}(A)} \ge \kappa^{-B(p_2-p_1)}.$$ The claim follows.
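The integration step may be illustrated in the extremal case where the differential inequality holds with equality: a logistic function $f$ with rate $B\log(1/\kappa)$ satisfies $f' = B\log(\kappa^{-1})f(1-f)$, and the product bound of Corollary \[inf2\] can be confirmed numerically for it. All constants below are our illustrative choices, not quantities from the text:

```python
import math

B, kappa, p0 = 3.0, 0.2, 0.5           # illustrative constants, kappa < 1
lam = B * math.log(1 / kappa)          # logistic rate matching f' = lam * f * (1 - f)

def f(p):
    """Logistic solution of the extremal differential equation."""
    return 1.0 / (1.0 + math.exp(-lam * (p - p0)))

# the corollary's bound: f(p1) * (1 - f(p2)) <= kappa ** (B * (p2 - p1)) for p1 <= p2
for p1, p2 in [(0.1, 0.9), (0.3, 0.5), (0.45, 0.8), (0.2, 0.2)]:
    assert f(p1) * (1 - f(p2)) <= kappa ** (B * (p2 - p1)) + 1e-12
```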
Proofs of Theorems \[thm1\] and \[thm3\] {#sec:pf} ======================================== Note first that a random-cluster measure has the form of with $S=E$ and $\mu(\om)=q^{k(\om)}$, and it is known and easily checked that $\mu$ satisfies the FKG lattice condition when $q \ge 1$ (see [@G-RC Sect. 3.2], for example). We shall apply Theorem \[inf1\] to a random-cluster measure $\fpq$ with $q \ge 1$. It is standard (see [@G-RC Thm 4.17(b)]) that $$\label{mel29} \frac pq \le \frac p{p+q(1-p)} \le \fpq(1_e) \le p,$$ whence $$\fpq(1_e)[1-\fpq(1_e)] \ge \frac{p(1-p)}q.$$ We may thus take $$\label{mel4} B= \frac cq$$ in Corollary \[inf2\]. Let $q \ge 1$, $1\le k,m<n$, and consider the random-cluster measures $\fnp^b = \fnpq^b$ on the box $\La_n$. For $e \in \EE^2$, write $\jkmn^b(e)$ for the (conditional) influence of $e$ on the event $\hkm$ under the measure $\fnp^b$. We set $\jkmn^b(e)=0$ for $e \notin \EE_{\La_n}$. \[lemma1\] Let $q\ge 1$. We have that $$\begin{aligned} {2} \sup_{e\in \EE^2} \jkmn^0(e) & \le \frac qp \eta_k, \qq &&0<p\le\kq,\q 1\le k,m < n,\\ \sup_{e\in \EE^2} \jkmn^1(e) & \le \frac q{p_\rd} \eta_{m+1}, \qq &&\kq\le p<1,\q 1\le k,m < n,\end{aligned}$$ where $p_\rd$ satisfies and $$\eta_k = \phi_{\kq,q}^0(0\lra \pd\La_{k/2}) \to 0\qq\text{as } k\to\oo.$$ For any configuration $\om\in\Om$ and vertex $z$, let $C_z(\om)$ be the open cluster at $z$, that is, the set of all vertices joined to $z$ by open paths. Suppose first that $0<p\le\kq$, and let $e = \langle x,y\rangle$ be an edge of $\La_n$. We couple the two conditional measures $\fnp^0(\cdot\mid \om(e)=b)$, $b=0,1$, in the following manner. Let $\Om_n$ be the configuration space of the edges in $\La_n$, and let $T=\{(\pi,\om)\in \Om_n^2: \pi\le\om\}$ be the set of all ordered pairs of configurations.
There exists a measure $\mu^e$ on $T$ such that: (a) the first marginal of $\mu^e$ is $\fnp^0(\cdot\mid 1_e=0)$; (b) the second marginal of $\mu^e$ is $\fnp^0(\cdot\mid 1_e=1)$; (c) for any subset $\g$ of $\La_n$, conditional on the event $\{(\pi,\om): C_x(\om)=\g\}$, the configurations $\pi$ and $\om$ are $\mu^e$-almost-surely equal on all edges having no endvertex in $\g$. The details of this coupling are omitted. The idea is to build the paired configuration $(\pi,\om)$ edge by edge, beginning at the edge $e$, in such a way that $\pi(f) \le \om(f)$ for each edge $f$ examined. The (closed) edge-boundary of the cluster $C_x(\om)$ is closed in $\pi$ also. Once this boundary has been uncovered, the configurations $\pi$, $\om$ on the rest of space are governed by the same (conditional) measure, and may be taken equal. Such an argument has been used in [@ACCN] and [@G-RC Thm 5.33(a)], and has been carried further in [@Al1]. We claim that $$\label{mel30} \jkmn^0(e) \le \fnp^0(D_x\mid 1_e=1),$$ where $D_x$ is the event that $C_x$ intersects both the left and right sides of $\bkm$. This is proved as follows. By , $$\begin{aligned} \jkmn^0(e) &= \mu^e(\om\in \hkm,\, \pi\notin\hkm)\\ &\le \mu^e(\om\in\hkm\cap D_x)\\ &\le \mu^e(\om\in D_x) =\fnp^0(D_x \mid 1_e=1),\end{aligned}$$ since, when $\om\notin D_x$, either both or neither of $\om$, $\pi$ belong to $\hkm$. By , $$\label{mel41} \jkmn^0(e) \le \frac{\fnp^0(D_x)}{\fnp^0(1_e)}.$$ On $D_x$, the radius of the open cluster at $x$ is at least $\frac12 k$. Since $\fnp^0 \lest \fpq$ and $\fpq$ is translation-invariant, $$\fnp^0(D_x) \le \fpq(x \lra x + \pd\La_{k/2}) = \fpq(0\lra \pd \La_{k/2}).$$ By , $$\fpq(0\lra \pd \La_{k/2}) \le \phi_{\kq,q}^0(0\lra \pd \La_{k/2}) \to 0 \qq \text{as } k \to\oo,$$ and, by and , the conclusion of the lemma is proved when $p \le \kq$. Suppose next that $\kq\le p<1$. Instead of working with the open paths, we work with the dual open paths.
Each edge $e_\rd=\langle u,v\rangle$ of the dual lattice traverses some edge $e=\langle x,y\rangle$ of the primal, and, for each configuration $\om$, we define the dual configuration $\om_\rd$ by $\om_\rd(e_\rd) = 1-\om(e)$. Thus, the dual edge $e_\rd$ is open if and only if $e$ is closed. It is well known (see [@G-RC eqn (6.12)], for example) that, with $\om$ distributed according to $\fnp^1$, $\om_\rd$ has as law the random-cluster measure, denoted $\fnpd$, on the dual of $\La_n$ with free boundary condition. The event $\hkm$ occurs if and only if there is no dual open path traversing the dual of $\bkm$ from top to bottom. We may therefore apply the above argument to the dual process, obtaining thus that $$\label{mel51} \jkmn^1(e) \le \frac{\fnpd(V_u)}{\fnpd(1_e)},$$ where $V_u$ is the event that $C_u$ intersects both the top and bottom sides of the dual of $\bkm$. On the event $V_u$, the radius of the open cluster at $u$ is at least $\frac12 (m+1)$. Since $\fnpd \lest \fpdq$, $$\fnpd(V_u) \le \fpdq(u \lra u + \pd\La_{(m+1)/2}) = \fpdq(0\lra \pd \La_{(m+1)/2}).$$ As above, by , $$\fpdq(0\lra \pd \La_{(m+1)/2}) \le \phi_{\kq,q}^0(0\lra \pd \La_{(m+1)/2}) =\eta_{m+1},$$ and this completes the proof when $p \ge \kq$. This follows immediately from Corollary \[inf2\] by and Lemma \[lemma1\]. By planar duality, $$\phi_{p,q}^0(H_k) = 1- \phi_{p_\rd,q}^1(H_k),$$ where $p$, $p_\rd$ are related by , see [@G-RC Thms 6.13, 6.14]. Since $\phi_{\kq,q}^0 \lest \phi_{\kq,q}^1$, $$\phi_{\kq,q}^0(H_k) \le \tfrac12 \le \phi_{\kq,q}^1(H_k),$$ and Theorem \[thm1\] follows from Theorem \[thm3\]. Proof of Theorems \[zhangisi\] and \[isingcr\] {#sec:isingpf} ============================================== Only an outline of the proof of Theorem \[zhangisi\] is included here, since it follows the ‘usual’ route (see [@G99 Sect. 11.3] or [@G-RC Sect. 6.2], for examples of the argument). The measure $\pibh$ is automorphism-invariant, ergodic, and has the finite-energy property.
By the main result of [@BK], the number $N^+$ (resp. $N^-$) of infinite $+$ clusters (resp. infinite $-$ $*$-connected clusters) satisfies $$\text{either}\q \pibh(N^\pm = 0) = 1\q\text{or}\q \pibh(N^\pm=1)=1.$$ Assume that $\t^+(\b,h)\t^-(\b,h) > 0$, which is to say that $\pibh(N^+=N^-=1) =1$. One may find a box $\La$ sufficiently large that, with $\pibh$-probability at least $\frac12$: the top and bottom of its boundary $\pd\La$ are $+$ connected to infinity off $\La$, and the left and right sides are $-$ $*$-connected to infinity off $\La$. Since $N^+=1$ almost surely, there is a $+$ path connecting the two infinite $+$ paths above, and this contradicts the fact that $N^-=1$ almost surely. We turn to the proof of Theorem \[isingcr\]. For the moment, let $\pibh$ be the Ising measure on a finite graph $G=(V,E)$ with parameters $\b\ge 0$ and $h \ge 0$. It is well known that $\pi_{\b,0}$ satisfies the FKG lattice condition on the partially ordered set $\Si_V = \{-1,+1\}^V$. We identify $\Si_V$ with $\{0,1\}^V$ via the mapping $\s_x \mapsto \om_x = \frac12(\s_x+1)$, and we choose $p$ by $$\label{mel40} \frac p{1-p} = e^{2h}.$$ Then $\pibh$ may be expressed in the form , and we may thus apply the results of Section \[sec:ist\]. By conditioning on the states of the neighbours of $x$, $$\label{mel32} \dfrac{e^{2h-\De\b}}{e^{\De\b}+e^{2h-\De\b}} \le \pibh(1_x) \le \dfrac{e^{2h+\De\b}}{e^{-\De\b}+e^{2h+\De\b}} ,$$ where $\De$ is the degree of the vertex $x$, and $1_x$ is the indicator function that $\s_x=+1$. Therefore, $$\begin{aligned} \pibh(1_x)[1-\pibh(1_x)] &\ge \min\left\{ \dfrac{e^{2h}}{(e^{\De\b}+e^{2h-\De\b})^2}, \dfrac{e^{2h}}{(e^{-\De\b}+e^{2h+\De\b})^2} \right\} \nonumber\\ &= \frac{e^{2h+2\De\b}}{(1+e^{2h+2\De\b})^2}. \label{mel33} \end{aligned}$$ This bound will be useful with $\De = 4$, and we write $$\label{ihp10} \xi_{\b,h} = \frac{e^{2h+8\b}}{(1+e^{2h+8\b})^2}.$$ Note that $\xi_{\b,h}$ is decreasing in $h$. We follow the argument of the proof of Theorem \[inf1\].
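The identification of the minimum above (the second of the two expressions, for $\b,h\ge 0$) and its agreement with $\xi_{\b,h}$ when $\De=4$ can be confirmed numerically; the following is an illustrative check, not part of the proof:

```python
import math

def marginal_bounds_min(beta, h, delta):
    """Minimum of the two endpoint values of x(1-x) from the marginal bounds."""
    a = math.exp(2 * h) / (math.exp(delta * beta) + math.exp(2 * h - delta * beta)) ** 2
    b = math.exp(2 * h) / (math.exp(-delta * beta) + math.exp(2 * h + delta * beta)) ** 2
    return min(a, b)

def xi(beta, h, delta):
    """Closed form e^{2h+2*delta*beta} / (1 + e^{2h+2*delta*beta})^2."""
    t = math.exp(2 * h + 2 * delta * beta)
    return t / (1 + t) ** 2

for beta in [0.0, 0.2, 0.44, 1.0]:
    for h in [0.0, 0.1, 0.5, 2.0]:
        assert abs(marginal_bounds_min(beta, h, 4) - xi(beta, h, 4)) < 1e-12
```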
Let $\b\in [0,\bc)$, $h >0$, and $1\le k,m \le r <n$, and consider the Ising measure $\pin=\pi_{\La_n,\b,h}$ on the box $\La_n = [-n,n]^2$. For $x\in \ZZ^2$, write $\jkmn(x)$ for the (conditional) influence of $x$ on the event $\hkm$ under the measure $\pin$. We set $\jkmn(x)=0$ for $x \notin \La_n$. \[lemma2\] Uniformly in $x\in\ZZ^2$, $$\begin{aligned} {2} \jkmn(x) &\le (1+e^{8\b-2h})\left[\pin(\bkm {\stackrel{+}{\leftrightarrow}}\pd\La_r) + \sup_{x\in \La_r}\pin(x{\stackrel{+}{\leftrightarrow}}x + \pd \La_{k/2})\right], \label{ihp4}\\ \jkmn(x) &\le (1+e^{8\b+2h})\left[\pin(\bkm {\stackrel{-}{\leftrightarrow}_*}\pd\La_r) + \sup_{x\in \La_r}\pin(x{\stackrel{-}{\leftrightarrow}_*}x + \pd \La_{m/2})\right]. \label{ihp5}\end{aligned}$$ Let $h > 0$. Let $C_x^+$ be the set of all vertices joined to $x$ by a path of vertices all of whose states are $+1$ (thus, $C_x^+=\es$ if $\s_x=-1$). We may couple the conditioned measures $\pin(\cdot\mid \s_x=b)$, $b= \pm 1$, such that the Ising equivalents of (a)–(c) hold as in Section \[sec:pf\]. As in , $$\label{mel31} \jkmn(x) \le \frac{\pin(D_x^+)}{\pin(1_x)},$$ where $D_x^+$ is the event that $C_x^+$ intersects both the left and right sides of $\bkm$. On $D_x^+$, the radius of $C_x^+$ is at least $\frac12 k$. For $x \notin \La_r$, $$\pin(D_x^+) \le \pin(\bkm {\stackrel{+}{\leftrightarrow}}\pd\La_r).$$ For $x \in \La_r$, we shall use the bound $$\pin(D_x^+) \le \pin(x{\stackrel{+}{\leftrightarrow}}x + \pd \La_{k/2}).$$ Combining the above inequalities with , we obtain . Let $C_x^-$ be the set of all vertices joined to $x$ by a $*$-connected path of vertices all of whose states are $-1$. The event $\hkm$ occurs if and only if there is no $-$ $*$-connected path from the top to the bottom of $\bkm$. Therefore, the conditional influence of $x$ on $\hkm$ equals that of $x$ on this new event. 
As in , $$\label{mel31b} \jkmn(x) \le \frac{\pin(V_x^-)}{\pin(1-1_x)},$$ where $V_x^-$ is the event that $C_x^-$ intersects both the top and bottom of $\bkm$. The above argument leads now to . Let $R>\hc$ and $\de>0$, and let $k,m\le r< n$. We set $$\begin{aligned} \kappa_{n,r,+}^\de &= 2(1+e^{8\b})\left[\pinhcdm(\bkm {\stackrel{+}{\leftrightarrow}}\pd\La_r) + \sup_{x\in \La_r}\pinhcdm(x{\stackrel{+}{\leftrightarrow}}x + \pd \La_{k/2})\right],\\ \kappa_{n,r,-}^\de &= 2(1+e^{8\b+2R})\left[\pinhcdp(\bkm {\stackrel{-}{\leftrightarrow}_*}\pd\La_r) + \sup_{x\in \La_r}\pinhcdp(x{\stackrel{-}{\leftrightarrow}_*}x + \pd \La_{m/2})\right].\end{aligned}$$ Let $0<h_1 < \hc < h_2\le R$, and choose $\de < \min\{\hc-h_1, h_2-\hc\}$. By , , Lemma \[lemma2\], and Theorem \[inf1\], $f_n(h)=\pin(\hkm)$ satisfies $$\label{ihp7} \frac1{f_n(h)(1-f_n(h))}\cdot\frac{df_n}{dh} \ge B_+ \log(1/\kappa_{n,r,+}^\de), \qq h_1\le h \le \hc-\de,$$ where $B_+ = 2c \xi_{\b,\hc}$, see . The corresponding inequality for $\hc+\de \le h\le R$ holds with $\kappa_{n,r,+}^\de$ replaced by $\kappa_{n,r,-}^\de$, and $B_+$ replaced by $B_- = 2c\xi_{\b,R}$. We integrate over the intervals $(h_1,\hc-\de)$ and $(\hc+\de,h_2)$, add the results, and use the fact that $f_n(h)$ is non-decreasing in $h$, to obtain that $$\left.\log\frac{f_n(h)}{1-f_n(h)}\right|_{h_1}^{h_2} \ge (\hc-\de-h_1)B_+\log(1/\kappa_{n,r,+}^\de) + (h_2-\hc-\de)B_-\log(1/\kappa_{n,r,-}^\de).$$ Take the limits as $n\to\oo$, $r\to\oo$, and $\de\to 0$ in that order, and use the monotonicity in $h$ of $\pibh$, to obtain the theorem. The coloured random-cluster model {#sec:crcm} =================== There is a well known coupling of the random-cluster and Potts models that provides a transparent explanation of how the analysis of the former aids that of the latter. Formulated as in [@ES] (see also the historical account of [@G-RC]), this is as follows. Let $p\in (0,1)$ and $q\in\{2,3,\dots\}$. Let $\om$ be sampled from the random-cluster measure $\fpq$ on the finite graph $G=(V,E)$.
To each open cluster of $\om$ we assign a uniformly chosen element of $\{1,2,\dots,q\}$, these random spins being independent between clusters. The ensuing spin-configuration $\s$ on $G$ is governed by a Potts measure, and pair-spin correlations in $\s$ are coupled to open connections in $\om$. This coupling has inspired a construction that we describe next. Let $p\in (0,1)$, $q \in (0,\oo)$, and $\a\in (0,1)$. Let $\om$ have law $\fpq$. To the vertices of each open cluster of $\om$ we assign a random spin chosen according to the Bernoulli measure on $\{0,1\}$ with parameter $\a$. These spins are constant within clusters, and independent between clusters. We call this the [*coloured random-cluster model*]{} (CRCM). With $\s$ the ensuing spin-configuration, we write $\kpqa$ for the measure governing the pair $(\om,\s)$, and $\ppqa$ for the marginal law of $\s$. When $q\in \{2,3,\dots\}$ and $q\a$ and $q(1-\a)$ are integers, the CRCM is a vertex-wise contraction of the Potts model from the spin-space $\{1,2,\dots,q\}^V$ to $\Si = \{0,1\}^V$. The CRCM has been studied in [@KahnW] under the name ‘fractional fuzzy Potts model’, and it is inspired in part by the earlier work of [@Chay96; @Hag99; @Hag01], as well as the study of the so-called ‘divide-and-colour model’ of [@DaCmodel]. The following seems to be known, see [@Chay96; @Hag99; @Hag01; @KahnW], but the short proof given below may be of value. \[crcfk\] The measure $\ppqa$ is monotone for all finite graphs $G$ and all $p\in(0,1)$ if and only if $q\a,q(1-\a)\ge 1$. We identify the spin-vector $\s\in\Si$ with the set $A=\{v\in V: \s_v=1\}$. Let $\ph=\ppqah$ be the probability measure obtained from $\ppqa$ by including an external field with strength $h\in\RR$, $$\ph(A)\propto e^{h|A|}\,\ppqa(A), \qq A\subseteq V. \label{ihp20}$$ It is an elementary consequence of Theorem \[crcfk\] and that, when $q\a, q(1-\a) \ge 1$, $\ph$ is a monotone measure, and $\ph$ is increasing in $h$. When $q=2$ and $\a=\frac12$, $\ph$ is the Ising measure with external field.
The purpose of this section is to extend the arguments of Section \[sec:ising\] to the CRCM with external field. There is a special case of the CRCM with an interesting interpretation. Let $\om$ be sampled from $\fpq$ as above, and let $\s=(\s_v: v \in V)$ be a vector of independent Bernoulli ($\g$) variables. Let $B$ be the event that $\s$ is constant on each open cluster of $\om$. The pair $(\om,\s)$, conditional on $B$, is termed the *massively coloured random-cluster measure* (MCRCM). The law of $\s$ is simply $\pi_{p,2q,\frac12,h}$ where $h=\log [\g/(1-\g)]$. Just as $\ppqa$ and $\fpq$ may be coupled via $\kpqa$, so we can couple $\ph$ with an ‘edge-measure’ $\psh=\pspqah$ via the following process. With $B$ given as above, and $(\om,\s)\in B$, denote by $\s(C)$ the common spin-value of $\s$ on an open cluster $C$ of $\om$. Let $\kh=\kpqah$ be the probability measure on $\Om \times \Si$ given by $$\kh(\om,\s)\propto \fpq(\om)\,1_B(\om,\s) \prod_C \bigl[\a e^{h|C|}\bigr]^{\s(C)}\bigl[1-\a\bigr]^{1-\s(C)}, \label{ihp22}$$ where the product is over the open clusters $C$ of $\om$, and $|C|$ is the number of vertices of $C$. The marginal and conditional measures of $\kh$ are easily calculated. The marginal on $\Si$ is $\ph$, and the marginal on $\Om$ is $\psh = \pspqah$ given by $$\label{crc4} \psh(\om)\propto \phi_{p,q}(\om) \prod_C \bigl[ \a e^{h|C|}+1-\a\bigr],\qq \om\in\Om.$$ Note that $\phi_0 = \fpq$. Given $\om$, we obtain $\s$ by labelling the open clusters with independent Bernoulli spins in such a way that the odds of cluster $C$ receiving spin $1$ are $\a e^{h|C|}$ to $1-\a$. By , or alternatively by summing $\kh(\om,\s)$ over $\om$, we find that $$\ph(A) \propto e^{h|A|}\,(1-p)^{\De A}\, Z_{A,q\a}\, Z_{V\setminus A,\,q(1-\a)}, \qq A \subseteq V, \label{crc1}$$ where $\De A$ is the number of edges of $G$ with exactly one endvertex in $A$, and $Z_{B,q}$ is the partition function of the random-cluster measure on the subgraph induced by $B\subseteq V$ with edge-parameter $p$ and cluster-weight $q$.
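The consistency between the cluster-spin labelling and the product form of the edge-marginal can be checked by exhaustive enumeration on a small graph. The sketch below (illustrative Python; the triangle graph and parameter values are our choices) sums the per-cluster spin weights, with odds $\a e^{h|C|}$ to $1-\a$ as stated above, and confirms that the sum factorizes as in the displayed edge-marginal:

```python
import math
from itertools import product

EDGES = [(0, 1), (1, 2), (0, 2)]  # triangle graph, a minimal illustrative choice
p, q, alpha, h = 0.4, 1.7, 0.6, 0.3

def clusters(omega):
    """Open clusters (as vertex lists) of the configuration omega."""
    parent = list(range(3))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for bit, (u, v) in zip(omega, EDGES):
        if bit:
            parent[find(u)] = find(v)
    roots = {}
    for x in range(3):
        roots.setdefault(find(x), []).append(x)
    return list(roots.values())

def rc_weight(omega):
    """Unnormalized random-cluster weight of omega."""
    w = q ** len(clusters(omega))
    for bit in omega:
        w *= p if bit else 1 - p
    return w

def joint_marginal(omega):
    """Edge-marginal of the coupling: rc_weight times prod_C [alpha*e^{h|C|} + 1-alpha]."""
    w = rc_weight(omega)
    for C in clusters(omega):
        w *= alpha * math.exp(h * len(C)) + (1 - alpha)
    return w

# summing the per-cluster spin weights over all spin assignments recovers the product form
for omega in product((0, 1), repeat=3):
    total = 0.0
    for spins in product((0, 1), repeat=len(clusters(omega))):
        w = rc_weight(omega)
        for s, C in zip(spins, clusters(omega)):
            w *= (alpha * math.exp(h * len(C))) if s else (1 - alpha)
        total += w
    assert abs(total - joint_marginal(omega)) < 1e-12
```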
It may be checked as in the proof of Theorem \[crcfk\] that, for given $p$, $q$, $\a$, the measure $\ph$ is bounded above (resp. below) by a product measure with parameter $a(h)$ (resp. $b(h)$) where $$a(-h) \to 0,\q b(h) \to 1, \qq\text{as } h\to\oo. \label{ihp24}$$ The measure $\psh$ has a number of useful properties, as follows. \[phi\_properties\] Let $q\a,q(1-\a)\ge 1$. (i) The probability measure $\psh$ is monotone. (ii) The marginal measure of $\kh$ on $\Om$, conditional on $\s_x=b$, satisfies $$\begin{aligned} {2} \kh(\,\cdot\mid \s_x=1)&\gest\kh(\,\cdot\mid \s_x=0),\qq &&h\ge 0,\\ \kh(\,\cdot\mid \s_x=1)&\lest\kh(\,\cdot\mid \s_x=0),\qq &&h\le 0.\end{aligned}$$ (iii) If $p_1\le p_2$ and the ordered three-item sequence $(0,h_1,h_2)$ is monotonic, then $\phi_{p_1,q,\a,h_1}\lest\phi_{p_2,q,\a,h_2}$. (iv) We have that $\pspqah \lest \phi_{p,Q}$, where $Q=Q(h)$ is defined by $$Q(h)= \begin{cases} q\a, & h>0,\\ q, & h=0, \\ q(1-\a), & h<0.\\ \end{cases}$$ [*We assume henceforth that $q\a,q(1-\a)\ge 1$*]{}, and we consider next the infinite-volume limits of the above measures. Let $G$ be a subgraph of the square lattice $\ZZ^2$ induced by the vertex-set $V$, and label the above measures with the subscript $V$. By standard arguments (see [@G-RC Chap. 4]), the limit measure $$\psh = \lim_{V\uparrow \ZZ^2} \pshV$$ exists, is independent of the choice of the $V$, and is translation-invariant and ergodic. By an argument similar to that of [@G-RC Thm 4.91], the measures $\phV$ have a well-defined infinite-volume limit $\ph$ as $V\uparrow \ZZ^2$. Furthermore, the pair $(\psh,\ph)$ may be coupled in the same manner as on a finite graph. That is, a *finite* cluster $C$ of $\om$ receives spin $1$ with probability $\a e^{h|C|}/[\a e^{h|C|} + 1-\a]$. An *infinite* cluster receives spin $1$ (resp. $0$) if $h>0$ (resp. $h<0$). When $h=0$, the spin of an infinite cluster has the Bernoulli distribution with parameter $\a$. Since $\psh$ is translation-invariant, so is $\ph$.
As in [@G-RC Thm 4.10], $\ph$ is positively associated, and the proof of [@G-RC Thm 4.91] may be adapted to obtain that $\ph$ is ergodic. By a simple calculation, the $\phV$ have the finite-energy property, with bounds that are uniform in $V$ (see [@G-RC eqn (3.4)]), and therefore so does $\ph$. Adapting the notation used in Section \[sec:ising\] for the Ising model, let $$\begin{aligned} \t^1(p,q,\a,h)&=\ph(0{\stackrel{1}{\leftrightarrow}}\oo),\\ \t^0(p,q,\a,h)&=\ph(0{\stackrel{0}{\leftrightarrow}_*}\oo).\end{aligned}$$ As in Theorem \[zhangisi\], and with an essentially identical proof, $$\label{ihp21} \t^1(p,q,\a,h)\, \t^0(p,q,\a,h) = 0.$$ By the remark above and [@G-RC Thm 4.10], $\ph$ is stochastically increasing in $h$, whence there exists $\hc=\hc(p,q,\a)\in\RR \cup \{\pm\oo\}$ such that $$\t^1 (p,q,\a,h) \begin{cases} =0 &\text{ if } h < \hc,\\ >0 &\text{ if } h > \hc. \end{cases}$$ By comparisons with product measures (see the remark prior to Proposition \[phi\_properties\]), we have that $|\hc|<\oo$. We call a probability measure $\mu$ on $\Si$ *subcritical* (respectively, *supercritical*) if the $\mu$-probability of an infinite $1$-cluster is $0$ (respectively, strictly greater than $0$); we shall use the corresponding terminology for measures on $\Om$. There is a second type of phase transition, namely the onset of percolation in the measure $\psh$. An infinite edge-cluster under $\psh$ forms part of an infinite vertex-cluster under $\ph$. Let $\pc(q)$ be the critical point of the random-cluster measure $\fpq$ on $\ZZ^2$, as usual. By Proposition \[phi\_properties\](iv), $\psh$ is subcritical for all $h$ when $p<\pc(q\min\{\a,1-\a\})$; in particular, for such $p$, $\psh$ is subcritical for $h$ lying in some open neighbourhood of $\hc$. On the other hand, suppose that $\phi_0=\fpq$ is supercritical. By the remarks above, $\t^1>0$ for $h>0$, and $\t^0>0$ for $h<0$. By (\[ihp21\]), $\t^1$ is discontinuous at $h=\hc=0$. By Proposition \[phi\_properties\](iii), $\psh\gest\phi_0$, whence $\t^1$ is discontinuous at $h=\hc=0$ whenever $p>\pc(q)$. 
With $k,m\in\NN$, let $H_{k,m}$ be the event that there exists a left–right $1$-crossing of the box $B_{k,m}$. A result corresponding to Theorem \[isingcr\] holds, subject to a condition on $\psh$ with $h$ near $\hc$. This condition has not, to our knowledge, been verified for the Ising model, although it is expected to hold. In this sense, the next theorem does not quite generalize Theorem \[isingcr\]. \[crcmcr\] Let $R >0$ be such that $R\le |\hc|$ when $\hc\ne 0$. Suppose that $\psh$ is subcritical for $h\in[\hc-R,\hc+R]$. There exist $\rho_{i,1}=\rho_{i,1}(p,q,\a,R)$ and $\rho_{i,0}=\rho_{i,0}(p,q,\a,R)$ satisfying $$\rho_{i,1},\rho_{i,0} \to 0\qq\text{as } i\to\oo,$$ such that: for $h_1\in[\hc-R,\hc]$, $h_2\in[\hc,\hc+R]$, $$\pi_{h_1}(\hkm)[1-\pi_{h_2}(\hkm)] \le \rho_{k,1}^{\hc-h_1} \rho_{m,0}^{h_2-\hc},\qq k,m\ge 1.$$ As in the proof of Theorem \[isingcr\], the first step is to establish bounds on the one-point marginals of $\ph$. These may be strengthened to a finite-energy property, but that will not be required here. The proof is deferred to the end of the section. \[crcm\_finite\_energy\] Let $G=(V,E)$ be a finite graph with maximum vertex-degree $\De$. Then $$\frac{\a e^h}{\a e^h +1 -\a}\, (1-p)^\De \le \ph(\s_x=1)\le 1-\frac{1-\a }{\a e^h +1 -\a}\, (1-p)^\De.$$ Consider the subgraph of $\ZZ^2$ induced by $\La_n=[-n,n]^2$, and let $x\in \La_n$. Objects associated with the finite domain $\La_n$ are labelled with the subscript $n$. For $b=0,1$, let $\pnh^b$ (respectively, $\pnsh^b$) be the marginal measure on $\Si_n$ (respectively, $\Om_n$) of the coupling $\knh$ conditioned on $\s_x=b$. By Proposition \[phi\_properties\], $\pnsh^1 \gest \pnsh^0$ when $h \ge 0$, and $\pnsh^1 \lest \pnsh^0$ when $h \le 0$. It is convenient to work with a certain coupling of the pairs $(\pnsh^0,\pnh^0)$ and $(\pnsh^1,\pnh^1)$. Recall that $C_x(\om)$ denotes the open cluster at $x$ in the edge-configuration $\om\in\Om$. \[coupling\] Let $h\in\RR$. 
There exists a probability measure $\kappa^{01}_{n,h}$ on $(\Om_n\times\Si_n)^2$ with the following properties. Let $(\om^0,\s^0,\om^1,\s^1)$ be sampled from $(\Om_n\times\Si_n)^2$ according to $\kappa^{01}_{n,h}$. For $b=0,1$, $\om^b$ has law $\pnsh^b$. For $b=0,1$, $\s^b$ has law $\pnh^b$. If $h\le 0$, $\om^0\ge \om^1$. If $h\ge 0$, $\om^1\ge\om^0$. The spin configurations $\s^0$ and $\s^1$ agree at all vertices $y \notin C_x(\om^0)\cup C_x(\om^1)$. Assume first that $h \ge 0$. There exists a probability measure $\ol\phi_n$ on $\Om_n^2$, with support $D_1=\{(\om^0,\om^1)\in\Om_n^2: \om^0\le\om^1\}$, whose first (respectively, second) marginal is $\pnsh^0$ (respectively, $\pnsh^1$). By sampling from $\ol\phi_n$ in a sequential manner beginning at $x$, and proceeding via the open connections of the upper configuration, we may assume in addition that $(\om^0,\om^1)\in D_2$, where $D_2$ is the set of pairs such that $\om^0(e)=\om^1(e)$ for any edge $e$ having at most one endpoint in $C_x(\om^1)$. Let $(\om^0,\om^1)\in D=D_1\cap D_2$. The spin vectors $\s^b$ may be constructed as follows: attach spin $b$ to the cluster $C_x(\om^b)$, and attach independent Bernoulli spins to the other $\om^b$-open clusters in such a way that the odds of cluster $C$ receiving spin $1$ are $\a e^{h|C|}$ to $1-\a$. We may assign spins $\s^b$ to the open clusters of the $\om^b$ in such a way that: $\s^b$ has law $\pnh^b$, and $\s^0_y=\s^1_y$ for $y \notin C_x(\om^1)$. Write $\kappa^{01}_{n,h}$ for the joint law of the ensuing pairs $(\om^0,\s^0)$, $(\om^1,\s^1)$. When $h\le 0$, let $\kappa^{01}_{n,h}$ be the coupling as above, with the differences that: $\om^0\ge \om^1$, and $\s^0_y=\s^1_y$ for $y \notin C_x(\om^0)$. We seek next a substitute for Lemma \[lemma2\] in the current setting. Let $J_{k,m,n}(x)$ be the conditional influence of vertex $x$ on the event $H_{k,m}$, with reference measure $\pnh$ on $\La_n$. Let $(\om^0,\s^0,\om^1,\s^1)$ be sampled according to the measure $\knh^{01}$ of Lemma \[coupling\]. 
Define random clusters $C_x^H, C_x^V\subseteq\ZZ^2$ as follows: $$\begin{aligned} C_x^H(\om^0,\s^0,\om^1,\s^1)&:=\{z\in\ZZ^2:\exists y \in C_x(\om^0),\ y{\stackrel{1}{\leftrightarrow}}z \text{ in } \s^1\},\\ C_x^V(\om^0,\s^0,\om^1,\s^1)&:=\{z\in\ZZ^2:\exists y \in C_x(\om^1),\ y{\stackrel{0}{\leftrightarrow}_*}z \text{ in } \s^0\}.\end{aligned}$$ Notice that, if $h\ge 0$ (respectively, $h\le 0$), $C_x^H$ (respectively, $C_x^V$) is the spin-$1$ cluster (respectively, spin-$0$ $*$-cluster) at $x$ under $\s^1$ (respectively, $\s^0$). It may be checked as before that: $$\begin{aligned} J_{k,m,n}(x)&\le \kappa^{01}_{n,h}\bigl(C_x^H \text{ contains a horizontal crossing of } B_{k,m}\bigr), \label{ihp30}\\ J_{k,m,n}(x)&\le \kappa^{01}_{n,h}\bigl(C_x^V \text{ contains a vertical $*$-crossing of } B_{k,m}\bigr). \label{ihp31}\end{aligned}$$ The notation $C_x^H$, $C_x^V$ is introduced in order to treat the cases $h>0$ and $h<0$ simultaneously. \[lemma3\] Let $R$ be as in Theorem \[crcmcr\]. (i) If $\t^1(p,q,\a,\hc)=0$, and $\psh$ is subcritical for $h\in[\hc-R,\hc]$, there exists $\nu_{k,1}$ satisfying $\nu_{k,1}\to 0$ as $k\to\oo$ such that $$\limsup_{n\to\oo}\sup_{h\in[\hc-R,\hc]} \,\sup_{x\in\La_n} J_{k,m,n}(x)\le \nu_{k,1}.$$ (ii) If $\t^0(p,q,\a,\hc)=0$, and $\psh$ is subcritical for $h\in[\hc,\hc+R]$, there exists $\nu_{m,0}$ satisfying $\nu_{m,0}\to 0$ as $m\to\oo$ such that $$\limsup_{n\to\oo}\sup_{h\in[\hc,\hc+R]}\, \sup_{x\in\La_n} J_{k,m,n}(x)\le \nu_{m,0}.$$ We prove part (i) only, the proof of (ii) being similar. If $[\hc-R,\hc]\subseteq[0,\oo)$, let $\phi=\phi_{\hc}$; if $[\hc-R,\hc]\subseteq(-\oo,0]$, let $\phi=\phi_{\hc-R}$. By Proposition \[phi\_properties\], and the assumptions of (i), $\phi_{n,h} \lest \phi$ for $n\ge1$ and $h\in[\hc-R,\hc]$, $\phi$ is subcritical, $\pi_{\hc}$ is subcritical, and $\pi_{n,h}\lest \pi_{n,\hc}$ for $h\in[\hc-R,\hc]$. By Lemma \[crcm\_finite\_energy\], there exists $L>0$ such that $$\label{k7} \pi_{n,h}(\s_x=1),\ \pi_{n,h}(\s_x=0) \ge L$$ for all $n \ge 1$, $x\in\La_n$, and $h\in[\hc-R,\hc+R]$. 
Let $$A_x(\om) = \sup\{r \ge 0: x \lra x+\pd\La_r\}$$ denote the radius $\rad(C_x)$ of the edge cluster $C_x = C_x(\om)$ at $x$, and note that $\phi(A_x<\oo)=1$. Let $r\ge \max\{k,m\}$ and $x\in\La_r$. By (\[ihp30\]) and the positive association of $\pnh^1$, $$\begin{aligned} J_{k,m,n}(x) &\le \kappa^{01}_{n,h} \bigl(\rad (C_x^H) \ge \tfrac12 k\bigr) \\ &\le \sum_{a=0}^{\oo} \phi_{n,h}^0(A_x=a)\a_{n,h}^1(x,a,\tfrac12 k)\\ &\le \frac 1L \sum_{a=0}^{\oo} \phi_{n,h}(A_x=a)\a_{n,h}(x,a,\tfrac12 k),\end{aligned}$$ where $$\a_{n,h}^\xi(x,a,b) = \pi_{n,h}^\xi\bigl(x+\La_a{\stackrel{1}{\leftrightarrow}}x+\pd\La_{b}\bigmid \s_y= 1 \text{ for }y\in x+\La_a\bigr).$$ Since $\a_{n,h}(x,a,b)$ is non-decreasing in $a$, and furthermore $\phi_{n,h} \lest \phi$ and $\phi$ is translation-invariant, $$\label{k8-} \sup_{x\in\La_r} \jkmn(x) \le \frac 1L \sum_{a=0}^\oo \phi(A_0=a) \sup_{x\in\La_r}\bigl\{\a_{n,h}(x,a,\tfrac12 k)\bigr\}.$$ By (\[k7\]) and the fact that $\pi_{n,h} \lest \pi_{n,\hc}$, $$\label{nar1} \a_{n,h}(x,a,\tfrac12 k) \le \min\Bigl\{1,\ \frac 1{L^{|\La_a|}} \pi_{n,\hc}\bigl(x+\La_a {\stackrel{1}{\leftrightarrow}} x+\pd\La_{k/2}\bigr)\Bigr\}.$$ Suppose now that $x\in\La_n\sm\La_r$. Then $$\begin{aligned} J_{k,m,n}(x) &\le \kappa^{01}_{n,h} (C_x^H\cap B_{k,m}\not=\es)\\ &\le \sum_{a=0}^\oo \phi_{n,h}^0(A_x=a) \b_{n,h}^1(x,a)\\ &\le \frac 1L \sum_{a=0}^\oo \phi_{n,h}(A_x=a)\b_{n,h}(x,a),\end{aligned}$$ where $$\b_{n,h}^\xi(x,a) =\pi_{n,h}^\xi \bigl(x+\La_a{\stackrel{1}{\leftrightarrow}}B_{k,m} \bigmid \s_y= 1 \text{ for }y\in x+\La_a\bigr)$$ is a non-decreasing function of $a$. Since $\phi_{n,h} \lest \phi$, and $\phi$ is translation-invariant, $$\jkmn(x) \le \frac 1L \sum_{a=0}^\oo \phi(A_0=a)\b_{n,h}(x,a).$$ As above, $$\begin{aligned} \b_{n,h}(x,a) &\le \frac 1{L^{|\La_a|}} \pi_{n,h}(x+\La_a {\stackrel{1}{\leftrightarrow}} \bkm)\\ &\le \frac 1{L^{|\La_a|}} \pi_{n,h}(\bkm {\stackrel{1}{\leftrightarrow}} \pd\La_{r-a}) \qq\text{if } a \le r,\end{aligned}$$ whence $$\label{k8} \jkmn(x) \le \frac 1L \sum_{a=0}^\oo \phi(A_0=a) \min\Bigl\{1,\ \frac 1{L^{|\La_a|}} \pi_{n,\hc}\bigl(\bkm {\stackrel{1}{\leftrightarrow}} \pd\La_{r-a}\bigr)\Bigr\},$$ where the minimum is interpreted as $1$ when $a>r$. 
We add (\[k8-\]) and (\[k8\]), insert (\[nar1\]), and take the limit $n\to\oo$, to obtain by the bounded convergence theorem that $$\begin{aligned} \limsup_{n\to\oo} &\sup_{x\in \La_n} \jkmn(x)\\ & \le \frac 1L \Biggl[ \sum_{a=0}^\oo \phi(A_0=a) \min\left\{1, \frac 1{L^{|\La_a|}} \pi_{\hc}(x+\La_a {\stackrel{1}{\leftrightarrow}} \pd\La_{k/2})\right\} \Biggr.\\ &\hskip2cm\Biggl. +\sum_{a=0}^\oo \phi(A_0=a) \min\left\{1, \frac 1{L^{|\La_a|}} \pi_{\hc}(\bkm {\stackrel{1}{\leftrightarrow}} \pd\La_{r-a})\right\} \Biggr].\end{aligned}$$ We now send $r\to\oo$. Since $\t^1(p,q,\a,\hc)=0$ by assumption, the last summand tends to $0$. By the bounded convergence theorem, $$\label{k9} \limsup_{n\to\oo}\, \sup_{x\in\La_n} J_{k,m,n}(x) \le \nu_{k,1},$$ where $$\nu_{k,1} = \frac 1L \sum_{a=0}^\oo \phi(A_0=a) \min\left\{1, \frac 1{L^{|\La_a|}} \pi_{\hc}(x+\La_a {\stackrel{1}{\leftrightarrow}} \pd\La_{k/2})\right\}.$$ By the bounded convergence theorem again, $\nu_{k,1} \to 0$ as $k \to\oo$. Since the bounds above are uniform in $h \in[\hc-R,\hc]$, one may include the supremum over $h$ in (\[k9\]), as required for the lemma. Let $f_n(h)=\pi_{n,h}(H_{k,m})$. By the influence inequality used in the proof of Theorem \[isingcr\], together with Lemma \[crcm\_finite\_energy\], $$\label{k1} \frac1{f_n(h)[1-f_n(h)]} \frac d{dh} f_n(h)\ge cL \log \left[\frac{1}{2\max_x J_{k,m,n}(x)}\right],$$ with $L$ as in the proof of Lemma \[lemma3\]. Let $$\xi_{n,k,1}=\sup_{h\in[\hc-R,\hc]}\, \sup_{x\in\La_n} 2 J_{k,m,n}(x),\q \xi_{n,m,0}=\sup_{h\in[\hc,\hc+R]} \, \sup_{x\in\La_n} 2 J_{k,m,n}(x).$$ By (\[k1\]), $$\left.\log\frac{f_n(h)}{1-f_n(h)}\right|_{h_1}^{h_2} \ge (\hc-h_1)cL\log(\xi^{-1}_{n,k,1}) + (h_2-\hc)cL\log(\xi^{-1}_{n,m,0}),$$ whence $$f_n(h_1)[1-f_n(h_2)] \le \xi_{n,k,1}^{cL(\hc-h_1)}\xi_{n,m,0}^{cL(h_2-\hc)}.$$ Take the limit as $n\to\oo$ and use Lemma \[lemma3\]. A strictly positive measure $\mu$ on $\Om=\{0,1\}^E$ is monotone if and only if: for all $\om\in\Om$ with $\om(e)=\om(f)=0$, $e\ne f$, $$\label{fkg} \mu(\om^{e,f})\,\mu(\om) \ge \mu(\om^e)\,\mu(\om^f);$$ see, for example, [@G-RC Thm 2.19]. 
Given two strictly positive measures $\mu_1$ and $\mu_2$, at least one of which is monotone, it is sufficient for $\mu_1\lest\mu_2$ that $$\label{holley} \mu_2(\om^e)\,\mu_1(\om_e) \ge \mu_1(\om^e)\,\mu_2(\om_e), \qq \om\in\Om,\ e\in E.$$ This is proved in the corrected theorem [@G-RC Theorem 2.3$''$]. Condition (\[holley\]) is non-trivial only when $\om(e)=0$. We shall prove (i) by checking that $\psh$ satisfies (\[fkg\]). Write $\sC(\om)$ for the set of open clusters under $\om$, and let $f_h(k)=\a e^{hk}+1-\a$. Substituting into (\[fkg\]), we must check $$\begin{gathered} \label{fkg_phi} \fpq(\om^{e,f})\fpq(\om) \prod_{C\in\sC(\om^{e,f})} f_h(|C|) \prod_{C\in\sC(\om)} f_h(|C|)\\ \ge\fpq(\om^{e})\fpq(\om^{f}) \prod_{C\in\sC(\om^{e})} f_h(|C|) \prod_{C\in\sC(\om^{f})} f_h(|C|).\end{gathered}$$ On using the monotonicity of $\fpq$, and on cancelling the factors $f_h(|C|)$ for $C\in\sC(\om)\cap\sC(\om^{e,f})$, we arrive at the following three cases. (a) There are clusters $C_1,C_2\in\sC(\om)$, such that $C_1\cup C_2\in\sC(\om^e)=\sC(\om^f)$. It suffices that $$q f_h(a) f_h(b) \ge f_h(a+b),\q a=|C_1|,\, b=|C_2|,$$ and this is easily checked for $a,b\ge 0$ since $q\a,q(1-\a)\ge 1$. (b) There are clusters $C_1,C_2,C_3\in\sC(\om)$, such that $C_1\cup C_2\in\sC(\om^e)$ and $C_2\cup C_3\in\sC(\om^f)$. It suffices that $$f_h(a+b+c) f_h(b) \ge f_h(a+b) f_h(b+c),\q a=|C_1|,\, b=|C_2|,\, c=|C_3|,$$ and this is immediate. (c) There are clusters $C_1,C_2,C_3,C_4\in\sC(\om)$ such that $C_1\cup C_2\in\sC(\om^e)$ and $C_3\cup C_4\in\sC(\om^f)$. In this case, inequality (\[fkg\_phi\]) simplifies to a triviality. It may be checked similarly that the marginal measure of $\kappa_h(\,\cdot\mid\s_x=b)$ on $\Om$ is monotone if either $h\ge0$, $b=1$ or $h\le 0$, $b=0$. One uses the expression $$\kappa_h(\om\mid\s_x=b)\propto \phi_{p,q}(\om) e^{hb|C_x(\om)|} \prod_{C\in\sC(\om)\sm\{C_x(\om)\}} f_h(|C|),\qq \om\in\Om.$$ Parts (ii) and (iii) then follow by checking (\[holley\]) with appropriate $\mu_i$. Part (iv) follows from part (iii) by taking the limit as $|h|\to\oo$. 
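The lattice condition for the monotonicity of $\psh$ can also be verified exhaustively on a small graph, which makes the role of the hypothesis $q\a,q(1-\a)\ge 1$ concrete. A minimal Python sketch follows; the graph and the values of $p$, $q$, $\a$, $h$ are illustrative assumptions, chosen so that $q\a,q(1-\a)\ge 1$.

```python
from itertools import combinations
from math import exp

# Illustrative assumptions: small graph and parameters with q*alpha >= 1
# and q*(1-alpha) >= 1, as the proposition requires.
V = (0, 1, 2, 3)
E = ((0, 1), (1, 2), (2, 3), (3, 0), (0, 2))
p, q, alpha, h = 0.35, 4.0, 0.4, 0.3

def cluster_sizes(open_edges):
    # sizes of the connected components of (V, open_edges) via union-find
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in open_edges:
        parent[find(u)] = find(v)
    sizes = {}
    for v in V:
        root = find(v)
        sizes[root] = sizes.get(root, 0) + 1
    return list(sizes.values())

def f(k):
    # f_h(k) = alpha e^{hk} + 1 - alpha
    return alpha*exp(h*k) + 1 - alpha

def psi(open_edges):
    # unnormalised psi_h(omega) = phi_{p,q}(omega) * prod_C f_h(|C|)
    sizes = cluster_sizes(open_edges)
    w = p**len(open_edges) * (1 - p)**(len(E) - len(open_edges)) * q**len(sizes)
    for s in sizes:
        w *= f(s)
    return w

# Exhaustive check of the lattice (FKG) condition:
# psi(om^{e,f}) psi(om) >= psi(om^e) psi(om^f) whenever om(e) = om(f) = 0.
for r in range(len(E) + 1):
    for om in combinations(E, r):
        closed = [e for e in E if e not in om]
        for e1, e2 in combinations(closed, 2):
            lhs = psi(om + (e1, e2)) * psi(om)
            rhs = psi(om + (e1,)) * psi(om + (e2,))
            assert lhs >= rhs - 1e-12
```

The small tolerance in the final assertion absorbs floating-point rounding in case (c), where the two sides are equal exactly.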
Many of the required calculations are rather similar to part (i), and we omit further details. We identify the spin-vector $\s\in\Si$ with the set $A=\{v\in V: \s_v=1\}$. In order that $\pi=\ppqa$ be monotone it is necessary and sufficient (see inequality (\[fkg\])) that $$\label{crc2} \pi(A^{xy})\,\pi(A) \ge \pi(A^x)\,\pi(A^y), \qq A\subseteq V,\ x,y\in V\sm A,\ x\ne y.$$ Let $A\subseteq V$, $x,y\in V\sm A$, $x\ne y$. Let $a$ be the number of edges of the form $\langle x,z\ra$ with $z \in A$, let $b$ be the number of edges of the form $\langle x,z\ra$ with $z \notin A$ and $z \ne x,y$, and let $e$ be the number of edges joining $x$ and $y$. We write $A^x = A \cup\{x\}$, etc. By (\[crc1\]) with $h=0$, $$\frac{\pi(A^{x})}{\pi(A)} = (1-p)^{b+e-a} \frac{Z_{A^x,q\a}Z_{\ol{A^x},q(1-\a)}}{Z_{A,q\a}Z_{\ol{A},q(1-\a)}} = \frac{\a}{1-\a} \cdot \frac{\phi_{\ol A,q(1-\a)}(I_x)} {\phi_{A^x,q\a}(I_x)},$$ where $I_x$ is the event that $x$ is isolated, and $\phi_{A,q}$ is the random-cluster measure on the subgraph induced by the vertices of $A$ with edge-parameter $p$ and cluster-weight $q$. Similarly, $$\frac{\pi(A^{xy})}{\pi(A^y)} =\frac{\a}{1-\a} \cdot\frac{\phi_{\ol{A^y},q(1-\a)}(I_x)} {\phi_{A^{xy},q\a}(I_x)}.$$ The ratio of the left to the right side of (\[crc2\]) is therefore $$\label{crc3} \frac{\pi(A^{xy})\,\pi(A)}{\pi(A^x)\,\pi(A^y)} = \frac{\phi_{A^x,q\a}(I_x)}{\phi_{A^{xy},q\a}(I_x)} \cdot \frac{\phi_{\ol{A^y},q(1-\a)}(I_x)}{\phi_{\ol A,q(1-\a)}(I_x)}.$$ Inequality (\[crc2\]) holds by the positive association of random-cluster measures with cluster-weights at least $1$. That the conditions are necessary for monotonicity follows by an example. Suppose $0<q\a<1$ and $q(1-\a)\ge 1$. Let $G$ be a cycle of length four, with vertices (in order, going around the cycle) $u,x,v,y$. Take $A=\{u,v\}$ above, so that $e=0$. The final ratio in (\[crc3\]) equals $1$, and the penultimate is strictly less than $1$. By Proposition \[phi\_properties\](iv) and a standard finite-energy bound, $$\psh(I_x)\ge \phi_{p,Q}(I_x)\ge (1-p)^\De,$$ where $I_x$ is the event that $x$ is isolated. Conditional on $I_x$, the spin of $x$ under the coupling $\kh$ has the Bernoulli distribution with parameter $\a e^h/[\a e^h+1-\a]$. 
Acknowledgements {#acknowledgements .unnumbered} ================ The second author acknowledges the hospitality of the Mathematics Department at the University of British Columbia, the Institut Henri Poincaré–Centre Emile Borel, Paris, and the Section de Mathématiques at the University of Geneva, where this work was largely done. Rob van den Berg proposed the elimination of an equation from the proofs.
--- abstract: 'We study nonlocal non-Markovian effects arising through local interactions between two subsystems and the corresponding two environments. It has been found that initial correlations between the two environments can turn a Markovian into a non-Markovian regime, given extra control of the local interaction time. We investigate the nonlocal non-Markovian effects further in two situations. Without extra control, the nonlocal non-Markovian effects appear only when the two local dynamics are non-Markovian-non-Markovian (both local dynamics are non-Markovian) or Markovian-non-Markovian; they never appear when the local dynamics are Markovian-Markovian. With extra control, the nonlocal non-Markovian effects can occur even when the local dynamics are Markovian-Markovian. This shows that the effect of the correlations between the two environments has an upper bound: they can make the flow of information from the environment back to the global system begin only finitely earlier than the flow back to either of the two local systems, not infinitely earlier. Then, observing that classical correlations between the two environments serve the same function as quantum correlations, we propose two special ways of establishing classical correlations between two environments without initial correlations. Finally, from numerical solutions in the spin-star configuration, we find that the self-correlation (internal correlation) of each environment promotes the nonlocal non-Markovian effects.' author: - Dong Xie - An Min Wang title: 'Nonlocal Non-Markovian Effects in Dephasing Environments' --- Introduction ============ A realistic physical system inevitably interacts with its surrounding environment, which leads to a loss of information from the system to the environment. If the environment can feed the information back to the system within a finite time, non-Markovian effects appear as a consequence of the environmental memory. 
If instead the environment feeds the information back only in the infinite-time limit, the whole dynamics is Markovian. A Markovian dynamical process can thus be treated as a limiting approximation of a non-Markovian one [@lab1; @lab2]. It is highly interesting to explore non-Markovian effects, because many systems suffer a strong back-action from their environment [@lab3]. Hence, non-Markovianity plays an important role in many respects, and there is by now a substantial body of work on it. Non-Markovianity can assist the formation of steady-state entanglement [@lab4]; non-Markovian coherent feedback control can suppress decoherence [@lab5]; and non-Markovian effects have been incorporated into a new theory of polymer reaction kinetics, so that the dynamics of polymers can be controlled [@lab6]. The direct observation of non-Markovian radiation dynamics has been achieved in three-dimensional bulk photonic crystals [@lab7], and the non-Markovian dynamics of a single quantum dot has been observed in a micropillar cavity [@lab8]. It is worth noting that the authors of Ref. [@lab9] find a new resource for quantum memory effects in nonlocal non-Markovianity: they utilize the quantum correlations between two environments and control the local interaction time to turn a Markovian into a non-Markovian regime. Further research on the nonlocal non-Markovian effects, such as how to enhance and control them, promises to become more and more interesting. In this article, we further discuss the nonlocal non-Markovian effects when the local dynamics are non-Markovian-non-Markovian, Markovian-non-Markovian, or Markovian-Markovian. We find that, without extra control, the nonlocal non-Markovian effects cannot appear when the local dynamics are Markovian-Markovian. 
Besides the control of the interaction time used in Ref. [@lab9], we find that reducing the strength of the interaction can also turn a Markovian into a non-Markovian regime. Surprisingly, increasing the strength of the interaction can do so as well. In both examples of Ref. [@lab9], the initial correlations of the two environments are nonlocal (quantum correlations). We find that classical correlations can perform as well as quantum correlations. In many real situations, classical correlations between two environments are easier to form than quantum correlations, and we propose two special ways of forming classical correlations that lead to the nonlocal non-Markovian effects. Finally, we investigate the non-Markovian effects in the spin-star configuration in two different situations: with and without the self-correlation of each environment. We obtain the numerical solution and find that the self-correlation of each environment assists the nonlocal non-Markovian effects. Theoretical model ================= We consider an initial state of the two subsystems that is a pure state, $$|\Psi_S^{12}(0)\rangle=a|00\rangle+b|01\rangle+c|10\rangle+d|11\rangle.$$ Under the local interactions with environments $1$ and $2$, the state evolves according to a dephasing map for two qubits of the general form [@lab8] $$\begin{aligned} \left( \begin{array}{cccc} |a|^2 & ab^*\kappa_2(t) & ac^*\kappa_1(t) & ad^*\kappa_{12}(t)\\ ba^*\kappa_2^*(t) & |b|^2 & bc^*\Lambda_{12}(t) & bd^*\kappa_{1}(t)\\ ca^*\kappa_1^*(t) & cb^*\Lambda^*_{12}(t) & |c|^2 & cd^*\kappa_{2}(t)\\ da^*\kappa_{12}^*(t) & db^*\kappa^*_{1}(t) & dc^*\kappa_2^*(t) & |d|^2\\ \end{array} \right),\end{aligned}$$ where $\kappa_i(0)=1$ for $i=1$, $2$, and $12$, and $\Lambda_{12}(0)=1$. The dephasing processes of the local systems $\rho_S^1(t)$ and $\rho_S^2(t)$ are fully determined by the functions $|\kappa_1(t)|$ and $|\kappa_2(t)|$. 
Besides $|\kappa_1(t)|$ and $|\kappa_2(t)|$, the dephasing process of the global system $\rho_S^{12}(t)$ also depends on the functions $|\kappa_{12}(t)|$ and $|\Lambda_{12}(t)|$. So if $|\kappa_1(t)|$ and $|\kappa_2(t)|$ decrease while, at the same time, $|\kappa_{12}(t)|$ or $|\Lambda_{12}(t)|$ increases, it is possible that the local systems lose information through dephasing while the global system regains information. Here we use the measure of non-Markovianity of the dephasing process $\Phi(t)$ [@lab2] $$\begin{aligned} \mathcal{N}(\Phi)=\max_{\rho_{1,2}(0)}\int_{\sigma>0}dt\, \sigma(t,\rho_{1,2}(0)),\end{aligned}$$ where $\rho_1$ and $\rho_2$ represent two different states of the same system, $\sigma(t,\rho_{1,2}(0))=\frac{d}{dt}D(\rho_1,\rho_2),$ and the trace distance $D(\rho_1,\rho_2)=\frac12\textmd{tr}\sqrt{(\rho_1(t)-\rho_2(t))^\dag(\rho_1(t)-\rho_2(t))}$. Here $\sigma(t,\rho_{1,2}(0))>0$ indicates that information flows back to the system from the environment; that is, the dynamics is non-Markovian. For a two-level system, such as the local system 1, if the state of the system recovers coherence at a finite time $t$ ($\frac{d}{dt}|\kappa_1(t)|>0$), the local dynamics is non-Markovian; otherwise, the local dynamics is Markovian. This can be proved as follows. Choose two arbitrary initial states $$\begin{aligned} \left( \begin{array}{cccc} a_1 & b_1^* \\ b_1& d_1 \\ \end{array} \right) \textmd{and} \left( \begin{array}{cccc} a_2 & b_2^* \\ b_2 & d_2 \\ \end{array} \right) .\end{aligned}$$ After the dephasing process, the two states are given by $$\begin{aligned} \left( \begin{array}{cccc} a_1 & b_1^*\gamma(t) \\ b_1\gamma^*(t) & d_1 \\ \end{array} \right) \textmd{and} \left( \begin{array}{cccc} a_2 & b_2^*\gamma(t) \\ b_2\gamma^*(t) & d_2 \\ \end{array} \right).\end{aligned}$$ We then obtain $\sigma(t,\rho_{1,2}(0))=\frac{\frac{d}{dt}|\gamma(t)|}{\sqrt{|b_1-b_2|^2|\gamma(t)|^2+(a_1-a_2+d_2-d_1)^2/4}}$. 
So $\sigma(t,\rho_{1,2}(0))>0$ is equivalent to $\frac{d}{dt}|\gamma(t)|>0$. For a many-level system (the global system), as long as $\frac{d}{dt}|\kappa_{12}(t)|>0$ or $\frac{d}{dt}|\Lambda_{12}(t)|>0$ at some finite time, the information in some subsystem must increase, signifying that the global dynamics is non-Markovian, and vice versa. In other words, in the dephasing environment, whether the global dynamics is non-Markovian depends on $\frac{d}{dt}|\kappa_{12}(t)|>0$ and $\frac{d}{dt}|\Lambda_{12}(t)|>0$ (global system) at a finite time $t$, besides $\frac{d}{dt}|\kappa_1(t)|>0$ (local system 1) and $\frac{d}{dt}|\kappa_2(t)|>0$ (local system 2). Three dynamical processes ===================== We consider environments that have continuous energy levels $w$ and the corresponding eigenstates $|w\rangle$ ($\hbar=1$ throughout this article), without loss of generality. The initial state of the local environment $i$ is given by $$\begin{aligned} \rho_E^i(0)&=Z_0^i\{\int_0^\infty dw\exp[-(w-w_0^i)^2]|w+c_i\rangle\langle w+c_i|\nonumber\\ &+\int_{-\infty}^0dw\exp[-(w+w_0^i)^2]|w+c_i\rangle\langle w+c_i|\},\end{aligned}$$ where, for $i=1$ and $2$ (we use this notation in the following sections), the energy level $w_0^i\geq0$ and $Z_0^i$ is the normalization coefficient. The parameters $w_0^i$ and $c_i$ determine whether the local dynamics is non-Markovian or Markovian. First, we discuss the case in which the local dynamics are non-Markovian-non-Markovian, and the initial correlations between the two environments are classical correlations. 
So, without loss of generality, we take the initial state of the two environments to be $$\begin{aligned} \rho_E^{12}(0)=&Z_{1}\{\int_0^\infty dw\exp[-(w-1)^2]|w\rangle\langle w|\bigotimes|w\rangle\langle w|\nonumber\\ &+\int_{-\infty}^0dw\exp[-(w+1)^2]|w\rangle\langle w|\bigotimes|w\rangle\langle w|\}.\end{aligned}$$ The interaction Hamiltonian is given by $$\begin{aligned} \label{eqn4} H_{int}^i=g_i\int_{-\infty}^\infty dw\sigma_z^i\sigma_w^i,\end{aligned}$$ where $\sigma_w=w|w\rangle\langle w|$; $g_i$ is a coupling constant; and $\sigma_z^i$ is the Pauli operator of system $i$. Then, $$\begin{aligned} \label{eqn5} &|\kappa_1(t)|=|\kappa_2(t)|=|2Z_{1}\int_0^\infty dw\exp[-(w-1)^2]\cos(2gwt)|,\nonumber\\ &|\Lambda_{12}(t)|=|2Z_{1}\int_0^\infty dw\exp[-(w-1)^2]\cos(4gwt)|,\nonumber\\ &|\kappa_{12}(t)|=1,\end{aligned}$$ where $g=g_1=g_2$, as also in Eq.(\[eqn7\]) and Eq.(\[eqn9\]) below. ![\[fig.1\]From Eq.(\[eqn5\]), $|\kappa_1|$ as a function of time $t$, in comparison with $|\Lambda_{12}|$. Here, the coupling constant $g=1$. ](1.eps) As shown in Fig.1, at time $t=0.36$, $|\Lambda_{12}|$ begins to increase while $|\kappa_1|$ and $|\kappa_2|$ keep decreasing. This means that the flow of information from the environment back to the global system begins earlier than the flow back to either of the two local subsystems. Second, we consider local dynamics that are Markovian-non-Markovian. The initial state of the two environments is given by $$\begin{aligned} \rho_E^{12}(0)=&Z_{1}\{\int_0^\infty dw\exp[-w^2]|w\rangle\langle w|\bigotimes|w+1\rangle\langle w+1|\nonumber\\ &+\int_{-\infty}^0dw\exp[-w^2]|w\rangle\langle w|\bigotimes|w+1\rangle\langle w+1|\}.\end{aligned}$$ The interaction Hamiltonian is the same as in Eq.(\[eqn4\]). 
Then we obtain $$\begin{aligned} \label{eqn7} &|\kappa_1(t)|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos(2gwt)|,\nonumber\\ &|\kappa_2(t)|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos[g(2w+2)t]|,\nonumber\\ &|\Lambda_{12}(t)|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos[g(4w+2)t]|,\nonumber\\ &|\kappa_{12}(t)|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos(2gt)|.\end{aligned}$$ ![\[fig.2\]From Eq.(\[eqn7\]), $|\kappa_1|$, $|\kappa_2|$, $|\kappa_{12}|$, and $|\Lambda_{12}|$ as functions of time $t$. Here, the coupling constant $g=1$. ](2.eps) In Fig.2, we can again see that, after a while, $|\Lambda_{12}|$ begins to increase while $|\kappa_1|$ and $|\kappa_2|$ keep decreasing, meaning that the flow of information from the environment back to the global system begins earlier than the flow back to the local subsystem $2$. Here, $|\kappa_1|$ always decreases ($\frac{d}{dt}|\kappa_1|<0$ for all time), so the dynamics of subsystem $1$ is Markovian. Finally, we consider local dynamics that are Markovian-Markovian. A classical initial state of the two environments is described by $$\begin{aligned} \label{eqn8} \rho_E^{12}(0)=&Z_{1}\{\int_0^\infty dw\exp[-w^2]|w\rangle\langle w|\bigotimes|w\rangle\langle w|\nonumber\\ &+\int_{-\infty}^0dw\exp[-w^2]|w\rangle\langle w|\bigotimes|w\rangle\langle w|\}.\end{aligned}$$ Furthermore, using the same interaction Hamiltonian (see Eq.(\[eqn4\])), we get $$\begin{aligned} \label{eqn9} &|\kappa_1(t)|=|\kappa_2(t)|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos(2gwt)|,\nonumber\\ &|\Lambda_{12}(t)|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos[4gwt]|,\nonumber\\ &|\kappa_{12}(t)|=1.\end{aligned}$$ ![\[fig.3\]From Eq.(\[eqn9\]), $|\kappa_1|$ and $|\Lambda_{12}|$ as functions of time $t$. Here, the coupling constant $g=1$. ](3.eps) From Fig.3, we see that $|\Lambda_{12}|$ always decreases and never increases. 
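The qualitative contrast between the first and third dynamical processes can be reproduced numerically from Eqs. (\[eqn5\]) and (\[eqn9\]). The following Python sketch (the integration grid and time window are illustrative assumptions of this sketch) checks that $|\Lambda_{12}|$ revives for the shifted Gaussian distribution while every curve decays monotonically for the centred one.

```python
import numpy as np

# Illustrative numerical check of Eqs. (eqn5) and (eqn9).
g = 1.0
w = np.linspace(0.0, 8.0, 3001)
dw = w[1] - w[0]
t = np.linspace(0.0, 1.2, 401)

def kappa(weight, c):
    # |2 Z_1 \int_0^infty weight(w) cos(c g w t) dw|, with Z_1 = 1/(2 \int weight dw)
    Z1 = 1.0 / (2.0 * np.sum(weight) * dw)
    return np.abs(2.0 * Z1 * np.sum(weight * np.cos(c * g * np.outer(t, w)), axis=1) * dw)

peaked = np.exp(-(w - 1.0)**2)   # Eq. (eqn5): both local dynamics non-Markovian
centred = np.exp(-w**2)          # Eq. (eqn9): both local dynamics Markovian

k1_p, L12_p = kappa(peaked, 2.0), kappa(peaked, 4.0)
k1_c, L12_c = kappa(centred, 2.0), kappa(centred, 4.0)

# Shifted Gaussian: |Lambda_12| revives while |kappa_1| is still decreasing (Fig. 1).
i = int(np.argmax(np.diff(L12_p) > 0))
assert i > 0 and np.diff(k1_p)[i] < 0.0

# Centred Gaussian: monotone decay of both functions, so no revival (Fig. 3).
assert np.all(np.diff(k1_c) < 0.0) and np.all(np.diff(L12_c) < 0.0)
```

The revival of $|\Lambda_{12}|$ for the shifted distribution appears at $t\approx0.4$ on this grid, in line with the behaviour reported around Fig. 1.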
When the local dynamics are Markovian-Markovian, we cannot find an initial state of the two environments that makes the nonlocal non-Markovian effects appear, even in the presence of quantum correlations between the two environments. From the above three dynamical processes and the corresponding figures, we can conclude that the correlations between the two environments can only make the flow of information from the environment back to the global system begin finitely earlier than the flow back to either of the two local subsystems. In other words, there is an upper bound on the effect of the correlations between the two environments. So if the local dynamics are Markovian-Markovian, the correlations cannot shorten the infinite onset time of the flow of information from the environment back to the global system to a finite one. Namely, the global dynamics is still Markovian when the local dynamics are Markovian-Markovian. Extra control ============= However, all of the foregoing analysis assumes no extra control (that is, the whole Hamiltonian of the systems and the two environments is independent of time). If there is additional control, it is possible to obtain the nonlocal non-Markovian effects even when the local dynamics are Markovian-Markovian. In Ref.[@lab9], the authors control the time of the two local interactions to turn a Markovian into a non-Markovian regime. Here, we find that reducing the coupling strength $g_i$ (that is, reducing the rate of local dephasing) can achieve the same. Surprisingly, increasing the coupling strength (increasing the rate of local dephasing) can perform just as well. Let the initial state of the two environments be that of Eq.(\[eqn8\]), and let the interaction Hamiltonian be given by Eq.(\[eqn4\]), so that the local dynamics are Markovian-Markovian; without extra control, the global dynamics is also Markovian. First, we consider reducing the coupling strength. Initially, the coupling strengths are $g_1=3$ and $g_2=2$; at $t=1$, we reduce $g_1$ from $3$ to $1$. 
Then, we get $$\begin{aligned} \label{eqn10} &|\kappa_1(t')|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos(wt'+3w)|,\nonumber\\ &|\kappa_2(t')|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos[2wt'+2w]|,\nonumber\\ &|\Lambda_{12}(t')|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos[3wt'+5w]|,\nonumber\\ &|\kappa_{12}(t')|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos(w-wt')|,\end{aligned}$$ where $t'=t-1$. ![\[fig.4\]From Eq.(\[eqn10\]), $|\kappa_1|$, $500|\Lambda_{12}|$, $|\kappa_2|$ and $|\kappa_{12}|$ as functions of time $t'$. In order to compare them more easily, we plot $500|\Lambda_{12}|$ in place of $|\Lambda_{12}|$. ](4.eps) As shown in Fig.4, $|\kappa_{12}|$ clearly increases, signifying that reducing the coupling strength $g_1$ can turn the Markovian into the non-Markovian regime. Next, we discuss increasing the coupling strength. Initially, the coupling strengths are $g_1=2$ and $g_2=1$; at $t=1$, we increase $g_2$ from $1$ to $3$. Then, we obtain $$\begin{aligned} \label{eqn11} &|\kappa_1(t')|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos(wt'+2w)|,\nonumber\\ &|\kappa_2(t')|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos[2wt'+w]|,\nonumber\\ &|\Lambda_{12}(t')|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos[5wt'+3w]|,\nonumber\\ &|\kappa_{12}(t')|=|2Z_{1}\int_0^\infty dw\exp[-w^2]\cos(w-wt')|,\end{aligned}$$ where $t'=t-1$. ![\[fig.5\]From Eq.(\[eqn11\]), $|\kappa_1|$, $|\Lambda_{12}|$, $|\kappa_2|$ and $|\kappa_{12}|$ as functions of time $t'$. ](5.eps) From Fig.5, $|\kappa_{12}|$ also increases, indicating that the nonlocal non-Markovian effects appear when the coupling strength increases. Forming classical correlations ============================== Two nonlocal environments often share no correlations at all. It is very hard to create quantum correlations between two macroscopic environments, but classical correlations can be formed relatively easily. And, as shown in the sections above, classical correlations can serve the nonlocal non-Markovian effects just as well. 
We now suggest two ways to form classical correlations between the two environments. First, we consider a Bell state $1/\sqrt{2}(|00\rangle+|11\rangle)$ that disentangles through local interaction with the two environments, where the environments are composed of bosons. The system is then allowed to interact locally with the same environments. The interaction Hamiltonian is given by $$\begin{aligned} H_{int}^i(t)&=\chi_i(t)\sum_{k_i=1}^{n_i}g_{k_i}\sigma_z^i(b_{k_i}^\dagger+b_{k_i})\nonumber\\ &+{\chi}'_i(t)\sum_{k_i=1}^{n_i}g_{k_i}{\sigma '}_z^{i}(b_{k_i}^\dagger+b_{k_i}),\end{aligned}$$ where the first term on the right-hand side is the interaction Hamiltonian between the system and the environments, and the second term is the interaction Hamiltonian between the entangled system (another system, initially in the Bell state, with ${\sigma'}_z^{i}$ the Pauli operator of the corresponding subsystem $i$) and the same environments. The function ${\chi}'_i(t)=1$ for $0\leq t\leq t'$ and zero otherwise; ${\chi}_i(t)=1$ for $t_i^s\leq t\leq t_i^f$ and zero otherwise. Here, $t_i^s$ and $t_i^f$ denote the times when the interaction is switched on and off for subsystem $i$, respectively, while $0$ and $t'$ denote the corresponding times for the entangled system. We define $t_i(t)=\int_0^t\chi_i(t')dt'$ and $t'_i(t)=\int_0^t{\chi}'_i(t')dt'$. The Hamiltonian of the environments is described by $$\begin{aligned} H_E^{12}&=\sum_{k_1=1}^{n_1}w_{k_1}b^\dagger_{k_1} b_{k_1}+\sum_{k_2=1}^{n_2}w_{k_2}b^\dagger_{k_2} b_{k_2}.\end{aligned}$$ The initial state of the environments is the thermal equilibrium state $\rho_E^{12}=1/Z_{12}\exp(-\beta H_E^{12})$, where $Z_{12}$ is the partition function.
Then, using the weak-coupling approximation $[\exp(\alpha_k b^\dag-\alpha_k^* b), \exp(-i w_k b_k^\dag b_k)]\approx0$, and taking the continuum limit with the Ohmic spectral density $A_iw\exp(-w/\Omega_i)$ [@lab10; @lab11; @lab12] ($A_i$ is the coupling constant and $\Omega_i$ the frequency cutoff), we get $$\begin{aligned} \label{eqn14} &|\kappa_{1}|=|\exp(-\Gamma_1)\cos[\int dwA_1\exp(-w/\Omega_1)\xi_1]|,\nonumber\\ &|\Lambda_{12}|=|\exp(-\Gamma_1-\Gamma_2)\cos[\sum_{i=1}^2\int dwA_i\exp(-w/\Omega_i)\xi_i]|,\nonumber\\ &\textmd{in which},\nonumber\\ & \xi_i=2\sin [w(t_i(t)-t'_i(t))]-2\sin [w t_i(t)]+2\sin [w t'_i(t)], \nonumber\\ &\Gamma_i=\int dwA_i\exp(-w/\Omega_i)\coth(2w/\beta)(1-\cos[w t_i(t)]),\nonumber\\ &\alpha_k=g_k\frac{1-\exp(iw_kt)}{w_k}.\end{aligned}$$ As shown in Fig. 6, information begins to flow back to the global system when $|\Lambda_{12}|$ begins to increase, meaning that correlations between the two environments have been formed and make the nonlocal non-Markovian effects emerge. We find that the Bell state plays the same role as the classical state $1/2|00\rangle\langle00|+1/2|11\rangle\langle11|$ in forming the classical correlations between the two environments, so the scheme is robust against some noise. If one wants to form stronger correlations, one simply needs to increase the number of Bell pairs.

![\[fig.6\]From Eq. (\[eqn14\]), $|\kappa_1|$ and $|\Lambda_{12}|$ change with time $t$. Parameters: $\Gamma_2=0.5$, $A_1=1$, $\beta=0.2$, $\xi_2=\pi/2$, $t'_1(t)=t'_2(t)=1$ and $t_1(t)=t$. ](6.eps)

The second way is as follows: after the two subsystems have interacted locally for some time with their respective, initially uncorrelated environments, the locations of the two subsystems are exchanged, so that each subsystem then interacts locally with the other subsystem's initial environment. By a calculation similar to that of the first way, it is easy to observe the nonlocal non-Markovian effects.
The reason is that the system loses information to the environments, which builds up correlations between the two environments. Exchanging the locations of the two subsystems is necessary: without the exchange, the correlations between the system and the two environments keep the system losing information and never lead to the appearance of the nonlocal non-Markovian effects. The benefit of this way is that no additional correlated states are needed to form the correlations between the two environments. Obviously, however, it cannot form correlations as strong as the first way, and consequently it cannot create strong non-Markovian effects.

The self-correlation promoting the nonlocal non-Markovian effects
=================================================================

In this section, we explore whether self-correlation within the two environments promotes the nonlocal non-Markovian effects. We consider a simple model: the spin-star configuration [@lab13]. The interaction Hamiltonian is described by $$\begin{aligned} &H_{int}=\sum_{i=1}^2\sum_{j=1}^{n_i}\eta_i(t)g_{ij}\sigma_{ij}^{z}\sigma_{iS}^z,\end{aligned}$$ where $\eta_i(t)=1$ for $ t_i^s\leq t\leq t_i^f$ and zero otherwise; $\sigma_{ij}^{z}$ and $\sigma_{iS}^z$ are the Pauli operators of environment $i$ and subsystem $i$, respectively. The two environments are initially in the thermal equilibrium state $\rho_E^{12}=1/Z_{12}\exp[-\beta H_E^{12}]$, where the Hamiltonian of the two environments is $H_E^{12}=\sum_{i=1}^2B_iS_i^z+\alpha S_1^zS_2^z$. The operator $S_i^z=\sum_{j=1}^{n_i}1/2\sigma_{ij}^z$ in the absence of self-correlation within each environment, and $S_i^z=\sum_{j=1}^{n_i}(1/2\sigma_{ij}^z+J_i/B_i\sum_{<mn>}\sigma_{im}^z\sigma_{in}^z)$ in its presence.
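Since every operator in this dephasing model is diagonal in the product $\sigma^z$ basis, the traces defining $\kappa_1$ and $\Lambda_{12}$ below can be evaluated exactly by summing over collective magnetization sectors. A minimal numerical sketch (assuming numpy; it treats only the no-self-correlation case with $g_{ij}=1$, and the function name and default parameter values are illustrative):

```python
import numpy as np
from math import comb

def spin_star_traces(t1, t2, n=5, B=2.0, alpha=4.0, beta=0.01):
    """|kappa_1| and |Lambda_12| for the spin-star model without
    self-correlation (g_ij = 1), by exact summation over the diagonal
    sigma^z product basis.  t1, t2 are the accumulated interaction
    times int_0^t eta_i(t') dt' for the two subsystems."""
    # eigenvalues of S_i^z = sum_j (1/2) sigma_ij^z, with binomial degeneracy
    m = np.array([k - n / 2.0 for k in range(n + 1)])
    deg = np.array([comb(n, k) for k in range(n + 1)], dtype=float)
    M1, M2 = np.meshgrid(m, m, indexing="ij")
    # thermal weights of H_E^12 = B (S_1^z + S_2^z) + alpha S_1^z S_2^z
    W = np.outer(deg, deg) * np.exp(-beta * (B * (M1 + M2) + alpha * M1 * M2))
    W /= W.sum()
    # sum_j sigma_1j^z = 2 S_1^z, so the dephasing factor is exp(-4 i t1 S_1^z)
    kappa1 = abs((W * np.exp(-4j * t1 * M1)).sum())
    lam12 = abs((W * np.exp(-4j * (t1 * M1 + t2 * M2))).sum())
    return kappa1, lam12

# both traces equal 1 at zero interaction time and are bounded by 1 afterwards
k1, l12 = spin_star_traces(0.0, 0.0)
assert abs(k1 - 1.0) < 1e-12 and abs(l12 - 1.0) < 1e-12
k1, l12 = spin_star_traces(0.4, 0.2)
assert k1 <= 1.0 and l12 <= 1.0
```

The self-correlation case only changes the diagonal energies entering the thermal weights, so it can be handled by the same enumeration over configurations.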
Then we obtain $$\begin{aligned} \label{eqn16} &|\kappa_1(t)|=|Tr[\exp[-2i\sum_{j=1}^{n_1}\int_0^tdt'\eta_1(t')g_{1j}\sigma_{1j}^z]\rho_E^{12}]|,\nonumber\\ &|\Lambda_{12}(t)|=|Tr[\exp[-2i\sum_{i=1}^2\sum_{j=1}^{n_i}\int_0^tdt'\eta_i(t')g_{ij}\sigma_{ij}^z]\rho_E^{12}]|.\end{aligned}$$

![\[fig.7\]$|\kappa_1|$ and $|\Lambda_{12}|$ change with time $t$ in the presence of self-correlation. Parameters: $n_i=5$, $\alpha=4$, $\beta=0.01$, $B_i=2$, $J_i=10$, $\int_0^tdt'\eta_2(t')=0.2$, $\int_0^tdt'\eta_1(t')=t$ and $g_{ij}=1$ for $i=1,2$; $j=1,...,5$.](7.eps)

![\[fig.8\]$|\kappa_1|$ and $10^7|\Lambda_{12}|$ change with time $t$ in the absence of self-correlation. Parameters: $n_i=5$, $\alpha=4$, $\beta=0.01$, $B_i=2$, $\int_0^tdt'\eta_2(t')=0.785$, $\int_0^tdt'\eta_1(t')=t$ and $g_{ij}=1$ for $i=1,2$; $j=1,...,5$.](8.eps)

Comparing Fig. 7 with Fig. 8, both plotted numerically from Eq. (\[eqn16\]), it is obvious from the growth of $\Lambda_{12}(t)$ that the nonlocal non-Markovian effects are stronger with the help of self-correlation.

Conclusion
==========

In this article we have explored in detail a quantum system composed of two subsystems interacting locally with two environments. We find that the effect of the correlations between the two environments has an upper bound: they can only make the flow of information from the environment back to the global system start finitely earlier than the flow back to either of the two local subsystems; they cannot turn an infinite onset time into a finite one (when the whole Hamiltonian of the system and the two environments is independent of time). This means that the nonlocal non-Markovian effects cannot appear when both local dynamics are Markovian. Of course, extra control can turn a Markovian regime into a non-Markovian one. Besides controlling the local interaction times, reducing the coupling strength of the local interaction (i.e., reducing the rate at which the subsystems lose information) can make the nonlocal non-Markovian effects appear.
Surprisingly, enhancing the coupling strength of the local interaction (increasing the rate at which the subsystems lose information) can do so as well. We have also suggested two experimentally accessible ways to form classical correlations between two environments without initial correlations. Finally, we find that self-correlation within the two environments can promote the nonlocal non-Markovian effects. Recently, Ref. [@lab14] assessed non-Markovianity based on ideas of divisibility of channels; Ref. [@lab15] proposed a way to quantify memory effects in the spin-boson model; Ref. [@lab16] put forward two different approaches to quantifying non-Markovianity; and Ref. [@lab17] quantified non-Markovianity via correlations. It would also be very interesting to quantify the nonlocal non-Markovianity in these different ways, for a better understanding and characterization of nonlocal non-Markovian effects in more complex systems. The nonlocal non-Markovian effects represent a nonlocal memory of the global environment, which may be exploited in quantum information processing tasks such as quantum memory [@lab18] and quantum error correction [@lab19]. This article should therefore be useful for future work on utilizing and controlling the nonlocal non-Markovian effects through control of the local dynamics.

Acknowledgments
===============

This work was supported by the National Natural Science Foundation of China under Grant No. 10975125.

Herbert Spohn 1980 *Rev. Mod. Phys.* [**52**]{} 569. Heinz-Peter Breuer, Elsi-Mari Laine, and Jyrki Piilo 2009 *Phys. Rev. Lett.* [**103**]{} 210401. A. Ishizaki and G. R. Fleming 2009 *J. Chem. Phys.* [**130**]{} 234110; 2009 [**130**]{} 234111; B. Bellomo, R. Lo Franco, and G. Compagno 2007 *Phys. Rev. Lett*. [**99**]{} 160502; Fleming, and K. B. Whaley, 2010 *Nature Phys.* [**6**]{} 462; A. G. Dijkstra and Y. Tanimura 2010 *Phys. Rev. Lett.* [**104**]{} 250401; J.-Q. Liao, J.-F. Huang, L.-M. Kuang, and C. P. Sun 2010 *Phys.
Rev.* A [**82**]{} 052109; A. Imamoglu 1994 *Phys. Rev. A* [**50**]{} 3650; Wang Xiao-Yun, Ding Bang-Fu, and Zhao He-Ping 2013 *Chin. Phys. B* [**22**]{} 040308; Ding Bang-Fu, Wang Xiao-Yun, Tang Yan-Fang, Mi Xian-Wu, and Zhao He-Ping 2011 *Chin. Phys. B* [**20**]{} 060304. Susana F. Huelga, $\acute{A}$ngel Rivas, and Martin B. Plenio 2012 *Phys. Rev. Lett.* [**108**]{} 160402. Shi-Bei Xue, Re-Bing Wu, Wei-Min Zhang, Jing Zhang, Chun-Wen Li, and Tzyh-Jong Tarn 2012 *Phys. Rev.* A [**86**]{} 052304. T. Gu$\acute{e}$rin, O. B$\acute{e}$nichou, and R. Voituriez 2012 *Nature Chemistry* [**4**]{} 568-573. Ulrich Hoeppe, Christian Wolff, Jens Kuchenmeister, Jens Niegemann, Malte Drescher, Hartmut Benner, and Kurt Busch 2012 *Phys. Rev. Lett.* [**108**]{} 043603. K. H. Madsen, S. Ates, T. Lund-Hansen, A. L$\ddot{o}$ffler, S. Reitzenstein, A. Forchel, and P. Lodahl 2011 *Phys. Rev. Lett.* [**106**]{} 233601. Elsi-Mari Laine, Heinz-Peter Breuer, Jyrki Piilo, Chuan-Feng Li, and Guang-Can Guo 2012 *Phys. Rev. Lett.* [**108**]{} 210402. H.-P. Breuer and F. Petruccione 2002 *The Theory of Open Quantum Systems* (Oxford University Press, New York). A. J. Leggett, S. Chakravarty, A. Dorsey, M. Fisher, A. Garg, and W. Zwerger 1987 *Rev. Mod. Phys.* [**59**]{} 1. U. Weiss 2008 *Quantum Dissipative Systems* (World Scientific, Singapore). Heinz-Peter Breuer, Daniel Burgarth, and Francesco Petruccione 2004 Phys. Rev. B [**70**]{} 045323. M. M. Wolf, J. Eisert, T. S. Cubitt, and J. I. Cirac 2008 *Phys. Rev. Lett.* [**101**]{} 150402. Govinda Clos and Heinz-Peter Breuer 2012 *Phys. Rev.* A [**86**]{} 012115. $\acute{A}$ngel Rivas, Susana F. Huelga, and Martin B. Plenio 2010 *Phys. Rev. Lett.* [**105**]{} 050403. Shunlong Luo, Shuangshuang Fu, and Hongting Song 2012 *Phys. Rev.* A [**86**]{} 044101. Nicolas Sangouard, Christoph Simon, Hugues de Riedmatten, and Nicolas Gisin 2011 *Rev. Mod. Phys.* [**83**]{} 33. Gerardo A. Paz-Silva, A. T. Rezakhani, Jason M. Dominy, and D. A. 
Lidar 2012 *Phys. Rev. Lett.* [**108**]{} 080501.
--- abstract: 'In classical superconductors an energy gap and phase coherence appear simultaneously with pairing at the transition to the superconducting state. In high-temperature superconductors, the possibility that pairing and phase coherence are distinct and independent processes has led to intense experimental search of their separate manifestations, but so far without success. Using femtosecond spectroscopy methods we now show that it is possible to clearly separate fluctuation dynamics of the superconducting pairing amplitude from the phase relaxation above the critical transition temperature. Empirically establishing a close correspondence between the superfluid density measured by THz spectroscopy and superconducting optical pump-probe response over a wide region of temperature, we find that in differently doped $Bi_{2}Sr_{2}CaCu_{2}O_{8+\delta}$  crystals the pairing gap amplitude monotonically extends well beyond $T_{c}$, while the phase coherence shows a pronounced power-law divergence as $T\rightarrow T_{c}$, thus showing for the first time that phase coherence and gap formation are distinct processes which occur on different timescales.' author: - 'I.Madan$^{1}$, T.Kurosawa$^{2}$, Y.Toda$^{2}$, M.Oda$^{3}$, T.Mertelj$^{1}$, P.Kusar$^{1}$, D.Mihailovic$^{1}$' title: 'Separating pairing from quantum phase coherence dynamics above the superconducting transition by femtosecond spectroscopy.' --- Anomalous normal state behavior above the critical temperature appears to be a hallmark of unconventional superconductivity and is present in many different classes of materials. 
A pseudogap state has been suggested to be associated with a wide range of possible phenomena preceding the onset of macroscopic phase coherence at the superconducting critical transition temperature $T_{c}$: pre-formed pairs [@Alexandrov2001; @Alexandrov1981; @Alexandrov2011a; @Mihailovic2002; @Kresin2011; @Ovchinnikov2002; @Geshkenbein1997; @Alexandrov1993; @Solovev2009], a spin-gap [@EmeryPRB1997], the formation of a Bose metal [@Phillips2003], a Fermi or Bose glass, or a state composed of “dirty bosons” [@Das_PRB1999; @Das_PRB2001; @Vojta2003], and more recently a charge-density-wave state [@Torchinsky2013; @Sugai2006]. However, apart from the pseudogap (PG) response below the temperature designated as $T^{*}$, the response attributed to “superconducting fluctuations” above $T_{c}$ has been observed in a number of experiments [@Silva2001; @Truccato2006; @Orenstein2006; @Junod2000; @Tallon2011; @Kondo2010; @Wang2006; @Rullier-Albenque2006; @Pourret2006; @Li2010; @Mihailovic1987]. The temperature region $T_{c}<T<T_{onset}$ where such fluctuations are observable is significantly wider than in conventional superconductors, but smaller than $T^{*}$. The open and obvious question is whether the pseudogap or the superconducting fluctuations can be attributed to pairing. The problem in separating the response due to superconducting fluctuations from the PG is that so far, inevitably, one has had to make extrapolations, or assumptions about the underlying temperature dependences and line shapes of the response functions, in transport [@Silva2001; @Truccato2006; @Ghosh1999; @Balestrino2001; @Bhatia1994], magnetic susceptibility [@Li2010; @Wang2005], specific heat [@Junod2000; @Tallon2011] or photoemission (ARPES) [@Kondo2010], which may at best introduce inaccuracies in the temperature scales, and at worst lead to erroneous conclusions.
Alternatively, one can suppress superconductivity by high magnetic fields up to 60 T [@Rullier-Albenque2011], although there exists a risk of inducing new states by such a high field [@Chen2002]. Thus, so far it has not been possible to satisfactorily characterize superconducting fluctuations and discriminate between fluctuations of the amplitude $\delta\psi$ (related to the pairing gap) and phase $\delta\theta$ of the complex order parameter $\Psi=\psi e^{i\theta}$. In pump-probe experiments three relaxation components, shown in Fig. \[Fig:Signal\_traces\]a), are typically observed: 1) the quasiparticle (QP) recombination in the SC state, 2) the pseudogap state response below $T^{*}$, and 3) the energy relaxation of hot electrons. The QP dynamics has been shown to be described very well by the Rothwarf-Taylor (R-T) model [@RT_Kaindl2005; @RT_Kabanov2005], and the response related to the presence of non-equilibrium QPs is thus unambiguous. Importantly, the presence of the QP response is directly related to the presence of a pairing gap for QP excitations. Recent experiments have already proved the coexistence of the pseudogap excitations with superconductivity below $T_{c}$ over the entire range of the phase diagram [@Liu2008; @Nair2010]. However, superconducting fluctuation dynamics above $T_{c}$ have not been investigated until now. In this paper we present measurements by a 3-pulse technique which allows us to single out the response of superconducting gap fluctuations, distinct from the PG. Selective destruction of the superconducting state by a femtosecond laser pulse [@Kusar_PRL2008] allows us to discriminate pseudogap excitations from superconducting fluctuations seen in transient reflectivity signals, thus avoiding the necessity of making extrapolations or assumptions in separating the different contributions. We then compare these data with a.c.
conductivity (THz) measurements [@CorsonORENST1999] and establish the proportionality of the amplitude of the superconducting component in the pump-probe experiment to the bare phase stiffness $\rho_{0}$ measured by THz experiments. This is directly proportional to the bare pair density $n_{s}$ [@CorsonORENST1999; @Wang2006], which in turn coincides with the superfluid density when the latter is measured on a timescale on which changes of the order parameter due to either de-pairing or movement of the vortices can be neglected. Comparison of the critical behavior of the amplitude and phase correlation times above $T_{c}$ leads us to the conclusion that the two quantities arise from different microscopic processes. ![image](Mihailovic_fig1){width="100.00000%"} We perform measurements on under- (UD), near optimally- (OP) and over- (OD) doped Bi2212 with $T_{c}$s of 81, 85 and 80 K respectively. In the discussion we focus on the underdoped sample, and discuss comparisons with the optimally and overdoped samples where applicable. Results {#results .unnumbered} ======= **Measurements of pairing amplitude above $T_{c}$**. To separate the SC component from the PG component we use a 3-pulse technique described in Refs. [@Kusar_Submited2013; @Yusupov_NatPhys2010; @Kusar_PRB2011; @Mertelj_PRL2013], and schematically represented in Fig. \[Fig:Signal\_traces\]b). A pulse train of 800 nm 50 fs pulses produced by a 250 kHz regenerative amplifier is divided into three beams with variable delays. First, a relatively strong “destruction” (D) pulse, with fluence just above the superconducting state photo-destruction threshold $F_{th}^{SC}=13$ $\mu J/cm^{2}$ [@Toda_PRB2011], destroys the superconducting condensate [@Stojchevska2011]. The ensuing recovery of the signal is measured by means of the 2-pulse pump-probe (P-pr) response at a variable delay $t_{D-P}$ between the D and P pulses.
The pseudogap state remains unaffected as long as the excitation fluence is well below the pseudogap destruction fluence, which is measured to be $F_{th}^{PG}=32$ $\mu J/cm^{2}$. Measurements at higher temperatures (at 120 and 140 K), where no fluctuations are present, confirm that the D pulse has no effect on the PG response at the selected fluence. A typical result of the 3-pulse experiment is presented in Fig. \[Fig:Signal\_traces\]e). In the absence of the D pulse the signal consists of a positive SC and a negative PG component. After the arrival of the D pulse we see a disappearance of the SC part and only the PG component is present (dark-red line in Fig. \[Fig:Signal\_traces\]c)). With increasing $t_{D-P}$ the superconducting response gradually re-emerges (blue line in Fig. \[Fig:Signal\_traces\]c)). As most of the condensate in the probe volume is “destroyed” by the D pulse, we can extract the superconducting component by subtracting the signal remaining after the destruction (measured 200 fs after the D pulse) from the signal obtained in the absence of the D pulse. The extracted superconducting component is plotted in Fig. \[Fig:Signal\_traces\]d). The signal is detectable up to $T_{onset}=104$ K, which is 0.28 $T_{c}$ above $T_{c}$ but much lower than $T^{*}\approx2.5$ $T_{c}$. In Fig. \[Fig:Signal\_traces\]g) we show the recovery of the amplitude of the superconducting component $A_{sc}$ as a function of the time delay $t_{D-P}$ after the D pulse at 90 K. The temperature dependence of the amplitude of the SC component measured by the 3-pulse technique, $A_{sc}^{3pulse}$, is shown in Fig. \[Fig.The-relaxation-times\]b).
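Recovery curves of $A_{sc}$, such as the one in Fig. \[Fig:Signal\_traces\]g), are reduced to a single recovery time by an exponential fit. A minimal sketch of such a fit on synthetic data (assuming numpy/scipy; the saturating-exponential form and all numerical values here are illustrative, not the measured response):

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, A0, tau):
    """Saturating-exponential recovery of the SC amplitude after the D pulse."""
    return A0 * (1.0 - np.exp(-t / tau))

# synthetic recovery trace: amplitude 1, tau_rec = 2 ps, with small noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 60)                 # delay t_{D-P} in ps
trace = recovery(t, 1.0, 2.0) + 0.01 * rng.standard_normal(t.size)

# least-squares fit returns the amplitude and the recovery time
popt, _ = curve_fit(recovery, t, trace, p0=(0.5, 1.0))
A0_fit, tau_fit = popt
assert abs(tau_fit - 2.0) < 0.2 and abs(A0_fit - 1.0) < 0.05
```

The same kind of single-exponential fit also underlies the QP relaxation times quoted below.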
![\[Fig.The-relaxation-times\] **Comparison of pairing amplitude and phase coherence dynamics.** a) The recovery time of the optical superconducting signal ($\tau_{rec}$), the QP recombination time $\tau_{QP}^{3Pulse}$ measured by the three-pulse technique, and the QP recombination time $\tau_{QP}^{Subtraction}$ from two-pulse pump-probe measurements obtained by subtraction of the PG signal. A fit to the data using a BKT model (eq. 3 of ref. [@Orenstein2006]) is shown by the solid red line. The dashed line shows the fluctuation lifetime $\tau_{GL}$ given by the TDGL theory. The phase correlation time $\tau_{\theta}^{THz}$ obtained from the THz conductivity measurements [@CorsonORENST1999] is also shown for comparison. b) The amplitude of the SC signal measured by the three-pulse technique ($A_{SC}^{3Pulse}$) and by the two-pulse measurements with the PG signal subtracted ($A_{SC}^{Subtraction}$). The bare phase stiffness $\rho_{0}$ [@CorsonORENST1999] (normalized at $T_{c}$) shows remarkable agreement with the optical response. ](Mihailovic_fig2){width="1\columnwidth"} For comparison, standard pump-probe measurements need to separate the SC relaxation from the PG relaxation by subtraction of the high-temperature response extrapolated into the superconducting region. This approach suffers from the same uncertainties as other techniques such as conductivity, heat capacity, diamagnetism and ARPES. The actual $T$-dependence of the PG response can vary with doping, pump energy and probe wavelength [@Toda_PRB2011; @Coslovich2013]. However, in Fig. \[Fig.The-relaxation-times\]b) we show that the subtraction procedure - with the use of a model [@Toda_PRB2011] - gives results in agreement with the direct 3-pulse measurements. The remaining discrepancies in the amplitude can be explained by an incomplete destruction of the fluctuating superconducting state in the 3-pulse experiment and by errors in the PG subtraction. In Fig.
\[Fig.The-relaxation-times\]a) we see that the QP relaxation time $\tau_{QP}^{3Pulse}$, obtained by fitting an exponential function to the data of Fig. \[Fig:Signal\_traces\]d), decreases rather gradually with increasing $T$ above $T_{c}$ and nearly coincides with the QP relaxation time obtained by the pseudogap subtraction procedure, $\tau_{QP}^{Subtraction}$. The recovery time $\tau_{rec}$ obtained from exponential fits to the recovery of the SC response above $T_{c}$ (Fig. \[Fig:Signal\_traces\]e)) shows a similar $T$-dependence. The experiments thus show that the recovery of the SC gap and the QP relaxation exhibit very similar dynamics above $T_{c}$. **Comparison of optical and a.c. conductivity measurements.** We now compare these data with THz measurements of the order parameter correlation time and bare phase stiffness [@Orenstein2006; @CorsonORENST1999]. The agreement between $\rho_{0}$ and $A_{sc}$ shown in Fig. \[Fig.The-relaxation-times\]b) is seen to be remarkably good over the entire range of measurements $0.8\, T_{c}<T<1.3\, T_{c}$. This agreement is important because, taking into account $n_{s}\sim|\Psi|{}^{2}$, it validates the approximation that the pump-induced changes in the reflectivity or dielectric constant $\epsilon$ for small $n_{s}$ are related to the order parameter $\Psi$ as $\delta R\sim\delta\varepsilon\sim|\Psi|{}^{2}$. In contrast to $A_{SC}$ and $\rho_{0}$, remarkable differences are seen in the temperature dependences of the *characteristic lifetimes* shown in Fig. \[Fig.The-relaxation-times\]a) obtained by optical techniques and THz conductivity measurements. The phase correlation time $\tau_{\theta}^{THz}$ determined from the THz conductivity [@Orenstein2006] dies out very rapidly with increasing temperature, while the $T$-dependence of the amplitude relaxation ($\tau_{QP}^{3Pulse}$, $\tau_{QP}^{Subtraction}$ or $\tau_{rec}$) is much more gradual. Measurements on an optimally doped sample (Fig.
\[fig: Doping\]c), d)) show qualitatively the same results, with $T_{onset}\sim$102 K, which is 17 K above $T_{c}$, and a slightly faster decrease of both the amplitude and the QP relaxation time with temperature. For the overdoped sample, $F_{th}^{PG}$ becomes comparable to $F_{th}^{SC}$, so the superconducting component cannot be significantly suppressed without affecting the pseudogap. Nevertheless, the superconducting component is clearly observable in the 2-pulse response up to $T_{onset}\sim93$ K, i.e. 13 K above $T_{c}$. Comparison of 2-pulse data for different doping levels is shown in Fig. \[fig: Doping\] e)-g), showing qualitatively similar behavior of the SC amplitude above $T_{c}$. ![image](Mihailovic_fig3){width="100.00000%"} Discussion {#discussion .unnumbered} ========== The coexistence and distinct dynamics of the PG and SC excitations above $T_{c}$ highlight the highly unconventional nature of these states. A possible explanation for the coexistence of the SC and PG excitations is that the SC and PG quasiparticles giving rise to the observed processes are associated with relaxation in different regions of the Fermi surface. Recent Raman and cellular dynamical mean-field studies [@Sakai2013] have suggested that the PG may originate from the states inside the Mott gap, which are characterized by s-wave symmetry and very weak dispersion. Such a localized nature of the PG state excitations is consistent with previous assignments made on the basis of pump-probe experiments [@Kabanov_PRB1999; @Toda_PRB2011]. In contrast, the superconducting gap fluctuations have predominantly $d$-wave symmetry [@Toda2013] and are more delocalized. This would explain the simultaneous presence of the SC fluctuation and PG components in pump-probe experiments.
Perhaps the most widely discussed model in the context of distinct pairing and phase coherence phenomena is the Berezinskii-Kosterlitz-Thouless (BKT) transition [@Berezinskii1971; @Berezinskii1972; @Kost1973], in which decreasing the temperature through $T_{onset}$ and approaching $T_{c}$ causes freely moving thermally activated vortices and anti-vortices to form pairs, thus allowing the condensate to acquire long-range phase coherence in an infinite-order phase transition sharply at $T_{c}$. The bare pair density remains finite up to much higher temperatures (up to $T_{onset}$), where pairing is caused by a different mechanism, and the pseudogap is considered to be an unrelated phenomenon [@Emery1995; @Emery1997; @Emery1998; @Fisher1991]. The effect is expressed in terms of a phase stiffness $\rho_{s}$, a quantity which characterizes the destruction of phase coherence by thermal fluctuations at a temperature $T_{c}=T_{BKT}=\pi\rho_{s}/8$. It is defined by the free energy cost of non-uniformity of the spatially varying order parameter $\Psi$ [@Orenstein2006]. In cuprates $\rho_{s}$ is small due to reduced dimensionality and the low carrier density. Within this approach, a.c. (THz) conductivity [@CorsonORENST1999; @Orenstein2006], heat capacity [@Junod2000; @Tallon2011], diamagnetism [@Li2010; @Wang2005], ARPES [@Kondo2010] and Nernst effect [@Wang2006; @Rullier-Albenque2006; @Pourret2006; @Mukerjee2004] measurements have been interpreted. The BSCCO family is the most two-dimensional of the cuprate materials, so here the BKT mechanism for describing the order above $T_{c}$ would be expected to be most applicable [@CorsonORENST1999]. However, the data in Fig. \[Fig.The-relaxation-times\]a) show that the drop in $\tau_{\theta}$ is not nearly as abrupt as the BKT model predicts.
One possibility is that this broadening arises from chemical inhomogeneity of the sample, but the absence of a peak in the heat capacity at $T_{onset}$ [@Junod2000] also appears to exclude the possibility of a pure BKT transition, and implies that amplitude fluctuations might be present between $T_{c}$ and $T_{onset}$ as well [@Tallon2011]. Thus, additional mechanisms beyond BKT which would broaden the phase coherence transition, such as inter-layer phase fluctuations, may also be present. In this case the observed $T$-dependence would reflect the interlayer de-coherence [@CorsonORENST1999; @Orenstein2006]. In more traditional approaches using time-dependent Ginzburg-Landau (TDGL) theory [@Larkinlate2005], thermal fluctuations are small for temperatures higher than $\sim2$ K above $T_{c}$ [@VanderBeek2000], but can give an observable contribution to the conductivity in this temperature range [@Silva2001; @Truccato2006; @Ghosh1999; @Balestrino2001; @Bhatia1994]. Relaxation within TDGL theory has a longitudinal relaxation time $\tau_{\Delta}$, which corresponds to the relaxation of the magnitude of the SC order parameter, and a transverse relaxation time $\tau_{\theta}$, which corresponds to the relaxation of its phase. They are related to each other in magnitude and have the same critical temperature dependence near $T_{c}$, namely $\tau_{GL}\sim1/(T-T_{c})$ [@PhysRevLett.36.429]. Perhaps unexpectedly, the temperature dependence of $\tau_{\theta}^{THz}$ nearly coincides with the behavior predicted for $\tau_{\theta}$ by time-dependent Ginzburg-Landau theory for Gaussian fluctuations. Fig. \[Fig.The-relaxation-times\]a) suggests that phase coherence within this system is established in a narrow, $\sim5$ K temperature interval. However, the distinctly different critical behavior of the pair amplitude dynamics speaks in favor of unconventional models of superconductivity in which pairing and phase coherence occur independently, by different mechanisms.
The implication is that the observed pairing amplitude, which extends to more than 25 K above $T_{c}$, reflects the response of an inhomogeneous ensemble of gapped patches which are not mutually phase coherent. The weak temperature dependence of the amplitude cannot be described by either the TDGL or the BKT model. Beyond the BKT vortex and TDGL scenarios, other phase-locking scenarios, such as Bose-Einstein condensation of bipolarons [@Alexandrov1981; @Alexandrov2011; @Alexandrov2011a] and phase-coherence percolation [@Kresin2011; @Mihailovic2002], may also be consistent with the observed behavior. In both of these cases pairing and phase coherence are also distinct processes. The former involves the condensation of pairs at $T_{c}$ as the pre-formed pair kinetic energy is reduced, while the percolation dynamics is associated with the time dynamics of Josephson tunneling between fluctuating pairs or superconducting patches. The percolation timescale $\tau_{J}$ is set by the Josephson energy $E_{J}=I_{c}\phi_{0}/2\pi$, where $I_{c}$ is the critical current and $\phi_{0}$ is the flux quantum. In cuprates, $\tau_{J}=\hbar/E_{J}\simeq300$ fs, which is compatible with the dynamics of the phase shown in Fig. \[Fig.The-relaxation-times\]a). A picture thus emerges from Fig. \[Fig.The-relaxation-times\] in which, in these materials, the relaxation of the phase $\theta$ is faster than the relaxation of the amplitude $\psi$ of the complex order parameter $\Psi=\psi e^{i\theta}$, with the dynamics of $\psi$ and $\theta$ governed by microscopically different processes. It is worth remarking here that the opposite situation is found in charge-density-wave dynamics, where the phase relaxation is slow compared to the amplitude relaxation, and the dynamics can be described by TDGL equations for the amplitude $\psi$, neglecting the phase relaxation $\theta$ [@Yusupov_NatPhys2010].
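The Josephson timescale quoted above follows from elementary unit bookkeeping. The sketch below (assuming scipy.constants; the $\sim1$ $\mu A$ critical current is an illustrative value, chosen only to show that it yields $E_{J}$ of a few meV and $\tau_{J}$ of a few hundred fs):

```python
from scipy.constants import hbar, e, eV, pi

phi0 = 2 * pi * hbar / (2 * e)   # flux quantum h/(2e), about 2.07e-15 Wb
I_c = 1.0e-6                     # assumed critical current, ~1 uA (illustrative)
E_J = I_c * phi0 / (2 * pi)      # Josephson coupling energy E_J = I_c phi0 / 2 pi
tau_J = hbar / E_J               # phase timescale tau_J = hbar / E_J

assert 1e-3 < E_J / eV < 1e-2    # E_J of a few meV
assert 1e-13 < tau_J < 1e-12     # tau_J of a few hundred fs, as quoted in the text
```

Note that $\tau_{J}=\hbar/E_{J}=h/(I_{c}\phi_{0})$, so the estimate depends only on the assumed critical current.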
Methods {#methods .unnumbered} ======= **Samples.** The samples used in this work were under-, near optimally- and over-doped Bi2212 with $T_{c}$s of 81, 85 and 80 K respectively. Samples were grown by the traveling solvent floating zone method. Critical temperatures were obtained from susceptibility measurements (e.g. inset in Fig. \[Fig:Signal\_traces\]a) for the underdoped sample). Alexandrov, A. S. & Andreev, A. F. . *Europhys. Lett.* **54**, 373–379 (2001). Alexandrov, A. & Ranninger, J. . *Phys. Rev. B* **24**, 1164–1169 (1981). Alexandrov, A. S. . *Europhys. Lett.* **95**, 27004 (2011). Mihailovic, D., Kabanov, V. V. & Müller, K. A. . *Europhys. Lett.* **57**, 254–259 (2002). Kresin, V. Z. & Wolf, S. A. . *J. Supercond. Nov. Magn.* **25**, 175–180 (2011). Ovchinnikov, Y. & Kresin, V. . *Phys. Rev. B* **65**, 214507 (2002). Geshkenbein, V. B., Ioffe, L. B. & Larkin, A. I. . *Phys. Rev. B* **55**, 3173–3180 (1997). Alexandrov, A. . *Phys. Rev. B* **48**, 10571–10574 (1993). Solov’ev, A. L. & Dmitriev, V. M. . *Low Temp. Phys.* **35**, 169 (2009). Emery, V. J. . *Phys. Rev. B* **56**, 6120–6147 (1997). Phillips, P. & Dalidovich, D. *Science* **302**, 243–7 (2003). Das, D. & Doniach, S. Existence of a Bose metal at [T=0]{}. *Phys. Rev. B* **60**, 1261–1275 (1999). Das, D. & Doniach, S. Bose metal: Gauge-field fluctuations and scaling for field-tuned quantum phase transitions. *Phys. Rev. B* **64**, 134511 (2001). Vojta, M. . *Rep. Prog. Phys.* **66**, 2069–2110 (2003). Torchinsky, D. H., Mahmood, F., Bollinger, A. T., Božović, I. & Gedik, N. *Nat. Mater.* **12**, 387–91 (2013). Sugai, S., Takayanagi, Y. & Hayamizu, N. . *Phys. Rev. Lett.* **96**, 137003 (2006). Silva, E., Sarti, S., Fastampa, R. & Giura, M. . *Phys. Rev. B* **64**, 144508 (2001). Truccato, M., Agostino, A., Rinaudo, G., Cagliero, S. & Panetta, M. . *J. Phys.: Condens. Matter* **18**, 8295–8312 (2006). Orenstein, J., Corson, J., Oh, S.
& Eckstein, J. . *Ann. Phys.* **15**, 596–605 (2006). Junod, A., Roulin, M., Revaz, B. & Erb, A. . *Physica B Condens. Matter* **280**, 214–219 (2000). Tallon, J. L., Storey, J. G. & Loram, J. W. . *Phys. Rev. B* **83**, 092502 (2011). Kondo, T. *et al.* . *Nat. Phys.* **7**, 21–25 (2010). Wang, Y., Li, L. & Ong, N. . *Phys. Rev. B* **73**, 1–20 (2006). Rullier-Albenque, F. *et al.* . *Phys. Rev. Lett.* **96**, 2–5 (2006). Pourret, A. *et al.* . *Nat. Phys.* **2**, 683–686 (2006). Li, L. *et al.* . *Phys. Rev. B* **81**, 1–9 (2010). Mihailović, D., Zgonik, M., Čopič, M. & Hrovat, M. . *Phys. Rev. B* **36**, 3997–3999 (1987). Ghosh, A. K. & Basu, A. N. Fluctuation-induced conductivity in quenched and furnace-cooled $Bi_2Sr_2CaCu_2O_{8+\ensuremath{\delta}}$: Aslamazov-Larkin or short-wavelength fluctuations. *Phys. Rev. B* **59**, 11193–11196 (1999). Balestrino, G., Crisan, A., Livanov, D., Manokhin, S. & Milani, E. . *Phys. C Supercond.* **355**, 135–139 (2001). Bhatia, S. N. & Dhard, C. . *Phys. Rev. B* **49**, 206–215 (1994). Wang, Y. *et al.* . *Phys. Rev. Lett.* **95**, 247002 (2005). Rullier-Albenque, F., Alloul, H. & Rikken, G. . *Phys. Rev. B* **84**, 014522 (2011). Chen, Y. & Ting, C. . *Phys. Rev. B* **65**, 180513 (2002). Kaindl, R. A., Carnahan, M. A., Chemla, D. S., Oh, S. & Eckstein, J. N. Dynamics of Cooper pair formation in $Bi_2Sr_2CaCu_2O_{8+\ensuremath{\delta}}$. *Phys. Rev. B* **72**, 060510 (2005). Kabanov, V. V., Demsar, J. & Mihailovic, D. . *Phys. Rev. Lett.* **95**, 147002 (2005). Liu, Y. *et al.* . *Phys. Rev. Lett.* **101**, 1–4 (2008). Nair, S. K. *et al.* Quasiparticle dynamics in overdoped $Bi_{1.4}Pb_{0.7}Sr_{1.9}CaCu_2O_{8+\ensuremath{\delta}}$: Coexistence of superconducting gap and pseudogap below $T_c$. *Phys. Rev. B* **82**, 212503 (2010). Kusar, P. *et al.* Controlled Vaporization of the Superconducting Condensate in Cuprate Superconductors by Femtosecond Photoexcitation. *Phys. Rev. Lett.* **101**, 227001 (2008).
Corson, J., Mallozzi, R., Orenstein, J., Eckstein, J. & Bozovic, I. . *Nature* **398**, 221–223 (1999). Kusar, P. *et al.* Coherent trajectory through the normal-to-superconducting transition reveals ultrafast vortex dynamics in a superconductor. *arXiv:1207.2879v2* (2013). Yusupov, R. *et al.* . *Nat. Phys.* **6**, 681–684 (2010). Kusar, P. *et al.* . *Phys. Rev. B* **83**, 035104 (2011). Mertelj, T. *et al.* . *Phys. Rev. Lett.* **110**, 156401 (2013). Toda, Y. *et al.* Quasiparticle relaxation dynamics in underdoped $Bi_2Sr_2CaCu_2O_{8+\ensuremath{\delta}}$ by two-color pump-probe spectroscopy. *Phys. Rev. B* **84**, 174516 (2011). Stojchevska, L. *et al.* . *Phys. Rev. B* **84**, 180507 (2011). Coslovich, G. *et al.* Competition Between the Pseudogap and Superconducting States of $Bi_2Sr_2Ca_{0.92}Y_{0.08}Cu_2O_{8+\ensuremath{\delta}}$ Single Crystals Revealed by Ultrafast Broadband Optical Reflectivity. *Phys. Rev. Lett.* **110**, 107003 (2013). Sakai, S. *et al.* . *Phys. Rev. Lett.* **111**, 107001 (2013). Kabanov, V. V., Demsar, J., Podobnik, B. & Mihailovic, D. Quasiparticle relaxation dynamics in superconductors with different gap structures: Theory and experiments on $YBa_2Cu_3O_{7-\delta}$. *Phys. Rev. B* **59**, 1497–1506 (1999). Toda, Y. *et al.* . *arXiv:1311.4719* (2013). Berezinskii, V. L. Destruction of Long-range Order in One-dimensional and Two-dimensional Systems having a Continuous Symmetry Group I. Classical Systems. *JETP* **32**, 493 (1971). Berezinskii, V. L. Destruction of Long-range Order in One-dimensional and Two-dimensional Systems Possessing a Continuous Symmetry Group. II. Quantum Systems. *JETP* **34**, 610 (1972). Kosterlitz, J. M. & Thouless, D. J. Ordering, metastability and phase transitions in two-dimensional systems. *J. Phys. C* **6**, 1181–1203 (1973). Emery, V. J. & Kivelson, S. A. . *Nature* **374**, 434–437 (1995). Emery, V., Kivelson, S. & Zachar, O. . *Phys. C Supercond.* **282-287**, 174–177 (1997). Emery, V.
& Kivelson, S. . *J. Phys. Chem. Solids* **59**, 1705–1710 (1998). Fisher, D., Fisher, M. & Huse, D. . *Phys. Rev. B* **43**, 130–159 (1991). Mukerjee, S. & Huse, D. . *Phys. Rev. B* **70**, 014506 (2004). Larkin, A. & Varlamov, A. *Theory of Fluctuations in Superconductors* (Oxford University Press, 2005). van der Beek, C., Colson, S., Indenbom, M. & Konczykowski, M. . *Phys. Rev. Lett.* **84**, 4196–4199 (2000). Schuller, I. & Gray, K. E. Experimental Observation of the Relaxation Time of the Order Parameter in Superconductors. *Phys. Rev. Lett.* **36**, 429–432 (1976). Alexandrov, A. S. . *Phys. Scr.* **83**, 038301 (2011). Author contributions {#author-contributions .unnumbered} ==================== T.K., Y.T. and M.O. grew the samples and performed the magnetic characterisation; I.M. performed the optical measurements. I.M. and P.K. performed the data analysis. I.M., Y.T., T.M. and D.M. interpreted the data. I.M. and D.M. wrote the manuscript. Additional information {#additional-information .unnumbered} ====================== **Competing financial interests:** The authors declare no competing financial interests.
**On the Chekhov-Fock coordinates of dessins d’enfants.\ **G. Shabat and V. Zolotarskaia**** **Introduction\ ** There are several ways to associate a complex structure to a ribbon graph. Examples are provided by the construction of Kontsevich [@Kon], the construction of Penner [@Pen], and the construction of dessins d’enfants [@Chp]. In the first two constructions the ribbon graph is considered with an additional structure: a number is associated to each edge. Varying these parameters we obtain different Riemann surfaces. The statement is that in this way we cover a cell of the corresponding moduli space. In the construction of dessins d’enfants a single Riemann surface is associated to each graph. (Usually we say *dessin d’enfant*, which in this paper means exactly the same as *ribbon graph*.) We call it *the Grothendieck model of a ribbon graph.* The problem is: which parameters for the edges of the graph in the first two constructions should be chosen to obtain its Grothendieck model? This problem was solved in [@K-CP], [@P-CP], and it turns out that these parameters should all be equal to $1$. The goal of the present paper is to discuss one more such construction, that of Chekhov-Fock, and to prove that putting all the edge parameters in this construction equal to $0$, we obtain the Grothendieck model of the graph. **Cartography** Let $\Gamma$ be any trivalent ribbon graph, that is, a graph with the valences of all vertices equal to $3$ and with a given cyclic order on the origins of the edges at each vertex. Let $E$ be the set of oriented edges of $\Gamma$. We have the so-called *cartography group* $C_2^+$ acting on $E$. This group is generated by the elements $\rho_0$ and $\rho_1$. The element $\rho_0$ turns the current edge counterclockwise around its origin, and $\rho_1$ reverses the orientation of the edge (see [@Chp]).
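The cartography action can be made concrete on a small example. The sketch below (an illustration, not part of the original paper) realizes $\rho_0$ and $\rho_1$ on the six oriented edges of the "theta" graph, the trivalent ribbon graph with two vertices joined by three edges, and checks the defining relations $\rho_1^2=\rho_0^3=1$ together with the transitivity of the action.

```python
# Oriented edges of the "theta" graph: two trivalent vertices u, v joined by
# three edges a, b, c; "+" means oriented from u to v, "-" the reverse.
EDGES = ["a+", "a-", "b+", "b-", "c+", "c-"]

# counterclockwise cyclic order of edge-origins around each vertex
NEXT = {"a": "b", "b": "c", "c": "a"}

def rho1(e):
    """Reverse the orientation of an oriented edge."""
    return e[0] + ("-" if e[1] == "+" else "+")

def rho0(e):
    """Turn the edge counterclockwise around its origin."""
    return NEXT[e[0]] + e[1]

# relations of the cartography group acting on a trivalent graph
assert all(rho1(rho1(e)) == e for e in EDGES)        # rho_1^2 = 1
assert all(rho0(rho0(rho0(e))) == e for e in EDGES)  # rho_0^3 = 1

# the action is transitive: the orbit of "a+" is all of EDGES
orbit, frontier = {"a+"}, ["a+"]
while frontier:
    e = frontier.pop()
    for f in (rho0(e), rho1(e)):
        if f not in orbit:
            orbit.add(f)
            frontier.append(f)
assert orbit == set(EDGES)
```

Transitivity of this action is used later when the edge coordinates of the Chekhov-Fock net are propagated from a single chosen edge.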
Formally we can write $$\it C_2^+:=<\rho_0, \rho_1 | \rho_1^2=1>.$$ Bearing in mind the trivalence of our graph we define $$\it C_2^+[3]=<\rho_0, \rho_1 | \rho_1^2=\rho_0^3=1>,$$ also acting on $E$. Fix $\epsilon \in E$. Let $\it B(E,\epsilon)$ be the Borel subgroup of $\it C_2^+[3]$ corresponding to the edge $\epsilon$ (see [@Chp]), that is $$\it B(E,\epsilon)=\{w \in C_2^+[3]|w\epsilon=\epsilon\}.$$ **The Chekhov-Fock construction** We also have an additional structure: $$z:E \longrightarrow \mathbb{R}, \it \quad z(\rho_1\gamma) =z(\gamma) \quad \forall \gamma \in E.$$ Now given $\Gamma, \it z, \epsilon$ we define the map $$CHF: \it C_2^+[3] \longrightarrow PSL_2(\mathbb {R})$$ inductively, setting $$CHF(1)=1,$$ $$CHF(\rho_0 w)=L \times CHF(w)$$ $$CHF(\rho_1 w)=X_{z(w\epsilon)} \times CHF(w)$$ where\ $X_a= \left(\begin{array}{cc} 0& -e^{a/2} \\ e^{-a/2}& 0 \end{array}\right) $\ $L= \left(\begin{array}{cc} 0 & 1\\ -1 & -1\\ \end{array}\right) $ **\ Notation. *$chf:=CHF |_{B(E,\epsilon)}$\ **Statement 1.
*The map $chf$ is a homomorphism.****** This statement, like most of the other statements of the present paper, becomes obvious after some thought, but here is the formal\ **Proof: We should prove that $$\it chf(w_2w_1)= chf(w_2) chf (w_1)$$ for any $\it w_1, w_2\in {B(E,\epsilon)}$.\ It is sufficient to show that $$\it CHF(w_2w_1)= CHF(w_2)CHF(w_1)$$ for any $\it w_2\in C_2^+[3], w_1\in {B(E,\epsilon)}$.\ We will prove this by induction on the length of $\it w_2$.\ If the length of $\it w_2$ is $1$ then either $\it w_2=\rho_0$,\ or $\it w_2=\rho_1$.\ In the first case we have $$\it CHF(w_2w_1)=$$ $$\it CHF(\rho_0w_1)= \qquad (by\; the\; definition\; of\; CHF)$$ $$\it L\times CHF(w_1)= \qquad (by\; the\; definition\; of\; CHF)$$ $$\it CHF(\rho_0)CHF(w_1)=$$ $$\it CHF(w_2)CHF(w_1)$$\ In the second case we have $$\it CHF(w_2w_1)=$$ $$\it CHF(\rho_1w_1)= \qquad (by\; the\; definition\; of\; CHF)$$ $$\it X_{z(w_1\epsilon)}\times CHF(w_1)= \qquad (by\; the\; definition\; of\; B(E,\epsilon))$$ $$\it X_{z(\epsilon)}\times CHF(w_1)= \qquad (by\; the\; definition\; of\; CHF)$$ $$\it CHF(\rho_1)CHF(w_1)=$$ $$\it CHF(w_2)CHF(w_1)$$\ In the general case if $\it w_2=\rho_0w_3$ we have $$\it CHF(w_2w_1)=$$ $$\it CHF(\rho_0w_3w_1)= \qquad (by\; the\; definition\; of\; CHF)$$ $$\it L\times CHF(w_3w_1)= \qquad (by\; the\; induction)$$ $$\it L\times CHF(w_3)CHF(w_1)=\qquad (by\; the\; definition\; of\; CHF)$$ $$\it CHF(\rho_0w_3)CHF(w_1)=$$ $$\it CHF(w_2)CHF(w_1)$$\ Finally if $\it w_2=\rho_1w_3$ we have $$\it CHF(w_2w_1)=$$ $$\it CHF(\rho_1w_3w_1)=\qquad (by\; the\; definition\; of\; CHF)$$ $$\it X_{z(w_3w_1\epsilon)}\times CHF(w_3w_1)=\qquad (by\; the\; definition\; of\; B(E,\epsilon)\; and\; the\; induction)$$ $$\it X_{z(w_3\epsilon)}\times CHF(w_3)CHF(w_1)=\qquad (by\; the\; definition\; of\; CHF)$$ $$\it CHF(\rho_1w_3)CHF(w_1)=$$ $$\it CHF(w_2)CHF(w_1)$$\ which finishes the proof.
$\blacksquare$** **The Chekhov-Fock net** Let us denote the image of $chf$ by $\Delta(\Gamma, z, \epsilon)$. To explain the properties of $\Delta(\Gamma, z, \epsilon)$ we need the notion of a *Chekhov-Fock net*. It is an ideal triangulation of the upper half-plane $\mathcal{H}$ with numbers associated to the edges of its dual trivalent graph. Here is the construction of the Chekhov-Fock net associated to the graph $\Gamma$. The first triangle of the Chekhov-Fock net will be the ideal triangle $T_0$ with vertices at $-1$, $0$ and $\infty$. Denote by $[x, y]$ the Lobachevsky line joining $x$ and $y$. Let us call $\epsilon^*$ the edge of the trivalent graph intersecting the edge $[0,\infty]$ of the triangle (we need the orientation of the edges, so let us say that the origin of $\epsilon^*$ which is situated inside $T_0$ is its “beginning”). The number associated to this edge of the graph will be $\it z^*(\epsilon^*) =z(\epsilon)$. Now (using the fact that $C_2^+[3]$ acts on our trivalent graph and that this action is transitive) the numbers associated to all the edges of the graph can be determined in the following way: $\it z^*(w\epsilon^*)=z(w\epsilon)$, where $\it w \in C_2^+[3]$. Now we have to explain how to obtain all the other triangles inductively. Suppose we have constructed the triangle $T$ with edges $a,b$ and $c$ and we want to construct the triangle which will also contain the edge $c$. Let $\alpha \in PSL_2(\mathbb{R})$ be a transformation of $\mathcal{H}$ which takes the triangle $T$ to the triangle $T_0$, so that $\alpha(c)=[0, \infty]$. Then $(\alpha^{-1}X_{z^*(c)}\alpha)T$ is the desired triangle.\ **Fact. $\it \forall w \in C_2^+[3] \quad z^*((CHF(w))^{-1}\epsilon^*)=z(w\epsilon)$\ **Statement 2. *For any $\it w \in C_2^+[3]$ there exists a triangle $T$ of the Chekhov-Fock net such that $(CHF(w))^{-1}T_0=T$.\ **Proof: We use induction on the length of $\it w$. If this length is $0$ then the statement is trivial.
Now if $\it w=\rho_0w_1$, $$\it (CHF(w))^{-1}T_0= \qquad (by\; the\; definition\; of\; CHF)$$ $$(L \times CHF(w_1))^{-1}T_0=$$ $$(CHF(w_1))^{-1}L^{-1}T_0= \qquad (since\; L\; preserves\; T_0)$$ $$(CHF(w_1))^{-1}T_0=T.$$ Now consider the case $\it w=\rho_1w_1$; by the induction hypothesis let $T$ be the triangle with $(CHF(w_1))^{-1}T_0=T$. $$\it (CHF(w))^{-1}T_0= \qquad (by\; the\; definition\; of\; CHF)$$ $$(X_{z(w\epsilon)} \times CHF(w_1))^{-1}T_0=$$ $$(CHF(w_1))^{-1}(X_{z(w\epsilon)})^{-1}T_0=$$ $$((CHF(w_1))^{-1}(X_{z(w\epsilon)})^{-1}CHF(w_1))(CHF(w_1))^{-1}T_0=$$ $$((CHF(w_1))^{-1}(X_{z(w\epsilon)})^{-1}CHF(w_1))T= \qquad (by \; the \; \bf Fact \rm \; )$$ $$((CHF(w_1))^{-1}(X_{z^*((CHF(w))^{-1}\epsilon^*)})^{-1}CHF(w_1))T$$ And the last expression by the definition gives us the next triangle of the Chekhov-Fock net. $\blacksquare$ **\ Corollary: *$\Delta ( \Gamma, z, \epsilon)$ is a Fuchsian group.********** The triangle $T_0$ is its fundamental domain. **The Chekhov-Fock coordinates of dessins d’enfants** Now let us put $z\equiv 0$. **\ Statement 3: The Riemann surface *$\mathcal{H} \diagup \Delta ( \Gamma, 0, \epsilon)$ is equivalent to the Grothendieck model of $\Gamma$. **\ Proof: If $z\equiv 0$ then $\Delta(\Gamma, z, \epsilon)=\Delta ( \Gamma, 0, \epsilon)$ is a subgroup of $PSL_2(\mathbb{Z})$. Let us first consider the case where $B(E,\epsilon)$ is normal (we call such a dessin regular). Then $\Delta(\Gamma, 0, \epsilon)$ is normal in $PSL_2(\mathbb{Z})$. So we can factorize $\mathcal{H} \diagup \Delta (\Gamma, 0, \epsilon )$ by $PSL_2(\mathbb{Z})$. Now if we identify $\mathcal{H}/PSL_2(\mathbb{Z})$ with the Riemann sphere, we get a function $\beta_\Gamma$ defined on $\mathcal{H} / \Delta (\Gamma, 0, \epsilon )$ with the only critical values $0,1$ and $\infty$, that is, the Belyi function corresponding to this dessin.***** Now if we take any graph $\Gamma$ and if $D$ is the corresponding dessin d’enfant, there exists a regular dessin d’enfant $D'$ and a covering $\chi: D' \longrightarrow D$. Let us denote the graph corresponding to this dessin by $\Gamma'$.
Now we can define a function $\beta_\Gamma$ on $\mathcal{H} / \Delta (\Gamma, 0, \epsilon )$ such that $\beta_\Gamma\circ\chi=\beta_{\Gamma'}$. **Examples.** **Notation. By the case $\langle a_1, \dots ,a_n\mid b_1, \dots , b_m\rangle$ we denote the graph with the valences of its vertices equal to $a_1, \dots , a_n$ and the valences of the vertices of the dual graph equal to $b_1, \dots , b_m$. **\ Example 1. Case $\langle3,3|2,2,2\rangle$. Let the numbers corresponding to the edges $A,B,C$ be $a,b,c$, and let the chosen oriented edge be $B$, oriented so that edge $A$ is to the left of the end of $B$ and edge $C$ to the right. Then the generators of the corresponding Fuchsian group are\ $\gamma_1=X_bRX_cR$\ $\gamma_2=RX_aRX_b$\ $\gamma_3=LX_cRX_aL$\ with the relation $\gamma_3\gamma_2\gamma_1=1$.**** Using the fact that the $\gamma_i$ are parabolic we get $ \left\{\begin{array}{lcl} a+b&=&0\\ a+c&=&0\\ b+c&=&0\\ \end{array}\right. $ The only solution of this system is $ \left\{\begin{array}{lcl} a&=&0\\ b&=&0\\ c&=&0\\ \end{array}\right. $ So we get the subgroup of $PSL_2(\mathbb{R})$ with generators: $\gamma_1= \left(\begin{array}{cc} 1& 0 \\ 2& 1 \end{array}\right)$ , the invariant point is $0$. $\gamma_2= \left(\begin{array}{cc} 1& -2 \\ 0& 1 \end{array}\right)$ , the invariant point is $\infty$. $\gamma_3= \left(\begin{array}{cc} -1& -2 \\ 2& 3 \end{array}\right)$ , the invariant point is $-1$.\ with the relation $\gamma_3\gamma_2\gamma_1=1$. **** Example 2. Case $\langle3,3,3,3|3,3,3,3\rangle$, the tetrahedron. Let the edges $A,B,C$ follow clockwise around some vertex, let edge $D$ be opposite to edge $A$, $E$ to $B$, and $F$ to $C$. Taking $a,b,c,d,e,f$ as the numbers corresponding to the edges $A,B,C,D,E,F$ and using the fact that all elements of the Fuchsian group are parabolic we get the system $ \left\{\begin{array}{lcl} a+e+c&=&0\\ b+d+c&=&0\\ a+f+b&=&0\\ e+f+d&=&0\\ \end{array}\right. $ The solution of this system is $ \left\{\begin{array}{lcl} a&=&d\\ b&=&e\\ c&=&f\\ a+b+c&=&0\\ \end{array}\right.
$ So we get the family of Riemann surfaces with two parameters. The Riemann surface corresponding to the dessin can be obtained by putting $a=b=c=0$, because of its symmetry. So we get the following matrices\ $\gamma_1= \left(\begin{array}{cc} 1& -3 \\ 0& 1 \end{array}\right)$ , the invariant point is $\infty$\ $\gamma_2=\left(\begin{array}{cc} 2& 3 \\ -3& -4 \end{array}\right) $ , the invariant point is $-1$\ $\gamma_3=\left(\begin{array}{cc} 1& 0 \\ 3& 1 \end{array}\right) $ , the invariant point is $0$\ $\gamma_4= \left(\begin{array}{cc} 4& -3 \\ 3& -2 \end{array}\right)$ , the invariant point is $1$\ with the relation $\gamma_4\gamma_3\gamma_2\gamma_1=1$. **Example 3. Case $\langle3,3,3,3,3,3,3,3|4,4,4,4,4,4\rangle$. Using the usual Chekhov-Fock technique we get the generators of the Fuchsian group\ $\gamma_1= \left(\begin{array}{cc} 1& 0 \\ 4& 1 \end{array}\right)$ , the invariant point is $0$\ $\gamma_2= \left(\begin{array}{cc} 5& -4 \\ 4& -3 \end{array}\right)$ , the invariant point is $1$\ $\gamma_3=\left(\begin{array}{cc} 1& -4 \\ 0& 1 \end{array}\right) $ , the invariant point is $\infty$\ $\gamma_4=\left(\begin{array}{cc} 7& 16 \\ -4& -9 \end{array}\right) $ , the invariant point is $-2$\ $\gamma_5=\left(\begin{array}{cc} 3& 4 \\ -4& -5 \end{array}\right) $ , the invariant point is $-1$\ $\gamma_6= \left(\begin{array}{cc} 7& 4 \\ -16& -9 \end{array}\right)$ , the invariant point is $-1/2$\ with the relation $\gamma_6\gamma_5\gamma_4\gamma_3\gamma_2\gamma_1=1$.\ We can also find an automorphism of this dessin: $\alpha= \left(\begin{array}{cc} 1& 1 \\ 0& 1 \end{array}\right)$** Factorizing this dessin by this automorphism we get another dessin:\ **Example 4. Case $\langle3,3|4,1,1\rangle$ with generators\ $\gamma_1= \left(\begin{array}{cc} 1& 1 \\ 0& 1 \end{array}\right)$ , the invariant point is $\infty$\ $\gamma_2= \left(\begin{array}{cc} 1& 0 \\ 4& 1 \end{array}\right)$ , the invariant point is $0$** V. V. Fock and L. O.
Chekhov, **Quantum Mapping Class Group, Pentagon Relation, and Geodesics, *Proceedings of the Steklov Institute of Mathematics, Vol. 226, 1999, pp. 149-163.*** Kontsevich, M. **Intersection Theory on the Moduli Space of Curves and the Matrix Airy Function, *Functional Analysis and Its Applications, 1991, 25:2, pp. 50-57.*** Penner, R. C. **The decorated Teichmüller Space of punctured surfaces, *Comm. Math. Phys., 113:2 (1987), 299-340.*** Shabat, G. B. **Combinatorial and Topological methods in the theory of algebraic curves, *Theses, Moscow State University, 1998.*** Shabat, G. B. **Complex analysis and dessins d’enfants, *in: “Complex Analysis in Modern Math.”, “Phasis”, Moscow, 1998, pp. 257-268.*** Shabat, G. B., Voevodsky, V. A. **Drawing curves over number fields, *The Grothendieck Festschrift, Birkhäuser, 1990, V. III, pp. 199-227.***
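As a sanity check on the examples above, the stated relations can be verified by direct matrix multiplication: in $PSL_2(\mathbb{R})$ a product of generators equals the identity exactly when the matrix product is $\pm I$. The sketch below (an illustration, not part of the original paper) checks Examples 1-3 with the matrices exactly as listed, together with parabolicity ($|\mathrm{tr}|=2$).

```python
def mul(a, b):
    """Product of two 2x2 integer matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def chain(*gammas):
    """Compute gamma_n * ... * gamma_1 (arguments given as gamma_1, ..., gamma_n)."""
    out = [[1, 0], [0, 1]]
    for g in gammas:
        out = mul(g, out)
    return out

MINUS_I = [[-1, 0], [0, -1]]

# Example 1, case <3,3|2,2,2>
g1, g2, g3 = [[1, 0], [2, 1]], [[1, -2], [0, 1]], [[-1, -2], [2, 3]]
assert chain(g1, g2, g3) == MINUS_I        # gamma_3 gamma_2 gamma_1 = 1 in PSL_2

# Example 2, the tetrahedron
t1, t2 = [[1, -3], [0, 1]], [[2, 3], [-3, -4]]
t3, t4 = [[1, 0], [3, 1]], [[4, -3], [3, -2]]
assert chain(t1, t2, t3, t4) == MINUS_I    # gamma_4 gamma_3 gamma_2 gamma_1 = 1

# Example 3
e3 = [[[1, 0], [4, 1]], [[5, -4], [4, -3]], [[1, -4], [0, 1]],
      [[7, 16], [-4, -9]], [[3, 4], [-4, -5]], [[7, 4], [-16, -9]]]
assert chain(*e3) == MINUS_I               # gamma_6 ... gamma_1 = 1

# every generator is parabolic: |trace| = 2
for g in (g1, g2, g3, t1, t2, t3, t4, *e3):
    assert abs(g[0][0] + g[1][1]) == 2
```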
--- abstract: 'We investigate dynamical transport properties of interacting electrons moving in a vibrating nanoelectromechanical wire in a magnetic field. We have built an [*exactly solvable*]{} model in which electric current and mechanical oscillation are treated fully quantum mechanically on an equal footing. Quantum mechanically fluctuating Aharonov-Bohm phases obtained by the electrons cause a nontrivial contribution to the mechanical vibration and electrical conduction of the wire. We demonstrate our theory by calculating the admittance of the wire, which is influenced by the interplay among the mechanical and electrical energy scales, the magnetic field strength, and the electron-electron interaction.' author: - 'Hangmo Yi$^1$ and Kang-Hun Ahn$^2$' title: Dynamical electron transport through a nanoelectromechanical wire in a magnetic field --- Recent progress in experimental techniques for nanostructures has made it possible for researchers to investigate electromechanical systems with extremely small sizes.[@bishop2001a] The size limit is now being pushed down to the vicinity of the mechanically quantum regime, and there has been much effort to understand the effect of quantum fluctuations on electrical and mechanical properties — and their coupling — in nanoelectromechanical systems.[@blencowe2004a; @ahn2004a; @knobel2003a; @schwab2005a; @lahaye2004a; @leroy2004a; @sapmaz2006a] Recently Shekhter and his coworkers investigated an Aharonov-Bohm effect in quantum mechanically vibrating nanoelectromechanical wires.[@shekhter2006a] In the limit where non-interacting electrons tunnel between weakly connected leads through virtual processes, they used a perturbative analysis to find that quantum interference has a strong effect on the electromechanical response of the wire. In this Letter, we propose an [*exactly solvable*]{} theoretical model of strongly interacting electrons in a nanowire which quantum-mechanically oscillates in a magnetic field.
We study the dynamical electron transport beyond the perturbative regime and investigate the fundamental limits of both quantum mechanics and strong correlations. Whereas Ref.  focuses on energy scales much less than the level spacing of the vibrational normal modes, our model is valid for a much broader range of energy scales, including energies much greater than the vibrational energy level spacing. Therefore, it captures various important and interesting physical consequences, such as admittance resonances occurring near mechanical normal mode frequencies. We also consider the electron-electron interaction in the theory, which is known to play an important role in low dimensional systems such as carbon nanotubes.[@yao1999a] Our theory is based on a Luttinger liquid model which was previously developed by the present authors.[@ahn2004a] One of its important characteristics is that it treats both electrical and mechanical degrees of freedom quantum mechanically, completely on an equal footing. On the atomic level, the interplay between electrical charge and mechanical oscillation often causes rich and interesting new physics to emerge — the most famous examples being the BCS theory of superconductivity[@bardeen1957a] and the SSH theory of conducting polymers.[@su1979a] This Letter deals with a mesoscopic version of such a case, in which one can even control the electromechanical coupling by tuning external parameters. Figure \[fig:setup\] schematically shows the experimental setup of our model. A one-dimensional wire — a single-wall carbon nanotube, for example — of length $L$ is suspended across a valley between two metallic gates. There are two important degrees of freedom in this model: the mechanical oscillation of the wire and the electric current through it. Mechanically, the wire may form standing waves whose sound velocity is determined by its one-dimensional mass density and tension.
When the amplitude of the oscillations is small, the mechanical motion of the wire may be characterized by its displacement $u(x)$ from the equilibrium position, where $x$ is the one-dimensional position along the wire. By connecting the gates to an external ac voltage source, one may now drive an ac current through the wire. If there is a magnetic field $\mathbf{B}$ perpendicular to the wire, the wire is expected to feel the Lorentz force and move. This may in turn induce an electromotive force in the wire as it cuts through the magnetic flux. This is the well-known effect of back-reaction.[@ahn2004a] We will discuss this effect in the regime where both the electron current and the mechanical oscillations must be treated quantum mechanically. The electronic excitations of our one-dimensional electromechanical system are best described by the Tomonaga-Luttinger liquid theory.[@tomonaga1950a; @luttinger1963a; @haldane1981b] In this theory, the electron-electron interaction cannot be treated perturbatively, but the Hamiltonian may be rendered quadratic by a bosonization technique. This is the key to the exact solvability of our model. For simplicity, we will consider a spinless case here, but it may be easily generalized to include spin. The electronic part of the Euclidean action is given by[@kane1992a; @yi2002a] $$\!S_\theta = \frac{1}{8\pi v_F} \int_0^\beta \!d\tau \int_{-\infty}^\infty dx \left\{ \left[\frac{\partial\theta}{\partial\tau}\right]^2 + \left[\frac{v_F}{K(x)}\frac{\partial\theta}{\partial x}\right]^2 \right\}$$ where $v_F$ is the Fermi velocity, $\tau=it$ is the imaginary time, and $\beta\equiv 1/k_BT$ is the inverse temperature. Here, $\theta(x)$ is a bosonic field related to the one-dimensional electronic density fluctuation $\delta n$ via $\delta n=(\partial\theta/\partial x)/2\pi$.
We have replaced a system of a finite-sized one-dimensional Tomonaga-Luttinger liquid wire connected to two-dimensional leads by an effective one-dimensional liquid with a position dependent interaction parameter and velocity:[@maslov1995a] $$K(x) = \left\{ \begin{array}{ll} K, & \text{if } 0<x<L, \\ 1, & \text{if } x<0 \text{ or } x>L. \\ \end{array} \right.$$ For a short range interaction of the form $V(x-x')=V\delta(x-x')$, we have $K=1/\sqrt{1+V/\pi v_F}$. Inside the wire, the velocity of the acoustic plasmon is also renormalized to $v=v_F/K$. In a more realistic interface, $K(x)$ would change more smoothly, but this would not qualitatively change the results below. The mechanical part of the action is $$S_u = \frac{\rho}{2} \int_0^\beta d\tau \int_0^L dx \left[ \left(\frac{\partial u}{\partial\tau}\right)^2 + \left(v_s\frac{\partial u}{\partial x}\right)^2 \right]$$ where $u(x)$ is the transverse displacement of the wire from the equilibrium position, $\rho$ the one-dimensional mass density of the wire, and $v_s$ the sound velocity of the mechanical transverse waves. Finally, the two fields $\theta$ and $u$ are coupled via a magnetic field $\mathbf{B}=B\hat{\mathbf{z}}$. We will choose a gauge such that the vector potential is given by $\mathbf{A} = -By\hat{\mathbf{x}}$. At a given position along the wire, one may simply replace $y$ with the transverse position of the wire $u(x)$. The coupling part of the action is then given by $$\begin{aligned} S_{\theta\text{-}u} & = \frac{1}{c} \int_0^\beta d\tau \int_0^L dx JA_x \nonumber \\ & = \frac{1}{c} \int_0^\beta d\tau \int_0^L dx \left( \frac{-ie}{2\pi} \frac{\partial \theta}{\partial \tau} \right) (-Bu),\end{aligned}$$ where $-e$ is the electron charge and $J=(-e/2\pi)(\partial \theta/\partial t)$ is the electric current. Since the total action $$S = S_\theta + S_u + S_{\theta\text{-}u} \label{eq:action}$$ is completely quadratic, this model is exactly solvable.
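As a quick numerical illustration of the renormalization formulas just quoted, the sketch below evaluates $K=1/\sqrt{1+V/\pi v_F}$ and the plasmon velocity $v=v_F/K$ for a contact interaction. The interaction strength used is an assumption chosen so that $K$ comes out near the value $K=0.22$ adopted later for a carbon nanotube; it is not a value given in the text.

```python
import math

def luttinger_params(V_over_pi_vF):
    """Interaction parameter K and plasmon velocity v (in units of v_F)
    for a contact interaction V(x - x') = V * delta(x - x'),
    using K = 1/sqrt(1 + V/(pi v_F)) and v = v_F / K."""
    K = 1.0 / math.sqrt(1.0 + V_over_pi_vF)
    return K, 1.0 / K

K0, v0 = luttinger_params(0.0)     # non-interacting limit: K = 1, v = v_F
K1, v1 = luttinger_params(19.66)   # assumed strong repulsion giving K close to 0.22
```

Repulsive interactions ($V>0$) push $K$ below 1 and speed up the acoustic plasmon, consistent with $v = v_F/K$.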
Before going into any detailed calculations, we would like to first discuss the qualitative characteristics of this model. In the absence of a magnetic field, $S_{\theta\text{-}u}=0$ and the electrical and mechanical degrees of freedom are completely decoupled. Then there are two independent branches of excitations. The normal modes of the mechanical degree of freedom $u(x)$ are standing waves $u_m(x) = u_{m0}\sin(m\pi x/L)$, the energies of which are quantized as $m\Delta_u$ with an integer $m$. Here, $\Delta_u\equiv\pi\hbar v_s/L$ is the energy level spacing of the mechanical normal modes. On the other hand, the electrical degree of freedom $\theta(x)$ is infinitely extended and its energy is not quantized. However, because the voltage drop occurs within the length of the wire and $K(x)$ changes at the boundaries, the energy scale $\Delta_\theta\equiv\pi\hbar v/L$ may still manifest itself in some physical quantities, as will be shown below. This in fact is the energy level spacing of the Tomonaga-Luttinger acoustic plasmon confined in a box of length $L$. If the magnetic field is now turned on, the two degrees of freedom become coupled to each other through the following two processes: (i) a current induces a Lorentz force on the wire and (ii) an oscillation causes an electromotive force through magnetic induction. Therefore, stimulating one of the two degrees of freedom will induce excitations in the other. Eventually, the effect will negatively feed back into the originally stimulated degree of freedom; this is the origin of the back-reaction. What characterizes the coupling strength is the magnetic back-reaction energy scale $\omega_B\equiv B\sqrt{v_Fe^2/\pi\hbar\rho c^2}$. This is the size of the back-reaction gap in one of the two uncoupled excitation branches.[@ahn2004a] For a small magnetic field, the coupling is weak and the electronic excitations are only slightly perturbed.
The effect of the coupling will be strongest at each quantized energy level of the mechanical oscillations, which, in the weak field limit, occurs at every $\Delta_u$. (In fact, even multiples of $\Delta_u$ will have no effect, for the reason explained below.) As the magnetic field increases, the coupling grows stronger and the two branches of excitations intricately merge together to form new sets of excitations. The dispersion relations for the new sets of excitations have been calculated for an infinitely long wire in Ref. . In the strong coupling limit ($\omega_B\gg\omega\Delta_\theta/\Delta_u$), the dispersion relations are approximately given by $E(q) \approx v_svq^2/\omega_B$ and $\sqrt{v^2q^2+\omega_B^2}$. As we will see below, these will characterize the energy profile of finite wires, too. One of the important physical quantities that characterize the system is the current response to an external ac bias voltage. Let us consider the average current inside the wire $I\equiv\frac{1}{L}\int_0^L J(x)\,dx$. The external bias voltage and the current will be assumed to take the form $V_\mathrm{ext}=V_0(\omega)e^{i\omega t}$ and $I=I_0(\omega)e^{i\omega t}$, respectively. Then, we may calculate the ratio of the complex amplitudes of the current and the external bias voltage in the linear response regime, i.e., $Y(\omega)\equiv\lim_{V_0(\omega)\rightarrow 0} I_0(\omega)/V_0(\omega)$, which is the inverse of the impedance and is simply called the admittance. For small biases, the linear response theory dictates $$Y(\omega) = \frac{1}{i\hbar\omega L^2} \int_0^L dx \int_0^L dx' \int_0^\infty dt e^{i\omega t} \Pi(x,x',t),$$ where $$\Pi(x,x',t) \equiv \left\langle J(x,t)J(x',0) \right\rangle = \left(\frac{e\omega}{2\pi}\right)^2 \left\langle \theta(x,t)\theta(x',0) \right\rangle$$ is the current-current correlation function. This may be computed using the Euclidean action in Eq.
(\[eq:action\]) and the usual analytic continuation technique.[@kane1992a] The real part of the admittance is proportional to the power absorption spectrum. Our calculations show that $$\mathrm{Re}\ Y(\omega) = \frac{e^2}{h} \frac{A(\omega)}{\omega^2+[\Gamma(\omega)]^2} \label{eq:ReY}$$ where $$\begin{aligned} \Gamma(\omega) & = \frac{\Delta_\theta}{2K\hbar}\left\{ [1-\kappa(\omega)]\lambda_+(\omega)\tan\frac{\pi\lambda_+(\omega)}{2} \right. \nonumber \\ & \qquad\qquad + \left. [1+\kappa(\omega)] \lambda _-(\omega)\tan\frac{\pi\lambda_-(\omega)}{2} \right\} \\ %% A(\omega) & = \frac{\omega^2}{\pi} \left[ \frac{1-\kappa(\omega)}{\lambda_+(\omega)}\tan\frac{\pi \lambda_+(\omega)}{2} \right. \nonumber \\ & \qquad\quad + \left. \frac{1+\kappa(\omega)}{\lambda_-(\omega)}\tan\frac{\pi\lambda_-(\omega)}{2} \right]^2\end{aligned}$$ with $$\begin{aligned} \kappa(\omega) & = \frac{1-\eta ^2}{\sqrt{\left(1-\eta^2\right)^2+4(\eta\omega_B/\omega)^2}}, \\ %% \lambda_\pm(\omega) & = \frac{\hbar\omega}{\Delta_u}\left[\frac{1+\eta ^2}{2} \pm \sqrt{\left(\frac{1-\eta ^2}{2}\right)^2+\left(\frac{\eta\omega_B}{\omega}\right)^2}\right]^\frac{1}{2} \\ %% \eta & \equiv \Delta_u/\Delta_\theta = v_s/v.\end{aligned}$$ These quantities need to be evaluated numerically. Figure \[fig:Y(w)\] shows the main result. Assuming that the wire is a carbon nanotube, we have taken the value $K=0.22$ from Ref. . Since the mechanical sound velocity is usually small compared to the electron Fermi velocity, we have also assumed $\eta=\Delta_u/\Delta_\theta=v_s/v=0.1$. For $\omega_B=0$, the mechanical degree of freedom is decoupled from the electronic one and does not contribute to the current measurement at all. Therefore, $\Delta_\theta$ is the only relevant energy scale and it determines the frequency scale over which $\mathrm{Re}\,Y(\omega)$ decays at small $\omega$. It also determines the period of weak modulation in the admittance, which is usually too small to see in the figures. 
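The expressions above are straightforward to evaluate numerically. The sketch below (an illustration; the unit choice $\hbar=1$, $\Delta_\theta=1$ is an assumption for convenience) implements $\mathrm{Re}\,Y(\omega)$ in units of $e^2/h$ from $\Gamma(\omega)$, $A(\omega)$, $\kappa(\omega)$ and $\lambda_\pm(\omega)$ exactly as written. Note that $\lambda_-$ is real only for $\omega>\omega_B$, and the tangents diverge at the resonance frequencies.

```python
import math

def re_admittance(omega, K=0.22, eta=0.1, omega_B=0.0):
    """Re Y(omega) in units of e^2/h.  Units: hbar = 1 and delta_theta = 1,
    so delta_u = eta and omega, omega_B are measured in units of delta_theta.
    Valid for omega > omega_B (otherwise lambda_- becomes imaginary)."""
    delta_u = eta                      # eta = delta_u / delta_theta
    r2 = (eta * omega_B / omega) ** 2
    kappa = (1 - eta**2) / math.sqrt((1 - eta**2) ** 2 + 4 * r2)
    root = math.sqrt(((1 - eta**2) / 2) ** 2 + r2)
    lam_p = (omega / delta_u) * math.sqrt((1 + eta**2) / 2 + root)
    lam_m = (omega / delta_u) * math.sqrt((1 + eta**2) / 2 - root)
    tp = math.tan(math.pi * lam_p / 2)
    tm = math.tan(math.pi * lam_m / 2)
    # Gamma(omega) = (delta_theta / 2 K hbar) {...} with delta_theta = hbar = 1
    gamma = 0.5 / K * ((1 - kappa) * lam_p * tp + (1 + kappa) * lam_m * tm)
    A = omega**2 / math.pi * ((1 - kappa) / lam_p * tp
                              + (1 + kappa) / lam_m * tm) ** 2
    return A / (omega**2 + gamma**2)

y0 = re_admittance(0.42)               # zero magnetic field
yB = re_admittance(0.42, omega_B=0.1)  # weak magnetic field
```

Since $A(\omega)\geq 0$ and $\Gamma(\omega)$ is real away from the singular points, the computed $\mathrm{Re}\,Y(\omega)$ is non-negative, as a power absorption spectrum must be.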
Note that the admittance approaches the usual Landauer dc conductance $e^2/h$ as $\omega\rightarrow 0$.[@maslov1995a] This dc conductance is unaffected by the magnetic field and stays unchanged for all values of $\omega_B$, simply because there is no back-reaction for a time-independent dc current. For a weak magnetic field ($\omega_B\ll\omega/\eta$), regularly spaced sharp resonance peaks start to appear. Their positions are determined by the condition $\Gamma(\omega)=0$. Note that they occur whenever the frequency $\omega$ is close to an [*odd*]{} integer multiple of $\Delta_u/\hbar$. From the positions of the resonances, we can deduce that the changes in the mechanical excitation energy levels are perturbatively small in the weak coupling limit. The widths of the peaks are usually very small, with Q factors sometimes reaching as large as $\sim 10^7$. The widths increase with the magnetic field as $B^2$, which may be easily understood as a broadening effect due to the electromechanical coupling. There is no resonance at even integer multiples of $\Delta_u/\hbar$ for the following reason. The magnetic flux swept by the $m$th normal mode is $B\int_0^L u_m(x) dx$, but this always vanishes for an even integer $m$ because the areas above and below the wire cancel each other. As the magnetic field grows, the coupling becomes stronger. The positions and shapes of the resonance peaks change substantially, and some peaks even get wiped out. In the limit of a strong magnetic field ($\omega_B\gg\omega/\eta$), there are well-distinguishable sharp peaks at $\omega\approx m^2\Delta_\theta\Delta_u/\hbar^2\omega_B$. This is in good agreement with the dispersion relation for the lower-energy gapless branch of the strongly coupled system with $q=m\pi/L$.[@ahn2004a] If $\hbar\omega_B\gg\Delta_\theta,\Delta_u$, there are also broad peaks that develop near $\omega\sim\omega_B$, which is a result of the crossover between the weak and strong coupling regimes.
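The even-$m$ flux cancellation invoked above is easy to verify directly. The sketch below assumes clamped-string normal modes $u_m(x)\propto\sin(m\pi x/L)$, an assumed mode shape consistent with standing waves on a wire held at both ends, and integrates the swept area per unit field:

```python
import numpy as np

# Swept area per unit field for the m-th mode, S_m = \int_0^L u_m(x) dx, with
# the assumed mode shape u_m(x) = sin(m*pi*x/L).  Analytically
# S_m = (L/(m*pi)) * (1 - cos(m*pi)): 2L/(m*pi) for odd m, exactly 0 for even m.
L = 1.0

def swept_area(m, n=200_001):
    x = np.linspace(0.0, L, n)
    f = np.sin(m * np.pi * x / L)
    return float(np.sum((f[:-1] + f[1:]) * 0.5 * (x[1] - x[0])))  # trapezoid rule

areas = {m: swept_area(m) for m in (1, 2, 3, 4)}
# Even modes sweep zero net flux, so they decouple from the magnetic field,
# which is why no resonance appears at even multiples of Delta_u/hbar.
```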
Now let us discuss the effect of the electron-electron interaction. From Eq. (\[eq:ReY\]), it is easy to see that $\mathrm{Re}\,Y(\omega)$ depends on $K$ only through an overall coefficient of $\Gamma(\omega)$. In general, $\mathrm{Re}\,Y(\omega)$ decreases as the interaction becomes stronger, except right on resonance, where $\mathrm{Re}\,Y(\omega)$ becomes independent of the interaction strength because $\Gamma(\omega)=0$. Well away from resonances \[$\Gamma(\omega)\gg\omega$\], the value of $\mathrm{Re}\,Y(\omega)$ is approximately proportional to $K^2$. All quantities discussed so far have no temperature dependence, because the action of our theory is completely quadratic. This will no longer hold if, for example, the contacts between the wire and the leads are not perfectly transmitting. In that case, the action will no longer be quadratic and there will be nonvanishing thermal effects in general. For partially transmitting contacts, we also have to consider the effects of charge quantization and Coulomb blockade, although carefully adjusting a backgate voltage may lift the Coulomb blockade.[@kane1992b] Although Ref.  studies the opposite limit of weakly coupled leads, the results agree with ours in the sense that the electron transmission is suppressed by magnetic fields, because this is a direct consequence of the back-reaction. In summary, we have shown that the quantum mechanically coupled electronic and mechanical degrees of freedom of a one-dimensional wire oscillating in a magnetic field may be studied in an exactly solvable model. Calculating the admittance, we have found that the interplay of three important energy scales, the oscillation energy level spacing $\Delta_u$, the Tomonaga-Luttinger liquid energy level spacing $\Delta_\theta$, and the magnetic energy $\hbar\omega_B$, may result in rich and interesting consequences, such as sharp resonance peaks and a crossover determined by the relative magnitude of the frequency and the magnetic coupling strength.
From the positions and shapes of the resonance peaks, we may extract information such as the magnetic field, the electron-electron interaction strength, and the standing-wave energy levels, which may be used in applications such as nanosensors. This work was supported by the Korean Research Foundation Grant funded by the Korean Government (MEST, Basic Research Promotion Fund, KRF-2007-331-C00110) (H.Y.) and the Korea Science and Engineering Foundation (KOSEF) grant funded by the Korean government (MEST, No. R01-2007-000-10837-0) (K.H.A.). ![Schematic figure of the setup. A nanowire (thick solid line) is suspended between two metallic gates and oscillates about its equilibrium position. It is influenced by a perpendicular magnetic field $\mathbf{B}$.[]{data-label="fig:setup"}](fig-setup){width="35.00000%"} ![Left panel shows the density plot of the real part of the admittance $\mathrm{Re}\,Y$ as a function of $\omega/\Delta_u$ and $\omega_B/\Delta_u$, where $\omega_B=B\sqrt{v_Fe^2/\pi\hbar\rho c^2}$, $\Delta_u=\pi\hbar v_s/L$, and $\omega$ is the ac bias frequency. The parameters used here are $\eta=\Delta_u/\Delta_\theta=v_s/v=0.1$ and $K=0.22$. Lighter regions correspond to higher values. The curves in the right panel show cross sections of the density plot at several different magnetic fields; from bottom to top, $\omega_B/\Delta_u = 0,\ 0.2,\ 9.8$, and $26.5$. Curves are offset for better viewing.[]{data-label="fig:Y(w)"}](fig-admittance){width="\textwidth"}
--- abstract: 'High-redshift quasars typically have their redshift determined from rest-frame ultraviolet (UV) emission lines. However, these lines, and more specifically the prominent C [iv]{} $\lambda 1549$ emission line, are typically blueshifted, yielding highly uncertain redshift estimates compared to redshifts determined from rest-frame optical emission lines. We present near-infrared spectroscopy of 18 luminous quasars at high redshift that allows us to obtain reliable systemic redshifts for these sources. Together with near-infrared spectroscopy of an archival sample of 44 quasars with comparable luminosities and redshifts, we provide prescriptions for correcting UV-based redshifts. Our prescriptions reduce velocity offsets with respect to the systemic redshifts by $\sim140$ km s$^{-1}$ and reduce the uncertainty on the UV-based redshift by $\sim25\%$ with respect to the best method currently used for determining such values. We also find that the redshifts determined from the Sloan Digital Sky Survey Pipeline for our sources suffer from significant uncertainties, which cannot be easily mitigated. We discuss the potential of our prescriptions to improve UV-based redshift corrections given a much larger sample of high-redshift quasars with near-infrared spectra.' author: - 'Cooper Dix, Ohad Shemmer, Michael S. Brotherton, Richard F. Green, Michelle Mason, Adam D. Myers,' title: 'Prescriptions for Correcting Ultraviolet-Based Redshifts for Luminous Quasars at High Redshift' --- Introduction {#sec:intro} ============ The best practical indicators for a quasar’s systemic redshift ($z_{\rm sys}$) lie in the rest-frame optical band, particularly the prominent \[O [iii]{}\] $\lambda 5007$, Mg [ii]{} $\lambda 2800$, and the Balmer emission lines [e.g., @2005AJ....130..381B; @2016ApJ...831....7S].
However, at high redshift ($z \gtrsim 0.8$), quasars typically have their $z_{\rm sys}$ values determined from rest-frame ultraviolet (UV) spectra, since only $0.1\%$ of these quasars have corresponding rest-frame optical information from near-infrared (NIR) spectra . Unfortunately, the UV-based $z_{\rm sys}$ estimates are highly inaccurate and imprecise given that the UV emission lines are usually blueshifted by up to $\approx3000$ km s$^{-1}$ [e.g., @1982ApJ...263...79G; @1992ApJS...79....1T; @2009ApJ...692..758G; @2016ApJ...831....7S]. Mitigating these biases requires identifying robust corrections to UV-based redshifts.\ Reliable redshift estimates are needed for multiple reasons. For example, accurate redshift estimates provide information on the kinematics of the outflowing material in the vicinity of the supermassive black hole, which likely impacts the star formation rate in the quasar’s host galaxy [e.g., @2010MNRAS.401....7H]. Additionally, various cosmological studies utilize conversions between redshift differences and distances [e.g., @1999astro.ph..5116H; @2019MNRAS.482.3497Z]. In this context, a velocity offset of 500 km s$^{-1}$ corresponds to a comoving distance of $\approx5h^{-1}$ Mpc at $z=2.5$, which can impact our understanding of, e.g., quasar clustering, as velocity offsets can be misinterpreted as distances in the redshift direction [e.g., @2013JCAP...05..018F; @2013ApJ...776..136P].\ The Sloan Digital Sky Survey [SDSS; @2000AJ....120.1579Y] provides observed-frame optical spectra and redshifts for hundreds of thousands of quasars. The redshifts determined for these quasars stem from a cross-correlation with a composite quasar template spectrum provided by @2001AJ....122..549V. However, these estimates become increasingly uncertain for high-redshift quasars because mostly rest-frame UV emission lines are present in the optical band. The first meaningful correction to these UV-based redshifts was achieved by @2010MNRAS.405.2302H [hereafter HW10].
They achieved this by introducing a two-part linear relation between the absolute magnitude and redshift of quasars. A more recent improvement to the HW10 method was achieved by @2017MNRAS.469.4675M [hereafter M17], by comparing \[O [iii]{}\]-based $z_{\rm sys}$ values with the spectral properties of the C [iv]{} $\lambda 1549$ emission line for 45 quasars with $z \gtrsim 2.2$.\ In this work, we expand on the M17 method by adding high-quality NIR spectra of 18 quasars at high redshift. We perform multiple regression analyses and provide improved prescriptions for correcting a variety of UV-based redshifts when the C [iv]{} line is available in the spectrum. This paper is organized as follows. In Section \[sec:2\], we describe our sample selection, observations, and data analysis. In Section \[sec:3\], we present our spectroscopic measurements, and in Section \[sec:4\] we discuss our results. Our conclusions are presented in Section \[sec:5\]. Throughout this paper, we compute luminosity distances using $H_{0} = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm M} = 0.3$, and $\Omega _{\Lambda} = 0.7$ [e.g., @2007ApJS..170..377S].
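The velocity-to-distance conversion quoted in the Introduction (500 km s$^{-1}$ corresponding to $\approx5h^{-1}$ Mpc at $z=2.5$) can be checked with the standard relation $\Delta\chi = \Delta v\,(1+z)/H(z)$ for the flat cosmology adopted above; the short sketch below is only illustrative.

```python
import math

# Comoving separation mimicked by a line-of-sight velocity offset:
# dchi = dv * (1 + z) / H(z), with H(z) = 100*h*sqrt(Om*(1+z)^3 + OL) km/s/Mpc
# for the flat Lambda-CDM parameters adopted in the text (Om = 0.3, OL = 0.7).
Om, OL = 0.3, 0.7
z, dv = 2.5, 500.0                                     # redshift, offset in km/s

Hz_over_h = 100.0 * math.sqrt(Om * (1 + z) ** 3 + OL)  # km/s per h^-1 Mpc
dchi = dv * (1 + z) / Hz_over_h                        # in h^-1 Mpc
# dchi evaluates to ~4.8 h^-1 Mpc, i.e. the ~5 h^-1 Mpc quoted above.
```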
[lccccclc]{} SDSS J013435.67$-$093102.9 & 2.225 & 1 & 2.214 & 14.8 & 13.6 & 2016 Aug 25 & 2880\ SDSS J014850.64$-$090712.8 & 3.303 & 1 & 3.329 & 16.7 & 15.5 & 2016 Sep 19 & 4800\ SDSS J073607.63$+$220758.9 & 3.464 & 2 & 3.445 & 16.1 & 14.9 & 2016 Sep 20 & 3840\ & & && & & 2016 Sep 22 & 3840\ SDSS J142243.02$+$441721.2 & 3.530 & 1 & 3.651 & 15.2 & 14.4 & 2016 Sep 7 & 1920\ SDSS J153750.10$+$201035.7 & 3.413 & 3 & 3.413 & 15.7 & 15.4 & 2016 Sep 22 & 3840\ SDSS J153830.55$+$085517.0 & 3.563 & 1 & 3.550 & 15.6 & 14.6 & 2016 Sep 19 & 1920\ SDSS J154359.43$+$535903.1 & 2.379 & 1 & 2.364 & 15.0 & 14.2 & 2016 Sep 21 & 2880\ SDSS J154446.33$+$412035.7 & 3.551 & 1 & 3.567 & 15.6 & 15.5 & 2016 Sep 20 & 3840\ SDSS J154938.71$+$124509.1 & 2.377 & 4 & 2.369 & 14.5 & 13.5 & 2016 Sep 5 & 1920\ SDSS J155013.64$+$200154.5 & 2.196 & 1 & 2.188 & 15.1 & 14.2 & 2016 Sep 19 & 2400\ SDSS J160222.72$+$084538.4 & 2.276 & 1 & 2.275 & 15.0 & 14.0 & 2016 Sep 6 & 2880\ SDSS J163300.13$+$362904.8 & 3.575 & 1 & 3.570 & 15.5 & 15.1 & 2016 Sep 22 & 2640\ SDSS J165137.52$+$400218.9 & 2.342 & 1 & 2.338 & 15.0 & 13.7 & 2016 Sep 6 & 2880\ SDSS J172237.85$+$385951.8 & 3.390 & 2 & 3.367 & 16.0 & 15.3 & 2016 Sep 19 & 3840\ SDSS J210524.47$+$000407.3 & 2.307 & 1 & 2.344 & 14.7 & 13.8 & 2016 Aug 26 & 1920\ SDSS J212329.46$-$005052.9 & 2.268 & 1 & 2.270 & 14.6 & 13.9 & 2016 Sep 5 & 1920\ SDSS J221506.02$+$151208.5 & 3.285 & 2 & 3.284 & 16.4 & 15.2 & 2016 Aug 26 & 3840\ SDSS J235808.54$+$012507.2 & 3.401 & 2 & 3.389 & 14.7 & 13.8 & 2016 Aug 26 & 2880 \[h\] Sample Selection, Observations, and Data Analysis {#sec:2} ================================================= We have selected a sample of 18 quasars for our investigation based upon the following criteria:\ 1. Availability of a flux-calibrated optical spectrum from the SDSS recorded in the Data Release 10 quasar catalog . 2. 
Brightness in the range $m_{i} < 18.5$ in order to keep the signal-to-noise (S/N) ratio of the H$\beta$ region of the respective NIR spectrum, obtained with a 3.8m telescope, at $\approx40$. 3. Redshift within one of the following intervals, $2.15~<~z~<~2.65$[^1] and $3.20 < z < 3.70$, in which, at a minimum, the H$\beta$ and \[O [iii]{}\] lines can be modeled accurately within one of the near-infrared transmission windows in the $H$ or $K$ bands. [lccccccccc]{} SDSS J013435.67$-$093102.9 & 4438 & 99.7 & 15656 & 1625 & 14.6 & 16091 & 2882 & 444 & 21125\ SDSS J014850.64$-$090712.8 & 4716 & 33.7 & 21035 & 1513 & 4.3 & 21680 & & &\ SDSS J073607.63$+$220758.9 & 6876 & 94.3 & 21625 & 1640 & 31.6 & 22256 & & &\ SDSS J142243.02$+$441721.2 & 4563 & 39.9 & 22607 & & & & & &\ SDSS J153750.10$+$201035.7 & 5107 & 69.5 & 21516 & 1613 & 14.6 & 22094 & & &\ SDSS J153830.55$+$085517.0 & 5512 & 70.8 & 22161 & 3192 & 26.1 & 22782 & & &\ SDSS J154359.43$+$535903.1 & 8301 & 54.3 & 16495 & 1835 & 28.6 & 16843 & 7495 & 543 & 22171\ SDSS J154446.33$+$412035.7 & 7235 & 132.4 & 22202 & & & & & &\ SDSS J154938.71$+$124509.1 & 5495 & 42.4 & 16408 & 1544 & 15.4 & 16866 & 5550 & 374 & 22139\ SDSS J155013.64$+$200154.5 & 6539 & 61.9 & 15544 & 1325 & 7.5 & 15960 & 5178 & 391 & 20962\ SDSS J160222.72$+$084538.4 & 6676 & 122.3 & 15951 & 2387 & 19.5 & 16398 & 5629 & 586 & 21517\ SDSS J163300.13$+$362904.8 & 4876 & 57.8 & 22297 & 3768 & 24.6 & 22884& & &\ SDSS J165137.52$+$400218.9 & 4405 & 65.6 & 16234 & 957.8 & 18.5 & 16713 & 4380 & 377 & 21920\ SDSS J172237.85$+$385951.8 & 5938 & 67.9 & 21300 & 3028 & 13.9 & 21866 & & &\ SDSS J210524.47$+$000407.3 & 5331 & 25.3 & 16256 & & & & 4530 & 281 & 21975\ SDSS J212329.46$-$005052.9 & 4500 & 48.1 & 15929 & & & & 4084 & 319 & 21540\ SDSS J221506.02$+$151208.5 & 4059 & 100.0 & 20840 & 956.9 & 61.7 & 21450 & & &\ SDSS J235808.54$+$012507.2 & 3702 & 63.3 & 21397 & 2652 & 11.6 & 21974& & & Spectroscopic observations of this sample were performed at the United 
Kingdom Infrared Telescope (UKIRT) on Mauna Kea, Hawaii. The observation log and quasar basic properties appear in Table \[tab:log\].\ We utilized the UKIRT Imager-Spectrometer (UIST) with a slit width of 0.24 arcsec to maximize the resolution at the expense of potentially higher slit losses. During these observations, the telescope was nodded in an ABBA pattern in order to obtain primary background subtraction. The broad band B2 filter was used in order to obtain a wavelength range of approximately $1.395 - 2.506~ \mu$m, spanning the $H$ and $K$ bands as necessary. The dispersion for these observations was $10.9$ [Å]{} pixel$^{-1}$ with a spectral resolution of $R \sim 448$. Standard stars of spectral types G and F were observed on each night alongside the quasar in order to remove the telluric features that are present in the quasars’ spectra.\ The two-dimensional spectra of the quasars and the standard stars were obtained using standard IRAF[^2] routines. Each of the objects was initially pair subtracted in order to remove most of the background noise. Then, both the positive and negative residual peaks were analyzed and averaged together. During the analysis, wavelength calibration was achieved using argon arc lamps. The hydrogen features in each standard star were removed prior to removing the telluric features from the quasars’ spectra.\ Removal of the telluric features and the instrumental response from the quasar spectra was done by dividing these spectra by their respective standard star spectra. Then, any remaining cosmic ray signatures on the quasar spectra were carefully removed.
Final, flux-calibrated quasar spectra were obtained by multiplying these data by blackbody curves with temperatures corresponding to the spectral types of the telluric standards and by a constant factor that was determined by comparing the $H$, for $2.15 < z < 2.65$, or $K$, for $3.20 < z < 3.70$, band magnitudes from the Two Micron All Sky Survey [2MASS; @2006AJ....131.1163S] to the integrated flux across the respective band using the flux conversion factors from Table A.2 of . We do not rely on the telluric standards for the purpose of flux calibration given the relatively narrow slit and the differences in atmospheric conditions between the observations of the quasars and their respective standard stars. For each source, we utilized its SDSS spectrum to verify that the combined SDSS and UKIRT spectra are consistent with a typical quasar optical-UV continuum of the form $f_{\nu} \propto \nu^{-0.5}$ [@2001AJ....122..549V]. By comparing the flux densities at the rest-frame wavelength of 5100 Å to the flux densities at a rest-frame wavelength in the region of 2000 to 3500 Å, depending on the redshift, in the SDSS spectrum of each source, we verified that the differences between the two values were within $30\%$, indicating, at most, only modest flux variations. Such variations, over a temporal baseline of $\sim6$ years in the rest-frame, are not atypical for such luminous quasars, even if most of these variations are intrinsic as opposed to measurement errors [see, e.g., @2007ApJ...659..997K].\ Fitting of the UKIRT Spectra {#sec:Fit} ---------------------------- In order to fit the H$\beta$ and H$\alpha$ spectral regions, we used a model consisting of a local, linear continuum, which is a good approximation to a power-law continuum given the relatively narrow spectral band, a broadened @1992ApJS...80..109B Fe [ii]{} emission template, and a multi-Gaussian fit to the emission lines.
The Fe [ii]{} template was broadened by a FWHM value that was free to vary between $2000$ and $10000$ km s$^{-1}$ and, along with the linear continuum, was removed to more accurately fit the H$\beta$ and \[O [iii]{}\] emission lines. The FWHM chosen to broaden the Fe [ii]{} template was determined with a least-squares analysis.\ We fit the H$\beta$ line using two independent Gaussians, constrained by the width and height of the emission line, simultaneously with one Gaussian for each of the \[O [iii]{}\] emission lines. The Gaussians assigned to the \[O [iii]{}\] emission lines have identical widths, and their intensity ratio was fixed to $I($\[O [iii]{}\] $ \lambda 5007)/I($\[O [iii]{}\] $\lambda 4959) = 3$. The wavelengths of the two \[O [iii]{}\] components were fixed to the ratio 5007/4959. For the available H$\alpha$ features, two Gaussians were fit after a linear continuum was fit and subtracted around the emission line. We do not detect any \[N [ii]{}\] emission lines while fitting this region, mainly given our low spectral resolution. The Gaussians were constrained so that the line peak would lie within 1,500 km s$^{-1}$ of the wavelength that corresponded to the maximum of the emission line region, the widths could range from 0 km s$^{-1}$ to 15,000 km s$^{-1}$, and the flux density was restricted to lie between 0 and twice the maximum value of the emission line.\ To estimate the uncertainties on the FWHM and rest-frame equivalent width (EW) of the emission lines, we performed the fitting by adjusting the placement of the continuum according to the noise level in the continuum [see, e.g., @2015ApJ...805..124S]. Namely, by adjusting the local linear continuum between extremes of the noise around each emission line, we were able to derive an estimate for the uncertainties on the FWHM and EW values. For all but two of the sources, the uncertainties on the values of FWHM and EW in the H$\beta$ region are on the order of $\sim5$-$15\%$.
For and , these uncertainties are on the order of $\sim40\%$. Similarly, the uncertainties on the FWHM and EW values for the H$\alpha$ emission line are up to $\sim5\%$.\ The uncertainties on the wavelengths of the peaks of all the emission lines are up to $\sim300$ km s$^{-1}$. The majority of this uncertainty arises from the resolution of our spectrograph; however, our choice of a narrow slit helps to mitigate this. The uncertainty introduced from the pixel-wavelength calibration is minimal, averaging $\sim5$ km s$^{-1}$. The narrow \[O [iii]{}\] $\lambda5007$ emission line provided our most accurate redshift estimates, with wavelength measurement uncertainties averaging $\sim150$ km s$^{-1}$. The wavelength uncertainties were determined from our S/N ratio and from repeated measurements of each of the emission lines.\ Basic spectral properties resulting from those fits are reported in Table \[tab:beta\]. Columns (2), (3), and (4) provide the FWHM, EW, and the observed-frame wavelength of the peak ($\lambda_{\rm peak}$) of the H$\beta$ line, respectively. Columns (5–7) and (8–10) provide similar information for the \[O [iii]{}\] $\lambda5007$ and H$\alpha$ emission lines, respectively. The fits for the H$\beta$ and \[O [iii]{}\] emission lines appear in Figure \[fig:fit1\], and the fits for the H$\alpha$ emission line appear in Figure \[fig:ha\].\ Spectral Fitting of the C [iv]{} Emission Lines {#sec:c4} ----------------------------------------------- In order to provide corrections to the UV-based redshifts of our sources, we fit the C [iv]{} emission lines present in their SDSS spectra. These fits appear in Figure \[fig:c4\].
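The tied-doublet constraints described in Section \[sec:Fit\] (identical widths, 3:1 intensity ratio, and the wavelength ratio fixed to 5007/4959) can be sketched with a standard nonlinear least-squares fit. The synthetic spectrum, noise level, and starting guesses below are purely illustrative and are not taken from our data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the tied [O iii] doublet model: both components share one width,
# the 4959 component has 1/3 the intensity of 5007, and its centroid follows
# from the fixed 4959/5007 wavelength ratio.  The rest wavelengths are the
# usual air values; the "observed" spectrum here is synthetic.
L5007, L4959 = 5006.84, 4958.91

def gauss(x, amp, mu, sig):
    return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)

def oiii_doublet(x, amp7, mu7, sig):
    mu9 = mu7 * (L4959 / L5007)          # centroid tied through the wavelength ratio
    return gauss(x, amp7, mu7, sig) + gauss(x, amp7 / 3.0, mu9, sig)

rng = np.random.default_rng(0)
x = np.linspace(4900.0, 5100.0, 400)
true_params = (10.0, 5012.0, 4.0)        # amplitude, centroid, width (illustrative)
y = oiii_doublet(x, *true_params) + 0.05 * rng.normal(size=x.size)

popt, pcov = curve_fit(oiii_doublet, x, y, p0=(8.0, 5010.0, 5.0))
```

Tying the weaker component to the stronger one in this way reduces the number of free parameters and keeps the doublet physically consistent even when the 4959 line is barely above the noise.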
As suggested in M17, the parameters needed for the correction of the UV-based redshifts are the FWHM and EW of the C [iv]{} line, as well as the monochromatic luminosity of the continuum at a rest-frame wavelength of $1350$ Å.\ The C [iv]{} emission line was fit with a local, linear continuum and two independent Gaussians under the same constraints as we report for the H$\beta$ and H$\alpha$ emission lines. The spectral properties resulting from this fitting procedure are reported in Table \[tab:c4\]. The uncertainties in each of these measurements were determined by the same method used when evaluating the rest-frame optical emission line uncertainties. Along with this fit, the continuum luminosity, $L_{1350}$, has also been derived by measuring the continuum flux density at rest-frame $\lambda 1350$ Å and employing our chosen cosmology. These values also appear in Table \[tab:c4\]. [lccccccc]{} SDSS J013435.67$-$093102.9 & 1045 & & & & & &\ SDSS J014850.64$-$090712.8 & 9545 & 16.3 & 47.0 & 8490 & 19.2 & 47.0 & 6657\ SDSS J073607.63$+$220758.9 & & & & 2496 & 10.0 & 46.8 & 6872\ SDSS J142243.02$+$441721.2 & 12475 & 20.8 & 47.0 & 12326 & 17.9 & 47.0 & 7082\ SDSS J153750.10$+$201035.7 & 6080 & 37.9 & 47.1 & 5886 & 33.3 & 47.1 & 6824\ SDSS J153830.55$+$085517.0 & 5754 & 27.1 & 47.5 & 5279 & 26.2 & 47.4 & 7023\ SDSS J154359.43$+$535903.1 & 4713 & 42.6 & 47.0 & 4553 & 36.9 & 46.9 & 5211\ SDSS J154446.33$+$412035.7 & 15266 & 192.3 & 46.3 & 7350 & 34.4 & 46.6 & 7001\ SDSS J154938.71$+$124509.1 & 4207 & 24.2 & 46.6 & 4740 & 19.6 & 46.5 & 5233\ SDSS J155013.64$+$200154.5 & 4273 & 42.6 & 47.0 & 4858 & 37.4 & 46.9 & 4942\ SDSS J160222.72$+$084538.4 & 4150 & 27.8 & 47.0 & 5615 & 30.7 & 47.0 & 5065\ SDSS J163300.13$+$362904.8 & 6963 & 34.9 & 46.9 & 6614 & 42.0 & 46.8 & 7067\ SDSS J165137.52$+$400218.9 & 2818 & 49.9 & 46.9 & 2297 & 45.2 & 46.9 & 5172\ SDSS J172237.85$+$385951.8 & & & & 7208 & 31.1 & 46.8 & 6745\ SDSS J210524.47$+$000407.3 & 12603 & 36.4 & 47.1 & 7990 & 11.9 & 
46.8 & 5098\ SDSS J212329.46$-$005052.9 & 8549 & 16.2 & 47.4 & 8168 & 18.5 & 47.3 & 5050\ SDSS J221506.02$+$151208.5 & & & & 2094 & 35.8 & 46.7 & 6638\ SDSS J235808.54$+$012507.2 & & & & 5728 & 20.2 & 47.1 & 6761 Results {#sec:3} ======= Combined with the sources in M17, we have a total of 63 objects in our sample, of which six of our UKIRT objects were excluded from further analysis due to broad absorption line (BAL)[^3] identification; these are noted in Table \[tab:log\]. We then remove an additional BAL quasar, , from the sample in M17. Furthermore, we have excluded SDSS J013435.67$-$093102.9 from our sample given that it is a lensed quasar and its rest-frame UV spectrum is severely attenuated by the foreground lensing galaxy [see e.g., @2006ApJ...641...70O]. Measurements of the C [iv]{} emission line for 52 out of the 55 sources in our combined sample are available in @2011ApJS..194...45S. The C [iv]{} FWHM and EW measurements we obtained for 40 of these sources agree to within $\sim 20\%$ with those from @2011ApJS..194...45S, while the measurements for 10 of these sources agree to within $\sim 65\%$. Generally, these discrepancies are inversely proportional to the signal-to-noise ratios of the SDSS spectra and are larger in the presence of narrow absorption lines. The spectra for and had extremely poor signal-to-noise ratios, resulting in discrepancies of $108\%$ and $53\%$ for FWHM, and $57\%$ and $210\%$ for EW, respectively, between our measured values and the ones reported in @2011ApJS..194...45S. Substituting our values with the ones reported in @2011ApJS..194...45S for these objects did not have a significant impact on further analysis.\ The observed-frame wavelength of the peak of the C [iv]{} emission line was compared to the value predicted by the systemic redshift ($z_{\rm sys}$) to determine the velocity offset of this line. We determine $z_{\rm sys}$ from the line peak of the emission line with the smallest measurement uncertainty.
In order of preference, we take our systemic redshift from \[O [iii]{}\] ($\sim50$ km s$^{-1}$), Mg [ii]{} ($\sim200$ km s$^{-1}$), and H$\beta$ ($\sim400$ km s$^{-1}$) [@2016ApJ...831....7S]. The C [iv]{} velocity offsets are shown and reported in Figure \[fig:voff\] and Table \[tab:red\], respectively. In Table \[tab:red\], we also report the redshift measurements provided for these sources in HW10 and , where applicable. The velocity offsets introduced by these redshifts with respect to $z_{\rm sys}$ are presented in Figure \[fig:voff\] and Table \[tab:red\]. In addition to the velocity offsets for the sources in our UKIRT sample, the velocity offsets from Table 1 of M17 have been included in the following regression analysis. The C [iv]{} emission line properties for the M17 sample are reported in Table \[tab:Mason\].\ We note that the $\Delta v_{\text{C~{\sc iv}}}$ values used in M17 differ from the $\Delta v_{\text{C~{\sc iv}}}$ values we compute for the M17 sample, since M17 used the $\Delta v_{\text{C~{\sc iv}}}$ values from @2011ApJS..194...45S, combined with the redshift determined from the SDSS Pipeline, in order to find $z_{\text{C~{\sc iv}}}$. Our $\Delta v_{\text{C~{\sc iv}}}$ values follow directly from the measurement of $\lambda_{\rm peak}$ (C [iv]{}) and our derived $z_{\rm sys}$. The origin of the discrepancies between the two velocity offsets used stems from the uncertainty in the $\Delta v_{\text{C~{\sc iv}}}$ values discussed in @2011ApJS..194...45S.
The differences between the $\Delta v_{\text{C~{\sc iv}}}$ values we use and those used by M17 are rather small, and using the latter values does not change our results significantly.\ A multiple regression analysis has been performed on the velocity offsets and the C [iv]{} emission line properties such that: $$\label{eq:coeff} \begin{split} \Delta v~(\text{km}~\text{s}^{-1})= \alpha \log_{10}(\text{FWHM}_{\text{C~{\sc iv}}}) \\ + \beta \log_{10}(\text{EW}_{\text{C~{\sc iv}}}) + \gamma \log_{10}(L_{1350}) \end{split}$$ where $\Delta v$ is the velocity offset and $\alpha$, $\beta$, and $\gamma$ are the coefficients associated with our regression analysis. The velocity offset produced by each redshift derivation method was determined by the following equation: $$\begin{split} \Delta v = c \bigg( \frac{z_{\rm meas} - z_{\rm sys}}{1 + z_{\rm sys}} \bigg), \end{split}$$ where $z_{\rm meas}$ is the redshift derived using the various methods and reported in the studies indicated below. In order to derive the most reliable redshift correction, four regressions were performed using the following parameters from Equation \[eq:coeff\]: 1. $\log_{10}(\text{FWHM}_{\text{C~{\sc iv}}})$, $\log_{10}(\text{EW}_{\text{C~{\sc iv}}})$ 2. $\log_{10}(\text{FWHM}_{\text{C~{\sc iv}}})$, $\log_{10}(L_{1350})$ 3. $\log_{10}(\text{EW}_{\text{C~{\sc iv}}})$, $\log_{10}(L_{1350})$ 4. All three parameters In total, this regression analysis was performed on redshifts determined from: 1) the measured line peak of the C [iv]{} emission line, 2) HW10, and 3) the SDSS Pipeline. The coefficients, errors, and confidence statistics from Equation \[eq:coeff\], determined in each of these cases, are reported in Table \[tab:coeff1\].
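Equation \[eq:coeff\] is an ordinary least-squares problem (with no intercept, as written). A minimal sketch on synthetic data is shown below; the mock FWHM, EW, and $L_{1350}$ ranges and the noise level are hypothetical, with the "true" coefficients borrowed from the three-parameter C [iv]{} row of Table \[tab:coeff1\] only so the round trip has something to recover:

```python
import numpy as np

# Ordinary least-squares sketch of Eq. [eq:coeff] (no intercept, as written):
# solve Delta v = alpha*log10(FWHM) + beta*log10(EW) + gamma*log10(L1350).
# Predictor ranges and the scatter amplitude are hypothetical mock values.
rng = np.random.default_rng(1)
n = 55                                   # size of the combined sample
logFWHM = rng.uniform(3.2, 4.2, n)       # log10 FWHM(C iv) [km/s]
logEW = rng.uniform(1.0, 2.1, n)         # log10 EW(C iv) [Angstrom]
logL = rng.uniform(45.5, 47.5, n)        # log10 L1350 [erg/s]
X = np.column_stack([logFWHM, logEW, logL])

true_coef = np.array([-3670.0, 1604.0, 217.0])   # (alpha, beta, gamma)
dv = X @ true_coef + rng.normal(0.0, 100.0, n)   # offsets plus measurement scatter

coef, *_ = np.linalg.lstsq(X, dv, rcond=None)    # fitted (alpha, beta, gamma)
dv_corrected = dv - X @ coef                     # residual offsets after correction
```

The spread of the corrected offsets is far smaller than that of the raw offsets, which is the sense in which the prescriptions of this work "reduce" the UV-based redshift uncertainty.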
For the confidence statistics, we report the $t$-value [e.g., @sheskin07] to determine the importance of each individual parameter.\ The residuals of the velocity offsets after each correction have been analyzed, and basic statistics resulting from these residuals are listed in Table \[tab:stats\]. The residuals before and after correction are presented in Figure \[fig:res\]. The residual distributions show the significant reduction in the velocity offsets achieved by each correction. The corrected velocity offsets for C [iv]{}- and HW10-based redshifts are closer to zero than the corrected velocity offsets for the SDSS Pipeline-based redshifts, reflecting the larger $\sigma$ value associated with the SDSS Pipeline redshift estimates. From evaluating the best-fitting coefficients and statistics reported for each correction, we determined the correction that we consider to provide the most reliable results. This correction is emphasized in bold face in the text.\ SDSS J142243.02$+$441721.2 and {#sec:out} ------------------------------- SDSS J142243.02$+$441721.2 from our UKIRT sample has significantly larger velocity offsets compared to the rest of the combined sample. The velocity offsets determined from C [iv]{}, HW10, and the SDSS Pipeline are km s$^{-1}$, km s$^{-1}$, and km s$^{-1}$, respectively. The latter velocity offset stems from a misidentification of spectral features in the SDSS spectrum of the source as manifested by the SDSS Pipeline products. The SDSS Pipeline redshift for this source is $z = 3.396$ while the SDSS Visual Inspection value is $z = 3.615$. The disparity between these estimates confirms the misidentification of the emission lines by the SDSS Pipeline.
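The scale of this Pipeline misidentification can be made concrete with the velocity-offset relation used throughout this work, simply differencing the two redshifts quoted above (treating the Visual Inspection value as the reference); this is a rough illustration, not a tabulated measurement:

```python
# Velocity offset between the SDSS Pipeline and Visual Inspection redshifts of
# SDSS J142243.02+441721.2, via dv = c * (z_meas - z_ref) / (1 + z_ref).
c = 299792.458                    # speed of light, km/s
z_pipe, z_vi = 3.396, 3.615       # Pipeline vs. Visual Inspection redshifts

dv = c * (z_pipe - z_vi) / (1.0 + z_vi)   # km/s
# |dv| comes out around 1.4e4 km/s, far larger than typical C iv blueshifts of
# up to ~3000 km/s, flagging a line misidentification rather than an outflow.
```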
Because the velocity offsets for this source had a significant impact on the regression analysis and may be misleading, we have provided the results of the regression analysis with and without this object in Table \[tab:stats\].\ The velocity offset of SDSS J115954.33$+$201921.1 from the M17 sample, with respect to the redshift determined by the SDSS Pipeline, is $-10642$ km s$^{-1}$, which is significantly larger than the respective values of the combined sample, excluding SDSS J142243.02$+$441721.2. It was also removed from the SDSS Pipeline regression, as discussed further in Section \[sec:4\]. Here too, the disparity between the SDSS Pipeline redshift value ($z = 3.330$) and the respective Visual Inspection value ($z = 3.425$) indicates a misidentification of spectral features by the SDSS Pipeline. [lcccccc]{} SDSS J013435.67$-$093102.9 & 2.214 & & & & 2.225 & 1029\ SDSS J014850.64$-$090712.8 & 3.274 & -3786 & 3.290 & -2691 & 3.303 & -1796\ SDSS J073607.63$+$220758.9 & 3.436 & -607 & 3.464 & 1285 & &\ SDSS J142243.02$+$441721.2 & 3.572 & -5097 & 3.397 & -16384 & 3.531 & -7740\ SDSS J153750.10$+$201035.7 & 3.405 & -544 & & & &\ SDSS J153830.55$+$085517.0 & 3.535 & -989 & 3.537 & -856 & 3.563 & 858\ SDSS J154359.43$+$535903.1 & 2.365 & 89 & 2.370 & 536 & 2.379 & 1341\ SDSS J154446.33$+$412035.7 & 3.520 & -3087 & 3.569 & 131 & 3.551 & -1049\ SDSS J154938.71$+$124509.1 & 2.378 & 801 & 2.355 & -1244 & &\ SDSS J155013.64$+$200154.5 & 2.190 & 188 & 2.194 & 565 & 2.196 & 754\ SDSS J160222.72$+$084538.4 & 2.270 & -458 & & & 2.276 & 92\ SDSS J163300.13$+$362904.8 & 3.562 & -525 & 3.538 & -2093 & 3.575 & 328\ SDSS J165137.52$+$400218.9 & 2.339 & 90 & 2.341 & 270 & 2.342 & 360\ SDSS J172237.85$+$385951.8 & 3.350 & -1168 & 3.390 & 1584 & &\ SDSS J210524.47$+$000407.3 & 2.293 & -4575 & & & 2.307 & -3301\ SDSS J212329.46$-$005052.9 & 2.255 & -1376 & 2.233 & -3395 & 2.269 & -92\ SDSS J221506.02$+$151208.5 & 3.285 & 70 & 3.284 & 0 & &\ SDSS J235808.54$+$012507.2 & 3.366 & -1572 &
3.400 & 753 & & [lcccccc]{} SDSS J011521.20$+$152453.3 & 3.433 & 3.418 & 6236 & 33.3 & 46.6 & 6821\ SDSS J012403.77$+$004432.6 & 3.827 & 3.836 & 5646 & 37.4 & 47.1 & 7460\ SDSS J014049.18$-$083942.5 & 3.726 & & 4635 & 22.7 & 47.2 & 7285\ SDSS J014214.75$+$002324.2 & 3.374 & & 5013 & 29.2 & 47.0 & 6753\ SDSS J015741.57$-$010629.6 & 3.571 & 3.565 & 5158 & 45.9 & 46.9 & 7049\ SDSS J025021.76$-$075749.9 & 3.344 & 3.337 & 5173 & 18.8 & 47.0 & 6715\ SDSS J025438.36$+$002132.7 & 2.464 & 2.470 & 5998 & 78.8 & 45.8 & 5355\ SDSS J025905.63$+$001121.9 & 3.377 & 3.372 & 3728 & 65.6 & 46.9 & 6767\ SDSS J030341.04$-$002321.9 & 3.235 & & 6865 & 41.0 & 47.0 & 6524\ SDSS J030449.85$-$000813.4 & 3.296 & & 2066 & 27.1 & 47.3 & 6638\ SDSS J035220.69$-$051702.6 & 3.271 & & 6939 & 24.7 & 46.4 & 6578\ SDSS J075303.34$+$423130.8 & 3.595 & 3.594 & 2804 & 29.4 & 47.3 & 7112\ SDSS J075819.70$+$202300.9 & 3.753 & 3.743 & 6583 & 27.6 & 46.8 & 7333\ SDSS J080430.56$+$542041.1 & 3.755 & 3.758 & 7047 & 28.7 & 46.8 & 7335\ SDSS J080819.69$+$373047.3 & 3.477 & 3.426 & 7183 & 27.8 & 46.9 & 6910\ SDSS J080956.02$+$502000.9 & 3.288 & 3.290 & 4240 & 41.9 & 47.0 & 6623\ SDSS J081011.97$+$093648.2 & 3.387 & & 7558 & 21.3 & 46.9 & 6768\ SDSS J081855.77$+$095848.0 & 3.688 & 3.692 & 7446 & 26.9 & 47.0 & 7213\ SDSS J082535.19$+$512706.3 & 3.507 & 3.496 & 6839 & 18.7 & 47.1 & 6964\ SDSS J083630.54$+$062044.8 & 3.387 & 3.413 & 5971 & 11.0 & 47.1 & 6767\ SDSS J090033.50$+$421547.0 & 3.294 & 3.296 & 4421 & 40.3 & 47.3 & 6639\ SDSS J091054.79$+$023704.5 & 3.290 & 3.292 & 6184 & 27.7 & 46.4 & 6618\ SDSS J094202.04$+$042244.5 & 3.284 & 3.272 & 3208 & 35.0 & 46.9 & 6617\ SDSS J095141.33$+$013259.5 & 2.419 & 2.425 & 2645 & 96.5 & 46.0 & 5293\ SDSS J095434.93$+$091519.6 & 3.398 & 3.399 & 8671 & 41.1 & 46.7 & 6802\ SDSS J100710.70$+$042119.2 & 2.367 & 2.354 & 4988 & 64.8 & 45.6 & 5199\ SDSS J101257.52$+$025933.1 & 2.441 & 2.436 & 5106 & 39.9 & 46.1 & 5312\ SDSS J101908.26$+$025431.9 & 3.379 & & 8012 & 34.5 & 47.0 & 
6766\ SDSS J103456.31$+$035859.4 & 3.388 & 3.342 & 5972 & 27.8 & 46.8 & 6767\ SDSS J105511.99$+$020751.9 & 3.404 & & 6372 & 84.5 & 46.1 & 6798\ SDSS J113838.27$-$020607.2 & 3.347 & 3.342 & 5888 & 46.4 & 46.0& 6711\ SDSS J115111.20$+$034048.2 & 2.337 & 2.341 & 2448 & 44.8 & 45.2 & 5170\ SDSS J115304.62$+$035951.5 & 3.437 & 3.430 & 2379 & 13.6 & 46.6 & 6858\ SDSS J115935.63$+$042420.0 & 3.456 & 3.457 & 4969 & 44.8 & 46.3 & 6886\ SDSS J115954.33$+$201921.1 & 3.432 & 3.269 & 6360 & 24.8 & 47.4 & 6827\ SDSS J125034.41$-$010510.6 & 2.399 & 2.401 & 2494 & 83.7 & 45.6 & 5252\ SDSS J144245.66$-$024250.1 & 2.355 & & 6176 & 46.2 & 46.0 & 5155\ SDSS J153725.35$-$014650.3 & 3.467 & & 8098 & 117.7 & 46.7 & 6872\ SDSS J173352.23$+$540030.4 & 3.435 & & 4994 & 17.1 & 47.4 & 6844\ SDSS J210258.22$+$002023.4 & 3.342 & & 1733 & 35.0 & 46.8 & 6723\ SDSS J213023.61$+$122252.2 & 3.279 & & 2596 & 33.6 & 47.0 & 6615\ SDSS J224956.08$+$000218.0 & 3.323 & 3.309 & 2994 & 64.0 & 46.8 & 6677\ SDSS J230301.45$-$093930.7 & 3.470 & & 8425 & 18.7 & 47.3 & 6898\ SDSS J232735.67$-$091625.6 & 3.470 & & 8378 & 27.3 & 46.5 & 6582\ SDSS J234625.66$-$001600.4 & 3.281 & & 7172 & 10.5 & 47.1 & 6892 [lccccc]{} C [iv]{} & $\alpha \log_{10}(\text{FWHM}_{\text{C~{\sc iv}}}) + \beta \log_{10}(\text{EW}_{\text{C~{\sc iv}}})$ & $\alpha$ & -1301 & 195 & -6.68\ & & $\beta$ & 2501 & 472 & 5.29\ & $\alpha \log_{10}(\text{FWHM}_{\text{C~{\sc iv}}}) + \gamma \log_{10}(L_{1350})$ & $\alpha$ & -3966 & 600 & -6.61\ & & $\gamma$ & 293 & 48 & 6.14\ & $\beta \log_{10}(\text{EW}_{\text{C~{\sc iv}}}) + \gamma \log_{10}(L_{1350})$ & $\beta$ & 2058 & 601 & 3.43\ & & $\gamma$ & -88 & 20 & -4.50\ & $\alpha \log_{10}(\text{FWHM}_{\text{C~{\sc iv}}}) + \beta \log_{10}(\text{EW}_{\text{C~{\sc iv}}}) + \gamma \log_{10}(L_{1350})$ & $\alpha$ & -3670 & 549 & -6.68\ & & $\beta$ & 1604 & 450 & 3.57\ & & $\gamma$ & 217 & 48 & 4.53\ HW10 & $\alpha \log_{10}(\text{FWHM}_{\text{C~{\sc iv}}}) + \beta \log_{10}(\text{EW}_{\text{C~{\sc iv}}})$ & 
$\alpha$ & -1069 & 254 & -4.22\ & & $\beta$ & 2517 & 612 & 4.11\ & $\alpha \log_{10}(\text{FWHM}_{\text{C~{\sc iv}}}) + \gamma \log_{10}(L_{1350})$ & $\alpha$ & -3191 & 869 & -3.67\ & & $\gamma$ & 251 & 69 & 3.63\ & $\beta \log_{10}(\text{EW}_{\text{C~{\sc iv}}}) + \gamma \log_{10}(L_{1350})$ & $\beta$ & 2219 & 715 & 3.10\ & & $\gamma$ & -75 & 24 & -3.18\ & $\alpha \log_{10}(\text{FWHM}_{\text{C~{\sc iv}}}) + \beta \log_{10}(\text{EW}_{\text{C~{\sc iv}}}) + \gamma \log_{10}(L_{1350})$ & $\alpha$ & -2834 & 819 & -3.46\ & & $\beta$ & 1877 & 652 & 2.88\ & & $\gamma$ & 161 & 71 & 2.26\ SDSS Pipe & $\alpha \log_{10}(\text{FWHM}_{\text{C~{\sc iv}}}) + \beta \log_{10}(\text{EW}_{\text{C~{\sc iv}}})$ & $\alpha$ & -2380 & 785 & -3.03\ & & $\beta$ & 5087 & 1891 & 2.69\ & $\alpha \log_{10}(\text{FWHM}_{\text{C~{\sc iv}}}) + \gamma \log_{10}(L_{1350})$ & $\alpha$ & -8024 & 2732 & -2.94\ & & $\gamma$ & 613 & 216 & 2.83\ & $\beta \log_{10}(\text{EW}_{\text{C~{\sc iv}}}) + \gamma \log_{10}(L_{1350})$ & $\beta$ & 4732 & 2240 & 2.11\ & & $\gamma$ & -176 & 74 & -2.39\ & $\alpha \log_{10}(\text{FWHM}_{\text{C~{\sc iv}}}) + \beta \log_{10}(\text{EW}_{\text{C~{\sc iv}}}) + \gamma \log_{10}(L_{1350})$ & $\alpha$ & -6814 & 2830 & -2.41\ & & $\beta$ & 3114 & 2212 & 1.41\ & & $\gamma$ & 416 & 255 & 1.63 Discussion {#sec:4} ========== The results of our multiple regression analysis indicate that the most reliable redshift is obtained by correcting the HW10-based redshift employing the FWHM and EW of the C IV line, the monochromatic luminosity at rest-frame 1350 Å, and the respective coefficients listed under the fourth correction to the HW10 method from Table \[tab:coeff1\]. Using this correction, and removing SDSS J142243.02$+$441721.2 from the analysis (see Sec. 
\[sec:out\]), we were able to reduce the uncertainty on the redshift determination from 731 km s$^{-1}$ to 543 km s$^{-1}$, yielding an improvement of $\sim25\%$ with respect to the HW10-based redshifts; similarly, the mean systematic offset of the redshift determination is reduced from $-137$ km s$^{-1}$ to $+1$ km s$^{-1}$ (see Table \[tab:stats\]). For comparison, utilizing only the M17 sample of 44 sources, the uncertainty on the HW10-based redshifts is reduced by $\sim20\%$. The addition of the five sources from our UKIRT sample that have HW10-based redshifts, comprising a $\sim10\%$ increase in the number of sources with respect to the M17 sample, therefore helped to further reduce the uncertainty on the HW10-based redshifts from $\sim20\%$ to $\sim25\%$. We anticipate that by utilizing a more representative sample of several hundred high-redshift quasars, we will be able to further improve these uncertainties significantly, and the results will become less biased by small-number statistics (e.g., Matthews et al., in prep.).\ We note that, when we include the source with the highly discrepant $\Delta v_{\text{C~{\sc iv}}}$ value, , in the regression analysis, the best redshift estimates are obtained from the corrected C [iv]{}-based redshifts (see Table \[tab:stats\]). In this case, the mean systematic redshift offset is reduced from $-1023$ km s$^{-1}$ to $-8$ km s$^{-1}$ and the uncertainty on the redshift determination decreases from 1135 km s$^{-1}$ to 746 km s$^{-1}$ (a $\sim34\%$ improvement).\ As is apparent, even with this sample of 55 quasars, the methods to determine redshifts using rest-frame UV features provide uncertainties as large as $\approx 500 - 700$ km s$^{-1}$. As reported in the first row of each section of Table \[tab:stats\], the uncorrected redshift determinations are significantly inaccurate and imprecise.
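For concreteness, the sketch below evaluates the three-parameter correction of the form $\alpha \log_{10}(\text{FWHM}) + \beta \log_{10}(\text{EW}) + \gamma \log_{10}(L_{1350})$ with the HW10 coefficients and standard errors listed in Table \[tab:coeff1\], and recovers the tabulated $t$-values as the ratio of each estimate to its standard error. This is an illustrative reimplementation, not the code used for the paper; the function names are hypothetical, and the convention for applying the predicted offset to a redshift should be checked against the definitions in the text.

```python
import math

# (estimate, standard error) for the full three-parameter HW10 correction,
# taken from Table [tab:coeff1] of the text.
COEFFS = {"alpha": (-2834.0, 819.0),
          "beta":  (1877.0, 652.0),
          "gamma": (161.0, 71.0)}

def predicted_offset(fwhm_civ, ew_civ, log_l1350):
    """Predicted velocity offset (km/s) from the C IV FWHM (km/s),
    C IV rest-frame EW (Angstrom) and log10 of the 1350 A luminosity."""
    a = COEFFS["alpha"][0]
    b = COEFFS["beta"][0]
    g = COEFFS["gamma"][0]
    return a * math.log10(fwhm_civ) + b * math.log10(ew_civ) + g * log_l1350

def t_value(name):
    """t-statistic of a regression coefficient: estimate / standard error."""
    est, err = COEFFS[name]
    return est / err
```

Dividing each estimate by its error reproduces the $t$-values quoted in the table ($-3.46$, $2.88$, $2.26$) to within rounding.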
C [iv]{}-based redshifts have a mean systematic offset of $\sim1000$ km s$^{-1}$ (a blueshift) and a similar value for $\sigma$ (the standard deviation). The HW10 method improves these C [iv]{}-based redshifts by reducing the systematic offsets by $\sim900$ km s$^{-1}$ and $\sigma$ by $\sim300$ km s$^{-1}$. Our prescription further reduces the systematic offset by an additional $\sim100$ km s$^{-1}$ and reduces $\sigma$ by an additional $\sim200-300$ km s$^{-1}$. Using the SDSS Pipeline redshift estimate, determined from a principal component analysis on multiple features of a spectrum simultaneously [e.g., @2012AJ....144..144B], the mean systematic velocity offset for our combined sample is the largest and extends beyond $1000$ km s$^{-1}$ with a standard deviation of $1324$ km s$^{-1}$. Overall, albeit utilizing a smaller combined sample with respect to the samples we use for C [iv]{}- and HW10-based redshifts, the redshifts determined from the SDSS Pipeline provide the least reliable results (see Table \[tab:stats\]). Our best correction applied to these redshifts improves the mean systematic velocity offset by $\sim1000$ km s$^{-1}$, similar to the improvement achieved for C [iv]{}-based redshifts, but yields only a modest improvement in $\sigma$, which remains large.\ In order to test the validity of our method, we have performed the same regression described in the text on the M17 sources ($\sim80\%$ of our combined sample) and applied it to the remaining sources acquired from UKIRT. The C [iv]{} velocity offsets were used in the regression since this sample was the largest of the three UV-based redshift estimates. Prior to correction, the sample of 10 UKIRT sources had a mean, median, and $\sigma$ of $-641$ km s$^{-1}$, $-690$ km s$^{-1}$, and $952$ km s$^{-1}$, respectively.
After running the regression on the M17 sample and applying the new correction to the UKIRT sources, the mean, median, and $\sigma$ improved to $474$ km s$^{-1}$, $376$ km s$^{-1}$, and $772$ km s$^{-1}$, respectively, demonstrating the validity of our method.\ The SDSS Pipeline redshift estimate, as noted in P18, is subject to highly uncertain redshift determinations due to lower signal-to-noise ratios or unusual objects. As seen in our relatively small sample, large redshift discrepancies are apparent particularly in two of the 39 objects that we have with available SDSS Pipeline-based redshifts. In each case, the velocity offsets are $>10^4$ km s$^{-1}$ and, when included in the regression analysis, they nearly triple the uncertainty on the redshift determination. The most robust redshift determination methods involve a correction based on the C [iv]{} spectral properties and UV continuum luminosity to either C [iv]{}- or HW10-based redshifts. P18 also provides a redshift based on visual inspection, $z_{\rm VI}$. We find that this estimate, where available, provides a much more reliable redshift estimate than the one provided by the SDSS Pipeline. The mean systematic offset for this redshift estimate is $-290$ km s$^{-1}$ with a standard deviation of $762$ km s$^{-1}$.\ Regarding the two sources with extremely large velocity offsets, SDSS J142243.02$+$441721.2 and , we note that our best corrections for their UV-based redshifts provide only modest improvements to the redshift determinations, and that their negative velocity offsets (i.e., blueshifts) take on positive velocity offsets (i.e., redshifts) after the correction is applied. The velocity offsets for SDSS J142243.02$+$441721.2 improve from to , to , and to for C [iv]{}-, HW10-, and SDSS Pipeline-based redshift estimates, respectively. Similarly, the velocity offsets for changed from to , to , and to , respectively.
While most of the corrected velocity offsets are closer to zero, they do not improve appreciably and still affect the statistics significantly.\ The abnormally large velocity offset of the SDSS Pipeline redshift of SDSS J115954.33$+$201921.1 most likely stems from the misidentification of the emission lines in the SDSS spectrum by the SDSS Pipeline, as discussed in Section \[sec:out\]. As for SDSS J142243.02$+$441721.2, the origin of the large velocity offset of the C [iv]{}-based redshift is intrinsic to the quasar, and it should not be confused with the coincidental abnormally large velocity offset stemming from the failure of the SDSS Pipeline to correctly identify the UV spectral features (see Sec. 3.1). Our measured velocity offset of the C [iv]{} line ($-5097$ km s$^{-1}$) is consistent, within the errors, with the value reported in Table 6 of for the source ($-4670$ km s$^{-1}$). Such sources may point to additional spectral parameters that should be taken into account in future prescriptions for UV-based redshift corrections. While such objects may be rare ($\lesssim 5$% in our combined sample), their potential effects on future redshift estimates should be scrutinized to ensure that redshift corrections for the general quasar population are not skewed. The difficulty in correcting the UV-based redshift of SDSS J142243.02$+$441721.2 is also manifested by the HW10-based redshift, which is unable to improve the estimate but rather provides a larger velocity offset () with respect to the C [iv]{}-based value ($-5097$ km s$^{-1}$).\ With our combined sample of 55 high-redshift quasars, we confirm large velocity offsets between UV-based redshift estimates and $z_{\rm sys}$. Our calibrations to the UV-based redshift estimates can be used to establish more reliable estimates for $z_{\rm sys}$ when working with high-redshift quasars in the optical band.
This effort will lead to more reliable constraints on a range of measurements that require precise distances for quasars.\ Conclusions {#sec:5} =========== In the coming decade, $\approx10^6$ high-redshift ($z\gtrsim 0.8$) quasars will have their redshifts determined through large spectroscopic surveys conducted in the visible band (i.e., rest-frame UV band), e.g., the DESI survey [e.g., @2013arXiv1308.0847L; @2016arXiv161100036D]. Many of these quasars, at $1.5 \lesssim z \lesssim 6.0$, will have the prominent C [iv]{} emission line covered in their spectra. The spectral properties of this line can provide a valuable means for correcting UV-based redshifts as we have shown in this work.\ [lcccc]{} C [iv]{} & -1016 & -1028 & 1132 (993) & -1.11\ C [iv]{} 1 & -20 & -194 & 885 (792) & 0.55\ C [iv]{} 2 & -3 & 18 & 837 (755) & 0.66\ C [iv]{} 3 & 1 & -80 & 1022 (905) & 0.67\ **C [iv]{} 4** & **0** & **-24** & **750 (679)** & **0.37**\ HW10 & -121 & 159 & 1310 (719) & -4.09\ HW 1 & -14 & -116 & 1123 (575) & 3.86\ HW 2 & -2 & -97 & 1157 (638) & 3.38\ HW 3 & 1 & -73 & 1195 (621) & 3.98\ **HW 4** & **1** & **-68** & **1067 (547)** & **3.59**\ SDSS Pipe & -1029 & -63 & 3255 (1264) & -3.45\ Pipe 1 & -31 & -558 & 2954 (1161) & 2.78\ Pipe 2 & -8 & -578 & 2928 (1165) & 2.66\ Pipe 3 & -2 & -697 & 3072 (1200) & 3.03\ **Pipe 4** & **-3** & **-449** & **2851 (1131)** & **2.54** Using a sample of 55 quasars, our prescription for correcting UV-based redshifts yields a mean systematic velocity offset which is consistent with zero and further improves the uncertainty on the redshift determination by $\sim25 - 35$% with respect to the method of HW10. We also find that UV-based redshifts derived from the SDSS Pipeline provide the least reliable results, and the associated uncertainties with respect to $z_{\rm sys}$ cannot be reduced appreciably. 
With a larger, uniform sample of high-redshift quasars with NIR spectroscopy (e.g., Matthews et al., in prep.), we plan to further improve the reliability of our redshift estimates and search for additional spectral properties that may improve these estimates.\ We show that the uncertainties on UV-based redshifts for the majority of high-redshift quasars can be reduced considerably by obtaining NIR spectroscopy of a larger sample of sources and using the \[O [iii]{}\]-based systemic redshift to inform a C [iv]{}-based regression analysis. The reduction in redshift uncertainties is particularly useful for a range of applications involving accurate cosmological distances. Acknowledgments =============== We gratefully acknowledge the financial support from National Science Foundation grants AST-1815281 (C. D., O. S.) and AST-1815645 (M. S. B., A. D. M.). A.D.M. was supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 and Award No. DE-SC0019022. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration, as well as NASA’s Astrophysics Data System Bibliographic Services. Bessell, M. S., Castelli, F., & Plez, B. 1998, , 333, 231 Bolton, A. S., Schlegel, D. J., Aubourg, [É]{}., et al. 2012, , 144, 144 Boroson, T. 2005, , 130, 381 Boroson, T. A., & Green, R. F. 1992, , 80, 109 Chen, Z.-F., Qin, Y.-P., Qin, M., et al. 2014, The Astrophysical Journal Supplement Series, 215, 12 DESI Collaboration, Aghamousa, A., Aguilar, J., et al. 2016, arXiv e-prints, arXiv:1611.00036 Font-Ribera, A., Arnau, E., Miralda-Escud[é]{}, J., et al. 2013, , 5, 018 Gaskell, C. M. 1982, , 263, 79 Gibson, R. R., Jiang, L., Brandt, W. N., et al. 2009, , 692, 758 Hewett, P. C., & Wild, V. 2010, , 405, 2302 Hogg, D. W.
1999, arXiv e-prints, astro-ph/9905116 Hopkins, P. F., & Elvis, M. 2010, , 401, 7 Hutchings, J. B., Cherniawsky, A., Cutri, R. M., et al. 2006, , 131, 680 Kaspi, S., Brandt, W. N., Maoz, D., et al. 2007, , 659, 997 Levi, M., Bebek, C., Beers, T., et al. 2013, arXiv:1308.0847 Mason, M., Brotherton, M. S., & Myers, A. 2017, , 469, 4675 Ofek, E. O., Maoz, D., Rix, H.-W., et al. 2006, The Astrophysical Journal, 641, 70 P[â]{}ris, I., Petitjean, P., Aubourg, [É]{}., et al. 2014, , 563, A54 P[â]{}ris, I., Petitjean, P., Ross, N. P., et al. 2017, , 597, A79 P[â]{}ris, I., Petitjean, P., Aubourg, [É]{}., et al. 2018, , 613, A51 Prochaska, J. X., Hennawi, J. F., Lee, K.-G., et al. 2013, , 776, 136 Richards, G. T., Myers, A. D., Gray, A. G., et al. 2009, The Astrophysical Journal Supplement Series, 180, 67 Shemmer, O., & Lieber, S. 2015, , 805, 124 Shen, Y., Richards, G. T., Strauss, M. A., et al. 2011, , 194, 45 Shen, Y., Brandt, W. N., Richards, G. T., et al. 2016, , 831, 7 Sheskin, D. J. 2007, Handbook of Parametric and Nonparametric Statistical Procedures (4th ed.; Chapman & Hall/CRC) Schneider, D. P., Richards, G. T., Hall, P. B., et al. 2010, , 139, 2360 Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, , 131, 1163 Spergel, D. N., Bean, R., Dor[é]{}, O., et al. 2007, , 170, 377 Tytler, D., & Fan, X.-M. 1992, , 79, 1 Vanden Berk, D. E., Richards, G. T., Bauer, A., et al. 2001, , 122, 549 Vietri, G., Piconcelli, E., Bischetti, M., et al. 2018, , 617, A81 York, D. G., Adelman, J., Anderson, J. E., Jr., et al. 2000, , 120, 1579 Zhao, G.-B., Wang, Y., Saito, S., et al. 2019, , 482, 3497 [^1]: This redshift interval also ensures spectral coverage of the H$\alpha$ emission line in the $K$ band. [^2]: IRAF (Image Reduction and Analysis Facility) is distributed by the National Optical Astronomy Observatory, which is operated by AURA, Inc., under cooperative agreement with the National Science Foundation.
[^3]: Five of these sources are based on BAL quasar identification from @2011ApJS..194...45S; SDSS J073607.63+220758.9 was identified as a BAL quasar following our visual inspection of its SDSS spectrum.
--- abstract: 'We present the latest version of the ray-tracing simulation code , which can be used to develop image simulations that reproduce strong lensing observations by any mass distribution with a high level of realism. Improvements of the code with respect to previous versions include the implementation of the multi-lens plane formalism, the use of denoised source galaxies from the Hubble eXtreme Deep Field, and the simulation of substructures in lensed arcs and images, based on a morphological analysis of bright nearby galaxies.  can simulate observations with virtually any telescope. We present examples of space- and ground-based observations of a galaxy cluster through the Wide Field Channel on the Advanced Camera for Surveys of the *Hubble Space Telescope*, the Near Infrared Camera of the *James Webb Space Telescope*, the Wide Field Imager of the *Wide Field Infrared Survey Telescope*, the Hyper Suprime-Cam of the Subaru telescope, and the Visible Imaging Channel of the *Euclid* space mission.' author: - | A. A. Plazas,$^{1,2}$[^1] M. Meneghetti,$^{3}$[^2] M. Maturi,$^{4}$ & J. Rhodes$^{1,5,6}$\ $^{1}$Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., Pasadena, CA 91109, USA\ $^{2}$Astronomical Society of the Pacific, 100 N Main St., Suite 15, Edwardsville, IL 62025, USA\ $^{3}$INAF-Osservatorio di Astrofisica e di Scienze dello Spazio di Bologna, Via Gobetti 93/3, 40129 Bologna, Italy\ $^{4}$Heidelberg University, Institute of Theoretical Astrophysics, Philosophenweg 12, 69120 Heidelberg, Germany\ $^{5}$California Institute of Technology, 1200 E.
California Blvd., Pasadena, CA 91125, USA\ $^{6}$Institute for the Physics and Mathematics of the Universe, 5-1-5 Kashiwanoha, Kashiwa, Chiba Prefecture 277-8583, Japan\ bibliography: - 'skylens3\_paper.bib' title: Image simulations for gravitational lensing with SkyLens --- \[firstpage\] gravitational lensing: strong – galaxies: clusters: general – (cosmology:) dark matter – (cosmology:) dark energy Introduction ============ The phenomenon of gravitational lensing is a direct consequence of the theory of General Relativity, which relates the curvature of spacetime to the distribution of matter and energy in the Universe. The images of distant sources will be magnified and distorted to different degrees depending on the amplitude of the mass concentrations and the relative geometrical configuration between the source, the deflectors encountered by light along its path, and observers on Earth [@bartelmann17; @treu10; @bartelmann01]. As such, gravitational lensing has become a fundamental tool to understand the nature of the dark matter and dark energy that, according to the current standard cosmological model, constitute 95% of the total content of the Universe [@weinberg13; @kilbinger15]. The most powerful gravitational lenses are galaxy clusters, since they are the most massive gravitationally bound structures in the universe. Thus, they act as cosmic telescopes, magnifying faint galaxies at high redshifts that contributed to the re-ionization of the early universe [@huang16; @livermore17; @kelly17]. In the central regions of clusters, gravitational lensing can produce multiple images of the same source and elongated arcs. At larger angular distances from the cluster core, the individual tangential distortions induced on images are weak and therefore can only be detected statistically.
The combination of these two regimes of gravitational lensing (known as strong and weak lensing, respectively) constrains the dark matter distribution from cluster centers to scales of up to 1 Mpc/h, improving our understanding of the internal structure of galaxy clusters, and as a consequence, of the nature of dark matter and dark energy [@allen11]. In this paper we describe new developments and improvements of the gravitational lensing simulation pipeline  [@meneghetti08; @meneghetti10a] that can be used to create mock ground- and space-based observations of lensing phenomena by galaxy clusters and by large scale structures in wide fields. Previous versions of  have been used to study the systematic errors in cluster mass measurements using lensing and X-ray analyses [@meneghetti10a; @rasia12].  has also been used to construct synthetic lenses with the properties of the clusters observed by the *Hubble Space Telescope* (*HST*) in the Frontiers Field Initiative [@lotz15] in order to test the accuracy of inversion algorithms that reconstruct the cluster matter distributions from strong lensing measurements [@meneghetti17]. [The improvements described in this work are aimed at increasing the realism of the simulations to make  an even more optimal tool in many applications of lensing by galaxy clusters.]{} The paper is organized as follows. In §\[sec:skylens\] we review our simulation pipeline and describe in detail the changes with respect to previous versions of the code.
As an example of the output of the code, in §\[sec:sims\] we use the current version of  to produce simulated observations through several astronomical instruments with realistic parameters and conditions: the Wide Field Channel (WFC) on the Advanced Camera for Surveys (ACS) of *HST*, the Near Infrared Camera (NIRCam) of the *James Webb Space Telescope* (*JWST*), the Wide Field Imager of the *Wide Field Infrared Survey Telescope* (*WFIRST*), the Hyper Suprime-Cam (HSC) of the Subaru Telescope, and the Visible Imaging Channel (VIS) of the *Euclid* space mission. We discuss applications of  in §\[sec:discussion\], and conclude in §\[sec:conclusions\]. Unless otherwise noted, we assume a flat $\Lambda$CDM cosmological model with a matter density parameter $\Omega_{\rm{m},0}=0.272$ and a Hubble constant of $H_{0}=70.4$ km/s/Mpc. SkyLens {#sec:skylens} ======= In this section we describe , highlighting the differences with respect to other versions [@meneghetti08; @meneghetti10a]. The version of  presented in this paper includes the use of *HST* source galaxy images denoised by the method introduced in @maturi16.  now has the capability to include the lensing effects due to multiple lens planes. We also use information from a sample of nearby, well-resolved galaxies to produce realistic simulations of substructures such as regions of active star formation. We model these features as Sérsic [@sersic63] profiles, which will be magnified by the lens and appear as knots within the arcs that form at the critical lines in the lens plane. In addition, they can be used by lensing inversion algorithms as additional constraints, since they all must trace back to the same source, limiting the parameter space of solutions and increasing the likelihood of a more accurate model optimization in the source plane.
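All geometric lensing weights used below depend only on the assumed flat $\Lambda$CDM cosmology ($\Omega_{\rm{m},0}=0.272$, $H_{0}=70.4$ km/s/Mpc). As a minimal, self-contained sketch (not part of the pipeline itself; the function names are ours), the comoving and angular diameter distances can be obtained by direct numerical integration:

```python
import numpy as np

OMEGA_M = 0.272          # flat LambdaCDM parameters assumed in the text
H0 = 70.4                # km/s/Mpc
C_KMS = 299792.458       # speed of light [km/s]

def comoving_distance(z, n=4096):
    """Line-of-sight comoving distance in Mpc for a flat LambdaCDM model,
    D_C(z) = (c/H0) * int_0^z dz' / E(z'), via the trapezoidal rule."""
    zs = np.linspace(0.0, z, n)
    ez = np.sqrt(OMEGA_M * (1.0 + zs) ** 3 + (1.0 - OMEGA_M))
    integrand = 1.0 / ez
    dz = zs[1] - zs[0]
    return (C_KMS / H0) * dz * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

def angular_diameter_distance(z1, z2):
    """D_A between z1 < z2; in a flat universe comoving distances subtract."""
    return (comoving_distance(z2) - comoving_distance(z1)) / (1.0 + z2)

def lensing_ratio(z_lens, z_src):
    """Geometric weight D_ls / D_s that rescales the deflection angles."""
    return (angular_diameter_distance(z_lens, z_src)
            / angular_diameter_distance(0.0, z_src))
```

In a flat universe this ratio equals $1 - D_C(z_{\rm l})/D_C(z_{\rm s})$, so it lies between 0 and 1 and grows toward 1 as the source redshift increases.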
General methodology {#sect:genmeth} ------------------- For simulating an observation of a patch of the sky, with or without lensing effects,  goes through the following steps: 1. it generates a population of galaxies using the luminosity and the redshift distribution of the galaxies in the Hubble Ultra Deep Field (HUDF, @beckwith06); 2. it prepares the virtual observation, receiving the pointing instructions, the exposure time, and the filter, $F(\lambda)$, to be used from the user. The pointing coordinates are used to calculate the level of the background, i.e. the surface brightness of the sky and, in case of observations from the ground, the air mass; 3. it assembles the virtual telescope. This implies that the user provides a set of input parameters, such as the effective diameter of the telescope, the field of view, the detector specifications (e.g., gain, read-out noise (RON), dark current, and pixel scale) and the additional information necessary to construct the total throughput function, defined as $$T(\lambda)=C(\lambda)M(\lambda)R(\lambda)F(\lambda)A(\lambda) \;. \label{eq:transm}$$ In the previous formula, $C(\lambda)$ is the quantum efficiency of the detector, $M(\lambda)$ is the mirror reflectivity, $R(\lambda)$ is the transmission curve of the lenses in the optical system, and $A(\lambda)$ is the extinction function (galactic and atmospheric, in case of simulations of ground based observations); 4. using the spectral energy distributions (SEDs) and the redshifts of the galaxies entering the field of view, it calculates the fluxes in the band of the virtual observation. Then, using the galaxy templates, the surface brightness of the sources is calculated at each position in the sky and converted into a number of [*Analog-to-Digital Units*]{} (ADUs) on the detector pixels. 5. noise is added according to the sky brightness, to the RON, and to the dark current of the detector. 
More precisely, the photon counts on the detector pixels are calculated as the sum of three contributions, namely, from the sky, the galaxies, and the dark current. Given a telescope of diameter $D$, the number of photons collected by the detector pixel at $\vec x$ in the exposure time $t_{\rm exp}$, from a source whose surface brightness is $I(\vec{x},\lambda)$ (erg s$^{-1}$cm$^{-2}$Hz$^{-1}$arcsec$^{-2}$), is $$n_\gamma(\vec x)=\frac{\pi D^2 t_{\rm exp}p^2}{4 h}\int I(\vec x,\lambda)\frac{T(\lambda)}{\lambda}{\rm d}\lambda \;, \label{eq:ngamma}$$ where $p$ is the pixel size in arcsec, $h$ is the Planck constant, and $T(\lambda)$ is the total transmission given in Eq. \[eq:transm\]. The contribution from the sky is given by $$n_{\rm sky}=\frac{\pi D^2 t_{\rm exp}p^2}{4 h}\int\frac{T(\lambda)S(\lambda)}{\lambda}{\rm d}\lambda \;, \label{eq:nsky}$$ where $S(\lambda)$ is the sky flux per square arcsec. The photon counts are converted into ADUs by dividing by the gain $g$: $$\begin{aligned} {\rm ADU}_{\rm total}(\vec x)& = & \frac{n_\gamma(\vec x)+n_{\rm sky}+n_{\rm dark}}{g}\nonumber \\ & = & {\rm ADU}(\vec x)+{\rm ADU}_{\rm sky} + {\rm ADU_{\rm dark}}\;. \label{eq:ADUs}\end{aligned}$$ Photon noise is assumed to be Poisson distributed, with variance $$\begin{aligned} \sigma_N^2(\vec x)&=&\left\{n_{\rm exp}\left(\frac{{\rm RON}}{g}\right)^2+\frac{{\rm ADU_{\rm total}}(\vec x)}{g}\right. \nonumber \\ & & +\left. \left(f+\frac{a^2}{n_{\rm exp}^2}\right)[{\rm ADU_{\rm total}}(\vec x)]^2\right\} \;. \label{eq:phn}\end{aligned}$$ In the previous formula, $n_{\rm exp}$ is the number of exposures, and $a$ is the flat-field term, which we fix at $a=0.005$ following [@grazian04]. The term $f$ indicates the flat-field accuracy, which is determined by the number of flat-field exposures and by the level of the sky background, $B$, as $$f=(N_{ff}\cdot B\cdot g)^{-1} \;.$$ A more detailed explanation of these formulas is given in Sect. 4.2 of [@grazian04].
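The variance formula of Eq. \[eq:phn\] can be sketched as follows. This is an illustrative reimplementation, not the pipeline code: all default parameter values are placeholders, and the noise-free ADU map is perturbed with zero-mean Gaussian noise of variance $\sigma_N^2$, a common stand-in for the Poisson draw described in the text.

```python
import numpy as np

def noise_variance(adu_total, n_exp=1, ron=5.0, gain=2.0,
                   a=0.005, n_ff=10, sky_b=1000.0):
    """Pixel variance of Eq. (phn): read-out noise, Poisson term and
    flat-field term, with flat-field accuracy f = 1/(N_ff * B * g).
    All defaults are hypothetical, chosen only for illustration."""
    f = 1.0 / (n_ff * sky_b * gain)
    return (n_exp * (ron / gain) ** 2
            + adu_total / gain
            + (f + a ** 2 / n_exp ** 2) * adu_total ** 2)

def add_noise(adu_total, rng=None, **kw):
    """Perturb a noise-free ADU map with Gaussian noise of standard
    deviation sigma_N (Gaussian approximation is our assumption)."""
    rng = rng or np.random.default_rng(0)
    sigma = np.sqrt(noise_variance(adu_total, **kw))
    return adu_total + rng.normal(0.0, 1.0, adu_total.shape) * sigma
```

For a uniform map of 100 ADU with two exposures, RON $=4$ and $g=2$, the three terms of the variance are $8$, $50$ and $0.5625$ ADU$^2$, summing to $58.5625$ ADU$^2$ per pixel.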
Thus, in order to simulate the photon noise, we draw random numbers from a Poisson distribution with variance as given above. Denoised source image generation -------------------------------- Previous versions of  have used a shapelet [@bernstein02; @refregier03b] decomposition of HUDF galaxies to simulate a synthetic sky of source galaxies. The current version implements the technique introduced in @maturi16 (based on Expectation Maximization Principal Components Analysis, EMPCA) to obtain a denoised (noise-free) reconstruction of images based on the Hubble eXtreme Deep Field (HXDF) [@illingworth13] data, which covers an area of 10.8 arcmin$^2$ down to $\sim$ 30 AB magnitude [@rafelski15].[^3] These images were taken by the Advanced Camera for Surveys and the Wide-Field Planetary Camera-2 in the , , , , and bands, and have been “drizzled” to a resolution of 0.03 arcseconds per pixel. The SEDs of the source galaxies were obtained by interpolating the 11 SED templates from the photometric redshift measurements by @coe06, as determined by the Bayesian Photometric Redshift code [@benitez00; @benitez04].[^4] The @coe06 library includes SEDs for elliptical, spiral (including star-burst), and lenticular galaxies. The denoising procedure used to generate the galaxy images used in this paper assumes a linear model $$\label{eqn:model} \tilde{g}(\vec{x})=\sum_{k=1}^M a_k \phi_k(\vec{x}) \;\;\;\mbox{with}\;\;\; a_{k}=\sum_{i=1}^n d(\vec{x}_i)\phi_{k}(\vec{x}_i) \;,$$ describing the postage-stamp image of each galaxy, $d(\vec{x})$, and it is based on a set of orthonormal basis functions, $\left\{\vec{\phi}_k \;\in\; \mathbb{R}^{n} \;\mid\; k=1,...,M\right\}$, which is optimally derived from the data itself. The dimension $n$ is equal to the number of pixels in the postage stamps of the galaxies.
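The truncated expansion of Eq. \[eqn:model\] can be sketched with plain PCA, as below: the postage stamps are decomposed onto an orthonormal basis derived from the data, and only the first $M$ coefficients are kept. Standard PCA via SVD stands in here for the weighted EMPCA actually used by the pipeline, which additionally handles masked pixels and spatially varying noise; the function name is hypothetical.

```python
import numpy as np

def pca_denoise(stamps, m):
    """Reconstruct galaxy postage stamps (one flattened image per row of
    `stamps`) from their first m principal components, discarding the
    trailing components that mostly carry noise."""
    mean = stamps.mean(axis=0)
    centered = stamps - mean
    # rows of vt are the orthonormal basis functions phi_k
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:m]                 # keep phi_1 ... phi_m
    coeffs = centered @ basis.T    # a_k = sum_i d(x_i) phi_k(x_i)
    return coeffs @ basis + mean   # g~(x) = sum_k a_k phi_k(x)
```

If the stamps are intrinsically low-rank (signal confined to a few components), the truncated reconstruction recovers them exactly, while noise spread over all $n$ components is suppressed by roughly the fraction of components kept.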
The basis optimization is based on Expectation Maximization Principal Component Analysis, which captures the information content in components such that each of them, $\phi_k$, contains more information than the following one, $\phi_{k+1}$ [@bailey12]. This decomposition allows us to select the relevant components and discard those associated with the noise contribution: $$\label{eq:model-split} d(\vec{x})=\sum_{k=1}^M a_k \phi_k(\vec{x}) + \sum_{k=M+1}^{n} a_k \phi_k(\vec{x}) = \tilde{g}(\vec{x}) + \tilde{n}(\vec{x}) \;.$$ Here, $\tilde{n}(\vec{x})$ is a term which contains most of the noise and which we discard. The advantage of EMPCA with respect to standard PCA is that it allows one to deal with missing data, noise levels varying across the field, and the use of regularization terms. The number of components, $M$, . ![In the top two panels we show two galaxies used to produce the source plane. From left to right: the color composite image, the denoised image of the band, the original HXDF image (with the other objects present in the postage-stamp already removed) and the residuals. In the bottom panel we show a cut out of the source plane used to produce Fig. \[fig\_color\] but without the addition of the substructures.[]{data-label="fig:denoised"}](./skylens_3_matteo.png "fig:"){width="1.0\hsize"} ![](./HST_cut "fig:"){width="1.0\hsize"} In Fig. \[fig:denoised\] we show an example with two denoised galaxies (color composite image and band), together with the original HXDF image and the residuals.
The bottom panel of the same figure shows a zoom-in of the source plane, without the additional substructures, used to produce the image of Fig. \[fig\_color\].

Lens model
----------

The lensing effects can be produced by any mass distribution, such as analytical dark matter halo models or numerical and/or hydrodynamical simulations. [In the examples shown here,]{} we use the simulated galaxy cluster *Ares* [which was extensively discussed in the paper by @meneghetti17]. It was created by using the semi-analytic code [MOKA]{}[^5] [@giocoli12], assuming a flat cosmology as described above. [In short, [MOKA]{} combines several mass components that make up the mass distribution of a galaxy cluster. These are a smooth dark matter halo, the sub-halos, and the brightest-central-galaxies (BCGs). Each of these components is fully parametrized and modeled consistently with the findings of state-of-the-art N-body and hydrodynamical numerical simulations.]{} From the three-dimensional density distribution of the lens, [MOKA]{} creates a projected two-dimensional surface density map. This map is used to calculate a field of deflection angles $\vec{\hat{\alpha}}$, [which is one of the inputs for . Using the deflection angles, light rays]{} are traced back from the detector to the source plane through a grid of a given size in the lens plane. The mapping between the lens and the source planes is done using the lens equation. In the case of a single lens plane, this is written as $$\label{eq:lens} \vec{\beta}=\vec{\theta} - \frac{D_{\text{ls}}}{D_{\text{s}}}\vec{\hat{\alpha}}(\vec{\theta}) \;.$$ In Equation \[eq:lens\], $\vec{\beta}$ is the position of the photon in the source plane, $\vec{\theta}$ is its apparent position on the detector, and $D_{\text{ls}}$ and $D_{\text{s}}$ represent the cosmology-dependent angular diameter distances between the lens and the source and between the observer and the source, respectively.
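Once a deflection field is available, the ray tracing of Eq. \[eq:lens\] amounts to a one-line mapping. A minimal sketch, with a hypothetical singular-isothermal-sphere deflector standing in for the [MOKA]{} deflection map:

```python
import numpy as np

def trace_to_source(theta, alpha_hat, D_ls, D_s):
    """Single-plane lens equation: beta = theta - (D_ls / D_s) * alpha_hat(theta).

    theta     : (N, 2) array of angular positions on the detector/lens plane.
    alpha_hat : callable returning the (N, 2) deflection angles at theta.
    D_ls, D_s : angular diameter distances (lens-source, observer-source).
    """
    return theta - (D_ls / D_s) * alpha_hat(theta)

def sis_deflection(theta, theta_E=1.0):
    """Toy deflector: a singular isothermal sphere, whose deflection has
    constant modulus theta_E (the Einstein radius) and points radially."""
    r = np.linalg.norm(theta, axis=1, keepdims=True)
    return theta_E * theta / np.where(r > 0, r, 1.0)
```

In practice a regular grid of rays, like the $50\times50$ grid discussed below, would be pushed through `trace_to_source` to map the detector onto the source plane.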
Note that this formalism is general and valid for both the weak and strong lensing regimes. In the latter case, one position $\vec{\beta}$ can correspond to more than one coordinate $\vec{\theta}$. The deflection angle field can then be used to calculate other related and useful lensing quantities. For example, the convergence is half the divergence of the deflection angle: $$\kappa(\vec\theta)=\frac{1}{2}\left[\frac{\partial \alpha_1}{\partial\theta_1}(\vec\theta)+\frac{\partial \alpha_2}{\partial\theta_2}(\vec\theta)\right] \;. \label{eq:effconv}$$ The shear, the magnification (on the source and lens planes), as well as the locations of the caustic and critical lines are also readily derived from the deflection angles [e.g. @meneghetti17].

Multiple lens planes {#sec:multiple}
--------------------

A major improvement of this version of  is the capability to simulate deflections caused by multiple lens planes [e.g. @2014MNRAS.445.1954P]. When this feature is used, Eq. \[eq:lens\] is replaced by the lens equation in the form $$\vec{\beta}=\vec\theta-\sum_{i=0}^{N_S}\frac{D_{\text{is}}}{D_{\text{s}}}\vec{\hat\alpha}^i(\vec\theta^i) \;. \label{eq:multilens}$$ In Eq. \[eq:multilens\], $\vec\theta=\vec\theta^0$ is the position of the photon on the detector, here coinciding with the first lens plane; $\vec\theta^i$ and $\hat{\vec{\alpha}}^i(\vec \theta^i)$ are the photon position and the corresponding deflection angle on the $i$-th lens plane, respectively; and $D_{\text{is}}$ is the angular diameter distance between the $i$-th lens plane and the source. If the source falls behind $N_S$ lens planes, all their contributions are accounted for in the sum on the right-hand side of Eq. \[eq:multilens\]. By comparing the position of each photon on the first lens plane and in the source plane, we can define an [*effective*]{} deflection angle: $$\vec\alpha_{\text{eff}}=\vec\theta-\vec\beta=\sum_{i=0}^{N_S}\frac{D_{\text{is}}}{D_{\text{s}}}\vec{\hat\alpha}^i(\vec\theta^i) \;.
\label{eq:effdefl}$$ In practice, this is the total deflection accumulated along the path between the observer and the source. It describes the effect of an effective mass distribution, which can be derived using Eq. \[eq:effconv\] to compute the [effective]{} convergence, $\kappa_{\text{eff}}(\vec\theta)$. In Fig. \[fig:effconv\], we show an example of a multi-lens-plane simulation. In the upper-left panel, we show the surface density map of the cluster [*Ares*]{}. The cluster is at redshift $z=0.5$. We have extracted two slices of particles from a hydrodynamical simulation and used them to construct two additional lens planes at redshift $z=0.7$ and $z=1$. The details of the numerical simulation can be found in @meneghetti10a. The surface mass densities on these two planes are shown in the second and third upper panels. Finally, we use the multi-lens-plane formalism to compute the [*effective*]{} convergence for sources at $z=12$, which is shown in the upper-right panel. Note that, in the effective convergence map, the mass structures behind the cluster appear distorted by lensing. In the bottom-left panel, we show an example of a grid of $50\times50$ points on the first lens plane, through which the light rays are propagated towards the sources. Before the rays are deflected, the grid is regular. The arrival positions of the light rays on the source plane at $z_{\text{s}}=12$ are shown in the other three bottom panels. In each panel, we account for the deflections on an increasing number of lens planes: one, two, and all three lens planes, respectively. In all these cases, the grids are distorted and cover a smaller area than the original grid, due to the focusing effect of the lens. Most of the deflection comes from the cluster (which is here placed on the first lens plane). Structures along the line of sight cause minor shifts of the arrival positions of the light rays on the source plane.
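The effective convergence of Eq. \[eq:effconv\] can be sketched with finite differences on gridded deflection-angle maps. This is a simplified version of the operation described in the text (function and argument names are ours):

```python
import numpy as np

def convergence_from_deflection(alpha1, alpha2, dtheta):
    """kappa = 0.5 * (d alpha1/d theta1 + d alpha2/d theta2), Eq. (eq:effconv).

    alpha1, alpha2 : 2D maps of the deflection components on a regular grid,
                     with theta1 varying along axis 1 and theta2 along axis 0.
    dtheta         : pixel scale of the grid.
    Applied to the effective deflection of Eq. (eq:effdefl), this yields the
    effective convergence kappa_eff for a multi-plane system.
    """
    dA1_d1 = np.gradient(alpha1, dtheta, axis=1)  # d alpha1 / d theta1
    dA2_d2 = np.gradient(alpha2, dtheta, axis=0)  # d alpha2 / d theta2
    return 0.5 * (dA1_d1 + dA2_d2)
```

As a sanity check, the deflection field $\vec\alpha(\vec\theta)=\vec\theta$ has unit convergence everywhere.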
Particularly evident is the effect of a mass clump located near the upper-right edge of the third lens plane, which causes deflections of several arc-seconds in that area.

Multiple source planes
----------------------

 also implements multiple source planes. The sources extracted from the HXDF are divided into 100 redshift bins between $z=0$ and $z=12$. The bin sizes are defined such that their centers are equally spaced in lensing distance, $D_{\text{ls}}D_{\text{l}}/D_{\text{s}}$. We define one source plane for each redshift bin. The sources in each bin are placed on the corresponding plane and are distorted using Eq. \[eq:lens\] or Eq. \[eq:multilens\], depending on whether a single or multiple lens planes are used. This method results in a significant reduction of the computational time, compared to computing the deflections for each individual source redshift.

Substructures in source images
------------------------------

### Morphological analysis of nearby galaxies

Galaxy clusters act as natural telescopes that magnify faint source galaxies, effectively increasing the native resolution of the instrument at hand if the position of the source is sufficiently close to a caustic curve. This makes it possible to resolve substructures that would otherwise not be visible in the absence of the lens. We include these features—which represent, for example, regions of active star formation—in the denoised HXDF postage stamps that  uses as input by modeling them as Sérsic profiles of index $n=3.5$.[^6] We use information inferred from a morphological analysis of nearby galaxies to derive empirically-motivated size and luminosity distributions for these substructures. We use the sample of nearby, well-resolved, multiband (*g*, *r*, and *i* filters from the Thuan-Gunn photometric system [@thuan76; @wade79; @schneider83]) ground-based galaxy images compiled by @frei96. The complete catalog by @frei96 has a total of 113 galaxies that span all the different Hubble classification classes.
We use the morphological parameter provided by the catalog[^7] to select a subsample of spiral galaxies, for which $0 \leq {{\tt{T}}} \leq 9$ is satisfied. Furthermore, by visual inspection of the remaining galaxies, we select those that are viewed completely or nearly face-on. The final sub-sample of images analyzed consists of 32 galaxies. Each galaxy image is available as a postage-stamp FITS file in each photometric band.[^8] The images have been reduced and their instrumental signatures removed, and foreground stars have been identified and removed by means of an empirical PSF model [@frei96b]. To find the substructure regions in each image, we begin by smoothing the postage stamp with a Gaussian kernel with a typical size of $3.5$–$10$ pixels, depending on the galaxy. The smoothed image is then subtracted from the original stamp, and the difference image is used as an input to [@bertin06]. We also define an elliptical bad-pixel mask that excludes the central region of the galaxy. To do this, we use the best-fit parameters obtained by using the code [@tessore16] to fit the galaxy data to a bulge-plus-disk model, where each component is assumed to follow a Sérsic profile. This bad-pixel mask is also passed to as input. For each galaxy image, we modify the relevant parameters[^9] in the configuration file to optimize the detection of the substructure clumps.  then outputs a catalog that includes the pixel positions, half-light radii (in pixels), and fluxes (in ADU) of the detections (, , , and , respectively). At the catalog level, we also perform a selection that excludes objects that are either too small or too bright, rejecting outliers in the size and flux distributions by means of $3\sigma$-clipping. Fig. \[f1\] shows an example of the type of substructures that are detected, along with the original image from the @frei96 catalog.
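The catalog-level outlier rejection can be sketched as an iterative $3\sigma$-clipping loop. This is a minimal sketch; the actual thresholds and iteration count used in the pipeline may differ:

```python
import numpy as np

def sigma_clip(values, nsigma=3.0, iters=5):
    """Iteratively reject entries more than nsigma standard deviations
    from the mean, as done to drop detections that are too small or
    too bright in the size and flux distributions."""
    values = np.asarray(values, dtype=float)
    keep = np.ones(values.size, dtype=bool)
    for _ in range(iters):
        m = values[keep].mean()
        s = values[keep].std()
        if s == 0.0:           # all remaining values identical: done
            break
        new_keep = np.abs(values - m) < nsigma * s
        if new_keep.sum() == keep.sum():
            break              # converged, no further rejections
        keep = new_keep
    return values[keep]
```

The loop recomputes the mean and standard deviation after each rejection pass, so a single extreme outlier cannot inflate the scale estimate indefinitely.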
Once we have a catalog of candidate structures (350 in total), we calculate their physical sizes in parsecs from their half-light radii in pixels by using the angular diameter distance to the galaxy that hosts each substructure. We assume a flat cosmology of the form specified above, and use the mean redshift of each galaxy in our subsample, as published by different sources and compiled by the NASA/IPAC Extragalactic Database.[^10] Fig. \[f2\] shows the final flux and size distributions obtained. We fit the tail of each distribution to a power law (), obtaining best-fit exponents of $-2.01 \pm 0.49$ and $-3.8 \pm 0.68$, respectively, and use these results as motivation to choose power-law flux and size distributions with exponents $-2$ and $-4$ when creating substructures in each source galaxy used by . However, these parameters, as well as the choice of the functional form for their probability distributions, can be adjusted in the code.

### Adding substructures to the source galaxy images {#sect:subadd}

For a given source galaxy image, we select a fraction $Q$ of its total flux to be redistributed in the form of substructures. We assign a different fraction $Q$ depending on the morphological classification of each galaxy, determined by its assigned interpolated SED template [@coe12] (11 basis templates), ranging from $N=1$ for early-type galaxies to $N=11$ for starburst galaxies. Thus, we set $Q$ to 0.05 for lenticular galaxies, $Q$ to a random number drawn from a uniform distribution in the interval \[0.2,0.4\], and to 0 otherwise (). As with other parameters, $Q$ can be adjusted in  if desired.
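Drawing substructure fluxes and sizes from bounded power laws with the exponents quoted above ($-2$ and $-4$) can be done by inverse-transform sampling. A sketch under our own naming conventions (the bounds in the example are hypothetical):

```python
import numpy as np

def sample_power_law(alpha, xmin, xmax, size, rng=None):
    """Draw samples from p(x) proportional to x**alpha on [xmin, xmax]
    (alpha != -1) by inverting the cumulative distribution, e.g.
    alpha = -2 for substructure fluxes and alpha = -4 for their sizes."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    a1 = alpha + 1.0
    # Inverse CDF of the truncated power law
    return (xmin**a1 + u * (xmax**a1 - xmin**a1)) ** (1.0 / a1)
```

For a steep exponent such as $-2$, most draws land near the lower bound, reproducing the abundance of small, faint clumps seen in Fig. \[f2\].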
[|c|c|]{}
![Galaxy from the @rafelski15 catalog (ID: 22245). The upper row shows the unlensed galaxy with and without knots of substructure (left and right panels). The lower row shows a similar comparison after the galaxy has been strongly lensed.[]{data-label="fig_with_without_sub"}](./no_knots_no_lensing_inv.png "fig:"){width="30mm"} & ![image](./knots_no_lensing_inv.png "fig:"){width="30mm"} \
![image](./no_knots_lensing_ing.png "fig:"){width="30mm"} & ![image](./knots_lensing_inv.png "fig:"){width="30mm"} \

Each substructure is placed at a pixel location derived by building a histogram of the surface brightness of the postage stamp and sampling from it by means of an inverse transform sampling algorithm. In this way, the substructures are placed with a probability proportional to the surface brightness distribution of the source galaxy. Fig. \[fig\_with\_without\_sub\] shows an example of a source galaxy with and without substructures. Once the location of a particular substructure has been chosen, we model it as a Sérsic profile of index $n=3.5$, with a size and luminosity drawn from the distributions of Fig. \[f2\]. These substructures will subsequently be created only if the image of their host galaxy in the lens plane is close enough to a critical curve that the local magnification produced by the lens is larger than a certain threshold $\mu_{\mathrm{t}}$.

Virtual observations
--------------------

As explained above,  has the capability of producing a virtual observation with any given instrument and/or telescope and at any desired resolution, for a particular field of view. [Once the telescope (and the detector) and the lens (or lenses) have been defined, the code reconstructs the images of the sources by assigning to the pixels on the detector, whose positions are given by the $\vec \theta$ vectors, the value of the surface brightness at positions $\vec\beta$, $$I(\vec\theta)=I_{\text{s}}(\vec\beta) \;,$$ with $\vec\beta$ given either by Eq. \[eq:lens\] or by Eq.
\[eq:multilens\], thus accounting for the lensing effects of the matter along the line of sight. ]{} Finally, the images are convolved with the instrumental Point Spread Function (PSF), and different sources of noise, such as sky background, Poisson, and readout noise, are added, depending on the specified number of exposures, as explained in Sect. \[sect:genmeth\].

Lensing simulations {#sec:sims}
===================

We have simulated observations of galaxy clusters through five different telescopes and instruments to illustrate the capabilities of : *HST* ACS WFC, *WFIRST* WFI, *JWST* NIRCam, Subaru HSC, and [*Euclid*]{} VIS. In the first four cases, we use the corresponding PSF-generating tools to produce sensible PSF models for the different filters, and the Exposure Time Calculators to obtain estimates of the sky background. For the latter, we refer to the [*Euclid*]{} red book [@2011arXiv1110.3193L] to generate a simple PSF model. We use the HXDF source catalog and the lens model described in Sections 2.1 and 2.2 above.

[|c|c|]{}
![image](./COMPRESSED_HST.jpg){width="80mm"} & ![image](./COMPRESSED_JWST.jpg){width="80mm"} \
![image](./COMPRESSED_WFIRST.jpg){width="80mm"} & ![image](./COMPRESSED_SUBARU.jpg){width="80mm"} \

We show the resulting images in Fig. \[fig\_color\] (each one with a field of view of $204'' \times 204''$), where we have combined three images in the “red", “green", and “blue" channels by using the software [^11] [@coe12]. Fig. \[f5\] shows a zoomed-in region of the panels in Fig.
\[fig\_color\], illustrating a strongly-lensed arc with the star-forming regions added as described in Section \[sect:subadd\].

*HST* ACS WFC
-------------

We simulate observations through the ACS WFC of *HST* in the , , and filters, each one with an exposure time of 5000 s. We use the web interface[^12] of the software [@krist11] to generate the PSF models in each band. The PSF models have a pixel scale of $0.0495''$ per pixel, but the final images are rendered at a resolution of $0.03''$ per pixel. We obtain values for the sky background (in e$^-$/pix/sec) from the measurements performed by @sokol12 (Table 2, average backgrounds).

*JWST* NIRCam
-------------

We simulate observations in the imaging mode of the *JWST* NIRCam through the short-wavelength channel (0.6-2.3 $\mu$m) in the , , and filters. We use the *JWST* PSF simulation tool [^13] to produce PSF models in each filter. The native pixel scale of the instrument is $0.032''$ per pixel; however, the NIRCam short-wavelength channel is undersampled below 2.4 $\mu$m. Thus, we created PSF models sampled at a scale of $0.032''/N$ with $N=4$ to satisfy the Nyquist criterion ($N \geq 2p/(\lambda_m F)$, which results in a number of dithered exposures $N \geq 3$ for an f-number $F$ of 20, a pixel pitch $p$ of 18 $\mu$m, and a minimum wavelength $\lambda_m$ of 0.6 $\mu$m). We estimate the contribution from the background in each filter from four exposures by using the *JWST* Exposure Time Calculator[^14] for NIRCam in the readout pattern with 10 groups. The values recorded for each of the three filters used are 0.318, 0.321, and 0.289 e$^-$/pix/s, respectively. The choice of the readout pattern () and the number of exposures ($N=4$) sets the exposure time of each image to 10146 s.

WFIRST
------

We produce simulations in three of the four bands of the planned High Latitude Survey by the Wide Field Imager of *WFIRST*: , , and .
We use the *WFIRST* module developed by @kannawadi15 to obtain PSF models and sky background levels in each band.[^15] The near-infrared detectors of the WFI have a native pixel scale of $0.11''$ per pixel; however, we draw them at a scale of $0.11''/N$ per pixel with $N=3$ to avoid undersampling. The sky background model reported by includes zodiacal light, stray light (10%), and thermal backgrounds, and makes use of the Exposure Time Calculator[^16] by @hirata12. Including a mean dark current of 0.0015 e$^-$/pix/s, the sky backgrounds obtained for the , , and bands are 0.669, 0.654, and 0.654 e$^-$/pix/s, respectively. The exposure time chosen for each image is 504 s (168 s per exposure).

HSC
---

We use the “PSF Picker" tool to generate PSF models in the , , and bands of the HSC survey for the Wide Field Survey described in @aihara17.[^17] The parameters used for the query are (RA, DEC)=(180.0, 0.0) deg., tract 9348, patch “8,8", coadd, and pdr1\_wide, which represent a location in the wide-survey area. The pixel scale of the PSF models is $0.17''$ per pixel, and the seeing values were set to $0.72''$, $0.56''$, and $0.63''$ in the , , and bands, respectively, using the values in @aihara17. We use the HSC Exposure Time Calculator[^18] to estimate the contributions due to the sky backgrounds. Under the conditions of gray time (7 days after new Moon), transparency of 0.9, and 60 degrees of separation from the Moon, we obtain sky values of 35.08, 75.74, and 45.60 e$^-$/pix/s for the , , and bands, respectively. The exposure time for each band was set to 600 s.

![image](./Ares_EUCLID_VIS_sp.png){width="0.49\hsize"} ![image](./Ares_EUCLID_VIS_mp.png){width="0.49\hsize"}

*Euclid*
--------

Finally, we simulate an observation of [*Ares*]{} also with the future [*Euclid*]{} space telescope [@2011arXiv1110.3193L]. [*Euclid*]{} is scheduled for launch in the early 2020s and will observe $15,000$ sq.
degrees of the sky in four bands, namely a broad band (VIS) and three near-infrared bands (, , ). In the VIS channel the spatial resolution will be $0.1''$ and the observations will reach a magnitude limit of $24.5$ $m_{\rm AB}$ for extended sources at $S/N\sim10$, making [*Euclid*]{} a very promising instrument for strong lensing science. We use the set-up of a typical [*Euclid*]{} observation to illustrate the multi-plane functionality of . For these simulations, we assume a PSF with a Moffat profile and a FWHM of $0.18''$. We produce two images in the VIS channel. The first includes only one lens plane, as in the previous examples shown in Fig. \[fig\_color\]. The second includes the effects of the other two lens planes shown in the second and third upper panels of Fig. \[fig:effconv\], in addition to that containing the mass distribution of [*Ares*]{}. The images are shown in Fig. \[fig:euclid\]. The effective mass distributions of the lenses in the two cases correspond to the first and the last upper panels in Fig. \[fig:effconv\]. Clearly, the addition of other lens planes impacts the resulting appearance of the strong lensing features. Some of the arcs are dimmed or broken into smaller arclets, while other sources happen to be more strongly distorted. Overall, several lensed images in the right panel appear shifted compared to the corresponding ones in the left panel.

Discussion {#sec:discussion}
==========

In this section we discuss possible applications of . Simulations represent a fundamental tool in gravitational lensing, since there are no sources in the sky whose intrinsic shapes, before lensing, are perfectly known. Simulations with known inputs make it possible to calibrate lensing measurement codes and to quantify the impact of random errors (such as noise). In addition, they help to improve the understanding and characterization of systematic and modeling errors.
This is useful when testing mass reconstruction codes that aim to constrain the distribution of the lens light, its mass distribution, and the background sources. In particular, the knots of substructure that represent regions of star formation included in this version of  will also produce multiple images in strongly-lensed arcs (e.g., Fig. \[f5\]) that can lead to better constraints during the source image reconstruction process. These knots of star formation are also important to better understand the images of regions of star formation in the high-redshift universe that are magnified by the effect of strong lensing.  can be used to simulate gravitational arcs in the centers of galaxy clusters and to improve the modeling of strong lensing systems, which is crucial to exploit the information that they contain about the distribution of dark matter in galaxies and galaxy clusters. This is also important to measure the shapes of dark matter halos with precision and to detect and constrain their substructures. [The code can also be used to simulate lensing effects around galaxies, although this was not extensively discussed in this paper [@metcalf18].]{} Future wide galaxy surveys such as the Large Synoptic Survey Telescope (LSST) [@ivezic08] and *Euclid* will increase the number of strong lensing systems by up to three orders of magnitude compared to current searches (e.g., there will be about 120000 and 170000 galaxy-galaxy strong lenses in the cases of LSST and *Euclid*, respectively; @collet15).  simulations can be used to develop and train algorithms to find these systems with the required efficiency and completeness.

Summary and Conclusions {#sec:conclusions}
=======================

We have presented a new version of the ray-tracing code . New additions to the code include the use of denoised source galaxies from the Hubble eXtreme Deep Field, which improves the resolution of the background source images as compared with previous versions of .
This new version of  is able to calculate the lensing effects due to multiple deflector planes along the line of sight, and to simulate substructures in the source images that represent regions of star formation. We model these substructures as objects with a Sérsic profile, and we perform a morphological analysis of images of nearby galaxies to empirically motivate their size and luminosity distributions.  is able to produce observations of the images of background sources lensed by multiple deflectors along the line of sight through virtually any instrument and telescope—both ground- and space-based. The mass distribution of the lenses (produced either analytically or numerically) is used to generate a deflection angle field per lens plane. [Multiple lens planes can be used to include lensing effects by any mass along the line of sight.]{} Once the total throughput of the system, a model for the PSF, and the characteristics of the instrument and telescope are specified,  will produce a simulated image of an observation (including noise) for a given field of view and exposure time. As an example of the simulations that can be generated with , we have created images in multiple filters for several imaging instruments of current and future space- and ground-based observatories: *HST* ACS WFC, *WFIRST* WFI, *JWST* NIRCam, Subaru’s HSC, and *Euclid*’s VIS. Simulations with known input allow for the testing, calibration, and improvement of lens modeling algorithms, in addition to providing a means to test for the impact of different sources of random and systematic errors. The regions of star formation included in the source images of this version of  will, for example, make it possible to place better constraints on source image reconstructions and lens models. The simulations created by  can also be utilized to train algorithms that automate the search for strong lensing systems in large data sets.
[We will make  available through the [*Bologna Lens Factory*]{} portal.[^19]]{}

Acknowledgements {#acknowledgements .unnumbered}
================

We acknowledge support from the Italian Ministry of Foreign Affairs and International Cooperation (MAECI), Directorate General for Country Promotion, and from ASI via contract ASI/INAF/I/023/12/0. We acknowledge support from grant HST-AR-13915.004-A of the Space Telescope Science Institute. AAP is supported by the Jet Propulsion Laboratory. JR is supported in part by the Jet Propulsion Laboratory. The research was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. M. Maturi was supported by the SFB-Transregio TR33 “The Dark Universe".

[^1]: E-mail: [email protected]

[^2]: E-mail: [email protected]

[^3]: The catalog can be found at <https://asd.gsfc.nasa.gov/UVUDF/catalogs.html>

[^4]: <http://www.stsci.edu/~dcoe/BPZ/>

[^5]: [ http://cgiocoli.wordpress.com/research-interests/moka]( http://cgiocoli.wordpress.com/research-interests/moka)

[^6]:

[^7]: In turn, @frei96 report from the galaxy catalog by @devaucouleurs91.

[^8]: [zsolt-frei.net/catalog.htm](zsolt-frei.net/catalog.htm)

[^9]: Such as , , and .

[^10]: <https://ned.ipac.caltech.edu>

[^11]: <http://www.stsci.edu/~dcoe/trilogy/Intro.html>

[^12]: <http://tinytim.stsci.edu/cgi-bin/tinytimweb.cgi>

[^13]: <https://jwst.stsci.edu/science-planning/proposal-planning-toolbox/psf-simulation-tool-webbpsf>

[^14]: <https://jwst.etc.stsci.edu/>

[^15]: We use . The *WFIRST* module is called , and the PSF and sky backgrounds are obtained by calling the utilities and , respectively.
[^16]: <http://www.tapir.caltech.edu/~chirata/web/software/space-etc/> [^17]: The tool can be found at: <https://hsc-release.mtk.nao.ac.jp/psf/pdr1/> [^18]: <https://hscq.naoj.hawaii.edu/cgi-bin/HSC_ETC/hsc_etc.cgi> [^19]: <http://metcalf1.difa.unibo.it/blf-portal/skylens.html>.
--- abstract: 'The conductance of graphene subject to a strong, tilted magnetic field exhibits a dramatic change from insulating to conducting behavior with tilt-angle, regarded as evidence for the transition from a canted antiferromagnetic (CAF) to a ferromagnetic (FM) $\nu=0$ quantum Hall state. We develop a theory for the electric transport in this system based on the spin-charge connection, whereby the evolution in the nature of collective spin excitations is reflected in the charge-carrying modes. To this end, we derive an effective field theoretical description of the low-energy excitations, associated with quantum fluctuations of the spin-valley domain wall ground-state configuration which characterizes the two-dimensional (2D) system with an edge. This analysis yields a model describing a one-dimensional charged edge mode coupled to charge-neutral spin-wave excitations in the 2D bulk. Focusing particularly on the FM phase, naively expected to exhibit perfect conductance, we study a mechanism whereby the coupling to these bulk excitations assists in generating back-scattering. Our theory yields the conductance as a function of temperature and the Zeeman energy - the parameter that tunes the transition between the FM and CAF phases - with behavior in qualitative agreement with experiment.' author: - Pavel Tikhonov - Efrat Shimshoni - 'H. A. Fertig' - Ganpathy Murthy title: 'Emergence of helical edge conduction in graphene at the $\nu=0$ quantum Hall state' --- Introduction and Principal Results {#sec:intro} ================================== One of the most intriguing manifestations of many-body effects in graphene is the observation of a quantum Hall (QH) state at $\nu=0$ in the presence of strong perpendicular magnetic fields [@Zhang_Kim2006; @Alicea2006; @Goerbig2006; @Gusynin2006; @Nomura2006; @Jiang2007; @Herbut2007; @Fuchs2007; @Abanin2007; @Checkelsky; @Du2009; @Goerbig2011; @Dean2012; @Yu2013]. 
This unique state is characterized by a plateau at $\sigma_{xy}=0$, and a peak in the longitudinal resistance which typically exhibits insulating behavior. The high resistance signature is difficult to reconcile with a non-interacting theory [@Abanin_2006], which implies a helical nature of the edge states: right and left movers have opposite spin flavors, resolved by the Zeeman splitting of the $n=0$ Landau level in the bulk. In analogy with the quantum spin Hall (QSH) state in two-dimensional (2D) topological insulators [@Kane-Mele; @TIreview], the edge states are hence immune to backscattering by static impurities, and a nearly perfect conduction is expected. Coulomb interactions do not change the character of the edge states in a fundamental way, as long as the many-body state forming in the bulk remains spin-polarized, i.e. is a ferromagnet (FM). Such a bulk phase supports a gapless collective edge mode associated with a domain wall in the spin configuration, which can be modeled as a helical Luttinger liquid [@Fertig2006; @SFP; @Paramekanti; @Kusum]. Insulating behavior therefore suggests that the true ground state is not a FM. Indeed, at half filling of the $n=0$ Landau level, there is a rich variety of ways to spontaneously break the $SU(4)$ symmetry in spin and valley space, leading to a multitude of possible ground states with distinct properties [@Herbut2007AF; @Jung2009; @Nandkishore; @Kharitonov_bulk; @Kharitonov_edge; @SO5; @Roy2014; @Lado2014; @QHFMGexp]. The combined effect of interactions and external fields can assist in selecting the favored many-body ground state, particularly when accounting for lattice-scale interactions which do not obey $SU(4)$ symmetry. Most interestingly, the tuning of an external parameter can drive a transition from one phase to another. 
As a concrete example, it has been proposed [@Kharitonov_bulk; @Kharitonov_edge; @bilayer_QHE_CAF] that a phase transition can occur from a canted antiferromagnetic (CAF) to a FM state, tuned by increasing the Zeeman energy $E_z$ to appreciable values. Recent experiments in a tilted magnetic field [@Young2013; @Maher2013] appear to confirm the predicted phase transition in a transport measurement. In these experiments, the perpendicular field $B_{\perp}$ is kept fixed while the Zeeman coupling $E_z$ is tuned by changing the parallel component. At $\nu=0$ and relatively low $E_z$, the system exhibits a vanishing two-terminal conductance which slightly increases to finite values with increasing temperature $T$; i.e., it indicates an insulating behavior as in earlier studies of the $\nu=0$ state. However with increasing values of $E_z$, the sample develops a steep rise of conductance and approaches an almost perfect two-terminal conductance of $G\approx 2e^2/h$, a behavior characteristic of a QSH state with protected edge states. The most natural interpretation of these findings is in terms of the predicted phase transition from a CAF to a FM bulk state. However, while the theory dictates a second order quantum phase transition at a critical Zeeman coupling $E_z^c$ (and $T=0$), the transport data (obtained at finite $T$) reflects a smooth evolution of $G$ with $E_z$. The critical point $E_z^c$ can be estimated only roughly by, e.g., identifying the value of $E_z$ where $G(T\rightarrow 0)$ approaches the mid-value $e^2/h$, or where $dG/dT$ changes sign. At the highest accessible $E_z$ (where presumably $E_z>E_z^c$), the conductance still falls below the perfect quantized value. The above described behavior suggests that the low energy charge-carrying excitations smoothly evolve through the CAF-FM phase transition, so that their change of character reflects the critical properties of the bulk phases. 
In earlier work [@MSF2014; @tdhfa], we showed that in both phases one can construct collective charged modes associated with textures in the spin and valley configurations near the edges of the system, and characterized their essential properties. Such excitations are supported due to the formation of a domain wall (DW) structure, where the spin and valley are entangled and vary with position towards the edge. The nature of collective edge modes continuously evolves as $E_z$ is tuned through the transition. In particular, the CAF phase supports a gapped charged edge mode, which becomes gapless at the transition to the FM phase and is smoothly connected to the helical edge mode characteristic of the QSH state. In terms of the spin degree of freedom, the gapless charged collective edge mode in the FM phase corresponds to a $2\pi$ twist of the ground-state spin configuration in the $XY$-plane[@Fertig2006]. This spin twist is imposed upon the spatially-varying $S_z$ associated with the DW, thus creating a spin texture (i.e. a Skyrmion stretched out along the entire edge), with an associated charge that is inherent to quantum Hall ferromagnets [@QHFM; @Fertig1994; @Yang2006]. In contrast, the energy cost of generating such a spin texture in the CAF phase is infinite. A proper description of the lowest energy charged excitations in this phase therefore involves a coupling between topological structures at the edge and in the bulk [@MSF2014], and yields a charge gap on the edge that encodes the [*bulk*]{} spin stiffness for rotations in the $XY$-plane. In both the CAF and FM phases, the collective excitations also contain charge-neutral modes, and among them the low-energy ones are spin-waves in the bulk [@tdhfa]. 
Their behavior across the transition is opposite to that of the charged edge modes: in the CAF phase, where the charged edge excitations are gapped, a broken $U(1)$ symmetry in the bulk (associated with an $XY$-like order parameter) implies a neutral, gapless Goldstone mode. In contrast, in the FM phase where the charged edge mode is gapless, the bulk spin-waves acquire a gap which grows with $(E_z-E_z^c)$. While the neutral modes do not contribute to electric transport as carriers, their coupling to the charged modes can play an important role in the scattering processes responsible for a finite resistance. Most prominently, in the FM phase where the helical edge modes are protected by conservation of the spin component $S_z$, the coupling to the bulk spin-waves is essential to relax this conservation, and therefore dominates the electric resistance at finite $T$. In a previous work[@tdhfa], three of the present authors carried out a detailed time-dependent Hartree-Fock (TDHF) analysis of the HF state of our first paper[@MSF2014]. TDHF is similar in spirit to a spin-wave analysis, in that it diagonalizes the Hamiltonian in the Hilbert space of a single particle-hole excitation. However, for our present purpose of investigating the transport on the edge near the transition, we need to go beyond TDHF in several ways. Firstly, we need to include a coupling between the edge and bulk modes that allows the relaxation of the edge spin, which is otherwise a good quantum number. Secondly, we need to introduce disorder at the edge, which is extremely hard to do in TDHF. Thirdly, we would like to obtain the temperature dependence of transport coefficients close to the transition, in order to compare with experiments.
To accomplish these objectives, in this paper we will first derive a low-energy effective field-theoretic description of the coupled system of bulk and edge, which encodes the information on the nature of the collective modes as well as the symmetries of the problem (overall $S_z$ conservation, including both bulk and edge). The parameters appearing in this effective theory have to be matched with the results of TDHF as well as with physical constraints such as the fact that the stiffness is not singular at the transition. Since we focus on the low-energy sector, the theory contains the charge-carrying edge mode (gapless in the FM phase) and neutral spin-wave excitations of the bulk (gapped in the FM phase). Interestingly, some of the parameters of the effective theory do behave in a singular way as the transition is approached, reflecting a divergent length scale. This effective theory contains all the ingredients we need to compute transport coefficients at low temperatures. The detailed TDHF calculation[@tdhfa] shows that all other collective excitations are high in energy, and remain gapped through the transition. They will thus contribute, at best, to a finite renormalization of the parameters of the effective theory. Focusing particularly on the FM phase, we study the mechanism whereby the coupling of the charged edge mode to the charge-neutral bulk excitations assists in generating back-scattering. Our theory yields the two-terminal conductance $G$ as a function of $T$ and the Zeeman energy $E_z$. The main results are summarized in Fig. \[fig:Conductance\], and Eq. (\[eq:deltaR2T\_short\]) below, which describes the intrinsic resistance (dictating the deviation of $G$ from $2e^2/h$) as a scaling function of $T$ and the critical energy scale $\Delta=E_z-E_z^c$. In the low $T$ limit where $T\ll\Delta$, this yields a simple activation form \[see Eq. (\[eq:deltaR\_final\])\].
This behavior is dual to the exponentially small [*conductance*]{} expected in the insulating CAF phase. Our results are in qualitative agreement with experiment. The paper is organized as follows. In Sec. \[sec:2Daction\] we detail the derivation of a 2D field theoretical model for the quantum fluctuations in the spin and valley configuration for a system with an edge potential. In Sec. \[sec:normalmodes\] we study the normal modes of low energy collective excitations in the FM phase, and derive an effective Hamiltonian describing the 1D edge mode coupled to 2D bulk spin-waves. This section is supplemented by Appendix \[sec:uK\_critical\], devoted to a derivation of the scaling of the model parameter when $E_z$ approaches the critical value $E_z^c$. Based on the resulting effective model, in Sec. \[sec:G\] we evaluate the two-terminal conductance $G$ as a function of $T$ and $E_z$. Some further details of the calculation are included in Appendix \[sec:deltaRdetails\]. Finally, our main results and some outlook are summarized in Sec. \[sec:summary\]. Model for Spin-valley fluctuations in two-dimensions {#sec:2Daction} ==================================================== We consider a ribbon of monolayer graphene in the $x-y$ plane, subject to a tilted magnetic field of magnitude $B_T$ and perpendicular component $B_\perp$. These two distinct field scales independently determine the Zeeman energy $E_z\propto B_T$ and the magnetic length $\ell=\sqrt{\hbar c/e B_\perp}$. At zero doping, the $n=0$ Landau level is half-filled and we assume that mixing with other Landau levels can be neglected. In addition, for the time being we focus on an ideal system uniform in the $\hat{y}$-direction but of finite width in the $\hat{x}$-direction, so that single-electron states can be labeled by a guiding-center coordinate $X=\ell^2k_y$ with $k_y$ the momentum in the $y$-direction. Similarly to Ref. 
, the boundaries of the ribbon are accounted for by an edge potential $U(x)\hat{\tau}_x$ where $\hat{\tau}_x$ denotes a valley isospin operator, and $U(x)$ grows linearly over a length scale $w$, from zero in the bulk to a constant $U_e$ on the edge. It is therefore convenient to represent electronic states in a basis of 4-spinors $|X s\,\tau \rangle$ where $s=\uparrow,\downarrow$ denotes the real spin index $s_z$, and $\tau=\pm$ are the eigenvalues of $\hat{\tau}_x$ corresponding to symmetric and antisymmetric combinations of valley states. The microscopic Hamiltonian describing the system, projected into the above manifold of $n=0$ states, assumes the form [@Kharitonov_bulk; @Kharitonov_edge; @MSF2014] $$\begin{aligned} \label{Hmicro} H &=& \sum_X c^\dagger(X) [-E_z\sigma_z\tau_0+U(X)\sigma_0\tau_x] c(X) +H_{int}\; ,\\ H_{int}&=&\frac{\pi\ell^2}{L^2}\sum_{\alpha=0,x,y,z}\sum_{X_1,X_2,q} e^{-q^2\ell^2/2+iq(X_1-X_2)} g_\alpha :c^\dagger(X_1+{{q\ell^2} \over 2})\tau_\alpha c(X_1-{{q\ell^2} \over 2}) c^\dagger(X_2-{{q\ell^2} \over 2})\tau_\alpha c(X_2+{{q\ell^2} \over 2}):, \nonumber\end{aligned}$$ where $c^\dagger(X),c(X)$ are creation and annihilation operators written as 4-spinors, \[$c^\dagger(X) \equiv (c_{K,\uparrow}^{\dag}(X),c_{K,\downarrow}^{\dag}(X), c_{K',\uparrow}^{\dag}(X),c_{K',\downarrow}^{\dag}(X))$\], $\sigma_\alpha$ ($\tau_\alpha$) are the spin (isospin) Pauli matrices and $\sigma_0$, $\tau_0$ are unit matrices, $L$ is the system size and $:\,:$ denotes normal ordering; $g_\alpha$ denote lattice-scale interaction parameters obeying $g_x=g_y \equiv g_{xy}$ and $g_z>-g_{xy}>0$. The latter condition is required [@Kharitonov_bulk] to stabilize a CAF phase for small $E_z$. Finally, $g_0$ parametrizes an $SU(4)$ symmetric interaction which mimics the effect of Coulomb interactions, and dominates the spin-isospin stiffness. As we have shown in Ref. , for arbitrary $E_z$ and $U(X)$ the Hartree-Fock solution of the Hamiltonian Eq. 
(\[Hmicro\]) at $1/2$-filling is a spin-valley entangled domain wall, characterized by two distinct canting angles $\psi_a(X)$, $\psi_b(X)$ which vary continuously as a function of $X$ when approaching an edge. This corresponds to a Slater determinant with two (out of four possible) occupied states for each $X$: $$\begin{aligned} \label{aXbX} |a_X\rangle &=& \cos\frac{\psi_a}{2}|X \uparrow + \rangle - e^{i\phi_a}\sin\frac{\psi_a}{2}|X \downarrow - \rangle, \\ |b_X\rangle &=& -\cos\frac{\psi_b}{2}|X \uparrow - \rangle + e^{i\phi_b}\sin\frac{\psi_b}{2}|X \downarrow + \rangle,\nonumber\end{aligned}$$ where the $X$-dependence of $\psi_\nu$, $\phi_\nu$ is implicit. The many-body state is therefore a hybridized spin-valley configuration, which may be represented in terms of two local spin-$1/2$ pseudospin fields ${\bf S}_a(X)$, ${\bf S}_b(X)$ encoded by the Euler angles $\psi_\nu\in[0,\pi]$, $\phi_\nu\in[0,2\pi]$: $$\label{SabDef} {\bf S}_\nu=\frac{1}{2}\left(\sin\psi_\nu\cos\phi_\nu,\sin\psi_\nu\sin\phi_\nu,\cos\psi_\nu\right),$$ where $\nu=a,b$. Note that in Ref. , our focus was on the derivation of the ground state and we had assumed trivial phase factors in Eq. (\[aXbX\]): $\phi_a=\phi_b\equiv\phi=0$. However, there is actually a manifold of degenerate ground states with an arbitrary global phase $\phi\neq 0$. This implies the existence of a gapless mode associated with a slowly varying twist of the angle $\phi$, consistent with Ref. as will be discussed in more detail below. We now allow for fluctuations in the collective variables $\psi_\nu({\bf r})$, $\phi_\nu({\bf r})$ \[where ${\bf r}=(x,y)$\] which vary slowly in space with respect to the magnetic length $\ell$. 
Assuming further that $g_0\sim e^2/\ell$ and hence is much larger than the other interaction scales (for $\alpha=x,y,z$, $g_\alpha\sim e^2 a_0/\ell^2$ with $a_0$ the lattice spacing [@Kharitonov_bulk]), a semi-classical approximation yields an effective Hamiltonian of the form $$\label{EvsSaSb} H[{\bf S}_a({\bf r}),{\bf S}_b({\bf r})]=\sum_{\bf r}\left\{\frac{\rho_0}{2}\sum_{\alpha=x,y,z}\sum_{\nu=a,b}|\nabla S^\alpha_\nu|^2 +H_{loc}({\bf r})\right\}$$ where $\rho_0\propto g_0$ is the pseudospin-stiffness, and $H_{loc}({\bf r})$ is a local term. The latter can be derived by evaluating the expectation value of the microscopic Hamiltonian Eq. (\[Hmicro\]) in a state of the form Eq. (\[aXbX\]), with the label $X$ replaced by [**r**]{}. Defining a local projector $$\label{Pdef} \mathcal{P}({\bf r})=|a_{\bf r}\rangle\langle a_{\bf r}|+|b_{\bf r}\rangle\langle b_{\bf r}|\; ,$$ the local energy term can be expressed as $$\label{Eloc_P} H_{loc}({\bf r})=\sum_{\alpha=x,y,z}g_\alpha\left\{(Tr[\mathcal{P}({\bf r})\sigma_0\tau_\alpha])^2-Tr[(\mathcal{P}({\bf r})\sigma_0\tau_\alpha)^2]\right\},$$ where $Tr$ is the trace of a $4\times 4$ matrix in the basis set by the 4 states $|\uparrow \pm \rangle$, $|\downarrow \pm \rangle$. Employing Eqs. (\[aXbX\]) and (\[Pdef\]), we obtain $$\begin{aligned} \label{Eloc} H_{loc}({\bf r})&=& -[E_z-U(x)]\cos\psi_a({\bf r}) -[E_z+U(x)]\cos\psi_b({\bf r}) \\ &-& (g_z+3g_{xy})\cos\psi_a({\bf r})\cos\psi_b({\bf r}) - (g_z-g_{xy})\sin\psi_a({\bf r})\sin\psi_b({\bf r})\cos[\phi_a({\bf r})-\phi_b({\bf r})]\; . \nonumber\end{aligned}$$ Note that since the physical parameters obey $g_z>0$ and $g_{xy}<0$, the coefficient of the last term is always negative. Indeed, this term arises from the ferromagnetic coupling between the $a$ and $b$ pseudospins in the $XY$ plane, and tends to lock the relative planar angle $\phi_-=\phi_a-\phi_b$ to $\phi_-=0$. 
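As a consistency check of Eq. (\[Eloc\]), the projector expression Eq. (\[Eloc\_P\]) can be evaluated numerically (adding explicitly the single-particle Zeeman and edge-potential expectation values, which are not contained in the trace formula) and compared with the closed form. The two agree up to an angle-independent additive constant $-(g_z+g_{xy})$. The following minimal sketch (all parameter values are arbitrary, subject to $g_z>-g_{xy}>0$) works in the basis $(K\uparrow,K\downarrow,K'\uparrow,K'\downarrow)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# 2x2 Pauli blocks; full 4x4 operators act on valley (x) spin,
# with basis ordering (K up, K dn, K' up, K' dn) as in the spinor c^dag(X)
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
tau = {'x': sx, 'y': sy, 'z': sz}

up, dn = np.array([1., 0.]), np.array([0., 1.])
vp = np.array([1., 1.]) / np.sqrt(2)    # |+>: tau_x eigenvalue +1
vm = np.array([1., -1.]) / np.sqrt(2)   # |->: tau_x eigenvalue -1

def H_trace(psa, psb, pha, phb, Ez, U, gz, gxy):
    """Single-particle expectation plus the projector trace formula, Eq. (Eloc_P)."""
    a = np.cos(psa/2)*np.kron(vp, up) - np.exp(1j*pha)*np.sin(psa/2)*np.kron(vm, dn)
    b = -np.cos(psb/2)*np.kron(vm, up) + np.exp(1j*phb)*np.sin(psb/2)*np.kron(vp, dn)
    P = np.outer(a, a.conj()) + np.outer(b, b.conj())
    # Zeeman + edge potential: -Ez sigma_z tau_0 + U sigma_0 tau_x
    E = np.trace(P @ (-Ez*np.kron(s0, sz) + U*np.kron(sx, s0))).real
    for al, g in (('x', gxy), ('y', gxy), ('z', gz)):
        Pt = P @ np.kron(tau[al], s0)
        E += g*(np.trace(Pt).real**2 - np.trace(Pt @ Pt).real)
    return E

def H_closed(psa, psb, pha, phb, Ez, U, gz, gxy):
    """Closed form, Eq. (Eloc)."""
    return (-(Ez - U)*np.cos(psa) - (Ez + U)*np.cos(psb)
            - (gz + 3*gxy)*np.cos(psa)*np.cos(psb)
            - (gz - gxy)*np.sin(psa)*np.sin(psb)*np.cos(pha - phb))

Ez, U, gz, gxy = 1.3, 0.4, 1.0, -0.5
diffs = []
for _ in range(50):
    psa, psb = rng.uniform(0, np.pi, 2)
    pha, phb = rng.uniform(0, 2*np.pi, 2)
    diffs.append(H_trace(psa, psb, pha, phb, Ez, U, gz, gxy)
                 - H_closed(psa, psb, pha, phb, Ez, U, gz, gxy))

# agreement up to the constant offset -(gz + gxy), independent of all angles
assert np.allclose(diffs, -(gz + gxy), atol=1e-10)
```

The constant offset simply shifts the zero of energy and does not affect the ground-state configuration or the fluctuation spectrum.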
In contrast, $H_{loc}$ does not contain any explicit dependence on the symmetric combination $\phi_+=\phi_a+\phi_b$, signifying the gapless nature of its fluctuations. Inserting Eq. (\[Eloc\]) with $\phi_-=0$ into Eq. (\[EvsSaSb\]), and minimizing $H[{\bf S}_a({\bf r}),{\bf S}_b({\bf r})]$ with respect to the remaining collective fields $\psi_a({\bf r})$ and $\psi_b({\bf r})$, yields the static domain wall structure $\psi_a^0({\bf r}),\psi_b^0({\bf r})$ described in Ref. : in the bulk, $\psi_a^0=\psi_b^0=\psi$ where in the CAF phase ($E_z<E_z^c=2|g_{xy}|$) $\psi$ is a nontrivial canting angle [@Kharitonov_bulk] obeying $\cos\psi=E_z/E_z^c$, and in the FM phase ($E_z>E_z^c$) $\psi=0$; the angles smoothly change towards the edge where $\psi_a^0=-\pi$, $\psi_b^0=0$ in both phases. Close to the CAF/FM transition ($E_z\to E_z^c$), the effective width of the domain wall is given by the diverging length scale $$\label{xi_def} \xi\sim \sqrt{\rho_0/|E_z-E_z^c|}\; .$$ To describe the dynamics of [*quantum*]{} fluctuations in the collective pseudospin fields compared to their ground state configuration, we next construct a path-integral formulation [@SpinBooks] in terms of the Euclidean action $$\label{S_2D} \mathcal{S}_{2D}=\int_0^\beta d\tau \left\{-\frac{i}{2}\sum_{\bf r}\sum_{\nu=a,b}\cos\psi_\nu\partial_\tau\phi_\nu+H[{\bf S}_a,{\bf S}_b]\right\},$$ where $\beta=1/T$, $H[{\bf S}_a,{\bf S}_b]$ is given by Eq. (\[EvsSaSb\]), and the local fields are now $\psi_\nu({\bf r},\tau)$, $\phi_\nu({\bf r},\tau)$ with $\tau$ the imaginary time; here we have used units where $\hbar=k_B=1$. Defining the fluctuation fields $\Pi_\nu({\bf r},\tau)$ via the substitution $$\label{Pi_def} \cos\psi_\nu=\cos\psi_\nu^0+\Pi_\nu$$ in the first term of Eq. (\[S\_2D\]), it is apparent that $\Pi_\nu$ are the canonical momenta of the planar angle fields $\phi_\nu$. 
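The next step passes to symmetric and antisymmetric combinations of these conjugate pairs. As a quick check (a sketch; the phase-space ordering is ours) one can verify that this linear change of variables $(\phi_a,\Pi_a,\phi_b,\Pi_b)\mapsto(\phi_+,\Pi_+,\phi_-,\Pi_-)$ is canonical, i.e. preserves the Poisson brackets:

```python
import numpy as np

# phase-space ordering (phi_a, Pi_a, phi_b, Pi_b) -> (phi_+, Pi_+, phi_-, Pi_-)
M = np.array([[0.5, 0.0,  0.5,  0.0],   # phi_+ = (phi_a + phi_b)/2
              [0.0, 1.0,  0.0,  1.0],   # Pi_+  = Pi_a + Pi_b
              [1.0, 0.0, -1.0,  0.0],   # phi_- = phi_a - phi_b
              [0.0, 0.5,  0.0, -0.5]])  # Pi_-  = (Pi_a - Pi_b)/2

# symplectic form for two canonical pairs (q1, p1, q2, p2)
J = np.array([[ 0., 1., 0., 0.],
              [-1., 0., 0., 0.],
              [ 0., 0., 0., 1.],
              [ 0., 0., -1., 0.]])

# the map is canonical iff it preserves the symplectic form: M J M^T = J
assert np.allclose(M @ J @ M.T, J)
```

In particular $\{\phi_+,\Pi_+\}=\{\phi_-,\Pi_-\}=1$ with all cross brackets vanishing, so the kinetic term retains its canonical form in the new variables.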
Employing the canonical transformation into symmetric and antisymmetric fields $$\begin{aligned} \label{ab2+-} \phi_+ &=&\frac{1}{2}\left(\phi_a+ \phi_b\right)\,,\quad \phi_-=\phi_a- \phi_b, \\ \Pi_+ &=&\Pi_a +\Pi_b\,,\quad \Pi_-=\frac{1}{2}\left(\Pi_a- \Pi_b\right), \nonumber\end{aligned}$$ the effective action acquires the form $$\begin{aligned} \label{S_2D_pm} &&\mathcal{S}_{2D}= \\ &&\int_0^\beta d\tau \left\{-\frac{i}{2}\sum_{\bf r}\sum_{\mu=+,-}\Pi_\mu\partial_\tau\phi_\mu+H[\Pi_+,\Pi_-,\phi_+,\phi_-]\right\},\nonumber\end{aligned}$$ where in the last term, the dependence on $\phi_+$ is restricted to gradient terms, while the $\phi_-$-dependence includes a mass term \[the last term in Eq. (\[Eloc\]), $\propto\cos\phi_-$\] independent of $E_z$. As a result, the normal modes of the antisymmetric sector are typically gapped, and a low-energy effective field-theory model can be obtained by projecting to the symmetric sector encoded by the pair of conjugate fields $\phi_+,\Pi_+$. We note that the local momentum operator $\Pi_+$, denoting a fluctuation in the total spin component $S_z$, $$\label{Pi+Sz} \Pi_+=\delta S_a^z+\delta S_b^z=\delta S^z\; ,$$ commutes with all the local terms of $H[\Pi_+,\Pi_-,\phi_+,\phi_-]$. As we show in the next sections, in the FM phase this leads to the emergence of a gapless edge mode which carries fluctuations in $\phi_+$ (physically representing rotations of the total spin in the $XY$ plane), and is protected by an approximate conservation of the spin component $S^z$ in the edge sector. Normal modes and Effective Model {#sec:normalmodes} ================================ The low-energy dynamics of the model discussed in the previous section is complicated by the fact that the ground-state of the system is non-uniform in the $\hat{x}$ direction due to the edge potential. In the FM phase, there are gapless low-energy excitations which are confined to the edge of the system [@Fertig2006], whereas all excitations in the bulk are gapped.
As described above we are primarily interested in transport due to the low-energy edge excitations and how this is impacted by the bulk excitations at low but finite temperature. Accomplishing this involves the challenge of developing a theory which includes both the edge and bulk excitations, and interactions between them. A natural description of the edge modes involves tilting the spin orientations away from their semiclassical groundstate, for example using the degrees of freedom $\phi_{\pm}$ and their conjugates $\Pi_{\pm}$ in Eq. (\[ab2+-\]). As argued in the last section, only gradient terms of the variable $\phi_+$ can appear in the effective action, Eq. (\[S\_2D\_pm\]), leading to gapless modes which will dominate the low-temperature transport properties of the system [@Fertig2006; @Kharitonov_edge]. The difficulty with using this parameterization for the entire system lies in the rather different orientations of the spins in the semiclassical ground-state configuration near the edge and deep in the bulk. The problem is apparent in Eq. (\[Pi\_def\]). In the FM state, deep in the bulk spins are oriented along the $\hat{z}$ direction; i.e., $\psi_{\nu}^0=0$. This means that one should restrict $\Pi_{\nu}<0$ for fluctuations that are physically allowable: spins can only fluctuate [*downward*]{} from this orientation. Such a constraint is very challenging to implement in a fluctuating field theory. One may prefer in this situation to retain the original spin variables, $\vec S_\nu$, for which $\langle S_\nu^z \rangle=1/2$ in the ground-state, and $S_\nu^{x}$, $S_\nu^{y}$ are conjugate variables. This is just the standard approach to spin waves [@SpinBooks]. Thus, there is an essential tension between the natural degrees of freedom in the bulk and at the edge. 
In this section we will introduce an effective model in which we write both the bulk and the edge degrees of freedom in their “natural” representations, while retaining the basic symmetries of the system, and thereby introducing couplings that will allow energy to be exchanged between the bulk and the edge. Single Component Model: Ground state ------------------------------------ We begin first with a simplified model meant to represent only the lowest energy degrees of freedom of the system, which captures both the variation of the spins at the edge and the change in the gapless mode structure as the system passes through the CAF-FM transition, but is simple enough to allow analytic progress to be made. By developing this model we will be able to gain insight into how parameters of our effective model should behave. Towards this end we introduce the energy functional $${E}[\hat n] = \int_{x>0} d^2r\left\{-E_zn_z + \tilde g n_z^2 + \frac{\rho_0}{2}\sum_{\alpha=x,y,z}|\vec\nabla n_{\alpha}|^2 \right\}, \label{energy_func}$$ where $n({\bf r})$ is a unit vector field ($\sum_{\alpha}n_{\alpha}({\bf r})^2=1$) on the two-dimensional domain ${\bf r}=(x>0,y)$. Qualitatively, one could identify this degree of freedom with the spin-$1$ field obtained from the symmetric combination ${\bf S}={\bf S}_a+{\bf S}_b$ of the spin-$1/2$ fields described in Section \[sec:2Daction\]. Eq. (\[energy\_func\]) is essentially a low-energy approximation of the model given by Eqs. (\[EvsSaSb\]) and (\[Eloc\]) \[with $\tilde g \sim |g_{xy}|$\], where $U(x)$ is replaced by a sharp boundary condition at $x=0$, and ${\bf S}_a$, ${\bf S}_b$ are assumed to obey the bulk condition $\psi_a({\bf r})=\psi_b({\bf r})$, $\phi_a({\bf r})=\phi_b({\bf r})$ for all ${\bf r}$ except very close to the boundary. This model supports two phases in its bulk, a ferromagnet ($n_z=1$) for $E_z>E_z^c \equiv 2\tilde g$, and a canted state ($n_z=E_z/2\tilde g$) for $E_z < E_z^c$. 
To mimic the behavior of the $\nu=0$ system edge, we impose the boundary condition $n_z(x=0)=-1$, which forces a domain wall (DW) at the edge into the groundstate configuration. In the FM state, the DW configuration may be found analytically with standard techniques [@rajaraman_book]. Assuming a classical groundstate in which the unit vector rotates through the $\hat x$ direction in going from the bulk to edge, one writes $n_z(x) \equiv \cos\theta(x)$, $n_x(x) \equiv \sin\theta(x)$, and the configuration $\theta(x)$ that minimizes the energy functional satisfies $$\label{DW_eq_motion} \rho_0 \frac{d^2\theta}{dx^2} = E_z \sin\theta - \tilde g \sin 2\theta.$$ This is equivalent to the equation of motion for a particle at “position” $\theta$ accelerating with respect to “time” $x$ through a potential $$V[\theta] = E_z \cos\theta - {1 \over 2} \tilde g \cos 2\theta.$$ Assuming the system is in the FM state in the bulk, we must have $\theta \rightarrow 0$ as $x \rightarrow \infty$, which fixes the total energy of the fictitious particle at $E_z - \tilde g/2$. Using energy conservation one then finds that the particle “velocity” obeys the equation $$\label{DW_velocity} \frac{d\theta}{dx} = -\frac{\left[E_z(1-\cos\theta) - {1 \over 2} \tilde g (1-\cos 2\theta) \right]^{1/2}}{\sqrt{\rho_0/2}}.$$ Equation (\[DW\_velocity\]) may be recast in an integral form $$\frac{x}{\sqrt{\rho_0/2}} = -2 \int_{\pi/2}^{\theta(x)/2} \frac{d\psi}{\sin\psi \left[ 2E_z -4\tilde g \cos^2 \psi \right]^{1/2}},$$ for which the integral may be computed explicitly. Defining the length scale $$\ell_{DW} \equiv \sqrt{\frac{\rho_0/2}{2E_z-4\tilde g}}=\sqrt{\frac{\rho_0}{4(E_z-E_z^c)}},$$ which is clearly the analog of $\xi$ \[Eq. 
(\[xi\_def\])\], this leads to the equation $$z \equiv e^{-x/\ell_{DW}} = \left[ \frac{1-\cos\theta/2}{1+\cos\theta/2} \right] \frac{E_z + 2\tilde g\cos\theta/2+ \sqrt{E_z-2\tilde g} \sqrt{E_z - 2\tilde g \cos^2\theta/2}} {E_z - 2\tilde g\cos\theta/2+ \sqrt{E_z-2\tilde g} \sqrt{E_z - 2\tilde g \cos^2\theta/2}}\,. \label{zsq}$$ Finally, Eq. (\[zsq\]) may be inverted, which (using the boundary conditions on $\theta$) yields the result $\cos\theta(x) = 2y^2(x) -1$, with $$\begin{aligned} \label{ysq} y^2(x)= \frac{1}{2\left[2\tilde g (1-z)^2+r^2(1+z)^2\right]} \Biggl\lbrace r^2(z+1)^2+(E_z+2\tilde g)(z-1)^2 \quad\quad\quad\quad\quad\quad\quad\quad \\ -\left[\bigl(r^2(z+1)^2+(E_z+2\tilde g)(z-1)^2\bigr)^2 -4E_z(z-1)^2\bigl(2\tilde g (1-z)^2 + r^2(1+z)^2 \bigr) \right]^{1/2} \Biggr\rbrace, \nonumber\end{aligned}$$ where the quantity $r \equiv \sqrt{E_z - E_z^c}$ measures how close the system is to the transition between the FM and canted phases. The DW in this model is essentially analogous to the edge DW found for the $\nu=0$ FM state discussed in Section \[sec:2Daction\]. At distances from the edge larger than $\ell_{DW}$, $\theta(x)$ becomes very small, and approaches zero (the bulk value for the FM state) exponentially, $\theta(x) \sim e^{-x/2\ell_{DW}}$. One can also solve for the DW shape exactly at the critical value $E_z=E_z^c$, either using the method above or by taking the $r \rightarrow 0$ limit of Eq. (\[ysq\]). The result is $$\label{r_eq_0_DW} \theta(x) \rightarrow \theta_c(x) \equiv 2 {\rm arccot}\left[\sqrt{\frac{2\tilde g}{\rho_0}}\,x\right].$$ Single Component Model: Fluctuations ------------------------------------ We next consider the normal modes around this classical energy minimum. A simple way to proceed is to define a unit vector $\hat n^{\prime}(x)$ such that $n^{\prime}_z(x)=1$ in the classical groundstate.
This is accomplished by taking $n_y^{\prime}=n_y$, and $$\begin{aligned} \left( \begin{array}{c} n_x(x) \\ n_z(x) \end{array} \right) = \left( \begin{array}{c c} \cos\theta_{DW}(x) & \sin\theta_{DW}(x) \\ -\sin\theta_{DW}(x) & \cos\theta_{DW}(x) \end{array} \right) \left( \begin{array}{c} n_x^{\prime}(x) \\ n_z^{\prime}(x) \end{array} \right), \nonumber\end{aligned}$$ where $\theta_{DW}(x)$ is the DW configuration which minimizes the energy functional. Substituting this into Eq. (\[energy\_func\]), and writing $n_z^{\prime} = \sqrt{1-n_x^{\prime 2}-n_y^{\prime 2}} \approx 1-(n_x^{\prime 2}+n_y^{\prime 2})/2$, after some algebra one arrives at an energy functional which may be written to quadratic order in the form $${H}[\hat n^{\prime}] \approx \sum_{\mu=x,y}\int_{x>0} d^2r \left\{ n^{\prime}_{\mu}({\bf r}) \left[ -\frac{\rho_0}{2} \nabla^2 + U_{\mu}(x) \right] n^{\prime}_{\mu}({\bf r}) \right\} , \label{E_quad}$$ with “potentials” $$\begin{aligned} \label{eq:UxUy} U_x(x) &=& {1 \over 2}E_z \cos\theta_{DW}(x)-\tilde g \cos 2\theta_{DW}(x), \\ U_y(x) &=& {3 \over 2}E_z \cos\theta_{DW}(x) -2\tilde g \cos^2\theta_{DW}(x) - E_z + \tilde g. \nonumber\end{aligned}$$ To obtain the normal modes from this, it is convenient to impose angular momentum commutation relations on the components of the unit vector, $[n_x^{\prime}({\bf r}_1),n_y^{\prime}({\bf r}_2)] = 2i\delta({\bf r}_1 - {\bf r}_2) n^{\prime}_z({\bf r}_1) \approx 2i\delta({\bf r}_1 - {\bf r}_2)$. The last step, in which $n_z^{\prime}$ is replaced by its groundstate value of 1, is the spin-wave approximation [@SpinBooks]. The classical groundstate we have chosen in assuming the DW rotates through the $n_x - n_z$ plane is a broken symmetry state of Eq. (\[energy\_func\]); globally rotating the unit vector configuration around the $n_z$ axis yields a different configuration with exactly the same energy. Because of this, the quadratic Hamiltonian \[Eq. (\[E\_quad\])\] must host a zero mode [@rajaraman_book]. 
This can be directly identified with an eigenfunction of the operator $-\frac{\rho_0}{2} \nabla^2 + U_y(x)$ with zero eigenvalue, $S_0(x)$, where $S_0(x) \equiv \sin\theta_{DW}(x)$. Note that this zero mode is confined to the region of the domain wall, and is independent of the real-space coordinate $y$. Because the Hamiltonian and groundstate are uniform in the $\hat y$ direction, the normal modes have well-defined momentum $q_y$. One may exploit this by writing $n_x^{\prime}({\bf r})= \int dq_y m_x(x,q_y)e^{iq_yy}/\sqrt{2\pi}$ and $n_y^{\prime}({\bf r})= \int dq_y m_y(x,q_y)e^{-iq_yy}/\sqrt{2\pi}$. The normal mode Hamiltonian may now be written as $$\begin{aligned} H &=&\int d{q_y}\int_{0}^\infty dx \Bigl\{m_x(x,-q_y)\left[h_x+{1 \over 2}\rho_0q_y^2\right]m_x(x,q_y) \nonumber \\ &+& m_y(x,-q_y)\left[h_y+{1 \over 2}\rho_0q_y^2\right]m_y(x,q_y)\Bigr\}, \label{sw_ham}\end{aligned}$$ with operators $h_{\mu}=-{1 \over 2}\rho_0\partial_x^2 + U_{\mu}(x)$. Note that we expect the effectively one-dimensional operators $m$ to obey $m_{x,y}(x,q_y)=m_{x,y}(x,-q_y)^{\dag}$. The normal modes of the system are determined by the eigenvalues and eigenfunctions of the operators $h_{x,y}$, which are difficult to determine analytically. The potentials associated with them, $U_{x,y}(x)$ \[Eq. (\[eq:UxUy\])\], both reach the constant value of $E_z/2 -\tilde g$ at large positive $x$. Thus these operators will have continuous spectra of eigenvalues above this energy scale, which becomes the frequency edge for spin-waves in the bulk of the system. Bulk Hamiltonian ---------------- If we wish to focus on the behavior deep in the bulk, one can simply set $U_{x,y}(x) \rightarrow E_z/2 -\tilde g$ and extend the domain of $x$ to $-\infty < x < \infty$.
The resulting bulk Hamiltonian can be written $$\begin{aligned} H_{b}=\frac{1}{2}\sum_{\alpha=x,y}\int d^2r \Bigl[ S_{\alpha}({\bf r}) \left(E_z -2\tilde g -\rho_0 \nabla^2 \right)S_{\alpha}({\bf r}) \bigr], \nonumber \\ \label{bulk_ham_sw}\end{aligned}$$ where we have made the identification $ m_\alpha({\bf r}) \equiv S_\alpha({\bf r})$, the components of the unit vector in real space. $H_{b}$ supports a gapped spin-wave mode of frequency $\omega(q) = E_z -2\tilde g +\rho_0 q^2$ which becomes gapless at the phase transition, i.e. when $E_z$ acquires the critical value $E_z^c=2\tilde g$; this behavior is highly analogous to what is found for the low energy modes in the full system near the transition in time-dependent Hartree-Fock calculations [@tdhfa]. Alternatively, one may rewrite Eq. (\[bulk\_ham\_sw\]) in terms of bosonic raising and lowering operators, $a({\bf r})=(S_x({\bf r})+iS_y({\bf r}))/\sqrt{2}$, $a^{\dag}({\bf r})=(S_x({\bf r})-iS_y({\bf r}))/\sqrt{2}$, which upon taking the ferromagnetic groundstate average for the $S_z$ component of the spin [@SpinBooks] yields the needed commutation relations $[a({\bf r}),a^{\dag}({\bf r}')]=\delta({\bf r}-{\bf r}')$, so that $$H_b= \int d^2r \left[ -{1 \over 2}\rho \left( a^{\dag}({\bf r}) \nabla^2 a({\bf r}) + a({\bf r}) \nabla^2 a^{\dag}({\bf r}) \right) +\Delta a^{\dag}({\bf r}) a({\bf r}) \right]. \label{bulk_ham_bos}$$ In writing Eq. (\[bulk\_ham\_bos\]) we have dropped the subscript $0$ in $\rho_0$, and $$\Delta=E_z-2\tilde g=E_z-E_z^c\; . \label{DeltaDef}$$ Note that we expect $H_b$ more generally to be the long-wavelength form of the Hamiltonian governing the low energy modes deep in the bulk of the FM state of the $\nu=0$ quantum Hall state, with $\Delta \rightarrow 0$ as the transition to the CAF state is approached. Edge Hamiltonian {#sec:Edge_Hamiltonian} ---------------- We next turn to a discussion of the lowest energy mode of the FM phase, which as discussed above is a gapless edge state mode. 
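The edge analysis below is built on the DW profile $\theta_{DW}(x)$, which is straightforward to generate numerically by integrating the first-order equation Eq. (\[DW\_velocity\]) from the boundary value $\theta(0)=\pi$. A minimal sketch (the parameters $\rho_0=\tilde g=1$, $E_z=3$ are an arbitrary choice on the FM side, giving $\Delta=1$) also confirms the exponential tail $\theta(x)\sim e^{-x/2\ell_{DW}}$ with decay rate $1/(2\ell_{DW})=\sqrt{(E_z-E_z^c)/\rho_0}$:

```python
import math

rho0, gt, Ez = 1.0, 1.0, 3.0     # arbitrary FM-side parameters: Delta = Ez - 2*gt = 1

def dtheta(th):
    # right-hand side of the first-order DW equation, Eq. (DW_velocity)
    arg = (2.0/rho0)*(Ez*(1.0 - math.cos(th)) - 0.5*gt*(1.0 - math.cos(2.0*th)))
    return -math.sqrt(max(arg, 0.0))   # clip tiny negative round-off

h, n = 1e-4, 80000                # grid x_i = i*h, out to x = 8
th, ths = math.pi, [math.pi]      # boundary condition theta(0) = pi
for _ in range(n):
    # classical RK4 step
    k1 = dtheta(th)
    k2 = dtheta(th + 0.5*h*k1)
    k3 = dtheta(th + 0.5*h*k2)
    k4 = dtheta(th + h*k3)
    th += (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    ths.append(th)

# tail decay rate between x = 5 and x = 6 should match
# 1/(2 l_DW) = sqrt((Ez - 2*gt)/rho0) = 1 for these parameters
rate = -math.log(ths[60000]/ths[50000])
assert abs(rate - math.sqrt((Ez - 2*gt)/rho0)) < 0.02
```

Linearizing Eq. (\[DW\_eq\_motion\]) about $\theta=0$ gives $\rho_0\theta''=(E_z-2\tilde g)\theta$, which is the origin of the extracted decay rate.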
Near the edge, $U_y$ of Eq. (\[eq:UxUy\]) has the form of a potential well, rising monotonically with increasing $x$ towards its asymptotic value. It is also interesting to note that $U_x = U_y + \Delta U$, with $\Delta U = E_z[1-\cos\theta_{DW}(x)] \ge 0$, so that $U_x(x) \ge U_y(x)$ for any $x$. Presuming our domain wall structure is stable, there cannot be any negative energy states associated with either $h_x$ or $h_y$ in Eq. (\[sw\_ham\]). We have seen that for $q_y=0$, $h_y$ supports a zero energy state; this is unlikely to be the case for $h_x$ because the effective potential associated with it is larger than that of $h_y$. It is possible that there are bound states in the spectral interval $[0,E_z/2 -\tilde g]$, but as the critical value of $E_z$ is approached this becomes a very small interval and so is unlikely to host any bound states. Thus we assume that there is only one bound state in the spectra of $h_x$ and $h_y$ for $q_y=0$, associated with $h_y$, at zero eigenvalue. With increasing $q_y$ there will be a single linearly dispersing mode, which we associate with the gapless edge excitation of the system. Then the lowest energy modes of the system in the FM phase are the single gapless edge mode and the bulk spin wave modes discussed in subsection C. We note that the absence of other low-energy modes is in apparent agreement with time-dependent Hartree-Fock results for the full $\nu=0$ spectrum [@tdhfa]. In order to write down an effective Hamiltonian for the edge mode it is useful to consider the equations of motion for $m_x(x)$ and $m_y(x)$. Using $\partial_t {\cal O} = i[H, {\cal O}]$, we find $$\begin{aligned} \partial_t m_x &=&4 (h_y+{1 \over 2}\rho_0q_y^2)m_y \nonumber \\ \partial_t m_y &=&-4 (h_x+{1 \over 2}\rho_0q_y^2)m_x. \nonumber\end{aligned}$$ The two equations can be combined to give, after Fourier transforming with respect to time, $$\omega^2 m_y = 16(h_x+{1 \over 2}\rho_0q_y^2)(h_y+{1 \over 2}\rho_0q_y^2)m_y.
\label{2nd_order}$$ For $q_y=0$ this equation is solved by $m_y=S_0(x)$ and $\omega=0$, and we are interested in the solution that smoothly joins to this in the limit $q_y \rightarrow 0$. To quadratic order in $q_y$, this may be written in real time as $m_y(q_y,t) = [S_0(x)+\delta S(x,q_y)]\phi(q_y,t)$, with $\delta S$ of order $q_y^2$. Using the fact that $h_yS_0(x)=0$, the equation of motion to order $q_y^2$ becomes $$-\partial_t^2 S_0(x) \phi(q_y,t) = 16[h_xh_y \delta S(x,q_y) +{1 \over 2}\rho_0q_y^2 h_xS_0(x)]\phi(q_y,t). \label{phi_eq_1}$$ Recalling our assumption that $h_x$ does not support a zero mode, it will have a well-defined inverse operator $h_x^{-1}$ which we can apply to Eq. (\[phi\_eq\_1\]). Finally, multiplying the whole equation by $S_0(x)$ on the left, integrating with respect to $x$, and using the fact that $\langle S_0 | h_y | \delta S \rangle \equiv \int_0^{\infty} dx S_0(x)h_y \delta S(x) = 0$ for any $\delta S$, we obtain the equation of motion $$\left[-8\rho_0 q_y^2-\langle S_0| h_x^{-1} | S_0 \rangle\partial_t^2 \right]\phi(q_y,t) = 0. \label{phi_eq}$$ Thus we find a linearly dispersing normal mode $\omega(q_y) = u_0 q_y$, with velocity $u_0=\sqrt{8\rho_0/\langle S_0| h_x^{-1} | S_0 \rangle}$. The gapless edge mode obtained above is the only mode in the FM phase that approaches zero energy. The variable $\phi$ represents an amplitude to rotate the spins of the DW into the $\hat{y}$ axis from the $\hat{x}$ axis through which we assumed the spins spatially rotate in the classical DW groundstate. Qualitatively, one may associate it with an azimuthal angle of the spins at the center of the DW, and it plays a role highly analogous to the $\phi_+$ degree of freedom in Section \[sec:2Daction\]. 
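The recipe for the edge-mode velocity, $u_0=\sqrt{8\rho_0/\langle S_0|h_x^{-1}|S_0\rangle}$, is straightforward to evaluate numerically. The sketch below (Python) discretizes $h_x$ by finite differences; the domain-wall profile used for $\theta_{DW}$ is a hypothetical stand-in (the actual profile solves the domain-wall saddle-point equations referenced in the text), and all parameter values are illustrative:

```python
import numpy as np

# Illustrative parameters; theta below is a HYPOTHETICAL stand-in for the
# actual domain-wall profile theta_DW(x). FM phase requires Ez > 2*g_tilde.
rho0, Ez, g_tilde, ell = 1.0, 1.2, 0.5, 1.0
N, L = 1000, 40.0
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

theta = 2.0 * np.arctan(np.exp(-x / ell))   # decays to 0 in the bulk (x -> infinity)
S0 = np.sin(theta)
S0 /= np.sqrt(np.sum(S0**2) * dx)           # normalized zero-mode ansatz

# h_x = -(rho0/2) d^2/dx^2 + U_x(x), with U_x = (Ez/2) cos(theta) - g~ cos(2 theta)
Ux = 0.5 * Ez * np.cos(theta) - g_tilde * np.cos(2.0 * theta)
main = rho0 / dx**2 + Ux
off = -0.5 * rho0 / dx**2 * np.ones(N - 1)
hx = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Matrix element <S0| h_x^{-1} |S0> and velocity u0 = sqrt(8 rho0 / <S0|hx^{-1}|S0>)
w = np.linalg.solve(hx, S0)
me = np.sum(S0 * w) * dx
u0 = np.sqrt(8.0 * rho0 / me)
assert me > 0 and np.isfinite(u0)
```

With this (positive) model potential $h_x$ has no zero mode, so the inverse is well defined; the same construction with the true $\theta_{DW}$ would give the physical $u_0$.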
Quantizing this degree of freedom leads to a standard Luttinger liquid Hamiltonian, which, after Fourier transforming into real space, may be written in the form $$H_e={u_{_{NM}} \over {2\pi}} \int dy \left\{ K_{_{NM}} \left(\pi \Pi(y) \right)^2 +{1 \over K_{_{NM}}} \left( \partial_y\phi(y) \right)^2 \right\}, \label{eq:LL_NM}$$ with $[\Pi(y),\phi(y')]=-i \delta(y-y')$, and $u_{_{NM}}=u_0$. Note that because $\Pi$ and $\phi$ are conjugate, the former can be identified with deviations of spins near the center of the DW into the $S_z$ direction, as expected from the general considerations of Section \[sec:2Daction\]. Because the energy cost for spatial gradients in $\phi$ descends directly from the two-dimensional spin stiffness $\rho_0$, we expect $u_{_{NM}}/\pi K_{_{NM}} \sim \rho_0$, which remains finite and non-vanishing even as the transition point is approached (i.e., when the gap Eq. (\[DeltaDef\]) obeys $\Delta \rightarrow 0$). This implies that the Luttinger parameter behaves as $$K^{-1}_{_{NM}} \sim \frac{\pi}{2} \left( \rho_0 \langle S_0 | h_x^{-1} | S_0 \rangle \right)^{1/2}. \label{K_tot}$$ Two comments are in order. First, as explained in Appendix \[sec:uK\_critical\], $\langle S_0 | h_x^{-1} | S_0 \rangle \sim \Delta^{-1/2}$, which is divergent as $\Delta \rightarrow 0$, so that the Luttinger parameter $K$ vanishes in this limit. This means that the edge mode becomes extremely sensitive to perturbations at the edge (in the renormalization group sense), so that the edge Luttinger liquid cannot remain stable as the bulk transition from a FM to a CAF phase is approached. Secondly, the behavior of the coefficients of the edge theory as the system approaches the transition point is chosen to match what is found in the normal mode theory.
Going beyond this to include coupling between the bulk and edge modes is most naturally accomplished by writing the coupling in a way that includes quadratic contributions, so that this edge-bulk coupling leads to significant contributions to the edge theory. Rather than deviate from the development of our effective model, we defer a fuller discussion of this to Appendix \[sec:uK\_critical\]. At this point, we introduce our model of the edge-bulk coupling. Bulk-Edge Coupling ------------------ As described in Sec. \[sec:intro\], the gapless edge mode of this system is in fact a helical, charge-carrying mode. Spin waves described by the effective one-dimensional theory above should be understood as carrying current in the positive or negative direction, with amplitude proportional to the deviation of the expectation value of $S_z$ in the excited state from its groundstate value. As in other topological systems [@TIreview], dissipation at zero temperature in this edge system is then suppressed because backscattering requires a spin-flip, which cannot be accomplished by static disorder [@Fertig2006; @SFP]. At finite temperature, however, spin waves will always be present in the bulk, so that the edge system can exchange angular momentum with it. We thus introduce a phenomenological coupling which captures this process and respects conservation of angular momentum, in the form $$H_{int} = g\int dy \left\{ a^{\dag}(0,y)e^{i\phi(y)} + a(0,y)e^{-i\phi(y)}\right\}. \label{H_int}$$ Recalling that the bulk bosonic operators are actually spin raising and lowering operators ($a=(S_x+iS_y)/\sqrt{2}$, $a^{\dag}=(S_x-iS_y)/\sqrt{2}$), one sees that the two terms in $H_{int}$ respectively flip a spin down and up in the degrees of freedom associated with $H_b$, at $x=0$, which is treated as the location of the DW. Compensating these spin flip operators are the operators $e^{\pm i\phi(y)}$, which represent the opposing spin flips in the edge system $H_{e}$.
This is easily understood when one recalls that the $\Pi(y)$ operator represents the deviation of $S_z$ from its groundstate configuration due to excitation of edge modes, and one may verify that $e^{\pm i\phi(y)}$ are raising/lowering operators with respect to the $\Pi$ operator [@Giamarchi]. The two terms in $H_{int}$ thus each conserve $S_z$ in the system as a whole ($H_b+H_e$). Finally, we note that our full effective model, $H_b+H_e+H_{int}$, can be expanded around a classical groundstate configuration to produce the normal modes of the system. To be consistent, modes deep in the bulk and at the edge should behave as $\Delta \rightarrow 0^+$ in the same way as what we found for the model introduced at the beginning of this section. This analysis is discussed in more detail in App. \[sec:uK\_critical\]. It leads to the conclusion that the effective “bare” Luttinger parameter $K$ and spin wave velocity $u$ in $H_e$ scale with $\Delta$ in the same way as those in the normal mode theory, and the phenomenological constant $g$ vanishes with $\Delta$. Specifically one finds $$\begin{aligned} u &\sim& \Delta^{1/4}, \nonumber \\ K &\sim& \Delta^{1/4}, \nonumber \\ g &\sim& \Delta^{3/4}. \nonumber \\\end{aligned}$$ With this scaling one finds among the normal modes for the fully coupled bulk-edge system a gapless spin-wave mode, at the edge, with velocity scaling as $\Delta^{1/4}$, as found for the simple model developed at the beginning of this section. With this phenomenological model, we are now in a position to understand how the coupling between the edge and bulk can impact transport in the ferromagnetic state. Conductance {#sec:G} =========== We now turn to the calculation of electric conductance, and investigate its dependence on temperature ($T$) and the Zeeman energy $E_z$. The results can be compared to the two-terminal conductance data of Ref. , and to potentially more systematic future studies at low $T$. 
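The scaling relations above, together with the activated thermal population of the gapped bulk magnons, can be summarized in a short numerical sketch (all prefactors set to unity, purely for illustration):

```python
import numpy as np

# Scaling ansatz near the transition (prefactors set to 1; illustrative only):
# u ~ Delta^{1/4}, K ~ Delta^{1/4}, g ~ Delta^{3/4}.
Delta = np.array([1e-2, 1e-4, 1e-6])
u, K, g = Delta**0.25, Delta**0.25, Delta**0.75

# The stiffness u/K stays finite while u, K, and the edge-bulk coupling g vanish.
assert np.allclose(u / K, 1.0)
assert np.all(np.diff(g) < 0)

# Bulk magnons are gapped: their thermal population at q = 0 is activated,
# n_B = 1/(exp(Delta/T) - 1) ~ exp(-Delta/T) for T << Delta.
T, D = 0.05, 0.5
n_B = 1.0 / np.expm1(D / T)
assert np.isclose(n_B, np.exp(-D / T), rtol=1e-3)
```

The last line anticipates the transport result below: spin exchange with the bulk is controlled by the thermally activated magnon population.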
We note that in both the CAF and FM phases, the lowest energy charged excitations are edge modes, and these are expected to dominate the d.c. electric transport. However, in the CAF the edge modes are still gapped, and the conductance at finite $T$ is therefore expected to exhibit an activated behavior of the form $$G(E_z<E_z^c)\sim e^{-\Delta_c/T}\;, \label{eq:GCAF}$$ where $\Delta_c$ has been shown [@MSF2014] to vanish when approaching the transition as $\Delta_c\sim (E_z^c-E_z)\log(E_z^c-E_z)$. We therefore focus on the behavior in the FM phase, where the edge mode is gapless and naively one expects perfect conduction. Interestingly, as we show below, in this phase the [*resistivity*]{} at finite $T$ exhibits a similar activated form, reflecting a “duality relation” between the two phases. Our starting point is the effective Hamiltonian derived in the previous section: $$\begin{aligned} \label{eq:Heff} H_{eff} & =H_{e}+H_{b}+H_{int}\\ H_{e} & =\frac{u}{2\pi}\int\mathrm{d}y\left\{ K\left(\pi\Pi\right)^{2}+\frac{1}{K}\left(\partial_{y}\phi\right)^{2}\right\},\nonumber \\ H_{b} & =\int\mathrm{d}^2r\left\{- \frac{1}{2}\rho\left(a^{\dagger}\nabla^{2}a+a\nabla^{2}a^{\dagger}\right)+\Delta a^{\dagger}a\right\},\nonumber \\ H_{int} & =g\int\mathrm{d}y\left\{ a^{\dagger}\left(0,y\right)e^{i\phi\left(y\right)}+a\left(0,y\right)e^{-i\phi\left(y\right)}\right\} , \nonumber\end{aligned}$$ which describes a helical Luttinger liquid coupled to a bath of 2D massive bosons along the line $x=0$. For simplicity, we assume here the 2D bulk to be an infinite plane rather than the semi-infinite plane $x>0$ considered in App. A: a straightforward calculation shows that the effect of bulk-edge coupling in the two cases is the same for an appropriate definition of the coupling constant $g$.
The local bosonic fields $a({\bf r})$, $a^{\dagger}({\bf r})$ correspond, in the spin-wave approximation, to the bulk spin operators $S^-({\bf r})$, $S^+({\bf r})$, respectively; the canonically conjugate operators $\phi(y)$, $\Pi(y)$ encode, respectively, the planar angle and spin density $S^z_e(y)$ on the edge. We recall that the last term, representing the most relevant coupling between edge and bulk modes, can be traced back to a spin-flip term of the form $(S^+_bS^-_e + h.c.)$. To the Hamiltonian describing the clean system Eq. (\[eq:Heff\]), we next add a term which accounts for the coupling to a random potential associated with static impurities, $$H_{dis}=-\int\mathrm{d}y\,\mu\left(y\right)\rho_e\left(y\right)=\frac{1}{\pi}\int\mathrm{d}y\,\mu\left(y\right)\partial_{y}\phi\;, \label{eq:Hdis}$$ where in the last step we have used the expression for the edge density operator in terms of the bosonic field $\phi$. Note that the helicity of the edge mode forbids standard backscattering terms \[e.g. $\cos(2\phi)$\] which would normally dominate the relaxation of charge current on the edge $j_e$ by direct coupling of left and right moving components. In the absence of coupling to the bulk via the term $H_{int}$ in Eq. (\[eq:Heff\]), the edge mode thus obeys conservation of the total spin operator $\mathcal{S}^z_e=\int \mathrm{d}y\,S^z_e(y)$, which is equivalent to the d.c. component of the charge current, $$J_e=\int\mathrm{d}y\,j_e\left(y\right)=Ku\int\mathrm{d}y\,\Pi\left(y\right)\;. \label{eq:J_edef}$$ The forward scattering term Eq. (\[eq:Hdis\]) can be absorbed into a redefinition of $\phi$ by the transformation [@GS88; @Giamarchi] $\phi\left(y\right) \rightarrow\phi\left(y\right)+(K/u)\int_{0}^{y}\mathrm{d}y'\mu\left(y'\right)$ leading to a random phase shift of the operators appearing in $H_{int}$: $$\begin{aligned} e^{i\phi(y)} & \rightarrow e^{i\phi(y)}\zeta\left(y\right)\; , \nonumber \\ \zeta\left(y\right) & \equiv e^{i(K/u)\int_{0}^{y}\mathrm{d}y'\mu\left(y'\right)}\; .
\label{eq:zeta_def}\end{aligned}$$ For a generic disorder potential, the random variable $\zeta\left(y\right)$ can be assumed to satisfy $$\langle\zeta\left(y\right)\rangle_{dis}=0\;,\qquad\langle\zeta\left(y\right)\zeta^{\dagger}\left(y'\right)\rangle_{dis}=D\,\delta\left(y-y'\right)\;, \label{eq:disorder}$$ where $\langle\dots\rangle_{dis}$ denotes an average over disorder. The two-terminal conductance $G$ is next evaluated under the assumption that due to the almost conservation of $\mathcal{S}^z_e$ (and hence $J_e$) on each of the two edges, the intrinsic electric resistivity is small; i.e., in units of $e^2/h$, $$G=\frac{2}{R_{0}+\delta R}\;, \label{eq:G2R}$$ where $R_0\approx 1$ is the contact resistance arising from coupling of the leads to a single 1D channel, and $\delta R\ll 1$. Deviations of $R_0$ from the ideal value $R_0=1$ due to extrinsic processes (e.g., spin-relaxation in the contacts) reduce $G$ from the perfect $G=2$ value but may be assumed to have a negligible $T$-dependence. The intrinsic contribution $\delta R=L/\sigma$ (where $L$ is the length of the sample in the edge direction and $\sigma$ is the d.c. conductivity) is treated perturbatively in the rate of scattering. To this end, we employ a hydrodynamic approximation [@forster] of the Kubo formula for $\sigma$, $$\sigma=\lim_{\omega\rightarrow0}\int_{0}^{\infty}\mathrm{d}t\,e^{i\omega t}\left(J_{e}\left(t\right)|J_{e}\left(0\right)\right) \label{eq:Kubo}$$ (the $ee$ component of the conductivity matrix $\hat{\sigma}$ in a basis of current operators $\{J_p\}$), whereby it can be recast in terms of the inverse of a memory matrix $\hat{M}$, encoding relaxation rates: $$\hat{\sigma}=\hat{\chi}\left[\hat{M}\right]^{-1}\hat{\chi}\;. \label{eq:sigma2M}$$ Here $\hat{\chi}$ is the matrix of static susceptibilities $$\chi_{pq}=\frac{1}{L}\int_{0}^{\beta}\mathrm{d}\lambda\left\langle J_{p}\left(\lambda\right)J_{q}\left(0\right)\right\rangle\equiv\left(J_{p}|J_{q}\right) \label{eq:chi_def}$$ (describing an “overlap" of the operators $J_p$, $J_q$), and $\hat{M}$ is determined by correlation functions of the force operators $$F_{p}=\dot{J}_{p}=i\left[H,J_{p}\right]\;; \label{eq:force}$$ generally, the explicit form of $\hat{M}$ is quite complicated [@forster], however in the case where $(F_p|J_p)=0$ it greatly simplifies and $$\begin{aligned} M_{pq}&=&\lim_{\omega\rightarrow0}\frac{C_{pq}\left(\omega\right)-C_{pq}\left(0\right)}{i\omega}\;,\nonumber\\ C_{pq}\left(\omega\right)&=&\int_{0}^{\infty}\mathrm{d}t\,e^{i\omega t}\left(F_{p}\left(t\right)|F_{q}\left(0\right)\right)\;. \label{eq:memory}\end{aligned}$$ In Eqs.
(\[eq:Kubo\]) through (\[eq:memory\]), $\langle ...\rangle$ denotes thermal expectation value at temperature $T$. It is apparent from Eq. (\[eq:sigma2M\]) that the matrix elements of $\hat{\sigma}$ are dominated by slow modes, for which $F_p$ and hence the matrix element $M_{pp}$ is small. In particular, the presence of a conserved operator $J_c$ which commutes with the Hamiltonian (i.e. $F_c=0$) leads to the divergence of any physical conductivity $\sigma_{pp}$ (and hence vanishing of the resistivity) provided the cross susceptibility $\chi_{pc}\not=0$; in such a case, the current $J_p$ is protected by the conservation law and cannot decay [@RA]. When the conservation law is only approximate, one obtains a finite relaxation rate dominated by the small memory matrix element $M_{cc}$. In our case, the approximate conservation law protecting the charge current on each edge is $\mathcal{S}^z_e$, which is identical to $J_e$ up to a constant prefactor \[Eq. (\[eq:J\_edef\])\] [@SROG]. This justifies a diagonal version of Eq. (\[eq:sigma2M\]) and one obtains $$\delta R=\frac{L}{\sigma}=\frac{L\,M_{ee}}{\chi_{ee}^{2}}\;, \label{eq:deltaR2M}$$ where, for a Luttinger liquid, $\chi_{ee}$ is easily computed [@Giamarchi] to yield a constant $\chi_{ee}=2uK/\pi$. Employing Eq. (\[eq:memory\]) for $p=q=e$ (and a standard identity for the retarded correlation function) we get $$\delta R=-\frac{2}{\chi_{ee}^{2}}\int_{0}^{\infty}\mathrm{d}t\;t\;\Im m\left\{\left\langle F_{e}\left(t\right)F_{e}\left(0\right)\right\rangle\right\} \label{eq:deltaR2int}$$ where, substituting Eq. (\[eq:Heff\]) for the effective Hamiltonian, $$F_{e}=i\left[H_{eff},J_{e}\right]=i\left[H_{int},J_{e}\right]\;; \label{eq:Fe}$$ in the last step we have used $[H_e,J_e]=0$. The intrinsic resistivity is therefore dominated by processes whereby the edge spin is relaxed into the 2D bulk. We finally introduce the disorder potential by performing the phase shift Eq. (\[eq:zeta\_def\]), so that $H_{int}$ acquires the form $$H_{int}=g\int\mathrm{d}y\left\{\zeta\left(y\right)a^{\dagger}\left(0,y\right)e^{i\phi\left(y\right)}+\zeta^{\dagger}\left(y\right)a\left(0,y\right)e^{-i\phi\left(y\right)}\right\}\;. \label{Hint_dis}$$ Evaluating $\delta R$ from Eq. (\[eq:deltaR2int\]) to leading order in $H_{int}$ \[Eq.
(\[Hint\_dis\])\], we modify the definition of angular brackets $\langle ...\rangle$ to include the disorder averaging $\langle ...\rangle_{dis}$. We next employ Eqs. (\[eq:J\_edef\]), (\[eq:Fe\]) and (\[Hint\_dis\]) to get the correlation function $$\begin{aligned} \left\langle F_e(t)F_e(0)\right\rangle &=&\left(guK\right)^{2}\int\mathrm{d}y\,\mathrm{d}y'\left\langle\zeta\left(y\right)\zeta^{\dagger}\left(y'\right)\right\rangle_{dis}\left\langle e^{i\phi\left(y,t\right)}e^{-i\phi\left(y',0\right)}\right\rangle\left\{\left\langle a^{\dagger}(0,y,t)a(0,y',0)\right\rangle+\left\langle a(0,y,t)a^{\dagger}(0,y',0)\right\rangle\right\}\nonumber\\ &=&\left(guK\right)^{2}D\int\mathrm{d}y\left\langle e^{i\phi\left(y,t\right)}e^{-i\phi\left(y,0\right)}\right\rangle_{e}\left\{\left\langle a^{\dagger}(0,y,t)a(0,y,0)\right\rangle_{b}+\left\langle a(0,y,t)a^{\dagger}(0,y,0)\right\rangle_{b}\right\}\;,\label{eq:FeFe}\end{aligned}$$ where in the last step we have used Eq. (\[eq:disorder\]), and maintain the leading order in $g$ for which the thermal expectation value is evaluated with respect to $H_0=H_e+H_b$ (where the bulk and edge sectors are decoupled). Both sectors are described by free bosonic theories \[see Eq. (\[eq:Heff\])\]. The edge part of the correlation function is given by the standard result for a Luttinger liquid [@Giamarchi], $$\left\langle e^{i\phi\left(y,t\right)}e^{-i\phi\left(y,0\right)}\right\rangle_{e}=\lim_{\epsilon\rightarrow0}\frac{\left(\pi\alpha/\beta u\right)^{K/2}}{\sin^{K/2}\left(\pi\left(\epsilon+it\right)/\beta\right)}\;, \label{eq:LLcorr}$$ where $\alpha$ is a short-distance cutoff. For the bulk, we use the bosonic correlation functions in momentum space $$\begin{aligned} \left\langle a^{\dagger}_{{\bf k}}(t)a_{{\bf k'}}(0)\right\rangle_{b}&=&e^{i\omega_{{\bf k}}t}\,n_{_B}({\bf k})\,\delta_{{\bf k},{\bf k'}}\;,\nonumber\\ \omega_{{\bf k}}&=&\Delta+\rho|{\bf k}|^{2}\;, \label{eq:acorr_k}\end{aligned}$$ where $n_{_B}({\bf k})=1/(e^{\beta\omega_{{\bf k}}}-1)$ is the Bose function, and similarly $$\left\langle a_{{\bf k}}(t)a^{\dagger}_{{\bf k'}}(0)\right\rangle_{b}=e^{-i\omega_{{\bf k}}t}\left[1+n_{_B}({\bf k})\right]\delta_{{\bf k},{\bf k'}}\;.$$ The local correlation functions thus become $$\begin{aligned} \left\langle a^{\dagger}(0,y,t)a(0,y,0)\right\rangle_{b}&=&\frac{1}{\left(2\pi\right)^{2}}\int\mathrm{d}^{2}k\;e^{i\omega_{{\bf k}}t}\,n_{_B}({\bf k})\;,\nonumber\\ \left\langle a(0,y,t)a^{\dagger}(0,y,0)\right\rangle_{b}&=&\frac{1}{\left(2\pi\right)^{2}}\int\mathrm{d}^{2}k\;e^{-i\omega_{{\bf k}}t}\left[1+n_{_B}({\bf k})\right]\;. \label{eq:acorr_loc}\end{aligned}$$ Inserting Eqs. (\[eq:acorr\_loc\]) and (\[eq:LLcorr\]) into (\[eq:FeFe\]) we obtain for $\delta R$ \[Eq. (\[eq:deltaR2int\])\]: $$\begin{aligned} \delta R&\approx&-\mathcal{D}\int_{0}^{\infty}\mathrm{d}t\;t\;\Im m\left\{\mathcal{C}\left(t\right)\right\}\;,\nonumber\\ \mathcal{D}&\equiv&\frac{2\left(guK\right)^{2}DL}{\chi_{ee}^{2}}\left(\frac{\pi\alpha}{\beta u}\right)^{K/2}\;,\qquad \mathcal{C}\left(t\right)\equiv\lim_{\epsilon\rightarrow0}\frac{1}{\sin^{K/2}\left(\pi\left(\epsilon+it\right)/\beta\right)}\,\frac{1}{\left(2\pi\right)^{2}}\int\mathrm{d}^{2}k\left\{e^{i\omega_{{\bf k}}t}n_{_B}({\bf k})+e^{-i\omega_{{\bf k}}t}\left[1+n_{_B}({\bf k})\right]\right\}\;. \label{eq:deltaR2C}\end{aligned}$$
Note that $D\propto n_{imp}$ where $n_{imp}$ is the density of impurities per unit length; hence, the factor $DL$ encodes the number of impurities $N_{imp}$. Performing the integrals in Eq. (\[eq:deltaR2C\]) we obtain (see Appendix \[sec:deltaRdetails\] for details) $$\begin{aligned} \delta R&\propto&\mathcal{D}\,\beta^{1+\frac{K}{2}}\,f\left(\beta\Delta\right)\;,\nonumber\\ f\left(z\right)&\equiv&\int_{z}^{\infty}\mathrm{d}x\,\frac{e^{-x/2}}{x^{K/2}}\left|\Gamma\left(\frac{K}{4}+i\frac{x}{2\pi}\right)\right|^{2}\;. \label{eq:deltaR2B}\end{aligned}$$ Recalling the $T$-dependence of $\mathcal{D}$ \[Eq. (\[eq:deltaR2C\])\], this yields $\delta R$ as a function of $T$ for arbitrary values of the other parameters: $$\delta R\left(T\right)\propto T^{-1}f\left(\Delta/T\right)\;. \label{eq:deltaR2T_short}$$ In Fig. \[fig:Conductance\] we present $G$ vs. $T$ obtained directly from Eqs. (\[eq:G2R\]) and (\[eq:deltaR2B\]), for several values of $\Delta$ corresponding to a range of $E_z$ in the regime $E_z>E_z^c$. ![(Color online.) Conductance in units of $e^2/h$ as a function of $T$, for different values of $\Delta$ in units of Kelvin. Assuming that $E_z^c\sim 1$K, we take $R_0=1$, $K=\Delta^{1/4}$, $u=u_0\Delta^{1/4}$ and $g=g_0\Delta^{3/4}$ where $u_0$, $g_0$ are such that the overall $\Delta$-independent prefactor of $\delta R$ \[Eq. (\[eq:deltaR2B\])\] is 0.1. Inset: zoom on the low-$T$ regime $0.01$K$\leq T\leq 0.1$K. []{data-label="fig:Conductance"}](FM_Conductance.pdf) We now consider the low $T$ limit where $T\ll\Delta$, and use the asymptotic form of $f(z)$ at large argument to obtain the leading $T$-dependent contribution to the resistance (see App. \[sec:deltaRdetails\]): $$\begin{aligned} \delta R\left(T\right)&\approx&R_{int}\,e^{-\Delta/T}\;,\nonumber\\ R_{int}&\propto&\frac{\left(guK\right)^{2}DL}{\rho\,\chi_{ee}^{2}\,\Delta}\left(\frac{\pi\alpha}{u}\right)^{K/2}\;, \label{eq:deltaR_final}\end{aligned}$$ where we note that the prefactor of the exponential $R_{int}$ is $T$-independent. This simple activation of the resistance is remarkably reminiscent of the [*conductance*]{} in the CAF phase \[Eq. (\[eq:GCAF\])\], where here the activation energy $\Delta\propto (E_z-E_z^c)$ \[see Eq. (\[DeltaDef\])\] corresponds to the gap for spin-wave excitations in the bulk. Interestingly, the role it plays here is equivalent to a superconducting gap.
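At low $T$, then, the two-terminal conductance approaches its ideal value with an activated correction. A minimal sketch of this behavior (Python; $R_0$, $R_{int}$ and $\Delta$ are illustrative placeholder values, not fitted parameters):

```python
import numpy as np

# Low-temperature two-terminal conductance in the FM phase (units of e^2/h):
# G(T) = 2 / (R0 + R_int * exp(-Delta/T)); all parameter values illustrative.
R0, R_int = 1.0, 0.1

def G(T, Delta):
    return 2.0 / (R0 + R_int * np.exp(-Delta / T))

T = np.array([0.01, 0.05, 0.2, 1.0])
for Delta in (0.1, 0.4):
    g_of_T = G(T, Delta)
    # The intrinsic resistance is thermally activated:
    assert np.all(np.diff(g_of_T) < 0)           # G decreases with increasing T
    assert np.isclose(G(1e-4, Delta), 2.0 / R0)  # perfect conduction as T -> 0
```

This reproduces the qualitative trend of Fig. \[fig:Conductance\]: larger $\Delta$ (stronger Zeeman coupling) pushes the downturn of $G$ to higher temperature.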
The final expression for the low-$T$ two-terminal conductance in the FM phase is obtained by substituting Eq. (\[eq:deltaR\_final\]) into (\[eq:G2R\]), yielding $$G\left(E_{z}>E_{z}^{c}\right)\approx\frac{2}{R_{0}+R_{int}\,e^{-\Delta/T}}\;, \label{eq:GfinalFM}$$ where the $E_z$-dependence is dominated by the behavior of $\Delta$. Summary {#sec:summary} ======= In this work we have developed an effective model for the ferromagnetic $\nu=0$ quantized Hall state of graphene, and used it to analyze the transport behavior of the system at finite temperature. The model includes a bulk system supporting a gapped spin wave mode, an edge system supporting a charged gapless helical mode, and a coupling term allowing an exchange of spin between the two systems. In principle the parameters of the effective theory which couples the edge and bulk are free. However, we use several ways to constrain them, especially in the ferromagnetic phase near the transition. We develop a simple nonlinear theory of the edge in the FM phase and match the $E_z$-dependence of the spin-wave velocity of this model with the linear approximation to our effective theory. We further make the physical demand that the spin stiffness should neither diverge nor vanish at the transition. This completely constrains the $E_z$-dependence of all the free parameters of the effective theory. An analysis in terms of the memory matrix approach allows us to determine the temperature-dependence of edge transport in this system. In the presence of disorder, charged modes of the system can be backscattered, with the necessary angular momentum for such processes within a helical channel supplied by the bulk spin excitations. This leads to concrete predictions for a two-terminal resistance measurement of the system. Our analysis leaves open a number of interesting further questions. What is the effect of disorder on the bulk of the system?
In particular, is there a range of parameters for which gapless or nearly gapless spin excitations persist in the bulk, leading to dissipative behavior over a broad range of temperatures and/or Zeeman energies? Our model can be easily generalized to capture the canted antiferromagnetic phase, which is presumably seen as an insulating state in experiments with relatively weaker Zeeman coupling. Our approach in principle allows one to compute the temperature dependence of transport in this phase as well. More challenging, and potentially very interesting, would be the transport behavior of the system through the transition itself. Connected to this, it would be generally interesting to understand the bulk properties of the system in the critical regime. How the system behaves upon doping is yet another interesting direction, an understanding of which would allow further connection of our model with existing experimental data. These and related questions will be addressed in future work. [*Acknowledgements –* ]{} Useful discussions with E. Andrei, N. Andrei, T. Grover, P. Jarillo-Herrero, R. Shankar and A. Young are gratefully acknowledged. The authors thank the Aspen Center for Physics (NSF Grant No. 1066293) for its hospitality. This work was supported by the US-Israel Binational Science Foundation (BSF) grant 2012120 (ES, GM, HAF), the Israel Science Foundation (ISF) grant 231/14 (ES), by NSF Grant Nos. DMR 1306897 (GM), DMR-1506263 (HAF), and DMR-1506460 (HAF). Renormalization of edge parameters near criticality {#sec:uK_critical} =================================================== In this Appendix we discuss some technical details that determine how various parameters of our effective model scale with $\Delta$. In particular we demonstrate that the matrix element $\langle S_0 | h_x^{-1} | S_0 \rangle$ scales as $\Delta^{-1/2}$, as was stated in Section \[sec:Edge\_Hamiltonian\].
We then discuss how this leads to the scaling of the parameters $u$, $K$, and $g$ in our effective Hamiltonian. Small $\Delta$ behavior of $\langle S_0 | h_x^{-1} | S_0 \rangle$ {#sec:matrixelement} ----------------------------------------------------------------- We recall the operator $h_x \equiv -{1 \over 2}\rho_0\partial_x^2 + U_x(x)$, with $$U_x(x)={1 \over 2} E_z \cos\theta_{DW}(x) - \tilde g \cos 2\theta_{DW}(x),$$ which has the asymptotic property $U_x(x\rightarrow \infty) = E_z/2-\tilde g \equiv \Delta/2$. As discussed in Section \[sec:Edge\_Hamiltonian\], we assume for small $\Delta$ that $h_x$ has no bound states, in particular no zero energy states, so that the operator $h_x^{-1}$ is well-defined. The spectrum of $h_x$ then supports only scattering states, which can be specified by eigenvalues of the form ${1 \over 2}\rho_0k_x^2+\Delta/2$, with $k_x$ formally a continuous set of parameters labeling the spectrum. Labeling the corresponding eigenvectors as $|k_x\rangle$, we then have $$\langle S_0 |h_x^{-1}|S_0 \rangle = L_x\int_0^\infty {{dk_x} \over {2\pi}} \frac{|\langle S_0 | k_x \rangle|^2}{{1 \over 2}\rho_0k_x^2+\Delta/2}, \label{matrix_element}$$ where $L_x$ is a size scale which is taken to infinity in the thermodynamic limit. We next argue that the matrix element $\langle S_0 | k_x \rangle$ is finite for any $\Delta$, including at the critical value $\Delta=0$. Since the wavefunctions $\psi_{k_x}(x)=\langle x | k_x \rangle$ are increasingly unaffected by $U_x$ as $\Delta \rightarrow 0$, it is sufficient to show that the matrix element is finite in this limit. Because the wavefunctions in Eq. (\[matrix\_element\]) are normalized, it is clear that the integrand is finite for large $k_x$ and that the integral converges at its upper limit. 
To see that there is no divergence at the lower limit, we identify a length scale $\eta$ above which the domain wall configuration $\theta_{DW}(x)$ is not appreciably different than zero, so that for $x > \eta$ we can use an asymptotic scattering form for $\psi_{k_x}(x)$, as well as $S_0(x) \approx 2\sqrt{2\rho_0/\tilde g}/x \equiv \xi/x$ \[see Eq. (\[r\_eq\_0\_DW\])\]. Writing $x=u/k_x$, the matrix element takes the form $$\langle S_0 | k_x \rangle \approx {{const.} \over {\sqrt L_x}} + {{\xi} \over {\sqrt{L_x}}} \int_{k_x\eta}^{\infty} du \frac{e^{-iu} - e^{iu}e^{-2i\delta(k_x)}}{u},$$ where $\delta(k_x)$ is the phase shift, which for small $k_x$ has the form $\delta(k_x) \approx -k_xa$, with $a$ the scattering length. It is clear from these forms that $\langle S_0 | k_x \rangle$ is finite in the limit $k_x \rightarrow 0$, and we write this limit as $C/\sqrt{L_x}$. Finally, noting that, for small $\Delta$, Eq. (\[matrix\_element\]) is dominated by the lower limit on $k_x$, we find $$\langle S_0 |h_x^{-1}|S_0 \rangle \sim \int_0^{\infty} dk_x \frac{C^2}{\rho_0k_x^2 + \Delta/2} \sim 1/\sqrt{\Delta},$$ which leads to the $u_{NM} \sim \Delta^{1/4}$ behavior discussed in Section \[sec:Edge\_Hamiltonian\]. Scaling of $u$, $K$, and $g$ with $\Delta$ {#sec:uKg_scaling} ------------------------------------------ We next discuss how the parameters specifying the one-dimensional part of our effective Hamiltonian, $u$ and $K$, behave as the transition to the canted antiferromagnet (CAF) is approached from the ferromagnetic (FM) side. Our approach is specifically to expand the Hamiltonian for small fluctuations around a classical groundstate, and to specify the behavior of $u$ and $K$ to match what was found in Section \[sec:normalmodes\]. 
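The $\Delta^{-1/2}$ scaling of the matrix element obtained above follows from the elementary integral $\int_0^\infty \mathrm{d}k\,(\rho_0k^2+\Delta/2)^{-1}=\pi/\sqrt{2\rho_0\Delta}$. A minimal numerical check (with $C$ and $\rho_0$ set to illustrative values):

```python
import numpy as np
from scipy.integrate import quad

# Check that I(Delta) = int_0^inf dk C^2/(rho0 k^2 + Delta/2) scales as Delta^{-1/2}.
# C (the k_x -> 0 limit of the matrix element) and rho0 are illustrative constants.
C, rho0 = 1.0, 1.0

def I(Delta):
    val, _ = quad(lambda k: C**2 / (rho0 * k**2 + Delta / 2.0), 0.0, np.inf)
    return val

# Exact value of the integral: pi * C^2 / sqrt(2 * rho0 * Delta)
for Delta in (1e-2, 1e-4):
    assert np.isclose(I(Delta), np.pi * C**2 / np.sqrt(2.0 * rho0 * Delta), rtol=1e-4)

# Delta^{-1/2} scaling: reducing Delta by 100 increases the integral tenfold.
assert np.isclose(I(1e-4) / I(1e-2), 10.0, rtol=1e-4)
```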
We begin by rewriting the effective Hamiltonian in the form $$\begin{aligned} \label{eq:Heff:appB} H_{eff} & =H_{e}+H_{b}+H_{int},\\ H_{e} & =\frac{u}{2\pi}\int\mathrm{d}y\left\{ K\left(\pi\Pi\right)^{2}+\frac{1}{K}\left(\partial_{y}\phi\right)^{2}\right\},\nonumber \\ H_{b} & =\int\mathrm{d}^2r\left\{ \frac{1}{2}\rho\left(\vec\nabla a^{\dagger}\vec\nabla a+\vec\nabla a\vec\nabla a^{\dagger}\right)+\Delta a^{\dagger}a\right\},\nonumber \\ H_{int} & =g\int\mathrm{d}y\left\{ a^{\dagger}\left(0,y\right)e^{i\phi\left(y\right)}+a\left(0,y\right)e^{-i\phi\left(y\right)}\right\} . \nonumber\end{aligned}$$ The Hamiltonian has a global symmetry of the form $\phi(y) \rightarrow \phi(y) + \varphi_0$, $a \rightarrow a e^{i\varphi_0}$, $a^{\dag} \rightarrow a^{\dag} e^{-i\varphi_0}$. This implies that classical groundstates form a degenerate continuous manifold, and for convenience we consider fluctuations around $\phi(y) = 0$. For small but non-vanishing values of this field, to quadratic order one finds $$H_{int} \approx g \int dy \left\{[a+a^{\dag}][1-{1 \over 2}\phi(y)^2] +i\phi(y)[a^{\dag}-a] \right\}. \label{smallphi}$$ Rewriting $a({\bf r}) \equiv [P({\bf r}) + i Q({\bf r})]/\sqrt{2}$, $a({\bf r})^{\dag} \equiv [P({\bf r}) - i Q({\bf r})]/\sqrt{2}$ (i.e., $P$ and $Q$ denote the spin operators $S_x$ and $S_y$, respectively) with $[P({\bf r}_1),Q({\bf r}_2)]=i\delta({\bf r}_1-{\bf r}_2)$, yields $$H_{int} \approx \sqrt{2} g \int dy \left\{P(0,y)[1-{1 \over 2}\phi(y)^2] +\phi(y)Q(0,y) \right\}. \label{smallphiPQ}$$ If $P$ is treated classically, it is clear that the Hamiltonian will be minimized by $P({\bf r}) \ne 0$. 
Collecting the terms involving $P$ at $\phi=0$, the minimizing function is the one that minimizes $$H_P=\int_{x \ge 0^-} d^2r \left\{ {1 \over 2} \rho | \vec\nabla P |^2 + {1 \over 2} \Delta P^2 + \sqrt{2} g P({\bf r}) \delta(x) \right\}.$$ Minimizing this subject to the boundary condition $\partial_x P(x=0^-,y)=0$, which is appropriate to an open boundary, one finds $P=P_0$ with $$\begin{aligned} P_0(x) &=& \frac{-\sqrt{2} g }{\sqrt{\rho\Delta}} \quad\quad\quad 0^- < x \le 0, \nonumber\\ &=& \frac{-\sqrt{2} g }{\sqrt{\rho\Delta}} e^{-\left(\frac{\Delta}{\rho}\right)^{1/2} x} \quad x > 0 \nonumber.\end{aligned}$$ Writing $P=P_0+p$, the effective Hamiltonian at the quadratic level now has the form $$\begin{aligned} H_{eff} &\approx& \int_{x>0^-} d^2r \left\{ {1 \over 2} \rho \left(|\vec\nabla p|^2 + |\vec\nabla Q|^2 \right) +{1 \over 2} \Delta \left(p^2 + Q^2\right) \right\} \nonumber \\ &+& g \int dy \left\{ \frac{g}{\sqrt{\rho\Delta}} \phi(y)^2 + \sqrt{2} \phi(y) Q(0,y) \right\} \nonumber \\ &+& \frac{u}{2\pi} \int dy \left\{ K\left(\pi\Pi\right)^2 + {1 \over K} \left( \partial_y \phi \right)^2 \right\}. \label{AppA:quadratic}\end{aligned}$$ The middle term in Eq. (\[AppA:quadratic\]) encodes a coupling between the $\phi$ and $Q$ fields, capturing the effects of the global symmetry described above. The effect of this coupling can be found explicitly by minimizing Eq. (\[AppA:quadratic\]) with respect to $Q$, subject to the boundary condition $\partial_x Q(x=0^-,y)=0$, which again is appropriate for an open boundary.
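As a consistency check, the profile $P_0$ quoted above satisfies the bulk Euler–Lagrange equation $-\rho\partial_x^2P+\Delta P=0$ for $x>0$, together with the slope discontinuity generated by the $\delta(x)$ source. A short symbolic sketch (symbols as in the text):

```python
import sympy as sp

x, rho, Delta, g = sp.symbols('x rho Delta g', positive=True)
kappa = sp.sqrt(Delta / rho)
A = -sp.sqrt(2) * g / sp.sqrt(rho * Delta)
P0 = A * sp.exp(-kappa * x)   # profile for x > 0

# Bulk Euler-Lagrange equation for x > 0: -rho P'' + Delta P = 0
assert sp.simplify(-rho * sp.diff(P0, x, 2) + Delta * P0) == 0

# Jump condition from the delta-function source: rho * P0'(0^+) = sqrt(2) g
# (P is flat for x < 0, so the full slope discontinuity sits at x = 0^+).
assert sp.simplify(rho * sp.diff(P0, x).subs(x, 0) - sp.sqrt(2) * g) == 0
```

The jump condition fixes the amplitude $-\sqrt{2}g/\sqrt{\rho\Delta}$, as quoted in the piecewise form of $P_0$.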
This minimum $\Phi(x,y)$ obeys the equation $$-\rho\nabla^2 \Phi + \Delta \Phi + \sqrt{2}g\phi(y)\delta(x) = 0\; .$$ Fourier transforming with respect to $y$, the solution to this equation for $x \ge 0$ is $$\Phi(x,q_y) = -\frac{\sqrt{2}g}{\rho} \frac{\phi(q_y)}{\sqrt{\Delta + \rho q_y^2}} e^{-\sqrt{(\Delta + \rho q_y^2)/\rho} \,x }.$$ We can finally write $Q = \Phi+q$, with $[p({\bf r}_1),q({\bf r}_2)]= i\delta({\bf r}_1-{\bf r}_2)$ to fully decouple the edge mode from the bulk. After some algebra, we arrive at the effective Hamiltonian at the quadratic level in the form $$\begin{aligned} \label{Heffquad:AppA} H_{eff} &\approx& \int_{x \ge 0} d^2r \left\lbrace {1 \over 2} \rho \left(|\vec\nabla p|^2 + |\vec\nabla q|^2 \right) +{1 \over 2} \Delta \left( p^2 + q^2 \right) \right\rbrace \nonumber \\ &+& \frac{u}{2\pi} \int dy \left\{ K\left(\pi\Pi\right)^2 + {1 \over K} \left( \partial_y \phi \right)^2 \right\} \nonumber \\ &+& L_y \frac{g^2}{\sqrt{\rho\Delta}}\int \frac{dq_y}{2\pi} \left[1 - \frac{1}{\sqrt{1+\rho q_y^2/\Delta}} \right] \phi(-q_y)\phi(q_y). \nonumber\\\end{aligned}$$ For small enough $q_y$, it is apparent that the last two terms of Eq. (\[Heffquad:AppA\]) support a linearly dispersing normal mode, whose dynamics is described by a Luttinger liquid Hamiltonian with renormalized parameters. In particular, the renormalized coefficient of the $(\partial_y \phi)^2$ term is $u/K+2\pi g^2 \sqrt{\rho}/\Delta^{3/2}$. Our goal is to match the Hamiltonian controlling this mode as $\Delta$ becomes small to the result \[Eq. (\[eq:LL\_NM\])\] of the model described in Section \[sec:normalmodes\], in which non-Gaussian properties of the bulk system were retained. This leads to two requirements: ([*i*]{}) The coefficient of the $(\partial_y \phi)^2$ should remain finite and non-vanishing in the limit of small $\Delta$; ([*ii*]{}) the velocity of the gapless mode should vanish as $\Delta^{1/4}$. 
The first condition will be met if we assume $g \sim \Delta^{3/4}$ and $u \sim K$. Noting further that the product $uK$ \[the coefficient of the $\left(\pi\Pi\right)^2$ in Eq. (\[Heffquad:AppA\])\] is not renormalized, requirement ([*ii*]{}) on the velocity implies that our “bare” parameters $u$ and $K$ scale as $u \sim K \sim \Delta^{1/4}$, in accordance with the scaling of the normal mode parameters $u_{NM}$, $K_{NM}$ derived in Section \[sec:normalmodes\]. Derivation of the general expression for $\delta R$ vs. $T$ {#sec:deltaRdetails} =========================================================== In this Appendix we first derive the general expression for $\delta R$ \[Eq. (\[eq:deltaR2B\])\] starting from Eq. (\[eq:deltaR2C\]). Inserting $\omega_{{\bf k}}$ from Eq. (\[eq:acorr\_k\]), writing the Bose function as a geometric sum and performing the integral over ${\bf k}$, the correlation function $\mathcal{C}(t)$ becomes $$\begin{aligned} \mathcal{C}\left(t\right) &=\lim_{\epsilon\rightarrow0}\left(-1\right)^{-\frac{K}{4}}\left(\sinh\left(\frac{\left(t-i\epsilon\right)\pi}{\beta}\right)\right)^{-\frac{K}{2}} \label{eq:Ct_final} \\ &\times\frac{1}{4\pi\rho}\left(\sum_{n=0}^{\infty}\frac{e^{-\Delta\left(n\beta+it\right)}}{n\beta+it}+\sum_{n=1}^{\infty}\frac{e^{-\Delta\left(n\beta-it\right)}}{n\beta-it}\right). \nonumber\end{aligned}$$ To proceed with the calculation of $\delta R$, we recast Eq. (\[eq:deltaR2C\]) as $$\begin{aligned} \delta R &\propto& -I, \nonumber \\ I &\equiv& 4\pi\rho\, \Im m\left\{ \int_{0}^{\infty}\mathrm{d}t\cdot t\, \mathcal{C}\left(t\right)\right\}. \nonumber\end{aligned}$$
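The geometric-sum rewriting of the Bose function used in deriving $\mathcal{C}(t)$, namely $1/(e^{\beta\omega}-1)=\sum_{n\ge1}e^{-n\beta\omega}$, can be verified directly (illustrative numbers):

```python
import math

def bose(beta, w):
    """Bose occupation factor 1/(exp(beta*w) - 1)."""
    return 1.0 / (math.exp(beta * w) - 1.0)

def bose_geometric(beta, w, nmax=200):
    """The same factor written as the geometric sum sum_{n>=1} exp(-n*beta*w)."""
    return sum(math.exp(-n * beta * w) for n in range(1, nmax + 1))

assert abs(bose(2.0, 1.5) - bose_geometric(2.0, 1.5)) < 1e-12
```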
Substituting (\[eq:Ct\_final\]) for $\mathcal{C}\left(t\right)$, we get $$\begin{aligned} I &=\Im m\left\{ \int_{0}^{\infty}\mathrm{d}t\cdot t\left(-1\right)^{-\frac{K}{4}}\left(\sinh\left(\frac{t\pi}{\beta}\right)\right)^{-\frac{K}{2}}\left(\sum_{n=0}^{\infty}\frac{e^{-\Delta\left(n\beta+it\right)}}{n\beta+it}+\sum_{n=1}^{\infty}\frac{e^{-\Delta\left(n\beta-it\right)}}{n\beta-it}\right)\right\} \nonumber \\ &=\Im m\left\{ \int_{0}^{\infty}\mathrm{d}t\cdot t\left(-1\right)^{-\frac{K}{4}}\left(\sinh\left(\frac{t\pi}{\beta}\right)\right)^{-\frac{K}{2}}\int_{\Delta}^{\infty}\mathrm{d}\Delta'\left(\sum_{n=0}^{\infty}e^{-\Delta'\left(n\beta+it\right)}+\sum_{n=1}^{\infty}e^{-\Delta'\left(n\beta-it\right)}\right)\right\} \label{eq:I_details} \\ &=-\int_{\Delta}^{\infty}\mathrm{d}\Delta'\sum_{n=0}^{\infty}e^{-\Delta'n\beta}\frac{\partial F_{-}\left(\Delta'\right)}{\partial\Delta'}+\int_{\Delta}^{\infty}\mathrm{d}\Delta'\sum_{n=1}^{\infty}e^{-\Delta'n\beta}\frac{\partial F_{+}\left(\Delta'\right)}{\partial\Delta'} , \nonumber\end{aligned}$$ where $$\begin{aligned} &F_{\mp}(\Delta)\equiv\Im m\left\{ \frac{\left(-1\right)^{-\frac{K}{4}}}{i}\int_{0}^{\infty}\mathrm{d}t\cdot e^{\mp i\Delta t}\left(\sinh\left(\frac{\pi}{\beta}t\right)\right)^{-\frac{K}{2}}\right\} =2^\frac{K}{2}\frac{\beta }{2\pi}\Im m\left\{ \frac{\left(-1\right)^{-\frac{K}{4}}}{i}B\begin{pmatrix}i\gamma\pm\frac{K}{4}, & 1-\frac{K}{2}\end{pmatrix}\right\} \nonumber \\ &=-2^\frac{K}{2}\frac{\beta }{2\pi}\Re e\left\{\Gamma\left(1-\frac{K}{2}\right)\left|\Gamma\left(\frac{K}{4}+i\gamma\right)\right|^{2} \frac{\left(\cos\pi\frac{K}{4}-i\sin\pi\frac{K}{4}\right)}{\pi}\left(\cosh\pi\gamma\sin\frac{\pi K}{2}\mp i\sinh\pi\gamma\cos\frac{\pi K}{2}\right)\right\} \label{eq:Fmp}\\ &=-2^\frac{K}{2}\frac{\beta }{2\pi}\Gamma\left(1-\frac{K}{2}\right)\left|\Gamma\left(\frac{K}{4}+i\gamma\right)\right|^{2}\frac{1}{\pi}e^{\mp\pi\gamma}\frac{1}{2}\sin\frac{\pi K}{2} ; \nonumber\end{aligned}$$ here $\gamma =\frac{\beta 
\Delta}{2\pi}$, $B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$ is the Beta function and we have used the identity $\Gamma(z)\Gamma(1-z)=\pi/\sin(\pi z)$. Inserting these expressions for $F_{+}$ and $F_{-}$ in Eq. (\[eq:I\_details\]) yields $$\begin{aligned} I &=2^\frac{K}{2}\frac{\beta }{2\pi}\Gamma\left(1-\frac{K}{2}\right)\frac{1}{2\pi}\sin\frac{\pi K}{2}\times \label{eq:I_final} \\ &\Bigg\{-\int_{\Delta}^{\infty}\mathrm{d}\Delta'\frac{1}{1-e^{-\Delta'\beta}}\frac{\partial}{\partial\Delta'} \left(\left|\Gamma\left(\frac{K}{4}+i\gamma'\right)\right|^{2}e^{-\frac{\beta\Delta'}{2}}\right) \nonumber\\ &+\int_{\Delta}^{\infty}\mathrm{d}\Delta'\frac{e^{-\Delta'\beta}}{1-e^{-\Delta'\beta}}\frac{\partial}{\partial\Delta'} \left(\left|\Gamma\left(\frac{K}{4}+i\gamma'\right)\right|^{2}e^{\frac{\beta\Delta'}{2}}\right)\Bigg\}.\nonumber\end{aligned}$$ Finally, after integration by parts we obtain the expression of Eq. (\[eq:deltaR2B\]). The asymptotic form of $f(z)$, which dominates the limit $T\ll \Delta$ (namely, $z \rightarrow \infty$), is now obtained from Eq. (\[eq:deltaR2B\]) by substituting the asymptotic form of $\Gamma(z)$ at large arguments: $$f\left(z\right)\approx\left(2\pi\right)^{2-\frac{K}{2}}\int_{z}^{\infty}\mathrm{d}x\, x^{\frac{K}{2}-1}e^{-x}\; .$$ It is therefore proportional to the incomplete Gamma function $\Gamma (\frac{K}{2},z)$, which can be further approximated for $z \rightarrow \infty$ to give $$\begin{aligned} f\left(z\right)\approx\left(2\pi\right)^{2-\frac{K}{2}}z^{\frac{K}{2}-1}e^{-z}\; . \label{eq:fz_approx}\end{aligned}$$ This leads to the approximate expression for $\delta R$ in Eq. (\[eq:deltaR\_final\]).
--- abstract: 'Different descriptions used to model a point-defect in an elastic continuum are reviewed. The emphasis is put on the elastic dipole approximation, which is shown to be equivalent to the infinitesimal Eshelby inclusion and to the infinitesimal dislocation loop. Knowing this elastic dipole, a second rank tensor fully characterizing the point-defect, one can directly obtain the long-range elastic field induced by the point-defect and its interaction with other elastic fields. The polarizability of the point-defect, resulting from the dependence of the elastic dipole on the applied strain, is also introduced. Parameterization of such an elastic model, either from experiments or from atomic simulations, is discussed. Different examples, like elastodiffusion and bias calculations, are finally considered to illustrate the usefulness of such an elastic model to describe the evolution of a point-defect in an external elastic field.' address: - 'DEN-Service de Recherches de Métallurgie Physique, CEA, Paris-Saclay Univ., F-91191 Gif-sur-Yvette, France' - 'Centre Interdisciplinaire des Nanosciences de Marseille, UMR 7325 CNRS - Aix Marseille Univ., F-13008 Marseille, France' author: - Emmanuel Clouet - Céline Varvenne - Thomas Jourdan bibliography: - 'elasticity.bib' title: 'Elastic modeling of point-defects and their interaction' --- Point-defects, Elasticity, Elastic dipole, Polarizability Introduction ============ Point-defects in crystalline solids, being either intrinsic like vacancies, self-interstitial atoms, and their small clusters, or extrinsic like impurities and dopants, play a major role in materials properties and their kinetic evolution. Some properties of these point-defects, like their formation and migration energies, are mainly determined by the region in the immediate vicinity of the defect where the crystal structure is strongly perturbed.
An atomic description thus appears natural to model these properties, and atomic simulations relying either on [*ab initio*]{} calculations [@Freysoldt2014] or on empirical potentials have now become a routine tool to study point-defect structures and energies. But point-defects also induce a long-range perturbation of the host lattice, leading to an elastic interaction with other structural defects, impurities or an applied elastic field. An atomic description thus appears unnecessary to capture the interaction arising from this long-range part, and is sometimes even impossible because of the limited size of the simulation cell in atomic approaches. Elasticity theory then becomes the natural framework. It allows a quantitative description of the point-defect interaction with other defects. Following the seminal work of Eshelby [@Eshelby1956], the simplest elastic model of a point-defect corresponds to a spherical inclusion forced into a spherical hole of slightly different size in an infinite elastic medium. This description accounts for the point-defect relaxation volume and its interaction with a pressure field (size interaction). It can be enriched by considering an ellipsoidal inclusion, thus leading to an interaction also with the deviatoric component of the stress field (shape interaction), and by assigning different elastic constants to the inclusion (inhomogeneity) to describe the variations of the point-defect “size” and “shape” with the strain field in which it is immersed. Other elastic descriptions of the point-defect are possible. In particular, it can be modeled by an equivalent distribution of point-forces. The long-range elastic field of the point-defect and its interaction with other stress sources are then fully characterized by the first moment of this force distribution, a second-rank tensor called the elastic dipole. This description is rather natural when modeling point-defects and it can be used to extract elastic dipoles from atomic simulations.
These different descriptions are equivalent in the long-range limit, and allow for a quantitative modeling of the elastic field induced by the point-defect, as long as the elastic anisotropy of the matrix is considered. This article reviews these different elastic models which can be used to describe a point-defect and illustrates their usefulness with selected examples. After a short reminder of elasticity theory (Sec. \[sec:elasticity\]), we introduce the different descriptions of a point-defect within elasticity theory (Sec. \[sec:point\_defect\]), favoring the elastic dipole description and showing its equivalence with the infinitesimal Eshelby inclusion as well as with an infinitesimal dislocation loop. The next section (Sec. \[sec:para\]) describes how the characteristics of the point-defect needed to model it within elasticity theory can be obtained either from atomistic simulations or from experiments. We finally give some applications in Sec. \[sec:examples\], where results of such an elastic model are compared to direct atomic simulations to assess its validity. The usefulness of this elastic description is illustrated in this section for elastodiffusion and for the calculation of bias factors, as well as for the modeling of isolated point-defects in atomistic simulations. Elasticity theory {#sec:elasticity} ================= Before describing the modeling of a point-defect within elasticity theory, it is worth recalling the main aspects of the theory [@Landau1970], in particular the underlying assumptions, some definitions and useful results. Displacement, distortion and strain ----------------------------------- Elasticity theory is based on a continuous description of solid bodies. It relates the forces, either internal or external, acting on the solid to its deformation. To do so, one first defines the elastic displacement field.
If $\vec{R}$ and $\vec{r}$ are the positions of a point respectively in the unstrained and the strained body, the displacement at this point is given by $$\vec{u}(\vec{R}) = \vec{r} - \vec{R}.$$ One can then define the distortion tensor $\partial u_i \,/\, \partial R_j$ which expresses how an infinitesimal vector $\vv{{\mathrm{d}}{R}}$ in the unstrained solid is transformed into $\vv{{\mathrm{d}}{r}}$ in the strained body through the relation $${\mathrm{d}}{r}_i = \left( \delta_{ij} + \frac{\partial u_i}{\partial R_j} \right) {\mathrm{d}}{R}_j,$$ where summation over repeated indices is implicit (Einstein convention) and $\delta_{ij}$ is the Kronecker symbol. Of central importance to elasticity theory is the dimensionless strain tensor, defined by $$\begin{aligned} \varepsilon_{ij}(\vec{R}) &= \frac{1}{2}\left[ \left( \delta_{in} + \frac{\partial u_n}{\partial R_i} \right) \left( \delta_{nj} + \frac{\partial u_n}{\partial R_j} \right) - \delta_{ij} \right] \\ &= \frac{1}{2} \left( \frac{\partial u_i}{\partial R_j} + \frac{\partial u_j}{\partial R_i} + \frac{\partial u_n}{\partial R_i}\frac{\partial u_n}{\partial R_j} \right).\end{aligned}$$ This symmetric tensor expresses the change of size and shape of a body as a result of a force acting on it. The length ${\mathrm{d}}{L}$ of the infinitesimal vector $\vv{{\mathrm{d}}{R}}$ in the unstrained body is thus transformed into ${\mathrm{d}}{l}$ in the strained body, through the relation $${\mathrm{d}}{l}^2 = {\mathrm{d}}{L}^2 + 2 \varepsilon_{ij} {\mathrm{d}}{R}_i {\mathrm{d}}{R}_j.$$ Assuming small deformation, a common assumption of linear elasticity, only the leading terms of the distortion are kept. The strain tensor then corresponds to the symmetric part of the distortion tensor, as $$\varepsilon_{ij}(\vec{R}) = \frac{1}{2} \left( \frac{\partial u_i}{\partial R_j} + \frac{\partial u_j}{\partial R_i} \right).
\label{eq:strain_def}$$ The antisymmetric part of the distortion tensor corresponds to the infinitesimal rigid body rotation. It does not lead to any energetic contribution within linear elasticity in the absence of internal torque. With this small deformation assumption, there is no distinction between Lagrangian coordinates $\vec{R}$ and Eulerian coordinates $\vec{r}$ when describing elastic fields. One can equally write, for instance, $\vec{u}(\vec{r})$ or $\vec{u}(\vec{R})$ for the displacement field, which are equivalent to the leading order of the distortion. Stress ------ The force $\vv{\delta F}$ acting on a volume element $\delta V$ of a strained body is composed of two contributions, the sum of external body forces $\vec{f}$ and the internal forces arising from atomic interactions. Because of the mutual cancellation of forces between particles inside the volume $\delta V$, only forces corresponding to the interaction with outside particles appear in this last contribution, which is thus proportional to the surface elements $\vv{{\mathrm{d}}S}$ defining the volume element $\delta V$. One obtains $$\delta F_i = \int_{\delta V}{ f_i {\mathrm{d}}{V}} + \oint_{\delta S}{\sigma_{ij}{\mathrm{d}}S_j},$$ where $\sigma$ is the stress tensor defining internal forces. Considering the mechanical equilibrium of the volume element $\delta V$, the absence of resultant force leads to the equation $$\frac{\partial \sigma_{ij}(\vec{r})}{\partial r_j} + f_i(\vec{r}) = 0, \label{eq:equil_stress}$$ whereas the absence of torque ensures the symmetry of the stress tensor. At the boundary of the strained body, internal forces are balanced by applied forces. If $\vec{T}^{\rm a} {\mathrm{d}}{S}$ is the force applied on the infinitesimal surface element ${\mathrm{d}}{S}$, this leads to the boundary condition $$\sigma_{ij} n_j = T^{\rm a}_i, \label{eq:equil_stress_boundary}$$ where $\vec{n}$ is the outward-pointing normal to the surface element ${\mathrm{d}}{S}$. 
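As a toy illustration of the equilibrium condition (Eq. \[eq:equil\_stress\]), one can pick a smooth stress field, compute its divergence by finite differences, and check that the balancing body force is recovered; all numerical values below are made up:

```python
# a smooth, made-up quadratic stress field and the body force that balances it,
# illustrating the equilibrium condition  d(sigma_ij)/dr_j + f_i = 0
c = 2.0

def sigma(r):
    x, y, z = r
    return [[c * x * x, c * x * y, 0.0],
            [c * x * y, c * y * y, 0.0],
            [0.0,       0.0,       0.0]]

def body_force(r):
    x, y, z = r
    # divergence of sigma: row 1 gives 2cx + cx = 3cx, row 2 gives cy + 2cy = 3cy
    return [-3.0 * c * x, -3.0 * c * y, 0.0]

# check the balance by central finite differences at a test point
r0, h = [0.7, -1.2, 0.4], 1e-5
for i in range(3):
    div = 0.0
    for j in range(3):
        rp, rm = list(r0), list(r0)
        rp[j] += h
        rm[j] -= h
        div += (sigma(rp)[i][j] - sigma(rm)[i][j]) / (2.0 * h)
    assert abs(div + body_force(r0)[i]) < 1e-8
```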
The work $\delta w$, defined per volume unit, of these internal forces is given by $$\delta w = -\sigma_{ij} \delta{\varepsilon_{ij}},$$ where $\delta \varepsilon_{ij}$ is the strain change during the deformation increase, and the sign convention is $\delta w > 0$ when the energy flows out of the elastic body. This leads to the following thermodynamic definition of the stress tensor $$\sigma_{ij} = \left( \frac{\partial e}{\partial \varepsilon_{ij}} \right)_s = \left( \frac{\partial f}{\partial \varepsilon_{ij}} \right)_T,$$ where $e$, $s$, and $f=e-Ts$ are the internal energy, entropy, and free energy of the elastic body defined per volume unit. Hooke’s law {#sec:elast_Hooke} ----------- To go further, one needs a constitutive equation for the energy or the free energy. Taking as a reference the undeformed state corresponding to the elastic body at equilibrium without any external force, either body force or applied stress, the energy is at a minimum for $\varepsilon=0$ and then $$\sigma_{ij}(\varepsilon=0) = \left. \frac{\partial e}{\partial \varepsilon_{ij}} \right|_{\varepsilon = 0} = 0.$$ The leading order terms of the series expansion of the energy are then $$e(T,\varepsilon) = e^0(T) + \frac{1}{2} C_{ijkl}\varepsilon_{ij}\varepsilon_{kl},$$ where $e^0(T) = e(T,\varepsilon=0)$ is the energy of the unstrained body at temperature $T$. The elastic constants $C_{ijkl}$ entering this expression are thus defined by $$C_{ijkl} = \frac{\partial^2 e}{\partial \varepsilon_{ij} \partial \varepsilon_{kl}}.$$ This is a fourth-rank tensor which obeys minor symmetry $C_{ijkl}=C_{jikl}=C_{ijlk}$ because of the strain tensor symmetry and also major symmetry $C_{ijkl}=C_{klij}$ because of allowed permutation of partial derivatives. This leads to at most 21 independent coefficients, which can be further reduced by considering the symmetries of the solid body [@Nye1957].
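The counting argument for the 21 coefficients can be spelled out explicitly: the minor symmetries leave 6 distinct index pairs $(ij)$, and the major symmetry keeps only unordered pairs of such pairs, i.e. $6\times7/2=21$. A short enumeration confirms this:

```python
# independent components of C_ijkl implied by its symmetries:
# minor symmetry C_ijkl = C_jikl = C_ijlk  ->  (ij) and (kl) each take 6 values,
# major symmetry C_ijkl = C_klij           ->  only unordered pairs of pairs count
pairs = [(i, j) for i in range(3) for j in range(i, 3)]
assert len(pairs) == 6
independent = [(p, q) for n, p in enumerate(pairs) for q in pairs[n:]]
assert len(independent) == 21   # 6*7/2, the "at most 21" quoted in the text
```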
This series expansion of the energy leads to a linear relation, the Hooke’s law, between the stress and the strain $$\sigma_{ij} = C_{ijkl} \varepsilon_{kl}, \label{eq:stress_Hooke}$$ which was summarized in 1678 by Robert Hooke as *Ut tensio, sic vis*.[^1] Elastic equilibrium, superposition principle -------------------------------------------- Combining Hooke’s law with the small deformation definition of the strain tensor and the equilibrium condition , one obtains the equation obeyed by the displacement at equilibrium $$C_{ijkl} \frac{\partial^2 u_k(\vec{r})}{\partial r_j \partial r_l} + f_i(\vec{r}) = 0. \label{eq:equil_displacement}$$ The elastic equilibrium is given by the solution which verifies the boundary conditions, $\sigma_{ij}n_ j = T_i^{\rm a}$ for imposed applied forces and $u_i=u_i^{\rm a}$ for imposed applied displacements. As elastic equilibrium is defined by the solution of a linear partial differential equation (Eq. \[eq:equil\_displacement\]), the superposition principle holds. If two elastic fields, characterized by their displacement $\vec{u}^1(\vec{r})$ and $\vec{u}^2(\vec{r})$, correspond to equilibrium for the respective body forces $\vec{f}^1$ and $\vec{f}^2$ and the respective boundary conditions $(\vec{u}^{\rm a1}, \vec{T}^{\rm a1})$ and $(\vec{u}^{\rm a2}, \vec{T}^{\rm a2})$, then the elastic equilibrium for the body forces $\vec{f}^1 + \vec{f}^2$ and the boundary conditions $(\vec{u}^{\rm a1} + \vec{u}^{\rm a2}, \vec{T}^{\rm a1} + \vec{T}^{\rm a2})$ is given by the sum of these two elastic fields. The total elastic energy is composed of the contributions of each elastic field taken separately and an interaction energy given by $$\begin{split} E^{\rm int} =& \int_{V}{ \sigma_{ij}^1(\vec{r}) \, \varepsilon_{ij}^2(\vec{r}) \, {\mathrm{d}}{V} } \\ =& \int_{V}{ \sigma_{ij}^2(\vec{r}) \, \varepsilon_{ij}^1(\vec{r}) \, {\mathrm{d}}{V} }. 
\end{split} \label{eq:Eqinter}$$ This equation can be used to define the interaction energy between two defects. The superposition principle allows one to make use of Green’s functions. The elastic Green’s function $G_{kn}(\vec{r})$ is the solution of the equilibrium equation for a unit point-force $$C_{ijkl} \frac{\partial^2 G_{kn}(\vec{r})}{\partial r_j \partial r_l} + \delta_{in} \operatorname{\delta}(\vec{r}) = 0, \label{eq:equil_Green}$$ where $\operatorname{\delta}(\vec{r})$ is the Dirac delta function, [*i.e.*]{} $\delta(\vec{r})=0$ if $\vec{r}\neq\vec{0}$ and $\delta(\vec{0})=\infty$. $G_{kn}(\vec{r})$ therefore corresponds to the displacement along the $r_{k}$ axis for a unit point-force applied along the $r_{n}$ axis at the origin. The solution of elastic equilibrium for the force distribution $\vec{f}(\vec{r})$ is then given by $$\begin{aligned} u_k(\vec{r}) &= \int_V{ G_{kn}( \vec{r} - \vec{r}^{\,\prime} ) f_n( \vec{r}^{\,\prime} ) {\mathrm{d}}{V^{\,\prime}} }, \\ \sigma_{ij}(\vec{r}) &= C_{ijkl} \int_V{ G_{kn,l}( \vec{r} - \vec{r}^{\,\prime} ) f_n( \vec{r}^{\,\prime} ) {\mathrm{d}}{V^{\,\prime}} }, \end{aligned}$$ where we have introduced the notation $G_{kn,l} = \partial G_{kn} \,/\, \partial r_l$ for partial derivatives. An analytical expression of the Green’s function exists for isotropic elasticity. Considering the elastic constants $C_{ijkl} = \lambda \delta_{ij}\delta_{kl} + \mu( \delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})$, where $\lambda$ and $\mu$ are the Lamé coefficients, the Green’s function is given by $$G_{kn}(\vec{r}) = \frac{1}{ 8 \pi \mu} \left[ \frac{\lambda+3\mu}{\lambda+2\mu} \delta_{kn} + \frac{\lambda+\mu}{\lambda+2\mu} \eta_k \eta_n \right] \frac{1}{r},$$ with $r = \| \vec{r} \|$ and $\vec{\eta}=\vec{r}/r$.
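A direct evaluation of this closed-form Green's function (with illustrative Lamé coefficients) confirms its symmetry $G_{kn}=G_{nk}$ and its $1/r$ decay; this is a standalone numerical sketch, not part of the original text:

```python
import math

lam, mu = 60.0, 27.0          # illustrative Lame coefficients (arbitrary units)

def green(r):
    """Isotropic elastic Green's function G_kn(r) of the expression above."""
    rn = math.sqrt(sum(x * x for x in r))
    eta = [x / rn for x in r]
    a1 = (lam + 3 * mu) / (lam + 2 * mu)
    b1 = (lam + mu) / (lam + 2 * mu)
    return [[(a1 * (1.0 if k == n else 0.0) + b1 * eta[k] * eta[n])
             / (8 * math.pi * mu * rn) for n in range(3)] for k in range(3)]

G1 = green([1.0, 2.0, 2.0])   # |r| = 3
G2 = green([2.0, 4.0, 4.0])   # same direction, |r| = 6
for k in range(3):
    for n in range(3):
        assert abs(G1[k][n] - G1[n][k]) < 1e-18        # symmetry G_kn = G_nk
        assert abs(G1[k][n] - 2.0 * G2[k][n]) < 1e-15  # G scales as 1/r
```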
No analytical expression exists in the more general case of elastic anisotropy, but the Green’s function, and its successive derivatives, can be calculated efficiently from the elastic constants using the numerical scheme of Barnett [@Barnett1972a; @Bacon1980]. Whatever the anisotropy, the Green’s function and its derivatives will show the same variation with the distance $r$,[^2] leading to the general expressions $$G_{kn}(\vec{r}) = g_{kn}(\vec{\eta}) \frac{1}{r} \textrm{\ , } G_{kn,l}(\vec{r}) = h_{knl}(\vec{\eta}) \frac{1}{r^2} \textrm{\ , \dots }$$ where the anisotropy enters only in the angular dependence $g_{kn}(\vec{\eta})$, $h_{knl}(\vec{\eta})$, … Elastic model of a point-defect {#sec:point_defect} =============================== Different models can be used to describe a point-defect within elasticity theory. One such model is the elastic dipole. We first describe this model and then demonstrate the analogy with a description of the point-defect as an infinitesimal Eshelby inclusion or an infinitesimal dislocation loop. We finally introduce the polarizability of the point-defect. Elastic dipole {#sec:dipole_model} -------------- A point-defect can be described in a continuous solid body as an equilibrated distribution of point-forces [@Siems1968; @Leibfried1978; @Bacon1980; @Teodosiu1982]. Considering a point-defect located at the origin modeled by such a force distribution $\vec{f}(\vec{r}) = \sum_{q=1}^N{ \vec{F}^q \operatorname{\delta}{(\vec{r}-\vec{a}^q})}$, [*i.e.*]{}consisting of $N$ forces $\vec{F}^q$ each acting at position $\vec{a}^q$, the elastic displacement field of the point-defect is, according to linear elasticity theory, given by $$u_i(\vec{r}) = \sum_{q=1}^N{ G_{ij}( \vec{r} - \vec{a}^q ) F^q_j },$$ where we have used the elastic Green’s function. 
Far from the point-defect, we have $\| \vec{r} \| \gg \| \vec{a}^q \|$ and we can make a series expansion of the Green’s function: $$\begin{gathered} u_i(\vec{r}) = G_{ij}( \vec{r} ) \sum_{q=1}^N{ F^q_j } \ - \ G_{ij,k}( \vec{r} ) \sum_{q=1}^N{ F^q_j a^q_k } \\ \ + \ \operatorname{O}{\left( \| \vec{a}^q \|^2 \right)} .\end{gathered}$$ As the force distribution is equilibrated, its resultant $\sum_q{\vec{F}^q}$ is null. The displacement is thus given, to the leading order, by $$u_i(\vec{r}) = - G_{ij,k}( \vec{r} ) P_{jk}, \label{eq:dipole_displacement}$$ and the corresponding stress field by $$\sigma_{ij}(\vec{r}) = - C_{ijkl} G_{km,nl}( \vec{r} ) P_{mn}, \label{eq:dipole_stress}$$ where the elastic dipole is defined as the first moment of the point-force distribution, $$P_{jk} = \sum_{q=1}^N{ F^q_j a^q_k }. \label{eq:dipole}$$ This dipole is a second rank tensor which fully characterizes the point-defect within elasticity theory [@Siems1968; @Leibfried1978; @Bacon1980; @Teodosiu1982]. It is symmetric because the torque $\sum_q{ \vec{F}^q \times \vec{a}^q }$ must be null for the force distribution to be equilibrated. Equations (\[eq:dipole\_displacement\]) and (\[eq:dipole\_stress\]) show that the elastic displacement and the stress created by a point-defect are long-ranged, respectively decaying as $1/r^2$ and $1/r^3$ with the distance $r$ to the point-defect. The elastic dipole is directly linked to the point-defect relaxation volume. Considering a finite volume $V$ of external surface $S$ enclosing the point-defect, this relaxation volume is defined as $$\Delta V = \oint_S { u_i(\vec{r}) \, {\mathrm{d}}{S_i} },$$ where $\vec{u}(\vec{r})$ is the superposition of the displacement created by the point-defect (Eq. \[eq:dipole\_displacement\]) and the elastic displacement due to image forces ensuring null tractions on the external surface $S$.
Use of the Gauss theorem, of the equilibrium condition and of the elastic dipole definition leads to the result [@Leibfried1978] $$\Delta V = S_{iikl} P_{kl}, \label{eq:dipole_relax_vol}$$ where the elastic compliances $S_{ijkl}$ are the inverse of the elastic constants, [*i.e.*]{} $S_{ijkl}C_{klmn} = \frac{1}{2}( \delta_{im}\delta_{jn} + \delta_{in}\delta_{jm} )$. For a crystal with cubic symmetry, this equation can be further simplified [@Leibfried1978] to show that the relaxation volume is equal to the trace of the elastic dipole divided by three times the bulk modulus. More generally, as it will become clear with the comparison to Eshelby’s inclusion, this elastic dipole is the source term defining the relaxation volume of the point-defect. Its trace gives rise to the size interaction, whereas its deviator, [*i.e.*]{} the presence of off-diagonal terms and differences in the diagonal components, leads to the shape interaction. Of particular importance is the interaction energy of the point-defect with an external elastic field $\vec{u}^{\rm ext}(\vec{r})$. Considering the point-force distribution representative of the point-defect, this interaction energy can be simply written as [@Bacon1980] $$E^{\rm int} = -\sum_{q=1}^N{ F_i^q \, u_i^{\rm ext}(\vec{a}^q)}.$$ If we now assume that the external field is slowly varying close to the point-defect, we can make a series expansion of the corresponding displacement $\vec{u}^{\rm ext}(\vec{r})$. The interaction energy is then, to first order, $$E^{\rm int} = - u_i^{\rm ext}(\vec{0}) \sum_{q=1}^N{ F_i^q } - u_{i,j}^{\rm ext}(\vec{0}) \sum_{q=1}^N{ F_i^q \, a_j^q}.$$ Finally, using the equilibrium properties of the point-force distribution, one obtains $$E^{\rm int} = - P_{ij} \, \varepsilon^{\rm ext}_{ij}(\vec{0}), \label{eq:dipole_Einter}$$ thus showing that the interaction energy is simply the product of the elastic dipole with the value at the point-defect location of the external strain field.
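The dipole approximation can also be tested numerically against the exact superposition of point-force fields: for an equilibrated set of six forces (three orthogonal force couples), the far-field displacement of Eq. (\[eq:dipole\_displacement\]), evaluated with a finite-difference derivative of the isotropic Green's function, reproduces the exact sum up to the expected higher-order corrections. All numerical values below are illustrative:

```python
import math

lam, mu = 60.0, 27.0            # illustrative Lame coefficients (arbitrary units)

def green(r):
    """Isotropic elastic Green's function G_kn(r)."""
    rn = math.sqrt(sum(x * x for x in r))
    eta = [x / rn for x in r]
    a1 = (lam + 3 * mu) / (lam + 2 * mu)
    b1 = (lam + mu) / (lam + 2 * mu)
    return [[(a1 * (1.0 if k == n else 0.0) + b1 * eta[k] * eta[n])
             / (8 * math.pi * mu * rn) for n in range(3)] for k in range(3)]

# equilibrated point-force distribution: opposite forces +/-F along each axis,
# applied at +/-a on that axis (a simple isotropic model defect)
F, a = 1.0, 0.1
forces, pos = [], []
for ax in range(3):
    for s in (1, -1):
        f, p = [0.0] * 3, [0.0] * 3
        f[ax], p[ax] = s * F, s * a
        forces.append(f)
        pos.append(p)

# elastic dipole P_jk = sum_q F_j^q a_k^q ; here P = 2*F*a * identity
P = [[sum(forces[q][j] * pos[q][k] for q in range(6))
      for k in range(3)] for j in range(3)]

# exact displacement at a distant point: superposition of the six point forces
r = [7.0, -4.0, 5.0]
u_exact = [sum(green([r[i] - pos[q][i] for i in range(3)])[k][j] * forces[q][j]
               for q in range(6) for j in range(3)) for k in range(3)]

# dipole approximation u_i = -G_ij,k P_jk, with a central-difference derivative
h = 1e-6
u_dip = [0.0] * 3
for j in range(3):
    for k in range(3):
        rp, rm = list(r), list(r)
        rp[k] += h
        rm[k] -= h
        for i in range(3):
            u_dip[i] -= (green(rp)[i][j] - green(rm)[i][j]) / (2 * h) * P[j][k]

scale = max(abs(x) for x in u_exact)
assert all(abs(u_exact[i] - u_dip[i]) < 0.01 * scale for i in range(3))
```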
Higher order contributions to the interaction energy involve successive gradients of the external strain field coupled with higher moments of the multipole expansion of the force distribution, and can generally be safely ignored. This simple expression of the interaction energy is the workhorse of the modeling of point-defects within linear elasticity in a multiscale approach. Instead of working with the elastic dipole tensor, one sometimes rather uses the so-called $\lambda$-tensor [@Nowick1972] which expresses the strain variation of a matrix volume with the point-defect volume concentration $c$, $$\lambda_{ij} = \frac{1}{\Omega_{\rm at}} \, \frac{\partial \bar{\varepsilon}_{ij}}{\partial c}, \label{eq:lambda_PD}$$ where $\bar{\varepsilon}$ is the homogeneous strain induced by the point-defects in a stress-free state and $\Omega_{\rm at}$ is the atomic volume of the reference solid. As it will become clear when discussing parameterization of the elastic dipole from experiments (§\[sec:para\_exp\]), these two quantities are simply linked by the relation $$P_{ij} = \Omega_{\rm at} \, C_{ijkl} \, \lambda_{kl}. \label{eq:dipole_lamba}$$ Using this $\lambda$-tensor to characterize the point-defect, Eq. (\[eq:dipole\_Einter\]) describing its elastic interaction with an external elastic field becomes $$E^{\rm int} = - \Omega_{\rm at} \, \lambda_{ij} \, \sigma_{ij}^{\rm ext}(\vec{0}),$$ where $\sigma^{\rm ext}_{ij}(\vec{0})$ is the value of the external stress field at the point-defect position. Analogy with Eshelby’s inclusion -------------------------------- Eshelby’s inclusion [@Eshelby1957; @Eshelby1959a] is another widespread model which can be used to describe a point-defect in an elastic continuum. As it will be shown below, it is equivalent to the dipole description in the limit of an infinitesimal inclusion. In this model, the point-defect is described as an inclusion of volume $\Omega_{\rm I}$ and of surface $S_{\rm I}$, having the same elastic constants as the matrix.
This inclusion undergoes a change of shape described by the eigenstrain $\varepsilon_{ij}^*(\vec{r})$, corresponding to the strain the inclusion would adopt if it were free to relax, unconstrained by the surrounding matrix. Eshelby proposed a general approach [@Eshelby1957] to solve the corresponding equilibrium problem and determine the elastic fields in the inclusion and the surrounding matrix. This solution is obtained by considering the following three steps: 1. Take the inclusion out of the matrix and let it adopt its eigenstrain $\varepsilon_{ij}^*(\vec{r})$. At this stage, the stress is null everywhere. 2. Strain back the inclusion so that it fits the hole in the matrix. The elastic strain exactly compensates for the eigenstrain, so the stress in the inclusion is $-C_{ijkl}\varepsilon^*_{kl}(\vec{r})$. This operation is performed by applying to the external surface of the inclusion the traction forces corresponding to this stress $${\mathrm{d}}{T}_i(\vec{r}) = -C_{ijkl} \, \varepsilon^*_{kl}(\vec{r}) \, {\mathrm{d}}{S}_j,$$ where $\vv{{\mathrm{d}}{S}}$ is an element of the inclusion external surface at the point $\vec{r}$. 3. After the inclusion has been welded back into its hole, the traction forces are relaxed. Using Green’s function, the corresponding displacement in the matrix is then $$\begin{split} u_n(\vec{r}) &= \oint_{S_{\rm I}} { G_{ni}(\vec{r}-\vec{r}^{\,\prime}) \, {\mathrm{d}}{T}_i(\vec{r}^{\,\prime})}, \\ &= -\oint_{S_{\rm I}} { G_{ni}(\vec{r}-\vec{r}^{\,\prime}) \, C_{ijkl} \, \varepsilon^*_{kl}(\vec{r}^{\,\prime}) \, {\mathrm{d}}{S}_j^{\,\prime}}.
\end{split}$$ Applying Gauss’ theorem and the equilibrium condition satisfied by the eigenstrain $\varepsilon_{ij}^*(\vec{r})$, one obtains the following expression for the elastic displacement in the matrix $$u_n(\vec{r}) = -\int_{\Omega_{\rm I}} { G_{ni,j}(\vec{r}-\vec{r}^{\,\prime}) \, C_{ijkl} \, \varepsilon^*_{kl}(\vec{r}^{\,\prime}) \, {\mathrm{d}}{V^{\,\prime}} }, \label{eq:inclusion_displacement}$$ and for the corresponding stress field $$\begin{gathered} \sigma_{pq}(\vec{r}) = -\int_{\Omega_{\rm I}}{ C_{pqmn} \, G_{ni,jm}(\vec{r}-\vec{r}^{\,\prime}) } \\ { C_{ijkl} \, \varepsilon^*_{kl}(\vec{r}^{\,\prime}) \, {\mathrm{d}}{V^{\,\prime}} }. \label{eq:inclusion_stress}\end{gathered}$$ Inside the inclusion, one needs to add the stress $-C_{ijkl}\varepsilon^*_{kl}(\vec{r})$ corresponding to the strain applied in step 2. Far from the inclusion, we have $\| \vec{r} \| \gg \| \vec{r}^{\,\prime} \|$. We can therefore neglect the variations of the Green’s function derivatives inside Eqs. \[eq:inclusion\_displacement\] and \[eq:inclusion\_stress\]. This corresponds to the infinitesimal inclusion assumption. For such an infinitesimal inclusion located at the origin, one therefore obtains the following elastic fields $$\begin{aligned} u_n(\vec{r}) &= - G_{ni,j}(\vec{r}) \, C_{ijkl} \, \Omega_{\rm I} \, \bar{\varepsilon}^*_{kl}, \label{eq:small_inclusion_displacement} \\ \sigma_{pq}(\vec{r}) &= - C_{pqmn} \, G_{ni,jm}(\vec{r}) \, C_{ijkl} \, \Omega_{\rm I} \, \bar{\varepsilon}^*_{kl}, \label{eq:small_inclusion_stress}\end{aligned}$$ where we have defined the volume average of the inclusion eigenstrain, $\bar{\varepsilon}_{ij}^* = \frac{1}{\Omega_{\rm I}}\int_{\Omega_{\rm I}}{ \varepsilon^*_{ij}(\vec{r}) \, {\mathrm{d}}{V}}$. Comparing these expressions with the ones describing the elastic field of an elastic dipole (Eqs.
\[eq:dipole\_displacement\] and \[eq:dipole\_stress\]), we see that they are the same for any $\vec{r}$ value provided the dipole tensor and the inclusion eigenstrain satisfy the relation $$P_{ij} = \Omega_{\rm I} \, C_{ijkl} \, \bar{\varepsilon}_{kl}^*. \label{eq:small_inclusion_dipole}$$ The descriptions of a point-defect as an elastic dipole, [*i.e.*]{}as a distribution of point-forces keeping only the first moment of the distribution, or as an infinitesimal Eshelby inclusion, [*i.e.*]{}in the limit of an inclusion volume $\Omega_{\rm I}\to 0$ keeping the product $\Omega_{\rm I}\,\bar{\varepsilon}_{ij}^*$ constant, are therefore equivalent. The point-defect can thus be characterized either by its elastic dipole tensor $P_{ij}$ or by its eigenstrain tensor $Q_{ij}=\Omega_{\rm I}\,\bar{\varepsilon}_{ij}^*$ [@Lazar2017]. Of course, the same equivalence is obtained when considering the interaction energy with an external stress field. For a general inclusion, Eshelby showed that this interaction energy is simply given by $$E^{\rm int} = -\int_{\Omega_{\rm I}}{\varepsilon^*_{ij}(\vec{r}) \, \sigma_{ij}^{\rm ext}(\vec{r}) \, {\mathrm{d}}{V} }, \label{eq:inclusion_Einter}$$ where the integral runs only over the inclusion volume. In the limiting case of an infinitesimal inclusion, one can neglect the variations of the external stress field inside the inclusion. One thus obtains the following interaction energy, $$E^{\rm int} = - \Omega_{\rm I} \, \bar{\varepsilon}^*_{ij} \, \sigma_{ij}^{\rm ext}(\vec{0}) , \label{eq:small_inclusion_Einter}$$ which is equivalent to the expression for an elastic dipole when the equivalence relation holds. Analogy with dislocation loops ------------------------------ A point-defect can also be considered as an infinitesimal dislocation loop. This appears natural, as dislocation loops are known to be elastically equivalent to platelet Eshelby inclusions [@Nabarro1967; @Mura1987].
The elastic displacement and stress fields of a dislocation loop of Burgers vector $\vec{b}$ are given respectively by the Burgers and Mura formulae [@Hirth1982] $$\begin{aligned} \begin{split} u_i(\vec{r}) ={ }& C_{jklm} \, b_m \\ & \quad \int_{A}{ G_{ij,k}(\vec{r} - \vec{r}^{\,\prime}) \, n_l(\vec{r}^{\,\prime}) \, {\mathrm{d}}{A^{\,\prime}} }, \label{eq:dislo_loop_displacement} \end{split} \\ \begin{split} \sigma_{ij}(\vec{r}) ={ }& C_{ijkl} \, \epsilon_{lnh} C_{pqmn} b_m \\ & \quad \oint_{L}{ G_{kp,q}(\vec{r} - \vec{r}^{\,\prime}) \, \zeta_h(\vec{r}^{\,\prime}) \, {\mathrm{d}}{l^{\,\prime}} }. \end{split} \label{eq:dislo_loop_stress}\end{aligned}$$ The displacement is defined by a surface integral over the surface $A$ enclosed by the dislocation loop, with $\vec{n}(\vec{r}^{\,\prime})$ the local normal to the surface element ${\mathrm{d}}{A^{\,\prime}}$ at $\vec{r}^{\,\prime}$, and the stress by a line integral along the loop of total line length $L$. $\vec{\zeta}$ is the unit vector along the loop, and $\epsilon_{lnh}$ is the permutation tensor. As for Eshelby’s inclusion, far from the loop ($\| \vec{r} \| \gg \| \vec{r}^{\,\prime} \|$), we can use a series expansion of the Green’s function derivatives and keep only the leading term. Considering a loop located at the origin, we thus obtain $$\begin{aligned} u_i(\vec{r}) =& C_{jklm} \, b_m \, A_l \, G_{ij,k}(\vec{r}) , \label{eq:small_dislo_loop_displacement} \\ \sigma_{pq}(\vec{r}) =& C_{pqin} \, C_{jklm} \, b_m \, A_l \, G_{ij,kn}(\vec{r}) , \label{eq:small_dislo_loop_stress}\end{aligned}$$ where $\vec{A}$ is the surface vector defining the area of the loop. These expressions are identical to the ones obtained for an elastic dipole (Eqs. \[eq:dipole\_displacement\] and \[eq:dipole\_stress\]), with the equivalent dipole tensor of the dislocation loop given by $$P_{jk} = - C_{jklm} \, b_m \, A_l.
\label{eq:small_dislo_loop_dipole}$$ Turning to the interaction with an external stress field, the interaction energy of the dislocation loop is given by $$E^{\rm int} = \int_{A}{ \sigma_{ij}^{\rm ext}(\vec{r}) \, b_i \, n_j \, {\mathrm{d}}{A} }. \label{eq:dislo_loop_Einter}$$ For an infinitesimal loop, it simply becomes $$E^{\rm int} = \sigma_{ij}^{\rm ext}(\vec{0}) \, b_i \, A_j, \label{eq:small_dislo_loop_Einter}$$ which is equivalent to the expression obtained for an elastic dipole when the equivalent dipole tensor of the dislocation loop is given by Eq. . Polarizability -------------- The equivalent point-forces distribution of a point-defect can be altered by an applied elastic field [@Kroner1964]. This applied elastic field thus leads to an induced elastic dipole, and the total elastic dipole of the point-defect now depends on the applied strain $\varepsilon^{\rm ext}$: $$P_{ij}(\varepsilon^{\rm ext}) = P_{ij}^{0} + \alpha_{ijkl} \varepsilon^{\rm ext}_{kl}, \label{eq:polarizability}$$ where $P_{ij}^{0}$ is the permanent elastic dipole in the absence of applied strain and $\alpha_{ijkl}$ is the point-defect diaelastic polarizability [@Schober1984; @Puls1986; @Granato1994]. Considering the analogy with Eshelby’s inclusion, this polarizability corresponds to an infinitesimal inhomogeneous inclusion, [*i.e.*]{}an inclusion with elastic constants different from those of the surrounding matrix. It describes the fact that the matrix close to the point-defect has a different elastic response to an applied strain because of the perturbations of the atomic bonding caused by the point-defect. For the analogy with an infinitesimal dislocation loop, the polarizability corresponds to the fact that the loop can change its shape by glide on its prismatic cylinder (or in its habit plane for a pure glide loop) under the action of the applied elastic field.
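Returning to the infinitesimal-loop equivalence, Eq. \[eq:small\_dislo\_loop\_dipole\] can be evaluated numerically. The sketch below assumes isotropic elastic constants built from illustrative Lamé parameters (not material-specific values) and a prismatic loop with $\vec{b}$ parallel to $\vec{A}$:

```python
import numpy as np

def isotropic_stiffness(lam, mu):
    # C_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk)
    d = np.eye(3)
    return (lam * np.einsum('ij,kl->ijkl', d, d)
            + mu * (np.einsum('ik,jl->ijkl', d, d)
                    + np.einsum('il,jk->ijkl', d, d)))

def loop_dipole(C, b, A):
    # Equivalent dipole of an infinitesimal dislocation loop:
    # P_jk = -C_jklm b_m A_l  (sign convention of the text).
    return -np.einsum('jklm,m,l->jk', C, b, A)

lam, mu = 1.0, 0.5                  # illustrative Lame parameters
C = isotropic_stiffness(lam, mu)
b = np.array([0.0, 0.0, 1.0])       # Burgers vector
A = np.array([0.0, 0.0, 2.0])       # surface vector: prismatic loop, b || A
P = loop_dipole(C, b, A)
# For isotropic elasticity, P = -(lam (b.A) I + mu (A⊗b + b⊗A)),
# here a diagonal tensor with P_11 = P_22 = -2 lam and P_33 = -(2 lam + 4 mu).
```

The resulting dipole is symmetric, as required, and its anisotropy (difference between the component along $\vec{b}$ and the in-plane components) is what encodes the shape interaction of the loop.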
Following Schober [@Schober1984], the interaction of a point-defect located at the origin with an applied strain is now given by $$E^{\rm int} = -P^0_{ij} \, \varepsilon^{\rm ext}_{ij}(\vec{0}) - \frac{1}{2} \, \alpha_{ijkl} \, \varepsilon^{\rm ext}_{ij}(\vec{0}) \, \varepsilon^{\rm ext}_{kl}(\vec{0}). \label{eq:dipole_polar_Einter}$$ This expression of the interaction energy, which includes the defect polarizability, has important consequences for the modeling of point-defects, as it shows that some coupling is possible between two different applied elastic fields. Considering the point-defect interaction with the two strain fields $\varepsilon^{(1)}$ and $\varepsilon^{(2)}$ originating from two different sources, the interaction energy is now given by $$\begin{aligned} \begin{split} E^{\rm int} ={ }& -P^0_{ij} \left( \varepsilon^{(1)}_{ij} + \varepsilon^{(2)}_{ij} \right) \\ &\qquad - \frac{1}{2} \, \alpha_{ijkl} \left( \varepsilon^{(1)}_{ij} + \varepsilon^{(2)}_{ij} \right) \left( \varepsilon^{(1)}_{kl} + \varepsilon^{(2)}_{kl} \right), \end{split} \\ \begin{split} ={ }& -P^0_{ij} \varepsilon^{(1)}_{ij} - \frac{1}{2} \, \alpha_{ijkl} \, \varepsilon^{(1)}_{ij} \, \varepsilon^{(1)}_{kl} \\ &\qquad -P^0_{ij} \varepsilon^{(2)}_{ij} - \frac{1}{2} \, \alpha_{ijkl} \, \varepsilon^{(2)}_{ij} \, \varepsilon^{(2)}_{kl} \\ &\qquad \qquad - \alpha_{ijkl} \, \varepsilon^{(1)}_{ij} \, \varepsilon^{(2)}_{kl}. \end{split}\end{aligned}$$ The last line shows that, without the polarizability, the interaction energy of the point-defect with the two strain fields would simply be the superposition of the two interaction energies with each strain field considered separately. A coupling is introduced only through the polarizability. Such a coupling is for instance at the origin of one of the mechanisms proposed to explain creep under irradiation.
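This decomposition can be checked numerically. The sketch below uses random illustrative tensors and assumes, as the derivation implicitly does, a polarizability with major symmetry ($\alpha_{ijkl}=\alpha_{klij}$):

```python
import numpy as np

rng = np.random.default_rng(0)

def sym(e):
    # Symmetric part of a 3x3 tensor.
    return 0.5 * (e + e.T)

# Illustrative permanent dipole and polarizability (major symmetry enforced).
P0 = sym(rng.normal(size=(3, 3)))
a = rng.normal(size=(3, 3, 3, 3))
alpha = 0.5 * (a + a.transpose(2, 3, 0, 1))   # alpha_ijkl = alpha_klij

def E_int(eps):
    # E = -P0_ij eps_ij - 1/2 alpha_ijkl eps_ij eps_kl
    return (-np.einsum('ij,ij->', P0, eps)
            - 0.5 * np.einsum('ijkl,ij,kl->', alpha, eps, eps))

e1 = 1e-3 * sym(rng.normal(size=(3, 3)))
e2 = 1e-3 * sym(rng.normal(size=(3, 3)))

# Deviation from superposition of the two separate interaction energies:
cross = E_int(e1 + e2) - E_int(e1) - E_int(e2)
# It equals the polarizability coupling term -alpha_ijkl eps1_ij eps2_kl.
coupling = -np.einsum('ijkl,ij,kl->', alpha, e1, e2)
```

Setting `alpha` to zero makes `cross` vanish, recovering the strict superposition of the two interaction energies.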
Indeed, because of the polarizability, the interaction of point-defects, either vacancies or self-interstitial atoms, with dislocations under an applied stress depends on the dislocation orientation with respect to the applied stress. This stronger interaction with some dislocation families leads to a larger drift term in the diffusion equation of the point-defect and thus to a greater absorption of the point-defect by these dislocations, a mechanism known as Stress Induced Preferential Absorption (or SIPA) [@Heald1974; @Heald1975b; @Bullough1975a; @Bullough1975b]. This polarizability is also the cause, in alloy solid solutions, of the variation of the matrix elastic constants with their solute content. This diaelastic polarizability caused by the perturbation of the elastic response of the surrounding matrix manifests itself even at the lowest temperatures, down to 0 K, whatever the characteristic time of the applied strain. At finite temperature there may be another source of polarizability. If the point-defect can adopt different configurations, for instance different variants corresponding to different orientations of the point-defect, as for a carbon interstitial atom in a body-centered cubic Fe matrix, then the occupancy distribution of these configurations will be modified under an applied stress or strain. This possible redistribution of the point-defect gives rise to anelasticity [@Nowick1972], the most famous case being the Snoek relaxation in iron alloys containing interstitial solute atoms like C and N [@Snoek1941]. When thermally activated transitions between the different configurations of the point-defect are fast enough compared to the characteristic time of the applied stress, the distribution of the different configurations corresponds to thermal equilibrium.
Assuming that all configurations have the same energy in a stress-free state and denoting by $P^{\mu}_{ij}$ the elastic dipole of the configuration $\mu$, the average dipole of the point-defect is then given by $$\langle P_{ij} \rangle = \frac{ \sum_{\mu}{ \exp{\left( P_{kl}^{\mu}\varepsilon_{kl}^{\rm ext}\,/\,kT \right)} P_{ij}^{\mu} } } { \sum_{\mu}{ \exp{\left( P_{kl}^{\mu}\varepsilon_{kl}^{\rm ext}\,/\,kT \right)} } }.$$ As a consequence, the average elastic dipole of the point-defect distribution now depends on the applied stress and on the temperature, an effect known as paraelasticity [@Kroner1964]. At temperatures high enough to allow for transitions between the different configurations, the interaction energy of the configurations with the applied strain is usually small compared to $kT$. One can make a series expansion of the exponentials to obtain $$\begin{split} \langle P_{ij} \rangle &= \frac{1}{n_{\mathrm{v}}}\sum_{\mu=1}^{n_{\mathrm{v}}}{ P_{ij}^{\mu} } \\ &- \left( \frac{1}{ {n_{\rm v}}^2}\sum_{\mu, \nu=1}^{n_{\mathrm{v}}}{ P_{ij}^{\mu} P_{kl}^{\nu} } - \frac{1}{n_{\mathrm{v}}}\sum_{\mu=1}^{n_{\mathrm{v}}}{ P_{ij}^{\mu} P_{kl}^{\mu} } \right) \frac{ \varepsilon_{kl}^{\rm ext} }{kT}, \end{split}$$ where $n_{\mathrm{v}}$ is the number of configurations. This leads to the same linear variation of the elastic dipole with the applied strain as for the diaelastic polarizability (Eq. \[eq:polarizability\]), except that the paraelastic polarizability depends on the temperature. Parameterization of elastic dipoles {#sec:para} =================================== To properly model a point-defect with continuum elasticity theory, one only needs to know its elastic dipole. It is then possible to describe the elastic displacement (Eq. \[eq:dipole\_displacement\]) or the stress field (Eq. \[eq:dipole\_stress\]) induced by the point-defect, and also to calculate its interaction with an external elastic field (Eq. \[eq:dipole\_Einter\]).
This elastic dipole can be determined either using atomistic simulations or from experiments. From atomistic simulations {#sec:para_atom} -------------------------- Different strategies can be considered for the identification of elastic dipoles in atomistic simulations. This elastic dipole can be directly deduced from the stress existing in the simulation box, or from a fit of the atomic displacements, or finally from a summation of the Kanzaki forces. We examine here these three techniques and discuss their merits and drawbacks. ### Definition from the stress {#definition-from-the-stress .unnumbered} Let us consider a simulation box of volume $V$, the equilibrium volume of the pristine bulk material. We introduce one point-defect in the simulation box and assume periodic boundary conditions to preclude any difficulty associated with surfaces. Elasticity theory can be used to predict the variation of the energy of the simulation box subjected to a homogeneous strain $\varepsilon$. Using the interaction energy of a point-defect with an external strain given in Eq. , one obtains $$E(\varepsilon) = E_0 + E^{\rm PD} + \frac{V}{2}C_{ijkl}\varepsilon_{ij}\varepsilon_{kl} - P_{ij}\varepsilon_{ij}, \label{eq:energy_box_PD}$$ with $E_0$ the bulk reference energy and $E^{\rm PD}$ the point-defect energy, which can contain a contribution from the interactions of the point-defect with its periodic images (see section \[sec:elast\_corr\]). The average residual stress on the simulation box is obtained by differentiation as[^3] $$\begin{split} \langle \sigma_{ij}(\varepsilon) \rangle &= \frac{1}{V}\frac{\partial E}{\partial \varepsilon_{ij}}, \\ &= C_{ijkl} \varepsilon_{kl} - \frac{1}{V} P_{ij}.
\end{split} \label{eq:sigma_Pij}$$ In the particular case where the periodicity vectors are kept fixed between the defective and pristine supercells ($\varepsilon=0$), the elastic dipole is directly obtained from the residual stress and the supercell volume: $$P_{ij} = -V \langle \sigma_{ij} \rangle. \label{eq:Pij_from_sigma}$$ This residual stress corresponds to the stress increase, after atomic relaxation, due to the introduction of the point-defect into the simulation box. When this equation is used to determine the elastic dipole in [*ab initio*]{}calculations, one should pay attention to the spurious stress which may exist in the equilibrium perfect supercell because of the finite convergence criteria of such calculations. This spurious stress has to be subtracted from the stress of the defective supercell, so that the residual stress entering Eq. \[eq:Pij\_from\_sigma\] is only the stress increment associated with the introduction of the point-defect. One can also consider the opposite situation where a homogeneous strain $\bar{\varepsilon}$ has been applied to cancel the residual stress. The elastic dipole is then proportional to this homogeneous strain: $$P_{ij} = V C_{ijkl} \bar{\varepsilon}_{kl}. \label{eq:Pij_from_strain}$$ One would nevertheless generally prefer working with fixed periodicity vectors ($\varepsilon=0$), as $\sigma=0$ calculations require a larger number of force calculations, as well as an increased precision for [*ab initio*]{}calculations. In the more general case where a homogeneous strain is applied and a residual stress is observed, the elastic dipole can still be derived from these two quantities using Eq. . This definition of the elastic dipole from the residual stress (Eq. \[eq:Pij\_from\_sigma\]), or more generally from both the applied strain and the residual stress (Eq.
\[eq:sigma\_Pij\]), is to be related to the dipole tensor measurement first proposed by Gillan [@Gillan1981; @Gillan1983], where the elastic dipole is equal to the strain derivative of the formation energy, evaluated at zero strain. Instead of performing this derivative numerically, one can simply use the analytical derivative, [*i.e.*]{}the stress on the simulation box, which is a standard output of any atomistic simulation code, including [*ab initio*]{}calculations. This technique to extract elastic dipoles from atomistic simulations has been validated [@Subramanian2013; @Garnier2014; @Varvenne2017] through successful comparisons of the interaction energies of point-defects with external strain fields, as given by direct atomistic simulations and by the elasticity theory predictions using the elastic dipole identified through Eq. . The residual stress therefore leads to quantitative estimates of the elastic dipoles. ### Definition from the displacement field {#definition-from-the-displacement-field .unnumbered} The elastic dipole can also be obtained from the displacement field, as proposed by Chen [*et al*.]{} [@Chen2010a]. Using the displacement field $\vec{u}^{\rm at}(\vec{R})$ obtained after relaxation in atomistic simulations, a least-squares fit of the displacement field $\vec{u}^{\rm el}(\vec{R})$ predicted by elasticity theory can be performed, using the components of the dipole as fit variables. A reasonable cost function for the least-squares fit is $$f(P_{ij}) = \sum_{\substack{\vec{R} \\ \|\vec{R}\|>r_{\rm excl}}} \left\|R^2\left[\vec{u}^{\rm el}(\vec{R})-\vec{u}^{\rm at}(\vec{R})\right] \right\|^2 , \label{eq:cost_F}$$ with $r_{\rm excl}$ the radius of a small zone around the point-defect, chosen so as to exclude from the fit the atomic positions where elasticity does not hold.
The $R^2$ factor accounts for the scaling of the displacement field with the distance to the point-defect, thus giving a similar weight to all atomic positions included in the fit. For atomistic simulations with periodic boundary conditions, one needs to superimpose the elastic displacements of the point-defect and of its periodic images, which can be done by simple summation, taking care of the conditional convergence of the corresponding sum [@Varvenne2017]. With large simulation boxes ($\ge 1500$ atoms), the obtained elastic dipole components agree with the values deduced from the residual stress, and the choice of $r_{\rm excl}$ is not critical. The number of atomic positions included in the fit, and for which elasticity is valid, is sufficiently high to avoid issues arising from the defect core zone [@Varvenne2017]. In contrast, for small simulation boxes of a few hundred atoms, [*i.e.*]{}typical of [*ab initio*]{}simulations, the obtained $P_{ij}$ values are highly sensitive to $r_{\rm excl}$, and their convergence with $r_{\rm excl}$ cannot be guaranteed. This fit of the displacement field therefore appears impractical for obtaining precise values of the elastic dipole in [*ab initio*]{}calculations. ### Definition from the Kanzaki forces {#definition-from-the-kanzaki-forces .unnumbered} \ The definition given in Eq. of the elastic dipole as the first moment of the point-force distribution offers a third way to extract this elastic dipole from atomistic simulations. This corresponds to the Kanzaki force method [@Kanzaki1957; @Faux1971; @Tewary1973; @Leibfried1978; @Schober1980; @Lidiard1981; @Simonelli1994; @Domain2004; @Hayward2012]. Kanzaki forces are defined as the forces which have to be applied to the atoms in the neighborhood of the point-defect to produce the same displacement field in the pristine crystal as in the defective supercell. Computation of these Kanzaki forces can be performed following the procedure given in Ref.
[@Simonelli1994], which is illustrated for a vacancy in Fig. \[fig:scheme\_kanzaki\]. Starting from the relaxed structure of the point-defect (Fig. \[fig:scheme\_kanzaki\]b), the defect is restored in the simulation cell, [*e.g.*]{}the suppressed atom is added back for the vacancy case (Fig. \[fig:scheme\_kanzaki\]c). A static force calculation is then performed, providing the opposite of the sought forces on all atoms in the obtained simulation cell. These atomic forces are used to compute the elastic dipole $P_{ij}=\sum_{q} F_j^q a_i^q$, with $\vec{F}^q$ the opposite of the force acting on the atom at $\vec{a}^q$, assuming the point-defect is located at the origin. The summation is usually restricted to atoms located inside a sphere of radius $r_{\rm \infty}$. As the Kanzaki technique is valid only in the harmonic approximation, one checks that the atomic forces entering the elastic dipole definition are in the harmonic regime by restoring larger and larger defect neighboring shells to their perfect bulk positions [@Simonelli1994] (Fig. \[fig:scheme\_kanzaki\]c-d), computing the forces on the restored structures, and then the elastic dipole. The case where $n$ defect neighbor shells are restored is referred to as the $n^{\rm th}$ order approximation. As the restored zone becomes larger, the atoms remaining at their relaxed positions are more likely to sit in a harmonic region. The convergence of the resulting elastic dipole components with respect to $n$ thus allows one to check harmonicity. ![Elastic dipole components of the SIA octahedral configuration in hcp Zr, as a function of the cutoff radius $r_{\infty}$ of the force summation normalized by the lattice parameter $a$. Values are obtained by the Kanzaki force approach on a simulation box containing 12800 atoms, restoring (a) only the point-defect, and (b) up to $16$ defect neighbor shells. The horizontal lines are the values deduced from the residual stress.
Calculations have been performed with the EAM $\#3$ potential of Ref. [@Mendelev2007] (see Ref. [@Varvenne2017] for more details). []{data-label="fig:Pij_measurement_EAM"}](fig2.pdf) Fig. \[fig:Pij\_measurement\_EAM\] provides the elastic dipole values as a function of the cutoff radius $r_{\infty}$, for the octahedral configuration of the self-interstitial atom (SIA) in hcp Zr. Only the point-defect has been restored in Fig. \[fig:Pij\_measurement\_EAM\]a (approximation 0), whereas the restoration zone extends to the 16$^{\rm th}$ nearest neighbors in Fig. \[fig:Pij\_measurement\_EAM\]b. Constant $P_{ij}$ values are reached for a cutoff radius $r_{\infty}\sim 2.5\,a$ and $\sim 4\,a$, respectively, showing that the defect-induced forces are long-ranged [@Hayward2012; @Varvenne2017]. As a result, the supercell needs to be large enough to avoid convolution of the force field by the periodic boundary conditions, and a high precision on the atomic forces is required. Comparison with the elastic dipole deduced from the residual stress shows that restoring only the point-defect (approximation 0 in Fig. \[fig:Pij\_measurement\_EAM\]a) is not sufficient to obtain a quantitative estimate with the Kanzaki method. A restoration zone extending at least to the 16$^{\rm th}$ nearest neighbors is necessary for this point-defect to obtain the correct elastic dipole. As the anharmonic region depends on the defect and on the material, one cannot choose *a priori* a radius for the restoration zone, but one needs to check the convergence of the elastic dipole with the size of this restoration zone. ### Discussion {#discussion .unnumbered} These three approaches lead to the same values of the elastic dipole when large enough supercells are used, thus confirming the consistency of this elastic description of the point-defect. This has been checked in Ref. [@Varvenne2017] for the vacancy and various configurations of the SIA in hcp Zr.
But for small simulation cells typical of [*ab initio*]{}calculations, both the fit of the displacement field and the calculation from the Kanzaki forces are usually not precise enough because of the too large defect core region, [*i.e.*]{}the region which has to be excluded from the displacement fit or included in the restoration zone for the Kanzaki forces. This is penalizing for [*ab initio*]{}calculations, even for point-defects as simple as the H solute or the vacancy in hcp Zr [@Varvenne2017; @Nazarov2016]. Besides, the Kanzaki technique requires additional calculations to obtain the defect-induced forces and to check that the forces entering the dipole definition are in the harmonic regime. As this restoration zone is extended, the defect-induced forces become smaller and the precision has to be increased. The definition from the residual stress thus appears as the only method leading to reliable $P_{ij}$ values within [*ab initio*]{}simulations. It is also easy to apply, as it does not require any post-treatment or additional calculations: it only uses the homogeneous stress on the simulation box, and the knowledge of the defect position is not needed. All these methods can of course also be used to determine the diaelastic polarizability. One only needs to determine the elastic dipole for various applied strains. The linear relation then leads to the stress-free elastic dipole $P^0_{ij}$ and the polarizability $\alpha_{ijkl}$. The most convenient method remains the definition from the residual stress. Considering the polarizability, Eq. now reads $$\langle \sigma_{ij}(\varepsilon) \rangle = \left( C_{ijkl} - \frac{1}{V} \alpha_{ijkl} \right) \varepsilon_{kl} - \frac{1}{V} P_{ij}, \label{eq:sigma_polarizability}$$ thus showing that the polarizability is associated with a variation of the elastic constants proportional to the point-defect volume fraction.
This linear variation of the elastic constants arising from the point-defect polarizability has been characterized for vacancies and SIAs in face-centered cubic (fcc) copper [@Ackland1988], and for various solute atoms in body-centered cubic (bcc) iron [@Bialon2013; @Fellinger2017]. ![Elastic dipole of a C atom lying in a \[001\] octahedral interstitial site in bcc Fe as a function of the inverse of the supercell volume $V$. The elastic dipole has been deduced from the residual stress in [*ab initio*]{}calculations (see Ref. [@Clouet2011b] for more details).[]{data-label="fig:C_dipole"}](fig3.pdf){width="0.7\linewidth"} One consequence of the diaelastic polarizability is that the elastic dipole may depend on the size of the supercell with periodic boundary conditions. The strain at the point-defect position is indeed the superposition of the homogeneous strain $\varepsilon_{ij}$ and of the strains created by the periodic images of the point-defect $\varepsilon_{ij}^{\rm p}$. In the $\varepsilon=0$ case for instance, the obtained elastic dipole is then $$P_{ij} = P^0_{ij} + \alpha_{ijkl} \varepsilon_{kl}^{\rm p}. \label{eq:dipole_PBC}$$ As the strain created by a point-defect varies as the inverse of the cube of the separation distance (Eq. \[eq:dipole\_stress\]), the last term in Eq. scales with the inverse of the supercell volume. Therefore, when homothetic supercells are used, one generally observes the following volume variation $$P_{ij} = P^{0}_{ij} + \frac{\delta P_{ij}}{V},$$ which can be used to extrapolate the elastic dipole to an infinite volume, [*i.e.*]{}to the dilute limit [@Puchala2008; @Clouet2011b; @Varvenne2017]. An example of this linear variation with the inverse volume is shown in Fig. \[fig:C\_dipole\] for an interstitial C atom in a bcc Fe matrix.
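The resulting workflow — dipole from the residual stress (Eq. \[eq:Pij\_from\_sigma\]), then a linear fit in $1/V$ to reach the dilute limit — can be sketched as follows, with made-up residual stresses standing in for actual supercell calculations:

```python
import numpy as np

def dipole_from_stress(V, sigma):
    # P_ij = -V <sigma_ij> for fixed periodicity vectors (eps = 0).
    return -V * sigma

# Made-up data: residual stress of one dipole component in homothetic
# supercells, constructed so that P(V) = P0 + dP/V with P0 = 8.0, dP = 50.0.
volumes = np.array([500.0, 1000.0, 2000.0, 4000.0])
P0_true, dP_true = 8.0, 50.0
sigma = -(P0_true + dP_true / volumes) / volumes

P = np.array([dipole_from_stress(V, s) for V, s in zip(volumes, sigma)])

# Linear fit P(1/V) = P0 + dP/V extrapolates to the dilute limit (1/V -> 0).
dP_fit, P0_fit = np.polyfit(1.0 / volumes, P, 1)
```

The intercept of the fit is the dilute-limit dipole, free of the image-interaction bias carried by any single finite supercell.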
From experiments {#sec:para_exp} ---------------- From an experimental perspective, when trying to extract elastic dipoles of point-defects, both the symmetry and the magnitude of the components of the elastic dipole tensor are *a priori* unknown, and possibly also the number of defect types in the material. We first restrict ourselves to the case where a single type of point-defect with a known symmetry is present. If the point-defect has a lower symmetry than the host crystal, then it can adopt several variants which are equivalent by symmetry but possess different orientations. The energy of such a volume $V$ containing different variants of the point-defect and subjected to a homogeneous strain is $$\begin{gathered} E(\varepsilon) = E_0 + E^{\rm PD} + \frac{V}{2}C_{ijkl}\varepsilon_{ij}\varepsilon_{kl} \\ - V \sum_{\mu=1}^{n_{\rm v}}{ c_{\mu} P^{\mu}_{ij} }\varepsilon_{ij},\end{gathered}$$ with $n_{\rm v}$ the total number of different variants and $c_{\mu}$ the volume concentration of variant $\mu$. This relation assumes that the different point-defects are not interacting, which is valid in the dilute limit. For zero stress conditions, as is usually the case in experiments, the average strain induced by this assembly of point-defects is $$\bar{\varepsilon}_{ij} = S_{ijkl} \sum_{\mu=1}^{n_{\rm v}}{ c_{\mu} P^{\mu}_{kl} }, \label{eq:epsilon_Vegard}$$ with $S_{ijkl}$ the inverse of the elastic constants $C_{ijkl}$. This linear relation between the strain and the point-defect concentrations corresponds to a Vegard’s law and allows for many connections with experiments. It generalizes Eq. \[eq:Pij\_from\_strain\] to the case of a volume containing a population of the same point-defect with different variants. As mentioned in §\[sec:dipole\_model\], point-defects in experiments are sometimes characterized instead by their $\lambda$-tensor [@Nowick1972]. Combining the definition of this $\lambda$-tensor (Eq. \[eq:lambda\_PD\]) with Eq.
, one shows the equivalence of both definitions: $$\lambda_{ij}^{\mu} = \frac{1}{\Omega_{\rm at}} \, S_{ijkl} \, P^{\mu}_{kl},$$ or equivalently Eq. . When the point-defect has only one variant or when only one variant is selected by breaking the symmetry – through either a phase transformation ([*e.g.*]{}martensitic [@Roberts1953; @Cheng1990]) or the interaction with an applied strain field – the variations of the material lattice constants with the defect concentration follow the defect symmetry. If the point-defect concentration is known, the elastic dipole components are therefore fully accessible by measuring lattice parameter variations, [*e.g.*]{}by dilatometry or X-ray diffraction using the Bragg reflections. On the other hand, for a completely disordered solid solution of point-defects with various variants ($n_{\rm v}>1$), the average distortion induced by the point-defect population does not modify the parent crystal symmetry [@Nowick1972]. Each variant is equiprobable, [*i.e.*]{}$c_{\mu}=c_0/n_{\rm v}$ with $c_0$ the nominal point-defect concentration. The stress-free strain induced by the point-defects (Eq. \[eq:epsilon\_Vegard\]) thus becomes $$\bar{\varepsilon}_{ij} = c_0 \, S_{ijkl} \, \langle P_{kl} \rangle \ \textrm{ with }\ \langle P_{kl} \rangle = \frac{1}{n_{\rm v}} \sum_{\mu=1}^{n_{\rm v}}{ P^{\mu}_{kl} }.$$ Measurements of the lattice parameter variations with the total defect concentration thus give access only to certain combinations of the $P_{ij}$ components. For instance, if we consider a point-defect in a cubic crystal, like a C solute in an octahedral site of a bcc Fe crystal, one obtains the following variation of the lattice parameter with the solute concentration $$a(c_0) = a_0 \left( 1 + \frac{\operatorname{Tr}{(P)}}{3 \left(C_{11}+2C_{12}\right)} \, c_0 \right), \label{eq:lattice_change_cubic}$$ with $C_{11}$ and $C_{12}$ the elastic constants in Voigt notation.
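As a numerical sketch of Eq. \[eq:lattice\_change\_cubic\], with illustrative numbers that are not the actual values for C in Fe:

```python
import numpy as np

def lattice_parameter(a0, trP, C11, C12, c0):
    # a(c0) = a0 (1 + Tr(P) c0 / (3 (C11 + 2 C12)))
    # for a random (all variants equiprobable) solid solution
    # of point-defects in a cubic crystal.
    return a0 * (1.0 + trP * c0 / (3.0 * (C11 + 2.0 * C12)))

# Illustrative values only: a0 in A, Tr(P) in eV, C11 and C12 in eV/A^3.
a0, trP = 2.87, 15.0
C11, C12 = 1.5, 0.75
a = lattice_parameter(a0, trP, C11, C12, 0.01)
# Relative lattice expansion: Tr(P) c0 / (3 (C11 + 2 C12)).
```

The slope of $a(c_0)$ gives access to $\operatorname{Tr}(P)$ only; the individual dipole components remain undetermined for a randomly oriented variant population.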
This variation can again be characterized using dilatometry or X-ray diffraction. But knowing $\operatorname{Tr}{(P)}$ is not sufficient for a point-defect with a lower symmetry than the cubic symmetry of the crystal, as the elastic dipole has several independent components (two for the C solute atom in bcc Fe). Additional information is therefore needed to fully characterize the point-defect. For those defects having a lower symmetry than their parent crystal, anelastic relaxation experiments may provide such supplementary data [@Nowick1972; @Nowick1963]. By applying an appropriate stress, a splitting of the point-defect energy levels occurs, and a redistribution of the defect populations takes place. The relaxation of the compliance moduli then gives access to other combinations of the elastic dipole components. Not all of the relaxations are allowed by symmetry, as illustrated for the C solute in bcc Fe, where only the quantity $|P_{11}-P_{33}|$ is accessible [@Swartz1968]. The number of parameters accessible from anelastic measurements is lower than the number of independent components of the defect elastic dipole. This technique must then be used in combination with other measurements, like the variations of the lattice parameter. Alternatively, a useful technique working with a random defect distribution is diffuse Huang scattering. The diffuse scattering of X-rays near Bragg reflections [@Trinkaus1972; @Bender1983; @Michelitsch1996] reflects the distortion scattering caused by the long-range part of the defect-induced displacement field. It thus provides information about the strength of the point-defect elastic dipole. The scattered intensity is proportional – in the dilute limit – to the defect concentration and to a linear combination of quadratic expressions of the elastic dipole components. The coefficients of this combination are functions of the crystal elastic constants and of the scattering vector in the vicinity of a given reciprocal lattice vector. 
Therefore, by an appropriate choice of the relative scattering direction, the quadratic expressions can be determined separately. Except for simple point-defects like a substitutional solute atom or a single vacancy, the defect symmetry may be unknown. Both anelastic relaxation and Huang scattering experiments provide important information for the determination of the defect symmetry. The presence of relaxation peaks in anelasticity is a direct consequence of the defect symmetry [@Nowick1972; @Nowick1963]. In Huang scattering experiments, information about the defect symmetry is obtained either by the analysis of the morphology of iso-intensity curves or through an appropriate choice of scattering directions to measure the Huang intensity. To conclude, when extracting elastic dipoles from experiments, one must usually rely on a combination of several experimental techniques to obtain all components. Some applications {#sec:examples} ================= Solute interaction with a dislocation ------------------------------------- This elastic modeling can be used for instance to describe the interaction of a point-defect with other structural defects. To illustrate, and also validate, this approach, we consider a C interstitial atom interacting with a dislocation in a bcc iron matrix. This interstitial atom occupies the octahedral sites of the bcc lattice. As these sites have a tetragonal symmetry, the elastic dipole $P_{ij}$ of the C atom has two independent components and thus gives rise to both a size and a shape interaction. The interaction energy of the C atom with a dislocation is given by Eq. where the external strain $\varepsilon^{\rm ext}_{ij}$ is the strain created by the dislocation at the position of the C atom. This interaction energy has been compared in Ref. [@Clouet2008] to direct results of atomistic simulations, using, for the C elastic dipole and the elastic constants, the values given by the empirical potential employed in the atomistic simulations. 
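The interaction energy referred to above, $E^{\rm int} = -P_{ij}\varepsilon^{\rm ext}_{ij}$, splits exactly into a "size" (hydrostatic) and a "shape" (deviatoric) part. A minimal sketch with placeholder numbers (the dipole is loosely C-in-Fe-like; the external strain tensor is arbitrary, standing in for a value sampled from a dislocation field):

```python
import numpy as np

def interaction_energy(P, eps):
    """E_int = -P_ij * eps_ij (full size + shape interaction)."""
    return -np.einsum('ij,ij->', P, eps)

def size_shape_split(P, eps):
    """Decompose E_int into hydrostatic ('size') and deviatoric ('shape')
    contributions; the hydrostatic/deviatoric cross terms vanish identically."""
    I = np.eye(3)
    E_size = -np.trace(P) * np.trace(eps) / 3.0
    E_shape = interaction_energy(P - np.trace(P) / 3.0 * I,
                                 eps - np.trace(eps) / 3.0 * I)
    return E_size, E_shape

# Tetragonal dipole of an octahedral interstitial (eV) and an arbitrary
# external strain:
P = np.diag([3.4, 3.4, 8.0])
eps = np.array([[1.0e-3, 2.0e-4, 0.0],
                [2.0e-4, -5.0e-4, 1.0e-4],
                [0.0, 1.0e-4, 3.0e-4]])
E_size, E_shape = size_shape_split(P, eps)
```

Modeling the defect as a pure dilatation center ($P_{ij}=P\,\delta_{ij}$) discards `E_shape`, which is why that approximation fails for the C atom.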
Results show that elastic theory leads to a quantitative prediction when all ingredients are included in the elastic model, [*i.e.*]{}when elastic anisotropy is taken into account to calculate the strain field created by the dislocation and when both the dilatation and the tetragonal distortion induced by the C atom are considered (Fig. \[fig:dislo\_Fe\_C\]). The agreement between both techniques is perfect except when the C atom is in the dislocation core. With isotropic elasticity, the agreement with atomistic simulations is only qualitative, and when the shape interaction is not considered, [*i.e.*]{}when the C atom is modeled as a simple dilatation center ($P_{ij}=P\,\delta_{ij}$), elastic theory fails to predict this interaction (Fig. \[fig:dislo\_Fe\_C\]). The same comparison between atomistic simulations and elasticity theory has been performed for a vacancy and a SIA interacting with a screw dislocation still in bcc iron [@Hayward2012]. The agreement was not as good as for the C atom. But in this work, the elastic dipoles of the point-defects were obtained from the Kanzaki forces, using the $0^{\rm th}$ order approximation, which is usually not as precise as the definition from the stress ([*cf*.]{}§\[sec:para\_atom\]) and may explain some of the discrepancies. One can also use elasticity theory to predict how the migration barriers of the point-defect are modified by a strain field. The migration energy is the energy difference between the saddle point and the stable position. 
Its dependence on an applied strain field $\varepsilon(\vec{r})$ is thus described by $$E^{\rm m}[\varepsilon] = E^{\rm m}_0 + P_{ij}^{\rm ini} \varepsilon_{ij}(\vec{r}_{\rm ini}) - P_{ij}^{\rm sad} \varepsilon_{ij}(\vec{r}_{\rm sad}) , \label{eq:Emig_strain}$$ where $P_{ij}^{\rm ini}$ and $P_{ij}^{\rm sad}$ are the elastic dipoles of the point-defect respectively at its initial stable position $\vec{r}_{\rm ini}$ and at the saddle point $\vec{r}_{\rm sad}$, and $E^{\rm m}_0$ is the migration energy without elastic interaction. Still for a C atom interacting with a dislocation in a bcc Fe matrix, comparison of this expression with results of direct atomistic simulations shows good agreement [@Veiga2011], provided the C atom is far enough from the dislocation core. Similar conclusions, on the validity of equation to describe the variation of the solute migration energy with an applied strain, have been reached for a SIA diffusing in bcc Fe [@Chen2010a], a vacancy in hcp zirconium [@Subramanian2013] or a Si impurity in fcc nickel [@Garnier2014]. Elastodiffusion --------------- This simple model predicting the variation of the migration energy with an applied strain field (Eq. \[eq:Emig\_strain\]) can be used to study elastodiffusion. Elastodiffusion refers to the diffusion variations induced by an elastic field [@Dederichs1978], either externally applied or internal through the presence of structural defects. Important implications exist for materials, such as transport and segregation of point-defects to dislocations leading to the formation of Cottrell atmospheres [@Cottrell1949], irradiation creep [@Woo1984], or anisotropic diffusion of dopants in semiconductor thin films [@Aziz1997; @Daw2001]. At the atomic scale, solid state diffusion occurs through the succession of thermally activated atomic jumps from stable to other stable positions, with atoms jumping either onto vacancy sites or between interstitial sites of the host lattice. 
Within transition state theory [@Vineyard1957], the frequency of such a transition is given by $$\Gamma_{\alpha} = \nu^0_{\alpha} \exp{ \left( - E^{\rm m}_{\alpha}\,/\, kT \right)}, \label{eq:transition_rate}$$ where $\nu^0_{\alpha}$ is the attempt frequency for the transition $\alpha$ and $E^{\rm m}_{\alpha}$ is the migration energy. Under a small strain field, the diffusion network and the site topology of this bulk system are not modified. On the other hand, the presence of this small strain field modifies the migration energies and the attempt frequencies. As shown in the previous section, the elastic dipole description of the point-defect can predict the modification of the stable and saddle point energies, and thus of the migration energy (Eq. \[eq:Emig\_strain\]). Ignoring the strain effect on attempt frequencies, the incorporation of the modified energy barriers into stochastic simulations like atomistic or object kinetic Monte Carlo (OKMC) methods makes it possible to characterize the point-defect elastodiffusion effect. This approach has been used, for instance, to study the directional diffusion of point-defects in the heterogeneous strain field of a dislocation, corresponding to a biased random walk [@Veiga2010; @Veiga2011; @Subramanian2013]. Diffusion in a continuous solid body is characterized by the diffusion tensor $D_{ij}$ which expresses the proportionality between the diffusion flux and the concentration gradient (Fick’s law). The effect of an applied strain is then described by the elastodiffusion fourth-rank tensor $d_{ijkl}$ [@Dederichs1978], which gives the linear dependence of the diffusion tensor on the strain: $$D_{ij} = D_{ij}^0 + d_{ijkl} \, \varepsilon_{kl}. \label{eq:d_ijkl}$$ This elastodiffusion tensor obeys the minor symmetries $d_{ijkl}=d_{jikl}=d_{ijlk}$, because of the symmetry of the diffusion and deformation tensors; it also obeys the crystal symmetries. 
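The contraction of Eq. \[eq:d\_ijkl\] and its minor symmetries can be checked directly in a few lines. The fourth-rank tensor below is a random placeholder (a real $d_{ijkl}$ would in addition obey the crystal symmetries), symmetrised so that $d_{ijkl}=d_{jikl}=d_{ijlk}$:

```python
import numpy as np

def symmetrise_minor(d_raw):
    """Enforce the minor symmetries d_ijkl = d_jikl = d_ijlk."""
    return 0.25 * (d_raw
                   + d_raw.transpose(1, 0, 2, 3)
                   + d_raw.transpose(0, 1, 3, 2)
                   + d_raw.transpose(1, 0, 3, 2))

def diffusion_tensor(D0, d, eps):
    """D_ij = D0_ij + d_ijkl eps_kl (linear elastodiffusion response)."""
    return D0 + np.einsum('ijkl,kl->ij', d, eps)

rng = np.random.default_rng(0)
d = symmetrise_minor(rng.normal(size=(3, 3, 3, 3)))
D0 = 1.0e-12 * np.eye(3)                 # strain-free diffusion tensor
eps = 1.0e-3 * np.array([[1.0, 0.2, 0.0],
                         [0.2, -0.5, 0.0],
                         [0.0, 0.0, 0.3]])
D = diffusion_tensor(D0, d, eps)
```

Because `d` is symmetric in its first index pair, the strained diffusion tensor `D` remains symmetric, as it must.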
Starting from the atomistic events as defined by their transition frequencies (Eq. \[eq:transition\_rate\]), the diffusion coefficient, and its variation under an applied strain, can be evaluated from the long-time evolution of the point-defect trajectories in stochastic simulations [@Goyal2015]. Alternatively, analytical approaches can be developed to provide explicit expressions of the diffusion tensor [@Howard1964; @Allnatt1993]. The elastodiffusion can thus be computed by a perturbative approach, starting from the analytical expression of the diffusion tensor [@Dederichs1978; @Trinkle2016]. This results in two different contributions: a geometrical contribution caused by the overall change of the jump vectors and a contribution due to the change in energy barriers as described by Eq. . This last contribution is thus a function of the elastic dipoles at the saddle point and stable positions. It is found to have an important magnitude in various systems [@Dederichs1978; @Trinkle2016], being, for instance, predominant for interstitial impurities in hcp Mg [@Agarwal2016]. It is temperature-dependent, sometimes showing complex non-monotonic variations and even sign changes for some of its components [@Agarwal2016]. As noted by Dederichs and Schroeder [@Dederichs1978], the elastic dipole at the saddle point completely determines the stress-induced diffusion anisotropy in cubic crystals. Experimental measurement of the elastodiffusion tensor components can therefore provide useful information about the saddle point configurations. Both approaches, relying either on stochastic simulations or analytical models, are now usually informed with [*ab initio*]{}computed formation and migration energies, and attempt frequencies. The elastic modeling of a point-defect through its elastic dipole thus offers a convenient way to transfer the information about the effects of an applied strain, as obtained from atomistic simulations, to the diffusion framework. 
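On the stochastic side, the required ingredients are just the strain-dependent rates. A minimal sketch combining Eq. \[eq:transition\_rate\] with the strain-dependent migration energy of Eq. \[eq:Emig\_strain\], plus one step of the residence-time algorithm; $\nu^0$, $E^{\rm m}_0$, dipoles and temperature are illustrative placeholders:

```python
import math
import random
import numpy as np

KB = 8.617333262e-5  # Boltzmann constant (eV/K)

def migration_energy(Em0, P_ini, eps_ini, P_sad, eps_sad):
    """E^m = E^m_0 + P^ini : eps(r_ini) - P^sad : eps(r_sad)."""
    return (Em0 + np.einsum('ij,ij->', P_ini, eps_ini)
                - np.einsum('ij,ij->', P_sad, eps_sad))

def rate(nu0, Em, T):
    """Gamma = nu0 * exp(-E^m / kT)."""
    return nu0 * math.exp(-Em / (KB * T))

def residence_time_step(rates, rng):
    """One residence-time (BKL) step: pick event alpha with probability
    Gamma_alpha / Gamma_tot, advance time by -ln(u) / Gamma_tot."""
    total = sum(rates)
    threshold = rng.random() * total
    cumulative = 0.0
    for alpha, gamma in enumerate(rates):
        cumulative += gamma
        if threshold <= cumulative:
            break
    dt = -math.log(rng.random()) / total
    return alpha, dt

rng = random.Random(0)
alpha, dt = residence_time_step([1.0, 0.0], rng)
```

With zero strain the migration energy reduces to $E^{\rm m}_0$; raising $E^{\rm m}$ by exactly $kT$ divides the rate by $e$.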
Bias calculations ----------------- Point-defect diffusion and absorption by elements of the microstructure such as dislocations, cavities, grain boundaries and precipitates play an important role in the macroscopic evolution of materials. This is especially true under irradiation, since in that case not only vacancies but also self-interstitial atoms (SIAs) migrate to these sinks. Owing to their large dipole tensor components, SIAs generally interact more than vacancies with the stress fields generated by sinks. This leads to a difference in point-defect fluxes to a given sink known as the “absorption bias”. For example, in the “dislocation bias model” [@Brailsford1972], which is one of the most popular models to explain irradiation void swelling, dislocations are known as biased sinks: they absorb more interstitials than vacancies. Voids, which produce shorter-range stress fields, are considered as neutral sinks, meaning that their absorption bias is zero. Since SIAs and vacancies are produced in equal quantities, the preferential absorption of SIAs by dislocations leads to a net flux of vacancies to voids and thus to void growth. Similar explanations based on absorption biases have been given to rationalize irradiation creep [@Heald1974] and irradiation growth in hexagonal materials [@Rouchette2014a]. In order to predict the kinetics of such phenomena, a precise evaluation of absorption biases is necessary. Following the rate theory formalism [@Brailsford1972], the absorption bias of a given sink can be written as the relative difference of sink strengths for interstitials ($k_i^2$) and vacancies ($k_v^2$) [@Heald1975a]. The strength of a sink for a point-defect $\theta$ ($\theta = i, v$) is related to the loss rate $\phi_\theta$ through $$\label{eq-flux-sink-strength} \phi_{\theta} = k_{\theta}^2 D_{\theta} c_{\theta},$$ where $D_\theta$ is the diffusion coefficient in the absence of elastic interactions and $c_{\theta}$ is the volume concentration of $\theta$. 
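The bookkeeping behind these quantities is simple: the sink strength follows from the steady-state defect population (Eq. \[eq-sink-strength-from-C\] below), and the absorption bias is $B = (k_i^2 - k_v^2)/k_i^2$. A minimal sketch with purely illustrative placeholder numbers:

```python
# Sink-strength / absorption-bias bookkeeping. All values are illustrative
# placeholders, not results from the cited studies.

def sink_strength(K, D, N_mean):
    """k_theta^2 = K / (D_theta * N_mean): K is the defect creation rate,
    D_theta the strain-free diffusion coefficient and N_mean the average
    steady-state number of defects of type theta in the simulation box."""
    return K / (D * N_mean)

def absorption_bias(k2_i, k2_v):
    """B = (k_i^2 - k_v^2) / k_i^2."""
    return (k2_i - k2_v) / k2_i

k2_i = sink_strength(K=1.0e-6, D=1.0e-16, N_mean=5.0e8)   # SIAs
k2_v = sink_strength(K=1.0e-6, D=1.0e-18, N_mean=8.0e10)  # vacancies
B = absorption_bias(k2_i, k2_v)
```

A positive $B$ means SIAs are absorbed preferentially; a negative $B$ means vacancies are.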
The sink strength can be calculated with different methods, for example by solving the diffusion equation around the sink [@Brailsford1972; @Dederichs1978] or an associated phase field model [@Rouchette2014], or by performing object kinetic Monte Carlo simulations (OKMC) [@Heinisch2000; @Malerba2007]. It should be noted that analytical solution of the diffusion equation is limited to a few cases and often requires the defect properties or the stress field to be simplified [@Schroeder1975; @Woo1981; @Skinner1984], so in general numerical simulations are necessary [@Woo1979a; @Bullough1981; @Dubinko2005; @Jourdan2015]. In the following we consider the OKMC approach, due to its simplicity and its flexibility to introduce complex diffusion mechanisms and the effect of stress fields [@Sivak2011; @Subramanian2013; @Vattre2016]. In OKMC simulations of sink strengths, a sink is introduced in a simulation box where periodic boundary conditions are used and point-defects are generated at a given rate $K$. They diffuse in the box by successive atomic jumps until they are absorbed by the sink. For each defect in the simulation box, the jump frequencies of all jumps from the current stable state to the possible final states are calculated and the next event is chosen according to the standard residence time algorithm [@Gillespie1976; @Bortz1975]. The jump frequency of event $\alpha$ is given by Eq. , considering the strain dependence of the migration energy through Eq. . The sink strength is deduced from the average number of defects in the box $\overline{N}_{\theta}$ at steady state by the following equation [@Vattre2016]: $$\label{eq-sink-strength-from-C} k_{\theta}^2 = \frac{K}{D_{\theta}\overline{N}_{\theta}},$$ from which the bias is deduced: $$\label{eq-bias-definition} B = \frac{k_i^2-k_v^2}{k_i^2}.$$ Another method is often used for the calculation of sink strengths with OKMC [@Heinisch2000; @Malerba2007]. 
For each defect, the number of jumps it performs before it is absorbed by the sink is registered. The sink strength is then deduced from the average number of jumps. Although this method is equivalent to the method based on the average concentration in the non-interacting case, it is no longer valid if elastic interactions are included. In this case the average time before absorption should be measured instead of the average number of jumps, since jump frequencies now depend on the location of the defect and are usually higher. Therefore, applying this method in the interacting case often leads to an underestimation of sink strengths. As an illustration, we consider the study published in Ref. [@Vattre2016], where sink strengths of semi-coherent interfaces have been calculated with OKMC, taking into account the effect of the strain field generated by the interfaces. The strain is the sum of the coherency strain and of the strain due to interface dislocations. It has been calculated by a semi-analytical method within the framework of anisotropic elasticity [@Vattre2013; @Vattre2015; @Vattre2016]. We consider the case of a twist grain boundary in Ag, which produces a purely deviatoric strain field. Two grain boundaries separated by a distance $d$ are introduced in the box and periodic boundary conditions are applied. Dipole tensors of vacancies and SIAs in Ag have been computed by DFT for both stable and saddle positions [@Vattre2016], using the residual stress definition (Eq. \[eq:Pij\_from\_sigma\]). In the ground state, the elastic dipole of the vacancy is isotropic and that of the SIA is almost isotropic. On the other hand, the elastic dipole tensors have a significant deviatoric component for both point-defects at their saddle point. ![Sink strengths of a twist grain boundary ($\theta = 7.5^\circ$) for (a) vacancies and (b) SIAs, and (c) absorption bias, as a function of the layer thickness $d$. (see Ref. 
[@Vattre2016] for more details).[]{data-label="fig-sink-strengths-and-bias"}](fig5.pdf){width="\linewidth"} Sink strengths of the twist grain boundary are shown in Fig. \[fig-sink-strengths-and-bias\]-(a,b) as a function of the layer thickness $d$ and compared to the analytical result with no elastic interactions $k^2 = 12/d^2$. Sink strengths for both vacancies and SIAs are significantly increased when elastic interactions are included and when the anisotropy at the saddle point is taken into account, especially for thinner layers. However, if the saddle point is considered isotropic, the non-interacting case is recovered. This is due to the deviatoric character of the strain field: since the dipole tensor of the vacancy in its ground state is purely hydrostatic, the interaction energy of a vacancy with the strain field is zero and there is no thermodynamic driving force for the absorption of the vacancy. A similar result is obtained for SIAs, because of their almost purely hydrostatic dipole in their ground state. Fig. \[fig-sink-strengths-and-bias\]c shows the evolution of the bias. For this interface, saddle point anisotropy leads to a negative bias, meaning that vacancies tend to be absorbed more than interstitials. This approach has also been recently used for the calculation of the sink strength of straight dislocations and cavities in aluminum [@Carpentier2017]. In both cases, saddle point anisotropy appears to have a significant influence on the sink strengths. This confirms analytical results obtained with various levels of approximation [@Skinner1984; @Borodin1993; @Borodin1994]. Isolated defect in atomistic simulations {#sec:elast_corr} ---------------------------------------- The elastic modeling of point-defects is also useful in the context of atomistic simulations. Such simulations, in particular [*ab initio*]{}calculations, are now indispensable for obtaining point-defect energetics, such as formation and migration energies [@Freysoldt2014]. 
However, an ongoing issue is the difficulty of obtaining the properties of isolated defects. One can use atomistic simulations with controlled surfaces to model an isolated point-defect [@Sinclair1978; @Rao1998; @Liu2007; @Zhang2013; @Huber2016], but then the excess energy associated with the point-defect can be exactly separated from that of the external surfaces or interfaces only for interatomic potentials with a cutoff interaction radius, corresponding to short-range empirical potentials like EAM. For more complex potentials or for [*ab initio*]{}calculations, the absence of any interaction cutoff prevents an unambiguous definition of the point-defect energy. A supercell approach relying on periodic boundary conditions is therefore usually preferred. The combined effect of periodic boundary conditions and of the limited size of such calculations, for numerical cost reasons, makes the computed properties difficult to converge for defects inducing long-range effects. This problem is well-known in the context of charged point-defects, where long-range Coulomb interactions exist between the defect and its periodic images and for which corrective schemes have been developed [@Leslie1985; @Makov1995; @Taylor2011]. For neutral defects, interactions between periodic images also exist. These interactions are of elastic origin and decay like the inverse cube of the separation distance. Consequently, the computed excess energies are those of a periodic array of interacting point-defects, and converge as the inverse of the supercell volume to the energy of the isolated defect. This can be penalizing for defects inducing large distortions, like SIAs or clusters, or for atomistic calculations where only small supercells are computationally affordable. The elastic description of a point-defect allows calculating this spurious elastic interaction associated with periodic boundary conditions, so as to obtain the energy properties of the isolated point-defect [@Varvenne2013]. 
After atomic relaxation, the excess energy of a supercell containing one point-defect is given by: $$\label{eq:E_DP} E^{\rm PD}_{\rm PBC}(\bar{\varepsilon}=0) = E_{\infty}^{\rm PD} + \frac{1}{2} E_{\rm PBC}^{\rm int},$$ where $E_{\infty}^{\rm PD}$ is the excess energy of the isolated defect and $E_{\rm PBC}^{\rm int}$ is the interaction energy of the defect with its periodic images. The factor $1/2$ arises because half of the interaction is devoted to the defect itself and the other goes to its periodic images. Continuous linear elasticity theory can be used to evaluate this elastic interaction. If the point-defect is characterized by the elastic dipole $P_{ij}$, following Eq. \[eq:dipole\_Einter\], this interaction energy is given by $$E_{\rm PBC}^{\rm int} = - P_{ij} \, \varepsilon^{\rm PBC}_{ij}, \label{eq:Epint}$$ with $\varepsilon^{\rm PBC}_{ij}$ the strain created by the defect periodic images. It can be obtained by direct summation $$\varepsilon^{\rm PBC}_{ij} = -{\sum_{n,m,p}}'G_{ik,jl}(n\vec{a}_1+m\vec{a}_2+p\vec{a}_3 ) \, P_{kl}. \label{eq:eps_p}$$ with $\vec{a}_1$, $\vec{a}_2$ and $\vec{a}_3$ the periodicity vectors of the supercell. The prime sign indicates that the diverging term ($n=m=p=0$) has been excluded from the sum. As the second derivative of the Green’s function $G_{ik,jl}(\vec{r})$ is decaying like $1/r^3$, this sum is only conditionally convergent. It can be regularized following the numerical scheme proposed by Cai [@Cai2003]. After computing the point-defect energy with an atomistic simulation code, this energy can be corrected by subtracting the interaction energy with the periodic images (Eq. \[eq:E\_DP\]) to obtain the properties of the isolated defect. 
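The image sum of Eq. \[eq:eps\_p\] can be sketched numerically. The fragment below assumes isotropic elasticity, so that the (Kelvin) Green's function has a closed form, and evaluates its second derivatives by central finite differences; the truncated direct sum over a symmetric block of images stands in for the regularised summation scheme of Cai. Elastic moduli and dipole values are illustrative placeholders.

```python
import numpy as np

MU, NU = 0.5, 0.3  # illustrative shear modulus (eV/A^3) and Poisson ratio

def green_iso(r):
    """Kelvin (isotropic) elastic Green's function G_ij(r)."""
    rn = np.linalg.norm(r)
    rh = r / rn
    return (((3.0 - 4.0 * NU) * np.eye(3) + np.outer(rh, rh))
            / (16.0 * np.pi * MU * (1.0 - NU) * rn))

def green_second_deriv(r, h=1e-3):
    """G_ik,jl(r) by central finite differences; indexed g[i, k, j, l]."""
    g = np.zeros((3, 3, 3, 3))
    for j in range(3):
        for l in range(3):
            ej, el = np.eye(3)[j] * h, np.eye(3)[l] * h
            g[:, :, j, l] = (green_iso(r + ej + el) - green_iso(r + ej - el)
                             - green_iso(r - ej + el)
                             + green_iso(r - ej - el)) / (4.0 * h * h)
    return g

def eps_pbc(P, a1, a2, a3, N=3):
    """eps^PBC_ij = -sum' G_ik,jl(n a1 + m a2 + p a3) P_kl, truncated to a
    symmetric block of images (the term n = m = p = 0 is excluded)."""
    eps = np.zeros((3, 3))
    for n in range(-N, N + 1):
        for m in range(-N, N + 1):
            for p in range(-N, N + 1):
                if n == m == p == 0:
                    continue
                g = green_second_deriv(n * a1 + m * a2 + p * a3)
                eps -= np.einsum('ikjl,kl->ij', g, P)
    return eps

a = 10.0  # cubic supercell edge (A)
cell = [a * np.eye(3)[i] for i in range(3)]
P = np.diag([3.4, 3.4, 8.0])                 # tetragonal dipole (eV)
eps_img = eps_pbc(P, *cell)
E_int = -np.einsum('ij,ij->', P, eps_img)    # interaction with the images
```

The isolated-defect energy then follows as $E^{\rm PD}_{\rm PBC} - E^{\rm int}_{\rm PBC}/2$. A sanity check: in an isotropic medium, a purely hydrostatic dipole on a cubic image lattice produces a vanishing image strain, so the correction disappears for a pure dilatation center.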
This interaction energy is computed from the elastic constants of the perfect crystal, which are needed to evaluate the Green’s function and its derivative ([*cf*.]{}§\[sec:elast\_Hooke\]), and from the residual stress of the defective supercell to determine the point-defect elastic dipole ([*cf*.]{}§\[sec:para\_atom\]). This is therefore a simple post-treatment, which does not involve any fitting procedure and which can be performed using the <span style="font-variant:small-caps;">Aneto</span> program provided as supplemental material of Ref. [@Varvenne2013]. We have assumed in Eq. that the supercell containing the point-defect has the same periodicity vectors as the perfect supercell, [*i.e.*]{}the applied homogeneous strain $\bar{\varepsilon}$ is null. This corresponds to the simplest boundary conditions in atomistic simulations of point-defects. But one sometimes prefers to also relax the periodicity vectors to nullify the stress in the supercell. Both these $\bar{\varepsilon}=0$ and $\sigma=0$ conditions converge to the same energy $E^{\rm PD}_{\infty}$ in the thermodynamic limit, but different energies are obtained for too-small supercells. The elastic model can be further developed to rationalize this difference [@Puchala2008; @Varvenne2013]. For $\sigma=0$ conditions, a strain $\bar{\varepsilon}$ is applied to the defective supercell to nullify its stress. Eq. therefore needs to be complemented with the energy contribution of this deformation $$\Delta E(\bar{\varepsilon}) = \frac{V}{2}C_{ijkl}\bar{\varepsilon}_{ij}\bar{\varepsilon}_{kl} - P_{ij} \bar{\varepsilon}_{ij}.$$ This applied strain $\bar{\varepsilon}$ in zero stress calculations is linked to the elastic dipole by Eq. . 
The excess energy of the supercell containing one point-defect is thus now given by $$\label{eq:E_DP_sig0} \begin{split} E^{\rm PD}_{\rm PBC}(\sigma=0) &= E_{\infty}^{\rm PD} + \frac{1}{2} E_{\rm PBC}^{\rm int} - \frac{1}{2V}S_{ijkl}P_{ij}P_{kl} \\ & = E^{\rm PD}_{\rm PBC}(\bar{\varepsilon}=0) - \frac{1}{2V}S_{ijkl}P_{ij}P_{kl}, \end{split}$$ where the elastic compliances of the bulk material $S_{ijkl}$ are the inverse tensor of the elastic constants $C_{ijkl}$. This equation shows that $\bar{\varepsilon}=0$ and $\sigma=0$ conditions lead to point-defect excess energies differing by a term proportional to the inverse of the supercell volume and to the square of the elastic dipole. This difference will therefore be important for small supercells and/or point-defects inducing an important perturbation of the host lattice. But once corrected through Eqs. or , both approaches should lead to the same value. $\sigma=0$ calculations therefore appear unnecessary. ![Formation energy of a SIA cluster containing eight interstitials in bcc iron calculated for fixed periodicity vectors ($\bar{\varepsilon} = 0$) or at zero stress ($\sigma=0$) for different sizes of the simulation cell: (a) C15 aggregate and (b) parallel-dumbbell configuration with a $\langle111\rangle$ orientation. Atomistic simulations are performed either with the M07 empirical potential [@Marinica2012] (EAM) or with [*ab initio*]{}calculations (GGA). Filled symbols refer to uncorrected results and open symbols to the results corrected by the elastic model (see Ref. [@Varvenne2013] for more details).[]{data-label="fig:Ef_8sia111c15"}](fig6.pdf) We illustrate the usefulness of this elastic post-treatment with an atomistic study of SIA clusters in bcc iron. These clusters appear under irradiation and can adopt different morphologies [@Marinica2012]. 
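The finite-size term $\frac{1}{2V}S_{ijkl}P_{ij}P_{kl}$ separating the two boundary conditions is easy to evaluate. The sketch below builds the cubic compliance matrix in Voigt notation; elastic constants and dipole are roughly bcc-Fe-like placeholders, not values from the cited works:

```python
import numpy as np

def cubic_stiffness_voigt(C11, C12, C44):
    """6x6 Voigt stiffness matrix of a cubic crystal."""
    C = np.full((3, 3), C12, dtype=float)
    np.fill_diagonal(C, C11)
    return np.block([[C, np.zeros((3, 3))],
                     [np.zeros((3, 3)), C44 * np.eye(3)]])

def energy_shift_sigma0(P, C11, C12, C44, V):
    """-(1/2V) S_ijkl P_ij P_kl: the difference between sigma=0 and
    eps=0 excess energies. P is the 3x3 elastic dipole, V the supercell
    volume. With the stress-like Voigt vector (P11, P22, P33, P23, P13, P12),
    Pv^T S Pv equals the full tensor contraction."""
    S = np.linalg.inv(cubic_stiffness_voigt(C11, C12, C44))
    Pv = np.array([P[0, 0], P[1, 1], P[2, 2], P[1, 2], P[0, 2], P[0, 1]])
    return -Pv @ S @ Pv / (2.0 * V)

# Illustrative values: elastic constants in eV/A^3, dipole in eV, V in A^3.
C11, C12, C44 = 1.52, 0.86, 0.70
P = np.diag([3.4, 3.4, 8.0])
shift = energy_shift_sigma0(P, C11, C12, C44, V=250 * 11.8)
```

For a purely hydrostatic dipole $P\,\delta_{ij}$ the shift reduces to $-3P^2/\big(2V(C_{11}+2C_{12})\big)$, and it vanishes as $1/V$, consistent with the convergence behaviour described above.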
In particular, some clusters can have a 3D structure with an underlying crystal symmetry corresponding to the C15 Laves phase, and others have a planar structure corresponding to dislocation loop clusters with $1/2\,\langle111\rangle$ Burgers vectors. The formation energies of two different configurations of a cluster containing 8 SIAs, a C15 aggregate and a planar aggregate of parallel dumbbells with a $\langle 111 \rangle$ orientation, are shown in Fig. \[fig:Ef\_8sia111c15\] for different supercell sizes. They have first been calculated with an empirical EAM potential [@Marinica2012]: with fixed periodicity vectors ($\bar{\varepsilon}=0$), one needs at least $2000$ atoms for the C15 aggregate and $4000$ atoms for the $\langle 111 \rangle$ planar configuration to get a formation energy converged to a precision better than $0.1$ eV. The convergence is slightly faster for zero stress calculations ($\sigma=0$) in the case of the C15 aggregate (Fig. \[fig:Ef\_8sia111c15\]a), but the opposite is true in the case of the $\langle 111 \rangle$ planar configuration (Fig. \[fig:Ef\_8sia111c15\]b). When we add the elastic correction, the convergence is improved for both cluster configurations. The corrected $\bar{\varepsilon}=0$ and $\sigma=0$ calculations then lead to the same formation energies, except for the smallest simulation cell ($128$ lattice sites) in the case of the $\langle 111 \rangle$ cluster. These formation energies have also been obtained with [*ab initio*]{}calculations for a simulation cell containing $250$ lattice sites (Fig. \[fig:Ef\_8sia111c15\]). Uncorrected $\bar{\varepsilon}=0$ calculations lead to an energy difference $\Delta E = -5.6$ eV between the C15 and the $\langle 111 \rangle$ planar configuration, whereas this energy difference is only $\Delta E = -0.6$ eV in $\sigma = 0$ calculations. 
This variation of the energy difference is rationalized once the elastic correction is added, and good precision is obtained with this approach coupling [*ab initio*]{}calculations and elasticity theory, with an energy difference of $\Delta E = 3.5 \pm 0.2$ eV. This elastic correction has been shown to accelerate the convergence of the point-defect formation and/or migration energies obtained from atomistic simulations, in particular from [*ab initio*]{}calculations, in numerous other cases like the SIA in hcp Zr [@Varvenne2013; @Pasianot2016a], the vacancy in diamond silicon [@Varvenne2013], or solute interstitials in bcc iron [@Souissi2016]. Conclusions =========== Elasticity theory thus provides an efficient framework to model point-defects. Describing the point-defect as an equilibrated distribution of point-forces, the long-range elastic field of the defect and its interaction with other elastic fields are fully characterized by the first moment of this force distribution, a second-rank symmetric tensor called the elastic dipole. This description is equivalent to an infinitesimal Eshelby inclusion or an infinitesimal dislocation loop. Knowing only the elastic constants of the matrix and the elastic dipole, a quantitative modeling of the point-defect and its interactions is thus obtained. The value of this elastic dipole can be either deduced from experimental data, like Vegard’s law parameters, or extracted from atomistic simulations. In this latter case, care must be taken to avoid finite-size effects, in particular for [*ab initio*]{}calculations. The definition through the residual stress appears to be the most precise one to obtain the dipole tensors. The elastic description offers a convenient framework to bridge the scales between atomic and continuum descriptions, so as to consider the interaction of the point-defects with various complex elastic fields. 
This upscaling approach has already proven its efficiency in the modeling of elastodiffusion or in the calculation of absorption bias under irradiation. As the numerical evaluation of the elastic Green’s function and its derivatives does not present nowadays any technical difficulty, such an elastic model offers also a nice route to simulate the evolution of a whole population of point-defects in a complex microstructure, considering their mutual interaction and their interaction with other structural defects, in the same spirit as dislocation dynamics simulations are now routinely used to model the evolution of a dislocation microstructure. **Acknowledgements** - This work was performed using HPC resources from GENCI-CINES and -TGCC (Grants 2017-096847). The research was partly funded by the European Atomic Energy Community’s (Euratom) Seventh Framework Program FP7 under grant agreement No. 604862 (MatISSE project) and in the framework of the EERA (European Energy Research Alliance) Joint Program on Nuclear Materials. References {#references .unnumbered} ========== [^1]: As the extension, so the force. [^2]: The scaling with the distance $r$ is a consequence of Eq. , given that the $\delta(\vec{r})$ function is homogeneous of degree $-3$. [^3]: See also Refs. [@Puchala2008] and [@Pasianot2016b] for other proofs.
--- abstract: | *We present some properties of the gradient of a mu-differentiable function. The Method of Lagrange Multipliers for mu-differentiable functions is then exemplified.* **Keywords:** nonstandard analysis, mu-differentiability, Method of Lagrange Multipliers. **2000 Mathematics Subject Classification:** 26E35, 26E05, 26B05. author: - '**Ricardo Almeida and Delfim F. M. Torres**' date: | Department of Mathematics\ University of Aveiro\ 3810-193 Aveiro, Portugal\ {ricardo.almeida, delfim}@ua.pt\ title: 'A mu-differentiable Lagrange multiplier rule[^1]' --- Introduction ============ In [@AlmeidaTorres] we introduce a new kind of differentiation, which we call mu-differentiability, and we prove necessary and sufficient conditions for the existence of extremum points. For the necessary background on Nonstandard Analysis and for notation, we refer the reader to [@AlmeidaTorres] and references therein. Here we just recall the necessary results. [@AlmeidaTorres] Given an internal function $f:{^*\mathbb R}^n\to {^*\mathbb R}$, we say that $\alpha \in \mathbb R^n$ is a *local m-minimum* of $f$ if $$f(x) {\ {\raise-.5ex\hbox{$\buildrel>\over\sim$}}\ }f(\alpha) \, \mbox{ for all } \, x \in {^*B_r(\alpha)},$$ where $r\in \mathbb R$ is a positive real number. The crucial fact is that there exists a relationship between m-minima and minima: \[ponte2\] [@AlmeidaTorres] If $f:{^*\mathbb R}^n\to {^*\mathbb R}$ is mu-differentiable, then $$\alpha \mbox{ is an m-minimum of } f \mbox{ if and only if } \alpha \mbox{ is a minimum of } st(f).$$ With this lemma, and using the fact that $$\label{eq3}st \left( \left. \frac{\partial f}{\partial x_i} \right|_{\alpha} \right) = \left. \frac{\partial st(f)}{\partial x_i} \right|_{\alpha} \, \mbox{ for } \, i \in \{1,\ldots,n \},$$ it follows: [@AlmeidaTorres] If $f:{^*\mathbb R}^n\to {^*\mathbb R}$ is a mu-differentiable function and $\alpha$ is an m-minimum of $f$, then $$\left. 
\frac{\partial f}{\partial x_i} \right|_{\alpha} \approx 0 , \mbox{ for every }\, i=1,\ldots,n.$$ In this paper we develop further the theory initiated in [@AlmeidaTorres], proving some properties of the gradient vector (section \[gradient\]) and a Method of Lagrange Multipliers (section \[MLM\]). Illustrative examples show the analogy with the classical case. The Gradient Vector {#gradient} =================== In the sequel $f$ denotes an internal mu-differentiable function from ${^*\mathbb R}^n$ to ${^*\mathbb R}$. \[def:grad\] A *gradient vector* of $f$ at $x\in ns({^*\mathbb R}^n)$ is defined by $$\nabla f(x) := \left( \left. \frac{\partial f}{\partial x_1} \right|_{x}, \ldots, \left. \frac{\partial f}{\partial x_n} \right|_{x} \right)$$ where $$\left. \frac{\partial f}{\partial x_i} \right|_{x} \approx \frac{f(x_1,\ldots, x_{i-1}, x_i+\epsilon, x_{i+1}, \ldots,x_n)-f(x_1,\ldots,x_n)}{\epsilon}$$ and $\epsilon$ is an infinitesimal satisfying $|\epsilon|>\delta_f$. The positive infinitesimal $\delta_f$ that appears in Definition \[def:grad\] is given by the m-differentiability of $f$ ([@AlmeidaTorres]). Observe that $$\left. \frac{\partial f}{\partial x_i}\right|_x \approx Df_x(e_i),$$ where $e_i = (0,\ldots, 0,1,0,\ldots,0)$ denotes the $i$th canonical vector, and $Df_x$ denotes the derivative operator of $f$ at $x$. If $x,y \in ns({^*\mathbb R}^n)$ and $x\approx y$, then $$\left. \frac{\partial f}{\partial x_i}\right|_x \approx \left. \frac{\partial f}{\partial x_i}\right|_y, \, \, i=1,\ldots,n \, ,$$ *i.e.*, $\nabla f(x) \approx \nabla f(y)$. Simply observe that $Df_x(e_i)\approx Df_y(e_i)$. If $u\in{^*\mathbb R}^n$ is a finite vector, then $$\forall x \in ns({^*\mathbb R}^n) \hspace{.5cm} Df_x(u) \approx \nabla f(x) \cdot u.$$ Since $st(f)$ is a $C^1$ function, it follows that for any $v \in \mathbb R^n$ $$D st(f)_{st(x)}(v)=\nabla st(f)(st(x))\cdot v.$$ By the Transfer Principle of Nonstandard Analysis, it still holds for $u\in{^*\mathbb R}^n$.
On the other hand, 1. $D st(f)_{st(x)}(u)=st(Df_{st(x)})(u)\approx Df_x(u)$, 2. $\nabla st(f)(st(x))=st(\nabla f(st(x)))\approx \nabla f(x)$, which proves the desired result. We point out that, in contrast to classical functions, if $\nabla f(x)$ is a gradient vector of $f$ at $x$, then $\nabla f(x)+\Omega$, where $\Omega\in{^*\mathbb R}^n$ is an infinitesimal vector, is also a gradient vector at $x$. Conversely, if $\nabla f(x)$ and $\nabla^1 f(x)$ are two gradient vectors, then $\nabla f(x)-\nabla^1 f(x)\approx 0$. From now on, when there is no danger of confusion, we simply write $\nabla f$ instead of $\nabla f(x)$. Let $f(x,y,z)=(1+\epsilon)xy^2-\delta z$, with $(x,y,z) \in {^*\mathbb R}^3$, and $\epsilon$ and $\delta$ be two infinitesimal numbers. Given an infinitesimal $\theta$, $$\begin{array}{rcl} \displaystyle \frac{(1+\epsilon)(x+\theta)y^2-\delta z-((1+\epsilon)xy^2-\delta z)}{\theta}&=&(1+\epsilon)y^2,\\ &&\\ \displaystyle \frac{(1+\epsilon)x(y+\theta)^2-\delta z-((1+\epsilon)xy^2-\delta z)}{\theta}&=&2(1+\epsilon)xy+\theta (1+\epsilon)x,\\ &&\\ \displaystyle \frac{(1+\epsilon)xy^2-\delta (z+\theta)-((1+\epsilon)xy^2-\delta z)}{\theta}&=&-\delta,\\ \end{array}$$ and we can choose $$\displaystyle \frac{\partial f}{\partial x}=(1+\epsilon)y^2, \, \, \displaystyle \frac{\partial f}{\partial y}=2(1+\epsilon)xy, \, \, \displaystyle \frac{\partial f}{\partial z}=-\delta.$$ If $f$ and $g$ are mu-differentiable and $k \in fin({^*\mathbb R})$, then $$\nabla(kf)=k\nabla f, \quad \nabla(f+g)=\nabla f + \nabla g, \quad \mbox{ and } \quad \nabla(fg)=f\nabla g + g\nabla f.$$ We prove only the last equality. Fix an infinitesimal number $\epsilon$ such that $|\epsilon|>\delta_f$.
Then, $$\frac{\partial (fg)}{\partial x_i} \approx \frac{(fg)(x_1,\ldots, x_{i-1}, x_i+\epsilon, x_{i+1}, \ldots,x_n)-(fg)(x_1,\ldots,x_n)}{\epsilon}$$ $$=f(x_1,\ldots,x_n) \frac{g(x_1,\ldots, x_{i-1}, x_i+\epsilon, x_{i+1}, \ldots,x_n)-g(x_1,\ldots,x_n)}{\epsilon}$$ $$+g(x_1,\ldots, x_{i-1}, x_i+\epsilon, x_{i+1}, \ldots,x_n)\frac{f(x_1,\ldots, x_{i-1}, x_i+\epsilon, x_{i+1}, \ldots,x_n)-f(x_1,\ldots,x_n)}{\epsilon}$$ $$\approx f(x)\frac{\partial g}{\partial x_i}+g(x)\frac{\partial f}{\partial x_i}$$ by the continuity of $g$. \[def:m-critical-point\] We say that $x$ is a *m-critical point* of $f$ if $\nabla f(x)\approx 0$. The following lemma is an immediate consequence of (\[eq3\]) and Definition \[def:m-critical-point\]. A point $x$ is a m-critical point of $f$ if and only if $st(x)$ is a critical point of $st(f)$. The Method of Lagrange Multipliers {#MLM} ================================== Let $f:{^*\mathbb R}^n \to {^*\mathbb R}$ and $g_j:{^*\mathbb R}^n \to {^*\mathbb R}$, $j=1,\ldots,m$ ($m \in \mathbb N$, $m<n$), denote internal mu-differentiable functions. We address the problem of finding m-minimums or m-maximums of $f$, subject to the conditions $g_j(x)\approx0$, for all $j$. The constraints $g_j(x)\approx0$, $j=1,\ldots,m$, are called *side conditions*. Lagrange solved this problem (for standard differentiable functions) by introducing new variables, $\lambda_1,\ldots,\lambda_m$, and forming the augmented function $$F(x,\lambda_1,\ldots,\lambda_m)=f(x)+\sum_{j=1}^m \lambda_j g_j(x), \quad x \in \mathbb R^n.$$ Roughly speaking, Lagrange proved that the problem of finding the critical points of $f$, satisfying the conditions $g_j(x)=0$, is equivalent to finding the critical points of $F$. We present here a method to determine critical points for internal functions with side conditions, based on the *Method of Lagrange Multipliers*.
Similarly to the classical setting, define $$\begin{array}{lcll} F: & {^*\mathbb R}^{n+m} & \to & {^*\mathbb R}\\ & (x_1,\ldots,x_n,\lambda_1,\ldots,\lambda_m) & \mapsto & f(x_1,\ldots,x_n)+\displaystyle \sum_{j=1}^m \lambda_j g_j(x_1,\ldots,x_n).\\ \end{array}$$ If we let $g:=(g_1,\ldots,g_m)$ and $\lambda:=(\lambda_1,\ldots,\lambda_m)$, we can simply write $$\label{eq:def:F} F(x,\lambda)=f(x)+\lambda \cdot g(x) \, .$$ \[LNF\]\[Lagrange rule in normal form with one constraint\] Let $f:{^*\mathbb R}^n\to{^*\mathbb R}$ and $g:{^*\mathbb R}^n\to{^*\mathbb R}$ be two mu-differentiable functions, and $\alpha$ a m-minimum of $f$ such that $g(\alpha)\approx 0$ and $\nabla g(\alpha) \not\approx 0$. Then, there exists a finite $\lambda \in {^*\mathbb R}$ such that $$\nabla f(\alpha)+\lambda \nabla g(\alpha)\approx 0.$$ Since $st(f)$ and $st(g)$ are functions of class $C^1$, $\alpha$ is a minimum of $st(f)$, $st(g)(\alpha)=0$ and $\nabla st(g)(\alpha) \not=0$. It follows (see, *e.g.*, [@cheney p. 148]) that $$\exists \lambda \in \mathbb{R} \hspace{.5cm} \nabla st(f)(\alpha)+\lambda \nabla st(g)(\alpha)=0.$$ Hence, $$\nabla f(\alpha)+\lambda \nabla g(\alpha)\approx0.$$ Suppose that we are in the conditions of Theorem \[LNF\]. Then, there exists some $\lambda_1\in fin({^*\mathbb R})$ such that $$\nabla f(\alpha)+\lambda_1 \, \nabla g(\alpha)\approx0,$$ *i.e.*, $$\left. \frac{\partial f}{ \partial x_i} \right|_{\alpha}+ \lambda_1 \left. \frac{\partial g}{ \partial x_i} \right|_{\alpha}\approx0, \quad \, i=1, \ldots, n.$$ Using the notation , if $\alpha$ is a m-minimum of $f$ and $g(\alpha)\approx 0$, then $$\label{eq1}\left\{ \begin{array}{ll} \left. \frac{\partial F}{ \partial x_i} \right|_{(\alpha,\lambda_1)}=\left. \frac{\partial f}{\partial x_i}\right|_{\alpha}+ \lambda_1 \left. \frac{\partial g}{ \partial x_i} \right|_{\alpha} \approx 0, & i=1, \ldots, n \, ,\\ \left. \frac{\partial F}{ \partial \lambda} \right|_{(\alpha,\lambda_1)}\approx g(\alpha)\approx0 \, . 
&\\ \end{array} \right.$$ Consequently, the m-critical points are solutions of the system $$\frac{\partial F}{ \partial x_i}\approx 0, \, i=1, \ldots, n, \mbox{ and } \frac{\partial F}{ \partial \lambda}\approx0 \, ,$$ *i.e.*, $\nabla F \approx 0$. Let $f(x,y,z)=xyz+\epsilon$, with $\epsilon\approx0$, and consider the constraint $g(x,y,z)=x^2+2(y+\delta)^2+3z^2-1$, with $\delta\approx0$. In this case, we define $$F(x,y,z,\lambda):=xyz+\epsilon+\lambda (x^2+2(y+\delta)^2+3z^2-1).$$ The system (\[eq1\]) takes the form $$\left\{ \begin{array}{ll} yz+2\lambda x\approx0\\ xz+4\lambda(y+\delta)\approx0\\ xy+6\lambda z\approx0\\ x^2+2(y+\delta)^2+3z^2-1\approx 0 \, . \end{array} \right.$$ Since $$xyz \approx -2\lambda x^2 \approx -4\lambda y(y+\delta) \approx -6\lambda z^2,$$ if $\lambda\not\approx0$, the solution is $$x^2\approx\frac13, \, \, y^2\approx\frac16 \mbox{ and } z^2\approx\frac19 \, ;$$ if $\lambda \approx 0$, then $$\left( 0,0,\pm \frac{1}{\sqrt 3} \right), \, \left( 0,\pm \frac{1}{\sqrt 2},0 \right) \mbox{ and } \left( \pm1,0,0\right)$$ are solutions. Observe that $$\nabla g= (2x,4(y+\delta),6z) \approx (0,0,0) \mbox{ if and only if } (x,y,z)\approx(0,0,0).$$ One easily checks that $$f\left( \frac{1}{\sqrt 3}, \frac{1}{\sqrt 6}, \frac{1}{3} \right)=\frac{1}{3\sqrt{18}}+\epsilon \mbox{ is the m-maximum and }$$ $$f\left( - \frac{1}{\sqrt 3}, \frac{1}{\sqrt 6}, \frac{1}{3} \right)=-\frac{1}{3\sqrt{18}}+\epsilon \mbox{ is the m-minimum}$$ of $f$ subject to the constraint $g$. We now prove a more general Lagrange rule, admitting the possibility of abnormal critical points ($\mu = 0$) and multiple constraints. \[thm:LR\] Let $f,g_1,\ldots,g_m$ be mu-differentiable functions on ${^*\mathbb R}^n$.
Let $\alpha$ be a m-minimum of $f$ satisfying $$g_1(\alpha)\approx \ldots \approx g_m(\alpha)\approx 0.$$ Then, there exist finite hyper-reals $\mu,\lambda_1,\ldots,\lambda_m \in {^*\mathbb R}$, not all infinitesimals, such that $$\mu \nabla f(\alpha)+\lambda_1 \nabla g_1(\alpha)+\ldots+\lambda_m \nabla g_m(\alpha)\approx 0.$$ Defining $F(x,\mu,\lambda):=\mu f(x)+\lambda \cdot g(x)$, the necessary optimality condition given by Theorem \[thm:LR\] can be written as $\partial F / \partial x \approx \partial F / \partial \lambda \approx 0$. First observe that $st(f),st(g_1),\ldots,st(g_m)$ are all functions of class $C^1$, $\nabla st(f)(\alpha)=st(\nabla f)(\alpha)$ and $\nabla st(g_j)(\alpha)=st(\nabla g_j)(\alpha)$, for $j=1,\ldots,m$. Furthermore, since $\alpha$ is a minimum of $st(f)$ and $$st(g_1)(\alpha)=\ldots =st(g_m)(\alpha)=0,$$ there exist reals $\mu,\lambda_1,\ldots,\lambda_m$, not all zero, such that $$\mu \, \nabla st(f)(\alpha)+\lambda_1 \nabla st(g_1)(\alpha)+\ldots+\lambda_m \nabla st(g_m)(\alpha)=0$$ (see, *e.g.*, [@cheney p. 148]). Consequently, $$\label{eq10} \mu \, st( \nabla f)(\alpha)+\lambda_1 st( \nabla g_1)(\alpha)+\ldots+\lambda_m st( \nabla g_m)(\alpha)=0.$$ On the other hand, we have $$\mu \, st( \nabla f)(\alpha)=\mu \, st(\nabla f(\alpha))\approx \mu \nabla f(\alpha).$$ Analogously, for each $j=1,\ldots,m$, $$\lambda_j st( \nabla g_j)(\alpha)\approx \lambda_j \nabla g_j(\alpha).$$ Substituting the previous relations into equation (\[eq10\]), one proves the desired result. Let $f(x,y,z)=z^2/2-(x+\epsilon)y$, with $\epsilon\approx0$, be the function to be extremized, and $g_1(x,y,z)=x^2+y-1$ and $g_2(x,y,z)=x+z-1+\delta$, with $\delta\approx0$, be the constraints.
Then, the augmented function is $$F(x,y,z,\mu,\lambda_1,\lambda_2) =\mu \left[z^2/2-(x+\epsilon)y\right] +\lambda_1(x^2+y-1)+\lambda_2(x+z-1+\delta).$$ To find the local extrema of $f$, subject to the conditions $g_1\approx0$ and $g_2\approx0$, we form the system $$\label{eq:nc:ex:t2} \left\{\begin{array}{l} -\mu y+2\lambda_1x+\lambda_2\approx0\\ - \mu (x+\epsilon)+\lambda_1\approx0\\ \mu z+\lambda_2\approx0\\ x^2+y-1\approx0\\ x+z-1+\delta\approx0 \\ \end{array}\right.$$ of necessary optimality conditions. Assume $\mu \approx 0$ (abnormal case). Then, the first two equations in imply immediately that $\lambda_1 \approx \lambda_2 \approx 0$. This is impossible by Theorem \[thm:LR\]. We conclude that $\mu \not\approx 0$. The solutions of are then infinitely close to the vectors $$(-1,0,2) \quad \mbox{and} \quad (2/3,5/9,1/3).$$ Hence, if $f$ has any m-extrema under the given constraints, then they must occur at either $(-1,0,2)$ or $(2/3,5/9,1/3)$. [1]{} R. Almeida and D. F. M. Torres. Relaxed optimality conditions for mu-differentiable functions. [*Int. J. Appl. Math. Stat.*]{} (accepted). [arXiv:0806.3545v1 \[math.CA\]]{} W. Cheney. [*Analysis for Applied Mathematics*]{}. GTM. Springer-Verlag, New York, 2001. [^1]: Supported by [*Centre for Research on Optimization and Control*]{} (CEOC) from the “Fundação para a Ciência e a Tecnologia” (FCT), cofinanced by the European Community Fund FEDER/POCI 2010. Accepted (22/June/2008) for International Journal of Mathematics and Statistics (IJMS), Vol. 4, No. S09, Spring 2009 (in press).
--- abstract: 'Let $R$ be a local Noetherian commutative ring. We prove that $R$ is an Artinian Gorenstein ring if and only if every ideal in $R$ is a trace ideal. We discuss when the trace ideal of a module coincides with its double annihilator.' author: - Haydee Lindo - Nina Pande title: Trace Ideals and the Gorenstein Property --- Introduction ============ Let $R$ be a ring and $M$ an $R$-module. The trace ideal of $M$, denoted ${\operatorname{{\tau}}_{{M}} (R)}$, is the ideal generated by the homomorphic images of $M$ in $R$. The theory of trace ideals has proved useful in various contexts but fundamentally the literature is dominated by two avenues of inquiry. First, given an $R$-module $M$, what does its trace ideal say about $M$? For instance, it is known that trace ideals detect free summands and that $M$ is projective if and only if its trace ideal is idempotent; see [@MaxOrders; @Lam; @Whitehead1980; @Herberatraceideal]. More recently, Lindo discussed the role of the trace ideal of a module in calculating the center of its endomorphism ring; see [@Lindo1]. Also, Herzog, Hibi, Stamate and Ding have studied the trace ideal of the canonical module to understand deviation from the Gorenstein property in $R$; see [@tracecanonical; @DingTrace]. A second category of questions asks: given a ring, what do the characteristics of its class of trace ideals imply about the ring? For example, in [@TraceProperty1987] Fontana, Huckaba and Papick characterize Noetherian domains where every trace ideal is prime; see also [@TraceProperty1987; @LucasMcNair2011; @LucasRTP]. This paper addresses both of these questions when $R$ is a local Artinian Gorenstein ring. In this setting, we show that the trace ideal of an $R$-module $M$ coincides with its double annihilator; see Proposition \[refl\]. In Remark \[Art\] we recall that all ideals over an Artinian Gorenstein ring are trace ideals.
We then show that this property characterizes local Artinian Gorenstein rings; see Theorem \[main\]. We prove: Let $R$ be a local Artinian Gorenstein ring and $M$ a finitely generated $R$-module. Then ${\operatorname{{\tau}}_{{M}} (R)}= {\operatorname{Ann}_R }{\operatorname{Ann}_R }M$. Let $R$ be a local Noetherian ring with maximal ideal $\mathfrak m$. Then the following are equivalent: 1. $R$ is an Artinian Gorenstein ring; 2. Every ideal is a trace ideal; 3. Every principal ideal is a trace ideal. Preliminaries ============= Let $R$ be a commutative Noetherian ring and $M$ a finitely generated $R$-module. The purpose of this section is to define the trace ideal of a module $M$ and relate it to ${\operatorname{Ann}_R }{\operatorname{Ann}_R }M$. A trace ideal is a specific type of trace module. Given $R$-modules $M$ and $X$, the trace (module) of $M$ in $X$ is $$\begin{aligned} {\operatorname{{\tau}}_{{M}} (X)} &:= \displaystyle \sum_{\alpha \in {{\operatorname{Hom}_{R} (M, X)}}} \alpha(M)\\ &\, = {\operatorname{Hom}_{R} (M, X)} M\end{aligned}$$ where ${\operatorname{Hom}_{R} (M, X)} M$ denotes the $R$-submodule of $X$ generated by elements of the form $\alpha(m)$ for $\alpha$ in ${\operatorname{Hom}_{R} (M, X)}$ and $m$ in $M$. The ideal ${\operatorname{{\tau}}_{{M}} (R)}$ is called the trace ideal of $M$ (in $R$). We say $A$ is a trace module (trace ideal) provided $A = {\operatorname{{\tau}}_{{M}} (X)}$ $(={\operatorname{{\tau}}_{{M}} (R)})$ for some $R$-module $M$. \[facts\] Note that an $R$-submodule $M$ in $X$ is a trace module in $X$ if and only if the inclusion $M\subseteq X$ induces an isomorphism ${\operatorname{End}_{R} (M)} \cong {\operatorname{Hom}_{R} (M, X)}$. Also, an ideal $I$ in $R$ is a trace ideal if and only if it is its own trace ideal; see [@Lindo1 Proposition 2.8]. \[tracepresmatrix\] One may calculate the trace ideal of a module from its presentation matrix.
Suppose $[M]$ is a presentation matrix for an $R$-module $M$ and $A$ is a matrix whose columns generate the kernel of $[M]^*$, the transpose of $[M]$. Then there is an equality: $${\operatorname{{\tau}}_{{M}} (R)} = I_1(A);$$ where $I_1(A)$ is the ideal generated by the entries of $A$; see, [@Vascaffine Remark 3.3]. The annihilator of $M$ (in $R$) is the ideal $${\operatorname{Ann}_R }M := \{r\in R| r M = 0\}.$$ \[principal\] Let $M$ be a cyclic $R$-module. Then ${\operatorname{{\tau}}_{{M}} (R)} = {\operatorname{Ann}_R }{\operatorname{Ann}_R }M$. Set $M = Rm$. The presentation matrix $[M]$ of $M$ is a $1 \times n$ matrix whose entries generate ${\operatorname{Ann}_R }m$. Maps $\alpha \in {\operatorname{Hom}_{R} (M, R)}$ induce and are induced by $1 \times 1$ matrices $[y] \in {\operatorname{Hom}_{R} (R, R)}$ such that $[y][M]= 0$. These are spanned by the generators of ${\operatorname{Ann}_R }{\operatorname{Ann}_R }{M}$. $$\xymatrix{R^n \ar@[->][rrrr]^{\left[\begin{array}{c}M \end{array}\right ]=\left[\begin{array}{cccc} x_1 & x_2 & \cdots & x_n \end{array}\right]} \ar@{-->>}[drr]&&&& R \ar@{->>}[dr]^{\hspace{.2in}\circlearrowleft} \ar@{-->}[rr]^{\left[\begin{array}{c}y \end{array}\right]} && R .\\ &&{\operatorname{Ann}_R }m \ar@{^{(}-->}[urr]^{\hspace{-1.75in}\circlearrowleft}&&& Rm \ar@{-->}[ur]_{\alpha}& \\ }$$ It follows that ${\operatorname{{\tau}}_{{M}} (R)} = {\operatorname{Ann}_R }{\operatorname{Ann}_R }M$. Let $M$ be a finitely generated $R$-module. Then ${\operatorname{{\tau}}_{{M}} (R)} \subseteq {\operatorname{Ann}_R }{\operatorname{Ann}_R }M$. Let $\{ m_1, \ldots, m_n\}$ be a generating set for $M$. For each $\alpha$ in ${\operatorname{Hom}_{R} (M, R)}$, $\alpha (M) = \displaystyle \sum_{i = 1}^n\alpha (Rm_i)$. 
By Lemma \[principal\] it follows that $$\begin{aligned} {\operatorname{{\tau}}_{{M}} (R)} &\subseteq \sum_{i=1}^n {\operatorname{{\tau}}_{{Rm_i}} (R)} \\ &= \sum_{i=1}^n {\operatorname{Ann}_R }{\operatorname{Ann}_R }{m_i}\\ & \subseteq {\operatorname{Ann}_R }{\operatorname{Ann}_R }M \qedhere \end{aligned}$$ We show ${\operatorname{{\tau}}_{{M}} (R)} = {\operatorname{Ann}_R }{\operatorname{Ann}_R }M$ when $R$ is Artinian Gorenstein; see Proposition \[refl\]. \[lem\] Given an ideal $I$ in $R$, there is an equality $I = {\operatorname{Ann}_R }{\operatorname{Ann}_R }I$ if and only if $I = {\operatorname{Ann}_R }J$ for some ideal $J$. Taking $J = {\operatorname{Ann}_R }I$ yields the forward implication. Given $I = {\operatorname{Ann}_R }J$ for some ideal $J$, the backwards implication follows from the equality $${\operatorname{Ann}_R }{\operatorname{Ann}_R }{\operatorname{Ann}_R }J = {\operatorname{Ann}_R }J. \qedhere$$ \[Anntrace\] Given an ideal $I$ in $R$, if $I = {\operatorname{Ann}_R }{\operatorname{Ann}_R }I$ then $I$ is a trace ideal. As a result, given an ideal $J$, $I= {\operatorname{Ann}_R }J$ is a trace ideal. The first statement follows immediately from the containments $$I \subseteq {\operatorname{{\tau}}_{{I}} (R)} \subseteq {\operatorname{Ann}_R }{\operatorname{Ann}_R }I.$$ The second statement follows from the first and Lemma \[lem\]. Consider $R = k[x,y]_{(x,y)}/(x^2, xy)$ for some field $k$. Note $R$ has depth zero and Krull dimension one. The ideal $(x)$ is its own trace ideal since ${\operatorname{Ann}_R }{\operatorname{Ann}_R }{(x)} = (x)$. The ideal $(y)$ is not a trace ideal since ${\operatorname{Ann}_R }{\operatorname{Ann}_R }{(y)} = (x,y)$. Main Results ============ In this section $R$ is a local Noetherian commutative ring. We identify the trace ideals of modules over Artinian Gorenstein rings as their double annihilator and characterize local Artinian Gorenstein rings in terms of their classes of trace ideals. Recall [@Mats Theorem 18.1].
In particular, \[gorenstein\] Let $(R, \mathfrak m)$ be a local Noetherian ring of Krull dimension $d$ with residue field $k$. Then the following are equivalent: 1. $R$ is Gorenstein; 2. ${\operatorname{inj \,dim}_R}R = d$; 3. ${\operatorname{depth} }R = d$ and ${\operatorname{Ext}^{d}_{R} (k, R)} \cong k$. \[Art\] There are several arguments showing that all ideals in a local Artinian Gorenstein ring are trace ideals: 1. Given an ideal $I$ in $R$, one such argument considers the exact sequence $$0 \lra I \lra R \lra R/I \lra 0.$$ Applying ${\operatorname{Hom}_{R} (\_, R)}$ yields the top exact sequence below $$\xymatrix{ \cdots \ar@[->][r] & {\operatorname{Hom}_{R} (R, R)} \ar@{->}[r] \ar@[->][d]^{\cong} & {\operatorname{Hom}_{R} (I, R)} \ar@[->][r] \ar@[->][d]_{=} & {\operatorname{Ext}^{1}_{R} (R/I, R)} \ar@[->][d]_{=} \ar@[->][r]& \cdots \\ &R \ar@{->>}[r]&{\operatorname{Hom}_{R} (I, R)} \ar@{->}[r]&0& \\ }$$ where ${\operatorname{Ext}^{1}_{R} (R/I, R)}$= 0 because $R$ is self-injective. As a result, all maps $\alpha$ from $I$ to $R$ are given by multiplication by some element $r$ in $R$. Therefore, $I$ is its own trace ideal. 2. A second argument is found in the proof of Proposition 1.2 in [@BrandtChar]. Here Brandt shows that $M$ being a trace module in $X$ implies that $M$ is an ${\operatorname{End}_{R} (X)}$-submodule of $X$ and that the converse holds when $X$ is injective. In particular, when $R$ is self-injective the trace ideals of $R$ are precisely the $R$-submodules of $R$, that is, the ideals. $(\Lra)$ Recall ${\operatorname{Hom}_{R} (M, X)}$ is an ${\operatorname{End}_{R} (X)}$-module. Thus $$\begin{aligned} {\operatorname{End}_{R} (X)} {\operatorname{{\tau}}_{{M}} (X)} & = {\operatorname{End}_{R} (X)} {\operatorname{Hom}_{R} (M, X)} M\\ & = {\operatorname{Hom}_{R} (M, X)} M\\ & = {\operatorname{{\tau}}_{{M}} (X)}\end{aligned}$$ $(\Longleftarrow)$ Say $i$ is the inclusion $M \subseteq X $ and $\phi$ is any map in ${\operatorname{Hom}_{R} (M, X)}$.
Since $X$ is injective, there exists $\bar \phi$ in ${\operatorname{End}_{R} (X)}$ such that $\bar \phi i = \phi$. By assumption $M$ is an ${\operatorname{End}_{R} (X)}$-module, so that $\phi (M) =\bar \phi i (M)= \bar \phi|_{M} (M) \subseteq M$. Therefore $M$ is a trace module in $X$; see Remark \[facts\]. 3. A third argument proceeds from Corollary \[Anntrace\] and Lemma \[GorAnn\] below. The following characterization of local Artinian Gorenstein rings is well-known; see, for example, Exercise 3.2.15 in [@BandH]. \[GorAnn\] Let $R$ be a local Artinian commutative ring. Then $R$ is a Gorenstein ring if and only if $I = {\operatorname{Ann}_R }{\operatorname{Ann}_R }I$ for every ideal $I$ of $R$. \[refl\] Let $R$ be a local Artinian Gorenstein ring and $M$ a finitely generated $R$-module. Then ${\operatorname{{\tau}}_{{M}} (R)} = {\operatorname{Ann}_R }{\operatorname{Ann}_R }M$. Every finitely generated module over an Artinian Gorenstein ring is reflexive and $M$ being reflexive implies ${\operatorname{Ann}_R }M = {\operatorname{Ann}_R }{{\operatorname{{\tau}}_{{M}} (R)}}$; see [[@Vasc1 Corollary 2.3 ]]{} and [@Lindo1 Proposition 2.8 (vii)]. Also, since $R$ is Artinian Gorenstein, by Lemma \[GorAnn\] one has $I = {\operatorname{Ann}_R }{\operatorname{Ann}_R }I$ for all ideals $I \subseteq R$. It follows that $${\operatorname{Ann}_R }{\operatorname{Ann}_R }M = {\operatorname{Ann}_R }{\operatorname{Ann}_R }{{\operatorname{{\tau}}_{{M}} (R)}} = {\operatorname{{\tau}}_{{M}} (R)}. \qedhere$$ \[main\] Let $R$ be a local Noetherian ring with maximal ideal $\mathfrak m$. Then the following are equivalent 1. $R$ is an Artinian Gorenstein ring; 2. Every ideal is a trace ideal; 3. Every principal ideal is a trace ideal. If $R$ is Artinian Gorenstein then $I = {\operatorname{Ann}_R }{\operatorname{Ann}_R }I$ for each ideal $I$ in $R$; see Lemma \[GorAnn\]. By Corollary \[Anntrace\] every ideal is a trace ideal and, in particular, every principal ideal is a trace ideal. 
Now assume every principal ideal is a trace ideal. For each $r$ in $R$ one has $${(r) = {\operatorname{Ann}_R }{\operatorname{Ann}_R }{(r)}};$$ see Lemma \[principal\]. Therefore every $r$ in $\mathfrak m$ is a zerodivisor, so ${\operatorname{depth} }R = 0$ and $\mathfrak m \in {\operatorname{Ass} \brackets}R$. Recall that the nilradical of a ring is the intersection of its minimal primes. Since ${\operatorname{depth} }R = 0$, if $\dim R >0$ then there exists a zerodivisor $x$ in $R$ which is not nilpotent. For all $n \in \mathbb{N}$, ${\operatorname{Ann}_R }{(x^n)}$ is nonzero and contained in $\mathfrak m$. Therefore $${\operatorname{Ann}_R }\mathfrak m \subseteq {\operatorname{Ann}_R }{\operatorname{Ann}_R }{(x^n)} = (x^n).$$ That is, ${\operatorname{Ann}_R }\mathfrak m \subseteq \cap_{n\in \mathbb{N}} (x^n)$ and so ${\operatorname{Ann}_R }\mathfrak m = 0$ by the Krull Intersection Theorem [@EisenbudCommAlg Corollary 5.4]. This is a contradiction because $\mathfrak m \in {\operatorname{Ass} \brackets}R$. Thus $\dim R =0$. Since $R$ is a zero-dimensional Cohen-Macaulay ring, its socle is the sum of the finitely many minimal nonzero ideals, each isomorphic to $k =R/\mathfrak m$. Since each minimal nonzero ideal is also a trace ideal, the socle of $R$ is isomorphic to $k$. Therefore $R$ is Artinian and Gorenstein. Given an Artinian ring $R$, one commonly determines if $R$ is Gorenstein by checking if its socle is one-generated over $R$. This is equivalent to checking that $k$ is a trace ideal in $R$. As a consequence of Theorem \[main\], one can use any ideal to check if $R$ is Gorenstein. In practice, given an Artinian ring $R$, $R$ is not Gorenstein if there exists an ideal $I$ in $R$ and a map $\alpha \in {\operatorname{Hom}_{R} (I, R)}$ such that ${\operatorname{Im} \brackets}\alpha \not\subseteq I$. Consider the subring $S = k[x^4, x^3y, xy^3,y^4] \subset k[x,y]$ for some field $k$. Set $R = k[x^4, x^3y, xy^3,y^4]/(x^4, y^4)$.
Then $R$ is not Gorenstein because there exists an $R$-homomorphism $$\xymatrix @R=.1pc{(x^3y)\ar@{->}[r] & (xy^3)\\ x^3y \ar@{|->}[r] & xy^3. }$$ whose image is not contained in $(x^3y)$. It is known that all ideals of grade greater than or equal to 2 are trace ideals, as are all ideals in local Artinian Gorenstein rings; see Remark 2.3 in [@Lindo1] and Remark \[Art\] above. Recently, a conjecture of Huneke and Wiegand has been verified for modules isomorphic to trace ideals in one-dimensional Gorenstein domains; see [@Lindo1 Proposition 6.8]. However, an ideal may be isomorphic to a trace ideal without being a trace ideal itself. For example, consider the ideal $I = (xy, xz)$ in $k[x,y,z]$, for some field $k$, where ${\operatorname{{\tau}}_{{I}} (R)} = (y,z)$. This investigation leads naturally to the following open questions: In which rings is every ideal isomorphic to a trace ideal? What is the class of modules isomorphic to trace ideals over\ [one-dimensional]{} Gorenstein domains? Acknowledgements {#acknowledgements .unnumbered} ================ Special thanks to Susan Loepp. Thanks also to Andrew Bydlon, Peder Thompson, Graham Leuschke, Ivan Martino and Anthony Iarrobino for several useful discussions.
--- abstract: 'We present the results of a new [*XMM-Newton*]{} observation of the interacting supernova 1995N, performed on July 27, 2003. We find that the 0.2-10.0 keV flux has dropped to a level of $1.44 \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$, about one order of magnitude lower than that of a previous [*ASCA*]{} observation performed in January 1998. The X-ray spectral analysis shows statistically significant evidence for the presence of two distinct components, that can be modeled with emission from optically thin, thermal plasmas at different temperatures. From these temperatures we derive that the exponent of the ejecta density distribution is $n \sim 6.5$.' author: - 'P. Mucciarelli' - 'L. Zampieri' - 'A. Pastorello' title: '[*XMM-Newton*]{} detects the beginning of the X-ray decline of SN 1995N' --- [SN 1995N, discovered in May 1995 several months after the explosion [@benetti95], is of special interest in the context of circumstellar medium (CSM) interacting supernovae. It is hosted in the (IB(s)m pec) galaxy MCG-02-38-017, at a distance of $\sim$ 24 Mpc. The epoch of explosion is not known, but was estimated to be about 10 months before its optical discovery [@benetti95].]{} [*XMM-Newton*]{} observation of SN 1995N ======================================== We observed SN 1995N with [*XMM-Newton*]{} for 72 ks on July 27, 2003, nine years after the explosion. Here we present a preliminary analysis of this observation [for details see @zamp04 hereafter Z04]. The X-ray pointing was coordinated with near-IR and optical observations [see @pasto04]. The X-ray data set was heavily affected by solar flares. We analysed the longer [*XMM*]{} EPIC exposures of both the MOS and pn detectors (56 and 64 ks respectively). Data were filtered using the count rate criterion $<$ 0.5 counts s$^{-1}$ for the EPIC MOS and $<$ 1.0 counts s$^{-1}$ for the EPIC pn, leaving 22 and 14 ks of useful data, respectively.
Source counts were extracted from a circular region of 20$\arcsec$ centered on the radio position of the supernova [@vd96]. Background counts were extracted from a circular region of 40$\arcsec$, on the same CCD. The source count rate was 9.7$\times 10^{-3}$ counts s$^{-1}$ for the EPIC MOS ($\sim$450 counts) and 3.3 $\times 10^{-2}$ counts s$^{-1}$ for the EPIC pn ($\sim$500 counts). EPIC MOS and pn spectra were binned requiring at least 15 counts per bin. Joint MOS and pn spectral fits were performed in the 0.2-10.0 keV interval with an overall normalization constant. Despite the low statistics, the fit with single component models was not satisfactory (see Z04). The improvement obtained by adding a [mekal]{}[^1] component to a single component spectral model was significant at the $\sim$4$\sigma$ level. The best fit was obtained with an absorbed double [mekal]{} model with column density of the interstellar medium $N_H = 1.3 \times 10^{21}$ cm$^{-2}$ and temperatures of the two thermal components $kT\simeq$ 0.8 and 9.5 keV ($\chi^2_{red} \simeq 0.76$; see Figure \[fig1\], [*left panel*]{}). We also report the detection of a faint object inside the X-ray error box of SN 1995N in the summed [*XMM*]{} OM $UVW1$ image. The object was at the limit of detectability in each single frame. Its position in the summed image is within 0.4$''$ from the radio position of SN 1995N. Within the errors of the astrometric calibration, we then identify this object with SN 1995N. The lack of reference stars in the $UVW1$ band prevented us from performing a photometric calibration of the image. Taking the detection limit count rate of a single image as a lower bound for the UVW1 flux of SN 1995N, we estimate $F_{UVW1}\ga 6.3 \times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$. X-ray variability {#var} ================= Figure \[fig1\] ([*right panel*]{}) shows the fluxes derived from all the available X-ray observations of SN 1995N.
The first X-ray observation was performed with [*ROSAT*]{} HRI on July 23, 1996 [1.3 ks; @lewin96], followed by two other exposures taken on August 12, 1996 (17 ks) and on August 17, 1997 (19 ks). The [*ROSAT*]{} fluxes were derived from the count rates reported by @fox00, assuming a [power-law]{} spectrum with photon index and column density from their fit of the [*ASCA*]{} spectra. The [*ASCA*]{} observation was performed on January 19, 1998 [83 and 96 ks, respectively for the SIS and GIS instruments; @fox00]. For [*ASCA*]{} and [*XMM*]{}, the fluxes reported in Figure \[fig1\] are the average between the different instruments. It is worth noting the large decrease in the X-ray flux between the [*ASCA*]{} and [*XMM*]{} observations (unabsorbed \[0.2-10.0 keV\] fluxes of 1.65 $\times 10^{-12}$ and 1.75 $\times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$, respectively). Discussion ========== SN 1995N is one of the few supernovae detected in X-rays at an age of $\sim$9 years. Our [*XMM*]{} observation shows that the unabsorbed X-ray flux of SN 1995N dropped to a value of $1.75 \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$, about an order of magnitude lower than that of the previous [*ASCA*]{} observation performed $\sim$6 years before. The decline of the X-ray flux signals that SN 1995N has probably started to evolve towards the remnant stage. Interpreting the evolution of the X-ray light curve is not straightforward. A complex scenario in which a two-phase (clumpy and smooth) CSM contributes to the observed X-ray emission is consistent with the available data (see Z04). The EPIC spectrum of SN 1995N shows statistically significant evidence for the presence of two distinct thermal (MEKAL) components at temperatures of 0.8 and 9.5 keV, respectively. In the standard model of ejecta/wind interaction [@cf94], these represent the temperatures of the gas between the contact discontinuity and the reverse/forward shock.
The temperature of the hotter phase is similar to the temperature of the single-component spectral fit of the [*ASCA*]{} data performed by @fox00. Within the assumptions of the standard model, we can derive the exponent of the ejecta density distribution $n$ from the expression $T_1/T_2 = (3-s)^2/(n-3)^2$, where $s$ is the exponent of the CSM density distribution [@fran96]. Assuming a constant and homogeneous stellar wind ($s=2$), from the values of the temperatures $T_1$ and $T_2$ inferred from the X-ray spectrum, we obtain $n\sim6.5$.

Benetti, S., Bouchet, P., Schwarz, H., 1995, IAU Circ. 6170

Chevalier, R.A., Fransson, C., 1994, ApJ, 420, 268

Fox, D.W., Lewin, W.H.G., Fabian, A., et al., 2000, MNRAS, 319, 1154

Fransson, C., Lundqvist, P., Chevalier, R.A., 1996, ApJ, 461, 993

Lewin, W.H.G., Zimmermann, H.U., Aschenbach, B., 1996, IAU Circ. 6506

Pastorello, A., Aretxaga, I., Zampieri, L., et al., 2004, these proceedings

van Dyk, S.D., Sramek, R.A., Weiler, K.W., et al., 1996, IAU Circ. 6386

Zampieri, L., Mucciarelli, P., Pastorello, A., et al., 2004, MNRAS submitted

[^1]: The [mekal]{} model is the spectrum emitted by an optically thin, thermal plasma.
Proposed Solution and Research Idea {#chap:mandala}
===================================

The problem statement and research questions provided guidance on what to address when designing *Mandala*, while the Hypotheses promise a solution. This section showcases ideas and concepts, based on the Hypotheses, which are intended to represent the foundational aspects of *Mandala*. A preliminary design for *Mandala* is presented with the help of code examples which showcase a Token, such as a new cryptocurrency. Also introduced is a Purse that allows anybody to deposit such Tokens in it, but only the owner of the Purse to withdraw them.

Mandala Core Design Philosophy {#sec:mandalaCorePhil}
------------------------------

*Mandala*’s core goal is not to replace smart contract programming languages like Solidity or Vyper. The existing EVM-based programming languages try to give the developer access to the underlying blockchain technology and expressly do so by giving the developer as many tools and freedoms to solve a problem as possible. As already mentioned, this approach has the drawback that it is easier to introduce bugs leading to unexpected behaviour and that it is harder for an auditor to spot these bugs in a smart contract [@Hibryda:2016]. *Mandala*’s core design goal is to reduce the possibilities of unexpected behaviour introduced by accident and to make it easy for an auditor to reason about the code. *Mandala* should still be as expressive as possible and cover all features and concepts necessary to be usable as a practical smart contract language. In case of a trade-off between expressiveness on one side and safety or auditability on the other, *Mandala* prefers increasing safety or auditability. In the following, two qualitative attributes are defined, which are used to evaluate the robustness against bugs and attacks, with an explanation of what they mean in the context of *Mandala*.
**Safety** means that the programming language is designed in a way that prevents specific exploits and makes it easy to write safe code and hard to introduce exploitable code without intention. The obvious way to code something in *Mandala* should always be the safe way to code it. *Mandala* should eliminate the risk that a developer can write code that is vulnerable to specific well-known exploits, like those presented in Section \[sec:chal\] Motivating Problems. Furthermore, *Mandala* selects its feature set primarily based on safety aspects and only secondarily on expressiveness and performance aspects.

**Auditability** means that it is easy to understand what a piece of code does and to conclude how it would behave if executed. *Mandala* focuses on local code, meaning that an auditor does not need to know the whole program to reason about an individual piece of code. If context information is needed to audit a piece of code, it should always be clear where to look for it, and it should be unambiguous how it influences the currently audited code. Besides manual auditing, auditability includes the ability of *Mandala* code to be analysed by other programs, like automatic bug finders or formal verifiers.

Mandala Language Design by Example
----------------------------------

This section presents *Mandala* with the help of code examples, followed by descriptions of the concepts and features used in the corresponding examples. This is not meant to cover all features that *Mandala* should have in the end.

### Token {#sec:Token}

The Token module in Listing \[lst:Token\] provides a type that can be used to represent tokens, including the necessary functions needed to split, merge and create those tokens.
    module Token {

        type Drop Persist Token[T](UInt)

        risk NumericOverflow
        public merge[T](Token[T](amount1), Token[T](amount2)) => {
            Token[T](amount1 + amount2)
        }

        risk NumericUnderflow
        public split[T](Token[T](amount), split:UInt) => {
            (Token[T](amount-split), Token[T](split))
        }

        protected[T] mint[T](amount:UInt) => Token[T](amount)
        public default[Token] zero[T]() => Token[T](0)
    }

**Modules** Line 1 in Listing \[lst:Token\] declares a module, which encapsulates components (types, capabilities and functions) between the curly braces. Modules are *Mandala*’s way to group related code together, and a module is immutable after deployment, meaning that no components can be added, modified or removed once its content is declared. Additionally, components defined in the same module have extended rights regarding each other and can perform specific actions that components from other modules cannot.

**Types** On line 3 in Listing \[lst:Token\] a type component is declared. *Mandala* uses algebraic data types (ADTs) as its core value representation. An ADT is a type that has multiple constructors, and each of the constructors can have multiple fields. When a value is created via a constructor, the values supplied for the fields are stored in the value and can later be accessed by unpacking the value. A value can be unpacked by providing a piece of code for each constructor; at runtime, only the code corresponding to the constructor used to create the value is executed and gets access to the parameters used to construct it. The type Token in the example has only one constructor, which takes an unsigned integer as field parameter. The type declaration on line 3 in Listing \[lst:Token\] is generic, meaning it can be parameterised over another type, denoted T in the example. This means that the declared type does not represent a single type but a full type family. For example, Token\[Eth\] and Token\[Btc\] are two different types which belong to the same family.
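As a loose analogy (Python rather than Mandala, with placeholder marker types Eth and Btc that are not part of the original), a generic single-constructor wrapper shows how one declaration yields a whole type family:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

# A single-constructor "ADT" with one unsigned-integer field,
# parameterised over a phantom type T.
@dataclass(frozen=True)
class Token(Generic[T]):
    amount: int

class Eth: pass
class Btc: pass

# Token[Eth] and Token[Btc] are two members of the same type family.
eth_balance = Token[Eth](10)
btc_balance = Token[Btc](5)
```

Unlike Mandala, Python does not statically prevent mixing the two instantiations; the sketch only illustrates the one-declaration-per-family idea.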
An ADT value can only be created by the module defining the type, unless the ADT is marked with open, in which case anybody can create new instances.

**Capabilities** By default, all values are restricted in how functions can interact with them. The only things a function can do with a value are passing it to another function or using it as an argument to create another ADT. Even accessing its fields, making a copy of it or throwing it away is forbidden. To allow further operations, *Mandala* provides so-called capabilities, which can be attached to values and are tracked statically by the type system. A capability can only be attached by the module defining the capability, unless the capability is marked with open, in which case the module defining the type to which the capability is attached can do so as well. Detaching a capability, on the other hand, can be done by anybody. Two values that have the same base type but different capabilities are treated as different types by the type system. Besides predefined capabilities, *Mandala* supports custom capabilities (see line 3 in Listing \[lst:Purse\]), which can be used as an access control mechanism to protect access to components in a module. The type on line 3 in Listing \[lst:Token\], for example, has by default the capabilities Drop and Persist. This allows everybody to drop the value without using it (Drop) and enables the value to be persisted (Persist) (see Section \[sec:Purse\]); but since nobody can make copies of the value and only the defining module can create new values, it is well suited to represent a token or another asset.

**Functions** Besides types, a *Mandala* module can contain functions, which are similar to functions from well-established languages. To make interaction with ADTs easier, they can be unpacked at the place where they are received as a parameter (see lines 6 and 11 in Listing \[lst:Token\]).
Like types, functions can have generic type parameters, which allow defining a function once for a whole family of types. In *Mandala*, functions cannot be recursive and, furthermore, support only static dispatch, meaning that it is known during compilation what code will be executed when a function call is executed. Non-recursivity is enforced by allowing only calls to already deployed functions, which requires that functions in the same module be deployed one after another. The function on line 16 in Listing \[lst:Token\] is marked with default, which tells the *Mandala* compiler that it should use this function when a default value for the Token type is needed (see Section \[sec:Purse\]). Every function has a visibility, which defines who can call that function. The functions on lines 6, 11 and 16 in Listing \[lst:Token\] are public and can be called by anyone. A private function could only be called from the same module. The mint function on line 15 in Listing \[lst:Token\] has the protected visibility, which is linked to the function’s generic type parameter T. Code can only call a protected function if its module defines the type to which the protected visibility is linked. The mint function, for example, can only be used to mint tokens by the module defining the token’s type. As an example, a Token\[Eth\] can only be minted by the module defining the Eth type.

**Error Handling** The merge and split functions in Listing \[lst:Token\] on lines 6 and 11 allow taking a Token and splitting it in two, or taking two Tokens and merging them. These two functions ensure that the balances of their incoming and outgoing tokens sum up to the same amount. *Mandala* uses safe arithmetic, and thus an underflow or overflow error can happen, which is represented by the risk declarations on lines 5 and 10 in Listing \[lst:Token\]. When an error occurs, the progress made in the function is rolled back.
The caller of the function receives the initial arguments to the function (to preserve non-copyable arguments) together with an error code as the return value. The caller can decide to either handle the error or make a rollback itself and forward the error.

### Purse {#sec:Purse}

The Purse module in Listing \[lst:Purse\] provides a type that can be used to deposit and withdraw tokens, similar to a bank account, including the necessary access control mechanisms to keep the funds safe.

    import Token.*
    module Purse {
        open capability Withdraw

        open type Persist Withdraw Copy Drop Purse[T](
            Persist Copy Drop Modify Ref[Persist Drop Token[T]]
        )

        risk NumericOverflow
        public active deposit[T](Purse[T](tokenRef), deposit:Token[T]) => {
            modify tokenRef with Token(t) => merge(t, deposit)
        }

        risk NumericUnderflow
        public active withdraw[T](Withdraw Purse[T](tokenRef), amount:Int) => {
            modify tokenRef with Token(t) =>
                case split(t,amount) of (rem,split) => rem & return split
        }
    }

**Imports** Line 1 in Listing \[lst:Purse\] shows how functions and types from other modules can be imported so that they can be used in another module. The example code imports the Token module, which is colocated in the same namespace as the Purse module. In the end, *Mandala* will provide a more elaborate namespace and import system able to handle a large number of modules. For simplicity, the remaining examples will skip the imports and assume that all necessary components are imported.

**Cells and References** Line 5 in Listing \[lst:Purse\] is a type declaration similar to the Token from the previous example. The most significant difference is that it uses an argument of type Modify Ref\[Token\[T\]\] for its sole constructor. Values of type Ref\[X\] represent a reference to a cell persisting a value of type X. There can be multiple references to the same cell, allowing the stored value to be shared. The Modify capability allows using the reference to exchange the value stored in the cell with another value.
References without that capability can only read the value. A cell can only store a value of a type with the Persist capability. Everybody can generate a cell, but initially it would be empty or filled with the default value if one is defined, as is the case for Tokens (see Section \[sec:Token\]). The creator of a new cell only gets a reference to the cell, not the cell itself.

**Effect System** The functions on lines 10 and 15 in Listing \[lst:Purse\] have a new keyword called active. By default, any function is pure, which means that given the same arguments it will always return the same result and does not have any side effect. As these two functions modify a cell, they have a side effect and thus must be marked active. Besides active, there is the modifier init, which is similar to pure except that the function is allowed to create new cells. Functions marked with dependent can additionally read values from cells, and active ones can even write values to cells. This is an effect system, which makes it easier for an auditor to see what is going on and can further prevent errors. The modify with expressions on lines 11 and 16 in Listing \[lst:Purse\], for example, take a pure expression that receives the current cell’s content and returns a new value to be written back into the cell. As this expression is pure, it is guaranteed that it does not interact with other cells while the value is under modification. This prevents shared-state-based attacks.

### Purse Storage {#sec:Store}

The PurseStorage module in Listing \[lst:Store\] provides functionality that allows everybody to retrieve the Purse associated with an individual identity and deposit Tokens in it, or even withdraw Tokens if he or she is the owner of the Purse.
    module PurseStorage{
        open type Store[T](Copy Context[Token[T]])

        public getMyPurse[T](id:Master ID, Store[T](c)) => Purse(derive(c,id))
        public getPurse[T](id:ID, Store[T](c)) => Purse(derive(c,id)).detach[Withdraw]

        risk NumericOverflow
        risk NumericUnderflow
        public active transfer[T](src:Master ID, to:ID, store:Store[T], value:Int) => {
            deposit(
                getPurse[T](to,store),
                withdraw(getMyPurse[T](src,store), value)
            )
        }
    }

**Identification** In Listing \[lst:Store\] on lines 4, 5 and 9 a new type called ID and a capability called Master are used. The ID type is a primitive type similar to the address type used in Solidity and the Ethereum virtual machine. Everybody can generate an ID if they know the corresponding identification string. However, an ID with the Master capability cannot generally be generated. Every keypair used to access the blockchain running *Mandala* is associated with an ID. To obtain a value of the type Master ID that can be used in a transaction, the transaction has to be signed with the associated private key. Besides using a private key, Master IDs can be created by calling the new function in the ID module, which then produces a unique ID with the Master capability. The new function guarantees that it will never produce the same ID twice and that each ID is different from all IDs associated with private keys.

**Contexts** In Listing \[lst:Store\] on line 2 a type called Context is introduced. A Context is similar to an ID in that new unique ones can be created by calling a new function in the Context module. Unlike IDs, Contexts have a generic type parameter, and as such there exists a whole family of Context types, one for each existing type. The primary purpose of a Context is to associate an ID with a reference. Given an ID and a Context\[T\], someone can call the derive function (see lines 4 and 5 in Listing \[lst:Store\]) to generate a reference to a cell containing a value of type T.
Using the same ID and Context as derive input always results in the same reference, while using a different combination always yields a different reference. A Context can be seen as a storage area for cells where each cell is associated with an ID. From this viewpoint, the Store type provides a one-to-one association between IDs and Purses.

### Token Instantiation {#sec:Inst}

The MyFixSupplyToken module in Listing \[lst:Instance\] declares a new Token with the help of the previously presented modules and deposits the initially minted Tokens into the deployer’s Purse.

    module MyFixSupplyToken {
        public type MyToken

        public val defaultStore = Store[MyToken](Context.new[Token[MyToken]]())

        risk NumericOverflow
        init(deployer:Master ID) => deposit(
            getMyPurse(deployer,defaultStore),
            mint[MyToken](100000000)
        )
    }

**Constant Values** For *Mandala* to be useful, there has to be a way to store a value globally such that it is accessible without already possessing another value (as is the case with cells and references). Depending on the blockchain model in which *Mandala* is used, this could be delegated to another layer. For example, a UTXO-based blockchain could store *Mandala* values in UTXOs, and an account-based blockchain could store them in the accounts. To be independent of a specific blockchain model, line 4 in Listing \[lst:Instance\] shows an alternative where *Mandala* provides a top-level storage slot called val. A val represents a constant that is initialised when the contract is deployed and never changes afterwards. To be independent of the point in time when the module is deployed, the val’s initialisation expression must be of pure or init effect. As a val can be used more than once over the span of multiple transactions, its content must have the Copy and Persist capabilities.

**Initialisation** In Listing \[lst:Instance\] on line 7 an initialisation function that has precisely one Master ID parameter is provided.
This function is executed exactly once, when the module is deployed. The received Master ID is the one associated with the deployer of the module. This provides a hook to initialise the module. If the initialisation produces an error, then the module will not be deployed. In Listing \[lst:Instance\] on lines 8 to 9, for example, a fixed amount of Tokens is created and put into the Purse of the deployer.

Discussion of the Mandala Language Proposal
-------------------------------------------

The two core qualitative goals of *Mandala* concern safety and auditability. This section shows how the presented approach for *Mandala* aims to achieve these goals.

### Safety

Safety is concerned with eliminating the risk that a developer introduces bugs, flaws and other problems into the code without intending to do so. Section \[sec:chal\] Motivating Problems provided examples of such problems that are a concern in other smart contract programming languages. This section looks at these problems and shows how *Mandala* addresses them.

**Open Execution Environment** By default, *Mandala* values have no capabilities, and nearly nothing can be done with them. The defining module can still do more with a value, as it is entitled to attach open capabilities or capabilities defined in the same module. This allows declaring value types that enforce guarantees even against code not yet deployed. Further, *Mandala* uses such values as capabilities to control who can do what with them. This makes it possible to enforce that a module or function must be handed a capability willingly before it can interact with the protected resources, enabling a programming style where no manual protection layer specifying what has to be done to acquire access to a resource has to be provided.
This has the advantage that nothing has to be known about other, potentially later deployed code to be protected from it, as all code plays by the same rules independently of when, where and by whom it is deployed.

**Reentrancy** Reentrancy problems occur when state shared between function invocations does not maintain its invariants in intermediate states, which an attacker can then leverage. The classical reentrancy attack, where the same function or a function in the same contract (module in the case of *Mandala*) is called, cannot happen at all in *Mandala*, as *Mandala* supports only static dispatch and does not allow recursive calls or other circular dependencies. As *Mandala* cells are accessed over a reference, multiple modules could obtain a reference and gain access to the same shared state. To prevent inconsistent invariants of a cell during its modification, *Mandala* provides a particular mechanism to modify a cell where the modification is represented as a pure state transition function. This guarantees that while a cell is modified in this way, no other function can access any other cell. A developer can misuse the modify mechanism to reopen the code to these kinds of attacks, but doing so would require deliberate, sophisticated effort and would be easy for an auditor to spot and investigate.

**Exception Handling** *Mandala* does not inherit the weaknesses of Solidity and other EVM-based languages with respect to error handling, because errors are not communicated over the classical return path, and thus a caller can both handle the error and consume the return value on success. *Mandala* further enforces that all potential errors are documented in the function signatures, and thus the caller must always either handle the error case or explicitly declare the error again, delegating the error handling to its own caller.
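As an illustrative sketch (Python rather than Mandala; the Ok/Err helpers and UINT_MAX bound are our own, not from the proposal), this error model can be pictured as a result type that carries either a success value or the original arguments plus a declared error code:

```python
from dataclasses import dataclass
from typing import Generic, Tuple, TypeVar, Union

R = TypeVar("R")

@dataclass
class Ok(Generic[R]):
    value: R

@dataclass
class Err:
    args: Tuple   # the caller gets its (non-copyable) arguments back
    code: str     # declared error code, e.g. "NumericOverflow"

UINT_MAX = 2**64 - 1

# "risk NumericOverflow": the potential error is part of the signature.
def merge(a: int, b: int) -> Union[Ok, Err]:
    if a + b > UINT_MAX:
        return Err(args=(a, b), code="NumericOverflow")
    return Ok(a + b)

# The caller must inspect the result: handle the error or forward it;
# it cannot accidentally consume a bogus return value on failure.
res = merge(3, 4)
assert isinstance(res, Ok) and res.value == 7
```

The key property mirrored here is that the error path returns the untouched arguments alongside the error code, rather than hijacking the success return value.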
**Type Casts** *Mandala* has a type system without the concept of a typecast, as any value is of exactly one type. It could be argued that attaching and detaching capabilities is a typecast, but these operations are checked at compile time and cannot be misused to execute unexpected code, as is possible in Solidity. *Mandala*’s type system ensures that all type-related errors are detected during compilation and no type-related errors can occur at runtime.

**Transferring Ether** *Mandala* will provide a completely independent definition of Ether (respectively, the native cryptocurrency of the targeted blockchain), realised over the Token framework from Sections \[sec:Token\], \[sec:Purse\] and \[sec:Store\]. As such, Ether does not get any special treatment, and the only way to transfer it is to call a function that takes a Token\[Eth\] as a parameter. This guarantees that the receiver is always equipped to handle the Ether and can react appropriately.

**Contract Selfdestruction** *Mandala* does not know the concept of a contract; a *Mandala* module is the nearest thing to one. Modules can be created but never destroyed, preventing any self-destruct-related problems, as this concept simply does not exist. If cells are considered the state part of a contract, then a cell either has a default value, or the compiler forces the developer to specify the behaviour for the case where the cell is empty.

### Auditability

*Mandala* has certain core aspects that make the job of an auditor easier. The first is that *Mandala* does not have dynamic method dispatch and does not support recursion, not even indirect recursion. This means that an auditor can know for sure what code is executed by a function call, and even the deployment of other code in the future cannot change that. Second, the non-recursivity assures an auditor that all code executes and terminates without having to consider unexpected events like out-of-gas exceptions (common in EVM-based languages).
This is the case because *Mandala* allows enforcing upfront (before the transaction is executed) that enough resources are provided for any possible execution path. Moreover, *Mandala*’s strong static type system with its restricted types tells an auditor precisely what can be done with values of a type, without requiring them to inspect any code that interacts with the value. This makes it possible to analyse a module and check its integrity even if the auditor does not know how other code will use the module and its types. Lastly, the improved exception handling system (compared to Solidity and co.) makes it easier for an auditor to check that every corner case is handled correctly, since exceptional cases have to be declared in the function signature. As many problems are related to unwanted state changes, *Mandala*’s effect system can indicate to an auditor where to look for these problems and which functions can be safely ignored when investigating state-related problems.
---
abstract: 'Using a geometric measure of entanglement quantification based on the Euclidean distance of Hermitian matrices [@patel2016geometric], we obtain the minimum distance between a bipartite bound entangled $n$-qudit density matrix and the maximally mixed state. This minimum distance, within which entangled density matrices necessarily have positive partial transpose (PPT), is obtained as $\frac{1}{\sqrt{\sqrt{d^n(d^n-1)}+1}}$, which is also a lower limit for the existence of 1-distillable entangled states. The separable states necessarily lie within a minimum distance of $\frac{R}{1+d^{n-1}}$ from the identity [@patel2016geometric], where $R$ is the radius of the closed ball homeomorphic to the set of density matrices; this is smaller than the limit for PPT bound entangled states. Furthermore, an alternate proof of the non-emptiness of the set of PPT bound entangled states is given.'
author:
- Shreya Banerjee
- 'Aryaman A. Patel'
- 'Prasanta K. Panigrahi'
title: 'The Minimum Distance of PPT Bound Entangled States from the Maximally Mixed State.'
---

\[sec:level1\]INTRODUCTION
==========================

Characterization of entanglement is of deep interest in the field of quantum information and quantum computation [@bruss2002characterizing; @PhysRevLett.103.240502; @jaeger2009entanglement]. As is well known, there are two types of entangled states: distillable and non-distillable [@PhysRevA.53.2046]. Distillable entangled states find application in quantum technology: quantum teleportation [@PhysRevLett.76.722; @Chen2017], quantum error correction [@imai2006special; @Lipka-Bartosik2016], quantum cryptography [@PhysRevLett.94.040503; @acin2003security; @Gao2017], etc. The other class of entangled states, which cannot be distilled, is called bound entangled; such states have found application in steering and in ruling out local hidden state models [@PhysRevLett.113.050404].
The Peres-Horodecki criterion provides a necessary and sufficient condition for separability in $2\otimes 2$ and $2\otimes 3$ dimensions. It fails to identify all separable states in higher dimensions: there, the criterion can only identify the states that have positive partial transpose (PPT), and entangled states among these are, by definition, bound entangled [@PhysRevA.61.062312]. There has been no straightforward method to separate bound entangled states as a class. A deeper understanding of this class of states is thus of high importance from both fundamental and application perspectives. There have been various approaches to analyze the geometry of the quantum state space [@braunstein1995geometry; @zyczkowski2001induced; @zyczkowski2001monge] and entanglement measures based on geometry [@patel2016geometric; @ozawa2000entanglement; @heydari2004entanglement; @PhysRevLett.77.1413; @goswami2017uncertainty]. Geometry has also been used in quantum computation to form new algorithms [@LaGuardia2017; @Holik2017]. Recently, quantification of entanglement has been carried out from a geometric perspective for general $n$-qudit states [@patel2016geometric]. There is also another approach using the wedge product, which manifests naturally in a geometric setting [@bhaskara2017generalized]. This geometric approach makes essential use of the fact that measurement of a subsystem of an entangled state necessarily affects the remaining constituents, in contrast to separable states. Using the geometry of $N=d^n$-dimensional positive semidefinite matrices, here we establish a criterion, valid in arbitrary dimensions, for separating PPT bound entangled states. Interestingly, this class of states can be associated with almost every pure entangled state [@PhysRevA.75.012305]. Previously, the lower limit for separable states was established in Ref. [@patel2016geometric]. As is well known, the PPT criterion is conclusive only for dimensions less than or equal to $6$.
Here we provide the geometric lower bound, for arbitrary dimensions, within which every state is PPT. For dimensions greater than $6$ it gives the distance from the maximally mixed state within which any entangled state is necessarily bound entangled. The paper is organized as follows: Sec. II describes the classification of states based on their partial transpose and the general geometry of density matrices. The spectrum of the partially transposed matrix of a pure state is discussed in Sec. III. In Sec. IV the boundary of the PPT bound entangled states has been calculated, along with an alternate proof of the non-emptiness of the set of PPT bound entangled states. We then conclude in Sec. V with directions for future work.

CLASSIFICATION AND MEASUREMENT BASED GEOMETRY OF N DIMENSIONAL DENSITY MATRICES
================================================================================

A general state $\rho$ acting on $H_A \otimes H_B$ can be written as [@djokovic2016two], $$\rho= \sum_{ijkl}p^{ij}_{kl} \ket{i}\bra{j}\otimes \ket{k}\bra{l},$$ with its partial transpose defined as, $$\rho^{T_B}=\mathds{I}\otimes T(\rho)=\sum_{ijkl}p^{ij}_{kl} \ket{i}\bra{j}\otimes \ket{l}\bra{k}.$$ Here $\mathds{I}\otimes T(\rho)$ is the map that acts on the composite system with the identity map acting on system A and the transposition map acting on B. $\rho$ is called PPT if its partial transpose $\rho^{T_B}$ is a positive semi-definite operator. If $\rho^{T_B}$ has a negative eigenvalue, it is called NPT. It is known from the Peres-Horodecki criterion that for $2\otimes2$ and $2\otimes3$ dimensions, all PPT states are separable and all NPT states are entangled. For arbitrary $n\otimes m$ dimensions, some PPT states show entanglement, whereas all NPT states are necessarily entangled [@djokovic2016two].
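The partial transpose defined above is easy to evaluate numerically. The following sketch (NumPy; the helper name is ours, not from the paper) swaps the two B indices for a two-qubit Bell state and finds a negative eigenvalue, so the state is NPT and hence entangled:

```python
import numpy as np

def partial_transpose_B(rho, dA, dB):
    """Transpose subsystem B: view rho with indices (i, k, j, l) and swap k <-> l."""
    return (rho.reshape(dA, dB, dA, dB)
               .transpose(0, 3, 2, 1)
               .reshape(dA * dB, dA * dB))

# Bell state |Phi+> = (|00> + |11>)/sqrt(2), a maximally entangled two-qubit state.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(phi, phi)

eigs = np.linalg.eigvalsh(partial_transpose_B(rho, 2, 2))
print(eigs)  # smallest eigenvalue is -0.5 -> NPT, hence entangled (Peres)
```

For a separable state, e.g. $\frac{\mathds{I}}{4}$, the same routine returns only non-negative eigenvalues.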
For a bipartite state $\rho$ acting on $H=H_A\otimes H_B$ and for an integer $k\geq1$, $\rho$ is k-distillable if there exists a (non-normalised) state $\ket{\psi}\in H^{\otimes k}$ of Schmidt rank at most 2 such that, $$\bra{\psi}\sigma^{\otimes k} \ket{\psi}<0 , \qquad \sigma=\mathds{I}\otimes T(\rho).$$ $\rho$ is distillable if it is k-distillable for some integer $k \geq 1$ [@PhysRevA.61.062312]. If a state $\rho$ is PPT, it is non-distillable; hence entangled PPT states have no distillable entanglement. Such states are called PPT bound entangled states. All distillable entangled states are NPT. Whether the converse holds, i.e. whether all NPT states are distillable, is not known; it is believed that the converse does not hold [@PhysRevA.61.062312].

![Diagrammatic representation of the set of all mixed states, for a general arbitrary Hilbert space; the shaded region is the set of all PPT states and the white portion is the set of all NPT states.](102.jpg)

The Euclidean distance between any two Hermitian matrices $\rho$ and $\sigma$ is given by [@patel2016geometric], $$D(\rho , \sigma)= \sqrt{Tr{(\rho-\sigma)}^2}.$$ The set of all density matrices of order N is considered as a convex compact set embedded in the closed ($N^2-1$)-ball $\mathds{B}^{N^2-1}$ of radius $\sqrt{\frac{N-1}{N}}$, centred at the normalised identity $\frac{\mathds{I}}{N}$. This set always admits a regular $N-1$ simplex as one of its orthogonal bases. The convex hull of a basis is represented by a regular $n$-simplex centred at the normalised identity $\frac{\mathds{I}}{N}$ and circumscribed by $\mathds{B}^{N^2-1}$, where $N-1\leq n \leq N^2-1$. Each density matrix can be treated as a point in a simplex whose vertices are pure states.
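For illustration (NumPy; the helper name is ours), the distance formula and the stated ball radius can be checked for a pure state, which sits on the surface of the ball around the maximally mixed state:

```python
import numpy as np

def D(rho, sigma):
    """Euclidean (Hilbert-Schmidt) distance sqrt(Tr (rho - sigma)^2)."""
    diff = rho - sigma
    return np.sqrt(np.trace(diff @ diff).real)

N = 4
pure = np.zeros((N, N)); pure[0, 0] = 1.0   # the pure state |0><0|
max_mixed = np.eye(N) / N                   # maximally mixed state I/N

# A pure state lies exactly at the radius sqrt((N-1)/N) of B^{N^2-1}:
# Tr(rho - I/N)^2 = Tr rho^2 - 2/N + 1/N = 1 - 1/N for a pure rho.
assert np.isclose(D(pure, max_mixed), np.sqrt((N - 1) / N))
```

Mixed states lie strictly inside this radius, since $Tr\rho^2 < 1$ for them.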
![Orthogonal basis represented by a 3-simplex, a regular tetrahedron, for N=4 case.](Selection_010.png) Using this geometry, it has been shown that all $n$ qudit density matrices whose distance from $\frac{\mathds{I}}{N}$ is less than or equal to $\frac{1}{1+d^{n-1}}\sqrt{\frac{d^n-1}{d^n}}$ are separable [@patel2016geometric]. A bipartite $n$ qudit density matrix $\rho$ with bi-partitions A-B is considered. To find out the separability criterion for $\rho$, a measurement on one of the bipartitions is done [@patel2016geometric]. Then one checks whether both of the post-measurement reduced density matrices $\rho_A$ and $\rho_B$ localize to the simplices of the corresponding dimensions. The distance between the reduced density matrices and the centre of the closed ball homeomorphic to the corresponding simplex is measured using Eq. 3, and if it lies within the bound given in ref. [@patel2016geometric] then the state is certainly separable. A similar approach has been used in [@bhaskara2017generalized], where a bipartite $n$ qudit pure state is projected onto a basis consisting of two orthonormal bases to check the separability of the state. A subset $C$ of a vector space $V$ is called a cone if for every $x \in C$ and every positive scalar $\alpha$, $\alpha x \in C$. A cone $C$ is called a convex cone if $\alpha x + \beta y \in C$ for all $x, y \in C$ and all $\alpha, \beta > 0$. The defining property of an $N \times N$ positive semidefinite matrix $M$ is that the scalar $x^T M x$ is non-negative for each column vector $x$ of $N$ real numbers. If P is the set of all symmetric positive semi-definite matrices, then $\forall X, Y \in P $ and $ \alpha,\beta > 0 $,\ $ x^T(\alpha X + \beta Y)x=\alpha x^T Xx+ \beta x^T Yx \geq 0 $, i.e., P is a convex cone. The set of the symmetric positive semidefinite (PSD) matrices of order $N\times N$ forms a convex cone $S_N$ in $\mathds{R}^{N^2}$.
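The closure of $S_N$ under conic combinations is easy to spot-check numerically (an illustration of the argument above, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_psd(n):
    """A random symmetric positive semidefinite matrix of the form A A^T."""
    a = rng.normal(size=(n, n))
    return a @ a.T

# conic combinations of PSD matrices stay PSD: the cone S_N is convex
X, Y = random_psd(4), random_psd(4)
alpha, beta = rng.uniform(0.1, 5, size=2)
eigs = np.linalg.eigvalsh(alpha * X + beta * Y)
print(eigs.min() >= -1e-10)  # True
```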
A few interesting properties of this cone are, \(a) it has a non-empty interior containing the positive definite matrices, which are full rank, \(b) its boundary consists of the singular positive semidefinite matrices with at least one zero eigenvalue. The origin of this cone is identified as the only matrix with all eigenvalues zero, which is equidistant from each point on the surface of the $\mathds{B}^{N^2-1}$ ball. This is only possible when the ball is embedded in a subspace of the cone. The intersecting region of the ball and the cone is then in a dimension $N-1$. If, after taking a transposition of one of the subsystems, the post-transposition density matrix lies within the cone formed by the positive semidefinite matrices, then the state is PPT. Each $N\times N$ positive semidefinite matrix is associated with a quadric. One can represent a diagonalised symmetric matrix of order $2\times 2$ as a conic using the characteristic equation of the matrix. Let us consider the matrix $$\begin{bmatrix} a & 0\\ 0 & b \end{bmatrix}$$ where $a$ is an eigenvalue of the matrix with respect to the eigenvector $$\begin{bmatrix} x_{1}\\ x_{2} \end{bmatrix}$$ The corresponding equation of the conic will be, $$a{x_1}^{2}+b{x_2}^{2}=a$$ Each positive definite diagonalised matrix in $3\times 3$ dimension will form an ellipsoid, and each positive semidefinite matrix will either be a set of intersecting planes or parallel planes. The origin with all three eigenvalues zero will give a point. The set of $n$ qudit density matrices is represented by convex sets homeomorphic to a closed ball of radius $R$ centred at the maximally mixed state, the normalised identity matrix of $d^n\otimes d^n$ dimension. Expectedly this contains entangled states with positive and negative partial transpose. If the partial transpose of a density matrix is positive, then it will lie within the $S_N$ cone.
Now the minimum distance of the $S_N$ cone from the maximally mixed state placed at the centre of the $\mathds{B}^{{N^2}-1}$ ball is the distance for which the density matrix would definitely be PPT. SPECTRUM OF THE PARTIALLY TRANSPOSED MATRIX OF A PURE STATE =========================================================== The spectrum of the partial transposition of a pure state has been given in [@PhysRevA.58.883]. We consider the density matrix of a pure state $\rho=\ket{\psi}\bra{\psi}$. The Schmidt decomposition of $\ket{\psi} \in H= H^m \otimes H^n$ is given by, $$\ket{\psi}=\sum_i \alpha_i \ket{e_i}\otimes\ket{f_i}$$ where $\ket{e_i} \otimes \ket{f_i}$ forms a bi-orthogonal basis, i.e., $\bra{e_i}\ket{e_j}=\bra{f_i}\ket{f_j}=\delta_{ij}$ and $0\leq \alpha_i \leq 1$ along with $\sum_i \alpha _i^2=1$.\ The partial transposition of $\rho$, $\rho^{T_B}$, has eigenvalues $\alpha_i^2$ for $i=1,2,\ldots,r$, where $r$ is the Schmidt rank; $\pm\alpha_i\alpha_j$ for $1\leq i < j \leq r$; and $0$ with multiplicity $\min(m,n)\lvert m - n\rvert + \min(m,n)^2 - r^2$. Also, all eigenvalues of the partial transposition of any $m\otimes n$ state always lie within \[-1/2, 1\] [@PhysRevA.87.054301]. DISTANCE OF THE PARTIAL TRANSPOSE OF AN N DIMENSIONAL MATRIX FROM NORMALISED IDENTITY ------------------------------------------------------------------------------------ We consider a bipartite $n$ qudit density matrix $\rho$. If the partial transpose of $\rho$, $\rho^{T_B}$, is positive semidefinite, then $\rho$ is PPT. One can infer that the minimum distance between the $S_N$ cone and the centre of the $\mathds{B}^{N^{2}-1}$ ball is the lower limit of the distance between $\rho^{T_B}$ and the maximally mixed state. Any density matrix $\rho$ of order $N \times N$ can be written as [@PhysRevLett.80.2261], $$\rho = pP_{\psi}+(1-p)\rho^{//},$$ where $p \in[0, 1]$; $P_\psi$ is a pure state and $\rho^{//}$ is any density matrix. Partial transpose followed by diagonalisation of Eq.
6, yields, $$\sigma^{T_B}=p(\sigma^{T_B})^/+(1-p)({\sigma^{T_B}})^{//},$$ where $\sigma^{T_B}$ is the diagonalised partial transpose of $\rho$, $(\sigma^{T_B})^/$ is the diagonalised partial transpose of $P_{\psi}$ and $({\sigma^{T_B}})^{//}$ is the diagonalised partial transpose of $\rho^{//}$. As $\sigma^{T_B}$ is diagonal, it is also symmetric. Therefore if $\sigma^{T_B}$ is positive semi-definite or positive definite, it lies either on the boundary of the convex cone $S_N$ or inside it. We consider the Euclidean distance between $\sigma^{T_B}$ and the normalised identity of order N: $$D( \sigma^{T_B}, \mathds{I}_N)=\sqrt{Tr{( \sigma^{T_B}-\frac{\mathds{I}}{N})}^2}$$ One obtains from Eq. 8, $$D( \sigma^{T_B}, \mathds{I}_N)=\sqrt{Tr{\left(\sigma^{T_B}\right)^2}-\frac{2}{N}Tr(\sigma^{T_B})+\frac{1}{N}}$$ Substituting the value of $\sigma^{T_B}$ from Eq. 7 and using the spectrum of the partially transposed matrix of a pure state, $$\begin{split} {D( \sigma^{T_B}, \mathds{I}_N)}^2= p_{m}^2+(1-p_{m})^2\sum_j{\lambda_j}^2 +2p_{m}(1-p_{m})\lambda_i\\-\frac{2}{N}\Big(p_{m}+(1-p_{m})\sum_j\lambda_j\Big)+\frac{1}{N} \end{split}$$ where $\lambda_j$ is the $j$th eigenvalue of the matrix $({\sigma^{T_B}})^{//}$, $\lambda_i$ corresponds to the only surviving eigenvalue of the partially transposed matrix of the pure state, $({\sigma^{T_B}})^{/}$, and $p_{m}$ is the maximum value of the parameter p. Werner states of $N\otimes N$ dimensions can be written as [@patel2016geometric], $$\rho_{w}= pP_{\psi}+(1-p)\frac{\mathds{I}}{N},$$ where $p \in[0, 1]$; $P_\psi$ is a pure state and $\frac{\mathds{I}}{N}$ is the normalised identity matrix of order N. The normalised identity of order N is the partial transpose of itself. Substituting the values of $\sum \lambda_j$ and $\lambda$ in Eq.
10, considering $\lambda_j$ as the eigenvalue of $\mathds{I}_N$, one obtains the distance of the partially transposed Werner states from the maximally mixed state as, $$D= p_{m}\sqrt{\frac{N-1}{N}},$$ where $p_{m}$ is the maximum value of the parameter p. Calculating the distance between $\rho_w$ and the normalised identity using Eq. 3, we recover Eq. 12 for the distance of the partially transposed Werner states from the maximally mixed state. MINIMUM DISTANCE FOR WHICH A STATE WOULD BE PPT BOUND ENTANGLED --------------------------------------------------------------- We consider a density matrix of order $4$ with bipartition $2\otimes 2$. In this case there is no bound entanglement, as the PPT criterion is necessary and sufficient for separability for $2\otimes 2$ systems. If the density matrix is PPT then the partial transpose of the matrix will lie within the cone $S_4$ of all positive semidefinite matrices of order $4$. The set of all density matrices is homeomorphic to the closed ball $B^{15}$ and the cone $S_4$ intersects it in a $3$-dimensional space. The boundary of $S_4$ is formed by positive semidefinite matrices of order $4$, which are a set of parallel planes; in their $2$-dimensional projections they form a set of parallel lines ($x^2=a^2$, where $a$ is a non-zero eigenvalue of the system) or a set of intersecting lines ($\frac{x^2}{a^2}=\frac{y^{2}}{b^{2}}$, where $a$ and $b$ are both non-zero eigenvalues of the system). The parallel lines intersect the $3$-dimensional sphere of the ball at the surface, at \[1 0 0\], \[0 1 0\] and \[0 0 1\], namely the pure states. The intersecting lines give an idea of the positive partially transposed mixed states. The equation of the intersecting lines is, $$\frac{x^2}{a^2}=\frac{y^2}{b^2}$$ where $a$ and $b$ are eigenvalues of the corresponding system. Equation $13$ reduces to, $$x=\pm{\frac{a}{b}}y$$ Fig.
3 depicts that, ![Cross-section of the $\mathds{B}^{15}$ ball and $S_4$ cone, showing the centre of the ball I and the origin of the cone O. ](Selection_023.png) A and C are the points where the cone cuts the ball, and IB is the minimum distance from I to the boundary of the ball. The slope of the intersecting lines OC and OA is given by $\frac{a}{b}$. The slope of OA is also given by $\frac{OI}{IB}$, as OI is perpendicular to the plane in which I lies. The distance of the maximally mixed state from the origin of the $S_4$ cone is $OI=\frac{1}{\sqrt{4}}$. $IB$ is the minimum distance of the cone from the maximally mixed state. The slope of the line OA is given by, $$\frac{a}{b}=\frac{OI}{IB},$$ $$IB=\frac{b}{a\sqrt{4}}.$$ The minimum value of $\frac{b}{a}$ is, $$\left(\frac{b}{a}\right)_{min}=\frac{\frac{1}{\sqrt{\lambda_{max}}}}{\frac{1}{\sqrt{\lambda_{min}}}}=\frac{\sqrt{\lambda_{min}}}{\sqrt{\lambda_{max}}}$$ and the minimum value of IB is obtained as, $$IB_{min}=\sqrt{\frac{\lambda_{min}}{\lambda_{max}}}\frac{1}{\sqrt{4}}$$ In this case the intersecting part of the $S_4$ cone and the $B^{15}$ ball is the intersecting part of a 3-d image of the cone and the $B^3$ ball, which is a Bloch sphere. Hence each matrix in the intersection is a density matrix. Therefore, $$\sqrt{\lambda_{min}}=\sqrt{\frac{1}{4}}$$ and, $$\sqrt{\lambda_{max}}=\sqrt{\frac{3}{4}},$$ and, $$IB_{min}=\frac{1}{\sqrt{12}}$$ The ratio at which this value of IB cuts a proper radius of the $B^{15}$ ball corresponds to the value of $p_{m}$ for Werner states. This value is given by, $$p_{m}=\frac{IB}{R_{15}}=\frac{1}{3}$$ The PPT criterion is necessary and sufficient for separability in bipartite $2\otimes 2$ systems. It follows that a $4$-dimensional state is absolutely separable if it lies within a distance of $\frac{1}{3}R$ of the maximally mixed state. This result matches the separability criterion known for the two-qubit ($N=4$) Werner states [@patel2016geometric]. For higher dimensions the PPT criterion is not sufficient for separability.
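The value $p_{m}=\frac{1}{3}$ derived above can be cross-checked directly on the two-qubit Werner family (an independent numerical check, not part of the derivation): the smallest eigenvalue of the partial transpose changes sign exactly at $p=1/3$.

```python
import numpy as np

def werner(p):
    """Two-qubit Werner state p |Φ+><Φ+| + (1-p) I/4."""
    psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
    return p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4

def min_pt_eig(rho):
    """Smallest eigenvalue of the partial transpose over the second qubit."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min()

# PPT exactly for p <= 1/3, matching p_m = 1/3 above
print(min_pt_eig(werner(1 / 3)))       # ~0: the threshold
print(min_pt_eig(werner(0.34)) < 0)    # True: NPT just above
print(min_pt_eig(werner(0.33)) > 0)    # True: PPT just below
```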
Instead, the criterion helps us to detect bound entangled states. For $N$ dimensional states the cone of all PSD matrices intersects the $\mathds{B}^{N^2-1}$ ball in $N-1$ dimensions. Considering the geometry of diagonalised PSD matrices of $N-1$ dimensions, one can say that they are associated with either $N^/$ dimensional ellipsoids that form the curved surface of the cone, where $3\leq N^{/}\leq (N-3)$, or intersecting lines that give the boundary of the cone, or intersecting planes which denote the points where the cone cuts the ball. Considering the intersecting lines, we have the minimum distance IB of the boundary of the cone from the maximally mixed state as, $$IB=\sqrt{\frac{\lambda_{min}}{\lambda_{max}}}\frac{1}{\sqrt{N}}$$ This minimum distance satisfies the following inequalities: $${(\lambda_1-\frac{1}{N})}^2 + {(\lambda_2-\frac{1}{N})}^2 \geq 0,$$ and, $${(\lambda_1-\frac{1}{N})}^2 + {(\lambda_2-\frac{1}{N})}^2 \leq \frac{N-1}{N}.$$ Here $\lambda_1$ and $\lambda_{2}$ are two eigenvalues of the corresponding PSD matrix. One can then obtain, $\lambda_{min}=\frac{1}{\sqrt{N}}$\ and $\lambda_{max}=\sqrt{\frac{N-1}{N}}+\frac{1}{N}$. From Eq. 22 one can obtain, $$IB_{min}=\frac{1}{\sqrt{\sqrt{N(N-1)}+1}}$$ Comparing this distance with the distance found from Eq. 10, one can determine if a state is absolutely PPT bound entangled. The value of the parameter p for which the Werner states of dimension $N$ will be PPT is also obtained as, $$p_m=\sqrt\frac{N}{N-1}\frac{1}{\sqrt{\sqrt{N(N-1)}+1}}$$ This is the maximum value of the parameter p for which the partially transposed matrix of any density matrix of order N lies within the convex cone formed by the positive semidefinite and positive definite matrices. The distance of any density matrix $\rho$ of order N from $\mathds{I}_N$ is given by $p\sqrt{\frac{N-1}{N}}$ [@patel2016geometric]. For $\rho$ to be definitely PPT, the maximum distance of $\rho$ from $\mathds{I}_N$ is $\frac{1}{\sqrt{\sqrt{N(N-1)}+1}}$.
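For concreteness, the separability radius of ref. [@patel2016geometric] and the PPT radius derived above can be tabulated side by side (our illustrative script; the qudit dimension $d$ and party number $n$ are sample values):

```python
import math

def sep_radius(d, n):
    """Separability radius of Patel-Panigrahi: states within it are separable."""
    N = d ** n
    return math.sqrt((N - 1) / N) / (1 + d ** (n - 1))

def ppt_radius(d, n):
    """Radius derived here: all states within it are PPT."""
    N = d ** n
    return 1 / math.sqrt(math.sqrt(N * (N - 1)) + 1)

for d, n in [(2, 2), (3, 2), (2, 3), (4, 2)]:
    s, p = sep_radius(d, n), ppt_radius(d, n)
    # the shell between the two radii is where PPT bound entangled states may live
    print(f"d={d}, n={n}: separable within {s:.4f}, PPT within {p:.4f}")
```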
The bipartite $2\otimes 2$ systems have a different value of the distance due to the fact that there the cone cuts the closed ball at a dimension where the intersection part lies inside a Bloch sphere. All the points inside the sphere represent a quantum state, which is not true for the higher dimensional case. For $n$ qudit density matrices, all matrices within the distance $\frac{1}{\sqrt{\sqrt{d^n(d^n-1)}+1}}$ from $\mathds{I}_N$ are PPT. It is shown in ref. [@patel2016geometric] that all n-qudit density matrices within distance $\frac{1}{1+d^{n-1}}\sqrt{\frac{d^n-1}{d^n}}$ from the normalised identity are separable. This implies that all entangled n-qudit density matrices at a distance between $\frac{1}{1+d^{n-1}}\sqrt{\frac{d^n-1}{d^n}}$ and $\frac{1}{\sqrt{\sqrt{d^n(d^n-1)}+1}}$ from the normalised identity are PPT bound entangled. Using the definition of 1-distillable entangled states [@PhysRevA.61.062312], one can infer that no 1-distillable entangled states lie within this distance. The states within this distance are necessarily PPT bound entangled. This proves the non-emptiness of the set of such states. CONCLUSION ========== In summary, we have used a measurement based geometric approach to check whether the partial transpose of an $n$ qudit density matrix has all non-negative eigenvalues. It has been shown that all the density matrices within the distance $\frac{1}{\sqrt{\sqrt{d^n(d^n-1)}+1}}$ from the maximally mixed state have a positive partial transpose. The precise distance for which a Werner state is PPT bound entangled has also been found. As the lower limit for the distance between the maximally mixed state and a separable state [@patel2016geometric] is less than the distance between a PPT bound entangled state and the maximally mixed state found here, one can conclude that the set of PPT bound entangled states is non-empty. This limit also applies as a geometrical lower bound for 1-distillable entangled states.
The method provided here may find use to calculate the limits for k-distillable states and the NPT bound entangled states. [10]{} Aryaman A Patel and Prasanta K Panigrahi. Geometric measure of entanglement based on local measurement. , 2016. Dagmar Bru[ß]{}. Characterizing entanglement. , 43(9):4237–4251, 2002. Jin-Shi Xu, Chuan-Feng Li, Xiao-Ye Xu, Cheng-Hao Shi, Xu-Bo Zou, and Guang-Can Guo. Experimental characterization of entanglement dynamics in noisy channels. , 103:240502, Dec 2009. Gregg Jaeger. . Springer Science & Business Media, 2009. Charles H. Bennett, Herbert J. Bernstein, Sandu Popescu, and Benjamin Schumacher. Concentrating partial entanglement by local operations. , 53:2046–2052, Apr 1996. Charles H. Bennett, Gilles Brassard, Sandu Popescu, Benjamin Schumacher, John A. Smolin, and William K. Wootters. Purification of noisy entanglement and faithful teleportation via noisy channels. , 76:722–725, Jan 1996. Ying-Xuan Chen, Jing Du, Si-Yuan Liu, and Xiao-Hui Wang. Cyclic quantum teleportation. , 16(8):201, Jul 2017. H Imai, G Hanaoka, U Maurer, Y Zheng, M Naor, G Segev, A Smith, R Safavi-Naini, PR Wild, Broadcast Channels, et al. Special issue on information theoretic security. , 52:4348, 2006. Patryk Lipka-Bartosik and Karol [Ż]{}yczkowski. Nuclear numerical range and quantum error correction codes for non-unitary noise models. , 16(1):9, Dec 2016. J.-C. Boileau, K. Tamaki, J. Batuwantudawe, R. Laflamme, and J. M. Renes. Unconditional security of a three state quantum key distribution protocol. , 94:040503, Jan 2005. Antonio Acin, Nicolas Gisin, and Valerio Scarani. Security bounds in quantum cryptography using d-level systems. , 2003. Gan Gao and Yue Wang. Comment on “proactive quantum secret sharing”. , 16(3):74, Feb 2017. Tobias Moroder, Oleg Gittsovich, Marcus Huber, and Otfried Gühne. Steering bound entangled states: A counterexample to the stronger peres conjecture. , 113:050404, Aug 2014. David P. DiVincenzo, Peter W. Shor, John A. 
Smolin, Barbara M. Terhal, and Ashish V. Thapliyal. Evidence for bound entangled states with negative partial transpose. , 61:062312, May 2000. Samuel L Braunstein and Carlton M Caves. Geometry of quantum states. In [*Quantum Communications and Measurement*]{}, pages 21–30. Springer, 1995. Karol Zyczkowski and Hans-J[ü]{}rgen Sommers. Induced measures in the space of mixed quantum states. , 34(35):7111, 2001. Karol Zyczkowski and Wojciech Slomczynski. The Monge metric on the sphere and geometry of quantum states. , 34(34):6689, 2001. Masanao Ozawa. Entanglement measures and the Hilbert–Schmidt distance. , 268(3):158–160, 2000. Hoshang Heydari and Gunnar Bj[ö]{}rk. Entanglement measure for general pure multipartite quantum states. , 37(39):9251, 2004. Asher Peres. Separability criterion for density matrices. , 77:1413–1415, Aug 1996. Ashutosh K Goswami and Prasanta K Panigrahi. Uncertainty relation and inseparability criterion. , 47(2):229–235, 2017. Giuliano G. La Guardia and Francisco Revson F. Pereira. Good and asymptotically good quantum codes derived from algebraic geometry. , 16(6):165, May 2017. F. Holik, G. Sergioli, H. Freytes, R. Giuntini, and A. Plastino. Toffoli gate and quantum correlations: a geometrical approach. , 16(2):55, Jan 2017. Vineeth S Bhaskara and Prasanta K Panigrahi. Generalized concurrence measure for faithful quantification of multiparticle pure state entanglement using Lagrange’s identity and wedge product. , 16(5):118, 2017. Marco Piani and Caterina E. Mora. Class of positive-partial-transpose bound entangled states associated with almost any set of pure entangled states. , 75:012305, Jan 2007. Dragomir [Ž]{}. [Đ]{}okovi[ć]{}. On two-distillable Werner states. , 18(6):216, 2016. Karol Życzkowski, Paweł Horodecki, Anna Sanpera, and Maciej Lewenstein. Volume of the set of separable states. , 58:883–892, Aug 1998. Swapan Rana. Negative eigenvalues of partial transposition of arbitrary bipartite states. , 87:054301, May 2013.
Maciej Lewenstein and Anna Sanpera. Separability and entanglement of composite quantum systems. , 80:2261–2264, Mar 1998.
--- abstract: 'We study families of partitions with gap conditions that were introduced by Schur and Andrews, and describe their fundamental connections to combinatorial $q$-series and automorphic forms. In particular, we show that the generating functions for these families naturally lead to deep identities for theta functions and Hickerson’s universal mock theta function, which provides a very general answer to Andrews’ Conjecture on the modularity of the Schur-type generating function. Furthermore, we also complete the second part of Andrews’ speculation by determining the asymptotic behavior of these functions. In particular, we use Wright’s Circle Method in order to prove families of asymptotic inequalities in the spirit of the Alder-Andrews Conjecture. As a final application, we prove the striking result that the universal mock theta function can be expressed as a conditional probability in a certain natural probability space with an infinite sequence of independent events.' address: - | Mathematical Institute\ University of Cologne\ Weyertal 86-90\ 50931 Cologne\ Germany - | Department of Mathematics\ Louisiana State University\ Baton Rouge, LA 70802\ U.S.A. author: - Kathrin Bringmann - Karl Mahlburg title: 'Schur’s partition theorem and mixed mock modular forms' --- [^1] Introduction and statement of results ===================================== The famous Rogers-Ramanujan identities show the equality of a hypergeometric $q$-series and an infinite product. If $r = 1 \text{ or } 2$, then the two identities can be simultaneously stated as [@RR19] $$\label{E:RR} \sum_{n\geq 0} \frac{q^{n^2 + (r-1)n}}{(q;q)_n} = \frac{1}{(q^{r}; q^5)_\infty (q^{5-r}; q^5)_\infty}.$$ Throughout the paper we use for $n\in{{\mathbb{N}}}_0 \cup\{\infty\}$ the standard $q$-factorial notation $(a)_n = (a;q)_n := \prod_{j = 0}^{n-1} (1-aq^j),$ as well as the additional shorthand $(a_1, \dots , a_r)_n := (a_1)_n \cdot \dots \cdot (a_r)_n$.
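The two identities can be verified to any fixed order by elementary power-series arithmetic. The following self-contained sketch (ours, not from the paper; the truncation order $M$ is an arbitrary choice) expands both sides of the Rogers-Ramanujan identities and compares coefficients:

```python
M = 40  # truncation order, chosen arbitrarily

def mul(a, b):
    """Multiply power-series coefficient lists modulo q^(M+1)."""
    c = [0] * (M + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(min(len(b), M + 1 - i)):
                c[i + j] += ai * b[j]
    return c

def inv(a):
    """Invert a power series with constant term 1, modulo q^(M+1)."""
    b = [0] * (M + 1)
    b[0] = 1
    for k in range(1, M + 1):
        b[k] = -sum(a[j] * b[k - j] for j in range(1, k + 1))
    return b

def pochhammer(start, step):
    """Truncation of the infinite product (q^start; q^step)_infinity."""
    p = [0] * (M + 1)
    p[0] = 1
    for e in range(start, M + 1, step):
        f = [0] * (M + 1)
        f[0], f[e] = 1, -1
        p = mul(p, f)
    return p

LHS, RHS = {}, {}
for r in (1, 2):
    lhs = [0] * (M + 1)
    qq = [0] * (M + 1)
    qq[0] = 1  # (q; q)_0 = 1
    n = 0
    while n * n + (r - 1) * n <= M:
        if n:  # extend (q; q)_(n-1) to (q; q)_n by the factor (1 - q^n)
            f = [0] * (M + 1)
            f[0], f[n] = 1, -1
            qq = mul(qq, f)
        t = inv(qq)
        e = n * n + (r - 1) * n
        for k in range(M + 1 - e):
            lhs[k + e] += t[k]
        n += 1
    LHS[r] = lhs
    RHS[r] = inv(mul(pochhammer(r, 5), pochhammer(5 - r, 5)))
    assert LHS[r] == RHS[r]
print("Rogers-Ramanujan identities verified up to order", M)
```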
The Rogers-Ramanujan identities have had a tremendous influence throughout mathematics in the more than one hundred years since they were first discovered. Generalizations and applications of the identities have inspired developments in combinatorial and analytic partition theory [@Gor61; @Stem90]; the theory of infinite continued fractions [@AG93; @Gor65]; the theory of symmetries and transformations for hypergeometric $q$-series [@And66; @And75]; the exact solution of the hard hexagon model in statistical mechanics [@And81]; and vertex operator algebras [@LW81]. Here we study yet another direction, as we focus on the role of identities such as in the theory of automorphic forms. In general it is a very challenging problem to determine the automorphic properties of a hypergeometric $q$-series (for example, see the discussion of Nahm’s Conjecture in Section II.3 of [@Zag06]). From this perspective, the Rogers-Ramanujan identities are nothing short of incredible, as they equate a hypergeometric series written in “Eulerian” form on the left-hand side to an infinite product that is recognizable as a simple modular function on the right. In this paper we consider families of identities related to whose automorphic properties have not been previously determined, and we identify the surprisingly simple automorphic forms that underlie the $q$-series. Before we begin to describe our results, we note that it is helpful to understand the Rogers-Ramanujan identities as combinatorial identities for integer partitions with gap or congruential conditions. If $\lambda$ is a partition of $n$, then $\lambda$ consists of parts $\lambda_1 \geq \dots \geq \lambda_k \geq 1$ that sum to $n$; in this case we write $\lambda \vdash n$ (see the reference [@And98] for additional standard notation and terminology). 
In particular, let $B_{1}(n)$ denote the number of partitions of $n$ such that each pair of parts differs by at least $2$, and let $C_{1}(n)$ count the number of such partitions where the smallest part is also at least $2$. Furthermore, if $d \geq 3$ and $1 \leq r \leq \frac{d}{2}$, then we let $D_{d,r}(n)$ denote the number of partitions of $n$ into parts congruent to $\pm r \pmod{d}$. The Rogers-Ramanujan identities are then equivalent to the combinatorial statements that $$B_1(n) = D_{5,1}(n) \qquad \text{and} \qquad C_1(n) = D_{5,2}(n).$$ Following Rogers-Ramanujan, the next major development in the subject was due to Schur [@Sch26], who proved a similar identity for partitions with parts differing by at least $3$. In fact, Gleissberg [@Gle28] extended Schur’s result to a general modulus, which we state in full below. Let $B_{d,r}(n)$ denote the number of partitions of $n$ such that each part is congruent to $0, \pm r \pmod{d}$, each pair of parts differs by at least $d$, and if $d \mid \lambda_i$, then $\lambda_i - \lambda_{i+1} > d.$ We denote the generating function by $$\label{E:Bdrq} {\mathscr{B}}_{d,r}(q) := \sum_{n \geq 0} B_{d,r}(n) q^n.$$ Furthermore, let ${\mathscr{E}}_{d,r}(q)$ denote the generating function for partitions into distinct parts that are congruent to $\pm r \pmod{d}$, with enumeration function $E_{d,r}(n)$, so that $$\label{E:Edrq} {\mathscr{E}}_{d,r}(q) := \sum_{n \geq 0} E_{d,r}(n) q^n = \prod_{n \geq 0} \left(1 + q^{r + dn}\right) \left(1 + q^{d-r + dn}\right) = \left(-q^r, -q^{d-r}; q^d\right)_\infty.$$ Schur’s general identity is then stated as follows. 
If $d \geq 3$, $1 \leq r < \frac{d}{2}$, then $$\label{E:Schur} {\mathscr{B}}_{d,r}(q) = {\mathscr{E}}_{d,r}(q).$$ [*1.*]{} The case $d = 3$ and $r=1$ is the most often cited case of Schur’s identities, as it is easily seen that $E_{3,1}(n) = D_{6,1}(n)$, and the resulting restatement of the theorem, the identity $B_{3,1}(n) = D_{6,1}(n)$, is directly analogous to the case $r=1$ of . Indeed, the first Rogers-Ramanujan identity may be viewed as a degenerate case of Schur’s Theorem, stating $B_{1,1}(n) = D_{5,1}(n)$. [*2.*]{} Euler’s Theorem states that the number of partitions of $n$ into distinct parts is equal to those with odd parts (Cor. 1.2 in [@And98]). This can also be viewed as a degenerate Schur-type result, as it states that the partitions of $n$ with gaps of at least $1$ are equinumerous to $D_{4,1}(n)$. [*3.*]{} One may assume in Schur’s Theorem that the greatest common divisor is $(d,r)=1$, as if $(d,r) = g > 1$, then the statement reduces to the smaller case of $d' = \frac{d}{g}$ and $r' = \frac{r}{g}.$ [*4.*]{} Bressoud [@Bre80] and Alladi and Gordon [@AG93] later gave bijective proofs of Schur’s identities that also provide additional combinatorial information regarding the distribution of parts. Schur’s family of identities relates partitions with gap conditions (with generating functions ${\mathscr{B}}_{d,r}(q)$) to infinite products that are essentially modular forms (as in ), and can in particular be expressed as a simple quotient of theta functions (this will be made precise later in the paper). In contrast, in [@And68] Andrews considered a partition function related to the second of the Rogers-Ramanujan identities, which resulted in a $q$-series with more exotic automorphic behavior. 
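Schur's theorem stated above is easy to test by brute-force enumeration. The sketch below (our own check, with sample parameters $(d,r) = (3,1)$ and $(5,2)$ and an arbitrary cut-off) counts the gap-restricted partitions $B_{d,r}(n)$ and the distinct-part partitions $E_{d,r}(n)$ directly:

```python
from functools import lru_cache

M = 40  # check range, chosen arbitrarily

@lru_cache(maxsize=None)
def schur(n, last, d, r):
    """Partitions counted by B_{d,r}: parts ≡ 0, ±r (mod d), built smallest
    part first; each new part s exceeds the previous one by at least d, and
    by more than d whenever d divides s."""
    if n == 0:
        return 1
    total = 0
    for s in range(last + 1, n + 1):
        if s % d in {0, r % d, (d - r) % d}:
            gap = s - last
            if last == 0 or (gap >= d and (s % d or gap > d)):
                total += schur(n - s, s, d, r)
    return total

@lru_cache(maxsize=None)
def distinct(n, last, d, r):
    """Partitions counted by E_{d,r}: distinct parts ≡ ±r (mod d)."""
    if n == 0:
        return 1
    return sum(distinct(n - s, s, d, r)
               for s in range(last + 1, n + 1)
               if s % d in {r % d, (d - r) % d})

for d, r in [(3, 1), (5, 2)]:
    for n in range(M + 1):
        assert schur(n, 0, d, r) == distinct(n, 0, d, r)
print("Schur's identity checked for (d, r) = (3, 1), (5, 2) up to n =", M)
```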
In general, we let $C_{d,r}(n)$ count the number of partitions enumerated by $B_{d,r}(n)$ that also satisfy the additional restriction that the smallest part is larger than $d$, and define the corresponding generating function as $${\mathscr{C}}_{d,r}(q) := \sum_{n \geq 0} C_{d,r}(n) q^n.$$ Andrews provided the following evaluation of the generating function for these “Schur-type” partitions for the case of $d=3$, $r=1$. We have that $$\label{E:AndrewsC3} {\mathscr{C}}_{3,1}(q) = \frac{(-q;q)_\infty}{\left(q^6; q^6\right)_\infty} \sum_{n \geq 0} \frac{(-1)^n q^{\frac{9n(n+1)}{2}}(1-q^{6n+3})}{(1+q^{3n+1})(1+q^{3n+2})}.$$ After stating this result, Andrews then commented that > [*…the generating function for $C_{3,1}(n)$ is similar to the mock theta functions. Indeed, it is conceivable that a very accurate asymptotic formula for $C_{3,1}(n)$ may be found….*]{} We not only answer Andrews’ claim about ${\mathscr{C}}_{3,1}(q)$, but achieve much more - we fully describe the automorphic properties of all the “Schur-type” generating functions. This description is only possible due to Zwegers’ groundbreaking thesis [@Zw02], where he thoroughly described how Ramanujan’s famous mock theta functions (see [@Wat36]) fit into the modern framework of real-analytic automorphic forms as developed in [@BruF04]. We adopt the terminology from [@DMZ12] and [@Zag09] when discussing $q$-series that have automorphic properties similar to Ramanujan’s mock theta functions (see Section \[S:Gen:Auto\] for definitions). In particular, a [*(weak) mixed mock modular form*]{} is a function that lies in the tensor product of the general spaces of mock modular forms and (weakly holomorphic) modular forms, possibly multiplied with an additional rational multiple of $q$. The following result shows that the Schur-type generating functions are examples of such forms (see [@ARZ13; @BM11; @BO09; @CDH13; @DMZ12] for many other applications of mixed mock modular forms). 
\[T:mixedmock\] If $d \geq 3$, $1 \leq r < \frac{d}{2}$, then ${\mathscr{C}}_{d,r}(q)$ is a mixed mock modular form. Similarly, it was already well-known to experts that ${\mathscr{B}}_{d,r}(q)$ is a modular function (up to a $q$-power); this is apparent from combining and with the general theory of theta functions and modular units (cf. Proposition \[P:Bmod\]). In fact, we can precisely describe the automorphic functions that arise in Theorem \[T:mixedmock\] in terms of fundamental representatives from the spaces of modular and mock modular forms. In order to do so, we recall Hickerson’s [*universal mock theta function*]{} (of odd order), which is defined by $$\label{E:Univg3} g_3(x;q) := \sum_{n \geq 0} \frac{q^{n(n+1)}}{(x; q)_{n+1} (x^{-1}q; q)_{n+1}}.$$ Although the correct reference is often misattributed, we hope to make clear that Hickerson was the first to recognize the importance of this function. Indeed, in [@Hic88] he showed that Ramanujan and Watson’s examples can be decomposed into expressions in terms of $g_3,$ and thereby proved the so-called “mock theta conjectures” by studying the identities satisfied by the universal function. As we will see later, the sum in the expression $\eqref{E:AndrewsC3}$ is also essentially similar to the Appell-Lerch functions used in Zwegers’ study of the mock theta functions [@Zw02]; we prove later that can also be expressed in terms of such sums. The present notation was introduced by Gordon and McIntosh (see their recent survey [@GM12]), who also found a second universal mock theta function of “even order”, which they denoted by $g_2(x;q)$. In the following theorem statement we adopt standard notation for modular forms and theta functions; the definitions of $\eta(\tau)$ and $\vartheta(w;\tau)$ are reviewed in Section \[S:Gen:Auto\] (specifically, see and ). 
Here and throughout the paper we let $q:=e^{2\pi i \tau}$ be the standard uniformizer for the cusp $i\infty$, where $\tau$ is in the complex upper-half plane $\mathbb{H}$. \[T:univ\] If $d \geq 3$, $1 \leq r < \frac{d}{2}$, then $${\mathscr{C}}_{d,r}(q) = -q^{-\frac{d}{12} +\frac{r}{2}} \frac{\vartheta(\frac{1}{2} + r\tau; d\tau)}{\eta(d\tau)} g_3\left(-q^r; q^d\right).$$ This result gives a resounding affirmative answer to Andrews’ original speculation regarding the relationship between ${\mathscr{C}}_{3,1}(q)$ and mock theta functions, as it precisely describes the generating functions as simple mixed mock modular forms. We will see in the proof that Theorem \[T:univ\] is also equivalent to the identity $$\label{E:C=B} {\mathscr{C}}_{d,r}(q) = {\mathscr{B}}_{d,r}(q) g_3\left(-q^r; q^d\right).$$ Strikingly, this means that the universal mock theta function $g_3(-q^r; q^d)$ plays the role of a combinatorial “correction factor” that precisely accounts for the difference in the enumeration functions $B_{d,r}$ and $C_{d,r}$. In Section 8 of Andrews’ seminal work on Durfee symbols [@And07], he introduced another combinatorial generating function that is closely related to the universal mock theta function. In particular, he defined “odd Durfee symbols” (which generalize partitions with certain parity conditions) and showed that the two-parameter generating function is $$R_{1}^{0}(x;q) := \sum_{n \geq 0} \frac{q^{2n(n+1)+1}}{(xq; q^2)_{n+1} (x^{-1}q; q^2)_{n+1}}.$$ Using and Theorem \[T:univ\], we therefore find another notable combinatorial relationship for the Schur-type enumeration functions, namely $${\mathscr{C}}_{d,r}(q) = -q^{-\frac{d}{12} +\frac{r-d}{2}} \frac{\vartheta(\frac{1}{2} + r\tau; d\tau)}{\eta(d\tau)} R_1^0\left(-q^{r-\frac{d}{2}}; q^{\frac{d}{2}}\right).$$ As in , we see that the odd Durfee symbol generating function (and an additional $q$-power) is similarly the correction factor between the two Schur-type functions.
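The factorization of ${\mathscr{C}}_{d,r}$ through the universal mock theta function can likewise be confirmed numerically. This sketch (ours; the parameters $d=3$, $r=1$ and the truncation order are sample choices) computes $B_{3,1}(n)$ and $C_{3,1}(n)$ by enumeration, expands $g_3(-q;q^3)$ as a truncated series, and checks the product identity coefficient by coefficient:

```python
from functools import lru_cache

M, d, r = 30, 3, 1  # truncation order and parameters, chosen for illustration

def mul(a, b):
    """Multiply power-series coefficient lists modulo q^(M+1)."""
    c = [0] * (M + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(min(len(b), M + 1 - i)):
                c[i + j] += ai * b[j]
    return c

def inv(a):
    """Invert a power series with constant term 1, modulo q^(M+1)."""
    b = [0] * (M + 1)
    b[0] = 1
    for k in range(1, M + 1):
        b[k] = -sum(a[j] * b[k - j] for j in range(1, k + 1))
    return b

@lru_cache(maxsize=None)
def schur(n, last, lo):
    """Schur partitions of n with smallest part >= lo (lo = 1 gives B_{d,r},
    lo = d + 1 gives C_{d,r}); gap >= d, and > d below multiples of d."""
    if n == 0:
        return 1
    start = lo if last == 0 else last + 1
    total = 0
    for s in range(start, n + 1):
        if s % d in {0, r, d - r}:
            gap = s - last
            if last == 0 or (gap >= d and (s % d or gap > d)):
                total += schur(n - s, s, lo)
    return total

B = [schur(n, 0, 1) for n in range(M + 1)]
C = [schur(n, 0, d + 1) for n in range(M + 1)]

# g_3(-q^r; q^d): numerator q^(d n(n+1)); denominator (1+q^(r+dj))(1+q^(d-r+dj))
g3 = [0] * (M + 1)
n = 0
while d * n * (n + 1) <= M:
    den = [0] * (M + 1)
    den[0] = 1
    for j in range(n + 1):
        for e in (r + d * j, d - r + d * j):
            f = [0] * (M + 1)
            f[0] = 1
            if e <= M:
                f[e] = 1
            den = mul(den, f)
    t = inv(den)
    off = d * n * (n + 1)
    for k in range(M + 1 - off):
        g3[k + off] += t[k]
    n += 1

assert C == mul(B, g3)
print("C_{3,1}(q) = B_{3,1}(q) g_3(-q; q^3) verified up to order", M)
```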
We also answer the second part of Andrews’ grand statement by providing asymptotic formulas for all of the Schur-type partition functions. \[T:BCAsymp\] Suppose that $d \geq 3$ and $1 \leq r < \frac{d}{2}$. As $n \to \infty$ $$\begin{aligned} B_{d,r}(n) & \sim \frac{1}{2^{\frac{5}{4}}3^{\frac{1}{4}} d^{\frac{1}{4}} n^{\frac{3}{4}}} e^{\pi \sqrt{\frac{2n}{3d}}}, \\ C_{d,r}(n) & \sim \frac{1}{3}\cdot B_{d,r}(n).\end{aligned}$$ In fact, we prove much more than this, as our analysis also allows us to describe further terms in the asymptotic expansion for these enumeration functions (see Theorem \[T:Ineq\] for further applications). Moreover, one could also use the extension of the Hardy-Ramanujan Circle Method that the authors developed in [@BM11] in order to find expressions for the coefficients with only polynomial error. The results described thus far have followed the spirit of the Rogers-Ramanujan and Schur identities, in which simple enumeration functions are shown to be combinatorially equivalent, and whose generating functions lie in the intersection of hypergeometric $q$-series and automorphic forms. However, there is also a notable body of research that was inspired by a negative approach to and . In [@Ald48] Alder proved a non-existence result for certain general identities analogous to those of Rogers-Ramanujan and Schur (also see [@Ald69]). Moreover, Andrews observed in [@And71P] that such identities often fail to hold because of an asymptotic inequality in which one enumeration function is eventually always larger than the other. 
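The asymptotic formula for $B_{d,r}(n)$ in Theorem \[T:BCAsymp\] can be probed directly against exact coefficients. Assuming the product representation ${\mathscr{B}}_{d,r}(q) = \left(-q^r, -q^{d-r}; q^d\right)_\infty$ recovered in Section \[S:Gen\], the following Python sketch (our own illustration; the helper names and the sample value $n = 500$ are arbitrary) compares the exact count with the main term:

```python
from math import exp, pi, sqrt

def schur_coeffs(d, r, N):
    # Coefficients of (-q^r, -q^{d-r}; q^d)_infinity up to q^N:
    # partitions into distinct parts congruent to +-r (mod d).
    coeffs = [0] * (N + 1)
    coeffs[0] = 1
    for p in range(1, N + 1):
        if p % d in (r, d - r):
            for n in range(N, p - 1, -1):   # multiply the series by (1 + q^p)
                coeffs[n] += coeffs[n - p]
    return coeffs

def main_term(d, n):
    # Leading asymptotic for B_{d,r}(n) from Theorem [T:BCAsymp].
    return exp(pi * sqrt(2 * n / (3 * d))) / (2**1.25 * 3**0.25 * d**0.25 * n**0.75)

d, r, n = 3, 1, 500
exact = schur_coeffs(d, r, n)[n]
assert abs(exact / main_term(d, n) - 1) < 0.08   # agreement within a few percent
```

At $n = 500$ the relative error is on the order of a percent, consistent with the lower-order Bessel-function corrections described in Section \[S:Asymp\].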
To precisely describe the cases considered by Alder and Andrews, let $q_{d,j}(n)$ denote the number of partitions of $n$ in which each pair of parts differs by at least $d$ and the smallest part is at least $j$, and let $Q_{d,j}(n)$ denote the number of partitions into parts congruent to $\pm j \pmod{d+3}.$ The Rogers-Ramanujan identities are then equivalently stated as $$q_{2,1}(n) = Q_{2,1}(n) \qquad \text{and} \qquad q_{2,2}(n) = Q_{2,2}(n).$$ Alder showed that if $d \geq 3$, then $q_{d,j}(n)$ is not equal to any partition enumeration function where the parts lie in some specified set of positive integers, taken without any restrictions on multiplicity or gaps. Furthermore, he conjectured an inequality between $q_{d,j}(n)$ and $Q_{d,j}(n)$ that was proven in a weaker asymptotic version by Andrews [@And71P]. The full Alder-Andrews Conjecture was recently confirmed across a series of papers by Yee [@Yee04; @Yee08] and Alfes, Jameson, and Lemke Oliver [@AJL11]. If $d \geq 3$ and $n \geq 2d + 9$, then $$q_{d,1}(n) > Q_{d,1}(n).$$ However, although there are now a plethora of identities and non-identities that relate various pairs of the enumeration functions described within this paper, there are essentially no results comparing the enumeration functions in the same family as $d$ and/or $r$ vary. Our next main result shows that for each fixed $d$, the families of enumeration functions $B_{d,r}$ and $C_{d,r}$ are asymptotically decreasing in $r$. \[T:Ineq\] Suppose that $d \geq 3$ and $1 \leq r < \frac{d}{2}$. 1. As $n \to \infty$ we have the asymptotic equalities $$\begin{aligned} B_{d,r+1}(n) &\sim B_{d,r}(n), \\ C_{d,r+1}(n) &\sim C_{d,r}(n).\end{aligned}$$ 2. For sufficiently large $n$ we have the inequalities $$\begin{aligned} B_{d,r}(n) &> B_{d,r+1}(n), \\ C_{d,r}(n) &> C_{d,r+1}(n).\end{aligned}$$ We can also compare the enumeration families across different $d$ values.
Suppose that $3 \leq d_1 < d_2$ and $1 \leq r_1 < \frac{d_1}{2}$, $1 \leq r_2 < \frac{d_2}{2}.$ For sufficiently large $n$ we have the inequalities $$\begin{aligned} B_{d_1, r_1}(n) & > B_{d_2, r_2}(n), \\ C_{d_1, r_1}(n) & > C_{d_2, r_2}(n).\end{aligned}$$ In light of and Theorem \[T:univ\], the two pairs of asymptotic results from these theorems are statements about the asymptotic expansion of the coefficients of modular forms and mixed mock modular forms, respectively. In the case of the $B_{d,r}(n)$, it is entirely well-known that exact formulas can be found for the coefficients of modular forms [@Rad43], and we include the inequalities for the sake of completeness. The second case is more novel, and it is only due to more recent advances in the asymptotic analysis of automorphic $q$-series (as in [@BM11]) that we are able to compare the $C_{d,r}(n)$. It would be of great interest if a bijective proof could be found for any of these asymptotic results. Our final results describe the surprising relationship between the Schur-type generating functions and events in certain probability spaces with infinite sequences of independent events. In particular, we find a remarkable interpretation in terms of conditional probabilities for the universal mock theta function evaluated at real arguments; the precise definitions for the following result are found in Section \[S:Prob\]. \[T:g3Prob\] Suppose that $d \geq 3$, $1 \leq r < \frac{d}{2}$ and $0 < q < 1$ is real. There are events $W$ and $X$ in a certain probability space (see and Theorem \[T:prob\]) such that $${\mathbf{P}}(W \mid X) = g_3\left(-q^r; q^d\right).$$ Since probabilities are between $0$ and $1$, Theorem \[T:g3Prob\] immediately implies that for real $0 \leq q < 1$ we have the striking (and non-obvious) bound $$g_3\left(-q^r; q^d\right) < 1.$$ The remainder of the paper is structured as follows. 
In Section \[S:Gen\] we carefully study the combinatorics of the Schur-type enumeration functions, deriving a $q$-difference equation whose solution gives useful $q$-series expressions for the generating functions. We also identify the automorphic properties of these $q$-series. In Section \[S:Asymp\] we turn to the asymptotic behavior of the Schur-type functions, using a modification of Wright’s Circle Method in order to find asymptotic expansions for the coefficients. We conclude in Section \[S:Prob\] by defining a simple probability space that is intimately related to the Schur-type partitions, and use this to prove additional identities for the universal mock theta function. Generating functions, identities, and automorphic $q$-series {#S:Gen} ============================================================ In this section we evaluate the generating functions for Schur-type partitions, prove related $q$-series identities, and describe the relationship to automorphic objects such as theta functions and mock theta functions. Generating functions as hypergeometric $q$-series ------------------------------------------------- We begin by introducing a combinatorial refinement of the enumeration functions, from which we derive a $q$-difference equation; the hypergeometric solution of the equation then gives the desired $q$-series expressions. Our definitions are influenced by Andrews’ work in [@And68], which corresponds to the case $d=3$ of the general construction that follows. Specifically, for integer parameters $d \geq 3, 1 \leq r < \frac{d}{2}$, and $m \geq 1,$ $j, n \geq 0$, we define $$\begin{aligned} \beta_{d,r,j}(m, n):= \# \Big\{ & \lambda \vdash n \: : \: \lambda = \lambda_1 + \dots + \lambda_m \text{ where } \lambda_i > j \text{ and } \lambda_i \equiv 0, \pm r \!
\pmod{d} \; \forall i, \\ & \text{ with gaps } \lambda_{i}-\lambda_{i+1} \geq d, \text{ and furthermore } \lambda_{i}-\lambda_{i+1} > d \text{ if } d | \lambda_i \Big\}.\end{aligned}$$ We also adopt the convention that $\beta_{d,r,j}(0,0) := 1.$ It follows immediately from the definitions that $$\begin{aligned} B_{d,r}(n) & = \sum_{m \geq 0}\beta_{d,r,0}(m,n), \\ C_{d,r}(n) & = \sum_{m \geq 0}\beta_{d,r,d}(m,n). \notag\end{aligned}$$ We denote the corresponding generating functions by $$f_{d,r}(x) = f_{d,r}(x;q) := \sum_{m,n\geq 0} \beta_{d,r,0}(m,n)x^m q^n.$$ We are then particularly interested in finding hypergeometric series for the cases $$\begin{aligned} \label{E:BC=f} {\mathscr{B}}_{d,r}(q)= f_{d,r}(1;q) = \sum_{n\geq 0} B_{d,r}(n)q^n, \\ {\mathscr{C}}_{d,r}(q)= f_{d,r}(q^d;q) = \sum_{n\geq 0} C_{d,r}(n)q^n. \notag\end{aligned}$$ We achieve this by deriving and then solving the following $q$-difference equation. \[P:frec\] For $d \geq 3, 1 \leq r < \frac{d}{2}$, we have $$\begin{aligned} f_{d,r}(x) = \left(1 + xq^r + xq^{d-r}\right) f_{d,r}\left(xq^d\right) + xq^d\left(1 - xq^d\right) f_{d,r}\left(xq^{2d}\right).\end{aligned}$$ We prove the recurrence through a combinatorial inclusion-exclusion argument by conditioning on the smallest part of the partition. Suppose that $\lambda$ is a partition counted by $\beta_{d,r,0}(m,n)$ for some $m$ and $n$. Then its smallest part is either $r, d-r, d, $ or something larger. These cases are counted, respectively, by the following sum of generating functions (where for convenience we write $f$ instead of $f_{d,r}$): $$\label{E:fover} xq^{r} f\left(xq^d\right) + xq^{d-r} f\left(xq^d\right) + xq^{d} f\left(xq^{2d}\right) + 1 \cdot f\left(xq^d\right).$$ The term $f\left(xq^d\right)$ ensures that the next smallest part is larger than $d$, while $f\left(xq^{2d}\right)$ gives a part larger than $2d$. However, also generates partitions that do not satisfy the Schur-type conditions for $d,r$, so this excess must be subtracted. 
In particular, if \eqref{E:fover} were precisely equal to $f(x)$, then iterating the recurrence would give the term $xq^{d-r} \cdot xq^{d+r} f(xq^{2d}) = x^2 q^{2d} f(xq^{2d})$, which represents a partition with the parts $d-r$ and $d+r$. Subtracting this unallowed small gap gives the recurrence claimed in the proposition statement. Andrews originally proved this result for $d=3$ in [@And68] by first describing a family of recurrences satisfied by the $\beta_{3,1,j}(n,m)$, and then turning to their generating functions. However, it is more direct to instead prove the $q$-difference equation through a combinatorial analysis of the underlying partitions, as we have done here. A more general version of the recurrence in Proposition \[P:frec\] is found in (2.1) of [@AG93], although the most interesting automorphic $q$-series arise from specializing the parameters as in our statement. However, Alladi and Gordon’s study also includes many notable combinatorial results; the reader is particularly encouraged to consult equation (1.1) of [@AG93] for further details on the appearance of $f_{d,r}(x;q)$ in the theory of infinite continued fractions. Solving the $q$-difference equation in Proposition \[P:frec\] gives the following hypergeometric expression for $f_{d,r}.$ \[P:fqseries\] For $d \geq 3, 1 \leq r < \frac{d}{2}$ and $|q| < 1$, we have $$f_{d,r}(x;q) = \left(x;q^d\right)_\infty \sum_{n \geq 0} x^n \frac{\left(-q^r,-q^{d-r};q^d\right)_{n}}{\left(q^d;q^d\right)_{n}}.$$ It is more convenient to renormalize the recurrence from Proposition \[P:frec\] by setting $$\label{E:gdr} g_{d,r}(x) := \frac{f_{d,r}(x)}{(x;q^d)_{\infty}}.$$ Then we have the recurrence (again writing $g$ instead of $g_{d,r}$) $$\label{E:grec} (1-x) g(x)= \left(1 + xq^r + x q^{d-r}\right)g\left(x q^d\right) + x q^d g\left(x q^{2d}\right).$$ We now consider the expansion of $g$ as a series in $x$, writing $$g(x) = \sum_{n \geq 0}A_n x^n,$$ where $A_n = A_n(q)$ are rational expressions in $q$.
Isolating the coefficient of $x^n$ in now gives the recurrence $$A_n - A_{n-1} = A_n q^{dn} + A_{n-1}\left( q^{d(n-1)+ r} + q^{dn-r} + q^{d(2n-1)}\right).$$ Simplifying, we find that $$A_n = \frac{\left(1 + q^{d(n-1)+r}\right)\left(1 + q^{dn-r}\right)}{1-q^{dn}}A_{n-1}.$$ Using the initial condition $A_0 = 1$, we can therefore solve the recurrence to find the unique solution (cf. Lemma 1 in [@And68Q]) $$\label{E:ghyper} g(x) = \sum_{n \geq 0} x^n \frac{\left(-q^r,-q^{d-r};q^d\right)_{n}}{\left(q^d;q^d\right)_{n}}.$$ The proof is now complete upon comparison with . We now use transformations for hypergeometric $q$-series in order to find additional representations for the generating functions that directly display their automorphic properties. We begin by recalling the following $_{3}\phi_{2}$ transformation, which is equivalent to equation (III.10) in [@GR90] $$\label{E:3phi2} \sum_{n\geq 0}\frac{\left(\frac{aq}{bc},d,e\right)_{n}}{\left(q,\frac{aq}{b},\frac{aq}{c}\right)_{n}}\left(\frac{aq}{de}\right)^n= \frac{\left(\frac{aq}{d},\frac{aq}{e},\frac{aq}{bc}\right)_{\infty}}{\left(\frac{aq}{b},\frac{aq}{c},\frac{aq}{de}\right)_{\infty}} \sum_{n\geq 0}\frac{\left(\frac{aq}{de},b,c\right)_{n}}{\left(q,\frac{aq}{d},\frac{aq}{e}\right)_{n}}\left(\frac{aq}{bc}\right)^n.$$ We also recall a special case of the Watson-Whipple transformation for $_8\phi_7$ (let $n \to \infty$ in (III.18) of [@GR90]), namely $$\label{E:WW} \sum_{n\geq 0}\frac{\left(\frac{aq}{bc},d,e\right)_{n}}{\left(q,\frac{aq}{b},\frac{aq}{c}\right)_{n}}\left(\frac{aq}{de}\right)^n = \frac{\left(\frac{aq}{d},\frac{aq}{e}\right)_{\infty}}{\left(aq,\frac{aq}{de}\right)_{\infty}} \sum_{n\geq 0} \frac{\left(a,b,c,d,e\right)_{n}\left(1- a q^{2n}\right)}{\left(q,\frac{aq}{b},\frac{aq}{c},\frac{aq}{d},\frac{aq}{e}\right)_{n}} \frac{\left(aq\right)^{2n}(-1)^n q^{\frac{n(n-1)}{2}}}{(1-a)(bcde)^n}.$$ \[P:Chyper\] For $d \geq 3$ and $1 \leq r < \frac{d}{2}$ we have the identities: $$\label{P:Chyper:bi} 
{\mathscr{C}}_{d,r}(q) = \frac{\left(-q^r, -q^{d-r}; q^d\right)_\infty}{\left(q^d; q^d\right)_\infty}\sum_{n\in{{\mathbb{Z}}}}\frac{(-1)^n q^{\frac{3dn(n+1)}{2}}}{1+q^{r+dn}} = \left(-q^{r},-q^{d-r};q^d\right)_{\infty}g_{3}\left(-q^r;q^d\right).$$ The first formula is obtained from by setting $q \mapsto q^{d}$, $a=x$, $b,c \rightarrow \infty$, $d= -q^{r}$, and $e= -q^{d-r}$. The left hand side of then equals $$\sum_{n\geq 0}\frac{\left(-q^r, -q^{d-r}; q^d\right)_n}{\left(q^d; q^d\right)_n} x^n=g_{d,r}(x).$$ The right hand side of simplifies to $$\frac{\left(-xq^r,-xq^{d-r}; q^d \right)_{\infty}}{\left(xq^d,x;q^d \right)_{\infty}}\sum_{n \geq 0}\frac{\left(x,-q^r,-q^{d-r};q^d \right)_{n}}{\left(q^d,-xq^r,-xq^{d-r};q^d \right)_{n}} \frac{\left(1 - xq^{2dn}\right)}{1-x}(-1)^n x^{2n} q^{\frac{3dn^2 -dn}{2}}.$$ Multiplying by $(x;q^d)_{\infty}$, we find that $$f_{d,r}(x) = \frac{\left(-xq^r,-xq^{d-r}; q^d \right)_{\infty}}{\left(xq^d;q^d \right)_{\infty}}\sum_{n \geq 0}\frac{\left(x,-q^r,-q^{d-r};q^d \right)_{n}}{\left(q^d,-xq^r,-xq^{d-r};q^d \right)_{n}} \frac{\left(1 - xq^{2dn}\right)}{1-x}(-1)^n x^{2n} q^{\frac{3dn^2-dn}{2}}.$$ Setting $x=q^d$ gives $$\label{E:Cdrsubs} {\mathscr{C}}_{d, r}(q)=f_{d,r}\left(q^d\right)=\frac{\left(-q^r,-q^{d-r}; q^d \right)_{\infty}}{\left(q^d;q^d \right)_{\infty}}\sum_{n \geq 0}\frac{(-1)^n \left(1 - q^{d(2n+1)}\right) q^{\frac{3dn^2}{2}+ \frac{3dn}{2}}}{(1+q^{r +dn})(1+ q^{d-r+dn})}.$$ Using the routine partial fraction decomposition $$\frac{1 - q^{d(2n+1)}}{(1+q^{r +dn})(1+ q^{d-r+dn})} = \frac{1}{1+q^{r+dn}}- \frac{q^{d-r+dn}}{1+ q^{d-r+dn}},$$ we can rewrite the sum in as the bilateral summation $$\sum_{n\in{{\mathbb{Z}}}}\frac{(-1)^n q^{\frac{3dn(n+1)}{2}}}{1+q^{r+dn}}.$$ This completes the proof of the first identity in . For the second expression, we note that the left hand side of is the same as in . 
We therefore proceed by making the same substitutions as above: $q \mapsto q^d$, $b,c \rightarrow \infty$, $a=x$, $d=-q^r$, and $e= -q^{d-r}.$ This gives $$\label{E:gdrhyper2} g_{d,r}(x)=\frac{\left(-xq^r,-xq^{d-r}; q^d\right)_{\infty}}{\left(x;q^d\right)_{\infty}}\sum_{n \geq 0} \frac{\left(x;q^d\right)_{n}x^n q^{dn^2}}{\left(q^d,-xq^r, -xq^{d-r};q^d\right)_{n}}.$$ Setting $x=q^d$ gives the overall expression $${\mathscr{C}}_{d, r}(q)=\left(-q^{d+r},-q^{2d-r};q^d\right)_{\infty}\sum_{n \geq 0} \frac{q^{dn(n+1)}}{(-q^{r+d},-q^{2d-r};q^d)_n}.$$ Combined with the definition of the universal mock theta function from , this gives the statement. We can also use to recover the formula for ${\mathscr{B}}_{d,r}(q)$ from . In particular, if we multiply by $(x; q^d)_\infty$ and set $x=1$, then every term of the sum vanishes except for $n=0$, and hence we directly obtain the infinite product. Automorphic properties {#S:Gen:Auto} ---------------------- We now describe the automorphicity of the generating functions from above and prove Theorem \[T:univ\]. We first recall several basic facts about automorphic and Jacobi forms, although the definitions are stated very roughly (we specifically suppress technical discussion of the finer points of multiplier systems and level structure), as our primary aims are practical; we wish to describe how the combinatorial study of Schur-type partitions leads to the fundamental objects from the theory of automorphic $q$-series, and then to apply the theory to obtain asymptotic results. The interested reader should consult the cited references for a complete background in the subject. We also provide special cases of modular transformations for the functions that arise in this paper, as we will use these later for the asymptotic analysis in Section \[S:Asymp\]. 
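Before recalling the automorphic definitions, we note that the identities of Proposition \[P:Chyper\] lend themselves to direct numerical verification at real $0 < q < 1$. The following Python sketch is our own illustration (the truncation depths and the sample value $q = 0.2$ are arbitrary): it evaluates both representations of ${\mathscr{C}}_{d,r}(q)$, the bilateral Appell-type sum and the product times $g_3$, and confirms that they agree:

```python
from math import prod

def qpoch_inf(a, q, terms=120):
    # (a; q)_infinity, truncated
    return prod(1 - a * q**k for k in range(terms))

def g3(x, q, terms=30):
    # truncated universal mock theta function g_3(x; q)
    def qpoch(a, n):
        return prod(1 - a * q**k for k in range(n))
    return sum(q**(n * (n + 1)) / (qpoch(x, n + 1) * qpoch(q / x, n + 1))
               for n in range(terms))

def C_bilateral(d, r, q, M=25):
    # first representation in Proposition [P:Chyper]
    s = sum((-1)**n * q**(3 * d * n * (n + 1) // 2) / (1 + q**(r + d * n))
            for n in range(-M, M + 1))
    return (qpoch_inf(-q**r, q**d) * qpoch_inf(-q**(d - r), q**d)
            / qpoch_inf(q**d, q**d)) * s

def C_product_g3(d, r, q):
    # second representation: (-q^r, -q^{d-r}; q^d)_inf * g_3(-q^r; q^d)
    return (qpoch_inf(-q**r, q**d) * qpoch_inf(-q**(d - r), q**d)
            * g3(-q**r, q**d))

q = 0.2
for d, r in [(3, 1), (5, 2), (7, 3)]:
    assert abs(C_bilateral(d, r, q) - C_product_g3(d, r, q)) < 1e-9
```

Both sums converge extremely rapidly for real $0 < q < 1$, so modest truncations already give full floating-point agreement.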
Briefly, a holomorphic function $f: \mathbb{H} \to {{\mathbb{C}}}$ is a [*weakly holomorphic modular form*]{} of weight $k$ on a congruence subgroup $\Gamma \subset \text{SL}_2({{\mathbb{Z}}})$ if $f$ is meromorphic at the “cusps” of $\Gamma$ and satisfies the modular transformations $$\label{E:mod} f\left(\frac{a\tau + b}{c\tau + d}\right) = \chi(\gamma) (c\tau + d)^k f(\tau) \qquad \text{for all } \gamma = \begin{pmatrix} a & b \\ c& d \end{pmatrix} \in \Gamma,$$ where the $\chi(\gamma)$ are certain roots of unity that form a “multiplier system”. See [@Kob84] for further details. Furthermore, as in [@Zag09], a holomorphic function $f: \mathbb{H} \to {{\mathbb{C}}}$ is a [*mock modular form*]{} of weight $k$ if there is an associated modular form $g$ of weight $2-k$ (the “shadow” of $f$) such that $f + g^\ast$ satisfies a modular transformation of the form , where the real-analytic correction term is given by $$g^\ast(\tau) := \left(\frac{i}{2}\right)^{k-1} \int_{-\overline{\tau}}^\infty \frac{\overline{g\left(-\overline{z}\right)}}{(z+\tau)^k} dz.$$ A [*mixed mock modular form*]{} is then a product of a modular form and a mock modular form, or a linear combination of such terms; the precise definition reflects a richer tensor structure amongst the vector spaces of automorphic forms, and is found in Section 7.3 of [@DMZ12]. Finally, we also encounter [*Jacobi forms*]{}, which are complex-valued functions on ${{\mathbb{C}}}\times \mathbb{H}$ that satisfy modular-type transformations in the second argument, and certain lattice-invariant transformations in the first argument. The full theory of such forms may be found in [@EZ85]. We now present the special automorphic functions that arise in the present study. 
First, recall Dedekind’s eta-function, which is a modular form of weight $\frac{1}{2}$ defined by $$\label{E:eta} \eta(\tau) := q^\frac{1}{24}\prod_{n\geq 1} \left(1-q^n\right).$$ In particular, it satisfies the inversion formula (Theorem 3.1 in [@Apo90]) $$\label{E:etainv} \eta \left( - \frac{1}{\tau} \right) = \sqrt{-i\tau} \eta(\tau).$$ We next recall Jacobi’s theta function $$\label{Thetadef} \vartheta(w) = \vartheta(w;\tau) := \sum_{n\in\frac{1}{2}+{{\mathbb{Z}}}}e^{\pi i n^2\tau+2\pi in\left(w+\frac{1}{2}\right)},$$ which has an equivalent product form (writing $\zeta := e^{2\pi i w}$) $$\label{E:thetaprod} \vartheta(w;\tau) = -i q^{\frac{1}{8}} \zeta^{-\frac{1}{2}} \prod_{n\geq 1} (1-q^n)\left(1-\zeta q^{n-1}\right) \left(1-\zeta^{-1}q^n\right).$$ This function is a Jacobi form of weight and index $\frac{1}{2}$, and it satisfies the following transformation formulas: $$\begin{aligned} \label{E:thetaneg} \vartheta(-w; \tau) & = -\vartheta(w; \tau), \\ \label{E:thetainv} \vartheta \left( \frac{w}{\tau} ; - \frac{1}{\tau} \right) &= -i \sqrt{-i\tau} e^{\frac{\pi i w^2}{\tau}} \vartheta \left( w; \tau \right).\end{aligned}$$ Using these definitions the following formula can be immediately verified. \[P:Bmod\] If $d \geq 3$, and $1 \leq r < \frac{d}{2}$, then $${\mathscr{B}}_{d,r}(q) = -\frac{q^{-\frac{d}{12} +\frac{r}{2}}\vartheta(\frac{1}{2} + r\tau; d\tau)}{\eta(d\tau)}.$$ The statement of Theorem \[T:univ\] then follows immediately by combining and Propositions \[P:Chyper\] and \[P:Bmod\]. We close this section by describing the automorphic properties of ${\mathscr{C}}_{d,r}(q)$. 
Theorem 3.1 of [@Kan09] states that if $\alpha \not \in {{\mathbb{Z}}}\tau + \frac{1}{3} {{\mathbb{Z}}}$, then $g_3\left(e^{2\pi i \alpha}; \tau\right)$ is a mock modular form of weight $\frac{1}{2}$, up to rational $q$-powers (note that one of the terms in Kang’s theorem is the meromorphic function $\frac{\eta^3(3\tau)}{\eta(\tau) \vartheta(3\alpha; 3\tau)},$ which is a weakly holomorphic modular form of weight $\frac{1}{2}$ by Theorem 1.3 in [@EZ85]). Sending $\tau \to d\tau$, we then conclude that $g_3(-q^r; q^d)$ is a mock modular form of weight $\frac{1}{2}$. The theory of mock modular forms (refer to [@Zw02]) also provides transformation formulas for $g_3(x;q)$ similar to and , but this is unnecessary in our asymptotic analysis. Asymptotic results {#S:Asymp} ================== In this section we determine the asymptotic behavior of the enumeration functions $B_{d,r}(n)$ and $C_{d,r}(n)$, proving Theorems \[T:BCAsymp\] and \[T:Ineq\]. We achieve this by first studying the asymptotic properties of the generating functions, and then applying Wright’s version of the Circle Method, which was developed in [@Wri41; @Wri71] (also see [@BM12; @BMan13] for further adaptations of the approach in other recent applications). Asymptotic expansions and proof outline --------------------------------------- Our primary goal is to give the first two terms in the asymptotic expansions of the enumeration functions. Theorems \[T:BCAsymp\] and \[T:Ineq\] follow immediately from the following results. Here $I_s$ denotes the standard modified Bessel function (see Section 4.12 in [@AAR99]). 
\[T:BCExpn\] Suppose that $d \geq 3$ and $1 \leq r < \frac{d}{2}.$ As $n\rightarrow\infty$, we have $$\begin{aligned} B_{d,r}(n) & = \alpha_1 n^{-\frac12} I_{-1}\left(\pi\sqrt{\frac{2n}{3d}}\right) +\beta_1(r) n^{-1} I_{-2}\left(\pi\sqrt{\frac{2n}{3d}}\right) +O\Big(n^{-\frac32} e^{\frac{\pi\sqrt{2n}}{\sqrt{3d}}}\Big), \\ C_{d,r}(n) & = \alpha_2 n^{-\frac12} I_{-1}\left(\pi\sqrt{\frac{2n}{3d}}\right) + \beta_2(r) n^{-1} I_{-2}\left(\pi\sqrt{\frac{2n}{3d}}\right) +O\Big(n^{-\frac32} e^{\frac{\pi\sqrt{2n}}{\sqrt{3d}}}\Big),\end{aligned}$$ where $$\begin{aligned} \begin{array}{ll} \displaystyle \alpha_1 := \frac{\pi}{\sqrt{6d}}, & \displaystyle \alpha_2 := \frac{\pi}{3\sqrt{6d}} , \\ \displaystyle \beta_1(r) := \frac{\pi^2}{6d}\left(\frac{d}{12} - \frac{r}{2} + \frac{r^2}{2d}\right), & \displaystyle \beta_2(r) := \frac{\pi^2}{6d}\left(\frac{11d}{108}-\frac{r}{6}+\frac{r^2}{6d}\right). \end{array}\end{aligned}$$ The proof of Theorem \[T:BCExpn\] can easily be extended to give an asymptotic expansion with an arbitrary number of terms. Furthermore, the full asymptotic expansion for the modified Bessel function is also well-known (cf. (4.12.7) in [@AAR99]). However, for our present purposes we need only the two terms in the theorem statement, along with the fact that as $x \to \infty$, we have $$\label{E:IAsymp} I_s(x) = \frac{e^x}{\sqrt{2\pi x}}\left(1 + O\left(\frac{1}{x}\right)\right).$$ The asymptotic formulas in Theorem \[T:BCAsymp\] then follow by applying \eqref{E:IAsymp} to the leading terms in Theorem \[T:BCExpn\]. Since these terms are independent of $r$, the asymptotic differences of the enumeration functions are found by comparing the terms with $\beta_j(r-1)$ and $\beta_j(r)$, again using \eqref{E:IAsymp}.
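For concreteness, the Bessel asymptotic \eqref{E:IAsymp} is easy to confirm numerically from the defining power series $I_s(x) = \sum_{k \geq 0} \frac{(x/2)^{2k+s}}{k!\,(k+s)!}$ for integer $s \geq 0$, recalling that $I_{-m} = I_m$ for integer $m$. The following short Python sketch (our own; the truncation depth is arbitrary) accumulates the terms multiplicatively to avoid overflow:

```python
from math import exp, factorial, pi, sqrt

def bessel_I(s, x, terms=200):
    # modified Bessel function I_s(x), integer s >= 0, via its power series;
    # successive terms are built by the ratio (x/2)^2 / (k (k + s))
    term = (x / 2)**s / factorial(s)
    total = term
    for k in range(1, terms):
        term *= (x / 2)**2 / (k * (k + s))
        total += term
    return total

# Check I_s(x) ~ e^x / sqrt(2 pi x) as x -> infinity (here with s = 1 = -(-1)).
for x in (20.0, 40.0):
    ratio = bessel_I(1, x) / (exp(x) / sqrt(2 * pi * x))
    assert abs(ratio - 1) < 1 / x    # the correction term is O(1/x)
```

The observed deviation is close to the first correction term $-\tfrac{3}{8x}$ of the full expansion, in line with (4.12.7) in [@AAR99].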
\[C:Asymp\] As $n\rightarrow\infty$, $$\begin{aligned} B_{d,r-1}(n) - B_{d,r}(n) & \sim \frac{\pi}{2^{\frac{7}{4}}3^{\frac{3}{4}}d^{\frac{3}{4}}} \left(\frac{1}{2} - \frac{r}{d} + \frac{1}{2d}\right) n^{-\frac{5}{4}} e^{\pi\sqrt{\frac{2n}{3d}}}, \\ C_{d,r-1}(n) - C_{d,r}(n) & \sim \frac{\pi}{2^{\frac{7}{4}}3^{\frac{7}{4}}d^{\frac{3}{4}}} \left(\frac{1}{2} - \frac{r}{d} + \frac{1}{2d}\right) n^{-\frac{5}{4}} e^{\pi\sqrt{\frac{2n}{3d}}}.\end{aligned}$$ In order to prove these results, we apply Cauchy’s Theorem and recover the coefficients from the generating functions. Throughout this section we adopt the convenient shorthand notation $$F_1(q) := {\mathscr{B}}_{d,r}(q) \qquad \text{and} \qquad F_2(q) := {\mathscr{C}}_{d,r}(q).$$ Adopting similar shorthand for the coefficients, Cauchy’s Theorem then implies that for any $n \geq 1$ $$\label{E:Cauchy} c_1(n) := B_{d,r}(n)=\frac1{2\pi i}\int_{{\mathcal{C}}}\frac{F_1(q)}{q^{n+1}}dq \qquad \text{and} \qquad c_2(n) := C_{d,r}(n)=\frac1{2\pi i}\int_{{\mathcal{C}}}\frac{F_2(q)}{q^{n+1}}dq,$$ with the contour ${\mathcal{C}}$ chosen to be the (counter-clockwise) circle with radius $e^{-N}$, where we further define $N :=\frac{\pi}{\sqrt{6dn}}$. It is convenient to parameterize this contour by setting $q = e^{-z}$, where $z = N + iy$ and $-\pi < y \leq \pi$. Note that we must be sure to use $\tau = \frac{iz}{2\pi}$ when applying automorphic transformations. We then decompose the contour into ${\mathcal{C}}= {\mathcal{C}}_1 + {\mathcal{C}}_2$, with $${\mathcal{C}}_1 := \Big\{ q \in {\mathcal{C}}\: : \: |y| < 2N \Big\},$$ and ${\mathcal{C}}_2$ consisting of the remaining curve. We further denote the corresponding contributions to for $j=1,2$ by $$\label{E:MjEj} M_j := \frac1{2\pi i}\int_{{\mathcal{C}}_1}\frac{F_j(q)}{q^{n+1}}dq \qquad \text{and} \qquad E_j := \frac1{2\pi i}\int_{{\mathcal{C}}_2}\frac{F_j(q)}{q^{n+1}}dq,$$ as we will see that these integrals, respectively, contribute the main asymptotic term and error terms. 
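The decomposition ${\mathcal{C}}= {\mathcal{C}}_1 + {\mathcal{C}}_2$ reflects the fact that, since the $F_j$ have nonnegative coefficients, $|F_j(q)|$ is maximized on the circle at the point $q = e^{-N}$ nearest the positive real axis and decays rapidly along the minor arc ${\mathcal{C}}_2$. This is easy to observe numerically; the following Python sketch is our own illustration (it uses a truncation of the product representation of ${\mathscr{B}}_{d,r}$ from Section \[S:Gen\], and the sample points are arbitrary):

```python
import cmath
from math import pi, sqrt

def B_product(d, r, q, terms=400):
    # B_{d,r}(q) = (-q^r, -q^{d-r}; q^d)_infinity, truncated; q may be complex
    val = 1.0
    for k in range(terms):
        val *= (1 + q**(r + d * k)) * (1 + q**(d - r + d * k))
    return val

d, r, n = 3, 1, 200
N = pi / sqrt(6 * d * n)                  # radius parameter of the contour
peak = abs(B_product(d, r, cmath.exp(-N)))
for y in (2 * N, 0.5, 1.5, pi):           # sample points at or beyond the edge of C_1
    assert abs(B_product(d, r, cmath.exp(complex(-N, -y)))) < peak
```

The triangle inequality already gives $|F_1(e^{-N-iy})| \leq F_1(e^{-N})$, and the sampled values illustrate how sharply the modulus drops off the major arc, which is what makes the contribution of ${\mathcal{C}}_2$ an exponentially small error.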
Asymptotic behavior near $q=1$. ------------------------------- We begin by determining the asymptotic behavior of the functions $F_j(q)$ on ${\mathcal{C}}_1$, which contributes to the main terms for the coefficients. \[L:FonC1\] If $q = e^{-z} \in {\mathcal{C}}_1$ and $j = 1,2$, then we have the asymptotic expansions $$F_{j}(q)=e^{\frac{\pi^2}{6dz}}\left(\alpha'_j +\beta'_j(r) z+O\left(z^2\right)\right),$$ where the constants are given by $$\begin{aligned} \begin{array}{ll}\alpha'_1 = 1, & \displaystyle \alpha'_2 = \frac{1}{3}, \\ \displaystyle \beta'_1(r) = \frac{d}{12} - \frac{r}{2} + \frac{r^2}{2d}, \quad & \displaystyle \beta'_2 (r) = \frac{11d}{108}-\frac{r}{6}+\frac{r^2}{6d}. \end{array}\end{aligned}$$ We begin with $F_1(q)$, recalling Proposition \[P:Bmod\]. Combined with the inversion formulas and , this gives $${\mathscr{B}}_{d,r}(q) = -q^{-\frac{d}{12} + \frac{r}2} \frac{\vartheta \left( \frac12 + \frac{irz}{2\pi}; \frac{idz}{2\pi}\right)}{\eta \left(\frac{idz}{2\pi}\right)} = -\frac{ie^{-\frac{\pi ir}{d}+z\left(\frac{d}{12}-\frac{r}{2}+\frac{r^2}{2d}\right)-\frac{\pi^2}{2dz}}\vartheta\left(-\frac{\pi i}{dz}+\frac{r}{d}; \frac{2\pi i}{dz}\right)}{\eta\left(\frac{2\pi i}{dz}\right)}.$$ Recalling , , and , we find a (uniform) expansion for ${\mathscr{B}}_{d,r}(q)$ on ${\mathcal{C}}_1$, namely $$\label{E:BMaj} {\mathscr{B}}_{d,r}(q) = e^{\frac{\pi^2}{6dz}+z\left(\frac{d}{12}-\frac{r}{2}+\frac{r^2}{2d}\right)}\left(1+O\left(e^{-\frac{2\pi^2}{dz}}\right)\right).$$ Turning next to $F_2(q)$, by and , we find its asymptotic expansion by first directly calculating the Taylor expansion around $z=0$ for the convergent sum $$\label{E:Gsum} G(q):=\sum_{n\geq 0}\frac{q^{dn(n+1)}}{\left(-q^r, -q^{d-r}; q^d\right)_{n+1}}=G(0)+G'(0)z+O\left(z^2\right).$$ The constant term evaluates to $$G(0)=\frac14\sum_{n\geq 0}\frac1{4^n}=\frac13.$$ To calculate the derivative, we use the fact that $$\frac{dG}{dz} = \frac{dG}{dq} \frac{dq}{dz} = -q \frac{dG}{dq}$$ and then apply logarithmic differentiation to each summand in
order to find the evaluation $$\begin{aligned} G'(0)&=-\frac14\sum_{n\geq 0}\frac1{4^n} \left(dn^2+dn - \sum_{j=0}^n \left(\frac{r+dj}{2}+\frac{d+dj-r}{2}\right)\right) =-\frac{d}{8}\sum_{n\geq 0}\frac{\left(n^2-1\right)}{4^n}=\frac{2d}{27}.\end{aligned}$$ Combining , , and then gives the claim. Asymptotic behavior away from $q=1$ ----------------------------------- We next determine the asymptotic behavior of the $F_j(q)$ on ${\mathcal{C}}_2$. It is sufficient to find asymptotic bounds, as this contour only contributes to the error term in the overall formulas for the coefficients. \[L:FonC2\] If $q \in \mathcal{C}_2$, then the following bounds are satisfied: 1. $\displaystyle F_1(q) \ll e^{\frac{\pi \sqrt{n}}{5\sqrt{6d}}},$ 2. $\displaystyle F_2(q) \ll n e^{\frac{\pi \sqrt{2n}}{5\sqrt{3d}}}.$ We begin by observing that the inversion formulas used in proving also immediately lead to a bound for $F_1(q)$ on ${\mathcal{C}}_2$. Namely, we have $${\mathscr{B}}_{d,r}(q) \ll e^{\frac{\pi^2}{6d} {\text{Re}}\left( \frac{1}{z} \right)}.$$ In order to bound $F_2(q)$, we recall Proposition \[P:Chyper\] and estimate the additional pieces individually. We first address the infinite product, again using to conclude that $$\frac{1}{\left( q^d ; q^d \right)_\infty} \ll |z|^{\frac12} e^{\frac{\pi^2}{6d} {\text{Re}}\left( \frac{1}{z} \right)}.$$ It remains to bound the sum. 
We use the fact that ${\text{Re}}(z) = N$ on ${\mathcal{C}}_2$ and calculate the (rough) bound $$\sum_{n\in{{\mathbb{Z}}}}(-1)^n \frac{q^{\frac{3dn(n+1)}{2}}}{1+q^{dn+r}} \ll\frac{1}{1-e^{-N}}\sum_{n\geq 0} e^{-nN} \ll \frac1{N^2}\ll n.$$ Thus $$\begin{aligned} F_2(q) & \ll |z|^\frac12 n e^{\frac{\pi^2}{3d} {\text{Re}}\left( \frac{1}{z} \right)}.\end{aligned}$$ The statement now follows because $|z| \ll 1$ on all of ${\mathcal{C}}$, and furthermore, for $q \in \mathcal{C}_2$ we have the additional inequality $${\text{Re}}\left( \frac{1}{z} \right) = \frac{{\text{Re}}(z)}{{\text{Re}}(z)^2 + {\text{Im}}(z)^2} \leq \frac{1}{5N}.$$ Asymptotic formulas for coefficients ------------------------------------ We now complete the proofs of Theorem \[T:BCExpn\] and Corollary \[C:Asymp\] by plugging the bounds for the $F_j(q)$ into . We begin by considering the first two terms from Lemma \[L:FonC1\], and we relate the corresponding integrals to Bessel functions. In particular, Wright’s calculations in Section 5 of [@Wri71] apply directly to the present situation, implying that $$\begin{aligned} \label{E:MainBessel} \frac{1}{2\pi i} & \int_{\mathcal{C}_1} \frac{e^{\frac{\pi^2}{6dz}} \left( \alpha_j' + \beta_j' (r) z \right)}{q^{n+1}} dq \\ & =\alpha_j'\left(\frac{\pi}{\sqrt{6d}}\right) n^{-\frac12} I_{-1}\left(\pi\sqrt{\frac{2n}{3d}}\right) +\beta_j'(r)\frac{\pi^2}{6d}n^{-1}I_{-2}\left(\pi\sqrt{\frac{2n}{3d}}\right) +O\left(n^{-1} e^{\frac{\pi}{2}\sqrt{\frac{3n}{2d}}}\right). \notag\end{aligned}$$ We now turn to the error terms.
Using the fact that $|z| \leq \sqrt{5} N$ on ${\mathcal{C}}_1$, we find that the error terms from Lemma \[L:FonC1\] for either $j=1,2$ contribute $$\label{E:C1Err} \int_{\mathcal{C}_1} e^{nN+\frac{\pi^2}{6d}\text{Re}\left(\frac1z\right)}|z|^2 dz\ll N^3 e^{\frac{\pi\sqrt{2n}}{\sqrt{3d}}}\ll n^{-\frac32} e^{\frac{\pi\sqrt{2n}}{\sqrt{3d}}}.$$ The bounds from Lemma \[L:FonC2\] on ${\mathcal{C}}_2$ give a contribution with an exponentially lower order, so the overall error is given by . Inserting and into , we then obtain Theorem \[T:BCExpn\]. Probabilistic interpretation of universal mock theta functions {#S:Prob} ============================================================== In this section we further examine the combinatorial properties of Schur-type partitions and consequently prove the amazing fact that the universal mock theta function at real arguments naturally occurs as the conditional probability of events in simple probability spaces. This phenomenon was previously observed for individual examples of Ramanujan’s mock theta functions (see [@Wat36] for notation), including $\xi(q)$ [@AEPR07] and $\phi(q)$ [@BMM13]. However, our current results are significantly more fundamental due to the underlying importance of the universal mock theta function. Suppose that $0 < q < 1$ is fixed, and let $E_1, E_2, \dots$ be a sequence of independent events that individually occur with probabilities $$\label{E:ProbE} p_j = {\mathbf{P}}(E_j) := \frac{q^j}{1+q^j}.$$ We also denote the complementary events by $F_j := E_j^c$, which have corresponding probabilities ${\overline}{p}_j := {\mathbf{P}}(F_j) = 1 - p_j = \frac{1}{1+q^j}.$ For any events $R$ and $S$, we adopt the space-saving notational conventions $RS := R \cap S.$ If $d \geq 3$ and $1 \leq r < \frac{d}{2}$, we consider certain events defined in terms of the sequence of $E_j$s, although we first introduce one more notational shorthand, writing $E_n^k := E_{nd+k}$ (with similar notation for the complementary $F$s). 
We now define the events $$\begin{aligned} U_{d,r} & := \bigcap_{n \geq 0} \Big(E_{n}^r F_{n}^{d-r} F_{n+1}^0 \cup F_{n}^r \Big) \Big(E_{n}^{d-r} F_{n+1}^{0} F_{n+1}^r \cup F_{n}^{d-r} \Big) \Big(E_{n+1}^{0} F_{n+1}^{r} F_{n+1}^{d-r} F_{n+2}^0 \cup F_{n+1}^{0} \Big), \notag \\ \label{E:UV} V_{d,r} & := \bigcap_{n \geq 1} \Big(E_{n}^r F_{n}^{d-r} F_{n+1}^0 \cup F_{n}^r \Big) \Big(E_{n}^{d-r} F_{n+1}^{0} F_{n+1}^r \cup F_{n}^{d-r} \Big) \Big(E_{n+1}^{0} F_{n+1}^{r} F_{n+1}^{d-r} F_{n+2}^0 \cup F_{n+1}^{0} \Big).\end{aligned}$$ In words, $U_{d,r}$ is the event such that if $E_{nd+r}$ occurs, then $E_{nd+d-r}$ and $E_{(n+1)d}$ do not occur; if $E_{nd +d-r}$ occurs, then $E_{(n+1)d}$ and $E_{(n+1)d+r}$ do not occur; and if $E_{(n+1)d}$ occurs, then $E_{(n+1)d+r}$, $E_{(n+1)d+d-r}$ and $E_{(n+2)d}$ do not occur. The event $V_{d,r}$ has the same conditions beginning only from $E_{d+r}$, with no restrictions on whether $E_{r}, E_{d-r}$, and $E_d$ occur. Note that the events $U_{d,r}$ and $V_{d,r}$ are independent from any $E_j$ with $j \not\equiv 0, \pm r \pmod{d}.$ \[T:prob\] Suppose that $0 < q < 1$, $d \geq 3$ and $1 \leq r < \frac{d}{2}.$ The following identities hold: 1. \[T:prob:U|V\] $\displaystyle {\mathbf{P}}(U_{d,r} \mid V_{d,r}) = \frac{1}{\left(1+q^r\right) \left(1+q^{d-r}\right) \left(1+q^d\right)} \cdot \frac{1}{g_3\left(-q^r; q^d\right)}$, 2. \[T:prob:F|U\] $\displaystyle {\mathbf{P}}(F_r F_{d-r} F_{d} \mid U_{d,r}) = g_3\left(-q^r; q^d\right).$ Let ${\mathcal{V}}_k$ denote the event that all of the conditions in the definition of $U_{d,r}$ are met beginning from $E_{k}$, with no restrictions on $E_j$ for $j < k$. 
For example, ${\mathcal{V}}_r = U_{d,r}$, and ${\mathcal{V}}_{d+r} = V_{d,r}.$ The following three recurrences follow from the definition of ${\mathcal{V}}_j$ in terms of the gap conditions in $U_{d,r}$: $$\begin{aligned} {\mathbf{P}}({\mathcal{V}}_{kd}) &= p_{kd} {\overline}{p}_{kd+r} {\overline}{p}_{kd+d-r} {\overline}{p}_{(k+1)d} {\mathbf{P}}({\mathcal{V}}_{(k+1)d+r}) + {\overline}{p}_{kd} {\mathbf{P}}({\mathcal{V}}_{kd+r}), \notag \\ \label{E:Vkd3rec} {\mathbf{P}}({\mathcal{V}}_{kd+r}) &= p_{kd+r}{\overline}{p}_{kd+d-r} {\overline}{p}_{(k+1)d} {\mathbf{P}}({\mathcal{V}}_{(k+1)d+r}) + {\overline}{p}_{kd+r} {\mathbf{P}}({\mathcal{V}}_{kd+d-r}), \\ {\mathbf{P}}({\mathcal{V}}_{kd+d-r}) &= p_{kd+d-r} {\overline}{p}_{(k+1)d} {\overline}{p}_{(k+1)d+r} {\mathbf{P}}({\mathcal{V}}_{(k+1)d+d-r}) + {\overline}{p}_{kd+d-r} {\mathbf{P}}({\mathcal{V}}_{(k+1)d}). \notag\end{aligned}$$ We now combine the three recurrences in into one. Note that the first recurrence already expresses ${\mathbf{P}}({\mathcal{V}}_{kd})$ in terms of ${\mathbf{P}}({\mathcal{V}}_{jd+r})$ for various $j$, and we can use the second recurrence to do the same for ${\mathbf{P}}({\mathcal{V}}_{kd+d-r})$, finding $${\mathbf{P}}({\mathcal{V}}_{kd+d-r}) = \frac{1}{1-p_{kd+r}} \bigg({\mathbf{P}}({\mathcal{V}}_{kd+r}) - p_{kd+r}(1-p_{kd+d-r})(1-p_{(k+1)d}){\mathbf{P}}({\mathcal{V}}_{(k+1)d+r})\bigg).$$ Plugging in this formula and the first line of into the third line then gives an identity involving only ${\mathcal{V}}_{kd+r}, {\mathcal{V}}_{(k+1)d+r},$ and ${\mathcal{V}}_{(k+2)d+r}$. 
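These manipulations are easy to sanity-check numerically. The illustrative sketch below (the values of $q$, $d$, $r$ and the truncation depth $K$ are my own choices) runs the three coupled recurrences backwards from the boundary values $\mathbf{P}(\mathcal{V}_j) \approx 1$ for large $j$ (the conditions become vacuous in the limit), and then verifies the displayed elimination of $\mathbf{P}(\mathcal{V}_{kd+d-r})$ in terms of $\mathbf{P}(\mathcal{V}_{kd+r})$ and $\mathbf{P}(\mathcal{V}_{(k+1)d+r})$.

```python
def V_probs(q, d, r, K):
    """P(V_j) for j = kd, kd+r, kd+d-r by backward recursion, truncated at
    depth K, using the boundary behaviour P(V_j) -> 1 as j -> infinity."""
    p = lambda j: q**j / (1 + q**j)    # P(E_j)
    pb = lambda j: 1 / (1 + q**j)      # P(F_j) = 1 - P(E_j)
    V = {(K+1)*d: 1.0, (K+1)*d + r: 1.0, (K+1)*d + d - r: 1.0}
    for k in range(K, -1, -1):
        # the three recurrences, evaluated in dependency order
        V[k*d + d - r] = (p(k*d + d - r) * pb((k+1)*d) * pb((k+1)*d + r)
                          * V[(k+1)*d + d - r]
                          + pb(k*d + d - r) * V[(k+1)*d])
        V[k*d + r] = (p(k*d + r) * pb(k*d + d - r) * pb((k+1)*d)
                      * V[(k+1)*d + r]
                      + pb(k*d + r) * V[k*d + d - r])
        V[k*d] = (p(k*d) * pb(k*d + r) * pb(k*d + d - r) * pb((k+1)*d)
                  * V[(k+1)*d + r]
                  + pb(k*d) * V[k*d + r])
    return V

q, d, r, K = 0.5, 3, 1, 60
V = V_probs(q, d, r, K)
p = lambda j: q**j / (1 + q**j)

# elimination identity: P(V_{kd+d-r}) through P(V_{kd+r}), P(V_{(k+1)d+r})
for k in range(0, 20):
    lhs = V[k*d + d - r]
    rhs = (V[k*d + r] - p(k*d + r) * (1 - p(k*d + d - r))
           * (1 - p((k+1)*d)) * V[(k+1)*d + r]) / (1 - p(k*d + r))
    assert abs(lhs - rhs) < 1e-12
```

Note that $\mathbf{P}(U_{d,r}) = \mathbf{P}(\mathcal{V}_r)$ and $\mathbf{P}(V_{d,r}) = \mathbf{P}(\mathcal{V}_{d+r})$ drop out of the same computation.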
The resulting expression simplifies to the following recurrence, where we write ${\mathcal{U}}_j := {\mathcal{V}}_{jd+r}$ to save space: $$\begin{aligned} \label{E:UkRec} {\mathbf{P}}({\mathcal{U}}_k) & = \Big(p_{kd+r} {\overline}{p}_{kd+d-r} {\overline}{p}_{(k+1)d} + {\overline}{p}_{kd+r}p_{kd+d-r}{\overline}{p}_{(k+1)d} + {\overline}{p}_{kd+r}{\overline}{p}_{kd+d-r}{\overline}{p}_{(k+1)d}\Big) {\mathbf{P}}({\mathcal{U}}_{k+1}) \\ & \qquad \quad + \Big({\overline}{p}_{kd+r} {\overline}{p}_{kd+d-r} p_{(k+1)d} {\overline}{p}_{(k+1)d+r} {\overline}{p}_{(k+1)d+d-r} {\overline}{p}_{(k+2)d} \notag \\ & \qquad \qquad \qquad - {\overline}{p}_{kd+r} p_{kd+d-r} {\overline}{p}_{(k+1)d} p_{(k+1)d+r} {\overline}{p}_{(k+1)d+d-r} {\overline}{p}_{(k+2)d}\Big) {\mathbf{P}}({\mathcal{U}}_{k+2}). \notag\end{aligned}$$ We note that this is analogous to Andrews’ proof of Theorem 1 in [@And68], where he derives three recurrences for the Schur-type partitions enumerated by $C_{3,1}(n)$ based on their smallest part. Furthermore, just as in our proof of Proposition \[P:frec\], one can alternatively show directly by conditioning on whether $E_{kd+r}, E_{kd+d-r}$ or $E_{(k+1)d}$ occur, and then subtracting off the disallowed sequence $F_{kd+r} E_{kd+d-r} F_{(k+1)d} E_{(k+1)d + r} F_{(k+1)d + d-r} F_{(k+2)d}$. In order to more thoroughly describe the relationship between the events ${\mathcal{V}}_j$ and Schur-type partition functions, we renormalize the generating function by defining $$\label{E:hdr} h_{d,r}(x) = h_{d,r}(x;q) := \frac{f_{d,r}(x)}{\left(-xq^r, -xq^{d-r}, -xq^d; q^d\right)_\infty}.$$ Proposition \[P:frec\] then becomes $$\begin{aligned} \label{E:hrec} h_{d,r}(x) = & \frac{1 + xq^r + xq^{d-r}}{\left(1+xq^r\right) \left(1+xq^{d-r}\right) \left(1+xq^d\right)} h_{d,r}\left(xq^d\right) \\ & + \frac{xq^d - x^2q^{2d}}{\left(1+xq^r\right) \left(1+xq^{d-r}\right) \left(1+xq^d\right) \left(1+xq^{d+r}\right) \left(1+xq^{2d-r}\right) \left(1+xq^{2d}\right)}h_{d,r}\left(xq^{2d}\right).
\notag\end{aligned}$$ If we now define ${\mathcal{H}}_k = {\mathcal{H}}_k(q) := h_{d,r}(q^{kd})$ and recall , then implies that the recurrence holds with ${\mathcal{H}}_k$ in place of ${\mathbf{P}}({\mathcal{U}}_k).$ We observe that as $k \to \infty$, we have the limit ${\mathcal{H}}_k \to 1$, because $h_{d,r}(x) \to 1$ as $x \to 0$. Similarly, we also have ${\mathbf{P}}({\mathcal{U}}_k) \to 1$ since there are no conditions on any $E_j$ in the limit. This boundary condition guarantees that the recurrence has a unique solution (cf. the theory of $q$-difference equations [@And68Q]), hence $$\label{E:U=h} {\mathbf{P}}({\mathcal{U}}_k) = {\mathcal{H}}_k(q) = h_{d,r}\left(q^{kd}\right).$$ We can now complete the proof of the theorem. For part \[T:prob:U|V\], we calculate $$\label{E:U|V} {\mathbf{P}}(U_{d,r} \mid V_{d,r}) = \frac{{\mathbf{P}}(U_{d,r})}{{\mathbf{P}}(V_{d,r})} = \frac{{\mathbf{P}}({\mathcal{U}}_0)}{{\mathbf{P}}({\mathcal{U}}_1)} = \frac{f_{d, r}(1)}{\left(1+q^r\right)\left(1+q^{d-r}\right)\left(1+q^d\right) f_{d, r}\left(q^d\right)},$$ where the last equality is due to . The theorem statement then follows from , and , which together imply that $f_{d,r}(q^d) = f_{d,r}(1) \cdot g_3(-q^r; q^d).$ For part \[T:prob:F|U\], we similarly have $${\mathbf{P}}(F_r F_{d-r} F_d \mid U_{d,r}) = \frac{{\mathbf{P}}(F_r F_{d-r} F_d \cap U_{d,r})}{{\mathbf{P}}(U_{d,r})} = \frac{{\mathbf{P}}(F_r F_{d-r} F_d) {\mathbf{P}}(V_{d,r})}{{\mathbf{P}}(U_{d,r})} = g_3\left(-q^r; q^d\right),$$ where the final equality follows from and the inverse of . Just as we worked directly with the combinatorics of $q$-difference equations in order to prove Proposition \[P:fqseries\] (thereby providing short new proofs of and ), our use of probability arguments above could also be adapted to give new proofs of the results relating generating functions and probability found in Section 4 of [@HLR04] and Section 6 of [@BMM13] (particularly (6.1)). [99]{} H.
Alder, [*The nonexistence of certain identities in the theory of partitions and compositions*]{}, Bull. Amer. Math. Soc. [**54**]{} (1948), 712–722. K. Alladi and B. Gordon, [*Generalizations of Schur’s partition theorem*]{}, Manuscripta Math. [**79**]{} (1993), 113–126. H. Alder, [*Proof of Andrews’ conjecture on partition identities*]{}, Proc. Amer. Math. Soc. [**22**]{} (1969), 688–689. C. Alfes, M. Jameson, and R. Lemke Oliver, [*Proof of the Alder-Andrews Conjecture*]{}, Proc. Amer. Math. Soc. [**139**]{} (2011), 63–78. G. Andrews, [*An analytic proof of the Rogers-Ramanujan-Gordon identities*]{}, Amer. J. Math. [**88**]{} (1966), 844–846. G. Andrews, [*On partition functions related to Schur’s second partition theorem*]{}, Proc. Amer. Math. Soc. [**19**]{} (1968), 441–444. G. Andrews, [*On $q$-difference equations for certain well-poised basic hypergeometric series*]{}, Q. J. Math. [**19**]{} (1968), 433–447. G. Andrews, [*A general theorem on partitions with difference conditions*]{}, Amer. J. Math. [**91**]{} (1969), 18–24. G. Andrews, [*On a partition problem of H. L. Alder,*]{} Pacific J. Math. [**36**]{} (1971), 279–284. G. Andrews, [*Problems and prospects for basic hypergeometric functions*]{}, Theory and application of special functions, pp. 191–224. Math. Res. Center, Univ. Wisconsin, Publ. No. 35, Academic Press, New York, 1975. G. Andrews, [*The hard-hexagon model and Rogers-Ramanujan type identities*]{}, Proc. Nat. Acad. of Sci. [**78**]{} (1981), 5290–5292. G. Andrews, *The theory of partitions*, Cambridge University Press, Cambridge, 1998. G. Andrews, *Partitions, Durfee symbols, and the Atkin-Garvan moments of ranks*, Invent. Math. [**169**]{} (2007), 37–73. G. Andrews, R. Askey, and R. Roy, *Special functions*, Encyclopedia of Mathematics and its Applications [**71**]{}, Cambridge University Press, Cambridge, 1999. G. Andrews, H. Eriksson, F. Petrov, and D. Romik, [*Integrals, partitions and MacMahon’s theorem*]{}, J. Combin. Theory Ser. 
A [**114**]{} (2007), 545–554. G. Andrews, R. Rhoades and S. Zwegers, [*Modularity of the concave composition generating function*]{}, to appear in Alg. and Number Thy. T. Apostol, *Modular Functions and Dirichlet Series in Number Theory Series: Graduate Texts in Mathematics, Vol. 41*, 2nd ed., 1990. D. Bressoud, [*A combinatorial proof of Schur’s 1926 partition theorem*]{}, Proc. Amer. Math. Soc. [**79**]{} (1980), 338–340. J. Bruinier and J. Funke, [*On two geometric theta lifts*]{}, Duke Math. J. [**125**]{} (2004), 45–90. K. Bringmann and K. Mahlburg, *An extension of the Hardy-Ramanujan Circle Method and applications to partitions without sequences*, Amer. Journal of Math [**133**]{} (2011), 1151–1178. K. Bringmann and K. Mahlburg, [*Asymptotic inequalities for positive crank and rank moments,*]{} to appear in Trans. Amer. Math. Soc. K. Bringmann, K. Mahlburg, and A. Mellit, [*Convolution Bootstrap Percolation Models, Markov-type Stochastic Processes, and Mock Theta Functions*]{}, Int. Math. Res. Not. (2013), Vol. 2013, 971–1013. K. Bringmann and J. Manschot, [*Asymptotic formulas for coefficients of inverse theta functions*]{}, preprint, `arXiv:1304.7208`. K. Bringmann and K. Ono, [*Some characters of Kac and Wakimoto and nonholomorphic modular functions*]{}, Math. Ann. [**345**]{} (2009), 547–558. M. Cheng, J. Duncan, and J. Harvey, [*Umbral moonshine*]{}, preprint, `arXiv:1204.2779`. A. Dabholkar, S. Murthy, and D. Zagier, [*Quantum black holes, wall crossing, and mock modular forms*]{}, preprint, `arXiv:1208.4074 [hep-th]`. M. Eichler, and D. Zagier, [*The theory of Jacobi forms*]{}, Progress in Math. 55, Birkhäuser Boston, MA, 1985. G. Gasper and M. Rahman, [*Basic hypergeometric series*]{}, Encycl. of Math. and Applications [**35**]{}, Cambridge University Press, Cambridge, 1990. W. Gleissberg, [*Über einen Satz von Herrn I. Schur*]{}, Math. Z. [**28**]{} (1928), 372–382. B. 
Gordon, [*A combinatorial generalization of the Rogers-Ramanujan identities*]{}, Amer. J. Math. [**83**]{} (1961), 393–399. B. Gordon, [*Some continued fractions of the Rogers-Ramanujan type*]{}, Duke Math. J. [**32**]{} (1965), 741–748. B. Gordon and R. McIntosh, [*A survey of the classical mock theta functions*]{}, Partitions, $q$-series, and modular forms, Dev. Math. [**23**]{}, Springer, New York, 2012, 95–244. D. Hickerson, [*On the seventh order mock theta functions*]{}, Invent. Math. [**94**]{} (1988), 661–677. A. Holroyd, T. Liggett, and D. Romik, [*Integrals, Partitions, and Cellular Automata*]{}, Trans. Amer. Math. Soc. [**356**]{} (2004), 3349–3368. S. Kang, [*Mock Jacobi forms in basic hypergeometric series*]{}, Compos. Math. [**145**]{} (2009), 553–565. N. Koblitz, *Introduction to elliptic curves and modular forms*, Graduate Texts in Mathematics [**97**]{}, Springer-Verlag, New York, 1984. J. Lepowsky and R. Wilson, [*A new family of algebras underlying the Rogers-Ramanujan identities and generalizations*]{}, Proc. Nat. Acad. Sci. [**78**]{} (1981), 7254–7258. H. Rademacher, *On the Expansion of the Partition Function in a Series*, Ann. of Math. [**44**]{} (1943), 416–422. L. Rogers and S. Ramanujan, [*Proof of certain identities in combinatory analysis*]{}, Math. Proc. Cambridge Philos. Soc. [**19**]{} (1919), 211–216. I. Schur, [*Zur additiven Zahlentheorie*]{}, Sitzungsber. Preuss. Akad. Wiss. Phys.-Math. Kl., 1926. J. Stembridge, [*Hall-Littlewood functions, plane partitions, and the Rogers-Ramanujan identities*]{}, Trans. Amer. Math. Soc. [**319**]{} (1990), 469–498. G. Watson, [*The final problem: An account of the mock theta functions*]{}, J. Lond. Math. Soc. [**11**]{} (1936), 55–80. E. Wright, *Asymptotic partition formulae II. Weighted partitions*, Proc. Lond. Math. Soc. (2) [**36**]{} (1933), 117–141. E. Wright, *Stacks. II,* Q. J. Math. Ser. (2) [**22**]{} (1971), 107–116. A. 
Yee, [*Partitions with difference conditions and Alder’s conjecture*]{}, Proc. Natl. Acad. Sci. [**101**]{} (2004), 16417–16418. A. Yee, [*Alder’s conjecture,*]{} J. Reine Angew. Math. [**616**]{} (2008), 67–88. D. Zagier, [*The dilogarithm function*]{}, Frontiers in Number Theory, Physics and Geometry II, Springer-Verlag, Berlin-Heidelberg-New York (2006), pp. 3–65. D. Zagier, *Ramanujan’s mock theta functions and their applications \[d’aprés Zwegers and Bringmann-Ono\]* Astérisque [**326**]{} (2009), Soc. Math. de France, 143–164. S. Zwegers, *Mock theta functions*, Ph.D. Thesis, Universiteit Utrecht, 2002. [^1]: The research of the first author was supported by the Alfried Krupp Prize for Young University Teachers of the Krupp Foundation and an individual research grant from the ERC. The second author was supported by NSF Grant DMS-1201435.
--- abstract: 'We trace progress and thinking about the Strominger-Yau-Zaslow conjecture since its introduction in 1996. In particular, we aim to explain how the conjecture led to the algebro-geometric program developed by myself and Siebert, whose objective is to explain mirror symmetry by studying degenerations of Calabi-Yau manifolds. We end by outlining how tropical curves arise in the mirror symmetry story.' address: 'UCSD Mathematics, 9500 Gilman Drive, La Jolla, CA 92093-0112, USA' author: - Mark Gross title: ' The Strominger-Yau-Zaslow conjecture: From torus fibrations to degenerations.' --- [^1] Introduction. {#introduction. .unnumbered} ============= Up to the summer of 1996, there had been a number of spectacular successes in mirror symmetry. After the pioneering initial work of Candelas, de la Ossa, Green and Parkes [@COGP] which calculated the instanton predictions for the quintic three-fold, Batyrev [@Bat] gave a powerful mirror symmetry construction for Calabi-Yau hypersurfaces in toric varieties, later generalized by Batyrev and Borisov [@BB] to complete intersections in toric varieties. Kontsevich [@Kstable] introduced his notion of stable maps of curves and studied their moduli, setting the stage for a new flowering of enumerative geometry. Eventually, this led to the mathematical calculation of Gromov-Witten invariants for the quintic in varying forms, [@Givental; @LLY; @Bert; @Gath]. In between, a great deal of the structure of mirror symmetry was elucidated by many researchers in both string theory and algebraic geometry. On the other hand, at that time I had been primarily interested in the geometry of Calabi-Yau manifolds.
Many of the results about mirror symmetry seemed to rely primarily on information about the ambient toric varieties; in particular, Givental’s work [@Givental] and succeeding work by Lian, Liu and Yau [@LLY], Bertram [@Bert] and Gathmann [@Gath] always performed calculations on the moduli space of stable maps into ${\mathbb{P}}^4$ rather than the quintic, and in the Batyrev-Borisov type constructions, while there was a clear combinatorial relationship between the ambient toric varieties, there was no apparent geometric relationship between the Calabi-Yaus themselves. As a result, I tended to avoid thinking about mirror symmetry precisely because of this lack of geometric understanding. This changed dramatically in June of 1996, when Strominger, Yau and Zaslow [@SYZ] released their paper “Mirror Symmetry is $T$-duality.” They made a remarkable proposal, based on recent ideas in string theory, that for the first time gave a geometric interpretation for mirror symmetry. Let me summarize, very roughly, the physical argument here. Developments in string theory in the mid-1990s had introduced the notion of *Dirichlet branes*, or $D$-branes. These are submanifolds of space-time, with some additional data, which should serve as a boundary condition for open strings, i.e. we allow open strings to propagate with their endpoints constrained to lie on a $D$-brane. Remembering that space-time, according to string theory, looks like ${\mathbb{R}}^{1,3}\times X$, where ${\mathbb{R}}^{1,3}$ is ordinary space-time and $X$ is a Calabi-Yau three-fold, we can split a $D$-brane into a product of a submanifold of ${\mathbb{R}}^{1,3}$ and one on $X$. It turned out, simplifying a great deal, that there were two particular types of submanifolds on $X$ of interest: *holomorphic* $D$-branes, i.e. 
holomorphic submanifolds with a holomorphic line bundle, and *special Lagrangian* $D$-branes, which are *special Lagrangian submanifolds* with flat $U(1)$ bundle: Let $X$ be an $n$-dimensional Calabi-Yau manifold with $\omega$ the Kähler form of a Ricci-flat metric on $X$ and $\Omega$ a nowhere vanishing holomorphic $n$-form. Then a submanifold $M\subseteq X$ is *special Lagrangian* if it is Lagrangian, i.e.  $\dim_{{\mathbb{R}}} M=\dim_{{\mathbb{C}}} X$ and $\omega|_M=0$, and in addition ${\operatorname{Im}}\Omega|_M=0$. The origins of mirror symmetry in physics suggest that if $X$ and $\check X$ are a mirror pair of Calabi-Yau manifolds, then string theory on a compactification of space-time using $X$ should be the same as that using $\check X$, but with certain data interchanged. In the case of $D$-branes, the suggestion is that the moduli space of holomorphic $D$-branes on $X$ should be isomorphic to the moduli space of special Lagrangian $D$-branes on $\check X$. Now $X$ itself is the moduli space of points on $X$. So each point on $X$ should correspond to a pair $(M,\nabla)$, where $M\subseteq\check X$ is a special Lagrangian submanifold and $\nabla$ is a flat connection on $M$. A theorem of McLean [@McLean] tells us that the tangent space to the moduli space of special Lagrangian deformations of a special Lagrangian submanifold $M\subseteq \check X$ is $H^1(M,{\mathbb{R}})$. Of course, the moduli space of flat $U(1)$-connections modulo gauge equivalence on $M$ is the torus $H^1(M,{\mathbb{R}})/H^1(M,{\mathbb{Z}})$. In order for this moduli space to be of the correct dimension, we need $\dim H^1(M,{\mathbb{R}})=n$, the complex dimension of $X$. This suggests that $X$ consists of a family of tori which are dual to a family of special Lagrangian tori on $\check X$. An elaboration of this argument yields the following conjecture: *The Strominger-Yau-Zaslow conjecture*. 
If $X$ and $\check X$ are a mirror pair of Calabi-Yau $n$-folds, then there exist fibrations $f:X\rightarrow B$ and $\check f:\check X\rightarrow B$ whose fibres are special Lagrangian, with general fibre an $n$-torus. Furthermore, these fibrations are dual, in the sense that canonically $X_b=H^1(\check X_b,{\mathbb{R}}/{\mathbb{Z}})$ and $\check X_b=H^1(X_b,{\mathbb{R}}/{\mathbb{Z}})$ whenever $X_b$ and $\check X_b$ are non-singular tori. I will clarify this statement as we review the work of the past ten years; however, as I have stated this conjecture, it is likely to be false. On the other hand, there are weaker versions of the conjecture which probably are true. Even better, these weaker statements are probably within reach of modern-day technology (with a lot of hard work). Nevertheless, there has been a lot of good progress on precise versions of the above conjecture at the topological and symplectic level. In addition, the conjecture has been successful at explaining many features of mirror symmetry, some of which are still heuristic and some of which are rigorous. It is my belief that a final satisfactory understanding of mirror symmetry will flow from the SYZ conjecture, even if results do not take the form initially suggested by it. My main goal here is to explain the journey taken over the last ten years. I want to focus on explaining the evolution and development of the ideas, rather than focus on precise statements. Except in the first few sections, I will give few precise statements. In those first sections, I will clarify the above statement of the conjecture, and show how it gives a satisfactory explanation of mirror symmetry in the so-called semi-flat case, i.e. the case when the metric along the special Lagrangian fibres is flat. This leads naturally to a discussion of affine manifolds, metrics on them, and the Legendre transform. These now appear to be the key structures underlying mirror symmetry.
We next take a look at the case when singular fibres appear. In this case we need to abandon the precise form of duality we developed in the semi-flat case and restrict our attention to topological duality. In the realm of purely topological duality, the SYZ conjecture has been entirely successful at explaining topological features of mirror symmetry for a large range of Calabi-Yau manifolds, including those produced by the Batyrev-Borisov construction for complete intersections in toric varieties. Moving on, we take a look at Dominic Joyce’s arguments demonstrating the problems with the strong form of the SYZ conjecture stated above. This forces us to recast the SYZ conjecture as a limiting statement. Mirror symmetry is always about the behaviour of Calabi-Yau manifolds near maximally unipotent degenerations. A limiting form of the SYZ conjecture suggests that one can find special Lagrangian tori on Calabi-Yau manifolds near a maximally unipotent degeneration, and as we approach the limit point in complex moduli space, we expect to see a larger portion of the Calabi-Yau manifold filled out by special Lagrangian tori. Unlike the original SYZ conjecture, though still difficult, this one looks likely to be accessible by current techniques. This form of the conjecture then motivates a new round of questions. In this limiting picture, we expect the base $B$ of the hypothetical special Lagrangian fibration to be the so-called Gromov-Hausdorff limit of a sequence of Calabi-Yau manifolds approaching the maximally unipotent degeneration. Gromov-Hausdorff convergence is a metric space concept, while maximally unipotent degeneration is an algebro-geometric, Hodge-theoretic concept. How do these two concepts relate? In the summer of 2000, Kontsevich suggested that the Gromov-Hausdorff limit will be, roughly, the dual intersection complex of the algebro-geometric degeneration, at least on a topological level. We explore this idea in §6. 
On the other hand, how does this help us with mirror symmetry? Parallel to these developments on limiting forms of the SYZ conjecture, my coauthor Bernd Siebert had been studying degenerations of Calabi-Yau manifolds using logarithmic geometry with Stefan Schröer. Siebert noticed that mirror symmetry seemed to coincide with a combinatorial exchange of logarithmic data on the one side and polarizations on the other. Together, we realised that this approach to mirror symmetry meshed well with the limiting picture predicted by SYZ. Synthesizing these two approaches, we discovered an algebro-geometric version of the SYZ approach, which I will describe here. The basic idea is to forget about special Lagrangian fibrations, and only keep track of the base of the fibration, which is an affine manifold. We show how polyhedral decompositions of affine manifolds give rise to degenerate Calabi-Yau varieties, and conversely how certain sorts of degenerations of Calabi-Yau varieties, which we call *toric degenerations*, give rise to affine manifolds as their dual intersection complex. Mirror symmetry is again explained by a discrete version of the Legendre transform much as in the semi-flat case. We end with a discussion of the connection of tropical curves with this approach. The use of tropical curves in curve counting in two-dimensional toric varieties has been pioneered in work of Mikhalkin [@Mik]; Nishinou and Siebert [@NS] generalized this work to higher dimensions using methods directly inspired by the approach I discuss here. On the other hand, tropical curves have not yet been used for counting curves in Calabi-Yau manifolds, so I will end the paper by discussing how tropical curves arise naturally in our picture. I would like to thank the organizers of the Seattle conference for running an excellent conference, and my coauthors on SYZ-related results, Pelham Wilson and Bernd Siebert; much of the work mentioned here came out of joint work with them.
First a topological observation. ================================ Before doing anything else, let’s ask a very basic question: why should dualizing torus fibrations interchange Hodge numbers of Calabi-Yau threefolds? If you haven’t seen this, it’s the first thing one should look at as it is particularly easy to see, if we make a few assumptions. Suppose we are given a pair of Calabi-Yau threefolds $X$ and $\check X$ with fibrations $f:X\rightarrow B$, $\check f:\check X\rightarrow B$ with the property that there is a dense open set $B_0\subseteq B$ such that $f_0:f^{-1}(B_0)\rightarrow B_0$ and $\check f_0:\check f^{-1}(B_0)\rightarrow B_0$ are torus fibre bundles. So all the singular fibres of $f$ and $\check f$ lie over $\Gamma:=B\setminus B_0$. (Note that unless $\chi(X)=0$, there must be some singular fibres.) Finally, assume $f_0$ and $\check f_0$ are dual torus fibrations, i.e. $f_0$ can be identified with the torus fibration $R^1\check f_{0*}({\mathbb{R}}/{\mathbb{Z}})\rightarrow B_0$ and $\check f_0$ can be identified with the torus fibration $R^1 f_{0*}({\mathbb{R}}/{\mathbb{Z}})\rightarrow B_0$. (This is a slight abuse of notation: by $R^1\check f_{0*}({\mathbb{R}}/{\mathbb{Z}})$ we really mean the torus bundle obtained by taking the vector bundle associated to the local system $R^1\check f_{0*}{\mathbb{R}}$ and dividing out by the family of lattices $R^1\check f_{0*}{\mathbb{Z}}$.) If $V/\Lambda$ is a single torus with $V$ an $n$-dimensional vector space and $\Lambda$ a lattice in $V$, then $H^p(V/\Lambda,{\mathbb{R}})\cong \bigwedge^p V^\vee$, while the dual torus, $V^\vee/\Lambda^\vee$, has $H^p(V^\vee/\Lambda^\vee, {\mathbb{R}})\cong \bigwedge^p V$. If we choose an isomorphism $\bigwedge^n V\cong {\mathbb{R}}$, then we get an isomorphism $H^p(V/\Lambda,{\mathbb{R}})\cong H^{n-p}(V^\vee/\Lambda^\vee,{\mathbb{R}})$.
Similarly, in the relative setting for $f_0$ and $\check f_0$, if we have an isomorphism $R^3f_{0*}{\mathbb{R}}\cong {\mathbb{R}}$, we obtain isomorphisms $$R^pf_{0*}{\mathbb{R}}\cong R^{3-p}\check f_{0*}{\mathbb{R}}.$$ We now make a simplifying assumption. Let $i:B_0\hookrightarrow B$ be the inclusion. We will say $f$ is *${\mathbb{R}}$-simple* if $$i_*R^pf_{0*}{\mathbb{R}}\cong R^pf_*{\mathbb{R}}$$ for all $p$. (We can in general replace ${\mathbb{R}}$ by any abelian group $G$, and then we say $f$ is $G$-simple.) Of course, not all torus fibrations are ${\mathbb{R}}$-simple, but it turns out that the most interesting ones which occur in the topological form of SYZ are. So let’s assume $f$ and $\check f$ are ${\mathbb{R}}$-simple. With this assumption, we obtain isomorphisms $$\label{dualityiso} R^pf_*{\mathbb{R}}\cong R^{3-p}\check f_*{\mathbb{R}}.$$ We can now use this to study the Leray spectral sequence for $f$ and $\check f$. Let’s make an additional assumption that $X$ and $\check X$ are simply connected. So in particular $B$ is simply connected. Let’s assume $B$ is a three-manifold. So we have the $E_2$ terms in the Leray spectral sequence for $f$: $$\begin{matrix} {\mathbb{R}}&0&0&{\mathbb{R}}\\ H^0(B,R^2f_*{\mathbb{R}})&H^1(B,R^2f_*{\mathbb{R}})&H^2(B,R^2f_*{\mathbb{R}})&H^3(B,R^2f_*{\mathbb{R}})\\ H^0(B,R^1f_*{\mathbb{R}})&H^1(B,R^1f_*{\mathbb{R}})&H^2(B,R^1f_*{\mathbb{R}})&H^3(B,R^1f_*{\mathbb{R}})\\ {\mathbb{R}}&0&0&{\mathbb{R}}\end{matrix}$$ Since $X$ is simply connected, $H^1(X,{\mathbb{R}})=H^5(X,{\mathbb{R}})=0$, from which we conclude that $H^0(B,R^1f_*{\mathbb{R}})=H^3(B,R^2f_*{\mathbb{R}})=0$. The same argument works for $\check f$, and then (\[dualityiso\]) gives $H^0(B,R^2f_*{\mathbb{R}})\cong H^0(B,R^1\check f_*{\mathbb{R}})=0$ and similarly $H^3(B,R^1f_*{\mathbb{R}})=0$. 
Finally, consider the possible non-zero maps for the spectral sequence: $$\xymatrix@C=30pt {{\mathbb{R}}\ar[rrd]^{d_1}&0&0&{\mathbb{R}}\\ 0&H^1(B,R^2f_*{\mathbb{R}})&H^2(B,R^2f_*{\mathbb{R}})&0\\ 0&H^1(B,R^1f_*{\mathbb{R}})\ar[rrd]^{d_2}&H^2(B,R^1f_*{\mathbb{R}})&0\\ {\mathbb{R}}&0&0&{\mathbb{R}}}$$ We need one further assumption, which is again natural given the duality relationship of $f_0$ and $\check f_0$: both $f$ and $\check f$ possess sections. (Actually, working over ${\mathbb{R}}$, we just need the existence of cohomology classes on $X$ and $\check X$ which evaluate to something non-zero on a fibre of $f$ or $\check f$.) A section intersects each fibre non-trivially, and hence gives a section of $R^3f_*{\mathbb{R}}$. Since such a section also represents a cohomology class on $X$, $d_1$ must be the zero map. Similarly, a fibre of $f$ cannot be homologically trivial because it intersects the section non-trivially, and thus the map $d_2$ must be zero. So the spectral sequence degenerates at $E_2$. In particular, we get ${\mathbb{R}}^{h^{1,1}}\cong H^2(X,{\mathbb{R}})\cong H^1(B,R^1f_*{\mathbb{R}})\cong H^1(B,R^2\check f_*{\mathbb{R}})$ and ${\mathbb{R}}^{h^{2,2}}\cong H^4(X,{\mathbb{R}})\cong H^2(B,R^2f_*{\mathbb{R}})\cong H^2(B,R^1\check f_*{\mathbb{R}})$, where $h^{p,q}$ are the Hodge numbers of $X$. Thus the third Betti number of $\check X$ is $2+h^{1,1}+h^{2,2}=2(1+h^{1,1})$, so we see $h^{1,1}(X)=h^{1,2}(\check X)$ and $h^{1,2}(X) =h^{1,1}(\check X)$. So modulo some assumptions which would of course eventually have to be justified, it is clear, at least in the three-dimensional case, why the Hodge numbers are interchanged by duality. More generally, in any dimension, one might hope that ${\mathbb{R}}$-simplicity implies $\dim_{{\mathbb{R}}} H^p(B,R^qf_*{\mathbb{R}})=h^{p,q}$, and then a more general exchange of Hodge numbers becomes clear. See Theorem \[hodgedecomp\] for a related result. 
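The Betti-number bookkeeping in this argument is simple enough to script. The sketch below uses the standard Hodge numbers of the quintic threefold as an illustrative input; everything else just encodes the degeneration of the spectral sequence established above, and checks that even and odd cohomology totals swap under the duality, as claimed.

```python
def betti_cy3(h11, h12):
    """Betti numbers b_0..b_6 of a simply connected Calabi-Yau threefold:
    b_2 = h^{1,1}, b_4 = h^{2,2} = h^{1,1}, b_3 = 2 + h^{1,2} + h^{2,1}."""
    return [1, 0, h11, 2 * (1 + h12), h11, 0, 1]

def mirror(hodge):
    """Mirror symmetry exchanges h^{1,1} and h^{1,2}."""
    h11, h12 = hodge
    return (h12, h11)

quintic = (1, 101)                    # (h^{1,1}, h^{1,2}) of the quintic
b = betti_cy3(*quintic)               # Betti numbers of X
bm = betti_cy3(*mirror(quintic))      # Betti numbers of the mirror

even = lambda bs: sum(bs[0::2])
odd = lambda bs: sum(bs[1::2])

# dualizing the fibration swaps even and odd cohomology
assert even(b) == odd(bm) == 4
assert odd(b) == even(bm) == 204
# third Betti number of the mirror is 2(1 + h^{1,1}(X))
assert bm[3] == 2 * (1 + quintic[0])
```

The same arithmetic works for any simply connected mirror pair; only the input Hodge numbers change.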
This argument can be refined over ${\mathbb{Z}}$ to make new predictions about the behaviour of *integral* cohomology under mirror symmetry. In [@SlagII], Theorem 3.10, it was shown, again in the three-dimensional case, that if $f$ and $\check f$ are ${\mathbb{Z}}$-simple and ${\mathbb{Q}}/{\mathbb{Z}}$-simple, $f$ and $\check f$ have sections, and $H^1(X,{\mathbb{Z}})=0$, then $$\begin{aligned} H^{even}(X,{\mathbb{Z}}[1/2])&\cong&H^{odd}(\check X,{\mathbb{Z}}[1/2])\\ H^{odd}(X,{\mathbb{Z}}[1/2])&\cong&H^{even}(\check X,{\mathbb{Z}}[1/2]).\end{aligned}$$ There are problems in the argument with two-torsion, but it is likely the above isomorphisms hold over ${\mathbb{Z}}$. See [@BS] for evidence for this latter conjecture. Enough speculation. Now let’s get serious about the structure of special Lagrangian fibrations. Moduli of special Lagrangian submanifolds ========================================= The first step in really understanding the SYZ conjecture is to examine the structures which arise on the base of a special Lagrangian fibration. These structures arise from McLean’s theorem on the moduli space of special Lagrangian submanifolds [@McLean], and these structures and their relationships were explained by Hitchin in [@Hit]. We outline some of these ideas here. McLean’s theorem says that the moduli space of deformations of a compact special Lagrangian submanifold of a compact Calabi-Yau manifold $X$ is unobstructed, with tangent space at $M\subseteq X$ special Lagrangian canonically isomorphic to the space of harmonic $1$-forms on $M$. This isomorphism is seen explicitly as follows. Let $\nu\in\Gamma(M,N_{M/X})$ be a normal vector field to $M$ in $X$. Then $(\iota(\nu)\omega)|_M$ and $(\iota(\nu){\operatorname{Im}}\Omega)|_M$ are both seen to be well-defined forms on $M$: one needs to lift $\nu$ to a vector field but the choice is irrelevant because $\omega$ and ${\operatorname{Im}}\Omega$ restrict to zero on $M$. 
McLean shows that if $M$ is special Lagrangian then $$\iota(\nu){\operatorname{Im}}\Omega=-*\iota(\nu)\omega,$$ where $*$ denotes the Hodge star operator on $M$, and furthermore, $\nu$ corresponds to an infinitesimal deformation preserving the special Lagrangian condition if and only if $d(\iota(\nu)\omega) =d(\iota(\nu){\operatorname{Im}}\Omega)=0$. This gives the correspondence between harmonic $1$-forms and infinitesimal special Lagrangian deformations. Let $f:X\rightarrow B$ be a special Lagrangian fibration with torus fibres, and assume for now that all fibres of $f$ are non-singular. Then we obtain three structures on $B$: two affine structures and a metric, as we shall now see. \[affine\] Let $B$ be an $n$-dimensional manifold. An [*affine structure*]{} on $B$ is given by an atlas $\{(U_i,\psi_i)\}$ of coordinate charts $\psi_i:U_i\rightarrow {\mathbb{R}}^n$, whose transition functions $\psi_i\circ\psi_j^{-1}$ lie in ${\rm Aff}({\mathbb{R}}^n)$. We say the affine structure is *tropical* if the transition functions lie in ${\mathbb{R}}^n\rtimes GL({\mathbb{Z}}^n)$, i.e. have integral linear part. We say the affine structure is [*integral*]{} if the transition functions lie in ${\rm Aff}({\mathbb{Z}}^n)$. If an affine manifold $B$ carries a Riemannian metric $g$, then we say the metric is *affine Kähler* or *Hessian* if $g$ is locally given by $g_{ij}=\partial^2K/\partial y_i\partial y_j$ for some convex function $K$ and $y_1,\ldots,y_n$ affine coordinates. Then we obtain the three structures as follows: *Affine structure 1.* For a normal vector field $\nu$ to a fibre $X_b$ of $f$, $(\iota(\nu)\omega)|_{X_b}$ is a well-defined $1$-form on $X_b$, and we can compute its periods as follows. Let $U\subseteq B$ be a small open set, and suppose we have submanifolds $\gamma_1,\ldots,\gamma_n\subseteq f^{-1}(U)$ which are families of 1-cycles over $U$ and such that $\gamma_1\cap X_b,\ldots,\gamma_n\cap X_b$ form a basis for $H_1(X_b,{\mathbb{Z}})$ for each $b\in U$. 
Consider the $1$-forms $\omega_1,\ldots,\omega_n$ on $U$ defined by fibrewise integration: $$\omega_i(\nu)=\int_{X_b\cap\gamma_i} \iota(\nu)\omega,$$ for $\nu$ a tangent vector on $B$ at $b$, which we can lift to a normal vector field of $X_b$. We have $\omega_i=f_*(\omega|_{\gamma_i})$, and since $\omega$ is closed, so is $\omega_i$. Thus there are locally defined functions $y_1,\ldots,y_n$ on $U$ with $dy_i=\omega_i$. Furthermore, these functions are well-defined up to the choice of basis of $H_1(X_b,{\mathbb{Z}})$ and constants. Finally, they give well-defined coordinates, as follows from the fact that $\nu\mapsto \iota(\nu)\omega$ yields an isomorphism of ${{\mathcal{T}}}_{B,b}$ with $H^1(X_b,{\mathbb{R}})$ by McLean’s theorem. Thus $y_1,\ldots,y_n$ define local coordinates of a tropical affine structure on $B$. *Affine structure 2.* We can play the same trick with ${\operatorname{Im}}\Omega$: choose submanifolds $\Gamma_1,\ldots,\Gamma_n\subseteq f^{-1}(U)$ which are families of $(n-1)$-cycles over $U$ and such that $\Gamma_1\cap X_b,\ldots,\Gamma_n\cap X_b$ form a basis for $H_{n-1}(X_b, {\mathbb{Z}})$. We define $\lambda_i$ by $\lambda_i=-f_*({\operatorname{Im}}\Omega|_{\Gamma_i})$, or equivalently, $$\lambda_i(\nu)=-\int_{X_b\cap\Gamma_i} \iota(\nu){\operatorname{Im}}\Omega.$$ Again $\lambda_1,\ldots,\lambda_n$ are closed $1$-forms, with $\lambda_i=d\check y_i$ locally, and again $\check y_1,\ldots,\check y_n$ are affine coordinates for a tropical affine structure on $B$.
*The McLean metric.* The Hodge metric on $H^1(X_b,{\mathbb{R}})$ is given by $$g(\alpha,\beta)=\int_{X_b} \alpha\wedge *\beta$$ for $\alpha$, $\beta$ harmonic $1$-forms, and hence induces a metric on $B$, which can be written as $$g(\nu_1,\nu_2)=-\int_{X_b}\iota(\nu_1)\omega\wedge \iota(\nu_2){\operatorname{Im}}\Omega.$$ A crucial observation of Hitchin [@Hit] is that these structures are related by the Legendre transform: \[hessianmetric\] Let $y_1,\ldots,y_n$ be local affine coordinates on $B$ with respect to the affine structure induced by $\omega$. Then locally there is a function $K$ on $B$ such that $$g(\partial/\partial y_i,\partial/\partial y_j)=\partial^2 K/\partial y_i \partial y_j.$$ Furthermore, $\cy_i=\partial K/\partial y_i$ form a system of affine coordinates with respect to the affine structure induced by ${\operatorname{Im}}\Omega$, and if $$\check K(\cy_1,\ldots,\cy_n)=\sum \cy_i y_i-K(y_1,\ldots,y_n)$$ is the Legendre transform of $K$, then $$y_i=\partial \check K/\partial\cy_i$$ and $$\partial^2\check K/\partial y_i\partial y_j=g(\partial/\partial\cy_i, \partial/\partial\cy_j).$$ Take families $\gamma_1,\ldots,\gamma_n,\Gamma_1,\ldots,\Gamma_n$ as above over an open neighbourhood $U$ with the two bases being Poincaré dual, i.e. $(\gamma_i\cap X_b)\cdot(\Gamma_j\cap X_b)= \delta_{ij}$ for $b\in U$. Let $\gamma_1^*,\ldots,\gamma_n^*$ and $\Gamma_1^*,\ldots,\Gamma_n^*$ be the dual bases for $\Gamma(U,R^1f_*{\mathbb{Z}})$ and $\Gamma(U,R^{n-1}f_*{\mathbb{Z}})$ respectively. From the choice of $\gamma_i$’s, we get local coordinates $y_1,\ldots,y_n$ with $dy_i=\omega_i$, so in particular $$\delta_{ij}=\omega_i(\partial/\partial y_j)=\int_{\gamma_i\cap X_b} \iota(\partial/\partial y_j)\omega,$$ so $\iota(\partial/\partial y_j)\omega$ defines the cohomology class $\gamma_j^*$ in $H^1(X_b,{\mathbb{R}})$. 
Similarly, let $$g_{ij}=-\int_{\Gamma_i\cap X_b}\iota(\partial/\partial y_j){\operatorname{Im}}\Omega;$$ then $-\iota(\partial/\partial y_j){\operatorname{Im}}\Omega$ defines the cohomology class $\sum_i g_{ij}\Gamma_i^*$ in $H^{n-1}(X_b,{\mathbb{R}})$, and $\lambda_i=\sum_j g_{ij}dy_j$. Thus $$\begin{aligned} g(\partial/\partial y_j,\partial/\partial y_k)&=& -\int_{X_b}\iota(\partial/\partial y_j)\omega \wedge \iota(\partial/\partial y_k){\operatorname{Im}}\Omega\\ &=&g_{jk}.\end{aligned}$$ On the other hand, let $\cy_1,\ldots,\cy_n$ be coordinates with $d\cy_i=\lambda_i$. Then $${\partial\cy_i/\partial y_j}=g_{ij}=g_{ji}={\partial\cy_j/ \partial y_i},$$ so $\sum\cy_i dy_i$ is a closed 1-form. Thus there exists locally a function $K$ such that $\partial K/\partial y_i=\cy_i$ and $\partial^2 K/\partial y_i\partial y_j=g(\partial/\partial y_i, \partial/\partial y_j)$. A simple calculation then confirms that $\partial\check K/\partial \cy_i =y_i$. On the other hand, $$\begin{aligned} g(\partial/\partial\cy_i,\partial/\partial\cy_j)&=& g\left(\sum_k {\partial y_k\over\partial\cy_i}{\partial\over \partial y_k},\sum_l {\partial y_l\over\partial\cy_j} {\partial\over\partial y_l}\right)\\ &=&\sum_{k,l}{\partial y_k\over\partial\cy_i}{\partial y_l\over\partial \cy_j} g(\partial/\partial y_k,\partial/\partial y_l)\\ &=&\sum_{k,l} {\partial y_k\over\partial\cy_i}{\partial y_l\over \partial\cy_j}{\partial\cy_k\over\partial y_l}\\ &=&{\partial y_j\over\partial\cy_i}={\partial^2\check K\over \partial\cy_i\partial\cy_j}.\end{aligned}$$ Thus we introduce the notion of *Legendre transform* of an affine manifold with a multi-valued convex function. \[multivaluedconvex\] Let $B$ be an affine manifold. A *multi-valued* function $K$ on $B$ is a collection of functions on an open cover $\{(U_i,K_i)\}$ such that on $U_i\cap U_j$, $K_i-K_j$ is affine linear. 
We say $K$ is *convex* if the Hessian $(\partial^2 K_i/\partial y_j\partial y_k)$ is positive definite for all $i$ in any (equivalently, every) affine coordinate system $y_1, \ldots,y_n$. Given a pair $(B,K)$ of affine manifold and convex multi-valued function, the *Legendre transform* of $(B,K)$ is a pair $(\check B, \check K)$ where $\check B$ is an affine structure on the underlying manifold of $B$ with coordinates given locally by $\check y_i=\partial K/\partial y_i$, and $\check K$ is defined by $$\check K_i(\check y_1,\ldots,\check y_n)=\sum \check y_j y_j -K_i(y_1,\ldots,y_n).$$ Check that $\check K$ is also convex, and that the Legendre transform of $(\check B,\check K)$ is $(B,K)$. Semi-flat mirror symmetry ========================= Now let’s forget about special Lagrangian fibrations for the moment. Instead, we see how the structures found on $B$ give a toy version of mirror symmetry. Let $B$ be a tropical affine manifold. 1. Define $\Lambda\subseteq{{\mathcal{T}}}_B$ to be the local system of lattices generated locally by $\partial/\partial y_1,\ldots,\partial/\partial y_n$, where $y_1,\ldots,y_n$ are local affine coordinates. This is well-defined because transition maps are in ${\mathbb{R}}^n\rtimes GL_n({\mathbb{Z}})$. Set $$X(B):={{\mathcal{T}}}_B/\Lambda;$$ this is a torus bundle over $B$. In addition, $X(B)$ carries a complex structure defined locally as follows. Let $U\subseteq B$ be an open set with affine coordinates $y_1,\ldots,y_n$, so ${{\mathcal{T}}}_U$ has coordinate functions $y_1,\ldots,y_n$, $x_1=dy_1,\ldots,x_n=dy_n$. Then $$q_j=e^{2\pi i(x_j+iy_j)}$$ gives a system of holomorphic coordinates on ${{\mathcal{T}}}_U/\Lambda|_U$, and the induced complex structure is independent of the choice of affine coordinates.
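As a quick check that $q_j$ really descends to the quotient, the following sketch (plain Python, with the numerical values chosen arbitrarily) verifies that $q_j=e^{2\pi i(x_j+iy_j)}$ is invariant under the integral translations $x_j\mapsto x_j+1$ generated by $\Lambda$, and that $|q_j|=e^{-2\pi y_j}$, so $q_j$ determines $(x_j \bmod 1, y_j)$:

```python
import cmath, math

# The semi-flat fibre coordinate q_j = exp(2*pi*i*(x_j + i*y_j)) on T_U.
def q(x, y):
    return cmath.exp(2j * cmath.pi * (x + 1j * y))

x, y = 0.3, 1.7   # arbitrary test point
# Invariance under the lattice Lambda: x -> x + 1 leaves q unchanged,
# so q is a well-defined function on T_U / Lambda|_U.
assert abs(q(x + 1, y) - q(x, y)) < 1e-12
# |q| = exp(-2*pi*y), so the base coordinate y is recovered from |q|.
assert abs(abs(q(x, y)) - math.exp(-2 * math.pi * y)) < 1e-12
```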
Later we will need a variant of this: for $\epsilon>0$, set $$X_{\epsilon}(B):={{\mathcal{T}}}_B/\epsilon\Lambda;$$ this has a complex structure with coordinates given by $$q_j=e^{2\pi i(x_j+iy_j)/\epsilon}.$$ (As we shall see later, the limit $\epsilon\rightarrow 0$ corresponds to a large complex structure limit.) 2. Define $\check\Lambda\subseteq{{\mathcal{T}}}^*_B$ to be the local system of lattices generated locally by $dy_1,\ldots,dy_n$, with $y_1,\ldots,y_n$ local affine coordinates. Set $$\check X(B):={{\mathcal{T}}}^*_B/\check\Lambda.$$ Of course ${{\mathcal{T}}}^*_B$ carries a canonical symplectic structure, and this symplectic structure descends to $\check X(B)$. We write $f:X(B)\rightarrow B$ and $\check f:\check X(B)\rightarrow B$ for these torus fibrations; these are clearly dual. Now suppose in addition we have a Hessian metric $g$ on $B$, with local potential function $K$. Then in fact both $X(B)$ and $\check X(B)$ become Kähler manifolds: $K\circ f$ is a (local) Kähler potential on $X(B)$, defining a Kähler form $\omega=2i\partial\bar\partial(K\circ f)$. This metric is Ricci-flat if and only if $K$ satisfies the real Monge-Ampère equation $$\det {\partial^2 K\over \partial y_i\partial y_j}=constant.$$ Working locally with affine coordinates $(y_i)$ and complex coordinates $z_j={1\over 2\pi i}\log q_j=x_j+i y_j$, we compute $\omega=2i\partial\bar\partial(K\circ f)={i\over 2} \sum {\partial^2 K\over \partial y_j\partial y_k} dz_j\wedge d\bar z_k$ which is clearly positive. Furthermore, if $\Omega=dz_1\wedge\cdots\wedge dz_n$, then $\omega^n$ is proportional to $\Omega\wedge\bar\Omega$ if and only if $\det (\partial^2 K/\partial y_j\partial y_k)$ is constant. We write this Kähler manifold as $X(B,K)$. 
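The simplest solutions of the real Monge-Ampère equation are quadratic potentials $K(y)={1\over 2}y^TAy$ with $A$ symmetric positive definite, for which $\det(\partial^2K/\partial y_i\partial y_j)=\det A$ is automatically constant. A minimal numerical sketch (the matrix $A$ and test point are arbitrary choices) checking the Legendre-transform duality in this case — $\check y=\partial K/\partial y$, the transform $\check K$ is again quadratic with Hessian $A^{-1}$, and $\partial\check K/\partial\check y$ recovers $y$:

```python
# Quadratic Monge-Ampere potential K(y) = (1/2) y^T A y, A symmetric
# positive definite; det(Hess K) = det A is constant, so the metric is
# Ricci-flat in the sense discussed above.

A = [[2.0, 1.0], [1.0, 2.0]]               # Hessian of K (constant)
detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[ A[1][1] / detA, -A[0][1] / detA],
        [-A[1][0] / detA,  A[0][0] / detA]]  # Hessian of the Legendre transform

def K(y):
    return 0.5 * sum(y[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

def grad_K(y):
    return [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]

y = [0.7, -0.4]                            # arbitrary test point
y_check = grad_K(y)                        # dual coordinates: y-check_i = dK/dy_i

# Legendre transform K-check(y-check) = <y-check, y> - K(y); for quadratic K
# this equals (1/2) y-check^T A^{-1} y-check.
K_check = sum(y_check[i] * y[i] for i in range(2)) - K(y)
K_check_direct = 0.5 * sum(y_check[i] * Ainv[i][j] * y_check[j]
                           for i in range(2) for j in range(2))
assert abs(K_check - K_check_direct) < 1e-12

# dK-check/dy-check recovers y, i.e. the transform is an involution.
y_back = [sum(Ainv[i][j] * y_check[j] for j in range(2)) for i in range(2)]
assert all(abs(y_back[i] - y[i]) < 1e-12 for i in range(2))
```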
Dually we have: In local canonical coordinates $y_i,\check x_i$ on ${{\mathcal{T}}}^*_B$, the functions $z_j=\check x_j+i\partial K/\partial y_j$ on ${{\mathcal{T}}}^*_B$ induce a well-defined complex structure on $\check X(B)$, with respect to which the canonical symplectic form $\omega$ is a Kähler form of a metric. Furthermore this metric is Ricci-flat if and only if $K$ satisfies the real Monge-Ampère equation $$\det {\partial^2 K\over \partial y_j\partial y_k}=constant.$$ It is easy to see that an affine linear change in the coordinates $y_j$ (and hence an appropriate change in the coordinates $\check x_j$) results in a linear change of the coordinates $z_j$, so they induce a well-defined complex structure invariant under $\check x_j\mapsto \check x_j+1$, and hence a complex structure on $\check X(B)$. Then one computes that $$\omega=\sum d\check x_j\wedge dy_j={i\over 2}\sum g^{jk} dz_j\wedge d\bar z_k$$ where $g_{jk}=\partial^2 K/\partial y_j\partial y_k$. Then the metric is Ricci-flat if and only if $\det(g^{jk})=constant$, if and only if $\det(g_{jk})=constant$. As before, we call this Kähler manifold $\check X(B,K)$. This motivates the definition: An affine manifold with metric of Hessian form is a *Monge-Ampère manifold* if the local potential function $K$ satisfies the Monge-Ampère equation $\det(\partial^2K/\partial y_i\partial y_j)=constant$. Monge-Ampère manifolds were first studied by Cheng and Yau in [@ChengYau]. \[caniso\] Show that the identification of ${{\mathcal{T}}}_B$ and ${{\mathcal{T}}}^*_B$ given by a Hessian metric induces a canonical isomorphism $X(B,K)\cong\check X(\check B,\check K)$ of Kähler manifolds, where $(\check B,\check K)$ is the Legendre transform of $(B,K)$. Finally, we note that a $B$-field can be introduced into this picture.
To keep life relatively simple (so as to avoid having to pass to generalized complex structures [@HitGen], [@Gual], [@Oren]), we view the $B$-field as an element ${\bf B}\in H^1(B,\Lambda_{{\mathbb{R}}}/\Lambda)$, where $\Lambda_{{\mathbb{R}}}=\Lambda\otimes_{{\mathbb{Z}}} {\mathbb{R}}$. Noting that a section of $\Lambda_{{\mathbb{R}}}/\Lambda$ over an open set $U$ can be viewed as a section of ${{\mathcal{T}}}_U/\Lambda|_U$, such a section acts on ${{\mathcal{T}}}_U/\Lambda|_U$ via translation, and this action is in fact holomorphic with respect to the standard semi-flat complex structure. Thus a Čech 1-cocycle $(U_{ij},\beta_{ij})$ representing ${\bf B}$ allows us to reglue $X(B)$ via translations over the intersections $U_{ij}$. This gives a new complex manifold $X(B,{\bf B})$. If in addition there is a multi-valued potential function $K$ defining a metric, these translations preserve the metric and yield a Kähler manifold $X(B,{\bf B},K)$. Thus the full toy version of mirror symmetry is as follows. The data consists of an affine manifold $B$ with potential $K$ and $B$-fields ${\bf B} \in H^1(B,\Lambda_{{\mathbb{R}}}/\Lambda)$, $\check {\bf B}\in H^1(B, \check\Lambda_{{\mathbb{R}}}/\check\Lambda)$. Now it is not difficult to see, and you will have seen this already if you’ve done Exercise \[caniso\], that the local system $\check\Lambda$ defined using the affine structure on $B$ is the same as the local system $\Lambda$ defined using the affine structure on $\check B$. So we say the pair $$(X(B,{\bf B},K),\check{\bf B})$$ is mirror to $$(X(\check B,\check {\bf B},\check K),\bf B).$$ This provides a reasonably fulfilling picture of mirror symmetry in a simple context. Many more aspects of mirror symmetry can be worked out in this semi-flat context, see [@Leung]. However, ultimately this provides only limited insight into the general case.
The only compact Calabi-Yau manifolds with semi-flat Ricci-flat metric which arise in this way are complex tori (shown by Cheng and Yau in [@ChengYau]). To deal with more interesting cases, we need to allow singular fibres, and hence, singularities in the affine structure of $B$. Affine manifolds with singularities =================================== To deal with singular fibres, we define A *(tropical, integral) affine manifold with singularities* is a $(C^0)$ manifold $B$ with an open subset $B_0\subseteq B$ which carries a (tropical, integral) affine structure, and such that $\Gamma:=B \setminus B_0$ is a locally finite union of locally closed submanifolds of codimension $\ge 2$. By way of example, let’s explain how the Batyrev construction gives rise to a wide class of such manifolds. This construction is taken from [@GBB], where a more combinatorially complicated version is given for complete intersections; see [@HZ] and [@HZ3] for an alternative construction. Let $\Delta$ be a reflexive polytope in $M_{{\mathbb{R}}}=M\otimes_{{\mathbb{Z}}}{\mathbb{R}}$, where $M={\mathbb{Z}}^n$; let $N$ be the dual lattice, $\nabla\subseteq N_{{\mathbb{R}}}$ the dual polytope given by $$\nabla:=\{n\in N_{{\mathbb{R}}}|\hbox{$\langle m,n\rangle\ge -1$ for all $m\in\Delta$} \}.$$ We assume $0\in\Delta$ is the unique interior lattice point of $\Delta$. Let $\check\Sigma$ be the normal fan to $\nabla$, consisting of cones over the faces of $\Delta$. Suppose we are given a star subdivision of $\Delta$, with all vertices being integral points, inducing a subdivision $\check\Sigma'$ of the fan $\check\Sigma$. In addition suppose that $$\check h:M_{{\mathbb{R}}}\rightarrow{\mathbb{R}}$$ is an (upper) strictly convex piecewise linear function on the fan $\check\Sigma'$. Also, let $$\check\varphi:M_{{\mathbb{R}}}\rightarrow{\mathbb{R}}$$ be the piecewise linear function representing the anti-canonical class of the toric variety ${\mathbb{P}}_{\nabla}$; i.e. 
$\check\varphi$ takes the value $1$ on the primitive generator of each one-dimensional cone of $\check\Sigma$. Finally, assume that $\check h$ is chosen so that $\check h'=\check h-\check\varphi$ is a (not necessarily strictly) convex function. Define, for any convex piecewise linear function $\check g$ on the fan $\check\Sigma'$, the Newton polytope of $\check g$, $$\nabla^{\check g}:=\{n\in N_{{\mathbb{R}}}|\hbox{$\langle m,n\rangle\ge -\check g(m)$ for all $m\in M_{{\mathbb{R}}}$}\}.$$ In particular, $$\nabla^{\check h}=\nabla^{\check h'}+\nabla^{\check\varphi}= \nabla^{\check h'}+\nabla,$$ where $+$ denotes Minkowski sum. Our goal will be to put an affine structure with singularities on $B:=\partial\nabla^{\check h}$. Our first method of doing this requires no choices. Let ${\mathscr{P}}$ be the set of proper faces of $\nabla^{\check h}$. Furthermore, let ${\operatorname{Bar}}({\mathscr{P}})$ denote the first barycentric subdivision of ${\mathscr{P}}$ and let $\Gamma\subseteq B$ be the union of all simplices of ${\operatorname{Bar}}({\mathscr{P}})$ not containing a vertex of ${\mathscr{P}}$ (a zero-dimensional cell) or intersecting the interior of a maximal cell of ${\mathscr{P}}$. If we then set $B_0:=B\setminus\Gamma$, we can define an affine structure on $B_0$ as follows. $B_0$ has an open cover $$\{W_{\sigma}|\hbox{$\sigma\in{\mathscr{P}}$ maximal}\}\cup \{W_v|\hbox{$v\in{\mathscr{P}}$ a vertex}\}$$ where $W_{\sigma}={\operatorname{Int}}(\sigma)$, the interior of $\sigma$, and $$W_v=\bigcup_{\tau\in{\operatorname{Bar}}({\mathscr{P}})\atop v\in\tau}{\operatorname{Int}}(\tau)$$ is the (open) star of $v$ in ${\operatorname{Bar}}({\mathscr{P}})$. We define an affine chart $$\psi_{\sigma}:W_{\sigma}\rightarrow{\mathbb{A}}^{n-1}\subseteq N_{{\mathbb{R}}}$$ given by the inclusion of $W_{\sigma}$ in ${\mathbb{A}}^{n-1}$, the affine hyperplane containing $\sigma$. 
Also, take $$\psi_v:W_v\rightarrow N_{{\mathbb{R}}}/{\mathbb{R}}v'$$ to be the projection, where $v$, being a vertex of $\nabla^{\check h}$, can be written uniquely as $v'+v''$ with $v'$ a vertex of $\nabla$ and $v''$ a vertex of $\nabla^{\check h'}$. One checks easily that for $v\in\sigma$, $\psi_{\sigma}\circ\psi_v^{-1}$ is affine linear with integral linear part (integrality follows from reflexivity of $\Delta$!) so $B$ is a tropical affine manifold with singularities. Furthermore, if $\check h$ was chosen to have integral slopes, then $B$ is integral. We often would like to refine this construction, to get a finer polyhedral decomposition ${\mathscr{P}}$ of $B$ and with it a somewhat more interesting discriminant locus $\Gamma$. One reason for doing so is that this construction is clearly not mirror symmetric, as it depends only on a star subdivision of $\Delta$ and not of $\nabla$. Furthermore, a maximal star subdivision of $\nabla$ corresponds to what Batyrev terms a MPCP (maximal projective crepant partial) resolution of ${\mathbb{P}}_{\Delta}$, and normally, we will wish to study hypersurfaces in a MPCP resolution of ${\mathbb{P}}_{\Delta}$ rather than in ${\mathbb{P}}_{\Delta}$ itself. To introduce this extra degree of flexibility, we need to make some choices, which is done as follows. First, choose a star subdivision of $\nabla$, with all vertices being integral points, inducing a refinement $\Sigma'$ of the fan $\Sigma$ which is the normal fan to $\Delta$. This induces a polyhedral subdivision of $\partial\nabla$, and we write the collection of cells of this subdivision as ${\mathscr{P}}_{\partial\nabla}$. 
Note that because $0\in \nabla$, we have $$\nabla^{\check h'}\subseteq \nabla^{\check h'}+\nabla=\nabla^{\check h}.$$ A subdivision ${\mathscr{P}}$ of $\partial\nabla^{\check h}$ is *good* with respect to ${\mathscr{P}}_{\partial\nabla}$ if it is induced by a subdivision ${\mathscr{P}}_{\nabla^{\check h}}$ of $\nabla^{\check h}$ satisfying the following three properties: 1. $\nabla^{\check h'}$ is a union of cells in ${\mathscr{P}}_{\nabla^{\check h}}$. 2. All vertices of ${\mathscr{P}}_{\nabla^{\check h}}$ are contained either in $\partial\nabla^{\check h}$ or in $\nabla^{\check h'}$. 3. Every cell $\sigma\in{\mathscr{P}}_{\nabla^{\check h}}$ with $\sigma\cap\partial\nabla^{\check h}\not=\emptyset$ and $\tau:=\sigma\cap \partial\nabla^{\check h'}\not=\emptyset$ can be written as $$\sigma=(C(\sigma')+\tau)\cap \nabla^{\check h},$$ with $\sigma'\in{\mathscr{P}}_{\partial\nabla}$ and $C(\sigma')$ the corresponding cone in $\Sigma'$. If $\check h$ has integral slopes and all vertices of ${\mathscr{P}}_{\nabla^{\check h}}$ are integral, then we say ${\mathscr{P}}$ is integral. The following picture shows what such a good subdivision may look like, in the case that $\nabla$ is the Newton polytope of ${\mathcal{O}}_{{\mathbb{P}}^2}(3)$: ![image](gooddecomp) Given a good decomposition ${\mathscr{P}}$ of $$B:=\partial\nabla^{\check h},$$ we once again obtain an affine structure with singularities on $B$, much as before, defining the discriminant locus $\Gamma\subseteq B$ in terms of the first barycentric subdivision ${\operatorname{Bar}}({\mathscr{P}})$ of this new polyhedral decomposition ${\mathscr{P}}$. 
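For the decomposition pictured above, $\nabla$ is (a translate of) the Newton polytope of ${\mathcal{O}}_{{\mathbb{P}}^2}(3)$. The following sketch checks reflexive duality directly, in coordinates where the unique interior lattice point of each polytope is the origin; the explicit vertex coordinates are a choice made for this sketch, not taken from the text:

```python
# Reflexive duality for the P^2 polygon: nabla = Newton polytope of O_{P^2}(3)
# translated so that its unique interior lattice point is the origin, and
# Delta = its polar dual.  (Vertex coordinates are this sketch's choice.)

nabla = [(-1, -1), (2, -1), (-1, 2)]
Delta = [(1, 0), (0, 1), (-1, -1)]

def dot(m, n):
    return sum(a * b for a, b in zip(m, n))

# Defining inequality of the dual polytope: <m, n> >= -1 on all vertex pairs,
# with the bound -1 achieved on every facet -- in both directions, which is
# exactly reflexivity.
assert all(dot(m, n) >= -1 for m in Delta for n in nabla)
assert all(min(dot(m, n) for m in Delta) == -1 for n in nabla)
assert all(min(dot(m, n) for n in nabla) == -1 for m in Delta)
# The origin is interior: every pairing with 0 is 0 > -1.
assert all(dot((0, 0), n) > -1 for n in nabla + Delta)
```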
Then as before $B_0:=B\setminus\Gamma$ has an open cover $$\{W_{\sigma}|\hbox{$\sigma\in{\mathscr{P}}$ maximal}\}\cup \{W_v|\hbox{$v\in{\mathscr{P}}$ a vertex}\}$$ where $W_{\sigma}={\operatorname{Int}}(\sigma)$, the interior of $\sigma$, and $$W_v=\bigcup_{\tau\in{\operatorname{Bar}}({\mathscr{P}})\atop v\in\tau}{\operatorname{Int}}(\tau)$$ is the (open) star of $v$ in ${\operatorname{Bar}}({\mathscr{P}})$. We define an affine chart $$\psi_{\sigma}:W_{\sigma}\rightarrow{\mathbb{A}}^{n-1}\subseteq N_{{\mathbb{R}}}$$ given by the inclusion of $W_{\sigma}$ in ${\mathbb{A}}^{n-1}$, the affine hyperplane containing $\sigma$. Also, take $\psi_v:W_v\rightarrow N_{{\mathbb{R}}}/{\mathbb{R}}v'$ to be the projection, where $v$ can be written uniquely as $v'+v''$ with $v'$ an integral point of $\nabla$ and $v''\in\nabla^{\check h'}$. As before, one checks easily that for $v\in\sigma$, $\psi_{\sigma}\circ\psi_v^{-1}$ is affine linear with integral linear part so $B$ is a tropical affine manifold with singularities. Furthermore, if $\check h$ has integral slopes, and ${\mathscr{P}}$ is integral, then the affine structure on $B$ is in fact integral. \[quintic\] Let $\Delta\subseteq{\mathbb{R}}^4$ be the convex hull of the points $$(-1,-1,-1,-1), (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1),$$ so $\nabla$ is the convex hull of the points $$(-1,-1,-1,-1), (4,-1,-1,-1), (-1,4,-1,-1), (-1,-1,4,-1), (-1,-1,-1,4).$$ Take $\check h=\check\varphi$ and choose a star triangulation of $\nabla$. In this case $B=\partial\nabla$. It is easy to see the affine structure on $B_0$ in fact extends across the interior of all three-dimensional faces of $\nabla$. This gives a smaller discriminant locus $\Gamma$ which, given a nice regular triangulation of $\nabla$, looks like the following picture in a neighbourhood of a $2$-face of $\nabla$: the light lines giving the triangulation and the dark lines the discriminant locus $\Gamma$. 
![image](quintic) In this picture, $\Gamma$ is a trivalent graph, with two types of trivalent vertices. The ones along the edge are non-planar: the two additional legs of $\Gamma$ drawn in this picture are contained in other two-faces of $\nabla$. However $\Gamma$ is planar in the interior of this two-face. In general, if the subdivisions $\Sigma'$ and $\check\Sigma'$ of $\Sigma$ and $\check\Sigma$ respectively represent maximal projective crepant partial resolutions of ${\mathbb{P}}_{\Delta}$ and ${\mathbb{P}}_{\nabla}$, and $\dim B=3$, then only these sorts of trivalent vertices occur. More specifically, one knows what the monodromy of the local systems $\Lambda$ and $\check\Lambda$ on $B_0$ looks like at these vertices. If $v\in\Gamma$ is a vertex contained in the interior of a two-face, then it is clear that the tangent space to that two-face is invariant under parallel transport of $\Lambda$ in a neighbourhood of $v$. A more careful analysis yields that the monodromy matrices for $\Lambda$ take the form, in a suitable basis, $$T_1=\begin{pmatrix} 1&0&0\\1&1&0\\0&0&1\end{pmatrix}, T_2=\begin{pmatrix} 1&0&0\\0&1&0\\1&0&1\end{pmatrix}, T_3=\begin{pmatrix} 1&0&0\\-1&1&0\\-1&0&1\end{pmatrix}.$$ Here $T_1,T_2,T_3$ are given by parallel transport about loops around the three edges of $\Gamma$ coming out of $v$. Of course, the monodromy of $\check\Lambda$ is the transpose inverse of these matrices. Similarly, if $v$ is a vertex of $\Gamma$ contained in an edge of $\nabla^{\check h}$, then the monodromy will take the form $$T_1=\begin{pmatrix} 1&-1&0\\0&1&0\\0&0&1\end{pmatrix}, T_2=\begin{pmatrix} 1&0&-1\\0&1&0\\0&0&1\end{pmatrix}, T_3=\begin{pmatrix} 1&1&1\\0&1&0\\0&0&1\end{pmatrix}.$$ So we see that the monodromies at the two types of vertices are interchanged on passing from $\Lambda$ to $\check\Lambda$.
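Two facts implicit in the discussion above can be verified by direct matrix arithmetic: at each type of vertex the three monodromy matrices, in the order given, compose to the identity (consistent with a loop around all three edges of $\Gamma$ being trivial), and the second triple is the inverse transpose of the first, matching the claim that the two vertex types are exchanged between $\Lambda$ and $\check\Lambda$. A plain-Python sketch:

```python
# Check T1*T2*T3 = I at both vertex types, and that the triples are related
# by inverse transpose (equivalently, S_i^T * T_i = I for each i).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Vertex in the interior of a two-face:
T1 = [[1, 0, 0], [1, 1, 0], [0, 0, 1]]
T2 = [[1, 0, 0], [0, 1, 0], [1, 0, 1]]
T3 = [[1, 0, 0], [-1, 1, 0], [-1, 0, 1]]
# Vertex on an edge of nabla^(h-check):
S1 = [[1, -1, 0], [0, 1, 0], [0, 0, 1]]
S2 = [[1, 0, -1], [0, 1, 0], [0, 0, 1]]
S3 = [[1, 1, 1], [0, 1, 0], [0, 0, 1]]

assert matmul(matmul(T1, T2), T3) == I3
assert matmul(matmul(S1, S2), S3) == I3
# S_i is the inverse transpose of T_i:
assert all(matmul(transpose(S), T) == I3 for S, T in [(S1, T1), (S2, T2), (S3, T3)])
```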
One main result of [@TMS] is If $B$ is a three-dimensional tropical affine manifold with singularities such that $\Gamma$ is trivalent and the monodromy of $\Lambda$ at each vertex is one of the above two types, then $f_0:X(B_0)\rightarrow B_0$ can be compactified to a topological fibration $f:X(B)\rightarrow B$. Dually, $\check f_0:\check X(B_0) \rightarrow B_0$ can be compactified to a topological fibration $\check f:\check X(B)\rightarrow B$. We won’t give any details here of how this is carried out, but it is not particularly difficult, as long as one restricts to the category of topological (not $C^{\infty}$) manifolds. However, it is interesting to look at the singular fibres we need to add in this compactification. If $b\in\Gamma$ is a point which is not a vertex of $\Gamma$, then $f^{-1}(b)$ is homeomorphic to $I_1\times S^1$, where $I_1$ denotes a Kodaira type $I_1$ elliptic curve, i.e. a pinched torus. If $b$ is a vertex of $\Gamma$, with monodromy of the first type, then $f^{-1}(b)=S^1\times S^1\times S^1/\sim$, with $(a,b,c)\sim (a',b',c')$ if $(a,b,c)=(a',b',c')$ or $a=a'=1$, where $S^1$ is identified with the unit circle in ${\mathbb{C}}$. This is the three-dimensional analogue of a pinched torus, and $\chi(f^{-1}(b))=+1$. We call this a *positive* fibre. If $b$ is a vertex of $\Gamma$, with monodromy of the second type, then $f^{-1}(b)$ can be described as $S^1\times S^1\times S^1/\sim$, with $(a,b,c)\sim (a',b',c')$ if $(a,b,c)=(a',b',c')$ or $a=a'=1$, $b=b'$, or $a=a',b=b'=1$. The singular locus of this fibre is a figure eight, and $\chi(f^{-1}(b))=-1$. We call this a *negative* fibre. So we see a very concrete local consequence of SYZ duality: namely in the compactifications $X(B)$ and $\check X(B)$, the positive and negative fibres are interchanged. Of course, this results in the observation that Euler characteristic changes sign under mirror symmetry for Calabi-Yau threefolds. 
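The Euler characteristic counts can be checked by inclusion-exclusion: collapsing a subcomplex $A\subseteq T^3$ to its image $A'$ changes the Euler characteristic by $\chi(A')-\chi(A)$. For the positive fibre, the $2$-torus $a=1$ is collapsed to a point: $$\chi=\chi(T^3)-\chi(T^2)+\chi(\mathrm{pt})=0-0+1=+1,$$ while for the negative fibre, two $2$-tori meeting in a circle are collapsed onto two circles meeting in a point, the figure eight: $$\chi=\chi(T^3)-\chi(T^2\cup_{S^1}T^2)+\chi(S^1\vee S^1)=0-0+(-1)=-1.$$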
Continuing with Example \[quintic\], it was proved in [@TMS] that $\check X(B)$ is homeomorphic to the quintic and $X(B)$ is homeomorphic to the mirror quintic. Haase and Zharkov in [@HZ] gave a different description of what is the same affine structure. Their construction has the advantage that it is manifestly dual. In other words, in our construction, we can interchange the role of $\Delta$ and $\nabla$ to get two different affine manifolds, with $B_{\Delta}$ the affine manifold with singularities structure on $\partial\Delta^{h}$ and $B_{\nabla}$ the affine manifold with singularities structure on $\partial\nabla^{\check h}$. It is not obvious that these are “dual” affine manifolds, at least in the sense that $X(B_{\nabla})$ is homeomorphic to $\check X(B_{\Delta})$ and $\check X(B_{\nabla})$ is homeomorphic to $X(B_{\Delta})$. In the construction given above, this follows from the discrete Legendre transform we will discuss in §7. On the other hand, the construction I give here will arise naturally from the degeneration construction discussed later in this paper. Ruan in [@Ruan] gave a description of *Lagrangian* torus fibrations for hypersurfaces in toric varieties using a symplectic flow argument, and his construction should coincide with a *symplectic* compactification of the symplectic manifolds $\check X(B_0)$. In the three-dimensional case, such a symplectic compactification has now been constructed by Ricardo Castaño-Bernard and Diego Matessi [@CastMat]. If this compactification is applied to the affine manifolds with singularities described here, the resulting symplectic manifolds should be symplectomorphic to the corresponding toric hypersurface, but this has not yet been shown. I should also point out that the explicit compactifications mentioned in three dimensions can be carried out in all dimensions, and this will be done in [@tori].
We will show there in a much more general context that these compactifications are then homeomorphic to the expected Calabi-Yau manifolds. The problems with the SYZ conjecture, and how to get around them ================================================================ The previous section demonstrates that the SYZ conjecture gives a beautiful description of mirror symmetry at a purely topological level. This, by itself, can often be useful, but unfortunately is not strong enough to get at really interesting aspects of mirror symmetry, such as instanton corrections. For a while, though, many of us were hoping that the strong version of duality we have just seen would hold at the special Lagrangian level. This would mean that a mirror pair $X,\check X$ would possess special Lagrangian torus fibrations $f:X\rightarrow B$ and $\check f:\check X\rightarrow B$ with codimension two discriminant locus, and the discriminant loci of $f$ and $\check f$ would coincide. These fibrations would then be dual away from the discriminant locus. There are examples of special Lagrangian fibrations on non-compact toric varieties $X$ with this behaviour. In particular, if $\dim X=n$ with a $T^{n-1}$ action on $X$ preserving the holomorphic $n$-form, and if $X$ in addition carries a Ricci-flat metric which is invariant under this action, then $X$ will have a very nice special Lagrangian fibration with codimension two discriminant locus. (See [@SLAGex] and [@Gold]). However, Dominic Joyce ([@Joyce] and other papers cited therein) began studying some three-dimensional $S^1$-invariant examples, and discovered quite different behaviour. There is an argument that if a special Lagrangian fibration is $C^{\infty}$, then the discriminant locus will be (Hausdorff) codimension two. 
However, Joyce discovered examples which were not differentiable, but only piecewise differentiable, and furthermore, had a codimension one discriminant locus: Define $F:{\mathbb{C}}^3\rightarrow {\mathbb{R}}\times{\mathbb{C}}$ by $F(z_1,z_2,z_3)=(a,c)$ with $2a=|z_1|^2-|z_2|^2$ and $$c=\begin{cases} z_3&a=z_1=z_2=0\\ z_3-\bar z_1\bar z_2/|z_1|& a\ge 0, z_1\not=0\\ z_3-\bar z_1\bar z_2/|z_2|&a<0. \end{cases}$$ It is easy to see that if $a\not=0$, then $F^{-1}(a,c)$ is homeomorphic to ${\mathbb{R}}^2\times S^1$, while if $a=0$, then $F^{-1}(a,c)$ is a cone over $T^2$: essentially, one copy of $S^1$ in ${\mathbb{R}}^2\times S^1$ collapses to a point. In addition, all fibres of this map are special Lagrangian, and it is obviously only piecewise smooth. The discriminant locus is the entire plane given by $a=0$. This example forces a reevaluation of the strong form of the SYZ conjecture. In further work Joyce found evidence for a more likely picture for general special Lagrangian fibrations in three dimensions. The discriminant locus, instead of being a codimension two graph, will be a codimension one blob. Typically the union of the singular points of singular fibres will be a Riemann surface, and it will map to an amoeba-shaped set in $B$, i.e. the discriminant locus looks like the picture on the right rather than the left, and will be a fattening of the old picture of a codimension two discriminant. ![image](fatdisc) Joyce made some additional arguments to suggest that this fattened discriminant locus must look fundamentally different in a neighbourhood of the two basic types of vertices we saw in the previous section, with the two types of vertices expected to appear pretty much as depicted in the above picture. Thus the strong form of duality mentioned above, where we expect the discriminant loci of the special Lagrangian fibrations on a mirror pair to be the same, cannot hold. If this is the case, one needs to replace this strong form of duality with a weaker form.
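Returning to Joyce's map $F$ defined above: it is explicit enough to probe numerically. The following sketch (with test values chosen arbitrarily) checks that the two branch formulas for $c$ agree along the wall $a=0$, $z_1,z_2\neq 0$, so $F$ is continuous there despite being only piecewise smooth, and that $c\rightarrow z_3$ as $(z_1,z_2)\rightarrow 0$ along $|z_1|=|z_2|$:

```python
import cmath

# Joyce's piecewise-smooth map F: C^3 -> R x C from the example above.
def F(z1, z2, z3):
    a = (abs(z1) ** 2 - abs(z2) ** 2) / 2
    if a == 0 and z1 == 0 and z2 == 0:
        c = z3
    elif a >= 0 and z1 != 0:
        c = z3 - z1.conjugate() * z2.conjugate() / abs(z1)
    else:
        c = z3 - z1.conjugate() * z2.conjugate() / abs(z2)
    return a, c

# On the wall a = 0 with z1, z2 != 0, |z1| = |z2|, so both branches coincide.
r, z3 = 0.8, 0.3 + 0.5j
z1 = r * cmath.exp(0.7j)
z2 = r * cmath.exp(-1.2j)
a, c = F(z1, z2, z3)
c_other_branch = z3 - z1.conjugate() * z2.conjugate() / abs(z2)
assert abs(a) < 1e-12 and abs(c - c_other_branch) < 1e-12

# As (z1, z2) -> 0 along |z1| = |z2|, c -> z3: the fibre over (0, z3) is the
# singular one (a cone over T^2).
for eps in (1e-2, 1e-4, 1e-6):
    _, c_eps = F(eps * z1, eps * z2, z3)
    assert abs(c_eps - z3) < 2 * eps
```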
It seems likely that the best way to rephrase the SYZ conjecture is in a limiting form. Mirror symmetry as we currently understand it has to do with degenerations of Calabi-Yau manifolds. Given a flat family $f:{{\mathcal{X}}}\rightarrow D$ over a disk $D$, with the fibre ${{\mathcal{X}}}_0$ over $0$ singular and all other fibres $n$-dimensional Calabi-Yau manifolds, we say the family is *maximally unipotent* if the monodromy transformation $T:H^n({{\mathcal{X}}}_t,{\mathbb{Q}})\rightarrow H^n({{\mathcal{X}}}_t,{\mathbb{Q}})$ ($t\in D$ non-zero) satisfies $(T-I)^{n+1}=0$ but $(T-I)^n\not=0$. It is a standard fact of mirror symmetry that mirrors should be associated to maximally unipotent degenerations of Calabi-Yau manifolds. In particular, given two different maximally unipotent degenerations in a single complex moduli space for some Calabi-Yau manifold, one might obtain different mirror manifolds. Sometimes these different mirror manifolds are birationally equivalent, as studied in [@AGM], while sometimes they are genuinely different, see [@Rodland]. We recall the definition of Gromov-Hausdorff convergence, a notion of convergence of a sequence of metric spaces. Let $(X,d_X)$, $(Y,d_Y)$ be two compact metric spaces. Suppose there exist maps $f:X\rightarrow Y$ and $g:Y\rightarrow X$ (not necessarily continuous) such that for all $x_1,x_2\in X$, $$|d_X(x_1,x_2)-d_Y(f(x_1),f(x_2))|<\epsilon$$ and for all $x\in X$, $$d_X(x,g\circ f(x))<\epsilon,$$ and the two symmetric properties for $Y$ hold. Then we say the Gromov–Hausdorff distance between $X$ and $Y$ is at most $\epsilon$. The Gromov–Hausdorff distance $d_{GH}(X,Y)$ is the infimum of all such $\epsilon$. It follows from results of Gromov (see for example [@Petersen], pg. 281, Cor. 1.11) that the space of compact Ricci-flat manifolds with diameter $\le C$ is precompact with respect to Gromov-Hausdorff distance, i.e.
any sequence of such manifolds has a subsequence converging with respect to the Gromov-Hausdorff distance to a metric space. This metric space could be quite bad; this is quite outside the realm of algebraic geometry! Nevertheless, this raises the following natural question. Given a maximally unipotent degeneration of Calabi-Yau manifolds ${{\mathcal{X}}}\rightarrow D$, take a sequence $t_i\in D$ converging to $0$, and consider a sequence $({{\mathcal{X}}}_{t_i}, g_{t_i})$, where $g_{t_i}$ is a choice of Ricci-flat metric chosen so that $Diam(g_{t_i})$ remains bounded. What is the Gromov-Hausdorff limit of $({{\mathcal{X}}}_{t_i},g_{t_i})$, or the limit of some convergent subsequence? Consider a degenerating family of elliptic curves, with periods $1$ and ${1\over 2\pi i}\log t$. If we take $t$ approaching $0$ along the positive real axis, then we can just view this as a family of elliptic curves ${{\mathcal{X}}}_{\alpha}$ with period $1$ and $i\alpha$ with $\alpha\rightarrow\infty$. If we take the standard Euclidean metric $g$ on ${{\mathcal{X}}}_{\alpha}$, then the diameter of ${{\mathcal{X}}}_{\alpha}$ is unbounded. To obtain a bounded diameter, we replace $g$ by $g/\alpha^2$; equivalently, we can keep $g$ fixed on ${\mathbb{C}}$ but change the periods of the elliptic curve to $1/\alpha, i$. It then becomes clear that the Gromov-Hausdorff limit of such a sequence of elliptic curves is a circle ${\mathbb{R}}/{\mathbb{Z}}$. This simple example motivates the first conjecture about maximally unipotent degenerations, conjectured independently by myself and Wilson on the one hand [@GrWi] and Kontsevich and Soibelman [@KS] on the other. \[GWKSconj\] Let ${{\mathcal{X}}}\rightarrow D$ be a maximally unipotent degeneration of simply-connected Calabi-Yau manifolds with full $SU(n)$ holonomy, $t_i\in D$ with $t_i\rightarrow 0$, and let $g_i$ be a Ricci-flat metric on ${{\mathcal{X}}}_{t_i}$ normalized to have fixed diameter $C$. 
Then a convergent subsequence of $({{\mathcal{X}}}_{t_i},g_i)$ converges to a metric space $(X_{\infty},d_{\infty})$, where $X_{\infty}$ is homeomorphic to $S^n$. Furthermore, $d_{\infty}$ is induced by a Riemannian metric on $X_{\infty}\setminus\Gamma$, where $\Gamma\subseteq X_{\infty}$ is a set of codimension two. Here the topology of the limit depends on the nature of the non-singular fibres ${{\mathcal{X}}}_t$; for example, if instead ${{\mathcal{X}}}_t$ were hyperkähler, then we would expect the limit to be a projective space. Also, even in the case of full $SU(n)$ holonomy, if ${{\mathcal{X}}}_t$ is not simply connected, we would expect limits such as ${\mathbb{Q}}$-homology spheres to arise. Conjecture \[GWKSconj\] is directly inspired by the SYZ conjecture. Suppose we had special Lagrangian fibrations $f_i:{{\mathcal{X}}}_{t_i}\rightarrow B_i$. Then as the maximally unipotent degeneration is approached, it is possible to see that the volume of the fibres of these fibrations goes to zero. This would suggest these fibres collapse, hopefully leaving the base as the limit. This conjecture was proved by myself and Wilson for K3 surfaces in [@GrWi]. The proof relies on a detailed analysis of the behaviour of Ricci-flat metrics in the limit, and also on the existence of explicit local models for Ricci-flat metrics near singular fibres of special Lagrangian fibrations. The motivation for this conjecture from SYZ also provides a limiting form of the conjecture. There are any number of problems with trying to prove the existence of special Lagrangian fibrations on Calabi-Yau manifolds. Even the existence of a single special Lagrangian torus near a maximally unipotent degeneration is unknown, but we expect it should be easier to find them as we approach the maximally unipotent point. Furthermore, even if we find a special Lagrangian torus, we know that it moves in an $n$-dimensional family, but we don’t know that its deformations fill out the entire manifold.
In addition, there is no guarantee that even if it does, we obtain a foliation of the manifold: nearby special Lagrangian submanifolds may intersect. (For an example, see [@Matessi].) So instead, we will just look at the moduli space of special Lagrangian tori. Suppose, given $t_i\rightarrow 0$, that for $t_i$ sufficiently close to zero, there is a special Lagrangian $T^n$ whose homology class is invariant under monodromy, or more specifically, generates the space $W_0$ of the monodromy weight filtration (this is where we expect to find fibres of a special Lagrangian fibration associated to a maximally unipotent degeneration). Let $B_{0,i}$ be the moduli space of deformations of this torus; every point of $B_{0,i}$ corresponds to a smooth special Lagrangian torus in ${{\mathcal{X}}}_{t_i}$. This manifold then comes equipped with the McLean metric and affine structures defined in §2. One can then compactify $B_{0,i}\subseteq B_i$ (probably by taking the closure of $B_{0,i}$ in the space of special Lagrangian currents; the details aren’t important here). This gives a sequence of metric spaces $(B_i,d_i)$ with the metric $d_i$ induced by the McLean metric. If the McLean metric is normalized to keep the diameter of $B_i$ constant independent of $i$, then we can hope that $(B_i,d_i)$ converges to a compact metric space $(B_{\infty},d_{\infty})$. Here then is the limiting form of SYZ: If $({{\mathcal{X}}}_{t_i},g_i)$ converges to $(X_{\infty},g_{\infty})$ and $(B_i,d_i)$ is non-empty for large $i$ and converges to $(B_{\infty},d_{\infty})$, then $B_{\infty}$ and $X_{\infty}$ are isometric up to scaling. Furthermore, there is a subspace $B_{\infty,0} \subseteq B_{\infty}$ with $\Gamma:=B_{\infty}\setminus B_{\infty,0}$ of Hausdorff codimension 2 in $B_{\infty}$ such that $B_{\infty,0}$ is a Monge-Ampère manifold, with the Monge-Ampère metric inducing $d_{\infty}$ on $B_{\infty,0}$.
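For the reader's convenience, recall what the Monge-Ampère condition in this conjecture means concretely (in the standard normalization, presumably matching the definitions of §2): in local affine coordinates $y_1,\ldots,y_n$ on $B_{\infty,0}$, the metric should be the Hessian of a convex potential $K$ satisfying the real Monge-Ampère equation,
$$g_{ij}=\frac{\partial^2 K}{\partial y_i\partial y_j},\qquad \det\left(\frac{\partial^2 K}{\partial y_i\partial y_j}\right)=\hbox{constant}.$$
This is precisely the condition under which the associated semi-flat metric on a torus fibration over $B_{\infty,0}$ is Ricci-flat, which is why it is the expected structure on the limit.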
Essentially what this is saying is that as we approach the maximally unipotent degeneration, we expect to have a special Lagrangian fibration on larger and larger subsets of ${{\mathcal{X}}}_{t_i}$. Furthermore, in the limit, the codimension one discriminant locus suggested by Joyce converges to a codimension two discriminant locus, and the Hessian metrics on $B_{0,i}$ (not necessarily Monge-Ampère, see [@Matessi]) converge to a Monge-Ampère metric. The main point I want to get at here is that it is likely the SYZ conjecture is only “approximately” correct, and one needs to look at the limit to have a hope of proving anything. On the other hand, the above conjecture seems likely to be accessible by currently understood techniques, though with a lot of additional work, and I wouldn’t be surprised to see it proved in the next few years. How do we do mirror symmetry using this modified version of the SYZ conjecture? Essentially, we would follow these steps: 1. We begin with a maximally unipotent degeneration of Calabi-Yau manifolds ${{\mathcal{X}}}\rightarrow D$, along with a choice of polarization. This gives us a Kähler class $[\omega_t]\in H^2({{\mathcal{X}}}_t,{\mathbb{R}})$ for each $t\in D\setminus 0$, represented by the Kähler form $\omega_t$ of a Ricci-flat metric $g_t$. 2. Identify the Gromov-Hausdorff limit of a sequence $({{\mathcal{X}}}_{t_i}, r_ig_{t_i})$ with $t_i\rightarrow 0$, and $r_i$ a scale factor which keeps the diameter of ${{\mathcal{X}}}_{t_i}$ constant. The limit will be, if the above conjectures work, an affine manifold with singularities $B$ along with a Monge-Ampère metric. 3. Perform a Legendre transform to obtain a new affine manifold with singularities $\check B$, though with the same metric. 4. Try to construct a compactification of $X_{\epsilon}(\check B_0)$ for small $\epsilon>0$ to obtain a complex manifold $X_{\epsilon}(\check B)$. This will be the mirror manifold. Actually, we need to elaborate on this last step a bit more.
The problem is that while we expect that it should be possible in general to construct symplectic compactifications of the symplectic manifold $\check X(B_0)$ (and hence get the mirror as a symplectic manifold), we don’t expect to be able to compactify $X_{\epsilon}(\check B_0)$ as a complex manifold. Instead, the expectation is that a small deformation of $X_{\epsilon}(\check B_0)$ is necessary before it can be compactified. Furthermore, this small deformation is critically important in mirror symmetry: *it is this small deformation which provides the $B$-model instanton corrections*. Because of the importance of this last issue, it has already been studied by several authors: Fukaya in [@Fukaya] has studied the problem directly using heuristic ideas, while Kontsevich and Soibelman [@KS2] have modified the problem of passing from an affine manifold to a complex manifold by instead producing a non-Archimedean space. We will return to these issues later in this paper, when I discuss my own work with Siebert which has been partly motivated by the same problem. Because this last item is so important, let’s give it a name: \[reconstruct1\] Given a tropical affine manifold with singularities $B$, construct a complex manifold $X_{\epsilon}(B)$ which is a compactification of a small deformation of $X_{\epsilon}(B_0)$. I do not wish to dwell further on this version of the SYZ conjecture here, because it lies mostly in the realm of analysis and differential geometry and the behaviour of Ricci-flat metrics, and will give us little insight into what makes subtler aspects of traditional mirror symmetry work: for example, how exactly do instanton corrections arise? So we move on to explain how the limiting form of SYZ inspired a more algebro-geometric form of SYZ, which in turn avoids all analytic problems and holds out great promise for understanding the fundamental mysteries of mirror symmetry.
Gromov-Hausdorff limits, algebraic degenerations, and mirror symmetry
=====================================================================

We now have two notions of limit: the familiar algebro-geometric notion of a flat degenerating family ${{\mathcal{X}}}\rightarrow D$ over a disk on the one hand, and the Gromov-Hausdorff limit on the other. Kontsevich had an important insight (see [@KS]) into the connection between these two. In this section I will give a rough idea of how and why this works. Very roughly speaking, the Gromov-Hausdorff limit of $({{\mathcal{X}}}_{t_i},g_{t_i})$ as $t_i\rightarrow 0$ should coincide, topologically, with the dual intersection complex of the singular fibre ${{\mathcal{X}}}_0$. More precisely, in a relatively simple situation, suppose $f:{{\mathcal{X}}}\rightarrow D$ is relatively minimal (in the sense of Mori) and normal crossings, with ${{\mathcal{X}}}_0$ having irreducible components $X_1,\ldots,X_m$. The dual intersection complex of ${{\mathcal{X}}}_0$ is the simplicial complex with vertices $v_1,\ldots,v_m$, and which contains a simplex $\langle v_{i_0},\ldots, v_{i_p}\rangle$ if $X_{i_0}\cap\cdots\cap X_{i_p}\not=\emptyset$. Let us explain roughly why this should be, first by looking at a standard family of degenerating elliptic curves with periods $1$ and ${n\over 2\pi i} \log t$ for $n$ a positive integer. Such a family over the punctured disk is extended to a family over the disk by adding an $I_n$ (a cycle of $n$ rational curves) fibre over the origin. Taking a sequence $t_i\rightarrow 0$ with $t_i$ real and positive gives a sequence of elliptic curves of the form $X_{\epsilon_i}(B)$, where $B={\mathbb{R}}/n{\mathbb{Z}}$ and $\epsilon_i=-{2\pi\over\ln t_i}$. In addition, the metric on $X_{\epsilon_i}(B)$, properly scaled, comes from the constant Hessian metric on $B$. So we wish to explain how $B$ is related to the geometry near the singular fibre.
To this end, let $X_1,\ldots,X_n$ be the irreducible components of ${{\mathcal{X}}}_0$; these are all ${\mathbb{P}}^1$’s. Let $P_1,\ldots,P_n$ be the singular points of ${{\mathcal{X}}}_0$. We’ll consider two sorts of open sets in ${{\mathcal{X}}}$. For the first type, choose a coordinate $z$ on $X_i$, with $P_i$ given by $z=0$ and $P_{i+1}$ given by $z=\infty$. Let $U_i\subseteq X_i$ be the open set $\{z|\delta\le |z| \le 1/\delta\}$ for some small fixed $\delta$. Then one can find a neighbourhood $\tilde U_i$ of $U_i$ in ${{\mathcal{X}}}$ such that $\tilde U_i$ is biholomorphic to $U_i\times D_{\rho}$ for $\rho>0$ sufficiently small, $D_{\rho}$ a disk of radius $\rho$ in ${\mathbb{C}}$, and $f|_{\tilde U_i}$ is the projection onto $D_{\rho}$. On the other hand, each $P_i$ has a neighbourhood $\tilde V_i$ in ${{\mathcal{X}}}$ biholomorphic to a polydisk $\{(z_1,z_2)\in{\mathbb{C}}^2||z_1|\le \delta', |z_2|\le\delta'\}$ on which $f$ takes the form $z_1z_2$. If $\delta$ and $\delta'$ are chosen correctly, then for $t$ sufficiently close to zero, $$\{\tilde V_i\cap{{\mathcal{X}}}_t|1\le i\le n\}\cup \{\tilde U_i\cap{{\mathcal{X}}}_t|1\le i\le n\}$$ form an open cover of ${{\mathcal{X}}}_t$. Now each of the sets in this open cover can be written as $X_{\epsilon}(U)$ for some $U$ a one-dimensional (non-compact) affine manifold and $\epsilon=-2\pi/\ln|t|$. If $U$ is an open interval $(a,b)\subseteq {\mathbb{R}}$, then $X_{\epsilon}(U)$ is biholomorphic to the annulus $$\{z\in{\mathbb{C}}| e^{-2\pi b/\epsilon}\le |z|\le e^{-2\pi a/\epsilon}\}$$ as $q=e^{2\pi i(x+i y)/\epsilon}$ is a holomorphic coordinate on $X_{\epsilon}((a,b))$. Thus $$\tilde U_i\cap {{\mathcal{X}}}_t\cong X_{\epsilon}\left(\left({\epsilon\ln\delta\over 2\pi}, -{\epsilon\ln\delta\over 2\pi}\right)\right)$$ with $\epsilon=-2\pi/\ln|t|$. As $t\rightarrow 0$, the interval $(\epsilon\ln\delta/2\pi, -\epsilon\ln\delta/2\pi)$ shrinks to a point.
So $\tilde U_i\cap {{\mathcal{X}}}_t$ is a smaller and smaller open subset of ${{\mathcal{X}}}_t$ as $t\rightarrow 0$ when we view things in this way. This argument suggests that every irreducible component should be associated to a point on $B$. Now look at $\tilde V_i\cap{{\mathcal{X}}}_t$. This is $$\begin{aligned} \{(z_1,z_2)\in{\mathbb{C}}^2||z_1|,|z_2|<\delta', z_1z_2=t\} &\cong&\{z\in{\mathbb{C}}||t|/\delta'\le |z|\le \delta'\}\\ &\cong& X_{\epsilon}\left(\left({-\epsilon\over 2\pi}\ln\delta', {\epsilon\over 2\pi} (\ln\delta'-\ln |t|)\right)\right)\end{aligned}$$ with $\epsilon=-2\pi/\ln|t|$. This interval approaches the unit interval $(0,1)$ as $t\rightarrow 0$. So the open set $\tilde V_i\cap {{\mathcal{X}}}_t$ ends up being a large portion of ${{\mathcal{X}}}_t$. We end up with ${{\mathcal{X}}}_t$, for small $t$, being a union of open sets of the form $X_{\epsilon}((i+\epsilon',i+1-\epsilon'))$ (i.e. $\tilde V_i\cap{{\mathcal{X}}}_t$) and $X_{\epsilon}((i-\epsilon'',i+\epsilon''))$ (i.e. $\tilde U_i\cap {{\mathcal{X}}}_t$) for $\epsilon'$, $\epsilon''$ sufficiently small. These should glue, at least approximately, to give $X_{\epsilon}(B)$. So we see that irreducible components of ${{\mathcal{X}}}_0$ seem to correspond to points on $B$, while intersections of components correspond to line segments. In this way we see the dual intersection complex emerge. Let us make one more observation before beginning with rigorous results in the next section. Suppose more generally we had a *Gorenstein toroidal crossings* degeneration of Calabi-Yau manifolds $f:{{\mathcal{X}}}\rightarrow D$. This means that every point $x\in{{\mathcal{X}}}$ has a neighbourhood isomorphic to an open set in an affine Gorenstein (i.e. the canonical class is a Cartier divisor) toric variety, with $f$ given locally by a monomial which vanishes exactly to order $1$ on each codimension one toric stratum. This is a generalization of the notion of normal crossings, see [@ss].
Very roughly, the above argument suggests that each irreducible component of the central fibre will correspond to a point of the Gromov-Hausdorff limit. The following exercise shows what kind of contribution to $B$ to expect from a point $x\in{{\mathcal{X}}}_0$ which is a zero-dimensional stratum in ${{\mathcal{X}}}_0$. \[gorensteinlimit\] Suppose there is a point $x\in{{\mathcal{X}}}_0$ which has a neighbourhood isomorphic to a neighbourhood of a dimension zero torus orbit of an affine Gorenstein toric variety $Y_x$. Such an affine variety is specified as follows. Set $M={\mathbb{Z}}^n$, $M_{{\mathbb{R}}}=M\otimes_{{\mathbb{Z}}}{\mathbb{R}}$, $N={\operatorname{Hom}}_{{\mathbb{Z}}}(M,{\mathbb{Z}})$, $N_{{\mathbb{R}}}=N\otimes_{{\mathbb{Z}}}{\mathbb{R}}$ as in §4. Then there is a lattice polytope $\sigma\subseteq M_{{\mathbb{R}}}$, $n=\dim{{\mathcal{X}}}_t$, $C(\sigma):=\{(rm,r)| m\in\sigma,r\ge 0\}\subseteq M_{{\mathbb{R}}}\oplus{\mathbb{R}}$, $P:=C(\sigma)^{\vee}\cap (N\oplus{\mathbb{Z}})$ the monoid determined by the dual of the cone $C(\sigma)$, and finally, $Y_x ={\operatorname{Spec}}{\mathbb{C}}[P]$, and $f$ coincides with the monomial $z^{(0,1)}$. Let us now take a small neighbourhood of $x$ of the form $$\tilde U_{\delta}=\{y\in {\operatorname{Spec}}{\mathbb{C}}[P]\,|\,\hbox{$|z^p|<\delta$ for all $p\in P$}\}.$$ This is an open set as the condition $|z^p|<\delta$ can be tested on a finite generating set for $P$, provided that $\delta<1$. Then show that for a given $t$, $|t|<1$ and $\epsilon=-2\pi/\log|t|$, if $$\sigma_t:=\{m\in M_{{\mathbb{R}}}|\hbox{$\langle p,(m,1)\rangle>{\log\delta\over \log |t|}$ for all $p\in P$}\},$$ then $$f^{-1}(t)\cap \tilde U_{\delta}\cong X_{\epsilon}(\sigma_t).$$ Note that $$\sigma=\{m\in M_{{\mathbb{R}}}|\hbox{$\langle p,(m,1)\rangle\ge 0$ for all $p\in P$}\},$$ so $\sigma_t$ is an open subset of $\sigma$, and as $t\rightarrow 0$, $\sigma_t$ converges to the interior of $\sigma$.
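As a consistency check on the exercise (this special case is worked out here for illustration), take $n=1$ and $\sigma=[0,1]\subseteq M_{{\mathbb{R}}}={\mathbb{R}}$. Then $C(\sigma)\subseteq{\mathbb{R}}^2$ is the cone generated by $(0,1)$ and $(1,1)$, its dual cone is generated by $(1,0)$ and $(-1,1)$, and setting $u=z^{(1,0)}$, $v=z^{(-1,1)}$ gives
$$Y_x={\operatorname{Spec}}{\mathbb{C}}[u,v]={\mathbb{C}}^2,\qquad f=z^{(0,1)}=z^{(1,0)}\cdot z^{(-1,1)}=uv.$$
This recovers the normal crossings model $z_1z_2=t$ used in the elliptic curve discussion above, and the convergence of $\sigma_t$ to the interior of $[0,1]$ matches the interval approaching $(0,1)$ found there for $\tilde V_i\cap{{\mathcal{X}}}_t$.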
This observation will hopefully motivate the basic construction of the next section. Toric degenerations, the intersection complex and its dual ========================================================== We now return to rigorous statements. I would like to explain the basic ideas behind the program launched in [@PartI]. While I will use the previous sections as motivation, this work actually got its start when Siebert began a program of studying mirror symmetry via degenerations of Calabi-Yau manifolds. Work of Schröer and Siebert [@ssKod], [@ss] led Siebert to the idea that log structures on degenerations of Calabi-Yau manifolds would allow one to view mirror symmetry as an operation performed on degenerate Calabi-Yau varieties. Siebert observed that at a combinatorial level, mirror symmetry exchanged data pertaining to the log structure and a polarization. This will be explained more clearly in the following section, where we introduce log structures. Together, Siebert and I realised that the combinatorial data he was considering could be encoded naturally in the dual intersection complex of the degeneration, and that mirror symmetry then corresponded to a discrete Legendre transform on the dual intersection complex. It then became apparent that this approach provided an algebro-geometrization of the SYZ conjecture. Here I will explain this program from the opposite direction, starting with the motivation of the previous section for introducing the dual intersection complex, and then work backwards until we arrive naturally at log structures. Much of the material in this section comes from [@PartI], §4. \[toricdegen\] Let $f:{{\mathcal{X}}}\rightarrow D$ be a proper flat family of relative dimension $n$, where $D$ is a disk and ${{\mathcal{X}}}$ is a complex analytic space (not necessarily non-singular). We say $f$ is a [*toric degeneration*]{} of Calabi-Yau varieties if 1. 
${{\mathcal{X}}}_t$ is an irreducible normal Calabi-Yau variety with only canonical singularities for $t\not=0$. (The reader may like to assume ${{\mathcal{X}}}_t$ is smooth for $t\not=0$). 2. If $\nu:\tilde{{\mathcal{X}}}_0\to{{\mathcal{X}}}_0$ is the normalization, then $\tilde{{\mathcal{X}}}_0$ is a disjoint union of toric varieties, the conductor locus $C\subseteq\tilde{{\mathcal{X}}}_0$ is reduced, and the map $C\to\nu(C)$ is unramified and generically two-to-one. (The conductor locus is a naturally defined scheme structure on the set where $\nu$ is not an isomorphism.) The square $$\begin{CD} C@>>> \tilde{{\mathcal{X}}}_0\\ @VVV @VV{\nu}V\\ \nu(C)@>>> {{\mathcal{X}}}_0 \end{CD}$$ is cartesian and cocartesian. 3. ${{\mathcal{X}}}_0$ is a reduced Gorenstein space and the conductor locus $C$ restricted to each irreducible component of $\tilde{{\mathcal{X}}}_0$ is the union of all toric Weil divisors of that component. 4. There exists a closed subset $Z\subseteq{{\mathcal{X}}}$ of relative codimension $\ge 2$ such that $Z$ satisfies the following properties: $Z$ does not contain the image under $\nu$ of any toric stratum of $\tilde{{\mathcal{X}}}_0$, and for any point $x\in {{\mathcal{X}}}\setminus Z$, there is a neighbourhood $\tilde U_x$ (in the analytic topology) of $x$, an $(n+1)$-dimensional affine toric variety $Y_x$, a regular function $f_x$ on $Y_x$ given by a monomial, and a commutative diagram $$\begin{matrix} \tilde U_x&\mapright{\psi_x}&Y_x\cr \mapdown{f|_{\tilde U_{x}}}&&\mapdown{f_x}\cr D'&\mapright{\varphi_x}&{\mathbb{C}}\cr \end{matrix}$$ where $\psi_x$ and $\varphi_x$ are open embeddings and $D'\subseteq D$. Furthermore, $f_x$ vanishes precisely once on each toric divisor of $Y_x$. Take ${{\mathcal{X}}}$ to be defined by the equation $tf_4+z_0z_1z_2z_3=0$ in ${\mathbb{P}}^3\times D$, where $D$ is a disk with coordinate $t$ and $f_4$ is a general homogeneous quartic polynomial on ${\mathbb{P}}^3$.
It is easy to see that ${{\mathcal{X}}}$ is singular at the locus $$\{t=f_4=0\}\cap Sing({{\mathcal{X}}}_0).$$ As ${{\mathcal{X}}}_0$ is the coordinate tetrahedron, the singular locus of ${{\mathcal{X}}}_0$ consists of the six coordinate lines of ${\mathbb{P}}^3$, and ${{\mathcal{X}}}$ has four singular points along each such line, for a total of 24 singular points. Take $Z=Sing({{\mathcal{X}}})$. Then away from $Z$, the projection ${{\mathcal{X}}}\rightarrow D$ is normal crossings, which yields condition (4) of the definition of toric degeneration. It is easy to see that all other conditions are satisfied. \[toricdegenexample2\] Let $\Delta\subseteq M_{{\mathbb{R}}}$ be a reflexive polytope with dual $\nabla \subseteq N_{{\mathbb{R}}}$. Choose an *integral* strictly convex piecewise linear function $\check h:M_{{\mathbb{R}}}\rightarrow{\mathbb{R}}$ as in §4. Consider the family in ${\mathbb{P}}_{\Delta}\times D$ defined by $$\label{toricdegenexam} z^0+t\sum_{m\in\Delta\cap M} a_m t^{\check h'(m)}z^m=0.$$ Here $z^m$ denotes the section of ${\mathcal{O}}_{{\mathbb{P}}_{\Delta}}(1)$ determined by the lattice point $m\in\Delta\cap M$ and $a_m\in k$ is a general choice of coefficient. So $z^0$ is the section which vanishes once on each toric boundary component of ${\mathbb{P}}_{\Delta}$. Now for most choices of $\check h$, this does not define a toric degeneration: the singularities are too bad. However, there is a natural toric variety in which to describe this degeneration. Set $$\tilde\Delta:=\{(m,l)|m\in\Delta,\quad l\ge\check h'(m)\}\subseteq M_{{\mathbb{R}}} \oplus{\mathbb{R}}.$$ The assumption that $\check h'$ is convex implies $\tilde\Delta$ is convex, but of course it is non-compact. Let $\tilde\Sigma$ be the normal fan to $\tilde\Delta$ in $N_{{\mathbb{R}}}\oplus {\mathbb{R}}$. Let $X(\tilde\Sigma)$ denote the toric variety defined by the fan $\tilde\Sigma$.
Then $\tilde\Delta$ is the Newton polytope of a line bundle ${\mathcal{L}}$ on $X(\tilde\Sigma)$, and terms of the form $t^{\check h'(m)}z^m$ can be interpreted as sections of this line bundle, corresponding to $(m,\check h'(m))\in \tilde\Delta\subseteq M_{{\mathbb{R}}}\oplus{\mathbb{R}}$. In addition, there is a natural map $X(\tilde\Sigma)\rightarrow{\mathbb{A}}^1$ defined by projection onto ${\mathbb{R}}$, and this map defines the regular function $t$. Thus (\[toricdegenexam\]) defines a hypersurface ${{\mathcal{X}}}_{\Delta}\subseteq X(\tilde\Sigma)$ and $t$ defines a map ${{\mathcal{X}}}_{\Delta}\rightarrow{\mathbb{A}}^1$. This is a toric degeneration. Without going into much detail, choosing a star decomposition of $\nabla$ as in §4 and a good polyhedral decomposition ${\mathscr{P}}$ of $\partial\nabla^{\check h}$ is essentially the same as choosing a refinement $\tilde\Sigma'$ of $\tilde\Sigma$ with particularly nice properties. This yields a partial resolution $X(\tilde\Sigma') \rightarrow X(\tilde\Sigma)$. Then the proper transform of ${{\mathcal{X}}}_{\Delta}$ in $X(\tilde\Sigma')$, ${{\mathcal{X}}}'_{\Delta}$, yields another toric degeneration ${{\mathcal{X}}}'_{\Delta}\rightarrow {\mathbb{A}}^1$. This is necessary to get a toric degeneration whose general fibre is an MPCP resolution of a hypersurface in a toric variety. For proofs, see [@GBB], where the construction is generalized to complete intersections in toric varieties. Given a toric degeneration $f:{{\mathcal{X}}}\rightarrow D$, we can build the *dual intersection complex* $(B,{\mathscr{P}})$ of $f$, as follows. Here $B$ is an integral affine manifold with singularities, and ${\mathscr{P}}$ is a *polyhedral decomposition* of $B$, a decomposition of $B$ into lattice polytopes. In fact, we will construct $B$ as a union of lattice polytopes.
Specifically, write the normalisation of ${{\mathcal{X}}}_0$ as a disjoint union $\tilde {{\mathcal{X}}}_0=\coprod X_i$ of toric varieties $X_i$, with $\nu:\tilde{{\mathcal{X}}}_0\rightarrow{{\mathcal{X}}}_0$ the normalisation map. The [*strata*]{} of ${{\mathcal{X}}}_0$ are the elements of the set $$Strata({{\mathcal{X}}}_0)=\{\nu(S)\,|\,\hbox{$S$ is a toric stratum of $X_i$ for some $i$}\}.$$ Here by toric stratum we mean the closure of a $({\mathbb{C}}^*)^n$ orbit. Let $\{x\}\in Strata({{\mathcal{X}}}_0)$ be a zero-dimensional stratum. Applying Definition \[toricdegen\] (4) to a neighbourhood of $x$, there is a toric variety $Y_x$ such that in a neighbourhood of $x$, $f:{{\mathcal{X}}}\rightarrow D$ is locally isomorphic to $f_x:Y_x\rightarrow{\mathbb{C}}$, where $f_x$ is given by a monomial. Now the condition that $f_x$ vanishes precisely once along each toric divisor of $Y_x$ is the statement that $Y_x$ is Gorenstein, and as such, it arises as in Exercise \[gorensteinlimit\]. Indeed, let $M,N$ be as usual, with ${\operatorname{rank}}M=\dim{{\mathcal{X}}}_0$. Then there is a lattice polytope $\sigma_x\subseteq M_{{\mathbb{R}}}$ such that $C(\sigma_x) =\{(rm,r)|m\in\sigma_x, r\ge0\}$ is the cone defining the toric variety $Y_x$. As we saw in Exercise \[gorensteinlimit\], a small neighbourhood of $x$ in ${{\mathcal{X}}}$ should contribute a copy of $\sigma_x$ to $B$, which provides the motivation for our construction. We can now describe how to construct $B$ by gluing together the polytopes $$\{\sigma_x\,|\, \{x\}\in Strata({{\mathcal{X}}}_0)\}.$$ We will do this in the case that every irreducible component of ${{\mathcal{X}}}_0$ is in fact itself normal so that $\nu:X_i\rightarrow \nu(X_i)$ is an isomorphism. The reader may be able to imagine the more general construction. With this normality assumption, there is a one-to-one inclusion-reversing correspondence between faces of $\sigma_x$ and elements of $Strata({{\mathcal{X}}}_0)$ containing $x$.
We can then identify faces of $\sigma_x$ and $\sigma_{x'}$ if they correspond to the same strata of ${{\mathcal{X}}}_0$. Some argument is necessary to show that this identification can be done via an integral affine transformation, but again this is not difficult. Making these identifications, one obtains $B$. One can then prove: If ${{\mathcal{X}}}_0$ is complex $n$-dimensional, then $B$ is a real $n$-dimensional manifold. See [@PartI], Proposition 4.10 for a proof. Now so far $B$ is just a topological manifold, constructed by gluing together lattice polytopes. Let $${\mathscr{P}}=\{\sigma\subseteq B| \hbox{$\sigma$ is a face of $\sigma_x$ for some zero-dimensional stratum $x$}\}.$$ There is a one-to-one inclusion-reversing correspondence between strata of ${{\mathcal{X}}}_0$ and elements of ${\mathscr{P}}$. It only remains to give $B$ an affine structure with singularities. Let ${\operatorname{Bar}}({\mathscr{P}})$ be the first barycentric subdivision of ${\mathscr{P}}$, and let $\Gamma$ be the union of simplices in ${\operatorname{Bar}}({\mathscr{P}})$ not containing a vertex of ${\mathscr{P}}$ or intersecting the interior of a maximal cell of ${\mathscr{P}}$. If we then set $B_0:=B\setminus\Gamma$, we can define an affine structure on $B_0$ as follows. $B_0$ has an open cover $$\{W_{\sigma}|\hbox{$\sigma\in{\mathscr{P}}$ maximal}\}\cup \{W_v|\hbox{$v\in{\mathscr{P}}$ a vertex}\}$$ where $W_{\sigma}={\operatorname{Int}}(\sigma)$, the interior of $\sigma$, and $$W_v=\bigcup_{\tau\in{\operatorname{Bar}}({\mathscr{P}})\atop v\in\tau}{\operatorname{Int}}(\tau)$$ is the (open) star of $v$ in ${\operatorname{Bar}}({\mathscr{P}})$, just as in §4. We now define charts. As a maximal cell of ${\mathscr{P}}$ is of the form $\sigma_x\subseteq M_{{\mathbb{R}}}$, this inclusion induces a natural affine chart $\psi_{\sigma}:{\operatorname{Int}}(\sigma)\rightarrow M_{{\mathbb{R}}}$.
On the other hand, a vertex $v$ of ${\mathscr{P}}$ corresponds to a codimension $0$ stratum of ${{\mathcal{X}}}_0$, i.e. to an irreducible component $X_i$ for some $i$. Because this is a compact toric variety, it is defined by a complete fan $\Sigma_i$ in $M_{{\mathbb{R}}}$. Furthermore, there is a one-to-one correspondence between $p$-dimensional cones of $\Sigma_i$ and $p$-dimensional cells of ${\mathscr{P}}$ containing $v$ as a vertex, as they both correspond to strata of ${{\mathcal{X}}}_0$ contained in $X_i$. There is then a continuous map $$\psi_v:W_v\rightarrow M_{{\mathbb{R}}}$$ which takes $W_v\cap\sigma$, for any $\sigma\in{\mathscr{P}}$ containing $v$ as a vertex, into the corresponding cone of $\Sigma_i$ *integral affine linearly*. Such a map is uniquely determined by the combinatorial correspondence and the requirement that it be integral affine linear on each cell. It is then obvious that these charts define an integral affine structure on $B_0$. Thus we have constructed $(B,{\mathscr{P}})$. Let $f:{{\mathcal{X}}}\rightarrow D$ be a degeneration of elliptic curves to an $I_n$ fibre. Then $B$ is the circle ${\mathbb{R}}/n{\mathbb{Z}}$, decomposed by ${\mathscr{P}}$ into $n$ line segments of length one. Continuing with Example \[toricdegenexample2\], the dual intersection complex constructed from the toric degeneration ${{\mathcal{X}}}'_{\Delta}\rightarrow {\mathbb{A}}^1$ is the affine manifold with singularities and polyhedral decomposition $(B,{\mathscr{P}})$ constructed on $B=\partial\nabla^{\check h}$ in §4. This is not particularly difficult to show. Again, for the proof and more general complete intersection case, see [@GBB]. Is the dual intersection complex the right affine manifold with singularities? The following theorem provides evidence for this, and gives the connection between this construction and the SYZ conjecture.
\[complextheorem\] Let ${{\mathcal{X}}}\rightarrow D$ be a toric degeneration, with dual intersection complex $(B,{\mathscr{P}})$. Then there is an open set $U\subseteq B$ such that $B\setminus U$ retracts onto the discriminant locus $\Gamma$ of $B$, and such that ${{\mathcal{X}}}_t$ contains an open subset ${\mathscr{U}}_t$ which is isomorphic as a complex manifold to a small deformation of a twist of $X_{\epsilon}(U)$, where $\epsilon=O(-1/\ln|t|)$. We will not be precise here about what we mean by small deformation; by twist, we mean a twist of the complex structure of $X_{\epsilon}(U)$ by a $B$-field. See [@Announce] for a much more precise statement; the above statement is meant to give a feel for what is true. The proof, along with much more precise statements, will eventually appear in [@tori]. If ${{\mathcal{X}}}\rightarrow D$ is a *polarized* toric degeneration, i.e. if there is a relatively ample line bundle ${\mathcal{L}}$ on ${{\mathcal{X}}}$, then we can construct another affine manifold with singularities and polyhedral decomposition $(\check B,\check{\mathscr{P}})$, which we call the *intersection complex*, as follows. For each irreducible component $X_i$ of ${{\mathcal{X}}}_0$, ${\mathcal{L}}|_{X_i}$ is an ample line bundle on a toric variety. Let $\sigma_i\subseteq N_{{\mathbb{R}}}$ denote the Newton polytope of this line bundle. There is then a one-to-one inclusion preserving correspondence between strata of ${{\mathcal{X}}}_0$ contained in $X_i$ and faces of $\sigma_i$. We can then glue together the $\sigma_i$’s in the obvious way: if $Y$ is a codimension one stratum of ${{\mathcal{X}}}_0$, it is contained in two irreducible components $X_i$ and $X_j$, and defines faces of $\sigma_i$ and $\sigma_j$. These faces are affine isomorphic because they are both the Newton polytope of ${\mathcal{L}}|_Y$, and we can then identify them in the canonical way. Thus we obtain a topological space $\check B$ with a polyhedral decomposition $\check{\mathscr{P}}$.
We give it an affine structure with singularities in a similar manner as before. Again, let $\Gamma$ be the union of simplices in ${\operatorname{Bar}}(\check{\mathscr{P}})$ not containing a vertex of $\check{\mathscr{P}}$ or intersecting the interior of a maximal cell of $\check{\mathscr{P}}$. Setting $\check B_0:=\check B\setminus\Gamma$, this again has an open cover $$\{W_{\sigma}|\hbox{$\sigma\in\check{\mathscr{P}}$ maximal}\}\cup\{W_v|\hbox{$v\in\check{\mathscr{P}}$ a vertex}\}.$$ As usual, as $W_{\sigma}$ is the interior of $\sigma$, it comes along with a canonical affine structure. On the other hand, a vertex $v$ of $\check{\mathscr{P}}$ corresponds to a dimension zero stratum $x$ of ${{\mathcal{X}}}_0$, and associated to $x$ is the polytope $\sigma_x\subseteq M_{{\mathbb{R}}}$. Let $\check\Sigma_x$ be the normal fan to $\sigma_x$ in $N_{{\mathbb{R}}}$. Then there is a one-to-one inclusion preserving correspondence between cones in $\check\Sigma_x$ and strata of ${{\mathcal{X}}}_0$ containing $x$. This correspondence allows us to define a chart $$\check\psi_v:W_v\rightarrow N_{{\mathbb{R}}}$$ which takes $W_v\cap\check\sigma$, for any $\check\sigma\in\check{\mathscr{P}}$ containing $v$ as a vertex, into the corresponding cone of $\check\Sigma_x$ in an integral affine linear way. This gives the manifestly integral affine structure on $\check B_0$, and hence defines the intersection complex $(\check B,\check{\mathscr{P}})$. Analogously to Theorem \[complextheorem\], we expect Let ${{\mathcal{X}}}\rightarrow D$ be a polarized toric degeneration, with intersection complex $(\check B,\check{\mathscr{P}})$. Let $\omega_t$ be a Kähler form on ${{\mathcal{X}}}_t$ representing the first Chern class of the polarization. 
Then there is an open set $\check U\subseteq \check B$ such that $\check B\setminus \check U$ retracts onto the discriminant locus $\Gamma$ of $\check B$, and such that ${{\mathcal{X}}}_t$ is a symplectic compactification of $\check X(\check U)$ for any $t$. I don’t expect this to be particularly difficult: it should be amenable to the techniques of Ruan [@RuanJSG], but this has not been carried out. The relationship between the intersection complex and the dual intersection complex can be made more precise by introducing multi-valued piecewise linear functions, in analogy with the multi-valued convex functions of Definition \[multivaluedconvex\]: Let $B$ be an affine manifold with singularities with polyhedral decomposition ${\mathscr{P}}$. Then a multi-valued piecewise linear function $\varphi$ on $B$ is a collection $\{(U_i,\varphi_i)\}$ of continuous functions $\varphi_i$ on an open cover $\{U_i\}$ such that $\varphi_i$ is affine linear on each cell of ${\mathscr{P}}$ intersecting $U_i$, and on $U_i\cap U_j$, $\varphi_i-\varphi_j$ is affine linear. Furthermore, for any $\sigma\in{\mathscr{P}}$, in a neighbourhood of each point $x\in U_i\cap {\operatorname{Int}}(\sigma)$, there is an affine linear function $\psi$ such that $\varphi_i-\psi$ is zero on $\sigma$. To explain this last condition, and to clarify additional structure on $B$, let us examine a property of the polyhedral decomposition ${\mathscr{P}}$ of $B$ when $(B,{\mathscr{P}})$ is a dual intersection complex. Consider any $p$-dimensional cell $\sigma\in{\mathscr{P}}$. This corresponds to an $(n-p)$-dimensional stratum $X_{\sigma}\subseteq{{\mathcal{X}}}_0$, and as such, it is a toric variety defined by a fan $\Sigma_{\sigma}$ in ${\mathbb{R}}^{n-p}$. Now for any vertex $v$ of $\sigma$, $X_{\sigma}$ is a toric stratum of the irreducible component $X_v$ of ${{\mathcal{X}}}_0$. Thus $\Sigma_{\sigma}$ can be obtained as a *quotient fan* of $\Sigma_v$.
In other words, there is a $p$-dimensional cone $K_{\sigma}$ of $\Sigma_v$ corresponding to $\sigma$ such that $$\Sigma_{\sigma}=\Sigma_v(K_{\sigma}):=\{(K+{\mathbb{R}}K_{\sigma})/{\mathbb{R}}K_{\sigma}| K\in\Sigma_v,K\supseteq K_{\sigma}\}.$$ In particular, there is an open neighbourhood $U_{v,\sigma}$ of ${\operatorname{Int}}(K_{\sigma})$ and an integral linear map $S_{v,\sigma}:U_{v,\sigma} \rightarrow {\mathbb{R}}^{n-p}$ such that $$\{S_{v,\sigma}^{-1}(K)| K\in \Sigma_{\sigma}\}= \{U_{v,\sigma}\cap K| K\in\Sigma_v, K\supseteq K_{\sigma}\}.$$ Let $U_{\sigma}$ be a small open neighbourhood of ${\operatorname{Int}}(\sigma)$ in $B$; if taken sufficiently small, the maps $S_{v,\sigma}$ can be viewed as being defined on open subsets of $U_{\sigma}$ and patch to give an integral affine submersion $S_{\sigma}:U_{\sigma}\rightarrow{\mathbb{R}}^{n-p}$, where $U_{\sigma}=\bigcup_{v\in\sigma} U_{v,\sigma}$ is an open neighbourhood of ${\operatorname{Int}}(\sigma)$. This map has the property that $$\{S_{\sigma}^{-1}(K)|K\in\Sigma_{\sigma}\}=\{U_{\sigma}\cap\tau| \tau\supseteq\sigma,\tau\in{\mathscr{P}}\}.$$ In general we call a polyhedral decomposition *toric* if for every $\sigma\in{\mathscr{P}}$ there is such an integral affine linear map $S_{\sigma}:U_{\sigma}\rightarrow {\mathbb{R}}^{n-p}$ from a neighbourhood $U_{\sigma}$ of ${\operatorname{Int}}(\sigma)$ and a fan $\Sigma_{\sigma}$ in ${\mathbb{R}}^{n-p}$ with the above property. (See [@PartI], Definition 1.22 for a perhaps too precise definition of toric polyhedral decompositions. The definition there is complicated by allowing cells to be self-intersecting, or equivalently, allowing irreducible components of ${{\mathcal{X}}}_0$ to be non-normal.) We can think of the fan $\Sigma_{\sigma}$ as being the fan structure of ${\mathscr{P}}$ transverse to $\sigma$ at a point in the interior of $\sigma$.
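To make the quotient fan concrete, here is a minimal example (my own illustration, not taken from [@PartI]):

```latex
% Let Sigma_v be the fan of P^2 in R^2, with rays generated by
% e_1, e_2 and -e_1-e_2, and take K_sigma = R_{>=0} e_1, the cone
% corresponding to an edge sigma of P with vertex v. The cones of
% Sigma_v containing K_sigma are K_sigma itself and the two maximal
% cones <e_1,e_2> and <e_1,-e_1-e_2>, so the quotient fan is
\Sigma_v(K_\sigma)
  =\bigl\{\,0,\ \mathbb{R}_{\geq 0}\,\bar e_2,\ \mathbb{R}_{\leq 0}\,\bar e_2\,\bigr\}
  \subseteq \mathbb{R}^2/\mathbb{R}e_1\cong\mathbb{R},
% the fan of P^1. This matches the geometry: the stratum X_sigma
% corresponding to the edge sigma is a toric P^1 inside X_v = P^2.
```

Here $\bar e_2$ denotes the image of $e_2$ in the quotient $\mathbb{R}^2/\mathbb{R}e_1$.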
The main point for a dual intersection complex is that this fan structure is determined by $X_{\sigma}$, and this is independent of the choice of the point in the interior of $\sigma$. Now let us return to piecewise linear functions. Suppose we are given a polarized toric degeneration ${{\mathcal{X}}}\rightarrow D$. We in fact obtain a multi-valued piecewise linear function $\varphi$ on the dual intersection complex $(B,{\mathscr{P}})$ as follows. Restricting to any toric stratum $X_{\sigma}$, ${\mathcal{L}}|_{X_{\sigma}}$ is determined completely by an integral piecewise linear function $\bar\varphi_{\sigma}$ on $\Sigma_{\sigma}$, well-defined up to a choice of linear function. Pulling back this piecewise linear function via $S_{\sigma}$ to $U_{\sigma}$, we obtain a collection of piecewise linear functions $\{(U_{\sigma},\varphi_{\sigma})|\sigma\in{\mathscr{P}}\}$. The fact that $({\mathcal{L}}|_{X_{\tau}})|_{X_{\sigma}}={\mathcal{L}}|_{X_{\sigma}}$ for $\tau\subseteq\sigma$ implies that on overlaps, $\varphi_{\sigma}$ and $\varphi_{\tau}$ differ by an affine linear function. So $\{(U_{\sigma}, \varphi_{\sigma})\}$ defines a multi-valued piecewise linear function. The last condition in the definition of multi-valued piecewise linear function then reflects the need for the function to be locally a pull-back of a function via $S_{\sigma}$ in a neighbourhood of $\sigma$. In fact, given any multi-valued piecewise linear function $\varphi$ on $(B,{\mathscr{P}})$ with ${\mathscr{P}}$ a toric polyhedral decomposition of $B$, $\varphi$ is determined by functions $\bar\varphi_{\sigma}$ on $\Sigma_{\sigma}$ for $\sigma\in{\mathscr{P}}$, via pull-back by $S_{\sigma}$. If ${\mathcal{L}}$ is ample, then the piecewise linear function determined by ${\mathcal{L}}|_{X_{\sigma}}$ is strictly convex. So we say a multi-valued piecewise linear function is *strictly convex* if $\bar\varphi_{\sigma}$ is strictly convex for each $\sigma\in{\mathscr{P}}$.
Now suppose we are given abstractly a triple $(B,{\mathscr{P}},\varphi)$ with $B$ an integral affine manifold with singularities with a toric polyhedral decomposition ${\mathscr{P}}$, and $\varphi$ a strictly convex multi-valued piecewise linear function on $B$. Then we construct the *discrete Legendre transform* $(\check B,\check{\mathscr{P}},\check\varphi)$ of $(B,{\mathscr{P}},\varphi)$ as follows. $\check B$ will be constructed by gluing together Newton polytopes. If we view, for $v$ a vertex of ${\mathscr{P}}$, the fan $\Sigma_v$ as living in $M_{{\mathbb{R}}}$, then the Newton polytope of $\bar\varphi_v$ is $$\check v=\{x\in N_{{\mathbb{R}}}|\langle x,y\rangle\ge-\bar\varphi_v(y) \quad\forall y\in M_{{\mathbb{R}}}\}.$$ There is a one-to-one order reversing correspondence between faces of $\check v$ and cells of ${\mathscr{P}}$ containing $v$. Furthermore, if $\sigma$ is the smallest cell of ${\mathscr{P}}$ containing two vertices $v$ and $v'$, then the corresponding faces of $\check v$ and $\check v'$ are integral affine isomorphic, as they are both isomorphic to the Newton polytope of $\bar\varphi_{\sigma}$. Thus we can glue $\check v$ and $\check v'$ along this common face. After making all these identifications, we obtain a cell complex $(\check B,\check{\mathscr{P}})$, which is really just the dual cell complex of $(B,{\mathscr{P}})$. Of course, we have some additional information, namely an affine structure on the interior of each maximal cell of $\check{\mathscr{P}}$. To give $\check B$ an integral affine structure with singularities, one proceeds as usual, using this affine structure along with an identification of a neighbourhood of each vertex of $\check{\mathscr{P}}$ with the normal fan of the corresponding maximal cell of ${\mathscr{P}}$. Finally, the function $\varphi$ has a discrete Legendre transform $\check\varphi$ on $(\check B,\check{\mathscr{P}})$. 
We have no choice but to define $\check\varphi$ in a neighbourhood of a vertex $\check\sigma\in \check{\mathscr{P}}$ dual to a maximal cell $\sigma\in{\mathscr{P}}$ to be a piecewise linear function whose Newton polytope is $\sigma$, i.e. $$\overline{\check\varphi}_{\check\sigma}(y) =-\inf\{\langle y,x\rangle| x\in\sigma\subseteq M_{{\mathbb{R}}}\}.$$ This gives $(\check B,\check{\mathscr{P}},\check\varphi)$, the discrete Legendre transform of $(B,{\mathscr{P}},\varphi)$. If $B$ is ${\mathbb{R}}^n$, then this coincides with the classical notion of a discrete Legendre transform. The discrete Legendre transform has several relevant properties: - The discrete Legendre transform of $(\check B,\check{\mathscr{P}},\check\varphi)$ is $(B,{\mathscr{P}},\varphi)$. - If we view the underlying topological spaces $B$ and $\check B$ as being identified by being the underlying space of dual cell complexes, then $\Lambda_{B_0}\cong \check\Lambda_{\check B_0}$ and $\check\Lambda_{B_0}\cong\Lambda_{\check B_0}$, where the subscript denotes which affine structure is being used to define $\Lambda$ or $\check\Lambda$. This hopefully makes it clear that the discrete Legendre transform is a suitable replacement for the duality provided to us by the Legendre transform of §2. Finally, it leads to what we may think of as an *algebro-geometric SYZ procedure*. In analogy with the procedure suggested in §5, we follow these steps: 1. We begin with a toric degeneration of Calabi-Yau manifolds ${{\mathcal{X}}}\rightarrow D$ with an ample polarization. 2. Construct $(B,{\mathscr{P}},\varphi)$ from this data, as explained above. 3. Perform the discrete Legendre transform to obtain $(\check B, \check{\mathscr{P}},\check\varphi)$. 4. Try to construct a polarized degeneration of Calabi-Yau manifolds $\check{{\mathcal{X}}}\rightarrow D$ whose dual intersection complex is $(\check B,\check{\mathscr{P}},\check\varphi)$. The discrete Legendre transform enables us to reproduce Batyrev duality.
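The local formula for $\overline{\check\varphi}_{\check\sigma}$ is easy to experiment with. The following Python sketch (helper names are mine, purely illustrative) evaluates it for a maximal cell $\sigma$ given by its vertex list; since $\sigma$ is a bounded lattice polytope and $x\mapsto\langle y,x\rangle$ is linear, the infimum is attained at a vertex.

```python
# Illustrative sketch (names are mine, not from the text): evaluate the
# local discrete Legendre transform
#     check_phi(y) = -inf{ <y, x> : x in sigma }
# for a maximal cell sigma given by its finite vertex list. The infimum
# of a linear function over a bounded polytope is attained at a vertex,
# so a minimum over vertices suffices.

def dot(y, x):
    return sum(yi * xi for yi, xi in zip(y, x))

def discrete_legendre(y, vertices):
    """-inf of <y, .> over the lattice polytope with the given vertices."""
    return -min(dot(y, x) for x in vertices)

# sigma = the unit square [0,1]^2 in M_R:
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
```

One checks, for instance, that $\overline{\check\varphi}((1,1))=0$ and $\overline{\check\varphi}((-1,-1))=2$; as a maximum of linear functions, the result is a convex piecewise linear function whose kinks record the vertices of $\sigma$, as the transform should.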
Returning to the construction of §4 and Example \[toricdegenexample2\], choosing a strictly convex piecewise linear function on $\tilde\Sigma'$ corresponding to a line bundle ${\mathcal{L}}$ induces a polarization of ${{\mathcal{X}}}'_{\Delta}$. This then gives us a strictly convex multi-valued piecewise linear function $\varphi_{{\mathcal{L}}}$ on $(B,{\mathscr{P}})$, hence a discrete Legendre transform $(\check B,\check{\mathscr{P}},\check\varphi_{{\mathcal{L}}})$. In [@GBB] I showed that this is the dual intersection complex associated to some choice of subdivision $\widetilde{\check\Sigma'}$ of $\widetilde {\check\Sigma}$ obtained by interchanging the roles of $\nabla$ and $\Delta$ in the construction of §4. As an exercise, you can check the following for yourself. If we take $\check h=\check\varphi$, and in addition define $$h=\varphi:N_{{\mathbb{R}}}\rightarrow {\mathbb{R}}$$ to take the value $1$ on the primitive generator of each one-dimensional cone of $\Sigma$, the normal fan to $\Delta$, then from §4 we obtain an affine structure with singularities on $B=\partial\nabla$, and completely symmetrically, using $h$, we also obtain such a structure on $\check B=\partial\Delta$. These manifolds come with polyhedral decompositions ${\mathscr{P}}$ and $\check{\mathscr{P}}$ consisting of all proper faces of $\nabla$ and $\Delta$ respectively. The anti-canonical polarizations on ${\mathbb{P}}_{\Delta}$ and ${\mathbb{P}}_{\nabla}$ induce multi-valued piecewise linear functions $\psi,\check\psi$ on $B$ and $\check B$ respectively. Then show that $(B,{\mathscr{P}},\psi)$ and $(\check B,\check{\mathscr{P}},\check\psi)$ are discrete Legendre transforms of each other. Thus Batyrev (and Batyrev-Borisov) duality is a special case of this construction.
The only step missing in this mirror symmetry algorithm is the last: \[reconstruct2\] Given $(B,{\mathscr{P}},\varphi)$, is it possible to construct a polarized toric degeneration ${{\mathcal{X}}}\rightarrow D$ whose dual intersection complex is $(B,{\mathscr{P}},\varphi)$? It is fairly obvious how to reconstruct the central fibre ${{\mathcal{X}}}_0$ of a degeneration from the data $(B,{\mathscr{P}},\varphi)$, and we will see this explicitly in §8. One could naively hope that this reducible variety has good deformation theory and can be smoothed. However, in general its deformation theory is ill-behaved. As initially observed in the normal crossings case by Kawamata and Namikawa in [@KN], one needs to put some additional structure on ${{\mathcal{X}}}_0$ before it has good deformation theory. This structure is a *log structure*, and introducing log structures allows us to study many aspects of mirror symmetry directly on the degenerate fibre itself. We shall do this in the next section, but first, let me address the question of how general this mirror symmetry construction might be: If $f:{{\mathcal{X}}}\rightarrow D$ is a large complex structure limit degeneration, then $f$ is birationally equivalent to a toric degeneration $f':{{\mathcal{X}}}'\rightarrow D$. The condition of being a large complex structure limit, as defined by Morrison in [@Morr], is stronger than that of being maximally unipotent. Why should I imagine something like this to be true? Well, fantasizing freely, we would expect that after choosing a polarization and Ricci-flat metrics on the fibres ${{\mathcal{X}}}_t$, we have a sequence $({{\mathcal{X}}}_t,g_t)$ converging to an affine manifold with singularities $B$. Now in general an affine manifold with singularities need not arise as the dual intersection complex of a toric degeneration, first of all because it need not have a toric polyhedral decomposition.
For example, even in two dimensions there are orbifold singularities (corresponding to singular elliptic fibres which are not semi-stable) which do not arise in dual intersection complexes of toric degenerations, yet can occur as the base of a special Lagrangian fibration on a K3 surface. However, the *general* base does arise as the dual intersection complex of a toric degeneration in the K3 case. The hope is that the condition of large complex structure limit forces the singularities of $B$ to be “sufficiently general” so that one can construct a nice toric polyhedral decomposition ${\mathscr{P}}$ on $B$, and from this construct a toric degeneration. Presumably, this toric degeneration will be, if picked correctly, birational to the original one. This argument of course is rather hand-wavy, but I believe it provides some moral expectation that there might be a large class of degenerations for which our method applies. I note that one can prove the conjecture in the case of K3 surfaces. We now come to the technical heart of the program laid out in [@PartI]. Some aspects of this program are quite technical, so the goal here is to explain the highlights of [@PartI] as simply as possible. Log structures ============== We first introduce the log structures of Fontaine-Illusie and Kato ([@Illu], [@K.Kato]). A log structure on a scheme (or analytic space) $X$ is a (unital) homomorphism $$\alpha_X:{\mathcal{M}}_X\rightarrow {\mathcal{O}}_X$$ of sheaves of (multiplicative and commutative) monoids inducing an isomorphism $\alpha_X^{-1}({\mathcal{O}}_X^{\times})\rightarrow {\mathcal{O}}_X^{\times}$. The triple $(X,{\mathcal{M}}_X,\alpha_X)$ is then called a [*log space*]{}. We often write the whole package as $X^{\dagger}$.
A morphism of log spaces $F:X^{\dagger}\rightarrow Y^{\dagger}$ consists of a morphism $\underline{F}:X\rightarrow Y$ of underlying spaces together with a homomorphism $F^{\#}:\underline{F}^{-1}({\mathcal{M}}_Y) \rightarrow{\mathcal{M}}_X$ commuting with the structure homomorphisms: $$\alpha_X\circ F^{\#}=\underline{F}^*\circ\alpha_Y.$$ The key examples: \[logexamples\] (1) Let $X$ be a scheme and $Y\subseteq X$ a closed subset of codimension one. Denote by $j:X\setminus Y\rightarrow X$ the inclusion. Then the inclusion $$\alpha_X:{\mathcal{M}}_X=j_*({\mathcal{O}}_{X\setminus Y}^{\times})\cap{\mathcal{O}}_X\rightarrow {\mathcal{O}}_X$$ of the sheaf of regular functions with zeroes contained in $Y$ is a log structure on $X$. This is called a *divisorial log structure* on $X$. \(2) A [*prelog structure*]{}, i.e. an arbitrary homomorphism of sheaves of monoids $\varphi:{\mathcal{P}}\rightarrow{\mathcal{O}}_X$, defines an associated log structure ${\mathcal{M}}_X$ by $${\mathcal{M}}_X=({\mathcal{P}}\oplus{\mathcal{O}}_X^{\times})/\{(p,\varphi(p)^{-1})|p\in \varphi^{-1}({\mathcal{O}}_X^{\times})\}$$ and $\alpha_X(p,h)=h\cdot\varphi(p)$. \(3) If $f:X\rightarrow Y$ is a morphism of schemes and $\alpha_Y:{\mathcal{M}}_Y \rightarrow{\mathcal{O}}_Y$ is a log structure on $Y$, then the prelog structure $f^{-1}({\mathcal{M}}_Y)\rightarrow{\mathcal{O}}_X$ defines an associated log structure on $X$, the [*pull-back log structure*]{}. \(4) In (1) we can pull back the log structure on $X$ to $Y$ using (3). Thus in particular, if ${{\mathcal{X}}}\rightarrow D$ is a toric degeneration, the inclusion ${{\mathcal{X}}}_0\subseteq{{\mathcal{X}}}$ gives a log structure on ${{\mathcal{X}}}$ and an induced log structure on ${{\mathcal{X}}}_0$. Similarly the inclusion $0\in D$ gives a log structure on $D$ and an induced one on $0$. 
Here ${\mathcal{M}}_0={\mathbb{C}}^{\times}\oplus{\mathbb{N}}$, where ${\mathbb{N}}$ is the (additive) monoid of natural (non-negative) numbers, and $$\alpha_0(h,n)=\begin{cases}h& n=0\\ 0&n\not=0.\end{cases}$$ $0^{\dagger}$ is usually called the standard log point. We then have log morphisms ${{\mathcal{X}}}^{\dagger}\rightarrow D^{\dagger}$ and ${{\mathcal{X}}}_0^{\dagger}\rightarrow 0^{\dagger}$. \(5) If $\sigma\subseteq M_{{\mathbb{R}}}={\mathbb{R}}^n$ is a convex rational polyhedral cone, $\sigma^{\vee}\subseteq N_{{\mathbb{R}}}$ the dual cone, let $P=\sigma^{\vee}\cap N$: this is a monoid. The affine toric variety defined by $\sigma$ can be written as $X={\operatorname{Spec}}{\mathbb{C}}[P]$. We then have a pre-log structure induced by the homomorphism of monoids $$P\rightarrow {\mathbb{C}}[P]$$ given by $p\mapsto z^p$. There is then an associated log structure on $X$. This is in fact the same as the log structure induced by $\partial X\subseteq X$, where $\partial X$ is the toric boundary of $X$, i.e. the union of toric divisors of $X$. If $p\in P$, then the monomial $z^p$ defines a map $f:X\rightarrow {\operatorname{Spec}}{\mathbb{C}}[{\mathbb{N}}]\quad (={\operatorname{Spec}}{\mathbb{C}}[t])$ which is a log morphism with the log structure on ${\operatorname{Spec}}{\mathbb{C}}[{\mathbb{N}}]$ induced similarly by ${\mathbb{N}}\rightarrow{\mathbb{C}}[{\mathbb{N}}]$. The fibre $X_0={\operatorname{Spec}}{\mathbb{C}}[P]/(z^p)$ is a subscheme of $X$, and there is an induced log structure on $X_0$, and a map $X_0^{\dagger} \rightarrow 0^{\dagger}$ as in (4). $f$ is an example of a *log smooth* morphism. Essentially all log smooth morphisms are étale locally of this form (if ${\mathbb{N}}$ is replaced by a more general monoid). See [@F.Kato] for details. Condition (4) of Definition \[toricdegen\] in fact implies that locally, away from $Z$, ${{\mathcal{X}}}^{\dagger}$ and ${{\mathcal{X}}}_0^{\dagger}$ are of the above form.
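The simplest instance of (5) is worth writing out (a standard example, stated here in the notation above):

```latex
% Take sigma = R_{>=0}^2 in M_R = R^2, so sigma^vee = R_{>=0}^2 in N_R
% and P = N^2, giving X = Spec C[P] = Spec C[x,y] with x = z^{(1,0)},
% y = z^{(0,1)}; the toric boundary is the union of the two coordinate
% axes. Choosing p = (1,1), the map f is given on rings by
f\colon \operatorname{Spec}\mathbb{C}[x,y]\longrightarrow
        \operatorname{Spec}\mathbb{C}[t],\qquad t\longmapsto z^{(1,1)}=xy,
% with central fibre X_0 = Spec C[x,y]/(xy), a normal crossings union
% of two lines. The underlying morphism of schemes fails to be smooth
% at the origin, but f is log smooth everywhere: log smoothness is
% tested on the monoid-theoretic data, not on the underlying schemes.
```

This is exactly the local model of a one-parameter normal crossings degeneration near a double point.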
So we should view ${{\mathcal{X}}}^{\dagger}\rightarrow D^{\dagger}$ as log smooth away from $Z$, and from the log point of view, ${{\mathcal{X}}}_0^{\dagger}$ can be treated much like a non-singular scheme away from $Z$. We will see this explicitly below when we talk about differentials. On a log scheme $X^{\dagger}$ there is always an exact sequence $$1\mapright{} {\mathcal{O}}_X^{\times}\mapright{\alpha^{-1}}{\mathcal{M}}_X\mapright{} \overline{{\mathcal{M}}}_X\mapright{}0,$$ where we write the quotient sheaf of monoids $\overline{{\mathcal{M}}}_X$ additively. We call $\overline{{\mathcal{M}}}_X$ the *ghost sheaf* of the log structure. I like to view $\overline{{\mathcal{M}}}_X$ as specifying the combinatorial information associated to the log structure. For example, if $X^{\dagger}$ is induced by the Cartier divisor $Y\subseteq X$ with $X$ normal, then the stalk $\overline{{\mathcal{M}}}_{X,x}$ at $x\in X$ is the monoid of effective Cartier divisors on a neighbourhood of $x$ supported on $Y$. \[ghostexercise\] Show that in Example \[logexamples\], (5), $\overline{{\mathcal{M}}}_{X,x}= P$ if $\dim\sigma=n$ and $x$ is the unique zero-dimensional torus orbit of $X$. More generally, $$\overline{{\mathcal{M}}}_{X,x}={\tau^{\vee}\cap N\over \tau^{\perp}\cap N} ={\operatorname{Hom}}_{monoid}(\tau\cap M,{\mathbb{N}}),$$ when $x\in X$ is in the torus orbit corresponding to a face $\tau$ of $\sigma$. In particular, $\tau$ can be recovered as ${\operatorname{Hom}}_{monoid}(\overline{{\mathcal{M}}}_{X,x},{\mathbb{R}}_{\ge 0}^+)$, where ${\mathbb{R}}_{\ge 0}^+$ is the additive monoid of non-negative real numbers. Another important fact for us is that if $f:Y\rightarrow X$ is a morphism with $X$ carrying a log structure, and $Y$ is given the pull-back log structure, then $\overline{{\mathcal{M}}}_Y=f^{-1}\overline{{\mathcal{M}}}_X$.
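Specializing the exercise to $X=\mathbb{A}^2$ with its toric boundary (my worked case, with $\sigma=\mathbb{R}^2_{\ge 0}$, so the faces $\tau$ are $0$, the two rays, and $\sigma$ itself) gives stalks that can be checked directly from the formula:

```latex
% Ghost sheaf stalks for X = A^2 with the divisorial log structure
% from the union of the two coordinate axes:
\overline{\mathcal{M}}_{X,x}\cong
\begin{cases}
\mathbb{N}^2 & x=(0,0)\hbox{ (the zero-dimensional orbit, $\tau=\sigma$),}\\
\mathbb{N}   & x\hbox{ in an axis minus the origin ($\tau$ a ray),}\\
0            & x\hbox{ off the boundary ($\tau=0$).}
\end{cases}
% At the origin the two generators of N^2 record the orders of
% vanishing along the two branches, consistent with the description of
% the stalk as effective Cartier divisors supported on Y.
```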
In the case that ${\mathcal{M}}_X$ is induced by an inclusion of $Y\subseteq X$, $\overline{{\mathcal{M}}}_X$ is supported on $Y$, so we can equate $\overline{{\mathcal{M}}}_X$ and $\overline{{\mathcal{M}}}_Y$, the ghost sheaves for the divisorial log structure on $X$ and its restriction to $Y$. Putting this together with Exercise \[ghostexercise\] and the definition of dual intersection complex, we see that given a toric degeneration ${{\mathcal{X}}}\rightarrow D$, the dual intersection complex *completely determines* the ghost sheaf $\overline{{\mathcal{M}}}_{{{\mathcal{X}}}}= \overline{{\mathcal{M}}}_{{{\mathcal{X}}}_0}$ off of $Z$. We in fact take the view that the log structure is in any case not particularly well-behaved along $Z$, and we always ignore it there. In fact, given a log structure ${\mathcal{M}}_{{{\mathcal{X}}}_0\setminus Z}$ on ${{\mathcal{X}}}_0\setminus Z$, it defines a push-forward log structure ${\mathcal{M}}_{{{\mathcal{X}}}_0}:=j_*{\mathcal{M}}_{{{\mathcal{X}}}_0\setminus Z}$, where $j:{{\mathcal{X}}}_0\setminus Z\rightarrow{{\mathcal{X}}}_0$ is the inclusion. There is an induced map $\alpha:{\mathcal{M}}_{{{\mathcal{X}}}_0}\rightarrow{\mathcal{O}}_{{{\mathcal{X}}}_0}$, as $j_*{\mathcal{O}}_{{{\mathcal{X}}}_0\setminus Z} ={\mathcal{O}}_{{{\mathcal{X}}}_0}$ because ${{\mathcal{X}}}_0$ is Cohen-Macaulay. Thus in what follows, if we have determined a log structure on ${{\mathcal{X}}}_0\setminus Z$, we just as well get a log structure on ${{\mathcal{X}}}_0$, and will not concern ourselves with the behaviour of this log structure along $Z$. All this gives the necessary hint for working backwards, to go from $(B,{\mathscr{P}})$ to ${{\mathcal{X}}}_0^{\dagger}$. Suppose we are given an integral affine manifold with singularities $B$ with *toric* polyhedral decomposition ${\mathscr{P}}$. At each vertex $v$ of ${\mathscr{P}}$, ${\mathscr{P}}$ locally looks like a fan $\Sigma_v$, defining a toric variety $X_v$.
For every edge $\omega \in{\mathscr{P}}$ with endpoints $v$ and $w$, $\omega$ defines a ray in both fans $\Sigma_v$ and $\Sigma_w$, hence toric divisors $D^v_{\omega} \subseteq X_v$, $D^w_{\omega}\subseteq X_w$. The condition that ${\mathscr{P}}$ is a toric polyhedral decomposition tells us that $D^v_{\omega}$ and $D^w_{\omega}$ are isomorphic toric varieties, and we can choose a torus equivariant isomorphism $s_{\omega}:D^v_{\omega}\rightarrow D^w_{\omega}$ for each edge $\omega$. If we choose these gluing maps to satisfy a certain compatibility condition on codimension two strata (we leave it to the reader to write down this simple compatibility condition), then we can glue together the $X_v$’s to obtain, in general, an algebraic space we write as $X_0(B,{\mathscr{P}},s)$, where $s=(s_{\omega})$ is the collection of gluing maps. (In [@PartI], we describe the gluing data in a slightly different, but equivalent, way). We call $s$ *closed gluing data*. This is how we construct a potential central fibre of a toric degeneration. Now $X_0(B,{\mathscr{P}},s)$ cannot be a central fibre of a toric degeneration unless it carries a log structure of the correct sort. There are many reasons this may not happen. First, if $s$ is poorly chosen, there may be zero-dimensional strata of $X_0(B,{\mathscr{P}},s)$ which do not have neighbourhoods locally étale isomorphic to the toric boundary of an affine toric variety; this is a minimum prerequisite. As a result, we have to restrict attention to closed gluing data induced by what we call *open gluing data*. Explicitly, each maximal cell $\sigma\in{\mathscr{P}}$ defines an affine toric variety $U(\sigma)$ given by the cone $C(\sigma)\subseteq M_{{\mathbb{R}}}\oplus {\mathbb{R}}$, assuming we view $\sigma\subseteq M_{{\mathbb{R}}}$ as a lattice polytope. Let $V(\sigma)\subseteq U(\sigma)$ be the toric boundary. 
It turns out, as we show in [@PartI], that a necessary condition for $X_0(B,{\mathscr{P}},s)$ to be the central fibre of a toric degeneration is that it is obtained by dividing out $\coprod_{\sigma\in{\mathscr{P}}_{\max}} V(\sigma)$ by an étale equivalence relation. In other words, we are gluing together the $V(\sigma)$’s to obtain an algebraic space, and those étale equivalence relations which produce algebraic spaces of the form $X_0(B,{\mathscr{P}},s)$ are easily determined. This is carried out in detail in [@PartI], §2. The construction there appears technically difficult because of the necessity of dealing with algebraic spaces, but is basically straightforward. The basic point is that if $\sigma_1,\sigma_2\in{\mathscr{P}}$ are two maximal cells, with $\sigma_1\cap\sigma_2=\tau$, then $\tau$ determines faces of the cones $C(\sigma_1)$ and $C(\sigma_2)$, hence open subsets $U_i(\tau)\subseteq U(\sigma_i)$, with toric boundaries $V_i(\tau)\subseteq V(\sigma_i)$. Now in general there is no *natural* isomorphism between $U_1(\tau)$ and $U_2(\tau)$: this is a problem when $\sigma_1\cap\sigma_2\cap\Gamma\not=\emptyset$, where $\Gamma$ is as usual the singular locus of $B$. However, crucially $V_1(\tau)$ and $V_2(\tau)$ are naturally isomorphic, and we can choose compatible equivariant isomorphisms to obtain *open gluing data*. Choosing open gluing data allows us to define the étale equivalence relation: we are just gluing any two sets $V(\sigma_1),V(\sigma_2)$ via the chosen isomorphism between $V_1(\tau)$ and $V_2(\tau)$. Any choice of open gluing data $s$ gives rise in this way to an algebraic space $X_0(B,{\mathscr{P}},s)$, and to any choice of open gluing data there is associated closed gluing data $s'$ such that $X_0(B,{\mathscr{P}},s)\cong X_0(B,{\mathscr{P}},s')$. The advantage of using open gluing data is that each $V(\sigma)$ for $\sigma\in{\mathscr{P}}_{\max}$ carries a log structure induced by the divisorial log structure $V(\sigma)\subseteq U(\sigma)$. 
Unfortunately, these log structures are not identified under the open gluing maps, precisely because of a lack of a natural isomorphism between the $U_i(\tau)$’s cited above. However, the ghost sheaves of the log structures are isomorphic. So the ghost sheaves $\overline{{\mathcal{M}}}_{V(\sigma)}$ glue to give a ghost sheaf of monoids $\overline{{\mathcal{M}}}_{X_0(B,{\mathscr{P}},s)}$. Summarizing what we have said so far: (this is a combination of results of [@PartI], §§2,4) Given $(B,{\mathscr{P}})$, if $s$ is closed gluing data, and ${{\mathcal{X}}}_0=X_0(B,{\mathscr{P}},s)$ is the central fibre of a toric degeneration ${{\mathcal{X}}}\rightarrow D$ with dual intersection complex $(B,{\mathscr{P}})$, then $s$ is induced by open gluing data and $\overline{{\mathcal{M}}}_{{{\mathcal{X}}}_0}|_{{{\mathcal{X}}}_0\setminus Z} \cong\overline{{\mathcal{M}}}_{X_0(B,{\mathscr{P}},s)}|_{{{\mathcal{X}}}_0\setminus Z}$. This is as far as we can get with the combinatorics. The next point is to attempt to construct ${\mathcal{M}}_{X_0(B,{\mathscr{P}},s)}$. The idea is that ${\mathcal{M}}_{X_0(B,{\mathscr{P}},s)}$ is an extension of $\overline{{\mathcal{M}}}_{X_0(B,{\mathscr{P}},s)}$ by ${\mathcal{O}}_{X_0(B,{\mathscr{P}},s)}^{\times}$, so we are looking for some subsheaf of the sheaf $${\mathcal{E} \!\text{\textit{xt}}}^1(\overline{{\mathcal{M}}}_{X_0(B,{\mathscr{P}},s)}^{{{\operatorname{gp}}}},{\mathcal{O}}_{X_0(B,{\mathscr{P}},s)}^{\times}).$$ Here the superscript ${{\operatorname{gp}}}$ denotes the Grothendieck group of the monoid. 
Any extension of $\overline{{\mathcal{M}}}_{X_0(B,{\mathscr{P}},s)}^{{{\operatorname{gp}}}}$ by ${\mathcal{O}}_{X_0(B,{\mathscr{P}},s)}^{\times}$ gives rise to a sheaf of groups ${\mathcal{M}}_{X_0(B,{\mathscr{P}},s)}^{{{\operatorname{gp}}}}$ surjecting onto $\overline{{\mathcal{M}}}_{X_0(B,{\mathscr{P}},s)}^{{{\operatorname{gp}}}}$, and the inverse image of $\overline{{\mathcal{M}}}_{X_0(B,{\mathscr{P}},s)}\subseteq\overline{{\mathcal{M}}}_{X_0(B,{\mathscr{P}},s)}^{{{\operatorname{gp}}}}$ is a sheaf of monoids ${\mathcal{M}}_{X_0(B,{\mathscr{P}},s)}$. Of course, one also needs a map $\alpha:{\mathcal{M}}_{X_0(B,{\mathscr{P}},s)}\rightarrow{\mathcal{O}}_{X_0(B,{\mathscr{P}},s)}$, and this complicates things a bit. To make a long story short, we can identify a subsheaf of extensions which yield genuine log structures. A section of this subsheaf determines a log structure on $X_0(B,{\mathscr{P}},s)$ with the correct ghost sheaf. However, this is not precisely what we want. What we really want is a log structure on $X_0(B,{\mathscr{P}},s)$ along with a log morphism $X_0(B,{\mathscr{P}},s)^{\dagger} \rightarrow 0^{\dagger}$ which is log smooth. (We will address the question of the bad set $Z\subseteq{{\mathcal{X}}}_0$ shortly.) We call such a structure a *log smooth structure* on $X_0(B,{\mathscr{P}},s)$. It turns out these structures are given by certain sections of ${\mathcal{E} \!\text{\textit{xt}}}^1(\overline{{\mathcal{M}}}^{{{\operatorname{gp}}}}_{X_0(B,{\mathscr{P}},s)}/ \bar\rho,{\mathcal{O}}_X^{\times})$, where $\bar\rho$ is the canonical section of $\overline{{\mathcal{M}}}_{X_0(B,{\mathscr{P}},s)}$ whose germ at ${\overline{{\mathcal{M}}}}_{X_0(B,{\mathscr{P}},s),\eta}= {\mathbb{N}}$ is $1$ for $\eta$ a generic point of an irreducible component of $X_0(B,{\mathscr{P}},s)$. 
So in fact, we can identify a subsheaf of ${\mathcal{E} \!\text{\textit{xt}}}^1(\overline{{\mathcal{M}}}^{{{\operatorname{gp}}}}_{X_0(B,{\mathscr{P}},s)}/\bar\rho,{\mathcal{O}}_X^{\times})$, which we call ${\mathcal{LS}}_{X_0(B,{\mathscr{P}},s)}$, whose sections determine a log structure on $X_0(B,{\mathscr{P}},s)$ *and* a log smooth morphism $X_0(B,{\mathscr{P}},s)^{\dagger}\rightarrow 0^{\dagger}$, i.e. a log smooth structure. The technical heart of [@PartI] is an explicit calculation of the sheaf ${\mathcal{LS}}_{X_0(B,{\mathscr{P}},s)}$. This is carried out locally in [@PartI], Theorem 3.22, where the sheaf is calculated on the (étale) open subsets $V(\sigma)$ of $X_0(B,{\mathscr{P}},s)$, and globally in [@PartI], Theorem 3.24. I will not state the precise results, but go into detail in a special case, which illustrates the most important features of the theory. Suppose $X_0(B,{\mathscr{P}},s)$ is normal crossings, i.e. every cell of ${\mathscr{P}}$ is affine isomorphic to a standard simplex. Then we have the local ${{\mathcal{T}}}^1$ sheaf, $${{\mathcal{T}}}^1={\mathcal{E} \!\text{\textit{xt}}}^1_{X_0(B,{\mathscr{P}},s)}( \Omega^1_{X_0(B,{\mathscr{P}},s)/k},{\mathcal{O}}_{X_0(B,{\mathscr{P}},s)}).$$ This is a line bundle on $S=Sing(X_0(B,{\mathscr{P}},s))$. Then one can show ${\mathcal{LS}}_{X_0(B,{\mathscr{P}},s)}$ is the ${\mathcal{O}}_S^{\times}$-torsor associated to ${{\mathcal{T}}}^1$. This brings us back to Friedman’s condition of $d$-semistability [@Friedman]. A variety with normal crossings is *$d$-semistable* if ${{\mathcal{T}}}^1\cong{\mathcal{O}}_S$. Thus we recover Kawamata and Namikawa’s result [@KN] showing that $X_0(B,{\mathscr{P}},s)$ carries a normal crossings log structure over $0^{\dagger}$ if and only if $X_0(B,{\mathscr{P}},s)$ is $d$-semistable. This is because, of course, the ${\mathcal{O}}_S^{\times}$-torsor associated to ${{\mathcal{T}}}^1$ has a section if and only if ${{\mathcal{T}}}^1\cong{\mathcal{O}}_S$. 
Now Theorem 3.24 of [@PartI] tells us that in general ${\mathcal{LS}}_{X_0(B,{\mathscr{P}},s)}$ is not a trivial ${\mathcal{O}}_S^{\times}$-torsor. The sheaf depends continuously on $s$, but discretely on monodromy of the singularities of $B$. Let us explain the latter point explicitly in the case $\dim B=2$. The irreducible components of $S$ are in one-to-one correspondence with one-dimensional cells of ${\mathscr{P}}$. If $\omega\in{\mathscr{P}}$ is such an edge, suppose it contains one singularity of $B$ such that $\Lambda$ has monodromy $\begin{pmatrix}1&n\\ 0&1\end{pmatrix}$, in a suitable basis, around a loop about the singularity. Then ${{\mathcal{T}}}^1$ restricted to the one-dimensional stratum $X_{\omega}\cong{\mathbb{P}}^1$ of $X_0(B,{\mathscr{P}},s)$ is ${\mathcal{O}}_{{\mathbb{P}}^1}(n)$. To make this statement completely accurate, one needs to define $n$ so that it is independent of the choice of basis and loop. To do this, one chooses a loop which is counterclockwise with respect to the orientation determined by the chosen basis of $\Lambda_b={{\mathcal{T}}}_{B,b}$, where $b\in B$ is the base-point of the loop. If all the $n$’s appearing are positive, then for some choices of gluing data $s$, we may hope to have a section $t$ of ${{\mathcal{T}}}^1$ which vanishes only at a finite set of points $Z$. If $Z$ does not contain a toric stratum (i.e. a triple point), then we obtain a log structure on $X_0(B,{\mathscr{P}},s)\setminus Z$ of the desired sort, hence a log structure on $X_0(B,{\mathscr{P}},s)$ (log smooth off of $Z$) by push-forward.
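As a consequence one can estimate the size of $Z$; this computation is ours, but it uses nothing beyond the statement above and the standard description of sections of line bundles on ${\mathbb{P}}^1$. Restricting $t$ to a stratum $X_{\omega}\cong{\mathbb{P}}^1$ whose edge carries the invariant $n$ gives $$t|_{X_{\omega}}\in\Gamma({\mathbb{P}}^1,{\mathcal{O}}_{{\mathbb{P}}^1}(n)),\qquad t|_{X_{\omega}}=\sum_{i=0}^n a_i x_0^i x_1^{n-i},$$ a binary form of degree $n$; if this form is not identically zero, it vanishes at exactly $n$ points of $X_{\omega}$, counted with multiplicity. So for a generic such section $t$, each edge $\omega$ contributes $n$ points to the zero locus $Z$.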
We then have the following result: in the situation of this example, with $\dim B=2$ and $$t\in\Gamma(X_0(B,{\mathscr{P}},s),{{\mathcal{T}}}^1)$$ a section vanishing on a finite set $Z$ not containing a triple point, there exists a smoothing ${{\mathcal{X}}}\rightarrow D$ of $X_0(B,{\mathscr{P}},s)$ such that the singular locus of ${{\mathcal{X}}}$ is $Z\subseteq {{\mathcal{X}}}_0=X_0(B,{\mathscr{P}},s)$, and the induced log morphism ${{\mathcal{X}}}_0^{\dagger} \rightarrow 0^{\dagger}$ coincides with the morphism $X_0(B,{\mathscr{P}},s)^{\dagger}\rightarrow 0^{\dagger}$ determined by $t$. The proof of this is a rather simple application of Friedman’s or Kawamata and Namikawa’s results. To apply these results, however, we need to deal with the singular set $Z$. This is done by normalizing $X_0(B,{\mathscr{P}},s)$, choosing to blow up one point in the inverse image of each point of $Z$, and then regluing along the proper transform of the conductor locus. This produces a $d$-semistable variety, in the language of Friedman, or a log smooth scheme, which can then be smoothed. (Such an approach seems difficult in higher dimensions.) On the other hand, if $n<0$ for some singular point of $B$, we run into problems, and there is in fact no smoothing of $X_0(B,{\mathscr{P}},s)$. This should not be surprising for the following reason. If $n=-1$, it turns out we would have to compactify the torus fibration $X(B_0)$ by adding a strange sort of $I_1$ fibre over such a singular point. An $I_1$ fibre is an immersed sphere, and the intersection multiplicity of the two sheets at the singular point of the fibre is $+1$ for an ordinary $I_1$ fibre. However, when the monodromy is given by $n=-1$, the intersection multiplicity is $-1$. This does not occur for a special Lagrangian $T^2$-fibration, so it is not surprising that we cannot construct a smoothing in this case. If $\dim B=2$ and $n>0$ for all singularities on $B$, then we say $B$ is *positive*.
One can generalize this notion of positive to higher dimensional $B$ with polyhedral decompositions, see [@PartI], Definition 1.54. Positivity of $B$ is a necessary condition for $X_0(B,{\mathscr{P}},s)$ to appear as the central fibre of a toric degeneration. All the examples of §4 are positive; this in fact follows from the convexity of reflexive polytopes, and the positivity condition can be viewed as a type of convexity statement. Passing back to the general case now, with no restriction on the dimension of $B$ or the shape of the cells of ${\mathscr{P}}$, it follows from [@PartI], Theorem 3.24, that ${\mathcal{LS}}_{X_0(B,{\mathscr{P}},s)}$ is a subsheaf of *sets* of a coherent sheaf we call ${\mathcal{LS}}^+_{{\mathrm{pre}},X_0(B,{\mathscr{P}},s)}$. This sheaf is a direct sum $\bigoplus_{\omega\in{\mathscr{P}}\atop\dim\omega=1} {\mathcal{N}}_{\omega}$, where ${\mathcal{N}}_{\omega}$ is a line bundle on the toric stratum of $X_0(B,{\mathscr{P}},s)$ corresponding to $\omega$. Furthermore, as in the two-dimensional normal crossings case, ${\mathcal{N}}_{\omega}$ is a semi-ample line bundle if $B$ is positive. A section $t\in\Gamma(X_0(B,{\mathscr{P}},s),{\mathcal{LS}}^+_{{\mathrm{pre}},X_0(B,{\mathscr{P}},s)})$ which is a section of ${\mathcal{LS}}_{X_0(B,{\mathscr{P}},s)}$ outside of the zero set $Z$ of $t$ determines a log smooth structure on $X_0(B,{\mathscr{P}},s)\setminus Z$. In particular, if $Z$ does not contain any toric stratum, we are in good shape. We then obtain a log morphism $X_0(B,{\mathscr{P}},s)^{\dagger}\rightarrow 0^{\dagger}$ which is log smooth away from $Z$. We call such a structure a *log Calabi-Yau space*. 
Let’s review: given data - $s$ open gluing data; - $t\in\Gamma(X_0(B,{\mathscr{P}},s),{\mathcal{LS}}^+_{{\mathrm{pre}},X_0(B,{\mathscr{P}},s)})$, with $t$ a section of ${\mathcal{LS}}_{X_0(B,{\mathscr{P}},s)}$ over $X_0(B,{\mathscr{P}},s)\setminus Z$ for some set $Z$ which does not contain any toric stratum of $X_0(B,{\mathscr{P}},s)$; we obtain $X_0(B,{\mathscr{P}},s)^{\dagger}\rightarrow 0^{\dagger}$. Conversely, we show in [@PartI] that if ${{\mathcal{X}}}\rightarrow D$ is a toric degeneration, then ${{\mathcal{X}}}_0^{\dagger}\rightarrow 0^{\dagger}$ is obtained in this way from the dual intersection complex $(B,{\mathscr{P}})$ for some choice of data $s$ and $t$. To complete this picture, it remains to answer the following question. \[smoothingconjecture\] Suppose $(B,{\mathscr{P}})$ is positive. 1. What are the possible choices of $s$ and $t$ which yield log Calabi-Yau spaces? 2. Given $X_0(B,{\mathscr{P}},s)^{\dagger}\rightarrow 0^{\dagger}$, when does it arise as the central fibre of a toric degeneration ${{\mathcal{X}}}\rightarrow D$? As we have sketched it, this question is now the refined version of our basic reconstruction problem, Question \[reconstruct2\]. The choice of the data $s$ and $t$ determines the moduli of log Calabi-Yau spaces arising from a given dual intersection complex. So far we haven’t even made the claim that this moduli space is non-empty, and for general choice of $(B,{\mathscr{P}})$, I do not know if this is the case or not, though it is non-empty if $\dim B=2$ or $3$. However, one would like a more explicit description of this moduli space in any event. In general the moduli space is a scheme, but it may be singular (an example is given in [@PartI], Example 4.28). Some additional hypotheses are necessary to solve this problem. To motivate the necessary hypotheses, let’s go back to §1, where we introduced the notion of simplicity. We saw that the basic topology of mirror symmetry works only when the fibration is simple.
So maybe we should expect the current construction to work better when we have simplicity. There is one technical problem with this: the definition of simplicity assumes the existence of a torus fibration $f:X\rightarrow B$. Instead, we want to define simplicity entirely in terms of $B$ itself. Unfortunately, the solution to this is rather technical, and produces a definition which is very difficult to absorb (Definition 1.60 of [@PartI]). Let us just say here that if $B$ is simple in this new sense and $X(B_0)\rightarrow B_0$ were compactified in a sensible manner to a topological torus fibration $f:X(B)\rightarrow B$, then $f$ would be simple in the sense of §1, provided that $\dim B\le 3$. In higher dimensions, this new simplicity does not necessarily imply the old simplicity; see the forthcoming Ph.D. thesis of Helge Ruddat. This occurs in situations where orbifold singularities arise in $X(B)$; as is well-known, such singularities cannot be avoided in higher dimensions. Once we accept this definition, life simplifies a great deal. Extraordinarily, the a priori very complicated moduli space of log Calabi-Yau spaces with a given dual intersection complex has a very simple description when $B$ is simple! One very difficult main result of [@PartI] (Theorem 5.4) is the following: given $(B,{\mathscr{P}})$ positive and simple, the set of log Calabi-Yau spaces with dual intersection complex $(B,{\mathscr{P}})$, modulo isomorphism preserving $B$, is $H^1(B,i_*\Lambda\otimes k^{\times})$. An isomorphism is said to preserve $B$ if it induces the identity on the dual intersection complex. So the moduli space is an algebraic torus (or a disjoint union of algebraic tori) of dimension equal to $\dim_k H^1(B,i_*\Lambda\otimes k)$. Note that this is the expected dimension predicted by the SYZ conjecture.
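As a sanity check in the simplest, one-dimensional case (our own illustration, not a statement from [@PartI]): take $B={\mathbb{R}}/{\mathbb{Z}}$ with $\Delta=\emptyset$, so that $i_*\Lambda\cong{\mathbb{Z}}$ is a trivial rank one local system. Then $$H^1(B,i_*\Lambda\otimes k^{\times})\cong H^1(S^1,{\mathbb{Z}})\otimes k^{\times}\cong k^{\times},$$ a one-dimensional algebraic torus. This matches the expectation: here $X_0(B,{\mathscr{P}},s)$ is a cycle of projective lines, as in the central fibre of a Tate curve degeneration, its smoothings are elliptic curves, and $\dim_k H^1(B,i_*\Lambda\otimes k)=1$ agrees with the one-parameter complex moduli of an elliptic curve.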
Indeed, if a smoothing of $X_0(B,{\mathscr{P}},s)^{\dagger}$ existed and were a topological compactification $X(B)$ of $X(B_0)$, with a simple torus fibration $f:X(B)\rightarrow B$ extending $f_0:X(B_0)\rightarrow B_0$, then $R^{n-1}f_{0*}{\mathbb{R}}\cong\Lambda_{{\mathbb{R}}}$, so by simplicity, $R^{n-1}f_*{\mathbb{R}}\cong i_*\Lambda_{{\mathbb{R}}}$. The discussion of §1 suggests that $\dim H^1(B,R^{n-1}f_*{\mathbb{R}})$ is $h^{1,n-1}$ of the smoothing, which is of course the dimension of the complex moduli space of the smoothing. This argument can be made rigorous by introducing *log differentials*. Let $\pi:X^{\dagger}\rightarrow S^{\dagger}$ be a morphism of logarithmic spaces. A *log derivation* on $X^{\dagger}$ over $S^{\dagger}$ with values in an ${\mathcal{O}}_X$-module ${\mathcal{E}}$ is a pair $({{\rm D}},{\operatorname{Dlog}})$, where ${{\rm D}}: {\mathcal{O}}_X\to {\mathcal{E}}$ is an ordinary derivation of $X/S$ and ${\operatorname{Dlog}}: {\mathcal{M}}^{{\operatorname{gp}}}_X\to {\mathcal{E}}$ is a homomorphism of abelian sheaves with ${\operatorname{Dlog}}\circ\pi^\#=0$; these fulfill the compatibility condition $${{\rm D}}\big(\alpha_X(m)\big)=\alpha_X(m)\cdot {\operatorname{Dlog}}(m)$$ for all $m\in{\mathcal{M}}_X$. We denote by $\Theta_{X^{\dagger}/S^{\dagger}}$ the sheaf of log derivations of $X^{\dagger}$ over $S^{\dagger}$ with values in ${\mathcal{O}}_X$. We set $\Omega^1_{X^{\dagger}/S^{\dagger}}={\operatorname{Hom}}_{{\mathcal{O}}_X}(\Theta_{X^{\dagger}/ S^{\dagger}},{\mathcal{O}}_X)$. This generalizes the more familiar notion of differentials with logarithmic poles along a normal crossings divisor. If $Y\subseteq X$ is a normal crossings divisor inducing a log structure on $X$, then $\Omega^1_{X^{\dagger}/k}$ is the sheaf of differentials with logarithmic poles along $Y$, and $\Omega^1_{Y^{\dagger}/k}$ is the restriction of this sheaf to $Y$.
In general, $\Omega^1_{X^{\dagger}/S^{\dagger}}$ is locally free if $\pi$ is log smooth. As a result, one can do deformation theory in the log category for log smooth morphisms (see [@F.Kato]). This is one of the principal reasons for introducing log geometry into our picture. If $X_0(B,{\mathscr{P}},s)^{\dagger}\rightarrow 0^{\dagger}$ is a log Calabi-Yau space, then the morphism is log smooth off of $Z$. Define $$\begin{aligned} \Theta^p_{X_0(B,{\mathscr{P}},s)}&:=&j_*{\bigwedge}^p\Theta_{(X_0(B,{\mathscr{P}},s)^{\dagger} \setminus Z)/0^{\dagger}}\\ \Omega^p_{X_0(B,{\mathscr{P}},s)}&:=&j_*{\bigwedge}^p\Omega^1_{(X_0(B,{\mathscr{P}},s)^{\dagger} \setminus Z)/0^{\dagger}}\end{aligned}$$ where $j:X_0(B,{\mathscr{P}},s)\setminus Z\rightarrow X_0(B,{\mathscr{P}},s)$ is the inclusion. Then one has the following result. \[hodgedecomp\] Suppose $(B,{\mathscr{P}})$ is positive and simple, and suppose we are given a log Calabi-Yau space $X_0(B,{\mathscr{P}},s)^{\dagger}\rightarrow 0^{\dagger}$ which occurs as the central fibre of a toric degeneration ${{\mathcal{X}}}\rightarrow D$ whose general fibre ${{\mathcal{X}}}_t$ is non-singular. Then for $q=0,1,n-1$ and $n$ with $n=\dim B$, we have isomorphisms $$\begin{aligned} H^p(B,i_*{\bigwedge}^q\Lambda\otimes k)&\cong&H^p(X_0(B,{\mathscr{P}},s),\Theta^q_{X_0(B,{\mathscr{P}},s)}) \cong H^p({{\mathcal{X}}}_t,\Theta^q_{{{\mathcal{X}}}_t})\\ H^p(B,i_*{\bigwedge}^q\check\Lambda\otimes k)&\cong&H^p(X_0(B,{\mathscr{P}},s),\Omega^q_{X_0(B,{\mathscr{P}},s)}) \cong H^p({{\mathcal{X}}}_t,\Omega^q_{{{\mathcal{X}}}_t})\end{aligned}$$ where $\Theta^q_{{{\mathcal{X}}}_t}$ and $\Omega^q_{{{\mathcal{X}}}_t}$ are the ordinary sheaves of holomorphic poly-vector fields and holomorphic differentials on a smooth fibre ${{\mathcal{X}}}_t$. The proof of this result, along with a number of other results, appears in [@PartII]. The result holds for all $q$ when additional hypotheses are assumed, essentially saying the mirror to ${{\mathcal{X}}}_t$ is non-singular.
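As a higher-dimensional plausibility check (stated informally here, with the caveat that we do not reproduce the affine-geometry computation): in the K3 case, $B$ is a two-sphere whose affine structure has, in the generic situation, $24$ focus-focus singular points, and one expects $$\dim_k H^1(B,i_*\Lambda\otimes k)=20=\dim H^1({{\mathcal{X}}}_t,\Theta_{{{\mathcal{X}}}_t}),$$ in agreement with the $20$-dimensional complex moduli of K3 surfaces. Recall that on a K3 surface the holomorphic symplectic form identifies $\Theta_{{{\mathcal{X}}}_t}$ with $\Omega^1_{{{\mathcal{X}}}_t}$, so this number also equals $h^{1,1}=20$, consistent with the self-mirror behaviour of K3.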
Note in particular that, since $\Lambda$ and $\check\Lambda$ are interchanged under the discrete Legendre transform, we get the interchange of ordinary Hodge numbers from this result. In the more general situation where the Calabi-Yaus arising are singular, one might speculate about the relationship between these groups, the actual Hodge numbers and stringy Hodge numbers. These issues are addressed in the forthcoming Ph.D. thesis of Helge Ruddat. The cone picture and the fan picture ==================================== This section is purely philosophical. In most of our discussion in §§7 and 8, we focused on the dual intersection complex, and in particular, focused on the question of constructing a degeneration from its dual intersection complex. Since our primary goal was to solve the reconstruction problem, Question \[reconstruct1\], and the dual intersection complex is related to the complex structure (Theorem \[complextheorem\]), it seems natural to focus on the dual intersection complex. We will see in the next section that this intuition may not always be correct. So far, the intersection complex only seemed to arise when talking about mirror symmetry. However, mirror symmetry instructs us to view both sides of the picture on the same footing. When we construct a degenerate Calabi-Yau space from a dual intersection complex, we say we are in the *fan picture*, while if we construct a degenerate Calabi-Yau space from an intersection complex, we say we are in the *cone picture*. More precisely, we have seen how given an integral affine manifold with singularities with toric polyhedral decomposition $(B,{\mathscr{P}})$, then an additional choice of open gluing data $s$ specifies a space $X_0(B,{\mathscr{P}},s)$, along with a sheaf of monoids ${\overline{{\mathcal{M}}}}_{X_0(B,{\mathscr{P}},s)}$. Some additional data may specify a log structure on $X_0(B,{\mathscr{P}},s)$ with this ghost sheaf.
The irreducible components of $X_0(B,{\mathscr{P}},s)$ are defined using fans, given by the fan structure of ${\mathscr{P}}$ at each vertex of ${\mathscr{P}}$. This is why we call this side the fan picture. On the other hand, given $(B,{\mathscr{P}})$ we can also construct a *projective* scheme $\check X_0(B,{\mathscr{P}},\check s)$ given suitable gluing data $\check s$. The irreducible components of this scheme are in one-to-one correspondence with the maximal cells of ${\mathscr{P}}$; given such a maximal cell $\sigma$, viewing it as a lattice polytope in ${\mathbb{R}}^n$ determines a projective toric variety, and $\check X_0(B,{\mathscr{P}}, \check s)$ is obtained by gluing together these projective toric varieties using the data $\check s$. This is not quite the same data as occurred in the fan picture, because we also need to glue the line bundles, and this is additional data. The reason for calling this side the cone picture is that each irreducible component can be described as follows. Given $\sigma\subseteq M_{{\mathbb{R}}}$, let $P_{\sigma}=C(\sigma)\cap (M\oplus{\mathbb{Z}})$. Then the corresponding projective toric variety is ${\operatorname{Proj}}{\mathbb{C}}[P_{\sigma}]$, where ${\mathbb{C}}[P_{\sigma}]$ is graded using the projection of $P_{\sigma}$ onto ${\mathbb{Z}}$. Hence the irreducible components and strata arise from cones over elements of ${\mathscr{P}}$. We summarize the duality between the cone and fan pictures: (Some restrictions may apply to gluing data on both sides in order for $\varphi$ to yield the desired data.) In particular, mirror symmetry interchanges discrete information about the log structure (i.e. ${\overline{{\mathcal{M}}}}_{X_0(B,{\mathscr{P}},s)}$) and discrete information about the polarization (i.e. the class of the line bundle on each irreducible component). Tropical curves =============== So far we have seen only the most elementary aspects of mirror symmetry emerge from this algebro-geometric version of SYZ, e.g.
the interchange of Hodge numbers. However, the real interest in this approach lies in hints that it will provide a natural explanation for rational curve counting in mirror symmetry. If we follow the philosophy of the previous section, we need to identify structures on affine manifolds with singularities which in one of the two pictures has to do with rational curves and in the other picture has to do with periods. I believe the correct structure to study is that of tropical curves on affine manifolds with singularities $B$. See [@Mik],[@Sturm] for an introduction to tropical curves in ${\mathbb{R}}^n$. Here, we can take $B$ to be tropical, rather than integral; hence the name. Let $B$ be a tropical affine manifold with singularities with discriminant locus $\Delta$. Let $G$ be a weighted, connected finite graph, with its set of vertices and edges denoted by $G^{[0]}$ and $G^{[1]}$ respectively, with weight function $w_{G}:G^{[1]} \rightarrow{\mathbb{N}}\setminus\{0\}$. A parametrized tropical curve in $B$ is a continuous map $h:G\rightarrow B$ satisfying the following conditions: 1. For every edge $E\subseteq G$, $h|_{{\operatorname{Int}}(E)}$ is an embedding, $h^{-1}(B_0)$ is dense in ${\operatorname{Int}}(E)$, and there is a section $u\in \Gamma({\operatorname{Int}}(E),h^*(i_*\Lambda))$ which is tangent to $h({\operatorname{Int}}(E))$ at every point of $h({\operatorname{Int}}(E))\cap B_0$. We choose this section to be primitive, i.e. not an integral multiple of another section of $h^*(i_*\Lambda)$. 2. For every vertex $v\in G^{[0]}$, let $E_1,\ldots,E_m\in G^{[1]}$ be the edges adjacent to $v$. Let $u_i$ be the section of $h^*(i_*\Lambda)|_{{\operatorname{Int}}(E_i)}$ promised by (1), chosen to point away from $v$. This defines germs $u_i\in h^*(i_*\Lambda)_v=(i_*\Lambda)_{h(v)}$. 1. If $h(v)\in B_0$, the following *balancing condition* holds in $\Lambda_{h(v)}$: $$\sum_{j=1}^m w_{G}(E_j)u_j=0.$$ 2. 
If $h(v)\not\in B_0$, then the following balancing condition is satisfied in $(i_*\Lambda)_{h(v)}$: $$\sum_{j=1}^m w_{G}(E_j)u_j=0 \mod (i_*\check\Lambda)_{h(v)}^{\perp}\cap (i_*\Lambda)_{h(v)}.$$ The latter group is interpreted as follows. Let $b\in B_0$ be a point near $h(v)$, and identify, via parallel transport along a path between $h(v)$ and $b$, the groups $(i_*\Lambda)_{h(v)}$ and $(i_*\check\Lambda)_{h(v)}$ with local monodromy invariant subgroups of $\Lambda_b$ and $\check\Lambda_b$ respectively. Then $(i_*\check\Lambda)^{\perp}_{h(v)}$ is a subgroup of $\Lambda_b$, and the intersection makes sense. It is independent of the choice of $b$ and path. So tropical curves behave away from the discriminant locus of $B$ much as the tropical curves of [@Mik],[@Sturm] do, but they may have legs terminating on the discriminant locus. As we are interested in the case that $B$ is compact, we do not want legs which go off to $\infty$. I warn the reader, however, that this definition is provisional, and the behaviour in (2) (b) may not be exactly what we want. Here we see a tropical elliptic curve, the solid dots being points of the discriminant locus. The legs terminating at these points must be in a monodromy invariant direction. ![image](tropical) Now let us connect this to the question of counting curves. In the situation of a degeneration, $\varphi:{{\mathcal{X}}}\rightarrow D$, it is natural to consider families of maps of curves: $$\xymatrix@C=30pt {{\mathcal{C}}\ar[r]^f\ar[d]_{\pi}&{{\mathcal{X}}}\ar[d]^{\varphi}\\ D\ar[r]_g&D}$$ Here $g$ may be a ramified covering, and $\pi$ is a flat morphism with reduced one-dimensional fibres. In the case of interest, $f|_{{\mathcal{C}}_t}:{\mathcal{C}}_t\rightarrow{{\mathcal{X}}}_t$ should be a stable map of curves for $t\not=0$. Let us assume that ${\mathcal{C}}_t$ is a non-singular curve for $t\not=0$. 
In the logarithmic context, it is then natural to put the log structure induced by the divisor ${\mathcal{C}}_0\subseteq{\mathcal{C}}$ on ${\mathcal{C}}$, and so get a diagram $$\xymatrix@C=30pt {{\mathcal{C}}^{\dagger}\ar[r]^f\ar[d]_{\pi}&{{\mathcal{X}}}^{\dagger}\ar[d]^{\varphi}\\ D^{\dagger}\ar[r]_g&D^{\dagger}}$$ of log morphisms. Restricting to the central fibre, we obtain a diagram $$\xymatrix@C=30pt {{\mathcal{C}}^{\dagger}_0\ar[r]^f\ar[d]_{\pi}&{{\mathcal{X}}}^{\dagger}_0\ar[d]^{\varphi}\\ 0^{\dagger}\ar[r]_g&0^{\dagger}}$$ This suggests that we should build up a theory of stable log maps and log Gromov-Witten invariants. This theory should generalize the theories developed by Li and Ruan [@LR] and Jun Li [@Li]. I will say little about this here, as it rapidly gets quite technical. There is work in progress by Siebert on this subject. This point of view has already been used in [@NS] for counting curves in toric varieties, so some more hints of this approach can be found there. Instead, I wish to sketch how such a diagram yields a tropical curve. To do so, consider a situation where $\pi$ is normal crossings, and the induced map ${\mathcal{C}}^{\dagger}_0\rightarrow{{\mathcal{X}}}_0^{\dagger}$ has no infinitesimal log automorphisms over $0^{\dagger}$. (This is the log analogue of the notion of a stable map.) Let $(B,{\mathscr{P}})$ be the dual intersection complex of the log Calabi-Yau space ${{\mathcal{X}}}_0^{\dagger}$. We can define the *dual intersection graph* of $f: {\mathcal{C}}_0^{\dagger}\rightarrow{{\mathcal{X}}}^{\dagger}$, which will be a parametrized tropical curve on $B$. I will treat here only the case when the image of $f$ is disjoint from the set $Z\subseteq{{\mathcal{X}}}$ of Definition \[toricdegen\], (4); otherwise there are some technicalities to worry about. First we build $G$. Let $C_1,\ldots,C_m$ be the irreducible components of ${\mathcal{C}}_0$. Assume these components are normal for ease of describing this construction.
Set $G^{[0]}=\{v_1,\ldots,v_m\}$. On the other hand, $G^{[1]}$ will contain an edge $\overline{v_iv_j}$ joining $v_i$ and $v_j$ whenever $C_i\cap C_j\not=\emptyset$. To define $h:G\rightarrow B$, we first describe the image of each vertex. Let $X_{\sigma_i}$ be the minimal stratum of ${{\mathcal{X}}}_0$ containing $f(C_i)$, where $\sigma_i\in{\mathscr{P}}$. Let $\eta_i$ be the generic point of $C_i$, $\xi_i=f(\eta_i)$. Then we have an induced map $f^{\#}:{\mathcal{M}}_{{{\mathcal{X}}}_0,\xi_i}\rightarrow{\mathcal{M}}_{{\mathcal{C}}_0,\eta_i}$, as $f$ is a log morphism. This induces a diagram on stalks of ghost sheaves $$\xymatrix@C=30pt {{\overline{{\mathcal{M}}}}_{{\mathcal{C}}_0,\eta_i}& {\overline{{\mathcal{M}}}}_{{{\mathcal{X}}}_0,\xi_i} \ar[l]_{\bar f^{\#}} \\ {\overline{{\mathcal{M}}}}_{0} \ar[u]^{\bar\pi^{\#}} &{\overline{{\mathcal{M}}}}_{0}\ar[l]^{\bar g^{\#}} \ar[u]_{\bar\varphi^{\#}}}$$ Now ${\overline{{\mathcal{M}}}}_0={\mathbb{N}}$ (see Example \[logexamples\], (4)) and ${\overline{{\mathcal{M}}}}_{{\mathcal{C}}_0,\eta_i}={\mathbb{N}}$ since $\pi$ is normal crossings. On the other hand, $\bar\pi^{\#}$ is the identity and if $g$ is a branched cover of degree $d$, then $g^{\#}$ is multiplication by $d$. By Exercise \[ghostexercise\], $${\overline{{\mathcal{M}}}}_{{{\mathcal{X}}}_0,\xi_i}={\operatorname{Hom}}_{monoid}(C(\sigma_i)\cap (M\oplus{\mathbb{Z}}),{\mathbb{N}}).$$ But $${\operatorname{Hom}}_{monoid}({\operatorname{Hom}}_{monoid}(C(\sigma_i)\cap(M\oplus{\mathbb{Z}}),{\mathbb{N}}),{\mathbb{N}})=C(\sigma_i) \cap(M\oplus{\mathbb{Z}}),$$ so $\bar f^{\#}$ is determined by an element $(m,r)$ of $C(\sigma_i) \cap (M\oplus{\mathbb{Z}})$. Now $$\bar f^{\#}(\bar\varphi^{\#}(1))=\bar f^{\#}(0,1)=\langle (m,r),(0,1)\rangle =r$$ while $$\bar\pi^{\#}(\bar g^{\#}(1))=\bar\pi^{\#}(d)=d.$$ Thus $r=d$, and $m/d\in\sigma_i$. We define $h(v_i)=m/d$. This is a point of $\sigma_i$ which is contained in $B$. 
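To see the normalization $h(v_i)=m/d$ in action, here is a toy computation of our own in the one-dimensional case. Take $\sigma_i=[0,1]\subseteq M_{{\mathbb{R}}}={\mathbb{R}}$, so that $$C(\sigma_i)\cap(M\oplus{\mathbb{Z}})=\{(a,r)\in{\mathbb{Z}}\oplus{\mathbb{Z}}\mid 0\le a\le r\},$$ the monoid generated by $(0,1)$ and $(1,1)$. An element $(m,r)$ with $r=d$ is then of the form $(a,d)$ with $0\le a\le d$, so $h(v_i)=a/d$, which indeed lies in $\sigma_i=[0,1]$. Note that as the degree $d$ of the base change $g$ grows, the possible images of the vertices form a finer and finer rational subdivision of the cell.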
If $C_i\cap C_j\not=\emptyset$, there is a minimal stratum $X_{\sigma_{i,j}}$ containing $C_i\cap C_j$. Of course $X_{\sigma_{i,j}}\subseteq X_{\sigma_i} \cap X_{\sigma_j}$. In particular, $\sigma_{i,j}$ contains $\sigma_i$ and $\sigma_j$. We take $h(\overline{v_i v_j})$ to be the straight line joining $h(v_i)$ and $h(v_j)$ inside $\sigma_{i,j}$. Furthermore, if $\sigma_{i,j}\subseteq M_{{\mathbb{R}}}$ is embedded as a lattice polytope, let $m_{ij}$ be a primitive lattice element parallel to $m_i-m_j$, and we take $w_{G}(\overline{v_iv_j})$ to be defined by the equation $$w_{G}(\overline{v_iv_j})m_{ij}=\#(C_i\cap C_j)(m_i-m_j).$$ One can then show that $h$ is a parametrized tropical curve; we do not give a proof here. The case where $X_{\sigma_i}$ is always an irreducible component of ${{\mathcal{X}}}_0$ is essentially covered in [@NS]. Instead, we’ll do another extremal case, which exhibits some interesting features of log geometry. Suppose a component $C_1$ of ${\mathcal{C}}_0$ and all components $C_2,\ldots,C_t$ intersecting $C_1$ are mapped by $f$ to a zero-dimensional stratum $X_{\sigma}$ of ${{\mathcal{X}}}_0$. Without loss of generality we can assume $${{\mathcal{X}}}_0=V(\sigma)={\operatorname{Spec}}{\mathbb{C}}[C(\sigma)^{\vee}\cap(N\oplus{\mathbb{Z}})]/(z^{(0,1)})$$ as defined in §8. Thus $h$ maps $v_1,\ldots,v_t$ into points $m_1/d,\ldots,m_t/d\in\sigma$. Let us understand why the balancing condition holds at $m_1/d$. Let $U\subseteq{\mathcal{C}}_0$ be an open neighbourhood of $C_1$ which only intersects $C_1,\ldots,C_t$, so $f$ is constant on $U$ as an ordinary morphism (but not as a log morphism). Restrict the log structure on ${\mathcal{C}}_0$ to $U$.
We have an exact sequence $$1\mapright{}{\mathcal{O}}_U^{\times}\mapright{}{\mathcal{M}}_U^{{{\operatorname{gp}}}}\mapright{p}{\overline{{\mathcal{M}}}}^{{{\operatorname{gp}}}}_U \mapright{} 0.$$ Taking global sections, we get $$1\mapright{}\Gamma(U,{\mathcal{O}}_U^{\times})\mapright{}\Gamma(U,{\mathcal{M}}_U^{{{\operatorname{gp}}}}) \mapright{p}\Gamma(U,{\overline{{\mathcal{M}}}}_U^{{{\operatorname{gp}}}})= {\mathbb{Z}}^t\mapright{q}{\operatorname{Pic}}U.$$ A section $s\in\Gamma(U,{\overline{{\mathcal{M}}}}_U^{{{\operatorname{gp}}}})$ defines an ${\mathcal{O}}_U^{\times}$-torsor $p^{-1}(s)$, whose class in the Picard group of $U$ is $q(s)$. It is an easy exercise in log geometry to show that if $s$ is the $i$th standard basis vector for ${\mathbb{Z}}^t$, then $q(s)={\mathcal{O}}_{{\mathcal{C}}}(-C_i)|_U$. Note $\deg {\mathcal{O}}_{{\mathcal{C}}}(-C_i)|_{C_1}=-\#(C_1\cap C_i)$ for $i=2,\ldots,t$ and $\deg {\mathcal{O}}_{{\mathcal{C}}}(-C_1)|_{C_1}=\sum_{i=2}^t\#(C_1\cap C_i)$ as $C_1\cdot{\mathcal{C}}_0=0$ in ${\mathcal{C}}$. Now observe that $f^{\#}$ acting on the sheaves of monoids induces a diagram $$\xymatrix@C=30pt {{\mathcal{O}}_{V(\sigma),x}^{\times}\oplus{\overline{{\mathcal{M}}}}^{{{\operatorname{gp}}}}_{V(\sigma),x}\ar[r]^{\cong}& {\mathcal{M}}^{{{\operatorname{gp}}}}_{V(\sigma),x}\ar[r]^{f^{\#}}\ar[d]&\Gamma(U,{\mathcal{M}}_U^{{{\operatorname{gp}}}})\ar[d]^p&\\ N\oplus{\mathbb{Z}}\ar[r]_{\cong}&{\overline{{\mathcal{M}}}}^{{{\operatorname{gp}}}}_{V(\sigma),x}\ar[r]_{\bar f^{\#}}&\Gamma(U, {\overline{{\mathcal{M}}}}_U^{{{\operatorname{gp}}}})\ar[r]_{\cong}&{\mathbb{Z}}^t.}$$ The map $\bar f^{\#}$, by construction, is given by $$(n,r)\in N\oplus{\mathbb{Z}}\mapsto (\langle (n,r),(m_i,d)\rangle)_{i=1,\ldots,t}.$$ On the other hand, in order for $\bar f^{\#}$ to lift to $f^{\#}$, the ${\mathcal{O}}_U^{\times}$-torsor $p^{-1}(\bar f^{\#}(n,r))$ must have a section for every $(n,r)\in N\oplus{\mathbb{Z}}$, i.e. must be trivial in the Picard group.
This implies $$\deg\bigotimes_{i=1}^t ({\mathcal{O}}_{{\mathcal{C}}}(-C_i)|_{C_1})^{\otimes \langle (n,r), (m_i,d)\rangle}=0,$$ or $$\sum_{i=2}^t(\# C_1\cap C_i)(\langle (n,r),(m_i,d)-(m_1,d)\rangle)=0$$ for all $(n,r)\in N\oplus{\mathbb{Z}}$. But this is equivalent to $$\sum_{i=2}^t (\# C_1\cap C_i) (m_i-m_1)=0,$$ which is the balancing condition. Following the logic of mirror symmetry, this suggests that tropical curves on the cone side should have to do with periods. It is only recently that an understanding of this has begun to emerge, and unfortunately, I do not have space or time to elaborate on this. Let me say that in [@BigPaper], Siebert and I have given a solution to Question \[smoothingconjecture\], (2), given some hypotheses on $X_0(B,{\mathscr{P}},s)^{\dagger}$, which are implied by simplicity of $B$. In this solution, we construct explicit deformations of a log Calabi-Yau space, order by order. Formally, our construction looks somewhat similar to that of Kontsevich and Soibelman [@KS2] for constructing non-Archimedean K3 surfaces from affine manifolds, and we apply a key lemma of [@KS2]. However, Kontsevich and Soibelman work on what we would call the fan side, while we work on the cone side. This may be surprising given that all of our discussions involving the strategy of building log Calabi-Yau spaces was done on the fan side. However, if we take the mirror philosophy seriously, and we want to see tropical curves appear in a description of a smoothing, we need to work on the cone side. It turns out to be extremely natural. In fact, all tropical rational curves play a role in our construction. Ultimately, all periods can be calculated in terms of the data involved in our construction, and in particular, there is a clear relationship between the period calculation and the existence of tropical rational curves on $B$. Once this is fully understood, we will finally have a firm geometric explanation of mirror symmetry. P.
Aspinwall, B. Greene and D. Morrison: *Calabi-Yau moduli space, mirror manifolds and spacetime topology change in string theory*, Nuclear Phys. [**B416**]{} (1994), 414–480. V. Batyrev: *Dual polyhedra and mirror symmetry for Calabi-Yau hypersurfaces in toric varieties.* J. Algebraic Geom. [**3**]{} (1994), 493–535. V. Batyrev, and L. Borisov: *On Calabi-Yau complete intersections in toric varieties,* in [*Higher-dimensional complex varieties (Trento, 1994)*]{}, 39–65, de Gruyter, Berlin, 1996. V. Batyrev, and M. Kreuzer: *Integral cohomology and mirror symmetry for Calabi-Yau 3-folds*, preprint, 2005, [math.AG/0505432]{}. O. Ben-Bassat, *Mirror symmetry and generalized complex manifolds*, preprint, 2004, [math.AG/0405303]{}. A. Bertram, *Another way to enumerate rational curves with torus actions,* Invent. Math. [**142**]{} (2000), 487–512. R. Castaño-Bernard and D. Matessi, *Lagrangian 3-torus fibration*, preprint, 2006, [arXiv:math/0611139]{}. P. Candelas, X. de la Ossa, P. Green, and L. Parkes, *A pair of Calabi-Yau manifolds as an exactly soluble superconformal theory,* Nuclear Phys. B [**359**]{} (1991), 21–74. S.-Y. Cheng and S.-T. Yau, *The real Monge-Ampère equation and affine flat structures*, in *Proceedings of the 1980 Beijing Symposium on Differential Geometry and Differential Equations*, Vol. 1, 2, 3 (Beijing, 1980), 339–370, Science Press, Beijing, 1982. R. Friedman: *Global smoothings of varieties with normal crossings*, Ann. Math. [**118**]{}, (1983) 75–114. K. Fukaya, *Multivalued Morse theory, asymptotic analysis and mirror symmetry,* in *Graphs and patterns in mathematics and theoretical physics*, 205–278, Proc. Sympos. Pure Math., [**73**]{}, Amer. Math. Soc., Providence, RI, 2005. A. Gathmann, *Relative Gromov-Witten invariants and the mirror formula,* Math. Ann. [**325**]{} (2003), 393–412. A. Givental, *Equivariant Gromov-Witten invariants,* Internat. Math. Res. Notices [**13**]{}, (1996), 613–663. E. 
Goldstein: *A construction of new families of minimal Lagrangian submanifolds via torus actions,* J. Differential Geom. [**58**]{} (2001), 233–261. M. Gross: *Special Lagrangian Fibrations I: Topology,* in: [*Integrable Systems and Algebraic Geometry*]{}, (M.-H. Saito, Y. Shimizu and K. Ueno eds.), World Scientific 1998, 156–193. M. Gross: *Special Lagrangian Fibrations II: Geometry,* in: [*Surveys in Differential Geometry*]{}, Somerville: MA, International Press 1999, 341–403. M. Gross: *Topological Mirror Symmetry*, Invent. Math. [**144**]{} (2001), 75–137. M. Gross: *Examples of special Lagrangian fibrations,* in *Symplectic geometry and mirror symmetry (Seoul, 2000)*, 81–109, World Sci. Publishing, River Edge, NJ, 2001. M. Gross: *Toric Degenerations and Batyrev-Borisov Duality*, Math. Ann. [**333**]{}, (2005) 645-688. M. Gross, and B. Siebert: *Affine manifolds, log structures, and mirror symmetry*, Turkish J. Math. [**27**]{} (2003), 33-60. M. Gross, and B. Siebert: *Torus fibrations and toric degenerations,* in preparation. M. Gross, and B. Siebert: *Mirror symmetry via logarithmic degeneration data I*, J. Diff. Geom. [**72**]{}, (2006). M. Gross, and B. Siebert: *From real affine geometry to complex geometry*, preprint, (2007), [arXiv:math/073822]{}. M. Gross, and B. Siebert: *Mirror symmetry via logarithmic degeneration data II*, preprint, (2007), [arXiv:0709.2290]{}. M. Gross, and P.M.H. Wilson: *Mirror symmetry via $3$-tori for a class of Calabi-Yau threefolds,* Math. Ann. [**309**]{} (1997), 505–531. M. Gross, and P.M.H. Wilson: *Large complex structure limits of $K3$ surfaces,* J. Differential Geom. [**55**]{} (2000), 475–546. M. Gualtieri, *Generalized complex geometry*, Oxford University DPhil thesis, [math.DG/0401221]{}. C. Haase, and I. Zharkov: *Integral affine structures on spheres and torus fibrations of Calabi-Yau toric hypersurfaces I*, preprint 2002, [math.AG/0205321]{}. C. Haase, and I. 
Zharkov: *Integral affine structures on spheres III: complete intersections*, preprint, [math.AG/0504181]{}. N. Hitchin: *The Moduli Space of Special Lagrangian Submanifolds*, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) [**25**]{} (1997), 503–515. N. Hitchin: *Generalized Calabi-Yau manifolds*, Q. J. Math. [**54**]{} (2003), 281–308. L. Illusie: *Logarithmic spaces (according to K. Kato)*, in [*Barsotti Symposium in Algebraic Geometry*]{} (Abano Terme 1991), 183–203, Perspect. Math. 15, Academic Press 1994. D. Joyce, *Singularities of special Lagrangian fibrations and the SYZ conjecture,* Comm. Anal. Geom. [**11**]{} (2003), 859–907. F. Kato: *Log smooth deformation theory*, Tohoku Math. J. [**48**]{} (1996), 317–354. K. Kato: *Logarithmic structures of Fontaine–Illusie*, in: [*Algebraic analysis, geometry, and number theory*]{} (J.-I. Igusa et al. eds.), 191–224, Johns Hopkins Univ. Press, Baltimore, 1989. Y. Kawamata, Y. Namikawa: *Logarithmic deformations of normal crossing varieties and smoothing of degenerate Calabi-Yau varieties*, Invent. Math. [**118**]{} (1994), 395–409. M. Kontsevich, *Enumeration of rational curves via torus actions*, in *The moduli space of curves (Texel Island, 1994)*, 335–368, Progr. Math., 129, Birkhäuser Boston, Boston, MA, 1995. M. Kontsevich, and Y. Soibelman: *Homological mirror symmetry and torus fibrations*, in: [*Symplectic geometry and mirror symmetry*]{} (Seoul, 2000), 203–263, World Sci. Publishing, River Edge, NJ, 2001. M. Kontsevich, and Y. Soibelman: *Affine structures and non-archimedean analytic spaces*, preprint, [math.AG/0406564]{}. N.C. Leung: *Mirror symmetry without corrections*, preprint 2000, [math.DG/0009235]{}. J. Li, *Stable morphisms to singular schemes and relative stable morphisms*, J. Differential Geom. [**57**]{} (2001), 509–578. A-M. Li, and Y. Ruan: *Symplectic surgery and Gromov-Witten invariants of Calabi-Yau 3-folds*, Invent. Math. [**145**]{} (2001), 151–218. B. Lian, K. Liu, S-T. Yau, *Mirror principle.
I,* Asian J. Math. [**1**]{} (1997), 729–763. D. Matessi, *Some families of special Lagrangian tori,* Math. Ann. [**325**]{} (2003), 211–228. R. McLean, *Deformations of calibrated submanifolds,* Comm. Anal. Geom. [**6**]{} (1998), 705–747. G. Mikhalkin, *Enumerative tropical algebraic geometry in ${\mathbb{R}}^2$,* J. Amer. Math. Soc. [**18**]{} (2005), 313–377. D. Morrison, *Compactifications of moduli spaces inspired by mirror symmetry,* in *Journées de Géométrie Algébrique d’Orsay (Orsay, 1992)*, Astérisque [**218**]{} (1993), 243–271. T. Nishinou, B. Siebert, *Toric degenerations of toric varieties and tropical curves*, preprint, [math.AG/0409060]{}, to appear in Duke Math. Journal. P. Petersen: *Riemannian geometry,* Graduate Texts in Mathematics, 171. Springer-Verlag, New York, 1998. J. Richter-Gebert, B. Sturmfels, and T. Theobald, *First steps in tropical geometry,* in *Idempotent mathematics and mathematical physics*, 289–317, Contemp. Math., [**377**]{}, Amer. Math. Soc., Providence, RI, 2005. E. Rødland: *The Pfaffian Calabi-Yau, its mirror, and their link to the Grassmannian $G(2,7)$,* Compositio Math. [**122**]{}, (2000) 135–149. W.-D. Ruan: *Lagrangian torus fibration and mirror symmetry of Calabi-Yau hypersurface in toric variety*, preprint 2000, math.DG/0007028. W.-D. Ruan: *Lagrangian torus fibration of quintic Calabi-Yau hypersurfaces. II. Technical results on gradient flow construction,* J. Symplectic Geom. [**1**]{} (2002), no. 3, 435–521. S. Schröer, B. Siebert: *Irreducible degenerations of primary Kodaira surfaces,* in Complex geometry (Göttingen, 2000), 193–222, Springer, Berlin, 2002. S. Schröer, B. Siebert: *Toroidal crossings and logarithmic structures*, preprint 2002, [math.AG/0211088]{}, to appear in Adv. Math. A. Strominger, S.-T. Yau, and E. Zaslow, *Mirror Symmetry is $T$-duality,* Nucl. Phys. [**B479**]{}, (1996) 243–259. [^1]: This work was partially supported by NSF grant 0505325
--- abstract: 'In heterogeneous environments, the diffusivity is not constant but changes with time. It is important to detect changes in the diffusivity from single-particle-tracking trajectories in experiments. Here, we devise a novel method for detecting the transition times of the diffusivity from trajectory data. A key idea of this method is the introduction of a characteristic time scale of the diffusive states, which is obtained by a fluctuation analysis of the time-averaged mean square displacements. We test our method in silico by using the Langevin equation with a fluctuating diffusivity. We show that our method can successfully detect the transition times of diffusive states and obtain the diffusion coefficient as a function of time. This method will provide a quantitative description of the fluctuating diffusivity in heterogeneous environments and can be applied to time series exhibiting state transitions.' author: - Takuma Akimoto - Eiji Yamamoto title: 'Detection of Transition Times from Single-particle-tracking Trajectories' --- The mean square displacement (MSD) is one of the most popular observables for quantifying the diffusivity. In Brownian motion, the MSD increases linearly with time, and the diffusivity can be quantified by the slope of the MSD, i.e., the diffusion coefficient. The diffusion coefficient is determined by the surrounding environment, including the viscosity of the medium, and by the properties of the diffusing particle, e.g., the shape of the Brownian particle. When there are no fluctuations in the properties of the surrounding environment and the diffusing particle, no intrinsic differences arise between the diffusivities for short-time and long-time measurements, apart from fluctuations of the diffusivity due to the finite measurement times.
In heterogeneous environments such as amorphous materials and living cells, diffusion often becomes anomalous; that is, the MSD does not increase linearly with time [@Scher1975; @Golding2006; @ManzoTorreno-PinaMassignanLapeyreLewensteinGarcia2015]. The local diffusivities in these environments are highly heterogeneous. These heterogeneities can be static or fluctuating. For example, the charge transport in amorphous materials [@Scher1975] as well as the diffusion of proteins on DNA [@Graneli2006; @Wang2006] can be modeled by a quenched trap model, where a random walker jumps in a static random energy landscape [@bouchaud90]. In other words, the characteristic time scale of a change in the energy landscape is much longer than that of the random walker. On the other hand, in supercooled liquids, mobile and immobile particles are distributed in space, and the diffusive properties (mobile or immobile) change with time, i.e., dynamic heterogeneity [@Yamamoto-Onuki-1998; @Yamamoto-Onuki-1998a; @Richert-2002]. Moreover, transmembrane proteins [@SergeBertauxRigneaultMarguet2008; @Weron2017] and membrane-bound proteins on biological membranes [@YamamotoAkimotoKalliYasuokaSansom2017] exhibit a temporally heterogeneous diffusivity. In these systems, the diffusivity for short-time measurements is intrinsically different from that for long-time measurements. One of the most important issues in heterogeneous environments is to uncover the local diffusivity from single-particle trajectories. However, there is a crucial difficulty in extracting the local diffusivity in both spatially and temporally heterogeneous environments: one cannot know the boundaries of regions sharing the same diffusivity in spatially heterogeneous environments, nor the transition times at which the diffusive states change in temporally heterogeneous environments.
In previous studies, maximum likelihood estimators were proposed to determine the dynamic changes of the diffusivity [@MontielCangYang2006; @KooMochrie2016], where the key idea is to detect the transition times when the diffusivity changes drastically. However, an empirical parameter is necessary to implement the method. The detection of the transition times is also important in state-transition processes, e.g., channel gating [@WangVafabakhshBorschelHaNichols2016], the conformational transition of proteins [@ChungMcHaleLouisEaton2012], the rotation of F1-ATPase [@NojiYasudaYoshidaKinositaJr1997], the fluorescence of quantum dots [@StefaniHoogenboomBarkai2009], and nanopore sensing of single molecules [@HoworkaSiwy2009]. A method for detecting transition times using only trajectories without prior knowledge and empirical parameters is desired in time-series analysis. Here, we devise an estimation method for characterizing the short-time diffusivity from trajectory data without knowing the transition times of the diffusive states. In our method, there are no parameters that are determined empirically. Thus, our method can be applied when many single-particle-tracking trajectories are obtained. We show that our method can successfully detect the transition times of the diffusivity and estimate the local diffusivity in the (overdamped) Langevin equation with a fluctuating diffusion coefficient. We assume that there are many trajectories for the same system and that the system can be described by the Langevin equation with a fluctuating diffusivity (LEFD): $$\frac{d\bm{r}(t)}{dt} = \sqrt{2 D(t)} \bm{w}(t), \label{LEFD}$$ where $\bm{r}(t)$ is the position of a particle at time $t$, $\bm{w}(t)$ is $d$-dimensional white Gaussian noise with $\langle \bm{w}(t)\rangle=0$ and $\langle w_i(t)w_j(t')\rangle = \delta_{ij}\delta (t-t')$, and $D(t)$ is the diffusion coefficient at time $t$, which is a stochastic process independent of $\bm{w}(t)$. 
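As an illustration, the LEFD of Eq. (\[LEFD\]) can be integrated numerically with an Euler–Maruyama scheme. The sketch below is not part of the method itself: the two-state Markovian choice of $D(t)$ with exponential sojourn times and all function names are illustrative assumptions.

```python
import numpy as np

def simulate_lefd(n_steps, dt, d_states, mean_sojourn, dim=2, rng=None):
    """Euler-Maruyama integration of dr/dt = sqrt(2 D(t)) w(t).

    D(t) is modeled here as a jump process that draws a new diffusion
    coefficient from `d_states` after exponentially distributed sojourn
    times (an illustrative choice; the text allows general D(t))."""
    rng = np.random.default_rng() if rng is None else rng
    D = np.empty(n_steps)
    t = 0
    while t < n_steps:
        tau = max(1, int(rng.exponential(mean_sojourn) / dt))
        D[t:t + tau] = rng.choice(d_states)
        t += tau
    # Each increment is sqrt(2 D dt) times an independent standard normal.
    dr = np.sqrt(2.0 * D[:, None] * dt) * rng.standard_normal((n_steps, dim))
    r = np.concatenate([np.zeros((1, dim)), np.cumsum(dr, axis=0)])
    return r, D

r, D = simulate_lefd(10000, 0.01, d_states=[0.1, 1.0], mean_sojourn=10.0)
```

Trajectories generated this way serve as in-silico test data of the kind used below.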
Although we do not assume any condition on $D(t)$, i.e., the diffusion coefficient may be non-Markovian and depend on the position $\bm{r}(t)$, we assume that the variance of $D(t)$ is sufficiently large. In particular, it is much greater than the variance of the diffusion coefficients obtained by the time-averaged MSD defined by Eq. (\[tamsd\]) when the measurement time $t$ is the same as the characteristic time scale of the diffusive state. In our setting, we do not know (1) the number of diffusive states and (2) the time scales of the diffusive states. This is because we do not know the transition times when a diffusive state changes in single-particle-tracking trajectories. This is one of the most difficult issues when estimating the fluctuating diffusivity. To overcome this difficulty, we apply a fluctuation analysis of the time-averaged MSDs to obtain a characteristic time scale of the diffusive states. The time-averaged MSD is defined as $$\overline{\delta^{2}(\Delta;t)} = \frac{1}{t-\Delta} \int_0^{t-\Delta} \{\bm{r}(t'+\Delta) - \bm{r}(t')\}^2dt' . \label{tamsd}$$ To characterize the fluctuations in the time-averaged MSDs, we use the relative standard deviation (RSD) of the time-averaged MSDs, defined as $$\Sigma(t;\Delta) \equiv \frac{\sqrt{\langle [\overline{\delta^{2}(\Delta;t)} - \langle \overline{\delta^{2}(\Delta;t)} \rangle]^{2} \rangle}} {\langle \overline{\delta^{2}(\Delta;t)} \rangle},$$ as a function of the measurement time $t$ ($\Delta$ is fixed). This type of quantity is widely used to investigate the ergodic property [@He2008; @Deng2009] as well as the characteristic time of the system [@Akimoto2011; @Miyaguchi2011a; @Uneyama2012; @Uneyama2015; @Miyaguchi2016]. In fact, the RSD analysis provides the longest relaxation time in the reptation model, which is a model of entangled polymers [@Uneyama2012; @Uneyama2015].
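A minimal discrete estimator of the time-averaged MSD of Eq. (\[tamsd\]) and of the RSD over an ensemble of trajectories might look as follows. This is a sketch under stated conventions: the lag $\Delta$ is given in units of the sampling step, and the function names are illustrative.

```python
import numpy as np

def ta_msd(r, k_delta):
    """Discrete time-averaged MSD at lag Delta = k_delta samples:
    the mean squared displacement over all start times in the trajectory."""
    disp = r[k_delta:] - r[:-k_delta]
    return np.mean(np.sum(disp**2, axis=1))

def rsd(trajectories, k_delta):
    """Relative standard deviation Sigma(t; Delta) of the TA-MSDs over an
    ensemble of trajectories sharing the same measurement time t."""
    msds = np.array([ta_msd(r, k_delta) for r in trajectories])
    return msds.std() / msds.mean()
```

To trace $\Sigma(t;\Delta)$ as a function of $t$, one truncates every trajectory at measurement time $t$ before computing the TA-MSDs and repeats for increasing $t$.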
When $D(t)$ is a stationary stochastic process, i.e., the characteristic time of the stochastic process $D(t)$ is finite, the general formula for the RSD is derived as [@Uneyama2015] $$\label{rsd_square_final} \Sigma^{2}(t;\Delta) \approx \frac{2}{t^{2}} \int_{0}^{t} ds \, (t - s) \psi_{1}(s) ,$$ where $\psi_1(t)$ is the normalized correlation function of the diffusion coefficient, i.e., $\psi_1(t) \equiv (\langle D(t) D(0) \rangle - \langle D \rangle^2)/\langle D \rangle^2$. Thus, if the relaxation time of the system is $\tau$ (roughly speaking, the correlation function decays as $\psi_1(t) \propto e^{-t/\tau}$), the asymptotic form of the RSD becomes $$\label{rsd_square_asymptotic} \Sigma^{2}(t;\Delta) \approx \begin{cases} \displaystyle \psi_{1}(0) & (t \ll \tau) , \\ \displaystyle \frac{2}{t} \int_{0}^{\infty} dv \, \psi_{1}(v) & (t \gg \tau) . \end{cases}$$ Therefore, $\tau$ is obtained as the crossover time from the plateau to the $t^{-1/2}$ decay in the RSD. In particular, when the correlation function decays exponentially, the crossover time $\tau_c$ in the RSD is given by $\tau_c \cong 2\tau$. From many single-particle-tracking trajectories, one can calculate the time-averaged MSDs. Computing the mean and standard deviation of the time-averaged MSDs over the ensemble then gives the RSD. In this way, one can obtain the characteristic time scale of $D(t)$ from single-particle-tracking trajectories. Here, we devise a novel method to detect the changes in states from a single-particle-tracking trajectory. First, we define the time-averaged diffusion coefficient (TDC) at time $t$ by $$D(t;\Delta,T) \equiv \frac{ \int_t^{t+T-\Delta}\{\bm{r}(t'+\Delta) - \bm{r}(t')\}^2dt'}{2d\Delta(T-\Delta)}.$$ There are two parameters, $\Delta$ and $T$, in the TDC. We set $\Delta$ as the minimal time step of the trajectory; thus, it is not necessary to tune this parameter. On the other hand, we have to tune the parameter $T$ by introducing a tuning parameter $a$ as $T = a \tau_c$.
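The TDC defined above can be estimated from discretely sampled positions as the mean squared displacement at lag $\Delta$ within a sliding window of length $T$, divided by $2d\Delta$. A sketch of this discrete estimator (window indices in units of the sampling step; names illustrative):

```python
import numpy as np

def tdc(r, i0, k_delta, k_T, dt, dim=2):
    """Discrete estimator of D(t; Delta, T): the mean squared displacement
    at lag Delta = k_delta*dt over the window of k_T*dt starting at sample
    i0, normalized by 2 * d * Delta."""
    seg = r[i0 : i0 + k_T + 1]
    disp = seg[k_delta:] - seg[:-k_delta]
    return np.mean(np.sum(disp**2, axis=1)) / (2 * dim * k_delta * dt)
```

Sliding `i0` along the trajectory yields the TDC time series that is compared against $D_{\rm eff}$ below.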
Since $\tau_c$ is of the same order as the system’s characteristic time, $a$ can be smaller than one. In what follows, we use $a=0.1$. Second, using the effective diffusion coefficient $D_{\rm eff}$, which is obtained by the ensemble average of the time-averaged MSD, i.e., $\langle \overline{\delta^{2}(\Delta;t)} \rangle = 2dD_{\rm eff}\Delta$, we define the crossing points $c_i$ as the points at which the TDC crosses $D_{\rm eff}$, i.e., $D(c_i; \Delta,T) < D_{\rm eff}$ and $D(c_i +\Delta t; \Delta,T) > D_{\rm eff}$, or $D(c_i; \Delta,T) > D_{\rm eff}$ and $D(c_i +\Delta t; \Delta,T) < D_{\rm eff}$, satisfying $c_{i+1} - c_i > T$, where $\Delta t$ is the time step of the trajectories (see Fig. \[LEFD\_two\_states\]A). Note that the crossing points are not the exact points of changes in the diffusive states, because different diffusive states coexist in the time window $[t, t+T-\Delta]$ of $D(t;\Delta,T)$. Therefore, we define the transition time as $t_i \equiv c_i + T/2$. The term $T/2$ is not exact when the threshold is not at the middle of two successive diffusive states. If only one transition occurs in the time interval $[t_i, t_{i+1}-\Delta]$, which is a physically reasonable assumption, the transition times represent the points of changes in the diffusive states. Note that some transition times of the diffusive states will still be missing. To correct the transition times obtained above, we test whether successive diffusive states are significantly different. Since we know the transition times of the diffusive states, we can estimate the diffusion coefficient in the time interval $[t_i, t_{i+1}]$: the diffusion coefficient of the $i$th diffusive state is given by $$\overline{D}_i \equiv \frac{ \int_{t_i}^{t_{i+1}-\Delta}\{\bm{r}(t'+\Delta) - \bm{r}(t')\}^2dt'}{2d\Delta(t_{i+1}-t_i-\Delta)}.
\label{DC_i}$$ Since we consider a situation in which $T$ is sufficiently large ($T/\Delta t >30$), fluctuations of $\overline{D}_i$ can be approximated as Gaussian by the central limit theorem. According to a statistical test, the $i$th and $j$th states can be considered the same state if there exists $D$ such that both the $k=i$ and $k=j$ states satisfy $$D - \sigma_k Z \leq \overline{D}_k \leq D + \sigma_k Z, \label{statistical_test}$$ where $\sigma_{k}^2$ is the variance of the TDC with the time window $t_{k+1}-t_k$ and the diffusion coefficient $D$, which is given by $\sigma_{k}^2\equiv \frac{4D^2 \Delta }{3(t_{k+1}-t_k)}$, and $Z$ is determined by the level of statistical significance, e.g., $Z=1.96$ when the $p$-value is 0.05. Therefore, the transition times can be corrected if the two successive diffusion states are the same. We repeat this procedure: Eq. (\[DC\_i\]) will be calculated again after correcting the transition times $t_i$, and the above test will be repeated to correct the transition times. Furthermore, one can improve the transition times by changing the thresholds around the transition times. The detailed procedure and flowchart of our method are given in the Supplemental Material [@SM]. Here, we test our method with the trajectories of three different LEFD models, where the number of diffusive states is two, three, and uncountable. The crossover times in the RSD are finite for all models. In the Langevin equation with the two-state diffusivity, Fig. \[LEFDtwo\]B shows the diffusion coefficient obtained by our method. Almost all diffusive states can be classified into two states according to the condition (\[statistical\_test\]) with $Z=1.96$. Moreover, the deviations in the transition times from the actual transition times are within 0.25. Thus, we successfully extract the underlying diffusion process $D(t)$ from a single trajectory after obtaining the characteristic time scale of the diffusive states.
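The crossing-point detection and the state-merging test of Eq. (\[statistical\_test\]) can be sketched as below. One simplification is an assumption of this sketch, not of the text: $\sigma_k$ is evaluated at the estimated $\overline{D}_k$ rather than at the unknown common $D$, so the test reduces to an overlap of confidence intervals. All names are illustrative, and times are measured in samples.

```python
import numpy as np

def crossing_points(D_t, D_eff, k_T):
    """Indices where the TDC time series crosses D_eff, keeping only
    crossings separated by more than the window length k_T (in samples)."""
    above = D_t > D_eff
    raw = np.nonzero(above[1:] != above[:-1])[0]
    points, last = [], float("-inf")
    for c in raw:
        if c - last > k_T:
            points.append(int(c))
            last = c
    return points

def same_state(D_i, D_j, n_i, n_j, k_delta, Z=1.96):
    """Merge test for two estimated states: with sigma_k^2 =
    4 D^2 Delta / (3 (t_{k+1} - t_k)), states are merged if their
    Z-sigma confidence intervals overlap (a simplified version of the
    'exists D' condition in the text)."""
    def interval(Dk, nk):
        s = np.sqrt(4.0 * Dk**2 * k_delta / (3.0 * nk))
        return Dk - Z * s, Dk + Z * s
    lo_i, hi_i = interval(D_i, n_i)
    lo_j, hi_j = interval(D_j, n_j)
    return max(lo_i, lo_j) <= min(hi_i, hi_j)
```

After merging, the per-state estimates $\overline{D}_i$ are recomputed and the test is iterated, mirroring the correction loop described above.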
We introduce different relaxation times in the two sojourn-time distributions (we use the exponential distribution for both sojourn-time distributions) and examine the effects of the tuning parameter. Figure \[tdc\_two\_state\] shows the TDCs for different tuning parameters $a=1$, $0.1$, and $0.01$, corresponding to $T=16$, $1.6$, and $0.16$, respectively. As clearly seen, when the tuning parameter is small, the fluctuations in the TDC become large. Therefore, inaccurate transition times may be detected when $a$ is too small. On the other hand, the actual transition times may not be detected when $a$ is too large. In fact, the transition times around $t=80$ cannot be detected in the case of the green dotted line ($a=1$). As a result, the tuning parameter can be set to $a=0.1$ or between $0.1$ and $0.01$. Next, we analyze the LEFD with the three-state diffusivity. The sojourn-time distributions are exponential distributions, and their relaxation times are the same in each state ($\tau=10$). For the three-state LEFD, one can obtain several diffusive states from a single trajectory by our method after calculating the crossover time in the RSD using many trajectories. Figure \[tdc\_LEFD3\] shows the diffusion coefficient obtained by our method, where we revised the threshold using the procedures described in the Supplemental Material [@SM]. As shown in Fig. 3A, the transition times are correctly detected. Moreover, almost all diffusive states belong to the three diffusive states using $Z=1.96$, and the distribution of the estimated diffusion coefficients has three peaks corresponding to the exact diffusion coefficients (see Fig. 3B). Finally, we apply our method to a diffusion process with an uncountable number of diffusive states. In particular, we use the annealed transit time model (ATTM) [@MassignanManzoTorreno-PinaGarcia-ParajoLewensteinLapeyre2014; @AkimotoYamamoto2016a].
The ATTM was proposed to describe heterogeneous diffusion in living cells [@MassignanManzoTorreno-PinaGarcia-ParajoLewensteinLapeyre2014; @ManzoTorreno-PinaMassignanLapeyreLewensteinGarcia2015]. The diffusion process is described by the LEFD, where $D(t)$ is coupled to the sojourn time. When the sojourn time is $\tau$, the diffusion coefficient is given by $D_{\tau} = \tau^{\sigma - 1}$ $(0 < \sigma < 1)$. Here, we assume that the sojourn-time distribution follows an exponential distribution $\rho(\tau) \sim \exp (- \tau / \langle \tau \rangle ) / \langle \tau \rangle$. One can obtain $\tau_c$ by the RSD analysis [@AkimotoYamamoto2016a]. Figure \[ATTM\]A shows the diffusion coefficient obtained by our method. Because the variance of $D(t)$ is not large, the transition times are detected less accurately than in the other two models. However, the transition times for the highly diffusive states can be detected correctly. Moreover, Fig. \[ATTM\]B shows the relation between the obtained diffusion coefficients and the sojourn times, which exhibits the power-law relation $D_\tau =\tau^{\sigma - 1}$. Therefore, our method can also be applied to systems with an uncountable number of diffusive states. Diffusivity changes with time in temporally/spatially heterogeneous environments such as cells and supercooled liquids. It is difficult to estimate such a fluctuating diffusivity from single-particle trajectories because one does not have information about the transition times at which the diffusivity changes. In this paper, we have proposed a new method for detecting the transition times from single trajectories. Our method is based on a fluctuation analysis of the time-averaged MSD to extract information on the characteristic time scale of the system. We have applied this method to three different diffusion processes, i.e., the LEFD with two states, the LEFD with three states, and the ATTM, which has an uncountable number of diffusive states.
Our method successfully extracts the transition times of the diffusivities and estimates the fluctuating diffusion coefficients in the three models. Since our method can be conducted with single-particle trajectories, it will be useful and important in experimental applications. Furthermore, a slight modification of this method can also be applied to time series of state-transition processes. E.Y. was supported by an MEXT (Ministry of Education, Culture, Sports, Science and Technology) Grant-in-Aid for the “Building of Consortia for the Development of Human Resources in Science and Technology.”
--- abstract: | Sunyaev-Zel’dovich (SZ) cluster surveys are considered among the most promising methods for probing dark energy up to large redshifts. However, their premise hinges upon an accurate mass-observable relationship, which could be affected by the (rather poorly understood) physics of the intracluster gas. In this letter, using a semi-analytic model of the intracluster gas that accommodates various theoretical uncertainties, I develop a [*Fundamental Plane*]{} relationship between the observed size, thermal energy, and mass of galaxy clusters. In particular, I find that $M \propto (Y_{SZ}/R_{SZ,2})^{3/4}$, where $M$ is the mass, $Y_{SZ}$ is the total SZ flux or thermal energy, and $R_{SZ,2}$ is the SZ half-light radius of the cluster. I first show that, within this model, using the Fundamental Plane relationship reduces the (systematic+random) errors in mass estimates to $14\%$, from $22\%$ for a simple mass-flux relationship. Since measurement of the cluster sizes is an inevitable part of observing the SZ clusters, the Fundamental Plane relationship can be used to reduce the error of the cluster mass estimates by $\sim 34\%$, improving the accuracy of the resulting cosmological constraints without any extra cost. I then argue why our Fundamental Plane is distinctly different from the virial relationship that one may naively expect between the cluster parameters. Finally, I argue that while including more details of the observed SZ profile cannot significantly improve the accuracy of mass estimates, a better understanding of the impact of non-gravitational heating/cooling processes on the outskirts of the intracluster medium (apart from external calibrations) might be the best way to reduce these errors.
author: - Niayesh Afshordi bibliography: - 'szplane\_3.bib' title: 'Fundamental Plane of Sunyaev-Zel’dovich clusters' --- Introduction ============ The origin of the present-day acceleration of the Universe is arguably the most central question in modern cosmology, and is thus likely to dominate theoretical and observational efforts in cosmology for decades to come. As recently highlighted by the [*Dark Energy Task Force Report*]{} [@2006astro.ph..9591A], one of the most promising methods for probing the history of cosmic acceleration, or its most likely culprit, Dark Energy, is the abundance of galaxy clusters at large redshifts, which is exponentially sensitive to the cosmic expansion history. This has motivated many upcoming cluster surveys such as APEX, ACT, SPT, and SZA, which use the thermal Sunyaev-Zel’dovich (SZ) signature [@sunyaev72; @carlstrom_etal02] of the hot intracluster gas in the microwave sky to find clusters at high redshifts[^1]. However, the accuracy of any cosmological constraint inferred from a cluster survey hinges upon how well the mass of the clusters can be estimated from the individual cluster observables. For example, @2005JCAP...12..001F show that a $10\%$ systematic error in the mass estimates is enough to significantly affect the accuracy of predicted dark energy constraints from upcoming SZ cluster surveys. Although the total SZ flux of a cluster, which traces the total thermal energy of the Intracluster Medium (ICM), is predicted to be a robust tracer of its mass [@1999PhR...310...97B; @carlstrom_etal02; @2006ApJ...651..643R], recent X-ray and SZ observations indicate that a significant fraction of cluster baryons may have been removed from the ICM, introducing a new uncertainty into the theoretical predictions [@2006ApJ...640..691V; @2005ApJ...629....1A; @2006astro.ph.12700A; @2006ApJ...652..917L; @2007astro.ph..2241E].
Although self-calibration methods, through use of phenomenological/physical ICM models [@2004ApJ...613...41M; @2006ApJ...653...27Y], clustering of clusters [@2004PhRvD..70d3504L; @2005PhRvD..72d3006L], or gravitational lensing [@2006ApJ...649..118S; @2007astro.ph..1276H] have been put forth as a way to avoid theoretical uncertainties, they do rely on ad hoc power-law fitting formulae and/or modeling assumptions that could jeopardize the accuracy of their applications. In this letter, I advocate a way to improve the accuracy of mass estimates (or alternatively relax modeling assumptions) through including more information about the observed SZ profile. In particular, while the usual mass estimates only rely on the total SZ flux, I develop a [*Fundamental Plane*]{} relationship [@2002ApJ...581....5V] among the cluster mass, the total SZ flux, and the SZ half-light radius of the cluster. The latter is an independent observable for a moderately resolved cluster, and should be readily measurable at similar precisions to the SZ flux, for the upcoming SZ cluster surveys. Semi-Analytic model of the Intracluster Medium ============================================== In order to study the scaling of different ICM observables, we first develop a semi-analytic ICM model which accommodates a generous allowance for different theoretical uncertainties. The main ingredient in our semi-analytic ICM model is the assumption of hot gas sitting in hydrostatic equilibrium in a nearly spherical dark matter halo. The dark matter profile is approximated by an NFW profile [@nfw]: $$\rho(r) = \frac{\rho_s}{(r/r_s)\left(1+r/r_s\right)^2},$$ where $r_s$ quantifies the scale at which the slope of the density profile changes from $-1$ to $-3$. This scale is often parameterized using the concentration parameter, $c_{200}= r_{200}/r_s$, where $r_{200}$ is the radius within which the mean density of the cluster is $200$ times the [*critical density*]{} of the Universe.
We assume a log-normal distribution for $c_{200}$ with the mean: $$c_{200}= 3.35 \left(\frac{M_{200}}{10^{14}\, h^{-1} M_{\odot}}\right)^{-0.11},$$ and a $22\%$ scatter, which is appropriate for an $\Omega_m=0.3$ and $\sigma_8 =0.8$ cosmology [see Fig. 11 in @2006MNRAS.372..758M]. The NFW gravitational potential can then be derived analytically: $$\phi(r)= -\frac{4\pi G \rho_s r_s^{3}}{r}\,\ln\left(1+\frac{r}{r_s}\right).$$ Next, we populate this potential with a polytropic gas, i.e. $P_{\rm gas} \propto \rho_{\rm gas}^{\Gamma}$, with $\Gamma \simeq 1.2$ [@1998ApJ...509..544S; @2005ApJ...629....1A; @2005ApJ...634..964O]. Such a polytropic distribution is expected from a turbulent rearrangement and is roughly consistent with hydrodynamical simulations [@2005ApJ...634..964O] and X-ray observations [@2003ApJ...593..272V and references therein]. We allow a range $$1.1 < \Gamma < 1.3,$$ with a flat prior, to accommodate uncertainties in and deviations from a polytropic distribution. In addition to thermal gas pressure, hydrostatic support can be provided by non-thermal sources of pressure. For example, @2007ApJ...655...98N show that subsonic turbulent pressure can yield a $5\%-20\%$ increase in pressure gradients. Moreover, @2006astro.ph.11037P argue that cosmic rays can contribute up to $32\%$ of the total pressure in a realistic cluster simulation. To include this uncertainty, we consider a wide range of $$5\% < \delta_{\rm nth} < 50\%,$$ with a flat prior, where $\delta_{\rm nth} \equiv P_{\rm nth}/P_{\rm gas}$ is the ratio of non-thermal to thermal pressure components.[^2] Plugging all sources of pressure into the equation of hydrostatic equilibrium, and using the polytropic relation, we find the ICM temperature profile [@2005ApJ...634..964O]: $$T(r)= -\frac{\Gamma-1}{\Gamma}\,\frac{\mu m_p}{k_B\left(1+\delta_{\rm nth}\right)}\left[\phi(r)-\phi(r_{200})\right]+T(r_{200}),$$\[t\_prof\] where $T(r_{200})$ is an integration constant, which is proportional to the surface pressure of the region within $r_{200}$.
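The profile above is straightforward to tabulate. A minimal numerical sketch, in arbitrary units, assuming the NFW potential and polytropic form just described (the normalization `phi0` and the value of the integration constant $T(r_{200})$ are placeholders, not the paper's calibration):

```python
import numpy as np

def nfw_potential(r, r_s=1.0, phi0=10.0):
    """NFW potential, phi(r) = -phi0 * ln(1 + r/r_s) / (r/r_s), arbitrary units."""
    x = r / r_s
    return -phi0 * np.log1p(x) / x

def temperature_profile(r, r200, Gamma=1.2, delta_nth=0.2, T200=1.0, phi0=10.0):
    """Polytropic hydrostatic temperature (energy units, so mu*m_p/k_B is
    absorbed into phi0):
    T(r) = -(Gamma-1)/Gamma * [phi(r) - phi(r200)] / (1 + delta_nth) + T(r200)."""
    dphi = nfw_potential(r, phi0=phi0) - nfw_potential(r200, phi0=phi0)
    return -(Gamma - 1.0) / Gamma * dphi / (1.0 + delta_nth) + T200

r = np.linspace(0.05, 3.0, 200)
T = temperature_profile(r, r200=2.0)
assert T[0] > T[-1]   # temperature declines outward in the declining potential
assert abs(temperature_profile(np.array([2.0]), r200=2.0)[0] - 1.0) < 1e-12
```

By construction the profile returns the integration constant $T(r_{200})$ at $r=r_{200}$; pinning down that constant is exactly the role of the surface-pressure discussion that follows.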
We quantify this constant through the quantity $b^2_T$ [@2007astro.ph..2241E], which is defined as: $$b^2_T = \frac{k_B \langle T \rangle}{\mu m_p \langle \sigma^2_{\rm DM}\rangle},$$\[b2t\] where $\langle T \rangle$ is the mean gas mass weighted temperature, $\mu m_p$ is the mean particle mass in the ICM plasma ($\mu \simeq 0.59$), and $\langle \sigma^2_{\rm DM}\rangle$ is the mean 1D dark matter velocity dispersion. The latter is exquisitely constrained in @2007astro.ph..2241E through use of a host of different dark matter simulations: $$\langle\sigma^2_{\rm DM}\rangle^{1/2}(<r_{200}) = (1084~{\rm km/s}) \left(\frac{M_{200}}{10^{15}\, h^{-1} M_{\odot}}\right)^{0.336},$$ plus $4\%$ random scatter. The ‘Santa Barbara Cluster’ comparison constrains the value of $b^2_T$ to $0.87 \pm 0.04$, for a large range of different adiabatic simulations [@1999ApJ...525..554F]. More recent high resolution simulations that include cooling and feedback effects [@2006ApJ...650..538N] yield consistent values $$b^2_T = 0.90 \pm 0.11,$$ but with larger scatter. We will adopt the latter range with a Gaussian distribution. In order to set the normalization for the gas pressure/density, we have to fix the total ICM baryonic budget, which we quantify through $f_{\rm gas} \equiv M_{\rm gas}(<r_{200})/M_{200}$. Various X-ray [@2006ApJ...640..691V; @2007astro.ph..2241E] and SZ observations [@2005ApJ...629....1A; @2006astro.ph.12700A; @2006ApJ...652..917L], as well as hydrodynamical simulations [@2006ApJ...650..538N], have indicated that $f_{\rm gas}$ may be significantly lower than the total cosmic baryonic budget, which we set to $\Omega_b/\Omega_m = 0.168$ [@2006astro.ph..3449S]. To accommodate this, we also assume a generous range of: $$0.6 < \frac{f_{\rm gas}}{\Omega_b/\Omega_m} < 0.9,$$ for ICM gas mass fractions. To estimate the total ICM SZ flux, we need to know the outer edge of our ICM model, or the radius of the accretion shock, $r_{\rm max}$.
Assuming that gas comes to a stop at the shock, the temperature behind the shock is roughly given by $$T(r_{\rm max}) \simeq \frac{3\,\mu m_p v^2_{\rm inf.}}{16\, k_B} \left(1+\delta_{\rm nth}\right)^{-1},$$\[tshock\] where $v_{\rm inf.}$ is the gas infall velocity[^3]. We then use the value of the infall velocity from the spherical collapse model [@1972ApJ...176....1G]: $$v_{\rm inf.} = H r_{\rm max}\left\{ -\left[2\Delta_c\right]^{-1/3} \left(Ht/\pi\right)^{-2/3} + \mathcal{O}\left(\Delta_c^{-2/3}\right)\right\},$$\[infall\] where $\Delta_c$ is the mean overdensity with respect to the critical density within the shock radius. Combining Eqs. (\[t\_prof\]-\[b2t\], \[tshock\]-\[infall\]) with mean densities from the NFW profile fixes the outer edge of our ICM model. The final step is to include the ellipticity/triaxiality of real haloes in our model. The impact of triaxiality on the total SZ flux of a halo is of second order, and so we will neglect it in our analysis. However, triaxiality introduces a random scatter in the projected SZ profiles, which will impact the observed half-light radii. To model this, we assume that, to first order, the triaxial profile has the shape: $$P(r,\theta,\varphi) = \bar{P}(r)\left[1+\sum_{m} a_{2m} Y_{2m}(\theta,\varphi)\right],$$ where $\bar{P}(r)$ is the prediction from our spherical polytropic model, and $Y_{2m}$’s are spherical harmonics. We then assume a Gaussian distribution with a reasonable range of $$\langle a^2_{2m}\rangle^{1/2} = 0.16, \qquad \langle a_{2m}\rangle =0,$$ to model the triaxiality of real clusters. This amplitude of triaxiality has equivalent moments to an ellipsoidal distribution with axes ratios of 1:0.7:0.5 [expected for CDM haloes, @1991ApJ...378..496D], and a (spherically averaged) pressure profile $P(r) \propto r^{-2}$, which is roughly consistent with observations and simulations of SZ clusters [@2006astro.ph.12700A].
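The model assembled above is Monte Carlo in nature: each synthetic cluster draws its parameters from the stated priors. A minimal sketch of that sampling step (the function name and the reduction of the gas-fraction prior to a single multiplier of $\Omega_b/\Omega_m$ are my own framing; the distributions themselves are the ones quoted in the text):

```python
import numpy as np

def sample_icm_params(m200, size, rng):
    """Draw ICM model parameters for clusters of mass m200 (in 1e14 Msun/h),
    following the priors quoted in the text."""
    # log-normal concentration: mean 3.35*(M200/1e14 Msun/h)^-0.11, 22% scatter
    c200 = 3.35 * m200 ** (-0.11) * rng.lognormal(0.0, 0.22, size)
    gamma = rng.uniform(1.1, 1.3, size)          # polytropic index, flat prior
    delta_nth = rng.uniform(0.05, 0.50, size)    # non-thermal pressure fraction
    b2_t = rng.normal(0.90, 0.11, size)          # surface-pressure parameter
    fgas = rng.uniform(0.6, 0.9, size) * 0.168   # gas fraction, in units of Omega_b/Omega_m
    return c200, gamma, delta_nth, b2_t, fgas

rng = np.random.default_rng(0)
c200, gamma, delta_nth, b2_t, fgas = sample_icm_params(1.0, 10000, rng)
assert abs(np.median(c200) - 3.35) < 0.15   # log-normal median sits at the mean relation
assert gamma.min() >= 1.1 and gamma.max() <= 1.3
assert fgas.max() < 0.168                   # always below the cosmic baryon budget
```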
Fundamental Plane of SZ clusters ================================ Let us first quantify the SZ flux of a cluster in terms of $Y_{SZ}$, which we define as $$Y_{SZ} \equiv {\rm Flux(mK\cdot arcmin^2)} \left[\frac{d_A(z)}{\rm Mpc}\right]^{2} h^{-1}(z),$$ where ${\rm Flux(mK\cdot arcmin^2)}$ is the total observed cluster SZ flux at low frequencies, in units of ${\rm mK\cdot arcmin^2}$, while $H(z) = 100 h(z)~ {\rm km/s/Mpc}$ and $d_A(z)$ are the Hubble constant and the angular diameter distance at redshift $z$. $Y_{SZ}$ can then be easily described in terms of the properties of the ICM model: $$Y_{SZ} = 1.022\,\left(\cdots\right)\, T({\rm keV}).$$ Now we can generate a random set of 3000 clusters uniformly distributed in the range $$13 < \log_{10}\left(M_{200}\, h/M_{\odot}\right) < 16,$$ with their ICM properties according to the prescription that we outlined above [^4]. This leads to our mass-SZ flux scaling relation: $$M_{200} = (8.1 \times 10^{14}\, M_{\odot}/h)\; Y^{0.58}_{SZ} \qquad {\rm (22\%~error)}.$$\[m-sz\] The error quoted here is the r.m.s. scatter around our best-fit scaling relation, and reflects a very conservative estimate of all the theoretical uncertainties in the mass-SZ flux relation. Also notice that this includes both systematic and random uncertainties, which cannot be distinguished in our approach. A further simplification is the assumed lack of covariance between different uncertainties. While possible constructive/destructive covariances could lead to larger/smaller scatter, their correct account would require a more detailed understanding of the various processes involved. ![\[szplane\] Contrast between the usual SZ flux mass estimates, Eq. (\[m-sz\]), shown by starred (black) points, and Fundamental Plane mass estimates, Eq. (\[fp\]), shown by solid (red) triangles. The error decreases from 22% for the former to 14% for the latter mass estimates.](szplane1.ps){width="\linewidth"} To approach the main subject of this letter, i.e. the Fundamental Plane of SZ clusters, we will next include information about the ICM SZ profile into our scaling relation.
We do this by calculating the [*half-light radius*]{}, $R_{SZ,2}$, which is defined as the radius of the disk (or cylinder) that contains half of the total SZ flux. The new best-fit scaling relation for our clusters is: $$M_{200} = (7.8 \times 10^{14}\, M_{\odot}/h)\; Y^{0.75}_{SZ} \left(\frac{R_{SZ,2}}{h^{-1}\,{\rm Mpc}}\right)^{-0.76} \qquad {\rm (14\%~error)},$$\[fp\] which shows an almost $34\%$ decrease in the error of the mass estimate. The new scaling relation is shown by solid (red) triangles in Figure (\[szplane\]), which should be contrasted with the starred (black) points that result from the usual mass-SZ flux relation (Eq. \[m-sz\]). One may wonder if using more information about the SZ profile of the cluster can help reduce the errors in mass estimates even further. To investigate this, we can add the radius of the disk containing a quarter of the total projected SZ flux, $R_{SZ,4}$, to the list of observables. However, we find that the resulting estimator, which now depends on $Y_{SZ}$, $R_{SZ,2}$, and $R_{SZ,4}$, has a scatter of $13\%$, which is almost the same as the error in the Fundamental Plane estimator. Therefore, we conclude that adding more details about the SZ profile is unlikely to improve the accuracy of mass estimates significantly. Physical Origin of the Fundamental Plane ======================================== It is interesting to notice that our Fundamental Plane relationship (Eq. \[fp\]) differs from the virial relation, $M \propto (Y_{SZ}R_{SZ,2})^{1/2}$, previously adopted in @2002ApJ...581....5V. The reason for this apparent discrepancy is that the virial relation is only an approximation, which results from the assumption of hydrostatic equilibrium and self-similar pressure profiles for different clusters. More specific information about the initial conditions of the cosmological collapse, or the surface pressure, while relaxing the self-similarity assumption, can lead to more accurate scaling relations.
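The improvement in scatter comes from fitting a plane in $(\log M,\log Y_{SZ},\log R_{SZ,2})$ rather than a line in $(\log M,\log Y_{SZ})$. A toy reconstruction of that fitting step, with synthetic clusters whose $Y_{SZ}$ and $R_{SZ,2}$ scatter share a common component (standing in for a shared physical uncertainty such as the ICM outer boundary; all numbers here are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3000
log_m = rng.uniform(13.0, 16.0, n) - 14.5            # log10 M200, centered

# self-similar slopes plus a shared scatter term e and small independent noise
e = rng.normal(0.0, 0.09, n)
log_y = (5.0 / 3.0) * log_m + e + rng.normal(0.0, 0.03, n)
log_r = (1.0 / 3.0) * log_m + 0.8 * e + rng.normal(0.0, 0.02, n)

def fit_scatter(X, target):
    """Least-squares fit of target on the columns of X; return r.m.s. residual (dex)."""
    A = np.column_stack([X, np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.std(target - A @ coef)

s_flux = fit_scatter(np.column_stack([log_y]), log_m)          # mass-flux relation
s_plane = fit_scatter(np.column_stack([log_y, log_r]), log_m)  # Fundamental Plane
assert s_plane < s_flux   # adding R_SZ,2 absorbs the shared scatter component
```

The plane fit wins precisely because the second observable carries information about the scatter source, which is the same mechanism invoked for $R_{SZ,2}$ in the text.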
Since we use the assumption of hydrostatic equilibrium, most of our clusters sit close to the intersection of the virial relation and Eq. (\[fp\]), but (by construction) are better fit by Eq. (\[fp\]). There is a simple way to understand the physical origin of Eq. (\[fp\]) analytically. In Sec. \[sec\_errors\], we show that (within our model) the scatter in the mass-flux relation is dominated by the uncertainty in the surface pressure, which sets the outer boundary of the ICM (Table \[error\_sources\]). If we approximate the gravitational potential within the ICM region that dominates the SZ flux by an isothermal potential, and fix all the cluster/ICM parameters other than its outer boundary, we have $M_{\rm gas} \propto R_{SZ,2}$, while $\langle T_{\rm gas}\rangle \simeq ~{\rm const}$. Therefore, $Y_{SZ} \propto R_{SZ,2}$ for a fixed cluster potential (and thus fixed $M_{200}$): $$F(M_{200}) = Y_{SZ}/R_{SZ,2}.$$ Combining this with the standard scaling relations, $Y_{SZ} \propto M^{5/3}_{200}$ and $R_{SZ,2} \propto M_{200}^{1/3}$, yields: $$M_{200} \propto \left(Y_{SZ}/R_{SZ,2}\right)^{3/4}.$$ Discussions =========== Breakdown of Error Budgets {#sec_errors} --------------------------

  sources of $\Delta M^2/ M^2$   in Mass-Flux   in Fundamental Plane
  ------------------------------ -------------- ----------------------
  halo concentration             0.001          0.003
  polytropic index               0.000          0.000
  gas fraction                   0.004          0.008
  non-thermal pressure           0.005          0.004
  DM velocity dispersion         0.010          0.005
  surface pressure               0.025          0.009
  triaxiality                    0.000          0.000
  total                          0.048          0.022

  : \[error\_sources\] Breakdown of the fractional error in mass estimates into contributions due to different sources of uncertainty.

In Table \[error\_sources\], we have broken down the errors in mass estimates into contributions from different sources of uncertainty.
To do this for each parameter, we assume that it takes only the central value of its assumed distribution, and then measure the change in the error quadratures of the new scaling relations. We see that uncertainties in the ICM gas fraction, the non-thermal and surface pressures, as well as the dark matter velocity dispersion, are the main sources of error in our mass estimates. In contrast, the uncertainties in the halo concentration, the polytropic index, and the halo triaxiality have little impact on the errors. In particular, the error in both scaling relations seems to be dominated by our assumed uncertainty in the ICM surface pressure, which is ultimately related to the amount of non-gravitational heating/cooling associated with galaxy or black hole formation in the clusters. Another interesting observation is that the uncertainty in the gas fraction has a larger contribution to the error in the Fundamental Plane relation than to the error in the mass-flux relation. This is due to the fact that $R_{SZ,2}$ does not contain any information about $f_{\rm gas}$ (at least in our model), while Fundamental Plane masses have a steeper dependence on $Y_{SZ}$, and thus are more sensitive to the $f_{\rm gas}$ uncertainty. Notice that possible correlations among different uncertainties, which are overlooked in our simple ICM model, may tilt the Fundamental Plane from our Eq. (\[fp\]), and also change the size of the errors. However, the presence of any such correlations is not immediately obvious in our current theoretical understanding of the ICM physics. One may wonder why the error in our mass-flux relationship (Eq. \[m-sz\]) is so much larger than those advocated in numerical studies such as @2006ApJ...650..128K, which are only $\sim 6\%$. The reason is that these studies measure the SZ flux within 3D spheres of fixed overdensity ($= 500$ times critical, for @2006ApJ...650..128K), while our total SZ flux is integrated out to the ICM accretion shock.
Of course, the latter is a more relevant quantity for 2D SZ cluster observations, especially for poor angular resolutions. In fact, our $M_{500}-Y_{SZ,500}$ relation has only a scatter of $9\%$, which is reasonable as we include more theoretical uncertainties than in @2006ApJ...650..128K’s simulations. This shows that the bulk of the scatter in our mass-flux relationship comes from the uncertainty in the outer edge of our ICM model. Finally, we should point out that a breakdown into systematic and random errors is also not possible within our exercise, due to our poor statistical understanding of different non-gravitational processes (such as cosmic ray injection or stellar feedback) that affect the scaling relations. Noisy Observations ------------------ As the purpose of this letter is to introduce a novel and improved mass estimator for SZ cluster surveys, we defer a detailed study of the observational issues associated with the use of this method to future investigation. Such details, while important, should be suited to the specifics of each survey, as well as the class of cosmological models that one would intend to constrain. However, in what follows, I will outline some of the steps that need to be taken for a realistic cosmological application. We should first recognize that any realistic observation of SZ clusters is limited both by the finite detector noise, as well as the finite beam resolution. While a poor resolution does affect the precision of the SZ flux measurement, its impact is much more severe for the cluster size measurement. For example, if the detector beam is significantly larger than the virial radius of the cluster, then the Fundamental Plane relation cannot add much to the mass-flux relation, even if the cluster is detected at several-$\sigma$ level. 
In the absence of perfect resolution, the most practical way to use the Fundamental Plane relation is to fit a parametrized template (a Gaussian) to the observed cluster SZ map, and replace $R_{SZ,2}$ with the characteristic scale of the template, $\sigma_{SZ}$ [^5]. Assuming both a Gaussian template and a Gaussian beam, this measurement is done by minimizing the following $\chi^2$ function: $$\chi^2 = \sum_{\vec{\ell}}\frac{\left|T_{obs,\vec{\ell}} - A\, e^{-(\sigma^2_{SZ}+\sigma^2_{\rm beam})\ell^2/2}\right|^{2}}{C_{\ell}\, e^{-\sigma^2_{\rm beam}\ell^2} + N},$$\[chi2\] where $T_{obs,\vec{\ell}}$ is the flat-sky Fourier transform of the cluster SZ map, $A$ and $\sigma_{SZ}$ are the amplitude and width of the Gaussian template, $C_{\ell}$ is the CMB power spectrum, $N$ characterizes the detector noise, and $\sigma_{\rm beam}$ is the size of the detector beam. In the limit that the detector noise is the primary source of measurement uncertainty ($C_{\ell}e^{-\sigma^2_{\rm beam}\ell^2} \ll N$), the Fisher matrix resulting from Eq. (\[chi2\]) reduces to Gaussian integrals which can be calculated analytically. In particular, for a well-resolved cluster ($\sigma^2_{SZ} \gtrsim \sigma^2_{\rm beam}$), the total (measurement+theory) error in Fundamental Plane masses is only $\sim 28\% (22\%)$ smaller than that of the SZ-flux mass estimates for a cluster detected at the $5\sigma (3\sigma)$ level. This is due to the fact that the theoretical mass degeneracy direction in the $Y_{SZ}-R_{SZ,2}$ (or $Y_{SZ}-\sigma_{SZ}$) plane does not coincide with the degeneracy direction of the measured parameters. Including a finite resolution will only further degrade the performance of the Fundamental Plane mass estimates. Conclusions =========== To summarize, using a semi-analytic model of the intracluster medium which accommodates different theoretical uncertainties, we found a Fundamental Plane relationship that relates the mass of galaxy clusters to their SZ flux and SZ half-light radius. Use of this relationship should lead to $\sim 34\%$ smaller errors in mass estimates in comparison to the usual mass-flux relation, and hence more accurate cosmological constraints.
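The parameter degeneracy invoked above can be made concrete with a small numerical Fisher-matrix calculation for the noise-dominated limit of such a template fit (the parametrization by $\ln A$ and a single total width, and all numerical values, are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def fisher_gaussian_template(A=1.0, sig2=1.0, N=1e-3, lmax=20.0, nl=4000):
    """Noise-limited Fisher matrix for a Gaussian template T_l = A*exp(-sig2*l^2/2)
    in flat-sky Fourier space; parameters are (ln A, sig2), white noise N per mode.
    Mode counting: sum over 2D modes -> integral of l dl / (2*pi)."""
    l = np.linspace(1e-3, lmax, nl)
    dl = l[1] - l[0]
    model = A * np.exp(-sig2 * l**2 / 2.0)
    derivs = (model, -0.5 * l**2 * model)   # d(model)/d(lnA), d(model)/d(sig2)
    w = l / (2.0 * np.pi)
    return np.array([[np.sum(di * dj * w) * dl / N for dj in derivs]
                     for di in derivs])

F = fisher_gaussian_template()
cov = np.linalg.inv(F)
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
assert np.all(np.linalg.eigvalsh(F) > 0)   # well-defined error ellipse
assert 0.6 < corr < 0.8                    # amplitude and width errors strongly correlated
```

The strong correlation between the fitted amplitude and width is the "degeneracy direction of the measured parameters" referred to in the text; whether the Fundamental Plane helps depends on how that direction is oriented relative to the theoretical mass degeneracy.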
While including more details about the SZ profile is unlikely to increase the accuracy of mass estimates, a better understanding of the role of non-gravitational heating/cooling processes should significantly reduce the errors. While the goal of this letter was to introduce the idea of using Fundamental Plane relationship to improve SZ cluster mass estimates, there is much more that remains to be done in order to exploit the full potential of this method. For example, rather than focusing on the cluster masses, we can study the cosmological constraints from the full bivariate distribution of cluster fluxes and half-light radii $n(Y_{SZ},R_{SZ,2})$. This combines the traditional mass-function constraints, with the constraints resulting from observed scaling relations, as advocated by @2002ApJ...581....5V. Another topic that we did not address here was the impact of including merging (non-relaxed) clusters and/or false detections in our sample. One may identify (and thus exclude) these clusters as outliers in the $Y_{SZ}-R_{SZ,2}$ plane for a given redshift bin, which would not have been possible in the absence of SZ profile information. Finally, it is needless to say that the true Fundamental Plane, as well as the full impact of different theoretical uncertainties, can only be accurately (and adequately) modeled through high-resolution and realistic cosmological simulations of a fair sample of galaxy clusters. Acknowledgments {#acknowledgments .unnumbered} =============== I would like to thank Daisuke Nagai, Licia Verde, Zoltan Haiman, David Spergel, and Beth Reid for helpful comments on this manuscript. I also would like to thank Daisuke Nagai for providing the pressure profiles of the simulated clusters in @2006ApJ...650..538N. 
[^1]: <http://bolo.berkeley.edu/apexsz/>; <http://www.physics.princeton.edu/act/>; <http://spt.uchicago.edu/>; <http://astro.uchicago.edu/sza/> [^2]: Note that $\delta_{\rm nth}$ is not expected to be constant across the ICM [@2006astro.ph.11037P; @2007ApJ...655...98N]. However, the wide range of uncertainty that is already assumed here for $\delta_{\rm nth}$ should also include the consequences of its non-uniformity. [^3]: Here we have assumed that the non-thermal pressure component behaves as a non-relativistic monatomic gas, which is not appropriate for cosmic ray pressure. However, we will ignore this difference in our model. [^4]: Notice that none of the assumptions that have gone into our ICM model would cause a break in the slope(s) of the resulting scaling relations, and so the range assumed for cluster masses does not change the slope or scatter of the scaling relations. [^5]: Of course, the normalization/slopes of the scaling relations should be re-calculated for the specific template. Here, we assume that the Fundamental Plane or its scatter would not change significantly.
--- abstract: 'The behavior of the spatial two-particle correlation function is surveyed in detail for a uniform 1D Bose gas with repulsive contact interactions at finite temperatures. Long-, medium-, and short-range effects are all investigated. The results span the entire range of physical regimes, from the ideal gas to the strongly interacting gas, and from zero temperature to high temperature. We present perturbative analytic methods, available at strong and weak coupling, and first-principle numerical results using imaginary time simulations with the gauge-$P$ representation in regimes where perturbative methods are invalid. Nontrivial effects are observed from the interplay of thermally induced [bunching]{} behavior versus interaction induced [antibunching]{}.' author: - 'P. Deuar' - 'A. G. Sykes' - 'D. M. Gangardt' - 'M. J. Davis' - 'P. D. Drummond' - 'K. V. Kheruntsyan' title: 'Non-local pair correlations in the 1D Bose gas at finite temperature' --- Introduction ============ The study of two-body correlations has a long history dating back to the $1956$ experiment of Hanbury Brown and Twiss (HBT) [@hbt-expt]. The HBT experiment set out to measure the intensity of light coming from a distant star at two nearby points in space. The fluctuations in the intensities were shown to be strongly correlated in spite of the thermal nature of the source. In more recent times, experimental progress in the field of ultra-cold atomic gases has provided the opportunity to examine similar correlations in systems of cold atoms (as opposed to photonic systems). The large thermal de Broglie wavelength in a cold gas means the correlations occur on length scales large enough to be resolved using current detectors. A pioneering experiment of this kind, involving a cloud of cold Neon atoms, was carried out by Yasuda and Shimizu [@yasuda-shimizu] as early as $1996$. A more comprehensive study was undertaken during $2005-2007$ in Refs.
[@schellekens; @jeltes], where the two-particle *bunching* phenomenon associated with Bose enhancement (when metastable $^{4}$He$^{\ast}$ atoms were used) was juxtaposed with the *antibunching* behavior present in a system of fermions (when $^{3}$He$^{\ast}$ atoms were used). In all of the above cases the measured correlations were completely described by the statistical exchange interaction between particles in an *ideal* gas. The behavior of strongly *interacting* systems poses some of the most difficult questions confronting current theoretical studies in many-body physics. In this paper we discuss how our simple understanding of two-body correlations in an ideal gas can be radically altered in the presence of interactions. To demonstrate this we calculate the normalized pair correlation function $$g^{(2)}(r)=\langle\hat{\Psi}^{\dagger}(0)\hat{\Psi}^{\dagger}(r)\hat{\Psi}(r)\hat{\Psi}(0)\rangle/n^{2}\label{eq:g2}$$ in a homogeneous repulsive one-dimensional (1D) Bose gas [@liebliniger; @lieb2] at finite temperature over a wide range of interaction strengths. [In Eq. , $\hat{\Psi}(x)$ is the field operator, and $n=\langle\hat{\Psi}^{\dagger}(x)\hat{\Psi}(x)\rangle$ is the linear 1D density]{}. Physically, $g^{(2)}(r)$ quantifies the conditional probability of detecting a particle at position $r$, given that a particle has been detected at the origin. Theoretically, the 1D Bose gas model with $\delta$-function interaction is one of the simplest paradigms we have of a strongly interacting quantum fluid, owing to its exact integrability [@liebliniger; @lieb2; @yangyang1; @korepin-book; @giamarchi-book; @gogolin-book]. In the limit of an infinitely strong interaction it corresponds to a gas of impenetrable (hard-core) bosons, treated first in Ref. [@girardeau].
It also holds relevance as an experimentally accessible system [@bec_low_dim; @bec_low_dim2; @Greiner-1D-exp; @greiner_low_dim_bec; @Richard-1D-exp; @esslinger_expt1; @bill; @bloch_expt; @weiss_expt; @Weiss-g2-1D; @esslinger_expt2; @Raizen-BEC-box; @Bouchoule-density-density; @Schmiedmayer-1D-exp; @van_Druten]. In contrast to 2D and 3D, the strongly interacting limit of a 1D system is achieved in the low density regime. In this regime the wave function of the particles is strongly correlated and prevents them from being close to each other, which results in a dramatic suppression of three-body losses. This allows for the stable creation of strongly interacting 1D Bose gases. There has been a substantial amount of previous theory on correlations of the 1D Bose gas model. The Luttinger liquid approach provides a method of calculating the long-range asymptotic behavior in the decay of non-local correlations [@giamarchi-book; @gogolin-book]. Local second- and third-order correlations in the homogeneous system have been calculated in Refs. [@Castin; @gangardt_T_0; @gangardt-correlations; @karenprl; @Cazalila]; extensions to inhomogeneous systems using the local density approximation (LDA) are given in Ref. [@karen-pra]. Numerical calculations at specific values of interaction strength have been carried out at $T=0$ [@untrapped_via_montecarlo] and at finite temperature [@drummond-canonical-gauge]. Similar *nonlocal* quantities have been calculated for the $T=0$ ground state [@lenard; @schultz; @untrapped_via_montecarlo; @caux_correlations; @caux_correlations2; @cherny_brand1], and for finite temperature both numerically [@drummond-canonical-gauge] and in the strong interaction limit [@cherny_brand2]. Refs. [@korepin-book; @giamarchi-book; @gogolin-book; @lieb_book; @shlyapnikovlecturenotes; @Castin04] contain recent reviews of the physics of the 1D Bose gas problem.
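For the ideal-gas case recalled above, Wick's theorem reduces the pair correlation to $g^{(2)}(r) = 1 + |g^{(1)}(r)|^{2}$; for a nondegenerate thermal gas $g^{(1)}$ is a Gaussian of width set by the thermal de Broglie wavelength $\lambda_T$, which reproduces HBT bunching, $g^{(2)}(0)=2$. A short check of this textbook limit (the noninteracting baseline, not the interacting result derived later in the paper):

```python
import numpy as np

def g2_ideal_nondegenerate(r, lambda_T=1.0):
    """Pair correlation of an ideal nondegenerate Bose gas:
    g2(r) = 1 + |g1(r)|^2, with g1(r) = exp(-pi * r^2 / lambda_T^2)."""
    g1 = np.exp(-np.pi * r**2 / lambda_T**2)
    return 1.0 + g1**2

r = np.linspace(0.0, 3.0, 301)
g2 = g2_ideal_nondegenerate(r)
assert abs(g2[0] - 2.0) < 1e-12   # HBT bunching at zero separation
assert g2.min() >= 1.0            # no antibunching without interactions
assert abs(g2[-1] - 1.0) < 1e-6   # correlations decay over ~lambda_T
```

Interactions change this picture qualitatively: repulsion can push $g^{(2)}(0)$ below 1, the antibunching competing with the thermal bunching shown here.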
The focus of the present paper is the nonlocal correlation function [at arbitrary interparticle separations $r$]{}; we give the details of analytic derivations of the results discussed in a recent Letter [@sykes_raizen] and complement them with exact numerical calculations using the stochastic gauge-$P$ method of Ref. [@drummond-canonical-gauge; @deuar-drummond-2002; @drummond-deuar-2003; @deuar-drummond-2006; @deuar-thesis]. Experimental proposals to measure nonlocal spatial correlations between the atoms in a 1D Bose gas have been discussed in Ref. [@sykes_raizen; @accordion]. The structure of this paper is as follows. In section \[sect:liebformalism\] we give a brief review of the physics of a 1D Bose gas, emphasizing the important parameters which determine the phase diagram. In section \[sect:numerical\] we outline the details involved in the application of the (imaginary time) gauge-$P$ phase space method to the 1D Bose gas. The more technical details are placed in appendix \[append:numerix\]. This method is capable of obtaining numerical results in the cross-over regions of the phase diagram, where analytic results are not available. In sections \[sect:nearlyideal\], \[sect:weak\] and \[sect:strong\] we present the results of calculating $g^{(2)}(r)$ in the nearly ideal gas limit, the weakly interacting limit, and the strongly interacting limit respectively. The results are obtained from numerical calculations and analytic perturbation expansions. We describe the details of our perturbation expansion in each respective section. In section \[sect:HTFxover\] we analyze, in detail, the nature of the crossover into the fermionized Tonks gas regime. Section \[sect:numerical-limitations\] discusses the limitations of the numerical method. In section \[sect:conclusions\] we give an overview and draw conclusions. 
The Interacting Bose gas in 1D {#sect:liebformalism} ============================== We are considering a homogeneous system of $N$ identical bosons in a 1D box of length $L$ with periodic boundary conditions [@liebliniger; @lieb2]. We include two-body interactions in the form of a repulsive delta-function potential. The second-quantized Hamiltonian of the system is given by $$\hat{H}=\frac{\hbar^{2}}{2m}\int dx\,\partial_{x}\hat{\Psi}^{\dagger}\partial_{x}\hat{\Psi}+\frac{g}{2}\int dx\,\hat{\Psi}^{\dagger}\hat{\Psi}^{\dagger}\hat{\Psi}\hat{\Psi},\label{Hfull}$$ where $m$ is the mass and $g>0$ is the coupling constant that can be expressed via the 3D $s$-wave scattering length $a$ as $g\simeq2\hbar^{2}a/(ml_{\perp}^{2})=2\hbar\omega_{\perp}a$ [@olshanii-1d-scattering]. Here, we have assumed that the atoms are transversely confined by a tight harmonic trap with frequency $\omega_{\perp}$ and that $a$ is much smaller than the transverse harmonic oscillator length $l_{\perp}=\sqrt{\hbar/m\omega_{\perp}}$. The 1D regime is realized when the transverse excitation energy $\hbar\omega_{\perp}$ is much larger than both the thermal energy $T$ (with $k_{B}=1$) and the chemical potential $\mu$ [@karen-pra; @interaction-crossover]. A uniform system in the thermodynamic limit ($N,L\longrightarrow\infty$, while the 1D density $n=N/L$ remains constant) is completely characterized [@liebliniger; @yangyang1] by two parameters: the dimensionless interaction strength$$\gamma=\frac{mg}{\hbar^{2}n}$$ and the reduced temperature$$\tau=T/T_{d},$$ where $T_{d}=\hbar^{2}n^{2}/(2m)$ is the temperature of quantum degeneracy in units of energy [@karenprl]. The interplay between these two parameters dictates the dominating behavior in six physically different regimes. Briefly, these regimes are: - *Nearly ideal gas* regime, where the temperature always dominates over the interaction strength. This regime splits into two subregimes defined by $\tau\ll1$ or $\tau\gg1$. 
In both cases one must have $\gamma\ll\min\left\{ \tau^{2},\sqrt{\tau}\right\} $. - *Weakly interacting* regime, where both the interaction strength and the temperature are small, but $\tau^{2}\ll\gamma\ll1$. This regime realizes the well known quasi-condensate phase. Fluctuations occur due to either vacuum or thermal fluctuations, which defines two further subregimes, with $\tau\ll\gamma$ or $\tau\gg\gamma$, respectively. - *Strongly interacting* regime, where the interaction strength is large and dominates over temperature induced effects. This can occur at high and low temperatures, again defining two subregimes with $\tau\ll1$ or $\tau\gg1$. The basic understanding of the competition between interaction induced effects and thermally induced effects was outlined in Ref. [@sykes_raizen]. Although the model is integrable via the Bethe ansatz, the cumbersome nature of the eigenstates [@Sykes-Davis-Drummond] inhibits the direct calculation of the nonlocal two-body correlation function. We therefore use numerical integration in a phase-space representation, together with perturbation theory in each of the six regimes. The standard Bogoliubov procedure, applied to Eq.  is appropriate in the case of the weakly interacting regime (see section \[sect:weak\]). Perturbation theory in the strongly interacting and nearly ideal gas regimes is done using the path integral formalism (see sections \[sect:perturbation\] and \[sect:strong\] respectively). Numerical Stochastic Gauge Calculations {#sect:numerical} ======================================= Gauge-$P$ distribution ---------------------- To evaluate correlations away from the regimes of applicability of the analytic approximations, we use the gauge-$P$ phase-space method to generate a stochastic evolution from the simple $T\rightarrow\infty$ limit (where interactions are negligible) down to lower temperatures. 
This method gives results that correspond exactly to the full quantum mechanics using the Hamiltonian (\[Hfull\]) as the number of averaged realizations ($\mathcal{S}$) goes to infinity. The gauge-$P$ method has been described in [@deuar-drummond-2002; @drummond-deuar-2003; @deuar-drummond-2006], and is covered in greatest detail in [@deuar-thesis], while an initial application to the 1D Bose gas was presented in [@drummond-canonical-gauge]. Below we give a summary of the derivation for this system, and present the basic calculation procedure. Some of the more technical details are given in Appendix \[append:numerix\]. We consider a grand canonical ensemble with mean density $n$, Hamiltonian (\[Hfull\]) and inverse temperature given by $\beta=1/k_{B}T$. When the Hamiltonian commutes with the number operator $\widehat{N}=\int dx\widehat{\Psi}^{\dagger}(x)\widehat{\Psi}(x)$, as is the case here, the unnormalized density matrix at temperature $T$ is given by $$\widehat{\rho}_{u}=e^{[\mu(\beta)\widehat{N}-\widehat{H}]\beta},\label{rhou}$$ where $\mu(\beta)$ is the chemical potential. In this formulation, $\mu$ can in principle be chosen at will as any desired function of temperature, thus indirectly determining the density $n(T)$. In the Schrödinger picture the density matrix is equivalently defined by an “imaginary time” master-like equation $$\begin{aligned} \frac{\partial\widehat{\rho}_{u}(\beta)}{\partial\beta} & = & \left[\mu_{e}(\beta)\widehat{N}-\widehat{H}\right]\,\widehat{\rho}_{u}(\beta)\notag\\ & = & \frac{1}{2}\left[\mu_{e}(\beta)\widehat{N}-\widehat{H}\ ,\ \widehat{\rho}_{u}(\beta)\right]_{+}\label{master}\end{aligned}$$ and a simple initial (i.e. $T\rightarrow\infty$) condition $$\widehat{\rho}_{u}(0)=e^{-\lambda\widehat{N}},\label{rho_ic}$$ with $\lambda=-\lim_{\beta\rightarrow0}\left[\beta\mu(\beta)\right]$ and $\beta$ playing a similar role to time in the Schrödinger equation for time evolution, apart from a factor of $i$ (hence the name).
The second line of (\[master\]) follows from the restricted set of density matrices described by the grand canonical ensemble (\[rhou\]), where $\log\widehat{\rho}_{u}$ commutes with $\widehat{\rho}_{u}$. Note that $\mu_{e}(\beta)$ is a temperature-dependent “effective” chemical potential $$\mu_{e}=\frac{\partial\lbrack\beta\mu(\beta)]}{\partial\beta},$$ that is not necessarily equal to $\mu$. The initial condition (\[rho\_ic\]) can then be evolved according to Eq. (\[master\]) to obtain the equilibrium state at lower temperatures $\beta>0$. However, in the density matrix form, this rapidly becomes intractable for more than a few particles. Phase-space methods such as the gauge-$P$ distribution used here reduce the computational resources needed to a manageable level. This is done by deriving a Fokker-Planck equation for a distribution of phase-space variables that is equivalent to the full quantum mechanics (\[master\]), and then in a second step, sampling this distribution stochastically and evolving the *samples* with a diffusive random walk that is equivalent to the Fokker-Planck equation. The general approach is described in [@positiveP; @Gardiner]. The price that is paid for tractable calculations is a loss of precision that comes about due to the finite sample size $\mathcal{S}$. Fortunately this uncertainty can be readily estimated using the Central Limit theorem and scales as $1/\sqrt{\mathcal{S}}$. We utilize the normalized off-diagonal coherent state expansion of the positive-$P$ distribution [@positiveP] because the number of variables required to describe a sample is linear in the number of spatial points (tractability) and because it describes all quantum states with a non-negative real distribution. However, for this investigation two additional elements are needed. Firstly, the evolution (\[master\]) does not preserve the trace, so an additional weight variable in the expansion is needed to keep track of this.
Secondly, the evolution equations for the samples given by a bare weighted positive-$P$ treatment are unstable and can lead to systematically bad sampling [@Gilchrist]. The complex part of the weight variable allows us to remove these instabilities using a stochastic gauge as described in [@deuar-drummond-2002; @drummond-canonical-gauge]. In practice, the first step is to discretize space into $M$ equally spaced points in a box of length $L$ with periodic boundary conditions, on which the fields are defined. This gives a lattice spacing of $\Delta x=L/M$. One must make sure that the lattice spacing is fine enough, and the box long enough, to encompass all relevant detail. In practice we check this by increasing $L$ and, separately, $M$ until no further change in the results is seen. On this equivalent lattice, one can expand the density matrix $\widehat{\rho}_{u}$ as $$\widehat{\rho}_{u}=\int G(\vec{v})\widehat{\Lambda}(\vec{v})\ d^{4M+2}\vec{v},$$ with a positive [@deuar-drummond-2002] distribution $G(\vec{v})$ of the set of $2M+1$ complex phase-space variables, $$\vec{v}=\left\{ \alpha_{1},\dots,\alpha_{M},\alpha_{1}^{+},\dots,\alpha_{M}^{+},\Omega\right\} ,$$ that describe an operator basis $$\widehat{\Lambda}(\vec{v})=\Omega\otimes_{j=1}^{M}||\alpha_{j}\rangle\langle(\alpha_{j}^{+})^{\ast}||\ e^{-\sum_{j=1}^{M}\alpha_{j}^{+}\alpha_{j}}\label{lambda}$$ composed of unnormalized (Bargmann) coherent states $||\alpha_{j}\rangle=\exp\left[\alpha_{j}\sqrt{\Delta x}\,\widehat{\Psi}^{\dagger}(x_{j})\right]|0\rangle$ at the $j$-th point at location $x_{j}=(j-1)\Delta x$ and a global weight $\Omega$.
The initial condition (\[rho\_ic\]) corresponds to the distribution $$G_{0}(\vec{v})=\delta^{2}(\Omega-1)\prod_{j=1}^{M}\delta^{2}\left(\alpha_{j}-(\alpha_{j}^{+})^{\ast}\right)\frac{\exp(-|\alpha_{j}|^{2}/\overline{n}_{x})}{\pi\overline{n}_{x}},\label{G0}$$ where $\overline{n}_{x}=1/(e^{\lambda}-1)=N/M$ is the mean number of atoms ($N=\langle\hat{N}\rangle$) per spatial point in the initial $\beta=0$ state. We see that, at least initially, the variables are complex conjugates of each other: $\alpha_{j}^{+}=\alpha_{j}^{\ast}$. Fokker-Planck Equation ---------------------- To generate the Fokker-Planck equation (FPE) for $G(\vec{v})$ corresponding to the master equation (\[master\]) we use the following differential identities for the basis operators \[identities\] $$\begin{aligned} \sqrt{\Delta x}\,\widehat{\Psi}(x_{j})\widehat{\Lambda} & = & \alpha_{j}\,\widehat{\Lambda},\\ \sqrt{\Delta x}\,\widehat{\Psi}^{\dagger}(x_{j})\widehat{\Lambda} & = & \left(\alpha_{j}^{+}+\frac{\partial}{\partial\alpha_{j}}\right)\widehat{\Lambda},\\ \sqrt{\Delta x}\,\widehat{\Lambda}\widehat{\Psi}(x_{j}) & = & \alpha_{j}^{+}\,\widehat{\Lambda},\\ \sqrt{\Delta x}\,\widehat{\Lambda}\widehat{\Psi}^{\dagger}(x_{j}) & = & \left(\alpha_{j}+\frac{\partial}{\partial\alpha_{j}^{+}}\right)\widehat{\Lambda}.\end{aligned}$$ These convert quantities involving the operators $\widehat{\Psi}$, $\widehat{\Psi}^{\dagger}$ and $\widehat{\rho}_{u}$ to ones involving only $\widehat{\Lambda}$ and its derivatives.
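The initial distribution (\[G0\]) is straightforward to sample: each $\alpha_{j}$ is an independent complex Gaussian with mean-square modulus $\overline{n}_{x}$, with $\alpha_{j}^{+}=\alpha_{j}^{\ast}$ and $\Omega=1$. A minimal sketch in plain Python (illustrative only, not the production code; the function and variable names are ours):

```python
import random

def sample_initial_state(M, nbar_x, rng):
    """Draw one sample (alpha, alpha_plus, Omega) from the initial distribution G_0."""
    # Each alpha_j is a complex Gaussian with <|alpha_j|^2> = nbar_x,
    # so the real and imaginary parts each have variance nbar_x / 2.
    s = (nbar_x / 2.0) ** 0.5
    alpha = [complex(rng.gauss(0.0, s), rng.gauss(0.0, s)) for _ in range(M)]
    alpha_plus = [a.conjugate() for a in alpha]  # enforces delta^2(alpha_j - (alpha_j^+)^*)
    Omega = 1.0 + 0.0j                           # enforces delta^2(Omega - 1)
    return alpha, alpha_plus, Omega

# Check that <alpha_j^+ alpha_j> reproduces the target mean occupation nbar_x.
rng = random.Random(1)
M, nbar_x, S = 64, 2.5, 4000
mean_N = 0.0
for _ in range(S):
    alpha, alpha_plus, _ = sample_initial_state(M, nbar_x, rng)
    mean_N += sum((ap * a).real for ap, a in zip(alpha_plus, alpha)) / M
mean_N /= S
```

Such samples form the $\beta=0$ ensemble that is then propagated to lower temperatures by the stochastic evolution derived below.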
In what follows it will be convenient to label the $\alpha$ and $\alpha^{+}$ variables as $$\alpha_{j}^{(\nu)}=\left\{ \begin{array}{cl} \alpha_{j}, & \text{ if }\nu=1,\\ \alpha_{j}^{+}, & \text{ if }\nu=2.\end{array}\right.$$ Using (\[identities\]) on (\[master\]) one obtains $$\begin{aligned} \lefteqn{\int\frac{\partial G(\vec{v})}{\partial\beta}\widehat{\Lambda}\, d^{4M+2}\vec{v}=-\int G(\vec{v})}\label{differential}\\ & & \times\left\{ \frac{g}{4\Delta x}\sum_{j,\nu}(\alpha_{j}^{(\nu)})^{2}\frac{\partial^{2}}{\partial(\alpha_{j}^{(\nu)})^{2}}+K(\vec{v})\right.\notag\\ & & \hspace*{-2em}\left.+\frac{1}{2}\sum_{j}\left[\left(\frac{\partial K(\vec{v})}{\partial\alpha_{j}^{+}}\right)\frac{\partial}{\partial\alpha_{j}}+\left(\frac{\partial K(\vec{v})}{\partial\alpha_{j}}\right)\frac{\partial}{\partial\alpha_{j}^{+}}\,\right]\,\right\} \widehat{\Lambda}\, d^{4M+2}\vec{v},\notag\end{aligned}$$ with $$N_{j}=\alpha_{j}^{+}\alpha_{j},$$ which is initially the number of particles at the $j$-th site, and an effective complex-variable Gibbs factor $K$ corresponding to $\mathrm{Tr}\left[{(\widehat{H}-\mu_{e}\widehat{N})\widehat{\Lambda}}\right]/\mathrm{Tr}\left[{\widehat{\Lambda}}\right]$: $$K(\vec{v})=\sum_{j}\left\{ \frac{\hbar^{2}\left(\nabla\alpha_{j}^{+}\right)\left(\nabla\alpha_{j}\right)}{2m}-\mu_{e}N_{j}+\frac{gN_{j}^{2}}{2\Delta x}\right\} .\label{GibbsK}$$ Here $\nabla\alpha_{j}$ is the discretized analogue of the gradient of a complex field $\alpha(x)$ that satisfies $\alpha(x_{j})=\alpha_{j}$. To obtain an FPE for $G(\vec{v})$ we proceed as follows. Firstly, we can make use of the additional “gauge” identity that follows trivially from Eq. (\[lambda\]), $$\left(\Omega\frac{\partial}{\partial\Omega}-1\right)\widehat{\Lambda}=0,\label{gaugeidentity}$$ to convert $K(\vec{v})\widehat{\Lambda}=K(\vec{v})\Omega\frac{\partial}{\partial\Omega}\widehat{\Lambda}$ on the first line of Eq. (\[differential\]).
This step is necessary in order to obtain an equation of a form that can later be sampled with a diffusive process. Secondly, we integrate by parts to obtain differentials of $G$ rather than $\widehat{\Lambda}$. Thirdly, if the distribution $G$ is well bounded as $|\alpha_{j}|,|\alpha_{j}^{+}|,|\Omega|\rightarrow\infty$, we can discard the boundary terms. As it turns out (see Appendix \[append:gauge\]), this is not fully justified for the equation (\[differential\]), and the boundary behavior will need to be improved with the help of a stochastic gauge as described originally in [@deuar-drummond-2002]. However, for demonstrative purposes let us proceed for now, and return to remedy the problem in Sec. \[sect:ito\] below. Lastly, given an equation of the form $\int\widehat{\Lambda}\times\lbrack\text{Differential operator}]G(\vec{v})\, d\vec{v}=0$, one solution is certainly $[\text{Differential operator}]G(\vec{v})=0$, which is the following FPE: $$0 = \left\{ \frac{\partial}{\partial\Omega}\Omega K(\vec{v})-\frac{\partial}{\partial\beta}-\sum_{j,\nu}\left[\frac{g}{4\Delta x}\frac{\partial^{2}}{\partial(\alpha_{j}^{(\nu)})^{2}}(\alpha_{j}^{(\nu)})^{2}+\frac{1}{2}\frac{\partial}{\partial\alpha_{j}^{(\nu)}}\left(\frac{\hbar^{2}(\nabla^{2}\alpha_{j}^{(\nu)})}{2m}+\mu_{e}\alpha_{j}^{(\nu)}-\frac{g\alpha_{j}^{(\nu)}N_{j}}{\Delta x}\right)\right]\right\} G(\vec{v}).\label{ppfpe}$$ Equivalent diffusion -------------------- A diffusive random walk that corresponds to the Fokker-Planck equation (\[ppfpe\]) is found by replacing the analytic derivatives with appropriate derivatives of the real and imaginary parts of $\alpha_{j}^{(\nu)}$ [@positiveP; @Gardiner]. This results in a diffusion matrix in the phase-space variables $\vec{v}$ with no negative eigenvalues.
In the Ito calculus this is equivalent to the following set of stochastic differential equations $$\begin{aligned} \frac{d\alpha_{j}^{(\nu)}}{d\beta} & = & \frac{1}{2}\left(\mu_{e}+\frac{\hbar^{2}\nabla^{2}}{2m}-\frac{gN_{j}}{\Delta x}\right)\alpha_{j}^{(\nu)}\notag\\ & & +i\alpha_{j}^{(\nu)}\sqrt{\frac{g}{2\Delta x}}\zeta_{j}^{(\nu)}(\beta),\label{ppequations}\\ \frac{d\Omega}{d\beta} & = & -\Omega K(\vec{v}).\notag\end{aligned}$$ We do not use diffusion gauges [@deuar-drummond-2006] here and decompose the diffusion matrix in the most straightforward fashion. Here, the $\zeta_{j}^{(\nu)}(\beta)$ are real, delta-correlated, independent white Gaussian noise fields that satisfy the stochastic averages \[noises\] $$\begin{aligned} \langle\zeta_{j}^{(\nu)}(\beta)\rangle_{\mathcal{S}} & = & 0,\\ \langle\zeta_{i}^{(\nu)}(\beta)\zeta_{j}^{(\nu^{\prime})}(\beta^{\prime})\rangle_{\mathcal{S}} & = & \delta_{ij}\delta_{\nu\nu^{\prime}}\delta(\beta-\beta^{\prime}).\end{aligned}$$ In practice, at each step of size $\Delta\beta$, one generates $M$ independent real Gaussian random variables of variance $1/\Delta\beta$ for each $\zeta_{j}^{(\nu)}$. Equations (\[ppequations\]) can be intuitively interpreted by noting that the equation for the amplitudes $\alpha_{j}^{(\nu)}$ at each point is a Gross-Pitaevskii equation in imaginary time, with some extra noises that emulate the wandering of trajectories in a path integral formulation around the mean field solution given by the deterministic part; the wander is independent for each $\nu$. The weight evolution of $\Omega$ generates the Gibbs factors of the grand canonical ensemble. Final equations {#sect:ito} --------------- A straightforward application of the diffusion equations (\[ppequations\]) is foiled by the presence of an instability in the $d\alpha_{j}^{(\nu)}/d\beta$ equations.
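To make the procedure concrete, here is a minimal sketch (ours, in plain Python) of one Euler–Maruyama step of Eqs. (\[ppequations\]) on the lattice, with a periodic finite-difference Laplacian. It is for orientation only: in actual calculations the gauged equations (\[Gequations\]) of the next subsection must be used, precisely because of the instability just noted. For $g=0$ the noise terms vanish and a single mode grows deterministically, $N_{j}(\beta)=N_{j}(0)e^{\mu_{e}\beta}$, which the ideal-gas check below exploits.

```python
import math, random

def em_step(alpha, alpha_p, Omega, dbeta, mu_e, g, dx, hbar2_2m, rng):
    """One Euler-Maruyama step of the ungauged Ito equations (ppequations).
    hbar2_2m stands for hbar^2/2m; alpha, alpha_p are lists of complex amplitudes."""
    M = len(alpha)
    lap = lambda f, j: (f[(j + 1) % M] - 2.0 * f[j] + f[j - 1]) / dx**2  # periodic Laplacian
    noise_amp = math.sqrt(g / (2.0 * dx))
    new_a, new_ap = [], []
    K = 0.0 + 0.0j  # Gibbs factor, Eq. (GibbsK), accumulated over the lattice
    for j in range(M):
        N_j = alpha_p[j] * alpha[j]
        # real white noises: variance 1/dbeta per step, independent for each j and nu
        z1 = rng.gauss(0.0, 1.0) / math.sqrt(dbeta)
        z2 = rng.gauss(0.0, 1.0) / math.sqrt(dbeta)
        drift_a = 0.5 * (mu_e * alpha[j] + hbar2_2m * lap(alpha, j) - g * N_j * alpha[j] / dx)
        drift_ap = 0.5 * (mu_e * alpha_p[j] + hbar2_2m * lap(alpha_p, j) - g * N_j * alpha_p[j] / dx)
        new_a.append(alpha[j] + dbeta * (drift_a + 1j * alpha[j] * noise_amp * z1))
        new_ap.append(alpha_p[j] + dbeta * (drift_ap + 1j * alpha_p[j] * noise_amp * z2))
        grad_a = (alpha[(j + 1) % M] - alpha[j]) / dx       # forward-difference gradient
        grad_ap = (alpha_p[(j + 1) % M] - alpha_p[j]) / dx
        K += hbar2_2m * grad_ap * grad_a - mu_e * N_j + g * N_j**2 / (2.0 * dx)
    return new_a, new_ap, Omega - dbeta * Omega * K

# Ideal-gas check (g = 0): a single mode, N should grow as exp(mu_e * beta).
rng = random.Random(0)
a, ap, Om = [1.0 + 0.0j], [1.0 + 0.0j], 1.0 + 0.0j
for _ in range(1000):
    a, ap, Om = em_step(a, ap, Om, 0.001, 0.3, 0.0, 1.0, 0.5, rng)
N_final = (ap[0] * a[0]).real
```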
We use a stochastic gauge to remove this instability, in a manner described in [@deuar-drummond-2006; @deuar-thesis], with the details given in Appendix \[append:gauge\]. The final Ito stochastic equations of the samples are $$\begin{gathered} \frac{d\alpha_{j}^{(\nu)}}{d\beta}=\frac{1}{2}\left[\mu_{e}+\frac{\hbar^{2}\nabla^{2}}{2m}-\left(\frac{g}{\Delta x}\right)\left(|N_{j}|-i\,\text{Im}N_{j}\right)\right.\notag\\ \left.\hfill+i\zeta_{j}^{(\nu)}(\beta)\sqrt{\frac{2g}{\Delta x}}\,\right]\alpha_{j}^{(\nu)},\label{Gequations}\\ \frac{d\Omega}{d\beta}=\Omega\left[-K(\vec{v})-i\sqrt{\frac{g}{2\Delta x}}\sum_{j,\nu}\zeta_{j}^{(\nu)}(\beta)\left(|N_{j}|-\text{Re}N_{j}\right)\right].\notag\end{gathered}$$ Some technical details regarding integration procedure, importance sampling, and choice of $\mu_{e}(\beta)$ are given in Appendix \[append:numerix\]. Attention to these issues can speed up the calculations and reduce sampling errors by orders of magnitude. Evaluating observables ---------------------- Given $\mathcal{S}$ realizations of the variable sets $\vec{v}$, using fresh initial samples and noises $\zeta_{j}^{(\nu)}(\beta)$ each time, one generates an estimate of the expectation value of an observable $\widehat{O}$ as follows: $$\begin{aligned} E\left[\widehat{O}\right]=\frac{\mathrm{Tr}\left[{\widehat{O}\widehat{\rho}_{u}}\right]}{\mathrm{Tr}\left[{\widehat{\rho}_{u}}\right]} & = & \frac{\int G(\vec{v})\mathrm{Tr}\left[{\widehat{O}\widehat{\Lambda}(\vec{v})}\right]\, d\vec{v}}{\int G(\vec{v})\mathrm{Tr}\left[{\widehat{\Lambda}(\vec{v})}\right]\, d\vec{v}}\notag\\ =\frac{\left\langle \mathrm{Tr}\left[{\widehat{O}\widehat{\Lambda}(\vec{v})}\right]\right\rangle _{\mathcal{S}}}{\left\langle \mathrm{Tr}\left[{\widehat{\Lambda}(\vec{v})}\right]\right\rangle _{\mathcal{S}}} & = & \frac{\text{Re}\left\langle {\mathcal{F}\left[\widehat{O},\vec{v}\right]}\right\rangle _{\mathcal{S}}}{\text{Re}\left\langle {\Omega}\right\rangle _{\mathcal{S}}},\label{obs}\end{aligned}$$ 
where $\langle\cdots\rangle_{\mathcal{S}}$ denotes a stochastic average over the samples, and $\mathcal{F}$ is an appropriate function of the phase-space variables $\vec{v}$. The last line follows from properties of the operator basis $\widehat{\Lambda}$, and because the trace of $\widehat{\rho}_{u}$ and of expectation values are real. The identities (\[identities\]) can be used to readily evaluate $\mathcal{F}$ since $\mathrm{Tr}\left[{\widehat{\Lambda}}\right]=\Omega$. In particular, $$\left\langle \widehat{\Psi}^{\dagger}(x_{j})\widehat{\Psi}(x_{j})\right\rangle =\frac{\text{Re}\left\langle ({N_{j}\Omega})\right\rangle _{\mathcal{S}}}{\Delta x\ \text{Re}\left\langle \Omega\right\rangle _{\mathcal{S}}},\label{expn}$$ $$\left\langle \widehat{\Psi}^{\dagger}(x_{i})\widehat{\Psi}^{\dagger}(x_{j})\widehat{\Psi}(x_{j})\widehat{\Psi}(x_{i})\right\rangle =\frac{\text{Re}\left\langle ({N_{i}N_{j}\Omega)}\right\rangle _{\mathcal{S}}}{(\Delta x)^{2}\text{Re}\left\langle {\Omega}\right\rangle _{\mathcal{S}}},\label{expnn}$$ which explains the relationship between $N_{j}$ and the particle number at the $j$-th site. For the uniform system considered here, it is efficient to average the quantities over the entire lattice, so that e.g. $$g^{(2)}(r)=\frac{L\left\langle \int\widehat{\Psi}^{\dagger}(x)\widehat{\Psi}^{\dagger}(x+r)\widehat{\Psi}(x+r)\widehat{\Psi}(x)\, dx\right\rangle }{\left\langle \int\widehat{\Psi}^{\dagger}(x)\widehat{\Psi}(x)\, dx\right\rangle ^{2}}.\label{g2obs}$$ Uncertainty is estimated as follows: We separate the $\mathcal{S}$ realizations into $\mathcal{B}$ bins, such that $\mathcal{B}\gg1$ and $\mathcal{S}/\mathcal{B}\gg1$. One calculates an estimate for the expectation value of an observable in each bin independently (we denote by $\overline{O}_{i}$ the estimate obtained from the $i$-th bin). The best estimate for the expectation value of the observable is then $\langle\overline{O}_{i}\rangle_{\mathcal{B}}$.
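As an illustration, the weighted average (\[expn\]) and the binning procedure can be sketched as follows (our notation; the synthetic data below stand in for samples produced by the actual stochastic evolution):

```python
import math, random

def binned_estimate(values, nbins):
    """Best estimate and one-sigma error of a stochastic average via binning:
    split the realizations into B bins, average within each bin, then take the
    mean and standard error of the bin means, cf. Eq. (uncertainty)."""
    B = nbins
    per_bin = len(values) // B
    bin_means = [sum(values[i * per_bin:(i + 1) * per_bin]) / per_bin for i in range(B)]
    mean = sum(bin_means) / B
    var = sum(m * m for m in bin_means) / B - mean * mean
    return mean, math.sqrt(max(var, 0.0) / B)

def density_estimate(N_samples, Omega_samples, dx, nbins=50):
    """<Psi^dag Psi> at one site, Eq. (expn): Re<N Omega> / (dx Re<Omega>)."""
    num, dnum = binned_estimate([(N * Om).real for N, Om in zip(N_samples, Omega_samples)], nbins)
    den, _ = binned_estimate([Om.real for Om in Omega_samples], nbins)
    return num / (dx * den), dnum / (dx * den)

# Synthetic check: unit weights, N_j fluctuating about 3.0 with spread 0.5.
rng = random.Random(2)
S = 5000
N_s = [3.0 + rng.gauss(0.0, 0.5) for _ in range(S)]
Om_s = [1.0 + 0.0j] * S
n_est, n_err = density_estimate(N_s, Om_s, dx=1.0)
```

With unit weights the error estimate simply reduces to the standard error of the mean; in a real run the complex weights $\Omega$ make the ratio form essential.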
The one-sigma uncertainty in this estimate is obtained from the Central Limit theorem and is $$\Delta\overline{O}=\sqrt{\frac{\langle\overline{O}^{2}\rangle_{\mathcal{B}}-\langle\overline{O}\rangle_{\mathcal{B}}^{2}}{\mathcal{B}}}.\label{uncertainty}$$ Nearly ideal gas regime [\[]{}$\gamma\ll\min\{\tau^{2},\sqrt{\tau}\}$\] {#sect:nearlyideal} ======================================================================= We now present the perturbation theory results for the decoherent regime of a 1D Bose gas [@karenprl], where both the density and phase fluctuations are large and the local pair correlation $g^{(2)}(0)$ is always close to the result for non-interacting bosons, $g^{(2)}(0)=2$. Depending on the value of the temperature parameter $\tau$, we further distinguish two sub-regimes: the decoherent classical (DC) regime for $\tau\gg1$ and the decoherent quantum (DQ) regime for temperatures well below quantum degeneracy, $\tau\ll1$. Both can be treated using perturbation theory with respect to the coupling constant $g$ around the ideal Bose gas, for which the nonlocal pair correlation function has been studied in Ref. [@Bouchoule-density-density]. Here, we extend these results to account for the first-order perturbative terms. Perturbation theory in $\gamma$ {#sect:perturbation} ------------------------------- [ The correlations of a 1D Bose gas are governed by the action $$S\left[\Psi^{\ast}\Psi\right]=\int_{0}^{\beta}\! d\sigma\int\! dr\;\left[\Psi^{\ast}\partial_{\sigma}\Psi+{\cal H}(\Psi^{\ast},\Psi)\right],\label{eq:action}$$ written in terms of space- and imaginary-time-dependent *c*-number fields $\Psi(x,\sigma)$ in the Feynman path integral formalism. Here $\sigma$ is the imaginary time, which runs from $0$ to its maximum value $\beta=1/k_{B}T$, the inverse temperature. The Hamiltonian density ${\cal H}$ is obtained from (\[Hfull\]) by replacing the operators with the $c$-number fields.
Using action (\[eq:action\]), the pair correlation function is given by $$g^{(2)}(r)=\frac{1}{n^{2}Z}\int\mathcal{D}\Psi^{\ast}\Psi\; e^{-S\left[\Psi^{\ast}\Psi\right]}\Psi^{\ast}(0)\Psi^{\ast}(r)\Psi(r)\Psi(0),\label{eq:g2def}$$ where $Z=\int\mathcal{D}\Psi^{\ast}\Psi\; e^{-S\left[\Psi^{\ast}\Psi\right]}$ is the partition function. In Eq. (\[eq:g2def\]) and below, we use the notation that fields with imaginary time dependence omitted act at $\sigma=0$, i.e. $\Psi(r)\equiv\Psi(r,0)$. Expanding the action (\[eq:action\]) in powers of $g$, we obtain, up to first order, $$\begin{aligned} g^{(2)}(r)= & g_{\mathrm{ideal}}^{(2)}(r)-\frac{g}{2n^{2}}\int_{0}^{\beta}\! d\sigma\int\! dr^{\prime}\;\langle\Psi^{\ast}(r^{\prime},\sigma)\Psi^{\ast}(r^{\prime},\sigma)\notag\\ & \times\Psi(r^{\prime},\sigma)\Psi(r^{\prime},\sigma)\Psi^{\ast}(0)\Psi^{\ast}(r)\Psi(r)\Psi(0)\rangle,\label{expansion}\end{aligned}$$ where $g_{\mathrm{ideal}}^{(2)}(r)=1+G(r,0^{-})G(-r,0^{-})/n^{2}$ is the ideal Bose gas result following from Wick’s theorem. Note that since the expansion above is formally in powers of $g$, the final result can always be expressed in powers of $\gamma$ as $\gamma\propto g$. The average in Eq. (\[expansion\]) is evaluated using Wick’s theorem [@note-Wick] $$\begin{aligned} \Delta g^{(2)}(r) & =g^{(2)}(r)-g_{\mathrm{ideal}}^{(2)}(r)=-\frac{2g}{n^{2}}\int_{0}^{\beta}\! d\sigma\int\! dr^{\prime}\;\label{eq:wick}\\ & \times G(r^{\prime},\sigma)G(r-r^{\prime},-\sigma)G(r^{\prime}-r,\sigma)G(-r^{\prime},-\sigma),\notag\end{aligned}$$ with the Green’s function $$\begin{aligned} G(r,\sigma) & = & -\langle\Psi(0,0)\Psi^{\ast}(r,\sigma)\rangle\notag\\ & = & \frac{1}{\beta L}\sum_{k,n}\frac{e^{ikr-i\hbar\omega_{n}\sigma}}{i\hbar\omega_{n}-\hbar^{2}k^{2}/2m+\mu}.\label{Grsimga}\end{aligned}$$ The $\omega_{n}(\beta)$ are the Matsubara frequencies and the imaginary time $\sigma$ runs between 0 and $\beta$.
The Green’s function is periodic in imaginary time for bosons and antiperiodic for fermions. Thus it can be Fourier transformed with $\omega_{n}=2\pi n/\beta$ (bosons) or $\omega_{n}=\pi(2n+1)/\beta$ (fermions). The discrete sum over $k$ becomes an integral in the thermodynamic limit. ]{} In terms of a Green’s function $G_{k}(\sigma)$ that is Fourier transformed with respect to the spatial coordinate, $\Delta g^{(2)}(r)$ can be brought to the form $$\Delta g^{(2)}(r)=-\frac{2g}{n^{2}}\int_{0}^{\beta}\! d\sigma\!\int\frac{dk}{2\pi}e^{ikr}\Gamma(k,\sigma)\Gamma(k,-\sigma),\label{eq:weak_first}$$ where $$\Gamma(k,\sigma)=\frac{1}{2\pi}\int dp\ G_{p+k}(\sigma)G_{p}(-\sigma),\label{Gamma-1}$$ and $$G_{k}(\sigma)=\left\{ \begin{array}{cc} -n_{k}(\beta)e^{-\sigma(\hbar^{2}k^{2}/2m-\mu)}, & \sigma<0,\\ -[1+n_{k}(\beta)]e^{-\sigma(\hbar^{2}k^{2}/2m-\mu)}, & \sigma>0,\end{array}\right.\label{G-k}$$ with $$n_{k}(\beta)=\frac{1}{e^{(\hbar^{2}k^{2}/2m-\mu)\beta}-1}\label{eq:bedistribution}$$ being the standard bosonic occupation numbers. Decoherent classical regime --------------------------- For temperatures above quantum degeneracy, $\tau\gg1$, the chemical potential is large and negative, so the bosonic occupation numbers are small, $n_{k}(\beta)\ll1$, and can be approximated by the Boltzmann distribution, $n_{k}(\beta)\simeq e^{-(\hbar^{2}k^{2}/2m-\mu)\beta}$. Accordingly, the function $G_{k}(\sigma)$ in Eq. (\[G-k\]) becomes a Gaussian $$G_{k}(\sigma)=\left\{ \begin{array}{cc} -\exp[-(\hbar^{2}k^{2}/2m-\mu)(\sigma+\beta)], & \sigma<0,\\ -\exp[-(\hbar^{2}k^{2}/2m-\mu)\sigma], & \sigma>0,\end{array}\right.$$ and Eq. (\[Gamma-1\]) is integrated to yield $$\Gamma(k,\sigma)=\Gamma(k,-\sigma)=ne^{-\sigma(\beta-\sigma)\hbar^{2}k^{2}/2m\beta}.\label{eq:gamma_k_result}$$ Here the mean density at a given temperature and chemical potential is determined from $n=\frac{1}{2\pi}\int dk\ G_{k}(0^{-})=\sqrt{m/(2\pi\hbar^{2}\beta)}\, e^{\beta\mu}$. Using Eq.
(\[eq:gamma\_k\_result\]), the correction (\[eq:weak\_first\]) to the pair correlation function is found as (see Appendix \[append:integrals-nearly-ideal\]) $$\Delta g^{(2)}(r)=-\gamma\sqrt{\frac{2\pi}{\tau}}\;\mathrm{erfc}\left(\sqrt{\frac{\tau n^{2}r^{2}}{2}}\right),\label{eq:weak_g2_res}$$ where $\mathrm{erfc}(x)$ is the complementary error function. ![Nonlocal pair correlation $g^{(2)}(r)$ in the nearly ideal gas regime: (a) decoherent classical regime, $\tau\gg\max\{1,\gamma^{2}\}$, Eq. (\[DC\]), with $r$ in units of the thermal de Broglie wavelength $\Lambda_{T}=\sqrt{4\pi/(\tau n^{2})}$; (b) decoherent quantum regime, $\sqrt{\gamma}\ll\tau\ll1$, Eq. (\[DQ\]), with $r$ in units of the phase coherence length $l_{\phi}=2/n\tau$.[]{data-label="DQ_DC"}](g2r_long_Fig_1a.eps "fig:"){width="8cm"}\ ![Nonlocal pair correlation $g^{(2)}(r)$ in the nearly ideal gas regime: (a) decoherent classical regime, $\tau\gg\max\{1,\gamma^{2}\}$, Eq. (\[DC\]), with $r$ in units of the thermal de Broglie wavelength $\Lambda_{T}=\sqrt{4\pi/(\tau n^{2})}$; (b) decoherent quantum regime, $\sqrt{\gamma}\ll\tau\ll1$, Eq. (\[DQ\]), with $r$ in units of the phase coherence length $l_{\phi}=2/n\tau$.[]{data-label="DQ_DC"}](g2r_long_Fig_1b.eps "fig:"){width="8cm"} Together with $g_{\mathrm{ideal}}^{(2)}(r)=1+\exp[-\tau n^{2}r^{2}/2]$ ($\tau\gg1$), this gives the following result for the pair correlation function in the DC regime ($\tau\gg\max\{1,\gamma^{2}\}$): $$g^{(2)}(r)=1+e^{-(r\sqrt{2\pi}/\Lambda_{T})^{2}}-\sqrt{\frac{2\pi\gamma^{2}}{\tau}}\;\mathrm{erfc}\left(\frac{r\sqrt{2\pi}}{\Lambda_{T}}\right).\label{DC}$$ This is written in terms of the thermal de Broglie wavelength $$\Lambda_{T}=\sqrt{\frac{2\pi\hbar^{2}}{mT}}=\sqrt{\frac{4\pi}{\tau n^{2}}},$$ a quantity that will appear repeatedly in what follows. At $r=0$ we have $g^{(2)}(0)=2-\gamma\sqrt{2\pi/\tau}$ in agreement with Ref. [@karenprl].
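Equation (\[eq:weak\_g2\_res\]) can be checked numerically. Performing the Gaussian $k$-integral in Eq. (\[eq:weak\_first\]) with $\Gamma$ from Eq. (\[eq:gamma\_k\_result\]) leaves a one-dimensional $\sigma$-integral, which the substitution $\sigma=\beta\sin^{2}\theta$ renders regular. A sketch of the check (ours; midpoint quadrature) in units $\hbar=m=k_{B}=1$, where the definitions used here give $\gamma=g/n$ and $\tau=2T/n^{2}$:

```python
import math

def dg2_numeric(r, g, n, T, npts=20000):
    """Delta g2(r) in the DC regime from Eq. (eq:weak_first):
    after the Gaussian k-integral and the substitution sigma = beta*sin^2(theta),
    Delta g2 = -(2 g sqrt(beta/pi)) * int_0^{pi/2} exp(-r^2 / (4 beta sin^2 cos^2)) dtheta."""
    beta = 1.0 / T
    h = (math.pi / 2.0) / npts
    total = 0.0
    for i in range(npts):
        th = (i + 0.5) * h
        s2c2 = (math.sin(th) * math.cos(th)) ** 2
        total += math.exp(-r * r / (4.0 * beta * s2c2))
    return -2.0 * g * math.sqrt(beta / math.pi) * total * h

def dg2_closed(r, g, n, T):
    """Closed form, Eq. (eq:weak_g2_res)."""
    gamma, tau = g / n, 2.0 * T / n**2   # hbar = m = k_B = 1
    return -gamma * math.sqrt(2.0 * math.pi / tau) * math.erfc(n * r * math.sqrt(tau / 2.0))
```

At $r=0$ both reduce to $-g\sqrt{\pi\beta}$, and the agreement persists at finite $r$.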
In the non-interacting limit ($\gamma=0$) we recover the well-known result for the classical ideal gas [@Naraschewski-Glauber] characterized by Gaussian decay with a correlation length $\Lambda_{T}$. For $\gamma>0$ we observe [\[]{}see Fig. \[DQ\_DC\](a)\] the emergence of anomalous behavior, with a global maximum $g^{(2)}(r_{\max})=g^{(2)}(0)+2\gamma^{2}/\tau$ at nonzero interparticle separation $nr_{\max}=2\gamma/\tau\ll1$. [This corresponds to the emergence of antibunching, $g^{(2)}(0)<g^{(2)}(r_{\max})$, due to repulsive interactions.]{} As $\gamma$ is increased further, there is a continuous transition from the DC regime to the regime of high-temperature “fermionization” (see Sec. \[sect:fermionization\]), with $g^{(2)}(0)$ reducing further and the maximum moving to larger distances. Decoherent quantum regime ------------------------- [ For temperatures below quantum degeneracy, with $\sqrt{\gamma}\ll\tau\ll1$, only $\omega_{n}=0$ contributes to the Green’s function $$G_{k}(\sigma)=-T[\hbar^{2}k^{2}/(2m)+|\mu|]^{-1},\label{eq:green_boson_deg}$$ which gives the relation between the density and the chemical potential $n=T\sqrt{m/(2\hbar^{2}|\mu|)}$, $\mu=-|\mu|$. Performing the Fourier transform of Eq. (\[eq:green\_boson\_deg\]) one obtains the one-particle density matrix for the ideal gas $$\begin{aligned} g_{\text{ideal}}^{(1)}(r)=\langle\hat{\Psi}^{\dagger}(0)\hat{\Psi}(r)\rangle/n=\exp(-r/l_{\phi}),\label{eq:g1}\end{aligned}$$ which characterizes the decay of phase coherence over a length scale given by $$l_{\phi}=\frac{\hbar^{2}}{2m|\mu|}=\frac{2}{n\tau},$$ and also determines the second-order correlation function for the ideal gas $$g_{\text{ideal}}^{(2)}(r)=1+|g_{\text{ideal}}^{(1)}(r)|^{2}=1+e^{-2r/l_{\phi}}.\label{eq:g2ideal}$$ ]{} [ The one-particle Green’s function, Eq. (\[eq:green\_boson\_deg\]), together with Eq. (\[Gamma-1\]) leads to $\Gamma(k,\sigma)=4n^{2}l_{\phi}/(k^{2}l_{\phi}^{2}+4)$. Inserting it into Eq.
(\[eq:weak\_first\]) we obtain (see Appendix \[append:integrals-nearly-ideal\]) corrections to $g_{\text{ideal}}^{(2)}(r)$, leading to the following result for the pair correlation function in the DQ regime $$g^{(2)}(r)=1+\left[1-\frac{4\gamma}{\tau^{2}}\left(1+\frac{2r}{l_{\phi}}\right)\right]e^{-2r/l_{\phi}}.\label{DQ}$$ This has the maximum value $g^{(2)}(0)=2-4\gamma/\tau^{2}$, in agreement with the result of Ref. [@karenprl]. For $\gamma=0$ the correlations decay exponentially, with a characteristic correlation length of half the phase coherence length that describes the long-wavelength phase fluctuations. ]{} An interesting feature in this regime is the apparent prediction of weak antibunching *at a distance* as seen in Fig. \[DQ\_DC\] (b), with $g^{(2)}(r_{\min})<1$. The strongest antibunching in expression (\[DQ\]) occurs at $nr_{\min}=\tau/4\gamma\gg1$, or $r_{\min}=l_{\phi}\tau^{2}/8\gamma\gg l_{\phi}$, and dips below unity by an amount $(4\gamma/\tau^{2})\exp(-\tau^{2}/4\gamma)\ll1$. However, there is ambiguity regarding its existence: One should note that the dip below unity is very small in the region of uncontested validity of Eq. (\[DQ\]) where $\tau/\sqrt{\gamma}\gg1$, and only becomes appreciable around $\tau\lesssim2\sqrt{\gamma}$, which is in the crossover region into the quasi-condensate (see Sec. \[sect:weak\]). Whether such anomalous antibunching survives higher order corrections in the small parameter $\sqrt{\gamma}/\tau$ remains to be seen. Our numerical calculations to date have not been able to access a regime of small enough $\sqrt{\gamma}/\tau$ to confirm or deny its existence. The numerical examples shown in Fig. \[DQ\_lowgam\] are for $\sqrt{\gamma}/\tau\simeq0.24$ and $\sqrt{\gamma}/\tau\simeq0.77$, and show a thermal bunching peak with a typical Gaussian shape at the shortest range of $\Lambda_{T}$, with $\Lambda_{T}\ll l_{\phi}$.
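The location and depth of this minimum follow from elementary calculus on Eq. (\[DQ\]): writing $x=2r/l_{\phi}$ and $A=4\gamma/\tau^{2}$, so that $g^{(2)}=1+[1-A(1+x)]e^{-x}$, the minimum lies at $x=1/A$ (i.e. $nr_{\min}=\tau/4\gamma$) with a dip of $Ae^{-1/A}$ below unity. A quick numerical check (ours), using a deliberately exaggerated, unphysically large $A$ so that the dip is visible:

```python
import math

def g2_dq(x, A):
    """Eq. (DQ) in scaled variables: x = 2 r / l_phi, A = 4 gamma / tau^2."""
    return 1.0 + (1.0 - A * (1.0 + x)) * math.exp(-x)

A = 0.25                                    # exaggerated for visibility (hypothetical value)
xs = [i * 1e-3 for i in range(1, 20001)]    # scan x in (0, 20]
x_min = min(xs, key=lambda x: g2_dq(x, A))
depth = 1.0 - g2_dq(x_min, A)               # dip below unity
```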
At longer ranges, phase coherence dominates this and leads to exponential decay on the length scale $l_{\phi}$, in agreement with Eq. (\[DQ\]). Quantum/classical transition {#subsect:DC/DQ} ---------------------------- The transition from the quantum to the classical decoherent gas was investigated using the gauge-$P$ numerical method. The behavior is shown in Figs. \[DQ\_lowgam\]–\[tau1\]. ![Approach of the pair correlation function to the ideal gas solution (shown dashed) in the decoherent quantum regime at $\tau=0.1$, with $r$ in units of the thermal de Broglie wavelength, $\Lambda_{T}=\sqrt{4\pi/\tau n^{2}}$. The thickness of the solid lines (numerical results) comes from the superimposed $1\sigma$ error bars which are below resolution.[]{data-label="DQ_lowgam"}](DQ_gamma.eps){width="8cm"} ![Exact behavior of $g^{(2)}(r)$, with $r$ in units of $\Lambda_{T}$, in the nearly ideal gas regime with $\gamma=0.001$ and varying $\tau$ around the quantum/classical crossover. In panel (b), the derivative $f=\partial\lbrack\ln(g^{(2)}(r)-1)]/\partial r$ shows a clear distinction between exponential decay (when $f$ is constant) and Gaussian thermal-like behavior when $f$ is linear. The triple lines indicate the numerical curves together with $1\sigma$ error bars which are mostly below resolution.[]{data-label="lowgam"}](lowg.eps "fig:"){width="8cm"}\ ![Exact behavior of $g^{(2)}(r)$, with $r$ in units of $\Lambda_{T}$, in the nearly ideal gas regime with $\gamma=0.001$ and varying $\tau$ around the quantum/classical crossover. In panel (b), the derivative $f=\partial\lbrack\ln(g^{(2)}(r)-1)]/\partial r$ shows a clear distinction between exponential decay (when $f$ is constant) and Gaussian thermal-like behavior when $f$ is linear. 
The triple lines indicate the numerical curves together with $1\sigma$ error bars which are mostly below resolution.[]{data-label="lowgam"}](lowg_gradients.eps "fig:"){width="8cm"} With rising temperature, still below degeneracy, one first finds a rounding-off of the exponential behavior at short ranges of a fraction of $\Lambda_{T}$, as seen in Fig. \[DQ\_lowgam\]. There is also a global lowering of $g^{(2)}(r)$ with $\gamma$. It should be noted that the parameters for the numerical results shown in Fig. \[DQ\_lowgam\] are not deep in the regime where (\[DQ\]) applies accurately, and the lowering of the tails with $\gamma$ is weaker here than predicted by that limiting expression. Considering variation with $T$: as the temperature approaches and then exceeds $T_{d}$, Gaussian thermal-like behavior appears first at short ranges, progressively taking over an ever larger part of $g^{(2)}(r)$ as temperature is raised. This is seen in Fig. \[lowgam\]. The exponential tails can persist at ranges $r\gtrsim\Lambda_{T}/\sqrt{2\pi}$ well into the high temperature regime when $\gamma$ is small, as seen in Fig. \[lowgam\](b) for $\tau=3$ and even $\tau=10$. ![Approach to the classical decoherent gas solution (shown dashed), Eq. (\[DC\]), for finite but small interaction with $\gamma/\sqrt{\tau}=0.03$, which corresponds to a variation of density while keeping the coupling $g$ and $T$ constant. Here $g^{(2)}(0)\rightarrow1.925$ in the $\tau\rightarrow\infty$ or equivalently $n\rightarrow0$ limit. Triple solid lines are the numerical results, with $1\sigma$ error bars below resolution.[]{data-label="DC_constn"}](DC_constn.eps){width="8cm"} There are three scenarios that can typically be controlled in ultracold gas experiments: (*i*) varying the absolute temperature changes $\tau$ but not $\gamma$, as in Fig.
\[lowgam\]; (*ii*) varying the coupling strength via a Feshbach resonance or varying the width of the trapping potential affects $\gamma$ but not $\tau$, as considered in Section \[sect:HTFxover\] and Fig. \[DQ\_lowgam\]; and (*iii*) varying the linear density gives changes in both $\gamma$ and $\tau$, while keeping the quantity $\gamma/\sqrt{\tau}$ constant. Notably, this is the parameter that appears in the analytic expressions for both decoherent regimes, Eqs. (\[DQ\]) and (\[DC\]). ![Behavior of $g^{(2)}(r)$ in the crossover region between decoherent classical and quantum gas at $\tau=1$. Values of $\gamma$ shown are $0.001$, $0.003$, $0.01$, $0.03$, $0.06$, $0.1$ and $0.2$ as the curves for $g^{(2)}(r)$ descend. []{data-label="tau1"}](tau1.eps){width="8cm"} Figure \[DC\_constn\] shows the behavior under scenario (*iii*), where increasing $\tau$ corresponds to decreasing density of the gas. As expected, $g^{(2)}(0)$ tends to a constant value $g^{(2)}(0)=2-\gamma\sqrt{2\pi/\tau}\neq2$ with $\tau\rightarrow\infty$ predicted by Eq. (\[DC\]). Interestingly, the crossover is quite broad under changing density, with departures from the decoherent classical result still visible at $\tau\sim100$. Finally, in the middle of the crossover region at $\tau=1$, $\gamma\ll1$, there is the smooth and quite broad transition from low values of $\gamma$ to $\gamma\sim{\mathcal{O}}(1)$ that is shown in Fig. \[tau1\]. The situation of a short-range Gaussian with standard deviation $\sim\Lambda_{T}/2\sqrt{\pi}$ and exponential tails with length scale $l_{\phi}/2$ that was seen in Fig. \[lowgam\] morphs into an anomalous form with a local maximum that is similar to the high temperature fermionization behavior described below in Sections \[sect:strong\] and \[sect:HTFxover\].
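The statement in scenario (*iii*) is easy to verify from the definitions used here, $\gamma=mg/\hbar^{2}n$ and $\tau=2mk_{B}T/\hbar^{2}n^{2}$: the combination $\gamma/\sqrt{\tau}=(g/\hbar)\sqrt{m/2k_{B}T}$ does not involve the density. A trivial check (ours, arbitrary units):

```python
import math

def gamma_tau(n, g, T, m=1.0, hbar=1.0, kB=1.0):
    """Dimensionless interaction and temperature parameters of the uniform 1D Bose gas."""
    gamma = m * g / (hbar**2 * n)
    tau = 2.0 * m * kB * T / (hbar**2 * n**2)
    return gamma, tau

g, T = 0.05, 2.0
ratios = []
for n in (0.1, 1.0, 10.0, 100.0):
    gamma, tau = gamma_tau(n, g, T)
    ratios.append(gamma / math.sqrt(tau))  # should be n-independent: g*sqrt(m/2T)/hbar
```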
Weakly interacting quasi-condensate regime [\[]{}$\tau^{2}\ll\gamma\ll1$\] {#sect:weak} ========================================================================== In the regime of weak interactions and low temperature (or Gross-Pitaevskii regime) with $\gamma\ll1$, we rely on the fact that the equilibrium state of the gas is that of a quasi-condensate [@mermin-wagner-hohenberg; @petrov-1d-regimes]. In this regime the density fluctuations are suppressed while the phase still fluctuates. The pair correlation function is close to one and the deviations can be calculated using the Bogoliubov theory. In this approach, the field operator $\hat{\Psi}$ is represented as a sum of the ($c$-number) macroscopic component $\Psi_{0}$, containing excitations with momenta $k\lesssim k_{0}\ll\xi^{-1}$ (where $\xi=\hbar/\sqrt{mgn}$ is the healing length), and a small operator component $\delta\hat{\Psi}$ describing excitations with larger momenta, $\hat{\Psi}=\Psi_{0}+\delta\hat{\Psi}$. The momentum $k_{0}$ is chosen such that most of the particles are contained in $\Psi_{0}$; however, its precise value does not enter the lowest-order corrections to $g^{(2)}(r)$, which are $\mathcal{O}(\delta\hat{\Psi}^{2})$.
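For reference, the Bogoliubov dispersion and coefficients used in what follows are easily tabulated numerically; the sketch below (ours, units $\hbar=m=1$) checks the normalization $u_{k}^{2}-v_{k}^{2}=1$ and the phonon limit $\epsilon_{k}\rightarrow\hbar ck$ with sound velocity $c=\sqrt{gn/m}$:

```python
import math

def bogoliubov(k, g, n):
    """Free-particle energy E_k, Bogoliubov energy eps_k, and coefficients (u_k, v_k);
    units hbar = m = 1, cf. Eq. (u-v) below."""
    E = 0.5 * k * k
    eps = math.sqrt(E * (E + 2.0 * g * n))
    u = (eps + E) / (2.0 * math.sqrt(eps * E))
    v = (eps - E) / (2.0 * math.sqrt(eps * E))
    return E, eps, u, v

g, n = 0.1, 1.0
c = math.sqrt(g * n)                      # Bogoliubov speed of sound
E, eps, u, v = bogoliubov(0.05, g, n)     # k chosen well inside the phonon regime, k*xi << 1
```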
Using Wick’s theorem, and the property of the thermal density matrix that $\langle\delta\hat{\Psi}\rangle=0$, the pair correlation function is then reduced to$$g^{(2)}(r)\simeq1+\frac{2}{n}\left(\text{Re}\langle\delta\hat{\Psi}^{\dagger}(r)\delta\hat{\Psi}(0)\rangle+\text{Re}{\langle\delta\hat{\Psi}(r)\delta\hat{\Psi}(0)\rangle}\right).\label{g2-Bog}$$ The normal and anomalous averages $\langle\delta\hat{\psi}^{\dagger}(r)\delta\hat{\psi}(0)\rangle$ and $\langle\delta\hat{\psi}(r)\delta\hat{\psi}(0)\rangle$ are calculated using the Bogoliubov transformation$$\delta\hat{\psi}(r)=\frac{1}{L}\sum\nolimits _{k}\left(u_{k}\hat{a}_{k}e^{ikx}-v_{k}\hat{a}_{k}^{\dagger}e^{-ikx}\right),$$ where $L$ is the length of the quantization box, $\hat{a}_{k}$ and $\hat{a}_{k}^{\dagger}$ are the annihilation and creation operators of elementary excitations, and $(u_{k},v_{k})$ are the expansion coefficients given by $$u_{k}=\frac{\epsilon_{k}+E_{k}}{2\sqrt{\epsilon_{k}E_{k}}},\; v_{k}=\frac{\epsilon_{k}-E_{k}}{2\sqrt{\epsilon_{k}E_{k}}},\label{u-v}$$ and satisfying $u_{k}^{2}-v_{k}^{2}=1$. Here $\epsilon_{k}=\sqrt{E_{k}(E_{k}+2gn)}$ is the Bogoliubov excitation energy, $E_{k}=\hbar^{2}k^{2}/(2m)$, and we note that the following useful relationships between $E_{k}$ and $\epsilon_{k}$ hold:$$\begin{aligned} E_{k} & = & \sqrt{\epsilon_{k}^{2}+(gn)^{2}}-gn,\\ \frac{E_{k}}{\epsilon_{k}} & = & \left[\frac{k^{2}}{k^{2}+(2/\xi)^{2}}\right]^{1/2},\label{E_over_eps}\end{aligned}$$ where $\xi=\hbar/\sqrt{mgn}$ is the healing length. The equilibrium occupation numbers of the Bogoliubov excitations are given by $\tilde{n}_{k}=\langle\hat{a}_{k}^{\dagger}\hat{a}_{k}\rangle=[e^{\epsilon_{k}/T}-1]^{-1}$. Applying the Bogoliubov transformation to the normal and anomalous averages in Eq. 
(\[g2-Bog\]) gives $$\begin{aligned} g^{(2)}(r) & = & 1+\frac{1}{\pi n}\int\limits _{-\infty}^{+\infty}dk\,\cos(kr)\notag\\ & & \times\left[(u_{k}-v_{k})^{2}\tilde{n}_{k}+v_{k}(v_{k}-u_{k})\right].\end{aligned}$$ Using Eq. (\[u-v\]) for the coefficients $u_{k}$ and $v_{k}$ we obtain the following result for the pair correlation function $$g^{(2)}(r)=1+\frac{1}{2\pi n}\int\limits _{-\infty}^{+\infty}dk\left[\frac{E_{k}}{\epsilon_{k}}(2\tilde{n}_{k}+1)-1\right]\cos(kr).$$ For convenience, we split the $g^{(2)}(r)$-function into two parts corresponding to the contributions of thermal and vacuum fluctuations, $$g^{(2)}(r)=1+G_{0}(r)+G_{T}(r),\label{g2r-split}$$ with$$G_{0}(r)=\frac{1}{2\pi n}\int\limits _{-\infty}^{+\infty}dk\left[\frac{E_{k}}{\epsilon_{k}}-1\right]\cos(kr),\label{G-0-def}$$ and $$G_{T}(r)=\frac{1}{\pi n}\int\limits _{-\infty}^{+\infty}dk\frac{E_{k}}{\epsilon_{k}}\tilde{n}_{k}\cos(kr).\label{G-T-def}$$ We first evaluate the vacuum contribution $G_{0}(r)$, Eq. (\[G-0-def\]). As shown in Appendix \[append:Bogoliubov\], the integral in (\[G-0-def\]) can be evaluated exactly in terms of special functions, giving $$G_{0}(r)=-\sqrt{\gamma}\left[\mathbf{L}_{-1}(2\sqrt{\gamma}nr)-I_{1}(2\sqrt{\gamma}nr)\right],\label{G0-specialf}$$ where $\mathbf{L}_{-1}(x)$ is the modified Struve function and $I_{1}(x)$ is a modified Bessel function of the first kind. [The correlation length scale here is set by the healing length $\xi=\hbar/\sqrt{mgn}=1/\sqrt{\gamma}n$.]{} [Quasi-condensate at low temperatures]{} ---------------------------------------- At very low temperatures, when the excitations are dominated by vacuum fluctuations and the thermal fluctuations are a small correction, the $G_{T}(r)$-term is calculated as follows. First, we substitute the explicit expression for $\tilde{n}_{k}$ into Eq.
(\[G-T-def\]), giving$$G_{T}(r)=\frac{1}{\pi n}\int\limits _{-\infty}^{+\infty}dk\frac{E_{k}}{\epsilon_{k}}\frac{1}{e^{\epsilon_{k}/T}-1}\cos(kr).\label{G-T-lowT}$$ As shown in Appendix \[append:Bogoliubov\], for $T\ll gn$ (or $\tau\ll\gamma$) the integral can be simplified and gives $$G_{T}(r)\simeq\frac{\pi}{2\sqrt{\gamma}}\left[\frac{1}{n^{2}\pi^{2}r^{2}}-\frac{\tau^{2}}{4\gamma}\operatorname{cosech}^{2}\left(\frac{\pi\tau nr}{2\sqrt{\gamma}}\right)\right].\label{G-T-GPa}$$ ![Nonlocal pair correlation $g^{(2)}(r)$ in the weakly interacting regime, with $r$ in units of the healing length $\xi=1/\sqrt{\gamma}n$: (a) low-temperature weakly interacting gas at $\tau\ll\gamma\ll1$, Eq. (\[GPa\]); (b) weakly interacting gas at $\gamma\ll\tau\ll\sqrt{\gamma}$, Eq. (\[GPb\]).[]{data-label="GP"}](g2r_long_Fig_6a.eps "fig:"){width="8cm"}\ ![Nonlocal pair correlation $g^{(2)}(r)$ in the weakly interacting regime, with $r$ in units of the healing length $\xi=1/\sqrt{\gamma}n$: (a) low-temperature weakly interacting gas at $\tau\ll\gamma\ll1$, Eq. (\[GPa\]); (b) weakly interacting gas at $\gamma\ll\tau\ll\sqrt{\gamma}$, Eq. (\[GPb\]).[]{data-label="GP"}](g2r_long_Fig_6b.eps "fig:"){width="8cm"} Combining Eqs. (\[g2r-split\]), (\[G0-specialf\]) and (\[G-T-GPa\]) we obtain the following final result for this regime ($\tau\ll\gamma\ll1$):$$\begin{aligned} g^{(2)}(r) & =1-\sqrt{\gamma}\left[\mathbf{L}_{-1}(2r/\xi)-I_{1}(2r/\xi)\right]\notag\\ & +\frac{\sqrt{\gamma}\xi^{2}}{2\pi r^{2}}-\frac{\pi\tau^{2}}{8\gamma^{3/2}}\sinh^{-2}\left(\frac{\pi\tau r}{2\gamma\xi}\right).\label{GPa}\end{aligned}$$ [ In the limit of $\tau\rightarrow0$, the terms in the second line of Eq. (\[GPa\]) cancel each other and the large-distance ($r\gg\xi$) asymptotics of the difference of special functions $\mathbf{L}_{-1}(x)-I_{1}(x)\sim1/8\pi x^{2}$ ensures the expected inverse square decay of correlations[@giamarchi-book].
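Equation (\[GPa\]) is straightforward to evaluate numerically; a minimal sketch in units $n=1$ (an assumption of the sketch), using the exact identity $\mathbf{L}_{-1}(x)=2/\pi+\mathbf{L}_{1}(x)$ so that only non-negative orders of `scipy.special.modstruve` are needed, and checking consistency with the $r\rightarrow0$ value $g^{(2)}(0)=1-2\sqrt{\gamma}/\pi+\pi\tau^{2}/(24\gamma^{3/2})$ of Ref. [@karenprl]:

```python
import numpy as np
from scipy.special import modstruve, iv

# Sketch of Eq. (GPa) in units n = 1, so the healing length is
# xi = 1/sqrt(gamma).  The modified Struve function of order -1 is
# obtained from the identity L_{-1}(x) = 2/pi + L_1(x).
def g2_gpa(r, gamma, tau):
    xi = 1.0 / np.sqrt(gamma)
    x = 2.0 * r / xi
    struve_m1 = 2.0 / np.pi + modstruve(1, x)
    vacuum = -np.sqrt(gamma) * (struve_m1 - iv(1, x))
    thermal = (np.sqrt(gamma) * xi**2 / (2.0 * np.pi * r**2)
               - (np.pi * tau**2 / (8.0 * gamma**1.5))
               / np.sinh(np.pi * tau * r / (2.0 * gamma * xi))**2)
    return 1.0 + vacuum + thermal

gamma, tau = 0.01, 0.001          # deep in the regime tau << gamma << 1
g2_zero = 1.0 - 2.0*np.sqrt(gamma)/np.pi + np.pi*tau**2/(24.0*gamma**1.5)
print(g2_gpa(0.05, gamma, tau), g2_zero)   # nearly equal at small r
```

At small $r$ the two $1/r^{2}$ pieces in the thermal part cancel to leading order, leaving the finite residue $\pi\tau^{2}/(24\gamma^{3/2})$, so the evaluation stays numerically well behaved even for $r\ll\xi$.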
At small but finite temperatures, the same large-distance asymptotics exactly cancels the inverse square behavior in the second line of Eq. (\[GPa\]) leaving only the exponential decay $$\begin{aligned} g^{(2)}(r)\underset{r\rightarrow\infty}{\longrightarrow}1-\frac{\pi\tau^{2}}{8\gamma^{3/2}}e^{-\pi\tau r/\gamma\xi}\label{eq:g2r_Bogo_exp_decay}\end{aligned}$$ to the uncorrelated value of $g^{(2)}(r)=1$. This is again in full agreement with the Luttinger liquid theory [@giamarchi-book]. ]{} We note that even at $T=0$, oscillating terms are absent, in contrast to the strongly interacting regime of Sec. \[sect:strong-Tonks\], Eq. (\[TGa\]). The limit $r\rightarrow0$ in Eq. (\[GPa\]) reproduces the result of Eq. (9) of Ref. [@karenprl], $g^{(2)}(0)=1-2\sqrt{\gamma}/\pi+\pi\tau^{2}/(24\gamma^{3/2})$. In Fig. \[GP\](a) we plot Eq. (\[GPa\]) for different values of the interaction parameter $\gamma$, and we note that the finite temperature correction term is negligible here. Thermally excited quasi-condensate ---------------------------------- In the opposite limit, dominated by thermal rather than vacuum fluctuations and corresponding to $\gamma\ll\tau\ll\sqrt{\gamma}$, the thermal part of the pair correlation function is calculated as follows. We first note that large thermal fluctuations correspond to $\tilde{n}_{k}\gg1$, which in turn requires $\epsilon_{k}/T\ll1$. Thus, we replace $\tilde{n}_{k}$ in the integral (\[G-T-def\]) by $\tilde{n}_{k}=[\exp(\epsilon_{k}/T)-1]^{-1}\simeq T/\epsilon_{k}\gg1$. [ With this substitution, the integral for $G_{T}(r)$ is dominated by the free-particle (quadratic in $k$) part of the Bogoliubov spectrum and the calculations in Appendix \[append:Bogoliubov\] yield $$G_{T}(r)=\frac{\tau}{2\sqrt{\gamma}}e^{-2\sqrt{\gamma}nr}.\label{G-T-GPb}$$ This result is valid for $r/\xi\lesssim1$. For $r/\xi\gg1$ the main contribution to the integral in Eq. 
(\[G-T-def\]) comes from the phonon (linear in $k$) part of the Bogoliubov spectrum and one recovers the behavior given by Eq. (\[eq:g2r\_Bogo\_exp\_decay\]). ]{} Combining Eqs. (\[g2r-split\]), (\[G0-specialf\]) and (\[G-T-GPb\]) we obtain the following final result for this regime ($\gamma\ll\tau\ll\sqrt{\gamma}$ [and $r\lesssim\xi$]{}):$$\begin{aligned} g^{(2)}(r) & =1+\frac{\tau}{2\sqrt{\gamma}}e^{-2r/\xi}\notag\\ & -\sqrt{\gamma}\left[\mathbf{L}_{-1}(2r/\xi)-I_{1}(2r/\xi)\right].\label{GPb}\end{aligned}$$ The last two terms are due to vacuum fluctuations and are a negligible correction here, so the leading term gives an exponential decay of correlations [\[]{}see Fig. \[GP\](b)\] with a characteristic correlation length given by the healing length $\xi=1/\sqrt{\gamma}n$. The peak value at $r=0$ is $g^{(2)}(0)=1+\tau/(2\sqrt{\gamma})$, in agreement with Ref. [@karenprl]. Strongly interacting regime [\[]{}$\gamma\gg\max\{1,\sqrt{\tau}\}$\] {#sect:strong} ==================================================================== Perturbation theory in $1/\gamma$ --------------------------------- By mapping the system onto that of a weakly attractive 1D fermion gas [@Cheon-Shigevare] one can perform perturbation theory in $1/\gamma\ll1$. The formalism is the same as in Sec. \[sect:perturbation\], except that $\Psi$ is now a fermionic field and the interaction term in the Hamiltonian (\[Hfull\]) has to be modified to describe effective attractive interaction between fermions with matrix elements (in $k$-space) $V_{k}=-2\hbar^{2}k^{2}/(mn\gamma)$ [@Cheon-Shigevare]. Then $$g^{(2)}(r)=g_{\gamma=\infty}^{(2)}(r)+\Delta g^{(2)}(r)$$ with $g_{\gamma=\infty}^{(2)}(r)=1-e^{-n^{2}\tau r^{2}/2}$. The first order corrections to $g^{(2)}(r)$ are given by the Hartree-Fock approximation as a sum of the direct and exchange contributions $$\begin{aligned} \Delta g_{d}^{(2)}(r) & =\int_{0}^{\beta}\! 
d\sigma\!\int\frac{dk}{2\pi}\; V_{k}\Gamma(k,\sigma,r=0)\Gamma(-k,\sigma,r=0)e^{ikr},\label{eq:the_diagram}\\ \Delta g_{e}^{(2)}(r) & =-\int_{0}^{\beta}\! d\sigma\!\int\frac{dk}{2\pi}\; V_{k}\Gamma(k,\sigma,r)\Gamma(-k,\sigma,-r)e^{ikr},\label{delta_g_exchange}\end{aligned}$$ where $$\Gamma(k,\sigma,r)=\int dp\ G_{p+k}(\sigma)G_{p}(-\sigma)e^{ipr}/2\pi,\label{Gamma-2}$$ in terms of the Green’s function $G_{k}(\sigma)$ for free fermions. Regime of high-temperature “fermionization” {#sect:fermionization} ------------------------------------------ We proceed with evaluation in the regime of high-temperature “fermionization” at temperatures well above quantum degeneracy, $\tau\gg1$. In this regime, we use the Maxwell-Boltzmann distribution of quasi-momenta as the unperturbed state. In the temperature interval $1\ll\tau\ll\gamma^{2}$, the characteristic distance related to the interaction between the particles – the 1D scattering length $a_{1D}=\hbar^{2}/mg\simeq l_{\perp}^{2}/a\sim1/\gamma n$ – is much smaller than the thermal de Broglie wavelength $\Lambda_{T}$, and the small perturbation parameter is $a_{1D}/\Lambda_{T}\ll1$ [@karenprl]. From the same formalism as in Sec. \[sect:perturbation\], the free fermion Green’s function is now given by $$G_{k}(\sigma)=\left\{ \begin{array}{ll} \exp[(\beta+\sigma)(\mu-\hbar^{2}k^{2}/2m)], & -\beta<\sigma<0,\\ -\exp[\mu\sigma-\sigma\hbar^{2}k^{2}/2m], & \;\;\;0<\sigma<\beta,\end{array}\right.$$ so the integral for $\Gamma(k,\sigma,r)$, Eq. (\[Gamma-2\]), gives $$\Gamma(k,\sigma,r)=-ne^{-\sigma(\beta-\sigma)\hbar^{2}k^{2}/2m\beta}e^{-mr^{2}/(2\hbar^{2}\beta)}e^{-ikr\sigma/\beta}.\label{eq:gamma_res}$$ Substituting Eq. (\[eq:gamma\_res\]) into Eqs.
(\[eq:the\_diagram\]) and (\[delta\_g\_exchange\]) we obtain (see Appendix \[append:integrals-Tonks\]) $$\begin{aligned} \Delta g_{d}^{(2)}(r) & = & \frac{2\tau n|r|}{\gamma}e^{-n^{2}\tau r^{2}/2}-\frac{4}{n\gamma}\delta(r),\label{eq:direct_exchange}\\ \Delta g_{e}^{(2)}(r) & = & \frac{4}{n\gamma}\delta(r).\label{eq:exchange}\end{aligned}$$ The only effect of the exchange contribution $\Delta g_{e}^{(2)}$ is to cancel the delta-function in the direct contribution. This leaves us with the following result for the pair correlation function in the regime of high-temperature fermionization ($1\ll\tau\ll\gamma^{2}$): $$g^{(2)}(r)=1-\left[1-4\sqrt{\frac{\pi\tau}{\gamma^{2}}}\left(\frac{r}{\Lambda_{T}}\right)\right]e^{-(r\sqrt{2\pi}/\Lambda_{T})^{2}}.\label{TGb}$$ In the limit $r\rightarrow0$ this leads to perfect antibunching, $g^{(2)}(0)=0$, while the small finite corrections (as in Ref. [@karenprl], $g^{(2)}(0)=2\tau/\gamma^{2}$) are reproduced at order $\gamma^{-2}$. The correlation length associated with the Gaussian decay of correlations in Eq. (\[TGb\]) is given by the thermal de Broglie wavelength $\Lambda_{T}=\sqrt{4\pi/(\tau n^{2})}$. For not very large $\gamma$, the correlations do not decay in a simple way, but instead show an anomalous, non-monotonic behavior with a global maximum at $r_{\max}\simeq\gamma/(2\tau n)$. This originates from the effective Pauli-like blocking at short range and thermal bunching [\[]{}$g^{(2)}(r)>1$\] at long range. As $\gamma$ is increased the position of the maximum diverges and its value approaches 1 in a non-analytical way, $g^{(2)}(r_{\max})\simeq1+(4\tau/\gamma^{2})\exp(-\gamma^{2}/8\tau)$. ![Nonlocal pair correlation $g^{(2)}(r)$ as a function of the relative distance $r$ in the strongly interacting regime, $\gamma\gg1$: (a) regime of high-temperature “fermionization”, $1\ll\tau\ll\gamma^{2}$, Eq.
(\[TGb\]), with $r$ in units of the thermal de Broglie wavelength $\Lambda_{T}=\sqrt{4\pi/(\tau n^{2})}$; (b) low temperature Tonks-Girardeau regime, Eq. (\[TGa\]), for $\tau=0.01$, with $r$ in units of the mean interparticle separation $1/n$.[]{data-label="TGfig"}](g2r_long_Fig_7a.eps "fig:"){width="8cm"}\ ![Nonlocal pair correlation $g^{(2)}(r)$ as a function of the relative distance $r$ in the strongly interacting regime, $\gamma\gg1$: (a) regime of high-temperature “fermionization”, $1\ll\tau\ll\gamma^{2}$, Eq. (\[TGb\]), with $r$ in units of the thermal de Broglie wavelength $\Lambda_{T}=\sqrt{4\pi/(\tau n^{2})}$; (b) low temperature Tonks-Girardeau regime, Eq. (\[TGa\]), for $\tau=0.01$, with $r$ in units of the mean interparticle separation $1/n$.[]{data-label="TGfig"}](g2r_long_Fig_7b.eps "fig:"){width="8cm"} Figure \[TGfig\](a) shows a plot of Eq. (\[TGb\]) for various ratios of $\gamma^{2}/\tau$. For a well-pronounced global maximum, moderate values of $\gamma^{2}/\tau$ are required (such as $\gamma^{2}/\tau\simeq5$, with $\tau=8$, $\gamma=6$), and these lie near the boundary of validity ($\gamma^{2}/\tau\gg1$) for our perturbative result in the high-temperature fermionization regime. Exact numerical calculations described in Ref. [@drummond-canonical-gauge], and in more detail below in Sec. \[sect:HTFxover\] do, however, show qualitatively similar global maxima.
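Equation (\[TGb\]) can be explored directly; a minimal sketch in units $n=1$ (an assumption of the sketch), using the $\tau=8$, $\gamma=6$ values quoted above and locating the anomalous maximum by a grid search:

```python
import numpy as np

# Sketch of Eq. (TGb) in units n = 1, with Lambda_T = sqrt(4*pi/tau).
def g2_htf(r, gamma, tau):
    lam = np.sqrt(4.0 * np.pi / tau)
    u = r / lam
    return 1.0 - (1.0 - 4.0*np.sqrt(np.pi*tau/gamma**2)*u) * np.exp(-2.0*np.pi*u**2)

gamma, tau = 6.0, 8.0          # gamma^2/tau = 4.5, near the validity boundary
r = np.linspace(0.0, 4.0, 40001)
g2 = g2_htf(r, gamma, tau)
r_max = r[np.argmax(g2)]
print(g2[0], r_max, g2.max())  # antibunching at r = 0; bunching peak at r_max
```

Setting the derivative of Eq. (\[TGb\]) to zero gives the stationary point $u_{\ast}=[1+\sqrt{1+A^{2}/\pi}]/(2A)$ in units of $\Lambda_{T}$, where $A=4\sqrt{\pi\tau}/\gamma$; for $\gamma^{2}/\tau\gg1$ this reduces to the $r_{\max}\simeq\gamma/(2\tau n)$ estimate quoted in the text.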
Zero- and low-temperature (Tonks-Girardeau) regime {#sect:strong-Tonks} -------------------------------------------------- At $T=0$ the procedure is straightforward [@cherny_brand2] and yields the known [@korepin-book; @cherny_brand2] result $$\begin{gathered} g_{T=0}^{(2)}(r)=1-\frac{\sin^{2}(\zeta)}{\zeta^{2}}-\frac{4}{\gamma}\frac{\sin^{2}(\zeta)}{\zeta^{2}}-\frac{2\pi}{\gamma}\frac{\partial}{\partial\zeta}\frac{\sin^{2}(\zeta)}{\zeta^{2}}\notag\\ +\frac{2}{\gamma}\frac{\partial}{\partial\zeta}\left[\frac{\sin(\zeta)}{\zeta}\int\nolimits _{-1}^{1}dt\sin(\zeta t)\ln\frac{1+t}{1-t}\right],\label{TGa}\end{gathered}$$ where $\zeta\equiv\pi nr$. The last term here diverges logarithmically with $\zeta$ and can be regarded as a first order perturbation correction to the fermionic inverse square power law. Accordingly, Eq. (\[TGa\]) is valid for $\zeta\ll\exp(\gamma)$. At temperatures well below quantum degeneracy, $\tau\ll1$, finite temperature corrections to Eq. (\[TGa\]) are obtained using a Sommerfeld expansion around the zero temperature Fermi-Dirac distribution for the quasi-momenta. For $rn\ll\tau^{-1}$ this gives an additional contribution of $\tau^{2}\sin^{2}(\pi nr)/12\pi^{2}$ to the right hand side of Eq. (\[TGa\]), which is negligible compared to the $T=0$ result as $\tau\ll1$. At $r=0$, Eq. (\[TGa\]) gives perfect antibunching $g^{(2)}(0)=0$, which corresponds to a fully “fermionized” 1D Bose gas, where the strong inter-atomic repulsion mimics the Pauli exclusion principle for intrinsic fermions. By extending the perturbation theory to include terms of order $\gamma^{-2}$ we can reproduce the known result for the local pair correlation at zero temperature $g^{(2)}(0)=4\pi^{2}/(3\gamma^{2})$ [@karenprl; @gangardt-correlations]. In Fig. \[TGfig\](b) we plot the function $g^{(2)}(r)$, Eq. (\[TGa\]), for various $\gamma$.
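Equation (\[TGa\]) can also be evaluated numerically; a sketch in units $n=1$ (so $\zeta=\pi r$; an assumption of the sketch), taking the $\zeta$-derivative of the bracketed integral by a central finite difference, and verifying that the $\mathcal{O}(1/\gamma)$ terms cancel as $\zeta\rightarrow0$ to give $g^{(2)}(0)=0$:

```python
import numpy as np
from scipy.integrate import quad

# Sketch of Eq. (TGa) in units n = 1 (zeta = pi*r).  The derivative of
# sin^2(zeta)/zeta^2 is analytic; the bracketed integral term is
# differentiated by a central finite difference.
def s(z):                                     # sin^2(zeta)/zeta^2
    return np.sinc(z / np.pi)**2

def ds(z):                                    # d/dzeta of s(zeta)
    return np.sin(2.0*z)/z**2 - 2.0*np.sin(z)**2/z**3

def bracket(z):                               # (sin z / z) * integral term
    integral, _ = quad(lambda t: np.sin(z*t)*np.log((1.0+t)/(1.0-t)),
                       -1.0, 1.0, limit=200)
    return np.sinc(z/np.pi) * integral

def g2_tg(z, gamma, h=1e-2):
    dbracket = (bracket(z+h) - bracket(z-h)) / (2.0*h)
    return (1.0 - s(z) - 4.0*s(z)/gamma
            - 2.0*np.pi*ds(z)/gamma + 2.0*dbracket/gamma)

print(g2_tg(1e-3, 10.0))                      # vanishes as zeta -> 0
```

Near $\zeta=0$ the bracketed term behaves as $2\zeta$, so its derivative contributes $+4/\gamma$ and exactly cancels the $-4/\gamma$ term, reproducing the perfect antibunching stated in the text.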
According to the physical interpretation of the pair correlation function $g^{(2)}(r)$, its oscillatory structure, and hence the existence of local maxima and minima at certain finite values of $r$, implies that there exist more and less likely separations between the pairs of particles in the gas. This can be interpreted as a quasi-crystalline order (with a period of $\sim1/n$) in the two-particle sector of the many-body wave function even though the density of the gas is uniform. The oscillatory behavior of the pair correlation in this strongly interacting regime is similar to Friedel oscillations in the density profile of a 1D interacting electron gas with an impurity [@Friedel]. We also mention that our derivation of Eq. (\[TGa\]) is equally valid for strong attractive interactions, i.e., when $\gamma<0$ and $|\gamma|\gg1$, and therefore it describes the pair correlations in a metastable state known as the super-Tonks gas [@super-Tonks]. Numerical results {#subsect:HTF-numerix} ----------------- Numerical calculations with the gauge-$P$ method are able to reach only the low-$\gamma$ (or, equivalently, high $\tau$) edge of the high-temperature fermionization regime; however, a comparison with Eq. (\[TGb\]) is instructive. In Fig. \[HTF\_verge\] we see that the length scale on which antibunching occurs is still qualitatively given by Eq. (\[TGb\]) while any discrepancies are of the same size as at $r=0$. This is actually a general feature in all the parameter regimes explored by the numerics. Overall, the discrepancy between the $1/\gamma$ perturbation expansions (\[DC\]), (\[DQ\]), (\[TGb\]), and the exact behavior of $g^{(2)}(r)$ at nonzero $r$ is roughly the same as at $r=0$.
Since $g^{(2)}(0)$ [@karenprl] is usually much more straightforward to evaluate from the exact solution of the Yang-Yang integral equations [@yangyang1] than the full stochastic calculation of $g^{(2)}(r)$, it can serve as a useful guide to whether a numerical calculation is warranted or not. ![Behavior on the verge of the high-$T$ fermionization regime for $\gamma^{2}/\tau=4$. The dashed line is Eq. (\[TGb\]).[]{data-label="HTF_verge"}](HTF_4.eps){width="8cm"} Classical/Fermionization transition and correlation maxima {#sect:HTFxover} ========================================================== ![Crossover from decoherent classical to high temperature fermionization regimes at high temperature.[]{data-label="HTFDC"}](HTF_var.eps){width="8cm"} Figure \[HTFDC\] shows the behavior in the transition region between the decoherent classical and high temperature fermionization regimes (found with the gauge-$P$ numerical method), when one is far above the degeneracy temperature $T_{d}$. One sees the appearance of a maximum in the correlations at finite range as the transition is approached. As pointed out in Sec. \[sect:fermionization\], this arises from an interplay of thermal bunching and repulsive antibunching on comparable scales. A comparison of relevant length scales indicates that $\tau\approx\gamma^{2}$ here corresponds to $\Lambda_{T}\sim a_{1D}$, where $a_{1D}$ is the “1D scattering length” that describes the asymptotic behavior of the wave function in two-body scattering. ![Situation when $T>T_{d}$ and the local second-order coherence is apparently unity. All curves plotted correspond to parameter values for which $g^{(2)}(0)=1$ in the crossover region between the classical decoherent and high-temperature fermionized gas. The dots (rather than triple lines here, for clarity) indicate $1\sigma$ error bars.[]{data-label="g2is1"}](g2ois1.eps){width="8cm"} An interesting behavior occurs in the crossover regime when $\gamma^{2}/\tau\simeq0.1-0.4$.
Here we can have $g^{(2)}(0)=1$ just like in the quasi-condensate or “Gross-Pitaevskii” regime, indicating local second-order coherence. However, unlike the quasi-condensate regime, the non-local correlations on length scales of $\sim\Lambda_{T}$ are *not* coherent, and in fact appreciably bunched. This is shown in Fig. \[g2is1\]. It is a symptom of the broader correlation maximum phenomenon. ![Heights of the anomalous peak of $g^{(2)}(r)$ that occurs at nonzero $r_{\max}$, for different values of $\tau$, as functions of $g^{(2)}(0)$ – (a) and $\gamma^{2}/\tau$ – (b). The height is taken to be $h\equiv g^{(2)}(r_{\max})-g^{(2)}(0)$ at high temperatures when $g^{(2)}(0)>1$, and $h\equiv g^{(2)}(r_{\max})-1$ when $g^{(2)}(0)<1$. The two regimes are separated by the dot-dashed vertical line in (a). Analytic results from Eq. (\[DC\]) in the decoherent quantum regime are shown as a dashed line. Dots (rather than triple lines here, for clarity) indicate $1\sigma$ error bars on the numerical results.[]{data-label="peakheights"}](peakheights.eps "fig:"){width="8cm"}\ ![Heights of the anomalous peak of $g^{(2)}(r)$ that occurs at nonzero $r_{\max}$, for different values of $\tau$, as functions of $g^{(2)}(0)$ – (a) and $\gamma^{2}/\tau$ – (b). The height is taken to be $h\equiv g^{(2)}(r_{\max})-g^{(2)}(0)$ at high temperatures when $g^{(2)}(0)>1$, and $h\equiv g^{(2)}(r_{\max})-1$ when $g^{(2)}(0)<1$. The two regimes are separated by the dot-dashed vertical line in (a). Analytic results from Eq. (\[DC\]) in the decoherent quantum regime are shown as a dashed line. Dots (rather than triple lines here, for clarity) indicate $1\sigma$ error bars on the numerical results.[]{data-label="peakheights"}](peakheights_gamsqtau.eps "fig:"){width="8cm"} The height of this maximum for more general parameters is shown in Fig. \[peakheights\] as a function of both $g^{(2)}(0)$ and $\gamma^{2}/\tau$.
One sees that this behavior is well pronounced in the crossover between high temperature fermionization and decoherent classical regimes, peaking when $g^{(2)}(0)\simeq1$ (a situation shown also in Fig. \[g2is1\]), or, equivalently, $\gamma^{2}\sim0.3\tau$. As one reaches degenerate temperatures, the maximum peak height is reduced, and presumably disappears completely by the time the quasi-condensate regime is reached by going to smaller values of $\gamma$. Although we were unable to numerically reach the relevant quasi-condensate region for $\tau<1$, a more refined numerical setup that improves the importance sampling or the $\mu(T)$ trajectory described in Appendix \[append:numerix\] may allow this. Numerical limitations {#sect:numerical-limitations} ===================== ![Regimes and their numerical accessibility: the asterisks indicate the lowest $\tau$ and highest $\gamma$ reachable using the gauge-$P$ method as described in Appendix \[append:numerix\]. The dark dashed line indicates the point at which $g^{(2)}(0)=1$. []{data-label="Fig:numerixlimit"}](gamtauplot.eps){width="8cm"} Figure \[Fig:numerixlimit\] shows the regime that was accessible using the relatively straightforward numerical scheme that was employed here, and detailed in Appendix \[append:numerix\]. (It is the region above and to the left of the asterisks). In particular, one sees that of the physical regimes described in previous sections, the decoherent classical, as well as parts of the decoherent quantum and high-temperature fermionization regimes were accessible, while the quasi-condensate and Tonks-Girardeau regimes were not. The principal difficulty that is encountered, generally speaking, is the growth of statistical noise with increasing $\beta$, i.e. decreasing $\tau$, which eventually prevents one from obtaining values of $g^{(2)}(r)$ with a useful resolution. This arises in two different ways depending on the region of interest. 
Firstly, in the strongly interacting (fermionized) region, one needs a correspondingly large coupling constant $g\propto\gamma$ which leads to a relative increase of the importance of the noise terms of the $d\alpha_{j}^{(\nu)}/d\beta$ equations in (\[Gequations\]). This leads to large statistical uncertainty in the $\alpha_{j}^{(\nu)}$ themselves or to the weight $\Omega$ whose evolution depends on them. The upshot is that the inverse temperature $\beta$ at which the noise becomes unmanageable becomes smaller and smaller as $\gamma$ grows. Technical improvements are unlikely to make a large dent in the problem in the fermionized regime because it ultimately stems from the fact that coherent states are no longer a good basis over which to expand the density matrix. They are not close to the preferred eigenstates of the system. Instead, one can think of constructing a phase-space distribution that uses a non-coherent-state basis, for example, a Gaussian basis [@CorneyDrummond]. This general approach - together with symmetry projections - has been utilized in successfully calculating ground state properties of the strongly correlated fermionic Hubbard model [@AimiImada]. Secondly, in the low $\gamma$ and $\tau$ region, one has a different underlying source of statistical uncertainty. The longest relevant length here is either the coherence length $l_{\phi}$ or the healing length $\xi$, and for correct calculations in the large uniform gas one must simulate a system of a total size appreciably greater than these lengths. This in turn imposes a minimal total particle number $$N\gtrsim\mathrm{max}\left[\mathcal{O}(4/\tau),\mathcal{O}(2/\sqrt{\gamma})\right].\label{Nminimum}$$ The thermal initial conditions of Eq. (\[G0\]) lead to variation in $N$ among trajectories, and since the Gibbs factor $K$ (see Eq. (\[GibbsK\])) grows linearly or faster with $N$, one also obtains a growing variation of $K(\vec{v})$. This enters the $d\Omega$ of Eq. 
(\[Gequations\]) and leads to a spread of the weights $\Omega(t)$ that grows rapidly (note the exponential growth of $\Omega$) with increasing $N$. However, because of the long length scales, Eq. (\[Nminimum\]) implies that large $N$ is needed for accurate calculations when $\tau$ or $\gamma$ are much smaller than one. The end result is domination of the whole calculation by one or a few trajectories with the highest weight, for all realistic ensemble sizes $\mathcal{S}$. As a corollary, significantly lower temperatures, even down to the quasi-condensate regime, *are* accessible at small $\gamma$ if one is prepared to sacrifice the assumption of an infinite-sized gas and consider periodic boundary conditions on some length $L$ that is smaller than or comparable to the coherence/healing lengths. This approach was taken, e.g., in [@Carusotto]. This stops the rise of overall particle number, hence one has a much smaller spread of weights $\Omega$ among the trajectories, and in the final analysis – reduced statistical uncertainty. Such calculations are no longer as general, though, and are not considered in this paper. We would like to point out that the limitation in this regime may be overcome or alleviated if the rather simplistic importance sampling used in the numerical method were improved, possibly using a Metropolis sampling procedure, as outlined at the end of Appendix \[append:preweight\]. Finally, it is also possible that a more refined choice of $\mu(\beta)$ (considered in Appendix \[append:mu\]) may lead to somewhat improved coverage of the parameter space in general. Overview and conclusion {#sect:conclusions} ======================= In conclusion, we have surveyed the behavior of the spatial two-particle correlation function in a repulsive uniform 1D Bose gas.
We have analyzed numerically the pair correlation functions for all relevant length scales, with the exception of several low-temperature transition regions (see Fig. \[Fig:numerixlimit\] below the asterisks) which were not accessible by the numerical scheme we employed. Approximate analytic results and methods have been presented for parameters deep within all the major physical regimes. The key features of this behavior include: - Thermal bunching with $g^{(2)}(0)\simeq2$ and Gaussian drop-off at ranges $\Lambda_{T}$ in the classical decoherent regime. - Exponential drop-off of correlations from $g^{(2)}(0)\simeq2$ at ranges $l_{\phi}$ in the decoherent quantum regime, along with Gaussian-like rounding at shorter ranges $\sim\Lambda_{T}$. - Suppressed density fluctuations with $g^{(2)}(0)\simeq1$ and exponential decay at ranges of the healing length $\xi$ in the quasi-condensate regime. - Antibunching with $g^{(2)}(0)<1$ and Gaussian decay at ranges $\Lambda_{T}$ in the high-temperature fermionization regime. - Antibunching with $g^{(2)}(0)<1$ and oscillatory decay on ranges of the mean interparticle separation $1/n$ in the Tonks-Girardeau regime. - Bunching at a range of $\sim0.3\Lambda_{T}$ in the crossover between classical and fermionized regimes around $\gamma^{2}\sim0.3\tau$. Let us consider the regimes in turn, starting from the classical decoherent gas, then going anti-clockwise in Fig. \[Fig:numerixlimit\]. The classical decoherent gas is well approximated by Boltzmann statistics and is dominated by thermal fluctuations. The pair correlation function shows typical thermal bunching and a Gaussian decay, with the correlation length given by the thermal de Broglie wavelength $\Lambda_{T}$. As one reduces the temperature, the gas becomes degenerate, the thermal de Broglie wavelength becomes larger than the mean interparticle separation and loses its relevance. The correlation length increases and one enters into the decoherent quantum regime. 
Here, the dominant behavior of the gas is the ideal Bose gas bunching, $g^{(2)}(0)\simeq2$, with large density fluctuations that decay exponentially on the length scale given by the phase coherence length $l_{\phi}$. Notably, the exponential behavior first appears well above degeneracy in the long-distance tails, being visible even around $\tau\sim10$ as in Fig. \[lowgam\]. Reducing the temperature even further, while still at $\gamma\ll1$, one enters into the quasi-condensate regime, in which the density fluctuations become suppressed and $g^{(2)}(0)\simeq1$. In the hotter sub-regime dominated by thermal fluctuations, the pair correlation shows weak bunching, $g^{(2)}(0)>1$, while in the colder sub-regime dominated by quantum fluctuations one has weak antibunching, $g^{(2)}(0)<1$. In both cases the pair correlation decays on the length scale of the healing length $\xi$. We now move to the right on Fig. \[Fig:numerixlimit\], into the regime of strong interactions, while staying at temperatures well below quantum degeneracy, $\tau\ll1$. This is the Tonks-Girardeau regime, in which the density fluctuations get further suppressed due to strong interparticle repulsion. Antibunching increases and one approaches $g^{(2)}(0)=0$ due to fermionization. The only relevant length scale here is the mean interparticle separation, $1/n$, and the pair correlation function decays on this length scale with some oscillations. We next move up on Fig. \[Fig:numerixlimit\], to higher temperatures, and enter the regime of high-temperature fermionization. At short range, the pair correlation here is still antibunched due to strong interparticle repulsion; however, thermal effects start to show up on the length scale of $\Lambda_{T}$. As a result of these competing effects, the nonlocal pair correlation develops an anomalous peak, corresponding to bunching at-a-distance, with $g^{(2)}(r_{\max})>1$, beginning around $\tau\sim\gamma^{2}/2$.
As we increase the temperature even further, the thermal effects start to dominate over interactions and the antibunching dip gradually disappears. At temperatures $\tau\sim\gamma^{2}$ we observe a crossover back to the classical decoherent regime. Our results provide new insights into the fundamental understanding of the 1D Bose gas model through many-body correlations. Calculation of these non-local correlations is not accessible yet through the exact Bethe ansatz solutions. We expect that our theoretical predictions will serve as guidelines for future experiments aimed at the measurement of nonlocal pair correlations in quasi-1D Bose gases. AGS, MJD, PDD and KVK acknowledge fruitful discussions with A. Yu. Cherny and J. Brand, and the support of this work by the Australian Research Council. DMG acknowledges support by EPSRC Advanced Fellowship EP/D072514/1. PD was supported by the European Community under the contract MEIF-CT-2006-041390. KVK, PD and DMG thank IFRAF and the Institut Henri Poincare–Centre Emile Borel for support during the 2007 Quantum Gases workshop in Paris where part of this work was completed. LPTMS is a mixed research unit No. 8626 of CNRS and Université Paris-Sud. Technical appendix for the gauge-$P$ calculations {#append:numerix} ================================================= Instability of the stochastic equations and its removal with a stochastic gauge {#append:gauge} ------------------------------------------------------------------------------- A straightforward application of the ungauged diffusion Eqs. (\[ppequations\]) is foiled by the presence of an instability in the $d\alpha_{j}^{(\nu)}/d\beta$ equations. We can see this if we first consider the evolution of $N_{j}$ and discard the noise and kinetic-energy parts of the equation. 
Taking the deterministic part from the Stratonovich calculus which is used for our numerics (this introduces the $1/2$ term below), one has $$\frac{\partial N_{j}}{\partial\beta}\sim N_{j}\left[\mu_{e}-\frac{g}{\Delta x}\left(N_{j}-\frac{1}{2}\right)\right].\label{deterministic}$$ There are stationary points at the vacuum $N_{j}=0$ and at $N_{j}=N_{a}=1/2+\mu_{e}\Delta x/g$, with the more positive stationary point (usually $N_{a}$) being an attractor, and the more negative a repellor [see Fig. \[FigGauge\](a)]. The deterministic evolution is easily solved, and starting from a time $\beta_{0}$ gives the later evolution as $$N_{j}(\beta)=\frac{N_{a}N_{j}(\beta_{0})}{N_{j}(\beta_{0})+(N_{a}-N_{j}(\beta_{0}))e^{-\mu_{e}(\beta-\beta_{0})}}.$$ If one has a negative $N_{j}(\beta_{0})$, which is possible due to the action of the noises $\zeta$, then at a later time $$\beta_{\text{sing}}=\beta_{0}+\frac{1}{\mu_{e}}\ln\left(1-\frac{N_{a}}{N_{j}(\beta_{0})}\right),$$ the solution has diverged to negative infinity. This behavior of the deterministic part of the equations is known as a “moving singularity” and is a well-known indicator of non-vanishing boundary terms when an integration by parts is performed on the operator equation (\[differential\]) [@Gilchrist; @deuar-thesis]. It implies that the FPE (\[ppfpe\]) is not fully equivalent to quantum mechanics. The use of a stochastic gauge to remove this kind of instability has been described in [@deuar-drummond-2006], and in more detail in [@deuar-thesis]. The gauge identity, Eq. (\[gaugeidentity\]), can be used on Eq. (\[differential\]) to introduce an arbitrary modification to the deterministic evolution (arising from first-order derivative terms) at the price of additional diffusion in the weight $\Omega$. Since the gauge identity is zero, we can add an arbitrary multiple of it to Eq. (\[differential\]).
In particular, if we add $$\begin{aligned} 0 & = & \int G(\vec{v})\sum_{j}\left\{ \frac{\mathcal{G}_{j}^{2}\Omega^{2}}{2}\frac{\partial^{2}}{\partial\Omega^{2}}\right.\\ & & \hspace*{-2em}\left.+i\mathcal{G}_{j}\sqrt{\frac{g}{2\Delta x}}\,\sum_{\nu}\alpha_{j}^{(\nu)}\frac{\partial}{\partial\alpha_{j}^{(\nu)}}\left(\Omega\frac{\partial}{\partial\Omega}-1\right)\right\} \widehat{\Lambda}\, d^{4M+2}\notag\end{aligned}$$ with arbitrary functions $\mathcal{G}_{j}(\vec{v},\beta)$, and perform the subsequent steps as before, then the diffusion matrix in the resulting FPE remains positive semidefinite (no negative eigenvalues), and the resulting Ito diffusion equations of the samples become $$\begin{aligned} \frac{d\alpha_{j}^{(\nu)}}{d\beta} & = & \frac{1}{2}\left(\mu_{e}+\frac{\hbar^{2}\nabla^{2}}{2m}-\frac{gN_{j}}{\Delta x}\right)\alpha_{j}^{(\nu)}\notag\\ & & +i\alpha_{j}^{(\nu)}\left[\zeta_{j}^{(\nu)}(\beta)-\mathcal{G}_{j}\right]\sqrt{\frac{g}{2\Delta x}}\,,\label{Gexplicitequations}\\ \frac{d\Omega}{d\beta} & = & \Omega\left[-K(\vec{v})+\sum_{j}\mathcal{G}_{j}\sum_{\nu}\zeta_{j}^{(\nu)}(\beta)\right],\notag\end{aligned}$$ instead of (\[ppequations\]). The $\alpha_{j}$ equations are modified and compensating correlated noises have been added to the $\Omega$ equation. We now wish to choose the functions $\mathcal{G}_{j}$, called stochastic gauges, so that the instability is removed, while also keeping the (now unbiased) statistical uncertainty manageable. Heuristic guidelines for choosing gauges have been investigated in detail in [@deuar-thesis]. Several choices for a single-mode system were also investigated there in Sec. 9.2 in terms of the resulting statistical uncertainties. The aim is to remove the real part of $N_{j}$ from the $\alpha_{j}$ equation when it is negative, so as to neutralize the moving singularity.
While for a single mode the “radial gauge” was found to give the best performance, later tests that we have performed on the full multimode ($M\gg1$) 1D gas show that the “minimal drift gauge” $$\mathcal{G}_{j}=i\left(\text{Re}N_{j}-|N_{j}|\right)\sqrt{\frac{g}{2\Delta x}}\label{mingauge}$$ gives better performance for this system. This is because it introduces the smallest modifications needed to remove the moving singularity, and hence the smallest noise contributions to the weight $\Omega$. The weight becomes much more important for multimode systems because each of the $M$ modes adds its own contribution to it, the total of which can become large. The phase-space modification for a single mode for the ungauged Eq. (\[deterministic\]) and the gauged equations is shown in Fig. \[FigGauge\]. One sees that in the “classical” $\mathrm{Re}[N_{j}]\gg\mathrm{Im}[N_{j}]$ region the trajectories are practically unchanged. The final Ito equations to be integrated are (\[Gequations\]). Comparisons to known exact results such as energy and density [@yangyang1], and $g^{(2)}(0)$ [@karenprl] indicate no deviations beyond what is predicted by the unbiased statistical uncertainties, Eq. (\[uncertainty\]), with the new gauged equations. Such a comparison can be seen in Fig. 2 of Ref. [@drummond-canonical-gauge]. ![Deterministic phase space of the Stratonovich form of the $dN_{j}$ equation, when $\mu_{e}=0$. **(a)**: ungauged, **(b)**: using the gauge (\[mingauge\]). The moving singularity in **(a)** is shown with a large arrow, the attractor in **(b)** at $|N_{j}|=N_{a}$ with a thick dashed line.[]{data-label="FigGauge"}](gaugephase.eps){width="8cm"} Integration procedure {#append:integration} --------------------- The actual integration is performed using a split-step semi-implicit method described in [@Ian], which requires the use of the Stratonovich stochastic calculus. There, it was shown to be highly superior to other low-order methods in terms of stability.
Although it is a low-order, Newton-like method, with the right choice of variables its performance is remarkably good. High-order methods such as Runge-Kutta suffer from serious complications when noise is present. In particular, one has to be very meticulous in tracking down and compensating for all the non-zero correlations within a single time step; these are much more complicated than the simple correction terms appearing in the Stratonovich semi-implicit method used here. Due to the multiplicative form of the equations (\[Gequations\]), it is highly advantageous to use logarithmic variables, which is made possible if one uses a split-step method. Here, a $\Delta\beta$ time step consists of the following four stages: First, the interaction part (containing $g$) is integrated in real space over a time step $\Delta\beta$. Second, the fields are Fourier-transformed to $k$-space, giving $\widetilde{\alpha}^{(\nu)}(k)$. Third, the kinetic-energy contributions are integrated over $\Delta\beta$. Finally, one Fourier-transforms back into real space, ready to start the next time step.
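A minimal sketch of one such time-step loop (deterministic part only, in the ideal-gas limit $g=0$, with $\hbar=m=1$ and illustrative grid parameters) shows the four stages; since the kinetic stage is exact in logarithmic variables, the loop can be checked against the exact exponential solution:

```python
import numpy as np

# Split-step loop sketch, deterministic ideal-gas limit (g = 0, hbar = m = 1).
M, dx, mu_e, dbeta, nsteps = 32, 0.5, 0.3, 0.01, 50
k = 2 * np.pi * np.fft.fftfreq(M, d=dx)        # lattice momenta

rng = np.random.default_rng(0)
alpha = rng.normal(size=M) + 1j * rng.normal(size=M)   # an initial field sample
alpha0 = alpha.copy()

for _ in range(nsteps):
    # Stage 1: interaction part in real space (a no-op here, since g = 0).
    # Stage 2: Fourier transform to k-space.
    a_k = np.fft.fft(alpha)
    # Stage 3: kinetic + chemical-potential part, exact in log variables.
    a_k *= np.exp(0.5 * (mu_e - 0.5 * k**2) * dbeta)
    # Stage 4: transform back to real space, ready for the next step.
    alpha = np.fft.ifft(a_k)

# Exact ideal-gas evolution over the full interval, for comparison.
beta = nsteps * dbeta
exact = np.fft.ifft(np.fft.fft(alpha0) * np.exp(0.5 * (mu_e - 0.5 * k**2) * beta))
err = float(np.max(np.abs(alpha - exact)))
print(err)
```

With interactions and noise restored, only stage 1 changes; the structure of the loop is unchanged.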
The Stratonovich gauged evolution equations for the real space stage are \[logequations\] $$\begin{aligned} \frac{d\ln\alpha_{j}^{(\nu)}}{d\beta} & = & -\frac{g}{2\Delta x}\left(|N_{j}|+i\,\text{Im}N_{j}-\frac{1}{2}\right)\notag\\ & & +i\zeta_{j}^{(\nu)}(\beta)\sqrt{\frac{g}{2\Delta x}},\notag\\ \frac{d\ln\Omega}{d\beta} & = & i\sqrt{\frac{g}{2\Delta x}}\sum_{j,\nu}\left(\text{Re}N_{j}-|N_{j}|\right)\zeta_{j}^{(\nu)}(\beta)\label{trueequationsx}\\ & & +\frac{g}{2\Delta x}\sum_{j}\left\{ (\text{Re}N_{j}-|N_{j}|)^{2}-N_{j}^{2}+i\text{Im}N_{j}\right\} ,\notag\end{aligned}$$ while for the $k$-space stage they are$$\begin{aligned} \frac{d\ln\widetilde{\alpha}^{(\nu)}(k)}{d\beta} & = & \frac{1}{2}\left[\mu_{e}-\frac{\hbar^{2}k^{2}}{2m}\right],\label{trueequationsk}\\ \frac{d\ln\Omega}{d\beta} & = & \sum_{k}\left(\mu_{e}-\frac{\hbar^{2}k^{2}}{2m}\right)\widetilde{\alpha}^{+}(k)\widetilde{\alpha}(k).\notag\end{aligned}$$ Importance sampling {#append:preweight} ------------------- The simulated equations (\[Gequations\]) include evolution of both the amplitudes $\alpha_{j}^{(\nu)}$ and weight $\Omega$. This combination can cause sampling problems for observable estimations, Eq. (\[obs\]), when maximum weights occur for very rare trajectories. As it turns out, this was a serious issue for the majority of calculations reported here because while the initial distribution (\[G0\]) samples the $\beta=0$ system well, this is not necessarily the case during the later evolution into $\beta\gg0$ that is of most interest. Fortunately, fairly rudimentary importance sampling was able to deal with this for a wide range of parameters. The essence of this approach is to pre-weight trajectories in such a way that the part of the distribution with maximum weight $\Omega$ coincides with the majority of samples at the target time of interest $\beta_{t}$, rather than at $\beta=0$. 
The price paid is that the $\beta=0$ distribution is then poorly sampled, but this is unimportant, since it is the target time $\beta_{t}$ that interests us. Pre-weighting is made possible because in all observable calculations (\[obs\]), the combination $[G(\vec{v})\Omega]$ occurs as a universal common factor in the $\int d\vec{v}$ integral. Hence, if we manually scale the weight $\Omega$ by some factor $F(\vec{v})$ of our choice: $\Omega\rightarrow\Omega^{\prime}F(\vec{v})$, and simultaneously rescale the distribution according to $G(\vec{v})\rightarrow G^{\prime}(\vec{v})/F(\vec{v})$, then with $\Omega^{\prime}$ and $G^{\prime}$ one obtains exactly the same results in the infinite-number-of-samples limit as with $G\Omega$. However, the actual samples are differently distributed, which is advantageous for finite sample numbers. To reduce the weight sampling problem, one wants to choose a modification $F(\vec{v})$ such that both $G^{\prime}(\vec{v})$ and $\Omega^{\prime}G^{\prime}(\vec{v})$ peak in the same region of the phase space of ${\vec{v}}$. To proceed, it is convenient to consider Fourier-transformed variables in $k$-space, where the non-interacting evolution can easily be solved exactly. Define then $$\widetilde{\alpha}_{k}^{(\nu)}=\frac{1}{\sqrt{M}}\sum_{j}e^{-ikx_{j}}\alpha_{j}^{(\nu)}=\left\{ \begin{array}{cl} \widetilde{\alpha}_{k}, & \text{ if }\nu=1,\\ \widetilde{\alpha}_{k}^{+}, & \text{ if }\nu=2,\end{array}\right.$$ where $k$ takes on discrete values from $-\pi/\Delta x$ to $\pi/\Delta x$. The “naive” initial distribution (\[G0\]) then becomes $$G_{0}(\vec{v})=\delta^{2}(\ln\Omega)\prod_{k}\delta^{2}(\widetilde{\alpha}_{k}-(\widetilde{\alpha}_{k}^{+})^{\ast}\,)\frac{e^{-|\widetilde{\alpha}_{k}|^{2}/\overline{n}_{x}}}{\pi\,\overline{n}_{x}}.$$ This is a thermal distribution which is uniform over all $k$. The ideal gas (i.e.
$g=0$) evolution of equations (\[Gequations\]) then leads to $$\begin{aligned} \widetilde{\alpha}_{k}^{(\nu)}(\beta) & = & \widetilde{\alpha}_{k}(0)\exp\left[\left(\mu(\beta)-\frac{\hbar^{2}k^{2}}{2m}\right)\frac{\beta}{2}\right],\label{idealgassol}\\ \ln\Omega(\beta) & = & \sum_{k}\left(|\widetilde{\alpha}_{k}(\beta)|^{2}-|\widetilde{\alpha}_{k}(0)|^{2}\right),\notag\end{aligned}$$ where $$\widetilde{\alpha}_{k}(0)=\sqrt{\overline{n}_{x}}\,\eta_{k},$$ with $\eta_{k}$ being independent complex Gaussian noises with variance unity, $\langle\eta_{k}^{\ast}\eta_{k^{\prime}}\rangle_{\mathcal{S}}=\delta_{kk^{\prime}}$. One can see that (\[idealgassol\]) is not necessarily anywhere near a well-sampled ideal gas Bose-Einstein distribution at temperature $\beta$, which would have $$\begin{aligned} \widetilde{\alpha}_{k}^{(\nu)}(\beta) & = & \sqrt{n_{k}^{\mathrm{id}}(\beta)}\ {\eta_{k},}\notag\\ \ln\Omega(\beta) & = & 0,\label{samplesbeta}\end{aligned}$$ with $$n_{k}^{\mathrm{id}}(\beta)=\left\{ \exp\left[-\mu(\beta)\beta+\hbar^{2}k^{2}\beta/2m\right]-1\right\} ^{-1}$$ being the usual Bose-Einstein distribution. For the purpose of the simulations presented here, a fairly crude yet effective importance sampling was applied as follows. For relatively weak coupling $g$, a very rough but useful estimate of the thermal state at coarse resolution is that the Fourier modes are decoupled and thermally distributed with some mean occupations $n_{k}(\beta_{t})$ at the target time $\beta_{t}$ that we are interested in. In practice we will choose some estimate of the guiding density $n_{k}(\beta_{t})$. 
The desired equal weight sampling at time $\beta_{t}$ would then correspond to the distribution $$\begin{aligned} G^{\text{est}}(\vec{v},\beta_{t}) & = & \delta^{2}(\ln\Omega)\prod_{k}\delta^{2}\left(\widetilde{\alpha}_{k}-(\widetilde{\alpha}_{k}^{+})^{\ast}\right)\notag\\ & & \times\frac{\exp[-|\widetilde{\alpha}_{k}|^{2}/n_{k}(\beta_{t})]}{\pi\, n_{k}(\beta_{t})},\label{Gestbeta}\end{aligned}$$ which leads to samples given by $\widetilde{\alpha}_{k}^{(\nu)}=\sqrt{n_{k}(\beta)}\ {\eta_{k}}$ and $\Omega=1$. What we are interested in is the corresponding distribution of samples at $\beta=0$. An estimate of the initial distribution that leads to $G^{\text{est}}(\vec{v},\beta_{t})$ can be obtained by evolving (\[Gestbeta\]) back in imaginary time using only kinetic interactions. This is again rather rough, since deterministic interaction terms $\propto g$ are omitted, not to mention noise, but it is simple to carry out and proved sufficient for our purposes here. One obtains then an estimated sampling distribution for samples at $\beta=0$: $$\begin{aligned} G^{\text{samp}}(\vec{v},0) & = & \delta^{2}(\ln\Omega-\ln\Omega_{0})\prod_{k}\delta^{2}\left(\widetilde{\alpha}_{k}-(\widetilde{\alpha}_{k}^{+})^{\ast}\right)\notag\\ & & \times\frac{\exp(-|\widetilde{\alpha}_{k}|^{2}/n_{k}^{\text{samp}})}{\pi\, n_{k}^{\text{samp}}},\label{Gest0}\end{aligned}$$ where $$n_{k}^{\mathrm{samp}}=n_{k}(\beta_{t})\exp\left[-\lambda-\mu(\beta_{t})\beta_{t}+\frac{\hbar^{2}k^{2}\beta_{t}}{2m}\right],$$ and the pre-weight $\Omega_{0}\equiv\Omega(0)$ now depends on the set of particular values of $\widetilde{\alpha}_{k}$ at $\beta=0$ obtained for a given sample, according to $$\ln\Omega_{0}=\sum_{k}|\widetilde{\alpha}_{k}|^{2}\left(\frac{1}{n_{k}^{\mathrm{samp}}}-\frac{1}{\overline{n}_{x}}\right).$$ For most of the simulations reported here, taking $n_{k}(\beta_{t})$ to be just the ideal gas Bose-Einstein distribution $n_{k}^{\mathrm{id}}(\beta_{t})$ was sufficient. 
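The mechanics of the pre-weighting can be illustrated on a single mode (with illustrative occupations standing in for $\overline{n}_{x}$ and $n_{k}^{\mathrm{samp}}$): sampling $\widetilde{\alpha}_{k}$ from the guide density and attaching $\ln\Omega_{0}=|\widetilde{\alpha}_{k}|^{2}(1/n^{\mathrm{samp}}-1/n^{\mathrm{true}})$ leaves the weighted averages unchanged, because constant prefactors cancel in the ratio $\langle f\,\Omega\rangle_{\mathcal{S}}/\langle\Omega\rangle_{\mathcal{S}}$:

```python
import numpy as np

# One-mode pre-weighting sketch: sample from a guide density n_samp that is
# broader than the physical density n_true, and compensate with the pre-weight
# ln(Omega_0) = |alpha|^2 (1/n_samp - 1/n_true).  All values are illustrative.
rng = np.random.default_rng(1)
n_true, n_samp, S = 2.0, 3.0, 400000

# Complex Gaussian samples with <|alpha|^2> = n_samp (the guide distribution).
alpha = np.sqrt(n_samp / 2) * (rng.normal(size=S) + 1j * rng.normal(size=S))
w = np.exp(np.abs(alpha)**2 * (1.0 / n_samp - 1.0 / n_true))

# Weighted estimate of the physical occupation <|alpha|^2>.
n_est = np.sum(np.abs(alpha)**2 * w) / np.sum(w)
print(n_est)   # close to n_true
```

Note that here $n^{\mathrm{samp}}>n^{\mathrm{true}}$, so the weights are bounded; choosing the guide too narrow instead produces the rare-sample pathology discussed below.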
However, once the chemical potential $\mu(\beta_{t})$ approaches or exceeds zero, this estimate is no longer useful. A better choice for $n_{k}(\beta_{t})$ is the density-of-states function $\rho_{k}$ of the exact Yang and Yang solution [@yangyang1], although it should be noted that this is not the density of actual particles that we seek. In practice, our approach was to first run a calculation based on the estimate $n_{k}(\beta_{t})=\rho_{k}(\beta_{t})$, obtain a better estimate of the real density from this full stochastic calculation by evaluating the expectation value of $\widehat{\Psi}_{k}^{\dagger}\widehat{\Psi}_{k}$ using Eq. (\[obs\]), and then finally use this expectation value to choose an improved preweighting function $n_{k}(\beta_{t})$ for a “second-generation” calculation. One important point regarding the choice of $n_{k}(\beta_{t})$ is that one should always endeavor to choose the preweighting guide density equal to or greater than the real density, never smaller. The reasoning behind this is as follows: Suppose first one chooses a guiding function $n_{k}(\beta_{t})$ that is much smaller than the true $k$-space density $n_{k}^{\mathrm{true}}(\beta_{t})$. This means that the variance of the $\widetilde{\alpha}_{k}$ samples will be too small to recover the physical value of the density upon averaging $\langle|\widetilde{\alpha}_{k}|^{2}\Omega\rangle_{\mathcal{S}}$ without resorting to very large weights for the largest $|\widetilde{\alpha}_{k}|$ samples. In practice, if the ratio $n_{k}/n_{k}^{\mathrm{true}}$ is small, then the typical trade-off that occurs is that the largest contribution to $\Omega|\widetilde{\alpha}_{k}|^{2}$ comes from those $|\widetilde{\alpha}_{k}|$ that are many standard deviations from the mean. Their rarity is compensated for by a very large $\Omega$.
However, this is fatal for practical numbers of samples because in fact not even one of the samples one obtains ends up in this highest-contribution region at many standard deviations from the mean. For $n_{k}/n_{k}^{\mathrm{true}}\lesssim1/2$, the number of samples with $|\widetilde{\alpha}_{k}|^{2}\gtrsim n_{k}$ will be $\propto\mathcal{S}\prod_{k}\exp\left[-(n_{k}^{\mathrm{true}}/n_{k})^{2}/2\right]$, i.e. vanishing, leading to a systematic error. In contrast, the opposite situation when $n_{k}(\beta_{t})$ is chosen too large is much more benign. Following the above reasoning, one gets a distribution of $\widetilde{\alpha}_{k}$ samples that is too broad, with the result that a majority of samples are too far away from physical values of $|\widetilde{\alpha}_{k}|^{2}$ and their excessive abundance must be compensated for by giving them a correspondingly small weight. However, for reasonably large numbers of trajectories, there always remains a core of the smallest samples that are in the region of most important contributions. The number of these samples is of the order of $\mathcal{S}\prod_{k}n_{k}^{\mathrm{true}}/n_{k}(\beta_{t})$, which is reasonable in practice as long as the estimate $n_{k}(\beta_{t})$ is not extremely poor. Finally, it should be mentioned that superior importance sampling schemes to the crude one we have employed here could be implemented and may allow one to reach much lower temperatures than presented here. A first step would be to keep the $\beta=\beta_{t}$ distribution estimate, Eq. (\[Gestbeta\]), but estimate the resulting initial samples at $\beta=0$ in a more accurate manner. To do this, one could choose the $\beta=\beta_{t}$ samples according to $\widetilde{\alpha}_{k}^{(\nu)}(\beta_{t})=\sqrt{n_{k}(\beta_{t})}\ \eta_{k}$ and $\ln\Omega(\beta_{t})=0$ as usual, but then evolve them back in time to $\beta=0$ numerically, using the deterministic part of the full equations (\[Gequations\]).
This would give a superior estimate of the initial distribution, as it takes into account $g\neq0$ mean-field effects as well as kinetic evolution. Having these $\beta=0$ samples, one would then proceed forward in time with the full stochastic evolution. A further refinement would be to choose initial $\beta=0$ samples via the Metropolis algorithm, so that the initial samples $\vec{v}$ are distributed according to $\mathcal{F}\left[\vec{v}\right]$, where $\mathcal{F}=|\Omega(\beta_{t})|$ when $\Omega(\beta_{t})$ is calculated according to the deterministic part of the evolution, Eq. (\[Gequations\]), starting from $\Omega(0)=1$. This avoids the arbitrariness of the crude Gaussian choice, Eq. (\[Gestbeta\]). A final, numerically intensive, approach would be to sample the phase-space variables $\alpha_{j}(\beta_{t})$ and $\text{Im}[{\ln\Omega}](\beta_{t})$ directly via a Monte Carlo Metropolis algorithm whose free parameters to be varied include both the initial noises $\eta_{k}$ and all the time-dependent noises $\zeta_{j}^{(\nu)}(\beta)$ for a given time lattice $\beta\in\left(0,\beta_{t}\right)$. Trust indicators for sampling {#append:trust} ----------------------------- One should mention two heuristic trust indicators that we use extensively to exclude bad sampling of the underlying phase-space distribution. Firstly, let us point out that the behavior of the evolution equations (\[logequations\]) is such that one builds up an approximately Gaussian distribution of the logarithmic variables (leaving aside the evolution of $N_{j}$ itself, which is initially small). This means that the stochastic averages to be evaluated, e.g., in Eq. (\[g2obs\]), involve means of *exponentials* of approximately Gaussian random variables (as per $\overline{m}=\langle e^{v}\rangle$ with $v$ Gaussian).
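The resulting danger can be seen in a toy version of this situation: for $F=e^{v}$ with $v$ Gaussian, the sample mean of $F$ is dominated by rare large-$v$ samples once the variance of $\ln|F|$ is large (the numbers below are illustrative):

```python
import numpy as np

# Variance-of-the-logarithm trust indicator for means of F = e^v, v Gaussian.
rng = np.random.default_rng(2)

def log_variance(F, threshold=10.0):
    logF = np.log(np.abs(F))
    V = np.var(logF)
    return V, V < threshold          # (indicator value, trusted?)

# Benign case: Var[ln F] = 1, so <e^v> = e^{1/2} is sampled reliably.
V_ok, ok = log_variance(np.exp(rng.normal(0.0, 1.0, size=100000)))
# Pathological case: Var[ln F] = 25, so <e^v> = e^{12.5} is dominated by
# samples many standard deviations out, which practical sample sizes never
# reach; the indicator rejects the average.
V_bad, bad = log_variance(np.exp(rng.normal(0.0, 5.0, size=100000)))
print(V_ok, ok, V_bad, bad)
```
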
A feature of such means is that if the variance of the *logarithm* $\text{Re}[v]$ exceeds a value of around $10$, the mean $\overline{m}$ begins to acquire a systematic error when calculated with any practical sample size. This is discussed in detail in [@deuar-drummond-pp; @deuar-thesis]. As a result, when calculating observables with some expression $\langle F(\vec{v})\rangle_{\mathcal{S}}$, one must also check that the variance of its logarithm is small enough, i.e. that $$\mathcal{V_{F}}=\langle(\ln|F(\vec{v})|)^{2}\rangle_{\mathcal{S}}-\langle\ln|F(\vec{v})|\rangle_{\mathcal{S}}^{2}\lesssim10.$$ If this is not satisfied, the results for $\langle F(\vec{v})\rangle_{\mathcal{S}}$ must be considered suspect. Secondly, sampling problems of this sort usually make themselves visible if one compares two calculations with widely different sample sizes. In practice one can evaluate an average and its uncertainty with $\mathcal{S}$ samples, and with $\mathcal{S}/10$ samples (where, of course, $\mathcal{S}/10\gg1$). If the difference is statistically significant, the result of the $\mathcal{S}$-sample average should again be considered suspect. Choice of intermediate $\mu(\beta)$ {#append:mu} ----------------------------------- If one is primarily interested in the behavior of the system around some target inverse temperature $\beta_{t}$ and chemical potential $\mu(\beta_{t})$ (alternatively, density), then the values of $\mu(\beta)$ at intermediate times $\beta<\beta_{t}$ can in principle be chosen at will. In practice, however, some choices lead to smaller statistical uncertainty than others, because the intermediate values of the density affect the amount of noise generated during the evolution. A preliminary investigation of the choice of $\mu$ in [@deuar-thesis] indicated some heuristic guidelines that were also followed in the present work: (*i*) It is advantageous not to vary $\mu_{e}(\beta)$ too much over the course of the simulation. Excessive variation leads to increased noise.
(*ii*) A constant or piecewise-constant value of $\mu_{e}$ is also advantageous because the ideal-gas part of the evolution can then be calculated exactly in logarithmic variables (\[trueequationsk\]), and the step size is only important for the interaction part of the evolution. (*iii*) It is advantageous to choose an initial density that is much smaller than the final one at $\beta_{t}$, both for statistical sampling reasons and because this puts the initial gas much further into the classical decoherent regime ($\tau\gg\gamma^{2}$), where the initial condition (\[G0\]) applies, than the final regime. In practice, our simulations used the following form $$\mu_{e}(\beta)=\frac{1}{\Delta\beta}\ln\frac{z(\beta+\Delta\beta)}{z(\beta)},$$ which is piecewise constant over a time step $\Delta\beta$, with the fugacity $$z(\beta)=e^{\mu\beta}=\left\{ \begin{array}{ll} z_{i}, & \text{ when }\beta\leq\beta_{i},\\ z_{t}\exp\left[-\frac{\beta_{t}-\beta}{\beta_{t}-\beta_{i}}\ln\frac{z_{t}}{z_{i}}\right], & \text{ when }\beta>\beta_{i}.\end{array}\right.$$ Here, $\beta_{t}$ and $z_{t}=e^{\mu_{t}\beta_{t}}$ are the target inverse temperature and fugacity, and $\beta_{i}$ and $z_{i}$ are numerical constants for the initial high-temperature state that we chose to be $z_{i}^{2}=z_{t}^{2}/1000$ and $\beta_{i}=\beta_{t}/1000$. Given the difficulty of precisely analyzing the statistical behavior, it is unclear whether a wiser choice of $\mu(\beta)$ might lead to significant improvements over the results presented here. However, this was the most successful choice of those we tried. Integrals in perturbation theory in $\gamma$ {#append:integrals-nearly-ideal} ============================================ We begin with Eq. and substitute the expression for $\Gamma(k,\sigma)$ in Eq.
to give $$\Delta g^{(2)}(r)=-\frac{g}{\hbar}\sqrt{\frac{m\beta}{\pi}}\int_{0}^{\beta}d\sigma\frac{\exp\left\{ -\frac{r^{2}m\beta}{4\hbar^{2}[\beta^{2}/4-(\sigma-\beta/2)^{2}]}\right\} }{\sqrt{\beta^{2}/4-(\sigma-\beta/2)^{2}}}.$$ Next we make the substitution $t=(2/\beta)(\sigma-\beta/2)$ and $y=r\sqrt{m/(\hbar^{2}\beta)}$ to give $$\begin{aligned} \Delta g^{(2)}(r) & = & -\frac{g}{\hbar}\sqrt{\frac{m\beta}{\pi}}\int_{-1}^{1}dt\frac{e^{-y^{2}/(1-t^{2})}}{\sqrt{1-t^{2}}}\label{appenda_step1.5}\\ & = & -\frac{g}{\hbar}\sqrt{\frac{m\beta}{\pi}}e^{-y^{2}}\int_{-\infty}^{\infty}dx\frac{e^{-y^{2}x^{2}}}{1+x^{2}},\label{appenda_step2}\end{aligned}$$ where the last equality follows from the substitution $t=x/\sqrt{1+x^{2}}$. The exponent in the integrand of Eq. can be represented as a Gaussian integral $$e^{-y^{2}x^{2}}=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}dke^{-k^{2}+2ikyx}.$$ Then, changing the order of integration in Eq. we arrive at $$\begin{aligned} \Delta g^{(2)}(r) & = & -\frac{g}{\hbar\pi}\sqrt{m\beta}e^{-y^{2}}\int_{-\infty}^{\infty}e^{-k^{2}}\int_{-\infty}^{\infty}\frac{e^{i2kyx}}{1+x^{2}}dxdk\notag\\ & = & -\frac{2g\sqrt{m\beta}}{\hbar}\int_{0}^{\infty}e^{-(k+|y|)^{2}}dk.\label{appenda_step3}\end{aligned}$$ The final result shown in Eq. follows trivially from a shift in the integration variable $k\rightarrow k-|y|$, and the definition of the complementary error function, $$\mathrm{erfc}(|y|)\equiv\frac{2}{\sqrt{\pi}}\int_{|y|}^{\infty}dke^{-k^{2}}.$$ Integrals in the Bogoliubov treatment {#append:Bogoliubov} ===================================== We first evaluate the vacuum contribution $G_{0}(r)$, Eq. (\[G-0-def\]).
Writing down the integral explicitly in terms of $k$, and transforming to the new variable $x=k\xi/2$, we have$$G_{0}(r)=\frac{2}{\pi\xi n}\int\limits _{0}^{\infty}dx\left[\frac{x}{\sqrt{1+x^{2}}}-1\right]\cos(2rx/\xi).$$ Integrating by parts gives$$G_{0}(r)=-\frac{1}{\pi nr}\int\limits _{0}^{\infty}dx\frac{\sin(2\sqrt{\gamma}nrx)}{(1+x^{2})^{3/2}}.\label{G-0-final}$$ The integral in (\[G-0-final\]) can be expressed in terms of special functions [@MathHandbook], giving $$G_{0}(r)=-\sqrt{\gamma}\left[\mathbf{L}_{-1}(2\sqrt{\gamma}nr)-I_{1}(2\sqrt{\gamma}nr)\right].$$ The finite temperature term $G_{T}(r)$, Eq. (\[G-T-lowT\]), is evaluated by performing variable changes according to $E=\hbar^{2}k^{2}/(2m)$, followed by $\epsilon=\sqrt{E(E+2gn)}$ and then $x=\epsilon/gn$. In this way we transform the integral over $k$ to an integral over $x$ $$G_{T}(r)=\sqrt{\frac{2mg}{\pi^{2}\hbar^{2}n}}\int\limits _{0}^{\infty}dx\left[\frac{\sqrt{1+x^{2}}-1}{1+x^{2}}\right]^{1/2}\frac{\cos[k(x)r]}{e^{gnx/T}-1},\label{G_T_a}$$ where $k(x)=[2mgn(\sqrt{1+x^{2}}-1)/\hbar^{2}]^{1/2}$. So far we have not made any additional assumptions or approximations. By inspecting the integrand in Eq. (\[G\_T\_a\]) one can see that for $T\ll gn$ the main contribution to the integral comes from $x\ll1$. Therefore for $T\ll gn$ ($\tau\ll\gamma$) we can simplify the integral by treating $x$ in the integrand as a small parameter. Accordingly, we obtain$$\begin{gathered} \left[\frac{\sqrt{1+x^{2}}-1}{1+x^{2}}\right]^{1/2}\simeq\frac{1}{\sqrt{2}}x,\; x\ll1,\\ k(x)\simeq\sqrt{\frac{mgn}{\hbar^{2}}}x,\; x\ll1,\end{gathered}$$ and therefore$$G_{T}(r)\simeq\frac{\tau^{2}}{4\pi\gamma^{3/2}}\int\limits _{0}^{\infty}dy\frac{y\cos(\tau nry/2\sqrt{\gamma})}{e^{y}-1},$$ where we have introduced $y=gnx/T=\epsilon/T$. Finally we make use of the following integral$$\int_{0}^{\infty}dy\frac{y\cos(ay)}{e^{y}-1}=\frac{1}{2a^{2}}-\frac{\pi^{2}}{2}\operatorname{cosech}^{2}\left(\pi a\right),\label{eq:c7}$$ and obtain Eq.
(\[G-T-GPa\]). In the opposite limit, dominated by thermal fluctuations and corresponding to $\gamma\ll\tau\ll1$, we first note that large thermal fluctuations correspond to $\tilde{n}_{k}\gg1$, which in turn requires $\epsilon_{k}/T\ll1$. Thus, we replace $\tilde{n}_{k}$ in the integral (\[G-T-def\]) by $\tilde{n}_{k}=[\exp(\epsilon_{k}/T)-1]^{-1}\simeq T/\epsilon_{k}\gg1$. As a result, the thermal contribution $G_{T}(r)$ becomes$$\begin{gathered} G_{T}(r)\simeq\frac{1}{\pi n}\int_{-\infty}^{+\infty}dk\frac{E_{k}T}{\epsilon_{k}^{2}}\cos(kr)\notag\\ =\frac{4mT}{\pi\hbar^{2}n}\int_{0}^{+\infty}dk\frac{\cos(kr)}{k^{2}+(2/\xi)^{2}}=\frac{mT\xi}{\hbar^{2}n}e^{-2r/\xi},\end{gathered}$$ which is valid for $r/\xi\lesssim1$. Rewriting this in terms of the dimensionless parameters $\gamma$ and $\tau$ we obtain Eq. (\[G-T-GPb\]). For $r/\xi\gg1$ the cosine term becomes important and the values of momenta in the integral Eq. (\[G\_T\_a\]) are cut off at $k\sim1/r\ll1/\xi$. In this regime one can use the approximation that led to Eq. (\[eq:c7\]). Integrals in perturbation theory in $1/\gamma$ {#append:integrals-Tonks} ============================================== We begin by evaluating the direct contribution given by Eq. by substituting Eq. , $$\begin{aligned} \Delta g_{d}^{(2)} & = & \int_{0}^{\beta}d\sigma\int_{-\infty}^{\infty}\frac{dk}{2\pi}\left(-\frac{2\hbar^{2}k^{2}}{mn\gamma}\right)e^{ikr-\sigma\hbar^{2}k^{2}(\beta-\sigma)/m\beta}\notag\\ & = & \frac{-1}{\pi\gamma}\sqrt{\frac{\tau}{2}}\int_{0}^{1}ds\int_{-\infty}^{\infty}dq\, q^{2}e^{iqy-sq^{2}(1-s)},\label{appendc_step1.5}\end{aligned}$$ where we have effected the change of variables $\sigma=\beta s$, $q=\sqrt{\beta\hbar^{2}/m}k$ and $y=\sqrt{m/(\beta\hbar^{2})}r=\sqrt{(\tau n^{2}/2)}r$.
The integration with respect to $q$ can then be done using integration by parts, which yields $$\begin{aligned} \Delta g_{d}^{(2)} & = & \frac{-1}{4\gamma}\sqrt{\frac{\tau}{2\pi}}\int_{0}^{1}ds\frac{2s(1-s)-y^{2}}{s^{5/2}(1-s)^{5/2}}e^{-y^{2}/[4s(1-s)]}\notag\\ & = & \frac{-1}{\gamma}\sqrt{\frac{2\tau}{\pi}}\int_{-1}^{1}dt\left(1-\frac{2y^{2}}{1-t^{2}}\right)\frac{e^{-y^{2}/(1-t^{2})}}{\left(1-t^{2}\right)^{3/2}},\notag\\ & & \,\label{appendc_step2}\end{aligned}$$ where the last equality follows from the substitution $s=(t+1)/2$. The simplest way to solve the integral in Eq. is by comparison with Eq. in Appendix \[append:integrals-nearly-ideal\]. In doing so, one may observe $$\begin{aligned} & & \int_{-1}^{1}dt\left(1-\frac{2y^{2}}{1-t^{2}}\right)\frac{\exp\left[-\frac{y^{2}}{1-t^{2}}\right]}{\left(1-t^{2}\right)^{3/2}}\\ & = & -\frac{1}{2}\frac{d^{2}}{dy^{2}}\int_{-1}^{1}dt\frac{\exp\left[-\frac{y^{2}}{1-t^{2}}\right]}{\sqrt{1-t^{2}}}=-\frac{\pi}{2}\frac{d^{2}}{dy^{2}}\text{erfc}(|y|).\end{aligned}$$ The result shown in Eq. then follows trivially from this. In order to calculate the exchange contribution we begin with Eq. and substitute Eq. , which immediately yields $$\Delta g_{e}^{(2)}(r)=\frac{1}{\gamma}\sqrt{\frac{\pi\tau}{2}}e^{-in\tau r^{2}/2}F_{e}(\sqrt{\tau n^{2}r^{2}/2})$$ where $F_{e}(y)=\int_{0}^{1}ds\int dq\, q^{2}e^{-s(1-s)q^{2}+i(1-2s)qy}/\pi^{3/2}$, and $s$, $q$ and $y$ are defined the same way as for the direct contribution.
The integration with respect to $q$ can be carried out using integration by parts, leaving an integral with respect to $s$: $$\begin{aligned} & & \int_{0}^{1}ds\frac{\exp\left[-\frac{y^{2}(1-2s)^{2}}{4s(1-s)}\right]}{s^{3/2}(1-s)^{3/2}}\left[1-\frac{y^{2}(1-2s)^{2}}{2s(1-s)}\right]\notag\\ & = & 4\int_{-1}^{1}dv\frac{\exp\left[-\frac{y^{2}v^{2}}{1-v^{2}}\right]}{(1-v^{2})^{3/2}}\left[1-\frac{2v^{2}y^{2}}{1-v^{2}}\right]\notag\\ & = & 4\int_{-\infty}^{\infty}dt\left[1-2y^{2}t^{2}\right]e^{-y^{2}t^{2}}\end{aligned}$$ where the first equality comes from the substitution $s=(v+1)/2$ and the second from $v=t/\sqrt{1+t^{2}}$. Both terms are standard definite integrals, and it is straightforward to show that $$\Delta g_{e}^{(2)}=\frac{4}{n\gamma}\delta(r).$$ Thus the only effect of the exchange contribution is to cancel the delta-function contribution coming from the direct contribution at $r=0$. R. Hanbury Brown and R. Q. Twiss, Nature **177**, 27 (1956). M. Yasuda and F. Shimizu, Phys. Rev. Lett. **77**, 3090 (1996). M. Schellekens *et al.*, Science **310**, 648 (2005). T. Jeltes *et al.*, Nature **445**, 402 (2007). E. H. Lieb and W. Liniger, Phys. Rev. **130**, 1605 (1963). E. H. Lieb, Phys. Rev. **130**, 1616 (1963). C. N. Yang and C. P. Yang, J. Math. Phys. **10**, 1115 (1969). V. E. Korepin, N. M. Bogoliubov, and A. G. Izergin, *Quantum Inverse Scattering Method and Correlation Functions* (Cambridge University Press, 1993). T. Giamarchi, *Quantum Physics in One Dimension* (Oxford University Press, 2004). A. O. Gogolin, A. A. Nersesyan, and A. M. Tsvelik, *Bosonization and Strongly Correlated Systems* (Cambridge University Press, 2004). M. D. Girardeau, J. Math. Phys. **1**, 516 (1960); M. D. Girardeau, Phys. Rev. **139**, B500 (1965); see also: L. Tonks, Phys. Rev. **50**, 955 (1936). A. Görlitz *et al.*, Phys. Rev. Lett. **87**, 130402 (2001). F. Schreck *et al.*, Phys. Rev. Lett. **87**, 080403 (2001). M. Greiner, I. Bloch, O. Mandel, T. W. Hänsch, and T.
Esslinger, Phys. Rev. Lett. **87**, 160405 (2001). M. Greiner, I. Bloch, O. Mandel, T. W. Hänsch, and T. Esslinger, Applied Physics B - Lasers and Optics **73**, 769 (2001). S. Richard *et al.*, Phys. Rev. Lett. **91**, 010405 (2003). H. Moritz, T. Stöferle, M. Kohl, and T. Esslinger, Phys. Rev. Lett. **91**, 250402 (2003). B. Laburthe Tolra *et al.*, Phys. Rev. Lett. **92**, 190401 (2004). B. Paredes *et al*, Nature (London) **429**, 277 (2004). T. Kinoshita, T. Wenger, and D. S. Weiss, Science **305**, 1125 (2004). T. Kinoshita, T. Wenger, and D. S. Weiss, Phys. Rev. Lett. **95**, 190406 (2005). T. Stöferle, H. Moritz, C. Schori, M. Kohl, and T. Esslinger, Phys. Rev. Lett. **92**, 130403 (2004). T. P. Meyrath, F. Schreck, J. L. Hanssen, C. S. Chuu, and M. G. Raizen, Phys. Rev. A **71**, 041604(R) (2005). J. Esteve *et al.*, Phys. Rev. Lett. **96**, 130403 (2006). S. Hofferberth *et al.*, Nature (London) **449**, 324 (2007). A. H. van Amerongen, J. J. P. van Es, P. Wicke, K. V. Kheruntsyan, and N. J. van Druten, Phys. Rev. Lett. **100**, 090402 (2008) Y. Castin *et al.*, J. Mod. Opt. **47**, 2671 (2000). D. M. Gangardt and G. V. Shlyapnikov, Phys. Rev. Lett. **90**, 010401 (2003). D. M. Gangardt and G. V. Shlyapnikov, New J. Phys. **5**, 79 (2003). K.V. Kheruntsyan, D. M. Gangardt, P. D. Drummond, G. V. Shlyapnikov, Phys. Rev. Lett. **91**, 040403 (2003). M. A. Cazalilla, Phys. Rev. A **67**, 053606 (2003); M. A. Cazalilla, New J. Phys. **37**, S1 (2004). K.V. Kheruntsyan, D. M. Gangardt, P. D. Drummond, G. V. Shlyapnikov, Phys. Rev. A **71**, 053615 (2005). G. E. Astrakharchik and S. Giorgini, Phys. Rev. A **68**, 031602(R) (2003); G. E. Astrakharchik and S. Giorgini, J. Phys. B **39**, S1 (2006). P. D. Drummond, P. Deuar, and K. V. Kheruntsyan, Phys. Rev. Lett. **92**, 040405 (2004). A. Lenard, J. Math. Phys. **5**, 930 (1964). T. D. Schultz, J. Math. Phys. **4**, 666 (1963). J.-S. Caux and P. Calabrese, Phys. Rev. A **74**, 031605(R) (2006). J.-S. Caux, P. 
Calabrese, and N. A. Slavnov, J. Stat. Mech. P01008 (2007). J. Brand and A. Yu. Cherny, Phys. Rev. A **72**, 033619 (2005). A. Yu. Cherny and J. Brand, Phys. Rev. A **73**, 023612 (2006). E. H. Lieb, R. Seiringer, J. P. Solovej, and J. Yngvason, arXiv:cond-mat/0610117v1 (unpublished). D. S. Petrov, D. M. Gangardt, and G. V. Shlyapnikov, J. Phys. IV (France ) **116**, 3 (2004). Y. Castin, J. Phys. IV (France) **116**, 89 (2004). A. G. Sykes, D. M. Gangardt, M. J. Davis, K. Viering, M. G. Raizen, and K. V. Kheruntsyan, Phys. Rev. Lett. **100**, 160406 (2008). P. Deuar and P. D. Drummond, Phys. Rev. A **66**, 033812 (2002). P. D. Drummond and P. Deuar, J. Opt. B: Quantum Semiclass. Opt. **5**, S281 (2003). P. Deuar and P. D. Drummond, J. Phys. A: Math. Gen. **39**, 2723 (2006). P. Deuar, PhD thesis, The University of Queensland, . T. C. Li *et al.*, Opt. Express **16**, 5465 (2008). M. Olshanii, Phys. Rev. Lett. **81**, 938 (1998). I. Bouchoule, K. V. Kheruntsyan, and G. V. Shlyapnikov, Phys. Rev. A **75**, 031606(R) (2007). A. G. Sykes, P. D. Drummond, and M. J. Davis, Phys. Rev. A **76**, 063620 (2007). P. D. Drummond and C. W. Gardiner, J. Phys. A: Math. Gen. **13**, 2353 (1980). C. W. Gardiner, Quantum Noise (Springer-Verlag, Berlin, 1992). A. Gilchrist, C. W. Gardiner, and P. D. Drummond, Phys. Rev. A **55**, 3014 (1997). Note that Eq. only contains 4 terms coming from Wick’s theorem, the other 20 terms are disconnected corrections (in the language of Feynman diagrams) and hence only produce corrections to the single particle Green functions. That is they represent the interacting corrections to the relation between chemical potential and density. M. Naraschewski and R. J. Glauber, Phys. Rev. A **59**, 4595 (1999). N. D. Mermin and H. Wagner, Phys. Rev. Lett. **17**, 1133 (1966); P. C. Hohenberg, Phys. Rev. **158**, 383 (1967). D. S. Petrov, G. V. Shlyapnikov, and J. T. M. Walraven, Phys. Rev. Lett. **85**, 3745 (2000). T. Cheon and T. Shigehara, Phys. Rev. Lett. 
**82**, 2536 (1999); D. Sen, J. Phys. A **36**, 7517 (2003). J. Friedel, Nuovo Cimento Suppl. **7**, 287 (1958). G. E. Astrakharchik, J. Boronat, J. Casulleras, and S. Giorgini, Phys. Rev. Lett. **95**, 190407 (2005); M. Batchelor *et al.*, J. Stat. Mech. L10001 (2005). J. F. Corney and P. D. Drummond, Phys. Rev. Lett. **93**, 260401 (2004). T. Aimi and M. Imada, J. Phys. Soc. Japan **76**, 084709 (2007); T. Aimi and M. Imada, J. Phys. Soc. Japan **76**, 113708 (2007). I. Carusotto, and Y. Castin, J. Phys. B: At. Mol. Opt. Phys. **34**, 4589 (2001). P. D. Drummond and I. K. Mortimer, J. Comput. Phys. **93**, 144 (1991). P. Deuar and P. D. Drummond, J. Phys. A: Math. Gen. **39**, 1163 (2006). *Handbook of Mathematical Functions*, Eds. M. Abramowitz and I. A. Stegun (Dover, New York, 1965).
--- abstract: 'Modern experiment requires a reliable theoretical framework for low energy QCD. Some of the requirements for constructing a new model of QCD are presented here. Progress toward these requirements is highlighted.' address: | Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260\ and Jefferson Lab, 12000 Jefferson Ave, Newport News, VA 23606 author: - 'Eric S. Swanson' title: 'Beyond the Constituent Quark Model [^1]' --- Introduction ============ The constituent quark model has a long and distinguished history of service to hadronic physics[@cqm]. However, its utility is restricted to quark-number conserving hadronic processes – it has nothing to say about channel coupling, or about gluonic physics in general. And its connection to QCD is tenuous at best. Indeed, it is clear that present day experiment is outstripping theory and that new reliable and tractable continuum models of QCD are required to interpret and guide the new generation of hadronic experiments. For example, it is very likely that resonant structure is seen in the exotic $J^{PC} = 1^{-+}$ channel at 1600 MeV; and something is seen at 1400 MeV at BNL, CERN, and VES. Proving that these states are mesonic hybrids will require a substantial improvement in our understanding of the dynamics of soft glue, both in terms of the structure of the putative resonance and in terms of its coupling to ‘canonical’ mesons. Similarly, it is tempting to interpret the extensive Crystal Barrel data on the $f_0(1500)$ as evidence for a scalar glueball. However, state mixing in the scalar sector is notorious for its strength and its obscurity. This mixing must be thoroughly mastered before we can claim the discovery of a glueball and this task will require a reliable model of soft glue. As a final example, consider the extraction of baryonic resonance parameters from Jefferson Lab, BNL, GRAAL, and Bonn.
At modern energies, one must analyse data using coupled channel methods; thus $\pi N$, $\pi\pi N$, $\pi\pi\pi N$, $\eta N$, $\rho N$, etc. channels become important and one must have a trustworthy method to parameterise the couplings between the different channels. It will also become increasingly important to have reliable estimates of background amplitudes. A moment’s reflection reveals that this is a difficult problem in strong QCD: quark exchange diagrams contribute, but may also be present at the effective meson-exchange level. More perplexing is the possibility of quark-antiquark annihilation to intermediate states with excited gluonic content[^2]. Although these examples were drawn from the mainstream of hadron spectroscopy, it should be stressed that these issues are rather far reaching. For example, extracting electroweak phases will require reliable knowledge of strong phases which are generated in the necessarily present hadronic final states. Analyzing forthcoming RHIC data for putative signals of the quark-gluon plasma will require a careful subtraction of hadronic scattering background which may mask the signal. Again, a thorough knowledge of hadronic dynamics is needed. Of course, more fundamental reasons exist for building a new model of strong QCD. QCD is a remarkably rich theory, displaying such diverse phenomena as spontaneous chiral symmetry breaking, asymptotic freedom, colour confinement, possible new high temperature phases of matter, and important topological features. It would be remiss to abrogate the responsibility of learning as much as possible about these phenomena. Issues Facing a Modern Constituent Quark Model ============================================== A short and incomplete list of the issues facing the construction of a ‘new’ quark model follows. $\bullet\ $[***the nature of confinement***]{} For several decades, the quark model has employed a linear (or similar) static long range interquark potential.
While this is in accord with lattice data, many open issues remain. For example, what is the colour space structure of the long range force? This is required even for heavy quarks. The standard choice is $\lambda\cdot \lambda$ and this has received strong support from the lattice[@casimir]. It is also possible to prove that this is the correct colour structure in the heavy quark limit[@ss7]. The extrapolation to light quark masses remains an open issue. Note that the colour structure is important when one considers hadronic interactions (i.e., processes with more than three valence quarks). The Lorentz structure of confinement is not determined by Wilson loop calculations and is important for those wishing to ‘relativise’ quark models. Indications from heavy quarkonium spin splittings and from direct lattice computations are that the structure is [scalar]{}$\otimes$[scalar]{}[@scalar]. However, it should be remembered that this is the form of the [*effective*]{} interquark interaction once gluons are integrated out of the theory/model. Indeed, comparison with QCD in Coulomb gauge shows that the Lorentz structure of confinement is [vector]{}$\otimes$[vector]{} in the heavy quark limit[@ss5], and that the effective scalar interaction arises due to nonperturbative mixing with intermediate hybrids. An important, and almost uniformly ignored, aspect of model building is deriving the long range confinement potential. While it is tempting to merely assume that confinement exists and to take its form from the lattice, this risks missing vital aspects of strong QCD because confinement is strongly tied to the vacuum structure of QCD, and therefore is related to the appearance of chiral symmetry breaking and constituent quarks (see ‘chiral pions’ below). All three of these phenomena are central features of low energy hadronic physics and must be modelled reasonably well before we can have full confidence in the new quark model.
That this ambitious goal is possible has recently been demonstrated twice: with the Schwinger Dyson formalism in Landau gauge[@axial], and with Hamiltonian methods in Coulomb gauge[@ss7]. The result of the latter calculation is shown in Fig. 1. $\bullet\ $[***gluodynamics***]{} Glue, and especially the dynamics of glue, is conspicuously absent from most present day models of strong QCD. It is clear that glue plays a vital role in many aspects of low energy hadronic physics. Indeed, about the only place where it is relatively safe to neglect QCD gluodynamics is when discussing the static properties of mesons and baryons. Thus, for example, a reliable model of gluodynamics is required to address simple questions such as the masses and static properties of glueballs and hybrids. Just as important is the way in which these states couple to ‘canonical’ matter. This must be known if we are to disentangle exotics from the canonical spectrum. Again the lattice will be of great assistance. Lattice computations of glueball masses[@Morningstar:1999rf] serve as a litmus test for any putative models of gluodynamics. Furthermore, high precision computations of the adiabatic excited gluon energies provide our first glimpse into the dynamics of strongly interacting glue[@JKM]. $\bullet\ $[***unquenching the quark model***]{} A closely related issue is going beyond the valence approximation in hadronic phenomenology. While it is clear that this is a pressing issue for the investigation of nonvalence physics, such as the strangeness content of the proton[@gi], or the perplexing robustness of the OZI rule in the face of hadronic loops[@gi2], it is also relevant to spectroscopy. Typical hadronic widths of 150 MeV point to typical hadronic mass shifts of a similar scale. Furthermore, meson loops cause spin splittings which can confound simple attempts at deriving them.
It is clear that a quark model which unifies quark-antiquark pair creation with valence physics is required[@pz]. Although there are a number of technical issues which need to be overcome to achieve this unification (such as efficiently solving coupled channel problems, determining the optimal number of channels to include in a given problem, and accounting for excluded channels), the main issue is the form of the quark creation operator. Certainly this operator is dominated by nonperturbative glue, but a detailed microscopic description is lacking. The most popular model to date is the $^3P_0$ model[@3p0] (first diagram in Fig. 2) which assumes an effective vertex which produces quark pairs with vacuum quantum numbers. This model produces a reasonably reliable phenomenology[@3p02]. Other possible decay mechanisms exist and need to be explored. For example, the second diagram in Fig. 2 is the leading diagram in naive perturbation theory. However, predictions of the D/S amplitude ratios (which are sensitive to the assumed Lorentz structure of the decay vertex) in $b_1 \to \omega \pi$ and $a_1 \to \rho\pi$ strongly prefer a $^3P_0$ pair creation over $^3S_1$[@3s1]. Another possible decay mechanism is obtained by isolating the instantaneous portion of diagram 2 (diagram 3 of Fig. 2) – say by working in radiation gauge. However, a model examination of this process reveals that it is strongly suppressed with respect to the $^3P_0$ vertex[@Ackleh:1996yt]. A promising approach to hadronic properties is provided by the Schwinger-Dyson formalism. The leading decay mechanism in this approach is the triangle diagram[@SDdecay] (last diagram of Fig. 2). This method has the benefit of employing the same kernel to describe quark-quark interactions and quark-antiquark pair production. However, this may be too restrictive since vector pair creation appears to be disfavoured by the $b_1$ and $a_1$ amplitude ratios.
Nevertheless it is possible that the situation may be saved by the relativistic character of the Schwinger-Dyson approach (certainly, this physics is not explored in the quark model calculations which favour the $^3P_0$ mechanism). The final possibility considered here is the production of a $q\bar q$ pair directly from the confinement potential/flux tube (fourth diagram, Fig. 2). This diagram is a leading term in Coulomb gauge QCD[@ss7]. Although the phenomenology of decays in Coulomb gauge QCD has not been explored, it is promising because this diagram yields a vertex with $^3P_0$ quantum numbers, and would be our first microscopic justification of the $^3P_0$ model. Given the importance of unquenching the quark model and the paucity of our knowledge of the nonperturbative nature of quark pair creation, [*a lattice exploration of the form of the decay vertex should be a high priority topic in the near future.*]{} $\bullet\ $[***chiral pions***]{} Pions are an important part of any attempt to understand strong QCD. As the lightest hadrons they dominate nuclear physics; pion cloud effects are important, and pions are ubiquitous in final states of many hadronic experiments. Having a firm grip on their qualities is of central importance to constructing viable models of strong QCD. It is often said that the constituent quark model view of pions as quark-antiquark bound states is in conflict with their quasi-Goldstone boson nature. However, these two world-views need not be at odds. For example, the very existence of the constituent quark model is due to the existence of light pions: the dynamics which causes dynamical symmetry breaking (and Goldstone bosons) also creates quark-like quasiparticle excitations – the constituent quarks[@chiral].
A recent paper[@ss6] shows explicitly how the Goldstone, collective, nature of the pion can coexist with the $q\bar q$ bound state quark model pion; briefly, both descriptions are correct when the appropriate degrees of freedom are employed (partonic for the Goldstone modes; constituent for the quark model states). Thus it is likely that a good phenomenology may be obtained simply by ignoring the underlying chiral aspects of the pion. However, incorporating the physics of chiral symmetry breaking is important if one wishes to deal with aspects of the QCD vacuum or if the model is strongly constrained (so that pionic fluctuations into many-quark Fock components may not be absorbed into model parameters). $\bullet\ $[***relativity***]{} This is a longstanding and well known problem with the constituent quark model which is a left-over from the early days of hadronic physics. There is really no reason to continue with nonrelativistic approaches (except that they are computationally simple and they work reasonably well!) – and several groups have mounted efforts to construct ‘relativised’ quark models. These typically fall into two categories, light cone/Bakamjian Thomas models[@lc] or Schwinger Dyson/Bethe Salpeter models[@SD]. The latter are closer to field theory (or [*are*]{} truncated field theory) and offer great hope. It is possible to overstate the case for covariance. Any nonperturbative approach must break covariance at some level, for example, the lattice breaks Lorentz invariance by working on a grid and models typically must truncate at some level in Fock space. Both of these problems may be removed in principle – in practice they are [*not*]{} removed, but the effects may be checked and are seen to be small (at least in the case of lattice gauge theory).
It is perhaps more useful to adopt a practical attitude, for example, it would be useful if the computation of the pion decay constant via the PCAC relation $\langle 0 | A_\mu^a(0)|\pi^b(p)\rangle = i f_\pi \delta^{ab} p_\mu$ did [*not*]{} depend on the spacetime index. $\bullet\ $[***short distance dynamics***]{} It is easy to believe that the form of the short distance quark interaction is resolved by QCD; short distance means high $Q^2$, that means small $\alpha_s$, and that means perturbative one gluon exchange. The phenomenology of one gluon exchange works extremely well[@oge1] and has the virtue of providing a good phenomenology for both mesons and baryons[@oge2]. However the [*transition*]{} of one gluon exchange to intermediate or large distance is typically ignored. Indeed, in bound state perturbation theory (which is the way all perturbation theory for hadronic physics should be performed), the diagram corresponding to one gluon exchange (first diagram of Fig. 3) corresponds to mixing with intermediate hybrids (second diagram, Fig. 3), and the first diagram does not occur. However, if one is dealing with a field theory, the first diagram reappears as a counterterm which is active at momentum transfer above the renormalization scale. How these two evolve into each other is therefore an issue dealt with by the renormalization group flow of the underlying field theory and should be properly addressed in a new quark model. This issue is related to a subtlety in most quark models: how are short range and long range dynamics to be merged? If confinement arises from multiple gluon exchange, surely it is not correct to simply add one gluon exchange to an assumed linear potential. Addressing this issue is very difficult in covariant gauges, however, in Coulomb gauge there is a natural separation of instantaneous and transverse potentials which allows the issue to be resolved simply. The last diagram of Fig.
3 represents meson exchange contributions to the quark interaction. If one admits that pion (and meson) loops can affect hadron properties (as we have argued above) then one must allow these sorts of diagrams. However, it is an open issue as to how important they are. Robson examined this possibility years ago[@DR] in the context of the tensor splitting of the $S_{11}(1535)$ and $S_{11}(1650)$ and rejected it. It has since been taken up again[@GR], although not without criticism[@Isgur:2000jv]. $\bullet\ $[***topological aspects***]{} Shortly after the notion of topology (here we focus on instantons) was introduced to QCD, ’t Hooft used instantons to resolve the $U_A(1)$ problem[@inst] – namely that the axial symmetry of (massless) QCD is not realized in the Wigner-Weyl or Nambu-Goldstone modes. It has also been argued that collective effects involving infinitely many instantons may generate the quark condensate, and hence, chiral symmetry breaking. Finally, we mention that instantons induce an effective quark interaction, however it appears that this force does not confine[@SS]. If we are to accept the instanton resolution to the $U_A(1)$ problem, then instanton field configurations must be accepted as an important subset of the vacuum field configurations, and their effects should be included in a new quark model. Indeed, computations with instanton models indicate that they may successfully describe many properties of light hadrons[@SS]. There is also lattice evidence that instantons dominate the vacuum. We note, however, that old arguments of Witten against instantons[@W] have been resurrected[@Horvath:2001ir]. This paper, in turn, has been criticised[@cc]. There appears to be little room for instantons in the nonrelativistic constituent quark model: they simply aren’t needed to explain the spectrum. However, if instantons do dominate low energy QCD, they must be incorporated into models. The Bonn group (see Refs.
[@SD; @SDdecay]) has been developing a model which includes instanton-induced quark interactions in a relativistic Bethe-Salpeter approach, and the resulting phenomenology appears quite successful. Moving beyond the phenomenological stage requires incorporating the effects of instantons in a way which is consistent with the new quark model’s treatment of the vacuum. And this means that a consistent treatment of confinement, chiral symmetry breaking, and instanton effects must be found. I know of no attempts in this direction, and it forms a major challenge for future efforts. $\bullet\ $[***hadronic interactions***]{} Hadronic interactions form an important, if under-appreciated, portion of hadronic physics. They are central to developing a microscopic theory of nuclear physics, to nuclear astrophysics, to the analysis of ‘background’ in $N^*$ and other resonance (hybrids, glueballs) experiments, and to electroweak experiments (where hadronic final state interactions must be properly accounted for). As such, any new quark model should carry with it a well-defined, tractable, methodology for computing hadronic interactions. The present state of affairs is less than ideal. Constituent quark model calculations date from the ’70s[@lib], and continue today with resonating group[@rg] and perturbative[@BS] methods (see Fig. 4). While it is likely that these quark model calculations provide reasonable guidance at low energies (except for pion-dominated physics where one must hope that the necessary chiral properties are captured in the quark model – see the discussion above), it is less clear how applicable they are at high momentum transfer. It is here that light cone approaches[@LC] are expected to be applicable (certainly to inclusive reactions, less certainly to exclusive). Of course what is needed is a consistent formalism which allows the computation of hadronic wavefunctions and hadronic scattering in all energy regimes simultaneously.
It is evident that close contact with QCD needs to be maintained if these goals are to be achieved. One attempt in this regard was made many years ago by Peskin[@Bhanot:1979vb]. This approach is essentially a multipole expansion of the interaction of a small colour singlet state with an external colour field (see Fig. 4). The resulting dipole interaction is assumed to be applicable to hadron-hadron interactions as well. However, we note that one prediction of this model is that the cross section for $\psi'$ with hadronic matter is 5000 times larger than that of $\psi$s. Indeed, Peskin fears that even the $\Upsilon$ system may be too light for the method to work[@p2]. The historical litmus test for hadronic models has been a computation of the hadron spectrum. It is becoming increasingly clear that this is inadequate because the extraction of resonance parameters is fraught with ambiguity. Computations which are closer to the data are required – in particular reaction dynamics need to be incorporated into new quark model predictions. This field is in its infancy; however, it has started[@SL]. Conclusions =========== A crucial aspect of the new quark model is a thorough understanding of the QCD vacuum. This is required to meet many of the issues raised above: chiral symmetry breaking, confinement, topology, and gluodynamics. These issues, in turn, are central to developing a viable model of low energy QCD. It will clearly be a stiff challenge to develop a model which adequately addresses all of these issues; however, hadronic physics provides our only window into strongly interacting field theory and is a vital component of nuclear physics, astrophysics, cosmology, and physics beyond the standard model[@Capstick]. It will therefore be worth the effort to develop such a model! 
An efficient description of hadronic physics will require the identification of appropriate degrees of freedom – constituent quarks, massive gluons, flux tubes, instantons, vortices, or something new. However, if the ambitious goals laid out here are to be achieved, a direct connection of these degrees of freedom to QCD must be maintained. We can take heart that progress is being made. Of particular note is the assistance of lattice gauge theory, which promises to be a useful shortcut to the development of new ideas and to testing these ideas. M. Gell-Mann, Phys. Lett. [**8**]{}, 214 (1964); G. Zweig, CERN preprints TH401 and TH412 (1964), Proceedings of Baryon 1980, pg. 439 (Ed. N. Isgur); Morpurgo, J. Phys. [**2**]{}, 95 (1965); R. Dalitz, [*Eighth International Conference on High Energy Physics*]{}, Berkeley, (1966). E. S. Swanson, “Hadron hadron interactions in the constituent quark model: Results and extensions,” hep-ph/0102267. G.S. Bali, Phys. Rev. [**D62**]{}, 114503 (2000). A. P. Szczepaniak and E. S. Swanson, hep-ph/0107078, to appear in Phys. Rev. D. H.J. Schnitzer, Phys. Rev. Lett. [**35**]{}, 1540 (1975). A. P. Szczepaniak and E. S. Swanson, Phys. Rev. D [**55**]{}, 3987 (1997). D. Atkinson and J.C.R. Bloch, Phys. Rev. [**D58**]{}, 094036 (1998); L. von Smekal, A. Hauck, and R. Alkofer, Ann. Phys. [**267**]{}, 1 (1998). C. J. Morningstar and M. J. Peardon, Phys. Rev. D [**60**]{}, 034509 (1999). K.J. Juge, J. Kuti, and C.J. Morningstar, Nucl. Phys. Proc. Suppl. [**63**]{}, 326 (1998). P. Geiger and N. Isgur, Phys. Rev. D [**55**]{}, 299 (1997). P. Geiger and N. Isgur, Phys. Rev. D [**47**]{}, 5050 (1993). N. A. Tornqvist and P. $\dot {\rm Z}$enczykowski, Phys. Rev. D [**29**]{}, 2139 (1984). L. Micu, Nucl. Phys. [**B10**]{}, 521 (1969); R. Carlitz and M. Kislinger, Phys. Rev. D [**2**]{}, 336 (1970); A. Le Yaouanc, L. Oliver, O. Pene, and J.-C. Raynal, Phys. Rev. D [**8**]{}, 2233 (1973); Phys. Lett.
[**71 B**]{}, 397 (1977); [*ibid*]{} [**72 B**]{}, 57 (1977). R. Kokoski and N. Isgur, Phys. Rev. D [**35**]{}, 907 (1987); S. Capstick and W. Roberts, Phys. Rev. D [**47**]{}, 1994; A. LeYaouanc [*et al.*]{}, [*Hadron Transitions in the Quark Model*]{}, (Gordon Breach, New York, 1988); S. Godfrey and N. Isgur, Phys. Rev. D [**32**]{}, 189 (1985); R. Bonnaz and B. Silvestre-Brac, Prog. Part. Nucl. Phys.  [**44**]{}, 369 (2000). J.W. Alcock, M.J. Burfitt, W.N. Cottingham, Z. Phys. [**C25**]{}, 161 (1984); P. Geiger and E. S. Swanson, Phys. Rev. D [**50**]{}, 6855 (1994). E. S. Ackleh, T. Barnes and E. S. Swanson, Phys. Rev. D [**54**]{}, 6811 (1996). J. C. Bloch, Y. L. Kalinovsky, C. D. Roberts and S. M. Schmidt, Phys. Rev. D [**60**]{}, 111502 (1999); M. A. Pichowsky, S. Walawalkar and S. Capstick, Phys. Rev. D [**60**]{}, 054030 (1999); R. Ricken, M. Koll, D. Merten, B. C. Metsch and H. R. Petry, Eur. Phys. J. A [**9**]{}, 221 (2000). S. L. Adler and A. C. Davis, Nucl. Phys. B [**244**]{}, 469 (1984); J. R. Finger and J. E. Mandula, Nucl. Phys. B [**199**]{}, 168 (1982); A. Le Yaouanc, L. Oliver, S. Ono, O. Pene and J. C. Raynal, Phys. Rev. D [**31**]{}, 137 (1985). A. P. Szczepaniak and E. S. Swanson, Phys. Rev. Lett.  [**87**]{}, 072001 (2001). S. Capstick and B. D. Keister, Phys. Rev. D [**51**]{}, 3598 (1995); L. S. Kisslinger, H. M. Choi and C. R. Ji, Phys. Rev. D [**63**]{}, 113005 (2001). P. Maris and C. D. Roberts, Phys. Rev. C [**56**]{}, 3369 (1997); U. Loring, B. C. Metsch and H. R. Petry, Eur. Phys. J. A [**10**]{}, 395 (2001). A. De Rujula, H. Georgi and S. L. Glashow, Phys. Rev. D [**12**]{}, 147 (1975); N. Isgur and G. Karl, Phys. Rev. D [**18**]{}, 4187 (1978). D. Robson, [*Proceedings of the Topical Conference on Nuclear Chromodynamics*]{}, Argonne National Laboratory (1988), Eds. J. Qiu and D. Sivers (World Scientific), pg. 174. L. Y. Glozman and D. O. Riska, Phys. Rept.  [**268**]{}, 263 (1996). N. Isgur, Phys. Rev. D [**62**]{}, 054026 (2000). 
G. ’t Hooft, Phys. Rev. Lett. [**37**]{}, 8 (1976); A.M. Polyakov, Phys. Lett. [**59B**]{}, 82 (1975); Nucl. Phys. [**B121**]{}, 429 (1977); A.A. Belavin, A.M. Polyakov, A. Schwartz, and Y. Tyupkin, Phys. Lett. [**59B**]{}, 85 (1975); R. Jackiw and C. Rebbi, Phys. Rev. Lett. [**37**]{}, 172 (1976). See T. Schäfer and E. Shuryak, Rev. Mod. Phys. [**70**]{}, 323 (1998). E. Witten, Nucl. Phys. B [**149**]{}, 285 (1979); see also J. Kogut and L. Susskind, Phys. Rev. D [**11**]{}, 3594 (1975). I. Horvath, N. Isgur, J. McCune and H. B. Thacker, hep-lat/0102003. R. G. Edwards and U. M. Heller, hep-lat/0105004; T. DeGrand and A. Hasenfratz, hep-lat/0103002. D.A. Liberman, Phys. Rev. [**D16**]{}, 1542 (1977). M. Oka and K. Yazaki, Phys. Lett. [**90B**]{} (1980), 41; Prog. Theor. Phys. [**66**]{} (1981), 556; [*ibid.*]{}, p.572; A. Faessler, F. Fernandez, G. Lubeck and K. Shimizu, Phys. Lett. [**112B**]{} (1982), 201; Y. Suzuki and K.T. Hecht, Phys. Rev. [**C27**]{}, 299 (1983); T. Barnes and E. S. Swanson, Phys. Rev. D [**46**]{}, 131 (1992); E. S. Swanson, Annals Phys.  [**220**]{}, 73 (1992). S. J. Brodsky, “Hadronic light-front wavefunctions and QCD phenomenology,” hep-ph/0102051. G. Bhanot and M. E. Peskin, Nucl. Phys. B [**156**]{}, 391 (1979). M. Peskin, private communication. T. S. Lee and T. Sato, Nucl. Phys. A [**684**]{}, 327 (2001). S. Capstick [*et al.*]{}, “Key issues in hadronic physics,” hep-ph/0012238. [^1]: Based on a plenary talk presented at Hadrons 2001, Aug 25 - Sept 1, Protvino, Russia. [^2]: There is compelling evidence for this in meson-meson and meson-baryon scattering data[@hir].
--- abstract: 'A *compact* $T$-algebra is an initial $T$-algebra whose inverse is a final $T$-coalgebra. Functors with this property are said to be *algebraically compact*. This is a very strong property used in programming semantics which allows one to interpret recursive datatypes involving mixed-variance functors, such as function space. The construction of compact algebras is usually done in categories with a zero object where some form of a limit-colimit coincidence exists. In this paper we consider a more abstract approach and show how one can construct compact algebras in categories which have neither a zero object, nor a (standard) limit-colimit coincidence by reflecting the compact algebras from categories which have both. In doing so, we provide a *constructive* description of a large class of algebraically compact functors (satisfying a compositionality principle) and show our methods compare quite favorably to other approaches from the literature.' author: - Vladimir Zamdzhiev bibliography: - 'refs.bib' title: Reflecting Algebraically Compact Functors --- Introduction {#sec:intro} ============ *Inductive datatypes* for programming languages can be used to represent important data structures such as lists, trees, natural numbers and many others. When providing a denotational interpretation for such languages, type expressions correspond to functors and one has to be able to construct their initial algebras in order to model inductive datatypes [@lehman-smyth]. If the admissible datatype expressions allow only pairing and sum types, then the functors induced by these expressions are all polynomial functors, i.e., functors constructed using only coproducts and (tensor) product connectives, and the required initial algebra may usually be constructed using Adámek’s celebrated theorem [@adamek-original]. 
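For reference, the construction underlying Adámek’s theorem (a standard statement, recalled here for convenience) applies when the category has an initial object $0$ and colimits of $\omega$-chains that $F$ preserves: the initial $F$-algebra is the colimit of the initial chain $$0 \xrightarrow{\;!\;} F0 \xrightarrow{\;F!\;} F^{2}0 \xrightarrow{\;F^{2}!\;} \cdots, \qquad \mu F \;\cong\; \operatorname{colim}_{n<\omega} F^{n}0,$$ with structure map the canonical morphism $F(\mu F) \to \mu F$ obtained from preservation of the colimit; by Lambek’s lemma it is an isomorphism.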
However, if one also allows function types as part of the admissible datatype expressions, then we speak of *recursive datatypes*, whose denotational interpretation requires additional structure. A solution advocated by Freyd [@freyd] and Fiore and Plotkin [@fiore-plotkin] is based on *algebraically compact functors*, i.e., functors $F$ which have an initial $F$-algebra whose inverse is a final $F$-coalgebra. $F$-algebras with this property are called *compact* within this paper. The celebrated limit-colimit coincidence theorem [@smyth-plotkin] and other similar theorems are usually used to construct compact algebras, starting from a zero object of the category in which the language is interpreted. However, if one is interested in semantics for mixed linear/non-linear lambda calculi, then it becomes necessary to also solve recursive domain equations within categories that do not have a zero object. In this paper, we demonstrate how one can construct compact algebras in categories which do not have a zero object, and we do so without (explicitly) assuming the existence of any limits or colimits whatsoever. Our methods are based on *enriched* category theory and we show how this allows us to reflect compact algebras from categories with strong algebraic compactness properties into categories without such properties. The results we present are also compositional, which allows us to provide constructive descriptions of large classes of algebraically compact functors using formal grammars. A Reflection Theorem for Algebraically Compact Functors {#sec:reflect} ======================================================= In this section we show how initial algebras, final coalgebras and compact algebras may be reflected. Given an endofunctor $T: \CC \to \CC$, a *$T$-algebra* is a pair $(A, a),$ where $A$ is an object of $\CC$ and $TA \xrightarrow{a} A$ is a morphism of $\CC$.
A $T$-algebra morphism $f : (A, a) \to (B, b)$ is a morphism $f: A \to B$ of $\CC$, such that $f \circ a = b \circ Tf$, i.e., the evident square commutes. The dual notion is called a $T$-coalgebra. Obviously, $T$-(co)algebras form a category. A $T$-(co)algebra is initial (final) if it is initial (final) in that category. An endofunctor $T: \CC \to \CC$ is (1) *algebraically complete* if it has an initial $T$-algebra, (2) *algebraically cocomplete* if it has a final $T$-coalgebra and (3) *algebraically compact* if it has an initial $T$-algebra $T \Omega \xrightarrow{\omega} \Omega,$ such that $T \Omega \xleftarrow{ \omega^{-1} } \Omega$ is a final $T$-coalgebra. Next, we recall a lemma first observed by Peter Freyd. Let $\CC$ and $\DD$ be categories and $F: \CC \to \DD$ and $G : \DD \to \CC$ functors. If $GF \Omega \xrightarrow{ \omega } \Omega$ is an initial $GF$-algebra, then $FGF \Omega \xrightarrow{F \omega} F \Omega$ is an initial $FG$-algebra. By dualising the above lemma, we obtain the next one. \[lem:coalgebras\] Let $\CC$ and $\DD$ be categories and $F: \CC \to \DD$ and $G : \DD \to \CC$ functors. If $GF \Omega \xleftarrow{ \omega } \Omega$ is a final $GF$-coalgebra, then $FGF \Omega \xleftarrow{F \omega} F \Omega$ is a final $FG$-coalgebra. The next theorem follows immediately from the two lemmas above. \[thm:reflect\] Let $\CC$ and $\DD$ be categories and $F: \CC \to \DD$ and $G : \DD \to \CC$ functors. Then $FG$ is algebraically complete/cocomplete/compact iff $GF$ is algebraically complete/cocomplete/compact, respectively. In order to avoid cumbersome repetition, all subsequent results are stated for algebraic compactness. However, all results presented in this section and the next one (excluding Non-Example \[nonexample:not-compact\]) also hold true when all instances of “algebraic compactness” are replaced with “algebraic completeness” or with “algebraic cocompleteness”.
\[ass:enrich\] Throughout the rest of the paper we assume we are given an arbitrary cartesian closed category $\VV(1, \times, \to)$ which we will use as the base of enrichment. $\VV$-categories are written using capital calligraphic letters $(\CCE, \DDE, \ldots)$ and their underlying categories using a corresponding bold capital letter $(\CC, \DD, \ldots)$. $\VV$-functors are also written with calligraphic letters $(\FE, \GE : \CCE \to \DDE)$ and their underlying functors using a corresponding capital letter $F, G : \CC \to \DD.$ A $\VV$-endofunctor $\TE : \CCE \to \CCE $ is *algebraically compact* if its underlying endofunctor $T : \CC \to \CC$ is algebraically compact. A $\VV$-category $\CCE$ is *$\VV$-algebraically compact* if every $\VV$-endofunctor $\TE : \CCE \to \CCE$ is algebraically compact. In particular, a $\Set$-algebraically compact category is a locally small category $\CC$, such that every endofunctor $T : \CC \to \CC$ is algebraically compact. In this case we simply say $\CC$ is algebraically compact. Let $\lambda$ be a cardinal and let $\Hilb$ be the category whose objects are the Hilbert spaces with dimension at most $\lambda$ and whose morphisms are the linear maps of norm at most 1. Then $\Hilb$ is algebraically compact [@barr Theorem 3.2]. For the next (very important) example, recall that a *complete partial order* (cpo) is a poset such that every increasing chain has a supremum. A cpo is *pointed* if it has a least element. A monotone map $f : X \to Y$ between two cpo’s is *Scott-continuous* if it preserves suprema. If, in addition, $X$ and $Y$ are pointed and $f$ preserves the least element of $X$, then we say that $f$ is *strict*. We denote with $\cpo$ the category of cpo’s and Scott-continuous functions and we denote with $\cpobs$ the category of pointed cpo’s and strict Scott-continuous functions. 
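Pointedness together with Scott-continuity is exactly what makes least fixed points computable by iterating from the least element. The following Python sketch illustrates this on a small example; the finite poset and the map `f` are invented for illustration and are not part of the paper's development.

```python
# Kleene iteration: bottom <= f(bottom) <= f(f(bottom)) <= ... .  On a
# finite poset every monotone map is Scott-continuous, so the chain
# stabilises at the least fixed point of f.

def lfp(f, bottom):
    """Least fixed point of a monotone map on a finite pointed poset."""
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

# Illustrative pointed cpo: subsets of {1, 2, 3} ordered by inclusion,
# with bottom the empty set, and a monotone (hence continuous) map that
# always adds 1 and adds 2 once 1 is present.
def f(s):
    out = set(s) | {1}
    if 1 in s:
        out |= {2}
    return frozenset(out)

print(sorted(lfp(f, frozenset())))  # [1, 2]
```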
The category $\cpo$ is cartesian closed, $\cpobs$ is symmetric monoidal closed (when equipped with the smash product and strict function space) and both categories are complete and cocomplete [@abramskyjung:domaintheory]. We will view both categories as $\cpo$-categories when equipped with the standard pointwise order on functions. Therefore, a $\cpo$-category ($\cpobs$-category) is simply a category whose homsets have the additional structure of a (pointed) cpo and for which composition is a (strict) Scott-continuous operation in both arguments. A $\cpo$-functor ($\cpobs$-functor) is then simply a functor whose action on hom-cpo’s is a (strict) Scott-continuous function. The notion of a $\cpo$-natural transformation coincides with that of a $\cpobs$-natural transformation, which also coincides with the ordinary notion. For these reasons, it is standard in the programming semantics literature to use the same notation for $\cpo_{(\perp!)}$-enriched categorical notions and their ordinary underlying counterparts. We do the same in this paper. The category $\cpobs$ is $\cpo$-algebraically compact [@fiore-thesis Corollary 7.2.4]. Next, we show how to reflect algebraically compact $\VV$-functors. \[def:factor\] We shall say that a $\VV$-endofunctor $\TE : \CCE \to \CCE$ has a *$\VV$-algebraically compact factorisation* if there exists a $\VV$-algebraically compact category $\DDE$ and $\VV$-functors $\FE : \CCE \to \DDE$ and $\GE : \DDE \to \CCE$ such that $\TE \cong \GE \circ \FE.$ \[thm:factor\] If a $\VV$-endofunctor $\TE : \CCE \to \CCE$ has a $\VV$-algebraically compact factorisation, then it is algebraically compact. *Proof.* Taking $\DDE, \FE, \GE$ as in Definition \[def:factor\], we get a $\VV$-endofunctor $\FE \circ \GE : \DDE \to \DDE$. Since $\DDE$ is $\VV$-algebraically compact, its underlying endofunctor $F \circ G : \DD \to \DD$ is algebraically compact. Theorem \[thm:reflect\] then shows that $G \circ F : \CC \to \CC$ is algebraically compact.
Algebraic compactness is preserved by natural isomorphisms and therefore $T \cong G \circ F$ is also algebraically compact. Using the two examples above, we easily get two corollaries. Any endofunctor $T: \Set \to \Set$ which factors through $\Hilb$ is algebraically compact. \[cor:cpo\] Any $\cpo$-endofunctor $T: \cpo \to \cpo$ which factors through $\cpobs$ via a pair of $\cpo$-functors, is algebraically compact. Thus the lifting functor $(-)_\perp : \cpo \to \cpo$ (given by freely adding a least element) is algebraically compact. Note that (ordinary) algebraically compact functors are *not* closed under composition. However, using the additional structure we have introduced, we can prove the following compositionality result. \[prop:compose\] Let $\HE : \CCE \to \CCE$ be a $\VV$-endofunctor and $\TE: \CCE \to \CCE$ be a $\VV$-endofunctor with a $\VV$-algebraically compact factorisation. Then $\HE \circ \TE$ also has a $\VV$-algebraically compact factorisation and is thus algebraically compact. *Proof.* If $\TE \cong \GE \circ \FE$, then $\HE \circ \TE \cong (\HE \circ \GE) \circ \FE$. Constructive Classes of Algebraically Compact Functors {#sec:construct} ====================================================== Throughout the rest of the section, we assume we are given the following data: a $\VV$-category $\CCE$, a $\VV$-algebraically compact category $\DDE$ together with $\VV$-functors $\FE : \CCE \to \DDE$ and $\GE : \DDE \to \CCE$, and a $\VV$-endofunctor $\TE \cong \GE \circ \FE.$ Consider the following grammar: $$\label{eq:grammar} A, B ::= \TE X\ |\ \HE(A_1, \ldots, A_n)$$ where $X$ is simply a type variable, $n$ ranges over the natural numbers (including zero) and $\HE$ ranges over $\VV$-functors $\HE : \CCE^n \to \CCE$.
Every such type expression induces a $\VV$-endofunctor $\lrb{X \vdash A} : \CCE \to \CCE,$ defined by: $$\begin{aligned} \lrb{X \vdash \TE X} &= \TE \\ \lrb{X \vdash \HE(A_1, \ldots, A_n)} &= \HE \circ \langle \lrb{X \vdash A_1}, \ldots, \lrb{X \vdash A_n} \rangle\end{aligned}$$ Since the base of enrichment $\VV$ is cartesian, tuples of $\VV$-functors, as above, are also $\VV$-functors and the above assignment is well-defined. Also, $\VV$-algebraically compact categories have been studied only for cartesian $\VV$. For these two reasons, Assumption \[ass:enrich\] cannot be relaxed to a symmetric monoidal closed $\VV$. \[thm:compact-big\] Any functor $\lrb{ X \vdash A } : \CCE \to \CCE$ factors through $\FE$ and is therefore algebraically compact. *Proof.* By induction. For the base case we have $\TE \cong \GE \circ \FE$. The step case is given by $\lrb{X \vdash \HE(A_1, \ldots, A_n)} = \HE \circ \langle \lrb{X \vdash A_1}, \ldots, \lrb{X \vdash A_n} \rangle \cong \HE \circ \langle \GE_1 \circ \FE, \ldots, \GE_n \circ \FE \rangle = \HE \circ \langle \GE_1, \ldots, \GE_n \rangle \circ \FE, $ for some $\VV$-functors $\GE_i : \DDE \to \CCE$. In particular, the $\VV$-functor $\TE$ itself is algebraically compact. \[ex:constant\] Any constant functor $K_c : \CC \to \CC$ is, of course, algebraically compact. This is captured by our theorem, because $K_c$ is the underlying functor of the constant-$c$ $\VV$-endofunctor $\KE_c : \CCE \to \CCE$, which may be constructed using our grammar. If $\BBE_1 : \CCE \times \CCE \to \CCE$ and $\BBE_2 : \CCE \times \CCE \to \CCE$ are two $\VV$-bifunctors, and $\EE : \CCE \to \CCE$ is a $\VV$-endofunctor, then the endofunctors $\EE \circ \TE$ and $\BBE_1 \circ \langle \TE, \TE \rangle$ and $\BBE_2 \circ \langle \EE \circ \TE, \BBE_1 \circ \langle \TE, \TE \rangle \rangle$ are algebraically compact (among many other combinations).
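At the level of objects, the interpretation $\lrb{X \vdash A}$ can be mimicked directly. In the following Python sketch a functor is represented only by its action on finite sets; the concrete choices of $\TE$ (modelled as lifting, i.e. adjoining a fresh bottom element) and $\HE$ (binary product) are illustrative assumptions, not the paper's data.

```python
# Object-level sketch of the compositional interpretation [[X |- A]]
# for the grammar  A ::= T X | H(A1, ..., An).

def T(X):
    """Illustrative monad T at the object level: lifting (add a fresh bottom)."""
    return frozenset(X) | {('bot',)}

def prod(X, Y):
    """Illustrative functor H: binary product at the object level."""
    return frozenset((x, y) for x in X for y in Y)

def interp(A):
    """[[X |- A]] as a function on finite sets.
    AST nodes: ('TX',) for the base case, ('H', H, subtrees) for the step case."""
    if A == ('TX',):
        return T
    _, H, subs = A
    return lambda S: H(*[interp(B)(S) for B in subs])

S = frozenset({0, 1})
ast = ('H', prod, [('TX',), ('TX',)])        # [[X |- TX x TX]]
assert interp(ast)(S) == prod(T(S), T(S))    # compositionality, as in the definition
print(len(interp(ast)(S)))  # 9, i.e. |T(S)|^2 with |T(S)| = 3
```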
Special Case: Models of Mixed Linear/Non-linear Lambda Calculi {#sub:covariant-dill} -------------------------------------------------------------- As a special case, our development can be applied to models of mixed linear/non-linear lambda calculi with recursive types, as we shall now explain. In a $\cpo$-category, an *embedding-projection pair* is a pair of morphisms $(e, p)$, such that $e \circ p \leq \id$ and $p \circ e = \id$. The morphism $e$ is called an *embedding* and the morphism $p$ a *projection*. An *e-initial object* is an initial object $0$, such that every initial map with it as source is an embedding. \[def:model\] A model of the linear/nonlinear fixpoint calculus (LNL-FPC) [@lnl-fpc] is given by the following data: 1. A $\cpo$-symmetric monoidal closed category $\DD$ with finite $\cpo$-coproducts, such that $\DD$ has an e-initial object and all $\omega$-colimits over embeddings; 2. A $\cpo$-symmetric monoidal adjunction $\stikz{lnl-fpc-model.tikz}$. In the above situation, the category $\DD$ is necessarily $\cpo$-algebraically compact, so it is an ideal setting for constructing compact algebras of $\cpo$-functors. We will now show that the monad $T$ of this adjunction also induces a large class of algebraically compact functors on $\cpo$ (which is not $\cpo$-algebraically compact). But first, we give two examples of the above situation. The adjunction between $\cpo$ and $\cpobs$, where the left adjoint is given by domain-theoretic lifting and the right adjoint is the forgetful functor, has the required structure. The induced monad $T : \cpo \to \cpo$ is called *lifting* (see Corollary \[cor:cpo\]). This adjunction is in fact a computationally adequate model of LNL-FPC [@lnl-fpc]. Let $\M$ be a small $\cpobs$-symmetric monoidal category and let $\widehat \M = [\M^\op, \cpobs]$ be the indicated $\cpobs$-functor category.
There exists an adjunction between $\cpobs$ and $\widehat \M$, where the left adjoint is the $\cpobs$-copower with the tensor unit $I$ and the right adjoint is the representable functor (see [@borceux:handbook2 §6]). Composing the two adjunctions yields an LNL-FPC model. By making suitable choices for $\M$, this data also becomes a model of Proto-Quipper-M, a quantum programming language [@pqm-small], and also a model of ECLNL, a programming language for string diagrams [@eclnl]. Since $\DD$ is $\cpo$-algebraically compact, we can now construct a large class of algebraically compact functors via Theorem \[thm:compact-big\]. For instance, such a subclass is given by the following corollary. Any endofunctor on $\cpo$ constructed using constants, $T$, $\times$ and $+,$ and such that all occurrences of the functorial variable in its definition are surrounded by $T$, is algebraically compact. To make this more precise, one should specify a formal grammar like (\ref{eq:grammar}) to indicate the admissible functorial expressions, but it should be clear that (\ref{eq:grammar}) can be easily specialised to handle this. Next, let us consider some example endofunctors on $\cpo$. \[ex:plus\] The endofunctor $H(X) = TX + TX $ is algebraically compact. Indeed, observe that $H = + \circ \langle T, T \rangle = \lrb{X \vdash TX + TX}.$ The endofunctor $H(X) = TX + T(TX \times TX)$ is algebraically compact. To see this, observe that $H = + \circ \langle T, T \circ \times \circ \langle T, T \rangle \rangle = \lrb{X \vdash TX + T(TX \times TX)}.$ \[nonexample:not-compact\] The endofunctor $H(X) = X \times TX$ is not algebraically compact (its initial algebra is $\varnothing \times T\varnothing = \varnothing \xrightarrow{\id} \varnothing$). Our results do not apply to it, because the left occurrence of $X$ does not have $T$ applied to it. For the same reason, the identity functor $\Id(X) = X$ is also not algebraically compact and not covered by our development.
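The non-example above can be checked at the object level. The following Python sketch (with $T$ again modelled as lifting, an illustrative assumption) iterates the initial chain of $H(X) = X \times TX$ starting from the empty set: the chain never leaves it, matching the claim that the initial algebra has empty carrier.

```python
def T(X):
    """Illustrative T: lifting at the object level (add a fresh bottom)."""
    return frozenset(X) | {('bot',)}

def H(X):
    """The non-example H(X) = X x TX at the object level."""
    return frozenset((x, t) for x in X for t in T(X))

# The initial chain 0, H(0), H(H(0)), ...: the empty left factor keeps
# every product empty, so the chain is constantly the empty set.
chain = [frozenset()]
for _ in range(4):
    chain.append(H(chain[-1]))
print([len(s) for s in chain])  # [0, 0, 0, 0, 0]
```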
Algebraically Compact Mixed-Variance Functors ============================================= As mentioned in the introduction, algebraic compactness allows us to model recursive datatypes which include mixed-variance functors such as function space. In this section we show that our methods are also compatible with recursive datatypes. Consider a mixed-variance bifunctor $H: \CC^\op \times \CC \to \CC$. Since $H$ is not an endofunctor, we clearly cannot talk about $H$-algebras or $H$-coalgebras. A more appropriate notion is that of an $H$-*dialgebra*, which we will not introduce here, because of a lack of space and because the category of $H$-dialgebras is isomorphic to the category of $\cupp H$-algebras [@freyd2 §4], where $$\cupp H = \langle H^\op \circ \langle \Pi_2, \Pi_1 \rangle, H \rangle : \CC^\op \times \CC \to \CC^\op \times \CC.$$ Because of this, it is standard to model recursive datatypes as endofunctors $\cupp H: \CC^\op \times \CC \to \CC^\op \times \CC$ [@fiore-plotkin]. If a category $\DDE$ is $\VV$-algebraically complete, then $\DDE^\op$ is $\VV$-algebraically cocomplete and vice versa. Thus, $\VV$-algebraic compactness is a self-dual notion. Unlike in the previous sections, the results presented here do not hold for algebraically complete or cocomplete functors and categories. Moreover, if a category $\DDE$ is $\VV$-algebraically compact in a parameterised sense, then so is $\DDE^\op \times \DDE.$ For lack of space, we omit the details of parameterised algebraic compactness, but the interested reader may consult [@fiore-plotkin]. We point out that the notions of $\cpo$-algebraic compactness and parameterised $\cpo$-algebraic compactness coincide [@fiore-thesis Corollary 7.2.5] and we shall consider such a $\cpo$-example to illustrate our methods. But we emphasise that our methods can be adapted to the general setting of a parameterised $\VV$-algebraically compact category $\DDE$.
Let us assume we are given an LNL-FPC model $\stikz{lnl-fpc-model.tikz}$ as in Subsection \[sub:covariant-dill\] with $T = G \circ F$. In this situation, the category $\DD^\op \times \DD$ is also $\cpo$-algebraically compact and we can thus reuse Theorem \[thm:factor\] and Proposition \[prop:compose\], where we choose the $\cpo$-algebraically compact factorisation $T^\op \times T = (G^\op \times G) \circ (F^\op \times F)$. Consider the following grammar: $$\label{eq:mixed-grammar} A, B ::= c\ |\ T X\ |\ H A\ |\ A+B\ |\ A \times B\ |\ A \to B,$$ where $c$ ranges over the objects of $\cpo$ and $H$ ranges over $\cpo$-endofunctors on $\cpo$. Every such type expression induces a $\cpo$-endofunctor $\lrb{X \vdash A} : \cpo^\op \times \cpo \to \cpo^\op \times \cpo,$ defined by: $$\begin{aligned} \lrb{X \vdash T X} &= T^\op \times T \\ \lrb{X \vdash c} &= K_{(c,c)} \\ \lrb{X \vdash H A} &= (H^\op \times H) \circ \lrb{X \vdash A} \\ \lrb{X \vdash A+B} &= \left( + \circ \langle \Pi_2 \lrb{X \vdash A}, \Pi_2 \lrb{X \vdash B} \rangle \right)^\smallsmile \\ \lrb{X \vdash A \times B} &= \left( \times \circ \langle \Pi_2 \lrb{X \vdash A}, \Pi_2 \lrb{X \vdash B} \rangle \right)^\smallsmile \\ \lrb{X \vdash A \to B} &= \left( [ - \to - ] \circ \langle \Pi_1 \lrb{X \vdash A}, \Pi_2 \lrb{X \vdash B} \rangle \right)^\smallsmile,\end{aligned}$$ where $K_{(c,c)}$ is the constant $(c,c)$ endofunctor on $\cpo^\op \times \cpo$ and $[- \to - ] : \cpo^\op \times \cpo \to \cpo$ is the internal-hom. The last three cases in the above assignment are essentially the same as the standard interpretation of types within FPC [@fiore-plotkin Definition 6.2]. Every functor $\lrb{X \vdash A} : \cpo^\op \times \cpo \to \cpo^\op \times \cpo$ factors through $F^\op \times F$ and is therefore algebraically compact. *Proof.* Simple proof by induction. The first three cases are obvious. 
For the last three cases, simply use the fact that $(H \circ (F^\op \times F))^\smallsmile = \cupp H \circ (F^\op \times F),$ which can be proved after recognising that $(-)^\op$ is a *covariant* operation with respect to functor composition. Consider the functor $H(X, Y) = [TX \to TY] : \cpo^\op \times \cpo \to \cpo$. Then the functor $\cupp H : \cpo^\op \times \cpo \to \cpo^\op \times \cpo$ is algebraically compact, because: $$\begin{aligned} \cupp H = ([- \to - ] \circ (T^\op \times T))^\smallsmile &= ([ - \to - ] \circ \langle \Pi_1, \Pi_2 \rangle \circ (T^\op \times T))^\smallsmile \\ &= \left( [ - \to - ] \circ \langle \Pi_1 \circ (T^\op \times T), \Pi_2 \circ (T^\op \times T) \rangle \right)^\smallsmile \\ &= \lrb{X \vdash TX \to TX}.\end{aligned}$$ Consider the internal-hom functor $[- \to - ] : \cpo^\op \times \cpo \to \cpo$. Then $[ - \cupp \to - ]$ is not algebraically compact, because its initial algebra is given by $$\left( [ - \cupp \to -] (1, \varnothing) = ([ \varnothing \to 1 ] , [1 \to \varnothing] ) = (1, \varnothing) \right) \xrightarrow{\id} (1, \varnothing),$$ which is not its final coalgebra. Our results do not apply to $[ - \cupp \to -]$, because $T$ does not occur anywhere in its definition. Comparison with Limit-Colimit Coincidence Results ================================================= The focus in this paper is to study algebraically compact endofunctors on categories which do not necessarily have a zero object. In [@barr] Michael Barr considers this situation and he presents a more general version of the standard limit-colimit coincidence theorem [@smyth-plotkin]. The increased generality allows him to establish the existence of algebraically compact endofunctors on categories that do not have a zero object. In this section, we will compare his results about $\cpo$-categories with ours. \[thm:barr\] Let $\CC$ be a $\cpo$-category with initial object $\varnothing$ and terminal object $1$. 
Assume further that $\CC$ has colimits of initial sequences of $\cpo$-endofunctors. Then every $\cpo$-endofunctor $H$ for which there is a morphism $l : 1 \to H\varnothing$ such that $\left( H1 \xrightarrow{} 1 \xrightarrow{l} H\varnothing \xrightarrow{Hh} H1 \right) \leq \id_{H1},$ where $h: \varnothing \to 1$ is the unique arrow, is algebraically compact. First, we note a necessary condition in the above situation. In the situation of Theorem \[thm:barr\], the hom-cpo $\CC(H1, H1)$ is pointed. *Proof.* Let $\perp\ = \left( H1 \xrightarrow{} 1 \xrightarrow{l} H\varnothing \xrightarrow{Hh} H1 \right)$. Let $f: H1 \to H1$ be an arbitrary morphism. Then $\perp\ =\ \perp \circ f \leq \id \circ f = f.$ We may now see that Barr’s theorem does not behave well when dealing with constant functors or with functors involving coproducts. Consider the constant functor $K_2 : \cpo \to \cpo$ where $2$ is any two-point cpo equipped with the discrete order. As we explained in Example \[ex:constant\], our development captures the fact that $K_2$ is algebraically compact. However, Barr’s theorem does not show this, because $\cpo(2, 2)$ is not pointed. Consider the functor $H(X) = X_\perp + X_\perp : \cpo \to \cpo$ where $(-)_\perp$ is given by lifting. Our development showed in Example \[ex:plus\] that this functor is algebraically compact. However, Barr’s theorem does not show this, because $\cpo(1_\perp + 1_\perp, 1_\perp + 1_\perp)$ is not pointed. A natural question to ask is whether there exists an algebraically compact functor described by Theorem \[thm:barr\], but not captured by the methods presented here. We leave this for future work. We also provided a compositionality result (Proposition \[prop:compose\]) which then allowed us to present a *constructive* description of large classes of algebraically compact functors (Section \[sec:construct\]). So far, this has not been done using Barr’s results.
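The counterexample with $K_2$ can be verified mechanically. The following Python sketch enumerates all maps $2 \to 2$ for the discrete two-point cpo (every such map is trivially monotone, hence Scott-continuous) and checks that the pointwise order on $\cpo(2,2)$ has no least element, so the necessary condition above fails.

```python
# cpo(2, 2) for the discrete two-point cpo 2 = {0, 1}: four functions,
# discretely ordered pointwise, and therefore with no least element.
from itertools import product

two = [0, 1]                 # discrete order: x <= y iff x == y
homs = [dict(zip(two, fs)) for fs in product(two, repeat=2)]  # all maps 2 -> 2

def leq(f, g):
    """Pointwise order on cpo(2, 2); the discrete order on 2 is equality."""
    return all(f[x] == g[x] for x in two)

bottoms = [f for f in homs if all(leq(f, g) for g in homs)]
print(len(homs), len(bottoms))  # 4 0
```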
Another approach for modelling mixed linear/non-linear recursive types is described in [@lnl-fpc] where the authors interpret non-linear types within a carefully constructed subcategory of $\cpo$. That method works only for $\cpo$-categories whereas the techniques presented here work for arbitrary $\VV$-categories. Also, the set of type expressions that can be interpreted with the methods from [@lnl-fpc] is incomparable with the one presented here (neither is a subset of the other). Conclusion ========== We established new results about algebraically compact functors without relying on limits, colimits or their coincidence. We arrived at these results in a more abstract way by observing that any enriched endofunctor is algebraically compact, provided that it factors through a category which is algebraically compact in an enriched sense. This then allowed us to establish large classes of algebraically compact functors which also admit a constructive description. Our results are compositional and nicely complement other existing approaches in the literature which do rely on a limit-colimit coincidence. #### Acknowledgements. The author is supported by the French projects ANR-17-CE25-0009 SoftQPro and PIA-GDN/Quantex.
--- abstract: 'The electron-positron pair production accompanying the interaction of a circularly polarized laser pulse with a foil is studied for laser intensities higher than $10^{24}$ W cm$^{-2}$. The laser energy penetrates into the foil due to the effect of relativistic hole-boring. It is demonstrated that the electron-positron plasma is produced as a result of quantum-electrodynamical cascading in the field of the incident and reflected laser light in front of the foil. The incident and reflected laser light make up a circularly polarized standing wave in the reference frame of the hole-boring front, and the pair density peaks near the nodes and antinodes of the wave. A model based on particle dynamics with radiation reaction near the magnetic nodes is developed. The model predictions are verified by 3D PIC-MC simulations.' author: - 'I. Yu. Kostyukov' - 'E. N. Nerush' title: 'Production and dynamics of positrons in ultrahigh intensity laser-foil interactions' --- Introduction ============ Ultrahigh intensity laser-matter interaction attracts much attention, primarily due to the rapid development of laser technology [@Michigan2008; @Mourou2006]. At extremely high laser intensities, quantum-electrodynamical (QED) effects start to play a key role. Among them are photon emission by electrons and positrons with strong recoil, photon decay in a strong electromagnetic field with electron-positron pair creation (the Breit-Wheeler process), the Bethe-Heitler process, the trident process, etc. [@Marklund2006; @Piazza2012].
The laser-matter interaction in the QED-dominated regime leads to the manifestation of new phenomena like prolific production of gamma-rays and electron-positron pairs [@Nerush2007; @Ridgers2012; @Kirk2013; @Bashinov2014; @Brady2014; @Nerush2014; @Nerush2015], laser-assisted QED cascading [@Bell2008; @Fedotov2010; @Nerush2011; @Bulanov2013; @Bashmakov2014; @Gelfer2015; @Vranich2015; @Jirka2016], radiation trapping of charged particles [@Lehmann2012; @Ji2014prl; @Gonoskov2014; @fedotov2014; @Kirk2016], etc. In this paper we focus on laser-plasma interaction in the hole-boring (HB) regime, in which the light pressure pushes the plasma into the target [@Kruer1975; @Wilks1992]. The hole-boring front can be introduced as a plasma-vacuum interface propagating towards the target. The front separates the vacuum region from the high density plasma. The HB front structure is as follows. The laser pressure pushes the electrons ahead, thereby forming a sheath with unshielded ions and a thin, dense electron layer. Reflection and absorption of the laser light by the electron layer provides efficient laser pressure. The charge separation generates a strong longitudinal electric field that, on the one hand, accelerates the ions towards the target and, on the other hand, suppresses the electron acceleration by the laser pressure. Laser radiation and the plasma ions contribute most to the energy-momentum budget.
The HB front velocity can be derived from the equation for the energy-momentum flux balance [@Kruer1975; @Schlegel2009; @Robinson2009] $$\begin{aligned} v_{HB} & = & \frac{c}{1+\mu},\label{vhb}\end{aligned}$$ where $$\begin{aligned} \mu & = & \frac{1}{a_{0}}\sqrt{\frac{Mn_{i}}{mn_{cr}}},\label{mu}\end{aligned}$$ $a_{0}=eE/(mc\omega_{L})$ is the normalized laser field strength, $n_{cr}=m\omega_{L}^{2}/\left(4\pi e^{2}\right)$ is the critical plasma density, $n_{i}$ is the density of the plasma ions, $\omega_{L}$ is the laser frequency, $M$ is the ion mass, $c$ is the speed of light, and $m$ and $e>0$ are the electron mass and charge, respectively. It follows from Eqs. (\[vhb\]) and (\[mu\]) that the HB front velocity increases with increasing laser intensity and decreasing plasma density. The electrons in the laser field can emit high energy photons, and if the laser intensity is high enough then the portion of the laser energy converted into gamma-ray energy is large [@Nerush2014], so that the fluxes of the emitted gamma-photons have to be taken into account in the energy-momentum flux budget [@Nerush2015]. It is demonstrated [@Nerush2015; @Capdessus2015] that the efficient generation of gamma-rays reduces the laser reflection and the HB front velocity. Another effect accompanying the ultrahigh intensity laser-solid interaction is electron-positron pair creation [@Ridgers2012; @Kirk2013; @Nerush2015]. The pairs can be created via the Breit-Wheeler process. Avalanche-like production of electron-positron pairs and gamma photons is possible during QED cascading [@Bell2008; @Fedotov2010]. A cascade develops as a sequence of elementary QED processes: photon emission by the electrons and positrons in the laser field alternates with pair production due to photon decay. A cloud or cushion of pair plasma in the laser pulse in front of the target has been observed in numerical simulations [@Ridgers2012].
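Equations (\[vhb\]) and (\[mu\]) can be checked against the simulation parameters used later in the paper ($a_0 = 1840$, fully ionised diamond with $n_i/n_{cr} = 158$). The Python sketch below assumes carbon ions with $M/m = 12 \times 1836$ (the round proton-to-electron mass ratio is an assumption of the sketch); it reproduces $\mu \approx 1$ and $v_{HB} \approx c/2$.

```python
# Numerical check of Eqs. (vhb)-(mu) for the hole-boring front velocity.
from math import sqrt

def hb_velocity(a0, mass_ratio, ni_over_ncr):
    """Return (v_HB in units of c, mu) from Eqs. (vhb) and (mu)."""
    mu = sqrt(mass_ratio * ni_over_ncr) / a0   # Eq. (mu): (1/a0) sqrt(M n_i / m n_cr)
    return 1.0 / (1.0 + mu), mu                # Eq. (vhb): v_HB = c / (1 + mu)

# a0 = 1840, carbon ions (M/m assumed = 12 * 1836), n_i / n_cr = 158:
v, mu = hb_velocity(a0=1840, mass_ratio=12 * 1836, ni_over_ncr=158)
print(round(mu, 2), round(v, 2))  # 1.01 0.5

# Lower intensity, a0 = 1000: larger mu, hence a slower hole-boring front.
v2, mu2 = hb_velocity(a0=1000, mass_ratio=12 * 1836, ni_over_ncr=158)
```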
As the number of pairs becomes large, there is back reaction of the self-generated pair plasma on the laser-solid interaction. It has been demonstrated [@Ridgers2012; @Nerush2015] that the produced pair plasma dramatically enhances laser field absorption and gamma-ray emission, thereby reducing the HB front velocity. The pair motion in the combined laser and plasma fields with radiation reaction is rather complex, which makes analytical treatment of the pair plasma kinetics difficult. An analytical model for the pair cushion in the nonlinear regime, when the reflection of the laser pulse is strongly suppressed by the self-generated pair plasma, has been recently proposed [@Kirk2013]. In our work we study the pair production in the regime when the number of the produced pairs is not sufficient to suppress laser reflection and to affect the laser-foil interaction. This is the case, for example, for the interaction between an extremely intense laser pulse and a thin foil, or for the early stage of the laser interaction with a thick solid target. The results of three-dimensional particle-in-cell Monte Carlo (3D PIC-MC) simulations demonstrating the HB effect in the interaction between a circularly polarized laser pulse and a foil are shown in Fig. \[interaction\]. PIC-MC simulations including emission of hard photons and electron-positron pair production allow us to analyze the HB process at extremely high intensities. A similar numerical approach has been used in a number of works (see, e.g., [@Ridgers2012; @Vranich2015]). To distill the physics of pair production and pair dynamics we consider extremely intense laser pulses. The simulation box is $17.5\lambda\times25\lambda\times25\lambda$, corresponding to the grid size $670\times125\times125$; the time step is $0.005\lambda/c$, where $\lambda$ is the laser wavelength.
In the simulation a quasi-rectangular ($11.4\lambda\times23\lambda\times23\lambda$) circularly polarized laser pulse of intensity $I_{L}=2.75\times10^{24}$ $\mathrm{W/cm^{2}}$ ($a_{0}=1000$, $\lambda=1\,\mathrm{\mu m}$) or $I_{L}=9.3\times10^{24}$ $\mathrm{W/cm^{2}}$ ($a_{0}=1840$, $\lambda=1\,\mathrm{\mu m}$) interacts with a diamond foil ($n_{e}=6n_{i}=1.1\times10^{24}\,\mathrm{cm^{-3}}$, $n_{i}/n_{cr}=158$). The shape of the laser pulse is approximated as follows $$\begin{aligned} E(x) & \propto & \frac{d}{dx}\left\{ \sin x\cos^{2}\left[\frac{\pi\left(x-x_{s}\right)^{4}}{2x_{s}^{4}}\right]\right\} ,\label{pulse}\end{aligned}$$ where $x_{s}=5.7\lambda$ (the pulse duration is about $38$ fs). The pulse has an almost constant amplitude in the central area and falls off rapidly at a distance $x_{s}$ from the pulse center. For $a_{0}=1840$ the parameter $\mu=1$ and the velocity of the HB front is half the speed of light. It is seen from Fig. \[interaction\] that the plasma is pushed into the foil and a thin layer of electron-positron plasma is produced. The longitudinal phase space of the positrons produced in the laser-foil interaction is shown in Fig. \[xvx-a1000-t6\] for two values of $a_{0}$ ($a_{0}=1000\mbox{ and }a_{0}=1840$). In the high intensity regime ($a_{0}=1840$) the positron distribution is strongly localized in the longitudinal phase space.
In the low intensity regime ($a_{0}=1000$) the positron distribution is sawtooth-like. ![The distribution of the laser intensity (orange color), the electron density (gray color) and the positron density (red color) in the $x-y$ plane at $z=0$, $t=4\lambda/c$ for $a_{0}=1840$.[]{data-label="interaction"}](fig1.pdf){width="8cm"} ![The positron distribution in the plane $x-v_{x}$, for (a) $a_{0}=1000$, $t=6.0\lambda/c$ and (b) $a_{0}=1840$, $t=3.0\lambda/c$.[]{data-label="xvx-a1000-t6"}](fig2.pdf){width="8cm"} The electron-positron pair production can be roughly divided into three stages. At the initial stage the electron-positron pairs are produced from the photons emitted by the foil electrons. This stage can be described as follows. The laser pulse propagating in the positive direction of the $x$-axis is reflected by the dense electron layer at the HB front. The layer electrons in the laser field emit a number of hard photons propagating in the same direction. The HB front outruns the photons emitted at large angles to the $x$-axis, so these photons subsequently move in the vacuum region in the field of the incident and reflected laser radiation. In the HB front reference frame (“HB-frame”) these photons, after escaping from the plasma, move in the vacuum region in the negative direction of the $x$-axis in the field of the circularly polarized standing wave. Field structures close to a counter-propagating wave or a standing wave are efficient for pair creation [@Bell2008; @Fedotov2010; @Nerush2011], so the photons decay and produce electron-positron pairs. At the second stage the number of pairs becomes so large that the number of high-energy photons emitted by the pairs exceeds the number of the external photons emitted by the foil electrons. In this case a self-sustained QED cascade develops in the standing wave. At the final, third stage the self-generated electron-positron plasma reacts back on the laser-foil interaction.
The electron-positron plasma becomes so dense that a significant part of the laser energy is absorbed by the pairs. In this paper we focus on the first two stages. Field structure and pair production in the vacuum region ======================================================== The incident laser field in the vacuum region can be approximated by a circularly polarized plane wave propagating along the $x$-axis $$\begin{aligned} \mathbf{E}_{i} & =a_{0}\frac{mc\omega_{L}}{e}\left(0,\cos\varPhi,\sin\varPhi\right),\label{CPinLF}\\ \mathbf{B}_{i} & =a_{0}\frac{mc\omega_{L}}{e}\left(0,-\sin\varPhi,\cos\varPhi\right),\label{CPinLF2}\end{aligned}$$ where $\varPhi=\omega_{L}x/c-\omega_{L}t$. The incident electromagnetic field in the HB-frame can be calculated with a Lorentz transformation $$\begin{aligned} \mathbf{E}_{i}^{\prime} & =a_{0}\left(0,\cos\left(x^{\prime}-t^{\prime}\right),\sin\left(x^{\prime}-t^{\prime}\right)\right),\label{CPinHBF}\\ \mathbf{B}_{i}^{\prime} & =a_{0}\left(0,-\sin\left(x^{\prime}-t^{\prime}\right),\cos\left(x^{\prime}-t^{\prime}\right)\right),\end{aligned}$$ where the prime symbol marks quantities in the HB-frame. In this Section we use dimensionless units, normalizing the time to $1/\omega^{\prime}$, the length to $c/\omega^{\prime}$, the momentum to $mc$, and the field amplitude to $mc\omega^{\prime}/e$, where $\omega^{\prime}=\omega_{L}\gamma_{HB}\left(1-v_{HB}\right)$ is the frequency of the incident wave in the HB-frame and $\gamma_{HB}^{-2}=1-v_{HB}^{2}$. At the HB front position $x^{\prime}=0$ the boundary condition is $E_{y}^{\prime}\left(x^{\prime}=0\right)=E_{z}^{\prime}\left(x^{\prime}=0\right)=0$, where perfect reflection in the HB-frame is assumed.
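The normalization above rests on the relativistic Doppler shift; a small sketch of the corresponding relations (illustrative helper functions, not part of the simulation code):

```python
import numpy as np

def hb_frame_frequency(v_hb, omega_L=1.0):
    """Frequency of the incident wave in the HB-frame,
    omega' = omega_L * gamma_HB * (1 - v_HB), with v_HB in units of c.
    Algebraically this equals the Doppler factor sqrt((1 - v)/(1 + v))."""
    gamma_hb = 1.0 / np.sqrt(1.0 - v_hb ** 2)
    return omega_L * gamma_hb * (1.0 - v_hb)

def standing_wavelength_lab(v_hb, lam=1.0):
    """Wavelength of the HB-frame standing wave expressed through lab-frame
    quantities: lambda' = lambda / (gamma_HB * (1 - v_HB))."""
    return lam / hb_frame_frequency(v_hb, 1.0)
```

For $v_{HB}=1/2$ this gives $\omega^{\prime}=\omega_{L}/\sqrt{3}$ and $\lambda^{\prime}=\sqrt{3}\,\lambda$.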
The reflected laser radiation can be approximated as follows $$\begin{aligned} \mathbf{E}_{r}^{\prime} & =a_{0}\left(0,-\cos\left(x^{\prime}+t^{\prime}\right),\sin\left(x^{\prime}+t^{\prime}\right)\right),\label{EriHBF}\\ \mathbf{B}_{r}^{\prime} & =a_{0}\left(0,\sin\left(x^{\prime}+t^{\prime}\right),\cos\left(x^{\prime}+t^{\prime}\right)\right).\label{BriHBF}\end{aligned}$$ In the laboratory frame the reflected wave takes the form $$\begin{aligned} \mathbf{E}_{r} & =a_{0}\omega_{r}\left(0,-\cos\left(\omega_{r}x+\omega_{r}t\right),\sin\left(\omega_{r}x+\omega_{r}t\right)\right),\label{ErnLF}\\ \mathbf{B}_{r} & =a_{0}\omega_{r}\left(0,\sin\left(\omega_{r}x+\omega_{r}t\right),\cos\left(\omega_{r}x+\omega_{r}t\right)\right),\label{BRinLF}\end{aligned}$$ where $$\begin{aligned} \omega_{r} & = & \omega_{L}\frac{1-v_{HB}}{1+v_{HB}},\label{wr}\end{aligned}$$ is the frequency of the reflected wave in the laboratory frame. As the reflection coefficient is taken to be equal to $1$ in the HB-frame, a standing wave is generated in the vacuum region: $$\begin{aligned} \mathbf{E}^{\prime} & =2a_{0}\left(0,\sin x^{\prime}\sin t^{\prime},\sin x^{\prime}\cos t^{\prime}\right),\label{Ehbf}\\ \mathbf{B}^{\prime} & =2a_{0}\left(0,\cos x^{\prime}\sin t^{\prime},\cos x^{\prime}\cos t^{\prime}\right).\label{Bhbf}\end{aligned}$$ The wavelength of the standing wave in the HB-frame is $$\begin{aligned} \lambda^{\prime} & =2\pi=\frac{\lambda}{\gamma_{HB}\left(1-v_{HB}\right)}.\label{lhb}\end{aligned}$$ The probability rate for photon emission by ultra-relativistic electrons and positrons in an electromagnetic field and the probability rate for electron-positron pair production via photon decay are given, respectively, by the formulas [@Baier1998] $$\begin{aligned} W_{rad} & = & \frac{\alpha a_{S}}{\varepsilon_{e}}\int_{0}^{\infty}dx\frac{5x^{2}+7x+5}{3^{3/2}\pi(1+x)^{3}}K_{\frac{2}{3}}\left(\frac{2x}{3\chi_{e}}\right),\label{Wr1}\\ W_{rad} & \approx & \frac{5\alpha a_{S}}{2\sqrt{3}\pi\varepsilon_{e}}\chi_{e},\;\chi_{e}\ll1,\label{Wr2}\\ W_{pair} & = & \frac{\alpha a_{S}3^{-3/2}}{\pi\varepsilon_{ph}}\int_{0}^{1}dx\frac{9-x^{2}}{1-x^{2}}K_{\frac{2}{3}}\left[\frac{8\chi_{ph}^{-1}}{3\left(1-x^{2}\right)}\right],\label{Wp1}\\ W_{pair} & \approx & \frac{3^{3/2}\alpha}{2^{9/2}}\frac{a_{S}\chi_{ph}}{\varepsilon_{ph}}\exp\left(-\frac{8}{3\chi_{ph}}\right),\;\chi_{ph}\ll1,\label{Wp2}\end{aligned}$$ where $$\begin{aligned} \chi_{e,ph} & = & \frac{1}{a_{S}}\sqrt{\left(\varepsilon_{e,ph}\mathbf{E}+\mathbf{p}_{e,ph}\times\mathbf{B}\right)^{2}-\left(\mathbf{p}_{e,ph}\cdot\mathbf{E}\right)^{2}},\label{chi_general}\end{aligned}$$ is the key QED parameter determining the photon emission ($\chi_{e}$) and the pair production ($\chi_{ph}$) [@Landau4], $a_{S}=eE_{S}/(mc\omega^{\prime})=mc^{2}/\hbar\omega^{\prime}$ is the normalized QED critical field, $E_{S}=m^{2}c^{3}/(\hbar e)$, $\varepsilon_{e,ph}$ is the energy of the electron (positron) and photon, respectively, $\mathbf{p}_{e,ph}$ is the momentum of the electron (positron) and photon, respectively, and $\hbar$ is the reduced Planck constant. The dependence of the pair production probability on $\chi$ is sharp in the limit $\chi\ll1$. Therefore we can suppose that most of the pairs are produced near the spacetime points where $\chi$ peaks. If the photon momentum is $\mathbf{p}^{\prime}=\varepsilon_{ph}\left(\cos\alpha,\sin\alpha\cos\beta,\sin\alpha\sin\beta\right)$ then $\chi$ for the circularly polarized standing wave given by Eqs. (\[Ehbf\]), (\[Bhbf\]) takes the following form in the HB-frame after some trigonometric transformations $$\begin{aligned} \chi_{ph} & = & \frac{\varepsilon_{ph}a_{0}}{a_{S}}\sqrt{1-\sin^{2}\phi\sin^{2}\alpha},\label{chi}\end{aligned}$$ where $\phi=\beta-t^{\prime}$. It follows from Eq. (\[chi\]) that $\chi$ peaks at $\phi=\pm\pi n$ or $\alpha=\pm\pi l$, $n,l=0,1,2,\ldots$ and does not depend on $x^{\prime}$.
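Eq. (\[chi\]) is easy to probe numerically; a short sketch (dimensionless, with function and argument names chosen here for illustration):

```python
import numpy as np

def chi_ph(eps_ph, a0, a_s, phi, alpha):
    """Quantum parameter of a photon in the standing wave, Eq. (chi), in the
    HB-frame: phi = beta - t' and alpha is the angle to the x-axis."""
    return (eps_ph * a0 / a_s) * np.sqrt(1.0 - np.sin(phi) ** 2 * np.sin(alpha) ** 2)

# chi is maximal (= eps_ph*a0/a_S) when phi = pi*n or alpha = pi*l, and vanishes
# for a transversely propagating photon (alpha = pi/2) at the moment phi = pi/2.
phi_grid = np.linspace(0.0, 2.0 * np.pi, 721)
alpha_grid = np.linspace(0.0, np.pi, 361)
chi_max = chi_ph(1.0, 1.0, 1.0, phi_grid[:, None], alpha_grid[None, :]).max()
```

Scanning over $(\phi,\alpha)$ confirms that the maximum equals $\varepsilon_{ph}a_{0}/a_{S}$ regardless of $x^{\prime}$.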
For a given value of $\beta$ there is always a value of $t^{\prime}$ at which $\phi=\pm\pi n$. If the photon emission is axially symmetric ($\beta$ is uniformly distributed from $0$ to $2\pi$) then we can suppose that the pair number decreases with increasing distance from the HB front towards the vacuum region as a result of photon flux attenuation. Particle motion in a standing circularly polarized wave ======================================================= In this Section we study the motion of the electrons and positrons in the vacuum field in the HB-frame. The field can be approximated by the circularly polarized standing wave defined by Eqs. (\[Ehbf\]) and (\[Bhbf\]). Our treatment is based on the classical approach in order to obtain analytical solutions. In the QED approach the particle momentum changes suddenly because of the recoil caused by photon emission. However, even in the limit $\chi\gg1$ the particle energy is much greater than the mean change in its energy due to emission of a single photon: $\left\langle \varepsilon_{ph}\right\rangle <I_{rad}(\chi\rightarrow\infty)/W_{rad}(\chi\rightarrow\infty)\approx0.25\ll\varepsilon_{e}$, where $I_{rad}$ is the total intensity of the photon emission. It is demonstrated by numerical simulations [@Gonoskov2014; @Jirka2016] that the spatial distributions of the electrons calculated in the classical and QED approaches are similar even for extremely strong electromagnetic fields.
In the classical approach the positron motion is governed by the equations $$\begin{aligned} \frac{d\mathbf{p}}{dt} & =\mathbf{F}_{L}-\mathbf{v}F_{R},\label{em1}\\ \frac{d\mathbf{r}}{dt} & =\frac{\mathbf{p}}{\gamma},\label{em2}\\ \mathbf{F}_{L} & =\mathbf{E}+\mathbf{v}\times\mathbf{B},\\ F_{R} & =\mu a_{S}^{2}\chi_{e}^{2}G\left(\chi_{e}\right),\nonumber \\ \chi_{e}^{2} & =a_{S}^{-2}\gamma^{2}\left[\left(\mathbf{E}+\mathbf{v}\times\mathbf{B}\right)^{2}-\left(\mathbf{v}\cdot\mathbf{E}\right)^{2}\right],\label{em3}\end{aligned}$$ where $F_{R}/G\left(\chi_{e}\right)$ is the leading term of the radiation reaction force in the classical limit [@Landau2], $\mu=2\omega^{\prime}e^{2}/\left(3mc^{3}\right)$, and $G\left(\chi_{e}\right)=I_{rad}\left(\chi_{e}\right)/I_{rad}\left(\chi_{e}=0\right)$ is the QED factor introduced in order to take into account the decrease of the radiation power and of the radiation reaction force in the quantum limit with increasing $\chi_{e}$ [@Bell2008; @Bulanov2013; @Esirkepov2015]. For the sake of convenience, hereinafter the prime symbol is omitted for the quantities in the HB-frame. It is shown for a rotating electric field [@Zeldovich1975; @Bulanov2011-1] that there is a stationary trajectory attracting the other trajectories. The field has to be strong enough so that the electrons and positrons move in the radiation reaction regime. Regardless of the initial momentum the positron quickly reaches the stationary trajectory, which is rotation at the field frequency. The phase shift between the field and the positron velocity is set so that the work done by the electric field is completely compensated by the radiative losses.
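The attraction to the stationary trajectory can be illustrated by integrating Eqs. (\[em1\])-(\[em3\]) for the simplest case of a rotating electric field ($B_{0}=0$, $G=1$); the amplitude $E_{0}$ and the value of $\mu$ below are illustrative choices (roughly corresponding to $\lambda^{\prime}\approx1\,\mathrm{\mu m}$), not taken from the simulations:

```python
import numpy as np

E0, MU = 1000.0, 1.2e-8   # illustrative: E0 in mc*omega'/e, mu = 2*omega'*e^2/(3*m*c^3)

def deriv(t, p):
    """dp/dt = F_L - v*F_R for a rotating electric field E = E0*(0, sin t, cos t),
    with G = 1 so that F_R = mu*gamma^2*[E^2 - (v.E)^2]."""
    gamma = np.sqrt(1.0 + p @ p)
    v = p / gamma
    E = E0 * np.array([0.0, np.sin(t), np.cos(t)])
    f_r = MU * gamma ** 2 * (E @ E - (v @ E) ** 2)
    return E - v * f_r

def rk4(p, t, dt, n):
    """Classical fourth-order Runge-Kutta integration of the momentum."""
    for _ in range(n):
        k1 = deriv(t, p)
        k2 = deriv(t + dt / 2, p + dt / 2 * k1)
        k3 = deriv(t + dt / 2, p + dt / 2 * k2)
        k4 = deriv(t + dt, p + dt * k3)
        p = p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return p

p_final = rk4(np.zeros(3), 0.0, 1.0e-3, 20000)   # integrate to t = 20 (~3 field periods)
gamma_final = np.sqrt(1.0 + p_final @ p_final)

# stationary energy from the ultra-relativistic balance E0^2 - gamma^2 = mu^2*gamma^8
grid = np.linspace(1.0, E0, 200000)
gamma_model = grid[np.argmin(np.abs(E0 ** 2 - grid ** 2 - MU ** 2 * grid ** 8))]
```

Starting from rest, the positron settles onto the rotating stationary trajectory, and its Lorentz factor relaxes to the root of the energy balance regardless of the initial conditions.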
We extend the Zeldovich model [@Zeldovich1975] to the configuration of rotating homogeneous electric and magnetic fields which are parallel to each other: $$\begin{gathered} \mathbf{E}=E_{0}(0,\sin t,\cos t),\quad\mathbf{B}=B_{0}(0,\sin t,\cos t).\label{rf}\end{gathered}$$ The electric and magnetic fields rotate in the $y-z$ plane with unit frequency $\omega^{\prime}=1$. As in the Zeldovich model, we assume that the positron rotates in the $y-z$ plane with constant velocity $v_{\perp}$ and frequency $\omega^{\prime}=1$, but additionally moves along the $x$-axis with constant velocity $v_{x}$. Balancing the forces along the $x$-axis and in the $y-z$ plane (along the centrifugal force and along the transversal velocity, respectively) we get $$\begin{aligned} \frac{dp_{x}}{dt} & = & v_{\perp}B_{0}\sin\varphi-v_{x}F_{R}=0,\label{z1}\\ \frac{dp_{y}}{dt} & = & E_{0}\sin\varphi+B_{0}v_{x}\cos\varphi=\gamma v_{\perp},\label{z2}\\ \frac{dp_{z}}{dt} & = & E_{0}\cos\varphi-v_{x}B_{0}\sin\varphi-v_{\perp}F_{R}=0,\label{z3}\\ F_{R} & = & \mu G\left(\chi_{e}\right)W^{2}\gamma^{2}\left(1-v_{\perp}^{2}\cos^{2}\varphi\right),\label{z4}\\ \chi_{e} & = & a_{S}^{-1}W\gamma\sqrt{1-v_{\perp}^{2}\cos^{2}\varphi},\\ \frac{d\mathbf{r}}{dt} & = & \frac{\mathbf{p}}{\gamma},\label{z5}\end{aligned}$$ where it is assumed that the $z$-axis is directed along the transverse component of the positron velocity, $\mathbf{v}_{\perp}$, so that the centrifugal force is directed along the $y$-axis; $\varphi$ is the angle between $\mathbf{v}_{\perp}$ and $\mathbf{E}$; $\gamma^{-2}=1-v_{x}^{2}-v_{\perp}^{2}$ is the inverse squared Lorentz factor of the positron; and $W^{2}=E_{0}^{2}+B_{0}^{2}$. The first equation represents the balance between the Lorentz force and the radiation reaction force along the $x$-axis; the second one represents the balance between the centrifugal force and the Lorentz force. For ultra-relativistic motion $\gamma\gg1$ ($v_{x}^{2}\approx1-v_{\perp}^{2}$) Eqs.
(\[z1\])-(\[z4\]) can be reduced to the system of equations for $\gamma$, $v_{\perp}$ and $\cos\varphi$: $$\begin{aligned} v_{\perp}B_{0} & = & \sqrt{\frac{1-v_{\perp}^{2}}{1-\cos^{2}\varphi}}F_{R}\left(\gamma,v_{\perp},\cos\varphi\right),\label{zsys1}\\ \gamma v_{\perp} & = & E_{0}\sqrt{1-\cos^{2}\varphi}+B_{0}\sqrt{1-v_{\perp}^{2}}\cos\varphi,\label{zsys2}\\ v_{\perp}E_{0}\cos\varphi & = & F_{R}\left(\gamma,v_{\perp},\cos\varphi\right),\label{zsys3}\end{aligned}$$ where $F_{R}$ is given by Eq. (\[z4\]). The third equation can be derived by summing Eq. (\[z1\]) multiplied by $v_{x}$ and Eq. (\[z3\]) multiplied by $v_{\perp}$. It demonstrates that the radiative losses are completely compensated by the work done by the electric field, hence $F_{R}\leq E_{0}$. Note that the useful relations $\gamma v_{\perp}E_{0}=W^{2}\sin\varphi$ and $v_{x}=\left(B_{0}/E_{0}\right)\tan\varphi$ can be derived from Eqs. (\[zsys1\])-(\[zsys3\]). In the high-field limit the radiation reaction is strong, so $F_{R}\approx E_{0}$, $\varphi\ll1$, $v_{x}\ll1$, and the solution of Eqs. (\[zsys1\])-(\[zsys3\]) can be written as follows $$\begin{aligned} \varphi & \approx & \frac{E_{0}\gamma}{W^{2}}\ll1,\label{zs1}\\ v_{x} & \approx & \frac{B_{0}}{E_{0}}\varphi\ll1,\label{zs2}\\ \chi_{e} & \approx & \frac{\gamma^{2}}{a_{S}},\label{zs3}\\ \gamma=\varepsilon_{e} & \approx & \left[\frac{E_{0}}{\mu G\left(\chi_{e}\right)}\right]^{1/4},\label{zsg}\end{aligned}$$ where it is assumed that $B_{0}\lesssim E_{0}$. It follows from Eqs. (\[zs1\]) and (\[zsg\]) that the radiation reaction regime corresponds to the condition $\epsilon_{R}\equiv E_{0}^{3}G\mu\gg1$. In order to write explicit expressions for $\varphi$, $v_{x}$ and $\gamma$ we have to solve for $\chi_{e}$ the equation following from Eqs. (\[zs3\]) and (\[zsg\]) $$\begin{aligned} \chi_{e}^{2}G\left(\chi_{e}\right) & = & \frac{E_{0}}{\mu a_{S}^{2}}.\label{chie}\end{aligned}$$ Eqs.
(\[zs1\]) and (\[zsg\]) are reduced to the formulas derived by Zeldovich [@Zeldovich1975] in the limit $B_{0}=0$ and $G=1$. A more accurate value of $\gamma$ (for arbitrary values of $\varphi$ and $\epsilon_{R}$, i.e. not only in the radiation reaction regime) can be found in the limit $B_{0}\ll E_{0}\epsilon_{R}$ ($v_{x}\ll1$) from the equation $$\begin{aligned} E_{0}^{2}-\gamma^{2}\frac{E_{0}^{2}}{W^{2}} & \approx & \mu^{2}G^{2}\left(\chi_{e}\right)\gamma^{8},\label{zgamma}\\ \chi_{e} & \approx & \frac{\gamma^{2}}{a_{S}}.\end{aligned}$$ It should be noted that Eq. (\[zgamma\]) for $\gamma$ is similar to that for the electron energy in the rotating electric field [@Zeldovich1975] and in the running circularly polarized wave [@Bashinov2014; @Brady2014] for $G\left(\chi_{e}\right)=1$. In the limit of the radiation reaction regime ($\epsilon_{R}\gg1$) Eq. (\[zgamma\]) reduces to Eq. (\[zsg\]). In the opposite limit, when the radiation reaction can be neglected, $\gamma\approx W$, and $\gamma\approx E_{0}$ for $B_{0}=0$, in agreement with the known results [@Zeldovich1975]. Combining Eqs. (\[zs3\]) and (\[zgamma\]), the equation for $\chi_{e}$ can be derived $$\begin{aligned} \frac{G^{2}\left(\chi_{e}\right)\chi_{e}^{4}}{W^{2}a_{S}^{-1}-\chi_{e}} & \approx & \frac{E_{0}^{2}}{\mu^{2}W^{2}a_{S}^{3}}.\label{zchi}\end{aligned}$$ In the limit $W^{2}a_{S}^{-1}\gg\chi_{e}$ ($\epsilon_{R}\gg1$) Eq. (\[zchi\]) reduces to Eq. (\[chie\]). The description based on the averaged radiation reaction force with the QED factor $G\left(\chi_{e}\right)$ can be used when the number of photons emitted during the characteristic time of the field ($1/\omega^{\prime}$) is large: $W_{rad}\gg1$, where $\tau_{rad}\sim W_{rad}^{-1}$ is the characteristic time of the photon emission. The model is not valid when the electric field is too weak and Eq. (\[zsys3\]) cannot be fulfilled; in other words, the radiative losses have to be compensated by the work done by the electric field.
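Eq. (\[chie\]) is transcendental in $\chi_{e}$ but its left-hand side is monotonic, so it can be solved, e.g., by bisection. A sketch with illustrative values of $E_{0}$, $\mu$ and $a_{S}$ (roughly $E_{0}=2a_{0}$ for $a_{0}=1000$ and $\lambda^{\prime}\approx1\,\mathrm{\mu m}$), using the $G(\chi_{e})$ fit of Ref. [@Esirkepov2015]:

```python
import numpy as np

def G(chi):
    """QED suppression factor, fit from Ref. [Esirkepov2015]."""
    return (1.0 + 18.0 * chi + 69.0 * chi ** 2
            + 73.0 * chi ** 3 + 5.804 * chi ** 4) ** (-1.0 / 3.0)

def solve_chie(e0, mu, a_s, use_g=True, iters=200):
    """Bisection solve of Eq. (chie): chi^2 * G(chi) = E0 / (mu * a_S^2)."""
    rhs = e0 / (mu * a_s ** 2)
    g = G if use_g else (lambda c: 1.0)
    lo, hi = 1.0e-6, 1.0e3
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid ** 2 * g(mid) < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E0, MU, A_S = 2000.0, 1.2e-8, 4.1e5       # illustrative values near a magnetic node
chi_qed = solve_chie(E0, MU, A_S, use_g=True)
chi_cl = solve_chie(E0, MU, A_S, use_g=False)
gamma_qed = np.sqrt(chi_qed * A_S)        # from Eq. (zs3): chi_e = gamma^2 / a_S
gamma_cl = np.sqrt(chi_cl * A_S)
```

With the QED suppression included, $\chi_{e}$ grows from about $1$ to about $5$ and $\gamma$ roughly doubles relative to the $G=1$ case, in line with the behaviour of the numerical solutions discussed below.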
It follows from the obtained result that the stationary trajectory in the rotating electric and magnetic fields is helical: the positron drifts along the $x$-axis with the constant velocity $v_{x}$ and rotates in the $y-z$ plane with the phase shift $\varphi$ between the field and the transverse component of the velocity. The stationary trajectory $(\mathbf{r}^{z},\mathbf{p}^{z})$ in the radiation reaction regime ($\epsilon_{R}\gg1$) can be approximated as follows $$\begin{aligned} x^{z} & \approx & v_{x}t,\label{xz}\\ y^{z} & \approx & -\cos(t+\varphi),\label{yz}\\ z^{z} & \approx & \sin(t+\varphi),\label{zz}\\ p_{x}^{z} & = & v_{x}\gamma^{z}\approx\frac{B_{0}}{W^{2}}\gamma^{z},\label{pxz}\\ p_{y}^{z} & \approx & \gamma^{z}\sin(t+\varphi),\label{pyz}\\ p_{z}^{z} & \approx & \gamma^{z}\cos(t+\varphi),\label{pzz}\\ \gamma^{z} & \approx & \left[\frac{E_{0}}{\mu G\left(\chi_{e}\right)}\right]^{1/4},\end{aligned}$$ where $\varphi$ and $v_{x}$ are given by Eqs. (\[zs1\]) and (\[zs2\]), respectively. The positron trajectory given by Eqs. (\[zs1\])-(\[zsg\]) is calculated for homogeneous electric and magnetic fields. However, the solution can also be used to describe the positron motion in the standing circularly polarized wave far from the electric node (the antinode of $\mathbf{B}$). This is because the motion of the positron along the $x$-axis is slow ($v_{x}\ll1$), so the positron has enough time to settle onto the stationary trajectory given by Eqs. (\[xz\])-(\[pzz\]) and determined by the local values of the fields. The trajectory of the positron created near the magnetic node of the standing wave can be calculated by taking into account the dependence of $E_{0}$ and $B_{0}$ on $x$ in Eqs. (\[z1\])-(\[z5\]), where $E_{0}=2a_{0}\sin x$, $B_{0}=2a_{0}\cos x$ and $W=2a_{0}$.
The longitudinal coordinate can be found from the equation of motion $dx/dt=v_{x}$: $$\begin{aligned} \intop_{0}^{x^{z}}\frac{d\xi}{v_{x}\left(E_{0}\left(\xi\right),B_{0}\left(\xi\right)\right)} & = & t,\label{xzsw}\end{aligned}$$ where $v_{x}$ is the solution of Eqs. (\[zsys1\])-(\[zsys3\]). $v_{x}$ can be approximated by using Eq. (\[zs2\]) as follows: $$\begin{aligned} v_{x} & \approx & u\cos x\left|\sin x\right|^{1/4}\textrm{sign}\left(\sin x\right),\label{vx}\end{aligned}$$ where $u=\left(8a_{0}^{3}\mu G\left(\chi_{e}\right)\right)^{-1/4}$, $\chi_{e}$ is the solution of Eq. (\[zchi\]) for $E_{0}\approx2a_{0}$, and $\textrm{sign}\left(x\right)=-1$ for $x<0$, $\textrm{sign}\left(x\right)=0$ for $x=0$, $\textrm{sign}\left(x\right)=1$ for $x>0$. The dependence of $v_{x}$ on $x$ is shown in Fig. \[vx-fig\]. Therefore, the positron trajectory near the magnetic node of the standing wave takes the form $\mathbf{r}\approx\mathbf{r}_{0}+\mathbf{r}^{z}(t,E_{0}\left(x\left(t\right)\right),B_{0}\left(x\left(t\right)\right),\varphi\left(x\left(t\right)\right))$, $\mathbf{p}\approx\mathbf{p}^{z}(t,E_{0}\left(x\left(t\right)\right),B_{0}\left(x\left(t\right)\right),\varphi\left(x\left(t\right)\right))$, where the constant $\mathbf{r}_{0}$ is determined by the initial conditions. Note that such a constant is absent in the expression for $\mathbf{p}$ since all positrons located at the same position on the stationary trajectory have the same momentum. Evidently, for the electrons $v_{x}$ is the same as for the positrons, while $\mathbf{v}_{\perp}$ is opposite to that of the positrons. ![$v_{x}(x)$ calculated from Eq. (\[vx\]) for $u=1$.[]{data-label="vx-fig"}](vx2.pdf){width="8cm"} The equations of motion are solved numerically with the radiation reaction force for a positron initially near the magnetic node ($x(t=0)=0.497\pi$, $p(t=0)=0$) of the standing wave with $a_{0}=1000$. First we solve Eqs.
(\[em1\])-(\[em3\]) neglecting the suppression of the radiation reaction force ($G=1$). The values of $\gamma(t)$, $v_{x}(t)$ and $\chi_{e}(t)$ obtained from the numerical solution of the equations of motion and those estimated from Eqs. (\[zs1\])-(\[zsg\]) are shown in Fig. \[zeld\], where in the estimations $E_{0}=2a_{0}\sin x(t)$, $B_{0}=2a_{0}\cos x(t)$ and $x(t)$ is retrieved from the numerical solution. It is seen from Fig. \[zeld\](a) that the model prediction is in very good agreement with the numerical solution of the equations of motion. Better agreement is achieved (see Fig. \[zeld\](b)) when Eq. (\[zgamma\]) is used instead of Eq. (\[zsg\]). It is interesting to note that even near the electric node ($x\approx0$) the agreement is still fairly good. We also solve the equations of motion numerically for the positron with the same initial condition taking into account the QED suppression of the radiation reaction force, where the approximation $G(\chi_{e})\approx\left(1+18\chi_{e}+69\chi_{e}^{2}+73\chi_{e}^{3}+5.804\chi_{e}^{4}\right)^{-1/3}$ proposed in Ref. [@Esirkepov2015] is used. The values of $\gamma(t)$, $v_{x}(t)$ and $\chi_{e}(t)$ obtained from the numerical solution of the equations of motion and those estimated from Eqs. (\[zs1\])-(\[zs3\]), (\[chie\]) are shown in Fig. \[zeld-bula\](a), and those estimated from Eqs. (\[zs1\])-(\[zs3\]), (\[zchi\]) are shown in Fig. \[zeld-bula\](b). In the estimations $E_{0}=2a_{0}\sin x(t)$, $B_{0}=2a_{0}\cos x(t)$, where $x(t)$ is retrieved from the numerical solution of the equations of motion. It is seen from Fig. \[zeld-bula\](a) that the agreement between the quantities calculated numerically and the estimated ones is not as good as in the case $G=1$. The discrepancy is caused by strong radiation reaction suppression ($G\ll1$). It is seen from Fig.
\[muchiE3\] that the radiation reaction parameter $\epsilon_{R}(t)=\mu G\left(\chi_{e}\left(t\right)\right)E_{0}^{3}\left(t\right)$ determining the transition to the radiation reaction regime decreases by a factor of about $20$ when the suppression is taken into account. In this case the parameter is close to $5$, which is not sufficient to ensure the required accuracy of the approximation corresponding to the radiation reaction regime and described by Eqs. (\[zs1\])-(\[zs3\]), (\[chie\]). A significant improvement of the accuracy can be achieved when the more general Eq. (\[zchi\]) is used instead of Eq. (\[chie\]) (see Fig. \[zeld-bula\](b)). It follows from Fig. \[zeld-bula\] that $\gamma\sim1200$ and $\chi_{e}\sim4$ in the case when the radiation reaction suppression is included. The positron energy and the parameter $\chi_{e}$ are several times higher than those in the case $G=1$ (see Fig. \[zeld\]). The probability rate for photon emission given by Eq. (\[Wr1\]) is $W_{rad}\sim8$ for $\gamma\sim1200$ and $\chi_{e}\sim4$. Therefore the positron passing from the magnetic node to the electric one during $t_{int}\sim11$ emits about $N_{ph}\sim t_{int}/\tau_{rad}\sim t_{int}W_{rad}\sim90\gg1$ photons, and the approximation of the averaged radiation reaction force by means of the factor $G$ can be applied. ![The approximation $G=1$ (the QED suppression of the radiation reaction is neglected). $x(t)$ (solid black line 1), $\chi(t)$ (solid green line 2), $\gamma(t)/a_{0}$ (solid red line 3), $v_{x}(t)$ (solid blue line 4) calculated numerically by solving Eqs. (\[em1\])-(\[em3\]) for the positron with the initial condition $x(t=0)=0.497\pi$, $p(t=0)=0$ in the standing wave (Eqs. (\[Ehbf\]) and (\[Bhbf\])) with $a_{0}=1000$. $\chi(x(t))$ (dashed green line 2), $\gamma(x(t))/a_{0}$ (dashed red line 3), $v_{x}(x(t))$ (dashed blue line 4) are calculated from (a) Eqs. (\[zs1\])-(\[zsg\]) and (b) from Eqs.
(\[zs1\])-(\[zs3\]), (\[zgamma\]), where $E_{0}=2a_{0}\sin x(t)$, $B_{0}=2a_{0}\cos x(t)$. $x(t)$ is retrieved from the numerical solution and shown by the solid black line 1.[]{data-label="zeld"}](trajectorynoG.pdf){width="8cm"} ![The case when the QED suppression of the radiation reaction is taken into account. $x(t)$ (solid black line 1), $\gamma(t)/a_{0}$ (solid red line 2), $\chi(t)$ (solid green line 3), $v_{x}(t)$ (solid blue line 4) calculated numerically by solving Eqs. (\[em1\])-(\[em3\]) for the positron with the initial condition $x(t=0)=0.497\pi$, $p(t=0)=0$ in the standing wave (Eqs. (\[Ehbf\]) and (\[Bhbf\])) with $a_{0}=1000$. $\chi(x(t))$ (dashed green line 3), $\gamma(x(t))/a_{0}$ (dashed red line 2), $v_{x}(x(t))$ (dashed blue line 4) are calculated from (a) Eqs. (\[zs1\])-(\[zsg\]), (\[chie\]) and (b) from Eqs. (\[zs1\])-(\[zs3\]), (\[zchi\]), where $E_{0}=2a_{0}\sin x(t)$, $B_{0}=2a_{0}\cos x(t)$. $x(t)$ is retrieved from the numerical solution and shown by the solid black line 1.[]{data-label="zeld-bula"}](trajectoryG.pdf){width="8cm"} ![The radiation reaction parameter $\epsilon_{R}(t)=\mu G\left(\chi_{e}\left(t\right)\right)E_{0}^{3}\left(t\right)$ for the positron with the initial condition $x(t=0)=0.497\pi$, $p(t=0)=0$ in the standing wave (Eqs. (\[Ehbf\]) and (\[Bhbf\])) with $a_{0}=1000$ in the case when the QED suppression of the radiation reaction is taken into account (line 1) and in the approximation $G=1$ when the suppression is neglected (line 2).[]{data-label="muchiE3"}](muchiE3.pdf){width="8cm"} It follows from Eq. (\[vx\]) that the longitudinal velocity of the positrons and electrons created with small momentum in the standing circularly polarized wave is directed from the magnetic nodes ($x=\pm\pi\left(n+1/2\right)$, $n=0,1,2,...$, where $\mathbf{B}=0$ and the electric field amplitude peaks) to the electric ones ($x=\pm\pi n$, $n=0,1,2,...$, where $\mathbf{E}=0$ and the magnetic field amplitude peaks).
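The drift velocity of Eq. (\[vx\]) can be tabulated directly; a minimal sketch for $u=1$ (cf. Fig. \[vx-fig\]):

```python
import numpy as np

def v_x(x, u=1.0):
    """Longitudinal drift velocity of Eq. (vx):
    u * cos(x) * |sin(x)|^(1/4) * sign(sin(x))."""
    s = np.sin(x)
    return u * np.cos(x) * np.abs(s) ** 0.25 * np.sign(s)

x = np.linspace(-2.0 * np.pi, 2.0 * np.pi, 2001)
v = v_x(x)
# v_x vanishes both at the electric nodes (x = pi*n, where the |sin x|^(1/4)
# factor gives a steep crossing) and at the magnetic nodes (x = pi*(n + 1/2),
# where the cos x factor gives a smooth crossing), producing the sawtooth-like
# profile of Fig. [vx-fig].
```

The alternation of steep and smooth zero crossings is what makes the positron distribution in the $x-v_{x}$ plane sawtooth-like, as seen in the simulations below.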
Hence, the magnetic nodes are unstable for the positrons and electrons while the electric nodes are stable for them (see Fig. \[vx-fig\]). At the magnetic nodes the positrons and electrons perform circular motion in the rotating electric field. At the electric nodes the positrons and electrons move in the rotating magnetic field. This motion is complex and can be qualitatively presented as the superposition of a fast cyclotron rotation (with the rotation axis perpendicular to the $x$-axis) and a slow drift. The frequency of the cyclotron rotation in the magnetic field is much higher than the field frequency, $\omega_{B}\approx2a_{0}\omega^{\prime}/\gamma\gg\omega^{\prime}$, since the positrons and electrons move in the radiation reaction regime for $a_{0}>300$, so that $\gamma/a_{0}\sim\epsilon_{R}^{-1/4}\ll1$ (see Eq. (\[zsg\]) and Refs. [@Bashinov2014; @Brady2014; @Zeldovich1975]). When the number of the electron-positron pairs becomes large they produce more photons than those arriving from the electron layer. As a result the self-sustained QED cascade characterized by exponential growth of the pair number in time can develop. It is demonstrated [@Bashmakov2014] that the cascade growth rate is maximal at the magnetic nodes of the circularly polarized standing wave. However, as discussed above, the pair positions are unstable at the magnetic nodes and stable at the electric ones. The pair density profile is determined by the trade-off between the pair production effect and the pair drift. Therefore the density of the electron-positron plasma may peak at the electric and magnetic nodes, as pair production is most efficient at the magnetic nodes while the created pairs are attracted to the electric nodes. Numerical simulations ===================== The field of the incident wave can be retrieved from $E_{y}+B_{z}$ and the field of the reflected wave can be retrieved from $E_{y}-B_{z}$ calculated in the numerical simulations.
To compare the analytical results with the numerical ones it is convenient to use other dimensionless units, normalizing the time to $1/\omega_{L}$, the length to $c/\omega_{L}$, and the field amplitude to $mc\omega_{L}/e$. It follows from Eqs. (\[CPinLF\]), (\[CPinLF2\]), (\[ErnLF\]), (\[BRinLF\]) that $$\begin{aligned} E_{i,y}+E_{r,y}+B_{i,z}+B_{r,z} & =2a_{0}\cos\left(x-t\right),\label{FLF1}\\ E_{i,y}+E_{r,y}-B_{i,z}-B_{r,z} & =2a_{0}\omega_{r}\cos\left[\omega_{r}\left(x+t\right)\right].\label{FLF2}\end{aligned}$$ The HB front velocity and the frequency of the reflected wave can be estimated by using Eqs. (\[vhb\]), (\[mu\]) and (\[wr\]). Then for the simulation parameters we get $\mu=1.84$, $v_{HB}\approx0.35$, and $\omega_{r}\approx0.48$ for $a_{0}=1000$, while $\mu=1$, $v_{HB}=1/2$ and $\omega_{r}\approx0.33$ for $a_{0}=1840$, which is close to the simulation results: namely, from the periods of the wave $E_{y}-B_{z}$ (see Figs. \[fields1000\] and \[fields1840\]) we obtain $\omega_{r}\approx0.5$ for $a_{0}=1000$ and $\omega_{r}\approx0.4$ for $a_{0}=1840$. According to the model assumptions the reflection in the HB-frame is perfect, so the reflection coefficient in the laboratory frame is equal to $r=\max\left[(E_{y}-B_{z})/(E_{y}+B_{z})\right]=\omega_{r}$. This is also in good agreement with the results of the numerical simulations (see Figs. \[fields1000\] and \[fields1840\]). Therefore the approximation of the structure of the electromagnetic field in the vacuum region (in front of the foil) as a standing wave can be used for estimations.
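The quoted estimates follow directly from Eq. (\[wr\]); a small sketch taking the HB front velocities as input (the velocities themselves come from Eqs. (\[vhb\]) and (\[mu\]), which are not reproduced here):

```python
def reflected_frequency(v_hb):
    """Eq. (wr): frequency of the reflected wave in the lab frame, in units of
    omega_L. With perfect reflection in the HB-frame this also equals the
    lab-frame reflection coefficient r = max[(E_y - B_z)/(E_y + B_z)]."""
    return (1.0 - v_hb) / (1.0 + v_hb)

print(reflected_frequency(0.35))  # a0 = 1000
print(reflected_frequency(0.50))  # a0 = 1840
```

For $v_{HB}\approx0.35$ and $v_{HB}=0.5$ this reproduces $\omega_{r}\approx0.48$ and $\omega_{r}\approx0.33$, respectively.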
![(a) $E_{y}+B_{z}$ and (b) $E_{y}-B_{z}$ as a function of $x$ in front of the foil for $a_{0}=1000$ at $t=6\lambda/c$.[]{data-label="fields1000"}](Fig4.pdf){width="8cm"} ![(a) $E_{y}+B_{z}$ and (b) $E_{y}-B_{z}$ as a function of $x$ in front of the foil for $a_{0}=1840$ at $t=4\lambda/c$.[]{data-label="fields1840"}](fig5.pdf){width="8cm"} The positron number as a function of time is shown in Fig. \[N\]. It follows from Fig. \[N\] that the exponential growth representing QED cascading starts almost from the beginning. The cascade develops in the circularly polarized standing wave generated in the HB-frame in front of the foil. The cascade growth rate can be estimated from the figure: $\Gamma\approx0.6$ for $a_{0}=1000$ and $\Gamma\approx1.3$ for $a_{0}=1840$, where the cascade growth rate is normalized to the frequency of the standing wave in the HB-frame, $\omega^{\prime}=\omega_{L}\gamma_{HB}\left(1-v_{HB}\right)$. The obtained values of $\Gamma$ are slightly less than those calculated in Ref. [@Grismayer2016] by numerical simulation for the rotating electric field and for the circularly polarized standing wave ($\Gamma\approx0.8$ for $a_{0}=1000$ and $\Gamma\approx1.8$ for $a_{0}=1840$, see Fig. 2a in Ref. [@Grismayer2016]). The reason is that the standing wave is not perfect in our case because the reflection of the laser radiation from the foil is also not perfect. ![Positron number as a function of time for $a_{0}=1000$ (line 1) and $a_{0}=1840$ (line 2).[]{data-label="N"}](Fig6.pdf){width="8cm"} According to Eq.
(\[Ehbf\]) the square of the electric field in the vacuum region in the HB-frame as a function of the space-time position in the laboratory frame takes the form: $$\begin{aligned} \left(\mathbf{E}^{\prime}\right)^{2} & =\left(2a_{0}\omega^{\prime}\right)^{2}\sin^{2}\left(\omega^{\prime}x^{\prime}\right)\nonumber \\ & =\left(2a_{0}\right)^{2}\frac{1-v_{HB}}{1+v_{HB}}\sin^{2}\left(\frac{x-v_{HB}t}{1+v_{HB}}\right).\end{aligned}$$ To calculate the fields in the HB-frame we apply the Lorentz transformation to the field distribution retrieved from the numerical simulations with $v_{HB}\approx0.35$ for $a_{0}=1000$ and $v_{HB}=0.5$ for $a_{0}=1840$. Thus the positions of the nodes and antinodes in the laboratory reference frame can be easily found from the distribution of $\left(\mathbf{E}^{\prime}\right)^{2}$. The squared electric and magnetic fields in the HB-frame, $\left(\mathbf{E}^{\prime}\right)^{2}$ and $\left(\mathbf{B}^{\prime}\right)^{2}$, as functions of $x$, the positron distribution in the $x-v_{x}$ plane and the positron density on the axis $y=z=0$ as a function of $x$ are shown in Fig. \[a1000\] for $a_{0}=1000$ and $t=6.4\lambda/c$ and in Fig. \[a1840\] for $a_{0}=1840$ and $t=4.0\lambda/c$. It is seen from Fig. \[a1840\] that for the strong laser field with $a_{0}=1840$ most of the positrons are created in front of the foil near the first magnetic node of the standing wave, because most of the photons that are emitted from the foil and initiate the cascade decay already within the first period of the standing wave. In this case the pair production effects dominate over the pair drift, so the number of pairs produced at the magnetic nodes is higher than the number that drifts to the electric nodes [@Bashmakov2014]. In other words, the particle doubling time is less than the time it takes for the particles to pass from a magnetic node to the neighboring electric nodes.
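The node positions used in Figs. \[a1000\] and \[a1840\] follow from the expression above; a short sketch ($x$ and $t$ in units of $c/\omega_{L}$ and $1/\omega_{L}$; the helper names are illustrative):

```python
import numpy as np

def e2_hb(x, t, a0, v_hb):
    """Squared HB-frame electric field as a function of lab-frame coordinates."""
    return ((2.0 * a0) ** 2 * (1.0 - v_hb) / (1.0 + v_hb)
            * np.sin((x - v_hb * t) / (1.0 + v_hb)) ** 2)

def electric_nodes(t, v_hb, n=4):
    """Lab-frame positions where (E')^2 vanishes: x_k = v_HB*t + pi*(1+v_HB)*k."""
    return [v_hb * t + np.pi * (1.0 + v_hb) * k for k in range(n)]

# for a0 = 1840, v_HB = 0.5 the node spacing is pi*(1 + v_HB) = 1.5*pi,
# i.e. 0.75*lambda in the laboratory frame
nodes = electric_nodes(t=4.0 * 2.0 * np.pi, v_hb=0.5)
```

The antinodes (magnetic nodes) lie midway between consecutive electric nodes, where $(\mathbf{E}^{\prime})^{2}$ reaches $(2a_{0})^{2}(1-v_{HB})/(1+v_{HB})$.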
For $a_{0}=1000$ the probability of pair production is lower than for $a_{0}=1840$ and the positrons are located within several wavelengths in front of the foil near the electric and magnetic nodes (see Fig. \[a1000\](c)), which is in qualitative agreement with the predictions formulated in the previous Section. The small shift of the maxima of the density profile from the exact positions of the nodes can be caused by the fact that the reflection is not perfect, so that the wave in the HB-frame is not exactly a standing wave. It follows from Figs. \[a1000\] and \[a1840\] that the longitudinal velocity of the positrons is close to the HB front velocity at the magnetic nodes, while the velocity is distributed over a wide range near the electric nodes. For $a_{0}=1000$ the positron distribution is sawtooth-like in the $x-v_{x}$ plane (see Fig. \[a1000\](b)). Therefore, in the HB front frame, the longitudinal positron velocity increases towards the HB front from one electric node to another, reaching $v_{x}=0$ at the magnetic nodes. This is in qualitative agreement with Eq. (\[vx\]) describing the sawtooth-like distribution (see Fig. \[vx-fig\]). The longitudinal dynamics of the secondary electrons is the same as that of the positrons. ![(a) The squared electric and magnetic fields in the HB-frame, $\left(\mathbf{E}^{\prime}\right)^{2}$ (solid red line) and $\left(\mathbf{B}^{\prime}\right)^{2}$ (dashed blue line), as a function of $x$, (b) the positron distribution in the plane $x-v_{x}$ and (c) the positron density along the $x$-axis as a function of $x$ for $a_{0}=1000$ and $t=6.4\lambda/c$.
[]{data-label="a1000"}](positrons1000.pdf) ![(a) The squared electric and magnetic fields in the HB-frame, $\left(\mathbf{E}^{\prime}\right)^{2}$ (solid red line) and $\left(\mathbf{B}^{\prime}\right)^{2}$ (dashed blue line), as a function of $x$, (b) the positron distribution in the plane $x-v_{x}$ and (c) the positron density along the $x$-axis as a function of $x$ for $a_{0}=1840$ and $t=4.0\lambda/c$. []{data-label="a1840"}](positrons1840.pdf) Discussions and conclusions =========================== It is demonstrated that numerous electron-positron pairs are produced in the hole-boring regime of interaction between a foil and a laser pulse with intensities higher than $10^{24}$W cm$^{-2}$. The pair production scenario can be roughly divided into three stages: (i) cascade initiation by the photons emitted from the foil electrons; (ii) self-sustained QED cascading in the standing wave; (iii) the back reaction of the produced pair plasma on the laser-foil interaction. In the first two stages the pairs are mainly located in the vacuum region in front of the foil where the incident and reflected laser waves interfere. When the number of produced electron-positron pairs is not very large, the field structure in the vacuum region is close to a standing circularly polarized wave in the hole-boring front reference frame. The electron-positron plasma is mainly produced as a result of QED cascading in the standing wave. The analytical model for the dynamics of the electrons and positrons in the rotating electric field with radiation reaction is extended to rotating electric and magnetic fields which are parallel to each other. The model proposed by Zeldovich [@Zeldovich1975] predicts the stationary trajectory attracting the electron trajectories in the rotating electric field when the radiation reaction is strong. On such a trajectory the work done by the electric field is balanced by the radiative losses.
The particle performs circular motion and the energy balance is controlled by the phase shift between the electric field and the particle velocity. In the case of rotating electric and magnetic fields which are parallel to each other, the stationary trajectory also exists and is helical-like, with infinite motion along the axis perpendicular to the plane of the field rotation. The dynamics in the circularly polarized standing wave is more complex [@Lehmann2012; @Esirkepov2015]. Moreover, the particle motion can be stochastic, and the attractor accumulating the trajectories in the phase space is located only at the electric node. Attractors have recently been studied in standing waves of various configurations [@Lehmann2012; @Gonoskov2014; @Esirkepov2015; @Kirk2016]. Our model allows one to calculate the particle trajectory near the magnetic node of the standing wave. It is shown that the trajectories of the electrons and positrons near the magnetic node are close to the stationary trajectory in the local electric and magnetic fields. The model includes the QED effect of radiation reaction suppression because of the reduction of the total power radiated by the particle in the quantum regime [@Bell2008; @Bulanov2013; @Esirkepov2015]. The calculated trajectories are used to analyze the positron density distribution in the standing wave. It follows from the model that the positron density peaks at both the electric and magnetic nodes of the standing wave, because the electron-positron pairs are mainly produced at the magnetic nodes, where the cascade growth rate peaks, and the produced pairs drift to the electric nodes, as the magnetic nodes are unstable for them. The positron distribution in the $x$-$v_{x}$ plane is sawtooth-like and the longitudinal velocity of the positrons is equal to the HB-front velocity at the magnetic nodes.
Near the electric nodes the motion of the electrons and positrons is close to a superposition of drift and rotation, so that the longitudinal velocity varies within a wide range. This is in agreement with the results of the numerical simulations. In the case of high laser intensity ($a_{0}=1840$) the density peaks at the magnetic node closest to the HB front. The reason is that the high-energy photons emitted by the foil electrons decay rapidly and cannot initiate a cascade far from the HB front. The number of pairs at the electric nodes is much smaller than that at the magnetic ones, because near the magnetic node the pair production rate dominates over the pair loss rate due to the drift. The first stage, representing the cascade initiation, is not pronounced in Fig. \[N\]. One of the reasons is that the number of high-energy photons emitted by the electron layer is not very large, because the laser field is strongly suppressed in the layer and the layer electrons are not accelerated as efficiently as the positrons and the secondary electrons in the vacuum region in front of the foil. Therefore the number of high-energy photons emitted by the pairs and participating in cascading exceeds the number of photons emitted by the foil electrons within a very short period of time, so that the duration of the first stage may be small. When the pair number becomes large, the produced electron-positron plasma can absorb the laser radiation and affect the dynamics of the laser-foil interaction. The manifestation of such a nonlinear stage (the third stage) can be seen in Figs. \[a1000\](a) and \[a1840\](a), where the standing wave is slightly attenuated towards the HB front. The transition between the second and the third stage can be seen in Fig. \[N\] as a saturation of the pair number growth. The transition occurs at $t\approx7.5\lambda/c$ for $a_{0}=1000$ and at $t\approx4.5\lambda/c$ for $a_{0}=1840$.
An analytical model of the third stage has been proposed in Ref. [@Kirk2013]. It is based on one-dimensional solutions of the two-fluid (electron-positron) and Maxwell equations, including a classical radiation reaction term. The model predicts a vacuum gap with a standing wave structure between the pair “cushion” and the target. However, verification of the model by self-consistent numerical simulations is still absent, and a detailed analysis of the third stage with back reaction is needed. This work was supported by the Russian Science Foundation Grant No. 16-12-10383. [10]{} V. Yanovsky *et al.*, Opt. Express **16**, 2109 (2008). G. Mourou, T. Tajima, S. V. Bulanov, Rev. Mod. Phys. **78**, 309 (2006). M. Marklund, P. K. Shukla, Rev. Mod. Phys. **78**, 591 (2006). A. Di Piazza, C. Muller, K. Z. Hatsagortsyan, C. H. Keitel, Rev. Mod. Phys. **84**, 1177 (2012). E. Nerush, I. Kostyukov, Phys. Rev. E **75**, 057401 (2007). C. P. Ridgers, C. S. Brady, R. Duclous, J. G. Kirk, K. Bennett, T. D. Arber, A. P. L. Robinson, A. R. Bell, Phys. Rev. Lett. **108**, 165006 (2012). J. G. Kirk, A. R. Bell, C. P. Ridgers, Plasma Phys. Control. Fusion **55**, 095016 (2013). A. V. Bashinov and A. V. Kim, Phys. Plasmas **20**, 113111 (2013). C. S. Brady, C. P. Ridgers, T. D. Arber, A. R. Bell, Phys. Plasmas **21**, 033108 (2014). E. N. Nerush, I. Yu. Kostyukov, L. Ji, A. Pukhov, Phys. Plasmas **21**, 013109 (2014). E. N. Nerush, I. Yu. Kostyukov, Plasma Phys. Control. Fusion **57**, 035007 (2015). A. R. Bell and J. G. Kirk, Phys. Rev. Lett. **101**, 200403 (2008). A. M. Fedotov *et al.*, Phys. Rev. Lett. **105**, 080402 (2010). E. N. Nerush, I. Yu. Kostyukov, A. M. Fedotov, N. B. Narozhny, N. V. Elkina, and H. Ruhl, Phys. Rev. Lett. **106**, 035001 (2011). S. S. Bulanov, C. B. Schroeder, E. Esarey, W. P. Leemans, Phys. Rev. A **87**, 062110 (2013). V. F. Bashmakov, E. N. Nerush, I. Yu. Kostyukov, A. M. Fedotov, N. B. Narozhny, Phys. Plasmas **21**, 013105 (2014). E. G. Gelfer, A. A.
Mironov, A. M. Fedotov, V. F. Bashmakov, E. N. Nerush, I. Yu. Kostyukov, N. B. Narozhny, Physical Review A **92**, 022113 (2015). M. Jirka, O. Klimo, S. V. Bulanov, T. Zh. Esirkepov, E. Gelfer, S. S. Bulanov, S. Weber, and G. Korn, Phys. Rev. E **93**, 023207 (2016). M. Vranic, J. L. Martins, R. A. Fonseca, L. O. Silva, Laser absorption via QED cascades in counter propagating laser pulses, arXiv:1512.05174. G. Lehmann, and K. H. Spatschek, Phys. Rev. E **85**, 056412 (2012). L. L. Ji, A. Pukhov, I. Yu. Kostyukov, B. F. Shen, K. U. Akli, Phys. Rev. Lett. **112**, 145003 (2014). A. Gonoskov, A. Bashinov, I. Gonoskov, C. Harvey, A. Ilderton, A. Kim, M. Marklund, G. Mourou, and A. Sergeev, Phys. Rev. Lett. **113**, 014801 (2014). A. M. Fedotov, N. V. Elkina, E. G. Gelfer, N. B. Narozhny, and H. Ruhl, Phys. Rev.  A **90**, 053847 (2014). J. G. Kirk, Radiative trapping in intense laser beams, arXiv:1605.00822. W. L. Kruer, E. J. Valeo, and K. G. Estabrook, Phys. Rev. Lett. **35**, 1076 (1975). S. C. Wilks, W. L. Kruer, M. Tabak and A. B. Langdon, Phys. Rev. Lett. **69**, 1383 (1992). T. Schlegel, N. Naumova, V. T. Tikhonchuk, C. Labaune, I. V. Sokolov, G. Mourou, Phys. Plasmas **16**, 083103 (2009). A. P. L. Robinson, P. Gibbon, M. Zepf, S. Kar, R. G. Evans and C. Bellei, Plasma Phys. Control. Fusion **51**,024004 (2009). R. Capdessus and P. McKenna, Phys. Rev. E **91**, 053105 (2015). V. N. Baier, V. M. Katkov, and V. M. Strakhovenko, *Electromagnetic Processes at High Energies in Oriented Single Crystals* (Singapore, World Scientific 1998). V. B. Berestetskii, E. M. Lifshits, and L. P. Pitaevskii, *Quantum Electrodynamics* (Pergamon Press, New York, 1982). L. D. Landau, E. M. Lifshits, *The Classical Theory of Fields* (Pergamon, New York, 1982). R. Duclous, J. G. Kirk and A. R. Bell, Plasma Phys. Control. Fusion **53**, 015009 (2011). Ya. B. Zel’dovich, Sov. Phys. Usp. **18**, 97 (1975). S. V. Bulanov, T. Zh. Esirkepov, M. Kando, J. K. Koga, S. S. Bulanov, Phys. Rev. 
E **84**, 056605 (2011). N. V. Elkina, A. M. Fedotov, I. Yu. Kostyukov, M. V. Legkov, N. B. Narozhny, E. N. Nerush, and H. Ruhl, Phys. Rev. ST Accel. Beams **14**, 054401 (2011). T. Grismayer, M. Vranic, J. L. Martins, R. A. Fonseca, L. O. Silva, Seeded QED cascades in counter propagating laser pulses, arXiv:1511.07503. T. Z. Esirkepov, S. S. Bulanov, J. K. Koga, M. Kando, K. Kondo, N. N. Rosanov, G. Korn, and S. V. Bulanov, Phys. Lett. A **379**, 2044 (2015).
--- author: - Kalaga Madhav - Ziyang Zhang - Martin M Roth title: 'Aperiodic phase masks for inscribing complex multi-notch OH-emission filters for astronomy' --- Introduction ============ Observations at near-infrared (NIR) wavelengths between 0.9 and 2.5$\mu$m are critical for modern astrophysics, as they provide access to objects heavily obscured by dust extinction, e.g. the supermassive black hole at the galactic center and star-forming regions, to cool objects like AGB stars, to the high-redshift universe, etc. – to name but a few. Furthermore, the availability of high sensitivity large format image sensors and the advent of adaptive optics have made the NIR an extremely attractive wavelength range such that the new generation of large ground-based telescopes like the ELT, TMT, or GMT must be considered mainly NIR facilities. However, observations of faint objects in the NIR from the ground are overwhelmed by a sky background emission line spectrum that is typically 1000 times brighter than the NIR light from the objects of interest. The emission occurs due to de-excitation of atmospheric hydroxyl (OH) molecules in a cold layer of 6-10 km thickness at altitudes of 90 km. At the central wavelengths of these emission lines, the signal of faint objects is heavily affected by the photon shot noise and strong residuals associated with the OH lines, so no reliable data can be recorded. Instead, one has to resort to the interline continuum, a technique also known as OH avoidance [@Martini]. However, even when resorting to observations between the bright OH lines, it has been discovered that the faint extended wings, which are due to scattered light arising inevitably within the spectrograph, are still bright enough to affect the detection limit of faint objects in the continuum. Therefore, it has been proposed to filter out the OH lines at high dispersion; however, early concepts did not in practice provide convincing results (e.g. [@Piche; @Ennico; @Maihara]).
A radically new idea was proposed by [@Bland], which consists of a filter placed in front of the optical system before the light enters the spectrograph, thus providing no opportunity for scattered light to be created [@Ellis1; @Trinh1]. In order to suppress or filter out the OH emission lines, an optical filter capable of $>$30dB suppression at the emission lines, with bandwidths as small as 150nm, and with throughput losses of $<$0.5dB outside and between the lines, will be required. Fiber Bragg gratings (FBGs) are ideal candidates for filtering with the tight constraints, and an aperiodic FBG (AFBG) filter capable of suppressing $\sim$100 lines has been previously demonstrated in the GNOSIS experiment [@Ellis2; @Trinh2]. However, fabricating such filters with good reproducibility is not a trivial task and requires accurate control of the intensity, phase and exposure length of a complex interference pattern over a moving photosensitive optical fiber. Simple or complex gratings can be fabricated through point-by-point (PbP) [@PbP] or line-by-line (LbL) [@LbL] inscription processes using femtosecond lasers and de-phasing methods [@Buryak]. Ultra-long gratings have been fabricated using electro-optic modulators (EOMs) [@Raman] in a push-pull configuration, and complex OH filters have been fabricated using acousto-optic modulators (AOMs) [@JAR]. EOM- or AOM-based fabrication techniques generate a running interference pattern, similar to a rack-and-pinion, that is synchronised with the velocity of the optical fiber, requiring precise control of the intensity, focus spot size, and velocity over a long length in real time. A phase mask offers a convenience that the previously mentioned methods do not: the fabrication complexity of the femtosecond, EOM or AOM techniques is transferred to the one-time manufactured complexity of the phase mask.
Requiring no moving parts or stringent alignment, the complex phase mask can be used off-the-shelf in a standard UV-based FBG inscription setup. In this paper, we introduce for the first time the design of an aperiodic phase mask to inscribe multi-channel aperiodic filters in hydrogenated or doped photosensitive fibers, in order to suppress the night-sky OH emission. Aperiodic Bragg grating ======================= The index modulation $\Delta n_g$ and phase $\phi_g$ of the complex grating can be reconstructed from the desired reflection spectrum $|r|$ by using the layer peeling method described in [@Skaar1; @Skaar2]. For the design of the APM, we selected OH sky emission lines in the H-band, ranging from 1400nm to 1700nm [@Rousselot]. For compatibility with future tests on the PRAXIS [@praxis] system, which uses the existing GNOSIS filters, the full-width at half maximum (FWHM), transmission, and wavelength of the OH lines are selected as given in Tables 1 and 2 of [@Trinh1]. The aperiodic filter is defined by [@Cao], $$\label{eq:desiredspectrum} \begin{multlined} |r\big(\lambda\big)| =\sqrt{R_i}\sum_{i=1}^{N}{\exp \Bigg[-\Bigg\{\frac{2\pi n_{eff}}{p_i}\Big(\frac{1}{\lambda}-\frac{1}{\lambda_i}\Big)\Bigg\}^{q_i}\Bigg]}\times \\ \exp \Bigg[i2\pi n_{eff}\Bigg(\frac{1}{\lambda}-\frac{1}{\lambda_0}\Bigg)g_i\Bigg] \end{multlined}$$ where $R_i$ is the desired reflectivity of the individual OH-emission line filter, $N$ is the number of filters, $(p_i,q_i)$ define the shape of the individual filters, $n_{eff}$ is the effective index, $\lambda_0$ is the seed grating wavelength, which also defines the APM’s pitch $\Lambda_m$, and $g_i$ is the individual channel’s group delay. Since there exists an upper limit to the index change achievable in a fiber, $g_i$ can be optimised to reduce the maximum index modulation required. The right choice of $g_i$, or de-phasing [@Buryak; @Cao], effectively spreads the individual gratings over the length of the grating, instead of crowding them in the same location spatially.
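The channel construction of eq.(\[eq:desiredspectrum\]) can be sketched numerically as a sum of de-phased super-Gaussian notches. The following is a minimal illustration only (the function name and the parameter values used in testing are ours, not the actual GNOSIS/PRAXIS filter parameters; $q_i$ is assumed even so the channel shape is symmetric about its centre):

```python
import numpy as np

def target_spectrum(lam, centers, R, p, q, g, n_eff, lam0):
    """Complex target reflection r(lambda): channel i is a super-Gaussian of
    reflectivity R_i and shape (p_i, q_i) centred on lambda_i, multiplied by a
    linear de-phasing term g_i referenced to the seed wavelength lambda_0."""
    lam = np.asarray(lam, dtype=float)
    r = np.zeros_like(lam, dtype=complex)
    for lam_i, R_i, p_i, q_i, g_i in zip(centers, R, p, q, g):
        mag = np.sqrt(R_i) * np.exp(-((2.0 * np.pi * n_eff / p_i)
                                      * (1.0 / lam - 1.0 / lam_i)) ** q_i)
        r += mag * np.exp(1j * 2.0 * np.pi * n_eff * (1.0 / lam - 1.0 / lam0) * g_i)
    return r
```

At a channel centre $\lambda=\lambda_i$ the magnitude reduces to $\sqrt{R_i}$, and away from the centre it falls off at a rate set by $p_i$ and $q_i$; distinct $g_i$ values then spread the channels along the grating, as described above.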
We now describe the steps required to design a mask that can be used in a standard UV inscription setup to fabricate a grating with a reflection spectrum defined by eq.(\[eq:desiredspectrum\]). Design of aperiodic phase mask {#sec:apm} ============================== It is well known [@Sheng1; @Sheng2; @Sheng3] that when a $\phi_m$-shifted phase mask is used in the side-writing technique for inscribing FBGs, the phase shift in the mask is split into two half-phase shifts ($\frac{\phi_m}{2}$), separated by $\Delta z=2y\tan\theta$ in the fiber, as shown in Fig.\[fig:mask\_fiber1\], where $y$ is the distance between the mask and the fiber core, and $\theta$ is the angle of diffraction of the $\pm 1$ order. ![Schematic showing the propagation of phase mask phase to fiber grating phase.[]{data-label="fig:mask_fiber1"}](mask_fiber1){width="50mm"} The cumulative phase of the light propagating through the fiber gives the desired phase $\phi_g$ in the grating. For example, if a $\pi$-shifted phase mask is used to fabricate the grating, along the grating length there will be two locations with $\phi_g=\pi/2$ phase. In the transmission spectrum, we would see the characteristic single narrow transmission window at the filter center. With increasing $\phi_m$, the narrow transmission window within the Bragg stopband shifts to longer wavelengths [@Janos]. To achieve the desired phase $\phi_g$ in the grating, the width of the groove, or phase-step, $\delta_m$, in the phase mask is given by, $$\delta_m=\frac{\Lambda_{m}}{4\pi}\Big(2\pi+\phi_g\Big) \label{eq:maskgap}$$ For example, if $\phi_g=-\pi$ for the standard $\pi-$shifted FBG, we will require a phase mask with $\delta_m=\frac{\Lambda_m}{4}$ at the center of the mask of length $L$. If we use this mask for fabrication, in the fiber the phase will be split, $\phi_g=\big(-\pi/2,-\pi/2\big)$. By tuning $\delta_m$, or equivalently $\phi_m$, we can inscribe a desired $\phi_g$ in the grating at selected locations.
Also, when $\delta_m=\Lambda_m/2$, we get $\phi_g=0$ and a uniform phase mask. Introducing multiple phase shifts using high-precision PZT translation was proposed in [@dai] for fabricating periodic or non-periodic high-channel-count FBGs. In order to design a mask that can generate a grating with the desired aperiodic reflection spectrum defined by eq.(\[eq:desiredspectrum\]), we will require the grating’s phase ($\phi_g$). We use the layer peeling (LP) technique to first derive the grating’s complex coupling coefficient, $\kappa$, from which we can extract $\phi_g$. Knowing $\phi_g$, we can then calculate the phase-steps $\delta_m$ of the APM using eq.(\[eq:maskgap\]). ![Representative 2D model of a section of the APM showing two $\Lambda_m/4$ shifted grooves separated by $\Delta z$ corresponding to ($-\pi$, $+\pi$) phase. $\rho=\lambda_{uv}/2(n_{uv}-1)$ is the groove depth of the phase mask, defined by the wavelength $\lambda_{uv}$ of the laser used for fabrication, and the refractive index $n_{uv}$ of the mask material.[]{data-label="fig:apm"}](apm1){width="\linewidth"} Simulation and discussion {#sec:simulation} ========================= Fig.\[fig:maskfdtd\] shows the FDTD simulation for phase steps at two locations on a phase mask of $60\mu m$ length, separated by 20$\mu m$, where the mask period $\Lambda_m=1.064\mu m$. The two phase steps $\delta_m=\Big\{\frac{\Lambda_m}{4},\frac{3\Lambda_m}{4}\Big\}$, corresponding to mask phases $\phi_m=\big\{-\pi,+\pi\big\}$ respectively, result in four half-phase regions in the fiber.
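The mapping of eq.(\[eq:maskgap\]) from grating phase to groove width is a one-liner; the sketch below (an illustration, with function and variable names of our choosing) reproduces the special cases quoted above:

```python
import numpy as np

def groove_width(phi_g, mask_pitch):
    """Phase-step (groove) width from eq. (maskgap):
    delta_m = Lambda_m / (4*pi) * (2*pi + phi_g)."""
    return mask_pitch / (4.0 * np.pi) * (2.0 * np.pi + np.asarray(phi_g, dtype=float))
```

With a mask pitch $\Lambda_m=1065.28$nm, $\phi_g=-\pi$ gives $\delta_m=\Lambda_m/4=266.32$nm, $\phi_g=0$ recovers the uniform mask $\delta_m=\Lambda_m/2$, and $\phi_g=+\pi$ gives $3\Lambda_m/4=798.96$nm, matching the $\delta_m$ range quoted in the next Section.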
![Near field of phase mask with two phase steps of $-\pi$ and $+\pi$.[]{data-label="fig:maskfdtd"}](apmfdtd){width="\linewidth"} ![Transmission spectrum, index modulation, phase and APM groove width for fabricating an aperiodic filter.[]{data-label="fig:layer"}](paperplot2){width="\linewidth"} For designing an APM specifically made to fabricate an aperiodic grating, we first define the filter characteristics, such as transmission, FWHM and channel center. We chose $N=37$ OH-emission lines between 1500nm and 1600nm, and $L=47.9$mm. Using LP, and the desired filter spectrum constructed using eq.(\[eq:desiredspectrum\]), we find the index modulation $\Delta n$ and $\phi_g$, as shown in Fig.\[fig:layer\]. For covering the filter’s bandwidth ($\beta$), we will require a $\phi_g$, or $\delta_m$, discretization interval or layer thickness in layer peeling [@Skaar2], $\Delta z=\pi / \beta=9.58 \mu m$. We use eq.(\[eq:maskgap\]) to calculate the APM’s groove width $\delta_m$ from $\phi_g$. Since $\phi_g=[-\pi,+\pi]$ rad, we have $\delta_m=[266.32,798.96]$ nm. $\delta_m$ can be incorporated in the mask as a nonlinear chirp, where the groove width $\delta_m$ varies continuously over the length of the mask. To achieve accurate continuously varying $\delta_m$ using an e-beam process would be challenging. Alternatively, we design the mask with a global mask pitch of $\Lambda_m=1065.28$nm, corresponding to the seed grating $\lambda_0=1550$nm, and at discrete locations at intervals of $\Delta z$ along the seed phase mask, we incorporate grooves of width $\delta_m$ defined by eq.(\[eq:maskgap\]). An example of a mask segment with two $\delta_m$ is shown in Fig.\[fig:apm\]. Conclusion ========== We have shown the steps involved in transferring the spatial structure of an aperiodic fiber Bragg grating to the corresponding structure in an aperiodic phase mask. Fabrication of the mask with the desired groove accuracy at periodic intervals, or as a continuous chirp, is not a trivial task.
However with recent advances in e-beam processes, the accuracy required to reproduce $\delta_m$ is a reality. Fabrication of APMs based on the method described in this paper is currently ongoing. With APMs, the complexity of alignment in fabrication setups such as EOM, AOM or femtosecond laser, is now transferred to the fabricated complexity in the mask, facilitating the use of standard phase mask fabrication to inscribe complex gratings. Acknowledgements {#acknowledgements .unnumbered} ================ This work is supported by the BMBF project “Meta-ZiK Astrooptics” (grant no. 03Z22A511). [1]{} Martini, P., & DePoy, D. L. 2000, SPIE, 4008, 695 Piche, F., Parry, I. R., Ennico, K., et al. 1997, SPIE, 2871, 1332 Ennico, K. A., Parry, I. R., Kenworthy, M. A., et al. 1998, SPIE, 3354, 668 Maihara, T., Ohta, K., Tamura, N., et al. 2000, SPIE, 4008, 1111 Bland-Hawthorn, J., Ellis, S. C., Leon-Saval, S. G., et al. 2011, Nature Communications, 2, 581 Ellis, S. C., & Bland-Hawthorn, J. 2008, MNRAS, 386, 47 Ellis, S. C., Bland-Hawthorn, J., Lawrence, J., et al. 2012, MNRAS, 425, 1682 Trinh, C. Q., Ellis, S. C., Bland-Hawthorn, J., et al. 2013-1, AJ, 145, 51 Trinh, C. Q., Ellis, S. C., Bland-Hawthorn, J., et al. 2013-2, MNRAS, 432, 3262 Lai, Y., Zhou, K., Sugden, K., Bennion, I. 2007, Opt.Express, 15, 26, 18318 Zhou, K., Dubov, M., Mou, C., Zhang, L., Mezentsev, V. K., Bennion, I., 2010 IEEE Photo. Tech. Lett., 22, 16, 1190 Buryak, A. V., Kolossovski, K. Y., Stepanov, D. Y. 2003, IEEE J. of Quant. Elect., 39, 1, 91 Loranger, S., Lambin-Iezzi, V., Kashyap, R. 2017, Optica 4, 1143 Gbadebo, A. A., Turitsyna, E. G., Williams, J. A. R.,  2018, Optics Exp., 26, 2, 1315 Cao, H., Atai, J., Shu, X., Chen, G., 2012, Opt. Express, 20, 11, 12095 Sheng, Y., Rothenberg, J. E., Li, H., Wang, Y., Zweiback, J. 2004, IEEE Photo. Tech. Lett., 16, 5, 1316 Tremblay, G., Sheng, Y. 2006, J. Opt. Soc. Am. B, 23, 8, 1511 Sheng, Y., and Sun, L. 2005 Opt. 
Ex., 13, 16, 6111 Janos, M., Canning, J., Sceats, M. G.  1996, IEEE Electron. Lett., 32, 3, 245 Dai, Y., Yao, J., 2009, IEEE J. of Quant. Elect., 45, 8, 964 Skaar, J., Wang, L., Erdogan, T. 2001, IEEE J. of Quant. Elect., 37, 2, 165 Skaar, J., Feced, R. 2002 J. Opt. Soc. Am. A , 19, 11, 2229 Rousselot, P., Lidman, C., Cuby, J.-G., Moreels, G., Monnet, G. 2000, Astron. Astrophys. 354, 1134 Horton, A., et. al.,  2012, SPIE 8450, 84501V
--- abstract: 'We report final results from our 2.5 year infrared parallax program carried out with the European Southern Observatory 3.5m New Technology Telescope and the SOFI infrared camera. Our program targeted precision astrometric observations of ten T-type brown dwarfs in the J band. Full astrometric solutions (including trigonometric parallaxes) for nine T dwarfs are provided, along with a proper motion solution for a further object. We find that HgCdTe-based infrared cameras are capable of delivering precision differential astrometry. For T dwarfs, infrared observations are to be greatly preferred over the optical, both because they are so much brighter in the infrared, and because their prominent methane absorptions lead to similar effective wavelengths through the J-filter for both target and reference stars, which in turn results in a dramatic reduction in differential colour refraction effects. We describe a technique for robust bias estimation and linearity correction with the SOFI camera, along with an upper limit to the astrometric distortion of the SOFI optical train. Colour-magnitude and spectral-type-magnitude diagrams for both L and T dwarfs are presented which show complex and significant structure, with major import for luminosity function and mass function work on T dwarfs. Based on the width of the early L dwarf and late T dwarf colour-magnitude diagrams, we conclude the brightening of early T dwarfs in the J passband (the “early T hump”) is not an age effect, but due to the complexity of brown dwarf cooling curves. Finally, empirical estimates of the “turn on” magnitudes for methane absorption in field T dwarfs and in young star clusters are provided. These make the interpretation of the T6 dwarf $\sigma$OriJ053810.1-023626 as a $\sigma$Ori member problematic.' author: - 'C.G. Tinney, Adam J. Burgasser, J. Davy
Kirkpatrick' title: Infrared Parallaxes for Methane T dwarfs --- \[firstpage\] Introduction – Methane T-type Brown Dwarfs ========================================== Numerous examples of the field counterparts to the extremely cool methane brown dwarf Gl229B [@nak1995] are now known [@st1999; @bu1999; @bu2000a; @bu2000b; @le2000; @ts2000; @cu2000]. These objects are now uniformly classified as “T dwarfs” [@bu2002a; @ge2002], and have such low photospheric temperatures (800-1300K) that their photospheres are dominated by the effects of dust and methane formation [@all2001], neither of which are amenable to simple modeling. The discovery of sizable numbers of T dwarfs means that we are now in a position to use direct trigonometric parallax observations to [*empirically*]{} determine the loci of T dwarf cooling curves, rather than relying on models. The discovery of several T dwarfs by SDSS with spectra bridging the L and T spectral types (e.g. @le2002 [@ge2002]) means we are also in a position to empirically determine where on these brown dwarf cooling curves the L-T transition occurs. Trigonometric parallaxes are also essential to understanding the space density of T dwarfs. Luminosity function estimates for T dwarfs (e.g. @burg_thesis) are currently based on limited parallaxes and assumptions about object binarity. (Recent programs targeting more L and T dwarfs [@mbb1999; @ko1999; @reid2001; @bu2003a; @close2003] indicate that $\sim$10-20% of objects observed in sufficient detail are found to be binary.) Luminosity functions based on currently available colour-magnitude relations will therefore be problematic at best. Trigonometric parallaxes are therefore required to determine the [*actual*]{} luminosities of these objects and indicate whether they are single or binary, so that more meaningful luminosity functions for T dwarfs can be constructed.
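The step from a measured trigonometric parallax to an object's luminosity is the standard distance-modulus relation. A one-line sketch (ours, not code from this program) makes the binarity point concrete:

```python
import math

def absolute_magnitude(m_app, parallax_mas):
    """Absolute magnitude from apparent magnitude and trigonometric parallax
    in milliarcseconds: d [pc] = 1000 / pi_mas, then M = m - 5*log10(d) + 5."""
    d_pc = 1000.0 / parallax_mas
    return m_app - 5.0 * math.log10(d_pc) + 5.0
```

For example, a T dwarf with J = 15.0 and a parallax of 100 mas (d = 10 pc) has M_J = 15.0; an unresolved equal-luminosity binary with the same parallax would sit $2.5\log_{10}2 \approx 0.75$ mag above the single-object sequence, which is why parallaxes are needed to flag binarity before constructing luminosity functions.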
Parallaxes and the Infrared =========================== Traditional parallax techniques based on photography are completely unable to target objects as faint and red as T dwarfs. CCD parallax work in the optical at the USNO, ESO and Palomar [@mo1992; @da2002; @t96; @t95; @t93] has shown that parallaxes can be obtained for objects as faint as I=18-19 at distances $\la$70pc. However, this still leaves the T dwarf class of objects (with I$\ga$21) unobservable. To date only a few of the very brightest and closest T dwarfs have proved tractable for CCD parallax work [@da2002]. Over the last two years, therefore, we have been extending optical CCD astrometric techniques into the infrared, where the J$<$16 magnitudes of most of the detected T dwarfs make significant progress possible. Indeed, there are several reasons to [*prefer*]{} the infrared for high precision astrometry. First, the effects of differential colour refraction (the different amount of refraction the atmosphere produces in red target stars, compared to blue reference stars, see @mo1992) are reduced by working at longer wavelengths. Second, because T dwarfs suffer methane absorption at the red end of their J- and H-band spectra, their effective wavelengths through a J- or H-filter are much closer to that of a typical background reference star than is the case in the optical. These effects combined mean that the stringent requirement of maintaining control of observations at constant hour angles (at least for T dwarfs) is not present in the infrared (cf. Section \[sec\_dcr\]). This [*considerably*]{} increases the flexibility and efficiency of infrared parallax observing over the optical. Third, seeing improves in the infrared, leading to smaller images, smaller amounts of differential seeing, and so higher astrometric precision. And finally, T dwarfs show [*much*]{} greater contrast to sky in the near-infrared than in the optical.
Infrared parallax observations have been pioneered by @jo2000, who targeted the extremely active (and unfortunately at 76pc also quite distant) late M dwarf PC0025+0447, as well as the nearby M dwarf VB10. The USNO also has an infrared astrometric program in operation, from which published results are expected shortly [@vr2002]. Observations & Sample ===================== Observations were carried out at 7 epochs over the period 2000 April 17 to 2002 May 30. At each epoch, observations were carried out on either two half- or two full-nights. All observations were obtained with the SOFI infrared camera on the European Southern Observatory (ESO) 3.5m New Technology Telescope (NTT). SOFI was used in its “large field” mode in which it provides a 4.92$\times$4.92 arcmin field-of-view with 0.28826 arcsec pixels (cf. Section \[scale\]). Exposures of each target were acquired with a fixed dither pattern (Fig. \[jitter\]) as eight 120s exposures through the SOFI J filter. The exposure pattern was designed so that this 16 minutes of dithered exposure time sampled many different inter-pixel spacings. As much as was feasible (given observing time constraints) we attempted to acquire all epoch observations at the same hour angle as the very first epoch observation acquired, so as to minimise DCR effects. Each epoch observation was also carried out with a specified reference star positioned within a few pixels of its position when observed on the very first epoch. This ensures all observations are carried out as near differentially as possible. Seeing conditions over the course of this program varied. Figure \[seeing\] shows a histogram of the seeing full-width at half maximum for all our astrometric observations. The median seeing was 0.82 arcsec, with 80% of the data being acquired in seeing conditions between 0.55 and 1.25 arcsec.
In addition to these epoch observations, all targets were also observed as they rose and set, so that DCR calibrations for each target could be developed, following the technique described in @t95 [@t96]. The J filter was chosen for these observations as it offers the best contrast between sky and T dwarf brightness. Typical near-infrared sky colours at La Silla are J–H=1.7, H–K=1.0-2.0 (2.0 for dark, 1.0 for bright) (ESO SOFI on-line documentation). By contrast, typical T dwarf colours are -0.5$<$J–H$<$0.5 and H–K$<$0.5 [@bu2002a], which means most T dwarfs are around a magnitude brighter relative to the sky in J than they are in H or K. The sample of objects observed is listed in Table \[sample\], along with indications as to which targets were observed at which epochs. SOFI has a nominal gain of 5.6e-/adu, and a nominal read-noise of 14e- per exposure. [@llccccccccl@]{} Object & Position (J2000) & Apr '00 & Jul '00 & Mar '01 & Apr '01 & Jul '01 & Mar '02 & May '02 & Ref.\
2M0559 & 05[$^{\mathrm h}$]{}59[$^{\mathrm m}$]{}191$-$140449 & x & & & x & & x & &\
S1021 & 10[$^{\mathrm h}$]{}21[$^{\mathrm m}$]{}097$-$030420 & & x & x & x & x & x & &\
2M1047 & 10[$^{\mathrm h}$]{}47[$^{\mathrm m}$]{}538$+$212423 & x & x & x & x & & x & &\
2M1217 & 12[$^{\mathrm h}$]{}17[$^{\mathrm m}$]{}111$-$031113 & x & x & x & x & x & x & &\
2M1225AB & 12[$^{\mathrm h}$]{}25[$^{\mathrm m}$]{}543$-$273947 & x & x & x & x & x & x & &\
S1254 & 12[$^{\mathrm h}$]{}54[$^{\mathrm m}$]{}539$-$012247 & & x & x & & x & x & &\
2M1346 & 13[$^{\mathrm h}$]{}46[$^{\mathrm m}$]{}464$-$003150 & x & x & x & x & x & x & &\
2M1534AB & 15[$^{\mathrm h}$]{}34[$^{\mathrm m}$]{}498$-$295227 & x & x & x & x & x & x & &\
2M1546 & 15[$^{\mathrm h}$]{}46[$^{\mathrm m}$]{}272$-$332511 & x & x & x & x & x & x & &\
S1624 & 16[$^{\mathrm h}$]{}24[$^{\mathrm m}$]{}144$+$002916 & x & x & x & x & x & x & &\
![image](seeinghist.ps){width="70mm"} ![image](jittercgt1.ps){width="70mm"} Object Names ------------ With the exception of $\epsilon$IndB [@scholz2003], all the objects discussed in this paper have been discovered by either the 2MASS ([www.ipac.caltech.edu/2mass]{}), SDSS ([www.sdss.org]{}) or DENIS ([cdsweb.u-strasbg.fr/denis.html]{}) sky surveys, and have been given object names by those surveys, based on their positions in J2000 coordinates. These names have the advantage of being very specific and informative, and the disadvantage of being lengthy and clumsy. Throughout this paper, therefore, we will generally give an object’s complete name when it is first used, and thereafter refer to it (when not confusing to do so) by a shortened 2Mhhmm, SDhhmm or Dhhmm form where hh and mm are the right ascension hour and minute components of its name.
Analysis ======== The analysis adopted for these data falls into two main areas: processing to produce linearised, flattened and sky-subtracted images, which was quite specific to the SOFI instrument; and astrometric processing of these images, which follows exactly that described in @t93 [@t95; @t96]. Processing SOFI data -------------------- [**Dark frames and zero-points**]{} : Dark frames obtained with SOFI reveal significant structure, which can be broken down into a few components. 1. a significant ($\sim$50-100adu peak-to-peak) vertical structure, known as the “shade”, which varies in intensity and shape with the overall level of illumination of the array; 2. a small (1-20adu) dark current from the instrument and a small readout amplifier glow in each quadrant; and 3. a tiny ($<$1adu) but fixed “ray” pattern left after the previous two are modeled and removed from dark current data. The shade pattern is of most concern as the remaining fixed patterns are small compared to the sky brightness. Figure \[shade\] shows a dark current image displaying the shade effect, together with a set of vertical medianned profiles through the shade. Unfortunately, this shade profile is not constant - it varies with the overall level of illumination of the array during an exposure, meaning one has an unknown zero-point for every pixel in every exposure. Calibration of the shade was achieved as follows. A small aperture (used to mask the instrument entrance when observing with one of the smaller fields of view) was inserted into the NTT focal plane, and a series of flat fields obtained with varying exposure times. Because only the central quarter of the array is illuminated by this procedure, it is possible to extract a shade profile from the edge of each image. It is also possible to record the level of illumination of the array which produced that shade profile.
By performing a least-squares cubic polynomial fit through each pixel of these shade profiles (which correspond to rows on the detector) as a function of array illumination, it is possible to develop a parametrization for the shade profile. Using this parametrization it is a simple matter to produce a shade profile estimate for each data image, and subtract it. The result is an image with zero-point constant across the array.[^1] ![image](tplx_fig2.ps){width="180mm"} ![image](charts.ps){width="180mm"} [**Linearity**]{} : All infrared detectors are non-linear to some extent. For SOFI, ESO usually recommends keeping sky and target object intensities below 10,000adu in order to maintain linearity at better than 1%. Unfortunately, such an observing strategy is not useful for astrometry, which demands the largest possible dynamic range to ensure targets (and reference stars) of widely differing magnitudes are usable in widely varying seeing conditions. We therefore calibrated the linearity of SOFI using the same shade profile data obtained above. Once the data have been shade corrected, they can then be used to examine the response of each pixel to a constant light source over widely varying exposure times. Repetition of a ’calibration’ exposure time throughout the sequence allows the lamp’s constancy to be calibrated – usually to within $\pm$ 0.5%. A sample of the resulting linearity correction is shown in Fig. \[samplelin\] for one of the SOFI quadrants. In all cases these tests were performed independently for each quadrant, and the results were always consistent with the same linearity correction for all quadrants. A single correction was therefore derived as the mean of those in each quadrant. Fig. \[samplelin\] shows that the detector is $\approx$2.5% non-linear at 20,000adu above bias, [*but*]{} that data can be obtained and linearity corrected even up to 25,000adu. 
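The per-row shade parametrization described above can be sketched as follows. This is an illustrative reconstruction only, not the actual SOFI pipeline: the illumination levels, shade values and all variable names are invented stand-ins for the aperture-mask calibration data.

```python
import numpy as np

# Sketch of the shade-profile parametrization: for each detector row,
# fit a least-squares cubic polynomial to the shade value as a function
# of the overall array illumination. Synthetic data throughout.
rng = np.random.default_rng(0)
illum = np.linspace(1000.0, 20000.0, 12)   # illumination levels (adu)
n_rows = 8                                 # a few detector rows

# synthetic "measured" shade values: one underlying cubic per row
true_c = rng.normal(size=(n_rows, 4)) * np.array([50.0, 1e-3, 1e-8, 1e-13])
shade = np.array([np.polyval(c[::-1], illum) for c in true_c])

# cubic fit per row, as in the text (illumination scaled for conditioning)
fits = [np.polyfit(illum / 1e4, shade[r], deg=3) for r in range(n_rows)]

def shade_estimate(row, level):
    """Predicted shade value for a given row and illumination level."""
    return np.polyval(fits[row], np.asarray(level) / 1e4)

# subtracting the estimate leaves each row with a flat zero-point
residual = shade[3] - shade_estimate(3, illum)
```

Subtracting `shade_estimate` from each data image is the step that yields a constant zero-point across the array.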
![image](linerarity.eps){width="80mm"} To linearise a pixel, then, with raw intensity $I_{ij}$, it is simply necessary to multiply it by the polynomial P($I_{ij}$) = $a_0 + a_1\,I_{ij} + a_2\,I_{ij}^2 + a_3\,I_{ij}^3$. The coefficients adopted were $ a_0=1.0,a_1=0.0, a_2=1.11329\times10^{-10}, a_3=-2.46799\times10^{-15}$ for 2000 April - 2001 April, and $ a_0=1.0,a_1=0.0, a_2=8.6124661\times10^{-11}, a_3=-1.6986849\times10^{-15}$ for 2001 July - 2002 May. [**Inter-quadrant Row Crosstalk**]{} : HgCdTe detectors typically show an effect known as inter-quadrant row crosstalk [@xtalk]. This has the effect that a constant, small fraction of the total flux seen in each row is seen as crosstalk at the same row in all the other quadrants. Correction of this effect is straightforward. The detector is integrated up into a single vertical column, then the two halves of this cut (Y=1-512 and Y=513-1024) are averaged, multiplied by a single cross-talk constant, and subtracted from every column of the detector. We found a crosstalk coefficient of 2.8$\times$10$^{-5}$ worked well. [**Flat-fielding** ]{} : Flat-fielding was performed using dome flats. Because of the (variable) shade pattern present in every dome flat, NTT staff have developed an observing recipe to obtain a “special” flat-field without a shade pattern present. Alternatively, one can use the shade calibration procedure described above to correct standard dome-flat fields. Both were tried for this program and both provided similar results. In the end every run’s data were flattened with a “special” dome flat, as is usual for SOFI observing. [**Sky subtraction** ]{} : Each group of eight 120s dithered exposures was then used to create a normalised and medianned sky frame, which was re-normalised to each of the eight observations to perform sky subtraction.
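The linearity and crosstalk corrections above can be sketched in a few lines. The polynomial coefficients and the crosstalk constant are the values quoted in the text (2000 April - 2001 April coefficients); the function names and the uniform test frame are illustrative assumptions.

```python
import numpy as np

# Coefficients quoted in the text for 2000 April - 2001 April
A0, A1, A2, A3 = 1.0, 0.0, 1.11329e-10, -2.46799e-15
XTALK = 2.8e-5        # crosstalk coefficient found to work well

def linearise(image):
    """Multiply each pixel by P(I) = a0 + a1*I + a2*I^2 + a3*I^3."""
    p = A0 + A1 * image + A2 * image**2 + A3 * image**3
    return image * p

def remove_row_crosstalk(image):
    """Inter-quadrant row crosstalk: integrate the detector into a single
    vertical column, average the two halves (Y=1-512 and Y=513-1024),
    scale by the crosstalk constant, and subtract from every column
    (a 1024x1024 detector is assumed)."""
    col = image.sum(axis=1)
    half = 0.5 * (col[:512] + col[512:])
    correction = XTALK * np.concatenate([half, half])
    return image - correction[:, None]

img = np.full((1024, 1024), 20000.0)   # uniform 20,000 adu test frame
lin = linearise(img)                   # ~2.5% correction, as in the text
clean = remove_row_crosstalk(img)
```

At 20,000 adu the polynomial evaluates to about 1.025, consistent with the $\approx$2.5% non-linearity quoted for Fig. \[samplelin\].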
So that the data frames would maintain approximate photon-counting errors, an appropriate constant sky level was then added back into each frame. [**Astrometric processing**]{} : Following this processing then, we have eight bias-subtracted, linearised, cross-talk corrected, flattened and sky-subtracted data frames for each astrometric epoch. These were then subject to further processing (ie. object finding and point-spread function fitting using DAOPHOT, DCR calibration and proper-motion and parallax solution fitting) as eight individual observations, in a manner identical to that described in @t93 [@t95; @t96]. Astrometric Calibration of SOFI {#scale} ------------------------------- Astrometric calibration observations were acquired in USNO Astrometric Calibration Region M [@stone1999] on 2001 July 12 and 13. These consisted of sixteen 60s exposures (on each night) scattered throughout the 3.2$\times$7.6 region which @stone1999 have astrometrically calibrated. These were processed identically to our main astrometric targets. Reference catalogue positions were extracted from the USNO ACR catalogue[^2] in SOFI-field-sized regions around each nominal telescope pointing position. These positions were then tangent projected (using the SLALIB library @sun67) to provide reference data sets in arcsecond offsets on the sky for each observation. These were matched against the observed data to derive a set of linear (ie. shift, scale and rotate) transformations from the SOFI pixel positions to arcsecond offsets on the sky[^3]. These transformations determine the SOFI plate scale on this night to be 0.28826$\pm$0.00003 arcsec/pixel, and the detector’s misalignment with N-S to be 0.030$\pm$0.003 deg. These data were also analysed to examine the amount of astrometric distortion (ie. variability in the instrument plate scale with position in the field) present in the SOFI optical train.
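The linear (shift, scale and rotate) solution described above can be sketched as a four-parameter least-squares fit. The star positions below are synthetic, and the routine is an illustrative reconstruction rather than the SLALIB-based pipeline actually used; the true plate scale and rotation are set to the values quoted in the text so that the fit can be checked against them.

```python
import numpy as np

# Fit xi = a*px - b*py + x0 ; eta = b*px + a*py + y0, i.e. a shift,
# a single scale and a rotation, from pixel positions to tangent-plane
# offsets in arcseconds. Synthetic star list for illustration only.
rng = np.random.default_rng(1)
pix = rng.uniform(0.0, 1024.0, size=(40, 2))   # pixel positions

scale = 0.28826                   # arcsec/pixel, as quoted in the text
theta = np.deg2rad(0.030)         # detector rotation, as quoted
a, b = scale * np.cos(theta), scale * np.sin(theta)
shift = np.array([12.0, -7.0])    # arbitrary pointing offset

R = np.array([[a, -b], [b, a]])
sky = pix @ R.T + shift           # "observed" arcsecond offsets

# build the interleaved design matrix and solve for (a, b, x0, y0)
n = len(pix)
M = np.zeros((2 * n, 4))
M[0::2, 0] = pix[:, 0]
M[0::2, 1] = -pix[:, 1]
M[0::2, 2] = 1.0
M[1::2, 0] = pix[:, 1]
M[1::2, 1] = pix[:, 0]
M[1::2, 3] = 1.0
(af, bf, x0, y0), *_ = np.linalg.lstsq(M, sky.reshape(-1), rcond=None)

fitted_scale = float(np.hypot(af, bf))                 # arcsec/pixel
fitted_rot = float(np.rad2deg(np.arctan2(bf, af)))     # degrees
```

Matching many such fields against the ACR catalogue positions is what pins down the plate scale and rotation to the precision quoted above.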
The astrometric calibration data show there is no significant astrometric distortion in the SOFI Large Field optics: the plate scale in the field corners is the same as that in the field center to within 0.1%, so the SOFI field can be considered astrometrically flat at that level. ![image](parallax1.eps){width="150mm"} ![image](parallax2.eps){width="150mm"} [@lrrrrr@]{} Object[^4] & N$_{f,n,s}$ & $\pi_{r}$ & $\mu_{r}$ & $\theta_{r}$ & V$_{tan}$\
& & (mas) & (mas/yr) & (deg) & (km/s)\
2M0559 & 71,5,11 & (96.9)[^5] & 677.4$\pm$2.5 & 122.6$\pm$0.1 & 33.1$\pm$0.1\
S1021 & 112,7,6 & 34.4$\pm$4.6 & 183.2$\pm$3.4 & 248.8$\pm$1.0 & 25.2$\pm$2.4\
2M1047 & 70,7,4 & 110.8$\pm$6.6 & 1698.9$\pm$2.5 & 256.4$\pm$0.1 & 72.7$\pm$4.4\
2M1217 & 143,10,4 & 90.8$\pm$2.2 & 1057.1$\pm$1.7 & 274.1$\pm$0.1 & 55.2$\pm$1.4\
2M1225AB & 128,12,7 & 75.1$\pm$2.5 & 736.8$\pm$2.9 & 148.5$\pm$0.1 & 46.5$\pm$1.7\
S1254 & 104,7,6 & 73.2$\pm$1.9 & 491.0$\pm$2.5 & 284.7$\pm$0.1 & 31.8$\pm$1.0\
2M1346 & 118,9,5 & 68.3$\pm$2.3 & 516.0$\pm$3.3 & 257.2$\pm$0.2 & 35.8$\pm$1.4\
2M1534AB & 140,11,8 & 73.6$\pm$1.2 & 268.8$\pm$1.9 & 159.1$\pm$0.1 & 17.3$\pm$0.4\
2M1546 & 150,10,9 & 88.0$\pm$1.9 & 225.4$\pm$2.2 & 32.5$\pm$0.6 & 12.1$\pm$0.4\
S1624 & 152,11,8 & 90.9$\pm$1.2 & 373.0$\pm$1.6 & 268.6$\pm$0.3 & 19.5$\pm$0.3\
Results for T dwarfs ==================== Astrometric solutions for our T dwarf targets were evaluated in a manner identical to that used by @t95.
Briefly, the procedure is to transform (using a linear transformation with rotation and a scale factor) all the frames for a given object, onto a chosen master frame of good seeing (known to have the detector rows and columns aligned with the cardinal directions to within $\pm 0.1$ deg), using a set of well exposed reference stars which were required to appear in every frame; differential colour refraction (DCR – see @t95) coefficients were then evaluated for each of these reference stars (relative to the unknown ‘mean’ DCR coefficient for the reference star set), and the reference frame corrected for DCR; each frame was then re-transformed onto the ‘master’ frame; the DCR coefficient for the program object (relative to the DCR-corrected reference frame) was then evaluated; the program object was DCR corrected; and finally, an astrometric solution in parallax and proper motion was made independently for both the $\alpha$ and $\delta$ directions, using a linear weighted-least-squares fit. Uncertainties arising from the DCR correction and the residuals about the reference frame transformation for each frame were carried through to this solution fit, so observations taken in poor seeing or with poor signal-to-noise due to cloud automatically receive low weight. The final parallax was taken to be the weighted mean of the $\alpha$ and $\delta$ solutions. Finding charts for our target stars (taken from our SOFI data) showing both the target and reference stars adopted can be seen in Fig. \[charts\]. The resulting relative astrometry is presented in Table \[results\], the columns of which show: the number of frames (N$_f$), nights (N$_n$) and reference stars (N$_s$) used in each solution; the parallax and proper motion solutions (relative to the background reference stars chosen); and the derived tangential velocity for each target (based on the measured parallax, except for 2MASS0559, for which we adopt the distance of @da2002). Plots of these fits are shown in Fig.
\[plots\]. The reference stars used to obtain this relative astrometry are typically within $\pm$1mag. of the apparent magnitude of our target T dwarf. At these magnitudes (J=15-18) the reference stars will most commonly be G- to early M type stars at distances of 500-2000pc. Thus although we do not have the photometry available to estimate detailed corrections from relative to absolute parallax, we can estimate with some confidence that such corrections will generally be less than 1mas in size, and so not significant in comparison to our random astrometric uncertainties. It is instructive to examine the root-mean-square residuals obtained for the reference frame stars in our astrometry, since they tell us how precise we can expect the astrometry of our target objects to be. Over the course of our program we found that for a single 120s exposure, the median value of this rms residual was 0.042pixel or 12.1mas, with 80% of observations having an rms residual between 6.9 and 20.2mas. Recall that at each epoch we acquired 8 such observations in a total exposure time of 960s, which would suggest the median precision from a single epoch is $12.1/\sqrt{8} = 4.3$mas. Residuals within these groups of eight were somewhat correlated (presumably because they are largely acquired in similar seeing conditions), so the effective per-epoch precision will be somewhat poorer than this simple estimate. The USNO have published astrometry for three T dwarfs: 2MASS0559, SDSS1254 & SDSS1624 [@da2002]. While all three were included in our program, insufficient epochs were obtained for 2MASS0559 to measure a parallax. The equivalent relative parallax solution quantities for those we obtained (Table \[results\]), are given by Dahn et al. as: for SDSS1254-01: 84.1$\pm$1.9mas, 496.1$\pm$1.8mas/y, 285.2$\pm$0.4 deg, and for SDSS1624+00: 90.7$\pm$2.3mas, 383.2$\pm$1.9mas/y, 269.6$\pm$0.5 deg.
These independent observations and solutions agree within uncertainties for almost all parameters – the exception being the parallax for SDSS1254, for which the two solutions are different by about 5-$\sigma$, though Dahn et al. do comment that with only 1.2 years of data on this target their solution is only considered to be preliminary. Finally we note that with only 3 epochs of observation per year over two-plus years, there is always the possibility that systematic errors on individual runs may have impacted on our results. For example, a major change in SOFI’s astrometric distortion or a decollimation between the telescope and SOFI on a single run could systematically affect our results. The only way to detect such problems is by detecting a poor match between our astrometric model and the data we obtain, which is difficult with less than 6 epochs. We believe the likelihood of this is small because: (1) exactly the same automated telescope image analysis procedures were used to control the NTT’s primary figure throughout every night of every run, making an unusual NTT collimation with SOFI unlikely; (2) SOFI’s astrometric distortion (as we have shown above) is tiny, so changes in it can have only negligible effect; (3) SOFI is a Nasmyth mounted instrument, and so is always mounted horizontally and subject only to rotation about its optical axis, greatly reducing the likelihood of flexure within the instrument; and finally (4) because infrared instruments sit in temperature-controlled dewars they suffer almost none of the temperature-dependent flexure and defocus effects present in optical reimaging systems, and they are also much less prone to being opened and modified over the course of an astrometric program. On-going monitoring and independent observations by other programs are the best way to test for unforeseen systematic errors, and we look forward to checking our results against programs being carried out elsewhere.
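The weighted-least-squares solution described in this section can be illustrated, for a single coordinate, with a minimal sketch. Everything here is synthetic: the epochs and parallax factors are invented, the per-epoch precision is set to the 4.3mas single-epoch estimate above, and the input parallax and proper motion are chosen to resemble S1624's entries in Table \[results\]; the 4.74 conversion factor to tangential velocity is the standard one.

```python
import numpy as np

# One-coordinate sketch of the fit x(t) = x0 + mu*t + pi*P(t), where
# P(t) are parallax factors (projections of the Earth's orbit), here
# replaced by a toy sinusoid. All quantities in arcsec and years.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.1, 7)             # epochs (years)
P = np.sin(2.0 * np.pi * t)              # toy parallax factors
sigma = np.full_like(t, 4.3e-3)          # per-epoch precision (arcsec)

pi_true, mu_true, x0_true = 0.0909, 0.373, 0.0   # S1624-like values
x = x0_true + mu_true * t + pi_true * P + rng.normal(0.0, 1.0e-5, t.size)

# weighted linear least squares: divide both sides by sigma
A = np.column_stack([np.ones_like(t), t, P]) / sigma[:, None]
coef, *_ = np.linalg.lstsq(A, x / sigma, rcond=None)
x0_fit, mu_fit, pi_fit = coef

# tangential velocity: V_tan [km/s] = 4.74 * mu ["/yr] / pi ["]
v_tan = 4.74 * mu_fit / pi_fit
```

In the real solution the same fit is made independently in $\alpha$ and $\delta$, with per-frame weights propagated from the DCR correction and transformation residuals, and the two parallaxes are then combined as a weighted mean.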
Discussion ========== Differential Colour Refraction for T dwarfs {#sec_dcr} ------------------------------------------- An interesting result of the DCR calibrations we performed for our T dwarf targets was the finding that T dwarfs have effective wavelengths in the J-band which are essentially indistinguishable from the ensemble of background reference stars against which their positions are measured. This is shown in Fig. \[dcrhist\] which plots histograms of the DCR coefficients determined for reference stars and programme T dwarfs - the similarities in the ensemble values are clear. (These coefficients were derived using the method described in @t93 and @t96. Typical uncertainties in the individual determinations are $\approx \pm 2-6$mas$/$tan(ZA).) As a result, though we have calibrated and applied DCR corrections to our data, such a procedure is not strictly necessary for near-infrared observations of T dwarfs. These observations, therefore, are [*not*]{} rigidly tied to being carried out near the meridian, which adds enormously to the flexibility and efficiency of infrared parallax programmes. ![image](refraction.ps){width="80mm"} Photometry for L and T dwarfs ----------------------------- Only a few large, systematic photometric databases for late M, L and T dwarfs are currently extant. The first is the photometry from the 2MASS database @2mass, which has the advantage of being a well-established photometric system which covers the whole sky, and includes all of our T dwarf targets, and almost all of the other known L and T dwarfs [@burg_thesis]. Unfortunately, the photometry for these objects in the J and K$_s$ 2MASS bands is often near the 2MASS photometric limits, so typical uncertainties of $\pm$0.1mag. or greater are not uncommon.
Moreover, because 2MASS does not include an optical passband, colour information has to rely on the J–K colour, which is typically small compared to the photometric precision, as well as giving only a small wavelength “lever arm” on the spectral evolution of L and T dwarfs. We make use of absolute M$_J$ and M$_{Ks}$ values on the 2MASS system compiled by @burg_thesis, which are based on the parallaxes presented in @da2002. @da2002 also present optical photometry in the $I_C$ passband, and J,H,K photometry in a photometric system approximating that of the CIT system of @elias1982, as well as data from other work transformed onto this system. A second extensive database is that compiled by @le2002. This includes Z band photometry (on a UKIRT defined photometric system) as well as J,H,K photometry transformed by the authors onto the MKO photometric system (see @le2002, Section 3 for details). Because these data were acquired with a 4m telescope, their photometric precision is much higher than that for 2MASS. Great care should be taken in inter-comparing these two sets of photometry – the systematic differences between the two photometric systems are [*very*]{} significant. This is [*particularly*]{} true of the Z photometric system of @le2002, which is based on a relatively narrow interference filter (0.851-1.056$\mu$m) used with a HgCdTe infrared array, leading to effective wavelengths for L and T dwarfs of $\approx$1.0$\mu$m, unlike the more common optical Z-type observations which are based on long-pass filters ($\ga$0.85$\mu$m) and the declining sensitivity of CCDs at $>$1$\mu$m, leading to effective wavelengths $\approx$0.9$\mu$m.[^6] UKIRT Z photometry should not be assumed to be directly comparable with optically based Z photometry. In the discussion that follows, therefore, we will discuss only features in the absolute magnitudes [*within an individual photometric system*]{}.
For this reason we do not make use of the more heterogeneous J,H,K compilation of @da2002. To these data sets we add observations of the recently announced T dwarf $\epsilon$IndB [@scholz2003], which has a mean I=16.7$\pm$0.1 and 2MASS photometry of J=11.91$\pm$0.04 and Ks=11.21$\pm$0.04 (Burgasser, priv.comm.). Photometric corrections for known binaries {#bincor} ------------------------------------------ ![image](ij_jk.ps){width="100mm"} [@lcccccl@]{} System & Comp. & $\Delta$I$_C$[^7] & $\Delta$J$^a$ & $\Delta$K$^a$ & Sp.T.[^8] & Notes[^9]\
2M0746AB & A & 0.50 & 0.54 & 0.59 & L0.5 & $\Delta$I$_C$,$\Delta$J R01. $\Delta$K Fig.\[ij\_jk\]\
2M0746AB & B & 1.12 & 1.01 & 0.95 & L0.5 & $\Delta$I$_C$,$\Delta$J R01. $\Delta$K Fig.\[ij\_jk\]\
2M1146AB & A & 0.61 & 0.64 & 0.67 & L3 & $\Delta$I$_C$,$\Delta$J R01. $\Delta$K Fig.\[ij\_jk\]\
2M1146AB & B & 0.92 & 0.87 & 0.84 & L3 & $\Delta$I$_C$,$\Delta$J R01. $\Delta$K Fig.\[ij\_jk\]\
2M0850AB & A & 0.29 & 0.55 & 0.26 & L6 & $\Delta$I$_C$,$\Delta$J R01. $\Delta$K Fig.\[ij\_jk\]\
2M0850AB & B & 1.63 & 0.99 & 1.67 & T2: & $\Delta$I$_C$ R01. $\Delta$J Fig.\[cmd\_iz\]. $\Delta$K Fig.\[ij\_jk\]\
D0205AB & A & 0.75 & 0.75 & 0.75 & L7 & Assumed equal mass binary K99,L01\
D0205AB & B & 0.75 & 0.75 & 0.75 & L7 & Assumed equal mass binary K99,L01\
D1228AB & A & 0.54 & 0.66 & 0.75 & L5 & $\Delta$J M99, $\Delta$K K99, $\Delta$I$_C$ Fig.\[ij\_jk\]\
D1228AB & B & 1.02 & 0.86 & 0.75 & L5 & $\Delta$J M99, $\Delta$K K99, $\Delta$I$_C$ Fig.\[ij\_jk\]\
2M1225AB & A & 0.24 & 0.28 & 0.16 & T6 & $\Delta$I$_C$,$\Delta$J B03, $\Delta$K Fig.\[ij\_jk\]\
2M1225AB & B & 1.76 & 1.63 & 2.15 & T8 & $\Delta$I$_C$,$\Delta$J B03, $\Delta$K Fig.\[ij\_jk\]\
2M1534AB & A & 0.53 & 0.75 & 0.75 & T5.5 & $\Delta$I$_C$,$\Delta$J B03, $\Delta$K assumed 0.75\
2M1534AB & B & 1.03 & 0.75 & 0.75 & T5.5 & $\Delta$I$_C$ B03, $\Delta$J,$\Delta$K assumed 0.75\
Several of the systems published in the photometric compilations listed above are known to be binaries, having been resolved either from the ground [@ko1999; @le2001], or using HST [@mbb1999; @reid2001; @bu2003a].
Unfortunately, not all these systems have measured magnitude differences in all the passbands of interest, so we are forced to estimate magnitudes for the A and B components of these systems based on available colour-colour relationship data. [*In some cases (especially 2M0850B) these extrapolations are large, and the decomposed magnitudes should be treated as indicative only.*]{} Table \[binaries\] shows the magnitude differences between each component and the [*total*]{} magnitude of each system, along with estimated spectral types from @bu2003a. Because of the similarity between the effective wavelengths of the HgCdTe-based Z and J bands, we assume that the magnitude differences in Z are the same as those derived at J. We do not differentiate here between UKIRT and 2MASS J and K bands. [*2M0746AB, 2M1146AB & 2M0850AB*]{} were observed by @reid2001 in the HST F814W filter from which magnitude differences for the components were derived in the I$_C$ passbands. Infrared J-band magnitude differences were then estimated using the L dwarf sequence of the M$_{I}$ versus I$_C$–J colour-magnitude diagram (which has a roughly constant slope). With the exception of 2M0850B, this procedure will be adequate for all the L dwarfs, and those values are shown in Table \[binaries\]. 2M0850B is an exception because its absolute magnitude at I$_C$ is so faint that it must be an early- to mid-T dwarf, rather than an L dwarf. And, as we show in Section \[cmds\], the colour-magnitude diagram is not even remotely linear across the L-T dwarf transition. For 2M0850B, therefore, we have used the M$_I$ for the AB system of [@da2002], and the magnitude differences of @reid2001 to derive for the B component M$_I$ = 20.02$\pm$0.23. The colour-magnitude diagrams in Section \[cmds\] then imply I$_C$–J $\approx$ 5.1$\pm$0.2, from which we derive the B component J magnitude difference shown in the Table. 
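The arithmetic behind the Table \[binaries\] entries follows directly from summing the component fluxes: for a component magnitude difference dm = m$_B$ $-$ m$_A$, each component's offset from the combined magnitude is fixed. A short sketch (the function name is ours, not from the compilations):

```python
import math

def component_offsets(dm):
    """Offsets (m_A - m_AB, m_B - m_AB) of each component from the
    combined magnitude of an unresolved pair, for a component
    difference dm = m_B - m_A.  Follows from f_A + f_B = f_AB."""
    ratio = 10.0 ** (-0.4 * dm)            # flux ratio f_B / f_A
    d_a = 2.5 * math.log10(1.0 + ratio)    # A is fainter than the blend
    return d_a, d_a + dm

# equal-brightness pair: each component is 2.5*log10(2) = 0.753 mag
# fainter than the combined light, the 0.75 adopted in the table
da_eq, db_eq = component_offsets(0.0)

# dm = 1.52 reproduces the ~0.24 / 1.76 Delta-I entries for 2M1225AB
da_t, db_t = component_offsets(1.52)
```

This is why all the "assumed equal mass binary" rows carry identical 0.75mag offsets for both components.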
To derive K-band magnitudes for the components of these systems, we have plotted J–Ks versus I$_C$–J for all the L and T dwarfs in @burg_thesis and @da2002 in Figure \[ij\_jk\]. The data reveal two separate sequences – the L dwarfs in which J–Ks becomes redder along with I$_C$–J, and the T dwarfs in which the reverse holds. The two lines on the plot are linear fits to these two regimes (arbitrarily divided at I$_C$–J=4.4). From these relations we predict J–Ks colours for L and T dwarfs from their I–J colours, and so derive the magnitude differences for each component in Table \[binaries\]. [*D0205AB & D1228AB*]{} were both observed by @ko1999 in the K band at Keck, and D0205AB was independently observed at UKIRT by @le2001. D1228 was also observed in the J band with HST by @mbb1999. D0205 was found to be a pair of objects with equal brightness at K, and in the absence of any other information we assume it to be an equal mass binary. D1228 is a nearly equal mass binary – from the marginal J–K colour difference between the two components we can extrapolate to a magnitude difference between the components at I of 0.48. [*2M1225AB, 2M1534AB*]{} have been observed by @bu2003a with HST in the F814W and F1042M filters (the latter enabling the derivation of approximate J magnitude differences for the systems). Once again we use the I–J colours of these objects to extrapolate to K magnitude differences for 2M1225AB’s components. For 2M1534AB the magnitude differences estimated at F1042 are only marginally different from zero, so we assume equal brightnesses in this system at J and K. 
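The two-regime prediction used above can be sketched as follows. The break at I$_C$–J=4.4 is the division quoted in the text, but the slopes, intercepts and data points below are invented for illustration; the actual fits of Fig. \[ij\_jk\] use the @burg_thesis and @da2002 photometry.

```python
import numpy as np

# Two separate linear fits to J-Ks versus I_C-J, divided at 4.4:
# L dwarfs get redder in J-Ks with I_C-J, T dwarfs get bluer.
BREAK = 4.4
rng = np.random.default_rng(3)

ij = np.concatenate([rng.uniform(3.0, 4.4, 20),    # L-dwarf regime
                     rng.uniform(4.4, 5.6, 20)])   # T-dwarf regime
# toy colours with the qualitative behaviour described in the text
jk = np.where(ij < BREAK, 0.8 * (ij - 3.0) + 1.0,
                          -1.2 * (ij - BREAK) + 2.1)
jk = jk + rng.normal(0.0, 0.02, ij.size)

l_fit = np.polyfit(ij[ij < BREAK], jk[ij < BREAK], 1)
t_fit = np.polyfit(ij[ij >= BREAK], jk[ij >= BREAK], 1)

def predict_jk(ij_colour):
    """Predict J-Ks from I_C-J using the regime-appropriate fit."""
    fit = l_fit if ij_colour < BREAK else t_fit
    return float(np.polyval(fit, ij_colour))
```

Predicting J–Ks from I–J in this way, and hence K magnitude differences from the I and J differences, is how the $\Delta$K entries marked "Fig.\[ij\_jk\]" in Table \[binaries\] were derived.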
Spectral-Type-Magnitude relations for L and T dwarfs ---------------------------------------------------- [@llccccccccccl@]{} Object & T[^10] & 2MASS & & UKIRT & & & 2MASS & & UKIRT & &\
& & J & K$_s$ & J & K & Z-J & M$_J$ & M$_{Ks}$ & M$_{Z}$ & M$_J$ & M$_K$ &\
SD1021 & T3 & 16.26$\pm$0.10 & 15.10$\pm$0.18 & 15.88 & 15.26 & 1.78 & 13.94$\pm$0.29 & 12.78$\pm$0.33 & 15.34$\pm$0.27 & 13.56$\pm$0.27 & 12.94$\pm$0.27 &\
2M1047 & T6.5 & 15.82$\pm$0.06 & 16.30$\pm$0.30: & 15.46 & 16.20 & 1.93 & 16.05$\pm$0.14 & 16.52$\pm$0.33 & 17.61$\pm$0.13 & 15.68$\pm$0.13 & 16.42$\pm$0.13 &\
2M1217 & T7.5 & 15.85$\pm$0.07 & 15.90$\pm$0.30: & 15.56 & 15.92 & 2.00 & 15.64$\pm$0.09 & 15.69$\pm$0.30 & 17.35$\pm$0.06 & 15.35$\pm$0.06 & 15.71$\pm$0.06 &\
2M1225 & T6 & 15.22$\pm$0.05 & 15.06$\pm$0.15 & 14.88 & 15.28 & 1.89 & 14.60$\pm$0.09 & 14.44$\pm$0.17 & 16.15$\pm$0.08 & 14.26$\pm$0.08 & 14.66$\pm$0.08 &\
SD1254 & T2 & 14.88$\pm$0.04 & 13.83$\pm$0.06 & 14.66 & 13.84 & 1.74 & 14.20$\pm$0.07 & 13.15$\pm$0.08 & 15.72$\pm$0.06 & 13.98$\pm$0.06 & 13.16$\pm$0.06 &\
2M1346 & T6 & 15.86$\pm$0.08 & 15.80$\pm$0.30: & 15.49 & 15.73 & 2.24 & 15.03$\pm$0.11 & 14.97$\pm$0.31 & 16.90$\pm$0.08 & 14.66$\pm$0.09 & 14.90$\pm$0.08 &\
2M1534 & T5.5 & 14.90$\pm$0.04 & 14.86$\pm$0.11 & 14.60 & 14.91 & & 14.24$\pm$0.05 & 14.19$\pm$0.12 & & 13.94$\pm$0.05 & 14.25$\pm$0.05 &\
2M1546 & T5.5 & 15.60$\pm$0.05 & 15.42$\pm$0.17 & & & & 15.32$\pm$0.07 & 15.14$\pm$0.18 & & & &\
SD1624 & T6 & 15.49$\pm$0.06 & 15.40$\pm$0.30: & 15.20 & 15.61 & 2.12 & 15.28$\pm$0.07 & 15.19$\pm$0.30 & 17.11$\pm$0.04 & 14.99$\pm$0.04 & 15.40$\pm$0.06 &\
![image](tplx_spt_abs.ps){width="140mm"} Table \[absolute\] lists @burg_thesis, @da2002 and @le2002 photometry for our NTT parallax sample, along with the resulting absolute magnitudes in these systems. Also listed are spectral types on the scheme of @bu2002a. Figure \[abs\_spt\] shows plots of spectral type against M$_Z$, M$_I$, M$_J$ and M$_{Ks}$/M$_K$.
Also shown are absolute magnitudes for late M and L dwarfs using parallaxes and 2MASS photometry from @da2002 for the 2MASS panels, and parallaxes from @da2002 and UKIRT photometry from @le2002 for the UKIRT panels. The spectral types are on the system of @ki1999 for the M and L dwarfs, and @bu2002a for the T dwarfs. Known multiple systems are noted with circles, and decomposed into their component magnitudes as discussed above. The two K-band plots (Fig. \[abs\_spt\]a and \[abs\_spt\]b) indicate that in both systems, the L-T transition is marked by a steepening of the spectral-type-magnitude relation. In general, however, the relationship between absolute magnitude at K and spectral type is well behaved for the purpose of estimating absolute magnitudes from spectral types. This is certainly [*not*]{} true in the 2MASS and UKIRT J passbands (Fig. \[abs\_spt\]c and \[abs\_spt\]d). Indeed, both sets of data indicate a strong inflection (a “hump”) in the relationship between absolute magnitude and spectral type for early T dwarfs – as a class, the T0-T4 brown dwarfs have absolute magnitudes [*brighter*]{} than the latest L dwarfs by a magnitude or more. Put another way, a simple extrapolation of the spectral-type-magnitude relationship for L dwarfs (eg. that from @da2002 shown in the figure) underestimates the brightness of the early- to mid-T dwarfs by up to two magnitudes. This “early T hump” has been noted previously [@da2002], though on the basis of fewer T dwarf parallaxes. It has been suggested [@burg_thesis] that binarity could be the cause of early T dwarfs being more luminous than the late L dwarfs.
While it is certainly true that the L and T dwarfs which have been resolved as binaries are displaced to apparently high absolute magnitudes when plotted as unresolved objects, the addition of new parallaxes would seem to indicate the over-luminosity of early T dwarfs is a general property, rather than being due to the selection of objects which happen to be binaries. Moreover, the magnitude or more of over-luminosity is too large an effect to be due to equal-mass binarity, which can produce a brightening of only 0.75mag. A similar (though possibly less pronounced) inflection is seen in the M$_Z$ relation (Fig. \[abs\_spt\]f), while the M$_I$ relation (Fig. \[abs\_spt\]e) would appear to be almost as monotonic as that at K, though with a more pronounced inflection at the L-T boundary. Having said this, however, 2M0559 continues to appear to be over-luminous compared to the other early- to mid-T dwarfs in the figure. @bu2003a failed to resolve a binary companion in this system with HST, implying that if it is a binary it must have a separation of less than 0.5a.u. We also note that it has been suggested [@tsuji2003] that the selection of preferentially young objects could produce the “early T hump” – we discuss this further in Section \[sec\_hump\]. There are good physical reasons for expecting a monotonic relationship between effective temperature (T$_{\mathrm eff}$) and luminosity (L) in these objects, since these quantities are directly determined by interior (rather than photospheric) properties. However, it must be remembered that as proxies for T$_{\mathrm eff}$ and luminosity, absolute magnitude in a given passband and spectral type are far from perfect. Spectral typing is in essence an arbitrary allocation of a quantity to an object based on what its spectrum looks like – there is no guarantee that the relationship between spectral type and T$_{\mathrm eff}$ (even if monotonic) should not have significant changes in slope.
Similarly, the relationship between absolute magnitude in a given passband and luminosity is even more problematic. From the spectra of objects ranging from L to T spectral types, and indeed from their J–K colours [@da2002], we know that [*significant*]{} changes take place in their photospheres. There is significant redistribution of flux in the spectra of brown dwarfs across the L-T transition. We should not be surprised if this results in the relationship between luminosity and absolute magnitude in a given passband not only containing changes in slope, but not even being monotonic. Given our current parallax database, spectral type is a very poor proxy for absolute magnitude in the Z and J bands from mid-L to mid-T spectral types. The sequences in Fig.\[abs\_spt\] will need to be filled in by many more L and T dwarfs before precise absolute magnitudes can be estimated from spectral types with confidence. Colour Magnitude Diagrams for L and T dwarfs {#cmds} -------------------------------------------- ![image](tplx_izjk_1.ps){width="170mm"} ![image](tplx_izjk_2.ps){width="170mm"} ![image](tplx_jk.ps){width="170mm"} Using the same photometry, we can construct a variety of colour-magnitude diagrams. Figure \[cmd\_iz\] shows such diagrams based around Cousins I$_c$, UKIRT Z and both UKIRT and 2MASS J,K photometry, while Figure \[cmd\_jk\] shows similar diagrams for UKIRT and 2MASS J–K colours. The most noticeable feature of these diagrams is how few are actually [*useful*]{} as traditional colour-magnitude diagrams – almost none show the simple monotonic relationships between absolute magnitude and colour which hold for stars and brown dwarfs down to the early L dwarfs. Fig \[cmd\_iz\]c,d show that I–K colours jump to the blue by I–K$\approx$0.5 mag as the L-T transition is crossed at M$_I$$\approx$19, M$_K$$\approx$13, but then tend redward again for later and later T dwarfs.
However, Gl570D, one of the latest and faintest T dwarfs currently known, never becomes as red as the latest L dwarfs. This blueward jump is particularly pronounced at M$_I$ where the absolute magnitudes of L8 and early T dwarfs are indistinguishable. As a result I–K should be considered a poor indicator for determining the absolute magnitude or effective temperature of late L to late T dwarfs. In particular, any luminosity function based on I–K$\ga$5 will be subject to serious biases which will introduce completely spurious structure into the luminosity function. Fig \[cmd\_iz\]a,b shows that I–J colour-magnitude diagrams can be considered the “best of a bad bunch” when it comes to the traditional use of colour-magnitude diagrams (i.e. estimating absolute magnitudes from photometric colours), since the cooling curves of brown dwarfs do not reverse in I–J as they do in every other panel of Figs \[cmd\_iz\] and \[cmd\_jk\]. Even so, between I–J=4 and I–J=5 they show the same pronounced “S-curve” seen in the spectral type data of Fig. \[abs\_spt\], with early T dwarfs being up to a magnitude brighter in M$_J$ than late L dwarfs. And, once again, we see that 2M0559 appears anomalously bright, suggesting binarity in spite of @bu2003a’s failure to resolve it with HST. The Z–J colour-magnitude diagrams (Fig.\[cmd\_iz\]e,f) reveal a very steep colour-magnitude relation, with scatter which is significantly larger than the photometric errors. The slope of the colour-magnitude relation is so steep that no meaningful estimate of M$_Z$ or M$_J$ can be derived from a Z–J colour. This is not surprising, given the very close effective wavelengths of HgCdTe-based Z and J photometry. There is some evidence of a trend at the bottom of this colour-magnitude diagram: at M$_Z \ga 16.5$ and M$_{Ks} \ga 14.5$, Z–J colours become [*bluer*]{} for fainter and later-type objects.
Colour magnitude diagrams involving Z–K (Figs \[cmd\_iz\]g-h) and J–K (Figs \[cmd\_jk\]a-d) show an especially pronounced reversal of the brown dwarf cooling curves beyond M$_K \approx 12.5$, M$_J \approx 14$ and M$_Z \approx 16$. This has been noted in J–K colour-magnitude diagrams by several authors (e.g. @bu2002b). Major changes take place in photospheres below the L-T transition, with the result that T dwarfs swap from very red, to very blue, J–K colours. It is interesting that the Z–K diagrams show almost identical behaviour, with Z–K colours for the very faintest T dwarfs becoming as blue as Z–K$\approx$-1. This compares with colours based on the SDSS $z^\prime$ filter (see eg. @da2002 Fig. 3) which continue to become redder for the latest T dwarfs. This once again clearly demonstrates the considerable difference between CCD-based $z^\prime$, and the HgCdTe-based Z. There is a clear warning to astronomers implicit in these diagrams – conclusions reached about luminosity- and mass-functions based on luminosity and/or colours for the L-T effective temperature range are fraught with difficulty. In particular, luminosity functions determined from the colours of objects in field samples will produce completely spurious features in the derived luminosity- and mass-functions, unless the various “bumps and wiggles” in these diagrams are adequately and correctly modeled. (See for example @rg1997’s demonstration of the formation of a “false” peak in the M dwarf luminosity function based on the traditional – and inadequate – parametrization of the M dwarf colour-magnitude relation). Similarly, determining bolometric luminosity functions from apparent magnitudes in cluster-based samples is problematic, as we can expect similar “bumps and wiggles” to be present in the bolometric correction relations for the L and T dwarfs. Features like these can introduce significant [*systematic*]{} biases into the mass functions derived from even a perfect statistical sample.
[*Actual*]{} statistical data with all the added complexities of uncertain age and binarity distributions add yet more complications. Monte Carlo simulations are essential to the interpretation of any luminosity- or mass-function in the L-T effective temperature range. It is important to carefully “reverse” model such functions from sets of mass-function models, through a variety of possible colour-magnitude and bolometric-correction relations (as allowed by the extant data), to sample observational data. Such artificial data can then be meaningfully compared to statistical samples [*in the observational plane*]{}. Mass- or luminosity functions which do [*not*]{} include such extensive reverse modeling should be treated with the utmost suspicion. Theoretical Models for L and T dwarfs ------------------------------------- Ultra-cool dwarfs are notoriously difficult to model – the components which need to be included in models for L and T dwarfs include [@all1997]: the effects of tens of millions of molecular transitions in species including H$_2$O, CH$_4$, TiO, VO, CrH, FeH, and a host of others; complex treatments of the line wings of the neutral alkali lines (K, Na, Rb and Cs) enormously pressure-broadened by H$_2$ and He; collision-induced molecular H$_2$ opacity; both the chemistry and opacity involved in the condensation, settling, revapourisation and diffusion of a variety of condensates including liquid Fe, solid VO, and a range of aluminium, calcium, magnesium and titanium bearing refractories; and finally (and least readily modeled of all) the effects of rotation-induced weather on the cloud decks which condensates will form. Significant progress has been made in recent years on the detailed solution of photospheric models using very large line lists (see @all1997 for a review). Probably the largest outstanding problem for modelers of L and T dwarfs is dealing with condensation.
Three approximations to this complex situation have currently been implemented. “Dusty” models (eg. the DUSTY model of @all2001) assume condensates remain well suspended and in chemical equilibrium where they form in the photosphere. In general such models have been shown to work reasonably well for L dwarfs, suggesting that their cloud layers lie within their photospheres. “Condensation” models (eg. the COND models of @ba2003, and the CLEAR models of @bu1997) neglect dust opacities, to simulate the removal of all condensates from the photosphere as they form (most likely through gravitational settling). The “CLOUDY” models of @am2001 and @marley2002 incorporate a model for condensate cloud formation, based on an assumed sedimentation efficiency parameter $f_{rain}$. ### Colour-magnitude diagrams in J–K and Z–K Figures \[cmd\_iz\] and \[cmd\_jk\] have over-plotted on them a variety of these models, including the DUSTY [@chab2000; @all2001] and COND models [@ba2003] for an age of 1Gyr, and the CLEAR and CLOUDY models as presented in @bu2002b for $f_{rain}=3$ (determined as the best fit for this model in Jupiter’s ammonia cloud deck @marley2002). As previous studies have shown, DUSTY models reproduce the general features (if not the precise colours) of the cooling curves for L dwarfs, but then proceed to much redder colours than are observed beyond L8. This has been interpreted as indicating that condensates are present in the photosphere of L dwarfs. The COND and CLEAR models reproduce the general features of the cooling curves for late T dwarfs, indicating that at these effective temperatures condensate opacities do not contribute to the radiative transfer, which suggests that the condensate layers have dropped below the photosphere. DUSTY and CLEAR/COND models, therefore, describe the “boundary conditions” to the condensate opacity problem, and are appropriate for the L and late T types respectively. 
But what about the intermediate case which must be appropriate to early T dwarfs? This is exactly the situation which the sedimentation models of @marley2002 should be able to address. @bu2002b compared their CLOUDY models for $f_{rain}=3$ with a 2MASS M$_J$:J-K colour-magnitude diagram (as we do in Fig. \[cmd\_jk\]). As for the DUSTY models, the CLOUDY models predict the general behaviour of L dwarfs, and then veer towards bluer J–K colours at late T dwarf temperatures. However, this transition does not match the observed sequence, which transitions nearly horizontally between the L dwarf/DUSTY/CLOUDY sequence and the late-T dwarf/CLEAR/COND model at M$_J$$\approx$14, M$_{Ks}$$\approx$13. (We note that though the equivalent models are not available in the UKIRT Z,J,K bandpasses, very similar behaviour is seen in Fig. \[cmd\_iz\], with a clear transition between the L dwarf and late T dwarf sequences.) @bu2002b suggested that a possible resolution for this discrepancy could be the appearance of uneven cloud cover on the surface of early T dwarfs. This would allow the emergent spectrum to appear as a “mixture” of the CLEAR/COND and CLOUDY spectra. They modeled this by interpolating between their CLEAR and CLOUDY models at effective temperatures of 800K, 1000K, 1200K, 1400K, 1600K and 1800K with varying fractions of the two models (ie. 20%, 40%, 60% & 80%). The tracks for these “mixture” models are shown in Fig. \[cmd\_jk\] as dotted lines, and suggest that there is a transition sequence between L and T dwarfs at T$_{\mathrm eff}$ $\approx$ 1300K. SDSS1021, SDSS1254, 2M1225, $\epsilon$IndB and possibly 2M0850B (though with some uncertainty because of the poor quality of its decomposed secondary flux) fill out this transition region. The status of 2M0559 is unclear. If it is a single object, then it probably represents the ‘top’ of the late T dwarf cooling sequence, which is $>$1 mag. brighter in M$_J$ than the bottom of the L dwarf sequence.
If however, it is a binary, then the prototype for the ‘top’ of the late T dwarf cooling sequence is probably more like the object 2M1225A or 2M1346 at a spectral type of T5.5-T6. The “transition temperature” indicated by the additional T dwarfs in this work is slightly warmer ($\approx$1300K) than that found by @bu2002b. An alternative dust model to the sedimentation models of Marley et al. has been developed by @tsuji2003. These “Unified Cloudy Models” (UCM) are built around a single thin dust layer in which particles of size greater than a critical radius are removed from the photosphere by sedimentation. This critical radius is parametrized by a critical temperature $T_{cr}$ below which dust particles sediment, which is determined by comparing model results to colour-magnitude diagrams. This approach has the advantage of predicting the gross behaviour of brown dwarfs as they transition from L to T spectral types with a single model. Unfortunately [@tsuji2003 Fig.2], the detailed behaviour of the models does not match observations. In particular, the UCM cannot make L dwarfs as red or as faint in M$_J$:J–K as they actually appear. Nor does it predict the observed brightening of the “early T dwarf hump” other than as an age-selection effect, which, we conclude below, is not the case. It should be noted, however, that the interpretation of Marley et al. models and data in Fig. \[cmd\_jk\] in terms of cloud openings (i.e. as providing evidence for the existence of weather in early T dwarfs) is quite dependent on the details present in the @marley2002 models. An independent test of this conclusion is clearly desirable. Fortunately, Fig. \[cmd\_jk\] indicates that J and K band time-series photometry can provide that test. The location of a given object on the “transition sequence” will depend critically on its fractional cloud cover.
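The CLEAR/CLOUDY “mixture” tracks discussed above cannot be computed by averaging magnitudes directly: a fractional cloud cover mixes [*fluxes*]{}. A minimal sketch of that combination (the input magnitudes below are hypothetical placeholders, not values from the @bu2002b model grid):

```python
import math

def mix_magnitudes(m_clear, m_cloudy, f_clear):
    """Magnitude of a partly-cloudy photosphere: a flux-weighted mean of
    clear-sky and cloudy-sky model magnitudes (fractions weight fluxes, not mags)."""
    flux = f_clear * 10 ** (-0.4 * m_clear) + (1.0 - f_clear) * 10 ** (-0.4 * m_cloudy)
    return -2.5 * math.log10(flux)

# A 50/50 mix of hypothetical clear (14.0) and cloudy (15.0) magnitudes:
print(round(mix_magnitudes(14.0, 15.0, 0.5), 2))  # 14.39
```

Setting the clear fraction to 0 or 1 recovers the pure CLOUDY or CLEAR model, so the mixture tracks necessarily connect the two sequences.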
Because this could be expected to change as each brown dwarf rotates, a statistical study of the J-band variability from late L dwarfs to late T dwarfs should find stronger variability in early T dwarfs than in late L dwarfs or late T dwarfs. Finally, we note that although the COND models do not do a very good job of predicting the [*absolute*]{} colours of late T dwarfs in Z–J and Z–K (Fig. \[cmd\_iz\]e-h), they do suggest a trend for late T dwarfs to become bluer in both Z–J and Z–K as they get colder and fainter than M$_J$$\approx$14.5 and M$_K$$\approx$14.75. Moreover, the available data suggest this trend is real, though the absolute colours of T dwarfs at these magnitudes are somewhat redder than the models would predict. ### The “Early T Hump” {#sec_hump} ![image](tplx_izjk_age.ps){width="170mm"} The M$_J$:I$_C$–J colour-magnitude diagrams shown in Fig. \[cmd\_iz\]a-b indicate a remarkable brightening at M$_J$ for the observed early T dwarfs. Unfortunately, neither the COND nor the DUSTY models indicate why this should be so. The DUSTY models predict an extension of the L dwarf sequence, which we have good reason to believe is not correct, based on the analysis of colour-magnitude diagrams in J–K above. Unfortunately, the COND models [*also*]{} fail to look even remotely like the available data for T dwarfs in Fig. \[cmd\_iz\]a-d. Shortcomings in these models at short wavelengths have been noted by @ba2003; these are thought to be due to an inadequate treatment of the extremely broad wings of the K and Na lines at these wavelengths. One possible interpretation of the “early T hump” in Fig. \[cmd\_iz\]a-b is that it could be a gravity effect [@tsuji2003]. Very young brown dwarfs will have isochrones slightly offset to brighter magnitudes than older brown dwarfs, because of their lower gravities. This effect is particularly pronounced in photospheres in which dust is an important opacity source.
It is possible then that the “early T hump” could be produced by the preferential selection of young, bright brown dwarfs. Fig. \[cmd\_age\] plots the same data as that shown in Fig. \[cmd\_iz\]a-b, but now we plot four isochrones spanning 50Myr-5Gyr to examine the effects of age. The figure shows that, as expected, the DUSTY models (most appropriate for L dwarfs) show significant offsets in their isochrones of a magnitude or more between 50Myr and 5Gyr. These offsets are not as marked for the COND models. Unfortunately, interpreting the “early T hump” as an age effect is severely complicated by the fact that it occurs at [*exactly*]{} the point where there is good evidence to believe neither the DUSTY nor the COND models are working. For the L dwarfs and late T dwarfs, the spread in the colour-magnitude diagram is not pronounced (particularly when known binaries are decomposed), suggestive of the small 100Myr – 1Gyr age spread seen in other studies of L and T dwarfs [@da2002; @scholz2003]. It is certainly nowhere near as pronounced as the age spread that would be required to account for the more than one magnitude brightening of the “early T hump” all by itself. Moreover, there is a definite spectral type trend [*along*]{} the track represented by the “early T hump”, as seen in the spectral-type-magnitude diagrams of Fig. \[abs\_spt\] – from the late L dwarfs, through $\epsilon$IndB, SD1254, SD1021 to 2M0559. This same trend is seen in the colour-magnitude diagrams. We interpret this as indicating that the “early T hump” truly is a feature in the cooling curve of brown dwarfs, rather than an artifact of youth and selection. The Onset of CH$_4$ absorption in Clusters {#ch4} ------------------------------------------ ![image](tplx_h.ps){width="170mm"} Methane filters centered on the strong CH$_4$ absorption bands in the H-band have been acquired by a number of observatories for use in their infrared cameras.
Given we have now measured just where, in absolute magnitude, the T dwarf class occurs, the question arises, “At what magnitudes will CH$_4$ absorption in young star clusters set in?” Fig. \[cmd\_h\] shows UKIRT M$_J$ and M$_H$:J–K colour-magnitude diagrams, along with the DUSTY and COND models at ages from 10Myr to 5Gyr. Based on these diagrams, we can conclude that for field T dwarfs, as discovered by the 2MASS and SDSS surveys, CH$_4$ absorption (corresponding to spectral classes around T2 and later) sets in at M$_H$$\approx$13, and somewhat more confusingly at M$_J$$\sim$14 – though because of the brightening of the “early T hump” at J, non-CH$_4$-absorbing L dwarfs will actually be fainter than the earliest CH$_4$ absorbing T dwarfs. Because the turn on of CH$_4$ absorption is primarily an effect driven by effective temperature, to [*first order*]{} it will occur at the same absolute magnitude in young clusters as it does in the field. Looked at in slightly more detail, however, we can see that for a given colour in Fig. \[cmd\_h\], there is a small offset to brighter magnitudes for younger objects – in the DUSTY atmosphere case a 10Myr dwarf at the end of the L8 sequence will be $\approx$1.0mag. brighter than a 1Gyr dwarf of the same colour and effective temperature, and 1.3mag brighter than a 5Gyr dwarf. In the COND case the equivalent differences are 1.3 and 1.7 magnitudes. The likely ages for our field T dwarfs will be somewhere in the range 100Myr-1Gyr [@da2002; @scholz2003]. This would suggest that in clusters like IC2391 or IC2602 of age 10-20Myr at d$\approx$150pc the absolute magnitude for CH$_4$ onset will be M$_H$$\sim$12-12.5, or equivalently H$\sim$18-18.5.
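The cluster apparent magnitudes quoted here follow from the distance modulus, $m = M + 5\log_{10}(d/10\,\mathrm{pc})$ (neglecting extinction). A quick check against the IC2391/IC2602 numbers above:

```python
import math

def apparent_mag(abs_mag, distance_pc):
    """Apparent magnitude at a given distance in parsecs (no extinction assumed)."""
    return abs_mag + 5.0 * math.log10(distance_pc / 10.0)

# CH4 onset at M_H ~ 12-12.5 in a cluster at ~150 pc:
print(round(apparent_mag(12.0, 150.0), 1))  # 17.9
print(round(apparent_mag(12.5, 150.0), 1))  # 18.4
```

These values round to the H$\sim$18-18.5 range quoted for IC2391/IC2602; the same relation gives the Pleiades and Trapezium estimates that follow.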
For older clusters like the Pleiades (100Myr, d$\sim$125pc) these numbers are more like M$_H$$\sim$12.5-13 or H$\sim$18-18.5. Both of these are eminently reachable magnitude limits with wide-field cameras on 4m-class telescopes, suggesting that CH$_4$ imaging may be a powerful tool for easily conducting an unbiased census of T dwarfs in large open clusters. Similarly for more compact, but distant clusters, like Trapezium ($\sim$25Myr, 450pc), observations at H$\sim$20-20.5 are tractable over the fields-of-view required on 8m-class telescopes. Fig. \[cmd\_h\] also has implications for the interpretation of potential cluster membership. For example, @zo2002 have found a T6 dwarf in the direction of the $\sigma$Orionis cluster. Fig. \[cmd\_h\] suggests that a field brown dwarf of this spectral type will have M$_H$=15.0$\pm$0.5. For the much younger age of $\sigma$Orionis (1-8Myr; @zo2002) this will be more like M$_H$=14.0$\pm$0.5, which would imply a distance to the $\sigma$OriJ053810.1-023626 T6 dwarf of 192$\pm$50pc – more consistent with being a foreground object than a member of the cluster at d=352pc [@perry1997]. Colour-Magnitude & Spectral-Type-Magnitude relations for L and T dwarfs {#relations} ----------------------------------------------------------------------- Figures \[abs\_spt\] and \[cmd\_iz\] have over-plotted on some of their panels high-order polynomial fits to the weighted (and binary decomposed) data. As inspection of the figures shows, these fits are not always particularly successful at modeling the extremely complex behaviour of these cooling curves in the observed passbands. Nonetheless, in the absence of working atmospheric models the fits may be a useful tool, so long as their weaknesses are acknowledged. We therefore provide the coefficients for these fits, and the root-mean-square scatter about the fits, in Table \[fits\].
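As a usage sketch for the fits that follow: each row of Table \[fits\] lists coefficients of $P(x) = c_0 + c_1\,x + c_2\,x^2 + \ldots$, with spectral types encoded as described in the table notes (SpT $=i$ for M$i$, $=j+10$ for L$j$, $=z+19$ for T$z$). For example, evaluating the M$_K$ (UKIRT) vs. SpT row:

```python
def poly(x, coeffs):
    """Evaluate P(x) = c0 + c1*x + c2*x**2 + ... via Horner's rule."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# Coefficients from the M_K (UKIRT) : SpT row of Table [fits] (RMS 0.40 mag).
MK_U_SPT = [8.14626e-1, 2.95440, -3.89554e-1, 2.68071e-2, -8.86885e-4, 1.14139e-5]

print(round(poly(10, MK_U_SPT), 2))  # L0 (SpT=10): M_K ~ 10.48
print(round(poly(25, MK_U_SPT), 2))  # T6 (SpT=25): M_K ~ 15.09
```

The quoted RMS and the restriction of these high-order polynomials to the observed spectral-type range should be kept in mind; extrapolation outside that range will diverge rapidly.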
[llcrrrrrrrr]{} $P(x)$ & $x$[^11] & RMS & $c_0$[^12] & $c_1$ & $c_2$ & $c_3$ & $c_4$ & $c_5$ & $c_6$ & $c_7$\ M$_{Ks}$ (2M)&SpT &0.38 & 6.27861e$+$1 & -1.47407e$+$1 & 1.54509e$+$0 & -7.42418e$-$2 & 1.63124e$-$3 & -1.25074e$-$5 & - & -\ M$_K$ (U)&SpT &0.40 & 8.14626e$-$1 & 2.95440e$+$0 & -3.89554e$-$1 & 2.68071e$-$2 & -8.86885e$-$4 & 1.14139e$-$5 & - & -\ M$_J$ (2M)&SpT &0.36 & 8.94012e$+$2 & -3.98105e$+$2 & 7.57222e$+$1 & -7.86911e$+$0 & 4.82120e$-$1 & -1.73848e$-$2 & 3.41146e$-$4 & -2.80824e$-$6\ M$_J$ (U)&SpT &0.30 & 5.04642e$+$1 & -3.13411e$+$1 & 9.06701e$+$0 & -1.30526e$+$0 & 1.03572e$-$1 & -4.58399e$-$3 & 1.05811e$-$4 & -9.91110e$-$7\ M$_{Ic}$&SpT &0.37 & 7.22089e$+$1 & -1.58296e$+$1 & 1.56038e$+$0 & -6.49719e$-$2 & 1.04398e$-$3 & -2.49821e$-$6 & - & -\ M$_Z$ (U)&SpT &0.29 & 4.99447e$+$1 & -3.08010e$+$1 & 8.96822e$+$0 & -1.29357e$+$0 & 1.02898e$-$1 & -4.57019e$-$3 & 1.05950e$-$4 & -9.97226e$-$7\ M$_{Ic}$&I$_C$–J &0.67 & 1.17458e$+$3 & -1.22891e$+$3 & 5.00292e$+$2 & -9.70242e$+$1 & 8.86072e$+$0 & -2.97002e$-$1 & - & -\ M$_J$ (2M)&I$_C$–J &0.63 & 1.52199e$+$3 & -1.69336e$+$3 & 7.41385e$+$2 & -1.58261e$+$2 & 1.64657e$+$1 & -6.66978e$-$1 & - & -\ Conclusion ========== We have shown that high precision parallaxes can be obtained with common-user near-infrared cameras using techniques very similar to those used in optical CCD astrometry. The new generation of large format infrared imagers based on HAWAII1 (1K) and HAWAII2 (2K) HgCdTe arrays, and the new generation of large format InSb arrays, offer exciting prospects for the astrometry of cool brown dwarfs in the future. Due to their infrared methane absorption bands, T dwarfs have quite similar effective wavelengths to the ensemble of background reference stars, which makes the correction of differential colour refraction effects considerably easier. The “early T hump” (i.e. 
the brightening at the J band of early T dwarfs relative to late L dwarfs) appears to be a feature of brown dwarf cooling curves, rather than an effect of binarity or age. And finally, these data imply that detection of T dwarfs in clusters can be made directly at tractable magnitudes in the H-band, opening the way to a new generation of cluster mass function studies based on the powerful technique of CH$_4$ differential imaging. The authors gratefully acknowledge support for this program from the Australian Government’s Access to Major Research Facilities Program (grants 99/00-O-15 & 01/02-O-02), and the AAO Director Dr B.Boyle. CGT would like to thank Dr Joss Hawthorn for his assistance with an early draft. Publication funds were provided through support for HST proposal 8563, and AJB acknowledges support by the National Aeronautics and Space Administration (NASA) through Hubble Fellowship grant HST-HF-01137.01. Both are provided by NASA through a grant from the Space Telescope Science Institute, operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. JDK acknowledges the support of the Jet Propulsion Laboratory, California Institute of Technology, which is operated under contract with NASA. This publication makes use of data from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center, funded by NASA and the NSF. Finally, we would like to thank Dr Hugh Harris for a helpful and efficient referee’s report. Ackerman, A.S. & Marley, M.S. 2001, ApJ, 556, 872 Allard, F., Hauschildt, P.H., Alexander, D.R. & Starrfield, S. 1997, ARA&A, 35, 137 Allard, F., Hauschildt, P.H., Alexander, D.R., Tamanai, A. & Schweitzer, A. 2001, ApJ, 556, 357 Baraffe, I., Chabrier, G., Barman, T.S., Allard, F. & Hauschildt, P. 2003, A&A, submitted. [astro-ph/0302293]{} Burgasser, A. J., et al. 1999, ApJ, 522, L65 Burgasser, A. J., et al.
2000a, ApJ, 531, L57 Burgasser, A. J., et al. 2000b, AJ, 120, 1100 Burgasser, A. J., 2002a, PhD Thesis, California Institute of Technology, Pasadena: California [`www.astro.ucla.edu/~adam/homepage/research/tdwarf/thesis/`]{} Burgasser, A. J., et al. 2002b, ApJ, 564, 421 Burgasser, A. J., et al. 2002c, ApJ, 571, L151 Burgasser, A. J., Kirkpatrick, J.D., Reid, I.N., Brown, M.E., Miskey, C.L. & Gizis, J.E., 2003a, ApJ, 586, 512 Burrows, A. et al. 1997, ApJ, 491, 856 Chabrier, G., Baraffe, I., Allard, F. & Hauschildt, P. 2000, ApJ, 542, 464 Close, L.M., Siegler, N., Freed, F., Biller, B. 2003, ApJ, in press. [astro-ph/0301095]{} Cuby, J.G. et al. 2000, A&A, 349, L41 Cutri, R. et al. 2001, Explanatory Supplement to the 2MASS Second Incremental Data Release. [`www.ipac.caltech.edu/2mass/releases/second/doc/explsup.html`]{} Dahn, C.C. et al. 2002, AJ, 124, 1170 Elias, J.H., Frogel, J.A., Matthews, K. & Neugebauer, G. 1982, AJ, 87, 1029 Finger, G. & Nicolini, G. 1998, “Interquadrant Row Crosstalk”, Garching: Germany. [`www.eso.org/~gfinger/hawaii_1Kx1K/crosstalk_rock/crosstalk.html`]{} Geballe, T.R. et al. 2002, ApJ, 564, 466 Jones, H.R.A. 2000, HIPPARCOS and the Luminosity Calibration of the Nearer Stars, 24th IAU General Assembly, Joint Discussion 13, Manchester, UK. Kirkpatrick, J. D., et al. 1999, ApJ, 519, 802 Koerner, D.W., Kirkpatrick, J. D., McElwain, M.W. & Bonaventura, N.R. 1999, ApJ, 526, L25 Leggett, S. K., et al. 2000, ApJ, 536, L35 Leggett, S.K., Allard, F., Geballe, T.R., Hauschildt, P.H., Schweitzer, A. 2001, ApJ, 548, L908 Leggett, S. K., et al. 2002, ApJ, 564, 452 Marley, M.S., Seager, S., Saumon, D., Lodders, K., Ackerman, A.S., Freedman, R.S. & Fan, X. 2002, ApJ, 568, 335 Martin, E.L., Brandner, W. & Basri, G. 1999, Science, 283, 1718 Monet et al. 1992, AJ, 103, 638 Nakajima, T., Oppenheimer, B.R., Kulkarni, S.R., Golimowski, D.A., Matthews, K., Durrance, S.T., 1995, Nature, 378, 463 Perryman, M. A. C., et al. 1997, A&A, 323, L49 Reid, I.N. & Gizis, J.E.
1997, AJ, 113, 2249 Reid, I.N., Gizis, J.E., Kirkpatrick, J.D. & Koerner, D. 2001, AJ, 121, 489 Scholz, R.D., McCaughrean, M.J., Lodieu, N. & Kuhlbrodt, B., 2003, A&A, submitted [astro-ph/0212487]{}. Stone, R.C., Pier, J.R. & Monet, D.G., 1999, AJ, 118, 2488 Strauss, M. A., et al. 1999, ApJ, 522, L61 Tinney, C.G. 1993, AJ, 105, 1169 Tinney, C.G., Reid, I.N., Gizis, J. & Mould, J.R., 1995, AJ, 110, 3014 Tinney, C.G. 1996, MNRAS, 281, 644 Tsvetanov, Z. I., et al. 2000, ApJ, 531, L61 Tsuji, T. & Nakajima, T. 2003, ApJ, 585, L151 Valdes, F.G., Campusano, L.E., Velasquez, V.D. & Stetson, P. B., 1995, PASP, 107, 1119 Vrba, F., Henden, A.A., Luginbuhl, C.B. & Guetter, H.H. 2002, BAAS, 201, 3305 Wallace, P. 1999, “SLALIB – Positional Astronomy Library”, Starlink User Note 67.45 Zapatero Osorio, M.R. et al. 2002, ApJ, 578, 536 \[lastpage\] [^1]: Sample parameterizations can be found at [http://www.aao.gov.au/local/www/cgt/sofi]{}. [^2]: This data can be obtained from the Vizier service. [^3]: This step made extensive use of M.Richmond’s excellent [match]{} implementation of the @valdes95 object list matching algorithm, which is available at [http://acd188a-005.rit.edu/match/]{} [^4]: See Table 1 for full object names [^5]: @da2002 [^6]: Indeed the two are so different that a distinctive name – Y – for these HgCdTe-based Z magnitudes is being widely adopted [^7]: Difference in magnitude between the component and the total magnitude of the system in this passband. [^8]: @bu2003a [^9]: R01 - @reid2001, K99 - @ko1999, L01 - @le2001, M99 - @mbb1999, B03 - @bu2003a [^10]: Spectral types on the @bu2000a system [^11]: SpT $= i$ for M$i$, $=j+10$ for L$j$ and $=z+19$ for T$z$ spectral types on the @ki1999 system for M and L dwarfs, and the @bu2002a system for T dwarfs. [^12]: $P(x) = c_0 + c_1\,x + c_2\,x^2 ...$
--- abstract: 'Semantic mapping is the incremental process of “mapping” relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantics of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as of standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization in the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics. Nevertheless, by providing a tool for the construction of a semantic map ground truth, we aim at involving the scientific community in acquiring data for populating the dataset.' author: - bibliography: - 'references.bib' title: ' **A Proposal for Semantic Map Representation and Evaluation** ' --- Introduction ============ In recent years, semantic mapping has become a very active research area. Such increasing interest is motivated by the idea that if robots can *understand* the environment in which humans live, and the way they operate in it, they can also *collaborate* and *act* (i.e., have a more cognitive behavior). Nevertheless, the ability to *communicate* represents a strict requirement for collaboration among two or more agents. When dealing with humans, this can be naturally achieved by enabling robots to use spoken language, based on the learned semantics of the world. Associating symbols with numerical representations is in fact a key requirement for producing a robot that can use spoken language.
Indeed, semantic mapping is the incremental process of mapping relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine, with the aim of learning to understand, collaborate and communicate. ![Double view of the example dataset acquired in the Robot Innovation Facility of Peccioli, in Italy. Part of the sitting room and the kitchen are shown, together with some bounding boxes identifying a chair, a deckchair and two robots.[]{data-label="fig:teaser"}](images/teaser.png){width="\columnwidth"} ![Double view of the example dataset acquired in the Robot Innovation Facility of Peccioli, in Italy. Part of the sitting room and the kitchen are shown, together with some bounding boxes identifying a chair, a deckchair and two robots.[]{data-label="fig:teaser"}](images/teaser_2.png){width="\columnwidth"} Ongoing research mostly tries to address the problem by focusing on a subset of the information to be learned, and by considering an agent whose main abilities are navigation and object manipulation. In this way, strict requirements for communicative or collaborative behaviors are typically ignored. A relevant definition in this sense is given by N[ü]{}chter and Hertzberg [@Nuechter2008], who describe a semantic map for a mobile robot as “*a map that contains, in addition to spatial information about the environment, assignments of mapped features to entities of known classes. Further knowledge about these entities, independent of the map contents, is available for reasoning in some knowledge base with an associated reasoning engine*”. Based on the same concept, several approaches have been proposed. 
These can be grouped into two main categories: fully automated methods for classification of locations and objects [@Blodow2011; @Mozos2012; @Gunther2013], and techniques that exploit the support of the user in the knowledge acquisition and learning process [@Zender2008; @Nieto-Granda2010; @Pronobis2012]. While a comprehensive overview of the relevant work in this direction can be found in the survey by Kostavelis and Gasteratos [@Kostavelis2014], it is important to remark that even the simplest semantic map goes far beyond “simple” labeling of spatial features. In fact, even though they are built on top of sophisticated SLAM procedures, Computer Vision and Machine Learning algorithms, semantic maps must provide the possibility to reason over the acquired knowledge. Therefore, they have to be formalized and represented in a proper way. Moreover, semantic mapping methods cannot be directly evaluated on the metrics and benchmarking datasets which are available for other algorithms, since these do not take into account any kind of reasoning. On the contrary, approaches proposed in the literature (Section \[sec:related\_work\]) lack any kind of standardization and typically underestimate these questions. In particular, two main issues emerge from the analysis of the state-of-the-art: 1) the absence of a common formalism for representing semantic maps and, consequently, 2) the lack of suitable validation and evaluation techniques. This puts a significant limitation on the research field, since it is difficult to understand the improvements over the state-of-the-art and even to compare available methods. The aim of this paper is therefore twofold. First, we address the issues highlighted above, by proposing a formalization and a standardization in the representation of semantic maps (Section \[sec:representation\]).
Second, we make a proposal for their evaluation, as well as for benchmarking semantic mapping methods, by means of a dataset based on real sensor data (Section \[sec:evaluation\]). Moreover, by describing the procedure and providing usable software for building such a dataset (Section \[sec:dataset\]), we invite the scientific community to contribute to its creation (see Fig. \[fig:teaser\] for an example). Conclusions and open questions related to our proposal are finally reported in Section \[sec:discussion\]. Related Work {#sec:related_work} ============ There exists a large literature on the problem of learning and representing the semantics of environments based on their spatial location, geometry and appearance [@Kostavelis2014]. This activity is usually referred to as “semantic mapping”. Such a term, although originally describing a difficult process that deals with more heterogeneous information (i.e., not limited to spatial knowledge), has strong implications. Semantic maps should, in fact, not only assign a certain number of labels or properties to relevant features of the environment (like in [@Goerke2009; @Mozos2012]), but also provide a representation of this knowledge in a form usable by the system. As introduced in the previous section, one of the main issues of current research is the wide heterogeneity of the representations used for semantic maps. For example, Galindo *et al.* [@Galindo2005] represent environmental knowledge by anchoring sensor data, which describe rooms or objects in a spatial hierarchy, to the corresponding symbol of a conceptual hierarchy. Such a conceptual hierarchy is based on a small ontology in description logic, which enables the robot to perform inference. The authors validate their approach by building their own domestic-like environment and testing the learned model by executing navigation commands.
Pangercic *et al.* [@Pangercic2012], instead, investigate the representation of “semantic object maps” by means of a symbolic knowledge base (in description logic) associated with Prolog predicates (for inference). Such a knowledge base contains classes and properties of objects, instances of semantic classes and spatial information. While profiling the time required by the semantic mapping process, the authors test their approach on a PR2 robot, which has to open a cabinet and detect handles based on an a priori given semantic map. Moreover, Bastianelli *et al.* [@Bastianelli2013] use a Prolog knowledge base containing both the specific knowledge of a certain environment and the general knowledge about a domain. The knowledge base is linked to the physical environment by means of a matrix-like data structure generated on top of a metric map. Once again, the experimental validation is based on qualitative evaluations of the robot behavior, given a certain command and the learned semantic map. Riazuelo *et al.* [@Riazuelo2015] instead describe the RoboEarth cloud semantic mapping system, which is composed of an ontology, for coding concepts and relations, and a SLAM map for representing the scene geometry and object locations. In particular, a recognition module identifies objects based on a local database of CAD models, while the whole system is integrated with an OWL ontology. The other problem, which emerges as a consequence of the variety of representations, is the absence of a standard, suitable validation and evaluation procedure. In addition to the previous examples, Zender *et al.* [@Zender2008] generate a representation ranging from sensor-based maps to a conceptual abstraction, encoded in an OWL-DL ontology of an indoor office environment. However, except for individual modules, their experimental evaluation is mainly qualitative.
Pronobis and Jensfelt [@Pronobis2012], instead, represent a conceptual map as a probabilistic chain graph model and evaluate their method by comparing the robot's belief of being in a certain location against the ground truth. Gunther *et al.* [@Gunther2013] perform a sort of semantic-aided object classification based on an OWL-DL knowledge base. The evaluation is based on the rate of correctly classified objects. Finally, Handa *et al.* [@Handa2014] propose a synthetic dataset, which could eventually be extended with semantic knowledge and used as a ground truth for comparing semantic mapping methods. However, even when noise is introduced, fictitious data never reflect a real-world acquisition. Note that none of the cited works can compare the performance of their semantic mapping method against those of other similar systems. Starting from these considerations, we propose a standard methodology for representing and evaluating semantic maps. In particular, we describe a formalization which includes a reference frame, spatial information and a set of logic predicates. Such a formalization is intended to be used as a general structure of the representation that all semantic maps have to include and can extend. Moreover, in addition to proposing an evaluation metric, we suggest a procedure for the creation of a semantic mapping dataset. In particular, such a dataset is based on real sensor data enriched with semantic information. Semantic Map Representation {#sec:representation} =========================== As previously stated, in order for a map to be “semantic”, we require that knowledge be represented in a suitable manner. In fact, this enables additional information to be inferred from the map, whenever a reasoning engine is associated with it. For this reason, in this section, we propose a formalization of a *minimal* general structure of the representation that should be implemented in a semantic map.
This representation has to play the role of a common interface among all semantic maps, and can be easily extended or specialized as needed. ![Minimal concept hierarchy to be used for a standard semantic map representation.[]{data-label="fig:ontology"}](images/ontology.png){width="\columnwidth"} In the general formalization that we are describing, such a representation is defined as a triple $${\mathcal{SM}} = \langle R,{\mathcal{M}},{\mathcal{P}} \rangle,$$ where: - $R$ is the global reference system in which all the elements of the semantic map are expressed; - ${\mathcal{M}}$ is a set of geometrical elements obtained as raw sensor data. They are expressed in the reference frame $R$ and describe spatial information in a mathematical form. ${\mathcal{M}}_s \subseteq {\mathcal{M}}$ is the subset of semantically relevant elements; - ${\mathcal{P}}$ is a set of predicates, among which *is-a*(`X`, `Y`) and *instance-of*(`X`, `Y`) are mandatory. ${\mathcal{P}}$ has to be compliant with the concept hierarchy shown in Fig. \[fig:ontology\]. ${\mathcal{P}}_s \subseteq {\mathcal{P}}$, with $|{\mathcal{P}}_s| > 0$, contains the predicates that provide an abstraction of the elements in ${\mathcal{M}}_s$. Note that the definition of a unique reference frame $R$ makes it possible to associate the elements of the subset ${\mathcal{M}}_s$ with those of ${\mathcal{P}}_s$. Moreover, the requirement that ${\mathcal{M}}$ is composed of geometrical elements obtained as raw sensor data gives the opportunity to define an additional functionality on top of our representation. Indeed, as we will explain in Section \[sec:evaluation\], we are interested in the possibility of retrieving the actual sensor data, given a specific pose in the map expressed according to $R$.
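The triple above can be sketched as a small data structure. This is a minimal illustration, not part of the paper's formalism: the class and field names (`SemanticMap`, `is_valid`, string identifiers for frames and points) are our own choices, and predicates are encoded as plain tuples.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticMap:
    """Sketch of SM = <R, M, P> with the subsets M_s and P_s made explicit."""
    R: str                                   # identifier of the global reference frame
    M: set = field(default_factory=set)      # geometrical elements (here: point ids)
    M_s: set = field(default_factory=set)    # semantically relevant subset of M
    P: set = field(default_factory=set)      # predicates as tuples, e.g. ("is-a", "Shop", "Location")
    P_s: set = field(default_factory=set)    # predicates abstracting the elements of M_s

    def is_valid(self) -> bool:
        # Minimal consistency checks implied by the definition:
        # M_s subset of M, P_s subset of P, |P_s| > 0, and the two
        # mandatory predicate names occurring in P.
        names = {p[0] for p in self.P}
        return (self.M_s <= self.M and self.P_s <= self.P
                and len(self.P_s) > 0
                and {"is-a", "instance-of"} <= names)
```

For instance, a map with one annotated point and the two mandatory predicates passes `is_valid()`, while an empty `P_s` violates the $|{\mathcal{P}}_s| > 0$ requirement.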
For what concerns ${\mathcal{P}}$, instead, the predicates *is-a* and *instance-of* represent respectively: the subclass relation, meaning that if *is-a*(`B`, `A`) holds, the class `B` is a subclass of the class `A` and every instance of `B` is also an instance of `A`; and the membership relation, meaning that if *instance-of*(*`a`*, `A`) holds, the individual `a` belongs to the class `A`. Additionally, some predicates can have a function-like behavior, meaning that they can occur only once for each individual. For example, if dealing with the classes `Person` and `IDNumber`, the predicate *hasId*(`X`, `Y`) occurs only once for each instance of `Person` and `IDNumber`. To give a general idea, let us suppose we are building a semantic map for a robot operating and interacting with people in a mall. In this case, we can use our representation and choose ${\mathcal{M}}$ to be a set of points, like a unique point cloud modeling the 3D map of the environment. For what concerns ${\mathcal{P}}$, we can extend the concept hierarchy of Fig. \[fig:ontology\] as follows: - since a person is an element of interest, we can define a class `Person` and add the predicate *is-a*(`Person`, `Physical_Thing`); - a specialization of the class `Location` can be introduced for the shops and corridors, by defining the classes `Shop`, `Corridor` and adding the predicates *is-a*(`Shop`, `Location`), *is-a*(`Corridor`, `Location`); - a `Connecting_Architecture` can be specified in such a way that it always *connects* an element of the class `Shop` and one of the class `Corridor`; - since a shop could use advertisements to promote itself, we can define a class `Advertisement`, add the predicate *is-a*(`Advertisement`, `Abstract_Thing`) and define a new predicate *hasAdvertisement*(`X`, `Y`), where `X` could be an instance of `Shop` and `Y` an instance of `Advertisement`. Finally, we can select as reference frame $R$ the global frame of the 3D map.
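The semantics of the two mandatory predicates can be sketched as a tiny forward-chaining step: closing *instance-of* under *is-a*. The mall classes follow the example above; the function name is ours, and the edge *is-a*(`Location`, `Physical_Thing`) is an illustrative assumption about the hierarchy of Fig. \[fig:ontology\], not something stated in the text.

```python
def infer_instances(predicates):
    """Close instance-of under is-a:
    if instance-of(a, B) and is-a(B, A) hold, then instance-of(a, A) holds."""
    is_a = {(p[1], p[2]) for p in predicates if p[0] == "is-a"}
    inst = {(p[1], p[2]) for p in predicates if p[0] == "instance-of"}
    changed = True
    while changed:
        changed = False
        new = {(a, sup) for (a, sub) in inst for (s, sup) in is_a if s == sub}
        if not new <= inst:
            inst |= new
            changed = True
    return inst

P = {
    ("is-a", "Shop", "Location"),
    ("is-a", "Location", "Physical_Thing"),   # assumed edge, for illustration
    ("instance-of", "shop1", "Shop"),
}
print(infer_instances(P))
```

Running this derives that `shop1` is also an instance of `Location` and `Physical_Thing`, which is exactly the kind of inference a reasoning engine attached to the map is expected to perform.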
Semantic Map Evaluation {#sec:evaluation} ======================= Once we are given the representation schema presented in Section \[sec:representation\], a metric and one *shared* environment, it is possible to perform a comparison between two different methods on the basis of the semantic maps they generate. For this reason, we have to define one or more metrics that allow for a quantitative evaluation of each method. Then, we have to find an environment in which to perform this kind of experiment. While some Robotics Innovation Facilities exist for this purpose, it is still not easy to retrieve common locations and environments, mainly due to logistic, physical and economic constraints. For these reasons, while hypothesizing some metrics in Section \[subsec:metric\], we suggest the construction of a dataset of semantic maps according to the proposed representation schema. In particular, the set of geometrical elements ${\mathcal{M}}$ should be built with real sensor data. In this way, it is possible to simulate the robot navigation, as well as its sensor acquisition. This can be done by defining a projection function that transforms the elements of ${\mathcal{M}}$ into the associated sensor domain. For example, in the case of an RGB-D camera the geometrical elements are projected into a depth and RGB image, while in the case of a laser they are projected into a vector of range values. Such a dataset is a ground truth of each environment and therefore it can be used to make comparisons based on specific metrics. Of course, the set ${\mathcal{P}}$ cannot be fully satisfactory, since it is not feasible to take into account all the possible semantic knowledge. For this reason, it is likely that a user might need to extend it. In this case, it is important to update the original ground truth so that it becomes more and more complete and everyone can test their system on the same dataset.
Evaluation Metric Hypotheses {#subsec:metric} ---------------------------- In this section, we hypothesize some possible evaluation metrics to be used for the comparison of two semantic maps that are compliant with our previous proposal. Given a representation ${\mathcal{SM}}_1 = \langle R_{GT},{\mathcal{M}}_1,{\mathcal{P}}_1 \rangle$ and the ground truth ${\mathcal{SM}}_{GT} = \langle R_{GT},{\mathcal{M}}_{GT},{\mathcal{P}}_{GT} \rangle$, an evaluation metric can be defined as $$\delta({\mathcal{SM}}_1,{\mathcal{SM}}_{GT}) = f(|{\mathcal{M}}_1 \ominus {\mathcal{M}}_{GT}|, |{\mathcal{P}}_1 \boxminus {\mathcal{P}}_{GT}|).$$ Note that the reference frames of ${\mathcal{SM}}_1$ and ${\mathcal{SM}}_{GT}$ coincide: this is easily achievable by applying the transformation offset between the original frame $R_1$ of ${\mathcal{SM}}_1$ and $R_{GT}$ of ${\mathcal{SM}}_{GT}$. The definition of the operators $\ominus$ and $\boxminus$ determines the metric itself. For example, $\ominus$ can be a distance $d$ between geometrical elements, according to Table \[tab:spatial\_metric\], while the $\boxminus$ operator could return two sets of predicates $\Delta$ and $\Gamma$ such that: $$\label{eq:def-metric} \{{\mathcal{P}}_1 \setminus \Gamma\} \cup \Delta \models {\mathcal{P}}_{GT}$$

  -------- ------- --------
  Points   Lines   Planes
  -------- ------- --------

  : Example definition of the $\ominus$ operator. The index $i$ indicates the $i$-th corresponding geometric element in ${\mathcal{M}}_1$ and ${\mathcal{M}}_2$, while $p$, $l$ and $\pi$ represent respectively a point, a line and a plane.[]{data-label="tab:spatial_metric"}

The lower the cardinality of $\Delta$ and $\Gamma$, the better the semantic representation. However, this does not consider the fact that the subset ${\mathcal{P}}_s$ contains some reference to spatial information (which could again be measured by metric criteria).
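A minimal instantiation of $\delta$ can be sketched in code, under two simplifying assumptions of our own: both $\ominus$ and $\boxminus$ reduce to symmetric set differences (so $|\Delta| + |\Gamma|$ becomes the size of one symmetric difference), and $f$ is a weighted sum. The paper deliberately leaves both operators open, so this is only one possible choice.

```python
def delta(M1, M_gt, P1, P_gt, w_spatial=1.0, w_semantic=1.0):
    """Toy instance of delta(SM1, SM_GT) = f(|M1 - M_GT|, |P1 - P_GT|).

    Geometrical elements and predicates are compared as hashable items;
    the symmetric difference counts both missing and spurious entries.
    """
    spatial_err = len(M1 ^ M_gt)    # mismatched geometrical elements
    semantic_err = len(P1 ^ P_gt)   # predicates to add (Delta) plus to remove (Gamma)
    return w_spatial * spatial_err + w_semantic * semantic_err
```

With this choice, a map missing one ground-truth point and one ground-truth predicate (e.g., the table of the example below) scores 2, and a perfect map scores 0.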
A solution to this problem could be the redefinition of $\boxminus$ as an operator which returns two sets of predicates $\Delta$ and $\Gamma$, and a distance $d$ such that: $$\{({\mathcal{P}}_1 \setminus {\mathcal{P}}_{1_s}) \setminus \Gamma\} \cup \Delta \models \{{\mathcal{P}}_{GT} \setminus {\mathcal{P}}_{GT_s}\},~~ d({\mathcal{P}}_{1_s}, {\mathcal{P}}_{GT_s}).$$ For example, suppose that the ground truth ${\mathcal{SM}}_{GT}$ contains a table and a chair correctly positioned. If the table is missing in the set ${\mathcal{P}}_1$ of the robot semantic map ${\mathcal{SM}}_{1}$, from our metric in Eq. \[eq:def-metric\] we obtain that $\Delta$ has cardinality $1$. Indeed, in this case the robot would not be able to execute the command “go to the table”. Conversely, if the table belongs to ${\mathcal{P}}_s$, the cardinality is $0$ and the robot is able to execute the command. Similarly, if the object is not well positioned in ${\mathcal{M}}_1$ any distance from Table \[tab:spatial\_metric\] would be much bigger than zero, and the robot would execute the command by reaching a wrong location. Additional metrics could be defined on different criteria like the processing time, the distance traveled by the robot, the number of sensor readings processed, etc. Dataset Construction {#sec:dataset} ==================== Since the construction of the dataset is based on the representation proposed in Section \[sec:representation\], and it consists of the combination of spatial and semantic information, any approach compliant with that could be applied. In this section we describe our method for the generation of a ground truth, in which the set ${\mathcal{M}}$ consists of a 3D point cloud, ${\mathcal{P}}$ implements the proposed concept hierarchy and ${\mathcal{P}}_s$ contains abstractions of bounding boxes. 
In particular, in order to collaborate with a larger community of researchers, we consider low-cost sensors (i.e., RGB-D cameras like the Microsoft Kinect and Asus Xtion) which can be easily found on any robot. Note that building a 3D map with this kind of sensor raises multiple open issues. Still, even if an additional manual refinement is needed, our software makes it possible to build such maps. As shown in Fig. \[fig:methodology\], this process is composed of several steps, which can be divided into metric and semantic phases. First, we acquire data in order to generate a 3D map and we perform a preliminary manual annotation of the objects inside the environment. Then, by associating semantic information and volumes in the 3D map, in the form of bounding boxes, we obtain the desired semantic map. Of course, sensor calibration prior to data acquisition is highly recommended (see Section \[subsec:calibration\] for more details). ![Steps involved in the process of building the dataset.[]{data-label="fig:methodology"}](images/methodology.png){width="0.9\columnwidth"} Data Acquisition {#subsec:acquisition} ---------------- The data acquisition step can be divided into two different parts, one related to the 3D map, the other to the semantic annotations for elements of interest inside the environment. While manually collecting semantic annotations is relatively easy, although tedious, 3D data acquisition proves to be more challenging due to the limitations of low-cost sensors. The generation of a 3D map requires the acquisition of a log capturing the incoming robot sensor data while the robot moves around the environment. In particular, this should contain the robot odometry (or laser data) and the camera stream (both depth and RGB). While taking the log, one should take care to steer the robot so that at least one camera does not see only a flat surface.
Indeed, structures like a floor, a wall or two parallel planes do not help the mapping system, due to their poor geometrical information. Sensor Calibration {#subsec:calibration} ------------------ The calibration of a sensor is the process of correctly computing its internal parameters, as well as its pose with respect to the robot reference frame. Extracting the right internal parameters improves the data generated by the sensor, reducing its intrinsic error. For example, in the case of a depth camera, this corresponds to determining its camera matrix and distortion parameters. Computing the correct pose of a sensor, instead, makes it possible to accurately express data measurements with respect to a different reference frame. In order to perform sensor calibration, supposing that $n$ RGB-D cameras are used on the robot, $n + 2$ logs[^1] are required. In particular, choosing one of the cameras as a reference, we have: 1. \[log:internal\] $n$ *intrinsic calibration logs*, containing the stream of the $i$-th RGB-D sensor, for the calibration of the internal parameters of its depth camera (refer to [@Dicicco2014] for more details on how to acquire data); 2. \[log:sensor\_base\] $1$ *sensor-base calibration log*, containing the robot odometry (or laser data) and the camera stream, for calculating the pose between the robot and the reference RGB-D sensor (the robot should slowly translate and rotate while the reference sensor sees at least 3 planes, each of them being non-parallel with all the others); 3. \[log:sensor\_sensor\] $1$ *sensor-sensor calibration log* (at least), containing the stream of the $n$ cameras, for computing the pose of $n-1$ RGB-D sensors with respect to the reference one (all the cameras should see, at least once, the same part of the environment while *always* respecting the condition of the previous point). Common RGB-D cameras are affected by a substantial distortion in the depth channel.
Not considering this distortion leads to systematic drifts in the estimate of the robot pose while mapping. This calibration is performed by following the procedure explained by Di Cicco *et al.* [@Dicicco2014] on the intrinsic calibration logs. At the end of this procedure, it is possible to reduce the intrinsic error which normally affects the sensor data (e.g., walls that should be flat look curved at the edges). Another goal of the calibration procedure is to find the pose of one of the cameras (the *reference*) with respect to the robot frame, and the relative offsets (translation and rotation) between all the other cameras and the *reference*. The software we developed provides two different tools to compute these offsets. The first one performs the computation of the transform $\mathbf{T^*}$ between the robot frame and the reference depth camera. By using the sensor-base calibration log, we estimate the motion of the camera in a small region. Taking as reference the odometry of the robot, this tool casts a least-squares problem that minimizes a cost function which depends on the sensor transform $\mathbf{T}$ and returns $\mathbf{T^*}$. The second tool, instead, allows the computation of the offset between pairs of depth cameras. The main idea is to use the sensor-sensor calibration log to generate, for each camera, an independent point cloud. In this way, each sensor produces a cloud starting from its own reference frame. Once this is done, our registration algorithm can be run between pairs of point clouds. The output of the alignment determines the relative translation and rotation between the origins of the point clouds and thus between the sensors. At the end of the calibration, we are able to construct a tree of sensor pose transformations (see Fig. \[fig:transformations\_tree\]). From this tree, it is possible to compute the transformation between any two nodes by a simple offset concatenation.
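The offset concatenation on the transformation tree can be sketched with homogeneous $4\times4$ transforms: chain each node's parent-relative transform up to the root, then combine the two chains. The frame names and the two-camera tree below are illustrative, not taken from the paper's setup.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain_to_root(tree, node):
    """Compose parent-relative transforms from `node` up to the root frame."""
    T = np.eye(4)
    while node in tree:                      # the root has no entry in the tree
        parent, T_parent_node = tree[node]
        T = T_parent_node @ T
        node = parent
    return T

def relative_transform(tree, a, b):
    """Transform of frame b expressed in frame a, via the common root."""
    return np.linalg.inv(chain_to_root(tree, a)) @ chain_to_root(tree, b)

# Illustrative tree: base -> cam_ref -> cam_2 (pure translations for clarity).
tree = {
    "cam_ref": ("base",    make_T(np.eye(3), [0.1, 0.0, 0.5])),
    "cam_2":   ("cam_ref", make_T(np.eye(3), [0.0, 0.2, 0.0])),
}
print(relative_transform(tree, "cam_ref", "cam_2")[:3, 3])
```

Any pair of nodes can be related this way, which is exactly the "simple offset concatenation" described above.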
![Sensor transformation tree generated at the end of a calibration procedure. In this case the robot was equipped with 3 depth cameras.[]{data-label="fig:transformations_tree"}](images/transformations_tree.png){width="0.7\columnwidth"} Data Processing {#subsec:processing} --------------- Once all the data is acquired, the 3D map can be built. To this end, the point clouds recorded in the log are aligned, generating a set of *local maps*. A local map is a point cloud constructed by aligning and integrating a sequence of depth sensor data while the robot moves in the environment. This is obtained through the use of a point cloud registration algorithm based on the work by Serafin *et al.* [@Serafin2014]. A new local map is started whenever one of the two following conditions holds: - the estimate of the robot (or equivalently the camera) movement is greater than a certain amount. This allows us to limit the growth of the local map in terms of size; - the point cloud registration algorithm detects that the last alignment is not good (with a possibility of inconsistency). This is necessary in order to avoid introducing errors inside the local map. The local map generator uses the robot odometry as an initial guess for the point cloud alignment. However, a good odometry estimate is not always available. In this case (but this is useful in general), if the robot comes with a 2D laser, it is possible to use as initial guess the transformation provided by the *scan matcher* developed as part of our software. The 3D map is represented as a pose graph [@Grisetti2010], where each local map is connected to the previous and following one by means of a transformation. In more detail, nodes of the pose graph represent local maps, with their position and orientation in a global frame. Edges, instead, are relative transforms between local maps. The benefit of this metric representation is that it allows information to be added or removed at any time and an existing map to be updated.
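The two conditions for starting a new local map can be summarized as a small decision function. The thresholds and the notion of "alignment quality" are placeholders of ours; the actual criteria live inside the registration pipeline described above.

```python
# Assumed thresholds, for illustration only.
MAX_MOTION = 1.0         # maximum camera motion per local map (meters)
MIN_ALIGN_QUALITY = 0.8  # minimum acceptable registration quality in [0, 1]

def start_new_local_map(motion_since_start, alignment_quality):
    """Return True when the current local map should be closed and a new one started."""
    if motion_since_start > MAX_MOTION:
        return True      # limit the growth of the local map
    if alignment_quality < MIN_ALIGN_QUALITY:
        return True      # avoid integrating a possibly inconsistent alignment
    return False
```

Each closed local map then becomes a node of the pose graph, linked to its neighbors by relative transforms.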
Indeed, by using a tool provided in our software, inconsistencies in the map can be manually fixed. More specifically, the user can select and align two nodes of the graph at a time and add a new edge between them. This, together with the optimization of the pose graph [@Kummerle2011], leads to the elimination of inconsistencies and thus to a refined map. Combining 3D Map and Semantic Data {#subsec:fusion} ---------------------------------- Once both the 3D map and the semantic annotations are available, it is possible to combine them by means of a geometric abstraction like a volume in the map. In our case, we define such a volume to be a bounding box (i.e., a parallelepiped) containing all the geometric elements to which we want to attach the same semantic information. After all the bounding boxes are assigned, we formalize the predicates ${\mathcal{P}}$ (compliant with the conceptual hierarchy) in OWL-DL, by using Protégé. Bounding boxes, in particular, belong to the subset ${\mathcal{P}}_s$ and they are formalized by means of classes like `Size`, `Position` and `Shape`. Dataset Example {#subsec:example} --------------- ![Detail of the example dataset acquired in the RIF of Peccioli. The image shows a table and chairs with their associated bounding boxes. RGB information is intentionally omitted and resolution is reduced for a better visualization of the bounding boxes.[]{data-label="fig:example"}](images/bounding_boxes_cropped2.png){width="0.85\columnwidth"} We performed the procedure described so far on a set of data specifically acquired during the RoCKIn Camp held in the ECHORD++ Robotic Innovation Facility of Peccioli, in Italy. In particular, this is a domestic environment with several rooms and everyday objects, built to foster benchmarking of robotic applications, to test their robustness, and to support standardization efforts. While a detail of the 3D map of the environment is shown in Fig.
\[fig:example\], the whole dataset is hosted online (<http://goo.gl/v7xSyl>) and contains a ground truth representation which is compliant with the requirements stated in Section \[sec:representation\]. Namely, a 3D point cloud with an associated reference frame and the corresponding OWL-DL ontology compose the first example of a dataset for semantic maps. Discussion {#sec:discussion} ========== In this paper, we defined a methodology for representing semantic maps. In particular, we designed a formalization of their representation which includes both spatial and semantic knowledge. On top of this, we made some hypotheses for metrics and evaluation criteria, based on the idea that a ground truth for semantic maps exists. Note that the procedure we proposed for building a dataset is based on real sensor data. This makes it possible to simulate robot navigation inside the environment, breaking down logistic, physical and economic barriers to a fair comparison between different semantic mapping methods. Finally, we provided useful, documented open-source software for building such a dataset (<http://goo.gl/v7xSyl>). In this way, we invite the scientific community to contribute to populating the dataset with more annotations and environments. In addition to all of this, we have also shown a first real example of a ground truth for a semantic map. Open challenges, however, still remain. Future work, for example, should be oriented toward the definition of a standard evaluation metric. [^1]: A log is obtained by acquiring and recording the required sensor data.
--- abstract: | Let $G$ be an additive finite abelian group with exponent $\exp(G)=m$. For any positive integer $k$, the $k$-th generalized Erdős-Ginzburg-Ziv constant $\mathsf s_{km}(G)$ is defined as the smallest positive integer $t$ such that every sequence $S$ over $G$ of length at least $t$ has a zero-sum subsequence of length $km$. It is easy to see that $\mathsf s_{kn}(C_n^r)\ge(k+r)n-r$ where $n,r\in\mathbb N$. Kubertin conjectured that the equality holds for any $k\ge r$. In this paper, we mainly prove the following results: 1. For every positive integer $k\ge 6$, we have $$\mathsf s_{kn}(C_n^3)=(k+3)n+O(\frac{n}{\ln n}).$$ 2. For every positive integer $k\ge 18$, we have $$\mathsf s_{kn}(C_n^4)=(k+4)n+O(\frac{n}{\ln n}).$$ 3. For $n\in \mathbb N$, assume that the largest prime power divisor of $n$ is $p^a$ for some $a\in\mathbb N$. For any fixed $r\ge 5$, if $p^t\ge r$ for some $t\in\mathbb N$, then for any $k\in\mathbb N$ we have $$\mathsf s_{kp^tn}(C_n^r)\le(kp^t+r)n+c_r\frac{n}{\ln n},$$ where $c_r$ is a constant depending on $r$. Note that the main terms in our results are consistent with the conjectural values proposed by Kubertin. address: - 'Department of Mathematics, Southwest Jiaotong University, Chengdu 610000, P.R. China' - 'Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, P.R. China' author: - Dongchun Han - Hanbin Zhang title: 'On generalized Erdős-Ginzburg-Ziv constants of $C_n^r$' --- Introduction ============ Let $G$ be an additive finite abelian group with exponent $\exp(G)=m$. Let $S=g_1{\boldsymbol{\cdot}}\ldots{\boldsymbol{\cdot}}g_k$ be a sequence over $G$ (repetition is allowed), where $g_i\in G$ for $1\le i\le k$; $k$ is called the length of the sequence $S$. We call $S$ a zero-sum sequence if $\sum^k_{i=1}g_i=0$.
The classical direct zero-sum problem studies conditions (mainly refer to lengths) which ensure that given sequences have non-empty zero-sum subsequences with prescribed properties (also mainly refer to lengths). For example, the Davenport constant, denoted by $\mathsf D(G)$, is the smallest positive integer $t$ such that every sequence $S$ over $G$ of length at least $t$ has a nonempty zero-sum subsequence. It is easy to prove that $\mathsf D(C_n)=n$, where $C_n$ is the cyclic group of order $n$. For any positive integer $k$, the $k$-th generalized Erdős-Ginzburg-Ziv constant $\mathsf s_{km}(G)$ is defined as the smallest positive integer $t$ such that every sequence $S$ over $G$ of length at least $t$ has a zero-sum subsequence of length $km$. In particular, for $k=1$, $\mathsf s_m(G)$ is called the Erdős-Ginzburg-Ziv constant, which is a classical invariant in combinatorial number theory. In 1961, Erdős, Ginzburg and Ziv [@EGZ] proved that $\mathsf s_n(C_n)=2n-1$ which is usually regarded as a starting point of zero-sum theory (see [@ADZ] for other different proofs of this result). We refer to [@GG] for a survey of zero-sum problems. In this paper, we will focus on $\mathsf s_{km}(G)$. Let $G=C_n^r=\langle e_1\rangle\oplus\cdots\oplus\langle e_r\rangle$. Assume that $T$ consists of $n-1$ copies of $e_i$ for $1\le i\le r$. Let $S$ consist of $kn-1$ copies of $0$ and $T$, then it is easy to show that $S$ is a sequence over $C_n^r$ of length $(k+r)n-r-1$ and $S$ contains no zero-sum subsequences of length $kn$. Consequently we have $$\label{eq1.1} \mathsf s_{kn}(C_n^r)\ge(k+r)n-r.$$ For general finite abelian group $G$ with $\exp(G)=m$, similar construction can be used to show that $\mathsf s_{km}(G)\ge km+\mathsf D(G)-1$ holds for $k\ge1$. In 1996, Gao [@Gao3] proved that $\mathsf s_{km}(G)= km+\mathsf D(G)-1$, provided that $km\ge|G|$. In [@GaoThang], Gao and Thangadurai proved that if $km<\mathsf D(G)$, then $\mathsf s_{km}(G)> km+\mathsf D(G)-1$. 
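The lower-bound construction above can be checked exhaustively for tiny parameters: the sequence of $kn-1$ zeros together with $n-1$ copies of each basis vector $e_i$ has length $(k+r)n-r-1$ and no zero-sum subsequence of length $kn$. The brute-force search below is our own illustration and is only feasible for very small $n$, $r$, $k$.

```python
from itertools import combinations

def has_zero_sum_subsequence(seq, length, n, r):
    """Check whether seq (tuples in C_n^r) has a zero-sum subsequence of the given length."""
    for idx in combinations(range(len(seq)), length):
        sums = [0] * r
        for i in idx:
            for j in range(r):
                sums[j] = (sums[j] + seq[i][j]) % n
        if all(s == 0 for s in sums):
            return True
    return False

n, r, k = 3, 2, 1
e = [tuple(1 if j == i else 0 for j in range(r)) for i in range(r)]
# (kn - 1) copies of 0, plus (n - 1) copies of each basis vector e_i.
seq = [(0,) * r] * (k * n - 1) + [e[i] for i in range(r) for _ in range(n - 1)]
assert len(seq) == (k + r) * n - r - 1
print(has_zero_sum_subsequence(seq, k * n, n, r))  # prints False
```

Appending a single extra zero to this extremal sequence immediately creates a zero-sum subsequence of length $kn$ (three zeros), matching the sharpness of the bound $\mathsf s_{kn}(C_n^r)\ge(k+r)n-r$.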
Define $l(G)$ as the smallest integer $t$ such that $\mathsf s_{km}(G)=km+\mathsf D(G)-1$ holds for every $k\ge t$. From the above we know that $$\frac{\mathsf{D}(G)}{m}\le l(G)\le \frac{|G|}{m}.$$ Recently, Gao, Han, Peng and Sun conjectured ([@GHPS], Conjecture 4.7) that $$l(G)=\lceil\frac{\mathsf D(G)}{m}\rceil.$$ Clearly we have $l(C_n)=1$ by the Erdős-Ginzburg-Ziv theorem. For finite abelian groups $G$ of rank two, $l(G)=2$ (see [@GHPS]). Let $p$ be a prime and $q$ a power of $p$, the above conjecture was verified for $C_q^r$ where $1\le r\le 4$ (also more generally for abelian $p$-group $G$ with $\mathsf D(G)\le 4m$) except for some cases when $p$ is rather small, see [@GaoThang; @HZ; @K]. For the studies of $l(G)$ for the general cases, we refer to [@GHPS; @He; @K]. Recall (\[eq1.1\]) that $\mathsf s_{kn}(C_n^r)\ge(k+r)n-r$, in [@K], Kubertin conjectured that the equality actually holds for any $k\ge r$. \[conj1\] For any positive integers $k,n$ with $k\ge r$, we have $$\mathsf s_{kn}(C_n^r)=(k+r)n-r.$$ According to the results in [@GaoThang; @HZ; @K], Conjecture \[conj1\] has been verified for $r\le 4$ except for some cases when $p$ is rather small ($p\le 3$). Recently, Sidorenko [@S1; @S2] verified Conjecture \[conj1\] for $C_2^r$. He [@S1] also applied his results to prove new bounds for the codegree Turán density of complete $r$-graphs. Moreover, he [@S2] established connections between $\mathsf s_{2k}(C_2^r)$ and linear binary codes. Actually, he showed that the problem of determining $\mathsf s_{2k}(C_2^r)$ is essentially equivalent to finding the lowest redundancy of a linear binary code of given length which does not contain words of Hamming weight $2k$. Towards Conjecture \[conj1\], Kubertin [@K] proved that $$\mathsf s_{kq}(C_q^r)\le (k+\frac{3}{8}r^2+\frac{3}{2}r-\frac{3}{8})q-r,$$ where $p>\min\{2k,2r\}$ is a prime and $q$ is a power of $p$. 
By extending the method of Kubertin, He [@He] improved the above upper bound and obtained $$\mathsf s_{kq}(C_q^r)\le(k+5r-2)q-3r$$ when $2p \ge 7r-3$ and $k\ge r$. He also proved that $\mathsf s_{kn}(C_n^r)\le 6kn$ for $n$ with large prime factors and $k$ sufficiently large. More precisely, he showed that for $r,l>0$, for $n=p_1^{\alpha_1}\cdots p_l^{\alpha_l}$ with distinct prime factors $p_1,\ldots,p_l\ge\frac{7}{2}r-3$, and for $k=a_1\cdots a_l$ a product of positive integers $a_1,\ldots,a_l\ge r$, one has $\mathsf s_{kn}(C_n^r)\le 6kn$. We also refer to [@BGH; @Gen] for some recent results on the lower bound of $\mathsf s_{kn}(C_n^r)$ when $k$ is much smaller than the rank $r$; note that in this case $\mathsf s_{kn}(C_n^r)>(k+r)n-r$ (see [@GaoThang]). For $n\in\mathbb N$, let $$\mathsf M(n)=\max\{p^k\mid p^k\,|\,n,\ p\text{ a prime},\ k\in \mathbb N\},$$ i.e., the largest prime power divisor of $n$. For convenience, let $\mathsf M(1)=1$. For any $n,r\in \mathbb N$, we define $$\mathsf p(n,r)=\min\{p^t\mid \mathsf M(n)=p^a\text{ and }p^t\ge r\},$$ i.e., the smallest power of the prime $p$ underlying $\mathsf M(n)$ that is at least $r$. In this paper, we focus on Conjecture \[conj1\] and prove the following results. \[theorem1\] Let $k\in \mathbb N$. We have 1. For every $k\ge 6$, $$\mathsf s_{kn}(C_n^3)=(k+3)n+O(\frac{n}{\ln n});$$ 2. For every $k\ge 18$, $$\mathsf s_{kn}(C_n^4)=(k+4)n+O(\frac{n}{\ln n});$$ 3. For every $k\in \mathbb N$ and fixed $r\ge 5$, $$\mathsf s_{k\mathsf p(n,r)n}(C_n^r)=(k\mathsf p(n,r)+r)n+O_r(\frac{n}{\ln n}),$$ where the implied constant in $O_r$ depends on $r$. Note that the main terms in Theorem \[theorem1\] are consistent with the conjectural values in Conjecture \[conj1\]. Moreover, the error term can be improved in some cases. By some further study of $\mathsf M(n)$, roughly speaking, for any real number $A\ge 1$ we can improve the order of the error term from $\frac{n}{\ln n}$ to $\frac{n}{(\ln n)^A}$ for almost every $n\ge 1$.
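The two invariants $\mathsf M(n)$ and $\mathsf p(n,r)$ just defined are straightforward to compute. A small sketch (the function names are ours; we read the definition of $\mathsf p(n,r)$ as allowing the exponent $t=0$, so that $\mathsf p(n,1)=1$):

```python
def prime_power_factors(n):
    """Map each prime p dividing n to the full power p**a dividing n."""
    factors = {}
    d, m = 2, n
    while d * d <= m:
        while m % d == 0:
            factors[d] = factors.get(d, 1) * d
            m //= d
        d += 1
    if m > 1:
        factors[m] = factors.get(m, 1) * m
    return factors

def M(n):
    """Largest prime power dividing n; M(1) = 1 by convention."""
    return max(prime_power_factors(n).values(), default=1)

def p_of(n, r):
    """p(n, r): smallest power of the prime p with M(n) = p**a
    that is at least r (under our reading of the definition)."""
    pp = prime_power_factors(n)
    p = max(pp, key=pp.get)   # the prime achieving M(n)
    t = 1
    while t < r:
        t *= p
    return t

assert M(20) == 5 and M(40) == 2**3 and M(200) == 5**2
assert p_of(12, 5) == 8   # M(12) = 2**2, smallest power of 2 >= 5 is 8
```

The trial-division factorization is quadratic-rootish in $n$ and only meant for illustration, not for the asymptotic regimes discussed below.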
Furthermore, when the number of distinct prime divisors of $n$ is a given integer $m$, we can even improve the order of the error term to $n^{1-\frac{1}{m}}$. The rest of the paper is organized as follows. In Section 2, we introduce some notation and preliminary results. In Section 3, we prove our main results. In Section 4, we provide further studies of $\mathsf M(n)$ and then apply these results to improve our main results. Preliminaries ============= This section provides more rigorous definitions and notation. We also introduce some preliminary results that will be used repeatedly below. Let $\mathbb{N}$ denote the set of positive integers, $\mathbb{N}_0=\mathbb{N}\cup\{0\}$ and $\mathbb R$ the field of real numbers. Let $f$ and $g$ be real-valued functions, both defined on $\mathbb N$, such that $g(x)$ is strictly positive for all large enough values of $x$. We write $f(x)=O(g(x))$ if and only if there exist a positive real number $M$ and a positive integer $x_0$ such that $$|f(x)|\le M|g(x)|\qquad {\text{ for all }}x\geq x_{0}.$$ We also use the notation $O_r$ (resp. $O_{A,\epsilon}$), which means that the above $M$ may depend on $r$ (resp. on $A$ and $\epsilon$), where $r\in\mathbb N_0$, $A,\epsilon\in \mathbb R$. Similarly, we write $f(x)=o(g(x))$ if and only if for every positive constant $\varepsilon$ there exists a positive integer $x_0$ such that $$|f(x)|\le \varepsilon g(x)\qquad {\text{for all }}x\geq x_0.$$ Let $G$ be an additive finite abelian group. By the fundamental theorem of finite abelian groups we have $$G\cong C_{n_1}\oplus\cdots\oplus C_{n_r}$$ where $r=\mathsf r(G)\in \mathbb{N}_0$ is the rank of $G$ and $n_1\,|\,\cdots\,|\,n_r$ are positive integers. Moreover, $n_1,\ldots,n_r$ are uniquely determined by $G$, and $n_r=\exp(G)$ is called the $exponent$ of $G$.
We define a $sequence$ over $G$ to be an element of the free abelian monoid $\big(\mathcal F(G),{\boldsymbol{\cdot}}\big)$; see Chapter 5 of [@GH] for a detailed explanation. Our notation for sequences follows [@GeG]. In particular, in order to avoid confusion between exponentiation of the group operation in $G$ and exponentiation of the sequence operation ${\boldsymbol{\cdot}}$ in $\mathcal F (G)$, we define: $$g^{[k]}=\underset{k}{\underbrace{g{\boldsymbol{\cdot}}\ldots{\boldsymbol{\cdot}}g}}\in \mathcal F (G)\quad \text{and} \quad T^{[k]}=\underset{k}{\underbrace{T{\boldsymbol{\cdot}}\ldots{\boldsymbol{\cdot}}T}}\in \mathcal F (G) \,,$$ for $g \in G$,  $T\in \mathcal F (G)$ and $k \in \mathbb N_0$. We write a sequence $S$ in the form $$S=\prod_{g\in G}g^{[\textsf{v}_g(S)]}\text{ with }\textsf{v}_g(S)\in \mathbb{N}_0\text{ for all }g\in G.$$ We call - $\textsf{v}_g(S)$ the $multiplicity$ of $g$ in $S$, - $|S|=l=\sum_{g\in G}\textsf{v}_g(S)\in \mathbb{N}_0$ the $length$ of $S$, - $T=\prod_{g\in G}g^{[\textsf{v}_g(T)]}$ a $subsequence$ of $S$ if $\textsf{v}_g(T)\le \textsf{v}_g(S)$ for all $g\in G$, denoted $T\,|\,S$, - $\sigma(S)=\sum\limits_{i=1}\limits^{l}g_i=\sum_{g\in G}\textsf{v}_g(S)g\in G$ the $sum$ of $S$, where $S=g_1{\boldsymbol{\cdot}}\ldots{\boldsymbol{\cdot}}g_l$, - $S$ a $zero$-$sum$ $sequence$ if $\sigma(S)=0$, - $S$ a $zero$-$sum$ $free$ $sequence$ if $\sigma(T)\neq0$ for every nonempty $T\,|\,S$, - $S$ a $short$ $zero$-$sum$ $sequence$ if it is a zero-sum sequence of length $|S|\in[1,\text{exp}(G)]$. Using these concepts, we can define - $\mathsf D(G)$ as the smallest integer $l\in \mathbb{N}$ such that every sequence $S$ over $G$ of length $|S|\geq l$ has a non-empty zero-sum subsequence. We call $\mathsf D(G)$ the $Davenport$ $constant$ of $G$. - $\mathsf s_{k\exp(G)}(G)$ as the smallest integer $l\in \mathbb{N}$ such that every sequence $S$ over $G$ of length $|S|\geq l$ has a non-empty zero-sum subsequence $T$ of length $|T|=k\exp(G)$, where $k\in\mathbb N$.
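The Davenport constant just defined can also be computed by exhaustive search for very small groups; the sketch below (our own naive code) recovers the known small values $\mathsf D(C_3)=3$, $\mathsf D(C_2^2)=3$ and $\mathsf D(C_2^3)=4$:

```python
from itertools import combinations, combinations_with_replacement, product

def has_nonempty_zero_sum(seq, n):
    """Does seq (tuples over C_n^r) contain a nonempty zero-sum
    subsequence of any length?"""
    return any(all(sum(col) % n == 0 for col in zip(*sub))
               for l in range(1, len(seq) + 1)
               for sub in combinations(seq, l))

def davenport(n, r):
    """Davenport constant of C_n^r by naive search: the smallest t such
    that every length-t sequence has a nonempty zero-sum subsequence."""
    G = list(product(range(n), repeat=r))
    t = 1
    while not all(has_nonempty_zero_sum(seq, n)
                  for seq in combinations_with_replacement(G, t)):
        t += 1
    return t

assert davenport(3, 1) == 3   # D(C_n) = n
assert davenport(2, 2) == 3
assert davenport(2, 3) == 4
```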
We call $\mathsf s(G):=\mathsf s_{\exp(G)}(G)$ the Erdős-Ginzburg-Ziv constant and $\mathsf s_{k\exp(G)}(G)$ the $k$-th generalized Erdős-Ginzburg-Ziv constant. \[lemma1.1\][[([@GH], Theorem 5.5.9)]{}]{} Let $G$ be a finite abelian $p$-group with $G=C_{p^{n_{1}}}\oplus\cdots\oplus C_{p^{n_{r}}}$. Then $$\mathsf D(G)=\sum\limits_{i=1}\limits^{r}(p^{n_i}-1)+1.$$ \[lower\] Let $G$ be a finite abelian group with $\exp(G)=m$. Then $$\mathsf s_{km}(G)\ge km+\mathsf D(G)-1$$ holds for every $k\ge1$. By the definition of $\mathsf D(G)$, there exists a zero-sum free sequence $T$ of length $|T|=\mathsf D(G)-1$. Let $S=T{\boldsymbol{\cdot}}0^{[km-1]}$. Then $S$ is a sequence over $G$ of length $|S|=km+\mathsf D(G)-2$, and $S$ contains no zero-sum subsequence of length $km$: since $S$ contains only $km-1$ zeros, such a subsequence would have to contain a nonempty zero-sum subsequence of $T$, contradicting that $T$ is zero-sum free. This completes the proof. \[lemma1.2\][[([@Gao1], Theorem 3.2)]{}]{} Let $G$ be a finite abelian $p$-group with $\exp(G)=p^{n_r}$. If $p^{m+n_r}\ge\mathsf D(G)$ for some $m\in\mathbb N$, then $$\mathsf s_{kp^mp^{n_r}}(G)=k\cdot p^{m+n_r}+\mathsf D(G)-1$$ holds for any $k\in\mathbb N$. The following classical result of Alon and Dubiner is crucial in our proof. \[lemma2.1\] There exists an absolute constant $c>0$ such that $$\mathsf s(C_n^r)\le (cr\log_2r)^rn.$$ Although the precise values of $\mathsf s(G)$ for general $C_n^r$ are not known, some cases (when $n$ is a power of a small prime) have been determined. We list some of these results, which are very useful in our proof. \[lemma1.3\] Let $n\in\mathbb N$. 1. $\mathsf s(C_{2^n}^3)=8\cdot2^n-7$; 2. $\mathsf s(C_{3^n}^3)=9\cdot3^n-8$; 3. $\mathsf s(C_{2^n}^4)=16\cdot2^n-15$; 4. $\mathsf s(C_{3^n}^4)=20\cdot3^n-19$. \(1) See [@EE], Corollary 4.4. (2) See [@GHST], Theorem 1.7. (3) See [@EE], Corollary 4.4. (4) See [@EE], Theorems 1.3 and 1.4 and Section 5. In the rest of this section, we provide some results about $\mathsf M(n)$ which are useful in this paper.
Recall that, for any $n\in \mathbb N$, $$\mathsf M(n)=\max\{p^k\mid p^k\,|\,n,\ p\text{ a prime},\ k\in \mathbb N\}$$ is the largest prime power divisor of $n$; for convenience, $\mathsf M(1)=1$. For example, we have $\mathsf M(20)=\mathsf M(2^2\cdot5)=5$, $\mathsf M(40)=\mathsf M(2^3\cdot5)=2^3$ and $\mathsf M(200)=\mathsf M(2^3\cdot5^2)=5^2$. Unlike the widely studied largest prime divisor function $$\mathsf P(n)=\max\{p\mid p\,|\,n,\ p\text{ a prime}\},$$ the function $\mathsf M(n)$ has, as far as we know, not received much attention. As $\mathsf M(p)=p$ for every prime $p$, we certainly have $\limsup\limits_{n\rightarrow\infty}\frac{\mathsf M(n)}{n}=1$. It is known and easy to prove that $$\label{eq2} \liminf\limits_{n\rightarrow\infty}\frac{\mathsf M(n)}{\ln n}=1,$$ and consequently $$\label{eq3} \lim\limits_{n\rightarrow\infty}\mathsf M(n)=\infty.$$ Recently, Girard [@Gi] used (\[eq3\]) to show that $\mathsf D(C_n^r)=rn+o(n)$, which is an important result in zero-sum theory and can also be regarded as an example of an application of $\mathsf M(n)$. In this paper, we continue to apply estimates of $\mathsf M(n)$ to zero-sum problems. Although the proof of (\[eq2\]) is simple and elementary, it is hard to find this result in the literature or in standard textbooks, so we provide a proof here for the convenience of the reader. Let $$\pi(x)=\#\{p\le x\mid p\text{ a prime}\}$$ be the prime-counting function and $$\vartheta(x)=\sum_{p\le x}\ln p$$ the Chebyshev $\vartheta$ function. The result in the following lemma is classical and can easily be found in [@T]. \[lemma3.1\]For any $x\ge 2$, we have $$\pi(x)\le 2\frac{x}{\ln x}.$$ This result is an easy consequence of Theorem 3, Page 11 in [@T]. \[lemma3.2\] For any $n\in \mathbb N$, we have $$\mathsf M(n)\ge \frac{1}{2}\ln n.$$ When $n=p^m$ is a prime power, the result is obvious.
If $n$ is not a prime power, we may assume that $$n=q_1^{r_1}\cdots q_k^{r_k}p^m,$$ where $q_1<\cdots< q_k$ and $p$ are distinct prime numbers and $r_1,\ldots,r_k,m\in\mathbb N$, with $\mathsf M(n)=p^m$. By the definition of $\mathsf M(n)$, clearly we have $n\le p^mp^{km}$. Moreover, we have $k< \pi(p^m)$, where in the following $p_k$ denotes the $k$-th prime. Otherwise, if $k\ge\pi(p^m)$, then $$k=\pi(p_k)\ge\pi(p^m)\ge\pi(p)$$ and consequently $p_k\ge p$. As $q_1<\cdots< q_k$ and $p$ are distinct prime numbers, we have $q_k>p_k$. Therefore, $$\pi(q_k)>\pi(p_k)=k\ge\pi(p^m)$$ and consequently $q_k>p^m$. By the definition of $\mathsf M(n)$, we then have $\mathsf M(n)\ge q_k^{r_k}>p^m$, which contradicts $\mathsf M(n)=p^m$. Therefore $n\le p^{m\pi(p^m)}$, and by Lemma \[lemma3.1\] we have $$\frac{\mathsf M(n)}{\ln n}=\frac{p^m}{\ln n}\ge \frac{p^m}{\ln p^{m\pi(p^m)}} =\frac{p^m}{{\pi(p^m)\ln p^m}}\ge \frac{1}{2}.$$ This completes the proof. For sufficiently large $n$, $\mathsf P(n)$ may be rather small; for example, $\mathsf P(2^m)=2$ for any $m\in \mathbb N$. However, Lemma \[lemma3.2\] shows that $\mathsf M(n)$ cannot be too small for sufficiently large $n$. We shall use Lemma \[lemma3.2\] to prove our main results in the next section. In the following, we prove (\[eq2\]), which shows that $\ln n$ is actually the minimal order of $\mathsf M(n)$. Let $p_k$ denote the $k$-th prime and $n_k=p_1\cdots p_k\in \mathbb N$. Clearly, we have $\mathsf M(n_k)=p_k$ and $\ln n_k=\vartheta(p_k)$. By the Prime Number Theorem, for any $\epsilon>0$ there exists $k_0(\epsilon)>0$ such that for all $k>k_0(\epsilon)$ we have $$\frac{\mathsf M(n_k)}{\ln n_k}=\frac{p_k}{\vartheta(p_k)}\le (1+\epsilon).$$ Therefore we have $\liminf\limits_{n\rightarrow\infty}\frac{\mathsf M(n)}{\ln n}\le1$.
Similarly, from Lemma \[lemma3.2\] together with the Prime Number Theorem, for any $\epsilon>0$ there exists $n_0(\epsilon)>0$ such that for all $n>n_0(\epsilon)$ we have $$\frac{\mathsf M(n)}{\ln n}\ge\frac{p^m}{{\pi(p^m)\ln p^m}}\ge (1-\epsilon).$$ Therefore we have $\liminf\limits_{n\rightarrow\infty}\frac{\mathsf M(n)}{\ln n}\ge1$ and $$\liminf\limits_{n\rightarrow\infty}\frac{\mathsf M(n)}{\ln n}=1.$$ This completes the proof of (\[eq2\]). Proof of the main results ========================= In this section, we shall prove our main results, Theorem \[theorem1\]. First, we have to verify Conjecture \[conj1\] for some small primes, which are the cases left open in [@GaoThang; @HZ; @K]. \[lemma4.1\] For any $n\in\mathbb N$, we have 1. $\mathsf s_{k2^n}(C_{2^n}^3)=(k+3)2^n-3$, holds for $k\ge 4$; 2. $\mathsf s_{k3^n}(C_{3^n}^3)=(k+3)3^n-3$, holds for $k\ge 6$; 3. $\mathsf s_{k2^n}(C_{2^n}^4)=(k+4)2^n-4$, holds for $k\ge 12$; 4. $\mathsf s_{k3^n}(C_{3^n}^4)=(k+4)3^n-4$, holds for $k\ge 18$. By Lemma \[lower\], it suffices to prove $\mathsf s_{k\exp(G)}(G)\le k\exp(G)+\mathsf D(G)-1$. \(1) We argue by induction on $k$. Since $2^{2+n}=4\cdot 2^n\ge\mathsf D(C_{2^n}^3)=3\cdot2^n-2$, by Lemma \[lemma1.2\] we have $\mathsf s_{4\cdot 2^n}(C_{2^n}^3)=7\cdot2^n-3$. This proves the case $k=4$. Suppose that $k\ge 4$ and the result holds for all integers $j$ with $4\le j\le k$. We need to prove $\mathsf s_{(k+1)2^n}(C_{2^n}^3)=(k+1+3)2^n-3$. Let $S$ be any sequence over $C_{2^n}^3$ of length $$|S|=(k+1+3)2^n-3.$$ By Lemma \[lemma1.3\].(1) and the fact that $|S|\ge 8\cdot2^n-7$, $S$ contains a zero-sum subsequence $T$ of length $|T|=2^n$. Since $$|S{\boldsymbol{\cdot}}T^{-1}|=(k+3)2^n-3=\mathsf s_{k 2^n}(C_{2^n}^3),$$ the sequence $S{\boldsymbol{\cdot}}T^{-1}$ contains a zero-sum subsequence $U$ of length $|U|=k2^n$. Consequently, $T{\boldsymbol{\cdot}}U$ is a zero-sum subsequence of $S$ of length $|T{\boldsymbol{\cdot}}U|=(k+1)2^n$. This completes the proof.
\(2) We argue by induction on $k$. Since $3^{1+n}=3\cdot 3^n\ge\mathsf D(C_{3^n}^3)=3\cdot3^n-2$, by Lemma \[lemma1.2\] we have $\mathsf s_{6\cdot 3^n}(C_{3^n}^3)=9\cdot3^n-3$. This proves the case $k=6$. Suppose that $k\ge 6$ and the result holds for all integers $j$ with $6\le j\le k$. We need to prove $\mathsf s_{(k+1)3^n}(C_{3^n}^3)=(k+1+3)3^n-3$. Let $S$ be any sequence over $C_{3^n}^3$ of length $$|S|=(k+1+3)3^n-3.$$ By Lemma \[lemma1.3\].(2) and the fact that $|S|\ge 9\cdot3^n-8$, $S$ contains a zero-sum subsequence $T$ of length $|T|=3^n$. Since $$|S{\boldsymbol{\cdot}}T^{-1}|=(k+3)3^n-3=\mathsf s_{k 3^n}(C_{3^n}^3),$$ the sequence $S{\boldsymbol{\cdot}}T^{-1}$ contains a zero-sum subsequence $U$ of length $|U|=k3^n$. Consequently, $T{\boldsymbol{\cdot}}U$ is a zero-sum subsequence of $S$ of length $|T{\boldsymbol{\cdot}}U|=(k+1)3^n$. This completes the proof. \(3) We argue by induction on $k$. Since $2^{2+n}=4\cdot 2^n\ge\mathsf D(C_{2^n}^4)=4\cdot2^n-3$, by Lemma \[lemma1.2\] we have $\mathsf s_{12\cdot 2^n}(C_{2^n}^4)=16\cdot2^n-4$. This proves the case $k=12$. Suppose that $k\ge 12$ and the result holds for all integers $j$ with $12\le j\le k$. We need to prove $\mathsf s_{(k+1)2^n}(C_{2^n}^4)=(k+1+4)2^n-4$. Let $S$ be any sequence over $C_{2^n}^4$ of length $$|S|=(k+1+4)2^n-4.$$ By Lemma \[lemma1.3\].(3) and the fact that $|S|\ge 16\cdot2^n-15$, $S$ contains a zero-sum subsequence $T$ of length $|T|=2^n$. Since $$|S{\boldsymbol{\cdot}}T^{-1}|=(k+4)2^n-4=\mathsf s_{k 2^n}(C_{2^n}^4),$$ the sequence $S{\boldsymbol{\cdot}}T^{-1}$ contains a zero-sum subsequence $U$ of length $|U|=k2^n$. Consequently, $T{\boldsymbol{\cdot}}U$ is a zero-sum subsequence of $S$ of length $|T{\boldsymbol{\cdot}}U|=(k+1)2^n$. This completes the proof. \(4) We argue by induction on $k$. Since $3^{2+n}=9\cdot 3^n\ge\mathsf D(C_{3^n}^4)=4\cdot3^n-3$, by Lemma \[lemma1.2\] we have $\mathsf s_{18\cdot 3^n}(C_{3^n}^4)=22\cdot3^n-4$.
This proves the case $k=18$. Suppose that $k\ge 18$ and the result holds for all integers $j$ with $18\le j\le k$. We need to prove $\mathsf s_{(k+1)3^n}(C_{3^n}^4)=(k+1+4)3^n-4$. Let $S$ be any sequence over $C_{3^n}^4$ of length $$|S|=(k+1+4)3^n-4.$$ By Lemma \[lemma1.3\].(4) and the fact that $|S|\ge 20\cdot3^n-19$, $S$ contains a zero-sum subsequence $T$ of length $|T|=3^n$. Since $$|S{\boldsymbol{\cdot}}T^{-1}|=(k+4)3^n-4=\mathsf s_{k 3^n}(C_{3^n}^4),$$ the sequence $S{\boldsymbol{\cdot}}T^{-1}$ contains a zero-sum subsequence $U$ of length $|U|=k3^n$. Consequently, $T{\boldsymbol{\cdot}}U$ is a zero-sum subsequence of $S$ of length $|T{\boldsymbol{\cdot}}U|=(k+1)3^n$. This completes the proof. \[coro2\] Let $n,m\in\mathbb N$ and let $p$ be any prime. We have 1. $\mathsf s_{kp^m}(C_{p^m}^3)=(k+3)p^m-3$ for $k\ge6$; 2. $\mathsf s_{kp^m}(C_{p^m}^4)=(k+4)p^m-4$ for $k\ge18$. By Lemma \[lemma4.1\], the results hold for $p=2,3$. For $p\ge 5$, see Theorem 1.(3) in [@K] and Theorem 1.2.(3) in [@HZ]. The following crucial lemma is based on a standard argument in zero-sum theory (we refer to [@GH], Proposition 5.7.11). \[lemma4.2\] Let $n,m,p,k,r\in \mathbb N$ with $p$ a prime. Assume that $\mathsf s_{kp^m}(C_{p^m}^r)=(k+r)p^m-r$. Then we have $$\mathsf s_{knp^m}(C_{np^m}^r)\le(k+r)np^m+a_rn,$$ where $a_r$ is a constant depending on $r$. Let $S$ be a sequence of length $|S|=((k+r)p^m-r)n+\mathsf s(C_n^r)$ over $C_{np^m}^r$. Consider the natural projection $$\varphi:C_{np^m}^r\rightarrow C_n^r$$ (reduction modulo $n$ in each coordinate), whose kernel is isomorphic to $C_{p^m}^r$. Then $\varphi(S)$ is a sequence over $C_n^r$ of length $((k+r)p^m-r)n+\mathsf s(C_n^r)$. By repeatedly applying the definition of $\mathsf s(C_n^r)$, the sequence $S$ contains $(k+r)p^m-r$ pairwise disjoint subsequences $S_1,\ldots,S_{(k+r)p^m-r}$ with $|S_i|=n$ such that each $\varphi(S_i)$ is zero-sum over $C_n^r$, for $1\le i\le (k+r)p^m-r$.
This means that $$\sigma(S_1),\ldots,\sigma(S_{(k+r)p^m-r})\in \ker(\varphi)\cong C_{p^m}^r.$$ Since $(k+r)p^m-r=kp^m+\mathsf D(C_{p^m}^r)-1$ and, by assumption, $\mathsf s_{kp^m}(C_{p^m}^r)=(k+r)p^m-r$, there exists a subset $$\{i_1,\ldots,i_{kp^m}\}\subset\{1,\ldots,(k+r)p^m-r\}$$ such that $\sigma(S_{i_1})+\ldots+\sigma(S_{i_{kp^m}})=0$, and this implies that $S_{i_1}{\boldsymbol{\cdot}}\ldots{\boldsymbol{\cdot}}S_{i_{kp^m}}$ is a zero-sum subsequence of $S$ over $C_{np^m}^r$ of length $knp^m$. Therefore $$\mathsf s_{knp^m}(C_{np^m}^r)\le ((k+r)p^m-r)n+\mathsf s(C_n^r).$$ Moreover, by Lemma \[lemma2.1\], there exists an absolute constant $c$ such that $$((k+r)p^m-r)n+\mathsf s(C_n^r)\le ((k+r)p^m-r)n+(cr\log_2r)^rn.$$ Setting $a_r=(cr\log_2r)^r-r$ gives the desired result. For any fixed $r\in\mathbb N$, we denote $a_r=(cr\log_2r)^r-r$, where $c$ is the absolute constant from Lemma \[lemma2.1\]. The following corollary is an easy consequence of the above lemma. \[coro3\] Let $n,k,r\in \mathbb N$. Assume that $\mathsf M(n)=p^m$ and $$\mathsf s_{kp^m}(C_{p^m}^r)=(k+r)p^m-r.$$ Then we have $$\mathsf s_{kn}(C_n^r)\le (k+r)n+a_r\frac{n}{\mathsf M(n)}.$$ By Corollary \[coro3\], in order to prove the main results it suffices to combine it with the results about $\mathsf M(n)$ from Section 2. [*Proof of Theorem \[theorem1\].*]{} (1) By Corollaries \[coro2\].(1) and \[coro3\], for $k\ge 6$ we have $$\mathsf s_{kn}(C_n^3)\le(k+3)n+a_3\frac{n}{\mathsf M(n)}.$$ By Lemma \[lemma3.2\], for $k\ge 6$ we actually have $$\mathsf s_{kn}(C_n^3)\le(k+3)n+2a_3\frac{n}{\ln n},$$ and we get the desired result. \(2) By Corollaries \[coro2\].(2) and \[coro3\], for $k\ge 18$ we have $$\mathsf s_{kn}(C_n^4)\le(k+4)n+a_4\frac{n}{\mathsf M(n)}.$$ By Lemma \[lemma3.2\], for $k\ge 18$ we actually have $$\mathsf s_{kn}(C_n^4)\le(k+4)n+2a_4\frac{n}{\ln n},$$ and we get the desired result.
\(3) By Lemma \[lemma1.2\] and Corollary \[coro3\], for any $k\in \mathbb N$ we have $$\mathsf s_{k\mathsf p(n,r)n}(C_n^r)\le(k\mathsf p(n,r)+r)n+a_r\frac{n}{\mathsf M(n)}.$$ By Lemma \[lemma3.2\], for any $k\in \mathbb N$ we actually have $$\mathsf s_{k\mathsf p(n,r)n}(C_n^r)\le(k\mathsf p(n,r)+r)n+2a_r\frac{n}{\ln n},$$ and we get the desired result. Further studies about $\mathsf M(n)$ and some improvements ========================================================== In this section, we provide some further estimates for $\mathsf M(n)$ in some special cases. With these further estimates, we can improve our main results. All these results can be seen as applications of $\mathsf M(n)$. Note that, by (\[eq2\]), the order of the lower bound for $\mathsf M(n)$ in Lemma \[lemma3.2\] cannot be improved if the bound is to hold for every $n$. However, we can obtain better estimates in some special cases. We denote $$\mathsf E(x,y)=\{n\le x\text{ }|\text{ }\mathsf M(n)\le y\}$$ and $\overline{\mathsf E(x,y)}=\{n\le x\text{ }|\text{ }n\notin \mathsf E(x,y)\}$. Let $A\ge1$ be any real number. In the following we shall consider $$\mathsf E(x,(\ln x)^A)=\{n\le x\text{ }|\text{ }\mathsf M(n)\le (\ln x)^A\}.$$ Actually, we have the following lemma.
\[lemma3.3\] For any real number $A\ge 1$ and any $\epsilon>0$, we have $$|\mathsf E(x,(\ln x)^A)|=O_{A,\epsilon}(x^{1-\frac{1}{A}+\epsilon}).$$ Clearly we have $|\mathsf E(x,(\ln x)^A)|=\sum\limits_{n\le x\atop \mathsf M(n)\le (\ln x)^A}1$; hence for any $\delta\in(0,1)$, we have $$\sum\limits_{n\le x\atop \mathsf M(n)\le (\ln x)^A}1\le \sum\limits_{n\le x\atop \mathsf M(n)\le (\ln x)^A}\big(\frac{x}{n}\big)^\delta.$$ Similarly to the Euler product of the Riemann zeta function, by the fundamental theorem of arithmetic we have $$\begin{aligned} \sum\limits_{n\le x\atop \mathsf M(n)\le (\ln x)^A}\big(\frac{x}{n}\big)^\delta &\le x^\delta\prod_{p\le (\ln x)^A}(1-\frac{1}{p^{\delta}})^{-1}\\ &= x^\delta\prod_{p\le (\ln x)^A}(1+\frac{1}{p^{\delta}-1}).\end{aligned}$$ If we take $c_{\delta}=\frac{2^{\delta}}{2^{\delta}-1}$, then $\frac{1}{p^{\delta}-1}\le \frac{c_{\delta}}{p^{\delta}}$ and we have $$x^\delta\prod_{p\le (\ln x)^A}(1+\frac{1}{p^{\delta}-1})\le x^\delta\prod_{p\le (\ln x)^A}(1+\frac{c_{\delta}}{p^{\delta}}).$$ As $1+x\le e^x$ for any $x\ge 0$, we have $$\begin{aligned} x^\delta\prod_{p\le (\ln x)^A}(1+\frac{c_{\delta}}{p^{\delta}})&\le x^\delta\prod_{p\le (\ln x)^A}\exp(\frac{c_{\delta}}{p^{\delta}})\\ &=x^\delta \exp(\sum_{p\le (\ln x)^A}\frac{c_{\delta}}{p^{\delta}}).\end{aligned}$$ To estimate the last sum, we compare it with an integral: $$\begin{aligned} x^\delta \exp(\sum_{p\le (\ln x)^A}\frac{c_{\delta}}{p^{\delta}})&\le x^\delta \exp(\sum_{2\le n\le (\ln x)^A}\frac{c_{\delta}}{n^{\delta}})\\ &\le x^\delta \exp(c_{\delta}\int_1^{(\ln x)^A}\frac{1}{t^{\delta}}dt).\end{aligned}$$ Therefore $$\begin{aligned} x^\delta \exp(c_{\delta}\int_1^{(\ln x)^A}\frac{1}{t^{\delta}}dt)&= x^\delta \exp\big(\frac{c_{\delta}}{1-\delta}((\ln x)^{A(1-\delta)}-1)\big)\\ &=\exp(\frac{c_{\delta}}{\delta-1})x^\delta \exp\big(\frac{c_{\delta}}{1-\delta}((\ln x)^{A(1-\delta)})\big).\end{aligned}$$ Now, we take $\delta=1-\frac{1}{A}+\frac{\epsilon}{2}$ (we may assume $\epsilon<\frac{2}{A}$, so that $\delta\in(0,1)$).
Since $$A(1-\delta)=1-\frac{A\epsilon}{2}<1$$ and $$\exp(\frac{c_{\delta}}{1-\delta}((\ln x)^{A(1-\delta)}))=O_{A,\epsilon}(x^{\frac{\epsilon}{2}}),$$ we have $$\exp(\frac{c_{\delta}}{\delta-1})x^\delta \exp(\frac{c_{\delta}}{1-\delta}((\ln x)^{A(1-\delta)}))=O_{A,\epsilon}(x^{1-\frac{1}{A}+\epsilon}).$$ This completes the proof. Recall that for any fixed $r\in\mathbb N$ we denote $a_r=(cr\log_2r)^r-r$, where $c$ is the absolute constant from Lemma \[lemma2.1\]. Let $$\mathbb S_{k}^r(x,A)= \left\{ \begin{array}{ll}&\{n\le x\text{ }|\text{ }\mathsf s_{kn}(C_n^r)\le(k+r)n+a_r\frac{n}{(\ln n)^A}\}, \mbox{ if } r=3 \mbox{ or 4}, \\&\{n\le x\text{ }|\text{ }\mathsf s_{k\mathsf p(n,r)n}(C_n^r)\le(k\mathsf p(n,r)+r)n+a_r\frac{n}{(\ln n)^A}\}, \mbox{ if } r\ge5 \end{array} \right.$$ and $$\overline{\mathbb S_{k}^r(x,A)}=\{n\le x\text{ }|\text{ }n\notin \mathbb S_{k}^r(x,A)\}.$$ \[theorem4.1\] For any $A\ge 1$ and $\epsilon>0$, we have 1. $|\overline{\mathbb S_{k}^3(x,A)}|=O_{A,\epsilon}(x^{1-\frac{1}{A}+\epsilon})$ for $k\ge 6$; 2. $|\overline{\mathbb S_{k}^4(x,A)}|=O_{A,\epsilon}(x^{1-\frac{1}{A}+\epsilon})$ for $k\ge 18$; 3. $|\overline{\mathbb S_{k}^r(x,A)}|=O_{A,\epsilon}(x^{1-\frac{1}{A}+\epsilon})$ for $r\ge 5$ and any $k\in\mathbb N$. In particular, 1. For $k\ge 6$, we have $$\lim_{x\rightarrow\infty}\frac{|\overline{\mathbb S_{k}^3(x,A)}|}{x}=0;$$ 2. For $k\ge 18$, we have $$\lim_{x\rightarrow\infty}\frac{|\overline{\mathbb S_{k}^4(x,A)}|}{x}=0;$$ 3. For $r\ge 5$ and any $k\in\mathbb N$, we have $$\lim_{x\rightarrow\infty}\frac{|\overline{\mathbb S_{k}^r(x,A)}|}{x}=0.$$ By Corollaries \[coro2\].(1) and \[coro3\] and the definition of $\overline{\mathsf E(x,(\ln x)^A)}$, it is easy to see that $$\overline{\mathsf E(x,(\ln x)^A)}\subset \mathbb S_{k}^3(x,A).$$ Therefore, we have $$\overline{\mathbb S_{k}^3(x,A)}\subset \mathsf E(x,(\ln x)^A).$$ The desired result follows from Lemma \[lemma3.3\].
In particular, taking $\epsilon=\frac{1}{2A}$ in the above result, we obtain $$\lim_{x\rightarrow\infty}\frac{|\overline{\mathbb S_{k}^3(x,A)}|}{x}\le\lim_{x\rightarrow\infty}\frac{|\mathsf E(x,(\ln x)^A)|}{x} =\lim_{x\rightarrow\infty}\frac{O(x^{1-\frac{1}{2A}})}{x}=0.$$ This completes the proof of (1). The proofs of (2) and (3) are similar. According to Theorem \[theorem4.1\], for any $A\ge 1$ and, roughly speaking, for almost every $n\ge 1$, we have 1. For $k\ge 6$, $$\mathsf s_{kn}(C_n^3)\le (k+3)n+a_3\frac{n}{(\ln n)^A};$$ 2. For $k\ge 18$, $$\mathsf s_{kn}(C_n^4)\le (k+4)n+a_4\frac{n}{(\ln n)^A};$$ 3. For any $k\in\mathbb N$ and fixed $r\ge 5$, $$\mathsf s_{k\mathsf p(n,r)n}(C_n^r)\le (k\mathsf p(n,r)+r)n+a_r\frac{n}{(\ln n)^A}.$$ Let $\omega(n)$ denote the number of distinct prime divisors of $n$. In the following, we improve the error terms for those $n\in\mathbb N$ for which $\omega(n)$ equals a given integer $m$. \[lemma4.2\] For any integer $n\ge2$, we have $$\mathsf M(n)\ge n^{\frac{1}{\omega(n)}}.$$ For any such $n$, we may write $n=q_1^{r_1}\cdots q_m^{r_m},$ where $q_1^{r_1}<\cdots< q_m^{r_m}$, the $q_1,\ldots,q_m$ are distinct prime numbers and $r_1,\ldots,r_m\in\mathbb N$. Then, by the definition of $\mathsf M(n)$, we have $\mathsf M(n)=q_m^{r_m}$. Clearly $\omega(n)=m$ and $$n=q_1^{r_1}\cdots q_m^{r_m}\le q_m^{mr_m}=\mathsf M(n)^{\omega(n)},$$ and consequently $\mathsf M(n)\ge n^{\frac{1}{\omega(n)}}$. Let $n,r,m\in \mathbb N$ with $\omega(n)=m$. We have 1. For $k\ge 6$, $$\mathsf s_{kn}(C_n^3)\le(k+3)n+a_3n^{1-\frac{1}{m}};$$ 2. For $k\ge 18$, $$\mathsf s_{kn}(C_n^4)\le(k+4)n+a_4n^{1-\frac{1}{m}};$$ 3.
For any $k\in\mathbb N$ and fixed $r\ge 5$, $$\mathsf s_{k\mathsf p(n,r)n}(C_n^r)\le(k\mathsf p(n,r)+r)n+a_rn^{1-\frac{1}{m}}.$$ \(1) By Corollaries \[coro2\] and \[coro3\], Lemma \[lemma4.2\] and $\omega(n)=m$, for $k\ge 6$ we have $$\begin{aligned} &\mathsf s_{kn}(C_n^3)\le(k+3)n+a_3\frac{n}{\mathsf M(n)}\le(k+3)n+a_3\frac{n}{n^{\frac{1}{\omega(n)}}}\\ &=(k+3)n+a_3\frac{n}{n^{\frac{1}{m}}}=(k+3)n+a_3n^{1-\frac{1}{m}}.\end{aligned}$$ This completes the proof. The proofs of (2) and (3) are similar. Compared with the previous error terms $\frac{n}{\ln n}$ and $\frac{n}{(\ln n)^A}$, the error term $n^{1-\frac{1}{m}}$ is a substantial improvement, and it is valid for every $n\in\mathbb N$ with $\omega(n)=m$. Acknowledgments {#acknowledgments .unnumbered} --------------- D.C. Han was supported by the National Science Foundation of China Grant No.11601448 and the Fundamental Research Funds for the Central Universities Grant No.2682016CX121. H.B. Zhang was supported by the National Science Foundation of China Grant No.11671218 and China Postdoctoral Science Foundation Grant No.2017M620936. The authors would like to thank Prof. Weidong Gao for many useful comments and corrections. [10]{} N. Alon and M. Dubiner, *A lattice point problem and additive number theory*, Combinatorica **15**(1995) 301-309. N. Alon and M. Dubiner, *Zero-sum sets of prescribed size*, in: Combinatorics, Paul Erdős is Eighty, Bolyai Society, Mathematical Studies, Keszthely, Hungary, 1993, 33-50. J. Bitz, C. Griffith and X. He, *Exponential Lower Bounds on the Generalized Erdős-Ginzburg-Ziv Constant*, arXiv:1712.00861. Y. Edel, C. Elsholtz, A. Geroldinger, S. Kubertin and L. Rackham, *Zero-sum problems in finite abelian groups and affine caps*, Q. J. Math. **58**(2007) 159-186. C. Elsholtz, *Lower bounds for multidimensional zero sums*, Combinatorica **24**(2004) 351-358. P. Erdős, A. Ginzburg and A. Ziv, *Theorem in the additive number theory*, Bull. Res. Council Israel **10**(1961) 41-43. Y. Fan, W.
Gao and Q. Zhong, *On the Erdős-Ginzburg-Ziv constant of finite abelian groups of high rank*, J. Number Theory **131**(2011) 1864-1874. W. Gao, *On zero-sum subsequences of restricted size. II*, Discrete Math. **271**(2003) 51-59. W. Gao, *A combinatorial problem on finite abelian groups*, J. Number Theory **58**(1996) 100-103. W. Gao and A. Geroldinger, *Zero-sum problems in finite abelian groups: A survey*, Expo. Math. **24**(2006) 337-369. W. Gao, D. Han, J. Peng and F. Sun, *On zero-sum subsequences of length $k\exp(G)$*, J. Combin. Theory Ser. A **125**(2014) 240-253. W. Gao, Q. Hou, W. Schmid, and R. Thangadurai, *On short zero-sum subsequences II*, Integers: Electronic Journal of Combinatorial Number Theory **7**(2007), paper A21, 22 pp. W. Gao and R. Thangadurai, *On zero-sum sequences of prescribed length*, Aequationes Math. **72**(2006) 201-212. J. Geneson, *Improved lower bound on generalized Erdős-Ginzburg-Ziv constants*, arXiv:1712.02069. A. Geroldinger and D.J. Grynkiewicz, *The large Davenport constant I: groups with a cyclic, index 2 subgroup*, J. Pure Appl. Algebra **217**(2013) 863-885. A. Geroldinger, D. Grynkiewicz and W. Schmid, *Zero-sum problems with congruence conditions*, Acta Math. Hungar. **131**(2011) 323-345. A. Geroldinger and F. Halter-Koch, *Non-unique Factorizations. Algebraic, Combinatorial and Analytic Theory*, Pure Appl. Math., vol. 278, Chapman $\&$ Hall/CRC, 2006. B. Girard, *An asymptotically tight bound for the Davenport constant*, Journal de l’École polytechnique - Mathématiques **5**(2018) 605-611. D. Han and H. Zhang, *On zero-sum subsequences of prescribed length*, Int. J. Number Theory **14**(2018) 167-191. X. He, *Zero-sum subsequences of length $kq$ over finite abelian $p$-groups*, Discrete Math. **339**(2016) 399-407. H. Harborth, *Ein Extremalproblem für Gitterpunkte*, J. Reine Angew. Math. **262**(1973) 356-360. S. Kubertin, *Zero-sums of length $kq$ in ${\mathbb Z}^d_q$*, Acta Arith.
**116**(2005) 145-152. A. Sidorenko, *Extremal problems on the hypercube and the codegree Turán density of complete $r$-graphs*, arXiv:1710.08228. A. Sidorenko, *On generalized Erdős-Ginzburg-Ziv constants for $\mathbb Z_2^d$*, arXiv:1808.06555. G. Tenenbaum, *Introduction to Analytic and Probabilistic Number Theory*, Cambridge Studies in Advanced Mathematics, 46, Cambridge University Press, 1995.
--- abstract: 'An analysis of previous theories of superfluidity of quantum solids is presented in relation to the nonclassical rotational moment of inertia (NCRM) first found in the experiments of Kim and Chan. A theory of supersolidity is proposed, based on the presence of an additional conservation law. It is shown that the additional entropy or mass fluxes depend on the quasiparticle dispersion relation and vanish in the effective mass approximation. This implies that at low temperatures, when the parabolic part of the dispersion relation predominates, the supersolid properties should be less pronounced.' author: - 'Dimitar I. Pushkarov' title: ' Is the supersolid superfluid?' --- Introduction ============ The experiments of Kim and Chan [@KC1] breathed new life into the old idea of possible superfluidity of solids. A quantum solid possessing superfluid properties has been called a supersolid. Originally, a supersolid was understood to be a crystalline body in which a nondissipative mass current can occur. This should correspond to the superfluid state of liquid helium (helium-II) observed by Kapitza and explained theoretically first by Landau. Superfluidity is now well studied, and a number of effects have been found, predicted and explained. Among them is the change of the rotational moment due to the fact that the superfluid fraction cannot be involved in rotation at velocities less than the critical one. The qualitative explanation of such behavior, according to Landau, is that at small velocities no excitations can be generated. A successful hydrodynamical description is given by the so-called two-fluid (or two-velocity) hydrodynamics. From a mathematical point of view, the new element in the two-velocity hydrodynamic equations is the potentiality of the superfluid velocity $\mathbf{v}_s$, whose equation of motion reads $\partial\mathbf{v}_s/\partial t = -\nabla \mu$, with $\mu$ the chemical potential in the frame where $\mathbf{v}_s=0$. As a result, a new vibrational mode, the second sound, appears.
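For reference, the dissipationless two-velocity hydrodynamics mentioned above can be summarized in the standard Landau form (quoted here only to fix notation; $\rho=\rho_s+\rho_n$, with $s$ the entropy and $c$ the heat capacity per unit mass): $$\frac{\partial\rho}{\partial t}+\operatorname{div}\mathbf j=0,\qquad \mathbf j=\rho_s\mathbf v_s+\rho_n\mathbf v_n,$$ $$\frac{\partial \mathbf v_s}{\partial t}=-\nabla\Big(\mu+\frac{v_s^2}{2}\Big),\qquad \frac{\partial(\rho s)}{\partial t}+\operatorname{div}(\rho s\,\mathbf v_n)=0.$$ Linearizing these equations yields, besides ordinary (first) sound, the second-sound mode with velocity $$u_2^{\,2}=\frac{\rho_s}{\rho_n}\,\frac{Ts^2}{c},$$ which is the temperature-entropy wave referred to in the text.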
The phase transition into a superfluid state is well defined, and the corresponding changes of the thermodynamic characteristics are well investigated. The quantum-mechanical consideration connects superfluidity with Bose-Einstein condensation (BEC). Later on, this kind of condensation in momentum space was observed in some gases as well. This is the reason to speak of a macroscopic quantum state described by the condensate wave function. In their works Kim and Chan observed a nonclassical rotational moment (NCRM), i.e. a rotational moment of inertia which changes its value with temperature in a way bearing a resemblance to the Kapitza experiments with rotating liquid helium. They argue that this is enough to conclude that the body has been in a supersolid state and that superfluidity has finally been observed in all three states of matter (gas, liquid and solid). Lately, the term supersolid has become a synonym for a body with NCRM. The first reasonable question is whether the NCRM implies superfluidity (supersolidity). Is the supersolid state “superfluid”, or is this evidence of a new phenomenon, maybe more interesting than superfluidity itself, but nevertheless of a different kind? The existing experimental observations and theoretical analyses have not yet given an unambiguous answer. Originally, the concept of a supersolid appeared for a crystalline body inside which a nondissipative (macroscopic) mass current can exist. The first considerations (Penrose and Onsager, Andreev and Lifshits, Leggett, Chester etc.) had crystalline bodies in mind. Defects in such crystals are imperfections of *the crystal lattice*, or a lattice with ideal periodicity but with fewer atoms than lattice sites (Andreev-Lifshits). The first question is therefore whether the experiments can be understood from such a point of view. Most probably this is not the case.
Let us first consider the validity of the Landau derivation of the critical velocity. In liquid helium, the energy in the frame where the superfluid velocity is zero can be written in the form: $$\label{Landau} E = E_0 + \mathbf{P}_0 \mathbf{v}+ \frac{1}{2} Mv^2, \quad \mathbf{P} = \mathbf{P}_0 + M\mathbf{v}$$ The same relation for an elementary excitation $\varepsilon(p)$ reads $$\label{Land2} E = \varepsilon(p) + \mathbf{p} \mathbf{v}+ \frac{1}{2} Mv^2$$ where $\mathbf{p}$ is the momentum in the frame where $\mathbf{v}=0$. The least possible change in energy due to the excitation created is $\varepsilon(p) - pv$ and should be negative in order to reduce the energy of the system. This yields $$\label{Land3} v > \varepsilon(p)/p .$$ It is worth noting that equation (\[Landau\]) is always valid because it follows directly from the Galilean principle for *macroscopic* quantities. Relation (\[Land2\]) corresponds to the *microscopic* characteristics of the *elementary excitation*. In a homogeneous and isotropic (Newtonian) space these two relations coincide. However, this is not the case in a crystalline solid, where quasiparticle states are classified with respect to the quasimomentum, not the momentum. The quasimomentum is simply a quantum number which appears due to the periodicity of the lattice. There are *no Galilean transformations* for quasiparticle characteristics. The transformation relations which replace the Galilean ones were derived in [@AP85; @Singapore; @Nauka; @PhysRep]. The macroscopic momentum (the mass flux) is not a mean value of the quasimomentum, but of the product $\dx m \frac{\partial \varepsilon}{\partial \mathbf{k}}$ with $\mathbf{k}$ for the quasimomentum. In addition, phonons in crystals have zero momentum and do not transfer mass, in contrast to the phonons in liquids. All this implies that the Landau criterion (\[Land3\]) does not work in crystalline bodies.
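As an illustrative aside, the Landau criterion (\[Land3\]) amounts to minimizing $\varepsilon(p)/p$ over the excitation spectrum. The sketch below evaluates this minimum numerically for an assumed Bogoliubov-like dispersion in reduced units; the dispersion and all parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

def landau_critical_velocity(eps, p):
    """Landau critical velocity v_c = min_p eps(p)/p over a sampled spectrum."""
    return np.min(eps / p)

# Assumed Bogoliubov-like dispersion in reduced units (sound speed c = 1):
# eps(p) = sqrt(c^2 p^2 + p^4/4), so eps/p = sqrt(c^2 + p^2/4).
p = np.linspace(1e-4, 5.0, 100_000)
c = 1.0
eps = np.sqrt(c**2 * p**2 + 0.25 * p**4)

v_c = landau_critical_velocity(eps, p)
# Here eps/p grows monotonically with p, so the minimum sits at p -> 0
# and v_c approaches the sound speed c, as in Landau's argument.
```

For a spectrum with a roton-like minimum the same minimization would instead pick out the roton region, which is why the criterion is sensitive to the full dispersion curve.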
Its ‘quasiparticle’ analogue should look like $$\label{} v^{-1} > \frac{m}{\varepsilon} \frac{\partial \varepsilon}{\partial \mathbf{k}} = m \frac{\partial \ln{\varepsilon}}{\partial \mathbf{k}}$$ or $$\label{} m v < \left( \frac{\partial\ln{\varepsilon}}{\partial \mathbf{k}} \right)^{-1} .$$ But there is still the question of what, say, the phonon quasimomentum is in the frame co-moving with the superfluid fraction. In addition, $m=0$ for acoustic phonons. To avoid any misunderstanding, let us stress again that whatever the dispersion relation of the elementary excitations and the spectrum classification parameter (momentum or quasimomentum), the macroscopic fluxes have to obey the Galilean relation (\[Landau\]). Next, it is very important that the conservation laws (which are the basis of the hydrodynamics) exist only in an *inertial laboratory frame*. And this laboratory frame is privileged, not Galilean (see more details in Ref. [@PhysRep]). If one considers Bose-condensation of *quasiparticles*, then such a condensate in a crystal must be characterized by a value of the *quasimomentum* in a privileged frame. Finally, the particles or quasiparticles (say vacancions) undergoing Bose-condensation should interact weakly enough. It was shown [@meStat] that the vacancion gas is most ideal near the middle of the energy band, not at the bottom. It is seen, therefore, that the situation in a crystalline body is completely different compared to liquids and gases. Nevertheless, the first hydrodynamical theory of the superfluidity of solids [@AL69] was developed in close analogy with the Landau theory of helium II. Andreev and Lifshits introduced two velocities for a normal and a superfluid fraction of the solid and applied the potentiality condition for the superfluid velocity, $\dot{\mathbf{v}}_s = -\nabla \mu$.
They used the Galilean invariance, so in the frame where the superfluid component is at rest ($\mathbf{v}_s = 0$), the energy per unit volume is: $$\label{} E = \rho {v_s}^2/2 + \mathbf{ p}.{\bf v}_s + \epsilon, \quad \mathbf{ j} = \rho \mathbf{v}_s + {\bf p}$$ where $\mathbf{j}$ is the momentum per unit volume equal to the mass flow, while $\mathbf{p}$ is the momentum in the frame with $\mathbf{v}_s =0$, $$\label{} \epsilon = \epsilon( S,\rho, w_{ik})$$ is the internal energy as a function of the entropy, density and the distortion (not symmetric) tensor $$w_{ik} = \frac {\partial u_{i}}{\partial x_k}.$$ The tensor of small deformations is, as usual, $$u_{ik} =\frac{1}{2}\left\{\frac {\partial u_{i}}{\partial x_k}+ \frac {\partial u_k}{\partial x_i}\right\}$$ and its trace equals the relative variation of the volume $$u_{ii} = w_{ii} = \delta V/V$$ A new point is that this trace is now not connected to the density variation by the usual relation, i.e. $$\label{} w_{ii} \ne - \frac {\delta \rho}{\rho}$$ In this notation, $$\label{} d\epsilon = TdS + \lambda_{ik}w_{ik} + \mu d\rho + ({\bf v}_n - {\bf v}_s) d{\bf p}.$$ A standard procedure follows based on the conservation laws: $$\label{} \dot \rho + \div {\bf j} = 0, \qquad \frac {\partial j_i}{\partial t} + \frac {\partial \Pi_{ik}} {\partial x_k} = 0.$$ $$\label{} {\dot S} + \div (S{\bf v}_n + {{\bf q} / T}) = {R / T}, \quad (R > 0 )$$ $$\label{} {\dot {\bf v}}_s + \nabla \varphi = 0.$$ The unknown quantities $\Pi_{ik}, \varphi, {\bf q}, R $ have to be determined so as to satisfy the redundant energy conservation law: $$\label{} \dot E + \div {\bf Q} = 0.$$ The time derivative of $E$ reads: $$\begin{aligned} \dot E &=& T \dot S + \lambda_{ik}{\frac {\partial {\dot u}_{k}}{\partial x_i}} - \mu \div {\bf j} - \div \left({ \frac {{v_s}^2} {2}}{\bf j} \right) + {\bf j} \nabla {\frac {{v_s}^2}{2}} \\ &-& ({\bf j} - \rho {\bf v}_n) \nabla \varphi - v_{ni} {\frac {\partial \Pi_{ik}}{\partial x_k}} + {\bf v}_n
{\bf v}_s \div {\bf j} \nonumber\\ &=& - \div \left( {\bf j}{{{v_s}^2} \over{ 2}} + ST{\bf v}_n + {\bf v}_n ({\bf v}_n {\bf p}) \right) + T(\dot S + \div S{\bf v}_n ) \nonumber\\ &+& \lambda_{ik} {\frac {\partial {\dot u}_k} { \partial x_i}} + ({\bf j} - \rho {{\bf v}_{n}}) \nabla \left( \varphi - {{{v_s}^2} \over 2} \right) - \rho {\bf v}_n \nabla \mu \nonumber\\ &-& v_{ni} {\frac {\partial} {\partial x_k}} \left\{ \Pi_{ik} - \rho v_{si}v_{sk} + {v_{si} p_k} + v_{sk} p_i \right. \nonumber\\ &+& \left. [-\epsilon +TS + ({\bf v}_n - {\bf v}_s){\bf p} + \mu \rho] \delta_{ik} \right\} - \mu \div {\bf j}. \nonumber \end{aligned}$$ Here, a term of the form $ \dx v_{ni} \lambda_{kl} \frac {\partial w_{kl} }{\partial x_i} $ is neglected as cubic in “normal motion”. With the aid of the conservation laws the time derivative of the energy was written in the form [@AL69]: $$\begin{aligned} \label{Edot} \dot E &+& \div \left\{ \left( {{{v_s}^2}\over 2} + \mu \right) {\bf j} + ST{\bf v}_n + {\bf v}_n ({\bf v}_n {\bf p}) + {\bf q} + \varphi ({\bf j} - \rho {\bf v}_n) + \right. \nonumber\\ &+& \left. v_{nk}\pi_{ki} - \lambda_{ik}{\dot u}_k \right\} = \nonumber\\ &=& R + \pi_{ik} \frac {\partial v_{ni}}{\partial x_k} + \psi \div ( {\bf j} - \rho {\bf v}_n) + {{{\bf q} \nabla T} \over T} + ( v_{nk} - {\dot u}_k) \frac {\partial \lambda_{ik}}{\partial x_i}, \end{aligned}$$ This yields the following expressions for the fluxes: $$\begin{aligned} \label{Pi-ik} \Pi_{ik} &=& \rho v_{si} v_{sk} + v_{si}p_k + v_{nk} p_i \nonumber\\ &+& [- \epsilon + TS + ({\bf v}_n - {\bf v}_s) {\bf p} + \mu \rho ] \delta_{ik} - \lambda_{ik} + \pi_{ik}, \\ \varphi &=& {{v_s}^2 \over 2} + \mu + \psi.
\end{aligned}$$ $$\begin{aligned} \label{Q} {\bf Q} &=& \left( {\frac {{v_s}^2}{2}} + \mu \right) {\bf j} + ST{\bf v}_n + {\bf v}_n ({\bf v_n}{\bf p}) + \mathbf{q} \nonumber\\ &+& \psi({\bf j} - \rho {\bf v}_n) + v_{nk} \pi_{ki} - \lambda_{ik} {\dot u}_k \end{aligned}$$ and the dissipation function of the crystal is $$\begin{aligned} R = - \pi_{ik}{\frac {\partial v_{ni}}{\partial x_k}} - \psi \div({\bf j}- \rho {\bf v}_n) - {{{\bf q}\nabla T}\over T} - (v_{nk} - {\dot u}_k) {\frac {\partial \lambda_{ik}}{\partial x_i}}\end{aligned}$$ We shall not write here the relations between $\pi_{ik}, \psi, q$ and $(\mathbf{v_n - \dot u})$ that follow from the Onsager principle and the positivity of the dissipative function. The main consequence is that when dissipation is neglected one has $\mathbf{v}_n = \mathbf{\dot u}$. The normal motion is therefore the motion of the lattice sites (which may not coincide with given atoms). The superfluid flow could hence be possible on a given (even immobile) lattice structure. However, instead of (\[Edot\]), the time derivative $\dot E$ can also be written in the form: $$\begin{aligned} \label{Edot2} \dot E &+& \div \left\{ \left( {{{v_s}^2}\over 2} + \mu \right) {\bf j} + ST{\bf v}_n + {\bf v}_n ({\bf v}_n {\bf p}) + {\bf q} + \psi ({\bf j} - \rho {\bf v}_n) + \right. \\ &+& \left. v_{nk}\pi_{ki} - \lambda_{ik}{v}_{nk} \right\} = \nonumber \\ &=& R + \pi_{ik} \frac {\partial v_{nk}}{\partial x_i} + \psi \div ( {\bf j} - \rho {\bf v}_n) + {{{\bf q} \nabla T} \over T} + \lambda_{ik}\frac {\partial}{\partial x_i}({\dot u}_k - v_{nk}), \nonumber\end{aligned}$$ which leads to other expressions for the fluxes. In this case the nondissipative theory yields: $$\begin{aligned} \dot E &+& \div \left\{ \left( {{{v_s}^2}\over 2} + \mu \right) {\bf j} + ST{\bf v}_n + {\bf v}_n ({\bf v}_n {\bf p}) + \left(\varphi - \mu - \frac {v_{s}^{2}}{2}\right) ({\bf j} - \rho {\bf v}_n) + \right. \nonumber \\ &+& \left. v_{nk} \left[ \Pi_{ki} - \rho v_{si} v_{sk} + v_{sk}p_i + v_{ni} p_k - [- \epsilon + TS + ({\bf v}_n - {\bf v}_s) {\bf p} + \mu \rho ] \delta_{ik} \right] \right\} = \nonumber \\ &=& \left\{ \Pi_{ki} - \rho v_{si} v_{sk} + v_{sk}p_i + v_{ni} p_k - [- \epsilon + TS + ({\bf v}_n - {\bf v}_s) {\bf p} + \mu \rho ] \delta_{ik} \right\} \frac{\partial v_{nk}}{\partial x_i} \nonumber \\ &+& \left(\varphi - \mu - \frac {v_{s}^{2}}{2}\right) \div ( {\bf j} - \rho {\bf v}_n) + \lambda_{ik}\frac {\partial {\dot u}_k}{\partial x_i} \nonumber\end{aligned}$$ and hence, $$\begin{aligned} \label{Pi-ik2} \Pi_{ik} &=& \rho v_{si} v_{sk} + v_{si}p_k + v_{nk} p_i + \left[- \epsilon + TS + ({\bf v}_n - {\bf v}_s) {\bf p} + \mu \rho \right] \delta_{ik} \\ \varphi &=& \mu + \frac {v_{s}^{2}}{2}, \quad \qquad \lambda_{ik} = 0 \quad {\hbox { !!!}} \end{aligned}$$ The procedure used is, therefore, not unique. The relation $\mathbf{v}_n = \mathbf{\dot u}$ was not derived, but presupposed. In fact, the consideration started as a three-velocity theory ($\mathbf{\dot u}, \mathbf{v_n}, \mathbf{v_s}$), and the identity $\mathbf{\dot u} = \mathbf{v_n}$ follows from the condition that the time derivative of the total energy not depend on $\frac{\partial \lambda_{ik}}{\partial x_k}$, which is not well grounded. Next, the conservation laws are written in the frame where $\mathbf{v_s} = 0$, and this is not the laboratory frame in which the lattice sites are in their equilibrium positions. That is why we turned to another approach, based on our theory of the quasiparticle kinetics and dynamics in deformable crystalline bodies [@PhysRep; @Singapore; @Nauka; @Pushk2fluid].
This theory works with an exact (within the quasiparticle approach) self-consistent set of equations including the nonlinear elasticity theory equation and a transport Boltzmann-like equation valid in the whole Brillouin zone for quasiparticles with an arbitrary dispersion law. The theory is developed for crystalline bodies subject to time-varying deformations and arbitrary velocities. Partition Function and Thermodynamic Relations ============================================== Let us consider a gas of quasiparticles with dispersion law  $ \epsilon (\bf k)$  at low temperatures, when the frequency of normal processes is much larger than that of the Umklapp processes, i.e. $$\tau_{n}^{-1} \gg \tau_{U}^{-1} .$$ The distribution function  $ n_{k}({\bf k},\, {\bf r}, \,t)$  corresponds to  $ S_{max}$  with conserved energy $E$, quasiparticle density $n$, quasimomentum $\mathbf{K}$ and momentum (mass flow) $\mathbf{j}$ defined, respectively, as: $$S({\bf r},\, t) = \int s[n_{k}]\, d {\bf k},$$ where $ s[n_{k}] = (1+ n_{k}) \ln (1 + n_{k}) - n_{k} \ln n_{k} $ $$\begin{aligned} E({\bf r},\, t) & =& \int \epsilon_{k} \, n_{k}\, d {\bf k} \\ n({\bf r},\, t) & =& \int n_{k}\, d {\bf k} \\ {\bf K}({\bf r},\, t) & =& \int {\bf k} \, n_{k} \, d {\bf k} \\ {\bf j}({\bf r},\, t) & =& m \int \frac {\partial \epsilon_{k}} {\partial {\bf k}} \, n_{k} \, d {\bf k} ,\qquad d{\bf k} = \frac {1}{(2 \pi)^3} \,d k_{1} d k_{2} d k_{3}\end{aligned}$$ This yields $$n_{k}(\mathbf{k}, \, \mathbf{r}, \, t) = \left\{ \exp \left( \frac {\epsilon_{k} - \mathbf{V.k} - m \mathbf{ W}. (\partial \epsilon_{k}/{\partial \mathbf{k}}) - \mu}{T} \right) - 1 \right\}^{-1}$$ with $\mathbf{V}, \mathbf{W}$ and $\mu$ for Lagrange multipliers.
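A minimal numerical sketch of this drifted distribution, assuming a toy one-dimensional tight-binding dispersion in reduced units (all parameter values below are illustrative, not from the paper): at $\mathbf{V}=\mathbf{W}=0$ it reduces to the ordinary Bose-Einstein occupation, and a finite $V$ shifts the quasimomentum density $K=\int k\,n_k\,dk$ away from zero.

```python
import numpy as np

def occupation(k, eps, deps_dk, V, W, mu, T, m=1.0):
    """Drifted Bose-Einstein occupation n_k obtained from maximizing S
    at fixed E, n, quasimomentum K and mass flow j (reduced units)."""
    x = (eps - V * k - m * W * deps_dk - mu) / T
    return 1.0 / np.expm1(x)

# Toy 1D tight-binding dispersion on the Brillouin zone (an assumption):
k = np.linspace(-np.pi, np.pi, 2001)
dk = k[1] - k[0]
eps = 1.0 - np.cos(k)          # eps(k), bandwidth 2 in reduced units
deps = np.sin(k)               # d eps / d k

n_rest  = occupation(k, eps, deps, V=0.0,  W=0.0, mu=-0.1, T=0.3)
n_drift = occupation(k, eps, deps, V=0.05, W=0.0, mu=-0.1, T=0.3)

# Quasimomentum density K = ∫ k n_k dk: zero for the even rest distribution,
# finite once the drift velocity V breaks the k -> -k symmetry.
K_rest  = np.sum(k * n_rest)  * dk
K_drift = np.sum(k * n_drift) * dk
```

The same construction extends directly to three dimensions and to the $\mathbf{W}$ term; the point of the sketch is only that the two Lagrange multipliers deform the occupation in different ways when the dispersion is not parabolic.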
Varying  $S$  yields $$T\delta S = \delta E - \mathbf{ V.}\delta \mathbf{ K} - \mathbf{ W.} \delta \mathbf{ j} - \mu \, \delta n$$ $$\Omega = E - TS - \mathbf{ V.K} - {\bf W .j } - \mu n$$ and respectively $$d \Omega = - S d T - \mathbf{K.}d \mathbf{V} - \mathbf{j .}d\mathbf{ W } - n \, d \mu$$ The nondissipative equations involve the following conservation laws: $$\begin{aligned} {\dot n} &+& \div \mathbf{ J} = 0, \qquad \mathbf{J} = \mathbf{ j}/m \\ \frac {\partial { j }_{i}}{\partial t} &+& \frac {\partial \Pi_{ik}}{\partial x_k} = 0, \\ \frac {\partial { K }_{i}}{\partial t} &\!+\!& \frac {\partial L_{ik}}{\partial x_k} = 0, \\ {\dot S } &+& \div \mathbf{ F } = 0, \nonumber \\ {\dot E } &+& \div \mathbf{ Q } = 0\end{aligned}$$ To second order with respect to the velocities  $ \mathbf{ V}$  and  $\mathbf{ W}$  one has: $$\label{} J_{i} = n^{0} V_{i} + n_{il}W_l,$$ where $$\label{} n_{il} = \int n_{k}^{0} \, \nu_{il} ({\bf k}) \, d {\bf k}, \qquad \nu_{il}(\mathbf{ k}) = m \frac {\partial^{2} \epsilon_{k}}{\partial k_i \partial k_l}$$ The local drift velocity $ \mathbf{U}= \mathbf{j}/(m n^0)$ is then $$\label{} U_i = V_i + \frac {n_{il}}{n^{0}} W_{l}$$ The mass flux is, therefore, not collinear with either of the velocities $\mathbf{V}$ and $\mathbf{W}$. Analogously, $$\label{} K_i = \rho_{il}V_l + m n^0 W_i,$$ $$\label{} \rho_{il} = - \int k_i k_l \frac {\partial n_{k}}{\partial \epsilon_{k}}\, d \mathbf{ k} = \left. \frac {\partial^2 \Omega}{\partial V_i \partial V_l} \right|_{T, \mu, \mathbf{ W}} \qquad {\rho^{-1}}_{il} = \left.
\frac {\partial^2 E}{\partial K_i \partial K_l} \right|_{S,n,\mathbf{ j}}$$ To second order in velocities *the diagonal terms of the quasimomentum flux tensor coincide with the thermodynamic potential* $\Omega(T, \mathbf{V}, \mathbf{W}, \mu) $ [@Singapore; @PhysRep; @Nauka]: $$\label{} L_{ij} = \int k_i \frac {\partial \epsilon_{k}}{\partial k_{j}} n_k \, d{\bf k} = \Omega^0 \, \delta_{ij}$$ and the momentum flux tensor has the form: $$\label{} \Pi_{il} = - \Omega_{il} = T \int \ln( 1 + n_{k}^{0}) \, \nu_{il}({\bf k})\, d \mathbf{k}$$ The energy flux is: $$\label{} Q_{i}({\bf r},t) = \int \epsilon_{k} \frac {\partial \epsilon_{k}}{\partial k_{i}} n_{k}\, d {\bf k} = W^{0} V_i +(TS_{il} + \mu \, n_{il})W_l$$ where $$\label{} S_{il} = \int s[n_{k}^{0}]\, \nu_{il} \, d{\bf k}$$ and $W^0 = E^0 -\Omega^0$ is the enthalpy at $\mathbf{V = W} = 0$. Hence, $ W^{0} \mathbf{V}$ is the energy flux known from classical hydrodynamics, and there are additional terms due to the supersolid behavior.
The full hydrodynamic system then consists of four equations: $$\label{H1} {\dot n} + n \, \div \mathbf{ U} = 0,$$ $$\label{H2} mn {\dot U}_i - \frac {\partial \Omega_{il}}{\partial x_l} = 0$$ $$\label{H3} \rho_{is} \frac {\partial \Omega_{sl}}{\partial x_l} - n \frac {\partial \Omega^{0}}{\partial x_{i}} + n^2 (\delta_{il} - \beta_{il}){\dot W}_{l} = 0$$ $$\label{H4} {\dot E} + W^{0} \div {\bf U} + TS \left( \frac {S_{il}}{S} - \frac {n_{il}}{n} \right) \frac {\partial W_{l}}{\partial x_{i}} = 0$$ where $$\label{H5} \beta_{il} = \rho_{ik} n_{kl} / (n^{0})^2$$ Taking into account the thermodynamic identity: $$\label{H6} d E =T\,dS + \mu \, d n + {\bf V.}d{\bf K} + {\bf W.} d{\bf j}$$ the energy conservation law can be replaced by the entropy equation: $$\label{} {\dot S} + S\,\div {\bf U} + S \left( \frac {S_{il}}{S} - \frac {n_{il}}{n} \right) \frac {\partial W_{l}}{\partial x_{i}} = 0$$ It is seen that the mass flux and the entropy flux have different velocities, both in magnitude and in direction. This means that a mass flux without entropy transport can take place. This implies the existence of a superfluid density $\rho^s$. **In the case of a quadratic dispersion law** $$\label{} \rho_{il} = m\, n^{0} \delta_{il}, \quad \frac {S_{il}}{S} = \frac {n_{il}}{n}, \quad \beta_{il} = \delta_{il}, \quad j_i = \nu_{il}K_l$$ and the additional entropy flux vanishes. This implies that the superfluid effects should be negligible at very low temperatures, where the excitations with a parabolic dispersion relation predominate. Let us now rewrite the hydrodynamic set in terms of the Landau superfluid theory in order to see the analogy better. Cubic crystal ============= Let us first consider, for simplicity, a cubic crystal.
Then, to the second order with respect to velocities one has $$\label{} {\bf J}({\bf r},t) = n \,{\bf V} + \nu \, {\bf W}, \qquad {\bf Q}({\bf r},t) = W^{0}{\bf V} + (p+q){\bf W}$$ $$\label{} \Pi_{ik} = p\,\delta_{ik}, \qquad L_{ik} = - \Omega ^{0}\,\delta_{ik}, \qquad {\bf F} = S {\bf V} + p_{T} {\bf W}$$ where $$\begin{aligned} p ({\bf r}, t) &=& \frac {1}{3} m \int \left( \frac {\partial \epsilon_{k}} {\partial {\bf k}} \right)^2 n_{k}^{0}\, d {\bf k}, \quad \rightarrow \quad \!\! -\frac {m}{m^*} \Omega^0 \label{p}\\ \nu ({\bf r}, t) & =& \frac {1}{3} m \int \frac {\partial^{2} \epsilon_{k}} {\partial k^{2}}\, n_{k}^{0}\, d {\bf k}, \qquad \rightarrow \quad \,\, \frac {m}{m^*}n \label{nu}\\ q ({\bf r}, t) &=& \frac {1}{3} m \int \epsilon_{k} \frac {\partial^{2} \epsilon_{k}} {\partial k^{2}}\, n_{k}^{0}\, d {\bf k}, \quad \rightarrow \quad \,\, \frac {m}{m^*} E^0 \label{q}\end{aligned}$$ and the following relations take place: $$\label{} p_{\mu} = \left( \frac {\partial p}{\partial \mu} \right)_{T} = \nu, \qquad p_{T} = \left( \frac {\partial p}{\partial T } \right)_{\mu} = \frac {p + q - \mu \nu}{T}$$ The meaning of the quantities involved can be seen from their limiting expressions in the effective mass ($m^*$) approximation. 
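These limiting expressions can be checked numerically. The sketch below integrates the rest-frame Bose distribution for a quadratic dispersion $\epsilon = k^2/2m^*$ and verifies $\nu \to (m/m^*)\,n$ and $p \to -(m/m^*)\,\Omega^0$; units with $\hbar = 1$, and the parameter values are arbitrary choices made for the check, not values from the paper.

```python
import numpy as np
from scipy.integrate import quad

# Parameters (arbitrary, in units with hbar = 1):
m, m_star, T, mu = 1.0, 2.0, 1.0, -0.5

def n0(k):
    """Rest-frame Bose occupation for eps(k) = k^2 / (2 m*)."""
    return 1.0 / np.expm1((k**2 / (2 * m_star) - mu) / T)

pref = 1.0 / (2 * np.pi**2)   # radial measure: d^3k/(2 pi)^3 -> k^2 dk / (2 pi^2)

n_dens = pref * quad(lambda k: k**2 * n0(k), 0, 20)[0]
# nu = (m/3) ∫ (Laplacian_k eps) n0 dk; for quadratic eps this is 3/m*:
nu     = (m / 3) * pref * quad(lambda k: k**2 * (3 / m_star) * n0(k), 0, 20)[0]
# p = (m/3) ∫ (d eps/d k)^2 n0 dk with d eps/d k = k/m*:
p_flux = (m / 3) * pref * quad(lambda k: k**2 * (k / m_star)**2 * n0(k), 0, 20)[0]
# Omega^0 = -T ∫ ln(1 + n0) dk:
Omega0 = -T * pref * quad(lambda k: k**2 * np.log1p(n0(k)), 0, 20)[0]
```

The relation $p = -(m/m^*)\,\Omega^0$ follows exactly from an integration by parts of the $\Omega^0$ integral, so the numerical agreement is limited only by quadrature error.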
In the notation of the Landau theory,  $\dx \mathbf{ F } = S \mathbf{ V}^n, \,\, \mathbf{ V} = \mathbf{V}^s$,  and the system of equations (\[H1\]–\[H5\]) takes the form: $$\label{} {\dot n} + n^s \div {\bf V}^s + n^n \div \mathbf{V}^n = 0, \qquad {\dot S} + S \div \mathbf{ V}^n = 0,$$ $$\label{} { \mathbf{\dot K}} + S \nabla T + n \nabla \mu = 0, \qquad \frac {\partial \mathbf{ j}}{\partial t} + p_{T} \nabla T + p_{\mu} \nabla \mu = 0$$ where $ \dx n^n = S \frac {p_{\mu}}{p_{T}}, \quad n^s = n - n^n$  and the number density flux is $$\label{} \mathbf{J} = n^s \mathbf{ V}^s + n^n \mathbf{ V}^n$$ Second Sound ============ In the variables  $\mu, \, T, \, \mathbf{ V}^s, \, \mathbf{ V}^n$  one has $$\begin{aligned} \alpha {\dot T} + \beta {\dot \mu} + n^s \nabla \mathbf{ .V}^s + n^n \mathbf{ \nabla .V^n} &=& 0 \nonumber\\ \gamma {\dot T} + \alpha {\dot \mu} + S \nabla .\mathbf{ V}^n &=& 0 \nonumber\\ S \nabla T + n \nabla \mu + \rho^s \mathbf{ \dot V}^s + \rho^n \mathbf{ \dot V}^n &=& 0 \nonumber\\ n S \nabla T + n n^n \nabla \mu + \rho^n n^s \mathbf{ \dot V}^s + \rho^n n^n \mathbf{ \dot V}^n &=& 0 \nonumber\end{aligned}$$ where $$\label{} \alpha = \left.\frac {\partial n}{\partial T}\right|_{\mu} = \left.\frac {\partial S}{\partial \mu}\right|_{T} , \qquad \beta = \left.\frac {\partial n}{\partial \mu }\right|_{T } , \qquad \gamma = \left.\frac {\partial S}{\partial T}\right|_{\mu}$$ $$\label{} \rho^n = m n S/p_T, \qquad \rho = \frac {1}{3} \int \mathbf{ k}^2 n^0 (1 + n^0)\, d\mathbf{ k}, \qquad \rho^s = \rho - \rho^n$$ $$\label{} \mathbf{ K} = \rho \mathbf{ V} + mn \mathbf{ W} = \rho^s \mathbf{ V}^s + \rho^n \mathbf{ V}^n$$ For [*quasiparticles with a quadratic dispersion law*]{}  $ n^s = \rho^s = 0$: $$\begin{aligned} \alpha {\dot T} + \beta {\dot \mu} + n \nabla \mathbf{ .V}^{n} &=& 0 \\ \gamma {\dot T} + \alpha {\dot \mu} + S \nabla .\mathbf{ V}^n &=& 0 \\ S \nabla T + n \nabla \mu + \rho^s \mathbf{ \dot V}^s + \rho \mathbf{ \dot V}^n &=& 0\end{aligned}$$ If the number of
quasiparticles is not conserved $$\label{} \omega_{0}^2 (\mathbf{ q}) = \frac{TS}{C_{v} \rho} \mathbf{q}^2$$ If the number of quasiparticles is conserved $$\begin{aligned} \omega^2 (\mathbf{q}) & =& \omega^{2}_{0} (\mathbf{ q}) \left\{ \left( 1 - \frac {\alpha n}{ \beta S} \right)^2 + \frac {C_v n^2}{T \beta S^2} \right\} \nonumber\\ &=& \left[ \frac {T}{C_v} \left(\frac{\partial s}{\partial v_{0}}\right)_{T}^2 - \left( \frac {\partial \mu}{\partial v_{0}}\right)_{T} \right] \frac {\mathbf{q}^{2}}{\rho} = \left( \frac {\partial P}{\partial n}\right)_{s} \frac {n}{\rho}\, \mathbf{q}^2\end{aligned}$$ where $ s = S/n, \quad v_0 = 1/n$ . Quasiparticles with non-quadratic dispersion law ================================================ If the number of quasiparticles is not conserved $$\label{} \omega^{2}(\mathbf{ q}) = (1+ \delta^{\rho}) \frac {TS^2}{C_v \rho^{n}} \mathbf{q}^2 ,\quad \delta ^{\rho} = \frac {\kappa ^{\rho} \kappa^n }{\kappa^{\rho} - \kappa^n}, \quad \kappa^{\rho} = \frac {\rho^s}{\rho^n}, \quad \kappa^{n} = \frac {n^s}{n^n}$$ If the number of quasiparticles is conserved $$\begin{aligned} \omega_{1}^2 (\mathbf{ q})& =& \delta^n \left\{ \omega_{0}^2 (\mathbf{ q},\, \rho \! =\! \rho^n) - \frac {1}{n \rho^{n}}\, \frac {T S}{C_{v}}\, \frac {(\partial P/\partial T)^{2}_{n}} {(\partial P/\partial n)_{s}}\, \mathbf{q}^2 \right\} \\ \omega_{2}^2(\mathbf{q}) & =& (1 - \kappa^n) \, \omega^2(\mathbf{q}, \, \rho \! =\! \rho^n) + \delta^n \omega_{0}^2 (\mathbf{q},\,\rho \! =\! \rho^n) - \omega_{1}^2 (\mathbf{q}) \\ &-& 2 \kappa^n \frac {T S}{\rho^{n} C_{v}}\, \left( \frac {\partial P}{\partial T}\right)_n \mathbf{q}^2 \nonumber\end{aligned}$$ $$\label{} \omega_2(\mathbf{q}) > \omega_1(\mathbf{q})$$ Conclusion ========== It is shown that a theory of superfluidity of solids should not be a replica of the Landau theory of superfluidity. For crystalline bodies, a two-velocity theory of supersolidity is presented that takes into account the quasimomentum conservation law.
Such a theory cannot be applied to disordered systems, glasses, etc. Acknowledgements ================ The author thanks Professor V. Kravtsov for the invitation to the Abdus Salam ICTP, Trieste, where this work was submitted. The partial financial support from the National Science Fund, Contract F-1517, is also gratefully acknowledged. E. Kim and M.H.W. Chan, Nature **427**, 225 (2004); E. Kim and M.H.W. Chan, Science **305**, 1941-44 (2004); E. Kim and M.H.W. Chan, J. Low Temp. Phys. **138**, 859 (2005) A.F. Andreev and D.I. Pushkarov, Sov. Phys. JETP **62** (5), 1087-1090 (1985) D.I. Pushkarov, Quasiparticle Theory of Defects in Solids, World Scientific, Singapore, 1991 D.I. Pushkarov, Defektony v kristallakh (Defectons in Crystals: Quasiparticle Approach to the Quantum Theory of Defects), in Russian, Nauka, Moscow, 1993 D.I. Pushkarov, Phys. Rep. **354**, 411 (2001) D.I. Pushkarov, Phys. Stat. Sol. (b) **133**, 525 (1986) D.I. Pushkarov and R. Atanasov, Phys. Scripta **42**, 481 (1990) A.F. Andreev and I.M. Lifshitz, Zh. Eksp. Teor. Fiz. **56** (12), 2057 (1969)
--- abstract: 'We use data on variable stars from the Optical Gravitational Lensing Experiment (OGLE III) survey to determine the three-dimensional structure of the Small Magellanic Cloud (SMC). Deriving individual distances to RR Lyrae stars and Cepheids we investigate the distribution of these tracers of the old and young population in the SMC. Photometrically estimated metallicities are used to determine the distances to 1494 RR Lyrae stars, which have typical ages greater than 9 Gyr. For 2522 Cepheids, with ages of a few tens to a few hundred Myr, distances are calculated using their period-luminosity relation. Individual reddening estimates from the intrinsic color of each star are used to obtain high precision three-dimensional maps. The distances of RR Lyrae stars and Cepheids are in very good agreement with each other. The median distance of the RR Lyrae stars is found to be $61.5 \pm 3.4$ kpc. For the Cepheids a median distance of $63.1 \pm 3.0$ kpc is obtained. Both populations show an extended scale height, with $2.0 \pm 0.4$ kpc for the RR Lyrae stars and $2.7 \pm 0.3$ kpc for the Cepheids. This confirms the large depth of the SMC suggested by a number of earlier studies. The young population is very differently oriented than the old stars. While we find an inclination angle of $7^\circ \pm 15^\circ$ and a position angle of $83^\circ \pm 21^\circ$ for the RR Lyrae stars, for the Cepheids an inclination of $74^\circ \pm 9^\circ$ and a position angle of $66^\circ \pm 15^\circ$ is obtained. The RR Lyrae stars show a fairly homogeneous distribution, while the Cepheids follow roughly the distribution of the bar with their northeastern part being closer to us than the southwestern part of the bar. Interactions between the SMC, LMC, and Milky Way are presumably responsible for the tilted, elongated structure of the young population of the SMC.' author: - 'Raoul Haschke , Eva K. 
Grebel, and Sonia Duffau' bibliography: - 'Bibliography.bib' title: | Three dimensional maps of the Magellanic Clouds\ using RR Lyrae Stars and Cepheids\ II. The Small Magellanic Cloud --- Introduction ============ The Small Magellanic Cloud (SMC) is a dwarf irregular satellite of the Milky Way [@Bergh99]. It is interacting with its larger companion, the Large Magellanic Cloud (LMC), and with the Milky Way. A number of studies have suggested that the apparently disturbed shape and large depth extent of the SMC were caused by these interactions, although the details and the three-dimensional shape of the SMC remain under debate [e.g., @Bekki09c]. The distance to the SMC is usually assumed to be 60 kpc or $(m-M)_0 = 18.90$ mag [e.g., @Westerlund97]. But even the usage of a specific distance indicator may still lead to different results in different studies. Using, for instance, eclipsing binaries to determine the mean distance of the SMC, @North10 obtained $(m-M)_0 = 19.11 \pm 0.03$ mag, while @Hilditch05 found a mean value of $(m-M)_0 = 18.91 \pm 0.03$ mag, in very good agreement with the mean distance quoted above. The best option to obtain comparable and trustworthy mean distances and structural parameters from different stellar tracers is large surveys. For these the systematic uncertainties are lower, because they have been observed, reduced and analyzed following a coherent procedure and because they tend to provide excellent number statistics for certain distance indicators. Moreover, such surveys can provide stellar tracers of different and distinct ages, an important prerequisite to resolve the different evolutionary states the galaxy has passed through during its history [see, e.g., @Westerlund97 for a summary of the older distance studies].
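The distance moduli quoted throughout translate to physical distances via $(m-M)_0 = 5\log_{10}(d/\mathrm{pc}) - 5$; a minimal helper for this conversion (the values in the comments are simply rounded results of the formula):

```python
import math

def modulus_to_kpc(mu):
    """Distance in kpc from a true distance modulus (m-M)_0 in mag."""
    return 10 ** (mu / 5 + 1) / 1000.0

def kpc_to_modulus(d_kpc):
    """True distance modulus in mag from a distance in kpc."""
    return 5 * math.log10(d_kpc * 1000.0) - 5

d_canonical = modulus_to_kpc(18.90)   # ~60.3 kpc, the canonical SMC distance
d_north     = modulus_to_kpc(19.11)   # ~66.4 kpc, the @North10 eclipsing-binary value
```

Because the modulus is logarithmic, a 0.2 mag spread between indicators already corresponds to roughly a 10% spread in distance, which is why the depth discussion below matters.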
For the SMC, (small) sets of very different stellar tracers, representing young, intermediate-age and old populations, have been investigated for many decades to find a common mean distance, assuming that all these different populations have the same distance from us. The differences between the tracers, but also within the results obtained using a specific tracer, are greater than the $1\sigma$-uncertainties of the resulting distances. This may be taken as an indication of a significant depth extent of the SMC, but it also shows that it is absolutely necessary to analyze large datasets instead of choosing a small, possibly localized subsample of stars within the SMC. The mean distance moduli of the young population, traced predominantly by Cepheids, are usually greater than $(m-M)_0 = 18.90$ mag [compare, e.g., @Laney86; @Groenewegen00; @Keller06], but see also @Ciechanowska10. For the old RR Lyrae stars most of the distance estimates yield shorter distances than those of the Cepheids. The range of values, however, is large, with distance moduli between $(m-M)_0 = 18.78 \pm 0.15$ mag and $(m-M)_0 = 19.20$ mag [see, e.g., @Szewczyk09; @Deb10; @Kapakos11]. For a compilation of distance moduli we refer to the book of @Westerlund97 and to Table \[distance\_table\]. For the forthcoming analysis the absolute mean distance is not of crucial importance. It is nonetheless an interesting open question. Apart from differences in the calibration of a given distance indicator, differences in the mean distance may, to some extent, be introduced by a large depth of the SMC, as found by, e.g., @Mathewson88. Using Cepheids they showed that the SMC has a considerable depth of about 20 kpc. Newer estimates using intermediate-age tracers led to lower values of the line-of-sight depth. In @Crowl01 a depth between 6 kpc and 12 kpc, depending on the reddening, was estimated using cluster distances derived by isochrone fitting.
In @Subramanian09 the distribution of red clump (RC) stars was investigated and a depth of less than 5 kpc was found for the SMC, in good agreement with the results of @Subramanian12. They also investigated the depth extent of the old population by using RR Lyrae stars and found a mean depth of $4.07 \pm 1.68$ kpc. This is in very good agreement with the $4.13 \pm 0.27$ kpc found for the line-of-sight depth of RR Lyrae stars by @Kapakos11. @Lah05 conducted a study utilizing different stellar tracers to show that red giant branch (RGB) and asymptotic giant branch (AGB) stars might be further away from us than the main body of the SMC (see their Fig. 4). In @Subramanian12 the inclination angle of the RC stars and the RR Lyrae stars was found to be $i \simeq 0.5^\circ$. Furthermore, they found a position angle of $\theta = 58.3^\circ$ for the RR Lyrae stars and of $\theta = 55.5^\circ$ for the RC stars, respectively, thus very similar values for old and intermediate-age stars. For the other determinations of the structural parameters of the SMC only young stars have been used. With 63 Cepheids, @Caldwell86 found an inclination angle of $i = 70^{\circ} \pm 3^{\circ}$ and a position angle $\theta = 58^{\circ} \pm 10^{\circ}$. With a different sample of 23 Cepheids, @Laney86 obtained $i = 45^{\circ} \pm 7^{\circ}$ and a position angle $\theta = 55^{\circ} \pm 17^{\circ}$, in good agreement with @Caldwell86. A much larger sample of 236 Cepheids was investigated by @Groenewegen00, who found $i = 68^{\circ} \pm 2^{\circ}$ and a position angle of the line of nodes of $\Theta = 238^{\circ} \pm 7^{\circ}$. The location of the center of the SMC is also not very well constrained. The optical center [see, e.g., @Westerlund97] and the center of the K- and M-stars [found by @Gonidakis09, hereafter G09] are basically identical, with $\alpha = 0^{\mathrm{h}}51^{\mathrm{m}}$ and $\delta = -73.1^\circ$.
From Hubble Space Telescope (HST) measurements of proper motions @Piatek08 found the kinematical center of the SMC to be at $\alpha = 0^{\mathrm{h}}52^{\mathrm{m}}8^{\mathrm{s}}$ and $\delta = -72.5^\circ$, while @Stanimirovic04 found $\alpha = 0^{\mathrm{h}}47^{\mathrm{m}}33^{\mathrm{s}}$ and $\delta = -72^\circ5'26''$ for the highest H I column density. Throughout this paper we will mostly refer to the optical center and use the result by . In this paper we analyze the data of the Optical Gravitational Lensing Experiment (OGLE III) survey, presented in Section \[data\]. Distances to all RR Lyrae *ab* stars and Cepheids present in the OGLE sample are calculated. In Section \[distance\] we use the metallicity estimates of @Haschke12_MDF and the periods obtained by OGLE. Moreover, we apply the reddening maps of @Haschke11_reddening to correct for individual reddening effects. The two-dimensional spatial distribution of the stars is investigated in Section \[Density\_of\_OGLEIII\] and the three-dimensional maps are presented in Section \[3D\]. These three-dimensional maps are analyzed and the structural parameters of the young and old population are determined in Section \[3D\_structure\]. The results are discussed and summarized in Section \[Conclusions\].

Data
====

In 2001 the OGLE experiment started its third phase of monitoring the Magellanic Clouds (OGLE III). This phase ended in 2009. OGLE III used a camera of eight CCDs with $2048 \times 4096$ pixels each and a field of view of $35' \times 35'$. Altogether 14 square degrees, covering the bar and the wing of the SMC, were monitored. Photometric data in the $V$ and $I$ band were accumulated for 6.2 million stars [@Udalski08b]. Apart from the full photometric catalog, the OGLE collaboration provides specialized catalogs with information about certain types of stars, such as Cepheids or $\delta$ Scuti stars, or astrometric properties of stars[^1].
In @Soszynski10a the data for 2626 classical Cepheids in the SMC are presented, while @Soszynski10b published data for 1933 RR Lyrae stars of type *ab*. All of these stars are pulsating in the fundamental mode and cover the entire OGLE III field of the SMC. The lightcurves were analyzed and periods and mean magnitudes in V- and I-band were published. The OGLE collaboration also carried out a Fourier decomposition [@Simon93] of the very well sampled I-band lightcurves. The Fourier parameters $R_{21}$ and $R_{31}$, which correspond to the skewness of the lightcurve, as well as $\phi_{21}$ and $\phi_{31}$, which represent the acuteness [see @Stellingwerf87a for more details], of each Cepheid and RR Lyrae are published and available from the OGLE website.

Distance measurements {#distance}
=====================

  -------------------- ---------------------- -------------------------------------
  Type of indicator    mean distance          Reference
                       $(m-M)_0 \pm \sigma$
  Cepheids             $19.11 \pm 0.11$       @Bono01
  Cepheids             $18.85 \pm 0.14$       @Ciechanowska10
  Cepheids             $19.11 \pm 0.11$       @Groenewegen00b
  Cepheids             $18.93 \pm 0.024$      @Keller06
  Cepheids             $19.17 \pm 0.12$       this work - area-averaged reddening
  Cepheids             $19.00 \pm 0.10$       this work - individual reddening
  CMD fitting          $18.88 \pm 0.08$       @Dolphin01
  Eclipsing binaries   $18.89 \pm 0.14$       @Harries03
  Eclipsing binaries   $18.91 \pm 0.13$       @Hilditch05
  Eclipsing binaries   $19.11 \pm 0.03$       @North10
  RGB tip              $18.99 \pm 0.11$       @Cioni00b
  RR Lyrae             $18.86 \pm 0.01$       @Deb10
  RR Lyrae *ab*        $18.90 \pm 0.18$       @Kapakos11
  RR Lyrae *c*         $18.97 \pm 0.14$       @Kapakos11
  RR Lyrae             $18.97 \pm 0.15$       @Szewczyk09
  RR Lyrae             $19.13 \pm 0.13$       this work - area-averaged reddening
  RR Lyrae             $18.94 \pm 0.11$       this work - individual reddening
  -------------------- ---------------------- -------------------------------------

In this Section we discuss our distances to RR Lyrae stars as tracers of the old population [$\geq 9$ Gyr, e.g., @Sarajedini06] and to Cepheids for the young population [$ \sim 30-300$
Myr, @Grebel98; @Luck03] of the SMC. Distance estimates are calculated individually star by star.

RR Lyrae
--------

In @Haschke12_MDF the photometric metallicities of 1864 RR Lyrae *ab* stars from the OGLE III sample of the SMC were calculated. The absolute luminosity of RR Lyrae stars depends primarily on metallicity. We use these estimates on the metallicity scale of @Zinn84 to calculate the absolute $V$ band magnitude $M_V$ of the RR Lyrae stars by using the relation introduced in @Benedict11 $$M_V = (0.45 \pm 0.05) + (0.217 \pm 0.047)~(\textrm{[Fe/H]} + 1.5) \label{absolute_magnitude}$$ @Benedict11 found that their [*HST*]{} data are fitted best by an equation having the same slope as the relation found by @Clementini03 for LMC RR Lyrae stars, but with a zeropoint that is $\Delta M_V = 0.07$ mag brighter. Furthermore, we also tested the quadratic relations of @Catelan04 [@Sandage06] and @Bono07. The resulting mean absolute magnitudes of the RR Lyrae stars are quite different from each other. We find median differences of $\Delta M_V = 0.10$ mag for @Sandage06 [their equation 7], $\Delta M_V = 0.17$ mag for @Catelan04 [their equation 8] and $\Delta M_V = 0.20$ mag for @Bono07 [their equation 10]. All of these absolute magnitudes are fainter than those calculated with the relation by @Benedict11. This reflects the comparatively large uncertainty of the magnitude zeropoint of the RR Lyrae stars. We take these systematics into account in our error analysis and adopt the relation of @Benedict11. The absolute magnitude of each RR Lyrae star together with the observed mean magnitude from the OGLE collaboration yields a distance modulus once a reddening correction is applied. In Section \[reddening\] we apply two different approaches to correct for the reddening and to obtain the three-dimensional structure of the SMC.

Cepheids
--------

The correlation between absolute magnitude and period is well known for Cepheids.
Using Cepheids from the SMC, @Sandage09 obtained relations for the $B$, $V$, and $I$ band, which only depend on the period $P$ of the Cepheid investigated: $$\begin{aligned} M_V = -(2.588 \pm 0.045)\log P - (1.400 \pm 0.035) \label{Cepheid_M_V} \\ M_I = -(2.862 \pm 0.035)\log P - (1.847 \pm 0.027) \label{Cepheid_M_I}\end{aligned}$$ There might be a break in the relation for the SMC at $P = 10$ days, as found for the LMC [@Sandage04a], but @Sandage09 suggest that this break is not significant. The distance moduli are calculated using the mean observed magnitude of each star, provided by the OGLE collaboration, and the absolute magnitude from these relations. The correction for reddening effects is described in the next section.

Reddening correction {#reddening}
--------------------

@Haschke11_reddening [hereafter Paper I] used two different approaches to correct for the reddening of the LMC. We use the same methods here, which are briefly outlined below. Further details are described in .

### Area-averaged reddening corrections:\ The red clump method

RC stars are located at a certain color and luminosity range of the color-magnitude diagram (CMD). That position depends on the distance, the metallicity, and the reddening of these stars. @Girardi01 predicted the theoretical color of the RC for distinct metallicities using models. Assuming a metallicity of $z = 0.025$ [@Cole98; @Glatt08b] for the SMC RC population, the difference between the observed and the theoretically predicted color provides the reddening [@Wozniak96]. In @Haschke11_reddening, we estimated reddening values $E(V-I)$ for 681 subfields in the OGLE III field of the SMC. These were converted using the relation by @Schlegel98 $$\begin{aligned} A_V = 3.24(E(V-I)/1.4) \label{reddening_V} \\ A_I = 1.96(E(V-I)/1.4) \label{reddening_I}\end{aligned}$$ to correct the apparent magnitudes of the stars with the mean extinction in the corresponding field.
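Taken together, the relations above define a simple per-star distance pipeline. The following sketch combines the @Benedict11 luminosity-metallicity relation (Equation \[absolute\_magnitude\]), the @Sandage09 period-luminosity relation (Equation \[Cepheid\_M\_V\]), and the @Schlegel98 extinction coefficients (Equation \[reddening\_V\]); the function and argument names are illustrative, not OGLE catalog fields.

```python
import math

def _modulus_to_kpc(mu):
    """Distance modulus mu = 5 log10(d / 10 pc) -> distance in kpc."""
    return 10 ** (mu / 5.0 + 1.0) / 1000.0

def rr_lyrae_distance_kpc(m_v, fe_h, e_vi):
    """Mean V magnitude, photometric [Fe/H], and reddening E(V-I) -> kpc."""
    M_V = 0.45 + 0.217 * (fe_h + 1.5)   # Eq. (absolute_magnitude)
    A_V = 3.24 * (e_vi / 1.4)           # Eq. (reddening_V)
    return _modulus_to_kpc(m_v - A_V - M_V)

def cepheid_distance_kpc(m_v, period_days, e_vi):
    """Mean V magnitude, period in days, and reddening E(V-I) -> kpc."""
    M_V = -2.588 * math.log10(period_days) - 1.400  # Eq. (Cepheid_M_V)
    A_V = 3.24 * (e_vi / 1.4)                       # Eq. (reddening_V)
    return _modulus_to_kpc(m_v - A_V - M_V)
```

For instance, an unreddened RR Lyrae star with $\textrm{[Fe/H]} = -1.5$ and a mean magnitude of 19.39 mag yields $(m-M)_0 = 18.94$ mag, i.e. about 61.4 kpc.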
### Individual reddening correction:\ Intrinsic colors of variable stars {#reddening_color}

Individual reddening values of RR Lyrae stars and Cepheids were calculated by subtracting the intrinsic color from the observed color of each star. The observed color $(v-i)$ was computed from the mean magnitudes of the OGLE data. For the intrinsic color $(V-I)_0$, we calculated the absolute magnitudes in $V$ and $I$ of each RR Lyrae star using the relations of @Catelan04. For the absolute $V$ and $I$ magnitudes of each Cepheid, Equation \[Cepheid\_M\_V\] and Equation \[Cepheid\_M\_I\] were used. The reddening estimates were transformed to individual extinction values using Equation \[reddening\_V\] and Equation \[reddening\_I\], and the reddening-free distances were calculated. Individual extinction corrections calculated for each target star separately have the advantage of not being subject to unaccounted-for differential reddening or population effects [see also @Zaritsky02]. For a detailed description of the method we refer the interested reader to .

Star densities in the OGLE III field {#Density_of_OGLEIII}
====================================

![Densities of RR Lyrae stars (filled grey contours) and Cepheids (colored contours) are shown as a function of right ascension, $\alpha$ (J2000), and declination, $\delta$ (J2000). While the RR Lyrae stars show a bimodal distribution, the density of Cepheids decreases with distance from the center found by (marked with a blue asterisk). The green asterisk represents the kinematic center found by @Piatek08 using [*HST*]{} proper motions. The box sizes of the evaluated fields are listed in Table \[table\_bins\_RC\_contour\].[]{data-label="RRL_Cep_RADEC_colorext"}](Fig_1.png){width="47.00000%"}

The central parts of the SMC, including the bar and a very small part of the wing of the SMC, are covered by the field of OGLE III.
Overall 14 square degrees are covered, and Figure \[RRL\_Cep\_RADEC\_colorext\] shows the density distribution of RR Lyrae stars and Cepheids in the observed field. The stars are counted in boxes of $0.5^{\circ} \times 0.25^{\circ}$ in $\alpha$ and $\delta$, respectively (see Table \[table\_bins\_RC\_contour\]), and the resulting distributions are smoothed using a Gaussian kernel. For the RR Lyrae stars the distribution is very smooth and increases steadily from the outskirts towards the center found by from K- and M-stars. However, the highest density of RR Lyrae stars is not at the center; the peak of the distribution is off-centered and nearly bimodal, as shown in Figure \[RRL\_Cep\_RADEC\_colorext\] [see also Figure 7 in @Soszynski10b]. We may expect that a considerable number of RR Lyrae stars is located outside of the field of view of OGLE III, especially towards the southern and northwestern directions. In the northwest OGLE III targeted an extended rectangular region where a number of SMC RR Lyrae stars were also detected (separate rectangular area in the northwest in Figure \[RRL\_Cep\_RADEC\_colorext\]), as well as the region of the Galactic globular cluster 47 Tuc. For distance estimates with the RC reddening correction the stars in the field of the cluster 47 Tuc are excluded, while they are taken into account for the individually reddening-corrected distance estimates. The distribution of the Cepheids is very different from that of the old population traced by the RR Lyrae stars. The density of Cepheids is highest very close to the center of (Figure \[RRL\_Cep\_RADEC\_colorext\]) and drops with increasing distance from the center. At first the isodensity lines are nearly circular, but with increasing distance from the center the isodensity contours become more elongated towards the western and northeastern directions.
Interestingly, no Cepheids are found in the northernmost fields, nor at the position of 47 Tuc, in agreement with @Graham75 (47 Tuc is not indicated in Figure \[RRL\_Cep\_RADEC\_colorext\]). Recent star formation has therefore not taken place in these outer regions of the SMC OGLE III field, but is more strongly concentrated in the area of the bar of the SMC. Note that most of the SMC wing [@Shapley40] is not covered by OGLE III. The wing stands out prominently in H$\alpha$ images and may also contain a larger number of Cepheids. Differences between the distributions of the young and the old populations in the SMC using other stellar tracers can also be seen in @Zaritsky00.

  ------------------------------------ ---------- ----------
                                       RR Lyrae   Cepheids
  $\alpha$ contour bin \[degree\]      0.5        0.5
  $\delta$ contour bin \[degree\]      0.25       0.25
  distance contour bin \[kpc\]         0.5        0.5
  $\alpha$ isodensity bin \[degree\]   2          2
  $\delta$ isodensity bin \[degree\]   1          1
  distance isodensity bin \[kpc\]      2          2
  ------------------------------------ ---------- ----------

  : Bin sizes of the fields evaluated to obtain the densities of Cepheids and RR Lyrae stars in the SMC.[]{data-label="table_bins_RC_contour"}

Three dimensional maps {#3D}
======================

For each RR Lyrae star and Cepheid in our sample distances are calculated using the relations described in Section \[distance\]. Either the averaged reddening from the RC stars or the individual reddening method is applied. This results in two independent sets of distance maps for each population.

Maps corrected with area-averaged reddening {#corrected_RC}
-------------------------------------------

In this subsection the area-averaged reddening values obtained from the RC stars are used. All RR Lyrae stars and Cepheids located in one subfield defined by the RC reddening method are extinction-corrected using the same RC reddening value (for details see ).
In Figure \[RRL\_Cep\_dist\_RCext\] we plot the spatial positions of the RR Lyrae stars (grey contours) and Cepheids (colored contours) in $\alpha$ and $\delta$, respectively, versus distance. This corresponds to a change of the viewing direction of the observer to a northern position above the SMC in the upper panel, and to the eastern side in the lower panel. Table \[table\_bins\_RC\_contour\] lists the box sizes within which the density of stars is evaluated. In order to smooth the density contour plot, a Gaussian kernel with a width of $3 \times 3$ bins is used. The variance on very small scales is reduced by this procedure. The locations of the central concentrations of the two populations roughly coincide in Figure \[RRL\_Cep\_dist\_RCext\]. The RR Lyrae stars have a median distance of $D_{\mathrm{RRL/median}} = 66.8 \pm 4.3$ kpc ($(m-M)_0 = 19.13 \pm 0.13$ mag). The median distance of the Cepheids is essentially the same, with $D_{\mathrm{Cep/median}} = 68.1 \pm 4.1$ kpc ($(m-M)_0 = 19.17 \pm 0.12$ mag). For the uncertainties we take the mean magnitude error of 0.07 mag, stated by the OGLE collaboration, the error of the metallicity of 0.23 dex, as determined by @Haschke12_MDF, and the mean extinction error of 0.08 mag, found by , and compute the uncertainty by using error propagation. The error of the period as stated by OGLE is too small to be a contributing factor. While our mean distances of the old and young population of the SMC are in very good agreement, this could in principle be attributable to the initially chosen zeropoints. Nonetheless, we do not think that it is purely fortuitous. In contrast to many earlier studies, our tracers were taken from a homogeneous database, observed with the same instrument and filters and reduced with the same procedure. Additionally, we use state-of-the-art relations to calculate distances.
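The quoted error bars can be reproduced to first order by adding the individual error terms in quadrature and propagating the distance-modulus error to a distance error. The sketch below assumes these are the only contributing terms; the published uncertainties may include additional systematics (e.g. the zeropoint errors of the adopted relations).

```python
import math

def mu_uncertainty_mag(sigma_mag=0.07, sigma_feh=0.23, sigma_ext=0.08,
                       feh_slope=0.217):
    """Quadrature sum of the error terms quoted in the text: mean magnitude
    error, metallicity error scaled by the slope of Eq. (absolute_magnitude),
    and mean extinction error."""
    return math.sqrt(sigma_mag**2 + (feh_slope * sigma_feh)**2 + sigma_ext**2)

def distance_error_kpc(d_kpc, sigma_mu_mag):
    """Propagate a distance-modulus error to a distance error:
    d = 10^(mu/5 + 1) pc implies sigma_d = d * ln(10)/5 * sigma_mu."""
    return d_kpc * math.log(10.0) / 5.0 * sigma_mu_mag
```

With the numbers above the quadrature sum gives roughly 0.12 mag, and a 0.13 mag modulus error at 66.8 kpc maps to about 4 kpc, of the order of the $\pm 4.3$ kpc quoted for the RR Lyrae stars.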
We realize that recent determinations of zeropoints and slopes in the literature no longer show major differences, hence we think that an intrinsic zeropoint shift between the different tracers is not likely. We believe that the lack of a measurable offset between the two populations does imply that their mean distances do coincide. ![Stellar densities of RR Lyrae stars (filled grey contours) and Cepheids (colored contours) shown as a function of distance and right ascension $\alpha$ in the upper panel, and as a function of distance and declination $\delta$ in the lower panel. Area-averaged reddening values are used to correct all distance estimates. The main concentration of both populations is located nearly at the same position. But the inclination angles of the young and old population are very different from each other. In Table \[table\_bins\_RC\_contour\] the sizes of the evaluated boxes are listed.[]{data-label="RRL_Cep_dist_RCext"}](Fig_2a.png "fig:"){width="47.00000%"} ![Stellar densities of RR Lyrae stars (filled grey contours) and Cepheids (colored contours) shown as a function of distance and right ascension $\alpha$ in the upper panel, and as a function of distance and declination $\delta$ in the lower panel. Area-averaged reddening values are used to correct all distance estimates. The main concentration of both populations is located nearly at the same position. But the inclination angles of the young and old population are very different from each other. In Table \[table\_bins\_RC\_contour\] the sizes of the evaluated boxes are listed.[]{data-label="RRL_Cep_dist_RCext"}](Fig_2b.png "fig:"){width="47.00000%"} Maps corrected with individual reddening ---------------------------------------- A more precise reddening correction can be applied by using individual reddening estimates for each Cepheid or RR Lyrae star, instead of using an area-averaged reddening value for a large number of RC stars. 
We use the reddening estimates derived in to correct for the individual intrinsic color differences. The color from the absolute magnitudes in the $V$ and $I$ band is compared with the color from the observed $v$ and $i$ apparent magnitudes. The difference is assumed to be the individual reddening of this star, as described in Section \[reddening\]. ![Stellar densities of RR Lyrae stars (filled grey contours) and Cepheids (colored contours) as a function of distance and right ascension $\alpha$ in the upper panel and as a function of distance and declination $\delta$ in the middle one. The lower panel shows a three-dimensional representation of an isodensity contour of the RR Lyrae stars (blue) and Cepheids (red) as a function of right ascension $\alpha$, declination $\delta$, and distance. All distances are extinction-corrected using the individual color reddening values. The distributions of the RR Lyrae stars and the Cepheids have a very different orientation in the SMC. While the RR Lyrae form a flattened disk-like structure and are not inclined, the Cepheids show a large inclination angle. Furthermore, the RR Lyrae show a lower density pattern in the center of the SMC surrounded by a higher density ring (upper panel), which is not visible for the Cepheids. The sizes of the boxes used to evaluate the density are listed in Table \[table\_bins\_RC\_contour\]. \[The lower panel is also available as an mpeg animation *video1.mpeg* in the electronic version of this article published in the Astronomical Journal. The video shows a 360$^\circ$ rotation of the isodensity contours.\][]{data-label="RRL_Cep_dist_colorext"}](Fig_3a.png "fig:"){width="46.00000%"} ![Stellar densities of RR Lyrae stars (filled grey contours) and Cepheids (colored contours) as a function of distance and right ascension $\alpha$ in the upper panel and as a function of distance and declination $\delta$ in the middle one.
The lower panel shows a three-dimensional representation of an isodensity contour of the RR Lyrae stars (blue) and Cepheids (red) as a function of right ascension $\alpha$, declination $\delta$, and distance. All distances are extinction-corrected using the individual color reddening values. The distributions of the RR Lyrae stars and the Cepheids have a very different orientation in the SMC. While the RR Lyrae form a flattened disk-like structure and are not inclined, the Cepheids show a large inclination angle. Furthermore, the RR Lyrae show a lower density pattern in the center of the SMC surrounded by a higher density ring (upper panel), which is not visible for the Cepheids. The sizes of the boxes used to evaluate the density are listed in Table \[table\_bins\_RC\_contour\]. \[The lower panel is also available as an mpeg animation *video1.mpeg* in the electronic version of this article published in the Astronomical Journal. The video shows a 360$^\circ$ rotation of the isodensity contours.\][]{data-label="RRL_Cep_dist_colorext"}](Fig_3b.png "fig:"){width="46.00000%"} ![Stellar densities of RR Lyrae stars (filled grey contours) and Cepheids (colored contours) as a function of distance and right ascension $\alpha$ in the upper panel and as a function of distance and declination $\delta$ in the middle one. The lower panel shows a three-dimensional representation of an isodensity contour of the RR Lyrae stars (blue) and Cepheids (red) as a function of right ascension $\alpha$, declination $\delta$, and distance. All distances are extinction-corrected using the individual color reddening values. The distributions of the RR Lyrae stars and the Cepheids have a very different orientation in the SMC. While the RR Lyrae form a flattened disk-like structure and are not inclined, the Cepheids show a large inclination angle. 
Furthermore, the RR Lyrae show a lower density pattern in the center of the SMC surrounded by a higher density ring (upper panel), which is not visible for the Cepheids. The sizes of the boxes used to evaluate the density are listed in Table \[table\_bins\_RC\_contour\]. \[The lower panel is also available as an mpeg animation *video1.mpeg* in the electronic version of this article published in the Astronomical Journal. The video shows a 360$^\circ$ rotation of the isodensity contours.\][]{data-label="RRL_Cep_dist_colorext"}](Fig_3c.png "fig:"){width="41.00000%"} Figure \[RRL\_Cep\_dist\_colorext\] reveals shorter distances for the SMC than when using the RC reddening values in Figure \[RRL\_Cep\_dist\_RCext\]. The median distance of the RR Lyrae stars is found to be $D_{\mathrm{RRL/median}} = 61.5 \pm 3.4$ kpc ($(m-M)_0 = 18.94 \pm 0.11$ mag), a bit closer than using the RC reddening method. For the Cepheids we also find a closer distance using the individual reddening values, $D_{\mathrm{Cep/median}} = 63.1 \pm 3.0$ kpc ($(m-M)_0 = 19.00 \pm 0.10$ mag). As before in Section \[corrected\_RC\] we calculate the uncertainties of the distances using error propagation. We take into account the intrinsic magnitude error, the uncertainty of the metallicity estimate plus the uncertainty of the reddening. The period is measured with such high accuracy by OGLE III that it does not influence the uncertainty determination. In , we point out that in regions with substantial amounts of dust and gas the reddening can fluctuate significantly with depth and position. Such local differential reddening cannot be resolved by the RC reddening method. Furthermore, as pointed out by @Barmby11, Cepheids suffer from considerable mass loss, leading to circumstellar dust around the star. This leads to additional differential reddening, which is not accounted for by the RC maps. 
Details on the differences of population- and temperature-dependent reddening are discussed in @Grebel95 [@Zaritsky99] and @Zaritsky02. A more accurate distance estimate can be obtained by using the individual reddening method. The mean distances of the RR Lyrae and Cepheid populations are in good agreement within their uncertainties when using individual dereddening. As found when using the RC reddening, the SMC has a considerable depth, even though it is reduced, as expected, when using the individual reddening. The internal structure of the SMC changes as well, depending on the kind of reddening correction. In the upper panel of Figure \[RRL\_Cep\_dist\_colorext\] we obtain a lower density of RR Lyrae stars in the center of the SMC. Moving away from the center of the RR Lyrae distribution, a ring-like structure of higher density surrounds the center. Further outwards the density drops steadily. We checked whether this pattern is an artifact introduced by the smoothing with the Gaussian kernel, but the unsmoothed figure contains the same pattern as shown in this representation. By plotting single stars instead of contours we find the same effect and thus consider it to be real. This bimodal distribution is also seen in Figure 7 of @Soszynski10b. For the Cepheids we find a more centrally concentrated distribution when using the individual stellar reddening. The region of the highest density coincides in part with the low-density center of the RR Lyrae stars (see Figure \[RRL\_Cep\_dist\_colorext\]) and with the center of . With increasing distance from the center, the density of the Cepheids drops and the density contours become increasingly elongated. Towards the northeast of the OGLE field the density contours are closer to us and are elongated in the direction towards the Magellanic Bridge.
Three-dimensional structure {#3D_structure}
===========================

The SMC in slices
-----------------

The three-dimensional data for the two populations traced by the Cepheids and RR Lyrae stars allow us to gain a better insight into the internal structural properties of the SMC. In Figures \[RRL\_dist\_45\_75\] and \[Cep\_dist\_45\_75\] we slice the SMC into three bins of 10 kpc depth each. All stars are color-coded based on their distance and plotted with their spatial coordinates. The individual distance uncertainty of each star is about 8% and therefore well below the bin size.

### RR Lyrae stars

![image](Fig_4.png){height="0.67\textheight" width="100.00000%"}

The projected distribution of the old population of RR Lyrae stars is shown in Figure \[RRL\_dist\_45\_75\] for three different distance bins. In the upper left panel of Figure \[RRL\_dist\_45\_75\], which represents RR Lyrae stars that are closer than 55 kpc, we find only very few stars of this old generation, most of which are located in the eastern part of the OGLE III field. Only 1.5% of all RR Lyrae stars are present in this distance bin. These stars are randomly distributed and no pattern or structure is visible. The intermediate distance bin from 55 kpc to 65 kpc (upper right panel) contains 52.6% of the whole RR Lyrae sample. This distance bin is dominated by stars with distances between 60 kpc and 65 kpc. We find that no particular substructure is visible other than a slightly higher density in the central region. The stars are distributed fairly homogeneously over the whole body of the SMC measured by OGLE III. The farthest bin, from 65 kpc to 75 kpc, contains 45.9% of the RR Lyrae stars and is dominated by stars at distances between 65 kpc and 70 kpc. Overall no heterogeneous structural patterns are visible. The density distribution of this panel is similar to that of the intermediate distance bin.
We test whether the density distribution follows a random distribution using the $\mathcal{Q}$-parameter [@Cartwright04]. Using the whole sample of RR Lyrae stars, we find a value of $\mathcal{Q} = 0.73$, which corresponds to a perfectly random distribution [@Schmeja08]. Dividing the sample into distance bins of 100 RR Lyrae stars each, values of $\mathcal{Q} = 0.72 - 0.83$ are obtained. Four out of 15 bins have values of $\mathcal{Q} > 0.80$, which is a slight indication of a centrally concentrated distribution, while a mean of $\mathcal{Q} = 0.77 \pm 0.03$ is found. We thus conclude that the RR Lyrae stars are overall homogeneously distributed over the OGLE III field of the SMC, showing a slightly ellipsoidal or spheroidal distribution, as also found by @Subramanian12.

### Cepheids

![image](Fig_5.png){height="0.67\textheight" width="100.00000%"}

Figure \[Cep\_dist\_45\_75\] shows distance bins covering the distance range from 45 kpc to 75 kpc, each 10 kpc deep. The distribution of the Cepheids in these three slices shows a very different picture than that of the old population. The distance bin of 45 kpc to 55 kpc contains 122 stars, or $5\%$ of the entire sample of Cepheids. Most of these stars are concentrated in the eastern parts of this closest distance bin. The central distance bin of the SMC, with a distance range from 55 kpc to 65 kpc (upper right panel), contains the majority, nearly $63\%$, of the Cepheids. The panel is dominated by a distance gradient stretching from the northeast to the southwest. The northeastern parts are closest to us and coincident in projection with the region of the SMC bar that contains the luminous N66 H [ii]{} region[^2]. In the most eastern part of the bar, around N80, we hardly find any Cepheids. The highest density of Cepheids coincides very well with the center found by . This area coincides with a number of H [ii]{} regions in the bar, e.g., N17, N19, N20, N22, and N26.
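The $\mathcal{Q}$-parameter test applied to the RR Lyrae sample above can be sketched in a few lines: $\mathcal{Q}$ is the normalized mean edge length of the minimum spanning tree divided by the normalized mean pairwise separation. The normalization conventions used here (cluster radius measured from the centroid, area $\pi R^2$) follow one common reading of @Cartwright04 and are assumptions, not the authors' exact implementation.

```python
import numpy as np

def q_parameter(x, y):
    """Cartwright & Whitworth Q for 2-D positions: Q = mbar / sbar, with
    mbar the MST mean edge length normalized by sqrt(N*A)/(N-1), and sbar
    the mean pairwise separation normalized by the cluster radius."""
    pts = np.column_stack([x, y]).astype(float)
    n = len(pts)
    d = np.hypot(pts[:, None, 0] - pts[None, :, 0],
                 pts[:, None, 1] - pts[None, :, 1])
    # Prim's algorithm for the Euclidean minimum spanning tree
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = d[0].copy()          # cheapest connection of each node to the tree
    mst_total = 0.0
    for _ in range(n - 1):
        best[in_tree] = np.inf  # nodes already in the tree are not candidates
        j = int(np.argmin(best))
        mst_total += best[j]
        in_tree[j] = True
        best = np.minimum(best, d[j])
    centre = pts.mean(axis=0)
    r_cl = np.max(np.hypot(pts[:, 0] - centre[0], pts[:, 1] - centre[1]))
    area = np.pi * r_cl ** 2
    mbar = mst_total / np.sqrt(n * area)   # = mean_edge * (n-1) / sqrt(n*A)
    iu = np.triu_indices(n, 1)
    sbar = d[iu].mean() / r_cl
    return mbar / sbar
```

As a sanity check, four stars on the corners of a unit square give $\mathcal{Q} \approx 0.74$, close to the random-field value discussed above; substructured distributions give lower values and centrally concentrated ones higher values.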
The most southwestern and the most northwestern parts of the OGLE III region do not contain any Cepheids. The farthest bin, with stars at distances of 65 kpc to 75 kpc, contains a concentration of stars in the projected central region of the OGLE III field. The eastern parts are nearly devoid of stars in this distance bin, and towards the southwest the most distant Cepheids of the SMC are present. Most of the stars concentrated in the central parts of this panel have a similar distance, between 65 kpc and 70 kpc. In the northwest of this concentration the density of Cepheids drops drastically, and in the most northwestern fields no stars of this young population are present. Recent star formation as traced by Cepheids only took place at locations close to the bar of the SMC ($\alpha \sim 12^{\circ}$ to $16^{\circ}$ and $\delta \sim -73.5^\circ$ to $-72^\circ$). The lower right panel in Figure \[Cep\_dist\_45\_75\] shows the H I contours [observed by @Stanimirovic99] overlaid with observations of H$\alpha$ by @Gaustad01 in greyscale. The combined H$\alpha$-H I image was taken from Lorimer et al. (NRAO/AUI/NSF)[^3]. The Cepheids trace the location of the bar very well. The highest density of visible light agrees with the highest density of Cepheids visible in the upper right and lower left panels. As mentioned in the previous section, the northeastern region of the OGLE III field contains the Cepheids closest to us. This is in good agreement with the inclined gaseous component of the SMC found by @Stanimirovic04. The number density of Cepheids decreases from the bar in the direction towards the wing. The wing itself is missing from the OGLE III field, which ends just west of the H [ii]{} region N84.

Position angle
--------------

We count the number of RR Lyrae stars and Cepheids in boxes of 0.2 kpc $\times$ 0.2 kpc in a Cartesian X,Y coordinate system projected onto the equatorial plane.
The coordinates were transformed using the relations of @Subramanian12, following @Marel01a and @Weinberg01. Moving along the X-axis, the positions of the fields with the highest number density are fitted with a first-order polynomial, using the center of as the origin. The position angles of the young and old populations of the SMC are similar. While we obtain $\Theta = 66^\circ \pm 15^\circ$ for the Cepheids, a position angle of $\Theta = 83^\circ \pm 21^\circ$ is found for the old population represented by the RR Lyrae stars. Since the RR Lyrae stars are distributed very homogeneously, the position angle does not change if we consider only stars located inside the innermost $3^\circ$ around the center of . For the Cepheids we find a shift of the position angle to $\Theta = 87^\circ \pm 12^\circ$ if we take only the innermost $3^\circ$ of the SMC into account. The position angle of the Cepheids agrees, within the uncertainties, with the literature values for Cepheids of $58^{\circ} \pm 10^{\circ}$ found by @Caldwell86 and $55^{\circ} \pm 17^{\circ}$ by @Laney86. These authors used 63 and 23 Cepheids, respectively, distributed across the central region of the SMC. @Groenewegen00 found a value of $238^{\circ} \pm 7^{\circ}$, using data from OGLE II, 2MASS and DENIS, which is in good agreement with our result as well, given that the position angle is periodic with $\pi$. All the different position angles are summarized in Table \[inclination\_table\]. The value of $58.3^\circ$ for the position angle of the RR Lyrae stars found by @Subramanian12 is also similar to the value obtained in our study, but they do not quote uncertainties for their estimate.
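The ridge-line fit described above can be written compactly. The binning and the first-order polynomial follow the text; the conversion of the fitted slope into a position angle (here measured from north through east) is an assumed convention, and the coordinate transformation to the projected X,Y plane is omitted.

```python
import numpy as np

def position_angle_deg(x_kpc, y_kpc, bin_kpc=0.2):
    """Fit the density ridge of a 2-D star distribution with a first-order
    polynomial and return a position angle in degrees (assumed convention:
    measured from north through east)."""
    x = np.asarray(x_kpc, dtype=float)
    y = np.asarray(y_kpc, dtype=float)
    xe = np.arange(x.min(), x.max() + bin_kpc, bin_kpc)   # 0.2 kpc grid
    ye = np.arange(y.min(), y.max() + bin_kpc, bin_kpc)
    h, _, _ = np.histogram2d(x, y, bins=[xe, ye])
    xc = 0.5 * (xe[:-1] + xe[1:])                         # bin centers
    yc = 0.5 * (ye[:-1] + ye[1:])
    keep = h.sum(axis=1) > 0                 # columns that contain stars
    ridge_y = yc[np.argmax(h[keep], axis=1)] # densest cell in each column
    slope, _ = np.polyfit(xc[keep], ridge_y, 1)
    return 90.0 - np.degrees(np.arctan(slope))
```

For a synthetic distribution elongated along the diagonal ($y = x$) the fit recovers a $45^\circ$ angle, as expected for a slope of unity.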
Inclination angle
-----------------

  type of stars   $\Theta$ \[degrees\]   $i$ \[degrees\]   author
  --------------- ---------------------- ----------------- ----------------
  Cepheids        $58 \pm 10$            $70 \pm 3$        @Caldwell86
  Cepheids        $55 \pm 17$            $45 \pm 7$        @Laney86
  Cepheids        $238 \pm 7$            $68 \pm 2$        @Groenewegen00
  Red clump       55.5                   0.58              @Subramanian12
  RR Lyrae        58.3                   0.50              @Subramanian12

The inclination angle of the SMC is investigated by subdividing the observed fields into boxes of $0.3^{\circ} \times 0.3^{\circ}$ in $\alpha$ and $\delta$, respectively. For each field the mean distance is determined. These values are fitted by a linear function to estimate the inclination angle. For the RR Lyrae stars we find a very small inclination angle of $i = 7^\circ \pm 15^\circ$. This is consistent with no inclination of the old population of the SMC whatsoever, as found by @Subramanian12. Contrary to the RR Lyrae stars, the distribution of the Cepheids is clearly inclined with respect to our line of sight. For the western parts we find mean distances for the Cepheids that are about 15 kpc farther away from us than the eastern parts of the SMC. Overall this results in an inclination angle of $i = 74^\circ \pm 9^\circ$, in very good agreement with the literature values of $68^{\circ} \pm 2^{\circ}$ from, e.g., @Groenewegen00 [Table \[inclination\_table\]].

Depth
-----

Several investigations have shown that the SMC has a considerable depth [e.g., @Mathewson88; @Hatzidimitriou89; @Crowl01; @Subramanian12]. To determine the depth of the SMC, we use the orientation of the Cepheids and RR Lyrae stars as seen on the sky and do not rotate the stars to align the major axes of the different populations. This approach is taken to keep the data comparable to other investigations that did not carry out such rotations either.
To quantify the different depths of stars within the observed field of the SMC (compare Figure \[Cep\_dist\_45\_75\]) we divide the OGLE III field into nine rectangular fields, as well as into four rings (Figure \[scale\_height\_SMC\]). The depth of the SMC is determined by calculating a cumulative distribution of all stars present in the evaluated field. The distances below which 16% and 84% of all stars, respectively, lie are taken as the lower and upper limits of the depth. These limits enclose the innermost 68% of the population. The depth is determined in each of these fields individually. To compute uncertainties for the depth we vary the upper and lower limits of the depth by 5% each and take the mean difference as the error estimate. The measured, raw depth of the SMC is still affected by the uncertainties of the distance estimates, $\sigma_{star}$, which we subtract in quadrature to obtain the true depth. Our definition of depth is such that it is equivalent to two standard deviations from the mean, or $2\sigma_{star}$. For the RR Lyrae stars a distance uncertainty of 3.4 kpc is found, while the Cepheid distances have an uncertainty of 3.0 kpc. Using $$depth = \sqrt{depth_{raw}^2 - (2\sigma_{star})^2} \label{real_depth}$$ we obtain the real depth of the SMC. For some fields the uncertainty of the distance is larger than the inferred raw depth. For these fields we do not derive a true depth estimate (Figure \[scale\_height\_SMC\]). The number of RR Lyrae stars varies from field to field and decreases steadily towards the outskirts of the OGLE III field. For the rectangular fields the depth is quite similar for all the fields chosen, as shown in the upper panel of Figure \[scale\_height\_SMC\]. The corrected depth ranges from 1.2 kpc to 5.9 kpc, with a mean depth of $4.2 \pm 0.4$ kpc.
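The percentile-based depth measurement and the quadrature correction of Eq. \[real\_depth\] can be sketched as follows. This is a minimal illustration under stated assumptions: the Gaussian toy field and the function name are ours, not part of the paper's pipeline. The last line also evaluates the depth-to-scale-height conversion factor quoted in the Scale Height section.

```python
import math
import random

def corrected_depth(distances_kpc, sigma_star_kpc):
    """Raw depth = span between the distances below which 16% and 84% of
    the stars lie (the innermost 68% of the population); the individual
    distance errors are then subtracted in quadrature, treating the raw
    depth as a 2-sigma width."""
    d = sorted(distances_kpc)
    n = len(d)
    raw = d[int(0.84 * (n - 1))] - d[int(0.16 * (n - 1))]
    rest = raw * raw - (2.0 * sigma_star_kpc) ** 2
    # no true depth is derived when the error exceeds the raw depth
    return math.sqrt(rest) if rest > 0 else None

# toy field: Gaussian line-of-sight distribution centered at 61 kpc
# with a true standard deviation of 2.5 kpc
random.seed(1)
dists = [random.gauss(61.0, 2.5) for _ in range(2000)]
depth = corrected_depth(dists, sigma_star_kpc=0.0)  # close to 2 * 2.5 = 5 kpc

# depth-to-scale-height conversion factor from the Scale Height section
scale_factor = (1.0 - 2.0 * (1.0 / (2.0 * math.e))) / (2.0 * 0.68)  # ~0.4648
```

For a Gaussian distribution the 16th-to-84th percentile span is exactly twice the standard deviation, so the toy field recovers a raw depth near 5 kpc; passing a nonzero `sigma_star_kpc` shrinks it in quadrature, and fields with errors larger than the raw depth return `None`, mirroring the fields dropped in the text.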
The lower panel of Figure \[scale\_height\_SMC\] shows the annular fields, divided into semi-annuli by considering only stars with either positive or negative x-values for the non-central fields. The innermost field is simply a circle around the origin. The corrected depth values reach as much as 5.6 kpc, while a mean value of $4.2 \pm 0.3$ kpc is found. The northwestern fields show a slightly reduced depth, but overall no trend of differing depth within the OGLE III field of the SMC is seen. The density of stars in the outer fields is not significantly reduced and we confirm the homogeneous distribution of RR Lyrae stars in the SMC. ![Distribution of depth for the RR Lyrae stars (black numbers) and Cepheids (grey numbers, red in the online version) in the SMC. In the upper panel, the depth is calculated in the roughly rectangular fields indicated in the plot. In the lower panel semi-annular fields were used. In each field we quote the resulting depth and its uncertainty in kpc followed by the number of stars used for the calculation. A mean depth of $4.2 \pm 0.4$ kpc is found for the RR Lyrae stars, while the Cepheids’ mean depth varies around 6.0 kpc, depending on the choice of subfields.[]{data-label="scale_height_SMC"}](Fig_6a.png "fig:"){width="47.00000%"} ![Distribution of depth for the RR Lyrae stars (black numbers) and Cepheids (grey numbers, red in the online version) in the SMC. In the upper panel, the depth is calculated in the roughly rectangular fields indicated in the plot. In the lower panel semi-annular fields were used. In each field we quote the resulting depth and its uncertainty in kpc followed by the number of stars used for the calculation.
A mean depth of $4.2 \pm 0.4$ kpc is found for the RR Lyrae stars, while the Cepheids’ mean depth varies around 6.0 kpc, depending on the choice of subfields.[]{data-label="scale_height_SMC"}](Fig_6b.png "fig:"){width="47.00000%"} For the Cepheids in the rectangular fields we find a mean corrected depth value of all fields of $5.4 \pm 1.8$ kpc. The separate field to the northwest contains only one Cepheid and is thus excluded. The semi-annuli have a mean corrected depth value of $6.2 \pm 1.8$ kpc. These mean values are close to the depth inferred for the central rectangular or circular field, where most of the Cepheids are concentrated. Evaluating the whole OGLE III field without subdividing it leads to a (mean) depth of $7.5 \pm 0.3$ kpc.

Scale Height {#scale_height}
------------

The depth can easily be transformed into a scale height. The scale height is the distance at which the density of stars has dropped by a factor of $1/e$. This quantity is therefore half of the innermost $63\%$, instead of the $68\%$ used for the depth. The equation $$\frac{\textrm{scale~height}}{\textrm{depth}} = \frac{1 - 2(\frac{1}{2e})}{2 \times 0.68} = 0.4648$$ gives the transformation from the depth to the scale height. Using this transformation we obtain mean scale heights of $2.0 \pm 0.4$ kpc for the RR Lyrae stars using the rectangular and the semi-annular fields. For the Cepheids the different field selections lead to different scale height estimates. They range from $2.5\pm 0.4$ kpc for the rectangular fields to $2.9 \pm 0.3$ kpc when evaluating the whole OGLE III field. Several estimates of the depth have been provided in the literature, leading to very different pictures of the SMC. Using 61 Cepheids across the bar of the SMC, @Mathewson88 found a depth of 20 kpc, twice as much as found in our work. Using RC stars, @Subramanian09 [@Subramanian12] concluded for the OGLE II and OGLE III fields, respectively, that the 1-$\sigma$ depth is below 5 kpc.
They used the width of the distribution of magnitudes of the RC stars to estimate the depth. This width is influenced by several different factors, such as reddening, metallicity differences, or true depth effects, possibly leading to an underestimate of the actual depth. The approach taken by @Subramanian09 for the RC stars was used by @Kapakos11 for a subset of about 100 RR Lyrae stars present in the bar region of the OGLE III survey and for the whole OGLE III RR Lyrae stars dataset by @Subramanian12 assuming reddening values from RC stars. Both investigations found a 1-$\sigma$ width of the distribution of stars of $\sim 4$ kpc, in good agreement with our results for the RR Lyrae stars. However, @Subramanian12 conclude that the depth could be as much as 14 kpc taking 3.5-$\sigma$ of the distribution into account. Too few stars and too small a field were investigated in @Kapakos11 to determine spatial differences for the line-of-sight depth. Using the complete OGLE III dataset of the RR Lyrae stars, @Subramanian12 investigated the depth of 70 very small fields and found that the northern and eastern parts may have a slightly decreased depth. This is in good agreement with our results. Furthermore, the estimates of the depth relying on cluster distances agree very well with our calculations for the young population traced by Cepheids. @Crowl01 found a depth of 6 kpc to 12 kpc from ground-based imaging, while the six intermediate-age clusters studied by @Glatt08b with deep *HST* imaging lead to a depth of $\sim 10$ kpc, excluding the star cluster NGC 419, which seems to be 6 kpc closer than the closest of the other six clusters. Summary and Conclusions {#Conclusions} ======================= We investigate the three-dimensional structure of the young and old population of the SMC by calculating individual distances to 2522 Cepheids and 1494 RR Lyrae stars. 
The absolute magnitudes of the RR Lyrae stars are calculated from the photometric metallicity estimates in @Haschke12\_MDF. These are based on the Fourier-decomposed lightcurves of the RR Lyrae *ab* stars observed by the OGLE III survey. The period data to compute absolute magnitudes for the Cepheids are taken from the dataset of the OGLE III survey as well. We use two different approaches to correct for the reddening that the investigated stars are experiencing. First, the differences between the observed and theoretical mean color of red clump stars are used to estimate an area-averaged reddening value. For the other technique, individual reddening estimates for each Cepheid and RR Lyrae star are calculated. With this method we are able to correct for the actual line-of-sight reddening at the very position of the star. The reddening maps of both techniques are shown in @Haschke11\_reddening. These result in two sets of self-consistent three-dimensional maps of the SMC for the young Population I (Cepheids) and the old Population II (RR Lyrae) stars.

                                                   RR Lyrae           Cepheids
  ------------------------------------------------ ------------------ ------------------
  distance modulus using area averaged reddening   $19.13 \pm 0.13$   $19.17 \pm 0.12$
  distance modulus using individual reddening      $18.94 \pm 0.11$   $19.00 \pm 0.10$
  inclination \[degree\]                           $7 \pm 15$         $74 \pm 9$
  position angle \[degree\]                        $83 \pm 21$        $66 \pm 15$
  scale height \[kpc\]                             $2.0 \pm 0.4$      $2.7 \pm 0.3$

Using individual reddening estimates we calculate a median distance modulus of $(m-M)_{0 \mathrm{RRL/median}} = 18.94 \pm 0.11$ for the RR Lyrae stars and $(m-M)_{0 \mathrm{Cep/median}} = 19.00 \pm 0.10$ for the Cepheids. The results of the median distance moduli are in very good agreement with each other and with the distances obtained in the literature (Table \[distance\_table\] and Table \[Summary\_table\]).
By applying reddening values obtained with the area-averaged technique we find distances that are somewhat larger, but still in good agreement with many earlier distance estimates (Table \[distance\_table\]). We explain the larger distance values of the area-averaged extinction technique with unresolved differential reddening effects [e.g., @Barmby11]. We do not find any evidence for a long and short distance scale problem when comparing our RR Lyrae and Cepheid distances to the SMC. A similar result was found for the LMC [@Haschke12\_LMC]. The RR Lyrae stars show a fairly homogeneous distribution across the OGLE field. Their density increases gradually towards the center of the field, while the highest-density regions are located in a double-peaked, semi-annular structure around the center [see also @Soszynski10b; @Subramanian12]. Overall the RR Lyrae stars show a roughly spheroidal or ellipsoidal distribution. This spheroidal distribution becomes more pronounced and more easily recognizable when taking the much more numerous intermediate-age populations into account [not studied in our paper, but see, e.g., @Zaritsky00; @Cioni00a; @Gonidakis09; @Subramanian12]. The RR Lyrae stars do not reveal any obvious correlations with younger irregular features such as the SMC bar, and there is no bar visible in the distribution of RR Lyrae stars. While our analysis is only mildly suggestive of the eastern part of the old population’s distribution being closer to us, this trend is confirmed more clearly when larger areas than the OGLE III field are taken into account, such as in the sparsely sampled survey of the outer SMC regions by @Nidever11, or when more numerous intermediate-age populations such as RC stars are considered [e.g., @Hatzidimitriou89; @Gardiner91]. Moreover, the kinematics of intermediate-age and old red giants across the central parts of the SMC suggest that they are part of an unperturbed pressure-supported spheroid [@Harris06].
In contrast, the distribution of the Cepheids is closely correlated with the regions of recent star-formation activity along the bar of the SMC. The wing of the SMC, another region of ongoing star formation, lies outside of the OGLE III field. We emphasize that the Cepheids are tracers of a slightly older population (some 30 – 300 Myr) than the one responsible for the prominent H [ii]{} regions along the SMC bar [see, e.g., the images in @Bolatto11]. Using Cepheids as tracers, we find the eastern part of the SMC field with distances $< 55$ kpc to be closest to us, in good qualitative agreement with the results from older tracers. Cepheids at distances between $\sim 55$ and $\sim 60$ kpc are still found mainly in the eastern part of the OGLE III field, where they coincide with the north-eastern part of the bar around the luminous H [ii]{} region N66. At distances starting at $\sim 62$ kpc most of the Cepheids are concentrated around the SMC center derived from older K and M giants in 2MASS. This region overlaps in projection with N17 and its neighboring H [ii]{} regions in the lower (southwestern) region of the bar. At distances in the range of $\sim 65$ to $\sim 68$ kpc we still find the highest concentration of Cepheids near the SMC’s center, with a less prominent, scattered tail extending further east and a sparse scattering of stars to the west. Almost no Cepheids are observed in the northwestern part of the bar at these farther distances. If we assume that the Cepheids are physically associated with the bar, this indicates that the bar is tilted from the northwest (closest part) to the southeast (farthest part, elongated along the line of sight). This is visualized in the included mpeg movie. In their analysis of the global star formation history of the SMC, @Harris04 inferred that the SMC was comparatively quiescent at intermediate ages (about 8.4 to 3 Gyr ago), while the star formation activity increased at more recent times.
They find peaks at 2.5 and 0.4 Gyr, which they attribute to close encounters of the SMC with the Milky Way (in agreement with other studies), and a most recent peak at 60 Myr. The latter two maxima roughly bracket the ages of Cepheids. The distribution of star formation activity at 400 Myr and 250 Myr shown in Figure 6 of @Harris04 resembles the distribution of Cepheids found in our study. Whether tidally triggered star formation indeed created the Cepheids is unclear, since recent high-precision proper motion measurements for the Magellanic Clouds raised new questions regarding their short- and long-term orbital history [e.g., @Kallivayalil06b; @Besla07]. We note that the two-dimensional distribution of star clusters in the age range of the Cepheids coincides well with the general locus of the Cepheids, although the star clusters are more strongly confined to the bar [see @Glatt10 their Figure 8]. The position angles of the two populations are similar. For the Cepheids we obtain $\Theta_{\mathrm{Cep}} = 66^\circ \pm 15^\circ$, while $\Theta_{\mathrm{RRL}} = 83^\circ \pm 21^\circ$ is found for the RR Lyrae stars. Unlike the position angle, the inclination angle changes significantly between the old and young populations. We find an inclination angle of $i_{\mathrm{RRL}} = 7^\circ \pm 15^\circ$ for the RR Lyrae stars, which is consistent with zero inclination. For the Cepheids an inclination of $i_{\mathrm{Cep}} = 74^\circ \pm 9^\circ$ is obtained, such that the northeast is much closer to us than the southwest, as mentioned earlier. The closer part roughly points towards the LMC. We visualize the three-dimensional distribution of the Cepheids and RR Lyrae stars in the mpeg movie in Figure \[RRL\_Cep\_dist\_colorext\]. Overall, the comparison of the structural parameters from Cepheids and RR Lyrae stars found in the literature to the results found in this study yields good agreement (Table \[inclination\_table\] and Table \[Summary\_table\]).
The OGLE III field is too small to deduce the actual shape of the SMC, which remains under debate. Only large-scale surveys of the whole SMC, including the outskirts, will solve this issue. Nonetheless, we can infer depth estimates from our investigation. The depth of the SMC has been under extensive discussion for the last decades. @Mathewson88 claimed the SMC to be very extended with a depth of 20 kpc, while @Subramanian09 [@Subramanian12] and @Kapakos11 found a 1-$\sigma$ depth of less than 5 kpc. We find different depths for the old population (RR Lyrae stars) and for the young population (Cepheids). For the RR Lyrae stars we find a 1-$\sigma$ depth of $4.2 \pm 0.4$ kpc (or a scale height of $2.0 \pm 0.4$ kpc), while the depth for the Cepheids is measured, depending on the field selection, to be between $5.4 \pm 1.8$ kpc and $6.2 \pm 0.3$ kpc (or a scale height of $2.7 \pm 0.3$ kpc). Usually the scale height of a young population is expected to be smaller than that of old populations. However, in the SMC the young population clearly has a very different distribution than the old population, showing an asymmetric and highly inclined distribution. Although there is considerable uncertainty regarding the long-term orbits of the SMC, LMC, and Milky Way, it seems quite likely that the recent, increased star formation leading to the Cepheids was triggered by a close encounter between these galaxies. This encounter may have shifted and compressed some of the SMC’s gas through tidal and ram pressure effects, possibly even creating some of the features that we observe now as the bar and the wing. We thank our anonymous referee for his or her helpful comments. We are thankful to the OGLE collaboration for making their data publicly available. R. Haschke is obliged to S. Schmeja for giving helpful comments on the calculation of the $\mathcal{Q}$-parameter. The comments to improve the manuscript and the proofreading by K. Glatt and K. Jordi are very much appreciated.
This work was supported by Sonderforschungsbereich SFB 881 “The Milky Way System” (subproject A2) of the German Research Foundation (DFG). [^1]: The catalogs are available at <http://ogle.astrouw.edu.pl/> [^2]: The N designation of regions luminous in H$\alpha$ follow the numbering scheme introduced in the catalog of @Henize56. [^3]: http://www.nrao.edu/pr/2007/brightburst
---
author:
- 'Aziz Koçanaoğullari, Murat Akcakaya, Deniz Erdoğmuş[^1]'
bibliography:
- 'ref.bib'
nocite:
- '[@barcelo2001mathematical]'
- '[@renyi1961measures]'
title: |
  Stopping Criterion Design for Recursive Bayesian Classification:\
  Analysis and Decision Geometry
---

Introduction
============

Conventional Stopping Criteria
==============================

Proposed Perspective
====================

Experiments and Results
=======================

Conclusion
==========

[Aziz Koçanaoğullari]{} received the B.S. degrees in electrical and computer engineering and in mathematics from Istanbul Technical University, Istanbul, Turkey, in 2014 and 2015, respectively. He received the M.Sc. degree in Telecommunications Engineering from Istanbul Technical University in 2016. Since then, he has been with the Cognitive Systems Laboratory (CSL) at Northeastern University, Boston, MA, USA, where he is currently a Ph.D. candidate. His main areas of research interest are active recursive inference in sequential decision-making processes and active model learning. [Murat Akcakaya]{} received the Ph.D. degree in electrical engineering from Washington University in St. Louis, MO, USA, in December 2010. He is an Assistant Professor in the Electrical and Computer Engineering Department of the University of Pittsburgh. His research interests are in the areas of statistical signal processing and machine learning. [Deniz Erdoğmuş]{} received the B.S. degree in electrical engineering and mathematics, and the M.S. degree in electrical engineering from the Middle East Technical University, Ankara, Turkey, in 1997 and 1999, respectively, and the Ph.D. degree in electrical and computer engineering, in 2002, from the University of Florida, Gainesville, FL, USA, where he was a postdoc until 2004. He is currently a Research Professor at Northeastern University.
His research focuses on statistical signal processing and machine learning with applications to biomedical signal/image processing and cyberhuman systems. [^1]: This work is supported by NSF (IIS-1149570, CNS-1544895, IIS-1715858, IIS-1717654, IIS-1844885, IIS-1915083), DHHS (90RE5017-02-01), and NIH (R01DC009834).
--- abstract: 'SrTiO$_3$ is a superconducting semiconductor with a pairing mechanism that is not well understood. SrTiO$_3$ undergoes a ferroelastic transition at $T=$ 105 K, leading to the formation of domains with boundaries that can couple to electronic properties. At two-dimensional SrTiO$_3$ interfaces, the orientation of these ferroelastic domains is known to couple to the electron density, leading to electron-rich regions that favor out-of-plane distortions and electron-poor regions that favor in-plane distortion. Here we show that ferroelastic domain walls support low energy excitations that are analogous to capillary waves at the interface of two fluids. We propose that these capillary waves mediate electron pairing at the LaAlO$_3$/SrTiO$_3$ interface, resulting in superconductivity around the edges of electron-rich regions. This mechanism is consistent with recent experimental results reported by Pai et al. \[PRL $\bf{120}$, 147001 (2018)\]' author: - David Pekker - 'C. Stephen Hellberg' - Jeremy Levy bibliography: - 'STO\_SC\_Theory\_Simple.bib' title: 'Theory of Superconductivity at the LaAlO$_3$/SrTiO$_3$ heterointerface: Electron pairing mediated by deformation of ferroelastic domain walls' --- The origin of electron pairing in SrTiO$_3$ (STO) has remained mysterious for over half a century. Superconductivity in bulk STO, first reported [@Schooley1964] in 1964, takes place at exceedingly low carrier densities ranging from $8.5 \times 10^{18}$ to $3.0 \times 10^{20}\,\text{cm}^{-3}$ (Refs. [@Koonce1967; @Lin2013; @Eagles2016; @Swartz2018; @Eagles1986]). The superconducting transition temperature is a dome-shaped [@Koonce1967] function of carrier density, reaching a maximum of $0.4~\text{K}$. Over the last five decades, there have been many attempts to identify the electron pairing mechanism, invoking in many cases the unusual or unique properties of STO. 
Candidates for the pairing “glue" have included valley degeneracy [@Schooley1964], longitudinal optical phonons [@Cohen1964; @Baratoff1981; @Gorkov2016; @Klimin2014], antiferrodistortive modes [@Appel1969], plasmons [@Ruhman2016], plasmons in conjunction with optical phonons [@Takada1980], Jahn-Teller bipolarons [@Stashans2003], and ferroelectric modes [@Edge2015; @Gabay2017; @Arce-Gamboa2018]. The development of STO-based interfaces [@Ohtomo2004; @Pai2018a], like the LaAlO$_3$/SrTiO$_3$ (LAO/STO) interface, has revived interest in the superconducting properties of STO, particularly following key experimental reports in the LAO/STO system [@Reyren2007; @Caviglia2008; @Bell2009; @Gariglio2009]. Superconductivity at this interface exhibits many of the same features as bulk STO, including the superconducting dome as well as characteristic temperature ($T_c \leq 0.4$ K) and magnetic field ($H_{c2} \sim 2$ kOe) scales, and carrier densities in the $10^{12-13}$ cm$^{-2}$ range, comparable to corresponding densities for bulk STO [@Lin2013]. Quantum dots and nanowires created at the LAO/STO heterointerface exhibit electron pairing without superconductivity [@Cheng2015], which provides us with an independent measure of the pairing strength $E_P \sim 0.1 - 1 \, \text{meV}$. Ferroelastic domains are ubiquitous at the LAO/STO interface, and their intersection with the LAO/STO interface strongly influences the transport behavior [@Honig2013; @Kalisky2013; @Frenkel2017]. The domains observed at the surface are of two kinds. First, ferroelastic domain structures naturally form in bulk STO [@Scott2012; @Salje2013], and some of these bulk domains intersect the surface. Second, there is evidence that electron density variations can nucleate ferroelastic domains at the LAO/STO interface.
Specifically, piezoforce microscopy techniques show that at room temperature electron-rich regions expand along the $c$-axis perpendicular to the interface [@Bi2016], which seeds the formation of $z$-oriented ferroelastic domains at low temperatures. Local probes have shown that these domain boundaries strongly influence carrier transport, both in the normal and superconducting state [@Honig2013; @Kalisky2013; @Roy2017; @Cheng2018]. ![image](figFEspectrum.pdf){width="\textwidth"} Recent experiments by Pai et al. [@Pai2018] provide evidence for an intrinsic 1D nature of the superconducting state in LAO/STO. In those experiments, a conductive atomic-force microscopy lithography technique [@Cen2008; @Cen2009; @Brown2016] was used to define electron-rich channels with widths that varied systematically between 10 nm and 1 $\mu$m. These channels showed a superconducting critical current that was independent of the width of the channel. Further, in experiments with multiple parallel channels, the critical current was found to be proportional to the number of conducting channels. The boundaries of the conductive channels are known to coincide with ferroelastic domain walls, which suggests that the domain walls play a role in 1D electron pairing. $z$-oriented ferroelastic domains form under the electron-rich (conducting) regions, while the electron-poor (insulating) regions are associated with $x$- or $y$-oriented domains [@Honig2013; @Bi2016]. Ferroelastic domain walls form at the interface of the conducting and insulating regions, precisely where superconductivity seems to appear [@Pai2018]. These types of domain walls, with a thickness of several lattice sites, have been investigated in first-principles calculations in Ref. [@Hellberg2019].
Here we describe a model of electron pairing in which the coupling between electrons and ferroelasticity naturally leads to an attractive electron-electron interaction, with the strongest attraction occurring along the ferroelastic domain walls. Specifically, we describe electron pair formation near a ferroelastic domain wall that forms between an electron-rich and an electron-poor region. We begin by extracting the typical ferroelastic domain wall width from first-principles density functional theory calculations. Using a spin-wave-like analysis, we show that the ferroelastic domain walls host low-energy elastic modes. Next, by coupling the ferroelastic deformations to the electron density, we show that ferroelastic deformations mediate attractive interactions between pairs of electrons. These interactions are found to be strongest when the two electrons are near the domain wall. Finally, we investigate electron pairing using a real-space analogy to the Cooper pair problem [@Cooper1956]. We consider two electrons in the vicinity of the domain wall: (1) the electrons are restricted from entering the electron-rich region due to the Pauli exclusion principle; (2) the electrons experience short-range repulsion; (3) the electrons experience long-range attraction, as well as attraction to the domain wall, that is mediated by ferroelastic distortions and is described at the level of the Born approximation, in which electrons are treated as heavy particles. We find that electrons can indeed bind into (real-space) Cooper pairs. We conclude by commenting on the implications of our model for superconductivity in patterned and bulk LAO/STO heterointerfaces, as well as possible generalizations to describe superconductivity in bulk STO.

Modeling ferroelastic domain walls
==================================

We begin by constructing a 2D model of ferroelasticity at the surface of the LAO/STO heterointerface.
Our goal is specifically to model the fluctuations of domains induced by charge density at the LAO/STO interface as opposed to bulk domains in STO. First principles density functional calculations show that ferroelastic domain walls are extended objects with a size of roughly $3\,\text{nm}$ [@Hellberg2019]. The largest distortion in tetragonal STO is the rotation of the oxygen octahedra, which can be used to define a vector that rotates by 90 degrees across a domain wall [@Schiaffino2017]. To characterize the domain wall, we take the data of Ref. [@Hellberg2019] and compute the local rotation vector. For each Ti atom the local rotation vector is $$\begin{aligned} \vec{v}_i=\frac{1}{4}\sum_j \vec{\delta v}_j \times \vec{e}_j\end{aligned}$$ where $i$ labels the Ti atom, the sum runs over its six neighboring oxygens $j$, $\vec{\delta v}_j$ is the displacement vector of the $j$th oxygen with respect to its ideal position, and $\vec{e}_j$ is the unit vector from the titanium atom to the ideal position of the $j$th oxygen. Thus in the bulk tetragonal structure, $\vec{v}_i$ gives the displacement of the planar oxygens from their ideal positions. We plot $\vec{v}_i$ as a function of position across the domain wall in Fig. \[fig:spin-wave\]a. The length of this vector varies from $0.315$ Å in the bulk regions to $0.353$ Å in the center of the domain wall. These calculations were performed on a 17.5 nm supercell containing two identical “head-to-tail" domain walls, but only one is shown; computational details are given in [@Hellberg2019]. 
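As a concrete check of this definition, the sketch below evaluates $\vec{v}_i$ for a hypothetical bulk-like octahedron whose four planar oxygens are displaced tangentially by 0.315 Å (the bulk magnitude quoted above) by a rotation about the $c$-axis, while the apical oxygens stay put; the geometry is an illustrative assumption, not data from Ref. [@Hellberg2019].

```python
def rotation_vector(displacements, directions):
    """v_i = (1/4) * sum_j (delta_v_j x e_j) over the six neighboring
    oxygens of a Ti atom; delta_v_j is the displacement of oxygen j from
    its ideal site and e_j the unit vector from Ti to that ideal site."""
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    v = [0.0, 0.0, 0.0]
    for dv, e in zip(displacements, directions):
        c = cross(dv, e)
        for k in range(3):
            v[k] += 0.25 * c[k]
    return tuple(v)

# bulk-like octahedron: four planar oxygens displaced tangentially by
# 0.315 Angstrom (rotation about z), two apical oxygens undisplaced
d = 0.315
dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
disp = [(0, d, 0), (0, -d, 0), (-d, 0, 0), (d, 0, 0), (0, 0, 0), (0, 0, 0)]
v = rotation_vector(disp, dirs)   # points along z with magnitude 0.315
```

The apical oxygens contribute nothing (their cross products vanish for a $z$-axis rotation), and each of the four planar oxygens contributes equally, which is why the prefactor $1/4$ makes $|\vec{v}_i|$ equal the planar-oxygen displacement, as stated in the text.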
It is natural, therefore, to model the ferroelasticity using a Heisenberg model (as opposed to an Ising model, which would have abrupt domain walls) $$\begin{aligned} H_{\text{FE}}=-J \sum_{\langle ij \rangle} \sigma_i \cdot \sigma_j - \sum_i h_i \cdot \sigma_i, \label{eq:HFE}\end{aligned}$$ where $\sigma_i$ are the spin-1/2 operators representing the ferroelastic distortions, $J$ represents the elastic modulus, and $h_i$ is an effective magnetic field that locks the ferroelasticity in the $z$-direction in the electron-rich region and the $x$-direction in the electron-poor region (see Fig. \[fig:spin-wave\]b). We justify the use of the locking field by noting that away from domain walls, ferroelastic domains are locked to the heterointerface surface and the crystallographic axes. The locking field has an important implication, as it gives mass to the spin-wave (Goldstone) mode, which would otherwise be massless. We construct the ferroelastic-wave spectrum of $H_{\text{FE}}$ using a mean-field theory plus spin-wave fluctuations analysis. We begin by writing down a trial wave function for the spins $$\begin{aligned} |\psi[\{\phi_i\}]\rangle=\prod_i \left[\phi_i |\downarrow\rangle_i + \sqrt{1-|\phi_i|^2} |\uparrow\rangle_i\right], \label{eq:trPsi}\end{aligned}$$ where the optimization parameters are the set of complex numbers $\{\phi_i\}$. Next, we find the variational ground state by minimizing $\langle\psi[\{\phi_i\}] | H_{\text{FE}} |\psi[\{\phi_i\}]\rangle$ with respect to the $\phi_i$’s to obtain the mean-field ground state defined by $\phi^{0}_i$. Finally, we expand in small fluctuations around the variational ground state, $\phi_i \rightarrow \phi^{0}_i + \delta_i(t)$, and minimize the action $\langle\psi[\{\phi_i\}] | i\partial_t - H_{\text{FE}} |\psi[\{\phi_i\}]\rangle$ to obtain the ferroelastic-wave spectrum.
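A classical analogue of this mean-field minimization can be sketched in one dimension: spins confined to the $x$–$z$ plane relax under the Hamiltonian of Eq. \[eq:HFE\], with the locking field along $z$ on one side and along $x$ on the other. The gradient-descent relaxation, the 25-site chain, and all parameter values below are illustrative assumptions rather than the variational calculation of the text.

```python
import math

def domain_wall_profile(n=25, J=1.5, h=1.0, n_left=5, right0=11,
                        sweeps=5000, lr=0.05):
    """Relax E = -J sum_i cos(t_i - t_{i+1}) + locking terms, for
    classical spins s_i = (sin t_i, 0, cos t_i): t = 0 is +z, t = pi/2
    is +x. Sites 0..n_left-1 are locked along z, sites right0..n-1
    along x; the field vanishes in the wall region in between."""
    t = [0.0 if i < n // 2 else math.pi / 2 for i in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            g = 0.0                       # dE/dt_i
            if i > 0:
                g += J * math.sin(t[i] - t[i - 1])
            if i < n - 1:
                g += J * math.sin(t[i] - t[i + 1])
            if i < n_left:                # locking term -h cos t_i
                g += h * math.sin(t[i])
            elif i >= right0:             # locking term -h sin t_i
                g -= h * math.cos(t[i])
            t[i] -= lr * g
    return t

angles = domain_wall_profile()
# the spins rotate smoothly from ~0 (z-locked side) to ~pi/2 (x-locked side)
```

The relaxed profile rotates monotonically across the field-free wall region and heals into the locked regions over roughly $\sqrt{J/h}$ sites, which is the classical counterpart of the smooth (non-Ising) wall motivating the Heisenberg choice.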
![image](figAttraction.pdf){width="90.00000%"} Let us now consider a concrete example of a domain wall defined by the locking field $$\begin{aligned} h(x,y)=\left\{ \begin{array}{cc} \{0,0,1\} & x \leq 5\\ \{1,0,0\} & x \geq 12 \end{array} \right.,\end{aligned}$$ where $1 \leq x \leq 25$, $1 \leq y \leq 100$, we set $J=1.5$, and we use open boundary conditions in the $x$-direction and periodic in the $y$-direction. The ground-state ferroelastic (spin) configuration for this domain wall has elongation in the $z$-direction for $x \leq 5$, simulating an electron-rich region, and in the $x$-direction for $x \geq 12$, simulating an electron-poor region (see Fig. \[fig:spin-wave\]b). Spin-wave theory tells us that away from a domain wall, the ferroelastic-wave spectrum has a gap of $2|h|$, where $|h|$ is the magnitude of the locking field (we use the bulk value of $|h|$ as the energy scale in the remainder of the manuscript). By introducing a domain wall as depicted in Fig. \[fig:spin-wave\]b, we find that the ferroelastic-wave spectrum acquires a set of modes inside the bulk gap (see Fig. \[fig:spin-wave\]c). The mode weight, $\delta_i$, of these low-energy modes lies almost entirely inside the domain wall, see Fig. \[fig:spin-wave\]d and e. Electron-electron attraction ============================ We investigate electron-electron interactions mediated by the ferroelasticity, which we model by introducing a linear coupling between the electron density and the ferroelastic distortion. This coupling is inspired by the experimental evidence that the electron density at the LAO/STO interface directly couples to an expansion of the crystal perpendicular to the surface [@Bi2016]. Hence, we choose the interaction Hamiltonian $$\begin{aligned} H_{\text{int}}=-\alpha \sum_{i,\sigma} n_{i,\sigma} S^z_i,\end{aligned}$$ where $n_{i,\sigma}=c^\dagger_{i,\sigma}c_{i,\sigma}$ is the electron number operator and $\alpha$ is the coupling constant (we set $\alpha=h$ in the remainder of the manuscript).
To estimate the electron-electron interaction energy, we invoke the Born approximation and treat the electrons as heavy particles and ferroelastic waves as light particles. We begin by considering the effect of a single electron on the ferroelasticity. Within our model, an extra electron placed in an electron-rich domain has very little effect on the ferroelasticity as it is already fully polarized. On the other hand, an electron placed in an electron-poor domain results in a deformation of the ferroelasticity that heals over the length-scale $\xi_{\text{bulk}}=\sqrt{J/h}$. An electron placed in a domain wall results in a deformation of the ferroelasticity, mainly inside the domain wall, that heals over a length-scale $\xi_{\text{DW}} \approx \sqrt{w}$, where $w$ is the domain wall width. To compute the electron-electron interaction within the Born approximation, we need to find the energy of $H_{\text{FE}}+H_{\text{int}}$ for each electron configuration (which is defined by the positions of the two electrons). Therefore, we place two electrons, of opposite spin, at positions $j_1=\{x_1,y_1\}$ and $j_2=\{x_2,y_2\}$ (see Fig. \[fig:energy\]a) and use the trial wave function of Eq. \[eq:trPsi\] to minimize the energy of this configuration $$\begin{aligned} E[j_1,j_2] = \langle \psi[\{\phi_i\}] | H_{\text{FE}} + H_{\text{int}}[j_1,j_2] |\psi[\{\phi_i\}]\rangle. \label{eq:E2e}\end{aligned}$$ As the system is invariant with respect to displacements along the domain wall, the two-electron energy is described by three parameters: 1. $x_1$: the displacement of the first electron away from the center-line of the domain wall, 2. $x_2$: the displacement of the second electron away from the center-line of the domain wall, 3. $\Delta y = y_2-y_1$: the separation of the two electrons along the domain wall. The energy in Eq. 
is composed of the electron-electron interaction energy $E_2$ and the single-electron energy $E_1$ $$\begin{aligned} E[j_1,j_2] = E_2[j_1,j_2]+E_1[j_1]+E_1[j_2].\end{aligned}$$ $E_2$ describes the electron-electron attraction mediated by the ferroelasticity; $E_1$ describes the attraction of electrons to the electron-rich region, which is also mediated by the ferroelasticity. We extract the two-electron energy $E_2$ from the computed energy $E[x_1,x_2,\Delta y]$ using the formula $E_2[x_1,x_2,\Delta y] \approx E[x_1,x_2,\Delta y]-E[x_1,x_2,\Delta y\rightarrow L_y/2]$, where $L_y/2$ is the maximum value of $\Delta y$ for two electrons in a box of length $L_y$ along the $y$ direction. In Fig. \[fig:energy\]b we plot $E_2$. As $E_2$ is a function of three parameters ($x_1$, $x_2$ and $\Delta y$), we choose to fix $x_1$ (i.e. the position of the first electron) and vary $x_2$ and $\Delta y$ (i.e. the position of the second electron) in each panel of Fig. \[fig:energy\]b. ![Binding energy of a real-space Cooper pair as a function of the barrier position $x_\text{min}$. The barrier prevents the occupation of sites to the left of $x_\text{min}$ by the two electrons that are forming the pair. The pairing energy was computed for a number of values of the on-site repulsion $U$.[]{data-label="fig:pairBindingEnergy"}](pairingEnergy.pdf){width="\columnwidth"} We find that $E_2[x_1,x_2,\Delta y]$ attains a minimum (indicative of electron-electron attraction) when the two electrons are near each other. The spatial extent of the “dip” around the minimum depends on the position of the first electron $x_1$. When the first electron is in the electron-poor region the dip has a smaller spatial extent than when the first electron is inside the domain wall. To quantify the size of this dip, we plot the integrated two-electron energy $E[j_1]=\sum_{j_2} E_2[j_1,j_2]$ as a function of the distance between the first electron and the domain wall.
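The baseline subtraction used above to isolate $E_2$ can be sketched with synthetic data; the exponential form of the two-electron part below is an assumption made purely for the check, not the computed interaction:

```python
import numpy as np

def extract_E2(E):
    """E2[x1, x2, dy] ~ E[x1, x2, dy] - E[x1, x2, dy -> Ly/2] (last dy slice)."""
    return E - E[:, :, -1:]

# synthetic check: single-electron terms plus an assumed short-ranged attraction
nx, ndy, xi = 4, 50, 3.0
E1 = np.random.default_rng(0).normal(size=nx)
dy = np.arange(1, ndy + 1)
V = -0.3 * np.exp(-dy / xi)                      # assumed two-electron part
E = E1[:, None, None] + E1[None, :, None] + V[None, None, :]

E2 = extract_E2(E)
# the single-electron terms cancel; E2 matches V up to the small tail at dy = Ly/2
```

The subtraction removes both $E_1$ terms exactly because they are independent of $\Delta y$, leaving $E_2$ up to the (exponentially small) residual interaction at the maximum separation.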
When the first electron is inside the electron-rich region (on the left side of the domain wall) there is essentially no electron-electron attraction. This is due to the fact that the ferroelasticity is already maximally deformed and the addition of one more electron has no effect. Similarly, the electron-electron attraction is also very weak when the first electron is located in the electron-poor region. This is a consequence of the fact that ferroelastic deformations induced by an electron in the electron-poor region have a very short range that is associated with the large gap to ferroelastic modes. Finally, when the first electron is in the middle of the domain wall, it induces long-range ferroelastic deformations (that propagate along the domain wall) and thus we observe strong electron-electron attraction. Real space Cooper-pair model ============================ In order to demonstrate that electrons can form pairs in the vicinity of a ferroelastic domain wall, we construct a Cooper-pair-like model in real (as opposed to momentum) space. Specifically, we concern ourselves with the motion of two electrons with opposite spin, which is described by the Hubbard model $$\begin{aligned} H_{\text{el}}=-t\sum_{\langle ij \rangle, \sigma} c^\dagger_{i,\sigma} c_{j,\sigma} + U \sum_{i} n_{i,\uparrow} n_{i,\downarrow} + \sum_{i,j} E[i,j] n_{i,\uparrow} n_{j,\downarrow} \label{eq:el}\end{aligned}$$ with hopping amplitude $t$, on-site repulsive interaction $U$, and long-range electron-electron interaction mediated by ferroelasticity $E[i,j]$ from Eq.  (here we include both the one- and two-electron terms). To understand the contribution of ferroelastic domain walls to the strength of electron-electron pairing, we consider two cases: (1) a pair of electrons in the electron-poor bulk; (2) a pair of electrons in the vicinity of a ferroelastic domain wall.
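A minimal exact-diagonalization sketch of this two-opposite-spin-electron problem on a short open chain is given below; the mediated interaction $E[i,j]$ is replaced by an illustrative exponential attraction, and all parameter values are assumptions made for the sketch:

```python
import numpy as np

L, t, U = 20, 0.25, 0.5

def E_int(i, j):
    # illustrative stand-in for the ferroelasticity-mediated attraction E[i, j]
    return -0.5 * np.exp(-abs(i - j) / 2.0)

# single-electron Hamiltonian on an open chain
H1 = np.zeros((L, L))
for i in range(L - 1):
    H1[i, i + 1] = H1[i + 1, i] = -t
e1 = np.linalg.eigvalsh(H1)[0]

# two electrons of opposite spin: product basis |i>_up (x) |j>_down
I = np.eye(L)
H2 = np.kron(H1, I) + np.kron(I, H1)
for i in range(L):
    for j in range(L):
        H2[i * L + j, i * L + j] += E_int(i, j) + (U if i == j else 0.0)
e2 = np.linalg.eigvalsh(H2)[0]

binding = e2 - 2.0 * e1   # negative => bound real-space pair
```

With the attraction switched off, `binding` is non-negative (the on-site $U$ can only raise the energy); restricting the product basis to sites with $i,j \ge x_0$ would mimic the barrier construction used for Fig. \[fig:pairBindingEnergy\].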
To be able to smoothly interpolate between these cases, in analogy to the Cooper problem, we supplement the electron Hamiltonian of Eq.  with the condition that the two selected electrons must remain on the right side of a barrier located at $x=x_0$. In Fig. \[fig:pairBindingEnergy\] we plot the electron pair binding energy as a function of $x_0$ for various values of the on-site repulsion $U$ (with $t=0.25 h$). As $x_0$ increases the electron pair is pushed into the middle of the domain wall, and we observe strong pairing. As we keep increasing $x_0$ the electrons are pushed out of the domain wall and into the electron-poor region. Consequently, the pair binding energy decreases as we would expect from Fig. \[fig:energy\]. As we increase the on-site repulsive interaction $U$, we observe that the pair binding energy decreases. The pair binding energy decreases proportionately more in the electron-poor region as compared to the domain wall region, as the electron-electron interactions have a shorter range in the electron-poor region than at the domain wall. To summarize, we observe that ferroelastic domain walls enhance the electron pair binding energy and make electron pairing more robust to local repulsive interactions. We believe that this enhancement helps to mediate superconductivity at the LAO/STO interface. Summary and outlook =================== In summary, we have presented a scenario ascribing the mechanism of superconductivity in LAO/STO heterostructures to ferroelastic domain walls that form at the interface of electron-rich and electron-poor regions. Specifically, we built a model that encompasses this scenario. First, we modeled ferroelastic waves and showed that ferroelastic domain walls support low-energy modes analogous to capillary waves at the interface of two fluids.
Second, we coupled our model of ferroelasticity to the electron density and showed that the electron-electron attraction is indeed strongest, and also has the longest range, in the vicinity of domain walls. Finally, we computed the electron-electron binding energy for a selected pair of electrons, in analogy to the Cooper-pair problem, and showed that ferroelastic domain walls indeed enhance electron binding. The scenario that we present is consistent with available data on LAO/STO heterointerfaces. It naturally provides an explanation for the one-dimensional nature of superconductivity reported in Ref. [@Pai2018]. It may also help to explain the origin of the superconducting dome that is observed as a function of the carrier density [@Lin2013]: as we increase the electron density, pairing first becomes stronger as the gap to ferroelastic modes decreases; however, as the electron density becomes higher, the ferroelasticity becomes fully saturated in the $z$-direction, resulting in the closing of the bulk superconducting gap as superconductivity is pushed to the edges of the electron puddle. We note that the presented scenario predicts that superconductivity should be strongly affected by strain fields, which could move existing domain walls or introduce new ones. We thank Anthony Tylan-Tyler for useful discussions and for performing an initial analysis of the related Ising-model domain walls. DP and JL acknowledge support from NSF grant PHY-1913034. DP acknowledges support from the Charles E. Kaufman Foundation under grant KA2014-73919. CSH acknowledges support from the Office of the Secretary of Defense through the LUCI program and a grant of computer time from the DoD High Performance Computing Modernization Program. JL acknowledges support from the Vannevar Bush Faculty Fellowship program sponsored by the Basic Research Office of the Assistant Secretary of Defense for Research and Engineering and funded by the Office of Naval Research through grant N00014-15-1-2847.
--- abstract: 'Two-photon ionization of an alkali-metal atom in the presence of a uniform electric field is investigated using a standardized form of the local frame transformation and generalized quantum defect theory. The relevant long-range quantum defect parameters in the combined Coulombic plus Stark potential are calculated with eigenchannel R-matrix theory applied in the downstream parabolic coordinate $\eta$. The present formulation permits us to express the corresponding microscopy observables in terms of the local frame transformation, and it provides a critical test of the accuracy of the Harmin-Fano theory, permitting a systematic investigation of the claims presented in Zhao [*et al.*]{} \[Phys. Rev. A 86, 053413 (2012)\].' author: - 'P. Giannakeas' - 'F. Robicheaux' - 'Chris H. Greene' bibliography: - 'qdtbiblio.bib' title: Photoionization microscopy in terms of local frame transformation theory --- Introduction ============ The photoabsorption spectrum of an alkali-metal atom in the presence of a uniform electric field constitutes a fundamental testbed for atomic physics. Over the past few decades, study of this class of systems has provided key insights into their structure and chemical properties, as well as the nonperturbative effect of an applied external field. The response of the lower energy eigenstates of any alkali-metal atom to a laboratory-strength electric field is perturbative and can be described in terms of the static atomic polarizability. For states high in the Rydberg series or in the ionization continuum, however, even a modest field strength nonperturbatively modifies the nature of the energy eigenstates. In fact, this problem touches on fundamental issues concerning the description of nonseparable quantum mechanical systems.
The Stark effect of alkali-metal atoms is one of the simpler prototypes of such systems, because the short-distance electron motion is nearly separable in spherical coordinates while the intermediate- and long-distance motion is almost exactly separable in parabolic coordinates. The evolution of a quantum electron wave function from small to large distances thus involves a transformation, termed a [*local*]{} frame transformation (LFT) because it is derived in a localized region of space. (The extent of this region is typically limited to within 10-20 a.u. between the electron and the nucleus.) When one encounters a problem of nonrelativistic quantum mechanics where the Schrödinger equation is nonseparable, one usually anticipates that the system will require a complicated numerical treatment. This is the first and most common approach even if the nonseparability is limited to only two coordinates as is the case with the nonhydrogenic Stark effect since the azimuthal angle $\phi$ is separable for this problem (aside from the comparatively weak spin-orbit coupling). Thus it was a major breakthrough when papers by Fano [@fano1981stark] and Harmin [@harminpra1982; @harminprl1982; @harmin1981hydrogenic] showed in the early 1980s how the problem can be solved analytically and almost completely using ideas based on the frame transformation theory and quantum defect theory. Since that body of work introduced the LFT method, it has been generalized to other systems that are similar in having an intermediate region of space where the wave equation is separable in both the small- and large-distance coordinate systems. 
Example applications include diverse systems such as negative ion photodetachment in either an external magnetic [@Greene1987pra] or electric field [@WongRauGreene1988pra; @RauWong1988pra; @GreeneRouze1988ZPhys; @SlonimGreene1991], and confinement-induced resonances in ultracold atom-atom scattering [@GrangerBlume2004prl; @Giannakeas2012pra; @hess2014pra; @Zhang2013pra] or dipole-dipole collisions [@giannakeas2013prl]. The LFT theory has been demonstrated by now to have great effectiveness in reproducing experimental spectra and collision properties as well as accurate theoretical results derived using other methods including “brute force” computations [@stevens1996precision]. The deviations between highly accurate R-matrix calculations and the LFT method were found in Ref. [@stevens1996precision] to be around 0.1% for resonance positions in the $^7\rm{Li}$ Stark effect. The LFT is evolving as a general tool that can solve this class of nonseparable quantum mechanical problems, but it must be kept in mind that it is an approximate theory. It is therefore desirable to quantify the approximations made, in order to understand its regimes of applicability and where it is likely to fail. The goal of the present study is to provide a critical assessment of the accuracy of the LFT, concentrating in particular on observables related to photoionization microscopy. The experiments in this field [@cohen13; @itatani2004tomographic; @bordas03; @Nicole02] have focused on the theoretical proposal that the probability distribution of an ejected slow continuum electron can be measured on a position-sensitive detector at a large distance from the nucleus [@ost1; @ost2; @ost3; @ost4]. 
While the Harmin-Fano LFT theory has been shown in the 1980s and 1990s to describe the total photoabsorption Stark spectra in one-electron [@harminpra1982; @harminprl1982; @stevens1996precision] and two-electron [@Armstrong1993prl; @Armstrong1994pra; @Robicheaux1999pra] Rydberg states, examination of a differential observable such as the photodetachment [@Blondel1996prl] or photoionization [@Texier2005pra] microscopy probability distribution should in principle yield a sharper test of the LFT. Indeed, a recent study by Zhao, Fabrikant, Du, and Bordas [@zhao12] identifies noticeable discrepancies between Harmin’s LFT Stark effect theory and presumably more accurate coupled-channel calculations. Particularly in view of the extended applications of LFT theory to diverse physical contexts, such as the confinement-induced resonance systems noted above, a deeper understanding of the strengths and limitations of the LFT is desirable. In this paper we employ R-matrix theory in a fully quantal implementation of the Harmin local frame transformation, instead of relying on semiclassical wave mechanics as he did in Refs.[@harminpra1982; @harminprl1982; @harmin1981hydrogenic]. This allows us to disentangle errors associated with the WKB approximation from those deriving from the LFT approximation itself. For the most part this causes only small differences from the original WKB treatment consistent with Ref. [@stevens1996precision], but it is occasionally significant, for instance for the resonant states located very close to the top of the Stark barrier. Another goal of this study is to standardize the local frame transformation theory to fully specify the asymptotic form of the wave function which is needed to describe other observables such as the spatial distribution function (differential cross section) that is measured in photoionization microscopy. 
We also revisit the interconnection of the irregular solutions from spherical to parabolic coordinates through the matching of the spherical and parabolic Green’s functions in the small distance range where the electric field is far weaker than the Coulomb interaction. This allows us to re-examine the way the irregular solutions are specified in the Fano-Harmin LFT, which is at the heart of the LFT method but one of the main focal points of criticism leveled by Zhao [*et al.*]{} [@zhao12]. Because Zhao [*et al.*]{} [@zhao12] raise serious criticisms of the LFT theory, it is important to further test their claims of error and their interpretation of the sources of error. Their contentions can be summarized as follows: [*(i)*]{} The Harmin-Fano LFT quite accurately describes the total photoionization cross section, but it has significant errors in its prediction of the differential cross section that would be measured in a photoionization microscopy experiment. This is deduced by comparing the results from the approximate LFT with a numerical calculation that those authors regard as essentially exact. [*(ii)*]{} The errors are greatest when the atomic quantum defects are large, and almost negligible for an atom like hydrogen which has vanishing quantum defects. They then present evidence that they have identified the source of those errors in the LFT theory, namely the procedure first identified by Fano that predicts how the irregular spherical solution evolves at large distances into parabolic coordinate solutions. Their calculations are claimed to suggest that the local frame transformation of the solution regular at the origin from spherical to parabolic coordinates is correctly described by the LFT, but the irregular solution transformation is incorrect. 
One of our major conclusions from our exploration of the problems claimed in Ref. [@zhao12] with the Harmin-Fano LFT is that both claims are erroneous; their incorrect conclusions apparently resulted from insufficient attention to detail in their numerical calculations. Specifically, our calculations for the photoionization microscopy of Na atoms ionized via a two-photon process in $\pi$ polarized laser fields do not exhibit the large and qualitative inaccuracies which were mentioned in Ref. [@zhao12]; for the same cases studied by Zhao [*et al.*]{}, we obtain excellent agreement between the approximate LFT theory and our virtually exact numerical calculations. Nevertheless, some minor discrepancies are noted which may indicate minor inaccuracies of the local frame transformation theory. This paper is organized as follows: Section II focuses on the local frame transformation theory of the Stark effect and presents a general discussion of the physical content of the theory, including a description of the relevant mappings of the regular and irregular solutions of the Coulomb and Stark-Coulomb Schrödinger equation. Section III reformulates the local frame transformation theory properly, including a description of the asymptotic electron wave function. In addition, this section defines all of the relevant scattering observables. Section IV discusses a numerical implementation based on a two-surface implementation of the eigenchannel R-matrix theory. This toolkit permits us to perform accurate quantal calculations in terms of the local frame transformation theory, without relying on the semiclassical wave mechanics adopted in Harmin’s implementation. Section V is devoted to a discussion of our recent findings in comparison with the conclusions of Ref. [@zhao12]. Finally, Section VI summarizes and concludes our analysis.
Local frame transformation theory of the Stark effect ===================================================== This section reviews the local frame transformation theory (LFT) for the non-hydrogenic Stark effect, utilizing the same nomenclature introduced by Harmin [@harminpra1982; @harminprl1982; @harmin1981hydrogenic]. The crucial parts of the corresponding theory are highlighted while developing its standardized formulation. General considerations ---------------------- In the case of alkali-metal atoms at small length scales the impact of the alkali-metal ion core on the motion of the valence electron outside the core can be described effectively by a phase-shifted radial wave function: $$\Psi_{\epsilon \ell m}(\mathbf{r})=\frac{1}{r}Y_{\ell m}(\theta, \phi)\big[ f_{\epsilon \ell}(r) \cos \delta_\ell-g_{\epsilon \ell}(r) \sin \delta_\ell \big],~~r>r_0, \label{spherwave}$$ where the $Y_{\ell m}(\theta, \phi)$ are the spherical harmonic functions of orbital angular momentum $\ell$ and projection $m$. $r_0$ indicates the effective radius of the core, and $\delta_\ell$ denotes the phase that the electron acquires due to the alkali-metal ion core. These phases are associated with the quantum defect parameters, $\mu_\ell$, according to the relation $\delta_\ell=\pi \mu_\ell$. The pair of wave functions $\{f,g\}$ denote the regular and irregular Coulomb functions, respectively, whose Wronskian is $W[f,g]=2/\pi$. We remark that this effective radius $r_0$ is placed close to the origin where the Coulomb field prevails over the external electric field. Therefore, the effect on the phases $\delta_\ell$ from the external field can be neglected. Note that atomic units are employed throughout, unless explicitly stated otherwise. At distances $r\gg r_0$ the outermost electron of the non-hydrogenic atom is in the presence of a homogeneous static electric field oriented in the $z$-direction.
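As a quick consistency check on the Wronskian convention $W[f,g]=2/\pi$, the sketch below verifies it symbolically for the zero-field, zero-charge (free-particle) analogs of the energy-normalized pair; using these analogs in place of the actual Coulomb functions is a simplifying assumption made only for this check:

```python
import sympy as sp

r, k = sp.symbols('r k', positive=True)
A = sp.sqrt(2 / (sp.pi * k))      # energy-normalization amplitude
f = A * sp.sin(k * r)             # regular solution (free-particle analog)
g = -A * sp.cos(k * r)            # irregular partner, lagging f by pi/2

# Wronskian W[f, g] = f g' - g f'
W = sp.simplify(f * sp.diff(g, r) - g * sp.diff(f, r))
# W simplifies to 2/pi, independent of r and k
```

The $r$- and $k$-independence of $W$ is precisely what the energy normalization guarantees, and the same convention carries over to the Coulomb pair $\{f_{\epsilon\ell}, g_{\epsilon\ell}\}$.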
The separability of the center-of-mass and relative degrees of freedom permits us to describe all the relevant physics by the following Schrödinger equation in the relative frame of reference: $$\bigg(-\frac{1}{2}\nabla^2-\frac{1}{r}+ F z-\epsilon \bigg)\psi(\mathbf{r})=0, \label{eq1}$$ where $F$ indicates the strength of the electric field, $r$ corresponds to the interparticle distance and $\epsilon$ is the total collision energy. Note that Eq. (\[eq1\]) is invariant under rotations around the polarization axis, i.e. the azimuthal quantum number $m$ is a good quantum number. In contrast, the total orbital angular momentum is not conserved, which shows up as a coupling among different $\ell$ states. The latter challenge, however, can be circumvented by employing a coordinate transformation which results in a fully separable Schrödinger equation. Hence, in parabolic coordinates $\xi=r+z$, $\eta=r-z$ and $\phi=\tan^{-1}(y/x)$, Eq. (\[eq1\]) reads: $$\frac{d^2}{d\xi^2}\Xi_{\beta m}^{ \epsilon F}(\xi)+\bigg(\frac{\epsilon}{2}+\frac{1-m^2}{4\xi^2}+\frac{\beta}{\xi}-\frac{F}{4}\xi\bigg)\Xi_{\beta m}^{ \epsilon F}(\xi)=0, \label{eq2}$$ $$\frac{d^2}{d\eta^2}\Upsilon_{\beta m}^{ \epsilon F}(\eta)+\bigg(\frac{\epsilon}{2}+\frac{1-m^2}{4\eta^2}+\frac{1-\beta}{\eta}+\frac{F}{4} \eta\bigg)\Upsilon_{\beta m}^{ \epsilon F} (\eta)=0, \label{eq3}$$ where $\beta$ is the [*effective charge*]{} and $\epsilon$, $F$ are the energy and the field strength in atomic units. We remark that Eq. (\[eq2\]) in the $\xi$ degree of freedom describes the bounded motion of the electron, since as $\xi \to \infty$ the term with the electric field steadily increases. This means that the $\Xi$ wave function vanishes as $\xi \to \infty$ for every energy $\epsilon$ at particular values of the effective charge $\beta$. Thus, Eq. 
(\[eq2\]) can be regarded as a generalized eigenvalue equation where for each quantized $\beta \equiv \beta_{n_1}$ the $\Xi_{\beta m}^{\epsilon F} \equiv \Xi_{n_1 m}^{\epsilon F}$ wave function possesses $n_1$ nodes. In this case the wave functions $\Xi_{n_1 m}^{\epsilon F}(\xi)$ possess the following properties: - Near the origin $\Xi_{n_1 m}^{\epsilon F}$ behaves as: $\Xi_{n_1 m}^{\epsilon F}(\xi\to 0)\sim N^F_{\xi} \xi^\frac{m+1}{2}[1+O(\xi)]$, where $N^F_\xi$ is an energy- and field-dependent amplitude and must be determined numerically in general. - The wave function $\Xi_{n_1 m}^{\epsilon F}$ obeys the following normalization condition: $\int_0^\infty \frac{[\Xi_{n_1 m}^{\epsilon F}(\xi)]^2}{\xi}\rm{d} \xi=1.$ On the other hand, Eq. (\[eq3\]) describes solely the motion of the electron in the $\eta$ degree of freedom, which is unbounded. As $\eta \to \infty$ the term with the electric field steadily decreases, which in combination with the Coulomb potential forms a barrier that often has a local maximum. Hence, for specific values of energy, field strength and effective charge the corresponding wave function $\Upsilon_{\beta m}^{\epsilon F} \equiv \Upsilon_{n_1 m}^{\epsilon F}$ propagates either above or below the barrier local maximum, where the states $n_1$ define asymptotic channels for the scattering wave function in the $\eta$ degree of freedom. Note that for $\beta_{n_1}>1$, the Coulomb term in Eq. (\[eq3\]) becomes repulsive and therefore no barrier formation occurs. Since Eq. (\[eq3\]) is associated with the unbounded motion of the electron it possesses two solutions, namely the regular ones $\Upsilon_{n_1 m}^{\epsilon F}(\eta)$ and the irregular ones $\bar{\Upsilon}_{n_1 m}^{\epsilon F}(\eta)$. This set of solutions has the following properties: - Close to the origin and before the barrier the irregular solutions $\bar{\Upsilon}_{n_1 m}^{\epsilon F}(\eta)$ lag the regular ones, $\Upsilon_{n_1 m}^{\epsilon F}(\eta)$, by $\pi/2$. 
Note that their normalization follows Harmin’s definition [@harminpra1982] and is clarified below. - Near the origin the regular solutions vanish according to the relation: $\Upsilon_{n_1 m}^{\epsilon F}(\eta \to 0)\sim N^F_{\eta} \eta^\frac{m+1}{2}[1+O(\eta)]$, where $N^F_\eta$ is an energy- and field-dependent amplitude and must be determined numerically in general. Let us now specify the behavior of the pair solutions $\{\Upsilon_{n_1 m}^{\epsilon F},\bar{\Upsilon}_{n_1 m}^{\epsilon F}\}$ at distances after the barrier. Indeed, the regular and irregular functions can be written in the following WKB form: $$\begin{aligned} &~&\Upsilon_{n_1 m}^{\epsilon F}(\eta \gg \eta_0) \to \sqrt{\frac{2}{\pi k(\eta)}} \sin \bigg[\int^{\eta}_{\eta_0}k(\eta')d\eta'+\frac{\pi}{4}+\delta_{n_1}\bigg] \label{regU}\end{aligned}$$ $$\begin{aligned} &~&\bar{\Upsilon}_{n_1 m}^{\epsilon F}(\eta \gg \eta_0) \to \sqrt{\frac{2}{\pi k(\eta)}} \sin \bigg[\int^{\eta}_{\eta_0}k(\eta')d\eta'+\frac{\pi}{4}+\delta_{n_1}- \gamma_{n_1}\bigg], \label{irregU}\end{aligned}$$ where $k(\eta)=\sqrt{-m^2/\eta^2+(1-\beta_{n_1})/\eta+\epsilon/2+F\eta/4}$ is the local momentum term with the Langer correction being included, $\eta_0$ is the position of the outermost classical turning point and the phase $\delta_{n_1}$ is the absolute phase induced by the combined Coulomb and electric fields. The phase $\gamma_{n_1}$ corresponds to the relative phase between the regular and irregular functions, namely $\{\Upsilon, \bar{\Upsilon}\}$. We recall that at short distances their relative phase is exactly $\pi/2$, though as they probe the barrier at larger distances their relative phase is altered and hence after the barrier the [*short range*]{} regular and irregular functions differ by $0<\gamma_{n_1}<\pi$ and not just $\pi/2$. We should remark that after the barrier the amplitudes of the pair $\{\Upsilon, \bar{\Upsilon}\}$ are equal to each other and their relative phase in general differs from $\pi/2$. 
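The WKB quantities entering Eqs. (\[regU\]) and (\[irregU\]) are straightforward to evaluate numerically. The sketch below locates the outermost classical turning point $\eta_0$ and accumulates the phase integral for illustrative parameter values chosen to match the scales of Fig. \[fig1\]; the choice $\beta_{n_1}=0.5$ is an assumption, and for these values the energy lies above the barrier so there is a single turning point:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# illustrative parameters converted to atomic units
eps = 135.8231 / 219474.63    # energy: cm^-1 -> hartree
F = 640.0 / 5.142206e9        # field:  V/cm  -> a.u.
beta, m = 0.5, 1              # assumed effective charge and azimuthal quantum number

def k2(eta):
    # square of the local momentum k(eta), Langer-corrected (m^2 in place of m^2 - 1)
    return -m**2 / eta**2 + (1.0 - beta) / eta + eps / 2.0 + F * eta / 4.0

eta0 = brentq(k2, 1e-6, 50.0)   # outermost classical turning point
phase, _ = quad(lambda e: np.sqrt(k2(e)), eta0, 2000.0, limit=200)
# phase + pi/4 + delta_n1 gives the WKB phase of the regular solution at eta = 2000
```

For energies below the barrier top the integrand would turn imaginary in the classically forbidden region and the connection formulas across the barrier (which generate the relative phase $\gamma_{n_1}$) would be needed.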
On the other hand, at shorter distances before the barrier the amplitudes of the pair $\{\Upsilon, \bar{\Upsilon}\}$ are in general not equal to each other and their relative phase is exactly $\pi/2$. This ensures that the Wronskian of the corresponding solutions possesses the same value at all distances and provides us with insight into the interconnection between amplitudes and relative phases. The key concept of Harmin’s theoretical framework is to associate the relevant phases at short distances in the absence of an external field, i.e. $\delta_\ell$ (see Eq. (\[spherwave\])), with the scattering phases at large distances where the electric field contributions cannot be neglected. This can be achieved by mapping the corresponding regular and irregular solutions from spherical to parabolic-cylindrical coordinates as we discuss in the following. Mapping of the regular functions from spherical to parabolic-cylindrical coordinates ------------------------------------------------------------------------------------ The most intuitive aspect embedded in the present problem is that the Hamiltonian for the motion of the electron right outside the core possesses spherical symmetry, which in turn at greater distances becomes parabolic-cylindrical symmetry due to the field. Therefore, a proper coordinate transformation of the corresponding [*energy normalized*]{} wave functions from spherical to parabolic-cylindrical coordinates will permit us to [*propagate*]{} to asymptotic distances the relevant scattering or photoionization events initiated near the core.
Indeed at distances $r\ll F^{-1/2}$ the regular functions in spherical coordinates are related to the parabolic cylindrical ones according to the following relation: $$\begin{aligned} \psi_{n_1 m}^{\epsilon F}(\mathbf{r})&=&\frac{e^{i m \phi}}{\sqrt{2 \pi}}\frac{\Xi_{n_1 m}^{\epsilon F}(\xi)}{\sqrt{\xi}}\frac{\Upsilon_{n_1 m}^{\epsilon F}(\eta)}{\sqrt{\eta}}\nonumber \\ &=&\sum_\ell U_{n_1 \ell}^{\epsilon Fm} \frac{f_{\epsilon \ell m}(\mathbf{r})}{r}, ~~\rm{for}~~ r \ll F^{-1/2}, \label{eq4}\end{aligned}$$ where $f_{\epsilon \ell m}(\mathbf{r})$ are the regular solutions in spherical coordinates with $\ell$ being the orbital angular momentum quantum number. The small distance behavior is $f_{\epsilon \ell m}(\mathbf{r})\approx N_{\epsilon \ell}Y_{\ell m}(\theta,\phi)r^{\ell+1}[1+O(r)]$ with $N_{\epsilon \ell}$ a normalization constant (see Eq. (13) in Ref. [@harminpra1982]). Therefore, from the behavior at small distances of the parabolic-cylindrical and spherical solutions the frame transformation $U_{n_1 \ell}^{\epsilon Fm}$ has the following form: $$U_{n_1 \ell}^{\epsilon F m}=\frac{N^F_\xi N^F_\eta}{N_{\epsilon \ell}}\frac{(-1)^m\sqrt{4 \ell+2}m!^2}{(2 \ell+1)!! \sqrt{(\ell+m)!(\ell-m)!}}\sum_{k}^{\ell-m} (-1)^k \binom{\ell-m}{k}\binom{\ell+m}{\ell-k} \frac{\nu^{m-\ell}\Gamma(n_1+1)\Gamma(\nu-n_1-m)}{\Gamma(n_1+1-k)\Gamma(\nu-n_1+k-\ell)}, \label{eq5}$$ where $n_1=\beta_{n_1} \nu -1/2-m/2$ and $\nu=1/\sqrt{-2 \epsilon}$. ![(color online). The matrix elements of the local frame transformation $U_{n_1 \ell}^{\epsilon Fm}$ versus the number of states $n_1$ for $m=1$ where the angular momentum acquires the values $\ell=1,2,3~\rm{and}~6$. The electric field strength is $F= 640$ V/cm and total collisional energy is $\epsilon=135.8231$ cm$^{-1}$. The vertical dashed lines indicate the sign and the interval of values of the $\beta_{n_1}$.[]{data-label="fig1"}](fig1.eps) Fig. \[fig1\] plots the elements of the local frame transformation $U$ in Eq. 
(\[eq5\]) as functions of the number of states $n_1$, where again the integers $n_1$ label the eigenvalues $\beta_{n_{1}}$. The local frame transformation $U$ is plotted for four different angular momenta, namely $\ell=1,2,3~\rm{and}~6$ where we set $m=1$ at energy $\epsilon=135.8231$ cm$^{-1}$ and field $F= 640$ V/cm. One sees that the local frame transformation $U$ becomes significant in the interval $n_1 \in (38,79)$ which essentially corresponds to $\beta_{n_1} \in(0,1)$. For $\beta_{n_1}<0$ or $\beta_{n_1}>1$ the local frame transformation vanishes rapidly. This behavior mainly arises from the normalization amplitudes $N_\xi^F$ and $N_\eta^F$, which obey the following relations: $$N_\xi^F \sim \frac{\beta_{n_1}}{1-e^{-2 \pi \beta_{n_1}/k}}~~\rm{and}~~ N_\eta^F \sim \frac{(1-\beta_{n_1})}{1-e^{-2 \pi(1- \beta_{n_1})/k}}. \label{eq6}$$ Note that these expressions are approximately valid only for positive energies and they are exact for $F=0$. From the expressions in Eq. (\[eq6\]) it becomes evident that for negative eigenvalues $\beta_{n_1}$ the amplitude $N_\xi^F$ vanishes exponentially while $N_\eta^F$ remains practically finite. Similarly, for the case of $\beta_{n_1}>1$ the amplitude $N_\eta^F$ vanishes exponentially, and these result in the behavior depicted in Fig.\[fig1\]. Another aspect of the local frame transformation $U$ is its nodal pattern shown in Fig.\[fig1\]. For increasing $\ell$ the corresponding number of nodes increases as well. For $m=1$, every $U_{n_1 \ell}^{\epsilon Fm}$ possesses $\ell-1$ nodes. Mapping of the irregular functions from spherical to parabolic-cylindrical coordinates -------------------------------------------------------------------------------------- Having established the mapping between the regular solutions of the wave function in spherical and parabolic-cylindrical coordinates, the following focuses on the relation between the irregular ones. 
The irregular solution in the parabolic-cylindrical coordinates has the following form: $$\chi_{n_1 m}^{\epsilon F}(\mathbf{r})=\frac{e^{i m \phi}}{\sqrt{2 \pi}}\frac{\Xi_{n_1 m}^{\epsilon F}(\xi)}{\sqrt{\xi}}\frac{\bar{\Upsilon}_{n_1 m}^{\epsilon F}(\eta)}{\sqrt{\eta}}. \label{eq7}$$ In order to relate Eq. (\[eq7\]) to the irregular functions in spherical coordinates we employ Green's functions, as was initially suggested in [@fano1981stark]. More specifically, the principal value Green's function for the pure Coulomb Hamiltonian $G_P^{(C)}(\mathbf{r,r'})$ is matched with a Green's function of the Coulomb plus Stark Hamiltonian $G^{(C+F)}(\mathbf{r,r'})$, which is expressed in parabolic-cylindrical coordinates. Of course, in general the two Green's functions differ from each other since they correspond to different Schrödinger equations. However, at small distances the field term in the Stark Hamiltonian becomes negligible in comparison with the Coulomb term. Therefore, in this restricted region of the configuration space, i.e. $r \ll F^{-1/2}$, the Stark Hamiltonian is virtually identical to the Coulomb Hamiltonian, whereby the corresponding Green's functions are equivalent to an excellent approximation. We refer to this region as the Coulomb zone. For positive energies, recall that the principal value Green's function is uniquely defined in the infinite configuration space, and it consists of a sum of products of regular and irregular functions. The employed regular and irregular functions are defined such that their relative phase is exactly $\pi/2$ asymptotically [@Rodberg1970; @Economou200606].
Therefore, according to the above-mentioned arguments the principal value Green's function obeys the following relation in spherical coordinates: $$G^{(C)}_P(\mathbf{r,r'})= \frac{\pi}{r r'} \sum_{\ell, m} f_{\epsilon \ell m}(\mathbf{r})g_{\epsilon \ell m}(\mathbf{r'}),~~r<r' \label{eq8}$$ where the $\{f,g\}$ solutions correspond to the regular and irregular functions as they are defined in Eq. (\[spherwave\]). Note that the principal value Green's functions of the Coulomb Hamiltonian in spherical and in parabolic-cylindrical coordinates are equal to each other, namely $G_P^{(C),\rm{sc}}\equiv G_P^{(C),\rm{pcc}}$ (the abbreviations sc and pcc stand for spherical and parabolic-cylindrical coordinates, respectively). On the other hand, for negative energies, analytic continuation of the $\{f,g\}$ Coulombic functions across the threshold yields the relation $\mathcal{G}^{(C),\rm{sc}}\equiv \mathcal{G}^{(C),\rm{pcc}}$. The $\mathcal{G}^{(C)}$ is the so-called [*smooth*]{} Green's function, which is related to a Green's function bounded at $r=0$ and at infinity according to the expression [@greene1979general]: $$G^{(C)}(\mathbf{r,r'})=\mathcal{G}^{(C)}(\mathbf{r,r'})+\frac{\pi}{r r'} \sum_\ell f_{\epsilon \ell m}(\mathbf{r}) \cot \beta(\epsilon) f_{\epsilon \ell m}(\mathbf{r'}), \label{smoothg}$$ where $\beta(\epsilon)=\pi (\nu-\ell)$ with $\nu=1/\sqrt{-2\epsilon}$ is the phase accumulated from $r=0$ up to $r\to \infty$. Assume that $\epsilon_n$ (i.e. $\nu \to n \in \mathbb{N}^*$) are the eigenenergies specified by imposing the boundary condition at infinity, where $n$ denotes a counting index of the corresponding bound states. Then, on the right-hand side of Eq. (\[smoothg\]), the second term diverges at energies $\epsilon=\epsilon_n$ while the first term is free of poles. The smooth Green's function is identified as the one where the two linearly independent solutions have their relative phase equal to $\pi/2$ [*at small distances*]{}. Furthermore, the singularities in Eq.
(\[smoothg\]) originate from imposing the boundary condition at infinity, though in the spirit of multichannel quantum defect theory we can drop this consideration and solely employ the $\mathcal{G}^{(C)}$ which in spherical coordinates reads $$\mathcal{G}^{(C)}(\mathbf{r,r'})= \frac{\pi}{r r'} \sum_{\ell, m} f_{\epsilon \ell m}(\mathbf{r})g_{\epsilon \ell m}(\mathbf{r'}),~~r<r'~~\rm{for}~\epsilon<0. \label{smoothspher}$$ In view of the now established equality between the principal value (smooth) Green's functions at positive (negative) energies in spherical and parabolic-cylindrical coordinates for the pure Coulomb Hamiltonian, the discussion can proceed to the Stark Hamiltonian. Hence, as mentioned above, in the Coulomb zone, i.e. $r \ll F^{-1/2}$, the Stark Hamiltonian is approximately equal to the pure Coulomb one. This implies the existence of a Green's function, $G^{(C+F)}$, for the Stark Hamiltonian which is equal to the $G_P^{(C),\rm{pcc}}$ ($\mathcal{G}^{(C),\rm{pcc}}$), and which in turn is equal to Eq. (\[eq8\]) \[Eq. (\[smoothspher\])\] at positive (negative) energies. More specifically, the Green's function $G^{(C+F)}$ expressed in parabolic-cylindrical coordinates is given by the expression: $$G^{(C+F)}(\mathbf{r,r'})= 2 \sum_{n_1, m} \frac{\psi^{\epsilon F}_{n_1 m}(\mathbf{r})\chi^{\epsilon F}_{n_1 m}(\mathbf{r'})}{W(\Upsilon_{n_1 m}^{\epsilon F},\bar{\Upsilon}_{n_1 m}^{\epsilon F})},~\rm{for}~\eta< \eta' \ll F^{-1/2}, \label{eq9}$$ where the functions $\{\psi, \chi\}$ are the regular and irregular solutions of the Stark Hamiltonian, which at small distances (in the classically allowed region) have a relative phase of $\pi/2$. This originates from the $\pi/2$ relative phase of the $\{\Upsilon, \bar{\Upsilon}\}$, as was mentioned in subsection A.
The Wronskian $W[\Upsilon_{n_1 m}^{\epsilon F},\bar{\Upsilon}_{n_1 m}^{\epsilon F}]=(2/\pi) \sin \gamma_{n_1}$ ensures that the $\{\psi, \chi\}$ solutions have the same energy normalization as the $\{f, g\}$ Coulomb functions. We should point out that Eq. (\[eq9\]) is not the principal value Green's function of the Stark Hamiltonian. Indeed, it can be shown that the principal value Green's function of the Stark Hamiltonian, namely $G^{(C+F)}_P$, and the Green's function $G^{(C+F)}$ obey the following relation: $$\begin{aligned} G^{(C+F)}(\mathbf{r,r'})&=&G^{(C+F)}_P(\mathbf{r,r'})\nonumber \\ &+&\sum_{n_1}\cot \gamma_{n_1} \psi^{\epsilon F}_{n_1 m}(\mathbf{r})\psi^{\epsilon F}_{n_1 m}(\mathbf{r'}), \label{greenpv}\end{aligned}$$ where we observe that the second term vanishes either for positive energies or for $n_1$ channels which lie above the saddle point of the Stark barrier. This occurs because $\gamma_{n_1}\approx \pi/2$, since the barrier does not alter the relative phases between the regular and irregular solutions. For the cases where barrier effects are absent, $G^{(C+F)}$ is the principal value Green's function of the Stark Hamiltonian, as was pointed out by Fano [@fano1981stark]. However, in the case of nonhydrogenic atoms in the presence of external fields the barrier effects are significant, especially at negative energies. Therefore, the use of solely the principal value Green's function $G^{(C+F)}_P$ would not allow a straightforward implementation of scattering boundary conditions. This is why the second term in Eq. (\[greenpv\]) has been included. From the equality between Eqs. (\[eq8\]) \[or (\[smoothspher\])\] and (\[eq9\]), together with the additional use of Eq. (\[eq4\]), the mapping of the irregular solutions is given by the following expression: $$\frac{g_{\epsilon \ell m}(\mathbf{r})}{r}=\sum_{n_1} \chi_{n_1 m}^{\epsilon F}(\mathbf{r})\csc(\gamma_{n_1})(\underline{U})^{\epsilon F m}_{n_1 \ell }~~\rm{for}~~ r \ll F^{-1/2}.
\label{eq10}$$ Additionally, Eq. (\[eq4\]) can conventionally be rewritten as $$\frac{f_{\epsilon \ell m}(\mathbf{r})}{r} =\sum_{n_1} \psi_{n_1 m}^{\epsilon F}(\mathbf{r})\big[(\underline{U}^{T})^{-1}\big]_{n_1 \ell}^{\epsilon Fm}, ~~\rm{for}~~ r \ll F^{-1/2}. \label{eq11a}$$ Note that in Eqs. (\[eq10\]) and (\[eq11a\]) $\underline{U}^T$ and $[\underline{U}^T]^{-1}$ are the transpose and the inverse transpose of the LFT matrix $\underline{U}$, whose elements are given by $(\underline{U})^{\epsilon F m}_{n_1 \ell}= U_{n_1 \ell}^{\epsilon Fm}$. In Ref. [@stevens1996precision] Stevens [*et al.*]{} comment that in Eq. (\[eq10\]) only the left hand side possesses a uniform shift over the $\theta$-angles. Quantifying this argument, one can examine the difference between the semiclassical phases with and without the electric field. Indeed, for a zero energy electron the phase accumulation due to the existence of the electric field as a function of the angle $\theta$ obeys the expression $$\begin{aligned} \Delta \phi(r,~\theta)&=& \int^r k(r,~\theta) dr-\int^r k_0(r)dr \nonumber \\ &\approx&-\frac{\sqrt{2}}{5} F r^{5/2} \cos\theta,~~\rm{for}~~Fr^2\ll1, \label{approx}\end{aligned}$$ where $k(r,~\theta)$ ($k_0(r)$) indicates the local momentum with (without) the electric field $F$. From Eq. (\[approx\]) it is observed that for a field strength $F=1$ kV/cm and $r<50$ a.u. the phase modification due to the existence of the electric field is less than 0.001 radians. This simply means that at short distances both sides of Eq. (\[eq10\]) should exhibit a practically uniform phase over the angle $\theta$. Recapitulating, Eqs. (\[eq11a\]) and (\[eq10\]) constitute the mapping of the regular and irregular functions, respectively, from spherical to parabolic-cylindrical coordinates.
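The quoted bound is easy to reproduce numerically. The sketch below evaluates Eq. (\[approx\]) at $F=1$ kV/cm and $r=50$ a.u., converting the field to atomic units via $1~\mathrm{a.u.} \approx 5.142\times 10^{9}$ V/cm, and confirms a phase modification just below 0.001 radians:

```python
import math

F_au = 1000.0 / 5.142e9     # 1 kV/cm expressed in atomic units of field strength
r = 50.0                    # radial distance in a.u.
theta = math.pi             # cos(theta) = -1 maximizes |Delta phi|

dphi = -(math.sqrt(2.0)/5.0) * F_au * r**2.5 * math.cos(theta)
print(abs(dphi))            # ~9.7e-4 rad, below the 0.001 rad quoted in the text
print(F_au * r**2)          # ~4.9e-4, so the validity condition F r^2 << 1 holds
```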
Scattering observables in terms of the local frame transformation
=================================================================

This section implements Harmin's frame transformation theory to determine all the relevant scattering observables.

The asymptotic form of the frame transformed irregular solution and the reaction matrix
---------------------------------------------------------------------------------------

The irregular solutions which we defined in Eq. (\[irregU\]) are not the [*usual*]{} ones of scattering theory, since in the asymptotic region, namely $\eta \to \infty$, they do not lag the regular functions, Eq. (\[regU\]), by $\pi/2$. Hence, this particular set of irregular solutions should not be used in order to obtain the scattering observables, which are properly defined in the asymptotic region. However, by linearly combining Eqs. (\[regU\]) and (\[irregU\]) we define a new set of irregular solutions which are energy-normalized, asymptotically lag the regular ones by $\pi/2$, and read: $$\bar{\Upsilon}_{n_1 m}^{\epsilon F,~\rm{scat}}(\eta)= \frac{1}{\sin \gamma_{n_1}}\bar{\Upsilon}_{n_1 m}^{\epsilon F}(\eta)-\cot \gamma_{n_1} \Upsilon_{n_1 m}^{\epsilon F}(\eta), \label{eq11}$$ where this equation together with Eq. (\[regU\]) corresponds to a set of real irregular and regular solutions according to the usual conventions of scattering theory. The derivation of the reaction matrix follows. Eqs. (\[eq11\]) and (\[eq7\]) are combined and then substituted into Eq. (\[eq10\]) such that the irregular solution in spherical coordinates is expressed in terms of the $\bar{\Upsilon}_{n_1 m}^{\epsilon F,~\rm{scat}}$.
$$\frac{g_{\epsilon \ell m}(\mathbf{r})}{r}=\sum_{n_1} \big[\psi_{n_1 m}^{\epsilon F}(\mathbf{r})\cot(\gamma_{n_1})+\chi_{n_1 m}^{\epsilon F,~\rm{scat}}(\mathbf{r})\big](\underline{U}^T)^{\epsilon F m}_{\ell n_1}, \label{eq12}$$ where $\chi_{n_1 m}^{\epsilon F,~\rm{scat}}(\mathbf{r})$ is defined as $$\chi_{n_1 m}^{\epsilon F,~\rm{scat}}(\mathbf{r})=e^{i m \phi} \Xi_{n_1 m}^{\epsilon F}(\xi) \bar{\Upsilon}_{n_1 m}^{\epsilon F,~\rm{scat}}(\eta)/\sqrt{2 \pi \xi \eta}.$$ Hereafter, the short-range wave function (Eq. (\[spherwave\])) expressed in spherical coordinates is transformed via the LFT $U$ into the asymptotic wave function. Specifically, $$\begin{aligned} \Psi_{\epsilon \ell m} (\mathbf{r})&=&\sum_{n_1} \psi_{n_1 m}^{\epsilon F}(\mathbf{r}) \bigg[ \big[(\underline{U}^{T})^{-1}\big]^{\epsilon F m}_{n_1 \ell} \cos \delta_\ell - \cot \gamma_{n_1} (\underline{U})^{\epsilon F m}_{n_1 \ell}\times \nonumber \\ &\times& \sin \delta_\ell\bigg]-\chi_{n_1 m}^{\epsilon F}(\mathbf{r}) (\underline{U})^{\epsilon F m}_{ n_1 \ell } \sin \delta_\ell. \label{eq13}\end{aligned}$$ Then from Eq. (\[eq13\]), after some algebraic manipulations, the reaction matrix solutions are written in a compact matrix notation as $$\begin{aligned} \mathbf{\Phi}^{(R)}(\mathbf{r})&=& \mathbf{\Psi}[\cos \underline{\delta}]^{-1}\underline{U}^T[I-\cot \underline{\gamma} \underline{U}\tan\underline{\delta}\underline{U}^T ]^{-1} \\ \nonumber &=&\bar{\psi}(\mathbf{r})- \bar{\chi}(\mathbf{r})[\underline{U}\tan \underline{\delta} \underline{U}^T][I-\cot \underline{\gamma} \underline{U}\tan\underline{\delta}\underline{U}^T ]^{-1}, \label{eq14}\end{aligned}$$ where $I$ is the identity matrix and the matrices $\cos \underline{\delta},~~\tan \underline{\delta}$, and $\cot \underline{\gamma}$ are diagonal. Note that $\bar{\psi}$ ($\bar{\chi}$) indicates a vector whose elements are the $\psi_{n_1 m}^{\epsilon F}(\mathbf{r})$ ($\chi_{n_1 m}^{\epsilon F}(\mathbf{r})$) functions.
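The matrix structure appearing in Eq. (\[eq14\]) can be exercised numerically. The sketch below uses random stand-in matrices of toy dimensions (not actual Stark data) to verify two properties of this construction: the reaction matrix built from $\underline{U}\tan\underline{\delta}\,\underline{U}^T$ and $[I-\cot\underline{\gamma}\,\underline{U}\tan\underline{\delta}\,\underline{U}^T]^{-1}$ is symmetric, and its Cayley transform is unitary:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, nl = 6, 3                       # toy numbers of parabolic channels / partial waves
U = rng.standard_normal((n1, nl))   # stand-in for the LFT matrix U
tan_d = np.diag(np.tan(rng.uniform(-0.5, 0.5, nl)))     # short-range phases delta_l
cot_g = np.diag(1.0/np.tan(rng.uniform(0.5, 2.5, n1)))  # long-range phases gamma_n1

K = U @ tan_d @ U.T                                     # barrier-free reaction matrix
R = K @ np.linalg.inv(np.eye(n1) - cot_g @ K)           # reaction matrix of the text
S = (np.eye(n1) + 1j*R) @ np.linalg.inv(np.eye(n1) - 1j*R)  # Cayley transform

print(np.allclose(R, R.T))                       # True: R is symmetric
print(np.allclose(S @ S.conj().T, np.eye(n1)))   # True: S is unitary
```

The symmetry follows from $K(I-CK)^{-1}=(I-KC)^{-1}K$ for symmetric $K$ and diagonal $C$, and unitarity of the Cayley transform follows from $R$ being real and symmetric.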
Similarly, the elements of the vector $\mathbf{\Psi}$ are provided by Eq. (\[spherwave\]). Then from Eq. (\[eq14\]) the reaction matrix obeys the following relation: $$\underline{R}=\underline{U}\tan \underline{\delta}~\underline{U}^T \bigg[I- \cot \underline{\gamma}\underline{U}\tan \underline{\delta}~\underline{U}^T \bigg]^{-1}. \label{eq15}$$ In fact, the matrix product $\underline{U}~\tan \underline{\delta}~\underline{U}^T$ can be viewed as a reaction matrix $\mathcal{\underline{K}}$ which does not encapsulate the impact of the Stark barrier on the wave function. Moreover, as shown in Ref. [@robicheaux97], recasting the expression for the reaction matrix $\underline{R}$ in a form that does not involve the inverse $[\underline{U}^T]^{-1}$ improves its numerical stability. In addition, it can be shown with simple algebraic manipulations that the reaction matrix is symmetric. Note that this reaction matrix $R$ should not be confused with the Wigner-Eisenbud R-matrix. The corresponding [*physical*]{} $S$-matrix is defined from the $R$-matrix via a Cayley transformation, namely $$\begin{aligned} \underline{S}&=&\bigg[I+i \underline{R}\bigg]\bigg[I-i \underline{R}\bigg]^{-1} \nonumber \\ &=&\bigg[I-\big(\cot \underline{\gamma}-i I\big)\mathcal{\underline{K}}\bigg]\bigg[I-\big(\cot \underline{\gamma}+iI\big)\mathcal{\underline{K}}\bigg]^{-1}, \label{eq16}\end{aligned}$$ where clearly this $S$-matrix is equivalent to the corresponding result of Ref. [@zhao12]. Also, the $S$-matrix in Eq. (\[eq16\]) is unitary since the corresponding $R$-matrix is real and symmetric.

Dipole matrix and outgoing wave function with the atom-radiation field interaction
----------------------------------------------------------------------------------

As was already discussed, the pair of parabolic regular and irregular solutions $\{\psi,\chi\}$ are the standing-wave solutions of the corresponding Schrödinger equation. However, by linearly combining them and using Eq.
(\[eq14\]), the corresponding energy-normalized outgoing/incoming wave functions are expressed as: $$\begin{aligned} \tilde{\mathbf{\Psi}}^{\pm}(\mathbf{r})&=& \mp \mathbf{\Phi}_{R}(\mathbf{r})\big[I\mp i \underline{R}\big]^{-1}\nonumber \\ &=&\frac{\mathbf{X}^{\mp}(\mathbf{r})}{i\sqrt{2}}-\frac{\mathbf{X}^{\pm}(\mathbf{r})}{i\sqrt{2}}\bigg[I\pm i \underline{R}\bigg]\bigg[I\mp i \underline{R}\bigg]^{-1}, \label{eq17}\end{aligned}$$ where the elements of the vectors $\mathbf{X}^{\pm}(\mathbf{r})$ are defined by the relation $[\mathbf{X}^\pm(\mathbf{r})]^{\epsilon F}_{n_1 m}=(-\chi_{n_1 m}^{\epsilon F} (\mathbf{r})\pm i \psi_{n_1 m}^{\epsilon F} (\mathbf{r}))/\sqrt{2}$. In the treatment of the photoionization of alkali-metal atoms, the dipole matrix elements are needed to compute the cross sections which characterize the excitation of the atoms by photon absorption. Therefore, initially we assume that at small distances the short-range dipole matrix elements possess the form $d_\ell=\bra{\Psi_{\epsilon \ell m}}\hat{\varepsilon} \cdot \hat{r} \ket{\Psi_{\rm{init}}}$. Note that the term $\hat{\varepsilon} \cdot \hat{r}$ is the dipole operator, $\hat{\varepsilon}$ denotes the polarization vector and $\ket{\Psi_{\rm{init}}}$ indicates the initial state of the atom. Then the dipole matrix element which describes the transition amplitude from the initial state to the $n_1$-th reaction-matrix state is $$D_{n_1}^{(R)}=\sum_\ell d_\ell \big\{[\cos\underline{\delta}]^{-1} \underline{U}^T\big[I-\cot \underline{\gamma}\mathcal{\underline{K}}\big]^{-1}\big\}_{\ell n_1}. \label{eq18}$$ Now with the help of Eq. (\[eq18\]) we define the dipole matrix elements for transitions from the initial state to the incoming wave final state which has only outgoing waves in the $n_1$-th channel. The resulting expression is $$D^{(-)}_{n_1}=\sum_{n^{\prime}_1}D_{n^{\prime}_1}^{(R)} \big[(I-i \underline{R})^{-1}\big]_{n^{\prime}_1 n_1}. \label{eq19}$$ Eq.
(\[eq19\]) provides the necessary means to properly define the outgoing wave function in the presence of the atom-radiation field interaction. As was shown in Ref. [@zhao12b], the outgoing wave function can be derived as a solution of an inhomogeneous Schrödinger equation that describes the atom being perturbed by the radiation field. Formally this implies that $$[\epsilon-H]\Psi_{\rm{out}}(\mathbf{r})= \hat{\varepsilon} \cdot \hat{r} \Psi_{\rm{init}}(\mathbf{r}), \label{20}$$ where $\Psi_{\rm{out}}(\mathbf{r})$ describes the motion of the electron after its photoionization, moving in the presence of an electric field, and $H$ is the Stark Hamiltonian with $\epsilon$ being the energy of the ionized electron. The $\Psi_{\rm{out}}(\mathbf{r})$ can be expanded in outgoing wave functions involving the dipole matrix elements of Eq. (\[eq19\]). More specifically, we have that $$\Psi_{\rm{out}}(\mathbf{r})=\sum_{n_1 m} D^{(-)}_{n_1 m}X_{n_1 m}^{\epsilon F,~+}(\mathbf{r}). \label{eq21}$$

Wave function microscopy and differential cross sections
--------------------------------------------------------

Recent experimental advances [@cohen13; @itatani2004tomographic; @bordas03; @Nicole02] have managed to detect the square modulus of the electronic wave function, which complements a number of corresponding theoretical proposals [@ost1; @ost2; @ost3; @ost4]. This has been achieved by using a position-sensitive detector to measure the flux of slow electrons that are ionized in the presence of an electric field. The following defines the relevant observables associated with photoionization microscopy. The key quantity is the differential cross section, which in turn is defined through the electron current density. As in Ref. [@zhao12b], consider a detector placed beneath the atomic source with its plane perpendicular to the axis of the electric field. Then, with the help of Eq.
(\[eq21\]) the electron current density in cylindrical coordinates has the following form: $$R(\rho, z_{\rm{det}},\phi)=\frac{2 \pi \omega}{c}\mathrm{Im} \bigg[-\Psi_{\rm{out}}(\mathbf{r})^\ast\frac{d}{dz}\Psi_{\rm{out}}(\mathbf{r})\bigg]_{z=z_{\rm{det}}}, \label{eq22}$$ where $z_{\rm{det}}$ indicates the position of the detector along the $z$-axis, $c$ is the speed of light and $\omega$ denotes the frequency of the photon being absorbed by the electron. Integration over the azimuthal angle $\phi$ leads to the differential cross section per unit length in the $\rho$ coordinate. Namely, we have that $$\frac{d \sigma(\rho, z_{\rm{det}})}{d\rho}=\int_0^{2 \pi}d\phi~\rho R(\rho, z_{\rm{det}},\phi). \label{eq23}$$

Eigenchannel R-matrix calculation
=================================

Harmin's Stark effect theory for nonhydrogenic atoms is mainly based on the semi-classical WKB approach. In order to eliminate the WKB approximation as a potential source of error, this section implements a fully quantal description of Harmin's theory based on a variational eigenchannel R-matrix calculation as was formulated in Refs. [@Greene1983pra; @GreeneKim1988pra] and reviewed in [@aymar1996multichannel]. As implemented here using a B-spline basis set, the technique also shares some similarities with the Lagrange-mesh R-matrix formulation developed by Baye and coworkers [@baye2010]. The present application to a 1D system with both an inner and an outer reaction surface accurately determines regular and irregular solutions of the Schrödinger equation in the $\eta$ degree of freedom.
The present implementation can be used to derive two independent solutions of any one-dimensional Schrödinger equation of the form $$\bigg[-\frac{1}{2} \frac{d^2}{d\eta^2} + V(\eta) \bigg]\psi(\eta)= \frac{1}{4} \epsilon \psi(\eta), \label{eq25}$$ where $$V(\eta)=\frac{m^2-1}{8 \eta^2}-\frac{1-\beta}{2\eta}-\frac{F}{8}\eta. \label{eq26}$$ The present application of the non-iterative eigenchannel R-matrix theory adopts a reaction surface $\Sigma$ with two disconnected parts, one at an inner radius $\eta_1$ and the other at an outer radius $\eta_2$. The reaction volume $\Omega$ is the region $\eta_1 < \eta < \eta_2$. This one-dimensional R-matrix calculation is based on the previously derived variational principle [@FanoLeePRL; @Greene1983pra] for the eigenvalues $b$ of the R-matrix, $$b[\psi]=\frac{\int_\Omega\big[- \overrightarrow{\nabla}\psi^\ast \cdot \overrightarrow{\nabla}\psi+2\psi^\ast (E-V) \psi \big]d \Omega}{\int_\Sigma \psi^\ast \psi d\Sigma}. \label{eq27}$$ Physically, these R-matrix eigenstates have the same outward normal logarithmic derivative everywhere on the reaction surface, consisting here of the two points $\Sigma_1$ and $\Sigma_2$. The desired eigenstates obey the following boundary condition: $$\frac{\partial \psi}{\partial n}+b \psi=0,~~\rm{on}~\Sigma. \label{eq26b}$$ In the present application the $\psi$-wave functions are expanded as a linear combination of a nonorthogonal B-spline basis [@deBoor], i.e. $$\psi(\eta)=\sum_{i} P_i B_i(\eta) = \sum_{C} P_C B_C(\eta)+ P_IB_I(\eta)+ P_OB_O(\eta), \label{eq28}$$ where the $P_i$ denote the unknown expansion coefficients and the $B_i(\eta)$ stand for the B-spline basis functions. The first term on the right-hand side of Eq. (\[eq28\]) was regarded as the “closed-type basis set” in [@aymar1996multichannel] because every function $B_c(\eta)$ vanishes on the reaction surface, i.e. $B_c(\eta_1)= B_c(\eta_2)=0$.
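As a sanity check of the variational expression, Eq. (\[eq27\]), one can insert an exact solution of a solvable model. The sketch below uses a free particle ($V=0$, $E=k^2/2$; an illustrative assumption, not the potential of Eq. (\[eq26\])) on the interval $[0,L]$. The symmetric solution $\cos[k(\eta-L/2)]$ has the same outward normal logarithmic derivative at both surface points, and the variational functional reproduces it:

```python
import math
from scipy.integrate import quad

k, L = 1.3, 2.0                           # illustrative wave number and box length
psi  = lambda x:  math.cos(k*(x - L/2))   # exact solution of -psi''/2 = E psi, E = k^2/2
dpsi = lambda x: -k*math.sin(k*(x - L/2))

# b[psi] = int(-psi'^2 + 2(E - V) psi^2) / (psi(eta1)^2 + psi(eta2)^2), here V = 0
num, _ = quad(lambda x: -dpsi(x)**2 + k**2*psi(x)**2, 0.0, L)
b_var = num / (psi(0.0)**2 + psi(L)**2)

b_exact = k*math.tan(k*L/2)   # common value of -(dpsi/dn)/psi on both surfaces
print(b_var, b_exact)         # the two values agree
```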
The two basis functions $B_I(\eta)$ and $B_O(\eta)$ correspond to the “open-type basis functions’’ of Ref. [@aymar1996multichannel] in that they are the only B-spline functions that are nonzero on the reaction surface. Specifically, only $B_I(\eta)$ is nonzero on the inner surface $\eta=\eta_1$ ($\Sigma_1$) and only $B_O(\eta)$ is nonzero on the outer surface $\eta=\eta_2$ ($\Sigma_2$). Moreover the basis functions $B_I$ and $B_O$ have no region of overlap in the matrix elements discussed below. Insertion of this trial function into the variational principle leads to the following generalized eigenvalue equation: $$\underline{\Gamma} P=b \underline{\Lambda} P. \label{eq29}$$ In addition, the real, symmetric matrices $\underline{\Gamma}$ and $\underline{\Lambda}$ are given by the following expressions for this one-dimensional problem: $$\begin{aligned} \Gamma_{i j}&=&\int_{\eta_1}^{\eta_2} \big[(\frac{1}{2} \epsilon -2 V(\eta) ) B_i(\eta)B_j(\eta) + B_i^{\prime}(\eta)B_j^{\prime}(\eta) \big]d \eta, \\ \Lambda_{ij}&=& B_i(\eta_1) B_j(\eta_1)+ B_i(\eta_2) B_j(\eta_2)=\delta_{i,I}\delta_{I,j} + \delta_{i,O}\delta_{O,j}, \end{aligned}$$ where $\delta$ indicates the Kronecker symbol and the $^{\prime}$ are regarded as the derivatives with respect to the $\eta$. It is convenient to write this linear system of equations in a partitioned matrix notation, namely: $$\begin{aligned} \underline{\Gamma}_{CC}P_C +\underline{\Gamma}_{CI}P_I+ \underline{\Gamma}_{CO}P_O &=&0 \\ \underline{\Gamma}_{IC}P_C+ \underline{\Gamma}_{II}P_I &=&b P_I \\ \underline{\Gamma}_{OC}P_C+ \underline{\Gamma}_{OO}P_O &=&b P_O. \end{aligned}$$ Now the first of these three equations is employed to eliminate $P_C$ by writing it as $P_C = -\underline{\Gamma}_{CC}^{-1}\underline{\Gamma}_{CI}P_I -\underline{\Gamma}_{CC}^{-1}\underline{\Gamma}_{CO}P_O$, which is equivalent to the “streamlined transformation” in Ref.[@GreeneKim1988pra]. 
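The elimination step above amounts to a Schur complement and can be sketched in a few lines of linear algebra. Below, a random symmetric matrix plays the role of $\underline{\Gamma}$ (toy dimensions, not an actual B-spline Hamiltonian), the surface overlap $\underline{\Lambda}$ has the two nonzero entries given in the text, and the reduced $2\times 2$ eigenproblem is checked against the full generalized eigenvalue equation:

```python
import numpy as np

rng = np.random.default_rng(1)
nC = 8                                  # size of the closed-type block (toy value)
n = nC + 2                              # plus the two open-type functions B_I, B_O
G = rng.standard_normal((n, n)); G = (G + G.T)/2             # stand-in for Gamma
Lam = np.zeros((n, n)); Lam[nC, nC] = Lam[nC+1, nC+1] = 1.0  # surface overlap matrix

C, S = slice(0, nC), slice(nC, n)       # closed-type / surface (I, O) index blocks
GCCinv = np.linalg.inv(G[C, C])
Omega = G[S, S] - G[S, C] @ GCCinv @ G[C, S]   # streamlined 2x2 matrix Omega

b, PS = np.linalg.eigh(Omega)           # R-matrix eigenvalues b_lambda
PC = -GCCinv @ G[C, S] @ PS             # back-substituted closed-type coefficients
P = np.vstack([PC, PS])
print(np.allclose(G @ P, (Lam @ P) * b))  # True: Gamma P = b Lambda P, column-wise
```

In a real B-spline calculation $\underline{\Gamma}_{CC}$ is banded, so the products $\underline{\Gamma}_{CC}^{-1}\underline{\Gamma}_{CI}$ and $\underline{\Gamma}_{CC}^{-1}\underline{\Gamma}_{CO}$ would be formed with a banded solver rather than a dense inverse.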
This finally gives a 2$\times$2 matrix $\underline{\Omega}$ to diagonalize at each $\epsilon$ in order to find the two R-matrix eigenvalues $b_\lambda$ and corresponding eigenvectors $P_{i\lambda}$: $$\begin{aligned} \bigg(\begin{matrix} \underline{\Omega}_{II} & \underline{\Omega}_{IO} \\[0.3em] \underline{\Omega}_{OI} & \underline{\Omega}_{OO} \\[0.3em] \end{matrix} \bigg) \bigg(\begin{matrix} P_{I} \\[0.3em] P_{O} \\[0.3em] \end{matrix} \bigg) = b \bigg(\begin{matrix} P_{I} \\[0.3em] P_{O} \\[0.3em] \end{matrix} \bigg). \label{eq32}\end{aligned}$$ Here, e.g., the matrix element $\underline{\Omega}_{II} \equiv \underline{\Gamma}_{II}-\underline{\Gamma}_{IC}\underline{\Gamma}_{CC}^{-1}\underline{\Gamma}_{CI}$, etc. In any 1D problem like the present one, the use of a B-spline basis set leads to a banded structure for $\underline{\Gamma}_{CC}$ which makes the construction of $\underline{\Gamma}_{CC}^{-1} \underline{\Gamma}_{CI}$ and $\underline{\Gamma}_{CC}^{-1} \underline{\Gamma}_{CO}$ highly efficient in terms of memory and computer processing time; this step is the slowest in this method of solving the differential equation, but still manageable even in complex problems where the dimension of $\underline{\Gamma}_{CC}$ can grow as large as $10^4$ to $10^5$. Again, the indices $C$ refer to the part of the basis expansion that is confined fully within the reaction volume and vanishes on both reaction surfaces. The diagonalization of Eq. (\[eq32\]) provides us with the eigenvalues $b_\lambda$ and the corresponding eigenvectors, which define two linearly independent wave functions $\psi_\lambda$, with $\lambda=1,2$. These obey the Schrödinger equation, Eq. (\[eq25\]), and have equal normal logarithmic derivatives at $\eta_1$ and $\eta_2$. The final step is to construct two linearly independent solutions that coincide at small $\eta$ with the regular and irregular field-free $\eta$-solutions $f_{\epsilon \beta m}(\eta)$ and $g_{\epsilon \beta m}(\eta)$ (Cf.
Appendix \[app:coulomhalfint\]). These steps are rather straightforward and are not detailed further in this paper.

Results and discussion
======================

The frame transformed irregular solutions
-----------------------------------------

![(color online). The irregular solutions in spherical coordinates illustrated up to $r=80$ au where $\mathbf{r}=(r,~\theta=5\pi/6,~\phi=0)$. In all panels the azimuthal quantum number is set to $m=1$ and the black solid line indicates the irregular Coulomb function in spherical coordinates, namely $\frac{g_{\epsilon \ell m}^{(C)}(\mathbf{r})}{r}$. (a) depicts the case of $\ell=1$ where $\frac{g_{\epsilon \ell m}^{(LFT)}(\mathbf{r})}{r}$ denotes the irregular function in spherical coordinates calculated within the local frame transformation (LFT) framework, for two different values of the total number of $n_1$ states, namely $n_1^{(tot)}=60$ (green dashed line) and $n_1^{(tot)}=100$ (red dots). (b) refers to the case of $\ell=6$ where $\frac{g_{\epsilon \ell m}^{(LFT)}(\mathbf{r})}{r}$ is calculated for $n_1^{(tot)}=60$ (green dashed line), $n_1^{(tot)}=100$ (blue diamonds) and $n_1^{(tot)}=230$ (red dots) states. Note that panel (c) is a zoomed-out plot of the curves shown in panel (b).[]{data-label="fig2"}](fig2.eps) To reiterate, Zhao et al. [@zhao12] claim that the Fano-Harmin frame transformation is inaccurate, based on a disagreement between their full numerical calculations of the differential cross section and the LFT calculation. They then claim to have investigated the origin of the discrepancy and pinpointed an error in the frame transformed irregular function. The present section carefully tests the main conclusion of Ref. [@zhao12] that Eq. (\[eq10\]) does not accurately yield the development of the irregular spherical Coulomb functions into a parabolic field-dependent solution (see Fig.5 in Ref. [@zhao12]).
Fig.\[fig2\] illustrates the irregular solutions in spherical coordinates where $\mathbf{r}=(r,~\theta=5\pi/6,~\phi=0)$ and the azimuthal quantum number is set to be $m=1$. The energy is taken to be $\epsilon=135.8231$ cm$^{-1}$ and the field strength is $F=640$ V/cm. In addition, we focus on the regime where $r< 90~ \rm{au} \ll F^{-1/2}$. In all the panels the black solid line indicates the analytically known irregular Coulomb function, namely $\frac{g_{\epsilon \ell m}^{(C)}(\mathbf{r})}{r}$. Fig.\[fig2\](a) and (b,c) examine the cases of angular momentum $\ell=1$ and $6$, respectively. All the green dashed lines, the diamonds and dots correspond to the frame transformed irregular Coulomb functions in spherical coordinates, namely $\frac{g_{\epsilon \ell m}^{(LFT)}(\mathbf{r})}{r}$, which are calculated by summing the irregular $\chi_{n_1 m}^{\epsilon F}(\mathbf{r})$ functions in the parabolic coordinates from $n_1=0$ up to $n_1^{(tot)}$, as Eq. (\[eq10\]) indicates. The positive value of the energy ensures that all the $n_1$-channels lie well above the local maximum of the potential in $\eta$, whereby the phase parameter $\gamma_{n_1}$ is very close to its semiclassical expected value $\pi/2$. Furthermore, since only short distances are relevant to this comparison, namely $r< 90$ au, the summed $\bar{\Upsilon}_{n_1 m}^{\epsilon F}(\eta)$ functions on the right-hand side of Eq. (\[eq10\]) in the $n_1$-th irregular $\chi_{n_1 m}^{\epsilon F}(\mathbf{r})$ will be equal to the analytically known Coulomb irregular functions in the parabolic coordinates. This is justified since at the interparticle distances that we are interested in, namely $\ll F^{-1/2}$, the electric field is negligible in comparison to the Coulomb interaction. Then the corresponding Schrödinger equation reduces to that of the pure Coulomb field, which is analytically solvable in both spherical and parabolic coordinates.
Thus, in the following we employ the above-mentioned considerations in the evaluation of the right-hand side of Eq. (\[eq10\]) for Figs.\[fig2\] and \[fig3\]. ![(color online). The irregular solutions in spherical coordinates are shown up to $r=80$ au where $\mathbf{r}=(r,~\theta=5\pi/6,~\phi=0)$. In all panels the azimuthal quantum number is set to be $m=1$ and the black solid line indicates the analytically known irregular Coulomb function, namely $\frac{g_{\epsilon \ell m}^{(C)}(\mathbf{r})}{r}$. (a) depicts the case of $\ell=2$ where $\frac{g_{\epsilon \ell m}^{(LFT)}(\mathbf{r})}{r}$ denotes the irregular function in spherical coordinates calculated within the local frame transformation (LFT) framework for $n_1^{(tot)}=100$ states (red dots). Similarly, (b) refers to the case of $\ell=3$ with $\frac{g_{\epsilon \ell m}^{(LFT)}(\mathbf{r})}{r}$ being calculated for $n_1^{(tot)}=100$ (red dots) states.[]{data-label="fig3"}](fig3.eps) ![(color online). The irregular solutions in spherical coordinates at negative energies, i.e. $\epsilon=-135.8231$ cm$^{-1}$, illustrated for $\mathbf{r}=(r,~\theta= \frac{5 \pi}{6},~\phi=0)$. In all panels the azimuthal quantum number is set to be $m=1$ and the black solid line indicates the analytically known irregular Coulomb function, namely $\frac{g_{\epsilon \ell m}^{(C)}(\mathbf{r})}{r}$. Accordingly, the red dots correspond to the LFT calculations of the irregular function, namely $\frac{g_{\epsilon \ell m}^{(LFT)}(\mathbf{r})}{r}$. Panels (a-d) depict the cases of $\ell=1,~2,~3~\rm{and}~6$, respectively. For all the LFT calculations the total number of $n_1$ states is $n_1^{(tot)}=25$, which corresponds to $\beta_{n_1}<1$.[]{data-label="fig4"}](fig4.eps) Fig.\[fig2\](a) compares the radial irregular Coulomb function (black line) with those calculated in the LFT theory for $\ell=m=1$. In order to check the convergence of the LFT calculations with respect to the total number of states $n_1^{(\rm{tot})}$, different values are considered.
Indeed, we observe that the $\frac{g_{\epsilon \ell m}^{(LFT)}(\mathbf{r})}{r}$ for $n_1^{(tot)}=60$ (green dashed line) does not coincide with $\frac{g_{\epsilon \ell m}^{(C)}(\mathbf{r})}{r}$ (black line), particularly in the interval of small interparticle distances $r$. This can be explained with the help of Fig.\[fig1\], which demonstrates that the LFT $U$ for $\ell=1$ possesses nonzero elements for $n_1> 60$, and those elements are crucial for the growth of the irregular solution at small distances. Therefore, the summation in Eq. (\[eq10\]) for $\ell=1$ does not begin to achieve convergence until $n_1\ge 100$, where the corresponding elements of the LFT $U$ tend to zero. Indeed, when the sum over $n_1$ states is extended to this larger range, the irregular function $\frac{g_{\epsilon \ell m}^{(LFT)}}{r}$ of LFT theory, i.e. for $n_1^{(tot)}=100$ (red dots), accurately matches the spherical field-free irregular solution $\frac{g_{\epsilon \ell m}^{(C)}(\mathbf{r})}{r}$ (black line) (see Fig.\[fig2\]) at small electron distances $r$. ![image](fig5.eps) Furthermore, Fig.\[fig2\](b) refers to the case of $\ell=6,~m=1$. Specifically, for $n_1^{(tot)}=60$ states the $\frac{g_{\epsilon \ell m}^{(LFT)}}{r}$ (green dashed line) agrees poorly with the $\frac{g_{\epsilon \ell m}^{(C)}}{r}$ (black line). However, as in the case of $\ell=1$, by increasing the number of $n_1$ states summed over in Eq. (\[eq10\]), namely to $n_1^{(tot)}=100$ (blue diamonds) and to $n_1^{(tot)}=230$ (red dots), the corresponding $\frac{g_{\epsilon \ell m}^{(LFT)}}{r}$ achieves better agreement with the $\frac{g_{\epsilon \ell m}^{(C)}}{r}$. In contrast to the case where $\ell=1$, the convergence is observed to be very slow for $\ell=6$. The main reason for this is that for $r<20$ au we are in the classically forbidden region where $\frac{g_{\epsilon \ell m}^{(LFT)}}{r}$ diverges as $1/r^{\ell+1}$. From Eq.
(\[eq10\]) it is clear that the sum will diverge due to the divergent behavior of the irregular functions of the $\eta$ direction, namely the $\bar{\Upsilon}_{n_1 m}^{\epsilon F}(\eta)$. Hence, in order for the $\bar{\Upsilon}_{n_1 m}^{\epsilon F}(\eta)$ to be divergent in the interval of 10 to 80 au, it is important to take into account many $n_1$ states which correspond to $\beta_{n_1}>1$, since only then does the term $1-\beta/\eta$ become repulsive, producing the diverging behavior appropriate to a classically forbidden region. Fig.\[fig2\](c) is a zoomed-out version of the functions shown in panel (b), which demonstrates that the $\frac{g_{\epsilon \ell m}^{(LFT)}}{r}$ for $n_1^{(tot)}=230$ correctly captures the divergent behavior of $\frac{g_{\epsilon \ell m}^{(C)}}{r}$ for $r<20$. Similarly, Fig.\[fig3\] explores the cases of $\ell=2$ (see Fig.\[fig3\](a)) and $\ell=3$ (see Fig.\[fig3\](b)). In both panels the black solid lines indicate the field-free Coulomb function in spherical coordinates $\frac{g_{\epsilon \ell m}^{(C)}}{r}$ and the red dots correspond to the $\frac{g_{\epsilon \ell m}^{(LFT)}}{r}$ for $n_1^{(tot)}=100$. In both cases the $\frac{g_{\epsilon \ell m}^{(LFT)}}{r}$ are in excellent agreement with $\frac{g_{\epsilon \ell m}^{(C)}}{r}$. Having analyzed the LFT calculations at positive energies, we turn to Fig. \[fig4\], which illustrates the corresponding LFT calculations at negative energies, namely $\epsilon=-135.8231$ cm$^{-1}$, where the field strength is set to be $F=640~\rm{V/cm}$. Note that these parameters [@note] are used for an analogous comparison in Fig.5 of Ref. [@zhao12]. In all panels the azimuthal quantum number is considered to be $m=1$, the solid black lines denote the analytically known irregular Coulomb function $[\frac{g_{\epsilon \ell m}^{(C)}(\mathbf{r})}{r}]$ and red dots refer to the corresponding LFT calculations $[\frac{g_{\epsilon \ell m}^{(LFT)}(\mathbf{r})}{r}]$. In Fig.
\[fig4\](a-d) the $\ell=1,~2,~3~\rm{and}~6$ cases are considered at $\mathbf{r}=(r,~\theta= \frac{5 \pi}{6},~\phi=0)$, respectively. In addition, for all the panels of Fig. \[fig4\] in the LFT calculations the summation over the $n_1$ states is truncated at $n_1^{\rm{tot}}=25$ for the considered energy and field strength values. This simply means that only the fractional charges $\beta_{n_1}$ that obey the relation $\beta_{n_1}<1$ contribute to the summation for the frame-transformed irregular function. These states essentially describe all the relevant physics since only for these states can the “down field” part of the wave function probe the core, either above or below the Stark barrier. Therefore, the $n_1$ states for which $\beta_{n_1}>1$ are physically irrelevant since they yield a strongly repulsive barrier in the “down field” degree of freedom, shielding the core completely. Moreover, for these states the pair of regular and irregular functions considered in Sec. II C for the $\eta$-degree of freedom acquire imaginary parts, due to the fact that the collision energy lies below the minimum of the corresponding Coulomb potential. Consequently, these states are omitted from the sum for the frame-transformed irregular function. This omission of states with $\beta_{n_1}>1$ is the main origin of the inaccuracy in the LFT calculations. The impact of the omitted states is demonstrated in Fig. \[fig4\], where discrepancies are observed as the orbital angular momentum $\ell$ increases, since more $n_1$ states are then needed. Indeed, in panels (a), (b) and (c) of Fig. \[fig4\] good agreement is observed between the frame-transformed irregular function and the Coulombic one (black solid line). On the other hand, in panel (d) of Fig. \[fig4\] small discrepancies are observed, particularly for $r>20$, occurring due to poor convergence of the summation over the $n_1$ states.
However, these discrepancies are of minor importance since they correspond to negligible quantum defects and thus make only minor contributions to the photoabsorption cross section. The bottom line of the computations shown in this subsection is that the frame-transformed irregular functions $\frac{g_{\epsilon \ell m}^{(LFT)}}{r}$ do not display, at least for $\ell= 1~\rm{or}~2$, the inaccuracies that were claimed by Zhao [*et al.*]{} in Ref.[@zhao12]. For negative energies, our evidence suggests that the inclusion of $n_1$ states with $\beta_{n_1}>1$ would enhance the accuracy of the frame-transformed irregular functions, as is already demonstrated by the LFT calculations at positive energies. Photoionization microscopy -------------------------- Next we compute the photoionization microscopy observable for Na atoms, namely the differential cross section, in terms of the LFT theory. The system considered is a two-step photoionization of ground-state Na in the presence of an electric field $F$ of strength 3590 V/cm, which is again the same system and field strength treated in Ref.[@zhao12]. The two consecutive laser pulses are assumed to be $\pi$ polarized along the field axis, and they trigger in succession the following two transitions: (i) the excitation of the ground state to the intermediate state $^2 P_{3/2}$, namely $[\rm{Ne}]~3s~~^2S_{1/2}\rightarrow[\rm{Ne}]~3p~~^2P_{3/2}$, and (ii) the ionization from the intermediate state $^2 P_{3/2}$. In addition, due to spin-orbit coupling the intermediate state will be in a superposition of states associated with different orbital azimuthal quantum numbers, i.e. $m=0$ and $1$. Hyperfine depolarization effects are neglected in the present calculations. Fig. \[fig5\] illustrates the differential cross section $\frac{d\sigma(\rho, z_{\rm{det}})}{d \rho}$ for Na atoms, where the detector is placed at $z_{\rm{det}}=-1$ mm and its plane is perpendicular to the direction of the electric field.
Since spin-orbit coupling causes the photoelectron to possess both azimuthal orbital quantum numbers $m=0,1$, the contributions from both quantum numbers are explored in the following. Fig.\[fig5\] panels (a) and (c) illustrate the partial differential cross section for transitions of $m_{\rm{int}}=0 \rightarrow m_f=0$, where $m_{\rm{int}}$ indicates the [*intermediate*]{} state azimuthal quantum number and $m_f$ denotes the corresponding quantum number in the final state. Similarly, panels (b) and (d) in Fig.\[fig5\] are for the transitions $m_{\rm{int}}=1 \rightarrow m_f=1$. In addition, in all panels of Fig.\[fig5\] the red solid lines correspond to the LFT calculations, whereas the black dots indicate the [*ab initio*]{} numerical solution of the inhomogeneous Schrödinger equation, which employs a velocity mapping technique and does not make use of the LFT approximation. More specifically, this method uses a discretization of the Schrödinger equation on a grid of points in the radial coordinate $r$ and an orbital angular momentum grid in $\ell$. The main framework of the method is described in detail in Sec. 2.1 of Ref. [@TR1] and below only three slight differences are highlighted. In order to represent a cw-laser, the source term was changed to $S_0(\vec{r},t)=[1+\rm{erf}(t/t_w)]z\psi_{init}(\vec{r})$ with $\psi_{\rm{init}}$ either the $3p,~m=0$ or the $3p,~m=1$ state. The time dependence $1+\rm{erf}(t/t_w)$ gives a smooth turn-on for the laser with a time width of $t_w$; $t_w$ is chosen to be of the order of a few picoseconds. The second difference is that the Schrödinger equation is solved until the transients from the laser turn-on have decayed to zero. The last difference is in how the differential cross section is extracted. The radial distribution in space slowly evolves with increasing distance from the atom, and the calculations become challenging as the region represented by the wave function increases.
To achieve convergence in a smaller spatial region, the velocity distribution in the $\rho$-direction is obtained directly. The wave function in $r,~\ell$ is numerically summed over the orbital angular momenta $\ell$, yielding $\psi_m(\rho,z)$ where $m$ is the azimuthal angular momentum. Finally, using standard numerical techniques a Hankel transformation is performed on the wave function $\psi_m(\rho,z)$, which reads $$\psi_m(k_\rho,z)=\int d\rho \rho J_m(k_\rho\rho)\psi_m(\rho,z)$$ and which can be related to the differential cross section. The cross section is proportional to $k_\rho |\psi_m(k_\rho,z)|^2$ in the limit that $z\to -\infty$. The $k_\rho$ is related to the $\rho$ in Fig.\[fig5\] through a scaling factor. The convergence of our results is tested with respect to the number of angular momenta, the number of radial grid points, the time step, $|z|_{max}$, $t_w$ and the final time. The bandwidth of the following calculations is equal to $0.17$ cm$^{-1}$. In addition, in order to check the validity of our velocity mapping calculation we directly compute the differential cross section numerically through the electron flux defined in Eq. (\[eq22\]). Agreement at the percent level is observed, solidifying our investigations. One sees immediately in panels (a-d) of Fig.\[fig5\] that the LFT calculations are in good agreement with the full numerical ones, with only minor areas of disagreement. In particular, the interference patterns in all calculations are essentially identical. An important point is that panels (a) and (c) do not exhibit the serious claimed inaccuracies of the LFT approximation that were observed in Ref.[@zhao12]. In fact, the present LFT calculations are in excellent agreement with the corresponding LFT calculations of Zhao [*et al.*]{} Evidently, this suggests that the disagreement observed by Zhao [*et al.*]{} originates from the coupled-channel calculations and not from the LFT theory, in particular for the case of $m=0$.
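The Hankel transformation used above can be illustrated with a simple quadrature. The following is a minimal sketch, not the production velocity-mapping code: a hand-rolled integer-order Bessel $J_m$ (via its integral representation) plus a trapezoidal rule for $\int d\rho\,\rho J_m(k_\rho\rho)\psi_m(\rho)$; the grid sizes and the Gaussian test profile are arbitrary illustrative choices.

```python
import math

def bessel_j(m, x, n=200):
    # J_m(x) = (1/pi) * ∫_0^pi cos(m*theta - x*sin(theta)) dtheta, integer m,
    # evaluated with the midpoint rule (spectrally accurate here).
    h = math.pi / n
    return sum(math.cos(m*(k + 0.5)*h - x*math.sin((k + 0.5)*h))
               for k in range(n)) * h / math.pi

def hankel_transform(psi, rho, m, k):
    # psi_m(k) = ∫ d rho * rho * J_m(k rho) * psi_m(rho), trapezoidal rule on `rho`.
    total = 0.0
    for i in range(len(rho) - 1):
        f0 = rho[i]   * bessel_j(m, k*rho[i])   * psi[i]
        f1 = rho[i+1] * bessel_j(m, k*rho[i+1]) * psi[i+1]
        total += 0.5*(f0 + f1)*(rho[i+1] - rho[i])
    return total
```

A convenient sanity check is the known order-zero pair: the Hankel transform of $e^{-\rho^2/2}$ is $e^{-k^2/2}$.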
Indeed, panels (b) and (d) of Fig.\[fig5\] are in excellent agreement with the corresponding results of both the LFT and coupled-channel calculations of Ref.[@zhao12]. Summary and conclusions ======================= The present study reviews Harmin’s Stark-effect theory and develops a standardized form of the corresponding LFT theory. In addition, the LFT Stark-effect theory is formulated in the traditional framework of scattering theory, including its connections to the photoionization observables involving the dipole matrix elements, in particular the differential cross section. In order to quantitatively test the LFT, the present formulation does not use semi-classical WKB theory as was utilized by Harmin. Instead, the one-dimensional differential equations are solved within an eigenchannel R-matrix framework. This study has thoroughly investigated the core idea of the LFT theory, which in a nutshell defines a mapping between the irregular solutions of two regions, namely the spherical solutions in the field-free region close to the origin and the parabolic coordinate solutions relevant from the core region all the way out to asymptotic distances. For positive energies, our calculations demonstrate that indeed the mapping formula Eq. (\[eq10\]) predicts the correct Coulomb irregular solution in spherical coordinates (see Figs.\[fig2\] and \[fig3\]). On the other hand, at negative energies it is demonstrated (see Fig.\[fig4\]) that the summation over solely the “down field” states with $\beta_{n_1}<1$ imposes minor limitations on the accuracy of the LFT calculations, mainly for $\ell>3$. Our study also investigates the concept of wave function microscopy through calculations of photoionization differential cross sections for a Na atom in the presence of a uniform electric field. The photoionization process studied is a resonant two-photon process where the laser field is assumed to be $\pi$ polarized.
The excellent agreement between the LFT and the full velocity mapping calculation has been conclusively demonstrated, and the large discrepancies claimed by Ref.[@zhao12] in the case of $m_{\rm{int}}=0 \rightarrow m_f=0$ are not confirmed by our calculations. These findings suggest that the LFT theory passes the stringent tests of wave function microscopy, and can be relied upon both to provide powerful physical insight and quantitatively accurate observables, even for a complicated observable such as the differential photoionization cross section in the atomic Stark effect. The authors acknowledge Ilya Fabrikant and Jesus Perez-Rios for helpful discussions. The authors acknowledge support from the U.S. Department of Energy Office of Science, Office of Basic Energy Sciences Chemical Sciences, Geosciences, and Biosciences Division under Award Numbers DE-SC0012193 and DE-SC0010545. Coulomb functions for non-positive half-integer angular momentum at negative energies {#app:coulomhalfint} ===================================================================================== In this appendix we present the regular and irregular Coulomb functions with half-integer, either positive or negative, angular momentum quantum numbers. The necessity for this particular type of solutions arises from the fact that they constitute the boundary conditions for the R-matrix eigenchannel calculations in the ’down field’ $\eta$ degree of freedom at sufficiently small distances, where essentially the field term can be neglected. This corresponds to the field-free case in which the orbital angular momentum does not take non-negative integer values. The Schrödinger equation in the field-free case for the $\eta$ parabolic coordinate has the following form $$\frac{d^2}{d\eta^2}f_{\beta m}^{ \epsilon }(\eta)+\bigg(\frac{\epsilon}{2}+\frac{1-m^2}{4\eta^2}+\frac{1-\beta}{\eta} \bigg)f_{\beta m}^{ \epsilon} (\eta)=0, \label{A1}$$ where the energy $\epsilon$ is considered to be negative.
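The paper solves one-dimensional equations of this type within an eigenchannel R-matrix framework; as a hedged illustration of the same class of problem, an equation of the form $f''+Q(\eta)f=0$ can also be integrated outward with the standard Numerov scheme. The function name, grid and step size below are illustrative choices, not the authors' implementation.

```python
def numerov(Q, x0, h, steps, f0, f1):
    """Integrate f'' + Q(x) f = 0 outward on the uniform grid x0 + n*h.

    Numerov recursion (local error O(h^6)):
      f_{n+1}(1 + h^2 Q_{n+1}/12) = 2 f_n (1 - 5 h^2 Q_n/12) - f_{n-1}(1 + h^2 Q_{n-1}/12)
    """
    f = [f0, f1]
    for n in range(1, steps):
        xm, xp = x0 + (n - 1)*h, x0 + (n + 1)*h
        x = x0 + n*h
        num = 2*f[n]*(1 - 5*h*h*Q(x)/12) - f[n-1]*(1 + h*h*Q(xm)/12)
        f.append(num / (1 + h*h*Q(xp)/12))
    return f
```

As a check one can use the field-free, rescaled form of the equation at $\lambda=0$, $f''+(\bar{\epsilon}+2/\zeta)f=0$, whose regular solution for $\bar{\epsilon}=-1$ is $\zeta e^{-\zeta}$ (direct substitution verifies this).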
Setting $\bar{\epsilon}=2 \epsilon/(1-\beta)^2$, $\zeta=\frac{1-\beta}{2} \eta$ and $\lambda=(m-1)/2$, Eq. (\[A1\]) can be transformed into the following differential equation: $$\frac{d^2}{d\zeta^2}f_{\lambda}^{ \bar{\epsilon} }(\zeta)+\bigg(\bar{\epsilon}-\frac{\lambda(\lambda+1)}{\zeta^2}+\frac{2}{\zeta} \bigg)f_{\lambda}^{ \bar{\epsilon}} (\zeta)=0, \label{A2}$$ which, when $2\lambda$ is not an integer, has two linearly independent energy-normalized solutions whose relative phase is $\pi/2$ at small distances and negative energies $$\begin{aligned} f_{\lambda}^{\bar{\epsilon}} (\zeta) &=& A(\bar{\nu}, \lambda)^{1/2}S_{\lambda}^{\bar{\epsilon}}(\zeta) \label{a3a} \\ g_{\lambda}^{\bar{\epsilon}} (\zeta) &=& A(\bar{\nu}, \lambda)^{1/2}S_{\lambda}^{\bar{\epsilon}}(\zeta) \cot((2 \lambda+1)\pi)\nonumber \\ &-&\frac{A(\bar{\nu}, \lambda)^{-1/2}S_{-\lambda-1}^{\bar{\epsilon}}(\zeta)}{\sin((2 \lambda+1)\pi)}, \label{a3} \end{aligned}$$ where $\bar{\nu}=1/\sqrt{-\bar{\epsilon}}$, $A(\bar{\nu}, \lambda)=\frac{\Gamma(\lambda+\bar{\nu}+1)}{\bar{\nu}^{2\lambda+1}\Gamma(\bar{\nu}-\lambda)}$ and the function $S_{\lambda}^{\bar{\epsilon}}(\zeta)$ is given by the following relation $$S_{\lambda}^{\bar{\epsilon}}(\zeta)=2^{\lambda+1/2}\zeta^{\lambda+1}e^{-\zeta/\bar{\nu}}~_1\bar{F}_1(\lambda-\bar{\nu}+1;2+2\lambda;2 \zeta/\bar{\nu}), \label{a4}$$ where the function $_1\bar{F}_1$ denotes the regularized confluent hypergeometric function. One basic property of this function is that it remains finite even when its second argument is a non-positive integer. We recall that the hypergeometric function $_1F_1(a;b;x)$ is not defined when $b=0,-1,-2,\ldots$. Moreover, we observe that when $\lambda$ acquires half-integer values, i.e. $\lambda=\lambda_c$, the numerator and denominator of $g_{\lambda}^{\bar{\epsilon}}$ in Eq. (\[a3\]) both vanish. Therefore, applying l’Hôpital’s rule to $g_{\lambda}^{\bar{\epsilon}}$ in Eq.
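The finiteness of the regularized function at non-positive integer second argument can be checked directly from its defining series $_1\bar{F}_1(a;b;z)=\sum_n (a)_n z^n/[\Gamma(b+n)\,n!]$, since $1/\Gamma$ vanishes at the poles of $\Gamma$. The truncated-series sketch below is for illustration only; the truncation length is an arbitrary choice.

```python
import math

def rgamma(x):
    # 1/Gamma(x), extended by 0 at the poles x = 0, -1, -2, ...
    if x <= 0 and x == int(x):
        return 0.0
    return 1.0 / math.gamma(x)

def hyp1f1_reg(a, b, z, terms=80):
    # Regularized Kummer series: sum_n (a)_n z^n / (Gamma(b+n) n!)
    s, poch = 0.0, 1.0          # poch accumulates the Pochhammer symbol (a)_n
    for n in range(terms):
        s += poch * z**n / math.factorial(n) * rgamma(b + n)
        poch *= a + n
    return s
```

For positive $b$ this reduces to $_1F_1(a;b;z)/\Gamma(b)$, e.g. $_1\bar{F}_1(1;2;1)=e-1$, while at $b=-1$ the series is finite term by term.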
(\[a3\]) we obtain the following expression: $$\begin{aligned} \bar{g}_{\lambda_c}^{\bar{\epsilon}} (\zeta)&=& \frac{1}{2 \pi}\frac{\partial f_{\lambda}^{\bar{\epsilon}} (\zeta)}{\partial \lambda}\Bigg|_{\lambda=\lambda_c}-\frac{1 }{2\pi \cos[(2\lambda_c+1)\pi]}\frac{\partial f_{-\lambda-1}^{\bar{\epsilon}} (\zeta)}{\partial \lambda}\Bigg|_{\lambda=\lambda_c}. \label{a5}\end{aligned}$$ Hence, Eqs. (\[a3a\]) and (\[a5\]) correspond to the regular and irregular Coulomb functions for half-integer angular momentum at negative energies, respectively. This particular set of solutions possesses a relative phase of $\pi/2$ at short distances, and these solutions are used as boundary conditions in the eigenchannel R-matrix calculations. A similar construction is possible with the help of Ref. [@olver2010nist] for positive energies, but it is straightforward and not presented here.
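Eq. (\[a5\]) involves derivatives of the solutions with respect to the parameter $\lambda$; when closed-form expressions are inconvenient, such parameter derivatives can be approximated by a central difference, $\partial_\lambda f \approx [f(\lambda_c+h)-f(\lambda_c-h)]/(2h)$. A generic sketch (the step size $h$ is an arbitrary illustrative choice, and the function name is ours):

```python
def d_dlam(f, lam, h=1e-5):
    # Central-difference derivative with respect to the parameter lam, O(h^2) accurate.
    return (f(lam + h) - f(lam - h)) / (2*h)
```

With a smooth test function such as sin, the approximation reproduces the analytic derivative to high accuracy.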
--- abstract: 'We demonstrate how the key notions of Tononi et al.’s Integrated Information Theory (IIT) can be studied within the simple graphical language of process theories, i.e. symmetric monoidal categories. This allows IIT to be generalised to a broad range of physical theories, including as a special case the Quantum IIT of Zanardi, Tomka and Venuti.' address: - Cambridge Quantum Computing - 'Munich Center for Mathematical Philosophy, University of Munich' author: - Sean Tull - Johannes Kleiner bibliography: - 'iit.bib' title: Integrated Information in Process Theories ---
--- abstract: 'In this paper, we study a class of ${\ensuremath{\mathbb{Z}}\xspace}_d$-graded modules, which are constructed using Larsson’s functor from $\sl_d$-modules $V$, for the Lie algebras of divergence zero vector fields on tori and quantum tori. We determine the irreducibility of these modules for finite-dimensional or infinite-dimensional $V$ using a unified method. In particular, these modules provide new irreducible weight modules with infinite-dimensional weight spaces for the corresponding algebras.' author: - 'Xuewen Liu, Xiangqian Guo and Zhen Wei' title: 'Irreducible modules over the divergence zero algebras and their $q$-analogues' --- [*Keywords:*]{} Divergence zero algebras, Witt algebras, quantum tori, irreducible modules. Introduction ============ Representation theory for infinite-dimensional Lie algebras has been attracting extensive attention from many mathematicians and physicists. These algebras include the Kac-Moody algebras, the (generalized) Virasoro algebras, the Witt algebras, the Cartan type Lie algebras, the Lie algebras of Block type and so on. For any positive integer $d$, the Witt algebra ${\ensuremath{\mathcal{W}}\xspace}_d$ is the derivation algebra of the Laurent polynomial algebra $A_d={\ensuremath{\mathbb C}\xspace}[t_1^{\pm1},\ldots,t_d^{\pm 1}]$. This algebra is also known as the Lie algebra of the group of diffeomorphisms of the $d$-dimensional torus. The representation theory of Witt algebras was studied extensively by many authors, see [@B; @BF; @BMZ; @E1; @E2; @GZ; @LZ; @T] and references therein. In 1986, Shen [@Sh] defined a class of modules $F_b^{\ensuremath{\alpha}}(V)$ over the Witt algebra ${\ensuremath{\mathcal{W}}\xspace}_d$ for any weight module $V$ over the special linear Lie algebra $\sl_d$, where ${\ensuremath{\alpha}}$ is any $d$-dimensional complex vector and $b$ is a complex number.
These modules, as well as the functors $F^{\ensuremath{\alpha}}_b$, now known as the Larsson functors, were also studied by Larsson in [@L1; @L2; @L3]. In 1996, Eswara Rao [@E1] determined the irreducibility of these modules for finite-dimensional $V$, and recently G. Liu and K. Zhao studied the case when $V$ is infinite-dimensional (see [@LZ]). Geometrically, ${\ensuremath{\mathcal{W}}\xspace}_d$ may be interpreted as the Lie algebra of (complex-valued) polynomial vector fields on a $d$-dimensional torus. It has a natural subalgebra consisting of vector fields of divergence zero, which we denote by $\hLL_d$, and another subalgebra $\LL_d$, which is obtained from $\hLL_d$ modulo the Cartan subalgebra of $\hLL_d$. They are also known as the Cartan type S Lie algebras. The modules $F^{\ensuremath{\alpha}}_b(V)$ admit natural module structures for the algebras $\hLL_d$ and $\LL_d$, and the $\hLL_d$- or $\LL_d$-module structures on $F^{\ensuremath{\alpha}}_b(V)$ do not depend on the parameter $b$, so we will denote them by $F^{\ensuremath{\alpha}}(V)$. Recently, Talboom [@T] determined the irreducibility of the $\hLL_d$-module $F^{\ensuremath{\alpha}}(V)$ for finite-dimensional $V$, and Billig and Talboom [@BT] investigated the category $\mathcal{J}$ of jet modules for $\hLL_d$. Let $q=(q_{ij})_{i,j=1}^d$ be a $d\times d$ matrix over ${\ensuremath{\mathbb C}\xspace}$, where $q_{ij}=q_{ji}^{-1}$ are roots of unity. We have the $d$-dimensional rational quantum torus ${\ensuremath{\mathbb C}\xspace}_q={\ensuremath{\mathbb C}\xspace}_q[t_1^{\pm1},\cdots, t_d^{\pm1}]$ (see [@N]), which has been used to characterize the extended affine Lie algebras in [@AABGP]. The derivation Lie algebra $\Der({\ensuremath{\mathbb C}\xspace}_q)$ is a $q$-analogue of the Witt algebra ${\ensuremath{\mathcal{W}}\xspace}_d$. The representation theory of $\Der({\ensuremath{\mathbb C}\xspace}_q)$ has been studied by many mathematicians ([@LT1; @LZ1.5; @MZ; @Z]).
The Lie algebra $\Der({\ensuremath{\mathbb C}\xspace}_q)$ has a natural subalgebra $\hLL_d(q)$, called the skew derivation Lie algebra of ${\ensuremath{\mathbb C}\xspace}_q$. Removing the Cartan subalgebra from $\hLL_d(q)$, we obtain another natural subalgebra $\LL_d(q)$. The algebras $\hLL_d(q)$ and $\LL_d(q)$ are the $q$-analogues of the algebras $\hLL_d$ and $\LL_d$ introduced in the previous paragraph. Similarly, for any $\sl_d$-module $V$, we can construct the Shen-Larsson module for $\hLL_d(q)$ and $\LL_d(q)$, which we denote by $F_q^{\ensuremath{\alpha}}(V)$. The structures of the modules $F_q^{\ensuremath{\alpha}}(V)$ were studied in [@LT2] for the case $d=2$ with $V$ a finite-dimensional $\sl_2$-module. In the present paper, we will determine the irreducibility of $F^{\ensuremath{\alpha}}(V)$ as modules over the divergence zero algebra $\hLL_d$ as well as $\LL_d$ for an arbitrary $\sl_d$-module $V$ in a unified way. Then we generalize our results to the $q$-analogue algebras $\hLL_d(q)$ and $\LL_d(q)$. The paper is organized as follows. In Section 2, we first introduce the notation for the Witt algebra ${\ensuremath{\mathcal{W}}\xspace}_d$ and its subalgebras $\hLL_d$ and $\LL_d$, and then we construct the modules $F^{{\ensuremath{\alpha}}}(V)$ from irreducible special linear Lie algebra modules $V$. In Section 3, we will determine the irreducibility of $F^{\ensuremath{\alpha}}(V)$ as modules over $\hLL_d$ or $\LL_d$ for irreducible $\sl_d$-modules $V$. In fact, we can give a description of all their submodules. We will handle the problems for both finite-dimensional and infinite-dimensional $V$ using a unified method. When $V$ is finite-dimensional, our results for $\hLL_d$ recover the main result of Talboom [@T] and our results for $\LL_d$ are new. When $V$ is infinite-dimensional, our results provide new simple weight modules with infinite-dimensional weight spaces for both $\hLL_d$ and $\LL_d$.
In the last section, we will generalize our results to the $q$-analogue algebras $\hLL_d(q)$ and $\LL_d(q)$, which are natural subalgebras of the derivation Lie algebra of the quantum torus, where $q$ is a root of unity. In particular, we get new simple weight modules with infinite-dimensional weight spaces for these algebras. Our results for $\hLL_d$ and $\LL_d$ generalize the similar result of Liu and Zhao for the Witt algebra ${\ensuremath{\mathcal{W}}\xspace}_d$ in [@LZ] and the result of Talboom for the algebra $\hLL_d$ for finite-dimensional $V$ ([@T]). Our results for the algebras $\hLL_d(q)$ and $\LL_d(q)$ are new except for the case when $d=2$ and $V$ is finite-dimensional. The main difficulty is that, unlike the algebras ${\ensuremath{\mathcal{W}}\xspace}_d$ and $\hLL_d$, the algebras $\LL_d$ and $\LL_d(q)$ do not admit Cartan subalgebras, and hence a submodule of a weight module need not automatically be a weight module. Preliminaries ============= We denote by ${\ensuremath{\mathbb C}\xspace},{\ensuremath{\mathbb{Z}}\xspace},{\ensuremath{\mathbb{Z}}\xspace}_+,{\ensuremath{\mathbb{N}}\xspace}$ the sets of all complex numbers, all integers, all non-negative integers and all positive integers, respectively. All vector spaces and algebras are over ${\ensuremath{\mathbb C}\xspace}$. Fix a positive integer $d{\geqslant}2$. For any ${{\bf n}}=(n_1,\cdots,n_d)^T\in{\ensuremath{\mathbb{Z}}\xspace}_{+}^d$ and ${{\bf a}}=(a_1,\cdots,a_d)^T\in{\ensuremath{\mathbb C}\xspace}^d$, we denote ${{\bf a}}^{{\bf n}}=a_1^{n_1}a_2^{n_2}\cdots a_d^{n_d}$, where $T$ means taking the transpose of the matrix. Let $\mathfrak{gl}_d$ be the Lie algebra of all $d\times d$ complex matrices, and $\mathfrak{sl}_d$ be the subalgebra of $\mathfrak{gl}_d$ consisting of all traceless matrices.
Let $A_d=\c[t_1^{\pm1},t_2^{\pm1},\cdots,t_d^{\pm 1}]$ be the algebra of Laurent polynomials over ${\ensuremath{\mathbb C}\xspace}$ and denote by ${\ensuremath{\mathcal{W}}\xspace}_d$ the Lie algebra of all derivations of $A_d$, called the Witt algebra. Set $\partial_i=t_i\frac{\partial}{\partial t_i}$ for $i=1,2,\cdots,d$, and ${{\bf t}}^{{\bf n}}=t_1^{n_1}t_2^{n_2}\cdots t_d^{n_d}$ for ${{\bf n}}=(n_1,n_2,\cdots,n_d)^T\in{\ensuremath{\mathbb{Z}}\xspace}^d$. Let $(\cdot|\cdot)$ be the standard symmetric bilinear form on ${\ensuremath{\mathbb C}\xspace}^d$, that is, $({{\bf u}}\ |\ {{\bf v}})={{{\bf u}}}^T{{{\bf v}}}\in{\ensuremath{\mathbb C}\xspace}$. Homogeneous elements of ${\ensuremath{\mathcal{W}}\xspace}_d$ with respect to the power of $t$ will be denoted $D({{\bf u}}, {{\bf r}})={{\bf t}}^{{{\bf r}}}\sum_{i=1}^du_i\partial _i$ for ${{\bf u}}=(u_1,\cdots,u_d)^T\in{\ensuremath{\mathbb C}\xspace}^d, {{\bf r}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$. Then ${\ensuremath{\mathcal{W}}\xspace}_d$ is spanned by all $D({{\bf u}}, {{\bf r}})$ with ${{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d$ and ${{\bf r}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$. The Lie bracket of ${\ensuremath{\mathcal{W}}\xspace}_d$ is given by $$[D({{\bf u}}, {{\bf r}}), D({{\bf v}}, {{\bf s}})]=D({{\bf w}}, {{\bf r}}+{{\bf s}}), {{\bf u}}, {{\bf v}}\in{\ensuremath{\mathbb C}\xspace}^d, {{\bf r}}, {{\bf s}}\in{\ensuremath{\mathbb{Z}}\xspace}^d,$$ where ${{\bf w}}=({{\bf u}}\ |\ {{\bf s}}){{\bf v}}-({{\bf v}}\ |\ {{\bf r}}){{\bf u}}$. Geometrically, ${\ensuremath{\mathcal{W}}\xspace}_d$ may be interpreted as the Lie algebra of (complex-valued) polynomial vector fields on a $d$-dimensional torus. Then we can deduce a subalgebra of ${\ensuremath{\mathcal{W}}\xspace}_d$, the Lie algebra of divergence zero vector fields, denoted by $\hLL_d$. 
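The bracket formula above is easy to sanity-check numerically: storing $D({{\bf u}},{{\bf r}})$ as a pair of vectors, antisymmetry and the Jacobi identity can be verified on concrete integer data. This is a toy check for illustration, not part of the paper's proofs; note that the three cyclic terms of the Jacobi identity all carry the same multi-index ${{\bf r}}+{{\bf s}}+{{\bf t}}$, so their coefficient vectors may simply be added.

```python
def bracket(x, y):
    # [D(u,r), D(v,s)] = D((u|s) v - (v|r) u, r+s)
    (u, r), (v, s) = x, y
    dot = lambda a, b: sum(p*q for p, q in zip(a, b))
    w = tuple(dot(u, s)*vi - dot(v, r)*ui for ui, vi in zip(u, v))
    return (w, tuple(a + b for a, b in zip(r, s)))
```

With exact integer arithmetic the identities hold on the nose, e.g. for three randomly chosen elements with $d=2$.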
It is spanned by $D({{\bf u}}, {{\bf r}})$ which satisfy $({{\bf u}}\ |\ {{\bf r}})=0$, that is, $$\hLL_d={\operatorname{span}}_{\ensuremath{\mathbb C}\xspace}\{{\partial}_i,\ {{\bf t}}^{{\bf r}}(r_j{\partial}_i - r_i{\partial}_j)\ |\ i, j=1,\cdots,d, {{\bf r}}\in{\ensuremath{\mathbb{Z}}\xspace}^d\}.$$ Note that $\hLL_d$ has the Cartan subalgebra $$\HH={\operatorname{span}}_{{\ensuremath{\mathbb C}\xspace}}\{{\partial}_j\ |\ j=1,\cdots,d\}=\{D({{\bf u}},0)\ |\ {{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d\}.$$ The algebra $\hLL_d$ has a natural subalgebra $$\LL_d={\operatorname{span}}_{\ensuremath{\mathbb C}\xspace}\{{{\bf t}}^{{\bf r}}(r_j {\partial}_i - r_i{\partial}_j)\ |\ i, j=1,\cdots,d, {{\bf r}}\in{\ensuremath{\mathbb{Z}}\xspace}^d\}.$$ We see that the algebra $\LL_d$ does not admit a Cartan subalgebra. When $d=2$, the algebra $\LL_d$ is just the centerless Virasoro-like algebra. Now we define some modules for these algebras. For any $\alpha\in{\ensuremath{\mathbb C}\xspace}^d,b\in{\ensuremath{\mathbb C}\xspace}$, and $\mathfrak{gl}_d$-module $V$ on which the identity matrix acts as the scalar $b$, let $F_b^{{\ensuremath{\alpha}}}(V)=V\otimes A_d$. Then $F_b^{{\ensuremath{\alpha}}}(V)$ becomes a ${\ensuremath{\mathcal{W}}\xspace}_d$-module with the following action $$D({{\bf u}}, {{\bf r}})(v{\otimes}{{\bf t}}^{{{\bf n}}})=(({{\bf u}}\ |\ {{\bf n}}+{\ensuremath{\alpha}})v+({{\bf r}}{{\bf u}}^T)v){\otimes}{{\bf t}}^{{{\bf n}}+{{\bf r}}},$$where ${{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d, v\in V, {{\bf n}}, {{\bf r}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$. When $V$ is a finite-dimensional $\sl_d$-module, $F_b^{{\ensuremath{\alpha}}}(V)$ has all weight spaces finite-dimensional, and the irreducibility of these modules is determined by [@E1] (see also [@GZ]).
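The action just defined can be checked against the module axiom $[x,y]\cdot w = x\cdot(y\cdot w)-y\cdot(x\cdot w)$ on concrete data. Below is a small sketch with $V={\ensuremath{\mathbb C}\xspace}^d$ the natural $\mathfrak{gl}_d$-module (so the identity acts as $b=1$), using integer inputs so the check is exact; the function names are illustrative.

```python
def act(u, r, alpha, v, n):
    # D(u,r)(v ⊗ t^n) = ((u | n+alpha) v + (r u^T) v) ⊗ t^{n+r},  V = C^d natural module
    scal = sum(ui*(ni + ai) for ui, ni, ai in zip(u, n, alpha))
    uv = sum(ui*vi for ui, vi in zip(u, v))          # u^T v
    w = tuple(scal*vi + ri*uv for vi, ri in zip(v, r))
    return w, tuple(a + b for a, b in zip(n, r))

def commutator_action(u, r, v, s, alpha, vec, n):
    # x·(y·w) - y·(x·w) for x = D(u,r), y = D(v,s); both orders land in V ⊗ t^{n+r+s}
    w1, n1 = act(v, s, alpha, vec, n)
    a1, _ = act(u, r, alpha, w1, n1)
    w2, n2 = act(u, r, alpha, vec, n)
    a2, _ = act(v, s, alpha, w2, n2)
    return tuple(p - q for p, q in zip(a1, a2))
```

Comparing with the action of $D(({{\bf u}}\,|\,{{\bf s}}){{\bf v}}-({{\bf v}}\,|\,{{\bf r}}){{\bf u}},\,{{\bf r}}+{{\bf s}})$ confirms the module property on sample data.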
It was conjectured that all irreducible ${\ensuremath{\mathcal{W}}\xspace}_d$-modules with finite-dimensional weight spaces are precisely the generalized highest weight modules and the irreducible sub-quotient modules of $F^{\ensuremath{\alpha}}(V)$ for finite-dimensional irreducible $\sl_d$-modules $V$. This conjecture was recently proved by [@BF]. When $V$ is infinite-dimensional, it was shown that $F_b^{{\ensuremath{\alpha}}}(V)$ is always irreducible by [@LZ]. Since $\widehat{\LL}_d$ and $\LL_d$ are subalgebras of ${\ensuremath{\mathcal{W}}\xspace}_d$, by restriction of the module action, $F_b^{{\ensuremath{\alpha}}}(V)$ can be viewed as a module for the algebras $\widehat{\LL}_d$ and $\LL_d$. This module does not depend on $b$, so we denote the resulting $\widehat{\LL}_d$-module or $\LL_d$-module by $F^{\ensuremath{\alpha}}(V)$. In ([@T]), Talboom determined the irreducibility of $F^{\ensuremath{\alpha}}(V)$ as a module over $\hLL_d$ when $V$ is finite-dimensional. In this paper, we will determine the irreducibility of $F^{\ensuremath{\alpha}}(V)$ as a module over the algebra $\hLL_d$ as well as the algebra $\LL_d$, when $V$ is an arbitrary simple $\sl_d$-module. The main difficulty is that the algebra $\LL_d$ does not admit a Cartan subalgebra. Modules over the Divergence Zero Algebra ======================================== We keep the notation of the last section. In particular, let $V$ be an irreducible $\sl_d$-module and ${\ensuremath{\alpha}}\in{\ensuremath{\mathbb C}\xspace}^d$, let $\LL_d$ be the divergence zero algebra, and recall that we have constructed the $\LL_d$-module $F^{\ensuremath{\alpha}}(V)$. We now recall some known results on finite-dimensional modules over $\sl_d$. Fix a standard basis of $\sl_d$: $\{E_{i,j}, E_{k,k}-E_{k+1,k+1}\ |\ i,j=1,\cdots, d, k=1,\cdots, d-1, i\neq j\},$ where $E_{i,j}$ is the matrix with $1$ as the $(i,j)$-entry and $0$ otherwise.
Let $\{\mu_1,\cdots,\mu_{d-1}\}$ be the coordinate functions on the $d\times d$ diagonal matrices, i.e., $\mu_i(E_{j,j})=\delta_{i,j}$. Then ${\ensuremath{\mathfrak{h}}}=\span\{E_{k,k}-E_{k+1,k+1}, k=1,2,\cdots, d-1\}$ is just the Cartan subalgebra, the $\mu_i-\mu_{i+1}$ form a base for the standard root system of $\sl_d$, and $\omega_{k}=\mu_1+\cdots+\mu_{k}, k=1,\cdots,d-1$ are the fundamental dominant weights. Then any dominant weight is a ${\ensuremath{\mathbb{Z}}\xspace}_+$-linear combination of these $\omega_k$, and any finite-dimensional irreducible $\sl_d$-module is a highest weight module $V({\ensuremath{\lambda}})$ with a dominant highest weight ${\ensuremath{\lambda}}$. For convenience, we also denote $\omega_{d}=\mu_1+\cdots+\mu_{d}=0$. The following technical lemma on $\sl_d$-modules is taken from [@LZ]. \[sl\_d-mod\] Let $V$ be an irreducible $\sl_d$-module (not necessarily a weight module). - For any $i, j$ with $1{\leqslant}i\neq j{\leqslant}d$, $E_{ij}$ acts injectively or locally nilpotently on $V$. - The module $V$ is finite-dimensional if and only if $E_{ij}$ acts locally nilpotently on $V$ for any $i, j$ with $1{\leqslant}i\neq j{\leqslant}d$. For convenience, we denote $d_{{{\bf r}},i}={{\bf t}}^{{{\bf r}}}(r_{i+1}\partial_i-r_i\partial_{i+1})$ for ${{\bf r}}=(r_1,\cdots,r_{d})^T\in{\ensuremath{\mathbb{Z}}\xspace}^d$ and $i=1,\cdots,d-1$. Let $V$ be an irreducible $\sl_d$-module and $F^{\ensuremath{\alpha}}(V)$ the corresponding $\LL_d$-module.
The action of $\LL_d$ on $F^{\ensuremath{\alpha}}(V)$ can be rewritten as $$d_{{{\bf r}},i}v{\otimes}{{\bf t}}^{{{\bf n}}}=\big((r_{i+1}{\epsilon}_i-r_i{\epsilon}_{i+1} | {{\bf n}}+{\ensuremath{\alpha}})+{{\bf r}}(r_{i+1}{\epsilon}_i-r_i{\epsilon}_{i+1})^T\big)v{\otimes}{{\bf t}}^{{{\bf n}}+{{\bf r}}},\ i=1,\cdots,d-1,$$ where $v\in V, {{\bf n}}, {{\bf r}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$, and ${\epsilon}_i\in{\ensuremath{\mathbb{Z}}\xspace}^d$ is the standard basis vector with $1$ as the $i$-th entry and $0$ otherwise. Then we have

\[element\_d\] Let $V$ be an irreducible $\sl_d$-module such that $V\not\cong V(\omega_k)$ for any $k=1,\cdots,d$, and let $N$ be a nonzero $\LL_d$-submodule of $F^{\ensuremath{\alpha}}(V)$. Then there exists a nonzero $v\in V$ such that $v{\otimes}{{\bf t}}^{{\bf n}}\in N$ for all ${{\bf n}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$.

Choose a nonzero vector $\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf r}}}\in N$, where $I\subseteq {\ensuremath{\mathbb{Z}}\xspace}^d$ is a finite subset.
For any ${{\bf m}}=(m_1,\cdots,m_d)^T, {{\bf n}}=(n_1,\cdots,n_d)^T\in{\ensuremath{\mathbb{Z}}\xspace}^d$ and $i,j\in \{1,2,\cdots,d-1\}$, we have $$\label{d:m-n,n}\begin{split} & d_{{{\bf m}}-{{\bf n}},i}d_{{{\bf n}},j}(\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf r}}})\\ = & \big(({{\bf u}}_1 | {\ensuremath{\alpha}}+{{\bf n}}+{{\bf r}})+({{\bf m}}-{{\bf n}}){{\bf u}}_1^T\big)\big(({{\bf u}}_2\ |\ {\ensuremath{\alpha}}+{{\bf r}})+{{\bf n}}{{\bf u}}_2^T\big)({\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}}})\\ = & \Big(({{\bf u}}_1 | {\ensuremath{\alpha}}+{{\bf n}}+{{\bf r}})+\sum_k(m_k-n_k)(m_{i+1}-n_{i+1})E_{k, i} -\sum_k(m_k-n_k)(m_{i}-n_{i})E_{k, i+1}\Big)\\ & \hskip5pt \cdot\Big(({{\bf u}}_2 | {\ensuremath{\alpha}}+{{\bf r}})+\sum_ln_ln_{j+1}E_{l,j}-\sum_ln_ln_{j}E_{l,j+1}\Big)\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}}\in N,\\ \end{split}$$ where ${{\bf u}}_1=(m_{i+1}-n_{i+1}){\epsilon}_{i}-(m_{i}-n_{i}){\epsilon}_{i+1}, {{\bf u}}_2=n_{j+1}{\epsilon}_{j}-n_{j}{\epsilon}_{j+1}\in{\ensuremath{\mathbb C}\xspace}^d$. Consider the right-hand side of \eqref{d:m-n,n} as a polynomial in ${{\bf n}}$; since its values lie in $N$, the Vandermonde determinant shows that its coefficients also lie in $N$, and we can deduce that $$\label{polynomial}\begin{split} & \sum_{{{\bf r}}\in I}\sum_{k,l=1}^d\Big(n_kn_{i+1}E_{k, i} -n_kn_{i}E_{k, i+1}\Big)\Big(n_ln_{j+1}E_{l,j}-n_ln_{j}E_{l,j+1}\Big)v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}}\\ = & \sum_{{{\bf r}}\in I}\sum_{k,l=1}^d\Big(n_kn_{i+1}n_ln_{j+1}E_{k,i}E_{l,j}+n_kn_{i}n_ln_{j}E_{k,i+1}E_{l,j+1}\\ & \hskip10pt -n_kn_{i}n_ln_{j+1}E_{k,i+1}E_{l,j}-n_kn_{i+1}n_ln_{j}E_{k,i}E_{l,j+1}\Big)v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}}\in N,\ \forall\ {{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d.\\ \end{split}$$

**Step 1.** If $V$ is infinite-dimensional, then by Lemma \[sl\_d-mod\], the action of $E_{st}$ on $V$ is injective for some $1{\leqslant}s\neq t{\leqslant}d$.
Without loss of generality, we may assume that $s>t$. Taking $i=j=t$ in \eqref{polynomial} and considering the coefficient of $n_s^2n_{t+1}^2$ gives $\sum_{{{\bf r}}\in I}E_{st}E_{st}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}}\in N$ for all ${{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$, thanks to the Vandermonde determinant again. By replacing $v_{{{\bf r}}}$ with $E_{st}E_{st}v_{{{\bf r}}}$ we may assume that $$\label{d step-1} \sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}}\in N,\ \forall\ {{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d.$$ Now we suppose that $V$ is finite-dimensional and hence a highest weight module $V({\ensuremath{\lambda}})$, with ${\ensuremath{\lambda}}$ being a dominant weight. Take any ${{\bf r}}_0\in I$ such that $v_{{{\bf r}}_0}=v_1+\cdots +v_l\neq 0$, where the $v_i$ are weight vectors of distinct weights and $v_1$ has the maximal weight. If the weight of $v_1$ is not ${\ensuremath{\lambda}}$, then there exists $1{\leqslant}j{\leqslant}d-1$ such that $E_{j,j+1}v_1\neq 0$ has larger weight. Now we compute $$\label{d:n}\begin{split} & d_{{{\bf n}},j}\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf r}}}\\ = & \big((n_{j+1}{\epsilon}_j-n_j\epsilon_{j+1} | {\ensuremath{\alpha}}+{{\bf r}})+{{\bf n}}(n_{j+1}{\epsilon}_j-n_j\epsilon_{j+1})^T\big) {\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf n}}+{{\bf r}}}}\\ = & \sum_{{{\bf r}}\in I}\Big((n_{j+1}{\epsilon}_j-n_j\epsilon_{j+1} | {\ensuremath{\alpha}}+{{\bf r}})+ \sum_{k=1}^d(n_kn_{j+1}E_{k,j}-n_kn_jE_{k,j+1})\Big)v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf n}}+{{\bf r}}}\in N. \end{split}$$ Taking suitable ${{\bf n}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$ (for example, making $n_j\gg n_i$ for $i\neq j$), we see that $-n_j^2E_{j,j+1}v_1$ is a nonzero weight component of $d_{{{\bf n}},j}v_{{{\bf r}}_0}$ with maximal weight larger than that of $v_1$.
Replacing $\sum_{{{\bf r}}\in I}v_{{\bf r}}{\otimes}{{\bf t}}^{{{\bf r}}}$ with $d_{{{\bf n}},j}\sum_{{{\bf r}}\in I}v_{{\bf r}}{\otimes}{{\bf t}}^{{{\bf r}}}$, and repeating the above process several times, we may assume that the weight of $v_1$ is ${\ensuremath{\lambda}}$. Suppose that ${\ensuremath{\lambda}}=\sum_{k=1}^{d-1}a_k\omega_k$ for some $a_k\in{\ensuremath{\mathbb{Z}}\xspace}_+$. Since ${\ensuremath{\lambda}}\neq \omega_k$ for any $k=1,\cdots,d$, there exist $1{\leqslant}k_1{\leqslant}k_2{\leqslant}d-1$ such that $a_{k_1}+a_{k_2}{\geqslant}2$. Let $\mathfrak{s}$ be the $3$-dimensional simple Lie algebra spanned by $E_{k_1,k_2+1}, E_{k_2+1,k_1}$ and $E_{k_1,k_1}-E_{k_2+1,k_2+1}$. Note that ${\ensuremath{\lambda}}(E_{k_1,k_1}-E_{k_2+1,k_2+1})=a_{k_1}+a_{k_1+1}+\cdots+a_{k_2}$, hence the $\mathfrak{s}$-module generated by $v_1$ has highest weight $a_{k_1}+a_{k_1+1}+\cdots+a_{k_2}{\geqslant}2$. In particular, $E_{k_2+1,k_1}^2v_1\neq 0$ and $E_{k_2+1,k_1}^2v_{{{\bf r}}_0}\neq 0$. As in the infinite-dimensional case, taking $i=j=k_1$ in \eqref{polynomial} and considering the coefficient of $n_{k_2+1}^2n_{k_1+1}^2$, one can deduce $0\neq \sum_{{{\bf r}}\in I}E_{k_2+1,k_1}^2v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}}\in N$ for all ${{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$. By replacing $v_{{{\bf r}}}$ with $E_{k_2+1,k_1}^2v_{{{\bf r}}}$ we may assume that \eqref{d step-1} holds.

**Step 2.** Now we assume that \eqref{d step-1} holds.
In particular, for any ${{\bf n}},{{\bf k}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$, we have $$\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf n}}+{{\bf r}}}\in N\ \ \text{and}\ \ \sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf n}}+{{\bf k}}+{{\bf r}}}\in N.$$ Applying $d_{{{\bf m}},i}$ and $d_{{{\bf m}}-{{\bf k}},i}$ to the above elements respectively, we get $$\begin{split} d_{{{\bf m}},i}\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf n}}+{{\bf r}}} =\sum_{{{\bf r}}\in I} \big(({{\bf u}}_3 | {\ensuremath{\alpha}}+{{\bf n}}+{{\bf r}})+{{\bf m}}{{\bf u}}_3^T\big)v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf n}}+{{\bf r}}}\in N\\ \end{split}$$ and $$\begin{split} d_{{{\bf m}}-{{\bf k}},i}\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf n}}+{{\bf k}}+{{\bf r}}}=\sum_{{{\bf r}}\in I} \big(({{\bf u}}_4 | {\ensuremath{\alpha}}+{{\bf n}}+{{\bf k}}+{{\bf r}}) +({{\bf m}}-{{\bf k}}){{\bf u}}_4^{T}\big)v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf n}}+{{\bf r}}}\in N, \end{split}$$ where ${{\bf u}}_3=m_{i+1}{\epsilon}_{i}-m_{i}{\epsilon}_{i+1}, {{\bf u}}_4=(m_{i+1}-k_{i+1}){\epsilon}_{i}-(m_{i}-k_{i}){\epsilon}_{i+1}$. Taking ${{\bf m}}=2{{\bf k}}$ and subtracting a multiple of $ \sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf n}}+{{\bf r}}}$, we get $$\begin{split} \sum_{{{\bf r}}\in I}\big(({{\bf u}}_3 | {{\bf r}})+{{\bf m}}{{\bf u}}_3^T\big)v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf n}}+{{\bf r}}}\in N\\ \end{split}$$ and $$\begin{split} \frac{1}{4}\sum_{{{\bf r}}\in I}\big(2({{\bf u}}_3 | {{\bf r}})+{{\bf m}}{{\bf u}}_3^{T}\big)v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf n}}+{{\bf r}}}\in N. \end{split}$$ These two formulas imply that $$\sum_{{{\bf r}}\in I}(k_{i+1}{\epsilon}_{i}-k_{i}{\epsilon}_{i+1} | {{\bf r}})v_{{{\bf r}}}\otimes {{\bf t}}^{2{{\bf k}}+{{\bf n}}+{{\bf r}}}\in N,\ \forall\ {{\bf k}}\neq0, i=1,\cdots, d-1.$$ Combining these formulas with \eqref{d step-1}, we can cancel some terms involving $v_{{{\bf r}}}, {{\bf r}}\in I,$ to make $I$ smaller.
Finally, we deduce that $v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf n}}}\in N$ for some ${{\bf n}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$. Repeating Step 1 with this element, we can obtain some $v\in V $ such that $ v\otimes {{\bf t}}^{{{\bf n}}}\in N$ for all ${{\bf n}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$.

\[d:irre\] Let $V$ be an irreducible $\sl_d$-module and $W$ an $\LL_d$-submodule of $F^{\ensuremath{\alpha}}(V)$. If there exists $v\in V$ such that $v{\otimes}{{\bf t}}^{{\bf n}}\in W$ for all ${{\bf n}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$, then $W=F^{\ensuremath{\alpha}}(V)$.

The proof is standard by showing that the subspace $\{v\ |\ v{\otimes}{{\bf t}}^{{\bf n}}\in W,\ \forall\ {{\bf n}}\in{\ensuremath{\mathbb{Z}}\xspace}^d\}$ is an $\sl_d$-submodule of $V$. We omit the details.

\[d:infinite\] If $V$ is an irreducible $\sl_d$-module which is not isomorphic to any $V(\o_k)$ for $k=1,2,\cdots, d$, then $F^{\ensuremath{\alpha}}(V)$ is an irreducible $\LL_d$-module.

Since $\LL_d$ is a subalgebra of $\hLL_d$, we can easily deduce the following corollary.

\[hatLL\] If $V$ is an irreducible $\sl_d$-module which is not isomorphic to any $V(\o_k)$ for $k=1,2,\cdots, d$, then $F^{\ensuremath{\alpha}}(V)$ is an irreducible $\hLL_d$-module.

Now we determine the $\LL_d$-submodule structure of $F^{\ensuremath{\alpha}}(V(\o_k))$ for all $k=1,\cdots,d$.
First, the structure of $F^{\ensuremath{\alpha}}(V(\o_d))$ is clear: since $V(\o_d)=V(0)$ is the $1$-dimensional $\sl_d$-module, we may identify $F^{\ensuremath{\alpha}}(V)={\ensuremath{\mathbb C}\xspace}[t_1^{\pm1}, \cdots,t_d^{\pm1}]$ with module action $D({{\bf u}},{{\bf r}}){{\bf t}}^{{{\bf n}}}=({{\bf u}}\ |\ {{\bf n}}+{\ensuremath{\alpha}}){{\bf t}}^{{{\bf n}}+{{\bf r}}}$, which is irreducible when ${\ensuremath{\alpha}}\notin{\ensuremath{\mathbb{Z}}\xspace}^d$ and is the direct sum of the two irreducible submodules ${\ensuremath{\mathbb C}\xspace}{{\bf t}}^{-{\ensuremath{\alpha}}}$ and $\sum_{{\ensuremath{\alpha}}+{{\bf n}}\neq 0}{\ensuremath{\mathbb C}\xspace}{{\bf t}}^{{\bf n}}$ if ${\ensuremath{\alpha}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$. Let $V_1=V(\o_1)$, the highest weight $\sl_d$-module with highest weight $\o_1$. Then we can take $V_1={\ensuremath{\mathbb C}\xspace}^d$ as a vector space such that $\sl_d$ acts on $V_1$ via matrix multiplication. Let $\bigwedge^{k}(V_1)$ be the submodule of the $k$-th tensor power of $V_1$ consisting of skew-symmetric elements. Then $\bigwedge^{k}(V_1)$ is just the highest weight $\sl_d$-module with highest weight $\o_k$ for any $k=1,2,\cdots,d$. Now let $V=\bigwedge^k{\ensuremath{\mathbb C}\xspace}^d$ for some $k=1,2,\cdots,d$ and we consider the $\LL_d$-submodule structure of $F^{\ensuremath{\alpha}}(V)$. It is easy to see that for any ${\ensuremath{\alpha}}\in{\ensuremath{\mathbb C}\xspace}^d$, the module $F^{{\ensuremath{\alpha}}}(V)$ has a natural submodule $$W=\span\{(v_1\wedge\cdots\wedge v_{k-1}\wedge({\ensuremath{\alpha}}+{{\bf n}}))\otimes {{\bf t}}^{{{\bf n}}}\ |\ v_1,\cdots,v_{k-1}\in {\ensuremath{\mathbb C}\xspace}^d,\ {{\bf n}}\in{\ensuremath{\mathbb{Z}}\xspace}^d\}$$ and for ${\ensuremath{\alpha}}\in {\ensuremath{\mathbb{Z}}\xspace}^d$, the module $F^{{\ensuremath{\alpha}}}(V)$ has additional submodules of the form $$W'=W\oplus (V'\otimes {{\bf t}}^{-{\ensuremath{\alpha}}}),$$ where $V'$ is an arbitrary subspace of $V$.
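The stability of $W$ under the action can be sanity-checked numerically in the simplest case $k=1$, where $W$ is spanned by the vectors $({\ensuremath{\alpha}}+{{\bf n}})\otimes{{\bf t}}^{{\bf n}}$. The Python sketch below uses arbitrarily chosen sample data ($d=3$, ${\ensuremath{\alpha}}$, and divergence-zero pairs $({{\bf u}},{{\bf r}})$, none of which come from the paper) and verifies that $D({{\bf u}},{{\bf r}})$ sends $({\ensuremath{\alpha}}+{{\bf n}})\otimes{{\bf t}}^{{\bf n}}$ to a scalar multiple of $({\ensuremath{\alpha}}+{{\bf n}}+{{\bf r}})\otimes{{\bf t}}^{{{\bf n}}+{{\bf r}}}$:

```python
from fractions import Fraction as F
from itertools import product

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical sample data: d = 3, k = 1, V = C^d, alpha chosen arbitrarily.
alpha = [F(1, 2), F(1, 3), F(0)]

def act(u, r, v, n):
    # D(u, r)(v @ t^n) = ((u | n + alpha) v + (u | v) r) @ t^{n+r},
    # for divergence-zero pairs, i.e. (u | r) = 0.
    assert dot(u, r) == 0
    c = dot(u, [a + x for a, x in zip(alpha, n)])
    w = [c * vi + dot(u, v) * ri for vi, ri in zip(v, r)]
    return w, [ni + ri for ni, ri in zip(n, r)]

# Check: v = alpha + n is sent into C(alpha + n + r), so W is D(u, r)-stable.
for n in product(range(-2, 3), repeat=3):
    for u, r in [((2, -1, 0), (1, 2, -1)), ((0, 1, 1), (3, 1, -1))]:
        v = [a + x for a, x in zip(alpha, n)]
        w, n2 = act(u, r, v, n)
        target = [a + x for a, x in zip(alpha, n2)]
        assert w == [dot(u, v) * t for t in target]
print("W is stable under every sampled D(u, r)")
```

Indeed, since $({{\bf r}}{{\bf u}}^T)v=({{\bf u}}|v){{\bf r}}$ and here $v={\ensuremath{\alpha}}+{{\bf n}}$, the image is $({{\bf u}}|{\ensuremath{\alpha}}+{{\bf n}})\,({\ensuremath{\alpha}}+{{\bf n}}+{{\bf r}})\otimes{{\bf t}}^{{{\bf n}}+{{\bf r}}}$, which lies in $W$.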
\[orthg\] For any ${{\bf m}},{{\bf n}}\in{\ensuremath{\mathbb{Z}}\xspace}^d, {{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d$ with ${{\bf n}}\neq0$ and $({{\bf u}}|{{\bf n}})=0$, there exists ${{\bf u}}'\in{\ensuremath{\mathbb C}\xspace}^d$ such that $({{\bf u}}'|{{\bf m}})=0$ and $({{\bf u}}'-x{{\bf u}}|{{\bf m}}-x{{\bf n}})=0$ for any $x\in{\ensuremath{\mathbb C}\xspace}$.

Suppose that ${{\bf n}}=(n_1,\cdots,n_d)$ and ${{\bf m}}=(m_1,\cdots,m_d)$. Without loss of generality, we may assume $n_1\neq0$. Then we can write ${{\bf u}}=\sum_{i=2}^da_i(n_i{\epsilon}_1-n_{1}{\epsilon}_i)$ for some $a_i\in{\ensuremath{\mathbb C}\xspace}$. It is easy to check that ${{\bf u}}'=\sum_{i=2}^da_i(m_i{\epsilon}_1-m_1{\epsilon}_i)$ satisfies the requirements of the lemma.

\[d:omega\_k\] Let $V=V(\omega_k)$ for $k=1,2,\cdots, d-1$. Then

- If ${\ensuremath{\alpha}}\notin{\ensuremath{\mathbb{Z}}\xspace}^d$, then $F^{\ensuremath{\alpha}}(V)$ has a unique nonzero proper $\LL_d$-submodule $W$;

- If ${\ensuremath{\alpha}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$, then any nonzero proper submodule of $F^{\ensuremath{\alpha}}(V)$ is of the form $W'=W\oplus (V'\otimes {{\bf t}}^{-{\ensuremath{\alpha}}})$, where $V'$ is an arbitrary subspace of $V$.

Let $N$ be a nonzero proper submodule of $F^{\ensuremath{\alpha}}(V)$. Choose a nonzero vector $\sum_{{{\bf r}}\in I}v_{{{\bf r}}}{\otimes}{{\bf t}}^{{{\bf r}}}\in N$, where all $v_{{\bf r}}\in V$ are nonzero and $I\subseteq {\ensuremath{\mathbb{Z}}\xspace}^d$ is a finite subset. Take any ${{\bf m}}_1,{{\bf m}}_2\in{\ensuremath{\mathbb{Z}}\xspace}^d$ and ${{\bf u}}_2\in{\ensuremath{\mathbb C}\xspace}^d$ with ${{\bf m}}_2\neq 0, {{\bf u}}_2\neq0$ and $({{\bf m}}_2|{{\bf u}}_2)=0$. By Lemma \[orthg\], we can choose ${{\bf u}}_1\in{\ensuremath{\mathbb C}\xspace}^d$ such that $({{\bf m}}_1|{{\bf u}}_1)=0$ and $({{\bf m}}_1-x{{\bf m}}_2|{{\bf u}}_1-x{{\bf u}}_2)=0$ for any $x\in{\ensuremath{\mathbb C}\xspace}$.
Then, for $x\in{\ensuremath{\mathbb{Z}}\xspace}$, we have $$\label{d:x}\begin{split} & D({{\bf u}}_1-x{{\bf u}}_2,{{\bf m}}_1-x{{\bf m}}_2)D(x{{\bf u}}_2,x{{\bf m}}_2)(\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf r}}})\\ = & x\sum_{{{\bf r}}\in I}\big(({{\bf u}}_1-x{{\bf u}}_2 | {\ensuremath{\alpha}}+{{\bf r}}+x{{\bf m}}_2)+({{\bf m}}_1-x{{\bf m}}_2)({{\bf u}}_1^T-x{{\bf u}}_2^T)\big)\\ &\hskip3cm \cdot \big(({{\bf u}}_2|{\ensuremath{\alpha}}+{{\bf r}})+x{{\bf m}}_2 {{\bf u}}_2^T\big)v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}_1+{{\bf r}}}. \end{split}$$ Since this holds for all $x\in{\ensuremath{\mathbb{Z}}\xspace}$, the Vandermonde determinant shows that the coefficients of the monomials in $x$ all lie in $N$. In particular, the coefficient of $x^2$ gives $$\label{d:poly}\begin{split} & \sum_{{{\bf r}}\in I}\Big(\big(({{\bf u}}_1 |{\ensuremath{\alpha}}+{{\bf r}})+{{\bf m}}_1{{\bf u}}_1^T\big){{\bf m}}_2 {{\bf u}}_2^T\\ &\hskip30pt +({{\bf u}}_2|{\ensuremath{\alpha}}+{{\bf r}})\big(({{\bf u}}_1|{{\bf m}}_2)-({{\bf u}}_2 |{\ensuremath{\alpha}}+{{\bf r}})-{{\bf m}}_1{{\bf u}}_2^T-{{\bf m}}_2{{\bf u}}_1^T\big)\Big) v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}_1+{{\bf r}}}\in N. \end{split}$$ Note that the above formula holds trivially for ${{\bf u}}_2=0$.

**Claim 1.** $v_{{\bf r}}{\otimes}{{\bf t}}^{{\bf r}}\in N$ for all ${{\bf r}}\in I$.

Taking ${{\bf m}}_1=0$ and ${{\bf u}}_1=0$ in \eqref{d:poly}, we get $$\label{d:m=0} \sum_{{{\bf r}}\in I}({{\bf u}}_2|{\ensuremath{\alpha}}+{{\bf r}})^2 v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf r}}}\in N.$$ Letting ${{\bf m}}_2$ vary, we see that \eqref{d:m=0} holds for all ${{\bf u}}_2\in{\ensuremath{\mathbb{Z}}\xspace}^d$.
On the other hand, we also have $$\label{d:m} D({{\bf u}}, {{\bf m}})\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf r}}}=\sum_{{{\bf r}}\in I}\big(({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}})+{{\bf m}}{{\bf u}}^T\big)v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}}\in N,\ \forall\ ({{\bf m}}|{{\bf u}})=0.$$ Replacing $\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf r}}}$ with $D({{\bf u}},{{\bf m}})\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf r}}}$ in \eqref{d:m=0}, we have $$\label{r+m} \sum_{{{\bf r}}\in I}({{\bf u}}_2|{\ensuremath{\alpha}}+{{\bf r}}+{{\bf m}})^2 \big(({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}})+{{\bf m}}{{\bf u}}^T\big)v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}}\in N,\ \forall\ {{\bf u}}_2\in{\ensuremath{\mathbb{Z}}\xspace}^d.$$ It is easy to see that $({{\bf u}}_2|{\ensuremath{\alpha}}+{{\bf r}}_1+{{\bf m}})^2=({{\bf u}}_2|{\ensuremath{\alpha}}+{{\bf r}}_2+{{\bf m}})^2$ as polynomials in ${{\bf u}}_2$ for ${{\bf r}}_1\neq{{\bf r}}_2$ if and only if ${\ensuremath{\alpha}}+{{\bf r}}_1+{{\bf m}}=\pm({\ensuremath{\alpha}}+{{\bf r}}_2+{{\bf m}})$, or equivalently (the $+$ sign being impossible since ${{\bf r}}_1\neq{{\bf r}}_2$), $2{{\bf m}}=-{{\bf r}}_2-{{\bf r}}_1-2{\ensuremath{\alpha}}$. Since there are finitely many ${{\bf r}}\in I$, we see that for all but finitely many ${{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$, $({{\bf u}}_2|{\ensuremath{\alpha}}+{{\bf r}}+{{\bf m}})^2$ are distinct polynomials in ${{\bf u}}_2$ for distinct ${{\bf r}}\in I$.
Thus we deduce from \eqref{r+m}, for all but finitely many ${{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$, that $$\big(({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}})+{{\bf m}}{{\bf u}}^T\big)v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}}\in N,\ \forall\ {{\bf r}}\in I, {{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d\ \text{with}\ ({{\bf m}}|{{\bf u}})=0.$$ Applying $D({{\bf u}},-{{\bf m}})$ to this element we obtain, for all but finitely many ${{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$, that $$({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}})^2 v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf r}}}\in N,\ \forall\ {{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d\ \text{with}\ ({{\bf m}}|{{\bf u}})=0,$$ forcing $v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf r}}}\in N$ for all ${{\bf r}}\in I$ with ${\ensuremath{\alpha}}+{{\bf r}}\neq0$. Moreover, if $-{\ensuremath{\alpha}}\in I$, then subtracting these elements from $\sum_{{{\bf r}}\in I}v_{{{\bf r}}}\otimes {{\bf t}}^{{{\bf r}}}$ also gives $v_{-{\ensuremath{\alpha}}}{\otimes}{{\bf t}}^{-{\ensuremath{\alpha}}}\in N$. Claim 1 follows. So we may assume that $|I|=1$ and $v_{{\bf r}}=v$ in what follows; more precisely, we have $v{\otimes}{{\bf t}}^{{{\bf r}}}\in N$ for some ${{\bf r}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$. Replacing $v{\otimes}{{\bf t}}^{{{\bf r}}}$ with $D({{\bf u}},{{\bf m}})(v{\otimes}{{\bf t}}^{{{\bf r}}})$ for suitable ${{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d, {{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d$ with $({{\bf m}}|{{\bf u}})=0$ if necessary, we may assume that ${\ensuremath{\alpha}}+{{\bf r}}\neq0$ since $V$ is a nontrivial $\sl_d$-module. Now we can choose a basis $\{{\ensuremath{\alpha}}+{{\bf r}},{{\bf n}}_1,{{\bf n}}_2,\cdots,{{\bf n}}_{d-1}\}$ of ${\ensuremath{\mathbb C}\xspace}^d$ with ${{\bf n}}_1,{{\bf n}}_2,\cdots,{{\bf n}}_{d-1}\in{\ensuremath{\mathbb{Z}}\xspace}^d$. Identify $V$ with $\bigwedge^k{\ensuremath{\mathbb C}\xspace}^d$, where ${\ensuremath{\mathbb C}\xspace}^d$ is regarded as an $\sl_d$-module via matrix multiplication.
**Claim 2.** $\big({{\bf n}}_1\wedge\cdots\wedge{{\bf n}}_{k-1}\wedge v_k\big){\otimes}{{\bf t}}^{{{\bf r}}'}\in N\setminus\{0\}$ for some $v_{k}\in{\ensuremath{\mathbb C}\xspace}^d$, where ${{\bf r}}'-{{\bf r}}$ is a linear combination of ${{\bf n}}_1,\cdots,{{\bf n}}_{k-1}$ with coefficients $1$ or $0$.

The claim is true for $k=d$ or $k=1$. So we assume $2{\leqslant}k{\leqslant}d-1$, which implies $d{\geqslant}3$ in particular. Then we prove the following assertion by induction on $l$: $$\label{d: claim 2} {{\bf n}}_1\wedge\cdots\wedge{{\bf n}}_{l-1}\wedge v_l{\otimes}{{\bf t}}^{{{\bf r}}_l}\in N\setminus\{0\}$$ for some $v_l\in\bigwedge^{k-l+1}{\ensuremath{\mathbb C}\xspace}^d$, $0{\leqslant}l{\leqslant}k$, such that ${{\bf r}}_l-{{\bf r}}$ is a linear combination of ${{\bf n}}_1,\cdots,{{\bf n}}_{l-1}$ with coefficients $1$ or $0$. By Claim 1, we have $v{\otimes}{{\bf t}}^{{{\bf r}}}\in N$ for some nonzero $v\in V$. Hence \eqref{d: claim 2} is true for $l=0$ with $v_0=v$ and ${{\bf r}}_0={{\bf r}}$. Now suppose \eqref{d: claim 2} holds for some $l{\leqslant}k-1$. If ${{\bf n}}_l\wedge v_l=0$, then we have $v_l={{\bf n}}_l\wedge v_{l+1}$ for some $v_{l+1}\in\bigwedge^{k-l}{\ensuremath{\mathbb C}\xspace}^d$, and ${{\bf n}}_1\wedge\cdots\wedge{{\bf n}}_{l}\wedge v_{l+1}{\otimes}{{\bf t}}^{{{\bf r}}_{l+1}}\in N\setminus\{0\}$, where ${{\bf r}}_{l+1}={{\bf r}}_l$. Now consider the case ${{\bf n}}_l\wedge v_l\neq0$. Noticing that $l{\leqslant}k-1{\leqslant}d-2$, we can choose a basis $\{e_i, i=1,\cdots,d\}$ of ${\ensuremath{\mathbb C}\xspace}^d$ such that $e_d={\ensuremath{\alpha}}+{{\bf r}}_l, e_i={{\bf n}}_i$ for $1{\leqslant}i{\leqslant}l$ and $(e_i|e_j)=0$ for $i\neq j$ and $l+1{\leqslant}i{\leqslant}d-1$.
Since $v_l\in\bigwedge^{k-l+1}{\ensuremath{\mathbb C}\xspace}^d$ and $k-l+1{\geqslant}2$, there exists $l+1{\leqslant}i{\leqslant}d-1$ such that $$v_l=e_i\wedge u_1+{{\bf n}}_l\wedge u_2+{{\bf n}}_l\wedge e_i\wedge u_3+u_4,$$ with $u_1, u_2\in\bigwedge^{k-l}(\bigoplus_{j=l+1, j\neq i}^{d}{\ensuremath{\mathbb C}\xspace}e_j), u_3\in\bigwedge^{k-l-1}(\bigoplus_{j=l+1, j\neq i}^{d}{\ensuremath{\mathbb C}\xspace}e_j), u_4\in\bigwedge^{k-l+1}(\bigoplus_{j=l+1, j\neq i}^{d}{\ensuremath{\mathbb C}\xspace}e_j)$ and $u_1\neq 0$. Then applying $D(e_{i},{{\bf n}}_l)$ to ${{\bf n}}_1\wedge\cdots\wedge{{\bf n}}_{l-1}\wedge v_l{\otimes}{{\bf t}}^{{{\bf r}}_l}$, we get $$\begin{split} D(e_{i},{{\bf n}}_l)({{\bf n}}_1\wedge\cdots\wedge{{\bf n}}_{l-1}\wedge v_l{\otimes}{{\bf t}}^{{{\bf r}}_l}) = & (({\ensuremath{\alpha}}+{{\bf r}}_l | e_{i})+({{\bf n}}_l e_{i}^T)){{\bf n}}_1\wedge\cdots\wedge{{\bf n}}_{l-1}\wedge v_l{\otimes}{{\bf t}}^{{{\bf r}}_l+{{\bf n}}_l}\\ = & (e_{i}^Te_i)({{\bf n}}_1\wedge\cdots\wedge{{\bf n}}_{l-1}\wedge {{\bf n}}_l\wedge u_1){\otimes}{{\bf t}}^{{{\bf r}}_l+{{\bf n}}_l}\\ = & {{\bf n}}_1\wedge\cdots\wedge{{\bf n}}_{l-1}\wedge {{\bf n}}_l\wedge v_{l+1}{\otimes}{{\bf t}}^{{{\bf r}}_{l+1}}\in N\setminus\{0\}, \end{split}$$ where $v_{l+1}=u_1$ and ${{\bf r}}_{l+1}={{\bf r}}_l+{{\bf n}}_l$. Claim 2 follows.

**Claim 3.** The vector $v_{k}$ in Claim 2 lies in ${\ensuremath{\mathbb C}\xspace}{{\bf n}}_1\oplus\cdots\oplus{\ensuremath{\mathbb C}\xspace}{{\bf n}}_{k-1}\oplus{\ensuremath{\mathbb C}\xspace}({\ensuremath{\alpha}}+{{\bf r}}')$.

Note that $\{{\ensuremath{\alpha}}+{{\bf r}}', {{\bf n}}_1,\cdots,{{\bf n}}_{d-1}\}$ is a basis of ${\ensuremath{\mathbb C}\xspace}^d$. The claim is trivial for $k=d$, so we assume $k{\leqslant}d-1$.
Without loss of generality, we assume that $v_k\in{\ensuremath{\mathbb C}\xspace}({\ensuremath{\alpha}}+{{\bf r}}')\oplus{\ensuremath{\mathbb C}\xspace}{{\bf n}}_{k}\oplus\cdots\oplus{\ensuremath{\mathbb C}\xspace}{{\bf n}}_{d-1}$, and we only need to show $v_k\in{\ensuremath{\mathbb C}\xspace}({\ensuremath{\alpha}}+{{\bf r}}')$. Suppose on the contrary that $v_{k}\not\in{\ensuremath{\mathbb C}\xspace}({\ensuremath{\alpha}}+{{\bf r}}')$; we will deduce a contradiction. Fix any ${{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$. If ${{\bf m}}\in{\ensuremath{\mathbb C}\xspace}{{\bf n}}_1\oplus\cdots\oplus{\ensuremath{\mathbb C}\xspace}{{\bf n}}_{k-1}\oplus{\ensuremath{\mathbb C}\xspace}v_k$, then take any ${{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d$ such that $({{\bf u}}|v_k)=({{\bf u}}|{{\bf n}}_i)=0$ for all $i=1,\cdots,k-1$ and $({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}}')\neq0$. We have $$D({{\bf u}},{{\bf m}})\big(({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge v_k){\otimes}{{\bf t}}^{{{\bf r}}'}\big)=({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}}')({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge v_k){\otimes}{{\bf t}}^{{{\bf r}}'+{{\bf m}}}\in N\setminus\{0\},$$ forcing $({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge v_k){\otimes}{{\bf t}}^{{{\bf r}}'+{{\bf m}}}\in N$. If ${{\bf m}}\not\in{\ensuremath{\mathbb C}\xspace}{{\bf n}}_1\oplus\cdots\oplus{\ensuremath{\mathbb C}\xspace}{{\bf n}}_{k-1}\oplus{\ensuremath{\mathbb C}\xspace}v_k$ and $k{\geqslant}2$, we take ${{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d$ such that $({{\bf u}}|v_k)=({{\bf u}}|{{\bf n}}_i)=0$ for all $i=1,\cdots,k-1$ and $({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}}')\neq0$.
Then by \eqref{d:poly}, with ${{\bf m}}_1$ replaced by ${{\bf m}}$, ${{\bf m}}_2$ by ${{\bf n}}_1$, ${{\bf u}}_2$ by ${{\bf u}}$, $\sum_{{{\bf r}}\in I}v_{{\bf r}}{\otimes}{{\bf t}}^{{{\bf r}}}$ by $({{\bf n}}_1\wedge\cdots{{\bf n}}_{k-1}\wedge v_k){\otimes}{{\bf t}}^{{{\bf r}}'}$ and ${{\bf u}}_1$ suitably chosen, we have $$\begin{split} & \Big(\big(({{\bf u}}_1 |{\ensuremath{\alpha}}+{{\bf r}}')+{{\bf m}}{{\bf u}}_1^T\big){{\bf n}}_1{{\bf u}}^T \\ & \hskip30pt +({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}}')\big(({{\bf u}}_1 |{{\bf n}}_1)-({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}}')-{{\bf m}}{{\bf u}}^T-{{\bf n}}_1{{\bf u}}_1^T\big)\Big) ({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge v_k){\otimes}{{\bf t}}^{{{\bf r}}'+{{\bf m}}}\\ =&({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}}')\big(({{\bf u}}_1 |{{\bf n}}_1)-({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}}')-{{\bf n}}_1{{\bf u}}_1^T\big) ({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge v_k){\otimes}{{\bf t}}^{{{\bf r}}'+{{\bf m}}}\\ =&({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}}')^2({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge v_k){\otimes}{{\bf t}}^{{{\bf r}}'+{{\bf m}}}. \end{split}$$ Consequently, we obtain $({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge v_k){\otimes}{{\bf t}}^{{{\bf r}}'+{{\bf m}}}\in N$. If ${{\bf m}}\not\in{\ensuremath{\mathbb C}\xspace}{{\bf n}}_1\oplus\cdots\oplus{\ensuremath{\mathbb C}\xspace}{{\bf n}}_{k-1}\oplus{\ensuremath{\mathbb C}\xspace}v_k$ and $k=1$, then ${{\bf m}}\notin{\ensuremath{\mathbb C}\xspace}v_1$, and there exists $1{\leqslant}i_0{\leqslant}d-1$ such that $v_1, {{\bf m}}, {{\bf n}}_i, i=1,\cdots,d-1, i\neq i_0$ form a basis of ${\ensuremath{\mathbb C}\xspace}^d$. For simplicity, we may assume that $i_0=d-1$, that is, $v_1, {{\bf m}}, {{\bf n}}_i, i=1,\cdots,d-2$ form a basis of ${\ensuremath{\mathbb C}\xspace}^d$.
For any ${{\bf n}}=(n_1,\cdots,n_d)\in{\ensuremath{\mathbb{Z}}\xspace}^d$, we write ${{\bf n}}=\sum_{i=1}^{d-2}x_i{{\bf n}}_i+xv_1+x'{{\bf m}}$ for some $x_i, x,x'\in{\ensuremath{\mathbb C}\xspace}$. Write $v_1=(v_{1,1},\cdots,v_{1,d})^T, {{\bf m}}=(m_1,\cdots,m_d)$ and ${{\bf n}}_i=(n_{i,1},\cdots,n_{i,d})$ for convenience. Without loss of generality, we may assume that $v_{1,1}\neq0$. Take nonzero $w\in{\ensuremath{\mathbb C}\xspace}^d$ such that $(v_1|w)=0$ and $({\ensuremath{\alpha}}+{{\bf r}}'|w)\neq0$; then we can write $w=\sum_{i=2}^{d}a_i(v_{1,i}{\epsilon}_1-v_{1,1}{\epsilon}_i)$ for some $a_i\in{\ensuremath{\mathbb C}\xspace}$. Set ${{\bf u}}=\sum_{i=2}^{d}a_i(n_i{\epsilon}_1-n_1{\epsilon}_i)$, $w'=\sum_{i=2}^{d}a_i(m_i{\epsilon}_1-m_1{\epsilon}_i)$, $w_j=\sum_{i=2}^da_i(n_{j,i}{\epsilon}_1-n_{j,1}{\epsilon}_i)$ for $j=1,\cdots,d-2$, and we see $({{\bf n}}|{{\bf u}})=(v_1|w)=({{\bf m}}|w')=({{\bf n}}_i|w_i)=0$ and ${{\bf u}}=\sum_{i=1}^{d-2}x_iw_i+xw+x'w'$. Note that we have $({{\bf m}}-y{{\bf n}}|w'-y{{\bf u}})=0$ for all $y\in{\ensuremath{\mathbb C}\xspace}$. Taking ${{\bf m}}_1={{\bf m}}$, ${{\bf m}}_2={{\bf n}}$, ${{\bf u}}_2={{\bf u}}$ and ${{\bf u}}_1=w'$ in \eqref{d:poly}, with $\sum_{{{\bf r}}\in I}v_{{\bf r}}{\otimes}{{\bf t}}^{{\bf r}}$ replaced by $v_1{\otimes}{{\bf t}}^{{{\bf r}}'}$, we have $$\begin{split} & \left(\big((w' |{\ensuremath{\alpha}}+{{\bf r}}')+{{\bf m}}w'\,^T\big) \big(\sum_{i=1}^{d-2}x_i{{\bf n}}_i+xv_1+x'{{\bf m}}\big)\big(\sum_{i=1}^{d-2}x_iw_i+xw+x'w'\big)^T\right.\\ &\hskip1pt +\big(\sum_{i=1}^{d-2}x_iw_i+xw+x'w'|{\ensuremath{\alpha}}+{{\bf r}}'\big) \Big(\big(w'\big|\sum_{i=1}^{d-2}x_i{{\bf n}}_i+xv_1+x'{{\bf m}}\big) -\big(\sum_{i=1}^{d-2}x_iw_i+xw+x'w'\big|{\ensuremath{\alpha}}+{{\bf r}}'\big)\\ &\hskip20pt -\left.{{\bf m}}\big(\sum_{i=1}^{d-2}x_iw_i+xw+x'w'\big)^T- \big(\sum_{i=1}^{d-2}x_i{{\bf n}}_i+xv_1+x'{{\bf m}}\big) w'\,^T\Big)\right) v_1\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}'}\in N.
\end{split}$$ Regard the element in the above formula as a polynomial in $x_1,\cdots,x_{d-2}, x'$; the formula still holds if we replace $(x_1,\cdots,x_{d-2},x')$ by any element of $(x_1,\cdots,x_{d-2},x')+{\ensuremath{\mathbb{Z}}\xspace}^{d-1}$. Therefore, the coefficient of each monomial in $x_1,\cdots,x_{d-2},x'$ in the formula lies in $N$; in particular, the term of degree $0$ with respect to $x_1,\cdots,x_{d-2},x'$ lies in $N$, that is, $$\begin{split} & x^2\Big(\big((w' |{\ensuremath{\alpha}}+{{\bf r}}')+{{\bf m}}w'\,^T\big)v_1w^T \\ & \hskip20pt + \big(w|{\ensuremath{\alpha}}+{{\bf r}}'\big)\big((w'|v_1)-(w|{\ensuremath{\alpha}}+{{\bf r}}') -{{\bf m}}w^T-v_1w'\,^T\big)\Big) v_1\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}'}\in N. \end{split}$$ Noticing that $(w|v_1)=0$ and that we may choose $x\neq0$, the above formula is equivalent to $$\begin{split} (w|{\ensuremath{\alpha}}+{{\bf r}}')^2v_{1}\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}'}\in N. \end{split}$$ We get $v_{1}\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}'}\in N$ in this case. Now in all cases, we can deduce that $({{\bf n}}_1\wedge\cdots\wedge{{\bf n}}_{k-1}\wedge v_{k})\otimes {{\bf t}}^{{{\bf m}}+{{\bf r}}'}\in N$ for all ${{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$. Then Theorem \[d:irre\] implies $N=F^{\ensuremath{\alpha}}(V)$, a contradiction. The claim follows.
**Remark.** By the proof of Claim 2 and Claim 3, we can also deduce the following two results: - If there is some nonzero $v{\otimes}{{\bf t}}^{{\bf r}}\in N$, then for any ${{\bf n}}_1,\cdots,{{\bf n}}_{k-1}$ such that ${\ensuremath{\alpha}}+{{\bf r}},{{\bf n}}_1,\cdots,{{\bf n}}_{k-1}$ are linearly independent, we can deduce $({{\bf n}}_1\wedge\cdots\wedge{{\bf n}}_{k-1}\wedge ({\ensuremath{\alpha}}+{{\bf r}}')){\otimes}{{\bf t}}^{{{\bf r}}'}\in N$ for some ${{\bf r}}'\in{\ensuremath{\mathbb{Z}}\xspace}^d$ with ${{\bf r}}'-{{\bf r}}\in{\ensuremath{\mathbb C}\xspace}{{\bf n}}_1\oplus\cdots\oplus{\ensuremath{\mathbb C}\xspace}{{\bf n}}_{k-1}$; - If $({{\bf n}}_1\wedge\cdots\wedge{{\bf n}}_{k-1}\wedge v_k){\otimes}{{\bf t}}^{{{\bf r}}'}\in N$ and $v_k\notin{\ensuremath{\mathbb C}\xspace}{{\bf n}}_1\oplus\cdots\oplus{\ensuremath{\mathbb C}\xspace}{{\bf n}}_{k-1}\oplus{\ensuremath{\mathbb C}\xspace}({\ensuremath{\alpha}}+{{\bf r}}')$, then $({{\bf n}}_1\wedge\cdots\wedge{{\bf n}}_{k-1}\wedge v_k){\otimes}{{\bf t}}^{{{\bf m}}}\in N$ for all ${{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$. **Claim 4.** $({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge({\ensuremath{\alpha}}+{{\bf s}})){\otimes}{{\bf t}}^{{\bf s}}\in N$ for all ${{\bf s}}\in {\ensuremath{\mathbb{Z}}\xspace}^d$. By Claim 3, we have $({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge({\ensuremath{\alpha}}+{{\bf r}}')){\otimes}{{\bf t}}^{{{\bf r}}'}\in N$. Fix any ${{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$. First assume $k{\leqslant}d-1$. 
If ${{\bf m}}\notin{\ensuremath{\mathbb C}\xspace}({\ensuremath{\alpha}}+{{\bf r}}')$, we choose ${{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d$ such that $({{\bf m}}|{{\bf u}})=({{\bf n}}_i|{{\bf u}})=0$ for all $i=1,\cdots,k-1$ and $({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}}')\neq0$; applying $D({{\bf u}},{{\bf m}})$, we have $$\aligned & D({{\bf u}},{{\bf m}})({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge({\ensuremath{\alpha}}+{{\bf r}}')){\otimes}{{\bf t}}^{{{\bf r}}'}\\ = & \big(({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}}')+({{\bf m}}{{\bf u}}^T)\big)({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge({\ensuremath{\alpha}}+{{\bf r}}')){\otimes}{{\bf t}}^{{{\bf r}}'+{{\bf m}}}\\ = & ({{\bf u}}|{\ensuremath{\alpha}}+{{\bf r}}')({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge({\ensuremath{\alpha}}+{{\bf r}}'+{{\bf m}})){\otimes}{{\bf t}}^{{{\bf r}}'+{{\bf m}}}\in N, \endaligned$$ which implies $({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge({\ensuremath{\alpha}}+{{\bf r}}'+{{\bf m}})){\otimes}{{\bf t}}^{{{\bf r}}'+{{\bf m}}}\in N$. If ${{\bf m}}\in{\ensuremath{\mathbb C}\xspace}({\ensuremath{\alpha}}+{{\bf r}}')$ but ${{\bf m}}\neq -({\ensuremath{\alpha}}+{{\bf r}}')$, choose ${{\bf m}}'\in{\ensuremath{\mathbb{Z}}\xspace}^d$ with ${{\bf m}}'\notin{\ensuremath{\mathbb C}\xspace}({\ensuremath{\alpha}}+{{\bf r}}')$ and ${{\bf m}}-{{\bf m}}'\notin{\ensuremath{\mathbb C}\xspace}({\ensuremath{\alpha}}+{{\bf r}}'+{{\bf m}}')$; then applying the previous argument twice, we first get $({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge({\ensuremath{\alpha}}+{{\bf r}}'+{{\bf m}}')){\otimes}{{\bf t}}^{{{\bf r}}'+{{\bf m}}'}\in N$ and then $({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge({\ensuremath{\alpha}}+{{\bf r}}'+{{\bf m}})){\otimes}{{\bf t}}^{{{\bf r}}'+{{\bf m}}}\in N$. If ${{\bf m}}=-({\ensuremath{\alpha}}+{{\bf r}}')$, the result is trivial since ${\ensuremath{\alpha}}+{{\bf r}}'+{{\bf m}}=0$. Now suppose $k=d$.
Then $V$ is a trivial $\sl_d$-module and a discussion similar to the one above shows that $({{\bf n}}_1\wedge\cdots\wedge {{\bf n}}_{k-1}\wedge({\ensuremath{\alpha}}+{{\bf r}}'+{{\bf m}})){\otimes}{{\bf t}}^{{{\bf r}}'+{{\bf m}}}\in N$ for all ${{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d.$ Claim 4 is true. By Claim 4 and the remark before it, we see that $$W=\sum_{{{\bf r}}\in{\ensuremath{\mathbb{Z}}\xspace}^d}\Big({\ensuremath{\mathbb C}\xspace}({\ensuremath{\alpha}}+{{\bf r}})\wedge\bigwedge^{k-1}{\ensuremath{\mathbb C}\xspace}^{d}\Big){\otimes}{{\bf t}}^{{{\bf r}}}\subseteq N.$$ Suppose $N\neq W$ and take any nonzero $\sum_{{{\bf r}}\in J}v_{{\bf r}}{\otimes}{{\bf t}}^{{{\bf r}}}\in N\setminus W$, where $J$ is a finite index set. By Claim 1, we see that $v_{{\bf r}}{\otimes}{{\bf t}}^{{{\bf r}}}\in N$ for all ${{\bf r}}\in J$. So we may assume that $v_{{\bf r}}{\otimes}{{\bf t}}^{{\bf r}}\in N\setminus W$ and $v_{{\bf r}}=v_1\wedge\cdots\wedge v_{k}$. If ${\ensuremath{\alpha}}+{{\bf r}}\neq0$, then $v_1, \cdots, v_{k}, {\ensuremath{\alpha}}+{{\bf r}}$ must be linearly independent and, by the remark following Claim 3, we can deduce $(v_1\wedge\cdots\wedge v_{k}){\otimes}{{\bf t}}^{{{\bf r}}+{{\bf m}}}\in N$ for all ${{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$, forcing $N=F^{\ensuremath{\alpha}}(V)$, a contradiction. Thus we have proved that $W$ is the only nonzero proper $\LL_d$-submodule of $F^{\ensuremath{\alpha}}(V)$ if ${\ensuremath{\alpha}}\notin{\ensuremath{\mathbb{Z}}\xspace}^d$. Assertion (1) is proved. If ${\ensuremath{\alpha}}\in{\ensuremath{\mathbb{Z}}\xspace}^d$, the previous argument indicates that $v{\otimes}{{\bf t}}^{-{\ensuremath{\alpha}}}\in N\setminus W$ for some $v\in V$. Denote $V'=\{v\in V\ |\ v{\otimes}{{\bf t}}^{-{\ensuremath{\alpha}}}\in N\}$; then we see that $W'=W\oplus(V'\otimes {{\bf t}}^{-{\ensuremath{\alpha}}})$. Assertion (2) follows and the proof of the theorem is complete.
Modules for the $q$-analogue Algebras ===================================== In this section we consider similar problems for the $q$-analogues of the algebras $\LL$ and $\hLL$. We first recall the definitions for the corresponding algebras. Let $q=(q_{ij})_{i,j=1}^d$ be a $d\times d$ matrix over ${\ensuremath{\mathbb C}\xspace}$, where $q_{ij}=q_{ji}^{-1}$ are roots of unity. We have the $d$-dimensional quantum torus ${\ensuremath{\mathbb C}\xspace}_q={\ensuremath{\mathbb C}\xspace}_q[t_1^{\pm1},\cdots, t_d^{\pm1}]$, which is the associative non-commutative algebra generated by $t_1^{\pm1},\cdots,t_d^{\pm1}$ subject to the defining relations $t_it_j=q_{ij}t_jt_i$ for $i\neq j$ and $t_it_i^{-1}=1$. As before, we write ${{\bf t}}^{{{\bf n}}}=t_1^{n_1}\cdots t_d^{n_d}$ for any ${{\bf n}}=(n_1,\cdots,n_d)\in{\ensuremath{\mathbb{Z}}\xspace}^d$. It is easy to check that $${{\bf t}}^{{{\bf m}}}{{\bf t}}^{{\bf n}}=\sigma({{\bf m}},{{\bf n}}){{\bf t}}^{{{\bf m}}+{{\bf n}}},\ \ {{\bf t}}^{{{\bf m}}}{{\bf t}}^{{\bf n}}=f({{\bf m}},{{\bf n}}){{\bf t}}^{{{\bf n}}}{{\bf t}}^{{{\bf m}}},\ \forall\ {{\bf m}},{{\bf n}}\in{\ensuremath{\mathbb{Z}}\xspace}^d,$$ where $\sigma({{\bf m}},{{\bf n}})=\prod_{1{\leqslant}i<j{\leqslant}d}q_{ji}^{m_jn_i}$ and $f({{\bf m}},{{\bf n}})=\prod_{1{\leqslant}i,j{\leqslant}d}q_{ji}^{m_jn_i}$. It is not hard to verify that $f({{\bf m}},{{\bf n}})=\sigma({{\bf m}},{{\bf n}})\sigma^{-1}({{\bf n}},{{\bf m}})$ and $f({{\bf m}}+{{\bf n}},{{\bf r}})=f({{\bf m}},{{\bf r}})f({{\bf n}},{{\bf r}})$.
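As a quick numerical sanity check of these identities (our own illustration, not part of the paper), the following Python snippet verifies $f({{\bf m}},{{\bf n}})=\sigma({{\bf m}},{{\bf n}})\sigma^{-1}({{\bf n}},{{\bf m}})$ and the bilinearity of $f$ for $d=2$, taking $q_{12}$ to be a primitive 4th root of unity (an arbitrary illustrative choice):

```python
import cmath

d = 2
q12 = cmath.exp(2j * cmath.pi / 4)   # q_{12}: a primitive 4th root of unity
q = [[1, q12], [1 / q12, 1]]         # q_{ij} = q_{ji}^{-1}, q_{ii} = 1

def sigma(m, n):
    # sigma(m, n) = prod_{1 <= i < j <= d} q_{ji}^{m_j n_i}
    s = 1
    for i in range(d):
        for j in range(i + 1, d):
            s *= q[j][i] ** (m[j] * n[i])
    return s

def fq(m, n):
    # f(m, n) = prod_{1 <= i, j <= d} q_{ji}^{m_j n_i}
    s = 1
    for i in range(d):
        for j in range(d):
            s *= q[j][i] ** (m[j] * n[i])
    return s

m, n, r = (3, -1), (2, 5), (-1, 4)
ok_f = abs(fq(m, n) - sigma(m, n) / sigma(n, m)) < 1e-9
ok_bilinear = abs(fq((m[0] + n[0], m[1] + n[1]), r) - fq(m, r) * fq(n, r)) < 1e-9
# with l_1 = l_2 = 4 here, (4, 4) should satisfy f((4,4), .) = 1 identically
ok_rad = abs(fq((4, 4), n) - 1) < 1e-9
```

The last check previews the role of $\operatorname{Rad}_q$ defined below: vectors whose entries are divisible by the orders of the $q_{ij}$ commute with everything.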
Denote $${\operatorname{Rad}\xspace}_q=\Big\{{{\bf n}}\in{\ensuremath{\mathbb{Z}}\xspace}^d\ \Big|\ f({{\bf n}},{{\bf m}})=1,\ \forall\ {{\bf m}}\in{\ensuremath{\mathbb{Z}}\xspace}^d\Big\}.$$ It is well known from [@BGK] that the derivation Lie algebra of ${\ensuremath{\mathbb C}\xspace}_q$ is $$\Der({\ensuremath{\mathbb C}\xspace}_q)=\bigoplus_{{{\bf n}}\in{\ensuremath{\mathbb{Z}}\xspace}^d}\Der_n({\ensuremath{\mathbb C}\xspace}_q),\ \text{where}\ \Der_n({\ensuremath{\mathbb C}\xspace}_q)=\left\{\begin{array}{ll} {\ensuremath{\mathbb C}\xspace}{\operatorname{ad}\xspace}{{\bf t}}^n, & \text{if}\ {{\bf n}}\not\in{\operatorname{Rad}\xspace}_q,\\\\ \bigoplus_{i=1}^d{\ensuremath{\mathbb C}\xspace}t^{{\bf n}}{\partial}_i, & \text{if}\ {{\bf n}}\in{\operatorname{Rad}\xspace}_q, \end{array}\right.$$ where ${\operatorname{ad}\xspace}{{\bf t}}^{{\bf n}}$ is the inner derivation with respect to ${{\bf t}}^{{\bf n}}$ and ${\partial}_i$ is the degree derivation with respect to the variable $t_i$, that is, ${\partial}_i({{\bf t}}^{{\bf n}})=n_i{{\bf t}}^{{\bf n}}$, for $i=1,\cdots,d$. As before, we also denote $D({{\bf u}},{{\bf n}})={{\bf t}}^{{{\bf n}}}\sum_{i=1}^du_i{\partial}_i$ for ${{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d$ and ${{\bf n}}\in{\operatorname{Rad}\xspace}_q$.
The Lie bracket of $\Der({\ensuremath{\mathbb C}\xspace}_q)$ can be given explicitly as $$\begin{split} &[{\operatorname{ad}\xspace}{{\bf t}}^{{{\bf m}}},{\operatorname{ad}\xspace}{{\bf t}}^{{\bf n}}]={\operatorname{ad}\xspace}[{{\bf t}}^{{\bf m}},{{\bf t}}^{{\bf n}}],\\ &[D({{\bf u}},{{\bf r}}), {\operatorname{ad}\xspace}{{\bf t}}^{{\bf n}}]=({{\bf u}}|{{\bf n}}){\operatorname{ad}\xspace}{{\bf t}}^{{{\bf r}}}{{\bf t}}^{{{\bf n}}},\\ &[D({{\bf u}},{{\bf r}}), D({{\bf v}},{{\bf s}})]=\sigma({{\bf r}},{{\bf s}})\big(({{\bf u}}|{{\bf s}})D({{\bf v}},{{\bf r}}+{{\bf s}})-({{\bf v}}|{{\bf r}})D({{\bf u}},{{\bf r}}+{{\bf s}})\big),\end{split}$$ for all ${{\bf m}}, {{\bf n}}\notin{\operatorname{Rad}\xspace}_q, {{\bf r}}, {{\bf s}}\in{\operatorname{Rad}\xspace}_q$ and ${{\bf u}},{{\bf v}}\in{\ensuremath{\mathbb C}\xspace}^d$. Similarly, for any $\alpha=({\ensuremath{\alpha}}_1,\cdots,{\ensuremath{\alpha}}_d)\in{\ensuremath{\mathbb C}\xspace}^d$ and $\mathfrak{gl}_d$-module $V$, the tensor space $F_q^{{\ensuremath{\alpha}}}(V)={\ensuremath{\mathbb C}\xspace}_q{\otimes}V$ admits a $\Der({\ensuremath{\mathbb C}\xspace}_q)$-module structure defined by $$\label{mod_LLq}\begin{array}{l} ({\operatorname{ad}\xspace}{{\bf t}}^{{\bf m}})({{\bf t}}^{{{\bf n}}}\otimes v)=[{{\bf t}}^{{{\bf m}}},{{\bf t}}^{{{\bf n}}}]\otimes v,\\\\ D({{\bf u}},{{\bf r}})({{\bf t}}^{{{\bf n}}}\otimes v)=\sigma({{\bf r}},{{\bf n}}){{\bf t}}^{{{\bf r}}+{{\bf n}}}\otimes \left(({{\bf u}}|{{\bf n}}+{\ensuremath{\alpha}})+({{\bf r}}{{\bf u}}^T)\right)v,\\ \end{array}$$ for any ${{\bf m}}\notin{\operatorname{Rad}\xspace}_q, {{\bf r}}\in{\operatorname{Rad}\xspace}_q, {{\bf n}}\in{\ensuremath{\mathbb{Z}}\xspace}^d, {{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d$ and $v\in V$.
The Lie algebra $\Der({\ensuremath{\mathbb C}\xspace}_q)$ has a natural subalgebra $$\LL_d(q)=\bigoplus_{{{\bf n}}\in{\ensuremath{\mathbb{Z}}\xspace}^d}\LL_d(q)_{{\bf n}},\ \text{where}\ \LL_d(q)_{{\bf n}}=\left\{\begin{array}{ll} {\ensuremath{\mathbb C}\xspace}{\operatorname{ad}\xspace}{{\bf t}}^n&\text{if}\ {{\bf n}}\not\in{\operatorname{Rad}\xspace}_q,\\\\ \sum_{i,j=1}^d{\ensuremath{\mathbb C}\xspace}t^{{\bf n}}(n_j{\partial}_i-n_i{\partial}_j)&\text{if}\ {{\bf n}}\in{\operatorname{Rad}\xspace}_q. \end{array}\right.$$ Adding the degree operators, we get another subalgebra $\hLL_d(q)=\LL_d(q)\oplus\sum_{i=1}^d{\ensuremath{\mathbb C}\xspace}{\partial}_i$, which is called the skew derivation Lie algebra of ${\ensuremath{\mathbb C}\xspace}_q$. We see that $\hLL_d(q)$ is spanned by all ${\operatorname{ad}\xspace}{{\bf t}}^{{\bf n}}$ for ${{\bf n}}\not\in{\operatorname{Rad}\xspace}_q$ and $D({{\bf u}},{{\bf n}})$ for ${{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d, {{\bf n}}\in{\operatorname{Rad}\xspace}_q$ with $({{\bf u}}|{{\bf n}})=0$. Now we can regard $F_q^{\ensuremath{\alpha}}(V)$ as an $\LL_d(q)$-module as well as an $\hLL_d(q)$-module. In what follows, we will denote $\LL(q)=\LL_d(q), \hLL(q)=\hLL_d(q)$ and $\LL=\LL_d, \hLL=\hLL_d$ for simplicity. Remark that when $q_{ij}=1$ for all $i,j=1,\cdots,d$, the algebra ${\ensuremath{\mathbb C}\xspace}_q[t_1^{\pm1},\cdots,t_d^{\pm1}]={\ensuremath{\mathbb C}\xspace}[t_1^{\pm1},\cdots,t_d^{\pm1}]$ is just the usual Laurent polynomial algebra, and $\LL(q)$ and $\hLL(q)$ are just the algebras $\LL$ and $\hLL$ we studied in the previous sections. Note that $[{\ensuremath{\mathbb C}\xspace}_q,{\ensuremath{\mathbb C}\xspace}_q]=\sum_{{{\bf n}}\notin{\operatorname{Rad}\xspace}_q} {\ensuremath{\mathbb C}\xspace}{{\bf t}}^{{{\bf n}}}$.
We see that $\LL'(q)=\sum_{{{\bf n}}\in{\operatorname{Rad}\xspace}_q} \sum_{i,j=1}^d{\ensuremath{\mathbb C}\xspace}t^{{\bf n}}(n_j{\partial}_i-n_i{\partial}_j)$ and $\hLL'(q)=\LL'(q)\oplus\sum_{i=1}^d{\ensuremath{\mathbb C}\xspace}{\partial}_i$ are subalgebras of $\LL(q)$ and $\hLL(q)$, respectively. Furthermore, ${\operatorname{ad}\xspace}[{\ensuremath{\mathbb C}\xspace}_q,{\ensuremath{\mathbb C}\xspace}_q]=\sum_{{{\bf m}}\notin{\operatorname{Rad}\xspace}_q}{\ensuremath{\mathbb C}\xspace}{\operatorname{ad}\xspace}{{\bf t}}^{{{\bf m}}}$ is an ideal of both $\LL(q)$ and $\hLL(q)$. We have $$\label{iso_LL} \LL(q)/{\operatorname{ad}\xspace}[{\ensuremath{\mathbb C}\xspace}_q,{\ensuremath{\mathbb C}\xspace}_q]\cong\LL'(q)\cong \LL\quad\text{and}\quad \hLL(q)/{\operatorname{ad}\xspace}[{\ensuremath{\mathbb C}\xspace}_q,{\ensuremath{\mathbb C}\xspace}_q]\cong\hLL'(q)\cong \hLL.$$ By Theorem III.2 in [@N] (or cf. [@LZ2] Lemma 3.3), by replacing $t_1,\cdots, t_d$ with another set of suitable generators ${{\bf t}}^{{{\bf n}}_1},\cdots,{{\bf t}}^{{{\bf n}}_d}$, we can assume that $q_{2i-1,2i}=q_{2i,2i-1}^{-1}$ is a primitive root of unity of order $l_{2i-1}=l_{2i}{\geqslant}2$ for any $i=1,\cdots,d_0$ with $2d_0{\leqslant}d$ and all other $q_{ij}=1$. For convenience, denote $l_i=1$ for $2d_0+1{\leqslant}i{\leqslant}d$ and ${{\bf l}}=(l_1,\cdots,l_d)$. Fix these notations from now on. Under this assumption, we have ${\operatorname{Rad}\xspace}_q=\bigoplus_{i=1}^dl_i{\ensuremath{\mathbb{Z}}\xspace}$. Then the Lie algebra isomorphisms in can be given by $$\label{iso'_LL} D({{\bf u}},{{\bf n}})\mapsto D\Big((l_1u_1,\cdots,l_du_d),\Big(\frac{n_1}{l_1},\cdots,\frac{n_d}{l_d}\Big)\Big),$$ for all ${{\bf u}}=(u_1,\cdots,u_d)\in{\ensuremath{\mathbb C}\xspace}^d$ and ${{\bf n}}=(n_1,\cdots,n_d)\in{\operatorname{Rad}\xspace}_q$ with $({{\bf u}}|{{\bf n}})=0$. We first consider $F_q^{\ensuremath{\alpha}}(V)$ as a module over the subalgebras $\LL'(q)\cong\LL$ or $\hLL'(q)\cong\hLL$.
Denote $\GG=\LL\ \text{or}\ \hLL$, $\GG'(q)=\LL'(q)\ \text{or}\ \hLL'(q)$ for convenience. Set $$I=\{{{\bf i}}=(i_1,\cdots,i_d)\in{\ensuremath{\mathbb{Z}}\xspace}^d\ |\ 0{\leqslant}i_j< l_j\},$$ then we have the $\GG'(q)$-module decomposition of $F_q^{\ensuremath{\alpha}}(V)$ as follows $$F_q^{\ensuremath{\alpha}}(V)=\sum_{{{\bf i}}\in I}F_q^{{\ensuremath{\alpha}}, {{\bf i}}}(V),\ \text{where}\ F_q^{{\ensuremath{\alpha}},{{\bf i}}}(V)=\sum_{{{\bf r}}\in{\operatorname{Rad}\xspace}_q}{\ensuremath{\mathbb C}\xspace}{{\bf t}}^{{{\bf r}}+{{\bf i}}}{\otimes}V.$$ For the $\gl_d$-module $V$, we define a new action of $\gl_d$ on $V$ as follows: for any $B\in\gl_d$ and $v\in V$, we set $B\circ v=LBL^{-1}v$, where $L=(\delta_{ij}l_i)_{i,j=1}^d$ is an invertible diagonal matrix. Denote this new $\gl_d$-module by $V^{({{\bf l}})}$. Using the Lie algebra isomorphisms given by , we can view each $F_q^{{\ensuremath{\alpha}},{{\bf i}}}(V)$ as a $\GG$-module and moreover, we have the $\GG$-module isomorphism $$\label{iso_mod} F_q^{{\ensuremath{\alpha}},{{\bf i}}}(V) \longrightarrow F^{{\ensuremath{\alpha}}_{{{\bf i}}}}(V^{({{\bf l}})}), {{\bf t}}^{{{\bf n}}+{{\bf i}}}{\otimes}v \mapsto t_1^{n_1/l_1}\cdots t_d^{n_d/l_d}{\otimes}v,\ \forall\ {{\bf n}}=(n_1,\cdots,n_d)\in {\operatorname{Rad}\xspace}_q, $$ where ${{\bf i}}=(i_1,\cdots,i_d)\in I$, ${\ensuremath{\alpha}}_{{{\bf i}}}=(\frac{{\ensuremath{\alpha}}_1+i_1}{l_1},\cdots,\frac{{\ensuremath{\alpha}}_d+i_d}{l_d})$ and $F^{{\ensuremath{\alpha}}_{{{\bf i}}}}(V^{({{\bf l}})})$ is the $\GG$-module studied in Section 3. Noticing that $({\operatorname{ad}\xspace}[{\ensuremath{\mathbb C}\xspace}_q,{\ensuremath{\mathbb C}\xspace}_q])F_q^{{\ensuremath{\alpha}},\bf{0}}(V)=0$ by , we see that the $\LL(q)$ or $\hLL(q)$-module structure of $F_q^{{\ensuremath{\alpha}},\bf{0}}(V)$ is completely determined by the $\GG'(q)\cong\GG$-module structure of $F^{{\ensuremath{\alpha}}_{\bf{0}}}(V^{({{\bf l}})})$, which is given by the results in Section 3.
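The twisted action $B\circ v=LBL^{-1}v$ indeed defines a $\gl_d$-module, since conjugation by the invertible matrix $L$ is an automorphism of the matrix algebra and therefore preserves Lie brackets. A small self-contained Python check of this bracket preservation (our own illustration; the matrices and the values of $l_i$ below are arbitrary, chosen as powers of $2$ so that the floating-point arithmetic is exact):

```python
d = 2
l = [2, 1]  # illustrative values of l_1, l_2

def conj(B):
    # (L B L^{-1})_{ij} = l_i * B_{ij} / l_j with L = diag(l_1, ..., l_d)
    return [[l[i] * B[i][j] / l[j] for j in range(d)] for i in range(d)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

def bracket(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(d)] for i in range(d)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, -2]]
# conjugating the bracket equals the bracket of the conjugates
lhs = conj(bracket(A, B))
rhs = bracket(conj(A), conj(B))
```

Since the twisting is by an automorphism, $V^{({{\bf l}})}$ has the same submodule lattice as $V$; only the weights are rescaled.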
Now, as in [@LZ2], we denote $$G_q^{\ensuremath{\alpha}}(V)=\sum_{{{\bf i}}\in I, {{\bf i}}\neq \bf{0}}F_q^{{\ensuremath{\alpha}},{{\bf i}}}(V).$$ We shall show that $G_q^{\ensuremath{\alpha}}(V)$ is an irreducible $\LL(q)$-module and hence an irreducible $\hLL(q)$-module. Take any nonzero $\LL(q)$-submodule $N$ of $G^{\ensuremath{\alpha}}_q(V)$; then $N_{{{\bf i}}}=N\cap F_q^{{\ensuremath{\alpha}},{{\bf i}}}(V)$ is an $\LL'(q)$-submodule of $F_q^{{\ensuremath{\alpha}},{{\bf i}}}(V)$ for any ${{\bf i}}\in I\setminus\{0\}$. **Claim 1.** $N_{{{\bf i}}}\neq0$ for some ${{\bf i}}\in I\setminus\{0\}$. Take any nonzero $w=\sum_{{{\bf i}}\in J}{{\bf t}}^{{{\bf r}}+{{\bf i}}}\otimes v_{{{\bf i}}}\in N$ for some ${{\bf r}}=(r_1,\cdots,r_d)\notin {\operatorname{Rad}\xspace}_q$, some finite subset $J\subseteq{\ensuremath{\mathbb{Z}}\xspace}^d$ with $0\in J$, and nonzero $v_{{\bf i}}\in V, {{\bf i}}\in J$. If $J\subseteq{\operatorname{Rad}\xspace}_q$, the claim already holds. Now suppose $J\not\subseteq{\operatorname{Rad}\xspace}_q$ and take any ${{\bf j}}=(j_1,\cdots,j_d)\in J\setminus{\operatorname{Rad}\xspace}_q$. Recalling that ${\operatorname{Rad}\xspace}_q=\bigoplus_{i=1}^dl_i{\ensuremath{\mathbb{Z}}\xspace}$, we may assume that not both of $j_1$ and $j_2$ are divisible by $l_1=l_2$ without loss of generality. There exist a prime $p$ and an integer $k\in{\ensuremath{\mathbb{Z}}\xspace}_+$ such that $$(j_1,j_2)\in (p^k{\ensuremath{\mathbb{Z}}\xspace}\times p^k{\ensuremath{\mathbb{Z}}\xspace})\setminus(p^{k+1}{\ensuremath{\mathbb{Z}}\xspace}\times p^{k+1}{\ensuremath{\mathbb{Z}}\xspace}),\ \ l_1=l_2\in p^{k+1}{\ensuremath{\mathbb{Z}}\xspace}.$$ It is obvious that $(r_1,r_2)\notin(p^{k+1}{\ensuremath{\mathbb{Z}}\xspace})^2$ or $(r_1+j_1,r_2+j_2)\notin (p^{k+1}{\ensuremath{\mathbb{Z}}\xspace})^2$, and without loss of generality, we may assume that $(r_1,r_2)\notin (p^{k+1}{\ensuremath{\mathbb{Z}}\xspace})^2$.
So there exists ${{\bf n}}=(n_1,n_2,0,\cdots,0)\in{\ensuremath{\mathbb{Z}}\xspace}^d$ such that neither $r_1n_2-r_2n_1$ nor $j_1n_2-j_2n_1$ is divisible by $p^{k+1}$. Hence neither $r_1n_2-r_2n_1$ nor $j_1n_2-j_2n_1$ is divisible by $l_1=l_2$. Set ${{\bf r}}'=(r_1,r_2,0,\cdots,0)$ and ${{\bf j}}'=(j_1,j_2,0,\cdots,0)\in{\ensuremath{\mathbb{Z}}\xspace}^d$. If $q_{21}^{j_1r_2}-q_{21}^{j_2r_1}\neq0$, we have $$0\neq w'=({\operatorname{ad}\xspace}{{\bf t}}^{{{\bf r}}'})w=\sum_{{{\bf i}}\in J}[{{\bf t}}^{{{\bf r}}'}, {{\bf t}}^{{{\bf r}}+{{\bf i}}}]\otimes v_{{{\bf i}}}=\sum_{{{\bf i}}\in J\setminus\{0\}}q_{21}^{r_1r_2}\big(q_{21}^{i_1r_2}-q_{21}^{i_2r_1}\big){{\bf t}}^{{{\bf r}}+{{\bf i}}+{{\bf r}}'}\otimes v_{{{\bf i}}}\in N.$$ Now suppose $q_{21}^{j_1r_2}-q_{21}^{j_2r_1}=0$, or equivalently, $j_1r_2-j_2r_1$ is divisible by $l_1=l_2$. First applying ${\operatorname{ad}\xspace}{{\bf t}}^{{{\bf n}}}$ on $w$ we get $$0\neq ({\operatorname{ad}\xspace}{{\bf t}}^{{{\bf n}}})w=\sum_{{{\bf i}}\in J}[{{\bf t}}^{{{\bf n}}}, {{\bf t}}^{{{\bf r}}+{{\bf i}}}]\otimes v_{{{\bf i}}}=\sum_{{{\bf i}}\in J}{{\bf t}}^{{{\bf n}}+{{\bf r}}+{{\bf i}}}\otimes v'_{{{\bf i}}}\in N,$$ where $v'_{{{\bf i}}}=\big(q_{21}^{(r_1+i_1)n_2}-q_{21}^{(r_2+i_2)n_1}\big)v_{{{\bf i}}}\in V$ with $v'_0\neq0$ since $r_2n_1-r_1n_2$ is not divisible by $l_1$. 
Next we apply ${\operatorname{ad}\xspace}{{\bf t}}^{{{\bf n}}+{{\bf r}}'+{{\bf j}}'}$ and obtain $$w'=({\operatorname{ad}\xspace}{{\bf t}}^{{{\bf n}}+{{\bf r}}'+{{\bf j}}'})({\operatorname{ad}\xspace}{{\bf t}}^{{{\bf n}}})w=({\operatorname{ad}\xspace}{{\bf t}}^{{{\bf n}}+{{\bf r}}'+{{\bf j}}'})\sum_{{{\bf i}}\in J}{{\bf t}}^{{{\bf n}}+{{\bf r}}+{{\bf i}}}\otimes v'_{{{\bf i}}}=\sum_{{{\bf i}}\in J\setminus\{{{\bf j}}\}}[{{\bf t}}^{{{\bf n}}+{{\bf r}}'+{{\bf j}}'}, {{\bf t}}^{{{\bf n}}+{{\bf r}}+{{\bf i}}}]\otimes v'_{{{\bf i}}}\in N.$$ Since $n_1j_2-n_2j_1$ is not divisible by $l_1=l_2$, we have $$[{{\bf t}}^{{{\bf n}}+{{\bf r}}'+{{\bf j}}'}, {{\bf t}}^{{{\bf n}}+{{\bf r}}}]=q_{21}^{(n_1+r_1)(n_2+r_2)}(q_{21}^{(n_1+r_1)j_2}-q_{21}^{(n_2+r_2)j_1}){{\bf t}}^{2{{\bf n}}+{{\bf r}}+{{\bf r}}'+{{\bf j}}'}=q_{21}^{(n_1+r_1)(n_2+r_2)}q_{21}^{j_1r_2}(q_{21}^{n_1j_2}-q_{21}^{n_2j_1}){{\bf t}}^{2{{\bf n}}+{{\bf r}}+{{\bf r}}'+{{\bf j}}'}\neq0$$ and hence $0\neq w'\in N$. Replacing $w$ with $w'$, we have made the index set $J$ smaller. Repeating this process finitely many times, we can reach a nonzero element in $N_{{{\bf i}}}$ for some ${{\bf i}}\in I\setminus\{0\}$ and the claim follows. By the above claim, we can take ${{\bf i}}\in I\setminus\{0\}$ such that $N_{{{\bf i}}}\neq 0$. Since $\LL'(q)\cong \LL$, we have that $N_{{{\bf i}}}$ is isomorphic to one of the $\LL$-submodules of $F^{{\ensuremath{\alpha}}_{{{\bf i}}}}(V^{({{\bf l}})})$ (cf. ) as described in Section 3. In particular, there exist ${{\bf r}}\in{\operatorname{Rad}\xspace}_q$ and nonzero $v\in V$ such that ${{\bf t}}^{{{\bf r}}+{{\bf i}}}{\otimes}v\in N_{{{\bf i}}}$ by Theorem \[d:infinite\] and Theorem \[d:omega\_k\]. Fix this $v$; by the module action , we see that $$K=\operatorname{span}_{\ensuremath{\mathbb C}\xspace}\{{{\bf t}}^{{{\bf n}}}\ |\ {{\bf n}}\not\in{\operatorname{Rad}\xspace}_q, {{\bf t}}^{{\bf n}}{\otimes}v\in N\}$$ is a nonzero ${\ensuremath{\mathbb{Z}}\xspace}^d$-graded ideal of the Lie algebra $[{\ensuremath{\mathbb C}\xspace}_q,{\ensuremath{\mathbb C}\xspace}_q]$.
By Lemma 2.2 of [@LZ2], $[{\ensuremath{\mathbb C}\xspace}_q,{\ensuremath{\mathbb C}\xspace}_q]$ is a ${\ensuremath{\mathbb{Z}}\xspace}^d$-graded simple Lie algebra, so we have $K=[{\ensuremath{\mathbb C}\xspace}_q,{\ensuremath{\mathbb C}\xspace}_q]$ and hence $[{\ensuremath{\mathbb C}\xspace}_q,{\ensuremath{\mathbb C}\xspace}_q]{\otimes}v\subseteq N$. Now let $$V'=\{v'\in V\ |\ [{\ensuremath{\mathbb C}\xspace}_q,{\ensuremath{\mathbb C}\xspace}_q]{\otimes}v'\subseteq N\}.$$ Again by the module action we see that $V'$ is stable under the action of ${{\bf r}}^T{{\bf u}}$ for all ${{\bf r}}\in{\operatorname{Rad}\xspace}_q, {{\bf u}}\in{\ensuremath{\mathbb C}\xspace}^d$ with $({{\bf u}}|{{\bf r}})=0$, hence under the action of $\sl_d$. We obtain that $V'$ is a nonzero $\sl_d$-submodule of $V$ and hence $V'=V$. The proof of the theorem is complete. Summarizing the results in this section, we can conclude the following. \[main\_q\] Let $V$ be an irreducible $\mathfrak{gl}_d$-module (possibly infinite-dimensional) and $\alpha=({\ensuremath{\alpha}}_1,\cdots,{\ensuremath{\alpha}}_d)\in{\ensuremath{\mathbb C}\xspace}^d$. Denote $\GG=\LL\ \text{or}\ \hLL$, $\GG(q)=\LL(q)\ \text{or}\ \hLL(q)$.
The $\GG(q)$-module $F^{\ensuremath{\alpha}}_q(V)$ can be decomposed as the direct sum of two submodules $$F^{\ensuremath{\alpha}}_q(V)=F_q^{{\ensuremath{\alpha}},\bf{0}}(V)\oplus G^{\ensuremath{\alpha}}_q(V).$$ Moreover, $\big({\operatorname{ad}\xspace}[{\ensuremath{\mathbb C}\xspace}_q,{\ensuremath{\mathbb C}\xspace}_q]\big)F_q^{{\ensuremath{\alpha}},\bf{0}}(V)=0$ and as a $\GG(q)/{\operatorname{ad}\xspace}[{\ensuremath{\mathbb C}\xspace}_q,{\ensuremath{\mathbb C}\xspace}_q]\cong\GG$-module $F_q^{{\ensuremath{\alpha}},\bf{0}}(V)$ is isomorphic to the $\GG$-module $F^{{\ensuremath{\alpha}}_{\bf{0}}}(V^{({{\bf l}})})$ via and , whose module structure is completely determined in Section 3; and $G^{\ensuremath{\alpha}}_q(V)$ is an irreducible $\GG(q)$-module. [**Acknowledgments.**]{} This work is partially supported by the NSF of China (Grant 11471294) and the Foundation for Young Teachers of Zhengzhou University (Grant 1421315071). [99999]{} B.N. Allison, S. Azam, S. Berman, Y. Gao and A. Pianzola, Extended affine Lie algebras and their root systems, [*Mem. Amer. Math. Soc.*]{}, 126 (605) (1997). Y. Billig, Jet modules, [*Canad. J. Math.*]{}, 59(2007), 712–729. S. Berman, Y. Gao and Y. Krylyuk, Quantum tori and the structure of elliptic quasi-simple Lie algebras. [*J. Funct. Anal.*]{}, 135(1996), no. 2, 339–389. Y. Billig and V. Futorny, Classification of irreducible representations of Lie algebra of vector fields on a torus. [*J. Reine Angew. Math.*]{}, 720 (2016), 199–216. Y. Billig, A. Molev and R. Zhang, Differential equations in vertex algebras and simple modules for the Lie algebra of vector fields on a torus. [*Adv. Math.*]{}, 218 (2008), 1972–2004. Y. Billig and J. Talboom, Classification of Category $\mathcal{J}$ Modules for Divergence Zero Vector Fields on a Torus. Preprint, arXiv:1607.07067. D. Djokovic and K. Zhao, Some infinite-dimensional simple Lie algebras in characteristic 0 related to those of Block, [*J. Pure Appl.
Algebra*]{}, 127(1998), no.2, 153–165. S. Eswara Rao, Irreducible representations of the Lie algebra of the diffeomorphisms of a $d$-dimensional torus, [*J. Algebra*]{}, 182 (1996), no. 2, 401–421. S. Eswara Rao, Partial classification of modules for Lie-algebra of diffeomorphisms of $d$-dimensional torus, [*J. Math. Phys.*]{}, 45(8), (2004) 3322–3333. X. Guo, G. Liu and K. Zhao, Irreducible Harish-Chandra modules over extended Witt algebras, [*Ark. Mat.*]{}, 52 (2014), 99–112. X. Guo and K. Zhao, Irreducible weight modules over Witt algebras, [*Proc. Amer. Math. Soc.*]{}, 139(2011), 2367–2373. T. A. Larsson, Multi-dimensional Virasoro algebra, [*Phys. Lett.*]{}, B 231, 94–96(1989). T. A. Larsson, Central and non-central extensions of multi-graded Lie algebras, [*J. Phys., A*]{}, 25, 6493–6508(1992). T. A. Larsson, Conformal fields: A class of representations of Vect(N), [*Int. J. Mod. Phys. A*]{}, 7, 6493–6508(1992). W. Lin and S. Tan, Representations of the Lie algebra of derivations for quantum torus, [*J. Algebra*]{}, 275(2004), 250–274. W. Lin and S. Tan, The representation of the skew derivation Lie algebras over the quantum torus, [*Adv. Math. (China)*]{}, 34(4) (2005), 477–487. G. Liu and K. Zhao, New irreducible weight modules over Witt algebras with infinite-dimensional weight spaces. [*Bull. Lond. Math. Soc.*]{}, 47 (2015), no. 5, 789–795. G. Liu and K. Zhao, Irreducible Harish-Chandra modules over the derivation algebras of rational quantum tori. [*Glasg. Math. J.*]{}, 55 (2013), no. 3, 677–693. G. Liu and K. Zhao, Irreducible modules over the derivation algebras of rational quantum tori. [*J. Algebra*]{}, 340 (2011), 28–34. V. Mazorchuk, K. Zhao, Supports of weight modules over Witt algebras, [*Proc. Roy. Soc. Edinburgh Sect. A*]{}, 141(2011), 155–170. K. Neeb, The classification of rational quantum tori and the structure of their automorphism groups. [*Canad. Math. Bull.*]{}, 51 (2008), no. 2, 261–282. G.
Shen, Graded modules of graded Lie algebras of Cartan type. I. Mixed products of modules, [*Sci. Sinica Ser. A*]{}, 29(1986), no.6, 570–581. J. Talboom, Irreducible modules for the Lie algebra of divergence zero vector fields on a torus. [*Comm. Algebra*]{}, 44 (2016), no. 4, 1795–1808. K. Zhao, Weight modules over generalized Witt algebras with 1-dimensional weight spaces, [*Forum Math.*]{}, 16(2004), no. 5, 725–748.
--- abstract: 'We propose various strategies for improving the computation of discrete logarithms in non-prime fields of medium to large characteristic using the Number Field Sieve. This includes new methods for selecting the polynomials; the use of explicit automorphisms; explicit computations in the number fields; and prediction that some units have a zero virtual logarithm. On the theoretical side, we obtain a new complexity bound of $L_{p^n}(1/3,\sqrt[3]{96/9})$ in the medium characteristic case. On the practical side, we computed discrete logarithms in ${{\mathbb F}}_{p^2}$ for a prime number $p$ with $80$ decimal digits.' author: - 'Razvan Barbulescu ^1,2,3,4^' - 'Pierrick Gaudry ^1,2,3^' - 'Aurore Guillevic ^4,3,2^' - 'François Morain ^4,3,2^' title: 'Improvements to the number field sieve for non-prime finite fields\' --- Introduction ============ The computation of discrete logarithms in finite fields is one of the important topics in algorithmic number theory, partly due to its relevance to public key cryptography. The complexity of discrete logarithm algorithms for finite fields ${{\mathbb F}}_{p^n}$ depends on the size of the characteristic $p$ with respect to the cardinality $Q=p^n$. In order to classify the known methods, it is convenient to use the famous $L$ function. If $\alpha\in[0,1]$ and $c>0$ are two constants, we set $$L_Q(\alpha,c)=\exp\left((c+o(1))(\log Q)^\alpha (\log \log Q)^{1-\alpha}\right),$$ and sometimes we simply write $L_Q(\alpha)$ if the constant $c$ is not made explicit. When we consider discrete logarithm computations, we treat separately families of finite fields for which the characteristic $p$ can be written in the form $p=L_Q(\alpha)$ for a given range of values for $\alpha$. We say that we are dealing with finite fields of [*small characteristic*]{} if the family is such that $\alpha < 1/3$; [*medium characteristic*]{} if we have $1/3<\alpha<2/3$; and [*large characteristic*]{} if $\alpha>2/3$.
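To get a concrete feel for the $L$-notation, the following Python helper (our own illustration, with the $o(1)$ term in the exponent dropped) evaluates $L_Q(\alpha,c)$: at $\alpha=0$ it reduces to the polynomial quantity $(\log Q)^c$, and at $\alpha=1$ to the fully exponential $Q^c$, with subexponential behaviour in between.

```python
import math

def L(Q, alpha, c):
    """Evaluate L_Q(alpha, c), ignoring the o(1) term in the exponent."""
    lq = math.log(Q)
    return math.exp(c * lq ** alpha * math.log(lq) ** (1 - alpha))

Q = 2 ** 512                     # an illustrative field size
poly_like = L(Q, 0.0, 2.0)       # equals (log Q)^2
subexp = L(Q, 1 / 3, 1.923)      # rough shape of an NFS-type complexity
exp_like = L(Q, 1.0, 0.5)        # equals Q^(1/2)
```

The constant $1.923\approx\sqrt[3]{64/9}$ used above is only a sample value for the shape of the curve; the precise exponent constants discussed in this paper depend on the variant of NFS.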
In this article, we concentrate on the cases of medium and large characteristic. This covers also the situation where $p=L_Q(2/3)$, that we call the medium–large characteristic boundary case. We start with a brief overview of the general situation, including the small characteristic case for completeness (all the complexities mentioned here are based on unproven heuristics). The case of small characteristic is the one that has been improved in the most dramatic way in the recent years. Before 2013, the best known complexity of $L_Q(1/3, \sqrt[3]{32/9})$ was obtained with the Function Field Sieve [@Adl94; @AdHu99; @JoLe02; @JoLe06] but a series of improvements [@Jou13faster; @JouxL14; @BaGaJoTh14; @GoGrMGZu13; @GrKlZuPower2] has led to a quasi-polynomial complexity for fixed characteristic, and more generally to a complexity of $L_Q(\alpha+o(1))$ when $p = L_Q(\alpha)$, with $\alpha<1/3$. The case of large characteristic is covered by an algorithm called the Number Field Sieve (NFS) that is very close to the algorithm with the same name used for factoring integers [@LeLe93; @Gor93; @Schirokauer1993; @JoLe02; @Sch05]. This is particularly true for prime fields, and it shares the same complexity of $L_Q(1/3,\sqrt[3]{64/9})$. In the case of small extension degrees, the main reference is a variant by Joux, Lercier, Smart and Vercauteren [@JLSV06] who showed how to get the same complexity in the whole range of fields of large characteristic. The case of medium characteristic was also tackled in the same article, thus getting a complexity of $L_Q(1/3,\sqrt[3]{128/9})$, with another variant of NFS. The complexities listed above use versions of NFS where only two number fields are involved. It is however known that using more number fields can improve the complexity. For prime fields it has been done in [@Mat03; @CoSe06], while for large and medium characteristic, it has been recently studied in [@BarPie2014]. 
In all cases, the complexity remains of the form $L_Q(1/3,c)$, but the exponent constant $c$ is improved: in the large characteristic case we have $c=\sqrt[3]{(92 + 26\sqrt{13})/27}$, like for prime fields, while in the medium characteristic case, we have $c=\sqrt[3]{2^{13}/3^6}$. For the moment, these multiple number field variants have not been used for practical record computations (they have not yet been used either for records in integer factorization). In the medium–large characteristic boundary case, where $p=L_Q(2/3, c_p)$, the complexity given in [@BarPie2014] is also of the form $L_Q(1/3,c)$, where $c$ varies between $16/9$ and $\sqrt[3]{2^{13}/3^6}$ in a way that is non-monotonic with $c_p$. We also mention another variant of NFS that has been announced [@PiRaTh] that seems to be better in some range of $c_p$, when using multiple number fields. In terms of practical record computations, the case of prime fields has been well studied, with frequent announcements [@JoLeRecord05; @Kle07; @DSA180]. In the case of medium characteristic, there were also some large computations performed to illustrate the new methods; see Table 8 in [@JL07] and [@Zajac08; @hayasaka13]. However in the case of non-prime field of large characteristic, we are not aware of previous practical experiments, despite their potential interest in pairing-based cryptography. [**Summary of contributions.**]{} Our two main contributions are, on one side, new complexity results for the finite fields of medium characteristic, and on the other side, a practical record computation in a finite field of the form ${{\mathbb F}}_{p^2}$. Key tools for these results are two new methods for selecting the number fields; the first one is a generalization of the method by Joux and Lercier [@JoLe03] and we call the second one the conjugation method. It turned out that both of them have practical and theoretical advantages. 
On the theoretical side, the norms that must be tested for smoothness during NFS based on the conjugation method or the generalized Joux-Lercier method are smaller than the ones obtained with previous methods for certain kind of finite fields. Therefore, the probability of being smooth is higher, which translates into a better complexity. Depending on the type of finite fields, the gain is different: - In the medium characteristic finite fields, NFS with the conjugation method has a complexity of $L_Q(1/3, \sqrt[3]{96/9})$. This is much better than the complexity of $L_Q(1/3, \sqrt[3]{128/9})$ obtained in [@JLSV06] and also beats the $L_Q(1/3, \sqrt[3]{2^{13}/3^6})$ complexity of the multiple number field algorithm of [@BarPie2014]. - In the medium–large characteristic boundary case, the situation is more complicated, but there are also families of finite fields for which the best known complexity is obtained with the conjugation method or with the generalized Joux-Lercier method. The overall minimal complexity is obtained for fields with $p=L_Q(2/3, \sqrt[3]{12})$, where the complexity drops to $L_Q(1/3,\sqrt[3]{48/9})$ with the conjugation method. On the practical side, the two polynomials generated by the conjugation method (and for one of the polynomials with the generalized Joux-Lercier construction) enjoy structural properties: it is often possible to use computations with explicit units (as was done in the early ages of NFS for factoring, before Adleman introduced the use of characters), thus saving the use of Schirokauer maps that have a non-negligible cost during the linear algebra phase. Furthermore, it is also often possible to impose the presence of field automorphisms which can be used to speed-up various stages of NFS, as shown in [@JLSV06]. Finally, the presence of automorphisms can interact with the general NFS construction and lead to several units having zero virtual logarithms. 
This is again very interesting in practice, because some dense columns (explicit units or Schirokauer maps) can be erased in the matrix. A careful study of this phenomenon allowed us to predict precisely when it occurs. All these practical improvements do not change the complexity but make the computations faster. In fact, even though the conjugation method is at its best for medium characteristic, it proved to be competitive even for quadratic extensions. It was therefore used in our record computation of discrete logarithm in the finite field ${{\mathbb F}}_{p^2}$ for a random-looking prime $p$ of 80 decimal digits. The running time was much less than what is required to solve the discrete logarithm problem in a prime field of similar size, namely 160 decimal digits. [**Outline.**]{} In Section \[sec:refresher\] we make a quick presentation of NFS, and we insist on making precise the definitions of virtual logarithms in the case of explicit units and in the case of Schirokauer maps. In Section \[sec:galois\] we show how to obtain a practical improvement using field automorphisms, again taking care of the two ways of dealing with units. Then, in Section \[sec:vanishing\] we explain how to predict the cases where the virtual logarithm of a unit is zero, and in Section \[sec:units\] we show how to use this knowledge to reduce the number of Schirokauer maps if we do not use explicit units. Finally, in Section \[sec:polyselect\] we present our two new methods for selecting polynomials, the complexities of which are analyzed in Section \[sec:complexity\]. We conclude in Section \[sec:effective\] with a report about our practical computation in ${{\mathbb F}}_{p^2}$. The number field sieve and virtual logarithms {#sec:refresher} ============================================= Sketch of the number field sieve algorithm ------------------------------------------ In a nutshell, the number field sieve for discrete logarithms in ${{\mathbb F}}_{p^n}$ is as follows. 
In the first stage, called polynomial selection, two polynomials $f,g$ in ${{\mathbb Z}}[x]$ are constructed (we assume that $\deg f \geqslant \deg g$), such that their reductions modulo $p$ have a common monic irreducible factor $\varphi_0$ of degree $n$. For simplicity, we assume that $f$ and $g$ are monic. We let $\varphi$ be a monic polynomial of ${{\mathbb Z}}[x]$ whose reduction modulo $p$ equals $\varphi_0$. Let $\alpha$ and $\beta$ be algebraic numbers such that $f(\alpha)=0$ and $g(\beta)=0$ and let $m$ be a root of $\varphi_0$ in ${{\mathbb F}}_{p^n}$, allowing us to write ${{\mathbb F}}_{p^n}={{\mathbb F}}_p(m)$. Let $K_f$ and $K_g$ be the number fields associated to $f$ and $g$ respectively, and ${{\mathcal O}}_f$ and ${{\mathcal O}}_g$ their rings of integers. For the second stage of NFS, called relation collection or sieve, a smoothness bound $B$ is chosen and we consider the associated factor base $${{\mathcal F}}=\{\text{prime ideals ${\mathfrak q}$ in ${{\mathcal O}}_f$ and ${{\mathcal O}}_g$ of norm less than }B\},$$ which we decompose into ${{\mathcal F}}= {{\mathcal F}}_f \cup {{\mathcal F}}_g$ according to the ring of integers to which the ideals belong. An integer is $B$-smooth if all its prime factors are less than $B$. For any polynomial $\phi(x)\in{{\mathbb Z}}[x]$, the algebraic integer $\phi(\alpha)$ (resp. $\phi(\beta)$) in $K_f$ (resp. $K_g$) is $B$-smooth if the corresponding principal ideal $\phi(\alpha){{\mathcal O}}_f$ (resp. $\phi(\beta){{\mathcal O}}_g$) factors into prime ideals that belong to ${{\mathcal F}}_f$ (resp. ${{\mathcal F}}_g$). This is almost, but not exactly, equivalent to requiring that the norm $\operatorname{Res}(\phi,f)$ (resp. $\operatorname{Res}(\phi,g)$) is $B$-smooth.
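To make this norm-smoothness test concrete for linear polynomials $\phi(x)=a-bx$, here is a minimal Python sketch (the helper names are ours, not from any NFS implementation): up to sign, $\operatorname{Res}(a-bx,f)=\sum_i f_i a^i b^{\deg f-i}$, and $B$-smoothness is checked by plain trial division.

```python
def norm_linear(f_coeffs, a, b):
    # |Res(a - b*x, f)| = |sum_i f_i * a^i * b^(d - i)| up to sign,
    # with f_coeffs listed from degree 0 upwards
    d = len(f_coeffs) - 1
    return abs(sum(c * a**i * b**(d - i) for i, c in enumerate(f_coeffs)))

def is_smooth(n, B):
    # B-smoothness test by trial division (fine for small toy examples)
    p = 2
    while p * p <= n:
        while n % p == 0:
            n //= p
        p += 1
    # after the loop, n is 1 or a prime: the remaining cofactor must be <= B
    return n <= B
```

For instance, with $f=x^2+1$ and $(a,b)=(2,3)$ the norm is $13$, which is $13$-smooth but not $11$-smooth.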
In the sieve stage, one collects $\#{{\mathcal F}}$ polynomials $\phi(x)\in{{\mathbb Z}}[x]$ with coprime coefficients and degree bounded by $t-1$, for a parameter $t\geq 2$ to be chosen, such that both $\phi(\alpha)$ and $\phi(\beta)$ are $B$-smooth, so that we get [*relations*]{} of the form: $$\label{eq:doubly smooth} \left\{ \begin{array}{l} \phi(\alpha){{\mathcal O}}_f=\prod_{{\mathfrak q}\in{{\mathcal F}}_f}{\mathfrak q}^{\operatorname{val}_{\mathfrak q}\left(\phi(\alpha)\right)}\\ \phi(\beta){{\mathcal O}}_g=\prod_{\mathfrak{r}\in{{\mathcal F}}_g} \mathfrak{r}^{\operatorname{val}_\mathfrak{r}\left(\phi(\beta)\right)}.\\ \end{array} \right.$$ The norm of $\phi(\alpha)$ (resp. of $\phi(\beta)$) is the product of the norms of the ideals in the right hand side and will be (crudely) bounded by the size of the finite field; therefore the number of ideals involved in a relation is less than $\log_2 (p^n)$. One can also remark that the ideals that can occur in a relation have degree at most the degree of $\phi$, that is, $t-1$. Therefore, it makes sense to include in ${{\mathcal F}}$ only the ideals of degree at most $t-1$ (for a theoretical analysis of NFS one can consider the variant where only ideals of degree one are included in the factor base). In order to estimate the probability of getting a relation for a polynomial $\phi$ with given degree and size of coefficients, we make the common heuristic assumption that the integer $\operatorname{Res}(\phi,f)\cdot \operatorname{Res}(\phi,g)$ is $B$-smooth with the same probability as a random integer of the same size, and that the bias due to powers is negligible. Therefore, reducing the expected size of this product of norms is the main criterion when selecting the polynomials $f$ and $g$. In the linear algebra stage, each relation is rewritten as a linear equation between the so-called virtual logarithms of the factor base elements. We recall this notion in Section \[ssec:virtual logarithms\].
We make the usual heuristic that this system has a space of solutions of dimension one. Since the system is sparse, an iterative algorithm like Wiedemann’s [@Wiedemann1986] is used to compute a non-zero solution in quasi-quadratic time. This gives the (virtual) logarithms of all the factor base elements. In principle, the coefficient ring of the matrix is ${{\mathbb Z}}/(p^n-1){{\mathbb Z}}$, but it is enough to solve it modulo each prime divisor $\ell$ of $p^n-1$ and then to recombine the results using the Pohlig-Hellman algorithm [@PohligHellman1978]. Since one can use Pollard’s method [@Pol78] for small primes $\ell$, we can suppose that $\ell$ is larger than $L_{p^n}(1/3)$. This allows us to assume that $\ell$ is coprime to $\operatorname{Disc}(f)$, $\operatorname{Disc}(g)$, the class numbers of $K_f$ and $K_g$, and the orders of the roots of unity in $K_f$ and $K_g$. These assumptions are used in many places in the rest of the article, sometimes implicitly. In the last stage of the algorithm, called individual logarithm, the discrete logarithm of any element $z=\sum_{i=0}^{n-1}z_i m^i$ of ${{\mathbb F}}_{p^n}$ is computed. For this, we associate to $z$ the algebraic number $\overline{z}=\sum_{i=0}^{n-1}z_i\alpha^i$ in $K_f$ and check whether the corresponding principal ideal factors into prime ideals of norms bounded by a quantity $B'$ larger than $B$. We also ask the prime ideals to be of degree at most $t-1$. If $\overline{z}$ does not satisfy these smoothness assumptions, then we replace $z$ by $z^e$ for a randomly chosen integer $e$ and try again. This allows us to obtain a linear equation similar to those of the linear system, in which one of the unknowns is $\log z$. The second step of the individual logarithm stage consists in obtaining relations between a prime ideal and prime ideals of smaller norm, until all the ideals involved are in ${{\mathcal F}}$. This allows us to backtrack and obtain $\log z$.
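The Pohlig-Hellman recombination mentioned above can be sketched as follows — a toy Python illustration in a small prime field, with brute-force logarithms standing in for the subexponential methods (NFS, Pollard) used on each prime-order subgroup in practice, and assuming for simplicity that the group order is squarefree:

```python
def crt(residues, moduli):
    # Chinese Remainder Theorem for pairwise coprime moduli
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

def dlog_prime_order(g, h, q, p):
    # brute-force logarithm in the order-q subgroup of F_p^*
    x, t = 0, 1
    while t != h:
        t = t * g % p
        x += 1
    return x

def pohlig_hellman(g, h, p, prime_factors):
    # logarithm of h in base g in F_p^*, with p - 1 the squarefree
    # product of the primes in prime_factors
    n = p - 1
    residues = [dlog_prime_order(pow(g, n // q, p), pow(h, n // q, p), q, p)
                for q in prime_factors]
    return crt(residues, prime_factors)
```

For instance, with $p=31$ and $p-1=2\cdot 3\cdot 5$, the logarithm of $3^{17}\equiv 22$ in base $3$ is recovered as $17$.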
Virtual logarithms {#ssec:virtual logarithms}
------------------

In this section, we recall the definition of virtual logarithms, while keeping in mind that in the rest of the article, we are going to use either explicit unit computations or Schirokauer maps. The constructions work independently in each number field, so we explain them for the field $K_f$ corresponding to the polynomial $f$. During NFS, this is also applied to $K_g$. We start by fixing a notation for the “reduction modulo $p$” map that will be used in several places of the article. Let $\rho_f$ be the map from ${{\mathcal O}}_f$ to ${{\mathbb F}}_{p^n}$ defined by the reduction modulo the prime ideal $\mathfrak{p}$ above $p$ that corresponds to the factor $\varphi$ of $f$ modulo $p$. This is a ring homomorphism. Furthermore, if the norm of $z$ is coprime to $p$, then $\rho_f(z)$ is non-zero in ${{\mathbb F}}_{p^n}$. We can therefore extend $\rho_f$ to the set of elements of $K_f$ whose norm has a non-negative valuation at $p$. Since in this article we will often consider the discrete logarithms of the images under $\rho_f$, we restrict its definition to the elements of $K_f$ whose norm is coprime to $p$, for which the image is non-zero. Let $h$ be the class number of $K_f$, which we assume to be coprime to the prime $\ell$ modulo which the logarithms are computed. We also need to consider the group of units $U_f$ in ${{\mathcal O}}_f$. By Dirichlet’s theorem it is a finitely generated abelian group of the form $$U_f \sim U_{tors} \times {{\mathbb Z}}^{r},$$ where $r$ is the unit rank given by $r = r_1 + r_2 - 1$, where $r_1$ is the number of real roots of $f$ and $2 r_2$ the number of complex roots, and $U_{tors}$ is cyclic. Any unit $\eta \in U_f$ can be written $$\eta = \varepsilon_0^{u_0} \prod_{j=1}^{r} \varepsilon_j^{u_j}$$ for [*fundamental units*]{} $\varepsilon_j$, $j \geq 1$, and $\varepsilon_0$ a root of unity.
For each prime ideal ${\mathfrak q}$ in the factor base ${{\mathcal F}}_f$, the ideal ${\mathfrak q}^h$ is principal and therefore there exists a generator $\gamma_{\mathfrak q}$ for it. It is not at all unique, and the definition of the virtual logarithms will depend on the choice of the fundamental units and of the set of generators for all the ideals of ${{\mathcal F}}_f$. We denote by $\Gamma$ this choice, and will use it as a subscript in our notations to remember the dependence on $\Gamma$. In particular, the notation $\log_\Gamma$ used just below means that the definition of the virtual logarithm depends on the choice of $\Gamma$, and does not mean that the logarithm is given in base $\Gamma$; in fact, throughout the article we do not make explicit the generator used as a basis for the logarithm in the finite field. Let ${\mathfrak q}$ be an ideal in the factor base ${{\mathcal F}}_f$, and $\gamma_{\mathfrak q}$ the generator for its $h$-th power, given by the choice $\Gamma$. Then the virtual logarithm of ${\mathfrak q}$ w.r.t. $\Gamma$ is given by $$\log_\Gamma{\mathfrak q}\equiv h^{-1}\log(\rho_f(\gamma_{\mathfrak q})) \mod \ell,$$ where the $\log$ notation on the right-hand side is the discrete logarithm function in ${{\mathbb F}}_{p^n}$.
In the same manner, we define the virtual logarithms of the units by $$\log_\Gamma\varepsilon_j \equiv h^{-1}\log(\rho_f(\varepsilon_j)) \mod \ell.$$ We now use this definition to show that for any polynomial $\phi$ yielding a relation, we can obtain a linear expression between the logarithm of $\rho_f(\phi(\alpha))$ in the finite field and the virtual logarithms of the ideals involved in the factorization of the ideal $\phi(\alpha){{\mathcal O}}_f$: $$\phi(\alpha){{\mathcal O}}_f=\prod_{{\mathfrak q}\in{{\mathcal F}}_f} {\mathfrak q}^{\operatorname{val}_{{\mathfrak q}}\left( \phi(\alpha) \right)}.$$ After raising this equation to the power $h$, we get an equation between principal ideals that can be rewritten as the following equation between field elements: $$\phi(\alpha)^h=\varepsilon_0^{u_{\phi, 0}} \prod_{j=1}^{r}\varepsilon_j^{u_{\phi,j}} \prod_{{\mathfrak q}\in{{\mathcal F}}_f} \gamma_{\mathfrak q}^{\operatorname{val}_{{\mathfrak q}}\left( \phi(\alpha) \right)},$$ where the $u_{\phi,j}$ are integers used to express the unit that pops up in the process. We then apply the map $\rho_f$, and use the fact that it is a homomorphism. We obtain therefore $$\rho_f(\phi)^h = \rho_f(\varepsilon_0)^{u_{\phi,0}}\prod_{j=1}^{r} \rho_f(\varepsilon_j)^{u_{\phi,j}}\prod_{{\mathfrak q}\in{{\mathcal F}}_f} \rho_f(\gamma_{\mathfrak q})^{\operatorname{val}_{{\mathfrak q}}\left( \phi(\alpha) \right)},$$ from which we deduce our target equation by taking logarithms on both sides: $$\label{eq:explicit} \log\left(\rho_f(\phi(\alpha))\right) \equiv \sum_{j=1}^ru_{\phi,j}\log_\Gamma\varepsilon_j + \sum_{{\mathfrak q}\in{{\mathcal F}}_f}\operatorname{val}_{\mathfrak q}\left(\phi(\alpha)\right) \log_\Gamma{\mathfrak q}\mod \ell.$$ In this last step, the contribution of the root of unity $\varepsilon_0$ has disappeared. Indeed, the following simple lemma states that its logarithm vanishes modulo $\ell$.
\[lem:roots of unity\] Let $\varepsilon_0$ be a torsion unit of order $r_0$ and assume that $\gcd(hr_0, \ell) = 1$. Then we have $\log_\Gamma\varepsilon_0 \equiv 0 \mod \ell$. Since $\varepsilon_0^{r_0}=1$ in $K_f$, we have $\rho_f(\varepsilon_0)^{r_0} = 1$ in ${{\mathbb F}}_{p^n}$ and we get $hr_0 \log_\Gamma \varepsilon_0 \equiv 0 \mod \ell$; the conclusion follows since $\gcd(hr_0, \ell) = 1$. In order to make Equation \[eq:explicit\] explicit for a given $\phi$ that yields a relation, it is necessary to compute the class number $h$ of $K_f$, to find the generators of all the ${\mathfrak q}^h$ and to compute a set of fundamental units. These are well known to be difficult problems except for polynomials $f$ with tiny coefficients. We now recall an alternate definition of virtual logarithms based on the so-called Schirokauer maps, for which none of the above have to be computed explicitly. Let $K_\ell$ be the multiplicative subgroup of $K_f^*$ of elements whose norms are coprime to $\ell$. A Schirokauer map is a map $\Lambda:(K_\ell)/(K_\ell)^\ell\rightarrow ({{\mathbb Z}}/\ell{{\mathbb Z}})^{r}$ such that - $\Lambda(\gamma_1\gamma_2)=\Lambda(\gamma_1)+\Lambda(\gamma_2)$ ($\Lambda$ is linear); - the restriction of $\Lambda$ to $U_f$ is surjective ($\Lambda$ preserves the unit rank). Schirokauer [@Schirokauer1993] proposed a fast-to-evaluate map satisfying these conditions that we recall now. Let us first define an integer, namely the LCM of the exponents required to apply Fermat’s theorem in each residue field modulo $\ell$: $$\epsilon=\text{lcm}\{\ell^\delta-1,\ \text{such that}\ f(x)\bmod \ell\ \text{has an irreducible factor of degree }\delta\}.$$ Then, by construction, for any element $\gamma$ in $K_\ell$, we have $\gamma^\epsilon$ congruent to $1$ in all the residue fields above $\ell$. Therefore, the map $$\label{eq:polschi} \gamma(\alpha) \mapsto \frac{\gamma(x)^\epsilon-1}{\ell}\bmod (\ell, f(x)),$$ is well defined for $\gamma\in K_\ell$.
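To make this construction concrete, here is a toy Python sketch (our own illustrative code, not from an NFS package). It assumes for simplicity that $f$ is monic and irreducible modulo $\ell$, so that $\epsilon=\ell^{\deg f}-1$, and computes $(\gamma^\epsilon-1)/\ell \bmod (\ell, f)$ by square-and-multiply modulo $(\ell^2, f)$:

```python
def polymulmod(a, b, f, m):
    # multiply polynomials a*b modulo (m, f); coefficient lists are
    # low-degree first, f is monic of degree d = len(f) - 1
    d = len(f) - 1
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % m
    for k in range(len(res) - 1, d - 1, -1):   # reduce modulo monic f
        c = res[k]
        if c:
            for i in range(d + 1):
                res[k - d + i] = (res[k - d + i] - c * f[i]) % m
    return res[:d]

def schirokauer(gamma, f, ell):
    # coordinates of (gamma^eps - 1)/ell mod (ell, f),
    # assuming f irreducible modulo ell
    d = len(f) - 1
    eps = ell**d - 1
    m = ell * ell
    result = [1] + [0] * (d - 1)
    base = [c % m for c in gamma] + [0] * (d - len(gamma))
    e = eps
    while e:                                    # square-and-multiply
        if e & 1:
            result = polymulmod(result, base, f, m)
        base = polymulmod(base, base, f, m)
        e >>= 1
    result[0] = (result[0] - 1) % m
    assert all(c % ell == 0 for c in result)    # gamma^eps = 1 above ell
    return [(c // ell) % ell for c in result]
```

For $f=x^2+1$ and $\ell=3$, one finds $\Lambda(1+\alpha)=(2,0)$, while the unit-free element $\alpha$ maps to $(0,0)$; the coordinates of the image of this map are then used as in the text.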
Taking the coordinates of the image of this map in the basis $1,X,\ldots,X^{\deg f-1}$, we can expect to find $r$ independent linear combinations of these coordinates. They then form a Schirokauer map. In [@Sch05], Schirokauer gave heuristic arguments for the existence of such independent linear combinations; and in practice, in most cases, taking the first $r$ coordinates is enough. From now on, we work with a fixed choice of Schirokauer map that we denote by $\Lambda$. We start by taking another set of $r$ independent units: for each $j\in[1,r]$, we choose a unit $\varepsilon_j$ such that $$\Lambda(\varepsilon_j)=(0,\ldots,0,h,0,\ldots,0),$$ where the coordinate $h$ is in the $j$-th position. We can then refine the choice of the generators of the $h$-th powers of the factor base ideals, so that we get another definition of the virtual logarithms. Let $\Lambda$ be a Schirokauer map as described above. Let ${\mathfrak q}$ be an ideal in the factor base ${{\mathcal F}}_f$, and $\gamma_{\mathfrak q}$ an (implicit) generator for its $h$-th power, such that $\Lambda(\gamma_{\mathfrak q}) = 0$. Then the virtual logarithm of ${\mathfrak q}$ w.r.t. $\Lambda$ is given by $$\log_\Lambda{\mathfrak q}\equiv h^{-1}\log(\rho_f(\gamma_{\mathfrak q})) \mod \ell.$$ The virtual logarithms of the units are defined in a similar manner: $$\log_\Lambda\varepsilon_j \equiv h^{-1}\log(\rho_f(\varepsilon_j)) \mod \ell.$$ As shown in [@Sch05], by an argument similar to the case of explicit generators, one can write $$\label{eq:implicit} \log\left(\rho_f(\phi(\alpha))\right)\equiv \sum_{j=1}^r\lambda_j\left(\phi(\alpha)\right)\log_\Lambda\varepsilon_j +\sum_{{\mathfrak q}\in{{\mathcal F}}_f}\operatorname{val}_{\mathfrak q}\left(\phi(\alpha)\right) \log_\Lambda{\mathfrak q}\mod \ell,$$ where $\lambda_j$ is the $j$-th coordinate of $\Lambda$. Explicit units or Schirokauer maps?
-----------------------------------

The above equation can be written for the two polynomials $f$ and $g$, and hence we obtain a linear equation relating only virtual logarithms. We remark that it is perfectly valid to use the virtual logarithms in their explicit version for one of the polynomials if it is feasible, while using Schirokauer maps on the other side. Using explicit units requires computing a generator for each ideal in the factor base, and therefore the polynomial must have small coefficients (and small class number). Many useful techniques and algorithms are described in [@LeLe93]. These include generating units and generators in some box or ellipsoid of small lengths, and recovering units using floating point computations. These are quite easy to implement and are fast in practice. We can make some simplifications when $K_f$ has non-trivial automorphisms, since in this case the generators of several ideals can be computed from one another using automorphisms (see Section \[sec:galois\]). In the general case, one uses Schirokauer maps whose coefficients are elements of ${{\mathbb Z}}/\ell{{\mathbb Z}}$ for a large prime $\ell$. In our experiments, the values of the Schirokauer maps seem to spread over the full range $[0,\ell-1]$ and must be stored on $\log_2\ell$ bits. In a recent record [@DSA180], each row of the matrix consisted on average of $100$ non-zero entries in the interval $[-10,10]$ and two values in $[0,\ell-1]$, for a prime $\ell$ of several machine words. It is then worthwhile to make additional computations in order to reduce the number of Schirokauer maps. This motivated our study in Section \[sec:units\].

Exploiting automorphisms {#sec:galois}
========================

Using automorphisms of the fields involved in a discrete logarithm computation is far from being a new idea.
It was already proposed by Joux, Lercier, Smart and Vercauteren [@JLSV06] and was a key ingredient in many of the recent record computations in small characteristic [@JouxRecord13; @GrKlZu14]. In this section we recall the basic idea and make explicit the interaction with both definitions of virtual logarithms, with or without Schirokauer maps.

Writing Galois relations
------------------------

The results of this subsection apply potentially to both number fields $K_f$ and $K_g$ independently. Therefore, we will express all the statements with the notations corresponding to the polynomial $f$ (which, again, we assume to be monic for simplicity). We assume that $K_f$ has an automorphism $\sigma$, and we denote by $A_\sigma$ and $A_{\sigma^{-1}}$ the polynomials of ${{\mathbb Q}}[x]$ such that $\sigma(\alpha)=A_\sigma(\alpha)$ and $\sigma^{-1}(\alpha)=A_{\sigma^{-1}}(\alpha)$. For any subset $I$ of $K_f$, we denote by $I^\sigma$ the set $\{\sigma(x)\mid x\in I\}$. Let $q$ be a rational prime not dividing the index $\left[{{\mathcal O}}_f:{{\mathbb Z}}[\alpha]\right]$ associated to the polynomial $f$. Then, any prime ideal above $q$ of degree one can be generated by two elements of the form $I=\langle q,\alpha-r\rangle$ for some root $r$ of $f$ modulo $q$. If the denominators of the coefficients of $A_\sigma$ and $A_{\sigma^{-1}}$ are not divisible by $q$, then we have $$I^\sigma=\left\langle q,\alpha-A_{\sigma^{-1}}(r)\right\rangle.$$ Since $\sigma^{-1}$ is an automorphism, we have $f\left( A_{\sigma^{-1}}(\alpha) \right)=0$. This is equivalent to $f(A_{\sigma^{-1}}(x))\equiv 0 \pmod{f(x)}$, and then $f(A_{\sigma^{-1}}(x))=u(x) f(x)$ for some polynomial $u\in{{\mathbb Q}}[x]$. By evaluating at $r$ we obtain $f(A_{\sigma^{-1}}(r))\equiv 0\pmod q$. Then, by Dedekind’s Theorem, $J=\left\langle q,\alpha-A_{\sigma^{-1}}(r)\right\rangle$ is a prime ideal of degree one.
Since $q$ and $A_{\sigma^{-1}}(r)$ are rational, we have $J^{\sigma^{-1}}=\langle q,A_{\sigma^{-1}}(\alpha)-A_{\sigma^{-1}}(r)\rangle$. Since the polynomial $ A_{\sigma^{-1}}(x)-A_{\sigma^{-1}}(r)$ is divisible by $x-r$, $J^{\sigma^{-1}}$ is contained in $\langle q,\alpha-r\rangle=I$. Therefore, $J$ is contained in $I^\sigma$. But $J$ is prime, hence maximal, so $J=I^\sigma$. Before stating the main result on the action of $\sigma$ on the virtual logarithms, we need the following result on the Schirokauer maps. \[lem:kernel\] Let $\Lambda$ be a Schirokauer map modulo $\ell$ associated to $K_f$ and let $\sigma$ be an automorphism of $K_f$. Assume in addition that this Schirokauer map is based on the construction of Equation \[eq:polschi\]. Then we have $$\ker \Lambda= \ker (\Lambda\circ \sigma).$$ Let $A_\sigma(x)\in{{\mathbb Z}}[x]$ be such that $A_\sigma(\alpha)=\sigma(\alpha)$. If $\gamma=P(\alpha)$ is in the kernel of $\Lambda$, then there exist $u,v\in {{\mathbb Z}}[x]$ such that $$P(x)^\epsilon-1=\ell^2u(x)+\ell v(x) f(x).$$ By substituting $A_\sigma(x)$ for $x$, we obtain $$P(A_\sigma(x))^\epsilon-1=\ell^2u(A_\sigma(x))+\ell v(A_\sigma(x)) f(A_\sigma(x)).$$ Since $\sigma$ is an automorphism of $K_f$, $f(A_\sigma(x))$ is a multiple of $f(x)$. Hence, we obtain that $\sigma(\gamma)=P(A_\sigma(\alpha))$ is in the kernel of $\Lambda$. When $f$ is an even polynomial, i.e. $f(-x)=f(x)$, the map $\sigma(x)=-x$ is an automorphism of the number field $K_f={{\mathbb Q}}[x]/f(x)$. Consider the Schirokauer map as defined in Equation \[eq:polschi\]. We denote by $\Lambda=(\lambda_1,\ldots,\lambda_r)$ the first $r$ coordinates in the basis $1,X,\ldots,X^{\deg f-1}$, and we assume that they are independent, so that $\Lambda$ is indeed a Schirokauer map. Then applying the automorphism, we get $\Lambda\circ\sigma=(\lambda_1,-\lambda_2,\lambda_3,-\lambda_4,\ldots, (-1)^{r+1}\lambda_r)$, and we can check that its kernel coincides with the kernel of $\Lambda$.
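The action of $\sigma$ on degree-one prime ideals can be illustrated numerically: such ideals above $q$ correspond to roots of $f$ modulo $q$, and $\sigma$ maps $\langle q,\alpha-r\rangle$ to $\langle q,\alpha-A_{\sigma^{-1}}(r)\rangle$. A toy Python sketch (our own helper, with coefficients listed from degree $0$ upwards) for an even polynomial, where $A_\sigma=A_{\sigma^{-1}}=-x$:

```python
def ideal_orbits(f_coeffs, q, A):
    # degree-one prime ideals above q correspond to roots r of f mod q;
    # the automorphism sends <q, alpha - r> to <q, alpha - A(r) mod q>
    roots = [r for r in range(q)
             if sum(c * pow(r, i, q) for i, c in enumerate(f_coeffs)) % q == 0]
    return {r: A(r) % q for r in roots}

# f = x^4 + 1 is even, so sigma(alpha) = -alpha is an automorphism
orbits = ideal_orbits([1, 0, 0, 0, 1], 17, lambda r: -r)
```

Here the four roots $\{2,8,9,15\}$ of $x^4+1$ modulo $17$ are paired into the two orbits $\{2,15\}$ and $\{8,9\}$.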
The following counter-example shows that the condition that $\Lambda$ is constructed from Equation \[eq:polschi\] is necessary for Lemma \[lem:kernel\] to hold. Let $\Lambda=(\lambda_1,\ldots,\lambda_r)$ be a Schirokauer map of $K_f$ with respect to $\ell$, $\sigma$ an automorphism of $K_f$ and ${\mathfrak q}$ a prime ideal. Then $\Lambda'=(\lambda_1+ \operatorname{val}_{{\mathfrak q}}(\cdot),\lambda_2,\lambda_3,\ldots,\lambda_r)$ does not satisfy $\ker \Lambda'=\ker (\Lambda'\circ \sigma)$. Indeed, let $\gamma$ be a generator of $({\mathfrak q}^{\sigma^{-1}})^h$ with $\Lambda(\gamma)=0$. On the one hand we have $\Lambda'(\gamma)=0$. On the other hand, the first coordinate of $\Lambda'(\sigma(\gamma))$ is the valuation at ${\mathfrak q}$ of $\sigma(\gamma)$, which is non-zero because $\sigma(\gamma)$ is in ${\mathfrak q}$. \[th:galois\] We keep the same notations as above, where in particular $\varphi$ is a degree-$n$ irreducible factor of $f$ modulo $p$. Let $\sigma$ be an automorphism of $K_f$ different from the identity such that $$\varphi(\rho_f(A_\sigma(\alpha)))=0.$$ Then, there exists a constant $\kappa\in[1,\operatorname{ord}(\sigma)-1]$ such that the following holds: 1. Let $\Gamma$ be a choice of explicit generators that is compatible with $\sigma$, i.e. such that for any prime ideal ${\mathfrak q}$ the generators for the $h$-th powers of ${\mathfrak q}$ and $\sigma({\mathfrak q})$ are conjugates: $$\gamma_{\sigma({\mathfrak q})} = \sigma(\gamma_{\mathfrak q}).$$ Then we have for any prime ideal ${\mathfrak q}$: $$\log_\Gamma {\mathfrak q}^\sigma \equiv p^\kappa \log_\Gamma {\mathfrak q}\pmod \ell.$$ 2.
For any Schirokauer map $\Lambda$ which has a polynomial formula (as in Lemma \[lem:kernel\]) and for any prime ideal ${\mathfrak q}$, we have $$\log_\Lambda {\mathfrak q}^\sigma \equiv p^\kappa \log_\Lambda {\mathfrak q}\pmod\ell.$$ Since $\rho_f(\sigma(\alpha))$ is a root of $\varphi$ other than $m=\rho_f(\alpha)$, the map $T(x)\mapsto T(A_\sigma(x))$ is an element of $\operatorname{Gal}({{\mathbb F}}_{p^n}/{{\mathbb F}}_p)$ other than the identity. So, there exists a constant $\kappa\in[1,\operatorname{ord}(\sigma)-1]$ such that $A_\sigma(x)=x^{p^\kappa}$ for all $x\in{{\mathbb F}}_{p^n}$. In particular, if ${\mathfrak q}$ is a prime ideal and $\gamma_{\mathfrak q}$ is a generator of ${\mathfrak q}^h$, we have $$\label{eq:p^k} \log \rho_f(\sigma(\gamma_{\mathfrak q})) = p^\kappa \log(\rho_f(\gamma_{\mathfrak q})).$$ In the first assertion of the theorem, it is assumed that $\sigma(\gamma_{\mathfrak q})$ is precisely the generator used for $\sigma({\mathfrak q})^h$, and therefore the relation between virtual logarithms follows from their definition. For the second assertion, the compatibility of the generators is deduced from the definition of the virtual logarithms using Schirokauer maps. Indeed, for any prime ideal ${\mathfrak q}$, the generator used for the definition of $\log_\Lambda{\mathfrak q}$ is such that $\Lambda(\gamma_{\mathfrak q})=0$. By Lemma \[lem:kernel\], $\gamma_{\mathfrak q}$ is also in the kernel of $\Lambda\circ\sigma$, that is $\Lambda(\sigma(\gamma_{\mathfrak q}))=0$, so that the conjugate of the generator is a valid generator for the conjugate of the ideal. The conclusion follows. We give an immediate application of the preceding results, which is useful when $K_f$ is an imaginary quadratic field. Let $q$ be a rational prime which is totally ramified in $K_f$, and write $q\, {{\mathcal O}}_f = \mathfrak{q}^n$. Assume that the unit rank of $K_f$ is $0$ and that $n$ is coprime to $\ell$. 
Then we have $$\log \mathfrak{q} \equiv 0 \bmod \ell.$$ Let $h$ be the class number of $K_f$ and $\gamma_\mathfrak{q}$ a generator of $\mathfrak{q}^h$ such that $\log \mathfrak{q}=h^{-1}\log \gamma_{\mathfrak{q}}$. Then one can write $q^h=u (\gamma_{\mathfrak{q}})^n$ for some root of unity $u$. By Lemma \[lem:roots of unity\], $\log u\equiv 0\mod \ell$, so $$\begin{aligned} \log (q^h) &\equiv& \log ((\gamma_{\mathfrak{q}})^n) \bmod \ell.\end{aligned}$$ Since the image of $q$ lies in the prime field, $q$ belongs to the subgroup of ${{\mathbb F}}_{p^n}$ given by the equation $x^{p-1}=1$, and since $\gcd(p-1,\ell)=1$, the argument of Lemma \[lem:roots of unity\] gives $\log q=0$. Then the result follows from the fact that $n$ is coprime to $\ell$.

Using Galois relations in NFS
-----------------------------

Let $\sigma$ and $\tau$ be automorphisms of $K_f$ and $K_g$, and let us assume that they verify the hypothesis of Theorem \[th:galois\]. We can split ${{\mathcal F}}_f$ and ${{\mathcal F}}_g$ respectively in orbits $({\mathfrak q},{\mathfrak q}^\sigma,\ldots)$ if ${\mathfrak q}$ is in ${{\mathcal F}}_f$ and $({\mathfrak q},{\mathfrak q}^\tau,\ldots)$ if ${\mathfrak q}$ is in ${{\mathcal F}}_g$. This allows us to reduce the number of unknowns in the linear algebra stage by a factor $\operatorname{ord}(\sigma)$ on the $f$-side and by a factor $\operatorname{ord}(\tau)$ on the $g$-side, at the price of having entries in the matrix that are roots of unity modulo $\ell$ instead of small integers. We collect as many relations as unknowns, hence also reducing the cost of the sieve. Note that the case where $\sigma$ or $\tau$ is the identity is not excluded in our discussion (in that case, the orbits are singletons on the corresponding side). As an example, in Section \[sec:polyselect\] we will see how to construct polynomials $f$ and $g$ whose number fields have automorphisms $\sigma$ and $\tau$, both of order $n$. Then, the number of unknowns is reduced by $n$ and the number of necessary relations is divided by $n$.
Since the cost of the linear algebra stage is $\lambda N^2$, where $N$ is the size of the matrix and $\lambda$ is its average weight per row, i.e. the number of non-zero entries per row, we obtain the following result. If $f$ and $g$ are two polynomials with automorphisms $\sigma$ and $\tau$ of order $n$ verifying the hypothesis of Theorem \[th:galois\], then we have: - a speed-up by a factor $n$ in the sieve; - a speed-up by a factor $n^2$ in the linear algebra stage. [**The particular case when $A_\sigma=A_\tau$.**]{} In Section \[subsec: our polyselect\], we will present a method to select polynomials $f$ and $g$ with automorphisms $\sigma$ and $\tau$; it ensures that $\sigma$ and $\tau$ are expressed by the same rational fraction $A_\sigma=A_\tau$. Moreover, the numerator and denominator are constant or linear polynomials. A typical example is when both polynomials are reciprocal, and then $\sigma(\alpha)=1/\alpha$ and $\tau(\beta)=1/\beta$. Let $\phi\in{{\mathbb Z}}[x]$ be a polynomial yielding a relation. When we apply $\sigma$ and $\tau$ to the corresponding system of equations \[eq:doubly smooth\], we get: $$\label{eq:conjugated} \left\{ \begin{array}{l} \phi(A_\sigma(\alpha)){{\mathcal O}}_f=\prod_{{\mathfrak q}\in{{\mathcal F}}_f}({\mathfrak q}^\sigma)^{\operatorname{val}_{\mathfrak q}\left(\phi(\alpha)\right)}\\ \phi(A_\tau(\beta)){{\mathcal O}}_g=\prod_{\mathfrak{r}\in{{\mathcal F}}_g}(\mathfrak{r}^\tau)^{\operatorname{val}_\mathfrak{r}\left(\phi(\beta)\right)}.\\ \end{array} \right.$$ Since $A_\sigma=A_\tau$ has a simple form, there is a chance that $\phi\circ A_\sigma$ has a numerator that is again a polynomial of the form that would be tested later. The relations being conjugates of each other, the second one brings no new information and should not be sieved. Again, we illustrate this on the example of reciprocal polynomials, where $A_\sigma(x) = A_\tau(x) = 1/x$. For polynomials $\phi(x) = a-bx$ of degree 1, the numerator of $\phi\circ A_\sigma$ is $b-ax$.
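Concretely, a sieve implementation could avoid testing both conjugate pairs by keeping one canonical representative of each orbit $\{(a,b),(b,a)\}$ — a minimal Python sketch with hypothetical helper names, not taken from any actual siever:

```python
def orbit_representative(a, b):
    # for reciprocal polynomials, sigma maps phi = a - b*x to (the numerator
    # of) b - a*x, so {(a, b), (b, a)} is one orbit; pick a canonical member
    return min((a, b), (b, a))

def dedup_sieve_pairs(pairs):
    # keep only one (a, b) per conjugate orbit
    seen, keep = set(), []
    for a, b in pairs:
        rep = orbit_representative(a, b)
        if rep not in seen:
            seen.add(rep)
            keep.append((a, b))
    return keep
```

For example, among the candidates $(3,5)$, $(5,3)$ and $(2,7)$, the conjugate $(5,3)$ is discarded.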
Therefore, it is interesting not to test the pair $(b,a)$ for smoothness if the pair $(a,b)$ has already been tested. If the sieve is implemented using the lattice sieve, e.g. in CADO-NFS [@CADO], one can collect exactly those polynomials $\phi$ such that $\phi(\alpha)$ is divisible by one of the ideals ${\mathfrak q}$ in a list given by the user. In this case, we make a list of ideals ${\mathfrak q}$ which contains exactly one ideal in each orbit $\{{\mathfrak q},\sigma({\mathfrak q}),\ldots,\sigma^{n-1}({\mathfrak q})\}$. Hence, we do not collect at the same time $\phi$ and the numerator of $\phi\circ A_\sigma$, unless the decomposition of $\phi(\alpha)$ into ideals contains two ideals ${\mathfrak q}$ and ${\mathfrak q}'$ which are in our list of ideals or conjugate to such an ideal.

Vanishing of the logarithms of units {#sec:vanishing}
====================================

In this section, we are again in the case where we study the fields $K_f$ and $K_g$ independently. Therefore, we stick to the notations for the $f$-side, but we keep in mind that this could be applied to $g$. Furthermore, for easier reading, in this section we drop the subscript $f$ for structures related to $f$: $K={{\mathbb Q}}(\alpha)$ is the number field of $f$, $U$ the unit group whose rank is denoted by $r$, and $\rho$ is the reduction map to ${{\mathbb F}}_{p^n}$. Also, some of the results of this section depend on the fact that $\ell$ is a factor of $p^n-1$ that is in the “new” part of the multiplicative group: we will therefore always assume that $\ell$ is a prime factor of $\Phi_n(p)$. The aim of this section is to give cases where the logarithms of some or all fundamental units are zero, more precisely units $u$ for which $\log\rho(u) \equiv 0 \bmod \ell$.

Units in subfields
------------------

The main case where we can observe units with zero virtual logarithms is when the subfield fixed by an automorphism as in Section \[sec:galois\] has some units.
\[th:fixed subfield\] With the same notations as above, assume that $v_1,\ldots,v_r$ are units of $K$ which form a basis modulo $\ell$. Let $\sigma$ be an automorphism of $K$ and assume that there exists an integer $A$ such that $A\not\equiv 1\mod \ell$ and, for all $x\in K$ of norm coprime to $p$, $$\label{eq:log automorphisms} \log\rho(\sigma(x))\equiv A \log\rho(x)\mod \ell.$$ Let $K^{\langle \sigma\rangle}$ be the subfield fixed by $\sigma$ and let $r'$ be its unit rank. Let $u'_1,\ldots,u'_{r'}$ be a set of units of $K^{\langle \sigma\rangle}$ which form a basis modulo $\ell$. Then, $K$ admits a basis $u_1,\ldots,u_r$ modulo $\ell$ such that the discrete logarithms of $\rho(u_1),\ldots,\rho(u_{r'})$ are zero modulo $\ell$. For any $x\in K^{\langle \sigma\rangle}$ we have $\sigma(x)=x$, so, when $\rho$ is defined, we have $\log(\rho(\sigma(x)))\equiv \log(\rho(x))\mod \ell$. Using Equation \[eq:log automorphisms\] we obtain that $\log(\rho(x))\equiv0\mod \ell$ for all $x$ in $K^{\langle \sigma\rangle}$ of norm coprime to $p$. In particular, for $1\leq i\leq r'$, we have $\log(\rho(u'_i))\equiv 0\mod \ell$. One checks that $u'_1,\ldots,u'_{r'}$ are units in $K$. Since they form a basis modulo $\ell$, there is no non-trivial product of powers of $u'_1,\ldots,u'_{r'}$ which is equal to an $\ell$-th power. Then, one can select $r-r'$ units among $v_1,\ldots,v_r$ to extend $u'_1,\ldots,u'_{r'}$ to a basis modulo $\ell$. \[ex:deg4\] Consider the family of CM polynomials $$\begin{array}{l} f=x^4+bx^3+ax^2+bx+1,\\ |a|<2,\hspace{1cm}|b|<1+a/2. \end{array}$$ There is always the order-2 automorphism $\sigma$ defined by $\sigma(T(\alpha))=T(1/\alpha)$ for all $T\in{{\mathbb Z}}[x]$, so that we have $A =p\equiv -1\mod \ell$ for use in the theorem. We claim that $r=r'=1$. Let us call $\alpha$ a complex root of $f$. Since $\beta = \alpha+1/\alpha$ is not rational and fixed by $\sigma$, we have $K^{\langle\sigma\rangle}={{\mathbb Q}}(\alpha+1/\alpha)$.
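This claim can be checked numerically for a given pair $(a,b)$: the fixed field is real when $P(Y)=Y^2+bY+(a-2)$ has real roots, and $f$ has a real root exactly when some real root $y$ of $P$ satisfies $|y|\geq 2$, since $x+1/x=y$ is then solvable over ${{\mathbb R}}$. A small Python sketch (our own helper, not from the paper):

```python
import math

def claim_holds(a, b):
    # True iff r = r' = 1 for f = x^4 + b x^3 + a x^2 + b x + 1 with |a| < 2:
    # the fixed field Q(beta) is real (P has real roots) and f has no real
    # roots (both roots y of P(Y) = Y^2 + bY + (a-2) satisfy |y| < 2)
    disc = b * b + 4 * (2 - a)
    if disc <= 0:
        return False               # fixed field would not be real
    s = math.sqrt(disc)
    y1, y2 = (-b - s) / 2, (-b + s) / 2
    return abs(y1) < 2 and abs(y2) < 2
```

For instance, $a=b=1$ (where $f$ is the cyclotomic polynomial $\Phi_5$) passes the check, while $a=1$, $b=2$ yields a quartic with a real root and fails it.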
Since $\beta$ is a root of the equation $P(Y) = Y^2+bY+(a-2)=0$, whose discriminant $b^2+4(2-a)$ is positive, $ K^{\langle\sigma\rangle} $ is real and we have $r'=1$. The roots of $f$ are roots of $x+1/x=y_1$ or $y_2$ for $y_1=-b/2-\sqrt{b^2/4+(2-a)}$ and $y_2= -b/2+\sqrt{b^2/4+(2-a)}$. Since $|b|<1+a/2$, we have $|y_1|<2$ and $|y_2|<2$, so $f$ has no real roots and $r=1$. A second proof is as follows. Note that $f(X)$ factors over ${{\mathbb Q}}(\beta)$ as $$(X^2-\beta X + 1) (X^2+(b+\beta)X+1).$$ We put $\varphi(X) = X^2-\beta X + 1$. Let $p$ be a prime for which $P(Y)$ is reducible modulo $p$ and $\varphi$ is not. The field towers in characteristic $0$ and modulo $p$ are then $${{\mathbb Q}}\ \subset\ K^{\langle\sigma\rangle} = {{\mathbb Q}}(\beta) = {{\mathbb Q}}[Y]/(P(Y))\ \subset\ K = {{\mathbb Q}}(\alpha) = {{\mathbb Q}}[X]/(f(X))$$ and $${\mathbb{F}_{p}}\ \subset\ {\mathbb{F}_{{p}^{2}}} = {\mathbb{F}_{p}}[X]/(\overline{\varphi}(X)).$$ Let $\ell\mid p+1$. If $\varepsilon_1$ is the fundamental unit of $K^{\langle\sigma\rangle}$ (and also of $K$ by construction), we have $\log\rho(\varepsilon_1) \equiv 0 \bmod \ell$.

Extra vanishing due to ${{\mathbb F}}_\ell$-linear action
---------------------------------------------------------

In the previous section, we have just seen that with a careful choice of the basis of units, some of the basis elements can have a zero virtual logarithm. In general, there could be another choice of basis that gives more zero logarithms. We call $\mathcal{R}_\text{opt}$ the maximum number of units of $K$ in a basis modulo $\ell$ that can have zero logarithm. With this notation, the result of Theorem \[th:fixed subfield\] becomes $\mathcal{R}_\text{opt} \geq r'$. The aim of this section is to prove a better lower bound for $\mathcal{R}_\text{opt}$.
By studying the ${{\mathbb F}}_\ell$-linear action of $\sigma$ on the units, we will be able to choose a basis for which the number $\mathcal{R}$ of independent units with zero logarithm is (often) larger than $r'$. The quantity $\mathcal{R}$ of this section is therefore a lower bound on the maximal number of units of $K$ in a basis modulo $\ell$ that can have zero logarithm, and we always have $\mathcal{R}_\text{opt} \geq \mathcal{R}$.

For the unit group $U$ of $K$, consider the vector space $U/U^\ell$ over ${{\mathbb F}}_\ell$. We assume that $\ell$ is large enough so that $K$ has no roots of unity of order $\ell$; therefore the dimension of $U/U^\ell$ is equal to $r$. We denote by $\overline{\sigma}$ the vector space homomorphism $U/U^\ell\rightarrow U/U^\ell$, $\overline{\sigma}(u U^\ell)=\sigma(u)U^\ell$. For simplicity, in the sequel, we drop the bar above $\overline{\sigma}$. Let $\mu_{\ell,\sigma}(x)$ be the minimal polynomial of $\sigma$; it is a divisor of $x^n-1$, since $\sigma$ has order $n$. Note however that $\overline{\sigma}$ can have a smaller order than $\sigma$, as seen in Example \[ex:deg4\], where $\sigma$ has order two but its restriction to the unit group is the identity. Since $\ell$ is a divisor of $\Phi_n(p)$, $\Phi_n(x)$ splits completely in ${{\mathbb F}}_\ell$. Then, $x^n-1$ and $\mu_{\ell,\sigma}$ split completely in ${{\mathbb F}}_\ell$: $$\mu_{\ell,\sigma}(x)=\prod_{i=1}^{\deg \mu_{\ell,\sigma}}(x-c_i),$$ where the $c_i$ are distinct elements of ${{\mathbb F}}_\ell$. We remark at this point that, as an endomorphism of the ${{\mathbb F}}_\ell$-vector space $U/U^\ell$, $\sigma$ is diagonalizable.
For any eigenvalue $c\in{{\mathbb F}}_\ell$ of $\sigma$, we denote by $E_c$ the eigenspace of $c$: $$E_{c}=\left\{u\in U\mid \exists v\in U, \sigma(u)=u^{c}v^\ell\right\},$$ and since the endomorphism is diagonalizable, the whole vector space is the direct sum of the eigenspaces: $$U/U^\ell=\bigoplus_{i=1}^{\deg \mu_{\ell,\sigma}} E_{c_i}.$$ The case covered by Theorem \[th:fixed subfield\] corresponds to the units that are fixed by $\sigma$, i.e. the units in the eigenspace $E_1$. The following lemma generalizes the result to the other eigenvalues.

If $c\in {{\mathbb F}}_\ell$ is an eigenvalue distinct from $A$ (as defined in (\[eq:log automorphisms\])), then, for all units $u$ whose class in $U/U^{\ell}$ belongs to $E_{c}$, we have $\log(\rho(u))\equiv 0\mod \ell$.

For such a unit $u$, we have $$\log(\rho(\sigma(u)))\equiv c\log(\rho(u))\mod \ell.$$ By the assumption on $A$, we also have $$\log(\rho(\sigma(u)))\equiv A\log(\rho(u))\mod \ell.$$ Since $c\not\equiv A\mod\ell$, we conclude that $\log(\rho(u))\equiv 0\mod\ell$.

\[cor:R\] Using the notations above, we have $\mathcal{R} = r-\dim E_A$, where $A$ is as in (\[eq:log automorphisms\]).

At this stage, we have a description of the units whose logarithms have to be considered during the NFS algorithm. But this description depends on $\ell$, whereas in many cases it is inherited from global structure and is the same for every $\ell$ dividing $\Phi_n(p)$. Therefore we consider the linear action of $\sigma$ on the units modulo torsion. Let $U_\text{tor}$ be the torsion subgroup of $U$ and $\varepsilon_0$ a generator of $U_\text{tor}$. Let $\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_r$ be a system of fundamental units. Let $M_\sigma$ be the matrix of the endomorphism induced by $\sigma$ on $U/U_\text{tor}$, in the basis $\varepsilon_1 U_\text{tor},\ldots, \varepsilon_rU_\text{tor}$. Then $M_\sigma$ belongs to $\operatorname{GL}_r({{\mathbb Z}})$.
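Concretely, the eigenspace dimensions can be read off by plain linear algebra modulo $\ell$: $\dim E_c = r - \operatorname{rank}(M_\sigma - cI \bmod \ell)$. The sketch below is our illustration; the $2\times 2$ matrix of order 3 (the companion matrix of $x^2+x+1$) is a toy example, not derived from a specific field:

```python
def rank_mod(M, ell):
    """Rank of an integer matrix over F_ell, by Gaussian elimination."""
    M = [[x % ell for x in row] for row in M]
    rank, ncols = 0, len(M[0])
    for col in range(ncols):
        piv = next((i for i in range(rank, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], -1, ell)          # modular inverse (Python 3.8+)
        M[rank] = [x * inv % ell for x in M[rank]]
        for i in range(len(M)):
            if i != rank and M[i][col]:
                c = M[i][col]
                M[i] = [(a - c * b) % ell for a, b in zip(M[i], M[rank])]
        rank += 1
    return rank

def dim_eigenspace(M, c, ell):
    """dim E_c = r - rank(M - c*I) over F_ell."""
    r = len(M)
    McI = [[M[i][j] - (c if i == j else 0) for j in range(r)] for i in range(r)]
    return r - rank_mod(McI, ell)

# Toy matrix of order 3 (companion matrix of x^2 + x + 1):
M_sigma = [[0, -1], [1, -1]]
ell = 7                      # x^2 + x + 1 splits mod 7, with roots 2 and 4
print([dim_eigenspace(M_sigma, c, ell) for c in (1, 2, 4)])   # [0, 1, 1]
```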
Since $M_\sigma$ is annihilated by the monic polynomial $x^n-1$, it admits a minimal polynomial $\mu_{{{\mathbb Z}},\sigma}$ with integer coefficients. Note that $\mu_{{{\mathbb Z}},\sigma}$ does not depend on the system of fundamental units used. The following lemma shows that finding roots modulo $\ell$ of $\mu_{{{\mathbb Z}},\sigma}$ gives local information about the vanishing of the logarithms modulo $\ell$.

\[lem:non emptyness\] For any root $c\in{{\mathbb F}}_\ell$ of $\mu_{{{\mathbb Z}},\sigma}(x)$, the eigenspace $E_c$ in $U/U^\ell$ has dimension at least $1$.

Since $M_\sigma$ has integer coefficients, its characteristic polynomial $\chi_{M_\sigma}$ is monic with integer coefficients. We deduce that $\mu_{{{\mathbb Z}},\sigma}$ is monic with integer coefficients. We claim that, for all primes $\ell$, $$\label{eq:mu} \mu_{{{\mathbb Z}},\sigma}=\mu_{\ell,\sigma}.$$ On the one hand, $\mu_{{{\mathbb Z}},\sigma}$ has the same irreducible factors over ${{\mathbb Q}}$ as the characteristic polynomial $\chi_{M_\sigma}$ of $M_\sigma$. Since $\mu_{{{\mathbb Z}},\sigma}$ divides $x^n-1$, which is squarefree over ${{\mathbb Q}}$, these factors occur with multiplicity one in $\mu_{{{\mathbb Z}},\sigma}$. Hence, $\mu_{{{\mathbb Z}},\sigma}$ is the product of the irreducible factors of $\chi_{M_\sigma}$, each taken with multiplicity one. On the other hand, $\mu_{\ell,\sigma}$ has the same irreducible factors as the characteristic polynomial of $\overline{\sigma}$, which is the reduction of $\chi_{M_\sigma}$ modulo $\ell$. Since $\overline{\sigma}$ is annihilated by $x^n-1$, which is squarefree modulo $\ell$, $\mu_{\ell,\sigma}$ is the product of the irreducible factors of $\chi_{M_\sigma}$ modulo $\ell$, each taken with multiplicity one. We obtain Equation (\[eq:mu\]). Finally, it is a classical property of the minimal polynomial that every one of its roots has a nonzero eigenspace.

We already mentioned the link between the eigenspace of $1$ and Theorem \[th:fixed subfield\].
We now make this more precise:

\[lem:dim E\_1\] Using the previous notations, we have $\dim E_1=r'$, except for a finite set of primes $\ell$.

Consider a system of fundamental units. By Theorem \[th:fixed subfield\] there exists a basis $(u_i)$, $1 \leq i\leq r$, of $U/U_\text{tors}$ such that the first $r'$ elements are fixed by $\sigma$ and no unit in the subgroup $V$ generated by the $u_i$, $r'+1\leq i\leq r$, is fixed by $\sigma$. After block-diagonalization, we can assume that $V/V_\text{tors}$ is stable by $\sigma$ and we let $M_\sigma$ be the matrix of $\sigma$ on $V/V_\text{tors}$. The determinant $D$ of $(M_\sigma-\text{id})$ is a nonzero integer, since no unit of $V$ is fixed by $\sigma$. If $\ell$ is prime to $D$, the determinant of $(\sigma-\text{id})$ on $V/V^\ell$ is non-zero. Hence, $\dim E_1\leq r'$, which completes the proof.

As a first application, we study the case of cyclic extensions of prime degree.

\[prop:cycl\] Let $n$ be an odd prime and $K/{{\mathbb Q}}$ a cyclic Galois extension of degree $n$. Let $p$ and $\ell$ be two primes such that $\Phi_n(p)$ is divisible by $\ell$. Let $\rho$ be a ring morphism which sends any element $x$ of $K$ with $\nu_p(x) \geq 0$ into the field ${{\mathbb F}}_{p^n}$. Let $\sigma$ be an automorphism of $K$ of order $n$ for which there exists a constant $\kappa$ such that, for all $x\in K$ with $\nu_p(x) \geq 0$, $\rho(\sigma(x))=\rho(x)^{p^\kappa}$. Then we have $\mathcal{R}=n-2$.

We want to compute $\mu_{{{\mathbb Z}},\sigma}$. Since $\sigma$ has order $n$, $\mu_{{{\mathbb Z}},\sigma}$ is a divisor of $x^n-1=(x-1)\Phi_n(x)$. By Lemma \[lem:dim E\_1\], $\dim \ker (\sigma-\text{id})=r'$, the unit rank of the subfield fixed by $\sigma$. In our case, this subfield is ${{\mathbb Q}}$, so $r'=0$. Then, we have $\mu_{{{\mathbb Z}},\sigma}=\Phi_n(x)$. Let $f$ be a defining polynomial of $K$. Since $f$ has odd degree, it has at least one real root $\alpha$. Since $K$ is Galois and $K={{\mathbb Q}}(\alpha)$, all the roots of $f$ are real, hence the unit rank of $K$ is $n-1$.
By Lemma \[lem:non emptyness\], since $\deg(\Phi_n)=n-1=\dim U/U^\ell$, all the eigenspaces of roots of $\Phi_n$ have dimension one. Using Corollary \[cor:R\], we have $\mathcal{R}=n-2$.

Fields of small degree
----------------------

We are now in a position to list the possible cases for fields of small degree. As a warm-up, we start with the case of degree 2. The imaginary case is of course trivial, since the unit rank is 0. For the real quadratic case, the unit rank $r$ is 1, and one could wonder whether there are cases where we can tell in advance that the virtual logarithm of the fundamental unit is 0 modulo $\ell$ with our method. In fact, this does not occur. Indeed, the automorphism $\sigma$ is of order $2$ and therefore the unit rank $r'$ of the fixed subfield is 0. By Lemma \[lem:dim E\_1\] the dimension of the eigenspace $E_1$ is therefore 0 as well. This is no surprise: the fundamental unit is not rational, so it is not fixed by $\sigma$. The next step is to study $\mu_{{{\mathbb Z}},\sigma}$. Since $\sigma$ has order $2$, $\mu_{{{\mathbb Z}},\sigma}$ divides $x^2-1$, and we have just seen that 1 is not an eigenvalue. Therefore we deduce that $\mu_{{{\mathbb Z}},\sigma} = x+1$, and the vector space $U/U^\ell$ is reduced to the eigenspace $E_{-1}$. Since $-1$ is precisely the value of $A$ in (\[eq:log automorphisms\]), we cannot conclude.

The cases of degree 3 and 5 are covered by Proposition \[prop:cycl\]. In Table \[tab:cases of automorphisms\], we list the cases for degrees 4 and 6. In all cases, a classification according to the signatures of the field and of the fixed subfield is enough to determine the value of $\mathcal{R}$.

\[thm:fdal\] The values of $\mathcal{R}$ for $K/{{\mathbb Q}}$ of degree 4 or 6 having non-trivial automorphisms are as given in Table \[tab:cases of automorphisms\].
  $\deg(K)$   $\operatorname{ord}(\sigma)$   sign($K$), sign($K^{\langle \sigma\rangle}$)   $\mu_{{{\mathbb Z}},\sigma}$   $\mathcal{R}$   $r$   $r-\mathcal{R}$
  ----------- ------------------------------ ---------------------------------------------- ------------------------------ --------------- ----- -----------------
  4           2                              (4,0), (2,0)                                   $(x-1)(x+1)$                   1               3     2
                                             (2,1), (2,0)                                   $(x-1)(x+1)$                   1               2     1
                                             (0,2), (0,1)                                   $x+1$                          0               1     1
                                             (0,2), (2,0)                                   $x-1$                          1               1     0
              4                              (4,0), -                                       $(x+1)(x^2+1)$                 2               3     1
                                             (0,2), -                                       $x+1$                          1               1     0
  6           2                              (0,3), (1,1)                                   $(x-1)(x+1)$                   1               2     1
                                             (0,3), (3,0)                                   $x-1$                          2               2     0
              3                              (6,0), (2,0)                                   $(x-1)(x^2+x+1)$               3               5     2
                                             (0,3), (0,1)                                   $x^2+x+1$                      1               2     1
              6                              (6,0), -                                       $(x+1)(x^2+x+1)(x^2-x+1)$      4               5     1
                                             (0,3), -                                       $x^2+x+1$                      1               2     1

  : Table of values of $\mathcal{R}$ for fields of degree 4 and 6.[]{data-label="tab:cases of automorphisms"}

Let us consider the various cases of Tab. \[tab:cases of automorphisms\]. In each case, we use a strategy of proof that is not so different from the one used for the real quadratic case above. In order to determine the minimal polynomial $\mu_{{{\mathbb Z}},\sigma}$, we consider the factors of $x^n-1$ in ${{\mathbb Z}}[x]$ and we use the fact that $\deg \mu_{\ell,\sigma}$ is at most $\dim(U/U^\ell)=r$.

- Case when sign($K$)=(4,0) and sign($K^{\langle \sigma \rangle}$)=(2,0). Then, $r=3$ and $r'=1$. Further, $x-1$ divides $\mu_{{{\mathbb Z}},\sigma}$ with multiplicity one. Since $\sigma$ is annihilated by $x^2-1$, the minimal polynomial is $\mu_{{{\mathbb Z}},\sigma}=x^2-1$. Hence, we have $\dim E_{-1}=2$. Since $A=-1$, we obtain $\mathcal{R}=r'=1$.

- Case when sign($K$)=(2,1) and sign($K^{\langle \sigma \rangle}$)=(2,0). Note first that (2,0) is the unique possibility for sign($K^{\langle \sigma \rangle}$). Indeed, if $\alpha$ is a real root of a defining polynomial of $K$, then ${{\mathbb Q}}\left(\alpha+\sigma(\alpha)\right)$ is fixed by $\sigma$ and has degree two, so it is $K^{\langle \sigma \rangle}$. Since this quadratic field is real, its signature is (2,0).
As above, we have $\dim E_1\geq 1$ and therefore $\mu_{{{\mathbb Z}},\sigma}=(x-1)(x+1)$, implying that $\mathcal{R}=1$.

- Case when sign($K$)=(0,2) and sign($K^{\langle \sigma \rangle}$)=(0,1). Here $r=1$ and $r'=0$. For sufficiently large values of $\ell$, by Lemma \[lem:dim E\_1\], the space $E_1$ of units fixed by $\sigma$ has dimension $r'=0$. Since the minimal polynomial divides $x^2-1$, we have $\mu_{{{\mathbb Z}},\sigma}=x+1$. We deduce that $\mathcal{R}=0$.

- Case when sign($K$)=(0,2) and sign($K^{\langle \sigma \rangle}$)=(2,0). Here we have $r=r'=1$. Then any unit is fixed by $\sigma$ and $E_1=U/U^\ell$, so $\mu_{{{\mathbb Z}},\sigma}=x-1$ and $\mathcal{R}=1$. We remark that the fields of Example \[ex:deg4\] fall in this category.

Note that $K$ is either totally real or its defining polynomial has no real root. Hence we have two cases:

- Case when sign($K$)=(4,0). Here we can apply to $\tau=\sigma^2$ the results obtained above for degree four fields with an automorphism of order two. Hence we have $\dim\ker(\sigma^2-1)=\dim\ker (\tau-1)=1$ and $\dim\ker(\sigma^2+1)=\dim\ker(\tau+1)=2$. The field fixed by $\sigma$ has degree $1$, so $r'=0$. Then, the minimal polynomial is $\mu_{{{\mathbb Z}},\sigma}=(x+1)(x^2+1)$, and we have $\mathcal{R}=2$.

- Case when sign($K$)=(0,2). Here the unit rank of $K$ is $r=1$, so the minimal polynomial is linear. Since $r'=0$, we have $\dim E_1=0$, so $\mu_{{{\mathbb Z}},\sigma}=x+1$. Since $A$ is of order $4$, it is not $-1$; hence $\dim E_A=0$ and we obtain $\mathcal{R}=1$. Note that here the group automorphism $\overline{\sigma}$ equals $-\text{id}$, so it has a smaller order than the field automorphism $\sigma$.

Here the signature of $K$ can be $(6,0)$, $(4,1)$, $(2,2)$ or $(0,3)$. We only deal with the case $(0,3)$ in the present version of our work. The unit rank of $K$ is $r=2$ and the minimal polynomial is a factor of $x^2-1$.
The value of $\mathcal{R}$ is determined by the signature of the subfield fixed by $\sigma$, which is cubic and can have signature (3,0) or (1,1).

- Case when sign($K^{\langle \sigma \rangle}$)=(1,1). The unit rank of the fixed subfield is $r'=1$ and $\mathcal{R}=1$.

- Case when sign($K^{\langle \sigma \rangle}$)=(3,0). Here we have $r'=2$, so $\dim E_1=r$ and $\mu_{{{\mathbb Z}},\sigma}=x-1$. This shows that $\mathcal{R}=2.$

Note that the signature ($r_{{\mathbb R}}$,$r_{{\mathbb C}}$) of $K$ satisfies $r_{{\mathbb R}}\equiv 0\mod 3$. Indeed, if a defining polynomial of $K$ has a real root $\alpha$, the roots $\sigma(\alpha)$ and $\sigma^2(\alpha)$ are also real. The two possible values for the signature are (6,0) and (0,3).

- Case when sign($K$)=(6,0). Since $K$ is real, $K^{\langle \sigma \rangle}$ is also real, so $r'=1$. As in the previous cases, the polynomial $\mu_{{{\mathbb Z}},\sigma}$ is a factor of $(x-1)(x^2+x+1)$. Since $\dim E_1=r'=1$ is neither $0$ nor $r$, we have $\mu_{{{\mathbb Z}},\sigma}=(x-1)(x^2+x+1)$. Since the characteristic polynomial over ${{\mathbb Q}}$ of $\sigma$ restricted to $V=\text{ker}(\sigma^2+\sigma+1)$ has the same irreducible factors, we have $\chi_\sigma|_V=(x^2+x+1)^2$. Suppose, by contradiction, that for a root $c$ of $\mu_{{{\mathbb Z}},\sigma}$ modulo $\ell$ we have $\dim E_c\geq 3$. Then $\chi_{\sigma}$ modulo $\ell$ would be divisible by $(x-c)^3$, which is impossible: modulo $\ell$, $\chi_\sigma=(x-1)(x-c_1)^2(x-c_2)^2$, where $c_1$ and $c_2$ are the distinct roots of $x^2+x+1$, so every root has multiplicity at most two. Hence, for the two roots of $x^2+x+1$ modulo $\ell$, we have $\dim E_c=2$, and it follows that $\mathcal{R}=3$.

- Case when sign($K$)=(0,3). The unit rank of $K$ is $r=2$, so the minimal polynomial is $x-1$ or $x^2+x+1$. The fixed subfield has degree $2$, so we cannot have $r'=2$. This shows that $\mu_{{{\mathbb Z}},\sigma}=x^2+x+1$. Then, $r'=0$ and $\mathcal{R}=1$.

As in the case of cyclic quartic Galois extensions, either $K$ is totally real or its defining polynomial has no real roots.

- Case when sign($K$)=(6,0).
The unit rank of $K$ is $5$, so the minimal polynomial is a factor of $(x-1)(x+1)(x^2+x+1)(x^2-x+1)$ of degree at most $5$. Since the subfield fixed by $\sigma$ is ${{\mathbb Q}}$, we have $r'=0$, so $\dim E_1=0$. The subfield fixed by $\sigma^3$ is a cyclic cubic Galois extension, so its unit rank is two. Hence $\dim\text{ker}(\sigma^3-1 )=2$. Since $\dim E_1=0$, $x^2+x+1$ divides $\mu_{{{\mathbb Z}},\sigma}$. We also deduce that $\dim \text{ker}(\sigma^3+1)=3$. The subfield fixed by $\sigma^2$ has degree two, so its unit rank is at most one. This implies that $\dim\ker(\sigma+1)\neq 3$, so $x^2-x+1$ divides $\mu_{{{\mathbb Z}},\sigma}$. Since $\dim\ker(\sigma^2-\sigma+1)$ is even while $\dim \text{ker}(\sigma^3+1)=3$ is odd, $\dim E_{-1}\neq 0$, so $(x+1)$ divides $\mu_{{{\mathbb Z}},\sigma}$. We conclude that $\mu_{{{\mathbb Z}},\sigma}=(x+1)(x^2+x+1)(x^2-x+1)$ and $\mathcal{R}=4$.

- Case when sign($K$)=(0,3). The subfield fixed by $\sigma^3$ is a cyclic cubic extension of ${{\mathbb Q}}$, so its unit rank is $2$. This means that $\dim \text{ker}(\sigma^3-1)=2$, so the minimal polynomial divides $x^3-1$. The subfield fixed by $\sigma$ is ${{\mathbb Q}}$, so $\dim E_1=0$ and $\mu_{{{\mathbb Z}},\sigma}=x^2+x+1$. We obtain that $\mathcal{R}=1$.

Effective computations
----------------------

Theorem \[thm:fdal\] tells us that, in many cases, we do not need to consider the logarithms of all the units. If we have a system of units which forms a basis modulo $\ell$, we can make the theorem effective by solving linear systems. This is a weaker condition than computing a system of fundamental units. One can also investigate the use of Schirokauer maps to avoid any effective computation of units. Let us see a series of examples which illustrate Theorem \[thm:fdal\].

### Minkowski units {#sec:minkowski}

A [*Minkowski unit*]{} for $K$, if it exists, is a unit $\varepsilon$ such that a subset of the conjugates of $\varepsilon$ forms a system of fundamental units.
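The linear systems mentioned above can be made explicit: for an eigenvalue $c\neq A$, a nullspace vector of $(M_\sigma - cI)$ modulo $\ell$ gives the exponents of a product of the known units that lies in $E_c$, hence has zero virtual logarithm. A minimal sketch of ours (the $2\times 2$ order-3 matrix is a toy example, not derived from a specific field):

```python
def nullspace_mod(M, ell):
    """Basis of the right nullspace of a square matrix M over F_ell."""
    n = len(M)
    A = [[x % ell for x in row] for row in M]
    pivots, rank = {}, 0
    for col in range(n):                      # Gauss-Jordan reduction
        piv = next((i for i in range(rank, n) if A[i][col]), None)
        if piv is None:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        inv = pow(A[rank][col], -1, ell)
        A[rank] = [x * inv % ell for x in A[rank]]
        for i in range(n):
            if i != rank and A[i][col]:
                c = A[i][col]
                A[i] = [(a - c * b) % ell for a, b in zip(A[i], A[rank])]
        pivots[col] = rank
        rank += 1
    basis = []
    for col in range(n):                      # one vector per free column
        if col in pivots:
            continue
        v = [0] * n
        v[col] = 1
        for pcol, prow in pivots.items():
            v[pcol] = (-A[prow][col]) % ell
        basis.append(v)
    return basis

# Exponent vectors e such that u = prod v_i^e_i satisfies
# sigma(u) = u^c * (an ell-th power), i.e. u lies in E_c:
M_sigma, ell, c = [[0, -1], [1, -1]], 7, 2    # toy data
McI = [[M_sigma[i][j] - (c if i == j else 0) for j in range(2)] for i in range(2)]
for e in nullspace_mod(McI, ell):
    print(e)          # [3, 1]: the unit v_1^3 * v_2 lies in E_2
```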
Some results on the classification of such fields exist; we will come back to them in the final version of this work. As an example, when $K$ is cyclic of degree 3, it is totally real and there always exists a Minkowski unit, as shown by Hasse [@Hasse48 p. 20]. In that case, using the proof of \[th:galois\], we see that $$\log\rho(\varepsilon^{\sigma}) \equiv p^{\kappa} \log\rho(\varepsilon) \bmod \ell,$$ so that we need to find $\log\rho(\varepsilon) \bmod \ell$ only. This matches Proposition \[prop:cycl\], from which we read that only $r-\mathcal{R}=2-1=1$ (well chosen) Schirokauer map is required.

### The degree 4 cases {#sec:exdeg4}

When the signature is $(4, 0)$ and the Galois group is $C_4$, we can make the structure of $U_K$ precise; see [@Hasse48; @Gras79]. The first case is when $K$ admits a Minkowski unit, that is, $\varepsilon$ such that $U_K = \langle -1, \varepsilon, \varepsilon^{\sigma}, \varepsilon^{\sigma^2}\rangle$. We then use the same reasoning as in Section \[sec:minkowski\] to reduce the number of logarithms needed to $1$. In the second case, $U_K = \langle -1, \epsilon_1, \epsilon_\chi, \epsilon_\chi^{\sigma}\rangle$, where $\epsilon_1$ is the fundamental unit of the quadratic subfield and $\epsilon_\chi$ is a generator of the group of relative units, that is, of the $\eta \in U_K$ such that $\operatorname{N}_{K/K_2}(\eta) = \pm 1$. We gain two logarithms, since we can use the Galois action for $\log\rho(\epsilon_\chi^{\sigma})$, and we know $\log\rho(\epsilon_1)$. Note that Table \[tab:cases of automorphisms\] predicts that the two cases above, with or without relative units, lead to the same number of Schirokauer maps: $r-\mathcal{R}=1$.

For signature $(0, 2)$, the unit rank of $K$ is $1$, and the fundamental unit is the one coming from the real quadratic subfield, so we do not need any logarithm at all. To be more precise, let us detail the case of our favorite example: $f(X) = X^4+1$, which defines the $8$-th roots of unity, say $K = {{\mathbb Q}}(\zeta_8)$.
The Galois group of $f$ is $V_4$ and $K$ has the automorphisms $\sigma_1: x \mapsto -x$ and $\sigma_2: x \mapsto 1/x$. We compute that $$K^{\langle\sigma_1\rangle} = {{\mathbb Q}}(i),\quad K^{\langle\sigma_2\rangle} = {{\mathbb Q}}(\sqrt{2}),\quad K^{\langle\sigma_1\sigma_2\rangle} = {{\mathbb Q}}(\sqrt{-2}).$$ The corresponding factorizations of $f(X)$ are $$f(X) = (X^2+i) (X^2-i),$$ $$f(X) = (X^2 - \sqrt{2} X + 1) (X^2 + \sqrt{2} X + 1),$$ $$f(X) = (X^2 - \sqrt{-2} X - 1) (X^2 + \sqrt{-2} X - 1).$$ Since $f$ has signature $(0, 2)$, we have $U_K = \langle \zeta_8\rangle \times \langle\varepsilon\rangle$, where $\varepsilon$ comes from $K^{\langle\sigma_2\rangle}$, the only real quadratic subfield. By Theorem \[thm:fdal\], we do not need any logarithm of units for use in NFS.

Reducing the number of Schirokauer maps {#sec:units}
=======================================

In this section, we use the results of the preceding section to reduce the number of Schirokauer maps needed in NFS-DL. We use the notations of Section \[sec:vanishing\]. A system of $r$ units is a basis modulo $\ell$ if its image in $U/U^\ell$ is a basis. Let $p$ be a prime and $n$ an integer such that the reduction of $g$ modulo $p$ has an irreducible factor of degree $n$. Let $\ell$ be a prime factor of $p^n-1$, coprime to $p-1$. In order to reduce the number of Schirokauer maps associated to $g$, we follow the steps below:

1. We find a system of $r$ elements in $U$ which form a basis modulo $\ell$.

2. We compute an integer $\mathcal{R}\leq r$, as large as possible, and a system of $r$ elements $u_1,\ldots,u_r$ in $U$ which form a basis modulo $\ell$, such that the discrete logarithms of $\rho(u_1),\ldots,\rho(u_{\mathcal{R}})$ are zero modulo $\ell$.

3.
Using any set of $r$ Schirokauer maps and the system of units above, we compute a set of Schirokauer maps $\lambda_1,\ldots,\lambda_r$ such that the NFS algorithm can be run using only the last $r-\mathcal{R}$ Schirokauer maps $\lambda_{\mathcal{R}+1},\ldots,\lambda_{r}$.

We do not discuss the first point here. Point (2) was studied in Section \[sec:vanishing\]. Point (3) is solved by the corollary of the following theorem.

Let $\lambda_1,\ldots,\lambda_r$ be a set of Schirokauer maps and let $u_1,\ldots,u_r$ be a system of effectively computed units in $U$ whose image in $U/U^\ell$ forms a basis. Then there exists a set of effectively computable Schirokauer maps $\lambda'_1,\ldots,\lambda'_r$ such that, for $1\leq i,j\leq r$, $$\label{eq:dual of lambda} \lambda'_i(u_j)=\left\{\begin{array}{l} 1\qquad \text{ if }i=j,\\ 0\qquad \text{ otherwise.} \end{array}\right.$$

Let $L=(l_{i,j})$ be the $r\times r$ matrix with entries $l_{i,j}=\lambda_i(u_j)$ and let $C=(c_{i,j})$ be the inverse of $L$ modulo $\ell$. Then, we put $$\lambda'_i=\sum_{n=1}^r c_{i,n} \lambda_n .$$ We have $\lambda'_i(u_j)=(CL)_{i,j}$, so the maps $\lambda'_i$ satisfy the condition in Equation (\[eq:dual of lambda\]).

Let $u_1,\ldots,u_r$ be a set of effectively computed units which form a basis modulo $\ell$. Assume that for some integer $\mathcal{R}$, $1\leq \mathcal{R}\leq r$, the first $\mathcal{R}$ units $u_1,\ldots,u_{\mathcal{R}}$ satisfy $\log\left(\rho(u_i)\right)\equiv 0\mod \ell$. Then, there exists a set of $r$ effectively computable Schirokauer maps $\lambda'_1,\ldots,\lambda'_r$ such that NFS can be run with the last $r-\mathcal{R}$ maps instead of the complete set of $r$ maps.

Using any set of Schirokauer maps, we compute $\lambda'_1,\ldots,\lambda'_r$ such that Equation (\[eq:dual of lambda\]) holds.
By the virtual logarithm equation recalled in Section \[ssec:virtual logarithms\], when running NFS with the maps $\lambda'_1,\ldots,\lambda'_r$, the linear algebra stage computes the virtual logarithms of the ideals in the factor base together with $r$ constants $\chi_i$, $1\leq i\leq r$, such that $$\label{eq:sum lambda} \log(\rho(\gamma))\equiv \sum_{{\mathfrak q}\in \mathcal{F}}\log {\mathfrak q}\operatorname{val}_{{\mathfrak q}}(\gamma) + \sum_{i=1}^r \chi_i \lambda'_i(\gamma) \mod \ell.$$ For $1\leq i\leq r$, injecting $\gamma=u_i$ into Equation (\[eq:sum lambda\]) we obtain $\chi_i=\log(\rho(u_i))$. For $1\leq i\leq \mathcal{R}$ we have $\log(\rho(u_i))\equiv 0\mod \ell$, and therefore $\chi_i\equiv0\mod \ell$. Hence, Equation (\[eq:sum lambda\]) can be rewritten with $r-\mathcal{R}$ Schirokauer maps: $$\label{eq:sum lambda 2} \log(\rho(\gamma))\equiv \sum_{{\mathfrak q}\in \mathcal{F}}\log {\mathfrak q}\operatorname{val}_{{\mathfrak q}}(\gamma) + \sum_{i=\mathcal{R}+1}^{r} \chi_i \lambda'_i(\gamma) \mod \ell.$$

(Example \[ex:deg4\], continued) The corollary above states that the polynomials in the family described in Example \[ex:deg4\] do not require any Schirokauer map. Moreover, note that if $f_1$ and $f_2$ are two polynomials in this family and $\mu_1,\mu_2$ are two positive rationals such that $\mu_1+\mu_2=1$, then $\mu_1f_1+\mu_2f_2$ also belongs to this family. A more important example is that of cubic polynomials with an automorphism of order three. In that case, we can effectively compute a linear combination $\Lambda_{1,2}$ of any two Schirokauer maps $\Lambda_1$ and $\Lambda_2$ so that NFS can be run with $\Lambda_{1,2}$ as the unique Schirokauer map.

Two new methods of polynomial selection {#sec:polyselect}
=======================================

In this section, we propose two new methods to select the polynomials $f$ and $g$, in the case of finite fields that are low-degree extensions of prime fields. The first one is an extension to non-prime fields of the method used by Joux and Lercier [@JoLe03] for prime fields.
The second one, which relies heavily on rational reconstruction, insists on having coefficients of size $O(\sqrt{p})$ for $g$. For both methods, $f$ has very small coefficients, of size $O(\log p)$.

The state-of-the-art methods of polynomial selection {#subsec:JLSV06 polyselect}
------------------------------------------------

Joux, Lercier, Smart and Vercauteren [@JLSV06] introduced two methods of polynomial selection: one which is the only option for medium-characteristic finite fields, and one which is the only method known for non-prime large-characteristic fields.

### The first method of JLSV {#sssec:JLSV1}

Described in [@JLSV06 §2.3], this method is best adapted to the medium-characteristic case. It produces two polynomials $f,g$ of the same degree $n$, each with coefficients of size $\sqrt{p}$. One starts by selecting a polynomial $f$ of the form $f = f_v + a f_u$ with a parameter $a$ to be chosen. Then one computes a rational reconstruction $(u,v)$ of $a$ modulo $p$ and one defines $ g = v f_v + u f_u$. Note that, by construction, we have $g = v \cdot f \bmod p$. Also note that both polynomials have coefficients of size $\sqrt{p}$.

Take $p = 1000001447$ and $a = 44723 \geqslant \lceil \sqrt{p} \rceil$. One has $ f = x^4 - 44723 x^3 - 6 x^2 + 44723 x + 1$ and $g = 22360 x^4 - 4833 x^3 - 134160 x^2 + 4833 x + 22360$ with $u/v = 4833 / 22360 $ a rational reconstruction of $a$ modulo $p$. The norm product is $N_f N_g = E^{2n} p = E^{2n} Q^{1/n}$.

If one wants to use automorphisms as in Section \[sec:polyselect\], then one chooses $f$ in a family of polynomials which admit automorphisms. For example, when $n=4$, one can take $f$ in the family presented in Tab. \[tab:foster\], formed of degree-4 polynomials with cyclic Galois group of order four and an explicit automorphism: $f = (x^4 - 6 x^2 + 1) + a (x^3 - x) = f_v + a f_u$. Note that the second polynomial $g$ belongs to the same family and has the same automorphisms.
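The rational reconstruction of this example can be reproduced with a truncated extended Euclidean algorithm: run Euclid on $(p,a)$, tracking the cofactors of $a$, and stop as soon as the remainder drops below $\sqrt{p}$. This is a standard sketch, not the authors' implementation:

```python
from math import isqrt

def rational_reconstruction(a, p):
    """Find (u, v) with u = a*v mod p and |u|, |v| about sqrt(p),
    by stopping the extended Euclidean algorithm on (p, a) early."""
    bound = isqrt(p)
    r0, r1 = p, a % p
    t0, t1 = 0, 1          # invariant: r_i = a * t_i  (mod p)
    while r1 >= bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return r1, t1          # u = r1, v = t1

p, a = 1000001447, 44723
u, v = rational_reconstruction(a, p)
print(u, v)                # 4833 22360, as in the example
assert (a * v - u) % p == 0
```

With the pair $(u,v)=(4833,22360)$ one indeed has $u/v\equiv a \bmod p$, matching the polynomials $f$ and $g$ displayed above.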
### The second method of JLSV {#sssec:JLSV2}

The second method is described in [@JLSV06 §3.2]. It first computes $g$ of degree $n$, and then $f$ of degree $\deg f \geqslant n$. First one selects $g_0$ of degree $n$ with small coefficients. Then one chooses an integer $W \sim p^{1/(d +1)}$, but slightly larger, and sets $g = g_0(x + W)$. The lowest-degree coefficient of $g$ has size $W^n$, so we need to take into account the skewness of the coefficients. The polynomial $f$ is computed by reducing the lattice of polynomials of degree at most $d$ which are divisible by $g$ modulo $p$. We do this by defining the matrix $M$ of Sec. \[subsec: Joux Lercier polyselect\], Equation (\[eq:LLL generalization\]), with $\varphi = g$. We obtain a polynomial $f$ with coefficients of size $p^{n / (d +1)} = Q^{1/(d + 1)}$.

Consider again the case of $p=1125899906842783$ and $n=4$. We take for $g_0$ a polynomial of degree four with small coefficients, for example $g_0 = x^4 - x^3 - 6x^2 + x + 1$. We can have $\deg(f)=d$ for any value of $d\geq n$; we take $d=7$ for the example. We use $W = 77\geqslant p^{1/(d+1)}$, where we emphasize that the bound is $p^{1/(d+1)}$ and not $Q^{1/(d+1)}$, and we set $$g=g_0(x+W)=x^4 + 307x^3 + 35337x^2 + 1807422x + 34661012.$$ We construct the lattice of polynomials of degree at most $7$ which are divisible by $g$ modulo $p$. We obtain $$\begin{array}{c}f=12132118x^7 + 11818855x^6 + 2154686x^5 \qquad\qquad\qquad\\ \qquad\qquad\qquad- 7076039x^4 + 7796873x^3 + 7685308x^2 + 4129660x - 14538916.\end{array}$$ Note that $f$ and $g$ have coefficients of size $Q^{1/8}$. For comparison, the norm product is $E^{d+n}Q^{2/(d+1)}$. However, one might obtain a better norm product using the skewness notion introduced by Murphy [@Mur99]. Without entering into details, we use as a lower bound for the norm product the quantity $E^{d+n}Q^{3/(2(d+1))}$.
Indeed, the coefficients of $f$ and those of $g$ have size $Q^{1/(d+1)}$, and skewness cannot reduce the contribution of $g$ below $Q^{1/(2(d+1))}$. This bound is optimistic, but even so the new methods offer better performance.

The generalized Joux-Lercier method {#subsec: Joux Lercier polyselect}
-----------------------------------

In the context of prime fields, Joux and Lercier proposed a method in [@JoLe03] to select polynomials using lattice reduction. They start with a polynomial $f$ of degree $d+1$ with small coefficients, such that $f$ admits a root $m$ modulo $p$. Then, a matrix $M$ is constructed whose rows generate the lattice of polynomials of degree at most $d$ with integer coefficients that admit $m$ as a root modulo $p$. We denote by $\operatorname{LLL}(M)$ the matrix obtained by applying the LLL algorithm to the rows of $M$: $$\label{eq:LLL}M = \left[ \begin{array}{cccc} p & 0 & \cdots & 0 \\ -m & 1 & 0 & 0 \\ \vdots &\ddots & \ddots & 0 \\ -m^d &\cdots & 0 & 1 \\ \end{array} \right], ~ ~ \operatorname{LLL}(M) = \left[\begin{array}{c@{~~}c@{~~~}c@{~~~} c} g_0 & g_1 & \cdots & g_d \\ * & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & * \\ \end{array}\right]~.$$ The first row gives a polynomial $g = g_0 + g_1 x + \ldots + g_d x^d$ that has the root $m$ in common with $f$, and the pair of polynomials $(f,g)$ can be used for computing discrete logarithms in ${{\mathbb F}}_p$ with the NFS algorithm.

In order to tackle discrete logarithms in ${{\mathbb F}}_{p^n}$, we generalize this to polynomials $(f,g)$ that share an irreducible common factor $\varphi$ of degree $n$ modulo $p$. Let $d'$ be an integer parameter that we choose below. We select $f$, an irreducible polynomial in ${{\mathbb Z}}[x]$ of degree $d^{'}+1 \geqslant n$, with small coefficients, good sieving properties and an irreducible degree $n$ factor $\varphi= \sum_{i=0}^{n} \varphi_i x^i$ modulo $p$, which we force to be monic.
We define a $(d'+1)\times(d'+1)$ matrix $M$ whose rows generate the lattice of integer polynomials of degree at most $d'$ which admit $\varphi$ as a factor modulo $p$. Then, running the LLL algorithm on this matrix gives a matrix whose rows are generators with smaller coefficients. A possible choice for the matrix $M$ is as follows, where the missing coefficients are understood to be zero. $$\label{eq:LLL generalization} M = \left[\begin{array}{cccccc} p & & & & & \\ & \ddots & & & & \\ & & p & & & \\ \varphi_{0} & \varphi_{1}& \cdots & \varphi_{n} & & \\ & \ddots &\ddots & &\ddots & \\ & &\varphi_{0}& \varphi_{1} &\cdots & \varphi_{n} \\ \end{array}\right] \begin{array}{l} \left \rbrace \begin{array}{l} \\ \deg \varphi = n \mbox{ rows} \\ \\ \end{array}\right. \\ \left \rbrace \begin{array}{l} \\ \deg g + 1 - \deg \varphi \\ = d'+1-n \mbox{ rows} \\ \end{array}\right. \end{array} \operatorname{LLL}(M) = \left[\begin{array}{c c c c} g_0 & g_1 & \cdots & g_{d'} \\ & & & \\ & & & \\ & \multicolumn{2}{c}{*} & \\ & & & \\ & & & \\ \end{array}\right]~.$$ One can remark that, since $\varphi$ has been made monic, the determinant of $M$ is $\det(M) = p^n$. The first row of $\operatorname{LLL}(M)$ gives a polynomial $g$ of degree at most $d^{'}$ that shares with $f$ the common factor $\varphi$ of degree $n$ modulo $p$. The coefficients of $g$ have size approximately $(\det M)^{1/(d^{'}+1)} = p^{n/(d^{'}+1)}$, if we assume that the dimension stays small.

Note that, when $n=1$, this method produces the same pair $(f,g)$ as the method of Joux-Lercier. Indeed, in this case $\varphi=x-m$ and the rows of the matrix $M$ in Equation (\[eq:LLL generalization\]) generate the same lattice as the rows of the matrix $M$ in Equation (\[eq:LLL\]).

By considering a smaller matrix, it is possible to produce a polynomial $g$ whose degree is smaller than $d' = \deg f-1$. This does not seem to be a good idea.
Indeed, the size of the coefficients of $g$ would be the same as for a polynomial obtained by starting with a polynomial $f$ with coefficients of the same size but of a smaller degree ($d'$ or less).

We now discuss the criteria to select the parameter $d'=\deg f-1$ with respect to the bitsize of $p$. The most important quantity to minimize is the size of the product of the norms $\operatorname{Res}(\phi,f)\operatorname{Res}(\phi,g)$, for the typical polynomials $\phi$ that will be used. In this setting, the best choice is to stick to polynomials $\phi$ of degree 1, and we denote by $E$ a bound on their two coefficients that we will tune later. For any polynomial $P$, let us denote by ${{|P|}}_\infty$ the maximum absolute value of the coefficients of $P$. Since $f$ has been selected to have small coefficients, we obtain the following estimate for the product of the norms: $$|\operatorname{Res}(\phi,f)\operatorname{Res}(\phi,g)| \approx \left(E^{\deg(f)}\right)\left( ||g||_\infty E^{\deg(g)}\right),$$ where we omitted factors whose contribution is negligible. In Table \[tab: ||g||oo wrt deg f, g, varphi, Joux-Lercier\], we list the possible choices for the degrees that we expect to be practically relevant for discrete logarithms in ${{\mathbb F}}_{p^2}$ and ${{\mathbb F}}_{p^3}$.
  Field                                       $\deg \varphi$   $\deg f$   $\deg g$   $||g||_\infty$        $E^{\deg f}E^{\deg g}||g||_\infty$
  ------------------------------------------- ---------------- ---------- ---------- --------------------- ------------------------------------
  ${{\mathbb F}}_Q = {{\mathbb F}}_{p^{2}}$   2                4          3          $p^{1/2} = Q^{1/4}$   $Q^{1/4} E^{7} $
                                              2                3          2          $p^{2/3} = Q^{1/3}$   $Q^{1/3} E^{5} $
  ${{\mathbb F}}_Q = {{\mathbb F}}_{p^{3}}$   3                6          5          $p^{3/6} = Q^{1/6}$   $Q^{1/6} E^{11} $
                                              3                5          4          $p^{3/5} = Q^{1/5}$   $Q^{1/5} E^{9} $
                                              3                4          3          $p^{3/4} = Q^{1/4}$   $Q^{1/4} E^{7} $

  : Size of the product of the norms, for various choices of parameters with the generalized Joux-Lercier method, in ${{\mathbb F}}_{p^2}$ and ${{\mathbb F}}_{p^3}$.[]{data-label="tab: ||g||oo wrt deg f, g, varphi, Joux-Lercier"}

As for the value of the parameter $E$, although the asymptotic complexity analysis can give hints about its value, it is usually not reliable for fixed values. Therefore we prefer to use a rough approximation of $E$ using the values of the same parameter in the factoring variant of NFS as implemented in CADO-NFS. These values of $E$ w.r.t. $Q$ are collected in Table \[tab: E values for various Q\] and we will use them together with Table \[tab: ||g||oo wrt deg f, g, varphi, Joux-Lercier\] in order to plot the estimate of the running time in Figure \[fig:norm-product-f-g–wrt-Q–Fpn\] to compare with other methods. Note that, a posteriori, the norms product in our case is smaller than in the factoring variant of NFS. Hence, one can take slightly smaller values for $E$.
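With the values of $E$ from Table \[tab: E values for various Q\], the last column of Table \[tab: ||g||oo wrt deg f, g, varphi, Joux-Lercier\] can be evaluated numerically. A small Python sketch, counting everything in bits, for $Q$ of 180 dd ($Q = 598$ bits, $E = 28$ bits); the pair $(\deg g, \deg f) = (2,4)$ of the conjugation method introduced in the next subsection is included for comparison, and the helper name is ours:

```python
# Bit sizes of E^{deg f} E^{deg g} ||g||_inf for F_{p^2}, with Q = 598 bits
# and E = 28 bits (the 180 dd column of the table of E values).
# A term ||g||_inf = Q^a contributes a * Q_BITS bits.
Q_BITS, E_BITS = 598, 28

def norm_bits(deg_f, deg_g, g_exp):
    return (deg_f + deg_g) * E_BITS + g_exp * Q_BITS

gjl_23 = norm_bits(3, 2, 1 / 3)    # GJL, (deg g, deg f) = (2, 3)
gjl_34 = norm_bits(4, 3, 1 / 4)    # GJL, (deg g, deg f) = (3, 4)
conj_24 = norm_bits(4, 2, 1 / 4)   # conjugation, (deg g, deg f) = (2, 4)

# At this size the conjugation choice (2, 4) gives the smallest norm product.
assert conj_24 < gjl_23 < gjl_34
```

This kind of back-of-the-envelope computation is what Figure \[fig:norm-product-f-g–wrt-Q–Fpn\] plots over the whole range of $Q$.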
$$\begin{array}{|c||ccccccccc|} \hline Q (\mbox{dd}) & 60 & 80 & 100 & 120 & 140 & 160 & 180 & 204 & 220 \\ Q (\mbox{bits}) & 200 & 266 & 333 & 399 & 466 & 532 & 598 & 678 & 731 \\ \hline E (\mbox{bits}) & 19 & 20 & 21 & 23 & 25 & 27 & 28 & 29 & 30\\ \hline \end{array}$$

The conjugation method {#subsec: our polyselect}
----------------------

We propose another method to select polynomials for solving discrete logarithms in ${{\mathbb F}}_{p^n}$ with the following features: the resulting polynomial $f$ has degree $2n$ and small coefficients, while the polynomial $g$ has degree $n$ and coefficients of size bounded by about $\sqrt{p}$. In the next section, an asymptotic analysis shows that there are cases where this is more interesting than the generalized Joux-Lercier method; furthermore, it is also well suited for the small degree extensions that can be reached with the current implementations.

Let us take an example. We take the case of $n=11$ and $p = 134217931$, which is a random prime congruent to $1$ modulo $n$. The method is very general; this case is simply one of the easiest to present. We enumerate the integers $a=1,2,\ldots$ until $\sqrt{a}$ is irrational but exists in ${{\mathbb F}}_p$, i.e. the polynomial $x^2-a$ splits in ${{\mathbb F}}_p$. We call $\lambda$ a square root of $a$ in ${{\mathbb F}}_p$ and test if $x^n-\lambda$ is irreducible modulo $p$. If it is not the case, we continue and try the next value of $a$. For example $a=5$ works. Then we set $\lambda= 108634777$, a square root of $5$ in ${{\mathbb F}}_p$, and we put $\varphi=x^{11}-\lambda$. Next, we do a rational reconstruction of $\lambda$, i.e. we find two integers $u$ and $v$ of size $O(\sqrt{p})$ such that $u/v\equiv\lambda\mod p$. We find $u=1789$ and $v=10393$. The conjugation method consists in setting:

1.  $f=(x^{11}-\sqrt{5})(x^{11}+\sqrt{5})=x^{22}-5$;

2.  $g=vx^{11}-u=10393x^{11}-1789$.

By construction $f$ and $g$ are divisible by $\varphi$ modulo $p$.
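The numbers in this example are easy to check directly in Python: reducing modulo $\varphi = x^{11}-\lambda$ amounts to replacing $x^{11}$ by $\lambda$, and the irreducibility of $x^{11}-\lambda$ follows from $\lambda$ not being an 11th power in ${{\mathbb F}}_p$ (valid here since 11 divides $p-1$):

```python
# Direct verification of the F_{p^11} example.
p, lam, u, v = 134217931, 108634777, 1789, 10393

assert p % 11 == 1                      # 11 divides p - 1
assert pow(lam, 2, p) == 5              # lambda is a square root of 5 in F_p
assert pow(lam, (p - 1) // 11, p) != 1  # lambda is not an 11th power in F_p,
                                        # hence x^11 - lambda is irreducible
assert (u - v * lam) % p == 0           # u/v = lambda (mod p), u, v ~ sqrt(p)

# Modulo phi = x^11 - lambda one may replace x^11 by lambda, so
#   f = x^22 - 5      reduces to lambda^2 - 5 = 0 (mod p),
#   g = v x^11 - u    reduces to v*lambda - u = 0 (mod p),
# i.e. phi divides both f and g modulo p.
assert (pow(lam, 2, p) - 5) % p == 0 and (v * lam - u) % p == 0
```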
We continue with a construction that works for ${{\mathbb F}}_{p^2}$ when $p$ is congruent to 7 modulo 8. Let $p \equiv 7 \bmod 8$ and let $f = x^4 + 1$, which is irreducible over ${{\mathbb Q}}$. From the results in Section \[sec:exdeg4\], we use the factorization $(x^2 + \sqrt{2}x +1) (x^2 - \sqrt{2}x +1)$ of $f(x)$. Since $2$ is a square modulo $p$, we take $\varphi = x^2 + \sqrt{2}x + 1 \in {{{{\mathbb F}}_p}}[x]$. Now, by rational reconstruction of $\sqrt{2}$ in ${{\mathbb F}}_p$, we can obtain two integers $u,v \in {{\mathbb Z}}$ such that $\frac{u}{v}\equiv \sqrt{2}\mod p$, and $u$ and $v$ have size similar to $\sqrt{p}$. We define $g = v x^2 + ux + v$. Then $f$ and $g$ share a common irreducible factor of degree 2 modulo $p$, and verify the degree and size properties that we announced.

This construction can be made general: first, it is possible to obtain pairs of polynomials $f$ and $g$ with the claimed degree and size properties for any extension field ${{\mathbb F}}_{p^n}$; and second, in many small cases that are of practical interest, it is also possible to enforce the presence of automorphisms. The general construction is based on Algorithm \[alg:conjugation\].

Select $g_u(x), g_v(x)$, two polynomials with small integer coefficients, $\deg g_u < \deg g_v = n$
Select $\mu(Y)$, a monic quadratic polynomial with small integer coefficients, irreducible over ${{\mathbb Q}}$, such that $\mu$ has a root $\lambda$ in ${{\mathbb F}}_p$ and $g_v + \lambda g_u$ is irreducible in ${{\mathbb F}}_p[x]$
$(u,v)\gets$ a rational reconstruction of $\lambda$
$f\gets \operatorname{Res}_Y(\mu(Y), g_v(x) + Yg_u(x))$
$g\gets vg_v +u g_u $

The polynomials $(f,g)$ returned by Algorithm \[alg:conjugation\] verify:

1.  $f$ and $g$ have integer coefficients and degrees $2n$ and $n$ respectively;

2.  the coefficients of $f$ have size $O(1)$ and the coefficients of $g$ are bounded by $O(\sqrt{p})$;

3.  $f$ and $g$ have a common irreducible factor $\varphi$ of degree $n$ over ${{\mathbb F}}_p$.

The fact that $g$ has integer coefficients and is of degree $n$ is immediate by construction. As for $f$, since it is the resultant of two bivariate polynomials with integer coefficients, it also has integer coefficients.
Using classical properties of the resultant, $f$ can be seen as the product of the polynomial $ g_v(x) + Yg_u(x)$ evaluated in $Y$ at the two roots of $\mu(Y)$; therefore its degree is $2n$. Also, since all the coefficients of the polynomials involved in the definition of $f$ have size $O(1)$, and the degree $n$ is assumed to be “small”, the coefficients of $f$ are also $O(1)$. The size of the coefficients of $g$ follows from the output of the rational reconstruction of $\lambda$ in ${{\mathbb F}}_p$, which is expected to have sizes in $O(\sqrt{p})$ (in theory, we cannot exclude that we are in a rare case where the sizes are larger, though). The polynomials $f$ and $g$ are suitable for NFS in ${{\mathbb F}}_{p^n}$, because both are divisible by $\varphi = g_v+\lambda g_u$ modulo $p$, which by construction is irreducible of degree $n$.

In the example above, for ${{\mathbb F}}_{p^2}$ with $p\equiv 7 \mod 8$, Algorithm \[alg:conjugation\] was applied with $g_u = x$, $g_v = x^2+1$ and $\mu = x^2-2$. One can check that $f = \operatorname{Res}_Y(Y^2-2, (x^2+1) + Yx) = x^4+1$, as can be seen from Section \[sec:exdeg4\].

In Algorithm \[alg:conjugation\], there is some freedom in the choices of $g_u$ and $g_v$. The key idea to exploit this opportunity is to base the choice on one-parameter families of polynomials for which an automorphism with a nice form is guaranteed to exist, in order to use the improvements of Section \[sec:galois\]. In Table \[tab:foster\], we list possible choices for $g_u$ and $g_v$ in degrees $2$, $3$, $4$ and $6$, such that for any integer $a$, $g_v+ag_u$ has a simple explicit cyclic automorphism. The families for degrees $3$, $4$ and $6$ are taken from [@Gras79; @Gras87] (see also [@Fos11] and the references therein for larger degrees).
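Algorithm \[alg:conjugation\] can be sketched in Python for the ${{\mathbb F}}_{p^2}$ instance just described ($\mu = Y^2-2$, $g_u = x$, $g_v = x^2+1$, so $f = x^4+1$). The prime $p = 10007$ below is a small illustrative value, not one used in the paper; the rational reconstruction is a plain extended Euclidean algorithm stopped near $\sqrt{p}$, and the helper names are ours (Python 3.8+ is assumed for `pow(x, -1, p)`):

```python
# Conjugation method sketch for F_{p^2}: mu = Y^2 - 2, g_u = x, g_v = x^2 + 1.
# p is a small illustrative prime with p = 7 mod 8 (so 2 is a square mod p).
from math import isqrt

p = 10007
assert p % 8 == 7

# A square root of 2 modulo p (p = 3 mod 4, so 2^((p+1)/4) works).
lam = pow(2, (p + 1) // 4, p)
assert pow(lam, 2, p) == 2

def rational_reconstruction(lam, p):
    """Return (u, v) with u = v*lam (mod p) and |u|, |v| about sqrt(p):
    run the extended Euclidean algorithm on (p, lam) and stop early."""
    bound = isqrt(p)
    r0, r1, t0, t1 = p, lam, 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return r1, t1

u, v = rational_reconstruction(lam, p)
assert (u - v * lam) % p == 0

def poly_rem(num, den, p):
    """Remainder of num by den in F_p[x]; coefficient lists in degree order,
    den is assumed to have an invertible leading coefficient."""
    num = [c % p for c in num]
    inv = pow(den[-1], -1, p)
    for i in range(len(num) - 1, len(den) - 2, -1):
        c = num[i] * inv % p
        for j, dc in enumerate(den):
            num[i - len(den) + 1 + j] = (num[i - len(den) + 1 + j] - c * dc) % p
    return num[: len(den) - 1]

# f = x^4 + 1 and g = v x^2 + u x + v share phi = x^2 + lam*x + 1 mod p
# (phi is one of the two conjugate quadratic factors of f modulo p).
f, g, phi = [1, 0, 0, 0, 1], [v, u, v], [1, lam, 1]
assert all(c == 0 for c in poly_rem(f, phi, p))
assert all(c == 0 for c in poly_rem(g, phi, p))
```

The same skeleton, with $g_u$, $g_v$ taken from Table \[tab:foster\] and a search over quadratic $\mu$, reproduces the general algorithm.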
$ \begin{array}{|c|c|c|c|c|} \hline n & \text{coeffs of $g_v+ag_u$} & g_v & g_u & \text{automorphism: } \theta\mapsto \\ \hline \hline & (1,a,1) & x^2 + 1 & x & 1/\theta \\ 2 & (-1,a,1) & x^2 - 1 & x & -1/\theta \\ & (a,0,1) & x^2 & 1 & - \theta \\ \hline 3 & (1,-a-3,-a,1) & x^3-3x-1 & -(x^2+x)& -(\theta+1)/\theta \\ \hline 4 & (1,-a,-6,a,1) & x^4-6x^2+1 & x^3-x & -(\theta+1)/(\theta-1) \\ \hline 6 & \begin{array}{l}(1,-2a,-5a-15,\\ \quad-20,5a,2a+6,1) \end{array}& \begin{array}{l}x^6+6x^5-\\ \quad20x^3-15x^2+1\end{array} & \begin{array}{l}2x^5+5x^4-\\ \quad5x^2-2x\end{array} &-(2\theta+1)/(\theta - 1) \\ \hline \end{array} $ For any prime $p$ and $n$ in $\{2,3,4,6\}$, the polynomials $f$ and $g$ obtained by the conjugation method using $g_u$ and $g_v$ as in Table \[tab:foster\] generate number fields with two automorphisms $\sigma$ and $\tau$ of order $n$ that verify the hypothesis of Theorem \[th:galois\]. The polynomial $g$ belongs to a family of Table \[tab:foster\], so its number field $K_g$ has an automorphism of order $n$ given by the formula in the last column. Let $\omega$ be a root of $\mu(x)$. The polynomial $g_v+\omega g_u$ defines a number field that is an extension of degree $n$ of ${{\mathbb Q}}(\omega)$ and that admits an automorphism of order $n$, which fixes ${{\mathbb Q}}(\omega)$. Since $f$ and $g_v+\omega g_u$ generate the same number field, this shows that the number field $K_f$ has an automorphism of order $n$. The polynomial $\varphi$ is given by $g_v+\lambda g_u$. Therefore, it belongs to the same family as $g$ hence it has the same automorphism of order $n$ as $f$ and $g$. This shows that modulo $p$, the automorphism sends a root of $\varphi$ to another root of $\varphi$, as required in the hypothesis of Theorem \[th:galois\]. Let us apply the conjugation method for ${{\mathbb F}}_{p^3}$, where $p=2^{31}+11$. 
Running Algorithm \[alg:conjugation\] with $g_u=-x^2-x$ and $g_v=x^3-3x-1$, one sees that $\mu = x^2-x+1$ has a root $\lambda=2021977950$ in ${{\mathbb F}}_p$ and that $g_v+\lambda g_u$ is irreducible in ${{\mathbb F}}_p[x]$. We obtain $f = x^6 - x^5 - 6x^4 + 3x^3 + 14x^2 + 7x + 1$ and $g = 20413x^3 + 32630x^2 -28609x - 20413$. With $\varphi = x^3 + 125505709x^2 + 125505706x + 2147483658$ as their GCD modulo $p$, we can check that the three polynomials $f$, $g$ and $\varphi$ admit $\theta\mapsto -(\theta+1)/\theta$ as an automorphism of order 3.

Estimation and comparison of the methods. {#subsec:comparison of gal Joux Lercier and ours}
-----------------------------------------

We have four methods of polynomial selection which apply to NFS in non-prime fields:

- the two methods of JLSV, presented in \[sssec:JLSV1\] and \[sssec:JLSV2\], denoted JLSV$_1$ and JLSV$_2$ respectively;

- the generalized Joux-Lercier method, presented in \[subsec: Joux Lercier polyselect\], denoted GJL;

- the conjugation method, presented in \[subsec: our polyselect\], denoted by Conj.

We take the size of the product of the norms as the main quantity to minimize, and we estimate its value as $$\label{eq:norm-product-f-g} E^{\deg f}||f||_\infty E^{\deg g} ||g||_\infty ~.$$ The starting point is the set of properties of the polynomials obtained with the various methods, summarized in Tab. \[tab: polyselect complexities\].
  method     $\deg g$   $\deg f$        $||g||_\infty$          $||f||_\infty$       $E^{\deg f + \deg g} ||f||_\infty ||g||_\infty$
  ---------- ---------- --------------- ----------------------- -------------------- -------------------------------------------------
  Conj       $n$        $2n$            $Q^{1/(2n)}$            $O(1)$               $E^{3n} Q^{1/(2n)}$
  GJL        $\geq n$   $> \deg g$      $Q^{1/(\deg g+1)}$      $O(1)$               $E^{\deg f + \deg g} Q^{1/(\deg g+1)} $
  JLSV$_1$   $n$        $n$             $Q^{1/(2n)}$            $Q^{1/(2n)}$         $E^{2n} Q^{1/n}$
  JLSV$_2$   $n$        $\geq \deg g$   $Q^{1/(2(\deg f+1))}$   $Q^{1/(\deg f+1)}$   $E^{\deg f + n} Q^{(3/2) 1/(\deg f+1)} $

  : Theoretical complexities for polynomial selection methods, $n$ is the extension degree (${{\mathbb F}}_{p^{n}}$), $Q = p^n$[]{data-label="tab: polyselect complexities"}

Since the best method depends on the size of the finite field under consideration, we use rough estimates of $E$ taken from Table \[tab: E values for various Q\]. In Table \[tab:norm-product-f-g-wrt-Q-E\] we summarize all the sizes that we can get for reasonable choices of parameters for ${{\mathbb F}}_{p^n}$ with $n\in\{2,3,4,5,6\}$, with all the methods at our disposal.
$$\begin{array}{|c| c||c|c|c|c|c r|} \hline \deg g, \deg f & Q & {{\mathbb F}}_{p^n} & ||f||_{\infty} & g & ||g||_{\infty} & {\multicolumn{2}{c|}{E^{\deg f} ||f||_{\infty} E^{\deg g} ||g||_{\infty}}} \\ \hline \hline (2,3) & {\multirow{8}{*}{\ensuremath{p^2}}} & {\multirow{8}{*}{\ensuremath{{{{{\mathbb F}}_{p^2}}}}}} & {\multirow{3}{*}{\ensuremath{O(1)}}} & \operatorname{GJL}& Q^{1/3} & E^5 Q^{1/3} & \\ \cline{1-1} \cline{5-8} (3,4) & & & & \operatorname{GJL}& Q^{1/4} & E^7 Q^{1/4} & \otimes \\ \cline{1-1} \cline{5-8} (2,4) & & & & \operatorname{Conj}& Q^{1/4} & E^6 Q^{1/4} & \\ \cline{1-1} \cline{4-8} (2,2) & & & Q^{1/4} & \operatorname{JLSV}_1 & Q^{1/4} & E^4 Q^{1/2} & \otimes \\ \cline{1-1} \cline{4-8} (2,2) & & & Q^{1/6} & \operatorname{JLSV}_2 & Q^{1/3} & E^4 Q^{1/2} & \otimes \\ \cline{1-1} \cline{4-8} (2,3) & & & Q^{1/8} & \operatorname{JLSV}_2 & Q^{1/4} & E^5 Q^{3/8} & \otimes \\ \cline{1-1} \cline{4-8} (2,4) & & & Q^{1/5} & \operatorname{JLSV}_2 & Q^{1/10} & E^6 Q^{3/10} & \otimes \\ \cline{1-1} \cline{4-8} (2,5) & & & Q^{1/6} & \operatorname{JLSV}_2 & Q^{1/12} & E^7 Q^{1/4} & \otimes \\ \hline \hline (3,4) & {\multirow{8}{*}{\ensuremath{p^3}}} & {\multirow{8}{*}{\ensuremath{{{{{\mathbb F}}_{p^3}}}}}} & {\multirow{3}{*}{\ensuremath{O(1)}}} & \operatorname{GJL}& Q^{1/4} & E^7 Q^{1/4} & \\ \cline{1-1} \cline{5-8} (4,5) & & & & \operatorname{GJL}& Q^{1/5} & E^9 Q^{1/5} & \otimes \\ \cline{1-1} \cline{5-8} (3,6) & & & & \operatorname{Conj}& Q^{1/6} & E^9 Q^{1/6} & \\ \cline{1-1} \cline{4-8} (3,3) & & & Q^{1/6} & \operatorname{JLSV}_1 & Q^{1/6} & E^6 Q^{1/3} & \otimes \\ \cline{1-1} \cline{4-8} (3,3) & & & Q^{1/8} & \operatorname{JLSV}_2 & Q^{1/4} & E^6 Q^{3/8} & \otimes \\ \cline{1-1} \cline{4-8} (3,4) & & & Q^{1/10} & \operatorname{JLSV}_2 & Q^{1/5} & E^7 Q^{3/10} & \otimes \\ \cline{1-1} \cline{4-8} (3,5) & & & Q^{1/12} & \operatorname{JLSV}_2 & Q^{1/6} & E^8 Q^{1/4} & \otimes \\ \cline{1-1} \cline{4-8} (3,6) & & & Q^{1/7} & \operatorname{JLSV}_2 & Q^{1/14} 
& E^9 Q^{3/14} & \otimes \\ \hline \hline (4,5) & {\multirow{8}{*}{\ensuremath{p^4}}} & {\multirow{8}{*}{\ensuremath{{{\mathbb F}}_{p^{4}}}}} & {\multirow{3}{*}{\ensuremath{O(1)}}} & \operatorname{GJL}& Q^{1/5} & E^9 Q^{1/5} & \\ \cline{1-1} \cline{5-8} (5,6) & & & & \operatorname{GJL}& Q^{1/6} & E^{11} Q^{1/6} & \otimes \\ \cline{1-1} \cline{5-8} (4,8) & & & & \operatorname{Conj}& Q^{1/8} & E^{12} Q^{1/8} & \otimes \\ \cline{1-1} \cline{4-8} (4,4) & & & Q^{1/8} & \operatorname{JLSV}_1 & Q^{1/8} & E^8 Q^{1/4} & \\ \cline{1-1} \cline{4-8} (4,4) & & & Q^{1/10} & \operatorname{JLSV}_2 & Q^{1/5} & E^8 Q^{3/10} & \otimes \\ \cline{1-1} \cline{4-8} (4,5) & & & Q^{1/12} & \operatorname{JLSV}_2 & Q^{1/6} & E^9 Q^{1/4} & \otimes \\ \cline{1-1} \cline{4-8} (4,6) & & & Q^{1/14} & \operatorname{JLSV}_2 & Q^{1/7} & E^{10} Q^{3/14} & \otimes \\ \cline{1-1} \cline{4-8} (4,7) & & & Q^{1/8} & \operatorname{JLSV}_2 & Q^{1/16} & E^{11} Q^{3/16} & \otimes \\ \hline \hline (5,6) & {\multirow{8}{*}{\ensuremath{p^5}}} & {\multirow{8}{*}{\ensuremath{{{\mathbb F}}_{p^{5}}}}} & {\multirow{3}{*}{\ensuremath{O(1)}}} & \operatorname{GJL}& Q^{1/6} & E^{11} Q^{1/6} & \\ \cline{1-1} \cline{5-8} (6,7) & & & & \operatorname{GJL}& Q^{1/7} & E^{13} Q^{1/7} & \otimes \\ \cline{1-1} \cline{5-8} (5,10) & & & & \operatorname{Conj}& Q^{1/10} & E^{15} Q^{1/10} & \otimes \\ \cline{1-1} \cline{4-8} (5,5) & & & Q^{1/10} & \operatorname{JLSV}_1 & Q^{1/10} & E^{10} Q^{1/5} & \\ \cline{1-1} \cline{4-8} (5,5) & & & Q^{1/12} & \operatorname{JLSV}_2 & Q^{1/6} & E^{10} Q^{1/4} & \otimes \\ \cline{1-1} \cline{4-8} (5,6) & & & Q^{1/14} & \operatorname{JLSV}_2 & Q^{1/7} & E^{11} Q^{3/14} & \otimes \\ \cline{1-1} \cline{4-8} (5,7) & & & Q^{1/8} & \operatorname{JLSV}_2 & Q^{1/16} & E^{12} Q^{3/16} & \otimes \\ \cline{1-1} \cline{4-8} (5,8) & & & Q^{1/18} & \operatorname{JLSV}_2 & Q^{1/9} & E^{13} Q^{1/6} & \otimes \\ \hline \hline (6,7) & {\multirow{8}{*}{\ensuremath{p^6}}} & {\multirow{8}{*}{\ensuremath{{{\mathbb 
F}}_{p^{6}}}}} & {\multirow{3}{*}{\ensuremath{O(1)}}} & \operatorname{GJL}& Q^{1/7} & E^{13} Q^{1/7} & \\ \cline{1-1} \cline{5-8} (7,8) & & & & \operatorname{GJL}& Q^{1/8} & E^{15} Q^{1/8} & \otimes \\ \cline{1-1} \cline{5-8} (6,12) & & & & \operatorname{Conj}& Q^{1/12} & E^{18} Q^{1/12} & \otimes \\ \cline{1-1} \cline{4-8} (6,6) & & & Q^{1/12} & \operatorname{JLSV}_1 & Q^{1/12} & E^{12} Q^{1/6} & \\ \cline{1-1} \cline{4-8} (6,6) & & & Q^{1/14} & \operatorname{JLSV}_2 & Q^{1/7} & E^{12} Q^{3/14} & \otimes \\ \cline{1-1} \cline{4-8} (6,7) & & & Q^{1/16} & \operatorname{JLSV}_2 & Q^{1/8} & E^{13} Q^{3/16} & \otimes \\ \cline{1-1} \cline{4-8} (6,8) & & & Q^{1/18} & \operatorname{JLSV}_2 & Q^{1/9} & E^{14} Q^{1/6} & \otimes \\ \cline{1-1} \cline{4-8} (6,9) & & & Q^{1/20} & \operatorname{JLSV}_2 & Q^{1/10} & E^{15} Q^{3/20} & \otimes \\ \hline \end{array}$$ To choose the best method, we now need to compare the values in the last column of Tab. \[tab:norm-product-f-g-wrt-Q-E\]. For that we consider in Tab. \[tab: E values for various Q\] practical values of $E$ and $Q$, for $Q$ from 60 to 220 decimal digits (dd). We note that $\log E = 0.095 \log Q$ for $Q$ of $60$ dd and $\log E = 0.041 \log Q$ for $Q$ of $220$ dd. We can now eliminate several methods in Tab. \[tab:norm-product-f-g-wrt-Q-E\]: $\boldsymbol{n=2}$ : We discard the $\operatorname{JLSV}_1$ and $\operatorname{JLSV}_2$ methods because their complexities are worse than the GJL complexity: $E^{4}Q^{1/2} > E^5 Q^{1/3}$ since $Q^{1/6} > E$ (indeed, $Q^{0.1} > E$). $\boldsymbol{n=3}$ : Again we discard the $\operatorname{JLSV}_1$ method because the GJL method is better: $E^6 Q^{1/3} < E^{7} Q^{1/4}$ holds only while $E > Q^{1/12}$, i.e. only when $Q$ is around 60 dd or less. $\boldsymbol{n=4}$ : This time we discard the GJL method with $(\deg g, \deg f) = (5,6)$ because it is less efficient than GJL with $(4,5)$ whenever $E > Q^{1/60}$. We also discard the Conj method because we are not in the case $E < Q^{1/40}$.
$\boldsymbol{n=5}$ : We discard the Conj method ($E^{15} Q^{1/10}$), which is worse than GJL with $(5,6)$ while $E > Q^{1/60}$. We also discard the GJL method with $(6,7)$ because the same method with $(5,6)$ is more efficient whenever $Q^{1/84} < E$. $\boldsymbol{n=6}$ : As for $n=5$, the Conj method is not competitive because we are not in the case $E < Q^{1/84}$. We also discard the GJL method with $(7,8)$ compared with $(6,7)$ because we don’t have $E < Q^{1/112}$.

We represent the results in Fig. \[fig:norm-product-f-g–wrt-Q–Fpn\]. We can clearly see that when $Q = p^2$ is more than 70 decimal digits long (233 bits, i.e. $\log_2 p \approx 116$), it is much better to use the construction with $\deg f = 4$ and $\deg g = 2$ for computing discrete logarithms in ${{\mathbb F}}_{Q} = {{{{\mathbb F}}_{p^2}}}$. For $Q$ of more than 220 dd, $(\deg g, \deg f) = (3,4)$ starts to be a better choice than $(2,3)$, but our new method with $(2,4)$ is even better: the value in Equation \[eq:norm-product-f-g\] is about 20 bits smaller. For $Q = p^3$ from 60 to 220 dd (i.e. $p$ from 20 to 73 dd), the choice $(3,4)$ gives a lower value of Equation \[eq:norm-product-f-g\]. Then, for $Q$ of more than 220 dd, the method with $(3,6)$ is better. For $Q$ of 220 dd, Equation \[eq:norm-product-f-g\] takes the same value with $(\deg g, \deg f) = (3,6)$ as with $(3,4)$.

Improving the selected polynomials {#subsec: improved g in polyselect}
----------------------------------

We explained in Sec. \[subsec: Joux Lercier polyselect\] our generalized Joux-Lercier method and in Sec. \[subsec: our polyselect\] our method of conjugated polynomials. In both cases, when $\deg \varphi \geqslant 2$, one obtains two distinct reduced polynomials $g_1$ and $g_2 \in {{\mathbb Z}}[x]$ such that $g_1 \equiv g_2 \equiv \varphi \bmod p$, up to a constant factor in ${{{{\mathbb F}}_p}}$. We propose to search for a polynomial $g = \lambda_1 g_1 + \lambda_2 g_2$ with $\lambda_1, \lambda_2 \in {{\mathbb Z}}$ small, e.g. $|\lambda_1|, |\lambda_2| < 200$, that maximises the Murphy ${{\mathbb{E}}}$ value of the pair $(f,g)$.
The Murphy ${{\mathbb{E}}}$ value is explained in [@Mur99 Sec. 5.2.1, Eq. 5.7 p. 86]. It is an estimation of the smoothness properties of the values taken by a single polynomial $f$ or by a pair $(f,g)$. First one homogenizes $f$ and $g$ and defines $$u_f(\theta_i) = \frac{\log |f(\cos \theta_i, \sin \theta_i)| + \alpha(f)}{\log B_f}$$ with $\theta_i \in \left[ 0, \pi \right]$; more precisely, $\theta_i = \frac{\pi}{K} \left(i - \frac{1}{2}\right)$ (with e.g. $K = 2000$ and $i \in \{1, \ldots, K \}$), $\alpha(f)$ defined in [@Mur99 Sec. 3.2.3] and $B_f$ a smoothness bound set according to $f$. Murphy advises to take $B_f = 10^7$ and $B_g = 5\cdot 10^6$. Finally $${{\mathbb{E}}}(f,g) = \sum_{i=1}^{K} \rho (u_f(\theta_i)) \rho (u_g (\theta_i)) ~.$$ We propose to search for $g = \lambda_1 g_1 + \lambda_2 g_2$ with $|\lambda_i| < 200$ and such that ${{\mathbb{E}}}(f,g)$ is maximal. In practice we obtain $g$ with $\alpha(g) \leqslant -1.5$ and ${{\mathbb{E}}}(f,g)$ improved by 2% up to 30%.

Asymptotic complexity {#sec:complexity}
=====================

The two new methods of polynomial selection require a dedicated analysis of complexity. First, we show that the generalized Joux-Lercier method offers an alternative to the existing method of polynomial selection in large characteristic [@JLSV06] and we determine its range of applicability in the boundary case. When getting close to the limit, it provides the best known complexity. Second, we analyze the conjugation method and obtain the result announced in the introduction, namely the existence of a family of finite fields for which the complexity of computing discrete logarithms is in $L_Q(1/3,\sqrt[3]{48/9})$.
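For reference, the numerical values of these constants can be checked with a short script. The sketch below uses the optimization conditions derived in the subsections that follow: the fixed-point equation $\beta = \frac{2}{3}\sqrt{2/\beta}$ for the generalized Joux-Lercier method, and the boundary-case formula of the conjugation method with $t=2$ sieving terms:

```python
# Numeric check of the L_Q(1/3, c) constants quoted in this section.
# GJL: the optimization gives beta = (2/3) * sqrt(2/beta), whose solution
# is beta = (8/9)^(1/3); the complexity constant is 2*beta = (64/9)^(1/3).
beta = 1.0
for _ in range(200):                      # contracting fixed-point iteration
    beta = (2 / 3) * (2 / beta) ** 0.5
assert abs(beta - (8 / 9) ** (1 / 3)) < 1e-12
assert abs(2 * beta - (64 / 9) ** (1 / 3)) < 1e-12

# Conjugation method, boundary case p = L_Q(2/3, c_p), t sieving terms:
# complexity constant 2/(c_p t) + sqrt(4/(c_p t)^2 + (2/3) c_p (t-1)).
def complexity(cp, t=2):
    return 2 / (cp * t) + (4 / (cp * t) ** 2 + (2 / 3) * cp * (t - 1)) ** 0.5

# With t = 2, the minimum over c_p is at c_p = 12^(1/3) ~ 2.29, where the
# constant equals (48/9)^(1/3) ~ 1.75, below GJL's (64/9)^(1/3) ~ 1.92.
cps = [1 + i / 1000 for i in range(3001)]
best = min(cps, key=complexity)
assert abs(best - 12 ** (1 / 3)) < 0.01
assert abs(complexity(best) - (48 / 9) ** (1 / 3)) < 1e-4
assert complexity(best) < 2 * beta
```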
The generalized Joux-Lercier method {#the-generalized-joux-lercier-method}
-----------------------------------

Using the generalized Joux-Lercier method, one constructs two polynomials $f$ and $g$ such that, for a parameter $d\geq n$, we have $\deg f=d+1$, $\deg g=d$, ${{|g|}}_\infty=Q^{1/(d+1)}$ and ${{|f|}}_\infty$ is very small, say $O(\log Q)$.

We consider the variant of NFS in which one sieves on linear polynomials $a-bx$ such that $|a|,|b|\leq E$ for a sieve parameter $E$, in order to collect the pairs such that the norms $\operatorname{Res}(f,a-bx)$ and $\operatorname{Res}(g,a-bx)$ are $B$-smooth. Since the cost of the sieve is $E^{2+o(1)}$ and the cost of the linear algebra stage is $B^{2+o(1)}$, we impose $E=B$. We set $E=B=L_Q(1/3,\beta)$ for a parameter $\beta$ to be chosen. We write $d=\frac{\delta}{2}\left(\log Q/\log\log Q\right)^{1/3}$, for a parameter $\delta$ to be chosen.

Since the size of the sieving domain must be large enough so that we collect $B$ pairs $(a,b)$, we must have $\mathcal{P}^{-1}=B$, where $\mathcal{P}$ is the probability that a random pair $(a,b)$ in the sieving domain has $B$-smooth norms. We make the usual assumption that the product of the norms of any pair $(a,b)$ has the same probability to be $B$-smooth as a random integer of the same size.
We upper-bound the norms product by $$|\operatorname{Res}(f,a-bx)\operatorname{Res}(g,a-bx)|\leq (\deg f){{|f|}}_\infty E^{\deg f}(\deg g){{|g|}}_\infty E^{\deg g},$$ and further, with the $L$-notation, we obtain $$|\operatorname{Res}(f,a-bx)\operatorname{Res}(g,a-bx)|\leq L_Q\left(2/3, \delta\beta +\frac{2}{\delta} \right).$$ Using the Canfield–Erdős–Pomerance theorem, we obtain $$\mathcal{P}=1/L_Q\left(1/3, \frac{\delta}{3}+\frac{2}{3\beta\delta}\right).$$ The equality $\mathcal{P}^{-1}=B$ imposes $$\beta=\frac{\delta}{3}+\frac{2}{3\beta\delta}.$$ The optimal value of $\delta$ is the one which minimizes the right-hand side, so we take $\delta=\sqrt{2/\beta}$ and we obtain $\beta=\frac{2}{3}\sqrt{2/\beta}$, or equivalently $\beta=\sqrt[3]{8/9}$. Since the complexity of NFS is $E^2+B^2=L_Q(1/3,2\beta)$, we obtain $$\text{complexity}(\text{NFS with Generalized Joux-Lercier}) =L_Q\left(1/3,\sqrt[3]{64/9}\right).$$ The method requires $n\leq d$. Since $d=\delta/2 \left(\frac{\log Q}{\log\log Q} \right)^{1/3}$ with $\delta=\sqrt{2/\beta}=\sqrt[3]{3}$, the method applies only to fields ${{\mathbb F}}_{p^n}$ such that $$p\geq L_Q\left(2/3,\sqrt[3]{8/3}\right).$$

The conjugation method {#the-conjugation-method}
----------------------

The conjugation method allows us to construct two polynomials $f$ and $g$ such that $\deg f=2n$, $\deg g=n$, ${{|g|}}_\infty\approx p^{1/2}$ and ${{|f|}}_\infty$ is very small, say $O(\log Q)$. We study first the case of medium characteristic and then the boundary case between medium and large characteristic. We start with the computations that are common to the two cases.

### Common computations

We consider the higher degree variant of NFS with parameter $t$, i.e. one sieves on polynomials $\phi$ of degree $t-1$, with coefficients of absolute value less than $E^{2/t}$, where $E$ is called the sieve parameter. The cost of the sieve is then $E^{2+o(1)}$.
Since the cost of the linear algebra stage is $B^{2+o(1)}$, where $B$ is the smoothness bound, we impose $E=B$ and we write $E=B=L_Q(1/3,\beta)$, for some parameter $\beta$ to be chosen. Then the product of the norms of $\phi(\alpha)$ and $\phi(\beta)$ for any polynomial $\phi$ in the sieve domain is $$| \operatorname{Res}(\phi,f)\operatorname{Res}(\phi,g) | \leq (\deg f+t)!(\deg g+t)! E^{4n/t}{{|f|}}_\infty^{t-1}E^{2n/t}{{|g|}}_\infty^{t-1}.$$ Since $(\deg f+t)!(\deg g+t)!\leq L_Q(2/3, o(1))$, this factor’s contribution will be negligible compared to the main term, which is in $L_Q(2/3)$. Therefore we have $$| \operatorname{Res}(\phi,f)\operatorname{Res}(\phi,g) | \leq \left( E^{6n/t}Q^{(t-1)/(2n)}\right)^{1+o(1)}.$$ We make the usual assumption that the norms product has the same probability to be $B$-smooth as a random integer of the same size.

### The medium characteristic case

Let us set the value of the number of terms in the sieve: $$t=c_t n \left(\frac{\log Q}{\log\log Q} \right)^{-1/3}.$$ The probability that a polynomial $\phi$ in the sieving domain has $B$-smooth norms is $$\mathcal{P}=1/L_Q\left(1/3,\frac{2}{c_t}+\frac{c_t}{6\beta} \right).$$ We choose $c_t=2\sqrt{3\beta}$ in order to minimize the right-hand side: $$\mathcal{P}=1/L_Q\left(1/3,\frac{2}{\sqrt{3\beta}}\right).$$ In an optimal choice of parameters, the sieve produces just enough relations, so we require that $\mathcal{P}^{-1}=B$, and equivalently $\beta=\sqrt[3]{4/3}$. We obtain $$\text{complexity}(NFS\text{ with medium char.})=L_Q\left(1/3,\sqrt[3]{96/9}\right).$$

### The boundary case

For every constant $c_p>0$, we consider the family of finite fields ${{\mathbb F}}_{p^n}$ such that $$p=L_{p^n}(2/3,c_p)^{1+o(1)}.$$ The parameter $t$ is a constant, or equivalently we have a different algorithm for each value $t=2,3,\ldots$
Then the probability that a polynomial $\phi$ in the sieving domain has $B$-smooth norms is $$\mathcal{P}=1/L_Q\left(1/3, \frac{2}{c_pt}+\frac{c_p(t-1)}{6\beta} \right).$$ If the parameters are tuned to have just enough relations in the sieve, then one has $\mathcal{P}^{-1}=B$. This leads to $\frac{2}{c_pt}+\frac{c_p(t-1)}{6\beta}=\beta$, or $\beta=\frac{1}{c_pt}+\sqrt{\frac{1}{(c_pt)^2}+\frac16 c_p(t-1)}$. Hence, the complexity of NFS with the conjugation method is: $$\text{complexity(NFS with the conjugation method)}=L_Q\left(1/3, \frac{2}{c_pt}+\sqrt{\frac{4}{(c_pt)^2}+\frac23c_p(t-1)} \right).$$ In Figure \[fig:complexities\], we have plotted the complexities of various methods, including the Multiple number field sieve variant of [@BarPie2014]. There are some ranges of the parameter $c_p$ where our conjugation method is the fastest and a range where the generalized Joux-Lercier method is the fastest. The best case for our new method corresponds to the case where $c_p = 12^{1/3}\approx 2.29$ and $t=2$. In that case we get: $$\text{complexity}\text{(best case for the conjugation method)}=L_Q\left(1/3,\sqrt[3]{\frac{48}{9}}\right).$$ ![The complexity of NFS for fields ${{\mathbb F}}_{p^n}$ with $p=L_Q(2/3,c_p)$ is $L_Q(1/3,c)$. The blue curve corresponds to the multiple number field sieve of [@BarPie2014], the green semi-line to the generalized Joux-Lercier method and the red thick curve to the conjugation method.[]{data-label="fig:complexities"}](figure_complexities) Effective computations of discrete logarithms {#sec:effective} ============================================= In order to test how our ideas perform in practice, we did a medium-sized practical experiment in a field of the form ${{\mathbb F}}_{p^2}$. Since we could not find any publicly announced computation for this type of field, we have decided to choose a prime number $p$ of 80 decimal digits so that ${{\mathbb F}}_{p^2}$ has size 160 digits. 
To demonstrate that our approach is not specific to a particular form of the prime, we took the first 80 decimal digits of $\pi$. Our prime number $p$ is the next prime such that $p \equiv 7 \bmod 8$ and both $p+1$ and $p-1$ have a large prime factor: $p = \lfloor \pi \cdot 10^{79} \rfloor + 217518 $. $$\begin{array}{rcl} p & = & \mathtt{31415926535897932384626433832795028841971693993751058209749445923078164063079607} \\ \ell & = & \mathtt{3926990816987241548078304229099378605246461749218882276218680740384770507884951} \\ p-1 & = & 6 \cdot h_0 \mbox{ with } h_0 \mbox{ a 79 digit prime} \\ p+1 & = & 8 \cdot \ell \\ \end{array}$$ We tried to solve the discrete logarithm problem in the subgroup of order $\ell$. We imposed $p$ to be congruent to $-1$ modulo 8, so that the polynomial $x^4+1$ could be used, as in Section \[sec:exdeg4\]; in this way, no Schirokauer map is needed. The conjugation method yields a polynomial $g$ of degree $2$ and negative discriminant, a particular case that requires no Schirokauer map either: $$\begin{array}{rcl} f & = & x^4 + 1 \\ g & = & 22253888644283440595423136557267278406930\ x^2 \\ & & \ +\, 41388856349384521065766679356490536297931\ \ x \\ & & \ +\, 22253888644283440595423136557267278406930\ \,. \\ \end{array}$$ Since $p$ is 80 digits long, the coefficients of $g$ have about 40 digits (41 digits, to be precise). The polynomials $f$ and $g$ have the irreducible factor [$$\varphi(t) = t^2 + 8827843659566562900817004173601064660843646662444652921581289174137495040966990\, t + 1$$]{} in common modulo $p$, and ${\mathbb{F}_{{p}^{2}}}$ will be taken as ${\mathbb{F}_{p}}[X]/(\varphi)$.

The relation collection step was then done using the sieving software of CADO-NFS [@CADO]. More precisely, we used the special-${\mathfrak q}$ technique for ideals ${\mathfrak q}$ on the $g$-side, since it produces norms that are larger than on the $f$-side.
We sieved all the special-${\mathfrak q}$ larger than $40,000,000$ and smaller than $2^{27}$, keeping only one in each pair of conjugates, as explained in Section \[sec:galois\]. In total, they produced about $15$M relations. The main parameters in the sieve were the following: we sieved all primes below $40$M, and we allowed two large primes less than $2^{27}$ on each side. The search space for each special-${\mathfrak q}$ was set to $2^{15}\times 2^{14}$ (the parameter [I]{} in CADO was set to 15). The total CPU time for this relation collection step is equivalent to 68 days on one core of an Intel Xeon E5-2650 at 2 GHz. This was run in parallel on a few nodes, each with 16 cores, so that the elapsed time for this step was a few days, and could easily be made arbitrarily small with enough nodes.

The filtering step was run as usual, but we modified it to take into account the Galois action on the ideals: we selected a representative ideal in each orbit under the action $x\mapsto x^{-1}$, and rewrote all the relations in terms of these representatives only. This just amounts to keeping track of a sign change, which has to be remembered when combining two relations during the filtering, and when preparing the sparse matrix for the sparse linear algebra step. The output of the filtering step was a matrix with $839,244$ rows and columns, having on average $83.6$ non-zero entries per row. Thanks to our choice of $f$ and $g$, it was not necessary to add columns with Schirokauer maps.

We used Jeljeli’s implementation of the Block Wiedemann algorithm for GPUs [@JeljeliImplem]. In fact, this computation was small enough that we did not distribute it on several cards: we used a non-blocked version. The total running time for this step was around 30.3 hours on an NVidia GTX 680 graphics card. At the end of the linear algebra we know the virtual logarithms of almost all prime ideals of degree one above primes of at most 26 bits, and of some of those above primes of 27 bits.
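As a side note, the arithmetic properties claimed for $p$ and $\ell$ at the beginning of this section are easy to re-check. The Miller–Rabin routine below is a standard probabilistic primality test written for this check; it is not part of the computation described here:

```python
# Arithmetic checks on the 160-digit target field parameters.
p = 31415926535897932384626433832795028841971693993751058209749445923078164063079607
ell = 3926990816987241548078304229099378605246461749218882276218680740384770507884951

def is_probable_prime(n):
    """Miller-Rabin test with a fixed set of bases; a true prime always
    passes, so this confirms (probabilistically) the primality claims."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for b in bases:
        if n % b == 0:
            return n == b
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for b in bases:
        x = pow(b, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

assert p % 8 == 7            # p = -1 mod 8, so x^4 + 1 can be used
assert p + 1 == 8 * ell      # p + 1 = 8 * ell
assert (p - 1) % 6 == 0      # p - 1 = 6 * h0
assert is_probable_prime(p) and is_probable_prime(ell)
```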
At this point we could check that the virtual logarithms on the $f$-side were correct. The last step is that of computing some individual logarithms. We used $G = t + 2$ as a generator for ${\mathbb{F}_{{p}^{2}}}$ and the following “random” element: $$s = \lfloor(\pi (2^{264})/4)\rfloor t + \lfloor(\gamma\cdot 2^{264})\rfloor.$$ We started by looking for an integer $e$ such that $z = s^e$, seen as an element of the number field of $f$, is smooth. After a few core-hours, we found a value of $e$ such that $z = z_1/z_2$ with $z_1$ and $z_2$ splitting completely into prime ideals of at most 60 bits. With the lattice-sieving software of CADO-NFS, we then performed a “special-q descent” for each of these prime ideals. We remark that one of the prime ideals in $z_1$ was an ideal of degree 2 above 43, which had to be descended in a specific way, starting with a polynomial of degree 2 instead of 1. The total time for descending all the prime ideals was a few minutes. Finally, we found [$$\log_G(s) = 431724646474717499532141432099069517832607980262114471597315861099398586114668 \bmod \ell.$$]{} Verification scripts in various mathematical software are given in the NMBRTHRY announcement. [GGMZ13]{} L. M. Adleman. The function field sieve. In [*Algorithmic Number Theory–ANTS I*]{}, volume 877 of [ *Lecture Notes in Comput. Sci.*]{}, pages 108–121. Springer, 1994. L. M. Adleman and M. D. A. Huang. Function field sieve method for discrete logarithms over finite fields. Inform. and Comput., 151(1):5–16, 1999. S. Bai, A. Filbois, P. Gaudry, A. Kruppa, F. Morain, E. Thomé, P. Zimmermann, et al. Crible algébrique: Distribution, optimisation – [NFS]{}, 2009. Downloadable at <http://cado-nfs.gforge.inria.fr/>. C. Bouvier, P. Gaudry, L. Imbert, H. Jeljeli, and E. Thomé. Discrete logarithms in [[GF]{}]{}(p) — 180 digits, 2014. Announcement available at the [NMBRTHRY]{} archives, item 004703. R. Barbulescu, P. Gaudry, A. Joux, and E. Thom[é]{}.
A heuristic quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic. In [*Advances in Cryptology–EUROCRYPT 2014*]{}, pages 1–16. Springer, 2014. R. Barbulescu, P. Gaudry, and T. Kleinjung. Yet another variant for [DLP]{} in the upper medium-prime case. Talk given during the DLP Workshop, Ascona, Switzerland, 2014. R. Barbulescu and C. Pierrot. The multiple number field sieve for medium and high characteristic finite fields. Cryptology ePrint Archive, Report 2014/147, 2014. Preprint available at <http://eprint.iacr.org/>, accepted for publication at ANTS XI. A. Commeine and I. Semaev. An algorithm to solve the discrete logarithm problem with the number field sieve. In [*Public Key Cryptography–PKC 2006*]{}, volume 3958 of [*Lecture Notes in Comput. Sci.*]{}, pages 174–190. Springer, 2006. K. Foster. HT90 and “simplest” number fields. Illinois J. Math., 55(4):1621–1655, 2011. F. G[ö]{}loglu, R. Granger, G. McGuire, and J. Zumbr[ä]{}gel. On the function field sieve and the impact of higher splitting probabilities: Application to discrete logarithms in $\mathbb{F}_{2^{1971}}$ and $\mathbb{F}_{2^{3164}}$. In [*Advances in Cryptology - CRYPTO 2013, Lecture Notes in Computer Science 8043*]{}, pages 109–128. Springer-Verlag, 2013. R. Granger, T. Kleinjung, and J. Zumbrägel. Breaking 128-bit secure supersingular binary curves (or how to solve discrete logarithms in $\mathbb{F}_{2^{4\cdot 1223}}$ and $\mathbb{F}_{2^{12\cdot 367}}$), 2014. arXiv report 1402.3668. R. Granger, T. Kleinjung, and J. Zumbrägel. On the powers of 2, 2014. IACR Eprint report 2014/300. D. M. Gordon. Discrete logarithms in [[GF]{}]{}(p) using the number field sieve. SIAM J. Discrete Math., 6(1):124–138, 1993. M.-N. Gras. Classes et unités des extensions cycliques réelles de degré [$4$]{} de [${\bf Q}$]{}. Ann. Inst. Fourier (Grenoble), 29(1):xiv, 107–124, 1979. M.-N. Gras. Special units in real cyclic sextic fields. Math. Comp., 48(177):179–182, 1987. K. Hayasaka, K. Aoki, T. Kobayashi, and T. Takagi.
An experiment of [N]{}umber [F]{}ield [S]{}ieve for discrete logarithm problem over ${GF}(p^{12})$. In [*Number Theory and Cryptography*]{}, pages 108–120. Springer, 2013. H. Hasse. Arithmetische [B]{}estimmung von [G]{}rundeinheit und [K]{}lassenzahl in zyklischen kubischen und biquadratischen [Z]{}ahlk[ö]{}rpern. , 2:1–95, 1948. H. Jeljeli. An implementation of the [B]{}lock-[W]{}iedemann algorithm on [NVIDIA-GPUs]{} using the [R]{}esidue [N]{}umber [S]{}ystem ([RNS]{}) arithmetic, 2014. Available from <http://www.loria.fr/~hjeljeli/>. A. Joux and R. Lercier. The function field sieve is quite special. In [*Algorithmic Number Theory–ANTS V*]{}, volume 2369 of [ *Lecture Notes in Comput. Sci.*]{}, pages 431–445. Springer, 2002. A. Joux and R. Lercier. Improvements to the general number field sieve for discrete logarithms in prime fields. Math. Comp., 72(242):953–967, 2003. Available at <http://perso.univ-rennes1.fr/reynald.lercier/file/JL03.pdf>. A. Joux and R. Lercier. Discrete logarithms in [[GF]{}]{}(p) — 130 digits, 2005. Announcement available at the [NMBRTHRY]{} archives, item 002869. A. Joux and R. Lercier. The function field sieve in the medium prime case. In [*Advances in Cryptology–EUROCRYPT 2006*]{}, volume 4005 of [ *Lecture Notes in Comput. Sci.*]{}, pages 254–270. Springer, 2006. A. Joux, R. Lercier, et al. Algorithmes pour r[é]{}soudre le probl[è]{}me du logarithme discret dans les corps finis. , page 23, 2007. A. Joux, R. Lercier, N. Smart, and F. Vercauteren. The number field sieve in the medium prime case. In [*Advances in Cryptology–CRYPTO 2006*]{}, volume 4117 of [ *Lecture Notes in Comput. Sci.*]{}, pages 326–344. Springer, 2006. A. Joux. Discrete logarithms in [GF]{}(2\^6168) \[=[GF]{}((2\^257)\^24)\], 2013. Announcement available at the [NMBRTHRY]{} archives, item 004544. A. Joux. Faster index calculus for the medium prime case. Application to 1175-bit and 1425-bit finite fields. In [*Advances in Cryptology–EUROCRYPT 2013*]{}, volume 7881 of [ *Lecture Notes in Comput. Sci.*]{}, pages 177–193. Springer, 2013. A. Joux.
A new index calculus algorithm with complexity ${L}(1/4+o(1))$ in small characteristic. In Tanja Lange, Kristin Lauter, and Petr Lisonĕk, editors, [ *Selected Areas in Cryptography – SAC 2013*]{}, Lecture Notes in Computer Science, pages 355–379. Springer, 2014. T. Kleinjung. Discrete logarithms in [[GF]{}]{}(p) — 160 digits, 2007. Announcement available at the [NMBRTHRY]{} archives, item 003269. A. K. Lenstra and H. W. [Lenstra, Jr.]{}, editors. The development of the number field sieve, volume 1554 of [ *Lecture Notes in Math.*]{} Springer, 1993. D. V. Matyukhin. On asymptotic complexity of computing discrete logarithms in the field ${GF}(p)$. Discrete Math. Appl., 13(1):27–50, 2003. B. A. Murphy. Polynomial selection for the number field sieve integer factorisation algorithm. PhD thesis, Australian National University, 1999. S. Pohlig and M. Hellman. An improved algorithm for computing logarithms over [[GF]{}]{}(p) and its cryptographic significance. IEEE Trans. Inform. Theory, 24(1):106–110, 1978. J. M. Pollard. Monte [C]{}arlo methods for index computation (mod p). Math. Comp., 32(143):918–924, 1978. O. Schirokauer. Discrete logarithms and local units. Philos. Trans. Roy. Soc. London Ser. A, 345(1676):409–423, 1993. O. Schirokauer. Virtual logarithms. J. Algorithms, 57(2):140–147, 2005. D. Wiedemann. Solving sparse linear equations over finite fields. IEEE Trans. Inform. Theory, 32(1):54–62, 1986. P. Zajac. Discrete logarithm problem in degree six finite fields. PhD thesis, STU v Bratislave, 2008. <http://www.kaivt.elf.stuba.sk/kaivt/Vyskum/XTRDL>.
--- abstract: 'A Lagrangean and a set of Feynman rules are presented for non-relativistic QFT’s with manifest power counting in the heavy particle velocity $v$. A régime is identified in which energies and momenta are of order $Mv$. It is neither identical to the ultrasoft régime corresponding to radiative processes with energies and momenta of order $Mv^2$, nor to the potential régime with on shell heavy particles and Coulomb binding. In this soft régime, massless particles are on shell, and heavy particle propagators become static. Examples show that it contributes to one- and two-loop corrections of scattering and production amplitudes near threshold. Hence, NRQFT agrees with the results of threshold expansion. A simple example also demonstrates the power of dimensional regularisation in NRQFT.' --- hep-ph/9712467\ NT@UW-98-3\ 18th December 1997\ **Harald W. Grie[ß]{}hammer[^1]** *Nuclear Theory Group, Department of Physics, University of Washington,\ Box 351 560, Seattle, WA 98195-1560, USA* Suggested PACS numbers: 12.38.Bx, 12.39.Hg, 12.39.Jh.\ Suggested Keywords: non-relativistic QCD, effective field theory, threshold expansion,\ dimensional regularisation. Introduction ============ [\[intro\]]{} Velocity power counting in Non-Relativistic Quantum Field Theories (NRQFT) [@CaswellLepage; @BBL], especially in NRQCD and NRQED, and the identification of the relevant energy and momentum régimes have proven more difficult than previously believed.
In a recent article, Beneke and Smirnov [@BenekeSmirnov] pointed out that the velocity rescaling rules proposed by Luke and Manohar for Coulomb interactions [@LukeManohar], and by Grinstein and Rothstein for bremsstrahlung processes [@GrinsteinRothstein], as united by Luke and Savage [@LukeSavage], and by Labelle’s power counting scheme in time ordered perturbation theory [@Labelle], do not reproduce the correct behaviour of the two-gluon exchange contribution to Coulomb scattering between non-relativistic particles near threshold. This has cast some doubt on whether NRQCD, especially in its dimensionally regularised version [@LukeSavage], can be formulated using a self-consistent low energy Lagrangean. The aim of this article is to demonstrate that a Lagrangean establishing explicit velocity power counting exists, and to show that this Lagrangean reproduces the results in Ref. [@BenekeSmirnov]. This article is confined to outlining the ideas to resolve the puzzle, postponing more formal arguments, calculations and derivations to a future, longer publication [@hgpub4] which will also deal with gauge theories and exemplary calculations. It is organised as follows: In Sect. \[philosophy\], the relevant régimes of NRQFT are identified. A simple example demonstrates the usefulness of dimensional regularisation in enabling explicit velocity power counting. Sect. \[rescaling\] proposes the rescaling rules necessary for a Lagrangean with manifest velocity power counting. The Feynman rules are given. Simple examples in Sect. \[bsexamples\] establish further the necessity of the new, soft régime introduced in Sect. \[philosophy\]. Summary and outlook conclude the article, together with an appendix on split dimensional regularisation [@LeibbrandtWilliams].
Idea of Dimensionally Regularised NRQFT ======================================= [\[philosophy\]]{} For the sake of simplicity, let us – following [@BenekeSmirnov] – deal with the Lagrangean $$\label{rellagr} {\mathcal{L}}=({\partial}_\mu\Phi_\mathrm{R})^\dagger({\partial}^\mu\Phi_\mathrm{R}) - M^2 \Phi_\mathrm{R}^\dagger\Phi_\mathrm{R}+ {\frac{1}{2}}({\partial}_\mu A)({\partial}^\mu A) - 2 M g\,\Phi_\mathrm{R}^\dagger\Phi_\mathrm{R} A$$ of a heavy, complex scalar field $\Phi_\mathrm{R}$ with mass $M$ coupled to a massless, real scalar $ A $. The coupling constant $g$ has been chosen dimensionless. $\Phi_\mathrm{R}$ will be referred to as “quark” and $ A $ as “gluon” in a slight but clarifying abuse of language. In NRQFT, excitations with four-momenta bigger than $M$ are integrated out, giving rise to four-point interactions between quarks. The first terms of the non-relativistically reduced Lagrangean read $$\label{nrlagr} {\mathcal{L}}_\mathrm{NRQFT}=\Phi^\dagger\Big({\mathrm{i}}{\partial}_0+\frac{{\vec{{\partial}}}^2}{2M} - g c_1\;A \Big)\Phi + {\frac{1}{2}}({\partial}_\mu A)({\partial}^\mu A) + c_2\Big(\Phi^\dagger\Phi\Big)^2 + \dots\;\;,$$ where the non-relativistic quark field is $\Phi=\sqrt{2M}\,{\mathrm{e}}^{{\mathrm{i}}Mt} \Phi_\mathrm{R}$ and the coefficients $c_i$ are to be determined by matching relativistic and non-relativistic scattering amplitudes. To lowest order, $c_1=1$ and $c_2=\frac{-g^4}{24\pi^2 M^2}$. The non-relativistic propagators are $$\label{nonrelprop} \Phi\;:\;\frac{{\mathrm{i}}}{T-\frac{{\vec{\,\!p}\!\:{}}^2}{2M}+{\mathrm{i}}\epsilon}\;\;,\;\; A\;:\;\frac{{\mathrm{i}}}{k^2+{\mathrm{i}}\epsilon}\;\;,$$ where $T=p_0-M=\frac{{\vec{\,\!p}\!\:{}}^2}{2M}+\dots$ is the kinetic energy of the quark. When a Coulombic bound state of two quarks exists, the two typical energy and momentum scales in the non-relativistic system are the bound state energy $Mv^2$ and the relative momentum of the two quarks $Mv$ (i.e. 
the inverse size of the bound state) [@BBL]. Here, $v=\beta\gamma\ll 1$ is the relativistic generalisation of the relative particle velocity. Cuts and poles in scattering amplitudes close to threshold stem from bound states and on-shell propagation of particles in intermediate states. They give rise to infrared divergences, and in general dominate contributions to scattering amplitudes. With the two scales at hand, and energies and momenta being of either scale, three régimes are identified in which either $\Phi$ or $A$ in (\[nonrelprop\]) is on shell: $$\begin{aligned} \mbox{soft r\'egime: }&A_\mathrm{s}:&k_0\sim |{\vec{k}}|\sim Mv\;\;,{\nonumber}\\ \label{regimes} \mbox{potential r\'egime: }&\Phi_\mathrm{p}:&T\sim Mv^2\;,\; |{\vec{\,\!p}\!\:{}}|\sim Mv\;\;,\\ \mbox{ultrasoft r\'egime: }&A_\mathrm{u}:&k_0\sim |{\vec{k}}|\sim Mv^2{\nonumber}\end{aligned}$$ Ultrasoft gluons ${A_{\mathrm{u}}}$ are emitted as bremsstrahlung or from excited states in the bound system. Soft gluons ${A_{\mathrm{s}}}$ do not describe bremsstrahlung: Because in- and outgoing quarks $\Phi_\mathrm{p}$ are close to their mass shell, they have an energy of order $Mv^2$. Therefore, overall energy conservation forbids all processes with outgoing soft gluons but without ingoing ones, and vice versa, as their energy is of order $Mv$. The list of particles is not yet complete: In a bound system, one needs gluons which change the quark momenta but keep them close to their mass shell: $$\label{pgluon} A_\mathrm{p}\;\;:\;\;k_0\sim Mv^2\;,\;|{\vec{k}}|\sim Mv$$ So far, only potential gluons and quarks, and ultrasoft gluons had been identified in the literature of power counting in NRQFT [@LukeManohar; @GrinsteinRothstein; @Labelle]. That the soft régime was overlooked cast doubts on the completeness of NRQFT after Beneke and Smirnov [@BenekeSmirnov] demonstrated its relevance near threshold in explicit one- and two-loop calculations. 
In this article, the fields representing a non-relativistic quark or gluon came naturally by identifying all possible particle poles in the non-relativistic propagators, given the two scales at hand. When a soft gluon $A_\mathrm{s}$ couples to a potential quark $ \Phi_\mathrm{p}$, the outgoing quark is far off its mass shell and carries energy and momentum of order $Mv$. Therefore, consistency requires the existence of quarks in the soft régime as well, $$\label{squark} \Phi_\mathrm{s}\;\;:\;\;T\sim |{\vec{\,\!p}\!\:{}}|\sim Mv\;\;.$$ As the potential quark has a much smaller energy than either of the soft particles, it cannot – by the uncertainty relation – resolve the precise time at which the soft quark emits or absorbs the soft gluon. So, we expect a “temporal” multipole expansion to be associated with this vertex. In general, the coupling between particles of different régimes will not be point-like but will contain multipole expansions for the particle belonging to the weaker kinematic régime. For the coupling of potential quarks to ultrasoft gluons, this has been observed in Refs. [@GrinsteinRothstein; @Labelle]. Propagators will also be different from régime to régime: For soft quarks, $\frac{\vec{p}^2}{2M}$ is negligible compared to the kinetic energy $T$, so that the soft quark propagator may be expanded in powers of $\frac{\vec{p}^2}{2M}$, and ${\Phi_{\mathrm{s}}}$ is expected to become static to lowest order. As the energy of potential gluons is much smaller than their momentum, the ${A_{\mathrm{p}}}$-propagator is expected to become instantaneous. With these five fields $\Phi_\mathrm{s},\;\Phi_\mathrm{p},\; A_\mathrm{s}, \;A_\mathrm{p},\;A_\mathrm{u}$ representing quarks and gluons in the three different non-relativistic régimes, soft, potential and ultrasoft, NRQFT becomes self-consistent. The application of these ideas to NRQCD with the inclusion of fermions and gauge particles is straightforward and will be summarised in the next publication [@hgpub4].
An ultrasoft quark (which would have a static propagator) is not relevant for this paper. It is hence not considered, and neither is a fourth (“exceptional”) régime in which momenta are of the order $Mv^2$ and energies of the order $Mv$, or any régime in which one of the scales is set by $M$. They do not derive from poles in propagators, and hence will be relevant only under “exceptional” circumstances. A future publication [@hgpub4] will have to prove that the particle content outlined is not only consistent but complete. It is worth noticing that the particles of the soft régime can neither be mimicked by potential gluon exchange, nor by contact terms generated by integrating out the ultraviolet modes: Fields in the soft régime have momenta of the same order as the momenta of the potential régime, but much higher energies. Therefore, seen from the potential scale they describe instantaneous but non-local interactions, as pointed out in [@BenekeSmirnov]. Integrating out the scale $Mv$, one arrives at soft gluons and quarks as point-like multi-quark interactions in the ultrasoft régime. The physics of potential quarks and gluons will still have to be described by spatially local, but non-instantaneous interactions. The resulting theory – baptised potential NRQCD by Pineda and Soto [@PinedaSoto] – can be derived from NRQCD as presented here by integrating out the fields $\Phi_\mathrm{s},\;A_\mathrm{s}$ and $A_\mathrm{p}$. There is no overlap between interactions and particles in different régimes. In order to clarify this point, and before investigating the interactions of the various régimes further, the following example will demonstrate the power of dimensional regularisation in NRQFT. It also highlights some points which simplify the discussion of the following sections.
The integral corresponding to a one-dimensional loop $$\label{example1} I(a,b):=\int {{\mathrm{d}}k\;}\frac{1}{k^2-a^2+{\mathrm{i}}\epsilon}\;\frac{1}{k^2-b^2+{\mathrm{i}}\epsilon}=\frac{{\mathrm{i}}\pi}{ab(a+b)}$$ is easily calculated using contour integration. Assuming $v^2:=\frac{a^2}{b^2}\ll 1$, the dominant contributions come from the regions where $|k|$ is close to $a$ (“smaller régime”) or $b$ (“larger régime”). Then, one can approximate the integral by $$\label{example2} I(a,b)\approx \bigg[\int\limits_{|k|\sim a} +\int\limits_{|k|\sim b}\bigg] {{\mathrm{d}}k\;}\frac{1}{k^2-a^2+{\mathrm{i}}\epsilon}\; \frac{1}{k^2-b^2+{\mathrm{i}}\epsilon} \;\;.$$ In the first integral, $k$ is small against $b$, so that a Taylor expansion in $\frac{k}{b}\sim v$ in that régime is applicable and yields $$\label{example3} \frac{-1}{b^2}\sum\limits_{n=0}^\infty \int\limits_{|k|\sim a}{{\mathrm{d}}k\;}\frac{1}{k^2-a^2+{\mathrm{i}}\epsilon}\; \frac{k^{2n}}{b^{2n}}\;\;.$$ If $k^2$ becomes comparable to $b^2$, the expansion breaks down, so that the approximated integral cannot be solved by contour integration. In general, the (arbitrary) borders of the integration régimes (the “cutoffs”) will enter the result, and lead to divergences as they are taken to infinity because of contributions from regions where $|k|\sim b \gg a$. A cutoff regularisation may hence jeopardise power counting in $v$. Dimensional regularisation overcomes this problem in a natural and elegant way: If one treats (\[example3\]) as a $d$-dimensional integral with $d\to1$ only at the end of the calculation, the exact result will emerge as a power series in $v=\frac{a}{b}$. First, one extends the integration régime from the neighbourhood of $|a|$ to the whole $d$-dimensional space. Then, one calculates the integral order by order in the expansion, still treating $\frac{k^2}{b^2}\sim v^2$ as formally small.
Rewriting $$\label{example4} k^{2n}=\sum\limits_{m=0}^{n}{n \choose m} a^{2m} (k^2-a^2)^{n-m}\;\;,$$ only the ($m=n$)-term contributes thanks to the fact that dimensionally regularised integrals vanish when no intrinsic scale is present, $$\label{example5} \int{\frac{{\mathrm{d}}^{d}\! k}{(2\pi)^d}\;}k^\alpha=0\;\;.$$ The result, $$\label{example6} \frac{{\mathrm{i}}\pi}{a b^2}\sum\limits_{n=0}^{\infty}\frac{a^{2n}}{b^{2n}} \;\;\bigg(=\frac{{\mathrm{i}}\pi}{a}\;\frac{1}{b^2-a^2}\bigg)$$ is exactly the contribution one obtains in the contour integration from the pole at $|k|=a$. Although the integral was expanded over the whole space, dimensional regularisation missed the poles at $\pm b$ after expansion. The integration about $|k|\sim b$ is treated likewise by expansion and term-by-term dimensional regularisation. Adding this contribution, $$\label{example7} \frac{-{\mathrm{i}}\pi}{b^3}\sum\limits_{n=0}^\infty \frac{a^{2n}}{b^{2n}} \;\;\bigg(= \frac{-{\mathrm{i}}\pi}{b}\;\frac{1}{b^2-a^2}\bigg)\;\;,$$ to (\[example6\]), one obtains term by term the Taylor expansion of the exact result (\[example1\]) in the small parameter $v=\frac{a}{b}$. Each of the two regularised integrals sees only the pole in either of the régimes $|k|\sim a$ or $|k|\sim b$. Indeed, the overlap of the two régimes is zero in dimensional regularisation, even for arbitrary $v$. The expansions in the two different régimes can then be truncated at the sole cost of reduced accuracy. One could therefore have started with the definition of two different integration variables, one formally living in the smaller régime with $|K_a|\sim a\sim vb$, the other formally living in the larger régime, $|K_b|\sim b$: $$\label{example8} \int{{\mathrm{d}}k\;}\to \int {{\mathrm{d}}^{d}\! K_a\;} + \int {{\mathrm{d}}^{d}\! K_b\;}$$ The momentum $k$ is represented in each of the kinematic régimes by either $K_a$ or $K_b$.
The integrands *must* then be expanded in a formal way *as if* $|K_a|\sim a\sim vb$ and $|K_b|\sim b$. Otherwise, the poles are double counted. If one wants to calculate to a certain order in $v$, the expansion in the different variables $\frac{K_a}{b}$ and $\frac{a}{K_b}$ is just terminated at the appropriate order. No double counting of the poles can occur. Coming back to the three different régimes of NRQFT (\[regimes\]), there will therefore be no double counting between any pair of domains. Finally, note that the limit $a\to 0$ is not smooth: For $a=0$, dimensional regularisation of (\[example3\]) is zero because of the absence of a scale (\[example5\]). A pinch singularity encountered in contour integration at $k=\pm {\mathrm{i}}\epsilon$ hence behaves like a pole of second order in dimensional regularisation and is discarded, see also App. \[app:splitdimreg\]. By induction, the arguments presented here can be extended to prove that for any convergent one-dimensional integral containing several scales, Laurent expansion about each saddle point and dimensional regularisation gives the same result as contour integration. A formal proof of the validity of threshold expansion does not at present exist for the case of multi-dimensional and divergent integrals, but Beneke and Smirnov [@BenekeSmirnov] could reproduce the correct structures of known non-trivial two-loop integrals using threshold expansion, which is highly suggestive that such a proof can be given. This claim is supported by the observation that threshold expansion is very similar to the asymptotic expansion of dimensionally regularised integrals in the limit of loop momenta going to infinity, for which such a proof exists [@Smirnov] [^2].
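The bookkeeping of this example is easy to check with exact rational arithmetic. Dropping the overall factor of ${\mathrm{i}}\pi$, the summed series (\[example6\]) and (\[example7\]) add up to the exact result (\[example1\]), and truncating both expansions at a given order reproduces the answer up to the corresponding power of $v^2$; a minimal sketch (the function names are ours):

```python
from fractions import Fraction

def exact(a, b):
    """I(a,b)/(i*pi) from closing the contour, eq. (example1)."""
    return Fraction(1, a * b * (a + b))

def truncated(a, b, order):
    """Sum of the two regime expansions (example6) + (example7),
    each truncated after `order` terms of the series in v^2 = (a/b)^2."""
    r = Fraction(a, b) ** 2
    series = sum(r ** n for n in range(order))
    return Fraction(1, a * b**2) * series - Fraction(1, b**3) * series

a, b = 1, 10                      # v = a/b = 1/10

# the two summed series add up to the exact result:
# 1/(a(b^2-a^2)) - 1/(b(b^2-a^2)) = 1/(ab(a+b))
assert (Fraction(1, a * (b**2 - a**2))
        - Fraction(1, b * (b**2 - a**2))) == exact(a, b)

# each additional truncation order improves the answer by a power of v^2
err5 = abs(exact(a, b) - truncated(a, b, 5))
err6 = abs(exact(a, b) - truncated(a, b, 6))
assert err6 < err5 < Fraction(1, 10**10)
```

The exact cancellation in the first assertion is the one-dimensional analogue of the statement that the régimes do not overlap: neither series knows about the other pole, yet nothing is counted twice and nothing is missed.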
Rescaling Rules, Lagrangean and Feynman Rules ============================================= [\[rescaling\]]{} In order to establish explicit velocity power counting in the NRQFT Lagrangean, one rescales the space-time coordinates such that typical momenta in either régime become dimensionless, as first proposed in [@LukeManohar] for the potential régime, and in [@GrinsteinRothstein] for the ultrasoft régime: $$\begin{aligned} \mbox{soft: } && t=(Mv)^{-1} \;{T_{\mathrm{s}}}\;\;,\;\; {\vec{x}}=(Mv)^{-1}\;{\vec{X}_{\mathrm{s}}}\;\;,{\nonumber}\\ \label{xtscaling} \mbox{potential: }&& t=(Mv^2)^{-1}\; {T_{\mathrm{u}}}\;\;,\;\; {\vec{x}}=(Mv)^{-1}\;{\vec{X}_{\mathrm{s}}}\;\;,\\ \mbox{ultrasoft: }&& t=(Mv^2)^{-1}\; {T_{\mathrm{u}}}\;\;,\;\;{\vec{x}}=(Mv^2)^{-1}\;{\vec{X}_{\mathrm{u}}}\;\;.{\nonumber}\end{aligned}$$ In order for the propagator terms in the NRQFT Lagrangean to be properly normalised, one has to set for the representatives of the gluons in the three régimes $$\begin{aligned} \mbox{soft: } && {A_{\mathrm{s}}}({\vec{x}},t) = (Mv)\; {{\mathcal{A}}_{\mathrm{s}}}({\vec{X}_{\mathrm{s}}},{T_{\mathrm{s}}})\;\;,{\nonumber}\\ \label{gluonscaling} \mbox{potential: } && {A_{\mathrm{p}}}({\vec{x}},t) =(Mv^{\frac{3}{2}})\;{{\mathcal{A}}_{\mathrm{p}}}({\vec{X}_{\mathrm{s}}},{T_{\mathrm{u}}})\;\;,\\ \mbox{ultrasoft: } && {A_{\mathrm{u}}}({\vec{x}},t) = (Mv^2)\; {{\mathcal{A}}_{\mathrm{u}}}({\vec{X}_{\mathrm{u}}},{T_{\mathrm{u}}})\;\;,{\nonumber}\end{aligned}$$ and for the quark representatives $$\begin{aligned} \label{quarkscaling} \mbox{soft: } && \Phi_\mathrm{s}({\vec{x}},t) = (Mv)^{\frac{3}{2}}\; \phi_\mathrm{s} ({\vec{X}_{\mathrm{s}}},{T_{\mathrm{s}}})\;\;,\\ \mbox{potential: } &&\Phi_\mathrm{p}({\vec{x}},t) = (Mv)^{\frac{3}{2}}\; \phi_\mathrm{p}({\vec{X}_{\mathrm{s}}},{T_{\mathrm{u}}})\;\;.{\nonumber}\end{aligned}$$ The rescaled free Lagrangean in the three regions reads then $$\begin{aligned} \label{sfreelagr} \mbox{soft: } &&{{\mathrm{d}}^{3}\:\!\! 
X_{\mathrm{s}}\;}{{\mathrm{d}}T_{\mathrm{s}}\;}\Big[\phi_\mathrm{s}^\dagger\Big({\mathrm{i}}{\partial}_0+ \frac{v}{2} \vec{{\partial}}^2 \Big)\phi_\mathrm{s} + {\frac{1}{2}}({\partial}_\mu {\mathcal{A}}_\mathrm{s})({\partial}^\mu {\mathcal{A}}_\mathrm{s})\Big]\;\;,\\ \label{pfreelagr} \mbox{potential: } &&{{\mathrm{d}}^{3}\:\!\! X_{\mathrm{s}}\;}{{\mathrm{d}}T_{\mathrm{u}}\;}\Big[ \phi_\mathrm{p}^\dagger\Big({\mathrm{i}}{\partial}_0+\frac{1}{2} \vec{{\partial}}^2 \Big) \phi_\mathrm{p} + {\frac{1}{2}}\Big({\mathcal{A}}_\mathrm{p} {\vec{{\partial}}}^2 {\mathcal{A}}_\mathrm{p} - v^2 {\mathcal{A}}_\mathrm{p} {\partial}^2_0 {\mathcal{A}}_\mathrm{p}\Big) \Big]\;\;,\\ \label{ufreelagr} \mbox{ultrasoft: } && {{\mathrm{d}}^{3}\:\!\! X_{\mathrm{u}}\;}{{\mathrm{d}}T_{\mathrm{u}}\;}{\frac{1}{2}}({\partial}_\mu {{\mathcal{A}}_{\mathrm{u}}})({\partial}^\mu {{\mathcal{A}}_{\mathrm{u}}})\;\;.\end{aligned}$$ Here, as in the following, the positions of the fields have been left out whenever they coincide with the variables of the volume element. Derivatives are to be taken with respect to the rescaled variables of the volume element. 
The (un-rescaled) propagators are $$\begin{aligned} \label{sprop} \mbox{soft: } && {\Phi_{\mathrm{s}}}:\; {\mbox{\parbox{60\unitlength}{ \begin{fmfgraph*}(60,30) \fmfleft{i}\fmfright{o} \fmf{heavy,width=thin,label=${\scriptstyle}(T,,{\vec{\,\!p}\!\:{}})$,label.side=left}{i,o} \end{fmfgraph*}}}} \;=\;\frac{{\mathrm{i}}}{T+{\mathrm{i}}\epsilon}\;\;,\\ && {A_{\mathrm{s}}}:\; {\mbox{\parbox{60\unitlength}{ \begin{fmfgraph*}(60,30) \fmfleft{i}\fmfright{o} \fmf{zigzag,width=thin,label=${\scriptstyle}k$,label.side=left}{i,o} \end{fmfgraph*}}}} \;=\;\frac{{\mathrm{i}}}{k^2+{\mathrm{i}}\epsilon}\;\;,{\nonumber}\\ \label{pprop} \mbox{potential: } && {\Phi_{\mathrm{p}}}:\; {\mbox{\parbox{60\unitlength}{ \begin{fmfgraph*}(60,30) \fmfleft{i}\fmfright{o} \fmf{fermion,width=thick,label=${\scriptstyle}(T,,{\vec{\,\!p}\!\:{}})$,label.side=left}{i,o} \end{fmfgraph*}}}} \;=\; \frac{{\mathrm{i}}}{T-\frac{{\vec{\,\!p}\!\:{}}^2}{2M}+{\mathrm{i}}\epsilon}\;\;,\\ && {A_{\mathrm{p}}}:\; {\mbox{\parbox{60\unitlength}{ \begin{fmfgraph*}(60,30) \fmfleft{i}\fmfright{o} \fmf{dashes,width=thin,label=${\scriptstyle}k$ ,label.side=left}{i,o} \end{fmfgraph*}}}} \;=\;\frac{{\mathrm{i}}}{-{\vec{k}}^2+{\mathrm{i}}\epsilon}\;\;,{\nonumber}\\ \label{uprop} \mbox{ultrasoft: } && {A_{\mathrm{u}}}:\; {\mbox{\parbox{60\unitlength}{ \begin{fmfgraph*}(60,30) \fmfleft{i}\fmfright{o} \fmf{photon,width=thin,label=${\scriptstyle}k$ ,label.side=left}{i,o} \end{fmfgraph*}}}} \;=\;\frac{{\mathrm{i}}}{k^2+{\mathrm{i}}\epsilon}\;\;.\end{aligned}$$ As expected, the soft quark becomes static, resembling the quark propagator in heavy quark effective theory, and the potential gluon becomes instantaneous. In order to maintain velocity power counting, corrections of order $v$ or higher must be treated as insertions as in the example, (\[example1\]). 
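That no information is lost by demoting the kinetic term to an insertion can be seen by resumming: a chain of static propagators $\frac{{\mathrm{i}}}{T}$ joined by insertions $-{\mathrm{i}}\frac{{\vec{\,\!p}\!\:{}}^2}{2M}$ is a geometric series whose partial sums converge to the full propagator $\frac{{\mathrm{i}}}{T-\frac{{\vec{\,\!p}\!\:{}}^2}{2M}}$, each insertion being suppressed by one power of $v$ for a soft quark. A toy check with exact rationals (all factors of ${\mathrm{i}}$ dropped, names ours):

```python
from fractions import Fraction

def chain(T, kin, n_insertions):
    """Static propagator 1/T with up to n insertions of the kinetic
    term `kin` (factors of i dropped): sum_{j<=n} kin^j / T^(j+1)."""
    return sum(kin ** j / T ** (j + 1) for j in range(n_insertions + 1))

def full(T, kin):
    """The un-expanded propagator 1/(T - kin)."""
    return 1 / (T - kin)

T, kin = Fraction(1), Fraction(1, 10)   # kin/T ~ v for a soft quark

# partial sums have the closed form (1 - (kin/T)^(n+1)) / (T - kin) ...
assert chain(T, kin, 6) == (1 - (kin / T) ** 7) / (T - kin)

# ... so every extra insertion improves the truncation by one power of v
assert abs(full(T, kin) - chain(T, kin, 7)) < abs(full(T, kin) - chain(T, kin, 6))
```

This is the propagator analogue of the integral example (\[example1\]): the expansion may be truncated at any order in $v$ without spoiling the counting.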
Insertions are represented by the (un-rescaled) Feynman rules $$\label{insertions} {\mbox{\parbox{60\unitlength}{ \begin{fmfgraph*}(60,30) \fmfleft{i}\fmfright{o} \fmf{double,width=thin}{i,v,o} \fmfv{decor.shape=cross,label=${\scriptstyle}(T,,{\vec{\,\!p}\!\:{}})$, label.angle=90}{v} \end{fmfgraph*}}}} \;=\;-{\mathrm{i}}\;\frac{{\vec{\,\!p}\!\:{}}^2}{2M}\;=\;{\mathcal{O}}(v) \;\;,\;\;{\mbox{\parbox{60\unitlength}{ \begin{fmfgraph*}(60,30) \fmfleft{i}\fmfright{o} \fmf{dashes,width=thin}{i,v,o} \fmfv{decor.shape=cross,label=${\scriptstyle}k$,label.angle=90}{v} \end{fmfgraph*}}}} \;=\;+ {\mathrm{i}}k_0^2\;=\;{\mathcal{O}}(v^2) \;\;.$$ Except for the physical gluons ${A_{\mathrm{s}}}$ and ${A_{\mathrm{u}}}$, there is no distinction between Feynman and retarded propagators in NRQFT: Antiparticle propagation has been eliminated by the field transformation from the relativistic to the non-relativistic Lagrangean, and both propagators have maximal support for on-shell particles, the Feynman propagator outside the light cone vanishing like ${\mathrm{e}}^{-M}$. Feynman’s perturbation theory becomes more convenient than the time-ordered formalism, as fewer diagrams have to be calculated. Finally, the interaction part of the Lagrangean reads (neglecting for the moment the $\Phi^4$ vertex in (\[nrlagr\])) $$\begin{aligned} \label{sintlagr} \mbox{soft: } && {{\mathrm{d}}^{3}\:\!\! X_{\mathrm{s}}\;}{{\mathrm{d}}T_{\mathrm{s}}\;}(-g)\Big[\Big({{\mathcal{A}}_{\mathrm{s}}}+ \sqrt{v}\;{{\mathcal{A}}_{\mathrm{p}}}({\vec{X}_{\mathrm{s}}},v{T_{\mathrm{s}}})+v\;{{\mathcal{A}}_{\mathrm{u}}}(v{\vec{X}_{\mathrm{s}}},v{T_{\mathrm{s}}})\Big) {\phi_{\mathrm{s}}}^\dagger{\phi_{\mathrm{s}}}+ \\&&\phantom{{{\mathrm{d}}^{3}\:\!\! X_{\mathrm{s}}\;}{{\mathrm{d}}T_{\mathrm{s}}\;}(-g)\Big[} +\Big({{\mathcal{A}}_{\mathrm{s}}}{\phi_{\mathrm{s}}}^\dagger{\phi_{\mathrm{p}}}({\vec{X}_{\mathrm{s}}},v{T_{\mathrm{s}}}) +\mathrm{ h.c.
} \Big)\Big] {\nonumber}\\ \label{pintlagr} \mbox{potential: } && {{\mathrm{d}}^{3}\:\!\! X_{\mathrm{s}}\;}{{\mathrm{d}}T_{\mathrm{u}}\;}(-g)\Big(\frac{1}{\sqrt{v}}{{\mathcal{A}}_{\mathrm{p}}}+ {{\mathcal{A}}_{\mathrm{u}}}(v{\vec{X}_{\mathrm{s}}},{T_{\mathrm{u}}})\Big) {\phi_{\mathrm{p}}}^\dagger{\phi_{\mathrm{p}}}\;\;.\end{aligned}$$ Note that the scaling régime of the volume element is set by the particle with the highest momentum and energy. Vertices like ${{\mathcal{A}}_{\mathrm{s}}}{\phi_{\mathrm{p}}}^\dagger{\phi_{\mathrm{p}}}$ cannot occur as energy and momentum must be conserved within each régime to the order in $v$ at which one works. Amongst the fields introduced, these are the only allowed interactions within and between the different régimes. One sees that technically, the multipole expansion comes from the different scaling of ${\vec{x}}$ and $t$ in the three régimes. It is also interesting to note that there is no choice but to assign one and the same coupling strength $g$ to each interaction. Different couplings for one vertex in different régimes are not allowed. This is to be expected, as the example (\[example8\]) demonstrated that the fields in the various régimes are representatives of one and the same non-relativistic particle, whose interactions are fixed by the non-relativistic Lagrangean (\[nrlagr\]). The interaction Feynman rules are $$\begin{aligned} \label{pppvertex} {\mbox{\parbox{55\unitlength}{ \begin{fmfgraph*}(55,30) \fmfstraight \fmftop{i,o} \fmfbottom{u} \fmf{fermion,width=thick,label=${\scriptstyle}(T,,{\vec{\,\!p}\!\:{}})$,label.side=left}{i,v} \fmf{fermion,width=thick,label=${\scriptstyle}(T^\prime,,{\vec{\,\!p}\!\:{}}^\prime)$, label.side=right}{o,v} \fmffreeze \fmf{dashes,width=thin,label=$\uparrow {\scriptstyle}k$}{u,v} \end{fmfgraph*}}}} &\!\!\!\!=&\!\!
-{\mathrm{i}}g (2\pi)^4 \;\delta(T+T^\prime+k_0) \;\delta^{(3)}({\vec{\,\!p}\!\:{}}+{\vec{\,\!p}\!\:{}}^\prime+{\vec{k}})\;=\;{\mathcal{O}}(\frac{1}{\sqrt{v}}) \;\;, \\[3ex] \label{pupvertex} {\mbox{\parbox{55\unitlength}{ \begin{fmfgraph*}(55,30) \fmfstraight \fmftop{i,o} \fmfbottom{u} \fmf{fermion,width=thick }{i,v} \fmf{fermion,width=thick }{o,v} \fmffreeze \fmf{photon,width=thin }{u,v} \end{fmfgraph*}}}} &\!\!\!\!=&\!\! -{\mathrm{i}}g (2\pi)^4 \;\delta(T+T^\prime+k_0) \Big[\exp\Big({{\vec{k}}\cdot\frac{\partial}{\partial({\vec{\,\!p}\!\:{}}+{\vec{\,\!p}\!\:{}}^\prime)}}\Big) \;\delta^{(3)}({\vec{\,\!p}\!\:{}}+{\vec{\,\!p}\!\:{}}^\prime)\Big]\;=\;{\mathcal{O}}({\mathrm{e}}^v)\;\;, \\[3ex] \label{sssvertex} {\mbox{\parbox{55\unitlength}{ \begin{fmfgraph*}(55,30) \fmfstraight \fmftop{i,o} \fmfbottom{u} \fmf{heavy,width=thin }{i,v} \fmf{heavy,width=thin }{o,v} \fmffreeze \fmf{zigzag,width=thin }{u,v} \end{fmfgraph*}}}} &\!\!\!\!=&\!\! -{\mathrm{i}}g (2\pi)^4 \;\delta(T+T^\prime+k_0) \;\delta^{(3)}({\vec{\,\!p}\!\:{}}+{\vec{\,\!p}\!\:{}}^\prime+{\vec{k}})\;=\;{\mathcal{O}}(v^0) \;\;, \\[3ex] \label{sspvertex} {\mbox{\parbox{55\unitlength}{ \begin{fmfgraph*}(55,30) \fmfstraight \fmftop{i,o} \fmfbottom{u} \fmf{heavy,width=thin }{i,v} \fmf{fermion,width=thick }{o,v} \fmffreeze \fmf{zigzag,width=thin }{u,v} \end{fmfgraph*}}}} &\!\!\!\!=&\!\! -{\mathrm{i}}g (2\pi)^4 \Big[\exp\Big({T^\prime\;\frac{\partial}{\partial (T+k_0)}}\Big) \;\delta(T+k_0)\Big] \;\delta^{(3)}({\vec{\,\!p}\!\:{}}+{\vec{\,\!p}\!\:{}}^\prime+{\vec{k}})\;=\;{\mathcal{O}}({\mathrm{e}}^v) \;\;, \\[3ex] \label{spsvertex} {\mbox{\parbox{55\unitlength}{ \begin{fmfgraph*}(55,30) \fmfstraight \fmftop{i,o} \fmfbottom{u} \fmf{heavy,width=thin }{i,v} \fmf{heavy,width=thin }{o,v} \fmffreeze \fmf{dashes,width=thin }{u,v} \end{fmfgraph*}}}} &\!\!\!\!=&\!\! 
-{\mathrm{i}}g (2\pi)^4 \Big[\exp\Big({k_0\;\frac{\partial}{\partial (T+T^\prime)}}\Big) \;\delta(T+T^\prime)\Big] \;\delta^{(3)}({\vec{\,\!p}\!\:{}}+{\vec{\,\!p}\!\:{}}^\prime+{\vec{k}})\,=\,{\mathcal{O}}(\sqrt{v}\;{\mathrm{e}}^v), \\[3ex] \label{susvertex} {\mbox{\parbox{55\unitlength}{ \begin{fmfgraph*}(55,30) \fmfstraight \fmftop{i,o} \fmfbottom{u} \fmf{heavy,width=thin }{i,v} \fmf{heavy,width=thin }{o,v} \fmffreeze \fmf{photon,width=thin }{u,v} \end{fmfgraph*}}}} &\!\!\!\!=&\!\! -{\mathrm{i}}g (2\pi)^4 \Big[\exp\Big({k_0\;\frac{\partial}{\partial (T+T^\prime)}}\Big) \;\delta(T+T^\prime)\Big]\times\\ {\nonumber}&&\phantom{-{\mathrm{i}}g (2\pi)^4}\times \Big[\exp\Big({{\vec{k}}\cdot\frac{\partial}{\partial({\vec{\,\!p}\!\:{}}+{\vec{\,\!p}\!\:{}}^\prime)}}\Big) \;\delta^{(3)}({\vec{\,\!p}\!\:{}}+{\vec{\,\!p}\!\:{}}^\prime)\Big]\;=\;{\mathcal{O}}(v\;{\mathrm{e}}^v)\;\;.\end{aligned}$$ The exponents representing the multipole expansion have to be expanded to the desired order in $v$. Double counting is prevented by the fact that in addition to most of the propagators, all vertices are distinct because of different multipole expansions. Using the equations of motion, the temporal multipole expansion may be re-written such that energy becomes conserved at the vertex. Now, both soft and potential or ultrasoft energies are present in the propagators, making it necessary to expand it in ultrasoft and potential energies. 
An example would be to restate the vertex (\[spsvertex\]) as $$\begin{aligned} {\mbox{\parbox{55\unitlength}{ \begin{fmfgraph*}(55,20) \fmfstraight \fmftop{i,o} \fmfbottom{u} \fmf{heavy,width=thin }{i,v} \fmf{heavy,width=thin }{o,v} \fmffreeze \fmf{dashes,width=thin }{u,v} \end{fmfgraph*}}}} &=& -{\mathrm{i}}g (2\pi)^4 \;\delta(T+T^\prime+k_{\mathrm{p},0}) \;\delta^{(3)}({\vec{\,\!p}\!\:{}}+{\vec{\,\!p}\!\:{}}^\prime+{\vec{k}})\;=\;{\mathcal{O}}(\sqrt{v}) \;\;,\end{aligned}$$ and the soft propagator as containing insertions ${\mathcal{O}}(v)$ for potential energies $k_{\mathrm{p}}$ $${\mbox{\parbox{55\unitlength}{ \begin{fmfgraph*}(55,30) \fmfleft{i}\fmfright{o} \fmf{heavy,width=thin,label= ${\scriptstyle}(T+k_{\mathrm{p},,0},,{\vec{\,\!p}\!\:{}})$,label.side=left}{i,o} \end{fmfgraph*}}}} \;=\;\frac{{\mathrm{i}}}{T+{\mathrm{i}}\epsilon} \sum\limits_{n=0}^\infty\left(\frac{-k_{\mathrm{p},0} }{T}\right)^n \;\;.$$ The same holds of course for the momentum-non-conserving vertices. In the renormalisation group approach, there is therefore only one relevant coupling (i.e. only one which dominates at zero velocity): As expected, it is the ${\Phi_{\mathrm{p}}}{\Phi_{\mathrm{p}}}{A_{\mathrm{p}}}$ coupling providing the binding. The ${\Phi_{\mathrm{s}}}{\Phi_{\mathrm{s}}}{A_{\mathrm{p}}}$-, ${\Phi_{\mathrm{s}}}{\Phi_{\mathrm{s}}}{A_{\mathrm{u}}}$-couplings and both insertions (\[insertions\]) are irrelevant. The marginal couplings ${\Phi_{\mathrm{p}}}{\Phi_{\mathrm{p}}}{A_{\mathrm{u}}}$, ${\Phi_{\mathrm{s}}}{\Phi_{\mathrm{s}}}{A_{\mathrm{s}}}$ and ${\Phi_{\mathrm{s}}}{\Phi_{\mathrm{p}}}{A_{\mathrm{s}}}$ are irrelevant in gauge theories in carefully chosen gauges like the Coulomb gauge. This point will be elaborated upon in the future [@hgpub4]. The velocity power counting is not yet complete. As one sees from the volume element used in (\[sintlagr\]), the vertex rules for the soft régime count powers of $v$ with respect to the soft régime. 
One hence retrieves the velocity power counting of Heavy Quark Effective Theory [@IsgurWise1; @IsgurWise2] (HQET), in which the interactions between one heavy (and hence static) and one or several light quarks are described. Usually, HQET counts inverse powers of mass in the Lagrangean, but because in the soft régime $Mv\sim \mbox{const.}$, the two approaches are actually equivalent. HQET becomes a sub-set of NRQCD, complemented by interactions between soft (HQET) and potential or ultrasoft particles. In NRQCD with two potential quarks as initial and final states, the soft régime can occur only inside loops, as noted above. Therefore, the power counting in the soft sub-graph has to be transferred to the potential régime. Because soft loop momenta scale like $[{\mathrm{d}}^{4}\! k_\mathrm{s}]\sim v^4$, while potential ones scale like $[{\mathrm{d}}^{4}\! k_\mathrm{p}]\sim v^5$, each largest sub-graph which contains only soft quarks and no potential ones (a “soft blob”) is enhanced by an additional factor $\frac{1}{v}$. Couplings between soft quarks and any gluons inside a blob take place in the soft régime and hence are counted according to the rules of that régime. Each soft blob contributes at least four orders of $g$, but only one inverse power of $v\sim g^2$. Power counting is preserved. These velocity power counting rules in loops are verified in explicit calculations of the exemplary graphs (see also below), but a rigorous derivation is left for a future publication [@hgpub4]. With rescaling, multipole expansion and loop counting, the velocity power counting rules are established, and one can now proceed to check the validity of the proposed Lagrangean, matching NRQFT to the relativistic theory in the examples given by Beneke and Smirnov [@BenekeSmirnov].

Model Calculations
==================

[\[bsexamples\]]{} The first example is the lowest order correction to the two quark production graph.
Without proof, it will be used that in dimensional regularisation, one can match NRQFT and the relativistic theory graph by graph, so that the whole scattering amplitude need not be considered [@BenekeSmirnov]. The collection of graphs to be matched to the relativistic diagram is depicted in Fig. \[graph1\]. Here and in the following, hard (ultraviolet) contributions will not be shown explicitly. They are taken care of by the four-quark interaction of the non-relativistic Lagrangean (\[nrlagr\]) and renormalisation of the external currents [@LukeSavage]. The energy and momentum routing has been chosen to be the one of the non-relativistic center of mass system, with $2T$ the total kinetic energy, and $y=-({\vec{\,\!p}\!\:{}})^2\propto - v^2$ the relative four-momentum squared of the outgoing quarks as an indicator for the thresholdness of the process considered. Thanks again to dimensional regularisation, any other assignment can be chosen and reproduces the result. The vanishing of the ultrasoft gluon exchange diagram and the value of the potential gluon exchange diagram have already been calculated in [@LukeSavage]. The soft exchange diagram vanishes, so that no new contribution is obtained. It is not even necessary to specify how soft quarks couple to external sources: If energy is conserved at the production vertex, the integral to be calculated is $$\label{vertex1} \int {\frac{{\mathrm{d}}^{d}\! k}{(2\pi)^d}\;}\frac{1}{T+k_0+{\mathrm{i}}\epsilon}\;\frac{1}{T-k_0+{\mathrm{i}}\epsilon}\; \frac{1}{k_0^2-{\vec{k}}^2}\;\;.$$ As the gluon is soft, $T\ll k_0$ and the quark propagators must be expanded in $T/k_0\sim v$, giving zero to any order as no scale is present in the dimensionally regularised integral. If energy is not conserved at the production vertex, the soft quark propagator is $\frac{1}{\pm k_0}$, and the contribution vanishes again. Therefore, there is no coupling of soft subgraphs to external sources to any order in $v$.
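The repeated appeal to “no scale is present” rests on the standard convention of dimensional regularisation (cf. the axioms in [@Collins Chap. 4.1]) that scaleless integrals vanish identically, $$\int {\frac{{\mathrm{d}}^{d}\! k}{(2\pi)^d}\;} (k^2)^{-\alpha} \;=\; 0 \quad \mbox{for any } \alpha \;\;.$$ After the expansion in $T/k_0$, each term of the integrand of (\[vertex1\]) is a homogeneous function of the loop momentum alone and is therefore of exactly this scaleless form.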
Soft quarks in external lines are far off their mass shell and hence violate the assumptions underlying threshold expansion and NRQFT. In general, we conclude that soft quarks are present only in internal lines, and that the first non-vanishing contribution from the soft régime for the production vertex occurs not earlier than at ${\mathcal{O}}(g^4)$. The first non-zero soft contribution actually comes from the two gluon direct exchange diagram of Fig. \[graph2\] calculated by Beneke and Smirnov [@BenekeSmirnov] using threshold expansion. The Mandelstam variable $t=-({\vec{\,\!p}\!\:{}}-{\vec{\,\!p}\!\:{}}^\prime)^2$ describes the momentum transfer in the center of mass system. The ultraviolet behaviour of this graph is mimicked in NRQFT by a four-fermion exchange given by the vertex ${\mathrm{i}}c_2=\frac{-{\mathrm{i}}g^4}{24 \pi^2 M^2}={\mathcal{O}}(t^0,y^0)$ of the Lagrangean (\[nrlagr\]), which using the rescaling rules is seen to be ${\mathcal{O}}(v)$. The Feynman rules (\[pupvertex\]) imply that the ${A_{\mathrm{u}}}{A_{\mathrm{u}}}$-diagram is of order ${\mathrm{e}}^v$ with a leading loop integral contribution (similar to [@BenekeSmirnov fl.  (32)]) $$\label{vertex2uu} \int{\frac{{\mathrm{d}}^{d}\! k}{(2\pi)^d}\;}\frac{1}{k_0^2-{\vec{k}}^2}\; \frac{1}{k_0^2-{\vec{k}}^2}\;\frac{1}{T+k_0-\frac{{\vec{\,\!p}\!\:{}}^2}{2M}} \;\frac{1}{T-k_0-\frac{{\vec{\,\!p}\!\:{}}^2}{2M}}\;\;.$$ The diagram is expected to be zero to all orders since the ultrasoft gluons do not change the quark momenta and therefore the scattering takes place only in the forward direction, ${\vec{\,\!p}\!\:{}}={\vec{\,\!p}\!\:{}}^\prime$. Upon employing the on-shell condition for potential quarks, $T=\frac{{\vec{\,\!p}\!\:{}}^2}{2M}$ to leading order, it indeed vanishes as no scale is present.
Since $T-\frac {{\vec{\,\!p}\!\:{}}^2}{2M}\sim Mv^4\ll |{\vec{k}}|\sim Mv$ (and $k_0\sim Mv^2$) in the potential régime, this is a legitimate expansion. The ${A_{\mathrm{u}}}{A_{\mathrm{p}}}$ and ${A_{\mathrm{p}}}{A_{\mathrm{u}}}$ contributions (${\mathcal{O}}(\frac{1}{v}{\mathrm{e}}^v)$) are zero for the same reason. The lowest order contribution to the ${A_{\mathrm{p}}}{A_{\mathrm{p}}}$ graph (${\mathcal{O}}(\frac{1}{v^2})$) is $$\label{vertex2pp} \int{\frac{{\mathrm{d}}^{d}\! k}{(2\pi)^d}\;}\frac{1}{{\vec{k}}^2-{\mathrm{i}}\epsilon}\;\frac{1}{({\vec{\,\!p}\!\:{}}-{\vec{\,\!p}\!\:{}}^\prime+{\vec{k}})^2 -{\mathrm{i}}\epsilon}\; \frac{1}{T+k_0-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M}+{\mathrm{i}}\epsilon}\; \frac{1}{T-k_0-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M}+{\mathrm{i}}\epsilon}\;\;.$$ In the light of the discussion at the end of Sect. \[philosophy\], it is most consistent to perform the $k_0$ integration by dimensional regularisation, using $\int{\frac{{\mathrm{d}}^{d}\! k}{(2\pi)^{d}}\;}=\int{\frac{{\mathrm{d}}^{\sigma}\! k_0}{(2\pi)^{\sigma}}\;} {\frac{{\mathrm{d}}^{d-\sigma}\! {\vec{k}}}{(2\pi)^{d-\sigma}}\;}$, $\sigma\to1$ [@Collins Chap. 4.1]. Split dimensional regularisation was introduced by Leibbrandt and Williams [@LeibbrandtWilliams] to cure the problems arising from pinch singularities in non-covariant gauges. Appendix \[app:splitdimreg\] shows that in the case at hand, it has the same effect as closing the $k_0$-contour and picking the quark propagator poles prior to using dimensional regularisation in $d-1$ Euclidean dimensions. To achieve ${\mathcal{O}}(v^1)$ accuracy, one must also consider one insertion (\[insertions\]) at the potential gluon lines, giving rise to a contribution $$\begin{aligned} \label{vertex2pp2} && \int{\frac{{\mathrm{d}}^{d}\! 
k}{(2\pi)^d}\;}\frac{1}{{\vec{k}}^2-{\mathrm{i}}\epsilon}\; \frac{1}{({\vec{\,\!p}\!\:{}}-{\vec{\,\!p}\!\:{}}^\prime+{\vec{k}})^2-{\mathrm{i}}\epsilon}\; k_0^2\;\Big(\;\frac{1}{{\vec{k}}^2-{\mathrm{i}}\epsilon}\;+\; \frac{1}{({\vec{\,\!p}\!\:{}}-{\vec{\,\!p}\!\:{}}^\prime+{\vec{k}})^2 -{\mathrm{i}}\epsilon}\;\Big)\;\times\nonumber \\ &&\;\;\;\;\;\;\;\;\;\;\;\;\; \times\;\frac{1}{T+k_0-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M}+{\mathrm{i}}\epsilon}\; \frac{1}{T-k_0-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M}+{\mathrm{i}}\epsilon}\;\;.\end{aligned}$$ The $k_0$ integration is naïvely linearly divergent, and hence closing the contour is not straightforward. As App. \[app:splitdimreg\] demonstrates, split dimensional regularisation circumvents this problem. The sum of both contributions (\[vertex2pp\]/\[vertex2pp2\]), $$\label{vertex2ppresult} \frac{{\mathrm{i}}}{8\pi t}\;\frac{M+T}{\sqrt{y}}\;\Big(\frac{2}{4-d}- \gamma_\mathrm{E}-\ln\frac{-t}{4\pi\mu^2}\Big)\;\;,$$ agrees with [@BenekeSmirnov fl. (31)] when one keeps in mind that in that reference, heavy particle external lines were normalised relativistically, while a non-relativistic normalisation was chosen here. Also, this article uses the $MS$ rather than the $\overline{MS}$ scheme. Near threshold, the scale is set by the total threshold energy $4\pi\mu^2=4(M+T)^2$. The soft gluon part is to lowest order (${\mathcal{O}}(v^{-1})$ because of one soft blob) given by $$\label{vertex2ss} \int{\frac{{\mathrm{d}}^{d}\! k}{(2\pi)^d}\;}\frac{1}{k_0^2-{\vec{k}}^2+{\mathrm{i}}\epsilon}\;\frac{1}{k_0^2-({\vec{\,\!p}\!\:{}}-{\vec{\,\!p}\!\:{}}^\prime+ {\vec{k}})^2+{\mathrm{i}}\epsilon}\;\frac{1}{k_0+{\mathrm{i}}\epsilon}\;\frac{1}{-k_0+{\mathrm{i}}\epsilon}\;\;,$$ which corresponds to [@BenekeSmirnov fl. (33)]. Now, split dimensional regularisation must be used if no ad-hoc prescription for the pinch singularity at $k_0=0$ is to be invoked.
That the pinch is accounted for by potential gluon exchange and hence must be discarded, agrees with the intuitive argument that zero four-momentum scattering in QED is mediated by a potential only, and no retardation or radiation effects occur. On the other hand, the model Lagrangean contains three marginal couplings as seen at the end of Sect. \[rescaling\], which may give finite contributions as energies and momenta of the scattered particles go to zero. Although the prescription and the result from split dimensional regularisation coincide in the present case as demonstrated at the end of Sect. \[philosophy\] and in App. \[app:splitdimreg\], this may not hold in general. The result to ${\mathcal{O}}(v^1)$ exhibits another collinear divergence, $$\label{vertex2ssresult} \frac{-{\mathrm{i}}}{4\pi^2 t}\;\Big(\frac{2}{4-d}-\gamma_\mathrm{E}- \ln\frac{-t}{4\pi\mu^2}\Big)\;+\;\frac{{\mathrm{i}}}{24\pi^2M^2}\;\Big[1+ \frac{2y}{t} \;\Big(\frac{2}{4-d}-\gamma_\mathrm{E}-\ln\frac{-t}{4\pi\mu^2}\Big)\Big] \;\;,$$ and agrees with the one given by Beneke and Smirnov [@BenekeSmirnov fl.  (36)]. The second term comes from insertions and multipole expansions to achieve ${\mathcal{O}}(v)$ accuracy. It is easy to see that the power counting proposed works. As expected, the potential diagram is $\sqrt{y}\propto v$ stronger than the leading soft contribution, and $t \sqrt{y}\propto v^3$ stronger than the four-fermion interaction. In conclusion, the proposed NRQFT Lagrangean reproduces the result for the planar graph of the relativistic theory *only if* the soft gluon and the soft quark are accounted for: The four-fermion contact interaction produces just a $\frac{1}{M^2}$-term, graphs containing ultrasoft gluons were absent, and the potential gluon (\[vertex2ppresult\]) gave no ${\mathcal{O}}(y^0)$ contribution. This shows the necessity of soft quarks and gluons.
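As an elementary cross-check of the temporal integral that feeds into the potential-gluon result (\[vertex2ppresult\]): at finite $\epsilon$ one has exactly $\int\frac{{\mathrm{d}}k_0}{2\pi}\,\big[(A+{\mathrm{i}}\epsilon)^2-k_0^2\big]^{-1}=-\frac{{\mathrm{i}}}{2}\,(A+{\mathrm{i}}\epsilon)^{-1}$, with $A$ standing for $T-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M}$, as derived in App. \[app:splitdimreg\]. The following sketch verifies this by brute-force quadrature; the numerical values of $A$ and $\epsilon$ are arbitrary illustrative choices, not taken from the calculation above.

```python
import numpy as np

# A stands in for T - (k+p)^2/(2M); eps is a *finite* i*epsilon regulator.
# Both values are arbitrary illustrative choices.
A, eps = 1.3, 0.1

# map k0 = tan(theta) to cover the whole real axis with a finite grid
theta = np.linspace(-np.pi / 2 + 1e-6, np.pi / 2 - 1e-6, 200001)
k0 = np.tan(theta)
jac = 1.0 / np.cos(theta) ** 2          # dk0 = jac * dtheta
vals = jac / ((A + 1j * eps) ** 2 - k0 ** 2)

# trapezoidal rule for int dk0 / (2 pi)
numeric = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(theta)) / (2 * np.pi)

# result of closing the contour in the upper half plane: -i/2 * 1/(A + i eps)
exact = -0.5j / (A + 1j * eps)
print(abs(numeric - exact))             # quadrature error only (small)
```

The same check applies for any complex $A$ with the poles kept off the real axis; the $\sigma\to 1$ limit of split dimensional regularisation reproduces exactly this contour value.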
The coupling strength of the ${\Phi_{\mathrm{s}}}{A_{\mathrm{s}}}{\Phi_{\mathrm{p}}}$ vertex is also seen to be identical to the other vertex coupling strengths, $g$. The planar fourth order correction to two quark production (Fig. \[graph3\]) was also compared to the result of [@BenekeSmirnov], and is correctly accounted for when the Feynman rules proposed above are used to ${\mathcal{O}}(v^1)$.

Conclusions and Outlook
=======================

[\[conclusions\]]{} The objective of this article was a simple presentation of the ideas behind explicit velocity power counting in dimensionally regularised NRQFT. It started with the identification of three different régimes of scale for on-shell particles in NRQFT from the poles in the non-relativistic propagators. This leads in a natural way to the existence of a new quark field and a new gluon field in the soft scaling régime $E\sim |{\vec{\,\!p}\!\:{}}|\sim Mv$. In it, quarks are static and gluons on shell, and HQET becomes a sub-set of NRQCD. None of the five fields in the three régimes should be thought of as “physical particles”. Rather, they represent the “true” quark and gluon in the respective régimes as the infrared-relevant degrees of freedom. None of the régimes overlap. An NRQFT Lagrangean has been proposed which leads to the correct behaviour of scattering and production amplitudes. It establishes explicit velocity power counting which is preserved to all orders in perturbation theory. The reason for the existence of such a Lagrangean, once dimensional regularisation is chosen to complete the theory, was elaborated upon in a simple example: the non-commutativity of the expansion in small parameters with dimensionally regularised integrals. Due to the similarity between the calculation of the examples in the work presented here and in [@BenekeSmirnov], one may get the impression that the Lagrangean presented is only a simple re-formulation of the threshold expansion.
Partially, this is true, and a future publication [@hgpub4] will indeed show the equivalence of the two approaches to all orders in the threshold and coupling expansion. A list of other topics to be addressed there includes: the straightforward generalisation to NRQCD; a proof whether the particle content outlined above is not only consistent but complete, i.e. that no new fields (e.g. an ultrasoft quark) or “exceptional” régimes arise; an investigation of the influence of soft quarks and gluons on bound state calculations in NRQED and NRQCD; a full list of the various couplings between the different régimes and an exploitation of their relevance for physical processes. The formal reason why double counting between different régimes and especially between soft and ultrasoft gluons does not occur, a derivation of the way soft quarks couple to external sources, and the rôle of soft gluons in Compton scattering deserve further attention, too. I would like to stress that the diagrammatic threshold expansion derived here allows for a more automatic and intuitive approach and makes it easier to determine the order in $\sqrt{-y}\propto v$ to which a certain graph contributes. On the other hand, the NRQFT Lagrangean can easily be applied to bound state problems. As the threshold expansion of Beneke and Smirnov starts in a relativistic setting, it may formally be harder to treat bound states there. Indeed, I believe that even if one may not be able to prove the conjectures of the one starting from the other, both approaches will profit from each other in the wedlock of NRQFT and threshold expansion.

Acknowledgments {#acknowledgments .unnumbered}
===============

It is my pleasure to thank J.-W. Chen, D. B. Kaplan and M. J. Savage for stimulating discussions. The work was supported in part by a Department of Energy grant DE-FG03-97ER41014.
Some Details on Split Dimensional Regularisation
================================================

[\[app:splitdimreg\]]{} This appendix presents the part of the calculations in the examples of Sect.  \[bsexamples\] which makes use of split dimensional regularisation as introduced by Leibbrandt and Williams [@LeibbrandtWilliams]. In its results, split dimensional regularisation agrees with other methods to compute loop integrals in non-covariant gauges, such as the non-principal value prescription [@LeeNyeo], but two features make it especially attractive: It treats the temporal and spatial components of the loop integrations on an equal footing, and no recipes are necessary. Rather, it uses the fact that, like in ordinary integration, the axioms of dimensional regularisation [@Collins Chap.  4.1] allow one to split the integration into two separate integrals: $$\label{eq:splitdimreg} \int{\frac{{\mathrm{d}}^{d}\! k}{(2\pi)^{d}}\;}=\int{\frac{{\mathrm{d}}^{\sigma}\! k_0}{(2\pi)^{\sigma}}\;}{\frac{{\mathrm{d}}^{d-\sigma}\! {\vec{k}}}{(2\pi)^{d-\sigma}}\;}$$ Both integrations can be performed consecutively, and the limit $\sigma\to 1$ can – if finite – be taken immediately, because the integration over the spatial components of the loop momentum in (\[eq:splitdimreg\]) is still regularised in $d-1$ dimensions. Finally, the limit $d\to 4$ is taken at the end of the calculation. Equation (\[vertex2pp\]) contains the simplest $k_0$ sub-integral: $$\label{eq:vertex2ppsdr} \int{\frac{{\mathrm{d}}^{\sigma}\! k_0}{(2\pi)^{\sigma}}\;} \frac{1}{k_0+T-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M}+{\mathrm{i}}\epsilon}\; \frac{1}{-k_0+T-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M}+{\mathrm{i}}\epsilon}$$ Using standard formulae for dimensional regularisation in Euclidean space [@Ramond App. B], the result is finite as $\sigma\to 1$: $$\begin{aligned} \label{eq:vertex2ppsdrres} && \int{\frac{{\mathrm{d}}^{\sigma}\!
k_0}{(2\pi)^{\sigma}\;} \frac{1}{[T-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M}+{\mathrm{i}}\epsilon]^2 -k_0^2} = -\frac{\Gamma[1-\frac{\sigma}{2}]}{(4\pi)^{\sigma/2} \Gamma[1]} \; \bigg(-\Big[T-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M}+{\mathrm{i}}\epsilon\Big]^2 \bigg)^{\frac{\sigma}{2}-1}\to\nonumber\\ &&\to -\frac{{\mathrm{i}}}{2}\;\Big(T-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M}+{\mathrm{i}}\epsilon\Big)^{-1} \;\;\mbox{ as }\sigma\to 1\end{aligned}$$ It is no surprise that closing the contour produces the same result, because for any finite integral, the answers of all regularisation methods have to coincide. The integral over the spatial components of the loop momentum is now straightforward. The potential gluon diagram with one insertion at a gluon leg (\[vertex2pp2\]) yields a split dimensional integral which diverges linearly in $k_0$, so that naïve contour integration is not legitimate. $$\label{eq:vertex2pp2sdr} \int{\frac{{\mathrm{d}}^{\sigma}\! k_0}{(2\pi)^{\sigma}}\;} \frac{k_0^2}{[T-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M}+{\mathrm{i}}\epsilon]^2 -k_0^2} \to - \frac{{\mathrm{i}}}{2}\;\Big(T-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M}\Big)\;\;\mbox{ as } \sigma\to 1$$ To arrive at this result, the numerator was re-written as $(k_0^2- (T-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M})^2) \;+\;(T-\frac{({\vec{k}}+{\vec{\,\!p}\!\:{}})^2}{2M})^2$. Its first term cancels the denominator, yielding an integral without scale which therefore vanishes in dimensional regularisation. The second term has been calculated in (\[eq:vertex2ppsdrres\]). The integral over the spatial components of the loop momentum provides again no complications, leading to (\[vertex2ppresult\]). Finally, it was already shown at the end of Sect. \[philosophy\] that dimensional regularisation discards pinch singularities encountered in contour integrations.
This is validated again by looking at the split dimensional integral for $k_0$ in the soft gluon contribution (\[vertex2ss\]), $$\label{eq:vertex2sssdr} \int{\frac{{\mathrm{d}}^{\sigma}\! k_0}{(2\pi)^{\sigma}}\;} \frac{1}{k_0^2-a^2}\; \frac{1}{k_0^2-b^2}\; \frac{1}{-k_0^2}\;\;,$$ where $a^2:={\vec{k}}^2-{\mathrm{i}}\epsilon$ and $b^2:=({\vec{\,\!p}\!\:{}}-{\vec{\,\!p}\!\:{}}^\prime+{\vec{k}})^2-{\mathrm{i}}\epsilon$. After combining denominators, the resulting integral is simple: $$\begin{aligned} -2\;\frac{\Gamma[3-\frac{\sigma}{2}]}{(4\pi)^\frac{\sigma}{2} \Gamma[3]}\; \int\limits^1_0 {{\mathrm{d}}^{}\! x\;}{{\mathrm{d}}^{}\! y\;} x\;\Big(-a^2 (1-x)-b^2 xy \Big)^{\frac{\sigma}{2}-3}\to \frac{{\mathrm{i}}}{2}\; \frac{1}{a^2-b^2}\; \Big(\frac{1}{a^3}-\frac{1}{b^3}\Big)\;\;\mbox{ as } \sigma\to 1 \;\;.\end{aligned}$$ This agrees with the result of Beneke and Smirnov [@BenekeSmirnov fl.  (34)] who use contour integration and drop the contribution from the pinch singularity. The integral over the spatial components of the loop momentum provides again no unfamiliar complications, leading to (\[vertex2ssresult\]). [99]{} W. E. Caswell and G. P. Lepage: [[[*Phys.  Lett. *[B]{}]{}]{} [**[167]{}**]{}, 437 (1986)]{}. G. T. Bodwin, E. Braaten and G. P. Lepage: [[[*Phys. Rev. *[D]{}]{}]{} [**[51]{}**]{}, 1125 (1995)]{}; [[[*Phys. Rev. *[D]{}]{}]{} [**[55]{}**]{}, 5853 (1997)]{}. M. Beneke and V. A. Smirnov: [*Asymptotic Expansion of Feynman Integrals near Threshold*; CERN-TH/7-315, hep-ph/9711391, 1997 (to be published in [*Nucl. Phys. *[B]{}]{})]{}. M. Luke and A. V. Manohar: [[[*Phys. Rev. *[D]{}]{}]{} [**[55]{}**]{}, 4129 (1997)]{}. B. Grinstein and I. Z. Rothstein: [[[*Phys. Rev. *[D]{}]{}]{} [**[57]{}**]{}, 78 (1998)]{}. M. Luke and M. J. Savage: [[[*Phys. Rev. *[D]{}]{}]{} [**[57]{}**]{}, 413 (1998)]{}. P.
Labelle: [*Effective Field Theories for QED Bound States: Extending Nonrelativistic QED to Study Retardation Effects*; MCGILL-96-33, hep-ph/9608491, 1996 (to be published)]{}. H. W. Grießhammer (in preparation); see also [*The Soft Régime on NRQCD*; NT@UW-98-12, hep-ph/9804251, 1998 (to be published in the Proceedings of the Workshop “Nuclear Physics With Effective Field Theories” at Caltech, 26th – 27th February 1998, eds. R. Seki, U. van Kolck and M. J. Savage, World Scientific)]{}. G. Leibbrandt and J. Williams, [[[*Nucl. Phys. *[B]{}]{}]{} [**[475]{}**]{}, 469 (1996)]{}. A. Pineda and J. Soto: [[[*Phys.  Lett. *[B]{}]{} (Proc. Suppl.)]{} [**[64]{}**]{}, 428 (1998)]{}. V. A. Smirnov: [*Renormalization and Asymptotic Expansion*; Progress in Physics Vol. 14, Birkhäuser (1991)]{}. N. Isgur and M. B. Wise: [[[*Phys.  Lett. *[B]{}]{}]{} [**[232]{}**]{}, 113 (1989)]{}. N. Isgur and M. B. Wise: [[[*Phys.  Lett. *[B]{}]{}]{} [**[237]{}**]{}, 527 (1990)]{}. J. Collins, [*Renormalization*; Cambridge Monographs on Mathematical Physics, CUP (1984)]{}. K.-C. Lee and S.-L. Nyeo: [[[*J. Math. Phys. *]{}]{} [**[35]{}**]{}, 2210 (1994)]{}. P. Ramond: [*Field Theory: A Modern Primer*; Frontiers in Physics Vol. 74, Addison Wesley (1990)]{}. [^1]: Email: [email protected] [^2]: I am indebted to M. Beneke for conversation on this point
--- abstract: 'The conserved Swift-Hohenberg equation with cubic nonlinearity provides the simplest microscopic description of the thermodynamic transition from a fluid state to a crystalline state. The resulting phase field crystal model describes a variety of spatially localized structures, in addition to different spatially extended periodic structures. The location of these structures in the temperature versus mean order parameter plane is determined using a combination of numerical continuation in one dimension and direct numerical simulation in two and three dimensions. Localized states are found in the region of thermodynamic coexistence between the homogeneous and structured phases, and may lie outside of the binodal for these states. The results are related to the phenomenon of slanted snaking but take the form of standard homoclinic snaking when the mean order parameter is plotted as a function of the chemical potential, and are expected to carry over to related models with a conserved order parameter.' author: - Uwe Thiele - 'Andrew J. Archer' - 'Mark J. Robbins' - Hector Gomez - Edgar Knobloch title: 'Localized states in the conserved Swift-Hohenberg equation with cubic nonlinearity' --- Introduction ============ [\[intro\] ]{} Spatially localized structures (hereafter LS) are observed in a great variety of pattern-forming systems. Such states include spot-like structures found in reaction-diffusion systems [@CRT00] and in a liquid light-valve experiment [@BCR09], isolated spikes observed in a ferrofluid experiment [@RB05] and localized buckled states resulting from the buckling of slender structures under compression [@HPCWWBL00]. Related states are observed in fluid mechanics, including convection [@bkam06; @BK:08b; @Blanchflower99] and shear flows [@sgb10]. In other systems, such as Faraday waves, the LS oscillate in time, either periodically or with a more complex time-dependence, forming structures referred to as oscillons [@laf96; @rlc11]. 
This is also the case for oscillons in granular media [@UMS:96] and in optics [@AFO09]. Other examples of LS include localized traveling waves [@Kolodner88; @Barten91] and states called “worms” observed in electroconvection [@Dennin96; @Riecke98]. In many of these systems the use of envelope equations removes the (fast) time-dependence and maps such time-dependent structures onto equilibria. Many of the structures mentioned above are examples of “dissipative solitons” [@PBA10] in which energy loss through dissipation is balanced by energy input through spatially homogeneous forcing. Others (e.g., [@RB05; @HPCWWBL00]) correspond to local minima of an underlying energy or Lyapunov functional. This is the case for systems whose dynamics are of gradient type, provided only that the energy is bounded from below. On finite domains with null boundary conditions such systems evolve to steady states which may or may not correspond to LS. However, in generic systems supporting dissipative solitons no Lyapunov function will be present and the evolution of the system may be nonmonotonic. In either system type, time-independent LS are found in regions of parameter space in which a time-independent spatially homogeneous (i.e., uniform) state coexists with a time-independent spatially periodic state. Within this region there is a subregion referred to as the [*snaking*]{} or [*pinning*]{} region [@Pom:86] in which a great variety of stationary LS are present. In the simplest, one-dimensional, case the LS consist of a segment of the periodic state embedded in a homogeneous background; segments of the homogeneous state embedded in periodic background may be thought of as localized hole states LH. 
Steady states of this type lie on a pair of intertwined branches that snake back and forth across the snaking region, one of which consists of reflection-symmetric LS with a peak in the center, while the other consists of similar states but with a dip in the center, and likewise for the holes. Near the left edge of the snaking region each LS adds a pair of new cells, one at either end, and these grow to full strength as one follows the LS across the snaking region to its right edge, where both branches turn around and the process repeats. Thus, as one follows the LS branches towards larger $L^2$ norm, both types of LS gradually grow in length, and all such structures coexist within the snaking region. On a finite interval the long LS take the form of holes in an otherwise spatially periodic state but on the real line, the LS and LH remain distinct although both occupy the same snaking region. The LS branches are, in addition, interconnected by cross-links resembling the rungs of a ladder, consisting of asymmetric LS [@bukn07]. In generic systems posed on the real line states of this type drift, either to the left or the right, depending on the asymmetry, but in systems with gradient dynamics the asymmetric states are also time-independent. These, along with bound states of two, three, etc. LS/LH are also present within the snaking region [@bukn07].
Systems of this type arise frequently in fluid convection and other applications [@CM01; @LBK11; @BBKK12] and are distinguished from the standard scenario summarized above by the following properties [@FCS07; @dawe08]: (i) the snaking becomes slanted (sometimes referred to as “sidewinding”), (ii) LS may be present outside of the region of coexistence of the homogeneous and periodic states, (iii) LS are present even when the periodic states bifurcate supercritically, i.e., when the coexistence region is absent entirely. The slanting of the snakes-and-ladders structure is a finite-size effect: in a finite domain expulsion of the conserved quantity from the LS implies its shortage outside, a fact that progressively delays (to stronger forcing) the formation events whereby the LS grow in length. The net effect is that LS are found in a much broader region of parameter space than in nonconserved systems. The above properties are shared by many of the models arising in dynamical density functional theory (DDFT) and related phase field models of crystalline solids. The simplest such phase field crystal (PFC) model [@PFC_review] (see below) leads to the so-called conserved Swift-Hohenberg (cSH) equation. This equation was first derived, to the authors’ knowledge, as the equation governing the evolution of binary fluid convection between thermally insulating boundary conditions [@K:89]; for recent derivations in the PFC context see Refs. [@vBVL09; @ARTK12; @PFC_review]. In this connection the PFC model may be viewed as probably the simplest [*microscopic*]{} model for the freezing transition that can be constructed. In this model the transition from a homogeneous state to a periodic state corresponds to the transition from a uniform density liquid to a periodic crystalline solid.
The LS of interest in this model then correspond to states in which a finite-size portion of the periodic crystalline phase coexists with the uniform density liquid phase, and these are expected to be present in the coexistence region between the two phases. Some rather striking examples of LS in large two-dimensional systems include snowflake-like and dendritic structures; see, e.g., Refs. [@PFC_review; @tegz09; @tegz09b; @tegz09c; @tams08]. In fact, as shown below, the LS are also present at state points outside of the coexistence region. However, despite the application of the cSH (or PFC) equation in this and other areas, the detailed properties of the LS described by this equation have not been investigated. In this paper we make a detailed study of the properties of this equation in one spatial dimension with the aim of setting this equation firmly in the body of literature dealing with spatially localized structures. Our results are therefore interpreted in both languages, in an attempt to make existing understanding of LS accessible to those working on nonequilibrium models of solids, and to use the simplest PFC model to exemplify the theory. In addition, motivated by Refs. [@tegz09; @tegz09b; @tegz09c; @tams08], we also describe related results in two (2d) and three (3d) dimensions, where (many) more types of regular patterns are present and hence many more types of LS. Our work focuses on ‘bump’ states (also referred to as ‘spots’), which are readily found in direct numerical simulations of the conserved Swift-Hohenberg equation as well as in other systems [@BCR09]. Although the theory for these cases in 2d and 3d is less well developed [@LSAC08; @ALBKS10], continuation results indicate that some of the various different types of LS can have quite different properties. For example, the bump states differ from the target-like LS formed from the stripe state that can also be seen in the model.
In particular, spots in the nonconserved Swift-Hohenberg equation in the plane bifurcate from the homogeneous state regardless of whether stripes are subcritical or supercritical [@LS09]; see also Ref. [@MS10]. The key question, hitherto unanswered, is whether two-dimensional structures in the plane snake indefinitely, and likewise for three-dimensional structures. The paper is organized as follows. In Sec. II we describe the conserved Swift-Hohenberg equation and its basic properties. In Sec. III we describe the properties of LS in one spatial dimension as determined by numerical continuation. In Sec. IV we describe related results in two and three spatial dimensions, obtained instead by direct numerical simulation of the PFC model. Since this model has a gradient structure, on a finite domain all solutions necessarily approach a time-independent equilibrium. However, as we shall see, the number of competing equilibria may be very large and different equilibria are reached depending on the initial conditions employed. In Sec. V we put our results into context and present brief conclusions.

The conserved Swift-Hohenberg equation
======================================

Equation and its variants
-------------------------

[\[sec:eqs\] ]{} We write the cSH (or PFC) equation in the form $$\partial_t \phi(\mathbf{x},t)=\alpha \nabla^2 \frac{\delta F[\phi]}{\delta \phi(\mathbf{x},t)} , {\label{eq:DDFT_PFC} }$$ where $\phi(\mathbf{x},t)$ is an order parameter field that corresponds in the PFC context to a scaled density profile, $\alpha$ is a (constant) mobility coefficient and $F[\phi]$ denotes the free energy functional $$\begin{aligned} F[\phi]\equiv \int d\mathbf{x} \left[\frac{\phi}{2}[r+(q^2+\nabla^2)^2]\phi+\frac{\phi^4}{4}\right]. {\label{eq:hfe} }\end{aligned}$$ Here $\mathbf{x}=(x,y,z)$, $\nabla=(\partial_x,\partial_y,\partial_z)^T$ is the gradient operator, and subscripts denote partial derivatives.
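The step from the free energy (\[eq:hfe\]) to the evolution equation below uses the functional derivative; since the operator $(q^2+\nabla^2)^2$ is self-adjoint and boundary terms vanish on a periodic domain, a standard variation gives $$\frac{\delta F[\phi]}{\delta \phi}=\left[r+(q^2+\nabla^2)^2\right]\phi+\phi^3 .$$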
It follows that the system evolves according to the cSH equation $$\partial_t\,\phi\,=\, \alpha \nabla^2\left[r\phi + (\nabla^2+q^2)^2\phi + \phi^3\right]. {\label{eq:csh} }$$ Equation (\[eq:csh\]) is sometimes called the derivative Swift-Hohenberg equation [@MaCo00; @Cox04]; many papers use a different sign convention for the parameter $r$ (e.g., [@EKHG02; @Achi09; @GDL09; @SHP06; @BRV07; @OhSh08; @MKP08; @tegz09; @vBVL09]). In this equation the quartic term in $F[\phi]$ may be replaced by other types of nonlinearity, such as $f_{23}=-b_2\phi^3/3+\phi^4/4$ [@MaCo00; @Cox04; @tegz09; @vBVL09], without substantial change in behavior. Related but nonconserved equations $\partial_t \phi=-\tilde\alpha\delta F[\phi]/\delta \phi$ with nonlinear terms of the form $f_{23}$ [@BuKn06] or $f_{35}=-b_3\phi^4/4+\phi^6/6$ [@bukn07] have also been extensively studied, subject to the conditions $b_2^2>27/38$ (resp., $b_3>0$) required to guarantee the presence of an interval of coexistence between the homogeneous state $\phi=0$ and a spatially periodic state. Note that in the context of nonconserved dynamics [@BuKn06; @bukn07] one generally selects a nonlinear term $g_\mathrm{nl}$ directly, although this term is related to $f_\mathrm{nl}$ through the relation $g_\mathrm{nl}\equiv -d f_\mathrm{nl}/d\phi$, i.e., $g_{23}$ or $g_{35}$. As we shall see below, in the conserved case the nonlinear term $f_{23}$ describes the generic case, and the role of the coefficient $b_2$ is effectively played by the value of $\phi_0$, the average value of the order parameter $\phi(\mathbf{x})$. Equation (\[eq:csh\]) can be studied in one, two or more dimensions. In one dimension with $g_{23}$ the equation was studied by Matthews and Cox [@MaCo00; @Cox04] as an example of a system with a conserved order parameter; this equation is equivalent to Eq. (\[eq:csh\]) with imposed nonzero mean $\phi$.
A weakly localized state of the type that is of interest in the present paper is computed in [@MaCo00] and discussed further in [@Cox04].

Localized states in one spatial dimension
-----------------------------------------

[\[sec:loc-states-1d\] ]{} We first consider Eq. (\[eq:csh\]) in one dimension, with $\alpha=1$ and $q=1$, i.e., $$\partial_t\,\phi\,=\, \partial^2_x\left[r\phi + (\partial^2_{x}+1)^2\phi +\phi^3\right]. {\label{eq:csh-loc} }$$ This equation is reversible in space (i.e., it is invariant under $x\rightarrow -x$). Moreover, it conserves the total “mass” $\int_0^L\phi\,dx$, where $L$ is the size of the system. In the following we denote the average value of $\phi$ in the system by $\phi_0\equiv\langle\phi\rangle$ so that perturbations ${\tilde\phi}\equiv \phi-\phi_0$ necessarily satisfy $\langle{\tilde\phi}\rangle=0$, where $\langle\cdots\rangle\equiv L^{-1}\int_0^L(\cdots)\,dx$. Steady states ($\partial_t\,\phi\,=\,0$) are solutions of the fourth-order ordinary differential equation $$0= r\phi + (\partial_{xx}+1)^2\phi +\phi^3-\mu, {\label{eq:csh-loc-steady} }$$ where $\mu\equiv \delta F[\phi]/\delta\phi$ is an integration constant that corresponds to the chemical potential. Each solution of this equation corresponds to a stationary value of the underlying Helmholtz free energy $$\tilde{F}=\int_0^L\left[(1+r)\frac{\phi^2}{2}+\frac{\phi^4}{4} - (\partial_{x}\phi)^2+\frac{1}{2}(\partial_{xx}\phi)^2 \right]\,dx. {\label{eq:csh-energy} }$$ We use the free energy to define the grand potential $$\Omega=\tilde{F} - \int_0^L \mu\phi\,dx {\label{eq:csh-grand} }$$ and will be interested in the normalized free energy density $f=(\tilde{F}[\phi(x)]-\tilde{F}[\phi_0])/L$ and in the density of the grand potential $\omega=\Omega/L=\tilde{F}[\phi(x)]/L - \mu{\phi_0}$.
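The Helmholtz form (\[eq:csh-energy\]) follows from the original functional (\[eq:hfe\]) with $q=1$ by integration by parts on a periodic domain. This identity can be spot-checked numerically; the following sketch is purely illustrative (the grid resolution and test profile are arbitrary choices, not taken from the paper):

```python
import numpy as np

# Compare the direct free energy density of Eq. (hfe) with q=1 against the
# integrated-by-parts Helmholtz form of Eq. (csh-energy) for a smooth
# periodic test profile; the two integrals must agree.
L, N, r = 100.0, 512, -0.9
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

def ddx(f, order):
    """Spectral derivative of a periodic field."""
    return np.real(np.fft.ifft((1j * k) ** order * np.fft.fft(f)))

# illustrative periodic test profile (mean -0.4 plus two Fourier modes)
phi = -0.4 + 0.3 * np.cos(2 * np.pi * 16 * x / L) + 0.1 * np.sin(2 * np.pi * 3 * x / L)
dx = L / N

# direct form: (phi/2)[r + (1 + d_xx)^2]phi + phi^4/4
F_direct = np.sum(0.5 * phi * (r * phi + phi + 2.0 * ddx(phi, 2) + ddx(phi, 4))
                  + 0.25 * phi**4) * dx
# Helmholtz form: (1+r)phi^2/2 + phi^4/4 - (phi_x)^2 + (phi_xx)^2/2
F_parts = np.sum(0.5 * (1.0 + r) * phi**2 + 0.25 * phi**4
                 - ddx(phi, 1)**2 + 0.5 * ddx(phi, 2)**2) * dx
assert np.isclose(F_direct, F_parts)
```

The boundary terms generated by the integration by parts vanish here because the profile is periodic, which is why the two expressions agree to machine precision.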
We also use the $L^2$ norm $$||\delta{\phi}||=\sqrt{\frac{1}{L}\int_0^L(\phi-\phi_0)^2\,dx} {\label{eq:csh-norm} }$$ as a convenient measure of the amplitude of the departure of the solution from the homogeneous background state $\phi=\phi_0$. Linearizing Eq. (\[eq:csh-loc\]) about the steady homogeneous solution $\phi=\phi_0$ using the ansatz $\delta\phi(x,t)\equiv\phi(x,t)-\phi_0=\epsilon\exp(\beta t + i kx)$ with $\epsilon\ll1$ results in the dispersion relation $$\beta=-k^2\,[r+(1-k^2)^2+3\phi_0^2]. {\label{eq:csh-disp} }$$ It follows that in an infinite domain, the threshold for instability of the homogeneous state corresponds to $r_c^\infty=-3\phi_0^2$. In a domain of finite length $L$ with periodic boundary conditions (PBC), the homogeneous state is linearly unstable for $r<r_n$, where $$r_n=-(1-k_n^2)^2-3\phi_0^2, {\label{eq:csh-stab} }$$ and $k_n\equiv 2\pi n/L$, $n=1,2,\dots$. Standard bifurcation theory with PBC shows that for $L<\infty$ each $r_n$ corresponds to a bifurcation point creating a branch of periodic solutions that is uniquely specified by the corresponding integer $n$. For those integers $n$ for which $r_n > -9/2$ the branch of periodic states bifurcates supercritically (i.e., towards smaller values of $|\phi_0|$); for $r_n < -9/2$, the bifurcation is subcritical (i.e., the branch bifurcates towards larger values of $|\phi_0|$). Since each solution can be translated by an arbitrary amount $d$ (mod $L$), each bifurcation is in fact a pitchfork of revolution. Although the periodic states can be computed analytically for $r\approx r_n$, for larger values of $|r-r_n|$ numerical computations are necessary. In the following we use the continuation toolbox AUTO [@DKK91; @AUTO07P] to perform these (and other) calculations. For interpreting the results it is helpful to think of $r$ as a temperature-like variable which measures the undercooling of the liquid phase. ![(color online) The phase diagram for the 1d PFC model (\[eq:csh\]) when $q = 1$. 
The red solid lines are the coexistence curves between the periodic and uniform phases calculated using a two mode approximation [@ARTK12]. The green squares show the coexistence values calculated from simulations [@ARTK12]. The red circles are the tricritical points. The blue dashed line shows the curve of marginal stability of the uniform state within linear stability theory.](fig1.pdf){width="0.8\hsize"} [\[fig:cSH-phasediagram-1d\] ]{} Before we can discuss LS in the above model, it is helpful to refer to the phase diagram appropriate to a one-dimensional setting (Fig. \[fig:cSH-phasediagram-1d\]). As shown in Ref. [@ARTK12] the tricritical point is located at $(\phi_{0b},r_b^\mathrm{max})=(\pm\sqrt{3/38},-9/38)$. For $r>r_b^\mathrm{max}$ there exists no thermodynamic coexistence zone between the homogeneous and periodic states. Such a region is only present for $r<r_b^\mathrm{max}$ and is limited by the binodal lines that indicate the values of $\phi_0$ for which the homogeneous and periodic solutions at fixed $r$ have equal chemical potential and pressure (i.e., equal grand potential). Thus for $r<r_b^\mathrm{max}$ the transition from the homogeneous to the periodic state is of first order. The binodals can either be calculated for specific domain sizes and periods of the periodic structure or for an infinite domain. In the latter case the period of the periodic state is not fixed but corresponds to the period that minimizes the Helmholtz free energy at each $(\phi_0,r)$ [@ARTK12]. We remark that with this choice of parameters the tricritical point is not the point at which the bifurcation to the periodic state changes from supercritical to subcritical. As already mentioned, the latter occurs at $(\phi_0,r)=(\pm\sqrt{3/2},-9/2)$, i.e., at values of $r$ much smaller than $r_b^\mathrm{max}$. Further discussion of this point may be found in the conclusions. 
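For orientation, the special points quoted above evaluate to $(\phi_{0b},r_b^\mathrm{max})=(\pm\sqrt{3/38},-9/38)\approx(\pm0.281,-0.237)$ and $(\phi_0,r)=(\pm\sqrt{3/2},-9/2)\approx(\pm1.225,-4.5)$; both lie on the marginal stability curve $r_c^\infty=-3\phi_0^2$ of the uniform state, since $-3\times\tfrac{3}{38}=-\tfrac{9}{38}$ and $-3\times\tfrac{3}{2}=-\tfrac{9}{2}$.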
Results for the conserved Swift-Hohenberg equation
==================================================

Families of localized states
----------------------------

Since Eq. (\[eq:csh-loc\]) represents conserved gradient dynamics based on an energy functional that allows for a first-order phase transition between the homogeneous state and a periodic patterned state, one may expect the existence of localized states (LS) to be the norm rather than the exception. In the region between the binodals, where homogeneous and periodic structures may coexist, the value of $\phi_0$, i.e., the amount of ‘mass’ in the system, determines how many peaks can form. As in other problems of this type we divide the LS into three classes. The first class consists of left-right symmetric structures with a peak in the middle. Structures of this type have an overall odd number of peaks and we shall refer to them as odd states, hereafter LS$_\mathrm{odd}$. The second class consists of left-right symmetric structures with a dip in the middle. Structures of this type have an overall even number of peaks and we refer to them as even states, hereafter LS$_\mathrm{even}$. Both types have even parity with respect to reflection in the center of the structure. The third class consists of states of no fixed parity, i.e., asymmetric states, LS$_\mathrm{asym}$. The asymmetric states are created from the symmetric states at pitchfork bifurcations and take the form of rungs on a ladder-like structure that interconnect the LS$_\mathrm{odd}$ and LS$_\mathrm{even}$ branches. In view of the gradient structure of Eq. (\[eq:csh-loc\]) the asymmetric states are likewise stationary solutions of the equation. We now address the following questions:\
1. Do localized states exist outside the binodal region? Can they form the energetic minimum outside the binodal region?\
2. How does the bifurcation structure of the localized states change with changes in the temperature-like parameter $r$?
How does the transition from tilted or slanted snaking to no snaking occur? What is the behavior of the asymmetric localized states during this process? Answers to these and other questions can be obtained by means of an in-depth parametric study. In the figures that follow we present bifurcation diagrams for localized states as a function of the mean order parameter value $\phi_0$ for a number of values of the parameter $r$. All are solutions of Eq. (\[eq:csh-loc\]) that satisfy periodic boundary conditions on the domain $0\le x\le L$, and are characterized by their $L^2$ norm $||\delta \phi||$, chemical potential $\mu$, free energy density $f$, and grand potential density $\omega$ as defined in Sec. \[sec:loc-states-1d\]. In Fig. \[fig:loc-fam-rm09\] we show the results for $r=-0.9$ and $L=100$. Figure \[fig:loc-fam-rm09\](a) shows $||\delta\phi ||$ as a function of $\phi_0$: the classical bifurcation diagram. For these parameter values, as $\phi_0$ is increased the homogeneous (liquid) phase first becomes unstable to perturbations with mode number $n=16$ (i.e., 16 bumps), followed closely by bifurcations to modes with $n=15$ and $n=17$. All other modes bifurcate at yet smaller values of $|\phi_0|$ and are omitted. All three primary bifurcations are supercritical. The figure also reveals that the $n=16$ branch undergoes a secondary instability already at small amplitude; this instability creates a pair of secondary branches of spatially localized states, LS$_\mathrm{odd}$ (solid line) and LS$_\mathrm{even}$ (dashed line). With increasing amplitude these branches undergo [*slanted snaking*]{} as one would expect on the basis of the results for related systems with a conservation law [@dawe08; @LBK11]. The LS$_\mathrm{odd}$ and LS$_\mathrm{even}$ branches are in turn connected by ladder branches consisting of asymmetric states LS$_\mathrm{asym}$, much as in standard snaking [@BuKn06]. Sample solution profiles for these three types of LS are shown in Fig. \[fig:loc-prof-rm09\].
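The ordering of these primary bifurcations follows directly from Eq. (\[eq:csh-stab\]): for $L=100$ the admissible wavenumber $k_n=2\pi n/L$ closest to $q=1$ belongs to $n=16$, so this mode has the largest threshold $r_n$, with $n=15$ and $n=17$ next. A short numerical confirmation (the value of $\phi_0$ is an illustrative choice; it shifts all $r_n$ by the same amount and does not affect the ordering):

```python
import numpy as np

# Thresholds r_n = -(1 - k_n^2)^2 - 3*phi0^2 of Eq. (csh-stab) for L = 100.
# The homogeneous state is unstable to mode n for r < r_n, so the mode with
# the largest r_n destabilizes first.
L, phi0 = 100.0, -0.4          # phi0 is illustrative only
n = np.arange(1, 31)
k_n = 2.0 * np.pi * n / L
r_n = -(1.0 - k_n**2) ** 2 - 3.0 * phi0**2

order = n[np.argsort(r_n)[::-1]]   # mode numbers sorted by threshold
print(order[:3])                   # [16 15 17]
```

The output reproduces the sequence seen in Fig. \[fig:loc-fam-rm09\](a): the $n=16$ mode destabilizes first, followed closely by $n=15$ and then $n=17$.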
The snaking ceases when the LS have grown to fill the available domain; in the present case the LS$_\mathrm{odd}$ and LS$_\mathrm{even}$ branches terminate on the same $n=16$ branch that created them in the first place. Whether or not this is the case depends in general on the domain length $L$, as discussed further in Ref. [@BBKM08]. The key to the bifurcation diagram shown in Fig. \[fig:loc-fam-rm09\](a) is evidently the small-amplitude bifurcation on the $n=16$ branch. This bifurcation destabilizes the $n=16$ branch that would otherwise be stable and is a consequence of the presence of the conserved quantity $\phi_0$ [@MaCo00]. As $L$ increases, the bifurcation moves down to smaller and smaller amplitude, so that in the limit $L\rightarrow\infty$ the periodic branch is entirely unstable and the LS bifurcate directly from the homogeneous state. Since the LS bifurcate [*subcritically*]{}, it follows that such states are present not only when the primary pattern-forming branch is supercritical but moreover are present [*below*]{} the onset of the primary instability. Figure \[fig:loc-fam-rm09\](b) shows the corresponding plot of the free energy density $f$ as a function of $\phi_0$. This figure demonstrates that throughout much of the range of $\phi_0$ the localized states have a lower free energy than the extended periodic states. In this range the LS are therefore energetically favored. Figure \[fig:loc-fam-rm09\](c) shows the corresponding plot of the chemical potential $\mu$, while Fig. \[fig:loc-fam-rm09\](d) shows the grand potential density $\omega$. Of these, Fig. \[fig:loc-fam-rm09\](c) is perhaps the most interesting since it shows that the results of Fig. \[fig:loc-fam-rm09\](a), when replotted using $(\phi_0,\mu)$ to characterize the solutions, in fact take the form of standard snaking, provided one takes the chemical potential $\mu$ as the control parameter and $\phi_0$ as the response.
In this form the bifurcation diagram gives the values of $\phi_0$ that are consistent with a given value of the chemical potential $\mu$ (recall that $\phi_0$ is related to the total particle number density). **(a)**![(color online) Characteristics of steady state (localized) solutions of the one-dimensional conserved Swift-Hohenberg equation (\[eq:csh-loc\]) as a function of the mean order parameter $\phi_0$ for a fixed domain size of $L=100$ and $r=-0.9$. The various solution profiles are characterized by their (a) $L^2$ norm $||\delta \phi||$, (b) free energy density $f$, (c) chemical potential $\mu$, and (d) grand potential density $\omega$. The thick green dotted line, labeled ‘flat’, corresponds to the homogeneous solution $\phi(x)=\phi_0$. Periodic solutions with $n=16$ peaks are shown as a thin green dashed line, whereas the nearby thin blue and black dotted lines represent the $n=15$ and $n=17$ solutions, respectively. The thick solid black and dashed red lines that bifurcate from the $n=16$ periodic solution represent symmetric localized states with a maximum (LS$_\mathrm{odd}$) and a minimum (LS$_\mathrm{even}$) at their center, respectively. Both terminate on the $n=16$ solution. The 14 blue solid lines that connect the LS$_\mathrm{odd}$ and LS$_\mathrm{even}$ branches of symmetric localized states correspond to asymmetric localized states (LS$_\mathrm{asym}$). Together these three sets of branches of localized states form a tilted snakes-and-ladders structure. Typical order parameter profiles along the three LS branches are shown in Fig. \[fig:loc-prof-rm09\], and correspond to locations indicated in panels (a) and (c) by filled black squares (LS$_\mathrm{odd}$), red circles (LS$_\mathrm{even}$), blue triangles (LS$_\mathrm{asym}$) and green diamond (periodic solution with $n=16$). ](phi_rm0,9_L100_norm2 "fig:"){width="0.45\hsize"} ![](phi_rm0,9_L100_energy "fig:"){width="0.45\hsize"}**(b)** **(c)**![](phi_rm0,9_L100_mu2 "fig:"){width="0.45\hsize"} ![](phi_rm0,9_L100_grandPotential "fig:"){width="0.45\hsize"}**(d)** [\[fig:loc-fam-rm09\] ]{} ![(Color online) A selection of steady state profiles $\phi(x)-\phi_0$ for $r=-0.9$ and values of $\phi_0$ in the range $-0.65 \leq \phi_0\leq -0.3$ (the number in each panel indicates the corresponding value of ${\phi}_0$). Going from top left to bottom right we first show nine LS$_\mathrm{odd}$ solutions, i.e., symmetric localized states with an odd number of maxima (in black), then eight LS$_\mathrm{even}$ solutions, i.e., symmetric localized states with an even number of maxima (in red), followed by six LS$_\mathrm{asym}$ solutions, i.e., asymmetric localized states (in green). The final plot is the $n=16$ periodic solution for ${\phi}_0=-0.3$ (in blue). The solutions on the symmetric branches correspond to locations indicated in Fig.
\[fig:loc-fam-rm09\](a) and are shown in order, starting from the bifurcation point that creates them and continuing to larger norm $||\delta \phi||$. The color coding corresponds to that used in Figs. \[fig:loc-fam-rm09\](a,c): LS$_\mathrm{odd}$ (filled black squares), LS$_\mathrm{even}$ (red circles), LS$_\mathrm{asym}$ (blue triangles) and periodic (green diamond). []{data-label="fig:loc-prof-rm09"}](cSH_prof_rm0,9L100_loc_various_phi){width="0.9\hsize"} **(a)**![(color online) The norm (left) and chemical potential (right) of the homogeneous, periodic and localized steady state solutions as a function of the mean order parameter $\phi_0$, for a fixed domain size of $L=100$ and several values of $r>-0.9$. In (a,b) $r=-0.7$, (c,d) $r=-0.6$, (e,f) $r=-0.5$, and (g,h) $r=-0.4$. The line styles are as in Fig. \[fig:loc-fam-rm09\]. Typical order parameter profiles along the branches of symmetric localized states with an odd number of maxima (black lines) are shown in Fig. \[fig:loc-prof-rm07-0375\], and correspond to locations indicated in the panels by filled black squares. ](phi_rm0,7_L100_norm2 "fig:"){width="0.425\hsize"} ![](phi_rm0,7_L100_mu2 "fig:"){width="0.425\hsize"}**(b)** **(c)**![](phi_rm0,6_L100_norm2 "fig:"){width="0.425\hsize"} ![](phi_rm0,6_L100_mu2 "fig:"){width="0.425\hsize"}**(d)** **(e)**![](phi_rm0,5_L100_norm2 "fig:"){width="0.425\hsize"} ![](phi_rm0,5_L100_mu2 "fig:"){width="0.425\hsize"}**(f)** **(g)**![](phi_rm0,4_L100_norm2 "fig:"){width="0.425\hsize"} ![](phi_rm0,4_L100_mu2 "fig:"){width="0.425\hsize"}**(h)** [\[fig:loc-fam-sevrm-one\] ]{} ![(Color online) A selection of steady state profiles $\phi(x)-\phi_0$ along the LS$_\mathrm{odd}$ branches shown in Fig. \[fig:loc-fam-sevrm-one\] for $r=-0.7$ (1st row), $r=-0.6$ (2nd row), $r=-0.5$ (3rd row), $r=-0.4$ (4th row), and $r=-0.375$ (last row), for various values of ${\phi}_0$, as indicated in the top left corner of each panel. The solutions in each row are shown in order, starting from near the bifurcation point that creates them and continuing to larger norm $||\delta \phi||$. The locations of the profiles are indicated in Fig. \[fig:loc-fam-sevrm-one\] by the filled black squares. The bifurcation diagram for $r=-0.375$ (not shown in Fig. \[fig:loc-fam-sevrm-one\]) is qualitatively the same as that for $r=-0.4$. []{data-label="fig:loc-prof-rm07-0375"}](cSH_prof_symBump_L100_various_r_phi){width="0.9\hsize"} We now show how the bifurcation diagrams evolve as the temperature-like parameter $r$ changes. We begin by showing the bifurcation diagrams for decreasing values of $|r|$. In the Appendix we use amplitude equations to determine the direction of branching of the localized states. Here we discuss the continuation results. The bifurcation diagram for $r=-0.7$ (Fig. \[fig:loc-fam-sevrm-one\](a,b)) resembles that for $r=-0.9$ (Fig. \[fig:loc-fam-rm09\]) although the snaking structure has moved towards smaller $|\phi_0|$ and is now thinner. In addition, it is now the second saddle-node on the LS$_\mathrm{odd}$ branch that lies farthest to the left, and not the first. For $r=-0.6$ (Fig.
\[fig:loc-fam-sevrm-one\](c,d)), the branches of localized states still form a tilted snakes-and-ladders structure, but the saddle-nodes on the LS$_\mathrm{odd}$ and LS$_\mathrm{even}$ branches are now absent, i.e., both solution branches now grow monotonically. The resulting diagram has been called “smooth snaking” [@DL10]. However, despite the absence of the saddle-nodes on the LS$_\mathrm{odd}$ and LS$_\mathrm{even}$ branches, the interconnecting ladder states consisting of asymmetric states still remain. This continues to be the case when $r=-0.5$ (Fig. \[fig:loc-fam-sevrm-one\](e,f)), although the structure has moved to yet larger $\phi_0$ and the snake has become even thinner. Finally, for $r=-0.4$ (Fig. \[fig:loc-fam-sevrm-one\](g,h)), the snake is nearly dead, and only tiny wiggles remain. The bifurcation of the localized states from the $n=16$ periodic state is now supercritical (see Appendix), but the LS branches continue to terminate on the same branch at larger amplitude, and do so via a single saddle-node at the right (Fig. \[fig:loc-fam-sevrm-one\](h)). Sample profiles along the resulting LS$_\mathrm{odd}$ branches are shown for several values of $r$ in Fig. \[fig:loc-prof-rm07-0375\]. We note that a change of $r$ has a profound effect on the transition region between the homogeneous background state and the periodic state: with decreasing $|r|$ the LS become wider and the localized periodic structure looks more and more like a wave packet with a smooth sinusoidal modulation of the peak amplitude. As the ‘temperature’ $r$ decreases even further, the bifurcation diagrams remain similar to those displayed in Fig. \[fig:loc-fam-rm09\] until just before $r=-1.5$, where substantial changes take place and the complexity of the bifurcation diagram grows dramatically. This is a consequence of the appearance of other types of localized states that we do not discuss here. Likewise, we omit here all bound states of the LS described above.
These are normally found on an infinite stack of isolas that are also present in the snaking region [@bukn09; @KLSW:11].

Tracking the snake {#sec:snake}
------------------

In Fig. \[fig:prof-loc-folds\] we show, for $-1.5<r<-0.4$, the result of tracking *all* the saddle-node bifurcations visible in the previous bifurcation diagrams in the $(\phi_0,r)$ plane, while Fig. \[fig:prof-loc-folds-zoom\] shows an enlargement of the region $-1<r<-0.35$ together with the result of tracking the tertiary pitchfork bifurcations to the asymmetric states. Figure \[fig:prof-loc-folds\] shows that the saddle-nodes annihilate pairwise in cusps as $r$ increases. The annihilations occur first for smaller $\phi_0$ and later for larger $\phi_0$, and occur alternately on LS$_\mathrm{odd}$ and LS$_\mathrm{even}$. Above the locus of the cusps the snaking is smooth, although as shown in Fig. \[fig:prof-loc-folds-zoom\] the tertiary ladder states remain. The thick green curve in Fig. \[fig:prof-loc-folds-zoom\] represents the locus of the secondary bifurcation from the $n=16$ periodic state to LS and shows that on either side the bifurcation to LS is subcritical for sufficiently negative $r$ but becomes supercritical at larger $r$, cf. Fig. \[fig:loc-fam-sevrm-one\](h) and the Appendix.

![(color online) Loci of saddle-node bifurcations on the branches of symmetric localized states in the $(\phi_0,r)$ plane for $r>-1.5$. Saddle-nodes annihilate pairwise as $r$ increases (solid black lines: LS$_\mathrm{odd}$; dashed red lines: LS$_\mathrm{even}$). ](phi0-r_L100_fold){width="0.9\hsize"} [\[fig:prof-loc-folds\] ]{}

![(color online) Loci of the saddle-node bifurcations on the branches of symmetric localized states in the $(\phi_0,r)$ plane (solid black lines: LS$_\mathrm{odd}$; dashed red lines: LS$_\mathrm{even}$) together with the bifurcations to asymmetric localized states (dotted blue lines) for $r>-1.5$.
The thick green curve represents the locus of the secondary bifurcation from the periodic $n=16$ state to localized states and shows that on either side the bifurcation to LS is subcritical for sufficiently negative $r$ but becomes supercritical at larger $r$. ](phi0-r_L100_fold_zoom2){width="0.9\hsize"} [\[fig:prof-loc-folds-zoom\] ]{}

One may distinguish six intervals in $r$ with different types of behavior. The interval boundaries depend on the system size, but the results in Fig. \[fig:prof-loc-folds-zoom\] for $L=100$ are representative:

- For $r>-0.33$: no LS exist; the only nontrivial states are periodic solutions.

- For $-0.33>r>-0.39$: branches of even and odd symmetric LS are present, and appear and disappear via supercritical secondary bifurcations from the branch of periodic solutions. With decreasing $r$, more and more branches of asymmetric LS emerge from these two secondary bifurcation points.

- For $-0.39>r>-0.41$: both branches of symmetric LS emerge subcritically at large $\phi_0$ and supercritically at small $\phi_0$.

- For $-0.41>r>-0.56$: both branches of symmetric LS emerge subcritically at either end. Further branches of asymmetric LS emerge with decreasing $r$, either from the two secondary bifurcation points or from the saddle-node bifurcations on the branches of symmetric LS, but the symmetric LS still do not exhibit snaking, i.e., no additional folds are present on their branches.

- For $-0.56>r>-0.64$ (highlighted by the grey shading in Fig. \[fig:prof-loc-folds-zoom\]): pairs of saddle-nodes appear successively in cusps as $r$ decreases, starting at larger $\phi_0$. Thereafter saddle-nodes appear alternately on the branches of even and odd symmetric LS. The appearance of the cusps is therefore associated with the transition from smooth snaking to slanted snaking.

- For $-0.64>r$: the slanted snake is fully developed. Only one further pair of saddle-node bifurcations appears in the parameter region shown in Fig. \[fig:prof-loc-folds-zoom\].
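For quick reference, the interval classification above can be encoded as a simple lookup. The Python sketch below is purely illustrative (the function name and descriptive strings are ours, not part of the model); the boundary values are the approximate $L=100$ values quoted above and shift with system size.

```python
def ls_regime(r):
    """Classify the localized-state (LS) behavior of the 1d cSH equation
    for L = 100 as a function of the temperature-like parameter r.
    Interval boundaries are the approximate values quoted in the text."""
    if r > -0.33:
        return "no LS; only periodic solutions"
    if r > -0.39:
        return "symmetric LS branches, supercritical at both ends"
    if r > -0.41:
        return "subcritical at large phi_0, supercritical at small phi_0"
    if r > -0.56:
        return "subcritical at both ends, no snaking"
    if r > -0.64:
        return "cusps appear: transition from smooth to slanted snaking"
    return "fully developed slanted snaking"
```

For example, `ls_regime(-0.5)` returns the fourth regime (subcritical at both ends, no additional folds), consistent with the list above.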
With decreasing $r$ the snaking becomes stronger; each line in Figs. \[fig:prof-loc-folds\] and \[fig:prof-loc-folds-zoom\] that represents a saddle-node bifurcation crosses more and more other such lines, i.e., more and more different states are possible at the same values of $\phi_0$. Furthermore, the subcritical regions (outside the green curve in Fig. \[fig:prof-loc-folds-zoom\]) become larger.

Relation to binodal lines
-------------------------

From the condensed matter point of view, where the cSH/PFC equation represents a model for the liquid (homogeneous) and solid (periodic) phases, one is particularly interested in results in the thermodynamic limit $L\to \infty$. As mentioned above in the context of the phase diagram in Fig. \[fig:cSH-phasediagram-1d\], the binodal lines correspond to values of $(\phi_0,r)$ at which the homogeneous state and the minimum energy periodic state coexist in the thermodynamic limit. These are defined as pairs of points at which the homogeneous state and the periodic state have the same ‘temperature’ (i.e., the same $r$ value), the same chemical potential $\mu$ and the same pressure $p=-\omega$; they are displayed as the blue dash-dot lines in Fig. \[fig:prof-loc-folds-binodal\]. For a given value of $r$, these two lines give the values of $\phi_0$ of the coexisting homogeneous (lower $\phi_0$) and periodic (higher $\phi_0$) states. Note that when plotted with the resolution of Fig. \[fig:prof-loc-folds-binodal\], the binodals are indistinguishable from the coexistence lines between the finite-size ($L=100$, $n=16$) periodic solution and the homogeneous state. Figure \[fig:prof-loc-folds-binodal\] also displays the line (green solid line) at which the $L=100$ localized states bifurcate from the $n=16$ branch of periodic solutions. For $r<-0.4$ the LS bifurcations are subcritical, implying that the localized states are present outside the green solid line.
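The thermodynamic conditions defining such coexistence lines (equal ‘temperature’ $r$, equal chemical potential $\mu$ and equal pressure $p=-\omega$) can be illustrated with a toy calculation. The sketch below is *not* the procedure used to compute the binodals of the cSH equation (that requires the full free energy functional); it solves the analogous common-tangent problem for a hypothetical double-well free energy density $f(\phi)$ by Newton iteration, with all names illustrative.

```python
def binodal(f, df, a0, b0, tol=1e-12, max_iter=100):
    """Find coexisting order-parameter values (a, b) such that the
    chemical potential mu = f'(a) = f'(b) and the grand potential
    omega = f(phi) - mu*phi (i.e. the pressure p = -omega) agree in
    both phases. Plain Newton iteration, finite-difference Jacobian."""
    def g(a, b):
        mu = 0.5 * (df(a) + df(b))   # equals f'(a) = f'(b) at the root
        return (df(a) - df(b),
                (f(a) - mu * a) - (f(b) - mu * b))
    a, b, h = a0, b0, 1e-7
    for _ in range(max_iter):
        g1, g2 = g(a, b)
        g1a, g2a = g(a + h, b)
        g1b, g2b = g(a, b + h)
        # 2x2 Jacobian by forward differences, solved with Cramer's rule
        j11, j12 = (g1a - g1) / h, (g1b - g1) / h
        j21, j22 = (g2a - g2) / h, (g2b - g2) / h
        det = j11 * j22 - j12 * j21
        da = (-g1 * j22 + g2 * j12) / det
        db = (-g2 * j11 + g1 * j21) / det
        a, b = a + da, b + db
        if abs(da) + abs(db) < tol:
            break
    return a, b

# Toy double-well free energy: f(phi) = (phi^2 - 1)^2 / 4.
# Its common tangent touches at phi = -1 and phi = +1, with mu = 0.
f = lambda p: 0.25 * (p * p - 1.0) ** 2
df = lambda p: p ** 3 - p
a, b = binodal(f, df, -1.3, 0.7)
```

For the quartic toy model the iteration converges to $(a,b)=(-1,1)$; for the cSH equation the periodic phase replaces the second well and $f$ must be evaluated on the minimizing periodic profile.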
Figure \[fig:prof-loc-folds-binodal\] shows the loci of the outermost saddle-node bifurcations on the branches of symmetric LS that result (dashed lines to the left and to the right of the green solid line for $r<-0.4$). The most striking aspect of Fig. \[fig:prof-loc-folds-binodal\] is that for $r\lesssim-1$ these lines actually cross and exit the region between the two binodal lines, indicating that in the PFC model one can find stable LS [*outside*]{} of the binodal. Although these are not the lowest free energy states (we have checked this for $r\gtrsim-1.5$), this remarkable fact points towards the possibility of metastable nanocrystals existing outside of the binodal. We must mention, however, that these structures have been found in a finite size system with $L=100$; we have not investigated their properties for larger system sizes $L$. ![(color online) Loci in the $(\phi_0,r)$ plane computed for $L=100$ of (i) the primary bifurcation from the homogeneous state to the $n=16$ periodic branch (dotted orange line); (ii) the bifurcation of the localized states from the $n=16$ periodic branch (solid green line); (iii) the outermost saddle-node bifurcations on the branches of symmetric localized states (long-dashed red and short-dashed black lines); and (iv) the binodals between the periodic and homogeneous states (dot-dashed blue lines). The latter coincide with the binodals for the $n=16$ periodic state and the homogeneous state to within the resolution of the figure. 
Figures \[fig:prof-loc-folds\] and \[fig:prof-loc-folds-zoom\] show how the loci of the outermost saddle-node bifurcations of localized states fit into the overall picture.](phi0-r_L100_fold_binodal_n16_simple){width="0.9\hsize"} [\[fig:prof-loc-folds-binodal\] ]{}

Stability
---------

Although we have not computed the stability properties of the different localized states with respect to infinitesimal perturbations, we can use the principles of bifurcation theory to deduce the likely stability properties. In the reference bifurcation diagram in Fig. \[fig:loc-fam-rm09\](a) the homogeneous state (liquid) is stable for large negative $\phi_0$, and for a system with $L=100$ loses stability to the $n=16$ mode as $\phi_0$ increases. Since the bifurcation is supercritical, the $n=16$ periodic state is initially stable. The LS that bifurcate from it subcritically will both be unstable. The single bump state is likely once unstable; it therefore acquires stability at the first saddle-node on the left and remains stable until the first crosslink, where a second (phase) eigenvalue becomes unstable. The first (amplitude) eigenvalue becomes unstable at the saddle-node on the right, so that the portion of LS$_\mathrm{odd}$ with negative slope is twice unstable. These instabilities are then undone so that the LS$_\mathrm{odd}$ above the second crosslink are again stable. The transition to smooth snaking that occurs with decreasing $r$ eliminates instabilities associated with the first eigenvalue but not the second. The LS$_\mathrm{even}$ states are initially twice unstable but both eigenvalues stabilize near the first saddle-node on the left, so that LS$_\mathrm{even}$ is stable on the part of the branch with positive slope, below the second crosslink, etc. Thus the connecting LS$_\mathrm{asym}$ are unstable.
Once again, the transition to smooth snaking that occurs with decreasing $r$ eliminates instabilities associated with the amplitude eigenvalue but not those associated with the phase eigenvalue. One can check that in this case the connecting LS$_\mathrm{asym}$ remain unstable, with the LS$_\mathrm{odd}$ stable between the second and third crosslinks (Fig. \[fig:loc-fam-sevrm-one\](c)) and unstable between the third and fourth crosslinks. Likewise the LS$_\mathrm{even}$ are stable above the left saddle-node and below the second crosslink; they are unstable between the second and third crosslinks and acquire stability at the third crosslink, etc. (Fig. \[fig:loc-fam-sevrm-one\](c)). Note that as a result of these stability assignments there is at least one LS state that is stable at every $\phi_0$ value between the leftmost and rightmost saddle-nodes.

Localized states in two and three dimensions
============================================

[\[sec:2d3d\] ]{}

Numerical algorithm
-------------------

To perform direct numerical simulations (DNS) of the conserved Swift-Hohenberg equation in higher dimensions we use a recently proposed algorithm [@GoNo12] that has been proved to be unconditionally energy-stable. As a consequence, the algorithm produces free-energy-decreasing discrete solutions, irrespective of the time step and the mesh size, thereby respecting the thermodynamics of the model even for coarse discretizations. For the spatial discretization we employ Isogeometric Analysis [@HCB05], a generalization of the Finite Element Method. The key idea behind Isogeometric Analysis is the use of Non-Uniform Rational B-Splines (NURBS) instead of the standard piecewise polynomials used in the Finite Element Method. With NURBS, Isogeometric Analysis gains several advantages over the Finite Element Method.
In the context of the conserved Swift-Hohenberg equation, the most relevant advantage is that Isogeometric Analysis permits the generation of arbitrarily smooth basis functions, leading to a straightforward discretization of the higher-order partial derivatives of the conserved Swift-Hohenberg equation [@GCBH08]. For the time discretization we use an algorithm especially designed for the conserved Swift-Hohenberg equation. It may be thought of as a second-order perturbation of the trapezoidal rule that achieves unconditional stability, in contrast with the trapezoidal scheme itself. All details about the numerical algorithms may be found in [@GoNo12].

Two dimensions
--------------

![(color online) Phase diagram for the conserved Swift-Hohenberg equation, Eq. (\[eq:csh\]), in two dimensions when $q = 1$. The red solid lines show the various coexistence curves, the blue dotted line shows the limit of linear stability and the grey striped areas show phase coexistence regions. The small panels show steady solutions at $r=-0.9$ and selected mean order parameter values $\phi_0$ indicated by the black squares. From top to bottom left $\phi_0=-0.15$, $-0.2$ and $-0.25$, while from top to bottom right $\phi_0=-0.3$, $-0.35$ and $-0.725$. Additional solutions that detail the transition occurring between the last two panels are shown in Fig. \[fig:prof-twodim-one\]. The domain size is $100\times100$.](2d/150 "fig:"){width="\hsize"} ![](2d/200 "fig:"){width="\hsize"} ![](2d/250 "fig:"){width="\hsize"}\ ![](fig9.pdf){width="0.55\hsize"} ![](2d/300 "fig:"){width="\hsize"} ![](2d/350 "fig:"){width="\hsize"} ![](2d/725 "fig:"){width="\hsize"}\ [\[fig:cSH-phasediagram-2d\] ]{}

As in one spatial dimension, the phase diagram for two-dimensional structures helps us to identify suitable parameter values where LS are likely to occur. The phase diagram in Fig. \[fig:cSH-phasediagram-2d\], determined numerically [@ARTK12], shows three distinct phases, labeled ‘bumps’, ‘stripes’ and ‘holes’. Since $\phi(x)=\phi_0+\delta\phi(x)$, bumps and holes correspond to perturbations $\delta\phi(x)$ with opposite signs; both have hexagonal coordination. In addition, the phase diagram reveals four regions of thermodynamic coexistence (hatched in Fig. \[fig:cSH-phasediagram-2d\]): between bumps and the uniform state, between bumps and stripes, between stripes and holes, and between holes and the uniform state. Examples of results obtained by DNS of Eq. (\[eq:csh\]) starting from random initial conditions are displayed in the side panels of Fig. \[fig:cSH-phasediagram-2d\]. These six panels show results for fixed $r=-0.9$. For small values of $|\phi_0|$ the system forms a labyrinthine lamellar-like stripe state; the stripes pinch off locally into bumps as $|\phi_0|$ is increased, leading to the formation of inclusions of bumps in a background stripe state. The pinching tends to occur first at the ends of a stripe and then proceeds gradually inwards. In other cases, free ends are created by the splitting of a stripe into two in a region of high curvature. The formation of bumps tends to take place at grain boundaries, and once bump formation starts, it tends to spread outward from the initial site. For $\phi_0=-0.2$ the areas covered by stripes and bumps are comparable, and for larger values of $|\phi_0|$ the bump state dominates.
By $\phi_0=-0.3$ the stripes are almost entirely gone and the state takes the form of a crystalline solid with hexagonal coordination but having numerous defects. As $|\phi_0|$ increases further, vacancies appear in the solid matrix and for large enough $|\phi_0|$ the solid “melts” into individual bumps or smaller clusters, as further detailed in Fig. \[fig:prof-twodim-one\]. Figure \[fig:prof-twodim-one\] shows further results from a scan through decreasing values of $\phi_0$ at $r=-0.9$. We focus on the relatively small range $\phi_0=-0.45$ to $\phi_0=-0.675$, where localized states occur. These reveal a gradual transition from a densely packed solid-like structure to states with a progressively increasing domain area that is free of bumps, i.e., containing the homogeneous state. The bumps percolate through the domain until approximately $\phi_0=-0.575$. For smaller values of $\phi_0$ the order parameter profiles resemble a suspension of solid fragments in a liquid phase; the solid is no longer connected. As $\phi_0$ decreases further the characteristic size of the solid fragments decreases, as the solid fraction falls.

![(color online) Steady-state solutions of the conserved Swift-Hohenberg equation, Eq. (\[eq:csh\]), in two dimensions for $r=-0.9$ and different values of $\phi_0$ in the range $-0.675<\phi_0<-0.45$, where localized states occur. The corresponding value of $\phi_0$ is indicated below each panel. The domain size is $100\times100$.](2d/Transition){width="1.0\hsize"} [\[fig:prof-twodim-one\] ]{}

Three dimensions
----------------

In three dimensions, Eq. (\[eq:csh\]) exhibits a large number of steady state spatially periodic structures. These include structures with the symmetries of the simple cubic lattice, the face-centered cubic lattice and the body-centered cubic lattice [@Callahan].
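To make these lattice symmetries concrete, the sketch below evaluates the standard one-mode approximation of a bcc-symmetric state familiar from the PFC literature; it is an illustration, not a construction taken from this paper, and the amplitude is arbitrary. Each product $\cos(qx_i)\cos(qx_j)$ is a superposition of plane waves along the $\langle 110\rangle$ directions with $|\mathbf{k}|=\sqrt{2}\,q$, so choosing $q=1/\sqrt{2}$ places every contributing mode on the sphere $|\mathbf{k}|=1$, the preferred wavenumber of the model when the critical wavenumber is 1.

```python
import math

def bcc_one_mode(x, y, z, amp=1.0, q=1.0 / math.sqrt(2.0)):
    """One-mode approximation of a bcc-symmetric perturbation
    delta_phi(x, y, z). With q = 1/sqrt(2) every plane wave hidden in
    the cosine products has wavevector magnitude sqrt(2)*q = 1."""
    return amp * (math.cos(q * x) * math.cos(q * y)
                  + math.cos(q * y) * math.cos(q * z)
                  + math.cos(q * z) * math.cos(q * x))

# The cubic lattice constant is a = 2*pi/q; the field is maximal both at
# the cell corner (0, 0, 0) and at the body center (a/2, a/2, a/2),
# which is precisely the bcc arrangement of density peaks.
a_lat = 2.0 * math.pi * math.sqrt(2.0)
corner = bcc_one_mode(0.0, 0.0, 0.0)
center = bcc_one_mode(a_lat / 2.0, a_lat / 2.0, a_lat / 2.0)
```

Both `corner` and `center` evaluate to the same maximal value (3 for unit amplitude), confirming the bcc arrangement of the peaks.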
Although we do not calculate the phase diagram for the three-dimensional (3d) system, numerical simulations in 3d reveal that a lamellar (parallel ‘sheets’) state is energetically preferred for small $\phi_0$. Slices through these structures resemble the stripes observed in 2d. As $|\phi_0|$ increases, the lamellae pinch off, much as in two dimensions, and progressively generate a 3d disordered array of bumps (Fig. \[fig:prof-threedim-one\]). This solid-like state is far from being a perfect crystal, however, and with increasing $|\phi_0|$ develops vacancies which eventually lead to its dissolution, just as in two dimensions.

![(color online) Steady-state localized solutions of Eq. (\[eq:csh\]) in three dimensions for $r=-0.9$ and different mean order parameter values $\phi_0$: from top left to bottom right $\phi_0=0.025, 0.125, 0.225, 0.325, 0.425, 0.525, 0.625, 0.725$ and $0.750$. The domain size is $100\times100\times100$.](3d/phi025 "fig:"){width="0.32\hsize"} ![](3d/phi125 "fig:"){width="0.32\hsize"} ![](3d/phi225 "fig:"){width="0.32\hsize"}\ ![](3d/phi325 "fig:"){width="0.32\hsize"} ![](3d/phi425 "fig:"){width="0.32\hsize"} ![](3d/phi525 "fig:"){width="0.32\hsize"}\ ![](3d/phi625 "fig:"){width="0.32\hsize"} ![](3d/phi725 "fig:"){width="0.32\hsize"} ![](3d/phi750 "fig:"){width="0.32\hsize"}\ [\[fig:prof-threedim-one\] ]{}

In Fig. \[fig:loc-2d-fam-rm09\] we superpose the 2d and 3d DNS results for $r=-0.9$ on the 1d bifurcation diagrams for the same $r$ and system size $L=100$ (Fig.
\[fig:loc-fam-rm09\]). The 2d order parameter profiles are calculated on a domain with area $L^2=100^2$ and the 3d results have a system volume $L^3=100^3$. In Fig. \[fig:loc-2d-fam-rm09\](a) we display the $L^2$ norm $||\delta\phi ||$, in (b) the chemical potential $\mu$, in (c) the mean free energy $(F-F_0)/L^d$, and in (d) the mean grand potential $\omega=F/L^d-\phi_0\mu$. In each plot the connected (violet) squares correspond to results from 2d calculations such as those displayed in Figs. \[fig:cSH-phasediagram-2d\] and \[fig:prof-twodim-one\] and the connected green triangles correspond to 3d results such as those displayed in Fig. \[fig:prof-threedim-one\]. It is from examining the figures in panels (c) and (d) that one can most easily discern the reason for the main differences between the 1d results and the 2d and 3d results: we see (particularly in the 2d results) that the chemical potential $\mu$ and the pressure $p=-\omega$ have regions where these measures are roughly flat as a function of $\phi_0$ and regions where they increase as a function of $\phi_0$. We also see that these increases in some places relate to features in the 1d results but in other places they do not have any relation to what one sees from the 1d results. This is because in 2d and 3d the system displays phases that are not seen in 1d (cf. Figs. \[fig:cSH-phasediagram-1d\] and \[fig:cSH-phasediagram-2d\]). To understand the origin of these roughly flat portions, we recall that in the thermodynamic limit $L\to\infty$ two states are said to be at coexistence if the ‘temperature’ $r$, the chemical potential $\mu$ and the pressure $p=-\omega$ is the same for both states. These quantities do not change in value as one takes the system across the coexistence region by increasing the average density in the system (or equivalently $\phi_0$). 
This is because the additional surface excess free energy terms (surface tension terms that are present because both coexisting phases are in the system) do not contribute in the thermodynamic limit. This is in turn a consequence of the fact that these surface terms scale as $L^{(d-1)}$, whereas the bulk volume terms scale as $L^d$, where $d$ is the dimensionality of the system. Thus, regions where these measures are approximately flat are coexistence regions between two phases – this can be confirmed for the 2d results by comparing the ranges of $\phi_0$ where the results for $\mu$ and $\omega$ are approximately flat with the coexistence regions in Fig. \[fig:cSH-phasediagram-2d\]. The observation that the 2d curves in Fig. \[fig:loc-2d-fam-rm09\] are not completely flat indicates that the interfacial (surface tension) terms between the different phases in the LS state do contribute to the free energy; this is thus a finite size effect. Note that it might be possible to distribute the LS in such a way that they percolate throughout the whole system, so that the contribution from the interfaces scales as $L^d$. If this is the case, the above argument does not apply.

**(a)**![(color online) Characteristics of steady-state localized solutions of the conserved Swift-Hohenberg equation for $r=-0.9$, as a function of the mean order parameter $\phi_0$ on fixed one, two and three-dimensional domains of size $L^d$, $d=1,2,3$, and $L=100$. The various solution profiles are characterized by their (a) $L^2$ norm $||\delta\phi||$, (b) chemical potential $\mu$, (c) mean free energy $(F-F_0)/L^d$, and (d) mean grand potential $\omega=F/L^d-\phi_0\mu$. The connected (violet) square and (green) triangle symbols correspond to 2d calculations (for sample profiles see Figs. \[fig:cSH-phasediagram-2d\] and \[fig:prof-twodim-one\]) and 3d calculations (for sample profiles see Fig. \[fig:prof-threedim-one\]). For comparison we also show the 1d results from Fig. \[fig:loc-fam-rm09\] for the periodic (i.e., stripe) state with $n=16$ bumps (green dashed) and the various 1d localized states. The thick green dotted line corresponds to the homogeneous solution $\phi(\mathbf{x})=\phi_0$.](2d_phi_rm0,9_100x100_norm "fig:"){width="0.45\hsize"} ![Caption as in panel (a).](2d_phi_rm0,9_100x100_energy "fig:"){width="0.45\hsize"}**(b)** **(c)**![Caption as in panel (a).](2d_phi_rm0,9_100x100_mu "fig:"){width="0.45\hsize"} ![Caption as in panel (a).](2d_phi_rm0,9_100x100_grandPotential "fig:"){width="0.45\hsize"}**(d)** [\[fig:loc-2d-fam-rm09\] ]{}

Discussion and conclusions
==========================

The conserved Swift-Hohenberg equation is perhaps the simplest example of a pattern-forming system with a conserved quantity. Models of this type arise when modelling a number of different systems, with the PFC model being one particular example.
Other examples include binary fluid convection between thermally insulating boundaries [@K:89], where this equation was first derived, convection in an imposed magnetic field (where the conserved quantity is the magnetic flux [@CM01; @LBK11]), and two-dimensional convection in a rotating layer with stress-free boundaries (where the conserved quantity is the zonal velocity [@CM01; @BBKK12]). Models of a vibrating layer of granular material are also of this type (here the conserved quantity is the total mass [@DL10]). It is perhaps remarkable that all these distinct systems behave very similarly. In particular, they all share the following features: (i) strongly subcritical bifurcations forming localized structures, so that the resulting LS are present outside the bistability region between the homogeneous and periodic states; (ii) presence of LS even when the periodic branch is supercritical; (iii) organization of LS into slanted snaking; and (iv) the transition from slanted snaking to smooth snaking, whereby the LS grow smoothly without specific bump-forming events (referred to as ‘nucleation’ events in the pattern formation literature[^1]). These properties of the system can all be traced to the fact that the conserved quantity is necessarily redistributed when an instability takes place or a localized structure forms. This fact makes it harder for additional LS to form, and as a result the system must be driven harder for this to occur, leading to slanted snaking. Bistability is no longer required since the localized structures are no longer viewed as inclusions of a periodic state within a homogeneous background or vice versa. These considerations also explain why the bifurcation diagrams in these systems are sensitive to the domain size, and it may be instructive, although difficult, to repeat some of our calculations for larger domain sizes.
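Many of the features listed above can be reproduced with a minimal 1d simulation. The sketch below is our own illustration, not the code used to produce the figures: it integrates the conserved Swift-Hohenberg equation $\phi_t=\alpha\partial_x^2[(r+(q^2+\partial_x^2)^2)\phi+\phi^3]$ with a first-order semi-implicit pseudospectral scheme. The values $r=-0.9$ and $L=100$ match those used in the text; $\phi_0=-0.4$ is chosen so that $r+3\phi_0^2<0$ (inside the band of instability), while the time step, resolution and noise amplitude are illustrative choices.

```python
import numpy as np

# Conserved Swift-Hohenberg equation in 1d:
#   phi_t = alpha * dxx[ (r + (q^2 + dxx)^2) phi + phi^3 ],
# integrated with a first-order IMEX scheme: the stiff linear part is treated
# implicitly in Fourier space, the cubic term explicitly.
L, nx = 100.0, 256
r, q, alpha, phi0 = -0.9, 1.0, 1.0, -0.4     # r + 3*phi0^2 < 0: patterns grow
dt, nsteps = 0.05, 4000

k = 2.0*np.pi*np.fft.rfftfreq(nx, d=L/nx)    # wavenumbers of the rfft modes
lin = -alpha*k**2*(r + (q**2 - k**2)**2)     # linear operator in Fourier space

rng = np.random.default_rng(0)
noise = rng.standard_normal(nx)
phi = phi0 + 1e-2*(noise - noise.mean())     # perturbed homogeneous state

for _ in range(nsteps):
    nonlin = -alpha*k**2*np.fft.rfft(phi**3)
    phi = np.fft.irfft((np.fft.rfft(phi) + dt*nonlin)/(1.0 - dt*lin), n=nx)

# every term enters under d^2/dx^2, so the k=0 amplitude (the mean of phi)
# is conserved exactly while the instability saturates into a pattern
print(np.mean(phi), np.std(phi))
```

Because every term sits under $\partial_x^2$, the $k=0$ Fourier amplitude is untouched by the update, which is the discrete counterpart of the mass conservation that underlies the redistribution argument above.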
![(color online) The $L^2$ norm $||\delta\phi ||$ for the homogeneous, periodic and localized steady state solutions of the conserved Swift-Hohenberg equation (\[eq:csh\]) as a function of the chemical potential $\mu$, for a fixed domain size of $L=100$ and various $r$. The horizontal thick dot-dashed green line corresponds to the homogeneous solution $\phi(x)=\phi_0$. Periodic solutions with $n=16$ peaks are labeled by the corresponding $r$ values, whereas the branches that bifurcate from the periodic solution branches represent the two types of symmetric localized states (LS$_\mathrm{odd}$ and LS$_\mathrm{even}$): $r=-0.7$ (dotted red lines), $r=-0.6$ (dashed black lines), $r=-0.5$ (solid blue lines), $r=-0.4$ (dotted green lines), $r=-0.3$ (solid black line) and $r=-0.2$ (dashed red line). The dashed vertical lines indicate the corresponding coexistence chemical potential values for $r=-0.7$, $-0.6$ and $-0.3$, respectively.](norm_mu_L100_various_r){width="0.9\hsize"} [\[fig:bif-over-mu\] ]{} Of particular significance is the observation that the slanted snaking found when the LS norm $||\delta \phi ||$ is plotted as a function of the mean order parameter $\phi_0$ for fixed $r$ (Figs. \[fig:loc-fam-rm09\] and \[fig:loc-fam-sevrm-one\](a,b)) is ‘straightened’ out when the same solution branches are displayed as a function of $\mu$ (see Fig. \[fig:bif-over-mu\]). The snaking is then vertically aligned and centred around the coexistence chemical potential values (referred to as Maxwell points in Ref. [@BuKn06]) as known from the non-conserved Swift-Hohenberg equation [@BuKn06]. In this representation the periodic states typically bifurcate subcritically from the homogeneous state (Fig. \[fig:bif-over-mu\]), in contrast to the supercritical transitions found when $\phi_0$ is used as the bifurcation parameter (Figs. \[fig:loc-fam-rm09\] and \[fig:loc-fam-sevrm-one\]). 
It is also worth noting that the thermodynamic tricritical point at $(\phi_{0b},r_b^\mathrm{max})=(\pm\sqrt{3/38},-9/38)$ discussed in Sec. \[sec:loc-states-1d\] (see also the last paragraph of the Appendix) corresponds to the transition from a subcritical to a supercritical primary bifurcation in the representation of Fig. \[fig:bif-over-mu\], i.e., when $\mu$ is used as the bifurcation parameter. This transition takes place between the solid black line for $r = -0.3$ and the dashed red line for $r = -0.2$ in Fig. \[fig:bif-over-mu\] and implies that the linear stability properties of *identical* periodic solutions differ, depending on whether the permitted perturbations preserve $\mu$ (and hence permit $\phi_0$ to vary) or vice versa, and this is so for the localized states as well. In particular, when $\mu$ is used as the control parameter, the differences identified in Sec. \[sec:snake\] between cases (i)-(iv), in which the LS branches do not snake, and cases (v)-(vi), in which they do, disappear. In Fig. \[fig:bif-over-mu\] snaking over $\mu$ is clearly visible for $r=-0.7$, $-0.6$ and $-0.5$. There is no snaking for $r=-0.4$, most likely due to finite size effects, or for larger $r$. The thermodynamic reason that the snaking becomes straightened when displayed as a function of the chemical potential $\mu$ is related to the issues discussed at the end of Sec. \[sec:2d3d\]. For a system with $(\phi_0,r)$ chosen so that it lies in the coexistence region (e.g., between the homogeneous and the bump states) and size $L$ large enough to be effectively in the thermodynamic limit $L \to \infty$, the chemical potential does not vary when $\phi_0$ is changed, and neither does the grand potential $\omega=-p$. This is because the free energy contribution from interfaces between the coexisting phases (the interfacial tension) scales with the system size as $L^{(d-1)}$, and hence is negligible compared to the bulk contributions, which scale as $L^d$.
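The $L^{d-1}$ versus $L^d$ argument can be illustrated with a toy lever-rule free energy. Everything in the sketch below is a made-up illustration (the double-well $f$, the hypothetical $\phi_0$-dependent interfacial tension $\sigma$), not part of the paper's model; it shows that the finite-size deviation of $\mu$ from its coexistence value decays as $1/L$:

```python
import numpy as np

# Toy bulk free energy with coexisting phases at phi = -1 and phi = +1,
# both with f = 0, so the coexistence chemical potential is mu = 0.
f = lambda phi: (phi**2 - 1.0)**2

def mean_free_energy(phi0, L, sigma0=0.1):
    """F/L^d: lever-rule bulk term plus an interfacial term ~ L^(d-1)/L^d = 1/L."""
    if abs(phi0) >= 1.0:                                  # single-phase region
        return f(phi0)
    x = (phi0 + 1.0) / 2.0                                # fraction of the phi=+1 phase
    sigma = sigma0 * (1.0 + 0.5 * np.cos(np.pi * x))      # made-up phi0-dependent tension
    return x * f(1.0) + (1.0 - x) * f(-1.0) + sigma / L

def chemical_potential(phi0, L, h=1e-5):
    """mu = d(F/L^d)/d(phi_0), by central differences."""
    return (mean_free_energy(phi0 + h, L) - mean_free_energy(phi0 - h, L)) / (2.0 * h)

# inside the coexistence region mu deviates from its coexistence value (zero
# here) only through the interfacial term, i.e. by an amount proportional to 1/L
for L in (10.0, 100.0, 1000.0):
    print(L, chemical_potential(0.3, L))
```

In the sketch the bulk lever-rule term is linear in $\phi_0$, so its derivative carries no $\phi_0$ dependence; the entire residual slope of $\mu$ across the coexistence region comes from the $\sigma(\phi_0)/L$ term, mirroring the finite-size effect visible in the 2d curves of Fig. \[fig:loc-2d-fam-rm09\].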
As $\phi_0$ is varied so as to traverse the coexistence region, new bumps are added to or removed from the bump state (LS). For a (thermodynamically) large system, the resulting changes in the free energy are negligible, although this is no longer true of a finite-size system. Furthermore, the interfacial free energy contribution between the two phases varies depending on the size of the bumps right at the interface, and the size of these bumps depends on the value of $\phi_0$. The difference between the maximal and minimal interfacial energy determines the width of the snake. All of these contributions have leading-order terms that scale as $L^{-1}$. In a related situation, the variation in the chemical potential as the density is varied in a finite-size system containing a fluid exhibiting gas-liquid coexistence is discussed in Refs. [@MSE06; @SVB09; @BBVT12]. It is instructive to compare, for example, Fig. 3 in Ref. [@BBVT12] and Fig. \[fig:loc-2d-fam-rm09\](c) of the present paper. We mention that the steady states of the nonconserved Swift-Hohenberg equation studied, e.g., in Refs. [@BuKn06; @BBKM08], correspond to solutions of Eq. (\[eq:csh-loc-steady\]) with the nonlinearity $g_{23}$ but $\mu=0$ (see Sec. \[sec:eqs\]). These always show vertically aligned snaking when the LS norm $||\delta\phi||$ is plotted as a function of $r$ (with $\phi_0$ adapting). However, when $\mu=0$ no localized states are present with the pure cubic nonlinearity employed in Eq. (\[eq:csh-loc-steady\]). To find such states $\mu$ must be fixed at a value sufficiently far from zero; the resulting LS then exhibit vertically aligned snaking when plotted as a function of $r$ [@BuKn06]. The domain size we have used, $L=100$, is moderately large. It contains of the order of 16 wavelengths of the primary structure-forming instability.
Because the equation is simpler than the hydrodynamic equations for which similar behavior was observed, the results we have been able to obtain are substantially more complete, even in one dimension, than was possible elsewhere. In particular, we have been able to compute the rungs of asymmetric LS and to study their behavior as the system transitions from slanted snaking to smooth snaking as $r$ increases towards zero. The extension of our results to two and three spatial dimensions is necessarily incomplete, although the transition to clusters of bumps followed by isolated bumps as the total “mass” decreases is not surprising. However, the transition from a connected structure to a disconnected one (the “percolation” threshold) in two and three dimensions deserves a much more detailed study than we have been able to provide. In this connection we mention two experimental systems exhibiting a transition from a solid-like phase to a gas-like phase of individual spots: the gaseous discharge system studied by H.-G. Purwins and colleagues [@astrov01; @PBA10] and the liquid crystal light valve experiment of S. Residori and colleagues [@BCR09]. In both these systems a crystal-like structure of spots with hexagonal coordination was observed to melt into a ‘gas’ of individual spots as a parameter was varied. This two-dimensional process leads to states resembling those found here in Figs. \[fig:cSH-phasediagram-2d\] and \[fig:prof-twodim-one\], although the stripe-like structures were typically absent. In these two systems the spots in the ‘gas phase’ are mobile, unlike in the cSH equation, indicating the absence of variational structure. However, both systems are globally coupled, by the imposed potential difference in the discharge system and the feedback loop in the liquid crystal light valve experiment, raising the possibility that the global coupling in these systems plays a role similar to that played by the conserved order parameter in the cSH equation.
We should also mention that some of the localized states observed in Figs. \[fig:prof-twodim-one\] and \[fig:prof-threedim-one\] raise some concern about the validity of the PFC as a model for solidification and freezing – we refer in particular to the order parameter profiles displayed at the bottom right of these figures. These show that in both 2d and 3d the PFC predicts the existence of steady states with isolated single bumps. Recall that in the standard interpretation of the PFC, the bumps correspond to frozen particles whilst the homogeneous state corresponds to the uniform liquid state. Such profiles could perhaps be a signature of the dynamical heterogeneity that is a feature of glassy systems, but there are problems with this interpretation – the glass transition is a collective phenomenon; single particles do not freeze on their own whilst the remainder of the particles around them remain fluid! We refer readers interested in the issue of the precise interpretation of the order parameter in the PFC to the discussion in the final section of Ref. [@RATK12]. We should also point out that although these structures correspond to local minima of the free energy (i.e., they are stable), they do not correspond to the global minimum. These states occur for state points where the global free energy minimum corresponds to the uniform homogeneous state. All the results presented here have been obtained for the generic conserved Swift-Hohenberg equation, Eq. (\[eq:DDFT\_PFC\]), with the energy functional in Eq. (\[eq:hfe\]). We believe that our main results provide a qualitative description of a number of related models in materials science that have a similar structure and describe systems that may show transitions between homogeneous and patterned states characterized by a finite structure length.
In particular, we refer to systems that can be described by conserved gradient dynamics based on an underlying energy functional that features a local double-well contribution, a destabilizing squared gradient term and a stabilizing squared Laplacian term. The latter two terms may themselves result from a gradient expansion of an integral describing nonlocal interactions, such as that required to reduce DDFT models to the simpler PFC model [@vBVL09; @ARTK12]. Other systems where the present results may shed some light include diblock copolymers and epitaxial layers. The time evolution equation for diblock copolymers [@OoSh87; @Paqu91; @TeNi02; @Glas10; @Glas12] is of fourth order like the nonconserved SH equation but contains a global coupling term that is related to mass conservation. The equation emerges from a nonlocal term in the energy functional [@Leib80; @OhKa86]. The global coupling results in an evolution towards a state with a given mean value $\phi_0$ of the density order parameter if the initial mean value differs from $\phi_0$, or in conservation of mass as the system evolves if the initial mean value coincides with the imposed $\phi_0$. Although this differs from the formulation using a conserved Swift-Hohenberg equation, the steady versions of the diblock-copolymer equation and of the conserved Swift-Hohenberg equation are rather similar: they differ only in the position of the nonlinearity. Up to now no systematic study of localized states exists for the diblock-copolymer equation, although Ref. [@Glas12] discusses their existence and gives some numerical examples for a profile with a single bump in rather small systems (see their Fig. 5). Since in the diblock-copolymer system the order parameter is a conserved quantity, we would expect the snaking of localized states for fixed $\phi_0$ to be slanted, similar to our Fig.
\[fig:loc-fam-rm09\], instead of being vertical, corresponding to a standard snake in which all the saddle-node bifurcations are vertically aligned, as sketched in Fig. 6 of [@Glas12]. Finally, we briefly mention a group of model equations that are derived to describe the evolution of the surface profile of epitaxially strained solid films including, e.g., the self-organization of quantum dots [@GDV03]. The various evolution equations that have been employed account for the elasticity (linear and nonlinear isotropic elasticity, as well as misfit strain) of the epitaxial layer and the (isotropic or anisotropic) wetting interaction between the surface layer and the solid beneath. The evolution equations we wish to highlight are of sixth order [@SDV93; @Savi03; @GLSD04; @Thie10], much like the conserved Swift-Hohenberg equation investigated here. Other models, however, are of fourth order only [@XiE02; @SRF03] or contain fully nonlocal terms (resulting in integro-differential equations) [@XiE04; @LGDV07]. However, even the sixth order models often contain additional nonlinear terms in the derivatives (see, for instance, Eq. (5) of Ref. [@GDV03]). Localized state solutions of these equations have, to our knowledge, not yet been studied systematically, although some have been obtained numerically (Fig. 5 of [@GDV03]). Future research should investigate how the characteristics of the localized states analysed here for the conserved Swift-Hohenberg equation differ from those in specific applied systems such as diblock copolymers or epitaxial layers.

Appendix {#appendix .unnumbered}
========

In this Appendix we determine the direction of branching of the localized states when they bifurcate from the branch of periodic states. When the domain is large this bifurcation occurs when the amplitude of the periodic states is small and hence is accessible to weakly nonlinear theory. We begin with Eq.
(\[eq:csh-loc\]) which may be written $$\phi_t=\alpha\partial_x^2[(r+q^4)\phi+\phi^3+2q^2\partial_x^2\phi+\partial_x^4\phi].$$ This equation has the homogeneous solution $\phi=\phi_0$. We let $\phi=\phi_0+\psi$, obtaining $$\psi_t=\alpha\partial_x^2[(r+q^4+3\phi_0^2)\psi+3\phi_0\psi^2+\psi^3+2q^2\partial_x^2\psi+\partial_x^4\psi].$$ Linearizing and looking for solutions of the form $\psi\propto\exp(\beta t+ikx)$ we obtain the dispersion relation $$\beta=-\alpha k^2[r+(q^2-k^2)^2+3\phi_0^2].$$ The condition $\beta=0$ gives the critical wavenumbers $$k_c^2=q^2\pm\sqrt{-r-3\phi_0^2}.$$ Thus when $r=r_c\equiv -3\phi_0^2$ the neutral curve has maxima at both $k=0$ and $k_c=q$. When $r=-3\phi_0^2-\epsilon^2\nu$, where $\nu={\cal O}(1)$ and $\epsilon$ is a small parameter that defines how far $r$ is from $r_c$, a band of wavenumbers near $k=q$ grows slowly with growth rate $\beta={\cal O}(\epsilon^2)$, while wavenumbers near $k=0$ decay at the same rate. There is therefore time for these two disparate wavenumbers to interact and it is this interaction that determines the direction of branching. These considerations suggest that we perform a two-scale analysis with a short scale $x={\cal O}(q^{-1})$ and a long scale $X=\epsilon x$, so that $\partial_x\rightarrow\partial_x+\epsilon\partial_X$ etc. We also write $$\psi=\epsilon A(X,t)e^{iqx}+\epsilon^2B(X,t)+\epsilon^2 C(X,t)e^{2iqx}+{\rm c.c.}+{\cal O}(\epsilon^3),$$ where the amplitudes $A$ and $C$ are complex and $B$ is real. Substituting, we obtain $$A_t=-\epsilon^2\alpha q^2(-\nu A-4q^2A_{XX}+6\phi_0AB+6\phi_0C A^*+3|A|^2A)+{\cal O}(\epsilon^3),$$ $$B_t=\epsilon^2\alpha(q^4B_{XX}+6\phi_0|A|^2_{XX})+{\cal O}(\epsilon^3),\label{B}$$ and $$C_t=-4\alpha q^2(9q^4C+3\phi_0A^2)+{\cal O}(\epsilon).$$ The last equation implies that the mode $C$ decays on an ${\cal O}(1)$ time scale to its asymptotic value, $C=-\phi_0 A^2/(3q^4)+{\cal O}(\epsilon)$.
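These intermediate steps are easy to verify with a computer algebra system. The following sketch (using `sympy`; the variable names are ours) checks the dispersion relation and the roots of the neutral curve:

```python
import sympy as sp

r, phi0, k = sp.symbols('r phi_0 k', real=True)
alpha, q = sp.symbols('alpha q', positive=True)

# substitute psi ~ exp(beta*t + i*k*x) into the linearized equation:
# beta = alpha*(i k)^2 * [ (r + q^4 + 3 phi_0^2) + 2 q^2 (i k)^2 + (i k)^4 ]
beta = sp.expand(alpha*(sp.I*k)**2*((r + q**4 + 3*phi0**2)
                                    + 2*q**2*(sp.I*k)**2 + (sp.I*k)**4))
beta_quoted = -alpha*k**2*(r + (q**2 - k**2)**2 + 3*phi0**2)
assert sp.simplify(beta - beta_quoted) == 0

# neutral curve beta = 0 with k != 0: quadratic in K = k^2 whose roots are
# K = q^2 -+ sqrt(-r - 3 phi_0^2)
K = sp.symbols('K')
sols = sp.solve(sp.Eq(r + (q**2 - K)**2 + 3*phi0**2, 0), K)
assert sp.simplify(sols[0] + sols[1] - 2*q**2) == 0                 # sum of roots
assert sp.simplify(sols[0]*sols[1] - (q**4 + r + 3*phi0**2)) == 0   # product of roots

# at r = r_c = -3 phi_0^2 the two roots merge at k_c^2 = q^2
assert all(sp.simplify(s.subs(r, -3*phi0**2) - q**2) == 0 for s in sols)
```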
The resulting equations may be written in the form $$A_{t}=\nu A+4A_{XX}-\xi A\theta_X-3\biggl(1-\frac{\xi^2}{54}\biggr)|A|^2A+{\cal O}(\epsilon),\label{A}$$ $$\theta_{t}=\theta_{XX}+\xi |A|^2_{X}+{\cal O}(\epsilon)\label{theta}$$ using the substitution $B=\theta_X$, and integrating Eq. (\[B\]) once to obtain an equation for $\theta$. In writing these equations we have absorbed $q$ into the length scale $X$ and $\epsilon^2\alpha q^2$ into the time scale $t$, and introduced the parameter $\xi\equiv 6\phi_0/q^2<0$. The resulting equations are equivalent to the equations studied by Matthews and Cox [@MaCo00]. Equations (\[A\])–(\[theta\]) provide a complete description of the small amplitude behavior of Eq. (\[eq:csh-loc\]). The equations inherit a gradient structure from Eq. (\[eq:csh-loc\]), $$A_t=-\frac{\delta F}{\delta A^*},\qquad \theta_t=-\frac{\delta F}{\delta \theta},$$ where $$F[A,A^*,\theta]=\int_D\biggl\{-\nu|A|^2+4|A_X|^2+\frac{1}{2}\theta_X^2+\xi|A|^2\theta_X+\frac{3}{2}\biggl(1-\frac{\xi^2}{54}\biggr)|A|^4\biggr\}\,dX.$$ We may write this energy in the form $$F[A,A^*,\theta]=\int_D\biggl\{-\nu|A|^2+4|A_X|^2+\frac{1}{2}\biggl[(\theta_X+\xi|A|^2)^2+3\biggl(1-\frac{19\xi^2}{54}\biggr)|A|^4\biggr]\biggr\}\,dX,\label{energy}$$ implying that the free energy $F[A,A^*,\theta]$ is not bounded from below once $\xi^2>54/19$ (equivalently $\phi_0^2>3q^4/38$). This is a reflection of the presence of subcritical branches. Indeed, the steady states of Eqs. (\[A\])–(\[theta\]) correspond to critical points of this energy and satisfy the [*nonlocal*]{} equation $$4A_{XX}+(\nu-\langle|A|^2\rangle)A-3\biggl(1-\frac{19\xi^2}{54}\biggr)|A|^2A=0,\label{nonlocal}$$ where $\langle(\cdots)\rangle\equiv L^{-1}\int_0^L(\cdots)\,dx$ and $L$ is the domain length. This equation demonstrates (i) that the primary bifurcation is subcritical when $\xi^2>54/19$ in agreement with Eq.
(\[energy\]), and (ii) that as $|A|$ increases the value of $\nu$ has to be raised in order to maintain the same value of the effective bifurcation parameter ${\nu}_{\rm eff}\equiv\nu-\langle|A|^2\rangle$. This is the basic reason behind the slanted structure in Fig. \[fig:loc-fam-rm09\](a). Equations of this type arise in numerous applications [@hall; @Elmer88a; @Elmer88b; @BBKK12] and their properties have been studied in several papers [@Elmer88a; @norbury02; @vega05; @norbury07]. We mention, in particular, that unmodulated wavetrains bifurcate supercritically when $\xi^2<54$, a condition that differs from the corresponding condition $\xi^2<54/19$ for spatially modulated wavetrains. Equations (\[A\])–(\[theta\]) possess the solution $(A,\theta)=(A_0,0)$, where $|A_0|^2=\nu/[3(1-\xi^2/54)]$, corresponding to a spatially uniform wavetrain. This state bifurcates in the positive $\nu$ direction whenever $\xi^2<54$, or equivalently $\phi_0^2<3q^4/2$. In the following we are interested in the modulational instability of this state. We suppose that this instability takes place at $\nu=\nu_c$ and write $\nu=\nu_c+\delta^2{\tilde \nu}$, where $\delta\ll\epsilon$ is a new small parameter and ${\tilde \nu}={\cal O}(1)$. In addition, we write $A=A_0(1+{\tilde A})$, $\theta_{X}={\tilde V}$. Since the imaginary part of ${\tilde A}$ decays to zero we take ${\tilde A}$ to be real and write $${\tilde A}=\delta{\tilde A}_1+\delta^2{\tilde A}_2+\delta^3{\tilde A}_3+\dots,\quad {\tilde V}=\delta{\tilde V}_1+\delta^2{\tilde V}_2+\delta^3{\tilde V}_3+\dots.$$ Finally, since the localized states created at $\nu_c$ are stationary, we set the time derivatives to zero and integrate Eq. (\[theta\]) once, obtaining $${\tilde V}+\xi |A|^2+C(\delta)=0,\label{V}$$ where $C=C_0+\delta C_1+\delta^2C_2+\delta^3C_3+\dots$ and the $C_j$ are determined by the requirement that the average of $V$ over the domain vanishes.
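The rewriting of the free energy as a completed square, and the threshold it implies, can also be checked symbolically. In this `sympy` sketch (our own notation) `a` stands for $|A|$ and `th` for $\theta_X$; the $|A_X|^2$ gradient term, identical in both forms of the integrand, is omitted:

```python
import sympy as sp

xi, nu, a, th = sp.symbols('xi nu a theta_X', real=True)

# integrand of F in the two quoted forms
F1 = -nu*a**2 + sp.Rational(1, 2)*th**2 + xi*a**2*th \
     + sp.Rational(3, 2)*(1 - xi**2/54)*a**4
F2 = -nu*a**2 + sp.Rational(1, 2)*((th + xi*a**2)**2
                                   + 3*(1 - 19*xi**2/54)*a**4)
assert sp.simplify(F1 - F2) == 0      # the completion of the square is exact

# the quartic coefficient in the completed square changes sign at xi^2 = 54/19;
# with xi = 6 phi_0 / q^2 this corresponds to phi_0^2 = 3 q^4 / 38
P, q = sp.symbols('P q', positive=True)               # P stands for phi_0^2
sols = sp.solve(sp.Eq(36*P/q**4, sp.Rational(54, 19)), P)
assert sp.simplify(sols[0] - 3*q**4/38) == 0
```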
Substitution of the above expressions yields a sequence of ordinary differential equations which we solve subject to periodic boundary conditions. At ${\cal O}(1)$ we obtain $C_0=-\xi|A_c|^2$, where $A_c$ denotes the value of $A_0$ at $\nu=\nu_c$. At ${\cal O}(\delta)$ we obtain the linear problem $$-2\nu_c {\tilde A}_1+4{\tilde A}_{1XX}-\xi {\tilde V}_1=0,\qquad {\tilde V}_1+2\xi|A_c|^2{\tilde A}_1=0, \qquad C_1=0,$$ and conclude that ${\tilde A}_1=A_{11}\cos\ell X$, ${\tilde V}_1=V_{11}\cos\ell X$, provided that $\nu_c=-2\ell^2[(1-\xi^2/54)/(1-19\xi^2/54)]$. Note that this quantity is positive when $54>\xi^2>54/19$ (equivalently $3q^4/2>\phi_0^2>3q^4/38$) and that it vanishes in the limit $\ell\rightarrow0$, i.e., for infinite modulation length scale. At ${\cal O}(\delta^2)$ we obtain $$-2\nu_c {\tilde A}_2+4{\tilde A}_{2XX}-\xi {\tilde V}_2=\xi {\tilde A}_1{\tilde V}_1+3\nu_c {\tilde A}_1^2,\qquad {\tilde V}_2+\xi|A_c|^2(2{\tilde A}_2+{\tilde A}_1^2)+C_2=0.$$ Thus ${\tilde A}_2=A_{20}+A_{22}\cos 2\ell X$, ${\tilde V}_2=V_{22}\cos 2\ell X$, where $$A_{20}=-\frac{3}{4}\biggl[\frac{1-(13\xi^2/54)}{1-(\xi^2/54)}\biggr]A_{11}^2,\qquad A_{22}=\frac{1}{4}A_{11}^2,\qquad V_{22}=-\xi|A_c|^2A_{11}^2,$$ together with $$C_2=-\frac{1}{2}\xi|A_c|^2A_{11}^2-2\xi|A_c|^2A_{20}.$$ Finally, at ${\cal O}(\delta^3)$ we obtain $$-2\nu_c {\tilde A}_3+4{\tilde A}_{3XX}-\xi {\tilde V}_3=2{\tilde\nu}{\tilde A}_1+\xi ({\tilde A}_1{\tilde V}_2+{\tilde A}_2{\tilde V}_1)+\nu_c (6{\tilde A}_1{\tilde A}_2+{\tilde A}_1^3),$$ and $${\tilde V}_3+\xi|A_c|^2(2{\tilde A}_3+2{\tilde A}_1{\tilde A}_2)+C_3=-\frac{2\xi}{3(1-(\xi^2/54))}{\tilde\nu}{\tilde A}_1.$$ The direction of branching of solutions with ${\tilde A}\ne0$ follows from the solvability condition for ${\tilde A}_3$, i.e., the requirement that the inhomogeneous terms contain no terms proportional to $\cos\ell X$.
We obtain $${\tilde\nu}+3\ell^2A_{11}^2\biggl[\frac{1-(13\xi^2/54)}{1-(19\xi^2/54)}\biggr]=0.\label{direction1}$$ It follows that the localized states bifurcate in the positive $\nu$ direction (lower temperature) when $54/19<\xi^2<54/13$ and in the negative $\nu$ direction when $54/13<\xi^2<54$. The former requirement is equivalent to $3q^4/38<\phi_0^2<3q^4/26$, the latter to $3q^4/26<\phi_0^2<3q^4/2$. These results agree with the corresponding results for an equation similar to Eq. (\[nonlocal\]) obtained by Elmer [@Elmer88a] (see also [@MaCo00; @BBKK12]). In the above calculation we have fixed the parameter $\xi$ (equivalently $\phi_0$) and treated $r$ (equivalently $\nu$) as the bifurcation parameter. However, we can equally well do the opposite. Since the calculation is only valid in the neighborhood of the primary bifurcation at $r=-3\phi_0^2$, i.e., in the neighborhood of $\xi=-\sqrt{-12r}/q^2$, the destabilizing perturbation analogous to ${\tilde\nu}>0$ is now replaced by ${\tilde\xi}>0$ (cf. Fig. \[fig:cSH-phasediagram-1d\]), implying that the direction of branching of localized states changes from subcritical to supercritical as $\xi^2$ decreases through $\xi^2=54/13$ (equivalently $\phi_0^2$ decreases through $\phi_0^2=3q^4/26$ or $r$ increases through $r=-9q^4/26\approx-0.35q^4$), in good agreement with the numerically obtained value $r\approx -0.41$ (for $q=1$) that is indicated in Fig. \[fig:prof-loc-folds-zoom\] by the horizontal line separating regions (iii) and (iv). We next consider the corresponding results in the case when the chemical potential $\mu$ is fixed.
In this case the constants $C_j$ vanish, and the ${\cal O}(\delta^2)$ problem therefore has solutions of the form ${\tilde A}_2=A_{20}+A_{22}\cos 2\ell X$, ${\tilde V}_2=V_{20}+V_{22}\cos 2\ell X$, where $$A_{20}=-\frac{3}{4}A_{11}^2,\qquad V_{20}=\xi |A_c|^2 A_{11}^2,\qquad A_{22}=\frac{1}{4}A_{11}^2,\qquad V_{22}=-\xi|A_c|^2A_{11}^2.$$ The nonzero $V_{20}$ contributes additional terms to the solvability condition at ${\cal O}(\delta^3)$. The result corresponding to (\[direction1\]) is $${\tilde\nu}+3\ell^2A_{11}^2\biggl[\frac{1-(\xi^2/54)}{1-(19\xi^2/54)}\biggr]=0.\label{direction2}$$ This implies that the secondary LS branches are subcritical whenever the periodic branch is supercritical ($\xi^2<54$) and the secondary bifurcation is present ($\xi^2>54/19$). These results are consistent with Fig. \[fig:bif-over-mu\] and moreover predict that the LS are absent for $\xi^2<54/19$, i.e., for $\phi_0>-\sqrt{3/38}q^2$ or $r>-9q^4/38$ (cf. Fig. \[fig:bif-over-mu\]). This point is, of course, the thermodynamic tricritical point discussed in Sec. \[sec:loc-states-1d\].

[**Acknowledgement**]{}. The authors wish to thank the EU for financial support under grant MRTN-CT-2004-005728 (MULTIFLOW) and the National Science Foundation for support under grant DMS-1211953.

[10]{} P. Coullet, C. Riera, and C. Tresser, “Stable static localized structures in one dimension,” Phys. Rev. Lett. **84**, 3069–3072 (2000). U. Bortolozzo, M. G. Clerc, and S. Residori, “Solitary localized structures in a liquid crystal,” New J. Phys. **11**, 093037 (2009). R. Richter and I. V. Barashenkov, “Two-dimensional solitons on the surface of magnetic fluids,” Phys. Rev. Lett. **94**, 184503 (2005). G. W. Hunt, M. A. Peletier, A. R. Champneys, P. D. Woods, M. A. Wadee, C. J. Budd, and G. J. Lord, “Cellular buckling in long structures,” Nonlinear Dynamics **21**, 3–29 (2000). O. Batiste, E. Knobloch, A. Alonso, and I. Mercader, “Spatially localized binary-fluid convection,” J. Fluid Mech.
[^1]: In the condensed matter literature the word ‘nucleation’ refers to the traversing of a (free) energy barrier to form a new phase. In the theory of pattern formation, this term is used more loosely to describe the appearance or birth of a new structure, bump, etc, without necessarily implying that there is an energy barrier to be crossed.
--- abstract: 'Constrained realisations of Gaussian random fields are used in cosmology to design special initial conditions for numerical simulations. We review this approach and its application to density peaks providing several worked-out examples. We then critically discuss the recent proposal to use constrained realisations to modify the linear density field within and around the Lagrangian patches that form dark-matter haloes. The ambitious concept is to forge ‘genetically modified’ haloes with some desired properties after the non-linear evolution. We demonstrate that the original implementation of this method is not exact but approximate because it tacitly assumes that protohaloes sample a set of random points with a fixed mean overdensity. We show that carrying out a full genetic modification is a formidable and daunting task requiring a mathematical understanding of what determines the biased locations of protohaloes in the linear density field. We discuss approximate solutions based on educated guesses regarding the nature of protohaloes. We illustrate how the excursion-set method can be adapted to predict the non-linear evolution of the modified patches and thus fine tune the constraints that are necessary to obtain preselected halo properties. This technique allows us to explore the freedom around the original algorithm for genetic modification. We find that the quantity which is most sensitive to changes is the halo mass-accretion rate at the mass scale on which the constraints are set. Finally we discuss constraints based on the protohalo angular momenta.' author: - | Cristiano Porciani,$^{1}$[^1]\ $^{1}$ Argelander Institute for Astronomy, University of Bonn, Auf dem Hügel 71, D-53121, Bonn, Germany\ date: 'Accepted XXX. 
Received YYY; in original form ZZZ' title: 'Constrained simulations and excursion sets: understanding the risks and benefits of ‘genetically modified’ haloes' --- \[firstpage\] galaxies: formation, haloes – cosmology: theory, dark matter, large-scale structure of Universe – methods: numerical Introduction ============ @HR [hereafter HR] presented a fast technique to build constrained realisations of Gaussian random fields. This method is exact and applies as long as the constraints can be expressed in terms of linear functionals of the random field. The algorithm has been widely used to generate ‘special’ initial conditions for numerical simulations of structure formation, either by requiring the presence of uncommon features like high-density peaks [e.g. @VB; @RD06] or by imposing sets of observational constraints to reproduce the large-scale properties of the local universe [@GH; @Sorce16 and references therein]. Recently, @RPP [hereafter RPP] applied the HR algorithm to modify the initial conditions within the Lagrangian patches that form dark-matter haloes in numerical simulations (protohaloes). The basic idea is to alter the linear density field in a controlled way so as to produce ‘genetically modified’ haloes (or, possibly, even galaxies) with some desired properties (e.g. the final mass or the merging history). Although the concept is intriguing, its practical implementation is problematic due to the complexity of characterising the statistical properties of protohaloes. This was already realised by [@MB] who considered (and then abandoned) the idea of pursuing a similar approach (see their Appendix A) in order to build analytical models aimed at explaining the origin of the seemingly universal halo mass-density profiles. This paper digs deeper into the matter. In Section \[const\], we review the theory of constrained random fields and provide several examples of increasing complexity.
These are intended to guide the less experienced reader through the topic but also set the notation and provide the mathematical background to understand the rest of the paper. Some of the examples we give are unprecedented and form the basis for new applications. In Section \[gmh\], we demonstrate that the original execution of the genetic-modification idea by RPP is approximate because it suffers from the implicit assumption that protohaloes sample a set of random points with a fixed mean overdensity. We show that an exact implementation of genetic modification requires a mathematical understanding of the process of halo formation and in particular of the physics that sets the locations of protohaloes in the linear density field. Using toy models rooted in the idea that protohaloes might be associated with local maxima of the smoothed density field, we explore the degrees of freedom of genetic modification and clarify the meaning of the probability of a constraint. Our results suggest new ways to enforce constraints within protohaloes. In Section \[secmah\], we illustrate how the excursion-set method [e.g. @BCEK; @Zentner] can be used to predict the accretion history and the final mass of the genetically modified haloes. This provides us with a tool to calibrate the constraints that must be set in order to produce a given growth history. We also use this method to estimate the size of the deviations in the assembly history of the haloes from the solution presented in RPP. We find that the quantity which is most affected is the mass-accretion rate at the mass scale of the constraints. Finally, in Section \[am\], we discuss how to set constraints based on the angular momentum of the haloes and, in Section \[con\], we conclude. Theory {#const} ====== Conditional expectations for normal deviates -------------------------------------------- Let ${\bf X}$ be a multivariate normal vector with expectation $E[{\bf X}]=\mathbf{m}$ and covariance matrix ${\boldsymbol{\mathsf{C}}}$.
Let us partition ${\bf X}$ into two subsets $\{{\bf Y},{\bf Z}\}$ so that $\mathbf{m}=\{\mathbf{m}_Y,\mathbf{m}_Z \}$ and write $${\boldsymbol{\mathsf{C}}}=\left(\begin{array}{cc} {\boldsymbol{\mathsf{C}}}_{YY} & {\boldsymbol{\mathsf{C}}}_{YZ} \\ {\boldsymbol{\mathsf{C}}}_{ZY} & {\boldsymbol{\mathsf{C}}}_{ZZ} \end{array} \right)\;.$$ It is a classic result of probability theory that the conditional distribution of ${\bf Y}$ given ${\bf Z}={\bf a}$ is normal with expectation $$\mathbf{m}^{\rm (c)}_Y=E[{\bf Y}|{\bf Z}={\bf a}]= \mathbf{m}_Y+{\boldsymbol{\mathsf{C}}}_{YZ}\,{\boldsymbol{\mathsf{C}}}_{ZZ}^{-1} ({\bf a}-\mathbf{m}_Z)\;, \label{gcondmean}$$ and covariance matrix $${\boldsymbol{\mathsf{C}}}^{(\rm c)}_{YY}={\boldsymbol{\mathsf{C}}}_{YY}-{\boldsymbol{\mathsf{C}}}_{YZ}\,{\boldsymbol{\mathsf{C}}}_{ZZ}^{-1}\,{\boldsymbol{\mathsf{C}}}_{ZY}\;. \label{gcondvar}$$ Note that the conditional covariance matrix ${\boldsymbol{\mathsf{C}}}^{\rm (c)}_{YY}$ does not depend on the vector ${\bf a}$. This property is key to building constrained realisations of Gaussian random fields (see Section \[constrained\]). In particular, if ${\bf Z}$ is unidimensional, the relations above reduce to: $$\begin{aligned} \mathbf{m}^{\rm (c)}_Y\!\!\!\!&=&\!\!\!\!\mathbf{m}_Y+\frac{{\boldsymbol{\mathsf{C}}}_{YZ}}{C_{ZZ}} (a-m_Z)\;,\label{mean1}\\ {\boldsymbol{\mathsf{C}}}^{\rm (c)}_{YY}\!\!\!\!&=&\!\!\!\!{\boldsymbol{\mathsf{C}}}_{YY}-\frac{{\boldsymbol{\mathsf{C}}}_{YZ}\,{\boldsymbol{\mathsf{C}}}_{ZY}}{C_{ZZ}} \label{variance1}\;.\end{aligned}$$ Constrained Gaussian random fields {#constrained} ---------------------------------- Let us consider a real-valued, stationary, Gaussian random field[^2] $\delta({{\mathbf q}})$ (${{\mathbf q}}\in \mathbb{R}^3$) with expectation $\langle \delta({{\mathbf q}})\rangle=\mu({{\mathbf q}})$. 
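The reduction in Eqs. (\[mean1\]) and (\[variance1\]) can be checked against the familiar bivariate-normal formulas $E[Y|Z=a]=m_Y+\rho\,(\sigma_Y/\sigma_Z)\,(a-m_Z)$ and ${\rm Var}[Y|Z=a]=\sigma_Y^2(1-\rho^2)$. The following sketch does this numerically; all numerical values are arbitrary:

```python
import numpy as np

# Numerical check of Eqs. (mean1)-(variance1) for a bivariate normal
# (Y, Z) with correlation rho; all numbers below are arbitrary choices.
m_Y, m_Z = 1.0, -0.5
s_Y, s_Z, rho = 2.0, 1.5, 0.7
a = 0.8  # conditioning value, Z = a

C = np.array([[s_Y**2,          rho * s_Y * s_Z],
              [rho * s_Y * s_Z, s_Z**2         ]])

# General formulas with a one-dimensional Z (C_ZZ is a scalar):
m_cond = m_Y + C[0, 1] / C[1, 1] * (a - m_Z)
v_cond = C[0, 0] - C[0, 1] * C[1, 0] / C[1, 1]

# Agreement with the classical closed forms:
assert np.isclose(m_cond, m_Y + rho * s_Y / s_Z * (a - m_Z))
assert np.isclose(v_cond, s_Y**2 * (1 - rho**2))
```

Note that, as stated above, the conditional variance `v_cond` is independent of the conditioning value `a`.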
Let $F: \delta \to F[\delta]\in \mathbb{R}$ be a linear functional of the field that can be generally written as $F[\delta]=\int h({{\mathbf q}})\,\delta({{\mathbf q}})\,{\rm d}^3q$ where $h$ denotes a (tempered) distribution on ${{\mathbf q}}$-space. It follows from Eq. (\[mean1\]) that the (location-dependent) mean of the field $\delta$ subject to the constraint $F[\delta]=f$ is $$\begin{aligned} \label{meanHR} {\mu}^{\rm (c)}_f({{\mathbf q}})\!\!\!\!\!&\equiv&\!\!\!\!\! \langle \delta({{\mathbf q}}) | F[\delta]=f \rangle\\&=&\!\!\!\!\!\mu({{\mathbf q}})+ \frac{\langle [\delta({{\mathbf q}})-\mu({{\mathbf q}})] (F[\delta]-\langle F[\delta] \rangle)\rangle}{\langle (F[\delta]-\langle F[\delta]\rangle)^2 \rangle}\,(f-\langle F[\delta] \rangle)\;. \nonumber\end{aligned}$$ Note that the symbol $\langle \dots \rangle$ denotes averages taken over all the possible realisations of the random field $\delta$ while $\langle \dots | F[\delta]=f\rangle$ indicates the expected value over a restricted ensemble: only those realisations in which $F[\delta]=f$ are considered. Eq. (\[meanHR\]) implies that, for each functional $F$, the conditional mean field ${\mu}^{\rm (c)}({{\mathbf q}})$ can be written in terms of the power spectrum and the expectation of the unconstrained random field (see Section \[subsexamples\] for further details). Similarly, from Eq. (\[variance1\]) we derive that the (location-dependent) variance of the constrained field around the mean field is $$\begin{aligned} \label{varHR} \Sigma^{\rm (c)}_f \!\!\!\!\!&\equiv&\!\!\!\!\! 
\langle [\delta({{\mathbf q}})-{\mu}^{\rm(c)}_f({{\mathbf q}})]^2 | F[\delta]=f \rangle \\ &=&\!\!\!\!\!\langle [\delta({{\mathbf q}})-\mu({{\mathbf q}})]^2\rangle -\frac{\langle [\delta({{\mathbf q}})-\mu({{\mathbf q}})]( F[\delta]-\langle F[\delta]\rangle)\rangle^2}{\langle (F[\delta]-\langle F[\delta]\rangle)^2\rangle}\;.\nonumber\end{aligned}$$ Starting from these classical results, HR developed an efficient algorithm to build a numerical realisation ${\delta_{\rm c}}({{\mathbf q}})$ of a Gaussian random field that satisfies the linear constraint $F[\delta]=f_{\rm c}$. The input is an unconstrained realisation ${\delta_{\rm r}}$ of the random field which happens to satisfy $F[{\delta_{\rm r}}]=f_{\rm r}$. This configuration can be interpreted as a specific realisation that satisfies the constraint $F[\delta]=f_{\rm r}$. Therefore, one can write ${\delta_{\rm r}}({{\mathbf q}})={\mu}^{\rm (c)}_{f_{\rm r}}({{\mathbf q}})+\epsilon({{\mathbf q}})$ with $\epsilon({{\mathbf q}})$ the (zero-mean) residual field with respect to the conditional mean field. Since the variance (and thus the whole probability density) of the residuals does not depend on the value of $F[\delta]$, the same $\epsilon({{\mathbf q}})$ can be used to build the constrained realisation by simply adding the appropriate mean field to it: ${\delta_{\rm c}}({{\mathbf q}})={\mu}^{\rm (c)}_{f_{\rm c}}({{\mathbf q}})+\epsilon({{\mathbf q}})={\delta_{\rm r}}({{\mathbf q}})+{\mu}^{\rm (c)}_{f_{\rm c}}({{\mathbf q}})-{\mu}^{\rm (c)}_{f_{\rm r}}({{\mathbf q}})$. Putting everything together, we obtain $${\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})=\langle \delta({{\mathbf q}}) | F[\delta]=f_{\rm c} \rangle-\langle \delta({{\mathbf q}}) | F[\delta]=f_{\rm r} \rangle\;, \label{HRsinglewithmean}$$ or, equivalently, using Eq.
(\[meanHR\]) $${\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})=\frac{\langle [ \delta({{\mathbf q}})-\mu({{\mathbf q}})]( F[\delta]-\langle F[\delta]\rangle) \rangle}{\langle (F[\delta]-\langle F[\delta]\rangle)^2 \rangle}\,(f_{\rm c}-f_{\rm r})\;. \label{HRsingle}$$ In words: a suitably scaled mean-field component (i.e. a deterministic quantity proportional to the cross-correlation function between the functional constraint and the field) is added to ${\delta_{\rm r}}$ in order to construct a specific field realisation ${\delta_{\rm c}}$ that satisfies the constraint $F[\delta]=f_{\rm c}$. Note that the unconstrained realisation ${\delta_{\rm r}}$ is only used to generate the statistical noise around the conditional mean field. Equations (\[gcondmean\]) and (\[gcondvar\]) provide all the necessary information to impose an arbitrary number of (linear) constraints $F_i[\delta]=f_i$ with $i=1,\dots,N_{\rm c} \in \mathbb{N}$. In this case, the constrained mean field is $$\begin{aligned} {\mu}^{\rm (c)}({{\mathbf q}})\!\!\!\!\!&=&\!\!\!\!\!\langle \delta({{\mathbf q}}) | F_i[\delta]\nonumber=f_i \rangle\\&=&\!\!\!\!\! \mu({{\mathbf q}})+ \eta_i({{\mathbf q}})\,A_{ij}^{-1}\,(f_j-\langle F_j[\delta] \rangle)\label{mfmulti}\end{aligned}$$ (sums over repeated indices are implied) where $$\begin{aligned} \eta_i({{\mathbf q}})\!\!\!\!\!&=&\!\!\!\!\!\langle [\delta({{\mathbf q}})-\mu({{\mathbf q}})] (F_i[\delta]-\langle F_i[\delta]\rangle) \rangle\nonumber \\ &=&\!\!\!\!\! 
\langle \delta({{\mathbf q}}) F_i[\delta]\rangle-\mu({{\mathbf q}})\langle F_i[\delta]\rangle \label{mf2multi}\end{aligned}$$ denotes the cross-covariance function between the field and the functional form of the $i$-th constraint, $$\begin{aligned} A_{ij}\!\!\!\!\!&=&\!\!\!\!\!\langle (F_i[\delta]-\langle F_i[\delta]\rangle)\,(F_j[\delta]-\langle F_j[\delta]\rangle)\rangle\nonumber\\ &=&\!\!\!\!\!\langle F_i[\delta]\,F_j[\delta]\rangle-\langle F_i[\delta]\rangle \langle F_j[\delta]\rangle \label{covcon}\end{aligned}$$ is the $ij$ element of the covariance matrix of the constraints ${\boldsymbol{\mathsf{A}}}$ and ${\boldsymbol{\mathsf{A}}}^{-1}$ is its inverse matrix. Therefore, one finally obtains: $$\begin{aligned} {\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})\!\!\!\!\!&=&\!\!\!\!\!\langle \delta({{\mathbf q}}) |F_i[\delta]=f_{{\rm c},i} \rangle- \langle \delta({{\mathbf q}}) |F_i[\delta]=f_{{\rm r},i} \rangle \nonumber \\ &=&\!\!\!\!\!\eta_i({{\mathbf q}})\,A^{-1}_{ij}\,(f_{{\rm c},j}-f_{{\rm r},j})\;, \label{HRmult}\end{aligned}$$ which from now on we will refer to as the ‘HR correction’. Given the linearity of the constraints, it can be easily shown [@Bert87; @VB] that the conditional probability $${\cal P}[\delta | F_i[\delta]=f_i]=\frac{{\cal P}[\delta]}{{\cal P}(F_i[\delta]=f_i)}\;, \label{probconst}$$ where ${\cal P}[\delta]$ indicates the probability of an unconstrained realisation (a multivariate Gaussian in the case of finite sampling, which can be written as a path integral in the continuum limit) and the probability of the constraints is ${\cal P}(F_i[\delta]=f_i)\propto\exp(-\chi^2/2)$ with $\chi^2(f_i)=(f_i-\langle F_i[\delta]\rangle)\, A^{-1}_{ij}\,(f_j-\langle F_j[\delta]\rangle)$.
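The HR correction (\[HRmult\]) is straightforward to verify on a discretised field. In the sketch below, the covariance model, window sizes and target values are arbitrary toy choices, not the cosmological setup discussed later:

```python
import numpy as np

# Discrete sketch of the HR correction, Eq. (HRmult): the field is an
# N-point zero-mean Gaussian vector with a toy exponential covariance,
# and each linear constraint F_i[delta] = h_i . delta is a row of H.
rng = np.random.default_rng(42)
N = 128
x = np.arange(N)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)  # toy covariance

# Unconstrained realisation delta_r with covariance C.
L = np.linalg.cholesky(C + 1e-10 * np.eye(N))
delta_r = L @ rng.standard_normal(N)

# Two constraints: the mean of the field over two nested windows.
H = np.zeros((2, N))
H[0, :16] = 1.0 / 16.0
H[1, :64] = 1.0 / 64.0
f_c = np.array([3.0, 1.0])   # target values (arbitrary)
f_r = H @ delta_r            # values in the random realisation

eta = C @ H.T                # cross-covariances eta_i(q)
A = H @ C @ H.T              # covariance matrix of the constraints
delta_c = delta_r + eta @ np.linalg.solve(A, f_c - f_r)

# Both constraints are now satisfied exactly.
assert np.allclose(H @ delta_c, f_c)
```

Since $\mathsf{H}\mathsf{C}\mathsf{H}^{\rm T}=\mathsf{A}$, the windowed means of `delta_c` reproduce `f_c` identically, whatever the residual field looks like.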
This probability can thus be used to quantify how likely it is that the constraints one is imposing occur.[^3] The chance of randomly picking a realisation with values $F_i[\delta]=f_{{\rm c},i}$ with respect to one with $f_{{\rm r},i}$ is ${\cal P}_{\rm rel}\propto\exp(-\Delta \chi^2/2)$ with $\Delta\chi^2=\chi^2(f_{{\rm c},i})-\chi^2(f_{{\rm r},i})$. Since the probability distribution of the residual field $\epsilon({{\mathbf q}})$ is independent of the constraints and the mean field depends deterministically on them, this quantity essentially quantifies the relative likelihood of ${\delta_{\rm c}}$ with respect to ${\delta_{\rm r}}$. It also follows from Eq. (\[probconst\]) that the conditional mean field is the most likely realisation which is compatible with the constraints [@Bert87]. Examples {#subsexamples} -------- In this Section we apply the theory described above to cosmological perturbations in the ‘Newtonian’ limit. Let $\delta({{\mathbf q}})$ denote the linear mass-density fluctuations in the universe (at some fixed time after matter-radiation equality) with expectation $\langle \delta({{\mathbf q}}) \rangle=0$ and power spectral density $\langle \tilde{\delta}({{\bf k}}) \tilde{\delta}({{\bf k}}')\rangle=(2\pi)^3\, \delta_{\rm D}({{\bf k}}+{{\bf k}}')\,P(k)$ (where $\tilde{\delta}({{\bf k}})=\int \delta({{\mathbf q}})\exp{(i {{\bf k}}\cdot{{\mathbf q}})}\,{{\rm d}}^3q$ is the Fourier transform of the density field, $\delta_{\rm D}({{\bf x}})$ is the Dirac-delta distribution in three dimensions, and the random field is assumed to be stationary, i.e. statistically homogeneous and isotropic). Constraints will be imposed by averaging the field (or the result of linear operators acting on it) over space with a weighting function $W({{\mathbf q}})$ characterized by the Fourier transform $\widetilde{W}({{\bf k}})$.
@CSS have shown that several statistics of protohaloes in $N$-body simulations can be accurately described using the effective window function $$\widetilde{W}(k)=3A\,\frac{\sin(kR)-kR\cos(kR)}{(kR)^3} \,\exp{\left[-\frac{B\,(kR)^2}{50} \right]}\;, \label{efffilter}$$ where $R$ is the characteristic protohalo radius while $A\simeq 1$ and $B\simeq 1$ are fitting parameters that slightly depend upon the redshift of halo identification and the halo mass. To draw plots we will use this filter. Following a standard procedure in the analysis of random fields [@CLH; @Vanmarcke83; @BBKS hereafter BBKS], we introduce the spectral moments $$\sigma_n^2=\int \widetilde{W}^2({{\bf k}})\,k^{2n}P(k)\,\frac{{{\rm d}}^3k}{(2\pi)^3}\;,$$ with $n=0, 1$ and 2. The ratio $R_{0}=\sigma_0/\sigma_1$ gives (neglecting factors of order unity[^4]) the typical separation between neighbouring zero up-crossings of the smoothed density field (more rigorously, the mean number density of the up-crossings scales as $R_0^{-3}$). Similarly, $R_{\rm pk}=\sigma_1/\sigma_2$ characterises the separation between adjacent density maxima. Finally, to quantify the spectral bandwidth, we introduce the dimensionless parameter $\gamma=R_{\rm pk}/R_0=\sigma_1^2/(\sigma_0\,\sigma_2)$. This quantity provides a measure of ‘spectral narrowness’ (i.e. how concentrated the power is around the dominant wavenumbers) and ranges between 0 and 1: it is 1 for a single frequency spectrum (the number of maxima and zero up-crossings coincide in a plane wave) and 0 for white noise. Note that $\gamma$ is the Pearson correlation coefficient between $\delta$ and $\nabla^2\delta$, i.e. $\gamma=\langle \delta({{\mathbf q}}) \nabla^2\delta({{\mathbf q}})\rangle/\{\langle [\delta({{\mathbf q}})]^2\rangle\,\langle [\nabla^2\delta({{\mathbf q}})]^2\rangle\}^{1/2}$. 
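For illustration, the spectral moments and the bandwidth parameter $\gamma$ can be evaluated numerically. The sketch below uses the window of Eq. (\[efffilter\]) with $A=B=1$ and $R=1$ together with an assumed power-law spectrum $P(k)\propto k^{-2}$; this is a toy choice for demonstration only, not the $\Lambda$CDM spectrum used for the figures:

```python
import numpy as np

# Spectral moments sigma_n and bandwidth gamma for the effective window
# of Eq. (efffilter) with A = B = 1 and R = 1, using an assumed toy
# power-law spectrum P(k) = k^{-2} (not the Lambda-CDM spectrum).
def W_tilde(k, R=1.0, A=1.0, B=1.0):
    x = k * R
    return 3 * A * (np.sin(x) - x * np.cos(x)) / x**3 * np.exp(-B * x**2 / 50)

def trapezoid(f, k):
    # simple trapezoidal rule on a (possibly non-uniform) grid
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k))

k = np.linspace(1e-3, 60.0, 120001)
P = k**-2.0

# sigma_n^2 = (2 pi^2)^{-1} int W^2(k) k^{2n+2} P(k) dk
# (the angular integration of d^3k/(2 pi)^3 gives the 1/(2 pi^2) factor)
s0, s1, s2 = [np.sqrt(trapezoid(W_tilde(k)**2 * k**(2 * n + 2) * P, k)
                      / (2 * np.pi**2)) for n in range(3)]

gamma = s1**2 / (s0 * s2)   # spectral bandwidth parameter
assert 0.0 < gamma < 1.0    # Cauchy-Schwarz bound quoted in the text
```

The resulting $\gamma$ necessarily lies strictly between 0 and 1; with the $\Lambda$CDM spectrum it would fall in the 0.45–0.65 range quoted below, whereas the toy spectrum gives a different value.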
For adiabatic perturbations in the $\Lambda$CDM model, $\gamma$ monotonically grows from 0.45 to 0.65 when the smoothing volume increases from the protohaloes of dwarf galaxies to those of galaxy clusters. ### One density constraint {#1dens} As a first example, we use the HR method to impose a constraint on the value of the (volume-averaged) mass density at a particular location. To simplify notation, we choose a coordinate system with origin at this point and consider the linear functional $$F[\delta]=\int W({{\mathbf q}})\, \delta({{{\mathbf q}}})\, {{\rm d}}^3q\equiv{\bar{\delta}}\;. \label{eq:defconst}$$ Note that ${\bar{\delta}}$ is a stochastic variable whose value changes in each realisation of $\delta({{\mathbf q}})$. We want to generate a specific realisation ${\delta_{\rm c}}({{\mathbf q}})$ in which $\bar{\delta}$ assumes the particular value $\bar{\delta}_{\rm c}$. Our input will be a random realisation ${\delta_{\rm r}}({{\mathbf q}})$ which happens to satisfy $\bar{\delta}=\bar{\delta}_{\rm r}$. In order to apply Eqs. (\[HRsinglewithmean\]) and (\[HRsingle\]) to this case, we need to evaluate some statistical properties of the variable $\bar{\delta}$. Averaging over the ensemble of all possible realisations, we obtain $\langle {\bar{\delta}}\rangle=0$ and $\langle {\bar{\delta}}^2 \rangle=\sigma_0^2$. At the same time, $$\langle \delta({{\mathbf q}})\,{\bar{\delta}}\rangle= \int W({{\bf p}})\, \xi(|{{\mathbf q}}-{{\bf p}}|)\, {{\rm d}}^3p\equiv \bar{\xi}({{\mathbf q}})\;,$$ with $\xi(q)=\langle \delta({{\bf x}}+{{\mathbf q}})\,\delta({{\bf x}})\rangle$ the autocovariance function of the field $\delta$. In terms of the power spectrum of $\delta$: $$\bar{\xi}({{\mathbf q}})=\int \widetilde{W}({{\bf k}})\,P(k)\,e^{-i{{\bf k}}\cdot{{\mathbf q}}} \,\frac{{{\rm d}}^3k}{(2\pi)^3}\;.
\label{xibarpk}$$ Note that, in general, the function $\bar{\xi}({{\mathbf q}})$ is not spherically symmetric around the origin: this happens if and only if $W({{\mathbf q}})$ has the same symmetry. We can now use Eq. (\[meanHR\]) to derive the conditional mean field and obtain that $\langle \delta({{\mathbf q}})|\bar{\delta}\rangle=\bar{\delta}\,\bar{\xi}({{\mathbf q}})/\sigma_0^2$. Since $\delta$ is statistically homogeneous, this quantity also coincides with the average density profile around a random point with mean overdensity $\bar{\delta}$, i.e. $\langle \delta({{\bf x}}+{{\mathbf q}})|\bar{\delta}({{\bf x}})\rangle$ [as originally derived in @Dekel81]. Given all this, when the single constraint ${\bar{\delta}}={\bar{\delta}}_{\rm c}$ is imposed at the origin of the coordinate system, Eq. (\[HRsingle\]) reduces to $$\label{sol1dens} {\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})=\Delta \bar{\delta} \,\frac{\bar{\xi}({{\mathbf q}})}{\sigma_0^2}\;, $$ where $\Delta \bar{\delta}={\bar{\delta}}_{\rm c}-{\bar{\delta}}_{\rm r}$ quantifies how much the constraint changes the mean density within the smoothing volume. The relative probability of ${\delta_{\rm c}}({{\mathbf q}})$ with respect to ${\delta_{\rm r}}({{\mathbf q}})$ corresponds to $\Delta \chi^2=({\bar{\delta}}_{\rm c}^2-{\bar{\delta}}_{\rm r}^2)/\sigma_0^2$ (note that changes need not be small to get a likely configuration, e.g. changing the sign of the mean density within the constrained region gives $\Delta\chi^2=0$). The HR correction in Eq. (\[sol1dens\]) modifies the unconstrained field in a very specific way. In Figure \[barxi\] we plot the functions $ \bar{\xi}(q)$ and $\bar{\xi}(q)/\sigma_0^2$ using the Planck-2013 cosmology for a $\Lambda$CDM model and two smoothing volumes with different characteristic linear sizes $R$ (we use the window function in Eq. (\[efffilter\]) which is spherically symmetric). The function $\bar{\xi}(q)$ shows a local maximum at ${\bf q}=0$.
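The single-constraint correction (\[sol1dens\]) can be demonstrated on a discretised field; in the sketch below the covariance model, window and target overdensity are toy choices for illustration only:

```python
import numpy as np

# Discrete illustration of Eq. (sol1dens): a single constraint on the
# windowed mean bar-delta shifts the field by Delta * xi_bar(q)/sigma_0^2.
# Covariance, window and target value are toy choices.
rng = np.random.default_rng(7)
N = 256
x = np.arange(N)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 8.0)   # toy covariance

w = np.zeros(N)
w[:32] = 1.0 / 32.0          # window W: mean over the first 32 points
xi_bar = C @ w               # xi_bar(q) = <delta(q) bar-delta>
sigma0_sq = w @ C @ w        # sigma_0^2 = <bar-delta^2>

L = np.linalg.cholesky(C + 1e-10 * np.eye(N))
delta_r = L @ rng.standard_normal(N)
bd_r = w @ delta_r           # bar-delta in the unconstrained realisation
bd_c = 2.5                   # target mean overdensity (arbitrary)

delta_c = delta_r + (bd_c - bd_r) * xi_bar / sigma0_sq

assert np.isclose(w @ delta_c, bd_c)   # the constraint holds exactly
assert xi_bar[100] > 0.0   # the correction extends beyond the window
```

The second assertion illustrates the long-range character of the correction noted below: $\bar{\xi}(q)$ is nonzero well outside the smoothing window, so a localised constraint modifies the field everywhere.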
Well within the smoothing volume, $$\bar{\xi}({{\mathbf q}})\simeq \bar{\xi}({\bf 0})+\frac{1}{2}{{\mathbf q}}^{\rm T} \cdot \left[ \nabla_{{{\bf x}}}\nabla_{{{\bf x}}} \bar{\xi}({{\bf x}})\right]_{{{\bf x}}=\mathbf{0}}\cdot {{\mathbf q}}+\dots $$ which, in the spherically symmetric case (when the traceless part of the Hessian does not contribute by symmetry), reduces to $$\bar{\xi}(q)\simeq \bar{\xi}(0)+\frac{1}{6}\nabla^2\bar{\xi}(0)\,q^2+\dots$$ where[^5] $\bar{\xi}(0)=(2\pi^2)^{-1}\int \widetilde{W}(k)\,k^2\,P(k)\,{{\rm d}}k>0$ and $\nabla^2 \bar{\xi}(0)=\overline{\nabla^2 \xi}(0)=-(2\pi^2)^{-1}\int \widetilde{W}(k)\,k^4\,P(k)\,{{\rm d}}k<0$. For $q\gg R$, instead, $\bar{\xi}(q)$ scales proportionally to the autocovariance function of $\delta$. Note that imposing a localised constraint on the size of the density fluctuations requires long-range corrections due to the slowly decreasing spatial autocorrelation of the random field $\delta$. If the density field has substantial power on scales smaller than $R$, then the HR correction is always subdominant with respect to the unconstrained field (this might not be noticeable when setting the initial conditions for $N$-body simulations due to the artificial cutoff of the power around the Nyquist frequency). Also note that the mean density of a constrained realisation within a finite box does not vanish. ![The curves represent the cross-covariance $\bar{\xi}(q)=\langle \delta({{\mathbf q}})\,\bar{\delta}\rangle$ between the linear overdensity field (extrapolated to the present time), $\delta({{\mathbf q}})$, and the mean density contrast, $\bar{\delta}$, measured within a spherically symmetric region of radius $R$ centred on the origin of the coordinate system. The feature on the right-hand side is the baryon acoustic peak. To compute the cross-covariance we used Eqs. (\[efffilter\]) and (\[xibarpk\]). 
The small circles along the vertical axis indicate the corresponding values of the variance $\sigma_0^2=\langle \bar{\delta}^2\rangle$. The inset shows the ratio $\bar{\xi}(q)/\sigma_0^2$. This function represents the mean density profile around a random point with overdensity $\bar{\delta}=1$, i.e. $\langle \delta({{\mathbf q}})| \bar{\delta}=1\rangle$, and regulates the HR correction for a single density constraint given in Eq. (\[sol1dens\]).[]{data-label="barxi"}](f1a.eps){width="\columnwidth"} ### Two density constraints {#2dens} In some applications it is useful to set multiple constraints. As an example we impose two simultaneous conditions on the values of the volume-averaged mass density (defined using different smoothing volumes and denoted by the subscripts A and B) at the same spatial location (here identified as the origin of the coordinate system). In this case, the diagonal elements of the covariance matrix of the constraints, ${\boldsymbol{\mathsf{A}}}$, are $\sigma^2_{\rm A}=\langle \bar{\delta}_{\rm A}^2\rangle$ and $\sigma^2_{\rm B}=\langle \bar{\delta}_{\rm B}^2\rangle$ while the off-diagonal element is $\zeta=\langle \bar{\delta}_{\rm A} \bar{\delta}_{\rm B}\rangle=(2\pi)^{-3}\int \widetilde{W}_{\rm A}({{\bf k}})\, \widetilde{W}_{\rm B}({{\bf k}})\,P(k)\,{{\rm d}}^3k$. The appropriate HR correction straightforwardly follows from Eq. 
(\[HRmult\]), $${\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})=\alpha_{\rm A}\,\bar{\xi}_{\rm A}({{\mathbf q}})+\alpha_{\rm B}\,\bar{\xi}_{\rm B}({{\mathbf q}})\;,$$ with $$\begin{aligned} \!\!\!\!\!\!\alpha_{\rm A}\!\!\!\!\!&=&\!\!\!\!\!(\sigma_{\rm A}^2\sigma_{\rm B}^2-\zeta^2)^{-1}\left(\sigma_{\rm B}^2\,\Delta\bar{\delta}_{\rm A}- \zeta\,\Delta\bar{\delta}_{\rm B}\right) \\ \!\!\!\!\!\!\alpha_{\rm B}\!\!\!\!\!&=&\!\!\!\!\!(\sigma_{\rm A}^2\sigma_{\rm B}^2-\zeta^2)^{-1}\left(\sigma_{\rm A}^2\,\Delta\bar{\delta}_{\rm B}- \zeta\,\Delta\bar{\delta}_{\rm A}\right)\;.\end{aligned}$$ The tangled structure of the solution above reflects the fact that $\bar{\delta}_{\rm A}$ and $\bar{\delta}_{\rm B}$ are correlated Gaussian variables. The relative likelihood of ${\delta_{\rm c}}$ vs. ${\delta_{\rm r}}$ is quantified by $\Delta\chi^2=\chi^2_{\rm c}-\chi^2_{\rm r}$ with $\chi^2=[\sigma_{\rm B}^2\bar{\delta}_{\rm A}^2+\sigma_{\rm A}^2\bar{\delta}_{\rm B}^2-2\zeta\bar{\delta}_{\rm A}\bar{\delta}_{\rm B}]/(\sigma_{\rm A}^2\sigma_{\rm B}^2-\zeta^2)$. ### Density and density gradient constraints {#extre} Let us now enforce simultaneous constraints[^6] on the variables $\bar{\delta}$ and $\bar{{{\mathbf s}}}=\overline{\nabla \delta}$ at ${{\mathbf q}}=0$ (from now on, to simplify notation, we use the same window function for all constraints but it is trivial to generalise our formulae by considering the appropriate combinations of smoothing radii to evaluate the spectral moments and $\bar{\xi}$). In a Gaussian random field, $\langle \delta\,\nabla \delta\rangle=0$ (in general, odd derivatives are uncorrelated with even derivatives) and $\langle \bar{s}_i \,\bar{s}_j \rangle=\sigma_1^2\,\delta_{ij}/3$ (where $\delta_{ij}$ is the Kronecker symbol). The matrix ${\boldsymbol{\mathsf{A}}}$ is therefore diagonal and the cross-covariance $\langle \delta({{\mathbf q}})\,\overline{\nabla\delta}\rangle=\nabla\bar{\xi}({{\mathbf q}})$. Eq. 
(\[HRmult\]) then gives $${\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})=\Delta{\bar{\delta}}\,\frac{\bar{\xi}({{\mathbf q}})}{{\sigma_{0}^2}}+ \Delta\bar{{{\mathbf s}}}\cdot\frac{\nabla\bar{\xi}({{\mathbf q}})}{{\sigma_{1}^2}}\;, \label{HRdandg}$$ with $\Delta\chi^2=({\bar{\delta}}_{\rm c}^2-{\bar{\delta}}_{\rm r}^2)/\sigma_0^2+(\bar{s}^2_{\rm c}-\bar{s}^2_{\rm r})/\sigma_1^2$. A couple of things are worth noting in the HR correction. First, setting constraints on $\overline{\nabla \delta}$ results in the appearance of a new term proportional to $\nabla \bar{\xi}$. Second, contrary to $\Delta \bar{\delta}_{\rm A}$ and $\Delta \bar{\delta}_{\rm B}$ in §\[2dens\], $\Delta \bar{\delta}$ and $\Delta \bar{{{\mathbf s}}}$ do not mix due to the fact that $\bar{\delta}$ and $\overline{\nabla \delta}$ are independent Gaussian variables. ### Density and tidal-field constraints {#dt} Tides play a major role in gravitational collapse and it is certainly interesting to be able to control them in the initial conditions of numerical simulations. We thus impose constraints on the elements of the linear deformation tensor ${\boldsymbol{\mathsf{D}}}=\nabla\nabla\Phi$ with $\Phi=\nabla^{-2}\delta$ the (suitably rescaled) peculiar gravitational potential. Note that the trace of ${\boldsymbol{\mathsf{D}}}$ coincides with $\delta$ while the linear tidal tensor ${\boldsymbol{\mathsf{T}}}={\boldsymbol{\mathsf{D}}}-(\delta/3){\boldsymbol{\mathsf{I}}}$ (where ${\boldsymbol{\mathsf{I}}}$ denotes the identity matrix with elements $\delta_{ij}$) is the traceless part of the deformation tensor. Considering that $\langle \delta({{\bf x}}+{{\mathbf q}}) \,\overline{D}_{ij}({{\bf x}})\rangle=\partial_i\partial_j \nabla^{-2}\bar{\xi}({{\mathbf q}})$ and $\langle \overline{D}_{ij}\, \overline{D}_{\ell m}\rangle= \sigma_0^2 \,(\delta_{ij}\delta_{\ell m}+\delta_{i\ell}\delta_{jm}+\delta_{im}\delta_{\ell j})/15$, Eq. 
(\[HRmult\]) gives: $$\begin{aligned} {\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})=\Delta\bar{\delta}\,\frac{\bar{\xi}({{\mathbf q}})}{\sigma_0^2}+\frac{15}{2}\left(\Delta \overline{T}_{ij}\partial_i\partial_j\right) \frac{\nabla^{-2}\bar{\xi}({{\mathbf q}})}{\sigma_0^2}\;. $$ In this case, $\Delta \chi^2=\chi^2_{\rm c}-\chi^2_{\rm r}$, with $$\begin{aligned} \chi^2\!\!\!\!\!&=&\!\!\!\!\!\frac{1}{\sigma_0^2}\left[\bar{\delta}^2+6\left(\overline{T}_{11}^2+\overline{T}_{22}^2+\overline{T}_{33}^2 \right)\right.\\ &-&\!\!\!\!\!3\left( \overline{T}_{11} \overline{T}_{22}+\overline{T}_{11} \overline{T}_{33} +\overline{T}_{22} \overline{T}_{33}\right) +15\left.\left( \overline{T}_{12}^2+\overline{T}_{13}^2+\overline{T}_{23}^2\right)\right]\;,\nonumber\end{aligned}$$ where $\overline{T}_{11}+\overline{T}_{22}+\overline{T}_{33}=0$. ### Adding curvature constraints {#curv} Finally, we generalise all our previous results by imposing extra constraints on the six independent elements of the Hessian matrix ${\boldsymbol{\mathsf{H}}}=\nabla\nabla \delta$ in addition to controlling $\bar{{{\mathbf s}}}$ and $\overline{{\boldsymbol{\mathsf{D}}}}$. Since ${\boldsymbol{\mathsf{H}}}$ is made of second-order derivatives of $\delta$, it correlates with the density and the tidal fields: $\langle D_{ij}({{\bf x}}+{{\mathbf q}})\,\overline{H}_{\ell m}({{\bf x}})\rangle=\partial_i\partial_j \partial_\ell \partial_m \nabla^{-2}\bar{\xi}({{\mathbf q}})$. At the same time, the covariance matrix of the constraints is composed of simple blocks and its inverse can be written in a compact analytic form (see Appendix \[inversion\]). 
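As a numerical sanity check on these block structures, the quadratic form for $\chi^2$ quoted in §\[dt\] can be verified by brute force: build the $6\times 6$ covariance matrix of the independent elements of $\overline{{\boldsymbol{\mathsf{D}}}}$ and compare the resulting Mahalanobis norm of a random symmetric tensor with the closed-form expression. The sketch below uses plain numpy and an arbitrary illustrative value of $\sigma_0^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma0_sq = 1.7  # illustrative value of the variance <delta_bar^2>

# Covariance of the six independent elements of the volume-averaged
# deformation tensor, <D_ij D_lm> = sigma0^2 (d_ij d_lm + d_il d_jm
# + d_im d_lj)/15, ordered as (11, 22, 33, 12, 13, 23).
idx = [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]
d = np.eye(3)
A = np.array([[sigma0_sq * (d[i, j] * d[l, m] + d[i, l] * d[j, m]
                            + d[i, m] * d[l, j]) / 15
               for (l, m) in idx] for (i, j) in idx])

# Random symmetric deformation tensor, split into trace and tidal parts.
D = rng.standard_normal((3, 3))
D = 0.5 * (D + D.T)
delta = np.trace(D)
T = D - (delta / 3.0) * np.eye(3)

# chi^2 from the inverse-covariance quadratic form ...
f = np.array([D[i, j] for (i, j) in idx])
chi2_direct = f @ np.linalg.solve(A, f)

# ... and from the closed-form expression quoted in the text.
chi2_formula = (delta**2
                + 6 * (T[0, 0]**2 + T[1, 1]**2 + T[2, 2]**2)
                - 3 * (T[0, 0] * T[1, 1] + T[0, 0] * T[2, 2]
                       + T[1, 1] * T[2, 2])
                + 15 * (T[0, 1]**2 + T[0, 2]**2 + T[1, 2]**2)) / sigma0_sq

print(np.isclose(chi2_direct, chi2_formula))  # -> True
```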
In fact, the only additional non-vanishing contributions to ${\boldsymbol{\mathsf{A}}}$ with respect to those discussed in §\[extre\] and §\[dt\] are $\langle \overline{D}_{ij} \,\overline{H}_{\ell m} \rangle=\sigma_1^2 \,(\delta_{ij}\delta_{\ell m}+\delta_{i\ell}\delta_{jm}+\delta_{im}\delta_{\ell j})/15$ and $\langle \overline{H}_{ij}\, \overline{H}_{\ell m}\rangle= \sigma_2^2 \,(\delta_{ij}\delta_{\ell m}+\delta_{i\ell}\delta_{jm}+\delta_{im}\delta_{\ell j})/15$. After performing the matrix inversion, we can easily derive the conditional mean field $\langle \delta({{\mathbf q}}) | \overline{{\boldsymbol{\mathsf{D}}}}, \bar{{{\mathbf s}}}, \overline{{\boldsymbol{\mathsf{H}}}}\rangle$ using Eqs. (\[mfmulti\]) and (\[mf2multi\]). It is convenient to express the final results in terms of the Laplacian $\nabla^2 \delta=H_{11}+H_{22}+H_{33}\equiv \kappa$ (which gives the sum of the principal curvatures or, equivalently, 3 times the mean principal curvature) and of the tensor ${\boldsymbol{\mathsf{C}}}={\boldsymbol{\mathsf{H}}}-(\kappa/3){\boldsymbol{\mathsf{I}}}$ (the trace-free part of the Hessian matrix) which describes the orientation and the relative length of the principal axes of curvature. We thus obtain: $$\begin{aligned} \label{quasimain} \langle \delta({{\mathbf q}}) | \overline{{\boldsymbol{\mathsf{D}}}}, \bar{{{\mathbf s}}}, \overline{{\boldsymbol{\mathsf{H}}}}\rangle\!\!\!\!\! &=&\!\!\!\!\! \bigg\{\frac{1}{\sigma_0^2(1-\gamma^2)}\,\bigg[\bar{\delta}\,\left(1+R_{\rm pk}^2\,\nabla^2\right)+\bar{\kappa}\, R^2_{\rm pk}\,\left( 1+R_0^2\,\nabla^2\right) \nonumber \\ &+&\!\!\!\!\! \overline{T}_{ij}\,\frac{15}{2}\left(\partial_i\partial_j\nabla^{-2}+R_{\rm pk}^2\, \partial_i\partial_j\right)\nonumber\\ &+&\!\!\!\!\! 
\overline{C}_{ij}\,R_{\rm pk}^2\,\frac{15}{2}\left(1+R_0^2\,\partial_i\partial_j \right)\bigg] +\frac{1}{\sigma_1^2}\bar{s}_i\partial_i\bigg\}\,\bar{\xi}({{\mathbf q}})\;,\end{aligned}$$ where implicit summations run over all nine elements of the tensors (and not over six as in Appendix \[inversion\]). The constrained density field is derived from Eq. (\[HRmult\]), which, in this instance, gives $${\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})=\langle \delta({{\mathbf q}}) | \overline{{\boldsymbol{\mathsf{D}}}}_{\rm c}, \bar{{{\mathbf s}}}_{\rm c}, \overline{{\boldsymbol{\mathsf{H}}}}_{\rm c}\rangle- \langle \delta({{\mathbf q}}) | \overline{{\boldsymbol{\mathsf{D}}}}_{\rm r}, \bar{{{\mathbf s}}}_{\rm r}, \overline{{\boldsymbol{\mathsf{H}}}}_{\rm r}\rangle\;. \label{main}$$ Note that the rhs of this equation takes exactly the same form as Eq. (\[quasimain\]) provided that the field variables subject to constraints are replaced with their variations (e.g. $\bar{\delta}\to \Delta \bar{\delta}$, $\bar{\kappa}\to \Delta \bar{\kappa}$, etc.). Once again the relative likelihood of ${\delta_{\rm c}}$ vs. ${\delta_{\rm r}}$ is quantified by $\Delta\chi^2=\chi^2_{\rm c}-\chi^2_{\rm r}$ where, in this case, $$\label{chimain} \chi^2= \frac{\psi(\delta,{{\boldsymbol{\mathsf{T}}}},\delta,{{\boldsymbol{\mathsf{T}}}})}{\sigma_0^2(1-\gamma^2)}+\frac{\psi(\kappa,{{\boldsymbol{\mathsf{C}}}},\kappa,{{\boldsymbol{\mathsf{C}}}})}{\sigma_2^2(1-\gamma^2)} -\frac{2\gamma}{1-\gamma^2}\,\frac{\psi(\delta,{{\boldsymbol{\mathsf{T}}}},\kappa,{{\boldsymbol{\mathsf{C}}}})}{\sigma_0\sigma_2} +\frac{\bar{s}^2}{\sigma_1^2}\;,$$ with, for instance, $$\begin{aligned} \psi(\delta,{{\boldsymbol{\mathsf{T}}}},\kappa,{{\boldsymbol{\mathsf{C}}}})\!\!\!\!\!&=&\!\!\!\!\! 
\bar{\delta} \bar{\kappa}+6\left[ \overline{T}_{11}\overline{C}_{11}+ \overline{T}_{22} \overline{C}_{22}+ \overline{T}_{33} \overline{C}_{33} \right] -\frac{3}{2}\left[ \overline{T}_{11} \overline{C}_{22}\right.\nonumber\\ &+&\!\!\!\!\! \left.\overline{T}_{11} \overline{C}_{33}+\overline{T}_{22} \overline{C}_{11} +\overline{T}_{22} \overline{C}_{33}+ \overline{T}_{33} \overline{C}_{11}+ \overline{T}_{33} \overline{C}_{22}\right] \nonumber\\ &+&\!\!\!\!\!15\left[ \overline{T}_{12} \overline{C}_{12}+ \overline{T}_{13} \overline{C}_{13}+ \overline{T}_{23} \overline{C}_{23}\right] \;.\end{aligned}$$ Setting constraints at local density maxima {#hrbbks} ------------------------------------------- Some applications require setting constraints at special locations that form a point process and for which elementary probability theory does not apply (for further details see Appendix \[condpeak\]). A classic example is maxima (peaks) of the smoothed linear density field which are often used as a proxy for the location of protohaloes [e.g. @Doroshkevich70; @K84; @PH BBKS]. A peak is a point in which $\overline{\nabla \delta}$ vanishes and $\overline{{\boldsymbol{\mathsf{H}}}}$ is negative definite. BBKS derived several statistical properties (e.g. the mean density and the large-scale clustering amplitude as a function of the peak characteristics) for local maxima of a random field in three dimensions. These authors also computed the mean and variance of the mass-density profiles around peaks. The key element to perform these calculations is the definition of probability for $\delta$ subject to the constraint that there is a peak at a specific location. 
In general, considering only peaks with overdensity $\bar{\delta}$ and Hessian matrix $\overline{{\boldsymbol{\mathsf{H}}}}$ gives $$\langle \delta({{\mathbf q}}) | F[\delta]=f\rangle_{\rm pk}= \langle \delta({{\mathbf q}}) | F[\delta]=f, \bar{\delta}, \bar{{{\mathbf s}}}=0, \overline{{\boldsymbol{\mathsf{H}}}}\rangle \label{peakcond}$$ where the subscript pk indicates that a local density maximum is present at the origin of the coordinate system (see our Appendix \[condpeak\] for a formal derivation of this equation which is not as intuitive as it might seem). Eq. (\[peakcond\]) shows that conditional probabilities requiring the presence of a peak are equivalent to those obtained imposing a set of linear constraints on $\delta$ and its spatial derivatives. It is exactly this property that makes it possible to use the HR method also for peak conditioning. From Eq. (\[peakcond\]) we can write the mean field around a peak of height $\bar{\delta}$ and Hessian matrix $\overline{{\boldsymbol{\mathsf{H}}}}$ as $\langle \delta({{\mathbf q}}) \rangle_{\rm pk}=\langle \delta({{\mathbf q}})| \bar{\delta}, \bar{{{\mathbf s}}}=0, \overline{{\boldsymbol{\mathsf{H}}}}\rangle$ and the ensemble average on the rhs can be easily evaluated using Eq. (\[quasimain\]). We finally obtain $$\begin{aligned} \label{peakprofile} \langle \delta({{\mathbf q}})\rangle_{\rm pk}\!\!\!\!\!&=&\!\!\!\!\!\bigg\{\frac{1}{\sigma_0^2(1-\gamma^2)}\,\bigg[\bar{\delta}\,\left(1+R_{\rm pk}^2\,\nabla^2\right) +\bar{\kappa}\, R^2_{\rm pk}\,\left( 1+R_0^2\,\nabla^2\right) \nonumber\\ &+&\!\!\!\!\!\overline{C}_{ij}\,R_{\rm pk}^2\,\frac{15}{2}\left(1+R_0^2\,\partial_i\partial_j \right)\bigg]\bigg\}\,\bar{\xi}({{\mathbf q}})\;,\end{aligned}$$ which coincides with Eq. (7.8) in BBKS although it is written using a different notation (note that setting just ${{\mathbf s}}=0$ in our Eq. 
(\[quasimain\]) gives an even more general expression that makes explicit the dependence of the mean density profile of a peak on the local tidal field). It is important to stress that $\langle \delta({{\mathbf q}}) \rangle_{\rm pk}\neq \langle \delta({{\mathbf q}}) | \bar{\delta}\rangle= \bar{\delta}\,\bar{\xi}({{\mathbf q}})/\sigma_0^2$. In words, the conditional mean field[^7] decreases more rapidly around a density peak than around a random point with the same $\bar{\delta}$. The exact shape of the profile depends on the Hessian matrix of the density at the peak. This is a consequence of the fact that ${\boldsymbol{\mathsf{H}}}$ correlates with the density field, as we have discussed in §\[curv\]. The formalism to set up initial conditions for $N$-body simulations in the presence of peak constraints has been developed by @VB. This technique combines the HR method with the BBKS conditional probabilities, i.e. the conditional mean field $\mu^{\rm (c)}({{\mathbf q}})$ in Eq. (\[mfmulti\]) is computed using expectations over the point process formed by the density peaks $\langle \delta({{\mathbf q}})| F_i[\delta]=f_i \rangle_{\rm pk}$. As the random realisation ${\delta_{\rm r}}$ does not have a peak at ${{\mathbf q}}=0$, the final expression for the HR correction is $$\begin{aligned} {\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})\!\!\!\!\!&=&\!\!\!\!\!\langle \delta({{\mathbf q}})| F_i[\delta]=f_i, \bar{\delta}, \bar{{{\mathbf s}}}=0, \overline{{\boldsymbol{\mathsf{H}}}}\rangle\nonumber\\ &-&\!\!\!\!\!\langle \delta({{\mathbf q}})| F_i[\delta]=f_{{\rm r},i}, \bar{\delta}_{\rm r}, \bar{{{\mathbf s}}}_{\rm r}, \overline{{\boldsymbol{\mathsf{H}}}}_{\rm r}\rangle\;.\end{aligned}$$ Eq. 
(\[peakcond\]) shows that imposing the presence of a peak at a particular location requires specifying at least 10 constraints (1 for $\bar{\delta}$, 3 for $\bar{{{\mathbf s}}}$ and 6 for $\overline{{\boldsymbol{\mathsf{H}}}}$) plus choosing a smoothing kernel and fixing its scale length. However, additional requirements can be added. For instance, @VB also considered the linear velocity of the peak (or, equivalently, the gravitational acceleration) and the linear velocity shear (or the traceless tidal tensor). In this case, there are 8 additional constraints to set. As in every other application of the HR method, the constraints determine the conditional mean field and ${\delta_{\rm r}}$ provides the statistical noise around the expectation. By changing ${\delta_{\rm r}}$ for a given set of constraints, it is in principle possible to build an infinite number of realisations including all the large-scale environments that may exist. The method thus provides an unbiased sampling of the initial conditions that are compatible with the peak constraint. Peak constraints are particularly suitable for simulating the formation of structures that originate from rare field configurations. In fact, such initial conditions would hardly ever be encountered in random realisations of $\delta$. Among the applications of the method are high-redshift quasars [e.g. @RD11] and galaxy clusters [e.g. @Dom06] as well as theoretical studies of gravitational collapse [e.g. @vdWB]. Genetically modified haloes {#gmh} =========================== RPP applied the HR method to modify the initial conditions of $N$-body simulations within the Lagrangian patches that lead to the formation of specific haloes (that, in the authors’ jargon, get genetically modified, hereafter GM). The gist of the paper is to produce halo families in which the mass accretion history varies in a controlled and nearly continuous way. 
In practice, the proposed method for genetic modification consists of several steps: i) a reference $N$-body simulation is run starting from random initial conditions (i.e. from an unconstrained realisation of a Gaussian field); ii) a particular dark-matter halo is selected; iii) linear constraints are imposed (using the HR method) within the Lagrangian volume occupied by the particles that form the halo in the reference simulation; iv) a new simulation is run starting from the constrained initial conditions. Genetic modification has a distinctive characteristic when compared with other applications of the HR method. In fact, it does not use statistical sampling: given a set of constraints, there is one and only one realisation satisfying them. In a sense, the goal is to keep the large-scale structure fixed while altering the linear density field around protohaloes and within a few correlation lengths of the variables on which the constraints are imposed. This objective could also be achieved by setting peak constraints as discussed in §\[hrbbks\] and smoothly changing the characteristics of the imposed peak (or enforcing simultaneous peak constraints on different length scales) while keeping ${\delta_{\rm r}}$ fixed. In compact notation, the peak-based analogue of genetic modification would be $$\delta_{\rm pk2}({{\mathbf q}})-\delta_{\rm pk1}({{\mathbf q}})=\langle \delta({{\mathbf q}})\rangle_{\rm pk2}-\langle \delta({{\mathbf q}})\rangle_{\rm pk1}\;, \label{peakgm}$$ where both $\delta_{\rm pk1}$ and $\delta_{\rm pk2}$ are obtained from the same ${\delta_{\rm r}}$. However, a strong point in favour of genetic modification is that it deals directly with protohaloes and does not rely on the assumption that virialised structures form out of density peaks. In this Section, we are going to demonstrate that this theoretical advantage turns out to also be a serious disadvantage in practical applications. 
Since we cannot yet associate protohaloes (and the characteristics of the corresponding haloes) with particular configurations in the underlying density field, genetic-modification schemes currently have to trade exactness for tractability. Related to this, we are going to show that the original implementation of the genetic-modification algorithm by RPP is based on an unstated simplifying assumption and is therefore not exact but approximate. The degree of inaccuracy caused by this issue (in terms of the final halo structure and the mass accretion history) is, however, difficult to gauge because of the highly non-linear dynamics of gravitational collapse. In this Section, we will focus on the conceptual issues, while practicalities will be discussed in Section \[secmah\]. Conditional averages at protohaloes {#caap} ----------------------------------- A key feature of the classic HR method is that the unconstrained field ${\delta_{\rm r}}$ is only used to generate the noise around the conditional mean field. All the localised constraints are imposed at random positions (e.g. at points with fixed coordinates) for different realisations of ${\delta_{\rm r}}$ and ‘know’ nothing about ${\delta_{\rm r}}$. On the other hand, in order to implement their scheme for genetic modification, RPP use information extracted from ${\delta_{\rm r}}$ to select the location at which the constraints are imposed (as well as the shape and size of the smoothing volume used to define the constraints). Genetic modification aims at transforming the Lagrangian regions of haloes. Therefore, not only are the constraints set where ${\delta_{\rm r}}$ displays particular features, but it is also necessary that ${\delta_{\rm c}}$ present all the special features that define a protohalo at the very same locations. From the mathematical point of view, restricting the analysis to protohaloes corresponds to changing the ensemble over which the conditional mean fields in Eqs. 
(\[mfmulti\]) and (\[HRmult\]) should be evaluated. Specifically, expectations should be taken over the point process formed by the protohaloes, $\langle \delta({{\mathbf q}})| F_i[\delta]=f_i \rangle_{\rm h}$, although these are problematic to compute in practice. This subtlety has been disregarded by RPP, who instead derived the conditional mean by averaging over the distribution of the underlying overdensity field, $\langle \delta({{\mathbf q}})| F_i[\delta]=f_i \rangle$, which is easy to work out. Generally, this simplification introduces a bias in the constrained field, as we will show in detail later. In summary, a self-consistent genetic-modification scheme should replace Eq. (\[HRmult\]) with $${\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})=\langle \delta({{\mathbf q}})| F_i[\delta]=f_{{\rm c},i}\rangle_{\rm h_c}-\langle \delta({{\mathbf q}})|F_i[\delta]=f_{{\rm r},i}\rangle_{\rm h_r}\;, \label{gmgm}$$ in which the subscripts ${\rm h_c}$ and ${\rm h_r}$ distinguish the attributes of the different protohaloes. Note that this expression closely parallels Eq. (\[peakgm\]). The main problem for integrating the HR method into the genetic-modification scheme concerns the identification of the protohalo sites and the statistical properties of the linear density field at these special locations. In $N$-body simulations, protohaloes appear to be mostly associated with local maxima of the smoothed linear density field [@LP; @HP14]. This tight correspondence is expected to produce a very specific form of scale-dependent bias between the clustering properties of the protohaloes and the underlying matter distribution [BBKS, @Desjacques08], which is robustly measured in numerical simulations [@ELP; @Baldauf15]. Simulations also show that the shape and orientation of protohaloes strongly align with the local tidal field [@LeeP; @PDH2; @LHP09; @LP; @Despali13; @LBP]. 
All these phenomena establish a link between the collapsing patches and several properties of the linear perturbations. The emerging picture is that the local values of the density, of its first and second spatial derivatives, and of the tidal field form the minimal set of variables that are necessary to characterise protohaloes. This conclusion forms the basis for our discussion of conditional probabilities at protohaloes in the remainder of the paper. At this point, it is useful to recall that setting simultaneous constraints on $\overline{{\boldsymbol{\mathsf{D}}}}, \bar{{{\mathbf s}}}$ and $\overline{{\boldsymbol{\mathsf{H}}}}$ yields the conditional mean field and the $\Delta\chi^2$ function given in Eqs. (\[quasimain\]) and (\[chimain\]). A worked-out example -------------------- In order to clarify the practical impact of the ensemble choice, we consider a simple representative example that has already been discussed by RPP and highlight the reasons for which their method is not exact. Suppose we want to genetically modify a halo by imposing a single density constraint ${\bar{\delta}}={\bar{\delta}}_{\rm c}$. First of all, the Lagrangian patch that forms the selected halo in ${\delta_{\rm r}}$ must be used to define the smoothing volume appearing in Eq. (\[eq:defconst\]). Then, some version of the HR algorithm needs to be implemented. Starting from Eq. (\[meanHR\]), RPP identify the mean-field correction with the expectation of the density profile around random points having ${\bar{\delta}}={\bar{\delta}}_{\rm c}$, i.e. ${\mu}^{\rm (c)}_{\bar{\delta}_{\rm c}}({{\mathbf q}})=\langle \delta({{\mathbf q}}) | \,{\bar{\delta}}={\bar{\delta}}_{\rm c} \rangle$, which leads to Eq. (\[sol1dens\]). This choice neglects the fact that protohaloes form at special locations and treats them like any other point at which ${\bar{\delta}}={\bar{\delta}}_{\rm c}$. 
The ensemble average is blind to the value of either $\overline{\nabla \delta}$ or $\overline{{\boldsymbol{\mathsf{H}}}}$ (or even the tidal field) evaluated at the centre of the selected protohalo. In fact, Eq. (\[sol1dens\]) is obtained considering probability densities that have been marginalised over all the field properties except the overdensity. The resulting mean field would be meaningful if the value of ${\bar{\delta}}$ were the only information that matters in determining a protohalo. However, this is not the case in general: protohalo sites are determined by additional field variables (see §\[caap\] for a plausible list). Note that the HR method is exact. The inconsistency of the genetic-modification algorithm lies in the implicit assumption that protohaloes (where the constraints are set) sample random points with a specific value of ${\bar{\delta}}$. As we mentioned earlier, what one should do is to replace the conditional probabilities $\langle \delta({{\mathbf q}})| \,{\bar{\delta}}={\bar{\delta}}_{\rm c}\rangle$ with $\langle \delta({{\mathbf q}})| \,{\bar{\delta}}={\bar{\delta}}_{\rm c}\rangle_{\rm h}$, where only the realisations that produce a protohalo at ${{\mathbf q}}=0$ are considered in the ensemble average. Although this change provides the correct solution, we cannot evaluate the expectation value because we do not yet know how to precisely characterise the locations of the protohaloes in mathematical terms. This is a formidable complication. ![image](rev2a.eps){width="\columnwidth"} ![image](rev2b.eps){width="\columnwidth"} To better understand the problem, let us consider a couple of simpler cases for which we can write analytical solutions. Let us assume for a moment that local extrema (i.e. maxima, minima and saddle points) of the linear density field form a good proxy for the location of protohaloes. By analogy with Eq. 
(\[peakcond\]), the conditional probability enforcing $\bar{\delta}=\bar{\delta}_{\rm c}$ at an extremum can be written as (see Appendix \[condpeak\]) $$\langle \delta({{\mathbf q}}) | \bar{\delta}=\bar{\delta}_{\rm c}\rangle_{\rm ex}= \langle \delta({{\mathbf q}})| \bar{\delta}=\bar{\delta}_{\rm c}, \bar{\bf s}=0\rangle\;.$$ Taking into account the results presented in §\[extre\], we thus require that $\bar{{{\mathbf s}}}_{{\rm c}}=0$ (i.e. the point at which the constraints are set must be a density extremum in the constrained realisation) and also assume that $\bar{{{\mathbf s}}}_{{\rm r}}=0$ (i.e. the point was already a density extremum in the unconstrained realisation). In this case, from Eq. (\[HRdandg\]) we indeed recover Eq. (\[sol1dens\]), meaning that there is no difference between imposing density constraints at random points and at density extrema with the same density. This happens because density and density gradients are uncorrelated. If haloes formed at density extrema, then the solution for setting constraints on $\bar{\delta}$ presented by RPP would be correct. As a more realistic example, let us now assume that haloes form around linear density maxima of the $\delta$ field smoothed on the halo mass scale [an excellent approximation for massive haloes, see @LP]. In this case, when a density constraint is enforced, it is also necessary to impose that $\overline{\nabla \delta}=0$ (at ${{\mathbf q}}=0$, both in ${\delta_{\rm c}}$ and in ${\delta_{\rm r}}$) and that the Hessian matrix $\overline{{\boldsymbol{\mathsf{H}}}}$ be negative definite. It follows that the conditional mean field coincides with the peak density profile given in Eq. (\[peakprofile\]). 
Thus, even if one decides to keep all the elements of $\overline{{\boldsymbol{\mathsf{H}}}}$ unchanged, imposing a simple density constraint will require the following HR correction: $$\label{eur} {\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})=\frac{\Delta\bar{\delta}}{\sigma_0^2(1-\gamma^2)}\,\left[\bar{\xi}({{\mathbf q}})+ R_{\rm pk}^2\,\nabla^2\bar{\xi}({{\mathbf q}})\right]\;,$$ which deviates from the one used in RPP and, as a matter of fact, will generate a different mass accretion history for the GM haloes. The mismatch derives from the fact that $\bar{\delta}$ and $\overline{{\boldsymbol{\mathsf{H}}}}$ are correlated variables and that, by selecting density peaks, we are implicitly setting a constraint on $\overline{{\boldsymbol{\mathsf{H}}}}$. As we mentioned before, numerical simulations suggest that $\overline{{\boldsymbol{\mathsf{H}}}}$ plays a role in determining the location of protohaloes. If this conjecture is true, then the HR correction for genetic modification will also depart from the RPP solution. This can be easily understood by following a different line of reasoning: if we want to preserve the density gradient, the Hessian matrix, and the tidal field at a given point (not necessarily a local maximum) while changing the overdensity, Eq. (\[main\]) reduces to Eq. (\[eur\]) with $\Delta \chi^2=(\bar{\delta}_{\rm c}^2-\bar{\delta}_{\rm r}^2)/[\sigma_0^2\,(1-\gamma^2)]$. As expected, requiring that density maxima are genetically modified into density maxima with similar characteristics (or, more generally, that the Hessian matrix at the location of the constraints is not changed) provides a different field correction with respect to enforcing a density constraint at a random point as in RPP (see Figure \[fig2\]). The associated $\Delta \chi^2$ also changes (see Figure \[figchi\]). 
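The coefficient structure of Eq. (\[eur\]) follows from elementary conditional-Gaussian algebra for the pair $(\bar{\delta},\bar{\kappa})$, using $\langle \bar{\delta}\bar{\kappa}\rangle=-\sigma_1^2$ and $R_{\rm pk}^2=\sigma_1^2/\sigma_2^2$ (identifications we assume here for consistency with Eq. (\[quasimain\])). The sketch below verifies the prefactors numerically for arbitrary illustrative spectral moments; the functions $\bar{\xi}$ and $\nabla^2\bar{\xi}$ are tracked through their coefficients only, so no field needs to be evaluated:

```python
import numpy as np

# Illustrative spectral moments (any values with gamma < 1 would do).
s0, s1, s2 = 1.0, 0.8, 1.1            # sigma_0, sigma_1, sigma_2
gamma = s1**2 / (s0 * s2)
R_pk_sq = s1**2 / s2**2               # R_pk^2 = sigma_1^2 / sigma_2^2

# Covariance of the constrained pair (delta_bar, kappa_bar), with
# <delta_bar kappa_bar> = -sigma_1^2.
A = np.array([[s0**2, -s1**2],
              [-s1**2, s2**2]])

# The cross-covariance of delta(q) with (delta_bar, kappa_bar) is the
# vector (xi_bar(q), lap xi_bar(q)).  Imposing Delta delta_bar at fixed
# kappa_bar weights this vector by the first column of the inverse
# covariance matrix.
coeff = np.linalg.solve(A, np.array([1.0, 0.0]))

# Eq. (eur): [xi_bar + R_pk^2 lap xi_bar] / (sigma_0^2 (1 - gamma^2))
expected = np.array([1.0, R_pk_sq]) / (s0**2 * (1.0 - gamma**2))
print(np.allclose(coeff, expected))  # -> True
```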
Setting density and curvature constraints {#pks} ----------------------------------------- Further understanding can be gained through a study of the field transformations that change only the spherical parts of the tensors in Eq. (\[main\]), i.e. $\bar{\delta}$ and $\bar{\kappa}$. In this case, the most general HR correction consists of a linear superposition of terms proportional to $\bar{\xi}({{\mathbf q}})$ and to $\nabla^2\bar{\xi}({{\mathbf q}})$. The relative weight of the two contributions depends on the exact form of the constraints. For instance, Eq. (\[main\]) reduces to Eq. (\[sol1dens\]) if $\Delta \bar{\kappa} =-\Delta \bar{\delta}/R_0^2$ while all the other variables are left unchanged. This means that what RPP call a ‘pure-density’ constraint in reality sets correlated constraints on the density and the mean curvature[^8] when one keeps $\bar{{{\mathbf s}}}$, $\overline{T}_{ij}$ and $\overline{C}_{ij}$ fixed instead of marginalising over them (and $\bar{\kappa}$). In particular, if $\Delta\bar{\delta}<0$, the constraint can change the sign of one or more of the principal curvatures and transform a density maximum into a saddle point or a minimum. Moreover, the $\Delta \chi^2$ associated with the correlated constraints in the restricted ensemble is substantially different (see Figure \[figchi\]) from what RPP found for random points, i.e. $\Delta \chi^2_{\rm ran}=(\bar{\delta}^2_{\rm c}-\bar{\delta}^2_{\rm r})/\sigma_0^2$. It is not surprising that the chance of drawing a specific realisation depends on the ensemble over which the probability has been defined: constraints that are likely in one ensemble might be rare in another one. In fact, the ensemble (i.e. what is kept fixed, what is marginalised over and what is allowed to vary) should always be specified when a quantity like $\Delta\chi^2$ is mentioned.
Another instructive example is obtained by requiring that $\Delta \bar{\kappa}=-\Delta \bar{\delta}/R_{\rm pk}^2$, which gives $$\label{nablashape} {\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})=\frac{\Delta\bar{\delta}}{\sigma_0^2(1-\gamma^2)}\, (R_{\rm pk}^2-R_{0}^2)\,\nabla^2\bar{\xi}({{\mathbf q}})\;.$$ Note that imposing this constraint requires a field correction with a very different functional form from the previous ones (see Figure \[fig2\]). Finally, it is interesting to identify correlated constraints for which $\Delta\chi^2=\Delta\chi^2_{\rm ran}$. This is obtained by imposing $\Delta\bar{\kappa}=\Delta\bar{\delta}/R_0^2$, which gives $$\label{mostlik} {\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})=\frac{\Delta\bar{\delta}}{\sigma_0^2(1-\gamma^2)}\, (1+\gamma^2+2R_{\rm pk}^2\,\nabla^2)\,\bar{\xi}({{\mathbf q}})\;.$$ In Figure \[fig2\], we compare the expressions for ${\delta_{\rm c}}-{\delta_{\rm r}}$ given in Eqs. (\[sol1dens\]), (\[eur\]), (\[nablashape\]) and (\[mostlik\]) assuming the spherically symmetric window given in Eq. (\[efffilter\]) with two different smoothing radii, $R$. All curves cross for $q$ slightly smaller than $R$ and their ordering is reversed for smaller and larger scales. Moreover, since $\nabla^2\bar{\xi}$ drops much faster than $\bar{\xi}$ with increasing $q$, the field correction in Eq. (\[nablashape\]) gives appreciable contributions only on scales comparable with $R$ or smaller and on the scale of the baryonic acoustic peak [see also @Desjacques08]. On the other hand, all other expressions for ${\delta_{\rm c}}-{\delta_{\rm r}}$ scale proportionally to $\bar{\xi}$ on large scales but with substantially different normalisations. Eqs. (\[eur\]) and (\[mostlik\]) present a double peak (the first located at $q=0$ and the second for $q$ slightly above $R$) and have a positive slope at $q=R$.
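To make the shapes of these corrections concrete, they can be tabulated numerically. The sketch below adopts a toy Gaussian form for the smoothed correlation function $\bar{\xi}(q)$ and purely illustrative values for $\sigma_0$, $\gamma$, $R_{\rm pk}$ and $R_0$; none of these choices reproduce the spectra behind Figure \[fig2\].

```python
import numpy as np

# Toy smoothed correlation function: xi_bar(q) = sigma0^2 exp(-q^2 / (2 R^2)).
# All parameter values are illustrative stand-ins, not taken from the paper.
sigma0, R = 1.0, 2.0                  # smoothing scale in h^-1 Mpc (hypothetical)
gamma, R_pk, R0 = 0.6, 1.0, 1.5

def xi_bar(q):
    return sigma0**2 * np.exp(-q**2 / (2.0 * R**2))

def lap_xi_bar(q):
    # Radial Laplacian of the Gaussian xi_bar: (q^2/R^4 - 3/R^2) xi_bar(q)
    return (q**2 / R**4 - 3.0 / R**2) * xi_bar(q)

def corr_sol1dens(q, d):   # Eq. (sol1dens): pure density constraint at a random point
    return d * xi_bar(q) / sigma0**2

def corr_eur(q, d):        # Eq. (eur): density constraint with the Hessian kept fixed
    return d * (xi_bar(q) + R_pk**2 * lap_xi_bar(q)) / (sigma0**2 * (1 - gamma**2))

def corr_nablashape(q, d): # Eq. (nablashape): Delta kappa = -Delta delta / R_pk^2
    return d * (R_pk**2 - R0**2) * lap_xi_bar(q) / (sigma0**2 * (1 - gamma**2))

def corr_mostlik(q, d):    # Eq. (mostlik): correlated constraint with chi^2 = chi^2_ran
    return d * ((1 + gamma**2) * xi_bar(q)
                + 2 * R_pk**2 * lap_xi_bar(q)) / (sigma0**2 * (1 - gamma**2))
```

At the zero of $\nabla^2\bar{\xi}$ (at $q=\sqrt{3}R$ for this toy $\bar{\xi}$) the correction of Eq. (\[nablashape\]) vanishes, while Eqs. (\[eur\]) and (\[mostlik\]) reduce to rescalings of Eq. (\[sol1dens\]), which makes the crossings seen in Figure \[fig2\] easy to verify.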
These results show that, even when considering only changes in the spherical parts of the deformation and density-Hessian tensors, there is quite some freedom in the choice of the constraints that fix the mean density within a protohalo in ${\delta_{\rm r}}$. Each transformation generates a different mass-accretion history for the resulting halo and corresponds to a distinct protohalo shape in ${\delta_{\rm c}}$. ![The relative likelihood of the fields ${\delta_{\rm c}}$ and ${\delta_{\rm r}}$ is proportional to $\exp(-\Delta \chi^2/2)$. The quantity $\Delta\chi^2$ is plotted as a function of $\Delta \bar{\delta}$ for the different field transformations that have been presented in Figure \[fig2\] (line styles are the same). Calculations are based on Eq. (\[chimain\]). The shaded region indicates the values that cannot be obtained by imposing constraints that only change $\bar{\delta}$ and $\bar{\kappa}$ at ${{\mathbf q}}=0$. Note that the field transformation given in Eq. (\[sol1dens\]) is associated either with $\Delta\chi^2_{\rm ran}$ (long-dashed line) if interpreted as setting pure density constraints at random points (as in RPP) or with a different $\Delta\chi^2$ function (short-dashed line) if interpreted as setting joint constraints on $\bar{\delta}$ and $\bar{\kappa}$ with $\Delta\bar{\kappa}=-\Delta\bar{\delta}/R_0^2$.[]{data-label="figchi"}](chis.eps){width="\columnwidth"} In Figure \[figchi\] we compare how the $\Delta\chi^2$ function varies with $\Delta \bar{\delta}$ for the different constrained fields considered so far. Since $\Delta\chi^2$ also depends on the values that the functional constraints assume in ${\delta_{\rm r}}({{\mathbf q}})$, we adopt $\bar{\delta}_{\rm r}=\sigma_0$ and $\bar{\kappa}_{\rm r}=-\sigma_2$ as reference values.
The boundary of the shaded region on the bottom indicates the lowest $\Delta\chi^2$ that can be obtained for a given $\Delta\bar{\delta}$ and follows from minimising $\Delta\chi^2$ with respect to $\Delta \bar{\kappa}$ at fixed $\Delta\bar{\delta}$. This corresponds to imposing $\bar{\kappa}_{\rm c}=\bar{\delta}_{\rm c}/R_0^2$. The figure clearly illustrates that the relative likelihood of a constrained realisation does not depend only on the value of the density constraint (as assumed by RPP) but also on how the curvature of the perturbation is changed. Future applications of genetic modification should take this into account. Before proceeding further, it is convenient to recap the main results presented in this Section. First, we have shown that imposing constraints within the Lagrangian volume of haloes in the reference simulation (based on ${\delta_{\rm r}}$) introduces a bias due to the fact that the constraints are implicitly set at special (i.e. non-random) locations. This should be reflected in the conditional mean field of the HR formalism. Therefore, applying only a simple density constraint based on the statistics of random points as in RPP is not conceptually rigorous and provides approximate results. However, the state of the art does not allow us to provide a precise mathematical characterisation of protohaloes and thus an exact algorithm for genetic modification cannot be formulated yet. Using educated guesses based on the association between protohaloes and linear density maxima introduces several degrees of freedom into the problem. In the next section, we will use the excursion-set model to quantify the actual importance of this freedom in practical applications of the genetic-modification method and compare our results with the original implementation by RPP.
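The origin of this boundary can be sketched with a toy calculation: if $\Delta\chi^2$ is viewed as the quadratic form of a bivariate Gaussian in the constrained pair $(\bar{\delta}_{\rm c},\bar{\kappa}_{\rm c})$, minimising over $\bar{\kappa}_{\rm c}$ at fixed $\bar{\delta}_{\rm c}$ returns a linear relation between the two, the analogue of $\bar{\kappa}_{\rm c}=\bar{\delta}_{\rm c}/R_0^2$ above. The covariances below are illustrative stand-ins, not the moments entering Eq. (\[chimain\]).

```python
import numpy as np

# Toy covariance of the pair (bar-delta, bar-kappa): illustrative numbers,
# not the actual moments of the smoothed density field.
s00, s01, s11 = 1.0, 0.4, 0.9          # <d^2>, <d k>, <k^2>
C = np.array([[s00, s01], [s01, s11]])
Cinv = np.linalg.inv(C)

def chi2(d, k):
    # Quadratic form of the bivariate Gaussian constraints
    v = np.array([d, k])
    return float(v @ Cinv @ v)

# Minimise chi^2 over kappa at fixed delta by brute-force scanning
d_c = 1.5
ks = np.linspace(-5.0, 5.0, 200001)
vals = Cinv[0, 0] * d_c**2 + 2.0 * Cinv[0, 1] * d_c * ks + Cinv[1, 1] * ks**2
k_best = ks[np.argmin(vals)]

# The minimiser is the regression of kappa on delta, k = (s01/s00) d, and the
# minimum collapses to the single-constraint value d^2/s00.
```

With the cross-covariance identified with $\sigma_0^2/R_0^2$, the scan reproduces the linear boundary quoted in the text.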
Predicting the mass accretion history {#secmah} ===================================== Changing the mass-accretion history of haloes at will by modifying the linear properties within the corresponding Lagrangian patches would certainly be attractive and useful. However, the non-linear dynamics of halo collapse makes it difficult to predict the final outcome of the simulations given the initial constraints (or, vice versa, to pick the constraints that produce a given set of required properties). RPP suggested that the final mass $M$ of the haloes forming from the constrained realisations can be accurately estimated using the halo mass function $n(M)$. Their key assumption is that the relative probability ${\cal P}_{\rm rel}$ of getting a perturbation with mean density ${\bar{\delta}}$ coincides with the ratio $n(M)/n(M_{\rm r})$ where $M_{\rm r}$ denotes the mass of the halo formed in the unconstrained run. RPP came up with a heuristic argument to test the consistency of this Ansatz when only one density constraint is set (see their Section 6.1). Their reasoning assumes that there exists a well-defined ${\bar{\delta}}$-$M$ relation and develops in terms of probabilistic arguments. It is difficult to understand, however, why the functional form of the halo mass function (which is a weighted average over all possible formation histories, i.e. over $\epsilon({{\mathbf q}})$ and ${\bar{\delta}}$) should be relevant for a problem which involves a single realisation of the residual field. Moreover, the HR method is completely deterministic (no generation of pseudo-random numbers is required to impose the constraints on a pre-existing random field) and this suggests that the relative probability of a constrained realisation should not matter at all in determining $M$. In fact, for a given ‘family’ of GM haloes - i.e.
at fixed ${\delta_{\rm r}}({{\mathbf q}})$ or $\epsilon({{\mathbf q}})$ - there is a deterministic relation between ${\bar{\delta}}$ and the final halo mass $M$ (even RPP approximated this relation with a power law for each GM family). This relation, however, will be different for every realisation of the residual field. Similarly, the mass distribution within each family of GM haloes will depend on $\epsilon({{\mathbf q}})$. The mass function ‘emerges’ only after averaging over the different realisations. Here we use a variant of the excursion-set method in order to predict the mass-accretion history and the final mass of the GM initial conditions. Excursion sets {#exc} -------------- Let us consider a realisation of the linear density field and a specific halo that forms out of these initial conditions. The excursion-set trajectory, $\hat{\delta}(R)$, associated with the halo is obtained by averaging $\delta$ over a volume (with variable characteristic size $R$) surrounding the corresponding protohalo centre, which we identify with the origin of the coordinate system. For instance, using a spherical top-hat filter $W_{\rm TH}(q)=3\,\Theta(R-q)/(4\pi R^3)$ with $\Theta(x)$ the Heaviside step function, one has $$\hat{\delta}(R)= \int W_{\rm TH}(q)\,\delta({{\mathbf q}})\,{{\rm d}}^3q$$ (this is the same as in Eq. (\[eq:defconst\]) but we will use $\bar{\delta}$ to indicate averages over the protohalo volume and $\hat{\delta}$ for averages over the excursion-set filter). Depending on the application, the trajectory can be seen as a function of the smoothing radius $R$, the mass contained within the filter in Lagrangian space $M=4\pi\bar{\rho}R^3/3$ (where $\bar{\rho}$ denotes the average comoving density of the universe) or the variance of the linear overdensity $\sigma_0^2$. It is convenient to sort the pseudo-temporal variable in descending order for $R$ and $M$ and in ascending order for $\sigma_0^2$. In what follows we will use $\log (M/{\rm M}_\odot)$.
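A minimal numerical sketch of this construction: generate a toy periodic Gaussian field, average it within spheres of decreasing radius around the origin, and tabulate $\hat{\delta}$ against the Lagrangian mass. The grid size, box, spectral slope and mean density below are all illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy periodic Gaussian field on a grid with a power-law spectrum P(k) ~ k^-2
n, box = 32, 64.0                           # cells per side, box in h^-1 Mpc
k1d = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                           # placeholder to avoid 0/0 at the DC mode
white = np.fft.fftn(rng.normal(size=(n, n, n)))
delta_k = white * np.sqrt(k2**-1.0)         # sqrt(P(k)) with P(k) = k^-2
delta_k[0, 0, 0] = 0.0                      # enforce a zero-mean field
delta = np.fft.ifftn(delta_k).real

# Periodic distance of every cell centre from the origin (the protohalo centre)
x = np.arange(n) * box / n
x = np.minimum(x, box - x)
qx, qy, qz = np.meshgrid(x, x, x, indexing='ij')
q = np.sqrt(qx**2 + qy**2 + qz**2)

rho_bar = 8.5e10                            # comoving density, h^2 Msun Mpc^-3 (illustrative)
radii = np.linspace(20.0, 4.0, 17)          # descending R: excursion-set time ordering
traj_M = 4 * np.pi / 3 * rho_bar * radii**3 # Lagrangian mass within each filter
traj_d = np.array([delta[q <= R].mean() for R in radii])  # hat-delta(R)
```

The pair `(traj_M, traj_d)` is the trajectory: a single random walk per realisation, read from large to small masses.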
![Top: Excursion-set trajectory centred on the Lagrangian region that forms a galaxy-sized halo in a high-resolution $N$-body simulation (solid). The vertical dashed line indicates the halo mass at redshift $z=0$. Middle: The effective threshold $T$ which perfectly reproduces the mass accretion history of the halo (solid) is contrasted with the fit by @SMT01 [short dashed] and two constant thresholds: $T=1.686$ (dotted) and $T=2.1$ (dot-dashed). Bottom: The mass-accretion history of the halo in the simulation (solid) is compared with the prediction of the excursion-set model using the thresholds shown in the middle panel. Choosing the constant value $T=2.1$ approximates the numerical data to better than 15 per cent.[]{data-label="fig3"}](f3.eps){width="\columnwidth"} The excursion-set trajectory can be used to estimate the mass-accretion history of every dark-matter halo [@BCEK]. The key assumption is that the mass shell with Lagrangian radius $R$ will accrete onto the halo at time $t$ if $\hat{\delta}(R)$ - which scales with the linear growth factor $D_+(t)$ - is equal to a threshold $T$ and $\hat{\delta}(R')<T$ for all $R'>R$. Therefore, at a given epoch, the halo mass can be determined by identifying the first upcrossing of the level $T$ by the excursion-set trajectory. Detailed comparisons against $N$-body simulations have shown that this procedure works reasonably well if the trajectories are computed at protohalo centres, while it fails miserably around random points [@White96; @SMT01]. The threshold value depends on the precise halo definition and several environmental factors that influence the geometry of gravitational collapse (e.g. the tidal field). On average, it is a decreasing function of the halo mass but there is considerable scatter around the mean [@SMT01; @Robertson09; @ELP; @LBP; @BLP].
Moreover, there exists a substantial population of low-mass haloes for which the excursion-set method works only at early times because tidal effects prevent the accretion of the outermost shells in Lagrangian space [@LBP; @BLP]. In the top panel of Figure \[fig3\], we show the excursion-set trajectory (linearly extrapolated to the present time, i.e. setting $D_+=1$) extracted from the initial conditions of a high-resolution $N$-body simulation and centred on the Lagrangian patch that forms a halo of mass $4.3 \times 10^{11}$ $h^{-1}$ M$_\odot$ at redshift $z=0$. The halo has been identified using the AHF algorithm [@AHF] and the reported mass lies within a sphere with mean density $200 \rho_{\rm c}=200\bar{\rho}/\Omega_{\rm m}$ (here the matter density parameter is $\Omega_{\rm m}=0.308$). In the middle panel, we show the threshold value (solid) that would perfectly reproduce the mass-accretion history measured in the simulation. For comparison, we also draw $T=1.686$ (dotted) as obtained from the collapse of a spherical top-hat perturbation in an Einstein-de Sitter universe and the mass-dependent fit derived by @SMT01 [short-dashed line]. Note that the solid line always lies between the other two. The fact that the effective threshold is larger than 1.686 is not surprising because tidal effects are expected to slow down gravitational collapse with respect to the spherical case. On the other hand, the threshold by @SMT01 is statistical in nature as it has been derived to fit the halo mass function and is not expected to accurately describe every single halo. The effective threshold that reproduces the $N$-body data oscillates around $T=2.1$ (dot-dashed) with relatively small deviations (always smaller than 12 per cent). This constant threshold thus provides an excellent approximation for this halo over $0\leq z\leq 1$.
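The first-upcrossing rule described above is straightforward to implement. In the sketch below the trajectory values are made up for illustration; the function scans the linearly grown trajectory $D_+\hat{\delta}$ from large to small masses and returns the first scale at which it reaches the threshold, with $T=2.1$ as adopted in the text.

```python
import numpy as np

def first_upcrossing_mass(M, delta_hat, T, D_plus):
    """Halo mass at the epoch with growth factor D_plus: the largest mass scale
    at which the linearly grown trajectory first reaches the threshold T.
    M and delta_hat must be sorted with M descending (excursion-set ordering)."""
    grown = D_plus * np.asarray(delta_hat)
    above = np.nonzero(grown >= T)[0]
    if above.size == 0:
        return 0.0            # no upcrossing: no halo has formed yet at this epoch
    return M[above[0]]        # first upcrossing scanning from large scales

# Toy trajectory rising toward small masses (illustrative numbers only)
M = np.array([1e13, 5e12, 2e12, 1e12, 5e11, 2e11, 1e11])
d = np.array([0.4,  0.8,  1.2,  1.6,  2.0,  2.6,  3.2])

M_z0 = first_upcrossing_mass(M, d, T=2.1, D_plus=1.0)   # present epoch
M_hi = first_upcrossing_mass(M, d, T=2.1, D_plus=0.5)   # earlier epoch, smaller D+
```

Evaluating the function for a sequence of growth factors yields the predicted mass-accretion history; by construction the mass grows monotonically with $D_+$.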
Note that at $z=1$ the halo undergoes a major merger and the point ${{\mathbf q}}=0$ is contained in the Lagrangian region of the less massive progenitor. For this reason it does not make sense to push the calculation to $z>1$. Finally, in the bottom panel, we contrast the mass-accretion history measured in the simulation (solid) with that predicted by the excursion-set method using the different thresholds introduced above (same line styles as in the middle panel). The constant value $T=2.1$ reproduces the simulation masses to better than 15 per cent. For this reason we use this value in the remainder of the paper. ![Top: Corrections to the excursion-set trajectory associated with setting the constraint $\Delta \bar{\delta}=1$ on the Lagrangian scale $R_{\rm c}=1\, h^{-1}$ Mpc. The line styles match those in Figure \[fig2\] and refer to different constraints on the mean principal curvature at the protohalo centre. Bottom: Mass-accretion histories obtained by applying the corrections shown in the top panel to the trajectory presented in Figure \[fig3\]. The excursion-set method with $T=2.1$ has been used to estimate the growth rate of the haloes stemming from the constrained realisations with $\Delta \bar{\delta}=1$. As a reference, we also show the accretion history of the halo forming in the $N$-body simulation from the unconstrained initial conditions (dots).[]{data-label="fig4"}](f4.eps){width="\columnwidth"} Excursion set and genetically-modified haloes --------------------------------------------- We now explain how the excursion-set method can be employed to predict the growth of GM haloes. Let us first consider the simple density constraint presented in Eq. (\[sol1dens\]). The corresponding correction to the excursion-set trajectory is: $$\begin{aligned} \Delta \hat{\delta}(R)\!\!\!\!\!&=&\!\!\!\!\!
\hat{\delta}_{\rm c}(R)-\hat{\delta}_{\rm r}(R)=\frac{\Delta \bar{\delta}}{\sigma_0^2}\,\int W_{\rm TH}(q)\, \bar{\xi}({{\mathbf q}})\,{{\rm d}}^3{{\mathbf q}}\nonumber\\ &=&\!\!\!\!\!\frac{\Delta \bar{\delta}}{\sigma_0^2}\,\int \widetilde{W}_{\rm TH}(kR)\,\widetilde{W}({{\bf k}})\,P(k)\,\frac{{{\rm d}}^3k}{(2\pi)^3}\\ &=&\!\!\!\!\!\Delta \bar{\delta} \, \frac{\langle \hat{\delta}(R)\, \bar{\delta}\rangle}{\langle \bar{\delta}^2\rangle}\nonumber\end{aligned}$$ (this result follows from Eq. (\[xibarpk\]) and the definition of the Fourier transform). Similarly, for the more complex case given in Eq. (\[eur\]), one gets $$\begin{aligned} \Delta \hat{\delta}(R)\!\!\!\!\!&=&\!\!\!\!\!\frac{\Delta \bar{\delta}}{\sigma_0^2}\,\int \widetilde{W}_{\rm TH}(kR)\,\widetilde{W}({{\bf k}})\,P(k)\,\frac{1-(kR_{\rm pk})^2}{1-\gamma^2}\,\frac{{{\rm d}}^3k}{(2\pi)^3}\nonumber\\ &=&\!\!\!\!\! \Delta \bar{\delta} \, \frac{\sigma_2^2\,\langle \hat{\delta}(R)\, \bar{\delta}\rangle+\sigma_1^2\, \langle \hat{\delta}(R)\, \overline{\nabla^2\delta}\rangle}{\sigma_0^2\sigma_2^2(1-\gamma^2)}\;.\end{aligned}$$ These corrections are completely deterministic and independent of the unconstrained trajectory. Consequently, there is no difficulty in computing $\hat{\delta}_{\rm c}(R)$. As a practical example, let us modify the initial conditions shown in Figure \[fig3\] by requiring a density variation of $\Delta \bar{\delta}=1$ within a Lagrangian region of characteristic size $R_{\rm c}=1\,h^{-1}$ Mpc centred on the protohalo. We use Eqs. (\[sol1dens\]), (\[eur\]), (\[nablashape\]) and (\[mostlik\]) to set different correlated constraints on the mean curvature. The resulting corrections to the trajectories[^9] and the corresponding mass-accretion histories inferred from the excursion-set method are shown in Figure \[fig4\]. As expected, we find that the masses of the GM haloes assemble at different rates depending on the exact form of the HR correction.
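Both corrections reduce to one-dimensional $k$-space integrals once $P(k)$ and $\widetilde{W}({{\bf k}})$ are specified. The sketch below evaluates the first one (the simple density constraint) under two illustrative assumptions: a power-law spectrum and a Gaussian protohalo window standing in for $\widetilde{W}$.

```python
import numpy as np

def W_th(x):
    # Fourier transform of the spherical top-hat filter
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

# Toy ingredients (both are assumptions chosen for simplicity): a power-law
# spectrum and a Gaussian protohalo window of Lagrangian size R_c.
R_c = 1.0
P   = lambda k: k**-2.0
W_c = lambda k: np.exp(-0.5 * (k * R_c)**2)

k  = np.linspace(1e-4, 30.0, 300000)        # truncated integration grid
dk = k[1] - k[0]
integ = lambda f: np.sum(f) * dk            # simple Riemann sum

sigma0_sq = integ(W_c(k)**2 * P(k) * k**2) / (2 * np.pi**2)

def traj_correction(R, d_bar):
    # Delta hat-delta(R) = Delta bar-delta * <hat-delta(R) bar-delta> / sigma0^2
    cross = integ(W_th(k * R) * W_c(k) * P(k) * k**2) / (2 * np.pi**2)
    return d_bar * cross / sigma0_sq
```

The correction is linear in $\Delta\bar{\delta}$ and decays with the smoothing radius $R$, which is why it mainly perturbs the small-mass end of the trajectory.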
Our results clearly support two main conclusions. i) The excursion-set method provides a convenient tool to predict the non-linear growth of the GM haloes. This procedure does not require any external input as the collapse threshold can be calibrated to match the mass-accretion history of the unconstrained realisation. ii) Although conceptually distinct, Eqs. (\[sol1dens\]) and (\[eur\]) generate similar mass accretion histories for galaxy-sized haloes, while larger differences should be expected for cluster-sized haloes (see Figure \[fig2\]). This suggests that, after all, the implementation by RPP might provide results in the right ballpark, at least for certain classes of objects. However, bigger discrepancies are found with Eqs. (\[nablashape\]) and (\[mostlik\]). The variable that appears to be most sensitive to the details of the HR correction is the mass-accretion rate at the mass scale of the constraints (see below for a detailed explanation). The extent to which the excursion-set method provides accurate predictions should be tested against $N$-body simulations, which is beyond the scope of this paper. For constraints that induce relatively small changes in the trajectories, we can write an analytical expression for the mass variation. This is based on the fact that the slope of the trajectory determines how sensitive the final halo mass is to the modifications induced by the constraints. Taylor expanding the unconstrained trajectory around the mass scale of first upcrossing at a reference time $t_0$, we obtain $$D_+(t_0)\,\hat{\delta}_{\rm r}(y)\simeq T+D_+(t_0)\,\hat{\delta}_{\rm r}'(y_{\rm up, r})\,(y-y_{\rm up, r})+\dots$$ where $y=\log (M/{\rm M}_\odot)$ and $\hat{\delta}'={{\rm d}}\hat{\delta}/{{\rm d}}y$ measures the slope (‘velocity’) of the excursion-set trajectory (note that this quantity is always negative at the scale of first upcrossing). Similarly, assuming that the constraints are imposed at $y_{\rm up, r}$ (i.e.
at the halo mass scale at time $t_0$), we get $$\Delta\hat{\delta}(y)\simeq \Delta \bar{\delta}+ \Delta\hat{\delta}' (y_{\rm up, r})\,(y-y_{\rm up, r})+\dots\;.$$ Finally, we can solve for the scale $y_{\rm up, c}$ at which $D_+(t)\,\hat{\delta}_{\rm c}(y_{\rm up, c})=T$ or, equivalently, for $\hat{\delta}_{\rm r}(y_{\rm up, c})+\Delta\hat{\delta}(y_{\rm up, c})=T/D_+(t)$ and find: $$M_{\rm c}(t)=M_{\rm r}(t_0)\,10^{\alpha(t,t_0)} \label{masspred}$$ with $$\alpha(t,t_0)=y_{\rm up, c}-y_{\rm up, r}=\frac{[T/D_+(t)]-[T/D_+(t_0)]-\Delta\bar{\delta}}{\hat{\delta}_{\rm r}'(y_{\rm up, r})+\Delta\hat{\delta}' (y_{\rm up, r})}\;. \label{masspred2}$$ For the halo in Figure \[fig3\] this expression gives masses that are in very good agreement with those obtained using the full excursion-set model. Eqs. (\[masspred\]) and (\[masspred2\]) acquire a particularly clear meaning for trajectories centred at local density maxima. In this case, the slope of the trajectory reflects the mean curvature of the peak [this connection is remarkably transparent when Gaussian smoothing is used to build the trajectories, see also @Dalal08; @MS12]. The top panel of Figure \[fig4\] shows that the sign of $\Delta\bar{\kappa}$ determines the slope of the corrections to the trajectory on the mass scale of the constraints and, consequently, the speed with which the halo mass grows. Therefore, the freedom in setting simultaneous constraints on $\bar{\delta}$ and $\bar{\kappa}$ can be used to regulate both the final mass and the mass-accretion rate of the GM haloes. There are also other consequences of the curvature. RPP have shown that different families of GM haloes occupy different loci in the plane defined by the concentration of the mass-density profiles and the collapse time (see the right panel in their Figure 4). Our discussion above provides new insight into the origin of this phenomenon. 
In fact, @Dalal08 presented evidence from $N$-body simulations that steeper excursion-set trajectories correspond to haloes with higher mass concentration, at least for sufficiently large halo masses. Therefore, the offset in the tracks of the different GM families likely reflects the different slope of their excursion-set trajectories (i.e. the different curvature in the density at the protohalo location). Angular-momentum constraints {#am} ============================ RPP have announced a forthcoming upgrade of their code in which they set constraints on the halo specific angular momentum. In this Section, we extend our analysis to this type of constraint. To leading order in the density and velocity perturbations, the angular momentum gained by a protohalo during its early-collapse phase is [@Doroshkevich70] $${\mathbf L}=-C\int W({{\mathbf q}})\,{{\mathbf q}}\times \nabla\Phi({{\mathbf q}})\,{{\rm d}}^3q \label{L}$$ where ${{\mathbf q}}$ is measured from the centre of the protohalo and $C$ is a time-dependent factor that follows from the fact that both the linear displacement of the mass elements and their linear velocity field are proportional to $-\nabla\Phi$. Both $\mathbf{L}$ and the specific angular momentum $\mathbf{L}/M$ (with $M=\bar{\rho}\,\int W({{\mathbf q}})\,{{\rm d}}^3q$) are thus linear in the density perturbations (as they scale proportionally to the peculiar potential) and suitable for the HR and the genetic-modification methods. However, the angular momentum influences the process of gravitational collapse so that altering ${\mathbf L}$ necessarily changes the shape and size of the collapsing material and thus $W({{\mathbf q}})$ in an unpredictable way. For this reason, even ignoring the higher-order corrections to Eq. (\[L\]), it is not possible to set precise constraints on the angular momentum (specific or not) of a GM halo.
What can be easily constrained, instead, is the linear angular momentum gained by a fixed Lagrangian volume corresponding to the window function $W({{\mathbf q}})$, for instance the protohalo in the unconstrained initial conditions. Under the assumption made by RPP that protohaloes sample random locations with a given overdensity, constraints on the Cartesian components of $\mathbf{L}$ can be easily imposed using Eq. (\[HRmult\]). In this case, there are four scalar constraints $\bar{\delta}=\bar{\delta}_{\rm c}$ and $\mathbf{L}=\mathbf{L}_{\rm c}$ so that the covariance matrix of their functional forms is composed of the blocks $\langle \mathbf{L} \mathbf{L} \rangle$, $\langle\bar{\delta}^2\rangle=\sigma_0^2$ and $\langle \mathbf{L}\,\bar{\delta}\rangle=0$ (because $\mathbf{L}\propto \nabla\Phi$ while $\delta\propto \nabla^2\Phi$ and $\Phi$ is a Gaussian random field). Therefore, constraints on the mean density within the window function are statistically independent of those on $\mathbf{L}$. Finally, since $\delta$ is stationary, we obtain[^10] $$\frac{\langle \mathbf{L}\, \mathbf{L}\rangle}{C^2}= \int W({{\bf x}})\,W({{\bf y}})\, \frac{({{\bf y}}\times{{\bf x}})\,({{\bf y}}\times{{\bf x}})}{|{{\bf x}}-{{\bf y}}|^2}\,\psi(|{{\bf x}}-{{\bf y}}|)\,{{\rm d}}^3x\,{{\rm d}}^3y\;, \label{LL}$$ where $\psi(r)=\partial^2 \xi_{\Phi}/\partial r^2$ and $\xi_{\Phi}=\nabla^{-4}\xi(r)$ denotes the autocovariance function of the potential $\Phi$. This expression completes the calculation of the matrix ${\boldsymbol{\mathsf{A}}}$ in Eq. (\[HRmult\]). 
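Eq. (\[LL\]) is a six-dimensional integral that is most easily evaluated by Monte Carlo sampling. The sketch below does this for a spherical top-hat window of unit radius and a toy $\psi(r)$; the real $\psi$ follows from the power spectrum, so the choice here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sphere(n, R=1.0):
    # Uniform points inside a sphere of radius R (top-hat Lagrangian volume)
    p = rng.normal(size=(n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    return R * p * rng.uniform(0.0, 1.0, size=(n, 1))**(1.0 / 3.0)

def psi(r):
    # Toy stand-in for d^2 xi_Phi / dr^2; illustrative, not derived from P(k)
    return np.exp(-r**2)

n = 40000
x, y = sample_sphere(n), sample_sphere(n)
c = np.cross(y, x)                          # (y cross x) for each sampled pair
r2 = np.sum((x - y)**2, axis=1)
w = psi(np.sqrt(r2)) / r2                   # psi(|x-y|) / |x-y|^2
V = 4 * np.pi / 3                           # volume of the unit top-hat window
# Monte Carlo estimate of <L_i L_j>/C^2: V^2 * E[ w * c_i * c_j ]
LL = V**2 * np.einsum('n,ni,nj->ij', w, c, c) / n
```

By symmetry the estimate is diagonal up to Monte Carlo noise, with three nearly equal eigenvalues for a spherical window; an aspherical protohalo window would break this degeneracy.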
On the other hand, the shape of the mean field in the presence of the constraints is determined by the cross-covariance function between $\delta$ and $\mathbf{L}$ (as before, we denote the location at which the constraints are set with the coordinates ${{\mathbf q}}=0$), $$\langle \delta({{\mathbf q}})\,{\mathbf L}\rangle= -C\int W({{\bf x}}) \,\frac{{{\bf x}}\times{{\mathbf q}}}{|{{\bf x}}-{{\mathbf q}}|}\,\omega_1(|{{\bf x}}-{{\mathbf q}}|)\,{{\rm d}}^3x\;, \label{Ldelta}$$ with $\omega_n(r)=\partial^n [\nabla^{-2}\xi(r)]/\partial r^n$ where $\nabla^{-2}\xi$ is the cross-covariance function between $\delta$ and $\Phi$. Putting everything together, the HR method gives: $$\label{amrandom} {\delta_{\rm c}}({{\mathbf q}})-{\delta_{\rm r}}({{\mathbf q}})=\langle \delta({{\mathbf q}})\,L_i\rangle\,\left(\langle \mathbf{L} \,\mathbf{L} \rangle^{-1}\right)_{ij}\,\Delta L_j +\frac{\bar{\xi}({{\mathbf q}})}{\sigma_0^2}\,\Delta\bar{\delta} \;.$$ This expression can be used to set simultaneous constraints on ${\mathbf L}$ and $\bar{\delta}$ within a fixed Lagrangian volume centred on a random point. The linear angular momentum also correlates with the $n$th-order spatial derivatives of $\delta$: $$\begin{aligned} \frac{\langle L_i\, \partial_j \dots \partial_\ell\delta({{\mathbf q}}) \rangle}{C}\!\!\!\!\!&=&\!\!\!\!\!- \int W({{\bf x}}) \, \frac{({{\bf x}}\times{{\mathbf q}})_i\,({{\bf x}}-{{\mathbf q}})_j\dots({{\bf x}}-{{\mathbf q}})_\ell}{|{{\bf x}}-{{\mathbf q}}|^{n+1}} \nonumber \\ & & \omega_{n+1}(|{{\bf x}}-{{\mathbf q}}|)\,{{\rm d}}^3x\;. \label{Lder}\end{aligned}$$ Note that the cross-covariances in Eqs. (\[Ldelta\]) and (\[Lder\]) vanish for ${{\mathbf q}}=0$, implying that angular-momentum constraints are independent of the (unfiltered) values of the density, the density gradient, and the curvature matrix at the protohalo centre.
This follows from two facts: i) the measurement of the angular momentum with respect to the centre itself, and ii) the statistical isotropy of the density field combined with the cross product. This does not mean, however, that Eq. (\[amrandom\]) can also be used to set linear angular-momentum constraints at special locations (e.g. protohaloes or density maxima). As we have already discussed for the density constraints in Section \[gmh\], extra requirements must be set to make sure that averages are taken at protohaloes and the full covariance matrix of the joint constraints needs to be inverted in this case. For instance, in the peak approximation, $\langle \delta({{\mathbf q}}) | \mathbf{L} \rangle_{\rm pk}=\langle \delta({{\mathbf q}}) | \mathbf{L}, \bar{\delta}, \bar{{{\mathbf s}}}=0,\overline{{\boldsymbol{\mathsf{H}}}}\rangle \neq \langle \delta({{\mathbf q}}) | \mathbf{L}, \bar{\delta} \rangle$. In fact, while the cross-correlation coefficients $\langle \mathbf{L}\,\bar{\delta}\rangle$ and $\langle \mathbf{L}\,\overline{{\boldsymbol{\mathsf{H}}}}\rangle$ vanish because they pair odd and even spatial derivatives of the Gaussian field $\Phi$, the term $\langle \mathbf{L}\,\bar{{{\mathbf s}}}\rangle$ does not. Actually, $$\langle \mathbf{L}\,\bar{{{\mathbf s}}}\rangle=-C\int W({{\bf x}})\,W({{\bf y}})\,{{\bf x}}\times \langle \nabla\Phi({{\bf x}})\,\nabla\delta({{\bf y}})\rangle\, {{\rm d}}^3x \,{{\rm d}}^3y$$ with $\int W({{\bf y}})\,\langle \nabla\Phi({{\bf x}})\,\nabla\delta({{\bf y}})\rangle\,{{\rm d}}^3y=-\nabla\nabla\nabla^{-2}\bar{\xi}({{\bf x}})$ (contrary to Eqs. (\[Ldelta\]) and (\[Lder\]), this expression cannot be simplified in terms of radial derivatives because, in general, $\bar{\xi}({{\mathbf q}})$ is not isotropic due to the asphericity of the window function that defines a protohalo). This implies that linear-angular-momentum constraints correlate with conditions imposed on the mean density gradient.
In other words, angular-momentum constraints set at extremal points of the density field require a different HR correction than constraints set at random points with the same overdensity. The exact expression for the correction can be derived by inverting the covariance matrix of the constraints, which is beyond the scope of this paper and can be more easily done numerically. The expression for the linear angular momentum in Eq. (\[L\]) can be simplified by assuming that only the large-scale modes of the potential contribute. In this case one can smooth $\Phi$ over the protohalo and replace it with its Taylor expansion [@White84]. The leading-order term is $L^{\rm(T)}_i\simeq C\,\epsilon_{ijk} {D}_{j\ell} ({{\mathbf q}}=0)\,{Q}_{\ell k}$ where ${Q}_{ij}=\int W({{\mathbf q}}) \,q_i \,q_j \,{{\rm d}}^3 q$ is the quadrupole moment of the protohalo. Note that the spherical parts of $D_{ij}$ and ${Q}_{ij}$ do not contribute to the cross product and $\mathbf{L}$ can then be expressed in terms of the linear tidal tensor $T_{ij}({{\mathbf q}}=0)$ and the traceless quadrupole moment $Q_{ij}-(Q_{ii}/3)\,\delta_{ij}$. This result forms the heart of the so-called tidal-torque theory and is equivalent to assuming that the (linear) velocity shear is approximately constant within the protohalo. This approximation gives unbiased angular momenta with respect to Eq. (\[L\]) but generates a scatter of $\sim30$ per cent in the amplitude and a characteristic deviation of $20-30$ degrees in the direction [@PDH1]. Higher-order corrections couple mass multipole moments of order $n>2$ with $n$ spatial derivatives of $\Phi$ [see Eqs. (10) and (11) in @PDH1]. To first order in this expansion and for a fixed quadrupole tensor $Q_{ij}$ (corresponding to a fixed Lagrangian patch), linear-angular-momentum constraints are therefore equivalent to constraints on the local value of the linear tidal tensor (or velocity shear) and can be set using Eq. (\[main\]) even at density peaks.
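The leading-order expression $L^{\rm (T)}_i\simeq C\,\epsilon_{ijk}D_{j\ell}Q_{\ell k}$ is easy to verify numerically, including the statement that the spherical parts of $D_{ij}$ and $Q_{ij}$ do not contribute. The symmetric tensors below are random matrices used only for illustration.

```python
import numpy as np

# Levi-Civita symbol in three dimensions
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def L_tidal(D, Q, C=1.0):
    # Leading-order tidal-torque angular momentum: L_i = C eps_ijk D_jl Q_lk
    return C * np.einsum('ijk,jl,lk->i', eps, D, Q)

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)); D = (A + A.T) / 2   # symmetric deformation tensor
B = rng.normal(size=(3, 3)); Q = (B + B.T) / 2   # symmetric quadrupole tensor

L = L_tidal(D, Q)
# Spherical (trace) parts do not torque: adding a multiple of the identity
# to either tensor leaves L unchanged, as stated in the text.
L_shift = L_tidal(D + 0.7 * np.eye(3), Q - 1.3 * np.eye(3))
```

The same contraction also shows that perfectly aligned tensors produce no torque: if $Q\propto D$ the result vanishes identically.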
Note that, in this case, $\langle \delta({{\mathbf q}})\,L^{\rm (T)}_i\rangle=C\,\epsilon_{ijk} Q_{\ell k}\, [\partial_j \partial_\ell \nabla^{-2} \bar{\xi}({{\mathbf q}}) ]$. Summary and conclusions {#con} ======================= The HR method provides an efficient tool to generate constrained realisations of Gaussian random fields in which certain linear functionals of the field variables assume pre-defined values. Although this technique has been around for 25 years, many researchers are not very familiar with it and still see it as arcane or esoteric. Motivated by the intent to improve this situation, in Section \[const\], we reviewed the basic principles of the HR method and presented a number of examples of its application to cosmology, including peak-based constraints. We hope that our analytical results will provide a useful reference and help reveal the intrinsic simplicity of the algorithm. In Section \[gmh\], we discussed ‘genetically modified’ haloes. RPP applied the HR algorithm to modify the initial conditions of $N$-body simulations within and around the regions that collapse to form dark-matter haloes. The gist of their initiative is to alter the linear density field at will so as to produce haloes with a set of desired properties after the non-linear evolution. At first sight, this project might appear to be a relatively straightforward application of the HR method. However, it contains a subtle complication: the points at which the constraints are applied are chosen after inspecting the unconstrained realisation. They are the Lagrangian locations at which haloes form and they must preserve this property after being genetically modified. From the mathematical point of view, this is equivalent to restricting the ensemble over which averages in the HR method should be taken in order to build the conditional mean field. RPP have disregarded this issue and used averages taken over the full ensemble.
In other words, they treated protohaloes as randomly selected points with a given overdensity in Lagrangian space. This implicit assumption made the calculation possible but the results that follow from it are likely to suffer from a statistical bias. Our paper provides a first step towards understanding this issue. What makes the problem so challenging is that we do not know yet how to characterize protohaloes in mathematical terms. Although it is currently impossible to find an exact answer, reasonable lines of attack have been presented in the literature. Two common assumptions are that i) the Lagrangian sites for halo formation coincide with local density maxima of the smoothed density field [e.g. @Doroshkevich70; @K84; @PH BBKS] and ii) the boundaries of protohaloes correspond to isodensity surfaces [e.g. @HP; @CT]. Detailed tests against $N$-body simulations give strong support to the validity of the first hypothesis, at least for haloes above the characteristic collapsed mass at each epoch [@LP]. On the other hand, protohaloes’ principal directions and shapes have been found to strongly correlate with the local tidal field rather than with the density distribution [@LeeP; @PDH2; @LHP09; @LP; @Despali13; @LBP]. All this suggests that it should be possible to characterize (at least to some extent) the properties of protohaloes in terms of the following variables: the density contrast, its first and second spatial derivatives, and the tidal field. Using the HR method we derived an analytical formula for setting simultaneous constraints on all these quantities. Our result is given in Eqs. (\[quasimain\]) and (\[main\]) while Eq. (\[chimain\]) can be used to evaluate the relative probability of the constrained realisations with respect to the original one. 
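In finite dimensions, the HR prescription amounts to adding a conditional-mean correction, proportional to the cross-covariance between the field and the constraints, to the unconstrained realisation. A toy one-dimensional sketch with a single point constraint (grid size, covariance model and constrained value are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
x = np.arange(n)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, n - d)                 # periodic pixel separation
C = np.exp(-0.5 * (d / 4.0) ** 2)        # toy prior covariance matrix
C += 1e-6 * np.eye(n)                    # regularise for the Cholesky factor
delta_r = np.linalg.cholesky(C) @ rng.normal(size=n)   # unconstrained field

# One linear constraint: fix the field value at pixel i0 to f0
i0, f0 = 10, 2.5
xi = C[:, i0]            # cross-covariance <delta(q) F[delta]>
A = C[i0, i0]            # 1x1 covariance matrix of the constraint
delta_c = delta_r + xi * (f0 - delta_r[i0]) / A   # HR mean-field correction

assert np.isclose(delta_c[i0], f0)       # the constraint holds exactly
```

The correction vanishes wherever the field is uncorrelated with the constrained pixel, and the constrained value is reproduced exactly regardless of the unconstrained draw.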
If one wants to make sure that a protohalo in the unconstrained initial conditions, ${\delta_{\rm r}}$, remains a protohalo in the constrained linear density field, ${\delta_{\rm c}}$, only some of the relevant field variables should be allowed to vary while some others should be kept fixed. There is some freedom here. For instance, one might want to require that a density peak in ${\delta_{\rm r}}$ stays a peak in ${\delta_{\rm c}}$ (i.e. $\overline{\nabla \delta}=0$ and $\overline{{\boldsymbol{\mathsf{H}}}}$ is negative definite). With this in mind, we showed that the field transformation that sets a pure density constraint and marginalises over all the other field variables (Eq. (\[sol1dens\]), which has been used by RPP) corresponds to setting correlated constraints in $\bar{\delta}$ and the mean curvature $\bar{\kappa}/3$ when the condition of being a local extremum and the traceless Hessian matrix are kept fixed. Although the expression of the HR correction is identical in these two cases, the likelihood of the constrained realisations is quite different. This demonstrates that $\Delta\chi^2$ values should be interpreted with care as they depend on the assumptions that are made on the nature of the constraints. We also provided several additional examples including the case in which a density constraint is imposed while keeping the density gradient, the Hessian matrix and the tidal field fixed, Eq. (\[eur\]). In the second part of the paper (Section \[secmah\]) we have developed a variant of the excursion-set formalism in order to predict the mass-accretion history of GM haloes. This is key to optimising the choice of the constraints that should be set in order to produce haloes with the desired properties after their non-linear collapse. Our method does not require any external input and can be used with all sorts of constraints.
Basically, we first compute the change in the excursion-set trajectory induced by the HR method and then solve for the first-upcrossing of a threshold which has been calibrated using the mass-accretion history of the original unconstrained run. The entire algorithm is very simple to code and essentially takes no time to run. For constraints that require small changes we derived an analytical expression for the final halo mass which is given in Eqs. (\[masspred\]) and (\[masspred2\]). Our analysis indicates that, after all, the implementation by RPP generates halo mass accretion histories that are qualitatively similar to those obtained assuming a correspondence between protohaloes and local density maxima, at least on galaxy scales (see Figure \[fig4\]). However, we found that the mass-accretion rate at the mass scale of the constraints is very sensitive to the detailed form of the imposed restrictions. This suggests that the method used by RPP might be suitable for investigating broad evolutionary scenarios but care should be taken when using it to make precise quantitative measurements. Future studies should test our semi-analytic results against $N$-body simulations. In particular, they should measure how big of an effect is obtained when additional conditions on the density gradient, the Hessian matrix and the tidal field are combined with the pure density constraints used by RPP. Finally, in Section \[am\], we discussed the possibility of using the HR method to constrain the angular momentum that a halo gains to leading order in perturbation theory. We concluded that this is impossible to achieve because the shape of protohaloes depends on the initial conditions in an unknown (and thus unpredictable) way. Nevertheless, the HR method can be used to set constraints based on the angular momentum gained by a fixed Lagrangian region. 
We derived the corresponding analytical solution for patches centered on random points with a fixed overdensity which is given in Eqs. (\[LL\]), (\[Ldelta\]) and (\[amrandom\]). We also demonstrated that this solution does not hold true for density maxima or, more generally, when information on $\overline{\nabla \delta}$ is used to identify the location of the constraints (and thus, most likely, for protohaloes). On the other hand, using the tidal-torque theory to first order, we reduced the angular-momentum constraints to tidal-field constraints that can more easily be imposed at special locations identified using spatial derivatives of the density field. In conclusion, we would like to express the wish that future investigations will focus more and more onto the problem of characterising the locations and properties of protohaloes. Acknowledgements {#acknowledgements .unnumbered} ================ We warmly thank Nina Roth for discussions regarding the genetic-modification algorithm and Yehuda Hoffman for suggestions that improved the presentation of our results. This work was partly funded by the German Research Foundation (DFG) through the Cooperative Research Center TRR33 ‘The Dark Universe’. [99]{} Baldauf, T., Desjacques, V., & Seljak, U. 2015, , 92, 123507 Bardeen, J. M., Bond, J. R., Kaiser, N., & Szalay, A. S. 1986, , 304, 15 Bertschinger, E. 1987, , 323, L103 Bond, J. R., Cole, S., Efstathiou, G., & Kaiser, N. 1991, , 379, 440 Borzyszkowski, M., Ludlow, A. D., & Porciani, C. 2014, , 445, 4124 Cartwright, D. E., & Longuet-Higgins, M. S. 1956, Proceedings of the Royal Society of London Series A, 237, 212 Catelan, P., & Theuns, T. 1996, , 282, 436 Chan, K. C., Sheth, R. K., & Scoccimarro, R. 2015, arXiv:1511.01909 Cramér, H. & Leadbetter, M. R. 1967, Stationary and Related Stochastic Processes: Sample Function Properties and their Applications, Wiley, New York Dalal, N., White, M., Bond, J. R., & Shirokov, A. 2008, , 687, 12-21 Daley, D. J. & Vere-Jones, D. 
2007, An Introduction to the Theory of Point Processes, Volume II: General Theory and Structure, 2nd edn, Springer, New York Dekel, A. 1981, , 101, 79 Desjacques, V. 2008, , 78, 103503 Despali, G., Tormen, G., & Sheth, R. K. 2013, , 431, 1143 Domainko, W., Mair, M., Kapferer, W., et al. 2006, , 452, 795 Doroshkevich, A. G. 1970, Astrofizika, 6, 581 Elia, A., Ludlow, A. D., & Porciani, C. 2012, , 421, 3472 Ganon, G., & Hoffman, Y. 1993, , 415, L5 Hahn, O., & Paranjape, A. 2014, , 438, 878 Heavens, A., & Peacock, J. 1988, , 232, 339 Hoffman, Y., & Ribak, E. 1991, , 380, L5 Kac, M. 1943, Bull. Amer. Math. Soc., 49, 314 Kaiser, N. 1984, , 284, L9 Knollmann, S. R., & Knebe, A. 2009, , 182, 608 Lee, J., Hahn, O., & Porciani, C. 2009, , 707, 761 Lee, J., & Pen, U.-L. 2000, , 532, L5 Ludlow, A. D., & Porciani, C. 2011, , 413, 1961 Ludlow, A. D., Borzyszkowski, M., & Porciani, C. 2014, , 445, 4110 Ma, C.-P., & Bertschinger, E. 2004, , 612, 28 Musso, M., & Sheth, R. K. 2012, , 423, L102 Peacock, J. A., & Heavens, A. F. 1985, , 217, 805 Porciani, C., Dekel, A., & Hoffman, Y. 2002a, , 332, 325 Porciani, C., Dekel, A., & Hoffman, Y. 2002b, , 332, 339 Rice, S. O. 1945, Bell System Tech. J., 24, 46 Romano-Díaz, E., Faltenbacher, A., Jones, D., et al. 2006, , 637, L93 Romano-Díaz, E., Shlosman, I., Trenti, M., & Hoffman, Y. 2011, , 736, 66 Robertson, B. E., Kravtsov, A. V., Tinker, J., & Zentner, A. R. 2009, , 696, 636 Roth, N., Pontzen, A., & Peiris, H. V. 2016, , 455, 974 Sheth, R. K., Mo, H. J., & Tormen, G. 2001, , 323, 1 Sorce, J. G., Gottl[ö]{}ber, S., Yepes, G., et al. 2016, , 455, 2078 van de Weygaert, R., & Babul, A. 1994, , 425, L59 van de Weygaert, R., & Bertschinger, E. 1996, , 281, 84 Vanmarcke, E. 1983, Random Fields, The MIT Press, Cambridge, MA White, S. D. M. 1996, Cosmology and Large Scale Structure, 349 White, S. D. M.
1984, , 286, 38 Zentner, A. R. 2007, International Journal of Modern Physics D, 16, 763 Inverse covariance for the constraints {#inversion} ====================================== We show here how to invert the 15-dimensional covariance matrix of the constraints discussed in §\[curv\]. Since the density gradient is independent from all the other variables, we will consider only the deformation tensor and the Hessian of the density for which $\langle \overline{D}_{ij} \,\overline{D}_{\ell m}\rangle= (\sigma_0^2 /15)\, S_{ij\ell m}$, $\langle \overline{D}_{ij} \,\overline{H}_{\ell m}\rangle= (\sigma_1^2 /15) \,S_{ij\ell m}$ and $\langle \overline{H}_{ij}\, \overline{H}_{\ell m}\rangle= (\sigma_2^2/15) \,S_{ij\ell m}$ with $S_{ij\ell m}=\delta_{ij}\delta_{\ell m}+\delta_{i\ell}\delta_{jm}+\delta_{im}\delta_{\ell j}$. If we organise the six independent elements of each tensor (say $\overline{D}_{ij}$) in the form of a vector with elements $(\overline{D}_{11}, \overline{D}_{22}, \overline{D}_{33}, \overline{D}_{12}, \overline{D}_{13}, \overline{D}_{23})$, then the 12-dimensional covariance matrix can be written as $${\boldsymbol{\mathsf{A}}}=\begin{pmatrix} {\boldsymbol{\mathsf{B}}}_0 & {\boldsymbol{\mathsf{B}}}_1\\ {\boldsymbol{\mathsf{B}}}_1& {\boldsymbol{\mathsf{B}}}_2 \end{pmatrix}$$ where ${\boldsymbol{\mathsf{B}}}_0=\sigma_0^2 \,{\boldsymbol{\mathsf{M}}}$, ${\boldsymbol{\mathsf{B}}}_1=\sigma_1^2 \,{\boldsymbol{\mathsf{M}}}$ and ${\boldsymbol{\mathsf{B}}}_2=\sigma_2^2 \,{\boldsymbol{\mathsf{M}}}$ with $${\boldsymbol{\mathsf{M}}}=\frac{1}{15} \begin{pmatrix} 3 & 1 & 1 & 0 & 0 & 0\\ 1 & 3 & 1 & 0 & 0 & 0\\ 1 & 1 & 3 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}\;.$$ The inverse covariance thus also has a block structure $${\boldsymbol{\mathsf{C}}}^{-1}= \begin{pmatrix} {\boldsymbol{\mathsf{E}}}& {\boldsymbol{\mathsf{F}}}\\ {\boldsymbol{\mathsf{F}}}& {\boldsymbol{\mathsf{G}}}\end{pmatrix}$$ with $$\begin{aligned} 
{\boldsymbol{\mathsf{E}}}\!\!\!\!\!&=&\!\!\!\!\!({\boldsymbol{\mathsf{B}}}_0-{\boldsymbol{\mathsf{B}}}_1{\boldsymbol{\mathsf{B}}}_2^{-1}{\boldsymbol{\mathsf{B}}}_1)^{-1}=\frac{1}{\sigma_0^2\,(1-\gamma^2)}\,{\boldsymbol{\mathsf{M}}}^{-1} \nonumber \\ {\boldsymbol{\mathsf{F}}}\!\!\!\!\!&=&\!\!\!\!\! -({\boldsymbol{\mathsf{B}}}_0-{\boldsymbol{\mathsf{B}}}_1{\boldsymbol{\mathsf{B}}}_2^{-1}{\boldsymbol{\mathsf{B}}}_1)^{-1} {\boldsymbol{\mathsf{B}}}_1{\boldsymbol{\mathsf{B}}}_2^{-1}=- \frac{\gamma}{\sigma_0\,\sigma_2\,(1-\gamma^2)}\,{\boldsymbol{\mathsf{M}}}^{-1}\\ {\boldsymbol{\mathsf{G}}}\!\!\!\!\!&=&\!\!\!\!\! {\boldsymbol{\mathsf{B}}}_2^{-1}+{\boldsymbol{\mathsf{B}}}_2^{-1}{\boldsymbol{\mathsf{B}}}_1 ({\boldsymbol{\mathsf{B}}}_0-{\boldsymbol{\mathsf{B}}}_1{\boldsymbol{\mathsf{B}}}_2^{-1}{\boldsymbol{\mathsf{B}}}_1)^{-1} {\boldsymbol{\mathsf{B}}}_1{\boldsymbol{\mathsf{B}}}_2^{-1}= \frac{1}{\sigma_2^2\,(1-\gamma^2)}\,{\boldsymbol{\mathsf{M}}}^{-1}\nonumber\end{aligned}$$ where $${\boldsymbol{\mathsf{M}}}^{-1}=15 \begin{pmatrix} 2/5 & -1/10 & -1/10 & 0 & 0 & 0\\ -1/10 & 2/5 & -1/10 & 0 & 0 & 0\\ -1/10 & -1/10 & 2/5 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}\;.$$ Conditional probabilities at local maxima {#condpeak} ========================================= Let us consider a sufficiently smooth and differentiable Gaussian random field $\delta({{\mathbf q}})$. A mathematically well-defined cumulative probability distribution for the height of a local maximum of the field is obtained taking the limit[^11] $$\lim_{\epsilon \to 0} {\cal{P}}\{ \delta({{\mathbf q}}_0)>u | \,\exists \mathrm{\ a\ local\ maximum\ of}\ \delta({{\mathbf q}})\ \mathrm{in}\ U({{\mathbf q}}_0,\epsilon)\}$$ where ${\cal P}$ denotes probability and $U({{\mathbf q}}_0,\epsilon)$ is the three-dimensional open cube of side $\epsilon$ centered at ${{\mathbf q}}_0$ [@Cra-Lea]. 
Generalising this definition to more variables and differentiating, we can introduce the differential probability distribution for maxima of height $\delta$ and (negative definite) Hessian matrix ${\boldsymbol{\mathsf{H}}}$, $P_{\rm pk}(\delta,{\boldsymbol{\mathsf{H}}})$, which, apart from a normalisation factor, coincides with the intensity function $\bar{n}_{\rm pk}(\delta,{\boldsymbol{\mathsf{H}}})$ such that $\bar{n}_{\rm pk}(\delta,{\boldsymbol{\mathsf{H}}})\,{{\rm d}}\delta\,{{\rm d}}^6 {\boldsymbol{\mathsf{H}}}$ gives the expected number of peaks with height between $\delta$ and $\delta+{{\rm d}}\delta$ and Hessian matrix between ${\boldsymbol{\mathsf{H}}}$ and ${\boldsymbol{\mathsf{H}}}+{{\rm d}}^6 {\boldsymbol{\mathsf{H}}}$ per unit comoving volume (note that ${{\rm d}}^6 {\boldsymbol{\mathsf{H}}}$ denotes the Lebesgue measure on the space of $3\times 3$ negative definite matrices). The intensity function can be computed following the methods introduced by @Kac and @Rice as shown in BBKS. In brief, the reasoning proceeds as follows. The number density of local maxima (characterised by the peak height $\delta_{\rm pk}$ and the Hessian matrix ${\boldsymbol{\mathsf{H}}}_{\rm pk}$) in one realisation of the random field can be formally written as $$n_{\rm pk}({{\mathbf q}})=\sum_i \delta_{\rm D}({{\mathbf q}}-{{\mathbf q}}_{{\rm pk}, i})\;. \label{formalnpk}$$ Around a peak, the gradient of the random field can be approximated with its Taylor expansion to first order $s_i({{\mathbf q}})\simeq H_{ij}({{\mathbf q}}_{\rm pk})\,({{\mathbf q}}-{{\mathbf q}}_{\rm pk})_j$. Using the properties of the Dirac-$\delta$ distribution, Eq. 
(\[formalnpk\]) can be re-written as $$\begin{aligned} n_{\rm pk}({{\mathbf q}})\!\!\!\!\!&=&\!\!\!\!\!|\det{{\boldsymbol{\mathsf{H}}}}({{\mathbf q}})|\,\{1-\Theta[\lambda_{\rm m}({{\mathbf q}})]\}\nonumber\\ &\times&\!\!\!\!\!\delta_{\rm D}[{{\mathbf s}}({{\mathbf q}}) ]\,\delta_{\rm D}[\delta({{\mathbf q}})-\delta_{\rm pk}]\,\delta_{\rm D}[{\boldsymbol{\mathsf{H}}}({{\mathbf q}})-{\boldsymbol{\mathsf{H}}}_{\rm pk}]\;, \label{kacrice}\end{aligned}$$ with $\lambda_{\rm m}$ the largest eigenvalue of ${\boldsymbol{\mathsf{H}}}$. The function $\bar{n}_{\rm pk}$ is obtained taking the expectation of Eq. (\[kacrice\]) which gives $$\bar{n}_{\rm pk}(\delta_{\rm pk},{\boldsymbol{\mathsf{H}}}_{\rm pk})= |\det{{\boldsymbol{\mathsf{H}}}_{\rm pk}} |\,[1-\Theta(\lambda_{\rm m, pk})]\,{\cal P}(\delta_{\rm pk}, {\bf s}=0, {\boldsymbol{\mathsf{H}}}_{\rm pk} )\;.$$ where ${\cal P}$ is a multivariate Gaussian distribution expressing the joint probability of $\delta$, ${{\mathbf s}}$ and ${\boldsymbol{\mathsf{H}}}$ in the original random field. The conditional probability of a series of events $\mathbf{E}$ subject to the constraint that there is a density peak at ${{\mathbf q}}=0$ can be defined as the ratio between the number of peaks for which $\mathbf{E}$ is true and $\bar{n}_{\rm pk}$. The values assumed by the density field at all positions ${{\mathbf q}}$ can also be included in $\mathbf{E}$. 
Therefore, the conditional probability for a realisation of the field (here simply denoted by the letter $\delta$ and switching from functions to functionals), ${\cal P}_{\rm pk}[\delta | F_i[\delta]=f_i]$, can be formally written as $$\begin{aligned} {\cal P}_{\rm pk}[\delta | F_i[\delta]=f_i]\!\!\!\!\!&=& \!\!\!\!\!\frac{\bar{n}_{\rm pk}[\delta, \delta_{\rm pk}, {\boldsymbol{\mathsf{H}}}_{\rm pk}, F[\delta] =f]}{\bar{n}_{\rm pk}(\delta_{\rm pk}, {\boldsymbol{\mathsf{H}}}_{\rm pk})}\nonumber \\ &=&\!\!\!\!\!\frac{{\cal P}[ \delta, \delta_{\rm pk}, {{\mathbf s}}=0, {\boldsymbol{\mathsf{H}}}_{\rm pk}, F[\delta] =f]}{{\cal P}(\delta_{\rm pk}, {{\mathbf s}}=0, {\boldsymbol{\mathsf{H}}}_{\rm pk})}\nonumber\\ &=&\!\!\!\!\!\!{\cal P}[\delta | \delta_{\rm pk}, {{\mathbf s}}=0, {\boldsymbol{\mathsf{H}}}_{\rm pk}, F[\delta] =f]\;. $$ In words, conditional probabilities at peaks coincide with conditional probabilities taken at random points characterised by $\delta=\delta_{\rm pk}$, ${{\mathbf s}}=0$ and ${\boldsymbol{\mathsf{H}}}={\boldsymbol{\mathsf{H}}}_{\rm pk}$. It follows that the conditional mean field around a peak is $$\langle \delta({{\mathbf q}})| F_i[\delta]=f_i \rangle_{\rm pk}=\langle \delta({{\mathbf q}})| F_i[\delta]=f_i, \delta_{\rm pk}, {{\mathbf s}}=0, {\boldsymbol{\mathsf{H}}}_{\rm pk} \rangle\;.$$ Similarly, for local extrema, one obtains: $$\langle \delta({{\mathbf q}})| F_i[\delta]=f_i \rangle_{\rm ex}=\langle \delta({{\mathbf q}})| F_i[\delta]=f_i, \delta_{\rm ex}, {{\mathbf s}}=0\rangle\;.$$ \[lastpage\] [^1]: E-mail: [email protected] [^2]: To simplify the notation we will not distinguish between a finite sampling of the field in $N^3$ points (with $N\in \mathbb{N}$) forming a regular lattice (as used in numerical simulations) and the continuum limit. The formal passage of letting $N\to \infty$ is discussed in @Bert87 and @VB. 
[^3]: By diagonalising ${\boldsymbol{\mathsf{A}}}$ one can determine $N_{\rm c}$ linear combinations of the original constraints that are statistically independent. In terms of the (orthonormal) eigenvectors ($\mathbf{e}_i$) and eigenvalues ($\lambda_i$) of ${\boldsymbol{\mathsf{A}}}$, $\Delta \chi^2=(p_{{\rm c},i}^2-p_{{\rm r},i}^2)/\lambda_i$ where $p_i=\mathbf{w}\cdot \mathbf{e}_i$ denotes the projection of the vector with original components $w_j=f_j-\langle F_j[\delta]\rangle$ along the $i^{\rm th}$ eigenvector of ${\boldsymbol{\mathsf{A}}}$. Note that, in order to avoid the inversion of ${\boldsymbol{\mathsf{A}}}$, RPP re-wrote the HR algorithm in terms of a Gram-Schmidt process. Unlike them, we follow the original notation by HR, which we find easier to interpret. [^4]: Note that our definitions for $R_0$ and $R_{\rm pk}$ differ from those in BBKS by a factor of $3^{1/2}$. [^5]: The Fourier integrals defining $\bar{\xi}(\mathbf{0})$ and $\nabla^2\bar{\xi}(\mathbf{0})$ are analogous to $\sigma_0^2$ and $\sigma_1^2$ but are evaluated using $\widetilde{W}(k)$ instead of its square. [^6]: Constraints on the density gradient can be imposed using the derivative of the Dirac-delta distribution to define the linear functional $F[\delta]$. [^7]: Also the scatter around it changes, see Eq. (7.9) in BBKS. [^8]: It is easy to understand how this works when we use a spherically symmetric filter: starting from the definition of $\bar{\xi}$ and smoothing over the window function, one finds that $\bar{\bar{\xi}}({\bf 0})=\sigma_0^2$ and $\overline{\nabla^2 \bar{\xi}}({\bf 0})=-\sigma_1^2$. Thus, using Eq. (\[sol1dens\]) introduces the variation $\Delta \bar{\kappa}=-(\sigma_1^2/\sigma_0^2)\, \Delta\bar{\delta}=-\Delta \bar{\delta}/R_0^2$. [^9]: We use the window function in Eq. (\[efffilter\]) to set the density constraint and a spherical top-hat filter to build the trajectories. [^10]: Eq.
(\[LL\]) follows from the fact that all 2-point correlators are completely determined by the scalar distance between the points. In fact, $\nabla f(r)= \hat{{{\bf r}}}\, \partial f/\partial r$ for a generic function $f$ that depends only on the radial coordinate. [^11]: Elementary probability theory cannot handle these probabilities because the event that a point process has an element at a specified location has zero measure. Conditioning on point processes is rigorously defined in terms of the Palm distribution and Campbell measures [see e.g. @DVJ07].
--- abstract: 'Let $Q$ be an acyclic quiver and $\s$ be a sequence with elements in the vertex set $Q_0$. We describe a sequence of simple (backward) tilting in the bounded derived category ${\mathrm{\hua{D}}}(Q)$, starting from the standard heart ${\mathrm{\hua{H}}}_Q={\mathrm{mod}}{\mathbf{k}}Q$ and ending at the heart ${\mathrm{\hua{H}}}_\s$ in ${\mathrm{\hua{D}}}(Q)$. Then we show that $\s$ is a green mutation sequence if and only if every heart in the simple tilting sequence is greater than or equal to ${\mathrm{\hua{H}}}_Q[-1]$; it is maximal if and only if ${\mathrm{\hua{H}}}_\s={\mathrm{\hua{H}}}_Q[-1]$. This provides a simple way to understand green mutations. Further, fix a Coxeter element $c$ in the Coxeter group $W_Q$ of $Q$, which is admissible with respect to the orientation of $Q$. We show that the sequence $\widetilde{\gm}$ induced by a $c$-sortable word $\w$ is a green mutation sequence. As a consequence, we obtain a bijection between the set of $c$-sortable words and the set of finite torsion classes in ${\mathrm{\hua{H}}}_Q$. As byproducts, the interpretations of inversions, descents and cover reflections of a $c$-sortable word $\w$, and thus noncrossing partitions, as well as the wide subcategories associated to ${\mathrm{\hua{H}}}_\w$, are given in terms of red vertices in green mutations. [ *Key words:*]{} Coxeter group, c-sortable word, quiver mutation, cluster theory, tilting theory' author: - '[Yu Qiu ]{}' title: 'C-sortable words as green mutation sequences' --- Introduction ============ Cluster algebras were invented by Fomin-Zelevinsky in 2000 in an attempt to understand total positivity in algebraic groups and canonical bases in quantum groups. They have been heavily studied during the last decade due to their wide connections to many areas in mathematics (for more details, see the introductory survey [@Kel12]).
The combinatorial ingredient in the cluster theory is quiver mutation, which leads to the categorification of cluster algebra via quiver representation theory due to Buan-Marsh-Reineke-Reiten-Todorov in 2005. Recently, Keller spotted a remarkable special case of quiver mutation by adding certain restrictions, known as the green quiver mutation (Definition \[def:green\]); using it, he obtained results concerning Kontsevich-Soibelman’s noncommutative Donaldson-Thomas invariant via quantum cluster algebras. Inspired by Keller [@Kel11] and Nagao [@Nag10], King-Qiu [@KQ11] studied the exchange graphs of hearts and clusters in various categories associated to cluster categories, with applications to stability conditions and quantum dilogarithm identities in [@Qiu11]. Another motivation for studying green mutation sequences comes from theoretical physics, where they yield the complete spectrum of BPS states, cf. [@CCV]. Our first aim in this paper is to interpret Keller’s green mutation in terms of tilting. More precisely, a green sequence $\s$ induces a path ${\mathrm{P}}(\s)$ in the exchange graph ${\mathrm{EG}}_Q$ (cf. Definition \[def:eg\]), that is, a sequence of simple (backward) tilting. Thus $\s$ corresponds to a heart ${\mathrm{\hua{H}}}_\s$. Then we can obtain Keller’s results about green mutations by studying this heart. Here is a summary of the results in Section \[sec:keller\]. \[thm:0.1\] Let $Q$ be an acyclic quiver. - A sequence $\s$ is a green mutation sequence if and only if ${\mathrm{\hua{H}}}\geq{\mathrm{\hua{H}}}_Q[-1]$ for any ${\mathrm{\hua{H}}}$ in the path ${\mathrm{P}}(\s)$. - A vertex $j\in Q_0$ for some green mutation sequence $\s$ is either green or red. Moreover, it is green if and only if the corresponding simple $S_j^{\s}$ in ${\mathrm{\hua{H}}}_\s$ is in ${\mathrm{\hua{H}}}_Q$ and it is red if and only if $S_j^{\s}$ is in ${\mathrm{\hua{H}}}_Q[-1]$.
- A green sequence $\s$ is maximal if and only if ${\mathrm{\hua{H}}}_\s={\mathrm{\hua{H}}}_Q[-1]$. Hence the mutated quivers associated to two maximal green mutation sequences are isomorphic. - The simples of the wide subcategory ${\hua{W}}{\s}$ associated to the torsion class $\hua{T}_\s$ are precisely the red simples in ${\mathrm{\hua{H}}}_\s$ shifted by one. Our second focus is on c-sortable words (c for Coxeter element), defined by Reading [@R07], who showed bijections between c-sortable words, c-clusters and noncrossing partitions in the finite (Dynkin) case. Ingalls-Thomas extended Reading’s result in the direction of representation theory and gave bijections between many sets (see [@IT09 p. 1534]). The bijection between c-sortable words and finite torsion classes was first generalized by Thomas [@Tho] and also obtained by Amiot-Iyama-Reiten-Todorov [@AIRT10] via layers for preprojective algebras. We will interpret a c-sortable word as a green mutation sequence (Theorem \[thm:main\]) and obtain many consequences, summarized by the following theorem. \[thm:0.2\] Let $Q$ be an acyclic quiver with an admissible Coxeter element $c$. Then any $c$-sortable word $\w$ induces a green mutation sequence $\widetilde{\w}$ and we have the following bijections. - $\{$the $c$-sortable word $\w\} \overset{_{1-1}}{\longleftrightarrow} \{$the finite torsion class $\hua{T}_{\gm}$ in ${\mathrm{\hua{H}}}_Q={\mathrm{mod}}{\mathbf{k}}Q\}$. - $\{$the inversion $t_{T}$ for $\w\} \overset{_{1-1}}{\longleftrightarrow} \{$the indecomposable $T$ in $\hua{T}_{\gm}\}$. - $\{$the descent $s_j$ for $\w\} \overset{_{1-1}}{\longleftrightarrow} \{$the red vertex $j$ for $\w\}$. - $\{$the cover reflection $t_{T}$ for $\w\} \overset{_{1-1}}{\longleftrightarrow} \{$the red simple $T$ in ${\mathrm{\hua{H}}}_{\gm}\}$.
Further, if $Q$ is of Dynkin type, the noncrossing partition ${\mathrm{nc}}_c(\w)$ associated to $\w$ can be calculated as $${\mathrm{nc}}_c(\w)= \prod_{j\in {\mathrm{V^{r}}(\gm)}} s_j^{\gm},$$ with $\rank{\mathrm{nc}}_c(\w)=\#{\mathrm{V^{r}}(\gm)}$, where ${\mathrm{V^{r}}(\gm)}$ is the set of the red vertices and $s_j^{\gm}$ is the reflection corresponding to the $j$-th simple in ${\mathrm{\hua{H}}}_\w$. Also, the tree of $c$-sortable words (with respect to the weak order) is isomorphic to a supporting tree of the exchange graph ${\mathrm{EG}}_Q$. These results give a deeper understanding of the results of Ingalls-Thomas [@IT09]. Note that all our bijections are consistent with theirs, cf. Table \[table\] and [@IT09 Table 1]. Also, the construction from c-sortable words to the green mutation sequences should be the ‘dual’ construction of Amiot-Iyama-Reiten-Todorov [@AIRT10] (cf. [@BIRS09]) and provides a combinatorial perspective to attack their problems at the end of their paper. Finally, a more systematic study of maximal green sequences can be found in [@BDP]. Acknowledgements {#acknowledgements .unnumbered} ---------------- Some of the ideas in this work were developed when I was a Ph.D. student of Alastair King and when I visited Bernhard Keller in March 2011. I would like to thank them, as well as Thomas Brustle, for helpful conversations. Preliminaries ============= Fix an algebraically closed field ${\mathbf{k}}$. Throughout this paper, $Q$ will be a finite acyclic quiver with vertex set $Q_0=\{1,\ldots,n\}$ (unless otherwise stated). The path algebra is denoted by ${\mathbf{k}}Q$. Let ${\mathrm{\hua{H}}}_Q:={\mathrm{mod}}{\mathbf{k}}Q$ be the category of finite dimensional ${\mathbf{k}}Q$-modules, which is an abelian category, and ${\mathrm{\hua{D}}}(Q):=\hua{D}^b({\mathrm{\hua{H}}}_Q)$ be its bounded derived category, which is a triangulated category.
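As background for the mutation sequences discussed in the introduction, recall that at the level of skew-symmetric exchange matrices quiver mutation is the standard Fomin-Zelevinsky rule. A minimal sketch (the $A_3$ example and the encoding $b_{ij}=\#\{i\to j\}-\#\{j\to i\}$ are the usual conventions, not specific to this paper):

```python
import numpy as np

def mutate(B, k):
    """Fomin-Zelevinsky mutation of a skew-symmetric exchange matrix B at
    vertex k: entries in row/column k change sign; otherwise
    b'_ij = b_ij + (|b_ik| b_kj + b_ik |b_kj|) / 2."""
    B = np.asarray(B, dtype=int)
    Bp = -B.copy()
    n = B.shape[0]
    for i in range(n):
        for j in range(n):
            if i != k and j != k:
                Bp[i, j] = B[i, j] + (abs(B[i, k]) * B[k, j]
                                      + B[i, k] * abs(B[k, j])) // 2
    return Bp

# Linear A3 quiver 0 -> 1 -> 2
B = np.array([[0, 1, 0],
              [-1, 0, 1],
              [0, -1, 0]])

B1 = mutate(B, 1)   # mutating at the middle vertex reverses its arrows
                    # and creates a new arrow 0 -> 2
assert np.array_equal(B1, np.array([[0, -1, 1],
                                    [1, 0, -1],
                                    [-1, 1, 0]]))
assert np.array_equal(mutate(B1, 1), B)   # mutation is an involution
```

Green mutation imposes extra sign restrictions on the so-called c-vectors on top of this rule; the sketch covers only the underlying matrix mutation.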
We denote by $\Sim\hua{A}$ a complete set of non-isomorphic simples in an abelian category $\hua{A}$ and let $$\Sim{\mathrm{\hua{H}}}_Q=\{S_1,\ldots,S_n\},$$ where $S_i$ is the simple ${\mathbf{k}}Q$-module corresponding to vertex $i\in Q_0$. Coxeter group and words ----------------------- Recall that the *Euler form* $$\<-,-\>:\kong{Z}^{Q_0}\times\kong{Z}^{Q_0}\to\kong{Z}$$ associated to the quiver $Q$ is defined by $$\<\mathbf{a}, \mathbf{b}\> =\sum_{i\in Q_0}a_i b_i-\sum_{(i\to j)\in Q_1}a_i b_j.$$ Denote by $(-,-)$ the symmetrized Euler form, i.e. $(\mathbf{a},\mathbf{b})=\<\mathbf{a},\mathbf{b}\>+\<\mathbf{b},\mathbf{a}\>$. Moreover for $M,L\in{\mathrm{mod}}{\mathbf{k}}Q$, we have $$\begin{gathered} \label{eq:euler form} \<{\mathrm{\underline{dim}}}M,{\mathrm{\underline{dim}}}L\>={\mathrm{\underline{dim}}}\Hom(M,L)-{\mathrm{\underline{dim}}}\Ext^1(M,L),\end{gathered}$$ where ${\mathrm{\underline{dim}}}E\in\kong{N}^{Q_0}$ is the *dimension vector* of any $E\in{\mathrm{mod}}{\mathbf{k}}Q$. Let $V=\Grot({\mathbf{k}}Q)\otimes\kong{R}$, where $\Grot({\mathbf{k}}Q)$ is the Grothendieck group of ${\mathbf{k}}Q$. For any non-zero $v\in V$, define a *reflection* $$s_v(u)=u-\frac{2(v,u)}{(v,v)}v.$$ We will write $s_M=s_{{\mathrm{\underline{dim}}}M}$ for $M\in {\mathrm{\hua{H}}}_Q\sqcup{\mathrm{\hua{H}}}_Q[-1]$. The *Coxeter group* $W=W_Q$ is the group of transformations generated by the *simple reflections* $s_i=s_{{\mathrm{\underline{dim}}}S_i}, i\in Q_0$. The (real) *roots* in $W$ are $\{w(e_i)\mid w\in W, i\in Q_0\}$, where $e_i$ are the idempotents; the *positive roots* are those roots which are a non-negative (integral) combination of the $e_i$. Note that the reflection of a positive root is in $W$. Denote by $\mathrm{T}$ the set of all the reflections of $W$, that is, the set of all conjugates of the simple reflections of $W$. A *Coxeter element* for $W$ is the product of the simple reflections in some order.
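As a toy illustration of these definitions (our example, not from the text): for the $A_2$ quiver with a single arrow $0\to 1$, the sketch below computes the Euler form and the simple reflections, using the standard reflection formula $s_v(u)=u-\frac{2(v,u)}{(v,v)}v$, which for a simple class $e_i$ of a simply-laced quiver (where $(e_i,e_i)=2$) reduces to $u-(e_i,u)\,e_i$:

```python
import numpy as np

# Illustrative A2 quiver: vertices {0, 1}, one arrow 0 -> 1
n = 2
arrows = [(0, 1)]

def euler(a, b):
    """Euler form <a, b> = sum_i a_i b_i - sum over arrows (i -> j) of a_i b_j."""
    return int(np.dot(a, b)) - sum(int(a[i]) * int(b[j]) for i, j in arrows)

def sym(a, b):
    """Symmetrized Euler form (a, b) = <a, b> + <b, a>."""
    return euler(a, b) + euler(b, a)

e = [np.eye(n, dtype=int)[i] for i in range(n)]   # classes of the simples S_i

def s(i, u):
    """Simple reflection s_i(u) = u - (e_i, u) e_i, valid since (e_i, e_i) = 2."""
    return u - sym(e[i], u) * e[i]

u = np.array([2, 3])
assert np.array_equal(s(0, s(0, u)), u)           # s_i is an involution
# s_0 sends the class of S_1 to (1, 1), the remaining positive root of A2:
assert np.array_equal(s(0, e[1]), np.array([1, 1]))
```

In particular the three positive roots $e_0$, $e_1$ and $e_0+e_1$ are exactly the dimension vectors of the indecomposable modules of the $A_2$ quiver.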
For a Coxeter element $c=s_{\sigma_1}\ldots s_{\sigma_n}$, we say it is *admissible* with respect to the orientation of $Q$ if there is no arrow from $\sigma_i$ to $\sigma_j$ in $Q$ for any $i>j$. A word $\w$ in $W$ is an expression in the free monoid generated by $s_i, i\in Q_0$. For $w\in W$, denote by $l(w)$ its *length*, that is, the length of the shortest word for $w$ as a product of simple reflections. A *reduced* word $\w$ for an element $w\in W$ is a word such that $\w=w$ with minimal length. The notion of reduced word leads to the *weak order* $\leq$ on $W$, i.e. $x\leq y$ if and only if $x$ has a reduced expression which is a prefix of some reduced word for $y$. For a word $\w$ in $W_Q$, we have the following notions. - An *inversion* of $\w$ is a reflection $t$ such that $l(t\w)\leq l(\w)$. The set of inversions of $\w$ is denoted by ${\mathrm{Inv}}(w)$. - A *descent* of $\w$ is a simple reflection $s$ such that $l(\w s)\leq l(\w)$. The set of descents of $\w$ is denoted by ${\mathrm{Des}}(w)$. - A *cover reflection* of $w$ is a reflection $t$ such that $t\w=\w s$ for some descent $s$ of $\w$. The set of cover reflections of $\w$ is denoted by ${\mathrm{Cov}}(w)$. For a word $a=a_1\ldots a_k$, define the support ${\mathrm{supp}}(a)$ to be $\{a_1,\ldots,a_k\}$. Fix a Coxeter element $c=s_{\sigma_1}\ldots s_{\sigma_n}$. A word $\w$ is called *$c$-sortable* if it has the form $\w=c^{(0)}c^{(1)}\ldots c^{(m)}$, where $c^{(i)}$ are subwords of $c$ satisfying $${\mathrm{supp}}(c^{(0)})\subseteq{\mathrm{supp}}(c^{(1)})\subseteq\cdots\subseteq{\mathrm{supp}}(c^{(m)})\subseteq Q_0.$$ Similarly to normal words, a $\mathrm{T}$-word is an expression in the free monoid generated by elements in the set $\mathrm{T}$ of all reflections. Denote by $l_T(w)$ its *absolute length*, that is, the length of the shortest word for $w$ as a product of arbitrary reflections.
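The support condition in the definition of $c$-sortability can be tested mechanically. A small sketch that greedily factorises a word into subwords of $c$ and checks the nesting of supports as stated above (it does not check that the word is reduced; vertex labels and examples are illustrative):

```python
def c_factorise(word, c):
    """Greedily factorise `word` (a list of vertices) into maximal subwords
    of the Coxeter word c (a list containing each vertex once)."""
    pos = {v: i for i, v in enumerate(c)}
    blocks, block, ptr = [], [], 0
    for letter in word:
        if pos[letter] < ptr:      # letter cannot extend the current subword
            blocks.append(block)
            block, ptr = [], 0
        block.append(letter)
        ptr = pos[letter] + 1
    if block:
        blocks.append(block)
    return blocks

def supports_nested(word, c):
    """Check supp(c^(0)) <= supp(c^(1)) <= ... for the greedy factorisation."""
    sups = [set(b) for b in c_factorise(word, c)]
    return all(a <= b for a, b in zip(sups, sups[1:]))

# With c = s_0 s_1: the word s_0 s_0 s_1 factorises as (s_0)(s_0 s_1) and
# {0} <= {0, 1}, so the support condition holds; s_0 s_1 s_0 factorises as
# (s_0 s_1)(s_0) and violates it.
assert c_factorise([0, 0, 1], [0, 1]) == [[0], [0, 1]]
assert supports_nested([0, 0, 1], [0, 1]) is True
assert supports_nested([0, 1, 0], [0, 1]) is False
```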
So we have the notion of reduced $\mathrm{T}$-words, which induces the *absolute order* $\leq_{\mathrm{T}}$ on $W$. The *noncrossing partitions*, with respect to a Coxeter element $c$, for $W$ are the elements between the identity and $c$ with respect to the absolute order. The *rank* of a noncrossing partition is its absolute length. Hearts and t-structures ----------------------- We collect some facts from [@KQ11] about tilting theory. A *(bounded) t-structure* on a triangulated category $\hua{D}$ is a full subcategory $\hua{P} \subset \hua{D}$ with $\hua{P}[1] \subset \hua{P}$ satisfying the following - if one defines $\hua{P}^{\perp}=\{ G\in\hua{D}: \Hom_{\hua{D}}(F,G)=0, \forall F\in\hua{P} \}$, then, for every object $E\in\hua{D}$, there is a unique triangle $F \to E \to G\to F[1]$ in $\hua{D}$ with $F\in\hua{P}$ and $G\in\hua{P}^{\perp}$; - for every object $M$, the shifts $M[k]$ are in $\hua{P}$ for $k\gg0$ and in $\hua{P}^{\perp}$ for $k\ll0$, or equivalently, $$\hua{D}= \displaystyle\bigcup_{i,j \in \ZZ} \hua{P}^\perp[i] \cap \hua{P}[j].$$ It follows immediately that we also have $$\hua{P}=\{ F\in\hua{D}: \Hom_{\hua{D}}(F,G)=0, \forall G\in\hua{P}^\perp \}.$$ Note that $\hua{P}^{\perp}[-1]\subset \hua{P}^{\perp}$. The *heart* of a t-structure $\hua{P}$ is the full subcategory $${\mathrm{\hua{H}}}= \hua{P}^\perp[1]\cap\hua{P}$$ and a t-structure is uniquely determined by its heart. More precisely, any bounded t-structure $\hua{P}$ with heart ${\mathrm{\hua{H}}}$ determines, for each $M$ in $\hua{D}$, a canonical filtration $$\label{eq:canonfilt} \xymatrix@C=0,5pc{ 0=M_0 \ar[rr] && M_1 \ar[dl] \ar[rr] && \cdots\ar[rr] && M_{m-1} \ar[rr] && M_m=M \ar[dl] \\ & H_1[k_1] \ar@{-->}[ul] && && && H_m[k_m] \ar@{-->}[ul] }$$ where $H_i \in {\mathrm{\hua{H}}}$ and $k_1 > \ldots > k_m$ are integers.
Moreover, the $k$-th homology of $M$, with respect to ${\mathrm{\hua{H}}}$, is $$\begin{gathered} \label{eq:homology} {\mathrm{\bf H}_{k}}(M)= \begin{cases} H_i & \text{if $k=k_i$} \\ 0 & \text{otherwise.} \end{cases}\end{gathered}$$ Then $\hua{P}$ consists of those objects with no (nonzero) negative homology, $\hua{P}^\perp$ of those with only negative homology, and ${\mathrm{\hua{H}}}$ of those with homology only in degree 0. There is a natural partial order on hearts given by inclusion of their corresponding t-structures. More precisely, for two hearts ${\mathrm{\hua{H}}}_1$ and ${\mathrm{\hua{H}}}_2$ in $\hua{D}$, with t-structures $\hua{P}_1$ and $\hua{P}_2$, we say $$\label{def:ineq} {\mathrm{\hua{H}}}_1 \leq {\mathrm{\hua{H}}}_2$$ if and only if $\hua{P}_2\subset\hua{P}_1$, or equivalently ${\mathrm{\hua{H}}}_2\subset \hua{P}_1$, or equivalently $\hua{P}^\perp_1\subset\hua{P}^\perp_2$, or equivalently ${\mathrm{\hua{H}}}_1\subset \hua{P}^\perp_2[1]$. Torsion pair and tilting ------------------------ A notion similar to a t-structure on a triangulated category is that of a torsion pair in an abelian category. Tilting with respect to a torsion pair in the heart of a t-structure provides a way to pass between different t-structures. A *torsion pair* in an abelian category $\hua{C}$ is a pair of full subcategories $\<\hua{F},\hua{T}\>$ of $\hua{C}$, such that $\Hom(\hua{T},\hua{F})=0$ and furthermore every object $E \in \hua{C}$ fits into a short exact sequence $ \xymatrix@C=0.5cm{0 \ar[r] & E^{\hua{T}} \ar[r] & E \ar[r] & E^{\hua{F}} \ar[r] & 0}$ for some objects $E^{\hua{T}} \in \hua{T}$ and $E^{\hua{F}} \in \hua{F}$. \[(Happel, Reiten, Smalø)\] Let ${\mathrm{\hua{H}}}$ be a heart in a triangulated category $\hua{D}$. Suppose further that $\<\hua{F},\hua{T}\>$ is a torsion pair in ${\mathrm{\hua{H}}}$.
Then the full subcategory $${\mathrm{\hua{H}}}^\sharp =\{ E \in \hua{D}:{\mathrm{\bf H}_{1}}(E) \in \hua{F}, {\mathrm{\bf H}_{0}}(E) \in \hua{T} \mbox{ and } {\mathrm{\bf H}_{i}}(E)=0 \mbox{ otherwise} \}$$ is also a heart in $\hua{D}$, as is $${\mathrm{\hua{H}}}^\flat =\{ E \in \hua{D}:{\mathrm{\bf H}_{0}}(E) \in \hua{F}, {\mathrm{\bf H}_{-1}}(E) \in \hua{T} \mbox{ and } {\mathrm{\bf H}_{i}}(E)=0 \mbox{ otherwise} \}.$$ Recall that the homology ${\mathrm{\bf H}_{\bullet}}$ was defined in \eqref{eq:homology}. We call ${\mathrm{\hua{H}}}^\sharp$ the *forward tilt* of ${\mathrm{\hua{H}}}$, with respect to the torsion pair $\<\hua{F},\hua{T}\>$, and ${\mathrm{\hua{H}}}^\flat$ the *backward tilt* of ${\mathrm{\hua{H}}}$. Note that ${\mathrm{\hua{H}}}^\flat={\mathrm{\hua{H}}}^\sharp[-1]$. Furthermore, ${\mathrm{\hua{H}}}^\sharp$ has a torsion pair $\<\hua{T},\hua{F}[1]\>$ and we have $$\hua{T}={\mathrm{\hua{H}}}\cap{\mathrm{\hua{H}}}^\sharp, \quad \hua{F}={\mathrm{\hua{H}}}\cap{\mathrm{\hua{H}}}^\sharp[-1].$$ With respect to this torsion pair, the forward and backward tilts are $\bigl({\mathrm{\hua{H}}}^\sharp\bigr)^\sharp={\mathrm{\hua{H}}}[1]$ and $\bigl({\mathrm{\hua{H}}}^\sharp\bigr)^\flat={\mathrm{\hua{H}}}$. Similarly ${\mathrm{\hua{H}}}^\flat$ has a torsion pair $\<\hua{T}[-1],\hua{F}\>$ with $$\begin{gathered} \label{eq:torsion} \hua{F}={\mathrm{\hua{H}}}\cap{\mathrm{\hua{H}}}^\flat, \quad \hua{T}={\mathrm{\hua{H}}}\cap{\mathrm{\hua{H}}}^\flat[1].\end{gathered}$$ With respect to this torsion pair, we have $\bigl({\mathrm{\hua{H}}}^\flat\bigr)^\sharp={\mathrm{\hua{H}}}$, $\bigl({\mathrm{\hua{H}}}^\flat\bigr)^\flat={\mathrm{\hua{H}}}[-1]$. Recall the basic property of the partial order between a heart and its tilts as follows. \[lem:tiltorder\] Let ${\mathrm{\hua{H}}}$ be a heart in $\hua{D}(Q)$. Then ${\mathrm{\hua{H}}}<{\mathrm{\hua{H}}}[m]$ for $m>0$.
For any forward tilt ${\mathrm{\hua{H}}}^\sharp$ and backward tilt ${\mathrm{\hua{H}}}^\flat$, we have $${\mathrm{\hua{H}}}[-1] \leq {\mathrm{\hua{H}}}^\flat \leq {\mathrm{\hua{H}}}\leq {\mathrm{\hua{H}}}^\sharp \leq {\mathrm{\hua{H}}}[1].$$ Further, the forward tilts ${\mathrm{\hua{H}}}^\sharp$ can be characterized as precisely the hearts between ${\mathrm{\hua{H}}}$ and ${\mathrm{\hua{H}}}[1]$; similarly, the backward tilts ${\mathrm{\hua{H}}}^\flat$ are those between ${\mathrm{\hua{H}}}[-1]$ and ${\mathrm{\hua{H}}}$. Recall that an object in an abelian category is *simple* if it has no proper subobjects, or equivalently it is not the middle term of any (non-trivial) short exact sequence. An object $M$ is *rigid* if $\Ext^1(M,M)=0$. \[def:simpletilt\] We say a forward tilt is *simple* if the corresponding torsion free part is generated by a single rigid simple object $S$. We denote the new heart by ${{{\mathrm{\hua{H}}}}^{\sharp}_{S}}$. Similarly, a backward tilt is simple if the corresponding torsion part is generated by such a simple, and the new heart is denoted by ${{{\mathrm{\hua{H}}}}^{\flat}_{S}}$. For the standard heart $\nzero$ in ${\mathrm{\hua{D}}}(Q)$, an APR tilt, which reverses all arrows at a sink/source of $Q$, is an example of a simple (forward/backward) tilt. Simple tilting leads to the notion of exchange graphs. ([@KQ11])\[def:eg\] The *exchange graph* ${\mathrm{EG}}{\mathrm{\hua{D}}}(Q)$ of a triangulated category ${\mathrm{\hua{D}}}$ is the oriented graph whose vertices are all hearts in ${\mathrm{\hua{D}}}$ and whose edges correspond to the simple ***backward*** tiltings between them. We denote by ${\mathrm{EG}^\circ}{\mathrm{\hua{D}}}(Q)$ the ‘principal’ component of ${\mathrm{EG}}{\mathrm{\hua{D}}}(Q)$, that is, the connected component containing the heart ${\mathrm{\hua{H}}}_Q$.
Furthermore, denote by ${\mathrm{Eg}_Q}$ the full subgraph of ${\mathrm{EG}^\circ}{\mathrm{\hua{D}}}(Q)$ consisting of those hearts which are backward tilts of ${\mathrm{\hua{H}}}_Q$. We have the following proposition, which ensures that we can tilt at any simple of any heart in ${\mathrm{Eg}_Q}$. Let $Q$ be an acyclic quiver. Then every heart in ${\mathrm{EG}^\circ}{\mathrm{\hua{D}}}(Q)$ is finite and rigid (i.e. has finitely many simples, each of which is rigid) and $${\mathrm{Eg}_Q}=\{{\mathrm{\hua{H}}}\in{\mathrm{EG}^\circ}(Q)\mid {\mathrm{\hua{H}}}_Q[-1]\leq{\mathrm{\hua{H}}}\leq{\mathrm{\hua{H}}}_Q \}.$$ Unfortunately, we take a different convention to [@KQ11] (backward tilting instead of forward). Thus an exchange graph in this paper has the opposite orientation of the exchange graph there. \[ex:pentagon\] Let $Q\colon=(2 \to 1)$ be a quiver of type $A_2$. A piece of the AR-quiver of ${\mathrm{\hua{D}}}(Q)$ is: $$\xymatrix@C=1pc@R=1pc{ \cdots\qquad & P_2[-1] \ar[dr] && S_1 \ar[dr] && S_2\ar[dr] \\ S_1[-1] \ar[ur] && S_2[-1] \ar[ur] && P_2 \ar[ur]&&\cdots}$$ Then ${\mathrm{EG}}_Q$ is as follows: $${\xymatrix@R=1.5pc@C=.5pc{ &&\{S_1[-1], S_2\} \ar[drr]\\ \{S_1, S_2\} \ar[urr] \ar[ddr] &&&& \{S_1[-1], S_2[-1]\}\\\\ &\{P_2,S_2[-1]\} \ar[rr] && \{P_2[-1], S_1\} \ar[uur], }}$$ where we denote a heart by the set of its simples. Simple (backward) tilting sequence {#sec:sst} ---------------------------------- Let $\s=i_1\ldots i_m$ be a sequence with $i_j\in Q_0$. We define a sequence of hearts ${\mathrm{\hua{H}}}_{\s,j}$ with simples $$\Sim{\mathrm{\hua{H}}}_{\s,j}=\{S_i^{\s,j} \mid i\in Q_0\},\quad 0\leq j \leq m,$$ inductively as follows. - ${\mathrm{\hua{H}}}_{\s,0}={\mathrm{\hua{H}}}_Q$ with $S_i^{\s,0}=S_i$ for any $i\in Q_0$.
- For $0\leq j\leq m-1$, we have $${\mathrm{\hua{H}}}_{\s,{j+1}}={{({\mathrm{\hua{H}}}_{\s,j})}^{ \flat }_{ S^{\s,j}_{i_{j+1}} }}.$$ Note that $\Sim{\mathrm{\hua{H}}}_{\s,{j+1}}$ is given by the formula [@KQ11 Proposition 5.2 (5.2)] in terms of $\Sim{\mathrm{\hua{H}}}_{\s,{j}}$ so that each simple in $\Sim{\mathrm{\hua{H}}}_{\s,{j+1}}$ inherits a labeling (in $Q_0=\{1,\ldots,n\}$) from the corresponding simple in $\Sim{\mathrm{\hua{H}}}_{\s,{j}}$ inductively. Define $${\mathrm{\hua{H}}}_\s={\mathrm{\hua{H}}}_\s(Q)\colon={\mathrm{\hua{H}}}_{\s, m}$$ and ${\mathrm{P}}(\s)$ to be the path ${\mathrm{P}}(\s)=T_m^\s \cdots T_1^\s$ as follows $${\mathrm{P}}(\s)\colon\quad {\mathrm{\hua{H}}}_Q={\mathrm{\hua{H}}}_{\s, 0} \xrightarrow{ T_1^\s } {\mathrm{\hua{H}}}_{\s,1} \xrightarrow{ T_2^\s } \ldots \xrightarrow{ T_m^\s } {\mathrm{\hua{H}}}_{\s,m}={\mathrm{\hua{H}}}_\s,$$ in ${\mathrm{EG}^\circ}{\mathrm{\hua{D}}}(Q)$, where $T_j^\s=S_{i_j}^{\s,{j-1}}$ is the $i_j$-th simple in ${\mathrm{\hua{H}}}_{\s,{j-1}}$. As usual, the support ${\mathrm{supp}}{\mathrm{P}}(\s)$ of ${\mathrm{P}}(\s)$ is the set $\{ T_1^\s,\ldots, T_m^\s \}$. Green mutation {#sec:keller} ============== In this section, we give an interpretation of green mutation via King-Qiu’s Ext-quivers of hearts, which provides a proof of Keller’s theorem. Green quiver mutation --------------------- (\[Fomin-Zelevinsky\])\[def:mutation\] Let $R$ be a finite quiver without loops or $2$-cycles. The *mutation* $\mu_k$ on $R$ at vertex $k$ is the quiver $R'=\mu_k(R)$ obtained from $R$ as follows - adding an arrow $i\to j$ for any pair of arrows $i\to k$ and $k \to j$ in $R$; - reversing all arrows incident with $k$; - deleting as many $2$-cycles as possible. It is straightforward to see that the mutation is an involution, i.e. $\mu_k^2=\mathrm{id}$.
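Quiver mutation is conveniently computed on the skew-symmetric exchange matrix $B$ with $b_{ij}=\#\{\text{arrows } i\to j\}-\#\{\text{arrows } j\to i\}$; the three steps above then collapse into Fomin–Zelevinsky's single matrix formula. A minimal Python sketch (our own illustration, not from the paper):

```python
def mutate(B, k):
    """Fomin-Zelevinsky mutation of a skew-symmetric integer matrix B at k:
         b'_ij = -b_ij                                 if i == k or j == k,
         b'_ij = b_ij + (|b_ik| b_kj + b_ik |b_kj|)/2  otherwise.
    This realizes: add arrows through k, reverse arrows at k, cancel 2-cycles."""
    n = len(B)
    return [[-B[i][j] if k in (i, j)
             else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
             for j in range(n)] for i in range(n)]

# A_2 quiver 2 -> 1, encoded 0-indexed as a single arrow 1 -> 0.
B_A2 = [[0, -1], [1, 0]]
```

Mutating `B_A2` at either index simply reverses the arrow, and mutating twice at the same index returns the original matrix, reflecting $\mu_k^2=\mathrm{id}$.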
A *mutation sequence* $\s=i_1 \ldots i_m$ on $R$ is a sequence with $i_j\in R_0$, and we define $$R_\s\colon=\mu_\s(R)=\mu_{i_m}( \mu_{i_{m-1}}(\ldots \mu_{i_1}(R) \ldots)).$$ As in Section \[sec:sst\], a (green) mutation sequence $\s$ induces a sequence of simple (backward) tiltings and a heart ${\mathrm{\hua{H}}}_\s$. Let ${\widetilde{Q}}$ be the *principal extension* of $Q$, i.e. the quiver obtained from $Q$ by adding a new frozen vertex $i'$ and a new arrow $i'\to i$ for each vertex $i\in Q_0$. Note that we will never mutate a quiver at a frozen vertex and so mutation sequences $\s$ for ${\widetilde{Q}}$ are precisely mutation sequences of $Q$. \[def:green\] Let $\s$ be a mutation sequence of ${\widetilde{Q}}$. - A vertex $j$ in the quiver ${\widetilde{Q}}_\s$ is called *green* if there are no arrows from $j$ to any frozen vertex $i'$; - A vertex $j$ is called *red* if there are no arrows to $j$ from any frozen vertex $i'$. Let ${\mathrm{V^{r}}(\s)}$ be the set of red vertices in ${\widetilde{Q}}_\s$. - A *green mutation sequence* $\s$ on $Q$ (or ${\widetilde{Q}}$) is a mutation sequence on $Q$ such that every mutation in the sequence is at a green vertex in the corresponding quiver. Such a green mutation sequence $\s$ is *maximal* if ${\mathrm{V^{r}}(\s)}=Q_0$. Principal extension of Ext-quivers ---------------------------------- Following [@KQ11], we will also use Ext-quivers of hearts to interpret green mutation. \[def:extquiv\] Let ${\mathrm{\hua{H}}}$ be a finite heart in a triangulated category ${\mathrm{\hua{D}}}$ with $\mathbf{S}_{{\mathrm{\hua{H}}}}=\bigoplus_{S\in\Sim{\mathrm{\hua{H}}}} S$. The Ext-quiver ${\mathcal{Q}({\mathrm{\hua{H}}})}$ is the (positively) graded quiver whose vertices are the simples of ${\mathrm{\hua{H}}}$ and whose graded edges correspond to a basis of $\End^\bullet(\mathbf{S}_{{\mathrm{\hua{H}}}},\mathbf{S}_{{\mathrm{\hua{H}}}})$.
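The principal extension and the green/red dichotomy above are easy to trace on exchange matrices. The sketch below (our own illustration, reusing the Fomin–Zelevinsky matrix mutation rule) extends a quiver on vertices $0,\dots,n-1$ by frozen vertices $n,\dots,2n-1$ with arrows $i'\to i$, and calls a mutable vertex green (resp. red) when it has no arrows to (resp. from) the frozen ones.

```python
def mutate(B, k):
    """Fomin-Zelevinsky matrix mutation at index k (never a frozen index)."""
    n = len(B)
    return [[-B[i][j] if k in (i, j)
             else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
             for j in range(n)] for i in range(n)]

def principal_extension(B):
    """Add frozen vertices n..2n-1 and one arrow i' -> i for each i."""
    n = len(B)
    top = [row[:] + [-int(i == j) for j in range(n)] for i, row in enumerate(B)]
    bot = [[int(i == j) for j in range(n)] + [0] * n for i in range(n)]
    return top + bot

def is_green(B, j, n):
    """No arrows from j to any frozen vertex."""
    return all(B[j][n + i] <= 0 for i in range(n))

def is_red(B, j, n):
    """No arrows to j from any frozen vertex."""
    return all(B[n + i][j] <= 0 for i in range(n))

def run_green(B, seq):
    """Apply a mutation sequence, insisting every step is at a green vertex;
    return the final extended matrix, or None if some step is not green."""
    n = len(B)
    Bt = principal_extension(B)
    for k in seq:
        if not is_green(Bt, k, n):
            return None
        Bt = mutate(Bt, k)
    return Bt
```

For the $A_2$ quiver with orientation $1\to2$ (0-indexed as $0\to1$), the sequences $[1,0]$ and $[0,1,0]$ both run entirely through green vertices and end with every mutable vertex red, matching the two maximal green sequences $21$ and $121$ of the $A_2$ example up to our relabeling.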
Further, the *CY-3 double* of a graded quiver $\hua{Q}$, denoted by ${\mathrm{CY}^{3}(\hua{Q})}$, is defined to be the quiver obtained from $\hua{Q}$ by adding an arrow $T\to S$ of degree $3-k$ for each arrow $S\to T$ of degree $k$ and adding a loop of degree 3 at each vertex. See Table \[quivers\] for an example of Ext-quivers and CY-3 doubling. For the principal extension ${\widetilde{Q}}$ of a quiver $Q$, consider its module category ${\mathrm{\hua{H}}}_{{\widetilde{Q}}}$ and derived category ${\mathrm{\hua{D}}}({\widetilde{Q}})$. Since $Q$ is a subquiver of its extension ${\widetilde{Q}}$, ${\mathrm{\hua{H}}}_Q$ and ${\mathrm{\hua{D}}}(Q)$ are subcategories of ${\mathrm{\hua{H}}}_{{\widetilde{Q}}}$ and ${\mathrm{\hua{D}}}({\widetilde{Q}})$ respectively. A sequence $\s$ also induces a simple tilting sequence in ${\mathrm{\hua{D}}}({\widetilde{Q}})$ (starting at ${\mathrm{\hua{H}}}_{{\widetilde{Q}}}$) and corresponds to a heart, denoted by $\widetilde{{\mathrm{\hua{H}}}_\s}$. Let the set of simples in $\Sim{\mathrm{\hua{H}}}_{{\widetilde{Q}}}-\Sim{\mathrm{\hua{H}}}_Q$ be $$\Sim{\mathrm{\hua{H}}}_{Q'}\colon=\{S_i'\mid i\in Q_0\}.$$ A straightforward calculation gives $$\Hom^k(S_i',S_j)=\delta_{ij}\delta_{1k}, \quad \forall i,j\in Q_0, k\in\kong{Z}.$$ Hence, for any $M\in{\mathrm{\hua{H}}}_Q$, we have $$\begin{gathered} \label{eq:ijk1} \Hom^k(\bigoplus_{i\in Q_0} S_i',M)\neq0\quad \iff\quad k=1.\end{gathered}$$ We have the following lemma. \[lem:bubian\] For any sequence $\s$, we have $\Sim\widetilde{{\mathrm{\hua{H}}}_\s}=\Sim{\mathrm{\hua{H}}}_{\s}\cup\Sim{\mathrm{\hua{H}}}_{Q'}$. We use induction on the length of $\s$, starting from the trivial case when $\s=\emptyset$. Suppose that $\s=\t j$ with $\Sim\widetilde{{\mathrm{\hua{H}}}_\t}=\Sim{\mathrm{\hua{H}}}_{\t}\cup\Sim{\mathrm{\hua{H}}}_{Q'}$.
By [@KQ11 Lemma 3.4], we have ${\mathrm{\hua{H}}}_\t\leq{\mathrm{\hua{H}}}_Q$ and hence the homology of any object in ${\mathrm{\hua{H}}}_\t$, with respect to ${\mathrm{\hua{H}}}_Q$, lives in non-positive degrees. Thus, any $M\in{\mathrm{\hua{H}}}_\t$ admits a filtration with factors $S_i[k], i\in Q_0, k\leq0$. As $s'$ is a source in ${\widetilde{Q}}$ for any $s\in Q_0$, $S_s'$ is an injective object in ${\mathrm{\hua{H}}}_{{\widetilde{Q}}}$, which implies that $\Ext^1(S_i[k],S_s')=0$ for any $i\in Q_0$ and $k\leq0$. Therefore, we have $\Ext^1(M, S_s')=0$ for any $M\in{\mathrm{\hua{H}}}_\t$, in particular, for $M=S_j^\t$. Then applying [@KQ11 formula (5.2)] to the backward tilts ${{{\mathrm{\hua{H}}}_\t}^{\flat}_{S_j^\t}}$ and ${{(\widetilde{{\mathrm{\hua{H}}}_\t})}^{\flat}_{S_j^\t}}$ gives $\Sim\widetilde{{\mathrm{\hua{H}}}_\s}=\Sim{\mathrm{\hua{H}}}_{\s}\cup\Sim{\mathrm{\hua{H}}}_{Q'}$. By the lemma, we know that ${\mathcal{Q}({\mathrm{\hua{H}}}_\s)}$ is a subquiver of ${\mathcal{Q}(\widetilde{{\mathrm{\hua{H}}}_\s})}$. \[def:p.e.\] Given a sequence $\s$, define the *principal extension* of the Ext-quiver ${\mathcal{Q}({\mathrm{\hua{H}}}_\s)}$ to be the Ext-quiver ${\mathcal{Q}(\widetilde{{\mathrm{\hua{H}}}_\s})}$, where the vertices in $\Sim{\mathrm{\hua{H}}}_{Q'}$ are the frozen vertices. From the proof of Lemma \[lem:bubian\], it is straightforward to see the following. \[lem:source\] Every frozen vertex $S_i'$ is a source in ${\mathcal{Q}(\widetilde{{\mathrm{\hua{H}}}_\s})}$. Green mutation as simple (backward) tilting ------------------------------------------- Before we prove Keller’s observations for green mutation, we need the following result concerning the relation between quivers (for clusters) and Ext-quivers. Because the proof is technical, we leave it to the appendix.
\[lem:KQ\] If $\widetilde{{\mathrm{\hua{H}}}_\s}\in{\mathrm{EG}}_{{\widetilde{Q}}}$ for some sequence $\s$, then ${\widetilde{Q}}_\s$ is canonically isomorphic to the degree one part of ${\mathrm{CY}^{3}( {\mathcal{Q}(\widetilde{{\mathrm{\hua{H}}}_\s})} )}$. See Appendix \[app\]. Now we proceed to prove our first theorem (which is an interpretation of Keller’s green mutation in the language of tilting in the derived categories). \[thm:keller\] Let $Q$ be an acyclic quiver and $\s$ be a green mutation sequence for $Q$. Then we have the following. ${\mathrm{\hua{H}}}_Q[-1]\leq{\mathrm{\hua{H}}}_\s\leq{\mathrm{\hua{H}}}_Q$ and hence ${\mathrm{\hua{H}}}_\s\in{\mathrm{EG}}_Q$. ${\widetilde{Q}}_\s$ is canonically isomorphic to the degree one part of ${\mathrm{CY}^{3}( {\mathcal{Q}(\widetilde{{\mathrm{\hua{H}}}_\s})} )}$. A vertex $j$ in ${\widetilde{Q}}_\s$ is either green or red. Moreover, it is green if and only if the corresponding simple $S_j^{\s}$ in ${\mathrm{\hua{H}}}_\s$ is in ${\mathrm{\hua{H}}}_Q$ and it is red if and only if $S_j^{\s}$ is in ${\mathrm{\hua{H}}}_Q[-1]$. We use induction on the length of $\s$, starting with the trivial case when $l(\s)=0$. Now suppose that the theorem holds for any green mutation sequence of length less than $m$ and consider the case when $l(\s)=m$. Let $\s=\t j$ where $l(\t)=m-1$ and $j$ is a green vertex in ${\widetilde{Q}}_{\t}$. First, the simple $S_j^\t$ corresponding to $j$ is in ${\mathrm{\hua{H}}}_Q$, by $3^\circ$ of the induction step, which implies $1^\circ$ by [@KQ11 Lemma 5.4, $1^\circ$]. Second, as $\Sim\widetilde{{\mathrm{\hua{H}}}_\s}=\Sim{\mathrm{\hua{H}}}_{\s}\cup\Sim{\mathrm{\hua{H}}}_{Q'}$ by Lemma \[lem:bubian\], ${\mathrm{\hua{H}}}_\s\in{\mathrm{EG}}_Q$ is equivalent to $\widetilde{{\mathrm{\hua{H}}}_\s}\in{\mathrm{EG}}_{{\widetilde{Q}}}$. Then $2^\circ$ follows from Lemma \[lem:KQ\].
Third, since ${\mathrm{\hua{H}}}_Q$ is hereditary, $1^\circ$ implies that any simple $S_j^\s\in\Sim{\mathrm{\hua{H}}}_\s$ is in either ${\mathrm{\hua{H}}}_Q$ or ${\mathrm{\hua{H}}}_Q[-1]$. If $S^\s_j$ is in ${\mathrm{\hua{H}}}_Q$, by \eqref{eq:ijk1}, there are arrows $S_i'\to S^\s_j$ in ${\mathcal{Q}(\widetilde{{\mathrm{\hua{H}}}_\s})}$, each of which has degree one. Then, by $2^\circ$, any such degree one arrow corresponds to an arrow $i'\to j$ in ${\widetilde{Q}}_\s$. Therefore $j$ is green. Similarly, if $S^\s_j$ is in ${\mathrm{\hua{H}}}_Q[-1]$, there are arrows $S_i'\to S^\s_j$ in ${\mathcal{Q}(\widetilde{{\mathrm{\hua{H}}}_\s})}$, each of which has degree two and corresponds to an arrow $i'\leftarrow j$ in ${\widetilde{Q}}_\s$. Then $j$ is red and thus we have $3^\circ$. For a green mutation sequence $\s$ of $Q$, we will call a simple $S^{\s}_j\in\Sim{\mathrm{\hua{H}}}_{\s}$ green/red if the vertex $j$ is green/red in ${\widetilde{Q}}_\s$. The consequences of this theorem include a criterion for a sequence to be a green mutation sequence and one of Keller’s original statements about maximal green mutation sequences. \[cor:keller\] A sequence $\s$ is a green mutation sequence if and only if ${\mathrm{\hua{H}}}\geq{\mathrm{\hua{H}}}_Q[-1]$ for any ${\mathrm{\hua{H}}}\in{\mathrm{supp}}{\mathrm{P}}(\s)$. Further, a green mutation sequence $\s$ is maximal if and only if ${\mathrm{\hua{H}}}_\s={\mathrm{\hua{H}}}_Q[-1]$. Thus, for a maximal green mutation sequence $\s$, ${\widetilde{Q}}_\s$ can be obtained from ${\widetilde{Q}}$ by reversing all arrows that are incident with frozen vertices. The necessity of the first statement follows from $1^\circ$ of Theorem \[thm:keller\]. For the sufficiency, we only need to show that if $\t$ is a green mutation sequence and $\s=\t j$ satisfies ${\mathrm{\hua{H}}}_\s\geq{\mathrm{\hua{H}}}_Q[-1]$, for some $j\in Q_0$, then $\s$ is also a green mutation sequence.
Since ${\mathrm{\hua{H}}}_\t\geq{\mathrm{\hua{H}}}_Q[-1]$, by [@KQ11 Lemma 5.4, $1^\circ$] we know that ${\mathrm{\hua{H}}}_\s\geq{\mathrm{\hua{H}}}_Q[-1]$ implies $S_j^\t$ is in ${\mathrm{\hua{H}}}_Q$. But this means $j$ is a green vertex for $\t$, by $3^\circ$ of Theorem \[thm:keller\], as required. For the second statement, $\s$ is maximal if and only if $S_i^\s\in{\mathrm{\hua{H}}}_Q[-1]$ for any $i\in Q_0$, or equivalently, ${\mathrm{\hua{H}}}_\s={\mathrm{\hua{H}}}_Q[-1]$. This implies the statement immediately. \[ex:keller\] We borrow an example of $A_2$ type green mutations from Keller [@Kel11] (but the orientation slightly differs). Figure \[fig:keller\] gives two different maximal green mutation sequences ($121$ and $21$) whose final quivers are isomorphic to each other. If we identify the isomorphic ones, we recover the pentagon in Example \[ex:pentagon\]. $$\xymatrix@C=1pc@R=1pc{ &&&& {\textcolor{Emerald}{\mathrm{1}}}\ar@{<-}[dd]\ar[rr]&& {\textcolor{Emerald}{\mathrm{2}}}\ar@{<-}[dd]&&&&& {\textcolor{red}{\mathrm{1}}}\ar[dd]&& {\textcolor{Emerald}{\mathrm{2}}}\ar@{<-}[ddll]\ar[ll]\\ &&&\ar@{~>}[lldd]_{\mu_2} &&&& \ar@{~>}[rrr]^{\mu_1}&&& &&&&\ar@{~>}[rrdd]^{\mu_2}\\ &&&& 1' && 2' &&&&& 1' && 2'\ar[uu]\\ &&&&& \ar@{~>}[dddd]_{\mu_{21}} & \ar@{~>}[ddddrrrrr]^{\mu_{121}} &&&&&&&&&&\\ {\textcolor{Emerald}{\mathrm{1}}}\ar@{<-}[dd]\ar@{<-}[rr]&& {\textcolor{red}{\mathrm{2}}}\ar[dd]&&&&&&&&&&&&& {\textcolor{Emerald}{\mathrm{1}}}\ar@{<-}[ddrr]&& {\textcolor{red}{\mathrm{2}}}\ar[ddll]\ar@{<-}[ll]\\ &&& \\ 1' && 2' &&&&&&&&&&&&& 1' && 2'\ar@{<-}[uu]\\ & \ar@{~>}[ddrr]_{\mu_1} &&&&&&&&&&&&&&& \ar@{~>}[ddll]^{\mu_1}\\ &&&& {\textcolor{red}{\mathrm{1}}}\ar[dd]\ar[rr]&& {\textcolor{red}{\mathrm{2}}}\ar[dd]&&&&& {\textcolor{red}{\mathrm{1}}}&& {\textcolor{red}{\mathrm{2}}}\ar[ll]\\ &&&&&&& \ar@{<->}[rrr]^{\text{iso.}} &&&&&&&\\ &&&& 1' && 2' &&&&& 1'\ar@{<-}[uurr] && 2'\ar@{<-}[uull] }$$ Wide subcategory via red simples -------------------------------- In this section,
we aim to show that the red simples are precisely the simples in the wide subcategory ${\hua{W}}_{\s}$ corresponding to the torsion class $\hua{T}_{\s}$ in the sense of Ingalls-Thomas. Recall that a wide subcategory of an abelian category is an exact abelian subcategory closed under extensions. Further, given a finitely generated torsion class $\hua{T}$ in ${\mathrm{\hua{H}}}_Q$, define the corresponding wide subcategory ${\hua{W}}(\hua{T})$ to be (cf. [@IT09 Section 2.3]) $$\begin{gathered} \label{eq:defwide} \{ M\in\hua{T} \mid \forall (f:X\to M)\in\hua{T}, \ker(f)\in\hua{T} \}.\end{gathered}$$ First, we give another characterization of ${\hua{W}}(\hua{T})$. \[pp:wide\] Let $\<\hua{F},\hua{T}\>$ be a finitely generated torsion pair in ${\mathrm{\hua{H}}}_Q$ and ${\mathrm{\hua{H}}}^{\sharp}$ be the corresponding forward tilt. Then we have $$\begin{gathered} \label{eq:wide} \Sim{\hua{W}}(\hua{T})=\hua{T}\cap\Sim{\mathrm{\hua{H}}}^\sharp.\end{gathered}$$ By [@IT09] and [@KQ11], such a torsion pair corresponds to a cluster tilting object (in the cluster category of ${\mathrm{\hua{D}}}(Q)$) and thus the heart ${\mathrm{\hua{H}}}^\sharp$ is in ${\mathrm{EG}}_Q$ and hence finite. Noticing that ${\mathrm{\hua{H}}}^\sharp$ admits a torsion pair $\<\hua{T},\hua{F}[1]\>$, each of its simples is either in $\hua{T}$ or $\hua{F}[1]$. Let $\hua{W}$ be the wide subcategory of ${\mathrm{\hua{H}}}^\sharp$ generated by the simples in $\hua{T}\cap\Sim{\mathrm{\hua{H}}}^\sharp$. First, for any $S\in\hua{T}\cap\Sim{\mathrm{\hua{H}}}^\sharp$ and $$(f:X\to S)\in\hua{T}\subset{\mathrm{\hua{H}}}^\sharp,$$ $f$ is surjective (in ${\mathrm{\hua{H}}}^\sharp$) since $S$ is simple. Thus $\ker(f)$ is in $\hua{T}$ since $\hua{T}$ is a torsion free class in ${\mathrm{\hua{H}}}^\sharp$, which implies $S\in{\hua{W}}(\hua{T})$. Therefore $\hua{W}\subset{\hua{W}}(\hua{T})$ and we claim that they are equal.
If not, let $M$ be an object in ${\hua{W}}(\hua{T})-\hua{W}$ whose simple filtration in ${\mathrm{\hua{H}}}^\sharp$ (with factors in $\Sim{\mathrm{\hua{H}}}^\sharp$) has the minimal number of factors. Let $S$ be a simple top of $M$; then $X=\ker(M\twoheadrightarrow S)$ is in $\hua{T}$. If $S$ is in $\hua{T}\cap\Sim{\mathrm{\hua{H}}}^\sharp$, then $X$ is in ${\hua{W}}(\hua{T})-\hua{W}$ with fewer simple factors, contradicting the choice of $M$. Hence $S\in\hua{F}[1]\cap\Sim{\mathrm{\hua{H}}}^\sharp$. Then we obtain a short exact sequence $$0 \to X \hookrightarrow M \twoheadrightarrow S \to 0$$ in ${\mathrm{\hua{H}}}^\sharp$ which becomes a short exact sequence $$0 \to S[-1] \hookrightarrow X \overset{f}{\twoheadrightarrow} M \to 0$$ in ${\mathrm{\hua{H}}}_Q$. But $\ker(f)=S[-1]\in\hua{F}$, which contradicts the fact that $M$ is in ${\hua{W}}(\hua{T})$ (cf. \eqref{eq:defwide}). Therefore ${\hua{W}}(\hua{T})=\hua{W}$ and \eqref{eq:wide} follows. An immediate consequence of this proposition is as follows; first we recall Bridgeland’s notion of a stability condition. \[def:stab\] A *stability condition* $\sigma = (Z,\hua{P})$ on $\hua{D}$ consists of a group homomorphism $Z:K(\hua{D}) \to \kong{C}$ called the *central charge* and full additive subcategories $\hua{P}(\varphi) \subset \hua{D}$ for each $\varphi \in \kong{R}$, satisfying the following axioms: $1^\circ$ if $0 \neq E \in \hua{P}(\varphi)$ then $Z(E) = m(E) \exp(\varphi \pi \mathbf{i} )$ for some $m(E) \in \kong{R}_{>0}$; $2^\circ$ for all $\varphi \in \kong{R}$, $\hua{P}(\varphi+1)=\hua{P}(\varphi)[1]$; $3^\circ$ if $\varphi_1>\varphi_2$ and $A_i \in \hua{P}(\varphi_i)$ then $\Hom_{\hua{D}}(A_1,A_2)=0$; $4^\circ$ for each nonzero object $E \in \hua{D}$ there is a finite sequence of real numbers $$\varphi_1 > \varphi_2 > ... > \varphi_m$$ and a collection of triangles $$\xymatrix@C=0.8pc@R=1.4pc{ 0=E_0 \ar[rr] && E_1 \ar[dl] \ar[rr] && E_2 \ar[dl] \ar[rr] && ...
\ \ar[rr] && E_{m-1} \ar[rr] && E_m=E \ar[dl] \\ & A_1 \ar@{-->}[ul] && A_2 \ar@{-->}[ul] && && && A_m \ar@{-->}[ul] },$$ with $A_j \in \hua{P}(\varphi_j)$ for all $j$. We call the collection of subcategories $\{\hua{P}(\varphi)\}$, satisfying $2^\circ \sim 4^\circ$ in Definition \[def:stab\], the *slicing*. Note that $\hua{P}(\varphi)$ is always abelian for any $\varphi\in\kong{R}$ (cf. [@B1]) and we call it a *semistable subcategory* of $\sigma$. \[cor:stab\] A finitely generated wide subcategory in ${\mathrm{\hua{H}}}_Q$ is a semistable subcategory of some Bridgeland stability condition on ${\mathrm{\hua{D}}}(Q)$. Let ${\hua{W}}(\hua{T})$ be a finitely generated wide subcategory in ${\mathrm{\hua{H}}}_Q$ which corresponds to the torsion pair $\<\hua{F},\hua{T}\>$. Let ${\mathrm{\hua{H}}}^{\sharp}$ be the corresponding forward tilt. Recall that we have the following (cf. [@B1]): - To give a stability condition on a triangulated category $\hua{D}$ is equivalent to giving a bounded t-structure on $\hua{D}$ and a stability function on its heart with the HN-property. Thus, a function $Z$ from $\Sim{\mathrm{\hua{H}}}^\sharp$ to the upper half plane $\kong{H}$ gives a stability condition $\sigma(Z, {\mathrm{\hua{H}}}^\sharp)$ on the triangulated category ${\mathrm{\hua{D}}}(Q)$. Then choosing $Z$ as follows $$Z(S)= \begin{cases} i & \text{if $S\in\Sim{\mathrm{\hua{H}}}^\sharp\cap\hua{T}=\Sim{\hua{W}}(\hua{T})$} \\ 0 & \text{if $S\in\Sim{\mathrm{\hua{H}}}^\sharp\cap\hua{F}[1]$} \end{cases}$$ will make ${\hua{W}}(\hua{T})$ a semistable subcategory with respect to $\sigma(Z, {\mathrm{\hua{H}}}^\sharp)$. As Bridgeland stability is an improved version of King’s $\theta$-stability, Corollary \[cor:stab\] immediately implies Ingalls-Thomas’ result that every wide subcategory in ${\mathrm{\hua{H}}}_Q$ is a semistable subcategory for some $\theta$-stability condition on ${\mathrm{\hua{H}}}_Q$.
We end this section by describing the simples of the wide subcategory associated to a green mutation sequence. Let $\s$ be a green mutation sequence and $$\begin{gathered} \label{def:torsion} \hua{T}_\s={\mathrm{\hua{H}}}_Q\cap{\mathrm{\hua{H}}}_\s[1],\end{gathered}$$ which is a torsion class in ${\mathrm{\hua{H}}}_Q$ by \eqref{eq:torsion}. We will write $\hua{W}_\s$ for the wide subcategory ${\hua{W}}(\hua{T}_\s)$ of $\hua{T}_\s$. Recall that ${\mathrm{V^{r}}(\s)}$ is the set of red vertices of a green mutation sequence $\s$. Denote by ${\mathrm{V^{r}}({\mathrm{\hua{H}}}_\s)}$ the set of red simples in ${\mathrm{\hua{H}}}_{\s}$. Let $\s$ be a green mutation sequence. Then $\Sim\hua{W}_\s={\mathrm{V^{r}}({\mathrm{\hua{H}}}_\s)}[1]$. By $3^\circ$ of Theorem \[thm:keller\], we have $${\mathrm{V^{r}}({\mathrm{\hua{H}}}_\s)}={\mathrm{\hua{H}}}_Q[-1]\cap\Sim{\mathrm{\hua{H}}}_\s.$$ Since ${\mathrm{V^{r}}({\mathrm{\hua{H}}}_\s)}\subset{\mathrm{\hua{H}}}_\s$, we have $${\mathrm{V^{r}}({\mathrm{\hua{H}}}_\s)}=({\mathrm{\hua{H}}}_Q[-1]\cap{\mathrm{\hua{H}}}_\s)\cap\Sim{\mathrm{\hua{H}}}_\s=\hua{T}_\s[-1]\cap\Sim{\mathrm{\hua{H}}}_\s,$$ where the second equality uses \eqref{def:torsion}. Noticing that ${\mathrm{\hua{H}}}_\s[1]$ is the forward tilt of ${\mathrm{\hua{H}}}_Q$ with respect to the torsion class $\hua{T}_\s$, we have $$\Sim\hua{W}_\s=\hua{T}_\s\cap\Sim{\mathrm{\hua{H}}}_\s[1]$$ by Proposition \[pp:wide\]. Thus the claim follows. C-sortable words ================ In this section, we will show that it is natural to interpret a $c$-sortable word as a green mutation sequence, which produces many consequences. Main results ------------ Denote by $\widetilde{\gm}=i_1\ldots i_k$ the sequence induced from a $c$-sortable word $\w=s_{i_1}\ldots s_{i_k}$. Note that $\gm$ induces a path ${\mathrm{P}}(\widetilde{\gm})$ and a heart ${\mathrm{\hua{H}}}_{\widetilde{\gm}}$ as in Section \[sec:sst\]. We will drop the tilde of $\widetilde{\w}$ later when it appears in a subscript or superscript.
\[thm:main\] Let $Q$ be an acyclic quiver and $c$ be an admissible Coxeter element with respect to the orientation of $Q$. Let $\w$ be a $c$-sortable word. Then we have the following. $\widetilde{\gm}$ is a green mutation sequence. For any $i\in Q_0$, let $s_i^{\gm}$ be the reflection of $S^{\gm}_i$, the $i$-th simple of ${\mathrm{\hua{H}}}_{\gm}$. Then $$\begin{gathered} \label{eq:main} s_i^{\gm} \cdot \w=\w \cdot s_i .\end{gathered}$$ Let the torsion class $\hua{T}_{\gm}$ be defined as in \eqref{def:torsion}; then $\Ind\hua{T}_{\gm}={\mathrm{supp}}{\mathrm{P}}({\gm})$. We use induction on $l(\w)+\#Q_0$, starting with the trivial case $l(\w)=0$. Suppose that the theorem holds for any $(Q, c, \w)$ with $l(\w)+\#Q_0<m$. Now we consider the case when $l(\w)+\#Q_0=m$. Let $c=s_1 c_-$ without loss of generality. If $s_1$ is not the initial of $\w$, then the theorem reduces to the case for $(Q_-, c_-, \w)$, where $Q_-$ is the full subquiver with vertex set $Q_0-\{1\}$, which is true by the inductive assumption. Next, suppose that $s_1$ is the initial of $\w$, so $\w=s_1 \v$ for some $\v$. Denote by $\widetilde{\v}$ the sequence induced by $\v$. Let $Q_+=\mu_1(Q)$, $c_+=s_1 c s_1$ and we identify $${\mathrm{\hua{H}}}_{Q_+}={\mathrm{mod}}{\mathbf{k}}Q_+\quad\text{with}\quad{\mathrm{\hua{H}}}_{s_1}={{({\mathrm{\hua{H}}}_Q)}^{\flat}_{S_1}}$$ via a so-called APR-tilting (reflecting the source $1$ of $Q$). By [@R07 Lemma 2.5], $\v$ is $c_+$-sortable and hence the theorem holds for $(Q_+, c_+, \v)$ by the inductive assumption. Let $\v=\u s_j$; then the theorem also holds for $(Q, c, s_1 \u)$. Let $T=S_j^\w$ be the $j$-th simple of ${\mathrm{\hua{H}}}_{\w}$. Using the criterion in Corollary \[cor:keller\] for being a green mutation sequence, we know that $$\label{eq:geq} \left\{ \begin{array}{l} {{({\mathrm{\hua{H}}}_\w)}^{\sharp}_{T}}={\mathrm{\hua{H}}}_{s_1 \u}\geq{\mathrm{\hua{H}}}_Q[-1],\\ {\mathrm{\hua{H}}}_{\w}={\mathrm{\hua{H}}}_{\w}(Q)={\mathrm{\hua{H}}}_{\v}(Q_+)\geq{\mathrm{\hua{H}}}_{Q_+}[-1].
\end{array} \right.$$ If ${\mathrm{\hua{H}}}_{\w}\geq{\mathrm{\hua{H}}}_Q[-1]$ fails, comparing with $$\Ind{\mathrm{\hua{H}}}_{Q_+}[-1]=\Ind{\mathrm{\hua{H}}}_Q[-1]-\{S_1[-1]\}\cup\{S_1[-2]\},$$ we must have $T=S_1[-2]$. However, by formula \eqref{eq:main} for $(Q,c,s_1 \u)$ and $j\in Q_0$, noticing that the $j$-th simple of ${\mathrm{\hua{H}}}_{s_1 \u}$ is $T[1]$, we have $$s_{T[1]} \cdot (s_1 \u)=(s_1 \u) \cdot s_j.$$ The RHS is $\w$ while the LHS equals $s_1^2 \u=\u$, contradicting the fact that the $c$-sortable word $\w$ is reduced. So ${\mathrm{\hua{H}}}_{\w}\geq{\mathrm{\hua{H}}}_Q[-1]$, and thus $\widetilde{\w}$ is a green mutation sequence by Corollary \[cor:keller\], as required for $1^\circ$. For $2^\circ$, consider the influence of the APR-tilting on the dimension vectors and the Coxeter group. We know that for any $M\in{\mathrm{\hua{H}}}_Q-\{S_1\}$, the dimension vector ${\mathrm{\underline{dim}}}_+M$ with respect to $Q_+$ equals $s_1({\mathrm{\underline{dim}}}M)$. Thus the reflection $t_M$ of $M$ for $Q_+$ equals $s_1 s_M s_1$. In particular, the reflection $t_i^{\v}$ of $S_i^{\gm}$ for $Q_+$ equals $s_1 s_i^{\w} s_1$. Then formula \eqref{eq:main} gives $$t_i^{\v} \cdot \v=\v \cdot s_i\,\quad \text{or} \quad\,s_i^{\gm} \cdot \w=\w \cdot s_i,$$ as required. Finally, we have $\Ind\hua{T}(Q)_{\gm}=\Ind\hua{T}_{\v}(Q_+)\cup\{S_1\}$, which implies $3^\circ$. Consequences ------------ In this subsection, we discuss various corollaries of Theorem \[thm:main\]. First, we prove the bijection between $c$-sortable words and finite torsion classes in ${\mathrm{\hua{H}}}_Q$, which is essentially equivalent to the result in [@AIRT10] that there is a bijection between $c$-sortable words and finite torsion-free classes in ${\mathrm{\hua{H}}}_Q$. There is a bijection between the set of $c$-sortable words and the set of finite torsion classes in ${\mathrm{\hua{H}}}_Q$, sending such a word $\w$ to $\hua{T}_{\gm}$. Clearly, every torsion class $\hua{T}_{\gm}$ induced by a $c$-sortable word $\w$ is finite.
To see that two different $c$-sortable words $\w_1$ and $\w_2$ induce different finite torsion classes, we use induction on $l(\w)$. It reduces to the case when the initials of $\w_1$ and $\w_2$ are different. Without loss of generality, let the initial $s_1$ of $\w_1$ be to the left of the initial $s_2$ of $\w_2$ in the expression $$c=\cdots s_1 \cdots s_2 c'$$ of the Coxeter element $c$. Now, the sequence of simple tiltings $\widetilde{\w}_2$ takes place in the full subcategory $${\mathrm{\hua{D}}}(Q_{{\mathrm{res}}})\subset{\mathrm{\hua{D}}}(Q),$$ where $Q_{{\mathrm{res}}}$ is the full subquiver of $Q$ restricted to ${\mathrm{supp}}(s_2 c')$. Thus the simple $S_1$ never appears in the path ${\mathrm{P}}(\w_2)$, which implies $\hua{T}_{\widetilde{\w}_1}\neq\hua{T}_{\widetilde{\w}_2}$ by $3^\circ$ of Theorem \[thm:main\]. Therefore, we have an injection from the set of $c$-sortable words to the set of finite torsion classes in ${\mathrm{\hua{H}}}_Q$. To finish, we need to show surjectivity, i.e. that any finite torsion class $\hua{T}$ is equal to $\hua{T}_{\gm}$ for some $c$-sortable word. This is again by induction for $(Q, c, \hua{T})$ on $\#\Ind\hua{T}+\#Q_0$, starting with the trivial case when $\#\Ind\hua{T}=0$. Suppose that surjectivity holds for any $(Q, c, \hua{T})$ with $\#\Ind\hua{T}+\#Q_0<m$ and consider the case when $\#\Ind\hua{T}+\#Q_0=m$. Let $c=s_1 c_-$ without loss of generality. If the simple injective $S_1$ of ${\mathrm{\hua{H}}}_Q$ is not in $\hua{T}$, we claim that $\hua{T}\subset{\mathrm{\hua{H}}}_{Q_-}\subset{\mathrm{\hua{H}}}_Q$, where $Q_-$ is the full subquiver with vertex set $Q_0-\{1\}$. If so, the theorem reduces to the case for $(Q_-, c_-, \hua{T})$, which holds by the inductive assumption. To see the claim, choose any $M\in{\mathrm{\hua{H}}}_Q-{\mathrm{\hua{H}}}_{Q_-}$. Then $S_1$ is a simple factor of $M$ in its canonical filtration, and hence a factor of its top, since $S_1$ is injective. Thus $\Hom(M,S_1)\neq0$.
But $S_1\notin\hua{T}$ implies that $S_1$ lies in the torsion-free class corresponding to $\hua{T}$. So $M\notin\hua{T}$, which implies $\hua{T}\subset{\mathrm{\hua{H}}}_{Q_-}$ as required. If the simple injective $S_1$ of ${\mathrm{\hua{H}}}_Q$ is in $\hua{T}$, then consider the quiver $Q_+=\mu_1(Q)$ and the torsion class $$\hua{T}_+={\mathrm{add}}\left(\Ind\hua{T}-\{S_1\}\right).$$ Similarly to the proof of Theorem \[thm:main\], we know that the claim holds for $(Q_+, c_+, \hua{T}_+)$, where $c_+=s_1 c s_1$, i.e. $\hua{T}_+=\hua{T}_{\v}$ for some $c_+$-sortable word $\v$. But $\w=s_1 \v$ is a $c$-sortable word by [@R07 Lemma 2.5] and we have $$\Ind\hua{T}_{\gm}=\{S_1\}\cup\Ind\hua{T}_{\v}=\Ind\hua{T},$$ or $\hua{T}=\hua{T}_{\gm}$, as required. Second, we claim that the path ${\mathrm{P}}(\w)$ has maximal length. Let $\w$ be a $c$-sortable word. Then ${\mathrm{P}}(\gm)$ is a directed path in ${\mathrm{EG}}_Q$ connecting ${\mathrm{\hua{H}}}$ and ${\mathrm{\hua{H}}}_{\gm}$ with maximal length. By $4^\circ$ of Theorem \[thm:main\], the number of indecomposables in $\hua{T}_{\gm}$ is exactly the length of ${\mathrm{P}}(\gm)$. Then the corollary follows from the fact that each time we perform a backward tilt in the sequence $\widetilde{\gm}$, the torsion class gains at least one new indecomposable, namely the simple at which the tilting occurs. Third, we describe the properties of a $c$-sortable word $\w$ in terms of red vertices of the corresponding green mutation sequence $\widetilde{\gm}$. Recall that ${\mathrm{V^{r}}(\gm)}$ is the set of red vertices of a green mutation sequence $\widetilde{\gm}$ and ${\mathrm{V^{r}}({\mathrm{\hua{H}}}_{\gm})}$ the set of (red) simples in ${\mathrm{\hua{H}}}_{\gm}$. Let $Q$ be an acyclic quiver and $c$ be an admissible Coxeter element with respect to the orientation of $Q$.
For a $c$-sortable word $\w$, the sets of its inversions, descents and cover reflections are given as follows: $$\begin{gathered} \label{eq:Inv} {\mathrm{Inv}}({\w})=\{s_T \mid T\in{\mathrm{supp}}{\mathrm{P}}({\gm})\},\\ \label{eq:Des} {\mathrm{Des}}(\w)=\{s_i\mid i\in {\mathrm{V^{r}}(\gm)}\}, \\ \label{eq:Cov} {\mathrm{Cov}}(\w)=\{s_T \mid T\in {\mathrm{V^{r}}({\mathrm{\hua{H}}}_{\gm})}\},\end{gathered}$$ where $s_T$ is the reflection of $T$. First of all, as in the proof of Theorem \[thm:main\] or [@IT09 Theorem 4.3], we have by induction on $l(\w)+\#Q_0$. For any $j\in{\mathrm{V^{r}}(\gm)}$, by $4^\circ$ of Theorem \[thm:keller\], the corresponding simple $S_j^{\gm}$ is in ${\mathrm{\hua{H}}}_Q[-1]$ and hence $S_j^{\gm}[1]$ is in the torsion class $\hua{T}_{\gm}$. By formula , we know that $s_j^{\gm}$ is in ${\mathrm{Inv}}(\w)$ and hence $s_j$ is in ${\mathrm{Des}}(\w)$ by . For any $j\notin{\mathrm{V^{r}}(\gm)}$, by $3^\circ$ of Theorem \[thm:keller\], the corresponding simple $S_j^{\gm}$ is in ${\mathrm{\hua{H}}}_Q$ but not in the torsion class $\hua{T}_{\gm}$. Then ${\mathrm{\underline{dim}}}S_j^{\gm}$ is not equal to any ${\mathrm{\underline{dim}}}T$, $T\in\hua{T}_{\gm}$, since $S_j^{\gm}$ is a simple in ${\mathrm{\hua{H}}}_{\gm}\supset\hua{T}_{\gm}$. Again by formula , we know that $s_j^{\gm}$ is not in ${\mathrm{Inv}}(\w)$ and hence $s_j$ is not in ${\mathrm{Des}}(\w)$ by . Therefore, and both follow. In the finite case, there are two more consequences. The first one is about the supporting trees of the (cluster) exchange graphs. Let $Q$ be a Dynkin quiver. For any ${\mathrm{\hua{H}}}\in{\mathrm{EG}}_Q$, there is a unique $c$-sortable word $\w$ such that ${\mathrm{\hua{H}}}={\mathrm{\hua{H}}}_{\gm}$. Equivalently, the tree of $c$-sortable words (with respect to the weak order) is isomorphic to a supporting tree of the exchange graph ${\mathrm{EG}}_Q$. First, notice that all $c$-sortable words form a tree with respect to the weak order.
Then the corollary follows from $3^\circ$ of Theorem \[thm:main\] and the fact that any torsion class in ${\mathrm{\hua{H}}}_Q$ is finite. We finish this section by showing a formula for a $T$-reduced expression for noncrossing partitions via red vertices. Let ${\mathrm{nc}}_c$ be Reading’s map from $c$-sortable words to noncrossing partitions. We have the following formula. Let $Q$ be a Dynkin quiver. Keeping the notation of Theorem \[thm:main\], we have the following formula $$\begin{gathered} {\mathrm{nc}}_c(\w)= \prod_{j\in {\mathrm{V^{r}}(\gm)}} s_j^{\gm},\end{gathered}$$ with $\rank{\mathrm{nc}}_c(\w)=\#{\mathrm{V^{r}}(\gm)}$. The corollary follows from and Reading’s map ([@R07 Section 6]). Example: Associahedron ====================== [Figure \[fig:main\]: the exchange graph ${\mathrm{EG}}_Q$ of the $A_3$ quiver below, with hearts labelled by their simples and green mutation edges numbered by the vertex at which the mutation occurs; TikZ source omitted.] \[ex\] Consider an $A_3$ type quiver $Q\colon2 \leftarrow 1 \rightarrow 3$ with $c=s_1s_2s_3$. We have the tree of $c$-sortable words below.
$$\xymatrix@C=1.5pc{ &s_2\ar[r]&s_2s_3&s_1s_2|s_1\\ e\ar[r]\ar[dr]\ar[ur]&s_1\ar[r]\ar[dr]&s_1s_2\ar[r]\ar[ur]&s_1s_2s_3\ar[r] &s_1s_2s_3|s_1\ar[dr]\ar[r]&s_1s_2s_3|s_1s_2\ar[r]&s_1s_2s_3|s_1s_2s_3\\ &s_3&s_1|s_3\ar[r]&s_1s_3|s_1&&s_1s_2s_3|s_1s_3 }$$ Moreover, a piece of the AR-quiver of ${\mathrm{\hua{D}}}(Q)$ is as follows $$\xymatrix@R=1pc@C=1pc{ {\textcolor{red}{\mathrm{\widehat{Z}}}} \ar[dr] && {\textcolor{red}{\mathrm{\widehat{B}}}} \ar[dr] && {\textcolor{Emerald}{\mathrm{Y}}} \ar[dr] && {\textcolor{Emerald}{\mathrm{C}}} \ar[dr] \\ & {\textcolor{red}{\mathrm{\widehat{A}}}} \ar[ur]\ar[dr] && {\textcolor{red}{\mathrm{\widehat{X}}}} \ar[ur]\ar[dr] && {\textcolor{Emerald}{\mathrm{A}}} \ar[ur]\ar[dr] && {\textcolor{Emerald}{\mathrm{X}}} \\ {\textcolor{red}{\mathrm{\widehat{Y}}}} \ar[ur] && {\textcolor{red}{\mathrm{\widehat{C}}}} \ar[ur] && {\textcolor{Emerald}{\mathrm{Z}}} \ar[ur] && {\textcolor{Emerald}{\mathrm{B}}} \ar[ur]}$$ where the green vertices are the indecomposables in ${\mathrm{\hua{H}}}_Q$ and the red hatted ones are their shifts by $[-1]$. Note that ${\textcolor{Emerald}{\mathrm{X}}},{\textcolor{Emerald}{\mathrm{Y}}},{\textcolor{Emerald}{\mathrm{Z}}}$ are the simples $S_1, S_2, S_3$ in ${\mathrm{\hua{H}}}_Q$, respectively. Figure \[fig:main\] is the exchange graph ${\mathrm{EG}}_Q$ (cf. [@KQ11 Figures 1 and 4]), where we denote a heart ${\mathrm{\hua{H}}}_\w$ by the set of its simples $S_1^\w S_2^\w S_3^\w$ (in order). The green edges are the green mutations in some green mutation sequences induced from $c$-sortable words. The number on a green edge indicates the vertex at which the mutation occurs. Note that the underlying graph of Figure \[fig:main\] is the associahedron (of dimension 3). Further, Table \[table\] lists the correspondences between $c$-sortable words, hearts (denoted by their simples as in Figure \[fig:main\]), descents, cover reflections, inversions and (finite) torsion classes.
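The tree above can be checked by brute force. The following sketch (our own illustration, assuming Reading's description of $c$-sortable words as reduced concatenations of subwords of $c$ with weakly nested supports) realizes the type $A_3$ Coxeter group inside $S_4$ and enumerates all $c$-sortable words for $c = s_1 s_2 s_3$; it recovers exactly the $14$ words of the tree.

```python
from itertools import combinations

# Coxeter diagram 2 -- 1 -- 3 (the underlying graph of Q: 2 <- 1 -> 3),
# realized in S4 via the value transpositions s2=(1 2), s1=(2 3), s3=(3 4).
SWAP = {1: (2, 3), 2: (1, 2), 3: (3, 4)}

def word_to_perm(word):
    """One-line notation of (the reverse of) the product of the reflections;
    reducedness is unaffected by reversal, so inversion counting applies."""
    perm = [1, 2, 3, 4]
    for s in word:
        a, b = SWAP[s]
        perm = [b if v == a else a if v == b else v for v in perm]
    return perm

def is_reduced(word):
    perm = word_to_perm(word)
    inv = sum(perm[i] > perm[j] for i in range(4) for j in range(i + 1, 4))
    return inv == len(word)

def c_sortable_words():
    """Reduced words w = c^(0)|c^(1)|... with each block a subword of
    c = s1 s2 s3 and weakly nested supports."""
    found = set()
    def grow(word, support):
        found.add(word)
        for r in range(1, len(support) + 1):
            for block in combinations(support, r):
                if is_reduced(word + block):
                    grow(word + block, block)
    grow((), (1, 2, 3))
    return found

words = c_sortable_words()
print(len(words))  # 14, matching the 14 vertices of the tree above
```

The count $14$ is the type $A_3$ Catalan number, in accordance with the bijection to finite torsion classes proved earlier.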
Note that this table is consistent with [@IT09 Table 1], in the sense that the objects in the $j$-th row here are precisely objects in the $j$-th row there. ----------------------------- --------------------------------------------------------------------------------------------------------------------------- ---------------------- ---------------------- ------------------------------------------------------------------ $c$-sortable Heart Descent Cover ref. Torsion class word $\w$ ${\mathrm{\hua{H}}}_\w$ ${\mathrm{Des}}(\w)$ ${\mathrm{Cov}}(\w)$ $\hua{T}_\w$ $s_1 s_2 s_3 | s_1 s_2 s_3$ ${{\textcolor{red}{\mathrm{\widehat{X}}}}{\textcolor{red}{\mathrm{\widehat{Z}}}}{\textcolor{red}{\mathrm{\widehat{Y}}}}}$ $s_1,s_2,s_3 $ $t_X, t_Y, t_Z $ ${\textcolor{Emerald}{\mathrm{\underline{XBCAZY}}}}$ $s_1 s_2 s_3| s_1 s_2 $ ${{\textcolor{red}{\mathrm{\widehat{B}}}}{\textcolor{red}{\mathrm{\widehat{Z}}}}{\textcolor{Emerald}{\mathrm{Y}}}}$ $s_1,s_2 $ $t_B, t_Z $ ${\textcolor{Emerald}{\mathrm{X\underline{B}C\underline{AZ}}}} $ $s_1 s_2 s_3| s_1 s_3 $ ${{\textcolor{red}{\mathrm{\widehat{C}}}}{\textcolor{Emerald}{\mathrm{Z}}}{\textcolor{red}{\mathrm{\widehat{Y}}}}}$ $s_1,s_3 $ $t_C, t_Y $ ${\textcolor{Emerald}{\mathrm{XB\underline{CAY}}}} $ $s_2 s_3 $ ${{\textcolor{Emerald}{\mathrm{X}}}{\textcolor{red}{\mathrm{\widehat{Y}}}}{\textcolor{red}{\mathrm{\widehat{Z}}}}}$ $s_2,s_3 $ $t_Y, t_Z $ ${\textcolor{Emerald}{\mathrm{\underline{YZ}}}} $ $s_1 s_2 s_3 $ ${{\textcolor{Emerald}{\mathrm{A}}}{\textcolor{red}{\mathrm{\widehat{B}}}}{\textcolor{red}{\mathrm{\widehat{C}}}}}$ $s_2,s_3 $ $t_B, t_C $ ${\textcolor{Emerald}{\mathrm{X\underline{BC}}}} $ $s_1 s_3| s_1 $ ${{\textcolor{red}{\mathrm{\widehat{Z}}}}{\textcolor{Emerald}{\mathrm{B}}}{\textcolor{red}{\mathrm{\widehat{X}}}}}$ $s_1,s_3 $ $t_Z, t_X $ ${\textcolor{Emerald}{\mathrm{\underline{XCZ}}}} $ $s_1 s_2| s_1 $ ${{\textcolor{red}{\mathrm{\widehat{Y}}}}{\textcolor{red}{\mathrm{\widehat{X}}}}{\textcolor{Emerald}{\mathrm{C}}}}$ $s_1,s_2 $ 
$t_Y, t_X $ ${\textcolor{Emerald}{\mathrm{\underline{XBY}}}} $ $s_2 $ ${{\textcolor{Emerald}{\mathrm{X}}}{\textcolor{red}{\mathrm{\widehat{Y}}}}{\textcolor{Emerald}{\mathrm{Z}}}}$ $s_2 $ $t_Y $ ${\textcolor{Emerald}{\mathrm{\underline{Y}}}} $ $s_3 $ ${{\textcolor{Emerald}{\mathrm{X}}}{\textcolor{Emerald}{\mathrm{Y}}}{\textcolor{red}{\mathrm{\widehat{Z}}}}}$ $s_3 $ $t_Z $ ${\textcolor{Emerald}{\mathrm{\underline{Z}}}} $ $s_1 s_2 s_3| s_1 $ ${{\textcolor{red}{\mathrm{\widehat{A}}}}{\textcolor{Emerald}{\mathrm{Z}}}{\textcolor{Emerald}{\mathrm{Y}}}}$ $s_1 $ $t_A $ ${\textcolor{Emerald}{\mathrm{XBC\underline{A}}}} $ $s_1 s_2 $ ${{\textcolor{Emerald}{\mathrm{Y}}}{\textcolor{red}{\mathrm{\widehat{B}}}}{\textcolor{Emerald}{\mathrm{C}}}}$ $s_2 $ $t_B $ ${\textcolor{Emerald}{\mathrm{X\underline{B}}}} $ $s_1 s_3 $ ${{\textcolor{Emerald}{\mathrm{Z}}}{\textcolor{Emerald}{\mathrm{B}}}{\textcolor{red}{\mathrm{\widehat{C}}}}}$ $s_3 $ $t_C $ ${\textcolor{Emerald}{\mathrm{X\underline{C}}}} $ $s_1 $ ${{\textcolor{red}{\mathrm{\widehat{X}}}}{\textcolor{Emerald}{\mathrm{B}}}{\textcolor{Emerald}{\mathrm{C}}}}$ $s_1 $ $t_X $ ${\textcolor{Emerald}{\mathrm{\underline{X}}}} $ $e $ ${{\textcolor{Emerald}{\mathrm{X}}}{\textcolor{Emerald}{\mathrm{Y}}}{\textcolor{Emerald}{\mathrm{Z}}}}$ $\emptyset $ $\emptyset $ $\emptyset$ ----------------------------- --------------------------------------------------------------------------------------------------------------------------- ---------------------- ---------------------- ------------------------------------------------------------------ : Example: $A_3$[]{data-label="table"} $$$$ **N.B.**$1\colon\; \{t_X,t_Y,t_Z,t_A,t_B,t_C\} =\{s_1,s_2,s_3,s_2 s_3 s_1 s_3 s_2, s_1 s_2 s_1, s_1 s_3 s_1\}\,$. **N.B.**$2\colon$ The underlined objects in $\hua{T}_{\gm}$ form the wide subcategory ${\hua{W}}_{\gm}$. Proof of Lemma \[lem:KQ\] {#app} ========================= Recall some terminology.
For an acyclic quiver $R$, the *cluster category* $\hua{C}(R)$ of $R$ is the *orbit category* ${\mathrm{\hua{D}}}(R)/[-1]\circ\tau$. A *cluster tilting object* $\mathbf{P}$ in $\hua{C}(R)$ is a maximal rigid object. Note that every such object has exactly $m$ indecomposable summands, where $m$ is the number of vertices of $R$. One can mutate a cluster tilting object to obtain a new one by replacing any one of its indecomposable summands with another, uniquely determined, indecomposable object in $\hua{C}(R)$. More precisely, if $\mathbf{P}=\bigoplus_{j}P_j$, then for any $i$ we have $$\mu_{P_i}(\mathbf{P})=P_i'\oplus\bigoplus_{j\neq i}P_j,$$ where $$\begin{aligned} P_i'&=&{\mathrm{Cone}}(P_i \to \bigoplus_{j\neq i} \Irr(P_i,P_j)^*\otimes P_j)\\ &=&{\mathrm{Cone}}(\bigoplus_{j\neq i} \Irr(P_j,P_i)\otimes P_j \to P_i)[-1].\end{aligned}$$ Moreover, this mutation induces the mutation, in the sense of Definition \[def:mutation\], of the corresponding Gabriel quiver. The *cluster exchange graph* ${\mathrm{CEG}_{}(R)}$ of $\hua{C}(R)$ is the unoriented graph whose vertices are cluster tilting objects and whose edges correspond to mutations. For instance, the cluster exchange graph of an $A_2$ quiver is a pentagon. Furthermore, Buan-Thomas [@BT09] associated a *colored quiver* $Q^C(\mathbf{P})$ to each $\mathbf{P}$, whose degree zero part is the *Gabriel quiver* $Q(\mathbf{P})$ of $\mathbf{P}$. King-Qiu [@KQ11] introduced a modification of this colored quiver, called the *augmented quiver* and denoted by $Q^+(\mathbf{P})$, such that the degree one part of $Q^+(\mathbf{P})$ is the degree zero part of $Q^C(\mathbf{P})$, i.e. $Q(\mathbf{P})$. We need the following two lemmas. ([@KQ11 Corollary 5.12]) The underlying unoriented graph of the exchange graph ${\mathrm{EG}}_R$ (of hearts) is canonically isomorphic to the cluster exchange graph ${\mathrm{CEG}_{}(R)}$.
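The pentagon mentioned above can also be seen numerically. The sketch below (our own aside, using the Fomin–Zelevinsky exchange relation for cluster *variables* of type $A_2$ as a stand-in for the categorical mutation) iterates $x_{n-1}x_{n+1} = 1 + x_n$: the orbit closes after five steps, and with generic initial data the five consecutive clusters $\{x_n, x_{n+1}\}$ are the five vertices of the pentagon.

```python
from fractions import Fraction

def a2_orbit(a, b, steps=6):
    """Iterate the A2 exchange relation x_{n-1} * x_{n+1} = 1 + x_n."""
    seq = [Fraction(a), Fraction(b)]
    for _ in range(steps):
        seq.append((1 + seq[-1]) / seq[-2])
    return seq

# The orbit is 5-periodic for any positive initial data; e.g. (2, 3) gives
# the cycle 2, 3, 2, 1, 1, 2, 3, ...  (specific values may coincide, but
# with generic symbolic seeds all five clusters are distinct).
for a, b in [(1, 1), (2, 3), (5, 7)]:
    seq = a2_orbit(a, b)
    assert seq[5] == seq[0] and seq[6] == seq[1]
print("the A2 exchange recurrence is 5-periodic")
```

This 5-periodicity is the algebraic shadow of ${\mathrm{CEG}_{}(A_2)}$ being a pentagon.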
([@KQ11 Theorem 8.7]) Let ${\mathrm{\hua{H}}}$ be a heart in ${\mathrm{EG}}_R$ and $\mathbf{P}$ be the corresponding cluster tilting object in ${\mathrm{CEG}_{}(R)}$. Then the CY-3 double of the Ext-quiver ${\mathcal{Q}({\mathrm{\hua{H}}})}$ of ${\mathrm{\hua{H}}}$ is isomorphic to the augmented quiver of $\mathbf{P}$ ([@KQ11]), i.e. $${\mathrm{CY}^{3}({\mathcal{Q}({\mathrm{\hua{H}}})})}\cong Q^+(\mathbf{P}).$$ \[ex:quivers\] We keep the notation of Example \[ex\]. The left column of quivers in Table \[quivers\] corresponds to the cluster tilting object ${\textcolor{Emerald}{\mathrm{Y}}}\oplus{\textcolor{Emerald}{\mathrm{A}}}\oplus{\textcolor{Emerald}{\mathrm{B}}}$ and the heart ${\mathrm{\hua{H}}}_1$ with simples $\{{\textcolor{red}{\mathrm{\widehat{C}}}}, {\textcolor{Emerald}{\mathrm{Z}}}, {\textcolor{Emerald}{\mathrm{B}}}\}$; the right column corresponds to the cluster tilting object ${\textcolor{Emerald}{\mathrm{Y}}}\oplus{\textcolor{red}{\mathrm{\widehat{Z}}}}\oplus{\textcolor{Emerald}{\mathrm{B}}}$ and the heart ${\mathrm{\hua{H}}}_2$ with simples $\{{\textcolor{red}{\mathrm{\widehat{X}}}}, {\textcolor{red}{\mathrm{\widehat{Z}}}}, {\textcolor{Emerald}{\mathrm{B}}}\}$.
Note that we have $${\textcolor{Emerald}{\mathrm{Y}}}\oplus{\textcolor{red}{\mathrm{\widehat{Z}}}}\oplus{\textcolor{Emerald}{\mathrm{B}}} =\mu_{{\textcolor{Emerald}{\mathrm{A}}}}({\textcolor{Emerald}{\mathrm{Y}}}\oplus{\textcolor{Emerald}{\mathrm{A}}}\oplus{\textcolor{Emerald}{\mathrm{B}}})$$ and $${\mathrm{\hua{H}}}_2={{{\mathrm{\hua{H}}}_1}^{{\textcolor{Emerald}{\mathrm{Z}}}}_{\flat}}.$$ [Table \[quivers\] (“Example of various quivers”): for each of the two cluster tilting objects above, the Gabriel quiver, the colored quiver with its degree labels $0,1,2$, and the augmented quiver with loops of degree $3$; the diagrams are omitted here.] Now take
$R={\widetilde{Q}}$; then the heart $\widetilde{{\mathrm{\hua{H}}}_\s}$ in ${\mathrm{EG}}_Q$ corresponds to a cluster tilting object $\widetilde{\mathbf{P}}_\s$ in the cluster category $\hua{C}({\widetilde{Q}})$ with $$\begin{gathered} \label{eq:app} {\mathrm{CY}^{3}({\mathcal{Q}({\mathrm{\hua{H}}}_\s)})}\cong Q^+(\widetilde{\mathbf{P}}_\s).\end{gathered}$$ Following the mutation procedure, we deduce that ${\widetilde{Q}}_\s$ is isomorphic to the Gabriel quiver of $\widetilde{\mathbf{P}}_\s$, and hence to the degree one part of , as required. [99]{} Preprojective algebras and c-sortable words, to appear in *Proc. of the London Math. Soc.* ([arXiv:1002.4131v3](http://arxiv.org/abs/1002.4131)). Stability conditions on triangulated categories, *Ann. Math.* **166** (2007). ([arXiv:math/0212237v3](http://arxiv.org/abs/math/0212237)) On Maximal Green Sequence, [arxiv:1205.2050v2](http://arxiv.org/abs/1205.2050). Cluster structures for 2-Calabi-Yau categories and unipotent groups, *Compos. Math.*, 145 (2009), 1035-1079. Coloured quiver mutation for higher cluster categories, *Adv. Math.* 222 (2009), 971–995, ([arXiv:0809.0691v3](http://arxiv.org/abs/0809.0691)). Braids, walls and mirrors, [arXiv:1110.2115v1](http://arxiv.org/abs/1110.2115). Noncrossing partitions and representations of quivers, *Compos. Math.*, 145 (2009), no. 6, 1533-1562 ([arXiv:math/0612219v4](http://arxiv.org/abs/math/0612219)). On cluster theory and quantum dilogarithm, [arXiv:1102.4148v4](http://arxiv.org/abs/1102.4148). Cluster algebras and derived categories, [arXiv:1202.4161v4](http://arxiv.org/abs/1202.4161). Exchange graphs of acyclic Calabi-Yau categories, [arXiv:1109.2924v2](http://arxiv.org/abs/1109.2924). Donaldson-Thomas theory and cluster algebras, [arXiv:1002.4884v2](http://arxiv.org/abs/1002.4884). Stability conditions and quantum dilogarithm identities for Dynkin quivers, [arXiv:1111.1010v2](http://arxiv.org/abs/1111.1010).
Clusters, Coxeter-sortable elements and noncrossing partitions, *Trans. Amer. Math. Soc.*, 359 (2007), no. 12, 5931-5958 ([arXiv:math/0507186v2](http://arxiv.org/abs/math/0507186)). Finite torsion classes and c-sortable elements of w, in preparation. E-mail address: [email protected]
--- abstract: 'We present a method to derive Bell monogamy relations by connecting the complementarity principle with quantum non-locality. The resulting monogamy relations are stronger than those obtained from the no-signaling principle alone. In many cases, they yield tight quantum bounds on the violation of single and multiple qubit correlation Bell inequalities. In contrast with the two-qubit case, a rich structure of possible violation patterns is shown to exist in the multipartite scenario.' author: - 'P. Kurzyński' - 'T. Paterek' - 'R. Ramanathan' - 'W. Laskowski' - 'D. Kaszlikowski' title: Correlation complementarity yields Bell monogamy relations --- It is an experimentally confirmed fact, up to certain experimental loopholes, that Bell inequalities are violated [@BELL]. In a typical Bell scenario a composite system is split between many parties and each party independently performs measurements on its subsystem. When all measurements are done, the parties meet and calculate a function (the Bell parameter) of their measurement outcomes in order to check whether they succeeded in violating local realism. An interesting phenomenon occurs when a subsystem is involved in more than one Bell experiment, i.e. when the measurement outcomes of one party are plugged into more than one Bell parameter involving different parties. In this case trade-offs exist between the strengths of violation of a Bell inequality by different sets of observers, known as monogamy relations [@SG2001; @TV2006; @BLMPSR2005; @MAN2006; @TONER2009; @PB2009]. One of the origins of this monogamy is the principle of no-signaling, according to which information cannot be transmitted with infinite speed. If violations are sufficiently strong, the possibility of superluminal communication between observers arises, and consequently Bell monogamy is present in every no-signaling theory [@BLMPSR2005; @MAN2006; @TONER2009; @PB2009].
However, the no-signaling principle alone does not identify the set of violations allowed by quantum theory. The monogamy relations derived within quantum theory, in the scenario where a Bell inequality is tested between parties $AB$ and $AC$, show even more stringent constraints on the allowed violations [@SG2001; @TV2006]. Here we derive, within quantum theory, monogamy relations which involve violations of multi-partite Bell inequalities, and study their properties. The trade-offs obtained are stronger than those arising from no-signaling alone, and in most cases we show that they fully characterize the quantum set of allowed Bell violations. Our method uses complementarity of the operators defining quantum values of Bell parameters and shows that Bell monogamy stems from quantum complementarity. This sheds new light on the relation between complementarity (uncertainty) and quantum non-locality. Oppenheim and Wehner showed that complementarity relations for single-party observables determine the strength of a single Bell inequality violation [@OW]. Here we show, for qubit inequalities, that the same can be achieved using complementarity between correlation observables, and that this type of complementarity also determines the violation strength for several Bell inequalities (monogamy). We begin with the principle of complementarity, which forbids simultaneous knowledge of certain observables, and show that the only dichotomic complementary observables in the quantum formalism are those that anti-commute. Conversely, we demonstrate that there exists a bound for the sum of squared expectation values of anti-commuting operators in any physical state [@GEZA; @WW2008]. This bound is subsequently used to derive quantum bounds on Bell inequality violations. For its other applications see for instance Ref. [@MACRO]. Consider a set of dichotomic ($\pm 1$) complementary measurements.
The complementarity is manifested in the fact that if the expectation value of one measurement is $\pm1$ then the expectation values of all other complementary measurements are zero. We show that the corresponding quantum mechanical operators anti-commute. Consider a pair of dichotomic operators $A$ and $B$ and put the expectation value $\langle A \rangle = 1$, i.e., the state being measured, say ${\left | a \right\rangle}$, is one of the $+1$ eigenstates. Complementarity requires ${\left \langle a \right |} B {\left | a \right\rangle} = 0$, which implies $B {\left | a \right\rangle} = {\left | a_{\perp} \right\rangle}$, where $\perp$ denotes a state orthogonal to ${\left | a \right\rangle}$. Since $B^2 = \openone$, we also have $B {\left | a_{\perp} \right\rangle} = {\left | a \right\rangle}$ and therefore ${\left | b \right\rangle} = \tfrac{1}{\sqrt{2}}({\left | a \right\rangle} + {\left | a_\perp \right\rangle})$ is a $+1$ eigenstate of $B$. For this state complementarity demands ${\left \langle b \right |} A {\left | b \right\rangle} = 0$, i.e. $A {\left | b \right\rangle}$ is orthogonal to ${\left | b \right\rangle}$, which is only satisfied if ${\left | a_\perp \right\rangle}$ is a $-1$ eigenstate of $A$. The same argument applies to all $+1$ eigenstates; therefore the two eigenspaces have equal dimension. As a consequence, $A = \sum_{a} ({\left | a \right\rangle} {\left \langle a \right |} - {\left | a_\perp \right\rangle} {\left \langle a_\perp \right |})$ and $B = \sum_{a} ({\left | a_{\perp} \right\rangle} {\left \langle a \right |} + {\left | a \right\rangle} {\left \langle a_{\perp} \right |})$. It is now easy to verify that $A$ and $B$ anti-commute. Conversely, consider a set of traceless and trace-orthogonal dichotomic hermitian operators $A_k$. We denote by $\alpha_k$ the expectation values of the measurements $A_k$ in some state $\rho$, which are real numbers in the range $[-1,1]$.
Let us group the operators $A_k$ into disjoint sets $S_j$ of mutually anti-commuting operators, $S_j=\{A_1^{(j)},A_2^{(j)},\dots\}$. Next, consider the operator $F_j \equiv \sum_{k=1}^{|S_j|} \alpha_{kj} A_{k}^{(j)} = \vec \alpha_j \cdot \vec A_j$, whose variance in the same state $\rho$ is given by $\langle F_j^2\rangle-\langle F_j\rangle^2 = |\vec \alpha_j|^2 (1- |\vec \alpha_j|^2)$, due to the assumed anti-commutativity and because the square of each individual operator is the identity. Positivity of the variance, which stems from the positivity of $\rho$, implies that $$|\vec\alpha_{j}|\leq 1. \label{INEQ}$$ As a result, if the expectation value of one observable is $\pm 1$ then the expectation values of all other anti-commuting observables are necessarily zero. In this way anti-commuting operators are related to complementarity. In fact, the above inequality is more general, as it gives trade-offs between squared expectation values of anti-commuting operators in any physical state. Here, we derived inequality (\[INEQ\]) in the spirit of the Heisenberg uncertainty relation; see [@GEZA; @WW2008] for alternative derivations. For dichotomic observables the square of the expectation value is related to the Tsallis entropy by $S_2(A_j)=\frac{1}{2}(1-\langle A_j\rangle^2)$, therefore the inequality can be converted into an entropic uncertainty relation. Inequality (\[INEQ\]) provides a powerful tool for the study of quantum non-locality. We show that it allows derivation of the Tsirelson bound [@TSIRELSON] and of the monogamy of Bell inequality violations between many qubits.
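Inequality (\[INEQ\]) is easy to check numerically. The following sketch (our own sanity check, with ad hoc names) verifies it for two families of mutually anti-commuting Pauli products, the single-qubit Paulis and a three-qubit set, on random pure states:

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

# Two families of mutually anti-commuting dichotomic observables.
families = [
    [X, Y, Z],
    [kron(X, X, I2), kron(X, Y, I2), kron(Y, I2, X), kron(Y, I2, Y)],
]

for ops in families:
    # sanity check: every pair in the family anti-commutes
    for i, A in enumerate(ops):
        for B in ops[i + 1:]:
            assert np.allclose(A @ B + B @ A, 0)
    d = ops[0].shape[0]
    for _ in range(200):
        psi = rng.normal(size=d) + 1j * rng.normal(size=d)
        psi /= np.linalg.norm(psi)
        # inequality (INEQ): the squared expectations sum to at most 1
        total = sum(np.real(psi.conj() @ A @ psi) ** 2 for A in ops)
        assert total <= 1 + 1e-9
print("inequality (INEQ) holds on all sampled states")
```

Note that in the three-qubit family the anti-commutation of each pair is due to anti-commutation of the local Pauli factors, the mechanism exploited in the monogamy proofs below.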
A general $N$-qubit density matrix can be decomposed into tensor products of Pauli operators: $$\rho = \frac{1}{2^N} \sum_{\mu_1,...,\mu_N =0}^3 T_{\mu_1 \dots \mu_N} \sigma_{\mu_1} \otimes \dots \otimes \sigma_{\mu_N},$$ where $\sigma_{\mu_n} \in \{\openone, \sigma_x,\sigma_y,\sigma_z\}$ is the $\mu_n$-th local Pauli operator of the $n$-th party and $T_{\mu_1 \dots \mu_N} = {\mathrm{Tr}}[\rho (\sigma_{\mu_1} \otimes \dots \otimes \sigma_{\mu_N}) ]$ are the components of the correlation tensor $\hat T$. The orthogonal basis of tensor products of Pauli operators has the property that its elements either commute or anti-commute. We study the complete collection of two-setting correlation Bell inequalities for $N$ qubits [@WZ2001; @WW2001; @ZB2002]. It can be condensed into a single general Bell inequality, whose classical bound is one [@ZB2002]. All correlations which satisfy this general inequality, and only such correlations, admit a local hidden variable (LHV) description of the Bell experiment. This is in contrast to a single inequality, e.g. the CHSH inequality [@CHSH], violation of which is only sufficient to exclude an LHV model. For two qubits, if the general inequality is satisfied then all CHSH inequalities are satisfied, and if the general inequality is violated then there exists a CHSH inequality (with a minus sign in a suitable place) which is violated. The quantum value of the general Bell parameter, denoted by $\mathcal{L}$, was shown to have the upper bound $$\mathcal{L}^2 \le \sum_{k_1, \dots, k_N=x,y} T_{k_1 \dots k_N}^2, \label{UP_BOUND}$$ where the summation is over orthogonal local directions $x$ and $y$ which span the plane of the local settings [@ZB2002]. If the upper bound above is smaller than the classical limit of $1$, there exists an LHV model.
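To make the correlation-tensor bound concrete, here is a small numerical sketch (our own, not from the paper): it computes the $x$–$y$ block of $\hat T$ for random two-qubit pure states, checks inequality (\[INEQ\]) for the anti-commuting pairs $\{\sigma_x\otimes\sigma_x,\sigma_x\otimes\sigma_y\}$ and $\{\sigma_y\otimes\sigma_x,\sigma_y\otimes\sigma_y\}$, and shows that the singlet attains the resulting bound of $2$ on the sum in (\[UP\_BOUND\]):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def corr(psi, A, B):
    """Correlation tensor component T_AB = <psi| A (x) B |psi>."""
    return float(np.real(psi.conj() @ np.kron(A, B) @ psi))

rng = np.random.default_rng(1)
for _ in range(200):
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi /= np.linalg.norm(psi)
    row_x = (corr(psi, X, X), corr(psi, X, Y))  # {XX, XY} anti-commute
    row_y = (corr(psi, Y, X), corr(psi, Y, Y))  # {YX, YY} anti-commute
    assert np.hypot(*row_x) <= 1 + 1e-9         # inequality (INEQ), row by row
    assert np.hypot(*row_y) <= 1 + 1e-9
    # hence the x-y block entering (UP_BOUND) never exceeds 2
    assert sum(t ** 2 for t in row_x + row_y) <= 2 + 1e-9

# the singlet saturates the bound: T_xx = T_yy = -1, T_xy = T_yx = 0
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
block = sum(corr(singlet, A, B) ** 2 for A in (X, Y) for B in (X, Y))
print(block)  # ~2, so L <= sqrt(2) is attained
```

The two row-norm constraints are exactly the vectors $\vec\alpha_1$ and $\vec\alpha_2$ used in the derivation of the Tsirelson bound below.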
Our method for finding quantum bounds on Bell violations is to use condition (\[UP\_BOUND\]) for combinations of Bell parameters and then to identify sets of anti-commuting operators in order to utilize inequality (\[INEQ\]) and obtain a bound on these combinations. We begin by showing an application of inequality (\[INEQ\]) to a new derivation of the Tsirelson bound. For two qubits the general Bell parameter is upper bounded by $\mathcal{L}^2 \le T_{xx}^2 + T_{xy}^2 + T_{yx}^2 + T_{yy}^2$. One can identify here two vectors of averages of anti-commuting observables, e.g., $\vec \alpha_1 = (T_{xx},T_{xy})$ and $\vec \alpha_2 = (T_{yx},T_{yy})$. Due to (\[INEQ\]) we obtain $\mathcal{L} \le \sqrt{2}$, which is exactly the Tsirelson bound. One can apply this method to look for the corresponding maximal quantum violations of other correlation inequalities; e.g., it is easy to verify that the “Tsirelson bound” of the multi-setting inequalities [@MULTISETTING] is just the same as the one for the two-setting inequalities. Our derivation shows that Tsirelson’s bound is due to the complementarity of correlations $T_{ix}^2+T_{iy}^2 \leq 1$ with $i=x,y$. Any theory more non-local than quantum mechanics would have to violate this complementarity relation (compare with Ref. [@OW]). ![The nodes of these graphs represent observers trying to violate Bell inequalities, which are denoted by colored edges. [**a)**]{} The simplest case: two subsets of three parties try to violate the CHSH inequality. [**b)**]{} Four three-party subsets of four parties try to violate the Mermin inequality. [**c)**]{} Two subsets of an odd number of parties try to violate a multi-partite Bell inequality in a scenario in which only one particle is common to the two Bell experiments. [**d)**]{} A binary tree configuration leads to a strong monogamy relation.
[]{data-label="FIG_GRAPHS"}](fig1.jpg) To describe how complementarity of correlations can be used to establish Bell monogamy, consider the simplest scenario of three particles, illustrated in Fig. \[FIG\_GRAPHS\]a. We show that if correlations obtained in a two-setting Bell experiment by $AB$ cannot be modeled by LHV, then correlations obtained by $AC$ admit an LHV model. We use condition (\[UP\_BOUND\]), which applied to the present bipartite scenario reads: $\mathcal{L}_{AB}^2 + \mathcal{L}_{AC}^2 \le \sum_{k,l=x,y} T_{kl0}^2 + \sum_{k,m=x,y} T_{k0m}^2$. It is important to note that the settings of $A$ are the same in both sums and accordingly the orthogonal local directions $x$ and $y$ are the same for $A$ in both sums. We arrange the Pauli operators corresponding to correlation tensor components entering the sums into the following two sets of anti-commuting operators: $\{XX\openone,XY\openone,Y\openone X,Y\openone Y\}$ and $\{YX\openone,YY\openone, X\openone X,X \openone Y\}$, where $X=\sigma_x$ and $Y=\sigma_y$. Note that the anti-commutation of any pair of operators within a set is solely due to anti-commutativity of local Pauli operators. We obtain our result $\mathcal{L}_{AB}^2 + \mathcal{L}_{AC}^2 \le 2$. Once a CHSH inequality is violated between $AB$, all CHSH inequalities between $AC$ are satisfied; similar results were obtained in [@SG2001; @TV2006]. Before we move to a general case of an arbitrary number of qubits, we present an explicit example of a multipartite monogamy relation. Consider parties $A$, $B$, $C$, $D$ trying to violate a correlation Bell inequality in a scenario depicted in Fig. \[FIG\_GRAPHS\]b. We show the new monogamy relation: $\mathcal{L}_{ABC}^2+\mathcal{L}_{ABD}^2+\mathcal{L}_{ACD}^2+\mathcal{L}_{BCD}^2 \leq 4$. Condition (\[UP\_BOUND\]) applied to these tripartite Bell parameters implies that the left-hand side is bounded by the sum of $32$ elements. 
The corresponding tensor products of Pauli operators can be grouped into four sets: $$\begin{aligned} \{XXY \openone,XY\openone X,X\openone XY, \openone YYY,\dots\}, \nonumber \\ \{XYX\openone,YY\openone Y,Y\openone XX,\openone XXY,\dots\}, \nonumber \\ \{YXX\openone,XX\openone Y,Y\openone YY,\openone XYX,\dots\}, \nonumber \\ \{YYY\openone,YX\openone X,X\openone YX,\openone YXX,\dots\}, \nonumber \end{aligned}$$ where the dots denote four more operators obtained from the previous four by exchanging $X$ and $Y$. All operators in each set anti-commute; therefore the bound is proved. To give a concrete example of monogamy of a well-known inequality we choose the inequality due to Mermin [@MERMIN]: $E_{112} + E_{121} + E_{211} - E_{222} \le 2$, where $E_{klm}$ denote the correlation functions. Since the classical bound of the Mermin inequality is $2$, and not $1$ as we have assumed in our derivation, the new “Mermin monogamy” is $\mathcal{M}_{ABC}^2+\mathcal{M}_{ABD}^2+\mathcal{M}_{ACD}^2+\mathcal{M}_{BCD}^2 \leq 16$, where $\mathcal{M}$ is the quantum value of the corresponding Mermin parameter. The bound of the new monogamy relation can be achieved in many ways. If a triple of observers share the GHZ state, they can obtain a maximal violation of $4$, while the remaining triples observe vanishing Mermin quantities $\mathcal{M}$. This can be attributed to maximal entanglement of the GHZ state. It is also possible for two and three triples to violate the Mermin inequality non-maximally, and at the same time to achieve the bound. For example, the state $\frac{1}{2}\left(|0001\rangle+|0010\rangle +i\sqrt{2}|1111\rangle\right)$ allows $ABC$ and $ABD$ to obtain $\mathcal{M} = 2\sqrt{2}$, and the state $\frac{1}{\sqrt{6}}\left(|0001\rangle+|0010\rangle+|0100\rangle+i\sqrt{3}|1111\rangle\right)$ allows $ABC$, $ABD$ and $ACD$ to obtain $\mathcal{M} = \tfrac{4}{\sqrt{3}}$. Note that it is impossible to violate all four inequalities simultaneously. 
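The saturation pattern above can be checked numerically. A sketch (the computational-basis encoding and function names are our own conventions): since the Mermin classical bound is $2$, condition (\[UP\_BOUND\]) rescales to $\mathcal{M}^2 \le 4\sum T^2$ over the $xy$ plane of each triple; for the four-qubit state $\tfrac12(|0001\rangle+|0010\rangle+i\sqrt2|1111\rangle)$ the plane sums over $ABC$, $ABD$, $ACD$, $BCD$ come out as $2,2,0,0$, consistent with $\mathcal{M}_{ABC}=\mathcal{M}_{ABD}=2\sqrt2$ and saturation of the bound $16$.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])

def tensor(ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

# Four-qubit state (|0001> + |0010> + i*sqrt(2)|1111>)/2 from the text
# (qubit A is the most significant bit).
psi = np.zeros(16, dtype=complex)
psi[0b0001] = psi[0b0010] = 0.5
psi[0b1111] = 0.5j * np.sqrt(2)

def plane_sum(triple):
    """Sum of squared correlations T^2 over x,y settings on a qubit triple."""
    total = 0.0
    for choice in np.ndindex(2, 2, 2):
        ops = [I2] * 4
        for qubit, c in zip(triple, choice):
            ops[qubit] = X if c == 0 else Y
        T = np.real(psi.conj() @ tensor(ops) @ psi)
        total += T ** 2
    return total

sums = [plane_sum(t) for t in [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]]
```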
We now derive new monogamy relations for $N$ qubits. Consider the scenario of Fig. \[FIG\_GRAPHS\]c, in which $N$ is odd, $A$ is the fixed qubit and the remaining $N-1$ qubits are split into two groups $\vec B = (B_1,...,B_M)$ and $\vec C = (C_1,...,C_M)$, each containing $M=\tfrac{1}{2}(N-1)$ qubits. We shall derive the trade-off relation between violations of the $(M+1)$-partite Bell inequality by parties $A \vec B$ and $A \vec C$. Using condition (\[UP\_BOUND\]), the elements of the correlation tensor which enter the bound of $\mathcal{L}_{A \vec B}^2 + \mathcal{L}_{A \vec C}^2$ are of the form $T_{k l_1 \dots l_M 0 \dots 0}$ and $T_{k 0 \dots 0 m_1 \dots m_M}$. The corresponding Pauli operators can be arranged into $2^{M}$ sets of four mutually anti-commuting operators each: $\vec A_{1S} = \{XXS I, XYSI, YIXS, YIYS\}$, $\vec A_{2S} = \{YXS I, YYS I, XIXS, XIYS\}$, where $S$ stands for all $2^{M-1}$ combinations of $X$’s and $Y$’s for $M-1$ parties, and $I = \openone^{\otimes M}$ is the identity operator on $M$ neighboring qubits. Therefore, according to the theorem, we arrive at the following trade-off: $\mathcal{L}_{A \vec B}^2 + \mathcal{L}_{A \vec C}^2 \le 2^M$. The bound of this inequality is tight in the sense that there exist quantum states achieving the bound for all allowed values of $\mathcal{L}_{A \vec B}$ and $\mathcal{L}_{A \vec C}$. This is a generalization of a similar property of the CHSH monogamy [@TV2006]. The state of interest can be chosen as $$|\psi \rangle = \tfrac{1}{\sqrt{2}} \cos \alpha \left( | 0 \vec 0 \vec 0 \rangle + |1 \vec 0 \vec 1 \rangle \right) + \tfrac{1}{\sqrt{2}} \sin \alpha \left( | 1 \vec 1 \vec 0 \rangle + |0 \vec 1 \vec 1 \rangle \right), \label{PSI_MONO}$$ where e.g. 
$|1 \vec 0 \vec 1 \rangle$ denotes a state in which qubit $A$ is in the $| 1 \rangle$ eigenstate of the local $Z$ basis, all qubits of $\vec B$ are in state $| 0 \rangle$ of their local $Z$ bases, and all qubits of $\vec C$ are in state $| 1 \rangle$ of their respective $Z$ bases. The non-vanishing correlation tensor components in the $xy$ plane which involve only $(M+1)$-partite correlations are $T_{x \vec w \vec 0} = \pm \sin 2\alpha$, $T_{x \vec 0 \vec w} = \pm 1$, and $T_{y \vec 0 \vec v} = - \cos 2\alpha$, where $\vec w$ contains an even number of $y$ indices, the other indices being $x$, and $\vec v$ contains an odd number of $y$ indices, the other indices again being $x$. There are $\sum_{k=0}^{\lfloor M/2 \rfloor} {M \choose 2 k} = 2^{M-1}$ correlation tensor elements of each type and consequently $$\mathcal{L}_{A \vec B}^2 = 2^{M-1} \sin^2 2\alpha, \quad \mathcal{L}_{A \vec C}^2 = 2^{M-1}(1+ \cos^2 2\alpha). \label{TIGHT}$$ Therefore, the bound is always achieved and all allowed values of $\mathcal{L}_{A \vec B}$ and $\mathcal{L}_{A \vec C}$ can be attained either by the state (\[PSI\_MONO\]) or the state with the roles of qubits $\vec B \leftrightarrow \vec C$ interchanged. The underlying reason why the above trade-off allows for violation by both $A \vec B$ and $A \vec C$ is the fact that sets of anti-commuting operators of the Bell parameters can contain at most four elements. Now we present a much stronger new monogamy related to the graph in Fig. \[FIG\_GRAPHS\]d. Consider $M$-partite Bell inequalities corresponding to different paths from the root of the graph to its leaves ($M=3$ in Fig. \[FIG\_GRAPHS\]d). There are $2^{M-1}$ such inequalities and we shall prove that their quantum mechanical values obey $$\mathcal{L}_1^2 + \dots + \mathcal{L}_{2^{M-1}}^2\le 2^{M-1}, \label{STRONG}$$ where $\mathcal{L}_j$ is the quantum value of the $j$-th Bell parameter in the graph. 
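Equation (\[TIGHT\]) can be verified numerically for small $M$. A sketch for $M=2$ (five qubits ordered $A,B_1,B_2,C_1,C_2$; the encoding and helper names are our own choices):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])

def tensor(ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

alpha = 0.3                        # an arbitrary test angle
c = np.cos(alpha) / np.sqrt(2)
s = np.sin(alpha) / np.sqrt(2)
# |psi> from Eq. (PSI_MONO) for M = 2, qubit order A, B1, B2, C1, C2.
psi = np.zeros(2 ** 5, dtype=complex)
psi[0b00000] = c                   # |0 00 00>
psi[0b10011] = c                   # |1 00 11>
psi[0b11100] = s                   # |1 11 00>
psi[0b01111] = s                   # |0 11 11>

def plane_sum(qubits):
    """Sum of T^2 over all x,y settings on the listed qubits."""
    total = 0.0
    for choice in np.ndindex(*(2,) * len(qubits)):
        ops = [I2] * 5
        for q, ch in zip(qubits, choice):
            ops[q] = X if ch == 0 else Y
        T = np.real(psi.conj() @ tensor(ops) @ psi)
        total += T ** 2
    return total

L_AB2 = plane_sum((0, 1, 2))       # A together with group B
L_AC2 = plane_sum((0, 3, 4))       # A together with group C
```

With $M=2$ this reproduces $\mathcal{L}_{A\vec B}^2 = 2\sin^2 2\alpha$ and $\mathcal{L}_{A\vec C}^2 = 2(1+\cos^2 2\alpha)$, so the sum saturates $2^M=4$ for every $\alpha$.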
To prove this, we construct $2^{M-1}$ sets of anti-commuting operators, each set containing $2^M$ elements, such that they exhaust all correlation tensor elements which enter the bound of the left-hand side of (\[STRONG\]) after application of condition (\[UP\_BOUND\]). The construction also uses the graph of the binary tree. We begin at the root, to which we associate a set of two anti-commuting operators, $X$ and $Y$, for the corresponding qubit. The general rule now is that if we move up in the graph from qubit $A$ to qubit $B$, we generate two new anti-commuting operators by placing $X$ or $Y$ at position $B$ in the operator which has $X$ at position $A$. Similarly, if we move down in the graph to qubit $C$, we generate two new anti-commuting operators by placing $X$ or $Y$ at position $C$ in the operator which has $Y$ at position $A$. For example, starting from the set of operators $(X,Y)$, by moving up we obtain $(XX\openone,XY\openone)$, and by moving down we have $(Y \openone X ,Y \openone Y)$. The next sets of operators are $(XX\openone X\openone \openone \openone, XX\openone Y\openone \openone \openone)$, $( XY\openone\openone X \openone \openone, XY\openone \openone Y \openone \openone)$, $(Y\openone X\openone \openone X\openone, Y\openone X \openone \openone Y \openone)$ and $(Y \openone Y \openone \openone \openone X, Y \openone Y\openone \openone \openone Y)$ if we move from the root: up up, up down, down up and down down, respectively. By following this procedure in the whole graph we obtain a set of $2^M$ mutually anti-commuting operators. According to this algorithm the anti-commuting operators can be grouped in pairs having the same Pauli operators except for the qubits of the last step (the leaves of the graph). There are $2^{M-1}$ such pairs, corresponding to distinct combinations of tensor products of $X$ and $Y$ operators on $M-1$ positions. 
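The output of this construction can be checked directly for $M=3$: the eight seven-qubit operators listed above are mutually anti-commuting. A sketch (matrix verification; the string encoding is our own convention):

```python
import numpy as np

P = {'I': np.eye(2),
     'X': np.array([[0, 1], [1, 0]]),
     'Y': np.array([[0, -1j], [1j, 0]])}

def op(word):
    """Tensor product of single-qubit Paulis described by a string."""
    out = np.array([[1.0 + 0j]])
    for ch in word:
        out = np.kron(out, P[ch])
    return out

# The eight operators produced by the binary-tree rule for M = 3,
# in the qubit ordering used in the text.
words = ['XXIXIII', 'XXIYIII', 'XYIIXII', 'XYIIYII',
         'YIXIIXI', 'YIXIIYI', 'YIYIIIX', 'YIYIIIY']
ops = [op(w) for w in words]
all_anticommute = all(np.allclose(a @ b + b @ a, 0)
                      for i, a in enumerate(ops) for b in ops[i + 1:])
```

Each pair of listed operators differs at exactly one position where both entries are non-identity ($X$ versus $Y$), so anti-commutation follows from the local Pauli algebra; `all_anticommute` confirms this.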
Importantly, these positions differ between operators, and to generate the whole set of operators entering the bound we have to perform suitable permutations of positions. Such permutations always exist and they do not affect anti-commutativity. Finally we end up with the promised $2^{M-1}$ sets of $2^{M}$ anti-commuting operators each, which according to Eq. (\[INEQ\]) give the bound of (\[STRONG\]). The inequality (\[STRONG\]) is stronger than the previous trade-off relation in the sense that it does not allow simultaneous violation of all the inequalities on its left-hand side. All other patterns of violations are possible, as we now show. Choose any number, $m$, of Bell inequalities, i.e., paths in Fig. \[FIG\_GRAPHS\]d. Altogether they involve $n$ parties which share the following quantum state $${\left | \psi_n \right\rangle} = \frac{1}{\sqrt{2}} | \underbrace{0 \dots 0}_{n} \rangle + \frac{1}{\sqrt{2 m}} \sum_{j = 1}^m | 0 \dots 0 \underbrace{1 \dots 1}_{\mathcal{P}_j} 0 \dots 0 \rangle,$$ where $\mathcal{P}_j$ denotes the parties involved in the $j$-th Bell inequality. Note that all states under the sum are orthogonal as they involve different parties. The only non-vanishing components of the correlation tensor of this state have an even number of $y$ indices for the parties involved in the Bell inequalities. Squares of all these components are equal to $\tfrac{1}{m}$, which gives $\mathcal{L}_j^2 = \tfrac{2^{M-1}}{m}$ for each Bell inequality $j=1,\dots, m$. Therefore, all $m$ Bell inequalities are violated as soon as $m < 2^{M-1}$. Moreover, the sum of these $m$ Bell parameters saturates the bound of (\[STRONG\]) and therefore, independently of the state shared by the other parties, the remaining Bell parameters of (\[STRONG\]) all vanish. In conclusion, we have derived monogamy relations for multipartite Bell inequality violations which are all quadratic functions of Bell parameters. 
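The claimed value $\mathcal{L}_j^2 = 2^{M-1}/m$ can likewise be checked for $M=3$, $m=2$. A sketch with two root-sharing paths and party sets $\mathcal{P}_1=\{A,B,D\}$, $\mathcal{P}_2=\{A,C,E\}$ (the party labels and ordering are our own choice):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])

def tensor(ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

# |psi_n> for M = 3, m = 2, n = 5: qubits A, B, C, D, E;
# path 1 involves A, B, D and path 2 involves A, C, E.
psi = np.zeros(2 ** 5, dtype=complex)
psi[0b00000] = 1 / np.sqrt(2)
psi[0b11010] = 0.5                 # 1's at the parties of path 1 (A, B, D)
psi[0b10101] = 0.5                 # 1's at the parties of path 2 (A, C, E)

def plane_sum(qubits):
    """Sum of T^2 over all x,y settings on the listed qubits."""
    total = 0.0
    for choice in np.ndindex(*(2,) * len(qubits)):
        ops = [I2] * 5
        for q, ch in zip(qubits, choice):
            ops[q] = X if ch == 0 else Y
        T = np.real(psi.conj() @ tensor(ops) @ psi)
        total += T ** 2
    return total

L1_sq = plane_sum((0, 1, 3))       # bound for the Bell parameter of path 1
L2_sq = plane_sum((0, 2, 4))       # bound for the Bell parameter of path 2
```

Both squared Bell parameters reach $2^{M-1}/m = 2 > 1$, so both inequalities are violated, and their sum already saturates the bound $2^{M-1}=4$ of (\[STRONG\]).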
As such, these relations are stronger than those following from the no-signaling principle alone, which are linear in the Bell parameters [@BLMPSR2005; @MAN2006; @TONER2009; @PB2009]. Indeed, most of our monogamies are tight in the sense that they precisely identify the set of Bell violations allowed by quantum theory. Our proofs are within the quantum formalism and utilize the bounds imposed by the complementarity principle. These bounds were established for dichotomic observables and are applicable to any Bell inequality involving these; it would be useful to extend the formalism to measurements with more outcomes. It would also be interesting to see if the Bell violation trade-offs can be derived without using the quantum formalism; a candidate for this task is the principle of information causality [@IC]. *Acknowledgements*. This research is supported by the National Research Foundation and Ministry of Education in Singapore. WL is supported by the EU program Q-ESSENCE (Contract No. 248095), the MNiSW Grant No. N202 208538 and by the Foundation for Polish Science. [9]{} J. S. Bell, Physics [**1**]{}, 195 (1964). V. Scarani and N. Gisin, Phys. Rev. Lett. [**87**]{}, 117901 (2001); Phys. Rev. A [**65**]{}, 012311 (2001). B. Toner and F. Verstraete, quant-ph/0611001. J. Barrett, N. Linden, S. Massar, S. Pironio, S. Popescu, and D. Roberts, Phys. Rev. A [**71**]{}, 022101 (2005). Ll. Masanes, A. Acin, and N. Gisin, Phys. Rev. A [**73**]{}, 012112 (2006). B. Toner, Proc. R. Soc. A [**465**]{}, 59 (2009). M. Paw[ł]{}owski and [Č]{}. Brukner, Phys. Rev. Lett. [**102**]{}, 030403 (2009). J. Oppenheim and S. Wehner, Science [**330**]{}, 1072 (2010). G. Tóth and O. Gühne, Phys. Rev. A [**72**]{}, 022340 (2005). S. Wehner and A. Winter, J. Math. Phys. [**49**]{}, 062105 (2008); New J. Phys. [**12**]{}, 025009 (2010). R. Ramanathan, T. Paterek, A. Kay, P. Kurzyński, and D. Kaszlikowski, arXiv:1010.2016 (2010). B. S. Tsirelson, Lett. Math. Phys. [**4**]{}, 93 (1980). H. Weinfurter and M. 
Żukowski, Phys. Rev. A [**64**]{}, 010102(R) (2001). R. F. Werner and M. M. Wolf, Phys. Rev. A [**64**]{}, 032112 (2001). M. Żukowski and [Č]{}. Brukner, Phys. Rev. Lett. [**88**]{}, 210401 (2002). J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. [**23**]{}, 880 (1969). W. Laskowski, T. Paterek, M. Żukowski, and [Č]{}. Brukner, Phys. Rev. Lett. [**93**]{}, 200401 (2004). N. D. Mermin, Phys. Rev. Lett. [**65**]{}, 1838 (1990). M. Paw[ł]{}owski, T. Paterek, D. Kaszlikowski, V. Scarani, A. Winter, and M. Żukowski, Nature [**461**]{}, 1101 (2009).
--- abstract: 'We construct an obstruction to the existence of embeddings of a homology $3$-sphere into a homology $S^3\times S^1$ under some cohomological condition. The obstruction is defined as an element in the filtered version of the instanton Floer cohomology due to [@FS92]. We make use of the infinite cyclic covering space of the homology $S^3\times S^1$ and the instantons on it.' author: - Masaki Taniguchi bibliography: - 'Instantons.bib' title: 'Instantons for 4-manifolds with periodic ends and an obstruction to embeddings of 3-manifolds' --- Introduction ============ There are two notable studies of gauge theory for 4-manifolds with periodic ends, by C. H. Taubes [@T87] and J. Lin [@L16]. They gave sufficient conditions for the existence of natural compactifications of the instanton and the Seiberg–Witten moduli spaces for such non-compact 4-manifolds. The condition of Taubes is the non-existence of non-trivial $SU(2)$ flat connections on a segment of the end. The condition of Lin is the existence of a positive scalar curvature metric on a segment of the end. As applications of the existence of the compactification, Taubes showed the existence of an uncountable family of exotic ${\mathbb R}^4$'s and Lin constructed an obstruction to the existence of a positive scalar curvature metric. In this paper, we also give a similar sufficient condition for the instanton moduli spaces. The condition is a uniform bound on the $L^2$-norm of the curvature of instantons. When we bound the $L^2$-norm of the curvature, we use an invariant which is a generalization of the Chern-Simons functional. Under this condition, we prove a compactness theorem (Theorem \[cptness\]). As the main theorem of this paper, we construct an obstruction to the existence of embeddings of a homology $3$-sphere into a homology $S^3\times S^1$ satisfying some cohomological condition (Theorem \[mainthm\]). To formulate the obstruction, we need a variant of the instanton Floer cohomology. 
The variant is the filtered instanton Floer cohomology $HF^i_r$, whose filtration was essentially considered by R. Fintushel and R. Stern in [@FS92]. The obstruction is an element of the filtered instanton cohomology. We denote the element by $[\theta^r]$. The element $[\theta^r]\in HF^1_r$ is a filtered version of the class $[\theta]$ which was already defined by S. K. Donaldson [@Do02] and K. Frøyshov [@Fr02]. The class $[\theta]$ is defined by counting the gradient lines of the Chern-Simons functional which converge to the trivial flat connection. In order to show that $[\theta^r]$ is actually an obstruction to embeddings, we count the ends of $1$-dimensional instanton moduli spaces for a 4-manifold which has both a cylindrical and a periodic end. For this counting, we use the compactness theorem (Theorem \[cptness\]). This paper is organized as follows. In Section \[main\], we give a precise formulation of our main theorem (Theorem \[mainthm\]). In Section \[moduli theory\], we prepare several notations and constructions which are used in the rest of this paper. In particular, we introduce the filtered instanton Floer homology $HF^r_i$ and the obstruction class $[\theta^r]$. We also review Fredholm and moduli theory for 4-manifolds with periodic ends. In Section \[excs\], we generalize the Chern-Simons functional and introduce the invariants $Q^{i}_X$. In Section 5, we prove the compactness theorem (Theorem \[cptness\]). We use $Q^{i}_X$ to control the $L^2$-norm of the curvature. In Section 6, we deal with technical arguments about transversality and orientations for the instanton moduli spaces for 4-manifolds with periodic ends. In Section 7, we prove Theorem \[mainthm\]. [**Acknowledgements**]{}: The author is grateful to Mikio Furuta for his suggestions. The author would like to thank Hokuto Konno for useful conversations. Main theorem {#main} ============ Let $X$ be a homology $S^3\times S^1$, i.e. 
, $X$ is a closed 4-manifold equipped with an isomorphism $\phi:H_*(X,{\mathbb Z})\ri H_*(S^3\times S^1,{\mathbb Z})$ in this paper. Then $X$ has an orientation induced by the standard orientation of $S^3\times S^1$ and $\phi$. Let $Y$ be an oriented homology $S^3$. We construct an obstruction to embeddings $f$ of $Y$ into $X$ satisfying $f_*[Y]=1\in H_3(X,{\mathbb Z})$ as an element in the filtered instanton Floer cohomology. We use information about the compactness of the instanton moduli spaces for 4-manifolds with periodic ends in a crucial step of our construction. In order to formulate our main theorem, we need to prepare several notations. For any manifold $Z$, we denote by $P_Z$ the product $SU(2)$ bundle. The product connection on $P_Z$ is denoted by $\theta$. $${\mathcal A}(Z):= \{SU(2)\text{-connections on $P_Z$}\},$$ $${\mathcal A}^{\text{flat}}(Z):=\{SU(2)\text{-flat connections on $P_Z$}\}\subset {\mathcal A}(Z),$$ $$\widetilde{{\mathcal B}}(Z):= {\mathcal A}(Z) /\map_0(Z,SU(2)),$$ $$\widetilde{R}(Z):={\mathcal A}^{\text{flat}}(Z) /\map_0(Z,SU(2))\subset \widetilde{{\mathcal B}}(Z),$$ and $$R(Z):= {\mathcal A}^{\text{flat}}(Z)/\map(Z,SU(2)),$$ where $\map_0(Z,SU(2))$ is the set of smooth maps from $Z$ to $SU(2)$ with mapping degree $0$. When $Z$ is equal to $Y$, the Chern-Simons functional $cs_Y:{\mathcal A}(Y)\ri {\mathbb R}$ is defined by $$cs_Y(a):=\frac{1}{8\pi^2}\int_Y Tr(a\wedge da +\frac{2}{3}a\wedge a\wedge a).$$ It is known that $cs_Y$ descends to a map $\widetilde{{\mathcal B}}(Y)\ri {\mathbb R}$, which we denote by the same notation $cs_Y$. \[defl\] We denote the number of elements in $R(Y)$ by $l_Y$. If $R(Y)$ is not a finite set, we set $l_Y=\infty$. We will use the following assumption on $Y$ in our main theorem (Theorem \[mainthm\]). \[imp\] All $SU(2)$ flat connections on $Y$ are non-degenerate, i.e. 
the first cohomology group of the following twisted de Rham complex: $$0 \ri \Om^0(Y)\otimes {\mathfrak{su}(2)}{\xrightarrow}{d_a} \Om^1(Y)\otimes {\mathfrak{su}(2)}{\xrightarrow}{d_a} \Om^2(Y)\otimes {\mathfrak{su}(2)}{\xrightarrow}{d_a}\Om^3(Y)\otimes {\mathfrak{su}(2)}\ri 0$$ vanishes for $[a] \in R(Y)$. All flat connections on the Brieskorn homology $3$-sphere $\Sigma(p,q,r)$ are non-degenerate ([@FS90]). Under Assumption \[imp\], $l_Y$ is finite ([@T90]). In this paper, without using Assumption \[imp\], we will introduce the following invariants: - $HF^i_r(Y)$ for $Y$ and $r \in ({\mathbb R}\setminus cs_Y(\widetilde{R}(Y)))\cup \{\infty\}$ in Definition \[defofHFr\*\] satisfying $HF^i_\infty(Y)=HF^i(Y)$, - $[\theta^r] \in HF^1_r(Y)$ for $Y$ and $r \in ({\mathbb R}\setminus cs_Y(\widetilde{R}(Y)))\cup \{\infty\}$ in Definition \[defiofthetar\] satisfying $[\theta^\infty]=[\theta] \in HF^1(Y)$, and - $Q^i_X \in {\mathbb R}_{\geq 0} \cup \{\infty\} $ for $i \in {\mathbb N}$ and $X$ in Definition \[defofQiX\] (when $X$ is homotopy equivalent to $S^3\times S^1$, $Q^i_X=\infty$ for all $i \in {\mathbb N}$). Our main theorem is: \[mainthm\] Under Assumption $\ref{imp}$, if there exists an embedding $f$ of $Y$ into $X$ with $f_*[Y]=1\in H_3(X,{\mathbb Z})$, then $[\theta^r]$ vanishes for any $r\in [0,\min \{Q^{2l_Y+3}_X, 1\}] \cap (({\mathbb R}\setminus cs_Y(\widetilde{R}(Y)))\cup \{\infty\})$. In particular, if there exists an element $$r\in [0,\min\{Q^{2l_Y+3}_X,1\}] \cap (({\mathbb R}\setminus cs_Y(\widetilde{R}(Y)))\cup \{\infty\})$$ satisfying $0\neq [\theta^r]$, Theorem \[mainthm\] implies that there is no embedding $f$ of $Y$ into $X$ with $f_*[Y]=1 \in H_3(X,{\mathbb Z})$. \[exbri\] Let $X$ be a closed $4$-manifold which is homotopy equivalent to $S^3\times S^1$. There is no embedding $f$ of $\Sigma(2,3,6k-1)$ into $X$ satisfying $f_*[\Sigma(2,3,6k-1)] =1\in H_3(X,{\mathbb Z})$ for a positive integer $k$ satisfying $1\leq k \leq 12$. 
The proof of Example \[exbri\] is given at the end of Subsection \[filter\]. Preliminaries {#moduli theory} ============= In this section, we review the (filtered) instanton Floer theory and moduli theory on 4-manifolds with periodic ends. Holonomy perturbation (1) {#hol} ------------------------ In this subsection we review classes of perturbations which were considered in [@Fl88], [@BD95] to define the instanton Floer homology. Let $Y$ and $P_Y$ be as in Section \[main\]. We fix a Riemannian metric $g_Y$ on $Y$. We define the set of embeddings of solid tori into $Y$ by $${\mathcal{F}}_d:= \left\{ (f_i)_{1\leq i \leq d} :S^1\times D^2\ri Y \middle | f_i: \text{orientation preserving embedding} \ \right\}.$$ Fix a two-form $d\mathcal{S}$ on $D^2$ supported in the interior of $D^2$ with $\int_{D^2}d\mathcal{S}=1$. We denote by $C^l(SU(2)^d,{\mathbb R})_{\ad}$ the adjoint-invariant $C^l$-class functions from $SU(2)^d$ to ${\mathbb R}$ and define $$\prod(Y):= \bigcup_{d \in {\mathbb N}}{\mathcal{F}}_d\times C^l(SU(2)^d,{\mathbb R})_{\ad}.$$ We use the following notation, $$\widetilde{{\mathcal B}}^*(Y):=\left\{[a]\in \widetilde{{\mathcal B}}(Y)\middle | \text{$a$ is an irreducible connection} \right\} ,$$ where $\widetilde{{\mathcal B}}(Y)$ is defined in Section \[main\]. For $\pi = (f,h)\in \prod(Y)$, the perturbed Chern-Simons functional $cs_{Y,\pi}:\widetilde{{\mathcal B}}^*(Y) \ri {\mathbb R}$ is defined by $$cs_{Y,\pi}(a)= cs_Y(a)+ \int_{x \in D^2} h(\hol(a)_{f_1(-,x)},\dots ,\hol(a)_{f_d(-,x)})d\mathcal{S},$$ where $\hol(a)_{f_i(-,x)}$ is the holonomy around the loop $t \mapsto f_i(t,x)$ for each $i \in \{1,\dots,d\}$. If we identify ${\mathfrak{su}(2)}$ with its dual by the Killing form, the derivative of $h_i=pr_i^*h$ is a Lie-algebra-valued 1-form over $SU(2)$ for $h \in C^l(SU(2)^d,{\mathbb R})_{\ad}$. Using the values of the holonomy for the loops $\{f_i(x,t)| t\in S^1\}$, we obtain a section $\hol_{f_i(t,x)}(a)$ of the bundle $P$ over $\im f_i$. 
Sending the section $\hol_{f_i(t,x)}(a)$ by the bundle map induced by $h_i':\aut P\ri \ad P$, we obtain a section $h_i'(\hol_{f_i(t,x)}(a))$ of $\ad P$ over $\im f_i$. We now describe the gradient-line equation of $cs_{Y,\pi}$ with respect to the $L^2$ metric: $$\begin{aligned} \label{grad} \frac{\partial}{\partial t} a_t=\grad_a\ cs_{Y,\pi} = *_{g_Y}(F(a_t)+\sum_{1\leq i \leq d} h'_i(\hol(a_t)_{f_i(t,x)})(f_i)_*{\pr}_2^*d\mathcal{S}),\end{aligned}$$ where $\pr_2$ is the second projection $\pr_2:S^1\times D^2 \ri D^2$ and $*_{g_Y}$ is the Hodge star operator. We denote $\pr_2^*d\mathcal{S}$ by $\eta$. We set $$\widetilde{R}(Y)_\pi:= \left\{a \in \widetilde{{\mathcal B}}(Y) \middle |F(a)+\sum_{1\leq i \leq d} h'_i(\hol(a)_{f_i(t,x)})(f_i)_*\eta=0 \right\},$$ and $$\widetilde{R}^*(Y)_\pi:= \widetilde{R}(Y)_\pi \cap \widetilde{{\mathcal B}}^*(Y).$$ The solutions of (\[grad\]) correspond to connections $A$ over $Y\times {\mathbb R}$ which satisfy the equation: $$\begin{aligned} \label{pASD} F^+(A)+ \pi(A)^+=0,\end{aligned}$$ where - The two-form $\pi(A)$ is given by $$\sum_{1\leq i \leq d} h'_i(\hol(A)_{\tilde{f}_i(t,x,s)}){(\tilde{f}_i)}_* (\pr_1^* \eta).$$ - The map $\pr_1$ is the projection map from $(S^1\times D^2) \times {\mathbb R}$ to $S^1\times D^2$. - The superscript $+$ denotes the self-dual component with respect to the product metric on $Y\times {\mathbb R}$. - The map $\tilde{f}_i: S^1\times D^2\times {\mathbb R}\ri Y\times {\mathbb R}$ is $f_i\times id$. We also use several classes of perturbations. \[flatpres\] A class of perturbations $\prod(Y)^{\text{flat}}$ is defined as the subset of $\prod(Y)$ consisting of the elements satisfying the conditions: - $cs_Y $ coincides with $cs_{Y,\pi}$ on a small neighborhood of the critical points of $cs_Y$ - $\widetilde{R}(Y)=\widetilde{R}(Y)_\pi$. 
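The correspondence between gradient lines of $cs_{Y,\pi}$ and solutions of the perturbed ASD equation noted above is the usual temporal-gauge computation; we record a sketch for convenience (this remark is ours, and signs depend on the orientation conventions):

```latex
% Temporal gauge: A = a_t with no dt-component on Y x R.
F(A) \;=\; F(a_t) \;+\; dt\wedge \frac{\partial a_t}{\partial t},
\qquad
F^+(A) \;=\; \tfrac{1}{2}\bigl(F(A) + *_4\, F(A)\bigr).
% Hence F^+(A) + \pi(A)^+ = 0 holds if and only if (up to sign conventions)
\frac{\partial a_t}{\partial t}
\;=\; \pm\, *_{g_Y}\!\Bigl(F(a_t)
  + \sum_{1\leq i\leq d} h_i'\bigl(\hol(a_t)_{f_i(t,x)}\bigr)(f_i)_*\eta\Bigr),
```

which is the gradient-line equation of the perturbed Chern-Simons functional written above.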
If the cohomology groups defined by the complex (12) in [@SaWe08] satisfy $H^i_{\pi,a}=0$ for all $[a] \in \widetilde{R}(Y)_\pi \setminus \{ [\theta]\}$, we call $\pi$ a [*non-degenerate perturbation*]{}. If $\pi$ satisfies the following conditions, we call $\pi$ a [*regular perturbation*]{}. - The linearization of $$d^+_A+d\pi^+_A : \Om^1(Y\times {\mathbb R})\otimes {\mathfrak{su}(2)}_{L^2_q}\ri \Om^+(Y\times {\mathbb R})\otimes {\mathfrak{su}(2)}_{L^2_q}$$ is surjective for $[A] \in M(a,b)_\pi$ and all irreducible critical points $a,b$ of $cs_{Y,\pi}$. - The linearization of $$d^+_A+d\pi^+_A : \Om^1(Y\times {\mathbb R})\otimes {\mathfrak{su}(2)}_{L^2_{q,\delta}}\ri \Om^+(Y\times {\mathbb R})\otimes {\mathfrak{su}(2)}_{L^2_{q,\delta}}$$ is surjective for $[A] \in M(a,\theta)_\pi$ and all irreducible critical points $a$ of $cs_{Y,\pi}$. Here the spaces $M(a,b)_\pi$ and $M(a,\theta)_\pi$ are given in Subsection \[filter\], $L^2_q$ is the Sobolev norm and $L^2_{q,\delta}$ is the weighted Sobolev norm, which is the same as in Subsection $3.3.1$ of [@Do02]. Filtered instanton Floer (co)homology {#filter} ------------------------------------- In this subsection, we give the definition of the filtration of the instanton Floer (co)homology by using the technique in [@FS92]. First, we give the definition of the usual instanton Floer homology. Let $Y$ be a homology $S^3$ and fix a Riemannian metric $g_Y$ on $Y$. Fix a non-degenerate regular perturbation $\pi \in \prod(Y)$. Roughly speaking, the instanton Floer homology is an infinite-dimensional Morse homology with respect to $$cs_{Y,\pi} :\widetilde{{\mathcal B}}^*(Y) \ri {\mathbb R}.$$ Floer defined $\ind: \widetilde{R}^*(Y)_\pi \ri {\mathbb Z}$, called the Floer index. 
The (co)chains of the instanton Floer homology are defined by $$CF_i:= {\mathbb Z}\left\{ [a] \in \widetilde{R}^*(Y)_\pi \middle | \ind(a)=i \right\}\ (CF^i:= \hom(CF_i,{\mathbb Z})).$$ The boundary maps $\partial :CF_i \ri CF_{i-1}$ ($\delta:CF^i\ri CF^{i+1}$) are given by $$\partial ([a]) := \sum_{b \in \widetilde{R}^*(Y)_\pi \text{ with } \ind(b)=i-1}\# (M(a,b)/{\mathbb R}) [b]\ (\delta:=\partial^*),$$ where $M(a,b)$ is the space of trajectories of $cs_{Y,\pi} $ from $a$ to $b$. We now write the explicit definition of $M(a,b)$. Fix a positive integer $q\geq3$. Let $A_{a,b}$ be an $SU(2)$ connection on $Y \times {\mathbb R}$ satisfying $A_{a,b}|_{Y\times (-\infty,-1]}=p^*a$ and $A_{a,b}|_{Y\times [1,\infty)}=p^*b$, where $p$ is the projection $Y\times {\mathbb R}\ri Y$. $$\begin{aligned} \label{*} M(a,b)_\pi:=\left\{A_{a,b}+c \middle | c \in \Omega^1(Y\times {\mathbb R})\otimes {\mathfrak{su}(2)}_{L^2_q}\text{ with } \eqref{pASD} \right\}/ {\mathcal G}(a,b),\end{aligned}$$ where ${\mathcal G}(a,b)$ is given by $${\mathcal G}(a,b):=\left\{ g \in \aut(P_{Y\times {\mathbb R}})\subset {\End(\mathbb{C}^2) }_{L^2_{q+1,\text{loc}}} \middle | \nabla_{A_{a,b}}(g) \in L^2_q \right\}.$$ The action of ${\mathcal G}(a,b)$ on $\left\{A_{a,b}+c \middle | c \in \Omega^1(Y\times {\mathbb R})\otimes {\mathfrak{su}(2)}_{L^2_q}\text{ with }\eqref{pASD} \right\}$ is given by the pull-backs of connections. The space ${\mathbb R}$ acts on $M(a,b)$ by translation. Floer showed that $M(a,b)/{\mathbb R}$ has the structure of a compact oriented $0$-manifold, whose orientation is induced by the orientation of certain determinant line bundles, and that $\partial^2=0$ holds. The instanton Floer (co)homology $HF_*(Y)$ ($HF^*(Y)$) is defined by $$HF_*(Y):= \ker \partial / \im \partial \ (HF^*(Y):= \ker \delta / \im \delta).$$ Second, we introduce the filtration on the instanton Floer homology. This filtration was essentially considered by Fintushel and Stern in [@FS92]. 
We follow Fintushel and Stern and use the class of perturbations, which they call $\epsilon$-perturbations, defined in Section $3$ of [@FS92]. They constructed a ${\mathbb Z}$-graded Floer homology whose chains are generated by the critical points of $cs_{Y,\pi}$ with $cs_Y(a)\in (m,m+1)$. We now consider a Floer homology whose chains are generated by the critical points of $cs_{Y,\pi}$ with $cs_Y(a)\in (-\infty,r)$. Let $\widetilde{R}(Y) $ be as in Section \[main\] and $\Lambda_Y$ be ${\mathbb R}\setminus \im \ cs_Y|_{\widetilde{R}(Y)}$. For $r \in \Lambda_Y$, we define the filtered instanton homology $HF^r_*(Y)$ ($HF^*_r(Y)$) by using an $\epsilon$-perturbation. For $r \in \Lambda_Y$, we set $\epsilon= \inf_{a \in \widetilde{R}(Y)} |cs_Y(a)-r|$ and choose an $\epsilon$-perturbation $\pi$. \[defofHFr\*\] The chains of the filtered instanton Floer (co)homology are defined by $$CF^r_i:= {\mathbb Z}\left\{ [a] \in \widetilde{R}^*(Y)_\pi \middle | \ind(a)=i,\ cs_{Y,\pi}(a)<r \right\}\ (CF^i_r:= \hom(CF^r_i,{\mathbb Z})).$$ The boundary maps $\partial^r :CF_i^r \ri CF_{i-1}^r$ (resp. $\delta^r:CF^i_r\ri CF^{i+1}_r $) are given by the restriction of $\partial$ to $CF_i^r$ (resp. $\delta^r:=(\partial^r)^*$). These maps are well-defined and $(\partial^r)^2=0$ holds as in Section $4$ of [@FS92]. The filtered instanton Floer (co)homology $HF^r_*(Y)$ (resp. $HF_r^*(Y)$) is defined by $$\begin{aligned} HF^r_*(Y):= \ker \partial^r / \im \partial^r \ (\text{resp. }HF^*_r(Y):= \ker \delta^r / \im \delta^r).\end{aligned}$$ We can also show that $HF^r_i(Y)$ and $HF^i_r(Y)$ are independent of the choices of the perturbation and the metric by a discussion similar to that in [@FS92]. For $r \in \Lambda_Y$, we now introduce obstruction classes in $HF^1_r(Y)$. These invariants are generalizations of $[\theta] \in HF^1(Y)$ considered in Subsection $7.1$ of [@Do02] and Subsection $2.1$ of [@Fr02]. 
\[defiofthetar\] For $r \in \Lambda_Y$, we define a homomorphism $$\theta^r :CF^r_1\ri {\mathbb Z}$$ by $$\begin{aligned} \theta^r(a):= \# (M(a,\theta)_\pi/{\mathbb R}).\end{aligned}$$ As in [@Do02] and [@Fr02], we use the weighted norm on $M(a,\theta)_\pi$ in order to apply Fredholm theory. By the same argument as in the proof of $(\delta^r)^2=0$, we can show $\delta^r (\theta^r)=0$. Therefore $\theta^r$ defines a class $[\theta^r] \in HF^1_r(Y)$. We call the class $[\theta^r]$ the [*obstruction class*]{}. The class $[\theta^r]$ does not depend on the small perturbation and the metric. The proof is similar to that for the original class $[\theta]$. Now we give the proof of Example \[exbri\].\ Because $X$ is homotopy equivalent to $S^3\times S^1$, $Q^i_X=\infty$ for $i \in {\mathbb N}$. If the element $[\theta^1] \in HF^1_r(-\Sigma(2,3,6k-1))$ does not vanish for $r=1$, we can apply Theorem \[mainthm\]. Frøyshov showed $0 \neq [\theta] \in HF^1(-\Sigma(2,3,6k-1))$ by using the property of the $h$-invariant in Proposition $4$ of [@Fr02]. Then we get a nonzero homomorphism $\theta:CF^r_1(Y)\ri {\mathbb Z}$ for $r=\infty$. (If $r=\infty$, $HF^1_r$ is the usual instanton Floer cohomology by definition.) By using the calculation of the values of the Chern-Simons functional in Section 7 of [@FS92], we can see that $\theta^1:CF_1^r\ri {\mathbb Z}$ is nonzero for $r=1$ and $CF_i^r$ is zero for $r=1$ and $i\in 2{\mathbb Z}$. This implies $$0 \neq [\theta^1] \in HF^1_r(-\Sigma(2,3,6k-1)) \text{ for } r=1.$$ Fredholm theory and moduli theory on 4-manifolds with periodic end {#Fred} ------------------------------------------------------------------ In [@T87], Taubes constructed the Fredholm theory of a certain class of elliptic operators on 4-manifolds with periodic ends. He also extended the moduli theory of $SU(2)$ gauge theory to such non-compact 4-manifolds. 
In this subsection, we review the Fredholm theory of a certain elliptic operator on $4$-manifolds with periodic ends as in [@T87] and define the Fredholm index of this class of operators, which gives the formal dimension of a suitable instanton moduli space on such non-compact 4-manifolds. First we formulate the 4-manifolds with periodic ends. Let $Y$ be an oriented homology $S^3$ as in Section \[main\]. Let $W_0$ be an oriented homology cobordism from $Y$ to $-Y$. We get a compact oriented 4-manifold $X$ by pasting $W_0$ with itself along $Y$ and $-Y$. We introduce several notations used in our argument. - The manifold $W_i$ is a copy of $W_0$ for $i \in {\mathbb Z}$. - We denote $\partial(W_i)$ by $Y^i_+\cup Y^i_-$ where $Y^i_+$ (resp. $Y^i_-$) is equal to $Y$ (resp. $-Y$) as an oriented manifold. - For $(m,n)\in ({\mathbb Z}\cup \{ -\infty\} ) \times ({\mathbb Z}\cup \{\infty\})$ with $m<n$, we set $$\displaystyle W[m,n]:=\coprod_{m\leq i \leq n} W_i / \{Y^j_- \sim Y^{j+1}_+,\ j \in \{m,\cdots ,n-1\}\}.$$ We denote by $W$ the following non-compact 4-manifold $$W:= Y\times (-\infty,0] \cup W[0,\infty]/\{\partial (Y\times (-\infty,0]) \sim Y^{0}_+\}.$$ For a fixed Riemannian metric $g_Y$ on $Y$, we choose a Riemannian metric $g_W$ on $W$ which satisfies - $g_W|_{Y\times (-\infty,-1]}=g_Y\times g^{\text{stan}}_{\mathbb R}$. - The restriction $g_W|_{W[0,\infty]}$ is a periodic metric. There is a natural orientation on $W[0,\infty]$ and $W$ induced by the orientation of $W_0$. The infinite cyclic covering space of $X$ can be written as $$\widetilde{X} \cong W[-\infty,\infty].$$ Let $T$ be the deck transformation of $\widetilde{X}$ which maps each $W_i$ to $W_{i+1}$. By restriction, $T$ acts on $W[0,\infty]$. We use the following smooth functions $\tau$ and $\tau'$ on $W$ $$\tau,\ \tau': W \ri {\mathbb R}\,$$ satisfying - $\tau (T|_{W[0,\infty]}(x))= \tau(x)+1$, $\tau'(T|_{W[0,\infty]}(x))=\tau'(x)+1$ for $x \in W[0,\infty]$.
- $\tau|_{Y\times (-\infty,-2]}=0$, $\tau'(y,t)= -t$ for $(y,t) \in Y\times (-\infty,-2]$. By the restriction of $\tau$, we have a function on $W[0,\infty]$ which we denote by the same notation $\tau$. In this subsection, we review the setting of the configuration space of fields on $W$ and define the Fredholm index of a certain class of operators on $W$. We fix $\pi \in \prod(Y)$ as in Subsection $\ref{hol}$ and assume that $\pi$ is a non-degenerate perturbation. Let $P_W$ be the product $SU(2)$ bundle. \[confset\]For each element $[a] \in \widetilde{R}(Y)$, we fix an $SU(2)$ connection $A_a $ on $P_W$ which satisfies $A_{a}|_{Y\times (-\infty,-1]} =\pr_1^*a$ and $A_{a}|_{W[0,\infty]}=\theta$. If $a$ is an irreducible (resp. reducible) connection, we define the space of connections on $P_W$ by $${\mathcal A}^W(a)_\delta := \left\{ A_{a}+c \ \middle| c \in \Omega^1(W)\otimes {\mathfrak{su}(2)}_{L^2_{q,\delta}} \right\},$$ $$(\text{ resp.}\ {\mathcal A}^W(a)_{(\delta,\delta)} := \left\{ A_{a}+c \ \middle| c \in \Omega^1(W)\otimes {\mathfrak{su}(2)}_{L^2_{q,(\delta,\delta)}}\right\}\ )$$ where $\Omega^1(W)\otimes {\mathfrak{su}(2)}_{L^2_{q,\delta}}$ (resp. $\Omega^1(W)\otimes {\mathfrak{su}(2)}_{L^2_{q,(\delta,\delta)}}$) is the completion of $\Omega^1(W)\otimes {\mathfrak{su}(2)}$ with respect to the $L^2_{q,\delta}$-norm (resp. $L^2_{q,(\delta,\delta)}$-norm), $q $ is a natural number greater than $3$, and $\delta$ is a positive real number. For $f \in \Omega^i(W)\otimes {\mathfrak{su}(2)}$ with compact support, we define the $L^2_{q,\delta}$-norm (resp. $L^2_{q,(\delta,\delta)}$-norm) by $$||f||^2_{L^2_{q,\delta}}:= \sum_{0\leq j \leq q} \int_W e^{\delta \tau} \left|\nabla_{\theta}^j f \right|^2 d\text{vol} ,$$ $$(\text{resp. } ||f||^2_{L^2_{q,(\delta,\delta)}}:= \sum_{0\leq j \leq q} \int_W e^{\delta \tau'} \left|\nabla_{\theta}^j f \right|^2 d\text{vol}\ )$$ where $\nabla_{\theta}$ is the covariant derivative with respect to the product connection $\theta$.
We use a periodic metric $||-||$ on the bundle. Its completion is denoted by $\Omega^i(W)\otimes {\mathfrak{su}(2)}_{L^2_{q,\delta}}$. We define the gauge group $${\mathcal G}^W(a)_{\delta}:= \left\{ g \in \aut(P_W)_{L^2_{q+1,\text{loc}}} \middle| \nabla_{A_{a}}(g) \in L^2_{q,\delta} \right\}$$ $$( \text{ resp. }{\mathcal G}^W(a)_{(\delta,\delta)}:= \left\{ g \in \aut(P_W)_{L^2_{q+1,\text{loc}}} \middle| \nabla_{A_{a}}(g) \in L^2_{q,(\delta,\delta)} \right\}),$$ which has the action on ${\mathcal A}^W(a)_\delta$ induced by the pull-backs of connections. The space ${\mathcal G}^W(a)_{\delta}$ (resp. ${\mathcal G}^W(a)_{(\delta,\delta)}$) has the structure of a Banach Lie group, and the action of ${\mathcal G}^W(a)_{\delta}$ on ${\mathcal A}^W(a)_\delta$ (resp. of ${\mathcal G}^W(a)_{(\delta,\delta)}$ on ${\mathcal A}^W(a)_{(\delta,\delta)}$) is smooth. [*The configuration space for*]{} $W$ is defined by $${\mathcal B}^W(a)_\delta := {\mathcal A}^W(a)_\delta /{\mathcal G}^W(a)_{\delta} \ (\text{resp. } {\mathcal B}^W(a)_{(\delta,\delta)} := {\mathcal A}^W(a)_{(\delta,\delta)} /{\mathcal G}^W(a)_{(\delta,\delta)}).$$ Let $s$ be a smooth function from $W$ to $[0,1]$ with $$s|_{Y\times (-\infty, -2]}=1,\ s|_{Y\times [-1,0] \cup_Y W[0,\infty]}=0.$$ We define the instanton moduli space for $W$ by $$\begin{aligned} M^W(a)_{\pi,\delta}:=\{[A] \in {\mathcal B}^W(a)_\delta | \mathcal{F}_{\pi}(A)=0 \}\end{aligned}$$ where $\mathcal{F}_{\pi}$ is the perturbed ASD-map $$\mathcal{F}_{\pi}(A):=F^+(A)+s\pi(A).$$ For each $A \in {\mathcal A}^W(a)_\delta$, we have the bounded linear operator: $$\begin{aligned} \label{elli} d^{*_{L^2_\delta}}_A+d\mathcal{F}_A: \Om^1(W)\otimes {\mathfrak{su}(2)}_{L^2_{q,\delta}} \ri (\Om^0(W) \oplus \Om^+(W))\otimes {\mathfrak{su}(2)}_{L^2_{q-1,\delta}}\end{aligned}$$ $$\begin{aligned} \label{thetacase} (\text{ resp.
}d^{*_{L^2_{(\delta,\delta)}}}_A+d\mathcal{F}_A: \Om^1(W)\otimes {\mathfrak{su}(2)}_{L^2_{q,(\delta,\delta)}} \ri (\Om^0(W) \oplus \Om^+(W))\otimes {\mathfrak{su}(2)}_{L^2_{q-1,(\delta,\delta)}}).\end{aligned}$$ Taubes gave a criterion for the operator $d^{*_{L^2_\delta}}_A+d\mathcal{F}_A=d^{*_{L^2_\delta}}_A+ d^+_A+sd\pi^+_A$ in $\eqref{elli}$ (resp. $\eqref{thetacase}$) to be Fredholm in Theorem $3.1$ of [@T87]. \[fred\] There exists a discrete set $D$ in ${\mathbb R}$ with no accumulation points such that $\eqref{elli}$ $($resp. $\eqref{thetacase}$$)$ is Fredholm for each $\delta$ in ${\mathbb R}\setminus D$. The discrete set $D$ is defined by $$D:=\{\delta \in {\mathbb R}| \text{ the cohomology groups } H^i_z\text{ are non-trivial for some $z$ with } |z|=e^{\frac{\delta}{2}} \}.$$ The cohomology groups $H^i_z$ are given by the complex: $$\begin{aligned} \label{cpx} 0\ri \Om^0(X) \otimes {\mathfrak{su}(2)}{\xrightarrow}{d_{\theta,z}} \Om^1(X)\otimes {\mathfrak{su}(2)}{\xrightarrow}{d^+_{\theta,z}} \Om^+(X)\otimes {\mathfrak{su}(2)}\ri 0,\end{aligned}$$ where $$d_{\theta,z}:\Om^0(X) \otimes {\mathfrak{su}(2)}{\xrightarrow}{} \Om^1(X)\otimes {\mathfrak{su}(2)}$$and $$d^+_{\theta,z} :\Om^1(X)\otimes {\mathfrak{su}(2)}\ri \Om^+(X)\otimes {\mathfrak{su}(2)}$$ are given by $$d_{\theta,z}(f)=z^\tau d_{p^*\theta} (z^{-\tau} (p^*f)),\ d^+_{\theta,z}(f)=z^\tau d^+_{p^*\theta} (z^{-\tau}(p^*f)),$$ where $p$ is the covering map $\widetilde{X}\ri X$. (We fix a branch of $\ln z$ to define $z^{\tau} = e^{\tau \ln z}$.) In the above definition, $d_{\theta,z}(f)$ and $d^+_{\theta,z}(f)$ are sections of $p^*P_{X}$; however, since they are invariant under the deck transformation, we regard $d_{\theta,z}(f)$ and $d^+_{\theta,z}(f)$ as sections on $P_X$. The operators $d_{\theta,z}$ and $d^+_{\theta,z}$ in $\eqref{cpx}$ depend on the metric on $X$ and on $\tau$; however, the cohomology groups $H^i_z$ are independent of these choices.
We now introduce the formal dimension of the instanton moduli spaces: Suppose $a$ is an irreducible critical point of $cs_{Y,\pi}$. From Theorem \[fred\], there exists $\delta_0>0$ such that $\eqref{elli}$ is Fredholm for any $\delta \in (0,\delta_0)$ and $A \in {\mathcal A}^W(a)_{\delta}$. We define [*the formal dimension*]{} $\ind_W(a)$ of the instanton moduli spaces for $W$ by the Fredholm index of $\eqref{elli}$. For the case of $a=\theta$, we also define $\ind_W(a)$ as the Fredholm index of $\eqref{thetacase}$. The formal dimension $\ind_W(a)$ is calculated by using the following proposition. \[calfred\] Suppose $a$ is an irreducible critical point of $cs_{Y,\pi}$. Then the formal dimension $\ind_W(a)$ is equal to the Floer index $\ind(a)$ of $a$. If $a$ is equal to $\theta$, then $\ind_W(a)=\ind (a)=-3$. First we take a compact oriented 4-manifold $Z$ with $\partial Z=-Y$. It is easy to show that there is an isomorphism $H^*(W[0,\infty]) \cong H^*(S^3)$. We define $Z^+:= Z\cup_Y W[0,\infty]$ and fix a periodic Riemannian metric $g_{Z^+}$ satisfying $g_{Z^+}|_{W[0,\infty]}=g_W$. In Proposition $5.1$ of [@T87], Taubes computed the Fredholm index of $d^+_\theta+ d^{*_{L^2_\delta}}_\theta $ as an operator on $Z^+$ in the situation that $H_*(W[0,\infty],{\mathbb Z})\cong H_*(S^3,{\mathbb Z})$ (the proof is given by using the admissibility of each segment $W_0$; however, Taubes only uses the condition $H_*(W[0,\infty],{\mathbb Z})\cong H_*(S^3,{\mathbb Z})$): $$\ind (d^+_\theta+ d^*_\theta) =-3(1-b_1(Z)+b^+(Z))$$ for a small $\delta$.
Fix an $SU(2)$-connection $A_{a,\theta}$ on $W$ with $A_{a,\theta}|_{Y\times (-\infty,-1]}=a$ and $A_{a,\theta}|_{W[0,\infty]}=\theta$, and an $SU(2)$-connection $B_{a}$ on $Z\cup_Y Y\times [0,\infty)$. By a discussion similar to the gluing of operators on cylindrical ends in Proposition $3.9$ of [@Do02], we have $$\ind (d_{B_{a}}^*+d_{B_{a}}^+)+ \ind (d_{A_{a,\theta}}^*+d_{A_{a,\theta}}^+)= \ind (d_{\theta}^*+d_{\theta}^+).$$ Donaldson showed that $\ind (d_{B_{a}}^*+d_{B_{a}}^+)$ is equal to $-\ind(a)-3(1-b_1(Z)+b^+(Z))$ in Proposition $3.17$ of [@Do02]. The proof of the second statement is similar to that of the first one. Chern-Simons functional for homology $S^3\times S^1$ {#excs} ==================================================== For a pair $(X,\phi)$ consisting of an oriented 4-manifold and a non-zero element $\phi \in H^1(X,{\mathbb Z})$, we generalize the Chern-Simons functional to a functional $cs_{(X,\phi)}$ on the flat connections on $X$. We define the invariants $Q^i_X \in {\mathbb R}_{\geq0} \cup\{\infty\}$ for $i\in {\mathbb N}$ by using the values of $cs_{(X,\phi)}$. In our construction, $cs_{(X,\phi)}$ cannot be extended to a functional on arbitrary $SU(2)$ connections on $X$. Let $X$ be an oriented closed 4-manifold equipped with $0 \neq \phi \in H^1(X,{\mathbb Z})$ and let $p:\widetilde{X}^{\phi}\ri X$ be the ${\mathbb Z}$-fold covering of $X$ corresponding to $\phi \in H^1(X,{\mathbb Z})\cong [X,B{\mathbb Z}]$. Recall the bundle $P_{X} $ and the set $\widetilde{R}(X)$ from Section \[main\]. Let $f$ be a smooth map representing the class $\phi \in H^1(X,{\mathbb Z})\cong [X,S^1]$, and let $\tilde{f}$ be a lift of $f$.
\[csX\]We define the Chern-Simons functional for $X$ as the following map $$cs_{(X,\phi)}:\widetilde{R}(X)\times \widetilde{R}(X) \ri {\mathbb R},$$ $$cs_{(X,\phi)}([a],[b]):= \frac{1}{8\pi^2}\int_{\widetilde{X}^\phi} Tr(F(A_{a,b})\wedge F(A_{a,b})),$$ where $a,b$ are flat connections on $P_X$ and $A_{a,b}$ is an $SU(2)$-connection on $P_{\widetilde{X}^\phi}:=\widetilde{X}^\phi \times SU(2)$ which satisfies $A_{a,b}|_{\tilde{f}^{-1}(-\infty,-r]}=p^*a$ and $A_{a,b}|_{\tilde{f}^{-1}([r,\infty))}=p^*b$ for some $r>0$. We have an alternative description of $cs_{(X,\phi)}([a],[b])$ when a closed oriented $3$-manifold $Y$ is given as a sub-manifold of $X$ satisfying $i_*[Y]=\text{PD}(\phi) \in H_3(X,{\mathbb Z})$, where $i$ is the inclusion $Y\ri X$. Such a $Y$ is given as the inverse image of a regular value of $f$. We can take $Y$ to be connected, and we assume this. We denote by $W_0$ the cobordism from $Y$ to itself obtained by cutting $X$ open along $Y$. Since $Y$ is connected and $\phi \neq 0$, $W_0$ is also connected. Then we choose an identification of $\widetilde{X}^\phi$ with $\dots \cup_Y W_0 \cup_Y W_1 \cup_Y \dots $. We have the following formula. \[sumformula\] $$cs_{(X,\phi)}([a],[b])=cs_Y([i^*a])-cs_Y([i^*b])$$ Let $A_{a,b}$ be the $SU(2)$-connection on $P_{\widetilde{X}^\phi}$ in Definition \[csX\]. Take a natural number $N$ large enough to satisfy $$\tilde{f}^{-1}([-r,r]) \subset W[-N,N],$$ for which we have $$\begin{aligned} \label{cscal} \int_{\widetilde{X}^\phi} Tr(F(A_{a,b})\wedge F(A_{a,b}))= \int_{W[-N,N]}Tr(F(A_{a,b})\wedge F(A_{a,b})).\end{aligned}$$ $$=cs_Y(i^*_+A_{a,b})-cs_Y(i^*_-A_{a,b}).$$ Here $i_+$ (resp. $i_-$) is the inclusion from $Y$ (resp. $-Y$) to $X$, and we use the Stokes theorem. When $X$ is equal to $Y\times S^1$, the map $cs_{(X,\phi)}:\widetilde{R}(X)\times \widetilde{R}(X) \ri {\mathbb R}$ essentially coincides with the restriction of the Chern-Simons functional $cs_Y$ on $Y$ in the following sense.
For $[a]\in \widetilde{R}(Y\times S^1)$, the restriction $[i^*a] \in \widetilde{R}(Y)$ satisfies $$cs_Y([i^*a])=cs_{(Y\times S^1,\text{PD}([Y]))}([a],[\theta]),$$ where $i$ is the inclusion $Y=Y\times 1 \ri Y\times S^1$ and $\text{PD}$ is the Poincaré duality. This is a corollary of Lemma \[sumformula\]. We have the following well-definedness. $cs_{(X,\phi)}$ does not depend on the choices of $f$, the representatives $a$ and $b$, and $A_{a,b}$. This is also a consequence of Lemma \[sumformula\]. \[tildeqx\]Let $X$ be a closed oriented $4$-manifold equipped with $\phi \in H^1(X,{\mathbb Z})$. The invariant $\tilde{Q}_{(X,\phi)}$ is defined by $$\begin{cases} \parbox{.9\linewidth}{$ \infty$ \ \ \ \ \ if $\widetilde{R}^*(X)= \emptyset$, \\ $ \inf \left\{\left|cs_{(X,\phi)}([a],[\theta])+m\right| \middle| m\in {\mathbb Z}\ , [a] \in \widetilde{R}^*(X) \right\} $ if $\widetilde{R}^*(X) \neq \emptyset$, } \end{cases}$$ where $\widetilde{R}^*(X)$ is the subset of the classes of the irreducible connections in $\widetilde{R}(X)$. We now give a definition of ${Q}^i_X \in {\mathbb R}_{\geq 0} \cup \{\infty\}$. \[defofQiX\] Suppose that $X$ is a homology $S^3\times S^1$ and $i$ is a positive integer. Let $\widetilde{X}$ be the ${\mathbb Z}$-fold covering space of $X$ corresponding to $1\in H^1(X,{\mathbb Z})\cong_{PD} H_3(X,{\mathbb Z})$. We set $\widetilde{X}^i:= \widetilde{X}/ i{\mathbb Z}$. Since the quotient map $p^i:\widetilde{X}\ri \widetilde{X}^i$ is a ${\mathbb Z}$-fold covering, it determines a class $\phi^i \in H^1(\widetilde{X}^i,{\mathbb Z})$. We define $Q^l_X\in {\mathbb R}_{\geq 0} \cup \{\infty\}$ by $$Q^l_X:=\displaystyle \min_{1\leq i \leq l}\tilde{Q}_{(\widetilde{X}^i,\phi^i)}.$$ We now show the following lemma, which is used in the proof of the key lemma (Lemma \[lem:theta\]). \[lem:cs\] Suppose that $\gamma$ is a flat connection on $W[m,n]$ satisfying the following conditions. - $\gamma|_{Y^m_+} \cong \gamma|_{Y^n_-}$.
- There exists $u \in {\mathbb Z}$ satisfying $\left|cs_{(\overline{W[m,n]},\text{PD}[Y^m_+])}(r(\gamma),\theta)+ u\right|< Q^{n-m+1}_X$, where $r(\gamma)$ is the flat connection on $\overline{W[m,n]}$ given by pasting $\gamma$ with itself along $Y^m_+ \cup Y^n_-$. Then $\gamma$ is gauge equivalent to $\theta$. Suppose $\gamma$ is not gauge equivalent to $\theta$. The calculation $$H_1(W[m,n])\cong 0$$ and the holonomy correspondence $$R(W[m,n]) \cong \hom (\pi_1(W[m,n]),SU(2))/ \text{conjugation}$$ imply that there is no reducible $SU(2)$ connection on $W[m,n]$ except $\theta$. Therefore $\gamma$ is an irreducible connection on $W[m,n]$. Because $\overline{W[m,n]} \ri X$ is the $(n-m+1)$-fold covering space of $X$, $$Q^{n-m+1}_X \leq \left|cs_{(\overline{W[m,n]},\text{PD}[Y^m_+])}(r(\gamma),\theta)+ u\right|$$ holds by the definition of $Q^i_X$. This is a contradiction. Compactness =========== The compactness of the instanton moduli spaces for non-compact 4-manifolds is treated in [@Fl88], [@Fu90], [@Do02] for the cylindrical end case and in [@T87] for the periodic end case. In [@Fu90] and [@T87], they consider the instanton moduli spaces consisting of connections asymptotically convergent to the trivial connection on the end. We also follow their strategy by using $Q^{2l_Y+3}_X$ defined in the previous section. More explicitly, in this section we explain a compactness result for the instanton moduli spaces for the non-compact manifold $W[0,\infty]$ with a periodic end. Key lemma --------- Let $W_0$, $W[0,\infty]$ be the oriented Riemannian 4-manifolds introduced in the beginning of Subsection \[Fred\]. By pasting $W_0$ with itself along its boundary $Y$ and $-Y$, we obtain a homology $S^3\times S^1$ which we denote by $X$. We consider the product $SU(2)$-bundle $P_{W[0,\infty]}$ on $W[0,\infty]$.
For $q\geq3$ and $\delta>0$, we define [*the instanton moduli space*]{} $M^{W[0,\infty]}_\delta$ by $$M^{W[0,\infty]}_\delta := \left\{ \theta +c \in \Om^1(W[0,\infty])\otimes {\mathfrak{su}(2)}_{L^2_{q,\delta}} \middle | F^+(\theta+c)=0 \right\}/ {\mathcal G},$$ where ${\mathcal G}$ is the gauge group $${\mathcal G}:=\left\{g \in \aut(P_{W[0,\infty]}) \subset \End(\mathbb{C}^2)_{L^2_{q+1,\text{loc}}} \middle | dg \in L^2_{q,\delta} \right\},$$ and the action of ${\mathcal G}$ is given by the pull-backs of connections. For $f\in \Omega^i(W[0,\infty])\otimes {\mathfrak{su}(2)}$ with compact support, we define the $L^2_{q,\delta}$-norm by the following formula $$||f||^2_{L^2_{q,\delta}}:= \sum_{0\leq j \leq q} \int_{W[0,\infty]} e^{\delta \tau} \left|\nabla_{\theta}^j f\right|^2 d\text{vol} ,$$ where $\nabla_{\theta}$ is the covariant derivative with respect to the product connection. We use the periodic metric $|-|$ which is induced from the Riemannian metric $g_W$. Its completion is denoted by $\Omega^i(W[0,\infty])\otimes {\mathfrak{su}(2)}_{L^2_{q,\delta}}$. The goal of this section is to show the next theorem under the above setting. \[cptness\]Under Assumption $\ref{imp}$ the following statement holds. There exists $\delta'>0$ satisfying the following property. Suppose that $\delta$ is a non-negative number less than $\delta'$ and $\{A_n\} $ is a sequence in $M^{W[0,\infty]}_\delta$ satisfying $$\displaystyle \sup_{n \in {\mathbb N}}||F(A_n)||^2_{L^2(W[0,\infty])} < \min\{8\pi^2,Q^{2l_Y+3}_X\}.$$ Then for some subsequence $\{A_{n_j}\}$, some positive integer $N_0$ and some gauge transformations $\{g_j\}$ on $W[N_0,\infty]$, the sequence $\{g_j^*A_{n_j}\}$ converges to some $A_\infty$ in $L^2_{q,\delta}(W[N_0,\infty])$. The proof of Theorem \[cptness\] is given at the end of Subsection \[c3\]. We use the following estimate. \[lem:fundamental\] For a positive number $c_1>0$, there exists a positive number $c_2>0$ satisfying the following statement.
For any $SU(2)$-connection $a$ on $Y^0_+$ and any flat connection $\gamma$ on $W[0,k]$ satisfying the following conditions - $\sup_{x \in Y^0_+}\sum_{0\leq j \leq 1}\left|\nabla^{(j)}_{(l^0_+)^*\gamma}(a-(l^0_+)^*\gamma)(x)\right|<c_1$. - $\gamma|_{Y^0_+ } \cong \gamma|_{Y^k_-}$. the inequality $$\left| cs_Y([a])-cs_{(\overline{W[0,k]},\text{PD}[Y^0_+])}(r(\gamma),\theta)\right| \leq c_2 \sup_{x \in Y^0_+}\sum_{0\leq j \leq 1}\left|\nabla^{(j)}_{(l^0_+)^*\gamma}(a-(l^0_+)^*\gamma)(x)\right|^2$$ holds, where $r(\gamma)$ is the flat connection on $\overline{W[0,k]}$ given by pasting $\gamma$ with itself along $Y^0_+ \cup Y^k_-$ and $l^0_+:Y^0_+\ri W_0$ is the inclusion. Lemma \[sumformula\] implies $$\left| cs_Y([a])-cs_{(\overline{W[0,k]},\text{PD}[Y^0_+])}(r(\gamma),\theta)\right| =|cs_Y(a)-cs_Y((l^0_+)^*\gamma)|.$$ Since $(l^0_+)^*\gamma$ is a flat connection on $Y^0_+$, we have $$=\frac{1}{8\pi^2}\left| \int_{Y^0_+}Tr\left( (a-(l^0_+)^*\gamma) \wedge d_{(l^0_+)^*\gamma}(a-(l^0_+)^*\gamma) + \frac{2}{3}(a-(l^0_+)^*\gamma)^3\right)\right|$$ $$\leq \frac{1}{8\pi^2}\text{vol}_Y \left(\sup_{x \in Y^0_+}\left|\nabla_{(l^0_+)^*\gamma}(a-(l^0_+)^*\gamma) (x)\right|\,\sup_{x \in Y^0_+}\left|(a-(l^0_+)^*\gamma)(x)\right|+ \frac{2}{3}\sup_{x \in Y^0_+}\left|(a-(l^0_+)^*\gamma)(x)\right|^3\right)$$ $$\leq c_2 \sup_{x \in Y^0_+}\sum_{0\leq j \leq 1}|\nabla^{(j)}_{(l^0_+)^*\gamma}(a-(l^0_+)^*\gamma)(x)|^2.$$ The next lemma gives us a key estimate. We use $Q^{2l_Y+3}_X$ to obtain an estimate of the difference between an ASD-connection and the trivial flat connection on the end $W[0,\infty]$. \[lem:theta\]Suppose that $Y$ satisfies Assumption \[imp\]. There exists a positive number $c_3$ satisfying the following statement. For $A \in M^{W[0,\infty]}_{\delta}$ satisfying $\frac{1}{8\pi^2}||F(A)||^2_{L^2(W[0,\infty])}< \min \{1, Q^{2l_Y+3}_X\}$, there exists a positive number $\eta_0$ which depends only on the difference $\min \{1, Q^{2l_Y+3}_X\}-\frac{1}{8\pi^2}||F(A)||^2_{L^2}$ such that the following condition holds.
Note that if $K$ is sufficiently large, the inequality $||F(A)||^2_{L^2(W_k)}< \eta_0$ is satisfied for every $k>K$. When $K$ satisfies this property, there exist gauge transformations $g_k$ over $W[k,k+2]$ such that $$\sup_{x\in W[k,k+2]}\sum_{0\leq j\leq q+1}|{\nabla^{j}}_{\theta}({g_k}^*A|_{W[k,k+2]}-\theta)(x)|^2$$ $$\leq c_3{||F(A)||^2}_{L^2(W[k-l_Y-2,k+l_Y+3])}$$ holds for $k>K+l_Y+3$. For $k>K+l_Y+3$, we apply Lemma $10.4$ in [@T87] to $A|_{{W[k-l_Y-1,k+l_Y+2]}}$. Then we obtain gauge transformations $g_k$ and flat connections $\gamma_k$ over $W[k-l_Y-1,k+l_Y+2]$ satisfying $$\sup_{x \in W[k-l_Y-1,k+l_Y+2]}\sum_{0\leq j\leq q+1}|{\nabla^{j}}_{\gamma_k}({g_k}^*A|_{W[k-l_Y-1,k+l_Y+2]}-\gamma_k)(x)|^2$$ $$\begin{aligned} \label{glo} \leq c_3{||F(A)||^2}_{L^2(W[k-l_Y-2,k+l_Y+3])}\leq (2l_Y+5)c_3\eta\end{aligned}$$ for a small $\eta$. By using the pull-backs of $\gamma_k$ from $W[k-l_Y-1,k-1]$ (resp. $W[k+2,k+l_Y+2]$) to $Y^i_+$, we get flat connections over $Y$; in this way we get $l_Y+1$ flat connections. Under the assumption that $l_Y=\# R(Y)$, two of these flat connections coincide by the pigeonhole principle. We choose two numbers $k(1)^\pm<k(2)^\pm$ which satisfy ${(l^{k(1)^\pm}_+)}^*\gamma_k \cong {(l^{k(2)^\pm}_+)}^*\gamma_k$ as connections on $Y$, where $k(1)^+$ and $k(2)^+$ are elements in $\{k-l_Y-1,\cdots , k-1\}$ (resp. $k(1)^-,k(2)^- \in \{k+2,\cdots ,k+l_Y+2\}$). The map $l^{k}_\pm:Y^k_\pm \ri W_k$ is the inclusion. Suppose $||F(A)||^2_{L^2(W_k)}<\eta$ holds for $k>K+l_Y+3$. We show that, for sufficiently small $\eta$, the flat connection $\gamma_k$ is gauge equivalent to $\theta$.
The properties of $k(1)^\pm$ and $k(2)^\pm$ and Lemma $\ref{lem:fundamental}$ imply $$\begin{aligned} \label{1} \left| cs_Y((l^{k(1)^\pm}_+)^* {g_k}^*A) -cs_{(\overline{W[k(1)^\pm,k(2)^\pm]},\text{PD}[Y^{k(1)^\pm}_+])}(r(\gamma_k),\theta)\right|\leq (2l_Y+5)c_3\eta c_2.\end{aligned}$$ We also have $$cs_Y((l^{k(1)^\pm}_+)^* {g_k}^*A)= cs_Y((l^{k(1)^\pm}_+)^*A)+\text{deg}(g_k|_{Y^{k(1)^\pm}_+})$$ $$\begin{aligned} \label{3} = \frac{1}{8\pi^2}||F(A)||^2_{L^2(W[k(1)^\pm,\infty])}+\text{deg}(g_k|_{Y^{k(1)^\pm}_+}). \end{aligned}$$ We choose $\eta_0$ satisfying the following condition: $$\begin{aligned} \label{2} (2l_Y+5)c_3\eta_0 c_2 < Q^{2l_Y+3}_X - \frac{1}{8\pi^2}\int_{W[k(1)^\pm,\infty]}|F(A)|^2,\end{aligned}$$ where the right hand side is positive by the assumption on $A$. We obtain $$\left|cs_{(\overline{W[k(1)^\pm,k(2)^\pm]}, \text{PD}[Y^{k(1)^\pm}])}(r(\gamma_k),\theta)+\text{deg}(g_k|_{Y^{k(1)^\pm}_+})\right|< Q^{2l_Y+3}_X$$ from $\eqref{1}$, $\eqref{3}$ and $\eqref{2}$. Then Lemma \[lem:cs\] implies $\gamma_k|_{W[k(1)^\pm,k(2)^\pm]}\cong \theta$. Similarly the inequality $$\sup_{x \in W[k(2)^-,k(1)^+]}\sum_{0\leq j\leq q}|{\nabla^{j}}_{\gamma_k}({g_k}^*A|_{W[k(2)^-,k(1)^+]}-\gamma_k|_{W[k(2)^-,k(1)^+]})(x)|\leq (2l_Y+5)c_3\eta_0$$ holds over $W[k(2)^-,k(1)^+]$. From the above discussion, ${(l^{k(2)^-}_+)}^*\gamma_k$ and ${(l^{k(1)^+}_+)}^*\gamma_k$ are gauge equivalent. By Lemma \[lem:fundamental\], we also get $$\left| cs_Y((l^{k(2)^-}_+)^*g_k^*A)-cs_{(\overline{W[k(2)^-,k(1)^+]}, \text{PD}[Y^{k(2)^-}_+])}(r(\gamma_k),\theta)\right|\leq (2l_Y+5)c_3\eta_0c_2.$$ By the choice of $\eta_0$, we have $$\left|cs_{(\overline{W[k(2)^-,k(1)^+]}, \text{PD}[Y^{k(2)^-}_+])}(r(\gamma_k),\theta)+ \text{deg}(g_k|_{Y^{k(2)^-}_+})\right|< Q^{2l_Y+3}_X$$ and get $\gamma_k|_{\overline{W[k(2)^-,k(1)^+]}}\cong \theta$ by using Lemma \[lem:cs\] as above. Chain convergence ----------------- We introduce the following notion, which is crucial for our proof of the compactness theorem (Theorem \[cptness\]).
For a fixed number $\eta>0$ and a sequence $\{A_n\}\subset M^{W[0,\infty]}_{\delta}$ satisfying $$\displaystyle \sup_{n\in {\mathbb N}}||F(A_n)||^2_{L^2(W[0,\infty])}<\infty,$$ when finitely many sequences $\{s^j_n(\eta)\}_{1\leq j \leq m}$ of non-negative numbers satisfy $${||F(A_n)||^2}_{L^2(W_{s})}>\eta\ \iff \ s=s^j_n \text{ for some }j,$$ and $$s^1_n < \dots < s^m_n\ ,$$ we call $\{s^j_n\}_{1\leq j \leq m}$ a [*chain decomposition*]{} of $\{A_n\}\subset M^{W[0,\infty]}_{\delta}$ for $\eta$. For any sequence $\{A_n\}$ satisfying $\sup_{n\in {\mathbb N}}||F(A_n)||^2_{L^2(W[0,\infty])}<\infty$ and any $\eta>0$, we can show the existence of a chain decomposition for $\eta$ after passing to a subsequence of $\{A_n\}$. First we give the next technical lemma. \[idhom\] Let $A$ be an $L^2_q$ ASD-connection on $W[k,\infty]$ satisfying $$\displaystyle \frac{1}{8\pi^2} ||F(A)||^2_{L^2(W[k,\infty])}<1.$$ Then there exists a positive number $c_4$ which depends only on the difference $1-\frac{1}{8\pi^2} ||F(A)||^2_{L^2(W[k,\infty])}$ such that the following statement holds. Suppose there exists a gauge transformation $g$ on $Y^k_+$ satisfying $$\sum_{0\leq j\leq 1}\sup_{x \in Y^k_+} \left| \nabla^j_\theta (g^*(l^k_+)^*A-\theta)(x) \right|^2 \leq c_4.$$ Then $g$ is homotopic to the identity gauge transformation. By the property of $cs_Y$, we have $$|\text{deg}(g)|= |cs_Y(g^*(l^k_+)^*A)-cs_Y((l^k_+)^*A)|$$ $$\leq |cs_Y(g^*(l^k_+)^*A)|+\frac{1}{8\pi^2}||F(A)||^2_{L^2(W[k,\infty])}$$ $$\leq \sum_{0\leq j\leq 1}\sup_{x \in Y^k_+} \left| \nabla^j_\theta (g^*(l^k_+)^*A-\theta)(x)\right|^2+\frac{1}{8\pi^2}||F(A)||^2_{L^2(W[k,\infty])}.$$ We take $c_4< \frac{1}{2}(1-\frac{1}{8\pi^2}||F(A)||^2_{L^2(W[k,\infty])})$; then $$|\text{deg}(g)| < 1$$ holds. Since $\text{deg}(g)$ is an integer, this implies the conclusion. The next proposition is a consequence of Lemmas \[lem:theta\] and \[idhom\].
\[lem:type\] Let $\eta$ be a positive number and $\{s^j_n\}_{1\leq j \leq m}$ be a chain decomposition for $\eta$ of a sequence $\{A_n\}$ in $M^{W[0,\infty]}_{\delta}$ with $$\displaystyle \frac{1}{8\pi^2}\sup_{n \in {\mathbb N}}||F(A_n)||^2_{L^2(W[0,\infty])} <\min \{1,Q^{2l_Y+3}_X\}.$$ Then there exists a subsequence $\{A_{n_i}\}$ of $\{A_n\}$ such that $\sup_{i\in {\mathbb N}} |s^m_{n_i}|<\infty$ holds. Suppose that the conclusion does not hold for some $\eta_0>0$. Then there exists a chain decomposition $\{s^j_n(\eta_0) \}_{1\leq j\leq m}$ of $\{A_n\}$ which satisfies $s^m_{n} \ri \infty$ as $n \ri \infty$. We take a sufficiently small $\eta>0$; we will specify $\eta$ later. Choose a subsequence of $\{A_n\}$ which admits a chain decomposition for $\eta$. For simplicity, we denote this subsequence by the same notation $\{A_n\}$. We denote the chain decomposition of $\{A_n\}$ for $\eta$ by $\{t^{j}_n(\eta)\}_{1\leq j \leq m'}$. There are two cases for $\{t^{j}_n(\eta)\}_{1\leq j \leq m'}$: - There exists $j'' \in \{0,\cdots, m'-1\}$ satisfying $t^{j''}_n - t^{j''+1}_n\ri -\infty$. - There is no $j'' \in \{0,\cdots, m'-1\}$ satisfying $t^{j''}_n - t^{j''+1}_n\ri -\infty$. We define a sequence by $$u_n(\eta):=\left\lfloor \frac{t^{j''}_n(\eta)+t^{j''+1}_n(\eta)}{2} \right\rfloor \in {\mathbb N}$$ in the first case and by $$u_n(\eta):=0$$ in the second case, where $\lfloor - \rfloor$ is the floor function. Applying Lemma \[lem:theta\] to $A_n|_{W[u_n(\eta),u_n(\eta)+2]}$, we get a gauge transformation $g_n$ on $W[u_n(\eta)-l_Y-2,u_n(\eta)+l_Y+3]$ satisfying $$\begin{aligned} \label{yyy} \sup_{x \in W[u_n(\eta),u_n(\eta)+2]}\sum_{1\leq j \leq q+1}|\nabla^j_{\theta}({g_n}^*{A_n}|_{W[u_n(\eta),u_n(\eta)+2]} - \theta)|^2\leq c_3(2l_Y+5)\eta,\end{aligned}$$ for small $\eta$ and large $n$.
Because $A_n$ is an ASD-connection for each $n$, we have $$\frac{1}{8\pi^2}\int_{W[u_n,\infty]}Tr(F(A_n)\wedge F(A_n))=\frac{1}{8\pi^2}\int_{W[u_n,\infty]}|F(A_n)|^2>\frac{1}{8\pi^2}\eta_0$$ for large $n$. On the other hand, by the Stokes theorem $$\frac{1}{8\pi^2}\int_{W[u_n,\infty]}Tr(F(A_n)\wedge F(A_n))=cs_Y({l^{u_n}_+}^*A_n)$$ holds. By Lemma \[idhom\], we have $$cs_Y({l^{u_n}_+}^*A_n)=cs_Y({l^{u_n}_+}^*g_n^* A_n)$$ for small $\eta$. Therefore $\eqref{yyy}$ and Lemma \[lem:fundamental\] give $$|cs_Y({l^{u_n}_+}^*A_n)|\leq c'c_3(2l_Y+5)\eta.$$ We choose $\eta$ satisfying $$c'c_3(2l_Y+5)\eta<\frac{1}{8\pi^2}\eta_0.$$ For such $\eta$, we have $$|cs_Y({l^{u_n}_+}^*A_n)|<\frac{1}{8\pi^2}\eta_0.$$ On the other hand, $\eta_0$ satisfies $$\frac{1}{8\pi^2}\eta_0 <\frac{1}{8\pi^2}\int_{W[u_n,\infty]}|F(A_n)|^2=|cs_Y({l^{u_n}_+}^*A_n)|,$$ which is a contradiction. Exponential decay {#c3} ----------------- In the instanton Floer theory, there is an estimate, called exponential decay, for the $L^2$-norm of the curvature of an instanton over a cylindrical end. We give a generalization of the exponential decay estimate over $W[0,\infty]$. At the end of this subsection, we also give a proof of Theorem \[cptness\]. \[lem:estimate\] There exists a constant $c_5$ satisfying the following statement. For $A \in M^{W[0,\infty]}_{\delta}$ satisfying $\frac{1}{8\pi^2}||F(A)||^2<\min \{1,Q^{2l_Y+3}_X\}$, there exists $\eta_1>0$ which depends only on the difference $\min\{ Q^{2l_Y+3}_X,1\} -\frac{1}{8\pi^2}||F(A)||^2$ such that the following condition holds. Let $K>0$ be a positive number satisfying $||F(A)||^2_{L^2(W_k)}< \eta_1$ for any $k>K$. Then the inequality $$\begin{aligned} \label{ooo} ||F(A)||^2_{L^2(W[k,k+m])} \end{aligned}$$ $$\leq c_5( ||F(A)||^2_{L^2(W[k-l_Y-2,k+l_Y+3])}+||F(A)||^2_{L^2(W[k+m-l_Y-2,k+m+l_Y+3])})$$ holds for $k>K+l_Y+3$. Let $\eta_1$ be the positive number $\eta_0$ in Lemma \[lem:theta\], which depends only on the difference $Q^{2l_Y+3}_X-\frac{1}{8\pi^2}||F(A)||^2$.
Then for $k>K+l_Y+3$, we have the following inequalities $$\sup_{x \in W_k}\sum_{0\leq j\leq q}\left|{\nabla^{j}}_{\theta}(g_k^*A-\theta)(x)\right|^2\leq c_3||F(A)||^2_{L^2(W[k-l_Y-2,k+l_Y+3])}$$ $$\begin{aligned} \label{i1} \leq c_3(2l_Y+5)\eta_1\end{aligned}$$ and $$\sup_{x \in W_{k+m}}\sum_{0\leq j\leq q}\left|{\nabla^{j}}_{\theta}(g_{k+m}^*A-\theta)(x)\right|^2\leq c_3||F(A)||^2_{L^2(W[k+m-l_Y-2,k+m+l_Y+3])}$$ $$\begin{aligned} \label{i2} \leq c_3(2l_Y+5)\eta_1.\end{aligned}$$ These inequalities $\eqref{i1}$, $\eqref{i2}$ and Lemma \[idhom\] imply that, for sufficiently small $\eta_1$, the gauge transformation $g_k|_{Y^+_k}$ (resp. $g_{k+m}|_{Y^-_{k+m}}$) is homotopic to the constant gauge transformation. Hence, there exists a gauge transformation $\hat{g}$ on $W[k,k+m]$ satisfying $\hat{g}|_{W_k}=g_k$ and $\hat{g}|_{W_{k+m}}=g_{k+m}$; moreover, since $A$ is an ASD connection, we have $$||F(A)||^2_{L^2(W[k,k+m])}=||F(\hat{g}^*A)||^2_{L^2(W[k,k+m])}=8\pi^2 \left(cs_Y({(l^k_+)}^*{g_k}^*A)-cs_Y({(l^{k+m}_-)}^*{g_{k+m}}^*A)\right).$$ Applying the inequalities $\eqref{i1}$ and $\eqref{i2}$ again, we get the conclusion. \[prop:expdecay\]There exists $\delta'>0$ satisfying the following statement. Suppose $A$ is an element in $M^{W[0,\infty]}_{\delta}$ satisfying the assumption of Lemma \[lem:estimate\]. Then there exists $c_6(K)>0$ such that the inequality $$||F(A)||^2_{L^2(W[k-l_Y-2,k+l_Y+3])} \leq c_6(K)e^{-k\delta'}$$ holds for $k>K+l_Y+3$. This is a consequence of Lemma $\ref{lem:estimate}$ and Lemma $5.2$ in [@Fu90] applied to $q_i = ||F(A)||^2_{L^2(W[i-l_Y-2,i+l_Y+3])}$. By an argument similar to Lemma $4.2$ and Lemma $7.1$ of [@Fu90], we have: \[lem:patchingarg\] For a positive number $c_7$, there exists a constant $c_8$ such that the following statement holds.
Suppose we have an $L^2_q$ connection $A$ and gauge transformations $g_k$ on $W[k-1,k+1]$ satisfying $$\int_{W[k-1,k+1]}\sum_{0\leq j\leq q+1}|{\nabla^{j}}_{\theta}({g_k}^*A|_{W[k-1,k+1]}-\theta)|^2$$ $$\leq c_7{||F(A)||^2}_{L^2(W[k-l_Y-3,k+l_Y+2])}$$ for any non-negative integer $k$. Then there exist a positive integer $n_0$ and a gauge transformation $g$ on $W[n_0,\infty] $ satisfying the following condition: $$\int_{W[k-1,k+1]}\sum_{0\leq j\leq q+1}|{\nabla^{j}}_{\theta}({g}^*A|_{W[k-1,k+1]}-\theta)|^2$$ $$\leq c_8{||F(A)||^2}_{L^2(W[k-l_Y-3,k+l_Y+2])}$$ for $k>n_0$. We use this lemma to prove the next proposition. \[prop:conv\] There exists $\delta'>0$ satisfying the following condition. Let $K>0$ be a positive number and $\{A_n\}$ be a sequence in $M^{W[0,\infty]}_\delta$ satisfying the following properties: - $\displaystyle 0< \min \{Q^{2l_Y+3}_X,1\}- \sup_{n\in {\mathbb N}} \frac{1}{8\pi^2}||F(A_n)||^2$. - There exists a chain decomposition $\{s^j_n\}$ of $\{A_n\}$ for $\eta_2$ satisfying $$\sup_{n\in {\mathbb N}}|s^m_n(\eta_2)|<\infty$$ for $$\eta_2:= \inf_{n\in {\mathbb N}}\left\{\eta_0(A_n),\ \eta_1(A_n) \right\},$$ where $\eta_0(A_n)$ and $\eta_1(A_n)$ are the constants which depend on $A_n$ in Lemma \ref{lem:theta} and Lemma \ref{lem:estimate} respectively. Then there exist a positive integer $N_0$, gauge transformations $\{g_j\}$ on $W[N_0,\infty]$ and a subsequence $\{A_{n_j}\}$ of $\{A_n\}$ such that $\{{g_j}^*A_{n_j}\}$ converges to some $A_\infty$ in $L^2_{q,\delta}(W[N_0,\infty])$ for any $0\leq \delta<\delta'$. Applying Lemma $\ref{lem:theta}$ to $A_n$, there exist gauge transformations $g^n_k$ on $W[k-1,k+1]$ satisfying the following condition: for $k>l_Y+K+3$, $$\int_{W[k-1,k+1]}\sum_{0\leq j\leq q+1}|{\nabla^{j}}_{\theta}({g^n_k}^*A_n|_{W[k-1,k+1]}-\theta)|^2$$ $$\leq c_3{||F(A_n)||^2}_{L^2(W[k-l_Y-3,k+l_Y+2])}.$$ On the other hand, we have $$\begin{aligned} \label{exx} {||F(A_n)||^2}_{L^2(W[k-l_Y-3,k+l_Y+2])} \leq c_6(K)e^{-\delta' k}\end{aligned}$$ by using the exponential decay estimate (Proposition \[prop:expdecay\]).
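The mechanism by which a three-term estimate of the type \eqref{ooo} produces exponential decay can be sketched as follows; this is a heuristic under the simplifying assumption that the estimate may be applied with $m\to\infty$ (so that its second boundary term vanishes, since $F(A)\in L^2$), the rigorous statement being Lemma $5.2$ of [@Fu90]:

```latex
% Write Q_k := ||F(A)||^2_{L^2(W[k,\infty])}, a non-increasing sequence,
% and let c denote the constant in the three-term estimate.  Letting
% m -> infinity gives
%   Q_k \le c\,( Q_{k-l_Y-2} - Q_{k+l_Y+3} ).
% Since Q_{k+l_Y+3} \le Q_k, this yields
%   (1+c)\, Q_{k+l_Y+3} \le c\, Q_{k-l_Y-2},
% i.e. Q drops by the fixed factor \lambda := c/(1+c) < 1 on every window
% of length 2l_Y+5.  Iterating,
%   Q_k \le C\,\lambda^{\,k/(2l_Y+5)} = C\,e^{-\delta' k},
%   \qquad \delta' := \frac{-\ln\lambda}{2l_Y+5} > 0 .
```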
Using \eqref{exx}, we can show that the $n_0$ in Lemma \[lem:patchingarg\] can be taken uniformly with respect to $n$. So there exist a large natural number $N_0$ and a gauge transformation $g_n$ on $W[N_0,\infty]$ for each $n$ satisfying $$\int_{W[k-1,k+1]}\sum_{0\leq j\leq q+1}|{\nabla^{j}}_{\theta}({g_n}^*A_n|_{W[k-1,k+1]}-\theta)|^2$$ $$\begin{aligned} \label{o} \leq c'_8{||F(A_n)||^2}_{L^2(W[k-l-3,k+l+2])} \leq c_6(K)c'_8e^{-\delta'k},\end{aligned}$$ where the last inequality follows from \eqref{exx}. We set $g_n^*A_n=\theta +a_n$. Then we have $$||a_n||^2_{{L^2_{q+1,\delta}(W[N_0,\infty])}}=\sum_{0\leq j\leq q+1}\int_{W[N_0,\infty] } e^{\delta \tau}|{\nabla^{j}}_{\theta}(a_n)|^2$$ $$\leq \sum_{0\leq j\leq q+1}\sum_{N_0 \leq i< \infty} e^{i\delta } \int_{ {W_i}} |{\nabla^{j}}_{\theta}(a_n)|^2.$$ Putting this estimate and \eqref{o} together and summing the geometric series ($\sum_{i\geq k}e^{(\delta-\delta')i}=e^{(\delta-\delta')k}/(1-e^{\delta-\delta'})$ for $0\leq \delta<\delta'$), we have $$\begin{aligned} \label{expdecay} ||a_n||^2_{{L^2}_{q+1,\delta}(W[k,\infty] )}\leq c_{9}e^{(\delta-\delta')k}\end{aligned}$$ for $k>N_0$. We take a subsequence of $\{a_n\}$ which converges on any compact set in $L^2_q(W[k,\infty]) $ by using the Rellich lemma. We denote the limit in $L^2_{q,\text{loc}}$ by $a_\infty$. Then the exponential decay and a standard argument imply that $\{a_n\}$ converges to $a_\infty$ on $W[N_0,\infty]$ in the $L^2_{q,\delta}$-norm. We now give the proof of Theorem \[cptness\]. We choose $\eta_2$ as in Proposition \[prop:conv\]. After taking a subsequence of $\{A_n\}$, we consider the chain decomposition $\{s^j_n\}_{1\leq j \leq m} $ for $\eta_2$ of $\{A_n\}$. From Proposition \[lem:type\], $\{s^j_n\}$ is bounded above by some $K>0$ after taking a subsequence of $\{A_n\}$ again. So we can apply Proposition \[prop:conv\] and obtain the conclusion. Perturbation and Orientation ============================ To prove the vanishing $[\theta^r]=0$ in Theorem \[mainthm\], we use the moduli spaces $M^W(a)_{\pi,\delta}$ and need the transversality for the equation $F^+(A)+s\pi(A)=0$. We also need the orientability of $M^W(a)_{\pi,\delta}$. 
Holonomy perturbation (2) ------------------------ In [@Do87], Donaldson introduced the holonomy perturbation with compact support for irreducible ASD-connections. Combining the technique in [@Do87] and the compactness theorem (Theorem \[cptness\]), we get sufficient perturbations to achieve the required transversality. \[hol2\] Let $\pi$ be an element in $\prod(Y)$ and $a$ be a critical point of $cs_{Y,\pi}$. We use the following notations: - $\Gamma (W):=\left\{ l:S^1 \times D^3 \ri W \middle | \text{$l$: orientation preserving embedding} \right\}.$ - $\Lambda^d(W):= \left\{(l_i, \mu^+_i)_{1\leq i \leq d} \in \Gamma(W)^d\times (\Om^+(W)\otimes {\mathfrak{su}(2)})^d \middle | \text{supp} \mu^+_i \subset \im l_i \right\}$. - $\displaystyle \Lambda(W):= \bigcup_{d\in {\mathbb N}} \Lambda^d(W)$. Let $\chi:SU(2)\ri \mathfrak{su}(2)$ be $$\begin{aligned} \chi(u):=u-\frac{1}{2}tr(u)id\end{aligned}$$ and fix $\mu^+_i \in \Omega^+(W)\otimes \mathfrak{su}(2)$ supported on $ l_i(S^1\times D^3)$ for $i\in \{1,\cdots ,d\}$. For $\epsilon \in {\mathbb R}^d$, we set $$\begin{aligned} \sigma_{\Psi}(A,\epsilon):= \sum_{1\leq i \leq d} \epsilon_i \chi(\hol_{x \in l_i(S^1\times D^3)} (A))\mu^+_i,\end{aligned}$$ where $\hol_{x\in l_i(S^1\times D^3)}$ is the holonomy around the loop $t \mapsto l_i(t,y_x)$ satisfying $x=l_i (t_x,y_x)$ for some $t_x$ and $\epsilon=(\epsilon_i)_{1\leq i \leq d}$. For $\Psi=(l_i,\mu^+_i)_{1\leq i \leq d} \in \Lambda(W)$, Donaldson defined [*the holonomy perturbation of the ASD-equation*]{}: $$\begin{aligned} \label{pert} \mathcal{F}_{\pi,\Psi}(A,\epsilon):=F^+(A)+s\pi(A)+\sigma_\Psi(A,\epsilon)=0.\end{aligned}$$ The map $\sigma_\Psi(-,\epsilon)$ extends smoothly to a map ${\mathcal A}^W(a)_{\delta}\ri \Omega^+(W)\otimes \mathfrak{su}(2)_{L^2_{q-1,\delta}}$ and to a map ${\mathcal A}^W(a)_{(\delta,\delta)}\ri \Omega^+(W)\otimes \mathfrak{su}(2)_{L^2_{q-1,(\delta,\delta)}}$. 
For $\Psi$ and $\epsilon \in {\mathbb R}^d$, [*the perturbed instanton moduli spaces*]{} are defined by $$M^W(a)_{\pi,\Psi,\epsilon,\delta}:=\left\{ c \in {\mathcal B}^W(a)_{\delta} \middle |\mathcal{F}_{\pi,\Psi}(c,\epsilon)=0 \right\}$$ in the case of $a\in \widetilde{R}^*(Y)_\pi$ and $$M^W(a)_{\pi,\Psi,\epsilon,(\delta,\delta)}:=\left\{ c \in {\mathcal B}^W(a)_{(\delta,\delta)} \middle |\mathcal{F}_{\pi,\Psi}(c,\epsilon)=0 \right\}$$ in the case of $\text{Stab}(a)=SU(2)$. For a fixed $\epsilon \in {\mathbb R}^d$, if the operator $$d(\mathcal{F}_{\pi,\Psi})_{(A,0)}:T_A{\mathcal A}^W(a)_{\delta}\times {\mathbb R}^d \ri \Omega^+(W)\otimes {\mathfrak{su}(2)}_{L^2_{q-1,\delta}}$$ is surjective for all $[A] \in M^W(a)_{\delta,\pi,\Psi,\epsilon}$, we call $(\Psi,\epsilon)$ a [*regular perturbation*]{} for $a\in \widetilde{R}^*(Y)_\pi$. Let $FM^W(a)_{\delta,\pi,\Psi}$ be [*the family version of the perturbed instanton moduli spaces*]{} defined by $$FM^W(a)_{\delta,\pi,\Psi}:=\left\{ (c,\epsilon) \in {\mathcal B}^W(a)_{\delta}\times {\mathbb R}^d \middle |\mathcal{F}_{\pi,\Psi}(c,\epsilon)=0 \right\}.$$ \[lem:Sur\] Suppose that $Y$ satisfies Assumption \[imp\]. There exists $\delta'>0$ such that for a fixed $\delta \in (0,\delta')$, the following statement holds. Suppose $\pi$ is a holonomy perturbation which is non-degenerate and regular. Let $a$ be an irreducible critical point of $cs_{Y,\pi}$ with $cs_Y(a)<\min\{Q^{2l_Y+3}_X,1\}$. We assume the next three hypotheses for $(\pi,a)$. 1. For $[A] \in M^W(b)_{\pi,\delta}$, $$\displaystyle \frac{1}{8\pi^2}\sup_{n\in {\mathbb N}} ||F(A)+s\pi(A)||^2_{L^2(W)} <\min \{1 ,Q^{2l_Y+3}_X\},$$ where $b$ is an element of $\widetilde{R}(Y)_\pi$ with $cs_{Y,\pi} (b) \leq cs_{Y,\pi}(a)$. 2. The linear operator $$d^+_\theta+ sd\pi^+_\theta:T_\theta{\mathcal A}^W(\theta)_{(\delta,\delta)}\times {\mathbb R}^d \ri \Omega^+(W)\otimes {\mathfrak{su}(2)}_{L^2_{q-1,(\delta,\delta)}}$$ is surjective. 3. 
$M(c)_{\pi,\delta}$ is the empty set for $c\in \widetilde{R}_\pi(Y)$ satisfying $cs_{Y,\pi}(c)<0$. Then there exist a small number $\eta>0$ and a perturbation $\Psi$ such that the map $$d\mathcal{F}_{\pi,\Psi} :T{\mathcal A}^W(a)_{\delta}\times {\mathbb R}^d \ri \Omega^+(W)\otimes {\mathfrak{su}(2)}_{L^2_{q-1,\delta}}$$ is surjective for all points in ${\mathcal{F}_{\pi,\Psi}}^{-1}(0)\cap ({\mathcal A}^W(a)_{\delta}\times B^d(\eta))$. First we show the surjectivity of $d\mathcal{F}_{\pi,\Psi}$ at the points in ${\mathcal{F}_{\pi,\Psi}}^{-1}(0)\cap ({\mathcal A}^W(a)_{\delta}\times \{ 0\} )$. Second, we show that there exists a positive number $\eta>0$ such that $d\mathcal{F}_{\pi,\Psi}$ is surjective at the points in ${\mathcal{F}_{\pi,\Psi}}^{-1}(0)\cap ({\mathcal A}^W(a)_{\delta}\times B^d(\eta))$. We label the critical points of $cs_{Y,\pi}$ so that $$0=cs_{Y,\pi}( \theta=a_0 ) \leq cs_{Y,\pi}(a_1)\leq cs_{Y,\pi} (a_2 ) \leq \cdots \leq cs_{Y,\pi}(a_w=a).$$ The proof is by induction on $w$ and consists of four steps. \[step1\] For an irreducible element $A \in {\mathcal A}(a_{w})_{\delta}$ with $0\neq \coker (d(\mathcal{F}_{\pi,\Psi})_{(A,0)})$, there exists $\Psi(A)=\{l^{A}_i,\mu^{A}_i \}_{1\leq i \leq d(A)}$ such that $d(\mathcal{F}_{\pi,\Psi})_{(A,0)}| {\mathbb R}^{d(A)}$ generates the cokernel of $(d^+_A+sd\pi^+ _A)$. The proof is essentially the same as the discussion of Lemma $2.5$ in [@Do87]. We fix $h \in \Om^+(W)\otimes {\mathfrak{su}(2)}_{L^2_{q-1,\delta}}$ satisfying $0\neq h \in \coker (d^+_A+sd\pi^+_A)$ with $||h||_{L^2}=1$. The unique continuation theorems: Proposition $8.6$ (ii) of [@SaWe08] for the equation $(d^+_A+d\pi^+_A)^*(-)=0$ on $Y\times (-\infty,-1]$ and Section $3$ of [@FU91] for the equation $(d^+_A)^*(-)=0$ on $W[0,\infty]$ imply $h|_{Y\times [-2,0] \cup_Y W_0}\neq 0$. Then we choose $x_h$ in $Y\times [-2,0] \cup_Y W_0$ so that $h(x_h)\neq0$ holds. 
Since $A$ is an irreducible connection, $$\hol(A,x_h)=\left\{ \hol_l(A) \in SU(2) \middle |l \text{: loop based at } x_h \right\}$$ is a dense subset of $SU(2)$. So we can choose loops $l^{h}_i$ based at $x_h$ such that $$\{e_i=\chi(\hol_{l^{h}_i}(A))\}_{i} \text{ generates }{\mathfrak{su}(2)}.$$ On a small neighborhood $U_{x_h}$ of $x_h$, we can write $h$ as $$h|_{U_{x_h}}= \sum_{1\leq i \leq 3}h_i \otimes e_i.$$ By using a smoothing of the $\delta$-function, we have $$\begin{aligned} \label{point} \inner<h|_{U_{x_h}},\sum_{1\leq i \leq 3}\mu^+_i(h)\otimes \chi(\hol_{l^h_i}(A))>_{L^2(U_{x_h})} \neq 0,\end{aligned}$$ where $\mu^+_i(h)$ are three self-dual $2$-forms supported on $U_{x_h}$. For a fixed generator $\{h^1,\dots ,h^u\}$ of $\coker(d^+_A+sd\pi^+_A)$, we get points $\{x_{h^j}\} \subset Y\times [-2,0] \cup_Y W_0$, small neighborhoods $\{U_{x_{h^j}}\}$, loops $\{l^{h^j}_i\}$ and self-dual 2-forms $\{\mu^+_i(h^j)\}$ satisfying \eqref{point} for all $h^j$. We extend the maps $l^{h^j}_i:S^1\ri W$ to embeddings $S^1\times D^3 \ri W$. We can choose $U_{x_{h^j}}$ satisfying $U_{x_{h^j}} \subset \im\ l^{h^j}_i$. We set $$\Psi(A):= (l^{h^j}_i,\mu^+_i(h^j))_{i,j} \in \Lambda(W),$$ which satisfies the statement of Step \[step1\]. For $j\geq0 $ satisfying $cs_{Y,\pi}(a_j)=0$, we show: \[step2\] For an element $b \in \widetilde{R}^*(Y)_\pi$$($resp. $b=\theta$$)$ satisfying $cs_{Y,\pi}(b)=0$, there exists a perturbation $\Psi^b$ such that the operator $$d\mathcal{F}_{\pi,\Psi^b}|_{(A,0)}:T_A{\mathcal A}^W(b)_\delta \times {\mathbb R}^d\ri \Om^+(W)\otimes {\mathfrak{su}(2)}_{L^2_{q-1,\delta}}$$ $$(\text{resp. } d\mathcal{F}_{\pi,\Psi^b}|_{(A,0)}:T_A{\mathcal A}^W(b)_{(\delta,\delta)} \times {\mathbb R}^d\ri \Om^+(W)\otimes {\mathfrak{su}(2)}_{L^2_{q-1,(\delta,\delta)}} \text{})$$ is surjective for $A \in (\mathcal{F}_{\pi,\Psi^b})^{-1}(0) \cap ({\mathcal A}^W(b)_{\delta}\times \{ 0\})$ $($resp. 
$A \in (\mathcal{F}_{\pi,\Psi^b})^{-1}(0) \cap ({\mathcal A}^W(b)_{(\delta,\delta)}\times \{ 0\})$$)$. First we show that $M^W(b)_{\pi,\delta}$ is compact. Let $\{[A_n]\}$ be any sequence in $M^W(b)_{\pi,\delta}$. By the first hypothesis, we have $$\frac{1}{8\pi^2}\sup_{n \in {\mathbb N}}{||F(A_n)||^2_{L^2(W[0,\infty])}} \leq \frac{1}{8\pi^2}\sup_{n \in {\mathbb N}}||F(A_n)+s\pi(A_n)||^2_{L^2(W)}<\min \{1 ,Q^{2l_Y+3}_X\}.$$ By Theorem $\ref{cptness}$, there exist a large positive number $N$ and gauge transformations $\{g_n\}$ over $W[N,\infty]$ such that $\{g_n^*A_n\}$ converges over $W[N,\infty]$ for small $\delta$ after taking a subsequence. Note that $Y\times (-\infty,0] \cup_Y W[0,N+1]$ is a cylindrical end manifold and we can apply the general theory developed in Section $5$ of [@Do02]. In particular, there exist gauge transformations $\{h_n\}$ on $Y\times (-\infty,0] \cup_Y W[0,N+1]$ such that $\{h_n^*A_n|_{Y\times (-\infty,0] \cup_Y W[0,N+1]}\}$ has a chain convergent subsequence in the sense of Section $5$ of [@Do02], because the bubble phenomenon does not occur under the first hypothesis $$\frac{1}{8\pi^2}\sup_{n \in {\mathbb N}}||F(A_n)+s\pi(A_n)||^2_{L^2(W)}<1.$$ By gluing $\{g_n\}$ and $\{h_n\}$, we obtain a chain convergent subsequence $$[A_{n_j}] \ri ([C^1],\dots,[C^N], [A^0]) \in M(b=c_1,c_2)_\pi \times \dots \times M(c_v,c_{v+1})_\pi \times M^W(c_{v+1})_{\pi,\delta}$$ with $c_i \in \widetilde{R}(Y)_\pi$. Suppose that $[A_{n_j}] \ri [A^0] \in M^W(b)_{\pi,\delta}$ does not hold. We get $cs_{Y,\pi}(c_{v+1})<0$ because the moduli spaces $$M(b=c_1,c_2)_\pi,\cdots , M(c_v,c_{v+1})_\pi$$ are non-empty sets. However this contradicts the assumption that $M(c)_{\pi,\delta}=\emptyset$ for $c\in \widetilde{R}(Y)_\pi$ with $cs_{Y,\pi}(c)<0$. When $b$ is an irreducible connection, the compactness of $M^W(b)_{\pi,\delta}$, Step \[step1\] and the openness of surjectivity of operators imply Step \[step2\]. 
When $b$ is equal to $\theta$, the second hypothesis implies Step \[step2\]. For the inductive step, we show: \[k\] Suppose there is a perturbation $$\Psi^{w-1}=(l^{w-1}_i, \mu^{w-1}_i)_{i} \in \Lambda(W)$$ such that the operators $$d\mathcal{F}_{\pi,\Psi^{w-1}}|_{(A,0)}:T_A{\mathcal A}^W(a_j)_\delta \ri \Om^+(W)_{L^2_{q-1,\delta}}$$ are surjective for $(A,0) \in (\mathcal{F}_{\pi,\Psi^{w-1}})^{-1}(0) \cap ({\mathcal A}^W(a_j) \times \{0\})$ and $j \in \{1, \cdots, w-1\}$. Then the space $$K_w:=\left\{A\in M^W(a_w)_{\pi,\delta} \middle |\ 0\neq \coker (d\mathcal{F}_{\pi,\Psi^{w-1}}|_{(A,0)}) \subset \Omega^{+}(W)_{L^2_{q-1,\delta}} \right\}$$ is compact. Let $\{[A_n]\}$ be a sequence in $K_w$. By an estimate similar to that in Step \[step2\] and Theorem \[cptness\], we get a chain convergent subsequence $$[A_{n_j}] \ri ([B^1],\dots,[B^N], [A^0]) \in M(a_w=b_1,b_2)_\pi \times \dots \times M(b_v,b_{v+1})_\pi \times M^W(b_{v+1})_{\pi,\delta}$$ with $b_i \in \widetilde{R}(Y)_\pi$. Suppose that $[A_{n_j}] \ri [A^0] \in M^W(a_{w})_{\pi,\delta}$ does not hold. In this case, the operators $d^+_{B^i}+d\pi_{B^i}$ on $Y\times {\mathbb R}$ and the operator $d\mathcal{F}_{\pi,\Psi^{w-1}}|_{(A^0,0)}$ on $W$ are surjective in suitable function spaces by the assumption on $\pi$ and the induction hypothesis. For large $j$, the operator $d\mathcal{F}_{\pi,\Psi^{w-1}}|_{(A^{n_j},0)}$ can be approximated by the gluing of the operators $d_{B^i}^++d\pi_{B^i}$ and $d\mathcal{F}_{\pi,\Psi^{w-1}}|_{(A^0,0)}$. By gluing their right inverses as in Theorem $7.7$ of [@SaWe08], $d\mathcal{F}_{\pi,\Psi^{w-1}}|_{(A^{n_j},0)}$ also has a right inverse for sufficiently large $j$. This is a contradiction and we have the conclusion of Step \[k\]. 
To complete the induction, we show: \[step4\] There exists a perturbation $\Psi^w$ such that the operator $$d\mathcal{F}_{\pi,\Psi^w}:{\mathcal A}^W(a_w)_\delta \times {\mathbb R}^d\ri \Om^+(W)\otimes {\mathfrak{su}(2)}_{L^2_{q-1,\delta}}$$ is surjective at any point in $(\mathcal{F}_{\pi,\Psi^w})^{-1}(0)\cap ({\mathcal A}(a_w)_{\delta}\times \{ 0\} )$. We take the perturbation $\Psi_A=((l^j_A),(\mu^+_j(A)))$ for each $ A\in K_w$ as in Step \[step1\]. Because $K_w$ is compact and surjectivity of the operators is an open condition, there exist $\{A_1,\cdots ,A_k\} \subset K_w$ and a perturbation $\Psi^w$ such that $$d\mathcal{F}_{\pi,\Psi^w}|_{(A,0)}:{\mathcal A}^W(a_w)_\delta \times {\mathbb R}^d\ri \Om^+(W)\otimes {\mathfrak{su}(2)}_{L^2_{q-1,\delta}}$$ is surjective for all $(A,0) \in (\mathcal{F}_{\pi,\Psi^w})^{-1}(0) \cap ({\mathcal A}^W(a)_{\delta}\times \{ 0\})$. Here $\Psi^w$ is defined by $$\Psi^w:=((l^j_{A_1}, \cdots, l^j_{A_k}, l^{w-1}_i) ,(\mu^+_j(A_1),\cdots,\mu^+_j(A_k), \mu^{w-1}_i)),$$ which satisfies the property in Step \[step4\]. Second, we show that the operator $d\mathcal{F}_{\pi,\Psi^w}$ is surjective at any point in ${\mathcal{F}_{\pi,\Psi^w}}^{-1}(0)\cap ({\mathcal A}^W(a)_{\delta}\times B^d(\eta) )$. Suppose there is no $\eta$ such that the statement holds. Then there is a sequence $\{(A_n,\epsilon_n)\}$ with $A_n \in M^W(a)_{\delta,\pi,\Psi,\epsilon_n}$ such that $\epsilon_n \ri 0$ as $n\ri \infty$ and $d\mathcal{F}_{\pi,\Psi^w}|_{(A_n,\epsilon_n)}$ is not surjective for all $n\in {\mathbb N}$. Because the bubble does not occur, $\{A_n\}$ has a chain convergent subsequence to $$([B^1],\dots , [B^N],[A^0]) \in M(b_0,b_1)_\pi \times \dots \times M(b_{v-1},b_v)_\pi \times M^W(b_v)_{\delta,\pi,\Psi,0}$$ for some $b_i \in \widetilde{R}(Y)_\pi$. 
Since $\pi$ is a regular perturbation and $d\mathcal{F}_{\pi,\Psi^w}|_{(A^0,0)}$ is surjective, there exist right inverses of $d^+_{B^1}+d\pi^+_{B^1}, \dots,d^+_{B^N}+d\pi^+_{B^N}$ and $d\mathcal{F}_{\pi,\Psi^w}|_{(A^0,0)}$ in suitable function spaces. By gluing the right inverses as in Step \[k\], $d\mathcal{F}_{\pi,\Psi^w}|_{(A_n,\epsilon_n)}$ also has a right inverse for large $n$. This is a contradiction and this completes the proof. \[Tra\] For given data $(\delta,\pi,a)$ in Lemma \[lem:Sur\], there exist $\eta>0$, a perturbation $\Psi$ and a dense subset $R \subset B^d(\eta) \subset {\mathbb R}^d$ such that $(\Psi,b)$ is a regular perturbation for $b \in R$. This follows from Lemma \[lem:Sur\], the argument in Section $3$ of [@FU91] and the Sard-Smale theorem. Applying the implicit function theorem, we get a manifold structure on $M^W(a)_{\delta,\pi,\Psi,b}$. Its dimension coincides with the Floer index $\ind(a)$ of $a$ by Proposition \[calfred\]. Therefore we have: \[trans\] For given data $(\delta,\pi,a)$ in Lemma \[lem:Sur\], there exist $\eta>0$, a perturbation $\Psi$ and a dense subset $R \subset B^d(\eta) \subset {\mathbb R}^d$ such that $M^W(a)_{\delta,\pi,\Psi,b}$ has the structure of a manifold of dimension $\ind(a)$ for $b \in R$. Orientation {#ori} ----------- In [@Do87], Donaldson showed the orientability of the instanton moduli spaces for closed oriented 4-manifolds. In this subsection, we deal with the case of the non-compact 4-manifold $W$ by generalizing Donaldson’s argument. More explicitly, we show that the moduli space $M^W(a)_{\delta,\pi}$ is orientable. We also follow the Fredholm and moduli theory in [@T87] to formulate the configuration space for $SU(l)$-connections for $l\geq 2$. Let $Z$ be a compact oriented 4-manifold which satisfies $\partial Z=Y$ and $H_1(Z)\cong 0$. We set $Z^+:=(-Z)\cup_Y W[0,\infty]$ and $\hat{Z}:= (-Z)\cup_Y Y\times [0,\infty)$. 
Fix a Riemannian metric $g_{Z^+}$ on $Z^+$ with $g_{Z^+}|_{W[0,\infty]}=g_W|_{W[0,\infty]}$ and a Riemannian metric $g_{\hat{Z}}$ with $g_{\hat{Z}}|_{Y\times [0,\infty)}= g_Y \times g^{\text{stan}}_{\mathbb R}$. First, we introduce the configuration spaces for $SU(l)$-connections on $W$ and $Z^+$ for $l\geq 2$ and the [*$SU(2)$-configuration space*]{} for $\hat{Z}$. Fix a positive integer $q \geq3$. For an irreducible $SU(2)$-connection $a$ on $Y$, we define $${\mathcal A}^{W}(a)_{(\delta,\delta),l}:= \left\{A_{a}+c \middle | c \in \Om^1(W)\otimes \mathfrak{su}(l)_{L^2_{q,(\delta,\delta)}} \right\},$$ $${\mathcal A}^{Z^+}_{\delta,l}:= \left\{\theta+c \middle | c \in \Om^1(Z^+)\otimes \mathfrak{su}(l)_{L^2_{q,\delta}} \right\},$$ and $${\mathcal A}^{\hat{Z}}(a):= \left\{B_a+c \middle | c \in \Om^1(\hat{Z})\otimes \mathfrak{su}(2)_{L^2_{q}} \right\},$$ where - $A_{a}$ is an $SU(l)$-connection on $W$ with $A_{a}|_{Y\times (-\infty,-1]}=\pr^* (a\oplus \theta)$ and $A_{a}|_{W[0,\infty] }=\theta$. - $B_a$ is an $SU(2)$-connection on $\hat{Z}$ with $B_{a}|_{Y\times [0,\infty)}=\pr^*a$. - The $L^2_{q,(\delta,\delta)}(W)$-norm is defined by $$||f||^2_{L^2_{q,(\delta,\delta)}(W)}:= \sum_{0\leq i \leq q} \int_W e^{\tau' \delta} |\nabla_{A_a}^i f |^2d\text{vol} ,$$ where $\tau'$ is defined in Definition \[confset\] and $f$ is an element of $\Om^1(W)\otimes \mathfrak{su}(l)$ with compact support. - The $L^2_{q,\delta}(Z^+)$-norm is defined by $$||f||^2_{L^2_{q,\delta}(Z^+)}:= \sum_{0\leq i \leq q} \int_{Z^+} e^{\tau'' \delta} |\nabla^i_\theta f |^2d\text{vol},$$ where $\tau'':Z^+\ri [0,1]$ is a smooth function satisfying $\tau''|_{W[0,\infty]}=\tau$ defined in Definition \[confset\] and $f$ is an element of $\Om^1(Z^+)\otimes \mathfrak{su}(l)$ with compact support. 
- The $L^2_{q}(\hat{Z})$-norm is defined by $$||f||^2_{L^2_{q}(\hat{Z})}:= \sum_{0\leq i \leq q} \int_{\hat{Z}} |\nabla_{B_a}^i f |^2d\text{vol} ,$$ where $f$ is an element of $\Om^1(\hat{Z})\otimes {\mathfrak{su}(2)}$ with compact support. We also define [*the $SU(l)$-configuration spaces*]{} ${\mathcal B}^W (a)_{(\delta,\delta),l}$, ${\mathcal B}^{Z^+}_{\delta,l}$ and the [*$SU(2)$-configuration space*]{} ${\mathcal B}^{\hat{Z}}(a)$ by $${\mathcal B}^W (a)_{(\delta,\delta),l}:={\mathcal A}^W(a)_{(\delta,\delta),l} /{\mathcal G}^W(a)_{l},\ {\mathcal B}^{Z^+}_{\delta,l}:={\mathcal A}^{Z^+}_{\delta,l} /{\mathcal G}^{Z^+}_{l}$$ and $${\mathcal B}^{\hat{Z}}(a):={\mathcal A}^{\hat{Z}}(a) /{\mathcal G}^{\hat{Z}}(a)$$ where ${\mathcal G}^W(a)_{l}$, ${\mathcal G}^{Z^+}_{l}$ and ${\mathcal G}^{\hat{Z}}(a)$ are given by $${\mathcal G}^W(a)_l:=\left\{ g\in \aut(W\times SU(l)) \subset \End(\mathbb{C}^l)_{L^2_{q+1,\text{loc}}} \middle| \nabla_{A_a}(g) \in L^2_{q,(\delta,\delta)}(W)\right\} ,$$ $${\mathcal G}^{Z^+}_l:=\left\{ g\in \aut(Z^+\times SU(l)) \subset \End(\mathbb{C}^l)_{L^2_{q+1,\text{loc}}} \middle| d(g) \in L^2_{q,\delta}(Z^+) \right\}$$ and $${\mathcal G}^{\hat{Z}}(a):= \left\{ g\in \aut(\hat{Z}\times SU(2)) \subset \End(\mathbb{C}^2)_{L^2_{q+1,\text{loc}}} \middle| \nabla_{B_a}(g) \in L^2_{q}(\hat{Z}) \right\}.$$ The action of ${\mathcal G}^W(a)_l$ (resp. ${\mathcal G}^{Z^+}_l$, ${\mathcal G}^{\hat{Z}}(a)$) on ${\mathcal A}^{W}(a)_{(\delta,\delta),l}$ (resp. ${\mathcal A}^{Z^+}_{\delta,l}$, ${\mathcal A}^{\hat{Z}}(a)$) is given by the pull-back of connections. 
We define [*the reduced gauge groups*]{} by $$\hat{{\mathcal G}}^{W}(a)_l:= \left\{ g \in {\mathcal G}^W(a)_l \middle |\lim_{t \ri -\infty} g|_{Y\times t} =id\ \right\} ,$$ $$\hat{{\mathcal G}}^{W,\text{fr}}(a)_l:=\left\{ g \in \hat{{\mathcal G}}^{W}(a)_l \middle| \lim_{n \ri \infty} g|_{W_n} = id \right\}$$ and $$\hat{{\mathcal G}}^{Z^+}_l:= \left\{ g \in {\mathcal G}^{Z^+}_l \middle | \lim_{n \ri \infty} g|_{W_n} = id \right\}.$$ Then we define $$\hat{{\mathcal B}}^{W} (a)_{(\delta,\delta),l} :={\mathcal A}^{W}(a)_{(\delta,\delta),l}/ \hat{{\mathcal G}}^{W}(a)_l,\ \hat{{\mathcal B}}^{W,\text{fr}} (a)_{(\delta,\delta),l} :={\mathcal A}^{W}(a)_{(\delta,\delta),l}/ \hat{{\mathcal G}}^{W,\text{fr}}(a)_l$$ and $$\hat{{\mathcal B}}^{Z^+}_{\delta,l}:={\mathcal A}^{Z^+}_{\delta,l}/ \hat{{\mathcal G}}^{Z^+}_{l}.$$ The group $\hat{{\mathcal G}}^{W,\text{fr}}(a)_l$ (resp. $\hat{{\mathcal G}}^{Z^+}_l$) has the structure of a Banach Lie subgroup of ${\mathcal G}^W(a)_l$ (resp. ${\mathcal G}^{Z^+}_l$). By construction, there are exact sequences $$\hat{{\mathcal G}}^{W}(a)_l \ri {\mathcal G}^W(a)_l \ri \stab(a\oplus \theta),\ \hat{{\mathcal G}}^{W,\text{fr}}(a)_l\ri \hat{{\mathcal G}}^W(a)_l \ri SU(l)$$ and $$\hat{{\mathcal G}}^{Z^+}_l\ri {{\mathcal G}}^{Z^+}_l \ri SU(l)$$ of Lie groups. The group $\hat{{\mathcal G}}^{W}(a)_l$ (resp. $\hat{{\mathcal G}}^{Z^+}_l$) acts freely on ${\mathcal A}^{W} (a)_{(\delta,\delta),l}$ (resp. ${\mathcal A}^{Z^+}_{\delta,l}$). \[simp\] For $l\geq 3$ and an $SU(2)$-flat connection $a$, there exists a positive number $\delta'$ such that for any positive real number $\delta$ less than $\delta'$ the following properties hold. - $\hat{{\mathcal B}}^{W} (a)_{(\delta,\delta),l}$ is simply connected. - $\hat{{\mathcal B}}^{Z^+}_{\delta,l}$ is simply connected. We will show only the first property. The second one is shown in a similar way to the first case. We use the condition $H_1(Z,{\mathbb Z})\cong 0$ for the second property. 
Since $\pi_i(SU(l))=0$ for $i=0,1$, $$\pi_1(\hat{{\mathcal B}}^{W}(a)_{(\delta,\delta),l} )\text{ is isomorphic to }\pi_1(\hat{{\mathcal B}}^{W,\text{fr}} (a)_{(\delta,\delta),l} ).$$ Therefore, we will show $\pi_1(\hat{{\mathcal B}}^{W,\text{fr}} (a)_{(\delta,\delta),l})=0$. There exists $\delta'>0$ such that for $0< \delta<\delta'$, $$\begin{aligned} \label{fib} \hat{{\mathcal G}}^{W,\text{fr}}(a)_l \ri {\mathcal A}^{W} (a)_{(\delta,\delta),l} \ri \hat{{\mathcal B}}^{W,\text{fr}} (a)_{(\delta,\delta),l}\end{aligned}$$ is a fibration, since the action admits a local slice due to the Fredholm and moduli theory in [@T87]. Let $W^*$ be the one point compactification of $W$. Using $\eqref{fib}$, we obtain $$\pi_1(\hat{{\mathcal B}}^{W,\text{fr}} (a)_{(\delta,\delta),l}) \cong \pi_0(\hat{{\mathcal G}}^{W,\text{fr}}(a)_l ) \cong [W^*,SU(l)].$$ Since $\pi_i(SU(l))$ vanishes for $i=0,1,2,4$, the obstruction for an element of $ [W^*,SU(l)]$ to be homotopic to the constant map lives in $H^3(W^*,\pi_3(SU(l))) \cong H^3_{\text{comp}}(W,\pi_3(SU(l))) \cong H_1(W,\pi_3(SU(l)))=0$, where the second isomorphism is the Poincaré duality. This implies $$\pi_1(\hat{{\mathcal B}}^{W,\text{fr}} (a)_{(\delta,\delta),l})\cong 0.$$ We now define the determinant line bundles. For simplicity, we impose Assumption $\ref{imp}$ on $Y$. Let $\pi$ be an element in $\prod(Y)^{\text{flat}}$, let $(\Psi,\epsilon)$ be a perturbation as in Subsection \[hol2\] and fix an element $a \in \widetilde{R}(Y)$. For $c \in {\mathcal B}^{W}(a)_{(\delta,\delta),l}$ (resp. $\hat{{\mathcal B}}^{W}(a)_{(\delta,\delta),l}$), we have the following bounded operator: $$d(\mathcal{F}_{\pi,\Psi})_c+d^{*_{L^2_{(\delta,\delta)}}}_c:\ome^1(W) \otimes \mathfrak{su}(l)_{L^2_{q,(\delta,\delta)}} \ri (\ome^0(W) \otimes \mathfrak{su}(l) \oplus \ome^+(W) \otimes \mathfrak{su}(l))_{L^2_{q-1,(\delta,\delta)}}.$$ The operators $d(\mathcal{F}_{\pi,\Psi})_c+d^{*_{L^2_{(\delta,\delta)}}}_c$ are Fredholm for small $\delta$. 
Fix such a $\delta$. We set $$\lambda^W (a,l,c):= \Lambda^{\max} \ker (d(\mathcal{F}_{\pi,\Psi})_c) \otimes (\Lambda^{\max} \coker (d(\mathcal{F}_{\pi,\Psi})_c))^*.$$ [*The determinant line bundles*]{} are defined by $$\lambda^W (a,l):=\displaystyle \bigcup_{c \in {\mathcal B}^W(a)_{(\delta,\delta),l}} \lambda^W (a,l,c) \ri {\mathcal B}^W(a)_{(\delta,\delta),l}$$ and $$\hat{\lambda}^{W} (a,l):=\displaystyle \bigcup_{c \in \hat{{\mathcal B}}^{W}(a)_{(\delta,\delta),l}} \lambda^W (a,l,c) \ri \hat{{\mathcal B}}^{W}(a)_{(\delta,\delta),l}.$$ We also define $$\lambda^{Z^+} (l)\ri {{\mathcal B}}^{Z^+}_{\delta,l} \text{ and } \lambda^{\hat{Z}}(a) \ri {{\mathcal B}}^{\hat{Z}}(a)$$ in a similar way with respect to the operators $$d^+_c+d^{*_{L^2_{\delta}}}_c :\ome^1(Z^+) \otimes \mathfrak{su}(l)_{L^2_{q,\delta}} \ri (\ome^0(Z^+) \otimes \mathfrak{su}(l) \oplus \ome^+(Z^+) \otimes \mathfrak{su}(l))_{L^2_{q-1,\delta}}$$ for $c \in {{\mathcal B}}^{Z^+}_{\delta,l}$ and $$d^+_c+d^*_c :\ome^1(\hat{Z}) \otimes {\mathfrak{su}(2)}_{L^2_{q}} \ri (\ome^0(\hat{Z}) \otimes {\mathfrak{su}(2)}\oplus \ome^+(\hat{Z}) \otimes {\mathfrak{su}(2)})_{L^2_{q-1}}$$ for $c \in {{\mathcal B}}^{\hat{Z}}(a)$. For given data $(a,\delta,l)$ in Proposition \[simp\], the bundles $\lambda^{Z^+} (a,l)\ri {{\mathcal B}}^{Z^+}_{\delta,l}$ and $\lambda^W(a,l)\ri {\mathcal B}^W(a)_{(\delta,\delta),l}$ are trivial. Since the determinant line bundle is a real line bundle, the triviality of $\lambda^{Z^+} (a,l)\ri {{\mathcal B}}^{Z^+}_{\delta,l}$ is a consequence of Proposition $\ref{simp}$. Therefore it suffices to show the triviality of $\lambda^W(a,l)\ri {\mathcal B}^W(a)_{(\delta,\delta),l}$. We have a fibration $$\begin{aligned} \label{fib1} \stab(a\oplus \theta) \ri \hat{{\mathcal B}}^W(a)_{(\delta,\delta),l} {\xrightarrow}{j} {\mathcal B}^W(a)_{(\delta,\delta),l}.\end{aligned}$$ We also have an isomorphism $j^*\lambda^W(a,l) \cong \hat{\lambda}^W(a,l)$ for $j$ in \eqref{fib1}. 
$\hat{\lambda}^W(a,l)$ is the trivial bundle for $l>2$ by Proposition $\ref{simp}$. So if the fiber $\stab(a\oplus \theta)$ of \eqref{fib1} is connected, $\lambda^W(a,l)$ is also trivial. The possibilities for $\stab(a\oplus \theta)$ are $SU(l)$, $U(1)\times U(l-1)$, $S(U(2)\times U(l-2))$ and $$\left\{(z,A) \in U(1)\times U(l-2) \middle| z^2 \det A=1\right\}.$$ Since these groups are connected, $\lambda^W(a,l)$ is the trivial bundle. \[222\] Suppose that $Y$ satisfies Assumption \[imp\] and $a$ is an element in $\widetilde{R}^*(Y)$. Let $i_1:{\mathcal B}^W(a)_{(\delta,\delta),2} \ri {\mathcal B}^W(a)_{(\delta,\delta),3}$ and $i_2:{\mathcal B}^{Z^+}_{\delta,2} \ri {\mathcal B}^{Z^+}_{\delta,3}$ be the maps induced by taking the direct sum with the product connection. There exists a positive number $\delta'$ such that for any positive real number $\delta$ less than $\delta'$, $i_1^*\lambda^W(a,3) \cong \lambda^W(a,2)$ and $i_2^*\lambda^{Z^+}(a,3) \cong \lambda^{Z^+}(a,2)$ hold. Under Assumption \[imp\] on $Y$, the isomorphism classes of these line bundles are independent of the choices of the perturbations $\pi$ and $(\Psi,\epsilon)$; this is seen by considering a 1-parameter family of perturbations $\pi_t:=(f,th)$ and $(\Psi,t\epsilon)$ for $t \in [0,1]$. So Lemma $(5.4.4)$ in [@DK90] implies the conclusion. \[orie\] Suppose that $Y$ satisfies Assumption \[imp\] and $a$ is an element in $\widetilde{R}^*(Y)$. Let $\pi $ be an element in $\prod(Y)^{\text{flat}}$ and $(\Psi, \epsilon)$ be a regular perturbation for $a \in \widetilde{R}^*(Y)$. For sufficiently small $\delta$, $M^W(a)_{\pi,\Psi,\epsilon,\delta}$ is orientable. Furthermore, an orientation of $M^W(a)_{\pi,\Psi,\epsilon,\delta}$ is induced by an orientation of $\lambda^W(a,2)$. Using the exponential decay estimate in Proposition 4.3 of [@Do02], we have an inclusion $i:M^W(a)_{\pi,\Psi,\epsilon,\delta} \ri {\mathcal B}^W(a)_{(\delta,\delta),2}$ for small $\delta$ as a set. 
From this inclusion $i$, we regard $M^W(a)_{\pi,\Psi,\epsilon,\delta}$ as a subset of ${\mathcal B}^W(a)_{(\delta,\delta),2}$. Applying the convergence result in Corollary $5.2$ of [@Do02], we can show that the topology of $M^W(a)_{\pi,\Psi,\epsilon,\delta}$ in ${\mathcal B}^W(a)_{\delta}$ coincides with the topology of $M^W(a)_{\pi,\Psi,\epsilon,\delta}$ in ${\mathcal B}^W(a)_{(\delta,\delta),2}$. Also, using the exponential decay estimate for solutions to the linearized equation in Lemma $3.3$ of [@Do02], $\lambda^W(a,2)|_{M^W(a)_{\pi,\Psi,\epsilon,\delta}} \ri M^W(a)_{\pi,\Psi,\epsilon,\delta}$ is canonically isomorphic to $\Lambda^{\max} TM^W(a)_{\pi,\Psi,\epsilon,\delta}$. From Theorem \[orie\], an orientation of $M^W(a)_{\pi,\Psi,\epsilon,\delta}$ is characterized by a trivialization of $\lambda^W(a,2)$. On the other hand, to formulate the instanton Floer homology of $Y$ with ${\mathbb Z}$ coefficients, Donaldson introduced the line bundle $\lambda(a):=\lambda^{(-\hat{Z})}(a) \otimes \lambda_{(-\hat{Z})}^* \ri {\mathcal B}^{(-\hat{Z})}(a)$, where $\lambda_{(-\hat{Z})}$ is given by $$\Lambda^{\max} (H^0_{DR}(-\hat{Z})\oplus H^1_{DR}(-\hat{Z}) \oplus H^+_{DR}(-\hat{Z}))$$ in Subsection $5.4$ of [@Do02]. The orientation of $\lambda(a)$ is essentially independent of the choice of $Z$. We set $$\lambda_W:= \Lambda^{\max} (H^0_{DR}(W)\oplus H^1_{DR}(W) \oplus H^+_{DR}(W)),$$ and $\lambda^W(a):= \lambda^W(a,2) \otimes \lambda_W \ri {\mathcal B}^{W} (a)_{(\delta,\delta),2}$. \[orien\] Suppose that $Y$ satisfies Assumption \[imp\]. For an irreducible flat connection $a$, there is a canonical identification between the orientations of $\lambda^W(a)$ and the orientations of $\lambda(a)$. It suffices to construct an isomorphism $\lambda^W(a)\cong \lambda(a)$ which is canonical up to homotopy. 
First we fix two elements $[A] \in {\mathcal B}^{W} (a)_{(\delta,\delta)}$ and $[B] \in {\mathcal B}^{\hat{Z}}(a)$ which have representatives $A$ and $B$ satisfying $A|_{Y\times (-\infty,-1]}=\pr^*a$ and $B|_{Y\times [1,\infty)}=\pr^*a$. For such two connections, we obtain an element $A\# B \in {\mathcal B}^{Z^+}_\delta$ by gluing the connections. This induces an isomorphism $$\#:det( d_A^*+d^+_A)\otimes det( d_B^*+d^+_B)\ri det( d_{A\# B}^*+d^+_{A\# B})$$ by an argument similar to that of Proposition $3.9$ in [@Do02]. Therefore we have an identification $$\#:\lambda^W(a)|_{[A]}\otimes \lambda^{\hat{Z}}(a)|_{[B]} \ri \lambda_{Z^+}|_{[A\#B]}.$$ If we choose a path from $[\theta]$ to $[A\#B]$ in ${\mathcal B}^{Z^+}_\delta$, then we have an identification between $\lambda_{Z^+}|_{[\theta]}$ and $\lambda_{Z^+}|_{[A\#B]}$. The line bundle $\lambda_{Z^+}|_{[\theta]}$ is naturally isomorphic to $$\Lambda^{\max} (H^0_{DR}(Z^+)\oplus H^1_{DR}(Z^+) \oplus H^+_{DR}(Z^+))$$ by Proposition $5.1$ in [@T87]. This cohomology group is isomorphic to $$\Lambda^{\max} (H^0_{DR}(Z)\oplus H^1_{DR}(Z) \oplus H^+_{DR}(Z))\otimes \Lambda^{\max} (H^0_{DR}(W)\oplus H^1_{DR}(W) \oplus H^+_{DR}(W))$$ by the Mayer-Vietoris sequence. Therefore $$\lambda^{W}(a)|_{[A]}\otimes \lambda_W^* \cong (\lambda^{(-Z)}(a)|_{[B]} \otimes \lambda_{(-Z)}^*)^*$$ holds. Because $\lambda^{Z^+}(a,2)\ri {\mathcal B}^{Z^+}_{\delta,2}$ is orientable by Lemma \[222\] and Proposition \[simp\], the homotopy class of this identification does not depend on the choices of the path, $A$, $B$ and the bump functions in the gluing map. We also have the canonical isomorphism $$(\lambda^{(-Z)}(a)|_{[B]} \otimes \lambda_{(-Z)}^*)^* \cong \lambda^Z(a)|_{[B]} \otimes \lambda_{Z}^*$$ by gluing $Z$ and $-Z$ as in the above discussion and using the Mayer-Vietoris sequence. This completes the proof. 
Combining Theorem \[orie\] and Lemma \[orien\], we have: \[ori:conc\] Under the assumption of Theorem \[orie\], an orientation of $\lambda(a)$ and an orientation of $\lambda_W$ give an orientation of $M^W(a)_{\pi,\Psi,\epsilon,\delta}$. Proof of main theorem {#promain} ===================== Let $Y$, $l_Y$ and $X$ be as in Section $\ref{main}$. Take a Riemannian metric $g_Y$ on $Y$. Fix a non-negative real number $r \in \Lambda_Y$ smaller than $Q^{2l_Y+3}_X$. Suppose that there is an embedding $f$ of $Y$ into $X$ satisfying $f_*[Y]= 1 \in H_3(X,{\mathbb Z})$. Then we obtain an oriented homology cobordism from $Y$ to $-Y$ by cutting $X$ open along $Y$. Recall that $W$ is the non-compact oriented Riemannian 4-manifold with both a cylindrical end and a periodic end which is formulated at the beginning of Subsection \[fred\]. We fix a holonomy perturbation $\pi \in \prod(Y)$ satisfying the following conditions. 1. $\pi$ is an $\epsilon$-perturbation as in Subsection \[filter\]. 2. $\pi$ is a regular perturbation as in the end of Subsection \[hol\]. 3. $\pi$ is an element of $\prod(Y)^{\text{flat}}$ in Definition $\ref{flatpres}$. 4. For $a\in \widetilde{R}(Y)$ with $0\leq cs_Y(a) < \min\{1,Q^{2l_Y+3}_X\}$ and $A \in M^W(a)_{\pi,\delta}$, $$\frac{1}{8\pi^2}\sup_{n \in {\mathbb N}} ||F(A)+s\pi(A)||^2_{L^2(W)} <\min\{1,Q^{2l_Y+3}_X\}$$ holds. 5. $d^+_\theta+ sd\pi_\theta:{\mathcal A}^W(\theta)_{(\delta,\delta)}\times {\mathbb R}^d \ri \Omega^+(W)\otimes {\mathfrak{su}(2)}_{L^2_{q-1,(\delta,\delta)}}$ is surjective. 6. For $c \in \widetilde{R}(Y)_\pi$ satisfying $cs_{Y}(c)<0$, $M^W(c)_{\pi,\delta}$ is the empty set. Assumption \[imp\] and the proof of Theorem $8.4$ (ii) of [@SaWe08] imply the existence of a perturbation satisfying the third condition. The first, fourth, fifth and sixth conditions follow from choosing $h\in C^{l'}(SU(2)^{d'},{\mathbb R})_{ad} $ in $\pi=(f,h)$ sufficiently small. Next we also fix a holonomy perturbation $(\Psi,\epsilon)$ satisfying the following conditions. 1. 
$(\Psi,\epsilon)$ is regular for $[b] \in \widetilde{R}(Y)$ with $0\leq cs_Y(b) \leq cs_Y(a)$. 2. For $a\in \widetilde{R}(Y)$ with $0\leq cs_Y(a) < \min\{1,Q^{2l_Y+3}_X\}$, $$\frac{1}{8\pi^2}\sup_{n \in {\mathbb N}} ||F(A)+s\pi(A)+\sigma_{\Psi}(A,\epsilon)||^2_{L^2(W)} <\min\{1,Q^{2l_Y+3}_X\}$$ holds. To obtain the first condition, we use Lemma $\ref{lem:Sur}$. The second condition is satisfied when we take $\epsilon$ sufficiently small. In order to formulate the instanton Floer homology of $Y$ with ${\mathbb Z}$ coefficients, we fix an orientation of $\lambda(a)$ for each $a \in R(Y)$. The orientation of $Y$ induces an orientation of $\lambda_W$. To determine the orientation of $M^W(a)_{\pi,\Psi,\epsilon,\delta}$, we fix a compact oriented manifold $Z$ with $H_1(Z,{\mathbb Z})\cong 0$ as in Subsection $\ref{ori}$. The relation between $\lambda_{Z,a}$ and $\lambda(a)$ is given by $$\lambda^Z(a) \otimes \lambda_{Z} \cong \lambda(a).$$ Let $a$ be a flat connection satisfying $cs_Y(a)<r\leq \min \{Q_X^{2l_Y+3},1\}$ and $\ind(a)=1$. We consider the moduli space $M^W(a)_{\pi,\Psi,\epsilon,\delta}$. From the choice of these perturbation data and Corollary \[trans\], $M^W(a)_{\pi,\Psi,\epsilon,\delta}$ has the structure of a 1-dimensional manifold for small $\delta$. From Theorem \[ori:conc\], we obtain an orientation of $M^W(a)_{\pi,\Psi,\epsilon,\delta}$ induced by the orientation of $\lambda_{Z,a}$. Let $(A,B)$ be a limit point of $M^W(a)_{\pi,\Psi,\epsilon,\delta}$. Using Theorem $\ref{cptness}$ and the standard dimension counting argument, the limit points of $M^W(a)_{\pi,\Psi,\epsilon,\delta}$ correspond to two cases: 1. $\displaystyle(A,B) \in \bigcup_{b \in \widetilde{R}^*(Y), cs_Y(b)<r,\ind(b)=0} M(a,b)_\pi \times M^W(b)_{\pi,\Psi,\epsilon,\delta}$ 2. $(A,B) \in M(a,\theta)_\pi \times M^W(\theta)_{\pi,\Psi,\epsilon,(\delta,\delta)}$. 
For the second case, we use the exponential decay estimate to show $B \in M^W(\theta)_{\pi,\Psi,\epsilon,(\delta,\delta)}$. Here $M(a,b)_\pi$ and $M(a,\theta)_{\pi,\delta}$ have the structure of 1-dimensional manifolds. The quotient spaces $M(a,b)_\pi/{\mathbb R}$ and $M(a,\theta)_{\pi,\delta}/{\mathbb R}$ have the structure of compact oriented $0$-dimensional manifolds whose orientations are induced by the orientation of $\lambda_Z$ and the ${\mathbb R}$ action by translation, as in Subsection $5.4$ of [@Do02]. Corollary \[trans\] and Theorem \[ori:conc\] imply that $M(b)_{\pi,\Psi,\epsilon,\delta}$ has the structure of a compact oriented 0-manifold whose orientation is induced by the orientations of $\lambda_b$ and $\lambda_W$ for small $\delta$. Since the formal dimension of $M(\theta)_{\pi,\Psi,\epsilon,(\delta,\delta)}$ is $-3$ by Proposition \[calfred\] and there is no reducible solution except $\theta$ for a regular perturbation $(\Psi,\epsilon)$, $M(\theta)_{\pi,\Psi,\epsilon,\delta}$ consists of just one point. By the gluing theory, as in Theorem $4.17$ and Subsection $4.4.1$ of [@Do02], there is the following diffeomorphism onto its image: $$\mathcal{J}:\left(\bigcup_{b \in \widetilde{R}^*(Y), cs_Y(b)<r, \ind(b)=0}(M(a,b)_\pi/{\mathbb R}\times M(b)_{\pi,\Psi,\epsilon,\delta}) \cup M(a,\theta)_\pi/{\mathbb R}\right)\times [T,\infty)$$ $$\ri M^W(a)_{\pi,\Psi,\epsilon,\delta}.$$ By the definition of the orientations of $M(a,b)_\pi/{\mathbb R}$ and $M(a,\theta)_\pi/{\mathbb R}$, we can construct $\mathcal{J}$ as an orientation preserving map. Furthermore, the complement of $\im \mathcal{J}$ is compact. Therefore we can construct the compactification of $M^W(a)_{\pi,\Psi,\epsilon,\delta}$ by adding finitely many points $$\begin{aligned} \label{main'} \bigcup_{b \in \widetilde{R}^*(Y), cs_Y(b)<r,\ind(b)=0} (M(a,b)_\pi/{\mathbb R}\times M(b)_{\pi,\Psi,\epsilon,\delta}) \cup M(a,\theta)_\pi/{\mathbb R},\end{aligned}$$ which has the structure of a compact oriented $1$-manifold. 
By counting the boundary points of this compactification, we obtain the relation $$\delta^{r}(n)(a)+ \theta^{r}(a)=0,$$ where $n \in CF^0_r(Y)$ is defined by $n(b):= \# M(b)_{\pi,\Psi,\epsilon,\delta}$. This implies that $\theta^{r}$ is a coboundary. Therefore we have $0=[\theta^r] \in HF^1_{r}(Y)$ for $0\leq r \leq \min \{Q^{2l_Y+3}_X,1\}$.
--- abstract: 'The cost of wind energy can be reduced by using SCADA data to detect faults in wind turbine components. Normal behavior models are one of the main fault detection approaches, but there is a lack of consensus in how different input features affect the results. In this work, a new taxonomy based on the causal relations between the input features and the target is presented. Based on this taxonomy, the impact of different input feature configurations on the modelling and fault detection performance is evaluated. To this end, a framework that formulates the detection of faults as a classification problem is also presented.' bibliography: - 'biblio.bib' --- Introduction {#submission} ============ In 2018, global energy-related CO$_2$ emissions reached a historic high of 33.1 gigatonnes. These emissions are caused by the burning of fossil fuels, mainly natural gas, coal and oil, which accounted for 64% of global electricity production in this same year [@iea:co2]. Greenhouse gases like CO$_2$ are responsible for climate change, which threatens to change the way we have come to know Earth and human life. For the previous reasons, there has been a global effort to shift from a fossil fuel based energy system towards a renewable energy one. In fact, it is expected that by 2050 wind energy will represent 14% of the world’s total primary energy supply [@dnv:data]. The operation and maintenance costs of wind turbines can account for up to 30% of the cost of wind energy [@ewea:om]. This happens because while generators in fossil fuel power plants operate in a constant, narrow range of speeds, wind turbines are designed to operate under a wide range of wind speeds and weather conditions. This means that stresses on components are significantly higher, which increases the number of failures and consequently the maintenance costs. 
There have been recent efforts to monitor and detect incipient faults in wind turbines by harvesting the high amounts of data already generated by their systems, which, in turn, enables the wind farm owners to employ a predictive maintenance strategy. In fact, it is expected that by 2025 new predictive maintenance strategies can reduce the cost of wind energy by as much as 25% [@irena:change]. One of the main methods for monitoring the condition of wind turbines is building normal behaviour models (NBMs) of the component temperatures. The fundamental assumption behind the use of NBMs is that a fault condition is normally characterized by a loss of efficiency, which results in increased temperatures. By using SCADA data to build a model of the temperatures of the components, one can calculate the residuals, which are the difference between the real values measured by the sensors and the values predicted by the model. These residuals can be used to detect abnormally high temperatures that may be indicative of an incipient fault. Multiple works [@article:zaher; @article:brandao1; @inproceedings:brandao2] have reported good results using NBMs to predict failures, being able to predict failures in components months in advance. In these works the authors used as features active power, nacelle temperature and lagged values of the target temperature, thus including autoregressive properties into the model, to predict the temperatures of various components. In [@meik:0] and [@article:bach] the authors obtained an important result: although the use of autoregressive features resulted in better temperature modelling performance, it also resulted in worse fault detection performance. Another important result was obtained in [@bang:1] and [@tautz:phd], which indicated that using features that are highly correlated with the target also increased the modelling performance but decreased the fault detection performance of the model. 
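The residual-based detection idea described above can be sketched in a few lines; the temperatures and the threshold below are made-up illustration values, not data from any of the cited works.

```python
# Minimal sketch of residual-based monitoring with a normal behaviour
# model (NBM). All numeric values are hypothetical.

def residuals(measured, predicted):
    """Residual = measured sensor value - NBM prediction."""
    return [m - p for m, p in zip(measured, predicted)]

def abnormal(measured, predicted, threshold):
    """Indices where the component runs abnormally hot."""
    return [i for i, r in enumerate(residuals(measured, predicted))
            if r > threshold]

measured  = [61.0, 62.5, 60.8, 68.9, 70.2]   # sensor readings, hypothetical
predicted = [60.5, 61.9, 61.0, 62.1, 62.4]   # NBM output, hypothetical
print(abnormal(measured, predicted, threshold=3.0))  # -> [3, 4]
```

A persistent positive residual, rather than any single exceedance, is what the works above interpret as a sign of an incipient fault.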
Nonetheless, these types of features are still used in many works today, such as [@article:bach; @bach:phd; @today:colone; @today:zhao; @today:tautz; @today:zhao2; @today:mazdi]. There are also conflicting opinions regarding the use of autoregressive features, with some works using them and others not. The main reason behind this is the lack of consistent case studies that evaluate the impact of different features on both the temperature modelling and fault detection performances. It should also be noted that it is not trivial that the more features the model has, the better its fault detection performance will be. This happens because the model is being trained to minimize the temperature modelling error and not the fault detection one. Having this in mind, this work will present a new feature taxonomy to distinguish different input feature types. Then, the impact of these input feature types on the temperature modelling and fault detection performances will be evaluated. Finally, evaluating the fault detection performance of different models is not as trivial as evaluating their temperature modelling performance. In fact, there is no standard in the literature regarding how to evaluate fault detection performance. This happens because of the inherent nature of the fault detection problem, in which there is rarely ground truth. Indeed, there is data of when the failure happened, but there is no information regarding when the fault state started, making it non-trivial to formulate as a classification problem. This is why the majority of the literature evaluates the fault detection results by visual inspection, observing the increase in the residuals before the failure. This is problematic, because comparisons between different models will be highly subjective. Having this in mind, this work will also present a formulation of the detection of faults as a classification problem. 
Methods ======= Data and Training ------------------ In this work a dataset composed of 15 turbines over a 6 year period will be used. This data corresponds to SCADA signals with 10 minute resolution. During the year of 2012 there was a total of 5 failures related to the Gearbox IMS Bearing. For this reason, this will be the component for which an NBM will be trained, with the objective of predicting the corresponding failures. The models will be trained with data from the beginning of 2007 to the end of 2011 and tested on data from 2012. Periods with faults will be removed from the training data so the model does not learn abnormal behaviour. The models will be implemented with gradient boosted trees, which work by iteratively combining weak decision trees into a strong ensemble learner. In terms of implementation, LightGBM [@lightgbm] will be used due to its high computational performance. In terms of optimization, the year of 2011 will be used as a validation set when choosing the number of trees for each model by early stopping. Note that no exhaustive hyperparameter optimization was performed, so all models will use the same hyperparameters besides the number of trees. Feature Taxonomy ---------------- In the present work we hypothesize that what causes a decrease in fault detection performance is not using input features highly correlated with the target, but using those whose sensors are physically close to the target sensor. If there is an increase in the temperature of a faulty component, the physically close components will also get hotter due to heat transfer. Thus, using physically close components as features to the model may leak information regarding the fault state of the target, making it unable to detect abnormal behaviour. These ideas can be clarified by using appropriate nomenclature. Based on Econometric Causality [@econ:nom], we will distinguish features based on their causal relations with the target. 
If the target is causally dependent on the features, they are causal features. On the other hand, if the target depends on the features but the features also depend on the target, they are simultaneity features. Such causal relations are assumed based on the domain knowledge of the physical system. Based on the taxonomy previously presented, different models will be defined based on their input feature configuration. The simplest model that will be tested is the CNBM, which only uses causal features. These are determined based on domain knowledge and will be: rotor speed, active power, pitch angle, wind speed and ambient temperature. All these features characterize the operation regimes of the turbine; these are causal features because the gearbox IMS bearing temperature depends on their values, but their values are not dependent on it. For example, variations in the ambient temperature influence the gearbox IMS bearing temperature, but the influence of the latter on the ambient temperature can be disregarded. On the other hand, simultaneity features will be chosen based on Pearson Correlation, which is a standard first approach for regression problems. The feature most highly correlated with the gearbox IMS bearing temperature is the gearbox HSS bearing temperature, which is a simultaneity feature because there is heat transfer between the two sensors, thus meaning that their values are mutually causally dependent. Having this in mind, the SNBM will use all the features from the CNBM plus the gearbox HSS bearing temperature. Two more models will be tested, which correspond to the autoregressive versions of the previously described models: the ACNBM and the ASNBM. Fault Evaluation Framework -------------------------- To develop an evaluation framework for fault detection, one must first formulate it as a binary classification problem where there are two labels: fault and no-fault. 
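Anticipating the window rules defined below (alarms fired between 60 and 15 days before a failure aggregate into a single true positive, later alarms are disregarded, and all remaining alarms are false positives), a minimal sketch of such an evaluation might look as follows. The day indices in the example are illustrative assumptions, not data from this study.

```python
def evaluate(alarm_days, failure_days, early=60, late=15):
    """Precision/recall under window-based alarm aggregation."""
    detected, fp = set(), 0
    for a in alarm_days:
        if any(f - early <= a <= f - late for f in failure_days):
            # every alarm inside a prediction window marks that failure
            # as detected; many alarms still count as one true positive
            detected.update(f for f in failure_days
                            if f - early <= a <= f - late)
        elif any(f - late < a <= f for f in failure_days):
            pass  # fault state present but too late to be useful: ignored
        else:
            fp += 1  # alarm with no fault state: false positive
    tp = len(detected)
    fn = len(failure_days) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical example: failures on days 100 and 300.
print(evaluate([50, 55, 60, 120, 292], [100, 300]))  # -> (0.5, 0.5)
```

In the example, the three alarms on days 50–60 collapse into one detection of the day-100 failure, day 120 is a false positive, and day 292 is discarded as a late alarm.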
Since there is no information regarding the fault state of the component, only the date of failure, it was defined with the wind farm owners that for the failures studied in this work it can be assumed that a fault state would be present at most 60 days before the failure. It was also defined that for the alarms to be useful they should be triggered at least 15 days before the failure. This means that to be considered a true positive, an alarm must be triggered between 60 and 15 days before the failure. Figure \[fig:evalfram\] presents a schematic example of the previously described problem formulation. Taking this example, it is important to note that the number of alarms triggered in the prediction window is not relevant: they are all aggregated as 1 true positive. The main reason for this is that if the aggregation is not done, then 4 alarms for the same failure would count as much as 4 detected failures with 1 alarm each. This clearly is not what is intended of the framework, since 1 alarm should be enough to motivate an inspection, and detecting 4 failures with 1 alarm each outweighs detecting 1 failure with 4 alarms. Finally, it is also important to note that alarms triggered less than 15 days before the failure are not considered false positives, since there is indeed a fault state; it simply is not relevant, so such alarms are disregarded. Results ======= In terms of temperature modelling, the models were evaluated on periods of turbines that are known to be healthy. The results, presented in Table \[resulstnbm\], indicate that the use of simultaneity features indeed improves the modelling performance, since the SNBM obtains better results than the CNBM. The use of autoregressive features also improves the modelling performance, since the ACNBM and ASNBM obtain better results than their non-autoregressive counterparts. 
These results make sense, since there are certain regimes of the turbine that are difficult to model without simultaneity or autoregressive features, such as the turning off of the turbine, as noted in [@bach:phd].

  Model    Train MAE   Train RMSE   Test MAE   Test RMSE
  ------- ----------- ------------ ---------- -----------
  CNBM       1.48        2.14         1.80       2.62
  SNBM       0.87        1.26         1.01       1.41
  ACNBM      1.03        1.57         1.14       1.67
  ASNBM      0.83        1.22         0.96       1.38

  : Regression error metrics for the training and test sets of each model.[]{data-label="resulstnbm"}

In terms of fault detection, a baseline was defined that consists of setting different thresholds on the distribution of the target temperature and obtaining the corresponding precision and recall. For the models, different thresholds were likewise applied to the residuals to obtain the different values of precision and recall. The results are presented in Figure \[fig:results\]. As can be seen, the CNBM, which obtained the worst modelling performance, obtains the best fault detection performance. Also, note that the models with the simultaneity feature are significantly worse than the baseline. Conclusions =========== An evaluation framework to formulate fault detection as a classification problem was presented. This hopes to contribute to the development of a standard approach for fault detection performance evaluation. Furthermore, a taxonomy regarding the causal relations of the different input feature types was presented, which hopes to make the discussion on how different features affect the performance of models clearer. Finally, it was demonstrated that although autoregressive and simultaneity features increase the modelling performance, they decrease the fault detection capabilities of the model. This is an important contribution since the majority of works today still use these types of features.
--- abstract: 'Matter exhibits phases and their transitions. These transitions are classified as first-order phase transitions (FOPTs) and continuous ones. While the latter has a well-established theory of the renormalization group, the former is only qualitatively accounted for by classical theories of nucleation, since their predictions often disagree with experiments by orders of magnitude. A theory to integrate FOPTs into the framework of the renormalization-group theory has been proposed but seems to contradict extant wisdom. Here we show first that classical nucleation and growth theories alone cannot explain the FOPTs of the paradigmatic two-dimensional Ising model driven by linearly varying an externally applied field. Then we offer compelling evidence that the transitions agree well with the renormalization-group theory when logarithmic corrections are properly considered. This unifies the theories for both classes of transitions and FOPTs can be studied using universality and scaling similar to their continuous counterpart.' author: - Fan Zhong title: 'Compelling evidence for the theory of dynamic scaling in first-order phase transitions' --- Matter as a many-body system exists in various phases and/or their coexistence, and its diversity comes from phase changes. It thus exhibits just phases and their transitions. These transitions are classified as first-order phase transitions (FOPTs) and continuous ones. Although the phases can be studied within a well-established framework and the continuous phase transitions have a well-established theory of the renormalization group (RG) that has predicted precise results in good agreement with experiments, FOPTs have a different status in statistical physics. They proceed through either nucleation and growth or spinodal decomposition [@Gunton83; @Bray; @Binder2]. 
Although classical theories of nucleation [@Becker; @Becker1; @Becker2; @Zeldovich; @books; @books1; @books2; @Oxtoby92; @Oxtoby921; @Oxtoby922; @Oxtoby923] and growth [@Avrami; @Avrami1; @Avrami2] correctly account for the qualitative features of a transition, even an agreement in the nucleation rate of just several orders of magnitude between theoretical predictions and experimental and numerical results is considered a feat [@Oxtoby92; @Oxtoby921; @Oxtoby922; @Oxtoby923; @Filion; @Filion1; @Filion2]. A lot of improvements have thus been proposed and tested on the two-dimensional (2D) Ising model, whose exact solution is available. One theory of nucleation, called FT hereafter, considers field theoretic corrections to the classical theory [@Langer67; @Gunther]. Its field dependence was quantitatively verified by Monte Carlo simulations for a constant applied magnetic field $H$ that points oppositely to the equilibrium magnetization $M_{\rm eq}$ at a temperature $T$ below the critical temperature $T_c$ [@Rikvold94]. By employing the results of such relaxation processes, FT was also shown to accurately reproduce numerical results of hysteresis loop areas in a single droplet (SD) regime, in which only a single droplet nucleates and grows quickly throughout the system [@Sides98]. So it was in a multidroplet (MD) regime, where many droplets nucleate and grow, even in the case of a sinusoidally varying $H$, by using Avrami’s growth law [@Avrami; @Ramos] and an adiabatic approximation [@Sides99]. In this regime, an adjustable parameter was needed to match the area of just one frequency but then yielded good results for the others [@Sides99]. Another theory, referred to as BD below, adds appropriate corrections to the droplet free energy of Becker and Döring’s nucleation theory [@Becker2]. Such a theory was found to accurately predict nucleation rates for the 2D Ising model without adjustable parameters [@Ryu; @Ryu1]. 
However, it is well known that classical nucleation theories are not applicable to spinodal decomposition, in which the critical droplet for nucleation is of the lattice size and thus no nucleation is needed [@Gunton83]. Although sharply defined spinodals that divide the two regimes of the apparently different dynamic mechanisms do not exist for systems with short-range interactions, contrary to the mean-field case with long-range interactions [@Gunton83; @Bray; @Binder2], it is generally believed that there exists a crossover region between them, at least at the early stage of an FOPT, for systems with short-range interactions [@Gunton83; @Bray; @Binder2]. One may then characterize this crossover by fluctuation-shifted mean-field spinodals and expand near such instability points below $T_c$ of a usual $\phi^4$ theory that describes the critical behavior of the Ising model. This results in a $\phi^3$ theory for the FOPT due to the lack of up–down symmetry in the expansion [@Zhongl05; @zhong16]. An RG theory for the FOPT can then be set up in parallel to that for critical phenomena, giving rise to universality and dynamic scaling characterized by “instability” exponents corresponding to the critical ones. The primary qualitative difference is that the nontrivial fixed points of such a theory are imaginary in value and are thus usually considered to be unphysical, though the instability exponents are real. Yet, it was later shown that, counter-intuitively, imaginariness is physical in order for the $\phi^{3}$ theory to be mathematically convergent, since at the instability points the unstable degrees of freedom of the system flow to the fixed points upon coarse graining [@Zhonge12]. Moreover, the other degrees of freedom, which need finite free-energy costs for nucleation, are coarse-grained away with the costs and are thus irrelevant to the transition [@Zhonge12]. This indicates that nucleation is irrelevant to the scaling. 
Although no clear evidence of an overall power-law relationship was found for the magnetic hysteresis in a sinusoidally oscillating field in two dimensions [@Thomas; @Sides98; @Sides99], recently, with proper logarithmic corrections, a dynamic scaling near a temperature other than the equilibrium transition point $T_0$ was found for the cooling FOPTs in the 2D Potts model [@Pelissetto16]. This result shows that spinodal-like dynamic scaling does exist for FOPTs in systems with short-range interactions if logarithmic corrections are properly considered. However, in that case only one hysteresis exponent found numerically is consistent with a similar theory [@Liang16]. Here we first compare results arising from both FT [@Sides98; @Sides99] and BD [@Ryu; @Ryu1] with numerical simulations of the 2D Ising model. We see that both theories agree quite well generally with the numerical results. However, the slight but systematic deviations for different sweeping rates of the external driving indicate that the theories alone cannot explain such a driven transition. Then we find good agreement with the RG theory of FOPTs, including instability exponents and even scaling forms as well as the existence of finite instability points for two different $T$ below $T_c$, after accounting for additional logarithmic corrections. This offers compelling evidence for the theory, and thus one can study the universality and scaling of FOPTs similar to their continuous counterpart. #### Finite-time scaling {#finite-time-scaling .unnumbered} Crucial in our analysis is the theory of finite-time scaling (FTS) [@Gong; @Gong1]. We drive the FOPT by linearly rather than sinusoidally varying $H$. This linear driving is a direct implementation of FTS [@Gong; @Gong1], whose essence is a constant finite time scale associated with the given sweeping rate $R$ of the field. This single externally imposed time scale can thus effectively probe the transition when it is of the order of the nucleation time. 
In contrast, the sinusoidal driving has two controlling parameters, the field amplitude $H_0$ and the frequency $\omega$, and thus complicates and conceals the essence of the process [@Feng]. In particular, at a fixed $H_0$, for $\omega\rightarrow 0$, the hysteresis loop area is governed by $H_0\omega$, which is equivalent to $R$, and increases with $\omega$; while for $\omega\rightarrow\infty$, the area is determined by $H_0^2/\omega$ in mean field and vanishes [@Rao]. At least these two mechanisms compete and produce an area maximum at some $\omega$ [@Rao; @Char; @Sides98; @Sides99]. In addition, for high $\omega$, the hysteresis loops are rounded and may even not close, so that their areas are not well defined [@Sides98]. This shortcoming does not contaminate the linear driving [@Zhonge; @Zhonge1]. #### Deficiency of nucleation theories for driving {#deficiency-of-nucleation-theories-for-driving .unnumbered} In FT [@Sides98; @Sides99], if a positive constant $H$ is applied against $-M_{\rm eq}$, the field-theoretically corrected nucleation rate $I(T,H)$ per unit time and volume is given by [@Langer67; @Gunther] $$I=B(T)H^Ke^{-F_c/k_{\rm B}T}= B(T)H^Ke^{-\Xi/H}\label{ith}$$ with $\Xi=\Omega_2\sigma_0^2/2M_{\rm eq}k_{\rm B}T$ (see Supplemental material for details), where $F_c$ is the free-energy cost for the critical nucleus, $B(T)$ is a parameter, $K=3$ for the 2D kinetic Ising model [@Langer67; @Gunther; @Rikvold94; @Harris84], $\Omega_d(T)$ is a shape factor in a $d$-dimensional space, $\sigma_0$ is the surface tension along a primitive lattice vector, and $k_{\rm B}$ is Boltzmann’s constant. 
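As a quick numerical illustration of Eq. (\[ith\]), the sketch below evaluates $I=BH^{K}e^{-\Xi/H}$ for a few fields; the values of $B$ and $\Xi$ are arbitrary placeholders, not the fitted parameters discussed later.

```python
import math

def nucleation_rate(H, B=1.0, K=3, Xi=1.0):
    """Field-theoretic nucleation rate I = B * H**K * exp(-Xi / H).
    B and Xi here are illustrative placeholders."""
    return B * H ** K * math.exp(-Xi / H)

# The rate is exponentially suppressed at weak fields and
# grows monotonically with H:
for H in (0.05, 0.1, 0.2, 0.4):
    print(f"H = {H:4.2f}  I = {nucleation_rate(H):.3e}")
```

The steep, essentially singular suppression at small $H$ is what makes the transition field so sensitive to the driving rate in what follows.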
In the MD regime, Avrami’s growth law [@Avrami] gives the magnetization $M$ at time $t$ as [@Avrami; @Sides99; @Ramos] $$M(t)=1-2\exp\left\{-\Omega_d\int_0^tI\left[\int_{t_n}^tv(t')dt'\right]^ddt_n\right\},\label{mt}$$ where $v(t)$ is the interface velocity of a growing droplet. $v\approx g H^\theta$ with $\theta=1$ and a constant proportionality $g$ in the Lifshitz-Allen-Cahn approximation [@Lifshitz; @Lifshitz1; @Gunton83]. For a time-dependent field $H(t)=Rt$, by assuming an adiabatic approximation in which the constant field is simply replaced with its time dependent one [@Sides99], Eqs. (\[ith\]) and (\[mt\]) then result in $\Gamma(-4,x)/x^4-\Gamma(-6,x)/x^2+ \Gamma(-8,x)=4R^3\ln2/[\Omega_2g^2B(T)\Xi^8]$ with $x\equiv\Xi/H_c$ in two dimensions, where the coercivity $H_c$ is the field at $M=0$ and $\Gamma$ is the incomplete gamma function. An identical equation has been derived for the sinusoidal driving in the low frequency approximation [@Sides99] in which $R=H_0\omega\equiv2\pi H_0/[\tau(H_0,T)R_0]$ with $\tau(H_0,T)$ being the average lifetime of the metastable state at $H_0$ and $T$ [@Sides99]. In the SD regime [@Rikvold94], by neglecting the growth time for a supercritical nucleus to occupy half the system volume $L^d$ compared with the nucleation time, the probability for the system to make the transition by time $t$ is [@Sides98] $$P(t)=1-\exp\left[-L^d\int_0^tI(T,H)dt\right].\label{pt}$$ Accordingly, $H_c$ is approximately given by the time $t_c$ at which $P(t_c)=1/2$. Using again the adiabatic approximation for $I$, one obtains in this regime in two dimensions [@Sides98] $\Gamma(-4,x)/x^4 =R\ln2/[B(T)L^2\Xi^4]\equiv CR$. In BD, on the basis of the Becker-Döring theory of nucleation [@Becker2; @Ryu; @Ryu1], the nucleation rate can also be cast in the form of Eq. (\[ith\]) but with a complicated $B(T,H)$ that is $H$ dependent (see Supplemental material). $H_c$ in the MD and SD regimes can then be found similar to FT. 
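To make the preceding recipe concrete, the sketch below solves the SD-regime condition $\Gamma(-4,x)/x^4=CR$ numerically for $x=\Xi/H_c$, evaluating the upper incomplete gamma function (whose first argument is negative here) by direct quadrature. The constants $C$, $R$, and $\Xi$ are made-up illustration values, not the fitted ones of this work.

```python
import math

def upper_gamma(a, x, cutoff=30.0, steps=5000):
    """Upper incomplete gamma Gamma(a, x) = int_x^inf t**(a-1) e**(-t) dt,
    valid for negative a, via a simple midpoint rule on [x, x+cutoff]."""
    h = cutoff / steps
    return sum((x + (i + 0.5) * h) ** (a - 1.0)
               * math.exp(-(x + (i + 0.5) * h)) * h
               for i in range(steps))

def coercivity(C, R, Xi, lo=1.0, hi=50.0):
    """Bisect Gamma(-4, x)/x**4 = C*R for x (the lhs decreases with x),
    then return H_c = Xi / x."""
    target = C * R
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if upper_gamma(-4.0, mid) / mid ** 4 > target:
            lo = mid
        else:
            hi = mid
    return Xi / (0.5 * (lo + hi))

# A faster sweep (larger R) drives the transition at a larger field:
print(coercivity(C=1e-10, R=1.0, Xi=1.0))
print(coercivity(C=1e-10, R=10.0, Xi=1.0))
```

The same bisection scheme applies to the longer MD-regime equation; only the left-hand side changes.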
An asymptotic form $H_c\sim[-\ln (CR)]^{-1}$ can be found by expanding $\Gamma(a,x)$ for large $x$ in the SD regime [@Sides98; @Sides99]. This was argued to be the leading behavior for small $R$ [@Thomas]. However, it has been shown that such a behavior, if it exists, could only be detected at extremely low $R$ [@Sides98; @Sides99], as seen from the curves marked asymptotic logarithm in Fig. \[hsr\](a). We shall thus not pursue it. ![\[hsr\](Color online) (a) $H_c$ versus scaled sweep rate $R_0$. Linear and sin indicate the data obtained numerically from the 2D Ising model using a linearly and a sinusoidally varying external field, respectively. Note that the “error bars" give the standard deviations of the distributions of the transition involved [@Sides99]. The three curves around SD are theoretical results for the single-droplet regime \[one BD and two FT curves with $B(T)=0.02515$ for the upper and $B(T)=69.73$ for the lower\], and the two lower curves are results of the asymptotic logarithmic approximation \[the results for the larger $B(T)$ are far smaller and absent\]. The horizontal lines with arrows indicate the dynamic spinodal (DS) and the mean-field spinodal (MFS) [@Tomita; @Rikvold94]. (b) Differences in $H_c$. BD-FT denotes the differences between the two theories, while the others are the differences to the linear driving. $256$ labels the results for the $256^2$ lattices. (c) and (d) Finite-time effects of $\kappa$ and $H_{c0}$, respectively. Each curve is obtained by successively omitting the datum with the smallest $R_0$ and plotting the results at the remaining smallest $R_0$. Different curves start with different largest $R_0$. The widths of the distributions have not been included in the fits, since their inclusion only slightly changes the results at large $R_0$ for large ranges. For clarity, we plot only every other curve for the theories. 
Lines connecting the symbols are only a guide to the eye.](nucl1.eps){width="1.0\linewidth"} Figure \[hsr\] shows the simulation results (see Supplemental material for the detailed method) along with theoretical ones obtained by solving numerically the relevant equations and their BD counterparts. Using the values of $H_c$ at $R_0=200$ in the linear driving, we find $B(T)=0.02515$, which is close to the $0.02048$ found in Ref. [@Sides99] but produces better results. As seen in Fig. \[hsr\](a), the predictions of FT are excellent in the MD regime and even beyond, while in the SD regime they are poor. To match the lowest rate, we find $B(T)=69.73$, larger by more than a factor of two thousand. On the other hand, BD yields good results, even remarkably so in the SD regime without any adjustable parameters, though they are slightly smaller, as seen in Fig. \[hsr\](b), and the $H$ range is far larger than the $0.01$ to $0.13$ studied in Refs. [@Ryu; @Ryu1] for a constant field. Even though Fig. \[hsr\](a) appears to demonstrate that both FT and BD are quite good generally, comparing with the other curves in Fig. \[hsr\](b), one sees that both theories exhibit systematic deviations from the numerical results. This can be clearly seen from Figs. \[hsr\](c) and (d), where we show the results of systematic fits to the simple power law [@Zhonge; @Zhonge1], $H_c=H_{c0}+aR_0^{-\kappa}$, with constants $H_{c0}$, $a$, and $\kappa$. For the theories, both $\kappa$ and $H_{c0}$ change continuously with the range of $R_0$ that is used to find them, even if we change $\theta$ and $K$ to give better agreement with the numerical results, conforming to the expectation that the results described by such theories exhibit no scaling [@Sides98; @Sides99]. However, the simulation results are qualitatively distinct. If we include the theoretical data from the SD regime in the fits, we see a similar upturn near $R_0=10$ and a descent at larger $R_0$. 
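The systematic power-law fits $H_c=H_{c0}+aR_0^{-\kappa}$ behind Figs. \[hsr\](c) and (d) can be sketched as follows: for each trial offset $H_{c0}$, a straight line is fitted to $\ln(H_c-H_{c0})$ versus $\ln R_0$, and the offset with the smallest residual is kept. The synthetic data below are generated for illustration only, not taken from the simulations.

```python
import math

def linfit(xs, ys):
    """Ordinary least squares y = a + b*x; returns (a, b, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def fit_power_law(R0, Hc, grid=2000):
    """Fit Hc = Hc0 + amp * R0**(-kappa) by scanning the offset Hc0."""
    lx = [math.log(r) for r in R0]
    best = None
    for i in range(grid):
        hc0 = min(Hc) * i / grid        # trial offsets below min(Hc)
        ly = [math.log(h - hc0) for h in Hc]
        a, b, sse = linfit(lx, ly)
        if best is None or sse < best[0]:
            best = (sse, hc0, math.exp(a), -b)
    return best[1], best[2], best[3]    # Hc0, amp, kappa

# Synthetic data with Hc0 = 0.10, amp = 0.30, kappa = 0.50:
R0 = [0.5, 1, 2, 5, 10, 20, 50, 100]
Hc = [0.10 + 0.30 * r ** -0.5 for r in R0]
hc0, amp, kappa = fit_power_law(R0, Hc)
print(hc0, amp, kappa)
```

Repeating such a fit while successively dropping the smallest-$R_0$ datum is what produces the curves in Figs. \[hsr\](c) and (d).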
Such an upturn would indicate that this feature of the simulation results is related to a crossover from the MD to the SD regimes. However, deviations from the theoretical upturn are large (see Supplemental Material for details). If we neglect in Figs. \[hsr\](c) and (d) the two rightmost data points, we see monotonic variations roughly up to the 12th curve (light cyan). This implies that the theories might be valid within the range from $R_0=0.5$ to $100$ or so, albeit not from the mean-field spinodal (MFS), above which spinodal decomposition occurs, to the dynamic spinodal (DS), which separates the regimes of MD and SD [@Tomita; @Rikvold94]. However, Fig. \[hsr\](d) shows clearly that there still exists a substantial discrepancy in $H_{c0}$ between the theories and the numerical results even in the reduced range, though $\kappa$ may agree. Note that this large gap cannot be removed by adjusting parameters like $B(T)$, because a larger $H_{c0}$ leads to a larger $\kappa$, and thus the gap is transferred to $\kappa$. Moreover, such possible adjustments have only a negligible effect, since the differences in $H_c$ between the theories and the numerical results are small. #### Evidence for the RG theory {#evidence-for-the-rg-theory .unnumbered} We next show that the $\phi^3$ theory can explain the results. Within the theory, scaling exists, similar to critical phenomena. For example, the scaling form for $M$ is [@Zhongl05; @zhong16], $M\ln^{m}\!t=M_s+R^{\beta/{r\nu}}(-\ln R)^{m_1}f[(H\ln^{n}\!t-H_{s})R^{-\beta\delta/r\nu}(-\ln R)^{n_{1}}]$, where $\beta$, $\delta$, $\nu$, and $r=z+\beta\delta/\nu$ are instability exponents for $M$, $H$, the correlation, and $R$, respectively, with $z$ being the dynamic exponent, each corresponding to its critical counterpart [@Zhongl05; @zhong16], and $f$ is a scaling function. When $n=m=0$, $H_s$ and $M_s$ simply constitute the instability point around which the theory is expanded and are thus finite, in sharp contrast with critical phenomena.
In the presence of the special logarithmic corrections in $t$, the point appears effectively at $H_s\ln^{-n}\!t$ and $M_s\ln^{-m}\!t$, which are scale dependent, consistent with previous studies [@Binder78; @Kawasaki; @Kaski]. The $\ln^n\!t$ term with $n=d/(d-1)$ was argued to arise from the interplay between the exponential time in tunneling between the two phases and droplet formation in the low-$T$ phase in the Potts model [@Pelissetto16]. In that case, the field is replaced by $T-T_0$. The curves of normalized energies versus $(T-T_0)\ln^2t$ for various cooling rates cross at a finite value, which was suggested to show a dynamic transition with a spinodal-like singularity [@Pelissetto16]. Figure \[rgt\](a) shows that this crossing does appear for the Ising model studied here at $T=0.8T_c$. However, it is absent at $T\approx0.6T_c$. This indicates that the mechanism cannot be the dominant one in general, as varying $T$ rather than $H$ should not change the mechanism. We thus regard $n$ as an adjustable parameter and introduce generally the other exponents for the logarithmic corrections. ![image](rgt.eps){width="0.8\linewidth"} Our task is to show that the scaling form can indeed account for the data. This demands that there exist a single point, ($H_s,M_s$), such that at the particular $M_s$ $$\label{hscaling} H\ln^{n}\!t=H_{s}+a_1R^{\beta\delta/(r\nu)}(-\ln R)^{-n_{1}},$$ while at the corresponding $H_s$, $$\label{mscaling} M\ln^{m}\!t=M_{s}+f(0)R^{\beta/(r\nu)}(-\ln R)^{m_{1}},$$ self-consistently, where $a_1$ is a constant satisfying $f(a_1)=0$. To reduce the number of parameters to be fitted and to improve precision, we choose the four values of $n$ and $m$ as input. We find that this condition highly restricts their choice. For example, if all $n$ and $m$ are set to zero, the condition cannot be satisfied. Nor can it be by the apparent plateau in Fig. \[rgt\](o). In addition, since we have not considered sub-leading contributions and corrections to scaling, Eqs.
(\[hscaling\]) and (\[mscaling\]) are not expected to hold over a large range of $R$. Nevertheless, we require that the exponents obtained should not depend on $R$ within a certain range. Figures \[rgt\](d) to (o) show the results. Except for (f) and (i), all other panels show that the fitted results exhibit jumps from large to small $R$ values. It is remarkable that when the self-consistent $H_s$ and $M_s$ are reached, the fitted results minimize their variations with $R$ and approach each other over some $R$ ranges. For example, at other $M_s$, the three lowest curves in Figs. \[rgt\](k) and (n) tilt and separate from each other. For $T\approx0.6T_c$, $n=m=-1/3$ is not special. They can lie in the range between $-0.2$ and $-0.45$, with $\beta\delta/r\nu$ and $\beta/r\nu$ varying from $0.589$ to $0.635$ and from $-0.077$ to $-0.078$, respectively. The final fitted results are employed to collapse $M$ and its fluctuation $\langle M^2\rangle-\langle M\rangle^2$. The latter is rescaled simply by $R^{(\beta\delta-\beta)/(r\nu)}$ rather than following the susceptibility $\partial M/\partial H$, though the exponents for the two functions are identical. This arises from the violation of the fluctuation-dissipation theorem under the nonequilibrium driving [@Feng]. The collapses displayed in Figs. \[rgt\](b) and (c) are quite good, noting that only the leading behavior is considered, thus confirming the results. Note, however, that data collapses are sometimes deceptive. We show in the Supplemental Material an example in which the collapse appears perfect but is unreasonable. Besides the existence of the single finite $H_s$ and $M_s$, the most striking result is that the exponents estimated from the data at both temperatures, $\beta\delta/r\nu\approx0.61(3)$ and $\beta/r\nu\approx-0.082(6)$, agree remarkably well with the three- and two-loop results of $0.575$ and $-0.0905$, respectively, especially the negative value of $\beta$ in two dimensions [@zhong16].
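The rescaling behind such collapses can be sketched schematically as follows; all exponents, the point $(H_s,M_s)$, and the scaling function $f=\tanh$ are assumed values for illustration only, with the $\ln t$ corrections switched off ($n=m=0$).

```python
# Schematic illustration of the data collapse: curves M(H) computed at
# different rates R from the scaling form fall onto a single scaling
# function once rescaled. All numbers below are assumptions, not the
# fitted exponents of the text.
import math

BD, B = 0.6, -0.08      # assumed beta*delta/(r*nu) and beta/(r*nu)
N1, M1 = 0.1, 0.2       # assumed logarithmic-correction exponents
HS, MS = 0.2, 0.5       # assumed instability point (H_s, M_s)

def M_of_H(H, R):
    """Magnetization from the scaling form for sweep rate R (R < 1)."""
    u = (H - HS) * R ** (-BD) * (-math.log(R)) ** N1
    return MS + R ** B * (-math.log(R)) ** M1 * math.tanh(u)

def rescale(H, M, R):
    """Map a raw (H, M) point to the scaling variables (u, f)."""
    u = (H - HS) * R ** (-BD) * (-math.log(R)) ** N1
    f = (M - MS) * R ** (-B) * (-math.log(R)) ** (-M1)
    return u, f

# the same scaling variable u at two different rates gives the same f(u)
collapsed = []
for R in (0.01, 0.05):
    for u in (-1.0, 0.0, 1.0):
        H = HS + u * R ** BD * (-math.log(R)) ** (-N1)
        collapsed.append(rescale(H, M_of_H(H, R), R))
```

By construction the rescaled points trace out $f(u)$ independently of $R$; with real data, residual spread of the rescaled curves measures the quality of the collapse.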
Moreover, although why the data at the two temperatures take on $n$ and $m$ values of opposite signs, and the consequences thereof, have yet to be explored (a possible reason being the proximity of the higher $T$ to $T_c$), the scaling functions appear to be universal up to a proper overall displacement and scaling, as seen in Figs. \[rgt\](b) and (c). These results therefore provide compelling evidence for the RG theory. I thank Professor Per Arne Rikvold for his information and Shuai Yin, Baoquan Feng, Yantao Li, Guangyao Li, and Ning Liang for useful discussions. This work was supported by the National Natural Science Foundation of China (Grant Nos. 10625420 and 11575297). [99]{} J. D. Gunton, M. San Miguel, and P. S. Sahni, in [*Phase Transitions and Critical Phenomena*]{}, eds. C. Domb and J. L. Lebowitz, Vol. 8, 267 (Academic, London, 1983). A. J. Bray, Adv. Phys. [**43**]{}, 357 (1994). K. Binder and P. Fratzl, in [*Phase Transformations in Materials*]{}, ed. G. Kostorz, 409 (Wiley, Weinheim, 2001). M. Volmer and A. Weber, Z. Phys. Chem. (Leipzig) [**119**]{}, 277 (1926). L. Farkas, [*ibid.*]{} [**125**]{}, 236 (1927). R. Becker and W. Döring, Ann. Phys. (Leipzig) [**24**]{}, 719 (1935). Ya. B. Zeldovich, Acta Physicochim. USSR [**18**]{}, 1 (1943). 251 (1990). F. F. Abraham, [*Homogeneous Nucleation Theory*]{} (Academic, New York, 1974). P. Debenedetti, [*Metastable Liquids*]{} (Princeton University Press, Princeton, NJ, 1996). D. Kashchiev, [*Nucleation: Basic Theory with Applications*]{} (Butterworth-Heinemann, Oxford, 2000). D. W. Oxtoby, Acc. Chem. Res. [**31**]{}, 91 (1998). J. D. Gunton, J. Stat. Phys. [**95**]{}, 903 (1999). S. Auer and D. Frenkel, Annu. Rev. Phys. Chem. [**55**]{}, 333 (2004). R. P. Sear, J. Phys.: Condens. Matter [**19**]{}, 033101 (2007). A. N. Kolmogorov, Bull. Acad. Sci. USSR, Class Sci., Math. Nat. [**3**]{}, 355 (1937). W. A. Johnson and P. A. Mehl, Trans. Am. Inst. Min. Metall. Eng. [**135**]{}, 416 (1939). M. Avrami, J. Chem. Phys. [**7**]{}, 1103 (1939). S.
Auer and D. Frenkel, Nature (London) [**409**]{}, 1020 (2001). T. Kawasaki and H. Tanaka, Proc. Nat. Acad. Sci. [**107**]{}, 14036 (2010). L. Filion, R. Ni, D. Frenkel, and M. Dijkstra, J. Chem. Phys. [**134**]{}, 134901 (2011). J. S. Langer, Ann. Phys. (N. Y.) [**41**]{}, 108 (1967). N. J. Günther, D. J. Wallace, and D. A. Nicole, J. Phys. A [**13**]{}, 1755 (1980). P. A. Rikvold, H. Tomita, S. Miyashita, and S. W. Sides, Phys. Rev. E [**49**]{}, 5080 (1994). S. W. Sides, P. A. Rikvold, and M. A. Novotny, Phys. Rev. E [**57**]{}, 6512 (1998). R. A. Ramos, P. A. Rikvold, and M. A. Novotny, Phys. Rev. B [**59**]{}, 9053 (1999). S. W. Sides, P. A. Rikvold, and M. A. Novotny, Phys. Rev. E [**59**]{}, 2710 (1999). S. Ryu and W. Cai, Phys. Rev. E [**81**]{}, 030601(R) (2010). S. Ryu and W. Cai, Phys. Rev. E [**82**]{}, 011603 (2010). F. Zhong and Q. Z. Chen, Phys. Rev. Lett. [**95**]{}, 175701 (2005). F. Zhong, Front. Phys. [**12**]{}, 126402 (2017) \[arXiv:1205.1400 (2012)\]. F. Zhong, Phys. Rev. E [**86**]{}, 022104 (2012). P. B. Thomas and D. Dhar, J. Phys. A [**26**]{}, 3973 (1993). A. Pelissetto and E. Vicari, Phys. Rev. Lett. [**118**]{}, 030602 (2017). N. Liang and F. Zhong, Phys. Rev. E [**95**]{}, 032124 (2017). S. Gong, F. Zhong, X. Huang, and S. Fan, New J. Phys. [**12**]{}, 043036 (2010). F. Zhong, in [*Applications of Monte Carlo Method in Science and Engineering*]{}, ed. S. Mordechai, 469 (Intech, 2011). Available at http://www.intechopen.com/books/applications-of-monte-carlo-method-in-science-and-engineering/finite-time-scaling-and-its-applications-to-continuous-phase-transitions. B. Feng, S. Yin, and F. Zhong, Phys. Rev. B [**94**]{}, 144103 (2016). M. Rao, H. R. Krishnamurthy, and R. Pandit, Phys. Rev. B [**42**]{}, 856 (1990). For a review, see B. K. Chakrabarti and M. Acharyya, Rev. Mod. Phys. [**71**]{}, 847 (1999). F. Zhong, J. X. Zhang, and X. Liu, Phys. Rev. E [**52**]{}, 1399 (1995). F. Zhong, Phys. Rev. B [**66**]{}, 060401(R) (2002). C. K.
Harris, J. Phys. A [**17**]{}, L143 (1984). I. Lifshitz, Sov. Phys. JETP [**15**]{}, 939 (1962) \[Zh. Eksp. Teor. Fiz. [**42**]{}, 1354 (1962)\]. S. Allen and J. Cahn, Acta Metall. [**27**]{}, 1084 (1979). V. A. Shneidman, K. A. Jackson, and K. M. Beatty, J. Chem. Phys. [**111**]{}, 6932 (1999). X. Huang, S. Gong, F. Zhong, and S. Fan, Phys. Rev. E [**81**]{}, 041139 (2010). H. Tomita and S. Miyashita, Phys. Rev. B [**46**]{}, 8886 (1992). C. Billotet and K. Binder, Z. Phys. B [**32**]{}, 195 (1979). K. Kawasaki, T. Imaeda, and J. D. Gunton, in [*Perspectives in Statistical Physics*]{}, ed. H. J. Raveché, 201 (North Holland, Amsterdam, 1981). K. Kaski, K. Binder, and J. D. Gunton, Phys. Rev. B [**29**]{}, 3996 (1984).
--- abstract: 'We present a percolation theory for the pseudogap and the dependence of $T_c$ on the hole level in high-$T_c$ oxides. The doping-dependent inhomogeneous charge structure is modeled by a distribution which may represent the stripe morphology and yield a spatial distribution of local $T_c(r)$. The onset temperature of the spatially dependent superconducting gap is identified with the pseudogap temperature $T^*$. The transition to a superconducting state corresponds to the percolation threshold among regions of different $T_c$. As a paradigm we use a Hubbard Hamiltonian with a mean-field approximation to yield a doping- and temperature-dependent superconducting d-wave gap. We show here that this new approach reproduces the phase diagram, explains, and gives new insights into several experimental features of high-$T_c$ oxides.' address: 'Departamento de Física, Universidade Federal Fluminense, av. Litorânia s/n, Niterói, R.J., 24210-340, Brazil' author: - 'E. V. L. de Mello, E. S. Caixeiro and J. L. Gonzaléz' date: Received title: A Novel Theory for High Temperature Superconductors considering Inhomogeneous Charge Distributions --- [2]{} Introduction ============ High-$T_c$ oxides were discovered fifteen years ago[@BM], but many of their important properties remain poorly understood. Among these is the pseudogap phenomenon, that is, a discrete structure of the energy spectrum above $T_c$, identified by several different experiments[@TS], whose nature has not yet been clarified. This open problem has attracted much experimental and theoretical effort because it is generally believed that its solution is related to the understanding of the fundamental superconducting interaction. The evidence for such an energy gap above the superconducting phase is clearly demonstrated by tunneling[@Retal98; @Setal99] and angle-resolved photoemission spectroscopy[@Shen95; @Ding96] experiments.
In resistivity measurements its presence is seen as a deviation from the linear behavior below $T^*$[@Oda00; @Takagi92], and in the specific heat as a suppression of the linear temperature coefficient $\gamma$[@Loram97]. There is also mounting experimental evidence that the inhomogeneity of the doped holes in the $CuO_2$ planes, common to all cuprates, is directly related to the pseudogap phenomenon. In a given family, the underdoped compounds near the doping onset of superconductivity have the most inhomogeneous charge distributions and the largest $T^*$. As the doping level increases, the samples become more homogeneous while $T^*$ decreases[@Billinge00; @Buzin00; @Pan]. For overdoped compounds $T^*$ disappears or becomes equal to $T_c$. The inhomogeneities were long supposed to be important in the study of high-temperature superconductors[@Egami96], but only after the discovery of the spin-charge stripes[@Tranquada] did they become a matter of systematic study. In the spin-charge stripe scenario, regions of the plane are heavily doped (the stripes) and other regions are underdoped and fill the space between the charge-rich stripes. Recently, magnetic excitations such as the vortex-like Nernst effect have been reported above $T_c$[@Xu], and a local Meissner state, which usually appears only in the superconducting phase, has been seen as a precursor to superconductivity[@IYS]. Such inhomogeneous diamagnetic domains develop near $T^*$ and grow continuously as $T$ decreases towards $T_c$. Near $T_c$, the domains appear to percolate, according to Fig. 3 of Iguchi et al.[@IYS].
Based on all these experimental findings about the pseudogap phenomenon, local charge inhomogeneities, and a non-percolative local Meissner state between $T^*$ and $T_c$, we propose a new scenario to explain the phenomenology of high-$T_c$ superconductors: due to the doping-dependent charge inhomogeneities in a given compound with average charge density $\rho_m$, there is a distribution of local clusters with spatially dependent charge density $\rho(r)$, each with its own superconducting transition temperature $T_c(r)$. $T^*$ is the maximum of all $T_c(r)$. As the temperature falls below $T^*$, some clusters become superconducting, but they are surrounded by metallic and/or antiferromagnetic insulating domains and, consequently, the whole system is not a superconductor. The number of superconducting clusters increases as the temperature decreases, so the superconducting regions grow and, eventually, at a temperature $T_c$, they percolate and become able to hold a macroscopic dissipationless current, exactly as the Meissner-state domains shown in Fig. 3 of Iguchi et al.[@IYS]. Similar ideas were discussed by Ovchinnikov et al.[@OWK]. They were concerned mainly with the microscopic mechanism which leads to a distribution of $T_c(r)$ and its effect on the density of states. In order to show that these ideas are able to make quantitative predictions and reproduce the high-$T_c$ oxide phase diagram, we have performed calculations on a Hubbard model, and a gap equation is obtained within a mean-field approach. The Charge Distribution ======================= The consequence of the microscopic distribution of charge inhomogeneities in the $CuO_2$ planes, possibly in a striped configuration, is the existence of two phases which are spontaneously created in the $CuO_2$ planes: regions which are heavily doped or hole-rich form the stripes, and other regions, which are hole-poor, are created between the charge-rich stripes.
The exact form of these charge distributions is not known and is presently a matter of research[@Billinge00; @Buzin00; @Egami]. We have chosen a distribution capable of reproducing the experimental observations, and for this purpose we use a combination of a Poisson and a Gaussian distribution. For a given compound with an average charge density $\rho_m$, the hole distribution $P(\rho;\rho_m)$ is a function of the local hole density $\rho$ and is divided into two branches. The low-density branch represents the hole-poor or non-conducting regions and the high-density one represents the hole-rich or metallic regions. As concerns superconductivity, only the properties of the hole-rich branch are important, since the current flows through the metallic region. Such a distribution may be given by: $$\begin{aligned} P(\rho) &=& \pm (\rho-\rho_c)exp(-(\rho-\rho_c)^2/2(\sigma_{\pm})^2)/ \nonumber \\ && (\sigma^2)(2-exp(-(0.05)^2/2(\sigma_{\pm})^2)) \end{aligned}$$ The plus sign is for the hole-rich branch ($\rho_c \approx \rho_m$) for $\rho_m \le \rho$, the minus sign for the hole-poor branch ($\rho_c= 0.05$) for $\rho \le 0.05$, and $P(\rho)=0$ for $0.05\le \rho \le \rho_m$. The half-width $\sigma$ is related to the degree of inhomogeneity and decreases with the hole density in order to represent current observations[@Billinge00; @Buzin00; @Egami]. ![Model charge distribution for the inhomogeneities or stripe two-phase regions. The low-density branch is insulating (antiferromagnetic). The high-density hole-rich region starts at the compound average density $\rho_m$, and $\rho_p$, indicated by the arrows, is the density where percolation can occur.](fig1pseud.eps){width="8cm"} An example of the distribution is shown in Fig. 1. For illustration purposes, we show the results for compounds with $\rho_m=0.185$ and $\sigma_+=0.05$ and with $\rho_m=0.32$ and $\sigma_+=0.038$.
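As a numerical sketch of how the hole-rich branch is used, the script below takes a simplified, unit-normalized version of the branch, $p(\rho)\propto(\rho-\rho_m)\exp[-(\rho-\rho_m)^2/2\sigma^2]$ for $\rho\ge\rho_m$, and integrates it from $\rho_m$ up to the 59% square-lattice site-percolation fraction adopted in the next section, which locates $\rho_p$; the normalization here is an assumption for illustration, not the exact prefactor of the equation above.

```python
# Locating rho_p for the hole-rich branch (simplified, unit-normalized form).
# Normalization uses the exact integral  int_0^inf x e^{-x^2/2s^2} dx = s^2.
import math

def rho_p(rho_m, sigma, threshold=0.59, steps=200000):
    """Return the density where the cumulative weight of the hole-rich
    branch first reaches `threshold` (right-rectangle integration)."""
    h = 10.0 * sigma / steps          # step size; 10 sigma covers the tail
    cum = 0.0
    for i in range(1, steps + 1):
        x = i * h                     # x = rho - rho_m
        cum += h * x * math.exp(-x * x / (2 * sigma * sigma)) / sigma ** 2
        if cum >= threshold:
            return rho_m + x
    raise ValueError("threshold not reached")

# first example compound quoted above: rho_m = 0.185, sigma_+ = 0.05
rp = rho_p(0.185, 0.05)
```

For this branch the cumulative weight is analytically $1-\exp(-x^2/2\sigma^2)$, so the numerical $\rho_p$ can be checked against $\rho_m+\sigma\sqrt{-2\ln(1-0.59)}$.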
Above $\rho_m=0.25$ the charge distribution becomes a simple Gaussian centered at $\rho_m$ with $\sigma\le0.02$, which reflects the non-existence of the stripe phases and the observed homogeneous charge distribution for the overdoped compounds. The values of $\sigma$ for a given sample are chosen so that percolation in the hole-rich branch occurs exactly at a given density $\rho_p$. Thus $T_c(\rho_p)$ is the maximum temperature at which the system can percolate, and we identify it with $T_c(\rho_m)$. Although we used a set of parameters to compare with the experimental phase diagram of Bi2212, the main physical aspects can be modeled by other distributions with similar results. According to percolation theory, percolation occurs in a square lattice when 59% of the sites or bonds are filled[@Stauffer]. Thus, we find the density where the hole-rich branch percolates by integrating $\int P(\rho) d\rho$ from $\rho_m$ until the integral reaches the value 0.59, which defines $\rho_p$. Below $T_c(\rho_m)$ the system percolates and, consequently, is able to hold a dissipationless supercurrent. To estimate $T_c(\rho_m)$ we need to calculate $T^*$ as a function of $\rho$. The Phase Diagram ================= To develop the dynamics of the hole-type carriers in the Cu-O planes, we adopt a two-dimensional extended Hubbard Hamiltonian on a square lattice with lattice parameter $a$ $$\begin{aligned} H&=&-\sum_{\ll ij\gg \sigma}t_{ij}c_{i\sigma}^\dag c_{j\sigma}+U\sum_{i}n_{i\uparrow}n_{i\downarrow} \nonumber \\ && +\sum_{<ij>\sigma \sigma^{\prime}}V_{ij}c_{i\sigma}^\dag c_{j\sigma^{\prime}}^\dag c_{j\sigma^{\prime}}c_{i\sigma} \label{b}\end{aligned}$$ where $t_{ij}$ is the hopping integral between sites $i$ and $j$; $U$ is the on-site correlated Coulomb repulsion, and $V_{ij}$ is a phenomenological attractive interaction between nearest-neighbor sites $i$ and $j$, whose possible origin we will discuss later.
Using a BCS-type mean-field approximation to develop Eq.(\[b\]) in momentum space, one obtains the self-consistent gap equation at finite temperatures [@Mello96; @Angi; @Caixa] $$\begin{aligned} \Delta_{\bf k}=-\sum_{\bf k^{\prime}}V_{\bf kk^{\prime}}\frac{\Delta_{\bf k^{\prime}}}{2E_{\bf k^{\prime}}}\tanh\frac{E_{\bf k^{\prime}}}{2k_BT},%\label{cc}\end{aligned}$$ with $E_{\bf k}=\sqrt{\varepsilon_{\bf k}^2+\Delta_{\bf k}^2}, \label{rr}$ which contains the attractive potential $V_{\bf kk^{\prime}}$ of the extended Hubbard Hamiltonian of Eq.(\[b\]). For a d-wave order parameter, $U$ is summed out of the gap equation, and the amplitude of the attractive potential $V$ is the only unknown variable and becomes an adjustable parameter[@Caixa; @Caixa2]. $\varepsilon_{\bf k}$ is a dispersion relation. The hole density and the gap equation are solved self-consistently for a d-wave order parameter[@Caixa; @Caixa2]. In Fig. 2 we plot the temperatures of vanishing gap obtained from the gap equation, which we take as $T^*$, as a function of $\rho$. Here we use $V=-0.150$eV, which reproduces well the experimental values of $T^*\times \rho$ for the Bi2212 system[@Oda00]. The parameters are within 10% of those taken from ARPES measurements [@Randeria], with a nearest-neighbor hopping $t=0.12$eV and further hopping parameters up to the fifth neighbor. The particular form of the $T^*$ curve depends on the values of these hoppings. Notice that the density of holes differs by a factor of 2 from the values given in Ref.[@Oda00], but is in agreement with Ref.[@Angi], as it is more appropriate for the Bi2212 system[@Konsin]. ![The phase diagram taking $T^*(\rho)$ as the onset of the vanishing gap and $T_c(\rho_m)$ as the percolation threshold.
The experimental points and the symbols are taken from Ref. [@Oda00].](fig2pseud.eps){width="8cm"} Thus the $T^*$ and $T_c$ shown in Fig. 2, for a sample with average charge density $\rho_m$, are obtained in the following way: the hole-rich or metallic branch of the distribution describes the regions with hole charge densities $\rho \ge \rho_m$. These charge fluctuations yield clusters with local superconducting temperature $T_c(\rho)$; as $\rho$ varies in the sample, one may write $T_c(r)$, where $r$ represents any position inside the compound. For the metallic regions, $T_c(\rho)$ is a decreasing function of $\rho$, and the maximum gap temperature occurs for $T^*(\rho_m)\equiv T^*$. The different metallic regions in this sample have $T_c(\rho) \le T^*$. For temperatures below $T^*$, some superconducting clusters are formed, like small superconducting islands embedded in a metallic and insulating medium. Thus, as the temperature decreases, more clusters become superconducting, and eventually the superconducting regions percolate at $T_c$, that is, a superconducting current can flow for temperatures $T \le T_c$. Discussion ========== The fact that $T^*$ decreases continuously with $\rho$ in the superconducting region ($\rho_m > 0.15$), as seen in Fig. 2, is very suggestive and in agreement with early ideas regarding phonon-mediated superconducting interactions: materials whose vibrating atoms interact strongly with the electrons, and are poor metals, should become superconductors at higher temperatures than good metals, whose atoms interact weakly with electrons[@Matthias]. For cuprates, as the doping level of the samples increases, it is well known that the compounds change from very poor metals in the normal phase to very good metals with typical Fermi-liquid behavior for overdoped samples.
Since $T^* \times \rho$ is a decreasing curve for any cuprate family, and assuming that $T^*$ is the onset of the superconducting gap, such a curve may be a strong indication of a phononic superconducting interaction. There are several observations and measurements that can be well explained within the percolation approach; we will discuss here only a few examples. 1- Harris et al.[@Harris], through ARPES measurements, have reported the anomalous behavior of $\Delta_0(\rho_m)$, which decreases steadily with the doping $\rho_m$ although $T_c$ increases by a factor of 2 for their underdoped samples. In the overdoped region, since $T_c$ also decreases, the behavior is the expected conventional proportionality. It is well known that superconductors have a constant value for the ratio $2\Delta_0/k_BT_c$, being 3.75 for the usual isotropic order parameter and 4.18 for the $d_{x^2-y^2}$ solution[@Maki]. ![The zero-temperature gap for 9 samples as measured by Harris et al.[@Harris] and our calculations.](fig3pseud.eps){width="8cm"} At low temperature, since the superconducting region percolates through different regions, each with a given $\Delta_0(r)$, tunneling and ARPES experiments detect the largest gap present in the compound. Consequently, $\Delta_0(\rho_m)$ must be correlated with the onset of the vanishing gap $T^*(\rho_m)$, which is the largest superconducting temperature in the sample, and not with $T_c(\rho_m)$. As we show in Fig. 3, correlating the values plotted in Fig. 2 for $T^*(\rho_m)$ with $\Delta_0(\rho_m)$, we are able to give a reasonable fit to the data of Harris et al.[@Harris] on $Dy-BSCCO$ and to explain the different energy scales pointed out by Harris et al. and several other authors. 2- The resistivity measurement is also one of the tools to detect the pseudogap. The underdoped and optimally doped high-$T_c$ oxides have a linear behavior of the resistivity in the normal phase up to very high temperature.
However, at $T^*$ there is a deviation from the linear behavior, and the resistivity falls faster with decreasing temperature[@TS; @Oda00]. This behavior can be understood within our model through the increase in the number and size of superconducting clusters as the temperature decreases below $T^*$. Each superconducting cluster produces a short circuit which decreases the resistivity below the linear behavior between $T^*$ and $T_c$. 3- Recently, measurements of magnetic domains above $T_c$ have been interpreted as a diamagnetic precursor to the Meissner state, produced by preformed pairs in underdoped $La_{2-x}Sr_xCuO_4$ thin films[@IYS]. The existence of superconducting clusters between $T^*$ and $T_c$ easily explains the appearance of local diamagnetic or Meissner domains, and, if there is a temperature gradient in the sample, the local flux flows and produces the dynamic flux-flow state[@Xu]. 4- Another important consequence is that the pairing mechanism should be investigated by experiments performed mainly at $T^*$. Such an experiment was accomplished by Rubio Temprano et al.[@Temprano], who measured a large isotope effect associated with $T^*$ and an almost negligible isotope effect associated with $T_c$ in the slightly underdoped $HoBa_2Cu_4O_8$ compound. The results strongly support the idea that electron-phonon induced effects are present in the superconducting mechanism associated with $T^*$, consistent with the percolation approach to $T_c$. Conclusions =========== We have demonstrated that the percolation approach for an inhomogeneous charge distribution in the $CuO_2$ planes provides new physical explanations for many experiments performed on high-$T_c$ oxides and quantitative results for their phase diagram.
Contrary to some current trends, in which $T_c$ is regarded as a phase-coherence temperature and a gap phase without coherence exists between $T_c$ and $T^*$, in our approach $T_c$ is a percolation temperature for different regions which, due to the inhomogeneities, possess different local superconducting transition temperatures $T_c(r)$. Similarly, instead of having a superconducting gap $\Delta_{sc}$ and an excitation gap $\Delta$ associated with $T^*$, we have a distribution of locally dependent $\Delta_{sc}(r)$. The method described in this work can be applied to any cuprate and also yields several new implications which will be discussed elsewhere; one of the most interesting is that one can search for materials with very large $T_c$’s if better control of the local doping level is achieved. Financial support from CNPq and FAPERJ is gratefully acknowledged. JLG thanks CLAF for a CLAF/CNPq post-doctoral fellowship. J.G. Bednorz and K.A. Müller, Z. Phys. [**64**]{}, 189 (1986). T. Timusk and B. Statt, Rep. Prog. Phys. [**62**]{}, 61 (1999). C. Renner, B. Revaz, J.-Y. Genoud, K. Kadowaki, and O. Fischer, Phys. Rev. Lett. [**80**]{}, 149 (1998). M. Suzuki, T. Watanabe, A. Matsuda, Phys. Rev. Lett. [**82**]{}, 5365 (1999). Z-X. Shen and D. S. Dessau, Phys. Rep. [**253**]{}, 1 (1995). H. Ding, T. Yokoya, J.C. Campuzano, T. Takahashi, M. Randeria, M.R. Norman, T. Mochiku, K. Kadowaki, and J. Giapintzakis, Nature [**382**]{}, 51 (1996). M. Oda, N. Momono and M. Ido, Supercond. Sci. Technol. [**13**]{}, R139 (2000). H. Takagi, B. Batlogg, H.L. Kao, R.J. Cava, J.J. Krajewski, and W.F. Peck, Phys. Rev. Lett. [**69**]{}, 2975 (1992). J. Loram, K.A. Mirza, J.R. Cooper, J.L. Tallon, Physica [**C282-287**]{}, 1405 (1997). S.J.L. Billinge, et al, J. Supercond., Proceedings of the Conf. “Major Trends in Superconductivity in the New Millennium”, 2000. E.S. Bozin, G.H. Kwei, H. Takagi, and S.J.L. Billinge, Phys. Rev. Lett. [**84**]{}, 5856 (2000). S.H. Pan et al, cond-mat/0107347.
T. Egami and S.J.L. Billinge, in “Physical Properties of High-Temperature Superconductors V”, edited by D.M. Ginsberg (World Scientific, Singapore, 1996), p. 265. J.M. Tranquada, B.J. Sternlieb, J.D. Axe, Y. Nakamura, and S. Uchida, Nature (London) [**375**]{}, 561 (1995). Z.A. Xu, N.P. Ong, Y. Wang, T. Kakeshita, and S. Uchida, Nature [**406**]{}, 486 (2000). I. Iguchi, I. Yamaguchi, and A. Sugimoto, Nature [**412**]{}, 420 (2001). Yu.N. Ovchinnikov, S.A. Wolf, V.Z. Kresin, Phys. Rev. [**B63**]{}, 6452 (2001), and Physica [**C341-348**]{}, 103 (2000). T. Egami, Proc. of the New3SC International Conference, to be published in Physica C. D. Stauffer and A. Aharony, “Introduction to Percolation Theory” (Taylor & Francis, London, 1994). E.V.L. de Mello, Physica [**C259**]{}, 109 (1996). G.G.N. Angilella, R. Pucci, and F. Siringo, Phys. Rev. B [**54**]{}, 15471 (1996). E.S. Caixeiro and E.V.L. de Mello, Physica [**C353**]{}, 103 (2001). E. S. Caixeiro and E.V.L. de Mello, submitted to the J. Phys. [**CM**]{}. M.R. Norman, M. Randeria, H. Ding, J.C. Campuzano, and A.F. Bellman, Phys. Rev. [**B52**]{}, 615 (1995). P. Konsin, N. Kristoffel, and B. Sorkin, J. Phys. C.M. [**10**]{}, 6533 (1998). B.T. Matthias, “Superconductivity”, Scientific American, 92, November 1957. J.M. Harris, Z.H. Shen, P.J. White, D.S. Marshall, M.C. Schabel, J.N. Eckstein, and I. Bozovic, Phys. Rev. B [**54**]{}, R15665 (1996). H. Won and K. Maki, Phys. Rev. [**B49**]{}, 1397 (1994). D. Rubio Temprano, J. Mesot, S. Janssen, K. Conder, A. Furrer, H. Mutka, and K.A. Müller, Phys. Rev. Lett. [**84**]{}, 1990 (2000). M.T.D. Orlando, A.G. Cunha, E.V.L. de Mello, H. Belich, E. Baggio-Saitovich, A. Sin, X. Obradors, T. Burghardt, A. Eichler, Phys. Rev. [**B61**]{}, 15454 (2000). J.L. Gonzaléz, M.T.D. Orlando, E.S. Yugue, E.V.L. de Mello and E. Baggio-Saitovich, Phys. Rev. [**B63**]{}, 54516 (2001).
--- abstract: | Extremal problems involving the enumeration of graph substructures have a long history in graph theory. For example, the number of independent sets in a $d$-regular graph on $n$ vertices is at most $(2^{d+1}-1)^{n/2d}$ by the Kahn-Zhao theorem [@K01; @Z]. Relaxing the regularity constraint to a minimum degree condition, Galvin [@G] conjectured that, for $n\geq 2d$, the number of independent sets in a graph with $\delta(G)\geq d$ is at most that in $K_{d,n-d}$. In this paper, we give a lower bound on the number of independent sets in a $d$-regular graph mirroring the upper bound in the Kahn-Zhao theorem. The main result of this paper is a proof of a strengthened form of Galvin’s conjecture, covering the case $n\leq 2d$ as well. We find it convenient to address this problem from the perspective of ${\overline}{G}$. In other words, we give an upper bound on the number of complete subgraphs of a graph $G$ on $n$ vertices with $\Delta(G)\leq r$, valid for all values of $n$ and $r$. author: - | Jonathan Cutler\ \ - | A.J. Radcliffe\ \ bibliography: - 'maxcliquedegree.bib' title: The maximum number of complete subgraphs in a graph with given maximum degree --- Introduction {#sec:introduction} ============ There has been quite a bit of recent interest in a range of extremal problems involving counting the number of a given type of substructure in a graph. For instance, one may count the number of independent sets or the number of complete subgraphs[^1] of a graph. We let ${{\mathcal I}}(G)$ be the set of independent sets in the graph $G$ and ${{\mathcal K}}(G)$ be the set of cliques in $G$. We write ${i}(G)$ and ${k}(G)$ for $\abs{{{\mathcal I}}(G)}$ and $\abs{{{\mathcal K}}(G)}$, respectively. A classic example of the type of result we consider is the Kahn-Zhao theorem, proved first for bipartite graphs by Kahn [@K01] and later extended to all graphs by Zhao [@Z].
\[thm:KZ\] If $G$ is a $d$-regular graph with $n$ vertices then $${i}(G)^{\frac1n} \le {i}(K_{d,d})^{\frac1{2d}}=(2^{d+1}-1)^{\frac{1}{2d}}.$$ The Kahn-Zhao theorem is tight when $n$ is a multiple of $2d$ with $\frac{n}{2d}K_{d,d}$, i.e., $\frac{n}{2d}$ copies of $K_{d,d}$, achieving equality in the bound. Little is known about extremal examples when $2d$ does not divide $n$. One result of this paper is a corresponding theorem proving $${i}(G)^{1/n}\geq {i}(K_{d+1})^{1/(d+1)}$$ for $G$ a $d$-regular graph on $n$ vertices. Extremal enumeration problems for complete subgraphs run in parallel to those for independent sets. If $G$ is a graph with $N$ independent sets, then ${\overline}{G}$ is a graph with $N$ cliques. Any degree condition on $G$ translates into a corresponding degree condition on ${\overline}{G}$. For example, the Kahn-Zhao theorem can be rephrased as follows: if $G$ is an $r$-regular graph on $n$ vertices, then $${k}(G)^{\frac{1}n}\leq k(2K_{n-1-r})^{\frac{1}{2(n-1-r)}}.$$ Note, however, that in the intuitively natural regime where $r$ is fixed and $n$ is large, we do not expect this bound to be tight. Although regularity is a very natural condition to impose, a range of other conditions have been studied. For instance, it is a consequence of the Kruskal-Katona theorem [@K63; @K68] that among all graphs of given average degree, the lex graph[^2] has the largest number of independent sets, indeed the largest number of independent sets of any fixed size. For a derivation, see, e.g., [@CR11]. Another example is the oft-rediscovered result originally due to Zykov [@Z49] (see also [@E62; @S71; @H76; @R]) which bounds the number of cliques in graphs with bounded clique number, $\omega(G)$. If $G$ is a graph with $n$ vertices and $\omega(G)\leq \omega$, then $${k}(G)\leq {k}(T_{n,\omega}),$$ where $T_{n,\omega}$ is the Turán graph with $\omega$ parts. 
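The closed form in the Kahn-Zhao bound, ${i}(K_{d,d})=2^{d+1}-1$, reflects the fact that an independent set in $K_{d,d}$ must lie entirely on one side. As a quick sanity check (ours, not part of the paper; the helper names are invented), brute-force enumeration confirms it for small $d$:

```python
from itertools import combinations

def independent_sets(n, edges):
    """Count independent sets (including the empty set) by brute force."""
    adj = {frozenset(e) for e in edges}
    count = 0
    for k in range(n + 1):
        for sub in combinations(range(n), k):
            if all(frozenset(p) not in adj for p in combinations(sub, 2)):
                count += 1
    return count

def complete_bipartite(d):
    """Edge list of K_{d,d} on vertices 0..2d-1."""
    return [(u, d + v) for u in range(d) for v in range(d)]

for d in range(1, 5):
    assert independent_sets(2 * d, complete_bipartite(d)) == 2 ** (d + 1) - 1
```

Since disjoint unions multiply independent-set counts (the empty set is shared), $\frac{n}{2d}K_{d,d}$ attains $(2^{d+1}-1)^{n/2d}$ exactly when $2d$ divides $n$.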
Equivalently, this gives a bound on $i(G)$ for graphs with bounded independence number; the extremal graph is a union of disjoint complete graphs of almost equal sizes. The main problem we discuss in this paper is that of computing, given $n$ and $r$, $$\max{\left\{{{k}(G)}\,:\,{\text{$G$ is a graph on $n$ vertices with ${\Delta}(G)\le r$}}\right\}}.$$ This problem is equivalent to that of determining $$\max{\left\{{{i}(G)}\,:\,{\text{$G$ is a graph on $n$ vertices with ${\delta}(G)\ge d$}}\right\}},$$ where $d=n-1-r$. Galvin [@G] made the following conjecture. \[conj:Galvin\] If $G$ is a graph on $n$ vertices with minimum degree at least $d$, where $n\geq 2d$, then $i(G)\leq i(K_{d,n-d})$. Galvin proved in [@G] that the conjecture holds for fixed $d$ and $n$ sufficiently large, and also for all $n$ with $d=1$. Engbers and Galvin [@EG] proved the conjecture when $d=2$ or $3$. Alexander and Mink together with the first author [@ACM] proved the conjecture for bipartite graphs. The main result of this paper is to prove a strengthened version of Conjecture \[conj:Galvin\]. It is convenient for us to phrase our theorem in the language of cliques. In the process of taking complements, the minimum degree condition is replaced by a maximum degree condition. From this perspective, we are able to remove the condition from the conjecture relating $n$ and $d$. We prove the following theorem, and hence the subsequent corollary. For all $n,r\in {\mathbb{N}}$, write $n=a(r+1)+b$ with $0\le b\le r$. If $G$ is a graph on $n$ vertices with $\Delta(G)\leq r$, then $${k}(G) \leq {k}(aK_{r+1} \cup K_b),$$ with equality if and only if $G=aK_{r+1}\cup K_b$, or $r=2$ and $G=(a-1)K_3\cup C_4$ or $(a-1)K_3\cup C_5$. For all $n,d\in {\mathbb{N}}$, write $n=a(n-d)+b$ with $0\le b< n-d$. 
If $G$ is a graph on $n$ vertices with $\delta(G)\geq d$, then $${i}(G) \leq {i}\left(\,{\overline}{aK_{n-d} \cup K_b}\,\right)=a(2^{n-d}-1)+2^b.$$ In the low density regime, i.e., for $n\geq 2d$, the extremal graph is indeed $K_{d,n-d}$. For the high density regime, where $a\geq 2$, the extremal graphs become more and more regular and we believe that this bound is the best known even in the regular case. In Section \[sec:signposts\] we prove a weak form of the Main Theorem, valid only for $n$ a multiple of $r+1$, proved already by Engbers and Galvin [@EG]. There is a striking similarity of proof technique between this result and the lower bound mentioned above on $i(G)^{1/n}$ for $G$ a regular graph, so we include both in the same section. The argument for the weak bound essentially forms the kernel of the proof of the Main Theorem. We outline that proof in Section \[sec:outline\]. In Sections \[sec:FL\], \[sec:strong\], and \[sec:discharging\], we introduce the main tools of the proof. Finally, in Section \[sec:proof\], we prove the Main Theorem. Weak bounds {#sec:signposts} =========== In this section we prove two simple results that illustrate some of the methods we will use later in the paper. The first is a Kahn-Zhao type result (though substantially easier to prove) concerning the minimum number of independent sets in a $d$-regular graph. We let ${{\mathcal I}}_t(G)$ be the set of independent sets of size $t$ in $G$ and ${i}_t(G)=\abs{{{\mathcal I}}_t(G)}$. \[thm:ind\_signpost\] If $G$ is $d$-regular on $n=a(d+1)$ vertices, then $${i}(G) \ge {i}(aK_{d+1}) = (d+2)^a.$$ Indeed, for all $0\leq t\leq n$ we have ${i}_t(G)\ge {i}_t(aK_{d+1})=(d+1)^t\binom{a}t$. The essential part of the proof of this theorem is contained in the following lemma.
\[lem:ind\_upwards\] If $G$ is $d$-regular on $n=a(d+1)$ vertices then for all $1\leq t\leq n$ and $I\in {{\mathcal I}}_{t-1}(G)$ we have $${\#}{\left\{{J\in{{\mathcal I}}_t(G)}\,:\,{J\supseteq I}\right\}} \ge (a-t+1)(d+1).$$ The number on the left is exactly the number of common non-neighbors of $I$ (other than the elements of $I$). This is exactly $$\abs[\Big]{V(G) - {\bigcup}_{x\in I} N[x]} \ge n - (t-1) (d+1) = (a-t+1)(d+1),$$ where $N[x]$ is the closed neighborhood of $x$. We’ll prove the stronger statement by induction. Certainly, since $e(G) = e\left(aK_{d+1}\right)$ the statement is true for $t=2$. (The statement is trivial for $t=0,1$.) Suppose now that $t>2$. By double-counting, we have $$\begin{aligned} {i}_t(G) &= \frac1t \sum_{I\in {{\mathcal I}}_{t-1}(G)} {\#}{\left\{{J\in{{\mathcal I}}_t(G)}\,:\,{J\supseteq I}\right\}} \\ &\ge \frac1t (a-t+1)(d+1)\, i_{t-1}(G)\\ &\ge \frac{(a-t+1)(d+1)}t\, i_{t-1}(aK_{d+1}) \\ &= {i}_t(aK_{d+1}).\qedhere \end{aligned}$$ \[cor:minind\] If $G$ is $d$-regular on $n$ vertices then $${i}(G)^{\frac{1}{n}} \ge {i}(K_{d+1})^{\frac1{d+1}} = (d+2)^{\frac1{d+1}}$$ $${i}(G)^{d+1} = {i}((d+1)G) \ge {i}(nK_{d+1}) = {i}(K_{d+1})^n.\qedhere$$ Noting that our proof of Lemma \[lem:ind\_upwards\] only required an upper bound on the degrees of vertices in $G$, we also get the following result. If $G$ is a graph on $n$ vertices with $\Delta(G)\le d$ then $${i}(G)^{\frac1n} \ge {i}(K_{d+1})^{\frac{1}{d+1}}.$$ Our second “signpost” result is a best possible bound on ${k}(G)$ for graphs with $\Delta(G)\leq r$, valid only when $r+1$ divides $n(G)$. This result appears in a paper of Engbers and Galvin [@EG], but it’s important for the development of the rest of the paper that we include the proof. We let ${{\mathcal K}}_t(G)$ be the set of cliques of $G$ of size $t$ and set ${k}_t(G)=\abs{{{\mathcal K}}_t(G)}$.
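The closed forms in Theorem \[thm:ind\_signpost\], ${i}(aK_{d+1})=(d+2)^a$ and ${i}_t(aK_{d+1})=(d+1)^t\binom{a}{t}$, can be confirmed by exhaustive enumeration; the following is an independent check (ours, not from the paper), with invented helper names:

```python
from itertools import combinations
from math import comb

def disjoint_cliques(a, m):
    """Edge list of a disjoint copies of K_m on vertices 0..a*m-1."""
    edges = []
    for c in range(a):
        edges.extend(combinations(range(c * m, (c + 1) * m), 2))
    return edges

def ind_sets_by_size(n, edges):
    """counts[t] = number of independent sets of size t, by brute force."""
    adj = set(map(frozenset, edges))
    counts = [0] * (n + 1)
    for k in range(n + 1):
        for sub in combinations(range(n), k):
            if all(frozenset(p) not in adj for p in combinations(sub, 2)):
                counts[k] += 1
    return counts

for a, d in [(2, 2), (3, 1), (2, 3)]:
    n = a * (d + 1)
    counts = ind_sets_by_size(n, disjoint_cliques(a, d + 1))
    assert sum(counts) == (d + 2) ** a
    assert all(counts[t] == (d + 1) ** t * comb(a, t) for t in range(n + 1))
```

An independent set of $aK_{d+1}$ picks at most one vertex per component, which is exactly what the product $(d+1)^t\binom{a}{t}$ records.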
\[thm:k\_signpost\] If $G$ is a graph on $n=a(r+1)$ vertices and $\Delta(G)\leq r$, then $${k}(G) \le {k}\bigl(a K_{r+1}\bigr) = 1+a \bigl(2^{r+1}-1\bigr).$$ Indeed, for $0\leq t\leq n$, we have ${k}_t(G)\leq {k}_t(aK_{r+1})$. As in the proof of Theorem \[thm:ind\_signpost\], we start by proving that if $C\in {{\mathcal K}}_{t-1}(G)$, $${\#}{\left\{{D\in {{\mathcal K}}_t(G)}\,:\,{D\supseteq C}\right\}}\leq r-t+2.$$ This is immediate since each vertex in $C$ has at most $r-t+2$ neighbors outside $C$. Thus, by induction on $t$ (starting at $t=1$), $$\begin{aligned} {k}_t(G)&\leq \frac{r-t+2}{t}\,{k}_{t-1}(G)\\ &\leq \frac{r-t+2}{t}\,{k}_{t-1}(aK_{r+1})\\ &= \frac{r-t+2}{t}\,a\binom{r+1}{t-1}\\ &= a\binom{r+1}t={k}_t(aK_{r+1}).\qedhere \end{aligned}$$ In the remaining sections of the paper, we prove a best possible bound for all $n$ on the number of complete subgraphs of a graph with $\Delta(G)\leq r$. Outline of the proof {#sec:outline} ==================== Our approach to the proof of the Main Theorem is as follows. We consider a graph $G$ on $n$ vertices with ${\Delta}(G)\le r$. If $K_{r+1}{\subseteq}G$ then we are done by induction. Motivated by the proof of Theorem \[thm:k\_signpost\], we will assign weights to the complete subgraphs of $G$: if $C\in {{\mathcal K}}(G)$ then we set $${w}(C) = \abs[\Big]{{\bigcap}_{x\in C} N(x)},$$ the number of common neighbors of all the vertices in $C$. (In particular of course no element of $C$ is counted since it is not adjacent to itself.) Equivalently, ${w}(C)$ is the number of cliques of $G$ of size $|C|+1$ containing $C$. By the same double-counting argument as in Theorem \[thm:ind\_signpost\] we have that $$\label{eq:kbound} {k}_{t}(G) = \frac1{t} \sum_{C\in {{\mathcal K}}_{t-1}(G)} {w}(C).$$ Thus if the average weight of $(t-1)$-cliques is small then there will not be many $t$-cliques in total. The bound on ${\Delta}(G)$ shows that ${w}(C) \le r+1-\abs{C}$ for all $C$. 
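The weight function ${w}(C)$ and the double-counting identity (\[eq:kbound\]) are easy to exercise numerically. The sketch below (ours, not the authors'; the sample graph is arbitrary) checks the identity on a small graph and also confirms the count ${k}(aK_{r+1}\cup K_b)=a(2^{r+1}-1)+2^b$ that the Main Theorem asserts is extremal:

```python
from itertools import combinations

def cliques(n, edges):
    """All cliques of the graph, including the empty clique."""
    adj = set(map(frozenset, edges))
    out = []
    for k in range(n + 1):
        for sub in combinations(range(n), k):
            if all(frozenset(p) in adj for p in combinations(sub, 2)):
                out.append(frozenset(sub))
    return out

def graph_aKr1_Kb(a, r, b):
    """Vertex count and edge list of aK_{r+1} united with K_b."""
    edges, start = [], 0
    for size in [r + 1] * a + ([b] if b else []):
        edges.extend(combinations(range(start, start + size), 2))
        start += size
    return start, edges

# closed form k(aK_{r+1} u K_b) = a(2^{r+1}-1) + 2^b
for a, r, b in [(1, 3, 2), (2, 2, 1), (2, 3, 0)]:
    n, edges = graph_aKr1_Kb(a, r, b)
    assert len(cliques(n, edges)) == a * (2 ** (r + 1) - 1) + 2 ** b

# double-counting identity: t * k_t(G) = sum of w(C) over (t-1)-cliques C
n, edges = 5, [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
adj = set(map(frozenset, edges))
ks = cliques(n, edges)
def w(C):  # number of common neighbors of all vertices of the clique C
    return sum(all(frozenset((v, x)) in adj for x in C)
               for v in range(n) if v not in C)
for t in range(1, n + 1):
    kt = sum(len(C) == t for C in ks)
    assert kt * t == sum(w(C) for C in ks if len(C) == t - 1)
```
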
If a clique $C$ satisfies this bound with equality then we call it *tight*. The core of our proof is to focus on the tight cliques. Suppose then that $G$ is a graph with $\Delta(G)\leq r$ and $n=a(r+1)+b$ vertices. In crude outline our proof has four cases: I) $G$ contains a $K_{r+1}$, in which case we are done by induction. II) $G$ has no tight cliques. In this case, by (\[eq:kbound\]), we observe that $G$ satisfies the *strong inequalities*: for all $t\geq 3$ we have $${k}_{t}(G) \le \frac {r-t+1}{t} \;{k}_{t-1}(G).$$ We will show that in all nontrivial cases, the strong inequalities imply that ${k}(G)<{k}(aK_{r+1}\cup K_b)$. III) $G$ has some tight clique for which a certain parameter, that we call *fixed loss*, is small. In this case, we modify the graph $G$ to obtain a graph $G'$ with ${k}(G')>{k}(G)$. IV) $G$ has tight cliques each having large fixed loss. We prove that $G$ satisfies the strong inequalities despite having tight cliques.\[alligator\] In the next section, we introduce and discuss fixed loss. In subsequent sections, we consider the strong inequalities and use a discharging technique to deal with case \[alligator\]. Fixed loss {#sec:FL} ========== The modification we hope to do to a graph containing a tight clique is relatively simple. It is described in the following definition. Suppose that $G$ is a graph with ${\Delta}(G)\le r$ and $T{\subseteq}V(G)$ is a tight clique of size $t$. We let $S_T = {\bigcap}_{x\in T} N(x)$ (abbreviated to $S$ when $T$ is clear from context) and define a new graph by converting $T\cup S_T$ into a clique (of size $r+1$) and deleting all the edges $[S_T,V(G){\setminus}(T\cup S_T)]$. In other words we define $${G_{T}} = G + \binom{S_T}{2} - [S_T,V(G){\setminus}(T\cup S_T)],$$ where $\binom{S_T}2$ is the set of all pairs in $S_T$ and, for sets $U$ and $V$, $[U,V]={\left\{{uv}\,:\,{u\in U, v\in V}\right\}}$. \[lem:dR\] With the notation of the previous definition, for all $x\in S$ we have $$\abs{N_G(x){\setminus}(T\cup S)} \le d_{R_T}(x),$$ where $R_T={\overline}{G[S]}$ denotes the complement of the graph induced on $S$ (defined formally below). Since $T$ is tight we have $t+\abs{S}=r+1$; now $x$ is adjacent to all $t$ vertices of $T$ and to $\abs{S}-1-d_{R_T}(x)$ further vertices of $S$, so at most $r-t-(\abs{S}-1-d_{R_T}(x))=d_{R_T}(x)$ of its neighbors lie outside $T\cup S$.
If $G$, $T$ and $S$ are as in the definition, then there are no edges in $G$ between $T$ and $V(G){\setminus}(T\cup S)$. Thus ${G_{T}}$ contains a copy of $K_{r+1}$ on $T\cup S$. By Lemma \[lem:dR\], we have $\Delta({G_{T}}) \le r$. Vertices in $T\cup S$ now have degree exactly $r$ and no other vertex has had its degree increased. We will give a bound on ${k}({G_{T}})$ that is described in terms of the edges inside $S$ that we are “filling in”. Let $G$ be a graph with ${\Delta}(G)\le r$ and $T{\subseteq}V(G)$ a tight clique in $G$. Set $S = S_T$. Then we define $$R_T = {\overline}{G[S]},$$ i.e., the graph on $S$ whose edges are those not in $G$. If $R$ is any graph and $I{\subseteq}V(R)$ we define $${\delta}_I = \min{\left\{{d_R(x)}\,:\,{x\in I}\right\}}.$$ We define the *fixed loss of a graph $R$* to be $${\phi}(R) = \sum_{\substack{I\in {{\mathcal I}}(R)\\I\not={\emptyset}}} (2^{{\delta}_I} - 1).$$ The following lemma gives a lower bound on ${k}(G_T)$ in terms of ${\phi}(R)$. \[lem:kGS\] If $G$ is a graph with ${\Delta}(G)\le r$ and $T{\subseteq}V(G)$ is a tight $t$-clique then $${k}(G_T) \ge {k}(G) + 2^{r+1} - 2^t\,{i}(R_T) - {\phi}(R_T).$$ For convenience we will set $S = S_T$ and $V'=V(G){\setminus}(T\cup S)$. We also abbreviate $R_T$ to $R$. We will show that $$\begin{aligned} \abs{{{\mathcal K}}(G){\setminus}{{\mathcal K}}(G_T)} &\le {\phi}(R) \text{ and} \\ \abs{{{\mathcal K}}(G_T){\setminus}{{\mathcal K}}(G)} &= 2^{r+1} - 2^t\,{i}(R), \end{aligned}$$ from which the result follows immediately. Consider first a clique $C\in {{\mathcal K}}(G){\setminus}{{\mathcal K}}(G_T)$. It must meet both $S$ and $V'$. We will count such cliques according to their intersection with $S$, so set $I=C\cap S$. This intersection must be an independent set in $R$ (since edges of $R$ are missing in $G$) and (at a bare minimum) the elements of $C\cap V'$ must be common neighbors of all the elements of $I$.
Since, by Lemma \[lem:dR\], each $x\in I$ has at most $d_{R}(x)$ neighbors in $V'$, we can fix some $x_0\in I$ with $d_{R}(x_0)={\delta}_I$ and we see that each such $C$ is associated with a unique non-empty subset of $N_G(x_0)\cap V'$. Thus there are at most $2^{{\delta}_I}-1$ such $C$, and at most ${\phi}(R)$ cliques in ${{\mathcal K}}(G){\setminus}{{\mathcal K}}(G_T)$ in total. Turning now to ${{\mathcal K}}(G_T){\setminus}{{\mathcal K}}(G)$ we see that if $C\in {{\mathcal K}}(G_T){\setminus}{{\mathcal K}}(G)$ then we must have $C{\subseteq}T\cup S$ with $C\cap S\neq {\emptyset}$. All such subsets are cliques of $G_T$, and there are $2^{r+1}-2^t$ of them. The ones that are cliques of $G$ are those not missing an edge in $S$, i.e., those that meet $S$ in a nonempty independent set of $R$; there are $2^t({i}(R)-1)$ of these, which gives the claimed count. \[cor:fli\] With the setup of Lemma \[lem:kGS\] and setting $s=\abs{S}$, if $$2^t > \frac{{\phi}(R)}{2^s-{i}(R)}$$ then ${k}(G_T)>{k}(G)$. We start our investigation of the graph parameter ${\phi}$ by proving some simple (if somewhat surprising) extremal results. \[thm:maxFL\] If $R$ is a graph on $s$ vertices then $${\phi}(R) \le {\phi}(K_s) = s(2^{s-1}-1).$$ We will in fact prove something stronger, that $$\sum_{\substack{I\in {{\mathcal I}}(R)\\I\not={\emptyset}}} \abs{I}\bigl(2^{{\delta}_I}-1\bigr) \le s(2^{s-1}-1).$$ We calculate as follows. $$\begin{aligned} \sum_{\substack{I\in {{\mathcal I}}(R)\\I\not={\emptyset}}} \abs{I}\bigl(2^{{\delta}_I}-1\bigr) &= \sum_{\substack{I\in {{\mathcal I}}(R)\\I\not={\emptyset}}} \sum_{x\in I} \bigl(2^{{\delta}_I}-1\bigr) \\ &\le \sum_{\substack{I\in {{\mathcal I}}(R)\\I\not={\emptyset}}} \sum_{x\in I} \bigl(2^{d(x)}-1\bigr) \\ &= \sum_{x\in V(R)} \sum_{\substack{I \in {{\mathcal I}}(R)\\ x\in I}} \bigl(2^{d(x)}-1\bigr) \\ &\le \sum_{x\in V(R)} 2^{s-d(x)-1} \bigl(2^{d(x)}-1\bigr) \\ &=\sum_{x\in V(R)} (2^{s-1}-2^{s-d(x)-1})\\ &\le s(2^{s-1}-1).
\end{aligned}$$ In the antepenultimate step we used the fact that if $I$ is independent and contains $x$ then certainly $I{\setminus}{\left\{{x}\right\}} {\subseteq}V(R){\setminus}N[x]$. We will also require a slightly more technical result bounding ${\phi}(R)$ in terms of both $s$ and the number of vertices of $R$ of degree one. Before we do this, we need to start with a simple lemma showing that we can assume that $R$ contains no $K_2$ components. \[lem:k2\] Suppose that $G$ is a graph with $\Delta(G)\leq r$ and $T$ is a tight clique in $G$ with $\abs{T}\geq 2$. If $R=R_T$ contains a $K_2$ component on vertices $u$ and $v$, then the graph $G'$ obtained from $G$ by adding the edge $uv$ and deleting any edges in $[{\left\{{u,v}\right\}},V(G){\setminus}(T\cup S_T)]$ has $\Delta(G')\leq r$ and ${k}(G')>{k}(G)$. Most cliques are the same in $G$ and $G'$. In $G'$ we no longer have the $K_2$s corresponding to edges in $[{\left\{{u,v}\right\}},V(G){\setminus}(T\cup S_T)]$; since each of $u$ and $v$ is incident to at most one such edge (by Lemma \[lem:dR\]), we have lost at most two cliques. On the other hand, we have gained the edge $uv$ and $t\geq 2$ triangles of the form ${\left\{{x,u,v}\right\}}$ with $x\in T$. \[thm:flell\] Let $R$ be a graph on $s$ vertices having $\ell$ vertices of degree one and containing neither a $K_1$ nor a $K_2$ component. Then $${\phi}(R)\leq 2^{s}+(s-\ell-2)2^{s-\ell-1}.$$ Let $L$ be the set of vertices of degree one. We split up the sum computing ${\phi}(R)$ into two parts, the contributions of independent sets containing an element of $L$ and the rest. To this end, let $$\begin{aligned} {\phi}'(R)&=\sum_{\substack{I\in {{\mathcal I}}(R)\\I\cap L\neq {\emptyset}}} 2^{{\delta}_I}-1={\#}{\left\{{I\in {{\mathcal I}}(R)}\,:\,{I\cap L\neq {\emptyset}}\right\}},\quad\text{and}\\ {\phi}''(R)&=\sum_{\substack{{\emptyset}\neq I\in {{\mathcal I}}(R)\\I\cap L={\emptyset}}} 2^{{\delta}_I}-1. 
\end{aligned}$$ To bound the first term, we observe $${\phi}'(R)={\#}{\left\{{I\in {{\mathcal I}}(R)}\,:\,{I\cap L\neq {\emptyset}}\right\}}\leq (2^{\ell}-1)2^{s-\ell-1}.$$ This follows from the fact that no vertex of $L$ is adjacent to any other and therefore, given any nonempty subset $L'$ of $L$, at least one vertex of $R{\setminus}L$ is excluded from $I$. So there are at most $2^{s-\ell-1}$ independent sets contributing to ${\phi}'(R)$ of the form $L'\cup J$ where $L\cap J={\emptyset}$. On the other hand, writing $d_{\ell}(v)$ for $\abs{N(v)\cap L}$, $$\begin{aligned} {\phi}''(R)&=\sum_{\substack{{\emptyset}\neq I\in {{\mathcal I}}(R)\\I\cap L={\emptyset}}} 2^{{\delta}_I}-1\\ &\leq \sum_{\substack{{\emptyset}\neq I\in {{\mathcal I}}(R)\\I\cap L={\emptyset}}} \abs{I}(2^{{\delta}_I}-1)\\ &= \sum_{v\in V(R)}\; \sum_{\substack{v\in I\in {{\mathcal I}}(R)\\I\cap L={\emptyset}}} 2^{{\delta}_I}-1\\ &\leq \sum_{v\in V(R)}\; \sum_{\substack{v\in I\in {{\mathcal I}}(R)\\I\cap L={\emptyset}}} 2^{d(v)}-1\\ &\leq \sum_{v\in V(R)} 2^{s-\ell-d(v)+d_{\ell}(v)-1}(2^{d(v)}-1)\\ &= \sum_{v\in V(R)} 2^{s-\ell+d_{\ell}(v)-1}-2^{s-\ell-d(v)+d_{\ell}(v)-1}\\ &= 2^{s-\ell-1}\left(\sum_{v\in V(R)} 2^{d_{\ell}(v)}-\sum_{v\in V(R)}2^{d_{\ell}(v)-d(v)}\right)\\ &\leq 2^{s-\ell-1}(2^{\ell}+s-\ell-1). \end{aligned}$$ The fifth step above follows as in the proof of Theorem \[thm:maxFL\] and the final step uses the convexity of $2^x$ on the first term and ignores the second. Combining these bounds, we have $${\phi}(R)={\phi}'(R)+{\phi}''(R)\leq 2^s+(s-\ell-2)2^{s-\ell-1}.\qedhere$$ The strong inequalities {#sec:strong} ======================= As noted in Section \[sec:outline\], if there are no tight cliques of size at least two, then $G$ satisfies the strong inequalities: for all $t\geq 3$, $${k}_t(G)\leq \frac{r-t+1}t {k}_{t-1}(G).\label{eqn:strong}$$ Note that cliques of size one are tight exactly if the vertex has degree $r$. 
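The consequences drawn from the strong inequalities can be checked with exact rational arithmetic. The following sketch (ours, not part of the paper; all names are invented) sums the bounds ${k}_t\le\frac{n}{r-1}\binom{r}{t}$ together with ${k}_0=1$, ${k}_1=n$ and ${k}_2\le\frac{nr}{2}$, confirms the closed form $1+\frac{n}{r-1}(2^r-2)$, and verifies that this quantity stays below $a(2^{r+1}-1)+2^b$ for small parameters, with the single equality case $r=3$, $n=6$:

```python
from fractions import Fraction
from math import comb

def strong_bound(n, r):
    """1 + n + nr/2 + (n/(r-1)) * sum_{t>=3} C(r,t), as an exact rational."""
    return 1 + n + Fraction(n * r, 2) + Fraction(n, r - 1) * (2 ** r - comb(r, 2) - r - 1)

equalities = []
for r in range(3, 11):
    for a in range(1, 4):
        for b in range(r + 1):
            n = a * (r + 1) + b
            lhs = strong_bound(n, r)
            # the term-by-term sum agrees with the closed form 1 + n(2^r-2)/(r-1)
            assert lhs == 1 + Fraction(n, r - 1) * (2 ** r - 2)
            rhs = a * (2 ** (r + 1) - 1) + 2 ** b
            assert lhs <= rhs
            if lhs == rhs:
                equalities.append((r, n))

assert equalities == [(3, 6)]  # the single equality case: r = 3, n = 6
```
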
Our bound on $k_2(G)=e(G)$ will be the obvious one arising from the degree bound. The next lemma summarizes these inequalities into a bound on $k(G)$. \[lem:strongimps\] If $G$ is a graph on $n$ vertices with $\Delta(G)\leq r$, where $r\geq 2$, and $G$ satisfies the strong inequalities for $t\geq 3$, then $${k}(G)\leq 1 + \frac{n}{r-1}(2^r-2).$$ First note that ${k}_0(G)=1$, ${k}_1(G)=n$, and ${k}_2(G)\leq \frac{nr}2$. For $t\geq 3$, we note that, by induction using (\[eqn:strong\]), $${k}_t(G)\leq \frac{n}{r-1}\binom{r}t.$$ Hence, $$\begin{aligned} {k}(G)&\leq 1+n+\frac{nr}2+\frac{n}{r-1}\left(2^r-\binom{r}2-r-1\right)\\ &=1+\frac{n}{r-1}(2^r-2).\qedhere\end{aligned}$$ \[lem:strong\] If $r\geq 3$ and $n=a(r+1)+b$ where $a\geq 1$ and $0\leq b\leq r$, then $$1 + \frac{n}{r-1}(2^r-2)\leq a(2^{r+1}-1)+2^b,\label{eq:strong}$$ with strict inequality unless $r=3$ and $n=6$. It will be convenient to address first the case when $a=1$ and $b=0$. In this case, we are claiming $$\frac{r+1}{r-1}(2^{r}-2)<2^{r+1}-1,\label{eq:blah}$$ which is true for all $r\geq 3$. In general, the two sides of (\[eq:strong\]) are each linear in $a$, with respective coefficients the two sides of (\[eq:blah\]). Thus, it suffices to prove the result for $a=1$. In this case, we need to show $$\begin{aligned} 1+\frac{r+1+b}{r-1}(2^r-2)&< 2^{r+1}-1+2^b\\ \shortintertext{which is equivalent to} (r-1)(2-2^b)-2(r+1+b)&<(r-3-b)2^r. \end{aligned}$$ The last inequality is clearly true if $b\leq r-3$, since the left hand side is negative and the right hand side is nonnegative. For the remaining cases, i.e., $b=r-2, r-1, r$, we need to check whether $$(b-(r-3))2^r<(r-1)(2^b-2)+2(r+1+b).$$ This is straightforward to check in each case when $r\geq 5$ and easy to check in the other cases. In the case $r=3$ and $b=2$, we get equality. \[cor:strong\] Let $G$ be a graph on $n$ vertices with $\Delta(G)\leq r$ which satisfies the strong inequalities for $t\geq 3$. 
If $n=a(r+1)+b$ with $a\geq 1$ and $0\leq b\leq r$, then $${k}(G)<{k}(aK_{r+1}\cup K_b).$$ This is immediate from Lemmas \[lem:strongimps\] and \[lem:strong\] except when $n=6$ and $r=3$. In this case, ${k}(G)$ could only be as large as the left hand side of (\[eq:blah\]) if $G$ were $3$-regular. Neither of the $3$-regular graphs on $6$ vertices achieves ${k}(G)=19$. Discharging {#sec:discharging} =========== In this section, we discuss the case of the argument wherein every tight clique has large fixed loss. Throughout this section, we let $G$ be a graph on $n$ vertices with $\Delta(G)\leq r$ and $T$ be a tight clique in $G$. We set $t=\abs{T}$, $S=S_T$, $R=R_T$, and $s=\abs{S}$. In order to understand the structure of tight cliques, we make the following definition. A *cluster* is a maximal tight clique. If $T$ is a cluster in a graph $G$ and $C$ is a $c$-clique with $\abs{T\cap C}=c-1$, then we say that $C$ is *associated with $T$*. Note that $x$ and $y$ belong to some common tight clique exactly if $N[x]=N[y]$. In particular, the relation of belonging to some common tight clique is an equivalence relation, with clusters as the equivalence classes. We note a consequence of Corollary \[cor:fli\]. \[lem:sbound\] If $T$ is a cluster in $G$ with ${k}(G_T)\leq {k}(G)$, then ${\phi}(R)\geq 2^r-2^t$ and $t\leq \log_2 s$. From Corollary \[cor:fli\], we know that if ${k}(G_T)\leq {k}(G)$, then $2^t\leq {\phi}(R)/(2^s-i(R))$. Since $T$ is a cluster, we have $\delta(R)\geq 1$. By a result of Galvin [@G], since $\delta(R)\geq 1$, we know that $i(R)\leq 2^{s-1}+1$. Hence, $${\phi}(R)\geq 2^t(2^s-i(R)) \geq 2^t(2^{s-1}-1)=2^r-2^t.$$ Also, by Theorem \[thm:maxFL\], $$\begin{aligned} t&\leq \log_2\frac{{\phi}(R)}{2^s-i(R)}\\ &\leq \log_2\frac{s(2^{s-1}-1)}{2^{s-1}-1}\\ &= \log_2 s.\qedhere \end{aligned}$$ The main result of this section shows that if the fixed loss of a cluster is large, then there are many cliques associated with that cluster having low weight.
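Two ingredients used in the proof of Lemma \[lem:sbound\] — Galvin's bound ${i}(R)\le 2^{s-1}+1$ for graphs with $\delta(R)\ge 1$, and the extremal value ${\phi}(K_s)=s(2^{s-1}-1)$ from Theorem \[thm:maxFL\] — can be verified exhaustively for small $s$. A brute-force sketch (ours, not from the paper; helper names invented):

```python
from itertools import combinations

def phi(n, edges):
    """Fixed loss: sum over nonempty independent sets I of 2^{delta_I} - 1."""
    adj = set(map(frozenset, edges))
    deg = [sum(frozenset((u, v)) in adj for v in range(n)) for u in range(n)]
    total = 0
    for k in range(1, n + 1):
        for I in combinations(range(n), k):
            if all(frozenset(p) not in adj for p in combinations(I, 2)):
                total += 2 ** min(deg[x] for x in I) - 1
    return total

def num_ind_sets(n, edges):
    adj = set(map(frozenset, edges))
    return sum(
        all(frozenset(p) not in adj for p in combinations(I, 2))
        for k in range(n + 1)
        for I in combinations(range(n), k)
    )

# phi(K_s) = s(2^{s-1}-1): only singletons are independent, each of degree s-1
for s in range(2, 6):
    assert phi(s, list(combinations(range(s), 2))) == s * (2 ** (s - 1) - 1)

# Galvin's bound i(R) <= 2^{s-1}+1 over all graphs with minimum degree >= 1
for s in range(2, 6):
    best = 0
    all_pairs = list(combinations(range(s), 2))
    for mask in range(1 << len(all_pairs)):
        edges = [e for j, e in enumerate(all_pairs) if mask >> j & 1]
        if len({v for e in edges for v in e}) == s:  # minimum degree >= 1
            best = max(best, num_ind_sets(s, edges))
    assert best == 2 ** (s - 1) + 1
```

The maximum in Galvin's bound is attained, for example, by the star $K_{1,s-1}$.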
In the proof of the Main Theorem we will transfer weight from tight cliques inside a given cluster to cliques of low weight associated with that cluster, proving that $G$ satisfies the strong inequalities. \[lem:clusnum\] Let $r\geq 3$. If $T$ is a cluster in $G$, ${k}(G_T)\leq {k}(G)$, and $R$ has no $K_2$ component, then for every $2\leq c\leq t$, there are at least $2\binom{t}{c}$ $c$-cliques associated with $T$ having weight at most $r-c-1$. If $C$ is a $c$-clique associated with $T$ and $x$ is the unique element of $S\cap C$, then ${w}(C)=r+1-c-d_R(x)$. Thus, all associated cliques containing vertices of degree at least two in $R$ have weight at most $r-c-1$. We will show that there are at least $t-1$ such vertices. We let $\ell$ be the number of vertices of $R$ of degree one. If $\ell\geq s-t+2$, then by Theorem \[thm:flell\], we would have ${\phi}(R)\leq 2^s+(t-4)2^{t-3}$. Note that this would imply $$2^r-2^t\leq {\phi}(R)\leq 2^{r+1-t}+(t-4)2^{t-3}\leq 2^{r+1-t}+\frac{1}8 s\log_2 s\leq 2^{r+1-t}+\frac{1}8 r\log_2 r,$$ by Lemma \[lem:sbound\]. Since $t\geq 2$ and, by Lemma \[lem:sbound\], $2^t\le s\le r$, we see $$2^{r}\leq 2^{r-1}+r+\frac{1}8 r\log_2 r \quad\implies\quad 2^{r}\leq 2r+\frac{1}{4}r\log_2 r,$$ a contradiction for $r\geq 3$. Let $h$ be the number of vertices in $R$ of degree at least two. Having shown that $\ell\leq s-t+1$, we know that $h\geq t-1$. The number of $c$-cliques of weight at most $r-c-1$ associated with $T$ is $$h\binom{t}{c-1}\geq (t-1)\binom{t}{c-1}\geq \frac{2(t-c+1)}{c}\binom{t}{c-1}=2\binom{t}c,$$ for $c\geq 2$. Proof of the Main Theorem {#sec:proof} ========================= For all $n,r\in {\mathbb{N}}$, write $n=a(r+1)+b$ with $0\le b\le r$. If $G$ is a graph on $n$ vertices with $\Delta(G)\leq r$, then $${k}(G) \leq {k}(aK_{r+1} \cup K_b),\label{eqn:theone}$$ with equality if and only if $G=aK_{r+1}\cup K_b$, or $r=2$ and $G=(a-1)K_3\cup C_4$ or $(a-1)K_3\cup C_5$. If $r=1$, the result is trivial.
If $r=2$, it is almost as trivial: for $n=3a+b$ with $0\leq b\leq 2$, we have $e(G)\leq e(aK_3\cup K_b)+1$ with equality if and only if $b\neq 0$ and $G$ is $2$-regular. Also, $k_3(G)\leq a$. However, one cannot have equality in both bounds. The graphs described in the statement of the theorem are the only examples achieving equality in (\[eqn:theone\]). We assume henceforth that $r\geq 3$. We proceed by induction on $n$, noting that the result is trivial for $n< r+1$, i.e., $a=0$. Consider then a graph $G$ as in the statement of the theorem with ${k}(G)$ maximal. If $K_{r+1}{\subseteq}G$, then we are done by induction. If $G$ has no tight cliques of size at least two, then $G$ satisfies the strong inequalities for $t\geq 3$ and, so by Corollary \[cor:strong\], we have ${k}(G)<{k}(aK_{r+1} \cup K_b)$, a contradiction to the choice of $G$. The remaining cases involve graphs $G$ containing tight cliques of size at least two. We note that $G$ cannot contain a tight clique $T$ with ${k}(G_T)>{k}(G)$ (by the maximality of ${k}(G)$) nor can it contain a tight clique $T$ such that $R_T$ has a $K_2$ component (by Lemma \[lem:k2\]). Thus, $G$ has some tight cliques of size at least two, all of which have ${k}(G_T)\leq {k}(G)$. In this final case, we will use the results of Section \[sec:discharging\] to show that, in fact, $G$ satisfies the strong inequalities. We will define new weights on all cliques in the following fashion. We will reduce the weights of tight cliques by one. If a clique $C$ is associated with a cluster, we increase its weight by one half. It is possible for a clique of size two to be associated with two clusters; in this case, we increase its weight by one. Larger cliques cannot be associated with more than one cluster since they must intersect each cluster in $c-1$ vertices, hence the clusters themselves would intersect. 
Denoting these new weights by $w'(C)$, we observe first that Lemma \[lem:clusnum\] implies $\sum_{C\in {{\mathcal K}}_{t}(G)} {w}(C)\leq \sum_{C\in {{\mathcal K}}_{t}(G)} w'(C)$ for $t\geq 2$. Also, for all $c$-cliques $C$, we have $w'(C)\leq r-c$. Using (\[eq:kbound\]), we get $$k_t(G)=\frac{1}{t}\sum_{C\in {{\mathcal K}}_{t-1}(G)} {w}(C)\leq \frac{1}{t}\sum_{C\in {{\mathcal K}}_{t-1}(G)} w'(C) \leq \frac{r-t+1}{t} {k}_{t-1}(G),$$ for $t\geq 3$, i.e., $G$ satisfies the strong inequalities. [^1]: We use the term *clique* to refer to a complete subgraph, not necessarily a maximal complete subgraph. [^2]: The *lex graph with $n$ vertices and $m$ edges*, denoted $L(n,m)$, has vertex set $[n]={\left\{{1,2,\ldots,n}\right\}}$ and edge set an initial segment of size $m$ in $\binom{[n]}{2}$ according to the lexicographic order.
--- abstract: 'The distribution of products of random matrices chosen from fixed spherical classes is determined for classical rank 1 symmetric spaces. It is observed that the $n\to\infty$ limit behaves approximately as in the abelian case. A theorem on the rate of convergence to the Haar measure in the case of $SU(n)$ is also established.' --- [**On the Distribution of Products of Spherical Classes in Classical Symmetric Spaces of Rank One**]{} [^1] AMS Subject Classification 2000: 22E46, 53C35, 43A90. Introduction ============ A basic problem in random matrix theory is the determination of the distribution of products of matrices chosen randomly from specified classes. For example, the problem of the support of products of random unitary matrices chosen from fixed conjugacy classes, or of sums of Hermitian matrices with given eigenvalues, is treated in \[AW\] and \[F\] (which is based on the work of Klyachko \[Kl\]). In this paper we investigate the distribution of products of random matrices chosen from spherical classes for classical rank 1 symmetric spaces. When the matrices are chosen from conjugacy or spherical classes of a simple group of $2\times 2$ matrices, a complete solution is given in \[Sh1\] and \[Sh2\]. The distribution of products of spherical classes may be interpreted as the determination of the algebra structure for the Hecke algebra generated by the singular measures concentrated on orbits of $K\times K$ in $G$ for a symmetric pair $(G,K)$. It can also be given an interpretation in geometric probability. Consider for example $S^{2n-1}$ with the natural action of $SU(n)$ on it. Fix a longitude $\Lambda\subset S^{2n-1}$, let $z_i\in \Lambda$, $i=1,2$, and choose matrices $g_i$ randomly from $SU(n)$ (according to Haar measure) conditioned on the requirement that $g_i.z_i$ lies in the same meridian as $z_i$. Then one may inquire about the density function for $g_2g_1$.
For the case of classical symmetric spaces of rank 1 a complete solution to this problem is given in this paper (Theorems \[thm:shaffaf4\], \[thm:shaffaf5\], \[thm:shaffaf23\], \[thm:shaffaf24\] and \[thm:shaffaf22\]). It is observed that as the dimension of the symmetric space of a given class tends to infinity, the distribution of the product measures converges weakly to a singular continuous measure concentrated on a single orbit of $K\times K$. This may be rephrased by saying that the $n\to\infty$ limit behaves approximately as in the abelian case. A similar limiting behavior is also observed for higher rank symmetric spaces, but this more complex limit theorem will be treated later in another publication. In view of this limiting behavior, in section 7 the rate of convergence to Haar measure on $SU(n)$ of products of the form $$\begin{aligned} g_1h_1g_2h_2\ldots g_Nh_N,\end{aligned}$$ where $g_i$’s (resp. $h_i$’s) are chosen from a fixed spherical class ${\mathcal O}_a$ (resp. ${\mathcal O}_b$), is determined. In particular, it is shown that for $N \sim C\log n$ the product measure $\lambda_a\star\lambda_b\star\ldots \star \lambda_a\star\lambda_b$ (the product of $N$ copies of $\lambda_a\star\lambda_b$), where $\lambda_a$ is the invariant probability measure on ${\mathcal O}_a$, tends to the Haar measure of $SU(n)$ as $n\to\infty$ in $L^p$ for $1\le p\le 2$. The methods used in this work are based on harmonic analysis on symmetric spaces. In section 2 relevant integration formulae for symmetric spaces are stated in the appropriate form. Sections 3, 4, 5 and 6 give the explicit density functions for products of spherical classes in classical symmetric spaces of rank 1 of both compact and non-compact types. In the final section essential use is made of representation theory and harmonic analysis for $SU(n)/S(U(1)\times U(n-1))$ to obtain the rate of convergence. Similar results are also valid for rank 1 symmetric spaces of the orthogonal groups but are not treated here.
The author wishes to thank Professor S. Shahshahani who gave him the opportunity to pursue his interests, and especially Professor Mehrdad Shahshahani for suggesting the problem and many stimulating discussions. Integration Formulae ==================== In this section we recall some basic integration formulae related to symmetric spaces. A detailed treatment is given in \[H1\] and \[H2\]. Let $G$ be a connected compact semi-simple Lie group, $\mathfrak{g}$ its Lie algebra, and $K$ a subgroup of $G$ with Lie algebra $\mathfrak{k}$ such that $(G, K)$ is a symmetric pair of compact type. Let $ \mathfrak{g}= \mathfrak{k} + \mathfrak{p} $ be the corresponding Cartan decomposition, $\mathfrak{a} \subset \mathfrak{p}$ a maximal abelian subspace and $\Sigma$ the corresponding set of restricted roots. Fix a Weyl chamber $\mathfrak{a}^+\subset\mathfrak{a}$, and let $\Sigma^+$ and $\Phi=\{\alpha_1,\ldots,\alpha_l\}\subset \Sigma^+$ denote the corresponding sets of positive and simple roots, respectively. The multiplicity of a root $\alpha$ will be denoted by $m_\alpha$. Let $M$ be the centralizer of $\mathfrak{a}$ in $K$; the group $A = \exp\mathfrak{a}$ is closed, and therefore a compact subgroup of $G$. Let $dg$, $dk$, $dm$ and $da$ denote the Haar measures on the compact groups $G$, $K$, $M$ and $A$, and $du$ and $db$ be invariant measures on $U=G/K$, $B=K/M$, respectively. We consider the surjective map $$\begin{aligned} \Psi: (K/M)\times A\longrightarrow G/K,~~~\Psi(kM, a)= kaK.\end{aligned}$$ The Jacobian of $\Psi$ is $$\label{eq:shaffaf2} \det ( d\Psi_{(kM, a)} ) = \prod_{ \alpha \in \Sigma^+} | \sin \alpha (i \ H)|^ {m_ {\alpha}}$$ where $a = \exp (H)$, $H \in \mathfrak{a}$ and $m _\alpha$ is the multiplicity of the restricted root $\alpha$. Let $\delta(a)$ denote the right-hand side of (\[eq:shaffaf2\]).
$\Psi$ is one-to-one and regular on an open dense set, and we have $$\begin{aligned} \Psi ^\star (du)= c \delta(a) db da \end{aligned}$$ where $c$ is a constant depending on the normalization of the measures. \[thm:shaffaf1\] Let $U=G/K$ be a Riemannian symmetric space of compact type. Then with the above notation we have $$\begin{aligned} \int_G f ( gK) dg = c \int_{K/M} \left(\int_A f(ka K) \delta(a) da \right) db\end{aligned}$$ for all $f\in C(G/K)$. Moreover if $G$ is simply connected and $K$ is connected, then $$\begin{aligned} \int_{G/K} f(gK) d u = c \int _ {K/M} db \left( \int_Q f( k ( \exp H)K) \delta( \exp H) d H \right )\end{aligned}$$ where $$\begin{aligned} \delta(\exp(H)) = \prod_{ \alpha \in \Sigma^+} | \sin \alpha (i \ H)|^ {m_ {\alpha}},~~~c^{-1} = \int_{Q} \prod_ {\alpha \in \Sigma^+} \left(\sin \alpha (i H) \right)^ {m_\alpha} d H,\end{aligned}$$ the measures on $U=G/K$, $B=K/M$ and $A$ are normalized to have total mass 1, and $Q$ is the polyhedron $$\begin{aligned} Q = \{H\in \mathfrak{a}:\ \tfrac{1}{i} \mu_j(H)> 0 \ (1\leq j \leq l),\ \tfrac{1}{i} \mu(H)< \pi \},\end{aligned}$$ $\mu_1,\ldots,\mu_l$ being the simple roots and $\mu$ the highest restricted root. According to the classification theory the classical symmetric spaces of rank 1 are

\begin{tabular}{lll}
Noncompact & Compact & Dimension of $G/K$\\
$(SU(n,1),S(U(1)\times U(n)))$ & $(SU(n+1), S(U(1)\times U(n)))$ & $2n$\\
$(SO(n,1),SO(n))$ & $(SO(n+1),SO(n))$ & $n$\\
$(Sp(n,1),Sp(1)\times Sp(n))$ & $(Sp(n+1),Sp(1)\times Sp(n))$ & $4n$
\end{tabular}

There is also an exceptional symmetric pair of rank 1 associated to the exceptional Lie group $F_4$ which is not treated in this paper. The Symmetric Pair $(SU(n+1), S(U(1) \times U(n)))$ ==================================================== In this section $G= SU(n+1) $ and $K = S(U(1) \times U(n))$. 
In this case $G$ is simply connected, $K$ connected and $G/K$ is irreducible. The Lie algebra of $G$ is $\mathfrak{g}= su(n+1)$, the space of traceless skew-hermitian matrices, and the Lie algebra of the subgroup $K$ is $$\begin{aligned} \mathfrak{k}= \left\{\left[\begin{array}{c|ccc} -\text{tr}(A)&&0 \\\hline &&\\0&&A\\&& \end{array}\right] ; \ \textrm{$A$ is an $n\times n$ skew-hermitian matrix}\right\}.\end{aligned}$$ Let $\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{p}$ be the Cartan decomposition; then $\mathfrak{p}$ is the subspace of matrices of the form $$\begin{aligned} \left[\begin{array}{ccccc} 0& & \xi_1 & \ldots & \xi_n \\ \begin{array}{c} \\-\overline{\xi_1} \\ \vdots \\ \\-\overline{\xi_n} \\ \end{array} & & & 0_{n \times n} & \\ \end{array}\right].\end{aligned}$$ A maximal abelian subspace $ \mathfrak{a} \subset \mathfrak{p}$ is $ \mathfrak{a} = \{t H ~;~~ t \in \mathbb{R} \}$ where $H$ is the matrix $$\begin{aligned} H = \left[\begin{array}{ccccc} 0& & \ 1 & \ldots & 0 \\ \begin{array}{c} \ -1 \\ \ 0 \\ \vdots \\ \ 0 \\ \end{array} & & & 0_{n \times n} & \\ \end{array}\right].\end{aligned}$$ The centralizer $\mathfrak{m}$ of $\mathfrak{a}$ in $\mathfrak{k}$ is the subalgebra of $\mathfrak{k}$ consisting of matrices of the form $$\left[\begin{array}{c|ccc} -\text{tr}(A)&&0 \\\hline &&\\0&&A\\&& \end{array}\right]$$ where $A= [a_{ij}]$ is an $n\times n$ skew-hermitian matrix satisfying $$\begin{aligned} A v = \ \textrm{tr}(A) \ v\end{aligned}$$ where $v=[1,1, \ldots ,1]^T$. There is a unitary matrix $P$ such that $$\begin{aligned} PAP^{-1} \ = \ \left[\begin{array}{c|ccc} \textrm{tr}(A)&&0 \\\hline &&\\0&&B\\&& \end{array}\right],~~~{\rm tr}(B)=0.\end{aligned}$$ Therefore we can identify the Lie algebra $\mathfrak{m}$ with $\mathfrak{s}( \mathfrak{u}(1) \times \mathfrak{u}(1) )\times \mathfrak{su}(n-1) $. 
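As a small numerical illustration (a sketch only, not part of the argument): with the real generator $H = E_{12}-E_{21}$ displayed above, $\exp(tH)$ is a rotation in the first two coordinates, so its $(1,1)$-entry is $\cos t$; this is the entry on which the spherical functions appearing later depend. A truncated Taylor series is accurate enough for these small matrices:

```python
import math

def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=60):
    """Matrix exponential via a truncated Taylor series; adequate for
    the small skew-symmetric matrices used in this illustration."""
    n = len(A)
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, A)]
        out = [[out[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return out

# H = E_{12} - E_{21}, the real part of the generator above, here with n = 3
n = 3
H = [[0.0] * (n + 1) for _ in range(n + 1)]
H[0][1], H[1][0] = 1.0, -1.0

t = 0.7
g = expm([[t * x for x in row] for row in H])
print(abs(g[0][0] - math.cos(t)) < 1e-12)  # exp(tH) rotates coordinates 1 and 2
```

The remaining coordinates are fixed, so the $K$-double coset of $\exp(tH)$ is indeed parametrized by the single angle $t$.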
The Jacobian $\delta$ for polar coordinates of the symmetric pair $(G,K)$ is given by the formula $$\begin{aligned} \delta(\exp(tH))=(\sin t)^{2(n-1)}~\sin 2t .\end{aligned}$$ [**Proof**]{}- By a straightforward calculation the positive restricted roots are $$\begin{aligned} \alpha_1(tH) = i t , \ \ \alpha_2(t H ) = 2 \alpha_1 (tH)= 2i t\end{aligned}$$ with multiplicities $ m _{\alpha_1} = 2(n-1), m_{\alpha_2} = 1 $. Therefore by theorem (\[thm:shaffaf1\]) for the Jacobian $\delta$ we have $$\begin{aligned} \delta(\exp(t H)) = \prod_{ \alpha \in \ \Sigma^+} | \sin \alpha (i t H)|^ {m_ {\alpha}} = (\sin ( t) ) ^{2(n-1)} \ \sin (2 t),\end{aligned}$$ and the lemma follows. $\blacksquare$ The stabilizer subgroup $K_{z_0}$ of $z_0 = [0:1:1: \ldots :1]$, under the left action of $K$ on $\mathbb{CP}(n)\simeq G/K$, is the centralizer subgroup $M$. [**Proof**]{}- Note that $\mathbb{CP}(n)$ is the quotient space $ \mathbb{CP}(n)= S^{2n+1}/S^1$ under the diagonal action of $S^1$. The orbit of a generic point $z=[\zeta, \xi ] \in \mathbb{CP}(n)$ under $K = S( U(1) \times U(n))$ is $$\label{eq:shaffaf51} \ \left[\begin{array}{c|ccc} e^{-i\theta}&0&\cdots&0 \\\hline 0&&\\\vdots&&B\\0&& \end{array}\right]\left[\begin{array}{cccc} \zeta \\\hline\\ \xi\\\\ \end{array}\right]=\left[\begin{array}{cccc} e^{-i\theta}\zeta \\\hline\\ B\xi\\\\ \end{array}\right].$$ The first component of the above vector sweeps out a copy of $S^1$, while the second component sweeps out a $(2n-1)$-sphere, taken modulo the $S^1$-action, which is by definition a copy of $\mathbb{CP}(n-1)$. Hence the orbit of a generic point $z \in \mathbb{CP}(n)$ under the action of the subgroup $K$ is equivalent to $S^1 \times \mathbb{CP}(n-1)$. Fix a generic point $[\zeta,\xi]\in \mathbb{CP}(n)$ and consider the corresponding embedding $S^1 \times \mathbb{CP}(n-1)\hookrightarrow \mathbb{CP}(n)$ given by (\[eq:shaffaf51\]). 
Let $z \in \mathbb{CP}(n)$; we know that $ O_z = K \cdot z = S^1 \times \mathbb{CP}(n-1)$. Let $K_z$ be the stabilizer subgroup of the point $z$, i.e. $ K_z =\{ k \in K \ | \ kz = z \}$. We want to show that the stabilizer subgroup $K_z$ for $z=z_0$ as above is the centralizer subgroup $M$. Recall that the Lie algebra $\mathfrak{m}$ of the Lie group $M$ is characterized by $$\mathfrak{m} = \left\{ B = \left[\begin{array}{c|ccc} -\text{tr}(A)&&0 \\\hline &&\\0&&A\\&& \end{array}\right] \in \mathfrak{k} \ \Big {|} \ A^{\ast} = -A \ , \ B \ v = \ \text{tr}(A) \ v \right\}$$ where $v = [0, 1, 1, \ldots , 1]^T$. So we obtain $$e^B \ v = \det( e^A) \ v.$$ Therefore $v$ is an eigenvector for every element of $M$, and conversely every element of $K$ which has $v$ as an eigenvector is necessarily an element of the centralizer subgroup $M$. Hence we have $K_{z_0}=M$. $\blacksquare$ \[rem:shaffaf1\] [It is clear from the above lemma that the centralizer subgroup $M$ is a connected subgroup and therefore we have $M= S(U(1)\times U(1) \times U(n-1))$.]{} We have $K/M \simeq S^1 \times \mathbb{CP}(n-1)$. [**Proof**]{}- According to the previous lemma the orbit of the point $ z_0 = [0:1:1: \ldots :1] \in \mathbb{CP}(n)$ under the action of the group $K$ can be identified with $S^1 \times \mathbb{CP}(n-1)$ and the stabilizer of this point is the centralizer subgroup $M$, so by the orbit-stabilizer theorem we have $$O_{z_0} = K/K_{z_0} = K/M.$$ Hence $ K/M = S^1 \times \mathbb{CP}(n-1)$. 
$\blacksquare$\ Thus we may apply polar coordinates to the pair $(K, M)$; we record the result in the following lemma. The Jacobian of polar coordinates for the pair $(K,M)$ is $$\begin{aligned} \delta_0(\exp(t H_0)) = (\sin ( t) ) ^{2(n-2)} \ \sin (2 t).\end{aligned}$$ [**Proof**]{}- Since the pair $(K, M)$ is symmetric, we have the Cartan decomposition $ \mathfrak{k} = \mathfrak{m} \oplus \mathfrak{p}_0$ with $$[\mathfrak{m}, \mathfrak{m}]\subseteq \mathfrak{m}, \ \ [\mathfrak{m}, \mathfrak{p}_0]\subseteq \mathfrak{p}_0 , \ \ [\mathfrak{p}_0, \mathfrak{p}_0]\subseteq \mathfrak{m}.$$ Now $\mathfrak{m} = \mathfrak{s}(\mathfrak{u}(1) \times \mathfrak{u}(1)) \times \mathfrak{su}(n-1)$, which consists of the matrices of the form $$\left[\begin{array}{c|c|ccc} - \text{tr}(A)&0&0&\cdots&0 \\\hline 0 & \text{tr}(A) & 0 & \ \cdots \ \ 0\\\hline 0& 0 &&&\\\vdots&\vdots&& B &\\ 0& 0&&&\\ \end{array}\right], \ \ B \in su(n-1),$$ and the vector space $\mathfrak{p}_0$ consists of the matrices in the Lie algebra $\mathfrak{k}$ of the form $$\left[\begin{array}{c|c|ccc} 0&0&0&\cdots&0 \\\hline 0&0&\xi_1&\cdots&\xi_{n-1}\\\hline 0&-\overline{\xi}_1&&&\\\vdots&\vdots&&0_{n-2}&\\0&-\overline{\xi}_{n-1}&&&\\ \end{array}\right].$$ A maximal abelian subspace of $\mathfrak{p}_0$ is spanned by $$H_0 = \left[\begin{array}{c|c|ccc} 0&0&0&\cdots&0 \\\hline 0&0& 1 & \cdots & 0 \\\hline 0& -1 &&&\\\vdots&\vdots&&0_{n-2}&\\ 0 & 0 &&&\\ \end{array}\right]\in \mathfrak{p}_0.$$ The positive restricted roots are $$\begin{aligned} \beta_1(tH_0) = i t , \ \ \beta_2(t H_0 ) = 2 \beta_1(t H_0) = 2i t\end{aligned}$$ with multiplicities $ m _{\beta_1} = 2(n-2), m_{\beta_2} = 1 $. Therefore by theorem (\[thm:shaffaf1\]) we have $$\begin{aligned} \delta_0(\exp(t H_0)) = \prod_{ \beta \in \ \Sigma^+} | \sin \beta (i t H_0)|^ {m_ {\beta}} = (\sin ( t) ) ^{2(n-2)} \ \sin (2 t).\end{aligned}$$ The lemma is proved. $\blacksquare$ Now we state the main theorem of this section. 
\[thm:shaffaf4\] Let $\lambda_a$ and $\lambda_b$ be two (singular) spherical measures concentrated on the $K$-spherical classes ${\cal O}_a$ and ${\cal O}_b$ in the group $G= SU(n+1)$ respectively. Then $\lambda_a\star\lambda_b$ is absolutely continuous relative to the Haar measure on $SU(n+1)$. It is a spherical measure and for a continuous spherical function $f$ on $SU(n+1)$ we have $$\begin{aligned} \lambda_a\star\lambda_b(f) = \frac{(c\,\mathrm{vol}(M)\mathrm{vol}(K))^2}{a_2^{2(n-1)}}\, \delta (t_1) \delta (t_2) \int_{I_{a, b} }\ f (u )\,(a_2^2 - (u - a_1)^2)^{n-2} \ (a_1 - u )\ d u ~ ,\end{aligned}$$ where $a=\exp(t_1H)$, $b=\exp(t_2H)$, $\delta(t)=(\sin t)^{2(n-1)} \ \sin 2 t$, $a_1 = \cos t_1 \cos t_2$, $a_2 = \sin t_1 \sin t_2$ and $$\begin{aligned} I_{a,b} = [\cos (t_1 + t_2) , \ \cos t_1 \cos t_2 - \sin t_1 \sin t_2 \cos( \frac{\pi}{2(n-2)}) \ ].\end{aligned}$$ [**Proof**]{}- Since both $f$ and $\lambda_a$ are $K$-bi-invariant, $\lambda_a\star f(x)$ is $K$-bi-invariant and therefore to compute $\lambda_a\star f(x)$ we can assume that $x$ is of the form $x= \exp(t'H)$. Let $\{\theta_n\}$ be a sequence of spherical functions converging weakly to the singular measure $\lambda_a$ on the orbit $\mathcal{O}_a$. 
Applying the polar (Cartan) coordinate decomposition to the convolution $\lambda_a \star \check{f}(x)$ we have $$\begin{aligned} \lambda_a \star \check{f}(x) &=& \int_{\mathcal{O}_a} f(y x^{-1}) dy \\ &=& \lim_{n \rightarrow \infty} \int_G \theta_n(g) f(g x^{-1} ) dg \\&=& c\lim_{n \rightarrow \infty} \int_K \int_K \int_A \theta_n(k_1 a' k_2 ) f ( k_1 a' k_2 x^{-1}) \delta(a') da' dk_1 dk_2 \\& =& c \lim_{n \rightarrow \infty } \int_K \int_A \theta_n(a') f(a' k x^{-1}) \delta(a') da' dk \\ &=& c \delta(t_1) \int_K f( a k x^{-1}) dk\end{aligned}$$ where $$\begin{aligned} \delta(t) = \delta(\exp(tH))= (\sin t)^{2(n-1)} \sin 2t.\end{aligned}$$ Recall that $M$ is the centralizer group of $A$ in $K$, and it is easy to verify that the function $g$ defined by $$g(k) = \ f(\exp(t_1 H)k \exp(-t' H))$$ is an $M$-spherical function. Applying the polar coordinate decomposition to the pair $(K, M)$, the above integral over $K$ becomes $$\begin{aligned} \lambda_a \star \check{f}(x) &=& c \delta(t_1) \int_K f(\exp(t_1 H)k \exp(-t' H))d k \\&=& c\delta(t_1) \int_M \int_M \ \int_{A_0} \ g(m_1 \ a \ m_2 )\delta_0 (a) d m_1 d a d m_2 \\ & = & c (\textrm{vol}(M))^2 \delta(t_1) \int_{A_0} \ g(a) \delta_0(a) d a ~,\end{aligned}$$ where $a = \exp( t H_0)$, $H_0=E_{23} - E_{32}$ is as above, $A_0$ is the corresponding real Cartan subgroup and $\delta_0$ is the Jacobian of the polar coordinates corresponding to the pair $(K,M)$. Thus we have $$\begin{aligned} \lambda_a \star\check{f}(x)= c (\textrm{vol}(M))^2 \delta(t_1) \int_{Q_0} \ g(a) \delta_0( \exp( t H_0)) d t\end{aligned}$$ where the polyhedron $Q_0$ is the interval $[0, \frac{\pi}{2(n-2)}]$ in this case. 
So the above convolution integral becomes $$\begin{aligned} \lambda_a \star \check{f}(x)= c \delta(t_1) (\textrm{vol}(M))^2 \int_0^{\frac{\pi}{2(n-2)}}\ f ( \exp (t_1 H) \ \exp(t H_0) \exp (-t' H)) \delta_0 ( \exp ( t H_0)) d t\end{aligned}$$ where $\delta(t_1)=(\sin t_1)^{2(n-1)} \ \sin 2 t_1$ and $\delta_0(t) =(\sin t)^{2(n-2)} \sin 2t.$ We know that the function $f$ is a spherical function and so it depends only on the norm of the first entry of the product matrix $$\exp (t_1 H) \ \exp(t H_0) \exp (-t' H).$$ After a simple calculation we obtain $$a_{11} = \cos t_1 \cos t' - \cos t \sin t_1 \sin t' .$$ Set $a_1 = \cos t_1 \cos t'$ and $a_2 = \sin t_1 \sin t'$ to obtain $$\begin{aligned} \lambda_a \star \check{f}(x) = c \delta(t_1) (\textrm{vol}(M))^2 \int_0^{\frac{\pi}{2(n-2)}} \ f ( a_1 - a_2 \cos t )\delta_0 ( \exp ( t H_0)) d t.\end{aligned}$$ Next we compute the convolution $\lambda_a \star \lambda_b(f)$ with $a=\exp(t_1 H)$ and $b=\exp(t_2 H)$. Set $h(x)= \lambda_b \star \check{f}(x)$; then $$\begin{aligned} (\lambda_a\star\lambda_b)(f)&=& (\lambda_a \star (\lambda_b \star\check{f}))(e)\\&=&(\lambda_a \star h)(e) =\int_{O_a} h(x)d\lambda_a (x)= h(a) \textrm{vol}(O_a).\end{aligned}$$ Applying polar coordinates on $G$ for the volume of the spherical class $O_a$, $a=\exp(t_1 H)$, we obtain $$\begin{aligned} \textrm{vol} (O_a) = c \int_K \int_K \delta(\exp(t_1 H)) \ d k \ d k^{\prime} = c (\textrm{vol}(K))^2 (\sin ( t_1) ) ^{2(n-1)} \ \sin 2 t_1.\end{aligned}$$ For $h(a)$ we have $$\begin{aligned} h(a)= \lambda_b \star \check{f}(a) = c \ \delta(t_2)(\textrm{vol}(M))^2 \ \int_0^{\frac{\pi}{2(n-2)}} \ f ( a_1 - a_2 \cos t ) (\sin t)^{2(n-2)} \sin 2t \ d t.\end{aligned}$$ We make the change of variable $ u = a_1 - a_2 \cos t$. 
Then $u$ is an increasing function on the interval $[0, \frac{ \pi}{2(n-2)}]$ and it maps this interval onto the interval $$I_{a,b} = [\cos (t_1 + t_2) , \cos t_1 \cos t_2 - \sin t_1 \sin t_2 \cos( \frac{\pi}{2(n-2)}) ].$$ Substituting the new variable $u$ and simplifying the above integral we obtain $$\begin{aligned} h(a) = \lambda_b \star \check{f}(a) = \frac{c \delta(t_2)(\textrm{vol}(M))^2}{a_2^{2(n-1)}} \ \int_{I_{a, b} }f (u ) (a_2^2 - (u - a_1)^2)^{n-2} (a_1 - u ) d u.\end{aligned}$$ Finally for $\lambda_a\star\lambda_b(f)$ we obtain $$\begin{aligned} \lambda_a\star\lambda_b(f)&=& h(a) \textrm{vol}(O_a) \\& =& \frac{(c\textrm{vol}(M)\textrm{vol}(K))^2}{a_2^{2(n-1)}} \delta (t_1) \delta (t_2) \int_{I_{a,\ b} } f (u ) (a_2^2 - (u - a_1)^2)^{n-2}(a_1 - u )d u,\end{aligned}$$ which completes the proof of the theorem. $\blacksquare$\ \[cor:shaffaf13\] Choosing matrices $A$ and $B$ according to the (singular) invariant measures on the spherical classes ${\cal O}_a$ and ${\cal O}_b$ respectively, normalized to be probability measures, the support of the distribution of the product $A B$ is the interval $[\cos(t_1 + t_2), \cos t_1 \cos t_2 - \sin t_1 \sin t_2 \cos( \frac{\pi}{2(n-2)})]$ and its density function is $$\begin{aligned} \frac{2n-2}{(\sin t_1 \sin t_2 \sin \frac{\pi}{2(n-2)})^{2n-2}}(a_2^2 - (u - a_1)^2)^{n-2} (a_1 - u )~,\end{aligned}$$ where $a_1 = \cos t_1 \cos t_2$ and $a_2 = \sin t_1 \sin t_2$.\ \[cor:shaffaf51\] With the notation and hypotheses of Corollary \[cor:shaffaf13\], the convolution of the probability measures $\lambda_a$ and $\lambda_b$ converges weakly to the singular invariant measure on the spherical class through $\exp((t_1+t_2)H)$ as $n\to\infty$. [**Proof**]{} - Since $\lambda_a$ and $\lambda_b$ are probability measures, so is $\lambda_a\star\lambda_b$. 
The support of this measure is the interval $I_{a,b}$, in the appropriate coordinate system, which tends to the single point $\cos (t_1+t_2)$, from which the required result follows. $\blacksquare$ The Symmetric Pair $(SU(1,n), S(U(1) \times U(n)))$ ============================================ Let $G = SU(1,n)$ and $K = S(U(1) \times U(n))$, and let $\mathfrak{g}= \mathfrak{k}\oplus \mathfrak{p}$ be the corresponding Cartan decomposition, where $\mathfrak{g}$ and $\mathfrak{k}$ are the Lie algebras of $G$ and $K$. Let $ \mathfrak{a} \subset \mathfrak{p}$ be $$\begin{aligned} \mathfrak{a} = \{ tH ~ ; ~~ t \in \mathbb{R} \}\end{aligned}$$ where $H= E_{12} + E_{21}$; the restricted roots are given by $\alpha( tH) = t$ and $2\alpha$ with multiplicities $2(n-1)$ and $1$ respectively. The centralizer of $\mathfrak{a}$ in $\mathfrak{k}$ is $\mathfrak{m} = \mathfrak{s}(u(1) \times u(1) \times u(n-1))$ and the corresponding Lie subgroup is $M = S(U(1) \times U(1) \times U(n-1))$. Therefore the pair $(K, M)$ in the case of the non-compact symmetric pair $(SU(1,n), S(U(1) \times U(n)))$ is the same as in the case of the compact symmetric pair $(SU(n+1), S(U(1) \times U(n)))$ treated in the preceding section. \[thm:shaffaf5\] Let $\lambda_a$ and $\lambda_b$ be two (singular) spherical measures concentrated on the $K$-spherical classes ${\cal O}_a$ and ${\cal O}_b$ in the group $G= SU(1,n)$ respectively. Then $\lambda_a\star\lambda_b$ is absolutely continuous relative to the Haar measure on $SU(1,n)$. 
It is a spherical measure and for a continuous spherical function $f$ on $SU(1,n)$ we have $$\begin{aligned} \lambda_a\star\lambda_b(f) = \frac{(c\,\mathrm{vol}(M)\mathrm{vol}(K))^2}{a_2^{2(n-1)}} \delta (t_1) \delta (t_2) \int_{I_{a, b} } f (u )\,(a_2^2- (u - a_1)^2)^{n-2}(a_1 - u ) d u ~ ,\end{aligned}$$ where $a=\exp(t_1H)$, $b=\exp(t_2H)$, $\delta(t)=(\sinh t)^{2(n-1)} \sinh 2 t$, $a_1 = \cosh t_1 \cosh t_2$, $a_2 = \sinh t_1 \sinh t_2$ and $$\begin{aligned} I_{a,b} = [\cosh (t_1 - t_2) , \cosh t_1 \cosh t_2 - \sinh t_1 \sinh t_2 \cos( \frac{\pi}{2(n-2)}) \ ].\end{aligned}$$ [**Proof**]{}- Since both $f$ and $\lambda_a$ are $K$-bi-invariant, $\lambda_a\star f(x)$ is $K$-bi-invariant and therefore to compute $\lambda_a\star f(x)$ we can assume that $x$ is of the form $x= \exp(t'H)$. Let $\{\theta_n\}$ be a sequence of spherical functions converging weakly to the singular measure $\lambda_a$ on the orbit $\mathcal{O}_a$. Applying the polar coordinate (Cartan) decomposition to the convolution $\lambda_a \star \check{f}(x)$ we obtain $$\begin{aligned} \lambda_a \star \check{f}(x) &=& \int_{\mathcal{O}_a} f(y x^{-1}) dy \\ &=& \lim_{n \rightarrow \infty} \int_G \theta_n(g) f(g x^{-1} ) dg \\&=& c\lim_{n \rightarrow \infty} \int_K \int_K \int_A \theta_n(k_1 a' k_2 ) f ( k_1 a' k_2 x^{-1}) \delta(a') da' dk_1 dk_2 \\& =& c \lim_{n \rightarrow \infty } \int_K \int_A \theta_n(a') f(a' k x^{-1}) \delta(a') da' dk \\ &=& c \delta(t_1) \int_K f( a k x^{-1}) dk\end{aligned}$$ where $$\begin{aligned} \delta(t) = \delta(\exp(tH))= (\sinh t)^{2(n-1)} \sinh 2t.\end{aligned}$$ Recall that $M$ is the centralizer group of $A$ in $K$ and it is easy to verify that the function $g$ defined by $$g(k) = f(\exp(t_1 H)k \exp(-t' H))$$ is an $M$-spherical function. 
Using polar coordinates, as in the previous section, the above integral over $K$ reduces to $$\begin{aligned} \lambda_a \star \check{f}(x) &=& c \delta(t_1) \int_K f(\exp(t_1 H)k \exp(-t' H))d k \\&=& c\delta(t_1) \int_M \int_M \ \int_{A_0} \ g(m_1 \ a \ m_2 )\delta_0 (a) d m_1 d a d m_2 \\ & = & c (\textrm{vol}(M))^2 \delta(t_1) \int_{A_0} \ g(a) \delta_0(a) d a ~,\end{aligned}$$ where $H_0=E_{23} - E_{32}$ is as in the previous section, $A_0=\{\exp (tH_0) \}$ is the real Cartan subgroup and $\delta_0$ is as in theorem (\[thm:shaffaf1\]). Thus we have $$\begin{aligned} \lambda_a \star\check{f}(x)= c (\textrm{vol}(M))^2 \delta(t_1) \int_{Q_0} \ g(a) \delta_0( \exp( t H_0)) d t ~,\end{aligned}$$ where the polyhedron $Q_0$ is the interval $[0, \frac{\pi}{2(n-2)}]$. Simplifying we obtain $$\begin{aligned} \lambda_a \star \check{f}(x)= c \delta(t_1) (\textrm{vol}(M))^2 \int_0^{\frac{\pi}{2(n-2)}}\ f ( \exp (t_1 H) \exp(t H_0) \exp (-t' H)) \delta_0 ( \exp ( t H_0)) d t\end{aligned}$$ where $\delta(t_1)=(\sinh t_1)^{2(n-1)} \ \sinh 2 t_1$ and $\delta_0(t) =(\sin t)^{2(n-2)} \sin 2t.$ The function $f$ is spherical and so it depends only on the norm of the first entry of the product matrix $$\exp (t_1 H) \ \exp(t H_0) \exp (-t' H).$$ Now $$a_{11} = \cosh t_1 \cosh t' - \cos t \sinh t_1 \sinh t'.$$ Set $a_1 = \cosh t_1 \cosh t'$ and $a_2 = \sinh t_1 \sinh t'$ to obtain $$\begin{aligned} \lambda_a \star \check{f}(x) = c \delta(t_1) (\textrm{vol}(M))^2 \int_0^{\frac{\pi}{2(n-2)}} \ f ( a_1 - a_2 \cos t )\delta_0 ( \exp ( t H_0)) d t.\end{aligned}$$ Let $h(x)= \lambda_b \star \check{f}(x)$; then $$\begin{aligned} (\lambda_a\star\lambda_b)(f)&=& (\lambda_a \star (\lambda_b \star\check{f}))(e)\\&=&(\lambda_a \star h)(e) =\int_{O_a} h(x)d\lambda_a (x)= h(a) \textrm{vol}(O_a).\end{aligned}$$ Using the decomposition $G=KAK$ we obtain $$\begin{aligned} \textrm{vol} (O_a) = c \int_K \int_K \delta(\exp(t_1 H)) \ d k \ d k^{\prime} = c (\textrm{vol}(K))^2 (\sinh ( t_1) ) ^{2(n-1)} \ \sinh 2 t_1.\end{aligned}$$ Therefore $$\begin{aligned} h(a)= \lambda_b \star \check{f}(a) = c \ \delta(t_2)(\textrm{vol}(M))^2 \ \int_0^{\frac{\pi}{2(n-2)}} f ( a_1 - a_2 \cos t ) (\sin t)^{2(n-2)} \sin 2t \ d t.\end{aligned}$$ The change of variable $ u = a_1 - a_2 \cos t$ maps the interval $[0, \frac{ \pi}{2(n-2)}]$ onto the interval $$I_{a, b} = [\cosh (t_1 - t_2) , \cosh t_1 \cosh t_2 - \sinh t_1 \sinh t_2 \cos( \frac{\pi}{2(n-2)}) \ ].$$ Therefore $$\begin{aligned} h(a) = \lambda_b \star \check{f}(a) = \frac{ c \delta(t_2)(\textrm{vol}(M))^2}{a_2^{2(n-1)}} \int_{I_{a, b} }\ f (u )(a_2^2 - (u - a_1)^2)^{n-2} \ (a_1 - u ) d u.\end{aligned}$$ Finally for $\lambda_a\star\lambda_b(f)$ we obtain $$\begin{aligned} \lambda_a\star\lambda_b(f)&=& h(a) \textrm{vol}(O_a) \\& =& \frac{(c\textrm{vol}(M)\textrm{vol}(K))^2}{a_2^{2(n-1)}} \delta (t_1) \delta (t_2) \int_{I_{a,b} }\ f (u )(a_2^2 - (u - a_1)^2)^{n-2}(a_1 - u ) d u,\end{aligned}$$ which completes the proof of the theorem. $\blacksquare$ \[cor:shaffaf12\] Choosing matrices $A$ and $B$ according to the (singular) invariant measures on the spherical classes ${\cal O}_a$ and ${\cal O}_b$ respectively, normalized to be probability measures, the support of the distribution of the product $A B$ is the interval $[\cosh (t_1 - t_2), \cosh t_1 \ \cosh t_2 - \sinh t_1 \ \sinh t_2 \ \cos( \frac{\pi}{2(n-2)}) ]$ and its density function is $$\begin{aligned} \frac{2n-2}{(\sinh t_1 \sinh t_2 \sin \frac{\pi}{2(n-2)})^{2n-2}}(a_2^2 \ - (u \ - \ a_1)^2)^{n-2} \ (a_1 - u )~,\end{aligned}$$ where $a_1 = \cosh t_1 \cosh t_2$ and $a_2 = \sinh t_1 \sinh t_2$. Furthermore $$\begin{aligned} \lim_{n\to\infty} \lambda_a\star\lambda_b=\lambda_c,~~~~{\rm weakly},\end{aligned}$$ where $\lambda_c$ is the singular invariant probability measure on the spherical class through $\exp((t_1-t_2)H)$. 
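The densities of Corollaries \[cor:shaffaf13\] and \[cor:shaffaf12\] can be checked numerically (a sketch only; the parameter values $n=5$, $t_1=0.7$, $t_2=0.4$ are arbitrary illustrative choices). The antiderivative $(a_2^2-v^2)^{n-1}/(2(n-1))$, $v=u-a_1$, shows the total mass is exactly $1$, and a midpoint rule reproduces this:

```python
import math

def check_density(n, t1, t2, hyperbolic, steps=200_000):
    """Total mass of the product density of Corollary [cor:shaffaf13]
    (compact case) or [cor:shaffaf12] (hyperbolic case), computed with
    the midpoint rule; should be 1 up to discretization error."""
    s, c = (math.sinh, math.cosh) if hyperbolic else (math.sin, math.cos)
    a1 = c(t1) * c(t2)
    a2 = s(t1) * s(t2)
    theta = math.pi / (2 * (n - 2))
    lo = a1 - a2                     # cos(t1 + t2), resp. cosh(t1 - t2)
    hi = a1 - a2 * math.cos(theta)
    norm = (2 * n - 2) / (a2 * math.sin(theta)) ** (2 * n - 2)
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        u = lo + (k + 0.5) * h       # midpoint of the k-th subinterval
        total += norm * (a2 ** 2 - (u - a1) ** 2) ** (n - 2) * (a1 - u) * h
    return total

print(check_density(5, 0.7, 0.4, hyperbolic=False))  # close to 1
print(check_density(5, 0.7, 0.4, hyperbolic=True))   # close to 1
```

The check works for any $n\ge 3$ and any $t_1,t_2>0$, since the normalization constant in the corollaries is precisely the reciprocal of this integral.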
\[rem:shaffaf31\][Note that $\exp(\pm tH)$ are in the same spherical class and therefore $\exp((t_1-t_2)H)$ and $\exp((t_2-t_1)H)$ are in the same spherical class.]{} The symmetric pairs $(SO(n+1), SO(n))$, and $(SO(1,n)^\circ,SO(n))$, $n\ge 3$ ============================================================================== The Lie algebra of the orthogonal group $G=SO(n+1)$ is the algebra of skew-symmetric matrices, i.e. $$\mathfrak{g}= \mathfrak{so}(n+1)= \{ A \ \in M_{n+1} ( \mathbb{R} ) \ | \ A^t = - A \}.$$ For the Cartan decomposition of $\mathfrak{g}$ we have $$\mathfrak{so}(n+1) = \mathfrak{so}(n) \oplus \mathfrak{p},$$ where $\mathfrak{p}$ is the subspace spanned by the matrices of the form $$\begin{aligned} \left[\begin{array}{ccccc} 0& & \xi_1 & \ldots & \xi_n \\ \begin{array}{c} \\ -\xi_1 \\ \vdots \\ \\ -\xi_n \\ \end{array} & & & 0_{n \times n} & \\ \end{array}\right].\end{aligned}$$ A maximal abelian subspace of $\mathfrak{p}$ is $$\mathfrak{a} = \{t H : t \in \mathbb{R} \}$$ where $H$ is the matrix $$\begin{aligned} H = \left[\begin{array}{ccccc} 0& & \ 1 & \ldots & 0 \\ \begin{array}{c} \ -1 \\ \ 0 \\ \vdots \\ \ 0 \\ \end{array} & & & 0_{n \times n} & \\ \end{array}\right].\end{aligned}$$ The centralizer $\mathfrak{m}$ of $\mathfrak{a}$ in $\mathfrak{k}= \mathfrak{so}(n)$ is exactly the Lie algebra $\mathfrak{so}(n-1)$. It is straightforward to check that the centralizer subgroup $M$ is connected, and therefore we have $M=SO(n-1)$ and $$K / M = SO(n) / SO(n-1) \cong S^{n-1}.$$ By a straightforward calculation the nonzero eigenvalues of the operator $$\textrm{ad} H : \mathfrak{so}(n+1) \longrightarrow\mathfrak{so}(n+1)$$ are $\pm i$, each with multiplicity $n-1$. Thus for $(SO(n+1), SO(n))$ we have one positive restricted root $\alpha(tH) = it$ whose multiplicity is $m_\alpha = n-1$. 
Hence $$\delta(\exp(t H)) = \prod_{ \alpha \in \ \Sigma^+} | \sin \alpha (i t H)|^ {m_ {\alpha}} = (\sin ( t) ) ^{n-1}.$$ For the pair $(K,M) = (SO(n), SO(n-1))$ the maximal abelian subspace $\mathfrak{a}_0$ is $$\mathfrak{a}_0 = \{ t H_0 \ | \ t \in \mathbb{R} \},$$ where $H_0$ is $$H_0= \left[\begin{array}{c|c|ccc} 0&0&0&\cdots&0 \\\hline 0&0& 1 & \cdots & 0 \\\hline 0& -1 &&&\\\vdots&\vdots&&0_{n-3}&\\ 0 & 0 &&&\\ \end{array}\right].$$ Therefore the corresponding Jacobian for the pair $(K,M)$ is $$\delta_0(\exp(t H_0)) = \prod_{ \alpha \in \ \Sigma^+} | \sin \alpha (i t H_0)|^ {m_ {\alpha}} = (\sin ( t) ) ^{n-2}.$$ Since $SO(n+1)$ is not simply connected we work with the double cover ${\rm Spin}(n+1)$, which is simply connected; let $$\pi :{\rm Spin(n+1)} \longrightarrow SO(n+1).$$ Since $ \mathfrak{spin}(n+1) = \mathfrak{so}(n+1)$ we have $ \mathfrak{spin}(n+1) = \mathfrak{spin}(n) \oplus \mathfrak{p}$. Note that we identify ${\rm Spin}(n)$ with the pre-image of the subgroup $K = SO(n)$ under the covering map $\pi$. We denote this subgroup by $$\widetilde{K} = {\rm Spin}(n) = \pi^{-1} (SO(n) ).$$ The computation of the restricted positive root and its multiplicity is the same as in the case of $\mathfrak{so}(n+1)$. Now Theorem (\[thm:shaffaf1\]) is applicable to the symmetric pair $({\rm Spin}(n+1), {\rm Spin}(n) )$, but note that a function on the group $SO(n+1)$ can be considered as a function on ${\rm Spin}(n+1)$, and its integral over ${\rm Spin}(n+1)$ is equal to twice its integral over $SO(n+1)$. \[thm:shaffaf23\] Let $\lambda_a$ and $\lambda_b$ be two (singular) spherical measures concentrated on the $K$-spherical classes ${\cal O}_a$ and ${\cal O}_b$ in the group $G= SO(n+1)$ respectively. Then $\lambda_a\star\lambda_b$ is absolutely continuous relative to the Haar measure on $SO(n+1)$. 
It is a spherical measure and for a continuous spherical function $f$ on $SO(n+1)$ we have $$\begin{aligned} \lambda_a\star\lambda_b(f) = \frac{(c\,\mathrm{vol}(M)\mathrm{vol}(K))^2}{a_2^{n-2}} \delta (t_1) \delta (t_2) \int_{I_{a,b} }\ f (u )\,(a_2^2 - (u - a_1)^2)^{\frac{n-3}{2}} d u ~ ,\end{aligned}$$ where $a=\exp(t_1H)$, $b=\exp(t_2H)$, $\delta(t)=(\sin t)^{n-1}$, $a_1=\cos t_1\cos t_2$, $a_2=\sin t_1\sin t_2$ and $$\begin{aligned} I_{a,b} = [\cos (t_1 + t_2) , \cos t_1 \cos t_2 - \sin t_1 \sin t_2 \cos( \frac{\pi}{n-2}) ].\end{aligned}$$ [**Proof**]{}- Since both $f$ and $\lambda_a$ are $K$-bi-invariant, $\lambda_a\star f(x)$ is $K$-bi-invariant and therefore to compute $\lambda_a\star f(x)$ we can assume that $x$ is of the form $x= \exp(t'H)$. Let $\{\theta_n\}$ be a sequence of spherical functions converging weakly to the singular measure $\lambda_a$ on the orbit $\mathcal{O}_a$. Applying the polar coordinate (Cartan) decomposition to the convolution $\lambda_a \star \check{f}(x)$ we have $$\begin{aligned} \lambda_a \star \check{f}(x) &=& \lim_{n \rightarrow \infty} \int_G \theta_n(g) f(g x^{-1} ) dg \\&=& \frac{1}{2} \ \lim_{n \rightarrow \infty} \int_{\widetilde{G}} \theta_n(g) f(g x^{-1}) d g \\& =&\frac{ c}{2}\lim_{n \rightarrow \infty} \int_{ \widetilde{K}} \int_{ \widetilde{K}} \int_{\widetilde{A}} \theta_n(k_1 a' k_2 ) f ( k_1 a' k_2 x^{-1}) \delta(a') da' dk_1 dk_2 \\& =& \frac{c}{2} \lim_{n \rightarrow \infty } \int_{\widetilde{K}} \int_{\widetilde{A}} \theta_n(a') f(a' k x^{-1}) \delta(a') da' dk \\ &=&\frac{c}{2} \delta(t_1) \int_{\widetilde{K}} f( a k x^{-1}) dk \\&=& c \delta(t_1) \int_K f( a k x^{-1}) dk\end{aligned}$$ where $\widetilde{G}={\rm Spin}(n+1)$ and $$\begin{aligned} \delta(t) = \delta(\exp(tH))= (\sin t)^{n-1}.\end{aligned}$$ Recall that $M$ is the centralizer of $A$ in $K$ and that the function $g$ defined by $$g(k) = \ f(\exp(t_1 H)k \exp(-t' H))$$ is an $M$-spherical function. 
Applying the polar coordinate decomposition to the pair $(K, M)$, the above integral becomes $$\begin{aligned} \lambda_a \star \check{f}(x) &=& c \delta(t_1) \int_K f(\exp(t_1 H)k \exp(-t' H))d k \\&=& c\delta(t_1) \int_M \int_M \ \int_{A_0} \ g(m_1 \ a \ m_2 )\delta_0 (a) d m_1 d a d m_2 \\ & = & c (\textrm{vol}(M))^2 \delta(t_1) \int_{A_0} \ g(a) \delta_0(a) d a ~,\end{aligned}$$ where $a = \exp( t H_0)$, $H_0=E_{23} - E_{32}$ is as above, $A_0$ is the real Cartan subgroup and $\delta_0$ is the Jacobian of the polar coordinates corresponding to the pair $(K,M)$. Thus we have $$\begin{aligned} \lambda_a \star\check{f}(x)= c (\textrm{vol}(M))^2 \delta(t_1) \int_{Q_0} \ g(a) \delta_0( \exp( t H_0)) d t ~,\end{aligned}$$ where the polyhedron $Q_0$ is the interval $[0, \frac{\pi}{n-2}]$. Therefore $$\begin{aligned} \lambda_a \star \check{f}(x)= c \delta(t_1) (\textrm{vol}(M))^2 \int_0^{\frac{\pi}{n-2}}\ f ( \exp (t_1 H) \ \exp(t H_0) \exp (-t' H)) \delta_0 ( \exp ( t H_0)) d t\end{aligned}$$ where $\delta(t_1)=(\sin t_1)^{n-1}$ and $\delta_0(t) =(\sin t)^{n-2}.$ The function $f$ is spherical and so it depends only on the norm of the first entry of the product matrix $\exp (t_1 H) \exp(t H_0) \exp (-t' H),$ which is given by $ a_{11} = \cos t_1 \cos t' - \cos t \sin t_1 \ \sin t'.$ Set $a_1 = \cos t_1 \cos t'$ and $a_2 = \sin t_1 \sin t'$ to obtain $$\begin{aligned} \lambda_a \star \check{f}(x) = c \delta(t_1) (\textrm{vol}(M))^2 \int_0^{\frac{\pi}{n-2}} f ( a_1 - a_2 \cos t )\delta_0 ( \exp ( t H_0)) d t.\end{aligned}$$ Let $h(x)= \lambda_b \star \check{f}(x)$ with $a=\exp(t_1 H)$ and $b=\exp(t_2 H)$; then $$\begin{aligned} (\lambda_a\star\lambda_b)(f)&=& (\lambda_a \star (\lambda_b \star\check{f}))(e)\\&=&(\lambda_a \star h)(e)\\ & =&\int_{O_a} h(x)d\lambda_a (x)= h(a) \textrm{vol}(O_a).\end{aligned}$$ Using the Cartan decomposition we obtain $$\begin{aligned} \textrm{vol} (O_a) =\frac {c}{2} \int_{\widetilde{K}} \int_{\widetilde{K}} \delta(\exp(t_1 H)) \ d k \ d k^{\prime} = 2 c (\textrm{vol}(K))^2 (\sin ( t_1) ) ^{n-1}.\end{aligned}$$ Now $$\begin{aligned} h(a)= \lambda_b \star \check{f}(a) = c \ \delta(t_2)(\textrm{vol}(M))^2 \ \int_0^{\frac{\pi}{n-2}} \ f ( a_1 - a_2 \cos t ) (\sin t)^{n-2} \ d t.\end{aligned}$$ The change of variable $ u = a_1 - a_2 \cos t$ maps the interval $[0, \frac{ \pi}{n-2}]$ onto the interval $$I_{a,b} = [\cos (t_1 + t_2) , \cos t_1 \cos t_2 - \sin t_1 \sin t_2 \cos( \frac{\pi}{n-2}) \ ].$$ The expression for $h(a)$ becomes $$\begin{aligned} h(a) = \lambda_b \star \check{f}(a) = \frac{c \delta(t_2)(\textrm{vol}(M))^2}{a_2^{n-2}} \ \int_{I_{a,b} } f (u ) (a_2^2 - (u - a_1)^2)^{\frac{n-3}{2}} d u.\end{aligned}$$ Finally for $\lambda_a\star\lambda_b(f)$ we obtain $$\begin{aligned} \lambda_a\star\lambda_b(f)&=& h(a) \textrm{vol}(O_a) \\& =& \frac{(c\textrm{vol}(M)\textrm{vol}(K))^2 \delta (t_1) \delta (t_2)}{a_2^{n-2}} \int_{I_{a,b} } \ f (u ) (a_2^2 - (u - a_1)^2)^{\frac{n-3}{2}} d u,\end{aligned}$$ which completes the proof of the theorem. $\blacksquare$ \[cor:shaffaf26\] Choosing matrices $A$ and $B$ according to the $($singular$)$ invariant measures on the spherical classes ${\cal O}_a$ and ${\cal O}_b$ respectively, normalized to be probability measures, the support of the distribution of the product $A B$ is the interval $$[\cos (t_1 + t_2), \cos t_1 \cos t_2 - \sin t_1 \sin t_2 \cos( \frac{\pi}{n-2})]$$ and its density function is $$\begin{aligned} \frac{(a_2^2 - (u - a_1)^2)^{\frac{n-3}{2}}}{a_2^{n-2} \int_0^{\frac{\pi}{n-2}} (\sin t)^{n-2} dt} ~,\end{aligned}$$ where $a_1=\cos t_1\cos t_2$ and $a_2=\sin t_1\sin t_2$. Furthermore $$\begin{aligned} \lim_{n\to\infty} \lambda_a\star\lambda_b=\lambda_c,~~~~{\rm weakly},\end{aligned}$$ where $\lambda_c$ is the singular invariant probability measure on the spherical class through $\exp((t_1+t_2)H)$. For the symmetric pair $(SO(1,n)^\circ,SO(n))$ the calculations are similar and therefore are not repeated. 
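The density of Corollary \[cor:shaffaf26\] (and of its hyperbolic analogue below) can likewise be checked numerically; this is a sketch with arbitrary illustrative parameters ($n=7$, $t_1=0.8$, $t_2=0.5$). The substitution $u=a_1-a_2\cos t$ shows the density integrates to exactly $1$, and a midpoint rule reproduces this:

```python
import math

def so_density_mass(n, t1, t2, hyperbolic=False, steps=200_000):
    """Total mass of the product density of Corollary [cor:shaffaf26]
    (compact case) or its hyperbolic analogue; should come out as 1."""
    s, c = (math.sinh, math.cosh) if hyperbolic else (math.sin, math.cos)
    a1, a2 = c(t1) * c(t2), s(t1) * s(t2)
    theta = math.pi / (n - 2)

    # Normalizing denominator a2^{n-2} * int_0^{pi/(n-2)} (sin t)^{n-2} dt.
    h = theta / steps
    denom = a2 ** (n - 2) * sum(
        math.sin((k + 0.5) * h) ** (n - 2) for k in range(steps)) * h

    # Integrate the density over its support by the midpoint rule.
    lo, hi = a1 - a2, a1 - a2 * math.cos(theta)
    h = (hi - lo) / steps
    return sum((a2 ** 2 - (lo + (k + 0.5) * h - a1) ** 2) ** ((n - 3) / 2)
               for k in range(steps)) * h / denom

print(so_density_mass(7, 0.8, 0.5))                   # close to 1
print(so_density_mass(7, 0.8, 0.5, hyperbolic=True))  # close to 1
```

The support endpoints are $a_1-a_2=\cos(t_1+t_2)$ in the compact case and $\cosh(t_1-t_2)$ in the hyperbolic case, exactly as in the corollaries.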
We obtain \[thm:shaffaf24\] Let $\lambda_a$ and $\lambda_b$ be two (singular) spherical measures concentrated on the $K$-spherical classes ${\cal O}_a$ and ${\cal O}_b$ in the group $G= SO(1,n)$ respectively. Then $\lambda_a\star\lambda_b$ is absolutely continuous relative to the Haar measure on $SO(1,n)$. It is a spherical measure and for a continuous spherical function $f$ on $SO(1,n)$ we have $$\begin{aligned} \lambda_a\star\lambda_b(f) = (c\rm{vol}(M)\rm{vol}(K))^2 \delta (t_1) \delta (t_2) \int_{I_{a,b} }\ f (u )(a_2^2 - (u - a_1)^2)^{\frac{n-3}{2}} d u ~ ,\end{aligned}$$ where $a=\exp(t_1H)$, $b=\exp(t_2H)$, $\delta(t)=(\sinh t)^{n-1}$, and $$\begin{aligned} I_{a,b} = [\cosh (t_1 - t_2) , \cosh t_1 \cosh t_2 - \sinh t_1 \sinh t_2 \cos( \frac{\pi}{n-2}) ].\end{aligned}$$ \[cor:shaffaf27\]Choosing matrices A and B according to the $($singular$)$ invariant measures on the spherical classes ${\cal O}_a$ and ${\cal O}_b$ respectively and normalized to be probability measures, then the support of the distribution of the product $A B$ is the interval $$[\cosh (t_1 - t_2), \cosh t_1 \cosh t_2 - \sinh t_1 \sinh t_2 \cos( \frac{\pi}{n-2})]$$ and its density function is $$\begin{aligned} \frac{(a_2^2 - (u - a_1)^2)^{\frac{n-3}{2}}}{a_2^{n-2} \int_0^{\frac{\pi}{n-2}} (\sin t)^{n-2} dt} ~,\end{aligned}$$ where $a_1=\cosh t_1\cosh t_2$ and $a_2=\sinh t_1\sinh t_2$. Furthermore $$\begin{aligned} \lim_{n\to\infty} \lambda_a\star\lambda_b=\lambda_c,~~~~{\rm weakly},\end{aligned}$$ where $\lambda_c$ is the singular invariant probability measure on the spherical class through $\exp((t_1-t_2)H)$. The symmetric pairs $(Sp(n+1), Sp(1)\times Sp(n))$, and $(Sp(1,n),Sp(1)\times Sp(n))$ ============================================================================= Let $G= Sp(n+1)$ and $K = Sp(1) \times Sp(n)$, then $G$ is simply connected, $K$ connected and $G/K$ is the quaternionic projective space. 
The Lie algebra of $G$ is $\mathfrak{g}= sp(n+1)$, the space of complex matrices $X$ satisfying $ JX + X^t J = 0$. If we write $X$ in the form $$\left[\begin{array}{c|c} X_1&X_2\\\hline X_3&X_4 \end{array}\right]$$ where $X_1, X_2, X_3, X_4$ are matrices of degree $n+1$, the condition $ JX + X^t J = 0$ gives $$\begin{aligned} X_4 = - X_1^t ~~~ X_3 = X_3^t ~~~ X_2 = X_2^t\end{aligned}$$ and the Lie algebra of the subgroup $K$ is $$\begin{aligned} \mathfrak{k}= \left\{\left[\begin{array}{cc|cc} x_{11}&x_{12}&0&0\\ -\overline{x_{12}}&\overline{x_{11}}&0&0\\\hline 0&0&Y_{11}&Y_{12}\\ 0&0&-\overline{Y_{12}}&\overline{Y_{11}} \end{array}\right]; \ \textrm{$x_{ij} \in \mathbb{C}$ and $Y_{11} \in \mathfrak{u}(n)$, $Y_{12}$ is $ n \times n$ symmetric}\right\}.\end{aligned}$$ Let $\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{p}$ be the Cartan decomposition then $\mathfrak{p}$ is the subspace of matrices of the form $$\begin{aligned} \left[\begin{array}{cc|cc} 0&0&Z_{13}&-\overline{Z_{14}}\\ 0&0&-Z_{14}&-\overline{Z_{13}}\\\hline -\overline{Z_{13}}^t&\overline{Z_{14}}^t&0&0\\ Z_{14}^t&Z_{13}^t&0&0 \end{array}\right] \ ~,\end{aligned}$$ where $Z_{ij}$ are $1 \times n$ complex matrices. A maximal abelian subspace $ \mathfrak{a} \subset \mathfrak{p}$ is $ \mathfrak{a} = \{t H ~;~~ t \in \mathbb{R} \}$ where $H$ is the matrix $$\begin{aligned} H = E_{32} + E_{41}-E_{23}-E_{14}.\end{aligned}$$ The restricted roots are given by $\alpha( tH) = it$ and $2\alpha$ with multiplicities $8(n-1)$ and $2$ respectively. The centralizer of $\mathfrak{a}$ in $\mathfrak{k}$ is $\mathfrak{m} = \mathfrak{sp}(1) \times \mathfrak{sp}(1) \times \mathfrak{sp}(n-1)$, the corresponding Lie subgroup $M$ is connected and $M = Sp(1) \times Sp(1) \times Sp(n-1)$. \[thm:shaffaf22\] Let $\lambda_a$ and $\lambda_b$ be two (singular) spherical measures concentrated on the $K$-spherical classes ${\cal O}_a$ and ${\cal O}_b$ in the group $G= Sp(n+1)$ respectively. 
Then $\lambda_a\star\lambda_b$ is absolutely continuous relative to the Haar measure on $Sp(n+1)$. It is a spherical measure and for a continuous spherical function $f$ on $Sp(n+1)$ we have $$\begin{aligned} \lambda_a\star\lambda_b(f) = \frac{(c\rm{vol}(M)\rm{vol}(K))^2}{a_2^{8n-12}} \delta (t_1) \delta (t_2) \int_{I_{a, b} } f (u )(a_2^2 - (u - a_1)^2)^{4n- \frac{15}{2}} (a_1 - u )^2 d u ~ ,\end{aligned}$$ where $c$ is a constant, $a=\exp(t_1H)$, $b=\exp(t_2H)$, $\delta(t)=(\sin t)^{8(n-1)} \ (\sin 2 t)^2$, $a_1 = \cos t_1 \cos t_2$, $a_2 = \sin t_1 \sin t_2$ and $$\begin{aligned} I_{a,b} = [\cos (t_1 + t_2) , \ \cos t_1 \cos t_2 - \sin t_1 \sin t_2 \cos( \frac{\pi}{8(n-2)}) ].\end{aligned}$$ [**Proof**]{}- Since both $f$ and $\lambda_a$ are $K$-bi-invariant, $\lambda_a\star \check{f}(x)$ is $K$-bi-invariant and therefore to compute $\lambda_a\star \check{f}(x)$ we can assume that $x$ is of the form $x= \exp(t'H)$. Let $\{\theta_n\}$ be a sequence of spherical functions converging weakly to the singular measure $\lambda_a$ on the orbit $\mathcal{O}_a$. Applying the polar (Cartan) coordinate decomposition for the convolution $\lambda_a \star \check{f}(x)$ we have $$\begin{aligned} \lambda_a \star \check{f}(x) &=& \int_{\mathcal{O}_a} f(y x^{-1}) dy \\ &=& \lim_{n \rightarrow \infty} \int_G \theta_n(g) f(g x^{-1} ) dg \\&=& c\lim_{n \rightarrow \infty} \int_K \int_K \int_A \theta_n(k_1 a' k_2 ) f ( k_1 a' k_2 x^{-1}) \delta(a') da' dk_1 dk_2 \\& =& c \lim_{n \rightarrow \infty } \int_K \int_A \theta_n(a') f(a' k x^{-1}) \delta(a') da' dk \\ &=& c \delta(t_1) \int_K f( a k x^{-1}) dk\end{aligned}$$ where $$\begin{aligned} \delta(t) = \delta(\exp(tH))= (\sin t)^{8(n-1)} (\sin 2t)^2.\end{aligned}$$ The function $g$ defined by $$g(k) = \ f(\exp(t_1 H)k \exp(-t' H))$$ is an $M$-spherical function. 
Applying the polar coordinates decomposition to the pair $(K,M)$, the above integral over $K$ becomes $$\begin{aligned} \lambda_a \star \check{f}(x) &=& c \delta(t_1) \int_K f(\exp(t_1 H)k \exp(-t' H))d k \\&=& c\delta(t_1) \int_M \int_M \ \int_{A_0} \ g(m_1 \ a \ m_2 )\delta_0 (a) d m_1 d a d m_2 \\ & = & c (\textrm{vol}.(M))^2 \delta(t_1) \int_{A_0} \ g(a) \delta_0(a) d a ~,\end{aligned}$$ where $a = \exp( t H_0)$, $H_0=E_{74} + E_{83}-E_{56} -E_{65}$ is as above, $A_0$ is the corresponding real Cartan subgroup and $\delta_0$ is the Jacobian of the polar coordinates corresponding to the pair $(K,M)$. Thus we have $$\begin{aligned} \lambda_a \star\check{f}(x)= c (\textrm{vol}.(M))^2 \delta(t_1) \int_{Q_0} \ g(a) \delta_0( \exp( t H_0)) d t\end{aligned}$$ where the polyhedron $Q_0$ is the interval $[0, \frac{\pi}{8(n-2)}]$ in this case. So the above convolution integral becomes $$\begin{aligned} \lambda_a \star \check{f}(x)= c \delta(t_1) (\textrm{vol}.(M))^2 \int_0^{\frac{\pi}{8(n-2)}}\ f ( \exp (t_1 H) \ \exp(t H_0) \exp (-t' H)) \delta_0 ( \exp ( t H_0)) d t\end{aligned}$$ where $\delta(t_1)=(\sin t_1)^{8(n-1)} \ (\sin 2 t_1)^2$ and $\delta_0(t) =(\sin t)^{8(n-2)} (\sin 2t)^2.$ We know that the function $f$ is a spherical function and so it depends only on the norm of the first entry $a_{11}$ of the product matrix $\exp (t_1 H) \ \exp(t H_0) \exp (-t' H)$, and after a simple calculation we obtain $$a_{11} = \cos t_1 \cos t' - \cos t \sin t_1 \sin t' .$$ Set $a_1 = \cos t_1 \cos t'$ and $a_2 = \sin t_1 \sin t'$ to obtain $$\begin{aligned} \lambda_a \star \check{f}(x) = c \delta(t_1) (\textrm{vol}.(M))^2 \int_0^{\frac{\pi}{8(n-2)}} \ f ( a_1 - a_2 \cos t )\delta_0 ( \exp ( t H_0)) d t\end{aligned}$$ Next we compute the convolution $\lambda_a \star \lambda_b(f)$ with $a=\exp(t_1 H)$ and $b=\exp(t_2 H)$. 
Assume that $h(x)= \lambda_b \star \check{f}(x)$. Then $$\begin{aligned} (\lambda_a\star\lambda_b)(f)&=& (\lambda_a \star (\lambda_b \star\check{f}))(e)\\&=&(\lambda_a \star h)(e) =\int_{O_a} h(x)d\lambda_a (x)= h(a) \textrm{vol}(O_a).\end{aligned}$$ Applying polar coordinates on $G$ for the volume of the spherical class $O_a$, $a=\exp(t_1 H)$, we obtain $$\begin{aligned} \textrm{vol} (O_a) = c \int_K \int_K \delta(\exp(t_1 H)) \ d k \ d k^{\prime} = c (\textrm{vol}(K))^2 (\sin ( t_1) ) ^{8(n-1)} \ (\sin 2 t_1)^2.\end{aligned}$$ For $h(a)$ we have $$\begin{aligned} h(a)= \lambda_b \star \check{f}(a) = c \ \delta(t_2)(\textrm{vol}(M))^2 \ \int_0^{\frac{\pi}{8(n-2)}} \ f ( a_1 - a_2 \cos t ) (\sin t)^{8(n-2)} (\sin 2t)^2 \ d t.\end{aligned}$$ We make the change of variable $ u = a_1 - a_2 \cos t$. Then $u$ is an increasing function on the interval $[0, \frac{ \pi}{8(n-2)}]$ and it maps this interval onto the interval $$I_{a,b} = [\cos (t_1 + t_2) , \cos t_1 \cos t_2 - \sin t_1 \sin t_2 \cos( \frac{\pi}{8(n-2)}) ].$$ Substituting the new variable $u$ and simplifying the above integral we obtain $$\begin{aligned} h(a) = \lambda_b \star \check{f}(a) = \frac{c \delta(t_2)(\textrm{vol}(M))^2}{a_2^{8n-12}} \ \int_{I_{a, b} }f (u ) (a_2^2 - (u - a_1)^2)^{4n- \frac{15}{2}} (a_1 - u )^2 d u\end{aligned}$$ Finally for $\lambda_a\star\lambda_b(f)$ we obtain $$\begin{aligned} \lambda_a\star\lambda_b(f)&=& h(a) \textrm{vol}(O_a) \\& =& \frac{(c\textrm{vol}(M)\textrm{vol}(K))^2}{a_2^{8n-12}} \delta (t_1) \delta (t_2) \int_{I_{a, b} } f (u )(a_2^2 - (u - a_1)^2)^{4n- \frac{15}{2}} (a_1 - u )^2 d u\end{aligned}$$ which completes the proof of the theorem. 
$\blacksquare$\ \[cor:shaffaf23\]Choosing matrices A and B according to the (singular) invariant measures on the spherical classes ${\cal O}_a$ and ${\cal O}_b$ respectively and normalized to be probability measures, then the support of the distribution of the product $A B$ is the interval $[\cos(t_2 + t_1), \cos t_1 \cos t_2 - \sin t_1 \sin t_2 \cos( \frac{\pi}{8(n-2)})]$ and its density function is $$\begin{aligned} \frac{1}{a_2^{8n-12}\int_0^{\frac{\pi}{8(n-2)}}(\sin t)^{8n-14}(\cos t)^2 dt}(a_2^2 - (u - a_1)^2)^{4n- \frac{15}{2}} (a_1 - u )^2~,\end{aligned}$$ where $a_1 = \cos t_1 \cos t_2$ and $a_2 = \sin t_1 \sin t_2$. Furthermore $$\begin{aligned} \lim_{n\to\infty} \lambda_a\star\lambda_b=\lambda_c,~~~~{\rm weakly},\end{aligned}$$ where $\lambda_c$ is the singular invariant probability measure on the spherical class through $\exp((t_1+t_2)H)$. For the symmetric pair $(Sp(1,n),Sp(1) \times Sp(n))$ the calculations are similar and therefore are not repeated. We obtain \[thm:shaffaf222\] Let $\lambda_a$ and $\lambda_b$ be two (singular) spherical measures concentrated on the $K$-spherical classes ${\cal O}_a$ and ${\cal O}_b$ in the group $G= Sp(1,n)$ respectively. Then $\lambda_a\star\lambda_b$ is absolutely continuous relative to the Haar measure on $Sp(1,n)$. 
It is a spherical measure and for a continuous spherical function $f$ on $Sp(1,n)$ we have $$\begin{aligned} \lambda_a\star\lambda_b(f) = \frac{(c\rm{vol}(M)\rm{vol}(K))^2}{a_2^{8n-12}} \delta (t_1) \delta (t_2) \int_{I_{a,\ b} } f (u )(a_2^2 - (u - a_1)^2)^{4n- \frac{15}{2}} (a_1 - u )^2 d u ~ ,\end{aligned}$$ where $a=\exp(t_1H)$, $b=\exp(t_2H)$, $\delta(t)=(\sinh t)^{8(n-1)}(\sinh 2t)^2$, and $a_1 = \cosh t_1 \cosh t_2$, $a_2 = \sinh t_1 \sinh t_2$ $$\begin{aligned} I_{a,b} = [\cosh (t_1 - t_2) , \cosh t_1 \cosh t_2 - \sinh t_1 \sinh t_2 \cos( \frac{\pi}{8(n-2)}) ].\end{aligned}$$ \[cor:shaffaf223\]Choosing matrices A and B according to the (singular) invariant measures on the spherical classes ${\cal O}_a$ and ${\cal O}_b$ respectively and normalized to be probability measures, then the support of the distribution of the product $A B$ is the interval $[\cosh (t_2 -t_1), \cosh t_1 \cosh t_2 - \sinh t_1 \sinh t_2 \cos( \frac{\pi}{8(n-2)})]$ and its density function is $$\begin{aligned} \frac{1}{a_2^{8n-12}\int_0^{\frac{\pi}{8(n-2)}}(\sin t)^{8n-14}(\cos t)^2 dt}(a_2^2 - (u - a_1)^2)^{4n- \frac{15}{2}} (a_1 - u )^2~,\end{aligned}$$ where $a_1 = \cosh t_1 \cosh t_2$ and $a_2 = \sinh t_1 \sinh t_2$. Furthermore $$\begin{aligned} \lim_{n\to\infty} \lambda_a\star\lambda_b=\lambda_c,~~~~{\rm weakly},\end{aligned}$$ where $\lambda_c$ is the singular invariant probability measure on the spherical class through $\exp((t_1-t_2)H)$. Convergence to Haar Measure =========================== It was noted that $\lambda_a\star\lambda_b$ behaves approximately as in the abelian case when $n\to\infty$. In this section we determine the rate of convergence of $(\lambda_a\star\lambda_b)^{l(n)}$ to the Haar measure as $n\to\infty$. More precisely we prove \[thm:jshaffaf2\] Let $\lambda_a$ denote the invariant measure on the spherical class ${\mathcal O}_a$. 
Then $(\lambda_a\star\lambda_b)^{l(n)}$ converges to the Haar measure on $SU(n)$ as $n\to\infty$ if $l(n)\ge c\log n$ where $c$ is a constant depending on the choice of the spherical classes ${\mathcal O}_a$ and ${\mathcal O}_b$. To interpret this theorem let ${\mathcal O}_a$ and ${\mathcal O}_b$ denote spherical classes where $a=\exp(t_1H)$ and $b=\exp(t_2H)$. We had observed that as $n\to\infty$ the product ${\mathcal O}_a\cdot{\mathcal O}_b$ converges to the spherical measure concentrated on the spherical class passing through $\exp((t_1+t_2)H)$. The measure $(\lambda_a\star\lambda_b)^{l(n)}$ represents the empirical measure of products $$A_1 B_1 A_2 B_2 \ldots A_{l(n)}B_{l(n)}$$ where $A_i$’s are chosen randomly on ${\mathcal O}_a$ and similarly for $B_j$’s. Theorem \[thm:jshaffaf2\] asserts that for $l(n)$ of the stated form this empirical measure converges weakly to the Haar measure on $G=SU(n)$ as $n\to\infty$. The proof of this theorem requires Schur-Weyl theory on the representations of $SU(n)$ (see \[B\] or \[W\] for an account of Schur-Weyl theory). To fix notation we recall the relevant facts. Let $T$ denote a Young diagram, i.e., a graphical representation of a partition $m=m_1+\ldots +m_k$, $k\le n$, with $m_i\ge m_{i+1}$. A Young diagram $T$ filled with integers $1,2,\ldots,m$ is denoted by $\{T\}$ and called a Young tableau. A standard Young tableau is one such that the integers are strictly increasing along rows and columns. We fix the enumeration of the boxes in a Young diagram by starting at the upper left corner and moving along columns consecutively. With this enumeration of the squares in a Young diagram $T$, two subgroups of the symmetric group ${\cal S}_m$ are specified, namely, the subgroup $H=H_T$ consisting of all permutations preserving the rows, and $H^\prime=H^\prime_T$ consisting of all permutations preserving the columns. 
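To make these definitions concrete, the following small sketch (ours; the shape $(2,1)$ with $m=3$ is an arbitrary example) enumerates the row-preserving subgroup $H_T$ and the column-preserving subgroup $H^\prime_T$ for the column-major enumeration described above, together with the standard Young tableaux of that shape:

```python
from itertools import permutations

# Shape (2,1) with m = 3, boxes enumerated down the first column and then
# the second, as in the text: rows {1, 3} and {2}, columns {1, 2} and {3}.
rows = [{1, 3}, {2}]
cols = [{1, 2}, {3}]

def preserves(blocks, perm):
    """Does perm (a dict on {1,...,m}) map every block onto itself?"""
    return all({perm[x] for x in block} == block for block in blocks)

m = 3
perms = [dict(zip(range(1, m + 1), p)) for p in permutations(range(1, m + 1))]
H = [p for p in perms if preserves(rows, p)]        # row-preserving subgroup
H_prime = [p for p in perms if preserves(cols, p)]  # column-preserving subgroup
assert len(H) == 2 and len(H_prime) == 2

# Standard Young tableaux of shape (2,1): entries strictly increase along
# rows and columns; the filling [[a, b], [c]] is standard iff a < b and a < c.
std = [(a, b, c) for (a, b, c) in permutations((1, 2, 3)) if a < b and a < c]
assert len(std) == 2    # [[1,2],[3]] and [[1,3],[2]]
```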
Let ${\bf Z}[{\cal S}_m]$ be the [*integral group algebra*]{} of the symmetric group, i.e., formal linear combinations with integer coefficients and multiplication inherited from the group law in ${\cal S}_m$. Define the [*Young symmetrizer*]{} ${\sf{C}}={\sf{C}}_T\in {\bf Z}[{\cal S}_m]$ as $$\begin{aligned} {\sf{C}}={\sf{C}}_T=(\sum_{\tau\in H^\prime}\epsilon_\tau \tau) (\sum_{\sigma\in H}\sigma)= \sum_{\sigma\in H,\tau\in H^\prime} \epsilon_\tau \tau\sigma.\end{aligned}$$ The symmetric group ${\cal S}_m$ and therefore its group algebra ${\bf Z}[{\cal S}_m]$ act on the tensor space $T^m(V)$. In fact, given a tensor $v_{i_1}\otimes\cdots\otimes v_{i_m}$, $v_{i_j}\in V$, and $\sigma\in {\cal S}_m$, the action of $\sigma$ is given by $$\begin{aligned} v_{i_1}\otimes\cdots\otimes v_{i_m} \buildrel\sigma\over\longrightarrow v_{i_{\sigma(1)}}\otimes\cdots\otimes v_{i_{\sigma(m)}}.\end{aligned}$$ Notice that this action of the permutation group commutes with the induced action of $G=SU(n)$ on $T^m(V)$, and therefore we have a representation $\tau_m$ of $G\times {\cal S}_m$ on $T^m(V)$. It also follows that the image of $T^m(V)$ under a Young symmetrizer is invariant under $G$. It is well-known that \[thm:Young2\] Every partition $T:m=m_1+\cdots +m_k$ with $m_1\ge m_2\ge\cdots\ge m_k$ determines a unique irreducible representation $\lambda_T$ of ${\cal S}_m$, and every irreducible representation of ${\cal S}_m$ is of the form $\lambda_T$. The degree of $\lambda_T$ is the number of standard Young tableaux whose underlying Young diagram is $T$. The basic result of Schur-Weyl theory can be summarized as follows: \[thm:Young1\] The representation $\rho_T$ of $G$ is irreducible. For every Young diagram $T$ corresponding to a partition of $m$, let $Z_T\subset T^m(V)$ be the minimal linear subspace containing ${\rm Im}{\sf{C}}_T$ and invariant under the action of $SU(n)\times {\cal S}_m$. 
$Z_T$ has dimension $\deg (\rho_T)\deg (\lambda_T)$ and is irreducible under the representation $\tau_T=\rho_T\otimes\lambda_T$ of $SU(n)\times {\cal S}_m$. $\deg (\rho_T)$ is equal to the number of semi-standard Young tableaux whose underlying diagram is $T$. Furthermore $T^m(V)$ admits the decomposition, as a $G\times {\cal S}_m$-module (under $\tau_m$), $$\begin{aligned} T^m(V)\simeq \sum_T Z_T,\end{aligned}$$ where the summation is over all partitions $T$ of $m$ with $k\le n$ parts. Let ${\cal A}_T(G)$ and ${\cal A}_T({\cal S}_m)$ denote the algebras of linear transformations of $Z_T$ generated by the matrices $\rho_T(g)\otimes I$, ($g\in G$), and $I\otimes \lambda_T(\sigma)$, ($\sigma\in {\cal S}_m$). Then the full matrix algebra on $Z_T$ has the decomposition ${\cal A}_{T}(G)\otimes {\cal A}_{T}({\cal S}_m)$. An irreducible representation $\rho$ of $SU(n)$ occurs in $L^2(G/K)$ if and only if it has a $K$-fixed vector (Frobenius reciprocity), and since $(G,K)$ is a symmetric pair the space of $K$-fixed vectors is one dimensional. We have \[prop:jshaffaf1\] Let $G=SU(n)$ and $K=S(U(k) \times U(n-k))$ where $k\ge n-k$. An irreducible representation $\rho$ of $G$ has a $K$-fixed vector if and only if the corresponding Young diagram is of the form =.04in $ \begin{array}{llllll} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ &\fbox{}&\ldots&\fbox{}\fbox{}&\ldots&\fbox{}\\[-0.25cm] &\fbox{}&\ldots&\fbox{}\fbox{}&\ldots&\fbox{}\\[-0.2cm] &\vdots&&\vdots&&\vdots\\[-0.25cm] &\fbox{}&\ldots&\fbox{}\fbox{}&\ldots&\fbox{}\\[-0.25cm] &\fbox{}&\ldots&\fbox{}&&\\[-0.25cm] &\fbox{}&\ldots&\fbox{}&&\\[-0.2cm] &\vdots&&\vdots&&\\[-0.25cm] &\fbox{}&\ldots&\fbox{}&&\\ \end{array} $ .3 cm where there are $k$ squares in the first column and $n-k$ squares in the last, and the numbers of columns of lengths $k$ and $n-k$ are equal. 
[**Proof**]{} - Let $T$ be a Young diagram of the form specified in the lemma and let $r$ denote the number of columns of length $k$ (or $n-k$), and $\{T\}$ denote the semi-standard Young tableau where the first $r$ columns are filled with integers $1,\ldots,k$ and the last $r$ columns are filled with integers $k+1,\ldots,n$. The action of $g\in U(k)\times U(n-k)$ on the vector $v_T$ corresponding to $\{T\}$ is given by $$\begin{aligned} v_T\longrightarrow (\det g_1 \det g_2)^r \, v_T,~~~~ {\rm where}~~ g=\begin{pmatrix} g_1&0\cr 0&g_2 \end{pmatrix}\end{aligned}$$ Therefore $v_T$ is fixed by $K$. The converse statement that the existence of a $K$-fixed vector implies the corresponding Young diagram is of the required form will be proven only for $k=n-1$ which is the only case needed here. We make use of the following simple \[lem:jshaffaf2\] Let ${\mathcal T}\subset SU(n)=G$ be the maximal torus of diagonal matrices. An irreducible representation $\rho_T$ of $G$ contains a ${\mathcal T}$-fixed vector if and only if the corresponding Young diagram contains $rn$ squares for some positive integer $r$, and the fixed vector is represented by a Young tableau with the same number of 1’s, 2’s,$\ldots, n$’s. [**Proof of Lemma**]{} - Let $e^{i\tau_j}$ denote the diagonal entries of a matrix in ${\mathcal T}$. Then the only relation among $\tau_j$’s is $\sum\tau_j=0$. Therefore there is a vector in the representation space of $\rho_T$ fixed by $\mathcal{T}$ if and only if there is a semi-standard Young tableau $\{T\}$ containing the same number of 1’s, 2’s,$\ldots, n$’s which proves the lemma. Let $\rho_T$ denote the irreducible representation of $U(n)$ and let the Young diagram $T$ correspond to the partition $m=m_1+\ldots +m_n$. 
According to the Branching Law \[Kn, page 569\] the restriction of $\rho_T$ to $U(n-1)$ decomposes into a direct sum of irreducible representations with multiplicity one according to partitions $l=l_1+\ldots +l_{n-1}$ such that $$\label{eq:jshaffaf11} m_1\ge l_1\ge m_2\ge l_2\ge \ldots \ge l_{n-1}\ge m_n.$$ In order for the restriction of $\rho_T$ to $U(n-1)$ to contain the representation $\det^r$ it is necessary and sufficient that one of the partitions of $l$ be of the form $$\begin{aligned} l=l^\prime+l^\prime+\ldots +l^\prime, ~~~{\rm that~is}~~l=rl^\prime.\end{aligned}$$ Therefore by (\[eq:jshaffaf11\]), the representation $\det^r$ occurs in the restriction of $\rho_T$ to $U(n-1)$ if and only if $$\label{eq:jshaffaf12} l^\prime = m_2=m_3=\ldots =m_{n-1}.$$ Since for irreducible representation of $SU(n)$ it is only necessary to consider Young diagrams with $n-1$ rows, it follows from (\[eq:jshaffaf12\]) that the restriction of an irreducible representation $\rho_T$ of $SU(n)$ to $U(n-1)$ contains $\det^r$ if and only if $$\begin{aligned} m_1\ge m_2=m_3=\ldots =m_{n-1}.\end{aligned}$$ By Lemma \[lem:jshaffaf2\] such a representation contains a ${\mathcal T}$-fixed vector if and only if there is a Young tableau $\{T\}$ with the same number of 1’s,2’s,$\ldots, n$’s. Furthermore the one dimensional invariant subspace transforming according to $\det^r$ under $U(n-1)$ is spanned by the Young tableau $\{T\}$ where the number of $1$’s and $n$’s in the first row is $l^\prime =m_2$. Therefore $m_1=2l^\prime$ and the proof of the Proposition is complete. $\blacksquare$ .3 cm [**Proof of Theorem \[thm:jshaffaf2\]** ]{}- In order to prove the theorem we recall the relevant aspect of the Plancherel theorem for a compact connected semi-simple Lie group. 
The Fourier transform of the spherical measure $\lambda_a$ is $$\label {eq:shaffaf8} F(\lambda_a)(\rho) =\widehat{\lambda_a}(\rho)= \int_{O_a} \rho(x) d\lambda_a = \phi_{\rho}(a)\textrm{vol}(O_a) ~,$$ with a similar expression for $\widehat{\lambda_b}$, where $\phi_{\rho}$ is the elementary $K$-spherical function corresponding to the irreducible representation $\rho$ containing a $K$-fixed vector. It is well-known that every elementary $K$-spherical function on $G$ is of the form $\rho_{11}(g)$ in which $\rho_{11}$ is the $(11)$-entry of the matrix of $\rho$ relative to an orthonormal basis $v_1,\ldots,v_N$ where $\rho (K)v_1=v_1$ (see \[H2\], page 414). According to Proposition \[prop:jshaffaf1\] irreducible representations of $SU(n)$ containing a $K$-fixed vector are parameterized by integers $m$ corresponding to partitions $$\begin{aligned} N=2m~+~m~+~m~+~\ldots~+~m~~~~~(n-1)~{\rm summands}.\end{aligned}$$ We need \[lem:shaffaf61\] Let $\rho$ be a spherical representation of the group $G$ (i.e., containing a $K$-fixed vector $v$), then the corresponding elementary spherical function satisfies $$\begin{aligned} |\phi_m (\exp tH)|\le C~\frac{(n-1)^{n-\frac{1}{2}}m^{m+\frac{1}{2}}}{(n+m-1)^{n+m-\frac{1}{2}}}~\frac{t^{-n+\frac{1}{2}}}{\sqrt{m}}\end{aligned}$$ for some constant $C$ independent of $m$ and $n$. [**Proof**]{} - This lemma is probably well-known to experts in spherical functions but since the author does not know of a specific reference for it in the form suitable for this work, a proof is sketched. However, in \[SC\] estimates for Jacobi polynomials are used to establish precise rates of convergence for certain diffusion processes. By applying the radial part of the Laplacian to the elementary spherical function $\phi_m$, one obtains a second order linear ordinary differential equation with regular singular points for it. 
Consequently one obtains \[H2\] $$\begin{aligned} \phi_m (\exp tH)= F(m+n,-m,n;\sin^2t)\end{aligned}$$ where $F$ is the hypergeometric function which reduces to the Jacobi polynomial $P^{n-1,n}_m (\cos 2t)$ (except for normalization by a constant) (see \[AAR\] for an explanation of the notation and an extensive treatment of Jacobi polynomials). Now $$\begin{aligned} P^{n-1,n}_m(1)=\frac{(n+m-1)!}{(n-1)!m!}.\end{aligned}$$ Since $\phi_m(e)=1$, $P^{n-1,n}_m$ should be normalized accordingly. Estimates for Jacobi polynomials are obtained by examining the behavior of their generating function on the unit circle and applying standard methods for obtaining estimates from generating functions. In fact one obtains (see \[AAR\] especially page 350) $$\label{eq:shaffaf61} P^{n-1,n}_m(\cos\theta)=\begin{cases} \theta^{-n+\frac{1}{2}} O(\frac{1}{\sqrt{m}}),~~{\rm for}~~\frac{c}{n}\le \theta\le \frac{\pi}{2};\\ O(m^{n-1}),~~{\rm for}~~0\le \theta\le\frac{c}{n}; \end{cases}$$ for a suitable constant $c$ as $m\to\infty$. Substituting in the expression for $\phi_m$ in terms of Jacobi polynomials we obtain the desired estimate. $\blacksquare$ Because $\phi_m$ is a spherical function, the value of this function on the spherical class $O_a$ is constant and so we can take $a=\exp(t_1H)$. Now by (\[eq:shaffaf8\]) for the Fourier transforms of the spherical measures $\lambda_a$ and $\lambda_b$ we obtain: $$\begin{aligned} F(\lambda_a)(\rho_m) = \phi_m(a) \textrm{vol}(O_a) = \phi_m( \exp t_1H) \textrm{vol}(O_a)\\ F(\lambda_b)(\rho_m) = \phi_m(b) \textrm{vol}(O_b) = \phi_m( \exp t_2H) \textrm{vol}(O_b) ~,\end{aligned}$$ where $\rho_m$ is the spherical representation corresponding to $m$. 
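The value $P^{n-1,n}_m(1)=\frac{(n+m-1)!}{(n-1)!\,m!}$ quoted above can be checked against the classical finite-sum representation $P^{\alpha,\beta}_m(x)=\sum_{s=0}^m\binom{m+\alpha}{m-s}\binom{m+\beta}{s}\left(\frac{x-1}{2}\right)^s\left(\frac{x+1}{2}\right)^{m-s}$ (this representation is from the standard literature on Jacobi polynomials, not from the text). A small numerical sketch:

```python
from math import comb, factorial

def jacobi(m, alpha, beta, x):
    """P_m^{(alpha,beta)}(x) via the classical finite-sum representation
    (assumes integer alpha, beta; ours, for illustration only)."""
    return sum(comb(m + alpha, m - s) * comb(m + beta, s)
               * ((x - 1) / 2) ** s * ((x + 1) / 2) ** (m - s)
               for s in range(m + 1))

# At x = 1 only the s = 0 term survives, giving binom(m+alpha, m), which
# with alpha = n-1, beta = n is exactly (n+m-1)! / ((n-1)! m!).
for n in (3, 4, 7):
    for m in (1, 2, 5, 10):
        lhs = jacobi(m, n - 1, n, 1.0)
        rhs = factorial(n + m - 1) / (factorial(n - 1) * factorial(m))
        assert abs(lhs - rhs) < 1e-9 * rhs
```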
Now applying the Plancherel theorem to the function $(\lambda_a \star \lambda_b)^{l(n)}$ we obtain $$\begin{aligned} \| (\lambda_a \star \lambda_b)^{l(n)} - 1 \|^2_{L^2} &=& \ \sum_{m>0} d_{\rho_m} (\phi_m(t_1) \phi_m(t_2))^{2l(n)} ~.\end{aligned}$$ We want to find $l(n)$ such that the sequence $$\label{eq:shaffaf11} c_n=\sum_m d_{\rho_m} (\phi_m(t_1) \phi_m(t_2))^{2l(n)}$$ converges to zero when $n$ goes to infinity. This will ensure that $(\lambda_a \star \lambda_b)^{l(n)}$ converges to the Haar measure in $L^p$-norm for $1\le p\le 2$ as $n\to \infty$ since on compact groups of fixed finite volume the $L^2$ norm dominates the $L^p$ norm for $p\le 2$. For analyzing the sequence $\{c_n \}$ we need to compute the dimension of the representation $\rho_m$. By Weyl’s dimension formula the dimension of the irreducible representation $\rho_T$ of $U(n)$ determined by the Young diagram $ T: m = m_1 + m_2 + \ldots +m_n$ is $$\frac{\mathcal{D}(a_1,a_2, \ldots , a_n)}{\mathcal{D}(n-1,n-2, \ldots , 0)}~ ~ ,$$ where $a_k = m_k + n - k$ and $\mathcal{D}(a_1,a_2, \ldots , a_n) = \prod_{j<k} (a_j - a_k)$. In our situation the spherical representation $\rho_m$ has the Young diagram characterized in Proposition \[prop:jshaffaf1\] and the corresponding partition is $ m n = 2m + m + \ldots + m $ where $2m$ is the number of columns in the corresponding Young diagram. Therefore $$\begin{aligned} a_1 = 2m+n-1,~~ a_2 = m+n-2,~ a_3 = m+n-3, \ldots ,a_{n-1} = m+1, ~ a_n=0\end{aligned}$$ By Weyl’s dimension formula the dimension $d(m,n)$ of the representation $\rho_m$ is $$\begin{aligned} d(m,n) &=& (2m+n-1)\frac{\prod_{k=2}^{n-1} [(n-k-1)! (m+n-k)^2]}{(n-2)!(n-1)! \ldots 2! 1!}\\ &=& (2m+n-1)\frac{\prod_{k=2}^{n-1} (m+n-k)^2}{(n-2)!(n-1)!} \\ &=& \frac{((m+n-2)!)^2}{(m!)^2 (n-2)! 
(n-1)!} (2m +n-1).\end{aligned}$$ Applying the Stirling estimate $ n!\sim \sqrt{2 \pi}\, n^{n+\frac{1}{2}} e^{-n}$ we obtain $$\begin{aligned} d(m,n) \sim \frac{e}{2 \pi} \frac{(m+n-2)^{2(m+n)-3}}{m^{2m+1} n^{2n-2}}(2m+n-1)\end{aligned}$$ Substituting for $\phi_m(t)$ from Lemma \[lem:shaffaf61\] we obtain $$\begin{aligned} c_n = b^{(2n-1)l(n)} \sum_{m\ge 1} \frac{(m+n-2)^{2(m+n)-3}(2m+n-1)}{m^{2m+1} n^{2n-1}} \bigg( \frac{(n-1)^{2n-1} m^{2m}}{(n+m-1)^{2(m+n)-1}} \bigg)^{2l(n)}\end{aligned}$$ where $b = \frac{1}{t_1 t_2}$. Now we decompose the summation $c_n$ into two parts $s_1$ and $s_2$ as follows: $$\begin{aligned} c_n = b^{(2n-1)l(n)}(s_1 + s_2) = b^{(2n-1)l(n)}(\sum_{m\le n-1} ~ + ~ \sum_{m > n-1})\end{aligned}$$ Since in $s_1$ the summation is over $m \leq n-1$ we have $$\begin{aligned} s_1 &=& \sum_{m \leq n-1} (m+n-1)^{2(m+n)-3-2l(n)(2(m+n)-1)}(2m+n-1) m^{4m l(n)-2m-1} n^{2(2n-1)l(n)-2n+2} \\ &\leq & n^{2(2n-1)l(n)-2n+2} \sum _{m \le n-1} (2(n-1))^{2(m+n)-3-2l(n)(2(m+n)-1)} (n-1)^{4ml(n)-2m-1} \\ &= & 2^{2n-4nl(n)+2l(n)-3}(n-1)^{2n-4nl(n)+2l(n)-3} n^{2(2n-1)l(n)-2n+2}\cdot\\&& \sum_{m \leq n-1} 2^{2m-4ml(n)} (n-1)^{2m-4ml(n)} (n-1)^{4ml(n)-2m} \\ & = & 2^{2n-4nl(n)+2l(n)-3} n^{-1} \sum_{m \leq n-1} (4^{1-2l(n)})^m \end{aligned}$$ Therefore $$\begin{aligned} b^{(2n-1)l(n)} s_1 \le 2^{2n-4nl(n)+2l(n)-3} n^{-1} b^{(2n-1)l(n)} \frac{a^n-1}{a-1},\end{aligned}$$ where $a = 4^{1-2l(n)}$. Now for $l(n)\ge C_1\log n$ and $C_1$ sufficiently large and depending on $t_1$ and $t_2$, we obtain $$\label{eq:jshaffaf1} s_1\le \frac{C_2}{n^\epsilon},$$ for some $\epsilon>0$ and some constant $C_2$ depending only on $t_1$ and $t_2$. Now we estimate $s_2$ where the summation is over $m > n-1$. 
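The closed form for $d(m,n)$ derived above can be verified directly against Weyl's dimension formula for small values of $m$ and $n$; a quick sketch (function names are ours):

```python
from math import factorial
from itertools import combinations

def weyl_dim(parts):
    """Dimension of the U(n) irreducible with partition m_1 >= ... >= m_n,
    via Weyl's formula D(a_1,...,a_n)/D(n-1,...,0) with a_k = m_k + n - k."""
    n = len(parts)
    a = [parts[k] + n - 1 - k for k in range(n)]
    num = den = 1
    for j, k in combinations(range(n), 2):
        num *= a[j] - a[k]
        den *= k - j          # D(n-1, n-2, ..., 0) = prod of (k - j)
    return num // den

def d_closed(m, n):
    """The closed form ((m+n-2)!)^2 (2m+n-1) / ((m!)^2 (n-2)! (n-1)!)."""
    return (factorial(m + n - 2) ** 2 * (2 * m + n - 1)
            // (factorial(m) ** 2 * factorial(n - 2) * factorial(n - 1)))

# The spherical partition mn = 2m + m + ... + m, i.e. (2m, m, ..., m, 0).
for n in (3, 4, 6):
    for m in (1, 2, 5):
        parts = [2 * m] + [m] * (n - 2) + [0]
        assert weyl_dim(parts) == d_closed(m, n)
```

For example $n=3$, $m=1$ gives the partition $(2,1,0)$ and dimension $8$, the adjoint representation of $SU(3)$.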
It is clear that $$\begin{aligned} s_2 &\leq &\sum_{m > n-1} 3m (2m)^{2(m+n)-3-2l(n)(2(m+n)-1)} m^{4ml(n)-2m-1} n^{2(2n-1)l(n)-2n+2} \\ &\ = & b^{(2n-1)l(n)} 2^{2n-4nl(n)+2l(n)-3} n^{2(2n-1)l(n)-2n+2} \sum_{m>n-1} m^{2n +2l(n)-4n l(n)-3}2^{2m-4ml(n)} \\ & \le & b^{(2n-1)l(n)} 2^{2n-4nl(n)+2l(n)-3} n^2 \sum_{m=n}^{\infty} a^m \\ & = & b^{(2n-1)l(n)} 2^{2n-4nl(n)+2l(n)-3} n^2 \frac{a^n}{1-a},\end{aligned}$$ where $a = 4^{1-2l(n)}$ as before. Consequently for $l(n)\ge C_3 \log n$ and $C_3$ sufficiently large and depending on $t_1$ and $t_2$ we have $$\label{eq:jshaffaf2} s_2\to 0~~~{\rm as} ~ n\to\infty.$$ Therefore $c_n = b^{(2n-1)l(n)}(s_1+s_2)$ tends to zero and the proof of the theorem is complete. $\blacksquare$ [Wan86]{} Andrews, G., R. Askey and R. Roy - Cambridge University Press, (1999). Agnihotri, S. and C. T. Woodward - , , [**5**]{}, (2002), pp. 817-836. Boerner, H. - American Elsevier Publishing Co., Inc., New York (1970). Fulton, W. - , , [**37**]{}, no. 3, (2000), pp. 209-249. Helgason, S. - , (2002). Helgason, S. - , (1984). Klyachko, A. A. - , , [**4**]{}, no.3 (1998), pp. 419-445. Knapp, A. - , Birkhauser, (2004). Saloff-Coste, L. - , , [**217**]{}, (1994), pp.641-677. Shaffaf, J. - , Sharif University of Technology. Shaffaf, J. - , . Weyl, H. - , Princeton University Press 1966. .9 cm Institute for Studies in Theoretical Physics and Mathematics, Tehran, Iran, [*and*]{}\ Sharif University of Technology, Tehran, Iran. .4 cm [^1]: Institute for Studies in Theoretical Physics and Mathematics (IPM) and Sharif University of Technology, Tehran, Iran. Email : [email protected]
--- author: - Martin Kroll bibliography: - 'ConcPPP.bib' title: Concentration inequalities for Poisson point processes with application to adaptive intensity estimation ---
--- abstract: | We consider the problem of determining the maximal $\alpha \in (0,1]$ such that every matching $M$ of size $k$ (or at most $k$) in a bipartite graph $G$ contains an induced matching of size at least $\alpha |M|$. This measure was recently introduced in [@alon2017graph] and is motivated by connectionist models of cognition as well as modeling interference in wireless and communication networks. We prove various hardness results for computing $\alpha$ either exactly or approximately. En route to our results, we also consider the maximum connected matching problem: determining the largest matching $N$ in a graph $G$ such that every two edges in $N$ are connected by an edge. We prove a nearly optimal $n^{1-{\varepsilon}}$ hardness of approximation result (under randomized reductions) for connected matching in bipartite graphs (with both sides of cardinality $n$). Towards this end we define bipartite half-covers: A new combinatorial object that may be of independent interest. To the best of our knowledge, the best previous hardness result for the connected matching problem was some constant $\beta>1$. Finally, we demonstrate the existence of bipartite graphs with $n$ vertices on each side of average degree $d$, that achieve $\alpha=1/2-{\varepsilon}$ for matchings of size sufficiently smaller than $n/{{\rm poly}}(d)$. This nearly matches the trivial upper bound of $1/2$ on $\alpha$ which holds for any graph containing a path of length 3. author: - 'Noga Alon [^1]' - 'Jonathan D. Cohen [^2]' - 'Thomas L. Griffiths [^3]' - 'Pasin Manurangsi [^4]' - 'Daniel Reichman [^5]' - 'Igor Shinkar [^6]' - 'Tal Wagner [^7]' - 'Alexander Yu [^8]' bibliography: - 'approx.bib' title: | Multitasking Capacity:\ Hardness Results and Improved Constructions --- Introduction ============ A matching in an undirected graph $G$ is a set of vertex disjoint edges. Matchings have been used in studying interference effects in parallel and distributed systems. 
The object of study is typically a set of units that transmit or receive information. For example, in the communication setting, there is a bipartite network $G=(S,R,E)$ consisting of senders ($S$) and receivers ($R$)[^9]. Given a set $E'=(s_i,t_i)_{1 {\leqslant}i {\leqslant}\ell}\subseteq E$, every sender $s_i$ wishes to send a message to its neighbor $t_i$. The main assumption is that in order to successfully receive a message, a receiver can have only a *single* incident edge carrying a message at a given time, as messages arriving on multiple incident edges create interference with each other. This can be formalized by a condition which we term the *matching condition*: A subset $E'\subset E$ can be used for concurrent interference-free communication if it forms a matching in $G$. However, in several communication settings such as radio and wireless networks [@birk1993uniform; @chlamtac1985broadcasting; @alon2012nearly], a more constrained setting is considered: the senders cannot choose which edges to broadcast on, but instead, if they choose to transmit, then they automatically broadcast on *all* their incident edges. This leads to the stronger *induced matching condition*: A subset $E'\subset E$ can be used for concurrent interference-free communication if it forms an *induced* matching in $G$, namely no two edges in $E'$ are connected by an edge in $G$. A similar interference model, directed towards understanding multitasking constraints in neural systems, has been proposed recently in computational neuroscience [@feng2014multitasking; @Musslick2016b; @Musslick2016a]. These works seek to understand the reason behind multitasking limitations: the limited ability of people to execute several actions concurrently, a ubiquitous finding in cognitive psychology [@shiffrin1977controlled].
The main idea in these works is that such limitations arise from interference effects between interrelated units and not because of limited resources (e.g., limited attention or constrained working memory). The models in [@feng2014multitasking; @Musslick2016b; @Musslick2016a] follow the *connectionist approach* to cognition [@rumelhart1986general; @rumelhart1987parallel] which strives to explain cognition in terms of a large network of interconnected units that operate in a parallel and distributed manner and have also played a pivotal role in the study of neural networks [@hinton1990connectionist]. In [@feng2014multitasking; @Musslick2016b; @Musslick2016a] a formal model to study multitasking is provided where given a bipartite graph $G=(S,R,E)$, every vertex $s \in S$ is associated with a set of inputs $I_s$, every vertex $t\in R$ is associated with a set of outputs $O_t$, and the edge $(s,t)$ is associated with a function (“task") $f_{s,t}:I_s\rightarrow O_t$. As before, it is assumed that every unit in $S$ transmits its computed value to all adjacent neighbors in $R$, and that a value is stored without interference in a node $t \in R$ only if it receives at most one value from a single unit in $S$. In other words, given a set of $\ell$ edges $E'=(s_i,t_i)_{1 {\leqslant}i {\leqslant}\ell}$ the set of functions $f_{s_i,t_i}$ can be performed concurrently (“multitasked") if $E'$ is an induced matching. These works are noteworthy as they relate graph theoretic properties of interconnected units and cognitive performance (multitasking). Perhaps surprisingly, hardly any works originating from connectionist models have studied how graph-theoretic properties relate to cognitive models arising from experimental findings or computer simulations. Based on these interference assumptions, a new measure has been proposed to capture how well such networks allow for interference-free processing [@alon2017graph].
The idea behind this measure is to consider a parameter $k {\leqslant}n$, and ask whether *every* matching $M$ of size $k$ (or of size at most $k$) contains a large *induced* matching $M'\subseteq M$. Unless stated otherwise we will always assume that graphs are bipartite and that both sides of the bipartition have cardinality $n$. Let $G=(A, B,E)$ be a bipartite graph, and let $k \in {{\mathbb N}}$ be a parameter. For ${\alpha}\in (0,1]$ we say that $G$ is a *$(k,{\alpha})$-multitasker* if for every matching $M$ in $G$ of size $|M| = k$, there exists an induced matching $M' \subseteq M$ such that $$|M'| {\geqslant}{\alpha}|M|.$$ Define ${\alpha}_k(G)$ to be the maximal ${\alpha}$ such that $G$ is a $(k,{\alpha})$-multitasker if $G$ contains a matching of size $k$, and define ${\alpha}_k(G) = 1$ if $G$ does not contain a matching of size $k$. We call the parameter ${\alpha}_k \in (0,1]$ the *multitasking capacity of $G$ for matchings of size $k$*. Also, define ${\alpha}_{{\leqslant}k}(G) = \min_{1 {\leqslant}\ell {\leqslant}k} {\alpha}_\ell(G)$ and call it the *multitasking capacity of $G$ for matchings of size at most $k$*.[^10] The parameters ${\alpha}_k, {\alpha}_{{\leqslant}k}$ measure how resilient $G$ is to interference. The larger these parameters are, the better a multitasker $G$ is considered to be. One motivation for this definition is that it is sometimes assumed (e.g., [@feng2014multitasking]) that the set of tasks (edges) that need to be multitasked is restricted to be a matching, a restriction which is imposed by limitations on the parallelism of the network. In this case, the multitasking capacity quantifies, for *every* set $S'$ of allowable tasks, what fraction of tasks from $S'$ is guaranteed to be achievable without interference. Another motivation is the distinction between interference effects that result from a violation of the matching condition and those that result from a violation of the *induced* matching condition.
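To make the definitions concrete, the quantity ${\alpha}_k(G)$ can be computed by exhaustive search on tiny bipartite graphs. The sketch below uses illustrative names and runs in exponential time, so it is only a sanity check of the definition, not an algorithm for the problems studied in this paper:

```python
from fractions import Fraction
from itertools import combinations

def is_matching(M):
    """True iff no two edges in M share an endpoint."""
    ends = [v for e in M for v in e]
    return len(ends) == len(set(ends))

def is_induced(M, E):
    """True iff no edge of E joins two distinct edges of the matching M.
    Edges are (left, right) pairs, so the possible cross edges between
    (a, b) and (c, d) are (a, d) and (c, b)."""
    return all((a, d) not in E and (c, b) not in E
               for (a, b), (c, d) in combinations(M, 2))

def alpha_k(E, k):
    """alpha_k(G) by exhaustive search over all matchings of size k.
    Returns 1 when G has no matching of size k, per the definition above."""
    worst = None
    for M in combinations(sorted(E), k):
        if not is_matching(M):
            continue
        # size of the largest induced sub-matching of M
        # (a single edge is always an induced matching)
        best_sub = max(t for t in range(1, k + 1)
                       if any(is_induced(S, E) for S in combinations(M, t)))
        worst = best_sub if worst is None else min(worst, best_sub)
    return Fraction(1) if worst is None else Fraction(worst, k)
```

On a path with three edges the unique matching of size $2$ contains no induced pair, so ${\alpha}_2 = 1/2$, illustrating the trivial upper bound of $1/2$ for graphs containing a path of length 3.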
In particular, the above multitasking measure allows us to assess the fraction of tasks that can be performed concurrently conditioned on not violating the matching condition. In [@alon2017graph] several properties of ${\alpha}_{{\leqslant}n}(G)$ have been proven. For example, it was shown that ${\alpha}_{{\leqslant}n}(G){\leqslant}\frac{9}{\sqrt{d}}$ for $d$-regular graphs, and that ${\alpha}_{{\leqslant}n}(G){\leqslant}O((\frac{\log n}{d})^{1/3})$ for graphs of average degree $d$. This upper bound supports the previous hypothesis [@feng2014multitasking] suggesting that as the average degree increases, the multitasking capacity inevitably decreases, regardless of the structure of the network – a phenomenon referred to as the “multiplexing versus multitasking trade-off”[^11]. It was also shown in [@alon2017graph] how to construct graphs with desirable multitasking properties, namely graphs for which ${\alpha}_{{\leqslant}k}(G) {\geqslant}\tau$ for $\tau=\Omega(1)$ provided that $k=O(n/d^{1+\tau})$, where $d$ is the average degree of $G$. The results in [@alon2017graph] leave several questions open. \[q:compute-alpha\] Given a graph $G$ and a parameter $k$, can we compute ${\alpha}_k(G)$ or ${\alpha}_{{\leqslant}k}(G)$ efficiently? Indeed, if we are to use ${\alpha}_k(G)$ or ${\alpha}_{{\leqslant}k}(G)$ to evaluate how prone to interference parallel architectures are, then a natural question is whether it is possible to compute or approximate these quantities in polynomial time. For example, computer simulations are frequently used in developing connectionist models, and these models often consist of networks with dozens (or more) of units. Hence, to evaluate the usefulness of ${\alpha}_k(G)$ in connectionist models of multitasking, it is desirable to have efficient methods to compute ${\alpha}_k(G)$ exactly or approximately. Another question is whether it is possible to construct multitaskers with near-optimal capacity.
While [@alon2017graph] provide multitaskers with ${\alpha}_{{\leqslant}k}(G)=\Omega(1)$ for $k{\leqslant}n/d^{O(1)}$ (and show that the upper bound on $k$ is tight up to the degree of the $d^{O(1)}$ term), the best constant value of ${\alpha}_{{\leqslant}k}(G)$ they achieve is bounded away from the natural barrier[^12] ${\alpha}_{{\leqslant}k}(G){\leqslant}1/2$. We thus raise the following question. \[q:construct-alpha-0.5\] Is there an infinite family of graphs $G_n$ of average degree $d$ such that ${\alpha}_{{\leqslant}k}(G_n) {\geqslant}1/2-{{\varepsilon}}$ for arbitrarily small ${{\varepsilon}}> 0$ and $k {\geqslant}n/d^{f({\varepsilon})}$ for some function $f$? In this paper we address these two questions. For \[q:compute-alpha\] we show that under standard complexity theoretic assumptions ${\alpha}_k(G)$ and ${\alpha}_{{\leqslant}k}(G)$ cannot be computed efficiently, thus giving a negative answer to this question. For \[q:construct-alpha-0.5\] we give a positive answer, by showing how to construct bipartite graphs with multitasking capacity approaching $1/2$. Our results ----------- As it turns out, a useful notion in studying the computational hardness of computing the multitasking capacity is that of a *connected matching*, which is a matching in which every two edges are connected by a third edge (see Definition \[def:connected\_matching\] for a formal definition). Connected matchings have been studied in several contexts, such as Hadwiger’s conjecture [@kawarabayashi2005improvements; @plummer2003special; @furedi2005connected]. Motivated by applications to other optimization problems [@jobson2014connected], algorithms for finding connected matchings of maximum cardinality have been studied in special families of graphs such as chordal [@cameron2003connected] and bipartite chordal graphs [@jobson2014connected][^13] and bipartite permutation graphs [@golovach2014hadwiger]. 
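For intuition, the maximum size of a connected matching can also be computed by exhaustive search on tiny graphs. The sketch below (illustrative names; edges are treated as undirected) is exponential time by design; the hardness results in the following sections indicate that no efficient method is likely to exist:

```python
from itertools import combinations

def nu_c(edges):
    """Maximum size of a connected matching, by exhaustive search.
    edges: iterable of 2-tuples over hashable vertices (undirected)."""
    E = {frozenset(e) for e in edges}
    elist = [tuple(sorted(e)) for e in E]
    best = 0
    for r in range(1, len(elist) + 1):
        for M in combinations(elist, r):
            ends = [v for e in M for v in e]
            if len(set(ends)) != 2 * r:   # edges must be vertex disjoint
                continue
            # every two matching edges must be joined by some edge of G
            if all(any(frozenset((x, y)) in E for x in e1 for y in e2)
                   for e1, e2 in combinations(M, 2)):
                best = r
    return best
```

For example, two disjoint edges joined by a middle edge form a connected matching of size $2$, while without the connecting edge every connected matching has size $1$.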
In \[sec:connected\] we establish hardness of approximation for the size of the largest connected matching to within a factor of $n^{1-{\varepsilon}}$ assuming ${\ensuremath{\mathcal{NP}}}\neq{\ensuremath{{\rm co}\mathcal{RP}}}$. Previously, this problem was known to be ${\ensuremath{\mathcal{NP}}}$-hard to approximate within some constant factor [@plummer2003special] for general (non-bipartite) graphs. We also prove that deciding whether a bipartite graph $G=(A,B,E)$ with $|A|=|B|=n$ contains a connected matching of size $n$ is ${\ensuremath{\mathcal{NP}}}$-hard. In \[sec:hardness\] we prove several hardness results for computing the multitasking capacity. To be more precise, we define the decision problem of computing the multitasking capacity as follows: \[def:mt\] Let ${{\mathsf{MT}}}$ be the problem of deciding whether for a given graph $G$, a positive integer $k \in {{\mathbb N}}$ and a rational number $\eta>0$ it holds that $\alpha_{k}(G) {\geqslant}\eta$. The problem ${{\mathsf{MT}}}$ belongs to the second level of the polynomial hierarchy, $\Pi_2$, since the statement $\alpha_{k}(G) {\geqslant}\eta$ can be expressed as $\forall M \exists M' P(G,k; M,M')$, where $P$ is the predicate checking that if $M$ is a matching in $G$ of size $k$, then $M' \subseteq M$ is an induced matching of size at least $\eta k$; this predicate is clearly computable in time ${{\rm poly}}(|G|)$. We note that it is not clear whether ${{\mathsf{MT}}}$ belongs to ${\ensuremath{\mathcal{NP}}}$ or to ${\ensuremath{{\rm co}\mathcal{NP}}}$, and in fact, we give evidence that it belongs to neither of the classes. Specifically, we show that ${{\mathsf{MT}}}$ is both ${\ensuremath{\mathcal{NP}}}$-hard and ${\ensuremath{{\rm co}\mathcal{NP}}}$-hard; thus, if ${{\mathsf{MT}}}\in {\ensuremath{\mathcal{NP}}}\cup {\ensuremath{{\rm co}\mathcal{NP}}}$, then the polynomial hierarchy collapses to the first level. Furthermore, we show various hardness of approximation results for computing $\alpha_k(G)$ and $\alpha_{{\leqslant}k}(G)$.
Most notably, we show under standard complexity theoretic assumptions that (1) $\alpha_n(G)$ is inapproximable to within $n^{1-{\varepsilon}}$ for any ${\varepsilon}>0$, and (2) $\alpha_{{\leqslant}k}(G)$ is inapproximable to within any constant for $k=n^{1-{{\varepsilon}}}$ for any ${{\varepsilon}}>0$. Furthermore, under a stronger assumption, we improve the inapproximability ratio for $\alpha_{{\leqslant}k}(G)$ to $n^{1/\mathrm{polyloglog}(n)}$ for $k=n^{1-1/\mathrm{polyloglog}(n)}$. Our hardness results are summarized in \[tbl:hardness\].

  ---------------------------- ---------------------------------------------------------------------- ---------------------------------- ------------------------------------------------- ----------------------------
  Variant                      Assumption                                                             $k$                                $f$ (approximation factor)                        Remarks
  ---------------------------- ---------------------------------------------------------------------- ---------------------------------- ------------------------------------------------- ----------------------------
  $\alpha_k(G)$                ${\ensuremath{\mathcal{P}}}\neq{\ensuremath{\mathcal{NP}}}$            $n$                                $n^{1-{\varepsilon}}$ for any ${\varepsilon}>0$
  $\alpha_k(G)$                ${\ensuremath{\mathcal{P}}}\neq{\ensuremath{\mathcal{NP}}}$            $n$                                $O(d/{{\rm poly}}\log d)$                         $G$ has maximum degree $d$
  $\alpha_{{\leqslant}k}(G)$   ${\ensuremath{\mathcal{NP}}}\neq{\ensuremath{{\rm co}\mathcal{RP}}}$   $n$                                some constant
  $\alpha_{{\leqslant}k}(G)$   ${\ensuremath{\mathcal{NP}}}\neq{\ensuremath{{\rm co}\mathcal{RP}}}$   $n^{1-{\varepsilon}}$              arbitrarily large constant
  $\alpha_{{\leqslant}k}(G)$   ETH                                                                    $n^{1-1/\mathrm{polyloglog}(n)}$   $n^{1/\mathrm{polyloglog}(n)}$
  ---------------------------- ---------------------------------------------------------------------- ---------------------------------- ------------------------------------------------- ----------------------------

  : Hardness of approximation results for computing the multitasking capacity.
In each row, the stated variant of the multitasking capacity (either $\alpha_k(G)$ or $\alpha_{{\leqslant}k}(G)$) is hard to approximate under the stated assumption up to a multiplicative factor $f$, for the stated values of $k$ and $f$.[]{data-label="tbl:hardness"} In \[sec:construction\], we prove the existence of multitaskers with near-optimal capacity. For integers $d,n$ with $n {\geqslant}d$ and ${\varepsilon}\in (0,1)$, we show how to construct a multitasker graph $G$ on $2n$ vertices with average degree $d$ and $\alpha_{{\leqslant}k}(G) {\geqslant}1/2-{\varepsilon}$, where $k = \Omega(n/d^{1+O(1/{\varepsilon})})$. In particular, for $d=n^{o(1)}$ this implies that ${\varepsilon}$ can be taken to be $o(1)$, and thus $\alpha_{{\leqslant}k}(G)$ tends to its natural barrier $1/2$ as $n$ grows. Our techniques -------------- #### Hardness results. With respect to multitasking, connected matchings are the worst possible configuration for a matching of size $k$. In particular, it holds trivially that ${\alpha}_k(G){\geqslant}1/k$ and ${\alpha}_{{\leqslant}k}(G) {\geqslant}1/k$, and equality holds if and only if $G$ contains a connected matching of size $k$. This fact, together with an extremal Ramsey bound on the size of independent sets, turns out to be instrumental in proving hardness results for computing the multitasking capacity. #### Construction of multitaskers. The starting point of our multitaskers with nearly optimal multitasking capacity is locally sparse graphs, similarly to [@alon2017graph]. That work used local sparsity together with Turán’s lower bound on independent sets in graphs of a given average degree in order to establish the existence of sufficiently large independent sets (which translate to induced matchings). However, the use of Turán’s bound necessarily entails a constant loss, which makes the final multitasking capacity bounded away from $1/2$.
We circumvent this roadblock by also requiring that the graph has large girth, and use this fact in order to carefully construct a large independent set. Preliminaries {#sec:prelim} ============= All graphs considered in this work are undirected. A matching in a graph $G=(V,E)$ is a collection $M {\subseteq}E$ of vertex disjoint edges. We say that a vertex $v \in V$ is *covered* by $M$ if it is one of the endpoints of an edge in $M$. We say that a matching $M$ is *induced in $G$* if no two edges in $M$ are connected by an edge in $E$, i.e., the vertices in $M$ span only the edges in $M$ and no other edges. Given a graph $G$ and an edge $e = (u,v) \in E$, we define the *contraction* of $e$ to be the operation that produces the graph $G \setminus e$, whose vertex set is $(V\cup \{v_{e}\}) \setminus \{u,v\}$, where the vertex $v_{e}$ is connected to all vertices in $G$ neighboring $u$ or $v$, and all other vertices $x,y \in V \setminus\{u,v\}$ form an edge in $G \setminus e$ if and only if they were connected in $G$. Contracting a set of edges, and in particular contracting a matching, means contracting the edges one by one in an arbitrary order[^14]. Below we define two combinatorial optimization problems that we will relate to when proving hardness of approximation results for the parameters ${\alpha}_k$ and ${\alpha}_{{\leqslant}k}$. \[def:mis\] Given an undirected graph $G$, an *independent set* in $G$ is a set of vertices that spans no edges. The Maximum Independent Set Problem (${MIS}$) is the problem of finding the maximum cardinality of an independent set in $G$. \[def:biclique\] Given a graph $G = (V,E)$, we say that two disjoint subsets of the vertices $A,B {\subseteq}V$ form a *bipartite clique (biclique)* in $G$ if $(a,b) \in E$ for all $a \in A$ and $b \in B$. We say that the biclique $(A,B)$ is balanced if $|A| = |B|$.
In the Maximum Balanced Biclique Problem we are given a bipartite graph $G$ and a parameter $k$, and the goal is to decide whether $G$ contains a balanced biclique with $k$ vertices on each side. \[def:connected\_matching\] Given a graph $G$, a *connected matching* in $G$ is a matching $M$ such that every two edges in $M$ are connected by an edge in $G$. We use $\nu_c(G)$ to denote the maximum cardinality of a connected matching in $G$. In the Connected Matching Problem, we are given a graph $G$ and a parameter $k$, and our goal is to determine whether $\nu_c(G) {\geqslant}k$. Given an optimization (minimization or maximization) problem $\Pi$ over graphs, we denote by $OPT_\Pi(G) > 0$ the value of the optimal solution of $\Pi$ for $G$. An algorithm $A$ for a maximization (minimization) problem is said to achieve an approximation ratio $\rho > 1$ if for every input $G$ the algorithm returns a solution $A(G)$ such that $OPT_\Pi(G) {\geqslant}A(G) {\geqslant}OPT_\Pi(G)/\rho$ (resp. $OPT_\Pi(G) {\leqslant}A(G) {\leqslant}\rho \cdot OPT_\Pi(G)$). We assume familiarity with complexity classes such as ${\ensuremath{\mathcal{NP}}}, {\ensuremath{{\rm co}\mathcal{NP}}}, {\ensuremath{{\rm co}\mathcal{RP}}}, \Pi_2$, and the polynomial-time hierarchy. Precise definitions of these terms are omitted, and can be found, e.g., in [@papadimitriou2003computational]. Hardness results for maximum connected matchings {#sec:connected} ================================================ In this section, we prove hardness results for finding large connected matchings in graphs. Hardness of approximating the size of a maximum connected matching ------------------------------------------------------------------ We start by showing an almost optimal hardness of approximation result for the connected matching problem.
\[thm:connected-match-inapprox\] Given a bipartite graph $G$ with $n$ vertices on each side, it is ${\ensuremath{\mathcal{NP}}}$-hard to approximate $\nu_c(G)$ within a factor of $n^{1-{{\varepsilon}}}$ for any ${{\varepsilon}}> 0$ under a randomized polynomial time reduction. More precisely, given a bipartite graph $G$ with $n$ vertices on each side, it is ${\ensuremath{\mathcal{NP}}}$-hard to distinguish between the case where $\nu_c(G) {\geqslant}n^{1-{{\varepsilon}}}$ and the case where $\nu_c(G) {\leqslant}n^{{{\varepsilon}}}$ for any ${{\varepsilon}}> 0$. A natural approach to proving hardness of approximation results for connected matching is to reduce the clique problem to it. Namely, given a graph $G = ([n], E_G)$ for which we wish to determine if $G$ contains a $k$-clique, replace every vertex $i$ by an edge $e_i=(u_i, v_i)$ and add two edges $(u_i, v_j)$ and $(u_j, v_i)$ for every edge $(i, j)$ in $G$. Call the resulting graph after these transformations $G'$. While it is clear that a large clique in $G$ translates to a large connected matching in $G'$, it is not clear that a large connected matching in $G'$ implies a large clique in $G$. The difficulty is that a connected matching might contain “bad" edges of the form $(u_i,v_j)$ where $i \ne j$. An illustrative example is the case where $G = K_{n/2, n/2}$ is a biclique; in this case, the largest clique in $G$ has size only $2$ but the resulting graph $G'$ contains a large connected matching of size as large as $n$. To overcome this problem, we first observe that instead of adding both $(u_i, v_j)$ and $(u_j, v_i)$ to the graph $G'$ for every edge $(i, j)$ in $G$, it suffices to add only one of the two to retain a large connected matching in the YES case.
Then, the insight is that, when we choose the edge to add independently at random for each $(i, j)$, we can control the number of bad edges in every connected matching in $G'$. We formalize the described ideas below, starting with the main gadget of our reduction: Fix $n \in {{\mathbb N}}$. A bipartite graph ${HC}_n = (A = \{u_1, \dots, u_n\}, B = \{v_1, \dots, v_n\}, E_H)$ is said to be a *bipartite half-cover of $K_n$* if (1) for every $\{i, j\} \subseteq [n]$, $(u_i, v_j) \in E_H$ or $(u_j, v_i) \in E_H$, and (2) for every $i \in [n]$, $(u_i, v_i) \notin E_H$. The reduction used in the proof of \[thm:connected-match-inapprox\] uses the existence of such bipartite half-covers of $K_n$ that do not contain a large connected matching. Such graphs can be easily constructed using a randomized algorithm as shown below. \[claim:random-half-cover-small-nu-c\] There is an $O(n)$-time randomized algorithm that on input $n \in {{\mathbb N}}$ outputs a graph $HC_n$, which is a bipartite half-cover of $K_n$ such that $\nu_c(HC_n) {\leqslant}O(\log n)$ with probability $1 - o(1)$. We construct ${HC}_n$ by choosing for each $\{i, j\} \subseteq [n]$ to add to $E_H$ either $(u_i, v_j)$ or $(u_j, v_i)$, independently with probability $1/2$. Clearly, ${HC}_n$ is a bipartite half-cover of $K_n$. Below we show that $\nu_c(H) {\leqslant}O(\log n)$ with probability $1 - o(1)$. We prove this in two steps: first, we prove the $O(\log n)$ upper bound for a special class of connected matchings, and then we show that any connected matching contains a large (constant fraction) matching of this type. Let $M \subseteq E_H$ be any matching in $H$. We say that the matching is *non-repetitive* if, for each $i \in [n]$, at most one of $u_i$ or $v_i$ appears in $M$. We will now argue that with probability $1 - o(1)$, any connected non-repetitive matching has size less than $D := 20\log n$.
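The randomized construction just described is short enough to sketch directly. In this sketch (illustrative names), an output pair $(i,j)$ stands for the edge $(u_i, v_j)$; the two half-cover conditions hold by construction, and the surrounding argument bounds $\nu_c$:

```python
import random

def random_half_cover(n, seed=None):
    """Sample the bipartite half-cover HC_n from the proof: for each pair
    {i, j}, include exactly one of (u_i, v_j) or (u_j, v_i), chosen
    uniformly and independently; no edge (u_i, v_i) is ever added."""
    rng = random.Random(seed)
    E = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 0.5:
                E.add((i, j))   # (u_i, v_j)
            else:
                E.add((j, i))   # (u_j, v_i)
    return E
```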
To do so, consider any ordered tuple $(i_1, j_1, \dots, i_D, j_D)$ where $i_1, \dots, i_D, j_1, \dots, j_D$ are all distinct. The probability that $(u_{i_1}, v_{j_1}), \dots, (u_{i_D}, v_{j_D})$ is a connected matching is at most $$\begin{aligned} \Pr[\forall 1 {\leqslant}k < \ell {\leqslant}D, (u_{i_k}, v_{j_\ell}) \in E_H \vee (u_{i_\ell}, v_{j_k}) \in E_H] &= \prod_{1 {\leqslant}k < \ell {\leqslant}D} \Pr[(u_{i_k}, v_{j_\ell}) \in E_H \vee (u_{i_\ell}, v_{j_k}) \in E_H] \\ &= \prod_{1 {\leqslant}k < \ell {\leqslant}D} (3/4) = (3/4)^{D(D - 1)/2}\end{aligned}$$ where the first two equalities use the fact that $i_1, \dots, i_D, j_1, \dots, j_D$ are distinct, meaning that the events considered are all independent. Hence, by a union bound over all such sequences, we can conclude that the probability that $H$ contains a connected non-repetitive matching of size $D$ is at most $n^{2D} \cdot (3/4)^{D(D - 1)/2} = (n^2 \cdot (3/4)^{(D - 1)/2})^D = o(1)$. Finally, observe that any matching $M {\subseteq}E_H$ contains a non-repetitive matching $M' {\subseteq}M$ of size at least $|M|/3$. Indeed, given a matching $M$ we can construct $M'$ iteratively by picking an arbitrary edge $e = (u_i, v_j) \in M$, removing $e$ and all edges touching $v_i$ or $u_j$ from $M$, and adding $e$ to $M'$. We repeat this procedure until $M = \emptyset$. Since we add one edge to $M'$ while removing at most three edges from $M$, we arrive at a non-repetitive $M' \subseteq M$ of size at least $|M|/3$. As a result, the graph ${HC}_n$ does not contain any connected matching of size at least $3D = O(\log n)$ with probability $1 - o(1)$. 1. We remark that a deterministic polynomial time construction of such graphs would imply that the hardness result in \[thm:connected-match-inapprox\] holds under a deterministic reduction (as opposed to the randomized reduction currently stated). 2. We comment that there is a connection between Ramsey graphs and half-covers of $K_n$ with small $\nu_c({HC}_n)$.
Specifically, if we can deterministically construct a half-cover of $K_n$ with $\nu_c({HC}_n) {\leqslant}f(n)$, then we can deterministically construct $n$-vertex $(f(n)+1)$-Ramsey graphs. This is because we can think of the half-cover ${HC}_n$ as a bichromatic $K_n$ where $(i, j)$ for $i < j$ is colored red if $(u_i, v_j) \in E_H$ and is colored blue otherwise (i.e. $(u_j, v_i) \in E_H$). It is easy to check that any monochromatic clique of size $r$ in $K_n$ implies a connected matching of size $r-1$ in ${HC}_n$. While there are explicit constructions of Ramsey graphs, it is unclear (to us) how to construct such half-covers from these constructions. 3. Using a different approach we can show that it is ${\ensuremath{\mathcal{NP}}}$-hard to compute $\nu_c(G)$ under a *deterministic* reduction. See \[sec:hardness-deterministic-reduction-cm\] for details. ### Proof of \[thm:connected-match-inapprox\] With the gadget from \[claim:random-half-cover-small-nu-c\] we are ready to prove \[thm:connected-match-inapprox\]. This is done in the following claim. \[claim:reduction-cm\] Let $G = (V_G = [n], E_G)$ be an $n$-vertex graph, and let $H = (A = \{u_1, \dots, u_n\}, B = \{v_1, \dots, v_n\}, E_H)$ be a balanced bipartite graph. Let $G \boxminus H = (A, B, E_{G \boxminus H})$ be the balanced bipartite graph with $n$ vertices on each side, where (1) for every $\{i, j\} \subseteq [n]$, $(u_i, v_j) \in E_{G \boxminus H}$ if and only if $(u_i, v_j) \in E_H$ and $(i, j) \in E_G$, and (2) for every $i \in [n]$, $(u_i, v_i) \in E_{G \boxminus H}$. Then, for any such $G$ we have $\nu_c(G \boxminus H) {\leqslant}\omega(G) + 3\nu_c(H)$ where $\omega(G)$ denotes the clique number of $G$. Furthermore, if $H$ is a bipartite half-cover of $K_n$, then $\omega(G) {\leqslant}\nu_c(G \boxminus H)$. \[claim:reduction-cm\] immediately implies \[thm:connected-match-inapprox\].
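The graph $G \boxminus H$ of the claim is mechanical to construct. A sketch (illustrative names; pairs $(i,j)$ stand for $(u_i, v_j)$, and $E_G$ is given as a set of unordered pairs):

```python
def g_boxminus_h(n, E_G, E_H):
    """Build the edge set of G ⊟ H: all 'parallel' edges (u_i, v_i), plus
    every (u_i, v_j) with i != j that lies in E_H and whose pair {i, j}
    is an edge of G.
    E_G: set of frozensets {i, j}; E_H: set of pairs (i, j) for (u_i, v_j)."""
    E = {(i, i) for i in range(n)}                       # condition (2)
    E |= {(i, j) for (i, j) in E_H
          if i != j and frozenset((i, j)) in E_G}        # condition (1)
    return E
```

For instance, if $G$ is a triangle and $H$ a half-cover containing one edge per pair, the result has the three parallel edges plus the three surviving cross edges.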
Indeed, by [@haastad2001some; @zuckerman2006linear] given an $n$-vertex graph $G$ it is NP-hard to decide between the case where $\omega(G) {\geqslant}n^{1-{{\varepsilon}}/2}$, and the case where $\omega(G) {\leqslant}n^{{{\varepsilon}}/2}$. Therefore, we can define a randomized reduction that given an $n$-vertex graph $G$ constructs (with high probability) ${HC}_n$, a bipartite half-cover of $K_n$ with $\nu_c({HC}_n) {\leqslant}O(\log n)$, and outputs $G \boxminus {HC}_n$, which can clearly be constructed in time polynomial in the size of $G$. In the YES case, if $\omega(G) {\geqslant}n^{1-{{\varepsilon}}/2}$, then by the “furthermore” part of \[claim:reduction-cm\] we have $\nu_c(G \boxminus {HC}_n) {\geqslant}\omega(G) {\geqslant}n^{1-{{\varepsilon}}/2}$, and in the NO case, if $\omega(G) {\leqslant}n^{{{\varepsilon}}/2}$, then by \[claim:reduction-cm\] we have $\nu_c(G \boxminus {HC}_n) {\leqslant}\omega(G) + 3\nu_c({HC}_n) {\leqslant}n^{{{\varepsilon}}/2} + O(\log n)$. This completes the proof of \[thm:connected-match-inapprox\]. We now turn to the proof of \[claim:reduction-cm\]. First, we will show that $\nu_c(G \boxminus H) {\leqslant}\omega(G) + 3\nu_c(H)$. Let $M \subseteq E_{G \boxminus H}$ be any connected matching in $G \boxminus H$. We partition $M$ into two disjoint sets $M_{\parallel}$ and $M_{\times}$ where $M_{\parallel} = M \cap \{(u_i, v_i) \mid i \in [n]\}$ and $M_{\times} = M \setminus M_{\parallel}$. We will show that $|M_{\parallel}| {\leqslant}\omega(G)$ and $|M_{\times}| {\leqslant}3\nu_c(H)$. To show that $|M_{\parallel}| {\leqslant}\omega(G)$, suppose that $M_{\parallel} = \{(u_{i_1}, v_{i_1}), \dots, (u_{i_t}, v_{i_t})\}$. By the definition, if $(u_i, v_i)$ is connected to $(u_{i'}, v_{i'})$ in $G \boxminus H$, then $(i, i') \in E_G$. Therefore, $\{i_1, \dots, i_t\}$ induces a clique in $G$ and $\omega(G) {\geqslant}t = |M_{\parallel}|$ follows. Next, we show that $|M_{\times}| {\leqslant}3\nu_c(H)$.
Let us first define non-repetitive matchings in the same way as in the proof of \[claim:random-half-cover-small-nu-c\]. Using the same argument as in that proof, we can conclude that $M_{\times}$ contains a non-repetitive connected matching $M'_{\times} \subseteq M_{\times}$ of size at least $|M_{\times}|/3$. We claim that $M'_{\times}$ is also a connected matching in $H$. Indeed, since every edge in $M'_{\times}$ belongs to $E_H$, the non-repetitiveness implies that any pair of edges in $M'_{\times}$ is connected by an edge that also belongs to $E_H$. As a result, we can conclude that $|M_{\times}| {\leqslant}3|M'_{\times}| {\leqslant}3\nu_c(H)$. Combining the above two bounds yields $\nu_c(G \boxminus H) {\leqslant}\omega(G) + 3\nu_c(H)$ as desired. Finally, assume that $H$ is a bipartite half-cover of $K_n$. For any clique $C \subseteq V_G$ in $G$, it is not hard to see that the matching $M_C = \{(u_i, v_i): i \in C \}$ is a connected matching in $G \boxminus H$. Indeed, for each distinct $i, j \in C$ we have either $(u_i, v_j) \in E_H$ or $(u_j, v_i) \in E_H$ (from the definition of a bipartite half-cover of $K_n$), and since $(i,j) \in E_G$, either $(u_i, v_j)$ or $(u_j, v_i)$ belongs to $E_{G \boxminus H}$. Therefore, $\nu_c(G \boxminus H) {\geqslant}\omega(G)$, which completes our proof. Hardness of finding a connected perfect matching ------------------------------------------------ In this section we show that given a bipartite graph $G$ with $n$ vertices on each side, it is ${\ensuremath{\mathcal{NP}}}$-hard to find a connected matching of size $n$. \[thm:hardness-perfect-connected-matching\] Given a bipartite graph $G=(A,B,E)$ with $|A|=|B|=n$ it is ${\ensuremath{\mathcal{NP}}}$-hard to determine whether $\nu_c(G)=n$. By \[thm:connected-match-inapprox\], given a graph $G = (A,B,E_G)$ with $N$ vertices on each side it is ${\ensuremath{\mathcal{NP}}}$-hard to decide whether $G$ contains a connected matching of size $k = N^{1-{{\varepsilon}}}$.
Consider the reduction that given a graph $G = (A,B,E_G)$ outputs $H=(A \cup A', B \cup B',E_H)$ as follows. The sets $A'$ and $B'$ are two disjoint sets that are also disjoint from $A,B$ with $|A'|=|B'|=N-k$. The set of edges $E_H$ is defined as $E_H = E_G \cup \{(i,j) : i \in A', j \in B \cup B'\} \cup \{(i,j) : i \in A \cup A', j \in B'\}$. That is, the graph $H$ contains the graph $G$ as the induced graph on the vertices $A \cup B$, and in addition, every vertex in $A'$ is connected to all vertices in $B \cup B'$, and every vertex in $B'$ is connected to all vertices in $A \cup A'$. The graph $H$ is a balanced bipartite graph with $n = 2N-k$ vertices on each side. We claim that $\nu_c(G) {\geqslant}k$ if and only if $\nu_c(H)=n$. In one direction, suppose that $G$ has a connected matching $M_G = \{e_1,...,e_k\}$ of size $k$. We construct a matching $M'$ of size $2N-k$ as follows. For each vertex $v \in A \cup B$ not covered by $M_G$, we pick a distinct element $w_v \in A' \cup B'$ that is a neighbor of $v$. Define a matching in $H$ to be $M' = M_G \cup W$, where $W = \{(v,w_v) : v \in V(G) \setminus V(M_G)\}$. By the construction of $H$, each edge in $W$ is connected to every other edge in $M'$ using an edge incident to $A' \cup B'$. Every pair of edges in $M_G$ is connected since $M_G$ is a connected matching in $G$. Thus, $M'$ is a connected matching of size $n$ in $H$. Conversely, suppose $H$ has a connected matching $M_H$ of size $n$. Since at most $2(N-k)$ edges of $M_H$ can touch $A'\cup B'$, there must be a submatching $M \subseteq M_H$ of size $|M| {\geqslant}k$ such that no edge in $M$ contains a vertex in $A'\cup B'$. Thus, $M$ is a matching in $G$, and since $M_H$ is a connected matching so is $M$. It follows that $G$ has a connected matching of size $k$, as required.
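A sketch of this padding construction (illustrative vertex labels; side sizes are $n = 2N-k$, with $A'$ complete to $B \cup B'$ and $B'$ complete to $A \cup A'$):

```python
def pad_for_perfect_cm(N, k, E_G):
    """Padding reduction from the proof: original vertices are labeled
    ('A', i) / ('B', j); the N - k new vertices are ('A2', i) / ('B2', j).
    Returns the two sides of H and its edge set."""
    A = [("A", i) for i in range(N)]
    B = [("B", j) for j in range(N)]
    A2 = [("A2", i) for i in range(N - k)]
    B2 = [("B2", j) for j in range(N - k)]
    E_H = set(E_G)
    E_H |= {(a, b) for a in A2 for b in B + B2}   # A' complete to B ∪ B'
    E_H |= {(a, b) for a in A + A2 for b in B2}   # B' complete to A ∪ A'
    return A + A2, B + B2, E_H
```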
Hardness results for computing ${\alpha}_k(G)$ {#sec:hardness} ============================================== In this section we study the computational complexity both of the decision problem ${{\mathsf{MT}}}$ as well as the problem of computing ${\alpha}_k(G)$ exactly or approximately. We first show an almost optimal inapproximability result for $\alpha_n(G)$, which is stated and proved below. \[thm:alpha-n-hardness\] For any ${{\varepsilon}}>0$, given a bipartite graph $G$ with $n$ vertices in each part, it is ${\ensuremath{\mathcal{NP}}}$-hard to approximate ${\alpha}_n(G)$ within a factor $n^{1-{{\varepsilon}}}$. Furthermore, given a bipartite graph $G$ with $n$ vertices in each part, where the degree of each vertex is at most $d$, it is ${\ensuremath{\mathcal{NP}}}$-hard to approximate ${\alpha}_n(G)$ within a factor $O(\frac{d}{\log^4(d)})$ and it is ${\ensuremath{\mathcal{UG}}}$-hard to approximate ${\alpha}_n(G)$ within a factor $O(\frac{d}{\log^2(d)})$. The proof is by a reduction from the Maximum Independent Set problem. Given an $n$-vertex graph $H = (U_H,E_H)$ instance of ${MIS}$, we construct a bipartite graph $G$ as follows. Denote the vertices of $H$ by $U_H = \{u_1,u_2,\dots,u_n\}$. Then the vertices of the bipartite graph $G = (V_G = A \cup B, E_G)$ are defined by $A = \{v_i : i \in [n]\}$ and $B = \{v'_i : i \in [n]\}$, and the edges of $G$ are $E_G = \{(v_i,v'_i) : i \in [n]\} \cup \{(v_i,v'_j) : i < j \wedge (u_i,u_j) \in E_H\}$. Note that the only perfect matching in $G$, i.e., a matching of size $n$, is the matching $N = \{(v_i,v'_i) : i \in [n]\}$. Indeed, suppose there exists another matching $M$ with $|M| = n$. Then $M$ has at least one edge of the form $e=(v_i,v'_j)$ with $i<j$, and suppose that $e$ is such that $i$ is minimal (where the minimum is taken with respect to all edges not in $N$). If any edge in $M$ covers $v'_i$, then it cannot belong to $N$ as $M$ is a matching. 
By the definition of $E_G$ and the minimality of $i$, there cannot be an edge in $M$ that covers $v'_i$. As all vertices of $G$ must be matched in order for $|M| = n$ to hold, we get a contradiction, showing that $N$ is indeed the unique matching of size $n$. We claim that $H$ contains an independent set of size at least ${\alpha}$ if and only if ${\alpha}_n(G) {\geqslant}\frac{{\alpha}}{n}$. Indeed, a set $I {\subseteq}U_H$ is an independent set in $H$ if and only if $M' = \{(v_i,v'_i) : i \in I\}$ is an induced matching contained in $N$. Hence if $H$ contains an independent set of size $\alpha$ then $N$ contains an induced matching of size $\alpha$. Conversely, if $N$ contains an induced matching of size $\alpha$ then $H$ has an independent set of size $\alpha$. It is well known that for any $\delta<1/2$ it is [$\mathcal{NP}$]{}-hard to distinguish between $n$-vertex graphs that contain an independent set of size at least $n^{1-\delta}$ (YES case) and graphs that do not contain an independent set of size at least $n^{\delta}$ (NO case) [@haastad2001some; @zuckerman2006linear]. By the reduction described above it is [$\mathcal{NP}$]{}-hard to distinguish between a bipartite graph $G'$ with sides of cardinality $n$ satisfying ${\alpha}_n(G') {\geqslant}n^{1-\delta}/n=n^{-\delta}$ and a graph $G''$ satisfying ${\alpha}_n(G'') {\leqslant}n^{\delta}/n=n^{\delta-1}$, as this would enable one to distinguish between the YES and NO cases described above. The result now follows by taking $\delta$ to equal ${\varepsilon}/2$. The result for graphs of maximum degree $d$ follows by noting that if the maximal degree of $H$ is at most $d$, then the maximal degree of $G$ is upper bounded by $d+1$. 
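To make the gadget concrete, here is a hypothetical sketch of the reduction and of the correspondence between independent sets of $H$ and induced matchings inside $N$. Integer indices below stand for the vertices $v_i$ and $v'_i$ of the proof; none of this code comes from the paper.

```python
def mis_gadget(n, H_edges):
    """Edges of the bipartite gadget G: (v_i, v'_i) for every i, plus
    (v_i, v'_j) for i < j whenever (u_i, u_j) is an edge of H.
    A pair (i, j) below stands for the edge (v_i, v'_j)."""
    E_G = {(i, i) for i in range(n)}
    for i, j in H_edges:
        i, j = min(i, j), max(i, j)
        E_G.add((i, j))
    return E_G

def induced_in_N(E_G, I):
    """Is {(v_i, v'_i) : i in I} an induced matching inside N?
    Two pairs i != j are connected exactly when (min(i,j), max(i,j))
    is a gadget edge, since gadget edges only go from lower to
    higher index."""
    return all((min(i, j), max(i, j)) not in E_G
               for i in I for j in I if i != j)

# H is the path u_0 - u_1 - u_2; {u_0, u_2} is independent in H.
E_G = mis_gadget(3, {(0, 1), (1, 2)})
```

On this toy instance, `{0, 2}` yields an induced matching inside $N$ while `{0, 1}` does not, mirroring the independent-set correspondence in the proof.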
Therefore, since it is [$\mathcal{NP}$]{}-hard to approximate ${MIS}$ in graphs of maximum degree $d$ within a factor of $O(\frac{d}{\log^4(d)})$ [@chan2016approximation] and [$\mathcal{UG}$]{}-hard to approximate ${MIS}$ in graphs of maximum degree $d$ within a factor of $O(\frac{d}{\log^2(d)})$ [@austrin2009inapproximability], the analogous hardness of computing ${\alpha}_n$ also follows. We remark that by adding isolated vertices to the graph, the above hardness result also implies hardness of approximating $\alpha_{k}(G)$ to within a factor of $k^{1 - \varepsilon}$ for every $\varepsilon > 0$ and every $k {\geqslant}n^\delta$ for any constant $\delta \in (0, 1)$. Recall the decision problem ${{\mathsf{MT}}}$ from Definition \[def:mt\]. As mentioned in the introduction, ${{\mathsf{MT}}}$ clearly belongs to the class $\Pi_2$. We show the following: \[thm:MT-is-NP-hard-coNP-hard\] The decision problem ${{\mathsf{MT}}}$ is ${\ensuremath{\mathcal{NP}}}$-hard and ${\ensuremath{{\rm co}\mathcal{NP}}}$-hard. By \[thm:alpha-n-hardness\] it follows that there is a reduction from any problem in ${\ensuremath{\mathcal{NP}}}$ that produces a graph $G$ and a parameter $k = n$ such that in the YES case ${\alpha}_k(G) {\geqslant}1/n^{{{\varepsilon}}}$, and in the NO case ${\alpha}_k(G) {\leqslant}1/n^{1-{{\varepsilon}}}$. In particular, this implies that ${{\mathsf{MT}}}$ is ${\ensuremath{\mathcal{NP}}}$-hard. In order to prove that ${{\mathsf{MT}}}$ is ${\ensuremath{{\rm co}\mathcal{NP}}}$-hard we use \[thm:hardness-perfect-connected-matching\]. Indeed, observe that ${\alpha}_{n}(G) {\leqslant}1/n$ if and only if $G$ contains a connected matching of size $n$, and hence there is a reduction from any problem in ${\ensuremath{\mathcal{NP}}}$ that produces a graph $G$ and $k = n$ such that in the YES case ${\alpha}_k(G) {\leqslant}1/k$, and in the NO case ${\alpha}_k(G) {\geqslant}2/k$. 
This completes the proof of \[thm:MT-is-NP-hard-coNP-hard\]. Using Theorem \[thm:hardness-perfect-connected-matching\], we demonstrate that it is unlikely that ${{\mathsf{MT}}}$ belongs to ${\ensuremath{\mathcal{NP}}}\cup {\ensuremath{{\rm co}\mathcal{NP}}}$. If the decision problem ${{\mathsf{MT}}}$ belongs to ${\ensuremath{\mathcal{NP}}}\cup {\ensuremath{{\rm co}\mathcal{NP}}}$, then the polynomial-time hierarchy collapses to the first level. Indeed, this follows from the fact that if ${\ensuremath{\mathcal{NP}}}{\subseteq}{\ensuremath{{\rm co}\mathcal{NP}}}$, then ${\ensuremath{\mathcal{NP}}}={\ensuremath{{\rm co}\mathcal{NP}}}$ (see e.g., Proposition 10.2 in [@papadimitriou2003computational]), and hence the polynomial hierarchy collapses to the first level. We end this section with several remarks. 1. Note that the proof of \[thm:alpha-n-hardness\] shows that the problem of computing ${\alpha}_n(G)$ is ${\ensuremath{\mathcal{NP}}}$-hard on graphs with $n$ vertices on each side even if $G$ contains a unique perfect matching. 2. Note also that the hardness result in \[thm:alpha-n-hardness\] for bounded degree graphs is unlikely to hold for $d$ *regular* graphs (as opposed to graphs with degree at most $d$). This is because in [@alon2017graph] it is shown that ${\alpha}_n(G) {\leqslant}O(1/\sqrt{d})$ for every $d$-regular graph $G$. In particular, this implies that it is easy to approximate ${\alpha}_n(G)$ within a factor of $O(\sqrt{d})$ for $d$-regular graphs. Hardness results for computing ${\alpha}_{{\leqslant}k}(G)$ =========================================================== Here we prove that it is hard to compute the parameter ${\alpha}_{{\leqslant}k}(G)$. Hardness results for computing ${\alpha}_{{\leqslant}n}(G)$ ----------------------------------------------------------- We first consider the $k = n$ case. Given a bipartite graph $H = (A,B,E)$ with $|A|=|B|=n$, it is ${\ensuremath{\mathcal{NP}}}$-hard to compute ${\alpha}_{{\leqslant}n}(H)$. 
It is immediate that ${\alpha}_{{\leqslant}n}(H) {\geqslant}1/n$ and that equality holds if and only if $H$ contains a connected matching of size $n$. The theorem follows from \[thm:hardness-perfect-connected-matching\]. We proceed and consider approximating ${\alpha}_{{\leqslant}n}(G)$. Unless ${\ensuremath{\mathcal{NP}}}= {\ensuremath{{\rm co}\mathcal{RP}}}$, there is no polynomial algorithm for approximating ${\alpha}_{{\leqslant}n}(H)$ within some constant factor. We first use the fact that it is [$\mathcal{NP}$]{}-hard to distinguish between $n$-vertex graphs with cliques of size $b \cdot n$ and graphs with no clique of size $a \cdot n$, where $a,b$ are some constants satisfying $1/2<a<b<1$. Indeed, it is well known that there are $a,b \in (0,1)$ such that it is [$\mathcal{NP}$]{}-hard to distinguish between $n$-vertex graphs with cliques of size $b \cdot n$ and graphs with no clique of size $a \cdot n$ (e.g. [@haastad2001some]). The fact now follows by taking a graph $G$ of $n$ vertices, adding to it a clique of size $n$ and connecting all vertices in this clique to all vertices of $G$. Given a graph $G$ apply the reduction in \[claim:reduction-cm\] (with $H$ being the random graph described in \[claim:random-half-cover-small-nu-c\]) and call the resulting graph $G'$. If there is a clique in $G$ of size $b \cdot n$ then clearly ${\alpha}_{{\leqslant}n}(G') {\leqslant}\frac{1}{bn}$. Suppose there is no clique of size $a \cdot n$ in $G$. Then by \[claim:random-half-cover-small-nu-c\], with high probability there is no connected matching in $G'$ of size greater than $(a+\delta )\cdot n$, where $\delta>0$ can be taken to be arbitrarily small. It follows that for $c>a+\delta$, every matching in $G'$ of size $cn$ contains an induced matching of size at least $2$. Therefore, for $(a + \delta)<c<1$ and $k = cn$ we have that, conditioned on the existence of a matching of size $k$, ${\alpha}_{k}(G') {\geqslant}\frac{2}{c n}> \frac{1}{(a+\delta) \cdot n}$. 
Indeed, $\frac{2}{c}>\frac{1}{a+\delta}$ as $a+\delta>1/2$. As for $k<(a+\delta) n$ it clearly holds that ${\alpha}_{k}(G')>\frac{1}{(a+\delta)n},$ we have that in this case ${\alpha}_{{\leqslant}n}(G') {\geqslant}\frac{1}{(a+\delta)n}$. This implies that approximating ${\alpha}_{{\leqslant}n}(H)$ within a ratio smaller than $\frac{b}{a+\delta}$ in polynomial time would allow one to determine whether $G$ contains a clique of size $b \cdot n$ or no clique of size $a \cdot n$. Taking $\delta$ such that $\frac{b}{a+\delta}>1$ concludes the proof. Hardness results for computing ${\alpha}_{{\leqslant}k}(G)$ for $k < n$ ----------------------------------------------------------------------- We now turn to the problem of proving hardness of approximation results for ${\alpha}_{{\leqslant}k}(G)$ for $k<n$; for certain values of $k$, we show that ${\alpha}_{{\leqslant}k}(G)$ is ${\ensuremath{\mathcal{NP}}}$-hard to approximate to within any constant factor under randomized reduction. One approach to prove this is to use the reduction in \[thm:alpha-n-hardness\]. However, this approach does not seem to work, as it requires one to consider also matchings that contain “diagonal edges” of the form $(v_i,v'_j)$, and it is not clear how to apply the analysis in \[thm:alpha-n-hardness\] to such matchings. Instead, we build upon the hardness of the connected matching problem given in \[thm:connected-match-inapprox\]. We claim that the reduction in \[thm:connected-match-inapprox\] shows that it is hard to approximate ${\alpha}_{{\leqslant}k}(G)$ for $k= n^{1-{{\varepsilon}}}$. Note that in the YES-case, if $\nu_c(G) = k {\geqslant}n^{1-{{\varepsilon}}}$, then ${\alpha}_{{\leqslant}k}(G) = 1/k$. The NO-case is a bit subtle, and it is, a priori, not clear why $\nu_c(G) {\leqslant}n^{{{\varepsilon}}}$ implies that any matching of size at most $k$ contains a large induced matching. 
We resolve this problem using the following Ramsey-theoretic fact (see e.g., [@boppana1992approximating; @erdos1935combinatorial]). \[fact:ramsey\] Let $G$ be an $n$-vertex graph not containing a clique of size $k+1$ and suppose $k {\geqslant}2\log n$. Then $G$ contains an independent set of size at least $s=\log n/\log (k/\log n)$. Using this fact we prove the following result. \[thm:alpha-k-constant-hardness\] For any constants ${{\varepsilon}}\in (0,1/2)$ and $\rho > 1$, it is ${\ensuremath{\mathcal{NP}}}$-hard (under randomized reduction) to approximate ${\alpha}_{{\leqslant}k}(G)$ within a factor of $\rho$ on bipartite graphs with $n$ vertices on each side for $k = n^{1-{{\varepsilon}}}$. By \[thm:connected-match-inapprox\] given a bipartite graph $G$ it is ${\ensuremath{\mathcal{NP}}}$-hard to distinguish between the case where $\nu_c(G) {\geqslant}n^{1-{{\varepsilon}}}$, and the case where $\nu_c(G) {\leqslant}n^{\delta}$ for $\delta = 1/(2\rho)$. For the YES-case, if $\nu_c(G) {\geqslant}n^{1-{{\varepsilon}}}$, then clearly ${\alpha}_{{\leqslant}k}(G) = 1/k$ for $k = n^{1-{{\varepsilon}}}$. In the NO-case suppose that $\nu_c(G) {\leqslant}n^\delta$, and consider an arbitrary matching $M$ of size $s$ with $s {\leqslant}k$. If $s< 2\delta k$ then clearly $M$ contains an induced matching of size at least $s/(2\delta k)$. Otherwise, contract all edges in $M$. Denote by $H(M)$ the subgraph induced by the $s$ contracted nodes. Observe that a subset of nodes in $H(M)$ forms a clique if and only if their corresponding edges in $G$ form a connected matching. Hence, by the assumption that $\nu_c(G) {\leqslant}n^\delta$, we get that $H(M)$ contains no clique of size $n^{\delta}$. Therefore, by \[fact:ramsey\] we conclude that $H(M)$ contains an independent set of size at least $\frac{\log s}{\log(n^\delta/\log s)} {\geqslant}\frac{1}{2\delta}$ (assuming $n$ is sufficiently large). 
Therefore, given a bipartite graph $G$ with $n$ vertices on each side, and $k = n^{1-{{\varepsilon}}}$ it is ${\ensuremath{\mathcal{NP}}}$-hard to distinguish between the YES-case of ${\alpha}_{ {\leqslant}k}(G) {\leqslant}1/k$, and the NO-case of ${\alpha}_{{\leqslant}k}(G) {\geqslant}1/(2\delta k) = \rho/k$. This concludes the proof. We can achieve stronger hardness results under stronger assumptions than ${\ensuremath{\mathcal{NP}}}$-hardness. Recall that the Exponential Time Hypothesis (ETH) postulates that no algorithm of running time $2^{o(n)}$ can decide whether an $n$-variable SAT formula has a satisfying assignment. Assuming ETH we have the following hardness result: \[thm:almost-poly\] Assuming ETH there exists a $k$ such that given $H = (A,B,E)$ with $|A|=|B|=n$ there is no polynomial time algorithm that approximates ${\alpha}_{{\leqslant}k}(H)$ within a factor of $n^{(1/\log \log n)^c}$ where $c>0$ is a universal constant independent of $n$. We will rely on the following simple lower bound on independent sets in graphs of average degree $d_{avg}$ due to Turán. \[lem:turan\_independent\_set\] Every $n$-vertex graph with average degree $d_{avg}$ contains an independent set of size at least $\frac{n}{d_{avg}+1}$. It is known [@manurangsi2017almost] that assuming ETH, for $k=n^{1-1/\mathrm{polyloglog}(n)}$ there is no polynomial algorithm that distinguishes between the case where $H$ contains a bipartite clique with $k$ vertices on each side (YES-case) and the case where every subgraph $H'$ of $H$ with $k' {\leqslant}k$ vertices satisfies $|E(H')| {\leqslant}{k' \choose 2}/n^{(1/\log \log n)^c}$ (NO-case). In the first case ${\alpha}_{{\leqslant}k}(H)=1/k$. In the second case, given a matching $M$ with $|M|=k' {\leqslant}k$, we claim that $M$ contains an induced matching of size $\Omega(\max(k'n^{-(1/\log \log n)^c},1))$. The claim is trivially true if $k' {\leqslant}n^{(1/\log \log n)^c}$, hence assume $k' > n^{(1/\log \log n)^c}$. 
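Turán's bound is constructive: repeatedly picking a vertex of minimum degree and deleting its closed neighbourhood yields an independent set of the promised size. A hypothetical sketch (not from the paper):

```python
import random

def greedy_independent_set(n, adj):
    """Minimum-degree greedy: repeatedly take a vertex of minimum
    degree in the surviving graph and delete it together with all of
    its neighbours.  The result is independent, and a classical
    argument shows its size is at least n / (d_avg + 1)."""
    alive = set(range(n))
    indep = []
    while alive:
        v = min(alive, key=lambda u: len(adj[u] & alive))
        indep.append(v)
        alive -= adj[v] | {v}
    return indep

# A small random graph for a quick sanity check of the bound.
random.seed(1)
n, p = 60, 0.1
adj = {v: set() for v in range(n)}
m = 0
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v); adj[v].add(u); m += 1
I = greedy_independent_set(n, adj)
```

The check below confirms both independence and the $n/(d_{avg}+1)$ size guarantee on this random instance.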
Let $H(M)$ be the graph induced on $M$ and let $H'(M)$ be the graph obtained after all edges in $M$ are contracted. Clearly the average degree of $H'(M)$ is $O(k'n^{-(1/\log \log n)^c})$ (see Lemma 2.1 in [@alon2017graph]), hence by \[lem:turan\_independent\_set\] it contains an independent set $I'$ of size $\Omega(n^{(1/\log \log n)^c})$. It is easily verified that this independent set corresponds to an induced matching contained in $M$ whose size is $\Omega(n^{(1/\log \log n)^c})$. Therefore every matching of size at most $k' {\leqslant}k$ contains an induced matching of size $\Omega(\lceil k'n^{-(1/\log \log n)^c}\rceil)$, which implies that ${\alpha}_{{\leqslant}k}(H)=\Omega (n^{(1/\log \log n)^c}/k)$. It follows that if we could approximate ${\alpha}_{{\leqslant}k}(H)$ within a factor better than $\Omega(n^{(1/\log \log n)^c})$ in polynomial time then we could distinguish between the YES and NO cases described above. This concludes the proof. Improved construction of multitaskers {#sec:construction} ===================================== In this section we prove the following theorem. \[thm:alpha-&gt;1/2\] Let $d {\leqslant}n$ be positive integers such that $n$ is sufficiently large, and let ${{\varepsilon}}\in (0,1)$ be such that ${{\varepsilon}}{\geqslant}\frac{20 \log d}{\log n}$. Then, there is a bipartite graph $G$ with $n$ vertices on each side and average degree at least $d/2$, such that ${\alpha}_{{\leqslant}k}(G){\geqslant}1/2-{{\varepsilon}}$ for $k = (\frac{1}{101e^5})^{4/{\varepsilon}} \cdot \frac{n}{d^{1+8/{\varepsilon}}}=\frac{n}{d^{1+O(1/{\varepsilon})}}$. For the proof of \[thm:alpha-&gt;1/2\] we need the following lemma. We remark that a similar result also appears in [@alon2017graph] (proof of Theorem 4.14 in the arXiv version). \[lemma:high-girth-sparse-graphs-are-good-multitaskers\] Let $G=(A,B,E)$ be a balanced bipartite graph, and let $g$ be the girth of $G$. 
Let $t \in {{\mathbb N}}$ be such that for every subset of vertices $T {\subseteq}A \cup B$ satisfying $|T \cap A|= |T \cap B| = s$ for some $s {\leqslant}t$, it holds that $|E(T)| {\leqslant}(2+\beta/g)s$ for some $\beta > 0$. Then ${\alpha}_{{\leqslant}t}(G){\geqslant}\frac{1}{2} - \frac{1+\beta}{g}$. Let $G = (A,B,E)$ with $|A|=|B|=n$ be a graph satisfying the assumptions of the lemma, and let $M$ be a matching in $G$ of size $s {\leqslant}t$. We show that $M$ contains an induced matching $M'$ of size at least $(\frac{1}{2}-\frac{1+\beta}{g})|M|$. Let $F$ be the graph whose vertices correspond to the $s$ edges of $M$, and two vertices in $F$ are connected if the corresponding edges are connected by an edge in $G$. We show below that $F$ contains an independent set on nearly half of its vertices. By the assumptions of the lemma, the girth of $F$ is at least $g/2$, and any set of $s$ of its vertices spans at most $(1+\beta/g)s$ edges. Construct an independent set in $F$ as follows. As long as $F$ contains a vertex of degree at most $1$, add it to the independent set, and omit it and its neighbor (if any) from $F$. Suppose that this process stops when $h$ vertices remain. This implies that the independent set so far has at least $(s-h)/2$ vertices. If $h=0$ we are done, as the independent set has at least $s/2$ vertices. Otherwise, in the induced subgraph of $F$ on the remaining $h$ vertices the minimum degree is at least $2$ and the average degree is at most $2+2\beta/g$. Hence it contains at most $2 \beta h/g$ vertices of degree at least $3$. Omit these vertices. The remaining graph is a union of paths and cycles, which may contain odd cycles, but all cycles in it are of length at least $g/2$. 
Therefore this part contains an independent set of size at least $\frac{1}{2} (1-2 \beta /g) \cdot (1 - 2 /g) h$, which together with the $(s-h)/2$ vertices obtained in the initial process results in an independent set of size at least $$\frac{s-h}{2}+ \frac{1}{2} (1-2 \beta/g) \cdot (1 - 2 /g)h > \frac{s-h}{2}+ \frac{1}{2} (1- 2 \beta/g - 2 /g)h > \frac{s}{2}-\frac{1+\beta}{g}h {\geqslant}(\frac{1}{2}-\frac{1+\beta}{g})s ,$$ as required. We can now prove \[thm:alpha-&gt;1/2\]. We start with a random bipartite graph $G'$ with $n$ nodes on each side, in which each edge is included independently with probability $p=d/n$. The following two claims prove the properties required in order to apply \[lemma:high-girth-sparse-graphs-are-good-multitaskers\]. \[claim:Gnp-short-cycles\] Let $g$ be an even integer such that $2/{{\varepsilon}}{\leqslant}g {\leqslant}4/{{\varepsilon}}$. Then, with probability $1-\frac{2}{n^{0.3}} {\geqslant}0.99$ the number of cycles of length at most $g$ is upper bounded by $\sqrt n$. The expected number of cycles of length up to $g$ is upper bounded by $$\sum_{s=2}^{g/2}{n\choose s}^2(s!)^2p^{2s} {\leqslant}\sum_{s=2}^{g/2}(np)^{2s} {\leqslant}\sum_{s=2}^{2/{\varepsilon}}d^{2s} {\leqslant}2d^{4/{\varepsilon}}.$$ In particular, for ${{\varepsilon}}{\geqslant}\frac{20 \log d}{\log n}$ the expected number of cycles of length up to $g$ is at most $2d^{4/{\varepsilon}} {\leqslant}2 n^{1/5}$. The claim follows by Markov’s inequality. \[claim:G(n,p)-low-avg-degree\] With probability $0.99$, every subgraph of $G'$ with at most $(\frac{1}{101e^5})^{4/{\varepsilon}}\cdot n/d^{1+8/{\varepsilon}}$ nodes on each side has average degree at most $(2+{\varepsilon}/4)$. Let $s$ be an integer satisfying $1 {\leqslant}s {\leqslant}(\frac{1}{101e^5})^{4/{\varepsilon}}\cdot n/d^{1+8/{\varepsilon}}$. 
By the union bound over all subsets of $G'$ with $s$ vertices on each side, the probability that $G'$ contains a balanced subgraph with $s$ nodes on each side and average degree at least $(2+{\varepsilon}/4)$ is at most $${n \choose s}^2{s^2 \choose (2+{\varepsilon}/4)s}p^{(2+{\varepsilon}/4)s} {\leqslant}\left(\frac{n e}{s}\right)^{2s} \cdot \left( s e \right)^{(2+{{\varepsilon}}/4)s} \cdot \left( \frac{d}{n}\right)^{(2+{{\varepsilon}}/4)s} {\leqslant}\left(\frac{e^5 d^{2+{\varepsilon}/4}s^{{\varepsilon}/4}}{n^{{\varepsilon}/4}}\right)^s {\leqslant}\left(\frac1{101}\right)^s.$$ By taking the union bound over all values of $s$, we get that the probability that $G'$ contains a dense induced subgraph is at most $\sum_{s=1}^\infty\left(\frac1{101}\right)^s=0.01$, as required. By the Chernoff bound, with probability $0.99$ the graph $G'$ contains at least $0.9 dn$ edges. Therefore, with probability $0.97$ the latter event occurs together with the events in the two foregoing claims. Let $g \in [\frac{2}{{{\varepsilon}}},\frac{4}{{{\varepsilon}}}]$ be an even integer, as in \[claim:Gnp-short-cycles\]. We remove an edge from each cycle of length at most $g$, thus removing at most $\sqrt{n}$ edges, so that the average degree remains at least $d/2$. The resulting graph $G$ satisfies the conditions of \[lemma:high-girth-sparse-graphs-are-good-multitaskers\] with $g \in [\frac{2}{{{\varepsilon}}},\frac{4}{{{\varepsilon}}}]$ and $t = (\frac{1}{101e^5})^{4/{\varepsilon}}\cdot n/d^{1+8/{\varepsilon}}$, and hence ${\alpha}_{{\leqslant}t}(G) {\geqslant}1/2 - 2/g {\geqslant}1/2 - {{\varepsilon}}$, as required. This concludes the proof of \[thm:alpha-&gt;1/2\]. We note that if we consider ${\alpha}_{{\leqslant}n}(G)$ instead of ${\alpha}_{{\leqslant}n/d^{1+O(1/{\varepsilon})}}(G)$, then for the construction in the proof of \[thm:alpha-&gt;1/2\] it holds that ${\alpha}_{{\leqslant}n}(G)=O(\frac{\ln d}{d}+\frac{1}{\sqrt{n}})$ with high probability. 
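The union-bound computation in \[claim:G(n,p)-low-avg-degree\] can be checked numerically in log-space. The parameter values below are hypothetical, chosen only so that ${{\varepsilon}}\geqslant 20\log d/\log n$ holds and the tested range of $s$ stays below the threshold $t$:

```python
import math

def log_bound_term(n, d, eps, s):
    """log of (ne/s)^{2s} * (se)^{(2+eps/4)s} * (d/n)^{(2+eps/4)s},
    the union-bound term from the claim, computed in log-space to
    avoid floating-point overflow."""
    c = 2 + eps / 4
    return (2 * s * math.log(n * math.e / s)
            + c * s * math.log(s * math.e)
            + c * s * math.log(d / n))

# Hypothetical parameters satisfying eps >= 20 log(d) / log(n).
n, d, eps = 1e23, 2, 0.9
# Threshold from the claim: s may range up to t.
t = (1 / (101 * math.e ** 5)) ** (4 / eps) * n / d ** (1 + 8 / eps)
# For every tested s <= t, the term should be at most (1/101)^s.
checks = [log_bound_term(n, d, eps, s) <= s * math.log(1 / 101)
          for s in range(1, 21)]
```

Working in log-space is essential here: the raw term involves factors like $n^{2s}$ with $n = 10^{23}$, far beyond double-precision range.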
Indeed, it can be shown that prior to deletions $G'$ has a matching of size $\Omega(n)$ and no induced matching of size larger than $O(\frac{\ln d}{d}n)$ with high probability. Therefore, since removing $\sqrt{n}$ edges can increase the size of any induced matching by at most $\sqrt{n}$, we get that the entire construction satisfies ${\alpha}_{{\leqslant}n}(G)=O(\frac{\ln d}{d}+1/\sqrt{n})$. We also remark that [@alon2017graph] described a construction with average degree $d=\Omega(\log \log n)$ for which ${\alpha}_{{\leqslant}n} > 0.33$. Is $\alpha_k(G)=1/2$ attainable? -------------------------------- The foregoing positive result obtains $\alpha_{{\leqslant}k}(G)=1/2-{\varepsilon}$ for $k=O(n/d^{1+O(1/{\varepsilon})})$, approaching the natural barrier $1/2$. A natural question is whether $1/2$ can be attained exactly, and for which values of $k$. We now show the following limitation. There are absolute constants $C>0$ and ${\varepsilon}_0>0$ such that for $n,d$ sufficiently large and $k{\geqslant}C\cdot n/d^{1+{\varepsilon}_0}$, every graph $G$ with $n$ nodes on each side and average degree $d$ has $\alpha_{{\leqslant}k}(G)$ strictly smaller than $1/2$. One obstacle to obtaining $\alpha_{{\leqslant}k}(G)=1/2$ is cycles of length $2$ mod $4$. In particular, consider a cycle of length $\ell=2k$, where $k$ is an odd integer. It is straightforward to check that picking every other edge of the cycle yields a matching $M$ of size $k$, whose largest induced matching has size $(k-1)/2=(\frac12-\frac1{2k})|M|$. Hence, a graph $G$ containing such a cycle has $\alpha_k(G)$ strictly less than $1/2$. We now show that every $n$-vertex graph with average degree $d$ contains such a cycle for some $k = O(n/d^{1+{\varepsilon}_0})$. Let $G$ be a bipartite graph with $n$ nodes on each side and average degree $d$. 
It is known (see e.g., [@sudakov2016extremal]) that there is an absolute constant $r>0$ such that any graph with average degree at least $r$ contains a cycle of length $2$ mod $4$, which we call a “bad” cycle. By [@feige2016generalized], $G$ contains a subgraph $G'$ on at most $\ell=O(n/d^{1+2/(r-1)})$ nodes whose average degree is at least $r$. Hence $G'$ (and therefore $G$) contains a bad cycle whose length is at most $\ell$. As per above, this implies that $\alpha_{{\leqslant}k}(G)$ is strictly less than $1/2$ for $k=\ell/2$. We do not know if $\alpha_{{\leqslant}k}(G)=1/2$ is achievable for the value $k$ considered in \[thm:alpha-&gt;1/2\]. We suspect that $\alpha_{{\leqslant}k}(G)<1/2$ holds for all graphs of large enough average degree $d$ even for $k = {{\rm poly}}\log n$. Whether this is indeed the case is left as an open question. Conclusion and future directions ================================ We have studied the computational complexity of computing ${\alpha}_k(G)$, a parameter that arises in wireless networks and connectionist models of multitasking. Our study reveals that algorithmic as well as combinatorial questions (such as the existence of graphs with certain combinatorial properties) are relevant to connectionist models of cognition, and we hope that future work will reveal more connections between such models, theoretical computer science and combinatorics. While we have shown that computing ${\alpha}_k(G)$ is intractable, our results do not rule out the existence of an efficient constant factor approximation algorithm for $\alpha_{{\leqslant}n}(G)$, which could potentially be used in computer simulations and in analyzing behavioral and neuroscientific data. Whether such an algorithm exists is an interesting direction for future study. We conclude with several specific questions arising from this work. - We believe that for $d$-regular graphs the upper bound $\alpha_{{\leqslant}n}(G) {\leqslant}9/ \sqrt{d}$ is not tight. 
It is an open problem whether for all $d$-regular graphs it holds that $\alpha_{{\leqslant}n}(G) {\leqslant}o(1/\sqrt{d})$, and it is possible that $\alpha_{{\leqslant}n}(G)=O(\frac{\log d}{d})$ holds. - It would be interesting to see if the $n^{1-{\varepsilon}}$ hardness of approximation result can be obtained assuming ${\ensuremath{\mathcal{P}}}\neq {\ensuremath{\mathcal{NP}}}$ (that is, under a deterministic reduction). In particular, it would be interesting to find efficient and deterministic constructions of bipartite half-covers with maximal connected matching upper bounded by $n^{o(1)}$. - Finally, understanding how well one can multitask on a “small" number of tasks is of interest. This raises the question of fixed-parameter algorithms for ${\alpha}_k$ and connected matchings, where $k$ is a parameter independent of $n$. ${\ensuremath{\mathcal{NP}}}$-hardness of computing the maximum connected matching of a graph {#sec:hardness-deterministic-reduction-cm} ============================================================================================= In this section we show that given a bipartite graph it is NP-hard to compute $\nu_C(G)$ exactly under a deterministic polynomial time reduction. This is as opposed to the randomized reduction given in \[thm:connected-match-inapprox\]. We remark that [@plummer2003special] proved this result for the non-bipartite case. Our proof is an adaptation of their proof to the bipartite case. \[thm:exact\_connected\] It is [$\mathcal{NP}$]{}-hard to determine given a bipartite graph $G=(A,B)$ and a parameter $k$ whether $G$ contains a connected matching of size $k$. We reduce the biclique problem to the problem of determining whether $\nu_C(G)=k$. Recall that a biclique $G'=(C',D')$ in a bipartite graph $G$ is a subgraph $G'$ of $G$ such that every vertex in $C'$ is connected to every vertex in $D'$. 
A biclique $(C',D')$ is balanced if $|C'|=|D'|$. The biclique problem is the following: given a bipartite graph $G = (A,B)$ (we assume that $|A| = |B|$) and an integer $k$, is there a biclique $(A',B')$ with $A' \subseteq A, B'\subseteq B$ and $|A'| = |B'| = k$? This problem is well known to be [$\mathcal{NP}$]{}-complete. Given a bipartite graph $G = (A,B)$ with $|A| = |B| = n$, form a new graph $H$ as follows. Initialize $H_1 = (A_1,B_1)$ to equal $G$; we call this the copy of $G$ inside $H$. Then add a new set $A'$ of $n$ vertices such that $(A_1,A')$ forms a biclique, and add a new set $B'$ of $n$ vertices such that $(B_1,B')$ forms a biclique. Initialize another graph $H_2 = (A_2,B_2)$ to be a biclique with $|A_2| = |B_2| = n$ (where $A_2,B_2$ are disjoint from $A_1\cup B_1 \cup A' \cup B'$). Add an edge between every vertex of $(A_1 \cup B')$ and every vertex of $B_2$, and add an edge between every vertex of $(B_1 \cup A')$ and every vertex of $A_2$. The resulting (bipartite) graph is $H = (A_1 \cup B' \cup A_2, B_1 \cup A' \cup B_2)$. Consider a connected matching $M$ in $H$. Let $M_A \subseteq M$ be the set of all edges in $M$ contained in the biclique $(A_1,A')$, let $M_B \subseteq M$ be the set of all edges in $M$ contained in the biclique $(B_1,B')$, and let $M_r = M - (M_A \cup M_B)$. Then $|M| = |M_A| + |M_B| + |M_r|$. Let $X_A\subseteq A_1$ denote the set of vertices in $A_1$ that are endpoints of edges in $M_A$, and let $X_B$ be analogously defined with respect to $B_1$ and $M_B$. Since $M$ is a connected matching, $(X_A, X_B)$ is a biclique. We also have $|M_r| {\leqslant}2n - \max\{|X_A|,|X_B|\}$, which implies $|M| {\leqslant}2n + \min\{|X_A|,|X_B|\}$, where we have used $|X_A| = |M_A|, |X_B| = |M_B|$. Thus, if $H$ has a connected matching of size $2n + k$ then $\min\{|X_A|,|X_B|\} {\geqslant}k$, which means that there is a biclique of size $k$. 
Conversely, if $G$ contains a biclique $(R,S)$ of size $k$, we can easily form a connected matching $M$ in $H$ of size $2n + k$. To construct $M$, we take $k$ edges $M_A$ in $(A_1,A')$ with $X_A = R$, we take $k$ edges $M_B$ in $(B_1,B')$ with $X_B = S$, we take $n - k$ edges matching the $n - k$ vertices of $A_1 - X_A$ with $n - k$ vertices $B_2' \subseteq B_2$, we take $n - k$ edges matching the $n - k$ vertices of $B_1 - X_B$ with $n - k$ vertices $A_2' \subseteq A_2$, and we take $k$ edges matching $A_2 - A_2'$ with $B_2 - B_2'$. Thus, $G$ contains a biclique of size $k$ if and only if $H$ contains a connected matching of size $2n + k$. This completes the proof. [^1]: Tel Aviv University and Princeton University `[email protected]` [^2]: Princeton University `[email protected]` [^3]: Princeton University `[email protected]` [^4]: University of California, Berkeley `[email protected]` [^5]: Princeton University `[email protected]` [^6]: Simon Fraser University `[email protected]` [^7]: MIT, CSAIL `[email protected]` [^8]: Princeton University `[email protected]` [^9]: To simplify matters, we consider the synchronous setting where transmissions occur in discrete time slots. [^10]: Since we consider the minimum, the definition of ${\alpha}_{k}$ ensures that values of $r {\leqslant}k$ for which there is no matching of size $r$ have no influence on ${\alpha}_{{\leqslant}k}(G)$. [^11]: In the irregular case this holds assuming the average degree satisfies $d \gg \log n$. [^12]: Observe that if a network contains a path of length $3$ then trivially ${\alpha}_{{\leqslant}k}(G) {\leqslant}1/2$ for all $k {\geqslant}3$. [^13]: Observe that bipartite chordal graphs are not necessarily chordal. See [@jobson2014connected] for details. [^14]: We remark that the graph obtained from contracting a set of edges, indeed, does not depend on the order.
--- abstract: 'A hyperspectral image is a collection of more than a hundred images, called bands, of the same region, taken at juxtaposed frequencies. The reference image of the region is called the Ground Truth map (GT). The problem is how to find the bands best suited to classifying the pixels of the region, because the bands can be not only redundant but also a source of confusion, decreasing the accuracy of classification. Some methods use Mutual Information (MI) and a threshold to select relevant bands without treating redundancy; others consider neighboring bands having sensibly the same MI with the GT as redundant and discard them. That is the main drawback of such methods, because it forfeits the advantage of hyperspectral images: some precious information can be discarded. In this paper we distinguish between useful and useless redundancy. A band contains useful redundancy if it contributes to decreasing the error probability. Following this scheme, we introduce a new algorithm, also using mutual information, that retains only the bands minimizing the error probability of classification. To control redundancy, we introduce a complementary threshold: a good band candidate must decrease the last error probability by more than this threshold. This process is a wrapper strategy; it achieves high classification accuracy but is more expensive than a filter strategy.' --- **Applied Mathematical Sciences, Vol. 6, 2012, no.
102, 5073 - 5084** **[ELkebir Sarhrouni\*, Ahmed Hammouch\*\* and Driss Aboutajdine\*]{}** \*LRIT, Faculty of Sciences, Mohamed V - Agdal University, Morocco \*\*LRGE, ENSET, Mohamed V - Souissi University, Morocco [email protected] [**Keywords:**]{} Hyperspectral images, classification, feature selection, mutual information, error probability, redundancy Introduction ============ In the feature classification domain, the choice of data widely affects the results. In a hyperspectral image, not all bands carry useful information; some bands, such as those affected by various atmospheric effects (see Figure 4), are irrelevant and decrease the classification accuracy. There also exist redundant bands that complicate the learning system and produce incorrect predictions \[14\]. Even if the bands contain enough information about the scene, they may fail to predict the classes correctly when the dimension of the image space (see Figure 3) is so large that many samples are needed to detect the relationship between the bands and the scene (the Hughes phenomenon) \[10\]. We can reduce the dimensionality of hyperspectral images by selecting only the relevant bands (the feature selection or subset selection methodology), or by extracting from the original bands new bands containing maximal information about the classes, using logical or numerical functions (the feature extraction methodology) \[11\]\[9\]. Here we focus on feature selection using mutual information.
Hyperspectral images have three advantages over multispectral images \[6\],\ *[**Assertion:**]{} when we reduce hyperspectral image dimensionality, any method used must preserve the precision and high discrimination of substances given by the hyperspectral image.* ![Precision and discrimination added by hyperspectral images[]{data-label="fig_sim"}](figure1){width="3.5in"} In this paper we use the hyperspectral image AVIRIS 92AV3C (Airborne Visible Infrared Imaging Spectrometer) \[2\]. It contains 220 images taken over the region ”Indiana Pine” in ”north-western Indiana”, USA \[1\]. The 220 bands are taken between 0.4$\mu$m and 2.5$\mu$m. Each band has 145 lines and 145 columns. The ground truth map is also provided, but only 10366 pixels are labeled from 1 to 16; each label indicates one of 16 classes, and the zeros indicate pixels that are not yet classified, see Figure 2.\ ![The Ground Truth map of AVIRIS 92AV3C and the 16 classes []{data-label="fig_sim"}](figure2){width="3.5in"} The hyperspectral image AVIRIS 92AV3C contains values between 955 and 9406. Each pixel of the ground truth map has a set of 220 values (measures) along the hyperspectral image, representing the reflectance of the pixel in each band, so each pixel is represented as a vector of 220 components.\ Figure 3 illustrates this pixel-vector notion \[7\]. Reducing dimensionality thus means selecting only the dimensions carrying the most information about the classes. ![The notion of pixel vector []{data-label="fig_sim"}](figure3){width="4in"} We can also note that not all bands are carriers of information. In Figure.
4, for example, we can see the effects of atmospheric conditions on bands 155, 220 and others. This hyperspectral image illustrates the problem of dimensionality reduction. Mutual Information based feature selection ========================================== Definition of mutual information -------------------------------- This is a measure of the information exchanged between two sets of random variables A and B: $$I(A,B)=\sum\;p(A,B)\;log_2\;\frac{p(A,B)}{p(A)\,p(B)}$$ Considering the ground truth map and the bands as sets of random variables, we calculate their interdependence.\ Guo \[3\] also uses the average of bands 170 to 210 to produce an estimated ground truth map, and uses it instead of the real one. The two curves are similar; this is shown in Figure 4. ![Mutual information of AVIRIS with the Ground Truth map (solid line) and with the ground truth approximated by averaging bands 170 to 210 (dashed line).[]{data-label="fig_sim"}](figure4){width="3.5in"} The measure of error probability -------------------------------- Fano \[14\] has demonstrated that as the mutual information of the already selected features grows, the error probability of classification decreases, according to the bounds below: $$\;\frac{H(C/X)-1}{log_2(N_c)}\leq\;P_e\leq\frac{H(C/X)}{log_2(2)}\;$$with : $$\;\frac{H(C/X)-1}{log_2(N_c)}=\frac{H(C)-I(C;X)-1}{log_2(N_c)}\;$$ and : $$P_e\leq\frac{H(C)-I(C;X)}{log_2(2)}=\frac{H(C/X)}{log_2(2)}\;$$ The conditional entropy *H(C/X)* is calculated between the ground truth map (i.e. the classes C) and the subset of candidate bands X; $N_c$ is the number of classes. So when the features X have a higher value of mutual information with the ground truth map (i.e., X is closer to the ground truth map), the error probability will be lower.
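As a sketch, the band/GT mutual information defined above can be estimated from a joint histogram of quantized reflectances and class labels. The array inputs and the 256-level quantization here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mutual_information(band, gt, bins=256):
    """Estimate I(band; GT) = sum p(a,b) * log2( p(a,b) / (p(a) p(b)) )
    from a joint histogram of a band image and the ground truth map."""
    joint, _, _ = np.histogram2d(band.ravel(), gt.ravel(),
                                 bins=[bins, int(gt.max()) + 1])
    p_ab = joint / joint.sum()                   # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)        # marginal of the band
    p_b = p_ab.sum(axis=0, keepdims=True)        # marginal of the GT
    nz = p_ab > 0                                # avoid log of zero
    return float((p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])).sum())
```

Bands would then be ranked by this score against the ground truth map before the selection step.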
But it is difficult to compute this joint mutual information *I(C;X)*, given the high dimensionality \[14\]. This property makes mutual information a good criterion for measuring the resemblance between two bands, as exploited in Section II. In what follows we consider the case of a single candidate feature X. *[**Corollary: for one feature X, as X approaches the ground truth map, the interval bounding *P*$_{e}$ becomes very small.**]{}* The principle of the proposed algorithm based on the inequality of Fano ============================================================== Our idea is based on this observation: the band that has the highest value of mutual information with the ground truth map can be a good approximation of it. The subset of selected bands is a good one if it can generate an estimated reference map sensibly equal to the ground truth map. This is clearly an Incremental Wrapper-based Subset Selection (IWSS) approach \[16\] \[13\]. Our band selection process is as follows: we order the bands by the value of their mutual information with the ground truth map, and initialize the set of selected bands with the band of highest MI. At each step of the process, we build an approximated reference map *C*$_{est}$ with the already selected bands and use it in place of $X$ to compute the error probability (*P*$_{e}$); the band just added to those already selected must make *P*$_{e}$ decrease, otherwise it is discarded from the retained set. We then introduce a complementary threshold *T*$_{h}$ to control redundancy: the band to be selected must make the error probability less than (*P*$_{e}$ - *T*$_{h}$), where *P*$_{e}$ is computed before adding it. The following algorithm details the process:\ *Let SS be the set of bands already selected and S the candidate band. *Build*$_{estimated}C()$ is a procedure to construct the estimated reference map.
*P*$_{e}$ is initialized with a value [*P*$_{e}^*$ ]{}. X is the number of bands to be selected, $SS$ is empty and $R=\{1..220\}$.* Select *band index*$_{s}$ $S$=*argmax*$_{s}$ MI(s) $SS\gets \textit{SS} \cup \textit{S}$ and $R\gets \textit{R} \setminus\textit{S}$ *C*$_{est}$= *Build*$_{estimated}C(SS)$ $$Pe=\frac{H(C/C_{est})}{log_2(2)} - \frac{H(C/C_{est})-1}{log_2(N_c)}$$ $Pe\gets Pe^*$ $SS\gets \textit{SS} \setminus \textit{S}$ Results and analysis ===================== We apply this algorithm to the hyperspectral image AVIRIS 92AV3C \[1\]: 50% of the labeled pixels are randomly chosen and used for training, and the other 50% are used for testing the classification \[3\]. The classifier used is the SVM \[5\] \[12\] \[4\].\ *[ The procedure to construct the estimated reference map *C*$_{est}$ is the same SVM classifier used for classification, so *C*$_{est}$ is the output of the classification. ]{}* Results -------- Table I shows the results obtained for several thresholds. We can see the effectiveness of our algorithm's band selection, and the important effect of avoiding redundancy.
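The selection loop described above can be sketched as follows. The `build_estimated_C` and `cond_entropy` callables stand in for the classifier-based reference-map construction and for $H(C/C_{est})$; all names and signatures are illustrative assumptions, not the authors' code:

```python
import math

def select_bands(mi_scores, build_estimated_C, cond_entropy, n_classes,
                 max_bands, threshold, pe_init=float("inf")):
    """Greedy wrapper selection: candidates are taken in decreasing order
    of MI with the ground truth; a band is kept only if it lowers the
    Fano-interval error estimate by more than `threshold`."""
    order = sorted(range(len(mi_scores)), key=lambda s: -mi_scores[s])
    selected, pe = [], pe_init
    for s in order:
        if len(selected) == max_bands:
            break
        trial = selected + [s]
        h = cond_entropy(build_estimated_C(trial))      # H(C / C_est)
        pe_new = h / math.log2(2) - (h - 1) / math.log2(n_classes)
        if pe_new < pe - threshold:                     # useful band: keep it
            selected, pe = trial, pe_new
    return selected
```

A larger `threshold` forces each retained band to buy a bigger drop in the error estimate, which is how the scheme suppresses useless redundancy.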
  Bands retained   Th=0.00   Th=0.001   Th=0.008   Th=0.015   Th=0.020   Th=0.030
  ---------------- --------- ---------- ---------- ---------- ---------- ----------
  10               55.43     55.43      55.58      53.09      60.06      71.62
  18               59.09     59.09      64.41      73.70      82.62      90.00
  20               63.08     63.08      68.50      76.15      84.36      -
  25               66.02     66.12      74.62      84.41      89.06      -
  27               69.47     69.47      76.00      86.73      91.70      -
  30               73.54     73.54      79.04      88.68      -          -
  35               76.06     76.06      81.38      92.36      -          -
  40               78.96     79.41      86.48      -          -          -
  45               80.58     80.60      89.09      -          -          -
  50               81.63     81.20      91.14      -          -          -
  53               82.27     81.22      92.67      -          -          -
  60               86.13     86.23      -          -          -          -
  70               86.97     87.55      -          -          -          -
  80               89.11     89.42      -          -          -          -
  90               90.55     90.92      -          -          -          -
  100              92.50     93.18      -          -          -          -
  102              92.62     93.34      -          -          -          -
  114              93.34     -          -          -          -          -

  : Results illustrating the elimination of redundancy using the algorithm based on the inequality of Fano, for several thresholds ($Th$)

Figure 5 shows the accuracy curves in more detail, versus the number of bands retained, for several thresholds. This covers all behaviors of the algorithm. ![Accuracy of classification using the algorithm based on the inequality of Fano, for numerous thresholds.[]{data-label="fig_sim"}](figure5){width="5.5in"} Analysis --------- Table I and Figure 5 allow us to comment on four cases:\ \ **First:** For the highest threshold values (0.1, 0.05, 0.03 and 0.02) we obtain a hard selection: only a few of the most informative bands are selected; the classification accuracy reaches 90% with fewer than 20 bands selected.\ **Second:** For the medium threshold values (0.015, 0.012, 0.010, 0.008, 0.006), some redundancy is allowed, in order to increase the classification accuracy.\ **Third:** For the small threshold values (0.001 and 0), the redundancy allowed becomes useless: we obtain the same accuracy with more bands.\ **Finally:** For negative thresholds, for example -0.01, all bands are allowed to be selected, and the algorithm takes no action. This corresponds to selecting bands simply by ordering them by mutual information.
The performance is low.\ We can note here that \[15\] uses two axioms to characterize feature selection. The sufficiency axiom: the selected feature subset must be able to reproduce the training samples without losing information. The necessity axiom: the "simplest among different alternatives is preferred for prediction". In the proposed algorithm, reducing the error probability between the truth map and the estimated one minimizes the information lost for the training samples and also for the predicted ones. We note also that the number of selected features can be used as a stopping condition for the search, yielding a hybrid filter-wrapper approach \[16\].\ \ **Partial conclusion:**\ The proposed algorithm is a very good method to reduce the dimensionality of hyperspectral images.\ \ We illustrate in Figure 6 the Ground Truth map as originally displayed, as in Figure 1, and the scene classified with our method for threshold 0.03, i.e. 18 selected bands.\ ![Original Ground Truth map (left) and the map produced by our algorithm with threshold 0.03, i.e. 18 bands (right). Accuracy = 90%.
](figure6.png){width="5in"} \ Table II indicates the classification accuracy of each class, for several thresholds.\

  Class   Total pixels   Th=0.00   Th=0.001   Th=0.008   Th=0.015   Th=0.020   Th=0.030
  ------- -------------- --------- ---------- ---------- ---------- ---------- ----------
  1       54             86.96     82.61      86.96      83.96      78.26      86.96
  2       1434           91.07     89.40      89.54      89.12      88.01      83.96
  3       834            89.93     90.89      89.69      86.09      83.69      81.53
  4       234            96.32     83.76      86.32      87.18      87.18      86.32
  5       597            95.93     95.53      94.34      95.93      95.93      95.53
  6       747            98.60     98.60      98.32      98.60      98.32      98.32
  7       26             84.62     84.62      84.62      84.62      84.62      84.62
  8       489            98.37     98.37      98.78      97.96      98.78      98.78
  9       20             100       100        100        100        100        100
  10      968            92.15     92.98      91.32      91.74      90.91      89.05
  11      2468           93.84     94.17      92.54      92.71      91.90      91.25
  12      614            91.21     93.49      92.83      92.18      88.93      87.30
  13      212            98.06     98.06      98.06      98.06      98.06      98.06
  14      1294           97.53     97.86      97.22      97.84      97.99      97.53
  15      390            79.52     77.71      75.90      74.10      78.92      64.46
  16      95             93.48     93.48      93.48      93.48      93.48      93.48

  : Accuracy of classification (%) of each class for several thresholds ($Th$)

\ \ **Comments:**\ **First:** we can note the effectiveness of this algorithm particularly for the classes with few pixels, for example class number 9.\ **Second:** we can note that 18 bands (i.e. threshold 0.03) are sufficient to detect the materials contained in the region, as is also shown in Figure 6.\ **Third:** an important observation is that most class accuracies change only slightly when the threshold varies between 0.03 and 0.015. Conclusion ========== In this paper we presented the necessity of reducing the number of bands in the classification of hyperspectral images. Then we introduced the mutual information based scheme and carried out its effectiveness in selecting bands able to classify the pixels of the ground truth.
We introduced an algorithm also based on mutual information, using a measure of error probability (the inequality of Fano). To be chosen, a band must contribute to reducing the error probability. A complementary threshold is added to avoid redundancy: each retained band has to reduce the error probability by a step equal to the threshold, even if it carries redundant information. We can say that we conserve the useful redundancy by adjusting the complementary threshold. The process introduced is able to select bands good for classification even for the classes that have few pixels. This algorithm is a feature selection methodology, but it is a wrapper approach, because we use the classifier to produce the estimated reference map. This is more expensive than a filter strategy, but it can be used for applications that need more precision. This scheme is very interesting to investigate and improve, considering its performance. [99]{} D. Landgrebe, “On information extraction principles for hyperspectral data: A white paper,” Purdue University, West Lafayette, IN, Technical Report, School of Electrical and Computer Engineering, 1997. Available at: http://dynamo.ecn.purdue.edu/ landgreb/whitepaper.pdf. ftp://ftp.ecn.purdue.edu/biehl/MultiSpec/ Baofeng Guo, Steve R. Gunn, R. I. Damper, Senior Member, IEEE, and J. D. B. Nelson, “Band Selection for Hyperspectral Image Classification Using Mutual Information”, IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, Vol. 3, No. 4, OCTOBER 2006. Baofeng Guo, Steve R. Gunn, R. I. Damper, Senior Member, IEEE, and James D. B. Nelson, “Customizing Kernel Functions for SVM-Based Hyperspectral Image Classification”, IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 4, APRIL 2008. Chih-Chung Chang and Chih-Jen Lin, LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/ cjlin/libsvm.
Nathalie GORRETTA-MONTEIRO, Proposition d’une approche de segmentation d’images hyperspectrales. PhD thesis, Université Montpellier II, Février 2009. David Kernéis, “Amélioration de la classification automatique des fonds marins par la fusion multicapteurs acoustiques”. Thèse, ENST Bretagne, Université de Rennes. Chapitre 3, Réduction de dimensionalité et classification, p. 48, Avril 2007. Kwak, N. and Choi, C., “Feature extraction based on direct calculation of mutual information”, IJPRAI, Vol. 21, No. 7, pp. 1213-1231, Nov. 2007. Nojun Kwak and C. Kim, “Dimensionality Reduction Based on ICA Regression Problem”, ARTIFICIAL NEURAL NETWORKS-ICANN 2006, Lecture Notes in Computer Science, 2006, ISBN 978-3-540-38625-4, Volume 1431/2006. Hughes, G., “On the mean accuracy of statistical pattern recognizers”, IEEE Transactions on Information Theory, Jan. 1968, Volume 14, Issue 1, pp. 55-63, ISSN 0018-9448, DOI: 10.1109/TIT.1968.1054102. YANG, Yiming, and Jan O. PEDERSEN, 1997. A comparative study of feature selection in text categorization. In: ICML ’97: Proceedings of the Fourteenth International Conference on Machine Learning. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., pp. 412-420. Chih-Wei Hsu and Chih-Jen Lin, “A comparison of methods for multiclass support vector machines”, Dept. of Comput. Sci. Inf. Eng., Nat. Taiwan Univ., Taipei, Mar. 2002, Volume 13, Issue 2, pages 415-425, ISSN 1045-9227, DOI: 10.1109/72.991427. Bermejo, P., Gamez, J.A., Puerta, J.M., “Incremental Wrapper-based subset Selection with replacement: An advantageous alternative to sequential forward selection”, Comput. Syst. Dept., Univ. de Castilla-La Mancha, Albacete; Computational Intelligence and Data Mining, 2009. CIDM ’09.
IEEE Symposium on March 30 2009-April 2 2009, pages: 367 - 374, ISBN: 978-1-4244-2765-9, IAN: 10647089, DOI: 10.1109/CIDM.2009.4938673 Lei Yu, Huan Liu,“Efficient Feature Selection via Analysis of Relevance and Redundancy”, Department of Computer Science and Engineering; Arizona State University, Tempe, AZ 85287-8809, USA, Journal of Machine Learning Research 5 (2004) 1205-1224. Hui Wang, David Bell, and Fionn Murtagh, “Feature subset selection based on relevance” , Vistas in Astronomy, Volume 41, Issue 3, 1997, Pages 387-396. P. Bermejo, J.A. Gámez, and J.M. Puerta, "A GRASP algorithm for fast hybrid (filter-wrapper) feature subset selection in high- [**Received: March, 2012**]{}
--- abstract: 'Modern distributed storage systems often use erasure codes to protect against disk and node failures to increase reliability, while trying to meet the latency requirements of the applications and clients. Storage systems may have caches at the proxy or client ends in order to reduce the latency. In this paper, we consider a novel caching framework with erasure codes called [*functional caching*]{}. Functional caching involves using erasure-coded chunks in the cache such that the code formed by the chunks in the storage nodes and the cache combined is a maximum-distance-separable (MDS) erasure code. Based on the arrival rates of different files, the placement of file chunks on the servers, and the service time distribution of the storage servers, an optimal functional caching placement and the access probabilities of file requests from different disks are determined. The proposed algorithm gives significant latency improvement in both simulations and a prototyped solution in an open-source, cloud storage deployment.' author: - 'Vaneet Aggarwal, Yih-Farn R. Chen, Tian Lan, and Yu Xiang[^1]' bibliography: - 'cache.bib' - 'allstorage.bib' - 'Tian.bib' - 'ref\_Tian2.bib' - 'ref\_Tian3.bib' - 'Vaneet\_cloud.bib' - 'Tian\_rest.bib' - 'yu.bib' title: 'Sprout: A functional caching approach to minimize service latency in erasure-coded storage' --- Conclusions =========== In this paper, we propose functional caching, a novel approach that creates erasure-coded cache chunks maintaining the MDS code property together with the existing data chunks. It outperforms exact caching schemes and provides a higher degree of freedom in file access and request scheduling. We quantify an upper bound on the mean service latency in closed form for erasure-coded storage systems with functional caching, for arbitrary chunk placement and service time distributions. A cache optimization problem is formulated and solved using an efficient heuristic algorithm.
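As a toy illustration of the MDS idea (not the paper's construction): with a $(3,2)$ code, a file split into chunks $A$ and $B$ is stored as $A$, $B$ and the parity $A \oplus B$, and any two of the three chunks reconstruct the file. A cache holding the parity chunk therefore needs only one storage read, from either systematic chunk, to serve a request:

```python
def xor_bytes(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

def encode(data):
    """Split data (even length assumed) into A, B and parity A^B:
    a (3,2) MDS code -- any two chunks recover the file."""
    half = len(data) // 2
    a, b = data[:half], data[half:]
    return {"A": a, "B": b, "P": xor_bytes(a, b)}

def decode(got):
    """Reconstruct the file from any two chunks, keyed 'A', 'B', 'P'."""
    if "A" in got and "B" in got:
        a, b = got["A"], got["B"]
    elif "A" in got:
        a = got["A"]
        b = xor_bytes(a, got["P"])       # B = A ^ (A ^ B)
    else:
        b = got["B"]
        a = xor_bytes(b, got["P"])       # A = B ^ (A ^ B)
    return a + b
```

Real deployments use Reed-Solomon-style $(n,k)$ codes rather than a single XOR parity, but the degree-of-freedom argument is the same: a cached coded chunk pairs with any $k-1$ surviving storage chunks.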
Numerical results and a prototype in an open-source cloud storage deployment validate significant service latency reduction using functional caching. This paper assumes that a rate monitoring/prediction oracle (e.g., an online predictive model or a simple sliding-window-based method) is available to detect the rate changes. Finding a robust algorithm that can automatically adjust to such changes is an open problem and will be considered as future work. [^1]: The author names are written in alphabetical order. V. Aggarwal is with the School of IE, Purdue University, West Lafayette, IN 47907, USA, email: [email protected]. Y. R. Chen and Y. Xiang are with AT$\&$T Labs-Research, Bedminster, NJ 07921, USA, email: {chen,yxiang}@research.att.com. T. Lan is with the Department of ECE, George Washington University, DC 20052, USA, email: [email protected]. This work was supported in part by the National Science Foundation under grant CNS-1618335. This work was presented in part in Proc. IEEE ICDCS 2016 [@7536586].
--- abstract: 'The interplay between magnetism and metal-insulator transitions is fundamental to the rich physics of the single band fermion Hubbard model. Recent progress in experiments on trapped ultra-cold atoms has made possible the exploration of similar effects in the boson Hubbard model (BHM). We report on Quantum Monte Carlo (QMC) simulations of the spin-1 BHM in the ground state. For antiferromagnetic interactions, $(U_2>0)$, which favor singlet formation within the Mott insulator lobes, we present exact numerical evidence that the superfluid-insulator phase transition is first (second) order depending on whether the Mott lobe is even (odd). Inside even Mott lobes, we examine the possibility of nematic-to-singlet first order transitions. In the ferromagnetic case $(U_2<0)$, the transitions are all continuous. We map the phase diagram for $U_2<0$ and demonstrate the existence of the ferromagnetic superfluid. We also compare the QMC phase diagram with a third order perturbation calculation.' author: - 'G.G. Batrouni$^1$, V.G. Rousseau$^2$, and R.T. Scalettar$^3$' title: 'Magnetic and Superfluid Transitions in the d=1 Spin-1 Boson Hubbard Model' --- The single band fermion Hubbard model (FHM) offers one of the most fundamental descriptions of the physics of strongly correlated electrons in the solid state. The spinful nature of the fermions is central to the wide range of phenomena it displays, such as the interplay between its magnetic and transport properties.[@fazekas] Such complex interplay is absent in the superfluid to Mott insulator transition [@fisher89; @batrouni90; @freericks96; @prokofev07] in the spin-0 Boson Hubbard model (BHM). However, purely optical traps [@stamperkurn98] can now confine alkali atoms $^{23}$Na, $^{39}$K, and $^{87}$Rb, which have hyperfine spin $F=1$, without freezing $F_z$. As in the fermion case, the nature of the superfluid-Mott insulator (SF-MI) transition is modified by the spin fluctuations which are now allowed.
Initial theoretical work employed continuum, effective low-energy Hamiltonians and determined the magnetic properties and excitations of the superfluid phases.[@magprop] To capture the SF-MI transition it is necessary to consider the spin-1 Bosonic Hubbard Hamiltonian, $$\begin{aligned} \label{hubham} H=&& -t \sum_{\langle ij\rangle ,\sigma}(a^{\dagger}_{i\sigma}a_{j\sigma}+ h.c.) + \frac{U_0}{2} \sum_i {\hat{n}_i}({\hat{n}_i}-1) \nonumber \\ &&+\frac{U_2}{2} \sum_i (\vec{F}_{i}^{\, 2}- 2 \, \hat{n}_i) \,\, .\end{aligned}$$ The boson creation (destruction) operators $a_{i \sigma}^\dagger \,(a_{i \sigma})$ have site $i$ and spin $\sigma$ indices, with $\sigma=1,0,-1$. The first term describes hops between nearest-neighbor sites $\langle ij \rangle$. The hybridization $t=1$ sets the energy scale, and we study the one dimensional case. The number operator $\hat n_i\equiv\sum_\sigma\hat{n}_{i\sigma}= \sum_\sigma a^{\dagger}_{i\sigma}a_{i\sigma}$ counts the total boson density on site $i$. The on-site repulsion $U_0$ favors states with uniform occupation, and competition between $U_0$ and $t$ drives the MI-SF transition. The spin operator $\vec{F}_i=\sum_{\sigma,\sigma '} a^{\dagger}_{i\sigma} \vec{F}_{\sigma \sigma '} a_{i\sigma '}$, with $ \vec{F}_{\sigma \sigma '}$ the standard spin-1 matrices, contains further density-density interactions and also interconversion terms between the spin species. We treat the system in the canonical ensemble where the total particle number is fixed and the chemical potential is calculated as $\mu(N)=E(N+1)-E(N)$, where $E(N)$ is the ground state energy with $N$ particles. This holds only when each energy used is that of a single thermodynamic phase and not that of a mixture of coexisting phases, as happens with first order transitions. It is therefore incorrect to determine first order phase boundaries with the naive use of this method. In the present case there is the added subtlety that the three species are interconvertible.
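For reference, the standard spin-1 matrices $\vec{F}_{\sigma\sigma'}$ entering the Hamiltonian satisfy $[F_x,F_y]=iF_z$ and $\vec{F}^2 = F(F+1)\,\mathbb{1} = 2\,\mathbb{1}$ on a single site, which can be checked numerically:

```python
import numpy as np

# Standard spin-1 matrices in the F_z basis (+1, 0, -1), hbar = 1.
s = 1 / np.sqrt(2)
Fx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Fy = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]])
Fz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# F^2 = F(F+1) * identity = 2 * identity for a single spin-1 particle.
F2 = Fx @ Fx + Fy @ Fy + Fz @ Fz
```

These are the matrices whose interconversion terms (off-diagonal elements of $F_x$, $F_y$) mix the $\sigma=\pm 1, 0$ species in the $U_2$ interaction.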
These issues will be addressed in more detail below. Several important aspects of the spin-1 BHM are revealed by analysing the independent site limit, $t/U_0=0$. The Mott-1 state with $n_i=1$ on each site has ${\cal E}_{\rm M}(1)=0$. In the Mott-2 state with $n_i=2$, the energy is ${\cal E}_{\rm M}(2)=U_0-2U_2$, if the bosons form a singlet, $F=0$, and is ${\cal E}_{{\rm M}2}=U_0+U_2$ if $F=2$. Thus $U_2>0$ favors singlet phases while $U_2<0$ favors (on-site) ferromagnetism. This applies to all higher lobes as well. In the canonical ensemble, the chemical potential at which the system goes from the $n$th to the $(n+1)$th Mott lobe is $\mu(n\to n+1)={\cal E}_{n+1}-{\cal E}_{n}$. First consider $U_2>0$. The energy of Mott lobes at odd filling, $n_{\rm o}$, is ${\cal E}_{M}(n_{\rm o})=U_0n_{\rm o}(n_{\rm o}-1)/2+U_2(1-n_{\rm o})$ while at even filling, $n_{\rm e}$, ${\cal E}_{M}(n_{\rm e})=U_0n_{\rm e}(n_{\rm e}-1)/2-n_{\rm e}U_2$. Therefore, the boundaries of the lobes, going from lower to higher filling, are $\mu (n_{\rm e}\to (n_{\rm e}+1))=n_{\rm e}U_0$ and $\mu (n_{\rm o}\to (n_{\rm o}+1))=n_{\rm o}U_0-2U_2$. This demarks the positions of the ‘bases’ of the Mott lobes in the $(t/U_0,\mu/U_0)$ ground state phase diagram. For $U_2>0$ the even Mott lobes grow at the expense of the odd ones, which disappear entirely for $U_0=2U_2$. For $U_2<0$, the ground state is ferromagnetic (maximal $F$: $F^2=n(n+1)$) which gives for all Mott lobes ${\cal E}_{\rm M}(n)=n(n-1)(U_0+U_2)/2$. Consequently, the boundary of the $n$th and $(n+1)$th Mott lobes $\mu (n\to n+1)=n(U_0+U_2)$. The bases of both the odd and even Mott lobes shrink with increasing $|U_2|$, in contrast to the $U_2>0$ case where the even Mott lobes expand. 
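The atomic-limit ($t/U_0=0$) lobe boundaries derived above are simple enough to tabulate directly; a minimal sketch following the formulas in the text:

```python
def mott_boundary(n, U0, U2):
    """Chemical potential mu(n -> n+1) at t/U0 = 0.
    U2 > 0 (antiferromagnetic): n*U0 if n is even, n*U0 - 2*U2 if n is odd.
    U2 <= 0 (ferromagnetic):    n*(U0 + U2) for all lobes."""
    if U2 > 0:
        return n * U0 if n % 2 == 0 else n * U0 - 2 * U2
    return n * (U0 + U2)
```

The width of the lobe at filling $n$ is $\mu(n\to n+1)-\mu(n-1\to n)$, which reproduces the statements above: for $U_2>0$ even lobes expand by $4U_2$ while odd lobes shrink, vanishing at $U_0=2U_2$.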
Mean-field treatments of the lattice model capture the SF-MI transition as the hopping $t$ is turned on, and have been performed both at zero and finite temperature.[@krutitsky04; @tsuchiya05; @pai08] Even when $U_2=0$, the spin degeneracy alters the nature of the transition. For $U_2>0$, the order of the phase transition depends on whether the Mott lobe is even or odd. These mean field calculations assume a non-zero order parameter $\langle a_{i\sigma} \rangle$, which cannot be appropriate in d=1 or in d=2 at finite $T$. Therefore it is important to verify these predictions for the qualitative aspects of the phase diagram, especially in low dimension. A quantitative determination of the phase boundaries requires numerical treatments. Indeed, DMRG [@dmrg] and Quantum Monte Carlo (QMC) [@apaja06] results for $U_2>0$ and d=1 reported the critical coupling strength and showed that the odd Mott lobes are characterized by a dimerized phase which breaks translation symmetry. For $U_2<0$, the nature of the SF-MI transition does not depend on the order of the Mott lobe, while for $U_2>0$ it is predicted to be continuous (discontinuous) into odd (even) lobes. Consequently, it suffices to study the first two Mott lobes both for $U_2>0$ and $U_2<0$ to demonstrate the behavior for all lobes. Furthermore, in what follows we will focus on the case $|U_2/U_0|=0.1$ in order to compare our results for $U_2>0$ with those of Rizzi [*et al.*]{} [@dmrg]. Here, we will use an exact QMC approach, the SGF algorithm with directed update, to study the spin-1 BHM in d=1 for both positive and negative $U_2$.[@SGF] For $U_2>0$ (e.g. $^{23}$Na), which favors low total spin states, Fig. \[rhovsmuU010Ut1fss\] shows the total number density $\rho=N/L$ against the chemical potential, $\mu$, for $U_0=10t$ and $U_2=t$. It displays clearly the first two incompressible MI phases. In agreement with the $t/U_0=0$ analysis, $U_2$ causes an expansion of the second Mott lobe, $\rho=2$, at the expense of the first, $\rho=1$.
Our Mott gaps agree with DMRG results[@dmrg] to within symbol size. However, the $\rho$ versus $\mu$ curve in Fig. \[rhovsmuU010Ut1fss\] does not betray any evidence of the different natures of the phase transitions into the first and second Mott lobes. In particular, for a spin-0 BHM, first order transitions are clearly exposed by the appearance of negative compressibility,[@batrouni00] $\kappa=\partial \rho/\partial\mu$, which is not present here. The transition into the second Mott lobe is expected to be first order and driven by the formation of bound pairs of bosons in singlet states. Therefore, in the canonical ensemble, we expect that near this transition there will be phase coexistence between singlets arranged in a Mott region and superfluid. The nature of the transitions is revealed by the evolution of the spin populations in the system as $\rho$ increases. Since the singlet wavefunction is $|0,0\rangle=\sqrt{2/3}|1,1\rangle|1,-1\rangle-\sqrt{1/3}|1,0\rangle|1,0\rangle$, this state has $\rho_+=\rho_0=\rho_-$. We plot in the inset of Fig. \[rhovsmuU010Ut1fss\] the population fractions, $N_0/N$ and $N_-/N=N_+/N$, versus the total density. We see that as $\rho$ increases, $N_+/N$ and $N_0/N$ oscillate: When $N$ is even, singlet bound states of two particles try to form, drawing the values of $N_+/N$ and $N_0/N$ closer together. However, singlets form fully, making $N_+=N_0$, only close to the second Mott lobe, $\rho=2$, where we clearly see $N_+/N=N_0/N$ for a range of even values of $N$. On the other hand, when $N$ is odd, singlets cannot form and the spin populations are much farther from those given by the singlet wavefunction. In the thermodynamic limit and fixed $N$, one expects true phase separation into $\rho=2$ singlet MI regions and $\rho<2$ SF regions. For a finite system, phase separation commences for (even) fillings where we first have $N_+/N=N_0/N$, [*i.e.*]{} $1.5\leq \rho < 2$, and similar behavior for $\rho > 2$.
Another interesting feature in the inset of Fig. \[rhovsmuU010Ut1fss\] is that the difference between $N_+/N$ and $N_0/N$, for even $N$, decreases linearly as the density approaches the transition at $\rho= 1.5$. No such behavior is seen as the first Mott lobe is entered from below or above $\rho=1$: the transition is continuous, as predicted for odd lobes. The boxes in Fig. \[rhovsmuU010Ut1fss\] show the values of $\rho$ corresponding to phase coexistence rather than to one stable thermodynamic phase. It is, therefore, clear that the canonical calculation of the phase boundaries, [*i.e.*]{} simply adding a particle to, or removing it from, the MI, is not applicable in the presence of a first order transition. Figure \[rhovsmuU010Ut1fss\] reveals the SF-MI transition at fixed, sufficiently large $U_0$ when $\rho$ is varied. In Fig. \[jsqvsU\] we show the transition when the density is fixed, $\rho=2$, and $U_0$ is the control parameter with $U_2/U_0$ fixed at $0.1$ (main figure) and $0.01$ (inset). Singlet formation is clearly shown by $\langle F^2\rangle\to 0$ as $t/U_0$ decreases and the second MI is entered.[@footnote1] Indeed, the origin of the first order transition into even Mott lobes, as the filling is tuned, is linked to the additional stabilization of the Mott lobe associated with this singlet energy.[@pai08] The superfluid density, $\rho_s=L\langle W^2\rangle/2t\beta$, where $W$ is the winding number, is a topological quantity and truly characterizes the SF-MI phase transition, which is continuous in this case. As $L$ is increased from $10$ to $16$ and $20$, the vanishing of $\rho_s$ gets sharper. We find that the critical value of $t/U_0$ for the $\rho=2$ lobe is somewhat less than that reported in DMRG,[@dmrg] indicated by the dashed line. We believe this is because with DMRG the phase boundaries were obtained using finite differences of the energy with small doping above and below commensurate filling.
As discussed above, this is not appropriate for a first order transition. For $2dU_2/U_0<0.1$ and $d=2,3$, mean field[@zhou; @imambekov04] predicts that, inside the MI, Mott lobes of even order comprise two phases: (a) the singlet phase for $t/U_0\leq t/U^{c1}_0\sim \sqrt{U_2/4dU_0}$ and (b) a nematic phase for $t/U^{c1}_0 \leq t/U_0\leq t/U^c_0$, where $t/U^c_0$ is the tip of the Mott lobe. Inside the lobe, the nematic-to-singlet transition is predicted to be first order, which raises the question: are the singlet-to-SF and the nematic-to-SF transitions of the same order? Figure \[jsqvsU\] shows that the SF-MI transition, $\rho_s\to 0$, occurs at larger $t/U_0$ than singlet formation, $\langle F^2\rangle \to 0$, both for $U_2/U_0=0.1$ and $0.01$. The passage of $\langle F^2\rangle$ to zero gets sharper for smaller $U_2/U_0$ but remains continuous, not exhibiting any signs of a first order transition. We have verified this for $U_2/U_0=0.1, \,0.05,\, 0.01, \,0.005$. Furthermore, the insensitivity of $\langle F^2\rangle$ to finite size effects indicates that it does not undergo a continuous phase transition. We have also verified that for the second Mott lobe, the SF-MI transition is first order regardless of whether $t/U_0$ is less or greater than $t/U^{c1}_0$. We conclude that while $\rho_s\to 0$ is a continuous critical transition, $\langle F^2\rangle \to 0$ is a [*crossover*]{}, not a phase transition. This, of course, does not preclude the possibility of a first order transition for $d=2,3$. Whereas $^{23}$Na has positive $U_2$, $^{87}$Rb has $U_2<0$, leading to different behavior. We begin with $\rho$ versus $\mu$ in Fig. \[rhovsmuU010Utm1\]. Unlike the $U_2>0$ case, the SF-MI transitions are continuous for both even and odd Mott lobes: the inset shows that the spin populations do not oscillate as for $U_2>0$. The population ratio, $\rho_0=2\rho_+$, can be understood as follows.
As shown above for $t/U_0\to 0$, maximum spin states are favored when $U_2<0$. So, when a site is doubly occupied, the spin-2 state is favored. But, since our study is in the $S_z^{\rm total}=N_+-N_-=0$ sector, the wavefunction of the spin-2 state is $|2,0\rangle= 1/\sqrt{3}|1,1\rangle|1,-1\rangle + \sqrt{2/3}|1,0\rangle|1,0\rangle$ and thus $\rho_0=2\rho_+=2\rho_-$. As discussed above, $U_2<0$ favors ‘local ferromagnetism’, namely high spin states on each of the individual lattice sites. As with the FHM, the kinetic energy gives rise to second order splitting which lifts the degeneracy between commensurate filling strong coupling states with different intersite spin arrangements. We can therefore ask whether the local moments order from site to site: Do the Mott and superfluid phases exhibit global ferromagnetism[@pai08]? To this end, we measure the magnetic structure factor, $$S_{\rm \sigma \sigma}(q) = \sum_{l} e^{iql} \langle F_{\sigma,j+l} F_{\sigma,j} \rangle$$ where $\sigma=x$ or $z$. Figure \[corrmagxFTutm1\] shows $S_{\rm xx}(q)$ in the superfluid phase at half-filling.[@footnote2] The peak at $q=0$ grows linearly with lattice size, indicating the superfluid phase does indeed possess long range ferromagnetic order. We find that the MI phase is also ferromagnetic. To determine the phase diagram, we scan the density as in Fig. \[rhovsmuU010Utm1\] for many values of $U_0$ with $U_2/U_0$ constant ($-0.1$ in our case). The resulting phase diagram is shown in Fig. \[MottLobesU2neg\]. Comparison of data for two lattice sizes demonstrates that finite size effects are small. Early in the evaluation of the phase boundaries of the spin-0 BHM it was observed that a perturbation calculation [@freericks96] agreed remarkably well with QMC results.[@batrouni90] We now generalize the spin-0 perturbation theory to spin-1 and show a similar level of agreement with the QMC results. 
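The structure factor defined above is a discrete Fourier transform of the measured real-space spin correlations; a small sketch of the diagnostic (the constant input correlation is a hypothetical stand-in for QMC data, not our measured values):

```python
import numpy as np

def structure_factor(corr):
    """S(q) = sum_l exp(iql) <F_{sigma,j+l} F_{sigma,j}> at q = 2*pi*k/L."""
    L = len(corr)
    l = np.arange(L)
    q = 2.0 * np.pi * l / L
    S = np.array([np.sum(np.exp(1j * qk * l) * corr).real for qk in q])
    return q, S

# Toy check: perfect long-range ferromagnetic order corresponds to a
# constant correlation function, so the q=0 peak grows linearly with L,
# while all q != 0 components vanish.
for L in (10, 16, 20):
    q, S = structure_factor(np.full(L, 0.5))
    assert abs(S[0] - 0.5 * L) < 1e-9
```

A $q=0$ peak scaling linearly with lattice size, as in Fig. \[corrmagxFTutm1\], is the standard signature of long-range order.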
If we assume the system always to be perfectly magnetized, then $n$ bosons on a site will yield the largest possible spin, $F^2=n(n+1)$. Consequently, the interaction term in the Hamiltonian, Eq. (\[hubham\]), reduces to $(U_0-U_2)\sum_i {\hat{n}_i}({\hat{n}_i}-1)/2$, giving a Hamiltonian identical to the spin-0 BHM but with the interaction shifted to $(U_0-U_2)/2$. One can then repeat the perturbation expansion to third order in $t/(U_0-U_2)$ to determine the phase diagram.[@freericks96] The result is shown as the dashed line in Fig. \[MottLobesU2neg\] and is seen to be in excellent agreement with QMC. The agreement further suggests that the finite lattice effects in the phase diagram are small. Such a perturbation calculation is not possible for the $U_2>0$ case since $F^2$ depends on the phase, SF vs MI, and on the order of the MI lobe. The (dipolar) interactions between spinful bosonic atoms confined to a [*single*]{} trap have been shown to give rise to fascinating “spin textures”.[@stamperkurn06] An additional optical lattice causes a further enhancement of interactions, and opens the prospect for the observation of the rich behavior associated with Mott and magnetic transitions, and comparisons with analogous properties of strongly correlated solids.[@zhou; @imambekov04] Here, we have quantified these phenomena in the one-dimensional spin-1 BHM with exact QMC methods. We have shown that, for $U_2>0$, the MI phase is characterized by singlet formation, clearly seen for even Mott lobes where $\langle F^2\rangle \to 0$ as $U_0$ increases. We also showed that the transition into odd lobes is continuous while that into even lobes is discontinuous (first order). We emphasized that the naive canonical determination of the phase boundaries is not appropriate for a first order transition. For $U_2<0$, we showed that all MI-SF transitions are continuous and that both the SF and MI phases are ferromagnetic.
The phase diagram in the $(\mu/U_0, t/U_0)$ plane obtained by QMC can be described very accurately using third order perturbation theory. Acknowledgements: G.G.B. is supported by the CNRS (France) PICS 3659, V.G.R. by the research program of the ‘Stichting voor Fundamenteel Onderzoek der Materie (FOM)’ and R.T.S. by ARO Award W911NF0710576 with funds from the DARPA OLE Program. We would like to thank T.B. Bopper for useful input. [99]{} “Lecture Notes on Electron Correlation and Magnetism,” Patrik Fazekas, World Scientific (1999). M.P.A. Fisher [*et al.*]{}, Phys. Rev. [**B40**]{}, 546 (1989). G.G. Batrouni, R.T. Scalettar, and G.T. Zimanyi, Phys. Rev. Lett. [**65**]{}, 1765 (1990). J.K. Freericks and H. Monien, Phys. Rev. [**B53**]{}, 2691 (1996). B. Capogrosso-Sansone [*et al.*]{}, Phys. Rev. [**A77**]{}, 015602 (2008). D.M. Stamper-Kurn [*et al.*]{}, Phys. Rev. Lett. [**80**]{}, 2027 (1998). T.L. Ho, Phys. Rev. Lett. [**81**]{}, 742 (1998); T. Ohmi and K. Machida, J. Phys. Soc. Japan [**67**]{}, 1822 (1998); S. Mukerjee, C. Zu, and J.E. Moore, Phys. Rev. Lett. [**97**]{}, 120406 (2006). K.V. Krutitsky and R. Graham, Phys. Rev. [**A70**]{}, 063610 (2004); S. Ashhab, J. Low Temp. Phys. [**140**]{}, 51 (2005). T. Kimura, S. Tsuchiya, and S. Kurihara, Phys. Rev. Lett. [**94**]{}, 110403 (2005). V. Pai, K. Sheshadri, and R. Pandit, Phys. Rev. [**B77**]{}, 014503 (2008). M. Rizzi [*et al.*]{}, Phys. Rev. Lett. [**95**]{}, 240404 (2005); S. Bergkvist, I. McCulloch, and A. Rosengren, Phys. Rev. [**A74**]{}, 053419 (2006). V. Apaja and O.F. Sylju[å]{}sen, Phys. Rev. [**A74**]{}, 035601 (2006). V.G. Rousseau, Phys. Rev. [**E77**]{}, 056705 (2008); V.G. Rousseau, Phys. Rev. [**E78**]{}, 056707 (2008). G.G. Batrouni and R.T. Scalettar, Phys. Rev. Lett. [**84**]{}, 1599 (2000). While $\langle F^2 \rangle$ vanishes for the even Mott lobes, for the odd lobes $\langle F^2 \rangle \rightarrow 2$. This is obvious for $\rho=1$ since there is a single spin-1 boson on each site.
For $\rho=3$, for example, two of the bosons pair into a singlet, again leaving an effective spin-1 on each site. In a simulation in the canonical ensemble in the $F_z=0$ sector, $S_{\rm zz}(q)$ is constrained to vanish at $q=0$. However, its $q \rightarrow 0$ limit gives the $q=0$ value of the unconstrained situation.[@batrouni90] L.E. Sadler [*et al.*]{}, Nature [**443**]{}, 312 (2006). E. Demler and F. Zhou, Phys. Rev. Lett. [**88**]{}, 163001 (2002); M. Snoek and F. Zhou, Phys. Rev. [**B69**]{}, 094410 (2004). A. Imambekov, M. Lukin, and E. Demler, Phys. Rev. [**A68**]{}, 063602 (2003) and Phys. Rev. Lett. [**93**]{}, 120405 (2004).
--- author: - 'Ž. Chrobáková, R. Nagy, M. López-Corredoira' bibliography: - 'Refer\_notes\_gaia\_maps.bib' date: 'Received xxxx; accepted xxxx' nocite: - '[@romi]' - '[@martin_warp]' - '[@yusifov]' - '[@bahcall_lum]' title: - 'Structure of the outer Galactic disc with *Gaia* DR2' --- [The structure of the outer disc of our Galaxy is still not well described, and many features need to be better understood. The second Gaia data release (DR2) provides data of unprecedented quality that can be analysed to shed some light on the outermost parts of the Milky Way.]{} [We calculate the stellar density using star counts obtained from Gaia DR2 up to a Galactocentric distance R=20 kpc with a deconvolution technique for the parallax errors. Then we analyse the density in order to study the structure of the outer Galactic disc, mainly the warp.]{} [In order to carry out the deconvolution, we used the Lucy inversion technique for recovering the corrected star counts. We also used the Gaia luminosity function of stars with $M_G<10$ to extract the stellar density from the star counts.]{} [The stellar density maps can be fitted by an exponential disc in the radial direction with $h_r=2.07\pm0.07$ kpc, with a weak dependence on the azimuth, extended up to 20 kpc without any cut-off. The flare and warp are clearly visible. The best fit of a symmetrical S-shaped warp gives $z_w\approx z_\odot+(37\pm4.2(stat.)-0.91(syst.))\,\mathrm{pc}\cdot\left(R/R_\odot\right)^{2.42\pm 0.76(stat.) + 0.129 (syst.)}\sin(\phi+\ang{9.3}\pm\ang{7.37} (stat.) +\ang{4.48} (syst.))$ for the whole population. When we analyse the northern and southern warps separately, we obtain an asymmetry with a $\sim25\%$ larger amplitude in the north. This result may be influenced by extinction because the Gaia G band is quite prone to extinction biases. However, we tested the accuracy of the extinction map we used, which shows that the extinction is determined very well in the outer disc.
Nevertheless, we recall that we do not know the full extinction error, and neither do we know the systematic error of the map, which may influence the final result.\ The analysis was also carried out for very luminous stars alone ($M_G<-2$), which on average represent a younger population. We obtain similar scale-length values, while the maximum amplitude of the warp is $20-30\%$ larger than with the whole population. The north-south asymmetry is maintained. ]{} Introduction {#intro} ============ Studying the Galactic structure is crucial for our understanding of the Milky Way. Star counts are widely used for this purpose [@paul], and the importance of this tool has increased in the past decades with the appearance of wide-area surveys [@bahcall; @majewski], which made it possible to obtain reliable measurements of the Galactic thin disc, thick disc, and halo [@chen_counts; @juric; @bovy; @robin_bulge]. It is common to model the Galactic disc with an exponential or hyperbolic secant profile, but there are many asymmetries, such as the flare and warp, that need to be taken into account. These structures can be seen from the 3D distribution of stars, as shown by [@liu], who mapped the Milky Way using the LAMOST (The Large Sky Area Multi-Object Fibre Spectroscopic Telescope) RGB (red-giant branch) stars; [@skowron], who constructed a map of the Milky Way from classical Cepheids; or [@anders], who used the second Gaia data release (DR2). The warp was first detected in the Galactic gaseous disc in 21 cm HI observations [@kerr; @oort]. Since then, the warp has also been discovered in the stellar disc [@carney; @martin_warp; @reyle; @amores; @chen_warp], and the kinematics of the warp has been studied as well [@dehnen; @drimmel; @martin_warp_kin; @schonrich].\ Vertical kinematics in particular can reveal much about the mechanism behind the formation of the warp.
[@poggio] found a gradient of $5-6$ km/s in the vertical velocities of upper main-sequence stars and giants located from 8 to 14 kpc in Galactic radius using Gaia DR2 data, revealing the kinematic signature of the warp. Their findings suggest that the warp is principally a gravitational phenomenon. [@skowron_kin_warp] also found a strong gradient in vertical velocities using classical Cepheids supplemented by the OGLE (Optical Gravitational Lensing Experiment) survey. [@nas_clanok] investigated the dynamical effects produced by different mechanisms that can explain the radial and vertical components of the extended kinematic maps of [@martin], who used Lucy’s deconvolution method (see Sect. \[ch6\]) to produce kinematical maps up to a Galactocentric radius of 20 kpc. [@nas_clanok] found that vertical motions might be dominated by external perturbations or mergers, although with a minor component due to a warp whose amplitude is evolving with time. However, the kinematic signature of the warp is not enough to explain the observed velocities.\ To date, the shape of the warp has been constrained only roughly, and the kinematical information is not yet sufficient to reach a consensus about the mechanism causing the warp. Theories include accretion of intergalactic matter onto the disc [@martin_accretion], interaction with other satellites [@kim], the intergalactic magnetic field [@battaner], a misaligned rotating halo [@debattista], and others. We now have a new opportunity to improve our knowledge about the Milky Way significantly through the Gaia mission of the European Space Agency [@gaia2]. Gaia data provide unprecedented positional and radial velocity measurements and an accurate distance determination, although the error of the parallax measurement increases with distance from us. It brings us the most accurate data about the Galaxy so far, ideal for advancing all branches of Galactic astrophysics and studying our Galaxy in greater detail than ever before.
Gaia DR2 has been used by [@anders], who provided photo-astrometric distances, extinctions, and astrophysical parameters up to magnitude G=18, making use of the Bayesian parameter estimation code [StarHorse]{}. After introducing the observational data and a number of priors, their code finds the Bayesian stellar parameters, distances, and extinctions. The authors also present density maps, which we compare with our results in Section \[ch8\]. Gaia data have also been used to study the structure of the outer Galactic disc, especially the warp and the flare. The first Gaia data release brought some evidence of the warp [@schonrich], but the more extensive second data release provides a better opportunity to study the warp attributes. [@poggio] combined Gaia DR2 astrometry with 2MASS (Two Micron All-Sky Survey) photometry and revealed the kinematic signature of the warp up to 7 kpc from the Sun. [@li] found the flare and the warp in the Milky Way using only OB stars of the Gaia DR2. In this work, we make use of Gaia DR2 data as described in Section \[ch2\] and use star counts to obtain the stellar density by applying Lucy’s inversion technique. Then we analyse the density maps to determine the warp. The paper is structured as follows: in Section \[ch2\] we describe the Gaia data and extinction maps that we used, in Section \[ch4\] we present the luminosity function used in our calculations, in Section \[ch5\] we explain the methods for obtaining our density maps, and in Section \[results\] we discuss the results. In Section \[chdensity\] we present the exponential fits of the density, in Section \[ch11\] we study the warp, and in Section \[ch12\] we repeat the previous analysis for the young population. Data selection {#ch2} ============== We used data from the second Gaia data release [@gaia], which were collected during the first 22 months of observation. We are interested in stars with a known five-parameter astrometric solution: more than 1.3 billion sources.
G magnitudes, collected by the astrometric instrument in the white-light G-band of Gaia (330–1050 nm), are known for all sources, with precisions varying from around 1 millimag at the bright (G<13) end to around 20 millimag at G=20. For the details on the astrometric data processing and validation of these results, see [@lindegren]. We chose stars with apparent magnitude up to G=19, where the catalogue is complete up to 90% [@arenou]. We chose data with a parallax in the interval \[0,2\] mas. In our analysis, we did not consider any zero-point bias in the parallaxes of Gaia DR2, as found by some authors [@lindegren; @arenou; @stassun; @zinn], except in Sect. 4.3, where we repeat our main calculation including a non-zero value of the zero-point to prove that this effect is negligible in our results. Extinction maps {#ch3 .unnumbered} --------------- We used two different extinction maps. For the luminosity function (Sect. \[ch4\]), we used the extinction map of [@green_extinkcia] through its Python package *dustmaps*, choosing the *Bayestar17* version. This map covers 75% of the sky (declinations of $\delta\gtrsim \ang{-30}$) and provides reddening in similar units as @sfd [SFD].\ To calculate the density (Sect. \[ch5\]), we need to cover the whole sky, therefore we used the less accurate but full-sky three-dimensional extinction map of [@bovy_extinkcia] through its Python package *mwdust*. This map combines the results of [@marshall], [@green], and [@drimmel] and provides reddening as defined in [@sfd].\ In order to convert the interstellar reddening of these maps into $E(B-V)$, we used the coefficients [@hendy_ext_koef; @rybizki_ext_koef] $$\begin{aligned} \label{1} \begin{split} A_G/A_v&=0.859~, \\ R_V&=A_v/E(B-V)=3.1~. \end{split}\end{aligned}$$ Luminosity function {#ch4} =================== To construct the luminosity function, we chose all stars with heliocentric distance $d<0.5$ kpc (distances determined as $1/\pi$, where $\pi$ is the parallax).
We did not find many bright stars ($M_G<-5$) in this area, therefore we also chose a specific region with Galactic height $\lvert z \rvert <1$ kpc and Galactocentric distance $R<5$ kpc, in which we only selected stars with absolute magnitude $M_G<-5$. We normalised the counts of these bright stars and then joined the two parts to create the luminosity function.\ In the range of distance that we used for the luminosity function, the star counts are complete for the absolute magnitudes that we are calculating, except perhaps for the possible loss of the brightest stars through saturation at $M_G<-5$. Moreover, the error in the parallax for these stars is negligible, so the calculation of the absolute magnitude from the apparent magnitude is quite accurate. We did not take into account variations of the luminosity function throughout the Galactic disc; we assumed that it does not change. The luminosity function we obtained is shown in Fig. \[o1\]. We interpolated the luminosity function with a spline $N=spl(M)$ of the first degree. The result is shown in Fig. \[o2\]. For the interpolation, we used values between magnitudes $M=[-5,10]$ because the values outside this interval are unreliable, and we used the extrapolation of the spline function to lower magnitudes. The values of the luminosity function are listed in Table \[lumf\]. ![Luminosity function.[]{data-label="o1"}](lum_hist_nove_norm.pdf){width="50.00000%"} ![Interpolation of the luminosity function with a spline compared with the luminosity function of [@bahcall_lum].
These two functions are not directly comparable because [@bahcall_lum] used a slightly different filter in the visible, but it shows that our luminosity function is reasonable.[]{data-label="o2"}](lum_func_referee.pdf){width="50.00000%"}

  $M_G$   N
  ------- -----------------------
  -10     $2.704\cdot 10^{-8}$
  -9      $8.424\cdot 10^{-8}$
  -8      $2.625\cdot 10^{-7}$
  -7      $8.177\cdot 10^{-7}$
  -6      $2.547\cdot 10^{-6}$
  -5      $7.936\cdot 10^{-6}$
  -4      $2.927\cdot 10^{-5}$
  -3      $8.028\cdot 10^{-5}$
  -2      $2.936\cdot 10^{-4}$
  -1      $1.066\cdot 10^{-3}$
  0       $2.299\cdot 10^{-3}$
  1       $4.117\cdot 10^{-3}$
  2       $8.805\cdot 10^{-3}$
  3       $2.081\cdot 10^{-2}$
  4       $3.838\cdot 10^{-2}$
  5       $5.667\cdot 10^{-2}$
  6       $8.273\cdot 10^{-2}$
  7       0.122
  8       0.171
  9       0.221
  10      0.27

  : Values of the luminosity function.[]{data-label="lumf"}

Density maps {#ch5}
============

Deconvolution of star counts {#ch6}
----------------------------

To calculate the stellar density, we need to measure star counts as a function of distance. However, the error of parallax increases with distance from us, which means that our analysis would be correct only within roughly $5$ kpc from the Sun. To be able to reach higher distances, we corrected for this effect using the method developed by [@martin], who used Lucy’s deconvolution method (Lucy 1974; see Appendix A) to obtain an accurate distance measurement up to $R=20$ kpc. They expressed the observed number of stars per parallax $\overline{N}(\pi)$ as a convolution of the real number $N(\pi)$ of stars with a Gaussian function $$\begin{aligned} \label{2} \overline{N}(\pi)=\int_{0}^{\infty} \mathrm{d}\pi^\prime N(\pi^\prime)G_{\pi^\prime}(\pi-\pi^\prime)~,\end{aligned}$$ where $$\begin{aligned} \label{3} G_{\pi}(x)=\frac{1}{\sqrt{2\pi}\sigma_\pi}e^{-\frac{x^2}{2\sigma_\pi^2}}~.\end{aligned}$$ For the error $\sigma_\pi$ we averaged errors of every bin, which we calculated from values given by Gaia DR2.\ We only used the parallax between \[0,2\] mas.
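Lucy's iterative inversion (Appendix A) admits a compact numerical sketch. Everything below — the bin width, the single averaged $\sigma_\pi$, the toy "true" distribution, and the iteration count — is an illustrative assumption, not the actual pipeline:

```python
import numpy as np

def lucy_deconvolve(N_obs, G, n_iter=100):
    """Iteratively invert N_obs = G @ N (the discretised Eq. (2)).

    G[i, j] is the probability that a star in true-parallax bin j is
    observed in bin i; each column of G sums to one.
    """
    N = np.full_like(N_obs, N_obs.mean())        # flat first guess
    for _ in range(n_iter):
        predicted = G @ N                        # convolved counts for current guess
        N = N * (G.T @ (N_obs / np.maximum(predicted, 1e-300)))
    return N

# Monte Carlo style check: convolve a known distribution with the
# Gaussian of Eq. (3), then deconvolve and compare.
d_pi, sigma = 0.01, 0.05                         # bin width and error in mas (illustrative)
pi = np.arange(200) * d_pi
G = np.exp(-(pi[:, None] - pi[None, :])**2 / (2 * sigma**2))
G /= G.sum(axis=0, keepdims=True)                # column-normalised kernel
N_true = np.exp(-(pi - 1.0)**2 / 0.02)           # hypothetical true star counts
N_rec = lucy_deconvolve(G @ N_true, G)
```

Because each kernel column is normalised, every iteration conserves the total number of stars and keeps the counts non-negative, which is why negative-parallax bins need not enter the calculation.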
For the upper limit, the relative error of the parallax is very small and does not produce any bias. For the lower limit, the truncation avoiding negative parallaxes affects the distribution of parallaxes and its statistical properties (average, median, etc.) [@gaia_par Section 3.3]. However, in our method we do not calculate the average distance from the average parallax. We used Lucy’s method, which iterates on the counts of the stars with positive parallaxes until the final solution is obtained. This does not mean that we truncated the star counts with negative parallaxes. We used only the stars with positive parallaxes, as required by our method, explained in Appendix A. $N(\pi)$ for negative values of $\pi$ can also be calculated and fitted, but these values are not used in our calculation. In other words, we did not assume that the number of stars with negative parallaxes is zero; we simply did not use this information because it is not necessary. The fact that this method does not produce any bias is tested in Sect. 4.2. Monte Carlo simulation to test the Lucy inversion method {#ch7} -------------------------------------------------------- In order to test the reliability of the inversion method, we performed Monte Carlo simulations to determine whether we can recover the original function after deconvolution. We created datasets with randomly distributed particles. Then we convolved this distribution with a Gaussian. We applied Lucy’s deconvolution method to the dataset to determine whether we can recreate the original distribution. The results are shown in Fig. \[o3\]. We conclude that regardless of the original distribution, we can accurately recover the original data up to $50$ kpc or more, which is sufficient to study the Milky Way. We also studied the dependence of the method on the parallax error. We used various values of the average parallax error in Eq.
\[3\] from the interval \[0.05,0.4\] mas, which are the most common values of the average parallax error in our data. In Fig. \[chyba\_par\] we plot the result, which shows that even though the precision of the method depends on the parallax error, we obtain satisfactory results up to 20 kpc even in the worst case with the highest parallax error. Application to full-sky Gaia-DR2 data {#ch8} ------------------------------------- We divided the data into bins of Galactic longitude $\ell$, Galactic latitude $b,$ and apparent magnitude $m$. For the values of $b$ we made bins of length $\ang{2}$ and corresponding $\ell$ in bins of $\ang{5}/\cos(b)$. We divided each line of sight in apparent magnitude into bins of size $\Delta m=1.0$ between G=12 and G=19. We obtained 29 206 different areas in which we calculated the density independently. We made use of the fundamental equation of stellar statistics, where the number of stars $N(m)$ of apparent magnitude $m$ is expressed per unit solid angle and per unit magnitude interval [@chandrasekhar], $$\begin{aligned} \label{5} N(m)=\int_{0}^{\infty} \rho(r)\Phi(M)r^2\mathrm{d}r~,\end{aligned}$$ where we substitute $$\begin{aligned} r(m)=(1/\pi)=10^{(m-M+5-A_G(1/\pi))/5}~,\end{aligned}$$ which yields for the density $$\begin{aligned} \label{4} \rho(1/\pi)&=&\frac{N(\pi)\pi^4 }{\Delta\pi\omega \int_{M_{G,low~lim}}^{M_{G,low~lim}+1} \mathrm{d}M_G\Phi(M_G)}~,\end{aligned}$$ $$\begin{aligned} M_{G,low~lim}&=&m_{G,low~lim}-5\log_{10}(1/\pi)-10 \nonumber \\ &-&A_{G}(1/\pi)~,\end{aligned}$$ where $\omega$ is the covered angular surface ($10 ~\mathrm{degrees}^2$ in our case), $\Delta\pi$ is the parallax interval (0.01 mas in our case), which must be included because we did not use unit parallax bins, $\Phi(M_G)$ is the luminosity function in the G filter, $m_{G,low~lim}$ is the limiting maximum apparent magnitude, and $A_{G}(r)$ is the extinction as a function of distance.
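Equation (\[4\]) is applied bin by bin along each line of sight. A schematic implementation; the flat luminosity function, zero extinction, and unit counts below are placeholders, not our data:

```python
import numpy as np

def density(N_pi, pi, d_pi, omega, Phi, m_low, A_G):
    """rho(1/pi) from Eq. (4): counts per parallax bin -> stars per volume.

    pi is in mas (so r = 1/pi is in kpc), omega the angular surface,
    Phi the luminosity function, A_G(r) the extinction along the line of sight.
    """
    r = 1.0 / pi
    M_low = m_low - 5 * np.log10(r) - 10 - A_G(r)  # absolute-magnitude limit
    # One-magnitude integral of Phi, midpoint approximation.
    lum = np.array([Phi(M + 0.5) for M in M_low])
    return N_pi * pi**4 / (d_pi * omega * lum)

# Placeholder inputs: flat luminosity function, no extinction, unit counts.
pi = np.linspace(0.1, 2.0, 20)                     # parallax bins in mas
rho = density(np.ones_like(pi), pi, 0.01, 10.0,
              lambda M: 1.0, 12.0, lambda r: 0.0)
```

With these placeholders the result reduces to $\pi^4/(\Delta\pi\,\omega)$, showing how strongly the $\pi^4$ factor suppresses the inferred density at small parallaxes.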
After this, we calculated the weighted mean density for all seven ranges of magnitude in each line of sight. Then we transformed this into cylindrical coordinates and made bins of Galactocentric radius $R$ of length 0.5 kpc, in Galactic height $z$ of 0.1 kpc, and in azimuth of $\ang{30}$. We define the azimuthal angle $\phi$ to be measured from the centre-Sun-anticentre direction towards the Galactic rotation, going from to . We interpolated the missing bins with *NearestNDInterpolator* from the python *SciPy* package, which uses nearest-neighbour interpolation in N dimensions. We plot the resulting density maps in Figs. \[o7\]-\[o10\]. In Fig. \[o7\] we plot the density in cylindrical coordinates as a function of Galactic radius $R$ for different azimuths. We do not plot the results for azimuths $\ang{90}<\phi<\ang{270}$ because in this area the extinction is significant and we observe stars farther than the Galactic centre, for which the errors are too large, therefore we cannot see any structure in density. However, we can see even by eye that a northern warp is present in the azimuths $\ang{60}<\phi<\ang{90}$ and a southern warp in the azimuths $\ang{270}<\phi<\ang{300}$. Another structure that can be seen from the plots is the flaring of the disc. We analyse these structures below. In Fig. \[o10\](a)-(c) we plot the density map in Cartesian coordinates, and in Fig. \[o10\](d) we plot the density in cylindrical coordinates, integrated through all ranges of azimuths, except for the areas that were excluded from the analysis. The Cartesian coordinates are defined such that $X_\odot=8.4$ kpc. In these plots we note a flat disc with some fluctuations in density, but no apparent features. However, some slight overdensities both above and below the Galactic plane are visible. The features above the plane are present only in Fig. \[o10\](b)-(c), but not in Fig. \[o10\](d), which suggests that they might be contamination.
The feature below the Galactic plane is present in all three plots. As the direction of these overdensities is towards the Magellanic Clouds, it might be an effect of the Milky Way pulling stars out of the Magellanic Clouds, as suggested by [@anders]. Another possible explanation for these overdensities is the finger-of-god artefact, which is caused by foreground dust clouds and produces elongated overdensities that point towards the Sun. This artefact has previously been seen in Gaia data, as shown in the Gaia DR2 documentation[^1].\ Zero-point correction in parallaxes {#ch9} ----------------------------------- So far, we have not considered any zero-point bias in the parallaxes. [@lindegren] found a global mean offset of $-0.029$ mas, meaning that Gaia DR2 parallaxes are lower than the true value. We repeated our calculations with this correction and present the results in Fig. \[o9\], where we chose some of the lines of sight to show the comparison. We find that these results are very similar to our original results, and this correction has a negligible effect. We also tried a value of $-0.046$ mas, found by [@zerop2]. In Fig. \[o9\] we show that the difference between the different zero-point values is very small, therefore we only use the value of $-0.029$ mas in the further calculations.\ For the analysis of the warp in Sections 5.5 and 5.6, we repeated the analysis of Section 4 with the parallax corrected for the zero-point. We find that this introduces a small correction to the warp parameters, which we state as the systematic error in the results. Error of the extinction ----------------------- To test how accurate the extinction map is, we analysed the map of [@green] using the function [query]{}, which returns the standard deviation $\sigma_G$ for a given line of sight.
We calculate a new extinction as $$\begin{aligned} \label{ext_corr} A_G^*(r)=A_G(r)+f\,\sigma_G(r)~,\end{aligned}$$ where $A_G$ is the extinction given by the map, $r$ is the distance, and $f$ is a factor chosen randomly from a Gaussian distribution with $\mu=0$ and $\sigma=1$.\ In Fig. \[ext\] we show the relative error of the density $\delta=(\rho(A_G)-\rho(A_G^*))/\rho(A_G)$. For all lines of sight that we tested, the difference is negligible, except for the area in the centre of the Galaxy, which we know is problematic. However, in the outer disc, where we carried out our analysis, the extinction is determined quite accurately. We must of course take into account that we used the map of [@bovy_extinkcia], which combines different maps and is less accurate, and can therefore give results that differ from [@green] in some areas. Moreover, we estimated only the statistical error of the extinction, but we recall that we do not have information about the systematic error of the extinction map. However, for our purposes, the extinction map gives satisfactory results in the area we analysed. The stellar warp has been studied using star counts by many other authors [@martin_warp; @reyle; @amores and others], therefore this method is unlikely to be especially flawed. Thick-disc areas {#ch10} ---------------- In the previous analysis we considered only the thin-disc population because the luminosity function presented in Section \[ch4\] is calculated in thin-disc regions. However, we can also analyse high Galactic heights, where the influence of the thick disc is significant. To test the importance of the change in the luminosity function, we tested the density calculations with a tentative thick-disc luminosity function that reduces the number of bright stars. We used the source table of [@wainscoat], who give the ratio of all the components of the Galaxy for all stellar classes.
Based on this comparison, we altered our luminosity function to construct a theoretical thick-disc luminosity function, as depicted in Fig. \[o21\]. Then we repeated our calculation with this new luminosity function. In Fig. \[o18\] we show the result for some lines of sight. In the area where we carried out the analysis, the difference between the two approaches is clearly visible starting at $\sim20$ kpc. Our density analysis is carried out in the area below 20 kpc, where the difference between the two densities is negligible. We note that this difference changes with line of sight, which is caused by the extinction. In the areas where the extinction is significant, the difference between densities derived from thin- and thick-disc luminosity functions is more important, but these areas are removed from our analysis. Therefore our maps are also valid for thick-disc areas. ![Comparison of luminosity functions for the thin and thick disc.[]{data-label="o21"}](lum_func_referee_thick_norm.pdf){width="50.00000%"} Analysis of the density maps {#results} ============================ Comparison with the maps of [@anders] ------------------------------------- Recently, similar maps were created by [@anders]. In their analysis, they used the code [StarHorse]{}, originally developed to determine stellar parameters and distances for spectroscopic surveys [@queiroz]. This code compares observed quantities to a number of stellar evolutionary models. It finds the posterior probability over a grid of stellar models, distances, and extinctions. To do this, it needs many priors, including the stellar initial mass function, density laws for the main Milky Way components, and the broad metallicity and age of those components.
Afterwards, the authors applied various criteria to their sample to choose only accurate results.\ When we compare our results, we can observe similar structures, except in the area of the Galactic bulge, where our data are not reliable and the data of [@anders] are much more accurate. However, because data with high errors in parallax were removed, [@anders] were unable to reach such high distances, which are necessary to study features of the outer disc such as the flare or the warp. Another advantage of our method is that we did not assume any priors about the Milky Way. Furthermore, our density maps are a representation of the complete number of stars per unit volume up to some given absolute magnitude, taking into account the luminosity function, whereas [@anders] give only the stars observed by Gaia, a much larger number in the solar neighbourhood and thus not useful for quantifying absolute trends in the density distribution. Nevertheless, we consider the results of [@anders] very useful because they improve the accuracy of the data significantly and can be used to study parts of the Milky Way where our data fail. Cut-off in the Milky Way ------------------------ There has been some discussion about the cut-off in the Milky Way. Some authors have reported finding a cut-off starting at about 14 kpc from the Galactic centre [@robin1; @robin2; @minniti]. However, [@carraro] argued that these findings are erroneous either because the dataset is biased or because the warp and flare are confused with the cut-off. The absence of the cut-off has been confirmed by several studies [@martin_cutoff; @sale_cut_off; @brand_cut_off]. Our results show that there is no cut-off in the Galactic disc, at least up to 20 kpc.\ Stellar density in the solar neighbourhood ----------------------------------------- We define the solar neighbourhood as the area where $7.5~\mathrm{kpc} < R < 8.5~\mathrm{kpc}$ and $\lvert z \rvert < 0.05$ kpc and calculate the average density in this area.
We find $\rho_\odot=0.064~stars/pc^3$, which is of the same order as other values in the literature, for example $0.03~stars/pc^3$ obtained by [@chang_solar_neighb], who used a three-component model to fit data from 2MASS. [@eaton_solar_neigh] found $\rho_\odot=0.056~stars/pc^3$, which is lower than our result; the difference stems from the range of the luminosity function used. In our case, we measured stars with $M_G<10$. Exponential fits of the density {#chdensity} ------------------------------- To describe the radial volume mass density distribution in the Galactic equatorial plane, we used a modified exponential disc with a deficit of stars in the inner in-plane region adopted from [@corr1] in the following form: $$\begin{aligned} \label{dens1} \rho(R) = \rho_0 \times\exp\left(\frac{R_{\odot}}{h_r}+\frac{h_{r,hole}}{R_{\odot}}\right)\times \exp\left(-\frac{R}{h_r}-\frac{h_{r,hole}}{R}\right)~,\end{aligned}$$ where $h_r$ is the scale length, $h_{r,hole}=3.74~kpc$ is the scale of the hole, $R_{\odot}$ is the Galactocentric distance of the Sun, and $R$ is the Galactocentric distance. We neglected the contribution of the thick disc and analysed only the thin disc. We divided the Galactic equatorial plane into three regions according to the Galactic azimuth $\left[\ang{-45},\ang{-15}\right],\left[\ang{-15},\ang{15}\right],\left[\ang{15},\ang{45}\right]$. We focused on the Galactic equatorial plane, therefore we considered stars in the close vicinity of the plane with a vertical distance $|z|<0.2$ kpc and $R>6$ kpc. We fitted the density for various azimuths with the corresponding exponential fits based on Eq. (\[dens1\]). The scale length depends slightly on the Galactic azimuth; it reaches its highest values for the Sun-anticentre direction, $h_r=2.78$$\pm$$0.13$ kpc, and for $\phi = +30^\circ$, $h_r = 2.29$$\pm$$0.21$ kpc.
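As an illustration, the modified exponential profile of Eq. (\[dens1\]) can be evaluated directly. The sketch below uses the values quoted in the text ($\rho_\odot$, the average $h_r$, and $h_{r,hole}$) and assumes $R_{\odot}=8$ kpc, which is not stated explicitly here:

```python
import math

def disc_density(R, rho0=0.064, h_r=2.29, h_hole=3.74, R_sun=8.0):
    """Modified exponential disc with an inner-disc deficit, Eq. (dens1).

    R, h_r, h_hole, R_sun are in kpc; rho0 is the local density in
    stars/pc^3.  The first exponential normalises the profile so that
    disc_density(R_sun) equals rho0 exactly.
    """
    norm = math.exp(R_sun / h_r + h_hole / R_sun)
    return rho0 * norm * math.exp(-R / h_r - h_hole / R)
```

By construction the profile returns the local density at the solar radius, falls off exponentially in the outer disc, and is suppressed towards the centre by the $h_{r,hole}/R$ term.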
On the other hand, the lowest value of the scale length is $h_r = 1.88$$\pm$$0.12$ kpc for $\phi = -30^\circ$. This results in an average of $h_r=2.29\pm0.08$ kpc, with small dependence on the azimuth. We can compare the results with published papers. [@martin_cutoff] used SDSS-SEGUE (Sloan Digital Sky Survey - Sloan Extension for Galactic Understanding and Exploration) data to investigate the density distribution in the Galactic disc. They obtained the scale length for the thick and for the thin disc, $h_{r,thin} = 2.1$ kpc and $h_{r,thick} = 2.5$ kpc for the azimuth $\phi \le 30^{\circ}$, which is consistent with our results. [@li] studied OB stars using Gaia DR2 data and derived the scale length of the Galactic disc, finding $h_{r} = 2.10 \pm 0.1$ kpc, which is in accordance with our results. We also plot the dependence of the density in the Galactic equatorial plane on azimuth for various values of Galactocentric distance in Fig. \[d2\]. The density depends on the Galactic azimuth for all radii, but this dependence is very small. An analysis of the scale height and its corresponding flare will be given in a forthcoming paper (Nagy et al. 2020, in preparation). ![Dependence of the density on azimuth near the centre-Sun-anticentre direction for various values of Galactocentric distance. The data points are obtained as weighted mean in bins of size 1 kpc in R and 0.4 kpc in $\lvert z \rvert$. Only bins with $N \geq 50$ points are plotted.[]{data-label="d2"}]({phiRhoBn}.pdf){width="9cm"} Warp {#ch11} ---- The density maps (Fig. \[o7\]) directly show a northern warp in azimuth $\ang{90}$ and a southern warp in azimuth $\ang{270}$. Here, we analyse these structures in greater detail.
We removed the azimuths $\ang{150}<\phi<\ang{240}$ and radii $R<6$ kpc from our analysis because these data have low quality and influence the results negatively.\ We calculated the average elevation above the plane $z_w$ as $$\begin{aligned} \label{zw} z_w=\frac{\int_{z_{min}}^{z_{max}} \rho z \mathrm{d}z }{\int_{z_{min}}^{z_{max}} \rho \mathrm{d}z }\end{aligned}$$ and fitted this quantity with models of the warp. In our first approach, we used the model by López-Corredoira et al. (2002b, Eq. 20), $$\begin{aligned} \label{9} z_w=\left[C_w R(\mathrm{pc})^{\epsilon_w}\sin(\phi-\phi_w)+17\right]~\mathrm{pc}~.\end{aligned}$$ The 17 pc term compensates for the elevation of the Sun above the plane [@z_slnko]. $C_w$, $\epsilon_w$, and $\phi_w$ are free parameters of the model, which were fitted to our data. An asymmetry is observed between the northern and southern warp for the gas [@voskes] and for the young population [@amores], therefore we also explore the northern and southern warp separately here. The fit of our data yields maximum amplitudes $z_w=0.317$ kpc for the northern and $z_w=-0.287$ kpc for the southern warp, both at a distance R=\[19.5,20\] kpc, revealing a small asymmetry between the north and south. For the fit, we used the function *curve fit* from the python *SciPy* package, which uses non-linear least squares to fit a function to data. The parameters of the best fit for this model for the whole dataset are $$\begin{aligned} \label{parametre} C_w&=&1.17\cdot10^{-8} \mathrm{pc} \pm 1.34\cdot10^{-9} \mathrm{pc} (stat.) \nonumber \\ &-& 2.9\cdot10^{-10} \mathrm{pc} (syst.)~, \nonumber \\ \epsilon_w&=&2.42\pm 0.76(stat.) + 0.129 (syst.)~, \\ \phi_w&=&\ang{-9.31}\pm \ang{7.37} (stat.) +\ang{4.48} (syst.)~. \nonumber\end{aligned}$$ Here, the error of $C_w$ stands for the error of the amplitude alone, without the variations of $\epsilon_w$ and $\phi_w$. The plot of the results is shown in Fig. \[o11\], where we show the comparison of the minimum and maximum values of $z_w(R)$.
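A minimal sketch of this fit with *curve fit*: the warp model of Eq. (\[9\]) is fitted to synthetic elevations generated from the best-fit parameters quoted above, so the optimiser should recover them. The grids, sample size, and starting values are illustrative assumptions, not our actual data.

```python
import numpy as np
from scipy.optimize import curve_fit

def warp_model(X, C_w, eps_w, phi_w):
    """Eq. (9): z_w in pc, for R in pc and phi in radians; the constant
    17 pc accounts for the elevation of the Sun above the plane."""
    R, phi = X
    return C_w * R**eps_w * np.sin(phi - phi_w) + 17.0

rng = np.random.default_rng(0)
R = rng.uniform(6.0e3, 2.0e4, 500)              # pc, keeping R > 6 kpc
phi = rng.uniform(-np.pi / 2, np.pi / 2, 500)   # rad, removed azimuths excluded
true = (1.17e-8, 2.42, np.deg2rad(-9.31))       # C_w [pc], eps_w, phi_w [rad]
z_w = warp_model((R, phi), *true)

# Non-linear least squares; p0 is the initial guess for (C_w, eps_w, phi_w)
popt, pcov = curve_fit(warp_model, (R, phi), z_w, p0=(1.0e-8, 2.4, 0.0))
```

On noise-free synthetic data the recovered parameters match the input; on the real averaged elevations the same call also yields the statistical errors via the covariance matrix `pcov`.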
The average elevation of the plane is mostly highest for azimuths $[\ang{60},\ang{90}]$ and $[\ang{90},\ang{120}]$, whereas the minimum is mostly reached for azimuths $[\ang{240},\ang{270}]$. A slight asymmetry between the northern and southern warp is also clearly visible. ![Minimum and maximum average elevation of the plane as a function of radius. The warp fit is based on Eq. (\[9\]), and the error bars represent the uncertainty in the distance in the Lucy method.[]{data-label="o11"}](zw_syst_err_no_grid.pdf){width="50.00000%"} Another approach that we used is based on the work of [@levine], who studied the vertical structure of the outer disc of the Milky Way by tracing neutral hydrogen gas. They analysed the Galactic warp using a Lomb periodogram analysis. They concluded that the first two Fourier modes are the strongest modes. We use the expression derived by [@levine] in the following form: $$\begin{aligned} \label{warp2} z_w=z_0 + z_1\cdot\sin{\left(\phi-\phi_1\right)} + z_2\cdot\sin{\left(2\phi-\phi_2\right)}~,\end{aligned}$$ where $z_w$ is the average elevation above the plane, $z_i$ for $i\in\{0,1,2\}$ are the amplitudes of the warp, and $\phi_i$ for $i\in\{1,2\}$ are the phases. The dependence of the amplitudes of the warp on the Galactocentric distance is $$\begin{aligned} \label{warp2b} z_i=k_0 + k_1\cdot\left(R-R_k\right) + k_2\cdot\left(R-R_k\right)^2~\text{for}~i=0,1,2~,\end{aligned}$$ where $k_i$ and $R_k$ are free parameters of the fit. We fitted our data with Eqs. (\[warp2\]) and (\[warp2b\]) for various values of the Galactocentric distance $R<20$ kpc. We plot the data and the fits for $R \in \left(13.25, 16.25, 19.25\right)$ kpc in Fig. \[w2\]. Fig. \[w3\] shows the azimuth of the maximum and minimum of the Galactic warp as a function of the Galactocentric distance.
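The two-mode model of Eqs. (\[warp2\]) and (\[warp2b\]) can be sketched as below; the amplitudes and phases used in the example are illustrative placeholders, not our fitted values.

```python
import numpy as np

def fourier_warp(phi, z0, z1, z2, phi1, phi2):
    """Eq. (warp2): mean elevation as the first two Fourier modes in azimuth
    (phi and the phases in radians, amplitudes in kpc)."""
    return z0 + z1 * np.sin(phi - phi1) + z2 * np.sin(2.0 * phi - phi2)

def mode_amplitude(R, k0, k1, k2, R_k):
    """Eq. (warp2b): quadratic radial dependence of each amplitude z_i."""
    return k0 + k1 * (R - R_k) + k2 * (R - R_k) ** 2

# Locate the azimuths of the warp maximum and minimum on a fine grid,
# as done for Fig. w3 (placeholder amplitudes and phases).
phi = np.deg2rad(np.arange(0.0, 360.0, 0.1))
z_w = fourier_warp(phi, 0.0, 0.4, 0.1, np.deg2rad(-50.0), np.deg2rad(20.0))
phi_max = np.rad2deg(phi[np.argmax(z_w)])
phi_min = np.rad2deg(phi[np.argmin(z_w)])
```

Because the $2\phi$ mode is included, the model can place the maximum and minimum at azimuths that are not $180^\circ$ apart, which is how the asymmetry is detected without assuming it.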
In our analysis, we excluded data for the Galactic azimuths $\phi\in\left(120^{\circ},240^{\circ}\right)$ because of the high error values in our data. We used a $10^\circ$ binning in azimuth. Fig. \[w2\] shows that the data for $\ang{250} < \phi < \ang{270}$ are somewhat noisy, which can be caused by problems with extinction or with the Lucy method in a particular line of sight. Therefore we tested a fit without these points, which turned out to produce an insignificant difference. For instance, the minimum amplitude obtained without these points changed by $\sim$10% in the worst case, and the maximum amplitude changed by $\sim$2%. Figs. \[w2\] and \[w3\] clearly show that the warp is present in our analysis. The azimuth of the maximum of the warp (the northern warp) is an increasing function of the Galactocentric distance ($52^\circ<\phi<56^\circ$). On the other hand, the azimuth of the minimum of the warp is in the range $312^\circ<\phi<324^\circ$ and corresponds to the southern warp. The strongest deviation of the average elevation of the Galactic plane from the Galactic equatorial plane rises with Galactocentric distance. The highest amplitude of the northern and southern warp is $z_w=0.48$ kpc and $z_w=-0.38$ kpc, respectively. An asymmetrical warp is clearly present. The value of the line of nodes from the fit is $\phi_0=\ang{-1.18}$. We plot the changes in amplitude of the Galactic warp fit \[Eq. (\[warp2b\])\] with Galactocentric distance in Fig. \[w5\]. ![Minimum and maximum of the average elevation of the plane as a function of Galactocentric distance. The warp fit is based on Eq. (\[warp2\]). The colours code the azimuth of the minimum and maximum of the warp fit, and the error bars represent the uncertainty on the distance in the Lucy method.[]{data-label="w3"}]({warp5}.pdf){width="9cm"} ![Changes of the amplitudes of the Galactic warp fit according to Eqs.
\[warp2\] and \[warp2b\].[]{data-label="w5"}]({warp6}.pdf){width="9cm"} ![Comparison of maximum amplitudes of our model (based on Eq. (\[warp2\])) with other works.[]{data-label="amplit"}]({amplitudy_referee_max}.pdf){width="9cm"} Similar results were obtained by [@li], who used OB stars from Gaia DR2 to measure the warp. They fitted their data with a sinusoidal function similar to ours and obtained a warp with a mean magnitude up to $z=0.5$ kpc. However, they did not account for the asymmetry of the warp, therefore they found the same result for the north and south. [@chen_warp] used Cepheids from the WISE (Wide-field Infrared Survey Explorer) catalogue and traced the warp up to R=20 kpc. Their results show a warp extended up to $\lvert z \rvert=1.5$ kpc, which we cannot confirm using the whole population. [@poggio] studied the kinematics of the Milky Way using Gaia DR2 and found the warp up to 7 kpc from the Sun. This agrees with our results, but we show that the warp extends to a higher radius, at least up to 20 kpc. In Fig. \[amplit\], we compare the maximum amplitudes of our model with other works. We obtain a very low amplitude, especially in comparison with Cepheids. On the other hand, the closest result is that of [@li], based on OB stars from Gaia DR2. This significant difference between the amplitudes of the various populations favours the formation of the warp through accretion onto the disc [@martin_accretion], which causes the gas and young stars to warp more strongly than the remaining population.\ [@momany] studied the stellar warp using 2MASS red clump and red giant stars, selected at fixed heliocentric distances of 3, 7, and 17 kpc. They found a rather symmetric warp and argued that a symmetric warp can be observed as asymmetric for two reasons.
First, the Sun is not located at the line of nodes, and second, the northern warp is located behind the Norma-Cygnus arm, which can cause variation in extinction that can produce an apparent asymmetric warp. As for the first point, the position of the Sun relative to the line of nodes is a problem when we observe the warp at a fixed distance. However, we have a 3D distribution, which ensures that the position from which we look does not influence how we perceive the warp. As for the second remark, as we showed in Section \[ch3\], the extinction is determined quite accurately by the extinction map of [@green]. However, some variations that were not taken into account might still influence the final shape of the warp, and we need to keep this in mind when interpreting our results. Young population {#ch12} ---------------- In this section, we apply the previous analysis to the young population. To do so, we chose only stars brighter than an absolute magnitude $M_G=-2$ (see the luminosity function in Fig. \[o19\]) and repeated all the steps as described in Section \[ch8\]. Then we produced density maps and analysed the scale length and the warp of this population using methods from Sections \[chdensity\] and \[ch11\].\ The exponential fits of the density for the young population yield $h_r=2.5\pm0.22$ kpc for $\phi=\ang{-30}$, $h_r=1.92\pm0.15$ kpc for $\phi=\ang{0}$, and $h_r=2.04\pm0.15$ kpc for $\phi=\ang{30}$, which is similar to the whole population. This results in $h_r=2.09\pm0.09$ kpc on average. The variation with azimuth is still insignificant, as in the case of the entire population. Fig. \[hr\_young\] shows that the variation of density with azimuth is also negligible in the case of the young population.\ For the warp, as previously, we removed the azimuths $\ang{150}<\phi<\ang{240}$ from the analysis. The fit of Eq. (\[9\]) to the young population yields $$\begin{aligned} C_w&=&4.85\cdot10^{-14} \mathrm{pc} \pm 6.33\cdot10^{-15} \mathrm{pc} (stat.)
\nonumber \\ &+&5.4\cdot 10^{-15} \mathrm{pc} (syst.)~, \nonumber \\ \epsilon_w&=&3.69\pm 1.19 (stat.) -0.373(syst.)~, \\ \phi_w&=&\ang{-1.64}\pm \ang{8.85} (stat.) - \ang{2.803} (syst.)~. \nonumber\end{aligned}$$ ![Luminosity function used in Eq. (\[4\]) for the analysis of the young population.[]{data-label="o19"}](lum_func_referee_young_pop.pdf){width="50.00000%"} ![Same as Fig. \[d2\], but the young population alone is considered.[]{data-label="hr_young"}](phiRhoBnY.pdf){width="50.00000%"} We also repeated the analysis with the approach using Eq. (\[warp2\]). Fig. \[w2y\] presents the Galactic warp of the young stellar population for various Galactocentric distances, and Fig. \[w3y\] shows the amplitudes of the fits of the Galactic warp and the azimuth of the maximum and minimum. In this case, the warp of the young stellar population is stronger than for the whole dataset. The azimuth of the maximum of the northern warp is an increasing function of the Galactocentric distance ($50^\circ<\phi<54^\circ$), and the azimuth of the minimum of the warp is in the range $265^\circ<\phi<315^\circ$. The highest amplitude of the northern and the southern warp is $z_w=0.57$ kpc and $z_w=-0.5$ kpc, respectively. For the line of nodes, we find $\phi_0=\ang{-6.56}$, which agrees with the value for the whole population. [@chen_warp] used Cepheids from the WISE survey and a number of optical surveys to measure the warp, and [@skowron] used Cepheids from the OGLE catalogue supplemented by other surveys. [@chen_warp] obtained a rather symmetric warp with an amplitude of about 1.5 kpc at R=20 kpc. [@skowron] obtained a similar result with an amplitude of 0.74 kpc at R=15 kpc. These values are much higher than our findings, which is probably due to differences in the population: our young population is older than the Cepheids. In Fig. \[lon\] we plot the variation of the line of nodes with radius for the whole and the young population, compared with other works.
We use two different methods to plot the line of nodes for our work. First, we plot the angle $\phi_w$ from Eq. (\[parametre\]). Another method is to use Eq. (\[9\]) to find the value of the angle $\phi$ when $z_w=0$. We would expect that our young population lies between the total population and the young Cepheids. This is true only for $R>12$ kpc. At shorter distances, the warp is not very strong and is more difficult to detect, therefore the error bars are larger in this area. Moreover, the error bars of the young population are very large because of the lower number of stars in the sample combined with possible problems in determining extinction. For these reasons, the value of the line of nodes for $R<12$ kpc is rather unreliable. ![Minimum and maximum of the average elevation of the plane as a function of the Galactocentric distance. The warp fit is based on Eq. (\[warp2\]). The colours code the azimuth of the minimum and the maximum of the warp fit. The dataset containing a young population of stars is considered, and the error bars represent the uncertainty on the distance in the Lucy method.[]{data-label="w3y"}]({warp5Y}.pdf){width="9cm"} ![Comparison of line of nodes for our model (based on Eq. \[9\]) with other works. We use two different methods to plot the line of nodes for our work. First, we plot the angle $\phi_w$ from Eq. (\[parametre\]). Another method is to use Eq. (\[9\]) to find the value of the angle $\phi$ when $z_w=0$.[]{data-label="lon"}]({warp4Both2}.pdf){width="9cm"} Conclusions {#concl} =========== We produced density maps from Gaia DR2 data and analysed them to study the Galactic warp. The density maps directly show a northern warp in the azimuths $\ang{60}<\phi<\ang{90}$ and a southern warp in the azimuths $\ang{270}< \phi <\ang{300}$. Our maps reach a Galactocentric radius of 20 kpc, and we note that up to this distance, the density decreases exponentially and we do not observe a cut-off.
Another feature in the density maps is a Galactic flare, that is, an increase in scale height towards the outer Galaxy. The analysis of the flare will be given in a forthcoming paper (Nagy et al. 2020, in preparation). We used the maps to calculate the scale length, where we find $h_r=2.29\pm0.08$ kpc, with a small dependence of $h_r$ on the Galactic azimuth. The lowest value of $h_r$ that we found is $1.88\pm0.12$ kpc for $\phi\approx\ang{-30}$, and the highest values are $2.78\pm0.13$ kpc for the Sun-anticentre direction and $2.29\pm0.21$ kpc for $\phi\approx\ang{+30}$.\ From our maps, we calculated the average elevation of the plane and fitted it with different warp models. We fitted the northern and southern warp separately with a simple sinusoidal model, and we found a small asymmetry: the northern warp reaches an amplitude of $0.317$ kpc for the azimuth $\ang{60}<\phi<\ang{90}$ and the southern warp reaches $-0.287$ kpc for the azimuth $\ang{240}<\phi<\ang{270}$, both at R=\[19.5,20.0\] kpc. Then we fitted the warp with a model combining two sinusoids to detect the asymmetry without assuming its existence, and we found amplitudes of $\sim0.5$ kpc for the northern and $\sim-0.4$ kpc for the southern warp, both at R=\[19.5,20.0\] kpc, revealing the asymmetry found with the previous approach. The azimuths of the warp maximum and minimum for this model are $\ang{52}<\phi<\ang{56}$ and $\ang{312}<\phi<\ang{324}$, respectively. In terms of Galactocentric radius, we find that the warp starts to manifest itself from about 12 kpc and extends at least up to 20 kpc. We repeated this analysis on the young population, where we find that it follows the result for the whole population, but reaches a higher amplitude of the warp and similar values of the scale length. The comparison of our amplitude of the warp with other works showed that we obtain a significantly lower amplitude than an analysis carried out with very young stars such as Cepheids.
This supports the formation of the warp through accretion onto the disc [@martin_accretion].\ A future analysis of the next Gaia data release combined with the deconvolution method based on Lucy’s method of inversion, as described in Section \[ch6\], will allow us to explore distances larger than 20 kpc. The future data release will provide a much deeper magnitude limit and much lower parallax errors, which will allow us to extend the range of Galactocentric distances and study the morphology of the disc and of the stellar halo at very large distances. We thank the anonymous referee for helpful comments, which improved this paper, and Astrid Peter (language editor of A&A) for proof-reading of the text. ZC and MLC were supported by the grant PGC-2018-102249-B-100 of the Spanish Ministry of Economy and Competitiveness (MINECO). RN was supported by the Scientific Grant Agency VEGA No. 1/0911/17. This work made use of the IAC Supercomputing facility HTCondor (http://research.cs.wisc.edu/htcondor/), partly financed by the Ministry of Economy and Competitiveness with FEDER funds, code IACA13-3E-2493. This work has made use of data from the European Space Agency (ESA) mission [*Gaia*]{} (<https://www.cosmos.esa.int/gaia>), processed by the [*Gaia*]{} Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the [*Gaia*]{} Multilateral Agreement. The reduced catalogue of Gaia with $m_G<19$ was produced by Pedro Alonso Palicio. Lucy’s method for the inversion of Fredholm integral equations of the first kind {#.Lucy} ================================================================================ The inversion of Fredholm integral equations of the first kind such as Eq. (\[2\]) is ill-conditioned. 
Typical analytical methods for solving these equations [@balazs] cannot achieve a good solution because the kernel is sensitive to the noise of the star counts [@craig, chapter 5]. Because the functions in these equations have a stochastic rather than analytical interpretation, it is to be expected that statistical inversion algorithms are more robust [@turchin; @jupp; @balazs]. These statistical methods include the iterative method of Lucy’s algorithm [@lucy; @turchin; @balazs; @martin2], which is appropriate here. Its key feature is the interpretation of the kernel as a conditioned probability and the application of Bayes’ theorem. In Eq. (\[2\]), $N(\pi )$ is the unknown function, and the kernel is $G(x)$, where the difference $x$ is conditioned on the parallax $\pi '$. The inversion is carried out as $$N (\pi )=\lim _{n\rightarrow \infty}N _{n}(\pi ) ,$$ $$N_{n+1}(\pi)=N_n(\pi )\frac{\int _0^\infty \frac{\overline{N}(\pi ')} {\overline{N_n}(\pi ')}G _{\pi '}(\pi -\pi')d\pi '} {\int _0^\infty G_{\pi '}(\pi -\pi')d\pi '} ,$$ $$\overline{N_n}(\pi )=\int _0^\infty N_n(\pi ')G_{\pi '}(\pi -\pi')d\pi ' .$$ The iteration converges when $\overline{N_n}(\pi )\approx \overline{N}(\pi )$ $\forall \pi$, that is, when $N _{n}(\pi )\approx N (\pi )$ $\forall \pi$. The first iterations produce a result that is close to the final answer, with the subsequent iterations giving only small corrections. In our calculation, we set the initial function of the iteration to $N_0(\pi )=\overline{N}(\pi )$, and we carry out a number of iterations until the Pearson $\chi ^2$ test $$\frac{1}{N_p-2}\sum _{j=2}^{N_p-1}\frac{[\overline{N_n}(\pi _j)-\overline{N}(\pi _j)]^2}{\overline{N_n}(\pi _j)} ,$$ reaches its minimum value. Further iterations would merely fit the noise.
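A discrete sketch of the iteration above on a parallax grid, assuming a Gaussian error kernel. This is the standard Richardson–Lucy update; the denominator corrects for kernel flux lost at the grid edges, and for simplicity the sketch stops after a fixed number of iterations rather than at the $\chi^2$ minimum used in the text.

```python
import numpy as np

def lucy_invert(N_obs, G, n_iter=50):
    """Iteratively invert N_obs = G @ N_true (Lucy's algorithm).

    G[i, j] is the probability (times the grid step) that a star with
    true parallax pi_j is observed with parallax pi'_i.  Starting from
    N_0 = N_obs, each step multiplies by a Bayes-weighted ratio of
    observed to predicted counts, which keeps the solution positive.
    """
    N = N_obs.astype(float).copy()
    for _ in range(n_iter):
        N_pred = G @ N                                    # \bar{N}_n
        ratio = np.where(N_pred > 0.0, N_obs / N_pred, 0.0)
        N = N * (G.T @ ratio) / G.sum(axis=0)             # N_{n+1}
    return N

# Toy example: blur a two-peaked parallax distribution and recover it.
pi = np.linspace(0.1, 2.0, 200)
dpi = pi[1] - pi[0]
sigma = 0.1                                               # parallax error
G = np.exp(-0.5 * ((pi[:, None] - pi[None, :]) / sigma) ** 2)
G *= dpi / (sigma * np.sqrt(2.0 * np.pi))
N_true = np.exp(-0.5 * ((pi - 0.6) / 0.05) ** 2) + np.exp(-0.5 * ((pi - 1.3) / 0.05) ** 2)
N_obs = G @ N_true
N_rec = lucy_invert(N_obs, G)
```

The recovered distribution stays positive and is sharper than the observed one; in practice the iteration is stopped when the Pearson $\chi^2$ between $\overline{N_n}$ and $\overline{N}$ stops decreasing.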
This algorithm has a number of beneficial properties (Lucy 1974, 1994): all the functions are defined as being positive, the likelihood increases with the number of iterations, the method is insensitive to high-frequency noise in $\overline{N}(\pi )$, and so on. We note, however, that because the method assumes positive functions throughout, it cannot be applied when $N$ takes negative values. [^1]: <https://gea.esac.esa.int/archive/documentation/GDR2/Data_analysis/chap_cu8par/sec_cu8par_validation/ssec_cu8par_validation_additional-validation.html>
--- abstract: 'I consider anomalies in effective field theories (EFTs) of gauge fields coupled to fermions on an interval in $AdS_5$, and their holographic duals. The anomalies give rise to constraints on the consistent EFT description, which are stronger than the usual four-dimensional anomaly cancellation condition for the zero modes. Even though the anomalies occur on both boundaries of the interval, corresponding to both the UV and the IR of the holographic dual, they are nevertheless consistent with the non-renormalization of the anomaly and the ’t Hooft matching condition. They give rise, in general, to a Wess-Zumino-Witten (WZW) term in the four-dimensional, low-energy effective action, whose form I compute. Finally I discuss the relevance to holographic models of electroweak symmetry breaking. I show that the so-called ‘minimal composite Higgs models’ have a consistent EFT description without a WZW term. In contrast, a variant of an earlier model of Contino, Nomura, and Pomarol does have a WZW term.' author: - Ben Gripaios title: 'Anomaly Holography, the Wess-Zumino-Witten Term, and Electroweak Symmetry Breaking' --- \[intro\]Introduction ===================== The AdS/CFT [@Maldacena:1997re] correspondence has given rise to new solutions to the hierarchy problem of electroweak symmetry breaking (EWSB) [@Randall:1999ee]. In their most sophisticated form [@Contino:2003ve; @Agashe:2004rs], these consist of gauge theories coupled to fermions on an interval (or orbifold) of an $AdS_5$ geometry, with the scalar Higgs sector of the Standard Model arising from the fifth-dimensional components of the gauge fields. Such models are dual [@Arkani-Hamed:2000ds; @Rattazzi:2000hs] to models in four dimensions, in which a CFT, coupled to external fields in the ultra-violet, becomes strongly coupled in the infra-red. The onset of strong coupling spontaneously breaks some of the symmetries of the theory. 
In particular, the breaking of approximate global symmetries of the theory gives rise to pseudo-Nambu-Goldstone bosons in the low-energy effective theory, which play the rôle of light Higgs scalars and give rise to EWSB.[^1] In general, gauge theories of this type, living on an interval in $d=5$, suffer from anomalies. These anomalies, which are localized on the four-dimensional boundaries of the interval, arise variously from fermions (whether localized on the boundaries or propagating in the bulk) and from bulk Chern-Simons (CS) terms for the gauge fields. This has been known for some time [@Arkani-Hamed:2001is; @Scrucca:2004jn], but appears to have been disregarded in the literature on holographic models of EWSB, perhaps with the tacit assumption that, provided the usual $d=4$ anomaly cancellation condition is satisfied for the gauge and fermion zero modes (which of course it is in any model that reproduces the Standard Model at low energies), the fermionic boundary anomalies can always be cancelled by suitable CS terms. Unfortunately, this is not the case. Indeed, consider [@Gripaios:2007tk] the example of an $SU(2)$ bulk gauge field, broken to the same $U(1)$ subgroup on each boundary, and let the fermionic content consist of a left-handed Weyl fermion of charge $+1$ on one boundary, and a left-handed Weyl fermion of charge $-1$ on the other boundary. Now, the theory is certainly free of anomalies from the $d=4$, low-energy perspective, because its zero modes describe a $U(1)$ gauge theory coupled to Weyl fermions of charge $\pm1$. But there is no way in which one can cancel the boundary anomalies in the $d=5$ theory with a CS term. The reason for this is simply that the CS term, which is proportional to ${\,\mathrm{str}}T^A T^B T^C \equiv {\,\mathrm{tr}}T^A \{T^B, T^C\}$, vanishes identically for the generators $T^A$ of the Lie algebra of $SU(2)$.
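To spell out the last step: in the fundamental representation of $SU(2)$, with $T^A = \sigma^A/2$, the anticommutator of two generators is proportional to the identity, $\{\sigma^B,\sigma^C\}=2\delta^{BC}\mathbb{1}$, so the symmetrized trace collapses to the trace of a single traceless generator:

```latex
\mathrm{str}\, T^A T^B T^C
  \;=\; \mathrm{tr}\, T^A \{T^B, T^C\}
  \;=\; \mathrm{tr}\, T^A \left(\tfrac{1}{2}\,\delta^{BC}\,\mathbb{1}\right)
  \;=\; \tfrac{1}{2}\,\delta^{BC}\, \mathrm{tr}\, T^A
  \;=\; 0 .
```

The same conclusion holds in any representation, since $SU(2)$ possesses no cubic Casimir invariant: the symmetric tensor $d^{ABC}$ vanishes identically.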
The observation that this simple example illustrates, namely that $d=4$ anomaly cancellation is necessary but not sufficient for $d=5$ anomaly cancellation, is not new. To my knowledge, it appeared first in the context of string orbifold models in [@vonGersdorff:2003dt] (and later in [@Liao:2006uja]). Nevertheless, it does not seem to be widely appreciated. It is clear, then, that one needs to worry about the anomalies in holographic models of EWSB. Since such theories are non-renormalizable effective field theories, the anomalies do not render the theory inconsistent. Rather, similar to the case of anomalous EFTs in $d=4$ [@Preskill:1990fr], a consistent EFT description always exists, but the anomalous symmetries must be non-linearly realized on the boundaries in $d=5$ [@Gripaios:2007tk]. The effect of this non-linear realization is to change the boundary conditions (BCs) of the gauge fields, such that the spectrum of gauge field zero modes is not that which one might naïvely assume. It was further explained in [@Gripaios:2007tk] how, in this consistent description, the usual anomaly cancellation condition is guaranteed in the $d=4$ dual. Essentially, what happens is that any boundary anomaly leads to the corresponding gauge field being non-linearly realized, such that it cannot be present in the low-energy gauge group. In consequence, the surviving low-energy gauge group is reduced to one that is anomaly-free with respect to the $d=4$ zero modes. Moreover, in this description, the ’t Hooft matching condition [@tHooft:1980xb] is obeyed in the $d=4$ dual [@Gripaios:2007tk]. Conversely, to get the pattern of symmetry breaking that is naïvely assumed in a given model, the net anomaly for the unbroken subgroup on the boundary must vanish, and it is important that this be checked in existing and future models.
What is more, even if the anomalies can be cancelled, there will still, in general, be observable physics associated with the anomaly, which one might hope to explore at upcoming collider experiments. This physics takes the form of Wess-Zumino-Witten (WZW) terms [@Wess:1971yu; @Witten:1983tw] in the low-energy effective action, which reproduce the UV anomalies of non-linearly realized symmetries, in accordance with ’t Hooft’s condition [@Panico:2007qd]. This is analogous to what happens in the strong interaction. There, the WZW term in the low-energy pion Lagrangian reproduces the anomalies of the approximate chiral symmetries of the quarks in the UV, and gives rise to spectacular physics, most notably the decay $\pi^0 \rightarrow \gamma \gamma$. In the context of holographic models of EWSB, the electromagnetic field is enlarged to the full Standard Model gauge group, and the Higgs sector takes the place of the pions. The goal of the present work is to supply a strategy by which one can, firstly, determine whether a given model can be made anomaly-free by adding CS terms and, secondly, compute any WZW term in the low-energy effective action. Explicitly, the strategy is to rewrite the theory in terms of an equivalent theory in which the full bulk gauge symmetry is resurrected on the boundaries (albeit non-linearly realized), via the addition of coset sigma models (which I review in the next Section) localized on the boundaries. In Section \[class\], I show that this is always possible at the classical level. In Section \[cosa\], I consider quantum effects, beginning with a review of anomalies in coset sigma models. In Section \[quant\], I return to theories on an interval of $AdS_5$, and determine whether $G$-invariance may be resurrected at the quantum level, including the anomalies. In doing so, I derive the consistency conditions, and the form of the WZW term that arises in the low-energy effective Lagrangian. In Section \[ex\], I apply the strategy.
I show that the so-called ‘$MCHM_{10}$’ model [@Agashe:2006at] can be made consistent by addition of CS terms, and does not have any WZW term in the low energy Lagrangian. Nor does the ‘$MCHM_{5}$’ model, whose consistency was shown previously in [@Gripaios:2007tk]. I show that yet another model, derived from the one of [@Contino:2003ve], does have a WZW term. I should remark that WZW terms are generic in models in which the Higgs sector arises from anomalous, non-linearly-realized symmetries. For example, Hill and Hill [@Hill:2007nz] have recently discussed how they arise in little Higgs models. I postpone a discussion of the phenomenological implications to future work, but for two remarks. The first is that the WZW terms may prove to be invaluable if Nature does actually choose to realize EWSB via strong-coupling at the TeV scale. The reason for this is that excited states in such theories typically have a width comparable to their mass (there is no small parameter to suppress one versus the other) and so the spectrum contains a mess of broad resonances, whose identification in experiment is problematic. This makes it difficult to learn anything about the theory from experiment, even before one attacks the theoretical strong-coupling problem. Again, the analogy with the strong interaction is helpful: after many decades of experiment at the GeV scale, we still argue about the spectrum of hadronic resonances and our understanding is minimal, even though we know very well what the microscopic theory is in that case. The things that we can infer about the high-energy theory from low-energy experiments (such as the presence of approximate chiral symmetries of quarks and the number of colours) come from the study of the symmetries and anomalies thereof. If a similar scenario does indeed explain the weak scale, then it is on the symmetries and their anomalies that we should perhaps focus.
The second remark is that WZW terms should be of particular interest in models, such as the ones discussed here, where the Standard Model fermions themselves belong partly to the strongly-coupled sector. In such models, the WZW term, which measures the anomaly content of the strongly-coupled sector, is fixed by the Standard Model fermion content, once it has been decided how the Standard Model fermions fit into representations of the symmetry group of the strongly-coupled sector. This offers the hope that we may be able to [*predict*]{} the form of the WZW term. Our notations for ${\mathrm{AdS}}$ and fermions therein are those of [@Gripaios:2006dc]. \[cos\]$G/H$-coset sigma models =============================== In this Section, I review the construction of coset sigma models, and the gauging thereof. I follow closely the notation and discussion of Preskill [@Preskill:1990fr]. Let $G$ be a compact Lie group with subgroup $H$, and let $G/H$ be a coset manifold with co-ordinates $\phi_a$. I can always choose a reductive splitting for a basis of Hermitian generators, $T^\alpha \in {\,\mathrm{Lie}}(H)$, and the remaining generators, $X^a \in {\,\mathrm{Lie}}(G)$, such that $$\begin{gathered} \label{reduct} [T^\alpha,X^a] = i f^{ \alpha a}_{\phantom{ \alpha a}b} X^b \end{gathered}$$ and I shall always do so in the sequel. A coset representative, $g(\phi_a) \in G$, is defined up to the equivalence relation $$\begin{gathered} g \sim gh, \end{gathered}$$ where $h \in H$; a convenient coset representative is $$\begin{gathered} g(\phi_a) = e^{i \phi_a X^a}.\end{gathered}$$ Under left-multiplication by $\Omega^{-1} \in G$, the coset representative transforms as $$\begin{gathered} \label{Gaction} \Omega: \; g(\phi_a) \rightarrow g(\phi^{\prime}_a) = \Omega^{-1} g(\phi_a) h (\Omega, g),\end{gathered}$$ where the compensating transformation, $h (\Omega, g)$, is chosen so as to maintain the choice of coset representative. 
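For concreteness, the compensator can be worked out explicitly in the simplest case, $G = SU(2)$ and $H = U(1)$ generated by $T^3$. The following sketch (my own construction, not code from the paper) computes $h(\Omega, g)$ numerically for the representative $g(\phi_a) = e^{i \phi_a X^a}$ and checks that $\Omega^{-1} g\, h$ is again of coset form:

```python
import numpy as np

# Pauli matrices; T^3 = s3/2 generates H = U(1), X^a = s_a/2 (a = 1, 2).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def su2_exp(a1, a2, a3):
    """exp(i (a1 s1 + a2 s2 + a3 s3) / 2), in closed form for 2x2 matrices."""
    rho = np.sqrt(a1**2 + a2**2 + a3**2)
    if rho < 1e-15:
        return I2.copy()
    n = (a1 * s1 + a2 * s2 + a3 * s3) / rho
    return np.cos(rho / 2) * I2 + 1j * np.sin(rho / 2) * n

def coset_rep(phi1, phi2):
    """g(phi) = exp(i phi_a X^a): an SU(2) matrix with real diagonal entries."""
    return su2_exp(phi1, phi2, 0.0)

def left_action(Omega, g):
    """g -> Omega^{-1} g h(Omega, g); returns (g', h)."""
    M = Omega.conj().T @ g               # Omega^{-1} g (SU(2): inverse = dagger)
    c = -2 * np.angle(M[0, 0])           # pick h = exp(i c T^3) to restore the rep
    h = su2_exp(0.0, 0.0, c)
    return M @ h, h

rng = np.random.default_rng(0)
Omega = su2_exp(*rng.normal(size=3))
g = coset_rep(0.7, -0.4)
g_new, h = left_action(Omega, g)

# g' is again a valid coset representative: real diagonal entries ...
assert np.allclose(np.imag(np.diagonal(g_new)), 0, atol=1e-12)
# ... h lies in H (diagonal, generated by T^3), and g' h^{-1} = Omega^{-1} g.
assert np.allclose(h, np.diag(np.diagonal(h)))
assert np.allclose(g_new @ h.conj().T, Omega.conj().T @ g)
```

Note that $h$ depends on both $\Omega$ and $g$, which is precisely what makes the realization of $G$ on the coset fields non-linear.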
The ${\,\mathrm{Lie}}(G)$-valued Cartan $1$-form,[^2] $0^g \equiv g^{-1} d g$, transforms under the global $G$-action (\[Gaction\]) as $$\begin{gathered} \Omega: \;0^g \rightarrow h^{-1} g^{-1} \Omega d( \Omega^{-1} g h )= h^{-1} 0^g h + h^{-1} d h.\end{gathered}$$ Decomposing the Cartan form as $$\begin{gathered} 0^g = (0^g)_H + (0^g)_X = (0^g)_\alpha T^\alpha + (0^g)_a X^a,\end{gathered}$$ the reductive splitting (\[reduct\]) implies that the global $G$ action (\[Gaction\]) reduces to $$\begin{aligned} \Omega: \; (0^g)_H &\rightarrow h^{-1} (0^g)_H h + h^{-1} d h \\ (0^g)_X &\rightarrow h^{-1} (0^g)_X h .\end{aligned}$$ Thus, $(0^g)_X$ transforms homogeneously under the global $G$-action (\[Gaction\]), whereas $(0^g)_H$ transforms as an $H$-connection. I should now like to elevate the global $G$ action (\[Gaction\]) to a local one. To do so, I let $\Omega = \Omega (x)$, and define the local $G$-action on the coset representative by $$\begin{gathered} \label{Glocal} \Omega: \; g(\phi_a) \rightarrow g(\phi^{\prime}_a) = \Omega^{-1} (x) g(\phi_a) h (\Omega (x), g).\end{gathered}$$ I also introduce a connection for $G$, [*viz.*]{} a ${\,\mathrm{Lie}}(G)$-valued 1-form, $A$, which I take to be anti-hermitian, transforming as $$\begin{gathered} \Omega: \;A \rightarrow A^{\Omega} \equiv \Omega^{-1} (A + d) \Omega.\end{gathered}$$ Then, the object $A^g \equiv g^{-1} (A + d) g$ ([*cf.*]{} the definition of $0^g \equiv g^{-1} d g$) transforms under the local $G$-action as $$\begin{gathered} \Omega: \; A^g \rightarrow A^{\Omega \Omega^{-1} gh} = h^{-1} A^g h + h^{-1} d h.\end{gathered}$$ Decomposing $A^g \in {\,\mathrm{Lie}}(G)$ as before, one sees again that $(A^g)_X$ transforms homogeneously under the local $G$-action (\[Glocal\]), whereas $(A^g)_H$ transforms as an $H$-connection. 
Given some matter field $\psi$ transforming as a representation $r$ of $H$, $$\begin{gathered} \psi \rightarrow D_r [h^{-1}] \psi,\end{gathered}$$ this can be extended to a transformation under the $G$-action (global or local) as $$\begin{gathered} \label{psireal} \Omega: \;\psi \rightarrow D_r [h^{-1}(\Omega, g)] \psi.\end{gathered}$$ Note that this transformation, which involves the coset fields, is non-linear. A covariant derivative for the local $G$-action is $$\begin{gathered} \label{A} D\psi = (d + D_r [(A^g)_H ] ) \psi,\end{gathered}$$ (or $D\psi = (d + D_r [(0^g)_H ] ) \psi $ in the global case). I will also need to consider a matter field $\Psi$, transforming as a representation of the whole group $G$, $$\begin{gathered} \Omega: \;\Psi \rightarrow D_R [\Omega^{-1}] \Psi.\end{gathered}$$ Whilst such fields can be trivially coupled to the $G$-connection $A$ using the usual covariant derivative, they can also be coupled to matter fields $\psi$ transforming linearly under $H$, but non-linearly under $G$, as discussed above. Indeed, the field $\Psi^\prime \equiv D_R [g^{-1}]\Psi$ transforms only under the compensator $$\begin{gathered} \label{bigreal} \Omega: \; \Psi^\prime \rightarrow D_R [h^{-1}(\Omega, g)] \Psi^\prime\end{gathered}$$ and can be coupled to fields $\psi$ transforming as in (\[psireal\]). This is the non-linear sigma model analogue of a Yukawa coupling. We now have all the necessary ingredients to build a locally $G$-invariant, $G/H$-coset sigma model, coupled to matter transforming in representations of either $H$ or $G$.
\[class\]Resurrecting $G$-invariance at the classical level =========================================================== In the usual formulation of a gauge theory on an interval, the bulk $G$ gauge invariance is allowed to be broken to subgroups $H_{0,1}$ on the boundaries, by choosing the Dirichlet BC for the fifth components, $A_5$, of gauge fields corresponding to generators in ${\,\mathrm{Lie}}(H_0)$ on the UV boundary, and the Neumann BC for the others. Similarly, on the IR boundary, one chooses the Dirichlet BC for the $A_5$ components corresponding to generators in ${\,\mathrm{Lie}}(H_1)$, and so on. Since the theory on the boundary does not respect the full $G$ invariance, matter fields living on the boundary need only come in representations of the subgroup $H_0$ on the UV boundary, and $H_1$ on the IR boundary. Similarly, though matter fields that propagate in the bulk must transform as representations of $G$, their boundary conditions need only respect $H_0$ or $H_1$, as appropriate. In particular, for a bulk (Dirac) fermion, one is free to choose either the left- or right-handed Weyl components to vanish on, say, the UV boundary, provided that states with the same BC furnish a representation of $H_0$. In this Section, I show that, at least at the classical level, such a theory has an equivalent formulation in which the full, bulk $G$-invariance is maintained everywhere, including on the boundaries. In this formulation, I must add $G/H_{0,1}$-coset sigma models (as described in the last Section) on the respective boundaries. The symmetries corresponding to generators in $G/H_{0,1}$ are non-linearly realized by the coset scalar fields. The couplings on the boundaries between the coset scalars and the gauge fields modify the BCs and give rise to the same physical spectrum of gauge boson zero modes as in the usual formulation. A pedagogical explanation is given in [*e. g. *]{}[@Csaki:2005vy].
So the full, $G$ gauge invariance can be resurrected, at least in the gauge sector, by adding $G/H_{0,1}$-coset sigma models on the boundaries. What is more, the same coset fields can be used to restore the gauge symmetry in the matter sector as well. To see how this is achieved, consider first matter fields localized on, say, the UV boundary. In the usual formulation, these need only transform as a rep of the unbroken subgroup $H_0$. But as we saw in the previous Section, the $G/H_{0}$ coset fields allow us to extend the matter field to a realization of $G$, according to (\[psireal\]). Resurrecting $G$-invariance for a matter field $\Psi$ living in the bulk is not much more difficult. Here, the problem is that, in the usual formalism, the BCs for $\Psi$ on, say, the UV boundary, need not respect $G$, but only the subgroup $H_0$. Let us suppose, for example, that we have a bulk fermion $$\begin{gathered} \Psi = \begin{pmatrix} \psi_{\alpha} \\ \overline{\chi}^{\dot{\alpha}} \end{pmatrix}\end{gathered}$$ in a rep $R$ of $G$, and that the UV BCs are $\psi_0 = 0$ for states in $R$ forming some rep $r$ of $H_0$, and $\chi_0 =0$ otherwise. To resurrect $G$ on the UV boundary, I consider instead a bulk fermion $\Psi$ with the $G$-invariant BC $\chi_0 = 0$ for all states in the rep $R$. I also add a boundary-localized fermion $\eta_0$ in rep $\overline{r}$ of $H_0$. Now $\eta_0$ carries a realization of $G$ according to (\[psireal\]) and, furthermore, the object $D_R[g^{-1}]\psi$ transforms only under the compensator, as in (\[bigreal\]). Since the rep $R$, construed as a rep of $H_0$, contains the rep $r$, I can write a $G$-invariant term coupling $\eta_0$ and $D_R[g^{-1}]\psi_0$ on the UV boundary. In the limit that the dimensionful coupling constant of this term becomes large (of order of the EFT cut-off), its effect [@Gripaios:2006dc] is equivalent to flipping the BC from $\chi_0 = 0$ to $\psi_0 = 0$ for states in a rep $r$.
Thus, it is equivalent to the usual situation of a bulk fermion with BCs respecting only $H_0$. On the IR boundary, I follow a similar procedure, except that I choose the $G$-invariant BC for $\Psi$ to be the opposite one, namely $\psi_1 = 0$, for all states. With this choice, the bulk fermion $\Psi$ has no $d=4$ zero modes. This simplifies the derivation of the low-energy effective action: to get it, I simply integrate out all of the (massive) bulk fermion modes. We thus see how to convert the usual formulation, with $G$ broken on the boundaries, into one with $G$-invariance resurrected on the boundaries. The equivalence of these two formulations is, in fact, a trivial one: the alternative formulation simply has a larger gauge invariance than the usual one, in that it is $G$-invariant everywhere. The usual formulation is then obtained as a gauge-fixing of the alternative one. The gauge-fixing is, of course, the one in which the coset fields on the boundaries vanish. Given that the two formulations are equivalent, what is the utility of the alternative formulation? As we shall see, it makes it much easier to deduce the consistency requirements following from anomaly considerations, and also to compute the WZW term. Thus far, everything has been classical. In the next section, we shall see how things change at the quantum level. \[cosa\]Anomalies in $G/H$-coset sigma models ============================================= In this Section I review the anomaly structure of coset models, following Alvarez-Gaumé and Ginsparg [@AlvarezGaume:1985yb]. Let us first recall the structure of the anomaly in $d=4$ arising from a Weyl fermion, $\psi$, transforming as a linear representation, $R$ of group $G$. 
The effective action, obtained by integrating out the fermions, and defined by[^3] $$\begin{gathered} e^{-\Gamma_R [A]} = \int_{\psi} \exp{-\int d^4 x \; \mathcal{L} (A,\psi)},\end{gathered}$$ is not, in general, invariant under an infinitesimal gauge transformation, $A\rightarrow A^{1+\omega}$, but rather transforms as $$\begin{gathered} \label{standanom} \delta_{\omega} \Gamma_R [A] = \frac{1}{24 \pi^2} \int d^4 x \; Q_R (\omega , A),\end{gathered}$$ where $$\begin{gathered} Q_R (\omega , A) = {\,\mathrm{tr}}_R (\omega d[AdA +\frac{1}{2}A^3])\end{gathered}$$ with the trace over matrices in the representation $R$. To integrate this, I let $\omega = g^{-1} \delta g$, such that $$\begin{gathered} \Gamma_R [A^{g+\delta g}] - \Gamma_R [A^g] = \frac{1}{24 \pi^2} \int d^4 x \; Q_R ( g^{-1} \delta g , A^g),\end{gathered}$$ and choose a one-parameter family $g_s (x)$ of maps on $s \in [0,1]$, such that $g_{s=0} = 1$ and $g_{s=1} = g$. Integrating with respect to $s$, I obtain $$\begin{gathered} \label{end} \Gamma_R [A^g] - \Gamma_R [A] = \frac{1}{24 \pi^2} \int_{0}^{1} ds \; \int d^4 x \; Q_R ( g_s^{-1} \partial_s g_s , A^{g_s}).\end{gathered}$$ Now consider fermions in a representation $r$ of the subgroup $H$, coupled to the gauge field and the sigma-model fields via the $H$-connection, $(A^g)_H$, as in (\[A\]). By comparison with (\[standanom\]) we see that the $G$-anomaly of the effective action $\Gamma_r$, obtained by integrating over the fermions, is given by $$\begin{gathered} \label{ranom} \delta_{\epsilon} \Gamma_r [(A^g)_H] = \frac{1}{24 \pi^2} \int d^4 x \; Q_r (\epsilon , (A^g)_H).\end{gathered}$$ In the above, $\epsilon$ is the infinitesimal version of the compensating $h$ transformation: $h (\Omega, g) = 1 + \epsilon (\Omega, g)+\dots$. 
In theories on a higher-dimensional interval, with $G$-resurrected at the classical level as described in the previous Section, the boundary fermions will give rise to anomalies of exactly this form, with $H$ replaced by the linearly-realized subgroup $H_{0,1}$ on the relevant boundary. In order to consistently quantize the theory, I need to be able to cancel this anomaly against anomalies coming from the bulk fermions and CS terms. The latter are anomalies of the group $G$, whereas the anomalies in (\[ranom\]) have the structure of anomalies in the subgroup $H = H_{0,1}$ (even though they are defined for the whole group $G$ via the compensator). It would, therefore, seem to be impossible to cancel the anomalies in this way. In fact, the anomalies can be cancelled, under certain conditions. To see how this may be achieved, consider the following object $$\begin{gathered} \Gamma_R^{WZW} = \frac{1}{24 \pi^2} \int_0^1 ds \; \int d^4 x \; Q_R (g_s \partial_s g_s^{-1} , (A^g)_H^{g_s^{-1}}),\end{gathered}$$ where $R$ is any representation of $G$. By reversing the argument of Eqn’s (\[standanom\]-\[end\]), we see that this object transforms under the $G$-action like the difference[^4] $$\begin{gathered} \label{diff} \Gamma_R [(A^g)_H^{g^{-1}}] - \Gamma_R [(A^g)_H].\end{gathered}$$ These terms are just the effective actions one would obtain by integrating out fermions in representation $R$, coupled to connections $(A^g)_H^{g^{-1}}$ and $(A^g)_H$, respectively. But under the $G$-action, $(A^g)_H \rightarrow (A^g)_H^h$, where $h = h(\Omega,g)$ is the compensator, and so the anomalous $G$-action on the second term in (\[diff\]) cancels the anomalous $G$-action on $\Gamma_r [(A^g)_H]$ in (\[ranom\]) iff.  $$\begin{gathered} \label{match} {\,\mathrm{str}}_r T^\alpha T^\beta T^\gamma = {\,\mathrm{str}}_R T^\alpha T^\beta T^\gamma,\end{gathered}$$ where the generators are those of ${\,\mathrm{Lie}}(H)$. 
Moreover, since under the $G$-action $(A^g)_H^{g^{-1}} \rightarrow (A^g)_H^{g^{-1}\Omega}$, we see that the first term in (\[diff\]) has the usual $G$-anomaly corresponding to representation $R$, but with the alternative $G$-connection, $(A^g)_H^{g^{-1}}$, replacing the usual $G$-connection $A$. This is easily corrected by addition of Bardeen’s counterterm [@Bardeen:1969md; @AlvarezGaume:1984dr] $$\begin{gathered} B_R [A_1, A_2] = \frac{1}{48\pi^2} \int d^4 x \; {\,\mathrm{tr}}_R [(F_1 + F_2)(A_2 A_1 - A_1 A_2) - A_2^3 A_1+ A_1^3 A_2 +\frac{1}{2} A_2 A_1 A_2 A_1],\end{gathered}$$ which transforms such that $$\begin{gathered} B_R [A_1^\Omega, A_2^\Omega] - B_R[A_1, A_2] = \Gamma_R [A_1^\Omega] - \Gamma_R [A_2^\Omega] - \Gamma_R [A_1] + \Gamma_R [A_2].\end{gathered}$$ In the case at hand, setting $A_1 = A$ and $A_2 = (A^g)_H^{g^{-1}}$, I find that adding the term $$\begin{gathered} \label{whole} \Gamma_R^{WZW} + B_R [A, (A^g)_H^{g^{-1}} ]\end{gathered}$$ to the action converts the $G$-anomaly of a fermion in representation $r$ of $H$ to the usual $G$-anomaly of a fermion in representation $R$ of $G$, iff. the $H$-anomalies of $r$ and $R$ match, in the sense of (\[match\]). If this is the case, then I can cancel the anomalies coming from boundary fermions against anomalies coming from bulk fermions or CS terms. \[quant\]Resurrecting $G$-invariance at the quantum level ========================================================= In order to resurrect $G$-invariance everywhere on a $d=5$ interval at the quantum level, the $G$ anomalies on each of the two boundaries must separately vanish: If they do not, the number of linearly-realized gauge symmetries (and hence the low-energy gauge group) is smaller than that which is claimed. On each boundary, there are three contributions to the anomaly. 
Firstly, there are boundary-localized fermions in reps $r_{0,1}$ of $H_{0,1}$, whose contribution to the anomaly takes the form of (\[ranom\]), with $r\rightarrow r_{0,1}$ and $H \rightarrow H_{0,1}$. Secondly, there are bulk fermions in a rep $R'$ of $G$. Thirdly, there are CS terms corresponding to a rep $R''$ of $G$. The nature of the BCs I choose for the bulk fermions means that the contribution to the anomaly is the same for both bulk fermions and CS terms. They take the form of (\[standanom\]) with $R = R' \oplus R''$, but have opposite signs on the two boundaries. Equivalently, I can say that the anomaly on the UV boundary is that of $R$, whilst the anomaly on the IR boundary is that of $\overline{R}$. Now, we saw in the last Section that anomalies of the form (\[ranom\]), can be converted to anomalies of the form (\[standanom\]), via the term (\[whole\]) iff. (\[match\]) is satisfied. Therefore, on the interval, we can consistently quantize the theory with the assumed structure of linearly and non-linearly realized symmetries iff. the anomaly of the rep $r_0$ of $H_0$ matches that of the rep $R$, construed as a rep of $H_0$, [*and*]{} the anomaly of the rep $r_1$ of $H_1$ matches that of the rep $\overline{R}$, construed as a rep of $H_1$. Note that this condition includes the usual $d=4$ zero mode anomaly cancellation condition, which is that the anomaly of the rep $r_0 \oplus \overline{r}_1$ of the largest subgroup of both $H_0$ and $H_1$ should vanish, but it is in fact much stronger. What is more, even if I allow myself free choice of the CS term, corresponding to $R$ being an arbitrary rep, I still find that the condition is stronger than the usual $d=4$ condition. We see, in particular, that our original example, with $SU(2)$ broken to $U(1)$ on each boundary and with fermions of opposite charge on the boundaries, does not satisfy the condition, because the anomaly of any rep $R$ of $SU(2)$ must vanish.
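The failure of the matching condition (\[match\]) in this example can be made concrete. In the sketch below (my own illustration; the $U(1)$ charge normalization is conventional), the $T^3$ charges of any $SU(2)$ irrep come in opposite-sign pairs, so their cubes always sum to zero, and no choice of bulk rep $R$ can match the single charge $+1$ boundary fermion:

```python
from fractions import Fraction

def t3_charges(j2):
    """T^3 eigenvalues of the spin-j SU(2) irrep, with j = j2/2."""
    return [Fraction(m, 2) for m in range(-j2, j2 + 1, 2)]

def cubic_anomaly(charges):
    """Cubic U(1) anomaly coefficient: sum of charges cubed."""
    return sum(q**3 for q in charges)

# Every SU(2) irrep (here up to spin 5), construed as a U(1) rep, is
# anomaly-free, since the T^3 charges pair up with opposite signs ...
assert all(cubic_anomaly(t3_charges(j2)) == 0 for j2 in range(11))
# ... so none of them can match the boundary fermion of charge +1:
assert cubic_anomaly([Fraction(1)]) != 0
```

The same exact-fraction bookkeeping extends to reducible reps, since the anomaly coefficient is additive over irreps.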
In this example, the condition is not satisfied at either boundary. To exhibit an example where the condition is satisfied at one boundary but not the other, consider the same set-up, but with the bulk group $SU(2)$ completely broken on one boundary. The condition is now trivially satisfied on this boundary, and from the $d=4$ perspective (there is no surviving gauge group in either case), but is violated on the other boundary, where a $U(1)$ is preserved. Once a consistent theory has been found, it is a simple matter to derive the form of the WZW term in the low-energy effective action. Since I have a theory which is everywhere $G$-invariant, I am free to choose the gauge $A_5 = 0$. When I do this, the bulk Dirac fermions have no chiral coupling, and integrating them out has no effect on the anomaly structure. Furthermore, any CS term involves only vector fields in this gauge, so cannot contribute to the WZW term. The only place the low-energy WZW term can now come from is from the boundary-localized WZW terms of the form (\[whole\]). What is more, I can use some of the remaining gauge freedom to gauge away all of the coset fields on one boundary, such that the WZW term on that boundary vanishes. Once I have done so, I can still gauge away some, but not all, of the coset fields on the other boundary. The ones I can gauge away are those that were not paired with a coset field on the other boundary, in the sense that they corresponded to the same generator of $G$. Thus I am left with one physical scalar coset field in the low-energy theory for every generator that is in ${\,\mathrm{Lie}}(G)$ but not in ${\,\mathrm{Lie}}(H_0)$ or in ${\,\mathrm{Lie}}(H_1)$. The WZW term that remains on one boundary in this gauge is not quite the WZW term that appears in the low-energy effective action in $d=4$, because it involves all of the $G$ gauge fields. Some of these gauge fields do not survive in the low-energy theory, that is to say they have no zero modes.
The only ones that do survive are those corresponding to generators in the intersection of ${\,\mathrm{Lie}}(H_0)$ and ${\,\mathrm{Lie}}(H_1)$. To integrate out the massive gauge fields in the WZW term, I simply set them to zero, and replace the surviving gauge fields by their zero modes. Having done so, I am left with the WZW term that appears in the low-energy effective action. Though it may at first seem rather odd that I can evaluate the WZW term in the low-energy action either by going to a gauge in which it is generated at the UV boundary, or by going to a gauge in which it is generated at the IR boundary, this is in fact completely necessary from the point of view of the holographic dual. According to the duality, the UV boundary corresponds to the UV of the $d=4$ theory, and the IR boundary to the IR. Because the anomaly is non-renormalized, its form is the same at any energy scale. So the form of the anomaly in the $d=4$ dual is completely fixed by the anomalies on, say, the UV boundary. Although the form of the anomaly is fixed, the form of the WZW term that appears in the low-energy effective action is not. Indeed, the form of the WZW term is not fixed until the fate of the various symmetries at low energy has been decided: If, on the one hand, a symmetry remains linearly-realized, the anomaly must be reproduced by fermions in the low-energy effective theory, as argued by ’t Hooft; if, on the other hand, the symmetry is non-linearly realized, then the anomaly is reproduced at low energy by the WZW term. Now, in theories on an interval in $AdS$, the fate of the symmetries at low energies is decided, in part, by the anomaly structure on the IR boundary: if a symmetry is anomalous on the IR boundary, it must be non-linearly realized at low energy. 
So in the context of holography, one can say, in a sense, that the anomaly in the $d=4$ dual is completely determined by the anomaly on the UV boundary of the $d=5$ theory, but that the WZW term in the $d=4$ dual is then determined by the anomaly on the IR boundary in $d=5$. \[ex\]Examples ============== Perhaps the most realistic holographic models of EWSB are the ‘minimal composite Higgs models’ of [@Agashe:2004rs]. They are based on bulk gauge group $G = SU(3)_c \times SO(5) \times U(1)_X$, broken to the custodially-symmetric $H_1 = SU(3)\times SO(4) \times U(1)_X$ in the IR, where $SO(4) = SU(2)_L \times SU(2)_R$, and to the Standard Model gauge group, $H_0 = SU(3)\times SU(2)_L \times U(1)_Y$, on the UV boundary, where $Y=T_R^3 + X$. The custodial symmetry prevents large corrections to the $T$-parameter of precision tests of EWSB, and by enlarging the $SO(4)$ to $O(4)$ one can even control the corrections to $Z \rightarrow b\overline{b}$ [@Agashe:2006at]. The models still appear to require some fine-tuning to get a small enough value for the $S$-parameter, however. It is simple enough to see that models based on this pattern of symmetry breaking can always be rendered consistent by addition of a suitable CS term, and do not lead to a WZW term in the low-energy effective action. Indeed, in the alternative formulation with $G$ invariance resurrected everywhere, we know that the boundary-localized fermions correspond to the fermion zero modes, which in this case are just three Standard Model generations. Now, in general, the fermions can be split between the two boundaries, with some living on the UV boundary and some living on the IR boundary. But in the case at hand, all of the zero mode fermions must live on the UV boundary, where the gauge group is that of the Standard Model.
If some of the fermions were to live on the IR boundary, then we would have to be able to organize them into a representation of the unbroken group there, [*viz.*]{} $H_1 = SU(3)\times SO(4) \times U(1)_X$. But there simply is no way to organize a subset of the Standard Model fermions into a rep of $SU(3)\times SO(4) \times U(1)_X$. Thus all of the Standard Model fermions live on the UV boundary, and the net $H_0$ anomaly from fermions localized on the UV boundary vanishes. Of course, there will in general be anomalies coming from bulk fermions, but these come in reps of $G$ and can always be cancelled by a CS term. There is, therefore, no WZW term in the low-energy effective action. It is not difficult to find a model which does have a WZW term. Indeed, consider the symmetry structure of the original holographic model of this type, in which bulk group $G =SU(3)_c \times SU(3)_L \times U(1)_X$ is broken on both branes to $H_{0,1} = SU(3)_c \times SU(2)_L \times U(1)_Y$, where $SU(2)_L$ is generated by the first three Gell-Mann generators of $SU(3)$ and $Y = T^8/\sqrt{3}+X$.[^5] Again, in the formulation with $G$ resurrected everywhere, the boundary fermions must correspond to the Standard Model fermions. But now I am free to put some fermions, the quarks say, on one boundary, and the leptons on the other. The boundary fermion contributions to the $SU(2)_L^2 U(1)_Y$ and $U(1)_Y^3$ anomalies are now non-vanishing, and must be cancelled by a combination of bulk CS terms and boundary WZW terms. The boundary WZW terms give rise to a WZW term in the low-energy effective action, as described above. I thank S. M. West for providing details of his calculations of the fermion anomalies in various models, and thank G.  F.  Giudice, L.  Randall, R.  Rattazzi and A.  Wulzer for discussions.
[^1]: Although I will not discuss them here, similar considerations apply to Higgsless models [@Csaki:2003sh] and models with other, [*e. g. *]{}flat, geometries.
[^2]: The reason for the obscure notation $0^g$ will, I hope, become clear.
[^3]: In treating the theory at the quantum level, I shall always consider the Euclidean path integral formalism.
[^4]: Note that $(A^g)_H^{g^{-1}} \neq (A)_H$!
[^5]: This model has many faults, not least that it is inconsistent with electroweak precision tests without fine-tuning, but that does not concern us here.
--- abstract: 'Syntax-directed translation tools require the specification of a language by means of a formal grammar. This grammar must conform to the specific requirements of the parser generator to be used. This grammar is then annotated with semantic actions for the resulting system to perform its desired function. In this paper, we introduce ModelCC, a model-based parser generator that decouples language specification from language processing, avoiding some of the problems caused by grammar-driven parser generators. ModelCC receives a conceptual model as input, along with constraints that annotate it. It is then able to create a parser for the desired textual syntax and the generated parser fully automates the instantiation of the language conceptual model. ModelCC also includes a reference resolution mechanism so that it is able to instantiate abstract syntax graphs, rather than mere abstract syntax trees.' author: - 'Fernando Berzal Francisco J. Cortijo Juan-Carlos Cubero Luis Quesada' bibliography: - 'modelcc.bib' title: 'The ModelCC Model-Driven Parser Generator' --- Introduction ============ Widely-used language processing tools require language designers to provide a textual description of the language syntax, typically using a BNF-like notation. The proper specification of such a grammar is a nontrivial process that depends on the lexical and syntactic analysis techniques to be used, since particular techniques require the grammar to comply with specific and different constraints. The most significant constraints on formal language specification originate from the need to consider context-sensitivity, the need of performing an efficient analysis, and some techniques’ inability to consider grammar ambiguities or resolve conflicts caused by them. Whenever the language syntax has to be modified, the language designer has to manually propagate the changes throughout the entire language processor tool chain.
These updates are time-consuming, tedious, and error-prone. By making such changes labor-intensive, the traditional approach hampers the maintainability and evolution of the language [@Kats2010]. Moreover, it is not uncommon for different tools to use the same language, e.g. compilers, code generators, debuggers, lint-like utilities, code beautifiers... Multiple copies of the same language specification must then be maintained in sync, since the language specification (i.e. its grammar) is tightly coupled to language processing (i.e. the semantic actions that annotate that grammar). A grammar is a model of the language it defines. But a language can also be defined by a conceptual data model that represents the abstract syntax of the desired language, focusing on the elements the language will represent and their relationships. In conjunction with the declarative specification of some constraints, such a model can be automatically converted into a grammar-based language specification. By using an annotated conceptual model, model-based language specification completely decouples language specification from language processing, which can be performed using whichever parsing techniques might be suitable for the formal language implicitly defined by the model. Semantic actions are no longer embedded within the language specification, as is usual in grammar-driven language processors. The model representing the language can be modified as needed, without having to worry about the language processor and the peculiarities of the chosen parsing technique, since the corresponding language processor will be automatically updated. As the language model is not bound to any particular parsing technique, evaluating alternative and/or complementary parsing techniques becomes possible without having to propagate their constraints into the language model.
It should be noted that, while the result of the traditional parsing process is an abstract syntax tree that corresponds to a valid interpretation of the input text according to the language syntax, nothing prevents the model-based language designer from modeling non-tree structures. Indeed, a model-driven parser generator can automate the implementation of reference resolution mechanisms, among other syntactic and semantic checks that are typically deferred to later stages in the traditional language processing pipeline. Since ModelCC is able to resolve references, it obtains abstract syntax graphs as the result of the parsing process, rather than the abstract syntax trees obtained from conventional parser generators. Model-Based Language Specification {#sec:modelbased} ================================== In this Section, we introduce the distinction between abstract and concrete syntax (\[subsec:asmcsm\]), discuss the potential advantages of model-based language specification (\[subsec:modelbased\]), and compare our approach with the traditional grammar-driven language design process (\[subsec:comparison\]). Abstract Syntax and Concrete Syntaxes {#subsec:asmcsm} ------------------------------------- The abstract syntax of a language is just a representation of the structure of the different elements of the language without the superfluous details related to its particular textual representation [@Kleppe2007]. A concrete syntax is a particularization of the abstract syntax that defines, with precision, a specific textual or graphical representation of the language. It should be noted that a single abstract syntax can be shared by several concrete syntaxes [@Kleppe2007]. For example, the abstract syntax of the typical *$<$if$>$-$<$then$>$-$<$optional else$>$* statement in imperative programming languages could be described as the concatenation of a conditional expression and one or two statements. 
Different concrete syntaxes could be defined for such an abstract syntax, which would correspond to different textual representations of a conditional statement, e.g. {“[if]{}”, “[(]{}”, expression, “[)]{}”, statement, optional “[else]{}” followed by another statement} and {“[if]{}”, expression, “[then]{}”, statement, optional “[else]{}” followed by another statement, “[endif]{}”}. The idea behind model-based language specification is that, starting from a single abstract syntax model (ASM) representing the core concepts in a language, language designers would later develop one or several concrete syntax models (CSMs). These concrete syntax models would suit the specific needs of the desired textual or graphical representation for the language sentences. The ASM-CSM mapping could be performed, for instance, by annotating the abstract syntax model with the constraints needed to transform the elements in the abstract syntax into their concrete representation. Advantages of Model-Based Language Specification {#subsec:modelbased} ------------------------------------------------ Focusing on the abstract syntax of a language offers some benefits [@Kleppe2007] and provides some potential advantages to model-based language specification over the traditional grammar-based language specification approach: - When reasoning about the features a language should include, specifying its abstract syntax seems to be a better starting point than working on its concrete syntax details. We control complexity by building abstractions that hide details when appropriate. - Sometimes, different incarnations of the same abstract syntax might be better suited for different purposes: a human-friendly syntax for manual coding, a machine-oriented format for automatic code generation, a Fit-like syntax for testing, different architectural views for discussions with project stakeholders... It might be useful for a given language to support multiple syntaxes.
- Since model-based language specification is independent from specific lexical and syntax analysis techniques, the constraints imposed by specific parsing algorithms do not affect the language design process. In principle, it might not even be necessary for the language designer to have advanced knowledge on parser generators when following a model-driven approach. - A full-blown model-driven language workbench would allow the modification of a language abstract syntax model and the automatic generation of a working IDE on the run. The specification of domain-specific languages would become easier, as the language designer could play with the language specification and obtain a fully-functioning language processor on the fly, without having to worry about the propagation of changes throughout the complete language processor tool chain. In summary, the model-driven language specification approach brings domain-driven design [@ddd] to the domain of language design. It provides the necessary infrastructure for what Evans would call the ‘supple design’ of language processing tools: the intention-revealing specification of languages by means of abstract syntax models, the separation of concerns in the design of language processing tools by means of declarative ASM-CSM mappings, and the automation of a significant part of the language processor implementation. Comparison with the Traditional Approach {#subsec:comparison} ---------------------------------------- A diagram contrasting two different approaches to language specification and design is shown in Figure \[fig:approach\]: the traditional grammar-driven approach on the left and the model-driven approach on the right. 
![image](image/traditional.pdf) \[fig:traditional\] ![image](image/modelcc.pdf) \[fig:ModelCC\] When following the traditional grammar-driven approach, the language designer starts by designing the grammar corresponding to the concrete syntax of the desired language, typically in BNF or a similar notation. Then, the designer annotates the grammar with attributes (and, probably, semantic actions), so that the resulting attribute grammar can be fed into lexer and parser generator tools that produce the corresponding lexers and parsers. The syntax-directed translation process generates abstract syntax trees from the textual representation in the concrete syntax of the language. When following the model-driven approach, the language designer starts by designing the conceptual model that represents the abstract syntax of the desired language, focusing on the elements the language will represent and their relationships. Instead of dealing with the syntactic details of the language from the start, the designer devises a conceptual model for it (i.e. the abstract syntax model, or ASM), the same way a database designer starts with an implementation-independent conceptual database schema before he converts that schema into a logical schema that can be implemented in the particular kind of DBMS that will host the resulting database. In the model-driven language design process, the ASM would play the role of entity-relationship diagrams in database design and each particular CSM would correspond to final table layouts for the physical database schema in relational DBMS’s. Even though the abstract syntax model of the language could be converted into a suitable concrete syntax model automatically, the language designer will often be interested in specifying the details of this ASM-CSM mapping. With the help of constraints imposed over the abstract model, the designer is able to guide the conversion from the ASM to its concrete representation using a particular CSM. 
This concrete model, when it corresponds to a textual representation of the abstract model, can be described by a formal grammar. It should be noted, however, that the specification of the ASM is independent from the peculiarities of the desired CSM. Therefore, the grammar specification constraints enforced by particular parsing tools do not impose limits on the design of the ASM. The model-driven language processing tool will take charge of those constraints and derive the grammar resulting from the ASM-CSM mapping that satisfies the parsing tool requirements. While the traditional language designer specifies the grammar for the concrete syntax of the language, annotates it for syntax-directed processing, and obtains an abstract syntax tree that is an instance of the implicit conceptual model defined by the grammar, the model-based language designer starts with an explicit full-fledged conceptual model and specifies the necessary constraints for the ASM-CSM mapping. In both cases, parser generators create the tools that parse the input text in its concrete syntax. The difference lies in the specification of the grammar that drives the parsing process, which is hand-crafted in the traditional approach and automatically generated as a result of the ASM-CSM mapping in the model-driven approach. Another difference stems from the fact that the result of the parsing process is an instance of an implicit model in the grammar-driven approach while that model is explicit in the model-driven approach. An explicit conceptual model is absent in the traditional language design process, although that does not mean that it does not exist. The model-driven approach enforces the existence of an explicit conceptual model, which lets the proposed approach reap the benefits of domain-driven design.
In general, the result of the parsing process is an abstract syntax tree that corresponds to a valid interpretation of the input text according to the language concrete syntax (at least for the constituency-based parsers typically used for programming languages). However, nothing prevents the conceptual model designer from modeling non-tree structures, which can be described, for instance, by reference attributed grammars [@Burger2010]. Hence the use of the ‘abstract syntax graph’ term in Figure \[fig:ModelCC\]. This might be useful, for instance, for modeling graphical languages, which are not constrained by the linear nature of text-based languages. In summary, the model-based language specification process goes from the abstract to the concrete, instead of following the traditional syntax-directed approach that goes from a concrete syntax model to an implicit abstract syntax model, which the model-driven approach makes explicit. This alternative approach facilitates the proper design and implementation of language processing systems by decoupling language processing from language specification. ModelCC Model Specification {#sec:modelspecification} =========================== Once we have described model-driven language specification in general terms, we now proceed to introduce ModelCC [@Quesada2014c], a tool that supports the model-driven approach for the design of language processing systems. ModelCC, at its core, acts as a parser generator. The starting abstract syntax model is created by defining classes that represent language elements and establishing relationships among those elements (associations in UML terms). Once the abstract syntax model is established, its incarnation as a concrete syntax is guided by the constraints imposed over language elements and their relationships as annotations on the abstract syntax model. In other words, the declarative specification of constraints over the ASM establishes the desired ASM-CSM mapping.
![image](image/tutorial/ModelCC-json.pdf) ModelCC allows the specification of languages in the form of abstract syntax models such as the one shown in Figure \[fig:language-json\]. This model, depicted here as a UML class diagram for clarity, specifies the abstract syntax model of the JSON open data exchange format. In Section \[sec:example1\], we will analyze a more traditional example: the language of arithmetic expressions (see Figure \[fig:calcmodelcc\]). The annotations that accompany the model in Figure \[fig:language-json\] provide the necessary information for establishing the complete ASM-CSM mapping that results in the concrete syntax of the JSON standard (as the annotations in Figure \[fig:calcmodelcc\] define the traditional infix notation for arithmetic expressions). In this Section, we introduce the basic constructs that allow the specification of abstract syntax models, while we will discuss how model constraints help us establish the desired ASM-CSM mapping in Section \[sec:modelconstraints\]. Basically, the ASM is built on top of basic language elements, which might be viewed as the tokens of the model-driven language specification. Model-driven language processing tools such as ModelCC provide the necessary mechanisms to combine those basic elements into more complex language constructs, which correspond to the use of concatenation, selection, and repetition in the syntax-driven specification of languages. Concatenation ------------- Concatenation is the most basic construct we can use to combine sets of language elements into more complex language elements. In textual languages, this is achieved just by joining the strings representing the constituent language elements into a longer string that represents the composite language element. In ModelCC, concatenation is achieved by object composition. The resulting language element is the composite element and its members are the language elements the composite element collates.
In Figure \[fig:language-json\], [JSONPair]{} is a composite element that results from the concatenation of a [JSONString]{} name and a [JSONValue]{} value. When translating the ASM into a textual CSM, each composite element in a ModelCC model generates a production rule in the grammar representing the CSM. This production, with the nonterminal symbol of the composite element in its left-hand side, concatenates the nonterminal symbols corresponding to the constituent elements of the composite element in its right-hand side. By default, the order of the constituent elements in the production rule is given by the order in which they are specified in the model, but such an order is not mandatory (e.g. many ambiguous languages require different ordered sequences of constituent elements and even some unambiguous languages allow for unordered sequences of constituent elements). Selection --------- Selection is the language modeling construct used to represent choices: it enables alternative elements in language constructs. In ModelCC, selection is created by subtyping. Specifying object-oriented inheritance relationships between language elements is equivalent to defining ‘is-a’ relationships in traditional database design. The language element we wish to establish alternatives for is the superelement (i.e. the superclass in OO design or the supertype in DB modeling), whereas the different alternatives are represented as subelements (i.e. subclasses in OO, subtypes in DB modeling). Alternative elements are always kept separate to enhance the modularity of ModelCC abstract syntax models and their integration in language processing systems. In Figure \[fig:language-json\], [JSONValue]{}s can be either [JSONObject]{}s or [JSONArray]{}s, as well as any of several basic data values including strings ([JSONString]{}), numbers ([JSONNumber]{}), booleans ([JSONBoolean]{}), and null ([JSONNull]{}).
Each inheritance relationship in ModelCC, when converting the ASM into a textual CSM, generates a production rule in the CSM grammar. In those productions, the nonterminal symbol corresponding to the superelement appears in its left-hand side, while the nonterminal symbol of the subelement appears as the only symbol in the production right-hand side. Obviously, if a given superelement has $k$ different subelements, $k$ different productions will be generated representing the $k$ alternatives defined in the ASM. Repetition ---------- Representing repetition is also necessary in abstract syntax models, since a language element might appear several times in a given language construct, but, when a variable number of repetitions is allowed, mere concatenation does not suffice to model it in the ASM. Repetition is also achieved through object composition in ModelCC, just by allowing different multiplicities in the associations that connect composite elements to their constituent elements. In Figure \[fig:language-json\], [JSONObject]{}s are made of a variable number of [JSONPair]{}s. Likewise, [JSONArray]{}s contain a variable number of [JSONValue]{}s. Each composition relationship representing a repetitive structure in the ASM will lead to two additional production rules in the grammar defining its textual CSM. A recursive production of the form [*$<$List$>$ ::= $<$Element$>$ $<$List$>$*]{} allows for the repetition of elements, whereas a simple production [*$<$List$>$ ::= $\epsilon$*]{} or [*$<$List$>$ ::= $<$Element$>$*]{} provides the base case for the recursion, depending on whether the list can be empty or not. It should also be noted that [*$<$List$>$*]{} will take the place of the [*$<$Element$>$*]{} nonterminal in the production derived from the composition relationship that connects the repeating element with its composite element. 
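As an illustration, the production rules derived from the JSON model in Figure \[fig:language-json\] for the three constructs above might be sketched as follows (the nonterminal names are ours, not ModelCC's actual output, and the delimiter and separator constraints discussed in Section \[sec:modelconstraints\] are ignored):

```
<JSONPair>     ::= <JSONString> <JSONValue>       ; concatenation: composite element

<JSONValue>    ::= <JSONObject>                   ; selection: one production
<JSONValue>    ::= <JSONArray>                    ; per subelement
<JSONValue>    ::= <JSONString>
<JSONValue>    ::= <JSONNumber>
<JSONValue>    ::= <JSONBoolean>
<JSONValue>    ::= <JSONNull>

<JSONObject>   ::= <JSONPairList>                 ; repetition: <JSONPairList>
<JSONPairList> ::= <JSONPair> <JSONPairList>      ; takes the place of <JSONPair>
<JSONPairList> ::= <JSONPair>
```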
Element multiplicities and list delimiters in the CSM will be determined from the constraints we will now see in Section \[sec:modelconstraints\]. ModelCC Model Constraints {#sec:modelconstraints} ========================= Once we have examined the mechanisms that let us create abstract syntax models in ModelCC, we now proceed to describe how constraints can be imposed on such models in order to establish the desired ASM-CSM mapping. Table \[fig:tablesummary\] summarizes the set of constraints supported by ModelCC for establishing the ASM-CSM mappings between abstract syntax models and their concrete representation in textual CSMs: - A first set of constraints is used for pattern specification, a necessary feature for defining the lexical elements of the concrete syntax model, i.e. its tokens. Pattern matching lets us extract fragments from the textual input (e.g. using regular expressions) and fill in values for the basic language elements that are the building blocks of more complex abstract syntax models. - A second set of constraints is employed for defining delimiters and separators in the concrete syntax model. They help us eliminate language ambiguities, when we want to obtain deterministic context-free languages, or can be used just as syntactic sugar to improve the readability and writability of many languages. They are also common in repeating elements, which can be annotated with separators (in case separators are employed, the recursive production derived from repeating elements will be of the form [*$<$List$>$ ::= $<$Element$>$ $<$Separator$>$ $<$List$>$*]{}). - A third set of ModelCC constraints lets us impose cardinality constraints on language elements, which can be used to control the multiplicity of repeating language elements, as well as the optionality or mandatoriness of any element in the language model. - A fourth set of constraints lets us impose an evaluation order on language elements.
These constraints are employed to declaratively resolve ambiguities in the concrete syntax of a textual language by establishing associativity, precedence, and composition policies. Associativity and precedence constraints are common in the evaluation of arithmetic expressions, as we will see in the next section (see, e.g., Figure \[fig:calcmodelcc\]). Composition constraints help us resolve the ambiguities that cause the typical shift-reduce conflicts in LR parsers, as shown in Figure \[fig:language-shift-reduce\]. - A fifth set of constraints lets us specify the relative ordering of the constituents in composite language elements, or even allow for the free ordering of elements (a feature uncommon in programming languages, yet useful in other settings beyond deterministic context-free languages). - A sixth set of constraints lets us specify referenceable language elements and references to them, enabling the reference resolution mechanism included in ModelCC. When references are resolved, the ModelCC parser returns an abstract syntax graph instead of the abstract syntax tree resulting from context-free grammar parsing. - Finally, custom constraints let us provide specific lexical, syntactic, and semantic constraints that take into consideration additional context information. Certainly not needed for deterministic context-free grammars, they provide a general customization mechanism for ModelCC extensions. ![image](image/tutorial/constraints/ModelCC-if.pdf) \[fig:test1\] ![image](image/tutorial/ModelCC-awk.pdf) \[fig:test2\] As soon as the complete ASM-CSM mapping is established, ModelCC is able to generate the suitable parser for the concrete syntax defined by the CSM. In its current version, this ASM-CSM mapping is specified with the help of metadata annotations on the class model that defines the ASM. Now supported by all the major programming platforms, metadata annotations have been used in reflective programming and code generation [@Fowler2002]. 
Among many other things, they can be employed for dynamically extending the features of your software development runtime [@Berzal2005] or even for building complete model-driven software development tools that benefit from the infrastructure provided by your standard compiler and its associated tools [@mdsd-ideal]. A Simple Example {#sec:example1} ================ An interpreter for arithmetic expressions in infix notation can be used to illustrate the differences between ModelCC and more conventional tools. A full implementation of an extended example using ModelCC and two well-known parser generators (lex & yacc on one side, ANTLR on the other) is available at <http://www.modelcc.org/examples>. Although the arithmetic expression example is necessarily simplistic, it already provides some hints on the potential benefits that model-driven language specification can bring to more challenging endeavors. This simple language is also used in the next section as the basis for a more complex language, which illustrates ModelCC's reference resolution mechanism. Using conventional tools, the language designer would start by specifying the grammar defining the arithmetic expression language in a BNF-like notation. When using lex & yacc, the language designer converts the BNF grammar into a grammar suitable for LR parsing. Likewise, when using ANTLR, the language designer converts the BNF grammar into a grammar suitable for LL parsing. LL(\*) parsers do not support left-recursion, so left-recursive grammar productions must be refactored. Since ANTLR provides no mechanism for the declarative specification of token precedences, such precedences must be incorporated into the grammar. Unfortunately, these grammar refactorings typically involve the introduction of a certain degree of duplication in the language specification, such as separate token types in the lexer and multiple parallel production rules in the parser.
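The grammar refactoring imposed by LL parsers can be illustrated with a textbook transformation (a generic example, not taken from the downloadable code): a left-recursive production for additive expressions and its right-recursive equivalent.

```
; left-recursive: suitable for LR parsing, rejected by LL(*) parsers
<Expr> ::= <Expr> "+" <Term>
<Expr> ::= <Term>

; refactored for LL parsing: left-recursion eliminated
<Expr>     ::= <Term> <ExprTail>
<ExprTail> ::= "+" <Term> <ExprTail>
<ExprTail> ::= ε
```

Analogous rewrites are needed to encode operator precedence levels into the grammar, which is precisely the kind of duplication the model-driven approach avoids.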
Once the grammar is adjusted to satisfy the constraints imposed by the parser generators, the language designer can define the semantic actions needed to implement our arithmetic expression interpreter. Using lex & yacc, the implementation of an arithmetic expression interpreter is relatively straightforward, albeit somewhat verbose due to the C programming language syntax. The streamlined syntax of the scannerless ANTLR parser generator makes this implementation significantly more concise than the equivalent lex & yacc implementation. When following a model-based language specification approach, the language designer starts by elaborating an abstract syntax model, which will later be mapped to a concrete syntax model by imposing constraints on the abstract syntax model. Annotated models can be represented graphically, as the UML class diagram in Figure \[fig:calcmodelcc\], or implemented using conventional programming languages, as the complete Java implementation included in Figure \[fig:calcimmodelcc\]. The declarative specification of associativity and precedence constraints for the different operators spares us from the grammar refactorings needed by conventional tools. The implementation of the arithmetic expression interpreter is also more elegant in ModelCC: the polymorphic [eval()]{} method takes care of the evaluation of arithmetic expressions.
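Before examining the annotated model itself, the evaluation logic can be previewed with a minimal, self-contained sketch. It is independent of ModelCC: it carries no annotations, and the object graph that a generated parser would instantiate automatically is built by hand here, so the class names merely echo those of the model.

```java
// Minimal, ModelCC-free sketch of the polymorphic eval() design.
abstract class Expression {
    abstract double eval();
}

class Literal extends Expression {
    final double value;
    Literal(double value) { this.value = value; }
    @Override double eval() { return value; }
}

class BinaryExpression extends Expression {
    final Expression e1, e2;
    final char op;
    BinaryExpression(Expression e1, char op, Expression e2) {
        this.e1 = e1; this.op = op; this.e2 = e2;
    }
    @Override double eval() {
        switch (op) {
            case '+': return e1.eval() + e2.eval();
            case '-': return e1.eval() - e2.eval();
            case '*': return e1.eval() * e2.eval();
            default:  return e1.eval() / e2.eval(); // '/'
        }
    }
}

public class EvalDemo {
    // Hand-built object graph for "10/(2+3)*0.5+1", i.e. ((10/(2+3))*0.5)+1.
    static double demo() {
        Expression expr = new BinaryExpression(
            new BinaryExpression(
                new BinaryExpression(new Literal(10), '/',
                    new BinaryExpression(new Literal(2), '+', new Literal(3))),
                '*', new Literal(0.5)),
            '+', new Literal(1));
        return expr.eval();
    }
    public static void main(String[] args) {
        System.out.println(demo()); // prints 2.0
    }
}
```

Running [EvalDemo]{} prints 2.0, the evaluation result mentioned in the text; the generated parser's job is precisely to build such an object graph from the input string.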
![image](image/tutorial/ModelCC-arithmetic.pdf)

    // Expressions
    public abstract class Expression implements IModel {
        public abstract double eval();
    }

    @Prefix("\\(") @Suffix("\\)")
    public class ExpressionGroup extends Expression {
        Expression e;
        @Override public double eval() { return e.eval(); }
    }

    public class Literal extends Expression {
        @Value double value;
        @Override public double eval() { return value; }
    }

    public class BinaryExpression extends Expression {
        Expression e1;
        Operator op;
        Expression e2;
        @Override public double eval() { return op.eval(e1,e2); }
    }

    // Operators
    @Associativity(AssociativityType.LEFT_TO_RIGHT)
    public abstract class Operator implements IModel {
        public abstract double eval(Expression e1,Expression e2);
    }

    @Pattern(regExp="\\+") @Priority(value=2)
    public class AdditionOperator extends Operator {
        @Override public double eval(Expression e1,Expression e2) { return e1.eval()+e2.eval(); }
    }

    @Pattern(regExp="-") @Priority(value=2)
    public class SubtractionOperator extends Operator {
        @Override public double eval(Expression e1,Expression e2) { return e1.eval()-e2.eval(); }
    }

    @Pattern(regExp="\\*") @Priority(value=1)
    public class MultiplicationOperator extends Operator {
        @Override public double eval(Expression e1,Expression e2) { return e1.eval()*e2.eval(); }
    }

    @Pattern(regExp="\\/") @Priority(value=1)
    public class DivisionOperator extends Operator {
        @Override public double eval(Expression e1,Expression e2) { return e1.eval()/e2.eval(); }
    }

Figure \[fig:run\] shows the actual code needed to generate and invoke the parser in ModelCC. ModelCC generates a parser from the arithmetic expression language model. This parser receives input strings such as “10/(2+3)\*0.5+1” and instantiates *Expression* objects from them. The [eval()]{} method then yields the final result of the evaluation (2 in this case).

    // Read the model.
    Model model = JavaModelReader.read(Expression.class);
    // Generate the parser.
    Parser<Expression> parser = ParserFactory.create(model);
    // Parse the input string and instantiate the corresponding expression.
    Expression expr = parser.parse("10/(2+3)*0.5+1");
    // Evaluate the expression.
    double value = expr.eval();

In its current version, ModelCC generates Lamb lexers [@Quesada2011a] and Fence parsers [@Quesada2012f], although traditional LL and LR parsers might also be generated whenever the ASM-CSM mapping constraints make LL and LR parsing feasible. ModelCC also provides a testing framework that integrates well with existing IDEs and JUnit. Since separate language elements are models themselves, it is possible to implement unit tests that focus on specific language elements and integration tests for the successive refinements of a language model, thereby enabling and supporting the incremental design of languages. Since the abstract syntax model in ModelCC is not constrained by the vagaries of particular parsing algorithms, the language design process can be focused on its conceptual design, without the artificial introduction of design artifacts just to satisfy the demands of particular tools: - Conventional tools such as lex & yacc force the creation of artificial token types in order to avoid lexical ambiguities, which leads to duplicate grammar production rules and duplicate semantic actions in the language specification. As in any other software development project, duplication hinders the evolution of languages and affects the maintainability of language processors. In ModelCC, no duplication has to be introduced in the language model to deal with lexical ambiguities: token type definitions do not have to be adjusted, duplicate syntactic production rules will not appear in the language model, and, as a consequence, semantic predicates do not have to be duplicated either.
- Established parser generators require modifications to the language grammar in order to comply with parsing constraints, be it the elimination of left-recursion for LL parsers or the introduction of new nonterminals so that the desired precedence relationships are established. In the model-driven language specification approach, the left-recursion problem disappears since it is something the underlying tool can easily deal with in a fully-automated way when an abstract syntax model is converted into a concrete syntax model. Moreover, the declarative specification of constraints is orthogonal to the abstract syntax model that defines the language. Those constraints fully determine the ASM-CSM mapping and, since ModelCC takes charge of everything in the conversion process, the language designer does not have to modify the abstract syntax model just because a given parser generator might prefer its input in a particular format. This is the main benefit that results from raising the abstraction level in model-based language specification. - When changes in the language specification are necessary, as is often the case when a software system is successful, the traditional language designer will have to propagate changes throughout the entire language processing tool chain, often introducing significant changes and making profound restructurings in the production code base. These changes can be time-consuming, quite tedious, and extremely error-prone. In contrast, modifications are easier when a model-driven language specification approach is followed. Any modifications in the language will affect either the abstract syntax model, when a language is extended with new capabilities, or the constraints that define the ASM-CSM mapping, whenever syntactic details change or new CSMs are devised for the same ASM.
In either case, the more time-consuming, tedious, and error-prone modifications are automated and the language designer can focus his efforts on the essence of the required changes rather than on their accidents. - Traditional parser generators typically mix semantic actions with the syntactic details of the language specification. This approach, which might be justified when performance is the top concern, might lead to poorly-designed hard-to-test systems. Moreover, when different applications or tools employ the same language, any changes to the syntax of that language must be carefully replicated in all the applications and tools that use the language. The maintenance of several versions of the same language specification in parallel might also lead to severe maintenance problems. In contrast, the separation of concerns provided by ModelCC, which separates ASM and ASM-CSM mappings, promotes a more elegant design for language processing systems. By decoupling language specification from language processing and providing an explicit conceptual model for the language, different applications and tools can now use the same language without duplicate language specifications. A similar result could be hand-crafted using traditional parser generators (i.e. making their implicit conceptual model explicit and working on that explicit model), but ModelCC automates this part of the process. In summary, while traditional language processing tools provide different mechanisms for resolving ambiguities and implementing language constraints, the solutions they provide typically interfere with the conceptual modeling of languages: relatively minor syntactic details might significantly affect the structure of the whole language specification. 
Model-driven language specification, as exemplified by ModelCC, provides a cleaner separation of concerns: the abstract syntax model is kept separate from its incarnation in concrete syntax models, thereby separating the specification of abstractions in the ASM from the particularities of their textual representation in CSMs. Additional Examples {#sec:example2} =================== ModelCC is able to automatically generate a grammar from the ASM defined by the class model and the ASM-CSM mapping, which is specified as a set of metadata annotations on the class model. These annotations also provide a mechanism for reference resolution that allows the automatic instantiation of complete object graphs.

    // Read the model.
    Model model = JavaModelReader.read(Expression.class);
    // Create the parser.
    Parser<Expression> parser = ParserFactory.create(model);
    // Define a constant.
    parser.add(new Constant("pi", 3.1415927));
    // Use the predefined constant in JUnit tests for arithmetic expressions.
    assertEquals(3.1415927, parser.parse("pi").eval(), EPSILON);
    assertEquals(2*3.1415927, parser.parse("2*pi").eval(), EPSILON);

![image](image/tutorial/constraints/ModelCC-ref.pdf) The reference resolution mechanism in ModelCC is illustrated by the code snippet in Figure \[fig:constant\] and the model shown in Figure \[fig:language-reference\]. In this example, which extends our arithmetic expression language, a constant is defined before the parser is invoked. Then we can parse an expression that includes references to the predefined constant, whose definition does not have to be included in the textual input of the parser, thus providing a crude but elegant form of separate compilation. Following the same approach, we could easily design a full-fledged imperative programming language. Language composition would enable us to extend our arithmetic expression language easily, just by including statements, new expression types, and additional operators in our language model.
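To see why the left-recursion issue mentioned earlier disappears at the model level, note that a left-recursive grammar rule such as expr → expr '+' expr corresponds to nothing more than a recursive composition of classes in the ASM. The following is a minimal hand-written sketch of such a class model; the names are illustrative, not ModelCC's actual API, and a real ModelCC model would additionally carry the metadata annotations that define the ASM-CSM mapping:

```java
// Sketch of an abstract syntax model for additive expressions.
// The left-recursive rule (expr -> expr '+' expr) becomes a class whose
// fields recursively reference the abstract base class: no grammar
// transformation is required at the model level.
abstract class Expr {
    abstract double eval();
}

class Literal extends Expr {
    private final double value;
    Literal(double value) { this.value = value; }
    @Override double eval() { return value; }
}

class Addition extends Expr {
    private final Expr left, right;  // recursive references to Expr
    Addition(Expr left, Expr right) { this.left = left; this.right = right; }
    @Override double eval() { return left.eval() + right.eval(); }
}

public class AsmSketch {
    public static void main(String[] args) {
        // The object graph a parser would instantiate for "1 + 2 + 3".
        Expr e = new Addition(new Addition(new Literal(1), new Literal(2)),
                              new Literal(3));
        System.out.println(e.eval()); // prints 6.0
    }
}
```

Whether the generated parser eliminates left-recursion internally is then an implementation detail of the tool, invisible in the class model itself.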
Figures \[fig:language-lisp\] and \[fig:language-prolog\] include two more examples: the traditional syntax of LISP S-expressions and a PROLOG-like logic programming language. A fully-functional version of ModelCC for Java, additional examples of its use, and a detailed user manual describing all the annotations that can be applied to class models in ModelCC can be found at the ModelCC web site: <http://www.modelcc.org>. ![image](image/tutorial/ModelCC-lisp.pdf) ![image](image/tutorial/ModelCC-prolog.pdf) Conclusions and Future Work {#sec:conclusionsfuturework} =========================== In this paper, we have introduced ModelCC, a model-based tool for language specification. ModelCC lets language designers create explicit models of the concepts a language represents, i.e. the abstract syntax model of the language (ASM). Then, that abstract syntax can be represented in textual or graphical form, using the concrete syntax defined by a concrete syntax model (CSM). ModelCC automates the ASM-CSM mapping by means of metadata annotations on the ASM, which let ModelCC act as a model-based parser generator. ModelCC is not bound to particular scanning and parsing techniques, so language designers do not have to tweak their models to comply with the constraints imposed by particular parsing algorithms. ModelCC abstracts away many details traditional language processing tools have to deal with. It cleanly separates language specification from language processing. Given the proper ASM-CSM mapping definition, ModelCC-generated parsers are able to automatically instantiate the ASM given an input string in the concrete syntax. Apart from being able to deal with ambiguous languages, ModelCC also allows the declarative resolution of language ambiguities by means of constraints defined over the ASM. The current version of ModelCC also supports lexical ambiguities and custom pattern matching classes.
ModelCC incorporates a reference resolution mechanism within its parsing process. Instead of returning abstract syntax trees, ModelCC is able to obtain abstract syntax graphs from textual inputs. Such abstract syntax graphs are not restricted to directed acyclic graphs, since ModelCC supports the resolution of anaphoric, cataphoric, and recursive references. The proposed model-driven language specification approach promotes the domain-driven design of language processing systems. Its model-driven philosophy supports language evolution by improving the maintainability of such systems. It also facilitates the reuse of language models across product lines and different applications, eliminating the duplication required by conventional tools and improving the modularity of the resulting systems. In the future, we intend to study the possibilities ModelCC opens up in different application domains, including traditional language processing systems (compilers and interpreters), domain-specific languages and language workbenches, model-driven software development tools, natural language processing, text mining, data integration, and information extraction. Acknowledgements {#acknowledgements .unnumbered} ================ Work partially supported by research project TIN2012-36951, “NOESIS: Network-Oriented Exploration, Simulation, and Induction System”, funded by the Spanish Ministry of Economy and the European Regional Development Fund (FEDER).
[**Variational Calculus of Supervariables** ]{} [**and Related Algebraic Structures**]{}[^1] [Xiaoping Xu]{} [Department of Mathematics, The Hong Kong University of Science & Technology]{} [Clear Water Bay, Kowloon, Hong Kong]{}[^2] [**Abstract**]{} [We establish a formal variational calculus of supervariables, which is a combination of the bosonic theory of Gel’fand-Dikii and the fermionic theory in our earlier work. Certain interesting new algebraic structures are found in connection with Hamiltonian superoperators in terms of our theory. In particular, we find connections between Hamiltonian superoperators and Novikov-Poisson algebras that we introduced in our earlier work in order to establish a tensor theory of Novikov algebras. Furthermore, we prove that an odd linear Hamiltonian superoperator in our variational calculus induces a Lie superalgebra, which is a natural generalization of the Super-Virasoro algebra under certain conditions.]{} Introduction ============ Formal variational calculus was introduced by Gel’fand and Dikii \[GDi1-2\] in studying Hamiltonian systems related to certain nonlinear partial differential equations, such as the KdV equations. Invoking variational derivatives, they found certain interesting Poisson structures. Moreover, Gel’fand and Dorfman \[GDo\] found more connections between Hamiltonian operators and algebraic structures. Balinskii and Novikov \[BN\] studied similar Poisson structures from another point of view. The nature of Gel’fand and Dikii’s formal variational calculus is bosonic. In \[X3\], we presented a general framework of Hamiltonian superoperators and a purely fermionic formal variational calculus. Our work \[X3\] was based on pure algebraic analogy. In this paper, we shall present a formal variational calculus of supervariables, which is a combination of the bosonic theory of Gel’fand-Dikii and the fermionic theory in \[X3\].
Our new theory was motivated by the known super-symmetric theory in mathematical physics (cf. \[De\], \[M\]). We find the conditions for a “matrix differential operator” to be a Hamiltonian superoperator. In particular, we classify two classes of Hamiltonian superoperators by introducing two kinds of new algebraic structures. Moreover, we prove that an odd linear Hamiltonian superoperator in our variational calculus induces a Lie superalgebra, which is a natural generalization of the Super-Virasoro algebra under certain conditions. We believe that the results in this paper would be useful in the study of nonlinear super differential equations. They could also play important roles in the application theory of algebras. The discovery of our new algebraic structures suggests new objects for algebraic research. In fact, a new family of infinite-dimensional simple Lie superalgebras was discovered in \[X5\] based on the results in this paper. Recently, we noticed that Daletsky \[Da1\] introduced a definition of a Hamiltonian superoperator associated with an abstract complex of a Lie superalgebra. He also established in \[Da1-2\] a formal variational calculus over a commutative superalgebra generated by a set of so-called “graded symbols” with coefficients valued in a Grassmann algebra. We believe that one of the subtleties of introducing Hamiltonian superoperators is the construction of suitable natural complexes of a Lie superalgebra. In our work \[X3\], we gave a concrete construction of the complex of a colored Lie superalgebra with respect to a graded module and explained the meaning of a Hamiltonian superoperator in detail. It seems to us that the formal variational calculus introduced in \[Da1-2\] lacks links with the known super-symmetric theory (cf. \[De\], \[M\]). For instance, its connection with the known super differential equations, such as the super-symmetric KdV equations, is not clear (cf. \[M\]).
Our formal variational calculus in \[X3\] was based on free fermionic fields. The combination of Gel’fand-Dikii’s theory \[GDi1\] and ours \[X3\] that we shall present in this paper is well motivated by the theory of super-symmetric KdV equations (cf. \[M\]) and the super-symmetric theory in \[De\]. Our main purpose in this paper is to show certain new algebraic structures arising naturally from our theory of Hamiltonian superoperators in a supervariable. Below, we shall give a more detailed introduction. Throughout this paper, we let $\Bbb{R}$ be the field of real numbers, and all the vector spaces are assumed to be over $\Bbb{R}$. Denote by $\Bbb{Z}$ the ring of integers and by $\Bbb{N}$ the set of natural numbers $\{0,1,2,...\}$. First let us briefly introduce the general framework of Hamiltonian superoperators. We shall slightly modify the differential $d$ defined in (2.7) of \[X3\]. A [*Lie superalgebra*]{} $L$ is a $\Bbb{Z}_2$-graded algebra $L=L_0\oplus L_1$ with the operation $[\cdot,\cdot]$ satisfying $$[x,y]=-(-1)^{xy}[y,x],\qquad [x,[y,z]]+(-1)^{x(y+z)}[y,[z,x]]+(-1)^{z(x+y)}[z,[x,y]]=0\eqno(1.1)$$ for $x,y,z\in L$, where we have used the convention for exponents of $-1$ common in mathematical physics (cf. \[De\]); that is, when a vector $u\in L$ appears in an exponent of $-1$, we always mean $u\in L_i$ and the value of $u$ in the exponent is $i$. A [*graded module*]{} $M$ of $L$ is a ${\Bbb{Z}_2}$-graded vector space $M=M_0\oplus M_1$ with the action of $L$ on $M$ satisfying: $$L_i(M_j)\subset M_{i+j},\qquad [x,y]v=xyv-(-1)^{xy}yxv\qquad\mbox{for}\;i,j\in {\Bbb{Z}_2};\;\;x,y\in L;\;v\in M.\eqno(1.2)$$ A $q$-[*form of*]{} $L$ [*with values in*]{} $M$ is a multi-linear map $\omega:\;L^q=L\times \cdots \times L\rightarrow M$ for which $$\omega (x_1,x_2, \cdots,x_q)=-(-1)^{x_ix_{i+1}}\omega(x_1,\cdots,x_{i-1},x_{i+1},x_i,x_{i+2},\cdots, x_q)\eqno(1.3)$$ for $x_1,...,x_q\in L$. We denote by $c^q(L,M)$ the set of $q$-forms.
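To make the sign convention concrete, the following restates the skew-symmetry in (1.1) on homogeneous elements (a direct unpacking of the definition, not a new result):

```latex
% For homogeneous x \in L_i and y \in L_j, the exponent xy stands for ij:
[x,y] = -(-1)^{ij}\,[y,x], \qquad x\in L_i,\; y\in L_j.
% Hence the bracket is antisymmetric whenever x or y is even,
[x,y] = -[y,x] \qquad (x\in L_0 \;\mbox{or}\; y\in L_0),
% while on two odd elements it is symmetric:
[x,y] = [y,x] \qquad (x,y\in L_1).
```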
We define the grading over $c^q(L,M)$ by $$c^q(L,M)_i=\{\omega \in c^q(L,M)\mid \omega(x_1,...,x_q)\in M_{j_1+\cdots+j_q+i}\;\mbox{for}\;x_l\in L_{j_l}\},\qquad i\in \Bbb{Z}_2.\eqno(1.4)$$ Then we have $c^q(L,M)=c^q(L,M)_0+c^q(L,M)_1$. Moreover, we define a differential $d:\;c^q(L,M)\rightarrow c^{q+1}(L,M)$ by $$\begin{aligned} & & d\omega(x_1,x_2,...,x_{q+1})\\&=&\sum_{i=1}^{q+1}(-1)^{i+1+(\omega+x_1+\cdots+x_{i-1})x_i}x_i(\omega(x_1,...,\check{x}_i,...,x_{q+1}))+\sum_{i<j}(-1)^{i+j+(x_1+\cdots +x_{i-1})x_i}\\& &(-1)^{(x_1+\cdots +\check{x}_i+\cdots+ x_{j-1})x_j}\omega([x_i,x_j],x_1,...,\check{x}_i,...,\check{x}_j,...,x_{q+1})\hspace{4cm}(1.5)\end{aligned}$$ for $\omega\in c^q(L,M),x_l\in L.$ A $q$-form $\omega$ is called [*closed*]{} if $d\omega=0$. It is easily seen that $d^2=0$ by the proof of Proposition 2.1 in \[X3\]. Let $\omega\in c^2(L,M)_j$. We define: $${\cal H}_i=\{(x,m)\in L_i\times M_{i+j}\mid \omega (y,x)=(-1)^{jy}ym\;\mbox{for}\;y\in L\},\;\; {\cal H}={\cal H}_0+{\cal H}_1.\eqno(1.6)$$ By (2.10) in \[X3\], $([x,y],\omega(x,y))\in {\cal H}_{j+l}$ for $x\in L_j,y\in L_l$ if $\omega$ is closed. In this case, we have the following super Poisson bracket $$\{m_1,m_2\}=\omega(x_1,x_2)\qquad \mbox{for}\;\;(x_1,m_1),(x_2,m_2)\in {\cal H}\eqno(1.7)$$ over the subspace ${\cal N}$ of $M$ defined by $${\cal N}={\cal N}_0+{\cal N}_1,\qquad {\cal N}_i=\{u\in M_{j+i}\mid (L_i,u)\bigcap {\cal H}\neq\emptyset\}.\eqno(1.8)$$ Let $\Omega$ be a graded subspace of $c^1(L,M)$ such that $dM\subset \Omega$. A graded linear map $H$ is called [*super skew-symmetric*]{} if $$\xi_1(H\xi_2)=-(-1)^{(H\xi_1)(H\xi_2)}\xi_2(H\xi_1)\qquad \mbox{for}\;\;\xi_1,\xi_2\in \Omega.\eqno(1.9)$$ With a super skew-symmetric graded linear map $H:\: \Omega\rightarrow L$, we connect a 2-form $\omega_H$ defined on $\mbox{Im}\: H$ by $$\omega_H(H\xi_1,H\xi_2)=\xi_2(H\xi_1)\qquad\mbox{for}\;\;\xi_1,\xi_2\in \Omega.
\eqno(1.10)$$ We say that $H$ is a [*Hamiltonian superoperator*]{} if (a) the subspace $\mbox{\it Im}\:H$ of $L$ is a subalgebra; (b) the form $\omega_H$ is closed on $H(\Omega)$. In \[GDo\] and \[BN\], a new algebra, which was called a “Novikov algebra” in \[O1\], was introduced. A [*Novikov algebra*]{} ${\cal A}$ is a vector space with an operation “$\circ$” satisfying: $$(x\circ y)\circ z=(x\circ z)\circ y,\qquad (x\circ y)\circ z-x\circ (y\circ z)=(y\circ x)\circ z-y\circ (x\circ z)\eqno(1.11)$$ for $x,y,z\in {\cal A}$. The beauty of a Novikov algebra is that the left multiplication operators form a Lie algebra and the right multiplication operators are commutative (cf. \[Z\], \[O1\]). Zel’manov \[Z\] proved that any finite-dimensional simple Novikov algebra over an algebraically closed field with characteristic $0$ is one-dimensional. Osborn \[O1-5\] classified simple Novikov algebras with an idempotent element and certain of their modules. In \[X4\], we gave a complete classification of finite-dimensional simple Novikov algebras and their irreducible modules over an algebraically closed field with prime characteristic. Another algebraic structure introduced in \[GDo\], which we called “Gel’fand-Dorfman operator algebra,” was proved in \[X2\] to be equivalent to an associative algebra with a derivation under the unitary condition. A Novikov algebra actually provides a Poisson structure associated with many-body systems analogous to the KdV-equation (cf. \[GDo\], \[BN\]). One might think that the algebra corresponding to the super Poisson structure associated with many-body systems analogous to the super KdV-equations should be the following natural super analogue of Novikov algebras.
A [*Novikov superalgebra*]{} is a $\Bbb{Z}_2$-graded vector space ${\cal A}={\cal A}_0\oplus {\cal A}_1$ with an operation “$\circ$” satisfying: $$(x\circ y)\circ z=(-1)^{yz}(x\circ z)\circ y,\;\; (x\circ y)\circ z-x\circ (y\circ z)=(-1)^{xy}(y\circ x)\circ z-(-1)^{xy}y\circ (x\circ z)\eqno(1.12)$$ for $x,y,z\in {\cal A}$. It is surprising that Novikov superalgebras are not the algebraic structures corresponding to the super Poisson structures associated with many-body systems analogous to the super KdV-equations. In fact, Novikov superalgebras do not fit in our theory of Hamiltonian superoperators in a supervariable at all. This is because the image of a Hamiltonian superoperator is required to be a graded subspace, as introduced above. As one of the main theorems (see Theorem 3.1), we prove in Section 3 that the algebraic structures corresponding to the Hamiltonian operators (or super Poisson structures) associated with many-body systems (see (3.3)) analogous to the super KdV-equations (see (2.8)) are what we call “NX-bialgebras.” An [*NX-bialgebra*]{} is a vector space $V$ with two operations “$\times,\circ$” such that $(V,\times)$ forms a commutative (may not be associative) algebra and $(V,\circ)$ forms a Novikov algebra for which $$(u\times v)\circ w=u\times (v\circ w),\eqno(1.13)$$ $$(u\times v)\times w+u\times (v\times w)=(v\circ u)\times w+u\times (v\circ w)-v\circ (u\times w),\eqno(1.14)$$ $$(u\times v)\times w-u\times (v\times w)=(u\times v)\circ w+w\circ (u\times v)-u\circ (v\times w)-(v\times w)\circ u\eqno(1.15)$$ for $u,v,w\in V$. In \[X4\], we introduced “Novikov-Poisson” algebras in order to establish a tensor theory of Novikov algebras.
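Before turning to Novikov-Poisson algebras, it may help to have a concrete instance of the Novikov identity (1.11) in hand. A standard example, of the type underlying the Poisson structures of \[GDo\] and \[BN\], is the polynomial algebra $\Bbb{R}[t]$ with $x\circ y=xy'$, where $'=d/dt$; the verification is a one-line computation:

```latex
% Right commutativity:
(x\circ y)\circ z = (xy')\circ z = xy'z' = (xz')\circ y = (x\circ z)\circ y.
% Left symmetry, using commutativity of the polynomial product:
(x\circ y)\circ z - x\circ(y\circ z) = xy'z' - x(yz')' = -xyz''
                                     = -yxz'' = (y\circ x)\circ z - y\circ(x\circ z).
```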
A [*Novikov-Poisson algebra*]{} is a vector space ${\cal A}$ with two operations “$\cdot,\circ$” such that $({\cal A},\cdot)$ forms a commutative associative algebra (may not have an identity element) and $( {\cal A},\circ)$ forms a Novikov algebra for which $$(x\cdot y)\circ z=x\cdot (y\circ z),\qquad (x\circ y)\cdot z-x\circ (y\cdot z)=(y\circ x)\cdot z-y\circ (x\cdot z)\eqno(1.16)$$ for $x,y,z\in {\cal A}$. We prove in Section 3 that certain Novikov-Poisson algebras are NX-bialgebras. This in a way shows the significance of introducing Novikov-Poisson algebras. A detailed study of Novikov-Poisson algebras was carried out in our work \[X5\]. We can view the algebraic structure (1.11) as a [*bosonic Novikov algebra*]{} because the right multiplication operators are commutative. In Section 4, we prove that the following “fermionic Novikov algebra” does correspond to a certain Hamiltonian superoperator in a supervariable. A [*fermionic Novikov algebra*]{} ${\cal A}$ is a vector space with an operation “$\circ$” satisfying: $$(x\circ y)\circ z=-(x\circ z)\circ y,\qquad (x\circ y)\circ z-x\circ (y\circ z)=(y\circ x)\circ z-y\circ (x\circ z)\eqno(1.17)$$ for $x,y,z\in {\cal A}$. In Section 5, we prove that an odd linear Hamiltonian superoperator induces a Lie superalgebra, which is a natural generalization of the Super-Virasoro algebra under certain conditions. Section 2 presents the general theory of our formal variational calculus of supervariables. Formal Calculus =============== In this section, we shall present the framework of our variational calculus of supervariables. Let $\Lambda$ be a vector space that is not necessarily finite-dimensional. Let $F(\Lambda)$ be the free associative algebra generated by ${\Lambda}$. Then the exterior algebra $R$ generated by ${\Lambda}$ is isomorphic to $$R=F({\Lambda})/(\{uv+vu\mid u,v\in{\Lambda}\}).\eqno(2.1)$$ We can identify ${\Lambda}$ with its image in $R$.
Note that $$R=\Bbb{R}\oplus {\Lambda}R=R_c\oplus R_a,\qquad\mbox{where}\;\; R_c=\sum_{n=0}^{\infty}{\Lambda}^{2n},\;\;R_a=\sum_{n=0}^{\infty}{\Lambda}^{2n+1}.\eqno(2.2)$$ According to \[De\], the elements of $R_c$ are called $c$-[*numbers*]{} (meaning commutative numbers) and the elements of $R_a$ are called $a$-[*numbers*]{} (meaning anti-commutative numbers). Any $u\in R$ can be uniquely written $u=u_b+u_s$ with $u_b\in \Bbb{R},\;u_s\in {\Lambda}R$ and $u_b$ ($u_s$) is called the [*body*]{} ([*soul*]{}, respectively) of $u$. Any analytic function $f$ from $R_c$ to $R$ is of the form $$f(x)=\sum_{n=0}^{\infty}{\phi^{(n)}(x_b)\over n!}x_s^n,\qquad\mbox{where}\;\;\phi:\Bbb{R}\rightarrow R\;\mbox{is}\;C^{\infty}.\eqno(2.3)$$ An analytic function $\Psi:\;R_c\times R_a \rightarrow R$ is of the form $$\Psi(x,\theta)=f_0(x)+f_1(x)\theta,\qquad\mbox{where}\;\;f_i:\;R_c\rightarrow R\;\mbox{are analytic}\eqno(2.4)$$ (cf. \[De\]). Note that $$\theta^2=0,\qquad\partial_{\theta}^2=0.\eqno(2.5)$$ Define $$D=\theta\partial_x+\partial_{\theta}.\eqno(2.6)$$ Then $$D^2=\partial_x\eqno(2.7)$$ (cf. \[M\]). Let $\Phi(x,\theta,t)$ be a function from $R_c\times R_a\times \Bbb{R}$ to $R$. Moreover, we assume that $\Phi(x,\theta, t)\in R_a$ for any $(x,\theta,t)\in (R_c\times R_a\times \Bbb{R})$, $\Phi$ is analytic for fixed $t$ and is $C^1$ with respect to $t$. A super KdV equation is of the form $$\Phi_t=-D^6\Phi+\mu D^2(\Phi D\Phi)+(6-2\mu)D\Phi D^2\Phi\eqno(2.8)$$ (cf. \[M\]). Mathieu \[M\] found the Hamiltonians for the above equation when $\mu=2, 3$. Let $\{\Phi_i\mid i\in I\}$ be a family of functions from $R_c\times R_a\times \Bbb{R}$ to $R$ with the same properties as the above $\Phi$.
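Before introducing the variables $\Phi_i(n)$, note that the identity (2.7) can be checked directly on a function of the form (2.4). For simplicity we take $f_1$ even, so that it commutes with $\theta$; for odd $f_1$ the computation is the same up to the usual sign conventions:

```latex
D\Psi = \theta\partial_x(f_0+f_1\theta) + \partial_\theta(f_0+f_1\theta)
      = \theta f_0' + f_1 \qquad (\mbox{using}\;\theta^2=0).
% Applying D once more:
D^2\Psi = \theta\partial_x(f_1+\theta f_0') + \partial_\theta(f_1+\theta f_0')
        = \theta f_1' + f_0' = \partial_x\Psi .
```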
Set $$\Phi_i(n+1)=D^n\Phi_i\qquad \mbox{for}\;i\in I;\;n\in \Bbb{N}.\eqno(2.9)$$ Then we have $$\Phi_i(m)\Phi_j(n)=(-1)^{mn}\Phi_j(n)\Phi_i(m)\qquad \mbox{for}\;i,j\in I;\;m,n\in \Bbb{N}^+=\Bbb{N}\setminus\{0\}.\eqno(2.10)$$ Let ${\cal A}$ be the subalgebra generated by $\{\Phi_i(n)\mid i\in I,\;n\in \Bbb{N}^+\}$ (the set of functions from $R_c\times R_a\times \Bbb{R}$ to $R$ forms an associative algebra). Note that ${\cal A}$ is a $\Bbb{Z}_2$-graded algebra ${\cal A}={\cal A}_0+{\cal A}_1$ with $${\cal A}_i=\mbox{span}\{\Phi_{i_1}(n_1)\cdots \Phi_{i_p}(n_p)\mid p\in \Bbb{N},i_j\in I,\; n_j\in \Bbb{N}^+, \sum_{j=1}^pn_j\equiv i\;(\mbox{mod}\;2)\},\eqno(2.11)$$ $$u_1u_2=(-1)^{u_1u_2}u_2u_1,\;\;D(u_1u_2)=D(u_1)u_2+(-1)^{u_1}u_1D(u_2)\qquad\mbox{for}\;\;u_1,u_2\in {\cal A}.\eqno(2.12)$$ Now we treat $\{\Phi_i(n)\}$ as formal variables. Set $$L_i=\{\sum_{j\in I}\sum_{l\in \Bbb{N}^+}u_{j,l}{\partial}_{\Phi_j(l)}\mid u_{j,l}\in {\cal A}_{i+l}\},\;\;i\in \Bbb{Z}_2,\qquad L=L_0+L_1.\eqno(2.13)$$ Note that the set of superderivations of ${\cal A}$ forms a Lie superalgebra. In particular, $L$ forms a Lie sub-superalgebra with the commutator: $$[\partial_1,\partial_2]=\sum_{j,p\in I}\sum_{l,q\in \Bbb{N}^+}(u^1_{p,q}{\partial}_{\Phi_p(q)}(u^2_{j,l})-(-1)^{{\partial}_1{\partial}_2}u^2_{p,q}{\partial}_{\Phi_p(q)}(u^1_{j,l}))\partial_{\Phi_j(l)}\eqno(2.14)$$ for $\partial_s=\sum_{j\in I}\sum_{l\in \Bbb{N}^+} u^s_{j,l}{\partial}_{\Phi_j(l)}\in L$. Note that we can write $$D=\sum_{i\in I}\sum_{n\in \Bbb{N}^+}\Phi_i(n+1){\partial}_{\Phi_i(n)}.\eqno(2.15)$$ Thus $D\in L$. By the proof of Lemma 3.2 in \[X3\], we have: [**Lemma 2.1**]{}. 
[*For*]{} $\partial=\sum_{j\in I}\sum_{l\in \Bbb{N}^+}u_{j,l}{\partial}_{\Phi_j(l)}\in (L_0\bigcup L_1)$, $[\partial, D]=0$ [*if and only if*]{} $$u_{j,n+1}=(-1)^{n{\partial}}D^n(u_{j,1}),\qquad n\in \Bbb{N}.\eqno(2.16)$$ Set $${\cal L}={\cal L}_0+{\cal L}_1\subset {\cal A}^I,\qquad {\cal L}_s=({\cal A}_{s+1})^I.\eqno(2.17)$$ For any $\bar{u}=\{u_i\mid i\in I\}\in {\cal L}_s$, we let $$\partial_{\bar{u}}=\sum_{j\in I}\sum_{n\in \Bbb{N}}(-1)^{sn}D^n(u_j){\partial}_{\Phi_j(n+1)}\in L.\eqno(2.18)$$ Then $[\partial_{\bar{u}},D]=0$. For $\bar{u}=\{u_i\},\bar{v}=\{v_i\}\in {\cal L}$, $$[\partial_{\bar{u}},\partial_{\bar{v}}]={\partial}_{\bar{w}}\eqno(2.19)$$ with $$\begin{aligned} \hspace{1cm}\bar{w}&=&\{\sum_{p\in I}\sum_{ m\in \Bbb{N}^+}((-1)^{m\bar{u}}D^m(u_p){\partial}_{\Phi_p(m+1)}(v_q)\\& &-(-1)^{\bar{u}\bar{v}+m\bar{v}}D^m(v_p){\partial}_{\Phi_p(m+1)}(u_q))\mid q\in I\}\hspace{5.2cm}(2.20)\end{aligned}$$ (cf. (3.26-27) in \[X3\]). Thus if we define $$[\bar{u},\bar{v}]=\bar{w},\eqno(2.21)$$ then $({\cal L},\Bbb{Z}_2,[\cdot,\cdot])$ forms a Lie superalgebra. Next we define variational operators on ${\cal A}$: $$\delta_i=\sum_{m=0}^{\infty}(-1)^{m(m-1)/2}D^m\circ {\partial}_{\Phi_i(m+1)},\qquad \bar{\delta}=\{\delta_i\mid i\in I\}.\eqno(2.22)$$ By the proof of Lemma 3.4 in \[X3\], we have: [**Lemma 2.2**]{}. [*For any*]{} $u\in \sum_{i\in I,n\in \Bbb{N}}{\cal A}\Phi_i(n+1)$, $$\bar{\delta}(u)=0\Longleftrightarrow u=D(v)\;\;\mbox{\it for some}\;\;v\in {\cal A}.\eqno(2.23)$$ Now we let $$\tilde{\cal A}={\cal A}/D({\cal A}).\eqno(2.24)$$ We define an action of ${\cal L}$ on $\tilde{\cal A}$ by $$\bar{u}(\tilde{w})=\partial_{\bar{u}}(w)+D({\cal A} )= \sum_{i\in I}(u_i\delta_i(w))^{\sim}\eqno(2.25)$$ (cf. (3.39) in \[X3\]). This is well defined since $[\partial_{\bar{u}},D]=0$. Thus $\tilde{\cal A}$ forms an ${\cal L}$-module.
Furthermore, we set $$\Omega=\{\bar{\xi}=\{\xi_i\}\in {\cal A}^I\mid \mbox{only a finite number of}\;\xi_i\neq 0\}.\eqno(2.26)$$ For any $\bar{\xi}\in \Omega, \;\bar{u}\in {\cal L}$, we define: $$\bar{\xi}(\bar{u})=\sum_{i\in I}(u_i\xi_i)^{\sim}.\eqno(2.27)$$ Then $\Omega\subset c^1({\cal L},\tilde{\cal A})$. Note that by (2.25), $$d(\tilde{w})=\bar{\delta}(w)\in \Omega\qquad\mbox{for}\;\;\tilde{w}\in {\cal A},\eqno(2.28)$$ where (2.23) implies that the map $\bar{\delta}:\:\tilde{\cal A}\rightarrow \Omega$ is well defined. Hence $d(\tilde{\cal A})\subset \Omega$. Note that as sets, $\Omega\subset {\cal L}$. We let $$\Omega_i=\Omega\bigcap {\cal L}_i\qquad\mbox{for}\;\;i\in \Bbb{Z}_2.\eqno(2.29)$$ Suppose that $H:\;\Omega\rightarrow {\cal L}$ is a linear map as follows: for $\bar{\xi}\in \Omega_i,\;i\in \Bbb{Z}_2$, $$(H\bar{\xi})_p=\sum_{q\in I}H^i_{p,q}\xi_q,\;\;\mbox{where}\;\;H^i_{p,q}=\sum_{l=0}^{n(i,p,q)}a_{p,q,l}^iD^l\;\;\mbox{with}\;\;a_{p,q,l}^i\in{\cal A}_{\iota+l},\;\iota\in \Bbb{Z}_2.\eqno(2.30)$$ Such an $H$ is called a [*matrix differential operator of type*]{} $\iota$. Moreover, $H(\Omega)$ is a $\Bbb{Z}_2$-graded subspace. Furthermore, the super skew-symmetry is equivalent to $$\sum_{l=0}^{n(0,p,q)}(-1)^{(2\iota+l)(l-1)/2}D^l\circ a_{p,q,l}^0=\sum_{l=0}^{n(0,q,p)}a_{q,p,l}^0D^l,\;\;\;a_{p,q,l}^0=(-1)^{\iota+1}a_{p,q,l}^1.\eqno(2.31)$$ Let $H:\;\Omega \rightarrow {\cal L}$ be a super skew-symmetric matrix differential operator. We want to find the condition for $H$ to be a Hamiltonian operator. For $\bar{\xi}\in \Omega_i$, we define a linear map $(D_H\bar{\xi}):\;{\cal L}\rightarrow {\cal L}$ by $$(D_H\bar{\xi})(\bar{\eta})=(D_H\bar{\xi})\bar{\eta},\;\;\;(D_H\bar{\xi})_{p,q}=\sum_{t\in I}\sum_{l,m\in \Bbb{N}}(-1)^{m(i+\iota)}{\partial}_{\Phi_q(m+1)}(a^i_{p,t,l})D^l(\xi_t)D^m, \eqno(2.32)$$ for $\bar{\eta}\in \Omega$. By the proof of Theorem 4.1 in \[X3\], we have: [**Theorem 2.3**]{}.
[*A matrix differential operator*]{} $H$ [*of form (2.30) is a Hamiltonian operator if and only if (2.31) and the following equation hold:*]{} $$\begin{aligned} & &(-1)^{\bar{\xi}_1}\bar{\xi}_3((D_H\bar{\xi}_1)H\bar{\xi}_2)+(-1)^{\bar{\xi}_2+(\bar{\xi}_1+\iota,\bar{\xi}_2+\bar{\xi}_3)}\bar{\xi}_1((D_H\bar{\xi}_2)H\bar{\xi}_3)\\&=&-(-1)^{\bar{\xi}_3+(\bar{\xi}_3+\iota,\bar{\xi}_1+\bar{\xi}_2)}\bar{\xi}_2((D_H\bar{\xi}_3)H\bar{\xi}_1)\hspace{7.8cm}(2.33)\end{aligned}$$ [*for*]{} $\bar{\xi}_1,\bar{\xi}_2,\bar{\xi}_3\in \Omega$. [**Remark 2.4**]{}. By (2.31) and the above theorem, the operator $$H=\sum_{m=0}^{\infty} a_mD^{4m+1},\;\;\;a_m\in \Bbb{R}\eqno(2.34)$$ is a Hamiltonian operator of type 1. Moreover, the operator $H'$ defined by $$H'(\bar{\xi})=(-1)^{\bar{\xi}}\sum_{m=0}^{\infty} b_mD^{4m}\bar{\xi},\;\;\;b_m\in \Bbb{R},\;\;\mbox{for}\;\;\bar{\xi}\in \Omega,\eqno(2.35)$$ is a Hamiltonian operator of type 0. Let $H_1$ and $H_2$ be matrix differential operators of the same type $\iota$. If $aH_1+bH_2$ is Hamiltonian for any $a,b\in \Bbb{R}$, then we call $(H_1,H_2)$ a [*Hamiltonian pair*]{}. 
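The Hamiltonian-pair condition can be unpacked by bilinearity. In terms of the Schouten-Nijenhuis super-bracket $[\cdot,\cdot]$ introduced in (2.36) below, which is visibly symmetric in its two operator arguments, we have for all $a,b\in\Bbb{R}$:

```latex
[aH_1+bH_2,\,aH_1+bH_2] = a^2[H_1,H_1] + 2ab\,[H_1,H_2] + b^2[H_2,H_2].
% Taking (a,b)=(1,0), (0,1) and (1,1) in turn shows that the vanishing of
% the left-hand side for all a,b is equivalent to
[H_1,H_1]=0,\qquad [H_2,H_2]=0,\qquad [H_1,H_2]=0,
% which is the content of Corollary 2.5.
```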
For any two matrix differential operators $H_1$ and $H_2$, we define the Schouten-Nijenhuis super-bracket $[H_1,H_2]:\: \Omega^3\rightarrow \tilde{\cal A}$ by $$\begin{aligned} & &[H_1,H_2](\bar{\xi}_1,\bar{\xi}_2,\bar{\xi}_3)\\&=& (-1)^{\bar{\xi}_1}\bar{\xi}_3((D_{H_1}\bar{\xi}_1)H_2\bar{\xi}_2)+(-1)^{\bar{\xi}_1}\bar{\xi}_3((D_{H_2}\bar{\xi}_1)H_1\bar{\xi}_2)\\& &+(-1)^{\bar{\xi}_2+(\bar{\xi}_1+\iota,\bar{\xi}_2+\bar{\xi}_3)}\bar{\xi}_1((D_{H_1}\bar{\xi}_2)H_2\bar{\xi}_3)+(-1)^{\bar{\xi}_2+(\bar{\xi}_1+\iota,\bar{\xi}_2+\bar{\xi}_3)}\bar{\xi}_1((D_{H_2}\bar{\xi}_2)H_1\bar{\xi}_3)\\& &+(-1)^{\bar{\xi}_3+(\bar{\xi}_3+\iota,\bar{\xi}_1+\bar{\xi}_2)}\bar{\xi}_2((D_{H_1}\bar{\xi}_3)H_2\bar{\xi}_1)+(-1)^{\bar{\xi}_3+(\bar{\xi}_3+\iota,\bar{\xi}_1+\bar{\xi}_2)}\bar{\xi}_2((D_{H_2}\bar{\xi}_3)H_1\bar{\xi}_1)\hspace{1cm}(2.36)\end{aligned}$$ for $\bar{\xi}_1,\bar{\xi}_2,\bar{\xi}_3\in \Omega$. Then (2.33) is equivalent to $[H,H]=0$. In general, we have: [**Corollary 2.5**]{}. [*Matrix differential operators*]{} $H_1$ [*and*]{} $H_2$ [*of the same type form a Hamiltonian pair if and only if they satisfy (2.31) and*]{} $$[H_1,H_1]=0,\;\;\;[H_2,H_2]=0,\;\;\;[H_1,H_2]=0.\eqno(2.37)$$ Hamiltonian Superoperators and NX-Bialgebras ============================================ In this section, we consider the type-1 Hamiltonian operator $H$ of the form: $$H^1_{{\alpha},{\beta}}=H^0_{{\alpha},{\beta}}=a_{{\alpha},{\beta}}D^5+\sum_{{\gamma}\in I}[b^{{\gamma}}_{{\alpha},{\beta}}\Phi_{{\gamma}}D^2+c_{{\alpha},{\beta}}^{{\gamma}}\Phi_{{\gamma}}(2)D+d_{{\alpha},{\beta}}^{{\gamma}}\Phi_{{\gamma}}(3)],\eqno(3.1)$$ where $a_{{\alpha},{\beta}},b_{{\alpha},{\beta}}^{{\gamma}},c_{{\alpha},{\beta}}^{{\gamma}},d_{{\alpha},{\beta}}^{{\gamma}}\in \Bbb{R}.$ We let $$L=\sum_{{\alpha},{\beta}\in I}\chi_{{\alpha},{\beta}}\Phi_{{\alpha}}\Phi_{{\beta}}(2),\qquad \chi_{{\alpha},{\beta}}\in \Bbb{R}.\eqno(3.2)$$ Then we have the following many-body systems analogous to the super KdV equations:
$$(\Phi_{{\alpha}})_t=\sum_{{\beta}\in I} H_{{\alpha},{\beta}}\delta_{{\beta}}(L),\qquad\qquad{\alpha}\in I.\eqno(3.3)$$ As we shall show below, it is not easy to find the condition for an operator in (3.1) to be Hamiltonian. The difficulty is that (2.33) is equivalent to a set of many equations. Therefore, highly technical reductions are needed in order to find the condition in its simplest form. Note that the super skew-symmetry of $H$ is equivalent to $$\begin{aligned} & &a_{{\alpha},{\beta}}D^5+\sum_{{\gamma}\in I}[b^{{\gamma}}_{{\alpha},{\beta}}\Phi_{{\gamma}}D^2+c_{{\alpha},{\beta}}^{{\gamma}}\Phi_{{\gamma}}(2)D+d_{{\alpha},{\beta}}^{{\gamma}}\Phi_{{\gamma}}(3)]\\&=&a_{{\beta},{\alpha}}D^5+\sum_{{\gamma}\in I}[b^{{\gamma}}_{{\beta},{\alpha}}D^2\circ\Phi_{{\gamma}}+c_{{\beta},{\alpha}}^{{\gamma}}D\circ\Phi_{{\gamma}}(2)-d_{{\beta},{\alpha}}^{{\gamma}}\Phi_{{\gamma}}(3)]\\&=& a_{{\beta},{\alpha}}D^5+\sum_{{\gamma}\in I}[b^{{\gamma}}_{{\beta},{\alpha}}\Phi_{{\gamma}}D^2+c_{{\beta},{\alpha}}^{{\gamma}}\Phi_{{\gamma}}(2)D+(b_{{\beta},{\alpha}}^{{\gamma}}+c_{{\beta},{\alpha}}^{{\gamma}}-d_{{\beta},{\alpha}}^{{\gamma}})\Phi_{{\gamma}}(3)]\hspace{2.6cm}(3.4)\end{aligned}$$ by (2.31), equivalently, $$a_{{\alpha},{\beta}}=a_{{\beta},{\alpha}},\;\;b_{{\alpha},{\beta}}^{{\gamma}}=b_{{\beta},{\alpha}}^{{\gamma}},\;\;c_{{\alpha},{\beta}}^{{\gamma}}=c_{{\beta},{\alpha}}^{{\gamma}},\;\;b_{{\alpha},{\beta}}^{{\gamma}}+c_{{\alpha},{\beta}}^{{\gamma}}=d^{{\gamma}}_{{\alpha},{\beta}}+d^{{\gamma}}_{{\beta},{\alpha}}.\eqno(3.5)$$ Moreover, we let $$V=\sum_{{\alpha}\in I}\Bbb{R}\Phi_{{\alpha}}\eqno(3.6)$$ and define the operations: $\cdot,\times,\circ:\;V\times V\rightarrow V$ and the bilinear form ${\langle}\cdot,\cdot{\rangle}$ by $$\Phi_{{\alpha}}\cdot \Phi_{{\beta}}=\sum_{{\gamma}\in I}b_{{\alpha},{\beta}}^{{\gamma}}\Phi_{{\gamma}},\;\;\Phi_{{\alpha}}\times \Phi_{{\beta}}=\sum_{{\gamma}\in I}c_{{\alpha},{\beta}}^{{\gamma}}\Phi_{{\gamma}},\;\;\Phi_{{\alpha}}\circ
\Phi_{{\beta}}=\sum_{{\gamma}\in I}d_{{\alpha},{\beta}}^{{\gamma}}\Phi_{{\gamma}},\;\;{\langle}\Phi_{{\alpha}}, \Phi_{{\beta}}{\rangle}=a_{{\alpha},{\beta}}\eqno(3.7)$$ for ${\alpha},\;{\beta}\in I.$ Then $(V,\cdot),\;(V,\times)$ are commutative algebras (may not be associative) and ${\langle}\cdot,\cdot{\rangle}$ is a symmetric bilinear form. In order to find the conditions for which (2.33) holds, we have to find the exact formula for each term in (2.33). For $\bar{\xi}_1,\bar{\xi}_2,\bar{\xi}_3\in \Omega$, we have $$\begin{aligned} & &\bar{\xi}_3((D_H\bar{\xi}_1)H\bar{\xi}_2)\\ &=& \sum_{{\alpha},{\beta},{\gamma},{\lambda},\mu\in I}[b_{{\gamma},{\alpha}}^{{\lambda}}D^2(\xi_{1{\alpha}})+(-1)^{\bar{\xi}_1+1}c_{{\gamma},{\alpha}}^{{\lambda}}D(\xi_{1{\alpha}})D+d_{{\gamma},{\alpha}}^{{\lambda}}\xi_{1{\alpha}}D^2]\\& & [a_{{\lambda},{\beta}}D^5(\xi_{2{\beta}})+b_{{\lambda},{\beta}}^{\mu}\Phi_{\mu}D^2(\xi_{2{\beta}})+c_{{\lambda},{\beta}}^{\mu}\Phi_{\mu}(2)D(\xi_{2{\beta}})+d_{{\lambda},{\beta}}^{\mu}\Phi_{\mu}(3)\xi_{2{\beta}}]\xi_{3{\gamma}}\\&=& \sum_{{\alpha},{\beta},{\gamma}\in I}\{{\langle}\Phi_{{\gamma}}\cdot \Phi_{{\alpha}}, \Phi_{{\beta}}{\rangle}D^2(\xi_{1{\alpha}})D^5(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\gamma}}\cdot \Phi_{{\alpha}})\cdot \Phi_{{\beta}}]D^2(\xi_{1{\alpha}})D^2(\xi_{2{\beta}})\\& &+[(\Phi_{{\gamma}}\cdot \Phi_{{\alpha}})\times \Phi_{{\beta}}](2)D^2(\xi_{1{\alpha}})D(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\gamma}}\cdot \Phi_{{\alpha}})\circ \Phi_{{\beta}}](3)D^2(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+(-1)^{\bar{\xi}_1+1}{\langle}\Phi_{{\gamma}}\times\Phi_{{\alpha}}, \Phi_{{\beta}}{\rangle}D(\xi_{1{\alpha}})D^6(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\gamma}}\times \Phi_{{\alpha}})\cdot \Phi_{{\beta}}](2)D(\xi_{1{\alpha}})D^2(\xi_{2{\beta}})\\& &+[(\Phi_{{\gamma}}\times \Phi_{{\alpha}})\cdot \Phi_{{\beta}}]D(\xi_{1{\alpha}})D^3(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\gamma}}\times \Phi_{{\alpha}})\times
\Phi_{{\beta}}](2)D(\xi_{1{\alpha}})D^2(\xi_{2{\beta}})\hspace{3cm}\end{aligned}$$ $$\begin{aligned} && -[(\Phi_{{\gamma}}\times \Phi_{{\alpha}})\times \Phi_{{\beta}}](3)D(\xi_{1{\alpha}})D(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\gamma}}\times \Phi_{{\alpha}})\circ \Phi_{{\beta}}](4)D(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+[(\Phi_{{\gamma}}\times \Phi_{{\alpha}})\circ \Phi_{{\beta}}](3)D(\xi_{1{\alpha}})D(\xi_{2{\beta}})+{\langle}\Phi_{{\gamma}}\circ \Phi_{{\alpha}}, \Phi_{{\beta}}{\rangle}\xi_{1{\alpha}}D^7(\xi_{2{\beta}})\\&&+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\cdot \Phi_{{\beta}}](3)\xi_{1{\alpha}}D^2(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\cdot \Phi_{{\beta}}]\xi_{1{\alpha}}D^4(\xi_{2{\beta}})\\& &+[(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\times \Phi_{{\beta}}](4)\xi_{1{\alpha}}D(\xi_{2{\beta}})+[(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\times \Phi_{{\beta}}](2)\xi_{1{\alpha}}D^3(\xi_{2{\beta}})\\& &+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\gamma}}\circ\Phi_{{\alpha}})\circ \Phi_{{\beta}}](5)\xi_{1{\alpha}}\xi_{2{\beta}}+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\circ \Phi_{{\beta}}](3)\xi_{1{\alpha}}D^2(\xi_{2{\beta}})\}\xi_{3{\gamma}},\hspace{0.4cm}(3.8)\end{aligned}$$ $$\begin{aligned} & &(-1)^{(\bar{\xi}_1+1)(\bar{\xi}_2+\bar{\xi}_3)}\bar{\xi}_1((D_H\bar{\xi}_2)H\bar{\xi}_3)\\&=&\sum_{{\alpha},{\beta},{\gamma}\in I}\{(-1)^{\bar{\xi}_2}{\langle}\Phi_{{\alpha}}\cdot \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}[D^5(\xi_{1{\alpha}})D^2(\xi_{2{\beta}})+2D^3(\xi_{1{\alpha}})D^4(\xi_{2{\beta}})+D(\xi_{1{\alpha}})D^6(\xi_{2{\beta}})\\& &+(-1)^{\bar{\xi}_1+1}D^4(\xi_{1{\alpha}})D^3(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+1}2D^2(\xi_{1{\alpha}})D^5(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+1}\xi_{1{\alpha}}D^7(\xi_{2{\beta}})]\\& &+(-1)^{\bar{\xi}_2}[[(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](3)\xi_{1{\alpha}}D^2(\xi_{2{\beta}})+[(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\cdot
\Phi_{{\gamma}}]D^2(\xi_{1{\alpha}})D^2(\xi_{2{\beta}})\\& &+[(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\cdot \Phi_{{\gamma}}]\xi_{1{\alpha}}D^4(\xi_{2{\beta}})]]+(-1)^{\bar{\xi}_2}[[(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\times \Phi_{{\gamma}}](3)\xi_{1{\alpha}}D^2(\xi_{2{\beta}})\\&& +[(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\times \Phi_{{\gamma}}](2)D(\xi_{1{\alpha}})D^2(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\times \Phi_{{\gamma}}](2)\xi_{1{\alpha}}D^3(\xi_{2{\beta}})]\\& & +(-1)^{\bar{\xi}_2+1}[(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\circ \Phi_{{\gamma}}](3)\xi_{1{\alpha}}D^2(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+\bar{\xi}_2+1}{\langle}\Phi_{{\alpha}}\times\Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}[D^6(\xi_{1{\alpha}})D(\xi_{2{\beta}})\\& &+3D^4(\xi_{1{\alpha}})D^3(\xi_{2{\beta}})+3D^2(\xi_{1{\alpha}})D^5(\xi_{2{\beta}})+\xi_{1{\alpha}}D^7(\xi_{2{\beta}})]\\ & & +(-1)^{\bar{\xi}_1+\bar{\xi}_2+1}[[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](4)\xi_{1{\alpha}}D(\xi_{2{\beta}})+[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](2)D^2(\xi_{1{\alpha}})D(\xi_{2{\beta}})\\& &+[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](2)\xi_{1{\alpha}}D^3(\xi_{2{\beta}})]+(-1)^{\bar{\xi}_1+\bar{\xi}_2}[[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](4)\xi_{1{\alpha}}D(\xi_{2{\beta}})\\& &+[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](2)D^2(\xi_{1{\alpha}})D(\xi_{2{\beta}})+[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](2)\xi_{1{\alpha}}D^3(\xi_{2{\beta}}) \\&&-[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](3)D(\xi_{1{\alpha}})D(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1}[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](3)\xi_{1{\alpha}}D^2(\xi_{2{\beta}})\\& &-[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}]D^3(\xi_{1{\alpha}})D(\xi_{2{\beta}})-[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}]D(\xi_{1{\alpha}})D^3(\xi_{2{\beta}})\\&
&+(-1)^{\bar{\xi}_1}[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}]D^2(\xi_{1{\alpha}})D^2(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1}[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}]\xi_{1{\alpha}}D^4(\xi_{2{\beta}})]\\& &+(-1)^{\bar{\xi}_1+\bar{\xi}_2+1}[[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}](4)\xi_{1{\alpha}}D(\xi_{2{\beta}})+[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}](2)D^2(\xi_{1{\alpha}})D(\xi_{2{\beta}})\\& &+[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}](2)\xi_{1{\alpha}}D^3(\xi_{2{\beta}})]+(-1)^{\bar{\xi}_1+\bar{\xi}_2}[[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}](4)\xi_{1{\alpha}}D(\xi_{2{\beta}}) \\& &-[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}](3)D(\xi_{1{\alpha}})D(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1}[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}](3)\xi_{1{\alpha}}D^2(\xi_{2{\beta}})]\\& &+(-1)^{\bar{\xi}_1+\bar{\xi}_2}[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\circ \Phi_{{\gamma}}](4)\xi_{1{\alpha}}D(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+\bar{\xi}_2+1}[[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\circ \Phi_{{\gamma}}](4)\xi_{1{\alpha}}D(\xi_{2{\beta}})\\&& -[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\circ \Phi_{{\gamma}}](3)D(\xi_{1{\alpha}})D(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1}[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\circ \Phi_{{\gamma}}](3)\xi_{1{\alpha}}D^2(\xi_{2{\beta}})]\\& &+(-1)^{\bar{\xi}_2+1}{\langle}\Phi_{{\alpha}}\circ \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}[D^7(\xi_{1{\alpha}})\xi_{2{\beta}}+3D^5(\xi_{1{\alpha}})D^2(\xi_{2{\beta}})+3D^3(\xi_{1{\alpha}})D^4(\xi_{2{\beta}})\\& &+D(\xi_{1{\alpha}})D^6(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+1}[D^6(\xi_{1{\alpha}})D(\xi_{2{\beta}})+3D^4(\xi_{1{\alpha}})D^3(\xi_{2{\beta}})\\& &+3D^2(\xi_{1{\alpha}})D^5(\xi_{2{\beta}})+\xi_{1{\alpha}}D^7(\xi_{2{\beta}})]]+(-1)^{\bar{\xi}_2}[[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](5)\xi_{1{\alpha}}\xi_{2{\beta}}\\& &+[(\Phi_{{\alpha}}\circ 
\Phi_{{\beta}})\cdot \Phi_{{\gamma}}](3)D^2(\xi_{1{\alpha}})\xi_{2{\beta}}+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](3)\xi_{1{\alpha}}D^2(\xi_{2{\beta}})]\\& &+(-1)^{\bar{\xi}_2+1}[[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](5)\xi_{1{\alpha}}\xi_{2{\beta}}+2[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](3)D^2(\xi_{1{\alpha}})\xi_{2{\beta}}\hspace{5cm}\end{aligned}$$ $$\begin{aligned} & &+2[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\cdot \Phi_{{\gamma}}](3)\xi_{1{\alpha}}D^2(\xi_{2{\beta}})+2[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\cdot \Phi_{{\gamma}}]D^2(\xi_{1{\alpha}})D^2(\xi_{2{\beta}})\\& &+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\cdot \Phi_{{\gamma}}]D^4(\xi_{1{\alpha}})\xi_{2{\beta}}+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\cdot \Phi_{{\gamma}}]\xi_{1{\alpha}}D^4(\xi_{2{\beta}})] \\& & +(-1)^{\bar{\xi}_2}[[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](5)\xi_{1{\alpha}}\xi_{2{\beta}}+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](4)D(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](4)\xi_{1{\alpha}}D(\xi_{2{\beta}})]+(-1)^{\bar{\xi}_2+1}[[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](5)\xi_{1{\alpha}}\xi_{2{\beta}}\\& &+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](3)D^2(\xi_{1{\alpha}})\xi_{2{\beta}}+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](3)\xi_{1{\alpha}}D^2(\xi_{2{\beta}})\\& &+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](4)D(\xi_{1{\alpha}})\xi_{2{\beta}}+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](4)\xi_{1{\alpha}}D(\xi_{2{\beta}})\\& &+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](2)D^3(\xi_{1{\alpha}})\xi_{2{\beta}}+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](2)D(\xi_{1{\alpha}})D^2(\xi_{2{\beta}}) \\& &+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times
\Phi_{{\gamma}}](2)D^2(\xi_{1{\alpha}})D(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](2)\xi_{1{\alpha}}D^3(\xi_{2{\beta}})]\\& &+(-1)^{\bar{\xi}_2+1}[(\Phi_{{\alpha}}\circ\Phi_{{\beta}})\circ \Phi_{{\gamma}}](5)\xi_{1{\alpha}}\xi_{2{\beta}}+(-1)^{\bar{\xi}_2}[[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\circ \Phi_{{\gamma}}](5)\xi_{1{\alpha}}\xi_{2{\beta}}\\& &+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\circ \Phi_{{\gamma}}](3)D^2(\xi_{1{\alpha}})\xi_{2{\beta}}+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\circ \Phi_{{\gamma}}](3)\xi_{1{\alpha}}D^2(\xi_{2{\beta}})]]\}\xi_{3{\gamma}},\hspace{1.7cm}(3.9)\end{aligned}$$ $$\begin{aligned} & &(-1)^{(\bar{\xi}_3+1)(\bar{\xi}_1+\bar{\xi}_2)}\bar{\xi}_2((D_H\bar{\xi}_3)H\bar{\xi}_1)\\&=& \sum_{{\alpha},{\beta},{\gamma}\in I}\{(-1)^{\bar{\xi}_3}{\langle}\Phi_{{\beta}}\cdot \Phi_{{\gamma}}, \Phi_{{\alpha}}{\rangle}[D^7(\xi_{1{\alpha}}) \xi_{2{\beta}}+D^5(\xi_{1{\alpha}})D^2(\xi_{2{\beta}})] \\& &+(-1)^{\bar{\xi}_3}[[(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}](3)D^2(\xi_{1{\alpha}})\xi_{2{\beta}}+[(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}]D^4(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+[(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}]D^2(\xi_{1{\alpha}})D^2(\xi_{2{\beta}})]+(-1)^{\bar{\xi}_3}[[(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\times \Phi_{{\alpha}}](4)D(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+[(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\times \Phi_{{\alpha}}](2)D^3(\xi_{1{\alpha}})\xi_{2{\beta}}+[(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\times \Phi_{{\alpha}}](2)D(\xi_{1{\alpha}})D^2(\xi_{2{\beta}})]\\& &+(-1)^{\bar{\xi}_3}[[(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\circ \Phi_{{\alpha}}](5)\xi_{1{\alpha}}\xi_{2{\beta}}+[(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\circ \Phi_{{\alpha}}](3)D^2(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+[(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\circ
\Phi_{{\alpha}}](3)\xi_{1{\alpha}}D^2(\xi_{2{\beta}})]+(-1)^{\bar{\xi}_3}{\langle}\Phi_{{\beta}}\times\Phi_{{\gamma}}, \Phi_{{\alpha}}{\rangle}[D^7(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+(-1)^{\bar{\xi}_1+1}D^6(\xi_{1{\alpha}})D(\xi_{2{\beta}})]+(-1)^{\bar{\xi}_3}[[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}](3)D^2(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}](2)D^3(\xi_{1{\alpha}})\xi_{2{\beta}}+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}](2)D^2(\xi_{1{\alpha}})D(\xi_{2{\beta}})] \\& &+(-1)^{\bar{\xi}_3+1}[[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}](2)D^3(\xi_{1{\alpha}})\xi_{2{\beta}}-[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}]D^4(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}]D^3(\xi_{1{\alpha}})D(\xi_{2{\beta}})] +(-1)^{\bar{\xi}_3}[[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}](3)D^2(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}](2)D^3(\xi_{1{\alpha}})\xi_{2{\beta}}+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}](2)D^2(\xi_{1{\alpha}})D(\xi_{2{\beta}})] \\&&+(-1)^{\bar{\xi}_3}[[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}](4)D(\xi_{1{\alpha}})\xi_{2{\beta}}-[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}](3)D^2(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}](3)D(\xi_{1{\alpha}})D(\xi_{2{\beta}})]+(-1)^{\bar{\xi}_3}[[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}](5)\xi_{1{\alpha}}\xi_{2{\beta}}\\& &+[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}](4)D(\xi_{1{\alpha}})\xi_{2{\beta}}+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}](4)\xi_{1{\alpha}}D(\xi_{2{\beta}})] \\& 
&+(-1)^{\bar{\xi}_3+1}[[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}](4)D(\xi_{1{\alpha}})\xi_{2{\beta}}-[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}](3)D^2(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}](3)D(\xi_{1{\alpha}})D(\xi_{2{\beta}})]+(-1)^{\bar{\xi}_3+1}[{\langle}\Phi_{{\beta}}\circ \Phi_{{\gamma}}, \Phi_{{\alpha}}{\rangle}D^7(\xi_{1{\alpha}})\xi_{2{\beta}}\\&&+[(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}](3)D^2(\xi_{1{\alpha}})\xi_{2{\beta}}+[(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}]D^4(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+[(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\times \Phi_{{\alpha}}](4)D(\xi_{1{\alpha}})\xi_{2{\beta}}+[(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\times \Phi_{{\alpha}}](2)D^3(\xi_{1{\alpha}})\xi_{2{\beta}}\hspace{2cm}\end{aligned}$$ $$\begin{aligned} & &+[(\Phi_{{\beta}}\circ\Phi_{{\gamma}})\circ \Phi_{{\alpha}}](5)\xi_{1{\alpha}}\xi_{2{\beta}}+[(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\circ \Phi_{{\alpha}}](3)D^2(\xi_{1{\alpha}})\xi_{2{\beta}}]\}\xi_{3{\gamma}}.\hspace{2.4cm}(3.10)\end{aligned}$$ Here we have viewed each term in (3.8-10) as an element in $\tilde{A}$ (cf. (2.23)). For convenience, we call $D^{m_1}(\xi_{1{\alpha}})D^{m_2}(\xi_{2{\beta}})\xi_{3{\gamma}}$ a [*monomial of index*]{} $(0,m_1,m_2)$ and call $\Phi(n_1)D^{n_2}(\xi_{1{\alpha}})D^{n_3}(\xi_{2{\beta}})\xi_{3{\gamma}}$ a [*monomial of index*]{} $(n_1,n_2,n_3)$. We suppose that $H$ is a Hamiltonian operator. Thus (2.33) holds. We substitute (3.8-10) into (2.33).
By comparing the coefficients of the monomial of index (0,7,0) in (2.33), we have: $${\langle}\Phi_{{\alpha}}\circ \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}+{\langle}\Phi_{{\beta}}\circ \Phi_{{\gamma}}, \Phi_{{\alpha}}{\rangle}={\langle}\Phi_{{\beta}}\cdot \Phi_{{\gamma}}, \Phi_{{\alpha}}{\rangle}+{\langle}\Phi_{{\beta}}\times \Phi_{{\gamma}}, \Phi_{{\alpha}}{\rangle}.\eqno(3.11)$$ Moreover, by (3.5), (3.11) is equivalent to: $${\langle}\Phi_{{\alpha}}\circ \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}={\langle}\Phi_{{\gamma}}\circ \Phi_{{\beta}}, \Phi_{{\alpha}}{\rangle}={\langle}\Phi_{{\alpha}},\Phi_{{\gamma}}\circ \Phi_{{\beta}}{\rangle}.\eqno(3.12)$$ Comparing the coefficients of the monomial of index (0,6,1) in (2.33), we obtain $${\langle}\Phi_{{\alpha}}\times \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}+{\langle}\Phi_{{\beta}}\times \Phi_{{\gamma}}, \Phi_{{\alpha}}{\rangle}={\langle}\Phi_{{\alpha}}\circ \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}.\eqno(3.13)$$ The coefficients of the monomial of index (0,5,2) in (2.33) show: $${\langle}\Phi_{{\alpha}}\cdot \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}+{\langle}\Phi_{{\beta}}\cdot \Phi_{{\gamma}}, \Phi_{{\alpha}}{\rangle}=3{\langle}\Phi_{{\alpha}}\circ \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}.\eqno(3.14)$$ Examining the coefficients of the monomial of index (0,4,3) in (2.33), we have: $${\langle}\Phi_{{\alpha}}\cdot \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}+3{\langle}\Phi_{{\alpha}}\times \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}=3{\langle}\Phi_{{\alpha}}\circ \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}.\eqno(3.15)$$ The coefficients of the monomial of index (0,3,4) in (2.33) imply $$2{\langle}\Phi_{{\alpha}}\cdot \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}=3{\langle}\Phi_{{\alpha}}\circ \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}.\eqno(3.16)$$ Considering the coefficients of the monomial of index (0,2,5) in (2.33), we find: $${\langle}\Phi_{{\gamma}}\cdot \Phi_{{\alpha}}, \Phi_{{\beta}}{\rangle}+3{\langle}\Phi_{{\alpha}}\circ
\Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}=2{\langle}\Phi_{{\alpha}}\cdot \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}+3{\langle}\Phi_{{\alpha}}\times \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}.\eqno(3.17)$$ Looking up the coefficients of the monomial of index (0,1,6) in (2.33), we have: $${\langle}\Phi_{{\gamma}}\times\Phi_{{\alpha}}, \Phi_{{\beta}}{\rangle}+{\langle}\Phi_{{\alpha}}\circ \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}={\langle}\Phi_{{\alpha}}\cdot \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}.\eqno(3.18)$$ The following equation follows from the coefficients of the monomial of index (0,0,7) in (2.33): $${\langle}\Phi_{{\gamma}}\circ\Phi_{{\alpha}}, \Phi_{{\beta}}{\rangle}+{\langle}\Phi_{{\alpha}}\circ \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}={\langle}\Phi_{{\alpha}}\cdot \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}+{\langle}\Phi_{{\alpha}}\times \Phi_{{\beta}}, \Phi_{{\gamma}}{\rangle}.\eqno(3.19)$$ The coefficients of the monomial of index (5,0,0) in (2.33) tell us that $$(\Phi_{{\gamma}}\circ\Phi_{{\alpha}})\circ\Phi_{{\beta}}+(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\circ \Phi_{{\alpha}}=(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\circ \Phi_{{\alpha}}+(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}.\eqno(3.20)$$ The coefficients of the monomial of index (4,1,0) in (2.33) give us the following equation: $$(\Phi_{{\gamma}}\times\Phi_{{\alpha}})\circ\Phi_{{\beta}}+(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\times \Phi_{{\alpha}}=(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\times \Phi_{{\alpha}}+(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}.\eqno(3.21)$$ Let us look at the coefficients of the monomial of index (4,0,1) in (2.33). 
We obtain: $$(\Phi_{{\gamma}}\circ\Phi_{{\alpha}})\times\Phi_{{\beta}}=(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}.\eqno(3.22)$$ Comparing the coefficients of the monomial of index (3,2,0) in (2.33), we get: $$\begin{aligned} & &(\Phi_{{\gamma}}\cdot\Phi_{{\alpha}})\circ\Phi_{{\beta}}+(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\cdot \Phi_{{\gamma}}+(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}-(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\circ \Phi_{{\gamma}}\\& &+(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}-(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}-(\Phi_{{\beta}}\times \Phi_{{\gamma}})\cdot \Phi_{{\alpha}} \\&=&(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\circ \Phi_{{\alpha}}+(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}-(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\circ \Phi_{{\alpha}}.\hspace{5cm}(3.23)\end{aligned}$$ The coefficients of the monomial of index (3,1,1) in (2.33) tell us that $$\begin{aligned} & &(\Phi_{{\gamma}}\times\Phi_{{\alpha}})\times\Phi_{{\beta}}-(\Phi_{{\gamma}}\times\Phi_{{\alpha}})\circ\Phi_{{\beta}}+(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}+(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}\\&=&(\Phi_{{\alpha}}\times \Phi_{{\beta}})\circ \Phi_{{\gamma}}-(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}} +(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}.\hspace{4.5cm}(3.24)\end{aligned}$$ Consulting the coefficients of the monomial of index (3,0,2) in (2.33), we find: $$\begin{aligned} & &(\Phi_{{\gamma}}\circ\Phi_{{\alpha}})\cdot\Phi_{{\beta}}+(\Phi_{{\gamma}}\circ\Phi_{{\alpha}})\circ\Phi_{{\beta}}-(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\cdot \Phi_{{\gamma}}-(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\times \Phi_{{\gamma}}\\&=& -(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\circ \Phi_{{\gamma}}+(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}+(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}-(\Phi_{{\alpha}}\times \Phi_{{\beta}})\circ 
\Phi_{{\gamma}}\\& &-(\Phi_{{\alpha}}\circ\Phi_{{\beta}})\times \Phi_{{\gamma}}-(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\cdot \Phi_{{\gamma}}+(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\circ \Phi_{{\gamma}}+(\Phi_{{\beta}}\cdot\Phi_{{\gamma}})\circ \Phi_{{\alpha}}.\hspace{1.6cm}(3.25)\end{aligned}$$ The coefficients of the monomial of index (2,3,0) in (2.33) imply: $$(\Phi_{{\alpha}}\circ\Phi_{{\beta}})\times\Phi_{{\gamma}}+(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\times \Phi_{{\alpha}}=(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\times \Phi_{{\alpha}}+(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}.\eqno(3.26)$$ Extracting the coefficients of the monomial of index (2,2,1) in (2.33), we have: $$(\Phi_{{\gamma}}\cdot\Phi_{{\alpha}})\times\Phi_{{\beta}}+(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}-(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}=(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}+(\Phi_{{\beta}}\times \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}.\eqno(3.27)$$ The coefficients of the monomial of index (2,1,2) in (2.33) imply: $$(\Phi_{{\gamma}}\times\Phi_{{\alpha}})\cdot\Phi_{{\beta}}+(\Phi_{{\gamma}}\times\Phi_{{\alpha}})\times\Phi_{{\beta}}+(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}=(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\times \Phi_{{\gamma}}+(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\times \Phi_{{\alpha}}.\eqno(3.28)$$ By comparing the coefficients of the monomial of index (2,0,3) in (2.33), we have: $$(\Phi_{{\gamma}}\circ\Phi_{{\alpha}})\times\Phi_{{\beta}}+(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}=(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\times \Phi_{{\gamma}}+(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}.\eqno(3.29)$$ Comparing the coefficients of the monomial of index (1,4,0) in (2.33), we get: $$(\Phi_{{\alpha}}\circ\Phi_{{\beta}})\cdot\Phi_{{\gamma}}+(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}=(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\cdot 
\Phi_{{\alpha}}+(\Phi_{{\beta}}\times \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}.\eqno(3.30)$$ The coefficients of the monomial of index (1,3,1) in (2.33) tell us that $$(\Phi_{{\alpha}}\times\Phi_{{\beta}})\cdot\Phi_{{\gamma}}=\Phi_{{\alpha}}\cdot (\Phi_{{\beta}}\times \Phi_{{\gamma}}).\eqno(3.31)$$ The following equation follows from the coefficients of the monomial of index (1,2,2) in (2.33): $$(\Phi_{{\gamma}}\cdot\Phi_{{\alpha}})\cdot\Phi_{{\beta}}-(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}+2(\Phi_{{\alpha}}\circ\Phi_{{\beta}})\cdot\Phi_{{\gamma}}=(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\cdot \Phi_{{\gamma}}+(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}.\eqno(3.32)$$ In terms of the coefficients of the monomial of index (1,1,3) in (2.33), $$(\Phi_{{\gamma}}\times\Phi_{{\alpha}})\cdot\Phi_{{\beta}}=\Phi_{{\gamma}}\cdot (\Phi_{{\alpha}}\times \Phi_{{\beta}}).\eqno(3.33)$$ Consulting the coefficients of the monomial of index (1,0,4) in (2.33), we find: $$(\Phi_{{\gamma}}\circ\Phi_{{\alpha}})\cdot\Phi_{{\beta}}+(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\cdot \Phi_{{\gamma}}=(\Phi_{{\alpha}}\cdot \Phi_{{\beta}})\cdot \Phi_{{\gamma}}+(\Phi_{{\alpha}}\times \Phi_{{\beta}})\cdot \Phi_{{\gamma}}.\eqno(3.34)$$ Here we have always assumed that ${\alpha},{\beta},{\gamma}$ are three arbitrary elements of the index set $I$. Next we shall do technical reductions. 
By (3.15) and (3.16), we have: $${3\over 2}{\langle}\Phi_{{\alpha}}\circ \Phi_{{\beta}},\Phi_{{\gamma}}{\rangle}+ 3{\langle}\Phi_{{\alpha}}\times \Phi_{{\beta}},\Phi_{{\gamma}}{\rangle}=3{\langle}\Phi_{{\alpha}}\circ \Phi_{{\beta}},\Phi_{{\gamma}}{\rangle}\Longrightarrow {\langle}\Phi_{{\alpha}}\circ \Phi_{{\beta}},\Phi_{{\gamma}}{\rangle}=2 {\langle}\Phi_{{\alpha}}\times \Phi_{{\beta}},\Phi_{{\gamma}}{\rangle}.\eqno(3.35)$$ Moreover, by (3.5) and (3.12), we can prove that (3.11-19) are equivalent to: $${\langle}\Phi_{{\alpha}}\times \Phi_{{\beta}},\Phi_{{\gamma}}{\rangle}={\langle}\Phi_{{\alpha}}, \Phi_{{\beta}}\times\Phi_{{\gamma}}{\rangle}={1\over 2}{\langle}\Phi_{{\alpha}}\circ \Phi_{{\beta}},\Phi_{{\gamma}}{\rangle}={1\over 3}{\langle}\Phi_{{\alpha}}\cdot \Phi_{{\beta}},\Phi_{{\gamma}}{\rangle}.\eqno(3.36)$$ By (3.5), (3.20) is equivalent to: $$(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\circ \Phi_{{\beta}}=(\Phi_{{\gamma}}\circ \Phi_{{\beta}})\circ \Phi_{{\alpha}}\eqno(3.37)$$ and (3.21) is equivalent to: $$(\Phi_{{\gamma}}\times \Phi_{{\alpha}})\circ \Phi_{{\beta}}=(\Phi_{{\gamma}}\circ \Phi_{{\beta}})\times \Phi_{{\alpha}}.\eqno(3.38)$$ Note that (3.22) and (3.38) are equivalent. 
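The reduction just performed can be cross-checked mechanically. For three distinct indices, the values of ${\langle}\cdot,\cdot{\rangle}$ on the products $\cdot$, $\times$, $\circ$ form a finite set of unknowns, and (3.5) together with (3.11)-(3.19) is a linear system in them. The following sketch is only a verification aid, not part of the proof; the three-element index set and the use of Python/numpy are our own choices. It confirms that the solution space of that system is exactly the one-parameter family (3.36):

```python
import numpy as np
from itertools import permutations

# One unknown per value of the bilinear form on three DISTINCT indices:
# ("D",x,y,z) = <Phi_x . Phi_y, Phi_z>, ("T",x,y,z) = <Phi_x x Phi_y, Phi_z>,
# ("O",x,y,z) = <Phi_x o Phi_y, Phi_z>; "." and "x" are commutative by (3.5).
names = []
for x, y in [(0, 1), (0, 2), (1, 2)]:
    z = 3 - x - y
    names += [("D", x, y, z), ("T", x, y, z)]
names += [("O",) + p for p in permutations(range(3))]
col = {nm: i for i, nm in enumerate(names)}

def D(x, y, z): a, b = sorted((x, y)); return col[("D", a, b, z)]
def T(x, y, z): a, b = sorted((x, y)); return col[("T", a, b, z)]
def O(x, y, z): return col[("O", x, y, z)]

rows = []
def eq(*terms):
    # one linear relation: sum of coef * unknown = 0
    r = np.zeros(len(names))
    for coef, c in terms:
        r[c] += coef
    rows.append(r)

# impose each printed relation for every arrangement of distinct indices
for a, b, c in permutations(range(3)):
    eq((1, D(a,b,c)), (1, T(a,b,c)), (-1, O(a,b,c)), (-1, O(b,a,c)))  # (3.5)
    eq((1, O(a,b,c)), (1, O(b,c,a)), (-1, D(b,c,a)), (-1, T(b,c,a)))  # (3.11)
    eq((1, T(a,b,c)), (1, T(b,c,a)), (-1, O(a,b,c)))                  # (3.13)
    eq((1, D(a,b,c)), (1, D(b,c,a)), (-3, O(a,b,c)))                  # (3.14)
    eq((1, D(a,b,c)), (3, T(a,b,c)), (-3, O(a,b,c)))                  # (3.15)
    eq((2, D(a,b,c)), (-3, O(a,b,c)))                                 # (3.16)
    eq((1, D(c,a,b)), (3, O(a,b,c)), (-2, D(a,b,c)), (-3, T(a,b,c)))  # (3.17)
    eq((1, T(c,a,b)), (1, O(a,b,c)), (-1, D(a,b,c)))                  # (3.18)
    eq((1, O(c,a,b)), (1, O(a,b,c)), (-1, D(a,b,c)), (-1, T(a,b,c)))  # (3.19)

M = np.array(rows)
# the ray of (3.36): <.> = 3t, <o> = 2t, <x> = t satisfies every relation,
ray = np.array([{"D": 3.0, "O": 2.0, "T": 1.0}[nm[0]] for nm in names])
assert np.allclose(M @ ray, 0)
# and nothing else does: the kernel of M is one-dimensional
assert np.linalg.matrix_rank(M) == len(names) - 1
print("(3.5) and (3.11)-(3.19) on distinct indices reduce to (3.36)")
```

The rank computation shows that, at least on distinct indices, no relation beyond (3.36) is imposed.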
Again by (3.5), (3.23) is equivalent to: $$(\Phi_{{\gamma}}\cdot\Phi_{{\alpha}})\circ\Phi_{{\beta}}+\Phi_{{\gamma}}\circ (\Phi_{{\alpha}}\circ \Phi_{{\beta}})=(\Phi_{{\gamma}}\circ \Phi_{{\beta}})\cdot \Phi_{{\alpha}}+(\Phi_{{\gamma}}\circ \Phi_{{\beta}})\circ \Phi_{{\alpha}},\eqno(3.39)$$ (3.24) is equivalent to: $$(\Phi_{{\gamma}}\times\Phi_{{\alpha}})\times\Phi_{{\beta}}+(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}=(\Phi_{{\alpha}}\times \Phi_{{\beta}})\circ \Phi_{{\gamma}} +(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}-\Phi_{{\beta}}\circ(\Phi_{{\gamma}}\times\Phi_{{\alpha}}),\eqno(3.40)$$ (3.25) is equivalent to: $$(\Phi_{{\gamma}}\circ\Phi_{{\alpha}})\cdot\Phi_{{\beta}}+(\Phi_{{\gamma}}\circ\Phi_{{\alpha}})\circ\Phi_{{\beta}}=\Phi_{{\gamma}}\circ(\Phi_{{\beta}}\circ \Phi_{{\alpha}})+(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\circ \Phi_{{\alpha}},\eqno(3.41)$$ and (3.26) and (3.29) are equivalent to (3.22). If we change the indices in (3.34) according to the cycle ${\alpha}\rightarrow{\beta}\rightarrow {\gamma}\rightarrow{\alpha}$, then we get (3.30). Similarly, (3.31) and (3.33) are equivalent. Furthermore, (3.5), (3.32) and (3.34) imply: $$(\Phi_{{\gamma}}\cdot\Phi_{{\alpha}})\cdot\Phi_{{\beta}}+(\Phi_{{\alpha}}\circ\Phi_{{\beta}})\cdot\Phi_{{\gamma}}=(\Phi_{{\beta}}\cdot \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}+(\Phi_{{\beta}}\circ \Phi_{{\alpha}})\cdot \Phi_{{\gamma}},\eqno(3.42)$$ $$(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\cdot \Phi_{{\beta}}=(\Phi_{{\beta}}\circ\Phi_{{\alpha}})\cdot \Phi_{{\gamma}}.\eqno(3.43)$$ Our strategy for further reduction is to get rid of “$\cdot$” in (3.27-28), (3.31), (3.39) and (3.42-43) by (3.5) and (3.37-38).
Note that (3.27) is equivalent to: $$\begin{aligned} & & (\Phi_{{\gamma}}\circ\Phi_{{\alpha}})\times\Phi_{{\beta}}+(\Phi_{{\alpha}}\circ\Phi_{{\gamma}})\times\Phi_{{\beta}}-(\Phi_{{\gamma}}\times\Phi_{{\alpha}})\times\Phi_{{\beta}}\\& &+(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}-(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}\\&=&(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}+\Phi_{{\alpha}}\circ (\Phi_{{\beta}}\times \Phi_{{\gamma}})+ (\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}-(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}},\hspace{1.1cm}(3.44)\end{aligned}$$ which is equivalent to (3.40) by (3.38). Again using (3.5), (3.28) is equivalent to: $$\begin{aligned} & &(\Phi_{{\gamma}}\times\Phi_{{\alpha}})\circ\Phi_{{\beta}}+\Phi_{{\beta}}\circ(\Phi_{{\gamma}}\times\Phi_{{\alpha}})+(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}-(\Phi_{{\beta}}\circ \Phi_{{\alpha}})\times \Phi_{{\gamma}}\\&=&-(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}+(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\times \Phi_{{\alpha}}+(\Phi_{{\gamma}}\circ \Phi_{{\beta}})\times \Phi_{{\alpha}}-(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}},\hspace{1.2cm}(3.45)\end{aligned}$$ which is equivalent to (3.40) by (3.38).
Furthermore, (3.31) is equivalent to: $$\begin{aligned} & & (\Phi_{{\alpha}}\times\Phi_{{\beta}})\circ\Phi_{{\gamma}}+\Phi_{{\gamma}}\circ(\Phi_{{\alpha}}\times\Phi_{{\beta}})-\Phi_{{\alpha}}\circ (\Phi_{{\beta}}\times \Phi_{{\gamma}})-(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}\\&=&(\Phi_{{\alpha}}\times\Phi_{{\beta}})\times\Phi_{{\gamma}}-\Phi_{{\alpha}}\times(\Phi_{{\beta}}\times \Phi_{{\gamma}}).\hspace{7.6cm}(3.46)\end{aligned}$$ Now (3.5) implies that (3.39) is equivalent to: $$\begin{aligned} & & (\Phi_{{\gamma}}\circ\Phi_{{\alpha}})\circ\Phi_{{\beta}}+(\Phi_{{\alpha}}\circ\Phi_{{\gamma}})\circ\Phi_{{\beta}}-(\Phi_{{\gamma}}\times\Phi_{{\alpha}})\circ\Phi_{{\beta}}+\Phi_{{\gamma}}\circ (\Phi_{{\alpha}}\circ \Phi_{{\beta}})\\&=&(\Phi_{{\gamma}}\circ \Phi_{{\beta}})\circ \Phi_{{\alpha}}+ \Phi_{{\alpha}}\circ (\Phi_{{\gamma}}\circ \Phi_{{\beta}})-(\Phi_{{\gamma}}\circ \Phi_{{\beta}})\times \Phi_{{\alpha}}+(\Phi_{{\gamma}}\circ \Phi_{{\beta}})\circ \Phi_{{\alpha}},\hspace{1.4cm}(3.47)\end{aligned}$$ which by (3.37-38) is equivalent to: $$(\Phi_{{\alpha}}\circ\Phi_{{\gamma}})\circ\Phi_{{\beta}}-\Phi_{{\alpha}}\circ (\Phi_{{\gamma}}\circ \Phi_{{\beta}})=(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\circ \Phi_{{\beta}}-\Phi_{{\gamma}}\circ (\Phi_{{\alpha}}\circ \Phi_{{\beta}}).\eqno(3.48)$$ Equations (3.37) and (3.48) show that $(V,\circ)$ forms a Novikov algebra.
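The two defining identities can also be tested on a concrete instance. The sketch below (our own verification aid, not part of the proof) checks (3.37) and (3.48) for the operation $e_i\circ e_j=(j+2)e_{i+j}$ on the truncated polynomial algebra, which appears as the concrete example at the end of this section; the truncation order $n=6$ is an arbitrary choice:

```python
import numpy as np

n = 6  # basis e_0, ..., e_{n-1} of R[t]/(t^n), with the convention e_l = 0 for l >= n

def circ(u, v):
    # bilinear extension of e_i o e_j = (j+2) e_{i+j}
    w = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i + j < n:
                w[i + j] += (j + 2) * u[i] * v[j]
    return w

E = list(np.eye(n))  # the basis vectors e_0, ..., e_{n-1}
for a in E:
    for b in E:
        for c in E:
            # (3.37): (c o a) o b = (c o b) o a
            assert np.allclose(circ(circ(c, a), b), circ(circ(c, b), a))
            # (3.48): (a o c) o b - a o (c o b) = (c o a) o b - c o (a o b)
            assert np.allclose(circ(circ(a, c), b) - circ(a, circ(c, b)),
                               circ(circ(c, a), b) - circ(c, circ(a, b)))
print("(V, o) with e_i o e_j = (j+2) e_{i+j} satisfies (3.37) and (3.48)")
```

Both identities hold identically here because the associator of $\circ$ equals $-c(c+2)e_{a+b+c}$ on basis elements, which is symmetric in the first two arguments.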
Next by (3.5), (3.42) is equivalent to: $$\begin{aligned} & &(\Phi_{{\gamma}}\circ\Phi_{{\alpha}})\cdot\Phi_{{\beta}}+(\Phi_{{\alpha}}\circ\Phi_{{\gamma}})\cdot\Phi_{{\beta}}-(\Phi_{{\gamma}}\times\Phi_{{\alpha}})\cdot\Phi_{{\beta}}+(\Phi_{{\alpha}}\circ\Phi_{{\beta}})\cdot\Phi_{{\gamma}}\\&=&(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}+(\Phi_{{\gamma}}\circ \Phi_{{\beta}})\cdot \Phi_{{\alpha}}-(\Phi_{{\beta}}\times \Phi_{{\gamma}})\cdot \Phi_{{\alpha}}+(\Phi_{{\beta}}\circ \Phi_{{\alpha}})\cdot \Phi_{{\gamma}},\hspace{2cm}(3.49)\end{aligned}$$ which holds if (3.43) and (3.31) hold. Furthermore, (3.5) implies that (3.43) is equivalent to: $$\begin{aligned} && (\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\circ \Phi_{{\beta}}+\Phi_{{\beta}}\circ(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})-(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\times \Phi_{{\beta}}\\&=&(\Phi_{{\beta}}\circ\Phi_{{\alpha}})\circ \Phi_{{\gamma}}+ \Phi_{{\gamma}}\circ(\Phi_{{\beta}}\circ\Phi_{{\alpha}})-(\Phi_{{\beta}}\circ\Phi_{{\alpha}})\times \Phi_{{\gamma}},\hspace{4.8cm}(3.50)\end{aligned}$$ which holds if (3.37-38) and (3.48) hold. We summarize what we have proved as: [**Theorem 3.1**]{}. [*A differential operator*]{} $H$ [*of the form (3.1) is a Hamiltonian operator if and only if*]{} $(V,\circ,\times)$ [*is an NX-bialgebra,*]{} $$u\cdot v=u\circ v+v\circ u-u\times v\qquad\;\;\mbox{for}\;\;u,v\in V,\eqno(3.51)$$ [*and*]{} ${\langle}\cdot,\cdot{\rangle}$ [*is a symmetric bilinear form satisfying:*]{} $${\langle}u\circ v,w{\rangle}={\langle}u,v\circ w{\rangle}=2{\langle}u\times v,w{\rangle}\qquad\;\;\mbox{for}\;\;u,v,w\in V.\eqno(3.52)$$ [**Example**]{}. Let $({\cal A},\cdot,\circ )$ be a Novikov-Poisson algebra such that $({\cal A},\cdot)$ contains an identity element $1$ and $$1\circ 1=2.\eqno(3.53)$$ We shall now show that $({\cal A},\cdot,\circ)$ is an NX-bialgebra. In fact, we have $$x\circ y=x\cdot \partial(y),\qquad\mbox{where}\;\;\partial(y)=1\circ y\eqno(3.54)$$ for $x,y\in {\cal A}$.
Note by (1.16), $$\partial(x\cdot y)=\partial(x)\cdot y+x\cdot \partial(y)-2x\cdot y\qquad\mbox{for}\;\;x,y\in {\cal A}\eqno(3.55)$$ (cf. \[X5\]). Thus we have: $$(y\circ x)\cdot z+x\cdot (y\circ z)-y\circ (x\cdot z)=2y\cdot(x\cdot z)=(x\cdot y)\cdot z+x\cdot (y\cdot z)\eqno(3.56)$$ for $x,y,z\in {\cal A}$ by the commutativity and associativity of $({\cal A},\cdot).$ Furthermore, $$\begin{aligned} & & (x\cdot y)\circ z+z\circ (x\cdot y)-x\circ (y\cdot z)-(y\cdot z)\circ x\\&=& x\cdot y\cdot \partial(z)+z\cdot \partial(x)\cdot y+z\cdot x\cdot \partial(y)-x\cdot z\cdot \partial(y)-x\cdot y\cdot \partial(z)-y\cdot z\cdot \partial(x)\\&=&0\\&=& (x\cdot y)\cdot z-x\cdot (y\cdot z)\hspace{10.1cm}(3.57)\end{aligned}$$ for $x,y,z\in {\cal A}$. Hence $({\cal A},\cdot,\circ)$ is an NX-bialgebra. Next we shall give a concrete example. Let $({\cal A},\cdot)$ be the quotient algebra $\Bbb{R}[t]/(t^n)$ of the algebra $\Bbb{R}[t]$ of polynomials for a positive integer $n$. Denote by $e_j$ the image of $t^j$ in ${\cal A}$. We define the operation $\circ$ by $$e_i\circ e_j=(j+2)e_{i+j}\qquad \mbox{for}\;\;0\leq i,j<n.\eqno(3.58)$$ Here we have used the convention that $e_l=0$ if $l\geq n$. Then $({\cal A},\cdot,\circ)$ is a Novikov-Poisson algebra satisfying (3.53) (cf. \[X5\]). Moreover, we define a bilinear form ${\langle}\cdot,\cdot{\rangle}$ on ${\cal A}$ by $${\langle}e_i,e_j{\rangle}=\delta_{i,0}\delta_{j,0}\qquad\mbox{for}\;\;0\leq i,j<n.\eqno(3.59)$$ Then ${\langle}\cdot,\cdot{\rangle}$ satisfies (3.52). Hamiltonian Superoperators and Fermionic Novikov Algebras ========================================================= Consider the following Hamiltonian operator $H$ of type 0: $$-H^1_{{\alpha},{\beta}}=H^0_{{\alpha},{\beta}}=\sum_{{\gamma}\in I}(a_{{\alpha},{\beta}}^{{\gamma}}\Phi_{{\gamma}}(2)+b_{{\alpha},{\beta}}^{{\gamma}}\Phi_{{\gamma}}D),\eqno(4.1)$$ where $a_{{\alpha},{\beta}}^{{\gamma}},b_{{\alpha},{\beta}}^{{\gamma}}\in \Bbb{R}$.
Again we let $V$ be as in (3.6) and define operations: $\times,\circ:\;V\times V\rightarrow V$ by $$\Phi_{{\alpha}}\circ \Phi_{{\beta}}=\sum_{{\gamma}\in I}a_{{\alpha},{\beta}}^{{\gamma}}\Phi_{{\gamma}},\qquad \Phi_{{\alpha}}\times \Phi_{{\beta}}=\sum_{{\gamma}\in I}b_{{\alpha},{\beta}}^{{\gamma}}\Phi_{{\gamma}}\qquad\;\mbox{for}\;\;{\alpha},{\beta}\in I.\eqno(4.2)$$ [**Theorem 4.1**]{}. [*A differential operator of the form (4.1) is a Hamiltonian operator if and only if*]{} $(V,\circ)$ [*is a fermionic Novikov algebra and*]{} $$u\times v=v\circ u-u\circ v\qquad\qquad\mbox{\it for}\;\;u,v\in V.\eqno(4.3)$$ [*Proof.*]{} By (2.31), the super skew-symmetry of $H$ is equivalent to: $$\begin{aligned} \sum_{{\gamma}\in I}(a_{{\alpha},{\beta}}^{{\gamma}}\Phi_{{\gamma}}(2)+b_{{\alpha},{\beta}}^{{\gamma}}D\circ \Phi_{{\gamma}})&=&\sum_{{\gamma}\in I}((a_{{\alpha},{\beta}}^{{\gamma}}+b_{{\alpha},{\beta}}^{{\gamma}})\Phi_{{\gamma}}(2)-b_{{\alpha},{\beta}}^{{\gamma}}\Phi_{{\gamma}}D)\\&=&\sum_{{\gamma}\in I}(a_{{\beta},{\alpha}}^{{\gamma}}\Phi_{{\gamma}}(2)+b_{{\beta},{\alpha}}^{{\gamma}}\Phi_{{\gamma}}D)\hspace{4.6cm}(4.4)\end{aligned}$$ for ${\alpha},{\beta}\in I$, which is equivalent to (4.3). Next we shall find the exact formula for each term in (2.33).
For any $\bar{\xi}_1,\bar{\xi}_2,\bar{\xi}_3\in \Omega$, we have: $$\begin{aligned} & &\bar{\xi}_3((D_H\bar{\xi}_1)H\bar{\xi}_2)\\ &=& (-1)^{\bar{\xi}_1+\bar{\xi}_2}\sum_{{\alpha},{\beta},{\gamma},{\lambda},\mu\in I}[(-1)^{\bar{\xi}_1}a_{{\gamma},{\alpha}}^{{\lambda}}\xi_{1{\alpha}}D+b_{{\gamma},{\alpha}}^{{\lambda}}D(\xi_{1{\alpha}})][a_{{\lambda},{\beta}}^{\mu}\Phi_{\mu}(2)\xi_{2{\beta}}+b_{{\lambda},{\beta}}^{\mu}\Phi_{\mu}D(\xi_{2{\beta}})]\xi_{3{\gamma}}\\&=& \sum_{{\alpha},{\beta},{\gamma}\in I}\{(-1)^{\bar{\xi}_1+\bar{\xi}_2+1}[(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\circ \Phi_{{\beta}}](3)\xi_{1{\alpha}}\xi_{2{\beta}}+(-1)^{\bar{\xi}_2}[(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\circ \Phi_{{\beta}}](2)\xi_{1{\alpha}}D(\xi_{2{\beta}})\\&&+(-1)^{\bar{\xi}_2}[(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\times \Phi_{{\beta}}](2)\xi_{1{\alpha}}D(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+\bar{\xi}_2}[(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\times \Phi_{{\beta}}]\xi_{1{\alpha}}D^2(\xi_{2{\beta}})\\& &+(-1)^{\bar{\xi}_1+\bar{\xi}_2}[(\Phi_{{\gamma}}\times \Phi_{{\alpha}})\circ \Phi_{{\beta}}](2)D(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+(-1)^{\bar{\xi}_2}[(\Phi_{{\gamma}}\times \Phi_{{\alpha}})\times \Phi_{{\beta}}]D(\xi_{1{\alpha}})D(\xi_{2{\beta}})\}\xi_{3{\gamma}},\hspace{6.4cm}(4.5)\end{aligned}$$ $$\begin{aligned} & &(-1)^{(\bar{\xi}_1+1)(\bar{\xi}_2+\bar{\xi}_3)}\bar{\xi}_1((D_H\bar{\xi}_2)H\bar{\xi}_3)\\&=& \sum_{{\alpha},{\beta},{\gamma}\in I}\{(-1)^{\bar{\xi}_2+\bar{\xi}_3+1}[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\circ \Phi_{{\gamma}}](3)\xi_{1{\alpha}}\xi_{2{\beta}}+(-1)^{\bar{\xi}_2+\bar{\xi}_3}[[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\circ \Phi_{{\gamma}}](3)\xi_{1{\alpha}}\xi_{2{\beta}}\\& &+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\circ \Phi_{{\gamma}}](2)D(\xi_{1{\alpha}})\xi_{2{\beta}}+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\circ \Phi_{{\gamma}}](2)\xi_{1{\alpha}}D(\xi_{2{\beta}})] \\&&+(-1)^{\bar{\xi}_2+\bar{\xi}_3}[[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times 
\Phi_{{\gamma}}](3)\xi_{1{\alpha}}\xi_{2{\beta}}+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](2)D(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](2)\xi_{1{\alpha}}D(\xi_{2{\beta}})]+(-1)^{\bar{\xi}_2+\bar{\xi}_3+1}[[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}](3)\xi_{1{\alpha}}\xi_{2{\beta}}\\& &+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}]D^2(\xi_{1{\alpha}})\xi_{2{\beta}}+[(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}]\xi_{1{\alpha}}D^2(\xi_{2{\beta}})] \\& &+(-1)^{\bar{\xi}_1+\bar{\xi}_3+1}[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\circ \Phi_{{\gamma}}](2)\xi_{1{\alpha}}D(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1+\bar{\xi}_2+\bar{\xi}_3+1}[[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\circ \Phi_{{\gamma}}](2)\xi_{1{\alpha}}D(\xi_{2{\beta}})\\& &-[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\circ \Phi_{{\gamma}}]D(\xi_{1{\alpha}})D(\xi_{2{\beta}})+(-1)^{\bar{\xi}_1}[(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}]\xi_{1{\alpha}}D^2(\xi_{2{\beta}})]\}\xi_{3{\gamma}},\hspace{0.8cm}(4.6)\end{aligned}$$ $$\begin{aligned} & &(-1)^{(\bar{\xi}_3+1)(\bar{\xi}_1+\bar{\xi}_2)}\bar{\xi}_2((D_H\bar{\xi}_3)H\bar{\xi}_1)\\&=& \sum_{{\alpha},{\beta},{\gamma}\in I}\{(-1)^{\bar{\xi}_1+\bar{\xi}_3+1}[(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\circ \Phi_{{\alpha}}](3)\xi_{1{\alpha}}\xi_{2{\beta}}+(-1)^{\bar{\xi}_1+\bar{\xi}_3+1}[(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\circ \Phi_{{\alpha}}](2)D(\xi_{1{\alpha}})\xi_{2{\beta}}\\&&+(-1)^{\bar{\xi}_1+\bar{\xi}_3+1}[(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\times \Phi_{{\alpha}}](2)D(\xi_{1{\alpha}})\xi_{2{\beta}}+(-1)^{\bar{\xi}_3+\bar{\xi}_1}[(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\times \Phi_{{\alpha}}]D^2(\xi_{1{\alpha}})\xi_{2{\beta}}\\& &+(-1)^{\bar{\xi}_3+\bar{\xi}_1+1}[[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}](3)\xi_{1{\alpha}}\xi_{2{\beta}}+[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ 
\Phi_{{\alpha}}](2)D(\xi_{1{\alpha}})\xi_{2{\beta}}\\&&+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}](2)\xi_{1{\alpha}}D(\xi_{2{\beta}})]+(-1)^{\bar{\xi}_1+\bar{\xi}_3+1}[[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times\Phi_{{\alpha}}](2)D(\xi_{1{\alpha}})\xi_{2{\beta}}\\&& -[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}]D^2(\xi_{1{\alpha}})\xi_{2{\beta}}+(-1)^{\bar{\xi}_1+1}[(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}]D(\xi_{1{\alpha}})D(\xi_{2{\beta}})]\}\xi_{3{\gamma}}.\hspace{0.5cm}(4.7)\end{aligned}$$ We assume that $H$ is a Hamiltonian operator. Thus (2.33) holds. We substitute (4.5-7) into (2.33). We define the monomial index as in Section 3. In the following, we always assume that ${\alpha},{\beta},{\gamma}$ are arbitrary elements of $I$. By comparing the coefficients of the monomial of index (3,0,0) in (2.33), we have: $$(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\circ \Phi_{{\beta}}+(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\circ \Phi_{{\alpha}}+(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}=0,\eqno(4.8)$$ which by (4.3) is equivalent to: $$(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\circ \Phi_{{\beta}}=-(\Phi_{{\gamma}}\circ \Phi_{{\beta}})\circ \Phi_{{\alpha}}.\eqno(4.9)$$ The coefficients of the monomial of index (2,1,0) in (2.33) imply: $$\begin{aligned} & &(\Phi_{{\gamma}}\times \Phi_{{\alpha}})\circ \Phi_{{\beta}}+(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\circ \Phi_{{\gamma}}+(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}-(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\circ \Phi_{{\alpha}}\\&=&(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\times \Phi_{{\alpha}}+(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ \Phi_{{\alpha}}+(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}},\hspace{4.5cm}(4.10)\end{aligned}$$ which by (4.3) is equivalent to: $$(\Phi_{{\gamma}}\times \Phi_{{\alpha}})\circ \Phi_{{\beta}}+ \Phi_{{\gamma}}\circ(\Phi_{{\alpha}}\circ \Phi_{{\beta}})-\Phi_{{\alpha}}\circ
(\Phi_{{\beta}}\circ \Phi_{{\gamma}})-\Phi_{{\alpha}}\circ (\Phi_{{\beta}}\times \Phi_{{\gamma}})=0.\eqno(4.11)$$ Again by (4.3), (4.11) is equivalent to: $$(\Phi_{{\alpha}}\circ \Phi_{{\gamma}})\circ \Phi_{{\beta}}-\Phi_{{\alpha}}\circ (\Phi_{{\gamma}}\circ \Phi_{{\beta}})=(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\circ \Phi_{{\beta}}- \Phi_{{\gamma}}\circ(\Phi_{{\alpha}}\circ \Phi_{{\beta}}).\eqno(4.12)$$ Note that (4.9) and (4.12) imply that $(V,\circ)$ is a fermionic Novikov algebra. Consulting the coefficients of the monomial of index (2,0,1) in (2.33), we get: $$\begin{aligned} & &(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\circ \Phi_{{\beta}}+(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\times \Phi_{{\beta}}-(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\circ \Phi_{{\gamma}}-(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}\\&=&(\Phi_{{\alpha}}\times \Phi_{{\beta}})\circ \Phi_{{\gamma}}+(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}-(\Phi_{{\beta}}\times \Phi_{{\gamma}})\circ\Phi_{{\alpha}},\hspace{4.5cm}(4.13)\end{aligned}$$ which is (4.10) if we change the indices according to the cycle ${\alpha}\rightarrow {\beta}\rightarrow{\gamma}\rightarrow{\alpha}$. Examining the coefficients of the monomial of index (1,2,0) in (2.33), we have: $$(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}-(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\times \Phi_{{\alpha}}-(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}=0,\eqno(4.14)$$ which by (4.3) is equivalent to: $$(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}-(\Phi_{{\gamma}}\circ \Phi_{{\beta}})\times \Phi_{{\alpha}}=0.\eqno(4.15)$$ Again by (4.3), (4.15) is equivalent to: $$\Phi_{{\gamma}}\circ(\Phi_{{\alpha}}\circ \Phi_{{\beta}})-(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\circ \Phi_{{\gamma}}- \Phi_{{\alpha}}\circ(\Phi_{{\gamma}}\circ \Phi_{{\beta}})+(\Phi_{{\gamma}}\circ \Phi_{{\beta}})\circ \Phi_{{\alpha}}=0,\eqno(4.16)$$ which is equivalent to (4.12) by (4.9).
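Identities (4.9) and (4.12) are easy to test mechanically once an algebra is given by structure constants. The following sketch is our own illustration; the two-dimensional algebra with $e_0\circ e_0=e_1$ and all other products zero is a hypothetical toy example, not taken from the text:

```python
import itertools

def circ(C, x, y):
    """Product of coordinate vectors x, y w.r.t. structure constants C[i][j][k]."""
    n = len(C)
    z = [0.0] * n
    for i, j, k in itertools.product(range(n), repeat=3):
        z[k] += x[i] * y[j] * C[i][j][k]
    return z

def is_fermionic_novikov(C, tol=1e-12):
    """Check (4.9) and (4.12) on all triples of basis vectors."""
    n = len(C)
    basis = [[float(s == t) for t in range(n)] for s in range(n)]
    for x, y, z in itertools.product(basis, repeat=3):
        # (4.9): (x∘y)∘z = -(x∘z)∘y
        lhs = circ(C, circ(C, x, y), z)
        rhs = circ(C, circ(C, x, z), y)
        if any(abs(l + r) > tol for l, r in zip(lhs, rhs)):
            return False
        # (4.12): the associator (x∘y)∘z - x∘(y∘z) is symmetric in x and y
        a1 = [p - q for p, q in zip(circ(C, circ(C, x, y), z),
                                    circ(C, x, circ(C, y, z)))]
        a2 = [p - q for p, q in zip(circ(C, circ(C, y, x), z),
                                    circ(C, y, circ(C, x, z)))]
        if any(abs(p - q) > tol for p, q in zip(a1, a2)):
            return False
    return True

# toy example (hypothetical, 2-dimensional): e_0∘e_0 = e_1, all else zero
C = [[[0.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]]
C[0][0][1] = 1.0
assert is_fermionic_novikov(C)
```

In this toy example every double product and every associator vanishes, so both identities hold trivially; the checker itself works for any finite-dimensional candidate.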
The coefficients of the monomial of index (1,0,2) in (2.33) tell us that $$(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\times \Phi_{{\beta}}-(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}-(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}=0,\eqno(4.17)$$ which is (4.14) if we change indices according to the cycle ${\alpha}\rightarrow {\beta}\rightarrow{\gamma}\rightarrow{\alpha}$. Finally, checking the coefficients of the monomial of index (1,1,1) in (2.33), we obtain: $$(\Phi_{{\gamma}}\times \Phi_{{\alpha}})\times \Phi_{{\beta}}+(\Phi_{{\alpha}}\times \Phi_{{\beta}})\times \Phi_{{\gamma}}+(\Phi_{{\beta}}\times \Phi_{{\gamma}})\times \Phi_{{\alpha}}=0,\eqno(4.18)$$ which by (4.3) is equivalent to: $$\begin{aligned} & &(\Phi_{{\alpha}}\circ \Phi_{{\gamma}})\times \Phi_{{\beta}}-(\Phi_{{\gamma}}\circ \Phi_{{\alpha}})\times \Phi_{{\beta}}+(\Phi_{{\beta}}\circ\Phi_{{\alpha}})\times \Phi_{{\gamma}}\\&=&(\Phi_{{\alpha}}\circ \Phi_{{\beta}})\times \Phi_{{\gamma}}-(\Phi_{{\gamma}}\circ \Phi_{{\beta}})\times \Phi_{{\alpha}}+(\Phi_{{\beta}}\circ \Phi_{{\gamma}})\times \Phi_{{\alpha}},\hspace{4.5cm}(4.19)\end{aligned}$$ which holds if (4.15) is satisfied. This shows that $(V,\times)$ is a Lie algebra. From the above arguments, one can see that we have proved that a matrix differential operator $H$ of the form (4.1) is a Hamiltonian operator if and only if (4.3), (4.9) and (4.12) are satisfied.$\Box$ [**Example**]{}. It is not that easy to construct nontrivial fermionic Novikov algebras. Let $V$ be a vector space with a basis $\{e_1,e_2,e_3,e_4\}$ and $(E(V),\cdot)$ be the exterior algebra generated by $V$. Then $E(V)$ is 16-dimensional. Set $$v_1=e_2\cdot e_3\cdot e_4,\;\;v_2=e_1\cdot e_3\cdot e_4,\;\;v_3=e_1\cdot e_2\cdot e_4,\;\;v_4=e_1\cdot e_2\cdot e_3,\eqno(4.20)$$ $$v_0=\sum_{i<j}c_{i,j}e_i\cdot e_j,\qquad v_5=e_1\cdot e_2\cdot e_3\cdot e_4,\eqno(4.21)$$ where $c_{i,j}\in \Bbb{R}$ are constants. 
We define $${\cal A}=\sum_{i=0}^5\Bbb{R}v_i\eqno(4.22)$$ and define the operation on ${\cal A}$ by: $$v\circ v_0=v\circ v_5=0,\qquad v\circ v_i=v\cdot e_i\qquad\qquad\mbox{for}\;\;v\in {\cal A},\;i=1,2,3,4.\eqno(4.23)$$ Then the operation $\circ$ satisfies (4.9). Let us prove (4.12), that is $$(v_i\circ v_j)\circ v_k-v_i\circ (v_j\circ v_k)=(v_j\circ v_i)\circ v_k-v_j\circ (v_i\circ v_k)\qquad\mbox{for}\;\;i,j,k=0,1,...,5.\eqno(4.24)$$ Notice that (4.24) holds obviously if one of the following conditions is satisfied: (a) $i=j$; (b) $k=0,5$; (c) $i=5$; (d) $j=5$; (e) $i,j\in \{1,2,3,4\}$. Moreover, by symmetry, we only need to prove it when $i=0,\;j=1$ and $k=1$ or 2. $$\begin{aligned} \hspace{2cm}& &(v_0\circ v_1)\circ v_1-v_0\circ (v_1\circ v_1)\\&=&v_0\cdot e_1\cdot e_1-v_0\circ (v_1\cdot e_1)\\&=& v_0\circ v_5\\&=&0,\hspace{11.6cm}(4.25)\end{aligned}$$ $$\begin{aligned} \hspace{2cm}& &(v_1\circ v_0)\circ v_1-v_1\circ (v_0\circ v_1)\\&=&-v_1\circ (v_0\cdot e_1)\\&=& -v_1\circ (c_{2,3}v_4+c_{2,4}v_3+c_{3,4}v_2)\\&=&-(e_2\cdot e_3\cdot e_4)\cdot (c_{2,3}e_4+c_{2,4}e_3+c_{3,4}e_2)\\&=&0;\hspace{11.6cm}(4.26)\end{aligned}$$ $$\begin{aligned} \hspace{2cm}& &(v_0\circ v_1)\circ v_2-v_0\circ (v_1\circ v_2)\\&=&v_0\cdot e_1\cdot e_2-v_0\circ (v_1\cdot e_2)\\&=&c_{3,4}v_5,\hspace{10.9cm}(4.27)\end{aligned}$$ $$\begin{aligned} \hspace{2cm}& &(v_1\circ v_0)\circ v_2-v_1\circ (v_0\circ v_2)\\&=&-v_1\circ (v_0\cdot e_2)\\&=& -v_1\circ (-c_{1,3}v_4-c_{1,4}v_3+c_{3,4}v_1)\\&=&-(e_2\cdot e_3\cdot e_4)\cdot (-c_{1,3}e_4-c_{1,4}e_3+c_{3,4}e_1)\\&=&c_{3,4}v_5.\hspace{10.9cm}(4.28)\end{aligned}$$ Thus we have proved that the algebra $({\cal A},\circ)$ defined in (4.20-23) is a fermionic Novikov algebra.
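The case analysis above can also be double-checked by brute force. The sketch below is our own verification script (with random values substituted for the constants $c_{i,j}$): it realizes $E(V)$ on sorted index tuples and tests (4.9) and (4.24) on all $6^3$ basis triples of ${\cal A}$:

```python
import itertools, random

def wedge(m1, m2):
    """Exterior product of two strictly increasing index tuples.
    Returns (sign, sorted_tuple), or None if an index repeats."""
    if set(m1) & set(m2):
        return None
    arr, sign = list(m1) + list(m2), 1
    for i in range(len(arr)):            # bubble sort, tracking the parity
        for j in range(len(arr) - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                sign = -sign
    return sign, tuple(arr)

def mul(x, y):
    """Product in E(V) of elements stored as {monomial: coefficient} dicts."""
    z = {}
    for m1, c1 in x.items():
        for m2, c2 in y.items():
            r = wedge(m1, m2)
            if r is not None:
                z[r[1]] = z.get(r[1], 0.0) + r[0] * c1 * c2
    return z

random.seed(1)
cvals = {(i, j): random.uniform(-1, 1)      # random values for the c_{i,j} of (4.21)
         for i in range(1, 5) for j in range(i + 1, 5)}
v = [dict(cvals),                           # v_0
     {(2, 3, 4): 1.0}, {(1, 3, 4): 1.0},    # v_1, v_2
     {(1, 2, 4): 1.0}, {(1, 2, 3): 1.0},    # v_3, v_4
     {(1, 2, 3, 4): 1.0}]                   # v_5

def circ(x, i):
    """x∘v_i as in (4.23): zero for i in {0, 5}, otherwise x·e_i."""
    return {} if i in (0, 5) else mul(x, {(i,): 1.0})

# every ∘-product lies in span{v_1, ..., v_5}:
MONO = {(2, 3, 4): 1, (1, 3, 4): 2, (1, 2, 4): 3, (1, 2, 3): 4, (1, 2, 3, 4): 5}

def circ_elem(x, y):
    """x∘y for a general y in A, expanded through the basis v_0, ..., v_5."""
    coords = [0.0] * 6
    for m, cf in y.items():
        coords[MONO[m]] += cf
    out = {}
    for l, lam in enumerate(coords):
        for m, cf in circ(x, l).items():
            out[m] = out.get(m, 0.0) + lam * cf
    return out

def assoc(i, j, k):                         # (v_i∘v_j)∘v_k - v_i∘(v_j∘v_k)
    t1, t2 = circ(circ(v[i], j), k), circ_elem(v[i], circ(v[j], k))
    return {m: t1.get(m, 0.0) - t2.get(m, 0.0) for m in set(t1) | set(t2)}

for i, j, k in itertools.product(range(6), repeat=3):
    # (4.9): (u∘v_j)∘v_k = -(u∘v_k)∘v_j
    p, q = circ(circ(v[i], j), k), circ(circ(v[i], k), j)
    assert all(abs(p.get(m, 0.0) + q.get(m, 0.0)) < 1e-12 for m in set(p) | set(q))
    # (4.24): the associator is symmetric in its first two arguments
    d1, d2 = assoc(i, j, k), assoc(j, i, k)
    assert all(abs(d1.get(m, 0.0) - d2.get(m, 0.0)) < 1e-12 for m in set(d1) | set(d2))
```

Running the loop reproduces the hand computations: for instance $(v_0\circ v_1)\circ v_2-v_0\circ(v_1\circ v_2)$ and $(v_1\circ v_0)\circ v_2-v_1\circ(v_0\circ v_2)$ both come out as $c_{3,4}v_5$, matching (4.27) and (4.28).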
Induced Lie Superalgebras ========================= In this section, we shall prove that a type-1 Hamiltonian operator $H$ of the form $$H^1_{{\alpha},{\beta}}=H^0_{{\alpha},{\beta}}=\sum_{{\gamma}\in I}[\sum_{m=0}^Na_{{\alpha},{\beta},{\gamma}}^m\Phi_{{\gamma}}(2(N-m)+1)D^{2m}+\sum_{n=0}^{N-1}b_{{\alpha},{\beta},{\gamma}}^n\Phi_{{\gamma}}(2(N-n))D^{2n+1}]\eqno(5.1)$$ induces a Lie superalgebra. In the rest of this section, we denote by $\theta_i$ anticommutative formal variables and by $z_i$ commutative formal variables for $i=1,2,3$, that is, $$\theta_i\theta_j=-\theta_j\theta_i,\qquad z_i\theta_j=\theta_j z_i,\qquad z_iz_j=z_jz_i\qquad\mbox{for}\;\;i,j=1,2,3.\eqno(5.2)$$ We let $$\delta\left({z_i\over z_j}\right)=\sum_{m\in \Bbb{Z}}{z_i^m\over z_j^m},\qquad \Delta_{i,j}=(\theta_i-\theta_j)\delta\left({z_i\over z_j}\right).\eqno(5.3)$$ Note that $$\Delta_{i,j}=-\Delta_{j,i}.\eqno(5.4)$$ Let $$f(\theta,z)=f_0(z)+\theta f_1(z),\qquad\mbox{for}\;\;f_i(z)\in \Bbb{R}[z,z^{-1}].\eqno(5.5)$$ [**Lemma 5.1**]{}. [*We have*]{}: $$f(\theta_1,z_1)\Delta_{1,2}=f(\theta_2,z_2)\Delta_{1,2}.\eqno(5.6)$$ [*Proof.*]{} $$\begin{aligned} \hspace{1cm}&&f(\theta_1,z_1)\Delta_{1,2}\\&=&(f_0(z_1)+\theta_1f_1(z_1))(\theta_1-\theta_2)\delta\left({z_1\over z_2}\right)\\&=&(\theta_1-\theta_2)f_0(z_1)\delta\left({z_1\over z_2}\right)-\theta_1\theta_2f_1(z_1)\delta\left({z_1\over z_2}\right)\\&=& (\theta_1-\theta_2)f_0(z_2)\delta\left({z_1\over z_2}\right)+\theta_2\theta_1f_1(z_2)\delta\left({z_1\over z_2}\right)\\&=&(f_0(z_2)+\theta_2f_1(z_2))(\theta_1-\theta_2)\delta\left({z_1\over z_2}\right)\\&=&f(\theta_2,z_2)\Delta_{1,2}.\qquad\qquad\Box\hspace{8.8cm}(5.7)\end{aligned}$$ Let $L$ be a vector space with a basis $\{\phi_{{\alpha}}(n)\mid {\alpha}\in I,n\in \Bbb{Z}/2\}$.
We denote $$\phi_{{\alpha}}(\theta,z)=\sum_{n\in\Bbb{Z}}\phi_{{\alpha}}(n)z^{-n-N-1}\theta+\sum_{n\in\Bbb{Z}}\phi_{{\alpha}}\left(n+{1\over 2}\right)z^{-n-N-1}=\phi_{{\alpha}}^0(z)\theta+\phi_{{\alpha}}^1(z)\eqno(5.8)$$ for ${\alpha}\in I.$ Our notation is motivated by the theory of vertex operator algebras (e.g., cf. \[FLM\]). In the rest of this section, we always assume that $$\theta_i \phi_{{\alpha}}(n)=\phi_{{\alpha}}(n)\theta_i,\;\;\;\theta_i \phi_{{\alpha}}\left(n+{1\over 2}\right)=-\phi_{{\alpha}}\left(n+{1\over 2}\right)\theta_i;\eqno(5.9)$$ $$z_i \phi_{{\alpha}}(n)=\phi_{{\alpha}}(n)z_i,\;\;\;z_i \phi_{{\alpha}}\left(n+{1\over 2}\right)=\phi_{{\alpha}}\left(n+{1\over 2}\right)z_i\eqno(5.10)$$ for ${\alpha}\in I,n\in \Bbb{Z}$ and $i=1,2,3.$ We also use the notation: $$D_i=\theta_i\partial_{z_i}+\partial_{\theta_i}\qquad\mbox{for}\;\;i=1,2,3;\eqno(5.11)$$ By induction on $n\in \Bbb{N}$, we can prove: [**Lemma 5.2**]{}. $$z_2^{-1}D_1^{2n}\Delta_{1,2}=(-1)^nz_1^{-1}D_2^{2n}\Delta_{1,2},\qquad z_2^{-1}D_1^{2n+1}\Delta_{1,2}=(-1)^{n+1}z_1^{-1}D_2^{2n+1}\Delta_{1,2}\eqno(5.12)$$ [*for*]{} $n\in \Bbb{N}.$ Now we define the operation $[\cdot,\cdot]$ on $L$ by $$\begin{aligned}& &[\phi_{{\alpha}}(\theta_1,z_1),\phi_{{\beta}}(\theta_2,z_2)]\\&=&z_2^{-1}\sum_{{\gamma}\in I}\{\sum_{m=0}^Na_{{\alpha},{\beta},{\gamma}}^mD_1^{2(N-m)}(\phi_{{\gamma}}(\theta_1,z_1))D_1^{2m}(\Delta_{1,2})\\& & +\sum_{n=0}^{N-1}b_{{\alpha},{\beta},{\gamma}}^nD_1^{2(N-n)-1}(\phi_{{\gamma}}(\theta_1,z_1))D_1^{2n+1}(\Delta_{1,2})\}\hspace{2cm}(5.13)\end{aligned}$$ for ${\alpha},{\beta}\in I$. [**Theorem 5.3**]{}. [*The algebra*]{} $(L,[\cdot,\cdot])$ [*forms a Lie superalgebra with the grading:*]{} $$L_0=\sum_{{\alpha}\in I}\sum_{n\in \Bbb{Z}}\Bbb{R}\phi_{{\alpha}}(n),\qquad L_1=\sum_{{\alpha}\in I}\sum_{n\in \Bbb{Z}}\Bbb{R}\phi_{{\alpha}}\left(n+{1\over 2}\right).\eqno(5.14)$$ [*Proof*]{}.
First, we have: $$\begin{aligned} \hspace{1cm}& &[\phi_{{\alpha}}(\theta_1,z_1),\phi_{{\beta}}(\theta_2,z_2)]\\&\stackrel{\tiny (5.12)}{=}&z_1^{-1}\sum_{{\gamma}\in I}\{\sum_{m=0}^N(-1)^ma_{{\alpha},{\beta},{\gamma}}^mD_1^{2(N-m)}(\phi_{{\gamma}}(\theta_1,z_1))D_2^{2m}(\Delta_{1,2})\\& & +\sum_{n=0}^{N-1}(-1)^{n+1}b_{{\alpha},{\beta},{\gamma}}^nD_1^{2(N-n)-1}(\phi_{{\gamma}}(\theta_1,z_1))D_2^{2n+1}(\Delta_{1,2})\}\\&=&z_1^{-1}\sum_{{\gamma}\in I}\{\sum_{m=0}^N(-1)^ma_{{\alpha},{\beta},{\gamma}}^mD_2^{2m}[D_1^{2(N-m)}(\phi_{{\gamma}}(\theta_1,z_1))\Delta_{1,2}]\\& & +\sum_{n=0}^{N-1}(-1)^{n+1}b_{{\alpha},{\beta},{\gamma}}^nD_2^{2n+1}[D_1^{2(N-n)-1}(\phi_{{\gamma}}(\theta_1,z_1))\Delta_{1,2}]\}\\&\stackrel{\tiny (5.6)}{=}&z_1^{-1}\sum_{{\gamma}\in I}\{\sum_{m=0}^N(-1)^ma_{{\alpha},{\beta},{\gamma}}^mD_2^{2m}[D_2^{2(N-m)}(\phi_{{\gamma}}(\theta_2,z_2))\Delta_{1,2}]\hspace{5cm}\end{aligned}$$ $$\begin{aligned} & & +\sum_{n=0}^{N-1}(-1)^{n+1}b_{{\alpha},{\beta},{\gamma}}^nD_2^{2n+1}[D_2^{2(N-n)-1}(\phi_{{\gamma}}(\theta_2,z_2))\Delta_{1,2}]\}\\&=&z_1^{-1}\sum_{{\gamma}\in I}\{\sum_{m=0}^N(-1)^{m+1}a_{{\alpha},{\beta},{\gamma}}^mD_2^{2m}[D_2^{2(N-m)}(\phi_{{\gamma}}(\theta_2,z_2))\Delta_{2,1}]\\& &+\sum_{n=0}^{N-1}(-1)^nb_{{\alpha},{\beta},{\gamma}}^nD_2^{2n+1}[D_2^{2(N-n)-1}(\phi_{{\gamma}}(\theta_2,z_2))\Delta_{2,1}]\}\hspace{4.5cm}(5.15)\end{aligned}$$ for ${\alpha},{\beta}\in I$. 
Therefore, the super skew-symmetry of $H$ and (2.31) imply $$\begin{aligned} & &[\phi_{{\alpha}}^0(z_1),\phi_{{\beta}}^0(z_2)]\theta_1\theta_2-[\phi_{{\alpha}}^0(z_1),\phi_{{\beta}}^1(z_2)]\theta_1+[\phi_{{\alpha}}^1(z_1),\phi_{{\beta}}^0(z_2)]\theta_2+[\phi_{{\alpha}}^1(z_1),\phi_{{\beta}}^1(z_2)]\\&=& [\phi_{{\alpha}}(\theta_1,z_1),\phi_{{\beta}}(\theta_2,z_2)]\\&=&[\phi_{{\beta}}(\theta_2,z_2),\phi_{{\alpha}}(\theta_1,z_1)]\\&=&[\phi_{{\beta}}^0(z_2),\phi_{{\alpha}}^0(z_1)]\theta_2\theta_1-[\phi_{{\beta}}^0(z_2),\phi_{{\alpha}}^1(z_1)]\theta_2\\& &+[\phi_{{\beta}}^1(z_2),\phi_{{\alpha}}^0(z_1)]\theta_1+[\phi_{{\beta}}^1(z_2),\phi_{{\alpha}}^1(z_1)]\hspace{7.4cm}(5.16)\end{aligned}$$ for ${\alpha},{\beta}\in I$, which implies the skew-symmetry: $$[\phi_{{\alpha}}^i(z_1),\phi^j_{{\beta}}(z_2)]=-(-1)^{ij}[\phi^j_{{\beta}}(z_2),\phi_{{\alpha}}^i(z_1)]\qquad\mbox{for}\;\;{\alpha},{\beta}\in I;\;i,j\in \Bbb{Z}_2.\eqno(5.17)$$ In the rest of this section, we assume that ${\alpha},{\beta},{\gamma}$ are arbitrary elements of $I$.
Note that $$\begin{aligned} && [[\phi_{{\alpha}}(\theta_1,z_1),\phi_{{\beta}}(\theta_2,z_2)] ,\phi_{{\gamma}}(\theta_3,z_3)]\\&=& z_2^{-1}\sum_{{\lambda}\in I}\{\sum_{m=0}^Na_{{\alpha},{\beta},{\lambda}}^m[D_1^{2(N-m)}(\phi_{{\lambda}}(\theta_1,z_1))D_1^{2m}(\Delta_{1,2}),\phi_{{\gamma}}(\theta_3,z_3)]\\& & +\sum_{m=0}^{N-1}b_{{\alpha},{\beta},{\lambda}}^m[D_1^{2(N-m)-1}(\phi_{{\lambda}}(\theta_1,z_1))D_1^{2m+1}(\Delta_{1,2}),\phi_{{\gamma}}(\theta_3,z_3)]\}\\&=& z_2^{-1}\sum_{{\lambda}\in I}\{\sum_{m=0}^N-a_{{\alpha},{\beta},{\lambda}}^mD_1^{2(N-m)}[\phi_{{\lambda}}(\theta_1,z_1),\phi_{{\gamma}}(\theta_3,z_3)]D_1^{2m}(\Delta_{1,2})\\& & +\sum_{m=0}^{N-1}b_{{\alpha},{\beta},{\lambda}}^mD_1^{2(N-m)-1}[\phi_{{\lambda}}(\theta_1,z_1),\phi_{{\gamma}}(\theta_3,z_3)]D_1^{2m+1}(\Delta_{1,2})\}\\&=& z_2^{-1}z_3^{-1}\sum_{{\lambda},\mu\in I}\{\sum_{m=0}^N\sum_{n=0}^N-a_{{\alpha},{\beta},{\lambda}}^ma_{{\lambda},{\gamma},\mu}^nD_1^{2(N-m)}[D_1^{2(N-n)}(\phi_{\mu}(\theta_1,z_1))D_1^{2n}(\Delta_{1,3})]D_1^{2m}(\Delta_{1,2})\\& &+\sum_{m=0}^N\sum_{n=0}^{N-1}-a_{{\alpha},{\beta},{\lambda}}^mb_{{\lambda},{\gamma},\mu}^nD_1^{2(N-m)}[D_1^{2(N-n)-1}(\phi_{\mu}(\theta_1,z_1))D_1^{2n+1}(\Delta_{1,3})]D_1^{2m}(\Delta_{1,2})\\& &+\sum_{m=0}^{N-1}\sum_{n=0}^Nb_{{\alpha},{\beta},{\lambda}}^ma_{{\lambda},{\gamma},\mu}^nD_1^{2(N-m)-1}[D_1^{2(N-n)}(\phi_{\mu}(\theta_1,z_1))D_1^{2n}(\Delta_{1,3})]D_1^{2m+1}(\Delta_{1,2}) \hspace{2cm}\end{aligned}$$ $$\begin{aligned} &&+\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}b_{{\alpha},{\beta},{\lambda}}^mb_{{\lambda},{\gamma},\mu}^nD_1^{2(N-m)-1}[D_1^{2(N-n)-1}(\phi_{\mu}(\theta_1,z_1))D_1^{2n+1}(\Delta_{1,3})]D_1^{2m+1}(\Delta_{1,2})\},\hspace{0.6cm}(5.18)\end{aligned}$$ $$\begin{aligned} && [[\phi_{{\beta}}(\theta_2,z_2),\phi_{{\gamma}}(\theta_3,z_3)] ,\phi_{{\alpha}}(\theta_1,z_1)]\\&=&z_2^{-1}z_3^{-1}\sum_{{\lambda},\mu\in
I}\{\sum_{m=0}^N\sum_{n=0}^N(-1)^{N+m+n+1}a_{{\beta},{\gamma},{\lambda}}^ma_{{\lambda},{\alpha},\mu}^nD_1^{2n}[D_1^{2(N-n)}(\phi_{\mu}(\theta_1,z_1))D_1^{2(N-m)}(D_1^{2m}(\Delta_{1,3})\Delta_{1,2})]\\& &+\sum_{m=0}^N\sum_{n=0}^{N-1}(-1)^{N+m+n}a_{{\beta},{\gamma},{\lambda}}^mb_{{\lambda},{\alpha},\mu}^nD_1^{2n+1}[D_1^{2(N-n)-1}(\phi_{\mu}(\theta_1,z_1))D_1^{2(N-m)}(D_1^{2m}(\Delta_{1,3})\Delta_{1,2})]\\& &+\sum_{m=0}^{N-1}\sum_{n=0}^N(-1)^{N+m+n}b_{{\beta},{\gamma},{\lambda}}^ma_{{\lambda},{\alpha},\mu}^nD_1^{2n}[D_1^{2(N-n)}(\phi_{\mu}(\theta_1,z_1))D_1^{2(N-m)-1}(D_1^{2m+1}(\Delta_{1,3})\Delta_{1,2})]\\& &+\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}(-1)^{N+m+n+1}b_{{\beta},{\gamma},{\lambda}}^mb_{{\lambda},{\alpha},\mu}^nD_1^{2n+1}[D_1^{2(N-n)-1}(\phi_{\mu}(\theta_1,z_1))\\& &D_1^{2(N-m)-1}(D_1^{2m+1}(\Delta_{1,3})\Delta_{1,2})]\}\hspace{8.7cm}(5.19)\end{aligned}$$ $$\begin{aligned} && [[\phi_{{\gamma}}(\theta_3,z_3),\phi_{{\alpha}}(\theta_1,z_1)] ,\phi_{{\beta}}(\theta_2,z_2)]\\&=& z_2^{-1}z_3^{-1}\sum_{{\lambda},\mu\in I}\{\sum_{m=0}^N\sum_{n=0}^N(-1)^ma_{{\gamma},{\alpha},{\lambda}}^ma_{{\lambda},{\beta},\mu}^nD_1^{2m}[D_1^{2(N-m)}[D_1^{2(N-n)}(\phi_{\mu}(\theta_1,z_1))D_1^{2n}(\Delta_{1,2})]\Delta_{1,3}]\\& &+\sum_{m=0}^N\sum_{n=0}^{N-1}(-1)^ma_{{\gamma},{\alpha},{\lambda}}^mb_{{\lambda},{\beta},\mu}^nD_1^{2m}[D_1^{2(N-m)}[D_1^{2(N-n)-1}(\phi_{\mu}(\theta_1,z_1))D_1^{2n+1}(\Delta_{1,2})]\Delta_{1,3}]\\& &+\sum_{m=0}^{N-1}\sum_{n=0}^N(-1)^{m+1}b_{{\gamma},{\alpha},{\lambda}}^ma_{{\lambda},{\beta},\mu}^nD_1^{2m+1}[D_1^{2(N-m)-1}[D_1^{2(N-n)}(\phi_{\mu}(\theta_1,z_1))D_1^{2n}(\Delta_{1,2})]\Delta_{1,3}]\\&&+\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}(-1)^{m+1}b_{{\gamma},{\alpha},{\lambda}}^mb_{{\lambda},{\beta},\mu}^nD_1^{2m+1}[D_1^{2(N-m)-1}\\& & [D_1^{2(N-n)-1}(\phi_{\mu}(\theta_1,z_1))D_1^{2n+1}(\Delta_{1,2})]\Delta_{1,3}]\}.\hspace{6.8cm}(5.20)\end{aligned}$$ On the other hand, we have for $\bar{\xi}_1,\bar{\xi}_2,\bar{\xi}_3\in \Omega_0$: $$\begin{aligned} & &
\bar{\xi}_1(D_H\bar{\xi}_2H\bar{\xi}_3)\\&=&\sum_{{\alpha},{\lambda}\in I}(D_H\bar{\xi}_2)_{{\alpha},{\lambda}}(H\bar{\xi}_3)_{{\lambda}}\xi_{1{\alpha}}\\&=&\sum_{{\alpha},{\beta},{\gamma},{\lambda},\mu\in I}\{[\sum_{m=0}^Na_{{\alpha},{\beta},{\lambda}}^mD^{2m}(\xi_{2{\beta}})D^{2(N-m)}-\sum_{m=0}^{N-1}b_{{\alpha},{\beta},{\lambda}}^mD^{2m+1}(\xi_{2{\beta}})D^{2(N-m)-1}]\\&&[\sum_{n=0}^Na_{{\lambda},{\gamma},\mu}^n\Phi_{\mu}(2(N-n)+1)D^{2n}(\xi_{3{\gamma}})+\sum_{n=0}^{N-1}b_{{\lambda},{\gamma},\mu}^n\Phi_{\mu}(2(N-n))D^{2n+1}(\xi_{3{\gamma}})]\}\xi_{1{\alpha}}\\&=&\sum_{{\alpha},{\beta},{\gamma},{\lambda},\mu\in I}\{\sum_{m=0}^N\sum_{n=0}^Na_{{\alpha},{\beta},{\lambda}}^ma_{{\lambda},{\gamma},\mu}^nD^{2(N-m)}[\Phi_{\mu}(2(N-n)+1)D^{2n}(\xi_{3{\gamma}})]D^{2m}(\xi_{2{\beta}})\hspace{3cm}\end{aligned}$$ $$\begin{aligned} & &+\sum_{m=0}^N\sum_{n=0}^{N-1}a_{{\alpha},{\beta},{\lambda}}^mb_{{\lambda},{\gamma},\mu}^nD^{2(N-m)}[\Phi_{\mu}(2(N-n))D^{2n+1}(\xi_{3{\gamma}})]D^{2m}(\xi_{2{\beta}})\\& &-\sum_{m=0}^{N-1}\sum_{n=0}^Nb_{{\alpha},{\beta},{\lambda}}^ma_{{\lambda},{\gamma},\mu}^nD^{2(N-m)-1}[\Phi_{\mu}(2(N-n)+1)D^{2n}(\xi_{3{\gamma}})]D^{2m+1}(\xi_{2{\beta}})\\& &-\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}b_{{\alpha},{\beta},{\lambda}}^mb_{{\lambda},{\gamma},\mu}^nD^{2(N-m)-1}[\Phi_{\mu}(2(N-n))D^{2n+1}(\xi_{3{\gamma}})]D^{2m+1}(\xi_{2{\beta}})\}\xi_{1{\alpha}},\hspace{1cm}(5.21)\end{aligned}$$ $$\begin{aligned} & & \bar{\xi}_2(D_H\bar{\xi}_3H\bar{\xi}_1)\\&=&\sum_{{\alpha},{\beta},{\gamma},{\lambda},\mu\in I}\{\sum_{m=0}^N\sum_{n=0}^N(-1)^{N+m+n}a_{{\beta},{\gamma},{\lambda}}^ma_{{\lambda},{\alpha},\mu}^nD^{2n}[\Phi_{\mu}(2(N-n)+1)D^{2(N-m)}[D^{2m}(\xi_{3{\gamma}})\xi_{2{\beta}}]]\\& &+\sum_{m=0}^N\sum_{n=0}^{N-1}(-1)^{N+m+n+1}a_{{\beta},{\gamma},{\lambda}}^mb_{{\lambda},{\alpha},\mu}^nD^{2n+1}[\Phi_{\mu}(2(N-n))D^{2(N-m)}[D^{2m}(\xi_{3{\gamma}})\xi_{2{\beta}}]]\\& 
&+\sum_{m=0}^{N-1}\sum_{n=0}^N(-1)^{N+m+n+1}b_{{\beta},{\gamma},{\lambda}}^ma_{{\lambda},{\alpha},\mu}^nD^{2n}[\Phi_{\mu}(2(N-n)+1)D^{2(N-m)-1}[D^{2m+1}(\xi_{3{\gamma}})\xi_{2{\beta}}]]\\& &+\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}(-1)^{N+m+n}b_{{\beta},{\gamma},{\lambda}}^mb_{{\lambda},{\alpha},\mu}^nD^{2n+1}[\Phi_{\mu}(2(N-n))D^{2(N-m)-1}[D^{2m+1}(\xi_{3{\gamma}})\xi_{2{\beta}}]]\}\xi_{1{\alpha}},\hspace{0.5cm}(5.22)\end{aligned}$$ $$\begin{aligned} & & \bar{\xi}_3(D_H\bar{\xi}_1H\bar{\xi}_2)\\&=& \sum_{{\alpha},{\beta},{\gamma},{\lambda},\mu\in I}\{\sum_{m=0}^N\sum_{n=0}^N(-1)^{m+1}a_{{\gamma},{\alpha},{\lambda}}^ma_{{\lambda},{\beta},\mu}^nD^{2m}[D^{2(N-m)}[\Phi_{\mu}(2(N-n)+1)D^{2n}(\xi_{2{\beta}})]\xi_{3{\gamma}}]\\& &+\sum_{m=0}^N\sum_{n=0}^{N-1}(-1)^{m+1}a_{{\gamma},{\alpha},{\lambda}}^mb_{{\lambda},{\beta},\mu}^nD^{2m}[D^{2(N-m)}[\Phi_{\mu}(2(N-n))D^{2n+1}(\xi_{2{\beta}})]\xi_{3{\gamma}}]\\& &+\sum_{m=0}^{N-1}\sum_{n=0}^N(-1)^mb_{{\gamma},{\alpha},{\lambda}}^ma_{{\lambda},{\beta},\mu}^nD^{2m+1}[D^{2(N-m)-1}[\Phi_{\mu}(2(N-n)+1)D^{2n}(\xi_{2{\beta}})]\xi_{3{\gamma}}]\\& &+\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}(-1)^mb_{{\gamma},{\alpha},{\lambda}}^mb_{{\lambda},{\beta},\mu}^nD^{2m+1}[D^{2(N-m)-1}[\Phi_{\mu}(2(N-n))D^{2n+1}(\xi_{2{\beta}})]\xi_{3{\gamma}}]\}\xi_{1{\alpha}}.\hspace{0.5cm}(5.23)\end{aligned}$$ Comparing (5.18) and (5.21), (5.19) and (5.22), (5.20) and (5.23), we have: $$\begin{aligned} \hspace{1cm}& &[[\phi_{{\alpha}}(\theta_1,z_1),\phi_{{\beta}}(\theta_2,z_2)] ,\phi_{{\gamma}}(\theta_3,z_3)]+[[\phi_{{\beta}}(\theta_2,z_2),\phi_{{\gamma}}(\theta_3,z_3)] ,\phi_{{\alpha}}(\theta_1,z_1)]\\&=&-[[\phi_{{\gamma}}(\theta_3,z_3),\phi_{{\alpha}}(\theta_1,z_1)] ,\phi_{{\beta}}(\theta_2,z_2)].\hspace{6.9cm}(5.24)\end{aligned}$$ Furthermore, we have: $$\begin{aligned} & &[[\phi_{{\alpha}}(\theta_1,z_1),\phi_{{\beta}}(\theta_2,z_2)] ,\phi_{{\gamma}}(\theta_3,z_3)]\\&=& [[\phi_{{\alpha}}^0(z_1),\phi_{{\beta}}^0(z_2)]
,\phi_{{\gamma}}^0(z_3)]\theta_1\theta_2\theta_3-[[\phi_{{\alpha}}^0(z_1),\phi_{{\beta}}^1(z_2)] ,\phi_{{\gamma}}^0(z_3)]\theta_1\theta_3\\&&+[[\phi_{{\alpha}}^1(z_1),\phi_{{\beta}}^0(z_2)] ,\phi_{{\gamma}}^0(z_3)]\theta_2\theta_3+[[\phi_{{\alpha}}^1(z_1),\phi_{{\beta}}^1(z_2)] ,\phi_{{\gamma}}^0(z_3)]\theta_3\hspace{6cm}\end{aligned}$$ $$\begin{aligned} && +[[\phi_{{\alpha}}^0(z_1),\phi_{{\beta}}^0(z_2)] ,\phi_{{\gamma}}^1(z_3)]\theta_1\theta_2+[[\phi_{{\alpha}}^0(z_1),\phi_{{\beta}}^1(z_2)] ,\phi_{{\gamma}}^1(z_3)]\theta_1\\&&-[[\phi_{{\alpha}}^1(z_1),\phi_{{\beta}}^0(z_2)] ,\phi_{{\gamma}}^1(z_3)]\theta_2+[[\phi_{{\alpha}}^1(z_1),\phi_{{\beta}}^1(z_2)] ,\phi_{{\gamma}}^1(z_3)],\hspace{4.5cm}(5.25)\end{aligned}$$ $$\begin{aligned} & &[[\phi_{{\beta}}(\theta_2,z_2),\phi_{{\gamma}}(\theta_3,z_3)] ,\phi_{{\alpha}}(\theta_1,z_1)]\\&=& [[\phi_{{\beta}}^0(z_2),\phi_{{\gamma}}^0(z_3)] ,\phi_{{\alpha}}^0(z_1)]\theta_1\theta_2\theta_3-[[\phi_{{\beta}}^1(z_2),\phi_{{\gamma}}^0(z_3)] ,\phi_{{\alpha}}^0(z_1)]\theta_1\theta_3\\&&+[[\phi_{{\beta}}^0(z_2),\phi_{{\gamma}}^0(z_3)] ,\phi_{{\alpha}}^1(z_1)]\theta_2\theta_3-[[\phi_{{\beta}}^1(z_2),\phi_{{\gamma}}^0(z_3)] ,\phi_{{\alpha}}^1(z_1)]\theta_3\\& &+[[\phi_{{\beta}}^0(z_2),\phi_{{\gamma}}^1(z_3)] ,\phi_{{\alpha}}^0(z_1)]\theta_1\theta_2+[[\phi_{{\beta}}^1(z_2),\phi_{{\gamma}}^1(z_3)] ,\phi_{{\alpha}}^0(z_1)]\theta_1\\& &+[[\phi_{{\beta}}^0(z_2),\phi_{{\gamma}}^1(z_3)] ,\phi_{{\alpha}}^1(z_1)]\theta_2+[[\phi_{{\beta}}^1(z_2),\phi_{{\gamma}}^1(z_3)] ,\phi_{{\alpha}}^1(z_1)],\hspace{4.1cm}(5.26)\end{aligned}$$ $$\begin{aligned} & &[[\phi_{{\gamma}}(\theta_3,z_3),\phi_{{\alpha}}(\theta_1,z_1)] ,\phi_{{\beta}}(\theta_2,z_2)]\\&=& [[\phi_{{\gamma}}^0(z_3),\phi_{{\alpha}}^0(z_1)] ,\phi_{{\beta}}^0(z_2)]\theta_1\theta_2\theta_3- [[\phi_{{\gamma}}^0(z_3),\phi_{{\alpha}}^0(z_1)] ,\phi_{{\beta}}^1(z_2)]\theta_1\theta_3\\ && + [[\phi_{{\gamma}}^0(z_3),\phi_{{\alpha}}^1(z_1)] ,\phi_{{\beta}}^0(z_2)]\theta_2\theta_3+ 
[[\phi_{{\gamma}}^0(z_3),\phi_{{\alpha}}^1(z_1)] ,\phi_{{\beta}}^1(z_2)]\theta_3\\&&+ [[\phi_{{\gamma}}^1(z_3),\phi_{{\alpha}}^0(z_1)] ,\phi_{{\beta}}^0(z_2)]\theta_1\theta_2-[[\phi_{{\gamma}}^1(z_3),\phi_{{\alpha}}^0(z_1)] ,\phi_{{\beta}}^1(z_2)]\theta_1\\& &+ [[\phi_{{\gamma}}^1(z_3),\phi_{{\alpha}}^1(z_1)] ,\phi_{{\beta}}^0(z_2)]\theta_2+ [[\phi_{{\gamma}}^1(z_3),\phi_{{\alpha}}^1(z_1)] ,\phi_{{\beta}}^1(z_2)].\hspace{4.1cm}(5.27)\end{aligned}$$ Now (5.24-27) imply: $$\begin{aligned} & &[[\phi_{{\alpha}}^i(z_1),\phi_{{\beta}}^j(z_2)],\phi_{{\gamma}}^k(z_3)]+(-1)^{i(j+k)}[[\phi_{{\beta}}^j(z_2),\phi_{{\gamma}}^k(z_3)],\phi_{{\alpha}}^i(z_1)]\\&=&-(-1)^{k(i+j)}[[\phi_{{\gamma}}^k(z_3),\phi_{{\alpha}}^i(z_1)],\phi_{{\beta}}^j(z_2)]\hspace{7.6cm}(5.28)\end{aligned}$$ which is the super Jacobi identity. $\qquad\Box$ [**Example**]{}. If we let $a_{{\alpha},{\beta}}=0$ in (3.1), then the Hamiltonian operator (3.1) is a special case of (5.1). Therefore, each NX-bialgebra induces a Lie superalgebra. It can be proved that $\{a_{{\alpha},{\beta}}\mid {\alpha},{\beta}\in I\}$ induces a one-dimensional central extension of the Lie superalgebra. In general, Theorem 5.3 holds for any linear Hamiltonian superoperator of type 1. Let $0<n\in \Bbb{Z}$ and let $L$ be a vector space with a basis $\{\phi_i(m),c\mid m\in \Bbb{Z}/2,\; i=0,1,...,n-1\}$. Besides using (5.8-10) and (5.14), we also assume: $$\theta_ic=c\theta_i,\qquad cz_i=z_ic\qquad\qquad\mbox{for}\;\;i=1,2.\eqno(5.29)$$ By the example in Section 3, we have the following type-1 Hamiltonian superoperator $H$: $$H^1_{i,j}=H_{i,j}^0=\delta_{i,0}\delta_{j,0}D^5+(i+j+3)\Phi_{i+j}D^2+\Phi_{i+j}(2)D+(j+2)\Phi_{i+j}(3)\eqno(5.30)$$ for $i,j=0,1,...,n-1$. Here we have used the convention that $\Phi_l=0$ if $l\geq n$.
We define the operation $[\cdot,\cdot]$ on $L$ by: $$[u,c]=[c,u]=0\qquad\qquad\mbox{for}\;\;u\in L\eqno(5.31)$$ and $$\begin{aligned} && [\phi_i(\theta_1,z_1),\phi_j(\theta_2,z_2)]\\&=& z_2^{-1}\{ \delta_{i,0}\delta_{j,0}D_1^5\Delta_{1,2}c+(i+j+3)\phi_{i+j}(\theta_1,z_1)D_1^2\Delta_{1,2}\\& &+D_1(\phi_{i+j}(\theta_1,z_1))D_1\Delta_{1,2}+(j+2)D_1^2(\phi_{i+j}(\theta_1,z_1))\Delta_{1,2}\}\\&=& z_2^{-1}\{\delta_{i,0}\delta_{j,0}[\partial_{z_1}^2\delta(z_1/z_2)c-\partial_{z_1}^3\delta(z_1/z_2)c\theta_1\theta_2]+(i+j+3)[\phi_{i+j}^0(z_1)\theta_1+\phi_{i+j}^1(z_1)]\\& & (\theta_1-\theta_2)\partial_{z_1}\delta(z_1/z_2)+[\phi_{i+j}^0(z_1)-\partial_{z_1}(\phi_{i+j}^1(z_1))\theta_1][\delta(z_1/z_2)-\partial_{z_1}\delta(z_1/z_2)\theta_1\theta_2]\\& &+(j+2)[\partial_{z_1}(\phi_{i+j}^0(z_1))\theta_1+\partial_{z_1}(\phi_{i+j}^1(z_1))](\theta_1-\theta_2)\delta(z_1/z_2)\}\\&=&z_2^{-1}\{\delta_{i,0}\delta_{j,0}\partial_{z_1}^2\delta(z_1/z_2)c+\phi_{i+j}^0(z_1)\delta(z_1/z_2)\\&&+[(i+j+3)\phi_{i+j}^1(z_1)\partial_{z_1}\delta(z_1/z_2)+(j+1)\partial_{z_1}(\phi_{i+j}^1(z_1))\delta(z_1/z_2)]\theta_1\\& &-[(i+j+3)\phi_{i+j}^1(z_1)\partial_{z_1}\delta(z_1/z_2)+(j+2)\partial_{z_1}(\phi_{i+j}^1(z_1))\delta(z_1/z_2)]\theta_2\\& &-[\delta_{i,0}\delta_{j,0}\partial_{z_1}^3\delta(z_1/z_2)c+(i+j+4)\phi_{i+j}^0(z_1)\partial_{z_1}\delta(z_1/z_2)\\& &+(j+2)\partial_{z_1}(\phi_{i+j}^0(z_1))\delta(z_1/z_2)]\theta_1\theta_2\}\hspace{7.6cm}(5.32)\end{aligned}$$ for $i,j=0,1,...,n-1.$ Thus we have $$[\phi_i^1(z_1),\phi_j^1(z_2)]=z_2^{-1}[\delta_{i,0}\delta_{j,0}\partial_{z_1}^2\delta(z_1/z_2)c+\phi_{i+j}^0(z_1)\delta(z_1/z_2)],\eqno(5.33)$$ $$-[\phi_i^0(z_1),\phi_j^1(z_2)]=z_2^{-1}[(i+j+3)\phi_{i+j}^1(z_1)\partial_{z_1}\delta(z_1/z_2)+(j+1)\partial_{z_1}(\phi_{i+j}^1(z_1))\delta(z_1/z_2)],\eqno(5.34)$$ $$[\phi_i^1(z_1),\phi_j^0(z_2)]=-z_2^{-1}[(i+j+3)\phi_{i+j}^1(z_1)\partial_{z_1}\delta(z_1/z_2)+(j+2)\partial_{z_1}(\phi_{i+j}^1(z_1))\delta(z_1/z_2)],\eqno(5.35)$$ $$\begin{aligned} &
&[\phi_i^0(z_1),\phi_j^0(z_2)]\\&=&-z_2^{-1}[\delta_{i,0}\delta_{j,0}\partial_{z_1}^3\delta(z_1/z_2)c+(i+j+4)\phi_{i+j}^0(z_1)\partial_{z_1}\delta(z_1/z_2)\\& &+(j+2)\partial_{z_1}(\phi_{i+j}^0(z_1))\delta(z_1/z_2)].\hspace{8.4cm}(5.36)\end{aligned}$$ Note that (5.33-36) are equivalent to: $$\left[\phi_i\left(m+{1\over 2}\right),\phi_j\left(n+{1\over 2}\right)\right]=\delta_{i,0}\delta_{j,0}\delta_{m+n+1,0}(n+1)nc+\phi_{i+j}(m+n+1),\eqno(5.37)$$ $$\left[\phi_i\left(m+{1\over 2}\right),\phi_j(n)\right]=[(j+2)(m+1)-(i+1)(n+1)]\phi_{i+j}\left(m+n+{1\over 2}\right),\eqno(5.38)$$ $$\begin{aligned}& &\left[\phi_i(m),\phi_j(n)\right]\\&=&-\delta_{i,0}\delta_{j,0}\delta_{m+n+1,0}(n+1)n(n-1)c\\& &+[(j+2)(m+1)-(i+2)(n+1)]\phi_{i+j}(m+n)\hspace{3cm}(5.39)\end{aligned}$$ for $i,j=0,1,...,n-1;\;m,n\in \Bbb{Z}$. Therefore, we obtain a Lie superalgebra $(L,[\cdot,\cdot])$ that is a natural generalization of the Super-Virasoro algebra. [**Remark 5.4**]{}. (a) Lie superalgebras induced by Novikov-Poisson algebras whose Novikov algebras are simple were studied in \[X5\]. \(b) We still do not know how to connect linear Hamiltonian superoperators of type 0 with Lie superalgebras. [**References**]{} [\[BN\]]{} : A. A. Balinskii and S. P. Novikov, Poisson brackets of hydrodynamic type, Frobenius algebras and Lie algebras, [*Soviet Math. Dokl.*]{} Vol. [**32**]{} (1985), No. [**1**]{}, 228-231. [\[DGM\]]{} : L. Dolan, P. Goddard and P. Montague, Conformal field theory of twisted vertex operators, [*Nucl. Phys.*]{} [**B338**]{} (1990) 529-601. [\[Da1\]]{} : Yu. L. Daletsky, Lie superalgebras in Hamiltonian operator theory, In: [*Nonlinear and Turbulent Processes in Physics*]{}, ed. V. E. Zakharov, 1984, pp. 1307-1312. [\[Da2\]]{} : Yu. L. Daletsky, Hamiltonian operators in graded formal calculus of variables, [*Func. Anal. Appl.*]{} [**20**]{} (1986), 136-138. [\[De\]]{} : B. DeWitt, [*Supermanifolds,*]{} Second Edition, Cambridge University Press, 1992. [\[FFR\]]{} : A. J. Feingold, I. B. Frenkel and J. F.
Ries, Spinor construction of vertex operator algebras, triality and $E_8^{(1)}$, [*Contemp. Math.*]{} [**121**]{}, 1991. [\[FLM\]]{} : I. B. Frenkel, J. Lepowsky and A. Meurman, [*Vertex Operator Algebras and the Monster*]{}, Pure and Applied Math. Academic Press, 1988. [\[GDi1\]]{} : I. M. Gel’fand and L. A. Dikii, Asymptotic behaviour of the resolvent of Sturm-Liouville equations and the algebra of the Korteweg-de Vries equations, [*Russian Math. Surveys*]{} [**30:5**]{} (1975), 77-113. [\[GDi2\]]{} : I. M. Gel’fand and L. A. Dikii, A Lie algebra structure in a formal variational calculation, [*Func. Anal. Appl.*]{} [**10**]{} (1976), 16-22. [\[GDo\]]{} : I. M. Gel’fand and I. Ya. Dorfman, Hamiltonian operators and algebraic structures related to them, [*Func. Anal. Appl.*]{} [**13**]{} (1979), 248-262. [\[M\]]{} : P. Mathieu, Supersymmetry extension of the Korteweg-de Vries equation, [*J. Math. Phys.*]{} [**29**]{}(11) (1988), 2499-2507. [\[NO\]]{} : J. W. Negele and H. Orland, [*Quantum many-particle systems*]{}, Addison-Wesley Publishing Company, 1988. [\[O1\]]{} : J. Marshall Osborn, Novikov algebras, [*Nova J. Algebra*]{} & [*Geom.*]{} [**1**]{} (1992), 1-14. [\[O2\]]{} : J. Marshall Osborn, Simple Novikov algebras with an idempotent, [*Comm. Algebra*]{} [**20**]{} (1992), No. 9, 2729-2753. [\[O3\]]{} : J. Marshall Osborn, Infinite dimensional Novikov algebras of characteristic 0, [*J. Algebra*]{} [**167**]{} (1994), 146-167. [\[O4\]]{} : J. Marshall Osborn, Modules for Novikov algebras, [*Proceedings of the II International Congress on Algebra, Barnaul, 1991.*]{} [\[O5\]]{} : J. Marshall Osborn, Modules for Novikov algebras of characteristic 0, [*preprint*]{}. [\[T\]]{} : H. Tsukada, Vertex operator superalgebras, [*Comm. Algebra*]{} [**18**]{} (1990), 2249-2274. [\[X1\]]{} : X. Xu, On spinor vertex operator algebras and their modules, [*J. Algebra*]{} [**191**]{} (1997), 427-460. [\[X2\]]{} : X.
Xu, Hamiltonian operators and associative algebras with a derivation, [*Lett. Math. Phys.*]{} [**33**]{} (1995), 1-6. [\[X3\]]{} : X. Xu, Hamiltonian superoperators, [*J. Phys A: Math. Gen.*]{} [**28**]{} (1995), 1681-1698. [\[X4\]]{} : X. Xu, On simple Novikov algebras and their irreducible modules, [*J. Algebra*]{} [**185**]{} (1996), 905-934. [\[X5\]]{} : X. Xu, Novikov-Poisson Algebras, [*J. Algebra*]{} [**190**]{} (1997), 253-279. [\[X6\]]{} : X. Xu, Skew-symmetric differential operators and combinatorial identities, [*Mh. Math*]{} [**127**]{} (1999), 243-258. [\[Z\]]{} : E. I. Zel’manov, On a class of local translation invariant Lie algebras, [*Soviet Math. Dokl.*]{} Vol [**35**]{} (1987), No. [**1**]{}, 216-218. [^1]: 1991 Mathematical Subject Classification. Primary 17C 70, 81Q 60; Secondary 17A 30, 81T 60 [^2]: Research supported by the Direct Allocation Grant 4083 DAG93/94 from HKUST.
--- abstract: | Information about size and shape of particles produced in various manufacturing processes is very important for process and product development because design of downstream processes as well as final product properties strongly depend on these geometrical particle attributes. However, recovery of particle size and shape information in situ during crystallisation processes has been a major challenge. The focused beam reflectance measurement (FBRM) provides the chord length distribution (CLD) of a population of particles in a suspension flowing close to the sensor window. Recovery of size and shape information from the CLD requires a model relating particle size and shape to its CLD as well as solving the corresponding inverse problem. This paper presents a comprehensive algorithm which produces estimates of particle size distribution and particle aspect ratio from measured CLD data. While the algorithm searches for a global best solution to the inverse problem without requiring further a priori information on the range of particle sizes present in the population or aspect ratio of particles, suitable regularisation techniques based on relevant additional information can be implemented as required to obtain physically reasonable size distributions. We used the algorithm to analyse CLD data for samples of needle-like crystalline particles of various lengths using two previously published CLD models for ellipsoids and for thin cylinders to estimate particle size distribution and shape. We found that the thin cylinder model yielded significantly better agreement with experimental data, while estimated particle size distributions and aspect ratios were in good agreement with those obtained from imaging. address: - 'Department of Chemical and Process Engineering, University of Strathclyde, James Weir Building, 75 Montrose Street, Glasgow, G1 1XJ, United Kingdom.' 
- 'WestCHEM, Department of Pure and Applied Chemistry and Centre for Process Analytics and Control Technology, University of Strathclyde, 295 Cathedral Street, Glasgow, G1 1XL, United Kingdom' - 'Mettler-Toledo Ltd., 64 Boston Road, Beaumont Leys, Leicester, LE4 1AW, United Kingdom' - 'Department of Mechanical and Aerospace Engineering, University of Strathclyde, James Weir Building, 75 Montrose Street, Glasgow, G1 1XJ, United Kingdom. ' - 'Department of Mathematics and Statistics, University of Strathclyde, Livingstone Tower, 26 Richmond Street, Glasgow G1 1XH, United Kingdom ' author: - 'Okpeafoh S. Agimelen' - Peter Hamilton - Ian Haley - Alison Nordon - Massimiliano Vasile - Jan Sefcik - 'Anthony J. Mulholland' title: 'Estimation of Particle Size Distribution and Aspect Ratio of Non-Spherical Particles From Chord Length Distribution' --- Chord Length Distribution, Particle Size Distribution, Particle Shape, Focused Beam Reflectance Measurement. Introduction {#intro} ============ Manufacturing of particulate products in the pharmaceutical and fine chemicals industries includes various particle formation processes, such as crystallisation or granulation, together with downstream processing of the resulting suspensions or powders. Both downstream processing and final product properties are strongly dependent on geometrical particle attributes, most importantly size and shape. Design and operation of particle formation processes greatly benefit from in situ monitoring of particle size and shape, but it has been a major challenge to obtain reliable quantitative estimates of these key particle attributes, especially in cases where solid loadings are relatively high or sampling is challenging. There are numerous particle sizing techniques, including sieving, electrical zone sensing, laser diffraction, focused beam reflectance measurement (FBRM) and imaging [@Washington1992; @Heinrich2012].
While several techniques are well suited for determination of particle size distributions of spherical particles, there are significant challenges when particles become strongly non-isometric, such as in the case of needle-like or plate-like particles, which are ubiquitous in pharmaceutical manufacturing. Imaging is well suited for dealing with high aspect ratio particles, but accurate determination of particle size and shape by imaging typically requires highly diluted samples and/or specially designed flow cells, which makes imaging difficult to apply in situ under process conditions. Although laser diffraction and reflectance techniques provide information which is sensitive to particle shape, extracting accurate shape information has been challenging since appropriate models need to be used and corresponding inverse problems need to be solved. Suspensions also need to be relatively dilute for laser diffraction measurements in order to avoid multiple scattering effects. Reflectance techniques, such as FBRM, are particularly suitable for in situ monitoring of particles in suspensions during the manufacturing process. FBRM measures chord length distribution (CLD), which depends on both size and shape of particles present in a suspension. There have been considerable efforts [@Tadayyon1998; @Ruf2000; @Heath2002; @Wynn2003; @Worlitschek2003; @Worlitschek2005; @Li2005n1; @Li2005n2; @Vaccaro2006; @Kail2007; @Kail2008; @Kail2009; @Scheler2013] devoted towards obtaining useful information about particle geometrical attributes from this technique, leading to the development of suitable models [@Tadayyon1998; @Ruf2000; @Heath2002; @Wynn2003; @Worlitschek2003; @Worlitschek2005; @Li2005n1; @Vaccaro2006; @Kail2007; @Kail2008; @Kail2009; @Scheler2013] for CLDs for particles of various shapes in order to obtain particle size distributions from FBRM data. However, the inverse problem of retrieving size and shape information from FBRM data is non-trivial [@Heinrich2012].
The inverse problem is well-known to be ill-posed, i.e., there are potentially multiple solutions in terms of particle size distributions and shape which give essentially the same CLD within the accuracy of experimental data. Several regularisation approaches have been proposed to deal with this problem [@Worlitschek2005; @Li2005n1], but finding a global best solution for physically reasonable combinations of particle size distribution and shape remains a challenge. One important factor which can be used to constrain inverse problem solutions is the size range of particles used in the calculations. In the work of Ruf et al. [@Ruf2000], information about the particle size range was obtained by a laser diffraction technique and microscopy, while Worlitschek et al. [@Worlitschek2005], Li et al. [@Li2005n2], Li et al. [@Li2013; @Li2014] and Yu et al. [@Yu2008] obtained particle size range information by sieving. Also, Kail et al. [@Kail2009] obtained information about the particle size range in their population of particles from the manufacturer. However, information about particle size range may not be readily available or it may not be convenient to obtain this information a priori (for example in a manufacturing process). When moving from modelling the CLD of single particles to a population of particles of various sizes, it is necessary to properly account for size effects. It has been previously shown [@Simmons1999; @Hobbel1991; @Vaccaro2006] (see also section 3 of the supplementary information) that the probability of larger particles being detected by the FBRM probe is proportional to their characteristic size. While this effect has been taken into account in some previous studies [@Ruf2000; @Worlitschek2005], it has been neglected in others [@Li2005n1; @Li2005n2], which may introduce significant errors if the size range of particles in the population is relatively large.
Early CLD geometrical models [@Hobbel1991; @Simmons1999; @Langston2001; @Hukkanen2003; @Barrett1999] were based on populations of spherical particles [^1]. While these models can give reasonable estimates of particle sizes from measured CLD data, provided appropriate approaches are used to solve the corresponding inverse problems, they are not suitable for particles whose shapes deviate significantly from spherical. Even though there has been some progress in retrieving size and shape information from CLD data for populations of particles with different degrees of variation from spherical [@Ruf2000; @Worlitschek2005; @Li2005n1; @Li2005n2; @Noelle2006; @Nere2007], there has been no previous attempt to obtain size and shape information for populations of needle shaped particles, which are commonly present in pharmaceutical manufacturing (although Czapla et al. [@Czapla2010] calculated the CLD of needle shaped particles using a numerical model, the inverse problem was not solved). This is despite the fact that there are suitable geometrical models [@Li2005n1; @Vaccaro2006] available in the literature which can be used to obtain useful size and shape information for needle shaped particles from experimental CLD data. In this paper, we present an algorithm for estimating size and shape information for needle shaped particles from experimental CLD data. We use 2 D geometrical CLD models available in the literature which are suitable for opaque particles. However, the method presented here can be extended to different 2 D and 3 D geometrical and optical CLD models for particles of arbitrary shape and optical properties. Such models would need to account for possible discontinuities along the particles’ boundaries if the particles’ boundaries contain strong concavities (for example the case of particle clusters). More general models would also need to account for the optical properties of the particles if the particles are not opaque.
The optimum size range of particles in a population providing the best fit with the experimental CLD data can be directly determined by the algorithm in the case when no further information is available, although any external information on particle size range or shape can be utilised in the algorithm as needed. We compare results from our calculations with data obtained by dynamic image analysis and laser diffraction in order to assess the suitability and validity of the models used. ![Microscope images (magnification factor of $\times 150$) of samples of COA after undergoing different drying conditions in the vacuum agitated drier [@Hamilton2012]. The samples in (a) to (e) are labelled Sample 1 to Sample 5 in Figs. \[fig2\] and \[fig3\]. The white horizontal line on the bottom right of (a) indicates a length of $100\mu$m. Reproduced by permission of The Royal Society of Chemistry ([View Online](http://doi.org/10.1039/C1AN15836H)).[]{data-label="fig1"}](fig1.pdf){width="\textwidth"} ![(a) Volume based particle size distribution obtained with the Malvern Mastersizer and (b) unweighted number based chord length distribution from the FBRM probe for the samples in Fig. \[fig1\].[]{data-label="fig2"}](fig2.pdf){width="\textwidth"} ![(a) Volume based EQPC diameter, (b) maximum Feret diameter, and (c) minimum Feret diameter obtained by dynamic image analysis for the samples in Fig. \[fig1\]. (d) A measure of the degree of elongation (aspect ratio) of the needles in Fig. \[fig1\].[]{data-label="fig3"}](fig3.pdf){width="\textwidth"} Experimental Data {#mat} ================= For the purpose of demonstrating and validating our technique, we shall apply the method (to be described in subsequent sections) to data obtained in a previous study [@Hamilton2012]. Five samples (sample 1 to sample 5) of needle-shaped particles of cellobiose octaacetate (COA) that had been subjected to different drying conditions [@Hamilton2011] were analysed by laser diffraction, FBRM and dynamic image analysis.
The drying conditions used caused different degrees of particle attrition as shown in Fig. \[fig1\]. Samples were dispersed in $0.1\%$ Tween 80 (Sigma-Aldrich, UK) solution in water for all particle size measurements. Laser diffraction measurements were carried out using a Malvern Mastersizer 2000 (Malvern Instruments, UK). FBRM data were obtained using a Lasentec FBRM PI-12/206 probe. Dynamic image analysis was carried out using a QICPIC (Sympatec Ltd., UK) instrument with a LIXELL wet dispersion unit. Further experimental details for the particle size analysis techniques employed can be found in the previous study [@Hamilton2012]. The particle size distribution (volume weighted) estimated by laser diffraction, which assumes that the particles are spherical, for samples 1 to 5 is shown in Fig. \[fig2\](a). The CLD data obtained by FBRM for the five samples is shown in Fig. \[fig2\](b). The equivalent projected circle (EQPC) diameter (which is the diameter of a circle of equal area to the 2 D projection of a particle) distribution obtained by dynamic image analysis is shown in Fig. \[fig3\](a). The maximum Feret diameter (Feret Max)[^2] obtained using dynamic image analysis, which was shown to be a good indicator of needle length [@Hamilton2012], is shown in Fig. \[fig3\](b). In addition, the minimum Feret diameter (Feret Min), which is an indication of needle width, is shown in Fig. \[fig3\](c). The degree of elongation (aspect ratio) of the needles can be estimated by computing the ratio of the modes of the Feret Min distributions to the modes of the Feret Max distributions. The result ($F_{min}/F_{max}$) of this calculation is shown in Fig. \[fig3\](d). The data in Figs. \[fig2\] and \[fig3\] will be compared against estimated PSDs and aspect ratios obtained from the CLD data in Fig. \[fig2\](b) using the algorithm described in section \[invalg\].
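The mode-ratio aspect ratio estimate just described is a two-line computation once the Feret histograms are available. A minimal sketch (the bin centres and counts below are made up for illustration and are not the data of Fig. \[fig3\]):

```python
import numpy as np

def mode_of(bin_centres, counts):
    """Return the bin centre with the largest count (the mode of the histogram)."""
    return bin_centres[np.argmax(counts)]

# Made-up Feret-diameter histograms for a single hypothetical sample.
centres = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])  # micrometres
feret_min_counts = np.array([2, 30, 55, 10, 2, 1])
feret_max_counts = np.array([1, 2, 8, 25, 50, 14])

# Aspect ratio estimated as the ratio of the two modes, F_min / F_max.
aspect_ratio = mode_of(centres, feret_min_counts) / mode_of(centres, feret_max_counts)
print(aspect_ratio)  # 20/80 = 0.25
```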
Modelling Chord Length Distribution {#fbrm} =================================== The FBRM technology involves a laser beam which is focused onto a spot by a system of lenses. The focus spot is located near a sapphire window and it is rotated along a circular path at a speed of about 2ms$^{-1}$ [@Heinrich2012; @Kail2007; @Kail2008; @Worlitschek2003]. The assembly of lenses is enclosed in a tubular probe which is inserted into a slurry of dispersed particles. Particles passing near the probe window reflect light back into the probe which is then detected. It is assumed that the particles are much smaller than the diameter of the circular trajectory of the laser beam, and the particles move much more slowly than the speed of the laser spot [@Heinrich2012]. Hence the length of arc (taken to be a straight line) made by the laser spot on a particle from which light is back scattered is just a product of the speed of the laser spot and the duration of reflection [@Heinrich2012], and the corresponding chord length is recorded. Since the beam does not always pass through the centre of the particle, a range of chord lengths is recorded as a given particle encounters the beam multiple times. The FBRM device accumulates chord lengths across different particles present in the slurry for a duration pre-set by the user, after which it reports a chord length histogram, and this data is referred to as chord length distribution (CLD). Calculating CLD from PSD {#mod} ------------------------ The CLD and PSD are related to each other and the CLD obtained from a given particle depends on both its size and shape. This size and shape information is expressed in a kernel function $A(D,L)$ which defines the CLD of a single particle of characteristic size $D$. 
In a population of particles, the probability of a particle being detected is linearly proportional to its characteristic size [@Hobbel1991; @Simmons1999; @Ruf2000; @Worlitschek2005; @Vaccaro2006] (see also section 3 of the supplementary information). Hence the kernel needs to be weighted by the characteristic sizes of the particles in the population. The characteristic size of each particle is a monotonic function of some length scale associated with the particle [@Worlitschek2005]; this function depends on the shape of the particle [@Vaccaro2006]. For example, in the case of a population of spherical particles of different sizes the characteristic size is $D=2a_s$, where $a_s$ is the radius of a sphere. Thus the relationship between the CLD and PSD can be written as [@Hobbel1991] $$C(L) = \int_0^{\infty}A(D,L)DX(D)dD, \label{eq1}$$ where $C$ is the CLD of the particle population, $L$ is chord length and $X$ is the PSD expressed as a normalised number distribution. Equation \[eq1\] can be discretised and written in matrix form as [@Li2005n1; @Hobbel1991] $$\mathbf{C} = \mathbf{A\tilde{X}}, \label{eq2}$$ where $\mathbf{A}$ is a transformation matrix. The column vector $\mathbf{C}$ is the chord length histogram or CLD, while the column vector $\mathbf{\tilde{X}}$ is defined as $$\tilde{X}_i = \overline{D}_iX_i, \quad i = 1,2,3,\ldots ,N, \label{eq3}$$ where $\mathbf{\overline{D}}$ is the vector of characteristic sizes and $\mathbf{X}$ is the unknown PSD. The characteristic sizes $D_i$ make up the bin boundaries of the PSD $X_i$, and the characteristic size of the particles bounded by the bin boundaries $D_i$ and $D_{i+1}$ is given as $\overline{D}_i=\sqrt{D_iD_{i+1}}$.
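In discretised form, the forward model above is a single matrix-vector product once the kernel has been length-weighted. A minimal sketch (the $3\times 3$ kernel here is made up purely for illustration; a real kernel would come from one of the CLD models discussed below):

```python
import numpy as np

def forward_cld(A, D_bar, X):
    """Discretised forward model: C = A (D_bar * X).

    A     : (M, N) kernel matrix; A[j, i] is the probability that a chord
            from a particle of characteristic size D_bar[i] falls in bin j.
    D_bar : (N,) characteristic sizes; multiplying by D_bar applies the
            length weighting (detection probability grows with size).
    X     : (N,) number-based PSD.
    """
    return A @ (D_bar * X)

# Toy kernel, made up purely for illustration.
A = np.array([[0.5, 0.2, 0.1],
              [0.4, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
D_bounds = np.array([1.0, 2.0, 4.0, 8.0])      # size-bin boundaries D_i
D_bar = np.sqrt(D_bounds[:-1] * D_bounds[1:])  # geometric-mean characteristic sizes
X = np.array([0.5, 0.3, 0.2])                  # number-based PSD
C = forward_cld(A, D_bar, X)                   # predicted chord length histogram
```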
Equation \[eq2\] can be rearranged so that each component of $\mathbf{\overline{D}}$ multiplies a column of $\mathbf{A}$ to give $$\mathbf{C} = \mathbf{\tilde{A}X}, \label{eq4}$$ where $$\tilde{A}_j = [a_{j,1}\overline{D}_1~ a_{j,2}\overline{D}_2~ \ldots~ a_{j,i}\overline{D}_i~ \ldots~ a_{j,N}\overline{D}_N], \label{eq5}$$ represents column $j$ of $\mathbf{\tilde{A}}$. The matrix $\mathbf{A}$ is of dimension $M\times N$, where $M$ is the number of chord length bins in the histogram $\mathbf{C}$ and $N$ is the number of particle size bins in the histogram $\mathbf{X}$ [@Li2005n1]. The columns of matrix $\mathbf{A}$ are constructed as [@Li2005n1] $$A_j = [a_{j,1}~ a_{j,2}~ \ldots~ a_{j,i}~ \ldots~ a_{j,N}], \label{eq6}$$ where $$a_{j,i} = p_{\overline{D}_i}(L_j,L_{j+1}) \label{eq7}$$ is the probability that the length of a measured chord from a particle of characteristic size $\overline{D}_i$ lies between $L_j$ and $L_{j+1}$. The probabilities $p_{\overline{D}_i}(L_j,L_{j+1})$ for different particle sizes and chord length bins are calculated from appropriate probability density functions (PDFs). The PDFs employed in this work are those given by the Vaccaro-Sefcik-Morbidelli (VSM) model [@Vaccaro2006] and the Li-Wilkinson (LW) model [@Li2005n1]. The forward problem of calculating the CLD from a known PSD using Eq. \[eq4\] is trivial as it is mere matrix multiplication. However, the inverse problem of calculating the PSD from a known CLD is non-trivial. The solution vector $\mathbf{X}$ must meet the requirement of non-negativity; hence different techniques have been used in the past [@Worlitschek2005; @Li2005n1] to fulfil this requirement. There could also be errors in the solution vector $\mathbf{X}$ if the transformation matrix $\mathbf{A}$ is inaccurate. The accuracy of the matrix $\mathbf{A}$ depends on the particle size range and the model used in calculating the probabilities in Eq. \[eq7\]. Here we shall describe a technique to select the most appropriate particle size range.
The method employed here also guarantees the non negativity requirement of the solution vector $\mathbf{X}$. Appropriate models then need to be chosen based on any available information about the overall particle shape. In the case of needle-like particles considered here, we can use two analytical models available in the literature as discussed below. The VSM model ------------- The microscope images in Fig. \[fig1\] suggest that the shape of the particles could be represented by thin cylinders. The 2 D projections of these thin cylinders will look like the shapes in Fig. \[fig1\]. The cylindrical VSM model [@Vaccaro2006] gives a PDF $X_p^c$ which defines the relative likelihood that a chord taken from a cylindrical particle has a length between $L$ and $L+dL$. To this end, the model considers all possible 3 D orientations of each cylindrical particle and calculates chord lengths from each 2 D projection. The characteristic size of a cylinder is calculated by equating to the diameter of a sphere of equivalent volume. For a thin cylinder of height $a_c$, base radius $b_c$, aspect ratio $r_c = b_c/a_c$ and characteristic size $D_c = a_c\sqrt[3]{3r_c^2/2}$, the VSM model gives the probability $X_p^c$ (for $b_c/a_c \ll 1$) as [@Vaccaro2006] $$a^{\ast}X_p^c (L) = \begin{cases} \frac{1}{2}\frac{L}{\sqrt{r_c^2a_c^2-L^2}}\left(1 - \sqrt{1 - r_c^2}\right), & \forall L \in [0,r_ca_c[ \\ \frac{1}{\pi}\frac{r_c^2}{\sqrt{1 - \left(\frac{L}{a_c}\right)^2}} + \frac{1}{2\pi}\frac{a_c}{L}\frac{\frac{L}{a_c}\sqrt{1 - \left(\frac{L}{a_c}\right)^2}+\cos^{-1}\left(\frac{L}{a_c}\right)}{\frac{L}{r_ca_c}\sqrt{\left(\frac{L}{r_ca_c}\right)^2 - 1}} & \forall L \in ]r_ca_c, a_c[ \\ 0 & \forall L \in [a_c, \infty[, \end{cases} \label{eq8}$$ where $$a^{\ast} = \frac{a_c}{4} + \frac{1}{2}r_ca_c\left[1 - \sqrt{1 - r_c^2} + \frac{1}{2}r_c\left(1 - \frac{4}{\pi}\sin^{-1}(r_c)\right)\right] \label{eq9}$$ is a normalisation factor. 
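The piecewise expression in Eqs. \[eq8\]-\[eq9\] transcribes directly into code. The sketch below is one possible NumPy implementation of the thin-cylinder PDF $X_p^c$, including the normalisation factor $a^{\ast}$; the needle dimensions at the end are illustrative only:

```python
import numpy as np

def vsm_thin_cylinder_pdf(L, a_c, r_c):
    """Chord-length PDF X_p^c(L) of the VSM thin-cylinder model
    (valid for r_c = b_c/a_c << 1); a_c is the cylinder height."""
    # Normalisation factor a* of Eq. (9)
    a_star = a_c / 4.0 + 0.5 * r_c * a_c * (
        1.0 - np.sqrt(1.0 - r_c ** 2)
        + 0.5 * r_c * (1.0 - (4.0 / np.pi) * np.arcsin(r_c)))

    L = np.asarray(L, dtype=float)
    pdf = np.zeros_like(L)

    # Branch 1: 0 <= L < r_c * a_c (short chords across the needle width)
    m1 = L < r_c * a_c
    pdf[m1] = (0.5 * L[m1] / np.sqrt((r_c * a_c) ** 2 - L[m1] ** 2)
               * (1.0 - np.sqrt(1.0 - r_c ** 2)))

    # Branch 2: r_c * a_c < L < a_c (chords along the needle length)
    m2 = (L > r_c * a_c) & (L < a_c)
    u = L[m2] / a_c
    v = L[m2] / (r_c * a_c)
    pdf[m2] = ((r_c ** 2 / np.pi) / np.sqrt(1.0 - u ** 2)
               + (a_c / L[m2]) / (2.0 * np.pi)
               * (u * np.sqrt(1.0 - u ** 2) + np.arccos(u))
               / (v * np.sqrt(v ** 2 - 1.0)))

    # For L >= a_c the PDF is zero (chords cannot exceed the height).
    return pdf / a_star

# Illustrative needle: height 100 um, aspect ratio 0.05.
L = np.linspace(0.0, 120.0, 500)
pdf = vsm_thin_cylinder_pdf(L, a_c=100.0, r_c=0.05)
```

Integrating such a PDF over each chord length bin, as in the next equation, then gives the entries of the kernel matrix.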
Then the probability that the length of a measured chord from a particle of size $D_c$ falls in the bin bounded by $L_j$ and $L_{j+1}$ is calculated as $$p_{c\overline{D}_i}(L_j,L_{j+1}) = \int_{L_j}^{L_{j+1}} X_p^c(L)dL. \label{eq10}$$ The integration in Eq. \[eq10\] is performed numerically. The LW model ------------ In this case, we approximate the shape of the needles in Fig. \[fig1\] by thin ellipsoids. The model considers 2 D projections of each ellipsoid with its major and minor axes parallel to the projection plane, so that each projection is an ellipse of semi major axis length $a_e$, semi minor axis length $b_e$ and aspect ratio $r_e=b_e/a_e$. The length of a chord on this ellipse depends on the angle $\alpha$ between the chord and the $x$ axis (where the projection plane is the $x-y$ plane) [@Li2005n1]. Hence the PDF for such an ellipse is angle dependent. The PDFs for different values of $\alpha$ are given by the LW model as [@Li2005n1]:\ for $\alpha = 0$ or $\pi$ $$p_{e\overline{D}_i}(L_{j,\alpha},L_{j+1,\alpha}) = \begin{cases} \sqrt{1 - \left(\frac{L_j}{2a_{ei}} \right)^2} - \sqrt{1 - \left(\frac{L_{j+1}}{2a_{ei}} \right)^2}, & \textrm{for}~ L_j < L_{j+1} \leq 2a_{ei} \\ \sqrt{1 - \left(\frac{L_j}{2a_{ei}}\right)^2}, & \textrm{for}~ L_j \leq 2a_{ei} < L_{j+1} \\ 0, & \textrm{for}~ 2a_{ei} < L_j < L_{j+1}, \end{cases} \label{eq11}$$ for $\alpha = \pi/2$ or $3\pi/2$ $$p_{e\overline{D}_i}(L_{j,\alpha},L_{j+1,\alpha}) = \begin{cases} \sqrt{1 - \left(\frac{L_j}{2r_ea_{ei}} \right)^2} - \sqrt{1 - \left(\frac{L_{j+1}}{2r_ea_{ei}} \right)^2}, & \textrm{for}~ L_j < L_{j+1} \leq 2r_ea_{ei} \\ \sqrt{1 - \left(\frac{L_j}{2r_ea_{ei}}\right)^2}, & \textrm{for}~ L_j \leq 2r_ea_{ei} < L_{j+1} \\ 0, & \textrm{for}~ 2r_ea_{ei} < L_j < L_{j+1}, \end{cases} \label{eq12}$$ for other values of $\alpha$ $$p_{e\overline{D}_i}(L_{j,\alpha},L_{j+1,\alpha}) = \begin{cases} \sqrt{1 - \frac{r_e^2 + s^2}{1 + s^2}\left(\frac{L_j}{2r_ea_{ei}} \right)^2} & \\ - \sqrt{1 - \frac{r_e^2
+ s^2}{1 + s^2}\left(\frac{L_{j+1}}{2r_ea_{ei}} \right)^2}, & \textrm{for}~ L_j < L_{j+1} \leq 2r_ea_{ei}\sqrt{\frac{1 + s^2}{r_e^2 + s^2}} \\ \sqrt{1 - \frac{r_e^2 + s^2}{1 + s^2}\left(\frac{L_j}{2r_ea_{ei}}\right)^2}, & \textrm{for}~ L_j \leq 2r_ea_{ei}\sqrt{\frac{1+s^2}{r_e^2+s^2}} < L_{j+1} \\ 0, & \textrm{for}~ 2r_ea_{ei}\sqrt{\frac{1+s^2}{r_e^2+s^2}} < L_j < L_{j+1}, \end{cases} \label{eq13}$$ where $s=\tan{(\alpha)}$. The angle independent PDF is then given as $$p_{e\overline{D}_i}(L_j,L_{j+1}) = \frac{1}{2\pi}\int_0^{2\pi} p_{e\overline{D}_i}(L_{j,\alpha},L_{j+1,\alpha})d\alpha. \label{eq14}$$ Equation \[eq14\] allows the construction of the transformation matrix $\mathbf{A}$ in Eq. \[eq2\], which can be converted to the matrix $\tilde{\mathbf{A}}$ as described in Eq. \[eq5\]. The matrix $\tilde{\mathbf{A}}$ is then used to solve the inverse problem. The LW model constructs the PDF of an ellipsoidal particle by considering only one 2 D projection of the ellipsoid where the major axis is parallel to the projection plane. Hence the characteristic size $D_e$ of the resulting ellipse is taken as the diameter of a circle of equivalent area. Using $r_e = b_e/a_e$, the characteristic size is given as $D_e=2a_e\sqrt{r_e}$. ![Pictorial representation of the bins and bin boundaries of the CLD histogram showing a window of size $S$ at the first two positions set by $p=1$ and $p=2$ shifted by $q$. The window is moved in such a way that some of the bins contained in the window at $p=1$ overlap some of the bins of the window at $p=2$.[]{data-label="fig4"}](fig4.pdf){width="\textwidth"} Inversion Algorithm {#invalg} =================== As mentioned in the introduction, one important factor which can be used to constrain inverse problem solutions is the size range ($D_{min}$ to $D_{max}$) of particles used in the calculations, where $D_{min}$ is the smallest particle size and $D_{max}$ is the largest particle size in the population.
Since this information is not always readily available, we introduce an inversion algorithm which is capable of automatically determining the best values of $D_{min}$ and $D_{max}$ to solve the inverse problem. We use the bin boundaries of the chord length histogram to specify the size range boundaries $D_{min}$ and $D_{max}$. A number $S$ of consecutive bins of the chord length histogram are chosen; these bins make up a window of width $S$. This means that the width (or size) of a window is the number of bins contained within that window. The geometric mean of the first two bin boundaries of a window is taken as $D_{min}$ and the geometric mean of the last two bin boundaries of a window is taken as $D_{max}$. The procedure is outlined below. The boundaries of the chord length histogram are labelled as $$L_j, ~ j=1,2,3, \ldots, M+1 \label{eq15}$$ as illustrated in Fig. \[fig4\]. The characteristic chord length $\overline{L}_j$ of bin $j$ is the geometric mean of the chord lengths of its boundaries $$\overline{L}_j = \sqrt{L_jL_{j+1}}. \label{eq16}$$ At the beginning of the calculation, the first $w$ bins of the chord length histogram are chosen, so that $S = w$, $D_{min} = \overline{L}_1$ and $D_{max} = \overline{L}_w$. After the first iteration (see steps 5 to 9 in the algorithm below), a new set of bin boundaries is selected. This new set of bin boundaries is made up of the same number of bins $S$ as the previous set, but it is shifted to the right of the previous set by an amount $q$. That is, there are $q$ bins between the beginning of the first set of bins and the beginning of the second set. The shift is made in such a way that the two sets of bins overlap each other (that is, $q<S$). For example, in the case illustrated in Fig. \[fig4\], the window initially runs from bin boundary $L_1$ to bin boundary $L_4$. At this position, the window contains bins $\overline{L}_1$ to $\overline{L}_3$ so that the width of the window is $S = 3$.
At the end of the first iteration, a new set of bins is chosen, this time starting from bin boundary $L_3$ and ending at bin boundary $L_6$ as in Fig. \[fig4\]. The number of bins in the new set of bins (or window) is the same as before, $S = 3$. Each window (or set of bins) is identified by its position index $p$. In the case shown in Fig. \[fig4\], the value of the first position index is $p = 1$ and the value of the second position index is $p = 2$. There are two bins between the beginning of the window at $p = 1$ and the beginning of the window at $p = 2$ so that $q = 2 < S$. At the end of the second iteration, the window is shifted to the right again, while maintaining fixed values of $S$ and $q$. This process continues until the last bin boundary of the chord length histogram is reached. Each time a set of bins is chosen, the values of $D_{min}$ and $D_{max}$ are calculated as $$\begin{aligned} D_{min} & = \overline{L}_1\beta^{(p-1)q} \\ D_{max} & = D_{min}\beta^{\left(S-1\right)}, \end{aligned}$$ \[eq17\] where $\beta = \overline{L}_{j+1}/\overline{L}_j$. The position index of the windows takes values $$p = 1,2,3, \ldots, \left\lfloor \frac{M}{q} \right\rfloor \label{eq18},$$ where the floor function $\lfloor \cdot \rfloor$ returns the value of the largest integer that is less than or equal to $M/q$. Once the values of $D_{min}$ and $D_{max}$ have been calculated from Eq. \[eq17\], the particle size bins are constructed. The bin boundaries $D_i$ of the particle size bins are calculated as $$D_i = D_{min}\mu^{i-1}, \quad i=1,2, \ldots , N+1 \label{eq19}$$ where $$\mu = \left(\frac{D_{max}}{D_{min}}\right)^{\frac{1}{N}} \label{eq20},$$ where $N$ is the chosen number of particle size bins. The characteristic size of a particle size bin is calculated as $$\overline{D}_i = \sqrt{D_iD_{i+1}}.
\label{eq21}$$ Once the characteristic particle sizes $[\overline{D}_1,\overline{D}_N]$ have been constructed, the transformation matrix $\tilde{\mathbf{A}}$ can be constructed (for a chosen aspect ratio) as in Eq. \[eq5\]. The chord lengths reported by the FBRM sensor run from $1\mu$m to $1000\mu$m. However, the particle size range $[\overline{D}_1,\overline{D}_N]$ set by a window will not necessarily cover the entire size range of $1\mu$m to $1000\mu$m. To account for the other sizes that may not be covered by a window, the length weighted transformation matrix $\tilde{\mathbf{A}}$ is augmented with columns of ones as appropriate. Then the particle sizes are extended to the left of $\overline{D}_1$ down to $1\mu$m and to the right of $\overline{D}_N$ up to $1000\mu$m as appropriate. This ensures that the recovered PSD covers the entire particle size range from $1\mu$m to $1000\mu$m. The process of augmenting the transformation matrix with columns of ones corresponds to the addition of slack variables in an optimisation problem [@Boyd2004] (see also section 1 of the supplementary information). To guarantee a non-negative PSD, the vector $\mathbf{X}$ is written as [@Kaasaleinen2001] $$X_i = e^{\gamma_i},~i = 1,2,3, \ldots, N, \label{eq22}$$ where $\gamma_i$ are arbitrary fitting parameters. Then Eq. \[eq4\] is rewritten as $$\mathbf{C} = \mathbf{\tilde{A}X}+\mbox{\boldmath{$\epsilon$}}, \label{eq23}$$ where $\mbox{\boldmath{$\epsilon$}}$ is an additive error between the model prediction and the actual measurement. The vector $\mathbf{X}(r)$ at the chosen aspect ratio $r$ is then obtained by searching for the $\gamma_i$ which minimise the objective function $f_1$ given as[^3] $$f_1 = \sum_{j=1}^{M}{\left[C_j^{\ast} - \sum_{i=1}^N{\tilde{A}_{ji}X_i}\right]^2}, \label{eq24}$$ where $C_j^{\ast}$ is the experimentally measured CLD. This nonlinear least squares problem was solved with the Levenberg-Marquardt (LM) algorithm (implemented in Matlab in this work).
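The exponential substitution of Eq. \[eq22\] combined with the objective of Eq. \[eq24\] can be sketched as follows. The paper's implementation is in Matlab; here scipy's MINPACK-based Levenberg-Marquardt routine stands in for it, and the small kernel matrix is synthetic, made up purely to exercise the code:

```python
import numpy as np
from scipy.optimize import least_squares

def invert_cld(A_tilde, C_star, tol=1e-6):
    """Recover a non-negative PSD from a measured CLD C_star.

    Writes X_i = exp(gamma_i), so X is positive by construction, and
    minimises the sum of squared residuals ||A_tilde X - C_star||^2
    with a Levenberg-Marquardt solver."""
    M, N = A_tilde.shape

    def residual(gamma):
        return A_tilde @ np.exp(gamma) - C_star

    sol = least_squares(residual, x0=np.zeros(N), method='lm', ftol=tol)
    return np.exp(sol.x)  # non-negative by construction

# Synthetic self-check: build a CLD from a known PSD, then recover it.
rng = np.random.default_rng(0)
A_tilde = rng.uniform(0.1, 1.0, size=(12, 4))  # made-up weighted kernel
X_true = np.array([0.4, 0.3, 0.2, 0.1])
C_star = A_tilde @ X_true
X_est = invert_cld(A_tilde, C_star)            # should be close to X_true
```

In the full algorithm this fit is repeated for every window position, window size and aspect ratio, with the residual norm used to rank candidates.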
Then, starting from an initial value[^4] of the vector $\gamma_i$, the LM algorithm iterates until an optimum $\gamma_i$ is reached. The iterations are terminated when a specified tolerance in the difference between successive function evaluations is reached. In this case a tolerance of $10^{-6}$ was used, since the results did not change for tolerances below $10^{-4}$. An initial value of $\mbox{\boldmath{$\gamma = 0$}}$ was used in the LM algorithm. The solution vector $\mathbf{X}$ obtained this way (using Eq. ) is dependent on the chosen aspect ratio $r$ (hence $\mathbf{X} = \mathbf{X}(r)$), window size $S$ and window position $p$. Thus, starting with a window of a chosen size[^5] and position set at $p=1$, a solution vector $\mathbf{X}(r)$ is obtained for the chosen aspect ratio. Then the forward problem is solved to obtain a CLD $\mathbf{C}(r)$ at that aspect ratio and window position $p=1$. The window position is advanced one step forward and the calculation repeated until the last bin of the chord length histogram is reached. The window position at which the $L_2$ norm $$\left\|\mathbf{C}^{\ast}-\mathbf{C}(r)\right\| \label{eq25}$$ is minimised is the optimum window position for that window size. This optimum window then sets the particle size range used to construct the optimum transformation matrix $\mathbf{\tilde{A}}$ at that window size. The case of $S=20$ applied to the CLD from Sample 1 (using the LW model) is shown in Fig. \[fig5\]. The procedure is repeated using windows of different sizes, and eventually the optimum window size and position which set the particle size range for the chosen aspect ratio are obtained. The whole process is repeated at different aspect ratios, and for each aspect ratio the particle size range is obtained from the optimum window size and position. ![An example of the minimisation of the $L_2$ norm in Eq.
when a window of a given size approaches and passes its optimum position along the bin boundaries of the chord length histogram.[]{data-label="fig5"}](fig5.pdf){width="50.00000%"} The key parameters of the algorithm are the quantities $r$, $S$, $q$ and $N$. An extensive study (see section 2 of the supplementary information) has shown that a value of $N=70$ is suitable for the two models implemented here. The algorithm starts with an initial window size $S$ after which the window size is increased. In section 2 of the supplementary information it was demonstrated that initial values of $S$ from 2 up to 50 give consistent results for $N\gtrsim 60$. However, an initial value of $S=6$ was used in all the calculations here for greater accuracy. The smallest value of $q$ that can be used is $q=1$; however, a value of $q=2$ was used here since there is no significant change in the level of accuracy at $q=1$. The value $q=1$ only leads to greater resolution, as can be seen in Fig. \[fig5\]. Once the initial value of $S$ and the values of $q$ and $N$ have been fixed, the algorithm loops through subsequent values of $S$ at all desired values of $r$ as summarised below: 1. [Choose an aspect ratio $r$.]{} 2. [Choose a number $S$ of bins of the chord length histogram.]{} 3. [Start at window position $p=1$.]{} 4. [Obtain the values of $D_{min}$ and $D_{max}$ dictated by the window at the position set by $p$.]{} 5. [Construct matrix $\tilde{\mathbf{A}}$ corresponding to the values of $D_{min}$ and $D_{max}$ in step 4.]{} 6. [Augment matrix $\tilde{\mathbf{A}}$ with columns of ones and extend the particle size range as necessary.]{} 7. [Implement the LM algorithm to calculate $\mbox{\boldmath{$\gamma$}}$ starting with $\mbox{\boldmath{$\gamma = 0$}}$, and then calculate $\mathbf{X}(r)$ from Eq. .]{} 8. [Calculate $\mathbf{C}(r)$ from Eq. .]{} 9. [Calculate the $L_2$ norm in Eq. for the given values of $r$, $S$ and $p$.]{} 10.
[Update $p$ and repeat steps 4 to 9 for the same values of $r$ and $S$ until the last bin of the chord length histogram is reached.]{} 11. [Choose the best window position (the window position with the minimum $L_2$ norm as in Fig. \[fig5\]) for the given values of $r$ and $S$.]{} 12. [Update the window size $S$ and repeat steps 3 to 11.]{} 13. [For a given $r$ obtain the window position and size at which the $L_2$ norm in Eq. attains its minimum. Record the particle size range corresponding to this window position and size.]{} 14. [Update $r$ and repeat steps 2 to 13.]{} The values of $S$ used in the algorithm depend on the desired level of accuracy. Closely spaced values of $S$ result in greater accuracy at the cost of increased computational time, whereas widely spaced values of $S$ reduce the computational time but give less accurate results. The window sizes are calculated as $$S_k = S_0 + \left\lfloor \left(k - 1\right) \frac{M}{N_w}\right\rfloor \label{eq26},$$ where $\lfloor \cdot \rfloor$ is the floor function discussed in Eq. , $S_0$ is the initial window size and $N_w < M$ is the desired number of windows. A value of $N_w = 50$ was used in the calculations here. The values of $r$ chosen depend on the desired range of aspect ratios to explore. Having obtained the optimum particle size ranges at different aspect ratios for a particular sample, the optimum aspect ratio for that sample can be chosen using a suitable procedure. The simplest procedure would be to pick the aspect ratio at which the $L_2$ norm reaches its global minimum. However, the simulations show (see section 6 of supplementary information) that when the number of particle size bins is large enough the $L_2$ norm in Eq. does not show a clear global minimum. Instead, it decreases with increasing aspect ratio and then levels off after some critical aspect ratio. Hence unique shape information cannot be obtained using the objective function in Eq. .
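The set of window sizes and positions swept by steps 1 to 14 above (Eqs. (18) and (26)) can be generated as follows. This is a minimal Python sketch (the implementation in this work is in Matlab), with function names of our own choosing:

```python
import numpy as np

def window_sizes(S0, M, Nw):
    """Window sizes explored by the algorithm (Eq. 26):
    S_k = S0 + floor((k - 1) * M / Nw), k = 1 .. Nw."""
    k = np.arange(1, Nw + 1)
    return S0 + ((k - 1) * M) // Nw   # integer floor division

def window_positions(M, q):
    """Admissible window position indices (Eq. 18):
    p = 1, 2, ..., floor(M / q)."""
    return np.arange(1, M // q + 1)
```

With $S_0 = 6$, $M = 100$ chord-length bins and $N_w = 50$ windows, this yields window sizes $6, 8, 10, \ldots$ and, for $q = 2$, position indices $1$ to $50$.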
This problem of non-uniqueness can be removed if the shape of the recovered PSD ($X_i$ in Eq. ) is taken into account. As the aspect ratio deviates further from some reasonable value for a particular sample, the recovered PSD shows an increasing degree of oscillation. These oscillations can occur even when the aspect ratio is physically reasonable, but they become more pronounced as the aspect ratio deviates from realistic values. To address this issue, one can introduce a modified objective function which suppresses these oscillations by penalising the magnitude of the PSD. Here we use a new objective function $f_2$ given as $$f_2 = \sum_{j=1}^{M}{\left[C_j^{\ast} - \sum_{i=1}^N{\tilde{A}_{ji}X_i}\right]^2} + \lambda\sum_{i=1}^N{X_i^2}, \label{eq27}$$ where the parameter $\lambda$ sets the level of the penalty imposed on the norm of the PSD. The value of $\lambda$ is chosen by comparing the relative magnitudes of the two sums of squares in Eq. (see section 6 of supplementary information for more details). The optimum particle size ranges at different aspect ratios obtained using the inversion algorithm above are used to construct the transformation matrix $\tilde{\mathbf{A}}$ (in Eq. ) at the corresponding aspect ratios. The optimum aspect ratio is chosen as the value of $r$ at which the objective function $f_2$ reaches its global minimum for a carefully chosen value of $\lambda$. The corresponding PSD at which $f_2$ reaches its global minimum is then chosen as the optimum PSD. For a meaningful comparison of the calculated PSD with experimentally measured PSDs from laser diffraction and imaging, it is necessary that the calculated PSD be cast as a volume based distribution. This is because some instruments report the PSD in terms of a volume based distribution, for example Figs. \[fig3\](a), \[fig3\](b) and \[fig3\](c).
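Evaluation of the regularised objective of Eq. (27), using the positivity-preserving substitution $X_i = e^{\gamma_i}$ of Eq. (22), can be sketched as below. In this work the objective is minimised with a Levenberg-Marquardt routine in Matlab; here only the function evaluation is shown, in Python, and the names are illustrative:

```python
import numpy as np

def f2(gamma, A, C_star, lam):
    """Regularised objective of Eq. (27): squared CLD misfit plus a
    penalty lam * ||X||^2 on the norm of the recovered PSD, with
    X_i = exp(gamma_i) guaranteeing a non-negative PSD (Eq. 22)."""
    X = np.exp(gamma)            # recovered (number based) PSD, X > 0
    resid = C_star - A @ X       # misfit against the measured CLD
    return np.sum(resid ** 2) + lam * np.sum(X ** 2)
```

Setting $\lambda = 0$ recovers the unregularised objective $f_1$ of Eq. (24); any LM or quasi-Newton routine can then be used to minimise $f_2$ over $\gamma$.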
The volume based PSD $\mathbf{X}^v$ given by [@Holdich2002] $$X^v_i = \frac{X^o_i\overline{D}^3_i}{\sum_i^N{X^o_i\overline{D}^3_i}}, \label{eq28}$$ (where $\mathbf{X}^o$ is the optimum number based PSD which minimises the objective function $f_2$ in Eq. ) could lead to artificial peaks at large particle sizes if there are small fluctuations in the right hand tail of the number based PSD estimates (see section 5 of supplementary information). These fluctuations are usually very small, with an amplitude of the order of $0.1\%$ of the peak of the number based PSD $X_i$ in Eq. . Because the amplitude of the fluctuations is small, they are not removed by the penalty term in Eq. . Also, the level of penalty imposed on the recovered number based PSD needs to be kept at a reasonable level so that the recovered PSD does not get skewed. This situation requires that a suitable regularisation, such as the $\lambda$ parameter in Eq. , be applied to the recovered volume based PSD. This problem can be addressed by restating the inverse problem as follows:\ Calculate the CLD $C^o_j$ given by $$C_j^o = \tilde{A}^o_{ji}\hat{X}^o_i, \label{eq29}$$ where $\tilde{A}^o_{ji}$ is the optimum transformation matrix obtained by the inversion algorithm and $$\hat{X}^o_i = \frac{X^o_i}{\sum_i^N{X_i^o}}. \label{eq30}$$ If the volume based PSD $X^v_i$ were known, then the CLD $C_j^o$ could also be calculated from $$C^o_j = \overline{A}^o_{ji}\overline{X}^v_i, \label{eq31}$$ where $$\begin{aligned} \overline{A}^o_{ji} & = \frac{\tilde{A}^o_{ji}}{\overline{D}^3_i} \\ X^v_i & = \frac{\hat{X}^o_i\overline{D}^3_i}{\sum_i^N{\hat{X}^o_i\overline{D}^3_i}} \\ \overline{X}^v_i & = \left[\sum_{i}^{N}{X^o_i\overline{D}^3_i}\right]X^v_i. \end{aligned}$$ \[eq32\] Equation is the forward problem for the volume based PSD, similar to the case of Eq. for the number based PSD. However, since the volume based PSD is not known, an objective function similar to $f_2$ in Eq.
can be formulated to recover the volume based PSD. This objective function $f_3$ is given as ![(a)The minimum values of the objective function in Eq. versus the aspect ratio (the minimum values of the objective function for all window sizes and positions for each sample indicated with symbols as: Sample 1 - red pentagrams, Sample 2 - green crosses, Sample 3 - blue asterisks, Sample 4 - magenta squares, Sample 5 - black circles) obtained with the VSM model. (b)The aspect ratios ($Min~r_c$) at which the objective function reaches a global minimum for each Sample obtained with the VSM model. (c)Similar to (a) obtained with the LW model. (d)Similar to (b) with the LW model.[]{data-label="fig6"}](fig6.pdf){width="\textwidth"} $$f_3 = \sum_{j=1}^M{\left[C_j^o - \sum_{i=1}^N{\overline{A}^o_{ji}\overline{X}^v_i} \right]^2} + \lambda\sum_{i=1}^N{\left[\,\overline{X}_i^v\right]^2}. \label{eq33}$$ This allows $\overline{X}^v_i$ (which, by Eq. (c), is determined up to a weighting factor) to be calculated as $$\overline{X}^v_i = e^{\gamma^v_i},~i = 1,2, \ldots,N, \label{eq34}$$ where the $\gamma_i^v$ are arbitrary parameters used to minimise the objective function $f_3$ for a carefully chosen $\lambda$. The weighted volume based PSD is then normalised and made grid independent as $$\tilde{X}^v_i = \frac{\overline{X}^v_i}{\left(D_{i+1}-D_i\right)\sum_i^N{\overline{X}^v_i}}. \label{eq35}$$ Results and Discussion {#res} ====================== Once the optimum particle size ranges at the different aspect ratios have been obtained using the inversion algorithm, the optimum aspect ratio for each sample can be determined by selecting the aspect ratio at which the objective function $f_2$ (in Eq. ) reaches its global minimum. The objective function $f_2$ at different aspect ratios $r_c\in [0,0.4]$ for the five samples in Fig. \[fig1\] is shown in Fig. \[fig6\](a) for the case of the VSM model[^6]. The function $f_2$ reaches its global minimum at $r_c\approx 0.2$ as in Fig.
\[fig6\](b). The calculations with the VSM model were restricted to the range $r_c\in [0, 0.4]$ because the thin cylindrical VSM model is only valid for $r_c \ll 1$ [@Vaccaro2006]. Figure \[fig6\](c) shows a similar result to Fig. \[fig6\](a) for the same samples in Fig. \[fig1\] for the case of the LW model. The function $f_2$ reaches its global minimum at $r_e\approx 0.3$ as in Fig. \[fig6\](d). In this case, the aspect ratios $r_e$ cover a broader range $r_e\in [0, 1]$ since the LW model is valid for $r_e\in [0, 1]$. The aspect ratios predicted by the VSM and LW models in Figs. \[fig6\](b) and \[fig6\](d) are comparable to the aspect ratios estimated from image data in Fig. \[fig3\](d), although the calculated aspect ratios appear slightly higher. The aspect ratios predicted by the VSM model in Fig. \[fig6\](b) are closer to the estimated aspect ratios in Fig. \[fig3\](d) than the aspect ratios predicted by the LW model in Fig. \[fig6\](d). This could be because the cylindrical shape used in the VSM model is closer to the shape of the particles in Fig. \[fig1\] than the ellipsoidal shape used in the LW model. We also note that the VSM model gives a much lower error norm than the LW model for the same aspect ratio, as seen in Figs. \[fig6\](a) and \[fig6\](c), and this is also the case when $\lambda = 0$ (see section 6 of supplementary information). The effect of shape on the level of accuracy reached in the calculations is demonstrated by the fact that when the LW model is applied to a system of spherical particles (section 6 of supplementary information), the error norm obtained in that case is comparable to the error norm obtained when the VSM model is applied to the needle particles. ![(a)The recovered volume based PSDs calculated from the objective function in Eq. for $\lambda = 0$ (with the VSM model) at the minimum aspect ratios (shown in Fig. \[fig6\](b)) for each Sample.
(b)-(f)Calculated (symbols) and measured (solid line) Chord Length Distributions for the Samples indicated in each Figure. The calculated CLDs were obtained by solving the forward problem in Eq. using the number based PSD which minimise the objective function in Eq. for $\lambda=0.01$.[]{data-label="fig7"}](fig7.pdf){width="\textwidth"} Figure \[fig7\](a) shows the recovered volume based PSDs calculated by minimising the objective function $f_3$ in Eq. using the optimum aspect ratios in Fig. \[fig6\](b)[^7] for the case of the VSM model. The transformation matrix $\tilde{\mathbf{A}}^o$ used in Eq. was constructed using the optimum particle size range obtained by the inversion algorithm and the aspect ratios shown in Fig. \[fig6\](b). The matrix $\tilde{\mathbf{A}}^o$ is then weighted as in Eq. (a) to obtain the matrix $\overline{\mathbf{A}}^o$. The volume based PSDs $\tilde{\mathbf{X}}^v$, normalised and rescaled as in Eq. , are shown in Fig. \[fig7\](a). The PSDs in Fig. \[fig7\](a) are shown as a function of the characteristic particle size $D_c$. These PSDs can be compared to the data from laser diffraction in Fig. \[fig2\](a) and the EQPC diameter in Fig. \[fig3\](a). The particle sizes in Fig. \[fig7\](a) cover a range of $D_c\approx 7\mu$m to $D_c\approx 200\mu$m. The modes of the distributions cover a range of $D_c\approx 40\mu$m to $D_c\approx 70\mu$m, with the sizes increasing from sample 1 to sample 5. This is consistent with the data from laser diffraction in Fig. \[fig2\](a) where the diameters cover a range of about $2\mu$m to about $200\mu$m. The modes of the distributions cover a range of about $10\mu$m to about $30\mu$m with the particle sizes increasing from sample 1 to sample 5. Similarly, the EQPC diameters in Fig. \[fig3\](a) cover a range of about $10\mu$m to about $200\mu$m with the modes running from about $30\mu$m to about $100\mu$m, and the sizes increasing from sample 1 to sample 5. The peaks of the PSDs from the laser diffraction in Fig.
\[fig2\](a) and EQPC diameters in Fig. \[fig3\](a) decrease from sample 1 to sample 5, which is consistent with the results reported in Fig. \[fig7\](a). The symbols in Figs. \[fig7\](b) to \[fig7\](f) show the calculated (using the VSM model) CLDs for the five samples in Fig. \[fig1\]. The CLDs were calculated from Eq. using the number based PSD which minimises the objective function $f_2$ in Eq. . The calculations were done at the optimum aspect ratios in Fig. \[fig6\](b). The blue solid lines in Figs. \[fig7\](b) to \[fig7\](f) are the experimentally measured CLDs for the five samples shown in Fig. \[fig2\](b). The agreement between the calculated CLDs and the experimentally measured CLDs in Figs. \[fig7\](b) to \[fig7\](f) is near perfect. This level of agreement between the calculated PSD and CLD and the experimentally measured PSD and CLD demonstrates the level of accuracy that can be achieved with this algorithm. ![Similar to Fig. \[fig7\] obtained with the LW model. In this case the volume weighted PSDs were obtained at $\lambda = 10^{-14}$ from Eq. , while the CLDs correspond to the number based PSD obtained at $\lambda = 0.2$ from Eq. .[]{data-label="fig8"}](fig8.pdf){width="\textwidth"} ![Particle lengths for the five samples calculated with (a) the VSM model and (b) the LW model.[]{data-label="fig9"}](fig9.pdf){width="\textwidth"} Figure \[fig8\](a) shows the volume based PSDs for the five samples in Fig. \[fig1\] calculated with the LW model. The calculations were done in a similar manner as in Fig. \[fig7\](a). The distributions are plotted as a function of the characteristic size $D_e$ and are comparable to the laser diffraction data in Fig. \[fig2\](a) and the EQPC data in Fig. \[fig3\](a). The consistency of the volume based PSDs in Fig. \[fig8\](a) with the particle sizes in Figs. \[fig2\](a) and \[fig3\](a) is similar to the case of Fig. \[fig7\](a). The range of particle sizes in Fig. \[fig8\](a) and the modes of the distributions in Fig.
\[fig8\](a) are close to the measured data in Figs. \[fig2\](a) and \[fig3\](a). However, the calculated PSDs in Fig. \[fig8\](a) show some oscillations. This is also reflected in the fact that the error norms between the measured and calculated CLDs with the LW model are higher than the corresponding error norms of the calculations with the VSM model, as seen in Figs. \[fig6\](a) and \[fig6\](c). The symbols in Figs. \[fig8\](b) to \[fig8\](f) show the calculated (using the LW model) CLDs for the samples in Fig. \[fig1\]. The calculations were done in a manner similar to the case of Figs. \[fig7\](b) to \[fig7\](f). However, the calculated CLDs in Figs. \[fig8\](b) to \[fig8\](f) show a slight mismatch with the experimental data, unlike the case of Figs. \[fig7\](b) to \[fig7\](f) where the match is near perfect. A likely reason for the different levels of agreement between the two models and the experimental data is that different kinds of approximations were made in the formulation of the models. The VSM model considers all possible 3D orientations of the cylinder in the computation of the cylindrical PDF [@Vaccaro2006]. However, the LW model considers only one 2D projection of the ellipsoid, where the major and minor axes are parallel to the $x-y$ plane [@Li2005n1]. Also, the cylindrical shape of the VSM model is closer to the needle shape of the particles than the ellipsoidal shape of the LW model. Figure \[fig9\](a) shows the volume based PSD calculated with the VSM model plotted as a function of the characteristic length $l_c = a_c$ (the length of the cylinder). This data can be compared to the Feret Max data in Fig. \[fig3\](b). The Feret Max data cover a range of about $100\mu$m to about $800\mu$m for samples 4 and 5, and about $10\mu$m to about $500\mu$m for samples 1 to 3. The characteristic lengths predicted by the VSM model in Fig. \[fig9\](a) fall short of the Feret Max data in Fig.
\[fig3\](b) because the aspect ratios predicted by the VSM model in Fig. \[fig6\](b) are higher than the estimated aspect ratios in Fig. \[fig3\](d). This implies that the VSM model predicts needles that are slightly thicker and shorter than the actual needles in the samples. However, the needle lengths calculated with the VSM model cover a range of about $10\mu$m to about $300\mu$m, which is still comparable to the Feret Max measurements in Fig. \[fig3\](b). A similar situation holds for the LW model, where the predicted ellipsoid heights ($l_e$ in Fig. \[fig9\](b)) fall short of the Feret Max measurements in Fig. \[fig3\](b). Similarly, the aspect ratios predicted by the LW model in Fig. \[fig6\](d) are higher than the estimated aspect ratios in Fig. \[fig3\](d). This again shows that the LW model predicts needles which are slightly thicker and shorter than the actual needles in the samples. The range of needle lengths calculated with the LW model is reasonable when compared with the measured Feret Max in Fig. \[fig3\](b). Even though the predicted lengths ($l_c$ and $l_e$) do not match the measured Feret Max data perfectly, the trend in the lengths of needles from sample 1 to sample 5 in Fig. \[fig3\](b) is consistent with the trend in needle lengths from sample 1 to sample 5 in Fig. \[fig9\](a). However, the trend in needle lengths in Fig. \[fig9\](b) is not as consistent with the trend in needle lengths in Fig. \[fig3\](b) moving from sample 1 to sample 5. This is because the LW model predicts smaller aspect ratios for sample 2 and sample 3 in Fig. \[fig6\](d), resulting in a shift of the distributions to higher values for sample 2 and sample 3 in Fig. \[fig9\](b). Conclusions {#disc} =========== We have presented an algorithm which produces best estimates of PSD and particle aspect ratio from measured CLD data.
Although the algorithm does not require any additional information about particle size range or particle aspect ratio, this information can be used to further constrain the search if it is available. If such information is not available (for example during in situ monitoring of a crystallisation process), then the algorithm will perform an automatic search for the best estimate of particle size range and aspect ratio. The approach described here can be used with any geometrical or optical model that provides the CLD for particles of given size, shape and optical properties. In the case considered here the particles were treated as opaque and assumed to have convex shapes (that is, cylindrical or ellipsoidal). This representation is suitable for the COA particles considered here, as can be seen in Fig. \[fig1\]. A more detailed discussion of the possible errors that can occur from using this representation is presented in Section 8 of the supplementary information. The supplementary information also contains a detailed analysis of the sensitivity of the resulting estimates to the choice of algorithm parameters, which validates the accuracy and robustness of the algorithm's outcomes. We applied the algorithm to previously collected CLD data for slurries of needle shaped crystalline particles of COA with different particle size distributions. COA slurries were characterised using FBRM (to measure CLD), imaging (to measure EQPC, maximum and minimum Feret diameters) and laser diffraction (to measure PSD based on the equivalent sphere diameter approximation). Measured CLD data were used in the algorithm without any further information input, using two different CLD geometrical models, one for ellipsoids and the other for thin cylinders. Best estimates for particle aspect ratios and corresponding PSDs were obtained with each model and these were compared to experimental data from imaging and laser diffraction.
Estimated aspect ratios from the thin cylinder model were in good agreement with those obtained from the ratio of maximum and minimum Feret diameters, while those from the ellipsoid model were somewhat higher. Correspondingly, there was good agreement between measured and fitted CLDs for the thin cylinder model, but some discrepancies could be seen for the ellipsoid model. Ranges and modes of the particle size distributions determined with both models were in good agreement with those obtained by imaging. Although it was possible to estimate aspect ratios of needle like particles from CLD data reasonably accurately for the system analysed here, the optimisation problem of finding the most appropriate PSD and aspect ratio would be greatly simplified if additional information about particle size range or shape were available, for example from a suitable imaging or scattering technique, especially in the case of systems with significant polydispersity or multimodality in terms of particle shape or size. The technique described here will be of particular benefit to crystallisation process control, since controlling the process critically depends on real-time in situ information about the size and shape of the particulate product. Acknowledgement {#acknowledgement .unnumbered} =============== The authors wish to thank the EPSRC (grant number EP/K014250/1), AstraZeneca and GlaxoSmithKline for generous funding for this project. {#section .unnumbered} Slack Variables =============== The concept of slack variables in optimisation problems is described in previous literature [@Boyd2004]. The idea of introducing columns of 1s to the transformation matrix is based on the following argument. Consider the optimisation problem:\ find $\beta$ which minimises the objective function $\phi$ where $$\phi = \sum_{i=1}^M{\left[y_i - g_i(\beta)\right]^2}, \label{eqs1}$$ where $y\in \mathbb{R}^M$, $\beta\in\mathbb{R}^N$ and $g: \mathbb{R}^N\rightarrow\mathbb{R}^M$.
The optimisation problem in Eq. is equivalent to $$\begin{split} \textrm{minimise} &~~ \sum_{i=1}^M{z_i} \\ \textrm{subject to} &\quad z_i = \left[y_i - g_i(\beta)\right]^2. \label{eqs2} \end{split}$$ Since $[y_i - g_i(\beta)]^2 \geq 0$, then $z_i \geq y_i - g_i(\beta)$. Hence the optimisation problem in Eq. is equivalent to $$\begin{split} \textrm{minimise} &~~\sum_{i=1}^M{z_i} \\ \textrm{subject to} & \quad y_i - g_i(\beta) - z_i \leq 0. \end{split} \label{eqs3}$$ There exist slack variables $s_i\geq 0,~i=1,2, \ldots, M$ such that $y_i - g_i(\beta) - z_i + s_i = 0$. Hence the optimisation problem in Eq. is equivalent to $$\begin{split} \textrm{minimise} &~~\sum_{i=1}^M{z_i} \\ \textrm{subject to} & \quad y_i - g_i(\beta) - z_i + s_i = 0 \\ & \quad s_i \geq 0. \end{split} \label{eqs4}$$ Substituting for $z_i$ in Eq. gives the following equivalent formulation of the optimisation problem in Eq. : $$\begin{split} \textrm{minimise} &~~\sum_{i=1}^M{\left[y_i - g_i(\beta) + s_i\right]} \\ \textrm{subject to} & \quad s_i\geq 0 \quad \square . \end{split} \label{eqs5}$$ Choice of Algorithm Parameters {#param} ============================== ![(a)Variation of the $L_2$ norm in Eq. 25 of the main text (from the LW model) with the number of size bins $N$ at the different aspect ratios $r_e$ (indicated in the Figure) for Sample 1. (b)Recovered number distributed PSDs (from the LW model) at the specified values of $r_e$ and $N$. The quantity $D_e$ is the characteristic size for the LW model described in the main text. (c)Chord length distributions corresponding to the PSDs in (b). The parameter $L$ is the chord length described in the main text.[]{data-label="figs1"}](figs1){width="\textwidth"} ![Similar to Fig.
\[figs1\] obtained with the VSM model.[]{data-label="figs2"}](figs2){width="\textwidth"} ![Particle size distributions recovered (using the LW model at the aspect ratios $r_e$ indicated) by minimising the objective function $f_1$ in the main text using the different number of particle size bins $N$ indicated in each figure.[]{data-label="figs3"}](figs3){width="\textwidth"} ![Same as in Fig. \[figs3\] with the VSM model.[]{data-label="figs4"}](figs4){width="\textwidth"} ![Variation of the $L_2$ norm in Eq. 25 of the main text with different initial values of window size $S_w$. The calculations were done with the LW model at the different aspect ratios $r_e$ and number of particle size bins $N$ indicated in each figure.[]{data-label="figs5"}](figs5){width="\textwidth"} ![Same as in Fig. \[figs5\] with the VSM model.[]{data-label="figs6"}](figs6){width="\textwidth"} In this section the motivations for choice of values for parameters in the inversion algorithm are presented. Number of size bins N --------------------- The solution vector $\mathbf{X}$ which minimises the objective function $f_1$ in Eq. (24) of the main text varies slightly with different numbers of particle size bins $N$. This in turn leads to a variation in the vector $\mathbf{C}$ obtained from the forward problem in Eq. (4) of the main text. Hence different values of $N$ were used and each time the $L_2$ norm in Eq. (25) of the main text was calculated in order to determine the optimum number of fitting parameters. The variation of the $L_2$ norm with the number of particle size bins $N$ at different aspect ratios for the LW model is shown in Fig. \[figs1\](a). As the value of $N$ increases, the $L_2$ norm decreases gradually and then begins to level off at large values of $N$. The result is the same for different aspect ratios $r_e$ as in Fig. \[figs1\](a). For a fixed aspect ratio $r_e$ (for example $r_e = 0.3$ in Fig. 
\[figs1\](b)) and a small value of $N$, the PSD obtained from the inverse problem is a bit noisy at the left hand tail of the distribution as in the case of $N=20$ in Fig. \[figs1\](b), while the corresponding CLD calculated from the forward problem contains small oscillations as shown in Fig. \[figs1\](c). As the value of $N$ is increased, the recovered PSD becomes more noisy as can be seen for the case of $N=40$ in Fig. \[figs1\](b). However, the oscillations in the corresponding CLD decrease as in Fig. \[figs1\](c). As $N$ is increased further, the oscillations in the recovered PSD become more severe as in Fig. \[figs1\](b) for $N=80$. The corresponding CLD for $N=80$ shows very little change from that obtained at $N=40$. A similar situation holds for the VSM model where the $L_2$ norm levels off with increasing $N$ as in Fig. \[figs2\](a) for different aspect ratios $r_c$. The behaviour of the recovered PSDs for different values of $N$ in Fig. \[figs2\](b) is similar to the case of Fig. \[figs1\](b). Also, the behaviour of the corresponding CLDs for different values of $N$ in Fig. \[figs2\](c) is similar to the case of Fig. \[figs1\](c). Figures \[figs1\](a) and \[figs2\](a) show that the $L_2$ norm had become fairly level for $N\gtrsim 60$ for both models and all aspect ratios, which suggests that the calculations reach about the same level of accuracy for number of particle size bins $N\gtrsim 60$. However, as already seen in Fig. \[figs1\](b) and \[figs2\](b) the recovered PSDs have different levels of fluctuations for $N\gtrsim 60$. This situation is shown more clearly in Figs. \[figs3\] and \[figs4\]. Figure \[figs3\](a) shows the recovered PSD (with the LW model) at the indicated aspect ratios $r_e$ for $N=60$. The PSD for $r_e = 0.1$ is fairly smooth except the long spike at $D_e\approx 1$. However, the PSDs begin to develop oscillations as the aspect ratio $r_e$ increases as seen in the cases of $r_e=[0.3, 0.5, 0.7]$ in Fig. \[figs3\](a). 
A similar situation holds for $N=70$ (Fig. \[figs3\](b)) and $N=80$ (Fig. \[figs3\](c)). However, the oscillations for the case of $N=80$ are much more severe. Figure \[figs4\] is similar to Fig. \[figs3\] but calculated with the VSM model. For a fixed $N$, the fluctuations in the PSDs increase as the aspect ratio $r_c$ increases, as seen in Figs. \[figs4\](a), \[figs4\](b) and \[figs4\](c). The level of fluctuations at $N=80$ in Fig. \[figs4\](c) is much more severe when compared with the cases of $N=60$ (Fig. \[figs4\](a)) and $N=70$ (Fig. \[figs4\](b)). For $N=60$ (Fig. \[figs4\](a)) the small particle sizes of $D_c\approx 2$ for $r_c>0.1$ are not fully resolved when compared with the case of $N=70$ in Fig. \[figs4\](b). The data in Figs. \[figs1\] to \[figs4\] suggest that the optimum number of size bins should be $N=70$. This is because the level of accuracy in the calculations does not increase significantly for $N>70$. Instead, using a larger value of $N$ only leads to severe fluctuations in the calculated PSDs and longer computational times. The value of $N=70$ also gives a better resolution of small particle sizes for both models. Hence a value of $N=70$ was used in all the calculations in the main text. Window size S and spacing q --------------------------- The inversion algorithm described in Section 4 of the main text places a window of size $S$ on the bins of the chord length histogram. This window starts with an initial size $S_0$, then slides along the bins of the chord length histogram until it reaches the last bin of the chord length histogram. The window then returns to the beginning, at which point its size is increased. The calculations are more accurate if the initial window size is sufficiently small. However, this also depends on the number of particle size bins in the particle size histogram.
Then the question is: what is the appropriate number of size bins at which the accuracy of the calculations becomes independent of the initial window size? Figure \[figs5\](a) shows that for $N=20$ (calculations with the LW model), the $L_2$ norm in Eq. 25 of the main text (calculated at the optimum window size and position) shows a dependence on $S_0$ at different aspect ratios. This dependence reduces significantly at $N=40$, as in Fig. \[figs5\](b), and becomes nearly independent at $N=60$. A similar situation holds for calculations with the VSM model, where the large dependence of the $L_2$ norm (at different aspect ratios) on $S_0$ seen in Fig. \[figs6\](a) (for $N=20$) decreases as $N$ increases to 40 in Fig. \[figs6\](b). The $L_2$ norm becomes nearly independent of $S_0$ at $N=60$, as in Fig. \[figs6\](c). The values of the $L_2$ norm obtained with the VSM model for $N\gtrsim 40$ (Fig. \[figs6\]) are significantly less than the values of the $L_2$ norm obtained with the LW model for the same aspect ratios in Fig. \[figs5\]. This suggests that the cylindrical geometry of the VSM model fits the needle data better than the ellipsoidal geometry of the LW model for sufficiently large $N$. The results in Figs. \[figs5\] and \[figs6\] suggest that any value of $S_0$ from 2 up to 50 (corresponding to a particle size range of about $1\mu$m to about $43\mu$m) could be used in the calculations for $N\gtrsim 60$. However, a value of $S_0 = 6$ (corresponding to a particle size range of $1\mu$m to $1.5\mu$m) and $N=70$ were used in all the calculations in the main text. The spacing between consecutive window positions (that is, $q$ in Eq. 17 of the main text) was kept at $q=2$ in all the calculations in the main text. The smallest value, $q=1$, did not yield any significant increase in the accuracy of the calculations.

Length Weighting {#lenwt}
================

In this section we present a simple numerical simulation which demonstrates the effect of particle size on detection probability.
It has already been suggested [@Hobbel1991; @Simmons1999; @Vaccaro2006] that larger particles have a higher probability of being encountered by the FBRM laser. Here we represent the laser beam in the focal plane by the red circle in Fig. \[figs7\](a). The circular window of the probe is represented by the black circle in Fig. \[figs7\](a). We simulate spherical particles (represented by the blue circles in Fig. \[figs7\](a)) falling at random positions on the plane of the laser spot. We assume that all particles, regardless of size, have equal probability of falling in the focal plane. Each time the boundary of a particle intersects the trajectory of the laser beam, a ‘hit’ is recorded. The idea behind the simulation is to see how the number of hits scales with the particle size (diameter of each circle).

![(a) Pictorial representation of the viewing window (black circle) of the FBRM probe; the laser beam (in the focal plane) is represented by the red circle, while spherical particles are represented by the blue circles. (b) Variation of the frequencies of hits of the laser beam with particles of different sizes $D_s$ and different sizes of the viewing window as indicated in the figure.[]{data-label="figs7"}](figs7){width="\textwidth"}

![(a) Single particle CLDs for an ellipsoid (for the LW model with aspect ratios indicated as $r_e$) of length $l_e = 2a_e = 100\mu$m ($a_e = $ semi major axis length of ellipsoid) and a cylinder (for the VSM model with aspect ratios indicated as $r_c$) of height $a_c = 100\mu$m. (b) Simulated PSD $\mathbf{X}^s$, recovered PSDs $\mathbf{X}^t$ (with length weighted transformation matrix) and $\mathbf{X}^f$ (with unweighted transformation matrix).
(c) Weighted CLD $\mathbf{C}^+$ from $\mathbf{X}^s$ due to the weighted transformation matrix and unweighted CLD $\mathbf{C}^{\_}$ from $\mathbf{X}^s$ due to the unweighted transformation matrix.[]{data-label="figs8"}](figs8){width="\textwidth"}

Since each event of a particle falling on the focal plane is independent of any other particle falling on the focal plane, we simulate $N_r$ realisations of a single particle of size $D_s$ falling on the focal plane separately from the same number of realisations of another particle of a different size. The FBRM probe reports chord lengths between $1\mu$m and $1000\mu$m (for example Fig. 2(b) of the main text). Hence we set the particle sizes $D_s \in [10^{-3}, 1]$mm. The radius $R_L$ of the laser beam is set at 4mm [@Heinrich2012], while the radius of the circular window $R_W$ is set in multiples of $R_L$. The results in Fig. \[figs7\](b) show that the number of hits scales linearly with the particle size regardless of the size of the probe window. These results agree with earlier suggestions in [@Hobbel1991; @Simmons1999; @Vaccaro2006]. Hence a linear characteristic size weighting is used in the main text in relating the population CLD to the PSD of the population.

Single Particle and Population CLD
==================================

In this section we show the single particle CLDs realised with the LW and VSM models. Then we demonstrate the effect of length weighting on the population CLD.

Single Particle CLD of LW and VSM models
----------------------------------------

Different mathematical approximations were made in the formulation of the LW and VSM models [@Li2005n1; @Vaccaro2006], as already noted in the main text. These different approximations give rise to different CLDs for a single particle of similar geometrical shape. The single particle CLDs (for different aspect ratios) realised for an ellipsoid (an ellipse in 2D) of length $l_e = 2a_e = 100\mu$m ($a_e$ is the length of the semi major axis) are shown in Fig.
\[figs8\](a). The peaks of the single particle CLDs shift to the left as the aspect ratio $r_e = b_e/a_e$ (where $b_e$ is the semi minor axis length) is decreased. The single particle CLDs of the LW model increase slowly at small chord lengths before reaching their peaks at $2b_e$ and then decrease to zero at $l_e$. They have a right shoulder which gets broader as $r_e$ is decreased. The LW model approximates the single particle CLD of the ellipsoid by considering a single projection of the ellipsoid in which the major and minor axes are parallel to the $x$–$y$ plane. It is not known what effects the other orientations of the ellipsoid would have on the single particle CLD, as these orientations were not considered. The single particle CLDs of the cylindrical VSM model (for a cylinder of height $a_c = 100\mu$m) shown in Fig. \[figs8\](a) are less sensitive to small chord lengths, as they rise very quickly to their peaks at $2b_c$ ($b_c$ is the radius of the cylinder). They then decrease more slowly (in a manner similar to the LW case) to zero at $a_c$. The low sensitivity of the single particle cylindrical VSM CLDs to small chord lengths is due to the small angle approximation [@Vaccaro2006] made in the calculation of the probability density function for the cylindrical VSM model. However, the positions of the peaks of the single particle cylindrical VSM CLDs match those of the LW model for the same aspect ratio, as seen in Fig. \[figs8\](a).

Effect of Length Weighting on Population CLD and Recovered PSD
--------------------------------------------------------------

The effect of the size of a particle on its detection probability has been demonstrated in section \[lenwt\]. This length bias could have a substantial effect on the calculations if it is not incorporated in some way. Consider the simulated PSD $\mathbf{X}^s$ shown by the solid line in Fig. \[figs8\](b).
The PSD was made by randomly drawing $10^6$ particle sizes from the normal distribution with mean size 500 $\mu$m and standard deviation 100 $\mu$m. The particle sizes were then shifted to ensure non-negativity. Finally, the PSD was made from a normalised histogram of 30 bins. The solid line in Fig. \[figs8\](c) shows the CLD $\mathbf{C}^{\_}$ calculated from the normalised PSD $\mathbf{X}^s$ as $$\mathbf{C}^{\_} = \mathbf{A}\mathbf{X}^s, \label{eqs6}$$ where $\mathbf{A}$[^8] is the transformation matrix in Eq. (6) of the main text without any length weighting. The symbols in Fig. \[figs8\](c) show the CLD $\mathbf{C}^+$ calculated from the normalised PSD $\mathbf{X}^s$ as $$\mathbf{C}^+ = \mathbf{\tilde{A}}\mathbf{X}^s, \label{eqs7}$$ where $\mathbf{\tilde{A}}$ is the transformation matrix in Eq. (5) of the main text with length weighting. Figure \[figs8\](c) shows that the CLD $\mathbf{C}^+$ calculated with length weighting is substantially higher than the corresponding CLD $\mathbf{C}^{\_}$ without length weighting, and slightly shifted to the right. This shows that the experimentally measured CLD could be substantially biased due to the length weighting effect demonstrated in section \[lenwt\]. Hence the length weighting effect needs to be incorporated into the calculations to account for this length bias. The red diamonds in Fig. \[figs8\](b) show the PSD obtained by minimising the objective function $\phi$ (similar to the function $f_1$ in Eq. 24 of the main text) given as $$\phi = \sum_{j=1}^{M}{\left[C_j^+ - \sum_{i=1}^N{\tilde{A}_{ji}X_i^t}\right]^2}, \label{eqs8}$$ where $M$ is the number of chord length bins, $N$ is the number of particle size bins and $\mathbf{X}^t$ is the optimum PSD which minimises the objective function. The recovered PSD $\mathbf{X}^t$ matches the original PSD $\mathbf{X}^s$ because the length weighting effect has been incorporated into the matrix $\mathbf{\tilde{A}}$.
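The construction of $\mathbf{X}^s$ and the effect of the length weighting can be reproduced with the short sketch below. The transformation matrix `A` here is a hypothetical stand-in (an upper-triangular decay kernel, not the LW or VSM matrix of the main text), and plain least squares stands in for the constrained minimisation of $\phi$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated number-based PSD: 1e6 sizes from N(500, 100) um, shifted to be
# non-negative if needed, then binned into a normalised 30-bin histogram.
sizes = rng.normal(500.0, 100.0, 10**6)
sizes -= min(sizes.min(), 0.0)
counts, edges = np.histogram(sizes, bins=30)
X_s = counts / counts.sum()
D = 0.5 * (edges[:-1] + edges[1:])   # characteristic size of each bin (um)

# Hypothetical transformation matrix: a particle in size bin i produces
# chords only in bins j <= i (a chord cannot exceed the particle size).
n = X_s.size
j, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.where(j <= i, np.exp(-0.4 * (i - j)), 0.0)
A_tilde = A * D[None, :]             # linear (length) weighting per size bin

C_minus = A @ X_s                    # unweighted CLD
C_plus = A_tilde @ X_s               # length-weighted CLD, higher overall

# With the weighted matrix, minimising phi recovers the original PSD.
X_t, *_ = np.linalg.lstsq(A_tilde, C_plus, rcond=None)
```

As in the text, the length-weighted CLD sits well above the unweighted one, and inverting with the same weighted matrix returns the original PSD.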
However, when the objective function is formulated as $$\phi = \sum_{j=1}^{M}{\left[C_j^+ - \sum_{i=1}^N{A_{ji}X_i^f}\right]^2}, \label{eqs9}$$ the optimum PSD $\mathbf{X}^f$ is substantially higher than the original PSD $\mathbf{X}^s$ and slightly shifted to the right, as seen in Fig. \[figs8\](b). This again demonstrates the need to account for the length bias that comes with the experimentally measured CLD, to reduce its effect on the calculated PSD.

Number and Volume Based PSD
===========================

Some particle sizing instruments report the PSD in terms of a volume distribution (for example, Figs. 2(a), 3(a), 3(b) and 3(c) of the main text). Hence it becomes necessary to calculate a volume based PSD that is comparable to the experimentally measured PSDs. The volume based PSD $\mathbf{X}^v$ can be calculated from [@Holdich2002]

![(a) The simulated PSD $\mathbf{X}^s$ in Fig. \[figs8\](b), volume based PSDs $\mathbf{X}^v_1$ (calculated from Eq. using $\mathbf{X}^s$) and $\mathbf{X}^v_2$ (calculated from Eq. ). (b) Normalised number based PSD $\mathbf{X}$ obtained by minimising the function $f_1$ in the main text using the LW model at the aspect ratio $r_e$ indicated in the figure. Volume based PSD $\mathbf{X}_1^v$ calculated from Eq. using the PSD $\mathbf{X}$. Normalised volume based PSD $\mathbf{X}_2^v$ obtained from the function $f_3$ (at $\lambda = 0$) in the main text. (c) Same as in (b) with the VSM model.[]{data-label="figs9"}](figs9){width="\textwidth"}

$$X_i^v = \frac{X_i\overline{D}_i^3}{\sum_{i=1}^N{X_i\overline{D}_i^3}}, \label{eqs10}$$ where $\mathbf{X}$ is the number based PSD and $\overline{D}$ is the characteristic size of the population of particles. This is equivalent to $$X_i^v = \frac{\hat{X}_i\overline{D}_i^3}{\sum_{i=1}^N{\hat{X}_i\overline{D}_i^3}}, \label{eqs11}$$ where $$\hat{X}_i = \frac{X_i}{\sum_{i=1}^N{X_i}}.
\label{eqs12}$$ Because the inversion problem is ill-posed, the calculated PSD $\mathbf{X}$ (which is usually Gaussian-like) from the experimentally measured CLD could have small fluctuations at the tails of the distribution. The presence of small fluctuations at the right tail of the number based PSD $\mathbf{X}$ leads to artificial peaks at large particle sizes. For example, the number based PSD (shown in Fig. \[figs9\](b)) recovered for sample 1 with the LW model contains a small fluctuation at $D_e\approx 200 \mu$m. This leads to the peak at $D_e\approx 200\mu$m in the volume based PSD $\mathbf{X}_1^v$ calculated from Eq. . This peak is clearly artificial, as the number based PSD $\mathbf{X}$ in Fig. \[figs9\](b) shows a near-zero particle size count at $D_e\approx 200\mu$m. This problem led to the formulation of a new method for calculating the volume based PSD which allows the application of a suitable regularisation to remove these artificial peaks. To demonstrate that the method summarised in Eqs. 29 to 33 of the main text reproduces the correct volume based PSD, consider the simulated PSD $\mathbf{X}^s$ in Fig. \[figs9\](a), which is the same normalised PSD $\mathbf{X}^s$ as in Fig. \[figs8\](b). The red squares in Fig. \[figs9\](a) show the volume based PSD $\mathbf{X}_1^v$ calculated from Eq. using the PSD $\mathbf{X}^s$. The black pentagrams in Fig. \[figs9\](a) show the normalised volume based PSD $\mathbf{X}_2^v$ calculated by minimising the objective function $\phi$ given as $$\phi = \sum_{j=1}^{M}{\left[C_j^+ - \sum_{i=1}^N{\overline{A}_{ji}\overline{X}^v_i}\right]^2}, \label{eqs13}$$ where $$\overline{A}_{ji} = \frac{\tilde{A}_{ji}}{\overline{D}_i^3}, \label{eqs14}$$ $\mathbf{\tilde{A}}$ is the length weighted transformation matrix in Eq. , $\mathbf{C}^+$ is the length biased CLD in Eq. and $\mathbf{\overline{X}}_2^v$ is the optimum PSD which minimises the objective function in Eq. .
The normalised volume based PSD $\mathbf{X}_2^v$ obtained as $$X_{2i}^v = \frac{\overline{X}_{2i}^v}{\sum_{i=1}^N{\overline{X}_{2i}^v}} \label{eqs16}$$ matches the volume based PSD $\mathbf{X}_1^v$ calculated from Eq. . The PSD $\mathbf{X}_2^v$ is shown by the black pentagrams in Fig. \[figs9\](a). The peaks of the volume based PSDs $\mathbf{X}_1^v$ and $\mathbf{X}_2^v$ are shifted to the right of the number based PSD $\mathbf{X}^s$, as expected. Figure \[figs9\](b) shows the volume based PSD $\mathbf{X}_2^v$ calculated (using the LW model and normalised as in Eq. ) by minimising the objective function $f_3$ given by Eq. 33 of the main text for $\lambda = 0$. The volume based PSD $\mathbf{X}_2^v$ calculated from $f_3$ matches the volume based PSD $\mathbf{X}_1^v$ calculated from Eq. , as expected, as shown in Fig. \[figs9\](b). However, the volume based PSD $\mathbf{X}_2^v$ still contains the artificial peak at $D_e\approx 200\mu$m. This peak can be removed using a suitable value of $\lambda$, which enforces the penalty on the norm as given in Eq. 33 of the main text. The recovered number based PSD $\mathbf{X}$ contains fluctuations at small particle sizes $D_e\approx 1\mu$m, but these fluctuations have no effect on the volume based PSDs $\mathbf{X}_1^v$ or $\mathbf{X}_2^v$, since the third moment of particle sizes $D_e\approx 1\mu$m is much less than the third moment of particle sizes $D_e\approx 10^2\mu$m. A similar situation holds for the VSM model, as seen in Fig. \[figs9\](c). The volume based PSD $\mathbf{X}_1^v$ calculated from Eq. using the recovered number based PSD $\mathbf{X}$ matches the volume based PSD $\mathbf{X}_2^v$ obtained by minimising Eq. 33 of the main text. However, in this case there is no artificial peak in either $\mathbf{X}_1^v$ or $\mathbf{X}_2^v$ at large particle sizes, since there are no fluctuations in the number based PSD $\mathbf{X}$ (at large particle sizes) in this case.
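The number-to-volume conversion above is a one-line reweighting by the cube of the characteristic size. The toy numbers below are hypothetical (not taken from the samples); they show how a residual count of only $10^{-3}$ at a large size dominates the volume based PSD, which is exactly the artificial-peak effect discussed above.

```python
import numpy as np

def volume_based_psd(X, D):
    """Convert a number-based PSD X to a volume-based PSD: weight each bin
    by the cube of its characteristic size D and renormalise."""
    w = np.asarray(X, dtype=float) * np.asarray(D, dtype=float) ** 3
    return w / w.sum()

# A near-zero count at D = 200 um outweighs everything else through D^3.
X = np.array([0.5, 0.499, 0.001])   # number-based PSD (hypothetical)
D = np.array([1.0, 10.0, 200.0])    # characteristic sizes in um
Xv = volume_based_psd(X, D)         # the peak moves to the 200 um bin
```

Here the third bin carries a number fraction of $10^{-3}$ but a volume weight of $10^{-3}\times 200^3 = 8000$, dwarfing the other bins.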
Uniqueness of Shape Information
===============================

As discussed in section 4 of the main text, minimisation of the objective function $f_1$ defined in Eq. 24 of the main text using different particle size ranges (at different aspect ratios) results in a situation where the $L_2$ norm in Eq. 25 of the main text flattens out after some critical aspect ratio. This situation is shown in Fig. \[figs10\](a) (for $N=70$) for sample 1 (referred to in the main text) using the LW model. The situation is the same for the VSM model, as seen in Fig. \[figs10\](b) for the same sample 1. The $L_2$ norm decreases with increasing aspect ratio until it becomes nearly flat beyond some critical aspect ratio, indicated as $r_e^{*}\approx 0.3$ in Fig. \[figs10\](a) and $r_c^{*}\approx 0.25$ in Fig. \[figs10\](b). The $L_2$ norm in the two cases shown in Figs. \[figs10\](a) and \[figs10\](b) does not have a clear global minimum, making it necessary to reformulate the problem in such a way that unique shape information can be retrieved. The problem of non-uniqueness is common to the other samples discussed in the main text.

![Variation of the $L_2$ norm in the main text with aspect ratio for different numbers of particle size bins $N$ for (a) sample 1 (calculations with the LW model), (b) sample 1 (calculations with the VSM model) and (c) the system of spherical particles described in footnote \[fn1\] (calculations with the LW model). []{data-label="figs10"}](figs10){width="\textwidth"}

![(a) Variation of the squared residual norm between the measured CLD $\mathbf{C}^{\textbf{*}}$ and calculated CLD $\mathbf{C}$ with aspect ratio for the different values of $\lambda$ (in the function $f_2$ in the main text) indicated in the figure. (b) Variation of the square norm of the recovered PSD with aspect ratio for the values of $\lambda$ indicated in (a). (c) Variation of the function $f_2$ with aspect ratio for the values of $\lambda$ in (a).
All calculations with the LW model for sample 1.[]{data-label="figs11"}](figs11){width="\textwidth"}

![The recovered PSDs from the function $f_2$ (at the values of $\lambda$ indicated in (a)) at the aspect ratios $r_e$ indicated in the figures. All calculations with the LW model for sample 1.[]{data-label="figs12"}](figs12){width="\textwidth"}

![Volume based PSDs calculated from the function $f_3$ at (a) $\lambda = 0$ and (b) $\lambda = 10^{-14}$ for the five samples in the main text. (c) The square norm of the calculated volume based PSD from $f_3$ in the range of aspect ratios where the function $f_2$ reaches its minimum in Fig. \[figs11\](c) for all five samples. (d) The sum of squared deviations of the calculated CLDs defined in $f_3$ for the range of aspect ratios in (c) for all five samples. All calculations in (a) to (d) done with the LW model. (e) Similar to (a) but calculated with the VSM model.[]{data-label="figs13"}](figs13){width="\textwidth"}

This problem of non-uniqueness only comes to light when the number of particle size bins is large enough. For the case where the number of particle size bins is not large enough, say $N=20$ in Fig. \[figs10\](a), an artificial global minimum could be realised for a suitable initial window size. This minimum is artificial because it depends on the initial window size chosen: the $L_2$ norm is still dependent on the initial window size, as seen in Fig. \[figs5\](a). Also, the fits obtained at such values of $N$ are poorer than fits obtained at larger $N$, as seen in Fig. \[figs5\]. Figure \[figs10\](b) also shows a situation where an artificial global minimum is realised with the VSM model for a small value of $N$ ($N=40$ in Fig. \[figs10\](b)). The reason is similar to the case of the LW model: the $L_2$ norm is still strongly dependent on the initial window size (for small values of $N$), as seen in Fig. \[figs6\].
Figures \[figs10\](a) and \[figs10\](b) (for $N=70$) show that the level of fit to the needle data is much better with the VSM than with the LW model. This could be because the cylindrical shape of the VSM model is closer to the shape of the needles than the ellipsoidal geometry of the LW model. This proposition is supported by the fact that, for a system of spherical particles[^9] [^10], the level of fit obtained with the LW model (Fig. \[figs10\](c)) is comparable to the level of fit obtained with the VSM model for the needles (Fig. \[figs10\](b)). This non-uniqueness of particle shape information led to the introduction of the objective function $f_2$ defined in Eq. 27 of the main text. The motivation comes from the observation in Figs. \[figs3\] and \[figs4\] that the level of fluctuations of the recovered number based PSD increases as the aspect ratio increases for a fixed $N$. A possible reason is that the larger aspect ratios in Figs. \[figs3\] and \[figs4\] deviate too much from the actual shape of the particles (as seen in Figs. 1 and 3(d) of the main text), even though they yield about the same level of fit as the intermediate aspect ratios, as seen in Figs. \[figs10\](a) and \[figs10\](b) (for $N=70$). A suitable value of $\lambda$ (in Eq. 27 of the main text) can be chosen by comparing the relative magnitudes of the two sums in Eq. 27 of the main text. The variation of the squared residual norm between the measured CLD $\mathbf{C}^{\textbf{*}}$ and calculated CLD $\mathbf{C}$ with aspect ratio for sample 1 (for different values of $\lambda$ in Eq. 27 of the main text) is shown in Fig. \[figs11\](a). The squared residual norm for $\lambda = 0$ (in the flat region) is of the order of $10^4$, as seen in Fig. \[figs11\](a). Figure \[figs11\](b) shows the square norm from Eq. 27 of the main text for different values of $\lambda$.
The square norm for $\lambda = 0$ shows a spike at $r_e\approx 0.1$ and then increases gradually with aspect ratio, as in Fig. \[figs11\](b). The square norm is of order $10^5$. This suggests values of $\lambda$ of order $10^{-1}$. The squared residual norms between the measured CLD $\mathbf{C}^{*}$ and calculated CLD $\mathbf{C}$ for $\lambda = [0.1, 0.2, 0.3]$ are shown in Fig. \[figs11\](a), while the corresponding square norms of the recovered number based PSDs are shown in Fig. \[figs11\](b). As expected, the spikes in the recovered number based PSDs are mitigated for $\lambda\neq 0$, as seen in Fig. \[figs11\](b). However, the penalty becomes less effective as the aspect ratio increases, resulting in an increase in the square norms of the PSDs in Fig. \[figs11\](b) with increasing $r_e$. Also, the fit to the experimental data degrades as $\lambda$ increases, as seen in Fig. \[figs11\](a), and the mismatch increases with aspect ratio. Penalising spikes in the recovered number based PSD (at the cost of a reduced match to the experimental data, as seen in Fig. \[figs11\](a)) leads to the development of a global minimum in the objective function $f_2$, as seen in Fig. \[figs11\](c) for $\lambda\neq 0$. For $\lambda = 0.1$, the global minimum is quite shallow and not so obvious. However, it gets clearer at $\lambda = 0.2$, as in Fig. \[figs11\](c). The global minimum occurs at about the same region of $r_e\approx 0.3$ for $\lambda = [0.1, 0.2, 0.3]$, as in Fig. \[figs11\](c). Since $\lambda = 0.2$ yields a clear global minimum for the objective function $f_2$ at a smaller cost to the quality of fit, the value $\lambda = 0.2$ was chosen for the minimisation of the objective function $f_2$ in the main text for the LW model. A similar procedure led to the choice of $\lambda = 0.01$ for the VSM model. The effect of penalising the number based PSD is shown in Fig. \[figs12\].
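The penalised objective $f_2$ has the familiar Tikhonov (ridge) structure, and the trade-off described above can be reproduced on synthetic data. The sketch below is illustrative only: the matrix and data are random stand-ins for the transformation matrix and measured CLD, the unconstrained normal-equations minimiser replaces the actual constrained minimisation, and non-negativity of the PSD is not enforced.

```python
import numpy as np

def f2_minimiser(A, C, lam):
    """Unconstrained minimiser of ||C - A x||^2 + lam * ||x||^2,
    via the regularised normal equations (A^T A + lam I) x = A^T C."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ C)

rng = np.random.default_rng(1)
A = rng.random((20, 10))                       # stand-in transformation matrix
C = A @ rng.random(10) + 0.01 * rng.standard_normal(20)  # noisy "CLD"

x0 = f2_minimiser(A, C, 0.0)   # best fit, possibly spiky
x1 = f2_minimiser(A, C, 0.2)   # penalised: smaller norm, larger residual

# Order-of-magnitude choice of lam: balance the two terms of the objective,
# e.g. residual ~ 1e4 against ||x||^2 ~ 1e5 (as in the text) gives lam ~ 1e-1.
```

Increasing `lam` monotonically shrinks the solution norm while the residual grows, which is the behaviour traced out across Figs. \[figs11\](a) and \[figs11\](b).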
At $r_e=0.1$ and $\lambda=0$, the recovered number based PSD has a long thin spike at $D_e\approx 1$, as in Fig. \[figs12\](a). However, the spike at $D_e\approx 1$ is removed for $\lambda\neq 0$, as seen in Fig. \[figs12\](a). The cost of removing the spike at $D_e\approx 1$ in Fig. \[figs12\](a) is the introduction of oscillations at small particle sizes. Similar to the spike at $D_e\approx 1$ in Fig. \[figs12\](a) is the spike at $D_e\approx 2$ in Fig. \[figs12\](b) (although shorter than in the case of Fig. \[figs12\](a)) for $\lambda = 0$. This spike is removed for $\lambda\neq 0$. The same situation plays out at $r_e=0.3$ in Fig. \[figs12\](c). As the aspect ratio increases, the single long spike at a small particle size is replaced by small oscillations at small particle sizes for $\lambda = 0$, as seen in Figs. \[figs12\](c) to \[figs12\](f). However, at the optimum value of $\lambda = 0.2$ selected in Fig. \[figs11\], the distributions close to the minimum $r_e\approx [0.2,0.3]$ in Fig. \[figs11\](c) have the least oscillations at all particle sizes, as seen in Figs. \[figs12\](b) and \[figs12\](c). This confirms that the choice of $\lambda = 0.2$ (for the LW model) used in Eq. 26 of the main text and the consequent minimum of the objective function $f_2$ at $r_e\approx 0.3$ for sample 1 in Fig. 11(c) yield physically realistic PSDs. A similar situation holds for the other samples and the VSM model. Having obtained the optimum size ranges using the inversion algorithm and the optimum aspect ratio by minimising the objective function $f_2$, the volume based PSD can be calculated at the optimum particle size range and aspect ratio. The volume based PSD is calculated by minimising the objective function $f_3$ (defined in Eq. 33 of the main text) using the optimum particle size range and aspect ratio. The objective function $f_3$ could be minimised at $\lambda = 0$ or $\lambda\neq 0$ depending on the level of noise in the recovered volume based PSD.
For example, in the cases of samples 1 and 5 (using the LW model), the volume based PSDs recovered by minimising the objective function $f_3$ at $\lambda = 0$ contain spikes at $D_e\approx 200\mu$m, as in Fig. \[figs13\](a). This is because the corresponding number based PSDs contain small fluctuations at $D_e\approx 200\mu$m, leading to exaggerated particle size counts at $D_e\approx 200\mu$m. These spikes at $D_e\approx 200\mu$m in samples 1 and 5 can be removed by searching for a suitable value of $\lambda\neq 0$ so that the penalty on the norm of the PSD in the objective function $f_3$ becomes effective.

![Schematic representation of the metrics Feret Max, Feret Min and EQPC obtained by dynamic image analysis for the five samples.[]{data-label="figs14"}](figs14){width="\textwidth"}

The procedure for selecting $\lambda$ is similar to the case of the number based PSD. However, this time the selection is done using the optimum particle size range obtained from the inversion algorithm and the aspect ratio recovered from the objective function $f_2$. The sum of the squared deviations $[C_j^o-\overline{A}^o_{ji}\overline{X}^v_i]^2$ in Eq. 33 of the main text is in the range $[10^{-6}, 1]$ for $r_e\in [0.15,0.4]$ for the five samples, as seen in Fig. \[figs13\](d). The sum of the squares $[\overline{X}_i^v]^2$ in Eq. 33 is in the range $[10^8, 10^{11}]$ for the five samples, as in Fig. \[figs13\](c). This suggests values of $\lambda$ of order $10^{-15}$. The value $\lambda = 8\times 10^{-15}$ was used for the five samples for calculations with the LW model. The resulting volume based PSDs for the five samples obtained at $\lambda = 8\times 10^{-15}$ are shown in Fig. \[figs13\](b). Figure \[figs13\](b) shows that the spikes at $D_e\approx 200\mu$m have been removed for samples 1 and 5. The volume based PSDs obtained by minimising the function $f_3$ (at $\lambda = 0$) with the VSM calculation were fairly smooth, unlike the case of Fig. \[figs13\](a).
Hence the value $\lambda=0$ was employed for the five samples in the case of the VSM model. The volume based PSDs recovered from the function $f_3$ (at $\lambda = 0$) using the VSM model for samples 1 to 5 are shown in Fig. \[figs13\](e).

Dynamic image analysis
======================

As mentioned in the main text, dynamic image analysis was performed with a QICPIC (Sympatec Ltd., UK) instrument with a LIXELL wet dispersion unit. The metrics obtained from dynamic image analysis were the equivalent projected circle (EQPC) diameter, the maximum Feret diameter (Feret Max) and the minimum Feret diameter (Feret Min), as described in [@Hamilton2012]. The Feret Max is the longest distance between two parallel tangents on opposite sides of the projected particle, while the Feret Min is the shortest distance between two parallel tangents on opposite sides of the projected particle [@Hamilton2012]. The EQPC diameter is the diameter of the circle whose area equals the area of the 2D projection of the particle. These metrics are illustrated schematically in Fig. \[figs14\].

Possible discrepancies between calculated and measured chord lengths
====================================================================

The calculation presented in the main text is based on a chord being defined as extending from edge to edge across a particle. The analytical models used in this work assume continuity of the particle boundary, and the geometry of each particle has been assumed to be perfectly cylindrical or ellipsoidal with no concavities. However, real, approximately convex particles contain small concavities along their boundaries, which implies that the particle boundaries are not always continuous or smooth. The approach used here assumes that these small discontinuities in the particles’ boundaries have been removed by linear interpolation between the points of discontinuity.
However, the presence of concavities along particles’ boundaries will introduce small discrepancies between measured and calculated chord lengths. The typical particle size is of the order of 100$\mu$m so that the depth of these concavities will be less than 1$\mu$m. Hence we expect an error of less than 1% in the calculated chord lengths. However, for particles with more pronounced concavities (for example agglomerates) the error could increase significantly if the concavities are not properly accounted for by the model used. Work on a suitable model for dealing with agglomerates (which contain pronounced concavities) is currently in progress. Another factor that can introduce discrepancies between the measured and calculated chord lengths is the optical properties of the particles. The focal spot of the laser has a fixed width. The laser beam converges towards the focal plane and diverges away from it. Hence a suitable threshold is used in the FBRM sensor to determine when the reflected light is accepted or rejected. This then implies that the length of a chord depends on the distance of the particle (from which light is reflected) from the focal plane and its reflecting properties. A small particle (whose size is close to the width of the laser spot) close to the focal plane could give rise to a measured chord length which is larger than its true value if the particle has very good reflectance. However, this small particle may be missed completely by the FBRM sensor if the particle is far away from the focal plane and has a poor reflectance. The situation is similar for a large particle (whose size is significantly larger than the width of the laser spot). The measured chord length could be larger or smaller than the true value depending on the reflecting properties of the particle and its distance away from the focal plane. 
Hence the optical properties of the particles in a population determine if a measured CLD is representative of the particles in the population or not. The degree of accuracy of the calculated CLD will also be affected by whether the optical properties of the particles are taken into account in the models or not. In the work presented here all particles are assumed to be opaque and to have good reflectance. Representative images of the particles in Fig. 1 of the main text shows that this approximation is justified. Hence we do not expect a significant shift in the peak of the calculated CLD for this kind of system of particles. However, for a system of highly transparent particles, there could be a significant shift in the peak of the calculated CLD and hence the optical properties of the particles will need to be taken into account. [10]{} url \#1[`#1`]{}urlprefixhref \#1\#2[\#2]{} \#1[\#1]{} C. Washington, Particle size analysis in pharmaceutics and other industries, Ellis Horwood Limited, Chichester, England, 1992. J. Heinrich, J. Ulrich, Application of laser-backscattering instruments for in situ monitoring of crystallization process - a review, Chem. Eng. Technol. 35 (6) (2012) 967–979. A. Tadayyon, S. Rohani, Determination of particle size distribution by [P]{}ar-[T]{}ec©100: modeling and experimental results, Part. Part. Syst. Charact. 15 (1998) 127–135. A. Ruf, J. Worlitschek, M. Mazzotti, Modeling and experimental analysis of [PSD]{} measurements through [FBRM]{}, Part. Part. Syst. Charact. 17 (2000) 167–179. A. R. Heath, P. D. Fawell, P. A. Bahri, J. D. Swift, Estimating average particle size by focused beam reflectance measurement ([FBRM]{}), Part. Part. Syst. Charact. 19 (2002) 84–95. E. J. W. Wynn, Relationship between particle-size and chord-length distributions in focused beam reflectance measurement: stability of direct inversion and weighting, Powder Technology 133 (2003) 125–133. J. Worlitschek, M. 
Mazzotti, Choice of the focal point position using [L]{}asentec [FBRM]{}, Part. Part. Syst. Charact. 20 (2003) 12–17. J. Worlitschek, T. Hocker, M. Mazzotti, Restoration of [PSD]{} from chord length distribution data using the method of projections onto convex sets, Part. Part. Syst. Charact. 22 (2005) 81–98. M. Li, D. Wilkinson, Determination of non-spherical particle size distribution from chord length measurements. part 1: theoretical analysis, Chemical Engineering Science 60 (2005) 3251–3265. M. Li, D. Wilkinson, K. Patchigolla, Determination of non-spherical particle size distribution from chord length measurements. part 2: experimental validation, Chemical Engineering Science 60 (2005) 4992–5003. A. Vaccaro, J. Sefcik, M. Morbidelli, Modeling focused beam reflectance measurement and its application to sizing of particles of variable shape, Part. Part. Syst. Charact. 23 (2006) 360–373. N. Kail, H. Briesen, W. Marquardt, Advanced geometrical modeling of focused beam reflectance measurements [(FBRM)]{}, Part. Part. Syst. Charact. 24 (2007) 184–192. N. Kail, H. Briesen, W. Marquardt, Analysis of [FBRM]{} measurements by means of a 3[D]{} optical model, Powder Technology 185 (2008) 211–222. N. Kail, W. Marquardt, H. Briesen, Estimation of particle size distributions from focused beam reflectance measurements based on an optical model, Chemical Engineering Science 64 (2009) 984–1000. S. Scheler, Ray tracing as a supportive tool for interpretation of [FBRM]{} signals from spherical particles, Chemical Engineering Research and Design 101 (2013) 503–514. H. Li, M. A. Grover, Y. Kawajiri, R. W. Rousseau, Development of an empirical method relating crystal size distributions and [FBRM]{} measurements, Chemical Engineering Science 89 (2013) 142–151. H. Li, Y. Kawajiri, M. A. Grover, R. W. Rousseau, Application of an empirical [FBRM]{} model to estimate crystal size distributions in batch crystallization, Cryst. Growth Des. 14 (2014) 067–616. Z. Q. Yu, P. S. Chow, R. B. H.
Tan, Interpretation of focused beam reflectance measurement ([FBRM]{}) data via simulated crystallization, Organic Process Research & Development 12 (2008) 646–654. M. J. H. Simmons, P. A. Langston, A. S. Burbidge, Particle and droplet size analysis from chord distributions, Powder Technology 102 (1999) 75–83. E. F. Hobbel, R. Davies, F. W. Rennie, T. Allen, L. E. Butler, E. R. Waters, J. T. Smith, R. W. Sylvester, Modern methods of on-line size analysis for particulate streams, Part. Part. Syst. Charact. 8 (1991) 29–34. P. A. Langston, A. S. Burbidge, T. F. Jones, M. J. H. Simmons, Particle and droplet size analysis from chord measurements using [B]{}ayes’ theorem, Powder Technology 116 (2001) 33–42. E. J. Hukkanen, R. D. Braatz, Measurement of particle size distribution in suspension polymerization using in situ laser back scattering, Sensors and Actuators B 96 (2003) 451–459. P. Barrett, B. Glennon, In-line [FBRM]{} monitoring of particle size in dilute agitated suspensions, Part. Part. Syst. Charact. 16 (1999) 207–211. M.-N. Pons, K. Milferstedt, E. Morgenroth, Modeling of chord length distributions, Chemical Engineering Science 61 (2006) 3962–3973. N. K. Nere, D. Ramkrishna, B. E. Parker, W. V. B. III, P. Mohan, Transformation of the chord-length distributions to size distributions for nonspherical particles with orientation bias, Ind. Eng. Chem. Res. 46 (2007) 3041–3047. F. Czapla, N. Kail, A. Öncül, H. Lorenz, H. Briesen, A. Seidel-Morgenstern, Application of a recent [FBRM]{}-probe model to quantify preferential crystallization of [DL]{}-threonine, Chemical Engineering Research and Design 88 (2010) 1494–1504. P. Hamilton, D. Littlejohn, A. Nordon, J. Sefcik, P. Slavin, Validity of particle size analysis techniques for measurement of the attrition that occurs during vacuum agitated powder drying of needle-shaped particles, Analyst 137 (2012) 118–125. P. Hamilton, D. Littlejohn, A. Nordon, J. Sefcik, P. Slavin, P. Dallin, J. 
Andrews, Studies of particle drying using non-invasive [R]{}aman spectroscopy and particle size analysis, Analyst 136 (2011) 2168–2174. S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, UK, 2004. M. Kaasalainen, J. Torppa, Optimization methods for asteroid lightcurve inversion I. Shape determination, Icarus 153 (2001) 24–36. R. G. Holdich, Fundamentals of Particle Technology, Midland Information Technology and Publishing, Leicestershire, UK, 2002. [^1]: The problem is significantly simplified for spherical particles due to the symmetry properties of the sphere. [^2]: See section 7 of the supplementary information and [@Hamilton2012] for further description of the concepts of Feret diameter and EQPC. [^3]: In all the calculations here a value of $N = 70$ was used for both VSM and LW models (section 2 of the supplementary information). [^4]: Different choices of initial $\gamma_i$ resulted in the same optimum solution. [^5]: The values $q=2$, and initial window size $S=6$ were used for both the VSM and LW models (section 2 of supplementary information). [^6]: The values of $\lambda = 0.01$ and $\lambda = 0.2$ were used in Eq. for the VSM and LW model respectively (section 6 of the supplementary information). [^7]: The values of $\lambda =0$ and $\lambda = 8\times 10^{-15}$ were used in Eq. for the VSM and LW models respectively (see section 6 of the supplementary information). [^8]: The matrix is calculated with the LW model. [^9]: \[fn1\] The system of spherical particles is a 0.05% weight suspension of polystyrene microspheres dispersed in isopropanol. The suspension was placed in a jacketed vessel (with a jacket temperature of $20^{\circ}$C) in the Mettler Toledo EasyMax system. The suspension was stirred at 100rpm and the CLD was measured with a Mettler Toledo FBRM G400 probe. [^10]: The reference to the system of spherical particles here is made only for the purpose of comparing Fig.
\[figs10\](c) with Fig. \[figs10\](b). There will be no further reference to the system of spherical particles beyond this point. All other samples referred to in this text or in the main text are any of samples 1 to 5 shown in Fig. 1 of the main text.
--- abstract: 'Given a family of systems, identifying stabilizing switching signals in terms of infinite walks constructed by concatenating cycles on the underlying directed graph of a switched system that satisfy certain conditions is a well-known technique in the literature. This paper deals with a new method to design these cycles for stability of switched linear systems. We employ properties of the subsystem matrices and a mild assumption on the admissible switches between the subsystems for this purpose. In contrast to prior works, our construction of stabilizing cycles does not involve design of Lyapunov-like functions and storage of sets of scalars in memory prior to the application of a cycle detection algorithm. As a result, the techniques proposed in this paper offer improved numerical tractability.' address: | Department of Electrical Engineering,\ Indian Institute of Science Bangalore,\ Bengaluru - 560012, India,\ E-mail: [email protected] author: - Atreyee Kundu title: | On the design of stabilizing cycles for\ switched linear systems --- Introduction {#s:intro} ============ #### Motivation A *switched system* has two ingredients — a family of systems and a switching signal. The *switching signal* selects an *active subsystem* at every instant of time, i.e., the system from the family that is currently being followed [@Liberzon2003 §1.1.2]. In this paper we work with switched systems in the setting of linear dynamics in discrete-time. Given a family of systems, identification of classes of switching signals that preserve stability of the resulting switched system is a key topic in the literature [@Liberzon2003 Chapter 3]. In the recent past this problem has been studied widely by employing multiple Lyapunov-like functions [@Branicky1998] and graph-theoretic tools.
A weighted directed graph [@Bollobas] is associated with a family of systems and the admissible transitions between them; the vertices of this graph are weighted by a measure of the rate of growth or decay of the Lyapunov-like functions corresponding to the subsystems, and the edges are weighted by a measure of the jump between these functions. A switching signal is expressed as an infinite walk on the above directed graph. Infinite walks whose corresponding switching signals preserve stability are constructed as concatenations of negative weight cycles [@Bollobas] (henceforth, also referred to as stabilizing cycles).[^1] This class of results was first introduced in [@KunCha_HSCC_2014] for switched linear systems, and was later extended to the setting of switched nonlinear systems in [@KunCha_NAHS_2017]. A primary feature of the stability conditions proposed in [@KunCha_HSCC_2014; @KunCha_NAHS_2017] is their numerical tractability compared to the prior results that rely on point-wise properties of the switching signals [@Zhai2002; @KunMisCha_CDC_2015]. In contrast to verifying certain conditions on the number of switches and duration of activation of unstable subsystems on *every* interval of time, one needs to detect negative weight cycles on the underlying weighted directed graph of a switched system. Existence of these cycles depends on two factors: (i) connectivity of the directed graph, and (ii) weights associated to the vertices and edges of this graph. While (i) is determined by the admissible switches between the subsystems, the elements of (ii) are scalar quantities calculated from Lyapunov-like functions, the choice of which is not unique. This feature leads to the problem of ‘co’-designing Lyapunov-like functions such that the underlying weighted directed graph of a switched system admits a stabilizing cycle. This design problem, in general, is numerically difficult; see [@KunCha_NAHS_2017 §3] for a detailed discussion.
As a natural choice, the existing literature considers the Lyapunov-like functions and the corresponding vertex and edge weights to be “given”, and detects negative weight cycles on the underlying weighted directed graph of a switched system. However, non-existence of a negative weight cycle with the given choice of vertex and edge weights does not imply that a family of systems admits no such cycles at all. In addition, storing the vertex and edge weights prior to the application of a negative weight cycle detection algorithm requires a huge amount of memory when the number of subsystems is large. These features motivate the search for a method to construct stabilizing cycles that is independent of the choice of Lyapunov-like functions and works without complete knowledge of the vertex and edge weights of the underlying directed graph of a switched system. In this paper we report our result that addresses these requirements. #### Our contributions Given a family of systems (containing both stable and unstable components) and a set of admissible switches between the subsystems, we first identify sufficient conditions on cycles on the underlying directed graph of the switched system such that they are stabilizing. These conditions are derived in terms of properties of the subsystem matrices, and hence the use of multiple Lyapunov-like functions is avoided. In particular, we rely on (matrix) commutators between the subsystem matrices and a mild assumption on the admissible switches between the subsystems for this purpose. Our stability condition involves the rates of decay of the Schur stable matrices, upper bounds on the Euclidean norms of the commutators of the subsystem matrices, certain scalars capturing the properties of these matrices individually, and the total number of subsystems.
In contrast to prior works, our directed graphs are unweighted, and the detection of our stabilizing cycles depends only on the activation of a vertex whose corresponding subsystem satisfies certain conditions. We then present an algorithm to detect cycles on the underlying directed graph of a switched system that satisfy our conditions. Matrix commutators (Lie brackets) have been employed to study stability of switched systems with all or some stable subsystems earlier in the literature [@Narendra1994; @Agrachev2012; @abc; @def]. However, to the best of our knowledge, this is the first instance in which they are employed to construct stabilizing cycles for switched systems. In summary, the main contribution of this paper is in extending the technique of employing stabilizing cycles to construct stabilizing switching signals, by proposing a new method for designing these cycles. Our class of stabilizing cycles offers better numerical tractability in terms of detecting its elements compared to the existing results. #### Paper organization The remainder of this paper is organized as follows: we formulate the problem under consideration and catalog the required preliminaries for our result in §\[s:prelims\]. Our main result appears in §\[s:mainres\]. We also discuss various features of our result in this section. In §\[s:num\_ex\] we present numerical experiments. We provide a comparison of our result with the existing methods in §\[s:discssn\], and conclude in §\[s:concln\]. A proof of our main result appears in §\[s:all\_proofs\]. #### Notation ${\mathbb{N}}$ is the set of natural numbers, ${\mathbb{N}}_{0} = {\mathbb{N}}\cup\{0\}$. ${\left\lVert\cdot\right\rVert}$ denotes the Euclidean norm (resp., induced matrix norm) of a vector (resp., a matrix). For a matrix $P$, given by a product of matrices $M_{i}$’s, ${\left\lvert{P}\right\rvert}$ denotes the length of the product, i.e., the number of matrices that appear in $P$, counting repetitions.
Preliminaries {#s:prelims} ============= #### The problem We consider a discrete-time switched linear system [@Liberzon2003 §1.1.2] $$\begin{aligned} \label{e:swsys} x(t+1) = A_{\sigma(t)}x(t),\:\:x(0)=x_{0},\:\:t\in{\mathbb{N}}_{0} \end{aligned}$$ generated by - a family of systems $$\begin{aligned} \label{e:family} x(t+1) = A_{i}x(t),\:\:x(0) = x_{0},\:\:i\in{\mathcal{P}},\:\:t\in{\mathbb{N}}_{0}, \end{aligned}$$ where $x(t)\in{\mathbb{R}}^{d}$ is the vector of states at time $t$, ${\mathcal{P}}= \{1,2,\ldots,N\}$ is an index set, $A_{i}\in{\mathbb{R}}^{d\times d}$, $i\in{\mathcal{P}}$ are constant matrices, and - a switching signal $\sigma:{\mathbb{N}}_{0}\to{\mathcal{P}}$ that specifies at every time $t$, the index of the *active subsystem*, i.e., the dynamics from the family that is being followed at $t$. The solution to is given by $$\begin{aligned} \label{e:soln} x(t) = A_{\sigma(t-1)}\ldots A_{\sigma(1)}A_{\sigma(0)}x_{0},\:\:t\in{\mathbb{N}}, \end{aligned}$$ where we have suppressed the dependence of $x$ on $\sigma$ for notational simplicity. [@Agrachev2012 §2] \[d:ges\] The switched system is *globally exponentially stable (GES) under a switching signal $\sigma$* if there exist positive numbers $c$ and $\gamma$ such that for arbitrary choices of the initial condition $x_{0}$, the following inequality holds: $$\begin{aligned} \label{e:ges1} {\left\lVert x(t)\right\rVert}\leq c\exp(-\gamma t){\left\lVert x_{0}\right\rVert}\:\:\text{for all}\:\:t\in{\mathbb{N}}. \end{aligned}$$ We are interested in the following problem: \[prob:mainprob\] Given a family of systems (containing both stable and unstable components) and a set of admissible switches, find a class of switching signals $\mathcal{S}$ such that the switched system is GES under every $\sigma\in\mathcal{S}$. Prior to presenting our solution to Problem \[prob:mainprob\], we catalog some required preliminaries.
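The dynamics and the GES estimate above are easy to illustrate numerically. The following sketch (assuming Python with NumPy; the two-subsystem family, the signal, and all names are illustrative, not taken from the paper) iterates $x(t+1) = A_{\sigma(t)}x(t)$ and records $\lVert x(t)\rVert$:

```python
import numpy as np

def simulate_switched(A, sigma, x0, T):
    """Iterate x(t+1) = A[sigma(t)] x(t) for t = 0, ..., T-1; return all states."""
    xs = [np.asarray(x0, dtype=float)]
    for t in range(T):
        xs.append(A[sigma(t)] @ xs[-1])
    return xs

# Illustrative family (not from the paper): subsystem 1 is Schur stable,
# subsystem 2 is unstable.
A = {1: np.array([[0.5, 0.1], [0.0, 0.4]]),
     2: np.array([[1.1, 0.0], [0.2, 1.05]])}

# Periodic signal 1,1,1,2,1,1,1,2,...: the stable subsystem is active often
# enough to dominate the growth caused by subsystem 2.
sigma = lambda t: 2 if t % 4 == 3 else 1

traj = simulate_switched(A, sigma, x0=[1.0, -1.0], T=40)
norms = [float(np.linalg.norm(x)) for x in traj]
```

Here the per-period product $A_{2}A_{1}^{3}$ is a contraction, so $\lVert x(t)\rVert$ decays exponentially despite the unstable subsystem being activated periodically.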
#### Family of systems Let ${\mathcal{P}}_{S}$ and ${\mathcal{P}}_{U}\subset{\mathcal{P}}$ denote the sets of indices of Schur stable and unstable subsystems, respectively, ${\mathcal{P}}= {\mathcal{P}}_{S}\sqcup{\mathcal{P}}_{U}$.[^2] Let $E({\mathcal{P}})$ denote the set of ordered pairs $(i,j)$ such that a switch from subsystem $i$ to subsystem $j$ is admissible, $i,j\in{\mathcal{P}}$.[^3] Let $$\begin{aligned} \label{e:M_defn} M = \max_{i\in{\mathcal{P}}}{\left\lVert A_{i}\right\rVert}. \end{aligned}$$ The following fact follows from the properties of Schur stable matrices: \[fact:key\] There exist $m\in{\mathbb{N}}$ and $\rho\in]0,1[$ such that the following set of inequalities holds: $$\begin{aligned} \label{e:m_ineq} {\left\lVert A_{i}^{m}\right\rVert}\leq\rho,\:\:i\in{{\mathcal{P}}_{S}}. \end{aligned}$$ We will employ the set of (matrix) commutators defined below in our stability analysis:[^4] $$\begin{aligned} \label{e:comm} E_{p,i} = A_{p}A_{i} - A_{i}A_{p},\:\:p\in{\mathcal{P}}_{S},\:\:i\in {\mathcal{P}}\setminus\{p\}. \end{aligned}$$ #### Switching signals We associate a directed graph $G(V,E)$ with the switched system in the following manner: - The set of vertices $V$ is the set of indices of the subsystems, ${\mathcal{P}}$. - The set of edges $E$ contains a directed edge from a vertex $i$ to a vertex $j$ whenever $(i,j)\in E({\mathcal{P}})$. Recall that a *walk* on a directed graph is an alternating sequence of vertices and edges $W = v_{0}, e_{1}, v_{1}, e_{2}, v_{2}, \ldots, v_{n-1}, e_{n}, v_{n}$, where $v_{\ell}\in V$, $e_{\ell} = (v_{\ell-1},v_{\ell})\in E$, $0 < \ell \leq n$. The length of a walk is its number of edges, counting repetitions, e.g., the length of $W$ above is $n$. The *initial vertex* of $W$ is $v_{0}$ and the *final vertex* of $W$ is $v_{n}$. If $v_{0}=v_{n}$, we say that the walk $W$ is closed. A closed walk $W$ is called a *cycle* if the vertices $v_{i}$, $0<i<n$ are distinct from each other and $v_{0}$.
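The quantities $m$, $\rho$ of Fact \[fact:key\] and the commutator norms $\lVert E_{p,i}\rVert$ can be computed directly from the subsystem matrices. A minimal sketch (assuming NumPy; the matrices and function names are illustrative, not from the paper):

```python
import numpy as np

def power_contraction(Ai, rho=0.9, m_max=200):
    """Smallest m with ||A_i^m|| <= rho, as in Fact 1 (induced 2-norm)."""
    P = np.eye(Ai.shape[0])
    for m in range(1, m_max + 1):
        P = P @ Ai
        if np.linalg.norm(P, 2) <= rho:
            return m
    raise ValueError("no such m found; A_i may not be Schur stable")

def commutator_norm(Ap, Ai):
    """||E_{p,i}|| = ||A_p A_i - A_i A_p|| in the induced 2-norm."""
    return float(np.linalg.norm(Ap @ Ai - Ai @ Ap, 2))

A_p = np.array([[0.6, 0.3], [0.0, 0.7]])   # Schur stable (eigenvalues 0.6, 0.7)
A_i = np.array([[1.0, 0.1], [0.0, 0.9]])

m = power_contraction(A_p)       # here already ||A_p|| <= 0.9, so m = 1
eps = commutator_norm(A_p, A_i)
```

For a common $m$ valid for all $i\in{\mathcal{P}}_{S}$, one would take the maximum of `power_contraction(A_i)` over the stable indices.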
By the term *infinite walk* we mean a walk of infinite length, i.e., it has infinitely many edges. [[@KunCha_HSCC_2014 Fact 3]]{} \[fact:walk\] The set of switching signals $\sigma:{\mathbb{N}}_{0}\to{\mathcal{P}}$ and the set of infinite walks on $G(V,E)$ are in bijective correspondence. Clearly, given a family of systems and a set of admissible switches, we are interested in a class of infinite walks whose corresponding class of switching signals ensures GES of the switched system . For a cycle $W$ on $G(V,E)$, let $ \mathcal{V}(W) = \{v\in V\:|\:v\:\:\text{appears in}\:\:W\},\:\text{and} $ $ \mathcal{E}(W) = \{(u,v)\in E\:|\:(u,v)\:\:\text{appears in}\:\:W\}. $ \[d:v-cycles\] Fix a vertex $\overline{v}\in V$. A cycle $W = v_{0},(v_{0},v_{1}),v_{1},\ldots,v_{n-1},(v_{n-1},v_{0}),v_{0}$ on $G(V,E)$ is called a $\overline{v}$-cycle if $\overline{v}\in\mathcal{V}(W)$. \[d:concat\_cycle\] The distinct cycles $ W_{1} = v_{0}^{(1)}, (v_{0}^{(1)},v_{1}^{(1)}), v_{1}^{(1)}$,\ $\ldots,v_{n_{1}-1}^{(1)}, (v_{n_{1}-1}^{(1)},v_{0}^{(1)}), v_{0}^{(1)}, W_{2} = v_{0}^{(2)}, (v_{0}^{(2)},v_{1}^{(2)}), v_{1}^{(2)},\ldots$,\ $v_{n_{2}-1}^{(2)}, (v_{n_{2}-1}^{(2)},v_{0}^{(2)}), v_{0}^{(2)},\ldots, W_{r} = v_{0}^{(r)}, (v_{0}^{(r)},v_{1}^{(r)}), v_{1}^{(r)},\ldots$,\ $v_{n_{r}-1}^{(r)}, (v_{n_{r}-1}^{(r)},v_{0}^{(r)}), v_{0}^{(r)}, $ on $G(V,E)$ are called *concatenable* on a vertex $u\in V$ if $v_{0}^{(1)} = v_{0}^{(2)}$ = $\cdots$ = $v_{0}^{(r)}$ = $u$. Let $C_{\overline{v}}$ denote the set of all $\overline{v}$-cycles on $G(V,E)$, $\overline{v}\in V$, that are concatenable on vertex $u$ for some $u\in V$. We will consider the following structures: - If there is no cycle $W$ on $G(V,E)$ such that $\overline{v}\in\mathcal{V}(W)$, then $C_{\overline{v}} = \emptyset$. - If $G(V,E)$ admits exactly one $\overline{v}$-cycle $W$, then $C_{\overline{v}}$ is a singleton. In particular, $C_{\overline{v}} = \{W\}$.
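The correspondence between concatenated cycles and switching signals can be sketched in a few lines. In the sketch below (assuming Python; the cycles and the alternation pattern are hypothetical, chosen only for illustration), each cycle is stored as its vertex sequence and the final, repeated vertex is dropped before the next cycle starts:

```python
import itertools

def signal_from_cycles(cycles, pattern):
    """Generate an infinite switching signal by following the vertex sequence
    of cycles[j] for each j drawn from the infinite iterable 'pattern'."""
    for j in pattern:
        for v in cycles[j][:-1]:   # last vertex equals the first: skip it
            yield v

W1 = [1, 2, 1]   # the cycle 1,(1,2),2,(2,1),1
W2 = [1, 3, 1]   # the cycle 1,(1,3),3,(3,1),1 (concatenable with W1 on vertex 1)
sigma = signal_from_cycles([W1, W2], itertools.cycle([0, 1]))

prefix = list(itertools.islice(sigma, 8))   # first 8 values of sigma
```

With the alternating pattern above, the generated signal starts $1,2,1,3,1,2,1,3,\ldots$, i.e., the infinite walk obtained by concatenating $W_{1},W_{2},W_{1},W_{2},\ldots$ on vertex $1$.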
We will work with infinite walks on $G(V,E)$ constructed by concatenating the elements from $C_{\overline{v}}$. At this point, it is important to clarify the meaning of such a concatenation. Let $W_{1}, W_{2}, \ldots, W_{q}$ be distinct concatenable $\overline{v}$-cycles of length $n_{1},n_{2},\ldots,n_{q}$ on $G(V,E)$, respectively. An infinite walk $W$ obtained by concatenating $W_{j}$, $j = 1,2,\ldots,q$ is $ W = v_{0}^{(j_{1})}, \bigl(v_{0}^{(j_{1})}, v_{1}^{(j_{1})}\bigr), v_{1}^{(j_{1})}, \ldots, v_{n_{j_{1}}-1}^{(j_{1})}, \bigl(v_{n_{j_{1}}-1}^{(j_{1})},v_{0}^{(j_{1})}\bigr), v_{0}^{(j_{2})}, \bigl(v_{0}^{(j_{2})}$,\ $v_{1}^{(j_{2})}\bigr), v_{1}^{(j_{2})}, \ldots, v_{n_{j_{2}}-1}^{(j_{2})}, \bigl(v_{n_{j_{2}}-1}^{(j_{2})},v_{0}^{(j_{2})}\bigr), v_{0}^{(j_{3})},\ldots$,\ $j_{k}\in\{1,2,\ldots,q\},\:\:k\in{\mathbb{N}}.$ We are now in a position to present our result. Result {#s:mainres} ====== \[t:mainres\] *Consider a switched system and its underlying directed graph $G(V,E)$. Let $\gamma$ be an arbitrary positive number such that $$\begin{aligned} \label{e:maincondn1} \rho e^{\gamma m} < 1. \end{aligned}$$ Suppose that there exists $p\in{\mathcal{P}}_{S}$ that satisfies the following conditions:* - $C_{p} \neq \emptyset$, and - there exists a scalar $\varepsilon_{p}$ small enough such that $$\begin{aligned} \label{e:maincondn2} {\left\lVert E_{p,i}\right\rVert}\leq\varepsilon_{p}\:\:\text{for all}\:i\in{\mathcal{P}}\setminus\{p\}, \end{aligned}$$ and $$\begin{aligned} \label{e:maincondn3} \rho e^{\gamma m} + (N-1)\frac{m(m+1)}{2}M^{Nm-2}\varepsilon_{p}e^{\gamma Nm}\leq 1. \end{aligned}$$ Then the switched system is GES under every switching signal $\sigma$ whose corresponding infinite walk $W$ is constructed by concatenating elements from $C_{p}$. Theorem \[t:mainres\] is our solution to Problem \[prob:mainprob\].
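The two numerical conditions of the theorem are direct scalar checks. The sketch below (assuming Python; the parameter values are placeholders) evaluates the left-hand side of the main inequality and, since the inequality is affine in $\varepsilon_{p}$, also solves it for the largest admissible $\varepsilon_{p}$:

```python
import math

def lhs(N, m, M, rho, eps_p, gamma):
    """Left-hand side of the theorem's main inequality:
       rho*e^{gamma m} + (N-1)*(m(m+1)/2)*M^{Nm-2}*eps_p*e^{gamma N m}."""
    return (rho * math.exp(gamma * m)
            + (N - 1) * m * (m + 1) / 2 * M ** (N * m - 2)
              * eps_p * math.exp(gamma * N * m))

def eps_max(N, m, M, rho, gamma):
    """Largest eps_p for which the main inequality still holds (<= 1)."""
    slack = 1.0 - rho * math.exp(gamma * m)   # requires rho*e^{gamma m} < 1
    coeff = ((N - 1) * m * (m + 1) / 2
             * M ** (N * m - 2) * math.exp(gamma * N * m))
    return slack / coeff

val = lhs(N=2, m=1, M=1.25, rho=0.90, eps_p=0.06, gamma=0.0001)
cap = eps_max(N=2, m=1, M=1.25, rho=0.90, gamma=0.0001)
```

With these placeholder values, `val` is roughly $0.96 \leq 1$, so the commutator bound $\varepsilon_{p} = 0.06$ is small enough, and `cap` gives the largest $\varepsilon_{p}$ the inequality tolerates.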
The elements of the class of stabilizing switching signals $\mathcal{S}$ correspond to the infinite walks that are constructed by concatenating elements from the set of cycles, $C_{p}$, where $p$ is a stable subsystem satisfying certain conditions. Notice that since $\rho < 1$, there always exists a number $\gamma$ (could be very small) such that condition holds. If, in addition, there exists a $p\in{\mathcal{P}}_{S}$ such that $G(V,E)$ admits at least one cycle that involves vertex $p$, and the Euclidean norms of (matrix) commutators of $A_{p}$ and $A_{i}$, $i\in{\mathcal{P}}\setminus\{p\}$, are bounded above by a scalar $\varepsilon_{p}$ small enough such that condition holds, then the switched system is GES under a $\sigma$ whose corresponding infinite walk is constructed by concatenating the cycles from $C_{p}$. Our stability conditions have the following important features:\ 1) Theorem \[t:mainres\] accommodates sets of matrices $A_{j}$, $j\in{\mathcal{P}}$, for which $A_{p}$ and $A_{i}$, $p\in{\mathcal{P}}_{S}$, $i\in{\mathcal{P}}\setminus\{p\}$, do not necessarily commute, but are “close” to sets of matrices for which they commute. When these matrices commute for all $i\in{\mathcal{P}}\setminus\{p\}$ (i.e., $\varepsilon_{p} = 0$), condition reduces to condition . The stability conditions of Theorem \[t:mainres\] are inherently robust in the above sense. Indeed, if we are relying on approximate models of $A_{j}$, $j\in{\mathcal{P}}$, or the elements of $A_{j}$, $j\in{\mathcal{P}}$, are prone to evolve over time, then GES of the switched system holds under our stabilizing switching signals as long as for some $p\in{\mathcal{P}}_{S}$, the commutators of $A_{p}$ and $A_{i}$, $i\in{\mathcal{P}}\setminus\{p\}$, in their Euclidean norm, are bounded above by a small scalar $\varepsilon_{p}$ such that condition holds.\ 2) The proposed construction of infinite walks involves activation of a vertex (subsystem) $p\in{\mathcal{P}}_{S}$ for which conditions - hold.
Clearly, we require ${\mathcal{P}}_{S}\neq\emptyset$. However, stabilizing cycles with all unstable vertices except for $p$ are perfectly admissible. The activation of at least one Schur stable subsystem is also a requirement for Lyapunov-like functions based construction of stabilizing cycles. \[rem:comm\_compa\] Commutation relations between the subsystem matrices or certain products of these matrices have been employed to study stability of the switched system earlier in the literature. The switched system is stable under arbitrary switching [@Liberzon2003 Chapter 2] if all the subsystems $A_{i}$, $i\in{\mathcal{P}}$, are Schur stable, and commute pairwise [@Narendra1994] or are “sufficiently close” to a set of matrices whose elements commute pairwise [@Agrachev2012]. Recently in [@abc; @def] stability of the switched system under restricted switching is studied using matrix commutators. Given an admissible minimum dwell time, in [@abc] we identify conditions on the subsystem matrices such that stability of the switched system is preserved under all switching signals satisfying the given minimum dwell time. The overarching hypothesis there is that all subsystems are Schur stable. The problem of identifying classes of stabilizing switching signals when not all systems in the family are stable and an admissible switching signal obeys certain minimum and maximum dwell times on all subsystems, is addressed in [@def]. In the current work we tackle the problem of algorithmically constructing stabilizing cycles by employing commutation relations between the subsystem matrices. We deal with families containing both stable and unstable subsystems, and obeying pre-specified restrictions on admissible switches between the subsystems. See Remark \[rem:proof\_tech\] for a discussion on our analysis technique. We next provide an algorithm (Algorithm \[algo:sw-sig\_construc\]) that detects a vertex (subsystem) $p$ such that conditions - hold, and constructs the set of cycles, $C_{p}$.
Once a suitable $C_{p}$ is obtained, an infinite walk $W = v_{0}, (v_{0},v_{1}),v_{1}, (v_{1},v_{2}),v_{2},\ldots$ can be constructed by concatenating the elements of $C_{p}$ as described in §\[s:prelims\], and the corresponding switching signal $\sigma$ can be designed as $\sigma(0) = v_{0}$, $\sigma(1) = v_{1}$, $\sigma(2) = v_{2},\:\:\ldots$. Clearly, if $C_{p}$ is a singleton, e.g., $C_{p} = \{W\}$, then a stabilizing $\sigma$ described in Theorem \[t:mainres\] is periodic. It corresponds to the infinite walk constructed by repeating $W$.

Algorithm \[algo:sw-sig\_construc\]
Input: a family of systems and a set of admissible switches $E({\mathcal{P}})$.
Output: a set of concatenable $p$-cycles, $C_{p}$.
1. Construct the underlying directed graph $G(V,E)$ of the switched system .
2. Compute $m$ and $\rho$ such that holds.
3. Compute $\gamma$ such that holds.
4. Compute $\varepsilon_{p}$ such that holds.
5. Create a set $L_{p}$ containing all $p$-cycles on $G(V,E)$.
6. If $L_{p}$ contains cycles that are concatenable on a vertex $u$ for some $u\in V$, create the set $C_{p}$ with all such elements of $L_{p}$; otherwise, if $L_{p}\neq\emptyset$, create the set $C_{p}$ with any one element of $L_{p}$; else set $C_{p} = \emptyset$.
7. Store $C_{p}$ in memory. Exit Algorithm \[algo:sw-sig\_construc\].

An important aspect of our stability conditions is the existence and detection of a set of cycles, $C_{p}$, corresponding to a vertex (subsystem) $p$ for which conditions - hold. The existence of a non-empty $C_{p}$ depends solely on the connectivity of $G(V,E)$, which is governed by the given set of admissible switches $E({\mathcal{P}})$. We address the detection issue in two steps: first, we list out all $p$-cycles on $G(V,E)$, and second, we pick elements from the above list that are concatenable on a vertex $u$ for some $u\in V$. Off-the-shelf algorithms from graph theory can be employed to enumerate all cycles on $G(V,E)$ that involve a pre-specified vertex $p$, see e.g., [@Johnson1975] and the references therein.
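The cycle-listing step (the construction of $L_{p}$) can be sketched with a plain depth-first search over simple paths; this is a minimal illustration, not the Johnson-style algorithm cited above, and the graph used is hypothetical. Anchoring every cycle at $p$ makes the returned cycles automatically concatenable on $p$:

```python
def p_cycles(edges, p):
    """All cycles through vertex p in a digraph given as {u: set of successors}.
    Each cycle is returned as its vertex sequence, starting and ending at p."""
    cycles, path, on_path = [], [p], {p}

    def dfs(u):
        for v in edges.get(u, ()):
            if v == p:                      # closing edge back to p: a p-cycle
                cycles.append(path + [p])
            elif v not in on_path:          # keep the path simple
                path.append(v); on_path.add(v)
                dfs(v)
                path.pop(); on_path.remove(v)

    dfs(p)
    return cycles

E = {1: {2, 3}, 2: {1}, 3: {1}}   # a hypothetical set of admissible switches
C1 = p_cycles(E, 1)               # the 1-cycles: [1, 2, 1] and [1, 3, 1]
```

This exhaustive enumeration is exponential in the worst case; for large graphs one would substitute the cycle-enumeration algorithm of [@Johnson1975].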
Numerical experiments {#s:num_ex} ===================== \[ex:num\_ex1\] Consider ${\mathcal{P}}= \{1,2\}$ with $A_{1} = {\begin{pmatrix}0.86 & 0.05\\-0.07 & 0.89\end{pmatrix}}$ and $A_{2} = {\begin{pmatrix}0.81 & -0.07\\-0.74 & 0.73\end{pmatrix}}$. Clearly, ${\mathcal{P}}_{S} = \{1\}$ and ${\mathcal{P}}_{U} = \{2\}$. Let $E({\mathcal{P}}) = \{(1,2),(2,1)\}$. It follows that $L_{1} = \bigl\{\bigl(1,(1,2),2,(2,1),1\bigr),\bigl(2,(2,1),1,(1,2),2\bigr)\bigr\}$. Fix $C_{1} = \bigl\{\bigl(1,(1,2),2,(2,1),1\bigr)\bigr\}$. It is worth noting that both ${\left\lVert A_{1}A_{2}\right\rVert}$ and ${\left\lVert A_{2}A_{1}\right\rVert}>1$, and hence stability of the switched system under the switching signal $\sigma(0) = 1$, $\sigma(1) = 2$, $\sigma(2) = 1$, $\sigma(3) = 2,\ldots$ is non-trivial. We will apply the conditions of Theorem \[t:mainres\] to determine GES of the switched system under the above $\sigma$. We have ${\left\lVert A_{1}\right\rVert} = 0.89$ and $M = 1.25$. Let $\rho = 0.90$ and $\gamma = 0.0001$. It follows that $m=1$ and $\rho e^{\gamma m} = 0.90 < 1$. We compute $\varepsilon_{1} = {\left\lVert A_{1}A_{2}-A_{2}A_{1}\right\rVert} = 0.06$. Consequently, $$\begin{aligned} \rho e^{\gamma m} + (N-1)\frac{m(m+1)}{2}M^{Nm-2}\varepsilon_{p}e^{\gamma Nm} = 0.96 < 1. \end{aligned}$$ We demonstrate $({\left\lVert x(t)\right\rVert})_{t\in{\mathbb{N}}_{0}}$ under $\sigma$ corresponding to 1000 different initial conditions $x_{0}$ chosen uniformly at random from the set $[-10,10]^{2}$ in Figure \[fig:x\_plot\]. Figure \[fig:x\_plot\]: plot of ${\left\lVert x(t)\right\rVert}$ versus $t$. \[ex:num\_ex2\] We now check scalability of the stability conditions in Theorem \[t:mainres\] with respect to $N$, $M$, $m$ and $\rho$. In the simplest case when there is a $p\in{\mathcal{P}}_{S}$ such that the matrix $A_{p}$ commutes with every matrix $A_{i}$, $i\in{\mathcal{P}}\setminus\{p\}$ (i.e., $\varepsilon_{p} = 0$), it follows that condition holds even when the numbers $M$, $N$ and $m$ are very large.
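The quantities reported in Example \[ex:num\_ex1\] can be reproduced directly from the two matrices; the sketch below (assuming NumPy) recomputes $M$, $\varepsilon_{1}$, the left-hand side of the main inequality, and simulates the periodic signal $1,2,1,2,\ldots$:

```python
import numpy as np

A1 = np.array([[0.86, 0.05], [-0.07, 0.89]])
A2 = np.array([[0.81, -0.07], [-0.74, 0.73]])
norm = lambda X: float(np.linalg.norm(X, 2))   # induced Euclidean norm

M = max(norm(A1), norm(A2))                    # ~1.25
eps1 = norm(A1 @ A2 - A2 @ A1)                 # ~0.06

rho, gamma, m, N = 0.90, 0.0001, 1, 2
lhs = (rho * np.exp(gamma * m)
       + (N - 1) * m * (m + 1) / 2 * M ** (N * m - 2)
         * eps1 * np.exp(gamma * N * m))       # ~0.96 <= 1

# Simulate the periodic switching signal 1, 2, 1, 2, ...
x = np.array([1.0, 1.0])
norms = []
for t in range(200):
    x = (A1 if t % 2 == 0 else A2) @ x
    norms.append(float(np.linalg.norm(x)))
```

Although $\lVert A_{1}A_{2}\rVert$ and $\lVert A_{2}A_{1}\rVert$ both exceed $1$, the spectral radius of the per-period matrix $A_{2}A_{1}$ is below $1$, and the simulated trajectory decays exponentially, consistent with Figure \[fig:x\_plot\].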
A more interesting case is, of course, when the above matrices do not commute but are sufficiently “close”. We fix $\gamma = 0.0001$, and vary the parameters $N$, $m$, $M$, $\rho$ to plot upper bounds on $\varepsilon_{p}$ for the satisfaction of condition , see Figures \[fig:plot1\], \[fig:plot2\] and \[fig:plot3\]. Not surprisingly, it is observed that the size of the class of non-commuting matrices catered by Theorem \[t:mainres\] shrinks with increasing $N$, $m$, $M$, and $\rho$. Figure \[fig:plot1\]: plot of $\varepsilon_{p}$ versus $N$ with $M = 2$ and $\rho = 0.9$. Figure \[fig:plot2\]: plot of $\varepsilon_{p}$ versus $N$ with $m = 5$ and $\rho = 0.9$. Figure \[fig:plot3\]: plot of $\varepsilon_{p}$ versus $N$ with $M = 2$ and $m = 5$. Discussion {#s:discssn} ========== The method of characterizing stabilizing switching signals in terms of their corresponding infinite walks constructed by concatenating cycles that satisfy certain conditions, is not new in the literature. Consider a directed graph $\overline{G}(V,E)$, which is the graph $G(V,E)$ with vertex weights $w(i) = -{\left\lvert{\ln\lambda_{i}}\right\rvert},\:\text{if}\:i\in{\mathcal{P}}_{S}$ and $w(i) = {\left\lvert{\ln\lambda_{i}}\right\rvert},\:\text{if}\:i\in{\mathcal{P}}_{U}$, and edge weights $w(i,j) = \ln\mu_{ij},\:(i,j)\in E({\mathcal{P}})$. Here the scalars $0 < \lambda_{i} < 1$ for $i\in{\mathcal{P}}_{S}$, $\lambda_{i}\geq 1$ for $i\in{\mathcal{P}}_{U}$ and $\mu_{ij} > 0$ for $(i,j)\in E({\mathcal{P}})$, are computed from Lyapunov-like functions $V_{i}:{\mathbb{R}}^{d}\to{\mathbb{R}}$ corresponding to the subsystems $i\in{\mathcal{P}}$. In [@KunCha_HSCC_2014; @KunCha_NAHS_2017] a stabilizing cycle was characterized as a negative weight cycle on $\overline{G}(V,E)$.
Notice that the existence of a negative weight cycle $W$ on $\overline{G}(V,E)$ depends on: (a) the connectivity of $\overline{G}(V,E)$, and (b) the weights associated to the vertices and edges of $\overline{G}(V,E)$. While connectivity of $\overline{G}(V,E)$ is governed by the given set of admissible switches $E({\mathcal{P}})$, there is an element of ‘choice’ associated to the Lyapunov-like functions $V_{i}$, $i\in{\mathcal{P}}$ and hence the corresponding scalars $\lambda_{i}$, $i\in{\mathcal{P}}$ and $\mu_{ij}$, $(i,j)\in E({\mathcal{P}})$. A natural question is, therefore, the following: given a family of systems and a set of admissible switches, can we design Lyapunov-like functions $V_{i}$, $i\in{\mathcal{P}}$ (and hence, the corresponding scalars $\lambda_{i}$, $i\in{\mathcal{P}}$ and $\mu_{ij}$, $(i,j)\in E({\mathcal{P}})$) such that $\overline{G}(V,E)$ admits a negative weight cycle? To the best of our knowledge, this question is numerically difficult, and only a partial solution can be obtained as a special case of [@KunQue_TCNS_2019 Algorithm 2]. This partial solution searches for suitable scalars over a finite number of choices from the set of quadratic Lyapunov-like functions; see also [@KunQue_TCNS_2019 Remarks 9 and 12] for discussions. In the absence of a complete solution to the above question, the existing literature considers the Lyapunov-like functions $V_{i}$, $i\in{\mathcal{P}}$ and the corresponding scalars $\lambda_{i}$, $i\in{\mathcal{P}}$ and $\mu_{ij}$, $(i,j)\in E({\mathcal{P}})$ to be “given”, and searches for negative weight cycles on $\overline{G}(V,E)$. However, non-existence of such a cycle with the given sets of vertex and edge weights does not imply that the switched system admits no such cycle at all, and hence adds to the conservatism of the class of results under consideration.
Moreover, for large-scale switched systems, storing the vertex and edge weights prior to the application of a negative weight cycle detection algorithm has a large memory requirement. Recently in [@BalaKunCha_MCRF_2019] the authors proposed a randomized algorithm that detects a cycle on $\overline{G}(V,E)$ without prior knowledge of the vertex and edge weights of the graph, and identified sufficient conditions on the connectivity and the vertex and edge weights of $\overline{G}(V,E)$ under which such a cycle is stabilizing (in the sense that it is a negative weight cycle). The conditions on favourable vertex and edge weights are provided in terms of their statistical properties. However, given a family of systems , the problem of designing Lyapunov-like functions $V_{i}$, $i\in{\mathcal{P}}$ such that the corresponding scalars $\lambda_{i}$, $i\in{\mathcal{P}}$ and $\mu_{ij}$, $(i,j)\in E({\mathcal{P}})$ satisfy the proposed conditions is not addressed. In the current work we employ properties of the subsystem matrices and avoid the use of Lyapunov-like functions. In contrast to $\overline{G}(V,E)$, we do not associate weights to the vertices and edges of $G(V,E)$, and our detection mechanism for a set of cycles $C_{p}$ depends solely on the connectivity of $G(V,E)$. However, [the stability conditions]{} presented in this paper are restricted to the setting of switched linear systems, unlike the case of Lyapunov-like functions based [construction]{} of stabilizing cycles, which extends to the nonlinear setting, as demonstrated in [@KunCha_NAHS_2017]. Concluding remarks {#s:concln} ================== In this paper we proposed a new [method to construct]{} stabilizing cycles for switched linear systems. The proposed result involves properties of commutators of the subsystem matrices and a mild assumption on the admissible switches between the subsystems.
[Algorithmic construction]{} of stabilizing cycles for switched nonlinear systems without involving the design of Lyapunov-like functions remains an open question. Proof of our result {#s:all_proofs} =================== [*Proof of Theorem \[t:mainres\]*]{}: Let admit a $p\in{\mathcal{P}}_{S}$ for which conditions and hold. Let $\sigma$ be a switching signal whose corresponding infinite walk is constructed by concatenating elements from the set $C_{p}$. Let ${\mathcal{M}}$ be the corresponding word (matrix product) defined as $ {\mathcal{M}}= \cdots A_{\sigma(2)}A_{\sigma(1)}A_{\sigma(0)}. $ The condition for GES of under $\sigma$ can be written equivalently as [@Agrachev2012 §2]: there exist positive numbers $c$ and $\gamma$ such that $$\begin{aligned} \label{e:pf1_step1} {\left\lVert{\mathcal{M}}\right\rVert} \leq c e^{-\gamma{\left\lvert{{\mathcal{M}}}\right\rvert}}\:\:\text{for all}\:{\left\lvert{{\mathcal{M}}}\right\rvert}. \end{aligned}$$ It, therefore, suffices to show that condition holds for the above $\sigma$. We will employ mathematical induction on ${\left\lvert{{\mathcal{M}}}\right\rvert}$ for this purpose. [*A. Induction basis*]{}: Pick $c$ large enough so that holds for ${\mathcal{M}}$ satisfying ${\left\lvert{{\mathcal{M}}}\right\rvert}\leq Nm$. [*B. Induction hypothesis*]{}: Let ${\left\lvert{{\mathcal{M}}}\right\rvert}\geq Nm + 1$ and assume that is proved for all products of length less than ${\left\lvert{{\mathcal{M}}}\right\rvert}$. [*C. Induction step*]{}: Let ${\mathcal{M}}= LR$, where ${\left\lvert{L}\right\rvert} = Nm$. We observe that $L$ contains at least $m$ occurrences of $A_{p}$. Indeed, each cycle in $C_{p}$ contains the vertex $p$, and the length of any cycle on $G(V,E)$ is at most $N$.
We rewrite $L$ as $ L = A_{p}^{m}L_{1} + L_{2}, $ where ${\left\lvert{L_{1}}\right\rvert} = (N-1)m$, and $L_{2}$ contains at most $(N-1)\frac{m(m+1)}{2}$ terms of length $Nm-1$, each with $Nm-2$ factors $A_{i}$, $i\in{\mathcal{P}}$ and one factor $E_{p,i}$, $i\in{\mathcal{P}}\setminus\{p\}$. Now, from the sub-multiplicativity and sub-additivity properties of the induced norm, we have $$\begin{aligned} \label{e:pf1_step2} &\hspace*{-0.15cm}{\left\lVert{\mathcal{M}}\right\rVert} = {\left\lVert LR\right\rVert} \leq {\left\lVert A_{p}^{m}\right\rVert}{\left\lVert L_{1}R\right\rVert} + {\left\lVert L_{2}\right\rVert}{\left\lVert R\right\rVert}\nonumber\\ &\hspace*{-0.15cm}\leq \rho ce^{-\gamma({\left\lvert{{\mathcal{M}}}\right\rvert}-m)} + (N-1)\frac{m(m+1)}{2}M^{Nm-2}\varepsilon_{p} ce^{-\gamma({\left\lvert{{\mathcal{M}}}\right\rvert}-Nm)}\nonumber\\ &\hspace*{-0.15cm}=ce^{-\gamma{\left\lvert{{\mathcal{M}}}\right\rvert}}\bigl(\rho e^{\gamma m} + (N-1)({m(m+1)}/{2})M^{Nm-2}\varepsilon_{p} e^{\gamma Nm}\bigr). \end{aligned}$$ In the above inequality the upper bounds on ${\left\lVert L_{1}R\right\rVert}$ and ${\left\lVert R\right\rVert}$ are obtained from the relations ${\left\lvert{{\mathcal{M}}}\right\rvert} = {\left\lvert{A_{p}^{m}}\right\rvert}+{\left\lvert{L_{1}R}\right\rvert}$ and ${\left\lvert{{\mathcal{M}}}\right\rvert} = {\left\lvert{L}\right\rvert}+{\left\lvert{R}\right\rvert}$, respectively. Applying to leads to . Consequently, is GES under the $\sigma$ in consideration. This completes our proof of Theorem \[t:mainres\]. \[rem:proof\_tech\] Our proof of Theorem \[t:mainres\] relies on combinatorial arguments applied to the matrix product ${\mathcal{M}}$ split into the sum $A_{p}^{m}L_{1}R + L_{2}R$. The combinatorial analysis technique for the stability of switched systems was first introduced in [@Agrachev2012] for stability under arbitrary switching, and was recently extended to stability under pre-specified dwell times in [@abc; @def].
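As a numerical sanity check on this induction step: the bracket in the last line must be at most one for the induction to close, which is presumably what condition (whose number is elided in this extraction) demands. Under that assumed reading, the largest admissible $\varepsilon_{p}$ can be computed as below; the shrinkage of this bound with growing $N$, $m$, $M$ and $\rho$ is exactly what the plots in the numerical section show.

```python
import math

def eps_p_upper_bound(N, m, M, rho, gamma=1e-4):
    """Largest eps_p with rho*e^(gamma*m) + (N-1)*(m(m+1)/2)*M^(Nm-2)*eps_p*e^(gamma*N*m) <= 1.

    Assumed reading of the (elided) condition, inferred from the induction step above.
    """
    slack = 1.0 - rho * math.exp(gamma * m)
    if slack <= 0.0:
        return 0.0  # no eps_p > 0 can work
    coeff = (N - 1) * (m * (m + 1) / 2) * M ** (N * m - 2) * math.exp(gamma * N * m)
    return slack / coeff

# The admissible eps_p shrinks quickly as N grows (here m = 5, M = 2, rho = 0.9):
for N in (3, 4, 5):
    print(N, eps_p_upper_bound(N, m=5, M=2, rho=0.9))
```

The dominant factor is $M^{Nm-2}$, which explains the steep decay of the plotted bounds.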
The following properties of stabilizing cycles are used in our analysis: (i) each of them contains a vertex $p$ for which conditions - hold, and (ii) each of them is of length at most $N$. The choice of matrix commutators involves the subsystem matrix $A_{p}$ and the subsystem matrices $A_{i}$, $i\in{\mathcal{P}}\setminus\{p\}$, and they are utilized to rearrange any possible product corresponding to an infinite walk constructed by concatenating elements from $C_{p}$. Notice that the upper bound on the number of terms in $L_{2}$ and their structure are obtained by exchanging $A_{p}$ with $A_{i}$, $i\in{\mathcal{P}}\setminus\{p\}$ towards achieving the form $L = A_{p}^{m}L_{1}+L_{2}$. Consider, for example, $G(V,E)$ with $V = \{1,2,3,4\}$, ${\mathcal{P}}_{S} = \{1,2\}$, ${\mathcal{P}}_{U} = \{3,4\}$ and $E({\mathcal{P}}) = \{(1,2),(1,3),(1,4),(2,1),(2,3),(3,4),(4,1),(4,2),(4,3)\}$. Let $m=2$, and conditions - hold for $p=1$. Suppose that $L = A_{2}A_{4}A_{3}A_{1}A_{4}A_{3}A_{2}A_{1}$. It can be rewritten as $$\begin{aligned} &A_{2}A_{4}\underline{A_{3}A_{1}}A_{4}A_{3}A_{2}A_{1}\\ =& A_{2}\underline{A_{4}A_{1}}A_{3}A_{4}A_{3}A_{2}A_{1}-A_{2}A_{4}E_{3,1}A_{4}A_{3}A_{2}A_{1}\\ =& \underline{A_{2}A_{1}}A_{4}A_{3}A_{4}A_{3}A_{2}A_{1}-A_{2}E_{4,1}A_{3}A_{4}A_{3}A_{2}A_{1} - A_{2}A_{4}E_{3,1}A_{4}A_{3}A_{2}A_{1}\\ =& A_{1}A_{2}A_{4}A_{3}A_{4}A_{3}\underline{A_{2}A_{1}}-E_{2,1}A_{4}A_{3}A_{4}A_{3}A_{2}A_{1} -A_{2}E_{4,1}A_{3}A_{4}A_{3}A_{2}A_{1}\\ &- A_{2}A_{4}E_{3,1}A_{4}A_{3}A_{2}A_{1}\\ =&A_{1}A_{2}A_{4}A_{3}A_{4}\underline{A_{3}A_{1}}A_{2}-A_{1}A_{2}A_{4}A_{3}A_{4}A_{3}E_{2,1} -E_{2,1}A_{4}A_{3}A_{4}A_{3}A_{2}A_{1}\\ &-A_{2}E_{4,1}A_{3}A_{4}A_{3}A_{2}A_{1}- A_{2}A_{4}E_{3,1}A_{4}A_{3}A_{2}A_{1}\\ =&A_{1}A_{2}A_{4}A_{3}\underline{A_{4}A_{1}}A_{3}A_{2}-A_{1}A_{2}A_{4}A_{3}A_{4}E_{3,1}A_{2} -A_{1}A_{2}A_{4}A_{3}A_{4}A_{3}E_{2,1}\\ &-E_{2,1}A_{4}A_{3}A_{4}A_{3}A_{2}A_{1} -A_{2}E_{4,1}A_{3}A_{4}A_{3}A_{2}A_{1} - A_{2}A_{4}E_{3,1}A_{4}A_{3}A_{2}A_{1}\\ 
=&A_{1}A_{2}A_{4}\underline{A_{3}A_{1}}A_{4}A_{3}A_{2}-A_{1}A_{2}A_{4}A_{3}E_{4,1}A_{3}A_{2} -A_{1}A_{2}A_{4}A_{3}A_{4}E_{3,1}A_{2}\\ &-A_{1}A_{2}A_{4}A_{3}A_{4}A_{3}E_{2,1}-E_{2,1}A_{4}A_{3}A_{4}A_{3}A_{2}A_{1}-A_{2}E_{4,1}A_{3}A_{4}A_{3}A_{2}A_{1}\\ &- A_{2}A_{4}E_{3,1}A_{4}A_{3}A_{2}A_{1}\\ =&A_{1}A_{2}\underline{A_{4}A_{1}}A_{3}A_{4}A_{3}A_{2}-A_{1}A_{2}A_{4}E_{3,1}A_{4}A_{3}A_{2} -A_{1}A_{2}A_{4}A_{3}E_{4,1}A_{3}A_{2}\\ &-A_{1}A_{2}A_{4}A_{3}A_{4}E_{3,1}A_{2}-A_{1}A_{2}A_{4}A_{3}A_{4}A_{3}E_{2,1}-E_{2,1}A_{4}A_{3}A_{4}A_{3}A_{2}A_{1}\\ &-A_{2}E_{4,1}A_{3}A_{4}A_{3}A_{2}A_{1} - A_{2}A_{4}E_{3,1}A_{4}A_{3}A_{2}A_{1}\\ =&A_{1}\underline{A_{2}A_{1}}A_{4}A_{3}A_{4}A_{3}A_{2}-A_{1}A_{2}E_{4,1}A_{3}A_{4}A_{3}A_{2} -A_{1}A_{2}A_{4}E_{3,1}A_{4}A_{3}A_{2}\\ &-A_{1}A_{2}A_{4}A_{3}E_{4,1}A_{3}A_{2} -A_{1}A_{2}A_{4}A_{3}A_{4}E_{3,1}A_{2}-A_{1}A_{2}A_{4}A_{3}A_{4}A_{3}E_{2,1}\\ &-E_{2,1}A_{4}A_{3}A_{4}A_{3}A_{2}A_{1}-A_{2}E_{4,1}A_{3}A_{4}A_{3}A_{2}A_{1} - A_{2}A_{4}E_{3,1}A_{4}A_{3}A_{2}A_{1}\\ =&{A_{1}^{2}}A_{2}A_{4}A_{3}A_{4}A_{3}A_{2}-A_{1}E_{2,1}A_{4}A_{3}A_{4}A_{3}A_{2} -A_{1}A_{2}E_{4,1}A_{3}A_{4}A_{3}A_{2}\\ &-A_{1}A_{2}A_{4}E_{3,1}A_{4}A_{3}A_{2}-A_{1}A_{2}A_{4}A_{3}E_{4,1}A_{3}A_{2}-A_{1}A_{2}A_{4}A_{3}A_{4}E_{3,1}A_{2}\\ &-A_{1}A_{2}A_{4}A_{3}A_{4}A_{3}E_{2,1}-E_{2,1}A_{4}A_{3}A_{4}A_{3}A_{2}A_{1}-A_{2}E_{4,1}A_{3}A_{4}A_{3}A_{2}A_{1}\\ &- A_{2}A_{4}E_{3,1}A_{4}A_{3}A_{2}A_{1}. \end{aligned}$$ In the worst case the $k$-th instance of $A_{p}$ in $L$ (reading from the right) is to be exchanged with $k(N-1)$ terms, $k = 1,2,\ldots,m$, to obtain the desired structure of $L$. [10]{} , [*On robust [L]{}ie-algebraic stability conditions for switched linear systems*]{}, Systems Control Lett., 61 (2012), pp. 347–353. , [*A randomized algorithm for stabilizing switching signals*]{}. Mathematical Control and Related Fields, vol. 9, no. 1, 2019, pp. 159-174. , [*Modern graph theory*]{}, vol. 184 of Graduate Texts in Mathematics, Springer-Verlag, New York, 1998. 
, [*Multiple [L]{}yapunov functions and other analysis tools for switched and hybrid systems*]{}, IEEE Trans. Automat. Control, 43 (1998), pp. 475–482. Hybrid control systems. , [*Finding all the elementary circuits of a directed graph*]{}, SIAM J. Comput., 4 (1975), pp. 77–84. , [*Robust stability conditions for switched linear systems under restricted switching*]{}. Submitted, arXiv: 1903.09440. , [*Robust matrix commutator conditions for stability of switched linear systems under restricted switching*]{}. Submitted, arXiv: 1903.10249. , [*Stabilizing discrete-time switched linear systems*]{}, in Proceedings of the 17th ACM Conf. on Hybrid Systems: Computation and Control, Apr 2014, pp. 11–20. , [*On stability of discrete-time switched systems*]{}, Nonlinear Analysis: Hybrid Systems (Special Issue: HSCC), 23 (2017), pp. 191–210. , [*Stabilizing discrete-time switched systems with inputs*]{}, in Proceedings of the 54th Conference on Decision and Control, Dec 2015, pp. 4897–4902. , [*Stabilizing scheduling policies for networked control systems*]{}. IEEE Transactions on Control of Network Systems, To appear, arXiv: 1901.08353. , [*Switching in [S]{}ystems and [C]{}ontrol*]{}, Systems & Control: Foundations & Applications, Birkhäuser Boston Inc., Boston, MA, 2003. , [*A common [L]{}yapunov function for stable [LTI]{} systems with commuting [A]{}-matrices*]{}, IEEE Trans. Automat. Control, 39 (1994), pp. 2469–2471. , [*Qualitative analysis of discrete-time switched systems*]{}, Proc. of the American Control Conference, (2002), pp. 1880–1885. [^1]: By the term ‘stabilizing cycles’ we refer to cycles such that a switching signal corresponding to an infinite walk constructed by concatenating these cycles, is stabilizing. We will discuss and employ different [methods of constructing]{} such cycles throughout the paper.
[^2]: A matrix $A\in{\mathbb{R}}^{d\times d}$ is Schur stable if all its eigenvalues lie inside the open unit disk. We call $A$ unstable if it is not Schur stable. [^3]: Clearly, admitting $(i,i)$ implies that it is allowed to dwell on a subsystem for two or more consecutive time steps. [^4]: These matrix commutators will be employed in rearranging the matrix products on the right-hand side of . An exchange of $A_{p}$ with itself is undefined, and hence $i\in{\mathcal{P}}\setminus\{p\}$.
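The exchange rule mentioned in the last footnote can be checked numerically. The sign convention $E_{i,p} = A_{p}A_{i} - A_{i}A_{p}$ used below is inferred from the worked rearrangement in Section \[s:all\_proofs\] (where, e.g., $A_{3}A_{1}$ is replaced by $A_{1}A_{3} - E_{3,1}$), and should be treated as an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A_p, A_i, A_j = (rng.standard_normal((d, d)) for _ in range(3))

# Assumed convention (inferred from the worked rearrangement):
# E_{i,p} := A_p A_i - A_i A_p, so that  A_i A_p = A_p A_i - E_{i,p}.
E_ip = A_p @ A_i - A_i @ A_p

# One exchange step of the splitting L = A_p^m L_1 + L_2:
# pull A_p to the left through A_i in the product A_i A_p A_j.
L = A_i @ A_p @ A_j
split = A_p @ (A_i @ A_j) - E_ip @ A_j   # A_p L_1 + (a single L_2 term)
assert np.allclose(L, split)
```

Iterating this step over every out-of-place $A_{p}$ yields the full splitting used in the induction step, with the commutator terms accumulating in $L_{2}$.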
--- abstract: 'We present a continuous-time state estimation framework that unifies the traditionally separate tasks of smoothing, tracking, and forecasting (STF), for a class of targets subject to smooth motion processes, e.g., targets that move with nearly constant acceleration or are affected by insignificant noise. Fundamentally different from the conventional Markov transition formulation, the state process is modeled by a continuous trajectory function of time (FoT), and the STF problem is formulated as an online data fitting problem with the goal of finding the trajectory FoT that best fits the observations in a sliding time-window. Then, the state of the target, whether in the past (namely, smoothing), at the current time (filtering) or in the near future (forecasting), can be inferred from the FoT. Our framework dispenses with stringent real-time statistical modeling of the target motion, and is applicable to a broad range of significant real-world targets such as passenger aircraft and ships, which move on scheduled, (segmented) smooth paths but about whose real-time movement, and even about the sensors observing them, little statistical knowledge is given. In addition, the proposed STF framework inherits the advantages of data fitting in accommodating arbitrary sensor revisit times, target maneuvering and missed detections. The proposed method is compared with state-of-the-art estimators in scenarios with either a maneuvering or a non-maneuvering target.' author: - 'Tiancheng Li, Huimin Chen, Shudong Sun, and Juan M.
Corchado[^1][^2][^3] [^4][^5]' bibliography: - 'IEEE\_STF.bib' title: 'Joint Smoothing, Tracking, and Forecasting Based on Continuous-Time Target Trajectory Fitting' --- Trajectory estimation, data fitting, target tracking, filtering, smoothing, forecasting Introduction ============ State estimation, e.g., tracking the movement of an aircraft or a car, is widely required in engineering; it basically concerns inferring the latent state of interest from noisy discrete-time observations [@haykin14; @Sayed08; @Bar-Shalom01]. The time of interest may be the past (usually referred to as smoothing), the present (filtering or tracking) or the future (prediction or forecasting). In this paper, we present an estimation framework that unifies the tasks of smoothing, tracking, and forecasting (STF), and accommodates scenarios with little a priori statistical information about the target and with imperfect sensors that possibly suffer from unknown noise statistics and missed detections. Toward this challenging goal, this paper focuses on a specific class of targets that move in smooth patterns (e.g., moving on predefined runways and/or with nearly constant velocity/acceleration). While we do not give a rigorous definition of “*smoothness*", we note that it corresponds to an important class of real-world targets involved in air/maritime/space traffic management where, for the passengers’ safety, no abrupt and significant changes should be made to the movement of the carrier.
Section \[stf\] addresses joint STF based on the trajectory FoT. Section \[relatedwork\] reviews related works on trajectory estimation and sensor data fitting across disciplines, highlighting the innovation of our approach. Section \[simulation\] provides simulation studies to demonstrate the effectiveness of the proposed STF framework in a variety of typical scenarios, with comparison to state-of-the-art smoothers and filters. Section \[conclusion\] concludes the paper. Motivation and Key Contribution {#motivation} =============================== Generally speaking, system modeling is the prerequisite for estimation; it describes how we understand the system and the observation data. The goal of modeling is twofold: one is to relate the sensor observation to the latent target state by an observation function, and the other is to relate the target state to time by a state function. The former explains how the data are generated from the target state, and the latter explains how the target state evolves over time. Usually, the statistical property of the sensor is easy to estimate or is given a priori, and so can be considered time-invariant, while that of the latent target is unknown, complicated and time-varying, offering no real-time information to the tracker. Due to the repetitive revisit nature of sensors, the observation function is commonly formulated in discrete time as $$\label{eq:1} \mathbf{y}_k = h_k(\mathbf{x}_k,\mathbf{v}_k),$$ where $k\in \mathbb{N}$ indicates the time instant, $\mathbf{x}_k\in \mathbb{R}^{D_\mathbf{x}}$ denotes the $D_\mathbf{x}$-dimensional state, $\mathbf{y}_k\in \mathbb{R}^{D_\mathbf{y}}$ denotes the $D_\mathbf{y}$-dimensional observation (also called measurement), and $\mathbf{v}_k\in \mathbb{R}^{D_\mathbf{y}}$ denotes the observation noise. In contrast, there are different ways to model the target motion, which rest at the root of different estimation approaches.
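Before turning to the motion models, a purely illustrative instance of the observation function above: a range-bearing sensor (our choice for this sketch, not taken from the paper) observing a 2-D position, so $D_\mathbf{x}=D_\mathbf{y}=2$.

```python
import numpy as np

def h(x, v):
    """Range-bearing observation y_k = h_k(x_k, v_k) of a 2-D position x = (px, py)."""
    rng_obs = np.hypot(x[0], x[1]) + v[0]      # range plus range noise
    bearing = np.arctan2(x[1], x[0]) + v[1]    # bearing plus bearing noise
    return np.array([rng_obs, bearing])

x_k = np.array([300.0, 400.0])                 # latent state (position only)
v_k = np.zeros(2)                              # noise-free for illustration
y_k = h(x_k, v_k)                              # -> range 500.0, bearing arctan2(400, 300)
```

Here the directly-observed state (in the sense of Definition 1 later in the paper) is the position, which is recoverable from a noise-free $(r, b)$ pair.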
First, one may infer the state directly from the observation via maximum-likelihood estimation (MLE) or direct observation-to-state projection [@Li16O2; @Li17clustering; @Li17MSDC], without relying on any statistical assumptions on the state process. This class of “*data-driven*" solutions yields good results when the sensor data are highly informative (namely, the noise is very small), and gains favor when little is known about the motion model of the target, or when it is simply not worthwhile or too hard to approximate one. The benefit of doing so is particularly evident in the context of visual tracking [@Smeulders14; @Luo15] and chaotic time series [@Pisarenko04; @Perretti13; @Judd09failure], where the target (e.g., pedestrian) motion is hard to model correctly/precisely. However, in general, when there is any model information about the target dynamics, it should be carefully evaluated and utilized. This constitutes the majority of the existing estimation efforts, and the way it is done distinguishes our approach from the others. The prevailing, considered the standard, “*model-driven*" estimation solution is to apply a hidden Markov model (HMM) to link the target state over time, for online recursive computing. The best known methodology in this category is sequential Bayesian inference, in which a filter consisting of *prediction* and *correction* steps is applied iteratively [@Li17AGC].
The HMM can be written in either discrete time (mainly for convenience) or continuous time (which is the nature of reality), as given by the difference equation and the differential equation , respectively, \[eq:2\] $$\label{eq:2a} \mathbf{x}_k = g_k(\mathbf{x}_{k-1},\mathbf{u}_k),$$ $$\label{eq:2b} \frac{d\mathbf{x}_t}{dt} = g_t(\mathbf{x}_{t},\mathbf{u}_t),$$ where $t\in \mathbb{R}^+$ indicates continuous time, and $\mathbf{u}_k \in \mathbb{R}^{D_\mathbf{x}}$ and $\mathbf{u}_t \in \mathbb{R}^{D_\mathbf{x}}$ denote the discrete-time and continuous-time state process noises, respectively. Challenges to HMM ----------------- - **Challenge 1** *Difficulties involved in statistical modeling and full Bayes posterior computing, leading to inevitable model error and approximate computation, respectively*. All parameterized target models are no more than statistical simplifications of the truth and inevitably suffer from approximation errors and disturbances. Challenges involved in system modeling/identification have been noted in several respects, e.g., the model must meet practical constraints [@Li16; @Duan16; @Kurz16; @Li17AGC] and match the sensor revisit rate [@Li17AGC], while noises need to be properly identified [@Li15bias; @Duník17; @Ristic17]. In particular, the noise $\mathbf{u}_k/\mathbf{u}_t$ represents the uncertainty of the state process model, which has to be modeled with respect to the occasionally irregular revisit rate of the sensor (including missed detections, delayed or out-of-sequence measurements). Instead, there are also a few works using a deterministic Markov transition model [@Morrison2012tracking; @Nadjiasngar13; @Judd00; @Judd08; @Judd09failure; @Judd09forecasting; @Judd15; @Smith10] which do not define a state process noise/uncertainty term; see the further discussion in Section \[literature\].
In fact, in the majority of practical setups the ground truth is deterministic, conditioned on which the Bayesian Cramér–Rao lower bounds (CRLB) do not provide a lower bound for the conditional mean square error (MSE) matrix [@Fritsche16]; correspondingly, no Bayesian filter can yield the minimum MSE estimate. One of the main reasons for the popularity of HMMs is the friendly assumption that states are conditionally independent given the previous state. This allows easy forward-backward recursive inference, namely prediction-smoothing, but also severely limits the temporal dependencies that can be modeled, which has invited many alternatives [@Layton06; @Frigola14; @Bielecki17]. However, recursive estimators of the prior-updating format are vulnerable to prior bias. Once an estimation bias is made, whether due to erroneous modeling, disturbances or over-approximation, it will propagate through the recursions and can hardly be removed [@Li16O2] unless a sliding time window or fading factor is used to adjust or re-initialize the estimator. A biased prior will likely not benefit the filter, especially when the observation is of high quality; instead, the filter may perform worse than the observation-only (O2) inference [@Li16O2; @Li17clustering]. This fact, however, has been overlooked. While “it is hard to overstate the importance of the role of a good model" [@Li03] for any model-driven tracker, model validation [@Djuric10; @Thygesen17] has rarely been investigated. Even if the statistical models, uncertainties and constraints regarding the target (quantity and dynamics), the sensor profiles (e.g., missed detection, clutter) and the scenario background can all be well approximated, the full Bayesian computation (or even the likelihood alone [@Tran15; @Hartig11]), which involves integration over an infinite state space, could be prohibitively expensive, forcing the need for further approximation.
This crucial challenge to real-time computation and this sensitivity to model error are escalated by increasing revisit rates and the joint use of advanced sensors [@Li17clustering; @Li17MSDC]. - **Challenge 2** *Strictness of the Markov–Bayes iteration, which works on exact model assumptions and uses only limited statistical information while omitting other information such as linguistic/fuzzy information*. Most model-driven estimators may work promisingly, e.g., minimizing the MSE, provided that the model assumptions hold. However, many unexpected issues can occur in reality, such as false, missing, ill-formed, biased or disordered data, which are intractable to model and entail additional robust or adaptive processing schemes. Handling them has formed the majority of the extensions of the model-driven framework, e.g., a huge number of works on noise covariance estimation [@Duník17], but it has also created new challenges for real-time implementation due to the escalated computational requirements. More importantly, it is unclear how to optimally use some important but fuzzy information, such as the linguistic context that *the target moves close to a straight line*, which might not be easily defined as a constraint [@Duan16; @Kurz16; @Ko07; @Julier07; @Ravela07] and [@Sayed08 Chapter 6]. This class of information is very common and useful for targets like aircraft, satellites, large cruise ships, and trains, which are supposed to move on pre-defined runways. To alleviate these problems while gaining higher algorithmic flexibility, we replace the stepwise Markov transition model with a deterministic FoT for describing the target motion. The resulting trajectory function that best fits the online sensor data series allows joint filtering, smoothing and forecasting while requiring fewer assumptions on the target motion and the background.
Main contribution of our work ----------------------------- At the core of our approach is a continuous-time trajectory function which is used in place of the HMM for describing the target dynamics, i.e., $$\label{eq:3} \mathbf{x}_t=f(t).$$ Combining the observation function and the trajectory FoT leads to an optimization problem for minimizing the sensor data fitting error. This can be written as $$\label{eq:4} \underset{F(t)}{\text{argmin}} \sum_{t=k_1}^{k_2}\parallel \mathbf{y}_t-h_t(F(t),\bar{\mathbf{v}}_t)\parallel ,$$ where $\parallel \mathbf{a}-\mathbf{b}\parallel$ is a measure of the distance between $\mathbf{a} \in \mathbb{R}^{D_\mathbf{y}}$ and $\mathbf{b} \in \mathbb{R}^{D_\mathbf{y}}$, such as the square error as in least squares estimation, $F(t)$ is the FoT to be estimated based on the data provided in the underlying time-window $[k_1, k_2]$, and $\bar{\mathbf{v}}_t$ is an average that compensates for the observation error, which can be specified as the noise mean $\mathrm{E}[\mathbf{v}_t]$ if known, or otherwise as zero by assuming the sensor unbiased. Since the estimate $F(t)$ needs to be updated at each time $k$ when a new observation is received, we denote the FoT obtained at time $k$ by $F_k (t)$. By default, $k_2=k$, ensuring that the newest observation data are used in the fitting. To incorporate useful model information, such as a linguistic description that “*the target is free falling*" or “*the target passes by a station*", the FoT can be more definitely specified as $F(t;C_k)$ of an engineering-friendly format, such as a polynomial, with certain parameters $C_k$ to be estimated, which reflect the a priori model information and fully determine the FoT at time $k$.
To this end, the formula may be extended to a *constrained* version, such as $$\begin{aligned} \begin{split} \label{eq:5} \underset{F(t;C_k)}{\text{argmin}} & \sum_{t=k_1}^{k_2}\parallel \mathbf{y}_t-h_t(F(t;C_k),\bar{\mathbf{v}}_t)\parallel, \\ \text{s.t.} \hspace{0.5cm} & F(t;C_k) \in {\mathfrak{F}}, \end{split}\end{aligned}$$ where $\mathfrak{F}$ is a finite set of specific functions, such as *the set of polynomials of order no more than 3*. Another common strategy to integrate the model information into the optimization formula is to additionally define a penalty factor $\Omega (C_k)$ on the model fitting error as a measure of the disagreement of the fitting function with the a priori model constraint. For instance, one can define $\Omega (C_k):=\parallel F(t_0;C_k)-\mathbf{x}_0\parallel$ to measure the mismatch between the fitted trajectory and a known state $\mathbf{x}_0$ that the target passes by/close to at time $t_0$. Then, the formula is extended to $$\label{eq:6} \underset{F(t;C_k)}{\text{argmin}} \sum_{t=k_1}^{k_2}\parallel \mathbf{y}_t-h_t(F(t;C_k),\bar{\mathbf{v}}_t)\parallel + \lambda \Omega (C_k) ,$$ where $\lambda>0$ controls the trade-off between the data fitting error and the model fitting error. In this paper, we focus on a class of realistic targets of significance, including passenger aircraft or ships that move on well-designed smooth routines (aside from which the tracker is given no other statistical information) and ballistic targets that are subject to (nearly) constant velocity, acceleration or turn rate. The challenge arises in that no quantitative statistical information is available about the target dynamics. Therefore, we will not explicitly define a quantitative penalty function $\Omega(C_k)$ to account for the model fitting error at present, but instead directly assume the trajectory to be a FoT of specified format, e.g., a polynomial, to reflect the a priori fuzzy information of “*smoothness*".
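A minimal sketch of the unconstrained fitting problem, assuming a directly-observed 1-D position (identity observation function) and a quadratic FoT; the trajectory coefficients, window and noise level below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# True trajectory: smooth (nearly constant acceleration) 1-D motion.
f = lambda t: 2.0 + 3.0 * t + 0.5 * t**2
ts = np.arange(0.0, 10.0, 0.5)                      # irregular revisit times also work
ys = f(ts) + 0.05 * rng.standard_normal(ts.size)    # y_t = F(t) + v_t

# Fit F(t; C_k) = c1 + c2 t + c3 t^2 over the window by least squares.
C_k = np.polyfit(ts, ys, deg=2)                     # here: the whole window
F = np.poly1d(C_k)

x_smooth = F(3.0)      # past   (smoothing)
x_filter = F(ts[-1])   # now    (filtering)
x_fcast  = F(11.0)     # future (forecasting)
```

A single fitted function thus serves all three STF tasks, which is the unification the paper argues for; the constrained and penalized variants only change how $C_k$ is found.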
Our earlier work [@Li16fitting] presented the idea of constructing the target trajectory FoT by fitting the discrete-time estimates yielded by an off-the-shelf estimator such as a Markov–Bayes filter or an O2/C4F estimator. In the latter, the sensor data have to be converted to the state space, which requires the observation model to be injective or multiple sensors to be available. In this paper, we ease this requirement and carry out fitting directly on the time-series observations rather than on their conversion to the state space. The present approach is applicable to any observation model and makes no Markov-transition assumption. This is an essential difference of our work from the state of the art and from our previous work. Spatio-Temporal Trajectory Model {#fot} ================================ In this paper, we limit our discussion to the state variables that can be observed directly, e.g., the target position, rather than all variables of interest such as position, velocity and acceleration. To clarify this, an important definition is made of the concept of a “*directly-observed state*". **Definition 1** *Directly-observed state* The “directly-observed state" refers to the variables of the state that are a deterministic function of the observation. For example, for a range and bearing observation, the directly-observed state is the target position, while for a Doppler observation, the directly-observed state is the radial velocity. We note that some unobserved variables of interest may be inferred from the directly-observed variables based on the laws of physics, e.g., differentiating position and velocity over time gives velocity and acceleration, respectively. General framework ----------------- Instead of , we propose using the trajectory FoT to model the target motion. Our goal is to find an engineering-friendly $F(t)$ approximating $f(t)$ that best fits the sensor data as in , and then use it to estimate the state at any desired time.
To avoid computationally intractable hyper-surface fitting, we perform fitting with respect to each dimension of the “*directly-observed state*", by assuming conditional independence among the dimensions. This is computationally efficient since neither integration nor differentiation in one dimension will affect the others in orthogonal coordinates. Any continuous trajectory can be approximated by a polynomial of a certain degree to arbitrary accuracy [@Bar-Shalom01; @Li03]. This accuracy can be easily analyzed by the Taylor series expansion as in Appendix A. Based on this, linear parameter dependence can be assumed, i.e., $$\label{eq:7} F(t)=c_1\phi_1(t)+c_2\phi_2(t)+\cdots+c_m\phi_m(t) ,$$ where $\{\phi_i(t)\}$ are a priori selected sets of functions, for example, monomials $\{t^{i-1}\}$ or trigonometric functions $\{\sin it\}$, and $C:=\{c_i\}, i=1,2,\ldots,m$ are parameters to be determined. We call $m$ the order of the fitting function, which controls the number of free parameters in the model and thereby governs the model complexity. Hereafter, we denote the parameterized FoT $F_k (t)$ in the specified form $F(t;C_k)$, where $C_k$ denotes the parameter set at time $k$. It is crucial to determine the order $m$ of the polynomial properly. For the typical CV and CA models, we have the following constraints, respectively, $$\label{eq:8} \frac{\partial f_\text{CV}(t)} {\partial t} = \text{Constant}, \qquad \frac{\partial^2 f_\text{CA}(t)} {\partial t^2} = \text{Constant} ,$$ which indicate that the suitable fitting function orders for the CV and CA motions are $m=2$ and $m=3$, respectively. In our approach, we advocate sliding time-window fitting (to be detailed in Section \[Time-window\]), by which any trajectory function can be divided into a number of consecutive time intervals, each of which corresponds to one function of relatively lower order. This allows us to trade off fitting fidelity to the real trajectory FoT against computational complexity.
For example, a two-dimensional polynomial with $m=3$ (i.e., of degree two) given as follows applies to most *smooth* trajectories in the planar space $$\label{eq:9}
F(t,C_k): \left[ \begin{array}{c}
x(t) \\
y(t) \\
\end{array}
\right]=\left[ \begin{array}{ccc}
a_{k,1} & a_{k,2} & a_{k,3} \\
b_{k,1} & b_{k,2} & b_{k,3} \\
\end{array}
\right] \left[ \begin{array}{c}
1 \\
t \\
t^2 \\
\end{array}
\right] ,$$ where $C_k:=\{a_{k,i},b_{k,i}\}_{i=1,2,3}$. While the above formulation is suitable for fitting straight lines or smooth curves, trigonometric functions are particularly useful for circular trajectories; e.g., an elliptic trajectory in the planar Cartesian space can be represented as $$\label{eq:10}
a'_{k,2} \big(x(t)-a'_{k,1}\big)^2+b'_{k,2} \big(y(t)-b'_{k,1}\big)^2= 1,$$ for which $C_k:=\{a'_{k,i},b'_{k,i}\}, i=1,2$. The elliptic curve can also be given in a parametric form with the $x$ and $y$ coordinates having different scalings $$\label{eq:11}
F(t;C_k ):\left\{ \begin{array}{ll}
x(t)=a_{k,1} + a_{k,2} \sin(t)\\
y(t)=b_{k,1} + b_{k,2} \cos(t)
\end{array}
\right. ,$$ and correspondingly $C_k:=\{a_{k,i},b_{k,i}\}, i=1,2$.

FoT Parameter Estimation and Optimization
-----------------------------------------

To obtain the desired FoT $F(t;C_k)$, the parameters $C_k$ could be determined by minimizing the fitting residual $\parallel F (t;C_k)-f_k (t)\parallel$ over the underlying time window $[k_1,k_2]$. However, these residuals are not explicitly available since the true trajectory FoT $f_k(t)$ is unknown and is exactly what we want to estimate.
As such, we turn to selecting the function that best fits the sensor data series as in , in which the fitting residual is defined by the discrepancy between the original sensor data and the pseudo observation made on the FoT at the corresponding time, namely, $$\label{eq:12}
R_t(C_k):=\parallel \mathbf{y}_t-h_t(F(t;C_k),\bar{\mathbf{v}}_t)\parallel .$$ Typically, the distance $\parallel \cdot\parallel$ can be given as the un-weighted squared $\ell_2$-norm of the error (the square error), namely $R_t(C_k)|_{\ell_2}:=\parallel \mathbf{y}_t-h_t(F(t;C_k),\bar{\mathbf{v}}_t)\parallel^2$. This is referred to as least squares (LS) fitting, for which the Gauss-Newton method [@Morrison2012tracking] is popular. The resulting $\hat{F}(t;C_k)$ is known as the ordinary LS fit to the given data, which implicitly assumes that the sensor data received at different time instants provide equally accurate information. Furthermore, one can assign a weight $w_t$ to each time instant to account for time-varying data uncertainty, e.g., inversely proportional to the covariance of $\mathbf{y}_t$, leading to $$\label{eq:13}
R_t(C_k)|_{\ell_2}:=\tilde{w}_t\parallel \mathbf{y}_t-h_t(F(t;C_k),\bar{\mathbf{v}}_t)\parallel^2 ,$$ where $\tilde{w}_t=w_t\big(\sum_{t=k_1}^{k_2} w_t\big)^{-1}$ is the weight normalized over all data in the time window. Moreover, a fading factor can also be considered in the weight design, such as $w_t:=\lambda^{k-t}$ with $0<\lambda<1$, in order to emphasize the newest data by assigning lower weights to historical data.

We denote by $ \Phi_k(C_k):=\sum_{t=k_1}^{k_2}R_t(C_k)$ the sum of the residuals in the time window $[k_1,k_2]$. Then, the fitting problem reduces to parameter estimation of $C_k$, namely, $$\label{eq:14}
\underset{C_k}{\text{argmin}}\Phi_k(C_k) ,$$ where the sensor data $\mathbf{y}_t$ may arrive at irregular time intervals and suffer from missed detections and outliers/false alarms.
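The normalized fading weights and the resulting weighted LS fit can be sketched as follows (our own illustration; $\lambda=0.9$ is an assumed example value, not one prescribed here):

```python
import numpy as np

def fading_weights(times, k, lam=0.9):
    """Normalized weights w~_t with w_t = lam^(k-t), 0 < lam < 1,
    so the newest data (t close to k) carry the largest weight."""
    w = lam ** (k - np.asarray(times, dtype=float))
    return w / w.sum()

def weighted_fit(times, values, weights, m=2):
    """Minimize sum_t w~_t (y_t - F(t;C))^2 for a monomial-basis F by
    scaling the rows of the LS design matrix with sqrt(w~_t)."""
    A = np.vander(np.asarray(times, float), N=m, increasing=True)
    s = np.sqrt(weights)
    c, *_ = np.linalg.lstsq(A * s[:, None], np.asarray(values, float) * s,
                            rcond=None)
    return c

t = np.arange(5.0)
w = fading_weights(t, k=4.0)
c = weighted_fit(t, 3.0 - 0.5 * t, w)  # a noiseless line is recovered exactly
```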
Analytic Solution and Numerical Approximation
---------------------------------------------

In general, the (necessary) condition for $\Phi_k(C_k)$ to be a minimum is that the following $m$ gradient equations hold, namely, $$\label{eq:15}
\frac{\partial \Phi_k(C_k)} {\partial c_i} = 0, \forall i=1,\ldots,m.$$ In a linear system (and for a continuously increasing time window, namely $k_1=0$), this can be solved exactly given sufficient sensor data, e.g., by a recursive LS algorithm , [@haykin14 Chapter 10], [@Sayed08 Chapter 30], as briefly shown in Appendix B. In particular, when equality constraints exist, they can be easily incorporated into the minimization by methods such as Lagrange multipliers [@Lagrange]. However, in a nonlinear system, the derivatives are functions of both the independent variable and the parameters, so these gradient equations do not admit a closed-form solution. Instead, we have to resort to numerical approximation methods such as the trust-region-reflective (TRR) algorithm [@Sorensen97; @Branch99] and the Levenberg-Marquardt algorithm (LMA) [@Kanzow04]. In particular, constraints on the parameters, e.g., bounded parameters, can be easily integrated in TRR [@Branch99], as in . These algorithms have been implemented in popular software and are computationally efficient, offering great convenience for engineering use. However, we must note that almost all fitting algorithms, including TRR and LMA, work from an initial guess of the parameters for iterative searching and do not guarantee finding the global minimum. That is, the parameters are obtained by successive approximation until the residual $\Phi_k$ no longer decreases significantly between iterations or falls below a threshold. To speed up the iteration, we set the parameters $C_{k-1}$ yielded at time $k-1$ as the initial parameters for estimating $C_k$ at time $k$.
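As an illustrative sketch of such a numerical solve (our own example, not the paper's implementation), SciPy's `least_squares` with `method='trf'` is a trust-region-reflective solver in the spirit of the TRR algorithm cited above; the warm start mirrors the use of $C_{k-1}$ as the initial guess. The sensor geometry and range observation model here are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical setup: a first-order planar FoT x(t)=a1+a2*t, y(t)=b1+b2*t
# observed through nonlinear range measurements from two known sensors.
SENSORS = [(-5.0, 0.0), (5.0, 5.0)]

def residuals(C, times, ranges):
    """Stacked fitting residuals y_t - h_t(F(t;C)) over the window."""
    a1, a2, b1, b2 = C
    x, y = a1 + a2 * times, b1 + b2 * times
    return np.concatenate([np.hypot(x - sx, y - sy) - r
                           for (sx, sy), r in zip(SENSORS, ranges)])

def fit_window(times, ranges, c_init):
    """One sliding-window solve with the trust-region-reflective ('trf')
    method, warm-started from the previous window's parameters."""
    return least_squares(residuals, c_init, method='trf',
                         args=(times, ranges)).x

# Noiseless check: data generated from C* = (0, 1, 1, 0), i.e. x=t, y=1.
t = np.arange(10.0)
truth = np.array([0.0, 1.0, 1.0, 0.0])
obs = [np.hypot(t - sx, 1.0 - sy) for sx, sy in SENSORS]
c_hat = fit_window(t, obs, truth + 0.1)  # warm start near C_{k-1}
```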
This is feasible because the trajectory functions yielded by the data in time window $[k_1,k_2]$ and in time window $[k_1+1,k_2+1]$ will be similar, owing to the common data in $[k_1+1,k_2]$. In this setting, the result is a recursive algorithm whose recursion can be described as $$\label{eq:16}
C_k = \Psi_k(C_{k-1}) .$$ It is worth noting that the parameter transition in , due to data updating in the sliding fixed-length time window, is not a parameter convergence process as in recursive LS estimation, where new data are added to a continuously increasing time window. It is also not necessarily a Markov-jump process, as the parameters at time $k$ depend not only on those at time $k-1$ but also on those of earlier times if the fitting interval is longer than unity.

To reduce the computational burden of nonlinear fitting, an alternative method is to project the sensor data into the state space as is done by the O2 approach [@Li16O2; @Li17clustering; @Li17MSDC]. Then, the problem reduces to performing linear fitting on the intermediate O2 estimates $\hat{x}^\text{O2}_{k_1:k_2}$, for which the fitting residual is given by $R_t(C_k):=\parallel \hat{x}^\text{O2}_t-F(t;C_k)\parallel $. This, however, requires the observation function to be injective or multiple sensors to be used, which does not hold for sensor setups such as a single bearing-only sensor.

Trajectory FoT Based STF {#stf}
========================

Given an FoT estimate $F(t;C_k)$, the state at any time $t$ (which does not have to be an integer) in the effective fitting time window (EFTW) $[K_1,K_2]$ can be estimated as follows $$\label{eq:17}
\hat{\mathbf{x}}_t=F(t;C_k), \forall t\in [K_1,K_2],$$ where the EFTW $[K_1,K_2]$ at least covers the sampling time window $[k_1,k_2]$, namely $K_1 \leq k_1, k_2\leq K_2$. More specifically, the inference is referred to as extrapolation when $K_1\leq t <k_1$ or $k_2 < t \leq K_2$ and as interpolation when $k_1\leq t\leq k_2$.
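Under a first-order fitting function, the inference above amounts to a direct evaluation of the fitted FoT at any real-valued $t$ in the EFTW; a minimal sketch with hypothetical parameter values:

```python
def infer_state(C, t):
    """Evaluate a fitted first-order planar FoT x(t)=a1+a2*t, y(t)=b1+b2*t
    at any real time t: interpolation inside the sampling window [k1, k2],
    extrapolation outside it."""
    a1, a2, b1, b2 = C
    return (a1 + a2 * t, b1 + b2 * t)

C_k = (0.0, 1.0, 2.0, -0.5)        # hypothetical fitted parameters
inside  = infer_state(C_k, 5.0)    # interpolation (k1 <= t <= k2)
outside = infer_state(C_k, 12.5)   # extrapolation (t > k2)
```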
Different choices of the time $t\in [K_1,K_2]$ with regard to the right bound of the time window, $k_2$, are immediately apparent and correspond to different fitting modes as follows:

- **Delayed fitting**: $t<k_2$; the state to infer is for an earlier time. In particular, we notice that fitting at the middle of the time window, namely $t=(k_1+k_2 )/2$, is comparably more accurate. This is also referred to as *fixed-lag smoothing*, as the estimation bears a fixed time delay of $k_2-t$.

- **Online fitting**: $t=k_2$; the state to infer/filter is exactly for the time when the latest sensor data arrive.

- **Forecasting**: $t>k_2$; the state to infer is for a future time. In particular, denoting $n:=(t-k_2)$, it is called $n$-step forecasting.

In addition, any estimates given above can be further fitted over a time window, forward and backward in the time series, as many times as desired, to repeatedly revise the fitting function for more accurate trajectory estimation. This type of batch/off-line fitting is referred to as **Smoothed Fitting** hereafter. It is preferable for off-line data analysis, but caution should be exercised to avoid over-fitting. We take once-forward and once-backward delayed fitting as the default smoothed fitting, which resembles the conventional *fixed-interval smoothing*. It can be written as $$\label{eq:18}
\underset{C_k}{\text{argmin}} \sum_{t=k_1}^{k_2}\parallel \hat{x}^\text{DF}_t-F(t;C_k)\parallel ,$$ where $\{\hat{x}^\text{DF}_t\}, t=k_2,k_2-1,\cdots,k_1$, are the estimates yielded by the *Delayed Fitting* and the fitting time window $[k_1,k_2]$ moves backward in time. Finally, we emphasize two important points about fitting, which are particularly beneficial for our approach.
Piecewise/Sliding Time-window Fitting {#Time-window}
-------------------------------------

Numerical fitting over a long data series suffers from instability, especially when the trajectory is subject to different models at different time periods. In such a situation, piecewise fitting, also referred to as spline fitting or segmented fitting, is a useful alternative. The advantage of piecewise fitting is that at each time instant the complexity of the fitting function can be controlled (kept of low order) and will not be affected by the data outside the time window.

At the core of piecewise fitting is detecting the model change from the sensor data in time series, where the change point is the desired boundary between segments. This is formally known as *change-point detection* in general [@Gustafsson00] or *maneuver detection* in the context of target tracking [@Li03; @Ru09; @Li17Maneuvering]. There is a large body of algorithms and software; see, e.g., [@Gustafsson00; @Ross15]. However, we note that most change detection mechanisms, including our own previous attempt [@Li16fitting], are problem-dependent and suffer from detection delay.

In this paper, sliding time-window fitting (a special type of piecewise fitting) is advocated, in which the time window $[k_1,k_2]$ moves forward with time $k$. The length of the time window can be adapted to accommodate highly varying target dynamics, in accordance with the order of the fitting function and the feasible computing time, which has to be smaller than the sampling interval. For the targets we consider here, such as passenger aircraft/ships and satellites, the trajectory may remain smooth even when a maneuver occurs.
In this case, maneuver detection becomes unnecessary, as the fitting can be carried out in the same way when the target maneuvers smoothly, i.e., when the trajectory remains smooth, as demonstrated in our simulations in Sections \[sec:linear\] and \[sec:nonlinear\]. We have particularly addressed this issue in [@Li17Maneuvering]. This is one of the advantages of formulating the target motion as an FoT rather than by a Markov transition model.

Missed detection and irregular sensor data
------------------------------------------

Fundamentally different from the Markov-based estimator, which needs to assume conditional independence among the time-series states, time-window fitting eases such assumptions and does not require the data to be uniformly observed over time or to be chronological. Because of this, neither missed detections/delayed data nor an irregular sensor revisit frequency will inhibit our approach as much as they do a Markov-Bayes estimator. In fact, both missed detections and delayed observations can be viewed as special cases of sensor data arriving at irregular time intervals, which poses no challenge to fitting as long as the sensor data and their corresponding times are correctly matched. This greatly adds to the flexibility and reliability of our approach. Next, we will review work on trajectory estimation, some of which is based on fitting and exhibits advantages in coping with missed detections and irregular sensor data.

Related Work {#relatedwork}
============

Target trajectory estimation and analysis not only allow recording the history of past locations and predicting the future but also provide a means to specify behavioral patterns or relationships between locations observed in time series and to guide target detection in future frames [@Leibe07], to name a few applications.
Most existing works, however, are based on either a deterministic or a stochastic HMM assumption on the target motion and need the statistical properties of the observation, which forms the key difference from our approach. In addition, no existing attempts explicitly unify the tasks of smoothing, filtering, tracking and forecasting fully based on data fitting/learning.

Discrete-time trajectory estimation {#literature}
-----------------------------------

Instead of estimating the point-state at each time instant when a new observation is received, some studies recursively estimate the discrete time-series state set [@Guerriero10; @Smith10; @García-Fernández16] based on a sequence of observations. Compared to recursive point-state estimation, this approach, generally termed data assimilation when formulated as an optimization problem, requires much more computation. For linear systems, discrete-time trajectory estimation has a direct connection to Gauss’ LS estimate [@Plackett50] and to Kolmogorov-Wiener interpolation and extrapolation of a sequence [@Singpurwalla17]. Data assimilation refers to finding trajectories of a prescribed dynamical model such that the output of the model follows a given time series of observations [@Dimet86; @Talagrand87; @Bröcker12; @Wang13; @Rosenthal17]. The key point is to search for the maximum of the posterior density function by assuming certain (e.g., Gaussian) observation noise, model initial conditions and model errors, and iteratively minimizing a cost function that is the negative logarithm of the posterior density function. Of high relevance, an expectation maximization (EM) approach was proposed [@Shumway82] in conjunction with conventional Kalman smoothers for smoothing and forecasting, yielding a recursive procedure for estimating the parameters by MLE, which can deal with missing observations. In contrast to the stochastic modeling of the state process, Judd et al.
presented a series of non-sequential/optimization-based estimation and forecasting works, particularly in the area of chaotic systems, e.g., [@Judd00; @Judd08; @Judd09forecasting; @Judd15; @Smith10], which remove the use of the state transition noise. Similar deterministic Markov models have also been applied in noise reduction methods [@Kostelich93], the moving-horizon estimator [@Michalska95] and the Gauss-Newton filter [@Morrison2012tracking; @Nadjiasngar13]. Interestingly, Judd’s shadowing filter yields more reliable and even more accurate performance than the Bayesian filters (*a fairer comparison, however, should be made between shadowing filters and Bayesian smoothers, using the same amount of observation data*) in cases where the nonlinearity is significant but the noise is largely observational [@Judd09failure], or where the objects do not typically display any significant random motions at the length and time scales of interest [@Judd15]. The Gauss-Newton filter, which models the state transition by a deterministic differential equation, namely without noise $\mathbf{u}_t$, is Cramér-Rao consistent (providing minimum variance) [@Morrison2012tracking]. Despite their Markov assumptions, these approaches, similar to our fitting approach, are based on an optimization formulation, which is advantageous in handling constraints (as shown in Section \[simulation\].C) and is less sensitive to process disturbances, missing data and observation singularities than a recursive Bayesian filter.

Continuous-time trajectory estimation
-------------------------------------

More relevant to our approach, efforts have been devoted to continuous-time trajectory estimation via data fitting in different disciplines. De facto, signal processing stems from the interpolation and extrapolation of a sequence of observations [@Singpurwalla17].
Data fitting is a self-contained mathematical problem and a prosperous research theme in its own right, which has proven to be a powerful and universal method for pattern learning and time-series data prediction, especially when adequate analytical solutions do not exist. Moreover, the recursive LS algorithm reformulated in state-space form was recognized as a special case of the Kalman filter (KF) [@Sorenson70; @Sayed94]. However, most existing works operate in a batch manner based on either MLE [@Anderson-Sprecher96] or Bayesian inference [@Hadzagic11; @Dimatteo01], or serve as an extra scheme on top of a recursive filtering algorithm [@El-Hawary95; @Wang10; @Liu14]. In [@Anderson-Sprecher96], directional bearing data from one or multiple sensors are investigated, where Cardinal splines (i.e., splines with equally spaced knots) of different dimensions are fit to the data in the MLE manner; this is one of the earliest, and one of the few, attempts that assume a spatio-temporal trajectory for tracking. In [@Hadzagic11], the trajectory is approximated by a cubic spline with an unknown number of knots in the 2D state plane, and the function estimate is determined from positional measurements which are assumed to be received in batches at irregular time intervals. For data drawn from an exponential family, the spline knot configurations (number and locations) are changed by reversible-jump Markov chain Monte Carlo [@Dimatteo01]. More complicated, artificial neural networks were considered as a parametric nonlinear model in [@Hamed13], whose computation is unaffordable for online estimation. Continuous-time trajectory estimation has also received attention in the context of mobile robot simultaneous localization and mapping (SLAM) [@Bibby10; @Lovegrove13] and visual tracking [@Delong10; @Milan16].
In the former, the robot motion is usually under the user’s control (with so-called proprioceptive sensor data), and the continuous-time trajectory representation makes it easy to deal with asynchronous measurements and constraints. In the latter, starting and/or ending points may be specified for the trajectory. The tracking problem is treated as a discrete-continuous optimization with label costs [@Milan16], where the key is generating all the trajectory hypotheses having a reasonably low label cost based on a variety of DA rules, and the design of the label cost takes critical issues such as the targets’ dynamics, occlusions and collisions into account. However, only linear fitting is involved.

Parametric curve-fitting methods have difficulty in defining knots. Comparably, the Gaussian process (GP) provides a non-parametric tool for learning regression functions from data, with the advantage of providing uncertainty estimates [@Anderson15] under a linear state-process function. Furthermore, regression based on a support vector regression model and a GP model, respectively, was advocated to predict the ballistic coefficients of high-speed vehicles and also the long-term future state [@Song16]. Relevantly, Gaussian smoothing [@Särkkä13] allows for inferring the state at any time of interest using the interpolation scheme that is inherent to GP regression. Multi-step-ahead prediction in time-series analysis is treated using the non-parametric GP model [@Girard03] by making repeated one-step-ahead predictions. These methods are all based on data training and, again, stepwise state transition.
In summary, our FoT fitting approach differs from the above data fitting approaches (not only those for target tracking) in four major aspects:

- The assumed trajectory function is purely a function of continuous time (namely spatio-temporal, “$x=f(t)$"), rather than a spatial function defined in the state space in a manner like “$y=f(x)$";

- We perform continuous-time fitting in each state dimension independently;

- We use a sliding time window rather than pre-defined, ad-hoc knots for fitting flexibility; and

- Our approach accommodates the complete absence of statistical information about the target and the sensor.

Moreover, we reiterate two critical, original strategies for efficient real-time implementation of our approach:

- As indicated by , the usual description of the target motion can be utilized for fitting function design; and

- Thanks to the use of in nonlinear fitting initialization, our approach can be carried out online.

Simulation
==========

Although the proposed FoT formulation of the target motion is fundamentally different from an HMM, it is interesting to compare the FoT-based STF approach with the state-of-the-art Markov-Bayes solutions. To this end, this section studies a variety of representative scenarios. In all cases, our fitting approach does not make any statistical assumption on the target dynamics, background or sensor noise, while ideal statistical information is provided to the Markov-Bayes filters, smoothers and forecasters, unless otherwise stated, for their most favorable performance. For the sake of generality and reproducibility of the results, the first two simulations are taken from an excellent Matlab toolbox due to Hartikainen, Solin, and Särkkä [@Hartikainen13]: one uses linear and non-deterministic target dynamics and a linear observation model, while the other uses deterministic target dynamics and a nonlinear observation model.
This toolbox features a large body of popular filters and smoothers for discrete-time state-space models, including the KF, extended KF (EKF) and unscented KF (UKF), and their corresponding smoothers implemented on the basis of the Rauch-Tung-Striebel (RTS) algorithm. In addition, the interacting multiple model (IMM) approach, as well as its nonlinear extensions based on the mentioned filters and RTS smoothers, has also been simulated. In contrast, the third simulation is described in a continuous-time system for tracking a non-maneuvering ballistic target, in which a particle filter (PF) is also compared. The Matlab codes used for the simulation are available at: [https://sites.google.com/site/tianchengli85/matlab-codes/fot4stf]{}

Linear observation maneuvering target tracking {#sec:linear}
----------------------------------------------

This simulation example is the same as that described in Section 4.1.4 of [@Hartikainen13], where the motion of a maneuvering object switches between WPV (Wiener process velocity) with low process noise (power spectral density $0.1$) and WPA (Wiener process acceleration) with high process noise (power spectral density $1$). The system is simulated with 200 sampling steps (with the sensor revisit interval $\Delta= 0.1$s). The real target motion model was manually set to WPV during steps 1-50, 71-120 and 151-200, and to WPA during steps 51-70 and 121-150. This leads to four maneuvers. The initial state of the target is $\mathbf{x}_0 = [0, 0, 0, -1, 0, 0]^T$, which means that the object starts to move from the origin with velocity $-1$ along the y-axis. All filters and smoothers are correctly initialized with the true origin $\mathbf{x}_0$ and the covariance diag$([0.1,0.1,0.1,0.1,0.5,0.5]^T )$. In addition, for the best possible performance of the IMM approach, correct knowledge of the two models is assumed (except the maneuvering times).
The prior model probabilities are set to $[0.9,0.1]^T$ and the model transition probability matrix for the IMM is set to $$\label{eq:19}
Tr_\text{IMM}=\left[ \begin{array}{cc}
0.98 & 0.02 \\
0.02 & 0.98 \\
\end{array}
\right] .$$ The observation $\mathbf{y}_k$ is made on the target position $[p_{x,k},p_{y,k}]^T$ with Gaussian noise $\mathbf{v}_k$, i.e., $\mathbf{y}_k = [p_{x,k},p_{y,k}]^T +\mathbf{v}_k$, where $$\label{eq:20}
\mathrm{E}[\mathbf{v}_k]=\mathbf{0}, \quad \mathrm{E}[\mathbf{v}_k\mathbf{v}_j^T]=\left[ \begin{array}{cc}
0.1 & 0 \\
0 & 0.1 \\
\end{array}
\right]\delta_{kj},$$ where $\delta_{kj}$ is the Kronecker delta, which equals one if $k=j$ and zero otherwise.

\[fig:1\] ![image](1){width="14.2"}

\[fig:2\] ![RMSE of different linear estimators over sampling steps](2 "fig:"){width="8"}

For this unbiased linear measurement model, the unbiased O2 position estimates can be directly given by the measurement, $[p^\text{O2}_{x,k},p^\text{O2}_{y,k}]^T=\mathbf{y}_k$. The proposed trajectory FoT fitting is carried out on the x-axis and y-axis individually in the LS manner, with a sliding time window of 10 sampling steps (except at the starting stage, when little data are available). The polynomial trajectory FoT of order $m=2$ is assumed as follows $$\label{eq:21}
\left\{ \begin{array}{ll}
p_{x,t}=a_1+a_2t\\
p_{y,t}=b_1+b_2t
\end{array}
\right. ,$$ and the optimization goal is given as $$\label{eq:22}
\left\{ \begin{array}{ll}
\Phi_k(a_1,a_2) := \sum_{t=k_1}^{k_2} \big( p^\text{O2}_{x,t}-(a_1+a_2t)\big)^2\\
\Phi_k(b_1,b_2) := \sum_{t=k_1}^{k_2} \big( p^\text{O2}_{y,t}-(b_1+b_2t)\big)^2
\end{array}
\right. ,$$ where $k_2$ is the current time and $k_1=\max(1,k_2-10)$.
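This per-axis windowed LS fit can be sketched as follows (our own illustrative code mimicking the setup above, with hypothetical names; not the released simulation code):

```python
import numpy as np

def window_fit_xy(ts, xs, ys, k2, win=10):
    """First-order LS fit p_x = a1 + a2*t, p_y = b1 + b2*t over the
    sliding window [max(t_0, k2 - win), k2], per axis independently."""
    ts = np.asarray(ts, float)
    sel = (ts >= max(ts[0], k2 - win)) & (ts <= k2)
    A = np.vander(ts[sel], N=2, increasing=True)
    a, *_ = np.linalg.lstsq(A, np.asarray(xs, float)[sel], rcond=None)
    b, *_ = np.linalg.lstsq(A, np.asarray(ys, float)[sel], rcond=None)
    return a, b

# Noiseless check: straight-line O2 estimates are fitted exactly.
ts = np.arange(20.0)
a, b = window_fit_xy(ts, 2.0 + 0.3 * ts, 1.0 - 0.1 * ts, k2=19.0)
```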
Given a time series of O2 estimates $[p^\text{O2}_{x,k},p^\text{O2}_{y,k}]^T$ for the time window $k\in [k_1,k_2]$, the trajectory FoT parameters can be exactly determined by , i.e., $$\label{eq:23}
\left\{ \begin{array}{ll}
\sum_{t=k_1}^{k_2} \big(a_1+a_2t-p^\text{O2}_{x,t}\big)=0\\
\sum_{t=k_1}^{k_2} \big(a_1t+a_2t^2-p^\text{O2}_{x,t}t\big)=0\\
\sum_{t=k_1}^{k_2} \big(b_1+b_2t-p^\text{O2}_{y,t}\big)=0\\
\sum_{t=k_1}^{k_2} \big(b_1t+b_2t^2-p^\text{O2}_{y,t}t\big)=0\\
\end{array}
\right. .$$ Once $a_1,a_2,b_1,b_2$ are obtained, the position of the target at time $t$ can be inferred straightforwardly as $[\hat{p}_{x,t},\hat{p}_{y,t}]^T=[a_1+a_2t,b_1+b_2t]^T$. As addressed, four forms of fitting-inference can be implemented based on the same fitting function: delayed fitting ($t=k_2-5\Delta$, which estimates the state with a $0.5$s delay), online fitting ($t=k_2$, which estimates the state using the latest 10 sensor data), forecasting ($t=k_2+5\Delta$, which estimates the state of the future, 0.5s in advance), and smoothed fitting (which is given by carrying out the delayed fitting forward in the time series and then backward). Very different from the IMM approach, our approach needs neither any multiple-model design for dealing with the target maneuver nor any a priori knowledge of the initial target state for estimator initialization. This, at the outset, reveals the robustness advantage of fitting. The simulation is performed with 100 Monte Carlo runs, each run having a randomly generated trajectory originating from the same initial point and a corresponding, independently generated observation series. The real trajectory and the estimates given by different filters, smoothers and forecasters in one run are given in Fig.1. The IMM-based 5-step forecasting is given by iteratively carrying out the prediction of the IMM approach 5 times without observation updating. As shown in Fig.1, all estimators correctly capture the trend of the trajectory.
For more insight, their root MSEs (RMSEs) on the position estimation over time are given in Fig.2, where the RMSE is calculated over 100 Monte Carlo runs. The mean of all $\text{RMSE}_k$ over the 200 sampling steps and the average running time per run are given in Table \[tab:1\].

- On the estimation accuracy, the online fitting outperforms the KF based on the WPV model but underperforms the IMM and the KF using the WPA model. The smoothed fitting improves over the delayed fitting, both outperforming the KS (Kalman smoother) using WPV but losing to the KS using WPA and the IMM smoother. For 5-step ahead forecasting, our fitting approach outperforms the IMM approach.

- On the computing speed, the online fitting is slower than the filters, while the delayed fitting is slightly slower than the KSs using only one model but faster than the IMM smoother. The smoothed fitting is the slowest overall. However, the forecasting fitting is both faster and more accurate than the IMM forecaster.

We must note here that all fitting approaches share the same fitting function, and their respective computing times take into account the common part for obtaining that fitting function (which is the majority of the overall computation). Therefore, their joint computing time, if STF is required jointly, is much less than the sum of the respective times shown here. Although the estimation accuracy is slightly inferior to some of the suboptimal filters and smoothers that are based on ideal models and parameters, our fitting approaches work under the harsh condition that (i) they need neither a priori information about the target motion models nor the sensor observation noise statistics, yet (ii) they provide powerful continuous-time estimates (although we only compare the estimation at discrete time instants), including better prediction than that of the model-based estimators.
More importantly, (iii) the proposed fitting approach obviates the need to design multiple/adaptive models to handle target maneuvers. However, when these filters and smoothers are not initialized perfectly with the true state and even the ideal error covariance, and/or if the multiple-model approach is not designed properly, their performance will undoubtedly degrade. In contrast, the fitting scheme, which rests on minimal assumptions, will not suffer from these problems. These are precisely the advantages of modeling the target dynamics by a trajectory FoT rather than by an HMM. Next, we will investigate two nonlinear systems, in which the estimators may be provided with incorrect sensor noise information or poorly initialized.

  **Estimators**                     **Aver. RMSE**   **Compt. Time (s)**
  ---------------------------------- ---------------- ---------------------
  KF (using WPV)                     0.4498           0.0252
  KS (using WPV)                     0.2308           0.0472
  KF (using WPA)                     0.2520           0.0251
  KS (using WPA)                     0.1184           0.0496
  IMM                                0.2116           0.2864
  IMM smoother                       0.1025           1.0875
  IMM 5-step forecaster              0.4373           0.9896
  Online Fitting                     0.2654           0.7120
  Delayed Fitting                    0.1442           0.7434
  Smoothed Fitting                   0.1348           1.4686
  Fitting-based 5-step Forecasting   0.5586           0.7210

  : Average performance of different linear estimators[]{data-label="tab:1"}

Nonlinear observation maneuvering target tracking {#sec:nonlinear}
-------------------------------------------------

This simulation is set the same as that given in Section 4.2.2 of [@Hartikainen13]. To simulate the deterministic target motion (as shown in Fig.1), two Markov models using insignificant noises are assumed, with sampling step size $\Delta=0.1$s. The first is given by a single linear WPV model with insignificant process noise (zero mean and power spectral density 0.01), based on which the standard EKF, UKF and their corresponding RTS smoothers (EKS and UKS, respectively) are realized.
The other is given by a combination of this WPV model with a nonlinear CT model (no position or velocity noise, but zero-mean turn-rate noise with covariance 0.15). In the latter, a multi-model design and nonlinear estimation approaches are required, for which the EKF/EKS-IMM and UKF/UKS-IMM approaches are employed for filtering/smoothing. The IMM uses the following model transition probability matrix $$\label{eq:24}
Tr_\text{IMM}=\left[ \begin{array}{cc}
0.9 & 0.1 \\
0.1 & 0.9 \\
\end{array}
\right] ,$$ with the prior model probabilities given by $[0.9,0.1]^T$. The measurement is made on the noisy bearing of the object, which is given by four sensors located at $[s_{x,1},s_{y,1}]^T=[-0.5,3.5]^T$, $[s_{x,2},s_{y,2}]^T=[-0.5,-3.5]^T$, $[s_{x,3},s_{y,3}]^T=[7,-3.5]^T$ and $[s_{x,4},s_{y,4} ]^T=[7,3.5]^T$, respectively. The noisy bearing observation of sensor $i=1,2,3,4$ is given as $$\label{eq:25}
\theta_{k,i}=\arctan\Big(\frac{p_{y,k}-s_{y,i}}{p_{x,k}-s_{x,i}}\Big) + v_{k,i} ,$$ where $v_{k,i} \sim \mathcal{N}(0,\Sigma_v)$, and we will use $\Sigma_v =0.01$ and $\Sigma_v=0.0025$ separately.

\[fig:3\] ![RMSE of different nonlinear estimators over sampling steps when full and correct model information is provided to all estimators](3 "fig:"){width="8"}

\[fig:4\] ![image](4){width="12"}

\[fig:5\] ![RMSE of different nonlinear estimators over sampling steps when incorrect a priori information is provided about the sensors’ uncertainty](5 "fig:"){width="8"}

Different from the previous example, at least two bearing sensors need to cooperate in order to project the bearing observations into the position space for O2 inference [@Li17clustering]. The nonlinear projection will lead to a bias if the bearing noise is not properly taken into account. For this reason, we perform the LS fitting directly on the bearing data with regard to the four sensors jointly, rather than on its projection into the position space.
This removes the need for explicit statistical knowledge of the sensor observation noise. A sliding time window of 10 sampling steps (of total length $1$s) and a polynomial fitting function of order $m=2$ were used. The fitting function is assumed to be the same as . Given that the four sensors are of the same quality (although we do not actually need to know $v_{k,i}$), the joint optimization function is given as $$\label{eq:26}
\underset{a_1,a_2,b_1,b_2}{\text{argmin}} \sum_{t=k_1}^{k_2} \sum_{i=1}^4 \bigg(\theta_{t,i}-\text{arctan}\Big(\frac{b_1+b_2t-s_{y,i}}{a_1+a_2t-s_{x,i}}\Big)\bigg)^2 ,$$ where $k_2$ is the current time. Unlike our previous simulation, whose simple linear measurement model can easily be solved analytically, the above nonlinear formula is optimized with the LS curve fitting function LSQCURVEFIT provided by the Matlab Optimization Toolbox. All traditional estimators are initialized favorably with the true state $\mathbf{x}_0=[0,0,1,0,0]^T$ and covariance $\text{diag}([10.1,10.1,1.1,1.1,1]^T)$, and have correct information about the sensor noise statistics. We initialize the position estimates in our fitting approach at the first two sampling instants as $[0,0]^T$ and $[1,0]^T$, respectively. Then, the fitting (for smoothing, filtering and forecasting, respectively) is performed from the third sampling instant onward. This can be viewed as a *hot-start* fitting, as information about the initial state $\mathbf{x}_0$ (excluding the covariance) is used, whereas fitting that uses no a priori information, as in the last simulation, is called *cold-start*. We first set $\Sigma_v=0.01$. The simulation is performed with 100 Monte Carlo runs, each consisting of 200 sampling steps (lasting 20 seconds) and using the same deterministic trajectory but randomly generated observation series. The average performance of the different estimators over 200 sampling steps is given in Fig. 3.
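As an illustrative sketch (not the paper's exact code), the joint bearing fit above can be reproduced with a generic LS solver; here we use SciPy's `least_squares` in place of LSQCURVEFIT, with a synthetic data window and assumed values for the trajectory and noise level. We use `arctan2` rather than `arctan` to resolve the bearing quadrant.

```python
# Sketch of the joint bearing-only LS fit over one sliding window, assuming a
# first-order position trajectory p(t) = (a1 + a2*t, b1 + b2*t).
# Window length, true parameters and noise level below are illustrative.
import numpy as np
from scipy.optimize import least_squares

# The four sensor locations used in the simulation.
sensors = np.array([[-0.5, 3.5], [-0.5, -3.5], [7.0, -3.5], [7.0, 3.5]])

def residuals(params, times, bearings):
    """Stacked bearing residuals over all window steps and all sensors."""
    a1, a2, b1, b2 = params
    px = a1 + a2 * times[:, None]        # (T, 1), broadcast against 4 sensors
    py = b1 + b2 * times[:, None]
    pred = np.arctan2(py - sensors[:, 1], px - sensors[:, 0])   # (T, 4)
    return (bearings - pred).ravel()

# Simulate a 10-step window of noisy bearings from a known trajectory.
rng = np.random.default_rng(0)
t = np.arange(10) * 0.1
true = np.array([1.0, 0.5, 2.0, -0.3])   # [a1, a2, b1, b2]
obs = np.arctan2((true[2] + true[3] * t[:, None]) - sensors[:, 1],
                 (true[0] + true[1] * t[:, None]) - sensors[:, 0])
obs += 0.01 * rng.standard_normal(obs.shape)

# Warm start: in the sliding-window scheme the previous window's coefficients
# would seed the next fit; here we simply perturb the true values.
fit = least_squares(residuals, x0=true + 0.1, args=(t, obs))
```

Re-running such a fit at every step with the previous solution as `x0` is what keeps the sliding-window optimization cheap in practice.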
The RMSE over 200 sampling steps and the average running time per run are given in Table \[tab:2\]. Regarding estimation accuracy, the online fitting outperforms both the EKF and UKF using a single WPV model but slightly underperforms the EKF/UKF IMM. The smoothed fitting improves on the delayed fitting; the latter is equivalent to the EKS and UKS, and all are inferior to the EKF/UKF IMM smoothers. For 5-step-ahead forecasting, our fitting approach outperforms the IMM approach. Moreover, the computing speed of our fitting approaches is the fastest in every category (whether filtering, smoothing or forecasting) and is remarkably faster using the LSQCURVEFIT tool than with the polynomial fitting used in the previous example. This is primarily because, in our realization here, the coefficient parameters obtained in the last fitting process are used as the initial parameters in the next fitting process, as in . This significantly reduces the work of the optimization routine for solving and can be applied in most situations. Next, we change the simulation setup by reducing the real noise of all bearing sensor observations to $\Sigma_v=0.0025$ in , without informing any of the estimators. All filters, smoothers and forecasters, as well as the unbiased O2 inference, still use the old value $\Sigma_v=0.01$. The biased O2 inference and our fitting approaches, which never use this information, are unaffected. This situation corresponds to the realistic case in which the user does not have correct information about the sensors’ noise statistics. In this case, the performance in one run and the average performance over 100 runs for the different estimators are given in Figs. 4 and 5, respectively. The RMSE and the average computing time per run are given in Table \[tab:3\]. The results show that our fitting approaches benefit the most from the increased sensor accuracy. In terms of estimation RMSE, the online fitting outperforms the EKF/UKF and the EKF/UKF-IMM approaches.
The smoothed and delayed fittings achieve the best performance on average, especially during the periods when target maneuvers occur, such as $t=6$-$8$s and $t=13$-$15$s, demonstrating better performance than the IMM approach, which suffers a maneuver detection delay. It is interesting to note a crossover in the performance of the delayed and smoothed fittings, which slightly outperform each other at different stages. As addressed, the latter may suffer somewhat from overfitting and therefore does not always perform better than the former. A further analysis of the overfitting problem remains open. In terms of computing speed, our fitting approaches are the fastest overall, demonstrating good real-time potential.

The second simulation setup, in which complete and correct sensor noise information is unavailable, conforms more to reality than the first, in which the complete and correct model information is simply too ideal. In the next example, we will demonstrate that even when perfectly modeled and parameterized, suboptimal filters that are not ideally initialized can perform worse than data-driven solutions that make little or no state-model assumption.

  **Estimators**                    **Aver. RMSE**   **Compt. Time (s)**
  --------------------------------- ---------------- ---------------------
  EKF (using WPV)                   0.3263           0.0288
  EKS (using WPV)                   0.1652           0.0494
  UKF (using WPV)                   0.3270           0.0900
  UKS (using WPV)                   0.1649           0.1108
  EKF-IMM                           0.2985           0.1749
  EKF-IMM smoother                  0.1302           1.2694
  UKF-IMM                           0.2728           0.9816
  UKF-IMM smoother                  0.1316           2.7550
  UKF-IMM 5-step forecaster         0.4845           2.9821
  Online Fitting                    0.3029           0.0267
  Delayed Fitting                   0.1631           0.0446
  Smoothed Fitting                  0.1500           0.2623
  Fitting-based 5-step forecaster   0.6200           0.0190

  : Average performance of different nonlinear estimators (when completely correct sensor noise knowledge is used)[]{data-label="tab:2"}

  **Estimators**                    **Aver. RMSE**   **Compt. Time (s)**
  --------------------------------- ---------------- ---------------------
  EKF (using WPV)                   0.2716           0.0404
  EKS (using WPV)                   0.1341           0.0587
  UKF (using WPV)                   0.2725           0.1557
  UKS (using WPV)                   0.1338           0.1736
  EKF-IMM                           0.2247           0.3078
  EKF-IMM smoother                  0.0883           1.2402
  UKF-IMM                           0.1965           0.9463
  UKF-IMM smoother                  0.0939           2.4897
  UKF-IMM forecaster                0.3894           2.6716
  Online Fitting                    0.1599           0.0248
  Delayed Fitting                   0.0875           0.0430
  Smoothed Fitting                  0.0867           0.3004
  Fitting-based 5-step forecaster   0.3937           0.0148

  : Average performance of different nonlinear estimators (when incorrect sensor noise knowledge is used)[]{data-label="tab:3"}

Ballistic target tracking
-------------------------

\[fig:6\]

![Geometry of the vertically falling target](6 "fig:"){width="8"}

\[fig:7\]

![image](7){width="14"}

\[fig:8\]

![image](8){width="14"}

In this example, the target is falling vertically, a scenario widely studied for testing nonlinear filters, e.g., in [@Julier00] and [@Athans68]. The geometry of the scenario is illustrated in Fig. 6, with two known parameters: the horizontal distance of the radar to the target $M=10^5$ ft and the altitude of the radar $H=10^5$ ft. The target state is modeled as $x_t = [h_t,s_t,c_t]^T$, consisting of altitude $h_t$, velocity $s_t$ and ballistic coefficient $c_t$. While only the altitude is the directly-observed state, we will also show how to infer the velocity and ballistic coefficient via fitting. The continuous-time nonlinear dynamics of the target is governed by the following differential equations $$\label{eq:27}
\dot{h}_t=-s_t ,$$ $$\label{eq:28}
\dot{s}_t=-e^{-\gamma h_t}s_t^2c_t ,$$ $$\label{eq:29}
\dot{c}_t=0 ,$$ where $\gamma=5\times10^{-5}$. The discrete-time range observation is made every second and is given by $$\label{eq:30}
y_k=\sqrt{M^2+(h_k-H)^2} + v_k ,$$ where the observation noise is Gaussian, $v_k\sim \mathcal{N}(0,R)$.
To simulate the ground truth, the initial state of the target is set as $\mathbf{x}_0 = [3\times10^5 \text{ft},2\times10^4 \text{ft/s},10^{-3} \text{ft}^{-1}]^T$. Following [@Judd00], for a highly precise approximation of the differential equations, a fourth-order Runge-Kutta method with 64 iterations per second between two successive observations is employed to simulate the deterministic ground truth. The fourth-order Runge-Kutta method is also used in the estimators for accurate simulation of the state motion. In this example, the online fitting is compared with the O2 inferences and three typical nonlinear filters: the EKF [@Athans68], UKF [@Julier00] and PF. To initialize them for tracking, a priori information about the initial state of the target is given as $\mathbf{x}_0 =[3\times10^5 \text{ft},2\times10^4 \text{ft/s},3\times 10^{-3} \text{ft}^{-1}]^T$ with error covariance $\mathbf{P}_0=\text{diag}([10^6 \text{ft},4\times10^6 \text{ft/s},10^{-4} \text{ft}^{-1}]^T)$. This is the same as in [@Julier00; @Athans68]. That is, the a priori knowledge about the target altitude and velocity is ideally consistent with the truth, but the knowledge of the ballistic coefficient is poor. In particular, the Gaussian observation noise has a relatively small variance, which yields a steep Gaussian likelihood function. As a result, the standard sampling importance resampling (SIR) PF suffers significantly from sample degeneracy/impoverishment [@Li15resampling]. To combat this, we use a likelihood function with heavy tails as follows $$\label{eq:31}
p(y_k|h_k^{(i)}) \propto \text{exp}\bigg(-\frac{(y_k-y_k^{(i)})^2}{\underset{i}{\text{max}}\{(y_k-y_k^{(i)})^2\}}\bigg),$$ where $y_k^{(i)}=\sqrt{M^2+(h_k^{(i)}-H)^2}$ and $h_k^{(i)}$ is the estimated altitude of the $i$th of the 200 particles in total.
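The RK4 propagation of the drag-only dynamics above (64 substeps per 1 s observation interval) can be sketched as follows; this is our own minimal illustration, with hypothetical function names.

```python
# Fourth-order Runge-Kutta propagation of the falling-target dynamics
# h' = -s, s' = -exp(-gamma*h)*s^2*c, c' = 0, with 64 substeps per second.
import numpy as np

GAMMA = 5e-5

def deriv(x):
    h, s, c = x
    return np.array([-s, -np.exp(-GAMMA * h) * s**2 * c, 0.0])

def rk4_step(x, dt):
    k1 = deriv(x)
    k2 = deriv(x + 0.5 * dt * k1)
    k3 = deriv(x + 0.5 * dt * k2)
    k4 = deriv(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(x, T=1.0, substeps=64):
    """Advance the state by T seconds using `substeps` RK4 steps."""
    dt = T / substeps
    for _ in range(substeps):
        x = rk4_step(x, dt)
    return x

# One observation interval starting from the true initial state.
x0 = np.array([3e5, 2e4, 1e-3])  # [altitude (ft), velocity (ft/s), coeff (1/ft)]
x1 = propagate(x0)
```

Over one second the altitude drops by roughly the current velocity, the ballistic coefficient stays exactly constant, and the drag term slowly decelerates the fall.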
Given the latent constraint that the target is falling vertically and makes no movement in the horizontal direction, the O2 inference estimates the altitude $h_k$ by triangulation between $h_k$, $M$ and $y_k$ as follows: $$\label{eq:32}
\hat{h}_k=\pm\sqrt{y_k^2-M^2} + H ,$$ where the sign changes from positive to negative (only once), at an altitude of about $h_k \approx H$, during the entire tracking process. More specifically, the sign can be determined from the elevation angle of the radar in practice (if available) or from another latent rule: *the target is falling in a single direction with a velocity that accelerates over time*. That is, we have the contextual information of “*accelerated falling*", which forms a constraint on the altitude and velocity as follows: $$\label{eq:33}
\left\{
\begin{array}{ll}
\hat{h}_k<\hat{h}_{k-1}\\
\hat{s}_{k-1} < \hat{s}_k
\end{array}
\right. .$$ When the statistics of the sensor observation noise are known, de-biasing shall be applied, for which we use the Monte Carlo de-biasing approach [@Li16O2; @Li17clustering] with 100 samples. This is referred to as the unbiased O2 inference, while the estimate given by without de-biasing is referred to as the biased O2 inference. As addressed earlier, we can assume a reasonable trajectory function for both altitude and velocity and carry out fitting for estimation. Here, the directly-observed state is the altitude, whose fitting estimate relies only on the observations, while for the velocity the deterministic dynamic model - can be used. For both altitude and velocity, we use a sliding time window of no more than 5 sampling steps (as we found both trajectories to be very smooth) and a fitting function of order 3.
That is, the altitude function is given as $$\label{eq:34}
h_t=a_1+a_2t+a_3t^2 ,$$ and the corresponding objective function to be minimized is $$\label{eq:35}
\underset{a_1,a_2,a_3}{\text{argmin}} \sum_{t=k_1}^{k_2} \big( y_k - \sqrt{M^2+(a_1+a_2t+a_3t^2)^2}\big)^2 .$$ Once the altitude trajectory function is obtained, its derivative is $$\label{eq:36}
\dot{h}_t = a_2+2a_3t .$$ Similarly, the velocity function can be assumed as $$\label{eq:37}
s_t=b_1+b_2t+b_3t^2 ,$$ and consequently, its derivative is $$\label{eq:38}
\dot{s}_t = b_2+2b_3t .$$ The velocity function should be determined such that the discrepancies between the two sides of and are minimized. By comparing with and with , these two discrepancies can be written, respectively, as $$\label{eq:39}
\Phi_1:= \dot{h}_t -(-s_t) ,$$ $$\label{eq:40}
\Phi_2:= \dot{s}_t- (-e^{-\gamma h_t}s_t^2\hat{c}_t) ,$$ where $\hat{c}_t$ is calculated using the data from the previous sampling instant, based on , as follows $$\label{eq:41}
\hat{c}_t :=-\frac{\dot{s}_{t-1}}{e^{-\gamma h_{t-1}} s_{t-1}^2}.$$ We are able to do so because the ballistic coefficient is a constant (this is known a priori). However, an initial estimate of it is needed in the first round of fitting, for which we assume the same value as in the filters, i.e., $c_0=3\times 10^{-3} \text{ft}^{-1}$. Given that and are equally weighted, the joint optimization function in the LS sense can be written as $$\label{eq:42}
\underset{b_1,b_2,b_3}{\text{argmin}} \sum_{t=k_1}^{k_2} (\Phi_1^2+\Phi_2^2).$$ First, we set the sensor noise the same as in [@Julier00], using $R=10^4$. This information is provided exactly to the EKF, UKF, PF and the unbiased O2 inference, while the biased O2 inference does not need it. The simulation results are given in Fig. 7: the altitude truth and the estimates of the different estimators in one trial, and the RMSEs of the altitude, velocity and ballistic coefficient, respectively.
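Before turning to the results, the altitude fit and its derivative-based velocity estimate can be sketched as below (again substituting SciPy for LSQCURVEFIT; the window is synthetic and noise-free, and all variable names are ours, purely for illustration).

```python
# Fit h_t = a1 + a2*t + a3*t^2 to one window of range observations, then read
# the velocity off the analytic derivative: s_t = -(a2 + 2*a3*t).
import numpy as np
from scipy.optimize import least_squares

M, H = 1e5, 1e5                            # radar geometry (ft)

def residuals(a, times, ranges):
    h = a[0] + a[1] * times + a[2] * times**2
    return ranges - np.sqrt(M**2 + (h - H)**2)

t = np.arange(5, dtype=float)              # 5-step window, 1 s sampling
h_true = 3e5 - 2e4 * t                     # locally linear fall (illustrative)
y = np.sqrt(M**2 + (h_true - H)**2)        # noise-free ranges

fit = least_squares(residuals, x0=[3e5, -1.9e4, 0.0], args=(t, y), method='lm')
a1, a2, a3 = fit.x

h_hat = a1 + a2 * t + a3 * t**2            # fitted altitude
v_hat = -(a2 + 2 * a3 * t)                 # velocity from the derivative
```

With noise-free data the fit recovers the generating trajectory, so the derivative-based velocity matches the true fall rate of the window.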
It is shown that at altitudes near $h_t \approx H$, all estimators are highly inaccurate. Surprisingly, except near $h_t \approx H$, where the calculation given by is very inaccurate, the O2 inference outperforms the EKF/UKF/PF remarkably. This indicates that the filters are in fact ineffective most of the time, according to the definition of “*effectiveness*" given in [@Li16O2]. However, both the de-biasing and the fitting approach improve on the biased O2 inference only marginally. The average altitude RMSEs and computing times of the different estimators are given in Table \[tab:4\]. Overall, the online fitting inference achieves the best altitude estimation performance, while the PF performs worst. Both the (biased and unbiased) O2 inferences and the online fitting approach are inferior to the ideally-modeled filters in estimating the velocity and the constant ballistic coefficient, which are not directly-observed variables. Regarding computing speed, the O2 inference is unsurprisingly much faster than the others, while the PF is the slowest. Second, we apply a larger sensor noise $R=10^5$, which is correctly provided to all estimators; all other settings remain the same. The simulation results are given in Fig. 8 and Table \[tab:5\], in the same format as Fig. 7 and Table \[tab:4\], respectively. Most of the trends in Fig. 8 are similar to those in Fig. 7. We can see that the biased O2 inference, the unbiased O2 inference and our fitting approach all outperform the filters, including the EKF, UKF and PF, for altitude estimation. While the de-biasing still does not improve the O2 inference, the fitting does. For more insight, Figs. 7 and 8 show that the fitting approach benefits the most around time 10 s (when $h_t \approx H$), when all estimators suffer from unstable estimation. The biased and unbiased O2 inferences and the online fitting approach are again significantly inferior to the filters in estimating the velocity and ballistic coefficient.
This exposes one limitation of the data-driven approaches, including the O2 inferences and the online fitting approach, for which we need to develop more thorough data mining solutions to exploit the deterministic model information hidden in -. At the current stage, we have primarily concentrated on the directly-observed state and position-trajectory inference.

  **Estimators**   **Aver. Altitude RMSE(ft)**   **Compt. Time (s)**
  ---------------- ----------------------------- ---------------------
  EKF              355                           0.1517
  UKF              349                           0.1588
  PF               1664                          24.63
  Biased O2        396                           2.7$\times 10^{-4}$
  Unbiased O2      392                           0.0022
  Fitting          212                           2.324

  : Average altitude RMSE and computing time of different estimators ($R=10^4$)[]{data-label="tab:4"}

  **Estimators**   **Aver. Altitude RMSE(ft)**   **Compt. Time (s)**
  ---------------- ----------------------------- ---------------------
  EKF              1262                          0.1518
  UKF              1213                          0.1601
  PF               6970                          16.41
  Biased O2        1002                          3.18$\times10^{-4}$
  Unbiased O2      1031                          0.0023
  Fitting          613                           2.517

  : Average altitude RMSE and computing time of different estimators ($R=10^5$)[]{data-label="tab:5"}

Conclusion
==========

For a class of target tracking problems with poor a priori knowledge about the system, we presented an online sensor data fitting framework for approximating the continuous-time trajectory function. This leads to a unified methodology for joint smoothing, tracking and forecasting, and provides a flexible and reliable way to use information such as “*the target is descending*", “*the target is about to pass by a location*" or “*the target goes to a fixed destination*". Such information is common and important in reality, but is often overlooked or treated in an ad-hoc manner by existing solutions because it cannot be quantitatively defined as statistical knowledge.
In a variety of representative scenarios, the proposed methods perform comparably to classic suboptimal algorithms that have complete and correct model information, and can even outperform them when, as is more common in reality, the latter are provided with incorrect sensor statistics or are improperly initialized. Moreover, the present sliding-time-window fitting approach does not need ad-hoc multiple/adaptive models to handle target maneuvers, as long as the target trajectory remains smooth over the time window. This adds greatly to the reliability, flexibility and ease of implementation of the framework. Unifying the tasks of smoothing, tracking and forecasting in a single estimation framework is of high interest for many significant real-world problems in which the history, the present and the future of the target's state are desired simultaneously. Also, continuous-time trajectory acquisition is essential for detecting and resolving potential trajectory conflicts when multiple targets exist or compete, and for analyzing/learning target movement patterns. All of this, together with the rapid escalation and massive deployment of sensors, will make sensor data-learning/fitting approaches ever more promising. There is broad space for further development in this direction, including better data mining solutions to infer the indirectly-observed variables from the directly-observed ones (especially when the observations are sparse), to obtain further statistical knowledge of the estimate (e.g., its accuracy in the form of a variance), and data-to-trajectory association for handling multiple (potentially interacting) targets.

Appendix {#appendix .unnumbered}
========

Lagrange remainder for linearization
------------------------------------

We analyze the linearization error caused by converting a nonlinear fitting function to a linear one, based on the Taylor series expansion.
A Taylor series of a real function $f(\mathbf{x})$ about a point $\mathbf{x}=\mathbf{x}_0$ is given by $$f(\mathbf{x})=f(\mathbf{x}_0)+\dot{f}(\mathbf{x}_0)(\mathbf{x}-\mathbf{x}_0)+\cdots+\frac{1}{n!}f^{(n)}(\mathbf{x}_0)(\mathbf{x}-\mathbf{x}_0)^n+R_n ,$$ where $R_n$ is a remainder term known as the Lagrange remainder (also known as the truncation error), given by $$R_n=\frac{f^{(n+1)}(\bar{\mathbf{x}})}{(n+1)!}(\mathbf{x}-\mathbf{x}_0)^{n+1} ,$$ where $\bar{\mathbf{x}}$ lies somewhere in the interval $[\mathbf{x}_0,\mathbf{x}]$. This indicates that the closer the prediction point $\mathbf{x}$ is to $\mathbf{x}_0$, the smaller $R_n$ is. This explains why piecewise/sliding time-window fitting is highly recommended in our approach.

Recursive Least Squares Adaptive Filter
---------------------------------------

Let $y_k= x_k^Tc_k+e_k$ be a general one-dimensional linear fitting model, where $x_k$ and $y_k$ are the system input (regressor) and output at time $k$, $e_k$ is an additive noise term (i.e., the fitting error to be admitted) and $c_k$ is the parameter to be estimated recursively by minimizing the squared error $\sum_{t=1}^{k}\lambda^{k-t} (y_t-x_t^Tc_t)^2 $. The recursive LS algorithm updates the parameter estimate by the following recursion within a time window $$\hat{c}_k = \hat{c}_{k-1} + \mathcal{G}_k(y_k-x_k^Tc_{k-1}),$$ $$\mathcal{G}_k = \frac{P_{k-1}x_k}{\lambda + x_k^TP_{k-1}x_k} ,$$ $$P_k = \frac{1}{\lambda}\bigg( P_{k-1}-\frac{P_{k-1}x_kx_k^TP_{k-1}}{\lambda + x_k^TP_{k-1}x_k} \bigg) ,$$ where the forgetting factor $\lambda$ is usually chosen in $[0.9, 0.999]$ to reduce the influence of past data, or set to $1$ to weight all data in the time window equally, and the matrix $P_k$ is related to the covariance matrix, but $P_k \neq \text{Cov}(\hat{c}_k)$.

[^1]: T. Li and J.M. Corchado are with School of Sciences, University of Salamanca, 37007 Salamanca, Spain, E-mail: {t.c.li, corchado}@usal.es

[^2]: T.
Li is presently visiting the Institute of Telecommunications, Vienna University of Technology, 1040 Wien, Austria [^3]: H. Chen is with Department of Electrical Engineering, University of New Orleans, LA 70148, USA, E-mail: [email protected] [^4]: S. Sun is with School of Mechanical Engineering, Northwestern Polytechnical University, Xi’an 710072, China, E-mail: [email protected] [^5]: This work is in part supported by the Marie Skłodowska-Curie Individual Fellowship (H2020-MSCA-IF-2015) under Grant no. 709267.
--- abstract: | In this paper we study the dynamics of the stellar interior of the early red-giant star KIC 4448777 by asteroseismic inversion of 14 splittings of the dipole mixed modes obtained from [*Kepler*]{} observations. In order to overcome the complexity of the oscillation pattern typical of red-giant stars, we present a procedure which involves a combination of different methods to extract the rotational splittings from the power spectrum. We find not only that the core rotates faster than the surface, confirming previous inversion results for other red giants [@deheuvels2012; @deheuvels2014], but we also estimate the variation of the angular velocity within the helium core with a spatial resolution of $\Delta r=0.001R$ and verify the hypothesis of a sharp discontinuity in the inner stellar rotation [@deheuvels2014]. The results show that the entire core rotates rigidly with an angular velocity of about $\langle\Omega_c/2\pi\rangle=748\pm18$ nHz and provide evidence for an angular velocity decrease through a region between the helium core and part of the hydrogen-burning shell; however, we do not succeed in characterizing the rotational slope, owing to the intrinsic limits of the applied techniques. The angular velocity, from the edge of the core and through the hydrogen-burning shell, appears to decrease with increasing distance from the center, reaching an average value in the convective envelope of $\langle\Omega_s/2\pi\rangle=68\pm22$ nHz. Hence, the core of KIC 4448777 rotates between 8 and 17 times faster than the envelope. We conclude that a data set which includes only dipolar modes is sufficient to infer quite accurately the rotation of a red giant not only in the dense core but also, with a lower level of confidence, in part of the radiative region and in the convective envelope. author: - 'M. P. Di Mauro, R. Ventura, D. Cardini, D. Stello, J. Christensen-Dalsgaard, W.  A. Dziembowski, L. Paternò, P. G. Beck, S.
Bloemen, G. R. Davies, K. De Smedt, Y. Elsworth, R. A. García, S. Hekker, B. Mosser, and A. Tkachenko' title: 'Internal rotation of the red-giant star KIC 4448777 by means of asteroseismic inversion' --- Introduction ============ Stellar rotation is one of the fundamental processes governing stellar structure and evolution. The internal structure of a star at a given phase of its life is strongly affected by its angular momentum transport history. Investigating the internal rotational profile of a star and reconstructing its evolution with time are crucial for achieving basic constraints on the angular momentum transport mechanisms acting in the stellar interior during different phases of its evolution. In particular, physical processes that affect rotation and in turn are affected by rotation, such as convection, turbulent viscosity, meridional circulation, mixing of elements, internal gravity waves, dynamos and magnetism, are at present not well understood and are modeled with limited success [e.g., @marques2013; @cantiello]. Until fairly recently, rotation inside stars had been a largely unexplored field of research from an observational point of view. Over the past two decades helioseismology has changed this scenario, making it possible to measure the rotation profile in the Sun’s interior through measurements of the splittings of the oscillation frequencies, revealing a picture of the solar internal dynamics very different from previous theoretical predictions [see e.g., @elsworth1995; @schou1998; @thompson2003]. Contrary to what one might expect from angular-momentum conservation theory, which predicts a Sun with a core rotating much faster than the surface when only meridional circulation and classical hydrodynamic instabilities are invoked [e.g., @chaboyer], helioseismology shows an almost uniform rotation in the radiative interior and an angular velocity monotonically decreasing from the equator to high latitudes in the convective envelope.
This strongly supports the idea that several powerful processes act to redistribute angular momentum in the interior, such as magnetic torquing [e.g., @brun] and internal gravity waves [e.g., @talon2005]. Rotation breaks the spherical symmetry of the stellar structure and splits the frequency of each oscillation mode of harmonic degree $l$ into $2l+1$ components, which appear as a multiplet in the power spectrum. Multiplets with a fixed radial order $n$ and harmonic degree $l$ are said to exhibit a frequency “splitting” defined by: $$\delta \nu_{n,l,m}=\nu_{n,l,m}-\nu_{n,l,0}\; ,
\label{Eq.1}$$ somewhat analogous to the Zeeman effect on the degenerate energy levels of an atom, where $m$ is known as the azimuthal order. Under the hypothesis that the rotation of the star is sufficiently slow, so that effects of the centrifugal force can be neglected, the frequency separation between components of the multiplet is directly related to the angular velocity [@cow]. In recent years, spectacular asteroseismic results based on data from the space missions CoRoT [@baglin2006] and [*Kepler*]{} [@borucki2010] have revolutionized the field. In particular, the [*Kepler*]{} satellite has provided photometric time series of unprecedented quality, cadence and duration, supplying the basic conditions for studying the internal rotational profile and its temporal evolution in a large sample of stars, characterized by a wide range of masses and evolutionary stages. In this context the detection of solar-like pulsations - as in the Sun excited by turbulent convection - in thousands of red giants, from the bottom to the tip of the red-giant branch [see, e.g., @mosser2013b; @Stello13] and to the AGB [@corsaro2013], appears particularly exciting. Red-giant stars are ideal asteroseismic targets for many reasons. Compared to those of main-sequence stars, solar-like oscillations in red giants are easier to detect due to their higher pulsation amplitudes [@mosser2013a].
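For context, the splitting defined above is, to first order in slow rotation, a weighted average of the internal angular velocity; this standard result is added here for reference (symbols follow common helioseismic usage, with one common normalization choice):

```latex
\delta\nu_{n,l,m} \simeq m\,\beta_{n,l}\int_0^R K_{n,l}(r)\,\frac{\Omega(r)}{2\pi}\,\mathrm{d}r ,
\qquad \int_0^R K_{n,l}(r)\,\mathrm{d}r = 1 ,
```

where $K_{n,l}(r)$ is a unimodular rotational kernel built from the mode eigenfunctions and $\beta_{n,l}$ depends on the mode character (close to $1$ for p-dominated modes and approaching $1-1/[l(l+1)]$, i.e. $1/2$ for $l=1$, for g-dominated modes).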
What is more important, red-giant frequency spectra reveal mixed modes [see, e.g., @beck2011], which probe not only the outer layers, where they behave like acoustic modes, but also the deep radiative interior, where they propagate as gravity waves. Both the gravity- and the acoustic-wave propagation zones contribute, in various proportions, to the formation of mixed modes. The greatest contribution from the acoustic zone occurs for modes with frequencies near the resonant frequency of the acoustic cavity. Mode inertias attain local minima at these frequencies. Moreover, the red-giant phase represents a crucial step in the history of stellar angular momentum distribution [@ceillier2013; @marques2013]. When a star evolves off the relatively long and stable main sequence, its rotation starts evolving differently in the inner and outer parts, causing the formation of a sharp rotation gradient in the intermediate regions where hydrogen is still burning: assuming that angular momentum is locally conserved, the contraction of the core causes its rotation to speed up on a relatively short timescale, while the outer layers slow down due to their expansion. The accurate determination of the rotational profiles of subgiants and red giants provides information on the angular momentum transport mechanisms, potentially leading to significant improvements in the modeling of stellar structure and evolution. Recently, results based on measurements of the rotational splittings of dipole mixed modes have been reported in the literature [e.g. @beck2012; @beck2014; @mosser2012c; @deheuvels2012; @deheuvels2014]. @beck2012, based on high-precision measurements of rotational splittings provided by [*Kepler*]{}, found that the cores of red-giant stars rotate faster than the upper layers. These results were confirmed by applying inversion techniques to rotational splittings by @deheuvels2012 [@deheuvels2014].
Asteroseismology of large samples of stars [@mosser2012c; @deheuvels2014] has made it possible to establish that the mean core rotation slows down significantly as stars ascend the red-giant branch. Several theoretical investigations have explored the consequences of these recent results for internal angular momentum transport inside solar-like oscillating stars along their evolution [e.g., @ceillier2013; @marques2013; @tayar2013; @cantiello]. These studies show that the internal rotation rates predicted by current theoretical models of subgiants and red giants are at least 10 times higher than observed, suggesting the need to investigate more efficient mechanisms of angular-momentum transport acting on the appropriate timescales during these phases of stellar evolution. In this paper we analyze more than two years of [*Kepler*]{} observations of the red-giant star KIC 4448777 and identify 14 rotational splittings of mixed modes in order to characterize its internal rotational profile, using different inversion techniques first applied successfully to helioseismic data [e.g., @thompson96; @schou1998; @paterno1996; @dimauro1998] and recently to data of more evolved stars [@deheuvels2012; @deheuvels2014]. The paper is organized as follows: Section 2 reports the results of the spectroscopic analysis of the star aimed at the determination of its atmospheric parameters. Section 3 describes the method adopted to analyze the oscillation spectrum and identify the mode frequencies and the related splittings. Section 4 provides the basic formalism for performing the inversion, starting from the observed splittings and models of the star. Section 5 describes the evolutionary models constructed to best fit the atmospheric and asteroseismic constraints. Section 6 presents the details of the asteroseismic inversion carried out to infer the rotational profile of the star.
In Section 7 we test the inversion techniques for the case of red giants and the capability of detecting the presence of rotational gradient in the deep interior of the star. In Section 8 the results obtained by the inversion techniques are compared with those obtained by other methods. Section 9 summarizes the results and draws the conclusions. Spectroscopic analysis {#Sec:spec} ====================== In order to properly characterize the star, six spectra of 1800 seconds integration time each were obtained with the HERMES spectrograph [@raskin2011], mounted on the 1.2-m MERCATOR telescope at La Palma. This highly efficient échelle spectrometer has a spectral resolution of R=86000, covering a spectral range from 380 to 900 nm. The raw spectra were reduced with the instrument specific pipeline and then averaged to a master spectrum. The signal-to-noise ratio was around 135 in the range from 500 to 550 nm. The atmospheric parameter determination was based upon Fe I and Fe II lines which are abundantly present in red-giant spectra. We used the local thermal equilibrium (LTE) Kurucz-Castelli atmosphere models [@castelli] combined with the LTE abundance calculation routine MOOG (version August 2010) by C. Sneden. Fe lines were identified using VALD line lists [@kupka]. For a detailed description of the different steps needed for atmospheric parameter determination, see, e.g., @desmedt. We selected Fe lines in the highest signal-to-noise region of the master spectrum in the wavelength range between 500 and 680 nm. The equivalent width (EW) was calculated using direct integration and the abundance of each line was then computed by an iterative process where theoretically calculated EWs were matched to observed EWs. Due to the high metallicity, the spectrum of KIC 4448777 displays many blended lines. 
To avoid these blended lines in our selected Fe line lists, we first calculated the theoretical EWs of all available Fe I and Fe II lines in the wavelength range between 500 and 680 nm. The theoretical EWs were then compared to the observed EWs to detect any possible blends. The atmospheric stellar parameters derived from the spectroscopic analysis are reported in Table \[tab:parameters\] and are based upon the results from 46 Fe I and 32 Fe II lines. [ccccc]{} $11.56$ & $4750 \pm 250$ & $3.5 \pm 0.5$ & $0.23 \pm 0.12$ & $< 5$\ \[tab:parameters\] We have also explored the possibility of deriving the surface rotation rate by following the method of @Garcia2014, but we have not found any signature of spot modulation that would indicate ongoing magnetic activity. Time series analysis and Fourier spectrum {#sec:osc} ========================================= For the asteroseismic analysis we have used the near-continuous photometric time series obtained by [*Kepler*]{} in long-cadence mode (time sampling of 29.4 min). This light curve spans about 25 months, corresponding to observing quarters Q0-9, providing a formal frequency resolution of 15 nHz. We used the so-called PDC-SAP (presearch data conditioning simple aperture photometry) light curve [@Jenkins10], corrected for instrumental trends and discontinuities as described by @Garcia11. The power spectrum of the light curve shows a clear power excess in the range $(180-260)\, \mu$Hz (Fig. \[fig:spectrum\]) due to radial modes, with the comb-like pattern typical of solar-like p-mode oscillations, and non-radial modes, particularly those of spherical degree $l=1$, modulated by the mixing with g modes. ![In panel a) the observed frequency spectrum of KIC 4448777 is shown. The harmonic degrees of the modes ($l=0,1,2,3$) are indicated. Multiplets due to rotation are visible for $l=1$. 
Panel b) shows the observed rotational splittings for $l=1$ modes.[]{data-label="fig:spectrum"}](figfin.eps){width="12cm"} The initial analysis of the spectrum was done using the pipeline described in @Huber09. With this method we determined the frequency at maximum oscillation power $\nu_{\rm max} = (219.75 \pm 1.23)\, \mu$Hz and the so-called large frequency separation between modes with the same harmonic degree $\Delta\nu = (16.96 \pm 0.03)\, \mu$Hz. The quoted uncertainties have been derived by analyzing 500 spectra generated by randomly perturbing the original spectrum, following @huber2012. For the purposes of this paper it has also been necessary to identify the individual modes and measure their frequencies. This process, known as “peak bagging”, is notoriously difficult to perform for red giants because of their complex frequency spectra, with modes of very different characteristics lying within narrow, sometimes even overlapping, frequency ranges. Mixed modes of different inertia have very different damping times and hence also different profiles in the frequency spectrum. We therefore used a combination of known methods tailored to our particular case, although we should mention that an automatic “one-fits-all” approach has recently been developed by @corsaro. Four independent groups (simply called ‘fitters’) performed the fitting of the modes using slightly different approaches: 1. The first team smoothed the power spectrum of the star to account for the intrinsic damping and re-excitation of the modes. They located the modes in two separate steps, using a different level of smoothing in each. First, they heavily smoothed the power spectrum (by 13 independent bins in frequency) to detect the most damped modes, including radial, quadrupole and dipole modes with lower inertia; then they smoothed less (by 7 bins) to identify the dipole modes with higher inertia. 
In both cases the peaks were selected and associated with modes only if they were significant at the 99% level, setting the threshold according to the statistics for smoothed spectra, which takes into account the level of smoothing applied and the frequency range over which the modes have been searched [see e.g., @Chaplin2002]. In addition, the ‘toy model’ of @Stello12 was used to locate a few extra dipole modes. This fitting was performed using the MCMC Bayesian method by @HandbergCampante11; 2. The second team extracted the frequencies of individual modes as the centroids of the unsmoothed power spectral density (PSD) in narrow predefined windows, checked for consistency by fitting Lorentzian profiles to a number of modes [@beck2013]; 3. The third team modelled the power spectrum with a sum of many Lorentzians, performed a global fit using a maximum likelihood estimator (MLE), and calculated the formal uncertainties from the inverse Hessian matrix [@mathur2013]; 4. The fourth team derived proxies of the oscillation frequencies from a global fit based on global seismic parameters; all peaks with a height-to-background ratio larger than 8 were selected; radial and quadrupole modes were estimated from the fit of the large separation provided by the second-order asymptotic expansion for pressure modes [@mosser2011]; the dipole modes were obtained with the asymptotic expansion for mixed modes [@mosser2012b], with rotational splittings derived with the method of @goupil2013. The final set of 58 individual mode frequencies, including the multiplets due to rotation for the $l=1$ modes, consists only of those frequencies detected by at least two of the fitters. In order to obtain statistically consistent uncertainties for the mode frequencies we used the Bayesian Markov-Chain Monte Carlo (MCMC) fitting algorithm by @HandbergCampante11 for the peak bagging. 
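The two-step smooth-and-threshold strategy used by the first team can be sketched in a few lines. This is a deliberately simplified stand-in: the boxcar smoothing and the fixed threshold below are placeholder choices, not the 99%-level significance statistics for smoothed spectra actually used in the analysis.

```python
import numpy as np

def detect_peaks(power, n_bins, threshold):
    """Boxcar-smooth a power spectrum over n_bins frequency bins and
    flag bins exceeding a detection threshold. A simplified stand-in
    for the two-step procedure described above (heavy smoothing for
    low-inertia modes, lighter smoothing for high-inertia dipole modes).
    """
    kernel = np.ones(n_bins) / n_bins
    smoothed = np.convolve(power, kernel, mode="same")
    return smoothed > threshold  # boolean mask of candidate mode bins
```

Heavy smoothing (13 bins) suppresses the narrow peaks of long-lived, gravity-dominated modes, which is why a second, lighter pass (7 bins) is needed to recover them.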
The algorithm allows simultaneous fitting of the stellar granulation signal [@mathur2011] and all oscillation modes, each represented by a Lorentzian profile. However, due to the complexity of the frequency spectrum of KIC 4448777, we were not able to fit all modes simultaneously using a single method. In particular, the mixed modes with very high inertia, and hence very long mode lifetimes, have essentially undersampled frequency profiles in the spectrum. Fitting Lorentzian profiles to these modes is therefore unsuitable and can easily lead the fitting algorithm astray. We therefore treated the radial and quadrupole modes separately from the dipole modes. The radial and quadrupole modes were fitted as Lorentzian profiles using the MCMC approach. In this analysis we ignored any mixing of quadrupole modes and hence fitted only one quadrupole mode per radial order. For the dipole modes, which we were not able to fit as part of the MCMC method for the reasons explained above, we adopted the initial frequencies found from the smoothed spectrum by Team 1 as the final frequencies, and the scatter among the four fitters as a proxy for their uncertainties. As a sanity check of this approach we compared the uncertainties obtained by the MCMC fitting of the radial modes with the scatter between the different fitters for the same modes. We found that on average the ‘fitter scatter’ is within 17% of the MCMC uncertainty, and all fitter-scatter values are within a factor of two of the MCMC-derived uncertainty. In the above analysis each dipole mode was detected separately, independently of the azimuthal order $m$. As in @beck2014, we noticed that the $m=\pm1$ components of a given triplet are not equally spaced from the central $m=0$ mode. In the framework of perturbation theory, the splitting asymmetries correspond to second-order effects in the oscillation frequency that mainly account for the distortion caused by the centrifugal force. 
Here the asymmetry is small and, to first approximation, negligible compared to the rotational splitting itself, ranging from 0.3% to at most 12% (with a mean value of 6%) of the respective rotational splittings, with values comparable to the frequency uncertainties. In order to remove second-order perturbation effects, we used the generalized rotational splitting expression [e.g., @goupil2009]: $$\delta \nu_{n,l}=\frac{\nu_{n,l,m}-\nu_{n,l,-m}}{2m}.$$ The corresponding uncertainties have been calculated according to the general propagation formula for indirect measurements. Detailed investigations of the physical meaning of rotational splitting asymmetries, and of the possibility of deriving from them more stringent constraints on the internal rotational profile of oscillating stars, have been the subject of several papers [cf., @suarez], but they are beyond the scope of the present work. Table \[tab:frequencies\] lists the final set of frequencies together with their uncertainties, corresponding to the values obtained by the MCMC fitting procedure for radial and quadrupole modes and to the scatter in the results from the four fitters for dipole modes, their spherical degree, and the rotational splittings for 14 dipole modes. To measure the inclination of the star we used the above MCMC peak-bagging algorithm, restricting the fit to the strongest dipole modes with $m=0$ and imposing equal spacing of the $m=\pm1$ components. This calculation provided an inclination of $i=32.6^ {+5.0}_{-4.3}\deg$. As in @beck2012, the observed rotational splittings are not constant for consecutive dipole modes (see Fig. \[fig:spectrum\]b, showing the rotational splittings for the $l=1$ modes). Splittings are larger for modes with a higher inertia, which predominantly probe the inner radiative interior. This shows that the deep interior of the star is rotating faster than the outer layers. 
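The generalized splitting expression above, together with the standard propagation of independent component errors, can be written compactly as follows; the frequency values in the usage note are illustrative, not the measured $m=\pm1$ components.

```python
import math

def generalized_splitting(nu_plus, nu_minus, m=1):
    """Generalized rotational splitting: differencing the m = +m and
    m = -m components of a multiplet cancels even perturbation terms,
    such as the second-order centrifugal distortion."""
    return (nu_plus - nu_minus) / (2.0 * m)

def splitting_uncertainty(sigma_plus, sigma_minus, m=1):
    """Uncertainty from the general propagation formula, assuming
    independent errors on the two components."""
    return math.sqrt(sigma_plus ** 2 + sigma_minus ** 2) / (2.0 * m)
```

With the 0.011 $\mu$Hz uncertainty typical of a single dipole component, `splitting_uncertainty(0.011, 0.011)` gives about 0.0078 $\mu$Hz, matching the uncertainty quoted for most splittings in Table \[tab:frequencies\].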
The identification of several dipole modes and the use of the method by @mosser2012b, based on the asymptotic relation, allowed us to estimate the asymptotic period spacing $\Delta \Pi_1 = 89.87 \pm 0.07\,$s, which places this star on the low-luminosity red-giant branch, in agreement with the evolutionary phase predicted by the value of the observed $\Delta\nu$ [@bedding2011; @mosser2012b; @Stello13]. A first estimate of the asteroseismic stellar mass and radius can be obtained from the observed $\Delta\nu$ and $\nu_{\mathrm{max}}$ together with the value of $T_{\mathrm{eff}}$ [@KjeldsenBedding95; @kallinger2010; @belkacem2011; @miglio2012]. In particular, by using the scaling relations calibrated on solar values, we obtain $M_{ast}/{\mathrm M}_{\odot} = 1.12 \pm 0.09$ and $R_{ast}/{\mathrm R}_{\odot} = 4.13 \pm 0.11$. By using the scaling relations by @mosser2013a, calibrated on a large sample of observed solar-like stars, we obtain $M_{ast}/{\mathrm M}_{\odot} = 1.02 \pm 0.05$ and $R_{ast}/{\mathrm R}_{\odot} = 3.97 \pm 0.06$. From the above values we determine the asteroseismic surface gravity to be $\log g_{ast}=3.25\pm0.03$ dex. This value is in good agreement with that determined by the spectroscopic analysis (see Table \[tab:parameters\]). [cccccc]{} 0 & 159.842 $\pm$ 0.014 & n.a. & 2 & 174.005 $\pm$ 0.043 & ...\ 0 & 176.277 $\pm$ 0.018 & n.a. & 2 & 190.623 $\pm$ 0.034 & ...\ 0 & 192.907 $\pm$ 0.016 & n.a. & 2 & 207.551 $\pm$ 0.026 & ...\ 0 & 209.929 $\pm$ 0.014 & n.a. & 2 & 224.646 $\pm$ 0.011 & ...\ 0 & 226.831 $\pm$ 0.014 & n.a. & 2 & 241.630 $\pm$ 0.022 & ...\ 0 & 243.879 $\pm$ 0.013 & n.a. & 3 & 213.443 $\pm$ 0.015 & ...\ 0 & 261.215 $\pm$ 0.034 & n.a. & 3 & 230.423 $\pm$ 0.011 & ...\ 1 & 167.061 $\pm$ 0.011 & ... 
& 3 & 247.600 $\pm$ 0.017 & ...\ 1 & 185.069 $\pm$ 0.011 & 0.2025 $\pm$ 0.0078 & & &\ 1 & 187.402 $\pm$ 0.011 & 0.3565 $\pm$ 0.0078 & & &\ 1 & 199.986 $\pm$ 0.011 & 0.2955 $\pm$ 0.0078 & & &\ 1 & 201.864 $\pm$ 0.011 & 0.1755 $\pm$ 0.0078 & & &\ 1 & 204.528 $\pm$ 0.011 & 0.3505 $\pm$ 0.0078 & & &\ 1 & 208.571 $\pm$ 0.011 & ... & & &\ 1 & 211.913 $\pm$ 0.011 & 0.3765 $\pm$ 0.0078 & & &\ 1 & 215.699 $\pm$ 0.015 & 0.3425 $\pm$ 0.0078 & & &\ 1 & 218.299 $\pm$ 0.017 & 0.1555 $\pm$ 0.0078 & & &\ 1 & 220.814 $\pm$ 0.018 & 0.3235 $\pm$ 0.0078 & & &\ 1 & 229.276 $\pm$ 0.011 & 0.3820 $\pm$ 0.0160 & & &\ 1 & 233.481 $\pm$ 0.011 & 0.2900 $\pm$ 0.0078 & & &\ 1 & 235.783 $\pm$ 0.011 & 0.1975 $\pm$ 0.0140 & & &\ 1 & 239.463 $\pm$ 0.011 & 0.3450 $\pm$ 0.0160 & & &\ 1 & 244.385 $\pm$ 0.011 & ... & & &\ 1 & 249.417 $\pm$ 0.011 & 0.3200 $\pm$ 0.0210 & & &\ 1 & 252.377 $\pm$ 0.021 & ... & & &\ 1 & 252.661 $\pm$ 0.016 & ... & & &\ \[tab:frequencies\] Asteroseismic inversion ======================= The asteroseismic inversion is a powerful tool which makes it possible to estimate the physical properties of stars by solving integral equations expressed in terms of the observed data. Previous experience acquired in helioseismology by inverting solar data represents a useful background for asteroseismic inversion. Earlier attempts at generalizing the standard helioseismic differential methods to find the structure differences between the observed star and a model were applied to artificial data, with encouraging results, by @GK93, @roxburgh98, @Ber01. More recently, @dim04 was able to infer the internal structure of Procyon A below $0.3 R$ by inversion of real data comprising 55 low-degree p-mode frequencies observed in the star. A general conclusion from these previous investigations is that the success of the inversion depends strongly on the number of observed frequencies and the accuracy with which the model represents the star. 
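Before turning to the inversion formalism, we note that the scaling-relation estimates of mass, radius and surface gravity quoted at the end of the previous section are straightforward to reproduce. The sketch below assumes commonly adopted solar reference values ($\nu_{\mathrm{max},\odot}=3090\,\mu$Hz, $\Delta\nu_\odot=135.1\,\mu$Hz, $T_{\mathrm{eff},\odot}=5777$ K, $\log g_\odot=4.44$), which may differ slightly from those used in the calibrations cited above, so the numbers come out close to, but not identical with, the quoted values.

```python
import math

# Assumed solar reference values (not necessarily those adopted in the text)
NU_MAX_SUN, DNU_SUN, TEFF_SUN, LOGG_SUN = 3090.0, 135.1, 5777.0, 4.44

def scaling_mass(nu_max, dnu, teff):
    """M/Msun from the solar-calibrated scaling relation."""
    return ((nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4
            * (teff / TEFF_SUN) ** 1.5)

def scaling_radius(nu_max, dnu, teff):
    """R/Rsun from the solar-calibrated scaling relation."""
    return ((nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2
            * (teff / TEFF_SUN) ** 0.5)

def scaling_logg(nu_max, teff):
    """log g from nu_max and Teff alone."""
    return (LOGG_SUN + math.log10(nu_max / NU_MAX_SUN)
            + 0.5 * math.log10(teff / TEFF_SUN))
```

With $\nu_{\mathrm{max}}=219.75\,\mu$Hz, $\Delta\nu=16.96\,\mu$Hz and $T_{\mathrm{eff}}=4750$ K these give $M\approx1.08\,{\mathrm M}_\odot$, $R\approx4.09\,{\mathrm R}_\odot$ and $\log g\approx3.25$, consistent with the quoted values given the spread between calibrations.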
Stellar inversions to infer the internal rotational profiles of stars were first applied to artificial data of moderately rotating stars, such as $\delta$-Scuti stars [@goupil96] and white dwarfs [@kaw99]. Rotational inversion of simulated data of solar-like stars was studied by @lochard05 for the case of a subgiant model representative of $\eta$ Boo. They showed that mixed modes can improve the inversion results on the internal rotation of the star, while data limited to pure $l=1,2$ p modes are not sufficient to provide reliable solutions. Striking results on rotation have indeed been obtained by @deheuvels2012, who performed a detailed modeling of the red-giant star KIC 7341231, located at the bottom of the red-giant branch. They performed an inversion of the internal stellar rotation profile based on the observed rotational splittings of 18 mixed modes and found that the core is rotating at least five times faster than the envelope. More recently, @deheuvels2014 applied their techniques to six subgiants and low-luminosity red giants. 
The internal rotation of KIC 4448777 can be quantified by inverting the following equation [@gough1981], obtained by applying standard perturbation theory to the eigenfrequencies under the assumption of slow rotation: $$\delta\nu_{n,l} = \int_{0}^{R} {\cal K}_{n,l}(r) \frac{\Omega(r)}{2 \pi}\, dr +\epsilon_{n,l}\; , \label{eq:rot}$$ where $\delta\nu_{n,l}$ is the adopted set of splittings, $\Omega(r)$ is the internal rotation, assumed to be a function of the radial coordinate alone, $\epsilon_{n,l}$ are the uncertainties in the measured splittings and ${\cal K}_{n,l}(r)$ is the mode kernel function calculated from the unperturbed eigenfunctions for the mode $(n,l)$ of [*the best model*]{} of the star: $${\cal K}_{n,l}(r)=\frac{1}{I}\left[\xi_{r}^{2}+l(l+1)\xi_{h}^{2}-2\xi_{r}\xi_{h}-\xi_{h}^{2}\right]\rho r^{2}\; , \label{ker}$$ where $\xi_{r}$ and $\xi_{h}$ are the radial and horizontal components of the displacement vector, respectively, $\rho$ is the density and $R$ is the photospheric stellar radius, while the mode inertia is given by: $$I=\int_{0}^{R}\left[\xi_{r}^{2}+l(l+1)\xi_{h}^{2}\right]\rho r^2 dr\; . \label{inertia}$$ The properties of the inversion depend both on the mode selection $i\equiv(n,l)$ and on the observational uncertainties $\epsilon_{i}$ which characterize [*the mode set*]{} $i= 1, \ldots, N$ to be inverted. The main difficulty in solving Eq. \[eq:rot\] for $\Omega(r)$ arises from the fact that the inversion is an ill-posed problem: the observed splittings constitute a finite and quite small set of data, and the uncertainties in the observations prevent the solution from being determined with certainty. Thus, the choice of a suitable inversion technique is the first important step of an asteroseismic inverse analysis. Inversion procedure ------------------- There are two important classes of methods for obtaining estimates of $\Omega(r)$ from Eq. 
\[eq:rot\]: the optimally localized averaging (OLA) method, based on the original idea of @backus1970, and the regularized least-squares (RLS) fitting method [@ph62; @ti63]. Both methods give linear estimates of the function $\Omega(r)$, with results generally in agreement, as was demonstrated by @CD90 [@sekii97; @deheuvels2012]. Here we study and apply the OLA method and its variant form, which allows us to estimate a localized weighted average of the angular velocity $\bar{\Omega}(r_{0})$ at selected target radii $\{r_{0}\}$ by means of a linear combination of all the data: $$\frac{\bar{\Omega}(r_{0})}{2\pi}=\sum_{i=1}^{N}c_{i}(r_{0})\delta\nu_{i}= \int_{0}^{R} {K}(r_0,r) \frac{\Omega(r)}{2\pi}dr\; , \label{backus}$$ where $c_{i}(r_{0})$ are the inversion coefficients and $$K(r_{0},r)=\sum_{i=1}^{N} c_{i}(r_{0}){\cal K}_{i}(r)$$ are the averaging kernels. Here we adapted the code developed for solar rotation in @paterno1996 so that it can be applied to any evolutionary phase. Because of the ill-conditioned nature of the inversion problem, it is necessary to introduce a regularization procedure. By varying a trade-off parameter $\mu$, we look for the coefficients $c_{i}(r_{0})$ that minimize both the propagation of the uncertainties and the spread of the kernels: $$\int_{0}^{R}J(r_0,r)K(r_{0},r)^2 dr+\frac{\mu}{\mu_0} \sum_{i=1}^{N}\epsilon_{i}^2 c_{i}^2(r_{0})\; , \label{ls}$$ where $$\mu_0=\frac{1}{N}\sum_{i=1}^{N}\epsilon_{i}^2$$ under the constraint that $$\int_{0}^{R} K(r_{0},r) dr=1 \; .$$ $J(r_0,r)$ is a weight function, small near $r_0$ and large elsewhere, which has been taken to be $$J(r_0,r)=12(r-r_0)^2/R\;,$$ designed to build averaging kernels as close as possible to a Dirac function centered at $r_0$. The minimization of Eq. \[ls\] is equivalent to solving a set of $N$ linear equations for $c_i$. 
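The constrained minimization of Eq. \[ls\] can be sketched as a small linear-algebra problem. The sketch below assumes the mode kernels are supplied pre-sampled on a radial grid; the grid, the synthetic kernels and the simple gradient-based quadrature are illustrative choices, not the actual implementation of the @paterno1996 code.

```python
import numpy as np

def ola_coefficients(kernels, r, errors, r0, mu):
    """OLA coefficients c_i(r0): minimize the spread of the averaging
    kernel K(r0,r) = sum_i c_i K_i(r), weighted by J(r0,r) = 12(r-r0)^2/R,
    plus the regularization term (mu/mu0) sum_i eps_i^2 c_i^2, subject
    to the unimodularity constraint int K(r0,r) dr = 1.
    kernels: (N, M) array of mode kernels K_i sampled on the grid r."""
    R = r[-1]
    dr = np.gradient(r)                  # simple quadrature weights
    J = 12.0 * (r - r0) ** 2 / R         # weight function, small near r0
    mu0 = np.mean(errors ** 2)
    # W_ij = int J K_i K_j dr + (mu/mu0) eps_i^2 delta_ij
    W = (kernels * (J * dr)) @ kernels.T + (mu / mu0) * np.diag(errors ** 2)
    b = kernels @ dr                     # b_i = int K_i dr
    w = np.linalg.solve(W, b)
    return w / (b @ w)                   # rescale to enforce int K dr = 1

def ola_estimate(c, splittings, errors):
    """Localized average Omega(r0)/2pi and its standard deviation."""
    return c @ splittings, np.sqrt(np.sum(c ** 2 * errors ** 2))
```

By construction the coefficients satisfy $\sum_i c_i \int {\cal K}_i\,dr = 1$, so a uniform rotation profile is recovered exactly whatever the kernel set; the trade-off parameter $\mu$ balances kernel localization against error magnification, exactly as described above.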
The uncertainties of the solutions are the standard deviations calculated in the following way: $$\sigma\left[\frac{\bar{\Omega}(r_0)}{2\pi}\right]=\left[\sum_{i=1}^N c_i^2(r_0)\epsilon_i^2\right]^{1/2}\; . \label{deltaf}$$ The center of mass of the averaging kernels is: $$\bar{r}(r_{0})=\int_0^R {r}K(r_{0},r)dr. \label{cgravity}$$ We also considered the method in its variant form, described in @pijpers1992 and known as SOLA (Subtractive Optimally Localized Averaging), in which the averaging kernel is fitted to a Gaussian function $G(r_{0},r)$ of an appropriate width, centered at the target radius [@dimauro1998]. The two parameters, the width of the Gaussian target function and the trade-off parameter, are tuned to find an acceptable match of the averaging kernel to its target function and also to ensure an acceptably small error on the result from the propagation of the measurement errors. Therefore, the coefficients are determined by minimizing the following: $$\int_{0}^{R} R\left[K(r_{0},r)-G(r_{0},r)\right]^2 dr+\frac{\mu}{\mu_0}\sum_{i=1}^{N}\epsilon_i^2 c_{i}^2(r_{0}), \label{sola}$$ where $$G(r_{0},r)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(r-r_0)^2}{2\sigma^2}\right]$$ and $\sigma$ is chosen to fix the width of the Gaussian function. Evolutionary models of KIC 4448777 ================================== We first need to construct a [*best-fitting*]{} model of the star that satisfies all the observational constraints, in order to quantify the internal rotation and to understand how sensitive each observed splitting is to the different regions of the model. The theoretical structure models have been calculated using the ASTEC evolution code [@chris2008a], spanning the parameter space given in Table \[tab:parameters\] and following the procedure described in @dimauro2011. 
The input physics for the evolution calculations included the OPAL 2005 equation of state [@OPAL], OPAL opacities [@Igl96], and the NACRE nuclear reaction rates [@NACRE]. Convection was treated according to the mixing-length theory (MLT) [@bohm] and defined through the parameter $\alpha=\ell/H_p$, where $H_p$ is the pressure scale height; $\alpha$ was varied from $1.6$ to $1.8$. The initial heavy-element mass fraction $Z_i$ has been calculated from the iron abundance given in Table \[tab:parameters\] using the relation \[Fe/H\]$=\log(Z/X)-\log(Z/X)_{\odot}$, where $(Z/X)$ is the value at the stellar surface and the solar value was taken to be $(Z/X)_{\odot}=0.0245$ [@GN93]. Thus, we used $Z/X=0.04\pm0.01$ in the modeling. The resulting evolutionary tracks are characterized by the input stellar mass $M$, the initial chemical composition and the mixing-length parameter. For the models with values of $T_\mathrm{eff}$ and $\log g$ consistent with the spectroscopically observed values, we calculated the adiabatic oscillation frequencies using the ADIPLS code [@chris2008b]. We applied the surface-effect correction following the approach proposed by @kjeldsen2008, using the prescription of @brandao11, which takes into account that modes with high inertia suffer a smaller surface effect than p modes do. The correction applied to all calculated frequencies is then of the form: $$\nu^{mod}_{n,l}=\nu_{n,l}+a\frac{1}{Q_{n,l}}\left(\frac{\nu_{n,l}}{\nu_0}\right)^b$$ where $\nu^{mod}_{n,l}$ are the corrected frequencies, $Q_{n,l}$ is the inertia of the given mode normalized by the inertia of a radial mode of the same frequency, obtained by interpolation, $\nu_{n,l}$ are the best-model frequencies, $\nu_0$ is a constant frequency, usually chosen to be the frequency at maximum oscillation power, $a$ is the amplitude of the correction at $\nu_0$ and $b$ is an exponent assumed to be $4.90$, as calculated for the solar frequencies by @kjeldsen2008. 
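The inertia-scaled surface correction above is straightforward to apply; a minimal sketch follows, in which the correction amplitude used in the usage note is purely illustrative (the actual value of $a$ is fitted to the data).

```python
def surface_correction(nu, Q, a, nu0, b=4.90):
    """Near-surface frequency correction scaled by the normalized mode
    inertia Q_nl: high-inertia mixed modes, being less sensitive to the
    surface layers, receive a proportionally smaller correction."""
    return nu + (a / Q) * (nu / nu0) ** b
```

For two modes at the same frequency, one p-dominated ($Q\approx1$) and one gravity-dominated ($Q\approx5$), the latter receives one fifth of the correction, as intended by the prescription.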
The results of the fits between the observed star and the models were evaluated according to the total $\chi^2$ between the observed $\nu^{obs}_i$ and calculated $\nu^{mod}_i$ values of the individual oscillation frequencies as: $$\chi^2=\frac{1}{N}\sum_1^N\left(\frac{\nu^{obs}_i-\nu^{mod}_{i}}{\epsilon_i}\right)^2,$$ where $\epsilon_i$ are the uncertainties on the observed frequencies. [lccc]{} $M/{\mathrm M}_{\odot}$ &$1.02\pm0.05\tablenotemark{a}$ & 1.02 & 1.13\ Age (Gyr) & -&8.30& 7.24\ $T_{\mathrm{eff}}$ (K) &$4750\pm250\tablenotemark{b}$& 4800 & 4735\ $\log g$ (dex) &$3.5\pm0.5\tablenotemark{b}$& 3.26 & 3.27\ $R/{\mathrm R}_{\odot}$ &$3.97\pm0.06\tablenotemark{a}$ &3.94 & 4.08\ $L/{\mathrm L}_{\odot}$ & - &7.39 & 7.22\ $Z_{i}$ & - &0.015 & 0.022\ $X_{i}$ & - & 0.69 & 0.69\ $[\mathrm{Fe/H}]$ & $0.23\pm0.12\tablenotemark{b}$ &$-0.04$ & $0.13$\ $r_{cz}/R$ & - &0.15 & 0.14\ $\alpha_{MLT}$ &- &1.80&1.80\ $\Delta \nu\, (\mu$Hz) & $16.96\pm0.03$&$16.97$& $16.93$ \[tab:fitted\] In Table \[tab:fitted\] we give a comprehensive set of stellar properties for the two best-fitting models compared to observations of KIC 4448777. Fig. \[fig:tracks\] shows evolutionary tracks plotted in a Hertzsprung-Russell diagram for the two best-fitting models. ![Evolutionary tracks plotted in an H-R diagram. Black dots indicate the two models which best reproduce the observations. The rectangle indicates the 1 $\sigma$ error box of the observed $\log g$ and $T_\mathrm{eff}$.[]{data-label="fig:tracks"}](plot.logT-logg-art.eps){height="8cm"} The location of the star in the H-R diagram identifies KIC 4448777 as being at the beginning of the ascending red-giant branch. It has a small degenerate helium core, having exhausted its central hydrogen, and is now in the shell-hydrogen-burning phase. 
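The model-selection statistic defined at the beginning of this section is simply a mean squared, uncertainty-normalized frequency mismatch; for completeness, a one-function sketch:

```python
def frequency_chi2(nu_obs, nu_mod, err):
    """Total chi^2 used to rank candidate models: the mean squared
    mismatch between observed and computed frequencies, each term
    normalized by the observational uncertainty."""
    return sum(((o - m) / e) ** 2
               for o, m, e in zip(nu_obs, nu_mod, err)) / len(nu_obs)
```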
The hydrogen abundance as a function of the fractional mass and radius, plotted for one of the selected models of KIC 4448777, shows the extent of the core, with a radius $r_c=0.01R$, and the location of the base of the convective zone (Fig. \[Xr\]). The outer convective zone is quite deep, reaching down to about $r_{cz}\simeq 0.15R$. It can be noticed that Model 2, during the main-sequence phase, develops a convective core, which lasts almost until the exhaustion of hydrogen at the center. The higher metallicity of Model 2, in comparison to Model 1, leads to a higher opacity, so one would expect a lower luminosity and no convective core in this evolutionary phase. However, in Model 2 the quite low hydrogen abundance, which raises the mean molecular weight, acts to increase the luminosity, again favoring the development of a convective core. ![Hydrogen content in Model 2 of KIC 4448777. The base of the convective envelope, located at $m_{cz}=0.23M$ and $r_{cz}= 0.14R$, is shown by the dashed line.[]{data-label="Xr"}](Xabundance-mass-radius.eps){height="8cm"} As shown in the propagation diagram obtained for Model 1 in Fig. \[prop\], the huge difference in density between the core region and the convective envelope causes a large value of the buoyancy frequency in the core, producing well-defined acoustic and gravity-wave cavities with modest interaction between p and g modes. ![Propagation diagram from the center to $ r=0.3\,R$ for Model 1. The solid black line represents the buoyancy frequency $N$. The dashed line represents the Lamb frequency $S_{l}$ for $l=1$.[]{data-label="prop"}](Fig.prop1.eps){height="8cm"} Figure \[fig:echelle\] shows the échelle diagram obtained for the two models. The results show, as explained in the previous sections, that the observed modes are $l=0$ pure acoustic modes and $l=1,2,3$ g-p mixed modes. 
Several non-radial mixed modes have a very low inertia, hence they propagate in the low-density region, namely the acoustic cavity, and behave like p modes. Most of the $l=1$ mixed modes have a quite high inertia, which means that they propagate in the gravity-wave cavity in the high-density region, although the mixing with a p mode enhances their amplitude and hence ensures that they can be observed at the surface. In the échelle diagram these gravity-dominated modes clearly depart from the regular solar-like pattern. We found agreement between the observed and theoretical frequencies of the two selected models within 4$\sigma$ errors, with $\chi^2=45$ for Model 1 and $\chi^2=61$ for Model 2; we notice, however, that Model 2 better reproduces the spectroscopic observation of the iron abundance. ![Échelle diagrams for Models 1 (upper panel) and 2 (lower panel) of Table \[tab:fitted\]. The filled symbols show the observed frequencies. The open symbols show computed frequencies. Circles are used for modes with $l=0$, triangles for $l=1$, squares for $l=2$ and diamonds for $l=3$. Observed splittings of $l=1$ modes can be seen as triplets or doublets of black triangles. The size of the open symbols indicates the relative surface amplitude of oscillation of the modes.[]{data-label="fig:echelle"}](plot.echellecorr.0102.Z15.X69.586.tutti.eps "fig:"){height="8cm"} ![Échelle diagrams for Models 1 (upper panel) and 2 (lower panel) of Table \[tab:fitted\]. The filled symbols show the observed frequencies. The open symbols show computed frequencies. Circles are used for modes with $l=0$, triangles for $l=1$, squares for $l=2$ and diamonds for $l=3$. Observed splittings of $l=1$ modes can be seen as triplets or doublets of black triangles. 
The size of the open symbols indicates the relative surface amplitude of oscillation of the modes.[]{data-label="fig:echelle"}](plot.echellecorr.0113.Z22.X0.69.778.tutti.eps "fig:"){height="8cm"} Results of the asteroseismic inversions ======================================= ![Individual kernels calculated for Model 1 (on the left) and Model 2 (on the right) according to Eq. \[ker\], corresponding to two observed $l=1$ modes with frequencies of $187.40\,\mu$Hz and $201.86\,\mu$Hz. The top panels show a mode with higher inertia than the mode in the bottom panels. In each panel the corresponding theoretical oscillation frequency, harmonic degree $l$, and radial order $n$ are indicated.[]{data-label="kernl1"}](kernelnl-model1-n4800-rev1lres.eps "fig:"){width="9.6cm"} ![Individual kernels calculated for Model 1 (on the left) and Model 2 (on the right) according to Eq. \[ker\], corresponding to two observed $l=1$ modes with frequencies of $187.40\,\mu$Hz and $201.86\,\mu$Hz. The top panels show a mode with higher inertia than the mode in the bottom panels. In each panel the corresponding theoretical oscillation frequency, harmonic degree $l$, and radial order $n$ are indicated.[]{data-label="kernl1"}](kernelnl-model2-kersola4800-rev1lres.eps "fig:"){width="9.6cm"} Once the best model has been selected, it is possible to invert Eq. \[eq:rot\] following the procedure described in Section 4. For this we used the 14 rotational splittings of the dipole modes given in Table \[tab:frequencies\]. Kernels calculated for Model 1 and Model 2, corresponding to two observed modes with different inertia, are shown as an example in Fig. \[kernl1\] [see also @goupil96]. It is interesting to note that kernels calculated for the two different models, but corresponding to the same frequency, show similar amplitudes in the interior. ![Internal rotation of KIC 4448777 at different depths as obtained by the OLA inversion based on the two best-fitting models. 
Vertical error bars are 2 standard deviations. The dashed line indicates the location of the inner edge of the H-burning shell. The shaded area indicates the region inside the star in which it was not possible to determine any solutions. []{data-label="rot"}](ome-log-2400-theta0.001.eps){height="8cm"} The inferred rotation rate obtained by applying the OLA technique for the two models is shown in Fig. \[rot\], where the points indicate the angular velocity against the selected target radii $\{r_0\}$. The radial spatial resolution is the interquartile range of the averaging kernels and gives a measure of the localization of the solution. In the probed regions the distance between the center of mass and the target location is smaller than the width of the averaging kernel (see Eq. \[cgravity\]). To show more clearly the errors in the inferred internal rotation, the vertical bars are 2 $\sigma$, $\sigma$ being the standard deviation given by Eq. \[deltaf\]. Trade-off parameters in the range $\mu=0$ to $10$ have been tried in order to better localize the kernels. A good compromise between localization and error magnification in the solution has been obtained using $\mu=0.001$ for both Model 1 and Model 2. The inversion parameter has been chosen by inverting a known simple rotational profile. We were able to estimate the variation of the angular velocity with radius in the inner interior with a spatial resolution of $\Delta r=0.001R$, thanks to the very localized averaging kernels at different radii. Figure \[ker1\] shows OLA averaging kernels localized at several target radii $r_0$, obtained with a trade-off parameter $\mu=0.001$, for the inversion given in Fig. \[rot\] using Model 2. We find an angular velocity in the core at $r=0.005R$ of $\Omega_{c\,\mathrm{OLA}}/2\pi=749\pm11$ nHz with Model 1, well in agreement with the value obtained with Model 2, which is $\Omega_{c\,\mathrm{OLA}}/2\pi=744\pm12$ nHz. 
The rotation appears to be constant inside the core and decreases smoothly from the edge of the helium core through the hydrogen-burning shell with increasing radius. In Fig. \[cum-tuttiola\] we plot the OLA cumulative integrals of the averaging kernels centered at different locations in the inner interior, to show in which region of the star the solutions are most sensitive. The cumulative kernels corresponding to solutions centered below and above the H-burning shell look quite similar. The leakage from the core explains why the OLA results show an almost constant rotation in the He core and the H-burning shell. We note that it is not possible to find localized solutions for $r_0>0.01R$. Attempts to concentrate solutions above this point return averaging kernels which suffer from very large leakage from both the deep core and the superficial layers, as shown in Fig. \[ker1\]. ![OLA averaging kernels localized at several target radii $r_0$ obtained with a trade-off parameter $\mu=0.001$ for the inversion given in Fig. \[rot\] by using Model 2.[]{data-label="ker1"}](kernelav113-2LUG-2400-800.theta0.001.eps){height="12cm"} ![ Cumulative integrals of the averaging kernels centered at different locations in the inner interior as obtained by the OLA inversion using Model 2. The dashed black line indicates the location of the inner edge of the H-burning shell.[]{data-label="cum-tuttiola"}](cumulative-OLA-16LUG-n2400-theta0.001-tuttir0log.eps){height="8cm"} However, due to the p-mode contributions of some of the modes considered, reliable results can be found above $0.9R$, although with fairly low weight, as shown by the kernel centered around $r_0=0.98R$ in Fig. \[ker1\]. The angular velocity reaches a mean value in the convective envelope of $\Omega_{s\,\mathrm{OLA}}/2\pi=68\pm22$ nHz with Model 1, in good agreement with $\Omega_{s\,\mathrm{OLA}}/2\pi=60\pm14$ nHz obtained with Model 2. 
The angular velocity value below the surface and the significance of this result can be investigated by considering the cumulative integral of the averaging kernels $\int_0^rK(r_0,r)dr$, in order to understand where the kernels are most sensitive inside the star. Fig. \[cum-OLA\] shows that the surface averaging kernels provide a weighted average of the angular velocity of the layers $r>0.2R$, in most of the convective envelope, and not an estimate of the rotation at the surface. This is due to the fact that the eigenfunctions of the modes considered here are too similar to one another to build averaging kernels localized at different radii in the acoustic cavity. Moreover, Fig. \[cum-OLA\] shows that the present set of data does not allow us to appreciate the difference between the cumulative integral of the surface kernels calculated for the two models, hence the results obtained at the surface do not depend on the stellar model chosen. The detection of a larger number of modes trapped in the convective envelope would have made it possible to study the upper layers with a higher level of confidence. ![Cumulative integral of the surface averaging kernel (centered at $r=0.98R$) obtained with the OLA method using Model 1 (red line) and Model 2 (black line).[]{data-label="cum-OLA"}](cumulative-surface-OLA-16LUG-n800-theta0.001.eps){height="10cm"} The solutions inferred by the SOLA method, plotted against the target radius, are shown in Fig. \[rotsola\]. The radial resolution is equal to the width of the target Gaussian kernels, while the uncertainty in the solutions is plotted as 2 standard deviations, as for the OLA results. The values obtained for the angular velocity in the core at $r=0.004R$ are $\Omega_{c\,\mathrm{SOLA}}/2\pi = 754\pm12$ nHz with Model 1 and $\Omega_{c\,\mathrm{SOLA}}/2\pi = 743\pm13$ nHz with Model 2, which are well in agreement with the values obtained by the OLA method.
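For illustration, a SOLA step can also be written as a small linear solve, in the same toy setting as above (a sketch under our own conventions, with splittings in $\Omega/2\pi$ units; this is not the code used in the paper): the combination coefficients are chosen so that the averaging kernel matches a Gaussian target of prescribed width centred on $r_0$, with the trade-off parameter $\mu$ damping error magnification and a Lagrange multiplier enforcing unimodularity.

```python
import numpy as np

def sola_invert(r, kernels, splittings, errors, r0, width=0.05, mu=1.0):
    """Toy SOLA step: fit the averaging kernel to a Gaussian target at r0."""
    dr = np.gradient(r)
    target = np.exp(-((r - r0) / width) ** 2)
    target /= target @ dr                      # unimodular target function
    W = (kernels * dr) @ kernels.T             # overlap matrix int K_i K_j dr
    t = kernels @ (target * dr)                # projections onto the target
    k = kernels @ dr                           # kernel integrals (constraint)
    n = len(splittings)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = W + mu * np.diag(np.asarray(errors, float) ** 2)
    M[:n, n] = M[n, :n] = k
    rhs = np.append(t, 1.0)
    c = np.linalg.solve(M, rhs)[:n]
    omega = c @ np.asarray(splittings, float)
    sigma = np.sqrt(c ** 2 @ np.asarray(errors, float) ** 2)
    return omega, sigma, c
```

A large $\mu$ flattens the averaging kernel (smaller error, poorer localization), which is the trade-off discussed in the text.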
On the other hand, and differing from the OLA results, above $r=0.01R$ the angular velocity appears to drop rapidly, indicating an almost constant rotation from the edge of the core to the surface. ![Internal rotation of KIC 4448777 at different target radii $r_0$ as obtained by the SOLA inversion for Model 1 and 2. The trade-off parameter used is $\mu=1$. Vertical error bars are 2 standard deviations. The dashed line indicates the location of the inner edge of the H burning shell. The shaded area indicates the region inside the star in which it was not possible to determine any solutions.[]{data-label="rotsola"}](ome-solalog-n2400-theta1.ps){height="8cm"} The SOLA technique produced reliable results only for $r<0.01R$, because at larger radii we failed to fit the averaging kernels to the Gaussian target function, as required by the method. Figure \[kersola\] shows averaging kernels and Gaussian target functions for the solutions plotted in Fig. \[rotsola\]. Here we used a trade-off parameter $\mu=1$, but we notice that the solutions appear to be sensitive to small changes of the trade-off parameter $(0<\mu<10)$ only below the photosphere, with variations of up to 20% in the results. The solutions in the core are not sensitive to the same changes of $\mu$ and the averaging kernels remain well localized. It is clear that only solutions related to averaging kernels which are well localized and close to the target Gaussian functions can be considered reliable. ![Averaging kernels (in red) plotted together with the Gaussian target functions (in green) for the SOLA inversion of KIC 4448777, adopting Model 2 and a trade-off parameter $\mu=1$.[]{data-label="kersola"}](kervel113-sola-n2400-800-theta1.eps){width="12cm"} In Fig. \[cum-tutti\] we plot the SOLA cumulative integrals of the averaging kernels corresponding to solutions at different locations in the interior.
We find that the core cumulative kernel is very well localized, and the cumulative kernel for the solution at $r=0.01R$ is localized at the $70\%$ level, although contaminated by the layers of the convective envelope. Cumulative kernels for solutions with $r>0.01R$ appear to be sensitive to the radiative region with a fraction that decreases quickly with increasing target radius, while the contamination from the outer layers appears high. Thus, we can conclude that while the OLA solutions are strongly affected by the core (Fig. \[cum-tuttiola\]), the SOLA solutions appear more polluted by the signal from the surface layers. Nevertheless, the averaging kernel and cumulative kernel for the SOLA solution at $r=0.01R$ are better localized than the OLA ones, indicating that the decrease occurring around the base of the H-burning shell is reliable. ![Cumulative integrals of the averaging kernels centered at different locations in the core obtained with the SOLA method using Model 2. The dashed line indicates the location of the inner edge of the H-burning shell.[]{data-label="cum-tutti"}](cumulative-sola-tuttir0.ps){height="12cm"} The angular velocity below the surface at $r_0=0.85R$ obtained with the SOLA method is $\Omega_{s\,\mathrm{SOLA}}/2\pi=28\pm14$ nHz with Model 1 and $\Omega_{s\,\mathrm{SOLA}}/2\pi=11\pm16$ nHz with Model 2. ![Cumulative integral of the surface averaging kernel (centered at $r_0=0.85R$) obtained with the SOLA method using Model 1 (red line) and Model 2 (black line).[]{data-label="cum-SOLA"}](cumulative-surf-3lug-2400.ps){height="8cm"} We can compare the surface cumulative integrals of the averaging kernels $\int_0^rK(r_0,r)dr$ as obtained for the two inversion methods and plotted in Figs. \[cum-OLA\] and \[cum-SOLA\].
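Localization statements like the $70\%$ figure above can be quantified directly from a sampled averaging kernel. The following sketch (an illustration with our own names and grid, not the authors' code) computes the cumulative integral $\int_0^r K(r_0,r')\,dr'$ and the fraction of kernel weight falling inside a given radius interval:

```python
import numpy as np

def cumulative_kernel(r, K):
    """Cumulative integral C(r) = int_0^r K(r0, r') dr' of an averaging
    kernel K sampled on the grid r.  A well-localized, unimodular kernel
    rises sharply from ~0 to ~1 around its target radius."""
    dr = np.gradient(r)
    return np.cumsum(K * dr)

def localized_fraction(r, K, a, b):
    """Fraction of a unimodular kernel's weight lying in the interval [a, b]."""
    dr = np.gradient(r)
    mask = (r >= a) & (r <= b)
    return (K[mask] * dr[mask]).sum()
```

Applied to a kernel sharply peaked near $r_0$, `localized_fraction` returns nearly 1 for an interval around the peak and nearly 0 elsewhere, which is exactly the diagnostic used to judge whether a solution is trustworthy.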
We found that the cumulative kernel integral for the near-surface SOLA inversion appears marginally contaminated by the kernels of the regions $r<0.2R$, but the results can still be considered in good agreement with the OLA ones. In the SOLA averaging kernels it was not possible to suppress efficiently the strong signal of the modes concentrated in the core. We conclude that the angular velocity value obtained at the surface by applying the SOLA method represents a weighted average of the angular velocity of the entire interior. As a consequence, we think that, in this case, the OLA result should be preferred as a probe of the rotation in the convective envelope. A strong rotational gradient in the core? ========================================= The results obtained in Section 6 raise the question of whether a sharp gradient exists in the rotational profile at the edge of the core. The evolution of theoretical models which assume conservation of angular momentum in the stellar interior predicts that during the post-main-sequence phase a sharp rotation gradient, localized near the H-burning shell, should form between a fast-spinning core and a slow-rotating envelope [see, e.g. @ceillier2013; @marques2013]. However, if an instantaneous angular momentum transport mechanism is at work, the whole star should rotate as a solid body. The general understanding is that the actual stellar rotational picture should be something in between. The occurrence of a sharp rotation gradient in post-main-sequence stars has already been investigated by other authors [see, e.g., @deheuvels2014], without reaching a definitive conclusion. In order to understand the differences in the inversion results at the edge of the core obtained by using the OLA and the SOLA methods, we tested both techniques by trying to recover simple input rotational profiles by computing and inverting artificial rotational splittings.
In order to accomplish this task we used the forward seismological approach as described in @dimauro2003 for the case of Procyon A. We computed the expected frequency splittings for several very simple rotational profiles by solving Eq. \[eq:rot\], and adopting the kernels computed from the models used in the present work. Each set of data includes 14 artificial rotational splittings corresponding to the modes observed for KIC 4448777. A reasonable error of $7.8$ nHz, equal for each rotational splitting, has been adopted (see Table \[tab:frequencies\]). The sets of artificial splittings were then inverted following the procedures described in the above sections. In our tests we used four different input rotational laws: a) $\Omega(r)=\Omega_0$; b) $\Omega(r) = \Omega_c $ for $r \leq r_c$ and $ \Omega(r)=\Omega_e$ for $r > r_c$; c) $ \Omega(r)=\Omega_0 \cos (2 \pi A \,r)$ where $A$ is a constant; d) $ \Omega(r)=\Omega_0+\Omega_1 r+\Omega_2 r^2+\Omega_3 r^3$. Figure \[profile\] shows the input rotational profiles with the results obtained by OLA and SOLA inversions superimposed, for the four cases considered, with the use of Model 2. Similar solutions have been obtained with Model 1. It should be pointed out that, although the panels show inversion results obtained along the entire profile to strengthen the potential of the inversion techniques, as already explained in Sec. 6, the considered set of dipolar modes allows us to probe properly only the regions where the modes are mostly localized. We find that both the OLA and SOLA techniques are able to reproduce well the angular velocity in the core at $0.005R$ and in the convective zone for $r\geq0.4R$, producing results in good mutual agreement and independent of the model. Both techniques also recover well rotational profiles in which the convective envelope, or the entire interior, rotates rigidly.
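The forward step described above is a simple quadrature. As a sketch (our own toy convention: rates expressed directly as $\Omega/2\pi$ in nHz, so the $2\pi$ of Eq. \[eq:rot\] is absorbed and the kernels are taken unimodular), the artificial splittings for the four input laws quoted in the figure caption can be generated as:

```python
import numpy as np

def forward_splittings(r, kernels, omega_over_2pi):
    """Forward problem: delta-nu_i = int K_i(r) [Omega(r)/2pi] dr for each
    mode kernel (rows of `kernels`), sampled on the radial grid r."""
    dr = np.gradient(r)
    return kernels @ (omega_over_2pi * dr)

# The four input rotational laws used in the tests (rates in nHz, r in units of R):
r = np.linspace(0.0, 1.0, 5001)
flat   = np.full(r.size, 450.0)                               # case a)
step   = np.where(r <= 0.02, 750.0, 100.0)                    # case b)
cosine = 740.0 * np.cos(2.0 * np.pi * 0.5 * r)                # case c)
poly   = 832.0 - 3515.0 * r + 4992.0 * r**2 - 2202.0 * r**3   # case d)
```

Feeding such artificial splittings back through the inversion, as done in the text, checks whether the kernel set can recover the input law.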
Figures \[profile\]b), c) and d) show that both techniques are able to measure the maximum gradient strength of an internal rotational profile with a steep discontinuity, both in the case of a gradient decreasing and one increasing toward the surface. However, the method fails to localize discontinuities with an accuracy better than $0.1R$, due to the progressively increasing errors in spatial resolution at increasing distance from the core. In the case of the OLA technique the inverted rotational profile decreases smoothly towards the surface even in the case of a step-like input rotational law. On the other hand, with the SOLA inversion we find it more difficult to localize averaging kernels at target radii near the layer of strong gradient of angular velocity. See, for example, the results plotted in Figs. \[profile\]b), c), d), obtained for internal profiles characterized by discontinuities with different slopes. In addition, we notice that the inversion of the observed set of dipolar modes is not able to recover more complicated rotational profiles (as in the case of Fig. \[profile\]d). ![Internal rotation of Model 2 at different radii as obtained by the OLA and SOLA inversion of artificial rotational splittings calculated from four simple input rotational profiles shown by dashed lines. For solution above $r=0.1R$ the horizontal error bars correspond to $10\%$ of the radial spatial resolution defined by the averaging kernels. Panel a) shows profile $\Omega(r)/2\pi=450$nHz; Panel b) shows profile $\Omega(r)/2\pi=750$nHz for $r\leq0.02$ and $\Omega(r)/2\pi=100$nHz for $r>0.02$; Panel c) shows profile $ \Omega(r)/2\pi=740 \cos (2 \pi \, 0.5\, r)$nHz; Panel d) shows profile $ \Omega(r)/2\pi=832-3515 r+4992 r^2-2202 r^3$nHz.
The shaded area indicates the region inside the star in which it was not possible to determine any solutions.[]{data-label="profile"}](ome-flat450log.ps "fig:"){width="8cm"} ![Internal rotation of Model 2 at different radii as obtained by the OLA and SOLA inversion of artificial rotational splittings calculated from four simple input rotational profiles shown by dashed lines. For solution above $r=0.1R$ the horizontal error bars correspond to $10\%$ of the radial spatial resolution defined by the averaging kernels. Panel a) shows profile $\Omega(r)/2\pi=450$nHz; Panel b) shows profile $\Omega(r)/2\pi=750$nHz for $r\leq0.02$ and $\Omega(r)/2\pi=100$nHz for $r>0.02$; Panel c) shows profile $ \Omega(r)/2\pi=740 \cos (2 \pi \, 0.5\, r)$nHz; Panel d) shows profile $ \Omega(r)/2\pi=832-3515 r+4992 r^2-2202 r^3$nHz. The shaded area indicates the region inside the star in which it was not possible to determine any solutions.[]{data-label="profile"}](ome-step0.02log.ps "fig:"){width="8cm"} ![Internal rotation of Model 2 at different radii as obtained by the OLA and SOLA inversion of artificial rotational splittings calculated from four simple input rotational profiles shown by dashed lines. For solution above $r=0.1R$ the horizontal error bars correspond to $10\%$ of the radial spatial resolution defined by the averaging kernels. Panel a) shows profile $\Omega(r)/2\pi=450$nHz; Panel b) shows profile $\Omega(r)/2\pi=750$nHz for $r\leq0.02$ and $\Omega(r)/2\pi=100$nHz for $r>0.02$; Panel c) shows profile $ \Omega(r)/2\pi=740 \cos (2 \pi \, 0.5\, r)$nHz; Panel d) shows profile $ \Omega(r)/2\pi=832-3515 r+4992 r^2-2202 r^3$nHz. 
The shaded area indicates the region inside the star in which it was not possible to determine any solutions.[]{data-label="profile"}](ome-cos0.0344.log.ps "fig:"){width="8cm"} ![Internal rotation of Model 2 at different radii as obtained by the OLA and SOLA inversion of artificial rotational splittings calculated from four simple input rotational profiles shown by dashed lines. For solution above $r=0.1R$ the horizontal error bars correspond to $10\%$ of the radial spatial resolution defined by the averaging kernels. Panel a) shows profile $\Omega(r)/2\pi=450$nHz; Panel b) shows profile $\Omega(r)/2\pi=750$nHz for $r\leq0.02$ and $\Omega(r)/2\pi=100$nHz for $r>0.02$; Panel c) shows profile $ \Omega(r)/2\pi=740 \cos (2 \pi \, 0.5\, r)$nHz; Panel d) shows profile $ \Omega(r)/2\pi=832-3515 r+4992 r^2-2202 r^3$nHz. The shaded area indicates the region inside the star in which it was not possible to determine any solutions.[]{data-label="profile"}](ome-curvastep-cg.ps "fig:"){width="8cm"} We can conclude from our tests that with a small set of only dipolar modes, we have sufficient information to study the general properties of the internal rotational profile of a red giant, mainly the maximum gradient strength and, with some uncertainty, also the approximate radial location of the peak gradient. With the actual set of modes we are not able to distinguish between a smooth or a sharp rotation gradient inside our star. Internal rotation by other methods ================================== As we have pointed out in the above section, asteroseismic inversion of a set of 14 dipole-mode rotational splittings enables an estimate of the angular velocity only in the core and, to some degree, in some part of the radiative interior and in the convective envelope of the red-giant star. Here we explore the possibility to compare our inversion results and to get additional conclusions on the rotational velocity of the interior by applying different methods. 
This can be achieved by separating, in our splitting data, the contribution of the rotation in the radiative region from that of the convective zone. Recently, @goupil2013 proposed a procedure to investigate the internal rotation of red giants from the observations. They found that an indication of the average rotation in the envelope and the radiative interior can be obtained by estimating the trapping of the observed modes through the parameter $\zeta=I_g/I$, the ratio between the inertia in the gravity-mode cavity and the total inertia (see Eq. \[inertia\]). Because of the sharp decrease of the Brunt-Väisälä frequency at the edge of the H burning shell, the gravity cavity corresponds to the radiative region, while the convective envelope essentially corresponds to the acoustic resonant cavity (see Fig. \[prop\]). Thus, @goupil2013 demonstrated that, for $l=1$ modes, Eq. \[eq:rot\] can be written: $$2\pi\delta\nu_{n,l}=0.5 \zeta\Omega_{g}+(1-\zeta)\Omega_{p}=\zeta(0.5\Omega_{g}-\Omega_{p})+\Omega_{p}, \label{linear}$$ where $\Omega_{g}$ is the angular velocity averaged over the layers enclosed within the radius $r_{g}$ of the gravity cavity, while $\Omega_{p}$ is the mean rotation in the acoustic mode cavity. Equation \[linear\] shows that a linear relation approximately exists between the observed rotational splittings and the trapping of the corresponding modes. The parameter $\zeta$ has been computed for both Model 1 and Model 2 from the relevant eigenfunctions. In order to ascertain the model independence of the results we also computed $\zeta$ by adopting the approximate expression given by @goupil2013, based on the observed values of $\nu_{max}$, $\Delta \nu$ and $\Delta \Pi_1$ (see Sec. 2). Figure \[zeta\] shows the linear dependence of the observed rotational splittings on $\zeta$ for both models and for Goupil’s approximation.
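Since Eq. \[linear\] is linear in $\zeta$, a straight-line fit of the observed splittings against $\zeta$ recovers the two mean rates directly: in frequency units $\delta\nu = a\zeta + b$ with slope $a = 0.5\,\Omega_g/2\pi - \Omega_p/2\pi$ and intercept $b = \Omega_p/2\pi$, hence $\Omega_g/2\pi = 2(a+b)$. A minimal sketch (hypothetical synthetic values, not the paper's data):

```python
import numpy as np

def core_envelope_rates(zeta, splittings):
    """Fit delta-nu = a*zeta + b (Eq. [linear] divided by 2*pi) and return
    (Omega_g/2pi, Omega_p/2pi): slope a = 0.5*Og - Op, intercept b = Op."""
    a, b = np.polyfit(zeta, splittings, 1)   # degree-1 polynomial fit
    return 2.0 * (a + b), b
```

Usage with synthetic splittings generated from assumed rates (750 nHz core, 30 nHz envelope) recovers those rates exactly, confirming the algebra; with real data the scatter about the line feeds the quoted uncertainties.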
We deduced the mean rotational velocity in the gravity cavity ($0<r<0.15R$), $\Omega_g/2\pi$, as well as the mean rotational velocity in the acoustic cavity, $\Omega_p$, by fitting a relation of the type $\delta\nu = (a\zeta + b)$ to the observations, so that $\Omega_g/2\pi = 2(a+b)$ and $\Omega_p/2\pi = b$. We obtained for Model 1 $\Omega_g/2\pi= 762.0\pm31.82$ nHz and $ \Omega_p/2\pi=10.06\pm9.64$ nHz; for Model 2 $\Omega_g/2\pi= 756.48\pm29.86$ nHz and $ \Omega_p/2\pi=38.45\pm9.03$ nHz. The results obtained by adopting Goupil’s approximation for $\zeta$ are $\Omega_g/2\pi= 748.90\pm36.30$ nHz and $ \Omega_p/2\pi= - 60.95\pm11.51$ nHz. It is clear that determining the mean rotation in the convective envelope of KIC 4448777 is very difficult with this method, and we can only conclude that the value of $\Omega_p$ is certainly much lower than the angular velocity of the core. ![Observed splittings (Model 1: green bullets, Model 2: red stars, Goupil’s approximation: black squares) plotted as a function of the parameter $\zeta$, which indicates the trapping of the modes. The solid lines show a linear fit of the type $\delta\nu = (a\zeta + b)$.[]{data-label="zeta"}](zita2.eps){width="10cm"} Another procedure to assess the angular velocity in the interior is to search for the rotation profile that gives the closest match to the observed rotational splittings by performing a least-squares fit. To do that, the stellar radius was cut into $K$ regions delimited by the radii $0=r_0<r_1<r_2<\ldots<r_K=R$. Thus, Eq. \[eq:rot\] can be modified in the following way: $$2\pi\delta\nu_{i}=\int_{0}^{R} {\cal K}_{i}(r) \Omega(r)dr= \sum_{k=1}^{K}S_{i,k}\Omega_k \label{averaged}$$ where $S_{i,k}=\int_k{\cal K}_{i}(r)dr$ and ${\cal K}_i(r)$ are given by Eq. \[ker\] for each mode $i$ of the set of data used. $\Omega_k$ represents an average value of the angular velocity in the region $k$.
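The region-averaged formulation of Eq. \[averaged\] reduces to a small weighted least-squares problem: build the $N\times K$ matrix $S_{i,k}$ of kernel integrals over each region and solve for the $\Omega_k$. The sketch below (an illustration under our own conventions, with splittings again in $\Omega/2\pi$ units and the two-zone cut at $r=0.15R$ taken from the text) uses a weighted `lstsq` solve, which is equivalent to minimizing the $\chi^2$ of the next paragraph:

```python
import numpy as np

def two_zone_fit(r, kernels, splittings, errors, r_cut=0.15):
    """Least-squares fit of region-averaged rotation rates (K = 2 zones):
    a radiative interior r <= r_cut and a convective envelope r > r_cut.
    S[i, k] = integral of kernel K_i over zone k; rows are weighted by
    1/epsilon_i, which makes this the chi-square minimizer."""
    dr = np.gradient(r)
    inner = r <= r_cut
    S = np.column_stack([kernels[:, inner] @ dr[inner],
                         kernels[:, ~inner] @ dr[~inner]])
    w = 1.0 / np.asarray(errors, float)
    omega, *_ = np.linalg.lstsq(S * w[:, None],
                                np.asarray(splittings, float) * w, rcond=None)
    return omega          # (Omega_1/2pi, Omega_2/2pi) in the splitting units
```

With kernels that sample both zones and splittings generated from a two-valued profile, the fit returns the zone rates exactly; with real data the residual gives the reduced $\chi^2$ quoted in the text.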
To assess the angular velocity in the interior we can perform a least-squares fit to the observations by minimizing the $\chi^2$ function: $$\chi^2=\sum_{i=1}^N\frac{[2 \pi \delta \nu_i-\sum_{k=1}^{K}S_{i,k}\Omega_k]^2}{\epsilon_i^2} \label{chiav}$$ We have explored several cases for small values of $K$ and we found that the result depends on $K$ and on the choice of the values $r_k$ of the boundaries between each region $k$. Reasonable values of the reduced $\chi^2$ for the two models (Model 1: $\chi^2$ = 5.10, Model 2: $\chi^2$ = 10.6) have been obtained by cutting the interior of our models into two regions, so that the region for $k=1$ corresponds to the radiative region $r/R\leq0.15$, while the region for $k=2$ corresponds to the convective envelope with $0.15< r/R\leq1.0$. Table \[omegak2\] lists the values of $\Omega_k$ in the different regions for the two models. Once the least-squares problem has been solved, we can use the coefficients to calculate the averaging kernels: $$K(r_k,r)=\sum_{i=1}^N c_i(r_k){\cal K}_i(r).$$ Figure \[cum-LS\] shows that while the cumulative integrated kernel of the region $k=1$ is quite sensitive to the core, although a contribution from the surface is still present, the cumulative integrated kernel of the region $k=2$ is strongly contaminated by the modes trapped in the radiative interior. ![Cumulative integral of the averaging kernel of the core (black line) and the surface (red line) obtained with the LS method using Model 2.[]{data-label="cum-LS"}](effective-ker2.eps){height="8cm"} [lcc]{} & Model 1 & Model 2\ $\Omega_1/2\pi$ (nHz) & $765\pm23$ & $746\pm22$\ $\Omega_2/2\pi$ (nHz) & $25\pm 8$ & $52\pm8$\ \[omegak2\] Conclusion ========== In the present paper, we have analyzed the case of KIC 4448777, a star at the beginning of the red-giant phase, for which a set of only 14 rotational splittings of dipolar modes have been identified. Its internal rotation has been probed successfully by means of asteroseismic inversion.
We confirm previous findings obtained in other red giants [@beck2012; @deheuvels2012; @deheuvels2014] that the inversion of rotational splittings can be employed to probe the angular velocity not only in the core, but also in the convective envelope. We find that the helium core of KIC 4448777 rotates faster than the surface at an angular velocity of about $\langle\Omega_c/2\pi\rangle=748\pm18$ nHz, obtained as an average value from the SOLA and OLA inversion techniques. Moreover, we found that the result in the core, well probed by mixed modes, does not depend on the stellar structure model employed in the inversion. The value estimated in the core agrees well with those obtained by applying other methods, such as the one based on the relation between the observed rotational splittings and the inertia of the modes [@goupil2013], or the least-squares fit to the observed rotational splittings. Our results have shown that the mean rotation in the convective envelope of KIC 4448777 is $\langle\Omega_s/2\pi\rangle=68\pm22$ nHz, obtained as an average of the OLA inversion results for the two ’best-fitting’ selected models, indicating that the core rotates about $11$ times faster than the convective envelope. For the SOLA inversion solutions, it was not possible to suppress efficiently the strong signal of the modes concentrated in the core. The value of the rotation deduced in the convective zone is compatible with the upper limit measured at the photosphere $\Omega_{ph}/2\pi<538\,\mathrm{nHz}$ derived from the spectroscopic value of $v\sin i$ (see Sect. 2) and the stellar radius provided by the models. Unfortunately, with few modes able to probe efficiently the acoustic cavity, we have been able to deduce only a weighted average of the rotation in the whole convection zone, but the inversion results appear not to depend on the equilibrium structure model employed.
Other methods used to determine rotation in the convection zone have provided us with results in reasonable agreement with those obtained by inversion. Furthermore, we demonstrate that the inversion of rotational splittings can be employed to probe the variation with radius of the angular velocity in the core, because the observed mixed modes enable us to build well-localized averaging kernels. The application of both SOLA and OLA inversion techniques allowed us to show that the entire core is rotating with a constant angular velocity. In addition, the SOLA method found evidence for an angular velocity decrease occurring in the region $0.007R\leq r \leq0.1R$, between the helium core and part of the hydrogen burning shell, which cannot be better localized, due to the intrinsic limits of the applied technique and to the lower resolving power of the employed modes in the regions above the core. Thus, although we are not able to distinguish between a smooth or a sharp gradient, we can determine with good approximation the maximum gradient strength and the radial position of the peak gradient. With the available data, including just a modest number of dipolar modes, it is clearly impossible to infer the complete internal rotation law of KIC 4448777. In order to resolve the regions above the core, it is necessary to invert a set of data which includes more rotational splittings, in particular of modes with $l>1$ and with significant amplitudes in the acoustic cavity. Such data may become available for other targets or from analysis of longer time-series observations than those considered here. It seems fair to say that, at this stage, asteroseismic inversions are giving very useful results for testing the current theory of stellar structure and evolution, but we expect that the ever-improving accuracy of the data will drive the theory to advance in new directions and eventually lead to a more thorough understanding of stellar rotation.
Considering that the internal angular velocity of the cores of red giants is theoretically expected to be higher than our results show, it remains necessary to investigate more efficient mechanisms of angular momentum transport acting on the appropriate timescales during the different phases of stellar evolution, before the red-giant phase. We expect that measurements of rotational splittings for modes with low inertia will shed some light on the above picture and on the question of stellar angular momentum transport. Although some preliminary tests [@beck2014] have shown that the use of $l=2$ mode splittings cannot help to resolve the internal rotation of red giants, we believe that a detailed analysis considering red giants at different evolutionary phases is required. We conclude that it is reasonable to think that this approach, which proved to be very powerful in the case of the Sun, for which thousands of modes from low to high degree have been detected, can be well applied even to small sets of only dipolar modes in red-giant stars. The authors warmly thank the anonymous referee for the suggestions and comments, which gave us the opportunity to greatly improve the manuscript. Funding for the Kepler mission is provided by NASA’s Science Mission Directorate. We thank the entire Kepler team for the development and operations of this outstanding mission. WAD was supported by the Polish NCN grant DEC-2012/05/B/ST9/03932. SH acknowledges financial support from the Netherlands Scientific Organization (NWO). DS acknowledges support from the Australian Research Council. Funding for the Stellar Astrophysics Centre is provided by The Danish National Research Foundation (Grant agreement no.: DNRF106). The research is supported by the ASTERISK project (ASTERoseismic Investigations with SONG and Kepler) funded by the European Research Council (Grant agreement no.: 267864).
BM, PB and RAG acknowledge the ANR (Agence Nationale de la Recherche, France) program IDEE (n. ANR-12-BS05-0008) “Interaction Des Étoiles et des Exoplanètes”. AT is postdoctoral fellow of the Fund for Scientific Research (FWO), Flanders, Belgium. The research leading to the presented results has received funding from the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no 338251 (StellarAges). The work presented here is also based on ground-based spectroscopic observations made with the Mercator Telescope, operated on the island of La Palma by the Flemish Community, at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. MPDM is grateful to the Astrophysical Observatory of Catania for hosting her during the preparation of the present manuscript. Aizenman M., Smeyers P., Weigert A., 1977, A&A, 58, 41 Angulo, C., Arnould, M. Rayet M. et al. 1999, Nucl. Phys. A, 656, 3 Backus, G. E. & Gilbert, F. 1970, Phil. Trans. R. Soc. Lond., 266, 123 Baglin, A., Auvergne, M., Boisnard, L. et al. 2006, in 36th COSPAR Scientific Assembly Plenary Meeting, Vol. 36, Meeting abstract from the CDROM, 3749 Basu, S., Pinsonneault, M.H. & Bahcall, J.N., 2000, ApJ, 529, 1084 Beck, P. G. 2013, PhD Thesis, ’Asteroseismology of Red-Giant Stars: Mixed Modes, Differential Rotation, and Eccentric Binaries’, Arenberg Doctoral School of Science, Engineering & Technology, Faculty of Science, Department of Physics and Astronomy, KU Leuven, Belgium. Beck, P. G., Bedding, T. R., Mosser, B. et al. 2011, Science, 332, 205 Beck, P. G., Hambleton, K., Vos, J., et al. 2014, A&A, 564, 36 Beck, P. G., Montalban, J., Kallinger, T., et al. 2012, Nature, 481, 55 Bedding, T.R., Mosser, B., Huber, D. et al. 2011, Nature, 471, 608 Belkacem, K., Goupil, M. J., Dupret, M. A., et al., 2011, A&A, 530, 142 Berthomieu, G., Toutain, T., Gonczi, G. et al. 2001, in Proc. 
SOHO 10/GONG 2000 Workshop ’Helio- and Asteroseismology at the Dawn of the Millennium’, ESA SP-464, 411 Borucki, W. J., Koch, D., Basri, G. et al. 2010, Science, 327, 977 Brandão, I. M., Doǧan, G., Christensen-Dalsgaard, J., et al. 2011, A&A, 527, A37 Brun, A. S., Zahn, J.-P. 2006, A&A, 457, 665 Böhm-Vitense, E., 1958, Zeitschrift für Astrophysik, 46, 1115 Cantiello, M., Mankovich, C., Bildsten, L. et al. 2014, ApJ, 788, 93 Castelli, F., Kurucz, R. L. 2004, A&A, 419, 725 Ceillier, T., Eggenberger, P., García, R. A., Mathis, S. 2013, A&A, 555, 54 Chaboyer, B., Demarque, P., Pinsonneault, M. H. 1995, ApJ, 441, 876 Chaplin, W. J., Christensen-Dalsgaard, J., Elsworth, Y. et al., 1999, MNRAS, 308, 405 Chaplin, W. J., Elsworth, Y., Isaak, G. R. et al., 2002, MNRAS, 336, 979 Christensen-Dalsgaard, J. 2008, Ap&SS, 316, 13 Christensen-Dalsgaard, J. 2008, Ap&SS, 316, 113 Christensen-Dalsgaard, J., 2004, Sol. Phys., 220, 137 Christensen-Dalsgaard, J., Schou, J., Thompson, M., 1990, MNRAS, 242, 353 Corsaro, E. et al. 2012, ApJ, 757, 190 Corsaro, E. & De Ridder, J. 2014, A&A, 571, 71 Cowling, T. G. & Newing, R. A., 1949, ApJ, 109, 149 De Smedt, K., Van Winckel, H., Karakas, A. I., et al. 2012, A&A, 541, 67 Deheuvels, S., García, R. A., Chaplin, W. J. et al., 2012, ApJ, 756, 19 Deheuvels, S., Dogan, G., Goupil, M. J. et al., 2014, A&A, 564, 27 Di Mauro, M. P., Christensen-Dalsgaard, J., & Weiss, A., 2003, in The Third MONS Workshop: Science Preparation and Target Selection, Proceedings of a Workshop held in Aarhus, Denmark, January 24-26, 2000, Eds.: T.C. Teixeira, and T.R. Bedding, Aarhus Universitet, p.151 Di Mauro, M. P., 2004, Proc. SOHO 14 / GONG 2004 Workshop “Helio- and Asteroseismology: Towards a Golden Future”, ed. D. Danesy, ESA-SP 559, 186 Di Mauro, M. P. & Dziembowski, W. 1998, MemSAIt, 69, 559 Di Mauro, M. P., Cardini, D., Catanzaro, G. et al., 2011, MNRAS, 415, 3783 Dziembowski, W. A., 1971, AcA, 21, 289 Dziembowski, W. A., Gough D.
---
author:
- 'A. Baier [^1]'
- 'F. Kerschbaum'
- 'T. Lebzelter'
bibliography:
- '13968refs.bib'
date: 'Received 24 12 2009 / Accepted 12 03 2010'
subtitle: 'I. Perspectives & Limitations'
title: Fitting of dust spectra with genetic algorithms
---

Introduction
============

Asymptotic Giant Branch (AGB) stars [e.g. @agbbook] are among the main sources of dust in the universe. Depending on their evolutionary state, they produce different amounts and species of dust. At the onset of their mass loss, stars with solar metallicity exhibit a mainly oxygen-rich dust mineralogy, which consists of oxides and silicates [e.g. @posch99; @jaeger03]. At later stages of their evolution, the star’s chemical composition changes depending on its mass, which may lead to a mineralogy dominated by carbon-rich dust such as amorphous carbon, carbides, sulphides and nitrides [e.g. @hony04]. Dust formation is of immense importance for the overall behaviour of these stars. It has an effect on the dynamics of their atmospheres and thus on their mass loss. As a consequence, the spectral appearance changes as well. The dust can produce prominent features in the infrared regime of a spectrum. Many of these dust species may still be unknown; others produce spectral features in the same wavelength region and are thus difficult to distinguish. Another aspect is the dependence of the feature shape on the shape of the dust grains themselves. Up to now, the fitting of such spectra has mostly been done manually. A routine that automatises this process and also allows the results to be quantified would be a great asset in this field of research. In this work we take first steps in this direction. By combining a genetic algorithm with a widely used radiative transfer code, we want to show that it is possible to move away from the traditional hand fitting towards a more independent procedure.
The Fitting Procedure
=====================

The problem {#tp}
-----------

As mentioned above, the spectrum of a low-mass late-type star can exhibit strong dust features in the infrared range. The often complex dust mixture in the star’s shell can make the determination of its exact composition difficult. In order to determine the detailed dust composition of a star, a synthetic spectrum is fitted to an observed one. The parameters for the synthetic spectra are chosen to include those dust species that show features at the same wavelengths as the spectrum of the observed object. This may then eventually lead to a satisfying fit and thus to the identification of certain dust species. Since most stars do not house only a single dust species in their surroundings, their respective infrared spectrum can be quite complex, showing features that are the result of a certain blend of minerals. An unambiguous determination of the dust composition can constitute a complex challenge, since the spectral behaviour of dust depends not only on the correct abundance ratios, but also strongly on parameters such as the dust temperature, the optical depth and the grain shape itself [see e.g. @posch07b]. One of the most commonly used radiative transfer codes to produce these synthetic spectra is DUSTY [@ivezic97]. During the last years it has proven to be a very useful tool for gaining new insights into the dust mineralogy of evolved stars, although it has to be noted that DUSTY does not treat the dust opacity as a function of distance from the star. The assumption of an opacity $\kappa$ that is invariant with distance from the star is, however, unrealistic. Nevertheless, DUSTY is a quite powerful tool and serves as a very good starting point for determining the dust composition of highly evolved stars. Although this method of fitting an IR dust spectrum of a star is a robust one, it has a downside which always needs to be taken into account when looking at the results.
Casually speaking, the problem is the scientist himself. First, in order to produce a “good” fit, a certain amount of experience and intuition is definitely an asset. Second, this experience can also be a drawback, since different people may produce equally good fits with completely different sets of parameters. This raises the need to quantify those models in order to compare them with each other. Furthermore, a process which automatises the fitting procedure, in the sense of finding the correct parameter values of the desired fit, and thus produces a result that is less biased by human intervention, would serve the purpose of quantification even more. Inspired by a preliminary study by @dijk07, an attempt has been made to combine DUSTY with a genetic algorithm. Following Dijkstra’s example, the publicly available genetic algorithm PIKAIA [@charb95] has been used for this task.

DUSTY in a nutshell
-------------------

DUSTY is used to perform radiative transfer calculations for objects of different nature. The code is designed to deal with the radiation from some source which is modified by a dusty region; in our case this is a central star wrapped in a dust shell. Since the light emitted by the star is scattered, absorbed and re-emitted by the dust shell, the emerging spectrum is usually the only way to gain information about the dust and the object hidden inside. To solve the radiative transfer problem, DUSTY uses the integral equation for the spectral energy distribution introduced by @ivezic97. Besides the basic free parameters such as the stellar temperature, the chemical composition, the grain size distribution, the optical depth and many more, it also allows choosing between a spherical and a plane-parallel geometry.

Genetic algorithms & PIKAIA
---------------------------

![Basic concept of a genetic algorithm[]{data-label="genal"}](13968fig1.eps)

Genetic algorithms (GAs) are search techniques used in computational optimisation of a given problem.
To ensure a healthy and steady evolution, a certain population size is required, as well as offspring showing a certain range of “fitness”, to secure the variability of the process. With these considerations in mind, a genetic algorithm can be defined as the implementation of the evolutionary principle in a computational context. A simplified flowchart is shown in Fig. \[genal\]. This basic version gradually improves via successive iteration, but does not maximise in a strict mathematical sense. Most genetic-algorithm-based codes include further strategies to address this problem. The genetic-algorithm-based optimisation routine PIKAIA [@charb95] is written in a very user-friendly way; it is primarily intended as a learning tool rather than a science tool. Still, this “easy-to-use” policy makes it a very attractive choice for optimisation problems of relatively low complexity. Nevertheless, one always has to keep in mind that performance efficiency is often sacrificed to the clarity of this code. PIKAIA maximises a problem over a fixed population and a given number of generations. This means that PIKAIA does not optimise a population until a certain convergence criterion is fulfilled, but rather runs until a given number of generations is reached. The code is written in FORTRAN and sticks to the conventions of ANSI FORTRAN 77. Although PIKAIA is also available in Java and as a parallelised version, the decision to use its original FORTRAN version is quite obvious, since DUSTY is written in the same language and thus a certain consistency of the code base is preserved. Genetic algorithms are used to solve manifold problems, and in astronomy itself GAs are a common tool for optimisation. @hetem07 make use of a GA in order to improve already existing models of protoplanetary disks. In his work on the transition region from the chromosphere to the corona of the Sun, @peter01 used PIKAIA to optimise his Gaussian fits of various lines.
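The generational scheme just described – a fixed-size population evolved for a fixed number of generations via fitness-proportional selection, crossover and mutation – can be reduced to a short sketch. The following is a toy re-implementation of the idea in Python, not PIKAIA itself; the objective function `toy_fitness` is a hypothetical stand-in for the ranking of DUSTY models.

```python
import random

def evolve(fitness, n_genes, pop_size=30, generations=20,
           crossover_rate=0.85, mutation_rate=0.005, seed=1):
    """Minimal generational GA; each gene is a float in [0, 1]."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):                      # fixed number of generations
        scores = [fitness(ind) for ind in pop]
        offspring = []
        while len(offspring) < pop_size:
            # fitness-proportional (roulette-wheel) selection of two parents
            p1, p2 = rng.choices(pop, weights=scores, k=2)
            c1, c2 = p1[:], p2[:]
            if n_genes > 1 and rng.random() < crossover_rate:
                cut = rng.randrange(1, n_genes)       # one-point crossover
                c1[cut:], c2[cut:] = p2[cut:], p1[cut:]
            for child in (c1, c2):
                for i in range(n_genes):              # uniform mutation
                    if rng.random() < mutation_rate:
                        child[i] = rng.random()
                offspring.append(child)
        pop = offspring[:pop_size]                    # next generation, same size
    return max(pop, key=fitness)

# Hypothetical stand-in for ranking DUSTY models: optimum at all genes = 0.5.
def toy_fitness(ind):
    return 1.0 / (1e-9 + sum((g - 0.5) ** 2 for g in ind))

best = evolve(toy_fitness, n_genes=3, pop_size=40, generations=40)
```

With the population and generation numbers quoted later in the paper the loop simply runs longer; production codes such as PIKAIA add refinements like elitism and adaptive mutation on top of this skeleton.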
@noyes97 used the code for the optimisation of the derived orbital parameters in their work on a possible planet orbiting $\rho$ CrB. In a much larger context, genetic algorithms are also used, for example, to model the interaction of galaxies [@theis98; @theis01]. @metcalfe00 as well as @mokiem05 used the optimisation code PIKAIA to improve and automatise the fitting routines for their respective objects: in the case of the former, the observed pulsation frequencies of the white dwarf GD 358; in the case of the latter, the combination of the non-LTE stellar atmosphere code FASTWIND [@puls05] with PIKAIA for the spectral analysis of early-type stars. Finally, @canto09 used an, as they call it, asexual genetic algorithm for optimisation and model fitting of various problems. In their approach they apply an asexual reproduction scheme, which means the offspring are produced by only one parent – not unlike bacteria – and not by two parents as in a traditional GA. As possible applications of this method, a typical optimisation problem as well as the fitting of the spectral energy distribution of a young stellar object are presented.

PIKAIA & DUSTY – a marriage of convenience
------------------------------------------

  f-Number   Species                       Formula                       Ref
  ---------- ----------------------------- ----------------------------- -----
  $f_1$      Amorphous aluminum oxide      Al$_2$O$_3$                   1
  $f_2$      Astronomical silicate         SiO$_4$                       2
  $f_3$      Amorphous pyroxene (100 K)    Mg$_x$Fe$_{1-x}$SiO$_3$       3
  $f_4$      Amorphous melilite            Ca$_2$Al$_2$SiO$_7$           4
  $f_5$      Amorphous olivine             Mg$_{0.8}$Fe$_{1.2}$SiO$_4$   5,6
  $f_6$      Crystalline magnesiowustite   Mg$_{0.1}$Fe$_{0.9}$O         7
  $f_7$      Amorphous Mg-Fe-silicate      MgFeSiO$_4$                   8
  $f_8$      Crystalline spinel            MgAl$_2$O$_4$                 9

  : Dust species used for fitting the ISO spectra with DUSTY

\(1) @begemann97; (2) @ossenkopf92; (3) @henning97; (4) @mutschke98; (5) @jaeger94; (6) @dorschner95; (7) @henning95; (8) @hanner88; (9) @posch99

\[tbldustspec\]

As already mentioned in Sect.
\[tp\], fitting dust spectra of AGB stars without the help of a suitable routine can be challenging and may lead to ambiguous results. In our approach, the DUSTY code was connected with the genetic optimisation algorithm PIKAIA, which allows producing fits that are less dependent on the individual experience of the respective scientist [see @dijk07]. Being based on the evolutionary principle of natural selection, PIKAIA tries to maximise the function $g(\lambda)=[F(\lambda)-F_m(\lambda)]^{-2}$, where $F(\lambda)$ is the observed spectrum and $F_m(\lambda)$ represents one model spectrum calculated with DUSTY. In order to optimise $g(\lambda)$, PIKAIA creates an initial population of trial solutions for the DUSTY models $F_m(\lambda)$ by using random values for the input parameters. Starting from there, pairs of trial solutions are selected which are then used to breed two new solutions, i.e. the offspring. This way a new population with the same number of trial solutions is created, and then the breeding process starts again until the final generation is reached. The following DUSTY input parameters have to be chosen before the calculation is started and are not changed by PIKAIA. DUSTY’s wavelength grid has been kept down to only 105 grid points in order to keep the runtime for each model as short as possible, considering the large overall number of models to be calculated.

![Dust absorption data of the dust species used for fitting the ISO spectra with DUSTY[]{data-label="dustspecies"}](13968fig2.eps)

For the MRN grain size distribution as well as for the ratio between the outer shell radius $R_{out}$ and the inner shell radius $R_{in}$, the default settings have been kept.
This grain size distribution, following @mathis77, is given by a power law of the form $$n(a)\propto a^{-q} \quad \mathrm{ for } \quad a_{min} \leq a \leq a_{max}$$ with $n(a)$ being the number density of dust grains with radius $a$, and with the default parameters $q = 3.5$, $a_{min} = 0.005\,\mu$m and $a_{max} = 0.25\,\mu$m. For $R_{out}/R_{in}$ a value of 1000 was chosen, since any ratio above 100 has only a very small influence on the shape of the MIR spectrum. The effective stellar photospheric temperature $T_*$ also has to be chosen beforehand; DUSTY then approximates the photospheric spectrum by a single black body. For this first stage of modelling, it was also decided that the dust temperature $T_{dust}$ and the optical depth $\tau$ are kept fixed throughout the calculation process. Both values have a big influence on the spectral shape and thus should already be known to some extent beforehand. Finally, the pool of dust species used by DUSTY is also fixed during the whole run. Throughout this paper the dust species will be referred to as $f_1$ to $f_8$. The key to this numbering scheme can be found in Table \[tbldustspec\]; a graphical representation of the optical constants is given in Fig. \[dustspecies\], where the main features in the studied wavelength range can be seen. Their respective abundances are the subject of PIKAIA’s optimisation routine. In order to avoid completely unrealistic combinations of abundances, the values have always been restricted to a range that seemed most likely for the respective object. For PIKAIA, two of the most important input values are the generation and population numbers. If these values are too small, the genetic algorithm cannot act as intended; thus, values sufficiently high for the treated problem must be chosen. For our experiments we used as pop/gen values either 90/100 or 100/200, depending on the complexity of the observed spectrum. PIKAIA qualifies each fit using a user-supplied fitness function.
Here we used the straightforward $\chi{}^2$, assuming that $$\mathrm{fitness}=\frac{1}{\chi{}^2}.$$ For each model the respective $\chi{}^2$ is calculated and then used to generate the fitness value, which is used for the internal ranking of the models. At the end of the evolutionary process, PIKAIA provides the respective parameter values together with the fitness value as an output for the user. This allows us to compare different fits in a quantitative way.

  \#   pop    gen     $T_{dust}$   $f_2$   $f_3$   $\tau_{10\mu{}m}$   free paras   fitness
  ---- ------ ------- ------------ ------- ------- ------------------- ------------ ---------
  0    artificial spectrum  750    65      35      0.009               -            -
  1    10     5       754          -       -       -                   1            0.019
  2    10     5       -            65      35      -                   1            1.000
  3    10     5       -            -       -       0.00898             1            0.023
  4    10     5       -            66.5    33.5    0.00878             2            0.552
  5    10     5       752          50.3    49.7    0.00827             3            0.664
  6    30     20      -            66.6    33.4    0.00783             2            0.777
  7    30     20      730          65.6    34.4    0.008837            3            0.969

\[perres\]

Performance Test
================

In order to investigate the performance and the overall functionality of the two combined programs, a simple test setup has been designed. For this task an artificial spectrum has been created with DUSTY. Following the plan to keep this input spectrum as simple as possible, the stellar parameters have been chosen to represent a more or less average Mira star with a silicate-only dust composition. This artificial spectrum has then been fed into PIKAIA as the “observed” spectrum. For this performance test, two combinations of generation and population numbers have been used. Furthermore, for each of the test runs some of the parameters have been kept at a fixed value, while others were allowed to vary freely. The value used to qualify the fits is the already introduced “fitness”. As can be seen in Table \[perres\], it does not only depend on the number of free parameters but also on which of the parameters have been left free.
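The fitness assignment used above, $\mathrm{fitness} = 1/\chi^2$, amounts to a few lines of code. The sketch below is illustrative rather than PIKAIA's actual user-supplied function; uniform measurement errors and a small floor on $\chi^2$ (to avoid division by zero for a perfect match) are our own assumptions.

```python
def chi_squared(observed, model, sigma=1.0):
    """Chi-squared between observed and model fluxes on a common wavelength grid."""
    if len(observed) != len(model):
        raise ValueError("spectra must share one wavelength grid")
    return sum(((o - m) / sigma) ** 2 for o, m in zip(observed, model))

def fitness(observed, model, sigma=1.0):
    """PIKAIA-style score: larger is better; the floor avoids division by zero."""
    return 1.0 / max(chi_squared(observed, model, sigma), 1e-12)
```

Since the genetic algorithm only uses the fitness to rank trial solutions, any monotonically decreasing transform of $\chi^2$ would serve equally well.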
For calculations with a very low number of population members and generations, our procedure obviously has the least difficulties when all parameters except the dust composition are given in advance. Still, it has to be pointed out that even for the low gen/pop combination the best result has been achieved with all parameters left free. This shows that the genetic algorithm operates most efficiently when given a sufficient amount of freedom concerning the parameters. Nevertheless, when comparing the fitness values of fits 5 and 7, it also becomes very clear that in the case of no fixed parameters, a low number of population members and generations – as in the case of fit 5 – is no longer sufficient to retrieve a reasonable result. The generation and population numbers may appear to be very low, considering that one would expect both values to be higher to receive acceptable results from the algorithm. This is indeed true when the code is applied to real observed spectra. Still, for a simple test setup like the one used here, the numbers are sufficient, since usually the best fit is achieved after 10 to 15 generations. This is demonstrated in the right panel of Fig. \[dvd\], where the development over time (i.e. generations) of each of the free parameters is shown for fit 7 (see Table \[perres\]). The run time for the best model was about 4 hours in total on a PC. The left side of this figure shows the artificial input spectrum with the best PIKAIA fit after 20 generations, using a starting population of 30 trial solutions. In this test run, 4 parameters were determined by PIKAIA: the dust temperature at the inner boundary $T_{dust}$, the abundances of the two dust species $f_2$ and $f_3$, and the optical depth $\tau$. This test showed, on the one hand, that the program works as expected and, on the other hand, gave a first idea of how the different parameters interact with each other.
Nevertheless, it has to be noted that the test was realistic neither concerning the complexity of typical observed spectra nor concerning the number of generations and population members. A more realistic approach will be presented in Sect. \[iso\]. Another point that needs to be mentioned is the CPU time needed for the calculations, which amounted to about two hours for the full procedure. The method presented in this paper might therefore not be able to compete with an experienced scientist doing a hand fit. Still, it offers some advantages compared to the “intuitive” method, apart from not getting your hands dirty but letting the computer do your work. With the fitness value it offers a parameter which allows one to compare results in a quantitative way. Furthermore, the process is much less biased than a fit done by hand and, as such, again much better suited for general comparison.

Application to ISO spectra of AGB stars {#iso}
=======================================

In order to test our routine with more realistic data, models for selected AGB stars observed by the Infrared Space Observatory (ISO) [@isoref] were calculated. The stars were chosen by the following criteria: To start with, they had to be simple, in the sense that their spectral appearance, and thus mainly their dust composition, is a simple (not more than three species) and more or less known and confirmed one. Nevertheless, the chosen stars should represent a wide parameter range. Finally, another criterion was that some DUSTY model calculations had already been done for the object, in order to compare our results with previous ones and thus to put the algorithm to another test.
  Source    ISO TDT number   Spectral class   Variability   Ref
  --------- ---------------- ---------------- ------------- -----
  CE And    80104817         M5               Lb            1
  $o$ Cet   45101201         M7IIIe           Mira          2
  Z Cyg     37400126         M5e              Mira          3
  TY Dra    74102309         M8               Lb            1
  S Pav     14401702         M7IIe            SRa           1
  SV Peg    74500605         M7               SRb           1

  : Basic properties of ISO test stars

\(1) @olofsson02; (2) @loup93; (3) @groen99

\[isostars\]

All the data on the selected objects (see Table \[isostars\]) have been taken from the ISO archive and have been processed with ISO’s automatic data-analysis pipeline, Off-Line Processing v10.1. For further reduction and processing, the ISO Spectroscopic Analysis Package (ISAP) was used [@sturm98]. Since the major interest was in the overall shape and not in particular features of the observed spectra, they have been rebinned to a resolution of $R=200$ in order not to waste processing time on an unnecessarily high number of data points.[^2] Before going into detail for each single star, a remark has to be made concerning the 30 $\mu$m feature visible in all the spectra. This feature has not been taken into account in any of the calculations, since it is very likely to be an instrumental artefact rather than a real feature. Although @sloan03 suggest that it might not be completely artificial, there is still no consensus on that matter. For fits done by hand this feature does not present any problem, since it can easily be ignored by the scientist during the fitting procedure. PIKAIA, on the contrary, treats every data point equally, so fits were only done up to 26 $\mu$m; wavelengths above this limit are not shown in the presented plots since they are not used in the analysis. The dust composition of the individual stars has been taken from the literature. In the case of S Pav and CE And, @heras05 served as a source for the starting values of the calculations. For TY Dra, $o$ Cet and Z Cyg, the values were taken from @poschPhD. Finally, for SV Peg both sources have been used to determine the starting values.
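The rebinning to a fixed resolving power $R=\lambda/\Delta\lambda$ mentioned above can be sketched as a flux average over bins equally spaced in $\log\lambda$. This is an illustrative stand-in, not the ISAP routine actually used, and `rebin_to_resolution` is a hypothetical helper name.

```python
import math

def rebin_to_resolution(wavelengths, fluxes, R=200):
    """Average fluxes into bins of constant resolving power R = lam/dlam,
    i.e. bins equally spaced in log(lambda); empty bins are dropped."""
    log_min, log_max = math.log(wavelengths[0]), math.log(wavelengths[-1])
    step = math.log(1.0 + 1.0 / R)                     # log-width of one bin
    n_bins = max(1, int(math.ceil((log_max - log_min) / step)))
    sums, counts = [0.0] * n_bins, [0] * n_bins
    for lam, flux in zip(wavelengths, fluxes):
        i = min(n_bins - 1, int((math.log(lam) - log_min) / step))
        sums[i] += flux
        counts[i] += 1
    centres = [math.exp(log_min + (i + 0.5) * step) for i in range(n_bins)]
    new_lam = [c for c, n in zip(centres, counts) if n]
    new_flux = [s / n for s, n in zip(sums, counts) if n]
    return new_lam, new_flux
```

For a spectrum covering roughly 2.4 to 45 $\mu$m, $R=200$ corresponds to on the order of 600 output points, compared to the much denser native sampling.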
Although the compositions used presented a good starting point, they had to be slightly adapted during the run, especially those of @heras05. This is most probably due to the fact that for our tests only a simple black body was used as DUSTY input, while in their work a composite spectrum of a black body, the observed photospheric ISO-SWS spectrum and a Rayleigh-Jeans extrapolation served as input. The results for each star are discussed in the following sections. Furthermore, Table \[tabu\] gives an overview of all input and output values of the fits shown in Fig. \[isotest\], plus the assumed starting values used for the genetic algorithm. The first entry for each star gives the parameters of the fit done by @poschPhD; accordingly, no values for population and generation are entered. The second entry gives the start parameters for the PIKAIA runs, indicated by the value 0 for generation and population and the missing fitness value. Finally, the best result for each star is shown. CE And and S Pav do not have fits by @poschPhD available. For SV Peg the results of two additional fits done with PIKAIA are given; Fig. \[svpegdetail\] shows those results in comparison to the other fits.

Input Output
-------- ------ ------ --------- ------------ ------------------- ------- ------- ------- ------- -------- ------- ------- ------- --------- --
Star Pop. Gen.
$T_{*}$ $T_{dust}$ $\tau_{10\mu{}m}$ $f_1$ $f_2$ $f_3$ $f_4$ $f_5$ $f_6$ $f_7$ $f_8$ Fitness
\[K\] \[K\]

CE And 0 0 2700 500 0.19 24 72 4
CE And 90 120 2700 450 0.009 20 65 12 3 3.9998
o Cet 2600 500 0.03 100 6.3535
o Cet 0 0 2600 500 0.03 50 50
o Cet 90 100 2200 650 0.2 60 40 6.3538
Z Cyg 2400 400 0.09 100 4.3745
Z Cyg 0 0 2500 500 0.15 50 50
Z Cyg 90 100 2700 490 0.13 30 36 34 4.3926
TY Dra 3000 600 0.025 100 9.2667
TY Dra 0 0 3000 600 0.025 50 50
TY Dra 90 100 3000 650 0.3 60 30 10 9.6803
S Pav 0 0 2800 400 0.07 42 42 12 4
S Pav 100 200 2800 650 0.0045 41 41 14 4 9.9916
SV Peg 3100 550 0.008 15 71 10 4 6.3979
SV Peg 0a 0a 3100 550 0.008 15 71 10 4
SV Peg 100 90 3000 615 0.008 10 60 26 4 6.4002
SV Peg 100 120 3000 560 0.007 13 61 22 4 6.4006
SV Peg 0b 0b 2800 500 0.12 22 45 20 9 4
SV Peg 100 200 2800 570 0.011 31 32 20 12 5 6.4017

\[tabu\]

The MIR spectrum of CE And is dominated by a classical amorphous silicate emission [@fabian01; @heras05]. As a starting point for the dust composition, amorphous melilite, amorphous olivine and crystalline magnesiowustite ($f_4$, $f_5$ and $f_6$, respectively) were chosen. During the first manual tests it appeared that another species might contribute to the spectrum. In order to test this hypothesis, crystalline spinel ($f_8$) was used as an additional species. It indeed turned out that the results were better with this additional component. Concerning the population number, a starting population of 90 individuals seemed to be sufficient for the given problem; as for the number of generations, 120 served the purpose. The final results are shown in Table \[tabu\]. The reason for picking $o$ Ceti is obvious. As *the* Mira star per se, its dust composition is very well known, and it has been the topic of a number of publications, some of which presented synthetic fits [e.g. @heras05]. Mira’s mineralogy exhibits mostly silicate dust, which makes it a perfect target for the further evaluation of the code.
As starting parameters for the calculation, we assumed the same dust composition as @poschPhD. After some iterations and efforts to adapt the parameters, it turned out that another dust composition, including aluminum oxide, produces a better fit. The best PIKAIA results were achieved with 100 generations and a population of 90. Results are shown in Table \[tabu\] and in Fig. \[isotest\]. Z Cyg is a very well-known Mira variable with a solely silicate dust composition. The dust properties of Z Cyg have already been extensively discussed by @onaka02, focusing on the effects of time variations of SWS spectra. The SWS spectrum at variability phase $\phi$=0.97 (i.e. TDT37400126) used here is the same as @onaka02 used to derive the dust emissivity of Z Cyg. The mass loss rate obtained from their best fit, 7 x 10$^{-8}$ (r$_*$/3 x 10$^{13}$ cm), seems to be in rather good agreement with the value (4 x 10$^{-8}$ M$_{\odot}$/yr) obtained by @young95. Figure \[isotest\] shows a typical fit done by hand (dashed line), taken from @poschPhD, and the best PIKAIA fit (solid line) that could be accomplished so far. This fit exhibits an exclusively silicate-containing dust composition. Although this fit covers the main silicate feature around 20 $\mu$m quite well, there seem to be some problems connected with the 9.7 $\mu$m feature. In comparison to the hand fit, the PIKAIA fit covers this feature much better. A combination of amorphous pyroxene ($f_{3}$), amorphous melilite ($f_{4}$) and amorphous olivine ($f_{5}$) proved to produce the fittest results. Still, these results are not completely satisfying yet. Again, results are given in Table \[tabu\]. TY Dra proved to be more complicated than first expected. Although its spectrum is supposed to be entirely dominated by amorphous silicate dust (MgFeSiO$_{4}$; see @poschPhD), this solution did not turn out to be the fittest for the given starting values. In Fig.
\[isotest\] the hand fit based on the assumption mentioned is shown. It can be clearly seen that, even though the 10 $\mu$m peak seems to fit quite well, and with some adjustments of the input parameters might fit even better, the peak around 19 $\mu$m is not well represented at all. The PIKAIA fit, on the other hand, seems to be a better, although not perfect, option for this case. Here we assumed a dust composition consisting of amorphous pyroxene, amorphous aluminum oxide and amorphous olivine ($f_{3}$, $f_{1}$ and $f_{5}$, respectively). S Pav is a fairly complex star in terms of dust composition [see e.g. @fabian01]. Starting from the composition given in @heras05, amorphous aluminium oxide, amorphous melilite, magnesiowustite and crystalline spinel ($f_1$, $f_4$, $f_6$ and $f_8$, respectively), PIKAIA was able to produce a fairly reasonable fit for the given parameter range, presenting only minor changes in the amounts of amorphous aluminium oxide, amorphous melilite and magnesiowustite. Still, setting the generation number to 200 in order to obtain the best fit possible came at the cost of run time.

![image](13968fig4.eps)

SV Peg
------

Just like S Pav, SV Peg shows a complex mineralogy. Although still dominated by silicates, its spectrum also exhibits quite strong features of amorphous aluminium oxide, crystalline magnesiowustite and crystalline spinel, according to @poschPhD. This complex dust composition made the fitting process very complicated for our routine. SV Peg turned out to be a surprise in terms of fitting: although it looked too complex to be fitted by this DUSTY-PIKAIA routine at the beginning, the results were fairly good in the end, but at the cost of having to set very strict constraints on the input dust composition. Figure \[svpegdetail\] tries to give an impression of the evolution of the fits and of the difficulties that appear when choosing the input parameters.
The two fits displayed in grey (further referred to as fit A and fit B) show how the increase of the generation number leads to an improvement of the fit. Nevertheless, it turned out that in the case of these two fits the chosen dust parameters were not the most suitable ones, since a restriction of the parameter space and the increase of the generation number did not have as strong an effect on the results as expected. Fit A was done with a dust composition of amorphous aluminum oxide, amorphous silicate, crystalline magnesiowustite and crystalline spinel ($f_1$, $f_2$, $f_6$ and $f_8$, respectively), as suggested by @poschPhD. This worked very well for the manual fit. Unfortunately, it turned out that a too high number of free parameters combined with a too large range of values for the respective parameters did not perform very well. This was mainly due to the chosen number of only 90 generations (and also population members). For a problem of this complexity more time, and thus more generations, is needed to find a solution. As a consequence, the number of generations for fit B was increased to 120. It can be seen clearly in Fig. \[svpegdetail\] that the fit slowly improves and approaches the real spectrum. Nevertheless, raising only the number of generations did not have as strong an effect as desired. Furthermore, the increment led to an immensely extended run time of almost 2 days. Since we were not able to achieve considerably better results even with higher generation numbers, a slightly altered dust composition, consisting of amorphous aluminium oxide, amorphous melilite, amorphous olivine, crystalline magnesiowustite and crystalline spinel ($f_1$, $f_4$, $f_5$, $f_6$ and $f_8$, respectively), in accordance with @heras05, was used. This time only the dust abundances were set to be free parameters; all other variables were fixed. Furthermore, the number of generations was increased to 200 to provide PIKAIA with enough time to arrive at the best fit possible.
This result can also be seen in Fig. \[svpegdetail\] as well as in Table \[tabu\].

![This figure shows a fit of the spectrum of SV Peg done by Posch (black, dashed-dotted line) in comparison to 3 PIKAIA fits. The best of those fits (black line) already shows quite a good approximation of the given observed spectrum, while the other fits (small and big dotted grey) display a kind of evolution during the fitting process. It can be seen that with an increasing number of generations the fit approaches the observed spectrum. Still, even with a higher number of generations the results were not satisfying, so we decided to change the dust settings of the model. This led us to the best fit, which can also be seen in Fig. \[isotest\]. []{data-label="svpegdetail"}](13968fig5.eps)

Conclusions and Outlook
=======================

AGB stars are known to exhibit strong dust features in the IR range of their spectra. The carriers of those features have to be identified in order to study the dust composition of the circumstellar shells and hence the mass loss process of AGB stars. The goal of this work was to study an automatised routine to fit this IR part of the spectrum of an AGB star and to explore the possibilities and, more importantly, the limitations of a routine like this. Therefore, we have focused in this work only on the improvement of fits which had already been carried out on the respective objects. This goal was definitely reached. We could show that it is indeed possible to improve fits done by various researchers by using a genetic algorithm for fine-tuning of the parameters. We could also show that in the cases of Z Cyg, SV Peg and TY Dra a more elaborate or even slightly different dust composition leads to better results than the composition taken from @poschPhD, which was used as a starting point for our calculation.
For CE And, o Cet, S Pav and SV Peg our results showed only minor differences from the dust compositions taken from @heras05, which we used as starting parameters for our models of these objects. Furthermore, this routine also provides a quantitative method, namely the fitness value supplied by the genetic algorithm, for the comparison of model spectra. This addresses the need for independence from a hands-on method carried out by a scientist, which, even when one carefully tries to avoid it, is always biased by individual experience in the respective field of research. Our method can be seen as a small step away from this type of fitting process towards a more automatised and independent one. Nevertheless, there are a few things which need to be kept in mind while using PIKAIA and DUSTY to fit a spectrum. First, the starting parameters for the fits of all stars mentioned in this paper were taken from the literature. However, one will not always work on objects that have already been the subject of investigation beforehand. This leaves us with the question of how to determine the basic stellar parameters needed as input for our automatic fitting method for each object. For a star’s temperature the most likely approach is carrying out JHK-photometry in order to retrieve these values. Another logical approach would be using only similar objects of a certain group, which should also offer some hints for the approximate values. The dust temperature is also a value which might not be found in the literature. To estimate this temperature, a simple black body fit of the respective wavelength region should offer a sufficient starting value for our calculations. Still, by far the most difficult part in choosing the starting values for a model is the selection of suitable dust species and their respective amounts. So far the only reliable method is an experienced scientist.
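The black body estimate mentioned above is straightforward to automate. The sketch below (my own illustration with synthetic data, not code from our pipeline) fits a Planck curve to mid-IR fluxes with `scipy`; the recovered temperature would serve as the starting value for the dust temperature.

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(lam, scale, temp):
    # Planck function B_lambda(T); `scale` absorbs emissivity, solid angle
    # and unit conversions, so only the spectral shape constrains temp.
    return scale * (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp))

# Synthetic 8-45 micron "observed" fluxes of a 600 K dust shell with 2% noise.
rng = np.random.default_rng(0)
lam = np.linspace(8e-6, 45e-6, 60)
flux = planck(lam, 1e-12, 600.0) * (1.0 + 0.02 * rng.standard_normal(lam.size))

# A rough initial guess suffices; the fitted temperature is the starting
# value one would then feed into the radiative-transfer modelling.
popt, _ = curve_fit(planck, lam, flux, p0=(1e-12, 400.0))
dust_temperature = popt[1]
```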
Although this step could be executed by a program doing a comparison of local maxima in the dust spectra with the star’s spectrum, agreement between the peaks unfortunately does not necessarily mean that a single species or a certain combination applies to the star. For now, human intervention in this step seems to be unavoidable and an efficient black box fitting procedure out of reach. This also leads directly to the problem of the run time of our routine. @metcalfe00 as well as @puls05 used the parallelized version of PIKAIA to run their simulations. In this paper we used only a serial version of PIKAIA in order to study the overall behaviour of the routine in combination with DUSTY in a setting as simple as possible. Thus a full optimisation procedure took up to a day on an average workstation. Most of the time during the fitting procedure is consumed by DUSTY, not by PIKAIA itself. Thus, our attempts to speed up the routine focused on DUSTY. We reduced the wavelength grid as much as possible, skipped unnecessary command line outputs and used the basic black bodies built into DUSTY as a background spectrum instead of a more elaborate one. Another problem which increases the runtime is that DUSTY models with a good fitness that continue to the next generation are not stored anywhere but are calculated again. Such unnecessary recalculations likely extend the runtime by up to a factor of two. In the current version of our code we did not take these points into account, as we were aiming at testing the general fitting possibilities of this approach first. Solutions to most of these points are straightforward and their implementation is foreseen for a forthcoming version of our code currently under development. The results shown in this paper are very promising, thus we expect that our method, keeping in mind its limitations, will finally provide a valuable tool for fitting dust spectra of AGB stars. A. B.
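The recalculation issue has a simple remedy: memoise model evaluations keyed on the parameter vector, so that a population member carried over to the next generation reuses its stored spectrum. A minimal sketch (with a dummy function in place of an actual DUSTY call, which is an assumption of this illustration):

```python
import functools

calls = 0  # counts how many times the expensive model is actually computed

@functools.lru_cache(maxsize=None)
def run_model(dust_fractions):
    # Hypothetical stand-in for one DUSTY radiative-transfer run; in the real
    # pipeline this would write an input file, call DUSTY and parse the output.
    global calls
    calls += 1
    return sum(f ** 2 for f in dust_fractions)  # dummy "spectrum" summary

# A parameter set that survives into the next generation is a repeated key,
# so the cached spectrum is returned instead of re-running the model.
params = (0.4, 0.1, 0.5)  # must be hashable, e.g. a tuple of dust fractions
first = run_model(params)   # computed
second = run_model(params)  # served from the cache
```

Since each population member is a fixed-length parameter vector, a tuple works naturally as a hashable cache key.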
received a DOC-fFORTE grant from the Austrian Academy of Sciences. This work was supported by the *Fonds zur Förderung der Wissenschaftlichen Forschung* (FWF) under project number P18939-N16. TL acknowledges funding by the FWF under project number P20046-N16. Furthermore, we would like to thank T. Posch for providing his fits to the ISO spectra and W. Nowotny for careful reading and helpful feedback. [^1]: *email to:* [email protected] [^2]: The dust species examined in this paper all exhibit only dust features that are broad enough to be resolved at R = 200. Thus no essential dust features are missed at this resolution.
--- abstract: 'Let $F$ be the complete flag variety over ${\mbox{Spec}}\Z$ with the tautological filtration $0 \subset E_1 \subset E_2 \subset\cdots \subset E_n=E$ of the trivial bundle $E$ over $F$. The trivial hermitian metric on $E(\C)$ induces metrics on the quotient line bundles $L_i(\C)$. Let ${\widehat}{c}_1({\overline}{L}_i)$ be the first Chern class of ${\overline}{L}_i$ in the arithmetic Chow ring ${\widehat}{CH}(F)$ and ${\widehat}{x}_i=-{\widehat}{c}_1({\overline}{L}_i)$. Let $h{\hspace{-2pt}\in\hspace{-2pt}}\Z[X_1,\ldots,X_n]$ be a polynomial in the ideal $\left<e_1,\ldots,e_n\right>$ generated by the elementary symmetric polynomials $e_i$. We give an effective algorithm for computing the arithmetic intersection $h({\widehat}{x}_1,\ldots,{\widehat}{x}_n)$ in ${\widehat}{CH}(F)$, as the class of an $SU(n)$-invariant differential form on $F(\C)$. In particular we show that all the arithmetic Chern numbers one obtains are rational numbers. The results are true for partial flag varieties and generalize those of Maillot \[Ma\] for grassmannians. An ‘arithmetic Schubert calculus’ is established for an ‘invariant arithmetic Chow ring’ which specializes to the Arakelov Chow ring in the grassmannian case.' author: - | Harry Tamvakis\ Department of Mathematics\ University of Chicago\ Chicago IL 60637 title: '**Arithmetic Intersection Theory on Flag Varieties**' --- Introduction {#intro} ============ Arakelov theory is a way of ‘completing’ a variety defined over the ring of integers of a number field by adding fibers over the archimedean places. In this way one obtains a theory of intersection numbers using an arithmetic degree map; these numbers are generally real valued. The work of Arakelov on arithmetic surfaces has been generalized to higher dimensions by H. Gillet and C. Soulé. This provides a link between number theory and hermitian complex geometry; the road is via arithmetic intersection theory.
One of the difficulties with the higher dimensional theory is a lack of examples where explicit computations are available. The arithmetic Chow ring of projective space was studied by Gillet and Soulé (\[GS2\], 5) and arithmetic intersections on the grassmannian by Maillot \[Ma\]. In this article we study arithmetic intersection theory on general flag varieties and solve two problems: (i) finding a method to compute products in the arithmetic Chow ring, and (ii) formulating an ‘arithmetic Schubert calculus’ analogous to the geometric case. The grassmannian case is easier to work with because the fiber at infinity is a hermitian symmetric space. To the author’s knowledge this work is the first to provide explicit calculations when the harmonic forms are not a subalgebra of the space of smooth forms. The question of computing arithmetic intersection numbers on flag manifolds was raised by C. Soulé in his 1995 Santa Cruz lectures \[S\]. We now describe our results in greater detail. The crucial case is that of complete flags, so we discuss that for simplicity. Let $F$ denote the complete flag variety over ${\mbox{Spec}}\Z$, parametrizing over any field $k$ the complete flags in a $k$-vector space of dimension $n$. Let ${\overline}{E}$ be the trivial vector bundle over $F$ equipped with a trivial hermitian metric on $E(\C)$. There is a tautological filtration $${\displaystyle}\E:\ E_0=0 \subset E_1 \subset E_2 \subset\cdots \subset E_n=E$$ and the metric on $E$ induces metrics on all the subbundles $E_i$. We thus obtain a [*hermitian filtration*]{} ${\overline}{\E}$ with quotient line bundles $L_i=E_i/E_{i-1}$, which are also given induced metrics. Let ${\widehat}{CH}(F)$ be the arithmetic Chow ring of $F$ (see \[ait\] and \[GS1\], 4.2.3) and ${\widehat}{x}_i:=-{\widehat}{c}_1({\overline}{L}_i)$, where ${\widehat}{c}_1({\overline}{L}_i)$ is the arithmetic first Chern class of ${\overline}{L}_i$ (\[GS2\], 2.5). 
Let $h{\hspace{-2pt}\in\hspace{-2pt}}\Z[X_1,\ldots,X_n]$ be a polynomial in the ideal $\left<e_1,\ldots,e_n\right>$ generated by the elementary symmetric polynomials $e_i(X_1,\ldots,X_n)$. Our main result is a computation of the arithmetic intersection $h({\widehat}{x}_1,\ldots,{\widehat}{x}_n)$ in ${\widehat}{CH}(F)$, as a class corresponding to an $SU(n)$-invariant differential form on $F(\C)$. This enables one to reduce the computation of any intersection product in ${\widehat}{CH}(F)$ to the level of smooth differential forms; we show how to do this explicitly for products of classes ${\widehat}{c}_i({\overline}{E_l/E_k})$. In particular, we obtain the following result: Let $k_i$, $1{\leqslant}i {\leqslant}n$, be nonnegative integers with $\sum k_i= \dim{F}={n \choose 2}+1$. Then the arithmetic Chern number $ {\displaystyle}{\widehat}{\deg}({\widehat}{x}_1^{k_1}{\widehat}{x}_2^{k_2}\cdots{\widehat}{x}_n^{k_n}) $ is a rational number. Let $CH({\overline}{G_d})$ be the Arakelov Chow ring (\[GS1\], 5.1) of the grassmannian $G_d$ over ${\mbox{Spec}}\Z$ parametrizing $d$-planes in $E$, with the natural invariant Kähler metric on $G_d(\C)$. Maillot \[Ma\] gave a presentation of $CH({\overline}{G_d})$ and constructed an ‘arithmetic Schubert calculus’ in $CH({\overline}{G_d})$. There are difficulties in extending his results to flag varieties, mainly because the Arakelov Chow group $CH({\overline}{F})$ is not a [*subring*]{} of ${\widehat}{CH}(F)$. To overcome this problem we define, for any partial flag variety $F$, an ‘invariant arithmetic Chow ring’ ${\widehat}{CH}_{inv}(F)$. This subring of ${\widehat}{CH}(F)$ specializes to the Arakelov Chow ring if $F(\C)$ is a hermitian symmetric space. We extend the notion of Bott-Chern forms for short exact sequences to filtered bundles. These forms give relations in ${\widehat}{CH}(F)$ (Theorem \[abc\]); however they are generally not closed forms, and thus do not represent cohomology classes.
This forces us to work on the level of differential forms in order to calculate arithmetic intersections. We compute the Bott-Chern forms on flag varieties $F$ by using a calculation of the curvature matrices of homogeneous vector bundles on generalized flag manifolds due to Griffiths and Schmid \[GrS\]. One thus obtains expressions for the Bott-Chern forms in terms of invariant forms on $F(\C)$. The Schubert polynomials of Lascoux and Schützenberger provide a convenient basis to describe the product structure of ${\widehat}{CH}_{inv}(F)$. Using them we formulate an ‘arithmetic Schubert calculus’ for flag varieties which generalizes that of Maillot \[Ma\] for grassmannians. However, explicit general formulas are lacking, as we cannot do these computations using purely cohomological methods. This paper is organized as follows. In \[prelim\] we review some preliminary material on Bott-Chern forms, arithmetic intersection theory, flag varieties and Schubert polynomials. In \[cbcfs\] we state the main tool for computing Bott-Chern forms (for any characteristic class) in the case of induced metrics. The definition and construction of the Bott-Chern forms associated to a hermitian filtration form the content of \[bcfffb\]. \[hvb\] is concerned with the explicit computation of the curvature matrices of the tautological vector bundles over flag varieties $F$. In \[afvs\] we define the invariant arithmetic Chow ring ${\widehat}{CH}_{inv}(F)$. This subring of ${\widehat}{CH}(F)$ is where all the intersections of interest take place. In \[ai\] we give an algorithm for calculating arithmetic intersection numbers on the complete flag variety $F$, in particular proving that they are all rational. In \[asc\] we describe the product structure of ${\widehat}{CH}_{inv}(F)$ in more detail, formulating an arithmetic Schubert calculus. Some applications of our results are given in \[ex\].
One has the Faltings height of the image of $F$ under its pluri-Plücker embedding; this is always a rational number. We give a table of the arithmetic Chern numbers for $F_{1,2,3}$. Finally, \[pfvs\] shows how to generalize the previous results to partial flag varieties. This paper will be part of the author’s 1997 University of Chicago thesis. I wish to thank my advisor William Fulton for many useful conversations and exchanges of ideas. The geometric aspects of this work generalize readily to other semisimple groups. We plan a sequel discussing arithmetic intersection theory on symplectic and orthogonal flag varieties. Preliminaries {#prelim} ============= Bott-Chern forms {#bcfs} ---------------- The main references for this section are \[BC\] and \[GS2\]. Consider the coordinate ring $\C[T_{ij}]$ $(1{\leqslant}i,j{\leqslant}n)$ of the space $M_n(\C)$ of $n\times n$ matrices. $GL_n(\C)$ acts on matrices by conjugation; let $I(n)=\C[T_{ij}]^{GL_n(\C)}$ denote the corresponding graded ring of invariants. There is an isomorphism $\sigma : I(n){\rightarrow}\C[X_1,X_2,\ldots,X_n]^{S_n}$ obtained by evaluating an invariant polynomial $\phi$ on the diagonal matrix diag$(X_1,\ldots,X_n)$. We will often identify $\phi$ with the symmetric polynomial $\sigma(\phi)$. We let $I(n,\Q)=\sigma^{-1}(\Q[X_1,X_2,\ldots,X_n]^{S_n})$. For $A$ an abelian group, $A_{\Q}$ denotes $A\otimes_{\Z}\Q$. Let $X$ be a complex manifold, and denote by $A^{p,q}(X)$ the space of differential forms of type $(p,q)$ on $X$. Let $A(X)=\bigoplus_p A^{p,p}(X)$ and ${\widetilde}{A}(X)$ be the quotient of $A(X)$ by $\mbox{Im} \partial + \mbox{Im} {\overline{\partial}}$. If $\omega$ is a closed form in $A(X)$ the cup product $\wedge\omega: {\widetilde}{A}(X){\rightarrow}{\widetilde}{A}(X)$ and the operator $dd^c:{\widetilde}{A}(X){\rightarrow}A(X)$ are well defined. Let $E$ be a rank $n$ holomorphic vector bundle over $X$, equipped with a hermitian metric $h$.
The pair ${\overline}{E}=(E,h)$ is called a [*hermitian vector bundle*]{}. A direct sum ${\overline}{E}_1\bigoplus{\overline}{E}_2$ of hermitian vector bundles will always mean the orthogonal direct sum $(E_1\bigoplus E_2,h_1\oplus h_2)$. Let $D$ be the hermitian holomorphic connection of ${\overline}{E}$, with curvature $K=D^2{\hspace{-2pt}\in\hspace{-2pt}}A^{1,1}(X,{\mbox{End}}(E))$. If $\phi{\hspace{-2pt}\in\hspace{-2pt}}I(n)$ is any invariant polynomial, there is an associated differential form $\phi({\overline}{E}):=\phi(\frac{i}{2\pi}K)$, defined locally by identifying ${\mbox{End}}(E)$ with $M_n(\C)$. These differential forms are $d$ and $d^c$ closed, have de Rham cohomology class independent of the metric $h$, and are functorial under pull back by holomorphic maps (cf. \[BC\]). In particular one obtains the [*power sum forms*]{} $p_k({\overline}{E})$ with ${\displaystyle}p_k=\sum_i X_i^k$ and the [*Chern forms*]{} $c_k({\overline}{E})$ with $c_k=e_k$ the $k$-th elementary symmetric polynomial. Let $\E:\ 0{\rightarrow}S {\rightarrow}E{\rightarrow}Q {\rightarrow}0$ be an exact sequence of holomorphic vector bundles on $X$. Choose arbitrary hermitian metrics $h_S,h_E,h_Q$ on $S,E,Q$ respectively. Let $$\label{ses} {\overline}{\E}=(\E,h_S,h_E,h_Q):\ 0{\rightarrow}{\overline}{S} {\rightarrow}{\overline}{E}{\rightarrow}{\overline}{Q}{\rightarrow}0.$$ We say that ${\overline}{\E}$ is [*split*]{} when $(E,h_E)=(S\bigoplus Q,h_S\oplus h_Q)$ and $\E$ is the obvious exact sequence. Let $\phi{\hspace{-2pt}\in\hspace{-2pt}}I(n)$ be any invariant polynomial. 
Then there is a unique way to attach to every exact sequence ${\overline}{\E}$ a form ${\widetilde}{\phi}({\overline}{\E})$ in ${\widetilde}{A}(X)$, called the Bott-Chern form of ${\overline}{\E}$, in such a way that: \(i) $dd^c{\widetilde}{\phi}({\overline}{\E})=\phi({\overline}{S}\bigoplus {\overline}{Q}) -\phi({\overline}{E})$, \(ii) For every map $f:X{\rightarrow}Y$ of complex manifolds, ${\widetilde}{\phi}(f^*({\overline}{\E}))=f^*{\widetilde}{\phi}({\overline}{\E})$, \(iii) If ${\overline}{\E}$ is split, then ${\widetilde}{\phi}({\overline}{\E})=0$. For $\phi$, $\psi {\hspace{-2pt}\in\hspace{-2pt}}I(n)$ one has the following useful relations in ${\widetilde}{A}(X)$: $$\label{define} {\widetilde}{\phi + \psi}={\widetilde}{\phi}+{\widetilde}{\psi}, \ \ \ \ \ \ {\widetilde}{\phi\psi}={\widetilde}{\phi}\cdot\psi({\overline}{S}\oplus {\overline}{Q})+ \phi({\overline}{E})\cdot{\widetilde}{\psi}.$$ Arithmetic intersection theory {#ait} ------------------------------ We recall here the generalization of Arakelov theory to higher dimensions due to H. Gillet and C. Soulé. For more details see \[GS1\], \[GS2\], \[SABK\]. Let $X$ be an [*arithmetic scheme*]{}, by which we mean a regular scheme, projective and flat over $\mbox{Spec}\Z$. For $p{\geqslant}0$, we denote the Chow group of codimension $p$ cycles on $X$ modulo rational equivalence by $CH^p(X)$ and let $CH(X)=\bigoplus_p CH^p(X)$. ${\widehat}{CH}^p(X)$ will denote the $p$-th arithmetic Chow group of $X$. Recall that an element of ${\widehat}{CH}^p(X)$ is represented by an arithmetic cycle $(Z,g_Z)$; here $g_Z$ is a Green current for the codimension $p$ cycle $Z(\C)$. Let ${\widehat}{CH}(X)=\bigoplus_p {\widehat}{CH}^p(X)$. The involution of $X(\C)$ induced by complex conjugation is denoted by $F_{\infty}$. 
Let $A^{p,p}(X_{\R})$ be the subspace of $A^{p,p}(X(\C))$ generated by real forms $\eta$ such that $F^*_{\infty}\eta=(-1)^p\eta$; denote by ${\widetilde}{A}^{p,p}(X_{\R})$ the image of $A^{p,p}(X_{\R})$ in ${\widetilde}{A}^{p,p}(X(\C))$. Let $A(X_{\R})=\bigoplus_p A^{p,p}(X_{\R})$ and ${\widetilde}{A}(X_{\R})=\bigoplus_p {\widetilde}{A}^{p,p}(X_{\R})$. We have the following canonical morphisms of abelian groups: $${\displaystyle}\zeta :{\widehat}{CH}^p(X) \longrightarrow CH^p(X), \ \ \ {[(Z,g_Z)]} \longmapsto {[Z]},$$ $${\displaystyle}\omega : {\widehat}{CH}^p(X) \longrightarrow A^{p,p}(X_{\R}), \ \ \ {[(Z,g_Z)]} \longmapsto dd^cg_Z+\delta_{Z(\C)},$$ $${\displaystyle}a : {\widetilde}{A}^{p-1,p-1}(X_{\R}) \longrightarrow {\widehat}{CH}^p(X), \ \ \ \eta \longmapsto {[(0,\eta)]}.$$ For convenience of notation, when we refer to a real differential form $\eta{\hspace{-2pt}\in\hspace{-2pt}}A(X_{\R})$ as an element of ${\widehat}{CH}(X)$, we shall always mean $a([\eta])$, where $[\eta]$ is the class of $\eta$ in ${\widetilde}{A}(X_{\R})$. There is an exact sequence $$\label{ex1} CH^{p,p-1}(X) \longrightarrow {\widetilde}{A}^{p-1,p-1}(X_{\R}) \stackrel{a}\longrightarrow {\widehat}{CH}^p(X) \stackrel{\zeta}\longrightarrow CH^p(X)\longrightarrow 0$$ Here the group $CH^{p,p-1}(X)$ is the $E_2^{p,1-p}$ term of a spectral sequence used by Quillen to calculate the higher algebraic $K$-theory of $X$ (cf. \[G\]). One can define a pairing ${\widehat}{CH}^p(X)\otimes{\widehat}{CH}^q(X) {\rightarrow}{\widehat}{CH}^{p+q}(X)_{\Q}$ which turns ${\widehat}{CH}(X)_{\Q}$ into a commutative graded unitary $\Q$-algebra. The maps $\zeta$, $\omega$ are $\Q$-algebra homomorphisms. If $X$ is smooth over $\Z$ one does not have to tensor with $\Q$. The functor ${\widehat}{CH}^p(X)$ is contravariant in $X$, and covariant for proper maps which are smooth on the generic fiber. We also note the useful identity $a(x)y=a(x\omega(y))$ for $x,y{\hspace{-2pt}\in\hspace{-2pt}}{\widehat}{CH}(X)$. 
Choose a Kähler form $\om_0$ on $X(\C)$ such that $F^*_{\infty}\omega_0=-\om_0$ and let $\cal{H}^{p,p}(X_{\R})$ be the space of harmonic (with respect to $\omega_0$) $(p,p)$ forms on $X(\C)$ invariant under $F_{\infty}$. The $p$-th [*Arakelov Chow group*]{} of ${\overline}{X}=(X,\om_0)$ is defined by $CH^p({\overline}{X}):= \omega^{-1}(\cal{H}^{p,p}(X_{\R}))$. The group $CH({\overline}{X})_{\Q}=\bigoplus_p CH^p({\overline}{X})_{\Q}$ is generally not a subring of ${\widehat}{CH}(X)_{\Q}$, unless the harmonic forms $\cal{H}^*(X_{\R})$ are a subring of $A(X_{\R})$. This is true if $(X(\C),\om_0)$ is a hermitian symmetric space, such as a complex grassmannian, but fails for more general flag varieties. Let $f:X{\rightarrow}{\mbox{Spec}}\Z$ be the projection. If $X$ has relative dimension $d$ over $\Z$, then we have an arithmetic degree map ${\widehat}{\deg}:{\widehat}{CH}^{d+1}(X){\rightarrow}\R$, obtained by composing the push-forward $f_*:{\widehat}{CH}^{d+1}(X){\rightarrow}{\widehat}{CH}^1({\mbox{Spec}}\Z)$ with the isomorphism ${\widehat}{CH}^1({\mbox{Spec}}\Z)\stackrel{\sim}{\rightarrow}\R$. The latter maps the class of $(0,2\lambda)$ to the real number $\lambda$. A [*hermitian vector bundle*]{} ${\overline}{E}=(E,h)$ on an arithmetic scheme $X$ is an algebraic vector bundle $E$ on $X$ such that the induced holomorphic vector bundle $E(\C)$ on $X(\C)$ has a hermitian metric $h$ with $F_{\infty}^*(h)=h$. There are characteristic classes ${\widehat}{\phi}({\overline}{E}){\hspace{-2pt}\in\hspace{-2pt}}{\widehat}{CH}(X)_{\Q}$ for any $\phi{\hspace{-2pt}\in\hspace{-2pt}}I(n,\Q)$, where $n={\mbox{rk}}E$. For example, we have [*arithmetic Chern classes*]{} ${\widehat}{c}_k({\overline}{E}) {\hspace{-2pt}\in\hspace{-2pt}}{\widehat}{CH}^k(X)$. For the basic properties of these classes, see \[GS2\], Theorem 4.1. Flag varieties and Schubert polynomials {#classgps} --------------------------------------- Let $k$ be a field, $E$ an $n$-dimensional vector space over $k$. 
Let $${\displaystyle}\r=(0<r_1<r_2<\ldots<r_m=n)$$ be an increasing $m$-tuple of natural numbers. A [*flag of type $\r$*]{} is a flag $$\label{fil} \E:\ E_0=0 \subset E_1 \subset E_2 \subset\cdots \subset E_m=E$$ with $\mbox{rank}{E_i}=r_i$, $1{\leqslant}i{\leqslant}m$. Let $F(\r)$ denote the arithmetic scheme parametrizing flags $\E$ of type $\r$ over any field $k$. (\[fil\]) will also denote the tautological flag of vector bundles over $F(\r)$, and we call the resulting filtration of the bundle $E$ a [*filtration of type $\r$*]{}. The above [*arithmetic flag variety*]{} is smooth over ${\mbox{Spec}}\Z$. There is an isomorphism $F(\r)(\C)\simeq SL(n,\C)/P$, where $P$ is the parabolic subgroup of $SL(n,\C)$ stabilizing a fixed flag. In the extreme case $m=2$ (resp. $m=n$) $F(\r)$ is the grassmannian $G_d$ parametrizing $d$-planes in $E$ (resp. the complete flag variety $F$). Although the results of this paper are true for any partial flag variety $F(\r)$, for simplicity we will work with the complete flag variety $F$, leaving the discussion of the general case to \[pfvs\]. The notation for these varieties and the dimension $n$ will be fixed throughout this paper. We now recall the standard presentation of the Chow group $CH(F)$. Define the quotient line bundles $L_i=E_i/E_{i-1}$. Consider the polynomial ring $P_n=\Z[X_1,\ldots,X_n]$ and the ideal $I_n$ generated by the elementary symmetric functions $e_i(X_1,\ldots,X_n)$. Then $CH(F)\simeq P_n/I_n$, where the inverse of this isomorphism sends $[X_i]$ to $-c_1(L_i)$. The ring $H_n=P_n/I_n$ has a free $\Z$-basis consisting of classes of monomials $X_1^{k_1}X_2^{k_2}\cdots X_n^{k_n}$, where the exponents $k_i$ satisfy $k_i{\leqslant}n-i$. The Schubert polynomials of Lascoux and Schützenberger \[LS\] are a natural $\Z$-basis of $H_n$, corresponding to the classes of Schubert varieties in $CH(F)$. Our main reference for Schubert polynomials will be Macdonald’s notes \[M\].
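Two small sanity checks of this presentation can be scripted (my own asides, in Python with sympy; neither is from the text): the monomial basis with $k_i{\leqslant}n-i$ has exactly $n!$ elements, and the Schubert polynomials for $n=3$ can be generated from the top monomial $X_1^2X_2$ by the divided differences $\partial_i f=(f-s_if)/(X_i-X_{i+1})$, where $s_i$ exchanges $X_i$ and $X_{i+1}$.

```python
from itertools import product
from math import factorial
import sympy as sp

# 1) The monomial basis X_1^{k_1}...X_n^{k_n}, k_i <= n - i, has n! elements.
def basis_size(n):
    return sum(1 for _ in product(*[range(n - i + 1) for i in range(1, n + 1)]))

assert all(basis_size(n) == factorial(n) for n in range(1, 7))

# 2) Divided differences generate the Schubert polynomials for n = 3 from the
#    top monomial x1^2 * x2 (permutations written in one-line notation).
x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]

def ddiff(f, i):
    # partial_i f = (f - s_i f)/(x_i - x_{i+1}), s_i swapping x_i, x_{i+1}
    swapped = f.subs({X[i - 1]: X[i], X[i]: X[i - 1]}, simultaneous=True)
    return sp.expand(sp.cancel((f - swapped) / (X[i - 1] - X[i])))

S = {(3, 2, 1): sp.expand(x1**2 * x2)}
S[(2, 3, 1)] = ddiff(S[(3, 2, 1)], 1)    # x1*x2
S[(3, 1, 2)] = ddiff(S[(3, 2, 1)], 2)    # x1**2
S[(2, 1, 3)] = ddiff(S[(2, 3, 1)], 2)    # x1
S[(1, 3, 2)] = ddiff(S[(3, 1, 2)], 1)    # x1 + x2
S[(1, 2, 3)] = ddiff(S[(2, 1, 3)], 1)    # 1
```

The six polynomials obtained are exactly the representatives of the six Schubert classes in $CH(F)$ for $n=3$.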
Let $S_{\infty}=\cup_nS_n$ and $P_{\infty}=\Z[X_1,X_2,\ldots]$. For each $w{\hspace{-2pt}\in\hspace{-2pt}}S_{\infty}$, $l(w)$ denotes the [*length*]{} of $w$ and $\partial_w:P_{\infty}{\rightarrow}P_{\infty}$ the corresponding [*divided difference operator*]{} (\[M\], Chp. 2). If $w_0$ is the longest element of $S_n$ and $w{\hspace{-2pt}\in\hspace{-2pt}}S_n$ is arbitrary, the [*Schubert polynomial*]{} $\S_w$ is given by $${\displaystyle}\S_w=\partial_{w^{-1}w_0}(X_1^{n-1}X_2^{n-2}\cdots X_{n-1}).$$ This definition is compatible with the natural inclusion $S_n\subset S_{n+1}$ (with $w(n+1)=n+1$). It follows that $\S_w$ is well defined for any $w{\hspace{-2pt}\in\hspace{-2pt}}S_{\infty}$. We let $\Lambda_n=P_n^{S_n}$ be the ring of symmetric polynomials. The set $\{\S_w\ | \ w{\hspace{-2pt}\in\hspace{-2pt}}S_n \}$ is both a free $\Lambda_n$-basis of $P_n$ and a free $\Z$-basis of $H_n$. Let $S^{(n)}$ denote the set of permutations $w{\hspace{-2pt}\in\hspace{-2pt}}S_{\infty}$ such that $w(n+1)<w(n+2)<\cdots$. Then $\{\S_w\ | \ w{\hspace{-2pt}\in\hspace{-2pt}}S^{(n)} \}$ is a free $\Z$-basis of $P_n$ (\[M\], (4.13)). Define a $\Lambda_n$-valued scalar product on $P_n$ by $\left< f ,g \right>=\partial_{w_0}(fg)$, for $f,g{\hspace{-2pt}\in\hspace{-2pt}}P_n$. If $\{\S^w\}_{w\in S_n}$ is the $\Lambda_n$-basis of $P_n$ dual to the basis $\{\S_w\}$ relative to this product, then $\S^w(X)=w_0\S_{ww_0}(-X)$ (\[M\], (5.12)). For any $h{\hspace{-2pt}\in\hspace{-2pt}}I_n$ we have a decomposition ${\displaystyle}h=\sum_{w\in S_n}\left<h,\S^w\right>\S_w$, where each $\left<h,\S^w\right>$ is in $\Lambda_n\cap I_n$. Calculating Bott-Chern forms {#cbcfs} ============================ Consider the short exact sequence ${\overline}{\E}$ in (\[ses\]) and assume that the metrics on ${\overline}{S}$ and ${\overline}{Q}$ are induced from the metric on $E$. Let $r$, $n$ be the ranks of the bundles $S$ and $E$.
For $\phi{\hspace{-2pt}\in\hspace{-2pt}}I(n)$ homogeneous of degree $k$ we let $\phi^{{\prime}}$ be a $k$-multilinear invariant form on $M_n(\C)$ such that $\phi(A)=\phi^{{\prime}}(A,A,\ldots,A)$. Such forms are most easily constructed for the power sums $p_k$, by defining $$p_k^{{\prime}}(A_1,A_2,\ldots,A_k)={\mbox{Tr}}(A_1A_2\cdots A_k).$$ If $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_m)$ is a partition of $k$, define ${\displaystyle}p_{\lambda}:=\prod_{i=1}^mp_{\lambda_i}$. For $p_{\lambda}$ we can take $p_{\lambda}^{{\prime}}=\prod p_{\lambda_i}^{{\prime}}$. Since the $p_{\lambda}$’s are an additive $\Q$-basis for the ring of symmetric polynomials, we can use the above constructions to find multilinear forms $\phi^{{\prime}}$ for any $\phi{\hspace{-2pt}\in\hspace{-2pt}}I(n)$. For any two matrices $A,B {\hspace{-2pt}\in\hspace{-2pt}}M_n(\C)$ let $${\displaystyle}\phi^{{\prime}}(A;B):=\sum_{i=1}^k\phi^{{\prime}}(A,A,\ldots,A,B_{(i)},A,\ldots,A),$$ where the index $i$ means that $B$ is in the $i$-th position. Consider a local orthonormal frame $s$ for $E$ such that the first $r$ elements generate $S$, and let $K({\overline}{S})$, $K({\overline}{E})$ and $K({\overline}{Q})$ be the curvature matrices of ${\overline}{S}$, ${\overline}{E}$ and ${\overline}{Q}$ with respect to $s$. Let $K_S=\frac{i}{2\pi}K({\overline}{S})$, $K_E=\frac{i}{2\pi}K({\overline}{E})$ and $K_Q=\frac{i}{2\pi}K({\overline}{Q})$. Write $${\displaystyle}K_E= \left( \begin{array}{c|c} K_{11} & K_{12} \\ \hline K_{21} & K_{22} \end{array} \right)$$ where $K_{11}$ is an $r\times r$ submatrix, and consider the matrices $${\displaystyle}K_0= \left( \begin{array}{c|c} K_S & 0 \\ \hline K_{21} & K_Q \end{array} \right) \ \mbox{ and } \ J_r= \left( \begin{array}{c|c} Id_r & 0 \\ \hline 0 & 0 \end{array} \right).$$ Let $u$ be a variable and define $K(u)=uK_E+(1-u)K_0$.
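Before stating the main proposition, the multilinear forms introduced above can be checked numerically. The following sketch (purely illustrative, on random real matrices) verifies that $p_k^{\prime}(A,\ldots,A)=\mathrm{Tr}(A^k)=p_k(A)$ and that $p_k^{\prime}(A;B)=k\,\mathrm{Tr}(A^{k-1}B)$, by cyclicity of the trace, is the first-order term of $p_k(A+tB)$ in $t$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

def p(k, M):
    # Power sum p_k evaluated on a matrix: Tr(M^k).
    return np.trace(np.linalg.matrix_power(M, k))

def p_prime(mats):
    # p_k'(A_1, ..., A_k) = Tr(A_1 A_2 ... A_k)
    prod = np.eye(mats[0].shape[0])
    for M in mats:
        prod = prod @ M
    return np.trace(prod)

# phi(A) = phi'(A, ..., A):
assert np.isclose(p(k, A), p_prime([A] * k))

# phi'(A;B): sum of p_k' with B inserted in each of the k positions.
phi_AB = sum(p_prime([A] * i + [B] + [A] * (k - 1 - i)) for i in range(k))
t = 1e-6
numeric_derivative = (p(k, A + t * B) - p(k, A)) / t
```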
We can then state the main computational result: \[calc\] For $\phi{\hspace{-2pt}\in\hspace{-2pt}}I(n)$, we have $${\displaystyle}{\widetilde}{\phi}({\overline}{\E})=\int_0^1\frac{\phi^{{\prime}}(K(u); J_r)- \phi^{{\prime}}(K(0); J_r)}{u}\,du.$$ Proposition \[calc\] is essentially a consequence of the work of Bott and Chern \[BC\], although we have not been able to find this general statement in the literature. For history and a complete proof, see \[T\]. What will prove most useful to us in the sequel is \[ratcor\] For any $\phi{\hspace{-2pt}\in\hspace{-2pt}}I(n,\Q)$ the Bott-Chern form ${\widetilde}{\phi}({\overline}{\E})$ is a polynomial in the entries of the matrices $K_E$, $K_S$ and $K_Q$ with [*rational*]{} coefficients. [**Proof.**]{} By the equations (\[define\]) it suffices to prove this for $\phi=p_k$ a power sum. Using the multilinear form $p_k^{{\prime}}$ described previously, Proposition \[calc\] gives $${\displaystyle}{\widetilde}{p}_k({\overline}{\E})=k\int_0^1\frac{1}{u}{\mbox{Tr}}((K(u)^{k-1}-K(0)^{k-1})J_r)\,du,$$ so the result is clear. Define the [*harmonic numbers*]{} ${\displaystyle}\cal{H}_k=\sum_{i=1}^k\frac{1}{i}$, $\cal{H}_0=0$. We will need the following useful calculations, which one can deduce from the definition and from Proposition \[calc\]: \(a) ${\widetilde}{c_1^k}({\overline}{\E})=0$ for all $k$ and ${\widetilde}{c}_p({\overline}{\E})=0$ for all $p > {\mbox{rk}}E$. \(b) ${\widetilde}{c}_2({\overline}{\E})=c_1({\overline}{S})-{\mbox{Tr}}K_{11}$ (see \[D\], 10.1 and \[T\]). \(c) If $E$ is flat, then ${\displaystyle}{\widetilde}{c}_k({\overline}{\E})= \cal{H}_{k-1}\sum_{i=0}^{k-1}ic_i({\overline}{S})c_{k-1-i}({\overline}{Q})$ (\[Ma\], Th. 3.4.1). Bott-Chern forms for filtered bundles {#bcfffb} ===================================== In this section we will extend the definition of Bott-Chern forms for an exact sequence of bundles to the case of a filtered bundle.
Let $X$ and $E$ be as in \[bcfs\], and assume that $E$ has a filtration of type $\r$ $$\label{fil2} \E:\ E_0=0 \subset E_1 \subset E_2 \subset\cdots \subset E_m=E$$ by complex subbundles $E_i$, with $\r$ as in \[classgps\]. Let $Q_i=E_i/E_{i-1}$, $1{\leqslant}i{\leqslant}m$ be the quotient bundles. A [*hermitian filtration ${\overline}{\E}$ of type $\r$*]{} is a filtration (\[fil2\]) together with a choice of hermitian metrics on $E$ and on each quotient bundle $Q_i$. Note that we do not assume that metrics have been chosen on the subbundles $E_1,\ldots, E_m$ or that the metrics on the quotients are induced from $E$ in any way. We say that ${\overline}{\E}$ is [*split*]{} if, when $E_i$ is given the induced metric from $E$ for each $i$, the sequence ${\overline}{\E_i} : 0{\rightarrow}{\overline}{E}_{i-1} {\rightarrow}{\overline}{E}_i {\rightarrow}{\overline}{Q}_i {\rightarrow}0$ is split. In this case of course ${\displaystyle}{\overline}{E}=\bigoplus_i{\overline}{Q}_i$. \[bcf\] Let $\phi{\hspace{-2pt}\in\hspace{-2pt}}I(n)$ be an invariant polynomial. There is a unique way to attach to every hermitian filtration of type $\r$ a form ${\widetilde}{\phi}({\overline}{\E})$ in ${\widetilde}{A}(X)$ in such a way that: [*(i)*]{} ${\displaystyle}dd^c{\widetilde}{\phi}({\overline}{\E})= \phi(\bigoplus_{i=1}^m {\overline}{Q}_i)-\phi({\overline}{E})$, [*(ii)*]{} For every map $f:X{\rightarrow}Y$ of complex manifolds, ${\widetilde}{\phi}(f^*({\overline}{\E}))=f^*{\widetilde}{\phi}({\overline}{\E})$, [*(iii)*]{} If ${\overline}{\E}$ is [*split*]{}, then ${\widetilde}{\phi}({\overline}{\E})=0$. If $m=2$, i.e. the filtration $\E$ has length 2, then ${\widetilde}{\phi}({\overline}{\E})$ coincides with the Bott-Chern class ${\widetilde}{\phi}(0 {\rightarrow}{\overline}{Q}_1 {\rightarrow}{\overline}{E} {\rightarrow}{\overline}{Q}_2 {\rightarrow}0)$ defined in [*\[bcfs\]*]{}. [**Proof.**]{} The essential ideas are contained in \[GS2\], Th. 
1.2.2 and sections 7.1.1 and 7.1.2, so we will only sketch the argument. We first show that such forms exist. Given any hermitian filtration $\E$, equip each subbundle $E_i$ with the induced metric from ${\overline}{E}$ and consider the exact sequence $${\displaystyle}{\overline}{\E_i} : 0{\rightarrow}{\overline}{E}_{i-1} {\rightarrow}{\overline}{E}_i {\rightarrow}{\overline}{Q}_i {\rightarrow}0.$$ If ${\widetilde}{\phi}({\overline}{\E})$ and ${\widetilde}{\psi}({\overline}{\E})$ have already been defined then the equations $${\displaystyle}{\widetilde}{\phi + \psi}({\overline}{\E})={\widetilde}{\phi}({\overline}{\E})+{\widetilde}{\psi}({\overline}{\E})$$ $${\widetilde}{\phi\psi}({\overline}{\E})={\widetilde}{\phi}({\overline}{\E})\psi(\bigoplus_{i=1}^m {\overline}{Q}_i)+ \phi({\overline}{E}){\widetilde}{\psi}({\overline}{\E})$$ can be used to define ${\widetilde}{\phi + \psi}$ and ${\widetilde}{\phi\psi}$ (see \[GS2\], Prop. 1.3.1 for the case $m=2$). Therefore it suffices to construct the Bott-Chern classes ${\widetilde}{p_k}$ for the power sums $p_k$. For this we simply let $$\label{*} {\widetilde}{p_k}({\overline}{\E}):=\sum_{i=1}^m{\widetilde}{p_k}({\overline}{\E_i}).$$ Since the ${\widetilde}{p_k}({\overline}{\E_i})$ are functorial and additive on orthogonal direct sums, it is clear that (\[\*\]) satisfies (i)-(iii). The construction for $m=2$ gives the classes of \[bcfs\]. We will use a separate construction of the total Chern forms ${\widetilde}{c}({\overline}{\E})$: For each $i$ with $1{\leqslant}i{\leqslant}m-1$, let ${\overline}{\cal{Q}}_i$ be the sequence ${\displaystyle}0 {\rightarrow}0 {\rightarrow}\bigoplus_{j=i+1}^m {\overline}{Q}_j {\rightarrow}\bigoplus_{j=i+1}^m {\overline}{Q}_j {\rightarrow}0$, and let ${\overline}{\E_i^+}={\overline}{\E_i}\bigoplus {\overline}{\cal{Q}}_i$. Let ${\overline}{\E_m^+}={\overline}{\E_m}$. To each exact sequence ${\overline}{\E_i^+}$ we associate a Bott-Chern form ${\widetilde}{c}({\overline}{\E_i^+})$.
It follows from \[GS2\], Prop. 1.3.2 that $${\displaystyle}{\widetilde}{c}({\overline}{\E_i^+})={\widetilde}{c}({\overline}{\E_i}\bigoplus {\overline}{\cal{Q}}_i) ={\widetilde}{c}({\overline}{\E_i})c(\bigoplus_{j=i+1}^m{\overline}{Q_j}) ={\widetilde}{c}({\overline}{\E_i})\wedge\bigwedge_{j=i+1}^mc({\overline}{Q_j}).$$ It is easy to see that ${\displaystyle}{\widetilde}{c}({\overline}{\E}):= \sum_{i=1}^m {\widetilde}{c}({\overline}{\E_i^+})$ satisfies (i)-(iii). To prove that the form ${\widetilde}{\phi}({\overline}{\E})$ is unique, one constructs a deformation of the filtration ${\overline}{\E}$ to the split filtration, as in \[GS2\], 7.1.2. Let ${\overline}{\O(1)}$ be the canonical line bundle on $\P^1=\P^1(\C)$ with its natural Fubini-Study metric and let $\sigma$ be a section of $\O(1)$ vanishing only at $\infty$. Let $p_1$, $p_2$ be the projections from $X\times\P^1$ to $X$, $\P^1$ respectively. We denote by $E$, $E_i$, $Q_i$ and $\O(1)$ the bundles $p_1^*E$, $p_1^*E_i$, $p_1^*Q_i$, and $p_2^*\O(1)$ on $X\times\P^1$. For a bundle $F$ on $X\times\P^1$ we let $F(k):=F\otimes\O(1)^k$. For each $i\leqslant m-1$, we map $E_i(m-1-i)$ to $E_{i+1}(m-1-i)$ by the inclusion $E_{i}\hookrightarrow E_{i+1}$ and to $E_i(m-i)$ by ${\mbox{id}}_{E_i}\otimes\sigma$. For $1{\leqslant}k{\leqslant}m$ let $${\displaystyle}{\widetilde}{E}_k:=\left( \bigoplus_{i=1}^k E_i(m-i)\right)\Bigl/ \left( \bigoplus_{i=1}^{k-1} E_i(m-1-i)\right).$$ Setting ${\widetilde}{E}:={\widetilde}{E}_m$ we get a filtration of type $\r$ over $X\times \P^1$ : $${\widetilde}{\E} :\ 0 \subset {\widetilde}{E}_1 \subset {\widetilde}{E}_2 \subset \cdots \subset {\widetilde}{E}_m={\widetilde}{E}.$$ The quotients of this filtration are ${\widetilde}{Q}_i={\widetilde}{E}_i/{\widetilde}{E}_{i-1}=Q_i(m-i)$, $1{\leqslant}i{\leqslant}m$. For $z{\hspace{-2pt}\in\hspace{-2pt}}\P^1$, denote by $i_z:X{\rightarrow}X\times \P^1$ the map given by $i_z(x)=(x,z)$.
When $z{\hspace{-2pt}\neq\hspace{-2pt}}\infty$, $i_z^*{\widetilde}{E}\simeq E$, while ${\displaystyle}i_{\infty}^*{\widetilde}{E}\simeq \bigoplus_{i=1}^m Q_i$. Using a partition of unity we can choose hermitian metrics ${\widetilde}{h}_i$ on ${\widetilde}{Q}_i$ and ${\widetilde}{h}$ on ${\widetilde}{E}$ such that the isomorphisms $i_z^*{\widetilde}{Q}_i\simeq Q_i$, $i_0^*{\widetilde}{E}\simeq E$ and ${\displaystyle}i_{\infty}^*{\widetilde}{E}\simeq \bigoplus_{i=1}^m Q_i$ all become isometries. We also let $({\widetilde}{\E},{\widetilde}{h})$ denote the hermitian filtration of type $\r$ defined by these data. Then one shows (as in loc. cit.) that ${\widetilde}{\phi}({\overline}{\E})$ is uniquely determined in ${\widetilde}{A}(X)$ by the formula $${\displaystyle}{\widetilde}{\phi}({\overline}{\E})=-\int_{\P^1}\phi({\widetilde}{E},{\widetilde}{h})\log |z|^2.$$ [**Remark.**]{} Gillet and Soulé used ${\widetilde}{ch}({\overline}{\E})$ to give an explicit description of the Beilinson regulator map on $K_1(X)$, where $X$ is an arithmetic scheme (\[GS2\], 7.1). It is easy to prove that analogues of the properties of Bott-Chern forms for short exact sequences (\[GS2\], 1.3) are true for the above generalization to filtered bundles. In particular the formulas (\[define\]) take the form: $$\label{sumprod} {\widetilde}{\phi + \psi}={\widetilde}{\phi}+{\widetilde}{\psi}, \ \ \ \ \ \ {\widetilde}{\phi\psi}={\widetilde}{\phi}\cdot\psi(\bigoplus_{i=1}^m {\overline}{Q}_i)+ \phi({\overline}{E})\cdot{\widetilde}{\psi}.$$ for any $\phi$, $\psi {\hspace{-2pt}\in\hspace{-2pt}}I(n)$. Using Theorem \[bcf\] and the same argument as in the proof of Theorem 4.8(ii) in \[GS2\], we obtain \[abc\] Let $${\displaystyle}{\overline}{\E} :\ 0\subset {\overline}{E}_1\subset {\overline}{E}_2 \subset\cdots\subset {\overline}{E}_m={\overline}{E}$$ be a hermitian filtration on an arithmetic scheme $X$, with quotient bundles $Q_i$, and let $\phi{\hspace{-2pt}\in\hspace{-2pt}}I(n,\Q)$. 
Then $${\displaystyle}{\widehat}{\phi}(\bigoplus_{i=1}^m{\overline}{Q}_i)- {\widehat}{\phi}({\overline}{E})=a({\widetilde}{\phi}({\overline}{\E})).$$ Assume that the subbundles $E_i$ are given metrics induced from $E$ and the quotient bundles $Q_i$ are given the metrics induced from the exact sequences ${\overline}{\E_i}$. Define matrices $K_{E_i}=\frac{i}{2\pi}K({\overline}{E}_i)$ and $K_{Q_i}=\frac{i}{2\pi} K({\overline}{Q}_i)$ as in \[cbcfs\]. Then the constructions in Theorem \[bcf\] and Corollary \[ratcor\] immediately imply \[ratbcf\] For any $\phi{\hspace{-2pt}\in\hspace{-2pt}}I(n,\Q)$ the Bott-Chern form ${\widetilde}{\phi}({\overline}{\E})$ is a polynomial in the entries of the matrices $K_{E_i}$ and $K_{Q_i}$, $1{\leqslant}i {\leqslant}m$, with [*rational*]{} coefficients. Curvature of homogeneous vector bundles {#hvb} ======================================= Let $G=SL(n,\C)$, $K=SU(n)$ and $P$ be a parabolic subgroup of $G$, with Lie algebras $\g$, $\k$ and $\p$ respectively. Complex conjugation of $\g$ with respect to $\k$ is given by the map $\tau$ with $\tau(A)=-{\overline}{A}^t$. We let $\v=\p\cap\tau(\p)$ and $\n$ be the unique maximal nilpotent ideal of $\p$, so that $\g=\v\oplus\n\oplus\tau(\n)$. Let $\h=\{\mbox{diag}(z_1,\ldots,z_n)\ |\ \sum z_i=0\}$ be the Cartan subalgebra of diagonal matrices in $\g$. The set of roots $\Delta=\{z_i-z_j\ |\ 1{\leqslant}i\neq j {\leqslant}n \}$ is a subset of $\h^*$. We denote the root $z_i-z_j$ by the pair $ij$, and fix a system of positive roots $\Delta_+=\{ij\ |\ i>j\}$. The adjoint representation of $\h$ on $\g$ determines a decomposition ${\displaystyle}\g=\h\oplus\sum_{\a\in\Delta}\g^{\a}$. Here the root space $\g^{\a}=\C e_{\a}$, where $e_{\a}=e_{ij}=E_{ij}$ is the matrix with 1 at the $ij$-th entry and zeroes elsewhere. Set ${\overline}{e}_{ij}=\tau(e_{ij})=-E_{ji}$. Let $V=K\cap P$ and consider the complex manifold $X=G/P=K/V$. 
Let $p:K{\rightarrow}X$ be the quotient map, and let $\Psi\subset\Delta_+$ be such that ${\displaystyle}\n=\sum_{\a\in\Psi}\g^{-\a}$. For $\a,\b{\hspace{-2pt}\in\hspace{-2pt}}\Psi$, the equations $$\begin{aligned} \omega^{\a}(e_{\b})=\delta_{\a\b}, & \omega^{\a}({\overline}{e}_{\b})=0, & \omega^{\a}(\v)=0 \\ {\overline}{\omega}^{\a}(e_{\b})=0, & {\overline}{\omega}^{\a}({\overline}{e}_{\b})=\delta_{\a\b}, & {\overline}{\omega}^{\a}(\v)=0\end{aligned}$$ define elements of the dual space $\g^*$, which we shall regard as left invariant complex one-forms on $K$. A given differential form $\eta$ on $X$ pulls back to $$\label{forms} p^*\eta=\sum_{a,b} f_{ab}\omega^{\a_1}\wedge\ldots\wedge \omega^{\a_r}\wedge {\overline}{\omega}^{\b_1}\wedge\ldots\wedge{\overline}{\omega}^{\b_s}$$ on $K$, with coefficients $f_{ab}{\hspace{-2pt}\in\hspace{-2pt}}C^{\infty}(K)$. Conversely, every $V$-invariant element of $C^{\infty}(K)\otimes \bigwedge\tau(\n)^*\otimes\bigwedge\n^*$ is the pullback to $K$ of a differential form on $X$. A form $\eta$ on $X$ is of $(p,q)$ type precisely when every summand on the right hand side of (\[forms\]) involves $p$ unbarred and $q$ barred terms. \[invdefn\] [*Inv*]{}${}_{\R}(X)$ (respectively [*Inv*]{}${}_{\Q}(X)$) denotes the ring of $K$-invariant forms in the $\R$-subalgebra (respectively $\Q$-subalgebra) of $A(X)$ generated by $ \{\frac{i}{2\pi}\om^{\a}\wedge{\overline}{\om}^{\b} \ | \ \a,\b {\hspace{-2pt}\in\hspace{-2pt}}\Psi \}. $ Suppose now that $\pi:V{\rightarrow}GL(E_0)$ is an irreducible unitary representation of $V$ on a complex vector space $E_0$. $\pi$ defines a homogeneous vector bundle ${\overline}{E}=K\times_VE_0{\rightarrow}X$ which has a $K$-invariant hermitian metric. Extend $\pi$ to a unique holomorphic representation $\pi:P{\rightarrow}GL(E_0)$, and denote the induced representation of $\p$ by the same letter. 
Then ${\overline}{E}=G\times_PE_0$ is a holomorphic hermitian vector bundle over $X$ which gives a complex structure to $K\times_VE_0$. In \[GrS\], equation $(4.4)_X$, Griffiths and Schmid calculate the $K$-invariant curvature matrix $K({\overline}{E})$ explicitly in terms of the above data. Their result is $$\label{grs} K({\overline}{E})=\sum_{\a,\b\in\Psi}\pi([e_{\a},e_{-\b}]_{\v})\otimes \omega^{\a}\wedge{\overline}{\omega}^{\b}.$$ The invariant differential forms giving the Chern classes of homogeneous line bundles were given by Borel \[B\]; see the introduction to \[GrS\] for more references. Let $Y=F(\C)\simeq SL(n,\C)/B=SU(n)/S(U(1)^n)$ be the complex flag variety and let ${\overline}{E}$ denote the trivial hermitian vector bundle over $Y$, with the tautological hermitian filtration $${\displaystyle}{\overline}{\E}:\ 0 \subset {\overline}{E}_1 \subset {\overline}{E}_2 \subset\cdots \subset {\overline}{E}_n={\overline}{E}$$ with quotient line bundles ${\overline}{L}_i$ and all metrics induced from the metric on ${\overline}{E}$. All of these bundles are homogeneous, and we want to use equation (\[grs\]) to compute their curvature matrices. Note that (\[grs\]) applies directly only to the line bundles ${\overline}{L}_i$, as the higher rank bundles are not given by irreducible representations of the torus $S(U(1)^n)$. We can avoid this problem by considering the grassmannian $Y_k=G_k(\C)=SU(n)/S(U(k)\times U(n-k))$ and the natural projection $\rho:Y{\rightarrow}Y_k$. Now (\[grs\]) can be applied to the universal bundle ${\overline}{E}_k$ over $Y_k$ and the curvature matrix $K({\overline}{E}_k)$ pulls back via $\rho$ to the required matrix over $Y$. In fact by projecting to a partial flag variety one can compute the curvature matrix of any quotient bundle $E_l/E_k$. The representations $\pi$ of $V$ inducing these bundles are the obvious ones in each case. 
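The bracket $[e_{\a},e_{-\b}]_{\v}$ appearing in (\[grs\]) is easy to check by hand on elementary matrices. The following sketch (not part of the text; all helper names are ours) verifies that for $\a=\b=ij$ the bracket $[E_{ij},E_{ji}]=E_{ii}-E_{jj}$ is diagonal, hence lies in the Cartan subalgebra $\h$:

```python
# Sketch (not from the text): for the root alpha = ij, the bracket
# [e_alpha, e_{-alpha}] = [E_ij, E_ji] = E_ii - E_jj is diagonal,
# i.e. it lies in the Cartan subalgebra h, as used in formula (grs).

def E(i, j, n):
    """Elementary n x n matrix with a 1 in row i, column j (0-indexed)."""
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def bracket(A, B):
    """Commutator [A, B] = AB - BA."""
    AB, BA = matmul(A, B), matmul(B, A)
    n = len(A)
    return [[AB[r][c] - BA[r][c] for c in range(n)] for r in range(n)]

n = 4
for i in range(n):
    for j in range(n):
        if i != j:
            expected = [[1 if r == c == i else -1 if r == c == j else 0
                         for c in range(n)] for r in range(n)]
            assert bracket(E(i, j, n), E(j, i, n)) == expected
```

For $\a\neq\b$ the same functions compute $[E_{ij},E_{lk}]$, whose $\v$-component is what enters (\[grs\]).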
What remains is a straightforward application of equation (\[grs\]), so we will describe the answer without belaboring the details. We have defined differential forms $\om^{ij}$, ${\overline}{\om}^{ij}$ on $K=SU(n)$ which we identify with corresponding forms on $Y$. With this notation, we can state (compare \[GrS\], $(4.13)_X$) : \[curvmat\] Let $k<l$ and consider the vector bundle $Q_{lk}=E_l/E_k$ over $F(\C)$. Let the curvature matrix of ${\overline}{Q}_{lk}$ with its induced metric be $\Theta=\{\Theta_{\a\b}\}_{k+1{\leqslant}\a,\b{\leqslant}l}$. Then $${\displaystyle}\Theta_{\a\b}=\sum_{i{\leqslant}k}\omega^{\a i}\wedge{\overline}{\omega}^{\b i}- \sum_{j>l}\omega^{j\a}\wedge{\overline}{\omega}^{j\b}.$$ For notational convenience we let $\om_{ij}:=\gamma\om^{ji}$, ${\overline}{\om}_{ij}:=\gamma{\overline}{\om}^{ji}$ and $\Om_{ij}:=\om_{ij}\wedge{\overline}{\om}_{ij}$, $(i<j)$, where $\gamma$ is a constant such that $\gamma^2=\frac{i}{2\pi}$. Then we have \[grscor1\] $${\displaystyle}c_1({\overline}{L}_k)=\sum_{i<k}\Om_{ik}-\sum_{j>k}\Om_{kj}$$ $${\displaystyle}K_{{\overline}{E}_k}=\frac{i}{2\pi}K({\overline}{E}_k)=-\left\{\sum_{j>k}\om_{\a j} \wedge{\overline}{\om}_{\b j}\right\}_{1{\leqslant}\a,\b {\leqslant}k}$$ Let ${\displaystyle}\Omega:=\bigwedge_{i<j}\Om_{ij}$. To compute classical intersection numbers on the flag variety using the differential forms in Corollary \[grscor1\] it suffices to know ${\displaystyle}\int_Y\Omega$. If $x_i=-c_1({\overline}{L}_i)$, it is well known that $\eta=\S_{w_0}(x)=x_1^{n-1}x_2^{n-2}\cdots x_{n-1}$ is dual to the class of a point in $Y$; thus ${\displaystyle}\int_Y\eta=1$. An easy calculation shows that ${\displaystyle}\eta=\prod_{k=1}^{n-1}k!\cdot \Omega$, thus ${\displaystyle}\int_Y\Omega=\prod_{k=1}^{n-1}\frac{1}{k!}$. Invariant arithmetic Chow rings {#afvs} =============================== It is well known that the arithmetic variety $F$ has a cellular decomposition in the sense of \[Fu1\], Ex. 1.9.1. 
It follows that one can use the excision exact sequence for the groups $CH^{*,*}(F)$ (cf. \[G\], 8) to show that $CH^{p,p-1}(F)=0$ (compare \[Ma\], Lemma 4.0.6). Therefore the exact sequence (\[ex1\]) summed over $p$ gives $$\label{flagex} 0 \longrightarrow {\widetilde}{A}(F_{\R}) \stackrel{a}\longrightarrow {\widehat}{CH}(F) \stackrel{\zeta}\longrightarrow CH(F)\longrightarrow 0.$$ Recall that ${\widetilde}{A}(F_{\R})={\mbox{Ker}}\zeta$ is an ideal of ${\widehat}{CH}(F)$ whose ${\widehat}{CH}(F)$-module structure is given as follows: if $\a{\hspace{-2pt}\in\hspace{-2pt}}{\widehat}{CH}(F)$ and $\eta{\hspace{-2pt}\in\hspace{-2pt}}{\widetilde}{A}(F_{\R})$, then $\a\cdot\eta=\om(\a)\wedge\eta$. ${\widetilde}{A}(F_{\R})$ is not a square zero ideal, but its product is induced by $\theta\cdot \eta=(dd^c\theta)\wedge\eta$. This product is well defined and commutative (\[GS1\], 4.2.11). We equip $E(\C)$ with a trivial hermitian metric. This metric induces metrics on all the $L_i$, which thus become hermitian line bundles ${\overline}{L}_i$. Recall from \[classgps\] that $CH(F)$ has a free $\Z$-basis of monomials in the Chern classes $c_1(L_i)$. The unique map of abelian groups $\epsilon:CH(F){\rightarrow}{\widehat}{CH}(F)$ sending $\prod c_1(L_i)^{k_i}$ to $\prod {\widehat}{c}_1({\overline}{L}_i)^{k_i}$ when $k_i{\leqslant}n-i$ for all $i$ is then a splitting of (\[flagex\]). Thus we have an isomorphism of abelian groups $$\label{bigiso} {\widehat}{CH}(F)\simeq CH(F)\oplus {\widetilde}{A}(F_{\R}).$$ As an analogue of the Arakelov Chow ring we define an [*invariant arithmetic Chow ring*]{} ${\widehat}{CH}_{inv}(F)$ as follows. Let ${\mbox{Inv}}^{p,p}(F_{\R})$ be the group of $(p,p)$-forms $\eta$ in ${\mbox{Inv}}_{\R}(F(\C))$ satisfying $F_{\infty}^*\eta=(-1)^p\eta$, and set ${\mbox{Inv}}(F_{\R})=\oplus_p {\mbox{Inv}}^{p,p}(F_{\R})$. Let ${\widetilde}{{\mbox{Inv}}}(F_{\R})\subset {\widetilde}{A}(F_{\R})$ be the image of ${\mbox{Inv}}(F_{\R})$ in ${\widetilde}{A}(F_{\R})$.
Define the rings ${\mbox{Inv}}(F_{\Q})$ and ${\widetilde}{{\mbox{Inv}}}(F_{\Q})$ similarly, replacing $\R$ by $\Q$ in the above. \[invdef\] The invariant arithmetic Chow ring ${\widehat}{CH}_{inv}(F)$ is the subring of ${\widehat}{CH}(F)$ generated by $\epsilon(CH(F))$ and $a({\widetilde}{\mbox{\em Inv}}(F_{\R}))$. Suppose that $x,y{\hspace{-2pt}\in\hspace{-2pt}}CH(F)$ and view $x$ and $y$ as elements of ${\widehat}{CH}(F)$ using the inclusion $\epsilon$. In \[ai\] we will see that under the isomorphism (\[bigiso\]), $xy{\hspace{-2pt}\in\hspace{-2pt}}CH(F)\oplus {\widetilde}{{\mbox{Inv}}}(F_{\Q})$. It follows that there is an exact sequence of abelian groups $$\label{invex} 0 \longrightarrow {\widetilde}{{\mbox{Inv}}}(F_{\R}) \stackrel{a}\longrightarrow {\widehat}{CH}_{inv}(F) \stackrel{\zeta}\longrightarrow CH(F)\longrightarrow 0$$ which splits as before, giving \[chinv\] There is an isomorphism of abelian groups $${\widehat}{CH}_{inv}(F)\simeq CH(F)\oplus{\widetilde}{\mbox{\em Inv}}(F_{\R}).$$ [**Remark 1:**]{} One can define another ‘invariant arithmetic Chow ring’ $${\widehat}{CH}_{inv}^{{\prime}}(F):=\om^{-1}({\mbox{Inv}}(F_{\R})),$$ where $\om$ is the ring homomorphism defined in \[ait\]. There is a natural inclusion ${\widehat}{CH}_{inv}(F)\hookrightarrow{\widehat}{CH}_{inv}^{{\prime}}(F)$; we do not know if these two rings coincide. [**Remark 2:**]{} The arithmetic Chern classes of the natural homogeneous vector bundles over $F$ are all contained in the ring ${\widehat}{CH}_{inv}(F)$. In fact one need not use real coefficients for this; it suffices to take $CH(F)\oplus {\widetilde}{{\mbox{Inv}}}(F_{\Q})$ with the induced product from ${\widehat}{CH}(F)$. As there are bounds on the denominators that occur, it follows that [*the subring of ${\widehat}{CH}(F)$ generated by $\epsilon(CH(F))$ is a finitely generated abelian group*]{}. However it seems that this group is too small to contain the characteristic classes of all the vector bundles of interest. 
Calculating arithmetic intersections {#ai} ==================================== In this section we describe an effective procedure for computing arithmetic intersection numbers on the complete flag variety $F$. One has a tautological hermitian filtration of the trivial bundle ${\overline}{E}$ over $F$ $${\displaystyle}{\overline}{\E}:\ 0 \subset {\overline}{E}_1 \subset {\overline}{E}_2 \subset\cdots \subset {\overline}{E}_n={\overline}{E}$$ as in \[hvb\]. Recall that the inverse of the isomorphism $CH(F)\simeq P_n/I_n$ sends $[X_i]$ to $-c_1(L_i)$. Let $x_i=-c_1({\overline}{L}_i)$ and ${\widehat}{x}_i=-{\widehat}{c}_1({\overline}{L}_i)$ for $1{\leqslant}i {\leqslant}n$. If $\phi{\hspace{-2pt}\in\hspace{-2pt}}\Lambda_n\otimes_{\Z}\Q$ is a homogeneous symmetric polynomial of positive degree then $\phi$ defines a characteristic class. Theorem \[abc\] applied to the hermitian filtration ${\overline}{\E}$ shows that $${\displaystyle}\phi({\widehat}{x}_1,{\widehat}{x}_2,\ldots,{\widehat}{x}_n)=(-1)^{\deg \phi}\,{\widetilde}{\phi}({\overline}{\E})$$ in the arithmetic Chow ring ${\widehat}{CH}(F)$. In particular for $\phi=e_i$ an elementary symmetric polynomial this gives $${\displaystyle}e_i({\widehat}{x}_1,{\widehat}{x}_2,\ldots,{\widehat}{x}_n)=(-1)^i\,{\widetilde}{c}_i({\overline}{\E}).$$ Let $h$ be a homogeneous polynomial in the ideal $I_n$. We will give an algorithm for computing $h({\widehat}{x}_1,{\widehat}{x}_2,\ldots,{\widehat}{x}_n)$ as a class in ${\widetilde}{{\mbox{Inv}}}(F_{\Q})$: [**Step 1:**]{} Decompose $h$ as a sum $h=\sum e_if_i$ for some polynomials $f_i$. More canonically one may use the equality $${\displaystyle}h=\sum_{w\in S_n} \left< h , \S^w \right> \S_w$$ from \[classgps\].
Since $a(x)y=a(x\omega(y))$ in ${\widehat}{CH}(F)$ and $\omega(f_i({\widehat}{x}_1,\ldots,{\widehat}{x}_n))= f_i(x_1,\ldots,x_n)$, we have $${\displaystyle}h({\widehat}{x}_1,{\widehat}{x}_2,\ldots,{\widehat}{x}_n)=\sum_{i=1}^n (-1)^i\,{\widetilde}{c}_i({\overline}{\E})f_i(x_1,x_2,\ldots,x_n)=$$ $${\displaystyle}=\sum_{w\in S_n} (-1)^{\deg h +l(w)}\,{\widetilde}{\left< h , \S^w \right>}({\overline}{\E}) \S_w(x_1,x_2,\ldots,x_n).$$ [**Step 2:**]{} By Corollary \[ratbcf\], we may express the forms ${\widetilde}{c}_i({\overline}{\E})$ and ${\displaystyle}{\widetilde}{\left< h , \S^w \right>}({\overline}{\E})$ as polynomials in the entries of the matrices $K_{E_i}$ and $K_{L_i}=c_1({\overline}{L_i})$ with rational coefficients. In practice this may be done recursively for the Chern forms ${\widetilde}{c}_i$ as follows: Use equation (\[\*\]) and the construction in Corollary \[ratcor\] to obtain the power sum forms ${\widetilde}{p}_i({\overline}{\E})$, then apply the formulas (\[sumprod\]) to Newton’s identity $${\displaystyle}p_i-c_1p_{i-1}+c_2p_{i-2}-\cdots+(-1)^iic_i=0.$$ On the other hand Corollary \[grscor1\] gives explicit expressions for all the above curvature matrices in terms of differential forms on $F(\C)$. Thus we obtain formulas for ${\widetilde}{c}_i({\overline}{\E})$ and ${\displaystyle}{\widetilde}{\left< h , \S^w \right>}({\overline}{\E})$ in terms of these forms. For example, using the notation of \[hvb\], we have ${\widetilde}{c}_1({\overline}{\E})=0$ and ${\displaystyle}{\widetilde}{c}_2({\overline}{\E})=-\sum_{i<j}\Om_{ij}$. [**Proof.**]{} Use (\[\*\]), properties (a) and (b) at the end of \[cbcfs\], and the identity $2c_2=c_1^2-p_2$.   [**Step 3:**]{} Substitute the forms obtained in Step 2 into the formulas given in Step 1. Note that the result is the class of a form in ${\mbox{Inv}}(F_{\Q})$ since all the ingredients are functorial for the natural $U(n)$ action on $F(\C)$.
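The recursive passage from the power sum forms to the Chern forms via Newton’s identity can be sanity-checked on scalars. The sketch below is our own (numbers stand in for the forms); it recovers the elementary symmetric functions of a list from its power sums:

```python
# Sketch: Newton's identity p_i - c_1 p_{i-1} + ... + (-1)^i i c_i = 0
# determines the c_i recursively from the p_i, mirroring the recursive
# construction of the Chern forms c~_i from the power sum forms p~_i.
from fractions import Fraction
from itertools import combinations
from math import prod

def elementary_from_power_sums(p):
    """p[1..m] are power sums (p[0] is an unused placeholder);
    return [c_0, ..., c_m] via c_i = (-1)^{i+1}/i * sum_j (-1)^j c_j p_{i-j}."""
    m = len(p) - 1
    c = [Fraction(1)] + [Fraction(0)] * m
    for i in range(1, m + 1):
        s = sum((-1) ** j * c[j] * p[i - j] for j in range(i))
        c[i] = Fraction((-1) ** (i + 1), i) * s
    return c

xs = [2, -1, 3, 5]                      # stand-ins for the Chern roots
m = len(xs)
p = [Fraction(0)] + [Fraction(sum(x ** i for x in xs)) for i in range(1, m + 1)]
c = elementary_from_power_sums(p)
# direct elementary symmetric functions, for comparison
e = [Fraction(sum(prod(t) for t in combinations(xs, i))) for i in range(m + 1)]
assert c == e
```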
In particular, if $k_i$ are nonnegative integers with $\sum k_i= \dim{F}={n \choose 2}+1$, the monomial $X_1^{k_1}\cdots X_n^{k_n}$ is in the ideal $I_n$. If $X_1^{k_1}\cdots X_n^{k_n}=\sum e_if_i$, then we have $${\displaystyle}{\widehat}{x}_1^{k_1}{\widehat}{x}_2^{k_2}\cdots{\widehat}{x}_n^{k_n}= \sum_i(-1)^i\,{\widetilde}{c}_i({\overline}{\E})f_i(x_1,\ldots,x_n).$$ Now if $\Om=\bigwedge \Om_{ij}$ is defined as in \[hvb\], we have shown that $${\displaystyle}{\widetilde}{c}_i({\overline}{\E})f_i(x_1,\ldots,x_n)=r_i\Om$$ for some rational number $r_i$. Therefore $${\displaystyle}{\widehat}{\deg}({\widehat}{x}_1^{k_1}{\widehat}{x}_2^{k_2}\cdots{\widehat}{x}_n^{k_n})= \frac{1}{2}\sum_i(-1)^i\,r_i\int_{F(\C)}\Om= \frac{1}{2}\sum_i(-1)^i\,r_i\prod_{k=1}^{n-1}\frac{1}{k!}\, .$$ Of course this equation implies \[mainthm\] The arithmetic Chern number $ {\displaystyle}{\widehat}{\deg}({\widehat}{x}_1^{k_1}{\widehat}{x}_2^{k_2}\cdots{\widehat}{x}_n^{k_n}) $ is a rational number. [**Remark.**]{}  For $a<b$ let ${\overline}{Q}_{b,a}=E_b/E_a$, equipped with the induced metric. Then one can show that any intersection number $ {\displaystyle}{\widehat}{\deg}(\prod_i {\widehat}{c}_{m_i}({\overline}{Q}_{b_ia_i})^{k_i})$ for $ \sum k_im_i(b_i-a_i)=\dim F $ is rational. This is done by using the hermitian filtrations $${\displaystyle}0\subset {\overline}{Q}_{a+1,a}\subset{\overline}{Q}_{a+2,a}\subset\cdots\subset{\overline}{Q}_{b,a}$$ and Theorem \[abc\] to reduce the problem to the intersections occurring in Theorem \[mainthm\]. To compute arithmetic intersections of the form $(0,\eta)\cdot (0,\eta^{{\prime}})$ with $\eta,\eta^{{\prime}}{\hspace{-2pt}\in\hspace{-2pt}}{\widetilde}{{\mbox{Inv}}}(F_{\Q})$, we need to know the value of $dd^c\eta$. For this one may use the Maurer-Cartan structure equations on $SU(n)$ (cf. \[GrS\], Chp. 1); all such intersections lie in ${\widetilde}{{\mbox{Inv}}}(F_{\Q})$.
Although there is an effective algorithm for computing arithmetic Chern numbers, explicit general formulas seem difficult to obtain. There are some general facts we can deduce for those intersections that pull back from grassmannians, for instance that ${\widehat}{x}_1^{n+1}={\widehat}{x}_n^{n+1}=0$. There is also a useful symmetry in these intersections: \[symprop\] $ {\widehat}{x}_1^{k_1}{\widehat}{x}_2^{k_2}\cdots{\widehat}{x}_n^{k_n}= {\widehat}{x}_n^{k_1}{\widehat}{x}_{n-1}^{k_2}\cdots{\widehat}{x}_1^{k_n}, $  for all integers $k_i{\geqslant}0$. [**Proof.**]{} This is a consequence of the involution $\nu : F(\C) {\rightarrow}F(\C)$ sending $${\displaystyle}{\overline}{\E} : \ 0 \subset {\overline}{E}_1 \subset {\overline}{E}_2 \subset \cdots \subset {\overline}{E}_n={\overline}{E}$$ to $${\displaystyle}{\overline}{\E}^{\bot} : \ 0={\overline}{E}^{\bot} \subset {\overline}{E}_{n-1}^{\bot} \subset {\overline}{E}_{n-2}^{\bot} \subset \cdots \subset 0^{\bot}={\overline}{E}.$$ Over ${\mbox{Spec}}\Z$, $\nu$ corresponds to the map of flag varieties sending $E_i$ to the quotient $E/E_i$. If ${\widehat}{x}_i^{\bot}$ are the arithmetic Chern classes obtained from ${\overline}{\E}^{\bot}$, then using the split exact sequences $0{\rightarrow}{\overline}{E}_i {\rightarrow}{\overline}{E} {\rightarrow}{\overline}{E}_i^{\bot} {\rightarrow}0$ we obtain $${\displaystyle}{\widehat}{x}_i^{\bot}=-{\widehat}{c}_1({\overline}{L}_i^{\bot})= -{\widehat}{c}_1({\overline}{E}_{n-i}^{\bot})+{\widehat}{c}_1({\overline}{E}_{n+1-i}^{\bot})= {\widehat}{c}_1({\overline}{E}_{n-i})-{\widehat}{c}_1({\overline}{E}_{n+1-i})={\widehat}{x}_{n+1-i}.$$ Since $\nu$ is an isomorphism, the result follows. Arithmetic Schubert calculus {#asc} ============================ Let $P_n$, $I_n$, $\Lambda_n$ and $S^{(n)}$ be as in \[classgps\]. The Chow ring $CH(F)$ is isomorphic to the quotient $H_n=P_n/I_n$. 
Recall that $H_n$ has a natural basis of Schubert polynomials $\{\S_w\ | \ w{\hspace{-2pt}\in\hspace{-2pt}}S_n \}$, and that the $\S_w$ for $w{\hspace{-2pt}\in\hspace{-2pt}}S^{(n)}$ form a free $\Z$-basis of $P_n$. We let $T_n=S^{(n)}\smallsetminus S_n$. The key property of Schubert polynomials that we require for the ‘arithmetic Schubert calculus’ is described in \[schlemma\] If $w{\hspace{-2pt}\in\hspace{-2pt}}T_n$, then $\S_w{\hspace{-2pt}\in\hspace{-2pt}}I_n$. In fact we have a decomposition $${\displaystyle}\S_w=\sum_{v\in S_n}\left<\S_w,\S^v\right>\S_v,$$ where $ \left<\S_w,\S^v\right>{\hspace{-2pt}\in\hspace{-2pt}}\Lambda_n\cap I_n$. [**Proof.**]{} Assume first that $w(1)>w(2)>\cdots >w(n)$, so that $w$ is [*dominant*]{}. Then by \[M\], (4.7) we have $${\displaystyle}\S_w=X_1^{w(1)-1}X_2^{w(2)-1}\cdots X_n^{w(n)-1}.$$ If $w\notin S_n$ then clearly $w(1)>n$, so $X_1^{w(1)-1}{\hspace{-2pt}\in\hspace{-2pt}}I_n$ and thus $\S_w{\hspace{-2pt}\in\hspace{-2pt}}I_n$. If $w{\hspace{-2pt}\in\hspace{-2pt}}T_n$ is arbitrary, form $w^{{\prime}}{\hspace{-2pt}\in\hspace{-2pt}}T_n$ by rearranging $(w(1),w(2),\ldots,w(n))$ in decreasing order and letting $w^{{\prime}}(i)= w(i)$ for $i>n$. We have shown that $\S_{w^{{\prime}}}{\hspace{-2pt}\in\hspace{-2pt}}I_n$. There is an element $v{\hspace{-2pt}\in\hspace{-2pt}}S_n$ such that $wv=w^{{\prime}}$ and $l(v)=l(w^{{\prime}})-l(w)$. Note that since $\partial_v$ is $\Lambda_n$-linear, $\partial_v I_n\subset I_n$. Therefore (\[M\], (4.2)): $\S_w=\partial_v\S_{wv}=\partial_v\S_{w^{{\prime}}}{\hspace{-2pt}\in\hspace{-2pt}}I_n.$ The decomposition claimed now follows, as in \[classgps\]. It is well known that there is an equality in $P_{\infty}$ $$\label{cuvws} \S_u\S_v=\sum_{w\in S_{\infty}}c_{uv}^w\S_w,$$ where the $c_{uv}^w$ are nonnegative integers that vanish whenever $l(w)\neq l(u)+l(v)$ (\[M\], (A.6)).
A particular case of this is [*Monk’s formula:*]{} if $s_k$ denotes the transposition $(k,k+1)$, then $${\displaystyle}\S_{s_k}\S_w=\sum_t \S_{wt}$$ summed over all transpositions $t=(i,j)$ such that $i{\leqslant}k <j$ and $l(wt)=l(w)+1$ (\[M\], ($4.15^{{\prime}{\prime}}$)). We now express arithmetic intersections in ${\widehat}{CH}(F)$ using the basis of Schubert polynomials. Lemma \[schlemma\] is the main reason why this basis facilitates our task. This property (for Schur functions) also plays a crucial role in the arithmetic Schubert calculus for grassmannians (see \[pfvs\] and \[Ma\], Th. 5.2.1). For each $w{\hspace{-2pt}\in\hspace{-2pt}}S^{(n)}$, let ${\widehat}{\S}_w=\S_w({\widehat}{x}_1, \ldots,{\widehat}{x}_n)$. If $w{\hspace{-2pt}\in\hspace{-2pt}}T_n$ then Lemma \[schlemma\] and the discussion in \[ai\] imply that ${\widehat}{\S}_w{\hspace{-2pt}\in\hspace{-2pt}}{\widetilde}{{\mbox{Inv}}}(F_{\Q})$; we denote these classes by ${\widetilde}{\S}_w$. We have $${\displaystyle}{\widetilde}{\S}_w=\sum_{v\in S_n}(-1)^{l(v)+l(w)} {\widetilde}{\left<\S_w,\S^v\right>}({\overline}{\E})\S_v(x_1,\ldots,x_n).$$ We can now describe the multiplication in ${\widehat}{CH}_{inv}(F)$: \[slring\] Any element of ${\widehat}{CH}_{inv}(F)$ can be expressed uniquely in the form ${\displaystyle}\sum_{w\in S_n}a_w{\widehat}{\S}_w+\eta$, where $a_w{\hspace{-2pt}\in\hspace{-2pt}}\Z$ and $\eta\in{\widetilde}{\mbox{\em Inv}}(F_{\R})$. For $u,v{\hspace{-2pt}\in\hspace{-2pt}}S_n$ we have $$\label{punchline} {\widehat}{\S}_u\cdot{\widehat}{\S}_v=\sum_{w\in S_n}c_{uv}^w{\widehat}{\S}_w+ \sum_{w\in T_n}c_{uv}^w{\widetilde}{\S}_w,$$ $${\displaystyle}{\widehat}{\S}_u\cdot \eta=\S_u(x_1,\ldots,x_n)\wedge\eta, \ \ \ \ \mbox{and} \ \ \ \ \eta\cdot \eta^{{\prime}}=(dd^c\eta)\wedge\eta^{{\prime}},$$ where ${\widetilde}{\S}_w\in{\widetilde}{\mbox{\em Inv}}(F_{\Q})$, $\eta$, $\eta^{{\prime}}\in{\widetilde}{\mbox{\em Inv}}(F_{\R})$ and the $c_{uv}^w$ are as in [*(\[cuvws\])*]{}. 
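The transpositions contributing to Monk’s formula can be enumerated mechanically. The sketch below is our own illustration (one-line notation, with the length computed as an inversion count); it lists the terms of $\S_{s_k}\S_w$ that remain inside a fixed $S_m$, while enlarging $m$ reveals the terms that leave $S_n$:

```python
# Sketch: enumerate the terms of classical Monk's formula
# S_{s_k} * S_w = sum over transpositions t = (i, j), i <= k < j,
# with l(w t) = l(w) + 1, inside S_m (one-line notation, 1-indexed).
from itertools import combinations

def length(w):
    """Coxeter length = number of inversions of the one-line word w."""
    return sum(1 for a, b in combinations(range(len(w)), 2) if w[a] > w[b])

def apply_transposition(w, i, j):
    """Right multiplication w t for t = (i, j): swap positions i and j."""
    v = list(w)
    v[i - 1], v[j - 1] = v[j - 1], v[i - 1]
    return tuple(v)

def monk_terms(w, k):
    """All w t with t = (i, j), i <= k < j <= len(w), l(w t) = l(w) + 1."""
    m = len(w)
    out = []
    for i in range(1, k + 1):
        for j in range(k + 1, m + 1):
            wt = apply_transposition(w, i, j)
            if length(wt) == length(w) + 1:
                out.append(wt)
    return out

# S_{s_1} * S_{s_1} = S_{(3,1,2)} in S_3, i.e. x_1 * x_1 = x_1^2:
print(monk_terms((2, 1, 3), 1))  # → [(3, 1, 2)]
```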
[**Proof.**]{} The first statement is a corollary of Theorem \[chinv\]. Equation (\[punchline\]) follows immediately from the formal identity (\[cuvws\]) and our definition of ${\widetilde}{\S}_w$. The rest is a consequence of properties of the multiplication in ${\widehat}{CH}(F)$ discussed in \[afvs\] and \[ai\]. [**Remark.**]{} It is interesting to note that we also have, for $u,v{\hspace{-2pt}\in\hspace{-2pt}}T_n$, $${\displaystyle}{\widetilde}{\S}_u\cdot{\widetilde}{\S}_v=(dd^c{\widetilde}{\S}_u)\wedge{\widetilde}{\S}_v= \sum_{w\in T_n}c_{uv}^w{\widetilde}{\S}_w$$ in ${\widetilde}{{\mbox{Inv}}} (F_{\Q})$. Applying (\[punchline\]) when $\S_u=\S_{s_k}$ is a special Schubert class gives (Arithmetic Monk Formula): $${\displaystyle}{\widehat}{\S}_{s_k}\cdot{\widehat}{\S}_w=\sum_s {\widehat}{\S}_{ws} + \sum_t {\widetilde}{\S}_{wt},$$ where the first sum is over all transpositions $s=(i, j){\hspace{-2pt}\in\hspace{-2pt}}S_n$ such that $i{\leqslant}k <j$ and $l(ws)=l(w)+1$, and the second over all transpositions $t=(i,n+1)$ with $i{\leqslant}k$ and $w(i)>w(j)$ for all $j$ with $i<j {\leqslant}n$. Examples {#ex} ======== Heights ------- The flag variety $F$ has a natural pluri-Plücker embedding $j:F\hookrightarrow \P^N_{\Z}$. $j$ is defined as the composition of a product of Plücker embeddings followed by a Segre embedding; if $Q_i=E/E_i$, then $j$ is associated to the line bundle ${\displaystyle}Q=\bigotimes_{i=1}^{n-1} \det (Q_i)$. Let ${\overline}{\O}(1)$ denote the canonical line bundle over projective space, equipped with its canonical metric (so that $c_1({\overline}{\O}(1))$ is the Fubini-Study form). The [*height*]{} of $F$ relative to ${\overline}{\O}(1)$ (cf. 
\[Fa\], \[BoGS\], \[S\]) is defined by $${\displaystyle}h_{{\overline}{\O}(1)}(F) ={\widehat}{\deg}({\widehat}{c}_1({\overline}{\O}(1))^{{n \choose 2}+1}\vert \ F).$$ Since $${\displaystyle}j^*({\widehat}{c}_1({\overline}{\O}(1)))={\widehat}{c}_1({\overline}{Q})= -\sum_{i=1}^{n-1}{\widehat}{c}_1({\overline}{E}_i)= \sum_{i=1}^{n-1} (n-i){\widehat}{x}_i= \sum_{i=1}^{n-1} {\widehat}{\S}_{s_i},$$ we have that $${\displaystyle}h_{{\overline}{\O}(1)}(F)= {\widehat}{\deg}({\widehat}{c}_1({\overline}{Q})^{{n \choose 2}+1}\vert \ F)= {\widehat}{\deg}((\sum_{i=1}^{n-1} {\widehat}{\S}_{s_i})^{{n \choose 2}+1}).$$ Now Theorems \[mainthm\] and \[slring\] immediately imply \[slheight\] The height $h_{{\overline}{\O}(1)}(F)$ is a rational number. Intersections in $F_{1,2,3}$ ---------------------------- In this section we calculate the arithmetic intersection numbers for the classes ${\widehat}{x}_i$ in ${\widehat}{CH}(F)$ when $n=3$, so $F=F_{1,2,3}$. Over $F$ we have 3 exact sequences $${\displaystyle}{\overline}{\E}_i : \ 0{\rightarrow}{\overline}{E}_{i-1} {\rightarrow}{\overline}{E}_i {\rightarrow}{\overline}{L}_i {\rightarrow}0 \ \ \ \ \ \ 1{\leqslant}i {\leqslant}3.$$ We adopt the notation of \[hvb\] and define $\Om_{ij}= \om_{ij}\wedge{\overline}{\om}_{ij}$. Then Corollary \[grscor1\] gives $${\displaystyle}x_1=\Om_{12}+\Om_{13}, \ \ \ \ x_2=-\Om_{12}+\Om_{23}, \ \ \ \ x_3=-\Om_{13}-\Om_{23},$$ $${\displaystyle}K_{E_2}=-\left( \begin{array}{cc} \Om_{13} & \om_{13}\wedge{\overline}{\om}_{23} \\ \om_{23}\wedge{\overline}{\om}_{13} & \Om_{23} \end{array}\right).$$ We refer now to the properties of the forms ${\widetilde}{c_k}$ mentioned at the end of \[cbcfs\]. By property (a) ${\widetilde}{c}({\overline}{\E}_1)=0$, while (b) gives ${\widetilde}{c}({\overline}{\E}_2)=-\Om_{12}$. Property (c) applied to ${\overline}{\E}_3$ gives ${\widetilde}{c}({\overline}{\E}_3)=-\Om_{13}-\Om_{23}+ 3\Om_{13}\Om_{23}$. 
Using the construction of the Bott-Chern form for the total Chern class given in the proof of Theorem \[bcf\], we find that $$\label{firstkey} {\widetilde}{c}({\overline}{\E})= -\Om_{12}-\Om_{13}-\Om_{23}-\Om_{12}\Om_{13}-\Om_{12}\Om_{23}+ 3\Om_{13}\Om_{23}.$$ Notice that this expression for ${\widetilde}{c}({\overline}{\E})$ is not unique as a class in ${\widetilde}{{\mbox{Inv}}}(F_{\R})$. For instance, we can add the exact form $c_1({\overline}{L}_1)c_1({\overline}{L}_2)-c_2({\overline}{E}_2)=\Om_{12}\Om_{23}- \Om_{12}\Om_{13}-\Om_{13}\Om_{23}$ to get $$\label{key} {\widetilde}{c}({\overline}{\E})= -\Om_{12}-\Om_{13}-\Om_{23}-2\Om_{12}\Om_{13}+2\Om_{13}\Om_{23}.$$ The Bott-Chern form (\[key\]) is the key to computing any intersection number ${\widehat}{\deg}({\widehat}{x}_1^{k_1} {\widehat}{x}_2^{k_2}{\widehat}{x}_3^{k_3})$, following the prescription of \[ai\]. (Of course we can just as well use (\[firstkey\]), with the same results.) For example, since $x_1^4=x_1^3e_1-x_1^2e_2+x_1e_3$, we have $${\displaystyle}{\widehat}{x}_1^4=x_1^2(\Om_{12}+\Om_{13}+\Om_{23})+ x_1(2\Om_{12}\Om_{13}-2\Om_{13}\Om_{23})= 2\Om-2\Om=0.$$ On the other hand, a similar calculation for ${\widehat}{x}_2^4$ gives $${\displaystyle}{\widehat}{x}_2^4=-x_2^2{\widetilde}{c}_2({\overline}{\E})- x_2{\widetilde}{c}_3({\overline}{\E})=-2\Om+4\Om=2\Om.$$ Thus ${\displaystyle}{\widehat}{\deg}({\widehat}{x}_2^4)=\int_{F(\C)}\Om=\frac{1}{2}$. 
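The two displayed computations can be replayed exactly in the commutative ring generated by $\Om_{12}$, $\Om_{13}$, $\Om_{23}$ with the sole relations $\Om_{ij}^2=0$ (the $\Om_{ij}$ are $(1,1)$-forms, so they commute). The sketch below uses our own encoding of forms as dictionaries of monomials; it confirms ${\widehat}{x}_1^4=0$ and ${\widehat}{x}_2^4=2\Om$:

```python
# Sketch: the ring generated by Om12, Om13, Om23 with Om_ij^2 = 0.
# An element is a dict {frozenset of generator names: integer coefficient}.
from itertools import product as cartesian

def mul(f, g):
    out = {}
    for (a, ca), (b, cb) in cartesian(f.items(), g.items()):
        if a & b:              # a repeated Om_ij kills the product
            continue
        m = a | b
        out[m] = out.get(m, 0) + ca * cb
    return {m: c for m, c in out.items() if c}

def add(*fs):
    out = {}
    for f in fs:
        for m, c in f.items():
            out[m] = out.get(m, 0) + c
    return {m: c for m, c in out.items() if c}

def scal(c, f):
    return {m: c * v for m, v in f.items()}

def Om(ij):
    return {frozenset([ij]): 1}

x1 = add(Om("12"), Om("13"))                   # x_1 = Om12 + Om13
x2 = add(scal(-1, Om("12")), Om("23"))         # x_2 = -Om12 + Om23
ct2 = scal(-1, add(Om("12"), Om("13"), Om("23")))          # c~_2 from (key)
ct3 = add(scal(-2, mul(Om("12"), Om("13"))),
          scal(2, mul(Om("13"), Om("23"))))                # c~_3 from (key)

def hat_fourth_power(x):
    """x^4 = e1 x^3 - e2 x^2 + e3 x gives hat(x)^4 = -x^2 c~_2 - x c~_3."""
    return add(scal(-1, mul(mul(x, x), ct2)), scal(-1, mul(x, ct3)))

top = mul(Om("12"), mul(Om("13"), Om("23")))   # the top form Om
assert hat_fourth_power(x1) == {}              # hat(x_1)^4 = 0
assert hat_fourth_power(x2) == scal(2, top)    # hat(x_2)^4 = 2 Om
```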
The following is a table of all the intersection numbers ${\widehat}{\deg}({\widehat}{x}_1^{k_1}{\widehat}{x}_2^{k_2}{\widehat}{x}_3^{k_3})$ (multiplied by 4):

  ------------- ----------------------- ------------- ----------------------- ------------- -----------------------
  $k_1k_2k_3$   $4\,{\widehat}{\deg}$   $k_1k_2k_3$   $4\,{\widehat}{\deg}$   $k_1k_2k_3$   $4\,{\widehat}{\deg}$
  400           0                       004           0                       040           2
  310           5                       013           5                       121           2
  301           -5                      103           -5                      202           9
  220           -1                      022           -1                      211           -4
  130           -1                      031           -1                      112           -4
  ------------- ----------------------- ------------- ----------------------- ------------- -----------------------

Note that the numbers in the first two columns are equal, in agreement with Proposition \[symprop\]. We can use the table to compute the height of $F$ in its pluri-Plücker embedding in $\P^8_{\Z}$: $${\displaystyle}h_{{\overline}{\O}(1)}(F_{1,2,3})={\widehat}{\deg}((2{\widehat}{x}_1+{\widehat}{x}_2)^4)=\frac{65}{2}.$$ Partial flag varieties {#pfvs} ====================== In this final section we show how to generalize the previous work to partial flags $F(\r)$. Our results may thus be regarded as an extension of those of Maillot \[Ma\] in the grassmannian case. As usual we have a tautological filtration of type $\r$ $$\label{fil3} \E:\ 0 \subset E_1 \subset E_2 \subset\cdots \subset E_m=E$$ of the trivial bundle over $F(\r)$, with quotient bundles $Q_i$. Equip $E(\C)$ with the trivial hermitian metric, inducing metrics on all the above bundles. The calculations of \[hvb\] apply equally well to $X_{\r}=F(\r)(\C)$. Proposition \[curvmat\] describes the curvature matrices of all the relevant homogeneous vector bundles, and one can compute classical intersection numbers on $X_{\r}$ in a similar fashion. Call a permutation $w{\hspace{-2pt}\in\hspace{-2pt}}S_{\infty}$ an [*$\r$-permutation*]{} if $w(i)<w(i+1)$ for all $i$ not contained in $\{r_1,\ldots,r_{m-1}\}$. Let $S_{n,\r}$ and $T_{n,\r}$ be the sets of $\r$-permutations in $S_n$ and $T_n$, respectively. For such $w$ one knows (cf.
\[Fu2\], 8) that the Schubert polynomial $\S_w$ is symmetric in the variables in each of the groups $$\label{vars} X_1,\ldots,X_{r_1};X_{r_1+1},\ldots,X_{r_2};\ldots ; X_{r_{m-2}+1},\ldots,X_{r_{m-1}}.$$ The product group ${\displaystyle}H=\prod_{i=1}^m S_{r_i-r_{i-1}}$ acts on $P_n$, the factors for $i<m$ by permuting the variables in the corresponding group of (\[vars\]), while $S_{n-r_{m-1}}$ permutes the remaining variables $X_{r_{m-1}+1},\ldots ,X_n$. If $P_{n,\r}=P_n^H$ is the ring of invariants and $I_{n,\r}=P_{n,\r} \cap I_n$, then $CH(F(\r))\simeq P_{n,\r}/I_{n,\r}$. The set of Schubert polynomials $\S_w$ for all $w{\hspace{-2pt}\in\hspace{-2pt}}S_{n,\r}$ is a free $\Z$-basis for $P_{n,\r}/I_{n,\r}$. Let $w{\hspace{-2pt}\in\hspace{-2pt}}S_{n,\r}$. If we regard each of the groups of variables (\[vars\]) as the Chern roots of the bundles $Q_1,Q_2,\ldots,Q_{m-1}$, it follows that we may write $\S_w$ as a polynomial $\S_{w,\r}$ in the Chern classes of the $Q_i$, $1{\leqslant}i {\leqslant}m-1$. The class of $\S_{w,\r}$ in $CH(F(\r))$ is that of the corresponding Schubert variety in $F(\r)$ (see loc. cit. for the relative case). By putting ‘hats’ on all the quotient bundles involved (with their induced metrics as in \[asc\]) we obtain classes ${\widehat}{\S}_{w,\r}$ in ${\widehat}{CH}_{inv}(F(\r))$. The analysis of \[afvs\] remains valid; the map $\epsilon$ can be defined by $\epsilon(\S_{w,\r})={\widehat}{\S}_{w,\r}$. In particular we have an invariant arithmetic Chow ring ${\widehat}{CH}_{inv}(F(\r))$ for which Theorem \[chinv\] holds. If $F(\r)=G_d$ is a Grassmannian over ${\mbox{Spec}}\Z$, then ${\widehat}{CH}_{inv}(G_d)$ coincides with the Arakelov Chow ring $CH({\overline}{G_d})$, where $G_d(\C)$ is given its natural invariant Kähler metric, as in \[Ma\]. Suppose that $\r^{{\prime}}$ is a refinement of $\r$, so we have a projection $p:F(\r^{{\prime}}){\rightarrow}F(\r)$.
In this case there are natural inclusions ${\widetilde}{{\mbox{Inv}}}(F(\r)_{\R})\hookrightarrow {\widetilde}{{\mbox{Inv}}}(F(\r^{{\prime}})_{\R})$ and $CH(F(\r))\hookrightarrow CH(F(\r^{{\prime}}))$. Applying the five lemma to the two exact sequences (\[invex\]) shows that the pullback $p^*:{\widehat}{CH}_{inv}(F(\r))\hookrightarrow {\widehat}{CH}_{inv}(F(\r^{{\prime}}))$ is an injection. Note however that this is not compatible with the splitting of Theorem \[chinv\]. One can compute arithmetic intersections in ${\widehat}{CH}_{inv}(F(\r))$ as in \[ai\]. Applying Theorem \[abc\] to the filtration (\[fil3\]) (with induced metrics as above) gives the key relation required for the calculation. In particular we see that all the arithmetic Chern numbers are rational, as is the Faltings height of $F(\r)$ in its natural pluri-Plücker embedding. Theorem \[slheight\] thus generalizes the corresponding result of Maillot mentioned in \[intro\]. There is an arithmetic Schubert calculus in ${\widehat}{CH}_{inv}(F(\r))$ analogous to that for complete flags. The analogue of Lemma \[schlemma\] is true, that is $\S_w{\hspace{-2pt}\in\hspace{-2pt}}I_{n,\r}$ if $w{\hspace{-2pt}\in\hspace{-2pt}}T_{n,\r}$ (this is an easy consequence of Lemma \[schlemma\] itself). It follows that for $w{\hspace{-2pt}\in\hspace{-2pt}}T_{n,\r}$, ${\widehat}{\S}_{w,\r}$ is a class ${\widetilde}{\S}_{w,\r}{\hspace{-2pt}\in\hspace{-2pt}}{\widetilde}{{\mbox{Inv}}}(F(\r)_{\Q})$. The analogue of (\[punchline\]) in this context is $$\label{partialpunch} {\widehat}{\S}_{u,\r}\cdot{\widehat}{\S}_{v,\r}=\sum_{w\in S_{n,\r}}c_{uv}^w{\widehat}{\S}_{w,\r}+ \sum_{w\in T_{n,\r}}c_{uv}^w{\widetilde}{\S}_{w,\r}$$ where $u,v{\hspace{-2pt}\in\hspace{-2pt}}S_{n,\r}$ and the numbers $c_{uv}^w$ are as in (\[cuvws\]). The remaining statements of Theorem \[slring\] require no further change. [**Remark.**]{} Equation (\[partialpunch\]) is not a direct generalization of the analogous statement in \[Ma\], Theorem 5.2.1. 
However one can reformulate Maillot’s results using the classes ${\widehat}{c}_*({\overline}{S})$ instead of ${\widehat}{c}_*({\overline}{Q}-{\overline}{\E})$ (notation as in \[Ma\], 5.2). With this modification, the arithmetic Schubert calculus described above (for $m=2$) and that in \[Ma\] coincide. In the grassmannian case $\S_{w,\r}$ is a Schur polynomial and there are explicit formulas for ${\widetilde}{\S}_{w,\r}$ in terms of harmonic forms on $G_d(\C)$ (as in loc. cit.). [\[SABK\]]{} A. Borel : [*Kählerian Coset Spaces of Semisimple Lie Groups*]{}, Proc. Nat. Acad. Sci. U.S.A. 40 (1954), 1147-1151. J.-B. Bost, H. Gillet and C. Soulé : [*Heights of Projective Varieties and Positive Green Forms*]{}, Journal of the AMS 7 (1994), 903-1027. R. Bott and S. S. Chern : [*Hermitian Vector Bundles and the Equidistribution of the Zeroes of their Holomorphic Sections*]{}, Acta Math. 114 (1968), 71-112. P. Deligne : [*Le Determinant de la Cohomologie*]{}, in Current Trends in Arithmetical Algebraic Geometry, Contemp. Math. 67 (1987), 93-178. G. Faltings : [*Diophantine Approximation on Abelian Varieties*]{}, Ann. of Math. 133 (1991), 549-576. W. Fulton : [*Intersection Theory*]{}, Ergebnisse der Math. 2 (1984), Springer-Verlag. W. Fulton : [*Flags, Schubert Polynomials, Degeneracy Loci, and Determinantal Formulas*]{}, Duke Math. J. 65 no. 3 (1992), 381-420. H. Gillet : [*Riemann-Roch Theorems for Higher Algebraic $K$-theory*]{}, Advances in Mathematics 40 no. 3 (1981), 203-289. H. Gillet and C. Soulé : [*Arithmetic Intersection Theory*]{}, Publ. math., I.H.E.S. 72 (1990), 94-174. H. Gillet and C. Soulé : [*Characteristic Classes for Algebraic Vector Bundles with Hermitian Metrics, I, II*]{}, Annals of Math. 131 (1990), 163-203 and 205-238. P. Griffiths and W. Schmid : [*Locally Homogeneous Complex Manifolds*]{}, Acta Math. 123 (1969), 253-302. A. Lascoux and M.-P. Schützenberger : [*Polynômes de Schubert*]{}, C. R. Acad. Sci. Paris 295 (1982), 629-633. I. 
Macdonald : [*Notes on Schubert Polynomials*]{}, Laboratoire de Combinatoire et d’Informatique Mathématique (1991). V. Maillot : [*Un Calcul de Schubert Arithmétique*]{}, Duke Math. J. 80 no. 1 (1995), 195-221. C. Soulé : [*Hermitian Vector Bundles on Arithmetic Varieties*]{}, Lecture Notes, Santa Cruz 1995. C. Soulé, D. Abramovich, J.-F. Burnol and J. Kramer : [*Lectures on Arakelov Geometry*]{}, Cambridge Studies in Advanced Mathematics 33 (1992). H. Tamvakis : [*Bott-Chern Forms and Arithmetic Intersections*]{}, to appear in L’Enseign. Math.
---
abstract: 'Two-dimensional Bloch electrons in a uniform magnetic field exhibit a complex energy spectrum. When static electric and magnetic modulations with a checkerboard pattern are superimposed on the uniform magnetic field, more structures and symmetries of the spectra are found, due to the additional adjustable parameters from the modulations. We give a comprehensive report on these new symmetries. We have also found an electric-modulation induced energy gap, whose magnitude is independent of the strength of either the uniform or the modulated magnetic field. This study is applicable to experimentally accessible systems and is related to investigations on frustrated antiferromagnetism.'
author:
- 'Ming-Che Chang'
- 'Min-Fong Yang'
title: Energy Spectrum of Bloch Electrons Under Checkerboard Field Modulations
---

introduction
============

When the spectrum of a two-dimensional (2D) Bloch electron in a uniform magnetic field is plotted in the energy-flux diagram, a self-similar structure with fractal properties emerges.[@hofstadter76] Such a complex structure, called the Hofstadter spectrum, arises due to the commensurability between two length scales in this system: the lattice constant and the cyclotron radius. The Hofstadter spectrum is one of the earliest predictions of fractal structure in solid-state physics. Subsequently, it was found that not only does the energy spectrum have self-similarity, but the wave function also exhibits scaling behavior and can be analyzed using the renormalization group.[@thouless83] Because of its beautiful structure, the Hofstadter spectrum has attracted many researchers’ attention, and the spectra for different 2D lattice symmetries have been reported.
Besides the square lattice, there are also the triangular lattice,[@claro79] the honeycomb lattice,[@rammal85] the Kagome lattice,[@xiao02] and a bipartite periodic structure with hexagonal symmetry.[@vidal98] These are all studied within the framework of Bloch electrons in a uniform magnetic field, usually assuming nearest-neighbor (NN) couplings $t_1$ only. Including and varying the next-nearest-neighbor (NNN) couplings $t_2$ leads to band-crossings accompanied by exchange of quantized Hall conductances between bands.[@hatsugai90; @lee94] For a square lattice, detailed scaling analysis reveals a bicritical point at $t_1=2t_2$, accompanied by an interesting topological change of the spectrum.[@han94] Spectra for systems with couplings beyond next-nearest neighbors have also been studied.[@barelli92] In addition, the external magnetic field, rather than being uniform, can be periodically modulated with a pattern unrelated to the original lattice. The simplest situation, in which a magnetic lattice overlaps with the electric lattice, is realized when a ferromagnetic grid is deposited on a semiconductor.[@gerhardts96] The interfacial stress between the two materials would naturally induce an electric grid with the same period and symmetry as the ferromagnetic grid. More generally, there can also be a magnetic modulation with the pattern of a 1D strip,[@oh95] or a 2D checkerboard,[@shi97; @oh99; @ito99] superimposed on the electric square lattice. The checkerboard configuration has been realized experimentally using a superconducting Nb-network with periodic magnetic Dy-islands.[@ito99] The calculations of the Hofstadter spectra provide the basis to study such artificial networks. A direct observation of the Hofstadter spectrum has been realized using microwaves[@kuhl98] or acoustic waves[@richoux02] transmitting through an array of [*macroscopic*]{} scatterers.
However, a fractal [*electronic*]{} spectrum is significantly more difficult to realize in a usual solid, whose lattice constant is only a few angstroms, so that a magnetic field of the order of $10^4$ Teslas is required. For 10 Teslas or less, we can only probe the part of the Hofstadter spectrum that reduces to the familiar Landau levels with roughly equal spacings in energy. In the last decade, different superlattice structures with much larger lattice constants have been used to cope with this high-field problem.[@albrecht01] Besides, several physical systems are closely related to the Hofstadter problems and offer alternative angles of investigation, for example, the studies of a superconductor in a vortex state,[@morita01] a superconducting network in a magnetic field,[@ito99] and a junction of three quantum wires.[@chamon03] Furthermore, recent advances on optical lattices make it possible to implement a lattice Hamiltonian resembling the effects of magnetic fields with [*neutral*]{} atoms.[@jaksch03] This offers great opportunities since not only the magnetic field, but also the lattice symmetry, the potential strength, and the relative importance of many-body effects can be adjusted in such a system. This paper is motivated by a study very different from those mentioned above. In a recent paper,[@misguich01] Misguich and coworkers, by using hard-core bosons to represent the spin degrees of freedom, and using the Chern-Simons transformation to transmute bosons to fermions, mapped a 2D frustrated antiferromagnetic problem to a Hofstadter problem. This approach was subsequently used to study the magnetization properties of the $J_1-J_2$ Heisenberg model on a square lattice.[@chang02] After suitable mathematical mappings and a mean-field approximation, the magnetization problem can be reduced to a Hofstadter problem with [*both*]{} electric and magnetic super-structures superimposed on the original lattice.
This motivates us to consider the checkerboard super-structure, which is related to the Néel phase in the magnetization problem, with congruous electric [*and*]{} magnetic modulations. Couplings up to next-nearest neighbors are considered, which are essential to cause magnetic frustration in the $J_1-J_2$ Heisenberg model.[@note0] In this paper, we make a comprehensive survey of the symmetries of the Hofstadter spectra with field modulations. Some of the symmetries already exist without modulations, such as the ones related to reversing the direction of the magnetic flux, and shifting the flux in a plaquette by two flux quanta (see items II and III in Sec. II). Some of the other symmetries that are closely related to field modulations are reported for the first time. In particular, when the system is subject to staggered $\pi$-fluxes, its spectrum in the $E-\phi$ diagram has an up-down symmetry even with NNN couplings (see Fig. 5), which is quite unexpected since NNN couplings usually destroy such a symmetry.[@hatsugai90] Besides the studies on symmetries, for systems with only NN couplings, we find a simple algebraic connection between the spectra with and without the [*electric*]{} checkerboard field. We also find a flux-independent energy gap induced by the electric modulation (see Fig. 3), which can be explained by using the algebraic relation just mentioned. This paper is organized as follows. The theoretical formulation of the system with the checkerboard super-structure, as well as the discussion of the symmetries of the energy spectra, can be found in Sec. II. Major features of the Hofstadter spectra are discussed in Sec. III. We summarize and conclude our results in Sec. IV. The proofs of the checkerboard-translation symmetry of the spectrum and of the existence of the flux-independent energy gap are given in the appendices.
theoretical analysis
====================

The tight-binding Hamiltonian describing the motion of an electron in a magnetic field is given by $$H = -\frac{1}{2} \sum_{\left\langle i,j\right\rangle} \left( t_{ij} e^{i\theta_{ij}} \, f^+_i f_j + h.c. \right) + \sum_i V_i f^+_i f_i \; , \label{tight-binding}$$ where $\theta_{ij}$ is the magnetic phase factor. For clarity, we will replace the label $i$ by $(n, m)$ in the following, which denotes the $(n, m)$ plaquette as well as the lattice point at the lower left corner of the plaquette. Without loss of generality, we take the uniform part of the magnetic flux through a plaquette as $\phi=2\pi p/q$ with relatively prime integers $p$ and $q$. For the checkerboard modulation, we have $\delta\phi_i / 2\pi = - \Delta_\phi (-1)^{n+m}$ and $V_i \equiv \Delta_V (-1)^{n+m}$ (see Fig. 1). The Landau gauge is used such that the magnetic phase factors become $$\left\{ \begin{array}{l} \theta_{n+1,\;m;\;n,\;m}=0 \; , \nonumber \\ \theta_{n,\;m+1;\;n,\;m}=n\phi +(-1)^{n+m} \pi \Delta_\phi \; , \\ \theta_{n+1,\;m+1;\;n,\;m}=\theta_{n,\;m+1;\;n+1,\;m}= \left( n + {1 \over 2} \right) \phi \; . \nonumber \end{array}\right. \label{theta_2D}$$ Due to the modulation in the $y$ direction and under the gauge choice in Eq. (\[theta\_2D\]), the tight-binding Hamiltonian in Eq. (\[tight-binding\]) now becomes invariant under the $y$-translation $m \rightarrow m+2$. Thus the Bloch theorem gives $$f_{n,\;m}=e^{-ik_y m} c_{n,\;m}(k_y) \label{Bloch2D_y}$$ for $|k_y|\leq\pi/2$, where $c_{n,\;m+2}(k_y)=c_{n,\;m}(k_y)$ and $c_{n,\;m}(k_y+\pi)=c_{n,\;m}(k_y)$.
Therefore, the generalized Harper equation becomes $$\begin{aligned} \label{Harper_2D} &&{\bf A}_n \vec{c}_n(k_y) + {\bf B}_n \vec{c}_{n+1}(k_y) + {\bf B}_{n-1} \vec{c}_{n-1}(k_y) \nonumber \\ && = E \vec{c}_n(k_y)\end{aligned}$$ where $\vec{c}_n(k_y)=(c_{n,\;1}(k_y), c_{n,\;2}(k_y))^T$ and $${\bf A}_n=\left( \begin{array}{cc} -(-1)^n \Delta_V & -t_1 \cos (\chi_n) e^{i\delta_n} \\ -t_1 \cos (\chi_n) e^{-i\delta_n} & (-1)^n \Delta_V \end{array}\right) \label{matrix_A}$$ $${\bf B}_n=\left( \begin{array}{cc} -t_1 /2 & -t_2 \cos (\eta_n) \\ -t_2 \cos (\eta_n) & -t_1 /2 \end{array}\right) \label{matrix_B}$$ with $\chi_n = n\phi + k_{y}$, $\delta_n = (-1)^n \pi\Delta_\phi$, and $\eta_n = (n+1/2)\phi + k_{y}$. It can be easily checked that $${\bf A}_{n+Q}={\bf A}_{n},~ {\bf B}_{n+Q}={\bf B}_{n},$$ where $Q=q$ ($2q$) for an even (odd) integer $q$. Thus, the $n$ in Eq. (\[Harper\_2D\]) satisfies the condition $1\le n\le Q$. Besides, because of the magnetic translation symmetry, the primitive unit cell consists of $q$ ($Q$) plaquettes without (with) checkerboard modulation. The Bloch condition along the $x$ direction can be written as $$c_{n,\;m}(k_y)=e^{-ik_{x} n}\psi_{n,\;m}(k_x, k_y) \label{Bloch2D_x}$$ for $|k_{x}|\leq\pi/Q$, where $\psi_{n+Q,\;m}(k_x, k_y)=\psi_{n,\;m}(k_x, k_y)$ and $\psi_{n,\;m}(k_x+2\pi/Q, k_y+\pi)=\psi_{n,\;m}(k_x, k_y)$. Now we only need to solve the problem within the first magnetic Brillouin zone given by $|k_{x}|\leq\pi/Q$ and $|k_{y}|\leq\pi/2$.
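These periodicity relations, and the $\sigma_x$-conjugation identities used later in Appendix A, are easy to verify numerically. The following Python/NumPy sketch (function name and parameter values are ours, chosen arbitrarily) builds the blocks of Eqs. (\[matrix\_A\]) and (\[matrix\_B\]) and checks both:

```python
import numpy as np

def harper_blocks(n, p, q, ky, d_phi=0.1, d_v=0.2, t1=1.0, t2=0.3):
    """The 2x2 blocks A_n(ky), B_n(ky) of the generalized Harper equation."""
    phi = 2 * np.pi * p / q
    chi = n * phi + ky
    delta = (-1) ** n * np.pi * d_phi
    eta = (n + 0.5) * phi + ky
    A = np.array([[-(-1) ** n * d_v, -t1 * np.cos(chi) * np.exp(1j * delta)],
                  [-t1 * np.cos(chi) * np.exp(-1j * delta), (-1) ** n * d_v]])
    B = np.array([[-t1 / 2, -t2 * np.cos(eta)],
                  [-t2 * np.cos(eta), -t1 / 2]])
    return A, B

sx = np.array([[0.0, 1.0], [1.0, 0.0]])     # Pauli matrix sigma_x
for p, q in [(1, 3), (1, 4), (2, 5)]:
    phi = 2 * np.pi * p / q
    Q = q if q % 2 == 0 else 2 * q          # Q = q (2q) for even (odd) q
    for n in range(1, Q + 1):
        A, B = harper_blocks(n, p, q, ky=0.37)
        # periodicity: A_{n+Q} = A_n and B_{n+Q} = B_n
        A2, B2 = harper_blocks(n + Q, p, q, ky=0.37)
        assert np.allclose(A, A2) and np.allclose(B, B2)
        # translation identities of Appendix A:
        # sx A_{n+1}(ky) sx = A_n(ky + phi), sx B_{n+1}(ky) sx = B_n(ky + phi)
        A1, B1 = harper_blocks(n + 1, p, q, ky=0.37)
        As, Bs = harper_blocks(n, p, q, ky=0.37 + phi)
        assert np.allclose(sx @ A1 @ sx, As) and np.allclose(sx @ B1 @ sx, Bs)
```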
Thus we obtain the eigenvalue equation, ${\bf M}\Psi=E\Psi$, where $\Psi=(\psi_{1,\;1},\psi_{1,\;2},\psi_{2,\;1},\psi_{2,\;2},\cdots, \psi_{Q,\;1},\psi_{Q,\;2})^T$ and $${\bf M}=\left( \begin{array}{ccccccc} {\bf A}_{1} & {\bf B}_{1}e^{-ik_x} & 0 & \cdots & 0 & 0 &{\bf B}_{Q} e^{ik_x} \\ {\bf B}_{1}e^{ik_x} & {\bf A}_{2} & {\bf B}_{2}e^{-ik_x} & \cdots & 0 & 0 & 0 \\ 0 & {\bf B}_{2}e^{ik_x} & {\bf A}_{3} & \cdots & 0 & 0 & 0 \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & 0 & \cdots & {\bf B}_{Q-2}e^{ik_x} & {\bf A}_{Q-1} & {\bf B}_{Q-1}e^{-ik_x} \\ {\bf B}_{Q}e^{-ik_x} & 0 & 0 & \cdots & 0 & {\bf B}_{Q-1}e^{ik_x} & {\bf A}_{Q} \end{array}\right). \label{matrix2D}$$ We calculate the energy eigenvalues for all the values of $\vec{k}$ in the first magnetic Brillouin zone, $|k_{x}|\leq\pi/Q$ and $|k_{y}|\leq\pi/2$, by directly diagonalizing the $2Q \times 2Q$ Hamiltonian matrix ${\bf M}(\vec{k})$. As indicated in Fig. 1, the system has the checkerboard translational symmetry. That is, the system is invariant under the lattice translation by two lattice constants along either the $x$ or the $y$ directions, or under the translation $(n,\;m)\rightarrow(n+1,\;m+1)$ along the diagonal. Thus one expects that, under the above transformations, the energy spectrum obtained by the eigenvalue problem with the Hamiltonian matrix ${\bf M}(\vec{k})$ should remain the same, which is not obvious as seen from Eq. (\[matrix2D\]). In Appendix A, we prove that the Hamiltonian matrices before and after the translations are identical up to a shift in $k_y$ and thus give the same energy spectra. We show that there are several general symmetries of the spectra in the $E-\phi$ diagram, which can be used to reduce the amount of calculations. Similar discussion for the systems without field modulations can be found in Ref. \[\].
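The direct diagonalization just described can be sketched in a few lines of Python/NumPy (our own illustration; the names are not from the paper). Besides assembling ${\bf M}(\vec{k})$ block by block and checking that it is Hermitian, the sketch verifies at a sample $\vec{k}$ the relation $E=\pm[E_0^2+\Delta_V^2]^{1/2}$ between the spectra with and without electric modulation for $t_2=0$, which is proved in Appendix B:

```python
import numpy as np

def hofstadter_matrix(p, q, kx, ky, d_phi=0.0, d_v=0.0, t1=1.0, t2=0.0):
    """The 2Q x 2Q Bloch Hamiltonian M(k) of Eq. (matrix2D)."""
    phi = 2 * np.pi * p / q
    Q = q if q % 2 == 0 else 2 * q          # Q = q (2q) for even (odd) q
    M = np.zeros((2 * Q, 2 * Q), dtype=complex)
    for j in range(Q):
        n = j + 1                           # block label n = 1, ..., Q
        chi = n * phi + ky
        delta = (-1) ** n * np.pi * d_phi
        eta = (n + 0.5) * phi + ky
        A = np.array([[-(-1) ** n * d_v, -t1 * np.cos(chi) * np.exp(1j * delta)],
                      [-t1 * np.cos(chi) * np.exp(-1j * delta), (-1) ** n * d_v]])
        B = np.array([[-t1 / 2, -t2 * np.cos(eta)],
                      [-t2 * np.cos(eta), -t1 / 2]])
        M[2 * j:2 * j + 2, 2 * j:2 * j + 2] = A
        jp = (j + 1) % Q                    # B_Q wraps around with phases e^{+/- i kx}
        M[2 * j:2 * j + 2, 2 * jp:2 * jp + 2] += B * np.exp(-1j * kx)
        M[2 * jp:2 * jp + 2, 2 * j:2 * j + 2] += B * np.exp(1j * kx)
    return M

# a sample point in the magnetic Brillouin zone
p, q, kx, ky, d_phi, d_v = 1, 3, 0.2, 0.3, 0.1, 0.3
M = hofstadter_matrix(p, q, kx, ky, d_phi, d_v)
assert np.allclose(M, M.conj().T)           # M(k) is Hermitian

# for t2 = 0 the spectrum obeys E = +/- sqrt(E0^2 + d_v^2) (Appendix B)
E0 = np.linalg.eigvalsh(hofstadter_matrix(p, q, kx, ky, d_phi, 0.0))
E = np.linalg.eigvalsh(M)
assert np.allclose(np.sort(np.abs(E)), np.sort(np.sqrt(E0 ** 2 + d_v ** 2)))
```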
In the following, the collection of energy subbands at a flux $\phi$ (per plaquette) modulated by $(\Delta_\phi,\Delta_V)$ is denoted by $E(\phi,\Delta_\phi,\Delta_V)$. It has the following symmetries:[@note1] 1. \[s1\] $E(\phi,\Delta_\phi,\Delta_V)=E(-\phi,-\Delta_\phi, \Delta_V)$\ This follows from using two (three-dimensional) coordinate systems which are mirror images of each other with respect to the $x-y$ plane. The physics, and hence the energy spectra, should be the same in these two frames with opposite handedness. 2. \[s2\] $E(\phi,\Delta_\phi,\Delta_V)=E(-\phi, \Delta_\phi,\Delta_V)=E(-\phi,-\Delta_\phi,-\Delta_V)$\ The first equality follows from rotating the (three-dimensional) coordinate frame by 180 degrees around either the $x$-axis or the $y$-axis; the second is from shifting the coordinate by one plaquette along either the $x$-axis or the $y$-axis. 3. \[s3\] $E(\phi,\Delta_\phi,\Delta_V)=E(\phi+2\phi_0, \Delta_\phi,\Delta_V)$\ This results from the following two facts: (i) the smallest hopping loop for electrons encloses half of a plaquette; (ii) the Aharonov-Bohm phase for this closed loop is unchanged after adding one flux quantum within this loop. 4. \[s4\] $E(\phi,\Delta_\phi,\Delta_V)=\Re E(\phi+\phi_0, -\Delta_\phi,\Delta_V)$, where the operator $\Re$ flips the spectrum with respect to the horizontal $E=0$ line.\ This follows from the two transformations: (i) $f_{n,m} \rightarrow (-1)^{n+m}f_{n,m}$ and $\phi \rightarrow \phi+\phi_0$; (ii) $f_{n,m} \rightarrow f_{n,m+1}$ and $\Delta_\phi \rightarrow -\Delta_\phi$. It can be shown that the overall sign of the Hamiltonian in Eq. (\[tight-binding\]) changes after these two transformations; thus the spectra should have symmetry IV after shifting $\phi$ by one $\phi_0$ and reversing the direction of $\Delta_\phi$. 5.
\[s5\] $E(\phi,\Delta_\phi,\Delta_V)=E(\phi-\phi_0,\Delta_\phi-1, \Delta_V)$\ This is because of the freedom in shifting the fluxes, $(\phi_A,\phi_B) \rightarrow (\phi_A-2\phi_0,\phi_B)$, where $\phi_A=\phi+2\pi\Delta_\phi$ and $\phi_B=\phi-2\pi\Delta_\phi$ are the fluxes through each plaquette of the $A$ and $B$ sublattices respectively. By combining symmetries I and II, it is not difficult to see that $E(\phi,\Delta_\phi,\Delta_V)$ should remain unchanged when the sign of [*any*]{} of its arguments, $\phi$, $\Delta_\phi$, or $\Delta_V$, is changed. From symmetries III and II, we have $E(\phi_0+\phi,\Delta_\phi,\Delta_V)=E(\phi-\phi_0,\Delta_\phi,\Delta_V) =E(\phi_0-\phi,\Delta_\phi,\Delta_V)$. That is, the distribution of $E(\phi)$ in the $E-\phi$ diagram has a mirror symmetry with respect to the vertical line $\phi=\phi_0$. Finally, we show that, in the $E-\phi$ diagram, it is sufficient to plot the spectra within the range $0\leq\phi<\phi_0/2$ only. The reason is as follows: from II, III, IV, and the freedom to flip the signs of $\Delta_\phi$ and $\Delta_V$ without changing the spectrum, it is clear that, for fixed values of $\Delta_\phi$ and $\Delta_V$, it suffices to know the spectrum within the interval $[0,\phi_0)$. Moreover, from IV and I, we have $E(\phi_0/2+\phi,\Delta_\phi,\Delta_V)=\Re E(\phi-\phi_0/2,-\Delta_\phi,\Delta_V)=\Re E(\phi_0/2-\phi,\Delta_\phi,\Delta_V)$. Therefore, the spectrum along the whole flux-coordinate can be determined by the $E(\phi)$ within the interval $[0,\phi_0/2)$.

main features of the spectra
============================

In the discussion below, all energies are in units of $t_1$. Besides $t_1$(=1), there are three adjustable parameters in the present generalized Hofstadter model on a square lattice: $\Delta_\phi$, $\Delta_V$, and $t_2$.
It is impossible to show all the results from the whole three-dimensional parameter space.[@note2] Therefore, we selectively report on certain sets of parameters with representative features. Notice that in Refs. \[\], neither electric modulation nor NNN hoppings have been considered, and the parameter space is one dimensional only. First, we consider the effect of the checkerboard modulation on the systems [*without*]{} NNN couplings $t_2$, which have several symmetries [*in addition to*]{} the symmetries I$\sim$V listed above. III$^{\prime}$. $E(\phi,\Delta_\phi,\Delta_V)= E(\phi+\phi_0,\Delta_\phi,\Delta_V)$\ When there are only NN hoppings, the period of the spectrum is one flux quantum, since the smallest hopping loop now encloses one plaquette, instead of half of a plaquette. IV$^{\prime}$. $E(\phi,\Delta_\phi,\Delta_V)= \Re E(\phi,\Delta_\phi,\Delta_V)$\ This results from symmetries IV and III$^{\prime}$, followed by flipping the sign of $\Delta_\phi$, which would not change the spectrum. Thus, $E(\phi,\Delta_\phi,\Delta_V)$ is symmetric with respect to the horizontal $E=0$ line when $t_2=0$. V$^{\prime}$. $E(\phi,\Delta_\phi,\Delta_V)= E(\phi-\phi_0/2,\Delta_\phi-1/2,\Delta_V)$\ The argument is similar to the one leading to V, except that now $\phi_A$ can be shifted by one $\phi_0$ without altering the Aharonov-Bohm phase of a closed-loop hopping. In Fig. 2, the spectrum for a checkerboard modulation with $(\Delta_\phi,\Delta_V)=(0.1,0)$ is presented. The spectrum is indeed symmetric with respect to the $E=0$ line, according to symmetry IV$^{\prime}$. Furthermore, because of the symmetry V$^{\prime}$ and the freedom to flip the signs of the arguments, we have $E(\phi_0/4+\phi,\Delta_\phi,\Delta_V) =E(\phi-\phi_0/4,1/2-\Delta_\phi,\Delta_V) =E(\phi_0/4-\phi,\Delta_\phi-1/2,\Delta_V)$. Therefore, [*after being reflected*]{} by the vertical line at $\phi_0/4$, Fig. 2 with $\Delta_\phi=0.1$ is identical to the Hofstadter spectrum for $\Delta_\phi=0.4$.[@note3] In Fig.
3, a checkerboard modulation with $(\Delta_\phi,\Delta_V)=(0.1,0.1)$ is considered. Without NNN hoppings, this Hofstadter spectrum retains the same symmetries as in Fig. 2. However, a distinctive $\phi$-independent energy gap with a magnitude $E_g=2\Delta_V$ appears in the middle \[also see Fig. 6(a)\]. This is true [*with or without*]{} adding the modulation $\Delta_\phi$. First, it is not difficult to understand why the spectrum splits into two groups in energy: they originate from the two Bloch bands at $\phi=0$ due to the checkerboard modulation of the scalar potential. What is surprising is that the magnitude of the gap remains a constant for different $\phi$’s and $\Delta_\phi$’s. It is no longer a constant once NNN couplings are included (see Fig. 4). A proof of the existence of the flux-independent gap is given in Appendix B, where it is shown that there exists a very simple relation between the spectra with and without the electrostatic modulation $\Delta_V$. That is, $E(\phi,\Delta_\phi,\Delta_V)=\pm[E(\phi,\Delta_\phi,0)^2+\Delta_V^2]^{1/2}$. It can be checked that the spectra in Fig. 2 and Fig. 3 do obey this relation in detail. The constancy of the energy gap [*in the limit of small flux*]{} $\phi$ can be understood in the following semiclassical picture.[@chang96] The energy bands with vanishing widths as $\phi\rightarrow 0$ in Fig. 3 are the cyclotron energy levels of the two parent bands at $\phi=0$, which have the energy dispersions $E_\pm({\vec k})=\pm[(\cos k_x+\cos k_y)^2+\Delta_V^2]^{1/2}$ [*if*]{} $\Delta_\phi=0$. It can be shown that, near the two inner band edges with energies $E_\pm=\pm\Delta_V$, the cyclotron effective masses approach infinity. Therefore, the position of the lowest Landau level approaches the lowest possible energy at the band edge and does not depend on the uniform magnetic field.
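The inner band edges at $E_\pm=\pm\Delta_V$ can be checked directly from the dispersions above (assuming $\Delta_\phi=0$, energies in units of $t_1$; the grid resolution below is arbitrary). Note that the minimum of $E_+$ is attained along the whole curve $\cos k_x+\cos k_y=0$, consistent with the diverging cyclotron mass:

```python
import numpy as np

d_v = 0.1
k = np.linspace(-np.pi, np.pi, 201)
kx, ky = np.meshgrid(k, k)
E_plus = np.sqrt((np.cos(kx) + np.cos(ky)) ** 2 + d_v ** 2)

# inner band edge sits at E_+ = d_v, on the curve cos kx + cos ky = 0,
# independent of the uniform flux; the outer edge is at sqrt(4 + d_v^2)
assert np.isclose(E_plus.min(), d_v)
assert np.isclose(E_plus.max(), np.sqrt(4 + d_v ** 2))
```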
When NNN couplings are included, the spectrum immediately loses the mirror symmetry with respect to the horizontal $E=0$ line.[@hatsugai90] If only two of the three parameters are nonzero, then the spectrum remains fractal but distorted. When all three parameters, $\Delta_\phi,\Delta_V$, and $t_2$, are nonzero, the subbands become significantly wider in most, but not all, of the regions. A typical example is shown in Fig. 4. The extent of widening varies as the parameters are varied. Because of the widening, the electrons are more delocalized, and become more mobile in transport. There is a surprising exception to the asymmetry resulting from NNN hoppings: the symmetry is restored again when $\Delta_\phi=0.5$, even if [*both*]{} $\Delta_V$ and $t_2$ are nonzero. For example, the symmetric spectrum shown in Fig. 5 is for $(\Delta_\phi,\Delta_V,t_2)=(0.5,0.3,0.7)$. The existence of such a symmetry can be proved as follows. From the symmetries III, IV, and V listed above, and the freedom to flip the signs of the arguments, it can be shown that $E(\phi,\Delta_\phi,\Delta_V)= \Re E(\phi,1-\Delta_\phi,\Delta_V)$, which is a far less apparent symmetry since it relates two systems with different strengths of flux modulation. It is clear that when $\Delta_\phi=0.5$, the spectrum has to be symmetric with respect to the line $E=0$. In Fig. 6, we demonstrate how the continuous variation of $\Delta_V$ and $\Delta_\phi$ influences the spectrum. The uniform flux and the NNN couplings are fixed at the values of $p/q=2/5$ and $t_2=0$. In principle, there should be $Q=2q=10$ bands at this value of the flux. However, in Fig. 6(a) with $(\Delta_V,\Delta_\phi)=(x,0)$, where $x\in [0,1]$ is the value of the $x$-coordinate, only 6 bands are observed. In fact, each of the upper two and lower two bands is itself formed by two overlapping subbands. We can also see that the band gap in the middle is indeed proportional to $\Delta_V$, as mentioned earlier. In Fig.
6(b), $(\Delta_V,\Delta_\phi)=(0,x)$, where $x\in [0,1]$ is again the value of the $x$-coordinate. There is almost no similarity between (a) and (b). The overlapped subbands in Fig. 6(a) are split by a nonzero $\Delta_\phi$ and become very thin in most of the regions. On the other hand, the band in the middle is thick and is actually composed of two subbands. The increase of flux modulation also induces many band crossings. In addition, there is an apparent symmetry $E(\phi,\Delta_\phi,\Delta_V)=E(\phi,1-\Delta_\phi,\Delta_V)$. In Fig. 6(c), both $\Delta_V$ and $\Delta_\phi$ are nonzero and have the same numerical value. It has mixed features of (a) and (b), but the magnitude of the energy gap in the middle is not altered \[compared with (a)\] by the nonzero $\Delta_\phi$. Such a continuous tuning of the band structure might be realized in the future using the optical lattices formed by quantum optical means.[@jaksch03]

summary
=======

The studies of the Hofstadter spectrum have evolved from pure academic curiosities to accessible experimental investigations. It is a basic physics problem involving a simple interplay between a lattice and a magnetic field. Because of its general setting, it is not surprising to find counterpart problems in different physical systems, such as the quantum Hall system, type-II superconductivity, and two-dimensional antiferromagnetism. Motivated by a study on frustrated antiferromagnetism, and by recent experimental advances, we study the Hofstadter problem with checkerboard modulations in detail. In this paper, the spectra are found to have several flux-related symmetries with respect to the change of $\phi$ and $\Delta_\phi$. One unanticipated symmetry occurs when $\Delta_\phi=1/2$. At that value, the spectrum is symmetric with respect to the $E=0$ line even in the presence of NNN hoppings. In the absence of NNN hoppings, we find a flux-independent energy gap induced by electric modulations.
Furthermore, a simple connection between the spectra for [*bipartite*]{} systems with and without electric modulation is discovered. However, more detailed aspects of the spectra, such as the change of the fractal measures in the $\Delta_\phi-\Delta_V-t_2$ parameter space, are not investigated in this paper. Such a study would reveal different phases in this space, as was done by Han and coworkers on the systems in a uniform magnetic field.[@han94] The most general problem, when the superlattices of modulation can have symmetries of their own, is considerably more involved. This study offers a starting point for research in this direction. M.C.C. and M.F.Y. acknowledge the financial support from the National Science Council of Taiwan under Contract Nos. NSC 91-2112-M-003-019 and NSC 91-2112-M-029-007 respectively.

proof of the checkerboard-translation symmetry of the spectrum
==============================================================

In this appendix, we show that the energy spectrum obtained from Eq. (\[matrix2D\]) does respect the checkerboard translation symmetry. First, because there is no $m$-dependence of the matrix elements in Eq. (\[matrix2D\]), the Hamiltonian matrix and therefore the spectrum are unchanged under the lattice translation $m \rightarrow m+2$ such that $\psi_{n,\;m}(k_x, k_y)\rightarrow\psi_{n,\;m+2}(k_x, k_y)$. Second, from Eqs. (\[matrix\_A\]) and (\[matrix\_B\]), one can show that the matrix elements in the Hamiltonian matrix satisfy the relations ${\bf A}_{n+2}(k_y)={\bf A}_{n}(k_y+2\phi)$ and ${\bf B}_{n+2}(k_y)={\bf B}_{n}(k_y+2\phi)$. Therefore, under the lattice translation $n \rightarrow n+2$ such that $\psi_{n,\;m}(k_x, k_y)\rightarrow\psi_{n+2,\;m}(k_x, k_y)$, the new Hamiltonian matrix for the eigenvalue problem after transformation becomes identical to the original one with another value of $k_y$, i.e., $k_y\rightarrow k_y+2\phi$. Thus the whole energy spectrum within the first magnetic Brillouin zone remains the same.
Third, the matrix elements in the Hamiltonian matrix can be shown to obey the following identities: $\sigma_x {\bf A}_{n+1}(k_y) \sigma_x = {\bf A}_{n}(k_y+\phi)$ and $\sigma_x {\bf B}_{n+1}(k_y) \sigma_x = {\bf B}_{n}(k_y+\phi)$, where $\sigma_x$ is the Pauli matrix. By using these identities, one can prove that, under the lattice translation $(n,\;m)\rightarrow(n+1,\;m+1)$ such that $\psi_{n,\;m}(k_x, k_y)\rightarrow\psi_{n+1,\;m+1}(k_x, k_y)$, the new Hamiltonian matrix again becomes identical to the original one with a shift $k_y\rightarrow k_y+\phi$. Hence we conclude that the energy spectrum is indeed invariant under the checkerboard translation. proof of the existence of the flux-independent energy gap ========================================================= For $t_2 =0$, our model is a nearest-neighbor-hopping model on a bipartite lattice. Therefore, we can rewrite the Hamiltonian in Eq. (\[tight-binding\]) as $$H= (\{f^\dagger_A\},\{f^\dagger_B\}) \pmatrix{ \Delta_V I&{\cal D}\cr {\cal D}^\dagger& -\Delta_V I\cr } \pmatrix{ \{f_A\}\cr \{f_B\}\cr },$$ where $I$ denotes the identity matrix, $\{f_A\}=\{f_{n,m}\ | \ n+m {\rm \ is \ even} \}$ is a set of fermion operators for sublattice $A$ and $\{f_B\}=\{f_{n,m}\ | \ n+m {\rm \ is \ odd}\}$ is for sublattice $B$. When $\Delta_V=0$, the Schrödinger equation is $$\begin{aligned} \pmatrix{ 0&{\cal D}\cr {\cal D}^\dagger& 0\cr}\pmatrix{\Phi_A\cr\Phi_B\cr}=E_0\pmatrix{\Phi_A\cr\Phi_B\cr},\end{aligned}$$ where $E_0$ is the eigenvalue for the system with $\Delta_V=0$, and $(\Phi_A,\Phi_B)^T$ is the corresponding eigenvector.
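The block structure of this bipartite Hamiltonian is easy to verify numerically. The sketch below is our own check, not part of the paper; it uses `numpy`, with a random complex matrix standing in for the actual hopping block ${\cal D}$ at fixed $(k_x, k_y)$. It confirms that the spectrum of the full Hamiltonian consists of the pairs $\pm\sqrt{E_0^2+\Delta_V^2}$, where the $E_0$ are the eigenvalues of the $\Delta_V=0$ problem, so it is symmetric about $E=0$ and gapped by at least $2\Delta_V$:

```python
import numpy as np

# Toy stand-in for the hopping block D between sublattices A and B.
rng = np.random.default_rng(0)
n = 4
D = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
dV = 0.3  # electric (checkerboard) modulation Delta_V

Z = np.zeros((n, n))
H0 = np.block([[Z, D], [D.conj().T, Z]])                       # Delta_V = 0
H = np.block([[dV * np.eye(n), D], [D.conj().T, -dV * np.eye(n)]])

E0 = np.linalg.eigvalsh(H0)
E = np.linalg.eigvalsh(H)

# Spectrum of H is {+-sqrt(E0^2 + dV^2)}: symmetric about E = 0,
# with a gap of at least 2*dV around zero.
assert np.allclose(np.sort(np.abs(E)), np.sort(np.sqrt(E0**2 + dV**2)))
assert np.min(np.abs(E)) >= dV - 1e-9
```

Only the bipartite block structure is used here, which is why the $E=0$ symmetry and the $2\Delta_V$ gap bound hold independently of the details of ${\cal D}$, i.e., of $\phi$ and $\Delta_\phi$.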
From them we can construct the eigenstates for the original problem: $$\begin{aligned} &&\Phi_+ \equiv \pmatrix{ \left(\Delta_V + \sqrt{E_0^2 + \Delta_V^2}\right) \Phi_A \cr {\cal D}^\dagger\Phi_A \cr }\\ &&\Phi_- \equiv \pmatrix{ {\cal D} \Phi_B \cr \left(-\Delta_V - \sqrt{E_0^2 + \Delta_V^2}\right) \Phi_B \cr }\end{aligned}$$ with the corresponding eigenvalues $E_\pm=\pm\sqrt{E_0^2 + \Delta_V^2}$, because $$\pmatrix{ \Delta_V I&{\cal D}\cr {\cal D}^\dagger& -\Delta_V I\cr } \Phi_\pm = \pm \sqrt{E_0^2 + \Delta_V^2} \Phi_\pm .$$ Therefore, the energy spectrum is symmetric with respect to the horizontal $E=0$ line as mentioned in Sec. III. The positive-energy and the negative-energy parts are separated by an energy gap $2\sqrt{|E_0|_{\rm min}^2 + \Delta_V^2}$, where $|E_0|_{\rm min}$ is the minimum value of $|E_0|$ at given $\phi$ and $\Delta_\phi$. Since it has been shown that zero-energy modes exist in the absence of $\Delta_V$ for all flux values,[@hatsugai97] we have $|E_0|_{\rm min}=0$ for all values of $\phi$ and $\Delta_\phi$. Consequently, the magnitude of the energy gap in the presence of $\Delta_V$ should be $2\Delta_V$, independent of the values of $\phi$ and $\Delta_\phi$. [99]{} D. R. Hofstadter, Phys. Rev. B [**14**]{}, 2239 (1976). D. J. Thouless and Q. Niu, J. Phys. A, Math. Gen. [**16**]{}, 1911 (1983); D. Dominguez, C. Wiecko, and J. Jose, Phys. Rev. B [**45**]{}, 13919 (1992). F. H. Claro and G. H. Wannier, Phys. Rev. B [**19**]{}, 6068 (1979). R. Rammal, J. Phys. (Paris) [**46**]{}, 1345 (1985). Y. Xiao, V. Pelletier, P. M. Chaikin, and D. A. Huse, Phys. Rev. B [**67**]{}, 104505 (2003). J. Vidal, R. Mosseri, and B. Doucot, Phys. Rev. Lett. [**81**]{}, 5888 (1998). Y. Hatsugai and M. Kohmoto, Phys. Rev. B [**42**]{}, 4282 (1990). M. Y. Lee, M. C. Chang, and T. M. Hong, Phys. Rev. B [**57**]{}, 11895 (1998). J. H. Han, D. J. Thouless, H. Hiramoto, and M. Kohmoto, Phys. Rev. B [**50**]{}, 11365 (1994). A. Barelli and R. Fleckinger, Phys. Rev. B [**46**]{}, 11559 (1992). P.
D. Ye [*et al*]{}, Appl. Phys. Lett. [**67**]{}, 1441 (1995); R. R. Gerhardts, D. Pfannkuche, and V. Gudmundsson, Phys. Rev. B [**53**]{}, 9591 (1996). G. Y. Oh, J. Jang, and M. H. Lee, J. Korean Phys. Soc. [**28**]{}, 79 (1995); G. Y. Oh and M. H. Lee, Phys. Rev. B [**53**]{}, 1225 (1996); P. Fekete and G. Gumbs, J. Phys.: Condens. Matter [**11**]{}, 5475 (1999). Q. W. Shi and K. Y. Szeto, Phys. Rev. B [**56**]{}, 9251 (1997). G. Y. Oh, Phys. Rev. B [**60**]{}, 1939 (1999). S. Ito, M. Ando, S. Katsumoto, and Y. Iye, J. Phys. Soc. Japan [**68**]{}, 3158 (1999); M. Ando, S. Ito, S. Katsumoto, and Y. Iye, [*ibid*]{} [**68**]{}, 3462 (1999). U. Kuhl and H. J. Stockmann, Phys. Rev. Lett. [**80**]{}, 3232 (1998). O. Richoux and V. Pagneux, Europhys. Lett. [**59**]{}, 34 (2002). C. Albrecht, J. H. Smet, K. von Klitzing, D. Weiss, V. Umansky, and H. Schweizer, Phys. Rev. Lett. [**86**]{}, 147 (2001). Y. Morita and Y. Hatsugai, Phys. Rev. Lett. [**86**]{}, 151 (2001); H. K. Nguyen and S. Chakravarty, Phys. Rev. B [**65**]{}, 180519 (2002). C. Chamon, M. Oshikawa, and I. Affleck, cond-mat/0305121. D. Jaksch and P. Zoller, New J. Phys. [**5**]{}, 56 (2003). G. Misguich, Th. Jolicoeur, and S. M. Girvin, Phys. Rev. Lett. [**87**]{}, 097203 (2001). M. C. Chang and M. F. Yang, Phys. Rev. B [**66**]{}, 184416 (2002). The antiferromagnetic couplings $J_1$ and $J_2$ in the Heisenberg model are equal to $-t_1$ and $-t_2$ respectively. The non-zero $J_2$ coupling results in a [*frustrated*]{} antiferromagnetic system. After a mean-field approximation, it can be shown that $\Delta_V=4(J_1-J_2)\Delta_\phi$ (see Sec. II for the definitions of $\Delta_\phi$ and $\Delta_V$). Therefore, there is only one parameter from the modulation. In this paper, we treat $\Delta_\phi$ and $\Delta_V$ as independent quantities. Symmetries I and II remain valid even if there are couplings beyond next-nearest neighbors. 
We have carried out calculations with the following sets of parameters: $(\Delta_\phi,\Delta_V,t_2)=(0.1i,0,0.1j),(0,0.1i,0.1j)$, and $(0.1i,0.1i,0.1j)$, where $i$ and $j$ are non-negative integers below 5 and 9 respectively. That is, 180 Hofstadter spectra have been generated. The parameter $(2/\pi)\beta$ in Ref. 13 is the same as our $\Delta_\phi$. Therefore, its Fig. 1(b) with $\beta=0.2\pi$ is the figure with $\Delta_\phi=0.4$. However, the edges of subbands for a few simple fractions, e.g., $p/q=2/3$, in their Fig. 1(b) fail to align with nearby edges when the uniform flux is slightly varied. M. C. Chang and Q. Niu, Phys. Rev. B [**53**]{}, 7010 (1996). Y. Hatsugai, X. G. Wen, and M. Kohmoto, Phys. Rev. B [**56**]{}, 1061 (1997).
--- abstract: 'The low for random reals are characterized topologically, as well as in terms of domination of Turing functionals on a set of positive measure.' author: - 'Bjørn Kjos-Hanssen[^1]' bibliography: - 'PAMSwithHyperlinks-arxiv.bib' title: 'Low for random reals and positive-measure domination' --- Introduction ============ A function $f:\omega{\rightarrow}\omega$ is *uniformly almost everywhere (a.e.) dominating* if for measure-one many $X$, and all $g$ computable from $X$, $f$ dominates $g$. Such functions were first studied by Kurtz [@Kurtz1981] who showed that uniformly a.e. dominating functions exist and that in fact $0'$, the Turing degree of the halting problem, computes one of them. If we replace measure by category, there are no such functions, as is not hard to see. A few decades later Dobrinen and Simpson [@dob:04] made use of a.e. domination in Reverse Mathematics. They made a couple of fundamental conjectures that were promptly refuted in [@BKLS:05] and [@CGM]. In this article we strengthen the results of [@BKLS:05] to provide a characterization of a related concept, positive-measure domination, in terms of lowness for randomness. Conversely, we characterize low for random reals in terms of such domination. The following characterizations are already known. (We assume the reader is familiar with the definition of Martin-Löf random reals and of prefix-free Kolmogorov complexity $K$.) The following are equivalent for $A\in 2^\omega$: - $A$ is low for random: [each Martin-Löf random real is Martin-Löf random relative to $A$.]{} - $A$ is $K$-trivial: $\exists c\forall n\, K(A{\upharpoonright}n)\le K(\emptyset{\upharpoonright}n)+c$. - $A$ is low for $K$: $\exists c\forall n\, K(n)\le K^A(n)+c$. - $\exists Z\ge_T A$, $Z$ is ML-random relative to $A$. 
- $A\le_T 0'$ and $\Omega$ is ML-random relative to $A$. The low for random reals induce a $\Sigma^0_3$ nonprincipal ideal in the Turing degrees bounded above by a low$_2$ $\Delta^0_2$ degree [@AM], and have already found application to long-standing open problems in computability theory. Our characterizations in this paper are distinguished by not being couched in the language of randomness and Kolmogorov complexity. They do however refer to measure; it remains open whether a characterization purely in terms of domination or traces can be given such as that found for low for Schnorr random reals [@KNS][@TZ]. The first main result of Section \[2\] is Theorem \[AB\], which is a characterization of the low for random reals in terms of containment of effectively closed sets of positive measure. Building on this result, Theorem \[mainly\] is a characterization of low for random reals in terms of positive-measure domination. Section \[3\] contains, first, a characterization of the Turing degrees relative to which $0'$ is low for random, in terms of positive-measure domination. Finally, with a view toward future research, we include a proof that there is a Turing functional that is universal for this kind of domination. Low for random reals {#2} ==================== To obtain our topological characterization, we will pass first from a certain universal Martin-Löf test (given in terms of $K$) to an arbitrary Martin-Löf test, and then to an arbitrary open set of measure $<1$. \[KC\] Suppose ${\langle}n_k,\sigma_k{\rangle}$, $k\in\omega$ is an $A$-recursive sequence, with $\sum_k 2^{-n_k}\le 1$. Then there exists a partial $A$-recursive prefix-free machine $M$ and a collection of strings $\tau_k$ with $|\tau_k|=n_k$ and $M(\tau_k)=\sigma_k$. Let $A\in 2^\omega$.
An *information content measure relative to $A$* is a partial function $\hat K:2^{<\omega}{\rightarrow}\omega$ such that $$\sum_{\sigma\in 2^{<\omega}}2^{-\hat K(\sigma)}\le 1$$ and $\{{\langle}\sigma,k{\rangle}:\hat K(\sigma)\le k\}$ is r.e. in $A$. \[minimal\] If $\hat K$ is an information content measure relative to a real $A$, then for all $n$, $K^A(n)\le \hat K(n)+{\mathcal}O(1)$. \[remind\] For any real $A$, let $S^A=\{S^A_n\}_{n\in\omega}$ where $S^A_n=\{X:\exists m\,\,K^A(X{\upharpoonright}m)\le m-n\}$. \[singapore\] If $V^A$ is a Martin-Löf test relative to $A$, then for each $n$ there exists $p$ such that $V^A_p\subseteq S^A_n$. For each $m$, write $V^A_{2m}=\bigcup\{[\sigma_{m,k}]:k\in\omega\}$ where the function $f$ given by $f(m,k)=\sigma_{m,k}$ is $A$-computable and the sets $\{\sigma_{m,k}:k\ge 1\}$ are prefix-free. Define numbers $n_{m,k}=|\sigma_{m,k}|-m+1$ for $m,k\in\omega$. We have $$\sum_{m,k} 2^{-n_{m,k}}=\sum_m 2^{m-1} \sum_k 2^{-|\sigma_{m,k}|}=\sum_m 2^{m-1}\, \mu V_{2m} \le \sum_m 2^{m-1} 2^{-2m} = 1.$$ Hence by Theorem \[KC\], we have a partial $A$-recursive prefix-free machine $M$ and strings $\tau_{m,k}$ with $|\tau_{m,k}|=n_{m,k}$ and $M(\tau_{m,k})=\sigma_{m,k}$. Thus $K_M$, complexity based on the machine $M$, satisfies $K_M(\sigma_{m,k})\le n_{m,k}$, and so by Lemma \[minimal\], there is a constant $c$ such that $K^A(\sigma_{m,k})\le n_{m,k}+c=|\sigma_{m,k}|-m+1+c$. This means that $V^A_{2m}\subseteq S^A_{m-c-1}$ for each $m$. Thus, given $n$, let $m=n+c+1$ and $p=2m$. Then $V^A_p=V^A_{2m}\subseteq S^A_{m-c-1}=S^A_n$, as desired. Schnorr [@241] showed that $S^A$ is a universal Martin-Löf test relative to $A$. Let $n\ge 1$. Let $\Sigma^\mu_n$ denote the collection of all $\Sigma^0_n$ classes of measure $<1$. The complement of a $\Sigma^\mu_n$ class is a $\Pi^\mu_n$ class. The complement of $U$ is denoted $\overline U$.
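Theorem \[KC\] is a relativized form of the Kraft–Chaitin theorem. Its unrelativized combinatorial core — turning requested lengths $n_k$ with $\sum_k 2^{-n_k}\le 1$ into a prefix-free set of codewords $\tau_k$ — can be sketched as follows (our own offline variant that processes requests in order of length; the helper name is illustrative, not from the paper):

```python
def kraft_codewords(lengths):
    """Assign prefix-free binary codewords with the requested lengths.

    Offline Kraft construction: processing requests in order of
    increasing length, the codeword for a request of length n is the
    n-bit binary expansion of the measure already used up.  Requires
    Kraft's inequality: the sum of 2^-n over the lengths is at most 1.
    """
    assert sum(2.0 ** -n for n in lengths) <= 1.0
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    L = max(lengths)
    codes = [None] * len(lengths)
    used = 0  # measure consumed so far, in units of 2^-L
    for i in order:
        n = lengths[i]
        # used is a multiple of 2^(L-n) since all earlier lengths are <= n,
        # so the interval taken is exactly the dyadic cylinder of the codeword
        codes[i] = format(used >> (L - n), f'0{n}b')
        used += 1 << (L - n)
    return codes
```

For instance, `kraft_codewords([1, 2, 3, 3])` returns `['0', '10', '110', '111']`; the machine $M$ of the theorem then maps each $\tau_k$ to $\sigma_k$. The relativized version additionally carries out the whole assignment recursively in the oracle $A$, and online rather than sorted by length.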
The clopen subset of $2^\omega$ generated by $\sigma\in 2^{<\omega}$ is denoted $[\sigma]$, and concatenation of strings is denoted by juxtaposition. If $U$, $V$ are open subsets of $2^\omega$ given by $U=\bigcup\{[\sigma]:\sigma\in \hat U\}$ and $V=\bigcup\{[\sigma]:\sigma\in \hat V\}$, where $\hat U$ and $\hat V$ are prefix-free sets of strings, then we define $$UV=\bigcup\{[\sigma\tau]:\sigma\in\hat U, \,\tau\in\hat V\}.$$ This product depends on $\hat U$ and $\hat V$, not just on $U$ and $V$, so when considering a $\Sigma^0_1$ class $U$, we implicitly fix a suitable recursively enumerable set $\hat U$ for $U$. We define $U^n=U^{n-1}U$ where $U^1=U$. We can also think of this exponentiation as acting on a closed set $Q$, defining $Q^n$ via the equation $\overline{Q^n}=\overline Q^n$. It will be clear whether we are considering a set as open or closed. \[tonda\] For each $\Pi^\mu_1(A)$ class $Q$ there is a computable function $f$ such that $\{\overline {Q^{f(n)}}\}_{n\in\omega}$ is a Martin-Löf test relative to $A$. Let $q>0$ be a rational number such that $\mu Q\ge q$. Let $P=\overline Q$. Then $\mu P^n=(\mu P)^n\le (1-q)^n$. Let $f$ be a computable function such that for all $k\in\omega$, $\mu P^{f(k)}\le 2^{-k}$. Let $V^A_k=P^{f(k)}$. Then $V^A$ is a Martin-Löf test relative to $A$. \[WalMart\] If $P$ is an open set such that $P^n$ is contained in a $\Sigma^\mu_1$ class for some $n\ge 2$, then $P$ itself is contained in a $\Sigma^\mu_1$ class. We write $U|\sigma=\bigcup\{[\tau]: [\sigma\tau]\subseteq U\}$. Note that if $P$ is open then so is $P^2$. Hence by iteration, it suffices to consider the case $n=2$. So suppose $(\exists U)\,\,\,P^2\subseteq U\in\Sigma^\mu_1$. Case 1: $\exists\sigma$, $\mu (U|\sigma)<1$, $\sigma\in\hat P$. Then $P^2\cap[\sigma]=[\sigma]P$, the product of $[\sigma]$ and $P$. Then $P=([\sigma]P)|\sigma=(P^2{\cap}[\sigma])|\sigma=P^2|\sigma\subseteq U|\sigma\in\Sigma^\mu_1$.
Case 2: Otherwise; so $\hat P\subseteq\left\{\sigma:\mu(U|\sigma)=1\right\}$. Fix a rational number $\epsilon>0$ such that $\mu U<1-\epsilon$, and let $V=\bigcup\left\{[\sigma]: \mu (U|\sigma)\ge 1-\epsilon \right\}$. Note that $V$ is $\Sigma^0_1$, contains $P$, and $\mu V<1$ because $(1-\epsilon)\mu V\le \mu U<1-\epsilon$. As usual, an $A$-random is a real that is Martin-Löf random relative to $A$. If $A, B\in 2^\omega$ then $A$ is a *tail* of $B$ if there exists $n$ such that $A(k)=B(n+k)$ for all $k\in\omega$. \[antonin\] For each $A\in 2^\omega$, each $\Pi^\mu_1(A)$ class contains a tail of each $A$-random real. Let $Q$ be a $\Pi^\mu_1(A)$ class and suppose $X$ is $A$-random. Then by Lemma \[tonda\], there is an $m$ such that $X\in Q^m$. If $m=2$ then clearly, as $Q$ is closed, some tail of $X$ is an element of $Q$. If $m>2$, the result follows by iteration since each $Q^m$ is closed. \[AB\] Let $A\in 2^\omega$. The following are equivalent: 1. Each 1-random real is $A$-random ($A$ is *low for random* [@AM]). 2. For each $\Pi^\mu_1$ class $Q$ consisting entirely of $1$-random reals, there exist $\sigma,n$ such that $Q{\cap}[\sigma]\ne\emptyset$ but $Q{\cap}S^A_n{\cap}[\sigma]=\emptyset$. 3. For some $n$, $\overline{S^A_n}$ has a $\Pi^\mu_1$ subclass. 4. For each $A$-Martin-Löf test $V_n^A$, there exists an $n$ such that $\overline{V^A_n}$ has a $\Pi^\mu_1$ subclass. 5. For each $\Pi^\mu_1(A)$ class $Q$ there exists an $n$ such that $Q^n$ has a $\Pi^\mu_1$ subclass. 6. Each $\Pi^\mu_1(A)$ class has a $\Pi^\mu_1$ subclass. 7. Some $\Pi^\mu_1(A)$ class consisting entirely of $A$-random reals has a $\Pi^\mu_1$ subclass. 8. The class of $A$-random reals has a $\Pi^\mu_1$ subclass. (1)${\Rightarrow}$(2): For this implication we use an argument of Nies and Stephan [@N]. Suppose $A$ is low for random but (2) fails. 
So there is a $\Pi^\mu_1$ class $Q$ consisting entirely of $1$-random reals, such that for all $\sigma,n$, if $Q{\cap}[\sigma]\ne\emptyset$ then $Q{\cap}S^A_n{\cap}[\sigma]\ne\emptyset$. Let $\sigma_0=\lambda$, and $\sigma_{n+1}\succeq \sigma_n$, with $[\sigma_{n+1}]\subseteq S_n^A$ but $[\sigma_{n+1}]{\cap}Q\ne\emptyset$. Then $Y=\bigcup_{n\in\omega} \sigma_n$ is not $A$-random, but is $1$-random, since $Y\in Q$. (2)${\Rightarrow}$(3): Let $Q$ be as in (2), and let $n$, $\sigma$ be as guaranteed by (2) for $Q$. Then $Q{\cap}[\sigma]$ is the desired subclass. It has positive measure because no 1-random belongs to a $\Pi^0_1$ class of measure zero. (3)${\Rightarrow}$(4): Lemma \[singapore\]. (4)${\Rightarrow}$(5): Let $Q$ be a $\Pi^\mu_1(A)$ class. By Lemma \[tonda\], $V^A_k=\overline {Q^{f(k)}}$ is a Martin-Löf test relative to $A$ for some computable $f$. By (4), $Q^{f(m)}=\overline{V_m^A}\supseteq F$ for some $F\in\Pi^\mu_1$ and $m$; let $n=f(m)$. (5)${\Rightarrow}$(6): Lemma \[WalMart\]. (6)${\Rightarrow}$(7): If $U^A$ is a universal Martin-Löf test for $A$-randomness then we can let $Q=\overline {U^A_1}$. (7)${\Rightarrow}$(8): Since any class consisting entirely of $A$-randoms is contained in the class of all $A$-randoms. (8)${\Rightarrow}$(1): Suppose $X$ is $1$-random; we need to show $X$ is $A$-random. Let $F$ be a $\Pi^\mu_1$ subclass of the class of $A$-randoms. By Lemma \[antonin\], some tail of $X$ is an element of $F$. Hence a tail of $X$ is $A$-random, and thus $X$ itself is $A$-random. To characterize the low for random reals in terms of domination we first introduce some notation. We write Tot($\Phi$)=$\{X:\Phi^X$ is total$\}$ and $\varphi^X(n)=(\mu s)(\forall m<n)(\Phi_s^X(m)\downarrow\le s)$. Note that Tot($\Phi$) is a $\Pi^0_2$ class for each $\Phi$, and Tot($\Phi$)=Tot($\varphi$). The function $\varphi$ is the running time of $\Phi$, explicitly satisfying $\Phi^X(n)\le\varphi^X(n)$ for all $n$. Let $\Phi$ be a Turing functional and $B\in 2^\omega$.
If there exists $f\le_T B$ such that for positive-measure many $X$, $\Phi^X$ is dominated by $f$, then we write $\Phi<B$. By $\sigma$-additivity this is equivalent to the statement that there exists $f\le_T B$ such that for positive-measure many $X$, $\Phi^X$ is *majorized* by $f$. We also write $\Phi<B$ in the case that Tot($\Phi$) has measure zero. \[following\] Let $B\in 2^\omega$ and let $\Phi$ be a Turing functional. Then $\varphi<B$ iff Tot($\Phi$) has a $\Pi^\mu_1(B)$ subclass. First suppose $\varphi<B$, as witnessed by $f$. Then $\{X:\forall n\,\,\Phi^{X}_{f(n)}(n)\downarrow\}$ is a $\Pi^\mu_1(B)$ subclass of Tot($\Phi$). Conversely, let $F$ be a $\Pi^\mu_1(B)$ subclass of Tot($\Phi$). By compactness, $\{\varphi^{X}(n):X\in F\}$ is finite for each $n$, and $\{{\langle}n,m{\rangle}:\forall X(X\in F{\rightarrow}\varphi^{X}(n)<m)\}$ is a $\Sigma^0_1(B)$ class. Hence by $\Sigma^0_1(B)$ uniformization there is a function $f\le_T B$ such that $\forall n\forall X(X\in F{\rightarrow}\varphi^{X}(n)<f(n))$; i.e., $f$ witnesses that $\varphi<B$. \[mainly\] Let $A\in 2^\omega$. The following are equivalent: 1. $A$ is low for random. 2. Each $\Pi^\mu_1(A)$ class has a $\Pi^\mu_1$ subclass. 3. \(i) $A\le_T 0'$ and (ii) for each $\Phi$, if Tot($\Phi$) has a $\Pi^\mu_1(A)$ subclass then $\varphi<0$. 4. \(i) $A\le_T 0'$, and (ii) for each $\Phi$, if $\varphi<A$ then $\varphi<0$. (1)${\Leftrightarrow}$(2) was shown in Theorem \[AB\]. (2)${\Rightarrow}$(3): Nies [@AM] shows that if $A$ is low for random then $A\le_T 0'$. Suppose Tot($\Phi$) has a $\Pi^\mu_1(A)$ subclass $Q$. By (2), Tot($\Phi$) has a $\Pi^\mu_1$ subclass $F$. By Lemma \[following\], we are done. (3)${\Rightarrow}$(2): Suppose (3) holds and suppose $Q$ is a $\Pi^\mu_1(A)$ class. Pick $\Psi$ such that $Q=\{X:\Psi^{X\oplus A}(0)\uparrow\}$. Since $A\le_T 0'$, $A=\lim_s A_s$, the limit of a computable approximation. Let $\Phi^{X}(s)=\mu t>s(\Psi_t^{X\oplus A_t}(0)\uparrow)$. Then $Q=$Tot($\Phi$).
Applying (3) to this $\Phi$, we have $\varphi<0$ and so by Lemma \[following\] we are done. (3)${\Leftrightarrow}$(4) is immediate from Lemma \[following\]. Positive-measure domination {#3} =========================== In [@dob:04] it was asked whether the Turing degrees $A$ of uniformly a.e. dominating functions are characterized by either of the inequalities $A\ge 0'$ and $A'\ge_T 0''$. The case $A\ge 0'$ was refuted by a direct construction in [@CGM]. The case $A'\ge_T 0''$ was refuted in [@BKLS:05] using precursors to the results presented here. Namely, the dual of property 4(ii) above is $\forall\varphi(\varphi<A)$ or equivalently $\forall\varphi(\varphi<0'{\rightarrow}\varphi<A)$. Relativizing our proofs gives that this is equivalent to: $0'$ is low for random relative to $A$. If we restrict ourselves to $A\le_T 0'$, then by [@AM] this implies $A'\ge_{tt}0''$, which is strictly stronger than $A'\ge_T 0''$. We do not know whether the assumption $A\le_T 0'$ is necessary for either of the conclusions $A'\ge_{tt}0''$, $A'\ge_T 0''$. We say that $A$ is *positive-measure dominating* if for each $\Phi$, $\Phi<A$. If each $B$-random real is $A$-random then we write $A\le_{LR}B$ (A is low for random relative to $B$) following [@AM]. We write $\Phi^A$ for the functional $X\mapsto \Phi^{A\oplus X}$. \[refreq\] Let $A\in 2^\omega$ and ${\mathcal}C\subseteq 2^\omega$. Then ${\mathcal}C$ is a $\Pi^0_2(A)$ class iff ${\mathcal}C$ is Tot($\Phi^A$) for some Turing functional $\Phi$. Suppose ${\mathcal}C$ is a $\Pi^0_2(A)$ class, i.e. ${\mathcal}C=\{X:\forall y\exists s R(y,s,A,X)\}$ where $R$ is a formula in the language of second-order arithmetic all of whose quantifiers are first-order and bounded. Then we can let $\Phi^{A\oplus X}(y)=\mu s(R(y,s,A,X))$. Conversely, Tot($\Phi^A)=\{X:\forall y\exists s(\Phi_s^{A\oplus X}(y)\downarrow)\}$. \[jada\] Let $B\in 2^\omega$. Then $0'$ is low for random relative to $B$ iff $B$ is positive-measure dominating. 
This is the special case $A=0$ of the fact that for each $A, B\in 2^\omega$, the following are equivalent: 1. $A'\le_{LR}A\oplus B$. 2. Each $\Pi^\mu_1(A')$ class has a $\Pi^\mu_1(A\oplus B)$ subclass. 3. Each $\Pi^\mu_2(A)$ class has a $\Pi^\mu_1(A\oplus B)$ subclass. 4. $\forall\Phi$, if Tot($\Phi^A$) has positive measure then it has a $\Pi^\mu_1(A\oplus B)$ subclass. 5. $\forall\Phi(\varphi^A<A\oplus B)$. The equivalences are proved as follows. (1)${\Leftrightarrow}$(2): Relativization of Theorem \[AB\] gives: $A\le_{LR}B$ iff each $\Pi^\mu_1(A)$ class has a $\Pi^\mu_1(B)$ subclass. (3)${\Leftrightarrow}$(4): Lemma \[refreq\]. (4)${\Leftrightarrow}$(5): Relativization of Lemma \[following\]. (2)${\Leftrightarrow}$(3): Let $A\in 2^\omega$. $A'$ is uniformly a.e. dominating relative to $A$, hence $A'$ is positive-measure dominating relative to $A$. Hence by putting $B=A'$ in (3)${\Leftrightarrow}$(5), each $\Pi^\mu_2(A)$ class has a $\Pi^\mu_1(A')$ subclass. Universal functionals [^2] {#universal-functionals .unnumbered} -------------------------- Suppose $\Phi_i$, $i\in\omega$ are all the Turing functionals. As observed in [@CGM], the functional $\Psi$ given by $\Psi^{0^i1X}=\Phi_i^X$ is *universal* for uniform a.e. domination, in the sense that any function that dominates $\Psi$ on almost every $X$, is a uniformly a.e. dominating function. As $\Psi<0$, $\Psi$ is not universal for positive-measure domination; however, the following functional is. Fix $c\in\omega$. Let $U^Y$ be universal among prefix-free Turing machines with oracle $Y$. Then $$\tag{0}(\forall n)(K^{0'}(X{\upharpoonright}n)\ge n-c)$$ is equivalent to $$\tag{1} (\forall n)(\forall\sigma\in 2^{<n-c})( \neg (U^{0'}(\sigma)=X{\upharpoonright}n)).$$ Let $\upsilon_t(\sigma)$ be the use of $0'$ in the computation $U^{0'[t]}_t(\sigma)$. (If the latter is undefined then so is the former.)
This is again equivalent to $$\tag{2} (\forall n)(\forall\sigma\in 2^{<n-c})(\forall s)(\exists t\ge s)$$ $$\notag \neg(U^{0'[t]}_t(\sigma)=X{\upharpoonright}n \text{ and }0'[t]{\upharpoonright}\upsilon_t(\sigma)=0'[t-1] {\upharpoonright}\upsilon_t(\sigma)).$$ (2)${\Rightarrow}$(1): Suppose $\neg$(1), so that actually $U^{0'}(\sigma)=X{\upharpoonright}n$. Let $s$ be such that $0'$ has stabilized up to the use of $U^{0'}(\sigma)$ by stage $s-1$. Then for all $t\ge s$, $U^{0'[t]}_t(\sigma)=X{\upharpoonright}n$, and $\neg$(2) follows. (1)${\Rightarrow}$(2): Suppose $\neg$(2), as witnessed by $n$, $\sigma$, and $s$. Thus, for all $t\ge s$ we have $U^{0'[t]}_t(\sigma)=X{\upharpoonright}n$, and $0'$ never changes below the use of $U^{0'}(\sigma)$ after stage $s$. Thus $U^{0'}(\sigma)=X{\upharpoonright}n$. Let $\Xi_c^X(n,\sigma,s)$ be the least stage $t\ge s$ at which $X$ looks like it is 2-random, with constant $c$, in the sense that $$|\sigma|< n-c{\rightarrow}\neg(U^{0'[t]}_t(\sigma)=X{\upharpoonright}n \text{ and }0'[t]{\upharpoonright}\upsilon_t(\sigma)=0'[t-1] {\upharpoonright}\upsilon_t(\sigma)).$$ Then $\Xi_c^X$ is total iff (0) holds. Thus $\Xi_c$ is total for positive-measure many $X$, all of which are 2-randoms. The running time $\xi_c$ of $\Xi_c$ is universal for positive-measure domination in the following sense. The class $\{A: A$ is positive-measure dominating$\}$ is $\Sigma^0_3$. In fact, for each $A\in 2^\omega$ and $c\in\omega$, $A$ is positive-measure dominating iff $\xi_c<A$. Suppose $\xi_c<A$. By Lemma \[following\], Tot($\Xi_c$) has a $\Pi^\mu_1(A)$ subclass. The complement of Tot($\Xi_c$) is $\{X: \exists n\, K^{0'}(X{\upharpoonright}n)<n-c\}$ which is open. Hence Tot($\Xi_c$) is closed and is in fact a $\Pi^\mu_1(0')$ class. Thus: Some $\Pi^\mu_1(0')$ class consisting entirely of $0'$-randoms has a $\Pi^\mu_1(A)$ subclass. By Theorem \[AB\] (7) relativized, $0'\le_{LR}A$, and so by Theorem \[jada\], $A$ is positive-measure dominating.
[^1]: The author thanks the Institute for Mathematical Sciences of the National University of Singapore for support during preparation of this manuscript at the *Computational Prospects of Infinity* conference in Summer 2005. The author also thanks Denis R. Hirschfeldt for proving upon request a lemma used in an earlier proof of the case $B\le_T 0'$ of Theorem \[jada\], and G. Barmpalias, A.E.M. Lewis and M. Soskova for useful communications. [^2]: The published version of this section contained a mistake, which has here been corrected.
--- abstract: 'Suppose that $X$ is a subspace of a Tychonoff space $Y$. Then the embedding mapping $e_{X, Y}: X\rightarrow Y$ can be extended to a continuous monomorphism $\hat{e}_{X, Y}: AP(X)\rightarrow AP(Y)$, where $AP(X)$ and $AP(Y)$ are the free Abelian paratopological groups over $X$ and $Y$, respectively. In this paper, we mainly discuss when $\hat{e}_{X, Y}$ is a topological monomorphism, that is, when $\hat{e}_{X, Y}$ is a topological embedding of $AP(X)$ to $AP(Y)$.' address: 'Fucai Lin(corresponding author): Department of Mathematics and Information Science, Zhangzhou Normal University, Zhangzhou 363000, P. R. China' author: - Fucai Lin title: topological monomorphisms between free paratopological groups --- [^1] Introduction ============ In 1941, free topological groups were introduced by A.A. Markov in [@MA] with the clear idea of extending the well-known construction of a free group from group theory to topological groups. Now, free topological groups have become a powerful tool of study in the theory of topological groups and serve as a source of various examples and as an instrument for proving new theorems, see [@A2008; @GM; @NP]. In [@GM], M.I. Graev extended continuous pseudometrics on a space $X$ to invariant continuous pseudometrics on $F(X)$ (or $A(X)$). Apparently, the description of a local base at the neutral element of the free Abelian topological group $A(X)$ in terms of continuous pseudometric on $X$ was known to M.I. Graev, but appeared explicitly in [@MD] and [@NP]. When working with free topological groups, it is also very important to know under which conditions on a subspace $X$ of a Tychonoff space $Y$, the subgroup $F(X, Y)$ of $F(Y)$ generated by $X$ is topologically isomorphic to the group $F(X)$, under the natural isomorphism extending the identity embedding of $X$ to $Y$. V.G. Pestov and E.C. Nummela gave some answers (see e.g. Theorem \[t1\]) in [@PV] and [@NE], respectively. In the Abelian case, M.G. 
Tkachenko gave an answer in [@TM1], see Theorem \[t2\]. It is well known that paratopological groups are good generalizations of topological groups, see e.g. [@A2008]. The Sorgenfrey line ([@E1989 Example 1.2.2]) with the usual addition is a first-countable paratopological group but not a topological group. The absence of continuity of inversion, the typical situation in paratopological groups, makes the study in this area very different from that in topological groups. Paratopological groups have attracted growing attention from many mathematicians in recent years. By analogy with free topological groups, S. Romaguera, M. Sanchis and M.G. Tkachenko defined free paratopological groups in [@RS]. Recently, N.M. Pyrch has investigated some properties of free paratopological groups, see [@PN1; @PN; @PN2]. In this paper, we will discuss the topological monomorphisms between free paratopological groups, and extend several results valid for free (Abelian) topological groups to free (Abelian) paratopological groups. Preliminaries ============= Firstly, we introduce some notions and terminology. Recall that a [*topological group*]{} $G$ is a group $G$ with a (Hausdorff) topology such that the product mapping of $G \times G$ into $G$ is jointly continuous and the inverse mapping of $G$ onto itself associating $x^{-1}$ with an arbitrary $x\in G$ is continuous. A [*paratopological group*]{} $G$ is a group $G$ with a topology such that the product mapping of $G \times G$ into $G$ is jointly continuous. [@MA] Let $X$ be a subspace of a topological group $G$. Assume that 1. The set $X$ generates $G$ algebraically, that is $<X>=G$; 2. Each continuous mapping $f: X\rightarrow H$ to a topological group $H$ extends to a continuous homomorphism $\hat{f}: G\rightarrow H$. Then $G$ is called the [*Markov free topological group on*]{} $X$ and is denoted by $F(X)$. [@RS] Let $X$ be a subspace of a paratopological group $G$. Assume that 1.
The set $X$ generates $G$ algebraically, that is $<X>=G$; 2. Each continuous mapping $f: X\rightarrow H$ to a paratopological group $H$ extends to a continuous homomorphism $\hat{f}: G\rightarrow H$. Then $G$ is called the [*Markov free paratopological group on*]{} $X$ and is denoted by $FP(X)$. Again, if all the groups in the above definitions are Abelian, then we get the definitions of the [*Markov free Abelian topological group*]{} and the [*Markov free Abelian paratopological group on*]{} $X$ which will be denoted by $A(X)$ and $AP(X)$ respectively. By a [*quasi-uniform space*]{} $(X, \mathscr{U})$ we mean the natural analog of a [*uniform space*]{} obtained by dropping the symmetry axiom. For each quasi-uniformity $\mathscr{U}$ the filter $\mathscr{U}^{-1}$ consisting of the inverse relations $U^{-1}=\{(y, x): (x, y)\in U\}$ where $U\in\mathscr{U}$ is called the [*conjugate quasi-uniformity*]{} of $\mathscr{U}$. We recall that the standard base of the [*left quasi-uniformity*]{} $\mathscr{G}_{G}$ on a paratopological group $G$ consists of the sets $$W_{U}^{l}=\{(x, y)\in G\times G: x^{-1}y\in U\},$$where $U$ is an arbitrary open neighborhood of the neutral element in $G$. If $X$ is a subspace of $G$, then the base of the left induced quasi-uniformity $\mathscr{G}_{X}=\mathscr{G}_{G}\mid X$ on $X$ consists of the sets $$W_{U}^{l}\cap (X\times X)=\{(x, y)\in X\times X: x^{-1}y\in U\}.$$Similarly, we can define the [*right induced quasi-uniformity*]{} on $X$. We also recall that the [*universal quasi-uniformity*]{} $\mathscr{U}_{X}$ of a space $X$ is the finest quasi-uniformity on $X$ that induces on $X$ its original topology. Throughout this paper, if $\mathscr{U}$ is a quasi-uniformity of a space $X$ then $\mathscr{U}^{\ast}$ denotes the smallest uniformity on $X$ that contains $\mathscr{U}$, and $\tau (\mathscr{U})$ denotes the topology of $X$ generated by $\mathscr{U}$. 
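The asymmetry that separates quasi-uniformities from uniformities can be made concrete with a small numerical check (ours, not the paper's): the function $u(x,y)=\max(y-x,0)$ on $\mathbb{R}$ vanishes on the diagonal and satisfies the triangle inequality, but it is not symmetric, and its entourages $\{(x,y): u(x,y)<r\}$ are exactly the sets $U_{r}$ of the upper quasi-uniformity $\mathscr{U}^{\star}$ recalled below. The name `u` is our own notation.

```python
import itertools
import random

def u(x, y):
    """Asymmetric 'upper' distance on the reals: how far y lies above x."""
    return max(y - x, 0.0)

random.seed(1)
pts = [random.uniform(-5.0, 5.0) for _ in range(30)]

for x, y, z in itertools.product(pts, repeat=3):
    assert u(x, x) == 0.0                        # vanishes on the diagonal
    assert u(x, y) <= u(x, z) + u(z, y) + 1e-12  # triangle inequality

# u is not symmetric, so the filter it generates is a quasi-uniformity
# but not a uniformity:
assert u(0.0, 1.0) == 1.0 and u(1.0, 0.0) == 0.0
```

The ball $\{y: u(x,r)\text{-close}\}=(-\infty, x+r)$ is a ray, so the induced topology is the upper topology on $\mathbb{R}$ rather than the usual one; symmetrizing, $\max(u(x,y),u(y,x))=|x-y|$ recovers the Euclidean metric, the simplest instance of passing from $\mathscr{U}$ to the smallest uniformity $\mathscr{U}^{\ast}$ containing it.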
A quasi-uniform space $(X, \mathscr{U})$ is called [*bicomplete*]{} if $(X, \mathscr{U}^{\ast})$ is complete. A function $f: (X, \mathscr{U})\rightarrow (Y, \mathscr{V})$ is called [*quasi-uniformly continuous*]{} if for each $V\in \mathscr{V}$ there exists a $U\in\mathscr{U}$ such that $(f(x), f(y))\in V$ whenever $(x, y)\in U,$ where $\mathscr{U}$ and $\mathscr{V}$ are quasi-uniformities for $X$ and $Y$ respectively. A [*quasi-pseudometric*]{} $d$ on a set $X$ is a function from $X\times X$ into the set of non-negative real numbers such that for $x, y, z\in X$: (a) $d(x, x)=0$ and (b) $d(x, y)\leq d(x, z)+d(z, y)$. If $d$ satisfies the additional condition (c) $d(x, y)=0\Leftrightarrow x=y$, then $d$ is called a [*quasi-metric*]{} on $X$. Every quasi-pseudometric $d$ on $X$ generates a topology $\mathscr{F}(d)$ on $X$ which has as a base the family of $d$-balls $\{B_{d}(x, r): x\in X, r>0\}$, where $B_{d}(x, r)=\{y\in X: d(x, y)< r\}$. A topological space $(X, \mathscr{F})$ is called [*quasi-(pseudo)metrizable*]{} if there is a quasi-(pseudo)metric $d$ on $X$ compatible with $\mathscr{F}$, where $d$ is compatible with $\mathscr{F}$ provided $\mathscr{F}=\mathscr{F}(d).$ Denote by $\mathscr{U}^{\star}$ the upper quasi-uniformity on $\mathbb{R}$ the standard base of which consists of the sets $$U_{r}=\{(x, y)\in \mathbb{R}\times \mathbb{R}: y<x+r\},$$where $r$ is an arbitrary positive real number. Given a group $G$ with the neutral element $e$, a function $N: G\rightarrow [0,\infty)$ is called a [*quasi-prenorm*]{} on $G$ if the following conditions are satisfied: 1. $N(e)=0$; and 2. $N(gh)\leq N(g)+N(h)$ for all $g, h\in G$. Let $X$ be a subspace of a Tychonoff space $Y$. 1. The subspace $X$ is [*P-embedded*]{} in $Y$ if each continuous pseudometric on $X$ admits a continuous extension over $Y$; 2. The subspace $X$ is [*P$^{\ast}$-embedded*]{} in $Y$ if each bounded continuous pseudometric on $X$ admits a continuous extension over $Y$; 3.
The subspace $X$ is [*quasi-P-embedded*]{} in $Y$ if each continuous quasi-pseudometric from $(X\times X, \mathscr{U}_{X}^{-1}\times \mathscr{U}_{X})$ to $(\mathbb{R}, \mathscr{U}^{\star})$ admits a continuous extension from $(Y\times Y, \mathscr{U}_{Y}^{-1}\times \mathscr{U}_{Y})$ to $(\mathbb{R}, \mathscr{U}^{\star})$; 4. The subspace $X$ is [*quasi-P$^{\ast}$-embedded*]{} in $Y$ if each bounded continuous quasi-pseudometric from $(X\times X, \mathscr{U}_{X}^{-1}\times \mathscr{U}_{X})$ to $(\mathbb{R}, \mathscr{U}^{\star})$ admits a continuous extension from $(Y\times Y, \mathscr{U}_{Y}^{-1}\times \mathscr{U}_{Y})$ to $(\mathbb{R}, \mathscr{U}^{\star})$. Throughout this paper, we use $G(X)$ to denote the topological groups $F(X)$ or $A(X)$, and $PG(X)$ to denote the paratopological groups $FP(X)$ or $AP(X)$. For a subset $Y$ of a space $X$, we use $G(Y, X)$ and $PG(Y, X)$ to denote the subgroups of $G(X)$ and $PG(X)$ generated by $Y$ respectively. Moreover, we denote the abstract groups of $F(X), FP(X)$ by $F_{a}(X)$ and of $A(X)$ and $AP(X)$ by $A_{a}(X)$, respectively. Since $X$ generates the free group $F_{a}(X)$, each element $g\in F_{a}(X)$ has the form $g=x_{1}^{\varepsilon_{1}}\cdots x_{n}^{\varepsilon_{n}}$, where $x_{1}, \cdots, x_{n}\in X$ and $\varepsilon_{1}, \cdots, \varepsilon_{n}=\pm 1$. This word for $g$ is called [*reduced*]{} if it contains no pair of consecutive symbols of the form $xx^{-1}$ or $x^{-1}x$. It follows that if the word $g$ is reduced and non-empty, then it is different from the neutral element of $F_{a}(X)$. In particular, each element $g\in F_{a}(X)$ distinct from the neutral element can be uniquely written in the form $g=x_{1}^{r_{1}}x_{2}^{r_{2}}\cdots x_{n}^{r_{n}}$, where $n\geq 1$, $r_{i}\in \mathbb{Z}\setminus\{0\}$, $x_{i}\in X$, and $x_{i}\neq x_{i+1}$ for each $i=1, \cdots, n-1$. Such a word is called the [*normal form*]{} of $g$. Similar assertions are valid for $A_{a}(X)$.
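As a concrete illustration of the quasi-pseudometric notions introduced above (not needed in the sequel), consider the function $u(x, y)=\max\{y-x, 0\}$ on $\mathbb{R}$. It is a quasi-pseudometric, since $u(x, x)=0$ and $$u(x, y)=\max\{y-x, 0\}\leq \max\{z-x, 0\}+\max\{y-z, 0\}=u(x, z)+u(z, y)$$ for all $x, y, z\in\mathbb{R}$; it is not a quasi-metric, because $u(x, y)=0$ whenever $y\leq x$. Its balls are $B_{u}(x, r)=\{y\in\mathbb{R}: y<x+r\}=(-\infty, x+r)$, so the entourages $\{(x, y): u(x, y)<r\}$ are exactly the sets $U_{r}$ above, and the quasi-uniformity generated by $u$ is the upper quasi-uniformity $\mathscr{U}^{\star}$. Moreover, $U_{r}\cap U_{r}^{-1}=\{(x, y): |x-y|<r\}$, so the smallest uniformity $(\mathscr{U}^{\star})^{\ast}$ containing $\mathscr{U}^{\star}$ is the usual metric uniformity of $\mathbb{R}$.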
We denote by $\mathbb{N}$ the set of all natural numbers. The letter $e$ denotes the neutral element of a group. Readers may consult [@A2008; @E1989; @Gr1984] for notations and terminology not explicitly given here. Background ========== If $X$ is an arbitrary subspace of a Tychonoff space $Y$, then let $e_{X, Y}$ be the natural embedding mapping from $X$ to $Y$. The following two theorems are well known in the theory of free topological groups. [@NE; @PV Nummela-Pestov]\[t1\] Let $X$ be a dense subspace of a Tychonoff space $Y$. Then the embedding mapping $e_{X, Y}$ can be extended to a topological monomorphism $\hat{e}_{X, Y}: F(X)\rightarrow F(Y)$ if and only if $X$ is P-embedded in $Y$. [@TM1 M.G. Tkackenko]\[t2\] Let $X$ be an arbitrary subspace of a Tychonoff space $Y$. Then the embedding mapping $e_{X, Y}$ can be extended to a topological monomorphism $\hat{e}_{X, Y}: A(X)\rightarrow A(Y)$ if and only if $X$ is P$^{\ast}$-embedded in $Y$. Obviously, if $X$ is a subspace of a Tychonoff space $Y$, then the embedding mapping $e_{X, Y}: X\rightarrow Y$ can be extended to a continuous monomorphism $\hat{e}_{X, Y}: PG(X)\rightarrow PG(Y)$. In view of Theorems \[t1\] and \[t2\], it is natural to ask the following two questions: \[q1\] Let $X$ be a dense subspace of a Tychonoff space $Y$. Is it true that the embedding mapping $e_{X, Y}$ can be extended to a topological monomorphism $\hat{e}_{X, Y}: FP(X)\rightarrow FP(Y)$ if and only if $X$ is quasi-P-embedded in $Y$? \[q2\] Let $X$ be a subspace of a Tychonoff space $Y$. Is it true that the embedding mapping $e_{X, Y}$ can be extended to a topological monomorphism $\hat{e}_{X, Y}: AP(X)\rightarrow AP(Y)$ if and only if $X$ is quasi-P$^{\ast}$-embedded in $Y$? In this paper, we shall give an affirmative answer to Question \[q2\].
Moreover, we shall give a partial answer to Question \[q1\]: we prove that if $X$ is a subspace of a Tychonoff space $Y$ which is $\tau(\tilde{\mathscr{U}}_{Y}^{\ast})$-dense in the bicompletion $(\tilde{Y}, \tilde{\mathscr{U}}_{Y})$ of $(Y, \mathscr{U}_{Y})$, and the natural mapping $\hat{e}_{X, Y}: FP(X)\rightarrow FP(Y)$ is a topological monomorphism, then $X$ is quasi-P-embedded in $Y$. Quasi-pseudometrics on free paratopological groups ================================================== In this section, we shall give some lemmas and theorems in order to prove our main results in Section 4. We now outline some of the ideas of [@RS] in a form suitable for our applications. Suppose that $e$ is the neutral element of the abstract free group $F_{a}(X)$ on $X$, and suppose that $\rho$ is a fixed quasi-pseudometric on $X$ which is bounded by 1. Extend $\rho$ from $X$ to a quasi-pseudometric $\rho_{e}$ on $X\cup\{e\}$ by putting $$\rho_{e}(x, y)=\left\{ \begin{array}{lll} 0, & \mbox{if } x=y,\\ \rho(x, y), & \mbox{if } x, y\in X,\\ 1, & \mbox{otherwise}\end{array}\right.$$ for arbitrary $x, y\in X\cup\{e\}$. By [@RS], we extend $\rho_{e}$ to a quasi-pseudometric $\rho^{\ast}$ on $\tilde{X}=X\cup\{e\}\cup X^{-1}$ defined by $$\rho^{\ast}(x, y)=\left\{ \begin{array}{lll} 0, & \mbox{if } x=y,\\ \rho_{e}(x, y), & \mbox{if } x, y\in X\cup\{e\},\\ \rho_{e}(y^{-1}, x^{-1}), & \mbox{if } x, y\in X^{-1}\cup\{e\},\\ 2, & \mbox{otherwise}\end{array}\right.$$ for arbitrary $x, y\in\tilde{X}$. Let $A$ be a subset of $\mathbb{N}$ such that $|A|=2n$ for some $n\geq 1$. A [*scheme*]{} on $A$ is a partition of $A$ into pairs $\{a_{i}, b_{i}\}$ with $a_{i}<b_{i}$ such that every two intervals $[a_{i}, b_{i}]$ and $[a_{j}, b_{j}]$ in $\mathbb{N}$ are either disjoint or one contains the other. If $\mathcal{X}$ is a word in the alphabet $\tilde{X}$, then we denote the reduced form and the length of $\mathcal{X}$ by $[\mathcal{X}]$ and $\ell (\mathcal{X})$ respectively.
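To illustrate the notion of a scheme: on $A=\{1, 2, 3, 4\}$ the partitions $\{\{1, 2\}, \{3, 4\}\}$ and $\{\{1, 4\}, \{2, 3\}\}$ are schemes, while $\{\{1, 3\}, \{2, 4\}\}$ is not, since the intervals $[1, 3]$ and $[2, 4]$ overlap but neither contains the other. Schemes thus correspond to the non-crossing matchings of the letters of a word, as in a correct arrangement of parentheses.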
For each $n\in \mathbb{N}$, let $\mathcal{S}_{n}$ be the family of all schemes $\varphi$ on $\{1, 2, \cdots, 2n\}$. As in [@RS], for a word $\mathcal{X}=x_{1}x_{2}\cdots x_{2n}$ in the alphabet $\tilde{X}$ and a scheme $\varphi\in\mathcal{S}_{n}$, define $$\Gamma_{\rho}(\mathcal{X}, \varphi)=\frac{1}{2}\sum_{i=1}^{2n}\rho^{\ast}(x_{i}^{-1}, x_{\varphi (i)}).$$ Then we define a quasi-prenorm $N_{\rho}: F_{a}(X)\rightarrow [0, +\infty)$ by setting $N_{\rho}(g)=0$ if $g=e$ and $$N_{\rho}(g)=\inf\{\Gamma_{\rho}(\mathcal{X}, \varphi): [\mathcal{X}]=g, \ell (\mathcal{X})=2n, \varphi\in\mathcal{S}_{n}, n\in \mathbb{N}\}$$ if $g\in F_{a}(X)\setminus\{e\}$. It follows from Claim 3 in [@RS] that $N_{\rho}$ is an invariant quasi-prenorm on $F_{a}(X)$. Put $\hat{\rho}(g, h)=N_{\rho}(g^{-1}h)$ for all $g, h\in F_{a}(X)$. We refer to $\hat{\rho}$ as the Graev extension of $\rho$ to $F_{a}(X)$. Given a word $\mathcal{X}$ in the alphabet $\tilde{X}$, we say that $\mathcal{X}$ is [*almost irreducible*]{} if $\mathcal{X}$ does not contain two consecutive symbols of the form $u$, $u^{-1}$ or $u^{-1}$, $u$ (but $\mathcal{X}$ may contain several letters equal to $e$), see [@RS]. The following two lemmas are essentially Claims in the proof of Theorem 3.2 in [@RS]. \[l10\][@RS] Let $\rho$ be a quasi-pseudometric on $X$ bounded by 1. If $g$ is a reduced word in $F_{a}(X)$ distinct from $e$, then there exists an almost irreducible word $\mathcal{X}_{g}=x_{1}x_{2}\cdots x_{2n}$ of length $2n\geq 2$ in the alphabet $\tilde{X}$ and a scheme $\varphi_{g}\in\mathcal{S}_{n}$ that satisfy the following conditions: 1. for $i=1, 2, \cdots, 2n$, either $x_{i}$ is $e$ or $x_{i}$ is a letter in $g$; 2. $[\mathcal{X}_{g}]=g$ and $n\leq \ell(g)$; and 3. $N_{\rho}(g)=\Gamma_{\rho}(\mathcal{X}_{g}, \varphi_{g}).$ \[l11\][@RS] The family $\mathcal{N}=\{U_{\rho}(\varepsilon): \varepsilon >0\}$ is a base at the neutral element $e$ for a paratopological group topology $\mathscr{F}_{\rho}$ on $F_{a}(X)$, where $U_{\rho}(\varepsilon)=\{g\in F_{a}(X): N_{\rho}(g)<\varepsilon\}$.
The restriction of $\mathscr{F}_{\rho}$ to $X$ coincides with the topology of the space $X$ generated by $\rho$. \[l0\][@F1982] For every sequence $V_{0}, V_{1}, \cdots$ of elements of a quasi-uniformity $\mathscr{U}$ on a set $X$, if $$V_{0}=X\times X\ \mbox{and}\ V_{i+1}\circ V_{i+1}\circ V_{i+1}\subset V_{i},\ \mbox{for}\ i\in \mathbb{N},$$ where ‘$\circ$’ denotes the composition of entourages in the quasi-uniform space $(X, \mathscr{U})$, then there exists a quasi-pseudometric $\rho$ on the set $X$ such that, for each $i\in \mathbb{N}$, $$V_{i}\subset\{(x, y): \rho (x, y)\leq \frac{1}{2^{i}}\}\subset V_{i-1}.$$ \[l1\] For every quasi-uniformity $\mathscr{V}$ on a set $X$ and each $V\in \mathscr{V}$ there exists a quasi-pseudometric $\rho$ bounded by 1 on $X$ which is quasi-uniform with respect to $\mathscr{V}$ and satisfies the condition $$\{(x, y): \rho (x, y)< 1\}\subset V.$$ By the definition of a quasi-uniformity, we can find a sequence $V_{0}, V_{1}, \cdots , V_{n}, \cdots$ of members of $\mathscr{V}$ such that $V_{1}=V$ and $V_{i+1}\circ V_{i+1}\circ V_{i+1}\subset V_{i}\ \mbox{for each}\ i\in \mathbb{N}$. Let $\rho=\mbox{min}\{1, 4\rho_{0}\}$, where $\rho_{0}$ is a quasi-pseudometric as in Lemma \[l0\]. Then $\rho$ is a quasi-pseudometric which has the required property. Given a finite subset $B$ of $\mathbb{N}$ with $|B|=2n\geq 2$, we say that a bijection $\varphi: B\rightarrow B$ is an [*Abelian scheme*]{} on $B$ if $\varphi$ is an involution without fixed points, that is, $\varphi(i)=j$ always implies $j\neq i$ and $\varphi(j)=i$. \[l2\] Suppose that $\rho$ is a quasi-pseudometric on a set $X$, and suppose that $m_{1}x_{1}+\cdots +m_{n}x_{n}$ is the normal form of an element $h\in A_{a}(X)\setminus\{e\}$ of length $l=\sum_{i=1}^{n}|m_{i}|$.
Then there is a representation $$h=(-u_{1}+v_{1})+\cdots +(-u_{k}+v_{k}), \eqno(1)$$ where $2k=l$ if $l$ is even and $2k=l+1$ if $l$ is odd, $u_{1}, v_{1}, \cdots , u_{k}, v_{k}\in\{\pm x_{1}, \cdots , \pm x_{n}\}$ (but $v_{k}=e$ if $l$ is odd), and such that $$\hat{\rho}_{A}(e, h)=\sum_{i=1}^{k}\rho^{\ast}(u_{i}, v_{i}). \eqno(2)$$ In addition, if $\hat{\rho}_{A}(e, h)<1$, then $l=2k$, and one can choose $y_{1}, z_{1}, \cdots , y_{k}, z_{k}\in\{x_{1}, \cdots , x_{n}\}$ such that $$h=(-y_{1}+z_{1})+\cdots +(-y_{k}+z_{k}) \eqno(3)$$ and $$\hat{\rho}_{A}(e, h)=\sum_{i=1}^{k}\rho^{\ast}(y_{i}, z_{i}). \eqno(4)$$ Obviously, we have $h=h_{1}+\cdots +h_{l}$, where $h_{i}\in\{\pm x_{1}, \cdots , \pm x_{n}\}$ for each $1\leq i\leq l$. Clearly, there exists an integer $k$ such that $2k-1\leq l\leq 2k$. Without loss of generality, we may assume that $l$ is even. In fact, if $l=2k-1$, then one can additionally put $h_{2k}=e$. It follows from the proof of Lemma \[l10\] (see [@RS]) that a similar assertion is valid for the case of $A_{a}(X)$. Hence there exists an Abelian scheme $\varphi$ on $\{1, 2, \cdots , 2k\}$ such that $$\hat{\rho}_{A}(e, h)=\frac{1}{2}\sum_{i=1}^{2k}\rho^{\ast}(-h_{i}, h_{\varphi (i)}). \eqno(5)$$ Since the group $A_{a}(X)$ is Abelian, we may assume that $\varphi (2i-1)=2i$ for each $1\leq i\leq k$. Obviously, $\varphi (2i)=2i-1$ for each $1\leq i\leq k$. Hence, we have $$h=(h_{1}+h_{2})+\cdots +(h_{2k-1}+h_{2k}). \eqno(6)$$ For each $1\leq i\leq k$, put $u_{i}=-h_{2i-1}$ and $v_{i}=h_{2i}$. Then it follows from (5) and (6) that (1) and (2) are true. Finally, suppose that $\hat{\rho}_{A}(e, h)<1$. Since $\rho_{e}(x, e)=1$ and $\rho_{e}(e, x)=1$, we have $\rho^{\ast}(x, e)=1$, $\rho^{\ast}(e, x)=1$, $\rho^{\ast}(-x, y)=2$ and $\rho^{\ast}(x, -y)=2$ for all $x, y\in X$.
However, it follows from (5) that $\rho^{\ast}(-h_{2i-1}, h_{2i})<1$ for each $1\leq i\leq k$, and therefore, one of the elements $h_{2i-1}, h_{2i}$ is in $X$ while the other is in $-X$. Thus, for each $1\leq i\leq k$, we have $h_{2i-1}+h_{2i}=-y_{i}+z_{i}$, where $y_{i}, z_{i}\in X$. Obviously, $y_{i}, z_{i}\in\{x_{1}, \cdots , x_{n}\}$ for each $1\leq i\leq k$. Next, we only need to replace $h_{2i-1}$ and $h_{2i}$ by the corresponding elements $\pm y_{i}$ and $\pm z_{i}$ in (5) and (6), respectively. Hence we obtain (3) and (4). \[l8\] If $d$ is a quasi-pseudometric on a set $X$ which is quasi-uniform with respect to $\mathscr{U}_{X}$, then $d$ is continuous as a mapping from $(X\times X, \mathscr{U}_{X}^{-1}\times \mathscr{U}_{X})$ to $(\mathbb{R}, \mathscr{U}^{\star})$. Take an arbitrary point $(x_{0}, y_{0})\in X\times X$. It is sufficient to show that $d$ is continuous at the point $(x_{0}, y_{0})$. For each $\varepsilon>0$, since $d$ is quasi-uniform with respect to $\mathscr{U}_{X}$, there exists a $U\in\mathscr{U}_{X}$ such that $d(x, y)<\frac{\varepsilon}{2}$ for each $(x, y)\in U$. Let $U_{1}=\{x\in X: (x, x_{0})\in U\}$ and $U_{2}=\{y\in X: d(y_{0}, y)<\frac{\varepsilon}{2}\}$. Then $U_{1}, U_{2}$ are neighborhoods of the points $x_{0}$ and $y_{0}$ in the spaces $(X, \mathscr{U}_{X}^{-1})$ and $(X, \mathscr{U}_{X})$ respectively. Put $V=U_{1}\times U_{2}$. Then $V$ is a neighborhood of the point $(x_{0}, y_{0})$ in $(X\times X, \mathscr{U}_{X}^{-1}\times \mathscr{U}_{X})$. For each $(x, y)\in V$, we have $$\begin{aligned} d(x, y)-d(x_{0}, y_{0})&\leq&d(x, x_{0})+d(x_{0}, y_{0})+d(y_{0}, y)-d(x_{0}, y_{0}) \nonumber\\ &=&d(x, x_{0})+d(y_{0}, y)<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon. \nonumber\end{aligned}$$ Therefore, the quasi-pseudometric $d$ is continuous at the point $(x_{0}, y_{0})$.
\[l3\][@NP] Let $\{V_{i}: i\in\mathbb{N}\}$ be a sequence of subsets of a group $G$ with identity $e$ such that $e\in V_{i}$ and $V_{i+1}^{3}\subset V_{i}$ for each $i\in \mathbb{N}$. If $k_{1}, \cdots , k_{n}, r\in \mathbb{N}$ and $\sum_{i=1}^{n}2^{-k_{i}}\leq 2^{-r}$, then we have $V_{k_{1}}\cdots V_{k_{n}}\subset V_{r}$. In the next theorem we prove that the family of quasi-pseudometrics $\{\hat{\rho}_{A}: \rho\in \mathscr{P}_{X}\}$, where $\mathscr{P}_{X}$ is the family of all continuous quasi-pseudometrics from $(X\times X, \mathscr{U}_{X}^{-1}\times \mathscr{U}_{X})$ to $(\mathbb{R}, \mathscr{U}^{\star})$, generates the topology of the free Abelian paratopological group $AP(X)$. \[t0\] Let $X$ be a Tychonoff space, and let $\mathscr{P}_{X}$ be the family of all continuous quasi-pseudometrics from $(X\times X, \mathscr{U}_{X}^{-1}\times \mathscr{U}_{X})$ to $(\mathbb{R}, \mathscr{U}^{\star})$ which are bounded by 1. Then the sets $$V_{\rho}=\{g\in AP(X): \hat{\rho}_{A}(e, g)<1\}$$ with $\rho\in\mathscr{P}_{X}$ form a local base at the neutral element $e$ of $AP(X)$. Let $V$ be an open neighborhood of $e$ in $AP(X)$. Since $AP(X)$ is a paratopological group, there exists a sequence $\{V_{n}: n\in \mathbb{N}\}$ of open neighborhoods of $e$ in $AP(X)$ such that $V_{1}\subset V$ and $V_{i+1}+V_{i+1}+V_{i+1}\subset V_{i}$ for every $i\in \mathbb{N}$. For each $n\in \mathbb{N}$, put $$U_{n}=\{(x, y)\in X\times X: -x+y\in V_{n}\}.$$ Then each $U_{n}$ is an element of the universal quasi-uniformity $\mathscr{U}_{X}$ on the space $X$ and $U_{n+1}\circ U_{n+1}\circ U_{n+1}\subset U_{n}$. Hence, it follows from Lemmas \[l0\] and \[l8\] that there is a continuous quasi-pseudometric $\rho_{1}$ on $X$ such that, for each $n\in \mathbb{N},$$$\{(x, y)\in X\times X: \rho_{1} (x, y)<2^{-n}\}\subset U_{n}.$$Let $\rho =\mbox{min}\{1, 4\rho_{1}\}$. Then $\rho\in\mathscr{P}_{X}$. Claim: We have $V_{\rho}\subset V.$ Indeed, let $h\in V_{\rho}$. 
It follows from Lemma \[l2\] that the element $h$ can be written in the form $$h=(-x_{1}+y_{1})+\cdots +(-x_{m}+y_{m}),\ \mbox{where}\ x_{i}, y_{i}\in X\ \mbox{for each}\ 1\leq i\leq m,$$ such that $$\hat{\rho}_{A}(e, h)=\rho (x_{1}, y_{1})+\cdots +\rho (x_{m}, y_{m})< 1.$$ Since $\rho(x_{i}, y_{i})<1$ for each $1\leq i\leq m$, it follows from the definition of $\rho$ that $\rho(x_{i}, y_{i})=4\rho_{1}(x_{i}, y_{i})$ for each $1\leq i\leq m$. Therefore, we have $$\rho_{1}(x_{1}, y_{1})+\cdots +\rho_{1}(x_{m}, y_{m})<\frac{1}{4}.$$ If $1\leq i\leq m$ and $\rho_{1}(x_{i}, y_{i})>0$ then we choose a $k_{i}\in \mathbb{N}$ such that $$2^{-k_{i}-1}\leq \rho_{1}(x_{i}, y_{i})<2^{-k_{i}}.$$ If $1\leq i\leq m$ and $\rho_{1}(x_{i}, y_{i})=0$ then we choose $k_{i}\in \mathbb{N}$ so large that $\sum_{i=1}^{m}2^{-k_{i}}\leq\frac{1}{2}$. For every $1\leq i\leq m$ we have $\rho_{1}(x_{i}, y_{i})<2^{-k_{i}}$ and hence $-x_{i}+y_{i}\in V_{k_{i}}$; it therefore follows from Lemma \[l3\] that $$h=(-x_{1}+y_{1})+\cdots +(-x_{m}+y_{m})\in V_{k_{1}}+\cdots +V_{k_{m}}\subset V_{1}\subset V.$$ Therefore, we have $V_{\rho}\subset V.$ We do not know whether a similar assertion is valid for the free paratopological group $FP(X)$. However, we have the following Theorem \[t3\]. Our argument will be based on the following combinatorial lemma (see [@A2008 Lemma 7.2.8] for a proof). \[l4\] Let $g=x_{1}\cdots x_{2n}$ be a reduced element of $F_{a}(X)$, where $x_{1}, \cdots , x_{2n}\in X\cup X^{-1}$, and let $\varphi$ be a scheme on $\{1, 2, \cdots, 2n\}$.
Then there are natural numbers $1\leq i_{1}<\cdots <i_{n}\leq 2n$ and elements $h_{1}, \cdots , h_{n}\in F_{a}(X)$ satisfying the following two conditions:\ i) $\{i_{1}, \cdots , i_{n}\}\cup\{i_{\varphi(1)}, \cdots , i_{\varphi(n)}\}=\{1, 2, \cdots , 2n\};$\ ii) $g=(h_{1}x_{i_{1}}x_{\varphi(i_{1})}h_{1}^{-1})\cdots (h_{n}x_{i_{n}}x_{\varphi(i_{n})}h_{n}^{-1}).$ A paratopological group $G$ has an [*invariant basis*]{} if there exists a family $\mathscr{L}$ of continuous and invariant quasi-pseudometrics on $G$ such that the family $\{U_{\rho}: \rho\in\mathscr{L}\}$ forms a base at the neutral element $e$ in $G$, where $U_{\rho}=\{g\in G: \rho(e, g)<1\}$. \[t3\] For each Tychonoff space $X$, if the abstract group $F_{a}(X)$ admits the maximal paratopological group topology $\mathscr{F}_{\mbox{inv}}$ with invariant basis such that every continuous mapping $f: X\rightarrow H$ to a paratopological group $H$ with invariant basis can be extended to a continuous homomorphism $\tilde{f}: (F_{a}(X), \mathscr{F}_{\mbox{inv}})\rightarrow H$, then the family of all sets of the form $$U_{\rho}=\{g\in F_{a}(X): \hat{\rho}(e, g)<1\},$$where $\rho$ is a continuous quasi-pseudometric from $(X\times X, \mathscr{U}_{X}^{-1}\times \mathscr{U}_{X})$ to $(\mathbb{R}, \mathscr{U}^{\star})$ with $\rho\leq 1$, constitutes a base of the topology $\mathscr{F}_{\mbox{inv}}$ at the neutral element $e$ of $F_{a}(X)$. For each $\rho\in\mathscr{P}_{X}$, put $$U_{\rho}=\{g\in F_{a}(X): N_{\rho}(g)<1\},\ \mbox{and}\ \mathscr{N}=\{U_{\rho}: \rho\in\mathscr{P}_{X}\},$$where $N_{\rho}$ is the invariant quasi-prenorm on $F_{a}(X)$ defined by $\hat{\rho}(g, h)=N_{\rho}(g^{-1}h)$. By Lemma \[l11\] and Proposition 3.8 in [@PN], it is easy to see that $\mathscr{N}$ forms a base at the neutral element $e$ of $F_{a}(X)$ for a Hausdorff paratopological group topology. We denote this topology by $\mathscr{F}_{\mbox{inv}}$.
Since $\hat{\rho}$ is invariant on $F_{a}(X)$, the paratopological group $FP_{\mbox{inv}}(X)=(FP(X), \mathscr{F}_{\mbox{inv}})$ has an invariant basis, and hence $\hat{\rho}$ is continuous on $FP_{\mbox{inv}}(X)$. Let $f: X\rightarrow H$ be a continuous mapping of $X$ to a paratopological group $H$ with invariant basis. Let $\tilde{f}$ be the extension of $f$ to a homomorphism of $F_{a}(X)$ to $H$. Claim: The map $\tilde{f}: FP_{\mbox{inv}}(X)\rightarrow H$ is a continuous homomorphism. Let $V$ be an open neighborhood of the neutral element of $H$. Since $H$ has an invariant basis, there exists an invariant quasi-prenorm $N$ on $H$ such that $W=\{h\in H: N(h)<1\}\subset V$. Hence we can define a quasi-pseudometric $\rho$ on $X$ by $\rho (x, y)=N(f(x)^{-1}f(y))$ for all $x, y\in X$. Next, we shall show that $\tilde{f}(U_{\rho})\subset W$. Indeed, take an arbitrary reduced element $g\in U_{\rho}$ distinct from the neutral element of $F_{a}(X)$. Obviously, we have $\hat{\rho}(e, g)<1$. Moreover, it is easy to see that $g$ has even length, say $g=x_{1}\cdots x_{2n}$, where $x_{i}\in X\cup X^{-1}$ for each $1\leq i\leq 2n$. It follows from $\hat{\rho}(e, g)<1$ that there is a scheme $\varphi$ on $\{1, 2, \cdots, 2n\}$ such that $$\hat{\rho}(e, g)=\frac{1}{2}\sum_{i=1}^{2n}\rho^{\ast}(x_{i}^{-1}, x_{\varphi(i)})<1.$$By Lemma \[l4\], we can find a partition $\{1, 2, \cdots , 2n\}=\{i_{1}, \cdots , i_{n}\}\cup\{i_{\varphi(1)}, \cdots , i_{\varphi(n)}\}$ and a representation of $g$ as a product $g=g_{1}\cdots g_{n}$ such that $g_{k}=h_{k}x_{i_{k}}x_{\varphi(i_{k})}h_{k}^{-1}$ for each $k\leq n$, where $h_{k}\in F_{a}(X)$.
Since $N$ is invariant, we have $$\begin{aligned} N(\tilde{f}(g))&\leq&\sum_{k=1}^{n}N(\tilde{f}(g_{k}))=\sum_{k=1}^{n}N(\tilde{f}(x_{i_{k}})\tilde{f}(x_{\varphi(i_{k})})) \nonumber\\ &\leq&\rho^{\ast}(x_{i_{1}}^{-1}, x_{\varphi(i_{1})})+\cdots +\rho^{\ast}(x_{i_{n}}^{-1}, x_{\varphi(i_{n})})\nonumber\\ &<&1.\nonumber\end{aligned}$$ Therefore, we have $\tilde{f}(g)\in W$, and it follows that $\tilde{f}(U_{\rho})\subset W\subset V$. Hence $\tilde{f}$ is a continuous homomorphism. Topological monomorphisms between free paratopological groups ============================================================= In order to prove one of our main theorems, we also need the following lemma. \[l7\] Let $(X, \mathscr{U}_{X})$ be a quasi-uniform subspace of a Tychonoff space $(Y, \mathscr{U}_{Y})$. Then $X$ is quasi-P$^{\ast}$-embedded in $Y$. Let $d$ be a bounded, continuous quasi-pseudometric from $(X\times X, \mathscr{U}_{X}^{-1}\times \mathscr{U}_{X})$ to $(\mathbb{R}, \mathscr{U}^{\star})$. One can assume that $d$ is bounded by $\frac{1}{2}$. For each $i\in \mathbb{N}$, take a $V_{i}\in \mathscr{U}_{Y}$ satisfying $V_{i}\cap (X\times X)\subset \{(x, y)\in X\times X: d(x, y)<\frac{1}{2^{i}}\}$, and then by [@SA Chap. 3, Proposition 2.4 and Theorem 2.5], take a continuous quasi-pseudometric $d_{i}$ from $(Y\times Y, \mathscr{U}_{Y}^{-1}\times \mathscr{U}_{Y})$ to $(\mathbb{R}, \mathscr{U}^{\star})$ such that $d_{i}$ is bounded by 1, quasi-uniform with respect to $\mathscr{U}_{Y}$ and $\{(x, y)\in Y\times Y: d_{i}(x, y)<\frac{1}{4}\}\subset V_{i}$. Put $$\rho(x, y) =8\sum_{i=1}^{\infty}\frac{1}{2^{i}}d_{i}(x, y).$$ One can easily prove that $\rho$ is a continuous quasi-pseudometric from $(Y\times Y, \mathscr{U}_{Y}^{-1}\times \mathscr{U}_{Y})$ to $(\mathbb{R}, \mathscr{U}^{\star})$. Moreover, it is easy to see that $\rho$ is quasi-uniform with respect to $\mathscr{U}_{Y}$ and satisfies $d(x, y)\leq\rho(x, y)$ for all $x, y\in X$.
Put $$\rho^{\prime}(x, y)=\inf\{\rho(x, a)+d(a, b)+\rho(b, y): a, b\in X\},\ \mbox{where}\ x, y\in Y.$$ Let $$\tilde{d}(x, y)=\min\{\rho(x, y), \rho^{\prime}(x, y)\}\ \mbox{for all}\ x, y\in Y.$$ Obviously, $\tilde{d}$ is quasi-uniform with respect to $\mathscr{U}_{Y}$. It follows from Lemma \[l8\] that $\tilde{d}$ is a continuous quasi-pseudometric from $(Y\times Y, \mathscr{U}_{Y}^{-1}\times \mathscr{U}_{Y})$ to $(\mathbb{R}, \mathscr{U}^{\star})$. Moreover, we have $\tilde{d}\mid (X\times X)=d.$ Therefore, $X$ is quasi-P$^{\ast}$-embedded in $Y$. Now, we shall prove our main theorem, which gives an affirmative answer to Question \[q2\]. Let $X$ be an arbitrary subspace of a Tychonoff space $Y$. Then the natural mapping $\hat{e}_{X, Y}: AP(X)\rightarrow AP(Y)$ is a topological monomorphism if and only if $X$ is quasi-P$^{\ast}$-embedded in $Y$. Necessity. Let $d$ be an arbitrary bounded continuous quasi-pseudometric from $(X\times X, \mathscr{U}_{X}^{-1}\times \mathscr{U}_{X})$ to $(\mathbb{R}, \mathscr{U}^{\star})$, where $\mathscr{U}_{X}$ is the universal quasi-uniformity on $X$. Then $U_{d}=\{(x, y)\in X\times X: d(x, y)<1\}\in\mathscr{U}_{X}$. Put $V_{d}=\{g\in AP(X): \hat{d}(e, g)<1\}$. Then $V_{d}$ is a neighborhood of the neutral element of $AP(X)$. Since $AP(X)\subset AP(Y)$, it follows from Theorem \[t0\] that there is some continuous quasi-pseudometric $\rho$ from $(Y\times Y, \mathscr{U}_{Y}^{-1}\times \mathscr{U}_{Y})$ to $(\mathbb{R}, \mathscr{U}^{\star})$ such that $V_{\rho}\cap AP(X)\subset V_{d}$, where $\mathscr{U}_{Y}$ is the universal quasi-uniformity on $Y$ and $V_{\rho}=\{g\in AP(Y): \hat{\rho}(e, g)<1\}$. Note that $U_{\rho}=\{(x, y)\in Y\times Y: \rho(x, y)<1\}\in\mathscr{U}_{Y}$ and $U_{\rho}\cap (X\times X)\subset U_{d}$. Moreover, one can see that $\hat{\rho}(e, -x+y)=\rho(x, y)$ for all $x, y\in Y$ and $\hat{d}(e, -x+y)=d(x, y)$ for all $x, y\in X$. Therefore, $(X, \mathscr{U}_{X})$ is a quasi-uniform subspace of $(Y, \mathscr{U}_{Y})$.
Hence $X$ is quasi-P$^{\ast}$-embedded in $Y$ by Lemma \[l7\]. Sufficiency. Let $X$ be quasi-P$^{\ast}$-embedded in $Y$. Denote by $e_{X, Y}$ the identity embedding of $X$ in $Y$. Obviously, the monomorphism $\hat{e}_{X, Y}$ is continuous. Next, we need to show that the isomorphism $\hat{e}_{X, Y}^{-1}: AP(X, Y)\rightarrow AP(X)$ is continuous. Assume that $U$ is a neighborhood of the neutral element $e_{X}$ in $AP(X)$. It follows from Theorem \[t0\] that there is a continuous quasi-pseudometric $\rho$ from $(X\times X, \mathscr{U}_{X}^{-1}\times \mathscr{U}_{X})$ to $(\mathbb{R}, \mathscr{U}^{\star})$ such that $V_{\rho}=\{g\in AP(X): \hat{\rho}_{A}(e_{X}, g)<1\}\subset U$. Without loss of generality, we may assume that $\rho\leq 1$ (otherwise, replace $\rho$ with $\rho^{\prime}=\mbox{min}\{\rho, 1\}$). Since $X$ is quasi-P$^{\ast}$-embedded in $Y$, the quasi-pseudometric $\rho$ can be extended to a continuous quasi-pseudometric $d$ from $(Y\times Y, \mathscr{U}_{Y}^{-1}\times \mathscr{U}_{Y})$ to $(\mathbb{R}, \mathscr{U}^{\star})$. Suppose that $\hat{d}_{A}$ is the Graev extension of $d$ over $AP(Y)$. It follows from Theorem \[t0\] again that $V_{d}=\{g\in AP(Y): \hat{d}_{A}(e_{Y}, g)<1\}$ is an open neighborhood of the neutral element $e_{Y}$ in $AP(Y)$. Obviously, one can identify the abstract group $A_{a}(X)$ with the subgroup $\hat{e}_{X, Y}(A_{a}(X))=A_{a}(X, Y)$ of $A_{a}(Y)$ generated by the subset $X$ of $A_{a}(Y)$. Since $d\mid (X\times X)=\rho$, it follows from Lemma \[l2\] that, for each $h\in A_{a}(X, Y)$, $\hat{d}_{A}(e_{Y}, h)=\hat{\rho}_{A}(e_{Y}, h)$. Hence we have $A_{a}(X, Y)\cap V_{d}=V_{\rho}$, that is, $AP(X, Y)\cap V_{d}=\hat{e}_{X, Y}(V_{\rho})$. Therefore, the isomorphism $\hat{e}_{X, Y}^{-1}: AP(X, Y)\rightarrow AP(X)$ is continuous. In order to give a partial answer to Question \[q1\], we need to prove some lemmas. \[l6\]Let $X$ be a Tychonoff space.
Then the restriction $\mathscr{G}_{X}=\mathscr{G}_{PG(X)}\mid X$ of the left quasi-uniformity $\mathscr{G}_{PG(X)}$ of the paratopological group $PG(X)$ to the subspace $X\subset PG(X)$ coincides with the universal quasi-uniformity $\mathscr{U}_{X}$ of $X$. Since the topology on $X$ generated by the left quasi-uniformity $\mathscr{G}_{PG(X)}$ of $PG(X)$ coincides with the original topology of the space $X$, we have $\mathscr{G}_{X}\subset \mathscr{U}_{X}$. Next, we need to show that $\mathscr{U}_{X}\subset\mathscr{G}_{X}$. Take an arbitrary element $U\in \mathscr{U}_{X}$. It follows from Lemmas \[l1\] and \[l8\] that there exists a continuous quasi-pseudometric $\rho$ from $(X\times X, \mathscr{U}_{X}^{-1}\times \mathscr{U}_{X})$ to $(\mathbb{R}, \mathscr{U}^{\star})$ such that $\{(x, y)\in X\times X: \rho(x, y)<1\}\subset U.$ By Theorem 3.2 in [@RS], the quasi-pseudometric $\rho$ on the set $X$ extends to a left invariant quasi-pseudometric $\hat{\rho}$ on the abstract group $PG(X)$. One can see that $\hat{\rho}$ is continuous from $(PG(X)\times PG(X), \mathscr{U}_{PG(X)}^{-1}\times \mathscr{U}_{PG(X)})$ to $(\mathbb{R}, \mathscr{U}^{\star})$. It follows from Theorem \[t0\] that $V=\{g\in PG(X): \hat{\rho}(e, g)<1\}$ is an open neighborhood of the neutral element $e$ in $PG(X)$. If $x, y\in X$ and $x^{-1}y\in V$, then $$\rho(x, y)=\hat{\rho}(x, y)=\hat{\rho}(e, x^{-1}y)<1,$$which implies that the element $W_{V}^{l}=\{(g, h)\in PG(X)\times PG(X): g^{-1}h\in V\}$ of $\mathscr{G}_{PG(X)}$ satisfies $W_{V}^{l}\cap (X\times X)\subset U$. Therefore, $\mathscr{U}_{X}\subset\mathscr{G}_{X}$. \[l5\][@RS1] The finest quasi-uniformity of each quasi-pseudometrizable topological space is bicomplete. \[l9\] Let $X$ be a subspace of a Tychonoff space $Y$, and let $X$ be $\tau(\tilde{\mathscr{U}_{Y}}^{\ast})$-dense in $(\tilde{Y}, \tilde{\mathscr{U}_{Y}})$, where $\mathscr{U}_{Y}$ is the universal quasi-uniformity on $Y$ and $(\tilde{Y}, \tilde{\mathscr{U}_{Y}})$ is the bicompletion of $(Y, \mathscr{U}_{Y})$.
Then the following conditions are equivalent: 1. $X$ is quasi-P$^{\ast}$-embedded in $Y$; 2. $X$ is quasi-P-embedded in $Y$; 3. $\mathscr{U}_{Y}\mid X=\mathscr{U}_{X}$; 4. $X\subset Y\subset \tilde{X}$, where $(\tilde{X}, \tilde{\mathscr{U}_{X}})$ is the bicompletion of $(X, \mathscr{U}_{X})$. Obviously, $(2)\Rightarrow (1)$. Hence it suffices to show that $(1)\Rightarrow (3)\Rightarrow (4)\Rightarrow (2).$ $(1)\Rightarrow (3).$ Assume that $X$ is quasi-P$^{\ast}$-embedded in $Y$. For each $U\in \mathscr{U}_{X}$, it follows from Lemmas \[l1\] and \[l8\] that there exists a bounded continuous quasi-pseudometric $\rho_{X}$ from $(X\times X, \mathscr{U}_{X}^{-1}\times \mathscr{U}_{X})$ to $(\mathbb{R}, \mathscr{U}^{\star})$ such that $$W_{X}=\{(x, x^{\prime})\in X\times X: \rho_{X}(x, x^{\prime})<1\}\subset U.$$ Since $X$ is quasi-P$^{\ast}$-embedded in $Y$, let $\rho_{Y}$ be an extension of $\rho_{X}$ to a continuous quasi-pseudometric from $(Y\times Y, \mathscr{U}_{Y}^{-1}\times \mathscr{U}_{Y})$ to $(\mathbb{R}, \mathscr{U}^{\star})$. Put $$W_{Y}=\{(y, y^{\prime})\in Y\times Y: \rho_{Y}(y, y^{\prime})<1\}.$$ Then it is obvious that $W_{Y}\in \mathscr{U}_{Y}$ and $W_{Y}\cap (X\times X)=W_{X}\subset U.$ Therefore, the quasi-uniformity $\mathscr{U}_{Y}\mid X$ is finer than $\mathscr{U}_{X}$. Moreover, it is clear that $\mathscr{U}_{Y}\mid X\subset \mathscr{U}_{X}$. Hence $\mathscr{U}_{Y}\mid X=\mathscr{U}_{X}$. $(3)\Rightarrow (4).$ Assume that $\mathscr{U}_{Y}\mid X=\mathscr{U}_{X}$. Let $(\tilde{Y}, \tilde{\mathscr{U}_{Y}})$ be the bicompletion of the quasi-uniform space $(Y, \mathscr{U}_{Y})$. Because $\tilde{\mathscr{U}_{Y}}\mid Y=\mathscr{U}_{Y}$, we have $\tilde{\mathscr{U}_{Y}}\mid X=\mathscr{U}_{X}$. Moreover, since $X$ is $\tau(\tilde{\mathscr{U}_{Y}}^{\ast})$-dense in $\tilde{Y}$ and $X\subset Y$, $(\tilde{Y}, \tilde{\mathscr{U}_{Y}})$ is the bicompletion of the quasi-uniform space $(X, \mathscr{U}_{X})$. Hence $X\subset Y\subset \tilde{X}$.
$(4)\Rightarrow (2).$ Assume that $Y\subset\tilde{X}$. Consider an arbitrary continuous quasi-pseudometric $\rho$ from $(X\times X, \mathscr{U}_{X}^{-1}\times \mathscr{U}_{X})$ to $(\mathbb{R}, \mathscr{U}^{\star})$. Let $(\overline{X}, \overline{\rho})$ be the quasi-metric space obtained from $(X, \rho)$ by identifying the points of $X$ lying at zero distance from one another with respect to $\rho$. Let $\pi: X\rightarrow \overline{X}$ be the natural quotient mapping. Obviously, $\rho(x, y)=\overline{\rho}(\pi(x), \pi(y))$ for all $x, y\in X$. Suppose that $\mathscr{U}_{\overline{X}}$ is the universal quasi-uniformity on $\overline{X}$. Then $\pi$ is a quasi-uniformly continuous map from $(X, \mathscr{U}_{X})$ to $(\overline{X}, \mathscr{U}_{\overline{X}})$ by [@BG]. Moreover, by Lemma \[l5\], $(\overline{X}, \mathscr{U}_{\overline{X}})$ is bicomplete. Therefore, it follows from Theorem 16 in [@LW] that $\pi$ admits a quasi-uniformly continuous extension $\overline{\pi}: (\tilde{X}, \tilde{\mathscr{U}}_{X})\rightarrow (\overline{X}, \mathscr{U}_{\overline{X}}).$ Since $Y\subset \tilde{X}$, we can define a continuous mapping $d$ from $(Y\times Y, \mathscr{U}_{Y}^{-1}\times \mathscr{U}_{Y})$ to $(\mathbb{R}, \mathscr{U}^{\star})$ by $d(x, y)=\overline{\rho}(\overline{\pi}(x), \overline{\pi}(y))$ for all $x, y\in Y$. Clearly, the restriction of $d$ to $X$ coincides with $\rho$. Hence $X$ is quasi-P-embedded in $Y$. Let $X$ be an arbitrary $\tau(\tilde{\mathscr{U}_{Y}}^{\ast})$-dense subspace of a Tychonoff space $Y$. If the natural mapping $\hat{e}_{X, Y}: FP(X)\rightarrow FP(Y)$ is a topological monomorphism, then $X$ is quasi-P-embedded in $Y$. Assume that the monomorphism $\hat{e}_{X, Y}: FP(X)\rightarrow FP(Y)$ extending the identity mapping $e_{X, Y}: X\rightarrow Y$ is a topological embedding. Then we can identify the group $FP(X)$ with the subgroup $FP(X, Y)$ of $FP(Y)$ generated by the set $X$.
We denote by $\mathscr{G}_{X}$ and $\mathscr{G}_{Y}$ the left quasi-uniformities of the groups $FP(X)$ and $FP(Y)$, respectively. Since $FP(X)$ is a subgroup of $FP(Y)$, we obtain that $\mathscr{G}_{Y}\mid FP(X)=\mathscr{G}_{X}$. Moreover, it follows from Lemma \[l6\] that $\mathscr{G}_{X}\mid X=\mathscr{U}_{X}$ and $\mathscr{G}_{Y}\mid Y=\mathscr{U}_{Y}$. Hence we have $$\mathscr{G}_{Y}\mid X=\mathscr{G}_{X}\mid X=\mathscr{U}_{X}.$$ Therefore, it follows from Lemma \[l9\] that $X$ is quasi-P-embedded in $Y$. Let $X$ be an arbitrary $\tau(\tilde{\mathscr{U}_{Y}}^{\ast})$-dense subspace of a Tychonoff space $Y$. If $X$ is quasi-P-embedded in $Y$, is the natural mapping $\hat{e}_{X, Y}: FP(X)\rightarrow FP(Y)$ a topological monomorphism? In [@SO], O.V. Sipacheva has proved that if $Y$ is a subspace of a Tychonoff space $X$ then the subgroup $F(Y, X)$ of $F(X)$ is topologically isomorphic to $F(Y)$ iff $Y$ is $P^{\ast}$-embedded in $X$. In [@TM1], M.G. Tkackenko has proved that if $Y$ is a subspace of a Tychonoff space $X$ then the subgroup $A(Y, X)$ of $A(X)$ is topologically isomorphic to $A(Y)$ iff $Y$ is $P^{\ast}$-embedded in $X$. Therefore, we have the following question: Let $X$ be an arbitrary subspace of a Tychonoff space $Y$. Is it true that the subgroup $PG(Y, X)$ of $PG(X)$ is topologically isomorphic to $PG(Y)$ iff $Y$ is quasi-$P^{\ast}$-embedded in $X$? [**Acknowledgements**]{}. I wish to thank the reviewers for the detailed list of corrections, suggestions on the paper, and all their efforts to improve the paper. [99]{} A.V. Arhangel’skiǐ, M. Tkachenko, [*Topological Groups and Related Structures*]{}, Atlantis Press and World Sci., 2008. G. Brümmer, [*Natural extensions of the $T_{0}$-spaces, their idempotency, and the quasi-uniform bicompletion*]{}, Sum Topo 2001, Sixteenth Summer Conference on Topology and its Applications, July 18–21, 2001. R. Engelking, [*General Topology*]{} (revised and completed edition), Heldermann Verlag, Berlin, 1989. P.
Fletcher, W.F. Lindgren, [*Quasi-uniform spaces*]{}, Marcel Dekker, New York, 1982. M.I. Graev, [*Free topological groups*]{}, Izvestiya Akad. Nauk SSSR Ser. Mat., [**12**]{}(1948), 279–323. M.I. Graev, [*Theory of topological groups I*]{}, Uspekhi Mat. Nauk, [**5**]{}(1950), 3–56. G. Gruenhage, *Generalized metric spaces*, K. Kunen, J.E. Vaughan eds., Handbook of Set-Theoretic Topology, North-Holland, (1984), 423-501. W.F. Lindgren, P. Fletcher, [*The construction of the pair completion of a quasi-uniform space*]{}, Canad. Math. Bull., [**21**]{}(1978), 53–59. A.A. Markov, [*On free topological groups*]{}, Dokl. Akad. Nauk. SSSR, [**31**]{}(1941), 299–301. D. Marxen, [*Neighborhoods of the identity of the free abelian topological groups*]{}, Math. Slovaca, [**26**]{}(1976), 247–256. P. Nickolas, [*Free topological groups and free products of topological groups*]{}, PhD thesis (University of New South Wales, Australia), 1976. E. Nummela, [*Uniform free topological groups and Samuel compactifications*]{}, Topol. Appl., [**13**]{}(1982), 77–83. V. Pestov, [*Some properties of free topological groups*]{}, Moscow Univ. Math. Bull., [**37**]{}(1982), 46–49. N.M. Pyrch, [*On isomorphisms of the free paratopological groups and free homogeneous spaces I*]{}, Ser. Mech-Math., [**63**]{}(2005), 224–232. N.M. Pyrch, A.V. Ravsky, [*On free paratopological groups*]{}, Matematychni Studii, [**25**]{}(2006), 115–125. N.M. Pyrch, [*Free paratopological groups and free products of paratopological groups*]{}, Journal of Mathematical Sciences, [**174(2)**]{}(2011), 190–195. S. Romaguera, S. Salbany, [*Quasi-metrizable spaces with a bicomplete structure*]{}, Extracta Mathematicae, [**7**]{}(1992), 99–102. S. Romaguera, M. Sanchis, M.G. Tkackenko, [*Free paratopological groups*]{}, Topology Proceedings, [**27**]{}(2002), 1–28. A.R. Singal, [*Remarks on separation axioms*]{}, In: Stanley P. 
Franklin and Zdeněk Frolík and Václav Koutník (eds.): General Topology and Its Relations to Modern Analysis and Algebra, Proceedings of the Kanpur topological conference, 1968. Academia Publishing House of the Czechoslovak Academy of Sciences, Praha, (1971), 265–296. S. Salbany, [*Bitopological spaces, compactifications and completions*]{}, Math. Monographs Univ. Cape Town, [**1**]{}(1974). O.V. Sipacheva, [*Free topological groups of spaces and their subspaces*]{}, Topol. Appl., [**101**]{}(2000), 181–212. M.G. Tkackenko, [*On the completeness of free Abelian topological groups*]{}, Soviet Math. Dokl., [**27**]{}(1983), 341–345. [^1]: Supported by the NSFC (No. 10971185, No. 10971186) and the Natural Science Foundation of Fujian Province (No. 2011J05013) of China.
--- abstract: 'Among the icy satellites of Saturn, Iapetus shows a striking dichotomy between its leading and trailing hemispheres, the former being significantly darker than the latter. Thanks to the VIMS imaging spectrometer on-board Cassini, it is now possible to investigate the spectral features of the satellites in the Saturn system within a wider spectral range and with enhanced accuracy compared with previously available data. In this work, we present an application of the *G-mode* method to the high resolution, visible and near infrared data of Phoebe, Iapetus and Hyperion collected by Cassini/VIMS, in order to search for compositional correlations. We also present the results of a dynamical study on the efficiency of Iapetus in capturing dust grains travelling inward in the Saturn system, with the aim of evaluating the viability of Poynting-Robertson drag as the physical mechanism transferring the dark material to the satellite. The results of the spectroscopic classification are used jointly with those of the dynamical study to describe a plausible physical scenario for the origin of Iapetus’ dichotomy. Our work shows that mass transfer from the outer Saturnian system is an efficient mechanism, particularly for the range of sizes hypothesised for the particles composing the newly discovered outer ring around Saturn. Both spectral and dynamical data indicate Phoebe as the main source of the dark material. However, in view of the collisional history of the Saturnian irregular satellites and of the differences in the spectral features of Phoebe and the dark material on Iapetus in the visible and ultraviolet range, we suggest a multi-source scenario where now extinct prograde satellites and the disruptive impacts that generated the putative collisional families played a significant role in supplying the original amount of dark material.' author: - | F. Tosi$^{1}$[^1], D. Turrini$^{1}$, A. Coradini$^{1}$, G.
Filacchione$^{2}$ and the VIMS Team\ $^{1}$Institute for Interplanetary Space Physics, INAF, Via Fosso del Cavaliere 100, 00133, Rome, Italy\ $^{2}$Institute for Space Astrophysics and Cosmic Physics, INAF, Via Fosso del Cavaliere 100, 00133, Rome, Italy date: 'Accepted XXX. Received XXX; in original form XXX' title: Probing the origin of the dark material on Iapetus --- \[firstpage\] planets and satellites: general; techniques: spectroscopic; methods: numerical; planets and satellites: individual: Iapetus; planets and satellites: individual: Phoebe; planets and satellites: individual: Hyperion Introduction ============ Saturn’s third largest satellite, Iapetus, shows one of the most striking dichotomies in the solar system. Observations of the satellite, starting a couple of years after its discovery by G.D. Cassini in 1671 and culminating in two Voyager flybys and several Earth-based investigations (e.g., @cruik83 [@squyr83; @squyr84]), had shown that the trailing hemisphere has an albedo, composition, and morphology typical of the other icy Saturnian satellites, being highly reflective and spectrally consistent with water ice, while the surface of the leading hemisphere is covered by a reddish, very low surface reflectance (geometric albedo 2-6$\%$) material that may include organics and carbon.\ The origin and nature of the low-albedo hemisphere of Iapetus are still debated, representing one of the most intriguing problems in planetary science. The two main theories of the origin of the low-albedo hemisphere are that it was created by an endogenic geologic process such as flooding by magmas [@smith81], or that it resulted from accretion of exogenous particles [@soter74; @matt92].
The primary evidence for the endogenic model is the existence of in-filled craters [@smith81], while the main supporting observation for the exogenous model is that the dark material is centred precisely on the apex of motion [@matt92].\ Disk-integrated spectra collected by the VIMS instrument on-board Cassini during several serendipitous periods in 2004 July and October (at distances between 2 - 3.1 $\times$ 10$^6$ km) enabled a first analysis of Iapetus’ surface in the spectral range from 1 to 5 $\mu$m that led to the identification of CO$_2$, which is most prominent on the dark side [@bea05a]. These authors also found that a good fit to the low-albedo average spectrum is obtained with a small amount of ice, an even smaller amount of Fe$_2$O$_3$ to account for the ferric absorption band at $\sim$1.0 $\mu$m, 36$\%$ Triton tholin, and substantial amounts of the HCN polymer (“poly-HCN”).\ On 31 December 2004, the Cassini spacecraft performed a distant flyby of Iapetus ($>$ 124,000 km, yielding a maximum spatial resolution on the satellite’s surface of $\sim$62 km), which represented the first good chance to acquire spatially resolved spectra of the satellite with better signal-to-noise ratio (SNR) in the range beyond 3 $\mu$m, though in this case the relatively large phase angle (54$^{\circ}$ to 116$^{\circ}$) allowed only a fraction of the dayside to be explored, where most of the framed surface was covered by dark material. In these data, @cruik08 reported the identification of a broad absorption band centred at 3.29 $\mu$m, and concentrated in a region comprising about 15$\%$ of the low-albedo surface area, that they interpreted as the C-H stretching mode vibration in polycyclic aromatic hydrocarbon (PAH) molecules. @cruik08 also found, in association with the aromatic band, two weaker bands at 3.42 and 3.52 $\mu$m, attributed to the asymmetric stretching modes of the -CH$_2$- group in aliphatic hydrocarbons.
Compounds containing the nitrile (-C$\equiv$N) functional group were initially suspected as the cause of the 2.42-2.44 $\mu$m band, clearly correlated with the low-albedo material on Phoebe, Iapetus and Dione [@cruik91; @clark05; @brown06; @clark07], and possibly some of the indistinct spectral structure in the region 4.5-4.8 $\mu$m [@bea05a]; although recently @clark09 revised this interpretation and suggested that molecular hydrogen trapped in dark material would be a better candidate for this feature. From the analysis of far ultraviolet spectra returned by the Cassini *Ultraviolet Imaging Spectrograph* (UVIS), @hah08 found that water ice amounts increase within the dark material away from the apex (at 90$^{\circ}$ W longitude, the centre of the dark leading hemisphere), consistent with thermal segregation of water ice; yet the fact that water ice is present also at the lowest, darkest and warmest latitudes, where it is not expected to be stable, may be a sign of ongoing or recent emplacement of the dark material from an exogenous source (ibid).\ Images returned by the *Imaging Science Subsystem* on-board Cassini during this flyby suggested mass wasting of ballistically deposited material [@por05], although the limited spatial resolution did not allow any small bright craters lying over the dark terrain to be safely identified; the presence of such craters would be a strong argument in favour of deposited material.
Moreover, using scatterometry data returned by the *Cassini Titan RADAR Mapper* (named RADAR for concision in the following), @ostro06 pointed out that Iapetus’ 2.2-cm (13.78 GHz) radar albedo is dramatically higher on the optically bright trailing side than on the optically dark leading side, whereas 12.6-cm (2.38 GHz) results reported by @black04 with the Arecibo Observatory’s radar system show hardly any hemispheric asymmetry and give a mean radar reflectivity several times lower than the reflectivity measured at 2.2 cm, which can be explained if the leading side’s optically dark contaminant is present to depths of at least one to several decimetres.\ On September 10, 2007, the Cassini spacecraft performed its first and only targeted Iapetus flyby, at a minimum altitude of 1620 km. Approach occurred over the unlit leading hemisphere, departure over the illuminated trailing one. From the ISS data, a strong indication that dark material is lying over bright terrain finally came from the numerous small bright craters detected in the high-resolution images from the flyby. The existence of these small craters (up to a few tens of metres in diameter) indicates that the dark blanket is very thin, probably no more than a few metres, possibly only decimetres. On the other hand, the dark material is expected to be thicker than only millimetres because of the evidence for mass wasting and because of the ability of the Cassini 2.2 cm RADAR to distinguish between the dark and the bright terrain.\ The confirmation of the exogenous model emphasised the need to identify a reliable source of the dark material. Over the years, for different reasons, a number of possible sources were proposed: Phoebe [@soter74], Titan [@owen01], Hyperion [@matt92], or possibly other small dark irregular satellites [@bea02; @bea05b], although other works essentially focused on Hyperion and Phoebe as the main candidates.
The mechanisms invoked to justify the transfer of material from the source to Iapetus were different for these two bodies: for Hyperion the proposed mechanism was collisional excavation and/or break-up [@matt92; @mar02; @dal04], while Poynting-Robertson (*PR* in the following) drag was advocated for Phoebe [@soter74; @bur96]. PR drag was also invoked by @bea05b in their suggestion that the source of the dark material could be linked to the retrograde irregular satellites as a whole rather than just to Phoebe. Dust ejected from retrograde satellites and migrating inward from the outer Saturnian system would impact Iapetus on its leading hemisphere due to its counter-revolving motion, thus naturally explaining the leading-trailing asymmetry [@soter74]. Intuitively, such a mechanism should be characterised by a high transfer efficiency since Iapetus’ orbital period is shorter than the migration timescale by orders of magnitude, yet at present the only quantitative evaluation of the transfer efficiency is that of @bur96, who estimated it to be about 70$\%$ for 10 $\mu$m sized dust grains. In contrast, several evaluations of the transfer efficiency from Hyperion to Iapetus have been performed [@matt92; @mar02; @dal04], the resulting value varying over three orders of magnitude (from 0.1$\%$ to about 20-40$\%$) depending on the characteristics of the ejection process.\ In addition to the limited information on the transfer efficiency, the major issue of the Phoebe-based model is that the visible spectrum of Iapetus’ leading side is similar to that of D-type asteroids, a primitive, very low albedo group of bodies exhibiting a typically reddish spectrum in the visual region, i.e.
increasing in reflectance with increasing wavelength, while Phoebe’s spectrum is essentially flat and grey in the visual region, thus similar to C-type or F-type asteroids (see for example @thol83 [@bea02; @grav07]); moreover its albedo is higher than that of carbon, particularly in the brighter areas detected by Voyager 2 and Cassini.\ In this paper, we have explored VIMS data returned by the Cassini spacecraft after its close encounter with Iapetus, combining them with VIMS data of Phoebe and Hyperion - acquired with comparable spatial resolution and favourable illumination conditions - that were extensively analysed in previous works. We classify this dataset, with an automatic statistical method, separately for the visual and IR portions, to identify homogeneous types and look for correlations among the spectra of Iapetus, Phoebe and Hyperion. To support the results of the spectral comparison, we reviewed and extended the dust migration scenario based on Poynting-Robertson drag in light of the recent results of both theoretical works and the analysis of Galileo, Cassini and Spitzer data. We estimated its mass transfer efficiency by means of a statistical approach derived from the one designed by [@kes81] to evaluate the impact probabilities of Jovian irregular satellites. Finally, we discuss the amount of ejecta needed to supply the dark material present on Iapetus and compare it with the one needed in the Hyperion-based scenario and with the available observational constraints. The G-mode method ================= The *G-mode* method was originally developed by A.I. Gavrishin and A. Coradini (see @gavr80 [@gavr92; @cora76; @cora77]) to classify lunar samples on the basis of their major oxide composition. The good results obtained warranted its application to several different data sets (see, for example, @cora76 [@carusi78; @gavr80; @bianchi80; @giov81; @cora83; @barucci87; @orosei03]).
In particular, the Imaging Spectrometer for Mars (ISM), flown on-board the Soviet Phobos mission, offered the first chance to apply the G-mode method to imaging spectroscopy data [@cora91; @erard91; @cerroni95]. More recently, the G-mode has also been applied to Cassini/VIMS data acquired during the close flyby of Phoebe [@tosi05; @cora08] and to the combined data of Titan returned by Cassini/VIMS and Cassini/RADAR [@tosi09].\ The G-mode differs from other broadly used statistical methods - such as the *Principal Components Analysis* (PCA) and the *Q-mode* method - in some key characteristics (see @bianchi80): in summary, a linear dependence of the variables is not needed; instrumental errors can be taken into account; meaningless variables are discerned and removed; and finally different levels of classification can be performed.\ Basically, by lowering the confidence level of the test, set a priori by the user, the algorithm can perform a more refined classification, in order to look for further homogeneous types. In this case, the G-mode includes a test that allows the classification to be interrupted when it becomes too detailed: when the statistical distance among types becomes smaller than the established confidence level, the algorithm can either stop or continue by merging different small types together (this condition is reported in the output of the program).\ In this specific case, our statistical universe is composed of $N$ spectra sampled in $M$ spectral channels. The G-mode allows the user to apply either the same error value to all the variables (this approach is preferred when all the variables should drive the classification at the same level), or to assign a specific error value to each variable. In the case of multi-spectral and hyperspectral data, when using reflectances as variables, a logical approach is to use the instrumental noise, which for each spectral channel is given by the inverse of the SNR, as an error.
Alternatively, should the depths of some diagnostic absorption bands be used as variables, then the average standard deviation of the band depth can be used as an error for each variable. In this work, we actually apply the first approach to the visible portion of the data, where no absorption band is expected to occur, while the second approach is preferred for the infrared data, where several spectral signatures are used to classify the data. The VIMS instrument =================== The *Visual and Infrared Mapping Spectrometer* (VIMS) is an imaging spectrometer on-board the Cassini Orbiter spacecraft. VIMS is actually made up of two spectrometers, VIMS-V (developed in Italy) and VIMS-IR (developed in the USA). VIMS is the result of an international collaboration involving the space agencies of the United States, Italy, France and Germany as well as other academic and industrial partners.\ The two channels share a common electronics box and are co-aligned on a common optical pallet. The combined optical system generates 352 two-dimensional images (with maximum nominal dimensions of 64$\times$64, 0.5 mrad pixels), each one corresponding to a specific spectral channel. These images are merged by the main electronics in order to produce “image cubes” representing the spectrum of the same field of view (FOV) in the range from 0.35 to 5.1 $\mu$m, sampled in 352 bands. See @brown04 [@miller96] for a complete description of the instrument.\ In this work, we separately explored both the visible and infrared portions of the selected cubes. 
**VIMS-V** **VIMS-IR** ---------------------------------------- ---------------------------------------------------- ---------------------------------------------------- Spectral coverage ($\mu$m) 0.35 - 1.05 0.85 - 5.1 Spectral channels (bands) 96 256 Average spectral sampling (nm/channel) 7.3 16.6 Total FOV ($^{\circ}$) 1.83$\times$1.83 1.83$\times$1.83 Total FOV (mrad) 32$\times$32 32$\times$32 Nominal IFOV (mrad) 0.50$\times$0.50 0.50$\times$0.50 Hi-res IFOV (mrad) 0.167$\times$0.167 0.25$\times$0.50 Nominal image dimension (pixel) 1$\times$1, 6$\times$6, 12$\times$12, 64$\times$64 1$\times$1, 6$\times$6, 12$\times$12, 64$\times$64 Detector type Si CCD (2D) InSb photodiodes (1D) Average instrumental SNR 380 100 \[tab1\] The dataset =========== Among the VIMS acquisitions, we selected 3 cubes, respectively from the Phoebe flyby and the to-date closest flybys with Iapetus and Hyperion. Such data were selected in order to have: a spatial resolution as high as possible, a low phase angle (intended to select only cubes showing most of targets’ dayside, and to limit the dependence on the specific observational geometry), and a good SNR. To do this, we have selected cubes from the three satellites showing at the same time: 1) a spatial resolution $\leq$ 3 km/pixel; 2) a phase angle $\leq$ 43$^{\circ}$; 3) an IR exposure time $\geq$ 120 ms/pixel. Details about these data are given in Tables \[tab2\] and \[tab3\]. The VIMS cube selected for Phoebe is the closest available from the flyby, acquired at 19:32 UT on 11 June 2004 at a range of 2096 km and phase angle 24.6$^{\circ}$. Also for Hyperion we selected the closest cube, acquired at 02:15 on 26 September 2005 at a range of 2989 km and a phase angle of 42.6$^{\circ}$. 
Finally, the cube selected for Iapetus was acquired at 15:01 on 10 September 2007, at a range of 6091 km and a phase angle of 17.3$^{\circ}$: although this cube was not acquired at closest approach but only 40 minutes later, it represents a good tradeoff among image dimensions, spatial resolution, exposure time and phase angle. Moreover, this cube is peculiar because it shows a region located exactly at the boundary between dark and bright terrains, where a relatively sharp division can be seen. In all cases, there are no sky background pixels since the satellite’s surface always fills the VIMS FOV. Figure \[fig1\] shows the RGB images of these cubes in their VIS portion.\ Due to the fact that the Phoebe and Hyperion cubes were acquired in high resolution IFOV mode both by the visible and infrared channels of VIMS, the IFOV of VIMS-IR being 250$\times$500 $\mu$rad wide while the IFOV of VIMS-V is 167$\times$167 $\mu$rad wide, in the case of these two satellites the IR and VIS images do not match. On the contrary, the Iapetus cube was acquired in nominal IFOV mode (500$\times$500 $\mu$rad) by both channels, so that the IR image basically matches with the VIS image (actually a slight misalignment occurring between the optical axes of the two VIS and IR telescopes prevents a perfect match). Figure \[fig2\] shows the RGB images of these cubes in their IR portion. 
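As a consistency check on the numbers above, the pixel scales quoted in Table \[tab3\] follow directly from the spacecraft range in Table \[tab2\] and the angular IFOV in Table \[tab1\]; a minimal sketch (the helper function is ours for illustration, the figures are taken from the tables):

```python
# For a nadir-looking pixel, the projected pixel size is approximately
#   resolution [km/px] = range [km] * IFOV [rad].

def spatial_resolution(range_km, ifov_mrad):
    """Projected pixel size in km for a given range and angular IFOV in mrad."""
    return range_km * ifov_mrad * 1e-3

# Phoebe: hi-res IR IFOV of 0.25 x 0.50 mrad at a range of 2096 km
print(spatial_resolution(2096, 0.25), spatial_resolution(2096, 0.50))
# 0.524 x 1.048 km/px, as listed in Table [tab3]

# Iapetus: nominal IFOV of 0.50 mrad at a range of 6091 km
print(spatial_resolution(6091, 0.50))
# ~3.045 km/px, as listed in Table [tab3]
```

The same relation reproduces the Hyperion IR scale of 0.747$\times$1.494 km/px from its 2989 km range.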
--------------- -------------- -------------- ------------------ ------------- -------------- -------------- ---------------- ------------------ **Satellite** **Mission** **Dataset** **Filename** **Date** **UTC** **Relative** **Spacecraft** **Phase** **Sequence** **on-board** **velocity** **altitude** **angle** **(km/s)** **(km)** **($^{\circ}$)** Phoebe S01 PHOEBE017 CM$\_$1465674952 11 Jun 2004 19:32 6.35 2096 24.576 Hyperion S14 HYPERIONC007 CM$\_$1506393701 26 Sep 2005 02:15 5.64 2989 42.598 Iapetus S33 ORSHIRES001 CM$\_$1568129671 10 Sep 2007 15:01 2.35 6091 17.298 --------------- -------------- -------------- ------------------ ------------- -------------- -------------- ---------------- ------------------ \[tab2\] --------------- --------------- ------------------ -------------------- ------------------- -------------------- -- **Satellite** **Image** **IR t$_{exp}$** **IR Spatial** **VIS t$_{exp}$** **VIS Spatial** **dimension** **resolution** **resolution** **(px)** **(msec/px)** **(km/px)** **(msec)** **(km/px)** Phoebe 30$\times$18 180 0.524$\times$1.048 5630 0.349$\times$0.349 Hyperion 36$\times$24 320 0.747$\times$1.494 7680 0.498$\times$0.498 Iapetus 40$\times$40 120 3.045$\times$3.045 4800 1.015$\times$1.015 --------------- --------------- ------------------ -------------------- ------------------- -------------------- -- \[tab3\] All of the VIMS cubes used in this work were initially calibrated, in their infrared portion, by means of the RC15 VIMS-IR sensitivity function and the flat-field cube released in 2005, namely the last official products available at the time this work was undertaken. 
The sensitivity function allows the raw signal of each pixel inside the IR image to be converted into radiance (divided by the IR integration time and the flat-field) and then into reflectance $I/F$, where $I$ is the intensity of reflected light (uncorrected for the specific observational geometry and the thermal emission) and $\pi F$ is the plane-parallel flux of sunlight incident on the satellite [@theka73], scaled for its heliocentric distance (for details about the VIMS calibration, see @mccord04). It should be noted that the latest unofficial RC17 sensitivity function of VIMS-IR, derived in 2008 and currently under test, differs from the RC15 calibration used in @clark05 through the use of additional standard stars and data from the VIMS solar port. It was found that the RC15 calibration, based on the comparison of VIMS data from the Cassini Jupiter encounter and the Moon flyby with Galileo NIMS data, plus ground calibration data, contained a residual 3-micron absorption and spectral structure in the 2-2.5 micron region due to ringing in the VIMS order sorting filters (Clark, personal communication, 2009). For this reason, in order not to bias our results, we have separately tested both these sensitivity functions on the VIMS infrared data.\ Moreover, all the spectra were “despiked”: we removed single-pixel, single-spectral channel deviations, caused by systematic instrumental artifacts like order-sorting filters as well as random artifacts like cosmic rays or high energy radiation striking the detectors (these events reveal themselves as spikes in the dark current stored in each raw cube). On the visible portion of the VIMS data, also a “destriping” procedure was applied, aimed at removing residual offsets between radiometric levels of different CCD columns while observing a uniform scene.\ For the Phoebe and Hyperion cubes, we selected all the pixels inside the spectral image, so we have $540$ samples from Phoebe and $864$ samples from Hyperion.
In the case of the Iapetus cube, showing both dark and bright material, we define a “region of interest” including $639$ samples from the dark material, covering $40\%$ of the image. Hence it follows that, by summing the samples from Phoebe, Iapetus and Hyperion, our global dataset is made up of $2043$ samples/spectra. Discussion ========== Results in the visible range ---------------------------- In the range measured by VIMS-V (0.35 - 1.05 $\mu$m), no signatures can be safely identified in the spectra of Iapetus, Phoebe and Hyperion; hence a spectral classification is driven by the spectral continuum, namely the slope parameter. First, all the 2043 calibrated spectra of Iapetus, Phoebe and Hyperion were normalised to the value of 1.0 at the wavelength of 549.54 nm (VIMS channel 28), then the G-mode was applied with a 97.13$\%$ confidence level (1.90$\sigma$). The classification returned 5 homogeneous types, composed of 505, 744, 175, 538 and 71 samples respectively.\ A convenient way to represent the results is to plot the data in a multidimensional space with axes corresponding to the VIMS spectral channels nearest to the filters defined in the most common photometric systems: $U$ (366.3 nm), $B$ (446.5 nm), $R$ (659.1 nm), $I$ (805.2 nm) and $Z$ (885.9 nm); the $V$ filter is not considered, as it has been used as a reference wavelength for the normalisation of all the spectra. In this way, the three major types, i.e. types 1, 2 and 4 (in blue, cyan and yellow, respectively), can be easily identified in a 3D space whose axes correspond to the U, B and R photometric variables (see Fig.
\[fig3\]); for better clarity, the same data points can also be plotted in a 2D space with axes corresponding to pairs of the U, B, R, I and Z bands, so that the colour indexes for each sample can be evaluated as well: since we keep the information about the real object (satellite) corresponding to each sample, it can be verified that the main three types correspond to the three satellites (see Fig. \[fig4\]). Type 1 (blue) is made up of samples of the dark side of Iapetus, which appear rather concentrated in all the photometric variables. Type 2 (cyan) is linked to Hyperion, showing the steepest slope of the three satellites (and thus being redder than the dark side of Iapetus, as inferred from the high reflectance values in the R, I and Z bands) and with samples that show a larger spread with respect to the samples of Iapetus. Type 4 (yellow) is clearly related to Phoebe, whose samples, despite the small area of the satellite considered for this work, are more scattered in the multidimensional space with respect to the samples of Iapetus and Hyperion (especially towards the shorter wavelengths) and are definitely not reddened but, on the contrary, show subsets of pixels exhibiting a higher reflectance towards the blue and UV wavelengths, consistent with recent Earth-based observations [@grav07]. Type 3 (green) includes a relatively large number of samples that show an intermediate behaviour between the dark side of Iapetus and Phoebe or Hyperion. Finally, type 5 (red) is made up of samples of Hyperion that behave differently from the majority of the samples of that satellite, showing higher reflectance values on average in the R, I and Z bands and thus being redder than all the other types.\ This classification points out the fact that the colour indexes of the dark side of Iapetus, Phoebe and Hyperion, evaluated on high resolution data, are significantly different from each other.
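The preprocessing behind this visible classification (normalisation to 1.0 at 549.54 nm, then reading the normalised reflectance at the channels nearest the $U$, $B$, $R$, $I$ and $Z$ filters) can be sketched as follows; the wavelength grid and the red-sloped sample spectrum below are synthetic stand-ins for illustration, not VIMS data:

```python
import numpy as np

# Filter wavelengths (nm) as quoted in the text; V is the normalisation reference.
bands = {"U": 366.3, "B": 446.5, "V": 549.54, "R": 659.1, "I": 805.2, "Z": 885.9}

# 96-channel grid mimicking VIMS-V sampling (~7.3 nm/channel over 0.35-1.05 um).
wavelengths = np.linspace(350.0, 1050.0, 96)

def normalise(spectrum):
    """Normalise an I/F spectrum to 1.0 at the channel nearest 549.54 nm."""
    ref = np.argmin(np.abs(wavelengths - bands["V"]))
    return spectrum / spectrum[ref]

def photometric_values(spectrum):
    """Normalised reflectance at the channels nearest the U, B, R, I, Z filters."""
    s = normalise(spectrum)
    return {name: s[np.argmin(np.abs(wavelengths - wl))]
            for name, wl in bands.items() if name != "V"}

# Featureless, linearly red-sloped stand-in spectrum:
flat_red = 1.0 + 0.001 * (wavelengths - bands["V"])
vals = photometric_values(flat_red)
# After normalisation a red slope gives R, I, Z above 1 and U, B below 1.
```

Pairs of these values give the colour indexes used in Fig. \[fig4\]; a Hyperion-like red sample has high R, I and Z, while a Phoebe-like flat or bluish sample does not.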
As expected, Hyperion is spectrally red at visible wavelengths, similar to D-type asteroids, with a subset of samples showing the steepest spectral slope in the whole dataset; yet this reddening is, on average, higher than that of the dark material of Iapetus, while Phoebe represents a totally different case, with no reddening and even a small number of samples exhibiting a negative slope (i.e. the reflectance increases towards the shorter wavelengths). Results in the near infrared range ---------------------------------- In the range measured by VIMS-IR (0.85 - 5.1 $\mu$m), unlike the visible range case, several absorption features can be identified in the spectra of the three icy satellites. In the past, for classification purposes, we used the spectral range offered by VIMS-IR, both as a whole and truncated at 4.3 $\mu$m in order to discard the contribution of thermal emission for Iapetus, using simple reflectances $I/F$ as variables (see @tosi06). In this work, we adopted a different approach: we normalise all the spectra to VIMS channel 179, corresponding to the 2.23 $\mu$m wavelength, which is free from absorption features in all the three satellites, then we select a number of diagnostic absorption bands to be used as variables.\ All the spectra from Iapetus, Phoebe and Hyperion are dominated by the large 3 $\mu$m absorption band due to the OH fundamental stretch in H$_2$O ice and/or bound water, which is therefore not relevant for the classification. Instead, we use the spectral signatures of H$_2$O at 1.51 and 2.05 $\mu$m, combined with a number of spectral features diagnostic for volatiles.\ Interestingly, the $\nu_3$ asymmetric stretch of CO$_2$ is slightly shifted in position, moving - on average - from 4.247 $\mu$m on Hyperion to 4.254 $\mu$m on Iapetus and 4.260 $\mu$m on Phoebe [@clark05; @bea05a; @cruik07; @fila09], i.e.
a maximum difference of 13 nm, which in any case falls within one spectral channel of VIMS-IR (whose spectral resolution at those wavelengths is $\sim$22 nm). We sample this compound at 4.26 $\mu$m, the absorption being broad enough that no significant error is committed in evaluating the band depth. The aromatic C-H stretch on Phoebe mostly occurs at 3.25 $\mu$m [@clark05; @cora08] while on Iapetus it is reported at 3.29 $\mu$m [@cruik08], i.e. shifted by $\sim$40 nm or two spectral channels of VIMS-IR. We also include the CH aliphatic stretch at 3.41 $\mu$m and its overtone at 1.75 $\mu$m [@cruik08]; and the broad 2.44 $\mu$m feature, possibly the overtone of a cyanide compound [@clark05] or the fundamental of molecular hydrogen trapped in dark material [@clark09]. According to the first interpretation, the fundamental of CN varies in position as a result of variations in its chemical environment; assuming the RC15 calibration, in the Phoebe spectrum a feature is seen in the 4.5-4.6 $\mu$m range [@clark05] and we evaluate this bond at 4.53 $\mu$m. Finally, assuming the RC15 calibration we have also considered the 4.42 $\mu$m signature, reported by @cora08 in homogeneous types identified on the surface of Phoebe and suggested to be related to a nitrile compound like HC$_3$N or HNCO. It should be noted that by applying the RC17 calibration, the 4.53 $\mu$m and 4.42 $\mu$m features tend to disappear: such a behaviour can be indicative of calibration artifacts, so we discarded these two variables in our second test.\ The list of the spectral features used as variables in our classification is summarised in Table \[tab4\].
For each spectral signature, we compute the band depth following the definition by @clark84, as: $$D = 1 - \frac{R_b}{R_c}$$ where $R_b$ is the reflectance measured at the band centre and $R_c$ is the reflectance of the spectral continuum at the band centre, reconstructed through a linear fit relying on the wings of the band.\
We also combine these band depths with the normalised reflectance of the spectral continuum sampled at two wavelengths that are safely free from absorptions (1.82 $\mu$m and 3.55 $\mu$m), as a set of convenient variables to be used for the classification (see Table \[tab4\]).

  **Variable name**   **VIMS Spectral channel**   **Wavelength ($\mu$m)**   **Compound/Spectrally active bond**
  ------------------- --------------------------- ------------------------- --------------------------------------------------------
  BD1                 135                         1.51                      H$_2$O ice
  BD2                 150                         1.75                      C-H stretch overtone
  BD3                 168                         2.05                      H$_2$O ice
  BD4                 191                         2.44                      C$\equiv$N overtone or trapped H$_2$
  BD5                 240                         3.25                      aromatic C-H stretch
  BD6                 250                         3.41                      aliphatic C-H stretch
  BD7                 301                         4.26                      CO$_2$
  BD8                 310                         4.42                      Nitrile? (RC15 calibration only)
  BD9                 317                         4.53                      C$\equiv$N fundamental stretch (RC15 calibration only)
  CL1                 154                         1.82                      Continuum
  CL2                 258                         3.55                      Continuum

\[tab4\]

By composing such a dataset and classifying it through the G-mode method with a 98.98$\%$ confidence level (2.05$\sigma$), we find 2 types made up of 849 and 1194 samples, respectively. By plotting the result in a 3D space with axes corresponding, for example, to the abundances of H$_2$O ice (measured at 1.51 $\mu$m), CO$_2$ (measured at 4.26 $\mu$m) and C$\equiv$N (measured at 4.53 $\mu$m), these types can be highlighted and the corresponding samples can be associated with each satellite (see Fig. \[fig5\]). For clarity, the same points can also be plotted in a 2D space with axes corresponding to the depths of some diagnostic spectral features (aromatic/aliphatic CH, CO$_2$, C$\equiv$N, etc.)
versus the 1.51 $\mu$m signature, related to the abundance of water ice. Keeping the information about the satellite connected to each sample, it can easily be verified that samples of type 1 (blue) entirely correspond to spectra of Hyperion, while samples of type 2 (red) are - with the exception of only one sample - spectra of Phoebe and of the dark side of Iapetus (see Figs. \[fig6\] and \[fig7\]).\
From the classification, it turns out that Hyperion and Phoebe are generally richer in water ice than the Iapetus dark terrain, consistent with the results obtained in the far ultraviolet wavelengths by Cassini/UVIS [@hah08], with a subset of the samples of Iapetus and Phoebe exhibiting similar amounts of ice. The abundance of CO$_2$ is more prominent in the Iapetus dark material than on Phoebe, in agreement with the results by @bea05a, with Hyperion representing an intermediate situation and showing a stronger correlation with water ice. The abundances of hydrocarbons (traced by the aromatic and aliphatic CH stretches) are quite similar on the three bodies, but with a stronger correlation with water ice in the case of Hyperion. On the other hand, the dark material of Iapetus and Phoebe show similar amounts of nitrile compounds (assuming CN-bearing molecules as traced by the 2.44, 4.42 and 4.53 $\mu$m signatures), which again turn out to be deeply bound to water ice on Hyperion, whereas such a correlation is not found on the other two bodies.\
Regarding spatial variability, it should be noted that in the considered dataset CO$_2$ is more scattered in samples from Iapetus and Hyperion than in samples of Phoebe. Nitriles traced at 4.53 $\mu$m and 4.42 $\mu$m show a larger variability on Hyperion, although the opposite is interestingly observed for the 2.44 $\mu$m feature, especially on Phoebe. The aromatic CH stretch is more variable on Hyperion, whereas the aliphatic CH signature shows a larger variability on Iapetus.
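As an implementation note, the band depths used as variables throughout this section follow the definition $D = 1 - R_b/R_c$ given earlier, with the continuum reconstructed linearly from the band wings. A minimal sketch is given below; the synthetic spectrum, Gaussian band and wing wavelengths are illustrative assumptions, not the actual VIMS processing parameters:

```python
import numpy as np

def band_depth(wl, refl, left_wing, center, right_wing):
    """Band depth D = 1 - R_b / R_c (definition of Clark & Roush).

    The continuum R_c at the band centre is a straight line
    drawn through the reflectances at the two band wings.
    """
    r_left = np.interp(left_wing, wl, refl)
    r_right = np.interp(right_wing, wl, refl)
    # linear continuum evaluated at the band centre
    slope = (r_right - r_left) / (right_wing - left_wing)
    r_c = r_left + slope * (center - left_wing)
    r_b = np.interp(center, wl, refl)
    return 1.0 - r_b / r_c

# synthetic spectrum: flat continuum at 1.0 with a Gaussian band at 2.05 um
wl = np.linspace(1.8, 2.3, 501)
refl = 1.0 - 0.3 * np.exp(-0.5 * ((wl - 2.05) / 0.03) ** 2)
print(round(band_depth(wl, refl, 1.90, 2.05, 2.20), 3))  # ~0.3
```

With a flat continuum the recovered depth equals the injected band strength; on real spectra the linear fit over the wings absorbs the local continuum slope.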
Finally, the spectral continuum sampled at 1.82 $\mu$m (once the spectra have been normalised at 2.23 $\mu$m) turns out to be similar for Phoebe and Iapetus, while Hyperion shows a higher $I/F$ at that wavelength; the three satellites nevertheless show different $I/F$s at 3.55 $\mu$m.

![image](./Table6.png){height="22cm"}

![image](./Table7.png){height="22cm"}

After this analysis, we repeated the work in the infrared range by applying the latest, unofficial sensitivity function derived for VIMS-IR (RC17, Roger Clark, personal communication, 2008). As mentioned before, with this new calibration the features at 4.42 and $\sim$4.5 $\mu$m disappear in most data, so they are no longer considered as variables for the classification, while we keep all the other features, so that the new dataset is made up of 2043 spectra in 9 variables. By classifying it through the G-mode method with the same confidence level used before (98.98$\%$ or 2.05$\sigma$), we find 2 types made up of 844 and 1199 samples, respectively. Figures \[fig8\] and \[fig9\] show the result of the classification, with samples represented in a 2D space and axes again corresponding to the depths of the considered spectral features versus the 1.51 $\mu$m signature related to the abundance of water ice.
It can be seen that the classification remains essentially unchanged with respect to the application of the RC15 sensitivity function, with the same variability of the key compounds on the satellites and a generally strong correlation with water ice that still characterizes the samples of Hyperion in comparison to those of Iapetus and Phoebe.\
In light of this analysis, a general conclusion is that, regardless of the specific sensitivity function used to calibrate the VIMS data, the situation in the near infrared range is different from that in the visible range: on the basis of the depths of either nine or seven spectral features, related to the abundances of several compounds or spectrally active bonds on the surfaces of the satellites, and of the spectral continuum profile sampled at two different wavelengths, the existence of two classes indicates a generally higher degree of similarity between the dark hemisphere of Iapetus and Phoebe; the G-mode method allows us to reach this conclusion through an automatic statistical approach applied to a large number of samples, thus increasing the degree of confidence. However, despite the small spatial scale of the considered data, this analysis points out a generally stronger correlation with water ice for Hyperion, and a certain degree of variability of spectral features related to hydrocarbons, CO$_2$ and other compounds on the three satellites.

![image](./Table8.png){height="17.4cm"}

![image](./Table9.png){height="17.4cm"}

Origin of the dust: physical and dynamical scenario
===================================================

As we mentioned in the introduction, Phoebe and Hyperion have historically been proposed as the main potential sources of the dark material coating the leading hemisphere of Iapetus [@soter74; @matt92]. More recently, @bea05b suggested that the source of this material could be linked to more than one object, namely the retrograde Saturnian irregular satellites.
To deliver the dark material from Hyperion to Iapetus-crossing orbits, dynamical models relied on high ejection velocities (generally $v$ $>$ 100 m s$^{-1}$, see @matt92 [@mar02; @dal04]) and possibly on collimated clouds of ejecta [@matt92; @mar02]. PR drag, causing the dust grains to spiral inward toward Saturn, was instead invoked by those scenarios linking the dark material to Phoebe and the irregular satellites [@soter74; @bea05b].\
In this section we will discuss the dust production and transfer mechanisms in the outer Saturnian system (i.e. the region populated by the irregular satellites) and compare them with those estimated for the Hyperion-based scenario.

Irregular satellites: impacts and dust production
-------------------------------------------------

The images of Phoebe taken by the Cassini mission [@por05] supplied the first evidence that impacts played an important role in the history of the irregular satellites of Saturn. Prior to the arrival of Cassini, @nes03 predicted that collisional removal within the population of irregular satellites should have caused significant modifications of the Saturnian system over the age of the Solar System. Phoebe, as the biggest member of the Saturnian irregular satellites, likely played a major role in this removal process (ibid). At the time, only 13 irregular satellites had been discovered around Saturn: estimating the collisional evolution of this population, @nes03 reported that about 6 impacts (mostly due to Phoebe) were to be expected over a time comparable to the age of the Solar System. Using an updated sample counting 35 irregular satellites, @tur08 showed that, taking into account the present population, more than twice as many impacts should be expected over a time of 3.5$\times$10$^{9}$ years, i.e. the hypothesised lower limit to the age of the irregular satellites. Such results would imply an original population at least 33$\%$ larger than the present one (ibid).
Moreover, the orbital structure and the radial distribution of the Saturnian irregular satellites suggest that Phoebe played a major role in clearing its nearby orbital region of once-existing collisional shards and smaller objects [@tur08; @tur09]. Finally, even if less clearly than in the Jovian system, the orbital structure of the Saturnian irregular satellites hints at the existence of possible collisional families produced by the disruption of bigger parent bodies [@nes03; @tur08].\
Most impacts between Saturnian irregular satellites would involve pairs of counter-revolving, i.e. prograde vs. retrograde, objects [@nes03; @tur08]. As shown by @nes03, the associated impact velocities are about $3$ km s$^{-1}$. This implies that the dust-generating collisions in the Saturnian system would be more energetic than those in the Jovian system, which mainly involve members of the sole prograde or retrograde populations of irregular satellites and have impact velocities of the order of $1-2$ km s$^{-1}$ (ibid). Collisions between the Saturnian irregular satellites and interplanetary objects, likely at the basis of the formation of the putative collisional families, would also be characterised by the same range of impact velocities. As shown by @zea03, collisions of ecliptic comets on Phoebe would involve average impact velocities of about $3$ km s$^{-1}$. Since the impact velocity is mainly due to the encounter velocity between the host planet and the comet (ibid) and is little influenced by the differences in the small orbital velocities of the irregular satellites, we can assume the value computed by @zea03 for Phoebe as generally valid for all Saturnian irregular satellites.\
In addition to these possible major impact events, it has been suggested by @kri02 that impacts of interplanetary micrometeorites with small, atmosphereless bodies like the irregular satellites would supply a smaller yet continuous dust source.
This hypothesis is based on the data collected by the Galileo space mission, which suggest that a dust production process is still ongoing in the system and is responsible for an enhancement of the circum-Jovian dust with respect to interplanetary dust of about an order of magnitude (ibid). While the results of @kri02 refer exclusively to the Jovian system, their implications are in principle valid for all giant planets, with the caveat that smaller fluxes of micrometeorites are to be considered. As shown by @zea03 for ecliptic comets, in fact, the impact rate on Phoebe is on average half that on Himalia, the biggest Jovian irregular satellite. As shown by both theoretical modeling (see e.g. @caf97 [@ben99]) and observations (e.g. @mea05 [@iaa08] and references therein for ejecta speeds from the Deep Impact experiment), most of the produced dust particles would be ejected with velocities lower than those (i.e. $v$ $\gtrsim$ 400 m s$^{-1}$) needed to escape Saturn’s gravitational attraction [@tur09], thus remaining on planetocentric orbits.\
While all these dust production processes were suggested by indirect evidence and comparative considerations with the Jovian system, the recent discovery of a ring of particles around Saturn spanning most of the orbital region of the irregular satellites [@vsh09] represents proof that dust production processes acted and, by analogy with Jupiter’s gossamer rings, are still acting in the Saturnian system (ibid). Even if the information on this newly discovered disk is still limited, we argue that its formation is likely connected to the capture of Phoebe [@tur09] and the origin of Phoebe’s gap [@tur08; @tur09]. If collisional families are present among the Saturnian irregular satellites as suggested by @nes03 and @tur08, the disruptive collisions that generated them would supply additional dusty material and collisional shards, which are longer-lived than those ejected in the orbital region crossed by Phoebe [@tur08].
Transfer mechanism from the outer Saturnian system
--------------------------------------------------

Dust grains in orbit around Saturn would experience various perturbing effects, namely the gravitational perturbations of the Sun and the outer planets, the radiation pressure and, on longer timescales, the PR drag. While particles smaller than $1.5$ $\mu$m would be rapidly ejected from the Saturnian system due to the effects of radiation pressure as reported by @vsh09, bigger particles (i.e. $>3.5$ $\mu$m, ibid) would likely survive long enough for PR drag to act. Following the work by @soter74 and @bur96, the orbital evolution of such dust particles would cause them to spiral towards Saturn. If the dust grains were originally located outside the orbit of Iapetus, during their radial migration they would intersect the orbit of the satellite. During their inward drift, dust grains moving on retrograde orbits would impact Iapetus on the leading hemisphere due to their counter-revolving motion, while the same argument does not apply to prograde grains. Moreover, as a consequence of the counter-revolving motion, retrograde dust particles would experience a higher frequency of close encounters with the satellite, suggesting that Iapetus should be more efficient in collecting them than grains in prograde motion. This discrepancy in the capture efficiency could explain the asymmetry in the distribution of the dark material covering the surface of the satellite. Such a mass transport mechanism is probably still active in the Solar System, as shown by the results of the Galileo mission we mentioned above. The data supplied by the Galileo spacecraft, moreover, indicated that a significant fraction of the collected dust grains was on retrograde planetocentric orbits [@kri02].\
To evaluate the efficiency of Iapetus in collecting the dust grains drifting towards Saturn due to Poynting-Robertson drag, we used a modified version of the algorithm developed by @kes81.
The original algorithm was designed to study the collisional evolution of the Jovian irregular satellites by computing the impact probabilities between pairs of them. The algorithm takes into account the spatial and temporal density distributions of the two satellites, assuming that their main orbital elements (semimajor axis, eccentricity and inclination) are fixed in time and that in the timespan considered their remaining orbital angles would sample uniformly all possible values (i.e. the timespan considered should be longer than their precession timescales). While the dust grains migrate inward, they are subject to the gravitational perturbations of Jupiter and the Sun in the outer Saturnian system and of the regular satellites in the inner Saturnian system. As a consequence, the invariance of the main orbital elements cannot be assumed.\
To overcome this issue, instead of evaluating the impact probability between Iapetus and real, migrating and evolving dust grains, we used Kessler’s method to evaluate the efficiency of Iapetus in sweeping different regions of the orbital phase space. We considered a radial region within the range \[1.25$\times$10$^{-2}$ - 2.38$\times$10$^{-1}$\] AU from Saturn (i.e. the radial region where orbits with different eccentricity values can intersect that of Iapetus) divided into 20 concentric rings. In this region we computed the probability of collision of Iapetus with synthetic massless particles whose eccentricity and inclination[^2] values vary in the ranges \[0-0.9\] and \[0$^{\circ}$-60$^{\circ}$\] for prograde orbits or \[120$^{\circ}$-180$^{\circ}$\] for retrograde orbits. The sampling steps assumed were respectively 0.1 and 2$^{\circ}$. The time over which to integrate the impact probability for each particle has been assumed equal to the time needed to cross the radial ring where it is initially located.
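The phase-space sampling just described can be reproduced in a few lines. This is a sketch of the synthetic particle grid only, not of Kessler's collision-probability computation; it simply enumerates the $(e, i)$ pairs with the quoted ranges and sampling steps:

```python
import numpy as np

# eccentricity and inclination grids quoted in the text
ecc = np.arange(0.0, 0.9 + 1e-9, 0.1)            # 0 to 0.9, step 0.1
inc_pro = np.arange(0.0, 60.0 + 1e-9, 2.0)       # prograde: 0-60 deg, step 2 deg
inc_retro = np.arange(120.0, 180.0 + 1e-9, 2.0)  # retrograde: 120-180 deg

# one synthetic massless particle per (e, i) pair in each radial ring
grid_pro = [(e, i) for e in ecc for i in inc_pro]
grid_retro = [(e, i) for e in ecc for i in inc_retro]
print(len(grid_pro), len(grid_retro))  # 310 310
```

Each inclination case thus contains 310 distinct $(e, i)$ orbits per ring, consistent with the per-case orbit count reported in the text.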
This migration time is computed through the relationship for planetocentric Poynting-Robertson drag (see p. 430 of @dpl01): $$\frac{da}{dt}=-\frac{a}{t_{pr}}\frac{5+\cos^{2}i}{6}$$ where $t_{pr}$ is a characteristic decay time given by $$t_{pr}=\frac{1}{3\beta}\frac{r^{2}_{\odot}}{GM_{\odot}/c}\approx530\frac{r^{2}_{AU}}{\beta}\,\mathrm{yr},$$ $\beta$ is the ratio between the radiation pressure force and the gravitational force of the Sun, $a$ is the semimajor axis of the particle, $i$ its inclination with respect to the orbital plane of the planet about the Sun, $r_{\odot}$ is the Sun-Saturn distance and $r_{AU}$ is the same distance expressed in astronomical units. Inclination and eccentricity are not influenced by PR drag (ibid). As our template orbit for Iapetus we used the mean orbital elements supplied by the JPL Solar System Dynamics website[^3]. However, since that inclination is referred to the local Laplace plane while our reference frame is the orbital plane of Saturn, we recomputed the mean inclination of the satellite ($i$=10.152$^{\circ}$) from the results of Model 2 of @tur08.\
For our numerical setup and the grain sizes we considered (see section \[efficiency\]), the migration times are always longer (usually by at least an order of magnitude) than the precession timescales of the bodies involved: this allowed us to apply Kessler’s method to each radial region. To estimate the impact probability of real dust grains, we computed the cumulative impact probability over the whole radial path of particles sharing the same eccentricity and inclination values.
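Since $i$ and $e$ are unaffected by PR drag, the drift equation is separable: $da/a = -[(5+\cos^{2}i)/6]\,dt/t_{pr}$, so the time to shrink the semimajor axis from $a_{out}$ to $a_{in}$ is $t = [6\,t_{pr}/(5+\cos^{2}i)]\ln(a_{out}/a_{in})$. A hedged numerical sketch follows; the value of $\beta$ and the inclination are illustrative assumptions, not the paper's actual inputs:

```python
import math

def migration_time_yr(a_out_au, a_in_au, beta, inc_deg, r_sun_au=9.58):
    """Time for planetocentric PR drag to shrink a Saturn-centred
    semimajor axis from a_out to a_in (both in AU).

    da/dt = -(a / t_pr) * (5 + cos^2 i) / 6,  t_pr ~ 530 r_AU^2 / beta yr
    =>  t = (6 t_pr / (5 + cos^2 i)) * ln(a_out / a_in)
    """
    t_pr = 530.0 * r_sun_au**2 / beta  # characteristic decay time, yr
    f = (5.0 + math.cos(math.radians(inc_deg)) ** 2) / 6.0
    return (t_pr / f) * math.log(a_out_au / a_in_au)

# illustrative case: beta = 0.05 (assumed value for a small icy grain),
# retrograde orbit at i = 170 deg, crossing the full radial region of the text
t = migration_time_yr(2.38e-1, 1.25e-2, beta=0.05, inc_deg=170)
print(f"{t:.3g} yr")  # of order a few Myr
```

Crossing a single ring uses the same closed form with the ring's inner and outer radii in place of the full interval.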
Globally, we considered 6510 prograde orbits and 6510 retrograde ones: once integrated over the radial path, the number of orbits accounted for is 310 for each of the two cases in inclination.\

Dust capture efficiency {#efficiency}
-----------------------

We applied our modified model to perfectly absorbing grains of different sizes, which are characterised by different migration timescales through the dependence of $\beta$ on the particle radius (see e.g. p. 35 of @dpl01). We considered the following values for the size of the grains: 0.1, 1, and 10 $\mu$m. This range of values was selected on the basis of comparative considerations with the results obtained by @kri02 for the Jovian system. A recent estimate performed by @vsh09 for the Saturnian system suggests stricter constraints (i.e. $d > 3.5$ $\mu$m) on the size of the grains which could survive long enough to be affected by PR drag. The size range we considered overlaps with the one indicated by @vsh09, therefore we decided to keep our original one since it emphasizes the changes in the capture efficiency as a function of the size of the grains.\
The estimated sweeping efficiency of Iapetus, expressed as the number of collisions during the crossing of the whole radial path, is plotted in Figs. \[cloud-pro\] and \[cloud-retro\] for prograde and retrograde dust grains respectively, as a function of the eccentricity and inclination values. While Iapetus has a capture efficiency of $1$ or $100\%$ over those regions of the considered phase space characterized by more than $1$ collision, we preferred to express the capture probabilities as a function of the number of impacts in order to emphasize the different statistical weights of our estimates (i.e. $3$ collisions is statistically more significant than $1$ collision).\
As can be seen from Fig.
\[cloud-pro\], the prograde dust grains captured most efficiently are those with very low eccentricity values ($e$ $\approx$ 0): for grains bigger than 10 $\mu$m, the capture probability is of the order of $100\%$; yet, once the dynamical features of the prograde irregular satellites are accounted for, this dynamical class of particles is physically unrealistic. There is a second island of higher efficiency at high eccentricity values ($e$ $>$ 0.4) for those particles having inclination values similar to that of Iapetus, but the capture probability is lowest in the region of phase space ($e$ $>$ 0.1 and 30$^{\circ}$ $<$ $i$ $<$ 60$^{\circ}$) populated by the prograde irregular satellites. Setting aside the high-end tail of ejection velocities, the bulk of the impact-generated dust grains would populate the low-efficiency region of the $e-i$ plane: as a consequence, the capture efficiency of Iapetus would be of the order of a few 0.1$\%$ for 1 $\mu$m grains and would never exceed 20-30$\%$ even for 10 $\mu$m grains.\
The situation is different for retrograde grains (see Fig. \[cloud-retro\]). Again, the highest capture probabilities are in the low-eccentricity ($e$ $\approx$ 0), physically unrealistic region. For retrograde grains, however, the island with relatively high capture probability overlaps the region of the $e-i$ plane populated by the retrograde irregular satellites (i.e. $e$ $>$ 0.1 and 150$^{\circ}$ $<$ $i$ $<$ 180$^{\circ}$). As a consequence, 1 $\mu$m grains are characterised by capture probabilities ranging between 10-40$\%$, while for the majority of 10 $\mu$m grains the probabilities approach $100\%$.

Mass transfer from Hyperion
---------------------------

Before discussing the conclusions of this work, we would like to review the mass transfer scenarios linking Hyperion to the dark material on Iapetus in light of the results of the Cassini mission.
Prior to the arrival of Cassini at Saturn, several studies [@matt92; @mar02; @dal04] evaluated the transfer efficiency of collisionally-generated material from Hyperion to Iapetus. @matt92 assumed narrow, conical-shaped clouds of ejecta and estimated a single-passage transfer efficiency of 10$^{-3}$. A more detailed and realistic evaluation of the same scenario performed by @mar02 raised the transfer efficiency to about 20-40$\%$, depending on the characteristics of the break-up event. However, the authors emphasised that the assumptions on the ejection direction were made to study the transfer efficiency of those fragments most likely to reach Iapetus. They also noted that such particles would likely represent only a fraction of the ejecta cloud generated by catastrophic disruption events and that, if isotropic ejection is assumed, the mean transfer efficiency drops to 0.4$\%$. Such a value is in agreement with the one found by @dal04 in studying the impact probability of fragments ejected by Hyperion against the other Saturnian satellites.\
@matt92 estimated the mass of dark material required to cover Cassini Regio on Iapetus to a depth of 1 km to be about 3$\times$10$^{21}$ g (i.e. about half the mass of Hyperion, which is about 5$\times$10$^{21}$ g as reported by @tea07). The assumption on the depth of the dark material followed from the requirement that no subsequent impact should excavate the whole layer of dark material and expose the bright material underneath. As we said in the introduction, radar measurements of the thickness of the dark material layer performed by Cassini constrained its depth to between a few decimetres and about one metre. As a consequence, the amount of material needed to cover the leading hemisphere of Iapetus is at least 3 orders of magnitude smaller than previously thought, i.e.
approximately a few 10$^{18}$ g or less.\
We can estimate the cratering event needed to supply roughly this amount of material by inverting the formula for the volume of a simple, bowl-shaped crater $$\label{volume} V={\pi}h\left( \frac{d^{2}}{8}+\frac{h^{2}}{6}\right)$$ where $h$ is the depth of the crater, $d$ its diameter and $V$ the volume. If we assume the depth-to-diameter ratio estimated for Hyperion by @tea07, 0.21$\pm$0.05, we have $d\approx5h$. We then express $V$ as $M\rho^{-1}$ where $\rho$=544 kg m$^{-3}$ (ibid). We assume for $M$ the value computed by @matt92 corrected for the new depth of the dark material on Iapetus: $M$=3$\times$10$^{18}$ g. By inverting eq. \[volume\] we have $$h=\left(\frac{24M}{79\pi\rho}\right)^{\frac{1}{3}}$$ This indicates that, if we assume a complete transfer between Hyperion and Iapetus, the needed amount of material would be supplied by excavating a crater with depth $h$=8.11 km and diameter $d$=40.55 km, which lies within the crater size distribution observed on Hyperion [@tea07]. However, we need to take into account the effects of the transfer efficiency. If we assume the most favourable single-event scenario, i.e. a collimated, high-velocity ejection with a 40$\%$ transfer efficiency (the maximum estimated by @mar02), the resulting crater would have $h$=11 km and $d$=55 km. If we consider a more realistic yet still favourable case with a 20$\%$ transfer efficiency [@mar02], the dimensions of the crater become $h$=13.87 km and $d$=69.33 km, nearing the high-end tail of Hyperion’s crater size distribution [@tea07]. It should be noted that, in their work, @mar02 assume that the fragments are ejected in the direction that maximises the transfer efficiency; moreover, these authors assume a uniform distribution of the ejection velocities with a maximum value of 1.5 km s$^{-1}$, likely overestimating the contribution of high-velocity ejecta.
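The crater sizes quoted above follow directly from inverting eq. \[volume\] with $d\approx5h$, since $V = \pi h (25h^{2}/8 + h^{2}/6) = 79\pi h^{3}/24$. The short check below reproduces the arithmetic; a transfer efficiency $\epsilon$ simply scales the required excavated mass by $1/\epsilon$:

```python
import math

def crater_depth_m(mass_g, density_kg_m3):
    """Depth h of a bowl-shaped crater excavating a given mass, with
    d ~ 5h (Hyperion's depth-to-diameter ratio of ~0.21):
    V = 79*pi*h^3/24  =>  h = (24 V / (79 pi))^(1/3)."""
    v = (mass_g / 1000.0) / density_kg_m3  # excavated volume, m^3
    return (24.0 * v / (79.0 * math.pi)) ** (1.0 / 3.0)

# complete transfer: M = 3e18 g of dark material, rho = 544 kg/m^3
h = crater_depth_m(3e18, 544.0)
print(f"h = {h/1e3:.2f} km, d = {5*h/1e3:.1f} km")  # ~8.1 km deep, ~40.5 km wide
# 20% transfer efficiency: five times the mass must be excavated
h20 = crater_depth_m(3e18 / 0.20, 544.0)
print(f"h = {h20/1e3:.2f} km, d = {5*h20/1e3:.1f} km")  # ~13.9 km, ~69 km
```

Because $h \propto M^{1/3}$, even a fivefold increase in the required mass enlarges the crater by only a factor of $5^{1/3}\approx1.7$.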
Finally, as emphasised by @tea07, Hyperion is characterised by a significant porosity, of the order of 42$\%$ if the moon is mainly composed of water ice and likely higher if the rock fraction is significant. For porosity values this high, impacts can be extremely ineffective in excavating high-velocity ejecta, since the main crater-forming process is compression instead of excavation [@hea02]. As reported by @hea02, for a target porosity of $60\%$ only $2\%$ of the ejecta achieve velocities greater than 10 m s$^{-1}$. This implies that our estimates represent a lower limit to the size of the crater needed to supply the dark material on Iapetus and that more than one cratering event is likely necessary.\
Before concluding this review of the scenarios linking Hyperion to the dark material on Iapetus, we would like to point out that the present porosity of Hyperion could be the by-product of a past major collisional event and that the parent body of the satellite could have been characterised by a significantly lower porosity. Under this assumption, if the mass of the parent body was about $10\%$ higher than Hyperion’s present mass and this excess mass was collisionally removed, in principle Hyperion could supply the right amount of material observed on Iapetus.

Comparative discussion of the mass transfer scenarios
-----------------------------------------------------

As we previously said, due to its high porosity the present-day Hyperion is an ineffective source for the dark material coating the leading hemisphere of Iapetus. A more viable scenario connecting Hyperion to the dark material implies a significantly lower porosity of Hyperion’s parent body and the collisional removal of about $1/10$ of the present mass of the satellite.
However, the excavated material would be delivered in the form of collisional shards and fragments, thus leaving open the issue of how to obtain the uniform dark blanket we now observe on Iapetus.\
As a comparison, due to its lower porosity and higher density [@por05], Phoebe would be a more efficient source of ejecta. A medium-sized crater like Hylas, with its diameter $d$=28 km and its depth $h$=4.67 km computed through its depth-to-diameter ratio of 1:6 [@gie06], would supply 2.43$\times$10$^{18}$ g, i.e. about the estimated amount of dark material. As for Hyperion, not all the material excavated by impacts on Phoebe would be ejected as dust: part of it would consist of collisional shards and fragments. However, the results of @tur08 show that those fragments too big to be influenced by radiation forces would likely re-impact the satellite during their dynamical lifetime. The ejection-reaccretion process would also repeat for the material excavated by these secondary impacts with an ejection velocity higher than Phoebe’s escape velocity, i.e. about $100$ $\mathrm {m\,s}^{-1}$ [@por05], with an overall enhanced production of dust.\
The divergent results we obtained for the visible and infrared data, however, argue against the identification of Phoebe as the sole source of the dark material. Due to the likely active collisional history of the irregular satellites in the Saturn system, a multi-source scenario like the one hypothesised by @bea05b is more plausible, and the contribution of different satellites could explain the observed spectral differences between Phoebe and Iapetus.
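The 2.43$\times$10$^{18}$ g figure for Hylas can be recovered with the same bowl-shaped crater volume formula used for Hyperion. Phoebe's bulk density is not stated in this section, so the value below ($\sim$1630 kg m$^{-3}$, consistent with the Cassini-derived density of @por05) is an assumption of this sketch:

```python
import math

# Hylas on Phoebe: d = 28 km, depth-to-diameter ratio 1:6 -> h = 4.67 km
d = 28.0e3                                   # diameter, m
h = d / 6.0                                  # depth, m
v = math.pi * h * (d**2 / 8.0 + h**2 / 6.0)  # bowl-shaped crater volume, m^3
rho = 1630.0                                 # assumed Phoebe bulk density, kg/m^3
mass_g = rho * v * 1000.0
print(f"{mass_g:.2e} g")  # ~2.4e18 g, about the estimated dark-material mass
```

The result matches the quoted value to within rounding, confirming that a single Hylas-sized crater on a Phoebe-density target suffices in mass terms.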
Moreover, if the conjectured nature of Phoebe’s gap [@tur08] is correct, Phoebe collisionally removed once-existing prograde satellites near its orbital region and reaccreted most of the generated collisional shards: in such a scenario, the contribution to the dust flux of these now-extinct satellites should be accounted for in interpreting the spectral features of the dark material.\
The existence of a disk around Saturn which spans the orbital region of the irregular satellites argues in favour of this scenario. First, it proves that collisional, dust-generating processes acted and are likely still acting in the outer Saturnian system. Second, the lower limit to its mass estimated by @vsh09 under the conservative assumption that the disk is composed of $10$ $\mu$m grains indicates that even moderate cratering events (i.e. producing craters of about $1$ km in diameter) would be able to supply enough material to form such a disk. Assuming the present disk is in a steady state, the accumulation rate on Iapetus would be $20$ $\mu$m Myr$^{-1}$ [@vsh09]. Depending on the cratering rate on Iapetus (see e.g. the timescale estimated by @zea03 for ecliptic comets) and the real size distribution of the grains composing the disk, such a rate could possibly be enough to resupply the material excavated on Iapetus by impacts.\

Conclusions
===========

The spectral range covered by Cassini/VIMS and the availability of data from the close flyby of Iapetus allowed us to perform an automatic spectral classification of the surfaces of Iapetus, Hyperion and Phoebe at comparably high spatial resolution and favourable geometry. Our aim was to look for spectral affinities between the dark hemisphere of Iapetus and the two satellites that have historically been indicated as possible sources of the exogenous material covering it. The classification has been performed separately, adopting different approaches for the visual and infrared portions of the spectra measured by VIMS.
For the former, the $I/F$s measured at all the available wavelengths were used as variables, while several diagnostic features and the spectral continuum were used as variables for the latter, further separating the results obtained with the application of two different sensitivity functions in the calibration pipeline.\
As a general remark, striking spectral associations between Iapetus’ dark material and Phoebe or Hyperion are hardly found in these data. In the visible range, the G-mode analysis confirms that Phoebe is essentially grey, with a subset of the spectra even showing a negative slope, while Iapetus and Hyperion are clearly reddened. However, we found no association between the dark side of Iapetus and Hyperion, since the reddening appears stronger on Hyperion than on Iapetus, while the dark material on Iapetus shows a lower albedo than Hyperion. Moreover, any association in the visible range would not be conclusive, as similar photometric correlations have been pointed out also for other small dark satellites moving on both retrograde and prograde orbits [@bea05b]. In the near infrared range the correlation appears generally stronger between Iapetus and Phoebe, although significant variability of some compounds is found on all three satellites. We observed a clear correlation of most non-ice features with water ice in the case of Hyperion, which suggests that these compounds exist as trapping structures (e.g., fluid inclusions) within water ice, whereas the same kind of correlation - particularly for spectral features clearly correlated with dark material - is not present in the data on Iapetus and Phoebe. Furthermore, the spectra of Hyperion in the near infrared show a continuum profile significantly different from those of Iapetus and Phoebe.\
The evaluation of Iapetus’ sweeping efficiency in a PR drag based scenario yielded interesting results.
The unusually high orbital inclination of Iapetus naturally enhances its capture efficiency for retrograde dust grains and lowers that for prograde grains. Statistically, a significant fraction of the retrograde dust particles impacts the satellite while migrating inward for grain sizes greater than 1 $\mu$m. For grains of several microns in size, i.e. those that would not be removed by radiative forces as reported by @vsh09, this fraction approaches unity. For prograde particles, on the contrary, the transfer efficiency is an order of magnitude lower and, even for the biggest grains we considered, it never exceeds $10-30\%$. Together with the hypothesised past history of the Saturnian irregular satellites, i.e. the formation of Phoebe’s gap [@tur08; @tur09], the capture of Phoebe [@tur09] and the creation of collisional families [@nes03; @tur09], this result can explain the striking appearance of the satellite. The interpretation of ultraviolet data suggests that the delivery of the dark material is a recent or even still ongoing process [@hah08]. This can be naturally explained in the PR drag scenario through the effects of the impacts of micrometeorites on the irregular satellites. Such micro-impacts, particularly those on the smaller retrograde satellites characterised by lower escape velocities, would supply a continuous source of dust, as suggested by the data of the Galileo mission on the Jovian system [@kri02]. The existence of the newly discovered outer disk around Saturn [@vsh09] strongly supports this scenario.\
In conclusion, the results of our work argue, on both spectroscopic and dynamical grounds, in favour of a link between Phoebe and the dark material on Iapetus due to Poynting-Robertson drag of dust particles. While likely the main actor, Phoebe, we argue, was not the sole source that contributed to supplying the dark material.
Collisionally removed prograde satellites which originally populated the orbital region near Phoebe and the parent bodies of the suggested collisional families existing in the system likely played a role and could account for the differences observed in the spectroscopic data. Barucci M. A., Capria M. T., Coradini A., Fulchignoni M., 1987. “Classification of asteroids using G-mode analysis”. Icarus 72, 304-324. Benz W., Asphaug E., 1999. “Catastrophic Disruptions Revisited”. Icarus 142, 5. Bianchi R., Coradini A., Butler J. C., Gavrishin A. I., 1980. “A classification of lunar rock and glass samples using the G-mode central method”. Moon and the Planets 22, 305-322. Black G. J., Campbell D. B., Carter L. M., Ostro S. J., 2004. “Radar detection of Iapetus”. Science 304, 553. Brown R. H., et al., 2004. “The Cassini Visual and Infrared Mapping Spectrometer (VIMS) investigation”. Space Sci. Rev. 115, 111-168. Brown R. H., et al., 2006. “Observations in the Saturn system during approach and orbital insertion, with Cassini’s Visual and Infrared Mapping Spectrometer (VIMS)”. Astron. Astrophys. 446, 707-716. Buratti B. J., Hicks M. D., Tryka K. A., Sittig M. S., Newburn R. L., 2002. “High-resolution 0.33-0.92 $\mu$m spectra of Iapetus, Hyperion, Phoebe, Rhea, Dione, and D-Type asteroids: How are they related?” Icarus 155, 375-381. Buratti B. J., et al., 2005. “Cassini Visual and Infrared Mapping Spectrometer observations of Iapetus: detection of CO$_2$”. Astrophys. J. Lett. 622, L149-152. Buratti B. J., Hicks M. D., Davies A., 2005. “Spectrophotometry of the small satellites of Saturn and their relationship to Iapetus, Phoebe and Hyperion”. Icarus 175, 490-495. Burns J. A., et al., 1996. “The contamination of Iapetus by Phoebe dust”. In: “Physics, Chemistry, and Dynamics of Interplanetary Dust”, ASP Conference Series, 104 (eds. Gustafson B. A. S., Hanner M. S.), Chicago, University of Chicago Press, 179-182. Carusi A., Massaro E., 1978.
“Statistics and mapping of asteroid concentrations in the proper elements space”. Astron. Astrophys. Suppl. 34, 81-90. Cerroni P., Coradini A., 1995. “Multivariate classification of multispectral images: an application to ISM Martian spectra”. Astron. Astrophys. Suppl. 109, 585-591. Clark R. N., Roush T. L., 1984. “Reflectance spectroscopy: Quantitative analysis techniques for remote sensing applications”. J. Geophys. Res. 89, 6329-6340. Clark R. N., et al., 2005. “Compositional maps of Saturn’s moon Phoebe from imaging spectroscopy”. Nature 435, 66-69. Clark R. N., et al., 2007. “Compositional mapping of Saturn’s satellite Dione with Cassini VIMS and implications of dark material in the Saturn system”. Icarus 193, 372-386. Clark R. N., et al., 2009. “The composition of Iapetus: mapping results from Cassini VIMS”. Submitted to Icarus. Coradini A., Fulchignoni M., Gavrishin A. I., 1976. “Classification of lunar rocks and glasses by a new statistical technique”. The Moon 16, 175-190. Coradini A., Fulchignoni M., Fanucci O., Gavrishin A. I., 1977. “A FORTRAN V program for a new classification technique: the G-mode central method”. Comput. Geosci. 3, 85-105. Coradini A., Giovannelli F., Polimene M. L., 1983. “A statistical X-ray QSOs classification”. International Cosmic Ray Conference Papers, Volume 1 (A85-22801 09-93). Bombay, Tata Institute of Fundamental Research, 35-38. Coradini A., Cerroni P., Forni O., Bibring J.-P., Gavrishin A. I., 1991. “G-mode classification of Martian infrared spectral data from ISM-Phobos 2”. Abstracts of the Lunar and Planetary Science Conference 22, 243. Coradini A., et al., 2008. “Identification of spectral units on Phoebe”. Icarus 193, 233-251. Cordelli A., Farinella P., “A new model to simulate impact breakup”, 1997, Plan. Space Sci. 45, 1639-1647. Cruikshank D. P., Bell J. F., Gaffey M. J., Brown R. H., Howell R., Beerman C., Rognstad M., 1983. “The dark side of Iapetus”. Icarus 53, 90-104. Cruikshank D. P., Allamandola L. 
J., Hartmann W. K., Tholen D. J., Brown R. H., Matthews C. N., Bell J. F., 1991. “Solid C$\equiv$N bearing material on outer Solar System bodies”. Icarus 94, 345-353. Cruikshank D. P., et al., 2007. “Surface composition of Hyperion”, Nature 448, 54-56. Cruikshank D. P., et al., 2008. “Hydrocarbons on Saturn’s satellites Iapetus and Phoebe”. Icarus 193, 334-343. De Pater I., Lissauer J. J., “Planetary Sciences”, 2001, Cambridge (UK), Cambridge University Press, ISBN 0521482194 Dobrovolskis A. R., Lissauer J. J., “The fate of ejecta from Hyperion”, 2004, Icarus, 169, 462-473. Erard S., Cerroni P., Coradini A., 1991. “Automatic classification of spectral types in the equatorial regions of Mars”. 24th DPS Meeting, abstract n. 24.14-P; Bulletin of the American Astronomical Society, Vol. 24, 978. Farinella P., Marzari F., Matteoli S., “The disruption of Hyperion and the origin of Titan’s atmosphere”, 1997, The Astronomical Journal, 113, 2312-2316 Filacchione G., et al., 2009. “Saturn’s icy satellites investigated by Cassini-VIMS. II. Results at the end of Nominal Mission”. Submitted to Icarus. In press. Gavrishin A. I., Coradini A., Fulchignoni M., 1980. “Trends in the chemical composition of lunar rocks and glasses”. Geokhimiia, Mar. 1980, 359-370 (in Russian). Gavrishin A. I., Coradini A., Cerroni P., 1992. “Multivariate classification methods in Planetary sciences”. Earth, Moon, and Planets 59, 141-152. Giese B., Neukum G., Roatsch T., Denk T., Porco C. C., “Topographic modeling of Phoebe using Cassini images”, 2006, Planet. Space Sci. 54, 1156-1166. Giovannelli F., Coradini A., Polimene M. L., Lasota J. P., 1981. “Classification of cosmic sources - a statistical approach”. Astron. Astrophys. 95, 138-142. Grav T., Bauer J., 2007. “A deeper look at the colors of the saturnian irregular satellites”. Icarus 191, 267-285. Hendrix A. R., Hansen C. J., “The albedo dichotomy of Iapetus measured at UV wavelengths”, 2008, Icarus 193, 344-351.
Holsapple K., Giblin I., Housen K., Nakamura A., Ryan E., “Asteroid Impacts: Laboratory Experiments and Scaling Laws”, 2002, Asteroids III, Eds. W. F. Bottke Jr., A. Cellino, P. Paolicchi and R. P. Binzel, University of Arizona Press, Tucson, 443-462 Ipatov S. I., A’Hearn M. F., “Velocities and relative amount of material ejected from Comet 9P/Tempel 1 after the Deep Impact collision”, 2008, arXiv:0810.1294v2 Kessler D. J., “Derivation of the collision probability between orbiting objects. The lifetimes of Jupiter’s outer moons", 1981, Icarus, 48, 39-48. Krivov A. V., Wardinski I., Spahn F., Krüger H., Grün E., “Dust on the Outskirts of the Jovian System”, 2002, Icarus, 157, 436-455. Marchi S., Barbieri C., Dell’Oro A., Paolicchi P., “Hyperion-Iapetus: Collisional relationships”, 2002, Astronomy & Astrophysics 381, 1059-1065. Matthews R. A. J., “The Darkening of Iapetus and the Origin of Hyperion”, 1992, Quarterly Journal of the Royal Astronomical Society, 33, 253-258 McCord T. B., et al., 2004. “Cassini VIMS observations of the Galilean satellites including the VIMS calibration procedure”. Icarus 172, 104-126. Meech K. J., et al., “Deep Impact: Observations from a Worldwide Earth-Based Campaign”, 2005, Science 310, 265-269. Miller E., et al., 1996. “The Visual and Infrared Mapping Spectrometer for Cassini”. Proc. SPIE Vol. 2803, 206-220. Nesvorny D., Alvarellos J. L. A., Dones L., Levison H., “Orbital and Collisional Evolution of the Irregular Satellites” 2003, The Astronomical Journal 126, 398-429. Orosei R., Bianchi R., Coradini A., Espinasse S., Federico C., Ferriccioni A., Gavrishin A. I., 2003. “Self-affine behavior of Martian topography at kilometer scale from Mars Orbiter Laser Altimeter data”. J. Geophys. Res. 108 (E4), GDS 4-1. Ostro S., et al., 2006. “Cassini RADAR observations of Enceladus, Tethys, Dione, Rhea, Iapetus, Hyperion, and Phoebe”. Icarus 183, 479-490. Owen T. C., Cruikshank D. P., Dalle Ore C. M., Geballe T. R., Roush T. 
L., de Bergh C., Meier R., Pendleton Y. L., Khare B. N., 2001. “Decoding the domino: the dark side of Iapetus”. Icarus 149, 160-172. Porco C. C., et al., 2005. “Cassini Imaging Science: initial results on Phoebe and Iapetus”. Science 307, 1237-1242. Smith B. A., et al. 1981, Science 212, 163. Soter S., 1974. “Brightness of Iapetus”. Poster paper presented at the 28th IAU Colloq. on Planetary Satellites, Cornell University, August 1974. Squyres S. W., Sagan C., 1983. “Albedo asymmetry of Iapetus”. Nature 303, 782-785. Squyres S. W., Buratti B. J., Veverka J., Sagan C., 1984. “Voyager photometry of Iapetus”. Icarus 59, 426-435. Thekaekara M. P., 1973. In: A. J. Drummond & M. P. Thekaekara (eds.), The Extraterrestrial Solar Spectrum, Institute of Environmental Sciences, Mount Prospect, IL, 114. Tholen D. J., Zellner B., 1983, Icarus 53, 341-347. Thomas P. C., et al., “Hyperion’s sponge-like appearance”, 2007, Nature 448, 50-53. Tosi F., Coradini A., Gavrishin A. I., Adriani A., Capaccioni F., Cerroni P., Filacchione G., Brown R. H., 2005. “G-mode classification of spectroscopic data”. Earth, Moon, & Planets 96, 165-197. Tosi F., et al., 2006. “Iapetus, Phoebe and Hyperion: are they related?”. 37th Annual Lunar and Planetary Science Conference, March 13-17, 2006, League City, Texas, abstract no. 1582. Tosi F., et al., “Analysis of selected VIMS and RADAR data over the surface of Titan through a multivariate statistical method”, 2009. Submitted to Icarus. Turrini D., Marzari F., Beust H., “A new perspective on the irregular satellites of Saturn - I. Dynamical and collisional history”, 2008, MNRAS, 391, 1029-1051. Turrini D., Marzari F., Tosi F., “A new perspective on the irregular satellites of Saturn - II. Dynamical and physical origin”, 2009, MNRAS, 392, 455-474. Verbiscer A. J., Skrutskie M. F., Hamilton D. P., “Saturn’s largest ring”, 2009, Nature, doi:10.1038/nature08515 Zahnle K., Schenk P.
Levison H., Dones L., “Cratering rates in the outer solar system”, 2003, Icarus, 162, 263-289 \[lastpage\] [^1]: E-mail: [email protected] [^2]: The inclination of the particle’s orbit is measured with respect to the planet’s orbital plane about the Sun [^3]: <http://ssd.jpl.nasa.gov/?sat_elem>
---
author:
- 'G.P. Tozzi'
- 'H. Boehnhardt'
- 'L. Kolokolova'
- 'T. Bonev'
- 'E. Pompei'
- 'S. Bagnulo'
- 'N. Ageorges'
- 'L. Barrera'
- 'O. Hainaut'
- 'H.U. Käufl'
- 'F. Kerber'
- 'G. LoCurto'
- 'O. Marco'
- 'E. Pantin'
- 'H. Rauer'
- 'I. Saviane'
- 'C. Sterken'
- 'M. Weiler'
date: 'Received: today; accepted: tomorrow'
title: |
    Dust observations of Comet 9P/Tempel 1\
    at the time of the Deep Impact [^1]
---

[On 4 July 2005 at 05:52UT, the impactor of NASA’s Deep Impact (DI) mission crashed into comet 9P/Tempel 1 with a velocity of about 10[$\mathrm{km\,s}^{-1}$]{}. The material ejected by the impact expanded into the normal coma, produced by ordinary cometary activity.]{} [Based on visible and near-IR observations, the characteristics and the evolution with time of the cloud of solid particles released by the impact are studied in order to gain insight into the composition of the nucleus of the comet. An analysis of solid particles in the coma not related to the impact was also performed.]{} [The characteristics of the non-impact coma and of the cloud produced by the impact were studied by observations at visible wavelengths and in the near-IR. The scattering characteristics of the “normal” coma of solid particles were studied by comparing images in various spectral regions, from the UV to the near-IR. For each filter, an image of the “normal” coma was then subtracted from images obtained in the period after the impact, revealing the contribution of the particles released by the impact.]{} [For the non-impact coma the Af$\rho$, a proxy of the dust production, has been measured in various spectral regions. The presence of sublimating grains has been detected. Their lifetime was found to be $\sim 11$ hours. Regarding the cloud produced by the impact, the total geometric cross section multiplied by the albedo, SA, was measured as a function of color and time.
The projected velocity appeared to obey a Gaussian distribution with an average velocity of the order of 115[$\mathrm{m\,s}^{-1}$]{}. By comparing the observations taken about 3 hours after the impact, we found a strong decrease in the cross section in the $J$ filter, while that in [$K_\mathrm{s}$]{} remained almost constant. This is interpreted as the result of sublimation of grains dominated by particles with sizes of the order of a few microns.]{}

Introduction
============

The Deep Impact mission (hereafter DI) to the Jupiter family comet 9P/Tempel 1 (hereafter 9P) was aimed at studying the cratering physics of minor bodies in the solar system and the primordial material preserved inside cometary nuclei. On July 4, 2005 the impactor of the DI experiment produced a high-speed (about 10[$\mathrm{km\,s}^{-1}$]{}) impact on the nucleus of 9P, excavating a considerable amount of cometary material that was observed and measured both in-situ by the DI fly-by spacecraft and remotely by Earth-based instrumentation. First results of the mission are described in @AHearn2005 and @Sunshine2006. Early earth-based and other space-based measurements of the event have been published by @Meech2005, @Sugita2005, @Harker2005, @Lisse2006, and @Schleicher2005. At the European Southern Observatory (ESO), DI received considerable observing time allocated to observe the event at their Chilean sites at Cerro La Silla and Cerro Paranal [@Kaeufl2005a]. Here, we summarize results from the visible and near-IR measurements of the dust in the cometary coma obtained both shortly before and after the DI event. We focus on dust ejecta properties such as scattering properties, projected velocity, and spatial distribution, and their evolution with time. Complementary data from the ESO DI campaign on polarimetric and mid-IR observations, as well as on the cometary gas emission and the large-scale coma activity of the comet, are described elsewhere (see e.g., Boehnhardt et al., 2007).
Pre-impact monitoring of the cometary activity is described by @Kaeufl2005b and @Lara2006.

Observations
============

Telescopes, instruments, filters
--------------------------------

The majority of the observations described here were performed at the European Southern Observatory (ESO) in La Silla/Chile using the 3.5m New Technology Telescope (NTT), switching between two focal plane instruments: EMMI (ESO Multi-Mode Instrument) for the visible spectral region, and SOFI (Son of ISAAC) for the near-IR ([*JHK*]{}). Both instruments are of focal reducer type for imaging and spectroscopic observations. EMMI provides a field of view of $9.1 \times 9.9$ arcmin with a two-detector array in the red arm ($400-1\,000$nm) and of $6.2 \times 6.2$ arcmin with a single detector in the blue arm (300-500nm), at 0.32 and 0.37 arcsec/pixel resolution (using the $2 \times 2$ and $1 \times 1$ binning options), respectively. In its large field option used for these observations, SOFI has a single detector with a $4.9 \times 4.9$ arcmin field of view at 0.288 arcsec/pixel resolution. In the visible, narrow-band filters with bandpasses in selected wavelength regions of interest for cometary science were used. In particular, for the study of the cometary dust, the following filters with no or negligible gas emission in their passband were used: one in the ultraviolet ([$U_\mathrm{c}$]{}), one in the blue ([$B_\mathrm{c}$]{}) and one in the red ([$R_\mathrm{c}$]{}) spectral region. The near-IR observations were performed with the regular $J$, $H$, and [$K_\mathrm{s}$]{} broad band filters, since in this region the gas contamination is negligible. Table \[tablog\] gives the log of observations together with the list of filters used, including the respective central wavelength and full width at half maximum (FWHM) of the wavelength passband. Technical information on the La Silla telescope and instruments can be found at http://www.ls.eso.org/lasilla/sciops.
Since La Silla was clouded over during the night of 5-6 July 2005, near-IR imaging of the comet was shifted to the on-going DI campaign at the Cerro Paranal Observatory using ISAAC (Infrared Spectrometer And Array Camera) at the 8.2m unit telescope Antu of ESO’s Very Large Telescope (VLT). Due to the shortage of time at the end of the nightly visibility window, only part of the $J$, $H$ and [$K_\mathrm{s}$]{} filter imaging sequence was performed. ISAAC is a focal reducer-type instrument providing a field of view of $2.5 \times 2.5$arcmin at a pixel resolution of 0.148 arcsec. Technical information on the VLT and the ISAAC instrument can be found at http://www.eso.org/paranal/sciops.

Calibrations on the sky
-----------------------

For calibration purposes, some photometric standards were also observed before and after the comet observations on each clear night. In the near-IR, the normal $JH{\ensuremath{K_\mathrm{s}}}$ photometric standards were used [@Persson1998], while for the calibration of the narrow band filters in the visible, well known spectrophotometric standards from the list by @Hamuy1994 were measured. The required calibration frames (bias and sky flatfield exposures for the visible imaging, and screen and/or lamp flatfields with lamp illumination on and off for the near-IR) were obtained during daytime and/or twilight periods.

Observing techniques
--------------------

The comet imaging was performed with the telescope tracking at the speed of the moving target. Jitter offsets of small amplitude (order of 10-30 arcsec) were applied between individual exposures through a single filter. As usual for extended objects, the observations of the comet in the near-IR spectral region were interlaced with observations of the sky at an offset of $\simeq$ 8$\arcmin$ in a different region of the sky. A sequence of 5 comet and 5 sky images was usually taken in each near-IR filter. The jitter sequence typically lasted 11-12 minutes per filter.
Due to the mentioned shortage of time, observations on the night of July 5-6 with ISAAC consisted of only 2 comet and 2 sky images per filter. Observations in the visible were also repeated 5 times for each filter, offsetting the telescope by 10-30 arcsec. Calibration observations (standards, sky flatfields) were performed with the telescope tracking at the sidereal rate. Daytime calibration images (bias, dome flats) used fixed telescope pointing.

  Filter:                  [$U_\mathrm{c}$]{}   [$B_\mathrm{c}$]{}   [$R_\mathrm{c}$]{}   $J$    $H$    [$K_\mathrm{s}$]{}
  $\lambda_0$ (nm):        372.5                442.2                683.8                1247   1653   2162
  $\Delta \lambda$ (nm):   6.9                  3.7                  8.1                  290    297    275
  Instrument:              EMMI                 EMMI                 EMMI                 SOFI   SOFI   SOFI

  Observation times $t - t_0$ (hh:mm) for each night, with weather conditions:

  2005-07-02/03 (CLR-THN): $-$29:42, $-$28:13, $-$28:45, $-$26:08, $-$26:21, $-$26:34
  2005-07-03/04 (THN): $-$04:39, $-$04:52
  2005-07-04/05 (CLR): $+$17:29, $+$17:16, $+$17:03; $+$22:32, $+$20:29, $+$17:16, $+$20:13
  2005-07-05/06 (COUT): $+$45:32, $+$45:44, $+$45:55
  2005-07-06/07 (THN): $+$65:39, $+$65:26, $+$65:13
  2005-07-07/08 (THN): $+$90:14, $+$92:33, $+$92:21, $+$94:13, $+$94:00, $+$93:47; $+$94:26
  2005-07-08/09 (CLR): $+$113:55, $+$113:42, $+$113:28; $+$118:21, $+$118:40, $+$114:37, $+$114:29, $+$114:09
  2005-07-09/10 (CLR): $+$141:15, $+$138:25, $+$138:12

Since the EMMI and SOFI focal plane instruments were mounted on the two Nasmyth foci of the NTT telescope, fully simultaneous observations in the near-IR and visible were not possible.
However, the switching time between the two instruments was short (less than 15 min) and allowed us to use both instruments sequentially during the nightly visibility window of the comet. The summary log of observations is given in Table \[tablog\]. During the observing period the Sun (r$_h$) and Earth ($\Delta$) distances of the comet were 1.51AU and 0.89–0.91AU, respectively. The phase (Sun-Comet-Observer) angle was $\simeq$ 41°, and the position angle of the Sun, projected on the sky at the position of the comet, was $\simeq$ 290°.

Data reduction
==============

Frame pre-processing
--------------------

For the visible imaging, all comet and standard star images were corrected for the bias and the flatfield. Both bias and flatfield maps were computed as the average of a series of bias and sky flatfield exposures taken during the observing interval and through the corresponding filters (for the flatfield). Subsequently, a first-order sky background correction was applied by subtracting the average sky flux value measured at the four edges of the individual flatfielded images. For the near-IR data the flatfield maps were computed from the screen flat images in each filter with the lamp illumination on and off. Then, for each sequence, a median average sky+bias frame was computed from the sequence of five sky observations. The comet images were then reduced by subtracting the median averaged sky frame and by dividing the result by the flatfield for the corresponding filter. Finally, comet images for each filter/sequence in the visible and near-IR were obtained as the median average of the five individual images, after re-centering them on the photometric nucleus. With the median average of 5 images, almost all background stars and detector defects (hot or dead pixels) were erased. For the night of July 5-6, this was not possible, since only 2 comet and 2 sky images were recorded. In this case, the background stars and detector defects were erased manually.
Although for morphology studies this was acceptable, it prevented precise quantitative measurements for this particular night. The same procedure was applied to the standard stars. From the reduced standard star images, photometric zero points were derived for clear nights (using aperture photometry and the procedure described in @Boehnhardt2007 for the EMMI images).

Residual background flux removal
--------------------------------

The presence of a constant residual background was checked and corrected by measuring the function $\Sigma$Af at large projected nucleocentric distances, $\rho$. $\Sigma$Af, derived from the Af$\rho$ introduced by @AHearn1984, describes the dust albedo (A) multiplied by the total area covered by the solid particles in an annulus of radius $\rho$ and unitary thickness. It is equal to $2\pi \rho$Af, where f is the average filling factor of the grains at the projected distance $\rho$. Note that the definition given here is slightly different from the original one given in @Tozzi02 and @Tozzi04, even though the physical meaning is the same. Assuming a simple outflow pattern, i.e. geometric attenuation and expansion at constant outflow velocity of the cometary dust, the $\Sigma$Af function should be independent of $\rho$. In this case, a small residual background (for instance, from incomplete sky subtraction) would introduce a linear dependence of the function on $\rho$. Hence, by applying a trial and error procedure, the residual background can be removed from the reduced images such that the $\Sigma$Af function becomes constant at large $\rho$. This procedure does not affect the detection of changes in the cometary activity, since the latter introduces an “expanding bump" in the profile (see section 4.2.2), a very different behavior from the linear dependence on $\rho$ introduced by uncorrected background subtraction.
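The logic of this correction can be sketched in a few lines: since a constant residual background contributes a term that grows linearly with $\rho$ to an otherwise flat $\Sigma$Af profile, fitting the linear component of the outer profile and subtracting it recovers a constant $\Sigma$Af. The following is a minimal illustration on a synthetic one-dimensional profile, not the authors' actual pipeline; the profile level and slope are hypothetical values.

```python
import numpy as np

def remove_residual_background(rho, sigma_af):
    """Fit a straight line to the outer half of a Sigma-Af profile and
    subtract the linear term attributed to uncorrected sky background."""
    outer = rho > 0.5 * rho.max()
    slope, _ = np.polyfit(rho[outer], sigma_af[outer], 1)
    return sigma_af - slope * rho, slope

# Synthetic profile: a flat "true" Sigma-Af level plus a residual
# background contributing linearly with projected distance rho.
rho = np.linspace(5_000.0, 50_000.0, 100)   # km (hypothetical grid)
true_level, residual_slope = 600.0, 0.004   # hypothetical values
measured = true_level + residual_slope * rho
corrected, slope = remove_residual_background(rho, measured)
```

After the correction, the profile is constant again, which is the diagnostic the trial-and-error procedure in the text aims for.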
This approach for residual background removal is still applicable for the observations taken within about a day after the impact, since the dust produced by the impact was confined to cometocentric distances shorter than 20000km and the coma flux measured in the SOFI images at larger distances could still be used for the above-mentioned calculations. This method of residual background removal is not easily applicable to the coma images taken with ISAAC during the night of July 5-6, 2005, since the DI ejecta had already expanded to the edge of the field of view. For those observations, we assumed that the integral over the position angle (PA) of the function $\Sigma$Af, obtained on the side opposite to the ejecta cloud (over PA between 0 and 90 deg), remained unchanged from night to night. By changing the background level in the ISAAC images, the flux profiles measured in this quadrant before and after DI were forced to be constant with $\rho$. Support for the assumption of an unchanged appearance of the normal coma comes from the analysis of pre-impact observations [see @Lara2006] and from the fact that the coma signal disappears in the respective PA range when subtracting a pre-impact image from a post-impact one (both taken through the same filter).

Flux calibration
----------------

All the images taken in clear conditions were then calibrated in Af using the following formula, derived from @AHearn1984 $${\rm Af}=5.34 \times 10^{11} \left(\frac{r_h}{dx}\right)^2 C_s \times 10^{-0.4(Z_p-M_s)}$$ where $r_h$ is the heliocentric distance in AU, $dx$ is the detector pixel size in arcsec, $C_s$ is the pixel signal in $e^-$/s, and $Z_p$ and $M_s$ are the photometric zero point and the solar magnitude in the filter used, respectively. Images taken in non-photometric conditions were flux calibrated assuming that the $\Sigma$Af profiles at large $\rho$ coincide with those of the day before and/or the day after.
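As a numeric sketch, the calibration formula above can be wrapped in a small per-pixel function. All input values below are hypothetical placeholders chosen only to show the scalings, not the actual zero points or signals of these observations.

```python
def af_per_pixel(r_h, dx, c_s, z_p, m_s):
    """Af for one pixel, following the A'Hearn et al. (1984)-style
    calibration formula quoted in the text.

    r_h : heliocentric distance (AU)
    dx  : detector pixel size (arcsec)
    c_s : pixel signal (e-/s)
    z_p : photometric zero point (mag)
    m_s : solar magnitude in the same filter (mag)
    """
    return 5.34e11 * (r_h / dx) ** 2 * c_s * 10.0 ** (-0.4 * (z_p - m_s))

# Hypothetical numbers (SOFI-like 0.288 arcsec pixels, r_h = 1.51 AU):
af = af_per_pixel(r_h=1.51, dx=0.288, c_s=120.0, z_p=24.0, m_s=-27.1)
```

Note that Af scales linearly with the measured pixel signal $C_s$, so doubling the count rate doubles Af, while the zero point and solar magnitude enter only through the magnitude-difference factor.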
This is justified by the fact that, due to the low dust expansion velocity, any change in the dust production would not affect regions at $\rho$ larger than 20000–30000km within 24 hours. Moreover, the coma analysis by @Boehnhardt2007 suggests that no significant changes in the flux distribution (except for the DI ejecta cloud) took place between July 3 and 10, 2005. The relative calibration from consecutive good nights was checked by comparing the $\Sigma$Af values of the comet at large nucleocentric distances. By a careful examination of all comet exposures, we noticed that the images recorded before the impact and more than 90 hours after it were very similar, with no or negligible traces of DI ejecta. Hence, in order to increase the signal-to-noise (SN) ratio, we computed images (one per filter) of the “undisturbed” comet (hereafter called the “quiet” comet) as the median average of the comet images taken before and more than 90 hours after the impact. The standard deviation of the median average is within $2-6$% for most of the coma, i.e. for regions at nucleocentric distances between 2000 and 50000km. In regions closer to the nucleus, this standard deviation increases slightly because of the effect of different seeing in the various nights. It also increases at distances larger than 50000km due to the low coma signal. The subsequent scientific analysis of the calibrated images is based mainly on $\Sigma$Af profiles and Af$\rho$, both easily obtained by numerical integration of the flux in the comet images in concentric apertures centered on the nucleus. The physical meaning of the $\Sigma$Af profiles is described above. Following the original definition by @AHearn1984, Af$\rho$ is proportional to the average comet flux in the aperture multiplied by its equivalent cometocentric projected distance $\rho$. This function does not depend on $\rho$ for a constant outflow velocity.
Thus, when using filter images taken in the dust continuum bands, Af$\rho$ is a proxy of the dust production rate, $Q_\mathrm{dust}$, of the cometary nucleus. However, due to unknown dust properties such as the dust size distribution and the dust albedo A, it is not straightforward to quantify $Q_\mathrm{dust}$ from Af$\rho$ measurements of comets.

Results
=======

The “quiet” comet
-----------------

### Af$\rho$ and $\Sigma$Af as a measure of cometary dust production

The $\Sigma$Af and Af$\rho$ profiles of the “quiet” comet, derived from the observations of 9P in the six continuum bands [$U_\mathrm{c}$]{}, [$B_\mathrm{c}$]{}, [$R_\mathrm{c}$]{}, $J$, $H$, [$K_\mathrm{s}$]{} during the nights of July 2/3, 3/4, 7/8, 8/9 and 9/10, 2005, are plotted in Fig. \[figqc2\]. Various pieces of information on the comet dust production can be derived from these profiles. The horizontal profiles at distances beyond 10000km from the nucleus suggest a steady-state level in the dust production that resembles homogeneous and isotropic dust expansion in the coma at constant speed. Given the existence of jet and fan structures in the 9P coma, and since radiation pressure modifies the dust distribution in the coma, it is clear that these ideal conditions are not fulfilled. However, as long as the jets and fans are stable and do not change the dust production, the Af$\rho$ and $\Sigma$Af functions remain constant. The solar radiation pressure may introduce a linear dependence of these functions on $\rho$, but it becomes noticeable only at large scales. The Af$\rho$ values as a measure of the dust production of the “quiet” comet were determined at projected distances larger than 40000km, where the radial profiles in Fig. \[figqc2\] have reached constant values. Results are presented in Table \[tabAfrho\].
  Filter               Af$\rho$ (cm)
  -------------------- ---------------
  [$U_\mathrm{c}$]{}   $111 \pm 11$
  [$B_\mathrm{c}$]{}   $125 \pm 12$
  [$R_\mathrm{c}$]{}   $191 \pm 19$
  $J$                  $228 \pm 23$
  $H$                  $253 \pm 25$
  [$K_\mathrm{s}$]{}   $269 \pm 27$

  : \[tabAfrho\] Measured Af$\rho$ for the “quiet” comet (see text)

The error in the Af$\rho$ measurements is mainly due to the relative photometric calibration error, which is estimated to be of the order of 10%. The Af$\rho$ values given here are slightly higher than the value of 112 cm given by @Schleicher2005 for observations in the green wavelengths (445–526nm). They are also higher than the value of 102 cm, later revised to 99 cm, derived from Rosetta/Osiris observations using the NAC (Narrow Angle Camera) broad-band filters ([@Keller2005] and [@Keller2007]). However, as already pointed out by Schleicher et al., this may be due to the larger phase angle of the spacecraft observations (69°) compared to the measurements from the Earth (41°).

### Signatures of dust sublimation

It is evident from Fig. \[figqc2\] that the near-IR $\Sigma$Af profiles are not constant. They increase significantly for distances smaller than $\approx$ 15000km, also showing a small spike very close to the photometric nucleus. The spike is probably the signature of the nucleus convolved with the seeing. However, the SN ratio of this signature is too low to derive any useful information; instead, it can be evaluated through high spatial resolution imaging of the coma using adaptive optics systems, such as the data collected during the impact week with the NACO instrument at the VLT (not described here). The slow increase of $\Sigma$Af cannot be due to dynamical phenomena (for example, increased cometary activity), since the near-IR profiles derived from different observing nights look very much the same. In contrast, the $\Sigma$Af profiles in the visible do not show any evident increase at small nucleocentric distances.
Similar $\Sigma$Af profiles have been found in comet C/2000 WM$_1$ (LINEAR) (hereafter WM$_1$) [@Tozzi04]. At that time this phenomenon was interpreted as the result of the sublimation of two kinds of organic grains: one with a lifetime of $\approx$ 1.3 h and the other of $\approx$ 17 h. Following the analysis of the WM$_1$ data, the near-IR $\Sigma$Af profiles of 9P for $\rho >$ 1000 km were fitted by a function of the form $$\Sigma {\rm Af}(\rho) = \Sigma {\rm Af}_0+ \Sigma {\rm Af}_1 \times e^{-(\frac{\rho}{L_1})} \label{EQ1}$$ which contains a constant term $\Sigma {\rm Af}_0$ representing the non-sublimating (permanent) dust component, and a single decaying term $\Sigma {\rm Af}_1$ representing sublimating grains, characterized by the length-scale $L_1$. The fit achieved for 9P is very good and yields length-scales $L_1$ of similar value for all three near-IR bands, i.e. $6300 \pm 160$ km. With the length-scale fixed at 6300 km, the fitting procedure was then repeated for all profiles, including those derived from the visible data. The best-fit parameters $\Sigma {\rm Af}_0$ and $\Sigma {\rm Af}_1$, with their standard deviations, are listed in Table \[tab\_q1\]. Again, the fit gives very good results, with the exception of the [$U_\mathrm{c}$]{} and [$R_\mathrm{c}$]{} filters, where $\Sigma {\rm Af}_1$ has values close to zero.

  Band                 $\Sigma$Af$_0$ (cm)   $\Sigma$Af$_1$ (cm)
  -------------------- --------------------- ---------------------
  [$U_\mathrm{c}$]{}   $345.3 \pm 1.3$       $ 10.7 \pm 4.1$
  [$B_\mathrm{c}$]{}   $377.9 \pm 1.3$       $ 62.8 \pm 2.8$
  [$R_\mathrm{c}$]{}   $603.5 \pm 0.6$       $  2.5 \pm 2.8$
  $J$                  $682.0 \pm 0.6$       $251.0 \pm 1.9$
  $H$                  $758.4 \pm 0.9$       $313.8 \pm 3.1$
  [$K_\mathrm{s}$]{}   $792.9 \pm 1.6$       $392.4 \pm 6.3$

  : \[tab\_q1\] $\Sigma {\rm Af}_0$ and $\Sigma {\rm Af}_1$ best-fit results (see text)

Interesting trends appear when plotting the wavelength dependence of the decaying ($\Sigma$Af$_1$) and constant ($\Sigma$Af$_0$) terms of the fits (see Fig. \[fig\_color\_quiet\]).
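A fit of Eq. \[EQ1\] can be sketched with a standard least-squares routine; here synthetic data are generated from the [$K_\mathrm{s}$]{} best-fit values of Table \[tab\_q1\] and the parameters are recovered with `scipy.optimize.curve_fit` (the noise level and sampling are arbitrary choices for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def sigma_af(rho, af0, af1, L1):
    # Eq. (1): permanent term plus one exponentially decaying term
    return af0 + af1 * np.exp(-rho / L1)

# Synthetic Ks-like profile built from the Table tab_q1 best-fit values
rho = np.linspace(1000.0, 40000.0, 200)              # km
rng = np.random.default_rng(0)
data = sigma_af(rho, 792.9, 392.4, 6300.0) + rng.normal(0.0, 2.0, rho.size)

popt, pcov = curve_fit(sigma_af, rho, data, p0=(700.0, 300.0, 5000.0))
af0, af1, L1 = popt                                  # ~792.9, ~392.4, ~6300
```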
In the near-IR the constant term varies only by a factor of 1.16 going from $J$ to [$K_\mathrm{s}$]{}, while the decaying term changes by a factor of 1.55. This finding may indicate that the two solid components are of a very different nature: one (the permanent one) composed of refractory grains, and the other (the decaying one) made of sublimating grains (or dust covered by sublimating material) that scatter very efficiently in the near-IR but are inefficient scatterers in visible light. As for comet WM$_1$, taking into account that the length-scale for the density is about 25% longer than that for the column density, and assuming an outflow velocity of about 0.2 [$\mathrm{km\,s}^{-1}$]{}, the lifetime of the sublimating material is of the order of 40000 s (about 11 hours). Assuming that the sublimation is driven directly by the solar radiation, the length-scale and lifetime scale as the square of the heliocentric distance, $r_h$. The density length-scale and the lifetime at $r_h$ = 1 AU should then be 3500 km and 17800 s ($\approx 5$ h). That lifetime differs from those found for the volatile dust grains in WM$_1$, which, assuming the same outflow velocity of 0.2 [$\mathrm{km\,s}^{-1}$]{}, were 61000 s ($\approx 17$ h) and 4700 s ($\approx$ 1.3 h) at the heliocentric distance of that comet (1.2 AU). They scale to 42000 s ($\approx 12$ h) and 3300 s (54 min) at 1 AU. The grain sublimation may not depend directly on the solar irradiation, but may depend on it indirectly, through the grain temperature. In this case the lifetime and length-scale scale with $r_h$ in a more complicated way, depending on grain size and on the thermal degradation of the material [see, e.g., @Cottin2004]. The spectral scattering properties of the sublimating grains in WM$_1$ are different from those found here: both WM$_1$ components have a good scattering efficiency in $R$ and $I$, and the long-lifetime component has a scattering efficiency similar to that of the normal dust.
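The $r_h^2$ scaling above can be reproduced numerically. The heliocentric distance of 9P at the time of the observations is taken here as $r_h \approx 1.51$ AU (an assumption, close to the perihelion distance; it is not quoted in this section):

```python
# Radiation-driven sublimation: length-scale and lifetime scale as 1/r_h^2.
rh = 1.51                        # heliocentric distance, AU (assumed)
v = 0.2                          # outflow velocity, km/s
L_column = 6300.0                # fitted column-density length-scale, km
L_density = 1.25 * L_column      # density length-scale, ~25% longer
tau = L_density / v              # lifetime at r_h: ~40000 s (~11 h)
L_1au = L_density / rh**2        # ~3500 km at 1 AU
tau_1au = tau / rh**2            # ~17800 s (~5 h) at 1 AU
```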
The sublimating component in 9P scatters almost not at all in the visible. This means that, even if the phenomena may be similar in the two comets, the nature of the grains must be different. In order to investigate the origin of the sublimating dust component, we have tried to determine its 2D distribution in the coma by subtracting a radially symmetric artificial coma image from each near-IR filter image. The artificial coma image was computed using the parameters of the permanent dust component from the respective fit to the original image. Figure \[fig\_subl\_comp\] shows an example of the difference image in the [$K_\mathrm{s}$]{} filter. The non-uniform flux distribution in the images suggests that the sublimating dust is mostly confined to the coma sector defined approximately by PA = 100° to 200°. Given the indicated PA range, the appearance of sublimating dust seems to correlate with the nucleus surface regions of enhanced activity, since various jet and fan structures are detectable in the same coma region (see @Lara2006 [-@Lara2006] and @Boehnhardt2007 [-@Boehnhardt2007]). However, the jets and fans are also present in the difference images described above, since the subtraction of a circularly symmetric comet image does not cancel any asymmetries in the coma flux distribution. Thus, it is not possible to fully disentangle the jet and fan structures from the sublimating dust component, except that the former have much lower intensity levels than the fading grains.

The ejecta cloud
----------------

### Geometry of the ejecta cloud

For the study of the various effects produced by the DI event, the signal of normal activity was removed from the post-impact images of 9P by simple subtraction of the image of the “quiet” comet taken through the same filter.
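Both subtractions described in this section, the radially symmetric artificial coma and the “quiet”-comet reference image, amount to simple image differencing. A minimal numpy sketch of the radially symmetric case (synthetic $1/\rho$ profile and hypothetical image size, for illustration only):

```python
import numpy as np

def subtract_radial_model(image, center, profile):
    """Subtract a radially symmetric model, built from a 1-D radial
    profile function, from an observed image; return the residual."""
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - center[0], y - center[1])
    return image - profile(r)

# Sanity check: a perfectly symmetric synthetic coma leaves zero residual.
prof = lambda r: 1.0 / np.maximum(r, 1.0)     # 1/rho-like coma brightness
yy, xx = np.indices((65, 65))
coma = prof(np.hypot(xx - 32, yy - 32))
residual = subtract_radial_model(coma, (32, 32), prof)
assert np.allclose(residual, 0.0)
```

Applied to real frames, the residual keeps the azimuthally asymmetric structures (jets, fans, sublimating dust), which is why they cannot be fully disentangled by this method.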
This processing should remove the non-impact coma without introducing new unwanted features from day-to-day variability, since the normal activity of 9P displayed a rather steady-state appearance. The expanding cloud of solid particles is clearly noticeable in the visible and near-IR images until at least July 6-7, 2005 (three days after DI). Fig. \[fig\_C9P\_dif1\] shows the ejecta cloud in the $J$ band as seen 17:29 and 20:29 hours after the impact. It can be seen in the figure that the cloud initially expands into the coma sector between PA = 120° and 345°. The time evolution of the cloud expansion can be characterized using visible broadband imaging [@Boehnhardt2007].

### Ejecta dust production

Integrating the Af in the difference images over the position angle range of the initial ejecta cloud (PA = 120°–345°), we obtain the $\Sigma$Af profile of the cloud vs. $\rho$. The $\Sigma$Af profiles determined in the [$R_\mathrm{c}$]{}, $J$, $H$ and [$K_\mathrm{s}$]{} filters for the night just after the impact are shown in Fig. \[fig\_cloud\_rho\]. Note that the different extensions of the cloud profiles in the figure are due to the different observing epochs, during which the cloud was expanding in the field of view. By integrating these profiles over $\rho$, we obtain the total scattering cross section (SA) of the dust ejecta, SA = $\int{({\Sigma {\rm Af}})d \rho}$, i.e., the albedo at the phase angle of the comet multiplied by the total geometric grain cross section. SA provides useful information to evaluate the number and the intrinsic color of the particles produced by the impact and their evolution with time. For the first observations of the first night (July 4-5, 2005) after the impact, we measured SA to be 27.6, 27.3 and 34.6 km$^2$ in $J$, $H$ and [$K_\mathrm{s}$]{}, respectively.
Assuming that the scattering properties of the ejecta grains are the same as those of the refractory component in the pre-impact coma, it is possible to estimate the time interval TI necessary to produce the same amount of dust by the normal activity: TI = $\frac{SA}{v \Sigma {\rm Af}_0}$, where $v$ is the mean outflow velocity and $\Sigma$Af$_0$ are the values obtained for the normal activity. To estimate the order of magnitude, any possible differences in the scattering properties of the dust grains were ignored and $v = 0.2$ [$\mathrm{km\,s}^{-1}$]{} was assumed. TI is then about $5-6$ hours (depending on the filter). For the lower velocity $v = 0.1$ [$\mathrm{km\,s}^{-1}$]{}, the equivalent duration of normal dust production doubles. We conclude that the amount of dust produced by DI and detectable in the near-IR about 17-20 h after the impact is equal to the amount of dust produced during a few hours (at most half a day) of normal activity of the comet just before the impact. Results for the effective scattering cross section SA in km$^2$, estimated from the dust filter images available to us, are tabulated in Table \[tabintsaf\]. The relative error for different filters depends mainly on the respective flux calibration uncertainty, which is estimated to be of the order of 10%. The relative error for different measurements with the same filter does not depend on this calibration, because during the data reduction the $\Sigma$Af profiles of the ejecta cloud images were checked to match the reference profile of the “quiet” comet at nucleus distances beyond 30000 km. Thus, the relative uncertainty in $\Sigma$Af for different post-impact epochs depends only on the accuracy of the matching of the “quiet” comet profiles. As seen in Figs. \[fig\_C9P\_dif1\] and \[fig\_cloud\_rho\], a rather accurate match is achieved and, hence, the relative error of $\Sigma$Af is evaluated to be less than 5%.
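The TI estimate can be reproduced directly from the numbers quoted in the text (SA from the first post-impact night, $\Sigma$Af$_0$ from Table \[tab\_q1\], and the assumed $v = 0.2$ km s$^{-1}$):

```python
# TI = SA / (v * SigmaAf0), per filter
v = 0.2                                              # km/s (assumed)
sa_km2 = {"J": 27.6, "H": 27.3, "Ks": 34.6}          # measured SA, km^2
sigma_af0_km = {"J": 682.0e-5, "H": 758.4e-5, "Ks": 792.9e-5}  # cm -> km

ti_hours = {b: sa_km2[b] / (v * sigma_af0_km[b]) / 3600.0 for b in sa_km2}
# J: ~5.6 h, H: ~5.0 h, Ks: ~6.1 h, i.e. "about 5-6 hours"
```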
The values shown in the table are in agreement with the $33 \pm 3$ km$^2$ obtained in the visible range from Rosetta/Osiris observations about 40 minutes after the impact [@Kueppers2005].

  $t-t_0$ (hh:mm)   SA([$R_\mathrm{c}$]{})   SA($J$)   SA($H$)   SA([$K_\mathrm{s}$]{})
  ----------------- ------------------------ --------- --------- ------------------------
  17:29                                      27.6      27.3      34.6
  20:29                                      17.9                33.0
  22:32             27.3

  : \[tabintsaf\] Effective cross section of the ejecta grains in km$^2$ vs. time $t-t_0$ after impact for different dust filters. ‘Effective cross section’ is defined in the text.

### Ejecta velocities

Contrary to natural outbursts in comets, in the case of the DI event the exact starting time of the formation of the ejecta cloud is very well known. Hence, assuming that the major dust production by DI was short-lived (as suggested by the fly-by spacecraft imaging; see @AHearn2005 [-@AHearn2005]), the radial profiles of the ejecta cloud also reflect the expansion velocity distribution of the cloud particles. The solar radiation pressure applies an acceleration $\frac{dv}{dt}$ to the particles which is inversely proportional to the particle radius, $a$. It has been shown that some time between 1 and 2 days after impact the dust grains, as observed in the visible, reached the turning point of their sunward motion [@Boehnhardt2007] due to the solar radiation pressure. So, depending on the grain size, the radial profiles $\Sigma$Af($\rho$) of the ejecta cloud keep a memory of the initial projected velocities of the dust after ejection from the nucleus and of possible further acceleration in the near-nucleus zone by the ejecta gas and/or the normal gas release activity. Thus, dividing the nucleocentric distance by the elapsed time since the impact converts $\Sigma$Af($\rho$) into a distribution function $\Sigma$Af($v$), where $v$ is the mean velocity of the dust grains at the respective projected nucleocentric distance in the cloud. A typical mean velocity distribution is shown in Fig.
\[fig\_cloud\_v\]. As pointed out by @Kueppers2005 and @Jorda2007, the radial profile of the ejecta cloud can be well fitted by a Gaussian function. In Fig. \[fig\_cloud\_v\], the Gaussian function that provides the best fit to the velocity profile is overplotted. The agreement is very good for all the filters. For comparison, a Maxwellian function is also shown. Here the agreement with the data is good in the leading part of the function, but poor at small velocities. Because of the radiation pressure (see above), the velocity distribution is not the same in all directions; the mean velocity is lower in the Sun direction than in the other directions. To study this effect, the cloud images have been divided into three sectors, S1, S2 and S3, defined by the position angle ranges 145°–204°, 205°–264° and 265°–325°, respectively. The velocity profiles were then fitted with a Gaussian function. Results are shown in Table \[tabvel\], where the mean velocity $\bar{V}$ and its FWHM are given for the nights of July 4-5 and 5-6. The table does not indicate the fitting errors since they are very small, of the order of $\pm 2$ [$\mathrm{m\,s}^{-1}$]{} and $\pm 4$ [$\mathrm{m\,s}^{-1}$]{} for the first and second night after the impact, respectively.
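The mapping from the radial profile to the mean projected-velocity distribution used above is simply $v = \rho/(t-t_0)$. A short sketch for the 17:29 h epoch (the profile values are hypothetical, for illustration only):

```python
import numpy as np

def sigma_af_of_v(rho_km, sigma_af_cm, elapsed_s):
    """Map SigmaAf(rho) of the ejecta cloud into SigmaAf(v),
    with v = rho / (t - t0) in m/s."""
    v_ms = np.asarray(rho_km) * 1000.0 / elapsed_s
    return v_ms, np.asarray(sigma_af_cm)

elapsed = 17 * 3600 + 29 * 60                  # 17:29 h after impact, in s
v, prof = sigma_af_of_v([1000.0, 5000.0, 10000.0], [10.0, 30.0, 5.0], elapsed)
# 10000 km at 17:29 h corresponds to a mean projected velocity of ~159 m/s
```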
  Band                 $t-t_0$ (hh:mm)   $\bar{V}$ (S1)   FWHM   $\bar{V}$ (S2)   FWHM   $\bar{V}$ (S3)   FWHM
  -------------------- ----------------- ---------------- ------ ---------------- ------ ---------------- ------
  [$R_\mathrm{c}$]{}   23:32             138              95     103              61     87               50
  $J$                  17:29             120              69     100              58     98               51
  $J$                  20:29             123              57     101              48     100              40
  $J$                  45:32             134              48     98               44     93               42
  $H$                  17:16             117              69     98               56     98               49
  $H$                  45:73             133              58     102              53     76               41
  [$K_\mathrm{s}$]{}   17:03             113              71     95               63     91               50
  [$K_\mathrm{s}$]{}   20:13             143              87     84               52     76               41
  [$K_\mathrm{s}$]{}   45:55             122              62     115              54     97               41

  : \[tabvel\] Average projected velocities $\bar{V}$ and FWHM, in m s$^{-1}$, as determined in the three sectors S1, S2 and S3 of the dust ejecta cloud (see text) for different filters.

Table \[tabvel\] indicates the following: (1) $\bar{V}$, as measured in the near-IR, is independent of the filter; (2) $\bar{V}$ depends on the sector: it is lowest in S3, but shows very similar values for the three near-IR filters; (3) $\bar{V}$ in sector S3 slightly decreases with time, while in sector S1 it increases with time. This picture can be explained by the solar radiation pressure, since the projected Sun direction at PA $\simeq$ 290° falls almost exactly in the middle of sector S3. The average values of the mean velocities in the near-IR are $\bar{V}_{S1}$ = 123$\pm$12, $\bar{V}_{S2}$ = 96$\pm$7, $\bar{V}_{S3}$ = 93$\pm$10 [$\mathrm{m\,s}^{-1}$]{} for the night of July 4-5, and $\bar{V}_{S1}$ = 130$\pm$7, $\bar{V}_{S2}$ = 105$\pm$9, $\bar{V}_{S3} = 89 \pm 11$ [$\mathrm{m\,s}^{-1}$]{} for the subsequent night. Using simple physical considerations, the projected distance covered by a grain in sector S3 ($\simeq$ Sun direction) is $S_{S3}(t) = V_{ej} t-\frac{1}{2}\frac{dv}{dt} t^2$ and, in sector S1 ($\simeq$ 60° from the antisun direction), $S_{S1}(t) = V_{ej} t + \frac{1}{2}\frac{dv}{dt} \cos(60^{\circ})\, t^2$.
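Dividing the two distance formulas by $t$ gives the mean projected velocities, $\bar{V}_{S3} = V_{ej} - \frac{1}{2}\frac{dv}{dt}t$ and $\bar{V}_{S1} = V_{ej} + \frac{1}{4}\frac{dv}{dt}t$, so the radiation-pressure term cancels in the combination $(\bar{V}_{S3}+2\bar{V}_{S1})/3$. A quick numerical check, first of the cancellation with arbitrary numbers, then with the measured sector averages:

```python
def v_ej(v_s3, v_s1):
    # (V_S3 + 2*V_S1)/3: the radiation-pressure term a*t cancels exactly
    return (v_s3 + 2.0 * v_s1) / 3.0

# Algebraic check with arbitrary (hypothetical) numbers:
V, at = 120.0, 30.0
assert abs(v_ej(V - at / 2.0, V + at / 4.0) - V) < 1e-12

# Measured sector averages (m/s) reproduce the quoted ejection velocities:
night1 = v_ej(93.0, 123.0)    # 113 m/s (July 4-5)
night2 = v_ej(89.0, 130.0)    # ~116 m/s (July 5-6)
```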
Simple algebra then gives $V_{ej} = (\bar{V}_{S3}(t)+2 \bar{V}_{S1}(t))/3$. Using the average values given above, the results are $V_{ej} = 113\pm 16$ and $116\pm 16$ [$\mathrm{m\,s}^{-1}$]{} for the nights of July 4-5 and 5-6, respectively. The typical FWHM is of the order of 75 [$\mathrm{m\,s}^{-1}$]{} for both nights. In the visible, the ejecta velocity is estimated to be of the order of 120 [$\mathrm{m\,s}^{-1}$]{}. Since only a single observation of this kind is available to us, we adopt an uncertainty similar to that in the near-IR.

### Effective scattering cross section

A surprising finding is the rapid decrease of the effective $J$-band scattering cross section SA of the ejecta cloud, by 35% during the time interval from 17:29 to 20:29 hours after the impact, as seen in Table \[tabintsaf\]. This is also noticeable in Fig. \[fig\_C9P\_dif1\], which shows that the cloud at 20:29 hours post-impact is significantly fainter than three hours earlier (17:29 h post-impact) even though it has almost the same extension. Also, the change between the radial $\Sigma$Af profiles of the $J$ filter data 17:29 and 20:29 h after the impact, shown in Fig. \[fig\_cloud\_rho\], confirms the change in the scattering of the ejecta particles. This cannot be due to calibration problems, because SA is computed from the difference between the post-impact images and the “quiet” comet: calibration problems would give values different from zero in regions far from the cloud. However, this is not the case, as can be seen in Figs. \[fig\_C9P\_dif1\] and \[fig\_cloud\_rho\]. The cloud color 17:29 h after the impact is almost ‘gray’ from $J$ to $H$, but its scattering efficiency increases by 25% in [$K_\mathrm{s}$]{}. Three hours later (20:29 h post-impact), the scattering efficiency increases by 84% between $J$ and [$K_\mathrm{s}$]{} (observations in $H$ were not taken because of lack of time).
Thus, the reduction of the effective scattering cross section of the dust cloud in the near-IR between 17:29 and 20:29 h after the DI event was accompanied by a strong reddening of the dust. To study this, the scattering efficiency in the three near-IR filters was modeled as a function of particle radius, $a$. In the model, spherical particles were assumed, with a power-law size distribution of index –3.1 as derived by @Jorda2007 from data obtained 40 min after the impact. The refractive index was set to 1.65+i0.062, which represents a mixture of silicates and organics typical for comets [@Jessberger1988]. Results are shown in Fig. \[fig\_scattering\]: the left panel gives the scattering efficiency in the $J$ and [$K_\mathrm{s}$]{} bands as a function of the particle radius, while the right panel gives the cumulative scattering efficiency of the particles, normalized to 1 at large sizes (5 $\mu$m). From the figures it can be seen that particles with radii less than 0.1 $\mu$m give a negligible contribution to the scattering, and that 80% of the total scattering is reached by particles with $a \leq$ 1 and $\leq$ 1.5 $\mu$m in $J$ and [$K_\mathrm{s}$]{}, respectively. Note that the particles with $a \leq$ 0.92 $\mu$m are mainly responsible for the greatest difference between the cumulative scattering in $J$ and [$K_\mathrm{s}$]{}. Indeed, these particles provide 77% of the total scattering in $J$ but only 44% in [$K_\mathrm{s}$]{}. From these results an important conclusion can be drawn: if the destruction or sublimation of particles with $a \leq$ 0.92 $\mu$m is responsible for the above-mentioned decrease of SA in the $J$ band, a destruction of 50% of such particles should decrease SA by 35%. However, since these particles ($a \leq 0.92\,\mu$m) also contribute to the scattering in [$K_\mathrm{s}$]{} (44%), this destruction should also produce a 22% decrease of SA in the [$K_\mathrm{s}$]{} filter.
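The bookkeeping behind this argument can be made explicit using the quoted cumulative fractions (77% of the $J$ scattering and 44% of the [$K_\mathrm{s}$]{} scattering coming from particles with $a \leq 0.92\,\mu$m):

```python
# Expected SA drops if a fraction of the small (a <= 0.92 um) particles
# is destroyed, given their quoted share of the total scattering.
f_J, f_Ks = 0.77, 0.44
destroyed = 0.50                 # fraction of the small particles removed

dSA_J = destroyed * f_J          # ~0.39, i.e. close to the ~35% drop in J
dSA_Ks = destroyed * f_Ks        # 0.22: the drop expected in Ks
# The measured Ks drop is only ~5%, which is what rules out a simple
# power-law size distribution.
```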
Instead, the measured decrease in this filter is only 5%. A possible conclusion is that the observed particles do not obey a power-law size distribution. The results obtained by the DI spacecraft spectrometer indicate that the original DI size distribution was dominated by particles of a few microns in size (@AHearn2005 [-@AHearn2005] and supporting on-line material of @Lisse2006 [-@Lisse2006]). The difference between @Jorda2007 and this result may be due to a poor sensitivity of the Jorda et al. measurements to particles larger than 1 $\mu$m, as is expected for measurements in the visible, as pointed out by Jorda et al. Thus, particles of size $a \approx$ 1 $\mu$m could dominate the ejecta cloud 17:29 h after the impact and produce a significant contribution to the radiation measured in the $J$ filter. It is well known that small grains sublimate faster than large ones because they warm up more efficiently: the absorption of the solar radiation is proportional to their area, while the heating is inversely proportional to their volume. This results in the 1/$a$ law [see, e.g., @Lamy1974], i.e. the smaller the particles, the faster they sublimate. This means that 1 $\mu$m particles may sublimate faster and be eliminated from the size distribution more efficiently than larger particles. Thus, 20:29 h after the impact the maximum of the size distribution shifts to larger particles. This leaves the $J$ band without its most efficient contributors, whereas the situation in the [$K_\mathrm{s}$]{} band remains almost unchanged. This scenario, consistent with the in situ data, gives hope that careful simulations of the sublimation of Deep Impact ejecta particles may provide some information about the sublimation rate of the ejecta volatiles and, thus, may help to identify them. An alternative scenario would be a change in the particle composition which significantly modified the value of the refractive index.
However, this hypothesis cannot explain the observations. First, the most dramatic changes would be expected shortly after the impact, when the most volatile components of the dust, e.g. ice, sublimate. For the less volatile components the change should be slow, and it is hard to imagine how such a dramatic change could happen between 17:29 and 20:29 hours after the impact. Second, we are not aware of any material that may be expected in comets with such a significant difference in its optical properties between the $J$ and [$K_\mathrm{s}$]{} filters.

### Color gradient

Due to the different orientation of the grain velocity vector with respect to the radiation pressure force, radiation pressure has different effects on the dust grains in sectors S1, S2, and S3 of the ejecta cloud. This results in a different sorting of dust particles by size in different sectors, which may appear as a difference in the dust colors. From the $\Sigma$Af($\rho$) profiles the color distribution of the cloud is computed, and the results are averaged over the PA range of the three sectors. Since only the near-IR images had sufficient signal-to-noise ratio within the nucleocentric distance range of 2000–9000 km, and only $J$ and [$K_\mathrm{s}$]{} observations were performed during the first night after the impact, only the $J-{\ensuremath{K_\mathrm{s}}}$ color (C) is computed. The results, in percent per 100 nm and per 1000 km, for the night of July 4-5, 2005 show that the ejecta cloud becomes ‘bluer’ with increasing distance from the nucleus.
From the images $\simeq$ 17:29 h after the impact we have found the following (with $\rho$ in units of 1000 km):

$C_{S1} = (4.1\pm0.1) - (0.31\pm0.02)\,\rho~~\%\,(100\,{\rm nm})^{-1}$

$C_{S2} = (3.8\pm0.1) - (0.36\pm0.02)\,\rho~~\%\,(100\,{\rm nm})^{-1}$

$C_{S3} = (5.0\pm0.1) - (0.51\pm0.02)\,\rho~~\%\,(100\,{\rm nm})^{-1}$

Thus, the grains closest to the nucleus scattered about 40% (4% $\times$ the difference of the central wavelengths of [$K_\mathrm{s}$]{} and $J$, in units of 100 nm) more efficiently in [$K_\mathrm{s}$]{} than in $J$. In sector S3, however, at about 10000 km from the nucleus, they scatter with almost the same efficiency in the two bands. The strong difference in the gradients between the three sectors can be explained by the effect of the solar radiation pressure mentioned above. Three hours later (i.e. 20:29 h after the impact) we find:

$C_{S1} = (7.5\pm0.3) - (0.28\pm0.06)\,\rho~~\%\,(100\,{\rm nm})^{-1}$

$C_{S2} = (10.7\pm0.3) - (1.06\pm0.02)\,\rho~~\%\,(100\,{\rm nm})^{-1}$

$C_{S3} = (15.5\pm0.3) - (1.76\pm0.02)\,\rho~~\%\,(100\,{\rm nm})^{-1}$

As expected from the decrease of the effective scattering cross section in the $J$ filter, a significant increase of the dust reddening took place close to the nucleus. The scattering efficiency of the grains close to the nucleus became a factor of 2-3 higher in [$K_\mathrm{s}$]{} than in $J$. We also notice a strong change in the spatial gradients of the dust reddening: while the gradient in S1 changes only by about 25%, in the two other sectors the change is much more evident, e.g. by a factor of 3 for S3. Moreover, the increase is non-linear in the inner part of the cloud, as can be seen in Fig. \[fig\_color\_rho\]. These facts cannot be explained by the solar radiation pressure alone. Also, from our estimate of the particle speeds, the three hours between our observations cannot bring a significant number of new particles into the considered region: the particles can travel only $\approx 20$ km farther from the nucleus.
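Reading the intercepts as percent per 100 nm and adopting central wavelengths of roughly 1.25 $\mu$m ($J$) and 2.16 $\mu$m ([$K_\mathrm{s}$]{}) (both values are assumptions, not stated explicitly above), the near-nucleus colors translate into the quoted total reddening over the $J-{\ensuremath{K_\mathrm{s}}}$ baseline:

```python
# Total J-Ks reddening near the nucleus, 17:29 h after impact.
dlam_nm = 2160.0 - 1250.0                  # ~910 nm baseline (assumed)
c0 = {"S1": 4.1, "S2": 3.8, "S3": 5.0}     # color intercepts, % per 100 nm

reddening = {s: c0[s] * dlam_nm / 100.0 for s in c0}   # percent
# S1: ~37%, i.e. "about 40%" more efficient scattering in Ks than in J
```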
Thus, 17:29 hours after the impact and 3 hours later we observed particles of the same origin. However, their properties and/or size distribution had changed dramatically. This is confirmed by the change in the scattering cross sections described above. The changes in color likely also resulted from the sublimation of particles with some specific, non-power-law size distribution. The dominance of particles of different sizes is what determines the color and its gradient. The difference in the color gradients may indicate the efficiency of sublimation, whose rate increases as the particle size decreases (see Section 4.2.4).

Discussion
==========

An important result of these observations is the discovery of a sublimating component in the coma of the so-called “quiet” comet. This kind of fading grain had been found in comet C/2000 WM$_1$, where two components, one with a short and the other with a long lifetime, were detected when the comet was at 1.2 AU from the Sun. These components were interpreted as organic grains, or refractory grains embedded in organic matter, that sublimated while heated by the solar radiation. The sublimating component discovered in comet 9P seems to be different from both sublimating components found in comet C/2000 WM$_1$. It scatters very efficiently in the near-IR, but hardly at all in the visible. Scaled to 1 AU, its length-scale is of the order of 3500 km and, assuming an outflow velocity of 0.2 [$\mathrm{km\,s}^{-1}$]{}, the lifetime is 18000 s (5 h). Note that @Cottin2004 suggested that refractory organic grains, namely polyoxymethylene (POM), may be responsible for the distributed source of formaldehyde observed in several comets. They computed the length-scale and lifetime of POM grains assuming photolysis by solar radiation and thermal sublimation. For a grain temperature of 350 K, they computed a POM length-scale of the same order of magnitude as that measured in 9P, depending very little on grain size.
It varied from 3300 to 7100 km for sizes going from 0.1 to 10 $\mu$m. The following results are obtained for the ejecta cloud produced by the impact:

- The total amount of dust, multiplied by the albedo, covers a surface of about 30 km$^{2}$ about 17 h after the impact, but it drops dramatically in the $J$ band 3 h later;

- The velocity distribution of the solid component is Gaussian, with an average ejection velocity of 115$\pm$16 [$\mathrm{m\,s}^{-1}$]{} and a FWHM of the order of 75 [$\mathrm{m\,s}^{-1}$]{};

- The velocities in the projected direction of the Sun are smaller than those in other directions and have similar values in the near-IR bands. Those in the visible are larger than the near-IR ones;

- From the observations 17:29 h after the impact, the near-IR color of the grains close to the nucleus was found to be very red, with a strong gradient with the nucleocentric distance and the highest values in the Sun direction. Three hours later, the near-IR color had become even redder, and the gradient with $\rho$ in the Sun direction had increased by a factor $\simeq 3$.

The Gaussian velocity distribution of the particles is puzzling. In the case of gas drag produced by an explosive event with a timescale shorter than the acceleration timescale, the velocity of a particle with radius $a$ should vary as $a^{-1}$. This, combined with a power-law size distribution of index –3.1 [@Jorda2007], would produce a velocity distribution very different from a Gaussian one. If the lifetime of the explosive event was longer than the timescale of the acceleration, the dependence of the ejection velocity on particle size would follow a more complex, but still size-dependent, law [see, e.g., @Gombosi1986]. In the case of a natural outburst the velocity distribution has been found to follow a Maxwellian one [see, e.g., @Schulz2000].
On the other hand, a thermal acceleration of the dust, which would give a Maxwellian distribution, is excluded because it would require an excessively high temperature to accelerate a grain with a mass of the order of 10$^{-16}$ g to a velocity of a hundred [$\mathrm{m\,s}^{-1}$]{}. Models of ejecta with grains with a power-law size distribution, accelerated by gas drag, give a velocity distribution far from a Gaussian one. Only in the case of grains of almost the same size would the velocity distribution follow that of the gas, and it could then also become a Gaussian one. The average projected velocity found here is in good agreement with the value of 115 [$\mathrm{m\,s}^{-1}$]{} obtained by @Feldman2007 from HST observations in the visible just after the impact. It is smaller than the values reported by @Meech2005 (200$\pm$20 [$\mathrm{m\,s}^{-1}$]{}) and [@Schleicher2005] (220 [$\mathrm{m\,s}^{-1}$]{}), who refer to the velocity of the leading part of the cloud. For the latter we find 250-300 [$\mathrm{m\,s}^{-1}$]{}, in fairly good agreement with their values. However, our result for the Gaussian velocity distribution of the expanding ejecta cloud differs from that of @Jorda2007 (190 [$\mathrm{m\,s}^{-1}$]{} with FWHM = 150 [$\mathrm{m\,s}^{-1}$]{}), derived from Osiris observations on board Rosetta. This difference cannot be explained by the different viewing geometry of the Rosetta spacecraft with respect to an observer on Earth (the difference in aspect angle is just 20°). The Osiris and our ESO measurements, obtained at similar wavelengths, are in good agreement for the total light scattering area of the dust ejecta (33 km$^2$ from Osiris and 27.3 km$^2$ from our [$R_\mathrm{c}$]{} observations). An explanation in terms of different velocities of small, faster grains resulting from dust sublimation a few hours after the impact is thus very unlikely. The dramatic change in the total amount of dust observed in the $J$ band is another very puzzling result.
It is associated with a strong change in the color of the cloud and of its gradient with $\rho$. A possible scenario would be the sublimation of dust grains containing slowly sublimating volatiles such as organics. Water ice is excluded because its sublimation timescale is much shorter than 17 h [@Hanner1981]. If particles of a few micron size dominated the original size distribution (as was found from the [*in-situ*]{} data), then sublimation of such particles would result in the elimination of the most abundant, 1 micron size particles, which are the most efficient scatterers in the $J$ band. This would manifest itself in a significant decrease of the brightness in the $J$ band as well as in a change of the dust color. This means that the scattering area SA of the grains just after the impact must have been much larger than that measured at 17:29 h post-impact, i.e., the quantity of solid material released by the impact may have been an order of magnitude larger than the 5-10 h of normal activity derived in Section 4.2 without considering sublimation effects. However, this is not confirmed by the results of @Kueppers2005, who found a light scattering cross section of the cloud in agreement with that found here (see above). It is not possible to obtain more information from the results presented here, but about 17 hours after the impact the size distribution of the grains seems not to follow a power law (if it ever did), but rather seems “monochromatic”. There is no obvious correlation between the organic grains found in the ‘quiet’ comet and those supposed to be present in the ejecta cloud.

Conclusions
===========

From observations of gas emission-free regions of comet 9P/Tempel 1 made before and after the Deep Impact event, the scattering characteristics and the velocity of the ejecta cloud produced by the impact have been measured.
Seventeen and a half hours after the impact, the total area covered by the grains of the ejecta cloud, multiplied by their albedo, was $27-35$ km$^2$ in $JH{\ensuremath{K_\mathrm{s}}}$. Three hours later, it dropped to $\approx 18$ km$^2$ in $J$, but remained almost constant in [$K_\mathrm{s}$]{}. During this interval of time, the $J-{\ensuremath{K_\mathrm{s}}}$ color gradient with the nucleocentric distance $\rho$ also changed significantly, increasing by a factor of three in the direction of the Sun. The projected average velocity of the ejecta cloud measured in the near-IR was $115 \pm 16$ [$\mathrm{m\,s}^{-1}$]{}, and was found to be independent of the filter used for the observations and of the position angle. Its distribution was very similar to a Gaussian. It has been shown that these results can be explained neither by assuming that the grain sizes follow a power-law distribution nor by assuming that the grains are ejected by gas drag. While the mechanism of grain ejection is difficult to explain, the behavior afterwards can be justified only by the presence of $\approx 1$ $\mu$m size organic grains that are sublimated by the solar radiation. From the pre-impact and late post-impact observations, the presence of a sublimating component has been detected and interpreted in terms of organic grains that sublimate because of the solar radiation.

A’Hearn, M.F., Schleicher, D.G., Feldman, P.D., Millis, R.L., & Thompson, D.T. 1984, AJ, 89, 579
A’Hearn, M.F., Millis, R.L., & Schleicher, D.G. 1995, Icarus, 118, 223
A’Hearn, M.F., Belton, M.J.S., Delamere, W.A., 2005, Science, 310, 258
Boehnhardt, H., Pompei, E., Tozzi, G.P., 2007, A&A, submitted
Cottin, H., Bénilan, Y., Gazeau, M.-C., & Raulin, F. 2004, Icarus, 167, 397
Feldman, P.D., McCandliss, S.R., Route, M. 2007, Icarus, 187, 113
Gombosi, T.I., Nagy, A.F., & Cravens, T.E. 1986, Rev. of Geophysics, 24, 667
Hamuy, M., Suntzeff, N.B., Heathcote, S.R., 1994, PASP, 106, 566
Hanner, M.S. 1981, Icarus, 47, 342
Harker, D.E., Woodward, C.E., & Wooden, D.H. 2005, Science, 310, 278
Jessberger, E.K., Christoforidis, A., & Kissel, J. 1988, Nature, 332, 691
Jorda, L., Lamy, P., Faury, G., 2007, Icarus, 187, 208
Kaeufl, H.U., Ageorges, N., Bagnulo, S., 2005a, Messenger, 121, 11
Kaeufl, H.U., Bonev, T., Boehnhardt, H., 2005b, EMP, 97, 331
Keller, H.U., Jorda, L., & Küppers, M. 2005, Science, 310, 281
Keller, H.U., Küppers, M., Fornasier, S., Jorda, L., 2007, Icarus, 187, 87
Küppers, M., Bertini, I., Fornasier, S., 2005, Nature, 437, 987
Lamy, P. 1974, A&A, 35, 197
Lara, L., Boehnhardt, H., Gredel, R., 2006, A&A, 445, 1151
Lisse, C.M., VanCleve, J., Adams, A.C., 2006, Science, 313, 635
Meech, K.J., Ageorges, N., A’Hearn, M.F., 2005, Science, 310, 265
Persson, S.E., Murphy, D.C., Krzeminski, W., Roth, M., & Rieke, M.J. 1998, AJ, 116, 2475
Schleicher, D.G., Barnes, K.L., & Baugh, N.F. 2005, AJ, 131, 1130
Schulz, R., Stüwe, J.A., Tozzi, G.P., & Owens, A. 2000, A&A, 361, 359
Sugita, S., Ootsubo, T., Kadono, T., 2005, Science, 310, 274
Sunshine, J.M., A’Hearn, M.F., Groussin, O., 2006, Science, 311, 1453
Tozzi, G.P., & Licandro, J. 2002, Icarus, 157, 187
Tozzi, G.P., Lara, L.M., Kolokolova, L., 2004, A&A, 424, 325

[^1]: Based on observations performed at the ESO La Silla and Paranal Observatories in Chile (program ID 075.C-0583)
---
abstract: 'Spiral wave propagation in period-2 excitable media is often accompanied by line-defects, the locus of points with period-1 oscillations. Here we investigate spiral line-defects in cardiac tissue, where period-2 behavior has a known arrhythmogenic role. We find that the number of line-defects, which is constrained to be an odd integer, is three for a freely rotating spiral, with and without meander, but one for a spiral anchored around a fixed heterogeneity. We interpret this finding analytically using a simple theory in which spiral wave unstable modes with different numbers of line-defects correspond to quantized solutions of a Helmholtz equation. Furthermore, the slow inward rotation of spiral line-defects is described in different regimes.'
author:
- 'Juan G. Restrepo'
- Alain Karma
title: 'Line-Defect Patterns of Unstable Spiral Waves in Cardiac Tissue'
---

Spiral waves are observed in extremely diverse physical and biological excitable media and are known to play a key role in the genesis of abnormally rapid life-threatening heart rhythm disorders [@weiss]. Despite considerable progress to date, complex spatiotemporal behaviors resulting from unstable spiral wave propagation remain poorly understood theoretically, with the exception of meander [@barkley; @hakim], a classic spiral core instability with flower-like tip trajectories. A particularly rich dynamics results from instabilities in period-2 media, where the local dynamics of the medium, i.e. the dynamics of uncoupled excitable elements, exhibits a period-doubling bifurcation as a function of parameters of the medium or the external stimulation frequency. Although period-2 behavior has been seen in different excitable and oscillatory media, it has received particular attention in a cardiac context.
The hallmark of period-2 behavior in this context is alternans, a beat-to-beat alternation in the duration of cardiac excitation, which has been linked to the onset of lethal heart rhythm disorders [@karmapt]. Unstable spiral wave propagation in period-2 media is invariably accompanied by “line-defects”, which are the locus of points where the dynamics is locally period-1. Line-defects are generally present in these media when plane waves radiating out of the core region are unstable at the spiral rotation period, independently of whether meander is present or not. Studies in [*in vitro*]{} cardiac cell cultures [@leepnas; @kimpnas], chemical reactions [@parklee; @parklee2; @marts], and coupled oscillators [@goryachev; @wu] have revealed the existence of a rich variety of patterns, ranging from one- and three-line-defect structures [@parklee2], to phase bubbles [@parkleeprl2008], to line-defect turbulence [@parkleeprl1999]. Spiral wave breakup in models of cardiac excitation has also been found in parameter regimes of local period-2 dynamics, and has been hypothesized in this context as a potential mechanism for heart fibrillation [@weiss; @karmaprl; @karmachaos; @fenton]. Spiral line-defect patterns, however, have not been systematically investigated in cardiac tissue. In this Letter, we investigate the selection and dynamics of line-defect patterns resulting from unstable spiral wave propagation in cardiac tissue. Moreover, we interpret our findings using an amplitude equation framework recently used to study the evolution of line-defects during periodic stimulation from a single site [@blas]. In this framework, the spatiotemporal modulation of the phase and amplitude of period-2 oscillations is described by a simple partial differential equation that can be readily analyzed.
Our study is based on the standard wave equation for cardiac tissue $$\partial_t V = \gamma \nabla^2 V - I_m(V,\vec y)/C_m, \label{rd1}$$ where $V$ is the transmembrane voltage, $\gamma$ is the voltage diffusion coefficient, $C_m$ is the membrane capacitance, and $\vec y$ is a vector of gate variables that controls the flow of ions through the membrane, and hence the total membrane ionic current $I_m$. We studied different models of $I_m(V,\vec y)$ and gating kinetics to explore universal features of line-defect patterns that depend on qualitative properties of core and plane wave instabilities. The latter are manifested either as [*stationary*]{} [@blas; @watanabe] or [*traveling*]{} [@blas] spatial modulations of period-2 oscillation amplitude, with an intrinsic spatial scale determined by parameters of the excitable medium [@blas]. These spatial modulations have nodes with period-1 dynamics in one dimension, or nodal lines in two, which correspond here to line-defects in the spiral far-field. We therefore chose models to explore line-defect patterns for stationary and traveling nodes, with and without meander. The model of Ref. [@karmachaos] has pinwheel spirals (no meander) and stationary nodes under periodic pacing. The other two models, of Ref. [@blas3v] and Ref. [@blas], both exhibit meander and have stationary and traveling nodes, respectively. Freely propagating spiral waves in all three models were studied by numerically solving Eq. (\[rd1\]) in a circular domain of radius $r_e = 3$ cm with a no-flux boundary condition, $\partial_r V|_{r=r_e}=0$. Anchored spirals were studied by introducing an inexcitable disk of radius $r_i$ and imposing no-flux conditions on both the inner and outer radii, $\partial_r V|_{r=r_i}=\partial_r V|_{r=r_e}=0$. We implemented the phase-field method of Ref.
[@fentonetalchaos] that automatically handles no-flux boundary conditions in an arbitrary geometry using a finite-difference representation of the Laplacian on a square grid, and iterated Eq. (\[rd1\]) using a simple explicit Euler scheme. Model parameters are identical to the published ones except those listed in Fig. 1. The latter were chosen for intermediate action potential duration restitution slopes, which suffice to produce unstable spiral waves with line-defects in each geometry, but are not steep enough to cause wave breakup in this domain size. We used a half plane wave as the initial condition to initiate a spiral wave (obtained by first triggering a full plane wave and resetting part of the circular domain to the resting state). To track line-defects, we define at each point ${\bf x}$ and time $t$ a local beat number $n({\bf x},t)$, set everywhere initially to zero after the half plane wave is created, and increased by one at the end of each action potential, i.e. every time the voltage $V({\bf x},t)$ crosses a fixed threshold $V_c$ with $dV/dt<0$. We then define the period-2 alternans amplitude as $$a({\bf x} ,t) =(-1)^{n_c(t)} \left[D({\bf x},n_c(t)) - D({\bf x},n_c(t)-1)\right]/2 \label{adef}$$ where $D({\bf x},n)=\int_{V({\bf x},t')>V_c,n({\bf x},t') = n-1}dt'$ is the local action potential duration (APD) and $n_c(t) \equiv \min_{{\bf x}}n({\bf x},t)$ is the common beat, i.e. the largest beat number that has been registered at all points at time $t$. The line-defects are then the locus of points where $a({\bf x} ,t)=0$ at any instant of time. The use of a common beat number introduces here a discontinuity in $a$ (indicated by dashed lines in Fig. \[spiral\]) since the APD of a given beat might change as the wave front rotates around the spiral tip. This discontinuity, however, does not affect the dynamics. Other methods to track line-defects [@parklee; @zhan] yield similar results except for inessential imaging differences.
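The beat-number bookkeeping at a single spatial point can be sketched as follows (a minimal Python sketch for a sampled voltage trace; the function names are ours, not from the paper):

```python
def apds(v, t, vc):
    """Return the action potential durations (time V stays above vc),
    one per beat, from a sampled voltage trace v(t) at one point.
    The beat number is incremented at each downward crossing of vc."""
    durations, above, start = [], False, None
    for vi, ti in zip(v, t):
        if not above and vi > vc:       # upstroke: beat begins
            above, start = True, ti
        elif above and vi <= vc:        # downstroke: beat ends, APD recorded
            above = False
            durations.append(ti - start)
    return durations

def alternans_amplitude(durations):
    """a_n = (-1)^n (D_n - D_{n-1}) / 2 for n = 1, 2, ...; the (-1)^n
    factor makes the amplitude keep a constant sign during sustained
    alternans instead of flipping every beat."""
    return [(-1) ** n * (durations[n] - durations[n - 1]) / 2
            for n in range(1, len(durations))]
```

For an alternating long-short-long-short APD sequence, the returned amplitudes all share one sign, which is what makes the nodal set (the line-defects, where the amplitude vanishes) well defined.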
Results of simulations that pertain to the selection of the number of line-defects are shown in Fig. \[spiral\]. The top four panels reveal that the pinwheel spirals simulated with the two-variable model of Ref. [@karmachaos] exhibit three line-defects when propagating freely in spatially homogeneous tissue, but only one line-defect when anchored around an inexcitable disk of $0.5$ cm radius. Furthermore, the bottom two panels show that for the more physiologically realistic three-variable model of Ref. [@blas3v], freely propagating spirals still exhibit three line-defects even though the spiral tip meanders. Since anchored spirals become free in the limit of vanishing obstacle size, one would expect transitions from one to three (three to one) line-defects to occur with decreasing (increasing) obstacle size. Indeed, for the model of Ref. [@karmachaos], we found three line-defects for obstacles with diameter smaller than $\sim0.1$ cm, including the freely propagating pinwheel spiral ($r_i=0$) in Fig. \[spiral\] (b), and one line-defect for diameters larger than $\sim0.3$ cm, as in the example of Fig. \[spiral\] (d). For intermediate diameters, we found complex behaviors marked by transitions from three to one or one to three line-defects. The former occur when two line-defects merge into one line-defect that moves away from the core, and the latter when a phase bubble enclosed by a line-defect loop nucleates in, and expands from, the core, as illustrated in Fig. \[schematic\]. We find the same qualitative behavior in the model of Ref. [@blas3v], except that meander makes the transitions between patterns with different numbers of line-defects more complex. Let us now interpret our results in the amplitude equation framework [@blas]. For simplicity, we restrict our analysis to non-meandering spiral waves.
Furthermore, to keep the analysis tractable, we first assume that the propagation wave speed is constant and relax this assumption subsequently when examining the motion of line-defects. With this assumption, linear perturbations of a steady-state rigidly rotating spiral wave with period $T$ obey the equation $$T \partial_t a= \sigma a + \xi^2 \nabla^2a, \label{linear}$$ where $a$ is the alternans amplitude subject to the radial $\partial_r a|_{r_i}=\partial_r a|_{r_e}=0$ and angular $a(\theta+2\pi,t) = -a(\theta,t)$ boundary conditions. The latter constrains the number of line-defects to be an odd integer and results from the change in beat number across any closed circuit enclosing the spiral tip for steady-state alternans. It follows directly from the definition of $a$ \[Eq. (\[adef\])\] and the requirement that the voltage be continuous everywhere in space. In addition, $\sigma=\ln f'$, where $f'$, the slope of the action potential duration restitution curve defined by $D^{n+1} = f(T-D^n)$, controls the onset of alternans, and $\xi\sim (\gamma D)^{1/2}$, where $D$ is the value of the action potential duration at the period-doubling bifurcation ($\sigma=0$), measures the scale over which the voltage dynamics is diffusively coupled on the time scale of one beat. This linear stability problem is easily solved by the substitution $a({\bf r},t) \sim e^{\Omega t} \Psi(r,\theta)$ that transforms Eq. (\[linear\]) into a Helmholtz equation for $\Psi(r,\theta)$. The latter can then be solved by separation of variables with the substitution $\Psi(r,\theta)\sim R(r)\Theta(\theta)$. The angular part is found to be $\Theta_n(\theta) = \sin\left((n+1/2) \theta\right)$, where mode $n$ corresponds to $2n+1$ line-defects. The radial part obeys a Bessel equation.
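The half-integer-order Bessel functions that solve this radial equation have elementary closed forms, $J_{1/2}(x)=\sqrt{2/\pi x}\,\sin x$ and $J_{3/2}(x)=\sqrt{2/\pi x}\,(\sin x/x-\cos x)$, so the mode wavenumbers (zeros of the radial derivative, from the no-flux condition) and the growth rate implied by Eq. (\[linear\]), $\Omega T=\sigma-\xi^2 k^2$ for a mode $\propto e^{\Omega t}$ with wavenumber $k$, can be sketched numerically (a Python sketch with our own function names, not code from the paper):

```python
import math

def j_half(x):      # J_{1/2}(x) = sqrt(2/(pi x)) sin x
    return math.sqrt(2.0 / (math.pi * x)) * math.sin(x)

def j_3half(x):     # J_{3/2}(x) = sqrt(2/(pi x)) (sin x / x - cos x)
    return math.sqrt(2.0 / (math.pi * x)) * (math.sin(x) / x - math.cos(x))

def deriv(f, x, h=1e-6):
    """Central-difference derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

def first_deriv_zero(f, lo, hi, steps=10000):
    """First zero of f' in (lo, hi): scan for a sign change of f',
    then refine by bisection."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    for x0, x1 in zip(xs, xs[1:]):
        if deriv(f, x0) * deriv(f, x1) < 0:
            for _ in range(60):
                xm = 0.5 * (x0 + x1)
                if deriv(f, x0) * deriv(f, xm) <= 0:
                    x1 = xm
                else:
                    x0 = xm
            return 0.5 * (x0 + x1)
    return None

def growth_rate(sigma, xi, k):
    """Omega*T = sigma - xi^2 k^2 from the linearized alternans equation."""
    return sigma - xi**2 * k**2
```

For a free spiral, `first_deriv_zero(j_3half, 0.5, 4.0) / r_e` gives the wavenumber of the lowest regular (three-line-defect) mode; the sketch also exposes the divergent slope of $J_{1/2}$ at the origin, which is what disqualifies the single-line-defect mode in the free case.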
For $r_i>0$, it has solutions $R_{n,m}(r) \propto J'_{-n-1/2}(k_{n,m} r_e)J_{n+1/2}(k_{n,m} r) - J'_{n+1/2}(k_{n,m} r_e)J_{-n-1/2}(k_{n,m} r)$ that satisfy the outer radial boundary condition $\partial_r a|_{r_e}=0$, where $n,m = 0,1,\dots$, and the inner condition $\partial_r a|_{r_i}=0$ determines $k_{n,m}$, and hence the growth rate $\Omega_{n,m} T = \sigma - \xi^2 k^{2}_{n,m}$. We find that the smallest $k_{n,m}$ occurs for $n=0$ independently of the ratio $r_e/r_i$. Therefore, the mode corresponding to a single line-defect is the most unstable when the spiral is anchored. This agrees with our numerical observations in Fig. \[spiral\] (d). For freely rotating spirals, $r_i = 0$, $J_{-n-1/2}(r)$ diverges at the origin, so the solutions are $R_{n,m}(r) \propto J_{n+1/2}(k_{n,m} r)$, where $k_{n,m} r_e$ is the $m$th zero of $J'_{n+1/2}$. The most unstable modes are $n=0$ and $n=1$, corresponding to $1$ and $3$ line-defects, respectively (see Fig. \[tentative\]). However, $J_{1/2}(k_{0,0} r)$ has a divergent derivative at the origin that is incompatible with the physical requirement that the voltage, and hence the APD, must vary smoothly on a scale $\xi$. On the other hand, $J_{3/2}(k_{1,0} r)$ smoothly vanishes at the origin. Therefore, in this case, the boundary condition at the origin selects a 3-line-defect pattern, as observed in Fig. \[spiral\] (b). Interestingly, a 3-line-defect pattern is also selected with meander present \[Fig. \[spiral\] (f)\], thereby suggesting that the boundary condition on $a$ on the outer scale of the line-defect pattern is not strongly affected by meander. The analysis also predicts qualitative features of the radial distribution of alternans amplitude for the three- and one-line-defect patterns of Figs. \[spiral\](b) and \[spiral\](d), respectively. Fig.
\[raial\] compares the numerical radial distributions of the root-mean-square amplitude $\langle a \rangle_{rms}$, averaged over a full line-defect rotation period, for three line-defects (thin solid line) and one line-defect (thick line), with the corresponding radial modes from the theoretical analysis (dashed lines) scaled to have the same radial average as the observed curves. The theory correctly predicts that the alternans amplitude is more strongly suppressed near the core for the larger number of line-defects. So far our analysis has assumed that the wave speed is constant, which predicts that line-defects extend straight out of the core and are stationary, as implied by the angular distribution $\sin\left((n+1/2) \theta\right)$ of linearly unstable modes (see Fig. \[tentative\]). In contrast, the simulations in Fig. \[spiral\] show that line-defects have a spiral shape and slowly rotate inward, in the direction opposite to the spiral wavefront. Line-defect motion can generally be induced both by line-defect curvature and by the dependence of the wave speed $c$ on the interval $I$ between two waves, known as the conduction velocity (CV) restitution curve in the cardiac literature. While a full stability analysis that includes these effects would be required to treat line-defect motion in general, two important limiting cases can be readily analyzed. The first pertains to anchored spiral waves for medium parameters where plane waves paced at the spiral rotation period exhibit stationary line-defects, as for the model of Ref. [@karmachaos] studied here. In this case, we expect line-defect motion to be generated predominantly by the spiral wavefront dynamics around the anchoring obstacle. Neglecting wavefront curvature effects, this dynamics should be approximately described by that of a propagating pulse in a one-dimensional ring of perimeter $L=2\pi r_i$ [@blas; @court].
To test this hypothesis, we computed the quasiperiodic frequency $\Omega$ of the local medium dynamics induced by line-defect rotation for anchored spirals in the model of Ref. [@karmachaos]. The frequency was obtained by fitting the time series $a({\bf r},jT)/a({\bf r},0)$ at a single point ${\bf r}$ to $\eta^j\cos(\Omega T j+\delta)$, with $\eta$, $\Omega$, and $\delta$ the fitting parameters. For the theory, we used the dispersion relation giving the quasiperiodic frequency $\Omega$ modulating alternans, $a \propto e^{i\Omega jT}$, in a one-dimensional ring derived in Ref. [@blas], $$e^{i\Omega T}\left( 1-\frac{i}{2\Lambda k} \right) = (1-i \omega k -\xi^2k^2) f'(I) +\frac{i}{2\Lambda k},$$ where $k = \pi/L + \Omega T/L$ is the wavenumber corresponding to a single line-defect and $\Lambda = c'(I)/(2c^2)$. The APD- and CV-restitution curves, $f(I)$ and $c(I)$, were calculated in a one-dimensional cable as in Ref. [@blas]. In addition, the intercellular coupling parameters $\omega$ and $\xi$ were estimated as $\omega \sim 2\gamma/c$ and $\xi \sim (\gamma D)^{1/2}$ [@blas]. The comparison in Fig. \[frecues\] shows that the ring-based theory predicts reasonably well the frequency of line-defect rotation for anchored spiral waves of different period $T$, which was varied here by increasing the obstacle radius $r_i$ in the simulations. The opposite limit that can also be readily understood is the one where plane waves paced at the spiral rotation period exhibit line-defects that move towards the pacing site, which generally occurs for steeper CV-restitution. In this case, line-defect motion is expected to be dominated by the far-field spiral dynamics [@blas]. We have checked that, for the two-variable model of Ref. [@blas], spiral line-defects indeed rotate inward with a frequency equal to the product of the velocity of the planar line-defects and the inverse of their spacing.
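Because $\Omega$ enters the dispersion relation both explicitly and through $k=\pi/L+\Omega T/L$, it must be found self-consistently. A damped fixed-point iteration can sketch the procedure (a Python sketch with illustrative parameter values of our own choosing, not values fitted to any model in the paper):

```python
import cmath

def ring_dispersion_residual(z, L, lam, omega, xi, fprime):
    """Residual of e^{iz}(1 - i/(2*lam*k)) = (1 - i*omega*k - xi^2 k^2) f'
    + i/(2*lam*k), with z = Omega*T and k = (pi + z)/L
    (single-line-defect branch)."""
    k = (cmath.pi + z) / L
    lhs = cmath.exp(1j * z) * (1 - 1j / (2 * lam * k))
    rhs = (1 - 1j * omega * k - xi**2 * k**2) * fprime + 1j / (2 * lam * k)
    return lhs - rhs

def solve_ring_dispersion(L, lam, omega, xi, fprime, z0=0.5 + 0.0j, iters=1000):
    """Damped fixed-point iteration z <- (z + G(z))/2 with
    G(z) = -i log[ rhs(k(z)) / (1 - i/(2*lam*k(z))) ]."""
    z = z0
    for _ in range(iters):
        k = (cmath.pi + z) / L
        rhs = (1 - 1j * omega * k - xi**2 * k**2) * fprime + 1j / (2 * lam * k)
        z = 0.5 * (z + (-1j) * cmath.log(rhs / (1 - 1j / (2 * lam * k))))
    return z
```

The real part of the converged $z$ gives the quasiperiodic phase advance $\Omega T$ per rotation, and its imaginary part the growth or decay of the modulation; the damping merely stabilizes the iteration and does not move the fixed point.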
This property was purposely checked in a domain much larger than the spiral wavelength ($r_e=18$ cm) and with an obstacle size ($r_i=0.72$ cm) sufficient to prevent the spiral wave breakup inherent in this model. However, we expect this behavior to be generic for systems with traveling planar line-defects and to also apply to freely rotating spirals with three line-defects for parameters where breakup does not occur. In summary, we have surveyed spiral line-defect patterns in simplified models of cardiac excitation with period-2 dynamics. Although far from exhaustive, this survey yields the striking finding that freely propagating and anchored spiral waves select different numbers of line-defects. This opens up the possibility of distinguishing free and anchored spiral waves in cardiac tissue by monitoring the number of line-defects. We have shown that spiral wave unstable modes with different numbers of line-defects correspond to topologically quantized solutions of a Helmholtz equation. In this framework, the boundary condition on the period-2 oscillation amplitude in the spiral core, which is fundamentally different for free and anchored spirals, is responsible for selecting the number of line-defects. Furthermore, we have found that spiral line-defect inward rotation can be driven either by the core or by the far-field wavefront dynamics, with concomitantly different frequencies. Our results suggest that the observation of single-line-defect spirals in cardiac tissue culture [@leepnas; @kimpnas] may be a consequence of anchoring on small millimeter-size heterogeneities. However, the dynamics in real tissue is also influenced by the coupling of voltage and intracellular calcium dynamics [@weiss; @karmapt], which has been neglected here. The investigation of the effect of this coupling on line-defect dynamics and its relationship to wave breakup is an interesting direction for future work.
Finally, the previous finding of free spirals with one line-defect [@zhan] in period-2 media with excitable dynamics qualitatively different from that of cardiac tissue suggests that other pattern selection mechanisms may be operative in different media. These differences also remain to be elucidated. We thank Blas Echebarria for valuable discussions. This work was supported by NIH Grant No. P01 HL078931. [99]{} J. N. Weiss [*et al.*]{}, Circulation [**112**]{}, 1232 (2005). D. Barkley, Phys. Rev. Lett. [**68**]{}, 2090 (1992). V. Hakim and A. Karma, Phys. Rev. Lett. [**79**]{}, 665 (1997); Phys. Rev. E [**60**]{}, 5073-5105 (1999). A. Karma and R. F. Gilmour, Physics Today [**60**]{}, 51 (2007); J. N. Weiss [*et al.*]{}, Circ. Res. [**98**]{}, 1244 (2006). S. M. Hwang, T. Y. Kim, and K. J. Lee, Proc. Natl. Acad. Sci. USA [**102**]{}, 10363 (2005). T. Y. Kim [*et al.*]{}, Proc. Natl. Acad. Sci. USA [**104**]{}, 11639 (2007). J. S. Park and K. J. Lee, Phys. Rev. Lett. [**88**]{}, 224501 (2002); J. S. Park, Sung-Jae Woo, and K. J. Lee, Phys. Rev. Lett. [**93**]{}, 098302 (2004); J. S. Park and K. J. Lee, Phys. Rev. E [**73**]{}, 066219 (2006). J. S. Park and K. J. Lee, Phys. Rev. Lett. [**83**]{}, 5393 (1999); B. Marts, D. J. W. Simpson, A. Hagberg, and A. L. Lin, Phys. Rev. E [**76**]{}, 026213 (2007). A. Goryachev and R. Kapral, Phys. Rev. E [**54**]{}, 5469 (1996); A. Goryachev, H. Chaté, and R. Kapral, Phys. Rev. Lett. [**80**]{}, 873 (1998); A. Goryachev, R. Kapral, and H. Chaté, Int. J. Bif. Chaos [**10**]{}, 1537 (2000). S. Wu, Fluct. Noise Lett. [**6**]{}, L379 (2006). J. S. Park, S.-J. Woo, O. Kwon, T. Y. Kim, and K. J. Lee, Phys. Rev. Lett. [**100**]{}, 068302 (2008). J. S. Park and K. J. Lee, Phys. Rev. Lett. [**83**]{}, 5393 (1999); A. Karma, Phys. Rev. Lett. [**71**]{}, 1103 (1993). A. Karma, Chaos [**4**]{}, 461 (1994). F. H. Fenton [*et al.*]{}, Chaos [**12**]{}, 852 (2002). B. Echebarria and A. Karma, Phys. Rev. Lett. [**88**]{}, 208101 (2002); Phys. Rev.
E [**76**]{}, 051911 (2007). M. A. Watanabe [*et al.*]{}, J. Cardiovasc. Electrophysiol. [**12**]{}, 196 (2001). B. Echebarria and A. Karma, Eur. Phys. J. ST [**146**]{}, 217 (2007). F. Fenton [*et al.*]{}, Chaos [**15**]{}, 013502 (2005). M. Zhan and R. Kapral, Phys. Rev. E [**72**]{}, 046221 (2005). M. Courtemanche, L. Glass, and J. P. Keener, Phys. Rev. Lett. [**70**]{}, 2182 (1993).
---
address: |
    $^{*}$ Radboud University Nijmegen, Institute for Molecules and Materials, Heyendaalseweg 135, NL-6525AJ Nijmegen, The Netherlands\
    $^{+}$ Low Temperature Laboratory, Aalto University, School of Science and Technology, P.O. Box 15100, FI-00076 AALTO, Finland\
    $^{\#}$ Landau Institute for Theoretical Physics RAS, Kosygina 2, 119334 Moscow, Russia
author:
- 'M. I. Katsnelson$^{*}$ [^1] and G.E. Volovik $^{+\#}$'
title: 'Quantum electrodynamics with anisotropic scaling: Heisenberg-Euler action and Schwinger pair production in the bilayer graphene'
---

Introduction
============

Both superfluid $^3$He-A [@Volovik2003] and single-layer graphene [@CastroNeto2009; @Vozmediano2010; @Katsnelson2012] serve as examples of emergent relativistic quantum field theory in 3+1 and 2+1 dimensions, respectively. Both systems contain “relativistic” fermions, which are protected by the combined action of symmetry and topology (see review [@Volovik2011]), while the collective modes and deformations provide the effective gauge and gravity fields acting on these fermions [@Volovik2003; @Vozmediano2010]. The effective electromagnetic field emerging in superfluid $^3$He-A is the collective field which comes from the degrees of freedom corresponding to the shift of the position of the Weyl (conical) point in momentum space. The corresponding Maxwell action (for a special orientation of the effective electric and magnetic fields, see details in [@Volovik2003]) is $$S \sim \int d^3x dt \frac{v_F}{24\pi^2}\left[ B^2-\frac{1}{v_F^2} E^2\right] \ln \frac{1}{\left[ B^2-\frac{1}{v_F^2} E^2\right] }\,, \label{3He-A}$$ where $v_F$ is the Fermi velocity. This action describes the effect of vacuum polarization and corresponds to the Heisenberg-Euler action of standard quantum electrodynamics [@HeisenbergEuler1936]. The only difference from the latter is that the fermions in the vacuum of $^3$He-A are massless, and their masslessness is protected by the momentum-space topology of the Weyl point.
This results in the logarithmic term, which describes the zero-charge effect – the screening of the effective charge. The effective action becomes imaginary when $E>v_F B$, describing Schwinger pair production in an electric field [@Schwinger1951] with the production rate $\propto E^2$ for the case of massless fermions [@Volovik1992] (Schwinger pair production of massive fermions has been discussed for the other topological superfluid, $^3$He-B, see Ref. [@SchopohlVolovik1992]). A similar action, but for the real electromagnetic field, should take place in Weyl semimetals (for semimetals with topologically protected Weyl points see Refs. [@Abrikosov1971; @Nielsen1983; @Abrikosov1998; @XiangangWan2011; @Burkov2011; @Aji2011]). Among other things, the massless fermions give rise to the chiral anomaly, and the corresponding Adler-Bell-Jackiw equation for the anomaly has been verified in the $^3$He-A experiments [@Bevan1997]. Graphene gives the opportunity not only to study 2+1 quantum electrodynamics, but also to extend the theory in different directions, in particular to systems with anisotropic scaling, which were suggested by Hořava for the construction of a quantum theory of gravity that does not suffer from ultraviolet (UV) divergences (the UV completion of general relativity) [@HoravaPRL2009; @HoravaPRD2009; @Horava2008; @Horava2010]. As distinct from the relativistic massless fermions in single-layer graphene, which obey invariance under the conventional scaling ${\bf r}\rightarrow b {\bf r}$, $t\rightarrow b t$, fermions in bilayer ($N=2$) or rhombohedral $N$-layer ($N>2$) graphene have a smooth band touching at the Dirac point, $E^2 \propto p^{2N}$, see Refs. [@CastroNeto2009; @Katsnelson2012]. These fermions obey the anisotropic scaling ${\bf r}\rightarrow b {\bf r}$, $t\rightarrow b^N t$, which is precisely what is needed for the construction of divergence-free quantum gravity.
Here we discuss the effect of this anisotropic scaling on the effective action for real or artificial (e.g., created by deformations [@Vozmediano2010; @Katsnelson2012]) electromagnetic fields. Due to the anisotropic scaling, which distinguishes between the space and time components, the electric and magnetic fields obey different scaling laws. In particular, the one-loop action for the magnetic field is $\propto B^2 p_0^{N-4+D}$, where $D$ is the space dimension and $p_0$ is the infrared cut-off. This demonstrates that superfluid $^3$He-A, with its $D=3$ and $N=1$, and the bilayer graphene, with its $D=N=2$, both correspond to the critical dimension $D_c=4-N$, and thus both give rise to a logarithmically divergent action for the magnetic field (which manifests itself as a logarithmic divergence of the diamagnetic susceptibility in undoped bilayer graphene when trigonal warping effects are neglected [@safran; @koshino]). However, for the electric field the action is $\propto E^2 p_0^{D-2-N}$. While for superfluid $^3$He-A this again gives the logarithmic divergence, as happens in relativistic systems with massless fermions, in the multiple-layered graphene with its $D=2$ the action diverges in the infrared for any $N>1$, giving rise to the power-law action $\propto E^{(N+2)/(N+1)}$.

Effective action for real and induced electromagnetic field {#sec:EffectiveAction}
===========================================================

[*Anisotropic scaling for electromagnetic field*]{} \[sec:HoravaGravity\] The effective Hamiltonian for the bilayer graphene in the simplest approximation is [@CastroNeto2009; @Katsnelson2012] $${\cal H}= \frac{\sigma^+}{2m}\left((\hat{\bf x}+i\hat{\bf y})\cdot({\bf p}- e{\bf A}) \right)^2 + \frac{\sigma^-}{2m}\left((\hat{\bf x}-i\hat{\bf y})\cdot({\bf p}- e{\bf A})\right)^2\,, \label{FermionHamiltonian2}$$ where the $\sigma$ are Pauli matrices and $m$ is the mass entering the quadratic band touching.
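At ${\bf A}=0$ this Hamiltonian is a $2\times2$ matrix whose only nonzero entries are the off-diagonal elements $(p_x+ip_y)^2/2m$ and its conjugate, so the quadratic band touching $E=\pm p^2/2m$ can be checked directly (a minimal Python sketch; the function names are ours, not from the paper):

```python
def bilayer_hamiltonian(px, py, m=1.0):
    """2x2 Bloch Hamiltonian of the bilayer at A = 0: with
    sigma^+ = [[0, 1], [0, 0]], the only nonzero entries are the
    off-diagonal elements (px + i py)^2 / (2m) and its conjugate."""
    h01 = (px + 1j * py) ** 2 / (2.0 * m)
    return [[0.0, h01], [h01.conjugate(), 0.0]]

def band_energies(h):
    """Eigenvalues of a 2x2 Hermitian matrix with zero diagonal: +-|h01|."""
    return (-abs(h[0][1]), abs(h[0][1]))
```

Since $E\propto p^2$, rescaling ${\bf r}\to b{\bf r}$ (i.e. ${\bf p}\to {\bf p}/b$) rescales the energy by $b^{-2}$, which is exactly the $N=2$ anisotropic scaling $t\to b^2 t$ discussed above.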
Experimentally, $m \approx 0.03 m_e$, where $m_e$ is the free-electron mass [@mayorov]. Here we neglect the degrees of freedom related to the tetrad gravity and concentrate on the degrees of freedom corresponding to the electromagnetic field (in principle the vector potential ${\bf A}$ may include not only the real electromagnetic field, but also the collective field which comes from the degrees of freedom corresponding to the shift of the position of the Dirac point in momentum space, as in $^3$He-A). We shall use the natural units in which $\hbar=1$ and the electric charge $e=1$; the vector field ${\bf A}$ has dimension of momentum, $[A]=[L]^{-1}$; the electric and magnetic fields have dimensions $[E]=[LT]^{-1}$ and $[B]=[L]^{-2}$, respectively. For standard quantum electrodynamics emerging in the vacuum with massive electrons, Lorentz invariance combined with dimensional analysis gives the general form $(B^2-E^2)f(x,y)$ for the Heisenberg-Euler action [@HeisenbergEuler1936] in terms of the dimensionless quantities $x=(B^2-E^2)/M^4$, $y= {\bf B}\cdot {\bf E}/M^4$. Here $M$ is the rest energy of the electron, which violates the scale invariance in the infrared. Extension of the Born-Infeld electrodynamics to the anisotropic scaling has been considered in Ref. [@Andreev2010]. We consider the electrodynamics with anisotropic scaling, which is induced by fermions obeying the Hamiltonian (\[FermionHamiltonian2\]). [*General effective action*]{} \[sec:GenEffectiveAction\] In Eq. (\[FermionHamiltonian2\]) there is only one dimensional parameter, the mass $m$. Combining the dimensional analysis with the anisotropic scaling, one obtains that the effective action for a constant in space and time electromagnetic field, which is obtained by the integration over the 2+1 fermions with quadratic dispersion, is a function of the dimensionless combination $\mu$ of the electric and magnetic fields $$S = \int d^2x dt ~\frac{B^2}{m} g(\mu)~~,~~ \mu=\frac{m^2E^2}{B^3}\,.
\label{EffectiveActionGeneral}$$ The change of regime from electric-like to magnetic-like behavior of the action occurs at $\mu \approx 1$. The asymptotic behavior in the two limiting cases, $g(\mu \rightarrow 0)=a$ and $g(\mu \rightarrow \infty)= (b+ic)\mu^{2/3}$, gives the effective actions for the constant in space and time magnetic and electric fields: $$S_{B}=a \int d^2x dt ~\frac{B^2}{m} ~~,~~ S_{E}= \int d^2x dt (b+ic) E^{4/3}m^{1/3}\,. \label{BandE}$$ The dimensionless parameters $a$ and $b$ describe the vacuum polarization, and the dimensionless parameter $c$ describes the instability of the vacuum with respect to Schwinger pair production in the electric field, which leads to the imaginary part of the action. This should be contrasted with single-layer graphene, where the corresponding effective action for the 2+1 relativistic quantum electrodynamics is $$S \sim \int d^2x dt ~v_F\left[ B^2-\frac{1}{v_F^2} E^2\right]^{3/4}\,, \label{SingleLayer}$$ the critical field is $E_c(B) = v_F B$, and the Schwinger pair production is $\propto E^{3/2}$ at $B=0$ (see Refs. [@AndersenHaugset1995; @allor; @Vildanov2009]). Returning to conventional units, one should substitute $B\rightarrow eB/c$ and $E\rightarrow eE$. [*Action for magnetic field*]{} \[sec:Bfield\] First, let us consider the case $E=0$. The action is $$S=a\int d^2xdt\frac{B^2}m$$ where $a$ is a yet undetermined numerical coefficient. The simplest way to find $a$ is the following. First, consider a uniform magnetic field and a sample of unit area, so that $\int d^2x\rightarrow 1$. Second, go to imaginary time, $\exp \left( iS\right) \rightarrow \exp \left( -\beta F\right)$, where $F$ is the free energy and $\beta =1/T$ is the inverse temperature. The change of the free energy in a magnetic field is $F=-\chi B^2/2$, where $\chi$ is the magnetic susceptibility (per unit area).
Thus, we have $$a=\frac{m\chi }2$$ The susceptibility for the case of the bilayer within the model of a purely parabolic spectrum has been calculated in Refs. [@safran; @koshino]. As we know, the $D=2$ vacuum of fermions with quadratic touching, $N=2$, corresponds to the critical dimension for the magnetic field action: it is logarithmically divergent for the case of zero doping. This logarithm should be cut off at the energy of the reconstruction of the spectrum due to further-neighbor hopping effects (“trigonal warping”) and/or interelectron interaction (for the most recent discussion, see Ref. [@mayorov]). The answer is $$a=\frac{g_sg_v}{32\pi }\Lambda \,,$$ where $g_s=g_v=2$ are the spin and valley degeneracies. For small fields, the logarithmically running coupling is $\Lambda =\ln \left( \gamma _1^2/\Delta^2 \right) $, where the ultraviolet cut-off is provided by the interlayer hopping $\gamma _1$, while the infrared cut-off is provided by the trigonal warping whose typical energy is $\Delta \approx 0.01\gamma _1$. For larger fields, $B>\Delta^2$, the infrared cut-off is provided by the field itself, $\Lambda =\ln \left( \gamma _1^2/B \right)$, which is similar to what occurs in the effective electrodynamics in $^3$He-A with massless fermions, see Eq.. [*Schwinger pair production*]{} \[sec:Schwinger\] Schwinger pair production in bilayer graphene in zero magnetic field has been considered in Ref. [@Vildanov2009]. Let us consider the case of the crossed fields ${\bf B}=B\hat{\bf z}$, ${\bf E}=E\hat{\bf x}$ using the semiclassical approximation. We will use the gauge $A_x=0,A_y=Bx$. Thus, the imaginary part of the momentum along the $x$ direction is determined by the equation (cf. Ref. [@Vildanov2009]): $$\kappa ^2\left( x\right) =\left( k+Bx\right) ^2-2mE\left| x\right|$$ where $k$ is the (conserved) momentum in the $y$-direction.
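The semiclassical algebra that follows can be checked numerically. The sketch below (Python; units with $B=1$ are assumed, so that $\mathcal{E}=mE$ and $\mu=\mathcal{E}^2$, and the closed form $S_E(k)=\mu f(2k/\mathcal{E})$ derived in the next paragraphs is used for comparison) integrates the tunneling exponent directly between the turning points:

```python
import math

def kappa_sq(X, Ecal, k):
    # kappa^2(X) = X^2 - 2*Ecal*|X - k|  in units with B = 1
    return X * X - 2.0 * Ecal * abs(X - k)

def S_E_numeric(k, Ecal, n=200_000):
    # turning points of the forbidden region (two exist only if Ecal > 2k)
    XL = -Ecal + math.sqrt(Ecal * (Ecal + 2.0 * k))
    XR = Ecal - math.sqrt(Ecal * (Ecal - 2.0 * k))
    h = (XR - XL) / n
    # midpoint rule; kappa vanishes as a square root at both endpoints
    return 2.0 * h * sum(math.sqrt(max(kappa_sq(XL + (i + 0.5) * h, Ecal, k), 0.0))
                         for i in range(n))

def f(x):
    # closed-form tunneling exponent: S_E(k) = mu * f(2k/Ecal), mu = Ecal^2 for B = 1
    return x - 0.5 * (1.0 + x) * math.log(1.0 + x) + 0.5 * (1.0 - x) * math.log(1.0 - x)
```

For instance, with $\mathcal{E}=1$ and $k=0.4$ the numerical integral agrees with $\mu f(0.8)\approx 0.110$ to high accuracy, and for small arguments $f(x)\approx x^3/6$, reproducing the leading term of its Taylor series.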
The classically forbidden region relevant for the tunneling is determined by the condition $\kappa ^2\left( x\right) >0$, and the tunneling exponent is determined by the imaginary part of the action $$S_E\left( k\right) =2\int\limits_{x_L}^{x_R}dx\,\kappa \left( x\right)$$ where $x_L,x_R$ are the left and right turning points. One introduces the parameters $$\mathcal{E}=\frac{mE}B$$ and $$X=k+Bx$$ so that $$S_E\left( k\right) =\frac 2B\int\limits_{X_L}^{X_R}dX\,\kappa \left( X\right)$$ where $$\kappa ^2\left( X\right) =\left\{ \begin{array}{cc} \left( X-\mathcal{E}\right) ^2+\mathcal{E}\left( 2k-\mathcal{E}\right) , & X>k \\ \left( X+\mathcal{E}\right) ^2-\mathcal{E}\left( 2k+\mathcal{E}\right) , & X<k \end{array} \right.$$ Now we have to study the conditions for the existence of the left and right turning points. Due to the symmetry with respect to the replacement $k\rightarrow -k,x\rightarrow -x$, we can restrict ourselves to the case $k>0$. A simple analysis shows that two turning points exist only if $$\mathcal{E}>2k$$ The further calculations are straightforward, and the answer is $$S_E\left( k\right) =\mu f\left( \frac{2k}{\mathcal{E}}\right)$$ where $$f\left( x\right) =x-\frac{1+x}2\ln \left( 1+x\right) -\frac{1-x}2\ln \left( \frac 1{1-x}\right)$$ The Taylor expansion of the function $f$ reads $$f\left( x\right) = \sum_{n=1}^{\infty} \frac{x^{2n+1}}{2n(2n+1)}$$ For $\mathcal{E}\rightarrow \infty $ we have $f\left( x\right) \approx x^3/6 $, which is in agreement with Ref. [@Vildanov2009], if one fixes the misprint in Eq. (36) of [@Vildanov2009], where there is a power 1/3 instead of 3. The semiclassical approximation is valid if $\mu \gg 1$. The pair production is obtained by integrating the tunneling exponent over the momentum $k$: $$\begin{aligned} {\rm Im}~ S =\frac{g_sg_v}{2\pi^2} E \int_{0}^{\mathcal{E}/2} dk \exp\left(- S_E\left( k\right)\right) \nonumber \\ =\frac{g_sg_v}{4\pi^2} \frac{B^2\mu}{m} \int_{0}^{1} dx \exp\left(- \mu f(x)\right) \,.
\label{ImaginaryAction}\end{aligned}$$ Semiclassically, there are *always* some states which exhibit tunneling, but only those with $\left| k\right| <\mathcal{E}/2$. In the case of a strong magnetic field their contribution shrinks to a point. In the limit of small magnetic fields one obtains the parameter $c$ in the action : $$\begin{aligned} \nonumber {\rm Im} S= \frac{g_sg_v}{4\pi^2} \frac{B^2\mu}{m} \int_{0}^{+\infty} dx \exp\left(- \frac{\mu}{6} x^3\right)=cE^{4/3}m^{1/3}\,, \\ c=\frac{g_sg_v}{12\pi^2}6^{1/3}\Gamma(1/3) . \label{ImaginaryActionE}\end{aligned}$$ The Schwinger pair production rate proportional to $E^{4/3}$ has been discussed in Ref. [@Vildanov2009]. If we take into account the next-order corrections in $1/\mu$, the expression is multiplied by the factor $1-\frac{3^{5/3} 2^{2/3}}{10 \Gamma(1/3)}\frac{B^2}{(mE)^{4/3}}$. The change of regime from magnetic-like to electric-like behavior happens at $\mu \approx 1$, which means, in CGSE units, $$\frac{E}{B} \approx \frac{\hbar}{mc}\sqrt{\frac{|e|B}{\hbar c}}$$ For the field $B = 1$T this ratio is of the order of $3 \cdot 10^{-4}$, that is, an order of magnitude smaller than for the case of single-layer graphene, where it is $v_F/c \approx 1/300$. [*Effective action for higher order touching*]{} \[sec:multilayered\] All this can be extended to band touching of order $N$, which presumably may be achieved by rhombohedral stacking of $N$ graphene-like layers [@CastroNeto2009; @Katsnelson2012]. The effective Hamiltonian for fermions in this case is $${\cal H}= \frac{\sigma^+}{2m}\left((\hat{\bf x}+i\hat{\bf y})\cdot({\bf p}- e{\bf A}) \right)^N + \frac{\sigma^-}{2m}\left((\hat{\bf x}-i\hat{\bf y})\cdot({\bf p}- e{\bf A})\right)^N.
\label{FermionHamiltonianN}$$ If we are interested only in the infrared behavior of the action in the regime when the electric field dominates, then the induced electromagnetic action has the following general structure: $$S(N) = \int d^2x dt~ \frac{B^{(2+N)/2}}{m} g(\mu)~~,~~ \mu=\frac{m^2E^2}{B^{N+1}}\,. \label{EMactionGeneral}$$ At large $\mu$ one has $g(\mu) \propto \mu^{(N+2)/2(N+1)}$, and the Schwinger pair production rate in zero magnetic field is $$\dot n \sim \frac{1}{m}\left(mE\right)^{\frac{N+2}{N+1}} . \label{SchwingerN}$$ Discussion {#sec:Discussion} ========== In the condensed matter context (or in related microscopic theories of the quantum vacuum), the existence of the topologically protected nodes in the spectrum of Weyl media gives rise to effective gauge fields ($U(1)$ and $SU(2)$) and gravity. The same takes place in single-layer and bilayer graphene, which contain Dirac points in their spectrum. The effective $SU(2)$ gauge field comes from the spin degrees of freedom: these are the collective modes in which the momentum of the Dirac point shifts differently for spin-up and spin-down species. In addition to the collective modes related to the shift of the Dirac point, graphene has degrees of freedom which correspond to tetrads in anisotropic gravity. In particular, for bilayer graphene these degrees of freedom enter the effective Hamiltonian in the following way $$\begin{aligned} {\cal H}= \sigma^+\left(({\bf e}_1+i{\bf e}_2)\cdot({\bf p}- e{\bf A}) \right)^2 \nonumber \\ + \sigma^-\left(({\bf e}_1-i{\bf e}_2)\cdot({\bf p}- e{\bf A})\right)^2\,, \label{Tetrad}\end{aligned}$$ where ${\bf e}_1$ and ${\bf e}_2$ play the role of zweibein fields, which give rise to the energy spectrum $E^2=\left(g^{ik}p_ip_k\right)^2$ corresponding to the effective 2D metric $g^{ik}=e_1^ie_1^k + e_2^ie_2^k$, as well as to a spin connection and torsion.
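The stated spectrum is easy to verify: with $\sigma^{\pm}=(\sigma_x\pm i\sigma_y)/2$ and ${\bf A}=0$, the Hamiltonian is an off-diagonal $2\times 2$ matrix with entries $w^2$ and $\bar{w}^2$, where $w=({\bf e}_1+i{\bf e}_2)\cdot{\bf p}$, so its eigenvalues are $\pm\sqrt{w^2\bar{w}^2}=\pm|w|^2=\pm\, g^{ik}p_ip_k$. A quick numerical confirmation (Python; the zweibein and momentum values are arbitrary illustrative numbers, and the overall normalization is dropped as in the Hamiltonian above):

```python
import cmath

# arbitrary zweibein vectors and momentum (illustrative values only)
e1 = (1.0, 0.3)
e2 = (-0.2, 0.9)
p = (0.7, -0.4)

dot = lambda a, b: a[0] * b[0] + a[1] * b[1]

w = dot(e1, p) + 1j * dot(e2, p)          # (e1 + i e2) . p
a, b = w ** 2, w.conjugate() ** 2         # off-diagonal entries of the 2x2 Hamiltonian
eig = cmath.sqrt(a * b)                   # eigenvalues of [[0, a], [b, 0]] are +/- sqrt(a*b)

g_pp = dot(e1, p) ** 2 + dot(e2, p) ** 2  # g^{ik} p_i p_k with g^{ik} = e1 e1 + e2 e2
```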
As distinct from superfluid $^3$He-A, which is the analog of the relativistic vacuum with massless Weyl fermions, bilayer graphene is a representative of the quantum vacua that exhibit different scaling laws for space and time. While such vacua were considered by Hořava in relation to quantum gravity, here we applied the anisotropic scaling to quantum electrodynamics emerging in these systems, using as an example the 2D system of massless Dirac fermions with a quadratic spectrum. Such systems have peculiar properties, and we have touched upon the Heisenberg-Euler action and Schwinger pair production. Acknowledgements {#acknowledgements .unnumbered} ================ It is our pleasure to thank Frans Klinkhamer for discussion. This work is supported by the Stichting voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), and by the Academy of Finland and its COE program. [99]{} G.E. Volovik, [*The Universe in a Helium Droplet*]{}, Clarendon Press, Oxford (2003). A.H. Castro Neto, F. Guinea, N.M.R. Peres, K.S. Novoselov, and A.K. Geim, Rev. Mod. Phys. **81**, 109–162 (2009). M.A.H. Vozmediano, M.I. Katsnelson, and F. Guinea, Physics Reports [**496**]{}, 109–148 (2010). M.I. Katsnelson, [*Graphene: Carbon in Two Dimensions*]{}, Cambridge Univ. Press, Cambridge (2012). G.E. Volovik, Topology of quantum vacuum, draft for Chapter in proceedings of the Como Summer School on analogue gravity, arXiv:1111.4627. W. Heisenberg and H. Euler, Z. Phys. [**98**]{}, 714–732 (1936). J. Schwinger, Phys. Rev. [**82**]{}, 664–679 (1951). G.E. Volovik, [*Exotic properties of superfluid $^3$He*]{}, World Scientific, Singapore (1992). N. Schopohl and G.E. Volovik, Ann. Phys. (N. Y.) [**215**]{}, 372–385 (1992). A.A. Abrikosov and S.D. Beneslavskii, Sov. Phys. JETP [**32**]{}, 699–708 (1971). H.B. Nielsen and M. Ninomiya, Phys. Lett. B [**130**]{}, 389–396 (1983). A.A. Abrikosov, Phys. Rev.
[**B 58**]{}, 2788–2794 (1998). X. Wan, A.M. Turner, A. Vishwanath and S.Y. Savrasov, Phys. Rev. B [**83**]{}, 205101 (2011). A.A. Burkov and L. Balents, Phys. Rev. Lett. [**107**]{}, 127205 (2011). V. Aji, arXiv:1108.4426. T.D.C. Bevan, A.J. Manninen, J.B. Cook, J.R. Hook, H.E. Hall, T. Vachaspati, and G.E. Volovik, Nature [**386**]{}, 689–692 (1997). P. Hořava, Phys. Rev. Lett. [**102**]{}, 161301 (2009). P. Hořava, Phys. Rev. D [**79**]{}, 084008 (2009). P. Hořava, JHEP 0903, 020 (2009). C. Xu and P. Hořava, Phys. Rev. D [**81**]{}, 104033 (2010). J.O. Andersen and T. Haugset, Phys. Rev. D [**51**]{}, 3073–3080 (1995). S.A. Safran, Phys. Rev. B **30**, 421–423 (1984). M. Koshino and T. Ando, Phys. Rev. B **76**, 085425 (2007). A.S. Mayorov, D.C. Elias, M. Mucha-Kruczynski, R.V. Gorbachev, T. Tudorovskiy, A. Zhukov, S.V. Morozov, M.I. Katsnelson, V.I. Fal’ko, A.K. Geim, and K.S. Novoselov, Science **333**, 860–863 (2011). O. Andreev, Int. J. Mod. Phys. A [**25**]{}, 2087–2101 (2010); arXiv:0910.1613. D. Allor, T.D. Cohen, and D.A. McGady, Phys. Rev. D [**78**]{}, 096009 (2008). N.M. Vildanov, J. Phys.: Condens. Matter [**21**]{}, 445802 (2009). [^1]: e-mail: [email protected],[email protected]
--- abstract: 'The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for large-scale structured optimization. Despite many recent results on the convergence properties of ADMM, a quantitative characterization of the impact of the algorithm parameters on the convergence times of the method is still lacking. In this paper we find the optimal algorithm parameters that minimize the convergence factor of the ADMM iterates in the context of $\ell_2$-regularized minimization and constrained quadratic programming. Numerical examples show that our parameter selection rules significantly outperform existing alternatives in the literature.' author: - 'Euhanna Ghadimi, André Teixeira, Iman Shames, and Mikael Johansson [^1]' bibliography: - 'admmbib.bib' title: ' Optimal parameter selection for the alternating direction method of multipliers (ADMM): quadratic problems' --- Introduction {#sec:introduction} ============ The alternating direction method of multipliers is a powerful algorithm for solving structured convex optimization problems. While the ADMM method was introduced for optimization in the 1970s, its origins can be traced back to techniques for solving elliptic and parabolic partial differential equations developed in the 1950s (see [@Boyd11] and references therein). ADMM enjoys the strong convergence properties of the method of multipliers and the decomposability property of dual ascent, and is particularly useful for solving optimization problems that are too large to be handled by generic optimization solvers. The method has found a large number of applications in diverse areas such as compressed sensing [@Yang:2011], regularized estimation [@bo2012], image processing [@image010], machine learning [@Forero010], and resource allocation in wireless networks [@Joshi:12].
This broad range of applications has triggered a strong recent interest in developing a better understanding of the theoretical properties of ADMM [@deng12; @luo12; @boley:13]. Mathematical decomposition is a classical approach for parallelizing numerical optimization algorithms. If the decision problem has a favorable structure, decomposition techniques such as primal and dual decomposition make it possible to distribute the computations over multiple processors [@Las:70; @BeT:89]. The processors are coordinated towards optimality by solving a suitable master problem, typically using gradient or subgradient techniques. If problem parameters such as Lipschitz constants and convexity parameters of the cost function are available, the optimal step-size parameters and associated convergence rates are well-known (*e.g.*, [@Nesterov03]). A drawback of the gradient method is that it is sensitive to the choice of the step-size, even to the point where poor parameter selection can lead to algorithm divergence. In contrast, the ADMM technique is surprisingly robust to poorly selected algorithm parameters: under mild conditions, the method is guaranteed to converge for all positive values of its single parameter. Recently, an intense research effort has been devoted to establishing the rate of convergence of the ADMM method. It is now known that if the objective functions are strongly convex and have Lipschitz-continuous gradients, then the iterates produced by the ADMM algorithm converge linearly to the optimum in a certain distance metric, e.g. [@deng12]. The application of ADMM to quadratic problems was considered in [@boley:13], where it was conjectured that the iterates converge linearly in a neighborhood of the optimal solution. It is important to stress that even when the ADMM method has a linear convergence *rate*, the number of iterations ensuring a desired accuracy, i.e. the convergence *time*, is heavily affected by the choice of the algorithm parameter.
We will show that a poor parameter selection can result in arbitrarily large convergence times for the ADMM algorithm. The aim of the present paper is to contribute to the understanding of the convergence properties of the ADMM method. Specifically, we derive the algorithm parameters that minimize the convergence factor of the ADMM iterations for two classes of quadratic optimization problems: $\ell_2$-regularized quadratic minimization and quadratic programming with linear inequality constraints. In both cases, we establish linear convergence rates and develop techniques to minimize the convergence factors of the ADMM iterates. These techniques allow us to give explicit expressions for the optimal algorithm parameters and the associated convergence factors. We also study over-relaxed ADMM iterations and demonstrate how to jointly choose the ADMM parameter and the over-relaxation parameter to improve the convergence times even further. We have chosen to focus on quadratic problems since they allow for analytical tractability, yet have vast applications in estimation [@Falcao1995], multi-agent systems [@nedic10] and control [@DSB:2013]. Furthermore, many complex problems can be reformulated as or approximated by QPs [@SBV:2004], and optimal ADMM parameters for QPs can be used as a benchmark for more complex ADMM sub-problems, e.g. $\ell_1$-regularized problems [@Boyd11]. To the best of our knowledge, this is one of the first works that addresses the problem of optimal parameter selection for ADMM. A few recent papers have focused on the optimal parameter selection of the ADMM algorithm for some variations of distributed convex programming subject to linear equality constraints, e.g. [@TGS:13; @SLY:13]. The paper is organized as follows. In Section \[sec:backgroud\], we derive some preliminary results on fixed-point iterations and review the necessary background on the ADMM method.
Section \[sec:l2\] studies $\ell_2$-regularized quadratic programming and gives explicit expressions for the jointly optimal step-size and acceleration parameter that minimize the convergence factor. We then shift our focus to quadratic programming with linear inequality constraints and derive the optimal step-sizes for such problems in Section \[sec:qp\]. We also consider two acceleration techniques and discuss inexpensive ways to improve the speed of convergence. Our results are illustrated through numerical examples in Section \[sec:qp\_evaluation\], where we also perform an extensive Model Predictive Control (MPC) case study, evaluate the performance of ADMM with the proposed parameter selection rules, and compare with an accelerated ADMM method from the literature. Final remarks and future directions conclude the paper. Notation -------- We denote the set of real numbers with ${\mathcal{R}^{}}$ and define the set of positive (nonnegative) real numbers as ${\mathcal{R}^{}}_{++}$ (${\mathcal{R}^{}}_{+}$). Let ${\mathcal S^n}$ be the set of real symmetric matrices of dimension $n\times n$. The set of positive definite (semi-definite) $n\times n$ matrices is denoted by ${\mathcal{S}_{++}^{n}}$ (${\mathcal{S}_{+}^{n}}$). We denote by $I$ the identity matrix and by $I_m$ the identity matrix of dimension $m\times m$. Given a matrix $A\in {\mathcal{R}^{n\times m}}$, let ${\mathcal{N}(A)}\triangleq\{x\in{\mathcal{R}^{m}} \vert \; Ax=0\}$ be the null-space of $A$ and denote the range space of $A$ by ${\mbox{Im}(A)}\triangleq\{y\in{\mathcal{R}^{n}} \vert \; y=Ax,\;x\in{\mathcal{R}^{m}}\}$. We say that the nullity of $A$ is $0$ (that is, ${\mathcal{N}(A)}$ is zero-dimensional) when ${\mathcal{N}(A)}$ only contains $0$. The transpose of $A$ is represented by $A^\top$ and for $A$ with full-column rank we define $A^\dagger \triangleq (A^\top A)^{-1}A^\top$ as the pseudo-inverse of $A$.
Given a subspace $\mathcal{X}\subseteq {\mathcal{R}^{n}}$, $\Pi_{\mathcal{X}}\in{\mathcal{R}^{n\times n}}$ denotes the orthogonal projector onto $\mathcal{X}$, while $\mathcal{X}^\bot$ denotes the orthogonal complement of $\mathcal{X}$. For a square matrix $A$ with an eigenvalue $\lambda$ we call the space spanned by all the eigenvectors corresponding to the eigenvalue $\lambda$ the $\lambda$-eigenspace of $A$. The $i$-th smallest eigenvalue in modulus is indicated by $\lambda_i(\cdot)$. The spectral radius of a matrix $A$ is denoted by $r(A)$. The vector (matrix) $p$-norm is denoted by $\Vert \cdot \Vert_p$ and $\Vert \cdot \Vert = \Vert \cdot \Vert_2$ is the Euclidean (spectral) norm of its vector (matrix) argument. Given a subspace $\mathcal{X}\subseteq{\mathcal{R}^{n}}$ and a matrix $A\in{\mathcal{R}^{n\times n}}$, denote $\|A\|_{\mathcal{X}} = \max_{x\in\mathcal{X}}\dfrac{\Vert Ax \Vert}{\Vert x\Vert}$ as the spectral norm of $A$ restricted to the subspace $\mathcal{X}$. Given $z\in {\mathcal{R}^{n}}$, the diagonal matrix $Z\in {\mathcal{R}^{n\times n}}$ with $Z_{ii} = z_i$ and $Z_{ij}=0$ for $j\neq i$ is denoted by $Z=\mbox{diag}(z)$. Moreover, $z\geq0$ denotes the element-wise inequality, $\vert z \vert$ corresponds to the element-wise absolute value of $z$, and $\mathcal{I}_{+}(z)$ is the indicator function of the positive orthant defined as $\mathcal{I}_{+}(z) = 0$ for $z\geq0$ and $\mathcal{I}_{+}(z) = +\infty$ otherwise. Consider a sequence $\{x^k\}$ converging to a fixed-point $x^\star\in{\mathcal{R}^{n}}$. The *convergence factor* of the converging sequence is defined as $$\begin{aligned} \label{eqn:convergence_factor_def} \zeta&\triangleq\, \underset{k:\,x^k \neq x^\star}{\mbox{sup}} \dfrac{\Vert x^{k+1}-x^\star\Vert}{\Vert x^{k}-x^\star\Vert}.\end{aligned}$$ The sequence $\{x^k\}$ is said to converge Q-sublinearly if $\zeta = 1$, Q-linearly if $\zeta\in (0,1)$, and Q-superlinearly if $\zeta = 0$.
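As a small numerical illustration of these definitions (a Python sketch with illustrative numbers), consider the scalar iteration $x^{k+1}=\zeta x^k$ with fixed point $x^\star=0$; the empirical convergence factor recovers $\zeta$, and the $\varepsilon$-solution time introduced next follows the logarithmic formula:

```python
import math

zeta, sigma, eps = 0.6, 1.0, 1e-6   # convergence factor, initial error bound, accuracy
xs = [sigma]
for _ in range(60):
    xs.append(zeta * xs[-1])        # Q-linear iteration x^{k+1} = zeta * x^k, x* = 0

# empirical convergence factor: sup over k of |x^{k+1} - x*| / |x^k - x*|
factor = max(abs(xs[k + 1]) / abs(xs[k]) for k in range(len(xs) - 1))

# eps-solution time of a linearly converging sequence
pi_eps = (math.log(sigma) - math.log(eps)) / (-math.log(zeta))
```

Here $\pi_\varepsilon \approx 27.0$, and indeed $|x^k|\leq\varepsilon$ from iteration $28$ onwards but not at iteration $27$.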
Moreover, we say that convergence is R-linear if there is a nonnegative scalar sequence $\{\nu_k\}$ such that $\Vert x^k-x^\star \Vert\leq \nu_k$ for all $k$ and $\{\nu_k\}$ converges Q-linearly to $0$ [@JNW:2006] [^2]. In this paper, we omit the letter Q when referring to the convergence rate. Given an initial condition $x^0$ such that $\Vert x^{0}-x^\star\Vert\leq \sigma $, we define the *$\varepsilon$-solution time* $\pi_{\varepsilon}$ as the smallest iteration count ensuring that $\Vert x^k - x^\star\Vert \leq \varepsilon$ holds for all $k\geq \pi_{\varepsilon}$. For linearly converging sequences with $\zeta\in(0,1)$ the $\varepsilon$-solution time is given by $\pi_\varepsilon \triangleq\, \dfrac{\log(\sigma)-\log({\varepsilon})} {-\log(\zeta)}$. If the $0$-solution time is finite for all $x^0$, we say that the sequence converges in finite time. Since $\zeta < 1$ for linearly converging sequences, the $\varepsilon$-solution time $\pi_\varepsilon$ is reduced by minimizing $\zeta$. Background and preliminaries {#sec:backgroud} ============================ This section presents preliminary results on fixed-point iterations and the ADMM method. Fixed-point iterations ---------------------- Consider the following iterative process $$\begin{aligned} \label{eqn:fixed_point_iterates} x^{k+1} = T x^{k},\end{aligned}$$ where $x^k\in {\mathcal{R}^{n}}$ and $T \in \mathcal{S}^{n}$. Assume $T$ has $m<n$ eigenvalues at $1$ and let $V\in{\mathcal{R}^{n\times m}}$ be a matrix whose columns span the $1$-eigenspace of $T$ so that $TV=V$. Next we determine the properties of $T$ such that, for any given starting point $x^0$, the iteration in  converges to a fixed-point that is the projection of $x^0$ onto the $1$-eigenspace of $T$, i.e. $$\begin{aligned} \label{eqn:fixed_point_limit} x^\star\triangleq\lim_{k\rightarrow \infty} x^k = \lim_{k\rightarrow \infty} T^k x^0 = \Pi_{{\mbox{Im}(V)}}x^0.
$$ \[prop:1\] The iterations  converge to a fixed-point in ${\mbox{Im}(V)}$ if and only if $$\begin{aligned} \label{eqn:convergence_condition} r\left(T-\Pi_{{\mbox{Im}(V)}} \right)<1.\end{aligned}$$ The result is an extension of [@XiB:04 Theorem 1] to the case of a $1$-eigenspace of $T$ with dimension $m>1$. The proof is similar to the one in that reference and is therefore omitted. Proposition \[prop:1\] shows that when $T\in \mathcal{S}^n$, the fixed-point iteration  is guaranteed to converge to a point given by (\[eqn:fixed\_point\_limit\]) if all the non-unitary eigenvalues of $T$ have magnitudes strictly smaller than 1. From  one sees that $$\begin{aligned} x^{k+1}-x^\star &= \left(T-\Pi_{{\mbox{Im}(V)}}\right) x^k = \left(T-\Pi_{{\mbox{Im}(V)}}\right) (x^k-x^\star).\end{aligned}$$ Hence, the convergence factor of  is the modulus of the largest non-unit eigenvalue of $T$. Based on the results of Proposition \[prop:1\], a few comments are in order. \(i) The convergence factor of (\[eqn:fixed\_point\_iterates\]) is given by $\mbox{sup} \dfrac{\Vert x^{k+1}-x^\star\Vert_2}{\Vert x^{k}-x^\star\Vert_2} = r\left(T-\Pi_{{\mbox{Im}(V)}}\right)$. \(ii) Assuming $T\in {\mathcal S}^n$, one can further optimize the weights to minimize the convergence factor by formulating the semidefinite program $$\begin{aligned} \begin{array}[c]{ll} \min &s\\ \mbox{subject to}& -sI \leq T-V(V^\top V)^{-1} V^\top \leq s I\\ & T\in {\mathcal S}^n , \, T V = V \end{array}\end{aligned}$$ The ADMM method --------------- The ADMM algorithm solves problems of the form $$\begin{aligned} \begin{array}[c]{ll} \mbox{minimize} & f(x)+g(z)\\ \mbox{subject to} & Ax+Bz=c \end{array} \label{eqn:admm_standard_form}\end{aligned}$$ where $f$ and $g$ are convex functions, $x\in {\mathcal R}^n$, $z\in {\mathcal R}^m$, $A\in {\mathcal R}^{p\times n}$, $B\in {\mathcal R}^{p\times m}$ and $c\in {\mathcal R}^p$; see [@Boyd11] for a detailed review. Relevant examples that appear in this form are, e.g., regularized estimation, where $f$ is the estimator loss and $g$ is the regularization term, and various networked optimization problems, *e.g.* [@Italian; @Boyd11]. The method is based on the *augmented Lagrangian* $$\begin{aligned} L_{\rho}(x,z,\mu) &= f(x)+g(z) + \dfrac{\rho}{2}\Vert Ax+Bz-c\Vert_2^2 + \mu^{T}(Ax+Bz-c),\end{aligned}$$ and performs sequential minimization of the $x$ and $z$ variables followed by a dual variable update: $$\begin{aligned} x^{k+1} &= \underset{x}{\operatorname{argmin}}\, L_{\rho}(x,z^{k}, \mu^{k}), \nonumber\\ z^{k+1} &= \underset{z}{\operatorname{argmin}}\, L_{\rho}(x^{k+1}, z, \mu^k), \label{eqn:admm_iterations}\\ \mu^{k+1} &= \mu^{k} + \rho(Ax^{k+1}+Bz^{k+1}-c), \nonumber\end{aligned}$$ for some arbitrary $x^0 \in {\mathcal R}^n$, $z^0\in {\mathcal R}^m$, and $\mu^0\in {\mathcal R}^p$. It is often convenient to express the iterations in terms of the scaled dual variable $u=\mu/\rho$: $$\begin{aligned} \label{eqn:admm_scaled} \begin{array}[c]{ll} x^{k+1} &= \underset{x}{\operatorname{argmin}} \left\{ f(x)+ \dfrac{\rho}{2}\Vert Ax+Bz^k-c+u^k\Vert_2^2\right\}, \\ z^{k+1} &= \underset{z}{\operatorname{argmin}} \left\{g(z) + \dfrac{\rho}{2}\Vert Ax^{k+1}+Bz-c+u^{k}\Vert_2^2\right\}, \\ u^{k+1} &= u^{k} + Ax^{k+1}+Bz^{k+1}-c. \end{array}\end{aligned}$$ ADMM is particularly useful when the $x$- and $z$-minimizations can be carried out efficiently, for example when they admit closed-form expressions. Examples of such problems include linear and quadratic programming, basis pursuit, $\ell_1$-regularized minimization, and model fitting problems, to name a few (see [@Boyd11] for a complete discussion). One advantage of the ADMM method is that there is only a single algorithm parameter, $\rho$, and under rather mild conditions, the method can be shown to converge for all values of the parameter; see [@Boyd11; @JEK:12] and references therein.
As discussed in the introduction, this contrasts the gradient method whose iterates diverge if the step-size parameter is chosen too large. However, $\rho$ has a direct impact on the convergence factor of the algorithm, and inadequate tuning of this parameter can render the method slow. The convergence of ADMM is often characterized in terms of the residuals $$\begin{aligned} r^{k+1} &= Ax^{k+1}+B z^{k+1}-c,\label{eq:primal_res}\\ s^{k+1} &= \rho A^\top B (z^{k+1}-z^k),\label{eq:dual_res}\end{aligned}$$ termed the *primal* and *dual* residuals, respectively [@Boyd11]. One approach for improving the convergence properties of the algorithm is to also account for past iterates when computing the next ones. This technique is called *relaxation* and amounts to replacing $A x^{k+1}$ with $h^{k+1} = \alpha^k A x^{k+1}- (1-\alpha^k) (Bz^k -c)$ in the $z$- and $u$-updates [@Boyd11], yielding $$\label{eqn:admm_relaxed} \begin{aligned} z^{k+1} &= \underset{z}{\operatorname{argmin}} \,\left\{g(z) + \dfrac{\rho}{2}\left\Vert h^{k+1} +Bz-c+u^{k}\right\Vert_2^2\right\}, \\ u^{k+1} &= u^{k} + h^{k+1}+ Bz^{k+1}-c. \end{aligned}$$ The parameter $\alpha^k\in (0,2)$ is called the *relaxation parameter*. Note that letting $\alpha^k=1$ for all $k$ recovers the original ADMM iterations . Empirical studies show that over-relaxation, *i.e.* letting ${\alpha^k>1}$, is often advantageous and the guideline $\alpha^k\in [1.5, 1.8]$ has been proposed [@Eckstein:1994]. In the rest of this paper, we will consider the traditional ADMM iterations  and the relaxed version  for different classes of quadratic problems, and derive explicit expressions for the step-size $\rho$ and the relaxation parameter $\alpha$ that minimize the convergence factors. 
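To make the role of the parameters concrete, the following sketch (Python; all numbers are illustrative) runs relaxed iterations on a scalar instance of the $\ell_2$-regularized problem treated in Section \[sec:l2\], with the parameter choice $\rho=\delta$, $\alpha=2$ that will be shown there to be jointly optimal for that problem class:

```python
# minimize 0.5*Q*x^2 + q*x + 0.5*delta*z^2  subject to  x - z = 0  (scalar toy instance)
Q, q, delta = 3.0, 1.0, 0.5
rho, alpha = delta, 2.0              # step-size and over-relaxation parameter
x_star = -q / (Q + delta)            # closed-form optimum

z = mu = 0.0
zs = []
for _ in range(3):
    x = (rho * z - mu - q) / (Q + rho)      # x-update
    h = alpha * x + (1.0 - alpha) * z       # relaxed combination (A = I, B = -I, c = 0)
    z = (mu + rho * h) / (delta + rho)      # z-update
    mu += rho * (h - z)                     # dual update
    zs.append(z)
```

Starting from $z^0=\mu^0=0$, the very first $z$-iterate already equals $x^\star$; with $\alpha=1$ the same loop only halves the dual error per iteration.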
Optimal convergence factor for $\ell_2$-regularized quadratic minimization {#sec:l2} =========================================================================== Regularized estimation problems $$\begin{aligned} \begin{array}[c]{ll} \mbox{minimize} & f(x) + \dfrac{\delta}{2}\Vert x \Vert_p^q \end{array}\end{aligned}$$ with $\delta>0$ abound in statistics, machine learning, and control. In particular, $\ell_1$-regularized estimation, where $f(x)$ is quadratic and $p=q=1$, and *sum of norms* regularization, where $f(x)$ is quadratic, $p=2$, and $q=1$, have recently received significant attention [@ohlsson2010segmentation]. In this section we will focus on $\ell_2$-regularized estimation, where $f(x)$ is quadratic and $p=q=2$, i.e. $$\begin{aligned} \label{eqn:L2_formulation} \begin{array}{ll} \mbox{minimize} & \dfrac{1}{2} x^\top Q x + q^\top x + \dfrac{\delta}{2} \Vert z\Vert_2^2 \\ \mbox{subject to}& x-z=0, \end{array}\end{aligned}$$ for $Q \in {\mathcal{S}_{++}^{n}}$, $x,q,z \in {\mathcal{R}^{n}}$ and a constant regularization parameter $\delta \in \mathcal{R}_+$. While these problems can be solved explicitly and do not motivate the ADMM machinery per se, they provide insight into the step-size selection for ADMM and allow us to compare the performance of an optimally tuned ADMM to direct alternatives (see Section \[sec:qp\_evaluation\]). Standard ADMM iterations ------------------------ The standard ADMM iterations are given by $$\begin{aligned} \label{eqn:ADMM_L2_iterations} \begin{array}{l} x^{k+1}=(Q+\rho I)^{-1}(\rho z^k-\mu^k-q),\\ z^{k+1} = \dfrac{\mu^k+\rho x^{k+1}}{\delta+\rho},\\ \mu^{k+1}= \mu^{k}+\rho (x^{k+1}-z^{k+1}).
\end{array} \end{aligned}$$ The $z$-update implies that $\mu^k=(\delta+\rho)z^{k+1}-\rho x^{k+1}$, so the $\mu$-update can be re-written as $$\begin{aligned} \nonumber \mu^{k+1}=(\delta + \rho)z^{k+1}-\rho x^{k+1} + \rho(x^{k+1}-z^{k+1})= \delta z^{k+1}.\end{aligned}$$ Hence, to study the convergence of (\[eqn:ADMM\_L2\_iterations\]) one can investigate how the errors associated with $x^k$ or $z^k$ vanish. Inserting the $x$-update into the $z$-update and using the fact that $\mu^k=\delta z^k$, we find $$\begin{aligned}\label{eqn:ADMM_L2_matrix} z^{k+1}&= \underbrace{\dfrac{1}{\delta+\rho}\left(\delta I + \rho (\rho-\delta)\left(Q+\rho I\right)^{-1}\right)}_E z^k -\dfrac{\rho}{\delta+\rho}(Q+\rho I)^{-1}q. \end{aligned}$$ Let $z^{\star}$ be a fixed-point of , i.e. $z^{\star}= E z^\star -\dfrac{\rho(Q+\rho I)^{-1}}{\delta+\rho}q$. The dual error $e^{k+1}\triangleq z^{k+1}-z^\star$ then evolves as $$\begin{aligned} e^{k+1} &= E e^{k}.\label{eqn:ADMM_L2_error}\end{aligned}$$ A direct analysis of the error dynamics (\[eqn:ADMM\_L2\_error\]) allows us to characterize the convergence of (\[eqn:ADMM\_L2\_iterations\]): \[thm:L2:standard\] For all values of the step-size $\rho>0$ and regularization parameter $\delta >0$, both $x^k$ and $z^k$ in the ADMM iterations (\[eqn:ADMM\_L2\_iterations\]) converge to $x^\star=z^\star$, the solution of optimization problem . Moreover, $z^{k+1}-z^\star$ converges at linear rate $\zeta \in (0,1)$ for all $k\geq 0$. The pair of the optimal constant step-size $\rho^{\star}$ and convergence factor $\zeta^\star$ are given as $$\begin{aligned} \label{eqn:ADMM_L2_optimal_step-size} \rho^{\star} &= \begin{cases} \sqrt{\delta \lambda_1(Q)}&\quad \mbox{if} \; \delta< \lambda_1(Q),\\ \sqrt{\delta \lambda_n(Q)}&\quad \mbox{if} \; \delta> \lambda_n(Q),\\ \delta &\quad\mbox{otherwise}. 
\end{cases}\quad \zeta^{\star} &= \begin{cases} \left(1+\dfrac{\delta+\lambda_1(Q)}{2\sqrt{\delta \lambda_1(Q)}}\right)^{-1}& \mbox{if} \; \delta< \lambda_1(Q),\\ \left(1+\dfrac{\delta+\lambda_n(Q)}{2\sqrt{\delta \lambda_n(Q)}}\right)^{-1}& \mbox{if} \; \delta> \lambda_n(Q),\\ \dfrac{1}{2} & \mbox{otherwise}. \end{cases} \end{aligned}$$ See appendix for this and the rest of the proofs. \[cor:L2:standard\] Consider the error dynamics described by (\[eqn:ADMM\_L2\_error\]) and $E$ in . For $\rho=\delta$, $$\begin{aligned} \lambda_i(E)=1/2,\qquad i=1, \dots, n,\end{aligned}$$ and the convergence factor of the error dynamics is independent of $Q$. Note that the convergence factors in Theorem \[thm:L2:standard\] and Corollary \[cor:L2:standard\] are guaranteed for all initial values, and that iterates generated from specific initial values might converge even faster. Furthermore, the results focus on the dual error. The analysis above also applies to the more general case with cost function $\dfrac{1}{2} \bar{x}^\top \bar{Q} \bar{x} + \bar{q}^\top \bar{x} + \dfrac{\delta}{2} \bar{z}^\top \bar{P} \bar{z}$ where $\bar{P} \in {\mathcal{S}_{++}^{n}}$. A change of variables $z=\bar{P}^{1/2}\bar{z}$ is then applied to transform the problem into the form (\[eqn:L2\_formulation\]) with $x=\bar{P}^{1/2} \bar{x}$, $q= \bar{P}^{-1/2}\bar{q}$, and $Q= \bar{P}^{-1/2}\bar{Q}\bar{P}^{-1/2}$.
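The $Q$-independent halving predicted by Corollary \[cor:L2:standard\] is easy to verify numerically: with $\rho=\delta$ one has $E=\frac{1}{2}I$ exactly, and since $\mu^k=\delta z^k$ holds from $k\geq 1$ onward regardless of the starting point, the dual error contracts by exactly $1/2$ from the second iteration on. A sketch (Python; the small instance is illustrative):

```python
# minimize 0.5 x'Qx + q'x + 0.5*delta*||z||^2  s.t. x = z, via the standard iterations
Q = [[3.0, 1.0], [1.0, 2.0]]
q = [1.0, -1.0]
delta = 0.5
rho = delta                          # step-size of the Corollary: all eigenvalues of E are 1/2

def solve2(M, b):                    # direct 2x2 linear solve
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * b[0] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

QrI = [[Q[0][0] + rho, Q[0][1]], [Q[1][0], Q[1][1] + rho]]   # Q + rho*I
z_star = solve2(QrI, [-q[0], -q[1]])                          # optimum -(Q + delta*I)^{-1} q

z, mu = [1.0, -2.0], [0.3, 0.7]      # arbitrary start; mu^0 = delta*z^0 is NOT assumed
err = lambda v: sum((vi - si) ** 2 for vi, si in zip(v, z_star)) ** 0.5

ratios = []
for _ in range(20):
    e0 = err(z)
    x = solve2(QrI, [rho * z[i] - mu[i] - q[i] for i in range(2)])
    z = [(mu[i] + rho * x[i]) / (delta + rho) for i in range(2)]
    mu = [mu[i] + rho * (x[i] - z[i]) for i in range(2)]
    ratios.append(err(z) / e0)
```

Only the very first ratio depends on the starting point; all subsequent ratios equal $1/2$ up to rounding.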
Over-relaxed ADMM iterations ---------------------------- The over-relaxed ADMM iterations for (\[eqn:L2\_formulation\]) can be found by replacing $x^{k+1}$ by $\alpha x^{k+1} + (1-\alpha)z^k$ in the $z$- and $\mu$-updates of . The resulting iterations take the form $$\begin{aligned} \label{eqn:ADMM_L2_iterations_relaxation} \begin{array}[c]{ll} x^{k+1}&=(Q+\rho I)^{-1}(\rho z^k-\mu^k-q),\\ z^{k+1} &= \dfrac{\mu^k+\rho (\alpha x^{k+1}+ (1-\alpha) z^k)}{\delta+\rho},\\ \mu^{k+1} &= \mu^{k}+\rho \left(\alpha (x^{k+1} - z^{k+1}) + (1-\alpha)\left(z^k - z^{k+1}\right)\right). \end{array}\end{aligned}$$ The next result demonstrates that in a certain range of $\alpha$ it is possible to obtain a guaranteed improvement of the convergence factor compared to the classical iterations . \[thm:L2:relaxed\] Consider the $\ell_2$-regularized quadratic minimization problem  and its associated over-relaxed ADMM iterations . For all step-sizes $\rho>0$ and all relaxation parameters $\alpha\in (0, 2\underset{i}{\min}\{ (\lambda_i(Q)+\rho)(\rho+\delta)/(\rho\delta + \rho \lambda_i(Q))\})$, the iterates $x^k$ and $z^k$ converge to the solution of .
Moreover, the dual variable converges at linear rate $\Vert z^{k+1} - z^\star\Vert \leq \zeta_R \Vert z^k - z^\star\Vert$, and the convergence factor $\zeta_R <1$ is strictly smaller than that of the classical ADMM algorithm  if $1<\alpha<2\underset{i}{\min}\{ (\lambda_i(Q)+\rho)(\rho+\delta)/(\rho\delta + \rho \lambda_i(Q))\}$. The jointly optimal step-size, relaxation parameter, and convergence factor $(\rho^\star, \alpha^\star,\zeta_R^\star)$ are given by $$\begin{aligned} \label{eqn:ADMM_L2_Relaxation_optimal} \rho^\star = \delta, \quad \alpha^\star = 2,\quad \zeta^\star_R = 0.\end{aligned}$$ With these parameters, the ADMM iterations converge in one iteration. The upper bound on $\alpha$ which ensures faster convergence of the over-relaxed ADMM iterations  compared to  depends on the eigenvalues of $Q$, $\lambda_i(Q)$, which might be unknown. However, since $(\rho+\delta)(\rho+\lambda_i(Q))> \rho(\lambda_i(Q)+\delta)$, the over-relaxed iterations are guaranteed to converge faster for all $\alpha \in (1,2]$, independently of $Q$. Optimal convergence factor for quadratic programming {#sec:qp} ==================================================== In this section, we consider a quadratic programming (QP) problem of the form $$\begin{aligned} \label{eqn:Quadratic_problem} \begin{array}[c]{ll} \mbox{minimize} & \dfrac{1}{2} x^\top Q x+ q^\top x\\ \mbox{subject to}& Ax \leq c \end{array}\end{aligned}$$ where $Q\in {\mathcal{S}_{++}^{n}}$, $q \in {\mathcal{R}^{n}}$, $A\in \mathcal{R}^{m\times n}$ has full rank, and $c\in \mathcal{R}^{m}$.
Standard ADMM iterations ------------------------ The QP-problem (\[eqn:Quadratic\_problem\]) can be put on ADMM standard form (\[eqn:admm\_standard\_form\]) by introducing a slack vector $z$ and putting an infinite penalty on negative components of $z$, *i.e.* $$\begin{aligned} \label{eqn:Quadratic_problem_1} \begin{array}[c]{ll} \mbox{minimize} & \dfrac{1}{2} x^\top Q x+ q^\top x+ \mathcal{I}_{+}(z)\\ \mbox{subject to}& Ax - c + z = 0. \end{array}\end{aligned}$$ The associated *augmented Lagrangian* is $$\begin{aligned} L_\rho(x,z,u) = \dfrac{1}{2} x^\top Q x+ q^\top x+ \mathcal{I}_{+}(z) + \dfrac{\rho}{2} \Vert Ax - c + z + u \Vert^2_2,\end{aligned}$$ where $u=\mu/\rho$, which leads to the scaled ADMM iterations $$\begin{aligned} \begin{array}[c]{ll} x^{k+1} &= -(Q+\rho A^\top A)^{-1} [q+\rho A^\top(z^k + u^k - c)], \\ z^{k+1} &= \mbox{max}\{0,-A x^{k+1}-u^{k}+c\}, \\ u^{k+1} &= u^{k} + A x^{k+1}-c+z^{k+1}. \end{array} \label{eqn:Quadratic_admm_iterations}\end{aligned}$$ To study the convergence of  we rewrite it in an equivalent form with linear time-varying matrix operators. To this end, we introduce a vector of indicator variables $d^k\in \{0,1\}^{n}$ such that $d^k_i = 0$ if $u^k_i=0$ and $d^k_i=1$ if $u^k_i\neq 0$. From the $z$- and $u$- updates in , one observes that $z_i^k\neq 0\rightarrow u_i^k=0$, *i.e.* $u_i^k\neq 0\rightarrow z_i^k=0$. Hence, $d^k_i =1$ means that at the current iterate, the slack variable $z_i$ in  equals zero; i.e., the $i^{\rm th}$ inequality constraint in  is active. We also introduce the variable vector $v^k\triangleq z^k+u^k$ and let $D^k=\mbox{diag}(d^k)$ so that $D^k v^k= u^k$ and $(I-D^k)v^k=z^k$. Now, the second and third steps of  imply that $v^{k+1} = \left\vert Ax^{k+1}+u^k -c\right\vert = F^{k+1}(A x^{k+1}+ D^k v^k-c)$ where $F^{k+1}\triangleq \mbox{diag}\left ( \text{sign}(A x^{k+1}+ D^k v^k-c) \right )$ and $\text{sign}(\cdot)$ returns the signs of the elements of its vector argument. 
Hence,  becomes $$\begin{aligned} \begin{array}[c]{ll} x^{k+1} &= -(Q+\rho A^\top A)^{-1} [q+\rho A^\top (v^k-c)], \\ v^{k+1} &= \left\vert A x^{k+1}+ D^k v^k - c\right\vert = F^{k+1}(A x^{k+1}+ D^k v^k-c), \\ D^{k+1} &= \dfrac{1}{2}(I+F^{k+1}), \end{array} \label{eqn:Quadratic_admm_iterations_reformed}\end{aligned}$$ where the $D^{k+1}$-update follows from the observation that $$\begin{aligned} \nonumber (D_{ii}^{k+1},\, F_{ii}^{k+1}) =\left\{ \begin{array}[l]{lll} \hspace{-5pt}(0, \,-1) & \hspace{-3pt}\mbox{if} & \hspace{-2pt}v_i^{k+1} = -(A x^{k+1}+u^k - c)_i \\ \hspace{-5pt}(1,\, 1) & \hspace{-3pt}\mbox{if} & \hspace{-2pt}v_i^{k+1} = (A x^{k+1}+ u^k- c)_i \end{array}\right.\end{aligned}$$ Since the $v^k$-iterations will be central in our analysis, we will develop them further. Inserting the expression for $x^{k+1}$ from the first equation of  into the second, we find $$\begin{aligned} \label{eqn:v_recurrence} v^{k+1} &= F^{k+1}\Big(\left( D^k - A (Q/\rho+ A^\top A)^{-1} A^\top \right) v^k\Big) - F^{k+1}\Big(A (Q+\rho A^\top A)^{-1}(q-\rho A^\top c) + c\Big). \end{aligned}$$ Noting that $D^k=\dfrac{1}{2}(I+F^k)$ and introducing $$\begin{aligned} M &\triangleq A (Q/\rho+ A^\top A)^{-1} A^\top,\end{aligned}$$ we obtain $$\begin{aligned} \label{eqn:QP_Fv_sequence} F^{k+1}v^{k+1} - F^k v^k &= \left( \dfrac{I}{2} - M \right) (v^k - v^{k-1}) + \dfrac{1}{2}\left( F^k v^k - F^{k-1} v^{k-1}\right). \end{aligned}$$ We now relate $v^k$ and $F^k v^k$ to the primal and dual residuals, $r^k$ and $s^k$, defined in (\[eq:primal\_res\]) and (\[eq:dual\_res\]): \[prop:w\_residuals\] Consider the primal and dual residuals $r^{k}$ and $s^{k}$ of the QP-ADMM algorithm  and the auxiliary variables $v^k$ and $F^k$.
The following relations hold $$\begin{aligned} &F^{k+1}v^{k+1} - F^kv^{k} = r^{k+1} - \dfrac{1}{\rho}Rs^{k+1} - \Pi_{{\mathcal{N}(A^\top)}}(z^{k+1} - z^k), \label{eqn:w_minus_residual}\\ &v^{k+1} - v^{k} = r^{k+1} + \dfrac{1}{\rho}Rs^{k+1} + \Pi_{{\mathcal{N}(A^\top)}}(z^{k+1} - z^k),\label{eqn:w_plus_residual}\end{aligned}$$ $$\begin{aligned} \label{eqn:r_to_fv} & \Vert r^{k+1} \Vert \leq \Vert F^{k+1} v^{k+1} - F^{k} v^{k} \Vert,\\ & \Vert s^{k+1} \Vert \leq \rho \Vert A\Vert \Vert F^{k+1} v^{k+1} - F^{k} v^{k} \Vert. \label{eqn:s_to_fv}\end{aligned}$$ where 1. $R= A(A^\top A)^{-1}$ and $\Pi_{{\mathcal{N}(A^\top)}} = I-A(A^\top A)^{-1}A^\top$, if $A$ has full column-rank; 2. $R=(AA^\top)^{-1}A$ and $\Pi_{{\mathcal{N}(A^\top)}} = 0$, if $A$ has full row-rank; 3. $R = A^{-1}$ and $\Pi_{{\mathcal{N}(A^\top)}} = 0$, if $A$ is invertible. The next theorem guarantees that  converges linearly to zero in the auxiliary residuals , which implies R-linear convergence of the ADMM algorithm  in terms of the primal and dual residuals. The optimal step-size $\rho^\star$ and the smallest achievable convergence factor are characterized immediately afterwards. \[thm:QP:linear\_rate\] Consider the QP  and the corresponding ADMM iterations . For all values of the step-size $\rho\in{\mathcal{R}^{}}_{++}$, the residual $ F^{k+1}v^{k+1} - F^{k}v^{k}$ converges to zero at linear rate. Furthermore, $ r^k $ and $ s^k $, the primal and dual residuals of , converge R-linearly to zero. \[thm:QP\_optimal\_factor\] Consider the QP  and the corresponding ADMM iterations .
If the constraint matrix $A$ is either full row-rank or invertible then the optimal step-size and convergence factor for the $F^{k+1}v^{k+1}-F^k v^k$ residuals are $$\begin{aligned}\label{eqn:QP_optimal_factor} \rho^\star &= \left(\sqrt{\lambda_1(A Q^{-1} A^\top) \lambda_n(A Q^{-1} A^\top)}\right)^{-1},\\ \zeta^\star &= \dfrac{\lambda_n(A Q^{-1} A^\top)}{\lambda_n(A Q^{-1} A^\top) + \sqrt{\lambda_1(A Q^{-1} A^\top)\lambda_n(A Q^{-1} A^\top)}}. \end{aligned}$$ Although the convergence result of Theorem \[thm:QP:linear\_rate\] holds for all QPs of the form , optimality of the step-size choice proposed in Theorem \[thm:QP\_optimal\_factor\] is only established for problems where the constraint matrix $A$ has full row-rank or it is invertible. However, as shown next, the convergence factor can be arbitrarily close to $1$ when rows of $A$ are linearly dependent. \[thm:QP\_slow\] Define variables $$\begin{aligned} \epsilon_k &\triangleq \dfrac{\|M(v^k - v^{k-1})\|}{\|F^kv^k - F^{k-1} v^{k-1}\|},\quad \quad \delta_k\triangleq\dfrac{ \|D^kv^k - D^{k-1} v^{k-1}\| }{\|F^kv^k - F^{k-1} v^{k-1}\|},\\ \tilde{\zeta}(\rho) &\triangleq \max_{i:\;\lambda_i(AQ^{-1}A^\top) > 0}\left\{ \left\vert \dfrac{\rho \lambda_i(AQ^{-1}A^\top)}{1+\rho\lambda_i(A Q^{-1} A^\top)} - \dfrac{1}{2}\right\vert + \dfrac{1}{2}\right\},\end{aligned}$$ and $\underline\zeta^k\triangleq\vert \delta_k - \epsilon_k \vert$. The convergence factor $\zeta$ of the residual $F^{k+1}v^{k+1} - F^kv^k$ is lower bounded by $$\label{eq:QP_lower_bound} \underline{\zeta}\triangleq \max_k\; \underline\zeta^k < 1.$$ Furthermore, given an arbitrarily small $\xi\in(0,\, \frac{1}{2})$ and $\rho>0$, we have the following results: 1. the inequality $\underline{\zeta}<\tilde{\zeta}(\rho)<1$ holds for all $\delta_k\in[0,\;1]$ if and only if the nullity of $A$ is zero; 2. when the nullity of $A$ is nonzero and $\epsilon_k \geq 1-\xi$, it holds that $\underline{\zeta} \leq \tilde{\zeta}(\rho) + \sqrt{\dfrac{\xi}{2}}$; 3. 
when the nullity of $A$ is nonzero, $\delta_k \geq 1-\xi$, and $\|\Pi_{{\mathcal{N}(A^\top)}}(v^k - v^{k-1}) \|/\|v^k - v^{k-1}\| \geq \sqrt{1- \xi^2/\|M\|^2}$, it follows that $\underline{\zeta}\geq 1-2\xi$. The previous result establishes that slow convergence can occur locally for any value of $\rho$ when the nullity of $A$ is nonzero and $\xi$ is small. However, as part (ii) of Theorem \[thm:QP\_slow\] suggests, in these cases,  can still work as a heuristic to reduce the convergence time if $\lambda_1(AQ^{-1}A^\top)$ is taken as the smallest nonzero eigenvalue of $AQ^{-1}A^\top$. In Section \[sec:qp\_evaluation\], we show numerically that this heuristic performs well in different problem setups. Over-relaxed ADMM iterations ---------------------------- Consider the relaxation of  obtained by replacing $Ax^{k+1}$ in the $z$- and $u$-updates with $\alpha A x^{k+1}- (1-\alpha) (z^k - c)$. The corresponding relaxed iterations read $$\begin{aligned} \begin{array}[c]{ll} x^{k+1} &= -(Q+\rho A^\top A)^{-1} [q+\rho A^\top(z^k + u^k - c)], \\ z^{k+1} &= \mbox{max}\{0,-\alpha (A x^{k+1}-c)+(1-\alpha)z^k-u^{k}\}, \\ u^{k+1} &= u^{k} + \alpha (A x^{k+1}+z^{k+1}-c)+(1-\alpha)(z^{k+1}-z^{k}). \end{array} \label{eqn:Quadratic_admm_iterations_relaxation}\end{aligned}$$ Next, we study convergence and optimality properties of these iterations. We observe: \[lem:optimal\_fixed\_point\] Any fixed-point of  corresponds to a global optimum of . As in the analysis of , introduce $v^k = z^k+u^k$ and $d^k\in {\mathcal{R}^{n}}$ with $d^k_i=0$ if $u_i^k=0$ and $d_i^k=1$ otherwise. Adding the second and the third step of  yields $v^{k+1} = \left\vert \alpha (A x^{k+1}-c)-(1-\alpha)z^k + u^k \right\vert$.
Moreover, $D^k = \mbox{diag}(d^k)$ satisfies $D^k v^k = u^k$ and $(I-D^k)v^k = z^k$, so  can be rewritten as $$\begin{aligned} \begin{array}[c]{ll} x^{k+1} &= -(Q+\rho A^\top A)^{-1} [q+\rho A^\top (v^k-c)], \\ v^{k+1} &= F^{k+1}\Big( \alpha \left(A x^{k+1}+D^k v^k - c \right)\Big) - F^{k+1}\Big((1-\alpha) (I-2D^k)v^k \Big), \\ D^{k+1} &= \dfrac{1}{2}(I+F^{k+1}), \end{array} \label{eqn:Quadratic_admm_iterations_reformed_relaxation}\end{aligned}$$ where $F^{k+1}\triangleq \mbox{diag}\left ( \text{sign}\left(\alpha(A x^{k+1}+D^k v^k - c) -(1-\alpha)(I-2D^k)v^k \right) \right )$. Defining $M \triangleq A (Q/\rho+ A^\top A)^{-1} A^\top$ and substituting the expression for $x^{k+1}$ in  into the expression for $v^{k+1}$ yields $$\begin{aligned} \label{eqn:v_recurrence_relaxation} v^{k+1} &= F^{k+1}\Big( \left(-\alpha M + (2-\alpha)D^k - (1-\alpha)I \right) v^k\Big) - F^{k+1}\Big( \alpha A (Q+\rho A^\top A)^{-1}(q-\rho A^\top c) + \alpha c\Big). \end{aligned}$$ As in the previous section, we replace $D^k$ by $\dfrac{1}{2}(I+F^k)$ in  and form $F^{k+1}v^{k+1}-F^k v^k$: $$\begin{aligned} \label{eqn:QP_Fv_sequence_relaxation} F^{k+1}v^{k+1} -F^k v^k &= \dfrac{\alpha}{2}\left(I-2M\right) \left(v^{k}-v^{k-1}\right) + (1-\dfrac{\alpha}{2})\left(F^{k}v^{k} -F^{k-1} v^{k-1}\right). \end{aligned}$$ The next theorem characterizes the convergence rate of the relaxed ADMM iterations. \[thm:QP\_relaxation\_convergence\] Consider the QP and the corresponding relaxed ADMM iterations . If $$\begin{aligned} \rho\in\mathcal{R}_{++}, \quad \alpha \in (0, 2],\end{aligned}$$ then the equivalent fixed point iteration  converges linearly in terms of $F^{k+1}v^{k+1} -F^k v^k$ residual. Moreover, $ r^k$ and $ s^k$, the primal and dual residuals of , converge R-linearly to zero. 
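The convergence guarantee above can be exercised on a small random instance. A sketch assuming NumPy, implementing the relaxed iterations (\[eqn:Quadratic\_admm\_iterations\_relaxation\]) and checking the bookkeeping identities used in the analysis, namely $v^{k+1} = \vert \alpha(Ax^{k+1}-c)-(1-\alpha)z^k+u^k\vert$ and the complementarity $u_i^k \neq 0 \rightarrow z_i^k = 0$; the problem data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 6
B = rng.standard_normal((n, n))
Q = B @ B.T + n * np.eye(n)          # Q in S_++ (illustrative instance)
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))
c = rng.standard_normal(m) + 1.0
rho, alpha = 1.0, 1.5                # any rho > 0 and alpha in (0, 2]

z, u = np.zeros(m), np.zeros(m)
K = np.linalg.inv(Q + rho * A.T @ A)
for _ in range(50):
    z_prev, u_prev = z.copy(), u.copy()
    x = -K @ (q + rho * A.T @ (z + u - c))
    z = np.maximum(0.0, -alpha * (A @ x - c) + (1 - alpha) * z_prev - u_prev)
    u = u_prev + alpha * (A @ x + z - c) + (1 - alpha) * (z - z_prev)
    v = z + u                        # v^k = z^k + u^k
    # absolute-value form of the v-update
    assert np.allclose(v, np.abs(alpha * (A @ x - c) - (1 - alpha) * z_prev + u_prev))
    # u_i != 0 implies z_i = 0, so D^k v^k = u^k and (I - D^k) v^k = z^k
    assert np.all((np.abs(u) < 1e-8) | (np.abs(z) < 1e-8))
```

Setting `alpha = 1.0` recovers the standard iterations (\[eqn:Quadratic\_admm\_iterations\]) as a special case.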
Next, we restrict our attention to the case where $A$ is either invertible or full row-rank to be able to derive the jointly optimal step-size and over-relaxation parameter, as well as an explicit expression for the associated convergence factor. The result shows that the over-relaxed ADMM iterates can yield a significant speed-up compared to the standard ADMM iterations. \[thm:QP\_relaxation\_optimal\_factor\] Consider the QP and the corresponding relaxed ADMM iterations . If the constraint matrix $A$ has full row-rank or is invertible, then the jointly optimal step-size, relaxation parameter, and convergence factor with respect to the $F^{k+1}v^{k+1} -F^k v^k $ residual are $$\begin{aligned} \label{eqn:QP_relaxation_optimal_factor} \rho^\star &= \left(\sqrt{\lambda_1(AQ^{-1}A^\top)\; \lambda_n(AQ^{-1}A^\top)}\right)^{-1}, \quad \alpha^\star = 2,\\ \zeta_R^\star &= \dfrac{\lambda_n(AQ^{-1}A^\top)-\sqrt{\lambda_1(AQ^{-1}A^\top)\;\lambda_n(AQ^{-1}A^\top)}}{\lambda_n(AQ^{-1}A^\top) + \sqrt{\lambda_1(AQ^{-1}A^\top)\;\lambda_n(AQ^{-1}A^\top)}}. \end{aligned}$$ Moreover, when the iterations  are over-relaxed, i.e. $\alpha \in (1,2]$, their iterates have a smaller convergence factor than that of . Optimal constraint preconditioning ---------------------------------- In this section, we consider another technique to improve the convergence of the ADMM method. The approach is based on the observation that the optimal convergence factors $\zeta^\star$ and $\zeta_R^\star$ from Theorem \[thm:QP\_optimal\_factor\] and Theorem \[thm:QP\_relaxation\_optimal\_factor\] are monotone increasing in the ratio $\lambda_n(AQ^{-1}A^\top)/\lambda_1(AQ^{-1}A^\top)$. This ratio can be decreased, without changing the complexity of the ADMM algorithm, by scaling the equality constraint in  by a diagonal matrix $L\in{\mathcal{S}_{++}^{m}}$, i.e., replacing $Ax-c+z=0$ by $L\left(Ax-c+z\right) = 0$. Let $\bar{A} \triangleq LA$, $\bar{z} \triangleq Lz$, and $\bar{c}\triangleq Lc$.
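Both sets of optimal parameters depend only on the extreme nonzero eigenvalues of $AQ^{-1}A^\top$ (or of its scaled counterpart $LAQ^{-1}A^\top L$). A minimal sketch of the computation, assuming NumPy and a full row-rank $A$; the function and variable names are ours:

```python
import numpy as np

def admm_qp_parameters(Q, A, tol=1e-10):
    """Parameter rules of (eqn:QP_optimal_factor) and (eqn:QP_relaxation_optimal_factor)."""
    S = A @ np.linalg.inv(Q) @ A.T
    lam = np.sort(np.linalg.eigvalsh(S))
    lam = lam[lam > tol]              # nonzero eigenvalues (heuristic if A is rank-deficient)
    l1, ln = lam[0], lam[-1]
    g = np.sqrt(l1 * ln)
    rho_star = 1.0 / g
    zeta_star = ln / (ln + g)         # standard ADMM factor
    zeta_R_star = (ln - g) / (ln + g) # over-relaxed factor with alpha* = 2
    return rho_star, zeta_star, zeta_R_star

rng = np.random.default_rng(2)
n, m = 6, 4                           # m <= n: A has full row-rank almost surely
B = rng.standard_normal((n, n))
Q = B @ B.T + n * np.eye(n)
A = rng.standard_normal((m, n))
rho, zeta, zeta_R = admm_qp_parameters(Q, A)
# over-relaxation with alpha* = 2 always improves on the standard factor
assert 0 <= zeta_R < zeta < 1
```

To account for the diagonal scaling introduced above, one would simply pass $\bar{A}=LA$ in place of $A$.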
The resulting scaled ADMM iterations are derived by replacing $A$, $z$, and $c$ in  and  by the new variables $\bar{A}$, $\bar{z}$, and $\bar{c}$, respectively. Furthermore, the results of Theorem \[thm:QP\_optimal\_factor\] and Theorem \[thm:QP\_relaxation\_optimal\_factor\] can be applied to the scaled ADMM iterations in terms of the new variables. Although these theorems only provide the optimal step-size parameters for the QP when the constraint matrices are invertible or have full row-rank, we use the expressions as heuristics when the constraint matrix has full column-rank. Hence, in the following we consider $\lambda_n(\bar{A} Q^{-1} \bar{A}^{\top})$ and $\lambda_1(\bar{A} Q^{-1} \bar{A}^{\top})$ to be the largest and smallest nonzero eigenvalues of $\bar{A} Q^{-1} \bar{A}^{\top} = LAQ^{-1}A^\top L$, respectively, and minimize the ratio $\lambda_n/{\lambda_1}$ in order to minimize the convergence factors $\zeta^\star$ and $\zeta_R^\star$. A similar problem was also studied in [@GIS:14; @GSJ:13]. \[thm:QP\_optimal\_preconditioning\] Let $R_q R_q^\top = Q^{-1}$ be the Cholesky factorization of $Q^{-1}$ and $P\in {\mathcal{R}^{n \times (n-s)}}$ be a matrix whose columns are orthonormal vectors spanning ${\mbox{Im}(R_q^\top A^\top)}$, with $s$ being the dimension of ${\mathcal{N}(A)}$, and let $\lambda_n(LAQ^{-1}A^\top L)$ and $\lambda_1(LAQ^{-1}A^\top L)$ be the largest and smallest nonzero eigenvalues of $LAQ^{-1}A^\top L$.
The diagonal scaling matrix $L^\star \in{\mathcal{S}_{++}^{m}}$ that minimizes the eigenvalue ratio $\lambda_n(LAQ^{-1}A^\top L)/\lambda_1(LAQ^{-1}A^\top L)$ can be obtained by solving the convex problem $$\label{eqn:QP_optimal_scaling_convex} \begin{aligned} \begin{array}{ll} \underset{{t\in{\mathcal{R}^{}},\;w\in{\mathcal{R}^{m}}}}{\mbox{minimize}} & t\\ \mbox{subject to} & W=\mbox{diag}(w),\; w>0,\\ & tI - R_q^\top A^\top W A R_q \in{\mathcal{S}_{+}^{n}},\\ & P^\top (R_q^\top A^\top W A R_q - I )P \in{\mathcal{S}_{+}^{n-s}}, \end{array} \end{aligned}$$ and setting $L^\star = W^{\star^{1/2}}$. So far, we have characterized the convergence factor of the ADMM algorithm based on general properties of the sequence $\{ F^k v^k\}$. However, if we know a priori which constraints will be active during the ADMM iterations, our parameter selection rules  and  may not be optimal. To illustrate this fact, we will now analyze the two extreme situations where no and all constraints are active in each iteration and derive the associated optimal ADMM parameters. Special cases of quadratic programming -------------------------------------- The first result deals with the case where the constraints of  are never active. This could happen, for example, if we use the constraints to impose upper and lower bounds on the decision variables, and use very loose bounds. \[prop:QP\_when\_careful\_1\] Assume that ${F^{k+1} = F^{k} = -I}$ for all epochs $k\in {\mathcal{R}^{}}_{+}$ in  and . Then the modified ADMM algorithm  attains its minimal convergence factor for the parameters $$\begin{aligned} \alpha = 1,\quad \rho \rightarrow 0.\end{aligned}$$ In this case, the iterations  coincide with  and their convergence factors are minimized: $\zeta = \zeta_R \rightarrow 0$. The next proposition addresses another extreme scenario when the ADMM iterates are operating on the active set of the quadratic program . \[prop:QP\_when\_careful\_2\] Suppose that $F^{k+1} = F^{k} = I$ for all $k\in {\mathcal{R}^{}}_+$ in  and .
Then the relaxed ADMM algorithm  attains its minimal convergence factor for the parameters $$\begin{aligned} \alpha = 1,\quad \rho \rightarrow \infty.\end{aligned}$$ In this case, the iterations  coincide with  and their convergence factors are minimized: $\zeta = \zeta_R \rightarrow 0$. It is worthwhile to mention that when  is defined so that its constraints are active (inactive), then the $s^{k}$ ($r^k$) residuals of the ADMM algorithm remain zero for all updates $k \geq 2$. Consider the optimal convergence factor $\zeta^\star = \dfrac{\lambda_n(AQ^{-1}A^\top)}{\lambda_n(AQ^{-1}A^\top)+\sqrt{\lambda_1(AQ^{-1}A^\top)\; \lambda_n(AQ^{-1}A^\top)}}$ given in Theorem \[thm:QP\_optimal\_factor\]. Given that $\lambda_1(AQ^{-1}A^\top)$ and $\lambda_n(AQ^{-1}A^\top)$ are the two nonzero bounding eigenvalues of $AQ^{-1}A^\top$, one sees that if $\lambda_n(AQ^{-1}A^\top)$ grows unboundedly or $\lambda_1(AQ^{-1}A^\top)$ tends to zero, then the performance of ADMM can be arbitrarily slow. The next lemma shows that the extreme eigenvalues of $AQ^{-1}A^\top$ are bounded by those of $Q$ and $AA^\top$. \[lem:QP\_Bounding\_eigenvalues\] Let $\lambda_1(AQ^{-1}A^\top)$ and $\lambda_n(AQ^{-1}A^\top)$ denote the smallest and largest nonzero eigenvalues of $AQ^{-1}A^\top$. Then we have $$\begin{aligned} \label{eqn:QP_bounding_eigenvalues} \dfrac{\lambda_1(AA^\top)}{\lambda_n(Q)} \leq \lambda_1(AQ^{-1}A^\top), \quad \lambda_n(AQ^{-1}A^\top) \leq \dfrac{\lambda_n(AA^\top)}{\lambda_1(Q)}. \end{aligned}$$ The proof follows [@HoJ:85 p.225]. Specifically, for the congruent matrices $Q^{-1}$ and $AQ^{-1}A^\top$ there exist numbers $\tau_i \in {\mathcal{R}^{}}_{+}$ such that $\lambda_1(AA^\top) \leq \tau_i \leq \lambda_n(AA^\top)$ and $\tau_i \lambda_i(Q^{-1})=\lambda_i(AQ^{-1}A^\top)$. For $i=1$ and $i=n$, substituting $\tau_i = \dfrac{\lambda_i(AQ^{-1}A^\top)}{\lambda_i(Q^{-1})}$ leads to .
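The bounds in (\[eqn:QP\_bounding\_eigenvalues\]) can be sanity-checked numerically; a sketch assuming NumPy, with a full row-rank $A$ so that all eigenvalues of $AQ^{-1}A^\top$ are nonzero:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 8, 5
B = rng.standard_normal((n, n))
Q = B @ B.T + np.eye(n)                 # Q in S_++
A = rng.standard_normal((m, n))         # full row-rank with probability 1

lam_S = np.sort(np.linalg.eigvalsh(A @ np.linalg.inv(Q) @ A.T))
lam_Q = np.sort(np.linalg.eigvalsh(Q))
lam_AAt = np.sort(np.linalg.eigvalsh(A @ A.T))

# lambda_1(AA^T)/lambda_n(Q) <= lambda_1(A Q^{-1} A^T)
assert lam_AAt[0] / lam_Q[-1] <= lam_S[0] + 1e-10
# lambda_n(A Q^{-1} A^T) <= lambda_n(AA^T)/lambda_1(Q)
assert lam_S[-1] <= lam_AAt[-1] / lam_Q[0] + 1e-10
```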
Here we assume that the choice of $Q\in {\mathcal{S}_{++}^{n}}$ is reasonable in the sense that its positive eigenvalues are bounded away from zero and infinity. However, the extreme eigenvalues of the matrix $AA^\top$ can still render the performance of ADMM slow. In particular, the next lemma observes that if $\lambda_1(AA^\top) = \epsilon \approx 0$, then the performance of the ADMM iterates , regardless of the choice of $\rho$, is arbitrarily poor. \[lem:QP\_poor\_performance\] Consider the quadratic program  and the ADMM iterations  to solve it. Assume $F^{k+1}=F^{k}$ and there exists a $\bar{v}\triangleq v^{k}-v^{k-1}\in{\mathcal{R}^{n}}$ such that $M \bar{v} = \epsilon \bar{v}$ for $\epsilon >0$. Then $$\begin{aligned} (1-\epsilon) \Vert v^k - v^{k-1}\Vert \leq \Vert v^{k+1}-v^k \Vert \leq \Vert v^k - v^{k-1}\Vert. \end{aligned}$$ Recall  and set $F^{k+1} = F^k = F$ to obtain $$\begin{aligned} F(v^{k+1} - v^k) = \left(\dfrac{I}{2}-M\right) \left(v^{k}-v^{k-1}\right) + \dfrac{1}{2}F \left(v^{k} - v^{k-1}\right). \end{aligned}$$ Taking into account that $M(v^k-v^{k-1}) = \epsilon (v^k - v^{k-1})$ and multiplying both sides of the above equality by $F$, we have $$\begin{aligned} v^{k+1} - v^k &= \dfrac{1}{2}(I+F)(v^k - v^{k-1}) - \epsilon F (v^k-v^{k-1})\\ &\stackrel{(a)}{=} D(v^k - v^{k-1}) - \epsilon F (v^k-v^{k-1}) \end{aligned}$$ where (a) uses $D = \dfrac{1}{2}(I+F)$. Denote $v^t=v_1^t + v_0^t$ where $v_1^t\triangleq D v^t$ and $v_0^t \triangleq (I-D)v^t$. Noting that $DD = D$, $DF = D$, and $D-F = I-D$, one can decompose the above equality into $$\begin{aligned} v_0^{k+1} - v_0^k = \epsilon (v_0^{k} - v_0^{k-1}), \quad v_1^{k+1} - v_1^k = (1-\epsilon) (v_1^{k} - v_1^{k-1}). \end{aligned}$$ It is easy to check that $v_0^t \perp v_1^t$. Hence, $$\begin{aligned} \Vert v^{k+1} - v^k \Vert^2 &= \Vert v_0^{k+1} - v_0^{k} \Vert^2 + \Vert v_1^{k+1} - v_1^{k} \Vert^2 \\ &= \epsilon^2 \Vert v_0^{k} - v_0^{k-1} \Vert^2 + (1-\epsilon)^2\Vert v_1^{k} - v_1^{k-1} \Vert^2.
\end{aligned}$$ The interpretation of the above lemma is that, in local regimes where $F^k$ remains unchanged, the components of the residual $\Vert v^{k+1} - v^k \Vert$ corresponding to inactive constraints (those components $i$ with $D_{ii} = 0$) decay quickly with factor $\epsilon$, whereas the components corresponding to active constraints (those with $D_{ii} = 1$) decay arbitrarily slowly with factor $1-\epsilon$. A first approach to avoid the conditions specified in Lemma \[lem:QP\_poor\_performance\] is to boost the smallest nonzero eigenvalue of $M$ to make sure that it is bounded away from $0$. From  and Lemma \[lem:QP\_Bounding\_eigenvalues\] one concludes that $\lambda_1(AA^\top)$ has a direct impact on the smallest nonzero eigenvalue of $M$, and a way to cancel this effect is to increase $\rho$ to bound it away from $0$. However, very large values of $\rho$ cause the convergence factor $\zeta$ in   to approach $1$. Fig. X illustrates this effect: increasing $\rho> \rho^\star$ temporarily improves the convergence factor up to some threshold, after which further increases deteriorate the performance. Another interesting aspect of the dynamics of the ADMM iterations is their behavior in localized regions. In particular, when $F^{k+1}=F^k$ for some $k\in {\mathcal{R}^{}}_{+}$, one might ask whether the general rules of $(\alpha,\rho)$ selection stated in Theorem \[thm:QP\_relaxation\_optimal\_factor\] prevail. In what comes next, we address this question via a local analysis of the iterations in . Assume $F^{k}=F^{k+1} =\dots= F^{k+m} = F$ for some $m,k\in {\mathcal{R}^{}}_{+}$. This implies that the set of active and inactive constraints of  remains unchanged over these successive epochs. Then  takes the form $$\begin{aligned} v^{k+1}-v^k = \left(\alpha F (\dfrac{I}{2}-M)+ (1-\dfrac{\alpha}{2})I\right) (v^k-v^{k-1}) .\end{aligned}$$ Denote $v_0^k = (I-D^k)v^k=\dfrac{1}{2}(I-F)v^k$, and $v_1^k = D^k v^k = \dfrac{1}{2}(I+F)v^k$.
Essentially, $v_0^k$ is a vector that keeps the components of $v^k$ corresponding to the $-1$ diagonal elements of $F^k$ and sets the rest to zero. On the other hand, $v_1^k$, as the complement of $v_0^k$, maintains the components of $v^k$ corresponding to the $+1$ diagonal elements of $F^k$ and sets the rest to zero. With this definition, $v_0^k$ corresponds to the inactive components of the constraint set while $v_1^k$ maintains the active ones. Given that $F$ is constant over two successive iterations of the above equality, we have $$\begin{aligned} v_0^{k+1}-v_0^k &= \Big((1-\alpha)I+\alpha M\Big) (v_0^k-v_0^{k-1}),\\ v_1^{k+1}-v_1^k &= \left(I-\alpha M\right) (v_1^k-v_1^{k-1}).\end{aligned}$$ Now from $v=v_0+v_1$, we conclude $$\begin{aligned} v^{k+1}-v^k = \left((1-\alpha)I+\alpha M\right) (v_0^k-v_0^{k-1}) + \left(I-\alpha M\right)(v_1^k - v_1^{k-1}) .\end{aligned}$$ Combining this with the triangle inequality leads to $$\begin{aligned} \Vert v^{k+1}-v^k \Vert \leq \left\Vert (1-\alpha)I+\alpha M\right \Vert \Vert v_0^k-v_0^{k-1}\Vert + \Vert I-\alpha M\Vert \Vert v_1^k - v_1^{k-1}\Vert .\end{aligned}$$ Numerical examples {#sec:qp_evaluation} ================== In this section, we evaluate our parameter selection rules on numerical examples. First, we illustrate the convergence factor of ADMM and gradient algorithms for a family of $\ell_2$-regularized quadratic problems. These examples demonstrate that the ADMM method converges faster than the gradient method for certain ranges of the regularization parameter $\delta$, and slower for other values. Then, we consider QP-problems and compare the performance of the over-relaxed ADMM algorithm with an alternative accelerated ADMM method presented in [@GOS:2012]. The two algorithms are also applied to a Model Predictive Control (MPC) benchmark where QP-problems are solved repeatedly over time for fixed matrices $Q$ and $A$ but varying vectors $q$ and $b$.
$\ell_2$-regularized quadratic minimization via ADMM ---------------------------------------------------- We consider the $\ell_2$-regularized quadratic minimization problem  for a $Q\in \mathcal{S}_{++}^{100}$ with condition number $1.2 \times 10^3$ and for a range of regularization parameters $\delta$. Fig. \[fig:l2\_rate\] shows how the optimal convergence factor of ADMM depends on $\delta$. The results are shown for two step-size rules: $\rho=\delta$ and $\rho=\rho^\star$ given in . For comparison, the gray and dashed-gray curves show the optimal convergence factor of the gradient method $$\begin{aligned} &x^{k+1}=x^k - \gamma (Qx^k + q + \delta x^k),\\ \intertext{with step-size $\gamma<2/(\lambda_n(Q)+\delta)$ and multi-step gradient iterations of the form} &x^{k+1}=x^k - a (Q x^k + q + \delta x^k) + b (x^k- x^{k-1}).\end{aligned}$$ The latter algorithm is known as the heavy-ball method and significantly outperforms the standard gradient method on ill-conditioned problems [@polyak]. The algorithm has two parameters: $a<2(1+b)/(\lambda_n(Q)+\delta)$ and $b\in [0,1]$. For our problem, since the cost function is quadratic and its Hessian $\nabla^2 f(x)= Q + \delta I$ is bounded between $l=\lambda_1(Q)+\delta$ and $u=\lambda_n(Q)+\delta$, the optimal step-size for the gradient method is $\gamma^{\star}=2/(l+u)$ and the optimal parameters for the heavy-ball method are $a^\star=4/(\sqrt{l}+\sqrt{u})^2$ and $b^\star = (\sqrt{u}-\sqrt{l})^2/(\sqrt{l}+\sqrt{u})^2$ [@polyak]. Figure \[fig:l2\_rate\] illustrates the convergence properties of the ADMM method under both step-size rules. The optimal step-size rule gives significant speedups of the ADMM for small or large values of the regularization parameter $\delta$. This phenomenon can be intuitively explained based on the interplay of the two parts of the objective function in . For extremely small values of $\delta$, one sees that the $x$-part of the objective becomes dominant compared to the $z$-part.
Consequently, using the optimal step-size in , the $z$-update is dictated to quickly follow the $x$-update. A similar reasoning holds when $\delta$ is large, in which case the $x$-update has to obey the $z$-update. It is interesting to observe that ADMM outperforms the gradient and heavy-ball methods for small $\delta$ (an ill-conditioned problem), but actually performs worse as $\delta$ grows large (i.e. when the regularization makes the overall problem well-conditioned). It is noteworthy that the relaxed ADMM method solves the same problem in one step (convergence factor $\zeta^\star_R=0$). ![Convergence factor of the ADMM, gradient, and heavy-ball methods for $\ell_2$ regularized minimization with fixed $Q$-matrix and different values of the regularization parameter $\delta$.[]{data-label="fig:l2_rate"}](Figures/L2/L2_admm_gradient_Hb){width=".4\columnwidth"} Quadratic programming via ADMM ------------------------------ Next, we evaluate our step-size rules for ADMM-based quadratic programming and compare their performance with that of other accelerated ADMM variants from the literature. ### Accelerated ADMM One recent proposal for accelerating the ADMM iterations is called *fast-ADMM* [@GOS:2012] and consists of the following iterations $$\begin{aligned} \label{eqn:admm_nesterov_acceleration} \begin{array}[c]{ll} x^{k+1} &= \underset{x}{\operatorname{argmin}}\, L_{\rho}(x,\hat{z}^{k}, \hat{u}^{k}), \\ z^{k+1} &= \underset{z}{\operatorname{argmin}}\, L_{\rho}(x^{k+1}, z, \hat{u}^k), \\ u^{k+1} &= \hat{u}^{k} + Ax^{k+1}+Bz^{k+1}-c, \\ \hat{z}^{k+1} &= \alpha^k z^{k+1}+(1-\alpha^k)z^k,\\ \hat{u}^{k+1} &= \alpha^k u^{k+1} + (1-\alpha^k)u^k.
\end{array}\end{aligned}$$ The relaxation parameter $\alpha^k$ in the fast-ADMM method is defined based on Nesterov’s order-optimal method [@Nesterov03] combined with an innovative restart rule where $\alpha^k$ is given by $$\begin{aligned} \label{eqn:fast_ADMM_restart_rule} \alpha^k =\left\{ \begin{array}[c]{ll} 1+\dfrac{\beta^k -1}{\beta^{k+1}} & \operatorname{if}\, \dfrac{\max(\Vert r^k \Vert, \Vert s^k\Vert)}{\max(\Vert r^{k-1}\Vert, \Vert s^{k-1}\Vert)}< 1, \\ 1 &\mbox{otherwise}, \end{array}\right. \end{aligned}$$ where $\beta^1=1$, and $\beta^{k+1}= \dfrac{1+\sqrt{1+4{\beta^k}^2}}{2}$ for $k>1$. The restart rule ensures that  is updated in a descent direction with respect to the primal-dual residuals. To compare the performance of the over-relaxed ADMM iterations with our proposed parameters to that of fast-ADMM, we conducted several numerical examples. For the first numerical comparison, we generated several instances of ; Figure \[fig:qp\_comparison\] shows the results for two representative examples. In the first case, $A\in \mathcal{R}^{50\times 100}$ and $Q\in \mathcal{S}_{++}^{100}$ with condition number $1.95 \times 10^{3}$; $32$ constraints are active at the optimal solution. In the second case, $A\in \mathcal{R}^{200\times 100}$ and $Q\in \mathcal{S}_{++}^{100}$, where the condition number of $Q$ is $7.1\times 10^3$. The polyhedral constraints correspond to random box-constraints, of which $66$ are active at optimality. We evaluate four algorithms: the ADMM iterates in  with and without over-relaxation and the corresponding tuning rules developed in this paper, and the fast-ADMM iterates  with $\rho=1$ as proposed in [@GOS:2012] and with our $\rho=\rho^\star$. The convergence of the corresponding algorithms in terms of the sum of the primal and dual residuals $\Vert r^k\Vert+ \Vert s^k\Vert$ is depicted in Fig. \[fig:qp\_comparison\].
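A sketch of the relaxation-parameter schedule (\[eqn:fast\_ADMM\_restart\_rule\]), assuming Python; the residual values fed in below are illustrative stand-ins for $\max(\Vert r^k\Vert, \Vert s^k\Vert)$, and resetting $\beta$ to $1$ on a restart is an assumption of this sketch rather than something stated above:

```python
import math

def fast_admm_alpha(res_prev, res_curr, beta):
    """One step of the restart rule; res_* = max(||r||, ||s||) at k-1 and k.

    Returns (alpha_k, next beta). Resetting beta to 1 on a restart is an
    assumption of this sketch.
    """
    beta_next = (1.0 + math.sqrt(1.0 + 4.0 * beta ** 2)) / 2.0
    if res_curr < res_prev:          # combined residual decreased: accelerate
        return 1.0 + (beta - 1.0) / beta_next, beta_next
    return 1.0, 1.0                  # otherwise fall back to a plain ADMM step

beta, alphas = 1.0, []
for res_prev, res_curr in [(10.0, 8.0), (8.0, 6.0), (6.0, 7.0), (7.0, 5.0)]:
    alpha, beta = fast_admm_alpha(res_prev, res_curr, beta)
    alphas.append(alpha)
# acceleration kicks in while the residual decreases; a restart (alpha = 1)
# is triggered at the third step, where the residual grows
```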
The plots show that our tuning rules yield a significant improvement over the fast-ADMM algorithm. To the best of our knowledge, there are currently no results on optimal step-size parameters for the fast-ADMM method. However, based on our numerical investigations, we observed that the performance of the fast-ADMM algorithm improves significantly when our optimal step-size $\rho^\star$ is employed (as illustrated in Fig. \[fig:qp\_comparison\]). In the next section we perform another comparison of the three algorithms, using the optimal $\rho$-value for fast-ADMM obtained by an extensive search. ### Model Predictive Control Consider the discrete-time linear system $$\begin{aligned} \label{eq:control_system} x_{t+1} &= Hx_t + J u_t + J_r r,\end{aligned}$$ where $t\geq 0$ is the time index, $x_t \in {\mathcal{R}^{n_x}}$ is the state, $u_t \in {\mathcal{R}^{n_u}}$ is the control input, $r \in {\mathcal{R}^{n_r}}$ is a constant reference signal, and $H\in {\mathcal{R}^{n_x \times n_x}}$, $J\in {\mathcal{R}^{n_x \times n_u}}$, and $J_r\in {\mathcal{R}^{n_x \times n_r}}$ are fixed matrices. Model predictive control aims at solving the following optimization problem $$\begin{aligned} \label{eqn:MPC_problem_1} \begin{array}[c]{ll} \underset{\{u_i\}_0^{N_p-1}}{\mbox{minimize}} & \dfrac{1}{2}\sum_{i=0}^{N_p - 1}(x_i-x_r)^{\top}Q_x (x_i-x_r) + (u_i-u_r)^{\top} R (u_i-u_r) + (x_{N_p}-x_r)^\top Q_N (x_{N_p}-x_r)\\ \mbox{subject to}& x_{t+1} = Hx_t + J u_t + J_r r\quad \forall t,\\ & x_t\in\mathcal{C}_x\quad \forall t,\\ & u_t\in\mathcal{C}_u\quad \forall t, \end{array} \end{aligned}$$ where $x_0$, $x_r$, and $u_r$ are given, $Q_x\in {\mathcal{S}_{++}^{n_x}}$, $R\in {\mathcal{S}_{++}^{n_u}}$, and $Q_N\in {\mathcal{S}_{++}^{n_x}}$ are the state, input, and terminal costs, and the sets $\mathcal{C}_x$ and $\mathcal{C}_u$ are convex.
Suppose that the sets $\mathcal{C}_x$ and $\mathcal{C}_u$ correspond to component-wise lower and upper bounds, i.e., $\mathcal{C}_x = \{x\in{\mathcal{R}^{n_x}} \vert 1_{n_x}\bar{x}_{min}\leq x \leq 1_{n_x}\bar{x}_{max}\}$ and $\mathcal{C}_u = \{u\in{\mathcal{R}^{n_u}} \vert 1_{n_u}\bar{u}_{min}\leq u \leq 1_{n_u}\bar{u}_{max}\}$. Defining $\chi = [x_1^\top \, \dots \, x_{N_p}^\top]^\top$, $\upsilon=[u_0^\top \, \dots \, u_{N_p-1}^\top]^\top$, $\upsilon_r=[r^\top \, \dots \, r^\top]^\top$,  can be rewritten as $\chi = \Theta x_0 + \Phi\upsilon + \Phi_r\upsilon_r$. The latter relationship can be used to replace $x_t$ for $t=1,\dots,N_p$ in the optimization problem, yielding the following QP: $$\begin{aligned} \label{eqn:MPC_problem_4} \begin{array}[c]{ll} \underset{\upsilon}{\mbox{minimize}} & \dfrac{1}{2}\upsilon^\top Q \upsilon + q^\top \upsilon\\ \mbox{subject to} & A\upsilon \leq b , \end{array} \end{aligned}$$ where $$\bar{Q} = \begin{bmatrix} I_{N_p-1} \otimes Q_x & 0\\ 0 & Q_N \end{bmatrix},\quad \bar{R} = I_{N_p} \otimes R,\quad \begin{aligned} A = \begin{bmatrix} \Phi\\ -\Phi\\ I\\ -I \end{bmatrix},\quad b=\begin{bmatrix} 1_{n_x N_p} \bar{x}_{max} - \Theta x_0-\Phi_r\upsilon_r\\ 1_{n_x N_p} \bar{x}_{min} + \Theta x_0+\Phi_r\upsilon_r\\ 1_{n_u N_p} \bar{u}_{max}\\ 1_{n_u N_p} \bar{u}_{min} \end{bmatrix}, \end{aligned}$$ and $Q=\bar{R} + \Phi^\top\bar{Q}\Phi$ and $q^\top = x_0^\top\Theta^\top \bar{Q}\Phi + \upsilon_r^\top \Phi_r^\top \bar{Q}\Phi - x_r^\top\left( 1_{N_p}^\top\otimes I_{n_x} \right)\bar{Q}\Phi - u_r^\top \left( 1_{N_p}^\top\otimes I_{n_u} \right)\bar{R}$. Below we illustrate the MPC problem for the quadruple-tank process [@Johansson2000]. The state of the process $x\in{\mathcal{R}^{4}}$ corresponds to the water levels of all tanks, measured in centimeters. The plant model was linearized at a given operating point and discretized with a sampling period of $2\,s$. The MPC prediction horizon was chosen as $N_p = 5$. 
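The condensation step above can be made concrete with a short sketch that builds the prediction matrices $\Theta$ and $\Phi$ from $(H, J)$ and checks them against a direct simulation of the recursion. The system matrices and horizon below are hypothetical stand-ins, and the reference term $\Phi_r\upsilon_r$ is omitted for brevity:

```python
import numpy as np

def prediction_matrices(H, J, Np):
    """Stack the dynamics x_{t+1} = H x_t + J u_t over the horizon:
    chi = Theta x0 + Phi upsilon, with chi = [x_1; ...; x_{Np}]."""
    nx, nu = H.shape[0], J.shape[1]
    Theta = np.vstack([np.linalg.matrix_power(H, i + 1) for i in range(Np)])
    Phi = np.zeros((Np * nx, Np * nu))
    for i in range(Np):          # block row (predicts x_{i+1})
        for j in range(i + 1):   # block column (input u_j)
            Phi[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(H, i - j) @ J
    return Theta, Phi

# Hypothetical 2-state, 1-input system and horizon
H = np.array([[0.9, 0.2], [0.0, 0.8]])
J = np.array([[0.0], [1.0]])
Np = 5
Theta, Phi = prediction_matrices(H, J, Np)

# Sanity check: compare against a direct simulation of x_{t+1} = H x_t + J u_t
x0 = np.array([1.0, -1.0])
u = np.array([0.5, -0.2, 0.1, 0.0, 0.3])
x, traj = x0, []
for t in range(Np):
    x = H @ x + J @ np.array([u[t]])
    traj.append(x)
chi = Theta @ x0 + Phi @ u
assert np.allclose(chi, np.concatenate(traj))
```

The lower block-triangular structure of $\Phi$ reflects causality: the predicted state $x_{i+1}$ depends only on the inputs $u_0,\dots,u_i$.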
A constant reference signal was used, while the initial condition $x_0$ was varied to obtain a set of MPC problems with different non-empty feasible sets and linear cost terms. In particular, we considered initial states of the form $x_0 = [x_1\, x_2\, x_3\, x_4]^\top$ where $x_i \in \{10,\; 11.25,\; 12.5,\; 13.75,\; 15\}$ for $i=1,\dots,4$. Out of the possible $625$ initial values, $170$ yield feasible QPs (each with $n=10$ decision variables and $m=40$ inequality constraints). We have made these QPs publicly available as a [MATLAB]{} formatted binary file [@mpc_dataset]. To avoid ill-conditioned QPs, the constraint matrix $A$ and the vector $b$ were scaled so that each row of $A$ has unit norm. Fig. \[fig:QP\_MPC\] illustrates the convergence of the ADMM iterations for the $170$ QPs as a function of the step-size $\rho$, scaling matrix $L$, and over-relaxation factor $\alpha$. Since $A^\top$ has a nontrivial null-space, the step-size $\rho^\star$ was chosen heuristically based on Theorem \[thm:QP\_optimal\_factor\] as $\rho^\star = 1/\sqrt{\lambda_{1}(AQ^{-1}A^\top)\lambda_{n}(AQ^{-1}A^\top)}$, where $\lambda_1(AQ^{-1}A^\top)$ is the smallest nonzero eigenvalue of $AQ^{-1}A^\top$. As shown in Fig. \[fig:QP\_MPC\], our heuristic step-size $\rho^\star$ results in a number of iterations close to the empirical minimum. Moreover, performance is improved by choosing $L=L^\star$ and $\alpha=2$. ![Number of iterations $k:\,\max\{\|r^k\|,\, \|s^k\|\} \leq 10^{-5}$ for ADMM with $L=I$ and $\alpha=2$ and fast-ADMM algorithms applied to the MPC problem for different initial states $x_0$. The line in blue denotes the minimum number of iterations taken over all the initial states, while the red line represents the maximum number of iterations.[]{data-label="fig:QP_MPC_FADMMvsADMM"}](Figures/MPC/MPC_quadtank170_SxQ_rhos_alpha2_LI_vs_fastADMM_v2){width=".4\hsize"} The performance of the Fast-ADMM and ADMM algorithms is compared in Fig.
\[fig:QP\_MPC\_FADMMvsADMM\] for $L=I$ and $\alpha=2$. The ADMM algorithm with the optimal over-relaxation factor $\alpha=2$ uniformly outperforms the Fast-ADMM algorithm, even with a suboptimal scaling matrix $L$. ### Local convergence factor To illustrate our results on the slow local convergence of ADMM, we consider a QP problem of the form  with $$\label{eqn:QP:slow_covergence_example} \begin{aligned} Q&= \begin{bmatrix} 40.513 & 0.069\\ 0.069 & 40.389 \end{bmatrix},\quad q=0\\ A&=\begin{bmatrix} -1 & 0 \\ 0 &-1 \\ 0.1151 & 0.9934\\ \end{bmatrix},\quad b= \begin{bmatrix} 6\\ 6\\ -0.3422 \end{bmatrix}. \end{aligned}$$ The ADMM algorithm was applied to this optimization problem with $\alpha = 1$ and $L=I$. Since the nullity of $A^\top$ is nonzero, the step-size was chosen heuristically based on Theorem \[thm:QP\_optimal\_factor\] as $\rho^\star = 1/\sqrt{\lambda_{1}(AQ^{-1}A^\top)\lambda_{n}(AQ^{-1}A^\top)} = 28.6$ with $\lambda_1(AQ^{-1}A^\top)$ taken to be the smallest nonzero eigenvalue of $AQ^{-1}A^\top$. The resulting residuals are shown in Fig. \[fig:QP\_slow\], together with the lower bound on the convergence factor $\underline\zeta$ evaluated at each time-step. As expected from the results in Theorem \[thm:QP:linear\_rate\], the residual $F^{k+1}v^{k+1} - F^{k}v^{k}$ is monotonically decreasing. However, as illustrated by $\underline\zeta^k$, the lower bound on the convergence factor from Theorem \[thm:QP\_slow\], the auxiliary residual $F^{k+1}v^{k+1} - F^{k}v^{k}$ and the primal-dual residuals show a convergence factor close to $1$ over several time-steps. The heuristic step-size rule performs reasonably well as illustrated in the right subplot of Fig. \[fig:QP\_slow\].
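To make the QP experiments above concrete, the sketch below implements plain ADMM ($\alpha=1$, $L=I$) on the slack reformulation $\min \frac{1}{2}x^\top Qx + q^\top x$ s.t. $Ax + z = b$, $z \geq 0$, together with the heuristic step-size rule (smallest nonzero eigenvalue in place of $\lambda_1$). The update formulas are our reconstruction in the standard scaled-dual form, not a verbatim transcription of the paper's iterations, and the closing sanity check uses a hypothetical one-dimensional QP:

```python
import numpy as np

def heuristic_rho(A, Q, tol=1e-9):
    # rho = 1 / sqrt(lambda_min_nonzero * lambda_max) of A Q^{-1} A^T
    S = A @ np.linalg.solve(Q, A.T)
    eig = np.linalg.eigvalsh(S)
    nz = eig[eig > tol]                  # discard (near-)zero eigenvalues
    return 1.0 / np.sqrt(nz[0] * nz[-1])

def admm_qp(Q, q, A, b, rho, iters=500):
    # ADMM on: min 0.5 x'Qx + q'x  s.t.  Ax + z = b, z >= 0 (scaled dual u)
    m, n = A.shape
    z, u = np.zeros(m), np.zeros(m)
    K = Q + rho * A.T @ A                # x-update is a fixed linear solve
    for _ in range(iters):
        x = np.linalg.solve(K, -q + rho * A.T @ (b - z - u))
        z = np.maximum(0.0, b - A @ x - u)   # projection onto z >= 0
        u = u + A @ x + z - b
    return x

# Tiny sanity check: min 0.5 x^2 - 2x  s.t.  x <= 1  has solution x = 1
Q = np.array([[1.0]]); q = np.array([-2.0])
A = np.array([[1.0]]); b = np.array([1.0])
x = admm_qp(Q, q, A, b, rho=heuristic_rho(A, Q))
```

On the one-dimensional instance the iterates converge geometrically to the constrained optimizer $x^\star = 1$.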
Conclusions and Future Work =========================== We have studied optimal parameter selection for the alternating direction method of multipliers for two classes of quadratic problems: $\ell_2$-regularized quadratic minimization and quadratic programming under linear inequality constraints. For both problem classes, we established global convergence of the algorithm at linear rate and provided explicit expressions for the parameters that ensure the smallest possible convergence factors. We also considered iterations accelerated by over-relaxation, characterized the values of the relaxation parameter for which the over-relaxed iterates are guaranteed to improve the convergence times compared to the non-relaxed iterations, and derived jointly optimal step-size and relaxation parameters. We validated the analytical results on numerical examples and demonstrated superior performance of the tuned ADMM algorithms compared to existing methods from the literature. As future work, we plan to extend the analytical results to more general classes of objective functions. Acknowledgment {#acknowledgment .unnumbered} ============== The authors would like to thank Pontus Giselsson, Themistoklis Charalambous, Jie Lu, and Chathuranga Weeraddana for their valuable comments and suggestions on this manuscript. Proofs ====== Proof of Theorem \[thm:L2:standard\] ------------------------------------ From Proposition \[prop:1\], the variables $x^k$ and $z^k$ in iterations (\[eqn:ADMM\_L2\_iterations\]) converge to the optimal values $x^\star$ and $z^\star$ of  if and only if the spectral radius of the matrix $E$ in (\[eqn:ADMM\_L2\_matrix\]) is less than one. To express the eigenvalues of $E$ in terms of the eigenvalues of $Q$, let $\lambda_i(Q), i=1, \dots, n$ be the eigenvalues of $Q$ sorted in ascending order.
Then, the eigenvalues $\zeta(\rho, \lambda_i(Q))$ of $E$ satisfy $$\begin{aligned} \zeta(\rho, \lambda_i(Q)) =\dfrac{\rho^2+ \lambda_i(Q) \delta}{\rho^2+\lambda_i(Q) \delta+(\lambda_i(Q)+\delta)\rho}.\label{eqn:fexpressions}\end{aligned}$$ Since $\lambda_i(Q), \rho, \delta \in \mathcal{R}_{++}$, we have $0 \leq \zeta(\rho, \lambda_i(Q)) <1$ for all $i$, which ensures convergence. To find the optimal step-size parameter and the associated convergence factor $(\rho^{\star}, \zeta^\star)$, note that, for a fixed $\rho$, the convergence factor $\zeta(\rho) = \max_{e^k} \|e^{k+1}\|/\|e^k\|$ corresponds to the spectral radius of $E$, [i.e. ]{}$\zeta(\rho)=\max_i\left\{ \zeta(\rho,\lambda_i(Q))\right\}$. It follows that the optimal pair $(\rho^{\star}, \zeta^\star)$ is given by $$\begin{aligned} \label{eqn:ADMM_L2_optimal_step-size_optProblem} \rho^{\star} = \underset{\rho}{\mbox{argmin }} \max_i\left\{ \zeta(\rho,\lambda_i(Q))\right\},\quad \zeta^{\star} = \max_i\left\{ \zeta(\rho^\star,\lambda_i(Q))\right\}. \end{aligned}$$ From  (\[eqn:fexpressions\]), we can see that $\zeta(\rho,\lambda_i(Q))$ is monotone decreasing in $\lambda_i(Q)$ when $\rho>\delta$ and monotone increasing when $\rho<\delta$. Hence, we consider these two cases separately. When $\rho>\delta$, the largest eigenvalue of $E$ is given by $\zeta(\rho, \lambda_1(Q))$ and $\rho^{\star} = \mbox{argmin}_{\rho}\zeta(\rho,\lambda_1(Q))$. By the first-order optimality conditions and the explicit expressions in  we have $$\begin{aligned} \rho^{\star}&=\sqrt{\delta \lambda_1(Q)}, \quad \zeta^\star = \zeta(\rho^{\star}, \lambda_1(Q)) = (1+\dfrac{\delta+\lambda_1(Q)}{2\sqrt{\delta \lambda_1(Q)}})^{-1}.\end{aligned}$$ However, this value of $\rho$ is larger than $\delta$ only if $\delta<\lambda_1(Q)$. 
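The interior optimum $\rho^\star = \sqrt{\delta\lambda_1(Q)}$ is easy to check numerically before treating the boundary case. The sketch below (with illustrative values of $\delta$ and $\lambda_1(Q)$ of our own choosing, satisfying $\delta < \lambda_1(Q)$) minimizes $\zeta(\rho, \lambda_1(Q))$ over a fine grid of step-sizes:

```python
import math

def zeta(rho, lam, delta):
    # eigenvalue expression for the convergence factor of E
    return (rho ** 2 + lam * delta) / (rho ** 2 + lam * delta + (lam + delta) * rho)

delta, lam1 = 0.5, 4.0                  # illustrative values with delta < lambda_1(Q)
rhos = [0.01 * k for k in range(1, 2000)]
rho_star = min(rhos, key=lambda r: zeta(r, lam1, delta))
# rho_star should land next to sqrt(delta * lambda_1(Q)) = sqrt(2)
```

The grid minimizer agrees with the first-order optimality condition, and the attained factor matches the closed-form expression $\left(1 + \frac{\delta + \lambda_1(Q)}{2\sqrt{\delta\lambda_1(Q)}}\right)^{-1}$.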
When $\delta\geq \lambda_1(Q)$, the assumption that $\rho>\delta$ implies that $0\leq (\rho-\delta)^2 \leq (\rho-\delta)(\rho-\lambda_1(Q))$, so $$\begin{aligned} &\zeta(\rho,\lambda_1(Q)) =\dfrac{\rho^2+ \lambda_1(Q) \delta}{\rho^2+\lambda_1(Q) \delta+(\lambda_1(Q)+\delta)\rho}\geq\\ & \dfrac{\rho^2+\lambda_1(Q)\delta}{\rho^2 +\lambda_1(Q)\delta +(\lambda_1(Q)+\delta)\rho + (\rho-\delta)(\rho-\lambda_1(Q))}=\dfrac{1}{2}.\end{aligned}$$ Since $\rho=\delta$ attains $\zeta(\delta,\lambda_1(Q))=1/2$, it is optimal. A similar argument applies to $\rho<\delta$. In this case, $\max_i \zeta(\rho,\lambda_i(Q)) = \zeta(\rho, \lambda_n(Q))$ and when $\delta>\lambda_n(Q)$, $\rho^{\star}=\sqrt{\delta \lambda_n(Q)}$ is the optimal step-size and the associated convergence factor is $$\zeta^\star = \left(1+\dfrac{\delta+\lambda_n(Q)}{2\sqrt{\delta \lambda_n(Q)}}\right)^{-1}.$$ For $\delta\leq \lambda_n(Q)$, the requirement that $\rho<\delta$ implies the inequalities $0\leq (\delta-\rho)^2 \leq (\lambda_n(Q)-\rho)(\delta-\rho)$ and that $\zeta(\rho,\lambda_n(Q)) \geq \dfrac{1}{2}$, which leads to $\rho=\delta$ being optimal. Proof of Corollary \[cor:L2:standard\] -------------------------------------- The proof is a direct consequence of evaluating at $\rho=\delta$ for $i=1,\dots,n$. Proof of Theorem \[thm:L2:relaxed\] ----------------------------------- The $z$-update in  implies that $\mu^k = (\delta+\rho)z^{k+1}- \rho (\alpha x^{k+1}+ (1-\alpha)z^k)$, and that the $\mu$-update in  can be written as $\mu^{k+1} = \delta z^{k+1}$.
Similarly to the analysis of the previous section, inserting the $x$-update into the $z$-update, we find $$\begin{aligned} z^{k+1} = \underset{E_R}{\underbrace{\dfrac{1}{\delta+\rho}\left(\delta I + \rho \left(\alpha(\rho-\delta)\left(Q+\rho I\right)^{-1}+ (1-\alpha)I\right)\right)} }z^k - \dfrac{1}{\delta+\rho}\rho\alpha (Q+\rho I)^{-1}q.\end{aligned}$$ Consider the fixed-point candidate $z^\star$ satisfying $z^\star = E_R z^\star - \dfrac{1}{\delta+\rho}\rho\alpha(Q+\rho I)^{-1}q$ and $z^{k+1}-z^\star = E_R (z^{k}-z^\star)$. The $z^k$-update in  converges (and so does the ADMM algorithm) if and only if the spectral radius of the error matrix in the above linear iterations is less than one. The eigenvalues of $E_R$ can be written as $$\begin{aligned} \zeta_R(\alpha,\rho,\lambda_i(Q))= 1- \dfrac{\alpha \rho (\lambda_i(Q)+\delta)}{(\rho+\lambda_i(Q))(\rho+\delta)}.\label{eqn:zeta_expressions_relaxation}\end{aligned}$$ Since $\rho, \delta,$ and $\lambda_i(Q) \in {\mathcal{R}^{}}_{++}$, we see that $0<\alpha<2\underset{i}{\min}\dfrac{(\rho+\delta)(\rho+\lambda_i(Q))}{\rho(\lambda_i(Q)+\delta)}$ implies that $\vert \zeta_R(\alpha, \rho, \lambda_i(Q))\vert<1$ for all $i$, which completes the first part of the proof. For a fixed $\rho$ and $\delta$, we now characterize the values of $\alpha$ that ensure that the over-relaxed iterations  have a smaller convergence factor and thus a smaller $\varepsilon$-solution time than the classical ADMM iterates , [i.e. ]{}$\zeta_R-\zeta < 0$. From  and  we have $\operatorname{argmax}_i{\zeta_R(\alpha, \rho, \lambda_i(Q))} = \operatorname{argmax}_i{\zeta(\rho, \lambda_i(Q))}$, since $\zeta_R$ and $\zeta$ are equivalent up to an affine transformation and they have the same sign of the derivative with respect to $\lambda_i(Q)$. 
For any given $\lambda_{i}(Q)$ we have $$\begin{aligned} \zeta_R-\zeta = \dfrac{\rho(1-\alpha)(\lambda_i(Q)+\delta) }{\rho^2+(\lambda_i(Q)+\delta)\rho+\lambda_i(Q) \delta}\end{aligned}$$ and we conclude that $\zeta_R-\zeta <0$ when $\alpha\in \left(1,\;\dfrac{2(\rho+\delta)(\rho+\lambda_i(Q))}{\rho(\lambda_i(Q)+\delta)}\right)$. Recalling the first part of the proof we conclude that, for given $\rho,\delta\in{\mathcal{R}^{}}_{++}$, the over-relaxed iterations converge with a smaller convergence factor than classical ADMM for $1<\alpha<2\underset{i}{\min}\dfrac{(\rho+\delta)(\rho+\lambda_i(Q))}{\rho(\lambda_i(Q)+\delta)}$. To find $(\rho^\star, \alpha^\star, \zeta_R^\star)$, we define $$\begin{aligned} \label{eqn:ADMM_L2_optimal_step-size_optProblem_relaxation} (\rho^{\star}, \alpha^\star) = \underset{\rho, \alpha}{\mbox{argmin }} \max_i\left\vert \zeta_R(\rho, \alpha,\lambda_i(Q))\right\vert, \quad \zeta_R^{\star} = \max_i\left \vert \zeta_R(\rho^\star,\alpha^\star,\lambda_i(Q))\right\vert. \end{aligned}$$ One readily verifies that $\zeta_R(\delta, 2, \lambda_i(Q))=0$ for $i= 1,\dots, n$. Since zero is the global minimum of $\vert \zeta_R\vert$ we conclude that the pair $(\rho^\star, \alpha^\star) = (\delta, 2)$ is optimal. Moreover, for $(\rho^\star, \alpha^\star) = (\delta, 2)$ the matrix $E_R$ is the zero matrix and thus the algorithm  converges in one iteration. Proof of Proposition \[prop:w\_residuals\] ------------------------------------------ For the sake of brevity we derive the expressions only for $w_-^{k+1}\triangleq F^{k+1}v^{k+1}-F^kv^k$, as similar computations also apply to $w_+^{k+1}\triangleq v^{k+1}-v^k$. First, since $v^k=z^k+u^k$, it holds that $F^k v^k = (2D^k - I)v^k = 2D^k v^k - u^k-z^k$. From the equality $D^kv^k = u^k$ we then have $F^kv^k = u^k-z^k$. The residual $w_-^{k+1} $ can be rewritten as $w_-^{k+1} = u^{k+1} - u^k - z^{k+1} + z^k$. From  and  we observe that $u^{k+1}-u^k = r^{k+1}$, so $w_-^{k+1} = r^{k+1} - (z^{k+1} - z^k)$.
Decomposing $z^{k+1} - z^k$ as $\Pi_{{\mbox{Im}(A)}}(z^{k+1} - z^k) + \Pi_{{\mathcal{N}(A^\top)}}(z^{k+1} - z^k)$ we then conclude that $w_-^{k+1} = r^{k+1} - \Pi_{{\mbox{Im}(A)}}(z^{k+1} - z^k) - \Pi_{{\mathcal{N}(A^\top)}}(z^{k+1} - z^k)$. We now examine each case $(i)-(iii)$ separately: \(i) When $A$ has full column rank, $\Pi_{{\mbox{Im}(A)}} = A(A^\top A)^{-1}A^\top$ and $\Pi_{{\mathcal{N}(A^\top)}} = I-\Pi_{{\mbox{Im}(A)}}$. In light of the dual residual  we obtain $\Pi_{{\mbox{Im}(A)}}(z^{k+1} - z^k) =1/\rho A(A^\top A)^{-1}s^{k+1}$. \(ii) Note that the nullity of $A^\top$ is $0$ if $A$ has full row rank. Thus, $\Pi_{{\mathcal{N}(A^\top)}} = 0$ and $\Pi_{{\mbox{Im}(A)}}=I$. Moreover, since $AA^\top$ is invertible, $z^{k+1}-z^{k} = (AA^\top)^{-1}AA^\top(z^{k+1}-z^{k}) = 1/\rho(AA^\top)^{-1}As^{k+1}$. \(iii) When $A$ is invertible, the result easily follows. We now relate the norm of $ r^{k+1}$ and $ s^{k+1}$ to that of $ w_-^{k+1}$. From  and , we have $$\begin{aligned} \Vert r^{k+1}\Vert = \dfrac{1}{2}\Vert w_-^{k+1} + w_+^{k+1} \Vert \leq \dfrac{1}{2} (\Vert w_-^{k+1} \Vert + \Vert w_+^{k+1}\Vert)\leq \Vert w_-^{k+1}\Vert,\end{aligned}$$ where the first inequality is the triangle inequality and the last inequality holds since the $v^k$ are positive vectors, so that $\Vert w_+^{k+1}\Vert =\Vert v^{k+1}-v^k\Vert\leq \Vert F^{k+1}v^{k+1}-F^k v^k \Vert = \Vert w_-^{k+1}\Vert$.
For the dual residual, it can be verified that in cases (i) and (ii) $A^\top(w_+^{k+1} - w_-^{k+1}) = \dfrac{2}{\rho}s^{k+1}$, so $$\begin{aligned} \Vert s^{k+1}\Vert = &\dfrac{\rho}{2}\Vert A^\top (w_-^{k+1} - w_+^{k+1}) \Vert \leq \dfrac{\rho}{2} \Vert A\Vert\left(\Vert w_-^{k+1} - w_+^{k+1}\Vert\right)\\ &\leq \dfrac{\rho}{2} \Vert A\Vert\left(\Vert w_-^{k+1} \Vert + \Vert w_+^{k+1}\Vert\right)\leq \rho \Vert A\Vert \Vert w_-^{k+1}\Vert.\end{aligned}$$ In case (iii), one finds $A(w_+^{k+1} - w_-^{k+1})= \dfrac{2}{\rho}s^{k+1}$ and again the same bound can be achieved (by replacing $A^\top$ with $A$ in the above equality), thus concluding the proof. Proof of Theorem \[thm:QP:linear\_rate\] ---------------------------------------- Note that since $v^k$ is positive and $F^{k}$ is diagonal with diagonal entries in $\{\pm 1\}$, $F^{k+1}v^{k+1}=F^{k}v^{k}$ implies $v^{k+1}=v^k$. Hence, it suffices to establish the convergence of $F^{k}v^k$. From  we have $$\begin{aligned} \left\Vert F^{k+1}v^{k+1} - F^{k} v^{k} \right\Vert \leq \dfrac{1}{2}\left\Vert 2M - I\right\Vert \left\Vert v^{k} - v^{k-1} \right\Vert + \dfrac{1}{2} \left\Vert F^k v^k - F^{k-1} v^{k-1}\right\Vert. \end{aligned}$$ Furthermore, as the $v^k$ are positive vectors, $\left\Vert v^k - v^{k-1}\right\Vert\leq \left\Vert F^k v^k - F^{k-1} v^{k-1}\right\Vert$, which implies $$\begin{aligned} \label{eqn:Quadratic_linear_rate} \left\Vert F^{k+1}v^{k+1} - F^k v^k \right\Vert \leq \underset{\zeta}{\underbrace{\left( \dfrac{1}{2}\left\Vert 2M - I \right\Vert + \dfrac{1}{2} \right)}} \left\Vert F^k v^k - F^{k-1} v^{k-1} \right\Vert.\end{aligned}$$ We conclude that if $\left\Vert 2M - I\right\Vert < 1$, then $\zeta<1$ and the iterations  converge to zero at a linear rate. To determine for what values of $\rho$ the iterations  converge, we characterize the eigenvalues of $M$. By the matrix inversion lemma $M = \rho AQ^{-1}A^\top - \rho AQ^{-1} A^\top (I+ \rho A Q^{-1} A^\top)^{-1} \rho A Q^{-1}A^\top$. From [@HoJ:85 Cor.
2.4.4], $(I+ \rho A Q^{-1} A^\top)^{-1}$ is a polynomial function of $\rho A Q^{-1} A^\top$ which implies that $M=f(\rho AQ^{-1}A^\top)$ is a polynomial function of $\rho AQ^{-1}A^\top$ with $f(t) = t - t (1+t)^{-1} t$. Applying [@HoJ:85 Thm. 1.1.6], the eigenvalues of $M$ are given by $f(\lambda_i(\rho AQ^{-1}A^\top))$ and thus $$\begin{aligned} \label{eqn:QP_M_eigenvalues} \lambda_i(M) = \dfrac{\lambda_i(\rho AQ^{-1}A^\top)}{1+ \lambda_i(\rho AQ^{-1}A^\top)}.\end{aligned}$$ If $\rho > 0$, then $\lambda_i(\rho AQ^{-1}A^\top)\geq 0$ and $\lambda_i(M)\in [0,1)$. Hence $\left\Vert 2M - I\right\Vert \leq 1$ is guaranteed for all $\rho \in {\mathcal{R}^{}}_{++}$ and equality only occurs if $M$ has eigenvalues at $0$. If $A$ is invertible or has full row-rank, then $M$ is invertible and all its eigenvalues are strictly positive, so $\left\Vert 2M - I\right\Vert <1$ and  is guaranteed to converge linearly. The case when $A$ is tall, i.e., $A^\top$ is rank deficient, is more challenging since $M$ has zero eigenvalues and $\left\Vert 2M - I\right\Vert = 1$. To prove convergence in this case, we analyze the $0$-eigenspace of $M$ and show that it can be disregarded. From the $x$-iterates given in  we have ${x^{k+1}-x^{k} = -(Q/\rho+A^\top A)^{-1}A^\top (v^k - v^{k-1})}$. Multiplying both sides of this equality by $A$ from the left yields ${A(x^{k+1}-x^{k}) = - M (v^k-v^{k-1})}$. Consider a nonzero vector ${v^k - v^{k-1}}$ in ${\mathcal{N}(M)}$. Then $A(x^{k+1}-x^{k}) = 0$, i.e., ${x^{k+1}-x^{k}}\in {\mathcal{N}(A)}$; since $A$ is assumed to have full column rank, this forces $x^{k+1} = x^{k}$. In other words, the $0$-eigenspace of $M$ corresponds to the stationary points of the algorithm . We therefore disregard this eigenspace and the convergence result holds. Finally, the R-linear convergence of the primal and dual residuals follows from the linear convergence rate of $F^{k+1}v^{k+1}-F^kv^k$ and Proposition \[prop:w\_residuals\].
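The spectral mapping $\lambda_i(M) = t_i/(1+t_i)$ with $t_i = \lambda_i(\rho AQ^{-1}A^\top)$ derived above is easy to confirm numerically. The sketch below uses random illustrative data and builds $M$ from the matrix-inversion-lemma expression:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, rho = 5, 3, 0.7
B = rng.standard_normal((n, n))
Q = B @ B.T + n * np.eye(n)          # a positive definite Q
A = rng.standard_normal((m, n))      # full row rank (generically)

T = rho * A @ np.linalg.solve(Q, A.T)            # rho * A Q^{-1} A^T
M = T - T @ np.linalg.solve(np.eye(m) + T, T)    # matrix-inversion-lemma form

t = np.sort(np.linalg.eigvalsh(T))
lam_M = np.sort(np.linalg.eigvalsh(M))
assert np.allclose(lam_M, t / (1.0 + t))
```

Since $A$ here has full row rank, all eigenvalues of $M$ lie strictly inside $(0, 1)$ and hence $\Vert 2M - I\Vert < 1$, in agreement with the linear-rate argument.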
Proof of Theorem \[thm:QP\_optimal\_factor\] -------------------------------------------- From the proof of Theorem \[thm:QP:linear\_rate\] recall that $$\begin{aligned} \left\Vert F^{k+1}v^{k+1} - F^k v^k \right\Vert \leq \left( \dfrac{1}{2}\left\Vert 2M - I \right\Vert + \dfrac{1}{2} \right) \left\Vert F^k v^k - F^{k-1} v^{k-1} \right\Vert.\end{aligned}$$ Define $$\begin{aligned} \zeta \triangleq \frac{1}{2}\Vert 2M-I \Vert + \frac{1}{2} &= \max_i \frac{1}{2} \vert 2 \lambda_i(M)-1\vert + \frac{1}{2} = \max_i \left\vert \dfrac{\rho \lambda_i(AQ^{-1}A^\top)}{1+\rho\lambda_i(A Q^{-1} A^\top)} - \dfrac{1}{2}\right\vert + \dfrac{1}{2}\end{aligned}$$ where the last equality follows from the definition of $\lambda_i(M)$ in . Since $\rho>0$ and, for the case where $A$ is either invertible or has full row-rank, $\lambda_i(AQ^{-1}A^{\top}) >0$ for all $i$, we conclude that $\zeta<1$. It remains to find $\rho^\star$ that minimizes the convergence factor, *i.e.* $$\begin{aligned} \label{eqn:QP_optProblem} \rho^{\star} &= \underset{\rho}{\mbox{argmin }} \max_i\left\{ \left\vert \dfrac{\rho \lambda_i(AQ^{-1}A^\top)}{1+\rho\lambda_i(A Q^{-1} A^\top)} - \dfrac{1}{2}\right\vert + \dfrac{1}{2}\right\}.\end{aligned}$$ Since $\dfrac{\rho\lambda_i(A Q^{-1} A^\top)}{1+\rho\lambda_i(A Q^{-1} A^\top)}$ is a monotonically increasing function of $\lambda_i(A Q^{-1} A^\top)$, the maximum of $\zeta$ is attained at one of the two extreme eigenvalues $\lambda_1(A Q^{-1} A^\top)$ and $\lambda_n(A Q^{-1} A^\top)$: $$\begin{aligned}\label{eqn:QP_convergence_factor} \max_i \left\{ \zeta(\lambda_i(A Q^{-1} A^\top),\rho)\right\} = \left\{ \begin{array}[c]{lll} \dfrac{1}{1+\rho\lambda_1(A Q^{-1} A^\top)} & \mbox{if} &\rho \leq \rho^\star, \\ \dfrac{\rho\lambda_n(A Q^{-1} A^\top)}{1+\rho\lambda_n(A Q^{-1} A^\top)} & \mbox{if}& \rho > \rho^\star. \end{array}\right. \end{aligned}$$ Since the first branch of $\max_i \left\{ \zeta(\lambda_i(A Q^{-1} A^\top),\rho)\right\}$, i.e.
$\dfrac{1}{1+\rho\lambda_1(A Q^{-1} A^\top)}$, is monotone decreasing in $\rho$ and the second branch is monotone increasing, the minimum with respect to $\rho$ is attained at the intersection point . Proof of Theorem \[thm:QP\_slow\] --------------------------------- First we derive the lower bound on the convergence factor and show it is strictly smaller than $1$. From  we have $\left\Vert F^{k+1}v^{k+1} - F^k v^k \right\Vert = \left\Vert D^k v^k-D^{k-1} v^{k-1} - M (v^k - v^{k-1})\right\Vert.$ By applying the reverse triangle inequality and dividing by $\|F^kv^k - F^{k-1} v^{k-1}\|$, we find $$\dfrac{\|F^{k+1}v^{k+1} - F^k v^k \|}{\|F^k v^k - F^{k-1} v^{k-1}\|} \geq \vert \delta_k - \epsilon_k \vert.$$ Recalling from  that the convergence factor $\zeta$ is the maximum over $k$ of the left-hand side yields the lower bound . Moreover, the inequality $1 > \zeta \geq \underline{\zeta}$ follows directly from Theorem \[thm:QP:linear\_rate\]. The second part of the proof addresses the cases (i)-(iii) for $\rho>0$. Consider case (i) and let ${\mathcal{N}(A^\top)}= \{0\}$. It follows from Theorem \[thm:QP\_optimal\_factor\] that the convergence factor is given by $\tilde{\zeta}(\rho)$, thus proving the sufficiency of ${\mathcal{N}(A^\top)}= \{0\}$ in (i). The necessity follows directly from statement (iii), which is proved later. Now consider statement (ii) and suppose ${\mathcal{N}(A^\top)}$ is not zero-dimensional. Recall that $\lambda_1(AQ^{-1}A^\top)$ is the smallest nonzero eigenvalue of $AQ^{-1}A^\top$ and suppose that $\epsilon_k \geq 1-\xi$. Next we show that $\epsilon_k \geq 1-\xi$ implies $\|\Pi_{{\mathcal{N}(A^\top)}}(v^k - v^{k-1})\| / \|v^k - v^{k-1}\|\leq \sqrt{2\xi}$.
Since $M\Pi_{{\mathcal{N}(A^\top)}} = 0$, $\|M\|<1$, and $\|v^k - v^{k-1}\|\leq \|F^kv^k - F^{k-1}v^{k-1}\|$ we have $$\begin{aligned} \epsilon_k^2 &= \dfrac{\|M(I-\Pi_{{\mathcal{N}(A^\top)}})( v^k - v^{k-1}) \|^2}{\|F^kv^k - F^{k-1}v^{k-1}\|^2} \leq \dfrac{\|\Pi_{{\mbox{Im}(A)}}(v^k - v^{k-1}) \|^2}{\|v^k - v^{k-1}\|^2} = 1-\dfrac{\|\Pi_{{\mathcal{N}(A^\top)}}(v^k - v^{k-1}) \|^2}{\|v^k - v^{k-1}\|^2}. \end{aligned}$$ Using the above inequality and $\epsilon_k^2 \geq (1-\xi)^2$ we obtain $\|\Pi_{{\mathcal{N}(A^\top)}}(v^k - v^{k-1})\| / \|v^k - v^{k-1}\|\leq \sqrt{2\xi-\xi^2} \leq \sqrt{2\xi}$. The latter inequality allows us to derive an upper-bound on $\underline{\zeta}$ as follows. Recalling , we have $$\label{eq:QP_slow_ii} \begin{aligned} \underline{\zeta}\leq\dfrac{\|F^{k+1}v^{k+1} - F^k v^k \|}{\|F^k v^k - F^{k-1} v^{k-1}\|} &\leq \dfrac{1}{2} + \dfrac{1}{2}\dfrac{\|(I- 2M) (v^k - v^{k-1})\|}{\|F^k v^k - F^{k-1} v^{k-1}\|} \\ &= \dfrac{1}{2} + \dfrac{1}{2}\sqrt{\dfrac{\|(I- 2M)\Pi_{{\mbox{Im}(A)}}(v^k - v^{k-1})\|^2}{\|F^k v^k - F^{k-1} v^{k-1}\|^2} + \dfrac{\|\Pi_{{\mathcal{N}(A^\top)}}(v^k - v^{k-1})\|^2}{\|F^k v^k - F^{k-1} v^{k-1}\|^2}}. \end{aligned}$$ Using the inequalities $\|v^k - v^{k-1}\| \leq \|F^k v^k - F^{k-1} v^{k-1}\|$ and $\sqrt{a^2 + b^2} \leq a + b$ for $a,b\in\mathcal{R}_+$, the inequality  becomes $\underline{\zeta}\leq \dfrac{1}{2} + \dfrac{1}{2}\|(I- 2M)\Pi_{{\mbox{Im}(A)}}\| + \sqrt{\dfrac{\xi}{2}} \leq \tilde{\zeta}(\rho) + \sqrt{\dfrac{\xi}{2}},$ which concludes the proof of (ii). As for the third case (iii), note that $\epsilon_k\leq \xi$ holds if $\|\Pi_{{\mathcal{N}(A^\top)}}(v^k - v^{k-1})\| / \|v^k - v^{k-1}\|\geq \sqrt{1-\xi^2/\|M\|^2}$, as the latter inequality implies that $$\begin{aligned} \epsilon_k &= \dfrac{\|M\Pi_{{\mbox{Im}(A)}}(v^k - v^{k-1})\|}{\|F^{k}v^k - F^{k-1}v^{k-1}\|} \leq \|M\|\dfrac{\|\Pi_{{\mbox{Im}(A)}}(v^k - v^{k-1})\|}{\|v^k - v^{k-1}\|} \leq \xi. 
\end{aligned}$$ Supposing that there exists a non-empty set $\mathcal{K}$ such that $\delta_k \geq 1-\xi$ and $\|\Pi_{{\mathcal{N}(A^\top)}}(v^k - v^{k-1})\| / \|v^k - v^{k-1}\|\geq \sqrt{1-\xi^2/\|M\|^2}$ holds for all $k\in\mathcal{K}$, we have $\underline{\zeta} \geq \max_{k\in\mathcal{K}}\; \delta_k - \epsilon_k \geq 1-2\xi$ regardless of the choice of $\rho$. Proof of Lemma \[lem:optimal\_fixed\_point\] -------------------------------------------- Let $(x^\star,\, z^\star,\, u^\star)$ denote a fixed-point of  and let $\mu$ be the Lagrange multiplier associated with the equality constraint in . For the optimization problem , the Karush-Kuhn-Tucker (KKT) optimality conditions [@Nesterov03] are $$\begin{aligned} 0 & = Q x + q + A^\top \mu,\quad\;\; z \geq 0,\\ 0 &=Ax + z - b,\quad\quad\quad 0 =\mbox{diag}(\mu) z. \end{aligned}$$ Next we show that the KKT conditions hold for the fixed-point $(x^\star,\, z^\star,\, u^\star)$ with $\mu^\star = \rho u^\star$. From the $u$-iterations we have $0 = \alpha (A x^\star-c)-(1-\alpha)z^\star+z^\star = \alpha( A x^\star +z^\star-c)$. It follows that $z^\star$ is given by $z^{\star} = \mbox{max}\{0,-\alpha (A x^{\star} + z^\star -c)+z^\star-u^{\star}\} = \mbox{max}\{0, z^\star-u^{\star}\} \geq 0$. The $x$-iteration then yields $0 = Qx^\star +q + \rho A^\top(Ax^\star + z^\star -c + u^\star) = Qx^\star +q + A^\top \mu^\star$. Finally, from $z^\star \geq 0$ and the $z$-update, we have that $z^\star_i > 0 \Rightarrow u^\star_i = 0$ and $z^\star_i = 0 \Rightarrow u^\star_i \geq 0$. Thus, $\mbox{diag}(\mu^\star) z^\star = 0$.
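The KKT conditions above can be verified on a small instance. The toy QP below is our own example, not one from the paper; by symmetry it has the closed-form solution $x^\star = (0.5, 0.5)$ with multiplier $\mu^\star = 1.5$ and zero slack:

```python
import numpy as np

# Toy QP: min 0.5||x||^2 - 2*1'x  s.t.  x1 + x2 <= 1; by symmetry x* = (0.5, 0.5)
Q = np.eye(2); q = np.array([-2.0, -2.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])

x_star = np.array([0.5, 0.5])
z_star = b - A @ x_star                 # slack in Ax + z = b
mu_star = np.array([1.5])               # from stationarity: Qx* + q + A' mu* = 0

assert np.allclose(Q @ x_star + q + A.T @ mu_star, 0.0)   # stationarity
assert np.all(z_star >= 0.0) and np.all(mu_star >= 0.0)   # primal/dual feasibility
assert np.allclose(np.diag(mu_star) @ z_star, 0.0)        # complementary slackness
```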
Proof of Theorem \[thm:QP\_relaxation\_convergence\] ---------------------------------------------------- Taking the Euclidean norm of  and applying the triangle inequality together with the operator-norm bound yields $$\begin{aligned} \left\Vert F^{k+1}v^{k+1} - F^{k} v^{k} \right\Vert \leq \dfrac{\vert \alpha \vert}{2}\left\Vert 2M - I \right\Vert \left\Vert v^{k} - v^{k-1} \right\Vert + \vert 1-\dfrac{\alpha}{2}\vert \left\Vert F^k v^k - F^{k-1} v^{k-1}\right\Vert. \end{aligned}$$ Note that since the $v^k$ are positive vectors we have $\left\Vert v^k - v^{k-1}\right\Vert\leq \left\Vert F^k v^k - F^{k-1} v^{k-1}\right\Vert$ and thus $$\begin{aligned} \label{eqn:Quadratic_linear_rate_relaxation} \dfrac{\left\Vert F^{k+1}v^{k+1} - F^{k} v^{k} \right\Vert}{\left\Vert F^k v^k - F^{k-1} v^{k-1} \right\Vert} \leq \underset{\zeta_R}{\underbrace{\left( \dfrac{\vert \alpha \vert}{2}\left\Vert 2M - I \right\Vert + \left\vert 1-\dfrac{\alpha}{2} \right\vert \right)}} .\end{aligned}$$ Note that $\rho \in {\mathcal{R}^{}}_{++}$ and recall from the proof of Theorem \[thm:QP:linear\_rate\] that the $0$-eigenspace of $M$ can be disregarded. Therefore, $\dfrac{1}{2}\left\Vert 2M-I\right\Vert_{{\mathcal{N}(M)}^\bot} \in [0,\dfrac{1}{2})$. Defining $\tau \triangleq \dfrac{1}{2}\left\Vert 2M- I\right\Vert_{{\mathcal{N}(M)}^\bot}$ we have $$\zeta_R=\alpha \tau + \left\vert 1- \dfrac{\alpha}{2}\right\vert < \dfrac{\alpha}{2} + \left\vert 1- \dfrac{\alpha}{2}\right\vert.$$ Hence, we conclude that for $\rho \in {\mathcal{R}^{}}_{++}$ and $\alpha\in (0, 2]$, it holds that $\zeta_R <1$, which implies that  converges linearly to a fixed-point. By Lemma \[lem:optimal\_fixed\_point\] this fixed-point is also a global optimum of . Now, denote $w_-^{k+1} \triangleq F^{k+1} v^{k+1}- F^{k} v^{k}$ and $w_+^{k+1}\triangleq v^{k+1}-v^k$.
Following the same steps as in the proof of Proposition \[prop:w\_residuals\], it is easily verified that $w_-^{k+1} = u^{k+1}-u^k + z^k - z^{k+1}$ and $w_+^{k+1}=u^{k+1}-u^k +z^{k+1}- z^k$, which, combined with , yields $$\begin{aligned} s^{k+1} = \rho\dfrac{A^\top}{2} (w_+^{k+1}-w_-^{k+1}), \quad r^{k+1} = \dfrac{1}{2}w_+^{k+1} + \dfrac{2-\alpha}{2\alpha}w_-^{k+1}.\end{aligned}$$ We only upper-bound $\Vert r^{k+1}\Vert$, since an upper bound for $\Vert s^{k+1}\Vert$ was already established in . Taking the Euclidean norm of the second equality above and using the triangle inequality yields $$\begin{aligned} \label{eqn:r_to_fv_relaxed} \Vert r^{k+1}\Vert \leq \dfrac{1}{2} \Vert w_+^{k+1}\Vert + \dfrac{2-\alpha}{2\alpha} \Vert w_-^{k+1}\Vert \leq \dfrac{1}{\alpha} \Vert w_-^{k+1}\Vert.\end{aligned}$$ The R-linear convergence of the primal and dual residuals now follows from the linear convergence rate of $F^{k+1}v^{k+1}-F^kv^k$ and the bounds in  and . Proof of Theorem \[thm:QP\_relaxation\_optimal\_factor\] -------------------------------------------------------- Define $$\begin{aligned} \label{eqn:QP_Convergencefactor_relaxation} &\zeta_R( \rho, \alpha ,\lambda_i(AQ^{-1}A^\top)) = \alpha \left\vert \dfrac{\rho \lambda_i(AQ^{-1}A^\top)}{1 + \rho \lambda_i(AQ^{-1}A^\top)} - \dfrac{1}{2}\right\vert + 1- \dfrac{\alpha}{2},\\ &\zeta_R^\star = \underset{\rho, \alpha}{\min} \, \underset{i}{\max}\{\zeta_R(\rho, \alpha ,\lambda_i(AQ^{-1}A^\top))\}. \end{aligned}$$ Since $\left\vert \dfrac{\rho \lambda_i(AQ^{-1}A^\top)}{1 + \rho \lambda_i(AQ^{-1}A^\top)} - \dfrac{1}{2}\right\vert < \dfrac{1}{2}$, it follows that $\zeta_R(\rho, \alpha, \lambda_i(AQ^{-1}A^\top))$ is monotone decreasing in $\alpha$. Thus, $\zeta_R(\rho, \alpha, \lambda_i(AQ^{-1}A^\top))$ is minimized by $\alpha^\star=2$.
To determine $$\begin{aligned} \label{eqn:QP_optProblem_relaxation} \rho^{\star} = \underset{\rho}{\mbox{argmin }} \max_i\left\{ \zeta_R(\rho, 2,\lambda_i(AQ^{-1}A^\top))\right\}, \end{aligned}$$ we note that  and  are equivalent up to an affine transformation, hence they have the same minimizer $\rho^\star$. It follows from the proof of Theorem \[thm:QP\_optimal\_factor\] that ${\rho^\star = 1/\sqrt{\lambda_1(AQ^{-1}A^\top)\; \lambda_n(AQ^{-1}A^\top)}}$. Using $\rho^\star$ in  results in the convergence factor . For given $A$, $Q$, and $\rho$, we can now find the range of values of $\alpha$ for which  has a smaller convergence factor than , [i.e. ]{}for which $\zeta_R - \zeta < 0$. By  and  it holds that $$\begin{aligned} \zeta_R - \zeta = \dfrac{\alpha}{2}\left\Vert 2M - I \right\Vert + 1-\dfrac{\alpha}{2}- \dfrac{1}{2}\left\Vert 2M - I\right\Vert - \dfrac{1}{2} = (1 - \alpha )\left(\dfrac{1}{2} - \dfrac{1}{2}\left\Vert 2M - I\right\Vert \right).\end{aligned}$$ Since $\left\Vert 2M - I\right\Vert < 1$ on the relevant subspace, this means that $\zeta_R - \zeta<0$ when $\alpha>1$. Therefore, the iterates produced by the relaxed algorithm  have a smaller convergence factor than the iterates produced by  for all values of the relaxation parameter $\alpha \in (1,2]$. This concludes the proof. Proof of Theorem \[thm:QP\_optimal\_preconditioning\] ----------------------------------------------------- Note that the non-zero eigenvalues of $LAQ^{-1}A^\top L$ are the same as the ones of $R_q^\top A^\top W A R_q$ where $W=L^2$ and $R_q^\top R_q=Q^{-1}$ is its Cholesky factorization [@HoJ:85].
Defining $\lambda_n(R_q^\top A^\top W A R_q)$ and $\lambda_1(R_q^\top A^\top W A R_q)$ as the largest and smallest nonzero eigenvalues of $LAQ^{-1}A^\top L$, the optimization problem we aim to solve can be formulated as $$\label{eqn:QP_optimal_scaling_proof} \begin{aligned} \begin{array}{ll} \underset{\bar\lambda\in{\mathcal{R}^{}},\;\underline\lambda\in{\mathcal{R}^{}},\;l\in{\mathcal{R}^{m}}}{\mbox{minimize}} & {\bar\lambda} / {\underline\lambda}\\ \mbox{subject to}& \bar\lambda > \lambda_n(R_q^\top A^\top W A R_q),\\ & \lambda_1(R_q^\top A^\top W A R_q) > \underline\lambda,\\ & W=\mbox{diag}(w),\; w > 0. \end{array} \end{aligned}$$ In the proof we show that the optimization problem is equivalent to . Define $T(\bar\lambda) \triangleq \bar\lambda I - R_q^\top A^\top W A R_q$. First observe that $\bar\lambda \geq \lambda_n(R_q^\top A^\top W A R_q)$ holds if and only if $T(\bar\lambda) \in {\mathcal{S}_{+}^{n}}$, which proves the first inequality in the constraint set . To obtain a lower bound on $\lambda_1(R_q^\top A^\top W A R_q)$ one must disregard the zero eigenvalues of $R_q^\top A^\top W A R_q$ (if they exist). This can be performed by restricting ourselves to the subspace orthogonal to ${\mathcal{N}(R_q^\top A^\top W A R_q)} = {\mathcal{N}(A R_q)}$. In fact, letting $s$ be the nullity of $A R_q$ (or, equivalently, of $A$) and letting the columns of $P\in{\mathcal{R}^{n\times (n-s)}}$ form a basis of $\mbox{Im}(R_q^\top A^\top)$, we have that $\underline\lambda \leq \lambda_1$ if and only if $x^\top P^\top T(\underline\lambda) Px \leq 0$ for all $x\in{\mathcal{R}^{n-s}}$. Note that for the case when the nullity of $A$ is $0$ ($s=0$), all the eigenvalues of $R_q^\top A^\top W A R_q$ are strictly positive and, hence, one can set $P=I$. We conclude that $\underline\lambda \leq \lambda_1(R_q^\top A^\top W A R_q)$ if and only if $P^\top\left(R_q^\top A^\top W A R_q-\underline\lambda I \right) P \in {\mathcal{S}_{+}^{n-s}}$.
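The scaling problem being characterized here can be exercised on a toy full-rank instance, with brute-force grid search standing in for a proper SDP solver (a sketch; the matrix $A$ and the weight grid are arbitrary, and $Q=I$ so that $R_q=I$ and $P=I$):

```python
import numpy as np
from itertools import product

A = np.array([[1.0, 0.0],
              [0.0, 10.0],
              [1.0, 1.0]])   # full column rank, so s = 0 and P = I

def eig_ratio(w):
    # lambda_max / lambda_min of R_q^T A^T diag(w) A R_q  (here R_q = I)
    ev = np.linalg.eigvalsh(A.T @ np.diag(w) @ A)
    return ev[-1] / ev[0]

ratio_unscaled = eig_ratio(np.ones(3))

# Only the ratios of the weights matter (rescaling w rescales both eigenvalues),
# so a coarse positive grid suffices for a demonstration.
grid = np.logspace(-2, 2, 21)
best = min(eig_ratio(np.array(w)) for w in product(grid, repeat=3))

print(best <= ratio_unscaled)  # diagonal scaling can only improve the ratio
```

Since $w=(1,1,1)$ is on the grid, the grid-searched ratio is never worse than the unscaled one; the convex reformulation in the proof finds the optimal $w$ exactly instead of by enumeration.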
Note that $\lambda_1(R_q^\top A^\top W A R_q)>0$ can be chosen arbitrarily by scaling $W$, which does not affect the ratio $\lambda_n(R_q^\top A^\top W A R_q) / \lambda_1(R_q^\top A^\top W A R_q)$. Without loss of generality, one can suppose $\underline\lambda^\star = 1$ and thus the lower bound on $\lambda_1(R_q^\top A^\top W A R_q)\geq \underline\lambda^\star = 1$ corresponds to the last inequality in the constraint set of . Observe that the optimization problem now reduces to minimizing $\bar\lambda$. The proof concludes by rewriting  as , which is a convex problem. Proof of Proposition \[prop:QP\_when\_careful\_1\] -------------------------------------------------- Assuming ${F^{k+1} = F^{k} = -I}$,  reduces to $v^{k+1}-v^k = \left( (1-\alpha)I + \alpha M\right) (v^k-v^{k-1})$. By taking the Euclidean norm of both sides and applying the Cauchy-Schwarz inequality, we find $$\begin{aligned} \Vert v^{k+1} - v^{k} \Vert \leq \Vert (1-\alpha)I + \alpha M \Vert \Vert v^k-v^{k-1}\Vert. \end{aligned}$$ Since the eigenvalues of $M$ are $\dfrac{\rho \lambda_i(AQ^{-1}A^\top)}{1+\rho \lambda_i(AQ^{-1}A^\top)}$, the convergence factor $\zeta_R$ is $$\begin{aligned} \zeta_R(\rho,\alpha,\lambda_i(AQ^{-1}A^\top)) &= 1-\alpha + \alpha \dfrac{\rho \lambda_i(AQ^{-1}A^\top)}{1+\rho \lambda_i(AQ^{-1}A^\top)}. \end{aligned}$$ It is easy to check that the smallest value of $\vert \zeta_R\vert$ is obtained when $\alpha=1$ and $\rho\rightarrow 0$. Since $\alpha=1$ the relaxed ADMM iterations  coincide with  and consequently $\zeta=\zeta_R$. Proof of Proposition \[prop:QP\_when\_careful\_2\] -------------------------------------------------- The proof follows similarly to the one of Proposition \[prop:QP\_when\_careful\_1\] but with ${F^{k+1} = F^{k} = I}$. [^1]: E. Ghadimi, A. Teixeira, and M. Johansson are with the ACCESS Linnaeus Center, Electrical Engineering, Royal Institute of Technology, Stockholm, Sweden. [{euhanna, andretei, mikaelj}@ee.kth.se]{}. I.
Shames is with the Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Australia. [[email protected]]{}. This work was sponsored in part by the Swedish Foundation for Strategic Research, SSF, and the Swedish Research Council, VR. [^2]: The letters Q and R stand for quotient and root, respectively.
--- abstract: '[ We use the Endpoint model for exclusive hadronic processes to study Compton scattering of the proton. The parameters of the Endpoint model are fixed using the data for $F_1$ and the ratio of the Pauli and Dirac form factors ($F_2/F_1$) and then used to obtain numerical predictions for the differential scattering cross section. We studied Compton scattering at fixed $\theta_{CM}$ in the $s \sim t \gg \Lambda_{QCD}$ limit and at fixed $s$ much larger than $t$. We observed that the calculations in the Endpoint Model give a good fit to the experimental data in both regions. ]{}' author: - | Sumeet Dagaonkar[^1] \ *[Department of Physics, Indian Institute of Technology, Kanpur 208016, India]{}\ * title: '****' --- Though we have a well-understood QCD Lagrangian, predicting processes involving hadrons is a difficult task. The interaction of a high-energy probe with quarks or gluons in a hadron requires us to understand physics which is non-perturbative. While in processes like deep inelastic scattering we are able to successfully use factorization - separating the non-perturbative part into a parton distribution function while the rest is calculated perturbatively - such simplifications are understood to be much more difficult in the case of exclusive processes [@Isgur:1984jm]. Theoretical models aimed at explaining such processes have been around for four decades now, and the ideas can be split into two major camps - methods involving hard gluon exchanges within the constituents (short distance model) and methods without hard exchanges (soft or Feynman mechanism). The Endpoint Model (EP) used in this paper combines the idea of the soft mechanism with a model of the hadron wavefunction which constrains the transverse momenta of confined quarks. The exclusive process of interest in this paper is Real Compton scattering ($p\gamma {\rightarrow}p\gamma$).
The first measurements of Compton scattering were made at Cornell [@Shupe:1979vg], where the differential cross section $d\sigma/dt$ was measured and found to show a scaling of $1/s^6$. However, more recent measurements at JLab [@Danagoulian:2007gs] have shown that the scaling goes more like $1/s^{8.0 \pm 0.2}$. In recent years, experiments using polarization transfer [@Fanelli:2015eoa] have also given measurements of the transverse polarization transfer $K_{LS}$ and the longitudinal polarization transfer $K_{LL}$. The first theoretical predictions for the scaling behaviour of Compton scattering appeared in [@Brodsky:1973kr; @Matveev:1973ra]. They predicted that $ d\sigma/dt|_{\mathrm{fixed\, t/s}} \propto 1/s^{6} f(t/s)$ using simple constituent counting ideas. Recent calculations in perturbative QCD (short distance model) [@Brooks:2000nb; @Thomson:2006ny] using this formalism give predictions an order of magnitude lower than the experimental data. However, it is understood that the perturbative calculations are only applicable at asymptotically high energies not explored at existing experimental facilities. The soft mechanism was used by Diehl et al. [@Diehl:1998kh] in calculations involving generalized parton distribution functions (GPD), while Miller [@Miller:2004rc] calculated the handbag diagram in the constituent quark model (CQM). The former work was shown to be equivalent to a sum of overlaps of light cone wave functions for all Fock states. For the leading Fock state, the pole structure leads to a similar endpoint dominance as obtained in our model. While the GPD based analysis agrees with some features of the data, the scaling behaviour is not consistent with the latest data. Work by Kivel and Vanderhaeghen [@Kivel:2013sya; @Kivel:2015vwa] on Compton scattering unifies the short distance and the soft mechanism using Soft collinear effective theory.
The latest results on polarization transfer measurements [@Fanelli:2015eoa] show that, while $K_{LS}$ agrees well with the results of pQCD [@Thomson:2006ny], GPDs [@Huang:2001ej], the CQM [@Miller:2004rc] and SCET [@Kivel:2015vwa], the $K_{LL}$ measurements have been unexpectedly large and do not agree with any of the theoretical predictions. The Endpoint Model [@Dagaonkar:2014yea; @Dagaonkar:2015laa] applies to all exclusive hadronic processes and reproduces the quark counting rules at high energies. In the model, the dominant contributions involve struck quarks carrying a large fraction of the hadron’s momentum. The scaling is then completely dependent on the endpoint behaviour of the light cone wavefunctions, and it is thus possible to obtain the functional form of the wavefunction near the endpoint. After extracting the wavefunction of the proton from the $F_1$ data, the authors successfully used the wavefunction to understand the scaling behaviour of $pp$ scattering and the ratio of the Pauli and Dirac form factors ($F_2(Q^2)/F_1(Q^2)$) of the proton. These results motivated the author to attack the Compton scattering problem using the Endpoint Model. After introducing the Endpoint Model and setting up the framework, we will show in Section \[sec:1\] that the EP calculation for $d\sigma / dt$ obeys the scaling laws of [@Brodsky:1973kr; @Matveev:1973ra] at large $s$ in the $s \sim t \gg \Lambda_{QCD}$ limit, and also exhibits a scaling of $1/t^4$ at fixed $s$ much larger than $t$. A detailed numerical calculation in Section \[sec:2\] will help us determine the range of $s$ for which we may expect the scaling behaviour to set in, and we will also extend the model’s prediction into the low $Q^2$ region to compare with data. At asymptotic energies, we expect that pQCD contributions may dominate. However, as seen in the current analysis, a soft mechanism like the Endpoint model can be used to understand data which lies within experimental reach.
Compton scattering using the Endpoint model {#sec:1} =========================================== The diagrams allowed for Compton scattering under the Endpoint Model are given in Fig. \[fig:cs2diag\]. It can be noticed that the interaction between the struck quark and the photon mirrors the diagrams of Compton scattering with electrons. Kinematics ---------- In the above diagrams, the incoming proton is understood to be deflected by $q^{\mu} = (0,Q,0,0)$, where $q^{\mu} = q_{1}^{\mu} - q_{2}^{\mu}$. This allows us to use the same frame and kinematics for the proton as was used for the analysis of the Dirac and Pauli form factors [@Dagaonkar:2014yea; @Dagaonkar:2015laa], with $q=(0,Q,0,0), P=(\sqrt{Q^{2}/2+M_{P}^{2}},-Q/2,0,Q/2), P'=(\sqrt{Q^{2}/2+M_{P}^{2}},Q/2,0,Q/2)$. We can choose $q_1,q_2$ appropriately so that $\theta_{cm} \in [64^{\circ}, 130^{\circ}]$, which is the range of the data obtained at JLab [@Danagoulian:2007gs]. For $\theta_{cm} \approx 90^{\circ}$, $q_1=\left(Q/\sqrt{2},Q/2,0,-Q/2\right), q_2=\left(Q/\sqrt{2},-Q/2,0,-Q/2\right).$ Let us also define the various quark momenta that will be useful in our calculation, starting with a basis for transverse momenta: $ y^{\mu} =(0, 0, 1,0) = y'$ such that $ \hat P\cdot y=\hat P'\cdot y'=0$ and $n^{\mu} = (1/\sqrt{2}) (0, -1, 0, -1)$ such that $ \hat P\cdot n= 0$ and $n'^{\mu} = (1/\sqrt{2}) (0, 1, 0,-1)$ such that $\hat P'\cdot n'=0.$ Here $\hat P = \left(0,-1/\sqrt{2},0,1/\sqrt{2}\right)$ and $\hat P' = \left(0,1/\sqrt{2},0,1/\sqrt{2}\right)$ are the unit vectors along the directions of propagation of the incoming and outgoing protons, respectively.
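The frame above is easy to verify numerically. A minimal check with the metric $\mathrm{diag}(1,-1,-1,-1)$ (the values of $Q$ and $M_P$ are arbitrary test numbers):

```python
import numpy as np

# Minkowski metric, signature (+,-,-,-)
g = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ g @ b

Q, M = 3.0, 0.938                                 # arbitrary test values
P  = np.array([np.sqrt(Q**2/2 + M**2), -Q/2, 0.0,  Q/2])
Pp = np.array([np.sqrt(Q**2/2 + M**2),  Q/2, 0.0,  Q/2])
q1 = np.array([Q/np.sqrt(2),  Q/2, 0.0, -Q/2])    # theta_cm ~ 90 deg choice
q2 = np.array([Q/np.sqrt(2), -Q/2, 0.0, -Q/2])
q  = q1 - q2

assert np.allclose(q, [0.0, Q, 0.0, 0.0])         # q = q1 - q2 = (0,Q,0,0)
assert np.isclose(dot(q1, q1), 0.0)               # photons on shell
assert np.isclose(dot(q2, q2), 0.0)
assert np.isclose(dot(P, P), M**2)                # protons on shell
assert np.isclose(dot(Pp, Pp), M**2)
assert np.allclose(P + q, Pp)                     # momentum conservation
print(dot(q, q))                                  # t = -Q^2 = -9.0
```

All the stated properties of the frame (on-shell external particles, momentum transfer along the $x$ axis, $t=-Q^2$) check out to machine precision.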
The four momenta of the quarks are then given by $$\begin{aligned} k_{i}^{\mu} &= \left(k_{i}^{0},\; -x_{i}\frac{Q}{2}-\frac{k_{in}}{\sqrt{2}},\; k_{iy},\; x_{i}\frac{Q}{2}-\frac{k_{in}}{\sqrt{2}}\right){\nonumber}\\ k_{i}^{'\mu} &= \left(k_{i}^{'0},\; x'_{i}\frac{Q}{2}+\frac{k'_{in}}{\sqrt{2}},\; k'_{iy},\; x'_{i}\frac{Q}{2}-\frac{k'_{in}}{\sqrt{2}}\right). \label{eq:qmomentum}\end{aligned}$$ Endpoint Model Calculation -------------------------- The amplitude for the process can be written as $$\begin{aligned} \label{eq:amp} {{i\mkern1mu}}\mathcal{M} = {} & \int \prod\limits_{i} d^{4}k_{i}\, d^{4}k'_{i}\, (2\pi)^{4}\delta^{4}(k_{1}+k_{2}+k_{3}-P)\, (2\pi)^{4}\delta^{4}(k'_{1}+k'_{2}+k'_{3}-P'){\nonumber}\\ & \times \overline{\Psi}_{\alpha^{'}\beta^{'}\gamma^{'}}(k'_{i})\, \mathcal{M}_{\alpha'\beta'\gamma'\alpha\beta\gamma}^{\mu\nu}\, \Psi_{\alpha\beta\gamma}(k_{i})\, \epsilon^{*\mu}(q_{2})\,\epsilon^{\nu}(q_{1}),\end{aligned}$$ where $\Psi_{\alpha\beta\gamma}$ refers to the three-quark Bethe-Salpeter wavefunction, and the indices $\alpha,\beta,\gamma$ refer to the $u,u,d$ quarks carrying momenta $k_1,k_2,k_3$ respectively. The primed quantities refer to the outgoing proton. The $\mathcal{M}^{\mu\nu}$ in the above expression is taken as, $$\begin{aligned} \mathcal{M}_{\alpha'\beta'\gamma'\alpha\beta\gamma}^{\mu\nu}&=& \left[(-{{i\mkern1mu}}e_{u}\gamma^{\mu}) \frac{{{i\mkern1mu}}(\slashed{k}_1+\slashed{q}_1+m_q)}{(k_1+q_1)^2-m_q^2}(-{{i\mkern1mu}}e_{u}\gamma^{\nu}) + (-{{i\mkern1mu}}e_{u}\gamma^{\nu}) \frac{{{i\mkern1mu}}(\slashed{k}_1-\slashed{q}_2+m_q)}{(k_1-q_2)^2-m_q^2}(-{{i\mkern1mu}}e_{u}\gamma^{\mu}) \right]_{\alpha^{'}\alpha}{\nonumber}\\ && (2\pi)^{12}\delta^{4}(k_{1}+q-k'_{1}){{i\mkern1mu}}(\lambda\slashed{k}_{2}-m_{2})_{\beta^{'}\beta} \delta^{4}(k_{2}-k'_{2}){{i\mkern1mu}}(\lambda\slashed{k}_{3}-m_{3})_{\gamma^{'}\gamma}\delta^{4}(k_{3}-k'_{3}) {\nonumber}\\ &+&\left[(-{{i\mkern1mu}}e_{u}\gamma^{\mu}) \frac{{{i\mkern1mu}}(\slashed{k}_2+\slashed{q}_1+m_q)}{(k_2+q_1)^2-m_q^2}(-{{i\mkern1mu}}e_{u}\gamma^{\nu}) + (-{{i\mkern1mu}}e_{u}\gamma^{\nu}) \frac{{{i\mkern1mu}}(\slashed{k}_2-\slashed{q}_2+m_q)}{(k_2-q_2)^2-m_q^2}(-{{i\mkern1mu}}e_{u}\gamma^{\mu}) \right]_{\beta^{'}\beta}{\nonumber}\\ & & (2\pi)^{12}\delta^{4}(k_{2}+q-k'_{2}){{i\mkern1mu}}(\lambda\slashed{k}_{1}-m_{1})_{{\alpha}^{'}{\alpha}}\delta^{4}(k_{1}-k'_{1}) {{i\mkern1mu}}(\lambda\slashed{k}_{3}-m_{3})_{{\gamma}^{'}{\gamma}}\delta^{4}(k_{3}-k'_{3}) {\nonumber}\\ &+&\left[(-{{i\mkern1mu}}e_{d}\gamma^{\mu})
\frac{{{i\mkern1mu}}(\slashed{k}_3+\slashed{q}_1+m_q)}{(k_3+q_1)^2-m_q^2}(-{{i\mkern1mu}}e_{d}\gamma^{\nu}) + (-{{i\mkern1mu}}e_{d}\gamma^{\nu}) \frac{{{i\mkern1mu}}(\slashed{k}_3-\slashed{q}_2+m_q)}{(k_3-q_2)^2-m_q^2}(-{{i\mkern1mu}}e_{d}\gamma^{\mu}) \right]_{\gamma^{'}\gamma}{\nonumber}\\ && (2\pi)^{12}\delta^{4}(k_{3}+q-k'_{3}){{i\mkern1mu}}(\lambda\slashed{k}_{1}-m_{1})_{\alpha^{'}\alpha}\delta^{4}(k_{1}-k'_{1}){{i\mkern1mu}}(\lambda\slashed{k}_{2}-m_{2})_{\beta^{'}\beta}\delta^{4}(k_{2}-k'_{2}), \label{eq:m}\end{aligned}$$ where we have taken into account both diagrams in Fig. \[fig:cs2diag\] and the three terms represent the photon’s interactions with the $u,u,d$ quarks respectively. We would like to integrate over the $k_i^-, k_i^{'-}$ momenta in Eq. \[eq:m\] so as to replace the Bethe-Salpeter wavefunctions by light cone wavefunctions, using the approximations developed in [@Brodsky:1984vp]. The integrand has $k_i^-, k_i^{'-}$ dependence due to the propagators associated with the Bethe-Salpeter wavefunction and from the spectator quarks. The spectator quarks interact through soft gluons and behave like an effective diquark propagator. Its form would require a detailed analysis of the physics of this non-perturbative system. As a starting point, however, we use a simple model consisting of two non-interacting quarks given by $(\lambda \slashed{k}_2 - m)(\lambda \slashed{k}_3 - m)$, where $\lambda$ may be a scalar function of the spectator quark momenta ($k_2,k_3$). The complete expression for $\mathcal{M}^{\mu\nu}$ is assumed to be dominated by a region where the quarks are on-shell, which allows us to make the substitution $\kappa_i^- = k_i^{0} - x_i Q/\sqrt{2} = (m_i^2+\vec{k}^2_{\perp \, i})/(k_i^{0} + x_i Q/\sqrt{2})$. In this substitution, we have taken into account the energy scale dependence of the mass, which causes the effective mass to be $m^2_i\sim \Lambda^2$ for the spectator quarks and $m^2_i \sim$ few MeV for the struck quark.
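The on-shell substitution is just the light-cone mass-shell identity $(k^{0}-k_{z})(k^{0}+k_{z})=m^{2}+\vec{k}_{\perp}^{\,2}$, with $k_{z}$ playing the role of $x_i Q/\sqrt{2}$. A quick numerical check (arbitrary test momenta):

```python
import numpy as np

m, kx, ky = 0.3, 0.4, -0.2          # arbitrary quark mass and transverse momenta
for kz in np.linspace(-5.0, 5.0, 11):
    k0 = np.sqrt(m**2 + kx**2 + ky**2 + kz**2)   # put the quark on shell
    k_minus = k0 - kz                            # light-cone "minus" component
    # on-shell identity: k^- = (m^2 + k_perp^2) / (k^0 + k_z)
    assert np.isclose(k_minus, (m**2 + kx**2 + ky**2) / (k0 + kz))
print("on-shell light-cone relation verified")
```

The identity holds exactly, which is why the $k_i^-$ integration can be traded for the on-shell value in the dominant region.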
The momentum of each quark is conserved independently and, as per the definition of $i\mathcal{M}$, a factor of $\delta^{4}(P+q_1 - q_2 -P')$ has to be dropped in the above expression. Under these approximations, the amplitude \[eq:amp\] becomes $$\begin{aligned} {{i\mkern1mu}}\mathcal{M} &=& \int \prod\limits_{i} dx_{i}d\vec{k}_{\perp i} dx^{'}_{i}d\vec{k}'_{\perp i} \delta(x_{1}+x_{2}+x_{3}-1)\delta^{2}(k_{\perp 1}+k_{\perp 2}+k_{\perp 3}) \delta(x'_{1}+x'_{2}+x'_{3}-1){\nonumber}\\ & & \delta^{2}(k'_{\perp 1}+k'_{\perp 2}+k'_{\perp 3}) \epsilon^{*\mu}(q_{2})\epsilon^{\nu}(q_{1})\left[ \overline{Y^{\prime}}_{\alpha^{'}\beta^{'}\gamma^{'}}(x'_{i},\vec{k}'_{\perp i}) \times \mathcal{M}_{\alpha'\beta'\gamma'\alpha\beta\gamma}^{\mu\nu}\times Y_{\alpha\beta\gamma}(x_{i},\vec{k}_{\perp i}) \right]. \label{eq:bjs}\end{aligned}$$ The light cone wave function for the proton $Y(k_i)$ at leading twist and leading power of large $P$ is [@Ioffe; @Avdeenko], $$Y_{\alpha\beta\gamma}(k_{i},P) = \frac{f_{N}}{16\sqrt{2}N_{c}}\{ (\slashed{P}C)_{\alpha\beta}(\gamma_{5}N)_{\gamma}\mathcal{V} + (\slashed{P}\gamma_{5}C)_{\alpha\beta}N_{\gamma}\mathcal{A} + {{i\mkern1mu}}(\sigma_{\mu\nu}P^{\nu}C)_{\alpha\beta} (\gamma^{\mu}\gamma_{5}N)_{\gamma}\mathcal{T}\}.\label{eq:lipwavef}$$ Here $\mathcal{V,A,T}$ are scalar wavefunctions of the quark momenta, $N$ is the proton spinor, $N_{c}$ the number of colors, $C$ the charge conjugation operator, $\sigma_{\mu\nu}= \frac{{{i\mkern1mu}}}{2}[\gamma_{\mu},\gamma_{\nu}]$, and $f_{N}$ is a normalization. The functional dependence of the scalar functions near the endpoint region of $x_i$, the momentum fraction of the struck quark, was obtained in [@Dagaonkar:2014yea] by matching the EP calculation with the experimental scaling behaviour of $F_1$ of the proton.
We will carry over that form in this paper: $$\mathcal{V} = v\, (1-x_i)\, e^{-\vec{k}_{T}^{\,2}/\Lambda^{2}};\qquad \mathcal{A} = a\, (1-x_i)\, e^{-\vec{k}_{T}^{\,2}/\Lambda^{2}};\qquad \mathcal{T} = t\, (1-x_i)\, e^{-\vec{k}_{T}^{\,2}/\Lambda^{2}}. \label{eq:wf}$$ Here $\vec{k}_{T}$ represents the transverse momentum of the quark, which is suppressed by the exponential function in the above form and is understood to be cut off sharply for $|k_T| > \Lambda_{QCD}$. Scaling in Endpoint Model {#sec:scaling} ------------------------- Before presenting the endpoint model’s prediction for Compton scattering, we explicitly evaluate a part of the entire expression to extract the scaling behaviour to be expected for the fixed $\theta_{CM}$ and fixed $s$ cases. Let us concentrate on the diagram shown in Fig. \[fig:cs2diag\], in which the $d$ quark is struck. The delta functions in the last term of Eq. \[eq:m\] and Eq. \[eq:bjs\] imply, $ x_{1} = 1 - x_{2} - x_{3}; x'_{1} = 1 - x'_{2} - x'_{3}; k_{1n} = -k_{2n} - k_{3n}; k_{1y} = -k_{2y} - k_{3y} ; k'_{1n} = -k'_{2n} - k'_{3n}; k'_{1y} = -k'_{2y} - k'_{3y} ; k_{2y} = k'_{2y}; k_{3y} = k'_{3y} ;\, x'_{2}= x_{2}; x'_{3}= x_{3}; k_{3n}=Q(1-x'_{3})/\sqrt{2} ;\, k'_{3n}=Q(1-x_{3})/\sqrt{2}; k_{2n}=Q(-x'_{2})/\sqrt{2} ;\, k'_{2n}= Q(-x_{2})/\sqrt{2}. $ Integrating over the delta functions leads to a factor of $1/Q^2$. Using only the first term of the wavefunction Eq.
\[eq:lipwavef\], the amplitude is obtained as, $$\begin{aligned} i\mathcal{M} &=& \int dx_1 dx_2 dk_{1y} dk_{2y} \frac{1}{Q^2}\hspace{5pt} \epsilon^{*\mu}(q_{2})\epsilon^{\nu}(q_1)\bigg[[(C^{-1}\slashed{P'})_{\alpha'\beta'} (\overline{N}\gamma_5)_{\gamma'}\mathcal{V}^*]{\nonumber}\\ && \left[(-{{i\mkern1mu}}e_{d}\gamma^{\mu}) \frac{{{i\mkern1mu}}(\slashed{k_3}+\slashed{q_1}+m_q)}{(k_3+q_1)^2-m_q^2}(-{{i\mkern1mu}}e_{d}\gamma^{\nu}) + (-{{i\mkern1mu}}e_{d}\gamma^{\nu}) \frac{{{i\mkern1mu}}(\slashed{k_3}-\slashed{q_2}+m_q)}{(k_3-q_2)^2-m_q^2}(-{{i\mkern1mu}}e_{d}\gamma^{\mu}) \right]_{\gamma^{'}\gamma}{\nonumber}\\ && {{i\mkern1mu}}(\lambda\slashed{k}_{1}-m_{1})_{\alpha^{'}\alpha}{{i\mkern1mu}}(\lambda\slashed{k}_{2}-m_{2})_{\beta^{'}\beta} [(\slashed{P}C)_{\alpha\beta}(\gamma_{5}N)_{\gamma}\mathcal{V}]+\cdots \bigg]\end{aligned}$$ The experimentally measured quantity is the unpolarized differential cross section $d\sigma/dt = \frac{1}{16\pi(s-m_p^2)^2}\,\frac{1}{4}\sum\vert\mathcal{M}\vert^2$ (the integrations in the complex conjugate are over the hatted variables): $$\begin{aligned} \frac{d\sigma}{dt} &=& \frac{1}{16\pi(s-m_p^2)^2}\frac{1}{4}\int dx_1 dx_2 dk_{1y} dk_{2y} \frac{1}{Q^2} \int d\hat{x}_1 d\hat{x}_2 d\hat{k}_{1y} d\hat{k}_{2y} \frac{1}{Q^2}{\nonumber}\\ &&\bigg[\mathrm{Tr}[(C^{-1}(\slashed{P'})(\lambda\slashed{k}_{2}-m_{2})(\slashed{P}C)^\intercal(\lambda\slashed{k}_{1}-m_{1})^\intercal] \mathrm{Tr}[(C^{-1}(\slashed{P'})(\lambda\slashed{\hat{k}}_{2}-m_{2})(\slashed{P}C)^\intercal(\lambda\slashed{\hat{k}}_{1}-m_{1})^\intercal]^* {\nonumber}\\ && \mathrm{Tr}\bigg[(\slashed{P'}+m_p)\gamma_5[ \frac{\gamma^{\mu} (\slashed{k}_3+\slashed{q}_1+m_q)\gamma^{\nu}}{(k_3+q_1)^2-m_q^2} + \frac{ \gamma^{\nu}(\slashed{k}_3-\slashed{q}_2+m_q)\gamma^{\mu}}{(k_3-q_2)^2-m_q^2} ]\gamma_5 (\slashed{P}+m_p)\gamma_5 {\nonumber}\\ && [\frac{\gamma^{\nu'} (\slashed{\hat{k}}_3+\slashed{q}_1+m_q)\gamma^{\mu'}}{(\hat{k}_3+q_1)^2-m_q^2} + \frac{
\gamma^{\mu'}(\slashed{\hat{k}}_3-\slashed{q}_2+m_q)\gamma^{\nu'}}{(\hat{k}_3-q_2)^2-m_q^2}]\gamma_5\bigg] e^4_{d}\mathcal{V}^*(k'_i)\mathcal{V}(k_i)\mathcal{V}^*(\hat{k}'_i)\mathcal{V}(\hat{k}_i){\nonumber}+ \cdots \bigg] \\ && \sum\limits_{polarization} \epsilon^*_{\mu}(q_2)\epsilon_{\mu'}(q_2) \sum\limits_{polarization} \epsilon^*_{\nu}(q_1)\epsilon_{\nu'}(q_1). \label{eq:dsdt}\end{aligned}$$ We can integrate over the variables after plugging in the wavefunction from Eq. \[eq:wf\]. Our calculation shows scaling behaviour in two limits: for $s \sim t \gg \Lambda_{QCD}$ and for fixed $s$ much larger than $t$. In the $s \sim t \gg \Lambda_{QCD}$ limit, the leading order contributions give $$\begin{aligned} \sum\vert\mathcal{M}\vert^{2} &\sim \int dx_1\, dx_2\, dk_{1y}\, dk_{2y}\, d\hat{x}_1\, d\hat{x}_2\, d\hat{k}_{1y}\, d\hat{k}_{2y} \left(\frac{1}{Q^{2}}\right)^{2}\left((P\cdot P')(k_1\cdot k_2)+\ldots\right){\nonumber}\\ &\quad\times \left((P\cdot P')(\hat{k}_1\cdot \hat{k}_2)+\ldots\right) (1-x_3)^2\, (1-\hat{x}_{3})^2 + \ldots{\nonumber}\\ &\sim \left(\frac{1}{Q^{2}}\right)^{2}\left(Q^{2}\right)^{2} \cdots \sim \frac{1}{Q^{8}} \sim \frac{1}{s^{4}}.\end{aligned}$$ Thus, after including the flux factor $1/16\pi(s-m_p^2)^2$, in the large $s$ limit we obtain a scaling behavior of $d\sigma/dt \sim 1/s^{6}$, as expected from the quark counting rules. In order to analyse the differential cross section for fixed $s$ when $s > t$, we have to alter the photon momenta defined specifically for $\theta_{CM} = 90^{\circ}$ above and instead use $ q_1=\left(Q/\sqrt{2}, Q/2,0,f(s,Q)\right)$, $ q_2=\left(Q/\sqrt{2},-Q/2,0,f(s,Q)\right)$. The definition of $s = (P+q_1)^2$ can be used to find the functional form of $f(s,Q)$. To the leading order in $s$, it can be shown that $f(s,Q) \sim \pm s/Q$. In the $s > t $ limit, the leading order contributions are now $$\begin{aligned} \sum\vert\mathcal{M}\vert^{2} &\sim \int dx_1\, dx_2\, dk_{1y}\, dk_{2y}\, d\hat{x}_1\, d\hat{x}_2\, d\hat{k}_{1y}\, d\hat{k}_{2y} \left(\frac{1}{Q^{2}}\right)^{2}\left((P\cdot P')(k_1\cdot k_2)+\ldots\right){\nonumber}\\ &\quad\times \left((P\cdot P')(\hat{k}_1\cdot \hat{k}_2)+\ldots\right) (1-x_3)^2\, (1-\hat{x}_{3})^2 + \ldots{\nonumber}\\ &\sim \left(\frac{1}{Q^{2}}\right)^{2}\left(Q^{2}\right)^{2} \cdots \sim \frac{s^{2}}{Q^{8}} \sim \frac{s^{2}}{t^{4}},\end{aligned}$$ so that at fixed $s$ the differential cross section scales as $d\sigma/dt \sim 1/t^{4}$. Comparing Compton scattering in Endpoint Model with experimental data {#sec:2} ===================================================================== The full prediction of the Endpoint Model for Compton scattering involves substituting the full expressions Eq. \[eq:m\], \[eq:lipwavef\] into the expression Eq. \[eq:dsdt\].
The expression involves multiple traces over gamma matrices, which were handled by the Mathematica package FEYNCALC [@Mertig:1990an]. The resulting expression contains thousands of terms for each combination of the wavefunctions $\mathcal{V,A,T}$. Due to the large number of terms, analytic evaluation would be cumbersome, and the integration is instead performed with a Monte Carlo routine (VEGAS [@Lepage:1977sw]). In the previous work on the Endpoint Model [@Dagaonkar:2014yea], the authors concentrated on explaining the scaling behaviour of exclusive hadronic processes using a functional form of the wavefunction. In the current paper, we extended the above work by using $\chi^2$ minimization to extract the free parameters of the model. This is essential when comparing the magnitude of the Endpoint Model prediction for Compton scattering with data. Using the data for $F_1$ [@Sill:1992qw] and $F_2/F_1$ [@Puckett:2010ac; @Puckett:2011xg] (at $Q^2 \gtrsim 5.5 \mathrm{\,GeV}^2$) and the EP prediction in [@Dagaonkar:2015laa], the minimization gives the values of the constants $v,a,t$ from Eq. \[eq:wf\], the quark mass $m$, and the factor $\lambda$ for the model of spectator quarks. The constants obtained in the above minimization carry over to all the processes that the EP may be applied to. At fixed $s$ much larger than $t$, we observed that the experimental data show a scaling behaviour of $1/t^4$ at lower angles. We carried out EP calculations at $s$ = $6.79$, $8.90$, $10.92$ $\mathrm{\, GeV}^2$ and observed that the scaling is correctly reproduced by the model, as was also seen in the calculation in Sec. \[sec:scaling\]. We can see in Fig. \[fig:fixs\] that there is good agreement between the data and our EP prediction at the above energies, which improves as we increase the $s$ of the data. The rise in $d\sigma / dt $ at larger angles is, however, not captured by the EP calculation.
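The VEGAS step above amounts to Monte Carlo estimation of a multi-dimensional integral. A plain (non-adaptive) Monte Carlo sketch on a toy endpoint-suppressed integrand — the integrand, its domain, and the exact value are illustrative only, not the model's actual $d\sigma/dt$ integrand:

```python
import numpy as np

rng = np.random.default_rng(0)

def integrand(x, kT):
    # Toy stand-in with the endpoint/transverse structure of the wavefunction:
    # (1 - x)^2 * exp(-kT^2) on the unit square
    return (1.0 - x)**2 * np.exp(-kT**2)

N = 200_000
x, kT = rng.random(N), rng.random(N)
vals = integrand(x, kT)
estimate = vals.mean()                    # hypercube volume = 1
std_err = vals.std() / np.sqrt(N)

# Exact: int_0^1 (1-x)^2 dx * int_0^1 exp(-k^2) dk = (1/3) * 0.7468241...
exact = (1.0 / 3.0) * 0.7468241328
print(estimate, "+/-", std_err, "exact:", exact)
```

VEGAS improves on this by adaptively concentrating samples where the integrand is large, which matters for sharply endpoint-dominated integrands like the ones above.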
Our choice of $\theta_{CM} = 90^{\circ}$ in the fixed $\theta_{CM}$ analysis above was influenced by this disagreement. For the fixed $\theta_{cm}$ analysis in the $s \sim t$ region, the expected scaling behaviour from the quark counting rules [@Brodsky:1973kr; @Matveev:1973ra] was $1/s^6$ and is not seen in the experimental data, which shows a scaling of $1/s^8$ [@Danagoulian:2007gs]. We evaluate the integral in Eq. \[eq:dsdt\] for a range of $Q^2$ at $\theta_{CM} = 90^{\circ}$ and we observe in the resulting plot (Fig. \[fig:s6\]) that EP shows the above scaling behaviour of $1/s^6$ after we reach $s\sim 25 \mathrm{\, GeV}^2$. At the experimental energy levels, though a $1/s^8$ scaling was not observed, there was a remarkable match between the EP predictions and the experimental data. ![Plot of $\frac{d\sigma}{dt}$ nbarns/GeV$^2$ vs t for $s = 6.79, 8.90, 10.92$ GeV$^2$ and $m=0.29 \mathrm{\, GeV}, \lambda = 1/2, v = -16, a= 0, t = 45$[]{data-label="fig:fixs"}](s679dsdtvst.pdf "fig:"){width="3.8in"}\ ![](s890dsdtvst.pdf "fig:"){width="3.8in"}\ ![](s1092dsdtvst.pdf "fig:"){width="3.8in"} ![EP evaluation of $d\sigma/dt$ $\frac{\mathrm{nbarns}}{\mathrm{GeV}^2}$ vs $s$ GeV$^2$ for $m=0.29 \mathrm{\, GeV}, \lambda = 1/2, v = -16, a= 0, t = 45$[]{data-label="fig:s6"}](dsdtresult.pdf){width="3.8in"} Conclusions =========== The Endpoint model combines the soft mechanism and the nature of the transverse momenta of a quark in a hadron to study scaling behaviour in its exclusive processes.
Using the model to calculate exclusive processes leads to expressions dominated by the endpoint region of the wavefunction; this helps us extract the nature of the wavefunction. Specifically for the proton, using one set of data ($F_1$ data) to obtain the wavefunction, the scaling behaviour of $F_2/F_1$ of the proton and of $pp$ scattering was successfully obtained. The successes of the Endpoint model led us to the problem of real Compton scattering of the proton. The experimental data for Compton scattering [@Danagoulian:2007gs] show a scaling behaviour for the differential scattering cross section in two regions of $s,t$: a $1/s^8$ scaling for fixed $\theta_{CM}$ and $s \sim t \gg \Lambda_{QCD}$, and a $1/t^4$ scaling at fixed $s$ much larger than $t$. Fixing the free parameters in the Endpoint Model using the data for $F_1$ and $F_2/F_1$, we carried out numerical calculations for Compton scattering in these limits. For fixed $s$ larger than $t$, the Endpoint calculations show the $1/t^4$ scaling observed in data and match the data well at lower angles. In the fixed $\theta_{CM}$ and $s \sim t \gg \Lambda_{QCD}$ region, the Endpoint Model calculation for Compton scattering shows the elusive $1/s^6$ scaling that is expected from the constituent counting rules [@Brodsky:1973kr; @Matveev:1973ra]. Moreover, the Endpoint model also suggests that the $1/s^6$ scaling can be expected to be dominant after $s \sim 25 \, \mathrm{GeV}^2$. At the experimental values of $s$, though the experimentally observed scaling is absent in the Endpoint Model, an excellent agreement with experimental observations can be seen when extending the calculation to lower $s$ (lower $Q^2$). With the current work, we have shown once again that the Endpoint model is capable of explaining a range of scaling laws for hadronic processes.
It is capable of generating the quark counting rules [@Brodsky:1973kr; @Matveev:1973ra] and also suggests the energy scales at which one can expect these scaling laws to dominate. Fixing the parameters of the model using existing data, the Endpoint model is also able to give an excellent match with experimental data. As we go to higher angles in the fixed $s$ differential cross section measurements, the EP does not correctly predict the rise in $d\sigma / dt$; this has to be explored in future work. Further work will also be required for the evaluation of the polarization transfer variables ($K_{LL}$ and $K_{LS}$) under the Endpoint model. Acknowledgement {#acknowledgement .unnumbered} =============== The author would like to thank Prof. Pankaj Jain for useful discussions and comments. For the computational work in this paper, I would like to thank the Physics Department at IIT Kanpur for the facilities provided. The author would also like to thank Bogdan Wojtsekhowski for suggesting the Compton scattering problem. [unsrt]{} N. Isgur and C. H. Llewellyn Smith, Phys. Rev. Lett. [**52**]{} (1984) 1080. doi:10.1103/PhysRevLett.52.1080 M. A. Shupe [*et al.*]{}, Phys. Rev. D [**19**]{} (1979) 1921. doi:10.1103/PhysRevD.19.1921 A. Danagoulian [*et al.*]{} \[Hall A Collaboration\], Phys. Rev. Lett. [**98**]{} (2007) 152001 \[nucl-ex/0701068 \[NUCL-EX\]\]. C. Fanelli [*et al.*]{}, Phys. Rev. Lett. [**115**]{} (2015) no.15, 152001 doi:10.1103/PhysRevLett.115.152001 \[arXiv:1506.04045 \[nucl-ex\]\]. S. J. Brodsky and G. R. Farrar, Phys. Rev. Lett. [**31**]{} (1973) 1153. doi:10.1103/PhysRevLett.31.1153 V. A. Matveev, R. M. Muradian and A. N. Tavkhelidze, Lett. Nuovo Cim. [**7**]{} (1973) 719. doi:10.1007/BF02728133 T. C. Brooks and L. J. Dixon, Phys. Rev. D [**62**]{} (2000) 114021 doi:10.1103/PhysRevD.62.114021 \[hep-ph/0004143\]. R. Thomson, A. Pang and C. R. Ji, Phys. Rev. D [**73**]{} (2006) 054023 doi:10.1103/PhysRevD.73.054023 \[hep-ph/0602164\]. M. Diehl, T. Feldmann, R.
Jakob and P. Kroll, Eur. Phys. J. C [**8**]{} (1999) 409 doi:10.1007/s100529901100 \[hep-ph/9811253\]. G. A. Miller, Phys. Rev. C [**69**]{} (2004) 052201 doi:10.1103/PhysRevC.69.052201 \[nucl-th/0402092\]. N. Kivel and M. Vanderhaeghen, Nucl. Phys. B [**883**]{} (2014) 224 doi:10.1016/j.nuclphysb.2014.03.019 \[arXiv:1312.5456 \[hep-ph\]\]. N. Kivel and M. Vanderhaeghen, Eur. Phys. J. C [**75**]{} (2015) no.10, 483 doi:10.1140/epjc/s10052-015-3694-0 \[arXiv:1504.00991 \[hep-ph\]\]. H. W. Huang, P. Kroll and T. Morii, Eur. Phys. J. C [**23**]{} (2002) 301 Erratum: \[Eur. Phys. J. C [**31**]{} (2003) 279\] doi:10.1007/s100520100883 \[hep-ph/0110208\]. S. K. Dagaonkar, P. Jain and J. P. Ralston, Eur. Phys. J. C [**74**]{}, no. 8, 3000 (2014) doi:10.1140/epjc/s10052-014-3000-6 \[arXiv:1404.5798 \[hep-ph\]\]. S. Dagaonkar, P. Jain and J. P. Ralston, Eur. Phys. J. C [**76**]{} (2016) no.7, 368 doi:10.1140/epjc/s10052-016-4224-4 \[arXiv:1503.06938 \[hep-ph\]\]. S. J. Brodsky, C. R. Ji and M. Sawicki, Phys. Rev. D [**32**]{} (1985) 1530. V. M. Belyaev and B. L. Ioffe, Zh. Eksp. Teor. Fiz. [**83**]{} (1982) 876 \[Sov. Phys. JETP [**56**]{} (1982) 493\]. V. A. Avdeenko, V. L. Chernyak and S. A. Korenblit, Yad. Fiz. [**33**]{} (1981) 481. R. Mertig, M. Bohm and A. Denner, Comput. Phys. Commun. [**64**]{} (1991) 345. G. P. Lepage, J. Comput. Phys. [**27**]{} (1978) 192. doi:10.1016/0021-9991(78)90004-9 A. F. Sill [*et al.*]{}, Phys. Rev. D [**48**]{} (1993) 29. doi:10.1103/PhysRevD.48.29 A. J. R. Puckett, E. J. Brash, M. K. Jones, W. Luo, M. Meziane, L. Pentchev, C. F. Perdrisat and V. Punjabi [*et al.*]{}, Phys. Rev. Lett. [**104**]{} (2010) 242301 \[arXiv:1005.3419 \[nucl-ex\]\]. A. J. R. Puckett, E. J. Brash, O. Gayou, M. K. Jones, L. Pentchev, C. F. Perdrisat, V. Punjabi and K. A. Aniol [*et al.*]{}, Phys. Rev. C [**85**]{} (2012) 045203 \[arXiv:1102.5737 \[nucl-ex\]\]. [^1]: [email protected]
--- abstract: 'The backward scattering of TM-polarized light by a two-side-open subwavelength slit in a metal film is analyzed. We show that the reflection coefficient versus wavelength possesses a Fabry-Perot-like dependence that is similar to the anomalous behavior of transmission reported in the study \[Y. Takakura, Phys. Rev. Lett. **86**, 5601 (2001)\]. The open slit totally reflects the light at the near-to-resonance wavelengths. In addition, we show that the interference of incident and resonantly backward-scattered light produces in the near-field diffraction zone a spatially localized wave whose intensity is 10-10$^3$ times greater than that of the incident wave, but one order of magnitude smaller than the intra-cavity intensity. The amplitude and phase of the resonant wave at the slit entrance and exit are different from those of a Fabry-Perot cavity.' author: - 'S. V. Kukhlevsky$^a$, M. Mechler$^b$, L. Csapó$^c$, K. Janssens$^d$, O. Samek$^e$' title: 'Resonant backward scattering of light by a two-side-open subwavelength metallic slit' --- Introduction ============ The most impressive features of light scattering by subwavelength metallic nanostructures are the resonant enhancement and localization of the light by excitation of electron waves in the metal (for example, see refs. [@Neer; @Harr; @Betz1; @Ebbe; @Hess; @Nev; @Nev1; @Sarr1; @Cscher; @Port; @Trea; @Asti; @Pop; @Bozs; @Taka; @Yang; @Hibb; @Gar3; @Cao; @Barb; @Dykh; @Stee; @Shi; @Scho; @Naha; @Bouh; @Kuk2; @Gar1; @Lind; @Xie; @Decha; @Bori; @Fan; @Zay; @Li; @Lab; @Ben; @Monz; @Vigo]). In the last few years, a great number of studies have been devoted to nanostructures in metal films, namely a single aperture, a grating of apertures and an aperture surrounded by grooves.
Since the recent paper of Ebbesen and colleagues[@Ebbe] on the resonantly enhanced transmission of light observed for a 2D array of subwavelength holes in metal films, this resonant phenomenon has been intensively discussed in the literature.[@Ebbe; @Hess; @Nev; @Nev1; @Sarr1; @Cscher; @Port; @Trea; @Asti; @Li; @Pop; @Bozs; @Taka; @Yang; @Hibb; @Gar3; @Cao; @Barb; @Dykh; @Stee; @Shi; @Scho; @Naha; @Bouh; @Kuk2; @Gar1; @Lind; @Xie; @Decha; @Bori; @Fan; @Zay; @Monz; @Vigo] Such light scattering is usually called a Wood’s anomaly. In early studies, Hessel and Oliner showed that the resonances come from coupling between nonhomogeneous diffraction orders and eigenmodes of the grating.[@Hess] Neviere and co-workers discovered two other possible origins of the resonances.[@Nev; @Nev1] One appears when the surface plasmons of a metallic grating are excited. The other occurs when a metallic grating is covered by a dielectric layer, and corresponds to guided-mode resonances in the dielectric film. The roles of resonant Wood’s anomalies and Fano profiles in the resonant transmission were explained in the study.[@Sarr1] The phenomena involved in propagation through hole arrays are different from those connected with slit arrays. In a slit waveguide there is always a propagating mode inside the channel, while in a hole waveguide all modes are evanescent for hole diameters smaller than approximately a wavelength.
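The slit/hole contrast above can be made concrete with the standard waveguide cutoff formulas: a parallel-plate (slit) waveguide always supports a TEM mode with no cutoff, while the lowest TE$_{11}$ mode of a circular hole of diameter $d$ is cut off above $\lambda_c=\pi d/\chi'_{11}\approx 1.71\,d$. The following sketch is illustrative only (it is not part of any computation in this paper) and assumes these textbook formulas:

```python
import math

def slit_mode_propagates(wavelength_nm: float) -> bool:
    # A parallel-plate (slit) waveguide always carries a TEM mode:
    # there is no cutoff, so propagation occurs at every wavelength.
    return True

def hole_mode_propagates(wavelength_nm: float, diameter_nm: float) -> bool:
    # Lowest mode of a circular hole (TE11) is cut off for wavelengths
    # above lambda_c = pi * d / chi'_11, with chi'_11 ~ 1.841.
    lambda_c = math.pi * diameter_nm / 1.841  # ~ 1.706 * d
    return wavelength_nm < lambda_c

# A 200 nm hole is opaque to 800 nm light (lambda_c ~ 341 nm),
# while a 200 nm slit still carries a propagating mode.
print(hole_mode_propagates(800.0, 200.0))  # False
print(slit_mode_propagates(800.0))         # True
```

This is why, for the subwavelength slits considered below, intra-cavity propagation (and hence Fabry-Perot-like resonance) remains possible at optical wavelengths.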
In the case of slit apertures in a thick metal film, the transmission exhibits enhancement due to a purely geometrical reason, the resonant excitation of propagating modes inside the slit waveguide.[@Port; @Hibb; @Taka; @Asti; @Cao] At the resonant wavelengths, the transmitted field increases via the strong coupling of an incident wave with the waveguide modes, giving a Fabry-Perot-like behavior.[@Asti; @Taka; @Yang; @Kuk2] In the case of films whose thickness is too small to support the intra-cavity resonance, the extraordinary transmission can be caused by another mechanism, the generation of resonant surface plasmon polaritons and their coupling into radiation.[@Ebbe; @Port; @Hibb; @Gar1; @Cscher] Both physical mechanisms play important roles in the extraordinary transmission through arrays of two-side-open slits (transmission gratings) and the resonant reflection by arrays of one-side-open slits (reflection gratings). A model of trapped (waveguide) modes has recently been used to show that an array of two-side-open slits can operate like a reflection grating totally reflecting TE-polarized light.[@Bori] Surface plasmons and Rayleigh anomalies were invoked to explain the reflective properties of such gratings.[@Stee] The studies [@Taka; @Yang; @Gar3; @Kuk2] have pointed out that the origin of anomalous scattering of light by a grating of slits (holes) can be better understood by clarifying the transmission and reflection properties of a single subwavelength slit. Along this direction, it was already demonstrated that the intensity of TM-polarized light resonantly transmitted through a single slit can be 10-10$^3$ times higher than that of the incident wave[@Neer; @Harr; @Betz1; @Kuk2] and that the transmission coefficient versus wavelength possesses a Fabry-Perot-like behavior[@Taka; @Yang; @Kuk2]. Unfortunately, the reflection properties of the slit have received no attention in the literature.
The very recent study[@Bori] touched on this problem only by relating the total reflection of TE-polarized light by a grating of two-side-open slits to the properties of the independent slit emitters. In this article, the backward scattering of light by a two-side-open subwavelength slit is analyzed. To compare the properties of the light reflection with those of the extraordinary transmission[@Taka; @Yang; @Kuk2], we consider the scattering of TM-polarized light by a slit in a thick metallic film of perfect conductivity. From the latter metal property it follows that surface plasmons do not exist in the film. Such a metal can be described by the Drude model for which the plasmon frequency tends towards infinity. The traditional approach based on the Neerhoff and Mur solution of Maxwell’s equations is used in the computations.[@Neer; @Harr; @Betz1] The article is organized as follows. The theoretical background, numerical analysis and discussion are presented in Section II. The summary and conclusions are given in Section III. The brief description of the model is presented in the Appendix. Numerical analysis and discussion ================================= It is well known that when a light wave is scattered by a subwavelength metallic object, a significant part of the incident light can be scattered backward (reflected), whether the object is reflecting or transparent. It was recently demonstrated that an array of two-side-open subwavelength metallic slits effectively reflects light waves under appropriate resonant conditions.[@Stee; @Bori] One may suppose that this is true also in the case of a single slit. In this section, we test whether a light wave can be resonantly reflected by a single two-side-open subwavelength metallic slit. To address this question, the energy flux in front of the slit is analyzed numerically for various regimes of the light scattering.
In order to compare the properties of the light reflection with those of the extraordinary (resonant) transmission[@Taka; @Yang; @Kuk2], we consider the zeroth-order scattering of a time-harmonic wave of TM-polarized light by a slit in a perfectly conducting thick metal film placed in vacuum (Fig. \[fig:5\]). ![\[fig:5\]Propagation of a continuous wave through a subwavelength slit in a thick metal film.](fig1.eps){width="\columnwidth"} The energy flux $\vec S_I$ in front of the slit is compared with the fluxes $\vec S_{II}$ and $\vec S_{III}$ inside the slit and behind the slit, respectively. The amplitude and phase of the light wave at the slit entrance and exit are compared with those of a Fabry-Perot cavity. The electric $\vec{E}$ and magnetic $\vec{H}$ fields of the light are computed by using the traditional approach based on the Neerhoff and Mur solution of Maxwell’s equations.[@Neer; @Harr; @Betz1] For more details of the model, see the Appendix. According to the model, the electric $\vec E(x,z)$ and magnetic $\vec H(x,z)$ fields in front of the slit (region I), inside the slit (region II) and behind the slit (region III) are determined by the scalar fields $U_1(x,z)$, $U_2(x,z)$ and $U_3(x,z)$, respectively. The scalar fields are found by solving the Neerhoff and Mur integral equations. The magnetic field of the wave is assumed to be time harmonic and constant in the $y$ direction: $\vec{H}(x,y,z,t)=U(x,z)\exp(-i\omega{t})\vec{e}_y$. In front of the slit, the field is decomposed into $U_1(x,z)=U^i(x,z)+U^r(x,z)+U^d(x,z)$. The field $U^i(x,z)$ represents the incident field, which is assumed to be a plane wave of unit amplitude; $U^r(x,z)$ denotes the field that would be reflected if there were no slit in the film; $U^d(x,z)$ describes the backward diffracted field due to the presence of the slit.
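With the conventions of the Appendix, the first two components of this decomposition are explicit: $U^i=\exp(-ik_1z)$ and $U^r(x,z)=U^i(x,2b-z)$. A minimal Python sketch of the region-I decomposition follows; the diffracted part $U^d$ is supplied by the full Neerhoff and Mur solution and appears here only as a placeholder argument:

```python
import cmath

def U_incident(z, k1):
    # Unit-amplitude plane wave of the Appendix: U^i(x, z) = exp(-i k1 z).
    return cmath.exp(-1j * k1 * z)

def U_reflected(z, k1, b):
    # Field reflected by the film surface z = b if there were no slit:
    # U^r(x, z) = U^i(x, 2b - z), the mirror image of the incident wave.
    return U_incident(2.0 * b - z, k1)

def U_region1(x, z, k1, b, U_diffracted):
    # Total field in front of the slit: U1 = U^i + U^r + U^d.
    return U_incident(z, k1) + U_reflected(z, k1, b) + U_diffracted(x, z)

# Without the slit (U^d = 0) the magnetic field doubles at the screen
# z = b, as expected for TM polarization on a perfect conductor.
U1 = U_region1(0.0, 1.0, 2.0, 1.0, lambda x, z: 0.0)
print(round(abs(U1), 6))  # 2.0
```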
The time averaged Poynting vector (energy flux) $\vec S$ of the electromagnetic field is calculated (in CGS units) as $\vec{S}=(c/16\pi)(\vec{E}\times\vec{H}^*+\vec{E}^*\times\vec{H})$. The reflection coefficient $R=S_{int}^{rd}$ is given by the normalized flux $S_n^{rd}=S^{rd}/S^i$ integrated over the slit width $2a$ at the slit entrance ($z=b$), where $S^{rd}$ is the $z$ component of the backward scattered flux, and $S^i$ is the incident flux along the $z$ direction. The flux $S^{rd}=S^{rd}(U^r,U^d)$ is produced by the interference of the backward scattered fields $U^r(x,z)$ and $U^d(x,z)$. The transmission coefficient $T=S_{int}^3(b)$ is determined by the normalized flux $S_n^3=S^3/S^i$ integrated over the slit width at the slit exit ($z=0$), where the flux $z$-component $S^3 = S^3(U^3)$ is produced by the forward scattered (transmitted) field $U^3(x,z)$. Notice that the definitions of the reflection $R$ and transmission $T$ coefficients are equivalent to the more convenient ones defined as the integrated reflected or transmitted flux divided by the integrated incident flux. In the following analysis, the reflection and transmission coefficients are compared to the fluxes $S_{int}^d$ and $S_{int}^{ird}$ obtained by integrating the normalized fluxes $S_n^d=S^d/S^i$ and $S_n^{ird}=S^{ird}/S^i$, respectively. We analyzed the backward scattering of light for a wide range of scattering conditions determined by values of the wavelength $\lambda$, slit width $2a$ and film thickness $b$. As an example, the reflection coefficient $R=S_{int}^{rd}(b)$ as a function of the film thickness $b$ computed for the wavelength $\lambda=800$ nm and the slit width $2a=25$ nm is shown in Fig. \[fig:1a\]. ![ (a) The reflection coefficient $R=S_{int}^{rd}(b)$, the transmission coefficient $T=S_{int}^3(b)$, and the integrated fluxes $S_{int}^d(b)$ and $S_{int}^{ird}(b)$ as a function of the film thickness $b$ computed for the wavelength $\lambda=800$ nm and the slit width $2a=25$ nm. 
(b) The logarithm of the integrated flux $S_{int}^{ird}(a,b)$ as a function of the slit half-width $a$ and film thickness $b$.](fig2a.eps "fig:"){width="\columnwidth"} ![ (a) The reflection coefficient $R=S_{int}^{rd}(b)$, the transmission coefficient $T=S_{int}^3(b)$, and the integrated fluxes $S_{int}^d(b)$ and $S_{int}^{ird}(b)$ as a function of the film thickness $b$ computed for the wavelength $\lambda=800$ nm and the slit width $2a=25$ nm. (b) The logarithm of the integrated flux $S_{int}^{ird}(a,b)$ as a function of the slit half-width $a$ and film thickness $b$.](fig2b.eps "fig:"){width="\columnwidth"} The transmission coefficient $T=S_{int}^3(b)$ and the integrated fluxes $S_{int}^d(b)$ and $S_{int}^{ird}(b)$ are presented in the figure for comparison. We note the reflection resonances of $\lambda/2$ periodicity with the maxima $R_{max}\approx{2}$. In agreement with the previous results[@Taka; @Yang; @Kuk2], one can also see the transmission resonances having the same period, and the peak heights $T\approx{10}$ ($T\approx\lambda/2\pi a$) at the resonances. It is worth noting the correlation between the positions of maxima and minima in the reflection and transmission. The resonance positions for the total reflection are somewhat left-shifted with respect to the transmission resonances. The maxima of the transmission coefficient correspond to reflection minima. In Fig. \[fig:1a\], one can also observe many satellite peaks in reflection. Within each broad minimum, a local reflection maximum of weak amplitude appears. The local maxima appear before 400, 800, and 1200 nm. The positions of the local maxima approximately correspond to the $S^d_{int}$ maxima. To clarify the role of the fields $U^i$, $U^r$, $U^d$ and $U^3$ in the resonant backward scattering, we compared the integrated flux $S_{int}^{ird}(U^i,U^r,U^d)$ with the fluxes $S_{int}^d(U^d)$ and $S_{int}^3(U^3)=T$. One can see from Fig.
\[fig:1a\] that the flux $S_{int}^{ird}$ produced in front of the slit by the interference of the incident field $U^i(x,z)$ and the backward scattered fields $U^r(x,z)$ and $U^d(x,z)$ is practically indistinguishable from that generated by the backward diffracted field $U^d$ and forward scattered (transmitted) field $U^3$. The integrated flux $S_{int}^{ird}(a,b)$ as a function of the slit half-width $a$ and film thickness $b$ is shown in Fig. \[fig:1b\]. We notice that the widths and shifts of the resonances increase with increasing $a$. Analysis of Fig. \[fig:1a\] indicates that the difference between the integrated fluxes $S_{int}^{rd}(U^r,U^d)=R$ and $S_{int}^3(U^3)=T$ ($T{\approx}S_{int}^d(U^d)$) appears due to the interference of the backward diffracted field $U^d(x,z)$ and the reflected field $U^r(x,z)$. The dispersion of the reflection coefficient $R(\lambda)=S_{int}^{rd}(\lambda)$ for the slit width $2a=25$ nm and the screen thickness $b=351$ nm is shown in Fig. \[fig:2a\]. ![ (a) The reflection coefficient $R=S_{int}^{rd}(\lambda)$, the transmission coefficient $T=S_{int}^3(\lambda)$, and the integrated fluxes $S_{int}^d(\lambda)$ and $S_{int}^{ird}(\lambda)$ versus the wavelength $\lambda$ computed for the slit width $2a=25$ nm and the screen thickness $b=351$ nm. (b) The real part of the normalized electric field x-component $E_x(U_2)=E_x(x,z)$ versus the normalized distance $z/b$ inside the slit cavity at $x=0$, for the resonant wavelengths $\lambda_r^1=800$ nm, $\lambda_r^2=389$ nm, $\lambda_r^3=255$ nm.](fig3a.eps "fig:"){width="\columnwidth"} ![ (a) The reflection coefficient $R=S_{int}^{rd}(\lambda)$, the transmission coefficient $T=S_{int}^3(\lambda)$, and the integrated fluxes $S_{int}^d(\lambda)$ and $S_{int}^{ird}(\lambda)$ versus the wavelength $\lambda$ computed for the slit width $2a=25$ nm and the screen thickness $b=351$ nm.
(b) The real part of the normalized electric field x-component $E_x(U_2)=E_x(x,z)$ versus the normalized distance $z/b$ inside the slit cavity at $x=0$, for the resonant wavelengths $\lambda_r^1=800$ nm, $\lambda_r^2=389$ nm, $\lambda_r^3=255$ nm.](fig3b.eps "fig:"){width="\columnwidth"} The integrated fluxes $S_{int}^d(\lambda)$, $S_{int}^{ird}(\lambda)$ and $S_{int}^3(\lambda)=T(\lambda)$ versus the wavelength are shown in the figure for comparison. A very interesting feature of the dispersion is that the coefficient $R$ versus the wavelength $\lambda$ possesses a Fabry-Perot-like dependence that is similar to the anomalous behavior of transmission $T(\lambda)$ reported in the studies[@Asti; @Taka; @Yang; @Gar3; @Kuk2]. In agreement with the studies[@Neer; @Harr; @Betz1; @Kuk2], the height of the first (maximum) transmission peak is given by $T\approx \lambda_r^1/2\pi a$. The wavelengths corresponding to the resonant peaks $\lambda_r^m\approx 2b/m$ ($m=1,2,3,\dots$) are in accordance with the results[@Taka]. The high peak amplitudes (enhancement), however, are different from the low magnitudes (attenuation) predicted in the study[@Taka], but compare well with the experimental and theoretical results[@Neer; @Harr; @Betz1; @Kuk2; @Yang]. The difference is caused by the manner in which Maxwell’s equations are solved. The study[@Taka] uses a simplified approach based on matching the cavity-mode expansion of the light wave inside the slit with the plane-wave expansions above and below the slit, using two boundary conditions, at $z=0$ and $|z|=b$. Conversely, the Neerhoff and Mur method performs the matching with five boundary conditions, at $z\rightarrow 0$, $z\rightarrow b$, $x\rightarrow a$, $x\rightarrow -a$, and $r\rightarrow \infty$. In contrast to the sharp Lorentzian-like transmission peaks, the slit forms very wide Fano-type reflection bands (see Fig. \[fig:2a\]).
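The two scalings quoted above, the ideal Fabry-Perot resonances $\lambda_r^m\approx 2b/m$ and the first-peak height $T\approx\lambda/2\pi a$, are easy to evaluate directly. The sketch below is only a back-of-the-envelope illustration, not the Neerhoff and Mur computation:

```python
import math

def fabry_perot_wavelengths(b_nm, m_max=3):
    # Ideal cavity resonances lambda_m = 2 b / m for a slit of depth b.
    return [2.0 * b_nm / m for m in range(1, m_max + 1)]

def first_peak_transmission(wavelength_nm, a_nm):
    # Estimated height of the first transmission peak, T ~ lambda / (2 pi a).
    return wavelength_nm / (2.0 * math.pi * a_nm)

# For b = 351 nm the ideal resonances are 702, 351, 234 nm; the computed
# peaks (800, 389, 255 nm) are red-shifted from these values.
print(fabry_perot_wavelengths(351.0))  # [702.0, 351.0, 234.0]
# First-peak height for 2a = 25 nm (a = 12.5 nm): about 10.
print(first_peak_transmission(800.0, 12.5))
```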
Within each broad minimum in reflection, a local reflection maximum of weak amplitude also appears. At the near-to-resonance wavelengths of the transmission, the open aperture totally reflects the light. It is worth noting the correlation between the wavelengths for maxima and minima in the reflection $R(\lambda)$, transmission $T(\lambda)$, and the flux $S^d_{int}$ (Table \[tab:refl\]). [cccc]{} $\lambda$ (nm) of&$R_{max}^{main}$ &$R_{max}^{little}$&$R_{min}$\ &276&248&237\ &426&377&255\ &882&773&356\ &&&389\ &&&714\ &&&802\ $\lambda$ (nm) of&$T_{max}$&$T_{min}$ &$S^d_{max}$\ &260&226&253\ &396&315&388\ &812&542&802\ The resonance wavelengths for the main reflection maxima are red-shifted with respect to the transmission resonances. The wavelengths of both the transmission and reflection (main and little) resonances are red-shifted with respect to the Fabry-Perot wavelengths $\lambda_r^m=702$ nm, $351$ nm,$\dots$ ($\lambda_r^m\approx 2b/m$, $m=1,2,3,\dots$). To understand the physical mechanism of the resonant backward scattering, we also compared the integrated flux $S_{int}^{ird}(U^i,U^r,U^d)$ with the fluxes $S_{int}^d(U^d)$ and $S_{int}^3(U^3)=T$. As can be seen from Fig. \[fig:2a\], the integrated fluxes $S_{int}^d(\lambda)$, $S_{int}^{ird}(\lambda)$ and $S_{int}^3(\lambda)=T(\lambda)$ are practically indistinguishable also in the $\lambda$-domain (for the $b$-domain, see Fig. \[fig:1a\]). The difference between the integrated fluxes $S_{int}^{rd}(U^r,U^d)=R$ and $S_{int}^3(U^3)=T$ ($T{\approx}S_{int}^d(U^d)$) is caused by the interference of the backward diffracted field $U^d(x,z)$ and the reflected field $U^r(x,z)$ in the energy flux $\vec{S}\sim{(\vec{E}\times\vec{H}^*+\vec{E}^*\times\vec{H})}$. The wavelengths of the little maxima of the reflection $R=R(U^d,U^r)$ correspond approximately to the high maxima of $S^d_{int}$.
Therefore, the little maxima can be attributed to the interference of the reflected field $U^r$ with the dominant diffracted field $U^d$. The red shifts and the asymmetrical shapes of the reflection bands can be explained by a Fano analysis[@Sarr1] of the scattering problem by distinguishing resonant and non-resonant interfering contributions to the reflection process. The resonant contribution is given by the field $U^d$ and the non-resonant one is attributed to the field $U^r$. Other interesting interpretations of the shifts of resonant wavelengths in the transmission spectra from the values $2b/m$ can be found in the studies[@Asti; @Taka; @Yang; @Kuk2; @Bori; @Trea]. It should be mentioned that the asymmetrical behavior of reflection was also observed in the case of a Fabry-Perot resonator[@Monz; @Vigo]. The conditions for achieving such an asymmetry rest on the existence of dissipative loss in the resonator. There is no explicit loss in the present problem, but the dissipative loss can be replaced by radiative loss due to diffraction by the slit. After the analysis of Fig. \[fig:1a\], it is not surprising that the maxima of the transmission are accompanied by the minima of the reflection also in the $\lambda$-domain (see Fig. \[fig:2a\]). It should be noted in this connection that such a behavior of $R(\lambda)$ and $T(\lambda)$ is similar to that observed in the case of excitation of the surface plasmons in an array of slits in a thin metal film.[@Stee] In the study[@Stee], the minima in reflection spectra corresponding to the maxima in the transmission spectra were attributed to the redistribution of the energy of the diffracted evanescent order into the propagating order. In the case of a thick film, we explain such a behavior by another physical mechanism, the interference of the backward diffracted field $U^d(x,z)$ and the reflected field $U^r(x,z)$.
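The asymmetry produced by such resonant/non-resonant interference can be illustrated with the textbook Fano lineshape $f(\epsilon)=(q+\epsilon)^2/(1+\epsilon^2)$. This generic profile is shown here only to make the mechanism explicit; it is not the quantitative lineshape of the slit:

```python
def fano_profile(eps, q):
    # Generic Fano lineshape: eps is the reduced detuning from resonance,
    # q the ratio of the resonant to the non-resonant scattering amplitude.
    return (q + eps) ** 2 / (1.0 + eps ** 2)

# The interference shifts the extrema away from the bare resonance eps = 0:
# the maximum sits at eps = 1/q, the zero (total suppression) at eps = -q.
q = 1.0
print(fano_profile(1.0 / q, q))  # 2.0 (shifted maximum)
print(fano_profile(-q, q))       # 0.0 (dip)
```

The shifted maximum and the adjacent zero mirror the behavior seen in Fig. \[fig:2a\]: red-shifted reflection maxima accompanied by deep minima near the transmission resonances.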
It can be noted that the correlation of positions of reflection minima and transmission maxima (see Figs. \[fig:1a\] and \[fig:2a\]) is consistent with that predicted by the study[@Bori] for TE-polarized light scattered by a grating of two-side-open slits in a thick metal film. However, the values of $R(\lambda)$ and $T(\lambda)$ are in contrast to the relation $R(\lambda)+T(\lambda)=1$ given in the study[@Bori]. The difference can be explained by the fact that we examined light scattering by an infinite screen using local definitions of $R$ and $T$, while the study[@Bori] analyzed the global reflection and transmission by a grating of finite size. ![\[fig:3\](a) The phase distribution $\varphi(x,z)$ of the electric field $x$-component $E_x(x,z)$ inside and outside the slit. The field component $E_x(x,z)$ is given by $E_x(U^i,U^r,U^d)$, $E_x(U^2)$ and $E_x(U^3)$ in the regions I, II and III, respectively. (b) The phase distribution $\varphi(x,z)$ at $x=0$. The slit width $2a=25$ nm, the film thickness $b=351$ nm and the wavelength $\lambda=800$ nm.](fig4a.eps "fig:"){width="\columnwidth"} ![\[fig:3\](a) The phase distribution $\varphi(x,z)$ of the electric field $x$-component $E_x(x,z)$ inside and outside the slit. The field component $E_x(x,z)$ is given by $E_x(U^i,U^r,U^d)$, $E_x(U^2)$ and $E_x(U^3)$ in the regions I, II and III, respectively. (b) The phase distribution $\varphi(x,z)$ at $x=0$. The slit width $2a=25$ nm, the film thickness $b=351$ nm and the wavelength $\lambda=800$ nm.](fig4b.eps "fig:"){width="\columnwidth"} ![\[fig:4\](a) The spatial distribution $|\vec{S}_z(x,z)|$ of the absolute value of the normalized energy flux along the $z$ direction inside and outside the slit. The distribution $|S_z(x,z)|$ is shown in the logarithmic scale. The flux $S_z$ is given by the normalized fluxes $S^{ird}(U^i,U^r,U^d)/S^i(U^i)$, $S^2(U^2)/S^i(U^i)$ and $S^3(U^3)/S^i(U^i)$ for the regions I, II and III, respectively.
(b) The energy flux distribution $S_z(x,z)$ at $x=0$. The slit width $2a=25$ nm, the film thickness $b=351$ nm and the wavelength $\lambda=800$ nm (solid line) and $\lambda = 882$ nm (dotted line) corresponding to a transmission resonance and a little reflection resonance, respectively.](fig5a.eps "fig:"){width="\columnwidth"} ![\[fig:4\](a) The spatial distribution $|\vec{S}_z(x,z)|$ of the absolute value of the normalized energy flux along the $z$ direction inside and outside the slit. The distribution $|S_z(x,z)|$ is shown in the logarithmic scale. The flux $S_z$ is given by the normalized fluxes $S^{ird}(U^i,U^r,U^d)/S^i(U^i)$, $S^2(U^2)/S^i(U^i)$ and $S^3(U^3)/S^i(U^i)$ for the regions I, II and III, respectively. (b) The energy flux distribution $S_z(x,z)$ at $x=0$. The slit width $2a=25$ nm, the film thickness $b=351$ nm and the wavelength $\lambda=800$ nm (solid line) and $\lambda = 882$ nm (dotted line) corresponding to a transmission resonance and a little reflection resonance, respectively.](fig5b.eps "fig:"){width="\columnwidth"} The dispersions $S_{int}^d(\lambda)$, $S_{int}^{ird}(\lambda)$ and $S_{int}^3(\lambda)=T(\lambda)$ shown in Fig. \[fig:2a\] indicate the wave-cavity interaction behavior, which is similar to that in the case of a Fabry-Perot resonator. The fluxes $S_{int}^d(U^d)$, $S_{int}^{ird}(U^i,U^r,U^d)$ and $S_{int}^3(U^3)=T$ exhibit the Fabry-Perot-like maxima around the resonance wavelengths $\lambda_r^m\approx{2b/m}$. In order to understand the connection between the Fabry-Perot-like resonances and the total reflection, we computed the amplitude and phase distributions of the light wave at the resonant and near-resonant wavelengths inside and outside the slit cavity (see Figs. \[fig:2b\], \[fig:3\] and \[fig:4\]). At the resonance wavelengths, the intra-slit fields possess maximum amplitudes with Fabry-Perot-like spatial distributions (Fig. \[fig:2b\]).
However, in contrast to the Fabry-Perot-like modal distributions, the resonant configurations are characterized by antinodes of the electric field at each open aperture of the slit. Such a behavior is in agreement with the results[@Asti; @Lind; @Xie]. It is interesting that at the slit entrance, the amplitude $E_x$ of the resonant field configuration possesses the Fabry-Perot-like phase shift of $\pi$ (Fig. \[fig:3\]). The integrated fluxes $S_{int}^{ird}(U^i,U^r,U^d)$, $S_{int}^d(U^d)$ and $S_{int}^3(U^3)$, at the first resonant wavelength $\lambda_r^1$, exhibit enhancement by a factor $\lambda/2\pi{a}\approx{10}$ with respect to the incident wave (Fig. \[fig:2a\]). For comparison, the normalized resonant fluxes $S_n^{ird}(U^i,U^r,U^d)$ and $S_n^3(U^3)$ in the near-field zone ($z\approx{-2a}$) are about 5 times greater than the incident flux (see Fig. \[fig:4\]). It should be stressed that the resonantly enhanced intra-cavity intensity $S_n^2(U^2)$ is about 10 times higher than the resonant fluxes $S_n^{ird}(U^i,U^r,U^d)$ and $S_n^3(U^3)$ localized in the near-field zone in front of the slit and behind the slit, respectively (Fig. \[fig:4\]). The interference of the incident wave $U^i(x,z)$ and the backward scattered fields $U^r(x,z)$ and $U^d(x,z)$, at the resonant wavelengths $\lambda_r^m\approx{2b}/m$, produces in the near-field diffraction zone a strongly localized wave whose normalized flux $S_n^{ird}(U^i,U^r,U^d)$ is $\lambda/2\pi{a}\approx$10-10$^3$ times greater than that of the incident wave, but about one order of magnitude smaller than the resonant intra-cavity intensity. In our model we considered an incident wave with TM polarization. According to the theory of waveguides, the vectorial wave equations for this polarization can be reduced to one scalar equation describing the magnetic field $H$ of TM modes. The electric component $E$ of these modes is found using the field $H$ and Maxwell’s equations.
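The step from $H$ to $E$ can be mimicked numerically: for the time factor $\exp(-i\omega t)$ used here, the curl equation gives $E_x\propto\partial_z H_y$ in vacuum, with a proportionality factor of magnitude $1/k$. The sketch below (sign convention aside, and not the code used in this paper) checks this on a plane wave by finite differences:

```python
import cmath

def Ex_from_Hy(Hy, x, z, k, dz=1e-4):
    # For TM fields, E_x is proportional to dH_y/dz; in vacuum the
    # proportionality factor has magnitude 1/k (the sign depends on
    # the orientation convention chosen for the curl).
    dU = Hy(x, z + dz) - Hy(x, z - dz)
    return (1j / k) * dU / (2.0 * dz)

# Sanity check on the plane wave U = exp(-i k z): |E_x| = |H_y|,
# as expected for a travelling TM plane wave in vacuum.
k = 2.0
U = lambda x, z: cmath.exp(-1j * k * z)
print(abs(abs(Ex_from_Hy(U, 0.0, 0.3, k)) - 1.0) < 1e-6)  # True
```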
The TM scalar equation for the component $H$ is decoupled from the similar scalar equation describing the field $E$ of TE (transverse electric) modes. Hence, the formalism works analogously for TE polarization by exchanging the $E$ and $H$ fields. Summary and conclusion ====================== In the present paper, the backward scattering of TM-polarized light by a two-side-open subwavelength slit in a metal film has been analyzed. We predict that the reflection coefficient versus wavelength possesses a Fabry-Perot-like dependence that is similar to the anomalous behavior of transmission. The open slit totally reflects the light at the near-to-resonance wavelengths. The resonance wavelengths for the total reflection are somewhat red-shifted with respect to the transmission resonances. The wavelengths of both the reflection and transmission resonances are red-shifted with respect to the Fabry-Perot wavelengths. The sharp resonant maxima of transmission are accompanied by the wide minima of the reflection. In addition, we showed that the interference of incident and resonantly backward-scattered light produces in the near-field diffraction zone a strongly localized wave whose intensity is greater than that of the incident wave by a factor $\lambda/2\pi{a}\approx$10-10$^3$, but about one order of magnitude smaller than the intra-cavity intensity. The correlation between the amplitude and phase distributions of light waves inside and outside the slit was also investigated. The slit cavity was compared with a Fabry-Perot resonator. We showed that the amplitude and phase of the resonant wave at the slit entrance and exit are different from those of a Fabry-Perot cavity. The physical mechanism responsible for the total reflection is the interference of the backward diffracted resonant field $U^d(x,z)$ and the reflected non-resonant field $U^r(x,z)$ in the energy flux at the near-to-resonance wavelengths (Fano-type effect).
The wavelength-selective total reflection of light by two-side-open metal slits may find application in many kinds of sensors and actuators. The (10-10$^3$)-times and ($10^2$-$10^4$)-times enhancement of the light intensity in front of the slit and inside the slit can be used in reflective nanooptics and in intra-cavity spectroscopy of single atoms. We believe that the presented results provide insight into the physics of resonant scattering of light by subwavelength nano-slits in metal films. The authors appreciate the valuable comments and suggestions of the anonymous referees. This study was supported by the Fifth Framework of the European Commission (Financial support from the EC for shared-cost RTD actions: research and technological development projects, demonstration projects and combined projects. Contract NG6RD-CT-2001-00602) and in part by the Hungarian Scientific Research Foundation (OTKA, Contracts T046811 and M045644) and the Hungarian R[&]{}D Office (KPI, Contract GVOP-3.2.1.-2004-04-0166/3.0). Appendix {#appendix .unnumbered} ======== We briefly describe the Neerhoff and Mur model[@Neer; @Betz1] of the scattering of a plane continuous wave by a subwavelength slit of width $2a$ in a perfectly conducting metal screen of thickness $b$. The slit is illuminated by a normally incident plane wave under TM polarization (magnetic-field vector parallel to the slit), as shown in Fig. \[fig:5\]. The magnetic field of the wave is assumed to be time harmonic and constant in the $y$ direction: $$\begin{aligned} \vec{H}(x,y,z,t)=U(x,z){\mathrm{e}^{-i\omega{t}}}\vec{e}_y. \label{eq:1}\end{aligned}$$ The electric field of the wave is found from the scalar field $U(x,z)$ using Maxwell’s equations. The restrictions in Eq. (\[eq:1\]) reduce the diffraction problem to one involving a single scalar field $U(x,z)$ in two dimensions.
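Each scalar field appearing below must satisfy the 2D Helmholtz equation, which is easy to verify numerically. The following sketch is an illustrative check (not part of the Neerhoff and Mur solver): it applies a discrete five-point Laplacian to the incident plane wave and confirms that the residual vanishes to discretization accuracy:

```python
import cmath

def helmholtz_residual(U, x, z, k, h=1e-3):
    # Discrete (nabla^2 + k^2) U; the residual should vanish for any
    # field obeying the 2D Helmholtz equation.
    lap = (U(x + h, z) + U(x - h, z) + U(x, z + h) + U(x, z - h)
           - 4.0 * U(x, z)) / h ** 2
    return lap + k ** 2 * U(x, z)

k1 = 2.0
U_inc = lambda x, z: cmath.exp(-1j * k1 * z)   # incident plane wave U^i
print(abs(helmholtz_residual(U_inc, 0.1, 0.2, k1)) < 1e-4)  # True
```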
The field is represented by $U_{j}(x,z)$ ($j$=1,2,3 in region I, II and III, respectively), and satisfies the Helmholtz equation: $({\nabla}^2+k_{j}^2)U_j=0,$ where $j=1,2,3$. In region I, the field $U_{1}(x,z)$ is decomposed into three components: $$U_1(x,z)=U^i(x,z)+U^r(x,z)+U^d(x,z),$$ each of which satisfies the Helmholtz equation. $U^i$ represents the incident field: $$\begin{aligned} U^i(x,z)={\mathrm{e}^{-ik_1z}}.\end{aligned}$$ $U^r$ denotes the reflected field without a slit: $$\begin{aligned} U^r(x,z)=U^i(x,2b-z).\end{aligned}$$ Finally, $U^d$ describes the diffracted field in region I due to the presence of the slit. With the above set of equations and standard boundary conditions for a perfectly conducting screen, a unique solution exists for the scattering problem. The solution is found by using the Green function formalism. The magnetic ${\vec{H}}(x,z,t)$ fields in regions I, II, and III are given by $$\begin{aligned} H^1(x,z)=\exp(-ik_1z)+\exp(-ik_1(2b-z))\nonumber\\ +\frac{ia}{N}\frac{\epsilon_1}{\epsilon_2}\sum_{j=1}^{N}H_0^{(1)}(k_1\sqrt{(x-x_j)^2+(z-b)^2})(DU_b)_j,\end{aligned}$$ $$\begin{aligned} \lefteqn{H^2(x,z)=-\frac{i}{2N\sqrt{k_2^2}}{\mathrm{e}^{i\sqrt{k_2^2}|z|}}\sum_{j=1}^{N}(DU_0)_j+\frac{i}{2N\sqrt{k_2^2}}}\nonumber\\ &&\times{\mathrm{e}^{i\sqrt{k_2^2}|z-b|}}\sum_{j=1}^N(DU_b)_j-\frac{1}{2N}{\mathrm{e}^{i\sqrt{k_2^2}|z|}}\sum_{j=1}^{N}(U_0)_j+\frac{1}{2N}\nonumber\\ &&\times{\mathrm{e}^{i\sqrt{k_2^2}|z-b|}}\sum_{j=1}^{N}(U_b)_j-\frac{i}{N}\sum_{m=1}^{\infty}\frac{1}{\gamma_m}\cos{\frac{m\pi(x+a)}{2a}}{\mathrm{e}^{i\gamma_m|z|}}\nonumber\\ &&\times\sum_{j=1}^{N}\cos\frac{m\pi(x_j+a)}{2a}(DU_0)_j-\frac{1}{N}\sum_{m=1}^{\infty}\cos\frac{m\pi(x+a)}{2a}\nonumber\\ &&\times{\mathrm{e}^{i\gamma_m|z|}}\sum_{j=1}^N\cos\frac{m\pi(x_j+a)}{2a}(U_0)_j\nonumber+\frac{i}{N}\sum_{m=1}^{\infty}\frac{1}{\gamma_m}{\mathrm{e}^{i\gamma_m|z-b|}}\\ &&\times\cos\frac{m\pi(x+a)}{2a}\sum_{j=1}^N\cos\frac{m\pi(x_j+a)}{2a}(DU_b)_j\nonumber\\
&&+\frac{1}{N}\sum_{m=1}^{\infty}\cos\frac{m\pi(x+a)}{2a}{\mathrm{e}^{i\gamma_m|z-b|}}\nonumber\\ &&\times\sum_{j=1}^N\cos\frac{m\pi(x_j+a)}{2a}(U_b)_j,\end{aligned}$$ $$\begin{aligned} \lefteqn{H^3(x,z)=i\epsilon_3\sum_{j=1}^N\frac{a}{N\epsilon_2}(D\vec{U}_0)_j}\nonumber\\ &&\times H_0^{(1)}\left[k_3\sqrt{(x-x_j)^2+z^2}\right],\end{aligned}$$ where $x_{j}=2a(j-1/2)/N-a$, $j=1,2,\dots,N$; $N>2a/z$; $H_0^{(1)}(X)$ is the zeroth-order Hankel function of the first kind; $\vec{H}^i=H^i\cdot \vec{e}_y$, $i=1,2,3$; $\gamma_m=[k_2^2-(m{\pi}/2a)^2]^{1/2}$. The coefficients $(D{\vec{U}}_0)_{j}$ are found by solving numerically four coupled integral equations. For more details on the model and the numerical solution of the Neerhoff and Mur coupled integral equations, see the references[@Neer; @Betz1]. F. L. Neerhoff and G. Mur, Appl. Sci. Res. [**28**]{}, 73 (1973). R.F. Harrington and D.T. Auckland, IEEE Trans. Antennas Propag. [**AP-28**]{}, 616 (1980). E. Betzig, A. Harootunian, A. Lewis, and M. Isaacson, Appl. Opt. [**25**]{}, 1890 (1986). T.W. Ebbesen, H.J. Lezec, H.F. Ghaemi, T. Thio, and P.A. Wolff, Nature (London) [**391**]{}, 667 (1998). A. Hessel and A.A. Oliner, Appl. Opt. [**4**]{}, 1275 (1965). M. Nevière, D. Maystre, and P. Vincent, J. Opt. [**8**]{}, 231 (1977). D. Maystre and M. Nevière, J. Opt. [**8**]{}, 165 (1977). M. Sarrazin, J.P. Vigneron, and J.M. Vigoureux, Phys. Rev. B [**67**]{}, 085415 (2003). J. A. Porto, F.J. García-Vidal, and J.B. Pendry, Phys. Rev. Lett. [**83**]{}, 2845 (1999). S. Astilean, Ph. Lalanne, and M. Palamaru, Opt. Commun. [**175**]{}, 265 (2000). A.P. Hibbins, J.R. Sambles, and C.R. Lawrence, Appl. Phys. Lett. [**81**]{}, 4661 (2002). Q. Cao and P. Lalanne, Phys. Rev. Lett. [**88**]{}, 057403 (2002). Y. Takakura, Phys. Rev. Lett. [**86**]{}, 5601 (2001). F.Z. Yang and J.R. Sambles, Phys. Rev. Lett. [**89**]{}, 063901 (2002). S.V. Kukhlevsky, M. Mechler, L. Csapo, K. Janssens, and O. Samek, Phys. Rev. B [**70**]{}, 195428 (2004). F.J. García-Vidal, H.J. Lezec, T.W. Ebbesen, and L.
Martín-Moreno, Phys. Rev. Lett. [**90**]{}, 231901 (2003). A.G. Borisov, F.J. García de Abajo, and S.V. Shabanov, Phys. Rev. B [**71**]{}, 075408 (2005). J.M. Steele, C.E. Moran, A. Lee, C.M. Aguirre, and N.J. Halas, Phys. Rev. B [**68**]{}, 205103 (2003). F.J. García-Vidal and L. Martín-Moreno, Phys. Rev. B [**66**]{}, 155412 (2002). J. Lindberg, K. Lindfors, T. Setala, M. Kaivola, and A.T. Friberg, Opt. Express [**12**]{}, 623 (2004). Y. Xie, A.R. Zakharian, J.V. Moloney, and M. Mansuripur, Opt. Express [**12**]{}, 6106 (2004). U. Schröter and D. Heitmann, Phys. Rev. B [**58**]{}, 15419 (1998). M.M.J. Treacy, Appl. Phys. Lett. [**75**]{}, 606 (1999). J.M. Vigoureux and R. Giust, Opt. Commun. [**186**]{}, 21 (2000). J.J. Monzón, T. Yonte, and L.L. Sánchez-Soto, Opt. Commun. [**218**]{}, 43 (2003). E. Popov, M. Nevière, S. Enoch, and R. Reinisch, Phys. Rev. B [**62**]{}, 16100 (2000). S.I. Bozhevolnyi, J. Erland, K. Leosson, P.M.W. Skovgaard, and J.M. Hvam, Phys. Rev. Lett. [**86**]{}, 3008 (2001). A. Barbara, P. Quemerais, E. Bustarret, and T. Lopez-Rios, Phys. Rev. B [**66**]{}, 161403 (2002). A.M. Dykhne, A.K. Sarychev, and V.M. Shalaev, Phys. Rev. B [**67**]{}, 195402 (2003). X.L. Shi, L. Hesselink, and R.L. Thornton, Opt. Lett. [**28**]{}, 1320 (2003). H.F. Schouten, T.D. Visser, D. Lenstra, and H. Blok, Phys. Rev. E [**67**]{}, 036608 (2003). A. Nahata, R.A. Linke, T. Ishi, and K. Ohashi, Opt. Lett. [**28**]{}, 423 (2003). A. Bouhelier, M. Beversluis, A. Hartschuh, and L. Novotny, Phys. Rev. Lett. [**90**]{}, 013903 (2003). K.R. Li, M.I. Stockman, and D.J. Bergman, Phys. Rev. Lett. [**91**]{}, 227402 (2003). A. Dechant and A.Y. Elezzabi, Appl. Phys. Lett. [**84**]{}, 4678 (2004). W.J. Fan, S. Zhang, B. Minhas, K.J. Malloy, and S.R.J. Brueck, Phys. Rev. Lett. [**94**]{}, 033902 (2005). A.V. Zayats, I.I. Smolyaninov, and A.A. Maradudin, Phys. Rep. [**408**]{}, 131 (2005). M. Labardi, M. Zavelani-Rossi, D. Polli, G. Cerullo, M. Allegrini, S. De Silvestri, and O. Svelto, Appl. Phys.
Lett. [**86**]{}, 031105 (2005). Y. Ben-Aryeh, International J. Quantum Information [**3**]{}, 111 (2005).
--- abstract: 'We use photoionization models designed to reconcile the joint rest-UV-optical spectra of high-$z$ star-forming galaxies to self-consistently infer the gas chemistry and nebular ionization and excitation conditions for $\sim150$ galaxies from the Keck Baryonic Structure Survey (KBSS), using only observations of their rest-optical nebular spectra. We find that the majority of $z\sim2-3$ KBSS galaxies are moderately O-rich, with an interquartile range in $12+\log(\textrm{O/H})=8.29-8.56$, and have significantly sub-solar Fe enrichment, with an interquartile range of \[Fe/H\]$=[-0.79,-0.53]$, contributing additional evidence in favor of super-solar O/Fe in high-$z$ galaxies. Model-inferred ionization parameter and N/O are strongly correlated with common strong-line indices (such as O32 and N2O2), with the latter exhibiting similar behavior to local extragalactic H II regions. In contrast, diagnostics commonly used for measuring gas-phase O/H (such as N2 and O3N2) show relatively large scatter with the overall amount of oxygen present in the gas and behave differently than observed at $z\sim0$. We provide a new calibration for using R23 to measure O/H in typical high-$z$ galaxies, although it is most useful for relatively O-rich galaxies; combining O32 and R23 does not yield a more effective calibration. Finally, we consider implications for the intrinsic correlations between physical conditions across the galaxy sample and find that N/O varies with O/H in high-$z$ galaxies in a manner almost identical to local H II regions. However, we do not find a strong anti-correlation between ionization parameter and metallicity (O/H or Fe/H) in high-$z$ galaxies, which is one of the principal bases for using strong-line ratios to infer oxygen abundance.' author: - 'Allison L. Strom' - 'Charles C. Steidel' - 'Gwen C. Rudie' - 'Ryan F.
Trainor' - Max Pettini bibliography: - 'allrefs.bib' title: | Measuring the Physical Conditions in High-Redshift Star-Forming Galaxies:\ Insights from KBSS-MOSFIRE --- Introduction ============ Advancing our understanding of galaxy assembly is one of the key goals of modern astrophysics. However, progress is often difficult, as galaxies form and evolve under the influence of a variety of competing baryonic processes, the effects of which are difficult to disentangle. Gaseous inflows supply the raw material for star formation and the growth of supermassive black holes. The resulting powerful stellar winds, supernova explosions, and feedback from active galactic nuclei (AGN) are all thought to contribute to galaxy-scale outflows, which are relatively uncommon in nearby galaxies, but known to be nearly ubiquitous in the early universe. The details of these processes and their relative importance throughout cosmic time leave imprints on nascent galaxies, resulting in the scaling relations and chemical abundance patterns observed across galaxy populations. Efforts by several groups over the last few years have extended our understanding of galaxies’ physical conditions to the peak of galaxy assembly [$z\sim1-3$; e.g., @madau2014], using new spectroscopic observations of large numbers of typical galaxies to study their gas and stars in detail [e.g., @masters2014; @steidel2014; @shapley2015; @wisnioski2015; @sanders2016; @steidel2016; @kashino2017; @strom2017]. These studies have focused on measurements from galaxies’ rest-optical ($3600-7000$Å) nebular spectra, which can now be observed for large samples of individual objects, owing to sensitive multi-object near-infrared (NIR) spectrographs like the Multi-Object Spectrometer For InfraRed Exploration [MOSFIRE, @mclean2012; @steidel2014] and the $K$-band Multi-Object Spectrograph [KMOS, @sharples2013]. 
However, despite the headway made in characterizing galaxies during this crucial epoch, significant tension remains regarding how best to infer high-$z$ galaxies’ physical conditions—especially chemical abundances—from easily-observable quantities, such as the strong emission lines of hydrogen, oxygen, and nitrogen present in their H II region spectra. It is tempting to simply build on the large body of work that has decrypted the spectra of galaxies in the local universe [e.g., @kauffmann2003; @brinchmann2008; @masters2016]. Such efforts have frequently relied on the sample of $z\sim0$ star-forming galaxies from the Sloan Digital Sky Survey [SDSS, @york2000] to explore trends in physical conditions and construct diagnostics for quantities like oxygen abundance (O/H) that can then be applied to observations of other samples, including those at high redshift. Yet it is well-established that star-forming galaxies at $z\sim1-3$ differ from typical $z\sim0$ star-forming galaxies in a number of key ways that make directly transferring this paradigm for understanding galaxies’ nebular spectra to the study of the high-$z$ universe problematic: $z\sim2-3$ galaxies have 10 times higher star-formation rates [e.g., @erb2006mass] and gas masses [@tacconi2013] at fixed stellar mass, significantly smaller physical sizes [@law2012], and are relatively young (characteristic ages of a few hundred Myr) with rising star-formation histories [@reddy2012]. Recent work from surveys like the Keck Baryonic Structure Survey [KBSS, @steidel2014; @steidel2016; @strom2017], the MOSFIRE Deep Evolution Field survey [MOSDEF, @kriek2015; @shapley2015], and the KMOS$^{\textrm{3D}}$ survey [@wisnioski2015] have also revealed a number of important differences in terms of the nebular spectra of high-$z$ galaxies, with perhaps the most well-known being the offset in the log(\[O III\]$\lambda5008$/H$\beta$) vs.
log(\[N II\]$\lambda6585$/H$\alpha$) plane (the so-called N2-BPT diagram, after @baldwin1981, but popularized by @veilleux1987). This offset has been attributed to a variety of astrophysical differences, including enhanced N/O at fixed O/H in high-$z$ galaxies [@masters2014; @shapley2015; @sanders2016], higher ionization parameters [e.g. @kewley2015; @bian2016; @kashino2017], higher electron densities [@liu2008; @bian2010], and harder ionizing radiation fields [@steidel2014; @steidel2016; @strom2017], although the true origin of the offset is likely due to a combination of effects for individual galaxies [see also @kojima2017]. Since strong-line diagnostics operate by relying on the underlying correlations between the quantity of interest (often “metallicity”, or gas-phase O/H) and other astrophysical conditions (e.g., ionization state, ionizing photon distribution) that also influence the observables, it is imperative to consider how these quantities are different in typical high-$z$ galaxies relative to present-day galaxies *and* how they may vary among high-$z$ galaxies. In @steidel2016 [hereafter Steidel16], we sought to directly address this issue by combining observations of the rest-UV spectra of high-$z$ galaxies’ massive stellar populations with observations of the ionized gas surrounding the same stars, as probed by their rest-UV-optical nebular spectra. We compared a composite rest-UV-optical spectrum of 30 star-forming galaxies from KBSS with stellar population models and photoionization model predictions and showed that only models that simultaneously include iron-poor massive star binaries and moderate oxygen enrichment in the gas can reconcile all of the observational constraints, even accounting for somewhat higher ionization parameters and electron densities than observed in typical $z\sim0$ galaxies.
In @strom2017 [hereafter Strom17], we found that this abundance pattern—low Fe/H and moderate-to-high O/H—is also necessary to explain the behavior of *individual* $z\simeq2-2.7$ KBSS galaxies and the behavior of the high-$z$ star-forming galaxy locus in multiple 2D line-ratio spaces, including the N2-BPT diagram, the S2-BPT diagram (which trades \[S II\]$\lambda\lambda6718,6732$ for \[N II\]$\lambda6585$), and the O32-R23 diagram[^1] (which is sensitive to changes in ionization and excitation). In this paper, we expand on the analysis from Steidel16 and Strom17 and present a new method for self-consistently determining gas-phase oxygen abundance (O/H), nitrogen-to-oxygen ratio (N/O), and ionization parameter ($U$) in individual high-$z$ galaxies, utilizing measurements of the nebular emission lines in their rest-optical spectra and photoionization models motivated by observations of the same high-$z$ galaxies. The sample used here is described in Section \[sample\_selection\], followed by a description of the photoionization model method in Section \[model\_section\]. The physical parameters inferred for individual galaxies are presented in Section \[results\_section\]. The results of this analysis are used to determine new strong-line diagnostics for $U$, N/O, and O/H in Section \[strongline\_section\], along with our guidance regarding the best method for determining these quantities using emission line measurements. New constraints on relationships between physical conditions in high-$z$ galaxies’ H II regions (including the N/O-O/H relation) are presented in Section \[correlation\_section\]. We conclude with a summary of our results in Section \[summary\_section\]. Throughout the paper, we adopt the solar metallicity scale from @asplund2009, with $Z_{\odot} = 0.0142$, 12+log(Fe/H)$_{\odot}=7.50$, $12+\log(\textrm{O/H})_{\odot}=8.69$, and log(N/O)$_{\odot} = -0.86$.
When necessary, we assume a $\Lambda$CDM cosmology: $H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}=0.7$, and $\Omega_{\textrm{m}} = 0.3$. Finally, specific spectral features are referred to using their vacuum wavelengths in angstroms. Sample Description {#sample_selection} ================== The sample of galaxies we consider for analysis is drawn from KBSS, which is a large spectroscopic galaxy survey conducted in 15 fields centered on bright quasars and is explicitly designed to study the galaxy-gas correlation during the peak of cosmic star formation [@rudie2012; @steidel2014]. Extensive imaging and spectroscopic campaigns have been conducted in all of the KBSS fields, including deep broad-band and medium-band optical-IR imaging[^2], as well as spectroscopy obtained using the Low Resolution Imaging Spectrometer [LRIS, @oke1995; @steidel2004] at optical wavelengths and MOSFIRE in the NIR bands. The acquisition and reduction of the photometric and spectroscopic data have been described elsewhere [e.g., by @steidel2003; @reddy2012; @steidel2014], with additional details related to the rest-optical spectroscopic analysis (including the measurement of emission line fluxes and corrections for relative slit-losses between the NIR bands) provided by Strom17. ![The distribution of nebular redshift for the sample of 202 galaxies discussed in this paper, which has $\langle z\rangle = 2.3$.[]{data-label="sample_zhist"}](kbss_z2_zhist_models_paper) For this paper, we selected the subsample of $z\simeq2-2.7$ KBSS galaxies with nebular redshifts measured from MOSFIRE spectra ($\langle z\rangle = 2.3$, Figure \[sample\_zhist\]) and spectral coverage of the regions near H$\alpha$, H$\beta$, \[O III\]$\lambda5008$, and \[N II\]$\lambda6585$ (i.e., the lines used in the N2-BPT diagram).
Although not required for inclusion in the sample, measurements of or limits on \[O II\]$\lambda\lambda3727,3729$, \[O III\]$\lambda4364$, \[S II\]$\lambda\lambda6718,6732$, and \[Ne III\]$\lambda3869$ are incorporated when present. Objects are included regardless of the signal-to-noise ratio (SNR) for a single line measurement, but we restrict the sample to those galaxies with $\textrm{SNR}>5$ measurements of the Balmer decrement (H$\alpha$/H$\beta$), which we use to correct for dust attenuation. This requirement is imposed to ensure a fair comparison with the line flux predictions from photoionization models, which represent the nebular spectrum as it would appear if unattenuated by dust along the line-of-sight. We also exclude galaxies where there is evidence of significant AGN activity [usually in the form of high-ionization rest-UV emission features; see @steidel2014; @strom2017], as these galaxies will not be well-matched by photoionization model predictions made using ionizing radiation fields from purely stellar sources. In total, 202 galaxies satisfy these criteria and compose the sample discussed in the remainder of the paper. ![The bulk galaxy properties ($M_{\ast}$, SFR, and sSFR) for the KBSS sample used for the analysis presented in this paper. The sample spans over two decades in all three parameters and is representative of the larger KBSS sample, with median M$_{\ast}=9.5\times10^{9}$ M$_{\odot}$, median $\textrm{SFR}=23$ M$_{\odot}$ yr$^{-1}$, and median $\textrm{sSFR}=2.4$ Gyr$^{-1}$. For comparison, the distribution of M$_{\ast}$ for all $z\simeq2-2.7$ KBSS galaxies is shown in grey in the top panel, and a two-sample Kolmogorov-Smirnov (KS) test indicates that the distributions are consistent with one another ($p=0.95$).[]{data-label="sample_hists"}](kbss_z2_mstar_hist_models_paper "fig:") ![The bulk galaxy properties ($M_{\ast}$, SFR, and sSFR) for the KBSS sample used for the analysis presented in this paper.
The sample spans over two decades in all three parameters and is representative of the larger KBSS sample, with median M$_{\ast}=9.5\times10^{9}$ M$_{\odot}$, median $\textrm{SFR}=23$ M$_{\odot}$ yr$^{-1}$, and median $\textrm{sSFR}=2.4$ Gyr$^{-1}$. For comparison, the distribution of M$_{\ast}$ for all $z\simeq2-2.7$ KBSS galaxies is shown in grey in the top panel, and a two-sample Kolmogorov-Smirnov (KS) test indicates that the distributions are consistent with one another ($p=0.95$).[]{data-label="sample_hists"}](kbss_z2_sfr_hist_models_paper "fig:") ![The bulk galaxy properties ($M_{\ast}$, SFR, and sSFR) for the KBSS sample used for the analysis presented in this paper. The sample spans over two decades in all three parameters and is representative of the larger KBSS sample, with median M$_{\ast}=9.5\times10^{9}$ M$_{\odot}$, median $\textrm{SFR}=23$ M$_{\odot}$ yr$^{-1}$, and median $\textrm{sSFR}=2.4$ Gyr$^{-1}$. For comparison, the distribution of M$_{\ast}$ for all $z\simeq2-2.7$ KBSS galaxies is shown in grey in the top panel, and a two-sample Kolmogorov-Smirnov (KS) test indicates that the distributions are consistent with one another ($p=0.95$).[]{data-label="sample_hists"}](kbss_z2_ssfr_hist_models_paper "fig:") Figure \[sample\_hists\] shows the stellar mass (M$_{\ast}$), star-formation rate (SFR), and specific star-formation rate ($\textrm{sSFR}=\textrm{SFR}/\textrm{M}_{\ast}$) distributions for the paper sample (blue histograms), with the full sample of $z\simeq2-2.7$ KBSS galaxies with M$_{\ast}$ estimates shown for comparison in the top panel (grey histogram). Stellar masses are measured as in @steidel2014 and , using the methodology described by @reddy2012. SFRs are determined using extinction-corrected H$\alpha$ measurements as described by , where we have adopted the Galactic extinction curve from @cardelli1989. Values for M$_{\ast}$, SFR, and sSFR are reported assuming a @chabrier2003 stellar initial mass function (IMF). 
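The extinction-corrected H$\alpha$ SFRs referenced above follow a standard recipe: infer E(B-V) from the observed Balmer decrement, de-redden H$\alpha$, and convert luminosity to SFR. A minimal sketch with invented fluxes; the Cardelli-curve values $k(\textrm{H}\beta)\approx3.61$ and $k(\textrm{H}\alpha)\approx2.53$ (for $R_V=3.1$) and the Kennicutt (1998, Salpeter IMF) conversion are standard literature numbers quoted here as illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def ebmv_from_balmer(f_ha, f_hb, k_hb=3.61, k_ha=2.53, intrinsic=2.86):
    """Color excess E(B-V) from the observed Balmer decrement.
    k_hb and k_ha are Cardelli et al. (1989) curve values for R_V = 3.1
    (standard numbers, quoted from memory; treat as illustrative)."""
    return 2.5 / (k_hb - k_ha) * np.log10((f_ha / f_hb) / intrinsic)

def correct_flux(f_obs, k_lambda, ebmv):
    """Remove dust attenuation: f_int = f_obs * 10^(0.4 * k(lambda) * E(B-V))."""
    return f_obs * 10 ** (0.4 * k_lambda * ebmv)

# Invented example fluxes (arbitrary units): observed Halpha/Hbeta = 4.0.
f_ha_obs, f_hb_obs = 4.0, 1.0
ebmv = ebmv_from_balmer(f_ha_obs, f_hb_obs)
f_ha_int = correct_flux(f_ha_obs, 2.53, ebmv)
f_hb_int = correct_flux(f_hb_obs, 3.61, ebmv)
# By construction the corrected ratio returns to the intrinsic 2.86.

# Kennicutt (1998, Salpeter IMF) Halpha calibration; the Chabrier IMF
# adopted in the text gives SFRs lower by a factor of ~1.7-1.8.
L_ha = 1e42  # erg/s, invented luminosity
sfr = 7.9e-42 * L_ha  # M_sun / yr
```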
The sample considered here spans a large range in all three bulk galaxy properties, with median M$_{\ast}=9.5\times10^{9}$ M$_{\odot}$, median $\textrm{SFR}=23$ M$_{\odot}$ yr$^{-1}$, and median $\textrm{sSFR}=2.4$ Gyr$^{-1}$. These values are consistent with the SFR-M$_{\ast}$ relation for galaxies at $2.0<z<2.5$ reported by @whitaker2014. However, both the @whitaker2014 relation and the median SFR for the KBSS sample discussed here are somewhat higher than would be predicted by the relation from @shivaei2015, based on a different sample of $z\sim2$ galaxies[^3]. Photoionization Model Method {#model_section} ============================ Model Grid {#grid_section} ---------- As in Steidel16 and Strom17, we use Cloudy [v13.02, @ferland2013] to predict the nebular spectrum originating from gas irradiated by a given stellar population’s radiation field, with the ultimate goal of generating predicted emission line fluxes as a function of *all* of the physical parameters of interest, $f_\textrm{line}(Z_{\ast}, U, Z_{\textrm{neb}}, \textrm{N/O})$. In all cases, we adopt a plane parallel geometry and $n_H=300$ cm$^{-3}$, with the latter motivated by measurements of electron density ($n_e$) in $\langle z\rangle = 2.3$ KBSS galaxies (Steidel16; Strom17)[^4]. Dust grains are included assuming the “Orion” mixture provided as part of Cloudy, with a dust-to-gas ratio that scales linearly with the metallicity of the gas, $Z_{\textrm{neb}}$. We use stellar population synthesis models from “Binary Population and Spectral Synthesis” [BPASSv2; @stanway2016; @eldridge2017] as the input ionizing spectra, which set the *shape* of the ionizing radiation field. To capture the range of stellar populations that may exist in the parent sample of high-$z$ star-forming galaxies, we employ models from BPASSv2 with constant star-formation histories and varying stellar metallicities, $Z_{\ast}=[0.001,0.002,0.003,0.004,0.006,0.008,0.010,0.014]$ ($Z_{\ast}/Z_{\odot}\approx0.07-1.00$).
We use a single age of 100 Myr for all stellar population models, consistent with the characteristic ages of star-forming galaxies at $z\sim2$ [e.g. @reddy2012]. The assumption of a fixed age does not significantly impact the photoionization model predictions, as the shape of the UV ionizing SED produced by BPASSv2 models with constant star formation histories reaches equilibrium after $20-30$ Myr. Variations in the *intensity* of the radiation field are parameterized by the ionization parameter $U$ ($= n_\gamma/n_H$, the dimensionless ratio of the number density of incident ionizing photons to the number density of neutral hydrogen). Because we adopt a single $n_H$, differences in $U$ largely reflect differences in the normalization of the radiation field. The metallicity of the gas, $Z_{\textrm{neb}}$, is allowed to vary independently of $Z_{\ast}$ and spans $Z_{\textrm{neb}}/Z_{\odot}=0.1-2.0$. As discussed in greater detail elsewhere, $Z_{\textrm{neb}}$ primarily reflects the abundance of O (which is the most abundant heavy element in H II regions, thus regulating gas cooling and many features of the nebular spectrum). In contrast, $Z_{\ast}$ traces Fe abundance, which provides the majority of the opacity in stellar atmospheres and is critical for determining the details of the stellar wind and mass loss. While we expect the gas and stars to have the same O/H and Fe/H as one another, decoupling $Z_{\textrm{neb}}$ and $Z_{\ast}$ allows us to explicitly test for the presence of non-solar O/Fe in high-$z$ galaxies. Given the high sSFRs observed in these galaxies and young inferred ages, enrichment from core-collapse supernovae (CCSNe) will dominate relative to contributions from Type Ia SNe, making it likely that super-solar O/Fe is in fact more typical than (O/Fe)$_{\odot}$ at high redshift.
In the future, we look forward to incorporating stellar population synthesis models with non-solar values of O/Fe—thus allowing a single value of $Z$ to be adopted for both the gas and stars—but without access to such models at the current time, varying $Z_{\textrm{neb}}$ and $Z_{\ast}$ independently is the simplest way to mimic the super-solar O/Fe required to match the observations of typical high-$z$ galaxies’ nebular spectra. Finally, we allow N/O to vary independently of both $Z_{\textrm{neb}}$ and $Z_{\ast}$, with a minimum $\log(\textrm{N/O})=-1.8$ ($[\textrm{N/O}] = -0.94$). In the local universe, the N/O measured in H II regions and star-forming galaxies is known to correlate with the overall O/H [e.g., @vanzee1998], exhibiting a low, constant value ($\log(\textrm{N/O})\approx-1.5$) at $12+\log(\textrm{O/H})\lesssim8.0$ and increasing with increasing O/H past this critical level of enrichment. Although the nucleosynthetic origin of N remains largely uncertain, this behavior is often interpreted as a transition between “primary” and “secondary” production of N [e.g., @edmunds1978; @vilacostas1993]. Notably, among a compilation of local extragalactic H II regions from @pilyugin2012 [hereafter Pil12], there is a factor of $\sim2-3$ scatter in N/O at *fixed* O/H, particularly near the transition metallicity ($12+\log(\textrm{O/H})\approx8.0$). Methods that adopt (explicitly or implicitly) a single relation with no scatter between N/O and O/H marginalize over this intrinsic physical scatter, which affects the accuracy of the inferred O/H. Some methods, like HII-CHI-MISTRY [@perez-montero2014], do allow for some scatter in N/O at fixed O/H, but still impose a prior that prevents any enrichment in N/O greater than that observed in the calibration sample. Because we are interested in quantifying the relationship between N/O and O/H among high-$z$ galaxies, including the degree of intrinsic scatter, we opt to allow any value of $\log(\textrm{N/O})\geq-1.8$.
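The local N/O-O/H behavior described above (a primary-N plateau at low O/H that turns upward near $12+\log(\textrm{O/H})\approx8.0$) can be visualized with a toy piecewise function. The plateau level and knee location follow the text; the slope of the secondary-N branch is an arbitrary illustrative choice, not a fitted calibration:

```python
import numpy as np

def log_no_local(log_oh, plateau=-1.5, knee=8.0, slope=1.2):
    """Toy piecewise N/O-O/H relation: constant (primary N) below the
    knee, rising linearly in log-log space (secondary N) above it.
    The plateau (-1.5) and knee (8.0) follow the text; the slope is an
    arbitrary illustrative value."""
    log_oh = np.asarray(log_oh, dtype=float)
    return np.where(log_oh < knee, plateau,
                    plateau + slope * (log_oh - knee))

x = np.array([7.5, 8.0, 8.5])
y = log_no_local(x)  # flat at the plateau, then rising past the knee
```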
Although varying other physical conditions (including $n_H$) can also affect the resulting nebular spectrum, the effect of such differences within the range observed in the KBSS sample (where most galaxies are consistent with $n_e=300$ cm$^{-3}$) is considerably smaller than the effects of altering the shape and/or normalization of the ionizing radiation field (i.e., varying $Z_{\ast}$ and/or $U$) or the chemical abundance pattern in the gas (i.e., varying $Z_{\textrm{neb}}$ or N/O). Thus, in summary, we assemble a four-dimensional photoionization model grid, spanning the following parameter space: $$\begin{aligned} &&Z_{\ast}/Z_{\odot} \approx [0.07,1.00] \textrm{, from BPASSv2} \nonumber \\ &&Z_{\textrm{neb}}/Z_{\odot} = [0.1,2.0] \textrm{, every 0.1 dex} \nonumber \\ &&\log(U) = \nonumber [-3.5,-1.5] \textrm{, every 0.1 dex} \\ &&\log(\textrm{N/O}) \geq -1.8. \nonumber\end{aligned}$$ ![image](chain_Q1442_BX160){height="0.42\textheight"} ![image](chain_Q1009_BX218){height="0.42\textheight"} MCMC Method ----------- We do not know *a priori* the degeneracies among $Z_{\ast}$, $U$, $Z_{\textrm{neb}}$, and N/O, in terms of determining the observed nebular spectra of high-$z$ galaxies, and it is possible that the correlations found in low-$z$ samples may not hold at higher redshifts. Indeed, two of the main goals of this work are to: (1) quantitatively determine these parameters in individual high-$z$ galaxies, discussed in this section, and (2) evaluate the presence of any intrinsic correlations between them among the high-$z$ galaxy population, discussed in Section \[correlation\_section\]. Given the relatively large size of the total parameter space outlined above and the expectation that the posterior probability density functions (PDFs) for the parameters may not be normally distributed, we employ a Markov Chain Monte Carlo (MCMC) technique. 
Such a method allows us to efficiently determine the combinations of $Z_{\ast}$, $U$, $Z_{\textrm{neb}}$, and N/O that are consistent with the entire observed rest-optical nebular spectra of individual objects, as well as to quantify the correlations between inferred parameters. We initialize the chain at the photoionization model grid point that best matches the combination of emission lines measured from a galaxy’s spectrum. The MCMC sampler then explores log parameter space, using a normal proposal algorithm whose width is adjusted based on the acceptance rate to allow for more efficient local sampling. In general, flat priors are adopted within the boundaries of the parent photoionization model grid, with one additional constraint: $$0.0 < \log(Z_{\textrm{neb}}/Z_{\ast}) < 0.73.$$ Given our assumption that $Z_{\textrm{neb}}$ traces O/H and $Z_{\ast}$ traces Fe/H, limits on $\log(Z_{\textrm{neb}}/Z_{\ast})$ in our model reflect limits on \[O/Fe\], which are informed by stellar nucleosynthesis and galactic chemical evolution models. The upper limit we impose on $\log(Z_{\textrm{neb}}/Z_{\ast})$ corresponds to the highest \[O/Fe\] expected for the Salpeter IMF-averaged yields from Fe-poor ($Z_{\ast}=0.001$) CCSNe [@nomoto2006]. The lower limit is equivalent to $[\textrm{O/Fe}]=0.0$, which is the default implicitly assumed by most stellar population synthesis models and, by extension, most photoionization model methods for determining physical conditions in galaxies. Since high-$z$ galaxies are typically young and rapidly star-forming, the true value of \[O/Fe\]—and, thus, $\log(Z_{\textrm{neb}}/Z_{\ast})$—likely falls in this range for most objects. 
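The sampler described above (a random-walk Metropolis algorithm whose normal proposal width is tuned from the running acceptance rate) can be sketched in miniature on a toy one-dimensional posterior. The target density, adaptation rule, and tuning constants here are illustrative stand-ins, not the authors' implementation; in practice, adaptation is usually frozen after burn-in so that detailed balance holds for the retained samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    # Toy 1D log-posterior (standard normal), standing in for the real
    # likelihood over (Z_star, U, Z_neb, N/O).
    return -0.5 * theta**2

def adaptive_metropolis(n_steps=20000, step=1.0, adapt_every=200):
    theta = 0.0
    lp = log_post(theta)
    chain, accepted = [], 0
    for i in range(1, n_steps + 1):
        prop = theta + step * rng.standard_normal()  # normal proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance
            theta, lp = prop, lp_prop
            accepted += 1
        chain.append(theta)
        if i % adapt_every == 0:
            # Widen or narrow the proposal toward a moderate acceptance
            # rate (the 40% target here is an arbitrary choice).
            step *= 1.1 if accepted / i > 0.4 else 0.9
    return np.array(chain)

chain = adaptive_metropolis()
# After burn-in, the chain should recover the toy posterior's moments.
```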
If the proposed step is not coincident with a grid point in the parent photoionization model grid, the predicted emission line fluxes are calculated via trilinear interpolation of the grid in $Z_{\ast}$, $U$, and $Z_{\textrm{neb}}$ and subsequently scaling the model \[N II\]$\lambda6585$ flux based on the proposed value of log(N/O). For example, $\log(\textrm{N/O})=-0.86$ ($[\textrm{N/O}]=0$) corresponds to a scale factor of 1, whereas $\log(\textrm{N/O})=-1.5$ ($[\textrm{N/O}]=-0.64$) corresponds to a scale factor of 0.23 relative to the default \[N II\]$\lambda6585$ flux from our parent photoionization model grid, which assumes solar N/O[^5]. Because the photoionization model line intensities are reported relative to H$\beta$, we convert the observed line fluxes and errors onto the same scale by first correcting all of the line measurements for differential reddening due to dust and then dividing by the flux in H$\beta$. The final errors used in the MCMC analysis include the contribution from the error on the Balmer decrement, but do not account for systematic uncertainties in the choice of extinction curve. In all cases, the line-of-sight extinction curve from @cardelli1989 is used to correct the line fluxes, but because many commonly-used extinction curves have a similar shape at rest-optical wavelengths, our results do not change significantly if we adopt an SMC-like extinction curve. Finally, although a Balmer decrement of 2.86 is widely adopted as the fiducial value for Case B recombination [@osterbrock1989], the exact value is sensitive to both $n_e$ and the electron temperature, $T_e$ (and, thus, the shape of the ionizing radiation field). To remain self-consistent, we adopt a Case B value for the Balmer decrement based on the predicted H$\alpha$ flux (in units of H$\beta$) at every proposed step, which ranges from $2.85-3.14$ in the parent photoionization model grid.
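The interpolate-then-scale step can be illustrated with scipy's `RegularGridInterpolator` on a toy three-axis grid. The gridded values below are an invented smooth function standing in for tabulated Cloudy predictions at solar N/O; only the N/O rescaling factor (0.23 for $\log(\textrm{N/O})=-1.5$) comes from the text:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy grid axes standing in for (log Z_star, log U, log Z_neb).
logZstar = np.linspace(-1.15, 0.0, 8)
logU = np.arange(-3.5, -1.4, 0.1)
logZneb = np.arange(-1.0, 0.35, 0.1)

# Invented "line flux / Hbeta" values on the grid: a smooth function
# here, standing in for Cloudy outputs tabulated at solar N/O.
ZS, UU, ZN = np.meshgrid(logZstar, logU, logZneb, indexing="ij")
flux_grid = np.exp(0.3 * ZS + 0.5 * UU + 0.8 * ZN)

interp = RegularGridInterpolator((logZstar, logU, logZneb), flux_grid)

# Proposed (off-grid) parameter values, plus an N/O rescaling of the
# [N II] flux: scale = 10^(log(N/O)_prop - log(N/O)_solar).
theta = np.array([-0.6, -2.7, -0.25])
log_no_prop, log_no_sun = -1.5, -0.86
scale = 10 ** (log_no_prop - log_no_sun)  # ~0.23, as quoted in the text
nii_flux = float(interp(theta[None, :])[0]) * scale
```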
Our MCMC method is able to account for low-SNR measurements of emission lines in the same manner as more significant detections, and thus no limits are used. If a measured line flux is formally negative, the expectation value for the line flux is assumed to be zero, with a corresponding error $\sigma_{f/\textrm{H}\beta} = \frac{\sigma_{f}}{f_{\textrm{H}\beta}}$. Convergence of the MCMC to the posterior is evaluated using the potential scale reduction factor, $\hat{R}$ [also known as the Gelman-Rubin diagnostic, @gelman1992], using equal-length segments of the chain to compare the dispersion within the chain to the dispersion between portions of the chain. When the chain segments look very similar to one another, $\hat{R}$ will trend from higher values toward 1. When $\hat{R}<1.05$ for all four model parameters, we assume approximate convergence has been achieved[^6]. Measurements of the Chemistry and Ionization in Individual $z\sim2$ Galaxies {#results_section} ============================================================================ [llll]{} Galaxies used in the photoionization model analysis & 202\ Galaxies with bound PDFs for all parameters & 121\ Galaxies with bound PDFs for all but N/O & 27\ \ Galaxies with bound $\log(Z_{\textrm{neb}}/Z_{\odot})$ PDF & 163\ Galaxies with bound $\log$(N/O) PDF & 164\ Galaxies with bound $\log(U)$ PDF & 193\ Galaxies with bound $\log(Z_{\ast}/Z_{\odot})$ PDF & 169\ \ Galaxies with 2+ peaks in $\log(Z_{\textrm{neb}}/Z_{\odot})$ & 88\ Galaxies with 2+ peaks in $\log$(N/O) & 35\ Galaxies with 2+ peaks in $\log(U)$ & 24\ Galaxies with 2+ peaks in $\log(Z_{\ast}/Z_{\odot})$ & 111\ \ Galaxies with an upper limit on $\log$(N/O) & 37\ Galaxies with an upper limit on $\log(U)$ & 1\ Galaxies with an upper limit on $\log(Z_{\ast}/Z_{\odot})$ & 15\ Galaxies with a lower limit on $\log(Z_{\ast}/Z_{\odot})$ & 2 \[sample\_table\] Figures \[good\_mcmc\] and \[bad\_mcmc\] show the MCMC results for two individual galaxies in the KBSS sample,
highlighting the range of possible outcomes. The results for Q1442-BX160 contain a single preferred solution, with pairwise 2D posteriors that resemble bivariate Gaussian distributions. In contrast, the joint posterior PDF for Q1009-BX218 reveals two distinct solutions and notably non-normal marginalized posteriors. Nearly two-thirds of objects ($\sim65$%), including Q1009-BX218, have 2 or more peaks in the marginalized posterior for at least one model parameter. Nevertheless, the distribution of probability between multiple peaks still allows for a single solution to be preferred in many cases. This is true for Q1009-BX218, where the 68% highest density interval (HDI, the narrowest range that includes 68% of the distribution) contains a single dominant peak in log($Z_{\textrm{neb}}/Z_{\odot}$). We adopt the following criteria for identifying “bound” marginalized posteriors, which are those posterior distributions from which a preferred model parameter can be estimated: the 68% HDI must contain either a single peak or, if 2 or more peaks are present, there cannot exist a local minimum in the posterior PDF with lower probability than the probability at the edge of the 68% HDI. In addition, the 68% HDI must not abut the edge of the model grid. For bound posteriors, we adopt the maximum *a posteriori* (MAP) estimate as the inferred value for the model parameter. The MAP estimate is taken to be the peak value of the Gaussian-smoothed histogram representing the marginalized posterior. The asymmetric errors on this estimate are determined by calculating the 68% HDI for the same posterior. In total, 121 galaxies have bound posteriors and, thus, MAP estimates for log($Z_{\textrm{neb}}/Z_{\odot}$), log(N/O), log($U$), and log($Z_{\ast}/Z_{\odot}$). 
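The interval bookkeeping described above can be sketched in a few lines of Python (a minimal sketch, not the paper's actual pipeline; the function names, bin count, and smoothing scale are our own assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def hdi(samples, mass=0.68):
    """Highest density interval: the narrowest range containing `mass`
    of the posterior samples (the 68% HDI used in the text)."""
    x = np.sort(np.asarray(samples))
    n_in = int(np.ceil(mass * len(x)))
    # Width of every contiguous window holding n_in sorted samples;
    # the narrowest such window is the HDI.
    widths = x[n_in - 1:] - x[: len(x) - n_in + 1]
    i = int(np.argmin(widths))
    return x[i], x[i + n_in - 1]

def map_estimate(samples, bins=100, smooth=2.0):
    """MAP estimate: peak of the Gaussian-smoothed histogram of the
    marginalized posterior."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    hist = gaussian_filter1d(hist, smooth)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[int(np.argmax(hist))]
```

Unlike an equal-tailed credible interval, the HDI remains meaningful for the skewed and multi-peaked posteriors discussed above, which is presumably why it is the preferred summary here.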
For cases where one edge of the 68% HDI coincides with the edge of the parent photoionization model grid for a given parameter, we assume that the posterior reflects a limit and record the opposite boundary of the 68% HDI as an upper or lower limit in that parameter, corresponding to a $2\sigma$ limit. Table \[sample\_table\] provides a more detailed summary of the MCMC results, including the number of galaxies with limits on a given parameter. Figure \[cloudy\_hists\] shows the distributions of inferred log($Z_{\textrm{neb}}/Z_{\odot}$), log(N/O), log($U$), and log($Z_{\ast}/Z_{\odot}$) for the 121 KBSS galaxies with bound posteriors in all four model parameters and the 27 galaxies with upper limits on log(N/O), resulting in a total sample of 148 galaxies. The upper right panel, showing the distribution of log(N/O), separates the contribution from galaxies with N/O limits, illustrated as the grey portion of the histogram; we discuss this subsample in more detail below. The interquartile ranges in the model parameters for this subsample (including those with upper limits on N/O) are $$\begin{aligned}
\log(Z_{\textrm{neb}}/Z_{\odot})_{50} &=& [-0.40,-0.13] \nonumber \\
\log(\textrm{N/O})_{50} &=& [-1.44,-1.07] \nonumber \\
\log(U)_{50} &=& [-2.93,-2.58] \nonumber \\
\log(Z_{\ast}/Z_{\odot})_{50} &=& [-0.79,-0.53]. \nonumber\end{aligned}$$ These characteristic ranges are consistent with inferences of these parameters by other means. In particular, the range of inferred log($U$) is in good agreement with the range we previously reported for individual KBSS galaxies in , using a more limited photoionization model approach.
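The decision rule described above reduces to a simple classification; the sketch below uses our own naming and an arbitrary tolerance for deciding when an HDI edge "abuts" the grid boundary:

```python
def classify_posterior(hdi_lo, hdi_hi, grid_lo, grid_hi, tol=1e-3):
    """Classify a marginalized posterior per the criteria in the text:
    if one edge of the 68% HDI coincides with the model grid edge, quote
    the opposite HDI edge as a limit; otherwise the parameter is bound.
    Returns (kind, limit_value)."""
    if abs(hdi_lo - grid_lo) < tol:
        return ("upper limit", hdi_hi)   # posterior piles up at the grid floor
    if abs(hdi_hi - grid_hi) < tol:
        return ("lower limit", hdi_lo)   # posterior piles up at the grid ceiling
    return ("bound", None)
```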
![image](cloudy_met_hist) ![image](cloudy_no_hist)\
![image](cloudy_zstar_hist) ![image](cloudy_logu_hist)

Multi-peaked Posteriors and Limits {#peaks_and_limits}
----------------------------------

![image](bpt_met_npk_models_paper) ![image](s2_bpt_met_npk_models_paper) ![image](o32_r23_met_npk_models_paper)\
![image](bpt_zstar_npk_models_paper) ![image](s2_bpt_zstar_npk_models_paper) ![image](o32_r23_zstar_npk_models_paper)

Multi-peaked posteriors are most frequently observed for log($Z_{\textrm{neb}}/Z_{\odot}$) and log($Z_{\ast}/Z_{\odot}$). Figure \[ratio\_diagrams\] shows the location of galaxies with 2 or more peaks in the marginalized posteriors for these parameters in the N2-BPT, S2-BPT, and O32-R23 diagrams, highlighting differences in the spectra of galaxies with unbound posteriors (i.e., those with equal power shared among 2 or more peaks, identified by red diamonds) and those with bound posteriors (i.e., those with a single dominant peak or group of peaks, identified by blue squares). A few trends are common for both parameters, including the clustering on the left (low-R23) side of the galaxy locus in the O32-R23 diagram (shown in the right column). This region of parameter space is populated by photoionization model grid points with either low *or* high $Z_{\textrm{neb}}$, whereas higher values of R23 at fixed O32 can only be achieved by combining moderate $Z_{\textrm{neb}}$ with low $Z_{\ast}$. Interestingly, galaxies with multi-peaked posteriors in log($Z_{\textrm{neb}}/Z_{\odot}$) or log($Z_{\ast}/Z_{\odot}$) are well-mixed with the total sample in the N2-BPT diagram (left column in Figure \[ratio\_diagrams\]), regardless of whether the posteriors are bound. The same is true for galaxies with bound multi-peaked posteriors in either parameter in the S2-BPT diagram (blue squares in the center column).
However, galaxies with unbound multi-peaked posteriors in log($Z_{\ast}/Z_{\odot}$) almost exclusively occupy a region of parameter space to the lower left of the ridge-line of KBSS galaxies in the S2-BPT diagram (the red diamonds with low \[O III\]/H$\beta$ at fixed \[S II\]/H$\alpha$ in the bottom center panel of Figure \[ratio\_diagrams\]). In general, these galaxies do not exhibit multiple isolated peaks in log($Z_{\ast}/Z_{\odot}$), but rather relatively flat posteriors with 2 or more local maxima spread over a range in log($Z_{\ast}/Z_{\odot}$). Galaxies in this region of parameter space, as with those found at low-R23 and high-O32, are consistent with a range in $Z_{\ast}$, and additional information is required to break the degeneracy between possible solutions. In the future, we hope to incorporate constraints on the detailed stellar photospheric absorption features observed in the non-ionizing UV spectra of individual galaxies in KBSS, but note that such data are not always available when trying to infer physical conditions from galaxies’ rest-optical spectra (either from KBSS or other surveys). There are 27 galaxies with marginalized posterior PDFs that reflect upper limits on log(N/O) while having bound posteriors for the other three parameters. These galaxies are not preferentially distributed in a specific location in the N2-BPT, S2-BPT, and O32-R23 diagrams, but all such galaxies have $\textrm{SNR}<2$ measurements of \[N II\]$\lambda6585$, which was the significance threshold we adopted in . In contrast, only $\sim18$% of galaxies with bound posteriors in all four model parameters, including log(N/O), have $\textrm{SNR}<2$ in their \[N II\]$\lambda6585$ measurements. Unsurprisingly, a two-sample Kolmogorov-Smirnov (KS) test indicates that the distributions of \[N II\]$\lambda6585$ SNR for the two subsamples are significantly unlikely to be drawn from the same parent population. This result follows naturally from the fact that \[N II\] is the only available constraint on N/O.
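A two-sample KS comparison of this kind is straightforward with `scipy.stats.ks_2samp`; the SNR arrays below are synthetic stand-ins shaped only to mimic the situation described above (they are not the KBSS measurements):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Hypothetical line-SNR distributions: the N/O-limit subsample sits
# entirely below SNR = 2, while the bound subsample extends higher.
snr_no_limit = rng.uniform(0.2, 2.0, size=27)
snr_bound = rng.lognormal(mean=1.2, sigma=0.6, size=121)

stat, pval = ks_2samp(snr_no_limit, snr_bound)
# A small p-value means the two SNR distributions are very unlikely
# to be drawn from the same parent population.
```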
Because the likelihood of having an upper limit on log(N/O) results almost exclusively from a low-SNR \[N II\]$\lambda6585$ measurement, which (as we show later in Section \[snr\_section\] and Table \[corr\_w\_observables\]) has only a minor impact on the other model parameters, we choose to include galaxies with N/O limits but bound posteriors for the other three model parameters in our analysis.

Inferences Using Only “BPT” Lines
---------------------------------

Of the 202 total galaxies in our sample, 27 do not have measurements of or limits on the \[O II\]$\lambda\lambda3727,3729$ doublet or the \[Ne III\]$\lambda3869$ line, which are observed in $J$-band for galaxies at these redshifts. To understand the effect that using only lines present in $H$- and $K$-band (i.e., the “BPT” lines: H$\alpha$, H$\beta$, \[O III\]$\lambda5008$, \[N II\]$\lambda6585$, and \[S II\]$\lambda\lambda6718,6732$) might have on the MCMC results, we compare the model results for the 109 galaxies with spectral coverage of all of the available emission lines and bound PDFs for all parameters with the results for the same galaxies when \[O II\]$\lambda\lambda3727,3729$ and \[Ne III\]$\lambda3869$ are excluded. Of these, 66 galaxies also have bound PDFs for all parameters when only considering the BPT lines. In general, differences in the model parameters are small—$\sim0.01-0.03$ dex for log($Z_{\textrm{neb}}/Z_{\odot}$), log(N/O), and log($U$) and $\sim0.06$ dex for log($Z_{\ast}/Z_{\odot}$)—especially when compared to the median uncertainty on the parameter estimates ($\sim0.1-0.2$ dex, when all the available emission lines are considered). The uncertainties are larger by $\sim0.05$ dex when only the BPT lines are used in the MCMC method. This comparison demonstrates the importance of using more emission line measurements to achieve more precise results, but confirms that no large systematic errors are introduced when only the BPT lines are available.
The LM1 Composite Spectrum
--------------------------

![image](chain_KBSS_LM1)

We also test our method using measurements from the rest-optical composite spectrum presented in , where we were able to estimate O/H, N/O, $U$, and $Z_{\ast}$ using a number of independent cross-checks. The “LM1” composite from that analysis was constructed using MOSFIRE observations of 30 KBSS galaxies with $z=2.40\pm0.12$. As described in , the $J$, $H$, and $K$-band spectra of the galaxies were corrected for differential slit losses between bands and shifted into the rest-frame according to the measured nebular redshift, with the normalization of each galaxy’s spectra adjusted to account for small redshift differences among the sample. The final composite spectrum is a straight average of the individual shifted and scaled spectra, excluding regions near bright OH sky lines. To make a fair comparison with the individual galaxies in our sample, for which only the strongest rest-optical nebular emission lines are typically measurable, we use the measurements of \[O II\]$\lambda\lambda3727,3729$, \[Ne III\]$\lambda3869$, H$\beta$, \[O III\]$\lambda5008$, H$\alpha$, \[N II\]$\lambda6585$, and \[S II\]$\lambda\lambda6718,6732$ reported in . Although we reported a limit on \[O III\]$\lambda4364$ for the LM1 composite in , we choose not to include it here, as it was not observed in the spectrum of each individual galaxy in the LM1 sample and is generally not available for individual high-$z$ galaxies. The MCMC results using only the strong-line measurements are shown in Figure \[lm1\_mcmc\] and show narrow and well-defined marginalized posteriors for each of the four model parameters. The parameter estimates for the LM1 composite from our photoionization model method are very similar to the values from (identified by shaded blue regions in Figure \[lm1\_mcmc\], see also Table \[lm1\_table\]).
In that paper, we were able to directly incorporate information about the photospheric absorption features observed in the non-ionizing UV composite spectrum in order to infer $Z_{\ast}$, and we also leveraged measurements of the auroral O III\]$\lambda\lambda1661,1666$ lines in the rest-UV as an independent constraint on $Z_{\textrm{neb}}$. In addition, the N/O inferred for the LM1 composite using two locally-calibrated empirical strong-line calibrations was found to be in excellent agreement with the N/O determined through comparisons of the LM1 measurements with photoionization models alone. Given the internal consistency of the analysis presented in , the agreement between the new photoionization model method results and the earlier estimates is reassuring. The inferred values of log($Z_{\textrm{neb}}/Z_{\odot}$), log(N/O), and log($U$) for the LM1 composite spectrum discussed above (Table \[lm1\_table\]) are close to the most common values reported for individual galaxies in Figure \[cloudy\_hists\]. At the same time, the model stellar metallicity inferred for the LM1 composite ($Z_{\ast}/Z_{\odot}=0.12$) is somewhat lower than the median value for individual galaxies ($Z_{\ast}/Z_{\odot}=0.20$). Although it is possible that the galaxies used to construct the LM1 composite are not characteristic of the full range in $Z_{\ast}$ present in the entire $z\simeq2-2.7$ KBSS sample, it is important to remember that the constraints on $Z_{\ast}$ from the MCMC method presented in this paper are indirect and based on the ionizing spectral shape of the input stellar population, as reflected by the ratios of the rest-optical emission lines. In , we were able to leverage direct observations of photospheric line-blanketing at non-ionizing UV wavelengths to measure $Z_{\ast}$.
That the two analyses agree for the LM1 composite suggests that the BPASS models near $Z_{\ast}/Z_{\odot}\sim0.1$ are self-consistent in terms of their ionizing radiation fields and photospheric absorption—but the overall mapping of radiation field hardness to $Z_{\ast}$ remains unavoidably model-dependent. Extending the kind of analysis presented in and in this paper either to individual galaxies or to composite spectra in bins of photoionization model-inferred $Z_{\ast}$ could therefore provide a method for cross-checking the consistency of stellar population models across a range of metallicities.

| Parameter | Earlier estimate | This work |
|---|---|---|
| $Z_{\textrm{neb}}/Z_{\odot}$ | $0.50\pm0.10$ | $0.59^{+0.03}_{-0.06}$ |
| log(N/O) | $-1.24\pm0.04$ | $-1.27^{+0.02}_{-0.03}$ |
| log($U$) | $-2.8$ | $-2.82\pm0.02$ |
| $Z_{\ast}/Z_{\odot}$ | $\simeq0.1$ | $0.12\pm0.01$ |

\[lm1\_table\]

Trends with Galaxy Spectral Properties
--------------------------------------

### Location in Line-ratio Diagrams

![image](bpt_w_met) ![image](s2_bpt_w_met) ![image](o32_r23_w_met)\
![image](bpt_w_logu) ![image](s2_bpt_w_logu) ![image](o32_r23_w_logu)\
![image](bpt_w_no) ![image](s2_bpt_w_no) ![image](o32_r23_w_no)\
![image](bpt_w_zstar) ![image](s2_bpt_w_zstar) ![image](o32_r23_w_zstar)

It is interesting to consider whether galaxies with certain combinations of physical conditions are more likely to inhabit specific regions of the nebular diagnostic diagrams, resulting in trends in physical conditions along or across the galaxy locus in line-ratio space. For example, the locus of $z\sim0$ star-forming galaxies from SDSS in the N2-BPT diagram is known to be a sequence in both gas-phase O/H (corresponding to $Z_{\textrm{neb}}$) and $U$.
However, when we investigate the presence of similar trends among $\langle z\rangle = 2.3$ galaxies (Figure \[bpt\_trends\]), we find that although $U$ is strongly correlated with position along the galaxy locus in the N2-BPT, the S2-BPT, *and* the O32-R23 diagram (second row of panels), the trend with $Z_{\textrm{neb}}$ in the N2-BPT or S2-BPT planes is noticeably weaker (top left and center panels). The inferred value of $Z_{\textrm{neb}}$ for $\langle z\rangle = 2.3$ galaxies appears to change *across* the galaxy locus rather than *along* it in the O32-R23 diagram, where higher values of $Z_{\textrm{neb}}$ are found on the low-R23 side of the locus. That the trends with $Z_{\textrm{neb}}$ and $U$ are essentially perpendicular to one another in the O32-R23 diagram suggests that a combination of these indices could be used to construct a strong-line diagnostic that simultaneously provides constraints on both O/H and $U$. This idea formed the basis of the earliest strong-line calibrations for R23 [e.g. @pagel1979; @mcgaugh1991] and has been discussed more recently by @shapley2015, who showed that the local sequence of galaxies in the O32-R23 diagram increased monotonically in O/H as O32 and R23 declined. We explore the possibility of constructing such a diagnostic using the results for the KBSS sample in Section \[oh\_section\]. We also observe a strong correlation between galaxies’ positions in the N2-BPT plane and the value of log(N/O) inferred from their nebular spectra (left panel in the third row of Figure \[bpt\_trends\]), with higher N/O corresponding to higher values of \[N II\]/H$\alpha$. This trend is also observed, although much less strongly, in the S2-BPT diagram, with higher N/O occurring in galaxies on the lower part of the locus. This is the opposite of the observed trend with $U$, which decreases in the same direction in the S2-BPT plane.
This may suggest the presence of an inverse correlation between the ionization conditions and star-formation history as probed by chemical abundance patterns (at least, N/O) in high-$z$ galaxies, but the orthogonal trend with $Z_{\textrm{neb}}$ in the line ratio diagrams suggests that any such relationship may be qualitatively different from the strong anti-correlation observed between $U$ and O/H at $z\sim0$ [e.g., @dopita1986]. We quantitatively investigate the correlations between inferred physical conditions, including ionization and metallicity, in Section \[correlation\_section\]. Finally, in the bottom row of Figure \[bpt\_trends\], we investigate the correlation between model-inferred $Z_{\ast}$ and the location of galaxies in nebular parameter space. Consistent with the analysis presented in and , the galaxies with the lowest inferred $Z_{\ast}$ (red points)—and, thus, the hardest ionizing radiation fields—are among the most offset upward and to the right of the local distribution of galaxies in the N2-BPT and S2-BPT diagrams. Galaxies with smaller \[O III\]/H$\beta$ ratios and low R23 at fixed O32 exhibit a more mixed distribution of $Z_{\ast}$, as we might expect from the location of galaxies with multiple peaks in their $Z_{\ast}$ PDFs (Section \[peaks\_and\_limits\]). As with $Z_\textrm{neb}$ (top row of Figure \[bpt\_trends\]), the overall trend with $Z_{\ast}$ is *across* the high-$z$ galaxy locus; in other words, lines of constant $Z_{\ast}$ appear to be largely parallel to the galaxy locus, especially in the S2-BPT (bottom center panel) and O32-R23 (bottom right panel) diagrams.

### Emission Line SNR {#snr_section}

For the galaxies with bound posteriors, we characterize their marginalized posteriors using four quantities: the MAP estimate for the model parameter, the width of the 68% HDI (hereafter $\Delta_{68}$), the skewness, and the kurtosis excess.
The skewness of a posterior, $S$, quantifies whether the distribution is left- or right-tailed and provides some insight regarding the constraints imposed on a given parameter in certain regions of parameter space. The kurtosis excess, $K$, measures the “peakiness” of the distribution (a normal distribution has $K=0$); positive values indicate that the distribution contains more power in the peak relative to the tails, when compared with a normal distribution. As a posterior may have fat tails and still be relatively narrow, we use $\Delta_{68}$ as our primary measure of precision. Importantly, there appear to be no strong trends in $\Delta_{68}$ for any model parameter in terms of the location of galaxies in the nebular diagnostic diagrams. This result implies that we should be capable of recovering all high-$z$ galaxies’ physical conditions with similar precision, given a minimum set of emission line measurements. The exception is galaxies falling below the galaxy locus in the S2-BPT diagram, where objects frequently have parameter PDFs with multiple strong peaks (as shown in Figure \[ratio\_diagrams\]) and would likely benefit from additional constraints from rest-UV observations. Figure \[best\_cloudy\_param\] shows the distribution of the most-precisely inferred parameter (i.e., the parameter with the smallest value of $\Delta_{68}$, which for these logarithmic parameters corresponds to the fractional error) for individual galaxies. For $\sim80$% of galaxies, log($U$) is the parameter with the narrowest marginalized posterior, meaning that the nebular spectra of individual galaxies are only consistent with a relatively limited range in $U$, but could be matched by a broader range in $Z_{\textrm{neb}}$, N/O, or $Z_{\ast}$ at a given $U$.
In contrast, log($Z_{\ast}/Z_{\odot}$) is the most precise parameter for only two galaxies in the sample, consistent with our argument in Section \[peaks\_and\_limits\] regarding the lack of robust constraints on the shape of the ionizing radiation when using only indirect observables, such as nebular emission lines.

![The distribution of the most precisely-inferred parameter for individual galaxies. In $\sim80$% of cases, log($U$) is the parameter with the narrowest 68% HDI ($\Delta_{68}$), suggesting that the observed nebular spectrum responds most sensitively to changes in the ionization conditions and thus has the most power to discriminate between different ionization parameters. In contrast, log($Z_{\ast}/Z_{\odot}$) is the most precise parameter for only two objects, reflecting the relative indirectness of the constraints on the shape of the ionizing radiation obtained from nebular spectroscopy, especially when compared to direct observation of the non-ionizing rest-UV spectrum.[]{data-label="best_cloudy_param"}](best_cloudy_param_models_paper)

| Parameter | Statistic | Balmer decrement | \[O III\]$\lambda5008$ | \[O II\]$\lambda\lambda3727,3729$ | \[N II\]$\lambda6585$ | \[S II\]$\lambda\lambda6718,6732$ |
|---|---|---|---|---|---|---|
| log($Z_{\textrm{neb}}/Z_{\odot}$) | $S$ | $+0.17$ ($1.9\sigma$) | $+0.05$ ($0.6\sigma$) | $+0.09$ ($1.0\sigma$) | $+0.02$ ($0.3\sigma$) | $-0.10$ ($1.1\sigma$) |
| log($Z_{\textrm{neb}}/Z_{\odot}$) | $K$ | $+0.14$ ($1.5\sigma$) | $+0.14$ ($1.6\sigma$) | $+0.11$ ($1.2\sigma$) | $+0.19$ ($2.1\sigma$) | $+0.33$ ($3.5\sigma$) |
| log($Z_{\textrm{neb}}/Z_{\odot}$) | $\Delta_{68}$ | $-0.56$ ($6.1\sigma$) | $-0.46$ ($5.0\sigma$) | $-0.29$ ($3.1\sigma$) | $-0.26$ ($2.8\sigma$) | $-0.30$ ($3.3\sigma$) |
| log(N/O) | $S$ | $-0.10$ ($1.1\sigma$) | $-0.01$ ($0.1\sigma$) | $-0.15$ ($1.6\sigma$) | $-0.48$ ($5.3\sigma$) | $-0.14$ ($1.6\sigma$) |
| log(N/O) | $K$ | $-0.07$ ($0.8\sigma$) | $-0.22$ ($2.4\sigma$) | $+0.09$ ($0.9\sigma$) | $+0.75$ ($8.2\sigma$) | $+0.35$ ($3.8\sigma$) |
| log(N/O) | $\Delta_{68}$ | $-0.48$ ($5.3\sigma$) | $-0.14$ ($1.6\sigma$) | $-0.45$ ($4.8\sigma$) | $-0.86$ ($9.4\sigma$) | $-0.70$ ($7.6\sigma$) |
| log($U$) | $S$ | $-0.52$ ($5.7\sigma$) | $-0.16$ ($1.8\sigma$) | $-0.54$ ($5.7\sigma$) | $-0.27$ ($3.0\sigma$) | $-0.40$ ($4.3\sigma$) |
| log($U$) | $K$ | $-0.53$ ($5.8\sigma$) | $-0.24$ ($2.7\sigma$) | $-0.46$ ($4.9\sigma$) | $-0.19$ ($2.1\sigma$) | $-0.29$ ($3.1\sigma$) |
| log($U$) | $\Delta_{68}$ | $-0.62$ ($6.8\sigma$) | $-0.49$ ($5.4\sigma$) | $-0.39$ ($4.1\sigma$) | $-0.37$ ($4.0\sigma$) | $-0.36$ ($4.0\sigma$) |
| log($Z_{\ast}/Z_{\odot}$) | $S$ | $-0.10$ ($1.1\sigma$) | $-0.02$ ($0.2\sigma$) | $-0.11$ ($1.1\sigma$) | $-0.17$ ($1.8\sigma$) | $-0.11$ ($1.2\sigma$) |
| log($Z_{\ast}/Z_{\odot}$) | $K$ | $+0.27$ ($3.0\sigma$) | $+0.26$ ($2.8\sigma$) | $+0.03$ ($0.3\sigma$) | $+0.11$ ($1.2\sigma$) | $+0.31$ ($3.4\sigma$) |
| log($Z_{\ast}/Z_{\odot}$) | $\Delta_{68}$ | $-0.62$ ($6.8\sigma$) | $-0.52$ ($5.7\sigma$) | $-0.19$ ($2.1\sigma$) | $-0.15$ ($1.6\sigma$) | $-0.29$ ($3.1\sigma$) |

\[corr\_w\_observables\]

Table \[corr\_w\_observables\] lists the Spearman coefficients ($\rho$) and the significance of the correlations between $\Delta_{68}$, $S$, and $K$ for each model parameter and the SNR of the Balmer decrement and the most commonly-measured strong emission lines. The skewness and kurtosis excess of the model parameter posteriors (corresponding to the shape of the PDFs) appear most commonly correlated with the SNR of \[S II\]$\lambda\lambda6718,6732$, although $S_{\textrm{N/O}}$ and $K_{\textrm{N/O}}$ are also negatively and positively correlated with the SNR of \[N II\]$\lambda6585$, respectively; the correlation with $K_{\textrm{N/O}}$ indicates that a higher SNR corresponds to strongly peaked posteriors. In contrast, $S_{U}$ and $K_{U}$ are anti-correlated with the SNR of the Balmer decrement and \[O II\]$\lambda\lambda3727,3729$ (instead reflecting a tendency toward fat-tailed distributions at high SNR). Of greater interest is that the precision of the model parameter estimates ($\Delta_{68}$) is sensitive to the quality of specific emission line features.
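The shape statistics and rank correlations in Table \[corr\_w\_observables\] can be computed with standard `scipy.stats` tools; the sketch below uses synthetic data (the mock SNR-width relation is illustrative only, not the KBSS sample):

```python
import numpy as np
from scipy.stats import kurtosis, skew, spearmanr

def shape_stats(samples, mass=0.68):
    """Return (S, K, Delta_68) for one marginalized posterior: skewness,
    kurtosis excess, and the width of the narrowest interval containing
    `mass` of the samples."""
    x = np.sort(np.asarray(samples))
    n_in = int(np.ceil(mass * len(x)))
    widths = x[n_in - 1:] - x[: len(x) - n_in + 1]
    return skew(x), kurtosis(x), float(widths.min())  # kurtosis() is excess by default

# Mock sample: posterior widths tighten as line SNR rises, plus noise.
rng = np.random.default_rng(1)
snr = rng.uniform(1.0, 30.0, size=150)
delta68 = 0.5 / np.sqrt(snr) + rng.normal(0.0, 0.02, size=150)
rho, pval = spearmanr(snr, delta68)  # expect rho < 0: higher SNR, narrower posterior
```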
Notably, improvements in $\Delta_{68}$ for log($Z_{\textrm{neb}}/Z_{\odot}$) correlate most strongly with the SNR of the Balmer decrement, but are also significantly (albeit only moderately, i.e., $|\rho|\lesssim0.5$) correlated with all of the other emission features excluding \[N II\]$\lambda6585$; this suggests that there is no single feature that is most important for determining $Z_{\textrm{neb}}$ in high-$z$ galaxies, and improving the SNR of any given line measurement only moderately improves the precision of $Z_{\textrm{neb}}$ estimates. Conversely, the precision of inferred log(N/O) is most strongly correlated with the SNR of \[N II\]$\lambda6585$, followed by the SNR of \[S II\]$\lambda\lambda6718,6732$, the Balmer decrement, and \[O II\]$\lambda\lambda3727,3729$. The ionization parameter is the only quantity for which $\Delta_{68}$ is significantly correlated with the SNR of all the strongest emission line features, with the precision of the parameter estimate most sensitive to changes in the quality of the Balmer decrement and \[O III\]$\lambda5008$ measurements. Somewhat surprisingly, $\Delta_{68}$ for log($Z_{\ast}/Z_{\odot}$) is only significantly correlated with the SNR of the Balmer decrement, \[O III\]$\lambda5008$, and \[S II\]$\lambda\lambda6718,6732$, perhaps reflecting the greater utility of the S2-BPT diagram over the N2-BPT diagram in discriminating between sources of excitation for high-$z$ galaxies. The advantage of having robust measurements of the \[S II\]$\lambda\lambda6718,6732$ doublet is also demonstrated by the fact that only the SNR of the sulfur lines and the Balmer decrement (critical for correctly accounting for differential reddening due to dust) are significantly correlated with the precision of the model estimates for all four parameters. Collectively, these results have a number of implications for using strong emission line measurements to infer galaxies’ physical conditions.
First, *the nebular spectrum of high-$z$ galaxies is most sensitive to changes in the ionization conditions in the galaxies’ H II regions*. Consequently, the ionization parameter, $U$, is the most easily and precisely determined quantity given the observables that are typically available for high-$z$ galaxies. Although there is certainly some ability to constrain the likely shape of the ionizing radiation (parameterized in our model by $Z_{\ast}$), our results also underscore that it is preferable to use more direct constraints, such as observations at rest-UV wavelengths, when available.

New Calibrations for Nebular Strong-Line Diagnostics {#strongline_section}
====================================================

The photoionization model method we have described can easily be applied to the majority of high-$z$ galaxy samples with existing rest-optical spectroscopy. However, it is useful to test whether there are new calibrations for the commonly-used strong-line diagnostics (especially for N/O and O/H) that may be more appropriate for high-$z$ samples than calibrations based on samples at lower redshifts. Significant effort in the last several years has been dedicated to understanding the evolution of strong-line metallicity indices as a function of redshift [e.g., @kewley2013; @steidel2014; @jones2015; @shapley2015; @cullen2016; @dopita2016; @sanders2016; @hirschmann2017; @kashino2017 among others], with the prevailing wisdom being that calibrations based on “extreme” low-$z$ samples of galaxies or H II regions are better suited for use at $\langle z\rangle = 2.3$ than those based on typical local galaxies. Although what is considered extreme varies, this assumption must be true to some degree, as evidence suggests that typical $\langle z\rangle = 2.3$ galaxies have considerably higher nebular ionization and excitation than typical local galaxies.
But it is also true that $\langle z\rangle = 2.3$ galaxies differ in their characteristic star-formation histories relative to galaxies at $z<1$. Such differences will result in important variations in the chemical abundance patterns between high-$z$ galaxies and even “extreme” low-$z$ objects, which may, in turn, introduce systematic biases when diagnostics tuned to the latter are applied to the former. In this section, we investigate the correlation between model-inferred physical conditions for individual galaxies in KBSS and strong-line ratios measured from their nebular spectra. Such an exercise allows us to both comment on which parameters galaxies’ spectra are most sensitive to and offer guidance regarding the utility of strong-line diagnostics at high redshift.

| Index | Definition | Traces | $\rho$ | Sig. ($\sigma$) | $\sigma_{\textrm{RMS}}$ (dex) |
|---|---|---|---|---|---|
| O32 | $\log$(\[O III\]$\lambda\lambda4960,5008$/\[O II\]$\lambda\lambda3727,3729$) | $U$ | $+0.91$ | $9.4$ | $0.11$ |
| Ne3O2 | $\log$(\[Ne III\]$\lambda3869$/\[O II\]$\lambda\lambda3727,3729$) | $U$ | $+0.56$ | $4.3$ | $0.22$ |
| O3 | $\log$(\[O III\]$\lambda5008$/H$\beta$) | $U$ | $+0.83$ | $9.1$ | $0.16$ |
| N2O2 | $\log$(\[N II\]$\lambda6585$/\[O II\]$\lambda\lambda3727,3729$) | N/O | $+0.77$ | $7.2$ | $0.12$ |
| N2S2 | $\log$(\[N II\]$\lambda6585$/\[S II\]$\lambda\lambda6718,6732$) | N/O | $+0.82$ | $6.8$ | $0.09$ |
| N2 | $\log$(\[N II\]$\lambda6585$/H$\alpha$) | N/O | $+0.81$ | $8.0$ | $0.11$ |
| N2 | $\log$(\[N II\]$\lambda6585$/H$\alpha$) | O/H | $+0.31$ | $3.0$ | $0.16$ |
| O3N2 | $\log$(\[O III\]$\lambda5008$/H$\beta$)$-\log$(\[N II\]$\lambda6585$/H$\alpha$) | O/H | $-0.37$ | $3.6$ | $0.16$ |
| R23 | $\log$\[(\[O III\]$\lambda\lambda4960,5008$+\[O II\]$\lambda\lambda3727,3729$)/H$\beta$\] | O/H | $-0.69$ | $6.9$ | $0.09$ |

\[index\_table\]

Ionization Parameter
--------------------

![image](logu_vs_o32_models_paper) ![image](logu_vs_ne3o2_models_paper) ![image](logu_vs_o3_models_paper)

The importance of the ionization state of the gas in H II regions in determining the resulting nebular spectrum is obvious: in order for the majority of the emission line features to be observed, H must be photoionized and then
recombine; simultaneously, heavier elements such as N, O, and S must be singly- or doubly-ionized, then collisionally excited. The details of the observed collisionally-excited emission are set by $T_e$, which is in turn sensitive to the distribution of ionizing photon energies and the gas cooling (dominated by emission from O atoms). Oxygen is the only element with multiple ions whose commonly-observed transitions fall in the $J$-, $H$-, and $K$-band for galaxies with $z\simeq2-2.7$[^7], making the combination of \[O III\]$\lambda5008$ and \[O II\]$\lambda\lambda3727,3729$ measurements a sensitive probe of the ionization conditions in high-$z$ galaxies. Although \[O I\]$\lambda6301$ can also be observed in the spectra of individual $z\simeq2-2.7$ galaxies on occasion, emission from neutral O may not be spatially coincident with emission from ionized O, which requires photons at least energetic enough to ionize H in order to be created. We discuss the challenges associated with observations of ions that trace low-ionization gas in Section \[no\_section\] below. As discussed in Section \[grid\_section\], we parameterize the ionization state of gas using the ionization parameter, $U=n_{\gamma}/n_H$. Higher values of $U$ will result in more doubly-ionized O relative to neutral and singly-ionized O in the irradiated gas. In general, $n_{\gamma}$ can be varied by changing the shape and/or the normalization of the ionizing radiation field, so measurements of $U$ are most meaningful when details about the ionizing source are also known. In our method, we explicitly decouple these effects so that $U$ reflects the required scaling of the ionizing radiation field, while the shape is set by the $Z_{\ast}$ of the input BPASSv2 model. Figure \[logu\_calibs\] shows the correlation between model-inferred log($U$) and common strong-line indices for the sample of KBSS galaxies with bound posteriors in all model parameters and $\textrm{SNR}>2$ for the line index (different for each panel).
The definitions for O32, Ne3O2, and O3 are listed in Table \[index\_table\], along with other commonly-used strong-line indices. Although all three indices shown in Figure \[logu\_calibs\] are strongly positively correlated with $U$, the correlation with O32 is the strongest and most significant (Spearman $\rho=0.91$, significance of 9.4$\sigma$). We can determine a calibration based on the KBSS results [calculated using the *MPFITEXY* IDL routine[^8], @williams2010], which has the following form: $$\log(U) = 0.79\times\textrm{O32}-2.95.
\label{logu_equation}$$ Relative to the best-fit relation (shown in cyan), the measurements have RMS scatter $\sigma_{\textrm{RMS}} = 0.11$ dex. The same statistics for the correlations between log($U$) and both Ne3O2 and O3 are reported in Table \[index\_table\], corresponding to the following calibrations: $$\begin{aligned}
&&\log(U) = 0.64\times\textrm{Ne3O2}-2.22 \\
&&\log(U) = 1.33\times\textrm{O3}-3.55.\end{aligned}$$

![image](no_vs_n2o2_models_paper) ![image](no_vs_n2s2_models_paper) ![image](no_vs_n2_models_paper)

It is perhaps somewhat disappointing that the correlation between Ne3O2 and log($U$) is so poor, given the practical advantages of measuring Ne3O2 relative to O32. Because \[Ne III\]$\lambda3869$ and \[O II\]$\lambda\lambda3727,3729$ are close in wavelength, observing both emission features requires less observing time for high-$z$ galaxies, and uncertainties in dust reddening are reduced. However, despite the relatively tight locus in Ne3O2 vs. O32 space observed for high-ionization $z\sim0$ SDSS galaxies [cf. @levesque2014], $z\sim2$ galaxies show notably more scatter in the same observed line ratio space. We also showed in that changes in the shape of the ionizing radiation field at fixed $U$ affect Ne3O2 more strongly than O32. For these reasons, the larger intrinsic scatter between Ne3O2 and log($U$) may not be entirely unexpected.
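For convenience, the three fits above can be wrapped as simple functions (our own naming; each carries the RMS scatter quoted in Table \[index\_table\] and applies only over the dynamic range probed by the KBSS sample):

```python
def logU_from_O32(o32):
    """log(U) = 0.79 * O32 - 2.95 (sigma_RMS ~ 0.11 dex)."""
    return 0.79 * o32 - 2.95

def logU_from_Ne3O2(ne3o2):
    """log(U) = 0.64 * Ne3O2 - 2.22 (larger scatter; see text)."""
    return 0.64 * ne3o2 - 2.22

def logU_from_O3(o3):
    """log(U) = 1.33 * O3 - 3.55."""
    return 1.33 * o3 - 3.55
```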
In contrast, it is interesting that O3 (which has the same advantages as Ne3O2) is more significantly correlated with log($U$) and has less scatter about the best-fit calibration. An important caveat is that the calibrations presented here are only appropriate for objects with stellar populations similar to those of KBSS galaxies; this is especially important for O3, which is more strongly correlated with O/H in samples with presumably softer ionizing radiation fields, including SDSS galaxies [@maiolino2008].

Nitrogen-to-oxygen Ratio {#no_section}
------------------------

In contrast with $U$, N/O has a smaller effect on the overall nebular spectra from regions, as it primarily affects emission lines of N. Still, as with $U$, there are clear and direct proxies for measuring N/O accessible in the rest-optical nebular spectra of galaxies. Due to similarities in the ionization potentials of N and O, the ionization correction factors are also relatively similar, meaning that N$^+$/O$^+$ corresponds roughly to N/O. As \[N II\]$\lambda6585$ and \[O II\]$\lambda\lambda3727,3729$ are frequently detected in high-$z$ galaxy spectra, N/O is one of the most accessible probes of the chemical abundance pattern in galaxies’ interstellar medium (ISM). The topic of N/O in galaxies has a long history, with a recent resurgence in interest as samples of high-$z$ galaxies with rest-optical spectra have increased in size and quality [e.g., @masters2014; @shapley2015; @masters2016; @kashino2017; @kojima2017]. In , we addressed the issue of N/O in KBSS galaxies, using N2O2 and a calibration based on a sample of extragalactic regions from . The 414 objects from have direct method measurements of N/H and O/H, which make them a useful comparison sample that we also choose to employ here. Figure \[no\_calibs\] shows the correlation between model-inferred log(N/O) and N2O2, N2S2 (another commonly-used probe of N/O), and N2 for KBSS galaxies (green points) and the regions (orange squares).
Also shown are calibrations based on the sample from (red dashed lines) and new calibrations determined based on the work presented in this paper (cyan lines). As with the strong-line calibrations for $U$ introduced above, we assess the usefulness of the strong-line ratios as proxies for N/O by calculating the strength and significance of the correlations, as well as the best-fit linear relation between N/O and the indices. The results of this analysis are listed in Table \[index\_table\], and the calibrations themselves are $$\begin{aligned} &&\log(\textrm{N/O}) = 0.51\times\textrm{N2O2}-0.65 \label{n2o2_equation} \\ &&\log(\textrm{N/O}) = 0.85\times\textrm{N2S2}-1.00\\ &&\log(\textrm{N/O}) = 0.62\times\textrm{N2}-0.57.\end{aligned}$$ All three indices have similarly significant correlations with N/O, as determined by a Spearman rank-correlation test, and similar RMS scatter relative to the new calibrations. We can take some guidance from the distribution of regions, which exhibit similar behavior in the indices relative to N/O, but have smaller measurement errors. For the sample, N2O2 (which is the most direct proxy for N/O) exhibits the smallest scatter relative to the best-fit relation based on the same sample (the red dashed line in the left panel of Figure \[no\_calibs\]); this suggests that N2O2 may also be the most accurate probe of N/O in high-$z$ galaxies. ![N2S2 and N2O2 for three samples of objects: $\langle z\rangle = 2.3$ KBSS galaxies (green points), individual $z\sim0$ regions from (orange squares), and $z\sim0$ galaxies from SDSS (greyscale, with 90% of the sample enclosed by the red contour). As in Figure \[no\_calibs\], KBSS galaxies and the sample exhibit similar behavior, suggesting that N2O2 and N2S2 trace N/O in similar ways between the two samples. 
However, the locus of SDSS galaxies occupies a distinct region of parameter space, with somewhat higher N2O2 and N2S2 overall, but lower N2S2 at a given N2O2 than either the $\langle z\rangle = 2.3$ galaxies or $z\sim0$ regions. One explanation for this difference may be the increased importance of low-ionization diffuse ionized gas in the integrated-light spectra of nearby galaxies.[]{data-label="n2s2_vs_n2o2"}](n2s2_vs_n2o2) Although the calibrations based on the sample are similar to those based on our new measurements for individual $\langle z\rangle = 2.3$ KBSS galaxies (especially for N2O2), we note that calibrations based on samples of local *galaxies* generally fare much worse in returning accurate estimates of N/O for high-$z$ galaxies. Figure \[n2s2\_vs\_n2o2\] shows the reason for this discrepancy. Although N2S2 and N2O2 have almost identical behavior relative to one another for KBSS galaxies (green points) and regions (orange squares), the behavior of the same indices for $z\sim0$ galaxies from SDSS (greyscale, with 90% of galaxies enclosed by the red contour) differs significantly, particularly at higher values of N2S2 and N2O2. Just as locally-calibrated strong-line methods for O/H that use the N2-BPT lines will return inconsistent estimates relative to one another for objects that fall significantly off the $z\sim0$ N2-BPT locus, strong-line methods for N/O based on $z\sim0$ galaxies will disagree for high-$z$ galaxies. This inconsistency poses challenges to easily comparing N/O inferred using N2S2 and N2O2. @masters2016 argue that the N/O-M$_{\ast}$ relation evolves slowly with redshift, with high-$z$ galaxies exhibiting only slightly lower N/O at fixed M$_{\ast}$ relative to local galaxies. 
Their inferences are based on results from @kashino2017, who examine the behavior of N2S2 with M$_{\ast}$ for a sample of $z\sim1.6$ galaxies from the FMOS-COSMOS program [@silverman2015] relative to $z\sim0$ galaxies from SDSS and find that the values of N2S2 observed in high-mass galaxies in their sample approach the N2S2 measured in high-mass SDSS galaxies. Based on these observations and the expectation that the mass-metallicity relation evolves more quickly with redshift [e.g., @erb2006metal; @steidel2014; @sanders2015], @masters2016 claim that the slight observed decrement in N2S2 of the $z\sim1.6$ FMOS-COSMOS galaxies relative to $z\sim0$ SDSS galaxies reflects higher N/O values in high-$z$ galaxies at fixed O/H relative to the N/O-O/H relation observed in the local universe. From our analysis of N/O in KBSS galaxies in , however, we show that the evolution in the N/O-M$_{\ast}$ relation is roughly equivalent in magnitude to the inferred evolution in the O/H mass-metallicity relation, with $\langle z\rangle = 2.3$ galaxies exhibiting values of log(N/O) that are $\sim0.32$ dex lower than $z\sim0$ galaxies at fixed M$_{\ast}$. While we could not directly study the N/O-O/H relation for high-$z$ galaxies in , we revisit the likelihood of elevated N/O at fixed O/H in Section \[no\_offset\_section\]. The behavior of N2S2 and N2O2 shown in Figure \[n2s2\_vs\_n2o2\] offers some clues that may explain the discrepancy between our interpretation of the data and the interpretation favored by @masters2016 and @kashino2017. We can confidently assume that N/O increases with increasing N2S2 and N2O2 for all three samples. Nevertheless, N2S2 does not increase as quickly with N/O in $z\sim0$ galaxies as it does in $z\sim0$ regions and $\langle z\rangle = 2.3$ galaxies. As a result, a comparison between N2S2, without first converting the index to N/O using the appropriate calibration, will underestimate N/O in $z\sim0$ galaxies relative to $\langle z\rangle = 2.3$ galaxies. 
Conversely, comparing N2O2 between $z\sim0$ and $\langle z\rangle = 2.3$ galaxies (as we did in by assuming the same calibration for N/O) may overestimate the difference between the two samples, but N2O2 remains a far more direct tracer of N/O than N2S2. The comparison between N2S2 and N2O2 shown in Figure \[n2s2\_vs\_n2o2\] also highlights an intriguing and likely very meaningful result: $\langle z\rangle = 2.3$ galaxies have nebular spectra that are much more similar to individual regions at $z\sim0$ than to integrated-light spectra of $z\sim0$ galaxies. There are a number of reasons this might be the case, but as other authors [including @sanders2016; @kashino2017] have also suggested, the most straightforward explanation is that star-formation in $\langle z\rangle = 2.3$ galaxies takes place mostly in one, or a few, dominant regions, compared to typical $z\sim0$ galaxies with similar M$_{\ast}$, where regions are more isolated with respect to one another. Thus, an integrated-light spectrum of a local galaxy may potentially include contributions from gas outside regions (especially for neutral species like O$^0$ and low-ionization species like S$^+$, which do not require an H-ionizing photon to be created); we refer readers to @sanders2017 for more information regarding the biases introduced by not accounting for this diffuse ionized gas. In this context, the behavior of N2S2 for SDSS galaxies in Figure \[n2s2\_vs\_n2o2\] makes more sense. If the \[S II\]$\lambda\lambda6718,6732$ emission observed in SDSS galaxy spectra includes contributions from the diffuse ionized gas in addition to region emission, N2S2 will be depressed at fixed N2O2 relative to high-$z$ galaxies and individual regions.
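The three N/O calibrations above (Equation \[n2o2\_equation\] and its companions) can likewise be sketched in code. The function names are ours; note the caveat above that N2S2-based estimates should not be compared across samples without the per-sample calibration:

```python
# Sketch of the KBSS N/O calibrations quoted above. N2O2 is the most
# direct proxy for N/O; N2S2 behaves differently for z~0 SDSS galaxies
# (likely due to diffuse ionized gas) and so is less portable between
# samples. Function names are illustrative, not from the paper.

def logNO_from_N2O2(n2o2: float) -> float:
    return 0.51 * n2o2 - 0.65

def logNO_from_N2S2(n2s2: float) -> float:
    return 0.85 * n2s2 - 1.00

def logNO_from_N2(n2: float) -> float:
    return 0.62 * n2 - 0.57
```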
Oxygen Abundance {#oh_section}
----------------

![image](met_vs_r23_models_paper) ![image](met_vs_o3n2_models_paper) ![image](met_vs_n2_models_paper)

Of perhaps greatest interest is the potential to provide new calibrations for strong-line diagnostics for O/H, which is generally the quantity one intends when referring to gas-phase “metallicity”. The impact of the overall metallicity of ionized gas on the resulting nebular spectrum is more nuanced than the effects of ionization and differences in abundance *ratios*, like N/O. Metals, like O, are the primary coolants in low-density ionized gas, as they are able to convert kinetic energy in the gas into electromagnetic emission through collisionally-excited transitions. Yet, trends in emission line strength with O/H for transitions such as \[O III\]$\lambda5008$ are frequently complicated due to the competition between enrichment and gas cooling. The classic example is the double-valued behavior of R23, which increases with increasing O/H due to the larger number of O atoms present—until some critical value (usually near $12+\log(\textrm{O/H})=8.3$, but sensitive to the shape of the ionizing radiation field), at which point gas cooling becomes so efficient that the collisional excitation of O$^{+}$ and O$^{++}$ drops, and the observed value of R23 declines. Galaxies or regions with maximal values of R23 ($\approx0.8-1.0$) must therefore have moderate O/H, as lower or higher values of O/H would result in a lower value of R23 being observed. This reasoning led us to conclude in and that most $\langle z\rangle = 2.3$ KBSS galaxies must be moderately O-rich, which we have now confirmed in this paper (Figure \[cloudy\_hists\]). For strong-line indices other than R23, including O3N2 and N2, part of the ability to estimate O/H comes from indirectly measuring N/O instead (by including \[N II\]$\lambda6585$) and relying on the existence of a relationship between N/O and O/H.
This implicit dependence on the N/O-O/H relation is what, until now, has prevented a direct investigation of the N/O-O/H relation at high redshift (the subject of Section \[no\_vs\_oh\_section\]) and an independent analysis of O/H enrichment in the early universe, free from the biases introduced by relying on local calibrations. We are now poised to develop new calibrations for O/H using the results from our photoionization model method, but must first address the issue of abundance scales. To this point, we have reported the results of our photoionization model method in terms of $Z_{\textrm{neb}}$. However, given the importance of O to the overall mass budget of metals in regions and, thus, to gas cooling and the nebular spectrum, it is reasonable to assume that $Z_{\textrm{neb}}$ largely traces gas-phase O/H. As a result, we may convert log($Z_{\textrm{neb}}/Z_{\odot}$) to the more commonly-used metric $12+\log(\textrm{O/H})$ by adopting a fiducial solar value. From @asplund2009, $12+\log(\textrm{O/H})_{\odot}=8.69$, so $12+\log(\textrm{O/H})_{\textrm{KBSS}}=\log(Z_{\textrm{neb}}/Z_{\odot})+8.69$. The O/H scale that results from this translation differs significantly from the abundance scale for diagnostics determined using $T_e$-based measurements of O/H, which are lower by $\approx0.24$ dex relative to measurements based on nebular recombination lines (see the discussion in Section 8.1.2 of @steidel2016; also @esteban2004 [@blanc2015]). Authors who have investigated this phenomenon report that the abundances resulting from recombination line methods are in fact closer to the *stellar* abundance scale than abundances from $T_e$-based methods that rely on collisionally-excited emission lines. Additionally, as we showed in , a similar offset in 12+log(O/H) is required to force (N/O)$_{\odot}$ to correspond with 12+log(O/H)$_{\odot}$ for local regions from , where both N/O and O/H are measured using the direct method. 
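The abundance-scale bookkeeping described above (converting log($Z_{\textrm{neb}}/Z_{\odot}$) to $12+\log(\textrm{O/H})$ with the @asplund2009 solar value, and the 0.24 dex offset between the $T_e$-based and recombination-line scales) amounts to two additive constants. A minimal sketch, with constant and function names of our own choosing:

```python
# Sketch of the abundance-scale conversions described above.
# Names are ours, not from the paper.

LOG_OH_SOLAR = 8.69      # solar 12+log(O/H), Asplund et al. (2009)
TE_SCALE_OFFSET = 0.24   # dex; Te-based abundances sit below the
                         # recombination-line (and KBSS model) scale

def logOH12_from_Zneb(log_zneb_over_zsun: float) -> float:
    """Convert log(Z_neb/Z_sun) to 12+log(O/H) on the KBSS scale."""
    return log_zneb_over_zsun + LOG_OH_SOLAR

def te_to_kbss_scale(logOH12_te: float) -> float:
    """Shift a Te-based (direct method) abundance up onto the KBSS scale."""
    return logOH12_te + TE_SCALE_OFFSET
```

For example, a galaxy with $Z_{\textrm{neb}} = Z_{\odot}$ maps to $12+\log(\textrm{O/H}) = 8.69$.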
Therefore, in order to compare our O/H measurements with those for the region sample used in the previous section (which is also the same as used in ), we shift the abundances toward higher $12+\log(\textrm{O/H})$ by 0.24 dex. It is not necessary to account for this difference when considering the abundance *ratio* N/O, as it affects inferences of both O/H and N/H. Figure \[oh\_calibs\] compares the correlation between $12+\log(\textrm{O/H})$ and the three most commonly-used strong-line indices for measuring O/H for both $\langle z\rangle = 2.3$ KBSS galaxies (green points) and the local regions from (orange squares). Notably, R23 shows the strongest trend with $12+\log(\textrm{O/H})$ for $\langle z\rangle = 2.3$ KBSS galaxies, with O3N2 and N2 showing markedly larger scatter. Table \[index\_table\] lists the Spearman coefficient and the significance of the correlation for all three indices. As with strong-line indices and N/O in Figure \[no\_calibs\], the behavior of R23 with O/H appears very similar to that observed for the regions, although the KBSS sample mostly populates the high-metallicity “branch.” If we solve for a quadratic calibration that describes the relation between $12+\log(\textrm{O/H})$ and R23 for the KBSS sample (shown in cyan in the left panel of Figure \[oh\_calibs\]), we find $$12+\log(\textrm{O/H}) = 8.24+(0.85-0.87\times\textrm{R23})^{1/2}, \label{r23_equation}$$ with the R23 measurements exhibiting $\sigma_{\textrm{RMS}} = 0.09$ dex relative to the calibration. The calibration in Equation \[r23\_equation\] is significantly different from the calibration for R23 provided by @maiolino2008, which we have shifted by 0.24 dex and shown as the dot-dashed purple curve in the left panel of Figure \[oh\_calibs\]. 
@maiolino2008 used a superset of direct method and photoionization model measurements of O/H to construct a family of self-consistent strong-line diagnostics (the corresponding calibrations for O3N2 and N2 are shown in the center and right panels). Although both the @maiolino2008 and KBSS calibrations reach a maximum value of R23 at nearly the same oxygen abundance ($12+\log(\textrm{O/H})\approx 8.3$) the KBSS calibration has a smaller latus rectum[^9], with R23 varying more quickly as a function of O/H. We note, however, that $\sim84$% of $\langle z\rangle = 2.3$ KBSS galaxies have $\textrm{R23}>0.8$, where the index (regardless of calibration choice) becomes relatively insensitive to changes in O/H. As a result, even our re-calibration for R23 is primarily useful for O-rich galaxies. ![$12+\log(\textrm{O/H})$ as a function of $X_{\textrm{O32-R23}}$, a diagnostic combining O32 and R23, as described in the text. The dotted line shows the one-to-one relation. Although making use of both indices reduces scatter relative to the model-inferred O/H at high-metallicities, the calibration fails at $12+\log(\textrm{O/H})\lesssim8.6$ ($Z_{\textrm{neb}}/Z_{\odot}\lesssim0.8$).[]{data-label="o32r23_calib"}](met_vs_o32r23_models_paper) If we attempt to construct a calibration based on O32 *and* R23, in order to jointly account for the effects of O/H and $U$, we can reduce the scatter relative to the model-inferred O/H at high metallicities, as shown in Figure \[o32r23\_calib\]. Here, we have chosen a diagnostic with the following form: $$\begin{split} 12+\log(\textrm{O/H}) = X_{\textrm{O32-R23}} = 9.59+0.93\times\textrm{O32}\\-1.85\times\textrm{R23}+0.18\times\textrm{O32}^2+0.65\times\textrm{R23}^2\\-0.96\times(\textrm{O32}\times\textrm{R23}), \end{split}$$ which is based on the sample of KBSS galaxies occupying the upper branch of the R23 relation, corresponding to $12+\log(\textrm{O/H})\geq8.3$, and determined using a least-squares fit. 
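In code, the upper-branch R23 calibration (Equation \[r23\_equation\]) and the combined O32-R23 diagnostic above look like the following sketch. Function names are ours; both fits apply only to the high-metallicity branch, and the combined diagnostic fails below $12+\log(\textrm{O/H})\lesssim8.6$, as discussed in the text. The domain check on R23 is our addition, since the square root in Equation \[r23\_equation\] is undefined for $\textrm{R23} > 0.85/0.87 \approx 0.977$:

```python
import math

def logOH12_from_R23_upper(r23: float) -> float:
    """Upper-branch R23 calibration (Equation [r23_equation]);
    sigma_RMS ~ 0.09 dex for the KBSS sample."""
    disc = 0.85 - 0.87 * r23
    if disc < 0:
        raise ValueError("R23 exceeds the calibration maximum (~0.977)")
    return 8.24 + math.sqrt(disc)

def logOH12_from_O32_R23(o32: float, r23: float) -> float:
    """Combined diagnostic X_{O32-R23}, fit to the upper-branch
    (12+log(O/H) >= 8.3) KBSS sample by least squares."""
    return (9.59 + 0.93 * o32 - 1.85 * r23
            + 0.18 * o32**2 + 0.65 * r23**2
            - 0.96 * o32 * r23)
```

At the maximum permitted R23, the first calibration returns its turnaround value of $12+\log(\textrm{O/H}) = 8.24$.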
For large values of O/H, this calibration has less RMS scatter than the R23-only calibration—but it fails for galaxies with $12+\log(\textrm{O/H})\lesssim8.6$ ($Z_{\textrm{neb}}/Z_{\odot}\lesssim0.8$), greatly reducing its utility for samples at high redshift. It is likely that a separate O32-R23 calibration could be determined for galaxies on the low-metallicity branch in R23, but it is clear that inferring O/H for galaxies with R23 near the maximum using a simple strong-line diagnostic is more challenging. The large scatter between both O3N2 and N2 and O/H for KBSS galaxies is notable and may caution against relying on either index as a diagnostic for O/H in high-$z$ galaxies, despite the formal significance of the correlations. Still, these indices have been favored in the past due to their practical advantages: they are less sensitive to dust reddening and require fewer emission line measurements, thus necessitating less observing time. Using the KBSS results to construct new calibrations, we find $$\begin{aligned} &&12+\log(\textrm{O/H}) = 8.75-0.21\times\textrm{O3N2} \\ &&12+\log(\textrm{O/H}) = 8.77+0.34\times\textrm{N2}.\end{aligned}$$ We emphasize that, even using these calibrations, the large scatter could lead to incorrect estimates of O/H in high-$z$ galaxies. However, the situation is worse if calibrations based on a local sample (the red dashed line in the right panel of Figure \[oh\_calibs\]) are used instead; our results imply that $12+\log(\textrm{O/H})$ could be overestimated by up to $\sim0.4$ dex for some high-$z$ galaxies. 
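For completeness, the two KBSS calibrations above can be sketched as follows (function names are ours); per the caveats in the text, the large scatter in both indices means these should be used with caution for individual high-$z$ galaxies:

```python
# Sketch of the new KBSS calibrations for O/H from O3N2 and N2 quoted
# above. Both indices show large scatter relative to model-inferred O/H,
# so these are rough estimators. Function names are illustrative.

def logOH12_from_O3N2(o3n2: float) -> float:
    return 8.75 - 0.21 * o3n2

def logOH12_from_N2(n2: float) -> float:
    return 8.77 + 0.34 * n2
```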
![image](logu_vs_zstar_stack_models_paper){width="30.00000%"} ![image](logu_vs_oh_stack_models_paper){width="30.00000%"} ![image](logu_vs_no_stack_models_paper){width="30.00000%"}\ ![image](no_vs_zstar_stack_models_paper){width="30.00000%"} ![image](no_vs_oh_stack_models_paper){width="30.00000%"} ![image](zstar_vs_zneb_stack_models_paper){width="30.00000%"} Given these results, it would seem that the most accurate and precise estimates of $12+\log(\textrm{O/H})$ for $\langle z\rangle = 2.3$ galaxies *currently* achievable result from using a photoionization model method such as the one described in this paper. Although R23 (or a combined diagnostic using O32 and R23) is useful for galaxies with relatively high metallicities ($12+\log(\textrm{O/H})\gtrsim8.5-8.6$), there is significant added value in using the same emission line measurements to simultaneously constrain O/H, N/O, and $U$. Measurements of $T_e$-based abundances for a representative sample of high-$z$ galaxies, which will be enabled in the future by facilities like the *James Webb Space Telescope*, may lead to improved calibrations. However, we caution that strong-line calibrations for quantities like N/O and O/H inevitably marginalize over intrinsic scatter in the galaxy population—which may itself be correlated with other physical conditions. Correlations Between Physical Conditions {#correlation_section} ======================================== Correlations Between Model Parameters for Individual Galaxies ------------------------------------------------------------- We have previously noted the similarity in the behavior of the nebular spectra of $\langle z\rangle = 2.3$ KBSS galaxies and individual local regions, especially as compared to $z\sim0$ galaxies. This agreement—or disagreement, in some cases—arises from similarities and differences in the underlying correlations between the physical conditions driving the observed spectrum. 
Thus, it is important to quantify the intrinsic correlations between these parameters to understand the ways in which the high-$z$ galaxy populations differ from galaxies found in the present-day universe. When performing linear regression of astronomical data, the measurement errors on the independent and dependent variables are often assumed to be independent. In the case that the measurement errors are actually correlated, however, the magnitude of the observed correlation will be overestimated if the measurement error correlation has the same sign as the intrinsic correlation between variables and underestimated in cases where the measurement error correlation has a different sign from the intrinsic correlation [@kelly2007]. As a result, weak correlations where the measurement error correlation has the opposite sign may be unrecoverable unless the measurement errors *and* the covariance between measurement errors have been accounted for using a statistical model. From the MCMC results shown in Figures \[good\_mcmc\] and \[bad\_mcmc\], it is clear that the model parameter estimates are correlated with one another for at least some fraction of the KBSS sample. Knowledge of these degeneracies for individual galaxies assists in measuring the intrinsic correlations between parameters across the entire sample, but it is also interesting to consider the *typical* correlation between model parameters for a single galaxy, which can inform our intuition regarding galaxies’ nebular spectra. 
[llc]{}
Parameter 1 & Parameter 2 & Correlation coefficient\
log($Z_{\ast}/Z_{\odot}$) & log(N/O) & $+0.07$\
log($Z_{\textrm{neb}}/Z_{\odot}$) & log(N/O) & $-0.09$\
log($Z_{\textrm{neb}}/Z_{\odot}$) & log($Z_{\ast}/Z_{\odot}$) & $+0.14$\
log(N/O) & log($U$) & $+0.21$\
log($Z_{\textrm{neb}}/Z_{\odot}$) & log($U$) & $+0.31$\
log($Z_{\ast}/Z_{\odot}$) & log($U$) & $+0.48$

\[covar\_table\]

Figure \[param\_correlations\] shows the average pairwise posteriors for the 4 model parameters, which have been constructed by shifting the normalized 2D posteriors for individual objects so that the peak of the distribution is located at the origin and summing the PDFs for all objects with bound posteriors. The green and blue contours highlight the shape of the posterior, assuming it can be described by a bivariate Gaussian function, and the linear correlation coefficients derived from the fits are reported in Table \[covar\_table\]. The correlations between some model parameters are expected, including the strong correlation between $U$ and $Z_{\ast}/Z_{\odot}$ (upper left panel), which parameterize the normalization and shape of the ionizing radiation, respectively. These results highlight the power of applying additional constraints, where available. For example, if we could impose a prior on $Z_{\ast}$ from observations of the rest-UV spectra of the same galaxies (as we did for a composite spectrum in ), our estimates of log($U$) would be even more precise because we would not need to marginalize over as large a range in $Z_{\ast}$. In some cases, the correlations between inferred parameters run counter to our expectations for the intrinsic correlations between the same parameters: N/O and $Z_{\textrm{neb}}/Z_{\odot}$ (bottom center panel) are anti-correlated for a typical individual galaxy, whereas N/O and O/H (which is traced by $Z_{\textrm{neb}}/Z_{\odot}$) are positively correlated for galaxy and region samples in the nearby universe.
This result underscores the importance of accounting for the correlations between inferred parameters for individual objects when quantifying the intrinsic correlations between the same parameters in the full sample, which is the focus of the remainder of this section. N/O-O/H Relation {#no_vs_oh_section} ---------------- ![The N/O-O/H relation observed for $\langle z\rangle = 2.3$ KBSS galaxies, shown as green points; galaxies with only a limit on the inferred value of log(N/O) are shown as dark green triangles at the location corresponding to a $2\sigma$ upper limit. For comparison, the distribution of $z\sim0$ objects from is represented by the orange contours, with the outermost contour enclosing 99% of the sample. Most of the KBSS galaxies ($\sim72$%) fall within these contours and appear to follow the same trend as the local regions. Three galaxies with different combinations of N/O and O/H are highlighted by colored symbols: Q1442-RK119 (red diamond), Q1603-BX173 (yellow triangle), and Q0207-BX230 (purple square). For comparison, the locations of these galaxies in common line-ratio diagrams are shown in Figure \[bpt\_no\_examples\].[]{data-label="no_vs_oh_plot"}](no_vs_oh_twins_models_paper) Figure \[no\_vs\_oh\_plot\] shows that $\langle z\rangle = 2.3$ star-forming galaxies exhibit N/O ratios at fixed O/H consistent with the range observed in local regions. The KBSS sample is shown as green points or dark green triangles (for galaxies with only upper limits on N/O), with the local sample from represented by the orange contours; 72% of KBSS galaxies fall within the outermost contour, which contains 99% of the objects, and there is good agreement in the distribution of N/O at fixed O/H between the samples. We note that the paucity of objects with $12+\log(\textrm{O/H})<8.2$ is likely the result of selection bias. 
Galaxies with lower M$_{\ast}$ than typical KBSS galaxies, such as the Lyman-$\alpha$-selected galaxies from @trainor2015 with M$_{\ast} \approx 10^{8}-10^{9}$ M$_{\odot}$, are expected to populate this region of parameter space. Like the $z\sim0$ regions, $\langle z\rangle = 2.3$ galaxies show large scatter in N/O at fixed O/H. At $12+\log(\textrm{O/H})=8.3$ ($Z_{\textrm{neb}}/Z_{\odot}\approx0.4$), the full range in observed $\log(\textrm{N/O})$ is $\sim0.8$ dex, consistent with that observed among the regions. Although N/O and O/H are formally significantly correlated for the local sample, a simple Spearman test indicates only a marginally significant (2.3$\sigma$) correlation between the two parameters for KBSS galaxies. Given the anti-correlation between inferred N/O and O/H for individual galaxies (bottom center panel of Figure \[param\_correlations\], Table \[covar\_table\]), however, a Spearman test is likely to underestimate the presence of a positive correlation between N/O and O/H for the KBSS sample. To address this issue, we use the IDL routine LINMIX\_ERR to measure the strength of the correlation between N/O and O/H among the KBSS galaxy sample while accounting for both measurement errors and the covariance between measurement errors. The resulting linear correlation coefficient inferred for the $\langle z\rangle = 2.3$ N/O-O/H relation is $\rho=0.52$, with an overall $99.8$% likelihood of a positive correlation between the parameters and an estimated intrinsic scatter of $\sigma_{\textrm{int}} = 0.18$ dex. ![image](bpt_no_select_models_paper) ![image](s2_bpt_no_select_models_paper) ![image](o32_r23_no_select_models_paper) The physical cause of this intrinsic scatter in N/O at fixed O/H is not immediately obvious for either the $\langle z\rangle = 2.3$ sample or the $z\sim0$ objects. 
Some authors have found that local galaxies with higher N/O at a given O/H also have higher SFR [@andrews2013], but we find no strong evidence of a similar trend for the KBSS sample. To illustrate differences in the nebular spectra of high-$z$ galaxies found in different regions of the N/O-O/H plane, we highlight three galaxies in Figure \[no\_vs\_oh\_plot\]: Q1442-RK119, which has the highest value of N/O in the KBSS sample (red diamond); Q1603-BX173, near the upper envelope of the locus at more moderate O/H (yellow triangle); and Q0207-BX230, which has similar O/H to Q1603-BX173, but is near the low-N/O edge of the distribution (purple square). The locations of these galaxies in the N2-BPT, S2-BPT, and O32-R23 diagrams are also noted in Figure \[bpt\_no\_examples\]. As expected for a galaxy with high N/O, Q1442-RK119 also has a very high value of N2 (the highest among the star-forming KBSS galaxies discussed in this paper[^10]). Further, in contrast to the two galaxies with more moderate O/H, Q1442-RK119 is found on the low-R23 side of the galaxy locus in the O32-R23 diagram, as expected. A comparison between the locations of Q1603-BX173 and Q0207-BX230 demonstrates that an increase in N/O at fixed O/H does not correspond only to a horizontal shift in the N2-BPT diagram as one might naively expect; instead, Q1603-BX173 also exhibits higher \[O III\]/H$\beta$ and O32 and lower \[N II\]/H$\alpha$. Together, these differences reflect an increase in log($U$) of 0.43 dex (at similar O/H to Q0207-BX230), which is clear when information from all three diagrams is incorporated (as in our photoionization model method) but more difficult to diagnose on the basis of the galaxies’ locations in a single diagram.

### Enhanced N/O and the N2-BPT Offset {#no_offset_section}

In , we studied the distribution of N/O (inferred using N2O2) within the KBSS sample without the aid of independent estimates of O/H.
Based on our comparison between $\langle z\rangle = 2.3$ KBSS galaxies and $z\sim0$ galaxies that likely share similar ionizing radiation fields, we concluded that although high-$z$ galaxies may have somewhat higher N/O than local galaxies with the same O/H, this could not account for the entirety of the offset observed between typical $\langle z\rangle = 2.3$ galaxies and typical $z\sim0$ galaxies in the N2-BPT diagram. Now, with self-consistent estimates of N/O and O/H in hand for a large sample of $\langle z\rangle = 2.3$ galaxies, we can revisit the question of whether N/O is enhanced in the most offset star-forming galaxies at high redshift. If we simultaneously solve for the best-fit linear relation between N/O and O/H for KBSS galaxies that fall above the sample ridge-line in the N2-BPT diagram (the cyan curve in the left panel of Figure \[bpt\_no\_examples\]) and those that fall below the ridge-line, we confirm an offset between the two samples of $\Delta\log(\textrm{N/O})=0.25\pm0.05$ dex, with the most offset KBSS galaxies exhibiting significantly higher N/O at fixed O/H than KBSS galaxies that appear more similar to typical galaxies in SDSS. However, we note that this difference cannot account for the *total* offset between the KBSS subsamples in the N2-BPT diagram , reaffirming our conclusion from @steidel2014 and that high-$z$ galaxies must also have harder ionizing radiation fields at fixed O/H relative to typical local galaxies. Using the results from our MCMC method, we find that a combination of higher ionization parameters and harder spectra (due to more Fe-poor stellar populations) account for the remainder of the offset that cannot be explained by the modest N/O enhancement at fixed O/H. A two-sample KS test reveals that the distributions of $U$ and $Z_{\ast}/Z_{\odot}$ for KBSS galaxies above and below the ridge-line in the N2-BPT diagram are statistically inconsistent with one another. 
For galaxies above the ridge-line, the median $\log(U)=-2.69$ and the median $Z_{\ast}/Z_{\odot}=0.18$, whereas for galaxies below the ridge-line, the median $\log(U)=-2.90$ and the median $Z_{\ast}/Z_{\odot}=0.25$.

$U$-O/H and $U$-$Z_{\ast}$ Relations
------------------------------------

![Ionization parameter as a function of oxygen abundance for the KBSS galaxies with bound posteriors in all model parameters. As discussed in the text, after accounting for measurement errors and the covariance between inferred parameters, we find only a 29% probability of an intrinsic anti-correlation between $U$ and O/H.[]{data-label="logu_vs_oh_plot"}](logu_vs_oh_models_paper)

![Ionization parameter as a function of stellar metallicity, which traces Fe/H, for the same galaxies shown in Figure \[logu\_vs\_oh\_plot\]; here, the yellow stars show the median value of log($U$) in equal-number bins of $Z_{\ast}/Z_{\odot}$. Accounting for measurement errors reveals evidence of a somewhat more significant anti-correlation than observed between $U$ and O/H. We calculate a 93% probability of a negative correlation, with a linear correlation coefficient of $\rho=-0.20$. The best-fit linear relation corresponding to this correlation is shown as a cyan line in the figure.[]{data-label="logu_vs_zstar_plot"}](logu_vs_zstar_models_paper)

Several authors have found evidence for an anti-correlation between ionization parameter and O/H in low-$z$ samples [e.g., @dopita1986; @dopita2006; @perez-montero2014], but evidence for a similar trend among high-$z$ galaxies is less abundant.
@sanders2016 offer a thorough exploration of the behavior of O32 with galaxy properties in $\langle z\rangle = 2.3$ MOSDEF galaxies; they noted the strong observed anti-correlation between O32 and M$_{\ast}$ and suggested that an anti-correlation between ionization parameter and metallicity at high redshift might also exist, presuming the existence of a mass-metallicity relation. @onodera2016 estimate $q$ ($= Uc$) and O/H for a sample of $z\gtrsim3$ galaxies using strong-line methods and recover a strong anti-correlation, but note that it arises largely because both parameters are estimated using the same line ratio. Although @kojima2017 use $T_e$-based estimates of O/H instead of relying on strong-line methods, they find that most of the $z\sim2$ galaxies in their sample are not consistent with the observed $U$-O/H relation at $z\sim0$. Figure \[logu\_vs\_oh\_plot\] shows little evidence for a correlation between $U$ and O/H in the KBSS sample, even after accounting for measurement errors and the correlation between the inferred parameters for individual galaxies (top center panel of Figure \[param\_correlations\]). Our results show that there is only a 29% probability of an intrinsic anti-correlation between $U$ and O/H in our sample, consistent with no correlation. Given our sample selection, however, the range in O/H observed in KBSS galaxies does not extend to very low metallicities, which could impede our ability to recover a weak but significant anti-correlation in that regime. It is also possible that our definition of $U$, which is a free parameter that does not incorporate differences in geometry or the shape of the ionizing radiation field, obscures an intrinsic relationship between the total ionizing flux and O/H in the population—which has been found at $z\sim0$, often using somewhat different definitions for ionization parameter. Although unexpected in the context of the local universe [cf.
the steeper relations reported by, e.g., @perez-montero2014; @sanchez2015], the absence of a strong anti-correlation between $U$ and O/H, coupled with the sensitivity of high-$z$ galaxies’ nebular spectra to differences in $U$ (Figure \[logu\_calibs\]), does offer an explanation for the relatively large scatter between some strong-line ratios and O/H (Figure \[oh\_calibs\]). Even though our analysis suggests that $\langle z\rangle = 2.3$ KBSS galaxies follow the same N/O-O/H relation as observed in the local universe and that N/O can be accurately and precisely determined for high-$z$ galaxies using common strong-line diagnostics (Figure \[no\_calibs\]), measuring O/H robustly will remain challenging if O/H is not also strongly correlated with $U$ (as appears to be the case), especially given the large scatter observed in the N/O-O/H relation. In addition to the issues of selection bias and model construction we have already highlighted, there may be a physical reason to expect the behavior of $U$ and O/H in high-$z$ galaxies to differ from galaxies and \ion{H}{2} regions in the local universe. @dopita2006 assert that the underlying astrophysical cause for an observed $U$-O/H anti-correlation is the effect of increasing metallicity on the opacity of stellar atmospheres, which will (1) increase the number of ionizing photons absorbed by the stellar wind and (2) more efficiently convert luminous energy flux to mechanical energy needed to launch the wind. However, O/H does not trace enrichment in Fe (which is the primary source of the opacity in stellar atmospheres and is responsible for driving stellar winds) at high redshift in the same manner as in the majority of local galaxies, so the absence of a strong anti-correlation between $U$ and O/H is not especially surprising, if the expected correlation is in fact between $U$ and $Z_{\ast}$. Figure \[logu\_vs\_zstar\_plot\] shows the correlation between $U$ and $Z_{\ast}$, as observed in the KBSS sample.
After accounting for measurement errors, we find a 93% probability of an anti-correlation, with a most-likely linear correlation coefficient of $\rho=-0.20$ signifying a moderately weak relationship. The corresponding best-fit linear relation is shown in cyan and has the form: $$\log(U) = -3.0-0.27\times \log(Z_{\ast}/Z_{\sun}).$$ Our understanding of the intrinsic correlations (or lack thereof) between $U$ and other parameters will benefit from expanding high-$z$ samples to include a larger range in O/H and $Z_{\ast}$, if present, but it seems likely that the paradigm based on our understanding of the local universe is insufficient to fully explain the phenomena observed at high redshift. Fortunately, with samples like KBSS, it is now possible to study the high-$z$ universe independently of calibrations based on low-$z$ galaxy populations. O/Fe in High-$z$ Galaxies ------------------------- Just as we translated $\log(Z_{\textrm{neb}}/Z_{\odot})$ into $12+\log(\textrm{O/H})$, we may also directly approximate \[O/Fe\] for KBSS galaxies using the inferred values of stellar and gas-phase metallicity from the model: $[\textrm{O/Fe}]\approx\log(Z_{\textrm{neb}}/Z_{\odot})-\log(Z_{\ast}/Z_{\odot})$. This ratio roughly traces the relative contributions of Type Ia SNe and CCSNe, which contribute the majority of the Fe and O, respectively, to the overall enrichment of the ISM in galaxies. Figure \[zratio\_hist\] shows the distribution of \[O/Fe\] for the 44 KBSS galaxies with bound posteriors in $\log(Z_{\textrm{neb}}/Z_{\odot})-\log(Z_{\ast}/Z_{\odot})$, including galaxies with limits on N/O[^11]. The interquartile range in inferred \[O/Fe\] for this sample is $0.34-0.48$ with a median \[O/Fe\] $=0.42$, somewhat lower than the value we inferred for a composite spectrum of KBSS galaxies (\[O/Fe\] $\sim0.6-0.7$) and also lower than the maximal \[O/Fe\] expected for purely CCSN enrichment [\[O/Fe\] $\approx0.73$; @nomoto2006].
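The arithmetic behind this approximation can be checked directly from the median quantities quoted above. In the sketch below, the solar oxygen abundance $12+\log(\textrm{O/H})_{\odot}=8.69$ is our assumption (a standard Asplund-type scale), not a value taken from the text:

```python
import math

# Median quantities quoted in the text for the KBSS sample
log_oh_med = 8.37        # median 12+log(O/H)
zstar_med = 0.18         # median Z*/Zsun, which traces Fe/H

# Assumed solar oxygen abundance, 12+log(O/H)_sun = 8.69 (Asplund-type
# scale); this number is our assumption, not taken from the text.
LOG_OH_SUN = 8.69

log_zneb = log_oh_med - LOG_OH_SUN   # log(Z_neb/Zsun) ~ -0.32
log_zstar = math.log10(zstar_med)    # log(Z*/Zsun)    ~ -0.74

# [O/Fe] ~ log(Z_neb/Zsun) - log(Z*/Zsun)
o_fe = log_zneb - log_zstar
print(f"[O/Fe] ~ {o_fe:.2f}")        # ~0.42, matching the quoted median

# Best-fit U-Z* relation quoted above: log(U) = -3.0 - 0.27 log(Z*/Zsun)
log_u = -3.0 - 0.27 * log_zstar
print(f"log(U) ~ {log_u:.2f}")
```

Evaluating the best-fit relation at the median stellar metallicity lands between the two subsample medians of $\log(U)$, as expected.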
Still, the range in \[O/Fe\] found in the KBSS subsample shown in Figure \[zratio\_hist\] is significantly super-solar, reinforcing the need for stellar models that explore non-solar abundance patterns, particularly in O and Fe. ![The distribution of \[O/Fe\] for the 44 KBSS galaxies with bound posteriors in $\log(Z_{\textrm{neb}}/Z_{\odot})-\log(Z_{\ast}/Z_{\odot})$, including galaxies with limits on N/O. For these galaxies the median \[O/Fe\] $=0.42$, which is somewhat lower than the value inferred for a composite spectrum of KBSS galaxies. Still, these galaxies demonstrate that a significant portion of high-$z$ galaxies exhibit abundance patterns that are indicative of enrichment primarily by CCSNe, modulo uncertainties in supernova yields.[]{data-label="zratio_hist"}](cloudy_zratio_hist) Summary and Conclusions {#summary_section} ======================= We have described a photoionization model method that self-consistently accounts for the primary astrophysical drivers of high-$z$ galaxies’ nebular spectra, using observables that are commonly available for most samples. Applying this method to 148 $\langle z\rangle = 2.3$ galaxies from KBSS, we find that: - The majority of $\langle z\rangle = 2.3$ galaxies are moderately O-rich, with an interquartile range in $12+\log(\textrm{O/H})=8.29-8.56$ and median $12+\log(\textrm{O/H})=8.37$ (top left panel of Figure \[cloudy\_hists\]). The same galaxies have significantly sub-solar Fe enrichment, with an interquartile range of \[Fe/H\] $=[-0.79,-0.53]$ and median \[Fe/H\] $=-0.70$ (bottom left panel of Figure \[cloudy\_hists\]). - There are strong correlations between commonly-used line-ratio indices and both $U$—which is the parameter that galaxies’ nebular spectra respond *most* sensitively to—and N/O (Figures \[logu\_calibs\] and \[no\_calibs\]).
We have presented new calibrations for these quantities (Equations \[logu\_equation\] and \[n2o2\_equation\]), which can be used to estimate $U$ and N/O in high-$z$ galaxies or galaxies with similar stellar populations. - The nebular spectra of high-$z$ galaxies resemble more closely those of local \ion{H}{2} regions than the integrated-light spectra of galaxies at $z\sim0$ (Figure \[n2s2\_vs\_n2o2\]). This suggests that diffuse ionized gas may play a less important role in determining the observed rest-optical spectra of high-$z$ galaxies than is thought to be the case for local galaxies [e.g., @sanders2017]. - Of the strong-line indices used most commonly for estimating O/H, R23 shows the strongest correlation with gas-phase O/H in high-$z$ galaxies (Figure \[oh\_calibs\]). However, $\sim84$% of high-$z$ galaxies in the KBSS sample have values of R23 near the “turnaround”, making R23 a useful diagnostic only for O-rich systems (Equation \[r23\_equation\]). A calibration based on both R23 and O32 does not perform significantly better than R23 alone, except at $12+\log(\textrm{O/H})\gtrsim8.6$ (Figure \[o32r23\_calib\]). - The physical parameters inferred from galaxies’ nebular spectra are often highly correlated with one another (Figure \[param\_correlations\], Table \[covar\_table\]), complicating studies of the intrinsic relationships between the physical conditions in galaxies. It is therefore necessary to account for both measurement errors and covariance between measurement errors in determining the true correlation between parameters, as we have done in Section \[correlation\_section\]. - High-$z$ galaxies span a similar range in N/O at fixed O/H as observed in a sample of local \ion{H}{2} regions (Figure \[no\_vs\_oh\_plot\]). For both samples, N/O varies by up to a factor of $2-3$ at a given O/H, particularly near the transition between the primary plateau and secondary N enrichment.
- We do not recover a strong anti-correlation between $U$ and either O/H or $Z_{\ast}$, contrary to observations of galaxies in the local universe (Figures \[logu\_vs\_oh\_plot\] and \[logu\_vs\_zstar\_plot\]). Practically, the lack of a strong correlation between these quantities makes it more difficult to infer O/H from high-$z$ galaxies’ nebular spectra, which are much more sensitive to changes in $U$. - The nebular spectra of most high-$z$ galaxies are consistent with super-solar O/Fe (Figure \[zratio\_hist\]), indicative of ISM that has been enriched primarily by CCSNe. Given mounting observational evidence for substantially non-solar abundance patterns in elements like O and Fe at high redshift, it is important to compute stellar models that better reflect the enrichment patterns in the early universe—or account for this effect in photoionization models by allowing stellar and nebular “metallicity" to vary independently of one another, rather than simply scaling solar relative abundances. The general conclusion of this work is that the mapping of abundance patterns to observable properties is significantly different in $z\simeq2-2.7$ galaxies than in the samples of $z\sim0$ galaxies which have heretofore driven the development of diagnostics for inferring galaxy physical conditions. In order to fully characterize galaxies that formed during this critical time in the universe’s history—and, in turn, provide constraints on theoretical models of galaxy formation and evolution—it is imperative to avoid making assumptions based on galaxies with significantly different enrichment histories. Although it is not realistic that every observation of individual high-$z$ galaxies will allow for an independent analysis as outlined in this paper, diagnostics based on galaxies with similar stellar populations and star-formation histories should be preferred over locally-calibrated methods. 
To this end, we have provided guidance for inferring $U$, N/O, and O/H in typical $z\simeq2-2.7$ star-forming galaxies, including re-calibrations of common strong-line diagnostics based on the model results for KBSS galaxies. Finally, we note that the results of applying our photoionization model method also have implications for characterizing other galaxy scaling relations at high redshift, including the mass-metallicity relation (MZR). This will be the focus of forthcoming work. The data presented in this paper were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. This work has been supported in part by a US National Science Foundation (NSF) Graduate Research Fellowship (ALS) and by the NSF through grants AST-0908805 and AST-1313472 (CCS and ALS). Finally, the authors wish to recognize and acknowledge the significant cultural role and reverence that the summit of Maunakea has within the indigenous Hawaiian community. We are privileged to have the opportunity to conduct observations from this mountain. [^1]: O32=$\log(\textrm{[\ion{O}{3}]}\lambda\lambda4960,5008/\textrm{[\ion{O}{2}]}\lambda\lambda3727,3729)$ and R23=$\log[(\textrm{[\ion{O}{3}]}\lambda\lambda4960,5008+\textrm{[\ion{O}{2}]}\lambda\lambda3727,3729)/\textrm{H}\beta]$. [^2]: All fields have imaging in $U_n$, $G$, $\mathcal{R}$, $J$, and $K_s$ from the ground, as well as data from *Hubble*/WFC3-IR F140W and *Spitzer*/IRAC at $3.6\mu$m and $4.5\mu$m. 10 fields have at least one deep pointing obtained using *Hubble*/WFC3-IR F160W, and 8 fields have NIR imaging in $J1$, $J2$, $J3$, $H1$, and $H2$ collected using Magellan-FourStar [@persson2013].
[^3]: The MOSDEF sample used by @shivaei2015 was selected in a different manner from KBSS, which may account for this difference. [^4]: In ionized gas, $n_H\approx n_e$. [^5]: Assuming a constant value of N/O in Cloudy does not have a significant impact on the structure of the nebula. [^6]: In two cases (Q1549-BX197 and Q2343-BX505), $\hat{R}$ remains above 1.05 even after a large number of steps. This can occur when there are multiple, well-separated peaks in the posterior distribution. Because both peaks were well-sampled despite large values of $\hat{R}$ for the total chain, we include the galaxies in the final sample. [^7]: Doubly-ionized S has a low-lying transition at 9533Å, but this is only observable from the ground for galaxies with $z\lesssim1.5$. [^8]: <http://purl.org/mike/mpfitexy> [^9]: The latus rectum of a conic section, here a parabola, is the line segment that passes through the focus and is perpendicular to the major axis. The length of the latus rectum describes the “width” of the parabola. [^10]: AGN at similar redshifts can have higher N2 values, which we reported for KBSS. @coil2015 discusses the nebular properties of a more diverse sample of AGN from the MOSDEF survey, which extend to even larger N2. [^11]: This sample is smaller than the sample shown in Figure \[cloudy\_hists\] because many galaxies with bound posteriors in the four original model parameters have multiple peaks or limits in the combined $\log(Z_{\textrm{neb}}/Z_{\odot})-\log(Z_{\ast}/Z_{\odot})$ parameter.
--- abstract: 'The group of rigid motions is considered to guide the search for a natural system of space-time coordinates in General Relativity. This search leads us to a natural extension of the space-times that support Painlevé–Gullstrand synchronization. As an interesting example, here we describe a system of rigid coordinates for the cross mode of gravitational linear plane waves.' author: - 'Xavier Jaén[^1] and Alfred Molina[^2]' title: 'Rigid covariance as a natural extension of Painlevé–Gullstrand space-times: gravitational waves' --- Introduction ============ At the beginning of his search for a theory of general relativity, Einstein’s first steps were to search for a formulation of Special Relativity for non-inertial observers, using the Equivalence Principle to place inertial and gravitational forces on the same footing [@Einstein]. The difficulties in carrying out that program led to Einstein introducing general covariance as a new principle and using it as a guide to clarify the way forward. Critics of such a strategy emerged, such as Kretschmann [@Kretschmann] and later Fock [@Fock], who in general objected that since any theory can support a generally covariant formulation, the physical meaning of the principle was highly dubious [@Antoci]. Essentially, the situation is that General Relativity lacks a dynamical invariance group equivalent to the group of rigid motions which characterizes physical reference systems in Newtonian Mechanics. Recently, some authors have suggested that it may be advantageous to place some restriction on general covariance [@Ellis]. Such a restriction sometimes appears under the name of generalized isometries [@Bel], [@Llosa]. The concept of a rigid body and rigid motion arose naturally as an idealization of the solid objects that surround us.
In fact, from the point of view of both experience and physical theories, a perfectly rigid body cannot exist at the non-relativistic level, because it would imply the existence of infinite elastic moduli. At the relativistic level, a new argument is added because the existence of a perfectly rigid body would imply instantaneous signal propagation between particles. However, leaving aside its existence as a real substance, it is possible to conceive of rigid motion through some coherent construction; moreover, we are not necessarily interested in the existence of the substance that permits an implementation of the concept of rigid motion. Here, our interest is to study the compatibility between classical rigid motions and General Relativity. Some authors have argued that if we are able to implement the concept of rigid motion in the relativistic domain, we will be able to develop relativity to the same degree as we have developed Newtonian Mechanics; for instance, we will be able to develop a clear relativistic theory of elasticity. The Painlevé–Gullstrand coordinate system [@Painleve; @Gullstrand] is used to extend the Schwarzschild solution through its event horizon. Written in this coordinate system, the Schwarzschild metric is regular inside the horizon and is singular only at $r=0$. Another interesting property is its spatial geometry: the surfaces $t=\rm{constant}$ are flat. This is what is known as Painlevé–Gullstrand synchronization. Such synchronization is interesting in the context of gravitational collapse due to the fact that we can go beyond the Schwarzschild radius. This kind of synchronization is increasingly present in the literature; for example, in the so-called analogue models of gravity [@Barcelo] or in relativistic hydrodynamics [@Ellis]. We will call *Painlevé–Gullstrand space-times* those space-times that support a Painlevé–Gullstrand coordinate system.
In a series of previous papers [@Jaen1; @Jaen2; @Jaen3], we established a close relationship between Painlevé–Gullstrand space-times and rigid motions. We showed how a significant set of space-times admit, as generalized isometries, the group of rigid motions. That set coincides with the set of Painlevé–Gullstrand space-times. The rigid covariant formulation that we mention was built up by paying attention to some little-known properties of Newtonian Mechanics. This permitted us to pin down the physical meaning of the various potentials that arise. We obtained a formulation of a set of space-times defined via five potentials which obey rigid covariant equations. This formulation does not cover all the space-times of General Relativity. Some of the space-times that are of particular interest to us, such as the Kerr space-time and the space-time that corresponds to gravitational waves, remain outside this formulation. Our aim in this paper is to establish whether rigid covariance can also support gravitational waves [@Misner]. To this end, we introduce a sixth potential while trying to maintain all the properties that are characteristic of the rigid covariant formulation developed so far. In particular, we are looking for a covariant formulation under a group of transformations that allows us to characterize the space-time metric by means of the six potentials. The group of rigid motions is a reasonable candidate to play this role, as it already does in Newtonian Mechanics, and there is no *a priori* reason not to consider it. We believe that studying the possibility of formulating General Relativity, or a significant portion thereof, in a way that is covariant under the group of rigid motions, using essentially six potentials, is work that is important in itself, beyond the interpretations that may arise. We prove that this is indeed possible and, as a result, we obtain a rigid system of coordinates for gravitational linear plane waves.
For the sake of simplicity, in this paper we fix the value of the cosmological potential $H$ [@Jaen2] to $H = 1$. Our work does not depend at all on this condition and it can easily be implemented if $H \neq 1 $. So, in section §\[sec\_2\], we show how the usual Painlevé–Gullstrand space-times can be understood as rigid covariant space-times. We then review their main properties from this new perspective. In section §\[sec\_3\], we study an extension of the Painlevé–Gullstrand space-times which maintains the rigid covariance together with most of the properties studied in the previous section. Then, section §\[sec\_4\] is devoted to finding a rigid system of coordinates for a given space-time in arbitrary coordinates. Finally, in section §\[sec\_5\], we apply the equations derived in the previous sections to finding a rigid system of coordinates for a gravitational linear plane wave. The rigid covariant formulation of Painlevé–Gullstrand space-times {#sec_2} ================================================================== We define Painlevé–Gullstrand space-times as those that admit space-time coordinates (known by the same name) in such a way that the metric allows the possibility of flat space slicing. The metric of a Painlevé–Gullstrand space-time can always be written using four potentials, $\Phi,K_{i}$, in the form:[^3] $$\label{eq_1_metric} ds^{2}=-\Phi^{2}d\lambda ^{2}+2K_{i}dx^{i}d\lambda +\delta _{ij}dx^{i}dx^{j}$$ We define the potentials $\tau$ and ${\vec{v}}$ according to $$\begin{aligned} \label{eq_2} && \Phi^{2}=-\delta_{ij}v^{i}v^{j}+c^{2}\left( \tau_{,\lambda }^2-(\tau_{,i}v^{i})^{2} \right)\nonumber\\ && K_{i}=-\delta_{ij} v^{j}-c^{2}\left(\tau _{,\lambda }+\tau_{,j}v^{j}\right)\tau _{,i}\end{aligned}$$ One can see [@Jaen3] that $\tau$ is the action, i.e. any solution of the Hamilton–Jacobi equation associated with the metric (\[eq\_1\_metric\]).
That is, $\tau$ is any solution of: $$\label{eq_4} \partial _{\lambda }\tau+H(\vec{x},\vec{p}=\vec{\nabla }\tau,\lambda )=0$$ with $$\label{eq_3} H(\vec{x},\vec{p},\lambda )=-{\vec{K}}\cdot \vec{p}-\sqrt{[1+c^{2}{\vec{p}}^{2}]\left[\left(\frac{K}{c}\right)^{2}+\frac{\Phi^{2}}{c^{2}}\right]}$$ For each solution, $\tau$, of (\[eq\_4\]), the corresponding potential ${\vec{v}}$ is: $$\label{eq_5} {\vec{v}}=\frac{\partial H}{\partial \vec{p}}(\vec{x},\vec{p}=\vec{\nabla }\tau,\lambda )$$ In terms of the potentials $\tau $ and ${\vec{v}}$, the metric (\[eq\_1\_metric\]) can be written as: $$\label{eq_6} \mathit{ds}^{2}=-c^{2}d\tau^{2}+c^{2}[\vec{\nabla }\tau\cdot (d\vec{x}-{\vec{v}}d\lambda )]^{2}+(d\vec{x}-{\vec{v}}d\lambda )^{2}$$ which has the following properties: 1. Newtonian limit: If we have that $\tau=\lambda +\frac{f(\vec{x},\lambda )}{c^{2}}$, the Newtonian non-relativistic limit can be obtained as $c\to \infty $ without any consideration regarding weak fields [@Jaen1]. 2. Rigid motion covariance or *rigid motion generalized isometry*[@Bel; @Llosa]: This is a property of space-time that is not apparent from the perspective of a metric in the form of (\[eq\_1\_metric\]), but instead in the form (\[eq\_6\]) it becomes quite natural. Under rigid motion transformations $$\begin{aligned} \label{eq_7} && \lambda =\lambda' \nonumber \\ && \vec{x}\equiv x^{i}\vec{e}_{i} = \vec{X}(\lambda )+\vec{x}{}'=X^{i}(\lambda )\vec{e}_{i}+x{}'^{i}\vec{e}_{i}{}'= \nonumber \\ && \hspace*{1em} \left(X^{i}(\lambda )+x{}'^{k}R_{k}^{i}(\lambda )\right)\vec{e}_{i},\end{aligned}$$ where $R_{k}^{i}(\lambda )$ is an orthogonal matrix, (\[eq\_6\]) is shape invariant.
To be more specific, the metric becomes: $$\mathit{ds}^{2}=-c^{2}d\tau'^{2}+c^{2}[\vec{\nabla }\tau' \cdot (d\vec{x}{}'-\vec{v}{}'d\lambda )]^{2}+(d{\vec{x}}{}'-{\vec{v}}{}'d\lambda )^{2}$$ with $\tau'({\vec{x}}{}',\lambda )=\tau(\vec{x},\lambda )$ and $\vec{v}{}'(\vec{x}{}',\lambda)=\vec{v}(\vec{x},\lambda )-\vec{v}_{0}(\vec{x},\lambda )$; and where ${\vec{v}}_{0}(\vec{x},\lambda )$ is the field associated with the rigid trajectories (\[eq\_7\]), i.e.: $${\vec{v}}_{0}(\vec{x},\lambda )=\dot{{\vec{X}}}(\lambda )+\vec{\Omega }(\lambda )\times [\vec{x}-\vec{X}(\lambda )],$$ where $\vec{\Omega }(\lambda)=\frac{1}{2}\sum _{j}{R_{j}}^{k}(\lambda ){{\dot{R}}_{j}}^{m}(\lambda){\vec{e}}_{k}\times {\vec{e}}_{m}$ and $\times$ stands for the usual cross product. We will say that (\[eq\_6\]) is the manifestly rigid covariant form of the metric. 3. Physical meaning of the potentials $\tau$ and $\vec{v}$: By construction, from expressions (\[eq\_4\]) and (\[eq\_5\]) and from the metric (\[eq\_6\]), we see that the field $U=\partial_{\lambda}+\vec{v} \cdot \partial_{\vec{x}}$ is geodesic with proper time $\tau$ [@Jaen3]. 4. Gauge invariance: We saw this invariance earlier when we defined the potentials $\tau$ and ${\vec{v}}$ in (\[eq\_2\]). In the present context, if we start with the potentials $\tau$ and ${\vec{v}}$, which using (\[eq\_2\]) give $\Phi $ and $K_{i}$, then any solution $\tau ^{\text{*}}$ and ${\vec{v}}^{\text{*}}$ of Equations (\[eq\_4\]) and (\[eq\_5\]) will give, again using (\[eq\_2\]), the same potentials: $\Phi$ and $K_{i}$. This gauge invariance has a clear meaning because of the physical meaning of the potentials $\tau$ and ${\vec{v}}$. 5. Painlevé–Gullstrand synchronization: The slicing $\lambda =\text{constant}$ is flat; i.e.: $ds^{2}|_{d\lambda =0}=d{\vec{x}}^{2}$. By taking the limit $c\to \infty $, we can see how the meaning of the potential $\vec{v}$, the gauge invariance and the rigid covariance persist at a Newtonian level.
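The shape invariance claimed in property 2 can be checked directly. Differentiating the rigid motion (\[eq\_7\]), and using $\vec{x}-\vec{X}(\lambda )=x{}'^{k}R_{k}^{i}(\lambda )\vec{e}_{i}$, one finds $$dx^{i}=\dot{X}^{i}d\lambda +\dot{R}_{k}^{i}x{}'^{k}d\lambda +R_{k}^{i}dx{}'^{k}=v_{0}^{i}d\lambda +R_{k}^{i}dx{}'^{k},$$ and hence $$dx^{i}-v^{i}d\lambda =R_{k}^{i}\left(dx{}'^{k}-v{}'^{k}d\lambda \right),\qquad v{}'^{k}R_{k}^{i}=v^{i}-v_{0}^{i}.$$ Since $R_{k}^{i}$ is orthogonal, the contraction $\delta _{ij}(dx^{i}-v^{i}d\lambda )(dx^{j}-v^{j}d\lambda )$ is unchanged, and with $\tau'({\vec{x}}{}',\lambda )=\tau(\vec{x},\lambda )$ the term $\vec{\nabla }\tau\cdot (d\vec{x}-{\vec{v}}d\lambda )$ transforms as a scalar, which is precisely the shape invariance of (\[eq\_6\]).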
As shown in [@Jaen1]–[@Jaen3], regardless of General Relativity, a Newtonian theory of gravitation can be formulated starting from a potential, ${\vec{v}}$, in such a way that it unifies the inertial and gravitational fields in a set of equations that are shape invariant under rigid motion transformations. In this theory, the integral trajectories of ${\vec{v}}$ are solutions of the equation of motion for test particles. In fact, first we found the properties **2–4** at a Newtonian level and later we used them as a guide to define the metric (\[eq\_6\]), as is explained in [@Jaen3]. The rigid motion covariant form of the metric {#sec_3} ============================================= In this section we introduce a new potential to generalize the metric (\[eq\_6\]). It should be borne in mind that, for simplicity, we have set the value of $H = 1$. The new potential that we aim to introduce has nothing to do with the potential $H$. The price we pay is that we lose the flat space slicing property. What we will see is that a new potential can be introduced while maintaining properties **1–4**. We start by considering the metric: $$ds^{2}=-c^{2}d{\tau}^{2}+c^{2}(\tau _{,i}(dx^{i}-v^{i}d\lambda ))^{2}+\gamma _{ij}(dx^{i}-{v}^{i}d\lambda )(dx^{j}-{v}^{j}d\lambda )$$ where $\gamma _{ij}$ is not specified and may functionally depend on the potentials $\tau,{\vec{v}}$ and on a new potential denoted by $\sigma$. Following the same steps as in [@Jaen3], we can see that if $\gamma_{ij}$ does not depend on $\tau$ and ${\vec{v}}$, and if furthermore $\sigma$ is gauge invariant, then $\gamma_{ij}$ will also be gauge invariant. Then Equations (\[eq\_1\_metric\]–\[eq\_5\]) will only be modified by the fact that, instead of using $\delta_{ij}$, we use $\gamma_{ij}$. In order to maintain property [**2**]{}, we will require that $\gamma_{ij}( dx^{i}-v^{i}d\lambda )(dx^{j}-{v}^{j}d\lambda )$ be rigid covariant.
Under these conditions, we have a couple of candidates: $\gamma_{ij}\equiv\delta _{ij}+\epsilon \sigma _{,i}\, \sigma_{,j}$ where $\epsilon =\pm 1$. In a coordinate system $\{\lambda , x^{i}\}$, which we call *rigid Euclidean*, the family of metrics: $$\label{eq_11} ds^{2}=-\Phi^{2}d\lambda^{2}+2K_{i}dx^{i}d\lambda +\left(\delta_{ij}+\epsilon\sigma_{,i}\sigma_{,j}\right)dx^{i}dx^{j}$$ have properties **1–4** with the following modifications: **1.** If $\sigma =\frac{1}{c}s$, the Newtonian non-relativistic limit can still be obtained as $c\to \infty $ without any considerations regarding weak fields. **5.** The space slicing $\lambda =\text{constant}$ becomes a minimum modification of the flat case: $ds^{2}|_{d\lambda=0}=\bar{d}{\vec{x}}^{2}+\epsilon(\bar{d}\sigma)^{2}.$ The expression of the metric (\[eq\_11\]) is the basis of rigid General Relativity. $\epsilon =\pm 1$ with the sign to be determined. The five potentials of rigid General Relativity are: $\Phi,K_{i}$ and $\sigma$ (six, if we also consider the cosmological potential $H$). We can express $\Phi, K_{i}$ in terms of the potentials $\tau,v^{i}$ and $\sigma$. We will have a gauge freedom in the choice of $\tau,v^{i}$. With $\gamma_{ij}\equiv \delta_{ij}+\epsilon \sigma _{,i}\sigma_{,j}$, the relationship between the potentials $\Phi ,K_{i}$ and $\tau,v^{i}$ and $\sigma$ is now: $$\begin{aligned} &&\Phi^{2}=-\gamma_{ij}v^{i}v^{j}+c^{2}\left(\tau _{,\lambda }^{2}-(\tau _{,i}v^{i})^{2}\right) \nonumber \\ && K_{i}=-\gamma _{ij}v^{j}-c^{2}\left(\tau _{,\lambda }+\tau _{,j}v^{j}\right)\tau _{,i}\end{aligned}$$ i.e., the same expression as in (\[eq\_2\]) but using $\gamma_{ij}$ instead of $\delta_{ij}$.
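Although not used explicitly in what follows, the candidate $\gamma_{ij}=\delta_{ij}+\epsilon\sigma_{,i}\sigma_{,j}$ is a rank-one update of the Euclidean metric, so its determinant and inverse have closed forms (the matrix determinant lemma and the Sherman–Morrison formula). A quick symbolic check, written as a sketch in Python with sympy (the symbol names are ours):

```python
import sympy as sp

s1, s2, s3, eps = sp.symbols('sigma_1 sigma_2 sigma_3 epsilon')
grad = sp.Matrix([s1, s2, s3])            # the components sigma_{,i}
gamma = sp.eye(3) + eps * grad * grad.T   # gamma_ij = delta_ij + eps sigma_{,i} sigma_{,j}

norm2 = (grad.T * grad)[0]                # |grad sigma|^2

# Sherman-Morrison inverse of a rank-one update of the identity
gamma_inv = sp.eye(3) - eps * grad * grad.T / (1 + eps * norm2)

# Verify the inverse and the matrix determinant lemma det = 1 + eps |grad sigma|^2
assert sp.simplify(gamma * gamma_inv - sp.eye(3)) == sp.zeros(3, 3)
assert sp.simplify(gamma.det() - (1 + eps * norm2)) == 0
```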
The metric (\[eq\_11\]) in terms of these potentials becomes: $$\label{eq_13} ds^{2}=-c^{2}d\tau^{2}+c^{2}(\tau _{,i}(dx^{i}-v^{i}d\lambda ))^{2}+\left(\delta _{ij}+\epsilon \sigma _{,i}\sigma _{,j}\right)(dx^{i}-v^{i}d\lambda )(dx^{j}-v^{j}d\lambda )$$ which is the manifestly rigid covariant form of General Relativity. Note that if we also consider the cosmological potential, $H$, the only change we need to implement in (\[eq\_13\]), and throughout the entire paper, is to replace the flat Euclidean metric $\delta_{ij}$ by $H^{-2}\delta_{ij}$. The properties **1–4** studied in §\[sec\_2\] will be maintained; and in this case, property **5**, the slicing $\lambda =\text{constant}$, will be $ds^{2}|_{d\lambda=0}=H^{-2} \bar{d}{\vec{x}}^{2}+\epsilon(\bar{d}\sigma)^{2}$. Moving from general covariance to rigid covariance {#sec_4} ================================================== Given a metric written in unspecified space-time coordinates $\{T,X^{i}\}$: $$\label{eq_14} ds^{2}=-\Phi ^{2}dT^{2}+2K_{i}dX^{i}dT+\gamma _{ij}dX^{i}dX^{j}$$ i.e., given the ten known coefficients $\{\Phi ,K_{i},\gamma _{ij}\}$, we aim to find the same metric but written in a rigid Euclidean coordinate system $\{\lambda ,x^{i}\}$. The form (\[eq\_14\]) of the metric is generally covariant: it contains ten potentials. We want to write the same metric in the rigid covariant form. First we perform a time transformation $T=T(\lambda ,X^{i})$ so that (\[eq\_14\]) becomes: $$\label{eq_15} ds^{2}=-\Phi ^{2}T_{,\lambda }^{2}d\lambda ^{2}+2T_{,\lambda }[K_{i}-\Phi ^{2}T_{,i}]dX^{i}d\lambda +[\gamma _{ij}+2K_{(i}T_{,j)}-\Phi ^{2}T_{,i}T_{,j}]dX^{i}dX^{j}$$ We want the space components of the metric (\[eq\_15\]) to take the form: $$\label{eq_16} \gamma _{ij}+2K_{(i}T_{,j)}-\Phi ^{2}T_{,i}T_{,j}=\Delta _{ij}+\epsilon \sigma _{,i}\sigma _{,j}$$ where $\Delta _{ij}$ must be a three-dimensional flat metric.
Solving for $\Delta _{ij}$: $$\label{eq_17} \Delta _{ij}=\gamma _{ij}+2K_{(i}T_{,j)}-\Phi ^{2}T_{,i}T_{,j}-\epsilon \sigma _{,i}\sigma _{,j}$$ To determine $T(\lambda ,X^{i})$ and $\sigma(\lambda ,X^{i})$, we require that $\Delta _{ij}$ be flat. Regardless of the nature of the $X^{i}$ coordinates, this condition can be expressed as: $$\label{eq_18} Ricci_{(3)}(\Delta _{ij})=0$$ Since the generalization to $H\neq 1$ is trivial (following the comments at the end of section §\[sec\_3\], it amounts to replacing $\delta_{ij}\longrightarrow H^{-2}\delta_{ij}$ in $\Delta _{ij}$), we can assert that if the corresponding Equation (\[eq\_18\]), with the unknowns $T(\lambda ,X^{i})$, $\sigma(\lambda ,X^{i})$ and $H(\lambda ,X^{i})$, has a solution for some functions $T$, $\sigma$ and $H$, then rigid covariance using the six essential potentials will be locally equivalent to general covariance. Once we have found $T$ and $\sigma$ from (\[eq\_18\]), we can find a rigid Euclidean coordinate system $x^{i}$. Generally, $\Delta _{ij}$, despite being flat, will not have the Euclidean (canonical) form $\delta _{ij}$. Therefore, we can perform a change $X^{i}=X^{i}(\lambda ,x^{k})$, so: $$\label{eq_19} \Delta _{mn}X_{,i}^{m}X_{,j}^{n}=\delta _{ij}$$ Note that when performing the change $X^{i}=X^{i}(\lambda ,x^{k})$ on $\Delta _{ij}$, we only change the space coordinates $X^{i}$. The time $\lambda $ in the expression $X^{i}=X^{i}(\lambda ,x^{k})$ is only a parameter. The change is possible if (\[eq\_18\]) can be solved for $T$ and $\sigma$. Finally, the change $\{\vec{X},T\}\to \{\vec{x},\lambda\}$, given by the composition of the changes that we have found, $\left\{X^{i}=X^{i}(\lambda ,x^{k}),\, T=T(\lambda ,\ X^{i}(\lambda,x^{k}))\right\}$, will transform the metric (\[eq\_14\]), written in the coordinates $\{\vec{X},T\}$, into the form (\[eq\_11\]) in rigid Euclidean coordinates $\{\vec{x},\lambda\}$.
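In practice, the flatness condition (\[eq\_18\]) can be checked with a computer algebra system. The following helper is our own sketch (written in Python with sympy; the function name and conventions are ours, not part of the formulation): it computes the Ricci tensor of an arbitrary 3-metric and, as a sanity check, verifies that flat 3-space in cylindrical coordinates satisfies $Ricci_{(3)}=0$:

```python
import sympy as sp

def ricci_3d(g, coords):
    """Ricci tensor of a 3-metric g (3x3 sympy Matrix) in the given coordinates."""
    n = 3
    ginv = g.inv()
    # Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
    Gam = [[[sum(ginv[a, d] * (sp.diff(g[d, b], coords[c])
                               + sp.diff(g[d, c], coords[b])
                               - sp.diff(g[b, c], coords[d])) / 2
                 for d in range(n))
             for c in range(n)]
            for b in range(n)]
           for a in range(n)]
    # R_{bc} = Gamma^a_{bc,a} - Gamma^a_{ba,c}
    #          + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
    ric = sp.zeros(n, n)
    for b in range(n):
        for c in range(n):
            ric[b, c] = sp.simplify(
                sum(sp.diff(Gam[a][b][c], coords[a]) for a in range(n))
                - sum(sp.diff(Gam[a][b][a], coords[c]) for a in range(n))
                + sum(Gam[a][a][d] * Gam[d][b][c]
                      for a in range(n) for d in range(n))
                - sum(Gam[a][c][d] * Gam[d][b][a]
                      for a in range(n) for d in range(n)))
    return ric

# Sanity check: flat 3-space in cylindrical coordinates must give Ricci = 0
r, th, z = sp.symbols('r theta z', positive=True)
g_flat = sp.diag(1, r**2, 1)
print(ricci_3d(g_flat, (r, th, z)))  # -> zero matrix
```

Applying the same helper to a candidate $\Delta_{ij}$ containing the unknowns $T$ and $\sigma$ turns (\[eq\_18\]) into a set of partial differential equations for those unknowns.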
Gravitational linear plane waves {#sec_5} ================================ In this section, we want to find a rigid covariant form for a gravitational linear plane wave. In the coordinates $\{\vec{X}=(X,Y,Z),T\}$, consider the cross mode of a linear plane wave [@Misner] $h_{\times}=h$ (we take $c=1$) $$\label{eq_20} ds^{2}=-dT^{2}+{d\vec{X}}^{2}+2h\;\varepsilon ^{2}dX\,dY$$ with $h=h(Z-T)$. The metric (\[eq\_20\]) is everywhere a solution of $Ricci=0$ with $Riemann\neq 0$ up to order $\varepsilon ^{2}$. In what follows, we will work up to order $\varepsilon ^{2}$. We note that the coordinate system $\{\vec{X},T\}$ is a system of adapted geodesic coordinates; i.e., the lines of constant $\vec{X}$ are geodesics. Comparing (\[eq\_20\]) and (\[eq\_14\]), we have: $$\label{eq_21} \Phi ^{2}=1\ ;\ K_{i}=0\ ;\ \gamma ={d\vec{X}}^{2}+2h\varepsilon ^{2}\,dX\,dY$$ The expression (\[eq\_17\]) for the metric $\Delta _{ij}$ is: $$\label{eq_22} \Delta _{ij}=\gamma _{ij}-T_{,i}T_{,j}-\epsilon \sigma _{,i}\sigma _{,j}$$ Performing the time transformation $T=\lambda +\varepsilon(XT_{(x)}+YT_{(y)})$ and choosing $\sigma=\varepsilon (X\sigma _{(x)}+Y\sigma _{(y)})$, where $\{T_{(x)},T_{(y)},\sigma _{(x)},\sigma _{(y)}\}$ are functions depending on $Z-\lambda $, the condition $Ricci_{(3)}(\Delta _{ij})=0$ up to order $\varepsilon ^{2}$ demands $\epsilon =-1$ and:[^4] $$\label{eq_23} \sigma _{(x)}'{}^{2}=T'_{(x)}{}^{2}\ ;\ \sigma _{(y)}'^{2}=T'_{(y)}{}^{2}\ ;\ 2\sigma _{(x)}'\sigma _{(y)}'-2T_{(x)}'T_{(y)}'+h''=0$$ If we take: $$\label{eq_24} \sigma'_{(x)}=T'_{(x)}\ ;\ \sigma' _{(y)}=-T'_{(y)}$$ which fulfil the first two conditions of (\[eq\_23\]), the third condition of (\[eq\_23\]) becomes: $$\label{eq_25} 4T'_{(x)}T'_{(y)}=h''$$ which can always be fulfilled for any function $h$. To complete the work, we must find a system of rigid Euclidean coordinates. We can solve $\Delta _{ij}dX^{i}dX^{j}=\delta _{ij}dx^{i}dx^{j}$ for a coordinate change $\vec{X}=\{X,Y,Z\}\to \vec{x}=\{x,y,z\}$.
This change depends on $\lambda $, which in the space $\vec{X}$ acts as a parameter. Combining the two transformations, $\{X,Y,Z,T\}\to \{x,y,z,\lambda \}$, up to order $\varepsilon ^{2}$: $$\label{eq_26} \begin{matrix} X=x+\varepsilon^{2}y\left[T_{(x)}T_{(y)}-{\displaystyle \frac{h}{2}}+{\displaystyle \int}(T_{(x)}T'_{(y)}-T_{(y)}T'_{(x)})dz\right] \hfill\null\\[2ex] Y=y+\varepsilon ^{2}x\left[T_{(x)}T_{(y)}-{\displaystyle\frac{h}{2}}-{\displaystyle\int} (T_{(x)}T'_{(y)}-T_{(y)}T'_{(x)})dz\right]\hfill\null\\[2ex] Z=z+{\displaystyle\frac{\varepsilon ^{2}}{2}}xyh'\hfill\null \\[2ex] T=\lambda+\varepsilon \left(xT_{(x)}+yT_{(y)}\right)\hfill\null \end{matrix}$$ where $T_{(x)},T_{(y)}$ and $h$ can be considered functions of $z-\lambda $, and we should recall that it is necessary to fulfil $4T'_{(x)}T'_{(y)}=h''$. If we perform the change (\[eq\_26\]) on the metric (\[eq\_20\]), we obtain a rigid covariant expression for this metric which agrees with (\[eq\_11\]). The monochromatic linear plane wave ----------------------------------- A particularly interesting case is that of the cross mode of a monochromatic linear plane wave, with frequency $\omega $. This corresponds to considering (\[eq\_20\]) with: $$\label{eq_28} \varepsilon ^{2}\;h(Z,T)=A^{2}\sin (\omega (Z-T))$$ i.e. $\varepsilon =A$ and $h=\sin [\omega (Z-\lambda )]$.
As a solution of (\[eq\_25\]), we choose: $$\label{eq_29} T_{(x)}=\sqrt{2}\sin [\frac{\omega }{2}(Z-\lambda )]\ ;\ T_{(y)}=\sqrt{2}\cos [\frac{\omega }{2}(Z-\lambda )]$$ Using (\[eq\_26\]) and always working up to order $A^{2}$, we obtain the coordinate change: $$\begin{matrix} X=x+yA^{2}\left\{{\displaystyle\frac{1}{2}}\sin [\omega (z-\lambda )]-\omega(z-\lambda )+f_{x}(\lambda )\right\}\hfill\null \\[2ex] Y=y+xA^{2}\left\{{\displaystyle\frac{1}{2}}\sin [\omega (z-\lambda )]+\omega (z-\lambda )+f_{y}(\lambda )\right\}\hfill\null\\[2ex] Z=z+xy{\displaystyle\frac{1}{2}}\omega A^{2}\cos [\omega (z-\lambda )]\hfill\null\\[2ex] T=\lambda +\sqrt{2}A\left\{x\sin\left[{\displaystyle\frac{\omega }{2}}(z-\lambda )\right]+y\cos\left[{\displaystyle\frac{\omega }{2}}(z-\lambda )\right]\right\}\hfill\null \end{matrix}$$ where we have included the two arbitrary functions of $\lambda $, $f_{x}(\lambda )$ and $f_{y}(\lambda )$, as a consequence of the pair of integrals on $z$ appearing in (\[eq\_26\]). The inverse change is: $$\label{eq_30} \begin{matrix} x=X-YA^{2}\left\{ {\displaystyle\frac{1}{2}}\sin [\omega (Z-\lambda )]-\omega(Z-\lambda )+f_{x}(\lambda )\right\}\hfill\null \\[2ex] y=Y-XA^{2}\left\{ {\displaystyle\frac{1}{2}}\sin [\omega (Z-\lambda )]+\omega (Z-\lambda )+f_{y}(\lambda )\right\}\hfill\null\\[2ex] z=Z-XY{\displaystyle\frac{1}{2}}\omega A^{2}\cos [\omega (Z-\lambda )]\hfill\null \\[2ex] T=\lambda +\sqrt{2}A\left\{X\sin\left[{\displaystyle\frac{\omega }{2}}(Z-\lambda)\right]+Y\cos\left[{\displaystyle\frac{\omega }{2}}(Z-\lambda )\right]\right\}\hfill\null \end{matrix}$$ Since $\vec{X}$ are adapted geodesic coordinates, we can interpret (\[eq\_30\]) as geodesic trajectories $\vec{x}(\lambda ;\vec{X})$ with proper time $T(\lambda ;\vec{X})$ and $\vec{X}$ playing the role of the initial conditions.
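As a quick numerical consistency check (a sketch, not from the paper; the frequency value is an arbitrary choice), one can verify that the choice (\[eq\_29\]) indeed satisfies condition (\[eq\_25\]) for $h=\sin(\omega u)$ with $u=Z-\lambda$:

```python
import numpy as np

omega = 1.7                         # arbitrary frequency (assumption for this check)
u = np.linspace(-3.0, 3.0, 201)     # u = Z - lambda

# Choice of Eq. (29): T_(x) = sqrt(2) sin(omega u / 2), T_(y) = sqrt(2) cos(omega u / 2)
Tx_prime = np.sqrt(2) * (omega / 2) * np.cos(omega * u / 2)
Ty_prime = -np.sqrt(2) * (omega / 2) * np.sin(omega * u / 2)

# h = sin(omega u)  =>  h'' = -omega^2 sin(omega u)
h_second = -omega**2 * np.sin(omega * u)

# Condition (25): 4 T'_(x) T'_(y) = h''
assert np.allclose(4 * Tx_prime * Ty_prime, h_second)
```

The check reduces to the identity $2\sin\theta\cos\theta=\sin 2\theta$, which is why (\[eq\_25\]) holds for every $u$.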
We can also find, in rigid coordinates, the geodesic velocity field or potential ${\vec{v}}$, ${\vec{v}}(\vec{x},\lambda)=\left.\frac{\partial \vec{x}(\lambda;\vec{X})}{\partial \lambda }\right|_{\vec{X}\rightarrow \vec{x}}$, and the corresponding proper time field or potential $\tau$, $\tau(\vec{x},\lambda )=\left.T(\lambda;\vec{X})\right|_{\vec{X}\rightarrow \vec{x}}$, which, together with $\sigma(\vec{x},\lambda )=\left.\varepsilon(X\sigma_{(x)}+Y\sigma _{(y)})\right|_{\vec{X}\rightarrow \vec{x}}$ and (\[eq\_24\]), characterize the space-time of the wave (\[eq\_20\]). For $\lambda =0$, from (\[eq\_30\]) we can define $x_0=x(\lambda=0,\vec X)$, $y_0=y(\lambda=0,\vec X)$ and $z_0=z(\lambda=0,\vec X)$ and if we perform the transformation $\vec{X}=\{X,Y,Z\}\to\vec{y}=\{x_{0,}y_{0,}z_{0}\}$ on (\[eq\_30\]), we have: $$\label{eq_32} \begin{matrix} x=x_{0}-y_{0}A^{2}\left\{{\displaystyle\frac{1}{2}}\left(\sin [\omega (z_{0}-\lambda )]-\sin [\omega z_{0}]\right)-\omega \lambda +f_{x}(\lambda )-f_{x}(0)\right\}\hfill\null\\[2ex] y=y_{0}-x_{0}A^{2}\left\{{\displaystyle\frac{1}{2}}\left(\sin [\omega (z_{0}-\lambda )]-\sin [\omega z_{0}]\right)+\omega \lambda +f_{y}(\lambda )-f_{y}(0)\right\}\hfill\null \\[2ex] z=z_{0}-x_{0}y_{0}{\displaystyle\frac{1}{2}}\omega A^{2}\cos [\omega (z_{0}-\lambda )]\hfill\null \\[2ex] T=\lambda+\sqrt{2}A\left\{x_{0}\sin\left[{\displaystyle\frac{\omega}{2}}(z_{0}-\lambda )\right]+y_{0}\cos\left[{\displaystyle\frac{\omega }{2}}(z_{0}-\lambda )\right]\right\}\hfill\null \end{matrix}$$ The geodesic corresponding to the initial conditions $\{x_{0,}y_{0,}z_{0}\}=\{0,0,0\}$ is $\{x,y,z\}=\{0,0,0\}$. 
Choosing $f_{x}(\lambda )=-f_{y}(\lambda )=\omega \lambda $, (\[eq\_32\]) becomes: $$\label{eq_33} \begin{matrix} x=x_{0}-y_{0}A^{2}\left\{{\displaystyle\frac{1}{2}}\left(\sin [\omega (z_{0}-\lambda )]-\sin[\omega z_{0}]\right)\right\}\hfill\null\\[2ex] y=y_{0}-x_{0}A^{2}\left\{{\displaystyle\frac{1}{2}}\left(\sin [\omega (z_{0}-\lambda )]-\sin [\omega z_{0}]\right)\right\}\hfill\null\\[2ex] z=z_{0}-x_{0}y_{0}{\displaystyle\frac{1}{2}}\omega A^{2}\cos [\omega (z_{0}-\lambda )]\hfill\null \\[2ex] T=\lambda +\sqrt{2}A\left\{x_{0}\sin\left[{\displaystyle\frac{\omega }{2}}(z_{0}-\lambda )\right]+y_{0}\cos\left[{\displaystyle\frac{\omega }{2}}(z_{0}-\lambda )\right]\right\}\hfill\null \end{matrix}$$ which, to first order in the coordinates near $x_{0}=y_{0}=z_{0}=0$, becomes: $$\label{eq_34} \begin{matrix} x=x_{0}+y_{0}A^{2}{\displaystyle\frac{1}{2}}\sin[\omega \lambda]\hfill\null \\[2ex] y=y_{0}+x_{0}A^{2}{\displaystyle\frac{1}{2}}\sin [\omega \lambda ]\hfill\null \\[2ex] z=z_{0}\hfill\null \end{matrix}$$ and $$\label{eq_35} T=\lambda+\sqrt{2}A\left(y_0\cos\left[{\displaystyle\frac{\omega}{2}}\lambda\right]-x_0\sin\left[{\displaystyle\frac{\omega}{2}}\lambda\right]\right)$$ This coincides with the usual result [@Misner]. We note that, up to the order in which we work, we can replace $\lambda=T$ in (\[eq\_34\]). In fact, $\lambda $ is the proper time of the geodesic $x_{0}=y_{0}=z_{0}=0$. Conclusions =========== In this paper we have tried to advance the review of some aspects of the foundations of General Relativity that we began in three recently published papers [@Jaen1; @Jaen2; @Jaen3]. In [@Jaen3] we identified up to five metric potentials with physical meaning. There, we saw how, using these potentials, we could express the metric of a significant set of space-times in a rigid Euclidean coordinate system. However, we realized that the Kerr space-time and those related to gravitational waves remain outside that set.
In the present paper we have introduced a sixth potential, $\sigma$, completing a minimal set of independent potentials intended to cover, locally and by using a rigid Euclidean coordinate system, the whole of General Relativity. As a significant example, we have written the space-time of a gravitational linear plane wave in a rigid Euclidean coordinate system. It is important to note that in doing so we have had no need to use any kind of Fermi coordinates [@Manasse]. That is, our rigid Euclidean coordinate system is an exact concept in General Relativity and does not arise as a consequence of any kind of approximation process. The only approximation we have made is related exclusively to the fact that in section §\[sec\_5\] we are working with linear waves. This does not mean that our proposal is free from difficulties. We have found a rigid Euclidean coordinate system for gravitational linear plane waves, but we were not able to guarantee its existence before the calculation, nor do we have a well-defined uniqueness criterion that would guarantee a unique rigid Euclidean coordinate system, except for changes related to the physical observer, as is the case in Newtonian Mechanics. Regarding gravitational waves, what we have proven is that we can find a rigid Euclidean coordinate system from which, by using the usual approximation, we obtain the known results. But we do not know the meaning of the expressions we found without using the same kind of approximation that is usually made when studying gravitational waves, which is none other than the use of Fermi coordinates. To take advantage of the rigid coordinates found, we think it will require a little more work along the lines set out in the following paragraphs.
Given an arbitrary space-time, the existence of a rigid Euclidean coordinate system is guaranteed if we can prove that Equation (\[eq\_18\]), taking into account the cosmological potential $H$, always has a solution for some functions $T$, $\sigma$ and $H$. This is an open problem. As we stated above, if we are able to prove this, then rigid covariance, using the six essential potentials together with a rigid Euclidean coordinate system, will be locally equivalent to general covariance. The uniqueness problem is related to identifying physical observers, and this is related to finding the dynamical group of motion of General Relativity. In Newtonian Mechanics, this group is the group of rigid motions; that is, the group of transformations that depend on functions of one parameter, say $\lambda$, that leave the form $\bar{d}s^{2}=\bar{d}{\vec{x}}^{2}$ invariant. Beyond the rigid motions, the group we are looking for must leave the form $\bar{d}s^{2}= H^{-2}\bar{d}{\vec{x}}^{2}+\epsilon(\bar{d}\sigma)^{2}$ shape invariant (covariant). In [@Jaen2], we studied the case $\sigma=0$ and $H\neq 0$, and we found that the group of motions was the homothetic group of motions. Surprisingly, as seen in [@Jaen2], that group also plays a role in Newtonian Mechanics in relation to Newtonian cosmological questions. Now the problem that we face, leaving aside the cosmological potential, is that of finding the group of motions that leave $\bar{d}s^{2}=\bar{d}{\vec{x}}^{2}+\epsilon(\bar{d}\sigma)^{2}$ shape invariant. An important subgroup is the group of rigid motions. But now, in order to find the new required motions, we have no non-relativistic equivalent, as in the case of the homothetic group. We hope that in the future we will be able to answer these questions. [99]{} Einstein, A.: *The Relativity Principle*, Jahrbuch der Radioaktivität und Elektronik [**4**]{}, 411-462 (1907) Kretschmann, E.: Über den physikalischen Sinn der Relativitätspostulate, A.
Einsteins neue und seine ursprüngliche Relativitätstheorie, *Annalen der Physik* [**358**]{}, 575-614 (1918) Fock, V.: *The Theory of Space, Time and Gravitation*, 2nd rev. ed., New York: Macmillan. Translated from the Russian by N. Kemmer (1964) Antoci, S. & Liebscher, D.-E.: *The group aspect in the physical interpretation of General Relativity theory* (2009), arXiv:0910.2073 Ellis, G. & Matravers, D.: Gen. Relativ. Gravit. [**27**]{}, 777-788 (1995) Bel, L.: *Born's group and generalized isometries*, in *Relativity in General. Proceedings of the Relativity Meeting '93*, J. Diaz and M. Lorente (eds.), Editions Frontières (1994) Llosa, J.: *An extension of the Poincaré group abiding arbitrary acceleration* (2015), arXiv:1512.07465 Painlevé, P.: *La Mécanique Classique et la Théorie de la Relativité*, L'Astronomie [**36**]{}, 6-9 (1922) Gullstrand, A.: *Allgemeine Lösung des statischen Einkörperproblems in der Einsteinschen Gravitationstheorie*, Almqvist & Wiksell (1922) Barceló, C.; Liberati, S.; Visser, M. et al.: *Analogue gravity*, Living Rev. Rel. [**8**]{}, 214 (2005) Jaén, X. & Molina, A.: Gen. Relativ. Gravit. [**45**]{}, 1531-1546 (2013) Jaén, X. & Molina, A.: Gen. Relativ. Gravit. [**46**]{}, 1-14 (2014) Jaén, X. & Molina, A.: Gen. Relativ. Gravit. [**47**]{}, 1-16 (2015) Misner, C. W.; Thorne, K. S. & Wheeler, J. A.: *Gravitation*, Macmillan (1973) Manasse, F. & Misner, C. W.: Journal of Mathematical Physics [**4**]{}, 735-745 (1963) [^1]: Dept. de Física, Universitat Politècnica de Catalunya, Spain, e-mail address: [email protected] [^2]: Dept.
Física Quàntica i Astrofísica, Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona, Spain, e-mail address: [email protected] [^3]: Throughout the paper we will use the following notation: Latin indices $i,j,k=1,2,3$; $dx\,dy=\frac{1}{2}(dx\otimes dy+dy\otimes dx)$; $T_{(i}Q_{j)}=\frac{1}{2}(T_{i}Q_{j}+T_{j}Q_{i})$; $\delta _{ij}$ is the three-dimensional identity; $f_{,i}=\frac{\partial f}{\partial x^{i}}$, where $x^{i}$ are the space coordinates; $f_{,\lambda }=\frac{\partial f}{\partial \lambda }$; $\bar{d}$ is the restriction of the differential to $d\lambda =0$, i.e. $\bar{d} f(\vec x,\lambda )= \frac{\partial f}{\partial x^{i}} dx^i$. [^4]: In this section we will use a prime, $f'$, to indicate the derivative of a function with respect to its argument.
--- abstract: 'The model uncertainty obtained by variational Bayesian inference with Monte Carlo dropout is prone to miscalibration. In this paper, different logit scaling methods are extended to dropout variational inference to recalibrate model uncertainty. Expected uncertainty calibration error (UCE) is presented as a metric to measure miscalibration. The effectiveness of recalibration is evaluated on CIFAR-10/100 and SVHN for recent CNN architectures. Experimental results show that logit scaling considerably reduces miscalibration by means of UCE. Well-calibrated uncertainty enables reliable rejection of uncertain predictions and robust detection of out-of-distribution data.' bibliography: - 'literature.bib' --- Introduction {#sec:intro} ============ Advances in deep learning have led to high accuracy predictions for classification tasks, making deep-learning classifiers an attractive choice for safety-critical applications like autonomous driving [@Chen2015] or computer-aided diagnosis [@Esteva2017]. However, the high accuracy of recent deep learning models is not sufficient for such applications. In cases where serious decisions are made upon a model's predictions, it is essential to also consider the uncertainty of these predictions. We need to know if the prediction of a model is likely to be incorrect or if invalid input data is presented to a deep model, e.g. data that is far away from the training domain or obtained from a defective sensor. The consequences of a false decision based on an uncertain prediction can be fatal. ![Calibration of uncertainty: (Left) reliability diagrams with uncertainty calibration error (UCE) and (right) detection of out-of-distribution (OoD) data. Uncalibrated uncertainty does not correspond well with the model error. Logit scaling is able to recalibrate deep Bayesian neural networks, which enables robust OoD detection.
The dashed line denotes perfect calibration.[]{data-label="fig:opener"}](opener_fig-crop.pdf) A natural expectation is that the certainty of a prediction should be directly correlated with the quality of the prediction. In other words, a prediction with a high certainty is more likely to be accurate than an uncertain prediction which is likely to be incorrect. A common misconception is the assumption that the estimated class likelihood (of a softmax activation) can be directly used as a confidence measure for the predicted class. This expectation is dangerous in the context of critical decision-making. The estimated likelihood of a model trained by minimizing the negative log-likelihood (i.e. cross entropy) is highly overconfident. That is, the estimated likelihood is considerably higher than the observed frequency of accurate predictions with that likelihood [@Guo2017]. Guo et al. proposed calibration of the likelihood estimation by scaling the logit output of a neural network to achieve a correlation between the predicted likelihood and the expected likelihood. However, they follow a frequentist approach, where they assume a single best point estimate of the parameters (or weights) of a neural network. In frequentist inference, the weights of a deep model are obtained by maximum likelihood estimation [@Bishop2006], and the normalized output likelihood for an unseen test input does not consider uncertainty in the weights [@Kendall2017]. Weight uncertainty (also referred to as model or epistemic uncertainty) is a considerable source of predictive uncertainty for models trained on data sets of limited size [@Bishop2006; @Kendall2017]. Bayesian neural networks and recent advances in their approximation provide valuable mathematical tools for quantification of model uncertainty [@Gal2016; @Kingma2013]. 
Instead of assuming the existence of a single best parameter set, we place distributions over the parameters and want to consider all possible parameter configurations, weighted by their posterior. More formally, given a training data set $ \mathcal{D} $ of labeled images and an unseen test image $ \bm{x} $ with class label $ y $, we are interested in evaluating the predictive distribution $$p(y \vert \bm{x}, \mathcal{D}) = \int p(y \vert \bm{x}, \bm{w}) p(\bm{w} \vert \mathcal{D}) \, \mathrm{d}\bm{w} ~ .$$ This integral requires evaluating the posterior $ p(\bm{w} \vert \mathcal{D}) $, which involves the intractable marginal likelihood [@Gal2016Diss]. One practical approximation of the posterior is variational inference with Monte Carlo (MC) dropout [@Gal2016]. It is commonly used to obtain epistemic uncertainty, which is caused by uncertainty in the model weights. However, epistemic uncertainty from MC dropout still tends to be miscalibrated, i.e. the uncertainty does not correspond well with the model error [@Gal2017]. The quality of uncertainty highly depends on the approximate posterior [@Louizos2017]. In [@Lakshminarayanan2017] it is stated that MC dropout uncertainty does not allow robust detection of out-of-distribution data. However, calibrated uncertainty is essential, as miscalibration can lead to decisions with catastrophic consequences in the aforementioned task domains. We therefore propose a notion of perfect calibration of uncertainty and define the *expected uncertainty calibration error* (UCE) as a metric derived from the ECE. We then show how current calibration techniques (for confidence) based on logit scaling can be extended to calibrate model uncertainty. We compare calibration results for temperature scaling, vector scaling and auxiliary scaling [@Guo2017; @Kuleshov2018] using our metric UCE as well as the established ECE.
We finally show how calibrated model uncertainty improves out-of-distribution (OoD) detection, as well as predictive accuracy by rejecting high-uncertainty predictions. To the best of our knowledge, logit scaling has not been used to calibrate model uncertainty in Bayesian inference for classification. In summary, the main contributions of our work are 1. a new metric for perfect calibration of uncertainty, 2. derivation of logit scaling for Gaussian Dropout, 3. first to apply logit scaling calibration to a Bayesian classifier obtained from MC Dropout, and 4. empirical evidence that logit scaling leads to well-calibrated model uncertainty which allows robust OoD detection (in contrast to what is stated in [@Lakshminarayanan2017]); shown for different network architectures on CIFAR-10/100 and SVHN. Our code is available at: [<https://github.com/link-withheld>]{}. Related Work {#sec:related_work} ============ Overconfident predictions of neural networks have been addressed by entropy regularization techniques. Szegedy et al. presented label smoothing as regularization of models during supervised training for classification [@Szegedy2016]. They state that a model trained with one-hot encoded labels is prone to becoming overconfident about its predictions, which causes overfitting and poor generalization. Pereyra et al. link label smoothing to confidence penalty and propose a simple way to prevent overconfident networks [@Pereyra2017]. Low entropy output distributions are penalized by adding the negative entropy to the training objective. However, the referred works do not apply entropy regularization to the calibration of confidence or uncertainty. In the last decades, several non-parametric and parametric calibration approaches such as isotonic regression [@Zadrozny2002] or Platt scaling [@Platt1999] have been presented. Recently, temperature scaling has been demonstrated to lead to well-calibrated model likelihood in non-Bayesian deep neural networks [@Guo2017].
It uses a single scalar $ T $ to scale the logits and smoothen ($ T > 1 $) or sharpen ($ T < 1 $) the softmax output and thus regularize the entropy. Logit scaling has also been introduced to approximate categorical distributions by the Gumbel-Softmax or Concrete distribution [@Jang2016; @Maddison2016]. Recently, [@Kull2019] stated that temperature scaling does not lead to classwise-calibrated models because the single parameter $ T $ cannot calibrate each class individually. They proposed Dirichlet calibration to address this problem. To verify this statement, we will investigate classwise logit scaling in addition to temperature scaling. We will show later that temperature scaling for calibrating model uncertainty in Bayesian deep learning, which takes into account all classes, does not have this shortcoming. More complex methods, such as a neural network as auxiliary recalibration model, have been used in calibrated regression [@Kuleshov2018]. Methods ======= In this section, we discuss how model uncertainty is obtained by Monte Carlo Gaussian dropout and how it can be calibrated with logit scaling. We define the expected uncertainty calibration error as a new metric to quantify miscalibration and describe confidence penalty as an alternative to logit scaling. Uncertainty Estimation {#sec:uncertainty} ---------------------- We assume a general multi-class classification task with $ C $ classes. Let input $ \bm{x} \in \mathcal{X} $ be a random variable with corresponding label $ y \in \mathcal{Y} = \{1, \ldots , C\} $. Let $ \bm{f}_{\bm{w}}(\bm{x}) $ be the output (logits) of a neural network with weight matrices $ \bm{w} $, and with model likelihood $ p( y \! = \! c \,\vert\, \bm{f}_{\bm{w}}(\bm{x}) ) $ for class $ c $, which is sampled from a probability vector $ \bm{p} = \bm{\sigma}_{\mathrm{SM}}(\bm{f}_{\bm{w}}(\bm{x})) $, obtained by passing the model output through the softmax function $ \bm{\sigma}_{\mathrm{SM}}(\cdot) $. 
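The smoothing and sharpening effect of the temperature on the softmax entropy can be illustrated in a few lines (a minimal NumPy sketch; the logit values are illustrative, not from the paper):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    # Shannon entropy; the small constant only guards against log(0)
    return -np.sum(p * np.log(p + 1e-12))

logits = np.array([4.0, 1.0, 0.5, 0.2])   # illustrative logit vector

p_sharp  = softmax(logits / 0.5)   # T < 1: sharper output, lower entropy
p_plain  = softmax(logits)         # T = 1: unscaled softmax
p_smooth = softmax(logits / 2.0)   # T > 1: smoother output, higher entropy

assert entropy(p_sharp) < entropy(p_plain) < entropy(p_smooth)
```

Note that scaling all logits by the same $T$ leaves the arg max, and hence the predicted class, unchanged; only the confidence and entropy are affected.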
From a frequentist perspective, the softmax likelihood is often interpreted as *confidence* of prediction. Throughout this paper, we follow this definition. To determine model *uncertainty*, dropout variational inference is performed by training the model $ \bm{f}_{\bm{w}} $ with dropout [@Srivastava2014] and using dropout at test time to sample from the approximate posterior distribution by performing $ N $ stochastic forward passes [@Gal2016; @Kendall2017]. This is also referred to as MC dropout. In MC dropout, the final probability vector is obtained by MC integration: $$\bm{p} (\bm{x}) = \frac{1}{N} \sum_{i=1}^{N} \boldsymbol{\sigma}_{\mathrm{SM}} \left( \bm{f}_{\bm{w}_{i}} (\bm{x}) \right) .$$ The entropy of the softmax likelihood is used to describe uncertainty of prediction [@Kendall2017]. In contrast to confidence as a quality measure of prediction (see §\[sec:calibration\]), uncertainty takes into account the likelihoods of all $ C $ classes. We propose to use the normalized entropy to scale the values to a range between $ 0 $ and $ 1 $: $$\tilde{\mathcal{H}}(\bm{p}) := - \frac{1}{\log C} \sum_{c=1}^{C} p^{(c)} \log p^{(c)} ~ , \quad \tilde{\mathcal{H}} \in \left[0, 1\right] . \label{eq:norm_entropy}$$ Besides MC dropout there are other methods for estimating the model uncertainty such as Bayes by Backprop [@Blundell2015], which uses Monte Carlo gradient estimation to learn a distribution on the weights of a neural network, or SWAG [@Maddox2019], which approximates the posterior distribution with a Gaussian using the trajectory of stochastic gradient descent. These methods are however not discussed in this paper. Monte Carlo Gaussian Dropout ---------------------------- ![Implicit output distribution of MC dropout and corresponding Gaussian dropout. Gaussian dropout replaces Bernoulli dropout and allows a learnable dropout rate $ p $. 
The input and the weights of the convolutional layer are randomly initialized.[]{data-label="fig:gaussian_dropout_clt"}](gaussian_dropout.pdf) We will first review Gaussian dropout, which was proposed by [@Wang2013], and subsequently use it to obtain model uncertainty with MC dropout. Dropout is a stochastic regularization technique, where entries of the input $ \bm{x} $ to a weight layer $ \bm{w} $ are randomly set to zero by elementwise multiplication $ \odot $ with $$\begin{gathered} \bm{d} ~ \mathrm{where} ~ d_{j} \sim \mathsf{Bernoulli}(1-p) ~ , \\ \bm{y} = \bm{w}^{T}(\bm{d} \odot (\bm{x} / (1-p) ) ) ~ ,\end{gathered}$$ with dropout rate $ p $. This introduces Bernoulli noise during optimization and reduces overfitting of the training data. The resulting output $ \bm{y} $ of a layer with dropout is a weighted sum of Bernoulli random variables. Then, the central limit theorem states, that $ \bm{y} $ is approximately normally distributed (see Fig.\[fig:gaussian\_dropout\_clt\]). Instead of sampling from the weights and computing the resulting output, we can directly sample from the implicit Gaussian distribution of dropout $$\bm{y} \sim q( \bm{y} \vert \bm{x}) = \mathcal{N}(\mu_{\bm{y}}, \sigma_{\bm{y}}^{2})$$ with $$\begin{gathered} \mu_{\bm{y}} = \mathbb{E}[ y_{k} ] = \sum_{j} w_{j,k} x_{j} ~ , \\ \sigma_{\bm{y}}^{2} = \mathrm{Var}[ y_{k} ] = p/(1-p) \sum_{j} w_{j,k}^{2} x_{j}^{2} ~ ,\end{gathered}$$ using the reparameterization trick [@Kingma2015] $$y_{j} = \mu_{j} + \sigma_{j} \varepsilon_{j} ~ \mathrm{with} ~ \varepsilon_{j} \sim \mathcal{N}(0, 1) ~ .$$ Gaussian dropout is a continuous approximation to Bernoulli dropout, and in comparison it will better approximate the true posterior distribution and is expected to provide improved uncertainty estimates [@Louizos2017]. Throughout this paper, Gaussian dropout is used as a substitute to Bernoulli dropout to obtain epistemic uncertainty under the MC dropout framework. 
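The sampling scheme above can be sketched for a single linear layer as follows (a minimal NumPy version, assuming the mean and variance formulas given above; all names and toy dimensions are illustrative):

```python
import numpy as np

def gaussian_dropout_layer(x, w, p, rng):
    """One stochastic forward pass through a linear layer with Gaussian dropout.

    Samples y ~ N(mu, sigma^2) with mu_k = sum_j w_jk x_j and
    sigma_k^2 = p/(1-p) * sum_j w_jk^2 x_j^2, via the reparameterization trick.
    """
    mu = x @ w                               # E[y_k]
    var = (p / (1.0 - p)) * (x**2 @ w**2)    # Var[y_k] implied by dropout rate p
    eps = rng.standard_normal(mu.shape)      # eps ~ N(0, 1)
    return mu + np.sqrt(var) * eps

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                   # toy input vector
w = rng.standard_normal((8, 3))              # toy weight matrix

# N stochastic forward passes, as used later for MC integration
samples = np.stack([gaussian_dropout_layer(x, w, p=0.2, rng=rng)
                    for _ in range(10000)])
```

The sample mean of the stochastic outputs approaches the deterministic activation `x @ w`, while the spread carries the dropout-induced uncertainty.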
It can efficiently be implemented in four lines of PyTorch code (see Fig.\[fig:gaussian\_dropout\_code\]). The dropout rate $ p $ is now a learnable parameter and does not need to be chosen carefully by hand. In fact, $ p $ could be optimized w.r.t. uncertainty calibration, scaling the variance of the implicit Gaussian of dropout. A similar approach was presented by [@Gal2017] using the Concrete distribution. However, we will focus on logit scaling methods for calibration and therefore keep $ p $ fixed in our subsequent experiments. Gaussian dropout has been used in the context of uncertainty estimation in prior work. In [@Louizos2017], it is used together with multiplicative normalizing flows to improve the approximate posterior. A similar Gaussian approximation of Batch Normalization was presented in [@Teye2018], where Monte Carlo Batch Normalization is proposed as approximate Bayesian inference. Calibration of Uncertainty {#sec:calibration} -------------------------- To give an insight into our general approach to calibration of uncertainty, we will first revisit the definition of perfect calibration of confidence [@Guo2017] and show how this concept can be extended to calibration of uncertainty. Let $ \hat{y} = \operatorname*{arg\,max}\bm{p} $ be the most likely class prediction of input $ \bm{x} $ with likelihood $ \hat{p} = \max \bm{p} $ and true label $ y $. Then, following [@Guo2017], *perfect calibration of confidence* is defined as $$\mathbb{P} \left( \hat{y} = y \,\vert\, \hat{p} = q \right) = q , \quad \forall q \in \left[ 0, 1 \right] . \label{eq:perfect_calibration}$$ That is, the probability of a correct prediction $ \hat{y} = y $ given the prediction confidence $ \hat{p} $ should exactly correspond to the prediction confidence.
From Eq.(\[eq:perfect\_calibration\]) and Eq.(\[eq:norm\_entropy\]), we define *perfect calibration of uncertainty* as $$\mathbb{P} ( \hat{y} \neq y \,\vert\, \tilde{\mathcal{H}}( \bm{p} ) = q ) = q , \quad \forall q \in \left[0, 1\right] .$$ That is, in a batch of inputs that are all predicted with uncertainty of e.g. $ 0.2 $, a top-1 error of $ 20\,\% $ is expected. The confidence is interpreted as the probability of belonging to a particular class, which should naturally correlate with the model error of that class. This characteristic does not generally apply to entropy, and therefore the question arises why entropy should correlate with the model error. However, entropy is considered a measure of uncertainty, and we expect that a prediction with lower uncertainty is less likely to be false and vice versa. In fact, our experimental results for uncalibrated models show that the confidence is as miscalibrated as the normalized entropy (see Fig.\[fig:reliability\]). Expected Uncertainty Calibration Error (UCE) -------------------------------------------- Due to optimizing the weights $ \bm{w} $ via minimization of the negative log-likelihood of $ p( y \,\vert\, \bm{f}_{\bm{w}}(\bm{x}) ) $, modern deep models are prone to overly confident predictions and are therefore miscalibrated [@Guo2017; @Gal2017]. A popular way to quantify miscalibration of neural networks with a scalar value is the expectation of the difference between predicted softmax likelihood $ \hat{p} $ and accuracy $$\mathbb{E}_{\hat{p}}\left[ \, \left| \mathbb{P} \left( \hat{y} = y \,\vert\, \hat{p} = q \right) - q \right| \, \right], \quad \forall q \in \left[ 0, 1 \right] , \label{eq:ece}$$ based on the natural expectation that confidence should linearly correlate with the likelihood of a correct prediction. This expectation of the difference can be approximated by the Expected Calibration Error (ECE) [@Naeini2015; @Guo2017].
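The normalized entropy of Eq.(\[eq:norm\_entropy\]) that serves as the uncertainty measure can be written down directly (a minimal NumPy sketch; the small constant inside the logarithm is only for numerical safety and is not part of the definition):

```python
import numpy as np

def normalized_entropy(p):
    """Entropy of a probability vector, normalized by log(C) to the range [0, 1]."""
    C = p.shape[-1]
    return -np.sum(p * np.log(p + 1e-12), axis=-1) / np.log(C)

uniform = np.full(10, 0.1)     # uniform over C = 10 classes: maximally uncertain
peaked  = np.eye(10)[3]        # one-hot vector: maximally certain

assert np.isclose(normalized_entropy(uniform), 1.0)
assert np.isclose(normalized_entropy(peaked), 0.0, atol=1e-6)
```

The two extreme cases confirm the stated range: the uniform distribution attains the maximum value $1$, a one-hot prediction the minimum value $0$.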
The output of a neural network is partitioned into $ M $ bins with equal width and a weighted average of the difference between accuracy and confidence (softmax likelihood) is taken: $$\mathrm{ECE} = \sum_{m=1}^{M} \frac{\left| B_{m} \right|}{n} \, \big| \mathrm{acc}(B_{m}) - \mathrm{conf}(B_{m}) \big| ~ ,$$ with total number of inputs $ n $ and set of indices $ B_{m} $ of inputs whose confidence falls into that bin (see [@Guo2017] for more details). We propose the following slightly modified notion of Eq.(\[eq:ece\]) to quantify miscalibration of uncertainty: $$\mathbb{E}_{\tilde{\mathcal{H}}} [ \, \vert \mathbb{P} ( \hat{y} \neq y \,\vert\, \tilde{\mathcal{H}}( \bm{p} ) = q ) - q \vert \, ], \quad \forall q \in \left[ 0, 1 \right] .$$ We refer to this as Expected Uncertainty Calibration Error (UCE) and approximate analogously with $$\mathrm{UCE} := \sum_{m=1}^{M} \frac{\left| B_{m} \right|}{n} \big| \mathrm{err}(B_{m}) - \mathrm{uncert}(B_{m}) \big| ~ . \label{eq:uce}$$ The error per bin is defined as $$\mathrm{err}(B_{m}) := \frac{1}{\left| B_{m} \right| } \sum_{i \in B_{m}} \bm{1} (\hat{y}_{i} \neq y_{i}) ~ ,$$ where $ \bm{1} (\hat{y}_{i} \neq y_{i}) = 1 $ and $ \bm{1} (\hat{y}_{i} = y_{i}) = 0 $. Uncertainty per bin is defined as $$\mathrm{uncert}(B_{m}) := \frac{1}{\left| B_{m} \right| } \sum_{i \in B_{m}} \tilde{\mathcal{H}} (\bm{p}_{i}) ~ .$$ In [@Kull2019], it is stated that the ECE has a fundamental limitation. Due to binning across all classes, over-confidence on one class can be compensated by under-confidence on another class. Thus, a model can achieve low ECE values even if the confidence for each class is either over- or underestimated. They propose the classwise ECE (cECE) and, following that, we additionally define the classwise UCE (cUCE) as $$\mathrm{cUCE} := \frac{1}{C} \sum_{c=1}^{C} \mathrm{UCE}(c) \label{eq:cuce}$$ to evaluate classwise calibration. It is defined as the mean of all UCEs per class, which are denoted by $ \mathrm{UCE}(c) $.
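The binned UCE estimate of Eq.(\[eq:uce\]) can be sketched as follows (a minimal NumPy version with equal-width binning; the toy data are constructed so that the error rate matches the uncertainty, i.e. near-perfect calibration, and the function name is illustrative):

```python
import numpy as np

def uce(y_true, y_pred, uncert, n_bins=10):
    """Expected Uncertainty Calibration Error with M equal-width bins.

    uncert : per-sample normalized entropy in [0, 1].
    """
    errors = (y_pred != y_true).astype(float)
    # bin index per sample; uncert == 1.0 is clipped into the last bin
    bins = np.minimum((uncert * n_bins).astype(int), n_bins - 1)
    n = len(y_true)
    total = 0.0
    for m in range(n_bins):
        mask = bins == m
        if mask.any():  # |B_m| / n * |err(B_m) - uncert(B_m)|
            total += mask.sum() / n * abs(errors[mask].mean() - uncert[mask].mean())
    return total

# toy data: prediction is wrong with probability equal to its uncertainty,
# so the UCE should be close to zero (only binomial sampling noise remains)
rng = np.random.default_rng(0)
u = rng.uniform(size=20000)
y_true = np.zeros(20000, dtype=int)
y_pred = (rng.uniform(size=20000) < u).astype(int)
assert uce(y_true, y_pred, u) < 0.02
```

A miscalibrated model, e.g. one whose uncertainty is systematically lower than its error rate, would drive the per-bin differences and hence the UCE up.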
Additionally, we plot $ \mathrm{err}(B_{m}) $ vs. $ \mathrm{uncert}(B_{m}) $ to create reliability diagrams and visualize calibration.

Temperature Scaling for Dropout Variational Inference
-----------------------------------------------------

State-of-the-art deep neural networks are generally miscalibrated with regard to the softmax likelihood [@Guo2017]. Model uncertainty obtained with dropout variational inference also tends to be poorly calibrated [@Louizos2017; @Gal2017; @Lakshminarayanan2017]. Fig.\[fig:opener\] (left) shows reliability diagrams [@Niculescu2005] for ResNet-101 trained on CIFAR-100. The divergence from the identity function reveals miscalibration. Furthermore, it is not possible to robustly detect OoD data from uncalibrated uncertainty (see Fig.\[fig:opener\] (right)). If the fraction of OoD data in a batch of test images is $ > 50\,\% $, there is almost no increase in mean uncertainty. We first address the problem using temperature scaling, which is the most straightforward logit scaling method for recalibration. Temperature scaling with MC dropout variational inference is derived by closely following the derivation of frequentist temperature scaling in the appendix of [@Guo2017]. Let $ \left\{ \bm{z}_{1,j}, \ldots , \bm{z}_{N,j} \right\} $ be a set of logit vectors obtained by MC dropout with $ N $ stochastic forward passes for each input $ \bm{x}_{j} \in \left\{ \bm{x}_{1}, \ldots , \bm{x}_{M} \right\} $ with true labels $ \left\{ y_{1}, \ldots , y_{M} \right\} $.
Temperature scaling is the solution $ \hat{p} $ to the entropy maximization problem $$\underset{\hat{p}}{\max} ~ - \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} \sum_{c=1}^{C} \hat{p} \left( \bm{z}_{i,j} \right)^{(c)} \log \hat{p} \left( \bm{z}_{i,j} \right)^{(c)} ,$$ subject to $$\hat{p} (\bm{z}_{i,j})^{(c)} \geq 0 \quad \forall i,j,c ~ ,$$ $$\sum_{c=1}^{C} \hat{p} (\bm{z}_{i,j})^{(c)} = 1 \quad \forall i,j ~ ,$$ $$\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} z_{i,j}^{(y_{j})} = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} \sum_{c=1}^{C} z_{i,j}^{(c)} \hat{p} ( \bm{z}_{i,j})^{(c)} .$$ Guo et al. solve this constrained optimization problem with the method of Lagrange multipliers. We omit the details of their proof, as the solution for $ \hat{p} $ in the case of MC dropout integration yields $$\begin{aligned} \frac{1}{N} \sum_{i=1}^{N} \hat{p} \left( \bm{z}_{i,j} \right)^{(c)} &= \frac{1}{N} \sum_{i=1}^{N} \frac{e^{\lambda z_{i,j}^{(c)}}}{\sum_{\ell=1}^{C} e^{\lambda z_{i,j}^{(\ell)} }} \\ &= \frac{1}{N} \sum_{i=1}^{N} \boldsymbol{\sigma}_{\mathrm{SM}} \left( \lambda \bm{f}_{\bm{w}_{i}} ( \bm{x}_{j} )\right)^{(c)} ,\end{aligned}$$ which is temperature scaling for $ \lambda = T^{-1} $ [@Guo2017]. A scalar parameter cannot rescale the class logits individually; more complex logit scaling can be derived by using any function at this point to smooth or sharpen the softmax output (see next section). In this work, Gaussian dropout is inserted between each weight layer with a fixed dropout rate of $ p = 0.2 $. Temperature scaling with $ T > 0 $ is applied before the final softmax activation and before MC integration: $$\hat{\bm{p}} (\bm{x}) = \frac{1}{N} \sum_{i=1}^{N} \boldsymbol{\sigma}_{\mathrm{SM}} \left( T^{-1} \bm{f}_{\bm{w}_{i}} (\bm{x}) \right) .$$ First, $ \bm{f}_{\bm{w}} $ is trained with Gaussian dropout until convergence on the training set.
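To illustrate MC integration with temperature scaling before the softmax, the following numpy sketch uses a hypothetical stand-in for the $ N $ stochastic forward passes $ \bm{f}_{\bm{w}_{i}}(\bm{x}) $ (the base logits and noise scale are our own assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # numerically stable softmax along the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_logits(n_passes=25, n_classes=3):
    # hypothetical stand-in for N stochastic forward passes f_{w_i}(x):
    # a fixed logit vector perturbed by Gaussian noise per pass
    base = np.array([2.0, 0.5, -1.0])
    return base + 0.3 * rng.standard_normal((n_passes, n_classes))

def predict(T=1.0, n_passes=25):
    # temperature scaling is applied before the softmax
    # and before Monte Carlo integration
    logits = mc_dropout_logits(n_passes)
    return softmax(logits / T).mean(axis=0)

p_sharp = predict(T=0.5)   # T < 1 sharpens the averaged distribution
p_smooth = predict(T=2.0)  # T > 1 smooths it (higher entropy)
```

With $ T > 1 $ the MC-averaged distribution has higher entropy, which is the direction needed to counteract the over-confidence of uncalibrated models.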
Next, we fix the parameters $ \bm{w} $ and optimize $ T $ with respect to the negative log-likelihood on a separate calibration set using MC Gaussian dropout. This is equivalent to maximizing the entropy of $ \hat{\bm{p}} $ [@Guo2017].

Classwise Logit Scaling {#sec:aux_method}
-----------------------

In [@Kull2019], temperature scaling is reported to be inferior to more complex calibration methods in terms of classwise calibration. In [@Guo2017], temperature scaling is used to calibrate the confidence, which takes into account only one class probability. In contrast, we use temperature scaling to calibrate the model uncertainty, expressed via normalized entropy. This considers all class probabilities, and we therefore hypothesize that temperature scaling implicitly leads to well-calibrated classwise uncertainty. To test this experimentally, we implement vector scaling and auxiliary scaling and compare them using classwise UCE. *Vector scaling* is a multi-class extension of temperature scaling in which the logits are scaled by an individual factor per class before the final softmax: $$\hat{\bm{p}}_{i} (\bm{x}) = \boldsymbol{\sigma}_{\mathrm{SM}} \left( \bm{T} \bm{f}_{\bm{w}_{i}} (\bm{x}) \right) ~ ,$$ with $ \bm{T} = \mathrm{diag} (t_{1}, \ldots, t_{C}) $. *Auxiliary scaling* makes use of a more powerful auxiliary recalibration model $ \bm{R}_{\bm{\theta}} $, a two-layer fully-connected network with $ C $ hidden units and leaky ReLU activations after the hidden layer: $$\hat{\bm{p}}_{i} (\bm{x}) = \boldsymbol{\sigma}_{\mathrm{SM}} \left( \bm{R}_{\bm{\theta}} ( \bm{f}_{\bm{w}_{i}} (\bm{x}) ) \right) ~ ,$$ which is inspired by [@Kuleshov2018]. The intuition is that recalibration may require a more complex function than simple scaling. Both $ \bm{T} $ and the parameters $ \bm{\theta} $ of the auxiliary model are optimized w.r.t. the negative log-likelihood in a separate calibration phase by gradient descent.
We initialize with $ t_{j} \leftarrow 1 $ and $ \bm{\theta}_{1,2} \leftarrow \bm{I}_{C} $, respectively, so that recalibration starts from the identity function. It must be emphasized that, in contrast to temperature scaling, both vector and auxiliary scaling can change the maximum of the softmax and thus affect model accuracy.

Confidence Penalty {#sec:conf_penalty}
------------------

Additionally, we compare temperature scaling to entropy regularization, where low-entropy output distributions are penalized by adding the negative entropy $ \mathcal{H} $ of the softmax output to the negative log-likelihood training objective, weighted by an additional hyperparameter $ \beta $. This leads to the following training objective: $$\mathcal{L}_{\mathrm{CP}}(\bm{w}) = - \sum_{\mathcal{X}, \mathcal{Y}} \log \bm{p}_{\bm{w}} (\bm{y} \vert \bm{x}) - \beta \, \mathcal{H} \left( \bm{p}_{\bm{w}}(\bm{y} \vert \bm{x}) \right) ~ . \label{eq:conf_penalty}$$ We reproduce the experiment of Pereyra et al. on supervised image classification [@Pereyra2017] and compare the resulting calibration of confidence and uncertainty to the logit scaling calibration methods. Calibration by confidence penalty must be performed during training and cannot be done afterwards; thus, a separate calibration phase is omitted.

Experiments {#sec:experiments}
===========

The experimental results are presented threefold: first, the proposed logit scaling methods are used to calibrate confidence and uncertainty and are compared with entropy regularization; second, predictions with high uncertainty are rejected; and third, the effect of out-of-distribution data on uncertainty is analyzed. All models were trained from random initialization. More details on the training procedure can be found in the appendix.
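For reference, the confidence penalty objective of Eq.(\[eq:conf\_penalty\]) can be sketched in numpy for a single batch (function names and the toy logits are ours; a real implementation would compute this inside the training loop of a deep learning framework):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax along the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def confidence_penalty_loss(logits, labels, beta=0.1):
    # negative log-likelihood minus beta times the entropy of the
    # softmax output: low-entropy (over-confident) outputs are penalized
    p = softmax(logits)
    n = len(labels)
    nll = -np.log(p[np.arange(n), labels] + 1e-12).sum()
    entropy = -(p * np.log(p + 1e-12)).sum()
    return nll - beta * entropy

logits = np.array([[4.0, 0.0, -1.0],   # over-confident prediction
                   [0.5, 1.0,  0.0]])  # softer prediction
labels = np.array([0, 1])
loss = confidence_penalty_loss(logits, labels, beta=0.1)
```

Because the entropy term enters with a negative sign, minimizing $ \mathcal{L}_{\mathrm{CP}} $ rewards higher-entropy (less confident) outputs.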
Uncertainty Calibration {#sec:exp_uncert}
-----------------------

To show the effectiveness of uncertainty calibration, we train ResNet-34 [@He2016] and DenseNet-121 [@Huang2017] on CIFAR-10 [@Krizhevsky2009] and SVHN [@SVHN], as well as ResNet-101 and DenseNet-169 on CIFAR-100, with Gaussian dropout until convergence. We mainly focus on the calibration of uncertainty obtained by performing $ N=25 $ forward passes with MC Gaussian dropout. Additionally, we reproduce the experiments of [@Guo2017] and analyze calibration of the frequentist confidence $ \hat{p} = \max \bm{p} $ along with likelihood values $ \hat{p} = \max N^{-1} \sum_{i=1}^{N} \bm{p}_{i} $ from MC dropout. Subsequently, the models are calibrated using the previously mentioned logit scaling methods. The validation set with 5,000 images is used as the calibration set. We additionally train all networks in the exact same manner with the confidence penalty loss with fixed $ \beta = 0.1 $. The proposed UCE and classwise UCE metrics are used to quantify calibration of uncertainty. Reliability diagrams (top-1 error vs. uncertainty) are used to visualize (mis-)calibration. Classwise UCE values are given in Tab.\[tab:results\] and the reliability diagrams show the corresponding UCE.

Rejection of Uncertain Predictions
----------------------------------

An example application of well-calibrated uncertainty is the rejection of uncertain predictions. In a medical imaging scenario, for example, a critical decision should only be made on the basis of reliable predictions. We define an uncertainty threshold $ \mathcal{H}_{\mathrm{max}} $ and reject all predictions from the test set where $ \tilde{\mathcal{H}}(\bm{p}) > \mathcal{H}_{\mathrm{max}} $. A decrease in false predictions on the remaining test set is expected.

Out-of-Distribution Detection
-----------------------------

Deep neural networks only provide reliable predictions for data on which they have been trained.
In practice, however, the trained network will encounter samples that lie outside the distribution of the training data. Problematically, a miscalibrated model will still produce highly confident estimates for such out-of-distribution (OoD) data [@Lee2018]. To our surprise, Bayesian neural networks have not been extensively studied for out-of-distribution detection. Epistemic uncertainty from MC dropout was successfully used to detect OoD samples in neural machine translation [@Xiao2019]. We reproduce the experiments presented by [@Lakshminarayanan2017], where predictive uncertainty obtained from deep ensembles is used to detect whether data from CIFAR10 is provided to a network trained on SVHN. They state that uncertainty produced by MC dropout is over-confident and cannot robustly detect OoD data. We expect that well-calibrated uncertainty from Bayesian methods allows us to detect if data from CIFAR10 is presented to a deep model trained on SVHN. However, the SVHN data set shows house numbers while the CIFAR data set contains everyday objects and animals; the two data domains are clearly disjoint, which makes this task comparatively easy. To demonstrate the OoD detection ability under more difficult conditions, we additionally provide images from CIFAR100 to a deep model trained on CIFAR10 (note that the two CIFAR data sets have no mutual classes). In this experiment, we compose a batch of 100 images from the test set of the training domain and stepwise replace images with out-of-distribution data, reflecting the practical setting in which models are applied to a mix of known and unknown classes. After each step, we evaluate the batch mean uncertainty and expect the mean uncertainty to increase as a function of the fraction of OoD data.
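The uncertainty-based rejection rule described above — discard predictions with $ \tilde{\mathcal{H}}(\bm{p}) > \mathcal{H}_{\mathrm{max}} $ — can be sketched as follows (numpy sketch with made-up probabilities; the same normalized entropy also drives the batch-mean statistic used for OoD detection):

```python
import numpy as np

def normalized_entropy(probs):
    # entropy of each softmax vector, normalized to [0, 1] by log(C)
    C = probs.shape[1]
    return -np.sum(probs * np.log(probs + 1e-12), axis=1) / np.log(C)

def reject_uncertain(probs, h_max):
    # keep only predictions whose normalized entropy is at most h_max
    keep = normalized_entropy(probs) <= h_max
    return np.argmax(probs[keep], axis=1), keep

# made-up batch: two confident predictions and one near-uniform one
probs = np.array([[0.95, 0.03, 0.02],
                  [0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25]])
preds, keep = reject_uncertain(probs, h_max=0.5)
batch_mean_uncertainty = normalized_entropy(probs).mean()
```

With well-calibrated uncertainty, lowering `h_max` should lower the top-1 error of the kept predictions, and `batch_mean_uncertainty` should grow with the fraction of OoD inputs in the batch.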
Results {#sec:results}
=======

  -----------------------------------------------------------------------------------------------------------------------
                             Uncalibrated      Conf. Penalty     Temp. Scaling      Vector Scaling       Aux. Scaling
  Data Set    Model          cECE     cUCE     cECE     cUCE     cECE       cUCE    cECE       cUCE      cECE       cUCE
  ----------- -------------- -------- -------- -------- -------- ---------- ------- ---------- -------- ---------- -------
  CIFAR-10    ResNet-34      4.46     4.03     8.29     19.8     **1.95**   3.68    2.09       3.73     2.10       **2.38**

  CIFAR-10    DenseNet-121   10.1     9.52     8.49     18.5     3.05       5.72    3.15       6.09     **2.98**   **4.55**

  CIFAR-100   ResNet-101     20.5     23.2     14.6     19.4     10.8       11.5    **10.7**   **11.4** 32.9       35.3

  CIFAR-100   DenseNet-169   32.4     37.1     15.6     20.6     12.9       13.9    **12.8**   **13.8** 48.9       52.6

  SVHN        ResNet-34      2.37     2.07     9.11     22.3     1.47       3.47    1.44       3.43     **1.34**   **1.85**

  SVHN        DenseNet-121   2.91     2.47     7.53     19.7     2.06       5.08    1.96       4.88     **1.51**   **2.46**
  -----------------------------------------------------------------------------------------------------------------------

\[tab:results\]

  ---------------------------------------------------------------------------------- -------------------------------------------------------------------------------------
  ResNet-101/CIFAR100                                                                 DenseNet-169/CIFAR100
  ![image](results_cifar100_resnet101_composed-crop.pdf){width="0.98\columnwidth"}   ![image](results_cifar100_densenet169_composed-crop.pdf){width="0.98\columnwidth"}
  ---------------------------------------------------------------------------------- -------------------------------------------------------------------------------------

  ------------------------------------------------------------- ----------------------------------------------------------
  Rejection of unreliable predictions                           Out-of-Distribution Detection
  ![image](results_reject-crop.pdf){width="0.98\columnwidth"}   ![image](results_ood-crop.pdf){width="0.98\columnwidth"}
  ------------------------------------------------------------- ----------------------------------------------------------

In this section, the results of
the above-mentioned experimental setup are presented and discussed.

Uncertainty Calibration {#uncertainty-calibration}
-----------------------

Tab.\[tab:results\] reports classwise UCE test set results and Fig.\[fig:reliability\] shows reliability diagrams for the experimental setup described in the previous section. All logit scaling methods considerably reduce miscalibration on CIFAR-10/100 in terms of cECE and cUCE. For the smaller networks on CIFAR-10 and SVHN, the more powerful aux scaling yields the lowest cUCE. On CIFAR-100, however, aux scaling increases miscalibration. In this case, the auxiliary model $ \bm{R} $ has $ C=100 $ units in the hidden layer and easily overfits the calibration set (we observe a calibration set accuracy of 100%). This results in worse calibration on the test set than the uncalibrated model. A possible solution is to add regularization (e.g. early stopping or weight decay) during optimization of $ \bm{R} $. If the model is already well-calibrated (e.g. for SVHN in our experiments), temperature scaling and vector scaling can slightly worsen calibration. In this case, a larger calibration set is preferred, or recalibration can be omitted altogether. Confidence penalty only slightly reduces miscalibration for larger models on CIFAR-100. In all other configurations, it leads to worse calibration. As hypothesized in §\[sec:related\_work\], temperature scaling results in classwise calibrated uncertainty and is only marginally outperformed by the classwise logit scaling methods. The reliability diagrams in Fig.\[fig:reliability\] give additional insight and show that calibrated uncertainty corresponds well with the model error. It is worth noting that the likelihood in the Bayesian approach is generally better calibrated than the frequentist confidence.

Rejection of Uncertain Predictions
----------------------------------

Fig. \[fig:uncert\_thresh\] (left) shows the top-1 error as a function of decreasing $ \mathcal{H}_{\mathrm{max}} $.
For both uncalibrated and calibrated uncertainty, decreasing $ \mathcal{H}_{\mathrm{max}} $ reduces the top-1 error. Again, we can observe the underestimation of uncalibrated uncertainty: $ \mathcal{H}_{\mathrm{max}} $ has little effect at first and few uncertain predictions are rejected. Using calibrated uncertainty with temperature or vector scaling, the relationship is almost linear, allowing robust rejection of uncertain predictions. Except for aux scaling on CIFAR-100, logit scaling is capable of reducing the top-1 error below 1%. Further, we observe that confidence penalty can lead to *over*-estimation of uncertainty.

Out-of-Distribution Detection
-----------------------------

Fig.\[fig:uncert\_thresh\] (right) shows the effect of calibrated uncertainty on OoD detection. All calibration approaches improve the detection of OoD data. The benefit of calibration is most noticeable on ResNet (C10$\rightarrow$C100) and DenseNet (SVHN$\rightarrow$C10, C10$\rightarrow$SVHN), where the mean uncertainty stays almost constant for OoD fractions $ > 50\,\%$; robust OoD detection is thus only possible after calibration. As in Fig.\[fig:uncert\_thresh\] (left), we can observe overestimation of uncertainty for confidence penalty. In some cases (e.g. DenseNet SVHN$\rightarrow$C10), this yields more robust OoD detection. This is in contrast to the results presented in [@Lakshminarayanan2017], where MC dropout uncertainty was not able to capture OoD data sufficiently.

Conclusion
==========

In this paper, calibration of Bayesian model uncertainty is discussed. We derive logit scaling as an entropy maximization technique to recalibrate the uncertainty of deep models trained with Gaussian dropout. Following commonly accepted metrics for calibration of confidence, we present the (classwise) expected uncertainty calibration error to quantify miscalibration of uncertainty. Logit scaling calibrates uncertainty obtained by Monte Carlo Gaussian dropout with high effectiveness.
The experimental results show that better calibrated uncertainty allows more robust predictions and detection of out-of-distribution data; a key feature that is particularly important in safety-critical applications. Logit scaling is easy to implement and more effective than confidence penalty during training. Simple scaling methods are preferred over more complex methods, as they provide similar results and do not tend to overfit the calibration set. Temperature scaling improves uncertainty estimation without affecting the accuracy of the model. Vector and auxiliary scaling also improve calibration of uncertainty, but can have a (positive or negative) influence on predictive accuracy. When uncertainty is expressed via entropy, the classwise uncertainty calibrated by vector and auxiliary scaling is not substantially better than that calibrated by temperature scaling. Logit scaling calibrates not only the frequentist confidence but also the Bayesian uncertainty.

Outlook
=======

Throughout this work, we used a fixed dropout rate $ p $ for Gaussian dropout. In [@Gal2017], the Concrete distribution was used as a continuous approximation to the discrete Bernoulli distribution in dropout, which allows optimizing $ p $ w.r.t. calibrated uncertainty. Using Gaussian dropout as described above, we can also recalibrate models by optimizing $ p $ w.r.t. NLL on the calibration set, which scales $ \sigma $ to reduce underestimation of uncertainty. In Bayesian active learning, we want to train a model with the minimal number of expert queries from a pool of unlabeled data. Calibrated uncertainty can further be useful to acquire the most uncertain samples from the pool data to increase information efficiency [@Gal2017b]. Additionally, pseudo-labels can be generated from the least uncertain predictions in semi-supervised learning. However, there are many factors (e.g.
network architecture, weight decay, dropout configuration) influencing the uncertainty in Bayesian deep learning that have not been discussed in this paper and are open to future work.

Reviews
=======

This paper was submitted to the *International Conference on Machine Learning* (ICML) 2020 and rejected with the following scores:

- Below the acceptance threshold, I would rather not see it at the conference.

- Borderline paper, but has merits that outweigh flaws.

- Borderline paper, but has merits that outweigh flaws.

- Borderline paper, but the flaws may outweigh the merits.

In the following, we disclose the anonymous reviews and our rebuttal.

Meta-Review
-----------

1\. Please provide a meta-review for this paper that explains to both the program chairs and the authors the key positive and negative aspects of this submission. Because authors cannot see reviewer discussions, please also summarize any relevant points that can help improve the paper. Please be sure to make clear what your assessment of the pros/cons of this paper are, especially if your assessment is at odds with the overall reviewer scores. Please do not explicitly mention your recommendation in the meta-review (or you may have to edit it later).

The authors calibrate Gaussian dropout models and observe better calibrated uncertainty. After a discussion, the reviewers converged towards rejection being a more appropriate decision at this time. The reviewers agreed that the paper provides empirical evidence that model calibration is beneficial and that the analysis is sound. However, they generally felt that the novelty of the methods is limited and lacks justification.

8\. I agree to keep the paper and supplementary materials (including code submissions and Latex source), and reviews confidential, and delete any submitted code at the end of the review cycle to comply with the confidentiality requirements.

Agreement accepted

9\.
I acknowledge that my meta-review accords with the ICML code of conduct (see https://icml.cc/public/CodeOfConduct).

Agreement accepted

Review \#2
----------

**Questions**

1\. Please summarize the main claim(s) of this paper in two or three sentences.

The authors apply the standard calibration techniques to Gaussian dropout models and observe better calibrated uncertainty.

2\. Merits of the Paper. What would be the main benefits to the machine learning community if this paper were presented at the conference? Please list at least one.

The paper provides additional empirical evidence that model calibration is beneficial.

3\. Please provide an overall evaluation for this submission.

Below the acceptance threshold, I would rather not see it at the conference.

4\. Score Justification Beyond what you’ve written above as “merits”, what were the major considerations that led you to your overall score for this paper?

The results of the paper are trivial. Temperature scaling is a well-known technique that improves the performance of basically all classification models. This particular paper applies it to Gaussian dropout networks.

5\. Detailed Comments for Authors Please comment on the following, as relevant:

- The significance and novelty of the paper’s contributions.

- The paper’s potential impact on the field of machine learning.

- The degree to which the paper substantiates its main claims.

- Constructive criticism and feedback that could help improve the work or its presentation.

- The degree to which the results in the paper are reproducible.

- Missing references, presentation suggestions, and typos or grammar improvements.

I have read the author response. Some of my points have been addressed. I am willing to slightly increase my score, but I still think that the paper is below the acceptance threshold.

– It is not clear to me why the authors focus on Gaussian dropout. Their main results, eq. 23-24, can be applied to any ensemble.
Overall, the result is trivial: one can just take the predictive distribution of any model, be it a single neural network, a deep ensemble, or the result of MC dropout integration, and apply the temperature scaling, vector scaling or matrix scaling to this distribution. Moreover, the authors use Gaussian dropout as an approximation of binary dropout. Why do that when one can just start with Gaussian dropout? Moreover, since the authors mentioned the framework of variational inference, why not just stick with fully factorized Gaussian variational inference from the beginning? It has been a standard technique in Bayesian deep learning for years and does not require the extra steps going from binary dropout to its Bayesian interpretation, to its Gaussian approximation. This makes the paper much more confusing.

”the main contributions of our work are ... 3. first to apply logit scaling calibration to a Bayesian classifier obtained from MC dropout“ This has already been done by Ashukha et al 2020. They apply logit scaling to different kinds of ensembles, including Bayesian neural networks in general and both MC dropout and FFG variational inference in particular.

The expected calibration error is a biased metric. Its bias depends on the model, so it cannot be used to compare the calibration of different models (Vaicenavicius et al 2019). The same holds for UCE. How is UCE different from other biased estimates of calibration error (ECE, TACE, SCE and others by Nixon et al 2019)? I am not convinced that this metric can provide any additional insights. Introducing more biased metrics is harmful to the community as it would make the further results on comparing different methods even less reliable. Moreover, there already are some calibration metrics that do not have such problems (Widmann et al 2019).

Nixon, Jeremy, et al. “Measuring calibration in deep learning.” arXiv preprint arXiv:1904.01685 (2019).\
Vaicenavicius, Juozas, et al.
“Evaluating model calibration in classification.” arXiv preprint arXiv:1902.06977 (2019).\
Widmann, David, Fredrik Lindsten, and Dave Zachariah. “Calibration tests in multi-class classification: A unifying framework.” Advances in Neural Information Processing Systems. 2019.\
Ashukha, Arsenii, et al. “Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning.” In International Conference on Learning Representations. 2020.

6\. Please rate your expertise on the topic of this submission, picking the closest match.

I have published one or more papers in the narrow area of this submission.

7\. Please rate your confidence in your evaluation of this paper, picking the closest match.

I tried to check the important points carefully. It is unlikely, though possible, that I missed something that could affect my ratings.

8\. Datasets If this paper introduces a new dataset, which of the following norms are addressed? (For ICML 2020, lack of adherence is not grounds for rejection and should not affect your score; however, we have encouraged authors to follow these suggestions.)

This paper does not introduce a new dataset (skip the remainder of this question).

12\. I agree to keep the paper and supplementary materials (including code submissions and Latex source) confidential, and delete any submitted code at the end of the review cycle to comply with the confidentiality requirements.

Agreement accepted

13\. I acknowledge that my review accords with the ICML code of conduct (see https://icml.cc/public/CodeOfConduct).

Agreement accepted

Review \#4
----------

**Questions**

1\. Please summarize the main claim(s) of this paper in two or three sentences.

The authors propose a methodology for calibrating model uncertainty (measured as entropy of the marginal posterior predictive distribution) instead of the parameters of the (marginal) posterior predictive distribution (ECE).
They introduce their approach in the context of MC Dropout, and demonstrate results on a set of experiments.

2\. Merits of the Paper. What would be the main benefits to the machine learning community if this paper were presented at the conference? Please list at least one.

The authors propose the aforementioned methodology, and back it up with a set of empirical experiments.

3\. Please provide an overall evaluation for this submission.

Borderline paper, but has merits that outweigh flaws.

4\. Score Justification Beyond what you’ve written above as “merits”, what were the major considerations that led you to your overall score for this paper?

The authors propose a method to calibrate the model uncertainty, but the indicated approach specifically calibrates the entropy of the marginal posterior predictive distribution, which contains both data and model uncertainty sources. Given that, I would have expected to see an experimental setup that compared the benefit of calibrating according to UCE vs. ECE. The listed experiments demonstrate that it is possible to apply calibration techniques developed for ECE to their proposed UCE, but the reader is left wondering whether UCE provides a marked improvement. The rejection experiments are useful, but it would have been good to compare the results to the alternative of thresholding on the max predicted probability (i.e., Hendrycks et al., 2017). I agree with the authors in the motivation for using model uncertainty, but I still think the paper would benefit from the comparison.

Addendum:\
Thank you to the authors for the rebuttal! Given the noted inclusion of UCE vs. ECE experiments, comparison to max predicted probability, added discussion around Nixon et al. 2019, and updated text re: predictive entropy containing both data & model uncertainty, I have increased my score.

5\. Detailed Comments for Authors Please comment on the following, as relevant:

- The significance and novelty of the paper’s contributions.
- The paper’s potential impact on the field of machine learning.

- The degree to which the paper substantiates its main claims.

- Constructive criticism and feedback that could help improve the work or its presentation.

- The degree to which the results in the paper are reproducible.

- Missing references, presentation suggestions, and typos or grammar improvements.

Significance: Considering model uncertainty and the extent to which it is calibrated is well-motivated, as is the usage of it for making rejections in order to improve performance. The experiments indicate that there is promise in calibrating measures that incorporate model uncertainty. However, the experiments do not directly demonstrate the benefit over existing baselines using ECE and the parameters of the (marginal) predictive distribution. One other issue is that the entropy of the marginal posterior predictive distribution is a measure of both data uncertainty and model uncertainty.

Novelty: To the best of the reviewer’s knowledge, a calibration metric for predictive entropy has not been introduced before.

Presentation/clarity:

- p. 1, line 21: “considerably reduce” -> “considerably reduces”\
- p. 2, line 83, left: define ECE and cite Naeini et al., 2015.

6\. Please rate your expertise on the topic of this submission, picking the closest match.

I have published one or more papers in the narrow area of this submission.

7\. Please rate your confidence in your evaluation of this paper, picking the closest match.

I tried to check the important points carefully. It is unlikely, though possible, that I missed something that could affect my ratings.

8\. Datasets If this paper introduces a new dataset, which of the following norms are addressed? (For ICML 2020, lack of adherence is not grounds for rejection and should not affect your score; however, we have encouraged authors to follow these suggestions.)

This paper does not introduce a new dataset (skip the remainder of this question).

12\.
I agree to keep the paper and supplementary materials (including code submissions and Latex source) confidential, and delete any submitted code at the end of the review cycle to comply with the confidentiality requirements.

Agreement accepted

13\. I acknowledge that my review accords with the ICML code of conduct (see https://icml.cc/public/CodeOfConduct).

Agreement accepted

Review \#5
----------

**Questions**

1\. Please summarize the main claim(s) of this paper in two or three sentences.

The main claims are a new metric for uncertainty calibration and the introduction of logit scaling with Gaussian MC Dropout. The logit scaling with MC dropout is analyzed empirically.

2\. Merits of the Paper. What would be the main benefits to the machine learning community if this paper were presented at the conference? Please list at least one.

Calibration and Bayesian approaches are often seen as two different approaches for obtaining better calibrated predictions. This paper shows that calibration is also beneficial for Bayesian DNNs. Furthermore, uncertainty in deep learning is a highly relevant topic, as it is substantial for real world deep learning in safety critical environments. Additional insight is always welcome for advancing the field.

3\. Please provide an overall evaluation for this submission.

Borderline paper, but has merits that outweigh flaws.

4\. Score Justification Beyond what you’ve written above as “merits”, what were the major considerations that led you to your overall score for this paper?

The paper is well written and the analysis is sound. It can still be improved, but due to the importance of the topic and the work’s quality, I deem it over the acceptance threshold. The novelty of the methods is limited, but additional insight is sufficient for advancing a field.

5\. Detailed Comments for Authors Please comment on the following, as relevant:

- The significance and novelty of the paper’s contributions.
- The paper’s potential impact on the field of machine learning. - The degree to which the paper substantiates its main claims. - Constructive criticism and feedback that could help improve the work or its presentation. - The degree to which the results in the paper are reproducible. - Missing references, presentation suggestions, and typos or grammar improvements. The novel methods (uncertainty calibration metric and logit scaling for Gaussian dropout) are straightforward applications of known principles and ideas. However, the paper is still somewhat significant due to the novel insight presented by the authors. Especially the combination of Bayesian approximations and calibration is relevant, as it was recently shown that Bayesian methods do not always lead to better-calibrated predictions. The paper can trigger additional research into the application of calibration methods for Bayesian approximations, which is especially interesting when considering that Bayesian methods are still very expensive and calibrated Bayesian methods may offer a way to mitigate the flaws of cheaper posterior approximations. The claims of the paper are sufficiently substantiated. Approaches and equations are well explained and understandable. However, as the paper mostly depends on its results and analysis, this section should be extended. Some possible improvements are: Comparison with Dirichlet calibration, better comparison with frequentist results (e.g. cECE) and analysis of class distribution changes (within the same dataset). The results are likely reproducible, due to the available code and the use of standard DNN architectures. 6\. Please rate your expertise on the topic of this submission, picking the closest match. I have seen talks or skimmed a few papers on this topic, and have not published in this area. 7\. Please rate your confidence in your evaluation of this paper, picking the closest match.
I am willing to defend my evaluation, but it is fairly likely that I missed some details, didn’t understand some central points, or can’t be sure about the novelty of the work. 12\. I agree to keep the paper and supplementary materials (including code submissions and Latex source) confidential, and delete any submitted code at the end of the review cycle to comply with the confidentiality requirements. Agreement accepted 13\. I acknowledge that my review accords with the ICML code of conduct (see https://icml.cc/public/CodeOfConduct). Agreement accepted Review \#6 ---------- **Questions** 1\. Please summarize the main claim(s) of this paper in two or three sentences. The authors study the problem of calibration of uncertainty inspired by calibration of confidence. Specifically, the authors modify several existing calibration methods to do calibration of uncertainty for Gaussian dropout. The proposed methods are tested on standard calibration tasks in comparison with the corresponding calibration of confidence methods. 2\. Merits of the Paper. What would be the main benefits to the machine learning community if this paper were presented at the conference? Please list at least one. The idea of calibration of uncertainty is interesting and reasonable. As far as I understand, this is the first work to make such an attempt. 3\. Please provide an overall evaluation for this submission. Borderline paper, but the flaws may outweigh the merits. 4\. Score Justification Beyond what you’ve written above as “merits”, what were the major considerations that led you to your overall score for this paper? Although it is interesting to see a paper attempting calibration of uncertainty, the method is very handwavy and lacks justification. 5\. Detailed Comments for Authors Please comment on the following, as relevant: - The significance and novelty of the paper’s contributions. - The paper’s potential impact on the field of machine learning.
- The degree to which the paper substantiates its main claims. - Constructive criticism and feedback that could help improve the work or its presentation. - The degree to which the results in the paper are reproducible. - Missing references, presentation suggestions, and typos or grammar improvements. Compared to previous methods, the only difference is replacing the confidence probability by uncertainty, measured by normalized entropy. The use of normalized entropy as an uncertainty metric and the definition of the perfect calibration of uncertainty still need justification. The authors did not provide a clear connection between normalized entropy and uncertainty, nor between normalized entropy and top-1 error. Therefore, the basis of all the proposed methods in the paper seems very handwavy. For the experiments, the authors seem to only compare with ECE in the first experiment. It would be better to report the ECE results on the other experiments as well. I’m curious if calibrated MC dropout is better than a calibrated point estimate. From the results of the first experiment, it did not seem to be true. Update: I thank the authors for the clarification. However, without seeing the new results, the concerns about experiments remain. Thus I keep the original score. 6\. Please rate your expertise on the topic of this submission, picking the closest match. I have seen talks or skimmed a few papers on this topic, and have not published in this area. 7\. Please rate your confidence in your evaluation of this paper, picking the closest match. I am willing to defend my evaluation, but it is fairly likely that I missed some details, didn’t understand some central points, or can’t be sure about the novelty of the work. 8\. Datasets If this paper introduces a new dataset, which of the following norms are addressed?
(For ICML 2020, lack of adherence is not grounds for rejection and should not affect your score; however, we have encouraged authors to follow these suggestions.) This paper does not introduce a new dataset (skip the remainder of this question). 12\. I agree to keep the paper and supplementary materials (including code submissions and Latex source) confidential, and delete any submitted code at the end of the review cycle to comply with the confidentiality requirements. Agreement accepted 13\. I acknowledge that my review accords with the ICML code of conduct (see https://icml.cc/public/CodeOfConduct). Agreement accepted Rebuttal -------- 1\. Author Response to Reviewers Please use this space to respond to any questions raised by reviewers, or to clarify any misconceptions. Please do not include any links to external material, nor include “late-breaking” results that are not responsive to reviewer concerns. We request that you understand that this year is especially difficult for many people, and to be considerate in your response. We thank the reviewers for their valuable feedback. It has allowed us to improve our paper substantially. We acknowledge Reviewer \#2’s references to Ashukha et al., (2020) and other highly relevant work and will update our literature review accordingly. Reviewer \#2’s main concern seems to be the disadvantages of ECE-like calibration metrics. After carefully reading the suggested literature (Widmann et al., 2019; Ashukha et al., 2020; Nixon et al., 2019), two major concerns with recent calibration metrics are raised, which do not apply to UCE: 1. Non-applicability to multi-class classification: In contrast to ECE, UCE considers all class predictions by using the predictive entropy as the uncertainty metric. We already addressed that in our manuscript and compare to classwise ECE as suggested by Kull et al., (2019). 2.
“ECE-like scores are minimized by a model with constant uniform predictions” (Ashukha et al., 2020; and analogously Nixon et al., 2019): This also does not apply to the UCE metric as uniform predictions would result in high entropy. Consider the following example: Binary classification with balanced class frequencies and a model with constant uniform predictions. This would result in ECE=0%, but UCE=50%. UCE suffers from fixed bin sizes (Nixon et al., 2019), which we will discuss appropriately in our conclusion. This could easily be fixed by combining UCE with adaptive binning from ACE/TACE. We do not believe that the proposed UCE metric is harmful to the community as it does not have the major disadvantages of other ECE-like metrics. UCE is a useful metric and can give valuable insights into the calibration of uncertainty. We focus on Gaussian dropout as we have derived our approach from the MC dropout framework for uncertainty estimation. We will adjust this section and refer to fully factorized Gaussian variational inference to reduce the reader’s confusion. We thank reviewer \#2 for pointing out that temperature scaling was recently applied to MC dropout by Ashukha et al., (2020). We further extend their work by applying more complex logit scaling calibration to a Bayesian classifier obtained from MC dropout. Our work therefore provides additional insights into calibration of Bayesian neural nets. Our results suggest that the more complex calibration methods (like class-wise calibration) are advantageous compared to temperature scaling alone (see bold values in Tab. 1). Based on feedback from reviewers \#4 and \#6, we extended our experiments to emphasize the benefits of calibration according to UCE vs. ECE. We now also compare the rejection and OoD detection experiments to thresholding on the max predicted probability (i.e., Hendrycks et al., 2017). We added additional figures and corresponding text passages to the results section of the manuscript.
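The binary toy example above can be checked with a short sketch. This is a hedged illustration, not the paper’s code: the single-bin forms of ECE and UCE below stand in for the binned definitions, which is exact here because a constant predictor occupies a single bin.

```python
import numpy as np

# Hedged sketch (not the authors' implementation): with a constant predictor,
# all samples fall into one bin, so the binned metrics reduce to a single
# absolute difference per metric.

def ece_single_bin(confidence, accuracy):
    # ECE reduces to |accuracy - confidence| for a single occupied bin.
    return abs(accuracy - confidence)

def uce_single_bin(uncertainty, error):
    # UCE (as described in the rebuttal): |top-1 error - uncertainty| per bin,
    # where uncertainty is the normalized predictive entropy.
    return abs(error - uncertainty)

def normalized_entropy(p):
    # Entropy divided by log(C), so the value lies in [0, 1] for any C.
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p)).sum() / np.log(len(p)))

# Binary classification, balanced classes, constant uniform prediction.
p = np.array([0.5, 0.5])
confidence = p.max()                 # 0.5
accuracy = 0.5                       # balanced classes: half the predictions are right
error = 1.0 - accuracy               # 0.5
uncertainty = normalized_entropy(p)  # 1.0 for a uniform prediction

print(ece_single_bin(confidence, accuracy))  # 0.0 -> ECE = 0%
print(uce_single_bin(uncertainty, error))    # 0.5 -> UCE = 50%

# Normalization makes the uniform case read 1.0 for any number of classes.
for C in (2, 10, 100):
    print(C, normalized_entropy(np.full(C, 1.0 / C)))  # -> 1.0 (up to rounding)
```

The point of the sketch is only that a constant uniform predictor is "perfectly calibrated" under ECE but maximally miscalibrated under UCE, matching the ECE=0%/UCE=50% numbers quoted above.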
Based on the comment of Reviewer \#6, we realized the lack of a clear connection between normalized entropy and uncertainty/top-1 error. The use of predictive entropy to measure predictive uncertainty in classification is well motivated in Gal, (2016) pp. 51–54. Normalization was introduced to restrict the values to \[0, 1\] independent of the number of classes C. Normalization is not essential for calibration but gives a more “intuitive” interpretation of the uncertainty values themselves. When all entries of the probability vector are predicted with equal probability, normalized entropy equals 1.0 and we expect the prediction to be false (i.e. the expectation of the top-1 error to be 1.0). We added a more detailed explanation on the use of normalized entropy to the manuscript. Reviewer \#4 mentioned that “the entropy of the marginal posterior predictive distribution is a measure of both data uncertainty and model uncertainty”. Classification models trained by minimizing NLL (i.e. cross-entropy) already capture a data-dependent uncertainty. Therefore, the predictive entropy contains both data and model uncertainty. We added a sentence for clarification and changed the manuscript accordingly. We hope that our revisions meet the expectations of the reviewers. The comments have greatly helped us to increase the quality of our work. We thank the reviewers for their valuable time. Nixon, J. et al. “Measuring calibration in deep learning.” arXiv preprint arXiv:1904.01685 (2019).\ Widmann, D. et al. “Calibration tests in multi-class classification: A unifying framework.” Advances in Neural Information Processing Systems. 2019.\ Ashukha, A. et al. “Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning.” In International Conference on Learning Representations. 2020.\ Gal, Y. Uncertainty in Deep Learning. PhD thesis, Department of Engineering, University of Cambridge, 2016. 3\.
I certify that this author response conforms to the ICML Code of Conduct (https://www.icml.cc/public/CodeOfConduct) Agreement accepted
--- abstract: 'High-energy evolution equations, such as the BFKL, BK or JIMWLK equations, aim at resumming the high-energy (next-to-)leading logarithms appearing in QCD perturbative series. However, the standard derivations of those equations are performed in a strict high-energy limit, whereas such equations are then applied to scattering processes at large but finite energies. For that reason, there is typically a slight mismatch between the leading logs resummed by those evolution equations without finite-energy corrections and the leading logs actually present in the perturbative expansion of any observable. That mismatch is one of the sources of large corrections at NLO and NLL accuracy. In the case of the BFKL equation in momentum space, that problem is solved by including a kinematical constraint in the kernel, which is the most important finite-energy correction. In this paper, such an improvement of kinematics is performed in mixed-space (transverse positions and $k^+$) and with a factorization scheme in the light-cone momentum $k^+$ (in a frame in which the projectile is right-moving and the target left-moving). This is the usual choice of variables and factorization scheme for the BK equation. A kinematically improved version of the BK equation is provided, consistent at finite energies. The results presented here are also a necessary step towards having the high-energy limit of QCD (including gluon saturation) quantitatively under control beyond strict leading logarithmic accuracy.' author: - Guillaume Beuf bibliography: - 'MaBiblioHEQCD.bib' title: 'Improving the kinematics for low-x QCD evolution equations in coordinate space' --- Introduction {#sec:intro} ============ Large logarithms arise in the perturbative expansion of observables related to QCD scattering processes, when the total energy of the collision is much larger than all the other available scales.
In order to obtain reliable theoretical results, such logarithms have to be resummed, thanks to a high-energy evolution equation. In the case of a collision between two dilute objects, the high-energy leading logarithms (LL) are in principle resummed thanks to the BFKL equation [@Lipatov:1976zz; @Kuraev:1977fs; @Balitsky:1978ic]. When one of the colliding particles is a hadron or nucleus that can be considered dense, one should instead use the JIMWLK equation [@Jalilian-Marian:1997jx; @Jalilian-Marian:1997gr; @Jalilian-Marian:1997dw; @Kovner:2000pt; @Weigert:2000gi; @Iancu:2000hn; @Iancu:2001ad; @Ferreiro:2001qy] or equivalently Balitsky’s hierarchy of equations [@Balitsky:1995ub] in order to resum the LL’s, which take into account high-density effects like gluon saturation [@Gribov:1984tu; @Mueller:1985wy; @McLerran:1993ni; @McLerran:1993ka; @McLerran:1994vd]. These equations also apply to the case of dense-dense collisions [@Gelis:2008rw], such as heavy ion collisions at high energy. In practice, one often uses the BK equation [@Balitsky:1995ub; @Kovchegov:1999yj; @Kovchegov:1999ua] instead, which is a mean-field truncation of Balitsky’s hierarchy. In the standard derivations of all of the aforementioned evolution equations, the high-energy limit is taken in order to simplify the kinematics. These equations are therefore valid for hypothetical collisions at infinite energy, but not necessarily for realistic collisions at large but finite energy, where finite-energy corrections may be quantitatively important. Indeed, one has to include a kinematical constraint into the BFKL equation in momentum space in order to make it self-consistent at finite energies. That kinematical constraint was first proposed as one of the ingredients to build the CCFM equation [@Ciafaloni:1987ur; @Catani:1989sg; @Catani:1989yc], generalizing the BFKL equation. The kinematical constraint for the BFKL equation was further studied in the refs.
[@Andersson:1995jt; @Andersson:1995ju; @Kwiecinski:1996td] and also included, in a different form, into the Monte Carlo code DIPSY [@Avsar:2005iz; @Avsar:2006jy; @Avsar:2007xg; @Flensburg:2008ag; @Flensburg:2010kq; @Flensburg:2011kk; @Flensburg:2012zy]. However, the kinematical constraint was largely overlooked until the BFKL equation was calculated at next-to-leading logarithmic (NLL) accuracy [@Fadin:1998py; @Ciafaloni:1998gs]. It was then noticed that higher order corrections to the BFKL equation are typically larger than the leading order contributions, especially in the collinear limits. Those large corrections signal a breakdown of the perturbation theory resummed thanks to BFKL, and require a further resummation in the collinear regimes. In ref. [@Salam:1998tj], such a collinear resummation was outlined, and it was noticed that the lack of kinematical constraint in the standard BFKL equation at LL is the main (but not unique) reason for the appearance of large NLL corrections. Hence, including the kinematical constraint into the BFKL equation corresponds to performing a significant part of the collinear resummation. Then, the full collinear resummation was performed, within various schemes, in the refs. [@Ciafaloni:1999yw; @Altarelli:1999vw; @Ciafaloni:2003rd; @Altarelli:2005ni; @Ciafaloni:2007gf; @Altarelli:2008aj]. In recent years, a significant effort has been devoted to the calculation of higher order corrections for high-energy processes with gluon saturation. Indeed, the NLL corrections to the BK equation have been calculated [@Balitsky:2008zz; @Balitsky:2009xg], as well as the NLO corrections to Deep Inelastic Scattering (DIS) structure functions [@Balitsky:2010ze; @Beuf:2011xd] and to single inclusive hadron production in pA collisions [@Chirilli:2011km; @Chirilli:2012jd]. The full calculation of the JIMWLK equation and Balitsky’s hierarchy at NLL accuracy is underway, and preliminary results are already available [@Balitsky:2013fea; @Kovner:2013ona].
Before using those higher order results in phenomenology, one should consider the issue of finite-energy corrections and collinear resummations in the presence of gluon saturation. Toy model numerical simulations [@Avsar:2011ds] have demonstrated that saturation effects cannot tame the large higher order corrections, so that collinear resummations have to be performed also in the case of high-energy evolution equations with gluon saturation. Those nonlinear equations are available in mixed-space (transverse position and light-cone momentum $k^+$), whereas the kinematical constraint and the collinear resummations are known for the BFKL equation in momentum space or in Mellin space. The kinematical constraint has been investigated in mixed space only in the seminal paper [@Motyka:2009gi], which nevertheless contains a few shortcomings and inaccuracies. The aim of the present paper is to revisit the issue of the mixed space version of the kinematical constraint and provide a kinematically improved version of the BK equation, self-consistent at finite energies. This paper is organized as follows. In section \[sec:prelim\], after a brief presentation of the various evolution equations aiming at resumming high-energy leading logarithms (LL), several factorization schemes for that resummation are discussed. Then, the sections \[sec:kin\_mom\_space\], \[sec:Mellin\_BFKL\_BK\_LL\_NLL\] and \[sec:NLO\_IF\_analysis\] present various arguments in favor of the kinematical constraint for high-energy evolution equations. Those three sections are essentially independent of each other. More precisely, the derivation of Mueller’s dipole model [@Mueller:1993rr; @Mueller:1994jq] is revisited in section \[sec:kin\_mom\_space\], analyzing carefully the kinematics of the relevant graphs in Light-Front perturbation theory in momentum space, in a way analogous to the ref. [@Motyka:2009gi] but going into more detail.
The section \[sec:Mellin\_BFKL\_BK\_LL\_NLL\] reviews the Mellin space approach for the study of high-energy evolution equations in the dilute (BFKL) regime, and the knowledge about kinematical issues obtained in this way, mostly in ref. [@Salam:1998tj]. The section \[sec:NLO\_IF\_analysis\] is devoted to the analysis of the real NLO corrections to DIS structure functions in the dipole factorization picture, as calculated in ref. [@Beuf:2011xd]. It is shown that those NLO corrections contain fewer LL contributions than the ones resummed by the standard LL evolution equations without kinematical constraint. The section \[sec:kcBK\] presents the construction of a high-energy LL evolution equation in mixed-space with kinematical constraint, using on the one hand the knowledge accumulated in the previous sections and on the other hand the requirement of probability conservation along the initial-state parton cascade. The obtained equation is the main result of the present paper. Conclusions are given in the section \[sec:Discussion\]. Additional material is provided in appendices. The appendix \[App:locality\_kc\] presents a technical extension of the analysis within Light-Front perturbation theory performed in the section \[sec:kin\_mom\_space\]. For completeness, the definition and basic properties of the Laplace transform and the Mellin representation, used several times in this paper, are recalled in the appendices \[App:Laplace\] and \[App:Mellin\] respectively. In the appendix \[App:Yfplus\], some of the calculations performed in the section \[sec:NLO\_IF\_analysis\] are redone within a different prescription, for comparison. A few remarks to the reader are in order. In this paper, the kinematical constraint is discussed thoroughly from multiple perspectives, because it is rather difficult to find the complete picture in the existing literature, where the emphasis is often on technical aspects and not on the physics.
Therefore, there is partial overlap between some of the sections, and one can easily skip some parts of the paper, especially on a first reading. For example, a reader not at ease with Mellin transforms can skip completely all the discussions in Mellin space, which are provided both as a cross-check and in order to make contact with the BFKL literature. A reader already familiar with the need for the kinematical constraint can focus their attention on the sections \[sec:evol\_variables\] and \[sec:kcBK\]. Preliminaries\[sec:prelim\] =========================== LL evolution equations in mixed space in the dipole/CGC framework\[sec:evolEqs\] -------------------------------------------------------------------------------- Following the idea of high-energy operator product expansion [@Balitsky:1995ub], one can obtain high-energy factorization formulae for a wide class of observables, most notably in the cases of deep inelastic scattering (DIS) processes or forward particle production in hadronic collisions. Those factorization formulae typically involve the convolution of perturbatively calculable factors with the expectation value of some operators, which are products of light-like Wilson lines, evaluated in the target state. Contrary to the case of collinear factorization, new operators appear in the high-energy factorization formulae at each perturbative order. At leading order (LO), DIS structure functions and forward single inclusive particle production in hadron-hadron or hadron-nucleus collisions only depend on the dipole operator $${\mathbf S}_{01} = \frac{1}{N_c} \textrm{Tr} \left(U_{\mathbf{x}_{0}}\, U_{\mathbf{x}_{1}}^\dag \right)\, ,\label{dipole_S_matrix}$$ where $U_{\mathbf{x}_{i}}$ is the fundamental Wilson line along the $x^+$ direction[^1], at $x^-=0$ and at the transverse position $\mathbf{x}_{i}$. The expectation value of the operator in the state of the target is denoted $\left\langle {\mathbf S}_{01} \right\rangle_{\eta}$.
Here, $\eta$ is a common regulator for the rapidity divergence of the operator and for the soft divergence of the next-to-leading order (NLO) impact factor, and plays the role of a factorization scale. It will be discussed in more detail in the section \[sec:evol\_variables\]. At leading logarithmic (LL) accuracy in the high-energy limit, the $\eta$-dependence of $\left\langle {\mathbf S}_{01} \right\rangle_{\eta}$ is given by the equation [@Balitsky:1995ub] $$\begin{aligned} \partial_{\eta} \left\langle {\mathbf S}_{01} \right\rangle_{\eta}&=& \bar{\alpha} \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \left\langle{\mathbf S}_{02} {\mathbf S}_{21} \!-\! {\mathbf S}_{01} \right\rangle_{\eta} \, ,\label{B_JIMWLK_dipole}\end{aligned}$$ with the notations $$\begin{aligned} \bar{\alpha}&\equiv &\frac{N_c}{\pi}\, \alpha_s\\ \textbf{K}_{012}&\equiv &\frac{x_{01}^2}{x_{02}^2\, x_{21}^2}\: .\end{aligned}$$ The equation is not closed because its right-hand side involves the new double-trace operator $\left\langle{\mathbf S}_{02} {\mathbf S}_{21} \right\rangle_{\eta}$. The evolution equation for that new operator would involve for example $\left\langle{\mathbf S}_{02} {\mathbf S}_{23} {\mathbf S}_{31} \right\rangle_{\eta}$. Hence the equation is only the first in an infinite hierarchy of coupled equations, called Balitsky’s hierarchy [@Balitsky:1995ub]. In the Color Glass Condensate effective theory (CGC) [@McLerran:1993ni; @McLerran:1993ka; @McLerran:1994vd; @Jalilian-Marian:1997jx; @Jalilian-Marian:1997gr; @Jalilian-Marian:1997dw; @Kovner:2000pt; @Weigert:2000gi; @Iancu:2000hn; @Iancu:2001ad; @Ferreiro:2001qy; @Jeon:2013zga], valid for a dense target, the target is described by a random distribution of classical color charges, corresponding to the large-$x$ partons, and the classical gluon field radiated by those classical charges, corresponding to the low-$x$ partons.
In this context, taking expectation values $\left\langle \cdots \right\rangle_{\eta}$ in the target state reduces to performing the statistical average over the distribution of classical color charges, and ${\eta}$ is also related to the cut-off separating the large-$x$ and low-$x$ partons in the target. In the CGC, the JIMWLK equation [@Jalilian-Marian:1997jx; @Jalilian-Marian:1997gr; @Jalilian-Marian:1997dw; @Kovner:2000pt; @Weigert:2000gi; @Iancu:2000hn; @Iancu:2001ad; @Ferreiro:2001qy] is the renormalization group equation associated with the change of the cut-off between large-$x$ and low-$x$ partons. The JIMWLK equation formally gives the LL evolution for the expectation value of any product of light-like Wilson lines, and thus reproduces in particular Balitsky’s hierarchy. The equation can then be called the B-JIMWLK evolution equation for $\left\langle {\mathbf S}_{01} \right\rangle_{\eta}$. For simplicity, it is often convenient to perform the mean-field approximation $$\left\langle{\mathbf S}_{02} {\mathbf S}_{21} \right\rangle_{\eta}\simeq \left\langle{\mathbf S}_{02} \right\rangle_{\eta}\: \left\langle{\mathbf S}_{21} \right\rangle_{\eta}\, ,\label{mean-field_approx}$$ which allows one to close the equation and gives the Balitsky-Kovchegov (BK) equation [@Balitsky:1995ub; @Kovchegov:1999yj; @Kovchegov:1999ua] $$\begin{aligned} \partial_{\eta} \left\langle {\mathbf S}_{01} \right\rangle_{\eta}&=& \bar{\alpha} \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\: \bigg[ \left\langle{\mathbf S}_{02} \right\rangle_{\eta} \left\langle{\mathbf S}_{21} \right\rangle_{\eta} \!-\!
\left\langle {\mathbf S}_{01} \right\rangle_{\eta}\bigg] \, ,\label{BK_S}\end{aligned}$$ which is also often written in terms of the dipole-target amplitude $\left\langle {\textbf N}_{01} \right\rangle_{\eta}=1-\left\langle {\textbf S}_{01} \right\rangle_{\eta}$ as $$\begin{aligned} \partial_{\eta} \left\langle {\textbf N}_{01} \right\rangle_{\eta}&=& \bar{\alpha} \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\: \bigg[ \left\langle{\textbf N}_{02} \right\rangle_{\eta} \!+\! \left\langle{\textbf N}_{21} \right\rangle_{\eta} \!-\! \left\langle {\textbf N}_{01} \right\rangle_{\eta}\!-\!\left\langle{\textbf N}_{02} \right\rangle_{\eta} \left\langle{\textbf N}_{21} \right\rangle_{\eta}\bigg] \, .\label{BK_N}\end{aligned}$$ In the cases where the target is dilute and thus the amplitude $\left\langle {\textbf N}_{01} \right\rangle_{\eta}$ is much smaller than $1$, it is legitimate to linearize the BK equation, which then reduces to the dipole form [@Mueller:1993rr; @Mueller:1994jq] of the BFKL equation [@Lipatov:1976zz; @Kuraev:1977fs; @Balitsky:1978ic] $$\begin{aligned} \partial_{\eta} \left\langle {\textbf N}_{01} \right\rangle_{\eta}&=& \bar{\alpha} \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\: \bigg[ \left\langle{\textbf N}_{02} \right\rangle_{\eta} \!+\! \left\langle{\textbf N}_{21} \right\rangle_{\eta} \!-\! \left\langle {\textbf N}_{01} \right\rangle_{\eta}\bigg] \, .\label{BFKL_dipole}\end{aligned}$$ High-energy factorization schemes and evolution variables\[sec:evol\_variables\] -------------------------------------------------------------------------------- There are many ways to regulate the rapidity divergence of the light-like Wilson line operators just discussed, and each way is associated with a particular definition of the cut-off variable $\eta$. Nevertheless, at LL accuracy, it is always possible to write the evolution equations in $\eta$ from the previous section.
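As a concrete illustration of the LL equations of the previous subsection, the right-hand side of the BK equation for the amplitude $N$ can be evaluated numerically. The following is a rough sketch under simplifying assumptions (impact-parameter independence, a GBW-like initial amplitude $N(r)=1-e^{-r^2 Q_s^2/4}$, fixed coupling, and illustrative parameter values), not a production BK solver:

```python
import numpy as np

abar, Qs = 0.2, 1.0  # illustrative fixed coupling and saturation scale

def N(r):
    # GBW-like dipole amplitude, assumed to depend only on the dipole size r
    # (impact-parameter independence).
    return 1.0 - np.exp(-0.25 * (r * Qs) ** 2)

def bk_rhs(r01, L=10.0, h=0.05):
    # Midpoint-rule integration over the transverse position x2 of the emitted
    # gluon, with parent-dipole endpoints x0 = (0, 0) and x1 = (r01, 0).
    # Cell centers avoid the (integrable) kernel singularities at x0 and x1.
    c = np.arange(-L + h / 2, L, h)
    X, Y = np.meshgrid(c, c)
    r02 = np.hypot(X, Y)
    r21 = np.hypot(X - r01, Y)
    K = r01**2 / (r02**2 * r21**2)  # dipole kernel K_012
    # BK bracket: N_02 + N_21 - N_01 - N_02 N_21
    integrand = K * (N(r02) + N(r21) - N(r01) - N(r02) * N(r21))
    return abar * h * h * integrand.sum() / (2.0 * np.pi)

print(bk_rhs(0.5))  # positive: the amplitude grows along the evolution
print(bk_rhs(0.0))  # 0.0: a zero-size dipole does not evolve
```

The sketch makes the structure of the equation tangible: the emission terms drive the BFKL-like growth, the quadratic term tames it once $N$ approaches $1$, and the kernel singularities at the parent endpoints are harmless because the bracket vanishes there.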
By contrast, different choices of regularization for the rapidity divergence, or equivalently of high-energy factorization scheme, generically lead to different evolution equations at next-to-leading logarithmic accuracy (NLL) and beyond (see *e.g.* refs. [@Balitsky:2008zz] and [@Balitsky:2009xg]). One possible way to regularize the light-like Wilson line operators is to make them time-like, slightly changing their slope. The variable $\eta$ is then related to the new slope of the Wilson line. That method is commonly used in the case of the TMD-factorization [@Collins_TMD_book] and was used in the case of high-energy factorization for example in the original derivation of Balitsky’s hierarchy [@Balitsky:1995ub]. Another possibility is to forbid some kinematical range to the gluons included in the Wilson line operators by an explicit cut-off. For example, one allows in the Wilson lines only gluons with a $k^+$ smaller than some factorization scale $k^+_f$. That prescription was used in ref. [@Balitsky:2008zz] to derive the NLL corrections to the first B-JIMWLK equation, and it will be used in most of the rest of the present study. In that scheme, the variable $$Y^+_f= \log\left(\frac{k^+_f}{k^+_{\min}}\right) \label{Yfplus_def}$$ is the appropriate evolution variable $\eta$ for the evolution equations above, where at this stage $k^+_{\min}$ is only an arbitrary reference scale in $k^+$. Indeed, quantities such as $\left\langle {\mathbf S}_{01} \right\rangle$ cannot depend on $k^+_f$ alone, but only on a ratio of $k^+$’s, due to the required invariance under longitudinal boosts. There are obvious variants of that regularization and factorization scheme. Indeed, one can instead include only gluons with $k^-$ larger than some factorization scale $k^-_f$ in the Wilson lines.
The associated variable $\eta$ appearing in the evolution equations is then $$Y^-_f= \log\left(\frac{k^-_{\max}}{k^-_f}\right) \label{Yfminus_def}\, .$$ Yet another possibility is to include only gluons with rapidity $y=\log(k^+/k^-)/2$ smaller than a value $y_f$ in the Wilson lines. In that case, the evolution variable $\eta$ is taken to be $$Y_f= y_f-y_{\min} \label{Yf_def}\, .$$ Finally, a last type of regularization and factorization scheme was proposed in ref. [@Balitsky:2009xg], the so-called conformal dipole scheme. In QCD with massless quarks, conformal symmetry should be an anomalous symmetry, *i.e.* broken only by the running of the coupling. However, the regularization schemes discussed earlier lead to an explicit breaking of conformal symmetry. Because of that, scheme-dependent non-conformal terms arise when calculating higher order perturbative corrections to the impact factors and to the high-energy evolution equations. If the Wilson lines are regularized in a conformal way, all such non-conformal higher order terms should disappear. Unfortunately, such a conformal regularization and factorization scheme is not explicitly known, and results in that scheme have been constructed only perturbatively, starting from expressions in a non-conformal scheme [@Balitsky:2009xg; @Balitsky:2010ze]. The physical interpretation of the variable playing the role of $\eta$ in that scheme is also rather obscure. For those reasons, we will not attempt to address the case of the conformal dipole scheme, despite its mathematical attractiveness. Let us come back to the schemes with explicit cut-off in the Wilson lines. In those cases, the factorization scale $k^+_f$ (or $k^-_f$ or $y_f$) is covariant with respect to longitudinal boosts. It is convenient to choose that factorization scale close enough to the corresponding typical scale associated with the projectile, in order to avoid the appearance of potentially large logs in the projectile impact factor.
On the other hand, for finite-energy collisions, the target sets another typical scale in $k^+$ (or $k^-$ or $y$). It is convenient to choose the reference scale $k^+_{\min}$ (or $k^-_{\max}$ or $y_{\min}$) to be that scale provided by the target. Thanks to that choice, the evolution variable $Y^+_f$ (or $Y^-_f$ or $Y_f$) is invariant under longitudinal boosts and carries the dependence on the total energy of the collision. It represents the range over which one should evolve the regularized Wilson line operators with a high-energy evolution equation of the previous section, starting from some initial condition. That initial condition is purely non-perturbative, and encodes the dynamics of the target as seen with a poor time resolution[^2]. In order to get more explicit expressions for the variables $Y^+_f$, $Y^-_f$ and $Y_f$, one has to perform some modelling of the un-evolved target. For our purposes, the following very simple model should be enough. Let the target be a collection of partons, each of them having a transverse mass $Q_0$ and carrying a fraction $x_0$ of the large component $P^-$ of the momentum of the target[^3].
Within that model[^4], it is natural to choose $$\begin{aligned} k^-_{\max} &=& x_0\, P^-\\ k^+_{\min} &=& \frac{Q_0^2}{2\, x_0\, P^-}\label{kplus_min_def}\\ y_{\min} &=& \frac{1}{2} \log\left(\frac{Q_0^2}{2 (x_0\, P^-)^2}\right)\, .\end{aligned}$$ Applying those ideas to the example of DIS at low $x_{Bj}$, mediated by a photon of virtuality $Q^2$ and momentum $q^+$, so that $$x_{Bj} \simeq \frac{Q^2}{2\, P^-\, q^+}\, ,$$ one finds $$\begin{aligned} Y^+_f &=& \log\left(\frac{x_0\, Q^2}{x_{Bj}\, Q_0^2}\right)+ \log\left(\frac{k^+_f}{q^+}\right)\label{Yfplus_final}\\ Y^-_f &=& \log\left(\frac{x_0}{x_{Bj}}\right)+ \log\left(\frac{Q^2}{2\, q^+\, k^-_f}\right)\label{Yfminus_final}\\ Y_f &=& \log\left(\frac{x_0\, Q}{x_{Bj}\, Q_0}\right)+ \log\left(\frac{Q\, e^{y_f}}{\sqrt{2}\, q^+}\right)\label{Yf_final}\, .\end{aligned}$$ In each case, the first term is the dominant one, because the factorization scales should be taken close enough to the scales fixed by the virtual photon $$\begin{aligned} k^+_f &\lesssim & q^+\\ k^-_f &\gtrsim &\frac{Q^2}{2\, q^+}\\ e^{y_f}&\lesssim & \frac{\sqrt{2}\, q^+}{Q}\, .\end{aligned}$$ Most of the studies in the literature have been performed at strict LL accuracy. Accordingly, no distinction between factorization schemes is usually made, and the target is evolved over a range $\log(1/x_{Bj})$ or $\log(x_0/x_{Bj})$ (with typically $x_0=0.01$). That is indeed legitimate at this order. However, it is clear from our discussion that one has to be more careful when trying to include higher order effects consistently. According to the equations , and , the LL terms $(\abar\, Y^+_f)^n$, $(\abar\, Y^-_f)^n$ or $(\abar\, Y_f)^n$ at low $x_{Bj}$ differ from each other by terms of order NLL. Moreover, $\log(Q^2/Q^2_0)$ can be large in practical applications to DIS.
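As a cross-check of those relations, the following numerical sketch evaluates $Y^+_f$, $Y^-_f$ and $Y_f$ both from their definitions and from the rewritten DIS expressions, within the simple target model above. All numerical values (the scales $Q^2$, $Q_0$, $P^-$, $q^+$, $x_0$ and the factorization scales) are arbitrary illustrative choices, not fitted parameters:

```python
import math

# Illustrative DIS kinematics (all values are arbitrary choices)
Q2, q_plus, P_minus = 10.0, 20.0, 500.0   # photon virtuality Q^2, photon q+, target P-
Q0, x0 = 1.0, 0.01                        # target model: parton transverse mass, momentum fraction
x_Bj = Q2 / (2.0 * P_minus * q_plus)      # Bjorken x in the low-x approximation

# Reference scales provided by the un-evolved target model
k_minus_max = x0 * P_minus
k_plus_min = Q0**2 / (2.0 * x0 * P_minus)
y_min = 0.5 * math.log(Q0**2 / (2.0 * (x0 * P_minus)**2))

# Factorization scales, chosen close to the scales fixed by the virtual photon
k_plus_f = 0.8 * q_plus
k_minus_f = 1.2 * Q2 / (2.0 * q_plus)
y_f = math.log(math.sqrt(2.0) * q_plus / math.sqrt(Q2)) - 0.1

# Evolution ranges from the definitions...
Y_plus = math.log(k_plus_f / k_plus_min)
Y_minus = math.log(k_minus_max / k_minus_f)
Y_rap = y_f - y_min

# ...and from the rewritten low-x DIS expressions
Y_plus_DIS = math.log(x0 * Q2 / (x_Bj * Q0**2)) + math.log(k_plus_f / q_plus)
Y_minus_DIS = math.log(x0 / x_Bj) + math.log(Q2 / (2.0 * q_plus * k_minus_f))
Y_rap_DIS = (math.log(x0 * math.sqrt(Q2) / (x_Bj * Q0))
             + math.log(math.sqrt(Q2) * math.exp(y_f) / (math.sqrt(2.0) * q_plus)))
```

Both evaluations agree identically, since the rewriting only uses the low-$x$ expression of $x_{Bj}$ and the model values of $k^+_{\min}$, $k^-_{\max}$ and $y_{\min}$.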
In the following, we will often drop the $f$ subscript in $Y^+_f$, $Y^-_f$ and $Y_f$ for the variable appearing in the high-energy evolution equations, and keep the notation $Y^+_f$, $Y^-_f$ and $Y_f$ for the total range over which the Wilson line correlators should be evolved, *e.g.* , or , given the process considered, the total energy of the collision and the precise choice of factorization scale $k^+_f$ (or $k^-_f$ or $y_f$). As a remark, note that the freedom to choose the evolution variable $Y^+_f$, $Y^-_f$ or $Y_f$ in order to specify an explicit factorization scheme is related in the traditional BFKL formalism to the freedom to choose the reference scale $s_0$ for the total energy. Kinematics of multi-gluon Fock components of photon wave-functions in momentum space\[sec:kin\_mom\_space\] =========================================================================================================== Within the dipole model [@Mueller:1993rr; @Mueller:1994jq], one can obtain the real emission contribution to the LL high-energy evolution equations of the section \[sec:evolEqs\] from the tree-level multi-gluon Fock components of photon wave-functions [@Mueller:1993rr; @Kovchegov:1999yj; @Kovchegov:1999ua], which are calculable for example in light-front perturbation theory. When applied to DIS observables, those multi-gluon Fock components are building blocks for the higher-order corrections to the impact factors (see *e.g.* ref. [@Beuf:2011xd]). The general idea is that the emission of softer and softer gluons in the photon wave-functions tends to somehow factorize from the rest of the wave-functions, and not to modify the kinematics of harder partons, so that by moving the factorization scale (typically $k^+_f$) closer to the projectile one can reinterpret the soft gluon emissions as part of the LL evolution of the target. In that calculation, some kinematical approximations for the soft gluons are crucial, in particular in the energy denominators. 
Those kinematical approximations are usually done in a crude way, sufficient only at strictly LL accuracy. Performing those approximations in a fully self-consistent way leads to the kinematical constraint of refs. [@Ciafaloni:1987ur; @Catani:1989sg; @Catani:1989yc; @Andersson:1995jt; @Andersson:1995ju; @Kwiecinski:1996td]. In the rest of this section, that point is discussed thoroughly, extending the related discussion already available in ref. [@Motyka:2009gi]. The links of the kinematical constraint with the physics of the collinear and anti-collinear limits and with the ordering in formation time are also discussed for completeness. Kinematical approximations in the high-energy limit\[sec:kinematical\_approx\] ------------------------------------------------------------------------------ 1to 10cm[ ![\[Fig:ga2qqbargg\]Example of light-front perturbation theory diagram contributing to the $q\bar{q}gg$ Fock component of a photon. Energy denominators are indicated by the vertical dashed lines. The light-front time $x^+$ increases from the left to the right of the diagram, from $-\infty$ to $0$.](gamma2qqbargg_sample.eps "fig:") ]{} Let us consider the diagram on Fig.\[Fig:ga2qqbargg\], which is a typical contribution to the $q\bar{q}gg$ Fock component of the photon wave-function within light-front perturbation theory. Following the usual rules of light-front perturbation theory[@Kogut:1969xa; @Bjorken:1970ah], there is conservation of the transverse momentum $\mathbf{k}$ and of the $k^+$ at each vertex, but not of the light-front energy $k^-$. Instead, each of the partons is on-shell, *i.e.* $2 k^+\, k^- \!-\!\mathbf{k}^2=0$ because only massless partons will be considered here. The energy denominators, which encode the energy off-shellness of each intermediate Fock state, are given by the difference between the initial $k^-$ and the total $k^-$ of the partons present in the current intermediate Fock state. 
The effective initial $k^-$ associated with an incoming real or virtual photon is[^5] $k^-_{init}=-Q^2/(2\, q^+)$, where $q^+$ is the light-front momentum of the photon. For the diagram on Fig.\[Fig:ga2qqbargg\], one has the obvious momentum conservation relations $\mathbf{k}_0=-\mathbf{k}_1$, ${k_0}^+=q^+\!-\!{k_1}^+$, $\mathbf{k}_1'=\mathbf{k}_1\!-\!\mathbf{k}_2$, ${k_1'}^+={k_1}^+\!-\!{k_2}^+$, $\mathbf{k}_2'=\mathbf{k}_2\!-\!\mathbf{k}_3$ and ${k_2'}^+={k_2}^+\!-\!{k_3}^+$. Hence, the energy denominators $ED_1$, $ED_2$ and $ED_3$ write $$\begin{aligned} ED_1&=& -\frac{Q^2}{2\, q^+}\!-\!{k_0}^-\!-\!{k_1}^- \;\;=\;\; -\frac{Q^2}{2\, q^+}\!-\!\frac{q^+\:{\mathbf{k}_1}^2}{2\, {k_1}^+ (q^+\!-\!{k_1}^+)}\nonumber\\ ED_2&=& -\frac{Q^2}{2\, q^+}\!-\!{k_0}^-\!-\!{k_1'}^-\!-\!{k_2}^- \;\;=\;\; -\frac{Q^2}{2\, q^+}\!-\!\frac{{\mathbf{k}_1}^2}{2\, (q^+\!-\!{k_1}^+)}\!-\!\frac{(\mathbf{k}_1\!-\!\mathbf{k}_2)^2}{2\, ({k_1}^+\!-\!{k_2}^+)}\!-\!\frac{{\mathbf{k}_2}^2}{2\, {k_2}^+} \nonumber\\ ED_3&=& -\frac{Q^2}{2\, q^+}\!-\!{k_0}^-\!-\!{k_1'}^- \!-\!{k_2'}^-\!-\!{k_3}^-\;\;=\;\; -\frac{Q^2}{2\, q^+}\!-\!\frac{{\mathbf{k}_1}^2}{2\, (q^+\!-\!{k_1}^+)}\!-\!\frac{(\mathbf{k}_1\!-\!\mathbf{k}_2)^2}{2\, ({k_1}^+\!-\!{k_2}^+)}\!-\!\frac{(\mathbf{k}_2\!-\!\mathbf{k}_3)^2}{2\, ({k_2}^+\!-\!{k_3}^+)}\!-\!\frac{{\mathbf{k}_3}^2}{2\, {k_3}^+}\, .\label{ED_exact}\end{aligned}$$ One of the most crucial approximations in the derivation of the BFKL equation in the dipole model [@Mueller:1993rr] is that in the case of softer and softer gluon emissions, energy denominators should be dominated by the contribution of the last emitted gluon[^6] $$\begin{aligned} ED_2&\simeq & -{k_2}^- \;\;=\;\;-\frac{{\mathbf{k}_2}^2}{2\, {k_2}^+} \nonumber\\ ED_3&\simeq & -{k_3}^-\;\;=\;\;-\frac{{\mathbf{k}_3}^2}{2\, {k_3}^+}\, .\label{ED_approx}\end{aligned}$$ In the end, it allows one to factorize the emission of each additional softer gluon.
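The dominance of the last emitted gluon in the energy denominators can be illustrated numerically. The sketch below evaluates $ED_1$, $ED_2$ and $ED_3$ exactly and compares them with the soft-gluon approximation, for illustrative kinematics with strong $k^+$ ordering and transverse momenta all of order $Q$ (taken along a common transverse axis for simplicity; all numbers are arbitrary choices):

```python
import math

def kminus(kt, kp):
    # light-front energy of an on-shell massless parton: k- = k_T^2 / (2 k+)
    return kt**2 / (2.0 * kp)

# Illustrative kinematics: strong k+ ordering, comparable transverse momenta
q_plus, Q2 = 1.0, 1.0
k1p, k2p, k3p = 0.4, 1.0e-3, 1.0e-6     # strongly ordered k+ of the emitted partons
k1t, k2t, k3t = 1.0, 1.3, 0.8           # |k_T| all of order Q, along one transverse axis

# Exact energy denominators of the q qbar g g light-cone wave-function
ED1 = -Q2/(2*q_plus) - kminus(k1t, q_plus - k1p) - kminus(k1t, k1p)
ED2 = (-Q2/(2*q_plus) - kminus(k1t, q_plus - k1p)
       - kminus(abs(k1t - k2t), k1p - k2p) - kminus(k2t, k2p))
ED3 = (-Q2/(2*q_plus) - kminus(k1t, q_plus - k1p) - kminus(abs(k1t - k2t), k1p - k2p)
       - kminus(abs(k2t - k3t), k2p - k3p) - kminus(k3t, k3p))

# Soft-gluon approximation: each denominator dominated by the last emitted gluon
ED2_soft = -kminus(k2t, k2p)
ED3_soft = -kminus(k3t, k3p)
```

With these orderings, the relative error of the soft-gluon approximation is at the per-mille level for both $ED_2$ and $ED_3$.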
One usually justifies that approximation by taking the gluons strongly ordered in $k^+$ $$q^+ > {k_1}^+,\: q^+\!-\!{k_1}^+ \gg {k_2}^+ \gg {k_3}^+ \gg \cdots \; ,\label{kplus_ordering}$$ and by assuming that all the transverse momenta are of the same order $$Q^2 \simeq {\mathbf{k}_1}^2 \simeq {\mathbf{k}_2}^2 \simeq {\mathbf{k}_3}^2 \simeq \cdots \; .\label{kt_equal}$$ Because of the strong $k^+$ ordering, one gets in the end a LL high-energy evolution equation , or in the factorization scheme with cut-off in $k^+$, and thus with $Y^+=\log(k^+/k^+_{\min})$ playing the role of the evolution variable $\eta$. However, the kernel of the equation contains typically an unrestricted integration over $\mathbf{k}$, when written in momentum space. And thus there are contributions beyond the assumption which violate the approximation in some parametrically small part of the integration range. Due to this small inconsistency at LL accuracy, pathologically large corrections arise at NLL accuracy and beyond. The other derivations of high-energy evolution equations such as BFKL, BK or JIMWLK always rely on some kinematical approximation equivalent to . Usually, either the kinematics and is assumed, or the $k^+$ ordering is replaced by the $k^-$ ordering $$|ED_1|= \frac{Q^2}{2\, q^+}+\frac{q^+\:{\mathbf{k}_1}^2}{2\, {k_1}^+ (q^+\!-\!{k_1}^+)} \ll {k_2}^- \ll {k_3}^- \ll\cdots \; ,\label{kminus_ordering}$$ or by an ordering in rapidity $y=\log(k^+/k^-)/2$. Those choices provide LL evolution equations in the factorization scheme where respectively $Y^-=\log(k^-_{\max}/k^-)$ or $Y=y-y_{\min}$ plays the rôle of evolution variable. When assuming , those three different possible orderings become equivalent. That is why one obtains in any of those factorization schemes a LL evolution equation with the same kernel.
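The equivalence of the three orderings under the equal-transverse-momenta assumption is elementary to verify: with a common transverse scale, $k^-=\mathbf{k}^2/(2k^+)$ is strongly increasing along a strongly decreasing sequence of $k^+$'s, and the rapidities decrease accordingly. A minimal sketch with arbitrary illustrative values:

```python
import math

# With all transverse momenta of the same order, a strong k+ ordering
# automatically implies the (reversed) k- ordering and the rapidity ordering.
kt = 1.0                                   # common transverse scale (illustrative)
k_plus = [0.5, 1e-2, 1e-4, 1e-6]           # strongly ordered k+ of successive gluons
k_minus = [kt**2 / (2.0 * kp) for kp in k_plus]
rapidity = [0.5 * math.log(kp / km) for kp, km in zip(k_plus, k_minus)]
```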
However, in each case, the transverse integration in the kernel is unrestricted, and contains a regime where the kinematical approximations done in the derivation of the evolution equation are violated. Hence, the standard version of any high-energy evolution equation is not fully self-consistent. This problem generates the largest corrections at higher orders in the evolution equation and in the impact factor of observables sensitive to high-energy logs. Moreover, at NLL accuracy, the kernel starts to depend on the choice of evolution variable $Y^+$, $Y^-$ or $Y$, or equivalently on the factorization scheme. In order to address those issues, let us examine more carefully the energy denominators . When assuming the strong $k^+$ ordering only, and nothing about the transverse momenta, one has the simplification $$\begin{aligned} ED_2&\simeq & -\frac{Q^2}{2\, q^+}\!-\!\frac{{\mathbf{k}_1}^2}{2\, (q^+\!-\!{k_1}^+)}\!-\!\frac{(\mathbf{k}_1\!-\!\mathbf{k}_2)^2}{2\, {k_1}^+}\!-\!\frac{{\mathbf{k}_2}^2}{2\, {k_2}^+} \;\;\simeq \;\; ED_1\!-\!\frac{{\mathbf{k}_2}^2}{2\, {k_2}^+} \label{ED2_approx_kplus_ord}\\ ED_3&\simeq& -\frac{Q^2}{2\, q^+}\!-\!\frac{{\mathbf{k}_1}^2}{2\, (q^+\!-\!{k_1}^+)}\!-\!\frac{(\mathbf{k}_1\!-\!\mathbf{k}_2)^2}{2\, {k_1}^+}\!-\!\frac{(\mathbf{k}_2\!-\!\mathbf{k}_3)^2}{2\, {k_2}^+}\!-\!\frac{{\mathbf{k}_3}^2}{2\, {k_3}^+}\;\;\simeq\;\; ED_1\!-\!\frac{{\mathbf{k}_2}^2}{2\, {k_2}^+}\!-\!\frac{{\mathbf{k}_3}^2}{2\, {k_3}^+}\, .\label{ED3_approx_kplus_ord}\end{aligned}$$ One arrives at the last expression for $ED_2$ using the following reasoning. Due to the strong $k^+$ ordering, the term in ${\mathbf{k}_2}^2/{k_2}^+$ will usually be dominant, except if $\mathbf{k}_2$ is excessively small. In that case, any of the other terms can dominate $ED_2$. In particular, the term containing $(\mathbf{k}_1\!-\!\mathbf{k}_2)^2$ may be dominant only if $(\mathbf{k}_1\!-\!\mathbf{k}_2)^2$ is so much larger than ${\mathbf{k}_2}^2$ that the $k^+$ ordering is compensated.
In that case, it is clear that ${\mathbf{k}_2}^2 \ll (\mathbf{k}_1\!-\!\mathbf{k}_2)^2\simeq {\mathbf{k}_1}^2$. Hence, once the $k^+$ ordering is satisfied, the last expression for $ED_2$ in is always a good approximation, whatever the relative size of $Q^2$, ${\mathbf{k}_1}^2$ and ${\mathbf{k}_2}^2$. One obtains the last expression for $ED_3$ in following the same method. Those results generalize to any light-front perturbation theory tree-level diagram contributing to the quark, anti-quark plus $N$ gluons Fock component of a photon wave-function: if the gluons have a strongly decreasing $k^+$ from the first to the last emitted gluon in light-front time $x^+$, then the energy denominator $ED_{n+1}$ following the $n$-th gluon emission is always well approximated by $$\begin{aligned} ED_{n+1}&\simeq& ED_1\!-\!\frac{{\mathbf{k}_2}^2}{2\, {k_2}^+}\!- \cdots -\!\frac{{\mathbf{k}_{n+1}}^2}{2\, {k_{n+1}}^+} \;\;= \;\; ED_1\!-\!{k_2}^-\!- \cdots -\!{k_{n+1}}^- \, .\label{EDn_approx_kplus_ord}\end{aligned}$$ Hence, assuming the strong $k^+$ ordering , one gets the approximation $$\begin{aligned} ED_{n+1}&\simeq& - \frac{{\mathbf{k}_{n+1}}^2}{2\, {k_{n+1}}^+} \;\;= \;\; - {k_{n+1}}^- \, .\label{EDn_approx}\end{aligned}$$ for each energy denominator if the $k^-$ ordering is also satisfied. It is also possible but more cumbersome to show that if only the $k^-$ ordering is assumed, the approximation is valid precisely when the $k^+$ ordering is satisfied. The approximation , which is necessary to factorize each gluon emission from the previous ones and thus to get a high-energy evolution equation like BFKL or BK, is then valid if both the $k^+$ ordering and the $k^-$ ordering are simultaneously satisfied. By contrast, the assumption is both misleading and meaningless due to the transverse integration in the kernel of the high-energy evolution equations. The $k^+$ ordering and the $k^-$ ordering together imply the rapidity $y$ ordering, which is intermediate between the two.
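The robustness of the approximation $ED_2 \simeq ED_1 - {k_2}^-$ under the $k^+$ ordering alone can be tested numerically in the unfavourable regime where $\mathbf{k}_2$ is excessively small, precisely where the soft-gluon approximation $ED_2\simeq -{k_2}^-$ itself breaks down. The kinematics below are illustrative:

```python
import math

def kminus(kt, kp):
    # k- = k_T^2 / (2 k+) for an on-shell massless parton
    return kt**2 / (2.0 * kp)

q_plus, Q2 = 1.0, 1.0
k1p, k2p = 0.4, 1.0e-3          # strong k+ ordering
k1t, k2t = 1.0, 1.0e-3          # collinear-type hierarchy: k2 "excessively small"

ED1 = -Q2/(2*q_plus) - kminus(k1t, q_plus - k1p) - kminus(k1t, k1p)
ED2_exact = (-Q2/(2*q_plus) - kminus(k1t, q_plus - k1p)
             - kminus(abs(k1t - k2t), k1p - k2p) - kminus(k2t, k2p))

# Approximation valid under the k+ ordering alone, for any transverse hierarchy
ED2_kplus = ED1 - kminus(k2t, k2p)

# The soft-gluon approximation ED2 ~ -k2^- fails badly in this regime
ED2_soft = -kminus(k2t, k2p)
```

Here `ED2_kplus` reproduces `ED2_exact` at the per-mille level, while `ED2_soft` is off by more than three orders of magnitude.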
When writing down a high-energy evolution equation, the choice of evolution variable $Y^+$, $Y^-$ or $Y$ makes the ordering in the corresponding variable ($k^+$, $k^-$ or $y$) automatic. The general idea behind the kinematical constraint [@Ciafaloni:1987ur; @Catani:1989sg; @Catani:1989yc; @Andersson:1995jt; @Andersson:1995ju; @Kwiecinski:1996td] is that one should add a theta function in the kernel of the BFKL (or BK) equation, in order to impose the $k^+$ or $k^-$ (or both) ordering not already guaranteed by the choice of evolution variable. This can be viewed either as an all order resummation of the largest corrections arising at NLL and beyond when calculated in the standard way, or as an improvement of the LL evolution equation, making it kinematically self-consistent. In the appendix \[App:locality\_kc\], the analysis of the kinematics in a dipole cascade within light-front perturbation theory is performed in a more refined way. There, it is shown that, to LL accuracy, the $k^+$ and $k^-$ orderings and are local instead of global, *i.e.* the $k^+$ and $k^-$ of each gluon are constrained only by the $k^+$’s and $k^-$’s of the two partons forming the color dipole emitting that gluon, and not by the $k^+$’s and $k^-$’s of partons present in other branches of the cascade, contrary to statements made in ref. [@Motyka:2009gi]. That locality of the $k^+$ and $k^-$ orderings is crucial in order to be able to write kinematically constrained versions of the BFKL and BK equations. DLL limits\[sec:DLL\_mom\] and the failure of the standard high-energy evolution equations to reproduce both of them -------------------------------------------------------------------------------------------------------------------- The DGLAP evolution of the photon corresponds to the ordering $$Q^2 \ll {\mathbf{k}_1}^2 \ll {\mathbf{k}_2}^2 \ll {\mathbf{k}_3}^2 \ll \cdots \; ,\label{anti_kt_ord}$$ while keeping all the $k^+$’s parametrically of the same order.
When using the obtained photon wave-function to calculate photoproduction or DIS observables, this regime is the anti-collinear regime, relevant mainly for the so-called *resolved photon* contributions. When taking the $k^+$ ordering in addition to the ${\mathbf{k}}^2$ ordering , one arrives at the anti-collinear double leading log (DLL) regime, which is both the low-$x$ limit of the anti-collinear DGLAP evolution and the anti-collinear limit of the low-$x$ evolution equation. In that case, and together imply the approximation of the energy denominators, as well as the $k^-$ ordering . For that reason, one can conclude that generically, low-$x$ evolution equations with $Y^+$ as evolution variable should have a smooth anti-collinear limit, indeed reproducing the low-$x$ limit of the anti-collinear DGLAP evolution. On the other hand, the collinear regime, associated with the DGLAP evolution of the target in the case of DIS, is defined by the ordering $$Q^2 \gg {\mathbf{k}_1}^2 \gg {\mathbf{k}_2}^2 \gg {\mathbf{k}_3}^2 \gg \cdots \; ,\label{kt_ord}$$ while keeping all the $k^-$’s parametrically of the same order. Taking the $k^-$ ordering in addition to the ${\mathbf{k}}^2$ ordering defines the collinear DLL regime. In that regime, the $k^+$ ordering and the approximation of the energy denominators are automatically verified. Hence, any high-energy evolution equation formulated with the $Y^-$ evolution variable should generically give the correct collinear DLL physics in the limit . By contrast, if one assumes both the $k^+$ ordering and the collinear ${\mathbf{k}}^2$ ordering , one cannot deduce anything about the validity or not of the $k^-$ ordering or of the approximation . Hence, if one takes a high-energy evolution equation formulated with the $Y^+$ evolution variable, one has to be careful when discussing the collinear limit .
If the kinematical constraint has been imposed in the kernel of the evolution equation, then the $k^-$ ordering is by definition satisfied, so that the collinear DLL physics is correctly reproduced. On the other hand, if one considers the standard version of the high-energy evolution equation, *i.e.* without kinematical constraint, and derived assuming both and , one cannot obtain the correct collinear DLL limit in the regime , since the $k^-$ are completely unconstrained and unordered. Of course, by symmetry, one expects similar issues in the anti-collinear limit (resp. in both the collinear and anti-collinear limits) when studying a high-energy evolution equation with $Y^-$ (resp. with $Y$) playing the role of evolution variable. Hence, for any standard definition of the multi-Regge kinematics, either and , or and , or rapidity ordering and , one obtains high energy evolution equations which cannot have both the correct collinear and anti-collinear DLL limits. That problem is solved when using the kinematical constraint [@Ciafaloni:1987ur; @Catani:1989sg; @Catani:1989yc; @Andersson:1995jt; @Andersson:1995ju; @Kwiecinski:1996td]. Actually, making appropriate all-order resummations in order to ensure both the correct collinear and anti-collinear DLL limits in the BFKL equation is essentially equivalent [@Salam:1998tj] to imposing the kinematical constraint in the BFKL kernel, *i.e.* to imposing simultaneously the $k^+$ ordering and the $k^-$ ordering . Formation time ordering\[sec:Form\_time\] ----------------------------------------- 1to 10cm[ ![\[Fig:last\_splitting\]Tree level contribution to the wave-function of a particle in light-front perturbation theory, where only the last parton splitting is specified to be a one-to-two splitting. $X$ designates here an arbitrary Fock-state. The last two energy denominators are indicated by the vertical dashed lines.
The light-front time $x^+$ increases from the left to the right of the diagram, from $-\infty$ to $0$.](last_splitting.eps "fig:") ]{} For completeness, let us now discuss the link between energy denominators and formation time. Consider the effect of a one-to-two parton splitting at the end of a parton cascade, as shown in Fig.\[Fig:last\_splitting\]. The energy denominator $ED_{N+1}$ after that last splitting differs from the energy denominator $ED_{N}$ just before that splitting in the following way: the contribution from the parent parton is removed, and replaced by the contributions of the two daughters. Hence, following the notations in Fig.\[Fig:last\_splitting\], one has $$ED_{N+1}=ED_{N} +{k_N}^--{k_{N}'}^--{k_{N+1}}^-\label{EDenom_recur}\, .$$ Restricting ourselves to the case of massless partons and using the momentum conservation relations $$\begin{aligned} {k_N}^+&=&{k_{N}'}^++{k_{N+1}}^+\\ \mathbf{k}_N&=&\mathbf{k}_{N}'+\mathbf{k}_{N+1}\, ,\end{aligned}$$ it is elementary to rewrite the relation as $$ED_{N+1}=ED_{N}-\frac{{k_N}^+\, {\mathbf{q}_{N+1}}^2}{2\, {k_{N}'}^+\, {k_{N+1}}^+} \label{EDenom_recur_2}\, ,$$ introducing the relative transverse momentum of the daughters with respect to the parent parton $$\mathbf{q}_{N+1}=\mathbf{k}_{N+1}-\frac{{k_{N+1}}^+}{{k_N}^+}\, \mathbf{k}_{N} = \frac{{k_{N}'}^+}{{k_N}^+}\, \mathbf{k}_{N} - \mathbf{k}_{N}' = \frac{{k_{N}'}^+}{{k_N}^+}\, \mathbf{k}_{N+1}- \frac{{k_{N+1}}^+}{{k_N}^+}\, \mathbf{k}_{N}' \label{Relative_transv_mon}\, .$$ The absolute value of the second term in the right hand side of the equation is exactly the inverse of the formation time (in $x^+$) associated with the considered parton splitting, *i.e.* the $x^+$ interval it takes for the daughters to be at a larger transverse distance from each other than their transverse wave-length, so that they may lose their quantum coherence. 
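The relation between the change of the energy denominator and the relative transverse momentum of the daughters, whose square sets the inverse formation time, is a purely algebraic identity and can be verified numerically for arbitrary splitting kinematics. The momenta below are illustrative:

```python
# Numerical check of ED_{N+1} = ED_N - kN+ * q^2 / (2 kN'+ k(N+1)+),
# with q the relative transverse momentum of the two daughters.
kNp = 1.0                        # parent k+ (illustrative)
kN1p = 0.35                      # k+ of daughter N+1
kNprp = kNp - kN1p               # k+ of daughter N'
kN = (0.7, -0.3)                 # parent transverse momentum
kN1 = (0.2, 0.5)                 # transverse momentum of daughter N+1
kNpr = (kN[0] - kN1[0], kN[1] - kN1[1])   # transverse momentum conservation

def kminus(kt, kp):
    # k- = k_T^2 / (2 k+) for an on-shell massless parton
    return (kt[0]**2 + kt[1]**2) / (2.0 * kp)

# Change of the energy denominator across the splitting, computed directly...
delta_exact = kminus(kN, kNp) - kminus(kNpr, kNprp) - kminus(kN1, kN1p)

# ...and from the relative transverse momentum (inverse formation time)
q = (kN1[0] - kN1p/kNp * kN[0], kN1[1] - kN1p/kNp * kN[1])
delta_formation = -kNp * (q[0]**2 + q[1]**2) / (2.0 * kNprp * kN1p)
```

The two evaluations agree to machine precision, for any choice of the parent and daughter momenta.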
One can deduce from the recursion relation that, in the case of a parton cascade initiated from a single particle (on-shell or space-like), and involving only one-to-two splittings, each energy denominator is the opposite of the sum of the inverse formation times associated with each of the previous splittings in the cascade. In the case of the simultaneous ordering of partons in $k^+$ and $k^-$ discussed in the section , each energy denominator is dominated by a contribution associated with the last splitting, see the equation . Hence, all cascades satisfying the simultaneous ordering in $k^+$ and $k^-$ are such that the formation times for each of the splittings are strongly decreasing as the cascade develops. And thus the formation time of the whole cascade is essentially the same as the one associated with the very first splitting. Such an ordering in formation time would not always be satisfied, when using the various usual definitions of the multi-Regge kinematics, without the kinematical constraint. In general, QCD parton cascades at tree level can also involve one-to-three parton splittings, either from the local four-gluons vertex of QCD, or from the nonlocal vertices appearing in light-front perturbation theory. However, those vertices do not give rise to high-energy LL contributions, only to NLL ones at best, so that we can indeed ignore them safely in the present discussion. For obvious symmetry reasons, it is tempting to guess that the simultaneous ordering in $k^+$ and $k^-$ also implies an opposite strong ordering of the formation times along $x^-$, in a frame where the same cascade seems to develop from the target instead of from the projectile. However, the $s$-channel picture used here breaks the symmetry between projectile and target, and makes it very cumbersome to check explicitly if that property is indeed true. That issue is beyond the scope of the present study.
Mellin space analysis of BFKL and BK evolutions at LL and NLL accuracy\[sec:Mellin\_BFKL\_BK\_LL\_NLL\] ======================================================================================================= Going to Mellin space allows one to diagonalize the LL BFKL equation, making its study straightforward. At higher order, that representation is still very useful, although running coupling effects bring some complications [@Lipatov:1985uk; @Chirilli:2013kca]. Naively, it seems unlikely that such a linear transformation would help much in order to study the BK or B-JIMWLK equations, due to their nonlinearity. However, in those equations, the virtual terms are free from nonlinear contributions, and in the real terms in mixed space, the linear and nonlinear contributions are such that they have to combine into a product of dipole (or higher multipole) S-matrices, so that for example the linear real terms fully determine the nonlinear ones. The kinematical issues discussed in this study are associated with the probability density of real gluon emission. That probability density is identical in the nonlinear equations and in their BFKL linearization. Hence, those kinematical effects can be conveniently studied in the context of the Mellin representation of the BFKL equation in mixed space. No information about kinematics is lost in the linearization, because in mixed space the nonlinear terms can be reconstructed uniquely from the linearized version of the real terms. In this section, some of the results from the seminal paper [@Salam:1998tj] analyzing the NLL BFKL equation are adapted to mixed space rather than momentum space, and applied in particular to the explicit result [@Balitsky:2008zz] for the NLL B-JIMWLK evolution of a color dipole, in the dilute target regime. Mellin representation of the LL BFKL evolution ---------------------------------------------- Let us choose for example $Y^+$ as evolution variable in the BFKL equation.
Then, one can perform a Laplace transform with respect to $Y^+$ (see appendix \[App:Laplace\]), which is equivalent to a Mellin transform in $k^+$. Concerning the dependence in the transverse variables, one can use a Mellin representation (see appendix \[App:Mellin\]), which has no inverse. For example, one writes the full Mellin representation[^7] $$\left\langle {\textbf N}_{ij} \right\rangle_{Y^+} = \int_{\om_0-i\infty}^{\om_0+i\infty} \frac{\dd \om}{2\pi i}\; e^{\om\, Y^+}\; \hat{\textbf N}_{ij}(\om) = \int_{\om_0-i\infty}^{\om_0+i\infty} \frac{\dd \om}{2\pi i}\; e^{\om\, Y^+} \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{ij}^2 \, Q_0^2}{4}\right)^\g \; \hat{\cal N}(\g,\om)\label{LaplaceMellin_rep}\, ,$$ for the solution of the mixed-space BFKL equation with $\eta=Y^+$. $Q_0$ is an arbitrary momentum scale, which in practice is set to be a typical transverse momentum scale associated with the target. The Laplace transform of the BFKL equation writes $$\begin{aligned} \om\, \hat{\textbf N}_{01}(\om) \!-\!\left\langle {\textbf N}_{01} \right\rangle_{0}&=& \bar{\alpha} \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\: \bigg[ \hat{\textbf N}_{02}(\om) \!+\! 
\hat{\textbf N}_{21}(\om) \!-\!\hat{\textbf N}_{01}(\om)\bigg] \, .\label{BFKL_Laplace}\end{aligned}$$ Introducing the Mellin representation $$\left\langle {\textbf N}_{ij} \right\rangle_{0} =\int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{ij}^2 \, Q_0^2}{4}\right)^\g \; {\cal N}^{\, 0}(\g)\label{Mellin_rep_InitCond}\, .$$ of the initial condition, one obtains from the equation $$\int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g \; \hat{\cal N}(\g,\om)\; \Big[\om- \abar \chi(\g)\Big] =\int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g \; {\cal N}^{\, 0}(\g)\label{LaplaceMellin_rep_BFKL}\, ,$$ where $$\chi(\g)=2 \Psi(1)-\Psi(\g)-\Psi(1\!-\!\g)\label{BFKL_eigenvalue}$$ is the characteristic function of the LL BFKL kernel at zero conformal spin. Here, $\Psi(\g)$ is the digamma function. $\hat{\cal N}(\g,\om)$ has to be singular both in $\om$ and in $\g$, in order to provide a non-zero function $\left\langle {\textbf N}_{ij} \right\rangle_{Y^+}$ via the formula . And due to the relation , the only possible contribution to the integral over $\om$ can come from a single pole of $\hat{\cal N}(\g,\om)$ at $\om= \abar \chi(\g)$. 
Hence, without loss of generality, one can take $$\hat{\cal N}(\g,\om)= \frac{{\cal N}^{\, 0}(\g)}{\om- \abar \chi(\g)}\, , \label{LL_BFKL_sol_in_LaplaceMellin}$$ and thus $$\begin{aligned} \left\langle {\textbf N}_{ij} \right\rangle_{Y^+} &=&\int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{ij}^2 \, Q_0^2}{4}\right)^\g \; \int_{\om_0-i\infty}^{\om_0+i\infty} \frac{\dd \om}{2\pi i}\; e^{\om\, Y^+}\; \frac{ {\cal N}^{\, 0}(\g)}{\om- \abar \chi(\g)}\nonumber\\ &=&\int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{ij}^2 \, Q_0^2}{4}\right)^\g \; {\cal N}^{\, 0}(\g) \; e^{\abar \chi(\g)\, Y^+} \label{LL_BFKL_sol}\, .\end{aligned}$$ The integration over $\g$ can then be estimated using the saddle-point approximation, in either the $Y^+\rightarrow +\infty$, the $x_{ij}^2\rightarrow +\infty$ or the $x_{ij}^2\rightarrow 0$ limit. Since $Y^+$ is used as the evolution variable, one has some control over the $k^+$ of the gluons in the parton cascades contributing to $\left\langle {\textbf N}_{ij} \right\rangle_{Y^+}$: the gluons are ordered in $k^+$ as in when $Y^+$ is large enough. The dipole-target amplitude $\left\langle {\textbf N}_{ij} \right\rangle_{Y^+}$ is related by some Fourier transformation to an unintegrated gluon distribution in the target (see *e.g.* [@Dominguez:2010xd; @Dominguez:2011wm] for more details), the transverse momentum $\mathbf{k}$ of the gluon being the conjugate of the dipole vector $\mathbf{x}_{i}\!-\!\mathbf{x}_{j}$, and thus $|\mathbf{k}|\propto 1/x_{ij}$. Hence, the regime $x_{ij}^2 \gg 4/ Q_0^2$ for $\left\langle {\textbf N}_{ij} \right\rangle_{Y^+}$ corresponds to the ${\mathbf{k}}^2$ ordering , *i.e.* the anti-collinear regime. In that regime, the saddle point for the integration is dominated by the first singularity in $\g$ on the left of the line $\textrm{Re}(\g)=1/2$.
For physically relevant initial conditions, one does not expect ${\cal N}^{\, 0}(\g)$ to have a singularity in the strip $0<\textrm{Re}(\g)\leq 1/2$, so that the dominant singularity is the one of $\chi(\g)$ in $\g=0$. Indeed, $$\chi(\g)=\frac{1}{\g} + {\cal O}\left(\g^2\right) \qquad \qquad \textrm{for } \g\rightarrow 0\, .\label{anticoll_DLL_chi}$$ The limit $x_{ij}^2 \gg 4/ Q_0^2$ with large $Y^+$ corresponds to the anti-collinear DLL regime discussed in the section \[sec:DLL\_mom\], associated with the simultaneous orderings and of the parton cascades. From the relations and , one sees that $\left\langle {\textbf N}_{ij} \right\rangle_{Y^+}$ is driven by a single pole at $\om=\abar/\g$ in that regime. On the other hand, the generic solution to the DGLAP evolution[^8] for $\left\langle {\textbf N}_{ij} \right\rangle_{Y^+}$ in the anti-collinear regime should write $$\begin{aligned} \left\langle {\textbf N}_{ij} \right\rangle_{Y^+} &=&\int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{ij}^2 \, Q_0^2}{4}\right)^\g \; \int_{\om_0-i\infty}^{\om_0+i\infty} \frac{\dd \om}{2\pi i}\; e^{\om\, Y^+}\; \frac{{\hat{N}}_{\, 0}(\om) }{\g- \abar \tilde{P}(\om,\abar)} \label{AC_DGLAP_sol}\, ,\end{aligned}$$ where ${\hat{N}}_{\, 0}(\om)$ is the Laplace transform of the initial condition for that evolution. The DLL limit of that DGLAP solution is associated to $Y^+\rightarrow +\infty$, and thus to $\om\rightarrow 0$ because $\tilde{P}(\om,\abar)$ and ${\hat{N}}_{\, 0}(\om)$ should not have singularities for $\textrm{Re}(\om)>0$. 
To leading order in $\abar$, the DGLAP anomalous dimension writes $$\abar\, \tilde{P}(\om,\abar)= \abar\, \tilde{P}(\om,0)+ {\cal O}(\abar^2)\, \quad \textrm{with} \quad \tilde{P}(\om,0)=\frac{1}{\om}-\left(\frac{11}{12}+\frac{N_f}{6\, N_c^3}\right) + {\cal O}(\om) \quad \textrm{for} \quad \om\rightarrow 0 \, ,\label{DGLAP_Mellin}$$ so that $\left\langle {\textbf N}_{ij} \right\rangle_{Y^+}$ is driven by a single pole at $\g=\abar/\om$ in the anti-collinear DLL regime, in agreement with the result just obtained from the LL BFKL equation. This confirms the fact that the anti-collinear DLL regime is correctly included in the LL BFKL equation with $Y^+$ as evolution variable, as already argued in the section \[sec:DLL\_mom\]. The LL BFKL characteristic function $\chi(\g)$ has the symmetry $\chi(1\!-\!\g)=\chi(\g)$, and thus its first singularity on the right of the line $\textrm{Re}(\g)=1/2$ is the single pole $$\chi(\g)=\frac{1}{1\!-\!\g} + {\cal O}\left((1\!-\!\g)^2\right) \qquad \qquad \textrm{for } \g\rightarrow 1\, .\label{fake_coll_DLL_pole}$$ That pole is driving the behavior of $\left\langle {\textbf N}_{ij} \right\rangle_{Y^+}$ in the limit $x_{ij}^2 \ll 4/ Q_0^2$, which is the position space analog of the transverse momentum ordering . However, the evolution variable is $Y^+$, which can only impose the $k^+$ ordering . Hence, that pole at $\g=1$ does not correspond to the correct collinear DLL limit, but rather to the regime with the simultaneous orderings and , and with the $k^-$’s unconstrained. As discussed in the section \[sec:DLL\_mom\], due to the choice of high-energy factorization scheme with $Y^+$ as evolution variable, one expects the anti-collinear DLL limit to be correctly reproduced but not the collinear DLL limit. In the Mellin representation , this failure thus shows up as unphysical singularities arising at $\g=1$.
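Both poles of $\chi(\g)$ and the symmetry relating them are easy to verify numerically from the series representation $\chi(\g)=\sum_{n\geq 0}\left[1/(n+\g)+1/(n+1-\g)-2/(n+1)\right]$, which follows from the series for the digamma function. A quick sketch (the truncation `nmax` is an arbitrary choice):

```python
import math

def chi(g, nmax=100000):
    """LL BFKL characteristic function chi(g) = 2 psi(1) - psi(g) - psi(1 - g),
    evaluated through the series representation of the digamma function psi."""
    return sum(1.0/(n + g) + 1.0/(n + 1 - g) - 2.0/(n + 1) for n in range(nmax))

print(chi(0.5), 4 * math.log(2))   # value on the critical line: chi(1/2) = 4 ln 2
print(0.01 * chi(0.01))            # pole chi(g) ~ 1/g for g -> 0
print(0.01 * chi(0.99))            # mirror pole chi(g) ~ 1/(1 - g) for g -> 1
```

The symmetry $\chi(1\!-\!\g)=\chi(\g)$ is manifest term by term in the series, and the two single poles are recovered numerically.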
The collinear regime in Mellin representation\[sec:coll\_Mellin\] ----------------------------------------------------------------- Following the momentum-space discussion of the section \[sec:DLL\_mom\], one should keep track of the variable $Y^-$ instead of $Y^+$ when studying the collinear regime, in which the transverse scales are harder and harder when going from the target to the projectile. Hence, it is natural for that purpose to choose a factorization scheme in which $Y^-$ plays the rôle of evolution variable. A change of factorization scheme can modify the NLL kernel of the BFKL or BK evolution but not the LL kernel. Hence, the generic solution of the LL BFKL evolution in such a factorization scheme writes $$\begin{aligned} \left\langle {\textbf N}_{ij} \right\rangle_{Y^-} &=&\int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \bar{\g}}{2\pi i}\, \left(\frac{x_{ij}^2 \, Q_0^2}{4}\right)^{\bar{\g}} \; \int_{\om_0-i\infty}^{\om_0+i\infty} \frac{\dd \bar{\om}}{2\pi i}\; e^{\bar{\om}\, Y^-}\; \frac{\bar{{\cal N}}^{\, 0}(\bar{\g})}{\bar{\om}- \abar \chi(\bar{\g})} \label{LL_BFKL_minus_sol}\, .\end{aligned}$$ The variables $Y^-$ and $Y^+$ are directly related to each other in momentum space. However, due to the transverse Fourier transform, it is difficult to relate the mixed-space representation in $Y^-$ and transverse position to the more usual mixed-space representation in $Y^+$ and transverse position. In full momentum space, one has $$Y^-=\log\left(\frac{k^-_{\max}}{k^-}\right)=\log\left(\frac{2 k^+}{\mathbf{k}^2}\, \frac{Q_0^2}{2 k^+_{\min}}\right) =Y^++\log\left(\frac{Q_0^2}{\mathbf{k}^2}\right)\, .$$ Starting from the mixed-space in $Y^+$ and transverse positions, the best approximation one has for $|\mathbf{k}|$ is $2/x_{ij}$. 
Hence, in that case, it is natural to approximate $Y^-$ as $$Y^-\simeq Y^++\log\left(\frac{x_{ij}^2 \, Q_0^2}{4}\right)\, .\label{Yminus_approx}$$ Following that idea, one can approximate the expression within the standard mixed space as $$\begin{aligned} \left\langle {\textbf N}_{ij} \right\rangle_{Y^+} &\simeq &\int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \bar{\g}}{2\pi i}\, \left(\frac{x_{ij}^2 \, Q_0^2}{4}\right)^{\bar{\g}} \; \int_{\om_0-i\infty}^{\om_0+i\infty} \frac{\dd \bar{\om}}{2\pi i}\; e^{\bar{\om}\, \left(Y^++\log\left(\frac{x_{ij}^2 \, Q_0^2}{4}\right)\right)}\; \frac{\bar{{\cal N}}^{\, 0}(\bar{\g})}{\bar{\om}- \abar \chi(\bar{\g})}\nonumber\\ &\simeq &\int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \bar{\g}}{2\pi i}\, \int_{\om_0-i\infty}^{\om_0+i\infty} \frac{\dd \bar{\om}}{2\pi i}\; \left(\frac{x_{ij}^2 \, Q_0^2}{4}\right)^{\bar{\g}+\bar{\om}} \; e^{\bar{\om}\, Y^+}\; \frac{\bar{{\cal N}}^{\, 0}(\bar{\g})}{\bar{\om}- \abar \chi(\bar{\g})} \label{LL_BFKL_minus_sol_approx}\, .\end{aligned}$$ Comparing the expressions and , one finds that the Laplace-Mellin variables $(\bar{\g},\bar{\om})$ suitable in the collinear regime are related to the Laplace-Mellin variables $(\g,\om)$ suitable in the anti-collinear regime as $$\bar{\om}=\om \quad \textrm{and} \quad \bar{\g}=\g-\om\, .$$ The collinear DLL regime is now obtained by taking the $x_{ij}^2 \ll 4/ Q_0^2$ limit, while supposing $Y^+\!+\!\log\left(\frac{x_{ij}^2 \, Q_0^2}{4}\right)$ large but finite. Due to the behavior , $\left\langle {\textbf N}_{ij} \right\rangle_{Y^+}$ is driven in this regime by a single pole at $\bar{\om}=\abar / (1\!-\! \bar{\g})$. When translated in $(\g,\om)$ variables, this corresponds to $$\om= \frac{\abar}{1\!-\!\g \!+\! 
\om}$$ and thus to $$\begin{aligned} \om&=& \frac{1}{2}\, \left[\sqrt{(1\!-\!\g)^2+4\, \abar}-(1\!-\!\g) \right]\label{coll_DLL_full} \\ &=& \frac{\abar}{(1\!-\!\g)}-\frac{{\abar}^2}{(1\!-\!\g)^3}+{\cal O}\left(\frac{{\abar}^3}{(1\!-\!\g)^5}\right) \quad \textrm{for} \; \abar\rightarrow 0 \; \; \textrm{and} \; \g<1 \, .\label{coll_DLL_series}\end{aligned}$$ Notice that the full expression is regular at $\g=1$, whereas, when truncated at any order, the series has coefficients with severe unphysical singularities at $\g=1$. A similar analysis of the collinear limit was performed in Ref. [@Salam:1998tj] for the case of the BFKL evolution in full momentum space. Spurious singularities in the NLL B-JIMWLK evolution for a dipole\[sec:spurious\_sing\] --------------------------------------------------------------------------------------- The NLL generalization of the B-JIMWLK evolution equation for $\left\langle {\mathbf S}_{01} \right\rangle_{Y^+}$ has been calculated in ref. [@Balitsky:2008zz] in the factorization scheme with cut-off in $k^+$, and using the standard definition of the Regge limit, *i.e.* assuming all the transverse scales to be of the same order. That equation has been studied in Mellin representation within the 2-gluons exchange approximation, valid for a dilute target. In that NLL equation, the terms associated with the one-loop renormalization of the coupling $\abar$ lead to terms with derivatives $\d_\g$ in the Mellin representation of the kernel. 
It is convenient to separate those contributions from the other NLL ones and resum them into the LL part of the equation, by promoting the coupling to a running coupling $$\abar \mapsto \abar(x_{ij}^2)= \frac{1}{b\: \log\left(\frac{4\, \exp({2\, \Psi(1)})}{x_{ij}^2 \, \Lambda_{QCD}^2} \right)}\; ,\label{run_abar}$$ with the one-loop beta function coefficient $$b=\frac{11}{12}-\frac{N_f}{6\, N_c}\label{def_b}$$ and the scale $\Lambda_{QCD}$ from the $\overline{\textrm{MS}}$ scheme at one-loop, with $N_c$ colors and $N_f$ flavors. The factor $4\, \exp({2\, \Psi(1)})$ appearing in the logarithm in the expression comes from the Fourier transform from momentum to transverse position space [@Kovchegov:2006vj], but is not important for our purposes. The simplest choice is the so-called parent dipole prescription, where the coupling in the LL kernel is taken to run with the parent dipole size $x_{01}$. Then, the NLL generalization of the equation can be written in the form $$\begin{aligned} \om\, \hat{\textbf N}_{01}(\om) \!-\!\left\langle {\textbf N}_{01} \right\rangle_{0}&=& \bar{\alpha}(x_{01}^2) \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\: \bigg[ \hat{\textbf N}_{02}(\om) \!+\! \hat{\textbf N}_{21}(\om) \!-\!\hat{\textbf N}_{01}(\om)\bigg]+ \bar{\alpha}^2(\cdots) \int \textbf{K}^{NLL}\: \otimes \:\hat{\textbf N}_{ij}(\om)\label{BFKL_NLL_dipole_Laplace_1}\\ &=& \bar{\alpha}(x_{01}^2) \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g \; \chi(\g) \; \hat{\cal N}(\g,\om)\nonumber\\ & & + \bar{\alpha}^2(\cdots) \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g \; \chi_1(\g) \; \hat{\cal N}(\g,\om) \, .\label{BFKL_NLL_dipole_Laplace}\end{aligned}$$ In the right hand side of the equation , the scale for the coupling in the NLL contribution is unconstrained at this order. 
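For orientation, the beta-function coefficient and the running coupling just introduced can be evaluated as follows (a sketch; the numerical values of $\Lambda_{QCD}$ and of the dipole size are purely illustrative):

```python
import math

GAMMA_E = 0.5772156649015329          # Euler-Mascheroni constant, psi(1) = -gamma_E

def beta0(Nc=3, Nf=3):
    """One-loop coefficient b = 11/12 - Nf/(6 Nc)."""
    return 11.0/12.0 - Nf/(6.0*Nc)

def abar_run(x2, Lambda2=0.04, Nc=3, Nf=3):
    """Running coupling abar(x^2) = 1/(b log(4 exp(2 psi(1))/(x^2 Lambda^2))),
    with x^2 in GeV^-2 and Lambda^2 in GeV^2 (illustrative units)."""
    return 1.0 / (beta0(Nc, Nf) * math.log(4.0*math.exp(-2.0*GAMMA_E)/(x2*Lambda2)))

print(beta0())                         # 0.75 for Nc = Nf = 3
print(abar_run(0.1), abar_run(1.0))    # asymptotic freedom: smaller dipole, smaller coupling
```

The factor $4\,e^{2\Psi(1)}$ appears inside the logarithm exactly as in the expression above, with $\Psi(1)=-\gamma_E$.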
The characteristic function $\chi_1(\g)$ of the NLL kernel $\textbf{K}^{NLL}$ in the 2-gluons exchange approximation has been calculated in ref. [@Balitsky:2008zz], and shown to be identical[^9] to the characteristic function of the momentum-space NLL BFKL kernel [@Fadin:1998py; @Ciafaloni:1998gs], up to terms depending on the choices of running coupling prescription and of factorization scheme, as expected. $\chi_1(\g)$ is regular for $\g$ between $0$ and $1$, and has multiple poles at $\g=0$ and $\g=1$, which write $$\chi_1(\g)= -\frac{b}{\g^2} -\left(\frac{11}{12}+\frac{N_f}{6\, N_c^3}\right) \frac{1}{\g^2} + {\cal O}\left(\frac{1}{\g}\right) \qquad \qquad \textrm{for } \g\rightarrow 0\label{double_poles_0}$$ and $$\chi_1(\g)= -\frac{1}{(1\!-\!\g)^3} -\left(\frac{11}{12}+\frac{N_f}{6\, N_c^3}\right) \frac{1}{(1\!-\!\g)^2} + {\cal O}\left(\frac{1}{1\!-\!\g}\right) \qquad \qquad \textrm{for } \g\rightarrow 1\label{double_and_triple_poles_1}\, .$$ In the anti-collinear regime, associated with $\g\rightarrow 0$, the presence of a double pole implies that NLL corrections dominate over LL terms, which contain only a simple pole, see . Hence, the perturbative expansion of the BK (or BFKL) kernel breaks down in the anti-collinear regime. The collinear regime is driven by the first singularities on the right of $\g=1/2$, which are here at $\g=1$. At this point there is a triple pole in addition to the double pole, so that the breakdown of the perturbative expansion of the BK kernel is even more severe in the collinear regime than in the anti-collinear regime. The presence of that triple pole at $\g=1$ together with the absence of a triple pole at $\g=0$ confirms the previous analysis of the collinear and anti-collinear regimes, as this triple pole in is precisely the second term in the expansion .
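The mechanism behind that triple pole, namely a resummed collinear-DLL intercept that is regular at $\g=1$ while its order-by-order truncation is not, can be illustrated numerically (a sketch with arbitrary values of $\abar$; `omega_full` and `omega_series` are hypothetical helper names):

```python
import math

def omega_full(g, abar):
    """Solution of w = abar/(1 - g + w): regular at g = 1, where it equals sqrt(abar)."""
    return 0.5 * (math.sqrt((1 - g)**2 + 4*abar) - (1 - g))

def omega_series(g, abar):
    """Truncation at second order, abar/(1-g) - abar^2/(1-g)^3: singular at g = 1."""
    return abar/(1 - g) - abar**2/(1 - g)**3

# away from g = 1 and for small abar the two agree...
print(omega_full(0.3, 0.01), omega_series(0.3, 0.01))
# ...but near g = 1 the truncated series blows up while the full solution stays finite
print(omega_full(0.999, 0.2), omega_series(0.999, 0.2))
```

The second term of the truncated series is exactly the contribution matching the triple pole discussed above.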
Hence, the triple pole at $\g=1$ is an artifact of the factorization scheme based on the $Y^+$ variable, which is the appropriate variable to keep in addition to the transverse ones in the anti-collinear regime but not in the collinear regime. In Mellin space, this is associated with the mismatch between the variables $\g$ and $\bar{\g}$, as already discussed. The correct prediction of the pattern of triple poles also suggests that the approximation of $Y^-$ is good enough for our purposes. The aim of the present study is to propose a consistent scheme for the resummation of such large higher order corrections coming from factorization scheme issues. However, for completeness, let us also discuss the large corrections associated with the double poles at $0$ or $1$, before closing this section. The first contribution in the expression , proportional to $b$, is obviously associated with the running of the coupling. One expects that the physically correct scale for the running coupling in the LL kernel is the hardest available, *i.e.* the smallest dipole size among parent and daughters, when one is much smaller than the two others. The scale $x_{01}^2$ chosen for the coupling $\abar$ is thus correct in the collinear regime $x_{01}^2\ll x_{02}^2 \sim x_{21}^2$, but inappropriate in the anti-collinear regimes $x_{02}^2\ll x_{01}^2 \sim x_{21}^2$ and $x_{21}^2\ll x_{01}^2 \sim x_{02}^2$. That explains why a large correction proportional to $b$ appears in $\chi_1(\g)$ in the anti-collinear limit $\g\rightarrow 0$ but not in the collinear limit $\g\rightarrow 1$. For the same reason, one can also predict the appearance of poles of order $n+1$ with residues related to $b$ in the characteristic function of the N$^n$LL kernel at $\g=0$ but not at $\g=1$. In order to make the perturbative expansion more stable, one should then choose a running coupling prescription which is physically correct in all the limits, so that such large higher order corrections do not appear.
In position space, the two available running coupling prescriptions which satisfy that requirement are Balitsky’s prescription [@Balitsky:2006wa] and the minimal dipole size prescription $\abar(\min (x_{01}^2,x_{02}^2,x_{21}^2))$. Despite sharing the same behavior in all the relevant limits, those two prescriptions are not identical and give different quantitative results [@Berger:2011ew]. Notice that in the Kovchegov-Weigert prescription [@Kovchegov:2006vj] the scale in the coupling does not reduce to the parent dipole size in the collinear limit, so that it should induce poles of order $n+1$ in the characteristic function of the N$^n$LL kernel at $\g=1$, making the perturbative expansion unstable in the collinear regime. The second term contributing to the double pole at $\g\rightarrow 0$ in the expression is inherited from the LO DGLAP anomalous dimension due to the duality between the DGLAP anomalous dimension and the BFKL characteristic function (see *e.g.* [@Altarelli:1999vw]). In general, after removing the contributions related to running coupling, the perturbative expansion of the DGLAP anomalous dimension can be written to all orders as $$\g=\abar\, \tilde{P}(\om,\abar)=\sum_{n=1}^{+\infty} \sum_{m=0}^{+\infty} p_{n,m}\; \left(\frac{\abar}{\om}\right)^n\, \om^m \, ,\label{DGLAP_Mellin_exp}$$ where the terms with a given $n$ sum up to the N$^{n\!-\!1}$LO contribution to the DGLAP anomalous dimension, for example $$\tilde{P}(\om,0)= \sum_{m=0}^{+\infty} p_{1,m}\; \om^{m\!-\!1} \, .\label{DGLAP_Mellin_exp_LO}$$ Similarly, the perturbative expansion of the BFKL intercept can be written to all orders as (discarding running coupling contributions) $$\om=\sum_{q=1}^{+\infty} \sum_{k=0}^{+\infty} c_{q,k}\; \left(\frac{\abar}{\g}\right)^q\, \g^k \, ,\label{BFKL_Mellin_exp}$$ where the terms with a given $q$ sum up to the N$^{q\!-\!1}$LO contribution to the BFKL intercept, for example at LO $$\chi(\g)= \sum_{k=0}^{+\infty} c_{1,k}\; \g^{k\!-\!1} \,
.\label{BFKL_Mellin_exp_LO}$$ The all-order expansions and have to coincide in the anti-collinear low-$x$ regime $\g$, $\om$, $\abar\rightarrow 0$ with $\g \om \sim\abar$. Then, the coefficients $c_{q,k}$ are fully determined by the full set of coefficients $p_{n,m}$ and vice versa. In particular, the terms in the anomalous dimension which are the most singular in the low-$x$ limit ($\om\rightarrow 0$), which are of the form $p_{n,0}\; (\abar/\om)^n$, are fully determined by the LO contributions to the BFKL intercept. The first terms give $$\begin{aligned} p_{1,0}&=& c_{1,0} =1 \label{DLL_DGLAP_coeff}\\ p_{2,0}&=& p_{1,0}\, c_{1,1}=0%\nonumber\\ %p_{3,0}&=& p_{1,0}\, c_{1,0}\, c_{1,2} -p_{1,0}\, (c_{1,1})^2 +2\, p_{2,0}\, c_{1,1}=0 \, ,\label{sing_DGLAP_coeff}\end{aligned}$$ where the values $c_{1,0}=1$ and $c_{1,1}=0$ have been read off from the expansion . Hence, the absence of spurious singularities at low-$x$ in the DGLAP evolution at NLO is due to the (*a priori* accidental) cancellation of the term of order $\g^0$ in the expansion of the LO BFKL characteristic function. Conversely, the terms in the BFKL intercept which are the most singular in the anti-collinear limit ($\g\rightarrow 0$), of the form $c_{q,0}\; (\abar/\g)^q$, are fully determined by the LO contributions to the DGLAP anomalous dimension. At first order, one gets the relation once again, and at the next order $$\begin{aligned} c_{2,0}&=& c_{1,0}\, p_{1,1}=-\left(\frac{11}{12}+\frac{N_f}{6\, N_c^3}\right) \, ,\label{sing_BFKL_coeff}\end{aligned}$$ where the value of the coefficient $p_{1,1}$ has been read off from the expansion . Hence, the second term in the expression is inherited from the LO DGLAP anomalous dimension as announced. Furthermore, one can predict poles of order $n+1$ in the N$^n$LL BFKL characteristic function and calculate their residues from the expansion of the LO DGLAP anomalous dimension .
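The relation $c_{2,0}=c_{1,0}\, p_{1,1}$ can be illustrated by inverting the truncated LO anomalous dimension numerically (a sketch; the chosen values of $\abar$ and $\g$ are arbitrary, and `omega_from_dglap` is a hypothetical helper name):

```python
Nc, Nf = 3, 3
A = 11.0/12.0 + Nf/(6.0*Nc**3)   # minus p_{1,1}, the O(w^0) term of the LO anomalous dimension

def omega_from_dglap(g, abar):
    """Invert g = abar*(1/w - A), i.e. the LO DGLAP anomalous dimension
    truncated after its O(w^0) term."""
    return abar / (g + abar * A)

# extract the coefficient of (abar/g)^2 in the resulting BFKL intercept
abar, g = 1e-4, 0.05
u = abar / g
c20_est = (omega_from_dglap(g, abar) - u) / u**2
print(c20_est)   # approaches -A = -(11/12 + Nf/(6 Nc^3)) as abar/g -> 0
```

The extracted coefficient reproduces the double-pole residue quoted above for $\chi_1(\g\rightarrow 0)$.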
The same analysis can be done in the collinear regime $\bar{\g}\rightarrow 1$ instead of $\g\rightarrow 0$. It would predict the double pole term in the expansion , up to higher order terms due to the change of variable from $\bar{\g}$ to $\g$. Hence, the second term in the expansions and should be resummed in principle by promoting the collinear and anti-collinear DLL terms in the LO BFKL or BK kernel to the full collinear and anti-collinear LO DGLAP. This task has been performed for the BFKL case both in Mellin space [@Ciafaloni:1999yw; @Altarelli:1999vw; @Altarelli:2005ni] and in momentum space [@Ciafaloni:2003rd]. In the case of the BK equation, it is the main step left for further studies towards a full resummation of the pathologically large higher order contributions to the kernel in mixed space. In that case, the main difficulty is to see how the DGLAP evolution of the target arises in the dipole framework. Analysis of the NLO DIS impact factors\[sec:NLO\_IF\_analysis\] =============================================================== At low $x_{Bj}$, it is convenient to parameterize the DIS cross section by the transverse and longitudinal virtual photon cross sections $\sigma_{T,L}^{\gamma}(Q^2,x_{Bj})$, related to the usual structure functions $F_L(Q^2,x_{Bj})$ and $F_2(Q^2,x_{Bj})=F_T(Q^2,x_{Bj})+F_L(Q^2,x_{Bj})$ by $$F_{T,L}(Q^2,x_{Bj})=\frac{Q^2}{(2\pi)^2\, \alpha_{em}}\; \sigma^{\gamma}_{T,L}(Q^2,x_{Bj})\, .\label{rel_FTL_sigmaTL}$$ At low $x_{Bj}$, those photon cross sections $\sigma_{T,L}^{\gamma}$ obey at LO the dipole factorization [@Bjorken:1970ah; @Nikolaev:1990ja]. The real NLO corrections to the dipole factorization have been calculated in Ref.[@Beuf:2011xd].
The expression for $\sigma_{T,L}^{\gamma}$ at NLO writes[^10] $$\begin{aligned} \sigma_{T,L}^{\gamma}(Q^2,x_{Bj})&=& \frac{4\, N_c\, \alpha_{em}}{(2\pi)^2}\sum_f e_f^2 \int \textrm{d}^2\mathbf{x}_{0} \int \textrm{d}^2\mathbf{x}_{1} \int_0^1 \textrm{d} z_1\, \Bigg\{ \bigg[\mathcal{I}_{T,L}^{LO}({x}_{01},z_1,Q^2)+{\cal O}(\abar) \bigg] \Big[1- \left\langle {\mathbf S}_{01} \right\rangle_{0}\Big]\nonumber\\ & & + \bar{\alpha} \int_{k^+_{\min}/q^+}^{1\!-\!z_1}\frac{\textrm{d}z_2}{z_2}\; \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \mathcal{I}_{T,L}^{NLO}(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},z_1,z_2,Q^2)\: \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{0}\Big] %\left\langle \left(1\!-\!\frac{1}{N_c^2}\right) - \left({\mathbf S}_{02}\,{\mathbf S}_{21}\!-\!\frac{1}{N_c^2}{\mathbf S}_{01}\right) \right\rangle_{0} \Bigg\}\, ,\label{sigma_TL_z_min}\end{aligned}$$ where ${\mathbf S}_{012}$ is the operator describing the eikonal interaction of a $q\bar{q}g$ tripole with the target, *i.e.* $${\mathbf S}_{012}\equiv \frac{1}{N_c\, C_F}\: \textrm{Tr} \left(U_{\mathbf{x}_{0}}\, t^a\, U_{\mathbf{x}_{1}}^\dag\, t^b \right)\: \tilde{U}^{ba}_{\mathbf{x}_{2}} =\frac{N_c}{2 C_F} \bigg[ {\mathbf S}_{02}\,{\mathbf S}_{21}-\frac{1}{N_c^2} {\mathbf S}_{01}\bigg]\, ,\label{def_tripole}$$ $\tilde{U}^{ba}_{\mathbf{x}_{2}}$ being in the adjoint representation. The notation $\left\langle\, \cdots \right\rangle_{0}$ indicates that the expectation values of the operators should be evaluated, at this stage, in a quasi-classical approximation, such as the MV model [@McLerran:1993ni; @McLerran:1993ka; @McLerran:1994vd] in the case of a large nuclear target. The expression is indeed valid at strict NLO accuracy, and does not yet contain the resummation of high-energy LL. 
The light-cone momentum $k^+_2=z_2 q^+$ of the additional gluon in the photon wave-function has been bounded by the longitudinal resolution $k^+_{\min}$ of the target, in order to regulate the integral over $z_2$. Using the model for the target proposed in the section \[sec:evol\_variables\], the lower cut-off in $z_2$ becomes $$z_{\min}=\frac{k^+_{\min}}{q^+}=\frac{x_{Bj}\, Q_0^2}{Q^2\, x_0}\, .$$ In the dipole factorization formula , the LO impact factors are $$\begin{aligned} \mathcal{I}_{L}^{LO}({x}_{01},z_1,Q^2)&=& 4 Q^2 z_1^2 (1\!-\!z_1)^2\, \textrm{K}_0^2\!\left(QX_2\right)\label{ImpFact_LO_L}\\ \mathcal{I}_{T}^{LO}({x}_{01},z_1,Q^2)&=& \big[z_1^2+(1\!-\!z_1)^2\big] z_1 (1\!-\!z_1) Q^2 \textrm{K}_1^2\!\left(QX_2\right)\label{ImpFact_LO_T}\, ,\end{aligned}$$ whereas the longitudinal NLO impact factor is $$\begin{aligned} \mathcal{I}_{L}^{NLO}(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},z_1,z_2,Q^2)&=&4 Q^2 \, \textrm{K}_0^2\!\left(QX_3 \right) \Bigg\{ \frac{z_1^2 (1\!-\!z_1)^2}{{x}_{20}^2}\, {\cal P}\!\left(\frac{z_2}{1\!-\!z_1}\right)\nonumber\\ & &+\frac{(z_1\!+\!z_2)^2 (1\!-\!z_1\!-\!z_2)^2 }{2\, {x}_{21}^2}\, \left[1+\left(1\!-\!\frac{z_2}{z_1\!+\!z_2}\right)^2\right]{\cal P}\!\left(\frac{z_2}{z_1\!+\!z_2}\right)\nonumber\\ & & -2 z_1 (1\!-\!z_1) (z_1\!+\!z_2) (1\!-\!z_1\!-\!z_2) \left[1\!-\!\frac{z_2}{2(1\!-\!z_1)}\!-\!\frac{z_2}{2(z_1\!+\!z_2)}\right] \left(\frac{\mathbf{x}_{20}\cdot\mathbf{x}_{21}}{{x}_{20}^2\; {x}_{21}^2}\right) \Bigg\} \label{ImpFact_NLO_L}\end{aligned}$$ and the transverse one $$\begin{aligned} & &\mathcal{I}_{T}^{NLO}(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},z_1,z_2,Q^2)= \Bigg[\frac{Q X_3\, \textrm{K}_1\!\left(Q X_3\right)}{X_3^2}\Bigg]^2 \Bigg\{z_1^2 (1\!-\!z_1)^2 \big[z_1^2+(1\!-\!z_1)^2\big] \left(\mathbf{x}_{10}\!-\!\frac{z_2}{1\!-\!z_1} \mathbf{x}_{20}\right)^2 \frac{\,{\cal P}\!\left(\frac{z_2}{1\!-\!z_1}\right)}{{x}_{20}^2}\nonumber\\ & &+(1\!-\!z_1\!-\!z_2)^2 (z_1\!+\!z_2)^2 
\big[(1\!-\!z_1\!-\!z_2)^2+(z_1\!+\!z_2)^2\big] \left(\mathbf{x}_{01}\!-\!\frac{z_2}{z_1\!+\!z_2} \mathbf{x}_{21}\right)^2 \frac{\,{\cal P}\!\left(\frac{z_2}{z_1\!+\!z_2}\right)}{{x}_{21}^2}\nonumber\\ & &+2 z_1 (1\!-\!z_1) (1\!-\!z_1\!-\!z_2) (z_1\!+\!z_2) \Big[z_1(z_1\!+\!z_2)+(1\!-\!z_1\!-\!z_2)(1\!-\!z_1)\Big]\nonumber\\ & & \quad \times \left[1\!-\!\frac{z_2}{2(1\!-\!z_1)}\!-\!\frac{z_2}{2(z_1\!+\!z_2)}\right] \left(\mathbf{x}_{10}\!-\!\frac{z_2}{1\!-\!z_1} \mathbf{x}_{20}\right)\!\!\cdot\!\! \left(\mathbf{x}_{01}\!-\!\frac{z_2}{z_1\!+\!z_2} \mathbf{x}_{21}\right) \left(\frac{\mathbf{x}_{20}\cdot\mathbf{x}_{21}}{{x}_{20}^2\; {x}_{21}^2}\right)\nonumber\\ & & + \frac{z_2^2\, z_1\, (1\!-\!z_1\!-\!z_2)\, (1\!-\!2 z_1\!-\!z_2)^2}{(1\!-\!z_1) (z_1\!+\!z_2)}\; \frac{\big(\mathbf{x}_{20}\wedge\mathbf{x}_{21}\big)^2}{{x}_{20}^2\; {x}_{21}^2}\nonumber\\ & & + z_2\, z_1^2\, (1\!-\!z_1\!-\!z_2) \bigg[\frac{z_1\, (1\!-\!z_1\!-\!z_2)}{(1\!-\!z_1)}+\frac{(1\!-\!z_1)^2}{(z_1\!+\!z_2)} \bigg] \left(\mathbf{x}_{10}\!-\!\frac{z_2}{1\!-\!z_1} \mathbf{x}_{20}\right)\!\!\cdot\!\! \left(\frac{\mathbf{x}_{20}}{{x}_{20}^2}\right)\nonumber\\ & &+ z_2\, z_1\, (1\!-\!z_1\!-\!z_2)^2 \bigg[\frac{z_1\, (1\!-\!z_1\!-\!z_2)}{(z_1\!+\!z_2)}+\frac{(z_1\!+\!z_2)^2}{(1\!-\!z_1)} \bigg] \left(\mathbf{x}_{01}\!-\!\frac{z_2}{z_1\!+\!z_2} \mathbf{x}_{21}\right)\!\!\cdot\!\! 
\left(\frac{\mathbf{x}_{21}}{{x}_{21}^2}\right)\nonumber\\ & &+ \frac{z_2^2\, z_1^2\, (1\!-\!z_1\!-\!z_2)^2}{2} \bigg[\frac{1}{(1\!-\!z_1)^2}+\frac{1}{(z_1\!+\!z_2)^2} \bigg] \Bigg\} \, .\label{ImpFact_NLO_T}\end{aligned}$$ The variables $X_2$ and $X_3$ which appear in the impact factors are defined by $$\begin{aligned} X_2^2&=& z_1\, (1\!-\!z_1)\, {x}_{01}^2\label{X2}\\ X_3^2&=& z_1\, (1\!-\!z_1\!-\!z_2)\, {x}_{01}^2 + z_2\, (1\!-\!z_1\!-\!z_2)\, {x}_{02}^2 + z_2\, z_1\, {x}_{21}^2\label{X3}\, ,\end{aligned}$$ and $${\cal P}(z)=\frac{1}{2} \left[1+ \left(1\!-\!z\right)^2\right]=1-z+\frac{z^2}{2}\label{cal_P}$$ is related to the non-regularized quark to gluon LO DGLAP splitting function as $$P_{gq}(z)=2\, C_F\;\frac{{\cal P}(z)}{z}\label{Pgq_splitting}\, .$$ Mixed-space analysis of the real NLO DIS impact factors ------------------------------------------------------- ### DIS impact factors and the formation time of intermediate Fock states\[sec:Form\_time\_mixed\_space\] As argued in the section II.C.2 of Ref.[@Beuf:2011xd], the factors in the LO and NLO DIS impact factors , , and containing the modified Bessel functions $\textrm{K}_0$ or $\textrm{K}_1$ have a kinematical origin. The quantities $2 q^+\, X_2^2$ and $2 q^+\, X_3^2$ are mixed-space expressions for the formation time of the Fock state ($q\bar{q}$ or $q\bar{q}g$ respectively) in the photon wave-function which is resolved by interaction with the target. On the other hand, $2 q^+/Q^2$ is the lifetime of the virtual photon. Hence, $Q^2 X_2^2$ and $Q^2 X_3^2$ are the ratios of the formation time of the $q\bar{q}$ and $q\bar{q}g$ Fock states over the photon lifetime. The Fock states which do not have enough time to form during the photon lifetime should not contribute to the photon-target cross sections. That property is guaranteed by the exponential suppression at large values of $Q^2 X_2^2$ or $Q^2 X_3^2$ provided by the modified Bessel functions in the LO and NLO DIS impact factors.
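The kinematical variables above satisfy two simple consistency properties that can be checked directly: $X_3^2$ reduces to $X_2^2$ in the soft-gluon limit $z_2\rightarrow 0$, and ${\cal P}(z)$ indeed equals $1-z+z^2/2$ (a sketch with arbitrary numerical values):

```python
def X2sq(z1, x01sq):
    """X_2^2 = z1 (1 - z1) x_01^2."""
    return z1 * (1 - z1) * x01sq

def X3sq(z1, z2, x01sq, x02sq, x21sq):
    """X_3^2 = z1 (1-z1-z2) x_01^2 + z2 (1-z1-z2) x_02^2 + z2 z1 x_21^2."""
    return z1*(1 - z1 - z2)*x01sq + z2*(1 - z1 - z2)*x02sq + z2*z1*x21sq

def calP(z):
    """P(z) = (1/2)[1 + (1-z)^2]; the splitting function is P_gq(z) = 2 C_F P(z)/z."""
    return 0.5 * (1.0 + (1.0 - z)**2)

# soft-gluon limit: the q qbar g formation time reduces to the q qbar one
print(X3sq(0.3, 0.0, 1.7, 2.2, 0.9), X2sq(0.3, 1.7))
```

The first check is the mixed-space counterpart of the statement that an infinitely soft gluon does not change the formation time of the Fock state.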
In the real-photon limit $Q^2\!\!\rightarrow\!\! 0$, the longitudinal photon contribution disappears as expected, thanks to $Q^2 \, \textrm{K}_0^2\!\left(QX_n \right)\!\rightarrow\! 0$. And the only change in the transverse photon case is the switch from exponential suppression to power suppression $Q^2 \, \textrm{K}_1^2\!\left(QX_n \right)\!\rightarrow\! 1/X_n^2$ at large $X_n$, effectively allowing the contribution to $\sigma_{T}^{\gamma}(Q^2\!=\!0,x_{Bj})$ of Fock states with arbitrarily large formation time.[^11] Physically, one expects that NLO corrections can contribute to high-energy logarithms only if there is a strong ordering in the formation time, as discussed in the section \[sec:Form\_time\], with the $q\bar{q}g$ state being a short-lived fluctuation of a $q\bar{q}$ dipole. Indeed, when $X_3^2\simeq X_2^2$, the Bessel function factor in $\mathcal{I}_{T,L}^{NLO}$ reduces to the one in $\mathcal{I}_{T,L}^{LO}$. ### Resummation of high-energy LL’s in DIS at NLO\[sec:std\_subtr\_LL\] When calculating an observable at fixed order in perturbation theory in the high-energy limit, one obtains large logs at each order, starting at NLO. By definition, high-energy evolution equations like those discussed in the section \[sec:evolEqs\] should allow us to resum the leading high-energy logs appearing in the higher order perturbative corrections into the non-perturbative objects appearing at lower orders. In the case of DIS cross sections , only the dipole amplitude $\left\langle {\mathbf S}_{01} \right\rangle_{0}$ appears at LO. After performing the LL resummation, it should be replaced in the LO contribution by the dipole amplitude $\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}$ evolved over the range $Y_f^+=\log(k_f^+/k^+_{\min})=\log(z_f/z_{\min})$, where $k_f^+$ is the chosen factorization scale.
In practice, one substitutes in the LO contribution the expression $$\left\langle {\mathbf S}_{01} \right\rangle_{0}=\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}+\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\, ,$$ where $\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}$ is a counter-term allowing one to avoid double counting of LL’s. When using the standard version of the dipole B-JIMWLK evolution equation at LL with the factorization scheme in $k^+$, the counter-term writes $$\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL}}= -\, \bar{\alpha} \int_{0}^{Y_f^+} \!\!\!\!\dd Y_2^+ \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \left\langle{\mathbf S}_{02} {\mathbf S}_{21} \!-\! {\mathbf S}_{01} \right\rangle_{Y_2^+}\, .$$ That counter-term is supposed to remove the LL contributions from the fixed-order NLO corrections. It can be split uniquely into a term associated with real corrections (or $q\bar{q}g$ Fock states) and one associated with virtual corrections (or $q\bar{q}$ Fock states) as $$\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL}}=\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL, real}}+\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL, virt}}\, ,$$ with $$\begin{aligned} \delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL, real}}&=& \bar{\alpha} \int_{0}^{Y_f^+} \!\!\!\!\dd Y_2^+ \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y_2^+}\Big]\label{LL_ct_real}\\ \delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL, virt}}&=& -\, \bar{\alpha} \int_{0}^{Y_f^+} \!\!\!\!\dd Y_2^+ \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{01} \right\rangle_{Y_2^+}\Big]\label{LL_ct_virt}\,
.\end{aligned}$$ At this stage, the photon-target cross-section becomes, at NLO+LL accuracy, $$\begin{aligned} & &\!\!\!\!\!\!\!\!\!\!\!\!\!\! \sigma_{T,L}^{\gamma}(Q^2,x_{Bj})= \frac{4\, N_c\, \alpha_{em}}{(2\pi)^2}\sum_f e_f^2 \int \textrm{d}^2\mathbf{x}_{0} \int \textrm{d}^2\mathbf{x}_{1} \int_0^1 \textrm{d} z_1\, \Bigg\{ \mathcal{I}_{T,L}^{LO}({x}_{01},z_1,Q^2) \bigg[1- \left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL}}\:\bigg]\nonumber\\ & &\qquad\qquad\qquad +{\cal O}(\abar) \Big[1- \left\langle {\mathbf S}_{01} \right\rangle_{0}\Big]-\mathcal{I}_{T,L}^{LO}({x}_{01},z_1,Q^2) \:\: \delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL, virt}} \nonumber\\ & &\!\!\!\!\!\!\!\!\!\!\!\!\!\! + \bar{\alpha} \int_{k^+_{\min}/q^+}^{1\!-\!z_1}\frac{\textrm{d}z_2}{z_2}\; \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \mathcal{I}_{T,L}^{NLO}(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},z_1,z_2,Q^2)\: \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{0}\Big]-\mathcal{I}_{T,L}^{LO}({x}_{01},z_1,Q^2) \:\: \delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL, real}} %\left\langle \left(1\!-\!\frac{1}{N_c^2}\right) - \left({\mathbf S}_{02}\,{\mathbf S}_{21}\!-\!\frac{1}{N_c^2}{\mathbf S}_{01}\right) \right\rangle_{0} \Bigg\} .\label{sigma_TL_NLO+LL}\end{aligned}$$ High-energy LL’s should cancel independently between the virtual terms in the second line of and between real terms in the third line. In the expression , the operator expectation values $\left\langle {\mathbf S}_{01} \right\rangle$ and $\left\langle {\mathbf S}_{012} \right\rangle$ are not evaluated at the same $Y^+$ in the fixed order results and in the counter-terms and . However, that mismatch is an effect of order NNLO (or NLL) in the photon-target cross-section, beyond the accuracy of the present results and irrelevant when discussing the cancelation of high-energy LL’s in the second and third lines of . 
The NLO DIS impact factors and satisfy the factorization property $$\mathcal{I}_{T,L}^{NLO}(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},z_1,z_2=0, Q^2) = \textbf{K}_{012}\;\;\; \mathcal{I}_{T,L}^{LO}({x}_{01},z_1,Q^2)\label{Fact_IF_NLO_zero_z2}$$ in the case of an infinitely soft gluon. Hence, one can rewrite the last term in the equation as $$\begin{aligned} \mathcal{I}_{T,L}^{LO}({x}_{01},z_1,Q^2) \:\: \delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL, real}}&=&\bar{\alpha} \int_{k^+_{\min}/q^+}^{z_f}\frac{\textrm{d}z_2}{z_2}\; \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \mathcal{I}_{T,L}^{NLO}(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},z_1,z_2=0,Q^2)\nonumber\\ & &\qquad\qquad\qquad\qquad\qquad\qquad \times \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y_2^+}\Big] \, ,\label{LL_ct_real_NLO-like}\end{aligned}$$ making the comparison with the original real NLO contribution easier. A reasonable choice of the factorization scale $k_f^+=z_f\, q^+$ should be such that $z_{\min}\ll z_f\lesssim (1\!-\! z_1)$, so that $\log ((1\!-\! z_1)/z_f)$ is not large. Therefore, in the first term in the third line of , only the interval $[z_{\min},z_f]$ can produce high-energy LL’s in the $z_2$ integral. Then, the counter-term can be taken into account via a “$+$” prescription.
For example, if one approximates $\left\langle {\mathbf S}_{012} \right\rangle_{Y_2^+}$ by $\left\langle {\mathbf S}_{012} \right\rangle_{0}$ (up to higher order corrections) in the counter-term , the third line of can be written as $$\begin{aligned} & &\bar{\alpha} \int_{z_f}^{1\!-\!z_1}\frac{\textrm{d}z_2}{z_2}\; \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \mathcal{I}_{T,L}^{NLO}(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},z_1,z_2,Q^2)\: \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{0}\Big]\nonumber\\ &&+ \bar{\alpha}\int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \frac{2\, C_F}{N_c} \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{0}\Big]\: \int_{0}^{z_f}\frac{\textrm{d}z_2}{(z_2)_{+}}\; \mathcal{I}_{T,L}^{NLO}(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},z_1,z_2,Q^2)\:\, ,\label{NLO_DIS_with_LL_subtr}\end{aligned}$$ with, by definition, $$\frac{1}{(z)_{+}}\: f(z)= \frac{1}{z}\: \Big(f(z)-f(0)\Big)\, . \label{plus_prescription}$$ Indeed, the “$+$” prescription removes the log divergence from the $z_2$ integral, allowing one to drop the lower cut-off $z_{\min}=k^+_{\min}/q^+$. This treatment of the LL resummation and of the soft divergence of the NLO DIS impact factor is equivalent to the methods used in refs. [@Balitsky:2010ze; @Beuf:2011xd], and also in ref. [@Chirilli:2012jd] in the case of forward single-inclusive hadron production in pA collisions. In all those references, the absence of a soft log divergence was implicitly taken as evidence for the absence of high-energy LL’s in the final expression of the real NLO correction after subtraction of the counter-term, implying that the LL’s have been properly resummed into the $\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}$ evolved with the standard dipole B-JIMWLK equation . However, as will be discussed in the next section \[sec:Issues\_LL\_subtr\], that assumption is not correct.
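The effect of the “$+$” prescription can be illustrated with a small numerical sketch (with a toy smooth function standing in for the $z_2$-dependence of the integrand; the function names and parameter values are ours, purely for illustration):

```python
import math

def bare_integral(f, zmin, zf, n=100000):
    # integral_{zmin}^{zf} dz f(z)/z, computed in the variable u = log(z)
    # (midpoint rule); it grows like f(0)*log(zf/zmin) as zmin -> 0.
    a, b = math.log(zmin), math.log(zf)
    h = (b - a) / n
    return h * sum(f(math.exp(a + (i + 0.5) * h)) for i in range(n))

def plus_integral(f, zf, n=100000):
    # integral_0^{zf} dz [f(z) - f(0)]/z : subtracting f(0) makes the
    # integrand finite at z -> 0, so no lower cut-off is needed.
    h = zf / n
    return h * sum((f((i + 0.5) * h) - f(0.0)) / ((i + 0.5) * h)
                   for i in range(n))

f = lambda z: math.exp(-z)  # toy stand-in for the z2-dependence

# the bare integral drifts by ~ f(0)*log(10^3) when the cut-off is lowered,
drift = bare_integral(f, 1e-6, 1.0) - bare_integral(f, 1e-3, 1.0)
# while the "+"-prescribed integral is cut-off independent and finite
finite = plus_integral(f, 1.0)
```

For this toy choice the “$+$”-prescribed integral equals $-\mathrm{Ein}(1)\approx -0.7966$, independently of any cut-off, while the bare integral keeps growing logarithmically.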
The scale $Y^+$ at which the operator expectation values $\left\langle {\mathbf S}_{01} \right\rangle$ and $\left\langle {\mathbf S}_{012} \right\rangle$ in the NLO corrections and in the counter-term should be evaluated is not under control at this order in perturbation theory. However, a change in that scale is expected to have a sizable effect in practice, so that this issue deserves further discussion. It is natural to expect that some of the NLL terms contained in NNLO corrections to $\sigma_{T,L}^{\gamma}$ will contribute to the LL evolution of the operator present in the real NLO correction up to some scale $Y^+>0$, in the same way as the LL contributions contained in NLO corrections have led to evolving $\left\langle {\mathbf S}_{01} \right\rangle$ from $Y^+=0$ up to $Y^+=Y_f^+$. Such an effect might also modify the scale $Y^+$ relevant for the counterterms and for the virtual NLO correction. Ultimately, the scale $Y^+$ at which the operator expectation values $\left\langle {\mathbf S}_{01} \right\rangle$ or $\left\langle {\mathbf S}_{012} \right\rangle$ have to be evaluated should be the same in each NLO correction as in the corresponding counterterm, in order to ensure the cancelation of the $z_2\rightarrow 0$ divergence to all orders. The two most natural values one can expect for that scale are: - the factorization scale $Y_{f}^+$ - the scale $Y_2^+=\log(z_2\; q^+/k^+_{\min})=\log(z_2/z_{\min})$, as suggested by the expressions and of the counter-term. It will be shown in the section \[sec:Mellin\_NLO\_IF\] and in the appendix \[App:Yfplus\] that $Y_2^+$ is the only one of those possibilities which allows for a smooth transition between the high-energy regime and the collinear regime.
### Issues with the standard resummation of high-energy LL’s\[sec:Issues\_LL\_subtr\] The standard resummation of low-x LL’s from the NLO to the LO term explained in the previous section is motivated by the assumption that at sufficiently small $z_2$, $\mathcal{I}_{T,L}^{NLO}$ is well approximated by its value at $z_2=0$, which takes the factorized form . That assumption is indeed true in most of the phase-space, but not in all of it. For any given small but finite $z_2$, that approximation of $\mathcal{I}_{T,L}^{NLO}$ breaks down when $\mathbf{x}_{2}$ is far enough from $\mathbf{x}_{0}$ and $\mathbf{x}_{1}$ in the transverse plane. In that case, one can have in particular $X_3\gg X_2$, so that the exact $\mathcal{I}_{T,L}^{NLO}$ is exponentially smaller than its factorized approximation , due to the behavior of the Bessel function factor. In that regime, the counter-term is much larger in absolute value than the real NLO term before subtraction, *i.e.* the first term in the third line of the equation , so that one is doing an over-subtraction of leading logs. Moreover, the exponential suppression of the unsubtracted real NLO term in that kinematical regime is precisely the property discussed in the section \[sec:Form\_time\_mixed\_space\]: a $q\bar{q}g$ Fock state should not contribute to the photon-target cross section if its formation time is larger than the virtual photon lifetime. Due to its inability to reproduce that physically-motivated suppression of $q\bar{q}g$ Fock states with a gluon emitted at a very large transverse distance, the counter-term gives a sizable negative contribution to $\sigma_{T,L}^{\gamma}$ from a kinematical regime where nothing should happen. Those serious issues with the standard resummation of low-x LL’s and the associated counter-terms are the counterpart for the DIS impact factors in mixed space of the kinematical issues with the low-$x_{Bj}$ evolution equations discussed in the section \[sec:kin\_mom\_space\].
Imposing the kinematical constraint (or its mixed space version) introduced in that section allows one to solve simultaneously the kinematical problems encountered in the analysis of the evolution equations and of the DIS impact factors, as will be shown in the rest of the present paper. ### Better soft gluon approximation to the real NLO DIS impact factors\[sec:NLO\_IF\_approx\] In order to understand more quantitatively the problems with the standard resummation of high-energy LL, one has to study more carefully the behavior of the impact factors $\mathcal{I}_{T,L}^{NLO}$ at low but finite $z_2$. Assuming $z_2\ll z_1$ and $z_2\ll 1\!-\!z_1$ but nothing about $x_{10}$, $x_{20}$ and $x_{21}$, the longitudinal NLO impact factor simplifies as $$\begin{aligned} \mathcal{I}_{L}^{NLO}(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},z_1,z_2,Q^2) &\simeq &4 Q^2 \, \textrm{K}_0^2\!\left(QX_3\right) z_1^2 (1\!-\!z_1)^2 \Bigg\{\frac{1}{{x}_{20}^2} +\frac{1}{{x}_{21}^2} -2 \left(\frac{\mathbf{x}_{20}\cdot\mathbf{x}_{21}}{{x}_{20}^2\; {x}_{21}^2}\right) \Bigg\} \qquad \textrm{for } z_2\ll z_1, 1\!-\!z_1\nonumber\\ &\simeq & \mathcal{I}_{L}^{LO}({x}_{01},z_1,Q^2) \quad \frac{\textrm{K}_0^2\!\left(QX_3\right)}{\textrm{K}_0^2\!\left(QX_2\right)} \quad \textbf{K}_{012}\, .\end{aligned}$$ Hence, at low $z_2$, $\mathcal{I}_{L}^{NLO}$ is well approximated by its $z_2=0$ value if and only if $X_3\simeq X_2$, in agreement with the previous discussion. Assuming $z_2\ll z_1$ and $z_2\ll 1\!-\!z_1$ only, the expression for $X_3^2$ simplifies a bit, as $$\begin{aligned} X_3^2&\simeq & z_1\, (1\!-\!z_1)\, {x}_{10}^2 + z_2\, (1\!-\!z_1)\, {x}_{20}^2 + z_2\, z_1\, {x}_{21}^2\label{X3_low_z2_bis}\, .\end{aligned}$$ Generically, the first term tends to dominate the expression, so that $X_3$ reduces to $X_2$. However, this is not true anymore when ${x}_{20}^2$ or ${x}_{21}^2$ is so much larger than ${x}_{10}^2$ that the smallness of $z_2$ is compensated.
In this regime, one has necessarily ${x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2$, which allows one to simplify further the expression . Hence, assuming $z_2\ll z_1$ and $z_2\ll 1\!-\!z_1$ only, the expressions $$\begin{aligned} X_3^2&\simeq & z_1\, (1\!-\!z_1)\, {x}_{10}^2 + z_2\, {x}_{20}^2\nonumber\\ &\simeq & z_1\, (1\!-\!z_1)\, {x}_{10}^2+ z_2\, {x}_{21}^2\label{X3_low_z2}\end{aligned}$$ always provide correct approximations of $X_3^2$, regardless of the relative transverse distances ${x}_{10}$, ${x}_{20}$ and ${x}_{21}$. The approximation is the mixed space analog of the approximation of the energy denominators in momentum space. For the impact factor $\mathcal{I}_{L}^{NLO}$, the situation is thus the following. At low $z_2$, *i.e.* $z_2\ll z_1\lesssim 1$ and $z_2\ll 1\!-\!z_1\lesssim 1$, one should split the integration range for $\mathbf{x}_{2}$ into two domains: - For $z_1\, (1\!-\!z_1)\, {x}_{10}^2\gg z_2\, {x}_{20}^2$ and/or $z_1\, (1\!-\!z_1)\, {x}_{10}^2\gg z_2\, {x}_{21}^2$, $\mathcal{I}_{L}^{NLO}$ is well approximated by its factorized $z_2=0$ value - For $z_1\, (1\!-\!z_1)\, {x}_{10}^2\lesssim z_2\, {x}_{20}^2 \simeq z_2\, {x}_{21}^2$, the expression is a bad approximation of $\mathcal{I}_{L}^{NLO}$, so that the standard resummation of high-energy LL is not correct in this regime. The analysis of the transverse impact factor $\mathcal{I}_{T}^{NLO}$ is more cumbersome not only due to its more complicated expression , but also because transverse recoil effects start to matter.
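As a quick numerical sanity check of this claim (a sketch with arbitrarily chosen transverse coordinates and momentum fractions), one can compare the full low-$z_2$ expression for $X_3^2$ with the two simplified forms, both for a nearby and for a distant gluon:

```python
def d2(a, b):
    # squared transverse distance between points a = (ax, ay), b = (bx, by)
    return (a[0] - b[0])**2 + (a[1] - b[1])**2

def X3sq_full(x0, x1, x2, z1, z2):
    # low-z2 form: z1(1-z1) x10^2 + z2(1-z1) x20^2 + z2 z1 x21^2
    return (z1*(1 - z1)*d2(x1, x0) + z2*(1 - z1)*d2(x2, x0)
            + z2*z1*d2(x2, x1))

def X3sq_simple(x0, x1, x2, z1, z2, leg):
    # the two simplified forms: z1(1-z1) x10^2 + z2 x20^2 (or + z2 x21^2)
    xx = d2(x2, x0) if leg == 0 else d2(x2, x1)
    return z1*(1 - z1)*d2(x1, x0) + z2*xx

z1, z2 = 0.4, 1e-3
x0, x1 = (0.0, 0.0), (1.0, 0.0)      # parent dipole, x10 = 1

for x2 in [(0.3, 0.5),       # nearby gluon: the x10^2 term dominates
           (200.0, 150.0)]:  # distant gluon: x20 ~ x21 >> x10
    full = X3sq_full(x0, x1, x2, z1, z2)
    for leg in (0, 1):
        # both simplified forms agree with the full one at the percent level
        assert abs(X3sq_simple(x0, x1, x2, z1, z2, leg) - full) < 0.01*full
```

In the first case all forms reduce to $z_1(1\!-\!z_1)x_{10}^2$; in the second, $x_{20}\simeq x_{21}$ makes the two simplified forms equivalent, as stated in the text.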
Assuming $z_2\ll z_1$ and $z_2\ll 1\!-\!z_1$ only, $\mathcal{I}_{T}^{NLO}$ reduces to $$\begin{aligned} & &\mathcal{I}_{T}^{NLO}(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},z_1,z_2,Q^2)\simeq \frac{Q^2 \, \textrm{K}_1^2\!\left(Q X_3\right)}{X_3^2} \Bigg\{z_1^2 (1\!-\!z_1)^2 \big[z_1^2+(1\!-\!z_1)^2\big]\nonumber\\ & &\times\bigg[ \left(\mathbf{x}_{10}\!-\!\frac{z_2}{1\!-\!z_1} \mathbf{x}_{20}\right)^2 \frac{1}{{x}_{20}^2}+ \left(\mathbf{x}_{10}\!+\!\frac{z_2}{z_1} \mathbf{x}_{21}\right)^2 \frac{1}{{x}_{21}^2}-2 \left(\mathbf{x}_{10}\!-\!\frac{z_2}{1\!-\!z_1} \mathbf{x}_{20}\right)\!\!\cdot\!\! \left(\mathbf{x}_{10}\!+\!\frac{z_2}{z_1} \mathbf{x}_{21}\right) \left(\frac{\mathbf{x}_{20}\cdot\mathbf{x}_{21}}{{x}_{20}^2\; {x}_{21}^2}\right)\bigg]\nonumber\\ & & + z_2\, z_1\, (1\!-\!z_1) \big[z_1^2+(1\!-\!z_1)^2\big] \bigg[ \left(\mathbf{x}_{10}\!-\!\frac{z_2}{1\!-\!z_1} \mathbf{x}_{20}\right)\!\!\cdot\!\! \left(\frac{\mathbf{x}_{20}}{{x}_{20}^2}\right) -\left(\mathbf{x}_{10}\!+\!\frac{z_2}{z_1} \mathbf{x}_{21}\right)\!\!\cdot\!\! \left(\frac{\mathbf{x}_{21}}{{x}_{21}^2}\right)\bigg]\nonumber\\ & &+ \frac{z_2^2}{2}\big[z_1^2+(1\!-\!z_1)^2\big]+z_2^2\, (1\!-\!2 z_1)^2\; \frac{\big(\mathbf{x}_{20}\wedge\mathbf{x}_{21}\big)^2}{{x}_{20}^2\; {x}_{21}^2} \Bigg\} \, .\label{ImpFact_NLO_T_low_z_2_generic}\end{aligned}$$ As discussed in the section II.C.1 of Ref.[@Beuf:2011xd], $x_{10}$ is the transverse distance between the quark and the anti-quark at the time $x^+=0$ when the $q\bar{q}g$ state crosses the target. However, $x_{10}$ does not necessarily reflect the size of the parent dipole before emission of the gluon, due to transverse recoil effects. 
Taking those recoil effects into account, the relevant size of the parent dipole is either $$\left|\mathbf{x}_{10}\!-\!\frac{z_2}{1\!-\!z_1} \mathbf{x}_{20}\right| \quad \textrm{or} \quad \left|\mathbf{x}_{10}\!+\!\frac{z_2}{z_1\!+\!z_2} \mathbf{x}_{21}\right|\simeq \left|\mathbf{x}_{10}\!+\!\frac{z_2}{z_1} \mathbf{x}_{21}\right|\, ,$$ depending on whether the gluon is emitted from the quark or the anti-quark. Let us first consider the regime $${x}_{10}\gg\frac{z_2}{1\!-\!z_1} {x}_{20} \quad \textrm{and} \quad {x}_{10}\gg\frac{z_2}{z_1} {x}_{21}\, ,\label{no_recoil_regime}$$ while still assuming $z_2\ll z_1$ and $z_2\ll 1\!-\!z_1$. Then, the transverse recoil effects are negligible, and so are the contributions in the third and fourth lines of the expression . Note that the first term in the fourth line and the ones in the third line are due to instantaneous interactions in light-front perturbation theory [@Beuf:2011xd]. The transverse impact factor $\mathcal{I}_{T}^{NLO}$ thus reduces in that regime to $$\begin{aligned} \mathcal{I}_{T}^{NLO}(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},z_1,z_2,Q^2)&\simeq & \frac{Q^2\, \textrm{K}_1^2\!\left(Q X_3\right)}{X_3^2} \Bigg\{z_1^2 (1\!-\!z_1)^2 \big[z_1^2+(1\!-\!z_1)^2\big] x_{10}^2 \bigg[\frac{1}{{x}_{20}^2}+\frac{1}{{x}_{21}^2}-2 \left(\frac{\mathbf{x}_{20}\cdot\mathbf{x}_{21}}{{x}_{20}^2\; {x}_{21}^2}\right)\bigg] \Bigg\}\nonumber\\ &\simeq & \mathcal{I}_{T}^{LO}({x}_{01},z_1,Q^2) \quad \frac{X_2^2}{X_3^2}\; \frac{\textrm{K}_1^2\!\left(QX_3\right)}{\textrm{K}_1^2\!\left(QX_2\right)} \quad \textbf{K}_{012} \, .\label{ImpFact_NLO_T_low_z_2_recoil-less}\end{aligned}$$ It is important to notice that the inequalities , at low $z_2$, do not provide any information about the relative size of $X_3$ and $X_2$. Both $X_3\simeq X_2$ and $X_3\gg X_2$ are still possible, so that the situation is analogous to the $\mathcal{I}_{L}^{NLO}$ case.
Hence, for $z_2\ll z_1\lesssim 1$ and $z_2\ll 1\!-\!z_1\lesssim 1$, in the case of $\mathcal{I}_{T}^{NLO}$, one should split the integration range for $\mathbf{x}_{2}$ into three domains: - For $z_1\, (1\!-\!z_1)\, {x}_{10}^2\gg z_2\, {x}_{20}^2$ and/or $z_1\, (1\!-\!z_1)\, {x}_{10}^2\gg z_2\, {x}_{21}^2$, $\mathcal{I}_{T}^{NLO}$ is well approximated by its factorized $z_2=0$ value - For[^12] ${x}_{10}^2/z_2 \lesssim {x}_{20}^2 \simeq {x}_{21}^2 \ll {x}_{10}^2/z_2^2$, the expression is a bad approximation of $\mathcal{I}_{T}^{NLO}$ and should be replaced by - For $ {x}_{10}^2/z_2^2\lesssim {x}_{20}^2 \simeq {x}_{21}^2$, the recoil effects become important, and the instantaneous interaction contributions are of the same order as the other ones. Deep into that regime, for $ {x}_{10}^2/z_2^2\ll {x}_{20}^2 \simeq {x}_{21}^2$, a correct approximation of $\mathcal{I}_{T}^{NLO}$ is $$\begin{aligned} \mathcal{I}_{T}^{NLO}(\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},z_1,z_2,Q^2)&\simeq & \mathcal{I}_{T}^{LO}({x}_{01},z_1,Q^2) \quad \frac{\textrm{K}_1^2\!\left(Q \sqrt{z_2\, x_{20}^2}\right)}{\textrm{K}_1^2\!\left(QX_2\right)} \quad \frac{z_2}{2\, z_1\, (1\!-\!z_1)\, x_{20}^2} \, .\label{ImpFact_NLO_T_low_z_2_strong_recoil}\end{aligned}$$ The main conclusion is that for both the longitudinal and the transverse impact factors at low $z_2$, the whole part of the integration domain in $\mathbf{x}_{2}$ such that $z_1\, (1\!-\!z_1)\, {x}_{10}^2\lesssim z_2\, {x}_{20}^2 \simeq z_2\, {x}_{21}^2$ should not contribute to leading logs, because of the exponential suppression provided by the modified Bessel functions. This property is not taken into account in the standard resummation of high-energy LL’s presented in the section \[sec:std\_subtr\_LL\].
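The three-domain split above can be summarized by a small helper (an illustrative sketch; the domain names are ours, and the parametric "$\ll$" conditions are encoded as plain inequalities):

```python
def transverse_domain(z1, z2, x10sq, x20sq, x21sq):
    """Which low-z2 approximation of I_T^NLO applies (illustrative)."""
    if (z1*(1 - z1)*x10sq > z2*x20sq) or (z1*(1 - z1)*x10sq > z2*x21sq):
        return "factorized"       # the z2 = 0 value is a good approximation
    if max(x20sq, x21sq) < x10sq / z2**2:
        return "modified-bessel"  # x10^2/z2 <~ x20^2 ~ x21^2 << x10^2/z2^2
    return "recoil"               # recoil and instantaneous terms matter

z1, z2 = 0.4, 1e-3
assert transverse_domain(z1, z2, 1.0, 1.0, 1.0) == "factorized"
assert transverse_domain(z1, z2, 1.0, 1e4, 1e4) == "modified-bessel"
assert transverse_domain(z1, z2, 1.0, 1e8, 1e8) == "recoil"
```

Only the first domain feeds the standard LL resummation; the other two are exponentially suppressed by the Bessel functions, as argued in the text.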
Mellin space analysis of the real NLO impact factors and LL counter-term\[sec:Mellin\_NLO\_IF\] ----------------------------------------------------------------------------------------------- In order to investigate in more detail the problems of the standard subtraction of high energy LL’s described in the section , it is very convenient to compare the approximate Mellin representation of the real NLO corrections to the DIS cross sections and that of the counter-term , which is supposed to be their LL approximation. In the dilute regime (or BFKL approximation, or linear regime, see section \[sec:evolEqs\]), where the Mellin representation is most useful, one has $$\begin{aligned} \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y^+}\Big]&=& \left(1\!-\!\frac{1}{N_c^2}\right) - \left\langle{\mathbf S}_{02}\,{\mathbf S}_{21}\!-\!\frac{1}{N_c^2}{\mathbf S}_{01} \right\rangle_{Y^+} \simeq \left\langle {\textbf N}_{02} \right\rangle_{Y^+} + \left\langle {\textbf N}_{21} \right\rangle_{Y^+} - \frac{1}{N_c^2}\, \left\langle{\textbf N}_{01} \right\rangle_{Y^+}\label{operator_NLO_real}\, .\end{aligned}$$ As we have discussed previously, the standard resummation of LL’s is correct when one daughter dipole is much smaller than the other and the parent, or when the three dipoles are of the same order, when using the factorization scheme in $k^+$. Problems might arise only if the parent dipole is much smaller than the two daughters, *i.e.* when ${x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2$. Due to color transparency, one should have $$\left\langle{\textbf N}_{ij} \right\rangle_{Y^+} \propto x_{ij}^2 \label{color_transparency}$$ up to logarithmic factors, in the limit $x_{ij}\rightarrow 0$ at fixed $Y^+$. Due to quantum evolution effects, the dipole target amplitude $\left\langle{\textbf N}_{ij} \right\rangle_{Y^+}$ typically acquires some anomalous dimension, modifying the behavior .
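The chain of relations in the operator identity above can be checked with scalar stand-ins for the expectation values (a mean-field simplification, treating each $\left\langle\cdot\right\rangle$ as a number with ${\mathbf S}_{ij}=1-{\textbf N}_{ij}$); the only term dropped in the linearization is the bilinear ${\textbf N}_{02}{\textbf N}_{21}$:

```python
nc2 = 9.0  # Nc^2 for Nc = 3

def exact(n01, n02, n21):
    # (1 - 1/Nc^2) - (S02*S21 - S01/Nc^2) with S_ij = 1 - N_ij
    s01, s02, s21 = 1 - n01, 1 - n02, 1 - n21
    return (1 - 1/nc2) - (s02*s21 - s01/nc2)

def linearized(n01, n02, n21):
    # dilute-regime right-hand side: N02 + N21 - N01/Nc^2
    return n02 + n21 - n01/nc2

for (n01, n02, n21) in [(0.01, 0.02, 0.03), (0.2, 0.5, 0.4)]:
    # the exact difference is precisely the dropped bilinear term N02*N21
    assert abs(exact(n01, n02, n21)
               - (linearized(n01, n02, n21) - n02*n21)) < 1e-12
```

For small amplitudes (the linear regime) the bilinear term is quadratically suppressed, which is why the last approximate equality in the identity holds there.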
But $\left\langle{\textbf N}_{ij} \right\rangle_{Y^+}$ should still behave roughly as a positive power of $x_{ij}$ in all the linear regime. Hence, for ${x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2$ one has $$\left\langle {\textbf N}_{02} \right\rangle_{Y^+} \simeq \left\langle {\textbf N}_{21} \right\rangle_{Y^+} \gg \left\langle{\textbf N}_{01} \right\rangle_{Y^+}\, ,\label{coll_ord_dipole_ampl}$$ so that the expression appearing in the real NLO corrections in and in the naive counter-term reduces to $$\begin{aligned} \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y^+}\Big] &\simeq & 2 \left\langle {\textbf N}_{02} \right\rangle_{Y^+}\label{operator_NLO_real_coll}\end{aligned}$$ in the part of the linear regime satisfying ${x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2$, and there, the (unknown) virtual NLO corrections to the DIS cross section and the corresponding counter-term are power suppressed compared to the real corrections, so that we can ignore them. As mentioned at the end of the section \[sec:std\_subtr\_LL\], $Y_2^+$ and $Y_f^+$ are the two natural guesses for the scale at which one should take the expectation value of the operator $\left\langle {\mathbf S}_{012} \right\rangle$ in the counter-term and/or in the real NLO correction. The case of $Y_2^+$ is considered here, whereas the case $Y_f^+$ is treated in the appendix \[App:Yfplus\]. It is shown in that appendix that in the $Y_f^+$ prescription, the Regge limit and the collinear limit do not commute, making the collinear DLL regime ambiguous and quite pathological. Hence, the $Y_2^+$ prescription is more appropriate than the $Y_f^+$ prescription for the expectation value of the operator $\left\langle {\mathbf S}_{012} \right\rangle$ in the real NLO correction and in the counter-term .
With the $Y_2^+$ prescription, the contribution to the counter-term from the domain ${x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2$ writes in the dilute regime $$\begin{aligned} \delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL, real}}^{{x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2} &=& \bar{\alpha} \int_{0}^{Y_f^+} \!\!\!\!\dd Y_2^+ \int_{{x}_{10}^2\ll {x}_{20}^2} \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y_2^+}\Big] \nonumber\\ &\simeq & \bar{\alpha} \int_{0}^{Y_f^+}\!\!\textrm{d}Y_2^+\; \int_{{x}_{01}^2}^{+\infty} \frac{\textrm{d}({x}_{02}^2)}{2}\;\; \frac{{x}_{01}^2}{{x}_{02}^4}\; \; 2 \left\langle {\textbf N}_{02} \right\rangle_{Y_2^+}\nonumber\\ & \simeq & \bar{\alpha} \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; \int_{0}^{Y_f^+}\!\!\textrm{d}Y_2^+\;{\cal N}(\g,Y_2^+)\;\; \int_{{x}_{01}^2}^{+\infty} \frac{\textrm{d}({x}_{02}^2)}{{x}_{01}^2}\;\; \left(\frac{{x}_{02}^2}{{x}_{01}^2}\right)^{\g-2}\nonumber\\ & \simeq & \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; \int_{\om_0-i\infty}^{\om_0+i\infty} \frac{\dd \om}{2\pi i}\; e^{\om\, Y_f^+}\; \hat{\cal N}(\g,\om) \;\; \frac{\bar{\alpha}}{\om (1\!-\! \g)}\, , \label{Mellin_naive_subtract}\end{aligned}$$ where the relation has been used in the last step of the calculation. Large logs manifest themselves in the Mellin representation as poles, and more precisely we have here the correspondence $$\frac{\bar{\alpha}}{\om (1\!-\! \g)} \quad \leftrightarrow \quad \bar{\alpha}\, Y_f^+\, \log\left(\frac{4}{x_{01}^2 \, Q_0^2}\right)\, ,$$ following the discussion in the section \[sec:Mellin\_BFKL\_BK\_LL\_NLL\]. That DLL contribution is not the correct collinear DLL contribution compatible with DGLAP physics, which would contain $Y_f^-$ instead of $Y_f^+$. 
When calculating the analogous approximate Mellin representation of the real NLO correction, it is convenient to change the upper bound of the $z_2$ integration from $1\!-\! z_1$ to $z_1(1\!-\! z_1)$ and make the choice of factorization scale $z_f\equiv z_1(1\!-\! z_1)$. None of this affects the pattern of singularities in the Mellin representation, or equivalently the presence of large logs. The integrand is taken in the dilute approximation and assuming $z_2 \ll z_f$. In order to facilitate the comparison with the expression , one divides the real NLO correction by the LO impact factor $\mathcal{I}_{T,L}^{LO}$, and only the potentially problematic domain ${x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2$ is considered. Calculating first the contribution from the region $z_f\, {x}_{10}^2\gg z_2\, {x}_{20}^2 \simeq z_2\, {x}_{21}^2$, in which the factorized approximation is valid, one finds $$\begin{aligned} & &\left.\bar{\alpha} \int_{z_{\min}}^{z_f}\frac{\textrm{d}z_2}{z_2}\; \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \frac{\mathcal{I}_{T,L}^{NLO}}{\mathcal{I}_{T,L}^{LO}} \;\frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y^+_2}\Big]\right|_{{x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2 \textrm{ and }z_f\, {x}_{10}^2\gg z_2\, {x}_{20}^2}\nonumber\\ & &\quad \simeq \bar{\alpha} \int_{0}^{Y_f^+}\!\!\textrm{d}Y_2^+\; \int_{{x}_{01}^2}^{{x}_{01}^2 \exp(Y^+_f\!-\!Y^+_2)} \frac{\textrm{d}({x}_{02}^2)}{2}\;\; \frac{{x}_{01}^2}{{x}_{02}^4}\; \; 2 \left\langle {\textbf N}_{02} \right\rangle_{Y_2^+}\nonumber\\ & & \quad \simeq \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; \frac{\bar{\alpha}}{(1\!-\!
\g)} \int_{0}^{Y_f^+}\!\!\textrm{d}Y_2^+\; \left(1\!-\!e^{-(1\!-\!\g)(Y^+_f\!-\!Y^+_2)} \right)\;{\cal N}(\g,Y_2^+)\nonumber\\ & &\quad \simeq \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; \int_{\om_0-i\infty}^{\om_0+i\infty} \frac{\dd \om}{2\pi i}\; e^{\om\, Y_f^+}\; \hat{\cal N}(\g,\om) \;\; \frac{\bar{\alpha}}{\om (1\!-\! \g\!+\!\om)}\, . \label{Mellin_NLO real_kc_reg}\end{aligned}$$ In the last step of the calculation , one writes the inverse Laplace transform of the Laplace transform $$\begin{aligned} \int_{0}^{+\infty}\!\!\textrm{d}Y_f^+\; e^{-\om\, Y_f^+} \int_{0}^{Y_f^+}\!\!\textrm{d}Y_2^+\; \left(1\!-\!e^{-(1\!-\!\g)(Y^+_f\!-\!Y^+_2)} \right)\;{\cal N}(\g,Y_2^+) &=& \left[\frac{1}{\om}\!-\!\frac{1}{1\!-\!\g\!+\!\om}\right]\; \hat{\cal N}(\g,\om)\nonumber\\ &=& \frac{(1\!-\!\g)}{\om(1\!-\!\g\!+\!\om)}\; \hat{\cal N}(\g,\om)\, ,\end{aligned}$$ calculated by interchanging the order of the integrations. The only difference between the results and is the shift of the pole in $\g$ from $\g=1$ to $\g=1+\om$. As shown in the section \[sec:coll\_Mellin\], that shift is induced by the change of variables from $Y_f^-$ to $Y_f^+$, and one has the correspondence $$\frac{\bar{\alpha}}{\om (1\!-\! \g\!+\!\om)} \quad \leftrightarrow \quad \bar{\alpha}\, \bigg[Y_f^+ - \log\left(\frac{4}{x_{01}^2 \, Q_0^2}\right)\bigg]\, \log\left(\frac{4}{x_{01}^2 \, Q_0^2}\right)\simeq\bar{\alpha}\, Y_f^-\, \log\left(\frac{4}{x_{01}^2 \, Q_0^2}\right) \, .$$ Hence, the real NLO corrections to the DIS structure functions indeed provide the correct collinear DLL limit, by contrast to the standard counter-term , see . However, only the contribution from the region $z_f\, {x}_{10}^2\gg z_2\, {x}_{20}^2 \simeq z_2\, {x}_{21}^2$ has been calculated so far. 
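The interchange-of-integrations step in the Laplace-transform identity above can be cross-checked numerically for a toy ansatz ${\cal N}(\g,Y_2^+)=e^{a\,Y_2^+}$ (our choice, purely illustrative), whose Laplace transform is $\hat{\cal N}(\g,\om)=1/(\om-a)$; the parameter values below are arbitrary:

```python
import math

a, c, om = 0.2, 0.5, 1.0   # toy growth rate, c = (1 - gamma), omega

def inner(yf, n=200):
    # integral_0^{Yf} dY2 (1 - e^{-c(Yf - Y2)}) e^{a Y2}, midpoint rule
    h = yf / n
    return h * sum((1 - math.exp(-c*(yf - y))) * math.exp(a*y)
                   for y in ((i + 0.5)*h for i in range(n)))

def lhs(ymax=40.0, n=2000):
    # integral_0^inf dYf e^{-om Yf} inner(Yf), truncated at ymax
    # (the integrand decays like e^{-(om - a) Yf}, so the tail is negligible)
    h = ymax / n
    return h * sum(math.exp(-om*y) * inner(y)
                   for y in ((i + 0.5)*h for i in range(n)))

# (1-g)/(om (1-g+om)) * N^hat(om), i.e. the right-hand side of the identity
rhs = c / (om * (c + om)) / (om - a)
```

The brute-force double integral and the closed-form Laplace-space result agree, confirming the pole structure quoted in the text.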
Hence, it remains to show that, in the real NLO corrections, the contributions from the region $z_f\, {x}_{10}^2\ll z_2\, {x}_{20}^2 \simeq z_2\, {x}_{21}^2$ are subleading in the collinear DLL regime. For that purpose, one should consider the transverse and longitudinal photon cases separately, and use the various approximations for $\mathcal{I}_{L}^{NLO}$ and $\mathcal{I}_{T}^{NLO}$ found in the section \[sec:NLO\_IF\_approx\]. Then, one gets in the longitudinal case $$\begin{aligned} & &\left.\bar{\alpha} \int_{z_{\min}}^{z_f}\frac{\textrm{d}z_2}{z_2}\; \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \frac{\mathcal{I}_{L}^{NLO}}{\mathcal{I}_{L}^{LO}} \;\frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y^+_2}\Big]\right|_{z_f\, {x}_{01}^2\ll z_2\, {x}_{02}^2\simeq z_2\, {x}_{21}^2 }\nonumber\\ & &\quad \simeq \bar{\alpha} \int_{z_{\min}}^{z_f}\frac{\textrm{d}z_2}{z_2}\; \; \int^{+\infty}_{{x}_{01}^2 z_f/z_2} \frac{\textrm{d}({x}_{02}^2)}{2}\;\; \frac{{x}_{01}^2}{{x}_{02}^4}\;\; \frac{\textrm{K}_0^2\!\left(Q\sqrt{z_2\, {x}_{02}^2}\right)}{\textrm{K}_0^2\!\left(Q\sqrt{z_f\, {x}_{01}^2}\right)} \; \; 2 \left\langle {\textbf N}_{02} \right\rangle_{Y_2^+}\nonumber\\ & & \quad \simeq \bar{\alpha} \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; f_0(\g\!-\!2\, ,z_f {x}_{01}^2 Q^2)\; \int_{0}^{Y_f^+}\!\!\textrm{d}Y_2^+\; e^{-(1\!-\!\g)(Y^+_f\!-\!Y^+_2)}\; {\cal N}(\g,Y_2^+)\nonumber\\ & &\quad \simeq \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; \int_{\om_0-i\infty}^{\om_0+i\infty} \frac{\dd \om}{2\pi i}\; e^{\om\, Y_f^+}\; \hat{\cal N}(\g,\om) \;\; f_0(\g\!-\!2\, ,z_f {x}_{01}^2 Q^2)\; \; \frac{\bar{\alpha}}{(1\!-\! 
\g\!+\!\om)} \, ,\label{Mellin_NLO real_L_non-kc}\end{aligned}$$ where we have introduced the notation $$\begin{aligned} f_{\beta}(\xi,\tau^2) &=& \int_{1}^{+\infty} \textrm{d}u\; u^{\xi}\; \frac{\textrm{K}_{\beta}^2\!\left(\tau \sqrt{u}\right)}{\textrm{K}_{\beta}^2\!\left(\tau\right)} \, .\label{f_beta}\end{aligned}$$ Note that, for $\tau>0$, the exponential decay of the modified Bessel function $\textrm{K}_{\beta}$ implies that the integral in $u$ converges no matter what is the value of $\xi$. Therefore, $f_{\beta}(\xi,\tau^2)$ is holomorphic in $\xi$ for any $\tau>0$. The only singularity in Mellin space in the expression is thus the pole at $\g=1+\om$. It corresponds to a collinear single log, and there is no collinear DLL in the contribution , as expected. In the transverse photon case the intermediate region $z_f {x}_{10}^2/z_2 \ll {x}_{20}^2 \simeq {x}_{21}^2 \ll z_f^2 {x}_{10}^2/z_2^2$, where transverse recoil effects are still negligible, gives the contribution $$\begin{aligned} & &\left.\bar{\alpha} \int_{z_{\min}}^{z_f}\frac{\textrm{d}z_2}{z_2}\; \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \frac{\mathcal{I}_{T}^{NLO}}{\mathcal{I}_{T}^{LO}} \;\frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y^+_2}\Big]\right|_{z_f {x}_{10}^2/z_2 \ll {x}_{20}^2 \simeq {x}_{21}^2 \ll z_f^2 {x}_{10}^2/z_2^2}\nonumber\\ & &\quad \simeq \bar{\alpha} \int_{z_{\min}}^{z_f}\frac{\textrm{d}z_2}{z_2}\; \; \int^{{x}_{01}^2 z_f^2/z_2^2}_{{x}_{01}^2 z_f/z_2} \frac{\textrm{d}({x}_{02}^2)}{2}\;\; \frac{{x}_{01}^2}{{x}_{02}^4}\;\; \frac{z_f\, {x}_{01}^2}{z_2\, {x}_{02}^2}\;\; \frac{\textrm{K}_1^2\!\left(Q\sqrt{z_2\, {x}_{02}^2}\right)}{\textrm{K}_1^2\!\left(Q\sqrt{z_f\, {x}_{01}^2}\right)} \; \; 2 \left\langle {\textbf N}_{02} \right\rangle_{Y_2^+}\nonumber\\ & & \quad \simeq \bar{\alpha} \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; \int_{0}^{Y_f^+}\!\!\textrm{d}Y_2^+\; 
e^{-(1\!-\!\g)(Y^+_f\!-\!Y^+_2)}\; {\cal N}(\g,Y_2^+) \int_{1}^{e^{(Y^+_f\!-\!Y^+_2)}} \!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\textrm{d}u\;\;\; u^{\g-3}\; \frac{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2} \sqrt{u}\right)}{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2}\right)} \nonumber\\ & &\quad \simeq \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; \int_{\om_0-i\infty}^{\om_0+i\infty} \frac{\dd \om}{2\pi i}\; e^{\om\, Y_f^+}\; \hat{\cal N}(\g,\om) \;\; f_1(2\g\!-\!\om\!-\!4\, ,z_f {x}_{01}^2 Q^2)\; \; \frac{\bar{\alpha}}{(1\!-\! \g\!+\!\om)} \, ,\label{Mellin_NLO real_T_non-kc_recoilless}\end{aligned}$$ This expression has a single pole in Mellin space corresponding to a collinear single log. It differs from the result found in the longitudinal photon case only by the precise value of the factor in front of the single pole or single log. Finally, in the extreme region $z_f^2 {x}_{10}^2 \ll z_2^2 {x}_{20}^2 \simeq z_2^2 {x}_{21}^2$ where transverse recoil effects are important, one finds $$\begin{aligned} & &\left.\bar{\alpha} \int_{z_{\min}}^{z_f}\frac{\textrm{d}z_2}{z_2}\; \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \frac{\mathcal{I}_{T}^{NLO}}{\mathcal{I}_{T}^{LO}} \;\frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y^+_2}\Big]\right|_{z_f^2\, {x}_{01}^2\ll z_2^2\, {x}_{02}^2\simeq z_2^2\, {x}_{21}^2}\nonumber\\ & &\quad \simeq \bar{\alpha} \int_{z_{\min}}^{z_f}\frac{\textrm{d}z_2}{z_2}\; \; \int_{{x}_{01}^2 z_f^2/z_2^2}^{+\infty} \frac{\textrm{d}({x}_{02}^2)}{2}\;\; \frac{z_2}{2\, z_f\, {x}_{02}^2}\;\; \frac{\textrm{K}_1^2\!\left(Q\sqrt{z_2\, {x}_{02}^2}\right)}{\textrm{K}_1^2\!\left(Q\sqrt{z_f\, {x}_{01}^2}\right)} \; \; 2 \left\langle {\textbf N}_{02} \right\rangle_{Y_2^+}\nonumber\\ & & \quad \simeq \frac{\bar{\alpha}}{2} \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; \int_{0}^{Y_f^+}\!\!\textrm{d}Y_2^+\; 
e^{-(1\!-\!\g)(Y^+_f\!-\!Y^+_2)}\; {\cal N}(\g,Y_2^+) \int^{+\infty}_{\exp{(Y^+_f\!-\!Y^+_2)}} \!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\textrm{d}u\;\;\; u^{\g-1}\; \frac{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2} \sqrt{u}\right)}{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2}\right)} \nonumber\\ & &\quad \simeq \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; \int_{\om_0-i\infty}^{\om_0+i\infty} \frac{\dd \om}{2\pi i}\; e^{\om\, Y_f^+}\; \hat{\cal N}(\g,\om)\nonumber\\ & &\quad \quad\quad \; \times \; \frac{\bar{\alpha}}{2(1\!-\! \g\!+\!\om)} \;\; \Big[f_1(\g\!-\!1\, ,z_f {x}_{01}^2 Q^2)\!-\! f_1(2\g\!-\!\om\!-\!2\, ,z_f {x}_{01}^2 Q^2) \Big] \, .\label{Mellin_NLO real_T_deep_recoil}\end{aligned}$$ The two terms cancel each other when $1\!-\! \g\!+\!\om=0$, so that the singularity at $1\!-\! \g\!+\!\om=0$ is actually removable. Hence, there is no true Mellin space singularity in the expression , and thus the region $z_f^2 {x}_{10}^2 \ll z_2^2 {x}_{20}^2 \simeq z_2^2 {x}_{21}^2$ contributes neither to the high-energy LL’s nor to the collinear LL’s. It may be quite counter-intuitive that this region of the $\mathbf{x}_{2}$ plane does not contribute to DGLAP collinear logs whereas the intermediate region does. But this is only due to the choice of the variable $Y^+$, which is not the most appropriate when discussing the collinear limit as explained in the section \[sec:coll\_Mellin\]. As expected, neither the intermediate region $z_f {x}_{10}^2/z_2 \ll {x}_{20}^2 \simeq {x}_{21}^2 \ll z_f^2 {x}_{10}^2/z_2^2$ nor the extreme region $z_f^2 {x}_{10}^2 \ll z_2^2 {x}_{20}^2 \simeq z_2^2 {x}_{21}^2$ contributes to the high-energy LL’s in the transverse photon case.
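The convergence claim for the integral defining $f_{\beta}$ above can also be checked numerically. The sketch below (our own, with arbitrary grid sizes) evaluates the modified Bessel function from its integral representation $\textrm{K}_{\beta}(x)=\int_0^{\infty} e^{-x\cosh t}\cosh(\beta t)\,\textrm{d}t$, so only the standard library is needed:

```python
import math

def besselK(beta, x, n=600, tmax=12.0):
    # K_beta(x) = int_0^inf e^{-x cosh t} cosh(beta t) dt, midpoint rule;
    # the integrand is negligible beyond t = tmax for x >~ 1e-3.
    h = tmax / n
    return h * sum(math.exp(-x*math.cosh(t)) * math.cosh(beta*t)
                   for t in ((i + 0.5)*h for i in range(n)))

def f_beta(beta, xi, tau, vmax=40.0, n=600):
    # f_beta(xi, tau^2) = int_1^inf du u^xi K_beta(tau sqrt(u))^2 / K_beta(tau)^2,
    # computed with the substitution u = v^2; the e^{-2 tau v} decay of K_beta
    # at large argument makes the integral converge for any xi, as stated.
    norm = besselK(beta, tau)**2
    h = (vmax - 1.0) / n
    return h * sum(2*v * v**(2*xi) * besselK(beta, tau*v)**2 / norm
                   for v in (1.0 + (i + 0.5)*h for i in range(n)))
```

Even for a large positive Mellin shift such as $\xi=5$ or $\xi=6$ at $\tau=1$, the integral is finite, in line with the holomorphy claim.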
Summary of the analysis of the real NLO corrections to DIS ---------------------------------------------------------- The study of the real NLO corrections to DIS in Mellin space done in the previous section \[sec:Mellin\_NLO\_IF\] confirms the hints found directly in mixed space. In particular, the emission of a gluon at parametrically large distance in the transverse plane, such that $z_f {x}_{10}^2 \ll z_2 {x}_{20}^2 \simeq z_2 {x}_{21}^2$, does not contribute to high-energy LL’s. This constraint is the analog in mixed space of the $k^-$ ordering in momentum space discussed in the section \[sec:kin\_mom\_space\] and of the shift of the collinear pole in Mellin space from $\g=1$ to $\g=1+\om$ discussed in the section \[sec:coll\_Mellin\]. That kinematical constraint, necessary to reproduce the correct DLL limit compatible with the DGLAP evolution of the target, is not included in the standard version of the high-energy evolution equations, BFKL, BK or B-JIMWLK. Hence, those equations slightly overestimate the high-energy LL contributions arising at higher orders in fixed order calculations, and thus do not allow one to resum the LL’s correctly, following the method presented in the section \[sec:std\_subtr\_LL\]. After such an incorrect resummation, the leftover NLO (and higher order) corrections become large and negative when approaching the collinear regime, leading to a breakdown of the naively resummed perturbative expansion. Partonic Fock states in the photon wave-function which have a formation time larger than the virtual photon lifetime give only exponentially suppressed contributions to the DIS cross section, within fixed order perturbative calculations. In order to maintain that physically correct property when performing the resummation of high-energy LL’s, it is necessary to use a high-energy evolution equation including the kinematical constraint.
Kinematical constraint for LL evolution equations in mixed space\[sec:kcBK\] ============================================================================ Constraint for the real emission kernel --------------------------------------- In the standard LL approximation without kinematical constraint, the probability density for the initial-state emission of a gluon with momentum fraction $z_2$ at position $\mathbf{x}_{2}$ from a single $q\bar{q}$ dipole $(\mathbf{x}_{0},\mathbf{x}_{1})$ reads [@Mueller:1993rr] $$\frac{\alpha_s\, C_F}{\pi^2}\, \frac{\dd z_2}{z_2}\, \dd^2 \mathbf{x}_{2}\,\frac{x_{01}^2}{x_{02}^2\, x_{21}^2}\equiv\abar\, \frac{2 C_F}{N_c}\, \frac{\dd z_2}{z_2}\, \frac{\dd^2 \mathbf{x}_{2}}{2\pi}\, \textbf{K}_{012}\, , \label{Proba_dipole_split_LL_naive}$$ where the gluon is assumed to be much softer than the quark and the anti-quark, *i.e.* $z_2\ll z_1$ and $z_2\ll 1\!-\!z_1$. As shown in the previous section, such an LL gluon emission can actually occur only in some bounded domain of the $\mathbf{x}_{2}$-plane. The probability density should then be kinematically constrained as $$\abar\, \frac{2 C_F}{N_c}\, \frac{\dd z_2}{z_2}\, \frac{\dd^2 \mathbf{x}_{2}}{2\pi}\, \textbf{K}_{012}\; \theta\!\Big(z_1 (1\!-\!z_1)\, {x}_{01}^2 \!-\! z_2\, l_{012}^2\Big)\, . \label{Proba_parent_dipole_split_LL_kc}$$ The quantity $l_{ijk}$ introduced in the equation should satisfy $$l_{ijk}\simeq {x}_{ik}\simeq {x}_{jk} \quad \textrm{in the regime} \quad {x}_{ij}\ll {x}_{ik}\simeq {x}_{jk}\, ,\label{lijk_large}$$ in order to implement the precise kinematical restriction found in the section \[sec:NLO\_IF\_analysis\].
On the other hand, the theta function introduced in should not have a significant effect in the rest of the $\mathbf{x}_{2}$-plane, and thus $$l_{ijk}\lesssim {x}_{ij} \quad \textrm{in the regimes} \quad {x}_{ik}\ll {x}_{ij} \simeq {x}_{jk}\; , \;\; {x}_{jk}\ll {x}_{ij} \simeq {x}_{ik}\;\; \textrm{and }\; {x}_{ij}\simeq {x}_{ik}\simeq {x}_{jk}\label{lijk_short} \, .$$ Apart from the requirements and , the precise expression of $l_{ijk}$ is essentially arbitrary. Any choice leads in the end to self-consistent kinematically improved BK or BFKL equations. This should be understood as a resummation scheme ambiguity associated with the kinematical constraint. Most of the calculations in the rest of this paper will be done for arbitrary $l_{ijk}$ obeying the conditions and . However, for practical applications, one can use the explicit expression $$l_{ijk}= \min ({x}_{ik}, {x}_{jk})\, ,\label{lijk_min_explicit}$$ which is symmetric in the parent dipole legs $i$ and $j$, and minimizes the impact of the theta function when $z_2$ is not too small. Having the probability density for the emission of a gluon from a single dipole, the next step is to study how multiple gluon emissions iterate in the case of a full initial-state parton (or dipole) cascade. Most QCD evolution equations describing parton cascades, like the DGLAP, BFKL or BK equations, are local: after several steps, further emission in one branch of the parton cascade is independent of what happens in other branches. Hence, the information about subsequent evolution in other branches can be thrown away, which allows one to write those evolution equations in closed form as simple integro-differential equations. The main counter-example is the B-JIMWLK evolution, in which the information about the full cascade has to be kept.
This is the reason why B-JIMWLK can be written as a functional equation or as an infinite hierarchy of equations, but not as a closed integro-differential equation. It has been shown in the section \[sec:kin\_mom\_space\] that, in Light-Front perturbation theory, the kinematical constraint is obtained from a careful study of energy denominators, leading to a simultaneous $k^+$ and $k^-$ ordering of successive emissions. However, each energy denominator involves the momenta of all the partons present in the current intermediate Fock state. Hence, it seems that the $k^+$ and $k^-$ orderings are global, *i.e.* with the momentum of a new radiated gluon restricted by the momenta of all of the partons already radiated, including in other branches of the cascade, preventing one from including the kinematical constraint as a simple modification of the BFKL and BK integro-differential equations. This issue has been noticed in Ref. [@Motyka:2009gi], where the authors then considered for simplicity a local $k^+$ and $k^-$ ordering, with respect to the legs of the emitting dipole only, with the hope that the mismatch between local and global orderings would not be essential. However, a more thorough study of those issues, presented in the appendix \[App:locality\_kc\], shows that Light-Front perturbation theory actually leads to a local rather than a global $k^+$ and $k^-$ ordering. More precisely, there is a global constraint at the level of individual graphs, which however becomes a local one when summing over graphs differing just by the order of gluon emissions by different color dipoles[^13], up to corrections of NLL order. Hence, in the case of the emission of a gluon $k$ by a generic dipole $ij$ within a full parton cascade, with the parton $j$ radiated after the parton $i$, one should have the orderings $$\begin{aligned} && k^+_{i}\gg k^+_{j}\gg k^+_{k}\\ && k^-_{i}\ll k^-_{j}\ll k^-_{k}\end{aligned}$$ in full momentum space.
Then, one can write the probability density for that gluon emission in mixed-space as $$\abar\, \frac{2 C_F}{N_c}\, \frac{\dd z_k}{z_k}\, \frac{\dd^2 \mathbf{x}_{k}}{2\pi}\, \textbf{K}_{ijk}\; \theta\!\Big(z_j\, {x}_{ij}^2 \!-\! z_k\, l_{ijk}^2\Big)\, , \label{Proba_generic_dipole_split_LL_kc}$$ where the theta function effectively enforces the condition $k^-_{j}< k^-_{k}$. At this stage, it is clear that one can write an evolution equation in $k^+$ in order to generate the dipole cascade at LL accuracy with the kinematical constraint, involving the probability densities for gluon emission and . Let us consider a dipole ${x}_{ij}$, associated with some factorization scale $k^+_f=z_f\, q^+$. Then, this dipole can emit a gluon with momentum $k^+_{k}\ll k^+_f$ and position $\mathbf{x}_{k}$ with the probability density $$\abar\, \frac{2 C_F}{N_c}\, \frac{\dd k^+_{k}}{k^+_{k}}\, \frac{\dd^2 \mathbf{x}_{k}}{2\pi}\, \textbf{K}_{ijk}\; \theta\!\Big(k^+_f\, {x}_{ij}^2 \!-\! k^+_{k}\, l_{ijk}^2\Big)\, , \label{Proba_dipole_split_LL_kc_kfplus}$$ and one obtains two dipoles $(\mathbf{x}_{i},\mathbf{x}_{k})$ and $(\mathbf{x}_{k},\mathbf{x}_{j})$, both considered at the new factorization scale $k^+_{k}$. One can then iterate this evolution further for each daughter dipole. In this way, the factorization scale associated with each dipole in the cascade is the $k^+$ of its softest leg, except for the primordial dipole initiating the cascade, for which the factorization scale can be taken as $k^+_f\equiv z_1 (1\!-\!z_1) q^+$. Defined in this way, the evolution equation in $k^+$ reproduces the expression for the first gluon emission and the expression for subsequent gluon emissions in the dipole cascade, as it should. It only remains to include virtual corrections in a consistent way in order to write down the evolution equation explicitly.
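As a concrete illustration, the constrained emission probability density above can be sketched numerically. The following Python fragment is purely illustrative (the function names and the sample points are ours): it evaluates the kernel $\textbf{K}_{012}$ together with the theta function, for the explicit scheme choice $l_{012}=\min({x}_{02},{x}_{12})$, stripping off the overall factor $\abar\, (2 C_F/N_c)\, \dd k^+_k/k^+_k\, \dd^2 \mathbf{x}_{k}/(2\pi)$:

```python
import math

def dist2(a, b):
    """Squared transverse distance between two points of the plane."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kernel_K(x0, x1, x2):
    """LL dipole kernel K_012 = x01^2 / (x02^2 x21^2)."""
    return dist2(x0, x1) / (dist2(x0, x2) * dist2(x2, x1))

def l012(x0, x1, x2):
    """Explicit resummation-scheme choice l_012 = min(x02, x12)."""
    return min(math.sqrt(dist2(x0, x2)), math.sqrt(dist2(x1, x2)))

def emission_weight(x0, x1, x2, kplus_f, kplus_k):
    """K_012 * theta(k+_f x01^2 - k+_k l_012^2), overall prefactor stripped."""
    allowed = kplus_f * dist2(x0, x1) - kplus_k * l012(x0, x1, x2) ** 2 > 0.0
    return kernel_K(x0, x1, x2) if allowed else 0.0

x0, x1 = (0.0, 0.0), (1.0, 0.0)
near = emission_weight(x0, x1, (0.1, 0.0), kplus_f=1.0, kplus_k=0.5)         # short-distance emission
far_soft = emission_weight(x0, x1, (10.0, 0.0), kplus_f=1.0, kplus_k=0.005)  # far but very soft gluon
far_hard = emission_weight(x0, x1, (10.0, 0.0), kplus_f=1.0, kplus_k=0.5)    # far and not soft enough
```

One recovers the pattern found in the section \[sec:NLO\_IF\_analysis\]: an emission close to the parent dipole is unconstrained, whereas an emission at a transverse distance much larger than ${x}_{01}$ is allowed only for a parametrically small $k^+_{k}$.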
Calculating the virtual corrections ----------------------------------- ### Probability-conserving evolution The virtual terms in the evolution equation are not directly sensitive to the kinematical issues discussed previously. Indeed, virtual corrections were irrelevant in the discussion of the collinear regime for DIS at NLO in the section \[sec:NLO\_IF\_analysis\]. Because of this, there is *a priori* some freedom about the treatment of the virtual terms in the resummation associated with the kinematical constraint: in the evolution equation one can move virtual terms from the higher order contributions to the resummed leading order without restriction. Of course, one has to make sure that in the strict Regge limit, the evolution equation reduces to the standard unresummed one, but this still leaves a lot of freedom. In the previous study of the kinematical constraint in mixed space [@Motyka:2009gi], the focus was clearly on the real emission kernel, and no specific expression for the virtual corrections was proposed. Those results were used in numerical studies in ref. [@Berger:2010sh], with a particular choice of the implementation of virtual terms, for which no motivation was provided. However, there is a very natural way to pin down the virtual corrections. It consists in requiring the probabilistic interpretation of the parton cascade [@Mueller:1993rr] to be preserved by the kinematical resummation.
Indeed, probability conservation is automatically guaranteed when using the real emission kernel to write a Bethe-Salpeter-like integral equation for the dipole-target S-matrix, resumming the LL contributions between the scale $k^+_f$ of the projectile dipole and the scale $k^+_{\min}$ of the target (assuming obviously $k^+_f>k^+_{\min}$), $$\begin{aligned} \left\langle {\mathbf S}_{01}\right\rangle_{\log (k^+_f/k^+_{\min})}&=& {\mathbf D}_{01}(k^+_f,k^+_{\min})\: \left\langle {\mathbf S}_{01}\right\rangle_{0}\nonumber\\ & &+ \abar\, \frac{2 C_F}{N_c}\int_{k^+_{\min}}^{k^+_f}\!\! \frac{\dd k^+_2}{k^+_2}\, {\mathbf D}_{01}(k^+_f,k^+_2)\int \frac{\dd^2 \mathbf{x}_{2}}{2\pi}\, \textbf{K}_{012}\; \theta\!\left(k^+_f\, {x}_{01}^2 \!-\! k^+_2\, l_{012}^2\right)\, \left\langle {\mathbf S}_{012}\right\rangle_{\log (k^+_2/k^+_{\min})}\, ,\label{BetheSalp_kplus}\end{aligned}$$ with the $q\bar{q}g$ tripole operator ${\mathbf S}_{012}$ defined by the equation . In the equation , the factor ${\mathbf D}_{01}(k^+_f,k^+)$ is the probability that the dipole $01$ does not split when evolved from the factorization scale $k^+_f$ down to $k^+<k^+_f$. It should obviously satisfy the initial condition $${\mathbf D}_{01}(k^+,k^+)=1\label{init_cond_D01}$$ for any positive $k^+$. The first term on the right-hand side of the equation is the contribution of the case when the parent dipole $01$ does not split when evolved from $k^+_f$ all the way down to $k^+_{\min}$, leaving no room for evolution of the target. By contrast, the second term is the contribution of the case when dipole splittings occur, and only the first splitting, on the projectile side, is described explicitly using the real emission kernel . In the equation , there are UV divergences for $\mathbf{x}_{2}\rightarrow \mathbf{x}_{0}$ and $\mathbf{x}_{2}\rightarrow \mathbf{x}_{1}$. Hence, one should regularize the transverse integration in the equation , for example like in ref.
[@Mueller:1993rr] by restricting it to the domain such that ${x}_{02}>\rho$ and ${x}_{12}>\rho$, where $\rho$ is a given short distance cut-off. However, in order to simplify notations, the regularization is kept implicit in equation and in the following. The next step is to calculate explicitly the function ${\mathbf D}_{01}(k^+_f,k^+)$ consistent with probability conservation. The evolution equation is independent of the nature of the target, and should even be valid in the absence of any target. In that case, $\left\langle {\mathbf S}_{01}\right\rangle_{Y^+}\equiv 1$ and $\left\langle {\mathbf S}_{012}\right\rangle_{Y^+}\equiv 1$ for any $Y^+$, and thus the equation reduces to $$\begin{aligned} 1&=& {\mathbf D}_{01}(k^+_f,k^+_{\min})+ \abar\, \frac{2 C_F}{N_c}\int_{k^+_{\min}}^{k^+_f}\!\! \frac{\dd k^+_2}{k^+_2}\, {\mathbf D}_{01}(k^+_f,k^+_2)\int \frac{\dd^2 \mathbf{x}_{2}}{2\pi}\, \textbf{K}_{012}\; \theta\!\left(k^+_f\, {x}_{01}^2 \!-\! k^+_2\, l_{012}^2\right)\, \, .\label{BetheSalp_Vac}\end{aligned}$$ Taking the derivative of that relation with respect to $k^+_{\min}$, one finds $$\begin{aligned} k^+_{\min}\, \d_{k^+_{\min}}\, {\mathbf D}_{01}(k^+_f,k^+_{\min})&=& \abar\, \frac{2 C_F}{N_c} {\mathbf D}_{01}(k^+_f,k^+_{\min}) \int \frac{\dd^2 \mathbf{x}_{2}}{2\pi}\, \textbf{K}_{012}\; \theta\!\left(k^+_f\, {x}_{01}^2 \!-\! k^+_{\min}\, l_{012}^2\right)\, ,\end{aligned}$$ which is trivially solved by $$\begin{aligned} {\mathbf D}_{01}(k^+_f,k^+_{\min})=\exp \bigg[-\abar\, \frac{2 C_F}{N_c} \int_{k^+_{\min}}^{k^+_f}\!\! \frac{\dd k^+}{k^+}\, \int \frac{\dd^2 \mathbf{x}_{2}}{2\pi}\, \textbf{K}_{012}\; \theta\!\left(k^+_f\, {x}_{01}^2 \!-\! k^+\, l_{012}^2\right) \bigg]\, ,\label{D01_sol_1}\end{aligned}$$ where the regularization of the transverse integration is again kept implicit. 
The integration over $k^+$ can be done explicitly, which gives $$\begin{aligned} {\mathbf D}_{01}(k^+_f,k^+_{\min})=\exp \left[-\abar\, \frac{2 C_F}{N_c} \int \frac{\dd^2 \mathbf{x}_{2}}{2\pi}\, \textbf{K}_{012}\; \left(\log \left(\frac{k^+_f}{k^+_{\min}}\right)-\Delta_{012}\right)\; \; \; \theta\!\left(\log \left(\frac{k^+_f}{k^+_{\min}}\right)-\Delta_{012}\right) \right]\, ,\label{D01_sol_2}\end{aligned}$$ with the notation $$\Delta_{012}= \max \left\{0,\, \log\left(\frac{l_{012}^2}{{x}_{01}^2}\right) \right\}\, .\label{Delta012}$$ The generic behavior of the shift $\Delta_{012}$ is then $$\begin{aligned} \Delta_{012}&=& 0 \qquad \textrm{for} \quad {x}_{02}^2\ll {x}_{01}^2 \quad \textrm{or} \quad {x}_{21}^2\ll {x}_{01}^2\nonumber\\ \Delta_{012}&\sim& \log \left(\frac{{x}_{02}^2}{{x}_{01}^2}\right) \: \sim \:\: \log \left(\frac{{x}_{21}^2}{{x}_{01}^2}\right) \qquad \textrm{for} \quad {x}_{01}^2 \ll {x}_{02}^2 \sim {x}_{21}^2\, ,\end{aligned}$$ and its precise value outside of those limits depends on the choice of $l_{012}$, *i.e.* on the choice of resummation scheme. With the result , the integral evolution equation is fully specified, and is a kinematically improved version of the B-JIMWLK evolution equation for the dipole, obeying the kinematical constraint for the real emission kernel but still preserving the probabilistic interpretation of the parton cascade exactly, by construction. Thanks to their structure, the equations and can be rewritten in terms of logarithmic variables $Y^+$ instead of the $k^+$’s, as $$\begin{aligned} \left\langle {\mathbf S}_{01}\right\rangle_{Y^+_f}&=& {\mathbf D}_{01}(Y^+_f)\: \left\langle {\mathbf S}_{01}\right\rangle_{0}+ \abar\, \frac{2 C_F}{N_c}\int \frac{\dd^2 \mathbf{x}_{2}}{2\pi}\, \textbf{K}_{012}\; \theta\!\left(Y^+_f\!-\! \Delta_{012}\right)\, \int_{0}^{Y^+_f\!-\! \Delta_{012}}\!\! 
\dd Y^+_2\, {\mathbf D}_{01}(Y^+_f\!-\!Y^+_2) \left\langle {\mathbf S}_{012}\right\rangle_{Y^+_2}\label{BetheSalp_Yplus}\end{aligned}$$ and[^14] $$\begin{aligned} {\mathbf D}_{01}(Y^+_f)=\exp \left[-\abar\, \frac{2 C_F}{N_c} \int \frac{\dd^2 \mathbf{x}_{v}}{2\pi}\, \textbf{K}_{01v}\; \left(Y^+_f\!-\!\Delta_{01v}\right)\; \; \; \theta\!\left(Y^+_f\!-\!\Delta_{01v}\right) \right]\, ,\label{D01_sol_3}\end{aligned}$$ where $Y^+_f>0$ has been assumed in both equations. Rather than an integral equation like , it is often more convenient to have an integro-differential equation. One obtains the latter from the equation by dividing by ${\mathbf D}_{01}(Y^+_f)$, taking the derivative with respect to $Y^+_f$, and multiplying again by ${\mathbf D}_{01}(Y^+_f)$. In that way, one obtains $$\begin{aligned} \d_{Y_f^+} \left\langle {\mathbf S}_{01}\right\rangle_{Y_f^+}&=& \abar\, \frac{2 C_F}{N_c}\int \frac{\dd^2 \mathbf{x}_{2}}{2\pi}\, \textbf{K}_{012}\; \theta\!\left(Y_f^+\!-\! \Delta_{012}\right)\, \Bigg\{{\mathbf D}_{01}(\Delta_{012})\; \left\langle {\mathbf S}_{012}\right\rangle_{Y_f^+\!-\!\Delta_{012}} - \left\langle {\mathbf S}_{01}\right\rangle_{Y_f^+} \nonumber\\ & & +\int_{0}^{Y_f^+\!-\! \Delta_{012}}\!\! \dd Y_2^+\, {\mathbf D}_{01}(Y_f^+\!-\!Y_2^+) \left\langle {\mathbf S}_{012}\right\rangle_{Y_2^+}\; \d_{Y_f^+} \log \left(\frac{{\mathbf D}_{01}(Y_f^+\!-\!Y_2^+)}{{\mathbf D}_{01}(Y_f^+)}\right) \Bigg\}\, .\label{B_JIMWLK_kc_blah}\end{aligned}$$ As a cross-check, one can formally recover from that equation the strict LL equation by setting $\Delta_{012}$ to $0$ and also $\Delta_{01v}$ to $0$ in the expression . Indeed, in that case, the ratio ${\mathbf D}_{01}(Y_f^+\!-\!Y_2^+)/{\mathbf D}_{01}(Y_f^+)$ becomes independent of $Y_f^+$, so that the second line of the equation does not contribute in the standard Regge limit. 
Calculating explicitly the last term in the equation thanks to the expression , one finally gets $$\begin{aligned} \d_{Y^+} \left\langle {\mathbf S}_{01}\right\rangle_{Y^+}&=&\abar\, \frac{2 C_F}{N_c}\int \frac{\dd^2 \mathbf{x}_{2}}{2\pi}\, \textbf{K}_{012}\; \Bigg\{ \theta\!\left(Y^+\!-\! \Delta_{012}\right)\, \bigg[{\mathbf D}_{01}(\Delta_{012})\; \left\langle {\mathbf S}_{012}\right\rangle_{Y^+\!-\!\Delta_{012}} - \left\langle {\mathbf S}_{01}\right\rangle_{Y^+} \bigg]\nonumber\\ & & \!\!\!\!\!\!+ \abar\, \frac{2 C_F}{N_c}\int \frac{\dd^2 \mathbf{x}_{3}}{2\pi}\, \textbf{K}_{013}\; \theta\!\left(Y^+\!-\! \Delta_{013}\right)\, \theta\!\left(\Delta_{013}\!-\! \Delta_{012}\right)\, \int_{Y^+\!-\! \Delta_{013}}^{Y^+\!-\! \Delta_{012}}\!\! \dd Y_2^+\, {\mathbf D}_{01}(Y^+\!-\!Y_2^+) \left\langle {\mathbf S}_{012}\right\rangle_{Y_2^+}\!\!\! \Bigg\}\, .\label{B_JIMWLK_kc_untrunc}\end{aligned}$$ ### Discarding explicitly NLL terms The aim of this study is to perform a resummation of contributions of higher logarithmic order in the strict Regge limit, in order to provide an improved version of evolution equations at LL accuracy. In the equation , such a resummation of higher order contributions into a modification of LL terms appears most notably as the shift $Y^+ \mapsto Y^+\!-\!\Delta_{012}$ in the first term. However, the equation also contains contributions which are explicitly of order NLL or higher, for example the whole second line, which is a contribution of order ${\cal O}(\abar^2)$. Since we are not including the full NLL BFKL [@Fadin:1998py; @Ciafaloni:1998gs] or NLL BK [@Balitsky:2008zz] kernel, it is not really consistent to keep those terms, which appear only because *exact* probability conservation in the dipole cascade has been required.
However, it presumably makes more sense to require probability conservation only up to terms of order ${\cal O}(\abar^2)$ in the equation , and discard all the explicitly NLL terms, *i.e.* the second line and the higher order terms in the expansion of ${\mathbf D}_{01}(\Delta_{012})$ in $\abar$. One then obtains the truncated equation $$\begin{aligned} \partial_{Y^+} \left\langle {\mathbf S}_{01} \right\rangle_{Y^+}&=& \bar{\alpha}\, \frac{2 C_F}{N_c} \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\: \theta\!\left(Y^+\!-\! \Delta_{012}\right)\, \Big[\left\langle {\mathbf S}_{012}\right\rangle_{Y^+\!-\!\Delta_{012}} - \left\langle {\mathbf S}_{01}\right\rangle_{Y^+} \Big]\, .\label{B_JIMWLK_kc_trunc}\end{aligned}$$ An additional attractive feature of the equation is that one can safely remove the regulator $\rho$ of the transverse integration, like in the unresummed equation , whereas the regulator $\rho$ is necessary in the equation . Physically, $\left\langle {\mathbf S}_{01} \right\rangle_{Y^+}$ and $\left\langle {\mathbf S}_{012} \right\rangle_{Y^+}$ have to take values in the range $[0,1]$, and are decreasing functions of $Y^+$. Indeed, increasing ${Y^+}$ amounts to increasing the density of gluons in the target, making the interaction with any projectile stronger. Moreover, one has $\left\langle {\mathbf S}_{012} \right\rangle_{Y^+}\leq \left\langle {\mathbf S}_{01} \right\rangle_{Y^+}$ because the tripole $012$ decoheres in color more easily than the dipole $01$ by interaction with the same target. Hence, in the standard B-JIMWLK dipole evolution equation , the virtual term is driving the decrease of $\left\langle {\mathbf S}_{01} \right\rangle_{Y^+}$ whereas the real term is slowing down that decrease.
In the square bracket in the kinematically improved equation , the real term is enhanced because $Y^+\!-\!\Delta_{012}\leq Y^+$ implies $\left\langle {\mathbf S}_{012} \right\rangle_{Y^+\!-\!\Delta_{012}}\geq \left\langle {\mathbf S}_{012} \right\rangle_{Y^+}$, whereas the virtual term is unchanged, which makes the evolution of $\left\langle {\mathbf S}_{01} \right\rangle_{Y^+}$ according to the kinematically improved equation slower than according to the standard equation . Obviously, the presence of the theta function in the equation further slows down the evolution of $\left\langle {\mathbf S}_{01} \right\rangle_{Y^+}$. In order to get some insight into the effect of the truncation of explicitly NLL contributions on probability conservation, one can rewrite the equation in integral form as $$\begin{aligned} \left\langle {\mathbf S}_{01}\right\rangle_{Y^+_f}&=& {\mathbf D}_{01}(Y^+_f)\: \left\langle {\mathbf S}_{01}\right\rangle_{0}+ \abar\, \frac{2 C_F}{N_c}\int \frac{\dd^2 \mathbf{x}_{2}}{2\pi}\, \textbf{K}_{012}\; \theta\!\left(Y^+_f\!-\! \Delta_{012}\right)\, \int_{0}^{Y^+_f\!-\! \Delta_{012}}\!\!\!\!\!\!\!\!\!\!\!\!\!\! \dd Y^+_2\, \frac{{\mathbf D}_{01}(Y^+_f)}{{\mathbf D}_{01}(Y^+_2\!\!+\!\! \Delta_{012})} \left\langle {\mathbf S}_{012}\right\rangle_{Y^+_2}\, .\label{B_JIMWLK_kc_trunc_integ_1}\end{aligned}$$ By comparison with the original equation , one can see that the truncation of the explicitly NLL contributions amounts to writing the probability of no splitting before the first splitting inaccurately, as ${\mathbf D}_{01}(Y^+_f)/{\mathbf D}_{01}(Y^+_2\!\!+\!\! \Delta_{012})$ instead of ${\mathbf D}_{01}(Y^+_f\!-\!Y^+_2)$. Those two expressions become equivalent under the replacement $\Delta_{012}\mapsto 0$, as expected because no such truncation is needed for the standard versions of the BFKL and BK equations.
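To make the bookkeeping of the rapidity shift and of the theta function explicit, the right-hand side of the truncated equation can be sketched numerically. The following Python fragment relies on assumptions that are ours, not from the text: the large-$N_c$ mean-field estimate $\left\langle {\mathbf S}_{012}\right\rangle \simeq \left\langle {\mathbf S}_{01}\right\rangle^2$, a small discrete set of daughter weights $w_i$ and shifts $\Delta_i$ standing in for the transverse integration, and linear interpolation of a stored history of $\left\langle {\mathbf S}_{01}\right\rangle_{Y^+}$. It is a caricature of the bookkeeping, not a BK solver:

```python
import math

def interp(hist, dY, Y):
    """Linear interpolation of a sampled history hist[i] ~ S(i*dY)."""
    if Y <= 0.0:
        return hist[0]
    i = min(int(Y / dY), len(hist) - 2)
    t = Y / dY - i
    return (1.0 - t) * hist[i] + t * hist[i + 1]

def rhs(hist, dY, Y, daughters, kc=True):
    """sum_i w_i theta(Y - Delta_i) [S012(Y - Delta_i) - S01(Y)],
    with the mean-field estimate S012 ~ S01^2; kc=False switches off
    both the shift and the theta function (naive LL evolution)."""
    S_now = interp(hist, dY, Y)
    total = 0.0
    for w, Delta in daughters:
        shift = Delta if kc else 0.0
        if Y - shift <= 0.0:   # theta(Y - Delta_i)
            continue
        S012 = interp(hist, dY, Y - shift) ** 2
        total += w * (S012 - S_now)
    return total

dY = 0.1
hist = [math.exp(-0.2 * (i * dY) ** 2) for i in range(51)]  # toy decreasing S(Y) in [0,1]
daughters = [(0.3, 0.0), (0.2, 1.0), (0.1, 3.0)]            # toy (w_i, Delta_i) sets
rhs_kc = rhs(hist, dY, 4.0, daughters, kc=True)
rhs_ll = rhs(hist, dY, 4.0, daughters, kc=False)
```

With any decreasing history taking values in $[0,1]$, the naive right-hand side is negative, while the kinematically constrained one is larger (less negative), in agreement with the slowing down of the evolution discussed above.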
Counter-terms for NLO observables and for the NLL evolution equations --------------------------------------------------------------------- ### Kinematically constrained LL counter-term for observables at NLO Armed with the kinematically improved evolution equation , one can revisit the resummation of LL’s in observables known beyond LO as outlined in the section \[sec:std\_subtr\_LL\]. For observables involving at LO only the dipole operator ${\mathbf S}_{01}$, like DIS structure functions or forward single inclusive particle production in pA collisions, one replaces the LO term $\left\langle {\mathbf S}_{01} \right\rangle_{0}$ by $\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}$ evolved according to the kinematically constrained LL equation , up to a counter-term, *i.e.* $$\left\langle {\mathbf S}_{01} \right\rangle_{0}=\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{kcLL}} +\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{kcLL}}\, .\label{def_kcLL_ct}$$ From the relation , one gets the expression of the counter-term by integration of the evolution equation , which reads[^15] $$\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{kcLL}}= -\, \bar{\alpha}\: \frac{2\, C_F}{N_c} \int_{0}^{Y_f^+} \!\!\!\!\dd Y_2^+ \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \theta\!\left(Y_2^+\!-\!
\Delta_{012}\right)\, \Big[\left\langle {\mathbf S}_{012}\right\rangle_{Y_2^+\!-\!\Delta_{012}} - \left\langle {\mathbf S}_{01}\right\rangle_{Y_2^+} \Big]\label{kcLL_ct} \, .$$ As in the section \[sec:std\_subtr\_LL\], it is convenient to split that counter-term into a counter-term for the real NLO correction and a counter-term for the virtual NLO correction, as $$\begin{aligned} \delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{kcLL, real}}&=& \bar{\alpha}\: \frac{2\, C_F}{N_c} \int_{0}^{Y_f^+} \!\!\!\!\dd Y_2^+ \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \theta\!\left(Y_2^+\!-\! \Delta_{012}\right)\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y_2^+\!-\! \Delta_{012}}\Big]\label{kcLL_ct_real}\\ \delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{kcLL, virt}}&=& -\, \bar{\alpha}\: \frac{2\, C_F}{N_c} \int_{0}^{Y_f^+} \!\!\!\!\dd Y_2^+ \: \Big[1- \left\langle {\mathbf S}_{01} \right\rangle_{Y_2^+}\Big] \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \theta\!\left(Y_2^+\!-\! \Delta_{012}\right)\label{kcLL_ct_virt}\, .\end{aligned}$$ The real counter-term can be rewritten as $$\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{kcLL, real}}= \bar{\alpha}\: \frac{2\, C_F}{N_c} \int_{0}^{Y_f^+} \!\!\!\!\dd Y_2^+ \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \theta\!\left(Y_f^+ \!-\! Y_2^+\!-\! \Delta_{012}\right)\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y_2^+}\Big]\label{kcLL_ct_real_2}\, ,$$ or, in $k^+$ variables, as $$\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{\log(k_f^+/k^+_{\min})}\bigg|_{\textrm{kcLL, real}}= \bar{\alpha}\: \frac{2\, C_F}{N_c} \int_{k^+_{\min}}^{k^+_f}\frac{\textrm{d}k^+_2}{k^+_2}\; \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \theta\!\left(k^+_f\, {x}_{01}^2 \!-\! 
k^+_2\, l_{012}^2\right)\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{\log(k_2^+/k^+_{\min})}\Big] \label{kcLL_ct_real_3}\, .$$ The only difference between that kinematically improved counter-term and its analog associated with the naive LL evolution is the presence of the theta function. Thanks to the general behavior and of $l_{012}$, that theta function cuts precisely the regime which, in the section \[sec:NLO\_IF\_analysis\], has been found not to contribute to high-energy LL’s within the real NLO corrections to DIS. Hence, that counter-term allows one to subtract only the high-energy LL contributions actually present in the real NLO corrections to DIS and presumably to other observables, provided high-energy factorization is valid. In particular, when evaluating in the dilute approximation and in Mellin representation the contribution to the real counter-term coming from the regime $z_2 \ll z_f$ and ${x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2$, one obtains again the expression . Concerning the counter-term for the virtual NLO corrections, the only change between the kinematically improved counter-term and the naive one is also the presence of a theta function. In order to understand the impact of this change, it is convenient to study the difference between these two counter-terms $$\begin{aligned} \!\!\!\!\!\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{kcLL, virt}}-\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL, virt}} &=&\, \bar{\alpha}\: \frac{2\, C_F}{N_c} \int_{0}^{Y_f^+} \!\!\!\!\dd Y_2^+ \: \Big[1- \left\langle {\mathbf S}_{01} \right\rangle_{Y_2^+}\Big] \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \theta\!\left(\Delta_{012}\!-\!
Y_2^+\right)\nonumber\\ &=&\, \bar{\alpha}\: \frac{2\, C_F}{N_c} \int_{0}^{Y_f^+} \!\!\!\!\dd Y_2^+ \: \Big[1- \left\langle {\mathbf S}_{01} \right\rangle_{Y_2^+}\Big] \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \theta\!\left(l_{012}^2-{x}_{01}^2\, e^{Y^+_2} \right) \label{kcLL-LL_ct_virt} .\end{aligned}$$ In the expression , the theta function completely cuts the anti-collinear regimes ${x}_{20}^2\ll {x}_{10}^2 \simeq {x}_{21}^2$ and ${x}_{21}^2\ll {x}_{10}^2 \simeq {x}_{20}^2$, thus guaranteeing that the UV divergences present in the counter-term are identical to those in the naive one . As the anti-collinear regimes are also untouched by the kinematical constraint in the case of the real counter-term , it is clear that the UV divergences still cancel exactly between the real and virtual counter-terms, as they should. Moreover, for $Y^+_2$ not too small, the integral in $\mathbf{x}_{2}$ receives contributions only from the collinear regime ${x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2$, so that it can be calculated approximately as $$\int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \theta\!\left(l_{012}^2-{x}_{01}^2\, e^{Y^+_2} \right) \simeq \int_{{x}_{01}^2\, e^{Y^+_2}}^{+\infty} \frac{\dd ({x}_{02}^2)}{2} \frac{{x}_{01}^2}{{x}_{02}^4} = \frac{1}{2}\, e^{-Y^+_2} \qquad \textrm{for $Y^+_2$ not too small,}\label{kc_kernel_virt}$$ and thus $$\begin{aligned} \!\!\!\!\!\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{kcLL, virt}}-\delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL, virt}} & \simeq &\, \bar{\alpha}\: \frac{C_F}{N_c} \int_{0}^{Y_f^+} \!\!\!\!\dd Y_2^+ \: e^{-Y^+_2} \:\Big[1- \left\langle {\mathbf S}_{01} \right\rangle_{Y_2^+}\Big]\nonumber\\ & \rightarrow & \, \bar{\alpha}\: \frac{C_F}{N_c} \int_{0}^{+\infty} \!\!\!\!\dd Y_2^+ \: e^{-Y^+_2} \:\Big[1- \left\langle {\mathbf S}_{01} \right\rangle_{Y_2^+}\Big]\quad \textrm{for}\quad Y_f^+\rightarrow +\infty
\label{kcLL-LL_ct_virt2}\, .\end{aligned}$$ Indeed, the convergence of the integral in $Y_2^+$ is guaranteed by the exponential decay because $0\leq \left\langle {\mathbf S}_{01} \right\rangle_{Y^+} \leq 1$. Hence, the difference between the two virtual counter-terms is a finite ${\cal O}(\abar)$ contribution, with no large logs. On the other hand, each of these two counter-terms contains LL contributions of order ${\cal O}(\abar Y_f^+)$ at large $Y_f^+$, with a UV divergent coefficient. Hence, the kinematical constraint does not modify the LL terms subtracted by the counter-term for the virtual NLO correction, by contrast to the case of the counter-term for the real NLO corrections. ### Subtracting kinematical spurious singularities from NLL evolution equations In the naive version of the Regge limit, one can calculate the $Y^+$ evolution of the expectation value of multipole operators systematically in powers of $\abar$. For example, for the dipole operator ${\mathbf S}_{01}$, one has $$\partial_{Y^+} \left\langle {\mathbf S}_{01} \right\rangle_{Y^+} = \partial_{Y^+} \left\langle {\mathbf S}_{01} \right\rangle_{Y^+}\bigg|_{LL} + \partial_{Y^+} \left\langle {\mathbf S}_{01} \right\rangle_{Y^+}\bigg|_{NLL} +{\cal O}(\abar^3)\, .\label{series_evol_naive}$$ The first term on the right-hand side of , proportional to $\abar$, is given by the equation (with $\eta=Y^+$), whereas the second term, proportional to $\abar^2$, has been calculated in ref. [@Balitsky:2008zz]. As discussed at length in this paper, that perturbative expansion breaks down due to the appearance of larger and larger higher order corrections, requiring a resummation.
The largest corrections at each order are resummed by taking the kinematically improved equation instead of as the first order in the expansion, *i.e.* $$\partial_{Y^+} \left\langle {\mathbf S}_{01} \right\rangle_{Y^+} = \partial_{Y^+} \left\langle {\mathbf S}_{01} \right\rangle_{Y^+}\bigg|_{kcLL} + \partial_{Y^+} \left\langle {\mathbf S}_{01} \right\rangle_{Y^+}\bigg|_{NLL} -\partial_{Y^+} \left\langle {\mathbf S}_{01} \right\rangle_{Y^+}\bigg|_{NLL, c.t.} +{\cal O}(\abar^3)\, ,\label{series_evol_kc}$$ where the third term is a counter-term accounting for the difference between the LL evolution equations and , which should remove the most pathological parts of the naive NLL contribution to the evolution, corresponding in Mellin space in the dilute regime to the collinear triple pole at $\g=1$, see section \[sec:spurious\_sing\]. That counter-term can be calculated by re-expanding the kinematically improved equation in the naive Regge limit, and then collecting the terms of order ${\cal O}(\abar^2)$. Formally, it amounts to performing a Taylor expansion around $\Delta_{012}=0$ in as $$\begin{aligned} \partial_{Y^+} \left\langle {\mathbf S}_{01} \right\rangle_{Y^+}\bigg|_{kcLL}&=& \bar{\alpha}\, \frac{2 C_F}{N_c} \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\: \theta\!\left(Y^+\!-\!
\Delta_{012}\right)\, \Big[\left\langle {\mathbf S}_{012}\right\rangle_{Y^+\!-\!\Delta_{012}} - \left\langle {\mathbf S}_{01}\right\rangle_{Y^+} \Big]\nonumber\\ &=& \bar{\alpha}\, \frac{2 C_F}{N_c} \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\: \Big[\left\langle {\mathbf S}_{012}\right\rangle_{Y^+} - \left\langle {\mathbf S}_{01}\right\rangle_{Y^+} - \Delta_{012}\: \partial_{Y^+} \left\langle {\mathbf S}_{012}\right\rangle_{Y^+} \nonumber\\ & &\qquad\qquad\qquad\qquad\qquad\qquad + {\cal O}\left(\Delta_{012}^2 \partial^2_{Y^+} \left\langle {\mathbf S}_{012}\right\rangle_{Y^+}\right) \Big] \, .\label{B_JIMWLK_kc_trunc_reexpand}\end{aligned}$$ The theta function completely disappears when doing that Taylor expansion at $Y^+>0$. On the right-hand side of the equation , the $\partial_{Y^+}$ derivatives correspond to the evolution following the standard LL B-JIMWLK equations, without kinematical constraint, and each $\partial_{Y^+}$ gives a power of $\abar$. Hence, one can read off from the expression of the counter-term for the evolution $$\begin{aligned} \partial_{Y^+} \left\langle {\mathbf S}_{01} \right\rangle_{Y^+}\bigg|_{NLL, c.t.} &=& -\bar{\alpha}\, \frac{2 C_F}{N_c} \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\: \Delta_{012}\: \bigg\{ \partial_{Y^+} \left\langle {\mathbf S}_{012}\right\rangle_{Y^+}\bigg|_{LL} \bigg\}\nonumber\\ &=& -\bar{\alpha} \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\: \Delta_{012}\: \bigg\{ \partial_{Y^+} \left\langle {\mathbf S}_{02}\,{\mathbf S}_{21}\right\rangle_{Y^+}\bigg|_{LL} -\frac{1}{N_c^2}\; \partial_{Y^+} \left\langle{\mathbf S}_{01}\right\rangle_{Y^+}\bigg|_{LL} \bigg\}\, .\label{NLL_ct_1}\end{aligned}$$ In the first term, one needs the second equation in Balitsky’s hierarchy [@Balitsky:1995ub] at LL accuracy in the naive Regge limit, which reads (see *e.g.* eq.(152) in ref.
[@Balitsky:2001gj]) $$\begin{aligned} \partial_{Y^+} \left\langle {\mathbf S}_{02}\,{\mathbf S}_{21}\right\rangle_{Y^+} &=& \bar{\alpha} \int \frac{\textrm{d}^2\mathbf{x}_{3}}{2\pi}\; \bigg\{ \textbf{K}_{023}\: \left\langle \left[{\mathbf S}_{03} {\mathbf S}_{32} \!-\! {\mathbf S}_{02}\right] {\mathbf S}_{21} \right\rangle_{Y^+} +\textbf{K}_{213}\: \left\langle {\mathbf S}_{02} \left[{\mathbf S}_{23} {\mathbf S}_{31} \!-\! {\mathbf S}_{21}\right]\right\rangle_{Y^+}\nonumber\\ & & \qquad\qquad\qquad -\frac{1}{2\, N_c^2}\: \Big[ \textbf{K}_{023}\!+\!\textbf{K}_{213}\!-\!\textbf{K}_{013}\Big]\, \left\langle{\mathbf S}_{023123} \!+\! {\mathbf S}_{032132} \!-\! 2\, {\mathbf S}_{01} \right\rangle_{Y^+} \bigg\}\, ,\label{B_JIMWLK_2nd_Eq}\end{aligned}$$ where we have introduced the fundamental sextupole operator $${\mathbf S}_{012345}=\frac{1}{N_c} \textrm{Tr} \left(U_{\mathbf{x}_{0}}\, U_{\mathbf{x}_{1}}^\dag U_{\mathbf{x}_{2}}\, U_{\mathbf{x}_{3}}^\dag U_{\mathbf{x}_{4}}\, U_{\mathbf{x}_{5}}^\dag \right) \, .\label{def_sextupole}$$ Using the first two equations and of Balitsky’s hierarchy, one obtains the final expression for the counter-term for the NLL evolution equation $$\begin{aligned} \partial_{Y^+} \left\langle {\mathbf S}_{01} \right\rangle_{Y^+}\bigg|_{NLL, c.t.} &\!\!\! =&\!\!\! -\bar{\alpha}^2 \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\, \Delta_{012}\, \int \frac{\textrm{d}^2\mathbf{x}_{3}}{2\pi} \bigg\{ \textbf{K}_{023}\, \left\langle \left[{\mathbf S}_{03} {\mathbf S}_{32} \!-\! {\mathbf S}_{02}\right] {\mathbf S}_{21} \right\rangle_{Y^+} +\textbf{K}_{213}\, \left\langle {\mathbf S}_{02} \left[{\mathbf S}_{23} {\mathbf S}_{31} \!-\! 
{\mathbf S}_{21}\right]\right\rangle_{Y^+}\bigg\}\nonumber\\ & & + \frac{\bar{\alpha}^2}{2\, N_c^2} \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\, \Delta_{012}\, \int \frac{\textrm{d}^2\mathbf{x}_{3}}{2\pi} \Big[ \textbf{K}_{023}\!+\!\textbf{K}_{213}\!-\!\textbf{K}_{013}\Big]\, \left\langle{\mathbf S}_{023123} \!+\! {\mathbf S}_{032132} \!-\! 2\, {\mathbf S}_{01} \right\rangle_{Y^+}\nonumber\\ & & + \frac{\bar{\alpha}^2}{N_c^2} \left[\int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\, \Delta_{012}\right]\, \int \frac{\textrm{d}^2\mathbf{x}_{3}}{2\pi}\, \textbf{K}_{013}\, \left\langle {\mathbf S}_{03} {\mathbf S}_{31} \!-\! {\mathbf S}_{01}\right\rangle_{Y^+} \, .\label{NLL_ct_2}\end{aligned}$$ That counter-term is supposed to cancel the largest and most pathological contributions appearing in the B-JIMWLK evolution equation for the dipole at NLL accuracy [@Balitsky:2008zz] in the naive Regge limit. Such contributions should behave as triple logs in the collinear limit [@Salam:1998tj]. However, it is rather difficult to track down those triple logs in the NLL evolution equation or in the counter-term without further approximation. One can nevertheless notice that all the multipole operators appearing in the counter-term also appear in the NLL evolution equation [@Balitsky:2008zz] for the dipole $ \left\langle {\mathbf S}_{01} \right\rangle_{Y^+}$, as expected for consistency. In order to analyse further the counter-term , it is convenient to consider the dilute target case, and take accordingly the $2$ gluons exchange approximation for the expectation value of all the operators. 
Expanding the Wilson lines in the sextupole operator , collecting the terms of order ${\cal O}(g^2)$ and comparing the result with the similar expansion for the dipole operator ${\mathbf S}_{ij}$, one finds that $$\begin{aligned} 1- \left\langle{\mathbf S}_{012345}\right\rangle_{Y^+} &\simeq & \left\langle {\textbf N}_{01} +{\textbf N}_{03}+{\textbf N}_{05}+{\textbf N}_{21}+{\textbf N}_{23}+{\textbf N}_{25}+{\textbf N}_{41}+{\textbf N}_{43}+{\textbf N}_{45}\right\rangle_{Y^+}\nonumber\\ & &- \left\langle{\textbf N}_{02}+{\textbf N}_{04}+{\textbf N}_{24}+{\textbf N}_{13}+{\textbf N}_{15}+{\textbf N}_{35}\right\rangle_{Y^+}\, ,\end{aligned}$$ so that $$\begin{aligned} 1- \left\langle{\mathbf S}_{023123}\right\rangle_{Y^+} & \simeq & \left\langle {\textbf N}_{01} \right\rangle_{Y^+}\, .\end{aligned}$$ Hence, in the $2$ gluons exchange approximation, $$\left\langle{\mathbf S}_{023123} \!+\! {\mathbf S}_{032132} \!-\! 2\, {\mathbf S}_{01} \right\rangle_{Y^+} \simeq 0\, ,$$ and thus the term in the second line of the expression disappears completely, and the counter-term reduces to $$\begin{aligned} \!\!\!\!\!\!\!\!\!\!\!\!\partial_{Y^+} \left\langle {\mathbf S}_{01} \right\rangle_{Y^+}\bigg|_{NLL, c.t.; \textrm{ dilute}} &\!\!\! = &\!\!\! \bar{\alpha}^2 \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\, \Delta_{012}\, \bigg[ \int \frac{\textrm{d}^2\mathbf{x}_{3}}{2\pi} \textbf{K}_{023}\, \left\langle {\mathbf N}_{03}\!+\!{\mathbf N}_{32} \!-\! {\mathbf N}_{02} \right\rangle_{Y^+} \nonumber\\ & & + \int \frac{\textrm{d}^2\mathbf{x}_{3}}{2\pi}\textbf{K}_{213}\, \left\langle {\mathbf N}_{23}\!+\! {\mathbf N}_{31} \!-\! {\mathbf N}_{21}\right\rangle_{Y^+} - \frac{1}{N_c^2} \, \int \frac{\textrm{d}^2\mathbf{x}_{3}}{2\pi}\, \textbf{K}_{013}\, \left\langle {\mathbf N}_{03}\!+\! {\mathbf N}_{31} \!-\! 
{\mathbf N}_{01}\right\rangle_{Y^+}\bigg] \, .\label{NLL_ct_2glue}\end{aligned}$$ Each of the three terms in the bracket in the expression corresponds to the right-hand side of the BFKL equation . Therefore, introducing the Mellin representation for the dipole amplitude $\left\langle {\mathbf N}_{ij}\right\rangle_{Y^+}$ and using the characteristic function $\chi(\g)$ of the BFKL kernel , one gets $$\begin{aligned} \!\!\!\!\!\!\!\!\!\!\!\!\partial_{Y^+} \left\langle {\mathbf S}_{01} \right\rangle_{Y^+}\bigg|_{NLL, c.t.; \textrm{ dilute}} &\!\!\! = &\!\!\! \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\; {\cal N}(\g,Y^+)\: \bar{\alpha}^2\, \chi(\g) \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\, \Delta_{012}\, \nonumber\\ & & \qquad \times \left[ \left(\frac{x_{02}^2 \, Q_0^2}{4}\right)^\g + \left(\frac{x_{21}^2 \, Q_0^2}{4}\right)^\g \; - \frac{1}{N_c^2} \, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\right] \nonumber\\ & \!\!\! = & \!\!\! \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\; \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\, {\cal N}(\g,Y^+)\; \bar{\alpha}^2\, \chi(\g)\, \left[ {\cal F}_{\Delta}(\g) - \frac{1}{2\, N_c^2}\, {\cal F}_{\Delta}(0) \right] \, ,\label{NLL_ct_2glue_2}\end{aligned}$$ with the notation $${\cal F}_{\Delta}(\g) = \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi} \textbf{K}_{012}\, \Delta_{012}\, \left[\left(\frac{x_{02}^2}{x_{01}^2}\right)^\g+ \left(\frac{x_{21}^2}{x_{01}^2}\right)^\g\: \right]\, .$$ Thanks to the definition of the shift $\Delta_{012}$, the regimes $x_{02}\ll x_{21} \simeq x_{01}$ and $x_{21}\ll x_{02} \simeq x_{01}$ are explicitly cut-off. Hence, potential problems for the convergence of the $\mathbf{x}_{2}$ integration can come only from the large $\mathbf{x}_{2}$ limit, *i.e.* the regime $x_{01}\ll x_{02} \simeq x_{21}$.
For that reason, ${\cal F}_{\Delta}(\g)$ cannot have singularities to the left of the line $\textrm{Re}(\g)=1/2$, but only to the right of it, and ${\cal F}_{\Delta}(0)$ is a well-defined constant. The first singularity of ${\cal F}_{\Delta}(\g)$ to the right of the line $\textrm{Re}(\g)=1/2$ is obtained by taking the integrand in the limit $x_{01}\ll x_{02} \simeq x_{21}$, as $${\cal F}_{\Delta}(\g)\bigg|_{1\textrm{st sing.}} = \int_{x_{01}^2}^{+\infty} \frac{\textrm{d}(x_{02}^2)}{2}\: \frac{x_{01}^2}{x_{02}^4}\, \log\left(\frac{x_{02}^2}{x_{01}^2}\right)\; 2 \left(\frac{x_{02}^2}{x_{01}^2}\right)^\g= \frac{1}{(1\!-\!\g)^2} \, .$$ The next singularities of ${\cal F}_{\Delta}(\g)$, located at $\g=2$ and higher integers, are irrelevant for our purposes. Finally, the counter-term in the 2 gluons approximation and in Mellin representation is driven in the collinear limit $(x_{01}^2 \, Q_0^2) \rightarrow 0$ by a triple pole at $\g=1$ $$\abar^2 \chi(\g)\, \left[{\cal F}_{\Delta}(\g)- \frac{1}{2\, N_c^2}\, {\cal F}_{\Delta}(0)\right]= \abar^2 \left[ \frac{1}{(1\!-\!\g)^3} + {\cal O}\left(\frac{1}{(1\!-\!\g)}\right) \right] \qquad \textrm{for } \g\rightarrow 1 \label{kernel_NLL_ct_coll}$$ and in the anti-collinear limit by a simple pole at $\g=0$ $$\abar^2 \chi(\g)\, \left[{\cal F}_{\Delta}(\g)- \frac{1}{2\, N_c^2}\, {\cal F}_{\Delta}(0)\right]= \abar^2 \left[ \frac{1}{\g}\; {\cal F}_{\Delta}(0) \: \left(1\!-\!\frac{1}{2\, N_c^2}\right) + {\cal O}\left(\g^2\right) \right] \qquad \textrm{for } \g\rightarrow 0\, . \label{kernel_NLL_ct_anticoll}$$ The triple pole is the same[^16] as the one appearing in the characteristic function $\chi_1(\g)$ of the naive NLL evolution equation , see equations and . Hence, in the dilute regime, the counter-term cancels exactly the spurious collinear triple logs appearing in the NLL evolution equation obtained in the naive Regge limit, which manifest themselves as a triple pole at $\g=1$ in Mellin space.
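The first-singularity integral can be cross-checked numerically. Below is a minimal sketch (in terms of the variable $u=x_{02}^2/x_{01}^2$, in which the integrand reduces to $u^{\g-2}\log u$ on $[1,+\infty)$; the helper name is ours, purely illustrative):

```python
# Numerical cross-check of the first singularity of F_Delta(gamma):
# with u = x_02^2/x_01^2, the large-x_02 integrand reduces to
# u^(gamma-2)*log(u) on [1, +oo), which integrates to 1/(1-gamma)^2
# for Re(gamma) < 1.
import numpy as np
from scipy.integrate import quad

def first_singularity_integral(gamma):
    # hypothetical helper name, for illustration only
    value, _ = quad(lambda u: u**(gamma - 2.0) * np.log(u), 1.0, np.inf)
    return value

for gamma in (0.0, 0.3, 0.5, 0.9):
    expected = 1.0 / (1.0 - gamma)**2
    assert abs(first_singularity_integral(gamma) - expected) < 1e-6 * expected
```

Multiplying the double pole $1/(1\!-\!\g)^2$ by the simple pole of $\chi(\g)$ at $\g=1$ then yields the triple pole of the counter-term in the collinear limit.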
Apart from this, the counter-term only modifies contributions to the NLL kernel which behave at most as single logs in the collinear and anti-collinear regimes, and thus cannot overcome the LL contributions. As announced in section \[sec:spurious\_sing\], the kinematical constraint allows one to deal with the triple pole at $\g=1$ in $\chi_1(\g)$ by correcting the kinematics in the collinear limit, but leaves the double poles at $\g=1$ and $\g=0$ unaffected. Those would require distinct resummations.

Conclusion and Discussion\[sec:Discussion\]
===========================================

In momentum space, LL’s arise in initial-state parton or dipole cascades from configurations strictly ordered in $k^+$ and $k^-$ simultaneously. That fact is overlooked in standard derivations of the high-energy LL evolution equations, where only one ordering is strictly imposed. In momentum space, one can ensure that both orderings are satisfied by introducing a kinematical constraint in the kernel of the BFKL equation. In this paper, the translation of the kinematical constraint from momentum space to mixed space has been studied, because mixed space is the most suitable one for high-energy evolution equations with gluon saturation, like BK, JIMWLK, and Balitsky’s hierarchy. The mixed-space version of the kinematical constraint has been understood in section \[sec:NLO\_IF\_analysis\], by extracting LL contributions from the explicit expressions of the real NLO corrections to DIS structure functions, known in mixed space. The result of that analysis has been used in section \[sec:kcBK\] to write down kinematically improved high-energy LL evolution equations in mixed space, with the form of the virtual corrections fixed by the requirement of probability conservation along the dipole cascade. More precisely, two equations have been proposed. The first one, eq. , satisfies exact probability conservation, but also includes a tower of higher-order corrections, which depend on a UV cut-off.
The second one, eq. , is UV finite and contains only terms of (improved-) LL accuracy, but obeys probability conservation only up to NLL corrections. Presumably, the equation should be preferred for all practical purposes, except for Monte Carlo simulations of dipole cascades, which require exact probability conservation. The equations and are written as generalizations of the first equation of Balitsky’s hierarchy . But of course, performing the usual mean-field or dilute approximations, one obtains kinematically constrained versions of the BK or BFKL equations in mixed space. By contrast, constructing a kinematically constrained version of the full hierarchy of Balitsky or of the JIMWLK equation seems to be a complicated task, and is left for further studies. As a remark, note that the kinematical improvement of the high-energy evolution equations in mixed space has been discussed and performed only within the factorization scheme with a cut-off in $k^+$. In other schemes, the kinematical constraint should look completely different. Its form in the factorization schemes in $k^-$ or in rapidity $y$ mentioned in section \[sec:evol\_variables\] could be guessed, but not cross-checked, because no explicit higher-order calculation has been done in those schemes in mixed space. In particular, in the $k^-$ scheme, the kinematical constraint should affect the small daughter dipole regimes instead of the large daughter dipole regime. As a side result of the present paper, it has been found in appendix \[App:locality\_kc\] that despite naive expectations based on light-front perturbation theory, and by contrast to the claim made in ref. [@Motyka:2009gi], the kinematical constraint is local and not global in a cascade, *i.e.* each emitted gluon has its $k^+$ and $k^-$ constrained by the ones of the two partons forming the parent color dipole, but not by the $k^+$’s and $k^-$’s of the other partons present in the cascade, up to NLL corrections.
In phenomenological studies, one uses the fixed-order perturbative results, at LO or NLO, for the considered observable, together with the resummation of large high-energy logs, at LL or NLL accuracy. Accordingly, the kinematically consistent BK equation (kcBK) obtained from eq. is useful in practice at LO+LL accuracy, NLO+LL accuracy, or NLO+NLL accuracy. For consistency, it should be used to evolve the dipole amplitude over the appropriate range $Y^+_f$, as discussed in section \[sec:evol\_variables\], like the expression in the case of DIS. First, as a finite-energy correction, the kinematical constraint is an improvement of the theoretical framework at LO+LL accuracy, conceptually analogous to the one provided by the inclusion of running coupling effects. Both can be understood as a resummation of terms of all logarithmic orders in the naive perturbative expansion. The higher-order terms associated with running coupling effects or kinematical constraint effects are independent of each other, and their resummation affects different parts of the LL evolution equations. Therefore, it is straightforward to take both the kinematical constraint and the running coupling into account: one should just replace the factor $\abar \textbf{K}_{012}$ in the equation by the kernel with the chosen running coupling prescription, for example Balitsky’s prescription [@Balitsky:2006wa]. In practice, the evolution according to the BK equation with both the kinematical constraint and the running coupling (kcrcBK) should be significantly slower than the evolution according to the BK equation with just running coupling (rcBK), especially at the beginning of the evolution. Hence, going from rcBK to kcrcBK should further improve the agreement between phenomenological studies at LO+LL accuracy and the DIS data [@Albacete:2010sy; @Kuokkanen:2011je]. Second, the kinematical constraint is absolutely necessary for studies at NLO+LL accuracy.
The LL evolution equations with kinematical constraint make it possible, by construction, to resum the LL’s properly and to remove them exactly from the NLO corrections thanks to the counter-term . By contrast, as discussed in section \[sec:NLO\_IF\_analysis\], the standard LL evolution equations without kinematical constraint overestimate the LL’s present in the NLO corrections, and thus fail to resum them properly. In that case, the leftover NLO correction after the failed resummation is negative and overcomes the LO term in the collinear regime, although that effect disappears progressively in the limit of high energy (or large interval $Y^+_f$), in which the kinematical constraint would become weaker and weaker. This phenomenon has indeed been observed for single inclusive hadron production at forward rapidity in pp or pA collisions in ref. [@Stasto:2013cha], which is the only phenomenological study performed so far at NLO+LL accuracy with gluon saturation, and which does not include the kinematical constraint[^17]. Third, the kinematical constraint is a crucial building block for studies at NLO+NLL accuracy. As discussed in section \[sec:spurious\_sing\], the NLL evolution equations contain large corrections in the collinear and anti-collinear regimes, making them useless without further collinear resummations. The kinematical constraint corresponds to the resummation of the parametrically largest of those corrections. Hence, one should take the kinematically improved equation for the LL term, and use the counter-term to remove the contributions induced by the kinematical constraint from the naive NLL terms in the evolution equations. The large corrections associated with the running coupling can be dealt with in a similar way, by taking the LL term with Balitsky’s running coupling prescription [@Balitsky:2006wa], or with the smallest dipole prescription.
On the other hand, there are also large NLL contributions induced by the DGLAP evolution in the collinear and anti-collinear regimes, whose resummation in mixed space is left for further studies.

I thank Emil Avsar, Ian Balitsky, Giovanni Chirilli, Anna Staśto, Heribert Weigert and Bo-Wen Xiao for helpful discussions at various stages of this project. This work is funded by European Research Council grant HotLHC ERC-2011-StG-279579; by Ministerio de Ciencia e Innovación of Spain under project FPA2011-22776; by Xunta de Galicia (Consellería de Educación); by the Spanish Consolider-Ingenio 2010 Programme CPAN and by FEDER. This project was started while I was a research associate at Brookhaven National Laboratory, working under the Contract No. \#DE-AC02-98CH10886 with the U.S. Department of Energy.

Locality of the kinematical constraint in Light-Front perturbation theory\[App:locality\_kc\]
=============================================================================================

In this appendix, the discussion of the kinematical constraint based on Light-Front perturbation theory presented in section \[sec:kinematical\_approx\] is extended, in order to show that the constraint is local in the parton or dipole cascade. More precisely, the aim is to show that gluon emission by a dipole is insensitive, at LL accuracy, to gluon emission by another dipole in the same dipole cascade.

Diagrams without gluon splittings
---------------------------------

1to 10cm[ ![\[Fig:qqbarggg\_A\] Examples of light-front perturbation theory diagrams without gluon splitting contributing to the $q\bar{q}ggg$ Fock component of a photon wave-function.](qqbarggg_A.eps "fig:") ]{} 2to 10cm[ ![\[Fig:qqbarggg\_A\] Examples of light-front perturbation theory diagrams without gluon splitting contributing to the $q\bar{q}ggg$ Fock component of a photon wave-function.](qqbarggg_A_prime.eps "fig:") ]{}

The diagrams **A** and **A’** shown in Fig.
\[Fig:qqbarggg\_A\] are among the simplest ones for which the question of local *versus* global kinematical constraint is relevant. In both of them, the parent dipole $01$ has split into two dipoles $02$ and $21$ by emission of the gluon $2$. Then, each of the two dipoles emits another gluon. The light-front diagrams **A** and **A’** differ only by the order of emission of the last two gluons, called $3$ and $4$. The diagrams **A** and **A’** have the same color flow and the same momentum flow, and thus their vertices have identical expressions. Moreover, the energy denominators $I$, $II$ and $IV$ are the same for the two graphs, and read $$\begin{aligned} & & ED_{I}^{\textbf{A}}= ED_{I}^{\textbf{A'}}= -\frac{Q^2}{2 q^+}- k^-_0 - k^-_1 +i \epsilon \label{ED_I_A}\\ & & ED_{II}^{\textbf{A}}= ED_{II}^{\textbf{A'}}= -\frac{Q^2}{2 q^+}- k^-_0 - {k_{1}'}^- - k^-_2 +i \epsilon \label{ED_II_A}\\ & & ED_{IV}^{\textbf{A}}= ED_{IV}^{\textbf{A'}}= -\frac{Q^2}{2 q^+}- {k_{0}'}^- - {k_{1}''}^- - k^-_2- k^-_3- k^-_4 +i \epsilon\, . \label{ED_IV_A}\end{aligned}$$ Hence, the only difference between the expressions of the diagrams **A** and **A’** comes from the energy denominator $III$, which reads $$\begin{aligned} & & ED_{III}^{\textbf{A}}= -\frac{Q^2}{2 q^+}- k^-_0 - {k_{1}''}^- - k^-_2 - k^-_3 +i \epsilon \label{ED_III_A}\\ & & ED_{III}^{\textbf{A'}}= -\frac{Q^2}{2 q^+}- {k_{0}'}^- - {k_{1}'}^- - k^-_2 - k^-_4 +i \epsilon\, .
\label{ED_III_A_prime}\end{aligned}$$ Then, the sum of the diagrams **A** and **A’** is of the form $$\begin{aligned} \textbf{A} + \textbf{A'}&=& \textrm{vertices} \times \frac{1}{ED_{I}^{\textbf{A}}}\; \frac{1}{ED_{II}^{\textbf{A}}}\; \left[\frac{1}{ED_{III}^{\textbf{A}}} +\frac{1}{ED_{III}^{\textbf{A'}}} \right]\; \frac{1}{ED_{IV}^{\textbf{A}}}\nonumber\\ &=& \textrm{vertices} \times \frac{1}{ED_{I}^{\textbf{A}}}\; \left[\frac{1}{ED_{II}^{\textbf{A}}}+\frac{1}{ED_{IV}^{\textbf{A}}}\right]\; \frac{1}{ED_{III}^{\textbf{A}}}\; \frac{1}{ED_{III}^{\textbf{A'}}}\label{A+A_prime} \, ,\end{aligned}$$ using the identity $$ED_{III}^{\textbf{A}}+ED_{III}^{\textbf{A'}}= ED_{II}^{\textbf{A}}+ED_{IV}^{\textbf{A}}$$ satisfied by the energy denominators , , and . The aim is now to understand sufficient kinematical conditions for each gluon emission to come with a large soft log. Following the reasoning introduced in section \[sec:kinematical\_approx\], one finds that the conditions $$\begin{aligned} & & k^+_3,\: k^+_4 \ll k^+_2 \ll k^+_0,\: k^+_1 \leq q^+ \label{kplus_ord_A}\\ & & k^-_3,\: k^-_4 \gg k^-_2 \gg \frac{Q^2}{2 q^+}+ k^-_0 + k^-_1\label{kminus_ord_A} \, ,\end{aligned}$$ with no constraint on the relative size of $k^+_3$ and $k^+_4$ and of $k^-_3$ and $k^-_4$, are sufficient in order to justify the approximations $$\begin{aligned} ED_{II}^{\textbf{A}} & \simeq & - k^-_2 +i \epsilon \\ ED_{III}^{\textbf{A}} & \simeq & - k^-_3 +i \epsilon \\ ED_{III}^{\textbf{A'}} & \simeq & - k^-_4 +i \epsilon\, .\end{aligned}$$ Assuming only the $k^+$ ordering , the energy denominator $IV$ simplifies as $$\begin{aligned} ED_{IV}^{\textbf{A}} & \simeq & -\frac{Q^2}{2 q^+}- \frac{(\mathbf{k}_0\!-\!\mathbf{k}_4)^2}{2\, {k_0}^+} - \frac{(\mathbf{k}_1\!-\!\mathbf{k}_2\!-\!\mathbf{k}_3)^2}{2\, {k_1}^+} - \frac{\mathbf{k}_2^2}{2\, {k_2}^+} - \frac{\mathbf{k}_3^2}{2\, {k_3}^+}- \frac{\mathbf{k}_4^2}{2\, {k_4}^+} +i \epsilon \, .\label{ED_IV_A_approx1}\end{aligned}$$ For generic values of the 
transverse momenta, the expression is dominated by the terms with ${k_3}^+$ or ${k_4}^+$ in the denominator. Other terms are then relevant only in the regime where both $\mathbf{k}_3$ and $\mathbf{k}_4$ are parametrically small, where typically the term with ${k_2}^+$ in the denominator is the dominant one, unless $\mathbf{k}_2$ is also parametrically small. Hence, thanks to the $k^+$ ordering , one can drop $\mathbf{k}_2$, $\mathbf{k}_3$ and $\mathbf{k}_4$ from the second and third terms on the right-hand side of the equation, regardless of the relative size of the transverse momenta, and obtain $$\begin{aligned} ED_{IV}^{\textbf{A}} & \simeq & ED_{I}^{\textbf{A}} - k^-_2- k^-_3- k^-_4 +i \epsilon \, ,\label{ED_IV_A_approx2}\end{aligned}$$ under the assumption only. Therefore, when assuming both the $k^+$ ordering and the $k^-$ ordering , one has $$\left| ED_{IV}^{\textbf{A}} \right| \simeq k^-_3 + k^-_4 \gg \left| ED_{II}^{\textbf{A}} \right| \simeq k^-_2\, .$$ All in all, the sum of the diagrams **A** and **A’** simplifies to $$\begin{aligned} \textbf{A} + \textbf{A'}&\simeq & \textrm{vertices} \times \frac{1}{ED_{I}^{\textbf{A}}}\; \frac{1}{[- k^-_2 +i \epsilon]}\; \frac{1}{[- k^-_3 +i \epsilon]}\; \frac{1}{[- k^-_4 +i \epsilon]}\label{A+A_prime_approx} \, ,\end{aligned}$$ in the regime where all the conditions and are satisfied. As usual, those conditions also allow one to neglect the non-eikonal terms in the vertices, as well as the transverse and longitudinal recoil effects. Hence, the conditions and are sufficient for the sum of the diagrams **A** and **A’** to exhibit the factorized form which will give, upon Fourier transform to mixed space and squaring of the wave-function, one large soft log for each gluon emission. The important point is that the relative size of $k^+_3$ and $k^+_4$ and of $k^-_3$ and $k^-_4$ becomes irrelevant to the appearance of high-energy LL’s, when considering the sum **A**$+$**A’**.
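The combination of the diagrams **A** and **A’** rests only on the algebraic identity between energy denominators quoted above; it can be verified symbolically. A minimal sketch (the symbols stand for the various minus components and for $Q^2/2q^+$; the $+i\epsilon$ prescriptions are dropped since they play no role in the algebra):

```python
# Symbolic check of the energy-denominator identity behind the sum A + A',
# with Q2q standing for Q^2/(2 q^+) and kXm for the minus components.
import sympy as sp

Q2q, k0m, k0pm, k1pm, k1ppm, k2m, k3m, k4m = sp.symbols(
    "Q2q k0m k0pm k1pm k1ppm k2m k3m k4m", positive=True)

ED_II    = -Q2q - k0m  - k1pm  - k2m
ED_IIIA  = -Q2q - k0m  - k1ppm - k2m - k3m
ED_IIIAp = -Q2q - k0pm - k1pm  - k2m - k4m
ED_IV    = -Q2q - k0pm - k1ppm - k2m - k3m - k4m

# The identity ED_III^A + ED_III^A' = ED_II + ED_IV ...
assert sp.simplify(ED_IIIA + ED_IIIAp - ED_II - ED_IV) == 0

# ... guarantees the rewriting of the product of energy denominators
# used to combine the diagrams A and A':
lhs = (1 / ED_II) * (1 / ED_IIIA + 1 / ED_IIIAp) * (1 / ED_IV)
rhs = (1 / ED_II + 1 / ED_IV) * (1 / ED_IIIA) * (1 / ED_IIIAp)
assert sp.simplify(lhs - rhs) == 0
```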
Obviously, this observation generalizes to a large class of graphs contributing to the $\gamma\rightarrow q + \bar{q} + n\, g$ sector of the photon wave-function at tree level, namely the graphs with all the $n$ gluons emitted directly from the quark or the anti-quark, without any gluon splitting. The configurations contributing to LL’s for those graphs are the ones where each gluon has a smaller $k^+$ and a larger $k^-$ than the gluon previously emitted by the same parent (quark or anti-quark). That previous gluon plays the role of the other leg of the emitting dipole. On the other hand, there is no sensitivity to possible gluon emission on the other side of the cascade, when taking the sum over graphs with identical color and momentum flow, but different $x^+$ ordering of gluon emission vertices.

Diagrams with gluon splittings
------------------------------

1to 10cm[ ![\[Fig:qqbarggg\_B\] Examples of light-front perturbation theory diagrams with gluon splittings contributing to the $q\bar{q}ggg$ Fock component of a photon wave-function. These diagrams can be interpreted in the standard way, or as color-ordered contributions to the photon wave-function, with the color factor $[t^{a_4}t^{a_2}t^{a_3}]_{\alpha_0 \alpha_1}$.](qqbarggg_B.eps "fig:") ]{} 2to 10cm[ ![\[Fig:qqbarggg\_B\] Examples of light-front perturbation theory diagrams with gluon splittings contributing to the $q\bar{q}ggg$ Fock component of a photon wave-function. These diagrams can be interpreted in the standard way, or as color-ordered contributions to the photon wave-function, with the color factor $[t^{a_4}t^{a_2}t^{a_3}]_{\alpha_0 \alpha_1}$.](qqbarggg_B_prime.eps "fig:") ]{}

Diagrams which include $3$-gluon vertices, like the diagrams **B** and **B’** shown in Fig. \[Fig:qqbarggg\_B\], bring an extra complication. Indeed, we would like to consider each gluon as emitted from a color-singlet dipole.
However, using the usual expression for the $3$-gluon vertices, there is an ambiguity in associating the gluon emission to one or the other of the dipoles delimited by the parent gluon. In the original construction of the dipole model [@Mueller:1993rr], that ambiguity disappears when considering the squared wave-function and taking the large $N_c$ limit. By contrast, one can also resolve that ambiguity before taking the square, by splitting the wave-functions into their color-ordered components. This is the analog for the wave-functions of the color-ordered amplitudes [@Berends:1987cv; @Mangano:1987xk; @Mangano:1988kk] (for a recent pedagogical introduction, see ref. [@Dixon:2013uaa]). The idea is the following: using the commutation relation $$[t^a,t^b]=i f^{abc} t^c\, ,$$ one can rewrite the color factor of the diagram **B** as $$(-i) f^{a_4 a_2 b}\, (-i) f^{a_3 b c} [t^c]_{\alpha_0 \alpha_1} =-[t^{a_4}t^{a_2}t^{a_3}]_{\alpha_0 \alpha_1} -[t^{a_3}t^{a_2}t^{a_4}]_{\alpha_0 \alpha_1} +[t^{a_3}t^{a_4}t^{a_2}]_{\alpha_0 \alpha_1}+[t^{a_2}t^{a_4}t^{a_3}]_{\alpha_0 \alpha_1} \label{color_fact_B}$$ and the one of the diagram **B’** as $$(-i) f^{a_3 a_2 b}\, (-i) f^{a_4 b c} [t^c]_{\alpha_0 \alpha_1} =-[t^{a_4}t^{a_2}t^{a_3}]_{\alpha_0 \alpha_1} -[t^{a_3}t^{a_2}t^{a_4}]_{\alpha_0 \alpha_1} +[t^{a_4}t^{a_3}t^{a_2}]_{\alpha_0 \alpha_1}+[t^{a_2}t^{a_3}t^{a_4}]_{\alpha_0 \alpha_1} \, .\label{color_fact_B_prime}$$ The first term in the expressions and , $[t^{a_4}t^{a_2}t^{a_3}]_{\alpha_0 \alpha_1}$, is the same as the color factor of the diagrams **A** and **A’**. Only that term in the diagrams **B** and **B’** corresponds to the emission of the gluon $4$ by the dipole $02$ and of the gluon $3$ by the dipole $21$.
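The color-factor decomposition above can be verified numerically with explicit Gell-Mann matrices. A minimal sketch (adjoint indices $a=1,\dots,8$ are mapped to $0,\dots,7$; the normalization $\textrm{Tr}(t^a t^b)=\delta^{ab}/2$ and the random index choices are our own conventions for the check):

```python
import numpy as np

# Fundamental SU(3) generators t^a = lambda^a / 2 (Gell-Mann matrices).
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1; l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7][0, 0] = l[7][1, 1] = 1 / np.sqrt(3); l[7][2, 2] = -2 / np.sqrt(3)
t = l / 2

# Structure constants from [t^a, t^b] = i f^{abc} t^c, Tr(t^a t^b) = delta/2.
f = np.zeros((8, 8, 8), dtype=complex)
for a in range(8):
    for b in range(8):
        comm = t[a] @ t[b] - t[b] @ t[a]
        for c in range(8):
            f[a, b, c] = -2j * np.trace(comm @ t[c])

# Check the color-factor decomposition of diagram B, eq. (color_fact_B),
# for a few random index choices: (-i f)(-i f) = -f f since f is real.
rng = np.random.default_rng(0)
for _ in range(5):
    a2, a3, a4 = rng.integers(0, 8, size=3)
    lhs = -np.einsum('b,bc,cij->ij', f[a4, a2], f[a3], t)
    rhs = (- t[a4] @ t[a2] @ t[a3] - t[a3] @ t[a2] @ t[a4]
           + t[a3] @ t[a4] @ t[a2] + t[a2] @ t[a4] @ t[a3])
    assert np.allclose(lhs, rhs)
```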
Instead, the second term in the expressions and is associated with the emission of the gluon $4$ by the dipole $21$ and of the gluon $3$ by the dipole $02$, and the other terms in the expressions and , differing between the diagrams **B** and **B’**, correspond to the emission of both the gluons $3$ and $4$ on the same side. Hence, only the term $[t^{a_4}t^{a_2}t^{a_3}]_{\alpha_0 \alpha_1}$ is relevant for our purposes, whereas the other ones, contributing to distinct dipole cascades, can be discarded. Concerning the kinematics, the only difference in the momentum flow of the diagrams **B** and **B’** is the momentum ${k_{2}'}$, which is constrained to take different values. The energy denominators $I$, $II$ and $IV$ are the same for the diagrams **B** and **B’**, whereas $$\begin{aligned} & & ED_{III}^{\textbf{B}}= -\frac{Q^2}{2 q^+}- k^-_0 - {k_{1}'}^- - \frac{(\mathbf{k}_2\!-\!\mathbf{k}_3)^2}{2\, ({k_2}^+\!-\!{k_3}^+)} - k^-_3 +i \epsilon \label{ED_III_B}\\ & & ED_{III}^{\textbf{B'}}= -\frac{Q^2}{2 q^+}- {k_{0}}^- - {k_{1}'}^- - \frac{(\mathbf{k}_2\!-\!\mathbf{k}_4)^2}{2\, ({k_2}^+\!-\!{k_4}^+)} - k^-_4 +i \epsilon\, . 
\label{ED_III_B_prime}\end{aligned}$$ Under the conditions and , the energy denominators simplify to $$\begin{aligned} ED_{II}^{\textbf{B}} & \simeq & - k^-_2 +i \epsilon \\ ED_{III}^{\textbf{B}} & \simeq & - k^-_3 +i \epsilon \\ ED_{III}^{\textbf{B'}} & \simeq & - k^-_4 +i \epsilon\\ ED_{IV}^{\textbf{B}} & \simeq & - k^-_3 - k^-_4 +i \epsilon \, ,\end{aligned}$$ so that the diagrams **B** and **B’** reduce to $$\begin{aligned} \textbf{B} &\simeq & \textrm{vertices} \times \frac{1}{ED_{I}^{\textbf{B}}}\; \frac{1}{[- k^-_2 +i \epsilon]}\; \frac{1}{[- k^-_3 +i \epsilon]}\; \frac{1}{[- k^-_3- k^-_4 +i \epsilon]}\label{B_approx}\\ \textbf{B'} &\simeq & \textrm{vertices} \times \frac{1}{ED_{I}^{\textbf{B}}}\; \frac{1}{[- k^-_2 +i \epsilon]}\; \frac{1}{[- k^-_4 +i \epsilon]}\; \frac{1}{[- k^-_3- k^-_4 +i \epsilon]}\label{B_prime_approx} \, .\end{aligned}$$ Neglecting longitudinal and transverse recoil effects and non-eikonal contributions due to the assumptions and , the vertices take identical values in the diagrams **B** and **B’**, up to the different color factors and already discussed. Hence, for the color-ordered versions of **B** and **B’** associated with the color-factor $[t^{a_4}t^{a_2}t^{a_3}]_{\alpha_0 \alpha_1}$, one has $$\begin{aligned} \bigg(\textbf{B}+\textbf{B'}\bigg)_{\textrm{color-ordered}} &\simeq & \textrm{vertices} \times \frac{1}{ED_{I}^{\textbf{B}}}\; \frac{1}{[- k^-_2 +i \epsilon]}\; \frac{1}{[- k^-_3 +i \epsilon]}\; \frac{1}{[- k^-_4 +i \epsilon]}\label{B+B_prime_approx} \, .\end{aligned}$$ This approximate expression for **B** and **B’**, leading eventually to one large log for each of the $3$ gluons, is valid under the assumptions and , regardless of the relative size of $k^+_3$ and $k^+_4$ and of $k^-_3$ and $k^-_4$. It is thus clear that, when considering properly color-ordered diagrams, the presence of gluon splitting vertices does not affect our discussion.
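The reduction of the color-ordered sum **B**$+$**B’** to the fully factorized form is nothing but an eikonal partial-fraction identity, which can be checked in one line (a trivial sketch; the $+i\epsilon$ prescriptions are dropped):

```python
# Partial-fraction identity behind the color-ordered sum B + B':
# 1/[(-k3)(-k3-k4)] + 1/[(-k4)(-k3-k4)] = 1/[(-k3)(-k4)].
import sympy as sp

km3, km4 = sp.symbols("km3 km4", positive=True)
B_tail  = 1 / (-km3) * 1 / (-km3 - km4)  # last two denominators of B
Bp_tail = 1 / (-km4) * 1 / (-km3 - km4)  # last two denominators of B'
assert sp.simplify(B_tail + Bp_tail - 1 / (-km3) * 1 / (-km4)) == 0
```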
Therefore, in a generic dipole cascade, the configurations contributing to the high-energy LL’s are the ones where each gluon has a smaller $k^+$ and a larger $k^-$ than both partons delimiting the color-singlet dipole emitting that gluon, independently of what happens in the rest of the cascade. The kinematical constraint is then local instead of global, by contrast to the statement made in ref. [@Motyka:2009gi].

Basics of Laplace transform\[App:Laplace\]
==========================================

Consider a function ${ F}(Y)$ defined for $Y\in [0,+\infty)$. Its Laplace transform is defined by $$\hat{F}(\om)= \int_0^{+\infty} \dd Y\; e^{-\om\, Y}\, {F}(Y) \label{Laplace}\, .$$ More precisely, the formula is used when $\textrm{Re}(\om)$ is large enough to make the integral convergent, and then $\hat{F}(\om)$ is obtained in the rest of the complex plane by analytic continuation. The Laplace transform is invertible, and the inverse formula is $${F}(Y)= \int_{\om_0\!-\!i\infty}^{\om_0\!+\!i\infty} \frac{\dd \om}{2\pi i}\; e^{\om\, Y}\; \hat{F}(\om)\label{Laplace_inv}\, .$$ In , $\om_0$ is a real number large enough so that the integration path passes to the right of all singularities of $\hat{F}(\om)$. By moving the integration contour to the left in , one progressively picks up contributions from the singularities of $\hat{F}(\om)$, for example $r_s\: e^{\om_s\, Y}$ if $\hat{F}(\om)$ has a simple pole at $\om=\om_s$ with residue $r_s$. Hence, it is clear that the large $Y$ asymptotic behavior of ${F}(Y)$ is determined by the rightmost singularity of $\hat{F}(\om)$. The properties of the Laplace transform with respect to differentiation and integration are also needed in this paper. Let $f$ be the derivative of ${F}$: ${ f}(Y)=\d_Y {F}(Y)$.
Then, its Laplace transform is $$\hat{f}(\om)= \int_0^{+\infty} \dd Y\; e^{-\om\, Y}\, \d_Y {F}(Y) = \om\, \hat{F}(\om) - {F}(0) \label{Laplace_deriv}\, .$$ Let ${G}(Y)$ be the primitive $${G}(Y)= \int_0^{Y} \dd y\; {F}(y) \label{primitive}\, .$$ Then, its Laplace transform is $$\hat{G}(\om)= \int_0^{+\infty} \dd y\; {F}(y) \int_y^{+\infty} \dd Y\; e^{-\om\, Y} = \frac{\hat{F}(\om)}{\om}\label{Laplace_int}\, .$$ Basics of Mellin representation\[App:Mellin\] ============================================= Consider a dimensionless function ${\mathbf F}_{ij}$, which depends explicitly on the distance $x_{ij}$ between the points ${\mathbf x}_{i}$ and ${\mathbf x}_{j}$ in the transverse plane, but also implicitly on a reference scale $Q_0$. Then, one can use the Mellin representation $${\mathbf F}_{ij}= \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\; \left(\frac{x_{ij}^2 \, Q_0^2}{4}\right)^\g \; {\cal F}(\g)\label{Mellin_rep}\, .$$ Since $x_{ij}$ can be both larger or smaller than $2/Q_0$, or equivalently $\log(x_{ij}^2 \, Q_0^2/4)$ can be both positive or negative, there is no inverse formula for which would be the analog of , and generically ${\cal F}(\g)$ has singularities on both sides of the integration path. When $x_{ij}\rightarrow +\infty$, it is convenient to move the integration path to the left in order to make the integrand small. By doing so, one picks progressively contributions from the singularities of ${\cal F}(\g)$ located on the left of the initial integration path. Hence, by analogy with the Laplace transform case, the behavior of ${\mathbf F}_{ij}$ in the limit $x_{ij}\rightarrow +\infty$ is determined by the first singularity on the left of the line $\textrm{Re}(\g)=1/2$. By symmetry, the behavior of ${\mathbf F}_{ij}$ in the limit $x_{ij}\rightarrow 0$ is determined by the first singularity of ${\cal F}(\g)$ on the right of the line $\textrm{Re}(\g)=1/2$. 
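As a concrete illustration of the asymptotic statements of the two preceding appendices (our own worked example, not part of the original derivation), consider transforms with a single simple pole:

```latex
% Laplace case: a simple pole of \hat{F} at \om = \om_s with residue r_s.
% Shifting the inversion contour to the left across the pole picks up its
% residue, giving the large-Y behavior quoted in the text:
\hat{F}(\om) = \frac{r_s}{\om - \om_s}
\quad \Longrightarrow \quad
{F}(Y) = \int_{\om_0-i\infty}^{\om_0+i\infty} \frac{\dd \om}{2\pi i}\;
         e^{\om\, Y}\, \frac{r_s}{\om - \om_s}
       = r_s\, e^{\om_s\, Y}
\qquad \big(\om_0 > \textrm{Re}(\om_s)\big)\, .
% Mellin case: a simple pole of {\cal F} at \g = \g_l with Re(\g_l) < 1/2
% dominates the large-distance behavior when the contour is moved left:
{\cal F}(\g) \sim \frac{r_l}{\g - \g_l}
\quad \Longrightarrow \quad
{\mathbf F}_{ij} \simeq r_l \left(\frac{x_{ij}^2\, Q_0^2}{4}\right)^{\g_l}
\quad \textrm{for} \quad x_{ij} \to +\infty \, .
```

In both cases the sign of the picked-up term is positive, since the region swept between the original and the shifted contour is traversed counterclockwise.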
Mellin space analysis of the NLO DIS impact factors with the operator evaluated at $Y^+_f$ or at $0$\[App:Yfplus\] ================================================================================================================== In the section \[sec:Mellin\_NLO\_IF\], the behavior of the NLO DIS impact factors and of the counter-term for the naive LL evolution in the dilute regime in the domain ${x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2$ has been studied in Mellin space, evaluating the expectation value $\left\langle {\mathbf S}_{012} \right\rangle$ at the scale $Y_2^+$. However, it might also be natural to take that expectation value at the scale $Y^+_f$. In the discussion of the naive resummation of high-energy LL’s in the section \[sec:std\_subtr\_LL\], the un-evolved expectation value $\left\langle {\mathbf S}_{012} \right\rangle_0$ has also been considered. Those two other prescriptions deserve further study. It is easy to deal at once with a whole range of prescriptions for $\left\langle {\mathbf S}_{012} \right\rangle$ including $Y^+_f$ and $0$, *i.e.* choosing a generic positive or zero constant $Y_c^+$, independent of both $Y_2^+$ and $z_2$. In that case, the operator expectation value factors out of the integration in $Y_2^+$ or $z_2$. It is then not helpful to take the Laplace transform from $Y^+$ space to $\om$ space, which would transform an ordinary product into a convolution product. Revisiting the calculations of the section \[sec:Mellin\_NLO\_IF\], but in $(\g,Y^+)$ space and with constant $Y_c^+$, one obtains the following. 
The contribution to the counter-term from the domain ${x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2$ writes in the dilute regime $$\begin{aligned} \delta\!\left\langle {\mathbf S}_{01} \right\rangle_{Y_f^+}\bigg|_{\textrm{LL, real}}^{{x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2} &=& \bar{\alpha} \int_{0}^{Y_f^+} \!\!\!\!\dd Y_2^+ \int_{{x}_{10}^2\ll {x}_{20}^2} \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \textbf{K}_{012}\: \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y_c^+}\Big] \nonumber\\ &\simeq & \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; {\cal N}(\g,Y_c^+)\;\; \frac{\bar{\alpha}\: Y_f^+}{(1\!-\! \g)}\, . \label{Mellin_naive_subtract_Yc}\end{aligned}$$ This is indeed the direct analog of the result , given the correspondence $Y_f^+ \leftrightarrow 1/\om$. The expression shows the same type of DLL behavior as , which is not the correct collinear DLL due to the presence of $Y_f^+$ instead of $Y_f^-$. The contribution from the region $z_f\, {x}_{10}^2\gg z_2\, {x}_{20}^2 \simeq z_2\, {x}_{21}^2$ to the approximate Mellin representation of the real NLO correction (still restricted to $z_2 \ll z_f$ and ${x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2$), analog to , writes $$\begin{aligned} & &\left.\bar{\alpha} \int_{z_{\min}}^{z_f}\frac{\textrm{d}z_2}{z_2}\; \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \frac{\mathcal{I}_{T,L}^{NLO}}{\mathcal{I}_{T,L}^{LO}} \; \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y_c^+}\Big]\right|_{{x}_{10}^2\ll {x}_{20}^2 \simeq {x}_{21}^2 \textrm{ and }z_f\, {x}_{10}^2\gg z_2\, {x}_{20}^2}\nonumber\\ & & \quad \simeq \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\;\;{\cal N}(\g,Y_c^+) \; \frac{\bar{\alpha}}{(1\!-\! 
\g)} \int_{0}^{Y_f^+}\!\!\textrm{d}Y_2^+\; \left(1\!-\!e^{-(1\!-\!\g)(Y^+_f\!-\!Y^+_2)} \right)\nonumber\\ & & \quad \simeq \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\;\;{\cal N}(\g,Y_c^+) \; \frac{\bar{\alpha}}{(1\!-\! \g)^2} \; \left[e^{-(1\!-\!\g)Y^+_f}\!-\!1 + (1\!-\!\g)Y^+_f\right]\, . \label{Mellin_NLO real_kc_reg_Yc}\end{aligned}$$ In the longitudinal photon case, the contribution from the region $z_f\, {x}_{10}^2\ll z_2\, {x}_{20}^2 \simeq z_2\, {x}_{21}^2$, analog to , is now $$\begin{aligned} & &\left.\bar{\alpha} \int_{z_{\min}}^{z_f}\frac{\textrm{d}z_2}{z_2}\; \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \frac{\mathcal{I}_{L}^{NLO}}{\mathcal{I}_{L}^{LO}} \; \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y_c^+}\Big]\right|_{z_f\, {x}_{01}^2\ll z_2\, {x}_{02}^2\simeq z_2\, {x}_{21}^2 }\nonumber\\ & & \quad \simeq \bar{\alpha} \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; {\cal N}(\g,Y_c^+)\; f_0(\g\!-\!2\, ,z_f {x}_{01}^2 Q^2)\; \int_{0}^{Y_f^+}\!\!\textrm{d}Y_2^+\; e^{-(1\!-\!\g)(Y^+_f\!-\!Y^+_2)}\nonumber\\ & &\quad \simeq \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\;\;{\cal N}(\g,Y_c^+) \; f_0(\g\!-\!2\, ,z_f {x}_{01}^2 Q^2)\; \frac{\bar{\alpha}}{(1\!-\! 
\g)} \; \left[1\!-\!e^{-(1\!-\!\g)Y^+_f}\right] \, .\label{Mellin_NLO real_L_non-kc_Yc}\end{aligned}$$ In the transverse photon case, the contribution from the intermediate region $z_f {x}_{10}^2/z_2 \ll {x}_{20}^2 \simeq {x}_{21}^2 \ll z_f^2 {x}_{10}^2/z_2^2$, analog to , becomes $$\begin{aligned} & &\left.\bar{\alpha} \int_{z_{\min}}^{z_f}\frac{\textrm{d}z_2}{z_2}\; \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \frac{\mathcal{I}_{T}^{NLO}}{\mathcal{I}_{T}^{LO}} \; \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y_c^+}\Big]\right|_{z_f {x}_{10}^2/z_2 \ll {x}_{20}^2 \simeq {x}_{21}^2 \ll z_f^2 {x}_{10}^2/z_2^2}\nonumber\\ & & \quad \simeq \bar{\alpha} \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; {\cal N}(\g,Y_c^+) \; \int_{0}^{Y_f^+}\!\!\textrm{d}Y_2^+\; e^{-(1\!-\!\g)(Y^+_f\!-\!Y^+_2)} \int_{1}^{e^{(Y^+_f\!-\!Y^+_2)}} \!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\textrm{d}u\;\;\; u^{\g-3}\; \frac{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2} \sqrt{u}\right)}{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2}\right)} \nonumber\\ & & \quad \simeq \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g {\cal N}(\g,Y_c^+) \; \frac{\bar{\alpha}}{(1\!-\! 
\g)} \; \int_{1}^{e^{Y^+_f}}\!\!\!\!\!\!\!\!\textrm{d}u\; u^{\g-3} \left[u^{-(1\!-\!\g)}\!-\!e^{-(1\!-\!\g)Y^+_f}\right] \frac{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2} \sqrt{u}\right)}{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2}\right)} \, .\label{Mellin_NLO real_T_non-kc_recoilless_Yc}\end{aligned}$$ Finally, the contribution from the extreme region $z_f^2 {x}_{10}^2 \ll z_2^2 {x}_{20}^2 \simeq z_2^2 {x}_{21}^2$ in the transverse photon case, analog to , writes $$\begin{aligned} & &\left.\bar{\alpha} \int_{z_{\min}}^{z_f}\frac{\textrm{d}z_2}{z_2}\; \int \frac{\textrm{d}^2\mathbf{x}_{2}}{2\pi}\; \frac{\mathcal{I}_{T}^{NLO}}{\mathcal{I}_{T}^{LO}} \; \frac{2\, C_F}{N_c}\: \Big[1- \left\langle {\mathbf S}_{012} \right\rangle_{Y_c^+}\Big]\right|_{z_f^2\, {x}_{01}^2\ll z_2^2\, {x}_{02}^2\simeq z_2^2\, {x}_{21}^2}\nonumber\\ & & \quad \simeq \frac{\bar{\alpha}}{2} \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; {\cal N}(\g,Y_c^+)\; \int_{0}^{Y_f^+}\!\!\textrm{d}Y_2^+\; e^{-(1\!-\!\g)(Y^+_f\!-\!Y^+_2)} \int^{+\infty}_{\exp{(Y^+_f\!-\!Y^+_2)}} \!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\textrm{d}u\;\;\; u^{\g-1}\; \frac{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2} \sqrt{u}\right)}{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2}\right)} \nonumber\\ & & \quad \simeq \int_{1/2-i\infty}^{1/2+i\infty} \frac{\dd \g}{2\pi i}\, \left(\frac{x_{01}^2 \, Q_0^2}{4}\right)^\g\; {\cal N}(\g,Y_c^+)\; \Bigg\{ \frac{\bar{\alpha}}{2(1\!-\! \g)} \; \int_{1}^{e^{Y^+_f}}\!\!\!\!\!\!\!\!\textrm{d}u\; u^{\g-1} \Big[1\!-\!u^{\g\!-\!1}\Big] \frac{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2} \sqrt{u}\right)}{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2}\right)}\nonumber\\ & & \qquad \qquad +\frac{\bar{\alpha}}{2(1\!-\! 
\g)}\; \left[1\!-\!e^{-(1\!-\!\g)Y^+_f}\right] \; \int^{+\infty}_{e^{Y^+_f}}\!\!\!\textrm{d}u\; u^{\g-1} \frac{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2} \sqrt{u}\right)}{\textrm{K}_{1}^2\!\left(Q\sqrt{z_f\, {x}_{01}^2}\right)} \Bigg\} \, .\label{Mellin_NLO real_T_deep_recoil_Yc}\end{aligned}$$ For any finite (possibly large) value of $Y_f^+$, all the potential singularities in $\g$ space appearing in the various contributions , , and to the real NLO corrections to the DIS structure functions, all located at $\g=1$, are actually removable. Hence, it seems that the real NLO corrections do not contain any DGLAP-like collinear logs if $\left\langle {\mathbf S}_{012} \right\rangle$ is evaluated at a scale $Y_{c}^+$ independent of the integration variables $Y_2^+$ and $z_2$. This is quite unexpected and worrisome. However, the standard Regge limit is equivalent here to the $Y_f^+\rightarrow +\infty$ limit while keeping $\g$ fixed, in the range $0<\textrm{Re}(\g)<1$. That Regge limit amounts to making the replacements $e^{-(1\!-\!\g)Y^+_f}\rightarrow 0$ and $e^{Y^+_f}\rightarrow +\infty$ in the expressions , , and . Then, many of the removable singularities at $\g=1$ are transformed into poles, with a pattern reminiscent of the case of $\left\langle {\mathbf S}_{012} \right\rangle$ evaluated at $Y_2^+$. - In the contribution from the domain $z_f\, {x}_{10}^2\gg z_2\, {x}_{20}^2 \simeq z_2\, {x}_{21}^2$ where the factorized approximation of $\mathcal{I}_{T,L}^{NLO}$ is valid, one obtains both single-pole and double-pole terms, which can be interpreted together as $$\bar{\alpha}\Bigg[Y_f^+-\frac{1}{(1\!-\! \g)}\Bigg] \frac{1}{(1\!-\! \g)}\quad \leftrightarrow \quad \bar{\alpha}\, \bigg[Y_f^+ - \log\left(\frac{4}{x_{01}^2 \, Q_0^2}\right)\bigg]\, \log\left(\frac{4}{x_{01}^2 \, Q_0^2}\right)\simeq\bar{\alpha}\, Y_f^-\, \log\left(\frac{4}{x_{01}^2 \, Q_0^2}\right) \, ,$$ which is the correct collinear DLL contribution.
- In the transverse photon case, there is still no true Mellin space singularity, and thus no large logs coming from the domain $z_f^2 {x}_{10}^2 \ll z_2^2 {x}_{20}^2 \simeq z_2^2 {x}_{21}^2$ where transverse recoil effects are important. - The other contributions and provide a single pole at $\g=1$ with a coefficient independent of $Y_f^+$. These correspond to single collinear logs, with no high-energy logs. Hence, if one first takes the standard Regge limit and then the collinear limit, one obtains the correct collinear DLL contributions from the real NLO corrections to DIS, as with the $Y_2^+$ scale choice. However, the absence of singularities in $\g$ at finite $Y_f^+$ means that if one takes the collinear limit first and then the high-energy limit, one cannot obtain collinear DLL contributions but just high-energy logs. The lack of commutation of the collinear and high-energy limits prevents us from obtaining a smooth interpolation between the BFKL/BK regime and the collinear DGLAP regime with the choices of scale $Y^+=0$ or $Y^+=Y_f^+$ for $\left\langle {\mathbf S}_{012} \right\rangle_{Y^+}$. Hence, the choice $Y^+=Y_2^+$ should be made instead in practice. [^1]: Hereafter, the frame is chosen such that the projectile (virtual photon in the DIS case) is right-moving and the target left-moving. [^2]: Remember that from the point of view of the target, $x^-$ plays the role of time, not $x^+$. [^3]: In the case of a nuclear target, following the standard conventions, $P^-$ is instead the average momentum per nucleon of the target, and the momentum of the partons is still denoted $x_0\, P^-$. [^4]: That model has been introduced in ref. [@Beuf:2011xd], up to the parameter $x_0$ which has been added here for completeness. [^5]: In the case of a photo-production reaction, the initial Fock state for the real photon wave-function is just the one-photon Fock state, with momentum $k^+_{init}=q^+$, $\mathbf{k}_{init}=\mathbf{q}=0$, and thus $k^-_{init}=q^-=0$.
In the case of deep inelastic scattering in the one-photon exchange approximation, the photon is always strictly on mass shell in light-front perturbation theory. However, the correct initial state is not the one-photon Fock state but the one-lepton Fock state. Nevertheless, as explained in appendix A.3 of Ref.[@Beuf:2011xd], one can effectively start from a virtual photon initial state with $k^-_{init}=-Q^2/(2\, q^+)$, which reproduces the contribution of both the initial lepton and the final scattered lepton to each of the energy denominators. $Q^2$ is defined from the initial and final lepton 4-momenta ${k_l}^{\mu}$ and ${k_{l'}}^{\mu}$ as $Q^2=-({k_l}^{\mu}\!-\!{k_{l'}}^{\mu})({k_l}_{\mu}\!-\!{k_{l'}}_{\mu})$. [^6]: Notice that it is the momentum of the gluon just at its emission, like ${\mathbf{k}_2}$, which appears in the approximation , not the momentum at the end of the initial-state parton cascade, like ${\mathbf{k}_2'}$, which can be quite different, see Fig.\[Fig:ga2qqbargg\]. [^7]: In order to write the formula (and similarly ), one makes the assumption that $\left\langle {\textbf N}_{ij} \right\rangle_{Y^+}$ depends on the distance $x_{ij}$, but not on the points $\mathbf{x}_i$ and $\mathbf{x}_j$ independently. It is possible to relax that assumption by including components of higher conformal spin [@Lipatov:1985uk] in the Mellin representation . However, such contributions with strictly positive conformal spin do not grow with energy, and thus are not interesting in the context of gluon saturation. Moreover, for the components of conformal spin $n\neq 0$, the higher order corrections to the BFKL kernel [@Kotikov:2000pm] are smooth and well-behaved, so that no collinear resummation is needed for them. [^8]: For simplicity, only the mostly gluonic eigenstate of the DGLAP evolution in the singlet sector is considered here, since that eigenstate is dominant in the DLL regime. [^9]: The small discrepancy in the original calculation of ref.
[@Balitsky:2008zz] was due to a mistake in the calculation of some integral, which has been corrected since then, see ref. [@Balitsky:2009xg]. [^10]: In Ref.[@Beuf:2011xd], only the real NLO corrections have been calculated explicitly, whereas virtual NLO corrections have been inferred by using a unitarity argument. However, there was a flaw in the particular implementation of the unitarity requirement, so that the expression given in Ref.[@Beuf:2011xd] is not correct and the virtual NLO corrections need to be calculated explicitly. Here, those yet unknown virtual corrections are indicated by the ${\cal O}(\abar)$ term. This issue is being further studied [@BeufToAppear], but does not affect the physics discussed in the present paper, which is driven by the NLO real corrections. The virtual ${\cal O}(\abar)$ term should be UV divergent, in order to cancel the divergences of the NLO real contribution for $\mathbf{x}_{2}\rightarrow \mathbf{x}_{0}$ and $\mathbf{x}_{2}\rightarrow \mathbf{x}_{1}$. The ${\cal O}(\abar)$ term should also have a soft log divergence regulated by the cut-off $k^+_{\min}$. [^11]: This observation seems to suggest that in the real-photon limit, there will be sizable contributions from $q\bar{q}$ dipoles with arbitrarily large transverse separation, among other Fock states. But of course, such contributions should be suppressed at the non-perturbative level by confinement effects. Hence, this just confirms that one should not trust perturbative calculations of the photon wave-function in the case of a real or quasi-real photon. [^12]: The factors $z_1$ or $1\!-\!z_1$, typically not too small, have been dropped for simplicity. [^13]: I thank Heribert Weigert for an enlightening discussion on a closely related problem. [^14]: In the equation , the integration variable has been relabeled from $\mathbf{x}_{2}$ to $\mathbf{x}_{v}$ in order to avoid confusions in later stages of the calculation, in particular in the equation . 
[^15]: If one decides to use the kinematically improved evolution with exact probability conservation , one can still use the counter-term in order to remove LL’s from NLO corrections to inclusive observables. Indeed, the two improved evolution equations and differ only by terms of order ${\cal O}(\abar^2)$, which would matter when removing LL’s from NNLO corrections instead. [^16]: The different sign is trivially due to the fact that the equation is written as an equation for $\left\langle {\mathbf N}_{01} \right\rangle_{Y^+}$ whereas the counter-term applies to the equation for $\left\langle {\mathbf S}_{01} \right\rangle_{Y^+}$. [^17]: It is clear that the lack of kinematical constraint does produce at NLO+LL accuracy large corrections with the systematics observed in ref. [@Stasto:2013cha]. However, it is not excluded that the large and negative correction observed in ref. [@Stasto:2013cha] is the cumulated effect of the lack of kinematical constraint plus another yet unknown problem. In order to clarify that issue, one should study the LL’s in the NLO corrections to single inclusive hadron production [@Chirilli:2011km; @Chirilli:2012jd], with the method of section \[sec:NLO\_IF\_analysis\]. The same kinematical constraint should occur in a quite different way for that observable than for the dipole cascades relevant for DIS analyzed in the present paper, due to the crossing of Wilson lines from the complex conjugate amplitude into the amplitude. However, it was demonstrated in ref. [@Mueller:2012bn] that this crossing does not modify the evolution equation at NLL accuracy. Therefore, the same kcBK equation has to be valid both for DIS structure functions and single inclusive hadron production.
--- abstract: 'Quantitative analysis of commonalities and differences between recorded music performances is an increasingly common task in computational musicology. A typical scenario involves manual annotation of different recordings of the same piece along the time dimension, for comparative analysis of, e.g., the musical tempo, or for mapping other performance-related information between performances. This can be done by manually annotating one reference performance, and then automatically synchronizing other performances, using audio-to-audio alignment algorithms. In this paper we address several questions related to those tasks. First, we analyze different annotations of the same musical piece, quantifying timing deviations between the respective human annotators. A statistical evaluation of the marker time stamps will provide (a) an estimate of the expected timing precision of human annotations and (b) a ground truth for subsequent automatic alignment experiments. We then carry out a systematic evaluation of different audio features for audio-to-audio alignment, quantifying the degree of alignment accuracy that can be achieved, and relate this to the results from the annotation study.' bibliography: - 'ismir\_2019\_ARXIV\_literature.bib' title: A Study of Annotation and Alignment Accuracy for Performance Comparison in Complex Orchestral Music --- Introduction {#sec:introduction} ============ An increasingly common task in computational musicology – specifically: music performance analysis – consists in annotating different performances (recordings) of classical music pieces with structural information (e.g., beat positions) that defines a temporal grid, in order then to carry out some comparative performance analyses, which require time alignments between the performances. 
As manually annotating many recordings is a very time-consuming and tedious task, an obvious shortcut would be to manually annotate only one performance, and then use automatic audio-to-audio matching algorithms to align additional recordings to it, and thus also be able to automatically transfer structural annotations. The work presented here is part of a larger project on the analysis of orchestral music performance. In this musicological context, it is crucial to understand the level of precision one can expect of the empirical data collected. The present study attempts to answer two specific questions: (1) what is the precision / consistency we can expect from human time annotations in such complex music? and (2) can automatic alignment be precise enough to be used for transferring annotations between recordings, instead of tediously annotating each recording manually? We will approach this by collecting manual annotations from expert musicians, on a small set of carefully selected pieces and recordings (Section \[sec:annotation\]), analyzing these with statistical methods (Section \[sec:eval:annotations\]) – which will also supply us with a ground truth for the subsequent step –, then performing systematic experiments with different audio features and parameters for automatic audio-to-audio alignment (Section \[sec:alignment\]), quantifying the degree of alignment precision that can be achieved, and relating this to the results from the previous annotation study (Section \[sec:eval:alignments\]). Related Work ============ [@weiss_2016_measure_annotation] presented a case study of opera recordings that were annotated by five annotators, at the bar level. The authors used the mean values over the annotators as ground-truth values for the respective marker positions and the variance to identify sections possibly problematic to annotate, and offered a qualitative analysis of the musical material and sources for error and disagreement between annotators. 
[@grachten_alignment_structure] deals with the alignment of recordings with possibly different structure. Their contribution is relevant for our endeavor insofar as they evaluated different audio features and parameter ranges for an audio-to-audio alignment task on a data set of, among others, symphonies by Beethoven, which matches our data set very well. [@kirchhoff_2011_evaluation_features_alignment] evaluated audio features for the audio-to-audio alignment task using several different data sets. While many studies of alignment features do not use real human performances but artificial data, we only use ground-truth produced from human annotations (by averaging over multiple annotations per recording) of existing recordings for the evaluation of the alignment task. Furthermore, the results of our analysis of manual annotations (Step 1) will inform our interpretation of the automatic alignment experiments in Step 2 (by relating the observed alignment errors to the variability within the human annotations), leading to some insights useful for quantitative musicological studies. Annotation and Ground-truth {#sec:annotation} =========================== Annotation vs. Tapping ---------------------- ![image](Boulez_1969_std_along_piece.png){width="100.00000%"} Our primary goal is to map the musical time grid, as defined by the score, to one or more performances given as audio recordings. Due to expressive timing in performance, these mapped time points may be very different between different recordings. Following [@dixon_2005_match], we will call the occurrence of one or more (simultaneous) score notes a *score event*. In our case, we were interested in annotating regularly spaced score events, for instance, on the quarter note beats. Different methods can be employed for marking score events in a recording. One possibility is to tap along a recording on a keyboard (or other input device) and have the computer store the time-stamps.
We will refer to a sequence of time-stamps produced this way as a *tapping* in the following. Producing markers this way has been termed “reverse conducting” by the Mazurka project[^1]. This is to be distinguished from what we will call an *annotation* throughout this paper. In that case, markers are first placed by tapping along, or even by visually inspecting the audio waveform, and then iteratively corrected on (repeated) critical listening. In general, we assume corrected annotations to have smaller deviations from the “true” time-stamps than uncorrected tappings, especially around changes of tempo. Pieces, Annotators, and Annotation Process ------------------------------------------ The annotation work for this study was distributed over a pool of four annotators. Three are graduates of musicology and one is a student of the violin. The pieces considered are: Ludwig van Beethoven’s Symphony No. 9, 1st movement; Anton Bruckner’s Symphony No. 9, 3rd movement; and Anton Webern’s Symphony Op. 21, 2nd movement (see Table \[table:pieces\] for details). The first two are symphonic movements, played by a full classical/romantic-period orchestra. The third is an atonal piece where the second movement is of a “theme and variations” form, and requires a much smaller ensemble (clarinets, horns, harp, string section). While the first two pieces can be considered to be well known even to average listeners of classical music, the Webern piece was expected to be less familiar to the annotators. It is rhythmically quite complicated, with many changes in tempo and many sections ending in a fermata. We expected it to be a suitable challenge for the annotators as well as for the automatic alignment procedure. The quarter beat level was chosen as the (musically reasonable) temporal annotation granularity, in all three cases.
The annotators were asked to mark all score events (notes or pauses) at the quarter beat level, using the Sonic Visualiser software [@SonicVisualiser], and then to correct markers such that they coincide with the score events when listening to the playback with a “click” sound together with the recording of the piece. They also had to annotate “silent” beats (i.e. general pauses) or even single or multiple whole silent bars with the given granularity. It is clear that this may create large deviations between annotators at such points, as the way to choose the marker positions is not always obvious or even meaningfully possible in these situations. Each recording was annotated by three annotators, giving us a total of 21 complete manual annotations[^2].

  Composer      Conductor   Orch.   Year   Dur.    Med. SD
  ------------- ----------- ------- ------ ------- ---------
  Beethoven     Karajan     VPO     ’47    16:00   32
                Karajan     BPO     ’62    15:28   32
                Karajan     BPO     ’83    15:36   27
  A. Bruckner   Karajan     BPO     ’75    09:30   68
                Abbado      VPO     ’96    10:40   52
  A. Webern     Boulez      LSO     ’69    03:08   47
                Karajan     BPO     ’74    03:28   63

  : Annotated recordings. VPO = Vienna Philharmonic Orchestra, BPO = Berlin Philharmonic, LSO = London Symphony Orchestra. Each recording was annotated by three annotators. Med. SD is the median value of standard deviations of the annotations (in milliseconds, rounded to nearest integer), for details see Sec. \[sec:eval:annotations\]. []{data-label="table:recordings"}

Evaluation of Annotations {#sec:eval:annotations} ========================= For a statistical analysis of this rather small number of human annotations, we need to make some idealizing assumptions. We assume that there is one clear point in time that can be attributed to each respective score event, i.e. there are “true” time-stamps $\tau_n$, $n = 1, 2, \ldots$ for the score events we sought to annotate.
If each score event is annotated multiple times, the annotated markers $\theta_n$ will show random variation around their true time-stamps, with a certain variance $\sigma^2_n$. It seems reasonable to assume the respective markers to be realizations of random variables $\varTheta_n$, each following a normal distribution, i.e. $\varTheta_n \sim \mathcal{N}(\tau_n, \sigma_n^2)$. ![ Modeling annotations as random variables. Musical score and waveform of a performance. Hypothetical true time-stamps $\tau_n$. Annotation markers $\theta_n$. Bottom row: pdfs of random variables $\varTheta_n$, each of mean $\tau_n$. []{data-label="fig:theta_Theta"}](score_fig_ipe.pdf){width="\columnwidth"} Thus, for each event to be annotated we would expect (a large number of) annotations to exhibit a normal distribution around some mean $\tau_n$. This is schematically illustrated in Figure \[fig:theta\_Theta\]. However, for estimating the parameters of these distributions, rather large numbers of annotations would be required. [@dannenberg_2009_single_tap_error_distribution] has shown that with some additional assumptions, the distribution can be estimated from as little as two sequences of markers. We follow [@dannenberg_2009_single_tap_error_distribution] in the derivations below. If the variance $\sigma^2$ of the time stamps is assumed to be constant over time (across the whole piece or part to be annotated), subtracting two sequences $\theta_n^1$, $\theta_n^2$ of markers for the same score events, i.e. $$\label{eq:delta_diff} \Delta_n = \varTheta_n{^{1}} - \varTheta_n{^{2}},$$ yields the variable $\Delta_n \sim \mathcal{N}(0, 2 \sigma_\varTheta^2)$. Note that if the mean of $\Delta_n$ is not zero, we can force it to be by suitably offsetting either $\varTheta_n{^{1}}$ or $\varTheta_n{^{2}}$ by $\bar{\Delta}_n$ – since we assume both sequences to mark the same events with mean zero, a total mean deviation can be viewed as a systematic offset by either annotator. 
One could then use the differences $\delta_n = \theta_n^1 - \theta_n^2$ to estimate the variance $\sigma_\varTheta^2$ around the true time-stamps: $$\label{eq:estimate_var} \hat{\sigma}_\varTheta^2 = \frac{1}{2N}\sum_{n = 1}^N (\theta_n{^{1}} - \theta_n{^{2}})^2.$$ In [@dannenberg_2009_single_tap_error_distribution], two example analyses of tap sequences were presented that support these assumptions. We analyzed our annotation data according to these ideas. First, for each annotated recording, we calculated the time-stamp differences between each pair of annotations, according to Eq. , and tested the resulting distributions for normality, using the Shapiro-Wilk test. However, for all annotations created, none of the distributions is normal according to these tests. On visual inspection of the distributions of differences of annotation sequences $\delta_n$ using quantile-quantile plots (see Fig. \[fig:qqplot\_webern\]), the tails of the distributions turned out to be typically significantly heavier than for a normal distribution. ![ Webern Op21-2, Boulez. Quantile-quantile plot of the differences of a pair of annotation sequences for the whole piece. Solid red line fitted to first and third data quartiles, dashed lines show $\pm$95% confidence around this line. Non-normal data deviate strongly from the area enclosed by dashed lines. []{data-label="fig:qqplot_webern"}](Boulez_1969_qqplot_diffs_legend.png){width="1\columnwidth"} We suspect that this discrepancy with the results given in [@dannenberg_2009_single_tap_error_distribution] is most likely due to the higher complexity of our musical material, with large orchestras playing highly polyphonic and rhythmically complex music in varying tempi. It seems intuitively clear that for some sections, the deviations among annotated markers will be much smaller than in complex parts.
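Before turning to the local analysis, the global estimator above can be sketched in a few lines of code (our own illustration; function name and synthetic data are hypothetical). It also removes a systematic offset between the two annotators, as discussed above:

```python
import numpy as np

def estimate_sigma(theta1, theta2):
    """Estimate the per-marker standard deviation sigma_Theta from two
    annotation sequences of the same score events."""
    d = np.asarray(theta1) - np.asarray(theta2)
    d = d - d.mean()  # remove a systematic offset between the two annotators
    # Var(Theta^1 - Theta^2) = 2 * sigma_Theta^2
    return np.sqrt(np.mean(d ** 2) / 2.0)
```

On synthetic markers with Gaussian jitter of known standard deviation, the estimate converges to that value as the number of score events grows.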
Additionally, as we asked annotators to also mark silent beats, even during whole silent bars, we should expect substantial deviations for at least a few such events in every recording. We therefore conclude that at least the assumption of identical variance across a whole piece should be dropped (for more complex material) when more detailed information about local uncertainties of the annotation is desired. However, it is interesting to note that locally, when the differences for only a few consecutive (around 20-30) annotated time-stamps are pooled, they conform to a normal distribution quite well. This means that the assumption of approximately equal variance for the annotation of score events tends to hold for short blocks of time, but not globally (for a whole piece), at least for the musical material considered here. As estimating the standard deviation (as a measure of uncertainty) of each time-stamp’s markers is not reliable given only a few annotations, we used an alternative based on the above observation. For blocks of 24 consecutive score events (with a hop size of 12), the differences of a pair of annotation sequences were *pooled* and used to estimate the standard deviation for each respective block. The resulting block-wise constant curve of standard deviations is shown in Fig. \[fig:std\_annotat\] (magenta), along with the simple standard deviation per score event, calculated from three markers (blue), for a specific recording and pair of annotations. The median of these per-block estimated standard deviations is used as a global estimate of the precision of the annotations, and is given for the respective performance as the right-most column in Table \[table:recordings\]. As can be seen, the values differ substantially across the pieces as well as within the pieces, for different performances. The right-most boxplot in Fig. \[fig:std\_annotat\] shows a summarization of the per-block estimated standard deviations.
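The block-wise pooling just described can be sketched as follows; under the model $\Delta_n \sim \mathcal{N}(0, 2\sigma_\varTheta^2)$, each block's annotation standard deviation is estimated as the square root of half the mean squared difference (function name and data layout are ours):

```python
import math

def blockwise_std(delta, block=24, hop=12):
    """Pool differences of a pair of annotation sequences in
    overlapping blocks and estimate one annotation standard
    deviation per block: sqrt(mean(delta^2) / 2)."""
    stds = []
    for start in range(0, max(len(delta) - block + 1, 1), hop):
        chunk = delta[start:start + block]
        stds.append(math.sqrt(sum(d * d for d in chunk) / (2 * len(chunk))))
    return stds

# The median over the returned blocks serves as a global precision estimate.
curve = blockwise_std([0.1] * 48)
```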
Interestingly, for the 1st movement of Beethoven’s Symphony No. 9 (with its relatively constant tempo), the estimated standard deviation is close to the value presented in [@dannenberg_2009_single_tap_error_distribution], but it is considerably larger for the other pieces, which exhibit more strongly varying tempo. Automatic Alignment {#sec:alignment} =================== As mentioned above, annotating a large number of performances of the same piece is a time-consuming process. A more efficient alternative would be to automatically transfer annotations from one recording to a number of unseen recordings, via audio-to-audio alignment. Alignment Procedure and Ground-truth ------------------------------------ The method of choice for (off-line) audio-to-audio alignment is *Dynamic Time-warping (DTW)* [@muller_2019_cross_modal]. Aligning two recordings via DTW involves extracting, from the two recordings respectively, sequences $X \in {{\mathbb{R}}}^{L \times D}$ and $Y \in {{\mathbb{R}}}^{M \times D}$ of feature vectors. Using a distance function $d(x_l, y_m)$, the DTW algorithm finds a path of minimum cost, i.e. a mapping between elements $x_l$, $y_m$ of the sequences $X$, $Y$. An alignment is thus a mapping between pairs of feature vectors (from different recordings), each vector representing a block of consecutive audio samples. As each audio sample has an associated time-stamp (an integer multiple of the inverse of the sample rate), each feature vector, say $x_l$, can be associated with a time-stamp $t_l^X$ as well, here representing the center of the block of audio samples. The matching of sequence elements is schematically illustrated in Fig. \[fig:alig\_ts\_gt\], for the “direction” $X \rightarrow Y$ (note that direction here refers to the evaluation, as will be illustrated next). For each block of $X$, the matching block of $Y$ is found, and its associated time-stamp $t_m^Y$ is subtracted from the ground-truth time-stamp $g_n^Y$.
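The core DTW recursion can be sketched in a few lines of plain Python (a minimal $O(LM)$ version for illustration only; for long recordings, approximations such as FastDTW are used in practice):

```python
def dtw(X, Y, dist):
    """Plain dynamic time-warping: returns the minimum accumulated
    cost and one optimal warping path as (l, m) index pairs."""
    L, M = len(X), len(Y)
    INF = float("inf")
    D = [[INF] * (M + 1) for _ in range(L + 1)]
    D[0][0] = 0.0
    for l in range(1, L + 1):
        for m in range(1, M + 1):
            c = dist(X[l - 1], Y[m - 1])
            D[l][m] = c + min(D[l - 1][m], D[l][m - 1], D[l - 1][m - 1])
    # Backtrack one optimal path (diagonal steps preferred on ties).
    path, l, m = [], L, M
    while l > 0 and m > 0:
        path.append((l - 1, m - 1))
        _, l, m = min((D[l - 1][m - 1], l - 1, m - 1),
                      (D[l - 1][m], l - 1, m),
                      (D[l][m - 1], l, m - 1))
    path.reverse()
    return D[L][M], path
```

With one-dimensional "features" and an absolute-difference distance, `dtw([0, 1, 2], [0, 1, 1, 2], lambda a, b: abs(a - b))` yields cost `0.0` and a monotone path matching the repeated `1` in the second sequence; each matched block's time-stamp can then be compared with the ground-truth time-stamps.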
Subtracting the matched time-stamps from the ground truth in this way produces the pairwise error sequence $e_n^{X\rightarrow Y}$. As we have ground-truth annotations available for both recordings of a pair, we can also calculate an error sequence for the “reverse” direction $Y \rightarrow X$. The sequences of ground-truth time-stamps were produced from the multiple annotations discussed above (Section \[sec:annotation\]), by taking for each annotated score event the sample mean across the three annotations per recording. For computing the alignments, a Python implementation of FastDTW [@salvador_2007_fastdtw] was used. ![ Matching feature vectors through DTW, and calculating errors between associated time-stamps $t_m^Y$ and ground-truth time-stamps $g_n^Y$, for direction $X \rightarrow Y$. This yields the error sequence $e_n^{X \rightarrow Y}$. []{data-label="fig:alig_ts_gt"}](alignment_blocks_ts.pdf){width="0.5\columnwidth"} Choice of Audio Features ------------------------ The actual alignment process is preceded by extracting features from the recordings to be aligned. Different features have been proposed and evaluated for this task in the literature. We decided to choose only features that have been shown to yield highly accurate alignments and thus small alignment errors. [@grachten_alignment_structure] evaluated several different audio features separately on data sets of different music genres, among them symphonies by Beethoven. They achieved the best results overall by using 50 MFCC (in contrast to 13 or even 5 as used in [@kirchhoff_2011_evaluation_features_alignment]), for two different block lengths. As the results on these corpora, which are similar to ours, were dominated by MFCC, we decided to use these with similar configurations for our experiments. Additionally, we included a variant of MFCC (referred to in the following as “MFCC mod”) following an idea described in [@muller_2009_chroma_features_], where 120 MFCC are extracted, the first $n_{skip}$ are then discarded, and only the remaining ones are used.
However, in contrast to their proposal, we skip the subsequent extraction of chroma information and use the MFCC directly. The second family of features that has proven successful for alignment tasks is chroma features, which were tested as an alternative. For extracting the feature values, the implementations from LibROSA [@librosa_2015] were used. Besides “classical” chroma features, the variants chroma\_cqt (employing a constant-Q transform) and chroma\_cens were used. We decided not to include more specialized features that incorporate local onset information, like LNSO / NC [@arzt_2012_adaptive_distance] or DLNCO (in combination with chroma), as the results in [@grachten_alignment_structure] and [@ewert_2009_hires_sync_chroma] suggest that they would give no advantage on our corpus. Systematic Experiments Performed -------------------------------- In order to find the best setup for audio-to-audio alignment of complex orchestral music recordings, we carried out a large number of alignment experiments, systematically varying the following parameters: - FFT sizes: 1024 to 8192 (chroma), up to 16384 (MFCC) - Hop sizes MFCC: half of FFT size, for 16384 fixed to 4096 - Hop sizes Chroma: 512 and 1024, for each FFT size; additionally 2048 for chroma\_cens and chroma\_cqt - Number of MFCC: 13, 20, 30, 40, 50, 80, 100 - MFCC mod: 120 coefficients, first 10, 20, $\ldots$, 80 discarded - Distance measures: Euclidean ($l_2$), city block ($l_1$) and cosine distance. Note that the audio signals were not down-sampled in any of the cases, but used at their full sample rate of 44.1 kHz. All in all, a total of 312 different alignments were computed and evaluated for each performance pair. Each alignment of each pair of performances was evaluated in both directions. As it is impossible to display all results in this paper, we will only report a subset of the best results in Section \[sec:eval:alignments\].
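Such a parameter grid is conveniently enumerated with `itertools.product`; the sketch below reconstructs only the MFCC part of the grid, with names and the exact combinations being our own illustration (the full grid, over all feature families and their constraints, is what yields the 312 alignments per pair):

```python
from itertools import product

# Hypothetical reconstruction of the MFCC part of the search grid.
fft_sizes = [1024, 2048, 4096, 8192, 16384]
n_mfcc = [13, 20, 30, 40, 50, 80, 100]
distances = ["euclidean", "cityblock", "cosine"]

mfcc_settings = [
    {"fft": fft,
     "hop": 4096 if fft == 16384 else fft // 2,  # hop rule stated above
     "n_mfcc": n,
     "dist": d}
    for fft, n, d in product(fft_sizes, n_mfcc, distances)
]
print(len(mfcc_settings))  # 5 * 7 * 3 = 105 MFCC configurations
```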
Evaluation of Alignments {#sec:eval:alignments} ======================== Alignment Accuracy ------------------ For quantifying the alignment accuracy, we calculated pairwise errors $e_n$ between the ground-truth time-stamps $g_n$ for the respective recording and the matching alignment time-stamps $t_l$ (see Fig. \[fig:alig\_ts\_gt\]). Per pair of recordings, two error sequences are obtained, one for each evaluation direction, i.e. $e_n^{X \rightarrow Y}$ and $e_n^{Y \rightarrow X}$. As a general global measure of the accuracy of a full alignment, the mean absolute error is used; the maximum absolute error can be seen as a measure of a lack of robustness. To report the best results, we first ranked all alignments whose maximum absolute errors are below 5 seconds by their mean absolute errors. As a large maximum error is taken to indicate a lack of robustness, the worst-performing settings were thus discarded. For each pair of recordings, from the remaining error sequences (from originally 312 alignments per pair, each with 2 directions of evaluation), the 10 best results, in terms of mean absolute error, were then kept for further analysis. The error values for both directions of each specific alignment were then pooled, i.e. the error values were collected and analyzed jointly. A one-way ANOVA (null hypothesis: no difference in the means) was conducted for the 10 best alignments per pair of recordings, where in all cases the null hypothesis could not be rejected (recording pair with smallest p-value: $F = 0.6$, $p = 0.8$). Thus, as the different settings of the 10 best alignments do not result in significant differences in mean error performance, the error sequences for those 10 best alignments were collected to estimate a distribution of the absolute errors. Fig.
\[fig:alig\_eCDF\_summarized\] shows the empirical cumulative distribution function of the pairwise absolute errors for all 5 alignment (performance) pairs, where each curve is obtained from the 2 error sequences (both evaluation directions) of each of the 10 best alignments for the respective performance pair. ![Cumulative distribution of absolute pairwise errors. Each curve represents pooled errors of 10 best alignments (mean absolute error) for both evaluation directions, per pair of recordings. s9-1: Beethoven, S.9, 1st Mov., s9-3: Bruckner, S.9, 3rd Mov., op21-2: Webern, Op21, 2nd Mov. (+) and (x) markers for median standard deviation of annotation, cf. Table \[table:recordings\].[]{data-label="fig:alig_eCDF_summarized"}](+eCDF_abs_errors_all_pieces_short.png){width="\linewidth"} In the following, the settings and results, in terms of mean absolute error and maximum absolute error, for the 10 best alignments are presented. For the Beethoven piece, we restricted the reporting to one pair of recordings (BPO 1962 vs. VPO 1947) due to limited space (Table \[table:results\_beethoven\]). As can be seen from Fig. \[fig:alig\_eCDF\_summarized\], the other two pairs do not differ substantially in terms of error performance, and the settings for obtaining these results are almost identical to the ones presented in the table, with an even stronger preference for the MFCC mod feature. Tables \[table:results\_webern\] and \[table:results\_bruckner\] show the results for the Webern and Bruckner pairs of recordings, respectively. As can be seen from the tables, the best results are achieved with either MFCC or the modified MFCC. There does not seem to be a very clear pattern as to which parameter setting gives the best results, even within one pair of recordings. A slight advantage of medium to large FFT sizes is observed, as is one for a larger number of MFCC ($\geq$ 80, much larger than what is suggested in the literature for timbre-related tasks).
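The selection and pooling procedure described above (discard non-robust alignments, rank by mean absolute error, keep the 10 best, pool their errors into an empirical CDF) can be sketched as follows; the data layout and function names are ours:

```python
def select_best(alignments, max_err_limit=5.0, keep=10):
    """alignments: list of (name, absolute_error_sequence) pairs.
    Discard alignments whose maximum absolute error reaches the
    limit, then rank the rest by mean absolute error."""
    robust = [(name, errs) for name, errs in alignments
              if max(errs) < max_err_limit]
    ranked = sorted(robust, key=lambda a: sum(a[1]) / len(a[1]))
    return ranked[:keep]

def ecdf(values):
    """Empirical CDF of pooled absolute errors: sorted values x
    and F(x) = fraction of errors <= x."""
    xs = sorted(values)
    n = len(xs)
    return xs, [(i + 1) / n for i in range(n)]

best = select_best([("a", [0.1, 0.2]),
                    ("b", [0.05, 6.0]),   # max error too large: dropped
                    ("c", [0.3, 0.1])])
pooled = [e for _, errs in best for e in errs]
xs, fs = ecdf(pooled)
```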
For the modified MFCC, skipping the first 20 to 40 of the 120 coefficients seems a good suggestion. Interestingly, there seems to be no clear relation to the FFT size. Relation to Human Alignment Precision ------------------------------------- We would like to relate the accuracy achieved by automatic alignment methods to the precision with which human annotators mark score events in such recordings. This will enable us to judge the errors of the alignment methods in such a way that we can not only say which is best, but also which are probably sufficiently good for musicological studies (in relation to how precise human annotations tend to be). By comparing the global measures of variation of the annotations (Table \[table:recordings\]) with the mean errors obtained from the alignment study, the following can be stated. We would like the errors introduced by the alignments to be in the range of the variation introduced by human annotators. If, for example, the above estimated standard deviations are used to describe an interval (e.g. $\pm$ 1 SD) around the ground-truth annotations, then markers placed by the DTW alignment within such an interval can be taken to be as accurate as an average human annotation. However, as Tables \[table:results\_beethoven\] to \[table:results\_webern\] reveal, on average the absolute errors are at least slightly (or, in the case of the Bruckner performances, considerably) larger than the estimated standard deviations, but still in a reasonable range, even for larger proportions of the score events (see Figure \[fig:alig\_eCDF\_summarized\]). Discussion and Conclusions ========================== Given our results, we expect the presented feature settings to be quite suitable as a first step toward further musicological questions related to comparing multiple performances of one piece.
With careful annotation of one recording, transferring the score event markers to other recordings of the same piece should yield accuracy not much worse than what is to be expected from human annotations. Detailed analyses of e.g. tempo may still need a moderate amount of manual correction, however. An interesting application we consider is the exploration of a larger corpus of unseen recordings. Being able to establish, within a reasonable uncertainty, a common musical grid for a number of performances allows searching for (a first impression of) commonalities and differences across performances, for parameters such as tempo, or for features extracted directly from the recording, such as loudness, mapped to the musical grid. This will, e.g., allow the pre-selection of certain performances for more careful human annotation and further, more detailed analyses. Recently, performance-related data have been presented for a larger corpus in [@kosta_2018_mazurkabl]. We hope to have presented some new insights with the data on annotation precision, and with the methods applied for its quantification. Further work could make use of estimates of the typical uncertainty of annotations to estimate, or give bounds for, the uncertainty of data derived from them. One way would be to use simple error propagation to quantify the uncertainty of tempo representations, and to automatically find (sections of) performances of significantly different tempo within a large corpus of recordings. Acknowledgments =============== This work was supported by the Austrian Science Fund (FWF) under project number P29840, and by the European Research Council via ERC Grant Agreement 670035, project CON ESPRESSIONE. We would like to thank the annotators for their work, as well as the anonymous reviewers for their valuable feedback. Special thanks go to Martin Gasser for fruitful discussions of an earlier draft of this work.
[^1]: [www.mazurka.org.uk/info/revcond/example/](www.mazurka.org.uk/info/revcond/example/) [^2]: Supplemental material to this publication is available online at 10.5281/zenodo.3260499
--- abstract: 'Every countable language which conforms to classical logic is shown to have an extension which conforms to classical logic, and has a definitional theory of truth. That extension has a semantical theory of truth, if every sentence of the object language is valuated by its meaning either as true or as false. These theories contain both a truth predicate and a non-truth predicate. Theories are equivalent when the sentences of the object language are valuated by their meanings.' author: - 'Seppo Heikkilä $^\star$' title: Theories of truth for countable languages which conform to classical logic --- \[pageinit\] Department of Mathematical Sciences, University of Oulu BOX 3000, FIN-90014, Oulu, Finland Introduction {#S1} ============ Based on the ’Chomsky definition’ (cf. [@C]), we assume that a language is a nonempty countable set of sentences of finite length, formed from a countable set of elements. A theory of syntax is also assumed to provide a language with rules to construct well-formed sentences, formulas, etc. A language is said to conform to classical logic if it has, or if it can be extended to have, at least the following properties (’iff’ means ’if and only if’): \(i) It contains the logical symbols $\neg$ (not), $\vee$ (or), $\wedge$ (and), $\rightarrow$ (if...then), $\leftrightarrow$ (iff), $\forall$ (for all) and $\exists$ (exists), and the following sentences: If $A$ and $B$ are (denote) sentences of the language, so are $\neg A$, $A\vee B$, $A\wedge B$, $A\rightarrow B$ and $A\leftrightarrow B$. If $P(x_1,\dots,x_m)$, $m\ge 1$, is a formula of the language with $m$ free variables, then $P$ is called a predicate with arity $m$ and domain $D_P=D_P^1\times\cdots\times D_P^m$, where each $D_P^i$ is a subset of a set $D$ of objects, called the domain of discourse, if the following properties hold.\ (p1) Every object of $D$ is named by a term.
Denote by $N_P^i$ the set of those terms which name the objects of $D_P^i$, $i=1,\dots,m$.\ (p2) $P(b_1,\dots,b_m)$ is a sentence of that language obtained from $P(x_1,\dots,x_m)$ by substituting for each $i=1,\dots,m$ a term $b_i$ of $N_P^i$ for $x_i$ in every free occurrence of $x_i$ in $P$.\ If $P$ is a predicate with arity $m\ge 1$, then the sentences $q_1x_1\dots q_mx_m P(x_1,\dots,x_m)$, where each $q_i$ is either $\forall$ or $\exists$, are in the language. $\neg P$ is also a predicate with arity $m$ and domain $D_P$. \(ii) The sentences of that language are so valuated as true or as false that the following rules of classical logic are valid: If $A$ and $B$ denote sentences of the language, then $A$ is true iff $\neg A$ is false, and $A$ is false iff $\neg A$ is true; $A\vee B$ is true iff $A$ or $B$ is true, and false iff $A$ and $B$ are false; $A\wedge B$ is true iff $A$ and $B$ are true, and false iff $A$ or $B$ is false; $A\rightarrow B$ is true iff $A$ is false or $B$ is true, and false iff $A$ is true and $B$ is false; $A\leftrightarrow B$ is true iff $A$ and $B$ are both true or both false, and false iff $A$ is true and $B$ is false or $A$ is false and $B$ is true. If $P$ is a predicate with arity $m$, then the sentence of the form $q_1x_1\dots q_mx_m P(x_1,\dots,x_m)$ is true iff the sentence $P(b_1,\dots,b_m)$ is true for all $b_i\in N_P^i$ when $q_i$ is $\forall$, and for some $b_i\in N_P^i$ when $q_i$ is $\exists$. $q_1x_1\dots q_mx_m P(x_1,\dots,x_m)$ is false iff the sentence $P(b_1,\dots,b_m)$ is false for all $b_i\in N_P^i$ when $q_i$ is $\exists$, and for some $b_i\in N_P^i$ when $q_i$ is $\forall$. \(iii) The language is bivalent, i.e., every sentence of it is either true or false. Every countable and bivalent first-order language with or without identity conforms to classical logic. A classical example is the language of arithmetic in its standard interpretation. 
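For the propositional connectives, the valuation rules in (ii) are purely truth-functional, and can be captured in a few lines; the tuple encoding below is our own illustration, not part of the formal development:

```python
def eval_sentence(s, val):
    """Evaluate a propositional sentence under the classical rules
    listed above. Sentences are nested tuples: ('not', A),
    ('or', A, B), ('and', A, B), ('imp', A, B), ('iff', A, B);
    atoms are strings valuated as True/False by the dict `val`."""
    if isinstance(s, str):
        return val[s]
    op = s[0]
    if op == "not":
        return not eval_sentence(s[1], val)
    a = eval_sentence(s[1], val)
    b = eval_sentence(s[2], val)
    if op == "or":    # true iff A or B is true
        return a or b
    if op == "and":   # true iff A and B are true
        return a and b
    if op == "imp":   # true iff A is false or B is true
        return (not a) or b
    if op == "iff":   # true iff A and B are both true or both false
        return a == b
    raise ValueError(op)
```

For instance, with $A$ true and $B$ false, $A\rightarrow B$ evaluates to false and $\neg A\leftrightarrow B$ to true, in accordance with (ii). The quantifier clauses would additionally range over the named objects of the relevant domain.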
We say that a language has a theory of truth if truth values are assigned to its sentences, and if it contains a predicate $T$ which satisfies the $T$-rule: $T(\left\lceil A\right\rceil)\leftrightarrow A$ is true for every sentence $A$ of the language. The term $\left\lceil A\right\rceil$, which names the sentence $A$, is defined below. A predicate $T$ which satisfies the $T$-rule is called a truth predicate. A theory of truth is said to be definitional if truth values are defined for sentences, and semantical if truth values of sentences are determined by their meanings. The main results of this paper are: Every countable language which conforms to classical logic has an extension which has properties (i)–(iii), and has a definitional theory of truth. That extension has a semantical theory of truth if every sentence of the object language is valuated by its meaning either as true or as false. These theories of truth contain truth and non-truth predicates. The theories are equivalent when the sentences of the object language are valuated by their meanings. Extended languages {#S2} ================== Assume that an object language $L_0$ conforms to classical logic, and is without a truth predicate. $L_0$ has by definition an extension which has properties (i)–(iii). That extension, denoted by $L$, is called a basic extension of $L_0$. The language $L_T$ is formed by adding to $L$ extra formulas $T(x)$ and $\neg T(x)$, and sentences $T(\bf n)$ and $\neg T(\bf n)$, where $\bf n$ goes through all numerals which denote numbers $n\in\mathbb N_0= \{0,1,2,\dots\}$ ([**0**]{}=0, [**1**]{}=S0, [**2**]{}=SS0,…). Neither valuation nor meaning is yet attached to these sentences. Numerals are added, if necessary, to the terms of $L_T$. Choose a Gödel numbering for the sentences of $L_T$ (see Wikipedia). The Gödel number of a sentence denoted by $A$ is denoted by \#$A$, and the numeral of \#$A$ by $\left\lceil A\right\rceil$, which names the sentence $A$.
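The choice of numbering is left open above; one classical construction (a toy sketch, not the coding assumed in the paper) encodes a sentence, read as a finite sequence of symbols with numeric codes, by unique prime factorization:

```python
def godel_number(symbols, code):
    """Toy Goedel numbering: encode a finite symbol sequence as
    prod_i p_i ** code[s_i], with p_i the i-th prime. Distinct
    sequences get distinct numbers by unique factorization."""
    def primes():
        n, found = 2, []
        while True:
            if all(n % p for p in found):
                found.append(n)
                yield n
            n += 1
    g = 1
    for p, s in zip(primes(), symbols):
        g *= p ** code[s]
    return g
```

With the hypothetical symbol codes `{"0": 1, "S": 2}`, the term `S0` receives the number $2^2\cdot 3^1 = 12$, while `0S` receives $2^1\cdot 3^2 = 18$; injectivity is all the construction below needs.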
If $P$ is a predicate of $L$ with arity $m$, then $P(b_1,\dots,b_m)$ is a sentence of $L$ for each $(b_1,\dots,b_m)\in N_P=N_P^1\times\dots\times N_P^m$, and $\left\lceil P(b_1,\dots,b_m)\right\rceil$ is the numeral of its Gödel number. Thus $T(\left\lceil P(b_1,\dots,b_m)\right\rceil)$ and $\neg T(\left\lceil P(b_1,\dots,b_m)\right\rceil)$ are sentences of $L_T$ for each $(b_1,\dots,b_m)\in N_P$, so that they are determined by predicates of $L_T$ having the domain $D_P$ of $P$. Denote these predicates by $T(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil)$ and $\neg T(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil)$, and add them to $L_T$. The notation $P(\dot x_1,\dots,\dot x_m)$ stands for the result of formally replacing the variables $x_1,\dots,x_m$ of $P(x_1,\dots,x_m)$ by terms of $N_P$ (cf. [@HH]). Add to the language $L_T$ the sentences $\forall xT_1(x)$, $\exists xT_1(x)$, $\forall xT_1(\left\lceil T_2(\dot x)\right\rceil)$ and $\exists xT_1(\left\lceil T_2(\dot x)\right\rceil)$, where $T_1,T_2\in\{T,\neg T\}$, and the sentences $q_1x_1\dots q_mx_m T(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil)$ and $q_1x_1\dots q_mx_m \neg T(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil)$ for each predicate $P$ of $L$ with arity $m \ge 1$ and for each $m$-tuple $q_1,\dots,q_m$, where the $q_i$’s are $\forall$ or $\exists$. The extension of the language $L_T$ so obtained is denoted by $\mathcal L_0$. When a language $\mathcal L_n$, $n\in\mathbb N_0$, is defined, let $\mathcal L_{n+1}$ be the language formed by adding to $\mathcal L_n$ those of the following sentences which are not in $\mathcal L_n$: $\neg A$, $A\vee B$, $A\wedge B$, $A\rightarrow B$ and $A\leftrightarrow B$, where $A$ and $B$ are sentences of $\mathcal L_n$. The language $\mathcal L$ is defined as the union of the languages $\mathcal L_n$, $n\in\mathbb N_0$.
Extend the Gödel numbering of the sentences of $L_T$ to those of $\mathcal L$, and denote by $\mathcal D$ the set of Gödel numbers of the sentences of $\mathcal L$. Our main goal is to extract from $\mathcal L$ a sublanguage which, under suitable valuations of its sentences, has properties (i)–(iii) given in the Introduction, and has a theory of truth. First we define some subsets of $\mathcal L$. Denote by $\mathcal P^m$ the set of those predicates of $L$ which have arity $m$, $\mathcal P=\bigcup_{m=1}^\infty \mathcal P^m$, and $$\label{E50} \begin{cases} Z_3^m=\{q_1x_1\dots q_mx_m T(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil):P\in\mathcal P^m\hbox{ and $q_1x_1\dots q_mx_m P(x_1,\dots,x_m)$ is true}\};\\ Z_4^m=\{q_1x_1\dots q_mx_m\neg T(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil):P\in\mathcal P^m\hbox{ and $q_1x_1\dots q_mx_m \neg P(x_1,\dots,x_m)$ is true}\}. \end{cases}$$ Define subsets $Z_1(U)$, $Z_2(U)$, $U\subset \mathcal D$, and $Z_i$, $i=1,\dots,4$, of $\mathcal L$ by $$\label{E20} \begin{cases} Z_1(U)=\{\hbox{$T(\bf n)$: ${\bf n}=\left\lceil A\right\rceil$, where $A$ is a sentence of $\mathcal L$ and \#$A$ is in $U$}\},\\ Z_2(U)=\{\hbox{$\neg T(\bf n)$: ${\bf n}=\left\lceil A\right\rceil$, where $A$ is a sentence of $\mathcal L$ and \#[$\neg A$] is in $U$}\},\\ Z_1=\{\neg\forall x T(x),\exists xT(x),\neg\forall x\neg T(x),\exists x\neg T(x)\},\\ Z_2=\{\neg(\forall xT_1(\left\lceil T_2(\dot x)\right\rceil)),\ \exists xT_1(\left\lceil T_2(\dot x)\right\rceil):\ T_1,T_2\in\{T,\neg T\}\},\\ Z_3=\bigcup_{m=1}^\infty Z_3^m,\ Z_4=\bigcup_{m=1}^\infty Z_4^m. \end{cases}$$ Subsets $L_n(U)$, $n\in\mathbb N_0$, of $\mathcal L$ are defined recursively as follows. $$\label{E201} L_0(U)=\begin{cases} Z =\{A: A \hbox{ is a true sentence of } L\} \hbox{ if } U=\emptyset \hbox{ (the empty set)},\\ Z\cup Z_1(U)\cup Z_2(U)\cup Z_1\cup Z_2\cup Z_3\cup Z_4 \hbox{ if $\emptyset\subset U\subset \mathcal D$}.
\end{cases}$$ When a subset $L_n(U)$ of $\mathcal L$ is defined for some $n\in\mathbb N_0$, and when $A$ and $B$ are sentences of $\mathcal L$, denote $$\label{E203} \begin{cases} L_n^0(U)=\{\neg(\neg A):A \hbox{ is in } L_n(U)\},\\ L_n^1(U)=\{A\vee B:A \hbox{ and $B$, or $A$ and $\neg B$, or $\neg A$ and $B$ are in } L_n(U)\},\\ L_n^2(U)=\{A\wedge B:A \hbox{ and $B$ are in } L_n(U)\},\\ L_n^3(U)=\{A\rightarrow B:\neg A \hbox{ and $B$, or $\neg A$ and $\neg B$, or $A$ and $B$ are in } L_n(U)\},\\ L_n^4(U)=\{A\leftrightarrow B:\hbox{ $A$ and $B$, or $\neg A$ and $\neg B$ are in } L_n(U) \},\\ L_n^5(U)=\{\neg(A\vee B):\neg A \hbox{ and $\neg B$ are in } L_n(U)\},\\ L_n^6(U)=\{\neg(A\wedge B):\neg A \hbox{ and $B$, or $\neg A$ and $\neg B$, or $A$ and $\neg B$ are in } L_n(U)\},\\ L_n^7(U)=\{\neg(A\rightarrow B):A \hbox{ and $\neg B$ are in } L_n(U)\},\\ L_n^8(U)=\{\neg(A\leftrightarrow B):\hbox{$A$ and $\neg B$, or $\neg A$ and $B$ are in } L_n(U) \}, \end{cases}$$ and define $$\label{E204} L_{n+1}(U)=L_n(U)\cup \bigcup_{k=0}^8 L_n^k(U).$$ The above constructions imply that $L_n^k(U)\subseteq L_{n+1}^k(U)$ and $L_n(U)\subset L_{n+1}(U)\subset \mathcal L$ for all $n\in\mathbb N_0$ and $k=0,\dots,8$. Define a subset $L(U)$ of $\mathcal L$ by $$\label{E21} L(U)=\bigcup_{n=0}^\infty L_n(U).$$ Properties of consistent subsets of $\mathcal D$ {#S30} ================================================ Recall that $\mathcal D$ denotes the set of Gödel numbers of the sentences of $\mathcal L$. When $U$ is a subset of $\mathcal D$, denote by $G(U)$ the set of Gödel numbers of the sentences of $L(U)$ defined by \eqref{E21}: $$\label{E22} G(U)=\{\#A:A \hbox{ is a sentence of } L(U)\}.$$ A subset $U$ of $\mathcal D$ is called consistent if there is no sentence $A$ of $\mathcal L$ such that both \#$A$ and \#\[$\neg A$\] are in $U$. \[L201\] Let $U$ be a consistent subset of $\mathcal D$. Then for no sentence $A$ of $\mathcal L$ do both $A$ and $\neg A$ belong to $L(U)$, and $G(U)$ is consistent.
First we show that there is no sentence $A$ in $\mathcal L$ such that both $A$ and $\neg A$ belong to $L_0(U)$. If $U=\emptyset$, then $L_0(U)$ is by \eqref{E201} the set $Z$ of true sentences of $L$. If $A$ is a sentence of $L$, then only one of the sentences $A$ and $\neg A$ is true, and hence in $Z=L_0(U)$, since $L$ has properties (i)–(iii). Assume next that $U$ is nonempty. As a consistent set, $U$ is a proper subset of $\mathcal D$. Let ${\bf n}$ be a numeral. If $T({\bf n})$ is in $L_0(U)$, it is in $Z_1(U)$, so that, by \eqref{E20}, ${\bf n}=\left\lceil A\right\rceil$, where \#$A$ is in $U$. Since $U$ is consistent, \#\[$\neg A$\] is not in $U$. Thus, by \eqref{E20}, $\neg T({\bf n})$ is not in $Z_2(U)$, and hence not in $L_0(U)$. This result also implies that $T({\bf n})$ is not in $L_0(U)$ if $\neg T({\bf n})$ is in $L_0(U)$. \eqref{E20} and \eqref{E201} imply that the sentences $\exists xT(x)$, $\neg\forall xT(x)$, $\neg\forall x\neg T(x)$ and $\exists x\neg T(x)$ are in $Z_1$, and hence in $L_0(U)$, but their negations are not in $L_0(U)$. By the definitions \eqref{E20} and \eqref{E201} of $Z_2$ and $L_0(U)$, neither both $\exists xT_1(\left\lceil T_2(\dot x)\right\rceil)$ and $\neg(\exists xT_1(\left\lceil T_2(\dot x)\right\rceil))$, nor both $\forall xT_1(\left\lceil T_2(\dot x)\right\rceil)$ and $\neg(\forall xT_1(\left\lceil T_2(\dot x)\right\rceil))$, are in $L_0(U)$ for any $T_1,T_2\in \{T,\neg T\}$. By the definitions \eqref{E50}, \eqref{E20} and \eqref{E201}, the sentences of $Z_3$ and $Z_4$, but not their negations, are in $L_0(U)$. The above proof shows that for no sentence $A$ of $\mathcal L$ both $A$ and $\neg A$ belong to $L_0(U)$. Make the induction hypothesis:\ (h0)  For no sentence $A$ of $\mathcal L$ both $A$ and $\neg A$ belong to $L_n(U)$.\ Applying (h0) and \eqref{E203} we obtain the following results. (h0) and the definition of $L_n^0(U)$ imply that if a sentence $A$ is in $L_n(U)$, then none of the odd-tuple negations of $A$ are in $L_{n+1}(U)$, and if $\neg A$ is in $L_n(U)$, then none of the even-tuple negations of $A$ are in $L_{n+1}(U)$.
If $A\vee B$ is in $L_{n+1}(U)$, it is in $L_n^1(U)$, whence $A$ or $B$ is in $L_n(U)$. If $\neg(A\vee B)$ is in $L_{n+1}(U)$, it is in $L_n^5(U)$, in which case $\neg A$ and $\neg B$ are in $L_n(U)$. Thus $A\vee B$ and $\neg(A\vee B)$ are not both in $L_{n+1}(U)$, for otherwise both $A$ and $\neg A$ or both $B$ and $\neg B$ are in $L_n(U)$, contradicting (h0). $A\wedge B$ and $\neg(A\wedge B)$ cannot both be in $L_{n+1}(U)$, for otherwise $A\wedge B$ is in $L_n^2(U)$, i.e., both $A$ and $B$ are in $L_n(U)$, and $\neg(A\wedge B)$ is in $L_n^6(U)$, i.e., at least one of $\neg A$ and $\neg B$ is in $L_n(U)$. Thus both $A$ and $\neg A$ or both $B$ and $\neg B$ are in $L_n(U)$, contradicting (h0). If $A\rightarrow B$ is in $L_{n+1}(U)$, it is in $L_n^3(U)$, so that $\neg A$ or $B$ is in $L_n(U)$. If $\neg(A\rightarrow B)$ is in $L_{n+1}(U)$, it is in $L_n^7(U)$, whence both $A$ and $\neg B$ are in $L_n(U)$. Because of these results and (h0), the sentences $A\rightarrow B$ and $\neg(A\rightarrow B)$ are not both in $L_{n+1}(U)$. If $A\leftrightarrow B$ is in $L_{n+1}(U)$, it is in $L_n^4(U)$, in which case both $A$ and $B$ or both $\neg A$ and $\neg B$ are in $L_n(U)$. If $\neg (A\leftrightarrow B)$ is in $L_{n+1}(U)$, it is in $L_n^8(U)$, whence both $A$ and $\neg B$ or both $\neg A$ and $B$ are in $L_n(U)$. Thus both $A\leftrightarrow B$ and $\neg (A\leftrightarrow B)$ cannot be in $L_{n+1}(U)$, for otherwise both $A$ and $\neg A$ or both $B$ and $\neg B$ are in $L_n(U)$, contradicting (h0). The above results and the induction hypothesis (h0) imply that for no sentence $A$ of $\mathcal L$ both $A$ and $\neg A$ belong to $L_{n+1}(U)=L_n(U)\cup \bigcup_{k=0}^8 L_n^k(U)$.\ Since (h0) is proved when $n=0$, it is by induction valid for every $n\in\mathbb N_0$. If $A$ and $\neg A$ are in $L(U)$, then $A$ is by \eqref{E21} in $L_{n_1}(U)$ for some $n_1\in\mathbb N_0$, and $\neg A$ is in $L_{n_2}(U)$ for some $n_2\in\mathbb N_0$.
Then both $A$ and $\neg A$ are in $L_n(U)$ when $n=\max\{n_1,n_2\}$. This is impossible, because (h0) is proved for every $n\in\mathbb N_0$. Thus $A$ and $\neg A$ cannot both be in $L(U)$ for any sentence $A$ of $\mathcal L$. The above result and \eqref{E22} imply that there is no sentence $A$ in $\mathcal L$ such that both \#$A$ and \#\[$\neg A$\] are in $G(U)$. Thus $G(U)$ is consistent. \[L203\] Assume that $U$ and $V$ are consistent subsets of $\mathcal D$, and that $V\subseteq U$. Then $L(V)\subseteq L(U)$ and $G(V)\subseteq G(U)$. As consistent sets, $V$ and $U$ are proper subsets of $\mathcal D$. First we show that $L_0(V)\subseteq L_0(U)$. If $V=\emptyset$, then $L_0(V)=Z\subseteq L_0(U)$ by \eqref{E201}. Assume next that $V$ is nonempty. Thus also $U$ is nonempty. Let $A$ be a sentence of $L$. The definition \eqref{E201} of $L_0(U)$ implies that $A$ is in $L_0(U)$ and also in $L_0(V)$ iff $A$ is in $Z$. Let [**n**]{} be a numeral. If $T({\bf n})$ is in $L_0(V)$, it is in $Z_1(V)$, so that ${\bf n}=\left\lceil A\right\rceil$, where \#$A$ is in $V$. Because $V\subseteq U$, then \#$A$ is also in $U$, whence $T({\bf n})$ is in $Z_1(U)$, and hence in $L_0(U)$. If $\neg T({\bf n})$ is in $L_0(V)$, it is in $Z_2(V)$, in which case ${\bf n}=\left\lceil A\right\rceil$, where \#\[$\neg A$\] is in $V$. Since $V\subseteq U$, then \#\[$\neg A$\] is also in $U$, whence $\neg T({\bf n})$ is in $Z_2(U)$, and hence in $L_0(U)$. Because $U$ and $V$ are nonempty and proper subsets of $\mathcal D$, the sets $Z_1$, $Z_2$, $Z_3$ and $Z_4$ are in $L_0(U)$ and in $L_0(V)$ by \eqref{E201}. The above results imply that $L_0(V)\subseteq L_0(U)$. Make the induction hypothesis:\ (h1)  $L_n(V)\subseteq L_n(U)$ for some $n\in \mathbb N_0$.\ It follows from \eqref{E203} and (h1) that $L_n^k(V)\subseteq L_n^k(U)$ for each $k=0,\dots,8$. Thus $$L_{n+1}(V)=L_n(V)\cup \bigcup_{k=0}^8 L_n^k(V)\subseteq L_n(U)\cup \bigcup_{k=0}^8 L_n^k(U)=L_{n+1}(U).$$ (h1) is proved when $n=0$, whence it is by induction valid for every $n\in\mathbb N_0$.
If $A$ is in $L(V)$, it is in $L_n(V)$ for some $n\in\mathbb N_0$. Thus $A$ is in $L_n(U)$ by (h1), and hence in $L(U)$. Consequently, $L(V)\subseteq L(U)$. If \#$A$ is in $G(V)$ then $A$ is in $L(V)$. Thus $A$ is in $L(U)$, so that \#$A$ is in $G(U)$. This shows that $G(V)\subseteq G(U)$. Denote by $\mathcal C$ the family of consistent subsets of $\mathcal D$. In the formulation and the proof of Theorem \[T2\] transfinite sequences indexed by ordinals are used. A transfinite sequence $(U_\lambda)_{\lambda<\alpha}$ of $\mathcal C$ is said to be increasing if $U_\mu\subseteq U_\nu$ whenever $\mu<\nu<\alpha$, and strictly increasing if $U_\mu\subset U_\nu$ whenever $\mu<\nu<\alpha$. \[L204\] Assume that $(U_\lambda)_{\lambda<\alpha}$ is a strictly increasing sequence of $\mathcal C$. Then\ (a)  $(G(U_\lambda))_{\lambda<\alpha}$ is an increasing sequence of $\mathcal C$.\ (b)  The union $\underset{\lambda<\alpha}{\bigcup}G(U_\lambda)$ is consistent. Since $U_\mu\subset U_\nu$ when $\mu<\nu<\alpha$, it follows from Lemma \[L203\] that $G(U_\mu)\subseteq G(U_\nu)$ when $\mu<\nu<\alpha$, whence the sequence $(G(U_\lambda))_{\lambda<\alpha}$ is increasing. Consistency of the sets $G(U_\lambda)$, $\lambda<\alpha$, follows from Lemma \[L201\] because the sets $U_\lambda$, $\lambda<\alpha$, are consistent. This proves (a). To prove that the union $\underset{\lambda<\alpha}{\bigcup}G(U_\lambda)$ is consistent, assume on the contrary that there exists a sentence $A$ in $\mathcal L$ such that both \#$A$ and \#\[$\neg A$\] are in $\underset{\lambda<\alpha}{\bigcup}G(U_\lambda)$. Thus there exist $\mu,\,\nu<\alpha$ such that \#$A$ is in $G(U_\mu)$ and \#\[$\neg A$\] is in $G(U_\nu)$. Because $G(U_\mu)\subseteq G(U_\nu)$ or $G(U_\nu)\subseteq G(U_\mu)$, then both \#$A$ and \#\[$\neg A$\] are in $G(U_\mu)$ or in $G(U_\nu)$. But this is impossible, since both $G(U_\mu)$ and $G(U_\nu)$ are consistent.
Thus, the set $\underset{\lambda<\alpha}{\bigcup}G(U_\lambda)$ is consistent. Now we are ready to prove the following theorem. \[T2\] Let $W$ denote the set of Gödel numbers of true sentences of $L$. We say that a transfinite sequence $(U_\lambda)_{\lambda<\alpha}$ of $\mathcal C$ is a $G$-sequence if it has the following properties. - (G)  $(U_\lambda)_{\lambda<\alpha}$ is strictly increasing, $U_0=W$, and if $0<\mu< \alpha$, then $U_\mu = \underset{\lambda<\mu}{\bigcup}G(U_\lambda)$. Then the longest $G$-sequence exists, and it has a last member. This member is the smallest consistent subset $U$ of $\mathcal D$ satisfying $U=G(U)$. $W$ is consistent, since $L$ has properties (i)–(iii). At first we show that $G$-sequences are nested: (1)  Assume that $(U_\lambda)_{\lambda<\alpha}$ and $(V_\lambda)_{\lambda<\beta}$ are $G$-sequences. Then $U_\lambda=V_\lambda$ when $\lambda <\min\{\alpha,\beta\}$. $U_0=W=V_0$ by (G). Make the induction hypothesis:\ (h)  There exists an ordinal $\nu$ which satisfies $0<\nu< \min\{\alpha,\beta\}$ such that $U_\lambda=V_\lambda$ for each $\lambda<\nu$. It follows from (h) and (G) that $U_\nu = \underset{\lambda<\nu}{\bigcup}G(U_\lambda)=\underset{\lambda<\nu}{\bigcup}G(V_\lambda)=V_\nu$. Since $U_0=V_0$, then (h) holds when $\nu=1$. These results imply (1) by transfinite induction. Let $(U_\lambda)_{\lambda<\alpha}$ be a $G$-sequence. Defining $f(0)=\min U_0$, $f(\lambda)=\min(U_\lambda\setminus U_{\lambda-1})$, $0 < \lambda < \alpha$, and $f(\alpha)=\min(\mathcal D\setminus\underset{\lambda<\alpha}{\bigcup}U_\lambda)$, we obtain a bijection $f$ from $[0,\alpha]$ to a subset of $\mathbb N_0$. Thus $\alpha$ is a countable ordinal. Consequently, the set $\Gamma$ of those ordinals $\alpha$ for which $(U_\lambda)_{\lambda<\alpha}$ is a $G$-sequence is bounded from above by the smallest uncountable ordinal. Denote by $\gamma$ the least upper bound of $\Gamma$.
To show that $\gamma$ is a successor, assume on the contrary that $\gamma$ is a limit ordinal. Given any $\mu<\gamma$, the ordinals $\nu=\mu+1$ and $\alpha=\nu+1$ are $< \gamma$. $(U_\lambda)_{\lambda<\alpha}$ is a $G$-sequence, whence $U_\mu=\underset{\lambda<\mu}{\bigcup}G(U_\lambda)$, and $U_\mu\subset U_{\mu+1}$. Thus $(U_\lambda)_{\lambda<\gamma}$ has properties (G) when $\alpha=\gamma$, so that $(U_\lambda)_{\lambda<\gamma}$ is a $G$-sequence. Denote $U_\gamma = \underset{\lambda<\gamma}{\bigcup}G(U_\lambda)$. $U_\gamma$ is consistent by Lemma \[L204\](b). Because $U_\mu\subset U_\nu=\underset{\lambda<\nu}{\bigcup}G(U_\lambda)\subseteq U_\gamma$ for each $\mu<\gamma$, then $(U_\lambda)_{\lambda<\gamma+1}$ is a $G$-sequence. But this contradicts the choice of $\gamma$. Thus $\gamma$ is a successor, say $\gamma=\alpha+1$. If $\lambda < \alpha$, then $U_\lambda \subset U_\alpha$, so that $G(U_\lambda)\subseteq G(U_\alpha)$. Then $U_\alpha=\underset{\lambda<\alpha}{\bigcup}G(U_\lambda)\subseteq \underset{\lambda<\gamma}{\bigcup}G(U_\lambda)=G(U_\alpha)$, whence $U_\alpha\subseteq G(U_\alpha)$. Moreover, $U_\alpha = G(U_\alpha)$,\ for otherwise $U_\alpha\subset G(U_\alpha)= \underset{\lambda<\gamma}{\bigcup}G(U_\lambda) =U_\gamma$, and $(U_\lambda)_{\lambda<\gamma+1}$ would be a $G$-sequence.\ Consequently, $(U_\lambda)_{\lambda<\gamma}$ is the longest $G$-sequence, $U_\alpha$ is its last member, and $U_\alpha=G(U_\alpha)$. Let $U$ be a consistent subset of $\mathcal D$ satisfying $U=G(U)$. Then $U_0=W=G(\emptyset)\subseteq G(U)=U$. Make the induction hypothesis:\ (h2)  There exists an ordinal $\mu$ which satisfies $0<\mu< \gamma$ such that $U_\lambda\subseteq U$ for each $\lambda<\mu$. Then $G(U_\lambda)\subseteq G(U)$ for each $\lambda<\mu$, whence $U_\mu=\underset{\lambda<\mu}{\bigcup}G(U_\lambda)\subseteq G(U)=U$. Thus, by transfinite induction, $U_\mu\subseteq U$ for each $\mu<\gamma$. In particular, $U_\alpha\subseteq U$.
This proves the last assertion of the theorem. Language $\mathcal L_T$ and its properties {#S3} ========================================== Let $L_0$ be a language which conforms to classical logic and does not have a truth predicate. Let $\mathcal L$, $\mathcal P$ and $\mathcal D$ be as in Section \[S2\], and let $L(U)$ and $G(U)$, $U\subset \mathcal D$, be defined as in Section \[S2\]. Define $$\label{E27} F(U)=\{A: \neg A\in L(U)\}.$$ Recall that a subset $U$ of $\mathcal D$ is consistent if there is no sentence $A$ in $\mathcal L$ such that both \#$A$ and \#\[$\neg A$\] are in $U$. By Theorem \[T2\] the smallest consistent subset of $\mathcal D$ which satisfies $U=G(U)$ exists. \[D1\] Let $U$ be the smallest consistent subset of $\mathcal D$ which satisfies $U=G(U)$. Denote by $\mathcal L_T$ the language formed by the object language $L_0$, the predicates of $\mathcal P$, the sentences of $L(U)$ and $F(U)$, formulas $T(x)$ and $\neg T(x)$, corresponding predicates $T$ and $\neg T$ with their domain $D_T$ and the set $N_T$ of terms defined by $$\label{E28} D_T=L(U)\cup F(U) \hbox{ and } \ N_T=\{{\bf n}: {\bf n}=\left\lceil A\right\rceil, \hbox{ where $A$ is in } D_T\},$$ and predicates $T_1(\left\lceil T_2(\dot x)\right\rceil)$, and $T_1(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil)$, where $T_1,T_2\in\{T,\neg T\}$ and $P\in\mathcal P$ with arity $m\ge 1$. A valuation is defined for sentences of $\mathcal L_T$ as follows. 1. (I) A sentence of $\mathcal L_T$ is valuated as true iff it is in $L(U)$, and as false iff it is in $F(U)$. \[L33\] The language $\mathcal L_T$ defined by Definition \[D1\] and valuated by (I) is bivalent. The subsets $L(U)$ and $F(U)$ of the sentences of $\mathcal L_T$ are disjoint, for otherwise there is a sentence $A$ of $\mathcal L_T$ which is in $L(U)\cap F(U)$. Then $A$ is in $L(U)$, and by the definition of $F(U)$ also $\neg A$ is in $L(U)$. But this is impossible by Lemma \[L201\]. Consequently, $L(U)\cap F(U)=\emptyset$.
If $A$ is a sentence of $\mathcal L_T$, then it is in $L(U)$ or in $F(U)$. If $A$ is true, it is in $L(U)$, but not in $F(U)$, and hence not false, because $L(U)\cap F(U)=\emptyset$. Similarly, if $A$ is false, it is in $F(U)$, but not in $L(U)$, and hence not true. Consequently, $A$ is either true or false, so that $\mathcal L_T$ is bivalent. \[L31\] Let $\mathcal L_T$ be defined by Definition \[D1\] and valuated by (I). Then a sentence of the basic extension $L$ of $L_0$ is true (respectively false) in the valuation (I) iff it is true (respectively false) in the valuation of $L$. Let $A$ denote a sentence of $L$. $A$ is true in the valuation (I) iff $A$ is in $L(U)$ iff (by the construction of $L(U)$) $A$ is in $Z$ iff $A$ is true in the valuation of $L$. $A$ is false in the valuation (I) iff $A$ is in $F(U)$ iff (by the definition of $F(U)$) $\neg A$ is in $L(U)$ iff ($\neg A$ is a sentence of $L$) $\neg A$ is in $Z$ iff $\neg A$ is true in the valuation of $L$ iff ($L$ has properties (i)–(iii)) $A$ is false in the valuation of $L$. \[L32\] The language $\mathcal L_T$ defined by Definition \[D1\] and valuated by (I) has properties (i) and (ii) given in the Introduction. Unless otherwise stated, ’true’ means true in the valuation (I), and ’false’ means false in the valuation (I). The construction of $L(U)$ and the definition of $F(U)$ imply that $\mathcal L_T$ has properties (i). As for properties (ii) we at first derive the following auxiliary rule. - (t0)  Double negation: If $A$ is a sentence of $\mathcal L_T$, then $\neg(\neg A)$ is true iff $A$ is true. To prove (t0), assume first that $\neg(\neg A)$ is true. Then it is in $L(U)$, and hence in $L_n(U)$ for some $n\in\mathbb N_0$. If $\neg(\neg A)$ is in $L_0(U)$, it is in $Z$. Thus $\neg(\neg A)$ is true in the valuation of $L$. Then (the negation rule is valid in $L$) $\neg A$ is false in the valuation of $L$, which implies that $A$ is true in the valuation of $L$.
Thus $A$ is in $Z\subset L_0(U)\subset L(U)$, whence $A$ is true.\ Assume next that $n\in\mathbb N_0$ is the smallest number for which $\neg(\neg A)$ is in $L_{n+1}(U)$. It then follows that $\neg(\neg A)$ is in $L_{n}^0(U)$, so that $A$ is in $L_{n}(U)$, and hence in $L(U)$, i.e., $A$ is true.\ Thus $A$ is true if $\neg(\neg A)$ is true. Conversely, assume that $A$ is true. Then $A$ is in $L(U)$, so that $A$ is in $L_n(U)$ for some $n\in\mathbb N_0$. Thus $\neg(\neg A)$ is in $L_n^0(U)$, and hence in $L_{n+1}(U)$. Consequently, $\neg(\neg A)$ is in $L(U)$, whence $\neg(\neg A)$ is true. This concludes the proof of (t0). Rule (t0) is applied to prove - (t1)  Negation: $A$ is true iff $\neg A$ is false, and $A$ is false iff $\neg A$ is true. Let $A$ be a sentence of $\mathcal L_T$. Then $A$ is true iff (by (t0)) $\neg(\neg A)$ is true iff $\neg(\neg A)$ is in $L(U)$ iff (by the definition of $F(U)$) $\neg A$ is in $F(U)$ iff $\neg A$ is false.\ $A$ is false iff $A$ is in $F(U)$ iff (by the definition of $F(U)$) $\neg A$ is in $L(U)$ iff $\neg A$ is true. Thus (t1) is satisfied. Next we prove the following rule. - (t2)  Conjunction: $A\wedge B$ is true iff $A$ and $B$ are true. $A\wedge B$ is false iff $A$ or $B$ is false. Let $A$ and $B$ be sentences of $\mathcal L_T$. If $A$ and $B$ are true, i.e., $A$ and $B$ are in $L(U)$, there is an $n\in\mathbb N_0$ such that $A$ and $B$ are in $L_n(U)$. Thus $A\wedge B$ is in $L_n^2(U)$, and hence in $L(U)$, so that $A\wedge B$ is true. Conversely, assume that $A\wedge B$ is true, or equivalently, $A\wedge B$ is in $L(U)$. Then there is an $n\in\mathbb N_0$ such that $A\wedge B$ is in $L_n(U)$. If $A\wedge B$ is in $L_0(U)$, it is in $Z$. Thus $A\wedge B$ is true in the valuation of $L$. Because $L$ has property (ii), then $A$ and $B$ are true in the valuation of $L$, and hence also in the valuation (I) by Lemma \[L31\].\ Assume next that $n\in\mathbb N_0$ is the smallest number for which $A\wedge B$ is in $L_{n+1}(U)$.
Then $A\wedge B$ is in $L_{n}^2(U)$, so that $A$ and $B$ are in $L_{n}(U)$, and hence in $L(U)$, i.e., $A$ and $B$ are true. The above reasoning proves that $A\wedge B$ is true iff $A$ and $B$ are true. This result and the bivalence of $\mathcal L_T$, proved in Lemma \[L33\], imply that $A\wedge B$ is false iff $A$ or $B$ is false. Consequently, rule (t2) is valid. The proofs of the following rules are similar to the above proof of (t2). - (t3)  Disjunction: $A\vee B$ is true iff $A$ or $B$ is true. $A\vee B$ is false iff $A$ and $B$ are false. - (t4)  Conditional: $A\rightarrow B$ is true iff $A$ is false or $B$ is true. $A\rightarrow B$ is false iff $A$ is true and $B$ is false. - (t5)  Biconditional: $A \leftrightarrow B$ is true iff $A$ and $B$ are both true or both false. $A \leftrightarrow B$ is false iff $A$ is true and $B$ is false or $A$ is false and $B$ is true. Next we show that if $T_1\in \{T,\neg T\}$ then $\exists xT_1(x)$ and $\forall xT_1(x)$ have the following properties. - (t6)  $\exists xT_1(x)$ is true iff $T_1(\bf n)$ is true for some ${\bf n}\in N_T$, and false iff $T_1(\bf n)$ is false for every ${\bf n} \in N_T$. - (t7)  $\forall xT_1(x)$ is true iff $T_1(\bf n)$ is true for every ${\bf n}\in N_T$, and false iff $T_1(\bf n)$ is false for some ${\bf n}\in N_T$. To simplify proofs we derive results which imply that $T$ is a truth predicate and $\neg T$ is a non-truth predicate for $\mathcal L_T$.\ Let $A$ denote a sentence of $\mathcal L_T$. The valuation (I), rule (t1), the definitions of $Z_1(U)$, $Z_2(U)$ and $G(U)$, and the assumption $U=G(U)$ imply that $A$ is true iff $A$ is in $L(U)$ iff \#$A$ is in $G(U)=U$ iff $T(\left\lceil A\right\rceil)$ is in $Z_1(U)\subset L(U)$ iff $T(\left\lceil A\right\rceil)$ is true iff $\neg T(\left\lceil A\right\rceil)$ is false.
$A$ is false iff $A$ is in $F(U)$ iff $\neg A$ is in $L(U)$ iff \#\[$\neg A$\] is in $G(U)=U$ iff $\neg T(\left\lceil A\right\rceil)$ is in $Z_2(U)\subset L(U)$ iff $\neg T(\left\lceil A\right\rceil)$ is true iff $T(\left\lceil A\right\rceil)$ is false. The above results imply that the following results are valid for every sentence $A\in\mathcal L_T$. - (T)  $A$ is true iff $T(\left\lceil A\right\rceil)$ is true iff $\neg T(\left\lceil A\right\rceil)$ is false. $A$ is false iff $T(\left\lceil A\right\rceil)$ is false iff $\neg T(\left\lceil A\right\rceil)$ is true. Consider the validity of (t6) and (t7) when $T_1$ is $T$. Because $U$ is nonempty, $\exists xT(x)$ is in $L_0(U)$, and hence in $L(U)$. Thus $\exists xT(x)$ is by (I) a true sentence of $\mathcal L_T$.\ $T(\left\lceil A\right\rceil)$ is true iff (by (T)) $A$ is true iff (by (I)) $A$ is in $L(U)$. Thus $T({\bf n})$ is true for some ${\bf n}\in N_T$. The above results imply that $\exists xT(x)$ is true iff $T({\bf n})$ is true for some ${\bf n}\in N_T$. In view of this result and the bivalence of $\mathcal L_T$, one can infer that $\exists xT(x)$ is false iff $T({\bf n})$ is false for every ${\bf n}\in N_T$. This concludes the proof of (t6) when $T_1$ is $T$. $\neg\forall xT(x)$ is in $Z_1\subset L_0(U)$, and hence in $L(U)$, so that it is true. Thus $\forall xT(x)$ is false by (t1).\ $T(\left\lceil A\right\rceil)$ is false iff (by (T)) $A$ is false iff (by (I)) $A$ is in $F(U)$. Thus $T({\bf n})$ is false for some ${\bf n}\in N_T$.\ Consequently, $\forall xT(x)$ is false iff $T({\bf n})$ is false for some ${\bf n}\in N_T$. This result and the bivalence of $\mathcal L_T$ imply that $\forall xT(x)$ is true iff $T({\bf n})$ is true for every ${\bf n}\in N_T$. This proves (t7) when $T_1$ is $T$.
To show that (t6) is valid when $T_1$ is $\neg T$, notice first that $\exists x\neg T(x)$ is in $Z_1\subset L_0(U)$, and hence in $L(U)$, whence it is true.\ $\neg T(\left\lceil A\right\rceil)$ is true iff (by (t1)) $T(\left\lceil A\right\rceil)$ is false iff (by (T)) $A$ is false iff (by (I)) $A$ is in $F(U)$. Thus $\neg T({\bf n})$ is true for some ${\bf n}\in N_T$. Consequently, $\exists x\neg T(x)$ is true iff $\neg T({\bf n})$ is true for some ${\bf n}\in N_T$. This result and the bivalence of $\mathcal L_T$ imply that $\exists x\neg T(x)$ is false iff $\neg T({\bf n})$ is false for every ${\bf n}\in N_T$. This concludes the proof of (t6) when $T_1$ is $\neg T$. Next we prove (t7) when $T_1$ is $\neg T$. $\neg\forall x\neg T(x)$ is in $Z_1\subset L_0(U)$, and hence in $L(U)$, so that it is true. Thus $\forall x\neg T(x)$ is false by (t1).\ $\neg T(\left\lceil A\right\rceil)$ is false iff (by (t1)) $T(\left\lceil A\right\rceil)$ is true iff (by (T)) $A$ is true iff (by (I)) $A$ is in $L(U)$. Thus $\neg T({\bf n})$ is false for some ${\bf n}\in N_T$. From these results it follows that $\forall x\neg T(x)$ is false iff $\neg T({\bf n})$ is false for some ${\bf n}\in N_T$. This result and the bivalence of $\mathcal L_T$ imply that $\forall x\neg T(x)$ is true iff $\neg T({\bf n})$ is true for all ${\bf n}\in N_T$. Thus (t7) is valid when $T_1$ is $\neg T$. Next we show that the following rules are valid when $T_1,T_2\in \{T,\neg T\}$. - (tt6)  $\exists xT_1(\left\lceil T_2(\dot x)\right\rceil)$ is true iff $T_1(\left\lceil T_2(\bf n)\right\rceil)$ is true for some ${\bf n}\in N_T$. - $\exists xT_1(\left\lceil T_2(\dot x)\right\rceil)$ is false iff $T_1(\left\lceil T_2(\bf n)\right\rceil)$ is false for every ${\bf n}\in N_T$; - (tt7)  $\forall xT_1(\left\lceil T_2(\dot x)\right\rceil)$ is true iff $T_1(\left\lceil T_2(\bf n)\right\rceil)$ is true for every ${\bf n}\in N_T$.
- $\forall xT_1(\left\lceil T_2(\dot x)\right\rceil)$ is false iff $T_1(\left\lceil T_2(\bf n)\right\rceil)$ is false for some ${\bf n}\in N_T$. The sentences $\exists xT_1(\left\lceil T_2(\dot x)\right\rceil)$, where $T_1,T_2\in\{T,\neg T\}$, are in $Z_2$, whence they are in $L(U)$ and hence true. Applying (T) one can show that in every case there exists an ${\bf n}\in N_T$ such that $T_1(\left\lceil T_2(\bf n)\right\rceil)$ is true (${\bf n}=\left\lceil A\right\rceil$, where $A$, depending on the case, is in $L(U)$ or in $F(U)$). These results imply the truth part of (tt6) when $T_1$ and $T_2$ are in $\{T,\neg T\}$. The falsity part of (tt6) then follows from the bivalence of $\mathcal L_T$. $\forall xT_1(\left\lceil T_2(\dot x)\right\rceil)$ is false because its negation is in $Z_2$ and hence true. Using (T) it is easy to show that $T_1(\left\lceil T_2(\bf n)\right\rceil)$ is false for some ${\bf n}\in N_T$ whenever $T_1,T_2\in\{T,\neg T\}$. This proves the falsity part of (tt7), and also implies the truth part by the bivalence of $\mathcal L_T$. Since $L$ has properties (ii), for every predicate $P\in \mathcal P^m$ with arity $m\ge 1$ the following properties hold in the valuation of $L$, and hence in the valuation (I) by Lemma \[L31\]. - (p6)  The sentence of the form $q_1x_1\dots q_mx_m P(x_1,\dots,x_m)$ is true iff $P(b_1,\dots,b_m)$ is true for all $b_i\in N_P^i$ when $q_i$ is $\forall$, and for some $b_i\in N_P^i$ when $q_i$ is $\exists$. $q_1x_1\dots q_mx_m P(x_1,\dots,x_m)$ is false iff $P(b_1,\dots,b_m)$ is false for all $b_i\in N_P^i$ when $q_i$ is $\exists$, and for some $b_i\in N_P^i$ when $q_i$ is $\forall$.
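As an illustration, the quantifier rules (t6) and (t7) reduce the truth of $\exists xT_1(x)$ and $\forall xT_1(x)$ to a search over the name set $N_T$. The following minimal sketch models this reduction on a finite toy name set; the names, the truth set, and all identifiers below are invented for illustration and are not the formal objects of the construction.

```python
# A finite toy model of the quantifier rules (t6)/(t7): the names in N_T
# and the set of "true" names stand in for the numerals of N_T and the
# Goedel numbers in U. Everything here is illustrative only.

N_T = {"n1", "n2", "n3"}       # names of sentences (stand-ins for numerals)
true_names = {"n1", "n3"}      # names n for which T(n) is valuated as true

def T(n):
    """Truth value of the atomic sentence T(n)."""
    return n in true_names

def not_T(n):
    """Truth value of ~T(n); by bivalence, the complement of T(n)."""
    return not T(n)

# (t6): 'exists x T1(x)' is true iff T1(n) is true for some n in N_T.
def exists(T1):
    return any(T1(n) for n in N_T)

# (t7): 'forall x T1(x)' is true iff T1(n) is true for every n in N_T.
def forall(T1):
    return all(T1(n) for n in N_T)

# With some names true and some false, the outcomes match the text:
# the existential sentences are true, the universal ones false.
print(exists(T), exists(not_T))   # True True
print(forall(T), forall(not_T))   # False False
```

The falsity clauses of (t6) and (t7) then come for free from bivalence, exactly as in the proof: a sentence is false iff it is not true.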
If $P$ is a predicate of $\mathcal P$ with arity $m\ge 1$, and if $q_1,\dots,q_m$ is any $m$-tuple of quantifiers $\forall$ and $\exists$, then the sentence $q_1x_1\dots q_mx_m T(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil)$ is true iff it is in $L_0(U)$ iff it is in $Z_1^m$ iff the sentence $q_1x_1\dots q_mx_m P(x_1,\dots,x_m)$ is true iff (by (p6)) the sentence $P(b_1,\dots,b_m)$ is true in $L$, and hence also in $\mathcal L_T$, for all $b_i\in N_P^i$ when $q_i$ is $\forall$, and for some $b_i\in N_P^i$ when $q_i$ is $\exists$ iff (by (T)) the sentence $T(\left\lceil P(b_1,\dots,b_m)\right\rceil)$ is true for all choices of $b_i\in N_P^i$ when $q_i$ is $\forall$, and for some choices of $b_i\in N_P^i$ when $q_i$ is $\exists$. The above equivalences and the bivalence of $\mathcal L_T$ imply the following result. - (tp6)  The sentence $q_1x_1\dots q_mx_m T(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil)$ is true iff the sentence $T(\left\lceil P(b_1,\dots,b_m)\right\rceil)$ is true for all choices of $b_i\in N_P^i$ when $q_i$ is $\forall$, and for some choices of $b_i\in N_P^i$ when $q_i$ is $\exists$. -  $q_1x_1\dots q_mx_m T(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil)$ is false iff the sentence $T(\left\lceil P(b_1,\dots,b_m)\right\rceil)$ is false for all $b_i\in N_P^i$ when $q_i$ is $\exists$, and for some $b_i\in N_P^i$ when $q_i$ is $\forall$. Similarly, applying (T) and (p6) with $P$ replaced by $\neg P$, together with the bivalence of $\mathcal L_T$, one can show that $T$ can be replaced in (tp6) by $\neg T$ when $P$ is in $\mathcal P^m$ and $q_1,\dots,q_m$ is any $m$-tuple of quantifiers $\forall$ and $\exists$. The above proof shows that $\mathcal L_T$ has properties (ii). Theories of truth {#S4} ================= Now we are ready to present our main results. \[T1\] Let $L_0$ be a countable language which conforms to classical logic and does not have a truth predicate.
The language $\mathcal L_T$ defined in Definition \[D1\] and valuated by (I) has properties (i)–(iii), and has a definitional theory of truth (DTT for short). $T$ is a truth predicate, and $\neg T$ is a non-truth predicate. Properties (i)–(iii) given in the Introduction are valid for $\mathcal L_T$ by Lemma \[L31\] and Lemma \[L32\]. Thus $\mathcal L_T$ conforms to classical logic. The results (T) derived in the proof of Lemma \[L32\] and the biconditional rule (t5) imply that the sentence $T(\left\lceil A\right\rceil)\leftrightarrow A$ is true and the sentence $\neg T(\left\lceil A\right\rceil)\leftrightarrow A$ is false for every sentence $A$ of $\mathcal L_T$. $T$ and $\neg T$ are predicates of $\mathcal L_T$, and their domain $D_T$, the set of all sentences of $\mathcal L_T$, satisfies the condition presented in [@Fe p. 7] for the domains of truth predicates. Consequently, $T$ is a truth predicate and $\neg T$ is a non-truth predicate. The above results imply that $\mathcal L_T$ has a theory of truth. It is definitional, since truth values of sentences are defined by (I). Next we show that $\mathcal L_T$ has a semantical theory of truth under the following assumptions. - (s1)  The object language $L_0$ is countable, does not have a truth predicate, and every sentence of $L_0$ is meaningful and is valuated by its meaning either as true or as false. - (s2)  Standard meanings are assigned to logical symbols. - (s3)  The sentence $T(\bf n)$ means: ’the sentence whose Gödel number has $\bf n$ as its numeral is true’. First we prove some preliminary lemmas. \[L42\] Under the hypotheses (s1) and (s2) the object language $L_0$ has an extension $L$ which has properties (i)–(iii) given in the Introduction when its sentences are valuated by their meanings. In particular, $L_0$ conforms to classical logic. The object language $L_0$ is bivalent by (s1). The basic extension $L$ of $L_0$ which has properties (i)–(iii) when its sentences are valuated by their meanings is constructed as follows.
The first extension $L_1$ of $L_0$ is formed by adding those sentences $\neg A$, $A\vee B$, $A\wedge B$, $A\rightarrow B$, $A\leftrightarrow B$ and $q_1x_1\dots q_mx_m P(x_1,\dots,x_m)$ which are not in $L_0$ when $A$ and $B$ go through all sentences of $L_0$, $P$ through its predicates and their negations (added if necessary), and $(q_1,\dots,q_m)$ through $m$-tuples where the $q_i$’s are either $\forall$ or $\exists$. If there exist objects of $D$ without names, add terms to name them. When the sentences of $L_1$ are valuated by their meanings, the assumptions (s1) and (s2) ensure that properties (ii) and (iii) are valid. When the language $L_n$, $n\ge 1$, is defined, define the language $L_{n+1}$ by adding those sentences $\neg A$, $A\vee B$, $A\wedge B$, $A\rightarrow B$, $A\leftrightarrow B$ which are not in $L_n$ when $A$ and $B$ are sentences of $L_n$. When the sentences of $L_{n+1}$ are valuated by their meanings, properties (ii) and (iii) are valid by assumptions (s1) and (s2). The union $L$ of the languages $L_n$, $n\in \mathbb N_0$, also has properties (ii) and (iii). If $A$ and $B$ denote sentences of $L$, there exist $n_1$ and $n_2$ in $\mathbb N_0$ such that $A$ is in $L_{n_1}$ and $B$ is in $L_{n_2}$. Denoting $n=\max\{n_1,n_2\}$, both $A$ and $B$ are sentences of $L_n$. Thus the sentences $\neg A$, $A\vee B$, $A\wedge B$, $A\rightarrow B$ and $A\leftrightarrow B$ are in $L_{n+1}$, and hence in $L$. If $P$ is a predicate of $L_1$ with arity $m\ge 1$, then for each $m$-tuple $(q_1,\dots,q_m)$, where the $q_i$’s are either $\forall$ or $\exists$, the sentence $q_1x_1\dots q_mx_m P(x_1,\dots,x_m)$ is in $L_1$, so that it is in $L$. Thus $L$ also has properties (i). Consequently, the basic extension $L$ of $L_0$ constructed above has all properties (i)–(iii) when its sentences are valuated by their meanings. This implies that $L_0$ conforms to classical logic. \[l70\] Make the assumptions (s1)–(s3), and let $L$ be the language constructed in the proof of Lemma \[L42\].
Let $\mathcal L_T$, $\mathcal D$ and $U$ be as in Definition \[D1\], and let $W$ be the set of Gödel numbers of true sentences of $L$. Given a consistent subset $V$ of $\mathcal D$ which satisfies $W\subseteq V\subseteq U$, assume that every sentence of $\mathcal L_T$ whose Gödel number is in $V$ is true and not false by its meaning. Then every sentence of $L(V)$ (defined as $L(U)$ with $U$ replaced by $V$) is true and not false by its meaning. Because $V\subseteq U=G(U)$, every sentence whose Gödel number is in $V$ is in $\mathcal L_T$. First we prove that every sentence of $L_0(V)$ is true and not false by its meaning. By Lemma \[L42\] $L$ is bivalent. Thus every true sentence of $L$, i.e., every sentence of $Z$, is true and not false by its meaning. Let $A$ denote a sentence of $\mathcal L_T$. By (s3) the sentence $T(\left\lceil A\right\rceil)$ means that ’the sentence whose Gödel number has $\left\lceil A\right\rceil$ as its numeral, i.e., the sentence $A$, is true’. Thus, by its meaning, $T(\left\lceil A\right\rceil)$ is true iff $A$ is true and false iff $A$ is false. Assume that the Gödel number of a sentence $A$ is in $V$. Since $A$ is by hypothesis true and not false by its meaning, then $T(\left\lceil A\right\rceil)$ is true and not false by its meaning. This implies that the sentences of $Z_1(V)$ are true and not false by their meanings. By the standard meaning of negation the sentence $\neg T(\left\lceil A\right\rceil)$ is false and not true by its meaning.
Replacing $A$ by $T(\left\lceil A\right\rceil)$ and $\neg T(\left\lceil A\right\rceil)$, it follows from the above results that the sentences $T(\left\lceil T(\left\lceil A\right\rceil)\right\rceil)$ and $\neg T(\left\lceil \neg T(\left\lceil A\right\rceil)\right\rceil)$ are true and not false by their meanings, and the sentences $T(\left\lceil \neg T(\left\lceil A\right\rceil)\right\rceil)$ and $\neg T(\left\lceil T(\left\lceil A\right\rceil)\right\rceil)$ are false and not true by their meanings. Let $A$ denote a sentence of $\mathcal L_T$ such that the Gödel number of the sentence $\neg A$ is in $V$. $\neg A$ is by hypothesis true and not false by its meaning, so that $A$ is false and not true by its meaning since $V$ is consistent. Thus the sentence $T(\left\lceil A\right\rceil)$ is false and not true by its meaning, and the sentence $\neg T(\left\lceil A\right\rceil)$ is true and not false by its meaning. It then follows that the sentences of $Z_2(V)$ are true and not false by their meanings. Replacing $A$ by $T(\left\lceil A\right\rceil)$ and $\neg T(\left\lceil A\right\rceil)$ we then obtain that the sentences $T(\left\lceil T(\left\lceil A\right\rceil)\right\rceil)$ and $\neg T(\left\lceil \neg T(\left\lceil A\right\rceil)\right\rceil)$ are false and not true by their meanings, and the sentences $\neg T(\left\lceil T(\left\lceil A\right\rceil)\right\rceil)$ and $T(\left\lceil \neg T(\left\lceil A\right\rceil)\right\rceil)$ are true and not false by their meanings. The set $N_T$ of numerals is formed by the numerals $\left\lceil A\right\rceil$, where $A$ goes through all the sentences of $\mathcal L_T$.
Thus, by the results proved above, $T({\bf n})$, $T(\left\lceil T({\bf n})\right\rceil)$, $\neg T(\left\lceil\neg T({\bf n})\right\rceil)$, $\neg T({\bf n})$, $\neg T(\left\lceil T({\bf n})\right\rceil)$ and $T(\left\lceil\neg T({\bf n})\right\rceil)$ are for some ${\bf n}\in N_T$ true and not false by their meanings and for some ${\bf n}\in N_T$ false and not true by their meanings. These results and the standard meanings of quantifiers and negation imply that $\exists x T(x)$, $\exists xT(\left\lceil T(\dot x)\right\rceil)$, $\exists x\neg T(\left\lceil\neg T(\dot x)\right\rceil)$, $\exists x \neg T(x)$, $\exists x\neg T(\left\lceil T(\dot x)\right\rceil)$ and $\exists xT(\left\lceil\neg T(\dot x)\right\rceil)$ are true and not false by their meanings, and their negations are false and not true by their meanings, whereas $\forall x T(x)$, $\forall xT(\left\lceil T(\dot x)\right\rceil)$, $\forall x\neg T(\left\lceil\neg T(\dot x)\right\rceil)$, $\forall x \neg T(x)$, $\forall x\neg T(\left\lceil T(\dot x)\right\rceil)$ and $\forall xT(\left\lceil\neg T(\dot x)\right\rceil)$ are false and not true by their meanings, and their negations are true and not false by their meanings. By the above results the sentences of $Z_1$ and $Z_2$ are true and not false by their meanings. Let $P$ be a predicate in $\mathcal P^m$, and let $q_1,\dots,q_m$ be an $m$-tuple of quantifiers $\forall$ and $\exists$. Since the sentences $q_1x_1\dots q_mx_m P(x_1,\dots,x_m)$ and $P(b_1,\dots,b_m)$ are in $L$, they are by their meanings either true and not false, or false and not true. If $q_1x_1\dots q_mx_m P(x_1,\dots,x_m)$ is true and not false by its meaning, then $P(b_1,\dots,b_m)$ is true and not false by its meaning for all $b_i$ when $q_i$ is $\forall$, and for some $b_i$ when $q_i$ is $\exists$. Thus, by (s3), $T(\left\lceil P(b_1,\dots,b_m)\right\rceil)$ is true and not false by its meaning for all $b_i$ when $q_i$ is $\forall$, and for some $b_i$ when $q_i$ is $\exists$.
Consequently, $q_1x_1\dots q_mx_m T(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil)$ is true and not false by its meaning. By its meaning $\neg P(b_1,\dots,b_m)$ is true and not false iff $P(b_1,\dots,b_m)$ is false and not true iff, by (s3), $T(\left\lceil P(b_1,\dots,b_m)\right\rceil)$ is false and not true iff $\neg T(\left\lceil P(b_1,\dots,b_m)\right\rceil)$ is true and not false. Thus the sentence $q_1x_1\dots q_mx_m \neg T(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil)$ is true and not false by its meaning iff the sentence $q_1x_1\dots q_mx_m \neg P(x_1,\dots,x_m)$ is true and not false by its meaning. It follows from the above results that the sentences of $Z_1^m$ and $Z_2^m$ are true and not false by their meanings. Consequently, the sentences of $Z_3$ and $Z_4$ are true and not false by their meanings. The above results imply that every sentence of $L_0(V)$ is true and not false by its meaning. Thus the following property holds when $n=0$. - (h3)  Every sentence of $L_n(V)$ is true and not false by its meaning. Make the induction hypothesis: (h3) holds for some $n\in\mathbb N_0$. Given a sentence of $L_n^0(V)$, it is of the form $\neg(\neg A)$, where $A$ is in $L_n(V)$. $A$ is by (h3) true and not false by its meaning. Thus, by the standard meaning of negation applied twice, the sentence $\neg(\neg A)$, and hence the given sentence, is true and not false by its meaning. If a sentence is in $L_n^1(V)$, it is of the form $A\vee B$, where $A$ or $B$ is in $L_n(V)$. By (h3) at least one of the sentences $A$ and $B$ is true and not false by its meaning. Thus the sentence $A\vee B$, and hence the given sentence, is true and not false by its meaning. Similarly it can be shown that if (h3) holds, then every sentence of $L_n^k(V)$, where $2 \le k\le 8$, is true and not false by its meaning. The above results imply that under the induction hypothesis (h3) every sentence of $L_n^k(V)$, where $0 \le k\le 8$, is true and not false by its meaning.
It then follows from the definition of $L_{n+1}(V)$ that if (h3) is valid for some $n\in\mathbb N_0$, then every sentence of $L_{n+1}(V)$ is true and not false by its meaning. The first part of this proof shows that (h3) is valid when $n=0$. Thus, by induction, it is valid for all $n\in\mathbb N_0$. This result implies that every sentence of $L(V)$ is true and not false by its meaning. \[L9\] Let $U$ be the smallest consistent subset of $\mathcal D$ which satisfies $U=G(U)$. Then under the hypotheses (s1)–(s3) every sentence of $\mathcal L_T$ whose Gödel number is in $U$ is true and not false by its meaning. By Theorem \[T2\] the smallest consistent subset $U$ of $\mathcal D$ which satisfies $U=G(U)$ is the last member of the transfinite sequence $(U_\lambda)_{\lambda<\gamma}$ constructed in the proof of that theorem. We prove by transfinite induction that the following result holds for all $\lambda < \gamma$. - (H)  Every sentence of $\mathcal L_T$ whose Gödel number is in $U_\lambda$ is true and not false by its meaning. Make the induction hypothesis: There is a $\mu$ satisfying $0<\mu< \gamma$ such that (H) holds for all $\lambda < \mu$. Let $\lambda < \mu$ be given. Because $U_\lambda$ is consistent and $W\subseteq U_\lambda\subseteq U$ for every $\lambda < \mu$, it follows from the induction hypothesis and Lemma \[l70\] that every sentence of $L(U_\lambda)$ is true and not false by its meaning. This implies that (H) holds with $U_\lambda$ replaced by $G(U_\lambda)$, for every $\lambda <\mu$. Thus (H) holds when $U_\lambda$ is replaced by the union of those sets. But this union is $U_\mu$ by Theorem \[T2\] (G), whence (H) holds when $\lambda =\mu$. When $\mu =1$, then $\lambda<\mu$ iff $\lambda=0$. $U_0=W$, i.e., the set of Gödel numbers of true sentences of $L$. Since $L$, valuated by the meanings of its sentences, is bivalent by Lemma \[L42\], the sentences whose Gödel numbers are in $U_0$ are true and not false by their meanings.
This proves that the induction hypothesis is satisfied when $\mu=1$. By transfinite induction, the above proof shows that (H) holds for $U_\lambda$ whenever $\lambda <\gamma$. In particular (H) holds for the last member of $(U_\lambda)_{\lambda<\gamma}$, which is by Theorem \[T2\] the smallest consistent subset $U$ of $\mathcal D$ for which $U=G(U)$. This proves the assertion. The next result is a consequence of Lemma \[L42\], Lemma \[L9\] and Theorem \[T1\]. \[T0\] Under the hypotheses (s1)–(s3) the extension $\mathcal L_T$ of $L_0$ defined in Definition \[D1\] has a semantical theory of truth (shortly STT), when the valuation (I) is replaced in Theorem \[T1\] with the valuation of the sentences of $\mathcal L_T$ by their meanings. This valuation is equivalent to valuation (I), and the results of Theorem \[T1\] are valid for STT. Let $A$ denote a sentence of $\mathcal L_T$. $A$ is by Definition \[D1\] in $L(U)$ or in $F(U)$, where $U$ is the smallest consistent subset of $\mathcal D$ which satisfies $U=G(U)$. If $A$ is in $L(U)$, its Gödel number is in $G(U)=U$, whence it is by Lemma \[L9\] true and not false by its meaning. If $A$ is in $F(U)$, then $\neg A$ is in $L(U)$. Thus $\neg A$ is true and not false by its meaning, so that $A$ is by the standard meaning of negation false and not true by its meaning. Hence the valuation of $\mathcal L_T$ by meanings of its sentences is equivalent to valuation (I). In particular, $\mathcal L_T$ has properties (i)–(iii). Moreover, the sentence $T(\left\lceil A\right\rceil)\leftrightarrow A$ is true by its meaning and the sentence $\neg T(\left\lceil A\right\rceil)\leftrightarrow A$ is false by its meaning for every sentence $A$ of $\mathcal L_T$. These results imply that $T$ is a truth predicate and $\neg T$ is a non-truth predicate for $\mathcal L_T$. Consequently, $\mathcal L_T$ has a semantical theory of truth. 
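The set $U$ above is reached by iterating $G$ from below until the smallest fixed point $U=G(U)$ appears. On a finite toy universe the transfinite iteration of Theorem \[T2\] collapses to ordinary Kleene iteration, which the following sketch illustrates; the four coded sentences and the reference map are our own invented miniature for illustration, not the paper's formal system:

```python
def least_fixed_point(G, bottom=frozenset()):
    """Iterate U_0 = bottom, U_{n+1} = G(U_n) until stable.
    For a monotone G on a finite universe this reaches the least fixed point."""
    U = bottom
    while True:
        V = G(U)
        if V == U:
            return U
        U = V

# Toy universe of coded sentences (our invention):
#   0 : a true atomic sentence           -- grounded outright
#   1 : "T(0)"                           -- grounded once 0 is
#   2 : "T(1)"                           -- grounded once 1 is
#   3 : "T(3)"                           -- the truth-teller; never grounded
REFERS = {1: 0, 2: 1, 3: 3}

def G(U):
    """Codes of sentences whose truth is settled at stage U."""
    return frozenset({0}) | {n for n, m in REFERS.items() if m in U}
```

Note that $\{0,1,2,3\}$ is also a fixed point of $G$: the ungrounded truth-teller $3$ can consistently be added, but it never enters the iteration from below, which is why the *smallest* fixed point is the natural choice.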
Compositionality of truth in theories DTT and STT {#S6} ================================================= One of the desiderata introduced in [@Lei07] for theories of truth is that truth should be compositional. In this section we present some logical equivalences which theories DTT and STT of truth prove. \[L41\] Theories DTT and STT of truth presented in Theorems \[T1\] and \[T0\] prove the following logical equivalences when $A$ and $B$ are sentences of $\mathcal L_T$, and $P$ is in $\mathcal P$ or $P$ is $T$.  (a0) $T(\left\lceil\dots \left\lceil T(\left\lceil A\right\rceil)\right\rceil\dots\right\rceil)\leftrightarrow T(\left\lceil T(\left\lceil A\right\rceil)\right\rceil)\leftrightarrow T(\left\lceil A\right\rceil)\leftrightarrow A$.  (a1) $\neg T(\left\lceil A\right\rceil)\leftrightarrow T(\left\lceil \neg A\right\rceil)\leftrightarrow \neg A$.  (a2) $T(\left\lceil A\right\rceil)\vee T(\left\lceil B\right\rceil)\leftrightarrow T(\left\lceil A\vee B\right\rceil)\leftrightarrow A\vee B$.  (a3) $T(\left\lceil A\right\rceil)\wedge T(\left\lceil B\right\rceil) \leftrightarrow T(\left\lceil A\wedge B\right\rceil) \leftrightarrow A\wedge B$.  (a4) $(T(\left\lceil A\right\rceil)\rightarrow T(\left\lceil B\right\rceil)) \leftrightarrow T(\left\lceil A\rightarrow B\right\rceil) \leftrightarrow (A \rightarrow B)$.  (a5) $(T(\left\lceil A\right\rceil)\leftrightarrow T(\left\lceil B\right\rceil)) \leftrightarrow T(\left\lceil A\leftrightarrow B\right\rceil) \leftrightarrow (A \leftrightarrow B)$.  (a6) $\neg T(\left\lceil A\vee B\right\rceil)\leftrightarrow \neg (A\vee B)\leftrightarrow \neg A\wedge\neg B\leftrightarrow T(\left\lceil\neg A\right\rceil)\wedge T(\left\lceil\neg B\right\rceil)\leftrightarrow\neg T(\left\lceil A\right\rceil)\wedge\neg T(\left\lceil B\right\rceil)$.   
(a7) $\neg T(\left\lceil A\wedge B\right\rceil)\leftrightarrow \neg (A\wedge B)\leftrightarrow \neg A\vee\neg B\leftrightarrow T(\left\lceil\neg A\right\rceil)\vee T(\left\lceil\neg B\right\rceil)\leftrightarrow \neg T(\left\lceil A\right\rceil)\vee\neg T(\left\lceil B\right\rceil)$.  (a8) $\forall xT(\left\lceil P(\dot x)\right\rceil)\leftrightarrow T(\left\lceil \forall xP(x)\right\rceil)\leftrightarrow \forall xP(x)\leftrightarrow \neg \exists x \neg P(x)\leftrightarrow T(\left\lceil\neg\exists x \neg P(x)\right\rceil)\leftrightarrow\neg \exists x \neg T(\left\lceil P(\dot x)\right\rceil)$.  (a9) $\exists xT(\left\lceil P(\dot x)\right\rceil) \leftrightarrow T(\left\lceil \exists xP(x)\right\rceil)\leftrightarrow \exists xP(x)\leftrightarrow\neg \forall x \neg P(x)\leftrightarrow T(\left\lceil\neg\forall x \neg P(x)\right\rceil)\leftrightarrow\neg \forall x \neg T(\left\lceil P(\dot x)\right\rceil)$.  (a10) $\neg T(\left\lceil \forall xP(x)\right\rceil)\leftrightarrow T(\left\lceil\neg \forall x P(x)\right\rceil)\leftrightarrow\neg \forall x P(x)\leftrightarrow \exists x \neg P(x)\leftrightarrow T(\left\lceil\exists x \neg P(x)\right\rceil)$.  (a11) $\neg T(\left\lceil \exists xP(x)\right\rceil) \leftrightarrow T(\left\lceil\neg \exists x P(x)\right\rceil)\leftrightarrow \neg \exists x P(x)\leftrightarrow \forall x \neg P(x)\leftrightarrow T(\left\lceil\forall x \neg P(x)\right\rceil)$. The $T$-rule implies the equivalences of (a0). The first equivalences in (a1)–(a5) are easy consequences of rules (t1)–(t5) and the $T$-rule (cf. [@Hei18 Lemma 4.1]). Their second equivalences are consequences of the $T$-rule. The first and last equivalences of (a6) and (a7) are consequences of (a1). Their second equivalences are De Morgan’s laws of classical logic (cf. [@1]). The third equivalences of (a6) and (a7) follow from the $T$-rule. The first equivalences of (a8) and (a9) are easy consequences of rules (tp6) and (tp7) and the $T$-rule (cf. [@Hei18 Lemma 4.2]). The $T$-rule implies their second equivalences. 
The third equivalences are De Morgan’s laws for quantifiers (cf. [@1]). The fourth ones follow from the $T$-rule. De Morgan’s laws with $P(x)$ replaced by $T(\left\lceil P(\dot x)\right\rceil)$ imply the equivalence of the last and the first sentences. (a10) and (a11) are negations of some equivalences of (a8) and (a9). Let $L_0$ be a countable and bivalent first-order language with or without identity. It conforms to classical logic. If $P$ and $Q$ are predicates of $L_0$ with arity 1 and domain $D$, then $\neg P$, $P\vee Q$, $P\wedge Q$, $P\rightarrow Q$ and $P\leftrightarrow Q$ are predicates of $L_0$ with domain $D$. Replacing $P$ and/or $Q$ by some of them we obtain new predicates with domain $D$, and so on. Thus $P$ in (a8) and (a9) can be replaced by any one of these predicates. Their universal and existential quantifications are sentences of $L$. They are also sentences of $\mathcal L_T$. Any one of them can be the sentence $A$ and/or the sentence $B$ in results (a1)–(a7) derived above. Moreover, $P$ can be replaced by any one of those predicates in (a8)–(a11). Take a few examples. $\forall x T(\left\lceil P(\dot x)\rightarrow Q(\dot x)\right\rceil) \leftrightarrow T(\left\lceil \forall x (P(x)\rightarrow Q(x))\right\rceil)\leftrightarrow \forall x (P(x)\rightarrow Q(x))$. $\exists x T(\left\lceil P(\dot x)\wedge Q(\dot x)\right\rceil)\leftrightarrow T(\left\lceil\exists x(P(x)\wedge Q(x))\right\rceil)\leftrightarrow\exists x(P(x)\wedge Q(x))$. $\forall x T(\left\lceil P(\dot x)\rightarrow \neg Q(\dot x)\right\rceil) \leftrightarrow T(\left\lceil \forall x (P(x)\rightarrow \neg Q(x))\right\rceil)\leftrightarrow \forall x (P(x)\rightarrow \neg Q(x))$. $\exists xT(\left\lceil P(\dot x)\wedge\neg Q(\dot x)\right\rceil)\leftrightarrow T(\left\lceil\exists x(P(x)\wedge \neg Q(x))\right\rceil)\leftrightarrow\exists x(P(x)\wedge\neg Q(x))$. 
These equivalences correspond to the four Aristotelian forms: ’All $P$’s are $Q$’s’, ’some $P$’s are $Q$’s’, ’no $P$’s are $Q$’s’ and ’some $P$’s are not $Q$’s’ (cf. [@1]). Let $P$ be a predicate of $L_0$ with arity $m>1$, and let $q_1,\dots,q_m$ be any of the $2^m$ different $m$-tuples which can be formed from the quantifiers $\forall$ and $\exists$. Theories DTT and STT presented in Theorems \[T1\] and \[T0\] prove the following logical equivalences.\ $T(\left\lceil q_1 x_1\dots q_m x_mP(x_1,\dots,x_m)\right\rceil)\leftrightarrow q_1 x_1\dots q_m x_m P(x_1,\dots,x_m)\leftrightarrow q_1 x_1\dots q_m x_mT(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil)$,\ $T(\left\lceil q_1 x_1\dots q_m x_m\neg P(x_1,\dots,x_m)\right\rceil)\leftrightarrow q_1 x_1\dots q_m x_m\neg P(x_1,\dots,x_m)\leftrightarrow q_1 x_1\dots q_m x_m\neg T(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil)$. The $T$-rule implies the first equivalences, and the second equivalences are consequences of the bivalence of $\mathcal L_T$. An application of the $T$-rule proves the universal $T$-schema:\ (UT)  $\forall x_1\dots\forall x_m\big{(}T(\left\lceil P(\dot x_1,\dots,\dot x_m)\right\rceil)\leftrightarrow P(x_1,\dots,x_m)\big{)}$. \[Ex1\] Assume that $L_0$ is the language of arithmetic with its standard interpretation. Let $R(x,y)$ be the formula $x=2y$, and let $R$ be the corresponding predicate with domain $D_R=\mathbb N_0\times \mathbb N_0$. Then the truth theories DTT and STT of the extension $\mathcal L_T$ of $L_0$ prove the universal $T$-schema\ (UTR)  $\forall x\forall y\big{(}T(\left\lceil R(\dot x,\dot y)\right\rceil)\leftrightarrow R(x,y)\big{)}$,\ and the logical equivalences\ (q1) $q_1 x q_2 y T(\left\lceil R(\dot x,\dot y)\right\rceil)\leftrightarrow T(\left\lceil q_1 x q_2 y R(x,y)\right\rceil)\leftrightarrow q_1 x q_2 yR(x,y)$,\ (q2) $q_1 x q_2 y\neg T(\left\lceil R(\dot x,\dot y)\right\rceil)\leftrightarrow\neg T(\left\lceil q_1 x q_2 yR(x,y)\right\rceil)\leftrightarrow q_1 x q_2 y\neg R(x,y)$. 
The sentences in (q1) are true iff $q_1 q_2$ is $\forall \exists$ or $\exists \exists$, and false iff $q_1 q_2$ is $\forall \forall$ or $\exists \forall$. In (q2) the sentences are true iff $q_1 q_2$ is $\forall \forall$ or $\forall \exists$, and false iff $q_1 q_2$ is $\exists \forall$ or $\exists \exists$. The next example is a modification of an example presented in [@Ray18 p. 704]. \[Ex2\] Let $L_0$, $\mathcal L_T$, $T$ and $R(x,y)$ be as in Example 6.1. Denote by $T'$ the predicate with domain $\mathbb N_0$ corresponding to the formula $\exists yR(x,y)$. Let the sublanguage $L'_0$ of $L_0$ be formed by the syntax of $L_0$, the predicate $T'$ and the sentences $\exists y R({\bf n},y)$, where $\bf n$ goes through all numerals, the names of the natural numbers $n$. Choose $n$ to be the code number of the sentence $\exists y R({\bf n},y)$ in $L'_0$ for each numeral $\bf n$. If $A$ is a sentence of $L'_0$, i.e. $\exists y R({\bf n},y)$ for some numeral $\bf n$, then $n$ is the code number of $A$, so that $\left\lceil A\right\rceil$ is $\bf n$. Since $T'({\bf n})$ denotes the sentence $\exists y R({\bf n},y)$ for each numeral $\bf n$, then $T'(\left\lceil A\right\rceil)$, or equivalently $T'({\bf n})$, is true (respectively false) in $L'_0$ iff $\exists y R({\bf n},y)$, or equivalently $A$, is true (respectively false) in $L'_0$ iff (by definition of $R(x,y)$) $n$ is even (respectively odd). Thus $T'$ is a truth predicate of $L'_0$. The restriction of $T$ to $L'_0$ is also a truth predicate of $L'_0$. It is not equal to $T'$ because the coding of $L'_0$ is not the restriction of the Gödel numbering of $\mathcal L_T$, whereas $L_0$ can have that syntax. Remarks {#R3} ======= Theories DTT and STT conform to the seven desiderata presented in [@Lei07] for theories of truth (cf. [@Hei18 Theorem 4.2]). This challenges the current view (cf., e.g., [@Fe; @Lei07; @Raa]). Theories DTT and STT are also free from paradoxes. 
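The miniature truth predicate of Example \[Ex2\] can be modeled directly. In this sketch (the function names are ours, invented for illustration) the sentence $\exists y R({\bf n},y)$ is coded by $n$ itself, and $T'$ applied to a code simply evaluates the coded sentence; the witness search is trivial because $y=\lfloor n/2\rfloor$ is the only candidate witness:

```python
def R(x, y):
    """The formula x = 2y over the natural numbers."""
    return x == 2 * y

def sentence_is_true(n):
    """Truth of the sentence  E(n) := ∃y R(n, y).
    The only possible witness is y = n // 2."""
    return R(n, n // 2)

def T_prime(code):
    """T'(code): the sentence coded by `code`, namely E(code), is true."""
    return sentence_is_true(code)
```

As in the example, $T'({\bf n})$ holds exactly when $n$ is even, so $T'$ classifies the sentences of $L'_0$ correctly and is a truth predicate for this sublanguage.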
The fact that the languages $\mathcal L_T$ which have these theories of truth contain their own truth predicates seems to be in contrast to some of Tarski’s limiting theorems for theories of truth described, e.g., in [@Ray18]. Conformity of theories DTT and STT to the principles of classical logic can be vitiated by adding first-order syntax to $\mathcal L_T$. The object language $L_0$ is allowed to have that syntax. Theorems \[T1\] and \[T0\] imply that theories DTT and STT of truth together contain the theory DSTT of truth introduced in [@Hei18 Theorem 4.1] for languages $\mathcal L^0$ when those sentences of the form $A\vee B$, $A\rightarrow B$ and $\neg(A\wedge B)$ one of whose components does not have a truth value are deleted from $\mathcal L^0$. Otherwise the languages $\mathcal L_T$ for which theories DTT and STT of truth are introduced extend the languages $\mathcal L^0$. The number of predicates and compositional sentences is multiplied by means of the added non-truth predicate $\neg T$, and predicates of the object language $L_0$ which have several free variables. The family of those languages which conform to classical logic is considerably larger than the families of those object languages considered in [@Hei18]. For instance, the object language $L_0$ can be any language whose every sentence is valuated by its meaning either as true or as false. Every language $L_0$ which has properties (i)–(iii), e.g., every countable and bivalent first-order language with or without identity, conforms to classical logic. If $L_0$ is any countable language conforming to classical logic, and if $L'_0$ is any sublanguage of $L_0$ formed by the syntax of $L_0$, any nonempty subset of its sentences and any subset of its predicates, then $L'_0$ conforms to classical logic. For instance, the language $L'_0$ in Example \[Ex2\] conforms to classical logic. 
The set $U$ used in the definition of $\mathcal L_T$ is the smallest consistent set for which $U=G(U)$, where $G(U)$ is the set of Gödel numbers of sentences of $L(U)$. Thus $U$ is the minimal fixed point of the mapping $G:\mathcal C\to\mathcal C$, where $\mathcal C$ is the set of consistent sets of Gödel numbers of sentences of $\mathcal L$. Moreover $\mathcal L_T$ conforms to classical logic. Thus the sentences of $\mathcal L_T$ are grounded in the sense defined by Kripke in [@15 p. 18]. The language $\mathcal L_\sigma$ determined by the minimal fixed point in Kripke’s construction contains also sentences which don’t have truth values. For instance, the sentence $A\leftrightarrow T(\left\lceil A\right\rceil)$ does not have a truth value for every sentence $A$ of $\mathcal L_\sigma$. Thus a three-valued logic is needed in [@15], as well as in [@Fe] and in [@HH]. The only logic used here is classical. The equivalence of truth values of sentences in theories DTT and STT shows that the notion of ’grounded truth’ defined by valuation (I) conforms to the ’ordinary’ notion of truth. In the metalanguage used in the above presentation some concepts dealing with predicates and their domains are revised from those used in [@Hei18] so that they agree better with the corresponding concepts in informal languages of first-order logic (cf. [@1]). The circular reasoning used in [@Hei18] to show that $G(U)$ is consistent if $U$ is consistent is corrected in the proof of Lemma \[L201\]. Mathematics, especially ZF set theory, plays a crucial role in this paper. Metaphysical necessity of pure mathematical truths is considered in [@Lei18]. [99]{} Barker-Plummer, Dave, Barwise, Jon and Etchemendy, John (2011) [Language, Proof and Logic]{}, CSLI Publications, United States. Chomsky, Noam (1957) Syntactic Structures, The Hague: Mouton. Feferman, Solomon (2012) [*Axiomatizing truth. Why and how?*]{}, Logic, Construction, Computation (U. Berger et al. eds.), Ontos Verlag, Frankfurt, 185–200. 
Halbach, Volker & Horsten, Leon (2006) [*Axiomatizing Kripke’s Theory of Truth*]{}, Journal of Symbolic Logic, [**71**]{}, 2, 677–712. Heikkilä, Seppo (2018) [*A mathematically derived definitional/semantical theory of truth*]{}, [Nonlinear Studies]{}, [**25**]{}, 1, 173–189. Kripke, Saul (1975) [*Outline of a Theory of Truth*]{}, [Journal of Philosophy]{}, [**72**]{}, 690–716. Leitgeb, Hannes (2007) [*What Theories of Truth Should be Like (but Cannot be)*]{}, [Philosophy Compass]{}, [**2/2**]{}, 276–290. Leitgeb, Hannes (2018) [*Why Pure Mathematical Truths are Metaphysically Necessary. A Set-Theoretic Explanation*]{}, Synthese, https://doi.org/10.1007/s11229-018-1873-x. Raatikainen, Panu (2019) [*Truth and Theories of Truth (penultimate draft)*]{}, The Cambridge Handbook of Philosophy of Language, Cambridge University Press, to appear. Ray, Greg (2018) [*Tarski on the Concept of Truth*]{}, The Oxford Handbook of Truth, Michael Glanzberg ed. Oxford: Oxford University Press, 695–717.
--- abstract: | The identity $\frac{j}{n}\binom{kn}{n+j} =(k-1)\binom{kn-1}{n+j-1}-\binom{kn-1}{n+j}$ shows that $\frac{j}{n}\binom{kn}{n+j} $ is always an integer. Here we give a combinatorial interpretation of this integer in terms of lattice paths, using a uniformly distributed statistic. In particular, the case $j=1,k=2$ gives yet another manifestation of the Catalan numbers. --- David Callan\ Department of Statistics\ University of Wisconsin-Madison\ 1300 University Ave\ Madison, WI  53706-1532\ <[email protected]>\ Introduction ============ For each pair of integers $j\ge 1$ and $k\ge 2$, the sequence $\big(\frac{j}{n}\binom{kn}{n+j}\big)_{n\ge \frac{j}{k-1}}$ consists of integers since $\frac{j}{n}\binom{kn}{n+j} =(k-1)\binom{kn-1}{n+j-1}-\binom{kn-1}{n+j}$. For $j=1,k=2$ this sequence is the Catalan numbers; the cases $j=1,k=3$ and $j=1,k=4$ also appear in the On-Line Encyclopedia of Integer Sequences. In this note, we give a combinatorial interpretation for all $j,k$ in terms of lattice paths. We first treat the case $j=1$, which is simpler (§2), specialize to $k=2$ (§3), then generalize to larger $j$ (§4), and end with some remarks (§5). Case *j* = 1 ============ Let $\p_{n,k}$ denote the set of lattice paths of $n+1$ upsteps $U=(1,1)$ and $(k-1)n-1$ downsteps $D=(1,-1)$. Clearly, $\v \p_{n,k}\v=\binom{kn}{n+1}\,:$ choose locations for the upsteps among the total of $kn$ steps. A path in $\p_{n,k}$ has $kn+1$ vertices or *points*: its initial and terminal points and $kn-1$ interior points. Define the **baseline** of $P\in \p_{n,k}$ to be the line joining its initial and terminal points. For $P\in \p_{n,k}$ label its points $0,1,2,\ldots,kn$ left to right and define the **base points** of $P$ to be those whose label is divisible by $k$. An example with $n=5$ and $k=3$ is illustrated (base points indicated by a heavy dot). 
*(Figure: a path in $\p_{5,3}$, with its baseline drawn and its base points marked by heavy dots.)* Consider the statistic $X$ on $\p_{n,k}$ defined by $X=\#\:$interior base points lying strictly above the baseline. In the illustration $X=3$ (points 6, 9 and 12). **Theorem.** The statistic $X$ on $\p_{n,k}$ is uniformly distributed over $0,1,2,\ldots,n-1$. The following count is an immediate consequence of the theorem by considering the paths with $X=n-1$. **Corollary.** $\frac{1}{n}\binom{kn}{n+1}$ is the number of paths in $\p_{n,k}$ all of whose interior base points lie strictly above its baseline. **Proof of Theorem** Consider the operation “rotate left $k$ units” on $\p_{n,k}$ defined by transferring the initial $k$ steps of a path in $\p_{n,k}$ to the end. This rotation operation partitions $\p_{n,k}$ into rotation classes. We claim (i) each such rotation class has size $n$, and (ii) $X$ assumes the values $0,1,2,\ldots,n-1$ in turn on the paths of a rotation class. The first claim follows from the following lemma. **Lemma.** Given $P \in \p_{n,k}$, the only base points lying on its baseline are its initial and terminal points. **Proof of Lemma** Suppose $ik,\ 0\le i \le n$, is a base point on the baseline. Since the slope of the baseline is $-\frac{(k-2)n-2}{kn}$, this says that the point with coordinates $(ik,-i \frac{(k-2)n-2}{n})$ lies on $P$ (taking the initial point of $P$ as origin). For each point $(x,y)$ on $P$, $x$ and $y$ must have the same even/odd parity. Hence $ik \equiv i \frac{(k-2)n-2}{n} \pmod 2$. 
Simplifying, we find $2i \equiv \frac{2i}{n} \pmod 2 \Rightarrow i \equiv \frac{i}{n} \pmod 1 \Rightarrow n\mid i \Rightarrow i=0$ or $n$, the last implication because $0\le i \le n$. To prove the second claim, we exhibit a bijection from the paths in $\p_{n,k}$ with $X=n-1$ to those with $X=i$ for each $i\in [0,n-1]$. Given $P\in \p_{n,k}$ with $X=n-1$, draw its baseline $L$. The entire rotation class of $P$ can be viewed in a single diagram: draw a second contiguous copy of $P$ as illustrated, then join the two occurrences of each interior base point. Together with $L$, this results in $n$ parallel line segments (no two collinear, by the Lemma), each the baseline of a path in the rotation class of $P$. Label the lines (at their endpoints) 0 through $n-1$ from top to bottom. *(Figure: a path $P\in\p_{5,3}$ followed by a second contiguous copy of itself; the $n=5$ parallel baselines are labeled $0$ through $4$ from top to bottom.)* Now the path $Q$ with baseline labeled $i$ has the form $BA$ when $P$ is decomposed as $AB$ with $A$ an initial segment of $P$. Hence $Q$ is in $\p_{n,k}$ and has $X=i$ since the interior base points of $Q$ lying (strictly) above its baseline are precisely those labeled $0,1,\ldots,i-1$. The path $B$ can be retrieved in $Q$ as the initial subpath of $Q$ terminating at its “lowest” point where “lowest” is measured relative to the parallel lines, and so the mapping is invertible. 
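The theorem is small enough to check by brute force for small parameters. The sketch below is our own code, not from the paper: it enumerates all of $\p_{n,k}$, computes $X$ directly from the definition, and tallies the distribution, which should assign each value $0,\ldots,n-1$ exactly $\frac{1}{n}\binom{kn}{n+1}$ paths:

```python
from itertools import combinations

def X_distribution(n, k):
    """Tally X = (number of interior base points strictly above the baseline)
    over all lattice paths with n+1 upsteps and (k-1)n-1 downsteps."""
    ups, total = n + 1, k * n
    rise = ups - (total - ups)            # net height change of every path
    dist = [0] * n                        # X takes values 0, ..., n-1
    for up_positions in combinations(range(total), ups):
        up_set = set(up_positions)
        y = x_stat = 0
        for step in range(1, total):      # interior points 1, ..., kn-1
            y += 1 if (step - 1) in up_set else -1
            # base point (label divisible by k) strictly above the baseline
            # y = (rise/total) * x, compared without division:
            if step % k == 0 and y * total > rise * step:
                x_stat += 1
        dist[x_stat] += 1
    return dist
```

For example, `X_distribution(3, 2)` returns `[5, 5, 5]` and `X_distribution(2, 3)` returns `[10, 10]`, matching $\frac{1}{n}\binom{kn}{n+1}$ in each slot.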
The diagram used in this proof is reminiscent of the one used in *Concrete Mathematics* [@gkp p.360] to prove Raney’s Lemma, also known as the Cycle Lemma [@zaks; @sw]. Special Case ============ The case $j=1,k=2$ gives a new interpretation of the Catalan numbers: $C_{n}$ is the number of lattice paths of $n+1$ upsteps and $n-1$ downsteps such that the interior even-numbered vertices all lie strictly above the line joining the initial and terminal points. The $C_{3}=5$ paths with $n=3$ are shown. *(Figure: the 5 paths in $\p_{3,2}$.)* General Case ============ The general case $j\ge 1$ is similar but a little more complicated. Let $\p_{n,k,j}$ denote the set of paths of $kn$ upsteps/downsteps of which $n+j$ are upsteps. Thus $\v \p_{n,k,j} \v =\binom{kn}{n+j}$. The “$j$” factor in the numerator of $\frac{j}{n}\binom{kn}{n+j}$ requires that we consider the Cartesian product $\p_{n,k,j}^{*}:= \p_{n,k,j} \times [j]$ whose size is $j\binom{kn}{n+j}$. 
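The Catalan interpretation of §3 can be confirmed by exhaustive search for small $n$. The sketch below is our own code (not from the paper); it counts the paths satisfying the stated condition and reproduces the Catalan numbers, in particular the $5$ paths shown for $n=3$:

```python
from itertools import combinations

def count_special(n):
    """Count lattice paths of n+1 upsteps and n-1 downsteps whose interior
    even-numbered vertices lie strictly above the line from (0,0) to (2n,2),
    i.e. above y = x/n."""
    total = 2 * n                       # number of steps; vertices 0, ..., 2n
    count = 0
    for up_positions in combinations(range(total), n + 1):
        up_set = set(up_positions)
        y, ok = 0, True
        for vertex in range(1, total):  # interior vertices only
            y += 1 if (vertex - 1) in up_set else -1
            # even-numbered vertex not strictly above y = x/n fails the test
            if vertex % 2 == 0 and y * n <= vertex:
                ok = False
                break
        count += ok
    return count
```

Running this for $n=1,\ldots,5$ gives $1, 2, 5, 14, 42$, the Catalan numbers $C_n$.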
Given $(P,i)\in \p_{n,k,j}^{*}$, introduce an $x$-$y$ coordinate system with origin at the initial point of $P$, identify the parameter $i$ with the line segment joining $(0,2(i-1))$ and $(kn,-(k-2)n+2i)$, and call this the baseline for $(P,i)$; it coincides with the previous notion of baseline when $j=1$, forcing $i=1$. It is easy to see that, once again, the baseline never contains an interior base point of $P$. Define $X$ on $(P,i)\in \p_{n,k,j}^{*}$ by $X=\#\ $interior base points of $P$ lying strictly above the baseline. We first show that $X$ is uniformly distributed over $0,1,2,\ldots,n-1$. It is no longer true that orbits in $\p_{n,k,j}$ under the “rotate left by $k$” operator $R$ all have size $n$, but no matter: in general, $P\in \p_{n,k,j}$ uniquely has the form $P_{1}^{r}$ with $P_{1}$ of length divisible by $k$ and $r$ maximal. Then $r$ necessarily divides $n$ and $j$, and the orbit of $P$ under $R$ has size $n/r$. In case $r>1$, everything will merely be cut down by a factor of $r$. Declare two elements $(P_{1},i_{1})$ and $(P_{2},i_{2})$ to be *rotation-equivalent* if $P_{1}$ and $P_{2}$ are in the same rotation class under $R$ (regardless of $i_{1}$ and $i_{2}$). As before, all elements of a rotation-equivalence class can be seen in a single diagram as illustrated. 
*(Two figures: rotation-equivalence classes displayed in single diagrams; each panel shows a path followed by a contiguous copy of itself, with the parallel baselines drawn and labeled at their endpoints.)* Label the baselines (there are $jn/r$ of them; both illustrations have $r=1$) at their endpoints as follows (each of $0,1,\ldots,n-1$ will be the label on $j/r$ endpoints). First take the highest endpoint $p$ and consider the set of all endpoints lying weakly to the left of the vertical line through $p$. Since there are $j-1$ endpoints directly below $p$, this set has size at least $j$. 
Place label 0 on the $j/r$ highest points in this set, favoring points to the left if a choice must be made between points at the same height. Then take the highest unlabeled endpoint, consider the set of all unlabeled endpoints lying weakly to the left of its vertical line, and place the label 1 on the $j/r$ highest points in this set, again favoring “left”. Continue in like manner until all endpoints are labeled. Then, for each $i=0,1,\ldots,n-1$, the $j/r$ objects in the rotation-equivalence class with label $i$ all have $X=i$, and the uniform distribution of $X$ follows. By considering the objects in $\p_{n,k,j}^{*}$ with $X=n-1$, we obtain our main result. Suppose $j\ge 1,\ k\ge 2,$ and $n\ge\frac{j}{k-1}$. Then $\frac{j}{n}\binom{kn}{n+j}$ is the number of lattice paths of $n+j$ upsteps $(1,1)$ and $kn-(n+j)$ downsteps $(1,-1)$ which $($i$\,)$ start at $(0,-2i)$ for some $i\ge 0$, and $($ii$\,)$ have all interior base points (strictly) above the line through the origin of slope $-\frac{(k-2)n-2}{kn}$. Concluding Remarks ================== The main theorem can be generalized somewhat further (essentially the same proof): $\frac{d}{n}\binom{an}{cn+d}$ is the number of lattice paths of $cn+d$ upsteps and $an-(cn+d)$ downsteps which (i) start at $(0,-2i)$ for some $i\ge 0$, and (ii) have all interior base points (strictly) above the line through the origin of slope $-\frac{(a-2c)n-2}{an}$. There is also a well-known generalization of the Catalan numbers in a different direction: $\frac{j}{kn+j}\binom{kn+j}{n}$ is the number of lattice paths of $n$ steps east $(1,0)$ and $(k-1)n+j-1$ steps north $(0,1)$ that start at the origin and lie weakly above the line $y=(k-1)x$. One way to prove this (slightly generalizing the approach in [@woan]) is as follows. Consider the set $\p_{n,k,j}$ of paths consisting of $n$ steps east and $(k-1)n+j$ steps north. 
Measuring “height” of a point above $y=(k-1)x$ as the perpendicular distance to $y=(k-1)x$, define $j$ *high points* for a path $P\in \p_{n,k,j}$: the first high point is the leftmost of the highest points on the path, the second high point is the leftmost of the next highest points of the path, and so on. Note that all $j$ high points necessarily lie strictly above $y=(k-1)x$. Mark any one of these high points to obtain the set $\p_{n,k,j}^{*}$ of marked $\p_{n,k,j}$-paths. Clearly, $\v \p_{n,k,j}^{*} \v = j\binom{kn+j}{n}$. Label the $kn+j+1$ points on a marked path $P^{*} \in \p_{n,k,j}^{*}$ in order $0,1,2,\ldots,kn+j$ starting at the origin. Set $X=$ label of the marked high point. Then $X$ is uniformly distributed over $1,2,\ldots,kn+j$. The paths with $X=kn+j$ yield the desired paths by deleting the last step (necessarily a north step) and rotating $180^{\circ}$. All the above generalizations of the Catalan numbers are incorporated in the expression $$\frac{ad-bc}{an+b}\binom{an+b}{cn+d}=(a-c)\binom{an+b-1}{cn+d-1}-c\binom{an+b-1}{cn+d}$$ and it would be interesting to find a unified combinatorial interpretation for it. [99]{} R. L. Graham, D. E. Knuth and O. Patashnik, *Concrete Mathematics* (2nd edition), Addison-Wesley, 1994. N. Dershowitz and S. Zaks, The cycle lemma and some applications, *European J. of Comb.* [**11**]{}, 1990, 35–40. H. S. Snevily and D. B. West, The bricklayer problem and the strong cycle lemma, *Amer. Math. Monthly* [**105**]{}, 1998, 131–143. Wen-jin Woan, Uniform partitions of lattice paths and Chung-Feller generalizations, *Amer. Math. Monthly* [**108**]{} (2001), no. 6, 556–559. 2000 [*Mathematics Subject Classification*]{}: 05A15. *Keywords:* Catalan, uniformly distributed, $k$-divisible, baseline, Cycle Lemma.
--- abstract: | We propose a cosmological scenario for the formation and evolution of dwarf spheroidal galaxies (dSphs), satellites of the Milky Way (MW). An improved version of the semi-analytical code GAMETE (GAlaxy Merger Tree & Evolution) is used to follow the dSphs evolution simultaneously with the MW formation, matching the observed properties of both. In this scenario dSph galaxies represent fossil objects virializing at $z=7.2\pm 0.7$ (i.e. in the pre-reionization era $z>z_{rei}=6$) in the MW environment which at that epoch has already been pre-enriched up to \[Fe/H\]$\sim -3$; their dynamical masses are in the narrow range $M=(1.6\pm 0.7)\times10^8 \Msun$, although a larger spread might be introduced by a more refined treatment of reionization. Mechanical feedback effects are dramatic in such low-mass objects, causing the complete blow-away of the gas $\sim 100$ Myr after the formation epoch: $99\%$ of the present-day stellar mass, $M_*=(3\pm 0.7)\times 10^6 \Msun$, forms during this evolutionary phase, i.e. their age is $>13$ Gyr. Later on, star formation is re-ignited by returned gas from evolved stars and a second blow-away occurs. The cycle continues for about 1 Gyr during which star formation is intermittent. At $z=0$ the dSph gas content is $M_g=(2.68\pm 0.97)\times 10^4 \Msun$. Our results match several observed properties of Sculptor, used as a template of dSphs: (i) the Metallicity Distribution Function; (ii) the Color Magnitude Diagram; (iii) the decrement of the stellar \[O/Fe\] abundance ratio for \[Fe/H\]$>-1.5$; (iv) the dark matter content and the light-to-mass ratio; (v) the HI gas mass content. 
author: - | Stefania Salvadori$^{1}$, Andrea Ferrara$^{1}$ & Raffaella Schneider$^{2}$\ $^1$SISSA/International School for Advanced Studies, Via Beirut 4, 34100 Trieste, Italy\ $^2$INAF/Osservatorio Astrofisico di Arcetri, Largo Enrico Fermi 5, 50125 Firenze, Italy bibliography: - 'biblio.bib' title: Life and times of dwarf spheroidal galaxies --- \[firstpage\] stars: formation, population II, supernovae: general - cosmology: theory - galaxies: evolution, stellar content - Background ========== The lack of a comprehensive scenario for the formation and evolution of dwarf spheroidal galaxies (dSphs) contrasts with the large amount of data available for these nearby Local Group satellites. Several authors have focused on different aspects of dSph evolution and on the observed properties related to them, giving important contributions to our current understanding of these puzzling objects. However, many questions remain unanswered, and models able to match several different observed properties simultaneously are still missing. Recent observations by Helmi et al. (2006) opened new questions on the origin of dSph galaxies. By observing the stellar Metallicity Distribution Function (MDF) in four nearby dSphs, they found a significant lack of stars with \[Fe/H\]$<-3$. On the contrary, the Galactic halo MDF shows a low-\[Fe/H\] tail extending down to \[Fe/H\]$\sim -4$ (Beers & Christlieb 2006) and even below (Christlieb et al. 2002, 2006; Frebel et al. 2005; Christlieb 2007). Does this result imply that dSph and MW progenitors are different?
Is the dSph birth environment pre-enriched, or does the Initial Mass Function (IMF) behave differently in Galactic building blocks and in dSphs at the earliest times? If pre-enrichment is to solve the problem, it must be completed before the beginning of star formation (SF) in dSphs, i.e. $>13$ Gyr ago. The dSph star formation histories (SFHs) derived from the analysis of the observed color-magnitude diagram (CMD), in fact, show that all dSphs have an ancient stellar population formed more than 13 Gyr ago (Grebel & Gallagher 2004). Which mechanism can be responsible for such rapid metal enrichment? In addition to these new open questions, there is still a crucial unresolved problem related to the dSph SFHs. Although all dSphs display an ancient stellar population, their SFHs appear to be considerably different. In most of them the bulk of the stars consists of ancient stars ($>10$ Gyr old), i.e. their SF activity is concentrated within the first Gyrs (Dolphin et al. 2005). However, Fornax, LeoII and Sagittarius show very different features: in these objects the bulk of the stars was formed much less than 10 Gyr ago, and their SF activity proceeds until $z\sim 1$ or even lower redshifts (Grebel & Gallagher 2004). Carina, finally, exhibits a clearly episodic SFH, with a pause of several Gyrs after the old population formed and a massive formation of stars younger than 10 Gyr (Smecker-Hane et al. 1994; Hurley-Keller, Mateo & Grebel 1999). As pointed out by Grebel & Gallagher (2004), such a variety of SFHs cannot be explained by a drop in the SF activity in response to photoionization; only Carina, in fact, displays the gap expected if the SF were truncated during reionization and then restarted. If reionization cannot explain the inferred variety of SFHs, local processes, such as mechanical feedback, tidal stripping, gas infall etc., must be invoked in order to produce these observational features.
However, it is unlikely that local processes alone can lead to different SFHs if the dSph host halo mass has a “universal” value, as predicted by Mateo et al. (1998), Gilmore et al. (2007) and Walker et al. (2007), and if the assumed initial gas-to-mass ratio is taken to be equal to the cosmic mean value. Will more sophisticated observations and analyses reveal a spread in the dSph DM content? If this is the case, different host halo masses could be more easily linked to different SFHs by internal physical processes. In addition to these issues there are a number of related aspects that deserve attention, such as the observed elemental abundance patterns (e.g. the \[$\alpha$/Fe\] ratio and the s-element abundances). Despite all of these unresolved questions, the relevance of stellar feedback for the evolution of dSph galaxies (Ferrara & Tolstoy 2002 and references therein) can be considered a milestone of our current understanding of these objects, given the general consensus among theoretical studies and the continuous confirmation by observational evidence. Given their low DM content and therefore their shallow potential wells, mechanical feedback driven by SN explosions has dramatic effects in dSphs. Such intense mechanical feedback activity is indirectly confirmed by the low light-to-dark matter ratios observed in dSphs (Mateo et al. 1998; Gilmore et al. 2007). Moreover, observations of neutral hydrogen (HI) reveal that the Local Group dSphs are all relatively HI-poor, suggesting that most of the gas may have been removed from these galaxies. The Sculptor dSph is one of the few with HI emission; a lower limit of $M_{HI}> 3\times 10^4\Msun$ has been derived by Carignan et al. (1998) using radio observations. In addition, winds are presumably metal-enhanced, as suggested by several theoretical studies (Vader 1986; Mac Low & Ferrara 1999; Fujita et al. 2004) and by recent X-ray observations of the starburst galaxy NGC 1569 (Martin, Kobulnicky & Heckman 2002).
Many theoretical works have addressed individual aspects of dSph evolution: - [Origin & DM content of dSphs]{}: Bullock et al. (2001); Kravtsov et al. (2004); Ricotti & Gnedin (2005); Gnedin & Kravtsov (2006); Read, Pontzen & Viel (2006); Moore et al. (2006); Metz & Kroupa (2007). - [DSph MDFs]{}: Ripamonti et al. (2006); Lanfranchi & Matteucci (2007). - [SFH & abundance ratios of dSphs]{}: Ikuta & Arimoto (2002); Fenner et al. (2006); Lanfranchi & Matteucci (2007); Stinson et al. (2007). - [Stellar feedback & dSph gas content]{}: Ferrara & Tolstoy (2000); Tassis et al. (2003); Fujita et al. (2004); Lanfranchi & Matteucci (2007). In this study we analyze the formation and evolution of dSph galaxies, satellites of the MW, in their cosmological context by using the improved GAlaxy MErger Tree & Evolution code (GAMETE), which allows us to build up the SFH and chemical enrichment of the MW along its hierarchical merger tree (Salvadori, Schneider & Ferrara 2007, hereafter SSF07). This approach gives a self-consistent description of dSph evolution and MW formation: dSphs form out of their natural birth environment, the Galactic Medium (GM), whose metallicity evolution is completely determined by the history of SF and by mechanical feedback processes along the build-up of the Galaxy. The star formation and mechanical feedback efficiencies of dSphs are assumed to be the same as for all the Galactic building blocks and are calibrated to reproduce the observable properties of the MW. The model allows us to predict the following observable properties of a typical dSph galaxy: (i) the formation epoch; (ii) the MDF; (iii) the SFH and the derived CMD; (iv) the stellar \[O/Fe\] abundance ratio as a function of \[Fe/H\]; (v) the DM content; (vi) the stellar-to-mass ratio; (vii) the final gas content. These are compared with observations of Sculptor, which is the best-studied nearby dSph.
A global scenario for the formation and evolution of dSph galaxies is presented. The plan of the paper is the following: a recap of the general properties of the improved code GAMETE is presented in Sec. 2. The life of a dSph galaxy is described in the subsequent sections: the birth environment and the selection criteria from the MW building blocks are presented in Sec. 3; the evolution until the blow-away, driven by mechanical feedback, is described in Sec. 4; the subsequent and final stages of the dSph life are traced in Sec. 5. Sec. 6 is devoted to a comparison between model results and some observational properties of the Sculptor dSph. Finally, a summary and discussion of the main results is given in Sec. 7. Model Description ================= In this Section, we first summarize the main features of the model introduced in SSF07; following that, we discuss the modifications and improvements made for the purpose of this work. Summary of the model -------------------- The semi-analytic code GAMETE used in SSF07 allows us to follow the star formation history and chemical enrichment of the MW throughout its hierarchical merger tree. Its main features can be summarized along the following points (for detailed explanations see SSF07). The code reconstructs the hierarchical merger history of the MW using a Monte Carlo algorithm based on the extended Press & Schechter theory (Bond et al. 1991; Lacey & Cole 1993); it adopts a [*binary*]{} scheme with [*accretion*]{} (Cole et al. 2000; Volonteri, Haardt & Madau 2003) to decompose the present-day MW dark matter halo ($M_{MW}\sim 10^{12}\Msun$, Binney & Merrifield 1998) into its progenitors, running backward in time up to $z=20$. At any time step a halo of mass $M_0$ can either lose part of its mass (corresponding to a cumulative fragmentation into haloes with $M<M_{res}$) or lose mass and fragment into two progenitor haloes with random masses in the range $M_{res}<M<M_0/2$.
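The bookkeeping of the binary scheme with accretion can be illustrated with a toy fragmentation loop. This is a minimal sketch and not the GAMETE algorithm: the mass-loss fraction and the splitting probability below are arbitrary placeholders (GAMETE draws these from extended Press–Schechter theory), but the sketch shows how mass is shared between resolved progenitors and the sub-resolution Galactic Medium at every backward step.

```python
import random

M_RES = 1.0e7        # toy resolution mass in Msun (placeholder value)

def step(mass, rng):
    """One illustrative backward step of the binary scheme with accretion:
    a halo sheds some mass to the Galactic Medium (the cumulative
    sub-resolution fragmentation) and either stays whole or splits into
    two progenitors with random masses in (M_RES, mass/2). The 0-20%
    mass-loss fraction and the 50% split probability are toy choices."""
    lost = rng.uniform(0.0, 0.2) * mass
    remaining = mass - lost
    if remaining / 2 <= M_RES or rng.random() < 0.5:
        return [remaining], lost
    m1 = rng.uniform(M_RES, remaining / 2)
    return [m1, remaining - m1], lost

rng = random.Random(42)
haloes, gm = [1.0e12], 0.0          # present-day MW halo mass in Msun
for _ in range(10):                 # ten backward time steps
    progenitors = []
    for m in haloes:
        frags, lost = step(m, rng)
        gm += lost
        for f in frags:
            if f > M_RES:
                progenitors.append(f)
            else:
                gm += f             # fully fragmented below resolution
    haloes = progenitors

total = sum(haloes) + gm            # mass bookkeeping is exact
```

By construction every fragment of a split exceeds $M_{res}$, so mass is only transferred to the Galactic Medium through the explicit mass-loss term, and the resolved progenitors plus the GM always sum to the present-day halo mass.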
The mass below the resolution limit accounts for the [*Galactic Medium*]{} (GM), which represents the mass reservoir into which haloes are embedded. During the star formation history of the MW, the progenitor haloes accrete material from the GM and virialize out of it. We assume that feedback effects rapidly suppress star formation in the first mini-haloes and that only Ly$\alpha$ cooling haloes ($T_{vir}>10^4$K) contribute to the star formation history and chemical enrichment of the Galaxy. This motivates our choice of a resolution mass $M_{res}=M_4(z)/10$, where $$M_4(z)=M(T_{vir}=10^4\,{\rm K},z)\sim 10^{8}\Msun\left(\frac{1+z}{10}\right)^{-3/2}$$ \[eq:M4\] represents the halo mass corresponding to a virial equilibrium temperature $T_{vir}=10^4$K at a given redshift $z$. At the highest redshift of the simulation, $z\approx 20$, the gas present in virialized haloes is assumed to be of primordial composition. The star formation rate is taken to be proportional to the mass of gas. Following the critical metallicity scenario (Bromm [et al. ]{}2001; Omukai 2000; Omukai [et al. ]{}2005; Schneider [et al. ]{}2002, 2003, 2006; Bromm & Loeb 2004) we assume that low-mass star formation is triggered by the presence of metals (and dust) exceeding $Z_{cr}=10^{-5\pm 1}\Zsun$. When the gas in star-forming haloes has a metallicity $Z\leq Z_{cr}$, Pop III stars form with a reference mass value of $m_{popIII}=200\Msun$. If, on the contrary, $Z>Z_{cr}$, Pop II/I stars form according to a Larson initial mass function (IMF): $$\phi(m)\propto m^{x-1}\exp(-m_{cut}/m),$$ \[eq:LarsonIMF\] with $x=-1.35$, $m_{cut}=0.35 \Msun$ and $m$ in the range $[0.1-100] \Msun$ (Larson 1998). Due to the lack of spatial information, when metals and gas are returned to the interstellar medium (ISM) through stellar winds and SN explosions, they are assumed to be instantaneously and homogeneously mixed with the ISM. The same instantaneous perfect-mixing approximation is applied to material ejected out of the host halo into the external GM.
New features ------------ We now discuss the additional features we have incorporated in the model. The aim of introducing this new physics is to obtain a more complete description of the evolution of a single dSph galaxy. The new features can be summarized as follows: - [*Infall rate.*]{} The gas in newly virialized haloes is accreted with an infall rate given by $$\frac{dM_{inf}}{dt}=A\,\frac{t}{t_{inf}}\,e^{-(t/t_{inf})^{2}}.$$ \[eq:Minf\] The selection of this particular functional form has been guided by the results of simulations presented in Kereš et al. (2005). For reasons that will be clarified in Sec. 6.1, the infall time is assumed to be proportional to the free-fall time, $t_{inf}=t_{ff}/4$, where $t_{ff} = (3 \pi/32 G \rho)^{1/2}$, $G$ is the gravitational constant, and $\rho$ is the total (dark + baryonic) mass density of the halo. The normalization constant is set to $A = {2}({\Omega_b}/{\Omega_M}) M/t_{inf}$ so that for $t \rightarrow \infty$ the accreted gas mass reaches the universal value $M_{inf}(\infty)=(\Omega_b/\Omega_M) M$. No infall is assumed after a merging event, i.e. all the gas is supposed to be instantaneously accreted. Hydrodynamical simulations in fact show that galaxy mergers can drive significant inflows of gas, raising the star formation rate by more than an order of magnitude (Mihos & Hernquist 1996, and references therein). - [*Finite stellar lifetimes*]{}. We follow the chemical evolution of the gas taking into account that stars of different masses evolve on characteristic time-scales (Lanfranchi & Matteucci 2007). The rate at which gas is returned to the ISM through winds and SN explosions is computed as $$\frac{dR}{dt}=\int^{100}_{m_1(t)}(m-w_m(m))\,\phi(m)\,{\rm SFR}(t-\tau_m)\,dm,$$ where $\tau_m=10/m^2$ Gyr is the lifetime of a star with mass $m$ (in solar units), $w_m$ is the remnant mass and $m_1(t)$ the turnoff mass, i.e. the mass corresponding to $t=\tau_m$.
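Since the adopted lifetime relation $\tau_m=10/m^2$ Gyr is analytically invertible, the turnoff mass follows immediately as $m_1(t)=(10/t)^{1/2}$. The following minimal sketch (function names are ours) evaluates the two relations at the mass limits used in the text:

```python
def lifetime_gyr(m):
    """Stellar lifetime tau_m = 10/m^2 Gyr for a star of mass m (in Msun)."""
    return 10.0 / m**2

def turnoff_mass(t_gyr):
    """Turnoff mass m_1(t): the mass whose lifetime equals t (inverts tau_m)."""
    return (10.0 / t_gyr) ** 0.5

# The most massive SN progenitors (40 Msun) explode after ~6 Myr,
# while the least massive ones (8 Msun) live ~156 Myr -- setting,
# respectively, the onset of feedback and the ~150 Myr dormant
# phase discussed later in the paper.
t40 = lifetime_gyr(40.0) * 1e3   # in Myr
t8 = lifetime_gyr(8.0) * 1e3     # in Myr
```

The two values (6.25 Myr and 156.25 Myr) match the $\sim 6$ Myr feedback delay and the $\sim 150$ Myr dormant phase quoted in Secs. 4 and 5.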
Similarly, the total ejection rate of an element ${\it i}$, newly synthesized inside stars (first term in the parenthesis) and re-ejected into the ISM without being re-processed (second term), is $$\frac{dY_i}{dt}=\int^{100}_{m_1(t)}\left[m_i(m,Z)+(m-w_m(m))\,Z_i(t-\tau_m)\right]\phi(m)\,{\rm SFR}(t-\tau_m)\,dm,$$ where $m_i(m,Z)$ is the mass of element ${\it i}$ produced by a star with initial mass $m$ and metallicity $Z$, and $Z_i(t-\tau_m)$ is the abundance of the ${\it i}$-th element at the time $t-\tau_m$. The SN rate is simply computed as $$\frac{dN_{SN}}{dt}=\int^{40}_{\max[m_1(t),\,8]}\phi(m)\,{\rm SFR}(t-\tau_m)\,dm.$$ We used the grid of values $w_m(m)$ and $m_i(m,Z)$ by Heger & Woosley (2002) for $140\Msun<m<260\Msun$, Woosley & Weaver (1995) for $8\Msun<m<40\Msun$ and van den Hoek & Groenewegen (1997) for $0.9\Msun<m<8\Msun$. - [*Mechanical feedback.*]{} Assuming a continuous mass-loss prescription (Larson 1974), the mass of gas ejected into the surrounding GM is regulated by the equation $$\frac{1}{2}M_{ej}v_e^2=E_{SN},$$ \[eq:Mej1\] where $$E_{SN}=\epsilon_{w} N_{SN}\langle E_{SN}\rangle$$ \[eq:Esn\] is the kinetic energy injected by SN-driven winds and $v_e^2=GM/r_{vir}=2E_b/M$ is the escape velocity of the gas from a halo with mass $M$ and binding energy $E_b$ given by (Barkana & Loeb 2001) $$E_{b}=\frac{1}{2}\frac{GM^{2}}{r_{vir}}= 5.45\times{10^{53}}{\rm erg} \left(\frac{{M}_{8}}{h^{-1}}\right)^{5/3}\left(\frac{1+z}{10}\right){h}^{-1}.$$ \[eq:Eb\] In eq. \[eq:Esn\], $\epsilon_{w}$ is a free parameter which controls the conversion efficiency of SN explosion energy into kinetic form, $N_{SN}$ is the number of SNe, and $\langle E_{SN}\rangle$ is the average explosion energy; the latter quantity is taken to be equal to $2.7\times 10^{52}$erg for Pop III stars and to $1.2\times 10^{51}$erg for Type II SNe. Differentiating eq. \[eq:Mej1\] we find that the gas ejection rate is proportional to the SN explosion rate, $$\frac{dM_{ej}}{dt} = \frac{2\epsilon_{w}\langle E_{SN}\rangle}{v_e^2}\,\frac{dN_{SN}}{dt}.$$ \[eq:Mej\] - [*Differential winds.*]{} The gas ejected out of the host halo is assumed to be metal-enhanced with respect to the star-forming gas.
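For orientation, the feedback relations above can be evaluated numerically for the average dSph of this paper ($M=1.6\times 10^8\Msun$ virializing at $z=7.2$, see the abstract). The value of $h$ is not specified in the text, so $h=0.7$ is assumed here, and the escape velocity follows the definition $v_e^2=2E_b/M$ used in the text; the numbers are purely illustrative:

```python
import math

MSUN = 1.989e33          # g
H = 0.7                  # assumed dimensionless Hubble parameter (not given in the text)

def binding_energy(m_halo, z):
    """E_b from eq. (Eb): 5.45e53 erg * (M_8 * h)^(5/3) * ((1+z)/10) / h,
    with M_8 the halo mass in units of 1e8 Msun."""
    m8 = m_halo / 1e8
    return 5.45e53 * (m8 * H) ** (5.0 / 3.0) * ((1 + z) / 10.0) / H

def ejected_mass_per_sn(m_halo, z, eps_w=0.002, e_sn=1.2e51):
    """Gas mass ejected per SN from eq. (Mej), dM_ej/dN_SN = 2 eps_w <E_SN> / v_e^2,
    with v_e^2 = 2 E_b / M as in the text; eps_w is the fiducial value of Sec. 2."""
    ve2 = 2.0 * binding_energy(m_halo, z) / (m_halo * MSUN)   # cm^2 s^-2
    return 2.0 * eps_w * e_sn / ve2 / MSUN                    # in Msun

# Average dSph of the paper: M = 1.6e8 Msun virializing at z = 7.2
ve = math.sqrt(2.0 * binding_energy(1.6e8, 7.2) / (1.6e8 * MSUN)) / 1e5  # km/s
mej = ejected_mass_per_sn(1.6e8, 7.2)
```

With these assumptions the escape velocity comes out at roughly $20$ km/s and each Type II SN can unbind several hundred $\Msun$ of gas, which makes the rapid blow-away described in Sec. 4 plausible.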
According to Vader (1986), who studied SN-driven gas loss during the early evolution of elliptical galaxies, the SN ejecta suffer very limited mixing before they leave the galaxy, playing a minor role in its chemical evolution. This hypothesis implies different ejection efficiencies for gas and metals. The result has been later confirmed by numerical studies (Mac Low & Ferrara 1999; Fujita et al. 2004). Adopting a simple prescription, we assume that the abundance of the ${\it i}$-th element in the wind is proportional to its abundance in the ISM, $Z^w_i=\alpha Z^{ISM}_i$, and we take $\alpha=10$ only for newly virialized haloes ($M<10^9\Msun$), otherwise $\alpha=1$. For any star-forming halo of the MW hierarchy, we therefore solve the following system of differential equations: $$\frac{dM_*}{dt} = {\rm SFR} = \epsilon_*\frac{M_g}{t_{ff}},$$ \[eq:SFR\] $$\frac{dM_g}{dt} = -{\rm SFR} + \frac{dM_{inf}}{dt} + \frac{dR}{dt} - \frac{dM_{ej}}{dt},$$ \[eq:Mg\] $$\frac{dM_{Z_i}}{dt} = -Z^{ISM}_i\,{\rm SFR} + \frac{dY_i}{dt} + Z_i^{vir}\frac{dM_{inf}}{dt} - Z^w_i\frac{dM_{ej}}{dt}.$$ \[eq:Mz\] The first equation is the star formation rate; $M_g$ is the mass of cold gas inside haloes, $\epsilon_*$ the free parameter which controls the star formation efficiency and $t_{ff}$ the free-fall time. The second equation describes the mass variation of the cold gas: it increases because of gas infalling and/or returned from stars, and decreases because of star formation and gas ejection into the GM. The third equation, analogous to the second one, regulates the mass variation of an element $i$; $Z^{ISM}_i$, $Z_i^{vir}$, and $Z^{w}_i$ are the abundances of the ${\it i}$-th element in the ISM, in the infalling gas (i.e. in the hot gas at virialization), and in the wind, respectively. Model parameters ---------------- The model has six free parameters: $\epsilon_*$, $\epsilon_w$, $Z_{cr}$, $m_{PopIII}$, $t_{inf}$ and $\alpha$. We use the observed global properties of the MW in order to fix $\epsilon_w$ and $\epsilon_*$.
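A toy explicit-Euler integration of the gas equations illustrates the bookkeeping implied by eqs. (\[eq:SFR\])-(\[eq:Mg\]). The closures here are assumed placeholders, not the paper's calibrated model: a constant ejected mass per unit stellar mass formed, no returned gas ($dR/dt=0$), and illustrative parameter values. What the sketch verifies is structural: the accreted mass is shared exactly among stars, cold gas and ejecta, and the total accreted mass approaches $(\Omega_b/\Omega_M)M$ by construction of the infall normalization.

```python
import math

OMEGA_B_OVER_M = 0.16            # assumed cosmic baryon fraction (placeholder)
M_HALO = 1.6e8                   # Msun, average dSph of the paper
T_INF = 25.0                     # Myr (t_inf ~ 25 Myr, Sec. 4)
T_FF = 4 * T_INF                 # Myr, since t_inf = t_ff / 4
EPS_STAR = 1.0                   # fiducial star formation efficiency
ETA_EJ = 5.0                     # assumed ejected mass per unit stellar mass (toy)

A = 2 * OMEGA_B_OVER_M * M_HALO / T_INF   # normalization of eq. (Minf)

def infall_rate(t):
    """Infall rate of eq. (Minf), in Msun/Myr."""
    return A * (t / T_INF) * math.exp(-((t / T_INF) ** 2))

dt, t = 0.01, 0.0                # time step and clock, in Myr
m_gas = m_star = m_ej = m_acc = 0.0
for _ in range(20000):           # integrate the first 200 Myr
    sfr = EPS_STAR * m_gas / T_FF          # eq. (SFR)
    inf = infall_rate(t)
    ej = ETA_EJ * sfr                      # toy ejection closure
    m_star += sfr * dt
    m_ej += ej * dt
    m_acc += inf * dt
    m_gas += (-sfr + inf - ej) * dt        # eq. (Mg) with dR/dt = 0
    t += dt
```

Even with these crude closures the qualitative behaviour of Sec. 4 emerges: the cold gas builds up while infall dominates and is then drained by star formation and ejection, leaving a nearly gas-free system after $\sim 200$ Myr.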
To do so, we compare the results of the simulations at redshift $z=0$ with: (i) the gas metallicity $Z_{gas}\sim \Zsun$; (ii) the stellar metallicity $Z_{*}\sim \Zsun$; (iii) the stellar mass $M_{*}\sim 6\times 10^{10}\Msun$; (iv) the gas-to-stellar mass ratio $M_g/M_*=0.1$; (v) the baryon-to-dark matter ratio $f_b=0.07$; (vi) the GM metallicity $Z_{GM}\sim 0.25\Zsun$. The last quantity has been estimated using the observed value of \[O/H\] in high-velocity clouds (Ganguly et al. 2005), which are supposed to be gas left over from the Galactic collapse and currently accreting onto the disk. The MDF of Galactic halo stars observed by Beers & Christlieb (2006) is instead used to fix the values of $Z_{cr}$ and $m_{PopIII}$ (Fig. 1, left panel). As we will discuss in detail in Sec. 6.1, the additional parameters $t_{inf}$ and $\alpha$ are fixed to match the Sculptor MDF without altering the MW properties. Our fiducial model is characterized by the following parameter values[^1]: $\epsilon_*=1$, $\epsilon_w=0.002$, $Z_{cr}=10^{-3.8}\Zsun$, $m_{PopIII}=200\Msun$, $t_{inf}=t_{ff}(z_{vir})/4$ and $\alpha=10$. The comparison between the best-fitting model and the observed MDF is shown in the left panel of Fig. 1. The model provides a good fit to the data; in particular, the selected $Z_{cr}$ value allows us to reproduce the peculiar MDF cut-off, although it cannot account for the two isolated hyper iron-poor stars (\[Fe/H\]$=-5.3$ and \[Fe/H\]$=-5.7$) nor for the recent observation of a star with \[Fe/H\]$=-4.8$, the first detection in what was previously called the “metallicity desert” (Christlieb 2007). These stars can only be reproduced assuming $Z_{cr}\leq 10^{-6}\Zsun$, at the expense of overpredicting the number of stars in the range $-5.3<$ \[Fe/H\]$<-4$ and losing the agreement with the observed MDF cut-off (see SSF07 for a detailed analysis of the dependence of the predicted MDF on $Z_{cr}$ and $m_{PopIII}$).
We find that most of the iron-poor stars (\[Fe/H\]$<-2.5$) form in haloes which originally contain gas of primordial composition but which accrete material from the GM, Fe-enhanced by previous SN explosions. The initial \[Fe/H\] abundance within a virializing halo is then fixed by the corresponding GM iron abundance at the virialization redshift. The evolution of the GM iron and oxygen abundances predicted by the fiducial model is also shown in Fig. 1 (middle panel). The birth environment ===================== Once the fiducial model has been fixed, its parameters are used to solve the system of equations (\[eq:SFR\])-(\[eq:Mz\]) for [*all*]{} the progenitor haloes of the MW. The next point to address is the selection criterion to identify dSph galaxies among the various MW progenitors. We use two criteria: the first is based on dynamical arguments and the second on reionization. We want to select virializing haloes which could become dSph satellites. Using N-body cosmological simulations, Diemand, Madau & Moore (2005) show that in present-day galaxies, haloes corresponding to rare high-$\sigma(M,z)$ density peaks[^2] are more centrally concentrated. The probability of a protogalactic halo to become a satellite increases if it is associated with lower-$\sigma$ density fluctuations. This result, combined with the fact that at each redshift $95\%$ of the total dark matter lies in haloes with mass $M<M_{2\sigma}$, corresponding to $<2\sigma$ fluctuations, suggests that most satellites originate from such density peaks. Therefore, we select dSph candidates from haloes with masses $M_{4}<M<M_{2\sigma}$. In Fig. 1 (right panel) we show the redshift evolution of $M_4(z)$, defined in eq. \[eq:M4\] as $M_4(z)=10^{8}\Msun [(1+z)/10]^{-3/2}$, and of the halo masses corresponding to $1-3$ $\sigma(M,z)$ density peaks. Note that, in addition, the adopted dynamical criterion can be used to set an upper limit to the dSph candidate formation epoch of $z_{vir}<9$ (see Fig. 1, right panel).
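The lower selection threshold can be evaluated directly from eq. \[eq:M4\]. A one-function sketch (the function name is ours), evaluated at the average dSph virialization redshift $z=7.2$ quoted in the abstract:

```python
def m4(z):
    """Halo mass (in Msun) with T_vir = 1e4 K at redshift z, eq. (M4):
    M_4(z) = 1e8 Msun * [(1+z)/10]^(-3/2)."""
    return 1e8 * ((1 + z) / 10.0) ** -1.5

# Lya-cooling threshold and merger-tree resolution mass at z = 7.2
threshold = m4(7.2)        # ~1.35e8 Msun
m_res = threshold / 10.0   # M_res = M_4(z)/10 (Sec. 2)
```

At $z=7.2$ the threshold is $\sim 1.3\times 10^8\Msun$, consistent with the dynamical masses $M=(1.6\pm 0.7)\times 10^8 \Msun$ found for the dSph candidates.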
The second criterion is based on reionization. During this epoch, the increase of the Inter-Galactic Medium (IGM) temperature causes the growth of the Jeans mass and a consequent suppression of gas infall in low-mass objects. In particular, cosmological simulations by Gnedin (2000) show that below a characteristic halo mass-scale the gas fraction is drastically reduced compared to the cosmic value. We adopt a simple prescription and assume that after reionization the formation of galaxies with circular velocity $v_c<30$ km/s is suppressed, i.e. we assume that when $z<z_{rei}$ haloes with masses below $M_{30}(z)=M(v_c=30$ km/s $,z)$ have no gas (for a thorough discussion of radiative feedback see Ciardi & Ferrara 2005 and Schneider et al. 2007). Since $M_{30}(z)>M_4(z)$ (see the right panel of Fig. 1) and the probability to form a newly virialized halo with $M>M_{30}$ is very low, the second criterion implies that the formation of dSph candidates is unlikely to occur below $z_{rei}$. As we will discuss in Sec. 6.4, this fact may have important consequences, such as the possible existence of a universal mass of dSph host haloes. Following the two above criteria, dSph candidates can only form in the redshift range $z_{rei}<z_{vir}<9$. From the middle panel of Fig. 1 it is evident that in this redshift range the mean GM iron abundance is $-3\lsim$\[Fe/H\]$\lsim -2.5$. This implies that the birth environment of dSph candidates is pre-enriched to \[Fe/H\] values consistent with those implied by the MDF observations of Helmi et al. (2006). In what follows, we will present the results obtained by our fiducial model averaged over 100 different realizations of the hierarchical merger tree of the MW. In each single realization, dSph candidates are selected from haloes with masses and redshifts corresponding to the shaded area in the right panel of Fig. 1, that is $M_{4}<M<M_{2\sigma}$ for $z>z_{rei}$ and $M_{30}<M<M_{2\sigma}$ otherwise.
Their subsequent evolution is followed in isolation with respect to the forming Galaxy: they neither merge nor accrete material from the GM. Following Choudhury & Ferrara (2006), we vary the reionization redshift within the range $5.5<z_{rei}<10$. The total number of dSph candidates depends on $z_{rei}$ and is typically larger than the number of observed satellites. Therefore, for each $z_{rei}$, it is necessary to randomly extract a sub-sample in order to match the total number of known MW satellites, $\sim 15$. The average properties of a dSph galaxy presented in the following Sections refer to the case $z_{rei}=6$ (see the discussion in Sec. 6.1) and are obtained by averaging over the selected satellites from all the 100 realizations of the MW merger tree (about $\sim 2000$ objects). Feedback-regulated evolution ============================ The life of a dSph is very violent in the first hundred Myr, due to mechanical feedback effects, which are more intense in low-mass objects. The evolution of the mass of cold gas (eq. \[eq:Mg\]) helps in understanding this rapid evolution. Fig. 2 shows the evolution of several properties of an average dSph galaxy ($M=1.6\times 10^8 \Msun$) that virialized at $z_{vir}=7.2$, as a function of the time elapsed since formation (age), $T=t-t_{vir}$. Three main evolutionary phases are identified in the Figure, depending on the dominant physical processes. An increasing fraction of cold gas is collected during Phase I ($T<40$ Myr), which is dominated by the infall rate. The mass of the infalling gas rapidly increases during this epoch, reaching a maximum when $T=t_{inf}\sim 25$ Myr. The masses of ejected and returned gas start to contribute to eq. (\[eq:Mg\]) only when the most massive SNe of $40\Msun$ explode[^3], $\sim 6$ Myr after the formation of the first stellar generation. Thereafter, the masses of ejected and returned gas grow rapidly, due to the rising number of SNe and evolving low-mass stars.
The ISM metallicity and iron abundance evolve accordingly during this phase: they are steadily equal to the values of the infalling gas ($Z_{vir}\sim 10^{-3}\Zsun$, \[Fe/H\]$\sim -2.9$) before the first SNe explode, and then rapidly increase. During Phase II ($40$ Myr$\lsim T\lsim 60$ Myr) the cold gas gained by infall is mostly used to form stars and $M_g$ remains constant. Note that the $M_*/M$ curve in Fig. 2 represents the [*total*]{} stellar mass at time $T$. Finally, during Phase III ($T\gsim 70$ Myr), the mass of ejected gas overcomes the infalling gas and $M_g$ starts to decrease. Because of the metal-enhanced wind prescription, $M_Z$ and $M_{Fe}$ should in principle decrease earlier and faster than $M_g$. This is the case for $M_Z$: in Fig. 2 the metallicity is a slowly decreasing function during both Phase II and Phase III, so that $|\dot{M_Z}| < |\dot{M}_g|$. Conversely, the $M_{Fe}/M_g$ ratio is enhanced during these epochs: the mass of newly synthesized iron released by a SN with a $m=12\Msun$ progenitor is $\sim 2$ orders of magnitude larger than for $m=40\Msun$ (Woosley & Weaver 1995)[^4]; for this reason, when lower-mass SNe evolve, a larger amount of iron is injected into the ISM and the second right-hand term in eq. \[eq:Mz\] can counteract the high ejection rate. When $T\sim 100$ Myr the mass of gas lost through winds becomes larger than the remaining gas mass and $M_g$ drops to zero. During this blow-away, metals and iron are also ejected out of the galaxy. Moreover, since SN explosions continue at subsequent times, even the infalling gas can rapidly acquire enough energy to escape the galaxy. The infall is first reversed and then, in a few Myr, once the remaining mass of hot gas has been blown away, definitively stopped. The occurrence of reversed infall in high-redshift dwarf galaxies is confirmed by numerical simulations (Fujita et al. 2004). Eventually, at $T \sim 100$ Myr, our template dSph is a gas-free system. Beyond blow-away ================ In Fig.
3 we show the star formation rate (SFR) of a typical dSph galaxy as a function of its age. The highest peak corresponds to the star formation activity during the first $100$ Myr, i.e. before the blow-away. After the blow-away the galaxy remains gas-free and star formation is suddenly halted, i.e. ${\mbox SFR}(T)=0$. The gas returned by evolved stars represents the only source of fresh gas for the galaxy after the blow-away. However, until the last SN explodes, this small mass of gas is easily ejected from the galaxy by SN winds (Fig. 2); the dSph remains dormant (${\mbox SFR}=0$) for the subsequent $\sim 150$ Myr (this time-lag corresponds to the lifetime of the least massive, $m=8\Msun$, SN progenitor formed just before the blow-away). Observationally, mass loss from evolved stars has been invoked by Carignan et al. (1998) in order to explain the detection of neutral hydrogen (H${\rm I}$) associated with the Sculptor dSph galaxy. After the last SN explosion, the return rate $dR/dt$ from evolved stars with $m<8\Msun$ becomes the only non-zero term of eq. \[eq:Mg\] and the dSph enters a rejuvenation phase: the recycled gas is collected into the galaxy and star formation starts again. From the beginning of the rejuvenation phase, the subsequent evolution of the galaxy proceeds as in the first 100 Myr of its life. However (Fig. 3), the SFR is now more than 2 orders of magnitude lower than before the first blow-away, due to the paucity of returned gas. Almost $100$ Myr later the galaxy is drained of its whole mass of gas and metals: a new blow-away has occurred and the cycle starts again. In Fig. 3 we note that the repetition of blow-away and rejuvenation phases causes an intermittent SF activity with a typical blow-away separation of $\sim 150$ Myr. Such a burst-like SFH is similar to that inferred from the CMD observed in dSph galaxies such as Carina (Smecker-Hane et al. 1994), although there the typical duration of active and quiescent phases is $\sim (1-2)$ Gyr.
In agreement with the present work, recent simulations of the collapse of an isolated dwarf galaxy (Stinson et al. 2007) show that feedback effects cause a periodic SF activity, with a typical duration of active and quiescent phases of $\sim 300$ Myr. Fig. 3 also shows that about 1 Gyr after the formation of the galaxy, this burst-like SFH ends. Because of the gradually smaller mass of gas returned by stars, fewer SNe are produced during this epoch. For this reason, gas ejection is less efficient and can be easily counteracted by the continuous input of returned gas. Although the SF activity continues until the present day, the mass of stars formed after the first blow-away ($M_*\sim 2 \times 10^5\Msun$) is only $1\%$ of the mass of stars formed before the blow-away ($M_*\sim 2 \times 10^7 \Msun$). Consistent with this result, the analysis of the dSph CMDs by Dolphin et al. (2005) shows that dSphs typically formed most of their stars over 10 Gyr ago. After the first blow-away, subsequent stellar generations form out of gas recycled by low-mass stars. The characteristic iron abundance of this gas is \[Fe/H\]$\sim -1.5$, as can be inferred using the results of van den Hoek & Groenewegen (1997). Observable properties ===================== In the following Sections we will compare our numerical results for dSph galaxies with the most relevant observations. Given the amount of available data, we take Sculptor as the best case of a dSph template; however, the validity of the model results is general. Metallicity distribution function --------------------------------- In this Section we analyze the Metallicity Distribution Function (MDF) of Sculptor, i.e. the number of relic stars as a function of their iron abundance \[Fe/H\], a quantity commonly used as a metallicity tracer. In Fig. 4 we compare the MDF observed by Helmi et al. (2006) with the simulated one, normalized to the total number of observed stars (513).
The theoretical MDF is obtained as follows: we adopt a reionization redshift of $z_{rei}=6$; given this choice, the average number of dSph candidates in each realization is $N_{tot}\sim 200$, of which $10\%$ become MW satellites, hence naturally matching the number of observed satellites. A higher reionization redshift of $z_{rei}=8.5$ would reduce the number of dSph candidates to $N_{tot}\sim 5$, well below the observed value. This allows us to put a solid constraint on the reionization redshift, $z_{rei}<8.5$. As can be inferred from the Figure, the model shows a good agreement with the observed MDF, particularly for \[Fe/H\]$<-1.5$. A marginally significant deviation is present at larger \[Fe/H\] values. We have already discussed in the previous Section that the bulk of the stars ($\sim 99\%$) in a dSph galaxy is formed during the first $100$ Myr of its life, when \[Fe/H\]$<-1.5$. Essentially, stars formed after the first blow-away (\[Fe/H\]$>-1.5$) are unnoticeable in the normalized MDF. For this reason the physical processes regulating the MDF shape are mostly those responsible for the cold gas mass evolution analyzed in Sec. 4. We can use the evolution of $M_{Fe}/M_g$ shown in Fig. 2 to convert time into the \[Fe/H\] variable and identify the three main evolutionary phases in the MDF. We find that stars with \[Fe/H\]$\lsim-2$ formed during the infall-dominated Phase I; the MDF shape at low \[Fe/H\] values is then essentially regulated by the functional form of the infall rate. Stars with $-2\lsim$\[Fe/H\]$\lsim-1.6$ (around the MDF maximum) are formed during Phase II, i.e. when the mass of cold gas remains approximately constant; the maximum of the MDF is instead fixed by the values of $t_{inf}$ and $\alpha$. In particular, $t_{inf}$ determines the beginning of Phase II and $\alpha$ its end.
Their values ($t_{inf}=t_{ff}(z_{vir})/4$, $\alpha=10$) have been selected in order to match the Sculptor MDF maximum/shape without altering the global MW properties and the Galactic halo MDF (the same parameters are in fact applied to all the virialized MW building blocks). Finally, stars with \[Fe/H\]$\gsim -1.6$ are formed during the feedback-dominated Phase III. Note in particular that the value of the MDF cut-off (\[Fe/H\]$\sim -1.5$) corresponds to the gas iron abundance at the blow-away. At \[Fe/H\]$\gsim -1.5$ our model slightly underpredicts the data, as the theoretical MDF drops very steeply. The explanation for such a disagreement is likely to reside in our simplified dynamical treatment of mechanical feedback. Interestingly, Mori, Ferrara & Madau (1999) investigated the dynamics of SN-driven bubbles in haloes with $M=10^8 \Msun$ at $z=9$ using 3D simulations. They found that less than $30\%$ of the available SN energy gets converted into kinetic energy of the blown-away material, the remainder being radiated away. A large fraction of gas remains bound to the galaxy, but is not available to form stars before it cools and rains back onto the galaxy after $\sim 200$ Myr. Such an effect is not included in our modeling. Qualitatively we do expect that such a “galactic fountain" would increase the amount of Fe-enriched gas available to restart SF after the blow-away, and hence the number of \[Fe/H\]$\geq -1.5$ stars. The total number of relic stars shown in the MDF corresponds to a total stellar mass of $M_*= (3\pm 0.7) \times 10^6 \Msun$. Using the total (dark+baryonic) dSph mass derived from our simulations, $M=(1.6\pm 0.3)\times 10^8 \Msun$, we can compute the mass-to-light ratio $$\label{eq:M_L} \Big(\frac{M}{L}\Big)=\Big(\frac{M}{M_*}\Big)\Big(\frac{M_*}{L_*}\Big)\sim 150,$$ having assumed $(M_*/L_*)=3$, in agreement with the results by Ricotti & Gnedin (2005). This result is consistent with the most recent estimate for Sculptor (Battaglia 2007; Battaglia et al. 2008), which gives a very high value, $(M/L)=158\pm 33$.
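As a sanity check, eq. \[eq:M\_L\] is simple arithmetic on the central values quoted in the text:

```python
# Arithmetic behind the quoted (M/L) ~ 150, using the text's central values.
M_tot = 1.6e8    # total (dark + baryonic) dSph mass [Msun]
M_star = 3.0e6   # stellar mass inferred from the MDF [Msun]
ML_star = 3.0    # assumed stellar mass-to-light ratio (M_*/L_*)

ML = (M_tot / M_star) * ML_star   # (M/L) = (M/M_*) (M_*/L_*)
# ~160, i.e. ~150 within the quoted uncertainties on M and M_*
```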
Color-magnitude diagram
-----------------------

Another comparison with data can be done in terms of the color-magnitude diagram (CMD) of the Sculptor stellar population observed by Tolstoy et al. (2004). The CMD represents one of the best tools to study the star formation history of a galaxy. Starting from our numerical results for a typical dSph, we have computed the corresponding synthetic CMD using the publicly available IAC-STAR code by Aparicio & Gallart (2004). Given the IMF, the SFR and the ISM metallicity evolution, IAC-STAR allows one to calculate several properties of the relic stellar population and, in particular, the stellar magnitudes. We have used the stellar evolution library by Bertelli et al. (1994) and the bolometric correction library by Lejeune et al. (1997). Note that the IAC-STAR input parameters for the ISM metallicity evolution must be $Z^{IAC-STAR}>0.005 \Zsun$. No binary stars have been included. We adopt a randomization procedure in order to simulate the observational errors in the synthetic CMD and compare numerical results with data. To this aim, we first derive the normalized error distribution for the magnitude $M_I$ and the color index $V-I$ from the data sample by Tolstoy (private communication). Errors have been randomly assigned to every synthetic star, identified by an $(M_I, V-I)$ pair, using a Monte Carlo method, and randomly added or subtracted. Note that more accurate (and complicated) randomization procedures exist (see for example Aparicio & Gallart 2004); however, we consider the simple approach adopted here adequate for our present purposes. In Fig. 5 we compare the synthetic and observed CMDs. Data by Tolstoy et al. ($\sim 10300$ stars in the relevant $M_I$, $V-I$ range) have been normalized to the total number of synthetic stars derived by IAC-STAR ($\sim 2300$). To do so, stars have been randomly selected from the data sample. The match between theoretical and experimental points is quite good.
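The error-randomization step described above can be sketched as follows. For illustration, the empirical error distributions are replaced here by zero-mean Gaussians of fixed width — an assumption of this sketch; the paper draws the errors from the observed distributions instead:

```python
import random

def perturb_star(M_I, V_I, sigma_MI=0.05, sigma_VI=0.03, rng=random):
    """Assign a random photometric error to one synthetic star,
    identified by its (M_I, V-I) pair. Drawing from a zero-mean
    Gaussian makes the 'randomly added or subtracted' sign automatic."""
    return M_I + rng.gauss(0.0, sigma_MI), V_I + rng.gauss(0.0, sigma_VI)

random.seed(42)  # fixed seed so the Monte Carlo draw is reproducible
stars = [(-2.5, 1.1), (0.3, 0.6)]                 # synthetic (M_I, V-I) pairs
noisy = [perturb_star(m, c) for (m, c) in stars]  # "observed" counterparts
```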
We note, however, that the number of red giant branch (RGB) stars in the synthetic CMD is lower than the observed one. This discrepancy can be explained by the contamination of the data sample by Galactic foreground stars (see Tolstoy et al. 2004). The synthetic CMD reproduces reasonably well the blue/red horizontal branch stars (BHB/RHB stars), i.e. stars residing in the CMD branch ($0< M_I< 1$, $0<(V-I)<1$). A well-populated HB in the CMD might be interpreted as an indication of an old stellar population (age $>10$ Gyr). The interpretation of the blue and red HB, on the contrary, is quite controversial: due to the age-metallicity degeneracy of the CMD, stellar colors become bluer when stars are younger and/or poorer in metallicity. For this reason the position of a star in the CMD cannot be unequivocally interpreted. In our model the majority of the stars are formed during the first 100 Myr of the dSph life; this means that all the stars have basically the same age, $\gsim 13$ Gyr, so the HB morphology, in our model, reflects the metallicity gradient of the stellar populations: BHB stars correspond to metal-poor stars formed during Phase I (see Fig. 2), while RHB stars correspond to the more metal-rich stars formed during Phase II.

Key abundance ratios
--------------------

A method commonly used to break the age-metallicity degeneracy and derive an accurate SFH from the CMD is the analysis of the stellar elemental abundances. In most of the observed dSph galaxies the abundance ratio of $\alpha$ elements (O, Mg, Si, Ca) relative to iron (\[$\alpha$/Fe\]) shows a strong decrease when \[Fe/H\]$> -2$ (Venn et al. 2004). Since $\alpha$-elements are primarily generated by SNe II, while a substantial fraction of iron-peak elements (Fe, Ni, Co) is produced by type Ia SNe (SNe Ia), the decline of \[$\alpha$/Fe\] is usually interpreted as a signature of the contribution by SNe Ia.
Using this argument and the assumption that the lifetime of SNe Ia is around 1.5 Gyr, Ikuta & Arimoto (2002) inferred an age spread of 1-2 Gyr in the dominant stellar population of the Draco, Sextans and Ursa Minor dSphs. However, the issue of the lifetime of SNe Ia remains quite debated and uncertain, with timescales as short as 40 Myr having been suggested under starburst formation conditions (see Ricotti & Gnedin 2005 for a thorough discussion). In Fig. 6 we compare the oxygen-to-iron stellar abundance as a function of \[Fe/H\] for 8 stars observed in Sculptor by Geisler et al. (2005) and Shetrone et al. (2003) with the results of our model. In spite of the poor statistics, the data show a clear indication of the \[O/Fe\] decrement for \[Fe/H\]$>-1.8$; in particular, subsolar values are observed. Even if SNe Ia are not included in our model, a drop in \[O/Fe\] occurs as a result of having relaxed the IRA (see also Fenner et al. 2006) and of differential winds. However, subsolar \[O/Fe\] values can only be accounted for by differential winds. This is because when \[O/Fe\] reaches its maximum value[^5], the “effective oxygen yield” ($dY_{O}/dt - Z^{w}_{O} dM_{ej}/dt$) is strongly reduced with respect to that of iron due to the effect of differential winds, which, as can be deduced from eq. (12), have a larger impact on more abundant elements ($Z^w_i=\alpha Z^{ISM}_i$). This causes a pronounced and rapid decrease of \[O/Fe\] to subsolar values. We note that the model results tend to underpredict the observed stellar abundances: essentially, differential winds are too efficient. We recall, however, that the value $\alpha=10$ has been selected in order to match the Sculptor MDF. The problem of the lack of \[Fe/H\]$>-1.5$ stars noted in Sec. 6.1 is evident here as well. A more sophisticated treatment of differential winds and/or the inclusion of the missing physical effects discussed in Sec. 6.1 should presumably remove such discrepancies. Notably, the result of Fig.
6 is fully consistent with the analysis by Fenner et al. (2006), who studied the Sculptor chemical evolution including differential winds. In conclusion, we find that the trend of \[$\alpha$/Fe\] does not require a prolonged star formation phase ($>1$ Gyr) but can be satisfactorily explained even if $99\%$ of the stars formed during the first $100$ Myr of the dSph lifetime. Additional constraints on the SFH may also come from the analysis of the abundances of s-elements, associated with the slow neutron-capture process. These are produced by low-mass stars during the Asymptotic Giant Branch (AGB) phase. From an analysis of \[Ba/Y\], Fenner et al. (2006) concluded that most of the stars must have formed over an interval of at least several Gyr to allow time for metal-poor AGB stars to enrich the ISM up to the observed values. Our model does not make specific predictions for s-elements; it is likely, however, that since the bulk of stars is predicted to form on a time-scale of $\sim100$ Myr, there would not be enough time for the ISM to be enriched with the products of AGB stars. Nevertheless, this may not be the only scenario to explain the s-element abundances. For example, binary systems in which the lower-mass, long-lived star accretes s-enhanced gas directly from the companion rather than from the ISM can equally explain the observed high s-element abundances. Internal production during the dredge-up phase can represent yet another possibility. These alternative scenarios are supported by the observed stellar \[s/Fe\] ratio as a function of \[Fe/H\]. The data (Venn et al. 2004) show that the \[s/Fe\] values do not increase at higher \[Fe/H\], as would be expected if the ISM were gradually enriched by the contribution of lower-mass stars. Moreover, a large \[s/Fe\] spread is observed at any \[Fe/H\], as expected if the efficiencies of accretion, dredge-up and s-element production are functions of the stellar mass.
Dark matter content
-------------------

DSph galaxies represent the most dark matter-dominated systems known in the Universe. It is then very interesting to determine their dark matter mass. Observationally, the mass content of dSph galaxies is derived by measuring the velocity dispersion profile of their stellar populations and comparing it with the predictions of different kinematic models. The latter step strongly depends on the adopted stellar kinematics (in particular the assumed velocity anisotropy radial profile), on the dark matter mass distribution, and on the nature of the dark matter itself. Recently, Battaglia (2007) and Battaglia et al. (2008) have derived the velocity dispersion profile of Sculptor by measuring the velocities of $\sim 470$ RGB stars. They model Sculptor as a two-component system with a metal-poor and a metal-rich stellar population that show different kinematics. They use these two components as distinct tracers of the same potential and find that the best model is a cored profile with $r_c=0.5$ kpc and $M(<r_{last})$[^6]$=(3.4\pm 0.7)\times 10^8\Msun$, which gives an excellent representation of the data assuming an increasing radial anisotropy. Interestingly, the values of $M(<r_{last})$ obtained assuming a NFW model for the dark matter distribution or a constant radial anisotropy are $M(<r_{last})=(2.4^{+1.1}_{-0.7})\times 10^8\Msun$ and $M(<r_{last})=(3.3\pm 0.8)\times 10^8\Msun$, respectively, consistent with the above result within 1$\sigma$. The average mass of dSph galaxies that we infer from our simulations is $M=(1.6\pm 0.8)\times 10^8\Msun$. Because the last measured points in Battaglia (2007) and Battaglia et al. (2008) typically reach 1-2 kpc, one could suspect that additional dark matter could be located outside this radius, thus turning their determination into a lower mass limit.
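Whether much dark matter can hide beyond the last measured point can be checked with a back-of-the-envelope virial radius. A minimal sketch, assuming a flat $\Lambda$CDM cosmology ($\Omega_m=0.3$, $\Omega_\Lambda=0.7$, $h=0.7$) and an overdensity $\Delta=200$ relative to the critical density — numerical choices of ours, not values taken from the paper:

```python
import math

OM, OL, H = 0.3, 0.7, 0.7
RHO_CRIT0 = 2.775e11 * H**2          # critical density today [Msun / Mpc^3]

def r_vir_kpc(M, z, delta=200.0):
    """Virial radius [kpc] of a halo of mass M [Msun] at redshift z,
    defined against delta times the critical density at that redshift."""
    rho_z = RHO_CRIT0 * (OM * (1 + z)**3 + OL)
    return 1e3 * (3.0 * M / (4.0 * math.pi * delta * rho_z)) ** (1.0 / 3.0)

r = r_vir_kpc(1e8, 7)   # ~1-2 kpc for a 1e8 Msun halo virializing at z ~ 7
```

With these (assumed) parameters the result is of order 1-2 kpc, the same scale as the last measured points, leaving little room for extra mass at larger radii.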
However, for the mean mass value and formation redshift that we have obtained ($M\approx 10^8\,M_\odot$, $z_{vir}\approx 7$), the virial radius of such a halo is 1 kpc. Thus, the agreement (at the $1-2\sigma$ level) between our prediction and the actual mass determinations might not be coincidental, but may reflect the fact that in these small objects star formation has propagated up to the most remote galactocentric regions. This prediction could eventually be checked by deeper observations and/or other techniques. The narrow dispersion around the average mass value found is an indication that dSphs may have a universal host halo mass. This finding agrees with the results by Mateo et al. (1998), Gilmore et al. (2007) and Walker et al. (2007), who suggest that dSph galaxies might have a common mass scale $M_{0.6} = (2-7) \times 10^7 \Msun$, where $M_{0.6}$ is the dark matter mass within a radius of 0.6 kpc. Assuming a NFW density profile, $z_{vir}=0$ and a concentration parameter $c=35$ (Battaglia 2007), we find $M_{0.6} = 2.3\times 10^7\Msun$.

Gas footprints of feedback
--------------------------

A final comparison with data can be done in terms of the observed gas properties. In the previous Sections we have shown that metal-enhanced winds driven by SN explosions play a fundamental role in determining the evolutionary time scales and properties of a dSph galaxy. Based on observations obtained with the Chandra X-Ray Observatory, Martin, Kobulnicky & Heckman (2002) provide the first direct evidence for metal-enhanced winds from dwarf starburst galaxies. They have observed the hot X-ray-emitting gas around the nearby dwarf galaxy NGC 1569, which entered a starburst phase (10-20) Myr ago. The X-ray spectrum they find presents strong emission lines from $\alpha$-process elements, which require the wind metallicity to be $Z^w >0.25 \Zsun$, i.e. larger than $Z^{ISM}=0.2\Zsun$, supporting our assumption of metal-enhanced winds, $Z^{w}=10 Z^{ISM}$.
In particular, their best-fit models predict the ratio of $\alpha$-elements to Fe to be 2-4 times higher than the solar value; it is then likely that the ISM is preferentially depleted in $\alpha$-elements, consistent with the findings shown in Fig. 6. We stress that these observations confirm the idea that mechanical feedback processes start to play a significant role in the dSph evolution on a very short time-scale (10-20 Myr after the beginning of the starburst phase). Alternatively, the efficiency of mechanical feedback processes can be tested using observations of neutral hydrogen (HI). The Local Group dSph galaxies are all relatively HI-poor (Mateo 1998), suggesting that little gas has remained after the main SF phase. Among the known dSph galaxies, Sculptor is one of the few with detectable HI emission. Using radio observations, Carignan et al. (1998) derived a lower limit for the HI mass of $M_{HI} > 3\times 10^4\Msun$. Our simulation predicts an average mass of gas $M_g=(2.68\pm 0.97)\times 10^4\Msun$, in very good agreement with the observed value if indeed this gas is in neutral form. According to our model, the HI mass detected in Sculptor can be associated with gas returned by evolved stars, an explanation also offered by Carignan et al. (1998).

Summary and Discussion
======================

We have proposed a global scenario for the formation and evolution of dSph galaxies, satellites of the MW, in their cosmological context, using an improved version of the semi-analytical code GAMETE (GAlaxy Merger Tree & Evolution, SSF07). This approach allows us to follow self-consistently the dSph evolution and the MW formation and to match, simultaneously, most of their observed properties. In this context dSphs formed within the Galactic environment, whose metallicity evolution depends on the history of star formation and mechanical feedback along the build-up of the Galaxy.
The star formation and mechanical feedback efficiencies of dSphs are assumed to be the same as for all the Galactic building blocks; they are calibrated to reproduce the observable properties of the MW. DSph candidates are selected among the MW progenitors following a dynamical and a reionization criterion; we choose haloes with masses (i) $M_4 <M< M_{2\sigma}$ if $z>z_{rei}$, (ii) $M_{30}<M<M_{2\sigma}$ if $z<z_{rei}$, i.e. we assume the formation of galaxies with circular velocity $v_c<30$ km/s to be suppressed after reionization, where $5.5<z_{rei}<10$ (Choudhury & Ferrara 2006). As the number of dSph candidates found varies with $z_{rei}$, we determine the fraction that will become MW satellites by requiring that their number matches the observed one ($\sim 15$). Once formed, dSphs are assumed to evolve in isolation with respect to the merging/accreting Galaxy. In this work we present the results obtained assuming $z_{rei}=6$. This value provides a good agreement between the Sculptor MDF and the simulated one and gives a total number of dSph candidates of $N_{tot} \sim 200$; hence, we suppose that $\sim 10\%$ of them become MW satellites. The results of our model, supported by the comparison with observational data and previous theoretical studies, allow us to sketch a possible evolutionary scenario for dSphs. In our picture dSph galaxies are associated with Galactic progenitors corresponding to low-sigma density fluctuations ($M_4<M<M_{2\sigma}$) that virialize from the MW environment before the end of reionization, typically at $z=7.2\pm 0.7$. Their total (dark+baryonic) mass turns out to be $M=(1.6 \pm 0.7) \times 10^8\Msun$. At the virialization epoch the dSph birth environment is naturally pre-enriched by previous SN explosions up to \[Fe/H\]$_{GM}\gsim -3$, a value fully consistent with that inferred from observations by Helmi et al. (2006).
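The dynamical plus reionization selection summarized above can be condensed into a small predicate. The mass thresholds $M_4$, $M_{30}$, $M_{2\sigma}$ are model inputs; here they are passed in as plain numbers with illustrative values, not the paper's actual redshift-dependent curves:

```python
def is_dsph_candidate(M, z, z_rei, M4, M30, M2sigma):
    """Selection sketch: before reionization (z > z_rei) any halo with
    M4 < M < M_2sigma qualifies; afterwards haloes below the mass M30
    (circular velocity v_c < 30 km/s) are excluded."""
    lower = M4 if z > z_rei else M30
    return lower < M < M2sigma

# Illustrative thresholds only [Msun]; the real M4(z), M30(z), M2sigma(z)
# come from the merger-tree model.
assert is_dsph_candidate(1e8, z=7, z_rei=6, M4=5e7, M30=3e8, M2sigma=1e9)
assert not is_dsph_candidate(1e8, z=5, z_rei=6, M4=5e7, M30=3e8, M2sigma=1e9)
```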
The subsequent dSph evolution is strongly regulated by mechanical feedback effects, which are more intense in low-mass objects (Mac Low & Ferrara 1999). We take winds driven by SN explosions to be metal-enhanced ($Z^w=10 Z^{ISM}$), as also confirmed by numerical simulations (Fujita et al. 2004) and by the X-ray observations of the starburst galaxy NGC 1569 (Martin et al. 2002). Typically, $\sim 100$ Myr after the virialization epoch a complete blow-away of the gas caused by mechanical feedback is predicted. About $99\%$ of the present-day stellar mass, $M_*= (3\pm 0.7) \times 10^6 \Msun$, is expected to form during these first 100 Myr. The stellar content of dSphs is then dominated by an ancient stellar population ($>13$ Gyr old), consistent with the analysis of the dSph CMD diagrams by Dolphin et al. (2005). After the blow-away the galaxy remains gas-free and SF is stopped. Fresh gas returned by evolved stars allows SF to restart $\sim 150$ Myr after the blow-away. The SFR, however, is drastically reduced due to the paucity of the returned gas. Mass loss from evolved stars has also been invoked by Carignan et al. (1998) to explain the detection of HI in the Sculptor dSph. About $\sim 100$ Myr later, a second blow-away occurs and the cycle starts again. Such intermittent SF activity is similar to that observed in Carina by Smecker-Hane et al. (1994) and to that derived by Stinson et al. (2007) using numerical simulations. Roughly 1 Gyr after virialization this burst-like SFH ends, while the SF activity proceeds until the present day with a rapidly decreasing rate. At $z=0$ the dSph gas content is $M_g=(2.68\pm 0.97)\times 10^4 \Msun$. Our model allows us to match several observed properties of the Sculptor dSph:

- The Metallicity Distribution Function (Helmi et al. 2006). The pre-enrichment of the dSph birth environment accounts for the lack of observed stars with \[Fe/H\]$<-3$, a striking and common feature of the four dSph galaxies observed by Helmi et al. (2006).
- The stellar Color-Magnitude Diagram (Tolstoy et al. 2003) and the decrement of the stellar \[O/Fe\] abundance ratio for \[Fe/H\]$>-1.5$ (Geisler et al. 2005, Shetrone et al. 2003). The agreement found between models and observations supports the SFH we have predicted.

- The DM content $M=(3.4\pm 0.7)\times 10^8 \Msun$ and the high mass-to-light ratio $(M/L)=158\pm 33$ recently derived by Battaglia (2007), Battaglia et al. (2008); we find $(M/L)\sim 150$ using the predicted dark matter to stellar mass ratio and assuming $(M/L)_*=3$.

- The HI gas mass content. The value derived by radio observations ($M_{HI} > 3\times 10^4\Msun$, Carignan et al. 1998) is in agreement with our findings.

Interestingly, the model can also be used to put an upper limit on the epoch of reionization, $z_{rei}<8.5$: the total number of selected dSph candidates, in fact, is reduced to $N_{tot}\sim 5$, below the observed one, if $z_{rei}=8.5$. In addition, the imprint of reionization lies in the suppression of dSph formation below the reionization redshift, $z_{rei}=6$. This result is fully consistent with the presence of an ancient stellar population ($>13$ Gyr old) in [*all*]{} the observed dSph galaxies (Grebel & Gallagher 2004). Despite the success of the model in producing a coherent physical scenario for the formation of dSphs in their cosmological context and in matching several of the Sculptor and MW properties, several aspects deserve a closer inspection. Although Sculptor represents the best template to compare with, because of its average properties and the large amount of available data, examples of deviations are already known. In particular the SFHs differ considerably among dSph galaxies (Dolphin et al. 2005, Grebel & Gallagher 2004). The Fornax CMD, for example, indicates a substantially larger presence of younger stars than in other dSphs (Dolphin et al. 2005; Stetson et al. 1998; Buonanno et al.
1999); the peculiarity of this object is also evident in the observed MDF, which is a monotonically increasing function up to \[Fe/H\]$\sim -1$ (Pont et al. 2004; Helmi et al. 2006; Battaglia et al. 2006). The dSph properties inferred in our model, including the SFH and the MDF, are instead “universal". This is a consequence of the selection criteria, which give a universal dSph host halo mass, and of the assumed cosmological gas fraction in all virializing haloes. Since $M_4(z)<M_{2\sigma}$ only for $z<9$ (see Fig. 1, right panel) and the typical mass of a newly virializing halo is $\sim M_4(z)<M_{30}$, dSphs are forced to form in the redshift range $6 < z <9$. Due to the small variation of $M_4(z)$ in such a range, the dSph dark matter content is very similar in all objects and equal to $\sim 10^8 \Msun$. A refinement of the reionization criterion should allow larger deviations from the average evolutionary trend without altering it, and possibly allow reionization imprints to emerge in the SFHs. Cosmological simulations by Gnedin (2000) show in fact that after reionization the gas fraction in haloes below a characteristic mass-scale is gradually reduced compared to the cosmic value. If included in our model, such a prescription might allow the formation, just after the reionization epoch, of massive dSphs ($M\geq M_{4}(z=6)$) with a lower initial gas-to-dark matter ratio. Since mechanical feedback depends on the available mass of gas powering the SF and on the halo binding energy, this should translate into less efficient mechanical feedback and consequently a more regular SF activity. In particular, if blow-aways do not occur, the SF could proceed until the present day at a higher rate, allowing a massive formation of younger and more Fe-rich stars. Alternatively, random episodes of mass accretion and/or merging with primordial-composition haloes can be invoked as external gas sources to power the SF at lower redshifts.
Another physical mechanism, tidal stripping by the gravitational field of the Galaxy, might be invoked (Ibata et al. 2001; Mayer et al. 2004, 2006) to explain the paucity of remnant gas in dSphs, and perhaps as a mechanism of star formation suppression. In our model, we see no need to resort to such an effect, as the large majority of the gas is expelled by SN feedback within the first 100 Myr of the dSph evolution (we recall that the amount of newly born stars after that time is only $\approx 1$% of the final stellar mass). Hence, by the time dSphs find themselves embedded in the MW gravitational potential, there is little gas left to be stripped. Finally, we would like to comment on some of the model assumptions that could affect the results of the present work. The most relevant one certainly resides in the perfect-mixing approximation, which determines the metallicity evolution of the dSph birth environment and therefore the low \[Fe/H\] tail of the dSph MDF. Our persisting ignorance of all the physical effects regulating such a process does not allow us to improve on this assumption at the moment. The problem, however, is partially alleviated by the spread of the GM metallicity evolution (Fig. 1, middle panel) induced by the stochastic nature of the merger histories, which appears to be similar to what is found by sophisticated numerical simulations of mixing in individual galaxies (Mori & Umemura 2006). DSph galaxies formed out of a metal-poor birth environment with \[Fe/H\]$<-3$ are found in our model; however, their number is very small and their statistical impact on the average MDF is negligible. Finally, the assumed PopIII IMF (here a $\delta$-function centered at $m_{PopIII}=200\Msun$) might in principle affect the chemical evolution of the MW environment, given the large iron production of massive stars (see also SSF07 for a more detailed description).
A comparison of results obtained using PopIII masses in the range $(140-260)\Msun$ and a Larson IMF shows that the GM evolution is independent of the assumed PopIII IMF below $z=10$. The dSph birth environment is therefore not affected by this hypothesis. We refer the reader to SSF07 for a detailed analysis of the PopIII IMF impact on the Galactic halo MDF.

Acknowledgements {#acknowledgements .unnumbered}
================

We are grateful to G. Battaglia, N. Gnedin, E. Grebel, A. Helmi, E. Tolstoy and K. Venn for providing us their data and for enlightening discussions. This work has made use of the IAC-STAR Synthetic CMD computation code. IAC-STAR is supported and maintained by the computer division of the Instituto de Astrofísica de Canarias. We are grateful to the DAVID[^7] members for discussions.

\[lastpage\]

[^1]: The difference of the $\epsilon_*$, $\epsilon_w$ values with respect to those found in SSF07 is a result of model improvements. Note, however, that the integrals of the star formation rate and of the gas ejection rate from progenitor haloes remain unaltered.

[^2]: The quantity $\sigma(M,z)$ represents the linear theory rms density fluctuations smoothed with a top-hat filter of mass $M$ at redshift $z$.

[^3]: Stars with $40\Msun<m<100\Msun$ are predicted to collapse to black holes (Woosley & Weaver 1995), while massive Pop III stars cannot be produced in dSph galaxies since their birth environment is pre-enriched to $Z_{vir}\sim 10^{-3}\Zsun > Z_{cr}=10^{-3.8}\Zsun$.

[^4]: This result is virtually independent of the initial metallicity of the star. In the same mass range, the total mass of metals produced remains constant.

[^5]: The relative production rate of oxygen with respect to iron is larger in low-mass SNe II.

[^6]: $M(<r_{last})$ is the mass enclosed within the last measured point.

[^7]: [ www.arcetri.astro.it/science/cosmology/index.html]{}
--- abstract: 'We prove a structural property of the class of unconditionally saturated separable Banach spaces. We show, in particular, that for every analytic set ${\mathcal{A}}$, in the Effros-Borel space of subspaces of $C[0,1]$, of unconditionally saturated separable Banach spaces, there exists an unconditionally saturated Banach space $Y$, with a Schauder basis, that contains isomorphic copies of every space $X$ in the class ${\mathcal{A}}$.' address: - 'Université Pierre et Marie Curie - Paris 6, Equipe d’ Analyse Fonctionnelle, Boîte 186, 4 place Jussieu, 75252 Paris Cedex 05, France.' - 'Université Denis Diderot - Paris 7, Equipe de Logique Mathématiques, 2 place Jussieu, 72521 Paris Cedex 05, France.' author: - 'Pandelis Dodos and Jordi Lopez-Abad' title: On unconditionally saturated Banach spaces --- Introduction ============ **(A)** An infinite-dimensional Banach space $X$ is said to be *unconditionally saturated* if every infinite-dimensional subspace $Y$ of $X$ contains an unconditional basic sequence. Although by the discovery of W. T. Gowers and B. Maurey [@GM] not every separable Banach space is unconditionally saturated, this class of spaces is quite extensive, includes the “classical" ones and has some desirable closure properties (it is closed, for instance, under taking subspaces and finite sums). Most important is the fact that within the class of unconditionally saturated spaces one can develop a strong structural theory. Among the numerous results found in the literature, there are two fundamental ones that deserve special attention. The first is due to R. C. James [@Ja1] and asserts that any unconditionally saturated space contains either a reflexive subspace, or $\ell_1$, or $c_0$. The second is due to A. Pe[ł]{}czyński [@P] and provides a space $U$ with an unconditional basis $(u_n)$ with the property that any other unconditional basic sequence $(x_n)$, in some Banach space $X$, is equivalent to a subsequence of $(u_n)$. 
**(B)** The main goal of this paper is to exhibit yet another structural property of the class of unconditionally saturated spaces which is of a global nature. To describe this property we first need to recall some standard facts. Quite often one needs a convenient way to treat separable Banach spaces as a whole. Such a way has been proposed by B. Bossard [@Bos] and has proved to be extremely useful. More precisely, let us denote by $F\big(C[0,1]\big)$ the set of all closed subspaces of the space $C[0,1]$ and let us consider the set $$\label{e1} {\mathrm{SB}}=\big\{X\in F\big(C[0,1]\big): X \text{ is a linear subspace}\big\}.$$ It is easy to see that the set ${\mathrm{SB}}$ equipped with the relative Effros-Borel structure becomes a standard Borel space (see [@Bos] for more details). As $C[0,1]$ is isometrically universal for all separable Banach spaces, we may identify any class of separable Banach spaces with a subset of ${\mathrm{SB}}$. Under this point of view, we denote by ${\mathrm{US}}$ the subset of ${\mathrm{SB}}$ consisting of all $X\in{\mathrm{SB}}$ which are unconditionally saturated. The above identification is ultimately related to universality problems in Banach Space Theory (see [@AD], [@DF], [@D]). The connection is crystallized in the following definition, introduced in [@AD]. \[ind1\] A class ${\mathcal{C}}\subseteq {\mathrm{SB}}$ is said to be strongly bounded if for every analytic subset ${\mathcal{A}}$ of $\mathcal{C}$ there exists $Y\in\mathcal{C}$ that contains isomorphic copies of every $X\in {\mathcal{A}}$. In [@AD Theorem 91(5)] it was shown that the class of unconditionally saturated Banach spaces with a Schauder basis is strongly bounded. We remove the assumption of the existence of a basis and we show the following. \[int1\] Let ${\mathcal{A}}$ be an analytic subset of ${\mathrm{US}}$.
Then there exists an unconditionally saturated Banach space $Y$, with a Schauder basis, that contains isomorphic copies of every $X\in{\mathcal{A}}$. In particular, the class ${\mathrm{US}}$ is strongly bounded. We should point out that the above result is optimal. Indeed, it follows from a classical construction of J. Bourgain [@Bou1] that there exists a co-analytic subset $\mathcal{B}$ of ${\mathrm{SB}}$ consisting of reflexive and unconditionally saturated separable Banach spaces with the following property. If $Y$ is a separable space that contains an isomorphic copy of every $X\in\mathcal{B}$, then $Y$ must contain every separable Banach space. In particular, there is no unconditionally saturated separable Banach space containing isomorphic copies of every $X\in\mathcal{B}$. **(C)** By the results in [@AD], the proof of Theorem \[int1\] is essentially reduced to an embedding problem. Namely, given an unconditionally saturated separable Banach space $X$, one is looking for an unconditionally saturated space $Y(X)$, with a Schauder basis, that contains an isomorphic copy of $X$. In fact, for the proof of Theorem \[int1\], one has to know additionally that this embedding is “uniform". This means, roughly, that the space $Y(X)$ is constructed from $X$ in a Borel way. In our case, the embedding problem has already been solved by J. Bourgain and G. Pisier in [@BP], while its uniform version has been recently obtained in [@D]. These are the main ingredients of the proof of Theorem \[int1\]. **(D)** At a more technical level, the paper also contains some results concerning the structure of a class of subspaces of a certain space constructed in [@AD] and called an $\ell_2$ Baire sum. Specifically, we study the class of $X$-singular subspaces of an $\ell_2$ Baire sum and we show the following (see §3.1 for the relevant definitions). \(1) Every $X$-singular subspace is unconditionally saturated (Theorem \[t37\] in the main text).
\(2) Every $X$-singular subspace contains an $X$-compact subspace (Corollary \[c312\] in the main text). This answers a question from [@AD] (see [@AD Remark 3]). \(3) Every normalized basic sequence in an $X$-singular subspace has a normalized block subsequence satisfying an upper $\ell_2$ estimate (Theorem \[t38\] in the main text). Hence, an $X$-singular subspace can contain no $\ell_p$ for $1\leq p<2$. This generalizes the fact that the 2-stopping time Banach space (see [@BO]) can contain no $\ell_p$ for $1\leq p<2$. General notation and terminology -------------------------------- By ${\mathbb{N}}=\{0,1,2,...\}$ we shall denote the natural numbers. For every infinite subset $L$ of ${\mathbb{N}}$, by $[L]$ we denote the set of all infinite subsets of $L$. Our Banach space theoretic notation and terminology is standard and follows [@LT], while our descriptive set theoretic terminology follows [@Kechris]. If $X$ and $Y$ are Banach spaces, then we shall denote the fact that $X$ and $Y$ are isomorphic by $X\cong Y$. For the convenience of the reader, let us recall the following notions. A measurable space $(X,S)$ is said to be a *standard Borel space* if there exists a Polish topology $\tau$ on $X$ such that the Borel $\sigma$-algebra of $(X,\tau)$ coincides with $S$. A subset $B$ of a standard Borel space $(X,S)$ is said to be *analytic* if there exists a Borel map $f:{\mathbb{N}}^{\mathbb{N}}\to X$ such that $f({\mathbb{N}}^{\mathbb{N}})=B$. Finally, a seminormalized sequence $(x_n)$ in a Banach space $X$ is said to be *unconditional* if there exists a constant $C>0$ such that for every $k\in{\mathbb{N}}$, every $F\subseteq \{0,...,k\}$ and every $a_0,...,a_k\in{\mathbb{R}}$ we have $$\label{e13} \big\| \sum_{n\in F} a_n x_n \big\| \leq C \|\sum_{n=0}^k a_n x_n\|.$$ Trees ----- The concept of a tree has proved to be a very fruitful tool in the Geometry of Banach spaces, and it is decisive throughout this work.
Below we gather all the conventions concerning trees that we need. Let $\Lambda$ be a non-empty set. By $\Lambda^{<{\mathbb{N}}}$ we shall denote the set of all *non-empty* finite sequences in $\Lambda$. By $\sqsubset$ we shall denote the (strict) partial order on $\Lambda^{<{\mathbb{N}}}$ of end-extension. For every ${\sigma}\in \Lambda^{\mathbb{N}}$ and every $n\in{\mathbb{N}}$ with $n\geq 1$ we set ${\sigma}|n=\big({\sigma}(0),..., {\sigma}(n-1)\big)\in\Lambda^{<{\mathbb{N}}}$. Two nodes $s,t\in\Lambda^{<{\mathbb{N}}}$ are said to be *comparable* if either $s\sqsubseteq t$ or $t\sqsubseteq s$; otherwise they are said to be *incomparable*. A subset of $\Lambda^{<{\mathbb{N}}}$ consisting of pairwise comparable nodes is said to be a *chain*, while a subset of $\Lambda^{<{\mathbb{N}}}$ consisting of pairwise incomparable nodes is said to be an *antichain*. A *tree* $T$ on $\Lambda$ is a subset of $\Lambda^{<{\mathbb{N}}}$ satisfying $$\label{e2} \forall s,t\in\Lambda^{<{\mathbb{N}}} \ (t\in T \text{ and } s\sqsubset t\Rightarrow s\in T).$$ A tree $T$ is said to be *pruned* if for every $s\in T$ there exists $t\in T$ with $s\sqsubset t$. The *body* $[T]$ of a tree $T$ on $\Lambda$ is defined to be the set $\{{\sigma}\in\Lambda^{\mathbb{N}}: {\sigma}|n\in T \ \forall n\geq 1\}$. Notice that if $T$ is pruned, then $[T]\neq\varnothing$. A *segment* ${\mathfrak{s}}$ of a tree $T$ is a chain of $T$ satisfying $$\label{e3} \forall s,t,w\in \Lambda^{<{\mathbb{N}}} \ (s\sqsubseteq w \sqsubseteq t \ \text{ and } s,t\in{\mathfrak{s}}\Rightarrow w\in{\mathfrak{s}}).$$ If ${\mathfrak{s}}$ is a segment of $T$, then by $\min({\mathfrak{s}})$ we denote the $\sqsubseteq$-minimum node $t\in{\mathfrak{s}}$. 
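These order-theoretic conventions are easy to check mechanically. The following sketch is purely illustrative and not part of the paper's arguments: nodes of $\Lambda^{<{\mathbb{N}}}$ are encoded as Python tuples, so that $s\sqsubseteq t$ holds exactly when $s$ is an initial segment of $t$.

```python
def comparable(s, t):
    # s and t are comparable iff one is an initial segment of the other
    n = min(len(s), len(t))
    return s[:n] == t[:n]

def is_chain(nodes):
    # a chain consists of pairwise comparable nodes
    return all(comparable(s, t) for s in nodes for t in nodes)

def is_antichain(nodes):
    # an antichain consists of pairwise incomparable (distinct) nodes
    return all(s == t or not comparable(s, t)
               for s in nodes for t in nodes)

def is_segment(chain):
    # a segment is a chain closed under intermediate nodes:
    # s ⊑ w ⊑ t with s, t in the chain forces w in the chain
    nodes = set(chain)
    if not nodes:
        return True
    if not is_chain(nodes):
        return False
    lengths = sorted(len(t) for t in nodes)
    # a chain in a tree is a segment iff the node lengths form
    # an interval of integers (each intermediate length is hit once)
    return lengths == list(range(lengths[0], lengths[-1] + 1))
```

In particular, `is_segment` uses the observation that a chain in a tree is a segment exactly when the lengths of its nodes form an interval of integers.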
We say that two segments ${\mathfrak{s}}$ and ${\mathfrak{s}}'$ of $T$ are *incomparable* if for every $t\in{\mathfrak{s}}$ and every $t'\in{\mathfrak{s}}'$ the nodes $t$ and $t'$ are incomparable (notice that this is equivalent to saying that $\min({\mathfrak{s}})$ and $\min({\mathfrak{s}}')$ are incomparable). Embedding unconditionally saturated spaces into spaces with a basis =================================================================== The aim of this section is to give the proof of the following result. \[p21\] Let ${\mathcal{A}}$ be an analytic subset of ${\mathrm{US}}$. Then there exists an analytic subset ${\mathcal{A}}'$ of ${\mathrm{US}}$ with the following properties. 1. For every $Y\in {\mathcal{A}}'$ the space $Y$ has a Schauder basis. 2. For every $X\in {\mathcal{A}}$ there exists $Y\in {\mathcal{A}}'$ that contains an isometric copy of $X$. As we have already mentioned in the introduction, the proof of Proposition \[p21\] is based on a construction of ${\mathcal{L}}_\infty$-spaces due to J. Bourgain and G. Pisier [@BP], as well as on its parameterized version which has recently been obtained in [@D]. Let us first recall some definitions. If $X$ and $Y$ are two isomorphic Banach spaces (not necessarily infinite-dimensional), then their *Banach-Mazur distance* is defined by $$\label{e4} d(X,Y)=\inf\big\{ \|T\|\cdot \|T^{-1}\|: T:X\to Y \text{ is an isomorphism}\big\}.$$ Let now $X$ be an infinite-dimensional Banach space and $\lambda\geq 1$. The space $X$ is said to be a ${\mathcal{L}}_{\infty,\lambda}$-space if for every finite-dimensional subspace $F$ of $X$ there exists a finite-dimensional subspace $G$ of $X$ with $F\subseteq G$ and $d(G,\ell^n_\infty)\leq\lambda$, where $n=\mathrm{dim}(G)$. The space $X$ is said to be a ${\mathcal{L}}_{\infty,\lambda+}$-space if it is a ${\mathcal{L}}_{\infty,\theta}$-space for every $\theta>\lambda$.
Finally, $X$ is said to be a ${\mathcal{L}}_{\infty}$-space if it is a ${\mathcal{L}}_{\infty,\lambda}$-space for some $\lambda\geq 1$. The class of ${\mathcal{L}}_{\infty}$-spaces was defined by J. Lindenstrauss and A. Pe[ł]{}czyński [@LP]. For a comprehensive account of the theory of ${\mathcal{L}}_\infty$-spaces, as well as for a presentation of many remarkable examples, we refer to the monograph of J. Bourgain [@Bou2]. Let us also recall that a Banach space $X$ is said to have the *Schur* property if every weakly convergent sequence in $X$ is automatically norm convergent. It is an immediate consequence of Rosenthal’s Dichotomy [@Ro] that every space $X$ with the Schur property is hereditarily $\ell_1$; that is, every subspace $Y$ of $X$ has a further subspace isomorphic to $\ell_1$ (hence, every space with the Schur property is unconditionally saturated). The following theorem summarizes some of the basic properties of the Bourgain-Pisier construction. \[t22\] Let $\lambda>1$ and $X$ be a separable Banach space. Then there exists a separable ${\mathcal{L}}_{\infty,\lambda+}$-space, denoted by ${\mathcal{L}}_\lambda[X]$, which contains $X$ isometrically and is such that the quotient ${\mathcal{L}}_\lambda[X]/X$ has the Radon-Nikodym and the Schur properties. The parameterized version of Theorem \[t22\] reads as follows. \[t23\] For every $\lambda>1$, the set ${\mathcal{L}}_\lambda\subseteq {\mathrm{SB}}\times{\mathrm{SB}}$ defined by $$(X,Y)\in{\mathcal{L}}_\lambda \Leftrightarrow Y \text{ is isometric to } {\mathcal{L}}_\lambda[X]$$ is analytic. We will also need the following Ramsey-type lemma. Although it is well-known, we sketch its proof for completeness. \[l24\] Let $X$ be a Banach space and $Y$ be a closed subspace of $X$. Then, for every subspace $Z$ of $X$ there exists a further subspace $Z'$ of $Z$ such that $Z'$ is either isomorphic to a subspace of $Y$, or isomorphic to a subspace of $X/Y$.
In particular, if $Y$ and $X/Y$ are both unconditionally saturated, then so is $X$. Let $Q:X\to X/Y$ be the natural quotient map. Consider the following (mutually exclusive) cases. <span style="font-variant:small-caps;">Case 1.</span> *The operator $Q:Z\to X/Y$ is not strictly singular.* This case, by definition, yields the existence of a subspace $Z'$ of $Z$ such that $Q|_{Z'}$ is an isomorphic embedding. <span style="font-variant:small-caps;">Case 2.</span> *The operator $Q:Z\to X/Y$ is strictly singular.* In this case our hypothesis implies that for every subspace $Z'$ of $Z$ and every ${\varepsilon}>0$ we may find a normalized vector $z\in Z'$ such that $\|Q(z)\|\leq {\varepsilon}$. Hence, for every subspace $Z'$ of $Z$ and every ${\varepsilon}>0$ there exist a normalized vector $z\in Z'$ and a vector $y\in Y$ such that $\|z-y\|<{\varepsilon}$. So, we may construct a normalized Schauder basic sequence $(z_n)$ in $Z$ with basis constant $2$ and a sequence $(y_n)$ in $Y$ such that $\|z_n-y_n\|<1/8^n$ for every $n\in{\mathbb{N}}$. It follows that $(y_n)$ is equivalent to $(z_n)$ (see [@LT]). Setting $Z'=\overline{\mathrm{span}}\{z_n:n\in{\mathbb{N}}\}$, we see that $Z'$ is isomorphic to a subspace of $Y$. The proof is completed. We are ready to proceed to the proof of Proposition \[p21\]. Let ${\mathcal{A}}$ be an analytic subset of ${\mathrm{US}}$. Let also ${\mathcal{L}}_2$ be the subset of ${\mathrm{SB}}\times{\mathrm{SB}}$ obtained by applying Theorem \[t23\] for $\lambda=2$. We define ${\mathcal{A}}'\subseteq {\mathrm{SB}}$ by the rule $$Y\in {\mathcal{A}}' \Leftrightarrow \exists X \ \big[ X\in {\mathcal{A}}\text{ and } (X,Y)\in{\mathcal{L}}_2\big].$$ As both ${\mathcal{A}}$ and ${\mathcal{L}}_2$ are analytic and the class of analytic sets is closed under projections, we see that ${\mathcal{A}}'$ is analytic. We claim that ${\mathcal{A}}'$ is the desired set. Indeed, notice that property (ii) is an immediate consequence of Theorem \[t22\]. 
To see (i), let $Y\in {\mathcal{A}}'$ be arbitrary. There exists $X\in {\mathcal{A}}$ such that $Y$ is isometric to ${\mathcal{L}}_2[X]$. By Theorem \[t22\], we know that ${\mathcal{L}}_2[X]/X$ is unconditionally saturated. Recalling that $X$ is also unconditionally saturated, by Lemma \[l24\], we see that $Y\in {\mathrm{US}}$. Finally, our claim that $Y$ has a Schauder basis is an immediate consequence of the fact that $Y$ is a ${\mathcal{L}}_\infty$-space and of a classical result due to W. B. Johnson, H. P. Rosenthal and M. Zippin [@JRZ] asserting that every separable ${\mathcal{L}}_\infty$-space has a Schauder basis. The proof is completed. Schauder tree bases and $\ell_2$ Baire sums =========================================== Definitions and statements of the main results ---------------------------------------------- Let us begin by recalling the following notion. \[d31\] Let $X$ be a Banach space, $\Lambda$ a countable set and $T$ a pruned tree on $\Lambda$. Let also $(x_t)_{t\in T}$ be a normalized sequence in $X$ indexed by the tree $T$. We say that ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ is a Schauder tree basis if the following are satisfied. 1. $X={\overline{\mathrm{span}}}\{x_t:t\in T\}$. 2. For every ${\sigma}\in [T]$ the sequence $(x_{{\sigma}|n})_{n\geq 1}$ is a (normalized) bi-monotone Schauder basic sequence. Let ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ be a Schauder tree basis. For every ${\sigma}\in [T]$ we set $$\label{e5} X_{\sigma}={\overline{\mathrm{span}}}\{ x_{{\sigma}|n}:n\geq 1\}.$$ Notice that in Definition \[d31\] we do not assume that the subspace $X_{\sigma}$ of $X$ is complemented. Notice also that if ${\sigma}, \tau\in [T]$ with ${\sigma}\neq\tau$, then this does not necessarily imply that $X_{\sigma}\neq X_\tau$. \[ex32\] Let $X=c_0$ and $(e_n)$ be the standard unit vector basis of $c_0$. Let also $T=2^{<{\mathbb{N}}}$ be the Cantor tree; i.e. $T$ is the set of all non-empty finite sequences of $0$’s and $1$’s.
For every $t\in T$, denoting by $|t|$ the length of the finite sequence $t$, we define $x_t=e_{|t|-1}$. It is easy to see that the family $(X,2,T,(x_t)_{t\in T})$ is a Schauder tree basis. Observe that for every ${\sigma}\in [T]$ the sequence $(x_{{\sigma}|n})_{n\geq 1}$ is the standard basis of $c_0$. Hence, the just defined Schauder tree basis has been obtained by “spreading" along the branches of $2^{<{\mathbb{N}}}$ the standard basis of $c_0$. The notion of a Schauder tree basis serves as a technical vehicle for the construction of a “tree-like" Banach space in the spirit of R. C. James [@Ja2]. This is the content of the following definition. \[d33\] Let ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ be a Schauder tree basis. The $\ell_2$ Baire sum of $\mathfrak{X}$, denoted by ${T^{\mathfrak{X}}_2}$, is defined to be the completion of $c_{00}(T)$ equipped with the norm $$\label{e6} \|z\|_{{T^{\mathfrak{X}}_2}}= \sup\Big\{ \Big( \sum_{j=0}^l \big\| \sum_{t\in {\mathfrak{s}}_j} z(t) x_t\big\|^2_X \Big)^{1/2} \Big\}$$ where the above supremum is taken over all finite families $({\mathfrak{s}}_j)_{j=0}^l$ of pairwise incomparable segments of $T$. \[ex34\] Let $\mathfrak{X}$ be the Schauder tree basis described in Example \[ex32\] and consider the corresponding $\ell_2$ Baire sum ${T^{\mathfrak{X}}_2}$. Notice that if $z\in {T^{\mathfrak{X}}_2}$, then its norm is given by the formula $$\|z\|_{{T^{\mathfrak{X}}_2}}=\sup\Big\{ \Big(\sum_{j=0}^l z(t_j)^2\Big)^{1/2}: (t_j)_{j=0}^l \text{ is an antichain of } 2^{<{\mathbb{N}}}\Big\}.$$ This space has been defined by H. P. Rosenthal and it is known in the literature as the *$2$-stopping time* Banach space (see [@BO]). It is usually denoted by $S_2$. A very interesting fact concerning the structure of $S_2$ is that it contains almost isometric copies of $\ell_p$ for every $2\leq p<\infty$. This is due to H. P. Rosenthal and G. Schechtman (unpublished). 
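For a finitely supported vector $z$, the norm of Example \[ex34\] can be computed exactly: the supremum is attained at an antichain meeting only the support, and a maximum-weight antichain in a finite tree is found by the standard recursion that compares, at each node, the node's own weight with the total weight contributed by its subtrees. The sketch below is purely illustrative and not part of the construction; nodes of $2^{<{\mathbb{N}}}$ are encoded as tuples of $0$'s and $1$'s.

```python
def s2_norm(z):
    # z: dict mapping nodes of the Cantor tree (non-empty 0/1 tuples)
    # to real coefficients; returns the 2-stopping time norm of z,
    # i.e. the max over antichains A of sqrt(sum of z(t)^2 for t in A).
    support = set(z)

    def below(s):
        # does some support node weakly extend s?
        return any(u[:len(s)] == s for u in support)

    def best(t):
        # max, over antichains inside the subtree rooted at t,
        # of the sum of squared coefficients: either take t itself,
        # or combine the best antichains of its (relevant) subtrees
        kids = [t + (b,) for b in (0, 1) if below(t + (b,))]
        return max(z.get(t, 0.0) ** 2, sum(best(s) for s in kids))

    return sum(best(r) for r in [(0,), (1,)] if below(r)) ** 0.5
```

For instance, for $z=2e_{(0)}+e_{(0,0)}+e_{(0,1)}$ the best antichain is the single node $(0)$, so $\|z\|_{S_2}=2$, while the antichain $\{(0,0),(0,1)\}$ only gives $\sqrt{2}$.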
On the other hand, the space $S_2$ can contain no $\ell_p$ for $1\leq p<2$. Let ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ be a Schauder tree basis and consider the corresponding $\ell_2$ Baire sum ${T^{\mathfrak{X}}_2}$ of $\mathfrak{X}$. Let $(e_t)_{t\in T}$ be the standard Hamel basis of $c_{00}(T)$. We fix a bijection $h:T\to{\mathbb{N}}$ such that for every pair $t,s\in T$ we have $h(t)<h(s)$ if $t\sqsubset s$. If $(e_{t_n})$ is the enumeration of $(e_t)_{t\in T}$ according to $h$, then it is easy to verify that the sequence $(e_{t_n})$ defines a normalized bi-monotone Schauder basis of ${T^{\mathfrak{X}}_2}$. For every ${\sigma}\in [T]$ consider the subspace ${\mathcal{X}}_{\sigma}$ of ${T^{\mathfrak{X}}_2}$ defined by $$\label{e7} {\mathcal{X}}_{\sigma}={\overline{\mathrm{span}}}\{ e_{{\sigma}|n}:n\geq 1\}.$$ It is easily seen that the space ${\mathcal{X}}_{\sigma}$ is isometric to $X_{\sigma}$ and, moreover, it is $1$-complemented in ${T^{\mathfrak{X}}_2}$ via the natural projection $P_{\sigma}:{T^{\mathfrak{X}}_2}\to {\mathcal{X}}_{\sigma}$. More generally, for every segment ${\mathfrak{s}}$ of $T$ we set ${\mathcal{X}}_{\mathfrak{s}}={\overline{\mathrm{span}}}\{e_t:t\in{\mathfrak{s}}\}$. Again we see that ${\mathcal{X}}_{\mathfrak{s}}$ is isometric to the space ${\overline{\mathrm{span}}}\{x_t: t\in{\mathfrak{s}}\}$ and it is 1-complemented in ${T^{\mathfrak{X}}_2}$ via the natural projection $P_{\mathfrak{s}}:{T^{\mathfrak{X}}_2}\to {\mathcal{X}}_{\mathfrak{s}}$. If $x$ is a vector in ${T^{\mathfrak{X}}_2}$, then by ${\mathrm{supp}}(x)$ we shall denote its *support*; i.e. the set $\{t\in T: x(t)\neq 0\}$. The *range* of $x$, denoted by ${\mathrm{range}}(x)$, is defined to be the minimal interval $I$ of ${\mathbb{N}}$ satisfying ${\mathrm{supp}}(x)\subseteq \{t_n:n\in I\}$. We isolate, for future use, the following consequence of the enumeration $h$ of $T$. \[f35\] Let ${\mathfrak{s}}$ be a segment of $T$ and $I$ be an interval of ${\mathbb{N}}$. 
Consider the set ${\mathfrak{s}}'={\mathfrak{s}}\cap \{t_n: n\in I\}$. Then ${\mathfrak{s}}'$ is also a segment of $T$. Let now $Y$ be a subspace of ${T^{\mathfrak{X}}_2}$. Assume that there exist a subspace $Y'$ of $Y$ and a ${\sigma}\in [T]$ such that the operator $P_{\sigma}:Y'\to{\mathcal{X}}_{\sigma}$ is an isomorphic embedding. In such a case, the subspace $Y$ contains information about the Schauder tree basis ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$. On the other hand, there are subspaces of ${T^{\mathfrak{X}}_2}$ which are “orthogonal" to every ${\mathcal{X}}_{\sigma}$. These subspaces are naturally distinguished into two categories, as follows. \[d36\] Let ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ be a Schauder tree basis and let $Y$ be a subspace of ${T^{\mathfrak{X}}_2}$. 1. We say that $Y$ is $X$-singular if for every ${\sigma}\in [T]$ the operator $P_{\sigma}:Y\to{\mathcal{X}}_{\sigma}$ is strictly singular. 2. We say that $Y$ is $X$-compact if for every ${\sigma}\in [T]$ the operator $P_{\sigma}:Y\to{\mathcal{X}}_{\sigma}$ is compact. In this section, we focus on the structure of the class of $X$-singular subspaces of an arbitrary $\ell_2$ Baire sum. Our main results are summarized below. \[t37\] Let ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ be a Schauder tree basis and $Y$ be an $X$-singular subspace of ${T^{\mathfrak{X}}_2}$. Then $Y$ is unconditionally saturated. \[t38\] Let ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ be a Schauder tree basis and $Y$ be an $X$-singular subspace of ${T^{\mathfrak{X}}_2}$. Then for every normalized Schauder basic sequence $(x_n)$ in $Y$ there exists a normalized block sequence $(y_n)$ of $(x_n)$ satisfying an upper $\ell_2$ estimate.
That is, there exists a constant $C\geq 1$ such that for every $k\in{\mathbb{N}}$ and every $a_0,...,a_k\in {\mathbb{R}}$ we have $$\big\| \sum_{n=0}^k a_n y_n \big\|_{{T^{\mathfrak{X}}_2}} \leq C \Big( \sum_{n=0}^k |a_n|^2\Big)^{1/2}.$$ In particular, every $X$-singular subspace $Y$ of ${T^{\mathfrak{X}}_2}$ can contain no $\ell_p$ for $1\leq p<2$. We notice that in Theorem \[t38\] one cannot expect to obtain a block sequence satisfying a lower $\ell_2$ estimate. Indeed, as it has been shown in [@AD Theorem 25], if ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ is a Schauder tree basis such that the tree $T$ is not small (precisely, if the tree $T$ contains a perfect subtree), then one can find in ${T^{\mathfrak{X}}_2}$ a normalized block sequence $(x_n)$ which is equivalent to the standard basis of $c_0$ and which spans an $X$-singular subspace. Clearly, no block subsequence of $(x_n)$ can have a lower $\ell_2$ estimate. The rest of this section is organized as follows. In §3.2 we provide a characterization of the class of $X$-singular subspaces of ${T^{\mathfrak{X}}_2}$. Using this characterization we show, for instance, that every $X$-singular subspace of ${T^{\mathfrak{X}}_2}$ contains an $X$-compact subspace. This can be seen as a “tree" version of the classical theorem of T. Kato asserting that for every strictly singular operator $T:X\to Y$ there is an infinite-dimensional subspace $Z$ of $X$ such that the operator $T:Z\to Y$ is compact. In §3.3 we give the proofs of Theorem \[t37\] and of Theorem \[t38\]. A characterization of $X$-singular subspaces -------------------------------------------- We start with the following definition. \[d39\] Let ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ be a Schauder tree basis. 
The $c_0$ Baire sum of $\mathfrak{X}$, denoted by ${T^{\mathfrak{X}}_0}$, is defined to be the completion of $c_{00}(T)$ equipped with the norm $$\label{e8} \|z\|_{{T^{\mathfrak{X}}_0}}= \sup\Big\{ \big\| \sum_{t\in {\mathfrak{s}}} z(t) x_t\big\|_X : {\mathfrak{s}}\text{ is a segment of } T \Big\}.$$ By $\mathrm{I}:{T^{\mathfrak{X}}_2}\to {T^{\mathfrak{X}}_0}$ we shall denote the natural inclusion operator. Our characterization of $X$-singular subspaces of ${T^{\mathfrak{X}}_2}$ is achieved by considering the functional analytic properties of the inclusion operator $\mathrm{I}:{T^{\mathfrak{X}}_2}\to{T^{\mathfrak{X}}_0}$. Precisely, we have the following. \[p310\] Let ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ be a Schauder tree basis. Let $Y$ be a subspace of ${T^{\mathfrak{X}}_2}$. Then the following are equivalent. 1. $Y$ is an $X$-singular subspace of ${T^{\mathfrak{X}}_2}$. 2. The operator $\mathrm{I}:Y\to{T^{\mathfrak{X}}_0}$ is strictly singular. Let us isolate two consequences of Proposition \[p310\]. The one that follows is simply a restatement of Proposition \[p310\]. \[c311\] Let ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ be a Schauder tree basis and $Y$ be a block subspace of ${T^{\mathfrak{X}}_2}$. Assume that $Y$ is $X$-singular. Then for every ${\varepsilon}>0$ we may find a finitely supported vector $y\in Y$ with $\|y\|=1$ and such that $\|P_{\mathfrak{s}}(y)\|\leq {\varepsilon}$ for every segment ${\mathfrak{s}}$ of $T$. \[c312\] Let ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ be a Schauder tree basis and $Y$ be an infinite-dimensional subspace of ${T^{\mathfrak{X}}_2}$. Assume that $Y$ is $X$-singular. Then there exists an infinite-dimensional subspace $Y'$ of $Y$ which is $X$-compact. By Proposition \[p310\], the operator $\mathrm{I}:Y\to{T^{\mathfrak{X}}_0}$ is strictly singular. By [@LT Proposition 2.c.4], there exists an infinite-dimensional subspace $Y'$ of $Y$ such that the operator $\mathrm{I}:Y'\to {T^{\mathfrak{X}}_0}$ is compact. 
It is easy to see that $Y'$ must be an $X$-compact subspace of ${T^{\mathfrak{X}}_2}$ in the sense of Definition \[d36\](b). The proof is completed. For the proof of Proposition \[p310\] we need a couple of results from [@AD]. The first one is the following (see [@AD Lemma 17]). \[l313\] Let $(x_n)$ be a bounded block sequence in ${T^{\mathfrak{X}}_2}$ and ${\varepsilon}>0$ be such that $\limsup\|P_{\sigma}(x_n)\|<{\varepsilon}$ for every ${\sigma}\in [T]$. Then there exists $L\in [{\mathbb{N}}]$ such that for every ${\sigma}\in [T]$ we have $|\{n\in L: \|P_{\sigma}(x_n)\|\geq{\varepsilon}\}|\leq 1$. The second result is the following special case of [@AD Proposition 33]. \[p314\] Let $Y$ be a block $X$-singular subspace of ${T^{\mathfrak{X}}_2}$. Then for every ${\varepsilon}>0$ we may find a normalized block sequence $(y_n)$ in $Y$ such that for every ${\sigma}\in [T]$ we have $\limsup \|P_{\sigma}(y_n)\|<{\varepsilon}$. We are ready to proceed to the proof of Proposition \[p310\]. It is clear that (ii) implies (i). Hence we only need to show the converse implication. We argue by contradiction. So, assume that $Y$ is an $X$-singular subspace of ${T^{\mathfrak{X}}_2}$ such that the operator $\mathrm{I}:Y\to{T^{\mathfrak{X}}_0}$ is not strictly singular. By definition, there exists a further subspace $Y'$ of $Y$ such that $\mathrm{I}: Y'\to{T^{\mathfrak{X}}_0}$ is an isomorphic embedding. Using a sliding hump argument, we may recursively select a normalized basic sequence $(y_n)$ in $Y'$ and a normalized block sequence $(z_n)$ in ${T^{\mathfrak{X}}_2}$ such that, setting $Z={\overline{\mathrm{span}}}\{z_n:n\in{\mathbb{N}}\}$, the following are satisfied. 1. The sequence $(z_n)$ is equivalent to $(y_n)$. 2. The subspace $Z$ of ${T^{\mathfrak{X}}_2}$ is $X$-singular. 3. The operator $\mathrm{I}:Z\to{T^{\mathfrak{X}}_0}$ is an isomorphic embedding. The selection is fairly standard (we leave the details to the interested reader). 
By (c) above, there exists a constant $C>0$ such that for every $z\in Z$ we have $$\label{e9} C\|z\|_{{T^{\mathfrak{X}}_2}}\leq \|z\|_{{T^{\mathfrak{X}}_0}} \leq \|z\|_{{T^{\mathfrak{X}}_2}}.$$ We fix $k_0\in{\mathbb{N}}$ and ${\varepsilon}>0$ satisfying $$\label{e10} k_0>\frac{64}{C^4} \ \text{ and } \ {\varepsilon}<\min\Big\{ \frac{C}{2}, \frac{1}{k_0}\Big\}.$$ By (b) above, we may apply Proposition \[p314\] to the block subspace $Z$ of ${T^{\mathfrak{X}}_2}$ and the chosen ${\varepsilon}$. It follows that there exists a normalized block sequence $(x_n)$ in $Z$ such that $\limsup\|P_{\sigma}(x_n)\|<{\varepsilon}$ for every ${\sigma}\in [T]$. By Lemma \[l313\] and by passing to a subsequence of $(x_n)$ if necessary, we may additionally assume that for every ${\sigma}\in [T]$ we have $|\{n\in{\mathbb{N}}:\|P_{\sigma}(x_n)\|\geq{\varepsilon}\}|\leq 1$. As the basis of ${T^{\mathfrak{X}}_2}$ is bi-monotone, we may strengthen this property to the following one. 1. For every segment ${\mathfrak{s}}$ of $T$ we have $|\{n\in{\mathbb{N}}: \|P_{\mathfrak{s}}(x_n)\|\geq{\varepsilon}\}|\leq 1$. By Fact \[f35\] and (\[e9\]), for every $n\in{\mathbb{N}}$ we may select a segment ${\mathfrak{s}}_n$ of $T$ such that 1. $\|P_{{\mathfrak{s}}_n}(x_n)\|\geq C$ and 2. ${\mathfrak{s}}_n\subseteq\{t_k:k\in{\mathrm{range}}(x_n)\}$. As the sequence $(x_n)$ is block, we see that such a selection guarantees that 1. $\|P_{{\mathfrak{s}}_n}(x_m)\|=0$ for every $n,m\in{\mathbb{N}}$ with $n\neq m$. We set $t_n=\min({\mathfrak{s}}_n)$. Applying the classical Ramsey Theorem we find an infinite subset $L=\{l_0<l_1<l_2<...\}$ of ${\mathbb{N}}$ such that one of the following (mutually exclusive) cases must occur. <span style="font-variant:small-caps;">Case 1.</span> *The set $\{t_n:n\in L\}$ is an antichain.* Our hypothesis in this case implies that for every $n,m\in L$ with $n\neq m$ the segments ${\mathfrak{s}}_n$ and ${\mathfrak{s}}_m$ are incomparable. We define $z=x_{l_0}+...+x_{l_{k_0}}$. 
As the family $({\mathfrak{s}}_{l_i})_{i=0}^{k_0}$ consists of pairwise incomparable segments of $T$, we get that $$\label{e11} \|z\|\geq \Big( \sum_{i=0}^{k_0} \|P_{{\mathfrak{s}}_{l_i}}(z)\|^2 \Big)^{1/2} \stackrel{(\mathrm{g})}{=} \Big( \sum_{i=0}^{k_0} \|P_{{\mathfrak{s}}_{l_i}}(x_{l_i})\|^2 \Big)^{1/2} \stackrel{(\mathrm{e})}{\geq} C \sqrt{k_0+1}.$$ Now we set $w=z/\|z\|\in Z$. Invoking (d) above, inequality (\[e11\]) and the choice of $k_0$ and ${\varepsilon}$ made in (\[e10\]), for every segment ${\mathfrak{s}}$ of $T$ we have $$\|P_{\mathfrak{s}}(w)\|\leq \frac{1+k_0{\varepsilon}}{C\sqrt{k_0+1}}<\frac{C}{2}.$$ (Indeed, by (d) at most one of the summands $x_{l_0},...,x_{l_{k_0}}$ of $z$ satisfies $\|P_{\mathfrak{s}}(x_{l_i})\|\geq{\varepsilon}$, so $\|P_{\mathfrak{s}}(z)\|\leq 1+k_0{\varepsilon}<2$; moreover, $k_0>64/C^4$ gives $C\sqrt{k_0+1}>8/C$, so the fraction is less than $C/4$.) It follows that $$\|w\|_{{T^{\mathfrak{X}}_0}}\leq \frac{C}{2}$$ which contradicts inequality (\[e9\]). Hence this case is impossible. <span style="font-variant:small-caps;">Case 2.</span> *The set $\{t_n:n\in L\}$ is a chain.* Let $\tau\in [T]$ be the branch of $T$ determined by the infinite chain $\{t_n:n\in L\}$. By (d) above and by passing to an infinite subset of $L$ if necessary, we may assume that $\|P_\tau(x_n)\|<{\varepsilon}$ for every $n\in L$. The basis of ${T^{\mathfrak{X}}_2}$ is bi-monotone, and so, we have the following property. 1. If ${\mathfrak{s}}$ is a segment of $T$ with ${\mathfrak{s}}\subseteq \tau$, then $\|P_{\mathfrak{s}}(x_n)\|<{\varepsilon}$ for every $n\in L$. We set ${\mathfrak{s}}'_n={\mathfrak{s}}_n\setminus \tau$. Observe that the set ${\mathfrak{s}}'_n$ is a sub-segment of ${\mathfrak{s}}_n$. Notice that ${\mathfrak{s}}_n$ is the disjoint union of the successive segments ${\mathfrak{s}}_n\cap \tau$ and ${\mathfrak{s}}'_n$. Hence, by properties (e) and (h) above and the choice of ${\varepsilon}$, we see that $$\label{e12} \|P_{{\mathfrak{s}}'_n}(x_n)\|\geq C-{\varepsilon}\geq \frac{C}{2}$$ for every $n\in L$. Notice also that if $n,m\in L$ with $n\neq m$, then the segments ${\mathfrak{s}}'_n$ and ${\mathfrak{s}}'_m$ are incomparable.
We set $$z=x_{l_0}+...+x_{l_{k_0}} \ \text{ and } \ w=\frac{z}{\|z\|}.$$ Arguing precisely as in Case 1 and using the estimate in (\[e12\]), we conclude that $$\|w\|_{{T^{\mathfrak{X}}_0}}\leq \frac{C}{2}.$$ This is again a contradiction. The proof of Proposition \[p310\] is completed. Proof of Theorem \[t37\] and of Theorem \[t38\] ----------------------------------------------- We start with the following lemma. \[l315\] Let ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ be a Schauder tree basis. Let $(w_n)$ be a normalized block sequence in ${T^{\mathfrak{X}}_2}$ such that for every $n\in{\mathbb{N}}$ with $n\geq 1$ and every segment ${\mathfrak{s}}$ of $T$ we have $$\label{e14} \|P_{\mathfrak{s}}(w_n)\|\leq \frac{1}{\sum_{i=0}^{n-1} |{\mathrm{supp}}(w_i)|^{1/2}} \cdot \frac{1}{2^{n+2}}.$$ Then the following are satisfied. 1. The sequence $(w_n)$ is unconditional. 2. The sequence $(w_n)$ satisfies an upper $\ell_2$ estimate. We will only give the proof of part (i). For a proof of part (ii) we refer to [@AD Proposition 21]. So, let $k\in{\mathbb{N}}$ and $a_0,...,a_k\in{\mathbb{R}}$ be such that $\|\sum_{n=0}^k a_n w_n\|=1$. Let also $F\subseteq \{0,...,k\}$ with $F=\{n_0<...<n_p\}$ its increasing enumeration. We will show that $\|\sum_{n\in F} a_n w_n\|\leq \sqrt{3}$. This will clearly finish the proof. For notational simplicity, we set $$w=\sum_{n=0}^k a_n w_n \ \text{ and } \ z=\sum_{n\in F} a_n w_n.$$ Let $({\mathfrak{s}}_j)_{j=0}^l$ be an arbitrary collection of pairwise incomparable segments of $T$. We want to estimate the sum $\sum_{j=0}^l \|P_{{\mathfrak{s}}_j}(z)\|^2$. To this end, we may assume that for every $j\in \{0,...,l\}$ there exists $i\in\{0,...,p\}$ with ${\mathfrak{s}}_j\cap {\mathrm{supp}}(w_{n_i})\neq\varnothing$. 
We define recursively a partition $(\Delta_i)_{i=0}^p$ of $\{0,...,l\}$ by the rule $$\begin{aligned} \Delta_0 & = & \big\{ j\in\{0,...,l\}: {\mathfrak{s}}_j\cap {\mathrm{supp}}(w_{n_0})\neq\varnothing\big\}\\ \Delta_1 & = & \big\{ j\in\{0,...,l\}\setminus \Delta_0: {\mathfrak{s}}_j\cap {\mathrm{supp}}(w_{n_1})\neq \varnothing\big\}\\ \vdots & & \\ \Delta_p & = & \Big\{ j\in \{0,...,l\}\setminus \Big( \bigcup_{i=0}^{p-1} \Delta_i\Big): {\mathfrak{s}}_j\cap {\mathrm{supp}}(w_{n_p})\neq\varnothing\Big\}.\end{aligned}$$ The segments $({\mathfrak{s}}_j)_{j=0}^l$ are pairwise incomparable and a fortiori disjoint. It follows that $$\label{e15} |\Delta_i|\leq |{\mathrm{supp}}(w_{n_i})| \ \text{ for every } i\in \{0,...,p\}.$$ Notice also that for every $0\leq i< q\leq p$ we have $$\label{e16} \sum_{j\in \Delta_q} \|P_{{\mathfrak{s}}_j}(w_{n_i})\|=0.$$ Let $j\in\{0,...,l\}$. There exists a unique $i\in \{0,...,p\}$ such that $j\in \Delta_i$. By Fact \[f35\], we may select a segment ${\mathfrak{s}}'_j$ of $T$ such that 1. ${\mathfrak{s}}'_j\subseteq {\mathfrak{s}}_j$, 2. ${\mathfrak{s}}'_j\subseteq \{t_m:m\in{\mathrm{range}}(w_{n_i})\}$ and 3. $\|P_{{\mathfrak{s}}_j}(a_{n_i} w_{n_i})\|= \|P_{{\mathfrak{s}}'_j}(a_{n_i} w_{n_i})\|$. The above selection guarantees the following properties. 1. The family $({\mathfrak{s}}'_j)_{j=0}^l$ consists of pairwise incomparable segments of $T$. This is a straightforward consequence of (a) above and of our assumptions on the family $({\mathfrak{s}}_j)_{j=0}^l$. 2. We have $\|P_{{\mathfrak{s}}_j}(a_{n_i} w_{n_i})\|= \|P_{{\mathfrak{s}}'_j}(a_{n_i} w_{n_i})\| = \|P_{{\mathfrak{s}}'_j}(w)\|$. This is a consequence of (b) and (c) above and of the fact that the sequence $(w_n)$ is block. We are ready for the final part of the argument. Let $i\in\{0,...,p\}$ and $j\in \Delta_i$. Our goal is to estimate the quantity $\|P_{{\mathfrak{s}}_j}(z)\|$.
First we notice that $$\begin{aligned} \|P_{{\mathfrak{s}}_j}(z)\| & \stackrel{(\ref{e16})}{=} & \|P_{{\mathfrak{s}}_j}(a_{n_i} w_{n_i} +...+ a_{n_p} w_{n_p})\| \\ & \leq & \|P_{{\mathfrak{s}}_j}(a_{n_i}w_{n_i})\|+ \sum_{q=i+1}^p |a_{n_q}|\cdot \|P_{{\mathfrak{s}}_j}(w_{n_q})\|.\end{aligned}$$ Invoking the fact that the Schauder basis $(e_t)_{t\in T}$ of ${T^{\mathfrak{X}}_2}$ is bi-monotone and (\[e14\]), we see that for every $q\in\{i+1,...,p\}$ we have $\|P_{{\mathfrak{s}}_j}(w_{n_q})\|\leq |{\mathrm{supp}}(w_{n_i})|^{-1/2} \cdot 2^{-(q+2)}$ and $|a_{n_q}|\leq 1$. Hence, the previous estimate yields $$\begin{aligned} \|P_{{\mathfrak{s}}_j}(z)\| & \leq & \|P_{{\mathfrak{s}}_j}(a_{n_i}w_{n_i})\|+ \frac{1}{|{\mathrm{supp}}(w_{n_i})|^{1/2}}\cdot \sum_{q=i+1}^p \frac{1}{2^{q+2}} \nonumber \\ & \stackrel{(\ref{e15})}{\leq} & \|P_{{\mathfrak{s}}_j}(a_{n_i}w_{n_i})\|+ \frac{1}{|\Delta_i|^{1/2}}\cdot \frac{1}{2^{i+2}} \\ & \stackrel{(\mathrm{e})}{=} & \|P_{{\mathfrak{s}}'_j}(w)\|+ \frac{1}{|\Delta_i|^{1/2}}\cdot \frac{1}{2^{i+2}}.\end{aligned}$$ The above inequality, in turn, implies that if $\Delta_i$ is non-empty, then $$\begin{aligned} \label{e17} \sum_{j\in \Delta_i} \|P_{{\mathfrak{s}}_j}(z)\|^2 & \leq & 2 \sum_{j\in \Delta_i} \|P_{{\mathfrak{s}}'_j}(w)\|^2 + 2 \sum_{j\in \Delta_i} \frac{1}{|\Delta_i|} \cdot \frac{1}{2^{i+2}} \nonumber \\ & \leq & 2 \sum_{j\in \Delta_i} \|P_{{\mathfrak{s}}'_j}(w)\|^2 + \frac{1}{2^{i+1}}.\end{aligned}$$ Summarizing, we see that $$\sum_{j=0}^l \|P_{{\mathfrak{s}}_j}(z)\|^2 = \sum_{i=0}^p \sum_{j\in \Delta_i} \|P_{{\mathfrak{s}}_j}(z)\|^2 \stackrel{(\ref{e17})}{\leq} 2 \sum_{j=0}^l \|P_{{\mathfrak{s}}'_j}(w)\|^2 + 1 \stackrel{(\mathrm{d})}{\leq} 2 \|w\|^2+1 \leq 3.$$ The family $({\mathfrak{s}}_j)_{j=0}^l$ was arbitrary, and so, $\|z\|\leq \sqrt{3}$. The proof is completed. We continue with the proof of Theorem \[t37\]. Let $Y$ be an $X$-singular subspace of ${T^{\mathfrak{X}}_2}$. 
Clearly every subspace $Y'$ of $Y$ is also $X$-singular. Hence, it is enough to show that every $X$-singular subspace contains an unconditional basic sequence. So, let $Y$ be one. Using a sliding hump argument, we may additionally assume that $Y$ is a block subspace of ${T^{\mathfrak{X}}_2}$. Recursively and with the help of Corollary \[c311\], we may construct a normalized block sequence $(w_n)$ in $Y$ such that for every $n\in{\mathbb{N}}$ with $n\geq 1$ and every segment ${\mathfrak{s}}$ of $T$ we have $$\|P_{\mathfrak{s}}(w_n)\|\leq \frac{1}{\sum_{i=0}^{n-1} |{\mathrm{supp}}(w_i)|^{1/2}} \cdot \frac{1}{2^{n+2}}.$$ By Lemma \[l315\](i), the sequence $(w_n)$ is unconditional. The proof is completed. We proceed to the proof of Theorem \[t38\]. Let $Y$ be an $X$-singular subspace of ${T^{\mathfrak{X}}_2}$. Let also $(x_n)$ be a normalized Schauder basic sequence in $Y$. A standard sliding hump argument allows us to construct a normalized block sequence $(v_n)$ of $(x_n)$ and a block sequence $(z_n)$ in ${T^{\mathfrak{X}}_2}$ such that, setting $Z={\overline{\mathrm{span}}}\{z_n:n\in{\mathbb{N}}\}$, the following are satisfied. 1. The sequences $(v_n)$ and $(z_n)$ are equivalent. 2. The subspace $Z$ of ${T^{\mathfrak{X}}_2}$ is $X$-singular. As in the proof of Theorem \[t37\], using (b) above and Corollary \[c311\], we construct a normalized block sequence $(w_n)$ of $(z_n)$ such that for every $n\in{\mathbb{N}}$ with $n\geq 1$ and every segment ${\mathfrak{s}}$ of $T$ inequality (\[e14\]) is satisfied for the sequence $(w_n)$. By Lemma \[l315\](ii), the sequence $(w_n)$ satisfies an upper $\ell_2$ estimate. Let $(b_n)$ be the block sequence of $(v_n)$ corresponding to $(w_n)$. Observe that, by (a) above, the sequence $(b_n)$ is seminormalized and satisfies an upper $\ell_2$ estimate. The property of being a block sequence is transitive, and so, $(b_n)$ is a block sequence of $(x_n)$ as well.
Hence, setting $y_n=b_n/\|b_n\|$ for every $n\in{\mathbb{N}}$, we see that the sequence $(y_n)$ is the desired one. Finally, to see that every $X$-singular subspace of ${T^{\mathfrak{X}}_2}$ can contain no $\ell_p$ for $1\leq p<2$, we argue by contradiction. So, assume that $Y$ is an $X$-singular subspace of ${T^{\mathfrak{X}}_2}$ containing an isomorphic copy of $\ell_{p_0}$ for some $1\leq p_0<2$. There exists, in such a case, a normalized basic sequence $(x_n)$ in $Y$ which is equivalent to the standard unit vector basis $(e_n)$ of $\ell_{p_0}$. Let $(y_n)$ be a normalized block subsequence of $(x_n)$ satisfying an upper $\ell_2$ estimate. As any normalized block subsequence of $(e_n)$ is equivalent to $(e_n)$ (see [@LT]), we see that there must exist constants $C\geq c>0$ such that for every $k\in{\mathbb{N}}$ and every $a_0,...,a_k\in{\mathbb{R}}$ we have $$c \Big( \sum_{n=0}^k |a_n|^{p_0}\Big)^{1/p_0} \leq \big\| \sum_{n=0}^k a_n y_n \big\|_{{T^{\mathfrak{X}}_2}} \leq C \Big( \sum_{n=0}^k |a_n|^{2}\Big)^{1/2}.$$ This is clearly a contradiction. The proof is completed. We close this section by recording the following consequence of Theorem \[t38\]. \[c316\] Let ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ be a Schauder tree basis. Let $1\leq p<2$. Then the following are equivalent. 1. The space ${T^{\mathfrak{X}}_2}$ contains an isomorphic copy of $\ell_p$. 2. There exists ${\sigma}\in [T]$ such that $X_{\sigma}$ contains an isomorphic copy of $\ell_p$. It is clear that (ii) implies (i). Conversely, assume that $\ell_p$ embeds into ${T^{\mathfrak{X}}_2}$ and let $Y$ be a subspace of ${T^{\mathfrak{X}}_2}$ which is isomorphic to $\ell_p$. By Theorem \[t38\], we see that $Y$ is not $X$-singular. Hence, there exist ${\sigma}\in [T]$ and an infinite-dimensional subspace $Y'$ of $Y$ such that $P_{\sigma}:Y'\to{\mathcal{X}}_{\sigma}$ is an isomorphic embedding. 
Recalling that every subspace of $\ell_p$ contains a copy of $\ell_p$ and that the spaces ${\mathcal{X}}_{\sigma}$ and $X_{\sigma}$ are isometric, the result follows. The main result =============== This section is devoted to the proof of Theorem \[int1\] stated in the introduction. To this end, we will need the following correspondence principle between analytic classes of separable Banach spaces and Schauder tree bases (see [@AD Proposition 83] or [@D Lemma 32]). \[l41\] Let ${\mathcal{A}}'$ be an analytic subset of ${\mathrm{SB}}$ such that every $Y\in{\mathcal{A}}'$ has a Schauder basis. Then there exist a separable Banach space $X$, a pruned tree $T$ on ${\mathbb{N}}\times{\mathbb{N}}$ and a normalized sequence $(x_t)_{t\in T}$ in $X$ such that the following are satisfied. 1. The family ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ is a Schauder tree basis. 2. For every $Y\in{\mathcal{A}}'$ there exists ${\sigma}\in [T]$ with $Y\cong X_{\sigma}$. 3. For every ${\sigma}\in [T]$ there exists $Y\in{\mathcal{A}}'$ with $X_{\sigma}\cong Y$. We are now ready to proceed to the proof of Theorem \[int1\]. So, let ${\mathcal{A}}$ be an analytic subset of ${\mathrm{US}}$. We apply Proposition \[p21\] and we get a subset ${\mathcal{A}}'$ of ${\mathrm{SB}}$ with the following properties. 1. The set ${\mathcal{A}}'$ is analytic. 2. Every $Y\in{\mathcal{A}}'$ has a Schauder basis. 3. Every $Y\in{\mathcal{A}}'$ is unconditionally saturated. 4. For every $X\in{\mathcal{A}}$ there exists $Y\in{\mathcal{A}}'$ such that $Y$ contains an isometric copy of $X$. By (a) and (b) above, we apply Lemma \[l41\] to the set ${\mathcal{A}}'$ and we get a Schauder tree basis ${\mathfrak{X}=(X,\Lambda,T,(x_t)_{t\in T})}$ satisfying the following. 1. For every $Y\in{\mathcal{A}}'$ there exists ${\sigma}\in [T]$ with $Y\cong X_{\sigma}$. 2. For every ${\sigma}\in [T]$ there exists $Y\in{\mathcal{A}}'$ such that $X_{\sigma}\cong Y$. 
Consider the $\ell_2$ Baire sum ${T^{\mathfrak{X}}_2}$ of this Schauder tree basis $\mathfrak{X}$. We claim that the space ${T^{\mathfrak{X}}_2}$ is the desired one. Indeed, recall first that ${T^{\mathfrak{X}}_2}$ has a Schauder basis. Moreover, by (d) and (e) above we see that ${T^{\mathfrak{X}}_2}$ contains an isomorphic copy of every $X\in{\mathcal{A}}$. What remains is to check that ${T^{\mathfrak{X}}_2}$ is unconditionally saturated. To this end, let $Z$ be an arbitrary subspace of ${T^{\mathfrak{X}}_2}$. We have to show that the space $Z$ contains an unconditional basic sequence. We distinguish the following (mutually exclusive) cases. <span style="font-variant:small-caps;">Case 1.</span> *The subspace $Z$ is not $X$-singular.* In this case, by definition, there exist ${\sigma}\in [T]$ and a further subspace $Z'$ of $Z$ such that the operator $P_{\sigma}:Z'\to {\mathcal{X}}_{\sigma}$ is an isomorphic embedding. By (f) and (c) above, we get that $Z'$ must contain an unconditional basic sequence. <span style="font-variant:small-caps;">Case 2.</span> *The subspace $Z$ is $X$-singular.* By Theorem \[t37\], we see that in this case the subspace $Z$ must also contain an unconditional basic sequence. By the above, it follows that ${T^{\mathfrak{X}}_2}$ is unconditionally saturated. The proof of Theorem \[int1\] is completed. [99]{} S. A. Argyros and P. Dodos, *Genericity and amalgamation of classes of Banach spaces*, Adv. Math., 209 (2007), 666-748. H. Bang and E. Odell, *On the stopping time Banach space*, Quarterly J. Math. Oxford, 40 (1989), 257-273. B. Bossard, *A coding of separable Banach spaces. Analytic and co-analytic families of Banach spaces*, Fund. Math., 172 (2002), 117-152. J. Bourgain, *On separable Banach spaces, universal for all separable reflexive spaces*, Proc. AMS, 79 (1980), 241-246. J. Bourgain, *New classes of ${\mathcal{L}}_p$ Spaces*, Lecture Notes in Math., 889, Springer-Verlag, 1981 J. Bourgain and G. 
Pisier, *A construction of ${\mathcal{L}}_\infty$-spaces and related Banach spaces*, Bol. Soc. Brasil. Mat., 14 (1983), 109-123. P. Dodos, *On classes of Banach spaces admitting “small" universal spaces* (preprint). P. Dodos and V. Ferenczi, *Some strongly bounded classes of Banach spaces*, Fund. Math., 193 (2007), 171-179. W. T. Gowers and B. Maurey, *The unconditional basic sequence problem*, Journal AMS, 6 (1993), 851-874. R. C. James, *Bases and reflexivity of Banach spaces*, Ann. Math., 52 (1950), 518-527. R. C. James, *A separable somewhat reflexive Banach space with non-separable dual*, Bull. AMS, 80 (1974), 738-743. W. B. Johnson, H. P. Rosenthal and M. Zippin, *On bases, finite-dimensional decompositions and weaker structures in Banach spaces*, Israel J. Math., 9 (1971), 488-506. A. S. Kechris, *Classical Descriptive Set Theory*, Grad. Texts in Math., 156, Springer-Verlag, 1995. J. Lindenstrauss and A. Pe[ł]{}czyński, *Absolutely summing operators in ${\mathcal{L}}_p$ spaces and their applications*, Studia Math., 29 (1968), 275-326. J. Lindenstrauss and L. Tzafriri, *Classical Banach spaces vol. I: Sequence spaces*, Ergebnisse, vol. 92, Springer, 1977. A. Pe[ł]{}czyński, *Universal bases*, Studia Math., 32 (1969), 247-268. H. P. Rosenthal, *A characterization of Banach spaces containing $\ell_1$*, Proc. Nat. Acad. Sci. USA, 71 (1974), 2411-2413.
--- abstract: 'These lectures introduce the basic ideas and practices of statistical analysis for particle physicists, using a real-world example to illustrate how the abstractions on which statistics is based are translated into practical application.' author: - 'Harrison B. Prosper' title: Practical Statistics for Particle Physicists --- Introduction ============ The day-to-day task of particle physicists is to suggest, build, test, discard, and/or refine models of the observed regularities in Nature with the ultimate goal of building a comprehensive model that answers all the scientific questions we might think to ask. One goal of experimental particle physicists is to make quantitative statements about the parameters $\theta$ of a model given a set of experimental observations $X$. However, in order to make such statements, the connection between the observations and the model parameters must itself be modeled, and herein lies a difficulty. While there is general agreement about how to connect model parameters to data, there is a long history [@Chatterjee] of disagreement about the best way to solve the inverse problem, that is, to go from observations to model parameters. The solution of this inverse problem requires a theory of inference. These lectures introduce two broad classes of theories of inference, the frequentist and Bayesian approaches. While our focus is on the practical, we do not shy away from brief discussions of foundations. We do so in order to make two points. The first is that when it comes to statistics, there is no such thing as “the" answer; rather there are answers based on assumptions, or proposals, on which reasonable people may disagree for purely intellectual reasons. Second, none of the current theories of inference is perfect.
It is worth appreciating these points, even superficially, if only to avoid fruitless arguments that cannot be resolved because they are ultimately about intellectual taste rather than mathematical correctness. For more in-depth expositions of the topics here covered, and different points of view, we highly recommend the excellent textbooks on statistics written for physicists, by physicists [@James; @Cowan; @Barlow]. Lecture 1: Descriptive Statistics, Probability and Likelihood ============================================================= Descriptive Statistics {#sec:statistics} ---------------------- Suppose we have a sample of $N$ data $X = x_1, x_2, \cdots, x_N$. It is often useful to summarize these data with a few numbers called statistics. A **statistic** is any number that can be calculated from the data and known parameters. For example, $t = (x_1 + x_N)/2$ is a statistic, but if the value of $\theta$ is unknown $t = (x_1 - \theta)^2$ is not. However, a word of caution is in order: we particle physicists are prone to misuse the jargon of professional statisticians. For example, we tend to refer to *any* function of the data as a statistic including those that contain unknown parameters. The two most important statistics are $$\begin{aligned} \label{eq:xbar} &\textrm{the {\bf sample mean} (or average)} & \bar{x} & = \frac{1}{N} \sum_{i=1}^N x_i, \\ \label{eq:xvar} &\textrm{and the {\bf sample variance}} & s^2 & = \frac{1}{N} \sum_{i=1}^N (x_i - \bar{x})^2, \nonumber\\ & & & = \frac{1}{N} \sum_{i=1}^N x_i^2 - \bar{x}^2, \nonumber\\ & & & = \overline{x^2} - \bar{x}^2.\end{aligned}$$ The sample average is a measure of the center of the distribution of the data, while the sample variance is a measure of its spread. Statistics that merely characterize the data are called **descriptive statistics**, of which the sample average and variance are the most important. 
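These two statistics are trivial to compute. The following sketch (in Python, used here purely for illustration; the data are invented) also checks that the shortcut form $s^2 = \overline{x^2} - \bar{x}^2$ agrees with the direct definition:

```python
def sample_mean(xs):
    return sum(xs) / len(xs)

def sample_variance(xs):
    # Note the 1/N, matching the definition above (not the 1/(N-1) convention).
    xbar = sample_mean(xs)
    return sum((x - xbar) ** 2 for x in xs) / len(xs)

data = [1.0, 2.0, 3.0, 4.0]
xbar, s2 = sample_mean(data), sample_variance(data)
shortcut = sample_mean([x * x for x in data]) - xbar ** 2  # mean(x^2) - xbar^2
print(xbar, s2, s2 == shortcut)  # 2.5 1.25 True
```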
If we order the data, say from the smallest value to the largest, we can compute another interesting statistic $t_k \equiv x_{(k)}$, where $1 \leq k \leq N$ and $x_{(k)}$ denotes the datum at the $k^\text{th}$ position. The statistic $t_k$ is called the $k^\text{th}$ **order statistic** and is a measure of the value of outlying data. The average and variance, Eqs. (\[eq:xbar\]) and (\[eq:xvar\]), are numbers that can always be calculated given a data sample $X$. But now we consider numbers that cannot be calculated from the data alone. Imagine the repetition, infinitely many times, of whatever data generating system yielded our data sample $X$ thereby creating an infinite sequence of data sets. We shall refer to the data generating system as an experiment and the infinite sequence as an infinite ensemble. The latter, together with all the mathematical operations we may wish to apply to it, are abstractions. After all, it is not possible to realize an infinite ensemble. The ensemble and all the operations on it exist in the same sense that the number $\pi$ exists along with all valid mathematical operations on $\pi$. The most common operation to perform on an ensemble is to compute the average of the statistics. This **ensemble average** suggests several potentially useful characteristics of the ensemble, which we list below. $$\begin{aligned} & \textrm{Ensemble average} & & <x> \nonumber\\ & \textrm{Mean} & \mu \nonumber\\ & \textrm{Error} & \epsilon & = x - \mu \nonumber\\ & \textrm{Bias} & b & = <x> - \mu \nonumber\\ & \textrm{Variance} & V & = <(x - <x>)^2> \nonumber\\ & \textrm{Standard deviation} & \sigma & = \sqrt{V} \nonumber\\ & \textrm{Mean square error} & \text{MSE} & = <(x - \mu)^2> \nonumber\\ & \textrm{Root MSE} & \textrm{RMS} & = \sqrt{\textrm{MSE}} \label{eq:ensemble} \end{aligned}$$ Notice that none of these numbers can be calculated in practice because the data required to do so do not concretely exist. 
Even in an experiment simulated on a computer, there are very few of these numbers we can calculate. If we know the mean $\mu$, perhaps because we have chosen its value — for example, we may have chosen the mass of the Higgs boson in our simulation, we can certainly calculate the error $\epsilon$ for any simulated datum $x$. But, we can only *approximate* the ensemble average $< x >$, bias $b$, variance $V$, and MSE, since our virtual ensemble is always finite. The point is this: the numbers that characterize the infinite ensemble are also abstractions, albeit useful ones. For example, the MSE is the most widely used measure of the closeness of an ensemble of numbers to some parameter $\mu$. The square root of the MSE is called the root mean square (RMS)[^1]. The MSE can be written as $$\begin{aligned} \textrm{MSE} & = V + b^2. \\ & \framebox{\textbf{Exercise 1: } \textrm{Show this}}\nonumber\end{aligned}$$ The MSE is the sum of the variance and the square of the bias, a very important result with practical consequences. For example, suppose that $\mu$ represents the mass of the Higgs boson and $x$ represents some (typically very complicated) statistic that is considered an **estimator** of the mass. An estimator is any function, which when data are entered into it, yields an **estimate** of the quantity of interest, which we may take to be a measurement. Words are important; “bias” is a case in point. It is an unfortunate choice for the difference $<x> - \mu$ because the word “bias” biases attitudes towards bias! Something that, or someone who, is biased is surely bad and needs to be corrected. Perhaps. But, it would be wasteful of data to make the bias zero if the net effect is to make the MSE larger than an MSE in which the bias is non-zero. The price for achieving $b = 0$ in our example would be not only throwing away expensive data — which is bad enough — but also measuring a mass that is more likely to be further away from the Higgs boson mass. 
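This tradeoff can be made concrete with a small simulation (Python; the setup, estimating the variance $V = 1/12$ of a Uniform(0,1) distribution from samples of size $N = 5$, is chosen purely for illustration). It checks the decomposition $\textrm{MSE} = V + b^2$ of Exercise 1 on a large but finite virtual ensemble, and exhibits a case in which removing the bias *increases* the MSE:

```python
import random

random.seed(42)
N = 5                    # sample size per "experiment"
true_V = 1.0 / 12.0      # the parameter being estimated: Var of Uniform(0,1)
trials = 100_000         # a large but finite virtual ensemble

s2_vals = []             # one value of the estimator s^2 per experiment
for _ in range(trials):
    xs = [random.random() for _ in range(N)]
    xbar = sum(xs) / N
    s2_vals.append(sum((x - xbar) ** 2 for x in xs) / N)

avg_s2 = sum(s2_vals) / trials                          # approximates <s^2>
b = avg_s2 - true_V                                     # bias (expect -true_V/N)
V = sum((s - avg_s2) ** 2 for s in s2_vals) / trials    # ensemble variance of s^2
mse = sum((s - true_V) ** 2 for s in s2_vals) / trials  # MSE of s^2
# the bias-corrected estimator s'^2 = [N/(N-1)] s^2:
mse_u = sum((s * N / (N - 1) - true_V) ** 2 for s in s2_vals) / trials

print(abs(mse - (V + b ** 2)) < 1e-9)   # True: MSE = V + b^2
print(mse < mse_u)                      # True: unbiased, yet larger MSE
```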
This may, or may not, be what we want to achieve. As noted, many of the numbers listed in Eq. (\[eq:ensemble\]) cannot be calculated because the information needed is unknown. This is true, in particular, of the bias. However, sometimes it is possible to relate the bias to another ensemble quantity. Consider the ensemble average of the sample variance, Eq. (\[eq:xvar\]), $$\begin{aligned} <s^2> & = < \overline{x^2} > - <\bar{x}^2>, \nonumber\\ & = V - \frac{V}{N}, \nonumber\\ & \framebox{\textbf{Exercise 2a:} Show this} \nonumber \end{aligned}$$ The sample variance has a bias of $b = - V/N$, which many argue should be corrected. Unfortunately, we cannot calculate the bias because it depends on an unknown parameter, namely, the variance $V$. However, if we replace the sample variance by $s^{\prime 2} = c s^2$, where the correction factor $c = N/(N-1)$, we find that for the corrected variance estimator $s^{\prime 2}$ the bias is zero. Surely the world is now a better place? Well, not necessarily. Consider the ratio of $\textrm{MSE}^\prime$ to $\textrm{MSE}$, where $\textrm{MSE}^\prime = <(s^{\prime 2} - V)^2>$, $\textrm{MSE} = <\delta^2>$ with $\delta = s^2 - V$, and $b = -V / N$, $$\begin{aligned} \textrm{MSE}^\prime / \textrm{MSE} & = < (c s^2 - V)^2 > / < \delta^2 > , \\ & = c^2 < (s^2 - V/c)^2 > / < \delta^2>, \\ & = c^2 < (\delta - b)^2 > / < \delta^2>, \\ & = c^2 (1 - b^2 / <\delta^2> ), \\ & = c^2 \left[ 1 - b^2 / (b^2 + <s^4> - (V+b)^2) \right].\end{aligned}$$ From this we deduce that if $<s^4>/[ (V+b)^2 + b^2 / (c^2 - 1)] > 1$, the unbiased estimate will be further away on average from $V$ than the biased estimate. This is the case, for example, for a uniform distribution. Probability {#sec:prob} ----------- When the weather forecast specifies that there is an 80% chance of rain tomorrow, most people have an intuitive sense of what this means.
Likewise, most people have an intuitive understanding of what it means to say that there is a 50-50 chance for a tossed coin to land heads up. Probabilistic ideas are thousands of years old, but, starting in the sixteenth century these ideas were formalized into increasingly rigorous mathematical theories of probability. In the theory formulated by Kolmogorov in 1933, $\Omega$ is some fixed mathematical space, $E_1, E_2, \cdots \subset \Omega$ are subsets (called events) defined in some reasonable way[^2], and $P(E_j)$ is a number associated with subset $E_j$. These numbers satisfy the $$\begin{aligned} \textbf{Kolmogorov Axioms} \\ & \quad 1. \quad P(E_j) \geq 0 \\ & \quad 2. \quad P(E_1 + E_2 + \cdots) = P(E_1) + P(E_2) + \cdots \quad\textrm{for disjoint subsets} \\ & \quad 3. \quad P(\Omega) = 1.\end{aligned}$$ Consider two subsets $A = E_1$ and $B = E_2$. The quantity $AB$ means $A$ *and* $B$, while $A + B$ means $A$ *or* $B$, with associated probabilities $P(AB)$ and $P(A+B)$, respectively. Kolmogorov assumed, not unreasonably given the intuitive origins of probability, that probabilities sum to unity; hence the axiom $P(\Omega) = 1$. However, this assumption can be dropped so that probabilities remain meaningful even if $P(\Omega) = \infty$ [@Taraldsen]. Figure \[fig:venn\] suggests another probability, namely, the number $P(A|B) = P(AB) / P(B)$, called the **conditional probability** of $A$ given $B$. This permits statements such as: “the probability that this track was created by an electron given the measured track parameters" or “the probability to observe 17 events given that the mean background is 3.8 events". ![image](figs/venndiagram){width="50.00000%"} Conditional probability is a very powerful idea, but the term itself is misleading. It implies that there are two kinds of probability: conditional and unconditional.
In fact, *all* probabilities are conditional in that they always depend on a specific set of conditions, namely, those that define the space $\Omega$. It is entirely possible to embed a family of subsets of $\Omega$ into another space $\Omega^\prime$ which assigns to each family member a different probability $P^\prime$. A probability is defined only relative to some space of possibilities $\Omega$. $A$ and $B$ are said to be mutually exclusive if $P(AB) = 0$, that is, if the truth of one denies the truth of the other. They are said to be exhaustive if $P(A) + P(B) = 1$. Figure \[fig:venn\] suggests the theorem $$\begin{aligned} P(A + B) = P(A) + P(B) - P(AB), \\ \framebox{\textbf{Exercise 3:} Prove theorem}\nonumber\end{aligned}$$ which can be deduced from the rules given above. Another useful theorem is an immediate consequence of the commutativity of “anding" $P(AB) = P(BA)$ and the definition of $P(A|B)$, namely, $$\begin{aligned} \textbf{Bayes Theorem} \nonumber\\ &P(B|A ) = \frac{P(A|B) P(B)}{P(A)}, \end{aligned}$$ which provides a way to convert the probability $P(A|B)$ to the probability $P(B|A)$. Using Bayes theorem, we can, for example, deduce the probability $P(e|x)$ that a particle is an electron, $e$, given a set of measurements, $x$, from the probability $P(x|e)$ of a set of measurements given that the particle is an electron. ### Probability Distributions In this section, we illustrate the use of these rules to derive more complicated probabilities. First we start with a definition: > A **Bernoulli trial**, named after the Swiss mathematician Jacob Bernoulli (1654 – 1705), is an experiment with only two possible outcomes: $S = \textrm{success}$ or $F = \textrm{failure}$. > #### Example {#example .unnumbered} > > Each collision between protons at the Large Hadron Collider (LHC) is a Bernoulli trial in which something interesting happens ($S$) or does not ($F$). Let $p$ be the probability of a success, which is assumed to be the *same for each trial*. 
Since $S$ and $F$ are exhaustive, the probability of a failure is $1 - p$. For a given order $O$ of $n$ proton-proton collisions and exactly $k$ successes, and therefore exactly $n - k$ failures, the probability $P(k, O , n, p)$ is given by $$\begin{aligned} > P(k, O, n, p) = p^k (1 - p)^{n - k}.\end{aligned}$$ If the order $O$ of successes and failures is judged to be irrelevant, we can eliminate the order from the problem by summing over all possible orders, $$\begin{aligned} > P(k, n, p) = \sum_O P(k, O, n, p) = \sum_O p^k (1 - p)^{n - k}. > \label{eq:Pkn}\end{aligned}$$ This procedure is called **marginalization**. It is one of the most important operations in probability calculations. Every term in the sum in Eq. (\[eq:Pkn\]) is identical and there are $\binom{n}{k}$ of them. This yields the **binomial distribution**, $$\begin{aligned} > \textrm{Binomial(k, n, p)} \equiv \binom{n}{k} p^k (1 - p)^{n - k}.\end{aligned}$$ By definition, the mean number of successes $a$ is given by $$\begin{aligned} > a & = \sum_{k=0}^n k \, \textrm{Binomial(k, n, p)}, \nonumber\\ > & = p n. \\ > & \framebox{\textbf{Exercise 4:} Show this} \nonumber\end{aligned}$$ At the LHC $n$ is a number in the trillions, while for successes of interest such as the creation of a Higgs boson the probability $p << 1$. In this case, it proves convenient to consider the limit $p \rightarrow 0, n \rightarrow \infty$ in such a way that $a$ remains constant. In this limit $$\begin{aligned} > \textrm{Binomial(k, n, p)} & \rightarrow e^{-a} a^k / k! , \nonumber\\ > & \equiv \textrm{Poisson}(k, a).\\ > & \framebox{\textbf{Exercise 5:} Show this} \nonumber\end{aligned}$$ Below we list the most common probability distributions. $$\begin{aligned} &\textbf{Discrete distributions}\nonumber\\ & \textrm{Binomial}(k, n, p) & \binom{n}{k} p^k (1 - p)^{n-k} \nonumber\\ & \textrm{Poisson}(k, a) & a^k \exp(-a) / k!
\nonumber\\ & \textrm{Multinomial}(k, n, p) & \frac{n!}{k_1!\cdots k_K!} \prod_{i=1}^K p_i^{k_i}, \quad \sum_{i=1}^K p_i = 1, \sum_{i=1}^K k_i = n \nonumber\\ &\textbf{Continuous densities}\nonumber\\ & \textrm{Uniform}(x, a) & 1 / a \nonumber\\ & \textrm{Gaussian}(x, \mu, \sigma) & \exp[-(x - \mu)^2 / (2 \sigma^2)] / (\sigma \sqrt{2\pi}) \nonumber\\ &\textrm{(also known as the Normal density)}\nonumber\\ &\textrm{LogNormal}(x, \mu, \sigma) & \exp[-(\ln x - \mu)^2 / (2 \sigma^2)] / (x \sigma \sqrt{2\pi}) \nonumber\\ & \textrm{Chisq}(x, n) & x^{n/2 -1} \exp(-x /2) / [2^{n/2} \Gamma(n/2)] \nonumber\\ & \textrm{Gamma}(x, a, b) & x^{b -1} a^b \exp(- a x) / \Gamma(b) \nonumber\\ &\textrm{Exp}(x, a) & a \exp(- a x) \nonumber\\ &\textrm{Beta}(x, n, m) & \frac{\Gamma(n+m)}{\Gamma(m) \, \Gamma(n)} x^{n-1} \, (1 - x)^{m-1} \label{eq:dist}\end{aligned}$$ Particle physicists tend to use the term probability distribution for both discrete and continuous functions, such as the Poisson and Gaussian distributions, respectively. But, strictly speaking, the continuous functions are probability *densities*, not probability distributions. In order to compute a probability from a density we need to integrate the density over a finite set in $x$. #### Discussion {#discussion .unnumbered} Probability is the foundation for models of non-deterministic data generating mechanisms, such as particle collisions at the LHC. A **probability model** is the probability distribution together with all the assumptions on which the distribution is based. For example, suppose we wish to count, during a given period of time, the number of entries $N$ in a given transverse momentum ($p_\text{T}$) bin due to particles created in proton-proton collisions at the LHC; that is, suppose we wish to perform a counting experiment.
If we assume that the probability to obtain a count in this bin is very small and that the number of proton-proton collisions is very large, then it is common practice to use a Poisson distribution to model the data generating mechanism, which yields the bin count $N$. If we have multiple independent bins, we may choose to model the data generating mechanism as a product of Poisson distributions. Or, perhaps, we may prefer to model the possible counts conditional on a fixed total count in which case a multinomial distribution would be appropriate. So far, we have assumed the meaning of the word probability to be self-evident. However, the meaning of probability [@Daston] has been the subject of debate for more than two centuries and there is no sign that the debate will end anytime soon. Probability, in spite of its intuitive beginnings, is an abstraction. Therefore, for it to be of practical use it must be *interpreted*. The two most widely used interpretations of probability are: 1. **degree of belief** in, or plausibility of, a proposition, for example, “It will snow at CERN on December 18th", and the 2. **relative frequency** of outcomes in an *infinite* ensemble of trials, for example, the relative frequency of Higgs boson creation in an infinite number of proton-proton collisions. The first interpretation is the older, while the second was championed by influential mathematicians and logicians starting in the mid-nineteenth century and became the dominant interpretation. Of the two interpretations, however, the older is the more general in that it encompasses the latter and can be used in contexts in which the latter makes no sense. The relative frequency, or **frequentist**, interpretation is useful for situations in which one can contemplate counting the number of times $k$ a given outcome is realized in $n$ trials, as in the example of a counting experiment. 
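The Poisson limit invoked in this model (Exercise 5) is easy to probe numerically: hold the mean $a = np$ fixed, let $n$ grow, and compare the two distributions. A short sketch (Python; the value $a = 3.8$ is just an illustrative choice):

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    # math.comb returns 0 when k > n, so the pmf vanishes outside 0 <= k <= n
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, a):
    return a ** k * exp(-a) / factorial(k)

a = 3.8                    # fixed mean a = n*p
diffs = []
for n in (10, 100, 10_000):
    p = a / n
    diffs.append(max(abs(binomial_pmf(k, n, p) - poisson_pmf(k, a))
                     for k in range(min(n, 25) + 1)))
print(diffs)               # the discrepancy shrinks as n grows
```

With $n$ in the trillions and $a$ fixed, the binomial and Poisson models are numerically indistinguishable, which is why the Poisson model is standard for counting experiments.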
The relative frequency $r = k / n$ is expected to converge, in a subtle but well-defined sense, to some number $p$ that satisfies the rules of probability. It should be noted, however, that the numbers $k/n$ and $p$ are conceptually distinct. The former is something we can actually calculate, while there is no *finite* operational way to calculate the latter from data. The probability $p$, even when interpreted as a relative frequency, remains an abstraction. On the other hand, the degrees of belief, which are the basis of the *Bayesian* approach to statistics (see Lecture 2), are just that: the degree to which a rational being *ought* to believe in the veracity of a given statement. The word “ought" in the last sentence is important: probability theory, with probabilities interpreted as degrees of belief, is *not* a model of how human beings actually reason in situations of uncertainty; rather probability theory when interpreted this way is a normative theory in that it specifies how an idealized reasoning being, or system, ought to reason when faced with uncertainty. There is a school of thought that argues that degrees of belief should be an individual’s own assessment of her or his degree of belief in a statement, which are then to be updated using the probability rules. The problem with this position is that it presupposes probability theory to be a model of human reasoning, which we argue it is not — a position confirmed by numerous psychological experiments. It is perhaps better to think of degrees of belief as numbers that inform one’s reasoning rather than as numbers that describe it, and relative frequencies as numbers that characterize stochastic data generation mechanisms. Both are probabilities and both are useful.
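Before moving on, the probability rules of this section can be exercised end to end on toy examples (Python; the electron-identification numbers below are invented purely for illustration): inclusion-exclusion on a finite sample space, followed by Bayes theorem.

```python
from fractions import Fraction

# A finite toy sample space: a fair 12-sided die, with P uniform on
# Omega = {1,...,12} and events represented as subsets.
omega = frozenset(range(1, 13))
def P(E):
    return Fraction(len(E), len(omega))

A = {n for n in omega if n % 2 == 0}       # even outcomes
B = {n for n in omega if n % 3 == 0}       # multiples of 3

# Exercise 3 in miniature: P(A + B) = P(A) + P(B) - P(AB).
print(P(A | B) == P(A) + P(B) - P(A & B))  # True

# Bayes theorem with hypothetical electron-identification numbers:
# P(x|e) = 0.90, P(x|not e) = 0.05, prior electron fraction P(e) = 0.10.
p_x_e, p_x_bkg, p_e = 0.90, 0.05, 0.10
p_x = p_x_e * p_e + p_x_bkg * (1 - p_e)    # marginalization over hypotheses
p_e_x = p_x_e * p_e / p_x                  # P(e|x)
print(round(p_e_x, 3))                     # 0.667
```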
Likelihood {#sec:likelihood} ---------- Let us assume that $p(x|\theta)$ is a **probability density function** (pdf) such that $P(A| \theta) = \int_A p(x|\theta) \, dx$ is the probability of the statement $A = x \in R_x$, where $x$ denotes possible data, $\theta$ the parameters that characterize the probability model, and $R_x$ is a finite set. If $x$ is discrete, then both $p(x|\theta)$ and $P(A|\theta)$ are probabilities. The **likelihood function** is simply the probability model $p(x|\theta)$ evaluated at the data $x_O$ actually obtained, i.e., the function $p(x_O|\theta)$. The following are examples of likelihoods. > #### Example 1 {#example-1 .unnumbered} > > In 1995, CDF and DØ discovered the top quark [@Abe:1995hr; @Abachi:1995iq] at Fermilab. The DØ Collaboration found $x = D$ events ($D = 17$). For a counting experiment, the datum can be modeled using $$\begin{aligned} > p(x | d) & = \textrm{Poisson}(x, d) \quad \textrm{probability to get $x$ events} > \\ p(D | d) & = \textrm{Poisson}(D, d) \quad \textrm{likelihood of observing $D$ events} > \\ & = d^{D} \exp(-d) / D!\end{aligned}$$ We shall analyze this example in detail in Lectures 2 and 3. > > #### Example 2 {#example-2 .unnumbered} > > Figure \[fig:CI\] shows the transverse momentum spectrum of jets in $p p \rightarrow \textrm{jet} + X$ events measured by the CMS Collaboration [@Chatrchyan:2013muj]. The spectrum has $K = 20$ bins with total count $N$ that was modeled using the likelihood $$\begin{aligned} > p(D | p) & = \textrm{Multinomial}(D, N, p), \quad D = D_1,\cdots,D_K, \quad p = p_1,\cdots,p_K > \\ \sum_{i=1}^K D_i & = N. \end{aligned}$$ This is an example of a *binned* likelihood. > > ![Transverse momentum spectrum of jets in $p p \rightarrow \textrm{jet} + X$ events measured by CMS compared with the QCD prediction at next-to-leading order.
This spectrum was used to search for evidence of contact interactions [@Chatrchyan:2013muj] (Courtesy CMS Collaboration).[]{data-label="fig:CI"}](figs/CMSCI13){width="50.00000%"} > > #### Example 3 {#example-3 .unnumbered} > > Figure \[fig:type1a\] shows a plot of the distance modulus versus redshift for $N = 580$ Type 1a supernovae [@Suzuki:2011hu]. These heteroscedastic data[^3] $\{z_i, x_i \pm \sigma_i \}$ are modeled using the likelihood $$\begin{aligned} > p(D | \Omega_M, \Omega_\Lambda, Q) & = \prod_{i=1}^N \textrm{Gaussian}(x_i, \mu_i, \sigma_i),\end{aligned}$$ which is an example of an *un-binned* likelihood. The cosmological model is encoded in the distance modulus function $\mu_i$, which depends on the redshift $z_i$ and the matter density and cosmological constant parameters $\Omega_M$ and $\Omega_\Lambda$, respectively. (See Ref. [@Dungan:2009fp] for an accessible introduction to the analysis of these data.) > > ![Plot of the data points $(z_i, x_i \pm \sigma_i)$ for 580 Type 1a supernovae [@Suzuki:2011hu] showing a fit of the standard cosmological model (with a cosmological constant) to these data (curve).[]{data-label="fig:type1a"}](figs/type1a){width="50.00000%"} > > #### Example 4 {#example-4 .unnumbered} > > The discovery of a neutral Higgs boson in 2012 by ATLAS [@Aad:2012tfa] and CMS [@Chatrchyan:2012ufa] in the di-photon final state ($p p \rightarrow H \rightarrow \gamma\gamma$) made use of an un-binned likelihood of the form, $$\begin{aligned} > p(x| s, m, w, b) & = \exp[-(s + b)] \prod_{i=1}^N [ s f_s(x_i | m, w) + b f_b(x_i) ] > \\ > \textrm{where } x & = \textrm{di-photon masses} \\ > m & = \textrm{mass of boson} \\ > w & = \textrm{width of resonance} \\ > s & = \textrm{expected (i.e., mean) signal count} \\ > b & = \textrm{expected background count} \\ > f_s & = \textrm{signal probability density} \\ > f_b & = \textrm{background probability density} \\ \\ > & \framebox{\parbox{0.5\textwidth}{\textbf{Exercise 6b:} Show that a binned 
multi-Poisson\\ likelihood yields an un-binned likelihood of this\\ form as the bin widths go to zero}}\end{aligned}$$ The likelihood function is arguably the most important quantity in a statistical analysis because it can be used to answer questions such as the following. 1. How do I estimate a parameter? 2. How do I quantify its accuracy? 3. How do I test an hypothesis? 4. How do I quantify the significance of a result? Writing down the likelihood function requires: 1. identifying all that is *known*, e.g., the observations, 2. identifying all that is *unknown*, e.g., the parameters, 3. constructing a probability model for *both*. Many analyses in particle physics do not use likelihood functions explicitly. However, it is worth spending time to think about them because doing so encourages a deeper reflection on what is being done and a more systematic approach to the statistical analysis, and ultimately leads to better answers. Being explicit about what is and is not known in an analysis problem may seem a pointless exercise; surely these things are obvious. Consider the DØ top quark discovery data [@Abachi:1995iq], $D = 17$ events observed with a background estimate of $B = 3.8 \pm 0.6$ events. The uncertainty in 17 is invariably said to be $\sqrt{17} = 4.1$. Not so! The count 17 is perfectly known: it is 17. What we are uncertain about is the mean count $d$, that is, the parameter of the probability model, which we take to be a Poisson distribution. The $\pm 4.1$ must somehow be a statement not about 17 but rather about the unknown parameter $d$. We shall explain what the $\pm 4.1$ means in Lecture 2. Lecture 2: The Frequentist and Bayesian Approaches ================================================== In this lecture, we consider the two most important approaches to statistical inference, frequentist and Bayesian. Both are needed to make sense of statistical inference, though this is not the dominant opinion in particle physics.
Most particle physicists, if pressed, will say they are frequentist in their approach. The typical reason given is that this approach is objective, whereas the Bayesian approach is not. Moreover, they would argue, the frequentist approach is less arbitrary, whereas the Bayesian approach is plagued with arbitrariness that renders its results suspect. We wish, however, to focus on the practical; therefore, we shall sidestep this debate and assume a pragmatic attitude to both approaches. We begin with a description of salient features of the frequentist approach, followed by a description of the Bayesian approach. The Frequentist Approach ------------------------ The most important principle in this approach is that enunciated by the Polish statistician Jerzy Neyman in the 1930s, namely, > **The Frequentist Principle** > > The goal of a frequentist analysis is to construct statements so that a fraction $f \geq p$ of them are guaranteed to be true over an infinite ensemble of statements. The fraction $f$ is called the **coverage probability**, or coverage for short, and $p$ is called the **confidence level** (C.L.). A procedure which satisfies the frequentist principle is said to *cover*. The confidence level as well as the coverage is a property of the ensemble of statements. Consequently, the confidence level may change if the ensemble changes. Here is an example of the frequentist principle in action. > #### Example {#example-5 .unnumbered} > > Over the course of a long career, a doctor sees thousands of patients. For each patient he issues one of two conclusions: “you are sick" or “you are well" depending on the results of diagnostic measurements. Because he is a frequentist, he has devised an approach to medicine in which, although he does not know which of his conclusions were correct, he can at least retire happy in the knowledge that he was correct at least 75% of the time!
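The frequentist principle is easy to demonstrate with a toy ensemble. The sketch below (an illustration, not part of the lectures) builds exact central confidence intervals for a Poisson mean, of the kind constructed later in this section, and checks that the fraction of true statements $s_{\textrm{true}} \in [l(D), u(D)]$ meets or exceeds the 68% confidence level:

```python
import math
import random

def pois_cdf(D, mu):
    """P(X <= D) for X ~ Poisson(mu), by direct summation of the pmf."""
    term = total = math.exp(-mu)
    for k in range(1, D + 1):
        term *= mu / k
        total += term
    return total

def central_interval(D, cl=0.68):
    """Exact central confidence interval [l, u] for a Poisson mean given an
    observed count D: solve P(X >= D | l) = (1 - cl)/2 and
    P(X <= D | u) = (1 - cl)/2 by bisection.  Over-covers for discrete data."""
    tail = (1.0 - cl) / 2
    def solve(f):
        lo, hi = 0.0, 200.0
        for _ in range(60):             # bisection on [0, 200]
            mid = 0.5 * (lo + hi)
            if f(mid) > 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    l = 0.0 if D == 0 else solve(lambda m: tail - (1.0 - pois_cdf(D - 1, m)))
    u = solve(lambda m: pois_cdf(D, m) - tail)
    return l, u

def pois_sample(rng, mu):
    """Poisson random variate (Knuth's multiplication method)."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

# Toy ensemble: the fraction of statements "s_true is in [l, u]"
# that are true should be at least the confidence level (here 68%).
rng = random.Random(2)
s_true, trials, hits = 3.8, 5000, 0
cache = {}
for _ in range(trials):
    D = pois_sample(rng, s_true)
    if D not in cache:
        cache[D] = central_interval(D)
    l, u = cache[D]
    if l <= s_true <= u:
        hits += 1
print("coverage =", hits / trials)
```

Because the data are discrete, these intervals over-cover: the printed fraction is noticeably above 0.68, consistent with the inequality $f \geq p$.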
In a seminal paper published in 1937, Neyman [@Neyman37] invented the concept of the confidence interval, a way to quantify uncertainty that respects the frequentist principle. The confidence interval is such an important idea, and its meaning so different from the superficially similar Bayesian concept of a credible interval, that it is worth working through the concept in detail. ### Confidence Intervals The confidence interval is a concept best explained by example. Consider an experiment that observes $D$ events with expected (that is, mean) signal $s$ and no background. Neyman devised a way to make statements of the form $$\begin{aligned} s \in [ l(D), \, u(D) ],\end{aligned}$$ with the *a priori* guarantee that at least a fraction $p$ of them will be true, as required by the frequentist principle. A procedure for constructing such intervals is called a **Neyman construction**. The frequentist principle must hold for any ensemble of experiments, not necessarily all making the same kind of observations and statements. For simplicity, however, we shall presume the experiments to be of the same kind and to be completely specified by a single unknown parameter $s$. The Neyman construction is illustrated in Fig. \[fig:neyman\]. ![The Neyman construction. Plotted is the Cartesian product of the parameter space, with parameter $s$, and the space of observations with potential observations $D$. For a given value of $s$, the observation space is partitioned into three disjoint intervals, such that the probability to observe a count $D$ within the interval demarcated by the two vertical lines is $f \geq p$, where $p$ = C.L. is the desired confidence level. The inequality is needed because, for discrete data, it may not be possible to find an interval with $f = p$ exactly.[]{data-label="fig:neyman"}](figs/Neyman){width="80.00000%"} The construction proceeds as follows.
Choose a value of $s$ and use some rule to find an interval in the space of observations (or, more generally, a region), for example, the interval defined by the two vertical lines in the center of the figure, such that the probability to obtain a count in this interval is $f \geq p$, where $p$ is the desired confidence level. We move to another value of $s$ and repeat the procedure. The procedure is repeated for a sufficiently dense set of points in the parameter space over a sufficiently large range. When this is done, as illustrated in Fig. \[fig:neyman\], the intervals of probability content $f$ will form a band in the Cartesian product of the parameter space and the observation space. The upper edge of this band defines the curve $u(D)$, while the lower edge defines the curve $l(D)$. These curves are the product of the Neyman construction. For a given value of the parameter of interest $s$, the interval with probability content $f$ in the space of observations is not unique; different rules for choosing the interval will, in general, yield different intervals. Neyman suggested choosing the interval so that the probabilities to obtain an observation below or above the interval are the same. The Neyman rule yields the so-called **central intervals**. One virtue of central intervals is that their boundaries can be more efficiently calculated by solving the equations, $$\begin{aligned} P(x \leq D | u) & = \alpha_L, \nonumber\\ P(x \geq D | l) & = \alpha_R,\end{aligned}$$ where $\alpha_L = \alpha_R = (1 - p)/2$, a mathematical fact that becomes clear after staring at Fig. \[fig:neyman\] long enough. Another rule was suggested by Feldman and Cousins [@FC]. For our example, the Feldman-Cousins rule requires that the potential observations $\{D\}$ be ordered in descending order, $D_{(1)}, D_{(2)}, \cdots$, of the likelihood ratio $p(D | s) / p(D | \hat{s})$, where $\hat{s}$ is the maximum likelihood estimator (see Sec. \[sec:profile\]) of the parameter $s$.
Once ordered, we compute the running sum $f = \sum_j p(D_{(j)} | s)$ until $f$ equals or just exceeds the desired confidence level $p$. This rule does not guarantee that the potential observations $D$ are contiguous, but this does not matter because we simply take the minimum element of the set $\{ D_{(j)} \}$ to be the lower bound of the interval and its maximum element to be the upper bound. Another simple rule is the mode-centered rule: order $D$ in descending order of $p(D| s)$ and proceed as with the Feldman-Cousins rule. In principle, absent criteria for choosing a rule, there is nothing to prevent the use of *ordering rules* randomly chosen for different values of $s$! Figure \[fig:ciwidths\] compares the widths of the intervals $[l(D), u(D)]$ for three different ordering rules, central, Feldman-Cousins, and mode-centered, as a function of the count $D$. It is instructive to compare these widths with those provided by the well-known root(N) interval, $l(D) = D - \sqrt{D}$ and $u(D) = D + \sqrt{D}$. Of the three sets of intervals, the ones suggested by Neyman are the widest, the Feldman-Cousins and mode-centered ones are of similar width, while the root(N) intervals are the shortest. So why are we going through all the trouble of the Neyman construction? We shall return to this question shortly. [R]{}[0.5]{} ![image](figs/ciwidths){width="50.00000%"} Having completed the Neyman construction and found the curves $u(D)$ and $l(D)$ we can use the latter to make statements of the form $s \in [l(D), \, u(D)]$: for a given observation $D$, we simply read off the interval $[l(D), u(D)]$ from the curves. For example, suppose in Fig. \[fig:neyman\] that the true value of $s$ is represented by the horizontal line that intersects the curves $u(D)$ and $l(D)$ and which therefore defines the interval demarcated by the two vertical lines.
If the observation $D$ happens to fall in the interval to the left of the left vertical line, or to the right of the right vertical line, then the interval $[l(D), \, u(D)]$ will not bracket $s$. However, if $D$ falls between the two vertical lines, the interval $[l(D), \, u(D)]$ will bracket $s$. Moreover, by virtue of the Neyman construction, a fraction $f$ of the intervals $[l(D), \, u(D)]$ will bracket the value of $s$ whatever its value happens to be, which brings us back to the question about the root(N) intervals. Figure \[fig:coverage\] shows the coverage probability over the parameter space of $s$. As expected, the three rules, Neyman’s, that of Feldman-Cousins, and the mode-centered, satisfy the condition coverage probability $\geq$ confidence level over all values of $s$ that are possible *a priori*; that is, the intervals cover. However, the root(N) intervals do not and indeed fail badly for $s < 2$. [L]{}[0.5]{} ![image](figs/coverage){width="50.00000%"} However, notice that the coverage probability of the root(N) intervals bounces around the (68%) confidence level for values of $s > 2$. Therefore, if we knew for sure that $s > 2$, it would seem that using the root(N) intervals may not be that bad after all. Whether it is or not depends entirely on one’s attitude towards the frequentist principle. Some will lift mountains and carry them to the Moon in order to achieve exact coverage, while others, including the author, are entirely happy with coverage that bounces around a little. #### Discussion {#discussion-1 .unnumbered} We may summarize the content of the Neyman construction with a statement of the form: there is a probability of at least $p$ that $s \in [l(D), \, u(D)]$. But it would be a misreading of the statement to presume it is about that particular interval. It is not, because $p$, as noted, is a property of the ensemble to which this statement belongs.
The precise statement is this: $s \in [l(D), \, u(D)]$ is a member of an (infinite) ensemble of statements a fraction $f \geq p$ of which are true. This mathematical fact is the principal reason why the frequentist approach is described as objective; the probability $p$ is something for which there seems, in principle, to be an operational definition: we just count how many statements of the form $s \in [l(D), \, u(D)]$ are true and divide by the total number of statements. Unfortunately, in the real world this procedure cannot be realized because in general we are not privy to which statements are true and, even if we came down from a mountain with the requisite knowledge, we would need to examine an infinite number of statements, which is impossible. Nevertheless, the Neyman construction is a remarkable procedure that guarantees coverage for any problem that depends on a *single* unknown parameter. Matters quickly become less tidy, however, when a probability model contains more than one unknown parameter. In almost every particle physics experiment there is background that is usually not known precisely. Consequently, even for the simplest experiment we must contend with at least two parameters, the expected signal $s$ and the expected background $b$, neither of which is known. Neyman required a procedure to cover whatever the values of *all* the parameters, be they known or unknown. This is a very tall order, which cannot be met in general. In practice, we resort to approximations, the most widely used of which is the profile likelihood, to which we now turn. ### The Profile Likelihood {#sec:profile} As noted in Sec. \[sec:likelihood\], likelihood functions can be used to estimate the parameters on which they depend.
The method of choice to do so, in a frequentist analysis, is called **maximum likelihood**, a method first used by Carl Friedrich Gauss, *the Prince of Mathematicians*, but developed into a formidable statistical tool in the 1930s by Sir Ronald A. Fisher [@Fisher], perhaps the most influential statistician of the twentieth century. Fisher showed that a good way to estimate the parameters of a likelihood function is to pick the value that maximizes it. Such estimates are called **maximum likelihood estimates** (MLE). In general, a function into which data can be inserted to yield an MLE of a parameter is called a maximum likelihood estimator. For simplicity, we shall use the same abbreviation MLE to mean both the estimate and the estimator and we shall not be too picky about distinguishing the two. The DØ top quark discovery example illustrates the method. > #### Example: Top Quark Discovery Revisited {#example-top-quark-discovery-revisited .unnumbered} > > We start by listing $$\begin{aligned} > & \textbf{the knowns} \\ > & D = N, B \text{ where} \\ > & N = 17 \textrm{ observed events} \\ > & B = 3.8 \textrm{ estimated background events with uncertainty } \delta B = 0.6 \\ > &\textbf{and the unknowns} \\ > & b \quad\textrm{mean background count}\\ > & s \quad\textrm{mean signal count}.\end{aligned}$$ Next, we construct a probability model for the data $D = N, B$ assuming that $N$ and $B$ are statistically independent. Since this is a counting experiment, we shall assume that $p(x| s, b)$ is a Poisson distribution with mean count $s + b$. In the absence of details about how the background $B$ was arrived at, the standard assumption is that data of the form $y \pm \delta y$ can be modeled with a Gaussian (or normal) density. However, we can do a bit better. Background estimates are usually based on auxiliary experiments, either real or simulated, that define control regions.
> > Suppose that the observed count in the control region is $Q$ and the mean count is $b k$, where $k$ (ideally) is the known scale factor between the control and signal regions. We can model these data with a Poisson distribution with count $Q$ and mean $b k$. But, we are given $B$ and $\delta B$ rather than $Q$ and $k$, so we need a model to relate the two pairs of numbers. The simplest model is $B = Q / k$ and $\delta B = \sqrt{Q} / k$ from which we can infer an effective count $Q$ using $Q = (B / \delta B)^2$. What of the scale factor $k$? Well, since it is not given, it must be estimated. The obvious estimate is $Q / B = B / \delta B^2$. With these assumptions, our likelihood function is $$\begin{aligned} > \label{eq:toplh} > p(D | s, b) & = & \textrm{Poisson}(N, s + b) \, \textrm{Poisson}(Q, bk), \\ > \textrm{where} \nonumber\\ > Q & = & (B / \delta B)^2 = 40.11,\nonumber\\ > k & = & B / \delta B^2 = 10.56. \nonumber\end{aligned}$$ The first term in Eq. (\[eq:toplh\]) is the likelihood for the count $N = 17$, while the second term is the likelihood for $B = 3.8$, or equivalently the count $Q$. The fact that $Q$ is not an integer causes no difficulty: we merely write the Poisson distribution as $(bk)^Q \exp(-bk) / \Gamma(Q+1)$, which permits continuation to non-integer counts $Q$. > > The maximum likelihood estimators for $s$ and $b$ are found by maximizing Eq. (\[eq:toplh\]), that is, by solving the equations $$\begin{aligned} > \frac{\partial \ln p(D|s, b)}{\partial s} & = 0\quad\textrm{leading to } \hat{s} = N - B, \nonumber\\ > \frac{\partial \ln p(D|s, b)}{\partial b} & = 0\quad\textrm{leading to } \hat{b} = B, \nonumber\end{aligned}$$ as expected. > > A more complete analysis would account for the uncertainty in $k$. One way is to introduce two more control regions with observed counts $V$ and $W$ and mean counts $v$ and $w k$, respectively, and extend Eq. (\[eq:toplh\]) with two more Poisson distributions.
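The numbers in this example are easily reproduced. Below is a minimal sketch (Python, purely illustrative) that computes the effective count $Q$ and scale factor $k$ and verifies numerically that $(\hat{s}, \hat{b}) = (N - B, B)$ maximizes the log-likelihood:

```python
import math

# Knowns of the DØ top-quark example
N, B, dB = 17, 3.8, 0.6

# Effective control-region count and scale factor derived in the text
Q = (B / dB) ** 2      # (B/dB)^2, about 40.1
k = B / dB ** 2        # B/dB^2,  about 10.56

def log_likelihood(s, b):
    """log[ Poisson(N, s+b) * Poisson(Q, b*k) ]; the factorial of the
    non-integer count Q is continued via the Gamma function (lgamma)."""
    return (N * math.log(s + b) - (s + b) - math.lgamma(N + 1)
            + Q * math.log(b * k) - b * k - math.lgamma(Q + 1))

# Maximum likelihood estimates quoted in the text
s_hat, b_hat = N - B, B

# Numerical check: small perturbations only decrease the log-likelihood
best = log_likelihood(s_hat, b_hat)
for ds in (-1e-3, 1e-3):
    for db in (-1e-3, 1e-3):
        assert log_likelihood(s_hat + ds, b_hat + db) <= best

print("Q =", round(Q, 2), " k =", round(k, 2), " s_hat =", round(s_hat, 1))
```

The `lgamma` call implements the $\Gamma(Q+1)$ continuation mentioned in the text, so the same code works for the non-integer count $Q$.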
The maximum likelihood method is the most widely used method for estimating parameters because it generally leads to reasonable estimates. But the method has features, or encourages practices, which, somewhat uncharitably, we label the good, the bad, and the ugly! - *The Good* - Maximum likelihood estimators are consistent: the root mean square (RMS) error goes to zero as more and more data are included in the likelihood. This is an extremely important property, which basically says it makes sense to take more data because we shall get more accurate results. One would not knowingly use an inconsistent estimator! - If an unbiased estimator for a parameter exists, the maximum likelihood method will find it. - Given the MLE for $s$, the MLE for any function $y = g(s)$ of $s$ is, very conveniently, just $\hat{y} = g(\hat{s})$. This is a very nice practical feature which makes it possible to maximize the likelihood using the most convenient parameterization of it and then transform back to the parameter of interest at the end. - *The Bad (according to some!)* - In general, MLEs are biased.\ \ - *The Ugly (according to some!)* - The fact that most MLEs are biased encourages the routine application of bias correction, which can waste data and, sometimes, yield absurdities. Here is an example of the seriously ugly. > #### Example {#example-6 .unnumbered} > > For a discrete probability distribution $p(k)$, the **moment generating function** is the ensemble average $$\begin{aligned} > G(x) & = < e^{xk} > \\ > & = \sum_{k} e^{xk} \, p(k).\end{aligned}$$ For the binomial, with parameters $p$ and $n$, this is $$\begin{aligned} > G(x) & = (e^x p + 1 - p)^n, \quad \framebox{\textbf{Exercise 8a:} Show this}\end{aligned}$$ which is useful for calculating **moments** $$\begin{aligned} > \mu_r & = \left. \frac{d^rG}{dx^r}\right |_{x=0} = \sum_k k^r \, p(k),\end{aligned}$$ e.g., $\mu_2 = (np)^2 + np - np^2$ for the binomial distribution.
Given that $k$ events out of $n$ pass a set of cuts, the MLE of the event selection efficiency is the obvious estimate $\hat{p} = k / n$. The equally obvious estimate of $p^2$ is $( k / n)^2$. But, $$\begin{aligned} > < ( k / n)^2 > & = p^2 + V / n , \quad \framebox{\textbf{Exercise 8b:} Show this}\end{aligned}$$ so $(k / n)^2$ is a biased estimate of $p^2$ with positive bias $V / n$, where $V = p(1-p)$ is the variance of a single Bernoulli trial. The unbiased estimate of $p^2$ is $$\begin{aligned} > k(k-1) / [ n (n - 1)] , \quad \framebox{\textbf{Exercise 8c:} Show this}\end{aligned}$$ which, for a single success, i.e., $k = 1$, yields the sensible estimate $\hat{p} = 1 / n$, but the less than helpful one $\hat{p^2} = 0!$ In order to infer a value for the parameter of interest, for example, the signal $s$ in our 2-parameter likelihood function in Eq. (\[eq:toplh\]), the likelihood must be reduced to one involving the parameter of interest only, here $s$, by somehow getting rid of all the **nuisance** parameters, here the background parameter $b$. A nuisance parameter is simply a parameter that is not of current interest. In a strict frequentist calculation, this reduction to the parameter of interest must be done in such a way as to respect the frequentist principle: *coverage probability $\geq$ confidence level*. In general, this is very difficult to do exactly. In practice, we replace all nuisance parameters by their **conditional maximum likelihood estimates** (CMLE). The CMLE is the maximum likelihood estimate conditional on a *given* value of the current parameter (or parameters) of interest. In the top discovery example, we construct an estimator of $b$ as a function of $s$, $\hat{b}(s)$, and replace $b$ in the likelihood $p(D | s, b)$ by $\hat{b}(s)$ to yield a function $p_{PL}(D | s)$ called the **profile likelihood**. > *Since the profile likelihood entails an approximation, namely, replacing unknown parameters by their conditional estimates, it is not the likelihood but rather an approximation to it.
Consequently, the frequentist principle is not guaranteed to be satisfied exactly.* This does not seem to be much progress. However, things are much better than they may appear because of an important theorem proved by Wilks in 1938. If certain conditions are met, roughly that the MLEs do not occur on the boundary of the parameter space and the likelihood becomes ever more Gaussian as the data become more numerous — that is, in the so-called **asymptotic limit**, then if the true density of $x$ is $p(x| s, b)$ the random number $$\begin{aligned} t(x, s) & = -2 \ln \lambda(x, s), \\ \textrm{where } \lambda(x, s) & = \frac{p_{PL}(x | s)}{ p_{PL}(x | \hat{s})}, \label{eq:wilks}\end{aligned}$$ has a probability density that converges to a $\chi^2$ density with one degree of freedom. More generally, if the numerator of $\lambda$ contains $m$ free parameters the asymptotic density of $t$ is a $\chi^2$ density with $m$ degrees of freedom. Therefore, we may take $t(D, s)$ to be a $\chi^2$ variate, at least approximately, and solve $t(D, s) = n^2$ for $s$ to get approximate $n$-standard deviation confidence intervals. In particular, if we solve $t(D, s) = 1$, we obtain approximate 68% intervals. This calculation is what [Minuit]{}, and now [TMinuit]{}, has done countless times since the 1970s! Wilks’ theorem provides the main justification for using the profile likelihood. We again use the top discovery example to illustrate the procedure. > #### Example: Top Quark Discovery Revisited Again {#example-top-quark-discovery-revisited-again .unnumbered} > > The conditional MLE of $b$ is found to be $$\begin{aligned} > \hat{b}(s) & = \frac{g + \sqrt{g^2 + 4 (1 + k) Q s}}{2(1+k)}, \\ > \textrm{where} \nonumber\\ > g & = N + Q - (1+k) s.\nonumber > \label{eq:bhat}\end{aligned}$$ > > ![(a) Contours of the DØ top discovery likelihood and the graph of $\hat{b}(s)$. (b) Plot of $-\ln \lambda(17, s)$ versus the expected signal $s$. 
The vertical lines show the boundaries of the approximate 68% interval.[]{data-label="fig:toppl"}](figs/fig_likelihood "fig:"){width="48.00000%"} ![(a) Contours of the DØ top discovery likelihood and the graph of $\hat{b}(s)$. (b) Plot of $-\ln \lambda(17, s)$ versus the expected signal $s$. The vertical lines show the boundaries of the approximate 68% interval.[]{data-label="fig:toppl"}](figs/fig_signal_profile "fig:"){width="48.00000%"} > > The likelihood $p(D | s, b)$ is shown in Fig. \[fig:toppl\](a) together with the graph of $\hat{b}(s)$. The mode (i.e. the peak) occurs at $s = \hat{s} = N - B$. By solving $$-2 \ln \frac{p_{PL}(17 | s)}{ p_{PL}(17 | 17 - 3.8)} = 1$$ for $s$ we get two solutions $s = 9.4$ and $s = 17.7$. Therefore, we can make the statement $s \in [9.4, 17.7]$ at approximately 68% C.L. Figure \[fig:toppl\](b) shows a plot of $-\ln \lambda(17, s)$ created using the [RooFit]{} [@RooFit] and [RooStats]{} [@RooStats] packages. > > Intervals constructed this way are not guaranteed to satisfy the frequentist principle. In practice, however, their coverage is very good for the typical probability models used in particle physics, even for modest amounts of data. This is illustrated in Fig. \[fig:wilks\], which shows how rapidly the density of $t(x, s)$ converges to a $\chi^2$ density for the probability distribution $p(x, y| s, b) = \textrm{Poisson}(x|s+b) \textrm{Poisson}(y | b)$[^4]. The figure also shows what happens if we impose the restriction $\hat{s} \geq 0$, that is, we forbid negative signal estimates. ![Plots of the cumulative distribution function (cdf), $P(\chi^2 < t, 1)$, of the $\chi^2$ density for one degree of freedom compared with the cdf $P(t^\prime < t | s, b)$ for four different values of the mean signal and background, $s$ and $b$. The left plot shows that even for a mean signal or background count as low as 10, the density $p(t| s, b)$ is already close to $p(\chi^2, 1)$ and therefore largely independent of $s$ and $b$. 
This is true, however, only if most of the time the maximum of the likelihood occurs away from the boundary of the parameter space. In the left plot, the signal is estimated using $\hat{s} = N - B$, which can, in principle, be arbitrarily negative. But, if we choose to set $\hat{s} = 0$ whenever $B > N$ in order to avoid negative signal estimates, we obtain the curves in the right plot. We see that for small signals, $p(t | s, b)$ still depends on the parameters.[]{data-label="fig:wilks"}](figs/fig_wilks_False "fig:"){width="48.00000%"} ![Plots of the cumulative distribution function (cdf), $P(\chi^2 < t, 1)$, of the $\chi^2$ density for one degree of freedom compared with the cdf $P(t^\prime < t | s, b)$ for four different values of the mean signal and background, $s$ and $b$. The left plot shows that even for a mean signal or background count as low as 10, the density $p(t| s, b)$ is already close to $p(\chi^2, 1)$ and therefore largely independent of $s$ and $b$. This is true, however, only if most of the time the maximum of the likelihood occurs away from the boundary of the parameter space. In the left plot, the signal is estimated using $\hat{s} = N - B$, which can, in principle, be arbitrarily negative. But, if we choose to set $\hat{s} = 0$ whenever $B > N$ in order to avoid negative signal estimates, we obtain the curves in the right plot. We see that for small signals, $p(t | s, b)$ still depends on the parameters.[]{data-label="fig:wilks"}](figs/fig_wilks_True "fig:"){width="48.00000%"} ### Hypothesis Tests It is hardly possible in experimental particle physics to avoid the testing of hypotheses, testing that invariably leads to decisions. For example, electron identification entails hypothesis testing; given data $D$ we ask: is this particle an isolated electron or is it not an isolated electron? Then we decide whether or not it is and proceed on the basis of the decision that has been made. 
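Before looking at specific tests, it is worth checking the profile-likelihood machinery numerically, since the same statistic $t(x, s)$ reappears as a test statistic below. The sketch that follows (illustrative Python, using the conditional MLE $\hat{b}(s)$ quoted earlier) solves $t(D, s) = 1$ on either side of $\hat{s}$ and recovers, to within rounding, the approximate 68% interval $[9.4, 17.7]$ from the top-quark example:

```python
import math

# DØ top-quark inputs and the derived quantities from the text
N, B, dB = 17, 3.8, 0.6
Q, k = (B / dB) ** 2, B / dB ** 2

def loglike(s, b):
    # log[Poisson(N, s+b) Poisson(Q, bk)]; constant lgamma terms are
    # omitted because they cancel in the likelihood ratio
    return N * math.log(s + b) - (s + b) + Q * math.log(b * k) - b * k

def b_hat(s):
    """Conditional MLE of the background b for fixed s."""
    g = N + Q - (1 + k) * s
    return (g + math.sqrt(g * g + 4 * (1 + k) * Q * s)) / (2 * (1 + k))

def t(s):
    """Profile-likelihood test statistic t(D=17, s) = -2 ln lambda."""
    return -2.0 * (loglike(s, b_hat(s)) - loglike(N - B, B))

def solve(lo, hi, target=1.0):
    """Bisect for t(s) = target; t is monotone on either side of s-hat."""
    sign = 1.0 if t(lo) > target else -1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if sign * (t(mid) - target) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Approximate 68% interval: solve t(s) = 1 on each side of s_hat = 13.2
lower = solve(0.5, N - B)
upper = solve(N - B, 40.0)
print(f"approximate 68% CL interval: [{lower:.2f}, {upper:.2f}]")
```

The printed endpoints agree with the interval quoted in the example to about one decimal place; the small residual difference is just rounding in the quoted values.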
In the discovery of the Higgs boson, we had to test whether, given the data available in early summer 2012, the Standard Model without a Higgs boson, a somewhat ill-founded background-only model, or the Standard Model with a Higgs boson, the background $+$ signal model, was the preferred hypothesis. We decided that the latter model was preferred and announced the discovery of a new boson. Given the ubiquity of hypothesis testing, it is important to have a grasp of the methods that have been invented to implement it. One method was due to Fisher [@Fisher], another was invented by Neyman, and a third (Bayesian) method was proposed by Sir Harold Jeffreys, all around the same time. Today, we tend to merge the approaches of Fisher and Neyman, and we hardly ever use the method of Jeffreys even though in several respects the method of Jeffreys and its modern variants are arguably more natural. In particle physics, we regard our Fisher/Neyman hybrid as sacrosanct, witness the near-religious adherence to the $5\sigma$ discovery rule. However, the pioneers disagreed strongly with each other about how to test hypotheses, which suggests that the topic is considerably more subtle than it seems. We first describe the method of Fisher, then follow with a description of the method of Neyman. For concreteness, we consider the problem of deciding between a background-only model and a background $+$ signal model. [R]{}[0.5]{} ![image](figs/pvalue1){width="45.00000%"} #### Fisher’s Approach In Fisher’s approach, we construct a **null hypothesis**, often denoted by $H_0$, and *reject* it should some measure be judged small enough to cast doubt on the validity of this hypothesis. In our example, the null hypothesis is the background-only model, for example, the SM without a Higgs boson.
The measure is called a **p-value** and is defined by $$\begin{aligned} \textrm{p-value}(x_0) = P( x > x_0| H_0), \end{aligned}$$ where $x$ is a statistic designed so that large values indicate departure from the null hypothesis. This is illustrated in Fig. \[fig:pvalue1\], which shows the location of the observed value $x_0$ of $x$. The p-value is the probability that $x$ could have been higher than the $x$ actually observed. It is argued that a small p-value implies that either the null hypothesis is false or something rare has occurred. If the p-value is extremely small, say $\sim 3 \times 10^{-7}$, then of the two possibilities the most common response is to presume the null to be false. If we apply this method to the DØ top quark discovery data, and neglect the uncertainty in the null hypothesis, we find $$\begin{aligned} \textrm{p-value} & = \sum_{D=17}^\infty \textrm{Poisson}(D, 3.8) = 5.7 \times 10^{-7}.\end{aligned}$$ In order to report a more intuitive number, the common practice is to map the p-value to the $Z$ scale defined by $$\begin{aligned} Z & = \sqrt{2} \, \textrm{erf}^{-1}(1 - 2\,\textrm{p-value}).\end{aligned}$$ This is the number of Gaussian standard deviations away from the mean[^5]. A p-value of $5.7 \times 10^{-7}$ corresponds to a $Z$ of $4.9\sigma$. The $Z$-value can be calculated using the [Root]{} function $$Z = \textrm{\tt -TMath::NormQuantile(p-value)}.$$ [R]{}[0.5]{} ![image](figs/neymantest1){width="45.00000%"} #### Neyman’s Approach In Neyman’s approach *two* hypotheses are considered, the null hypothesis $H_0$ and an alternative hypothesis $H_1$. This is illustrated in Fig. \[fig:neymantest1\]. In our example, the null is the same as before but the alternative hypothesis is the SM with a Higgs boson. Again, one generally chooses $x$ so that large values would cast doubt on the validity of $H_0$. However, the Neyman test is specifically designed to respect the frequentist principle, which is done as follows.
A *fixed* probability $\alpha$ is chosen, which corresponds to some threshold value $x_\alpha$ defined by $$\begin{aligned} \alpha & = P( x > x_\alpha | H_0),\end{aligned}$$ called the significance (or size) of the test. Should the observed value $x_0 > x_\alpha$, or equivalently, p-value($x_0$) $< \alpha$, the hypothesis $H_0$ is rejected in favor of the alternative. In particle physics, in addition to applying the Neyman hypothesis test, we also report the p-value. This is sensible because there is more information in the p-value than merely reporting the fact that a null hypothesis was rejected at a significance level of $\alpha$. The Neyman method satisfies the frequentist principle by construction. Since the significance of the test is fixed, $\alpha$ is the relative frequency with which true null hypotheses would be rejected and is called the **Type I** error rate. [L]{}[0.5]{} ![image](figs/neymantest2){width="45.00000%"} However, since we have specified an alternative hypothesis there is more that can be said. Figure \[fig:neymantest1\] shows that we can also calculate $$\begin{aligned} \beta & = P( x \leq x_\alpha | H_1),\end{aligned}$$ which is the relative frequency with which we would reject the hypothesis $H_1$ if it is true. This mistake is called a **Type II** error. The quantity $1 - \beta$ is called the **power** of the test and is the relative frequency with which we would accept the hypothesis $H_1$ if it is true. Obviously, for a given $\alpha$ we want to maximize the power. Indeed, this is the basis of the Neyman-Pearson lemma (see for example Ref. [@James]), which asserts that given two simple hypotheses — that is, hypotheses in which all parameters have well-defined values — the optimal statistic $t$ to use in the hypothesis test is the likelihood ratio $t = p(x|H_1) / p(x | H_0)$. Maximizing the power seems sensible. Consider Fig. \[fig:neymantest2\]. The significance of the test in this figure is the same as that in Fig.
\[fig:neymantest1\], so the Type I error rate is identical. However, the Type II error rate is much greater in Fig. \[fig:neymantest2\] than in Fig. \[fig:neymantest1\], that is, the power of the test is considerably weaker in the former. In that case, there may be no compelling reason to reject the null since the alternative is not that much better. This insight was one source of Neyman’s disagreement with Fisher. Neyman objected to the possibility that one might reject a null hypothesis regardless of whether it made sense to do so. Neyman insisted that the task is always one of deciding between competing hypotheses. Fisher’s counter argument was that an alternative hypothesis may not be available, but we may nonetheless wish to know whether the only hypothesis that is available is worth keeping. As we shall see, the Bayesian approach also requires an alternative, in agreement with Neyman, but in a way that neither he nor Fisher agreed with! We have assumed that the hypotheses $H_0$ and $H_1$ are simple, that is, fully specified. Unfortunately, most of the hypotheses that arise in realistic particle physics analyses are not of this kind. In the Higgs boson discovery analyses by ATLAS and CMS, the probability models depend on many nuisance parameters for which only estimates are available. Consequently, neither the background-only nor the background $+$ signal hypotheses are fully specified. Such hypotheses are called **compound hypotheses**. In order to illustrate how hypothesis testing proceeds in this case, we turn again to the top discovery example. > #### Example {#example-7 .unnumbered} > > As we saw in Sec. \[sec:profile\], the standard way to handle nuisance parameters in the frequentist approach is to replace them by their conditional MLEs and thereby reduce the likelihood function to the profile likelihood. In the top discovery example, we obtain a function $p_{PL}(D | s)$ that depends on the single parameter, $s$.
We now treat this function as if it were a likelihood and invoke both the Neyman-Pearson lemma, which suggests the use of likelihood ratios, and Wilks’ theorem to motivate the use of the function $t(x, s)$ given in Eq. (\[eq:wilks\]) to distinguish between two hypotheses: the hypothesis $H_1$ in which $s = \hat{s} = N - B$ and the hypothesis $H_0$ in which $s \neq \hat{s}$, for example, the background-only hypothesis $s = 0$. In the context of testing, $t(x, s)$ is called a **test statistic**, which, unlike a statistic as we have defined it (see Sec. \[sec:statistics\]), usually depends on at least one unknown parameter. > > In principle, the next step is the computationally arduous task of simulating the distribution of the statistic $t(x, s)$. The task is arduous because *a priori* the probability density $p(t| s, b)$ can depend on *all* the parameters that exist in the original likelihood. If this is really the case, then after all this effort we seem to have achieved a pyrrhic victory! But, this is where Wilks’ theorem saves the day, at least approximately. We can avoid the burden of simulating $t(x, s)$ because the latter is approximately a $\chi^2$ variate. > > Using $N = 17$ and $s = 0$, we find $t_0 = t(N=17, s = 0) = 4.6$. According to the results shown in Fig. (\[fig:toppl\])(a), $N = 17$ may be considered “a lot of data”; therefore, we may use $t_0$ to implement a hypothesis test by comparing $t_0$ with a fixed value $t_\alpha$ corresponding to the significance level $\alpha$ of the test. Lecture 3: The Bayesian Approach ================================ In this lecture, we introduce the Bayesian approach to inference starting with a description of its salient features and ending with a detailed example, again using the top quark discovery data from DØ. The main point to be understood about the Bayesian approach is that it is merely applied probability theory (see Sec. \[sec:prob\]).
A method is Bayesian if - it is based on the degree of belief interpretation of probability and - it uses Bayes theorem $$\begin{aligned} p(\theta, \omega | D) & = \frac{p(D|\theta, \omega) \, \pi(\theta, \omega)}{p(D)}, \\ \textrm{where} \nonumber\\ D & = \textrm{ observed data}, \nonumber \\ \theta & = \textrm{ parameters of interest}, \nonumber\\ \omega & = \textrm{ nuisance parameters}, \nonumber\\ p(\theta, \omega| D) & = \textrm{posterior density}, \nonumber\\ \pi(\theta, \omega) & = \textrm{prior density (or prior for short)}. \nonumber \end{aligned}$$ for *all* inferences. The result of a Bayesian inference is the posterior density $p(\theta, \omega | D)$ from which, if desired, various summaries can be extracted. The parameters can be discrete or continuous and nuisance parameters are eliminated by marginalization, $$\begin{aligned} p(\theta | D) & = \int p(\theta, \omega | D ) \, d\omega, \\ & \propto \int p(D | \theta, \omega) \, \pi(\theta, \omega) \, d\omega. \nonumber\end{aligned}$$ The function $\pi(\theta, \omega)$, called the prior, encodes whatever information we have about the parameters $\theta$ and $\omega$ independently of the data $D$. A key feature of the Bayesian approach is recursion: the use of the posterior density $p(\theta, \omega|D)$, or one, or more, of its marginals, as the prior in a subsequent analysis. These simple rules yield an extremely powerful and general inference model. Why then is the Bayesian approach not more widely used in particle physics? The answer is partly historical: the frequentist approach was dominant at the dawn of particle physics. It is also partly the widespread perception that the Bayesian approach is too subjective to be useful for scientific work. However, there is published evidence that this view is mistaken, witness the success of Bayesian methods in high-profile analyses in particle physics such as the discovery of single top quark production at the Tevatron [@Abazov:2009ii; @Aaltonen:2009jj].
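The recursion feature can be made concrete with a small conjugate sketch (the Poisson-Gamma model and the counts below are illustrative, not part of the DØ example): for a Poisson likelihood a Gamma prior updates to a Gamma posterior, and using the posterior from a first count as the prior for a second count reproduces the posterior obtained from a single analysis of both counts.

```python
# Poisson-Gamma conjugate updating: a Gamma(a, rate r) prior combined with an
# observed Poisson count n (one unit of exposure) gives a Gamma(a + n, r + 1)
# posterior.
def update(a, r, n):
    return a + n, r + 1

a0, r0 = 1.0, 1.0   # illustrative initial prior
n1, n2 = 12, 9      # two illustrative observed counts

# recursion: the posterior after n1 serves as the prior for n2
a_seq, r_seq = update(*update(a0, r0, n1), n2)

# single analysis of the combined data (total count, two exposure units)
a_all, r_all = a0 + n1 + n2, r0 + 2

print((a_seq, r_seq), (a_all, r_all))  # identical: (22.0, 3.0) twice
```

The equivalence is general: multiplying the likelihood factors and normalizing once is the same as normalizing after each factor.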
Model Selection --------------- Conceptually, hypothesis testing in the Bayesian approach (also called model selection) proceeds exactly the same way as any other Bayesian calculation: we compute the posterior density, $$\begin{aligned} p(\theta, \omega, H | D) & = \frac{p(D | \theta, \omega, H) \, \pi(\theta, \omega, H)} {p(D)},\end{aligned}$$ and marginalize it with respect to all parameters except the ones that label the hypotheses or models, $H$, $$\begin{aligned} p(H | D ) & = \int p(\theta, \omega, H | D) \, d\theta \, d\omega. \label{eq:pHD}\end{aligned}$$ Equation (\[eq:pHD\]) is the probability of hypothesis $H$ given the observed data $D$. In principle, the parameters $\omega$ could also depend on $H$. For example, suppose that $H$ labels different parton distribution function (PDF) models, say CT10, MSTW, and NNPDF; then $\omega$ would indeed depend on the PDF model and should be written as $\omega_H$. It is usually more convenient to arrive at the probability $p(H|D)$ in stages. 1. Factorize the prior in the most convenient form, $$\begin{aligned} \pi(\theta, \omega_H, H) & = \pi(\theta, \omega_H | H) \, \pi(H), \nonumber\\ & = \pi(\theta |\omega_H, H) \, \pi(\omega_H | H) \, \pi(H),\\ \textrm{or} \nonumber\\ & = \pi(\omega_H |\theta, H) \, \pi(\theta | H) \, \pi(H). \end{aligned}$$ Often, we can assume that the parameters of interest $\theta$ are independent, *a priori*, of both the nuisance parameters $\omega_H$ and the model label $H$, in which case we can write, $\pi(\theta, \omega_H, H) = \pi(\theta) \, \pi(\omega_H|H) \, \pi(H)$. 2. Then, for each hypothesis, $H$, compute the function $$\begin{aligned} p(D | H ) = \int p(D | \theta, \omega_H, H) \, \pi(\theta, \omega_H | H) \, d\theta \, d\omega_H. \end{aligned}$$ 3. Then, compute the probability of each hypothesis, $$\begin{aligned} p(H | D ) =\frac{p(D | H) \, \pi(H)} {\sum_H p(D | H) \, \pi(H)}.
\end{aligned}$$ Clearly, in order to compute $p(H | D)$ it is necessary to specify the priors $\pi(\theta, \omega | H)$ and $\pi(H)$. With some effort, it is possible to arrive at an acceptable form for $\pi(\theta, \omega | H)$; however, it is highly unlikely that consensus could ever be reached on the discrete prior $\pi(H)$. At best, one may be able to adopt a convention. For example, if by convention two hypotheses $H_0$ and $H_1$ are to be regarded as equally likely, *a priori*, then it would make sense to assign $\pi(H_0) = \pi(H_1) = 0.5$. One way to circumvent the specification of the prior $\pi(H)$ is to compare the probabilities, $$\begin{aligned} \frac{p(H_1 | D )}{p(H_0 | D)} =\left[ \frac{p(D | H_1)}{p(D | H_0)} \right] \, \frac{ \pi(H_1)} {\pi(H_0)}, \end{aligned}$$ and use only the term in brackets, called the global **Bayes factor**, $B_{10}$, as a way to compare hypotheses. The Bayes factor specifies by how much the relative probabilities of two hypotheses change as a result of incorporating new data, $D$. The word global indicates that we have marginalized over all the parameters of the two models. The *local* Bayes factor, $B_{10}(\theta)$ is defined by $$\begin{aligned} B_{10}(\theta) & = \frac{p(D| \theta, H_1)}{p(D| H_0)}, \\ \textrm{where}, \nonumber\\ p(D| \theta, H_1) & \equiv \int p(D | \theta, \omega_{H_1}, H_1) \, \pi(\omega_{H_1} | H_1) \, d\omega_{H_1},\end{aligned}$$ are the **marginal** or integrated likelihoods in which we have assumed the *a priori* independence of $\theta$ and $\omega_{H_1}$. We have further assumed that the marginal likelihood of $H_0$ is independent of $\theta$, which is a very common situation. For example, $\theta$ could be the expected signal count $s$, while $\omega_{H_1} = \omega$ could be the expected background $b$. In this case, the hypothesis $H_0$ is a special case of $H_1$, namely, it is the same as $H_1$ with $s = 0$.
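To make this special case concrete, here is a sketch of a local Bayes factor using the top-quark numbers $D = 17$ and $b = 3.8$ from Lecture 2, but, for simplicity, treating the background as exactly known; the full analysis at the end of this lecture instead marginalizes over $b$, which yields a smaller factor.

```python
from math import exp, log, lgamma

def log_poisson(n, mu):
    # log Poisson(n | mu) = n log(mu) - mu - log(n!)
    return n * log(mu) - mu - lgamma(n + 1)

D, b = 17, 3.8   # observed count and expected background

def local_bayes_factor(s):
    # B10(s) = p(D | s, H1) / p(D | H0), with the background b treated as known
    return exp(log_poisson(D, s + b) - log_poisson(D, b))

B10 = local_bayes_factor(14.0)
print(B10)  # roughly 2 x 10^5: s = 14 is strongly favored over s = 0
```

Because the hypotheses are nested, the denominator is simply the numerator evaluated at $s = 0$.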
An hypothesis that is a special case of another is said to be **nested** in the more general hypothesis. The Bayesian example, discussed below, will make this clearer. There is a subtlety that may be missed: because of the way we have defined $p(D|\theta, H)$, we need to multiply $p(D| \theta, H)$ by the prior $\pi(\theta)$ and then integrate with respect to $\theta$ in order to calculate $p(D | H)$. ### A Word About Priors Constructing a prior for nuisance parameters is generally neither controversial (for most parameters) nor problematic. Such difficulties as do arise occur when the priors must, of necessity, depend on expert judgement. For example, one theorist may insist that a uniform prior within a finite interval is a reasonable prior for the factorization scale in a QCD calculation, while in the expert judgement of another the interval should be twice as large. Clearly, in this case, there is no getting around the fact that the prior for this parameter is unavoidably subjective. However, once a choice is made, a prior $\pi(\omega_H|H)$ that integrates to one can be constructed. The Achilles heel of the Bayesian approach is the need to specify the prior $\pi(\theta)$, for the parameters of interest, at the start of the inference chain when we know almost nothing about these parameters. Careless specification of this prior can yield results that are unreliable or even nonsensical. The mandatory requirement is that the posterior density be proper, that is, integrate to unity. Ideally, the same should hold for priors. A very extensive literature exists on the topic of prior specification when the available information is extremely limited. However, a discussion of this topic is beyond the scope of these lectures, but we shall make a few remarks. For model selection, we need to proceed with caution because Bayes factors are sensitive to the choice of priors and therefore less robust than posterior densities.
Suppose that the prior $\pi(\theta) = C f(\theta)$, where $C$ is a normalization constant. The global Bayes factor for the two hypotheses $H_1$ and $H_0$ can be written as $$\begin{aligned} B_{10} = C \frac{\int p(D | \theta, H_1) \, f(\theta) \, d\theta}{p(D | H_0)}.\end{aligned}$$ Therefore, if the constant $C$ is ill defined, typically because $\int f(\theta) \, d\theta = \infty$, the Bayes factor will likewise be ill defined. For this reason, it is generally recommended that an improper prior not be used for parameters $\theta$ that occur only in one hypothesis, here $H_1$. However, for parameters that are common to all hypotheses, it is permissible to use improper priors because the ill defined constant cancels in the Bayes factor. The discussion so far has been somewhat abstract. The next section therefore works through a detailed example of a possible Bayesian analysis of the DØ top discovery data. [R]{}[0.5]{} ![image](figs/fig_signal_post){width="50.00000%"} The Top Quark Discovery: A Bayesian Analysis -------------------------------------------- In this section we shall perform the following calculations as a way to illustrate a typical Bayesian analysis, 1. compute the posterior density $p(s | D)$, 2. compute a 68% credible interval $[l(D), u(D)]$, and 3. compute the global Bayes factor $B_{10} = p(D | H_1) / p(D | H_0)$. ### Probability model {#probability-model .unnumbered} The first step in any serious statistical analysis is to think deeply about what has been done in the physics analysis; for example, to trace in detail the steps that led to the background estimates, determine the independent systematic effects and identify explicitly what is known about them. Although, by tradition, we tend to think of potential data $x$ separately from the parameters $s$ and $b$, it should be recognized that this is done for convenience. 
The full probability model is the joint probability $$\begin{aligned} p(x, s, b | I),\end{aligned}$$ which, as is true of *all* probability models, is conditional on the information and assumptions, $I$, that define the abstract space $\Omega$ (see Sec. \[sec:prob\]). In these lectures, we have omitted the conditioning data $I$, and will continue to do so here, but it should not be forgotten that it is always present and may differ from one probability model to another. The full probability model $p(x, s, b)$ can be factorized in several ways, all of which are mathematically valid. However, we find it convenient to factorize the model in the following way $$\begin{aligned} p(x, s, b) = p(x | s, b) \, \pi(s, b),\end{aligned}$$ where we have introduced the symbol $\pi$ in order to highlight the distinction we choose to make between this part of the model and the remainder. We are entirely free to decide how much of the model we place in $p(x | s, b)$ and how much in $\pi(s, b)$; what matters is the form of the full model $p(x, s, b)$. In the frequentist analysis of the top quark discovery data, we took $N$ and $B$ to be the data $D$. We did so because in the frequentist approach, the function $\pi(s, b)$ does not exist and consequently we have no choice but to include everything in the function $p(x| s, b)$. One virtue of a Bayesian perspective is that we are not bound by this stricture. To make the point explicitly, we take the probability distribution, $p(x | s, b)$, to be $$\begin{aligned} p(x|s, b) = \textrm{Poisson}(x, s + b). \label{eq:pxsb}\end{aligned}$$ The interpretation of $p(x | s, b)$ is clear: it is the probability to observe $x$ events *given* that the mean event count is $s + b$. What does $\pi(s, b)$ represent? This function is the **prior** that encodes what we *know*, or *assume*, about the mean background and signal independently of the potential observations $x$.
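A minimal numerical sanity check of the probability model in Eq. (\[eq:pxsb\]) (a sketch; $s = 14$ and $b = 3.8$ are the estimates used throughout the top-quark example): the probabilities $\textrm{Poisson}(x, s+b)$ sum to one over $x$, and the most probable count sits at the integer part of $s + b$.

```python
from math import exp, log, lgamma

def poisson_pmf(x, mu):
    # Poisson(x, mu) = exp(-mu) mu^x / x!  (computed via logs for stability)
    return exp(x * log(mu) - mu - lgamma(x + 1))

s, b = 14.0, 3.8
probs = [poisson_pmf(x, s + b) for x in range(200)]

total = sum(probs)                              # should be ~1
mode = max(range(200), key=lambda x: probs[x])  # most probable count
print(total, mode)  # ~1.0 and 17, the integer part of s + b = 17.8
```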
The prior $\pi(s, b)$ can be factored in two ways, $$\begin{aligned} \pi(s , b) & = \pi(s | b ) \, \pi(b), \nonumber \\ & = \pi(b | s ) \, \pi(s),\end{aligned}$$ both of which accord with the probability rules. The factorizations remind us that the parameters $s$ and $b$ may not be probabilistically independent. However, we shall assume that they are, at least at this stage of the analysis, in which case it is permissible to write, $$\begin{aligned} \pi(s , b) & = \pi(s) \, \pi(b). \label{eq:prior1} \end{aligned}$$ We first consider the background prior $\pi(b)$ and ask: what do we know about the background? We know the count $Q$ in the control region and we have an estimate of the control region to signal region scale factor $k$. The likelihood for $Q$ is taken to be $$\begin{aligned} p(Q | k, b) = \textrm{Poisson}(Q, k b), \label{eq:pQkb}\end{aligned}$$ from which, together with a prior $\pi(k, b)$, we can compute the posterior density $$\begin{aligned} p(b | Q, k) = p(Q | k, b) \, \pi(k, b) / p(Q). \label{eq:pbQk}\end{aligned}$$ As usual, we factorize the prior, $\pi(k, b) = \pi(k|b) \pi_0(b) $, where we have introduced the subscript $0$ to distinguish $\pi_0(b)$ from the background prior associated with Eq. (\[eq:pxsb\]). Then, we consider the separate factors $\pi_0(b)$ and $\pi(k | b)$. What do we know about $b$ at this stage? Clearly, $b \geq 0$. But, that is all we know apart from the background likelihood, Eq. (\[eq:pQkb\]). Today, after a century of argument and discussion, the consensus amongst statisticians is that there is no unique way to represent such vague information. However, well founded ways to construct such priors are available, see for example Ref. [@Demortier:2010sn] and references therein; but for simplicity we take the prior $\pi_0(b) = 1$, that is, the **flat prior**. If the uncertainty in $k$ can be neglected, the (proper!) prior for $k$ is $\pi(k|b) = \delta(k - Q/B)$, which amounts to replacing $k$ in Eq. (\[eq:pbQk\]) by $Q/B$. 
When the dust settles, we find $$\begin{aligned} p(b | Q, k) = \textrm{Gamma}(k b, 1, Q+1) = \frac{e^{-k b} (k b)^Q} {\Gamma(Q+1)},\end{aligned}$$ for the posterior density of $b$, which can serve as the prior $\pi(b)$ associated with Eq. (\[eq:pxsb\]). By construction, $p(x, s, b)$ is identical in form to the likelihood in Eq. (\[eq:toplh\]); we have simply availed ourselves of the freedom to factorize $p(x, s, b)$ as we wish and therefore to reinterpret the factors. This freedom is useful because it makes it possible to keep the likelihood simple while relegating the complexity to the prior. This may not seem, at first, to be terribly helpful; after all, we arrived at the same mathematical form as Eq. (\[eq:toplh\]). However, the complexity can be substantially mitigated through the numerical treatment of the prior, as discussed at the end of the next section. The likelihood, as we have conceptualized the problem, is given by $$\begin{aligned} p(D| s, b) = \frac{e^{-(s+b)} (s + b)^D}{D!},\end{aligned}$$ where $D = 17$ events. The final ingredient is the prior $\pi(s)$. At this stage, all we know is that $s \geq 0$. Again, there is no unique way to specify $\pi(s)$, though as noted there are well founded methods to construct it. We shall variously assume either the improper prior $\pi(s) = 1$ or the proper prior $\pi(s) = \delta(s - 14)$. ### Marginal likelihood {#marginal-likelihood .unnumbered} After this somewhat discursive discussion of the probability model, we have done the hard part: building the full probability model. Hereafter, the rest of the Bayesian analysis is mere computation. 
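As a first taste of that computation, the following sketch checks numerically that marginalizing the Poisson likelihood over $b$, with the gamma density above as the prior $\pi(b)$, agrees with the closed-form sum quoted in the next equation (the result Exercise 10 asks for). The control-region values $Q$ and $k$ used below are illustrative placeholders, not the DØ numbers.

```python
from math import exp, log, lgamma

N, s = 17, 14.0     # observed count and test signal from the example
Q, k = 100, 26.3    # ILLUSTRATIVE control-region count and scale factor

def log_poisson(n, mu):
    # log Poisson(n | mu) = n log(mu) - mu - log(n!)
    return n * log(mu) - mu - lgamma(n + 1)

def gamma_pdf(b, shape, rate):
    # gamma density in b (shape = Q+1, rate = k); this is the Gamma(k b, 1, Q+1)
    # posterior above with the Jacobian factor k for the change of variable
    return exp(shape * log(rate) + (shape - 1) * log(b) - rate * b - lgamma(shape))

def beta_pdf(y, a, c):
    # Beta(y, a, c) density
    ln_beta = lgamma(a) + lgamma(c) - lgamma(a + c)
    return exp((a - 1) * log(y) + (c - 1) * log(1 - y) - ln_beta)

# direct numerical marginalization: p(D|s) = integral of Poisson(N|s+b) pi(b) db
db = 0.001
direct = sum(exp(log_poisson(N, s + b)) * gamma_pdf(b, Q + 1, k) * db
             for b in (db * (i + 0.5) for i in range(20000)))

# closed-form sum: (1/Q) (1-x)^2 sum_r Beta(x, r+1, Q) Poisson(N-r | s)
x = 1.0 / (1.0 + k)
closed = (1 - x) ** 2 / Q * sum(beta_pdf(x, r + 1, Q) * exp(log_poisson(N - r, s))
                                for r in range(N + 1))

print(direct, closed)  # the two values agree
```

With the actual DØ values of $Q$ and $k$ the same check applies unchanged.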
It is convenient to eliminate the nuisance parameter $b$, $$\begin{aligned} p(D | s, H_1) & = \int_0^\infty p(D | s, b ) \, \pi(b ) d(k b),\nonumber\\ & = \frac{1}{Q} (1- x)^2 \sum_{r=0}^N \textrm{Beta}(x, r+1, Q) \, \textrm{Poisson}(N - r| s ),\\ \textrm{where } x & = 1/(1+k), \nonumber\\ \nonumber\\ & \framebox{\textbf{Exercise 10:} Show this} \nonumber\end{aligned}$$ and thereby arrive at the marginal likelihood $p(D | s, H_1)$. This example, the **Poisson-gamma** model, is particularly simple and lends itself to exact calculation. However, the complexity rapidly increases as the prior becomes more and more complicated. In the probability model that is used in the Higgs boson analyses at the LHC, the part we would consider the prior, $\pi(\mu, m_H, \omega)$, is of enormous complexity. However, the part that we would call the likelihood, $p(D|\mu, m_H, \omega)$, is relatively simple. The parameter $\mu$ denotes one or more signal strengths — the ratio of the cross section times branching fraction to that predicted by the Standard Model (SM), and $m_H$ is the Higgs boson mass. The parameters $\omega$ represent the expected (and therefore unknown) SM signal predictions and the expected backgrounds. When faced with such complexity, it proves useful to use a **hierarchical Bayesian model**. Briefly, the prior $\pi(\mu, m_H, \omega)$ is written as $$\begin{aligned} \pi(\mu, m_H, \omega) & = \pi(\omega| \mu, m_H) \, \pi(\mu, m_H), \\ \textrm{where } \pi(\omega| \mu, m_H) & = \int \pi(\omega| \phi, \mu, m_H) \, \pi(\phi | \mu, m_H) \, d\phi.\end{aligned}$$ The prior $\pi(\phi | \mu, m_H)$ models the lowest level systematic parameters that define quantities such as the jet energy scale, lepton efficiencies, trigger efficiencies, and the parton distribution functions. It is usually straightforward to sample from this prior.
Moreover, the function $\pi(\omega| \phi, \mu, m_H)$ is nothing more than the prior for the expected signal and background parameters $\omega$, which through estimates $\hat{\omega}$ depend implicitly on the parameters $\phi$. The prior $\pi(\omega| \phi, \mu, m_H)$ is generally quite simple; for binned data it is just a product of gamma (or gamma mixture) densities; more generally, it is a product of gamma, Gaussian, or log-normal densities. Consequently, the marginalizations over $\omega$ can be done in two steps: first generate a point $\phi_i$ from $\pi(\phi| \mu, m_H)$, then generate a point $\omega_i$ from $\pi(\omega|\phi_i, \mu, m_H)$. In that way, the enormous complexity of explicitly modeling the dependence of $\omega$ on $\phi$ is avoided, with the added benefit that all, possibly very complicated, correlations (in principle, to all orders) are accounted for automatically. The marginal likelihood can be approximated by $$\begin{aligned} p(D | \mu, m_H) \approx \frac{1}{M} \sum_{m=1}^M p(D | \mu, m_H, \omega_m).\end{aligned}$$ What we have just described is merely integration via a Monte Carlo approximation. The point is that the sampling required to compute $p(D | \mu, m_H)$ can be run in $M$ parallel analysis jobs, each of which is given a different random number seed in order to sample a single pair of points $\phi_m$ and $\omega_m$. The results of such a Bayesian analysis would be the likelihood $p(D| \mu, m_H, \omega)$ and an ensemble of points $\{ \omega_m \}$. ### Posterior density {#posterior-density .unnumbered} Given the marginal likelihood $p(D | s, H_1)$ and a prior $\pi(s)$ we can compute the posterior density, $$\begin{aligned} p(s | D, H_1) & = p(D | s, H_1) \, \pi(s) / p(D | H_1), \\ \textrm{where,} \nonumber \\ p(D | H_1) & = \int_0^\infty p(D | s, H_1) \, \pi(s) \, ds.
\nonumber\end{aligned}$$ Again, for simplicity, we assume a flat prior for the signal, $\pi(s) = 1$, and find $$\begin{aligned} p(s | D, H_1) & = \frac{\sum_{r=0}^N \textrm{Beta}(x, r + 1, Q) \, \textrm{Poisson}( N - r| s)} {\sum_{r=0}^N \textrm{Beta}(x, r + 1, Q)}, \\ \medskip & \framebox{ \parbox{0.6\textwidth}{\textbf{Exercise 11:} \textrm{Derive an expression for} $p(s | D, H_1)$ assuming $\pi(s) = $ Gamma$(q s, 1, M + 1)$ where $q$ and $M$ are constants} } \nonumber\end{aligned}$$ from which we can compute the central **credible interval** $[9.9 , 18.4]$ for $s$ at 68% C.L., which is shown in Fig. \[fig:post\]. ### Bayes factor As noted, the number $p(D | H_1)$ can be used to perform a hypothesis test. But, as argued above, we need to use a proper prior for the signal, that is, a prior that integrates to one. The simplest such prior is a $\delta$-function, e.g., $\pi(s) = \delta(s - 14)$. Using this prior, we find $$\begin{aligned} p(D | H_1) = p(D | 14, H_1) = 9.28 \times 10^{-2}.\end{aligned}$$ Since the background-only hypothesis $H_0$ is nested in $H_1$, and defined by $s = 0$, the number $p(D | H_0)$ is given by $p(D|0, H_1)$, which yields $$\begin{aligned} p(D | H_0) = p(D | 0, H_1) = 3.86 \times 10^{-6}.\end{aligned}$$ We conclude that the hypothesis $s = 14$ is favored over $s = 0$ by a Bayes factor of 24,000. In order to avoid large numbers, the Bayes factor can be mapped into a (signed) measure akin to the frequentist “$n$-sigma” [@Sezen], $$\begin{aligned} Z = \textrm{sign}(\ln B_{10}) \sqrt{2 |\ln B_{10}|}, \end{aligned}$$ which gives $Z = 4.5$. Negative values of $Z$ correspond to hypotheses that are excluded. Summary {#summary .unnumbered} ======= These lectures gave an overview of the main ideas of statistical inference in a form directly applicable to statistical analysis in particle physics. Two widely used approaches were covered, frequentist and Bayesian.
While we tried to focus on the practical, our hope is that we have given just enough commentary about the topics to place them in some intellectual context. We hope that the take-away message is that it is worth learning a bit more about statistics if only to avoid fruitless arguments and discussions with co-workers. Statistics is not physics. Nature is the ultimate arbiter of which physics ideas are “correct”. Unfortunately, the ultimate arbiter of statistical ideas, apart from the mundanity of mathematical correctness, is intellectual taste. Therefore, the other take-home message is > “Have the courage to use your own understanding” > > Immanuel Kant Acknowledgement {#acknowledgement .unnumbered} =============== I thank Nick Ellis, Martijn Mulders, Kate Ross, and their counterparts from JINR, for organizing and hosting a very enjoyable school, and the students for their keen participation and youthful enthusiasm. These lectures were supported in part by US Department of Energy grant DE-FG02-13ER41942. [99]{} S. K. Chatterjee, *Statistical Thought: A Perspective and History*, Oxford University Press, Oxford (2003). F. James, *Statistical Methods in Experimental Physics*, 2nd Edition, World Scientific, Singapore (2006). G. Cowan, *Statistical Data Analysis*, Oxford University Press, Oxford (1998). R. J. Barlow, *Statistics: A Guide To The Use Of Statistical Methods In The Physical Sciences*, The Manchester Physics Series, John Wiley and Sons, New York (1989). G. Taraldsen and B.H. Lindqvist, “Improper Priors Are Not Improper,” The American Statistician, Vol. 64, Issue 2, 154 (2010). L. Daston, “How Probability Came To Be Objective And Subjective,” Hist. Math. 21, 330 (1994). F. Abe [*et al.*]{} \[CDF Collaboration\], “Observation of top quark production in $\bar{p}p$ collisions,” Phys. Rev. Lett.  [**74**]{}, 2626 (1995) \[hep-ex/9503002\]. S. Abachi [*et al.*]{} \[D0 Collaboration\], “Observation of the top quark,” Phys. Rev. Lett. 
[**74**]{}, 2632 (1995) \[hep-ex/9503003\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], “Search for contact interactions using the inclusive jet $p_T$ spectrum in $pp$ collisions at $\sqrt{s} = 7$ TeV,” Phys. Rev. D [**87**]{}, 052017 (2013) \[arXiv:1301.5023 \[hep-ex\]\]. N. Suzuki, D. Rubin, C. Lidman, G. Aldering, R. Amanullah, K. Barbary, L. F. Barrientos and J. Botyanszki [*et al.*]{}, “The Hubble Space Telescope Cluster Supernova Survey: V. Improving the Dark Energy Constraints Above $z > 1$ and Building an Early-Type-Hosted Supernova Sample,” Astrophys. J.  [**746**]{}, 85 (2012) \[arXiv:1105.3470 \[astro-ph.CO\]\]. R. Dungan and H. B. Prosper, “Varying-G Cosmology with Type Ia Supernovae,” arXiv:0909.5416 \[astro-ph.CO\]. G. Aad [*et al.*]{} \[ATLAS Collaboration\], “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC,” Phys. Lett. B [**716**]{}, 1 (2012) \[arXiv:1207.7214 \[hep-ex\]\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], “Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC,” Phys. Lett. B [**716**]{}, 30 (2012) \[arXiv:1207.7235 \[hep-ex\]\]. J. Neyman, “Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability,” Phil. Trans. R. Soc. London A236, 333 (1937). G. J. Feldman and R. D. Cousins, “Unified approach to the classical statistical analysis of small signals,” Phys. Rev. D57, 3873 (1998). S. E. Fienberg and D. V. Hinkley, eds., *R.A. Fisher: An Appreciation*, Lecture Notes on Statistics, Volume 1, Springer Verlag (1990). W. Verkerke and D. Kirkby, [RooFit]{}, <http://roofit.sourceforge.net>. K. Cranmer, G. Schott, L. Moneta and W. Verkerke, [RooStats]{}, <https://twiki.cern.ch/twiki/bin/view/RooStats>. G. Fidecaro [*et al.*]{} \[CERN-Rutherford-ILL-Sussex-Padua (CRISP) Collaboration\], “Experimental Search For Neutron Anti-neutron Transitions With Free Neutrons,” Phys. Lett. B [**156**]{}, 122 (1985). V. M.
Abazov [*et al.*]{} \[D0 Collaboration\], “Observation of Single Top Quark Production,” Phys. Rev. Lett.  [**103**]{}, 092001 (2009) \[arXiv:0903.0850 \[hep-ex\]\]. T. Aaltonen [*et al.*]{} \[CDF Collaboration\], “First Observation of Electroweak Single Top Quark Production,” Phys. Rev. Lett.  [**103**]{}, 092002 (2009) \[arXiv:0903.0885 \[hep-ex\]\]. L. Demortier, S. Jain and H. B. Prosper, “Reference priors for high energy physics,” Phys. Rev. D [**82**]{}, 034002 (2010) \[arXiv:1002.1111 \[stat.AP\]\]. S. Sekmen *et al.*, “Phenomenological MSSM interpretation of the CMS 2011 5fb-1 results,” CMS Physics Analysis Summary, CMS-PAS-SUS-12-030, CERN (2012). [^1]: Sometimes, the RMS and standard deviation are used interchangeably. However, the RMS is computed with respect to $\mu$, while the standard deviation is computed with respect to the ensemble average $<x>$. The RMS and standard deviations are identical only if the bias is zero. [^2]: If $E_1, E_2, \cdots$ are meaningful subsets of $\Omega$, so too is the complement $\overline{E}_1, \overline{E}_2, \cdots$ of each, as are countable unions and intersections of these subsets. [^3]: Data in which each item, $x_i$, or group of items has a different uncertainty. [^4]: It was the difficulty of extracting information from this distribution that compelled the author (against his will) to repair his parlous knowledge of statistics [@Fidecaro:1985cm]! [^5]: $\textrm{erf}(x) = \frac{1}{\sqrt{\pi}} \int_{-x}^x \exp(-t^2) \, dt$ is the error function.